The following steps summarize the process of transporting a tablespace. Details for each step are provided in the subsequent example.
1. For cross-platform transport, check the endian format of both platforms by querying the V$TRANSPORTABLE_PLATFORM view.
Ignore this step if you are transporting your tablespace set to the same platform.
2. Pick a self-contained set of tablespaces.
3. Generate a transportable tablespace set.
A transportable tablespace set (or transportable set) consists of the datafiles for the set of tablespaces being transported and an export file containing structural information (metadata) for the set of tablespaces. You use Data Pump or the original export utility (exp) to perform the export.
If any of the tablespaces contain XMLTypes, you must use exp.
If you are transporting the tablespace set to a platform with different endianness from the source platform, you must convert the tablespace set to the endianness of the target platform. You can perform a source-side conversion at this step in the procedure, or you can perform a target-side conversion as part of step 2.
This method of generating a transportable tablespace set requires that you temporarily make the tablespaces read-only. If this is undesirable, you can use the alternate method known as transportable tablespace from backup.
4. Transport the tablespace set.
Copy the datafiles and the export file to a place that is accessible to the target database.
If you have transported the tablespace set to a platform with different endianness from the source platform, and you have not performed a source-side conversion to the endianness of the target platform, you should perform a target-side conversion now.
5. Import the tablespace set.
Invoke the Data Pump import utility or the original import utility (imp) to import the metadata for the set of tablespaces into the target database.
If any of the tablespaces contain XMLTypes, you must use imp.
The steps for transporting a tablespace are illustrated more fully in the example that follows, where it is assumed that the following datafiles and tablespaces exist:
Step 1: determine whether the platforms are supported and determine endianness:
This step is only necessary if you are transporting the tablespace set to a platform different from the source platform.
If you are transporting the tablespace set to a platform different from the source platform, then determine if cross-platform tablespace transport is supported for both the source and target platforms, and determine the endianness of each platform. If both platforms have the same endianness, no conversion is necessary. Otherwise, you must do a conversion of the tablespace set either at the source or target database.
If you are transporting sales_1 and sales_2 to a different platform, you can execute the following query on each platform. If the query returns a row, the platform supports cross-platform tablespace transport.
SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
  2> FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
  3> WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

The following is the query result from the source platform:

PLATFORM_NAME                      ENDIAN_FORMAT
---------------------------------- --------------
Solaris[tm] OE (32-bit)            Big

The following is the result from the target platform:

PLATFORM_NAME                      ENDIAN_FORMAT
---------------------------------- --------------
Microsoft Windows NT               Little
You can see that the endian formats are different and thus a conversion is necessary for transporting the tablespace set.
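The endian formats reported by V$TRANSPORTABLE_PLATFORM reflect the byte order in which each platform stores multibyte values. As an aside, the following hypothetical shell snippet (not part of the Oracle procedure) probes the byte order of the host you are logged in to:

```shell
# Probe the host byte order: od groups the two input bytes ('A'=0x41,
# 'B'=0x42) into one 16-bit word using the host's native byte order.
# A little-endian host reads the word as 0x4241; a big-endian host as 0x4142.
word=$(printf 'AB' | od -An -tx2 | tr -d ' \n')
if [ "$word" = "4241" ]; then
    echo little
else
    echo big
fi
```

On a typical x86 host this prints "little", matching the Little endian format shown for Microsoft Windows NT above; Solaris on SPARC hardware prints "big".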
Step 2: pick a self-contained set of tablespaces:
There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained. In this context, "self-contained" means that there are no references from inside the set of tablespaces pointing outside of the tablespaces. For example, it is a violation if an index inside the set of tablespaces belongs to a table outside of the set.
It is not a violation if a corresponding index for a table is outside of the set of tablespaces.
The tablespace set you want to copy must contain either all partitions of a partitioned table, or none of them. If you want to transport a subset of a partitioned table, you must exchange the partitions into tables.
When transporting a set of tablespaces, you can choose to include referential integrity constraints. However, doing so can affect whether or not a set of tablespaces is self-contained. If you decide not to transport constraints, then the constraints are not considered as pointers.
To determine whether a set of tablespaces is self-contained, you can invoke the TRANSPORT_SET_CHECK procedure in the Oracle-supplied package DBMS_TTS. You must have been granted the EXECUTE_CATALOG_ROLE role (initially assigned to SYS) to execute this procedure.
When you invoke the DBMS_TTS package, you specify the list of tablespaces in the transportable set to be checked for self-containment. You can optionally specify whether constraints must be included. For strict or full containment, you must additionally set the TTS_FULL_CHECK parameter to TRUE.
The strict or full containment check is for cases that require capturing not only references going outside the transportable set, but also those coming into the set. Tablespace point-in-time recovery (TSPITR) is one such case where dependent objects must be fully contained or fully outside the transportable set.
For example, it is a violation to perform TSPITR on a tablespace containing a table t but not its index i, because the index and data will be inconsistent after the recovery. A full containment check ensures that there are no dependencies going outside or coming into the transportable set.
The default for transportable tablespaces is to check for self-containment rather than full containment.
The following statement can be used to determine whether tablespaces sales_1 and sales_2 are self-contained, with referential integrity constraints taken into consideration (indicated by TRUE):

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE);
After invoking this PL/SQL procedure, you can see all violations by selecting from the TRANSPORT_SET_VIOLATIONS view. If the set of tablespaces is self-contained, this view is empty. The following example illustrates a case where there are two violations: a foreign key constraint, dept_fk, across the tablespace set boundary, and a partitioned table, jim.sales, that is partially contained in the tablespace set.
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

VIOLATIONS
---------------------------------------------------------------------------
Constraint DEPT_FK between table JIM.EMP in tablespace SALES_1 and table
JIM.DEPT in tablespace OTHER
Partitioned table JIM.SALES is partially contained in the transportable set
These violations must be resolved before sales_1 and sales_2 are transportable. As noted in the next step, one choice for bypassing the integrity constraint violation is to not export the integrity constraints.
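For instance, a hedged sketch of that choice using Data Pump export (file and directory names reuse this example's values; EXCLUDE=CONSTRAINT tells Data Pump to omit constraint definitions from the export):

```shell
$ expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
    TRANSPORT_TABLESPACES=sales_1,sales_2 EXCLUDE=CONSTRAINT
```

The partitioned-table violation, by contrast, must be resolved structurally, for example by exchanging the partitions outside the set into standalone tables.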
Step 3: generate a transportable tablespace set:
Any privileged user can perform this step. However, you must have been granted the EXP_FULL_DATABASE role to perform a transportable tablespace export operation.
This method of generating a transportable tablespace set requires that you temporarily make the tablespaces read-only. If this is undesirable, you can use the alternate method known as transportable tablespace from backup.
After ensuring that you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by performing the following actions:
1. Make all tablespaces in the set you are copying read-only.

SQL> ALTER TABLESPACE sales_1 READ ONLY;

Tablespace altered.

SQL> ALTER TABLESPACE sales_2 READ ONLY;

Tablespace altered.

2. Invoke the Data Pump export utility on the host system and specify which tablespaces are in the transportable set.

If any of the tablespaces have XMLTypes, you must use exp instead of Data Pump export. In that case, ensure that the CONSTRAINTS and TRIGGERS parameters are set to y (the default).

SQL> HOST

$ expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
    TRANSPORT_TABLESPACES=sales_1,sales_2

You must always specify TRANSPORT_TABLESPACES, which determines the mode of the export operation.

If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter, as shown in the following example:

$ expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
    TRANSPORT_TABLESPACES=sales_1,sales_2 TRANSPORT_FULL_CHECK=y

In this case, the Data Pump export utility verifies that there are no dependencies between the objects inside the transportable set and objects outside the transportable set. If the tablespace set being transported is not self-contained, then the export fails and indicates that the transportable set is not self-contained. You must then return to Step 2 to resolve all violations.

The Data Pump utility exports only data dictionary structural information (metadata) for the tablespaces. No actual data is unloaded, so this operation goes relatively quickly even for large tablespace sets.

3. When finished, exit back to SQL*Plus:

$ exit

If sales_1 and sales_2 are being transported to a different platform whose endianness differs from that of the source platform, and you want to convert the datafiles before transporting the tablespace set, then convert the datafiles composing the sales_1 and sales_2 tablespaces:

4. From SQL*Plus, return to the host system:

SQL> HOST

5. The RMAN CONVERT command is used to do the conversion. Start RMAN and connect to the target database:

$ rman target /

Recovery Manager: Release 10.1.0.0.0
Copyright (c) 1995, 2003, Oracle Corporation.  All rights reserved.
connected to target database: SALESDB (DBID=3295731590)

6. Convert the datafiles into a temporary location on the source platform. In this example, assume that the temporary location, directory /temp, has already been created. The converted datafiles are assigned names by the system.

RMAN> CONVERT TABLESPACE sales_1,sales_2
2> TO PLATFORM 'Microsoft Windows NT'
3> FORMAT '/temp/%U';

Starting backup at 08-APR-03
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00005 name=/u01/oracle/oradata/salesdb/sales_101.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-ADMIN_TBS_FNO-5_05ek24v5
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00004 name=/u01/oracle/oradata/salesdb/sales_201.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-EXAMPLE_FNO-4_06ek24vl
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
Finished backup at 08-APR-03

7. Exit Recovery Manager:

RMAN> exit
Recovery manager complete.
Step 4: transport the tablespace set:
Transport both the datafiles and the export file of the tablespaces to a place that is accessible to the target database.
If both the source and destination are file systems, you can use any facility for copying flat files, such as an operating system copy utility or FTP.
If either the source or destination is an Automatic Storage Management (ASM) disk group, you can use the DBMS_FILE_TRANSFER package or RMAN to move the files.
Exercise caution when using the UNIX dd utility to copy raw-device files between databases. The dd utility can copy an entire source raw-device file, or it can be invoked with options that instruct it to copy only a specific range of blocks from the source raw-device file.
It is difficult to ascertain the actual datafile size of a raw-device file because of hidden control information that is stored as part of the datafile. Thus, when using the dd utility, it is advisable to copy the entire source raw-device file contents.
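As an illustration of that advice, the following sketch copies the entire source file by omitting any skip= or count= block-range options, then verifies the copy byte for byte. The paths are hypothetical stand-ins; on a real system they would be raw-device files:

```shell
# Stand-in paths; on a real system these would be raw-device files.
src=/tmp/sales_101.dbf
dst=/tmp/sales_101_copy.dbf
printf 'datafile contents plus hidden control information' > "$src"

# No skip= or count= options: dd copies the whole file, so no knowledge
# of the exact datafile size is required.
dd if="$src" of="$dst" bs=1M 2>/dev/null

# Verify the destination is byte-identical to the source.
cmp -s "$src" "$dst" && echo "copy verified"
```

Because cmp -s exits with status 0 only when the files are identical, "copy verified" is printed only if the whole file, hidden control information included, arrived intact.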
If you are transporting the tablespace set to a platform with endianness that is different from the source platform, and you have not yet converted the tablespace set, you must do so now. This example assumes that you have completed the following steps before the transport:
1. Set the source tablespaces to be transported to be read-only.
2. Use the export utility to create an export file (in our example, expdat.dmp).
Datafiles that are to be converted on the target platform can be moved to a temporary location on the target platform. However, all datafiles, whether already converted or not, must be moved to a designated location on the target database.
Now use RMAN to convert the transported datafiles to the endian format of the destination host and deposit the results in /orahome/dbs, as shown in this hypothetical example:

RMAN> CONVERT DATAFILE
2> '/hq/finance/work/tru/tbs_31.f',
3> '/hq/finance/work/tru/tbs_32.f',
4> '/hq/finance/work/tru/tbs_41.f'
5> TO PLATFORM="Solaris[tm] OE (32-bit)"
6> FROM PLATFORM="HP Tru64 UNIX"
7> DB_FILE_NAME_CONVERT=
8> "/hq/finance/work/tru/", "/hq/finance/dbs/tru"
9> PARALLELISM=5;
You identify the datafiles by filename, not by tablespace name: until the tablespace metadata is imported, the local instance has no way of knowing the desired tablespace names. The source and destination platform specifications are optional. RMAN determines the source platform by examining the datafile, and the target platform defaults to the platform of the host performing the conversion.
Step 5: import the tablespace set:
If you are transporting a tablespace with a block size different from the standard block size of the database receiving the tablespace set, then the receiving database must first have a DB_nK_CACHE_SIZE initialization parameter entry in its parameter file.
For example, if you are transporting a tablespace with an 8K block size into a database with a 4K standard block size, then you must include a DB_8K_CACHE_SIZE initialization parameter entry in the parameter file. If it is not already included in the parameter file, this parameter can be set using the ALTER SYSTEM SET statement.
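A minimal sketch of setting the parameter dynamically, assuming the instance uses a server parameter file and that 32M (an illustrative value, not a recommendation) is an adequate cache size:

```sql
ALTER SYSTEM SET DB_8K_CACHE_SIZE = 32M SCOPE=BOTH;
```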
Any privileged user can perform this step. To import a tablespace set, perform the following tasks:
1. Import the tablespace metadata using the Data Pump import utility, impdp.

If any of the tablespaces contain XMLTypes, you must use imp instead of Data Pump import.

$ impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
    TRANSPORT_DATAFILES=/salesdb/sales_101.dbf,/salesdb/sales_201.dbf
    REMAP_SCHEMA=(dcranney:smith) REMAP_SCHEMA=(jfee:williams)
In this example, TRANSPORT_DATAFILES identifies the datafiles of the tablespace set being imported, and the REMAP_SCHEMA parameters map objects owned by dcranney to smith and objects owned by jfee to williams.
After this statement executes successfully, all tablespaces in the set being copied remain in read-only mode. Check the import logs to ensure that no error has occurred.
When dealing with a large number of datafiles, specifying the list of datafile names on the command line can be a laborious process; it can even exceed the command-line length limit. In this situation, you can use an import parameter file. For example, you can invoke the Data Pump import utility as follows:
$ impdp system/password PARFILE='par.f'

where the parameter file, par.f, contains the following:

DIRECTORY=dpump_dir
DUMPFILE=expdat.dmp
TRANSPORT_DATAFILES="'/db/sales_jan','/db/sales_feb'"
REMAP_SCHEMA=dcranney:smith
REMAP_SCHEMA=jfee:williams
2. If required, put the tablespaces into read/write mode as follows:

ALTER TABLESPACE sales_1 READ WRITE;
ALTER TABLESPACE sales_2 READ WRITE;
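As a final sanity check, a hedged example query against the standard DBA_TABLESPACES dictionary view, using this example's tablespace names:

```sql
SELECT tablespace_name, status
  FROM dba_tablespaces
 WHERE tablespace_name IN ('SALES_1', 'SALES_2');
```

After the ALTER TABLESPACE ... READ WRITE statements, the STATUS column should report ONLINE for both tablespaces; it reports READ ONLY if they were left in read-only mode.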