The following sections describe some uses for transportable tablespaces:
Typical enterprise data warehouses contain one or more large fact tables. These fact tables can be partitioned by date, making the enterprise data warehouse a historical database. You can build indexes to speed up star queries. Oracle recommends that you build local indexes for such historically partitioned tables, to avoid rebuilding global indexes every time you drop the oldest partition from the historical database.
Suppose that every month you load one month of data into the data warehouse. The data warehouse contains a large fact table named sales, which has the following columns:
SQL> CREATE TABLE sales (
       invoice_no NUMBER,
       sale_year  INT NOT NULL,
       sale_month INT NOT NULL,
       sale_day   INT NOT NULL)
     PARTITION BY RANGE (sale_year, sale_month, sale_day)
       (PARTITION jan98 VALUES LESS THAN (1998, 2, 1),
        PARTITION feb98 VALUES LESS THAN (1998, 3, 1),
        PARTITION mar98 VALUES LESS THAN (1998, 4, 1),
        PARTITION apr98 VALUES LESS THAN (1998, 5, 1),
        PARTITION may98 VALUES LESS THAN (1998, 6, 1),
        PARTITION jun98 VALUES LESS THAN (1998, 7, 1));

You create a local nonprefixed index:

SQL> CREATE INDEX sales_index ON sales(invoice_no) LOCAL;
Initially, all partitions are empty, and are in the same default tablespace. Each month, you want to create one partition and attach it to the partitioned sales table.
Suppose that it is July 1998, and you want to load the July sales data into the partitioned table. In a staging database, you create a new tablespace, ts_jul. You also create a table, jul_sales, in that tablespace, with exactly the same column types as the sales table. You can create the table jul_sales using the CREATE TABLE ... AS SELECT statement. After creating and populating jul_sales, you can also create an index, jul_sale_index, for the table, indexing the same column as the local index in the sales table. After building the index, transport the tablespace ts_jul to the data warehouse.
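The staging steps above might be sketched as follows. This is an illustrative outline only: the datafile path and the incoming_sales source table are assumptions, not part of the original example.

```sql
-- In the staging database (paths and the source table are hypothetical):
SQL> CREATE TABLESPACE ts_jul
       DATAFILE '/stage/ts_jul01.dbf' SIZE 500M;

SQL> CREATE TABLE jul_sales TABLESPACE ts_jul
       AS SELECT * FROM incoming_sales;   -- same column types as sales

SQL> CREATE INDEX jul_sale_index
       ON jul_sales(invoice_no) TABLESPACE ts_jul;

-- Make the tablespace read-only before generating the transportable set:
SQL> ALTER TABLESPACE ts_jul READ ONLY;
```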
In the data warehouse, add a partition to the sales table for the July sales data. This also creates another partition for the local nonprefixed index:
SQL> ALTER TABLE sales ADD PARTITION jul98 VALUES LESS THAN (1998, 8, 1);
Attach the transported table jul_sales to the table sales by exchanging it with the new partition:
SQL> ALTER TABLE sales EXCHANGE PARTITION jul98 WITH TABLE jul_sales
       INCLUDING INDEXES WITHOUT VALIDATION;
This statement places the July sales data into the new partition jul98, attaching the new data to the partitioned table. It also converts the index jul_sale_index into a partition of the local index for the sales table. The statement should return immediately, because it operates only on structural information; it simply switches database pointers. If you know that the data in the new partition does not overlap with data in previous partitions, specify the WITHOUT VALIDATION clause. Otherwise, the statement scans all the data in the new partition to validate the range of that partition.
If all partitions of the sales table came from the same staging database (and the staging database is never destroyed), the exchange statement always succeeds. In general, however, if data in a partitioned table comes from different databases, the exchange operation can fail. For example, if the jan98 partition of sales did not come from the same staging database, the preceding exchange operation can fail, returning the following error:
ORA-19728: data object number conflict between table jul_sales and partition jan98 in table sales
To resolve this conflict, move the offending partition by issuing the following statement:
SQL> ALTER TABLE sales MOVE PARTITION jan98;
Then retry the exchange operation.
After the exchange succeeds, you can safely drop jul_sales and jul_sale_index (both are now empty). Thus, you have successfully loaded the July sales data into your data warehouse.
Publishing structured data on CDs:
Transportable tablespaces provide a way to publish structured data on CDs. A data provider can load a tablespace with data to be published, generate the transportable set, and copy the transportable set to a CD. The CD can then be distributed.
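The export side of this workflow might look like the following. The tablespace name ts_catalog is an illustrative assumption; dpump_dir is the directory object used in the import example later in this section.

```
-- Make the tablespace read-only, then export its metadata with Data Pump:
SQL> ALTER TABLESPACE ts_catalog READ ONLY;

$ expdp system/password dumpfile=expdat.dmp directory=dpump_dir \
    transport_tablespaces=ts_catalog
```

The resulting dump file (expdat.dmp) and the tablespace datafiles together form the transportable set that is burned to the CD.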
When customers receive the CD, they can add its contents to an existing database without having to copy the datafiles from the CD to disk storage. For example, suppose that on a Windows NT machine the D: drive is the CD drive. You can import a transportable set with datafile catalog.f and export file expdat.dmp as follows:
impdp system/password dumpfile=expdat.dmp directory=dpump_dir \
  transport_datafiles='d:\catalog.f'
You can remove the CD while the database is still up. Subsequent queries to the tablespace return an error indicating that the database cannot open the datafiles on the CD. However, operations on other parts of the database are not affected. Placing the CD back into the drive makes the tablespace readable again.
Removing the CD is the same as removing the datafiles of a read-only tablespace. If you shut down and restart the database, the database indicates that it cannot find the removed datafile and does not open (unless you set the initialization parameter READ_ONLY_OPEN_DELAYED to TRUE). When READ_ONLY_OPEN_DELAYED is set to TRUE, the database reads the file only when someone queries the transported tablespace. Thus, when transporting a tablespace from a CD, always set the READ_ONLY_OPEN_DELAYED initialization parameter to TRUE, unless the CD is permanently attached to the database.
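Assuming the database uses a server parameter file, the parameter can be set as follows (a sketch; the parameter is static, so the change takes effect at the next restart):

```sql
SQL> ALTER SYSTEM SET READ_ONLY_OPEN_DELAYED = TRUE SCOPE = SPFILE;
-- Restart the database for the new value to take effect.
```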
Mounting the same tablespace read-only on multiple databases:
You can use transportable tablespaces to mount a tablespace read-only on multiple databases. In this way, separate databases can share the same data on disk instead of duplicating data on separate disks. The tablespace datafiles must be accessible by all databases. To avoid database corruption, the tablespace must remain read-only in all the databases mounting the tablespace.
The following are two scenarios for mounting the same tablespace read-only on multiple databases:
You generate a transportable set in the source database, put the transportable set onto a disk that is accessible to all databases, and then import the metadata into each database on which you want to mount the tablespace.
It is assumed that the datafiles are already on a shared disk. In the database where the tablespace already exists, you make the tablespace read-only, generate the transportable set, and then import the tablespace into the other databases, leaving the datafiles in the same location on the shared disk.
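In outline, the second scenario might look like this. The tablespace, file, and directory names are hypothetical placeholders:

```
-- In the database that owns the tablespace:
SQL> ALTER TABLESPACE shared_data READ ONLY;

$ expdp system/password dumpfile=shared_ts.dmp directory=dpump_dir \
    transport_tablespaces=shared_data

-- In each additional database, import only the metadata,
-- leaving the datafiles in place on the shared disk:
$ impdp system/password dumpfile=shared_ts.dmp directory=dpump_dir \
    transport_datafiles='/shared/disk/shared_data01.dbf'
```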
You can make a disk accessible by multiple computers in several ways. You can use either a cluster file system or raw disk. You can also use Network File System (NFS), but be aware that if a user queries the shared tablespace while NFS is down, the database hangs until the NFS operation times out.
Later, you can drop the read-only tablespace in some of the databases. Doing so does not modify the datafiles for the tablespace. Thus, the drop operation does not corrupt the tablespace. Do not make the tablespace read/write unless only one database is mounting the tablespace.
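To drop the tablespace in one database without touching the shared datafiles, omit the AND DATAFILES keywords (a sketch; the tablespace name is hypothetical):

```sql
SQL> DROP TABLESPACE shared_data INCLUDING CONTENTS;
-- Without the AND DATAFILES keywords, the datafiles remain on the
-- shared disk and stay usable by the other databases.
```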
Archiving historical data using transportable tablespaces:
Because a transportable tablespace set is a self-contained set of files that can be imported into any Oracle database, you can archive old or historical data in an enterprise data warehouse using the transportable tablespace procedures described in this chapter.
Using transportable tablespaces to perform TSPITR:
You can use transportable tablespaces to perform tablespace point-in-time recovery (TSPITR).
Moving databases across platforms using transportable tablespaces:
You can use the transportable tablespace feature to migrate a database to a different platform by creating a new database on the destination platform and performing a transport of all the user tablespaces.
You cannot transport the SYSTEM tablespace. Therefore, objects such as sequences, PL/SQL packages, and other objects that depend on the SYSTEM tablespace are not transported. You must either create these objects manually on the destination database, or use Data Pump to transport the objects that are not moved by transportable tablespaces.
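One way to carry such objects over is a metadata-only Data Pump export and import, sketched below. The schema name sales_app is a hypothetical placeholder:

```
$ expdp system/password dumpfile=meta.dmp directory=dpump_dir \
    schemas=sales_app content=metadata_only

$ impdp system/password dumpfile=meta.dmp directory=dpump_dir
```

With content=metadata_only, Data Pump exports object definitions (sequences, packages, and so on) without any table data, which is appropriate here because the data itself travels in the transported tablespaces.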