Problem statement:
- How to migrate a huge volume of data from one database to another.
- Multi-terabyte data loaded in one database must be copied to another database.
Environment:
- You have a multi-terabyte database.
- Your database is growing on a daily basis, driven by data feeds.
- The number of indexes on these tables is very high, so index size also contributes heavily to the overall database size.
- You have to copy this multi-terabyte database either to one of your existing databases or to a new test environment database where production issues can be replicated.
Approach:
Although there is more than one way to copy data from one database to another, the concern here is the size of the data, which eventually leads to very long downtime. Transferring such a huge volume of data between databases is a real challenge for any organization.
After evaluating the pros and cons of the available methods, a transportable tablespace set is the fastest way to achieve this.
What it does:
It takes a set of tablespaces from the source database and validates that the set is self-contained. After validation, it exports the metadata of these tablespaces at the source. All data files belonging to these tablespaces, along with the exported dump file, are then copied to the destination. Finally, the metadata is imported and the data files are plugged into the destination database.
Your database is then ready with all the data.
How it works:
A "transportable tablespace set" is the fastest of these methods for copying data from one instance to another, particularly when the data size is really huge. Using this method, Oracle data files can be transported directly from one database (the source) to another (the target). Like the export/import method, a transportable tablespace set provides a mechanism for transporting metadata in addition to the data itself.
The beauty of this method is that your existing data files on the source database can be plugged into the destination database. This carries not only the data but also the statistics for that data.
It can speed up your migration considerably. On a machine with a 12-core processor and 64 GB of RAM, it may import nearly 8 TB of data in roughly 10 hours. Other factors, such as the nature and type of storage, may also affect this speed.
Steps to Perform:
- Prepare the list of all tablespaces which have to be migrated. Let's say, for example, the objects you want to migrate are spread across 8 tablespaces.
- 4 tablespaces hold data for tables and their partitions:
- TEST_DATA_01
- TEST_DATA_02
- TEST_DATA_03
- TEST_DATA_04
- 4 tablespaces hold data for indexes and their partitions:
- TEST_IND_01
- TEST_IND_02
- TEST_IND_03
- TEST_IND_04
SQL> R
1* SELECT TABLESPACE_NAME, STATUS FROM DBA_TABLESPACES
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS ONLINE
TEMP ONLINE
USERS ONLINE
TEST_DATA_01 ONLINE
TEST_DATA_02 ONLINE
TEST_DATA_03 ONLINE
TEST_DATA_04 READ ONLY
TEST_IND_01 ONLINE
TEST_IND_02 ONLINE
TEST_IND_03 ONLINE
TEST_IND_04 ONLINE
TEST_DATA_05 ONLINE
14 rows selected.
SQL>
The eight TEST_DATA_* and TEST_IND_* tablespaces above will be part of the TTS; TEST_DATA_05 is not included.
- Hence, we have a set of 8 tablespaces which will be migrated together as our "transportable tablespace set".
- Check whether these tablespaces are "self-contained". This simply means:
- No object in these tablespaces may refer to an object that does not belong to these tablespaces.
- Objects in any of these 8 tablespaces may have cross-references among the set, but cannot have references beyond it.
- Furthermore, no object from other tablespaces can refer to any object in these 8 tablespaces.
- Validation is done using the TRANSPORT_SET_CHECK procedure:
- EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'TEST_DATA_01,TEST_DATA_02,TEST_DATA_03,TEST_DATA_04,TEST_IND_01,TEST_IND_02,TEST_IND_03,TEST_IND_04', incl_constraints => TRUE);
- Once the procedure completes, review the validation report by querying the view TRANSPORT_SET_VIOLATIONS:
- SELECT * FROM TRANSPORT_SET_VIOLATIONS;
SQL> EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'TEST_DATA_01,
TEST_DATA_02,
TEST_DATA_03,
TEST_DATA_04,
TEST_IND_01,
TEST_IND_02,
TEST_IND_03,
TEST_IND_04',
incl_constraints => TRUE);
PL/SQL procedure successfully completed.
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
VIOLATIONS
--------------------------------------------------------------------------------
ORA-39907: Index TEST_TTS.OBJECT_NAME_NOT_TTS in tablespace TEST_IND_02 points to table TEST_TTS.TIME2_TAB5 in tablespace TEST_DATA_05.
ORA-39921: Default Partition (Table) Tablespace TEST_DATA_05 for XX_TESTS_TTS not contained in transportable set.
ORA-39901: Partitioned table TEST_TTS.XX_TESTS_TTS is partially contained in the transportable set.
SQL>
- If any such references are found, they need to be fixed.
- Since we found violations in the above query, we have to fix them. Let's fix the violations one by one.
- The first violation states that an index points to a table residing outside the TTS. This is a cross-reference where an object (index) inside the TTS refers to an object (table) which is not part of the TTS. To fix this, either move the index out of the TTS or bring the table into the TTS. Here, we bring the table into the TTS:
- ALTER TABLE TEST_TTS.TIME2_TAB5 MOVE TABLESPACE TEST_DATA_03;
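Note that an ALTER TABLE ... MOVE marks the moved table's indexes UNUSABLE, so they should be rebuilt afterwards. A minimal sketch; the index name is taken from the violation report above, so verify the actual status via DBA_INDEXES first:

```sql
-- Indexes on the moved table become UNUSABLE after the MOVE;
-- check their status first.
SELECT index_name, status
  FROM dba_indexes
 WHERE owner = 'TEST_TTS' AND table_name = 'TIME2_TAB5';

-- Rebuild the affected index, keeping it inside a TTS tablespace.
ALTER INDEX TEST_TTS.OBJECT_NAME_NOT_TTS REBUILD TABLESPACE TEST_IND_02;
```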
- The second violation concerns a table whose partitions are spread across different tablespaces, one of which is not in our TTS. The fix is to move the partitions that lie outside the TTS into a tablespace inside the TTS. First find the list of partitions outside the TTS, then move them.
- The partitions in TEST_DATA_05 (SYS_P24 and SYS_P21) are the ones to be moved.
SQL> r
1* select partition_name,
table_name,
tablespace_name
from all_tab_partitions
where table_name='XX_TESTS_TTS'
PARTITION_NAME TABLE_NAME TABLESPACE_NAME
--------------- ---------------- ---------------
S1 XX_TESTS_TTS TEST_DATA_01
SYS_P35 XX_TESTS_TTS TEST_DATA_01
SYS_P36 XX_TESTS_TTS TEST_DATA_02
SYS_P37 XX_TESTS_TTS TEST_DATA_01
SYS_P38 XX_TESTS_TTS TEST_DATA_01
SYS_P39 XX_TESTS_TTS TEST_DATA_01
SYS_P40 XX_TESTS_TTS TEST_DATA_01
SYS_P29 XX_TESTS_TTS TEST_DATA_01
SYS_P30 XX_TESTS_TTS TEST_DATA_01
SYS_P31 XX_TESTS_TTS TEST_DATA_01
SYS_P25 XX_TESTS_TTS TEST_DATA_01
SYS_P26 XX_TESTS_TTS TEST_DATA_01
SYS_P23 XX_TESTS_TTS TEST_DATA_01
SYS_P22 XX_TESTS_TTS TEST_DATA_01
SYS_P41 XX_TESTS_TTS TEST_DATA_01
SYS_P42 XX_TESTS_TTS TEST_DATA_01
SYS_P43 XX_TESTS_TTS TEST_DATA_01
SYS_P44 XX_TESTS_TTS TEST_DATA_01
SYS_P45 XX_TESTS_TTS TEST_DATA_01
SYS_P46 XX_TESTS_TTS TEST_DATA_01
SYS_P32 XX_TESTS_TTS TEST_DATA_02
SYS_P33 XX_TESTS_TTS TEST_DATA_03
SYS_P34 XX_TESTS_TTS TEST_DATA_03
SYS_P27 XX_TESTS_TTS TEST_DATA_04
SYS_P28 XX_TESTS_TTS TEST_DATA_04
SYS_P24 XX_TESTS_TTS TEST_DATA_05
SYS_P21 XX_TESTS_TTS TEST_DATA_05
27 rows selected.
SQL>
- The partitions are moved with the following commands:
SQL> ALTER TABLE TEST_TTS.XX_TESTS_TTS
MOVE PARTITION SYS_P24 TABLESPACE TEST_DATA_01;
Table altered.
SQL> ALTER TABLE TEST_TTS.XX_TESTS_TTS
MOVE PARTITION SYS_P21 TABLESPACE TEST_DATA_01;
Table altered.
SQL>
- Once all violations are fixed, re-validate with TRANSPORT_SET_CHECK to ensure there are no violations in the next step.
- Since all such references are now fixed, we are good to go for the TTS migration.
- Make all the tablespaces read-only before proceeding to the next step. We can use a PL/SQL block to make all the tablespaces read-only in one go.
begin
for tbs in (select tablespace_name
from dba_tablespaces
where tablespace_name in ('TEST_DATA_01',
'TEST_DATA_02',
'TEST_DATA_03',
'TEST_DATA_04',
'TEST_IND_01',
'TEST_IND_02',
'TEST_IND_03',
'TEST_IND_04'
)
and status='ONLINE'
)
loop
execute immediate 'alter tablespace '||tbs.tablespace_name||' read only';
end loop;
end;
/
- Querying tablespace status:
SQL> r
1* select tablespace_name, status
from dba_tablespaces
where tablespace_name in ('TEST_DATA_01',
'TEST_DATA_02',
'TEST_DATA_03',
'TEST_DATA_04',
'TEST_IND_01',
'TEST_IND_02',
'TEST_IND_03',
'TEST_IND_04'
)
TABLESPACE_NAME STATUS
------------------------------ ---------
TEST_DATA_01 READ ONLY
TEST_DATA_02 READ ONLY
TEST_DATA_03 READ ONLY
TEST_DATA_04 READ ONLY
TEST_IND_01 READ ONLY
TEST_IND_02 READ ONLY
TEST_IND_03 READ ONLY
TEST_IND_04 READ ONLY
8 rows selected.
SQL>
- Create a database directory object pointing to a physical directory on the DB server, and grant access to it.
SQL> create directory EXPORT_DIR as '/u03/oracle/oradata/mydb1/exp';
Directory created.
SQL> R
1* grant all on DIRECTORY EXPORT_DIR to TEST_TTS
Grant succeeded.
- Since the list of tablespaces in the TTS may be long, we can create a parameter file, EXPORT.par, to keep the command simple. Its contents should be:
DIRECTORY=EXPORT_DIR
DUMPFILE=TTS_EXP.dmp
LOGFILE=EXPORT_DIR:TTS_EXP.log
TRANSPORT_FULL_CHECK=Y
TRANSPORT_TABLESPACES=TEST_DATA_01,TEST_DATA_02,TEST_DATA_03,TEST_DATA_04,
TEST_IND_01,TEST_IND_02,TEST_IND_03,TEST_IND_04
- Execute the Data Pump export with the EXPDP command at the Unix prompt:
[oracle@TESTServer exp]$ expdp system/system parfile=EXPORT.par
Export: Release 11.2.0.1.0 - Production on Mon Apr 25 12:47:43 2016
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** parfile=EXPORT.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/u03/oracle/oradata/mydb1/exp/TTS_EXP.dmp
******************************************************************************
Datafiles required for transportable tablespace TEST_DATA_01:
/u03/oracle/oradata/mydb1/TEST_DATA_01_01.dbf
Datafiles required for transportable tablespace TEST_DATA_02:
/u03/oracle/oradata/mydb1/TEST_DATA_02_01.dbf
Datafiles required for transportable tablespace TEST_DATA_03:
/u03/oracle/oradata/mydb1/TEST_DATA_03_01.dbf
Datafiles required for transportable tablespace TEST_DATA_04:
/u03/oracle/oradata/mydb1/TEST_DATA_04_01.dbf
Datafiles required for transportable tablespace TEST_IND_01:
/u03/oracle/oradata/mydb1/TEST_IND_01_01.dbf
Datafiles required for transportable tablespace TEST_IND_02:
/u03/oracle/oradata/mydb1/TEST_IND_02_01.dbf
Datafiles required for transportable tablespace TEST_IND_03:
/u03/oracle/oradata/mydb1/TEST_IND_03_01.dbf
Datafiles required for transportable tablespace TEST_IND_04:
/u03/oracle/oradata/mydb1/TEST_IND_04_01.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 12:48:21
[oracle@TESTServer exp]$
- The log above shows the details of the export process and lists the data files required for each tablespace of our tablespace set; these are the files to be copied to the destination database.
- Looking at the directory EXPORT_DIR, we find two new files: the dump file and the log file.
[oracle@TESTServer exp]$ ls -ltr
total 320
-rw-r--r--. 1 oracle oinstall 223 Apr 20 17:44 EXPORT.par
-rw-r-----. 1 oracle oinstall 319488 Apr 25 12:48 TTS_EXP.dmp
-rw-r--r--. 1 oracle oinstall 2048 Apr 25 12:48 TTS_EXP.log
- The log file contains the details of the export process.
- Once the export completes successfully, copy the listed data files to the destination database, into a separate folder. The copy can be done manually or via a script that runs the copies in parallel to save time.
[oracle@TESTServer TTS]$ pwd
/u03/oracle/oradata/mydb2/TTS
[oracle@TESTServer TTS]$
[oracle@TESTServer TTS]$ cp /u03/oracle/oradata/mydb1/TEST_DATA_01_01.dbf .
[oracle@TESTServer TTS]$ cp /u03/oracle/oradata/mydb1/TEST_DATA_02_01.dbf .
[oracle@TESTServer TTS]$ cp /u03/oracle/oradata/mydb1/TEST_DATA_03_01.dbf .
[oracle@TESTServer TTS]$ cp /u03/oracle/oradata/mydb1/TEST_DATA_04_01.dbf .
[oracle@TESTServer TTS]$ cp /u03/oracle/oradata/mydb1/TEST_IND_01_01.dbf .
[oracle@TESTServer TTS]$ cp /u03/oracle/oradata/mydb1/TEST_IND_02_01.dbf .
[oracle@TESTServer TTS]$ cp /u03/oracle/oradata/mydb1/TEST_IND_03_01.dbf .
[oracle@TESTServer TTS]$ cp /u03/oracle/oradata/mydb1/TEST_IND_04_01.dbf .
[oracle@TESTServer TTS]$
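The sequential cp commands above can also be scripted to run in parallel. A minimal sketch: it demonstrates the pattern on scratch directories created with mktemp, whereas in practice SRC and DST would be /u03/oracle/oradata/mydb1 and /u03/oracle/oradata/mydb2/TTS:

```shell
#!/bin/sh
# Parallel copy pattern: launch each cp in the background, then wait.
# Demo uses scratch directories; point SRC/DST at the real paths in practice.
SRC=$(mktemp -d); DST=$(mktemp -d)

# Stand-in datafiles for the demo (the real ones are the .dbf files above).
for f in TEST_DATA_01_01.dbf TEST_DATA_02_01.dbf \
         TEST_IND_01_01.dbf TEST_IND_02_01.dbf; do
  echo "dummy datafile $f" > "$SRC/$f"
done

for f in "$SRC"/*.dbf; do
  cp "$f" "$DST/" &         # each copy runs in the background
done
wait                        # block until all background copies finish

copied=$(ls "$DST" | wc -l)
echo "copied $copied files"
```

With real multi-gigabyte datafiles, limit the number of concurrent copies to what the storage can sustain.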
- To confirm whether the copy succeeded, run the cksum UNIX command on the files at both source and destination. If the output is exactly the same, the copy is correct. This command is somewhat time-consuming; you can run it for every file, or only for the files you suspect, based on your requirements and time available.
On source :
[oracle@TESTServer mydb1]$ pwd
/u03/oracle/oradata/mydb1
[oracle@TESTServer mydb1]$ cksum TEST_DATA_01_01.dbf
895429582 104865792 TEST_DATA_01_01.dbf
On destination :
[oracle@TESTServer TTS]$ pwd
/u03/oracle/oradata/mydb2/TTS
[oracle@TESTServer TTS]$ cksum TEST_DATA_01_01.dbf
895429582 104865792 TEST_DATA_01_01.dbf
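The per-file check above can be wrapped in a loop over all transported datafiles. A minimal sketch, again demonstrated on scratch files, with SRC and DST standing in for the source and destination directories shown above:

```shell
#!/bin/sh
# Compare cksum output (CRC and byte count) of each datafile
# at source and destination; report any mismatch.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "some datafile content" > "$SRC/TEST_DATA_01_01.dbf"
cp "$SRC/TEST_DATA_01_01.dbf" "$DST/"

mismatches=0
for f in "$SRC"/*.dbf; do
  name=$(basename "$f")
  src_sum=$(cksum < "$f")           # "<checksum> <bytes>" at source
  dst_sum=$(cksum < "$DST/$name")   # same at destination
  if [ "$src_sum" = "$dst_sum" ]; then
    echo "OK       $name"
  else
    echo "MISMATCH $name"
    mismatches=$((mismatches + 1))
  fi
done
echo "$mismatches mismatching file(s)"
```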
- Create a directory on destination DB.
SQL> create or replace directory EXPORT_DIR as '/u03/oracle/oradata/mydb2/exp';
Directory created.
SQL> grant all on directory EXPORT_DIR to public;
Grant succeeded.
SQL>
- Also copy the exported dump file from the source into the directory just created on the destination database.
- Since a multi-terabyte database may have numerous data files, create a parameter file for the Data Pump import. Its contents (IMPORT.par) include the list of data files with their paths, the log file location, etc.
directory=EXPORT_DIR
dumpfile=TTS_EXP.dmp
logfile=EXPORT_DIR:import_tts.log
transport_datafiles=
'/u03/oracle/oradata/mydb2/TTS/TEST_DATA_01_01.dbf',
'/u03/oracle/oradata/mydb2/TTS/TEST_DATA_02_01.dbf',
'/u03/oracle/oradata/mydb2/TTS/TEST_DATA_03_01.dbf',
'/u03/oracle/oradata/mydb2/TTS/TEST_DATA_04_01.dbf',
'/u03/oracle/oradata/mydb2/TTS/TEST_IND_01_01.dbf',
'/u03/oracle/oradata/mydb2/TTS/TEST_IND_02_01.dbf',
'/u03/oracle/oradata/mydb2/TTS/TEST_IND_03_01.dbf',
'/u03/oracle/oradata/mydb2/TTS/TEST_IND_04_01.dbf'
- Now we are ready to import the exported dump file using IMPDP. At the UNIX prompt, execute:
[oracle@TESTServer exp]$ impdp system/system parfile=IMPORT.par
- IMPDP reads IMPORT.par for all the required options and prints the log output on screen:
[oracle@TESTServer exp]$ impdp system/system parfile=IMPORT.par
Import: Release 11.2.0.1.0 - Production on Mon Apr 25 16:02:06 2016
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** parfile=IMPORT.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 16:02:21
[oracle@TESTServer exp]$
- Once the import completes, all the data (tables, indexes, and their partitions) belonging to the transported tablespaces is available in the destination database.
- We can validate the list of tablespaces from our tablespace set by querying the data dictionary:
SQL> r
1* select tablespace_name, status from dba_tablespaces
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS ONLINE
TEMP ONLINE
TEST_DATA_01 READ ONLY
TEST_DATA_02 READ ONLY
TEST_DATA_03 READ ONLY
TEST_DATA_04 READ ONLY
TEST_IND_01 READ ONLY
TEST_IND_02 READ ONLY
TEST_IND_03 READ ONLY
TEST_IND_04 READ ONLY
- All imported tablespaces are read-only, so we have to make them read-write again to allow Oracle to write to their data files. We can reuse the earlier script with minor changes.
SQL> begin
for tbs in (select tablespace_name
from dba_tablespaces
where tablespace_name in ('TEST_DATA_01',
'TEST_DATA_02',
'TEST_DATA_03',
'TEST_DATA_04',
'TEST_IND_01',
'TEST_IND_02',
'TEST_IND_03',
'TEST_IND_04'
)
)
loop
execute immediate 'alter tablespace '||tbs.tablespace_name||' read write';
end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select tablespace_name, status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS ONLINE
TEMP ONLINE
TEST_DATA_01 ONLINE
TEST_DATA_02 ONLINE
TEST_DATA_03 ONLINE
TEST_DATA_04 ONLINE
TEST_IND_01 ONLINE
TEST_IND_02 ONLINE
TEST_IND_03 ONLINE
TEST_IND_04 ONLINE
12 rows selected.
- With this, the destination database has all the data that was in the set of tablespaces we migrated.
- This method copies not only the tables (including partitions and subpartitions) and their data, but also the related indexes and statistics. As a result, there is no need to gather statistics after the migration.
Cross Platform Migration:
Migration of tablespaces across databases on different platforms is also possible with this technique. Oracle provides a list of supported platforms, which we can find by querying the view V$TRANSPORTABLE_PLATFORM.
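For example, the supported platforms and their endian formats can be listed as follows. Note that if the endian formats of source and target differ, the data files must additionally be converted (e.g. with RMAN CONVERT); treat this as a sketch, not the full cross-platform procedure:

```sql
-- List platforms supported for cross-platform transport,
-- together with their endian formats.
SELECT platform_id, platform_name, endian_format
  FROM v$transportable_platform
 ORDER BY platform_id;
```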
Limitations:
Of course, there are some limitations to this technique. You have to consider them before you plan this activity.
These limitations are:
- The source and target databases must use the same character set and national character set.
- A tablespace with the same name as one that already exists in the target cannot be migrated. However, you can rename either the source or the destination tablespace before migration.
- The SYSTEM tablespace, and objects owned by SYS, cannot be migrated.
- There are some limitations regarding encrypted tablespaces and tablespaces that contain XMLType or Advanced Queuing objects. Please go through the Oracle documentation for details.
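For the name-clash limitation above, a tablespace can be renamed with a single statement (supported since Oracle 10g); a hedged sketch using a tablespace name from this example:

```sql
-- Rename the existing tablespace on the destination before the import,
-- so the incoming TEST_DATA_01 does not collide with it.
ALTER TABLESPACE TEST_DATA_01 RENAME TO TEST_DATA_01_OLD;
```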
What is the best approach for moving production data? With TTS we can't put the production tablespaces in read-only mode. I have a 25 TB schema to be moved from production to a test environment.
Why do you want the entire data set moved from production to the test environment? Do you really need the entire 25 TB in test, or do you just want to set up a test environment from the production database?
Regards,
Sanjeeva