Friday 3 July 2015

Transportable Tablespace: Oracle Database: Expdp Impdp using Transportable Tablespace



Transportable tablespace is a feature of Oracle Database Enterprise Edition. Starting with Oracle Database 10g, the transportable tablespace feature supports cross-platform data transfer. However, the endian format of the source and destination platforms must be the same (otherwise the datafiles would first have to be converted, for example with RMAN CONVERT).
Use the following query on both the source and destination databases to determine the endian format of each platform.



SELECT db.PLATFORM_NAME, pf.ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM pf, V$DATABASE db
WHERE pf.PLATFORM_NAME = db.PLATFORM_NAME;


The tablespaces being transported can be either dictionary managed or locally managed. Starting with Oracle9i, the transported tablespaces are not required to be of the same block size as the target database standard block size.
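
If a transported tablespace does use a non-standard block size, the target instance needs a matching DB_nK_CACHE_SIZE buffer cache configured before the datafiles are plugged in. A minimal sketch, assuming a hypothetical 16K tablespace on a target whose standard block size is 8K (the cache size shown is only an example):

SQL> show parameter db_block_size

SQL> alter system set db_16k_cache_size=64M scope=both;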

Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use Data Pump to transfer only the metadata of the tablespace objects to the new database.

I've explained all the steps as I executed them and will also discuss other aspects along the way:


Prerequisites: 



1) The source and target databases must use the same character set and national character set. 
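
For example, you can compare both settings by running a query like the following on the source and on the target:

SQL> SELECT parameter, value FROM nls_database_parameters WHERE parameter IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');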


If you attempt the transport with different character sets, you will face the following error: 


ORA-39123: Data Pump transportable tablespace job aborted
ORA-29345: cannot plug a tablespace into a database using an incompatible character set


You may try the following workaround:


Say the source database character set is WE8ISO8859P1.


On the target database:


SQL> SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET' ;

VALUE$
--------------------------------------------------------------------------------
AL32UTF8


TTS is not possible while the character sets differ between the source and target databases.



Troubleshooting:



Target Database:


SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET' ;

VALUE$
--------------------------------------------------------------------------------
AL32UTF8


SQL> alter system enable restricted session;

System altered.

SQL>  declare
  2   str varchar2(100);
  3  begin
  4  for S in (select sid,serial# from v$session where username is not null  and username != 'SYS')
  5  loop
  6  str := 'alter system kill session '||chr(39)||S.sid||','||S.serial#||chr(39);
  7  execute immediate str;
  8  end loop;
  9  END;
 10  /

PL/SQL procedure successfully completed.

SQL> ALTER DATABASE CHARACTER SET INTERNAL_USE WE8ISO8859P1;

Database altered.

SQL> SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET' ;

VALUE$
--------------------------------------------------------------------------------
WE8ISO8859P1

SQL> alter system disable restricted session;

System altered.

SQL> EXIT


Now re-run the import.

This time you might face the following error:


ORA-39006: internal error
ORA-39213: Metadata processing is not available


Workaround:

SQL> execute sys.dbms_metadata_util.load_stylesheets;

PL/SQL procedure successfully completed.


SQL> SELECT comp_id, version, status FROM dba_registry;

COMP_ID                        VERSION                        STATUS
------------------------------ ------------------------------ -----------
OWB                            11.2.0.3.0                     VALID
APEX                           3.2.1.00.12                    VALID
EM                             11.2.0.3.0                     VALID
AMD                            11.2.0.3.0                     VALID
SDO                            11.2.0.3.0                     VALID
ORDIM                          11.2.0.3.0                     VALID
XDB                            11.2.0.3.0                     VALID
CONTEXT                        11.2.0.3.0                     VALID
EXF                            11.2.0.3.0                     VALID
RUL                            11.2.0.3.0                     VALID
OWM                            11.2.0.3.0                     VALID

COMP_ID                        VERSION                        STATUS
------------------------------ ------------------------------ -----------
CATALOG                        11.2.0.3.0                     INVALID
CATPROC                        11.2.0.3.0                     VALID
JAVAVM                         11.2.0.3.0                     VALID
XML                            11.2.0.3.0                     VALID
CATJAVA                        11.2.0.3.0                     VALID
APS                            11.2.0.3.0                     VALID
XOQ                            11.2.0.3.0                     VALID
ODM                            11.2.0.3.0                     VALID

19 rows selected.

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

SQL> SELECT comp_id, version, status FROM dba_registry;

COMP_ID                        VERSION                        STATUS
------------------------------ ------------------------------ -----------
OWB                            11.2.0.3.0                     VALID
APEX                           3.2.1.00.12                    VALID
EM                             11.2.0.3.0                     VALID
AMD                            11.2.0.3.0                     VALID
SDO                            11.2.0.3.0                     VALID
ORDIM                          11.2.0.3.0                     VALID
XDB                            11.2.0.3.0                     VALID
CONTEXT                        11.2.0.3.0                     VALID
EXF                            11.2.0.3.0                     VALID
RUL                            11.2.0.3.0                     VALID
OWM                            11.2.0.3.0                     VALID

COMP_ID                        VERSION                        STATUS
------------------------------ ------------------------------ -----------
CATALOG                        11.2.0.3.0                     VALID
CATPROC                        11.2.0.3.0                     VALID
JAVAVM                         11.2.0.3.0                     VALID
XML                            11.2.0.3.0                     VALID
CATJAVA                        11.2.0.3.0                     VALID
APS                            11.2.0.3.0                     VALID
XOQ                            11.2.0.3.0                     VALID
ODM                            11.2.0.3.0                     VALID

19 rows selected.



2) You cannot transport a tablespace to a target database in which a tablespace with the same name already exists:

However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
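
For example, assuming the target already has a tablespace with the same name as the one being transported (the names below are hypothetical), you could rename the existing one first:

SQL> ALTER TABLESPACE SMAPP_DATA01 RENAME TO SMAPP_DATA01_OLD;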

Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.



3) Encrypted tablespaces have some limitations:


Before transporting an encrypted tablespace, you must copy the Oracle wallet manually to the destination database, unless the master encryption key is stored in a Hardware Security Module (HSM) device instead of an Oracle wallet. When copying the wallet, the wallet password remains the same in the destination database. However, it is recommended that you change the password on the destination database so that each database has its own wallet password. 

You cannot transport an encrypted tablespace to a database that already has an Oracle wallet for transparent data encryption. In this case, you must use Oracle Data Pump to export the tablespace's schema objects and then import them to the destination database. You can optionally take advantage of Oracle Data Pump features that enable you to maintain encryption for the data while it is being exported and imported. 

You cannot transport an encrypted tablespace to a platform with different endianness.

Tablespaces that do not use block encryption but that contain tables with encrypted columns cannot be transported. You must use Oracle Data Pump to export and import the tablespace's schema objects. You can take advantage of Oracle Data Pump features that enable you to maintain encryption for the data while it is being exported and imported. 
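
A minimal sketch of that Data Pump approach for a schema that owns encrypted columns (the schema name, file names and password below are hypothetical, not the commands used in this exercise; dump-file encryption generally requires the Advanced Security Option):

nohup expdp userid=system/password schemas=DBAPP directory=DATA_PUMP_DIR dumpfile=dbapp_enc.dmp logfile=dbapp_enc.log encryption=all encryption_mode=password encryption_password=MySecret1 &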


4) From 11g Release 1, you must use only Data Pump to export and import the tablespace metadata for tablespaces that contain XMLTypes:



Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain XMLTypes. Beginning with Oracle Database 11g Release 1, you must use only Data Pump to export and import the tablespace metadata for tablespaces that contain XMLTypes.

The following query returns a list of tablespaces that contain XMLTypes:

select distinct p.tablespace_name
  from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
 where t.table_name = x.table_name
   and t.tablespace_name = p.tablespace_name
   and x.owner = u.username;


Transporting tablespaces with XMLTypes has the following limitations:

The target database must have XML DB installed (a quick check is shown after this list).

Schemas referenced by XMLType tables cannot be the XML DB standard schemas.

Schemas referenced by XMLType tables cannot have cyclic dependencies.

XMLType tables with row level security are not supported, because they cannot be exported or imported.

If the schema for a transported XMLType table is not present in the target database, it is imported and registered. If the schema already exists in the target database, an error is returned unless the ignore=y option is set.

If an XMLType table uses a schema that is dependent on another schema, the schema that is depended on is not exported. The import succeeds only if that schema is already in the target database.
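
To confirm that XML DB is installed and valid on the target, you can check the component registry:

SQL> SELECT comp_id, version, status FROM dba_registry WHERE comp_id = 'XDB';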


         Expdp Impdp using Transportable Tablespace


Source:




Step-1: Objects in the source tablespace(s) must not have any dependencies on tablespaces outside the transportable tablespace set:




SQL> begin
  2  sys.dbms_tts.transport_set_check('SMAPP_DATA01',TRUE);
  3  end;
  4  /

PL/SQL procedure successfully completed.


SQL> select * from sys.transport_set_violations;

no rows selected
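
If you are transporting more than one tablespace, the same check accepts a comma-separated list of tablespace names (the second tablespace name below is hypothetical):

SQL> exec sys.dbms_tts.transport_set_check('SMAPP_DATA01,SMAPP_INDX01', TRUE);

SQL> select * from sys.transport_set_violations;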






Step-2: The source tablespace(s) must be placed in read-only mode so that the datafile copy is consistent:



SQL>  alter tablespace SMAPP_DATA01 read only;

Tablespace altered.





Step-3: Identify or create the Data Pump directory. The user performing the expdp must have read/write access on this directory:




SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name='DATA_PUMP_DIR';

DIRECTORY_NAME
------------------------------
DIRECTORY_PATH
--------------------------------------------------------------------------------
DATA_PUMP_DIR
/apps/oracle/product/11.2.0/dbhome_1/rdbms/log/

GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO DBAPP;
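
If a suitable directory object does not already exist, you can create one and grant access (the directory name and path below are hypothetical):

SQL> CREATE OR REPLACE DIRECTORY tts_dump_dir AS '/apps/oracle/tts_dump';

SQL> GRANT READ, WRITE ON DIRECTORY tts_dump_dir TO DBAPP;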






Step-4: Export the metadata of the tablespace. At the end of the expdp run, the log tells you which datafile(s) need to be copied to the destination, so check the last portion of the log:




nohup expdp userid=DBAPP/justdoit1 directory=DATA_PUMP_DIR dumpfile=SMAPP_DATA01.dmp logfile=SMAPP_DATA01.log transport_tablespaces=SMAPP_DATA01 transport_full_check=y &

aixdev03(maz0036)/users/maz0036 $>tail -f nohup.out

Export: Release 11.2.0.2.0 - Production on Fri May 3 06:50:43 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "DBAPP"."SYS_EXPORT_TRANSPORTABLE_01":  userid=DBAPP/******** directory=DATA_PUMP_DIR dumpfile=SMAPP_DATA01 .dmp logfile=SMAPP_DATA01.log transport_tablespaces=SMAPP_DATA01 transport_full_check=y
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "DBAPP"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for DBAPP.SYS_EXPORT_TRANSPORTABLE_01 is:
  /apps/oracle/product/11.2.0/dbhome_1/rdbms/log/SMAPP_DATA01.dmp
******************************************************************************
Datafiles required for transportable tablespace SMAPP_DATA01:
  /oracle/orad10/data01/dbs/smapp_data01_01.dbf
Job "DBAPP"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 06:51:43




You can also check the name(s) of the datafile(s) to be transferred at the source database level:


SQL> SELECT FILE_NAME, BYTES FROM DBA_DATA_FILES WHERE TABLESPACE_NAME = 'SMAPP_DATA01';

FILE_NAME                                          BYTES
---------------------------------------------- ----------
/oracle/orad10/data01/dbs/smapp_data01_01.dbf  1049624576


 


Step-5: Copy the datafile(s) and the dump file to the destination server. I used scp:




cd /oracle/orad10/data01/dbs

scp smapp_data01_01.dbf maz0036@abcde0000.amr.ca:

cd /apps/oracle/product/11.2.0/dbhome_1/rdbms/log

scp SMAPP_DATA01.dmp maz0036@abcde0000.amr.ca:
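
Once the dump file and datafile(s) have been copied, the source tablespace can be put back in read/write mode if it is still going to be used on the source:

SQL> alter tablespace SMAPP_DATA01 read write;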

Also note the db_cache_size on the source, since it will be matched on the target later:

SQL> show parameter db_cache_size;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_size                        big integer 64M



The source and target databases must use the same character set and national character set, so verify the character set before you start:


SQL> SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET' ;

VALUE$
--------------------------------------------------------------------------------
WE8ISO8859P1

SQL>



Target:


Copy the files to the proper location on the target server.

Change the file permissions as necessary (chmod).



Check db_cache_size on the target and set it to match the source:

SQL>  show parameter db_cache_size;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_size                        big integer 0


SQL> alter system set db_cache_size=64M scope=spfile;

System altered.






Step-6: Import the tablespace metadata on the target using impdp with transport_datafiles:



nohup impdp userid=DBAPP/justdoit1 directory=maci_dump dumpfile=SMAPP_DATA01.dmp logfile=impdp.log transport_datafiles='/oracle/ORAD120/data01/dbs/smapp_data01_01.dbf' &


ldbsd0020[ORAD120]:/home/oracle> more nohup.out

Import: Release 11.2.0.3.0 - Production on Fri May 3 09:54:08 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "DBAPP"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "DBAPP"."SYS_IMPORT_TRANSPORTABLE_01":  userid=DBAPP/******** directory= dumpfile= logfile=impdp.log transport_datafiles=/oracle/ORAD120/data01/dbs/smapp_data0
1_01.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
========================================================================
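
After the import completes, the plugged-in tablespace is still read only on the target, so put it back in read/write mode before handing it over to the application:

SQL> alter tablespace SMAPP_DATA01 read write;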
