Best Approach to a Full Database Migration Using Oracle Data Pump (expdp/impdp)
Performing a complete database export/import with expdp/impdp FULL=Y is not a good approach in a real production environment: it is a very slow method, and in production support there is always a fire on the floor, so we can't afford a long production downtime. In the recent past my team migrated a 3 TB database, and I'd love to share the experience with you.
So the best approach is to create a whole new database manually using a create-database script. We'll discuss the database creation and hardening part in another post.
A. Expdp Part:
After creating the new database we'll take schema-wise expdp backups, excluding grants, constraints, indexes, and statistics. We'll create the dump files in parallel to speed up the activity.
Here I'm assuming that you have already created your Data Pump directory and that the user has the EXP_FULL_DATABASE, CREATE SESSION, and CONNECT privileges assigned to it.
Step - 1: Create the PAR file for Expdp:
vi exp_fucleus.par
userid=username/password
DIRECTORY=newnig
FILESIZE=10G
DUMPFILE=exp_fucleus_%U.dmp
CLUSTER=N
PARALLEL=4
EXCLUDE=GRANT
EXCLUDE=CONSTRAINTS
EXCLUDE=INDEX
EXCLUDE=STATISTICS
LOGFILE=exp_fucleus.log
JOB_NAME=exp_fucleus_job
schemas='FUCLEUS'
Server Screen Shot:
[oracle@YUINSLGGNSBIEDB5 newnig_expdp]$ cat exp_fucleus.par
userid=
DIRECTORY=newnig
FILESIZE=10G
DUMPFILE=exp_fucleus_%U.dmp
CLUSTER=N
PARALLEL=4
EXCLUDE=GRANT
EXCLUDE=CONSTRAINTS
EXCLUDE=INDEX
EXCLUDE=STATISTICS
LOGFILE=exp_fucleus.log
JOB_NAME=exp_fucleus_job
schemas='FUCLEUS'
Step - 2: Run the expdp job:
After checking that we have enough space for storing the dump files, run the expdp job:
vi exp_fucleus.sh
export ORACLE_SID=dasnig2
nohup expdp parfile=exp_fucleus.par &
sh exp_fucleus.sh
Server Screen Shot:
[oracle@YUINSLGGNSBIEDB5 newnig_expdp]$ cat exp_fucleus.sh
export ORACLE_SID=dasnig2
nohup expdp parfile=exp_fucleus.par &
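A quick way to make that space check repeatable is a small shell helper. A minimal sketch, where the dump path and the 100 GB figure are assumptions (expdp's ESTIMATE_ONLY=Y gives a proper estimate):

```shell
#!/bin/sh
# check_space DIR NEEDED_GB -> succeeds only if DIR's filesystem has at
# least NEEDED_GB gigabytes free.
check_space() {
    dir=$1; needed_gb=$2
    free_gb=$(df -Pk "$dir" 2>/dev/null | awk 'NR==2 {printf "%d", $4/1024/1024}')
    free_gb=${free_gb:-0}
    if [ "$free_gb" -lt "$needed_gb" ]; then
        echo "Only ${free_gb} GB free in $dir, need ${needed_gb} GB" >&2
        return 1
    fi
    echo "OK: ${free_gb} GB free in $dir"
}

# Usage (path and estimate are hypothetical):
#   check_space /home/oracle/newnig_expdp 100 && sh exp_fucleus.sh
```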
Follow step-1 and step-2 for all schemas in the source database.
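Since step-1 and step-2 are repeated per schema, the par files can be generated in a loop instead of edited by hand. A sketch that mirrors the exp_fucleus.par template above (the schema list and output directory are assumptions):

```shell
#!/bin/sh
# Generate one expdp par file per schema, following the exp_fucleus.par
# template. Output directory and schema names are supplied by the caller.
gen_parfiles() {
    outdir=$1; shift
    for schema in "$@"; do
        lc=$(echo "$schema" | tr 'A-Z' 'a-z')
        cat > "$outdir/exp_${lc}.par" <<EOF
userid=username/password
DIRECTORY=newnig
FILESIZE=10G
DUMPFILE=exp_${lc}_%U.dmp
CLUSTER=N
PARALLEL=4
EXCLUDE=GRANT
EXCLUDE=CONSTRAINTS
EXCLUDE=INDEX
EXCLUDE=STATISTICS
LOGFILE=exp_${lc}.log
JOB_NAME=exp_${lc}_job
schemas='${schema}'
EOF
    done
}

# Usage (schema names are hypothetical):
#   gen_parfiles /home/oracle/newnig_expdp FUCLEUS HR SCOTT
```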
Step - 3: Transfer all the dump files
Transfer all the dump files to the destination server (the server where the new database was created) using SCP. If the new database is on the same server, this step is not required.
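When copying multi-gigabyte dump files over SCP it is worth verifying their integrity on the other side. A sketch using an md5sum manifest, where the destination host and paths are assumptions:

```shell
#!/bin/sh
# Build an md5 manifest of the dump files before transfer; running
# `md5sum -c exp_dumps.md5` on the destination then confirms nothing
# was corrupted in transit.
make_manifest() {
    dir=$1
    ( cd "$dir" && md5sum *.dmp > exp_dumps.md5 )
}

# Usage (host and paths are hypothetical):
#   make_manifest /home/oracle/newnig_expdp
#   scp /home/oracle/newnig_expdp/*.dmp /home/oracle/newnig_expdp/exp_dumps.md5 \
#       oracle@desthost:/u01/newnig_imp/
#   ssh oracle@desthost 'cd /u01/newnig_imp && md5sum -c exp_dumps.md5'
```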
B. Impdp Part:
Here I'm assuming that you have already created your Data Pump directory and that the user has the IMP_FULL_DATABASE, CREATE SESSION, and CONNECT privileges assigned to it.
Step - 1: Create import par file in destination server
vi imp_FUCLEUS.par
userid=username/password
DIRECTORY=newnig_imp
REMAP_SCHEMA=FUCLEUS:FUCLEUS
DUMPFILE=exp_fucleus_%U.dmp
PARALLEL=4
LOGFILE=Imp_fucleus.log
JOB_NAME=fucleus_IMP
CLUSTER=N
Server Screen Shot:
[oracle@YUINSLGGNSBIEDB5 newmig_expdp]$ cat imp_FUCLEUS.par
userid=
DIRECTORY=newnig_imp
REMAP_SCHEMA=FUCLEUS:FUCLEUS
DUMPFILE=exp_fucleus_%U.dmp
PARALLEL=4
LOGFILE=Imp_fucleus.log
JOB_NAME=fucleus_IMP
CLUSTER=N
[oracle@YUINSLGGNSBIEDB5 newnig_expdp]$
Step - 2: Run the Impdp Job
vi imp_FUCLEUS.sh
ORACLE_SID=dasnguat2
export ORACLE_SID;
nohup impdp parfile=imp_FUCLEUS.par &
sh imp_FUCLEUS.sh
Server Screen Shot:
[oracle@YUINSLGGNSBIEDB5 newmig_expdp]$ cat imp_FUCLEUS.sh
ORACLE_SID=dasnguat2
export ORACLE_SID;
nohup impdp parfile=imp_FUCLEUS.par &
Repeat step-1 and step-2 above for all schemas exported from the source database.
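impdp keeps going past many non-fatal errors, so with one log per schema it helps to scan them all for ORA- messages afterwards. A minimal sketch:

```shell
#!/bin/sh
# Summarize ORA- errors across Data Pump log files so nothing is missed
# after many per-schema runs. Returns non-zero if any log has errors.
scan_logs() {
    status=0
    for log in "$@"; do
        errs=$(grep -c 'ORA-' "$log")
        if [ "$errs" -gt 0 ]; then
            echo "$log: $errs ORA- error(s)"
            grep 'ORA-' "$log" | sort | uniq -c | sort -rn
            status=1
        else
            echo "$log: clean"
        fi
    done
    return $status
}

# Usage (path is hypothetical): scan_logs /u01/newnig_imp/Imp_*.log
```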
Our job doesn't end here; now we have to run the structure export/import as well:
C. Structure Export Part:
This part will be performed on source server/source database:
TXN Export:
expdp userid=username/password DIRECTORY=newnig SCHEMAS=FUCLEUS DUMPFILE=Exp_Struc_FUCLEUS.dmp LOGFILE=Exp_Struc_FUCLEUS.log CONTENT=METADATA_ONLY EXCLUDE=STATISTICS
OR
exp userid=username/password OWNER=FUCLEUS FILE=Exp_Struc_FUCLEUS.dmp LOG=Exp_Struc_FUCLEUS.log INDEXES=Y GRANTS=Y ROWS=N STATISTICS=NONE
Server Screen Shot:
exp OWNER=FUCLEUS FILE=Exp_Struc_FUCLEUS.dmp LOG=Exp_Struc_FUCLEUS.log INDEXES=Y GRANTS=Y ROWS=N statistics=none;
Repeat this for all schemas.
D. Structure Import Part:
This part will be performed on destination server/destination database:
FUCLEUS IMPORT:
Step - 1:
impdp userid=username/password DIRECTORY=newnig_imp DUMPFILE=Exp_Struc_FUCLEUS.dmp LOGFILE=Imp_Struc_FUCLEUS.log CONTENT=METADATA_ONLY EXCLUDE=STATISTICS
OR
imp FROMUSER=FUCLEUS TOUSER=FUCLEUS FILE=Exp_Struc_FUCLEUS.dmp LOG=Imp_Struc_FUCLEUS.log INDEXES=Y GRANTS=Y ROWS=N ignore=y
Server Screen Shot:
imp FROMUSER=FUCLEUS TOUSER=FUCLEUS FILE=Exp_Struc_FUCLEUS.dmp LOG=Imp_Struc_FUCLEUS.log INDEXES=Y GRANTS=Y ROWS=N ignore=y
Repeat above for all schemas.
Step - 2: Object comparison
Now we have to perform object comparison for all schemas:
Run the following command for each schema in both the source and destination databases:
SELECT owner,object_type,COUNT(1) FROM DBA_objects where owner in ('FUCLEUS') GROUP BY OWNER,object_type order by 1,2;
The object counts should match exactly between the source and destination databases.
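If the query output is spooled to a text file on each side, the comparison itself can be automated with diff. A sketch, where the spool file names are assumptions:

```shell
#!/bin/sh
# Compare per-object-type count spools taken on the source and destination
# databases (each file assumed to hold the output of the DBA_OBJECTS query).
compare_counts() {
    if diff -u "$1" "$2" > /tmp/obj_diff.$$; then
        echo "object counts match"
        rm -f /tmp/obj_diff.$$
    else
        echo "MISMATCH:"
        cat /tmp/obj_diff.$$
        rm -f /tmp/obj_diff.$$
        return 1
    fi
}

# Usage (file names are hypothetical): compare_counts src_counts.txt dst_counts.txt
```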
E. Public synonyms and DB Links creation Part:
It doesn't end here either: public synonyms and DB links will be missing in the destination database, so we need to create them manually.
Step - 1:
Run the following command on both the source and destination databases and check whether any public synonyms are missing or mismatched:
SELECT * FROM all_synonyms WHERE owner = 'PUBLIC' AND table_owner = '<SCHEMA_NAME>';
Step - 2:
By running the following scripts on the source DB we can extract all the public synonym creation statements together into a spool file.
Script to extract the public synonym creation statements for a particular schema
===============================================
set feedback off
set trimspool on
set pagesize 1000
set linesize 1000
set heading off
spool public_synonyms.sql
select 'create public synonym ' || synonym_name || ' for ' || table_owner || '.' || table_name || ';'
from dba_synonyms
where
owner='PUBLIC' and table_owner in ('FUCLEUS')
order by
table_owner;
spool off
exit;
Script to extract the public synonym creation statements for all DB schemas except SYS and SYSTEM
=======================================================
set feedback off
set trimspool on
set pagesize 1000
set linesize 1000
set heading off
spool public_synonyms.sql
select 'create public synonym ' || synonym_name || ' for ' || table_owner || '.' || table_name || ';'
from dba_synonyms
where
owner='PUBLIC' and table_owner not in ('SYS','SYSTEM')
order by
table_owner;
spool off
exit;
Step - 3:
SCP that spool file to the destination server and run it on the destination DB.
Step - 4:
Now on the destination database, validate/compile all invalid objects together using @?/rdbms/admin/utlrp.sql, or recompile them manually:
select count(1),status from DBA_OBJECTS where owner='FUCLEUS' and status='INVALID';
select OWNER,object_name,OBJECT_TYPE,status from DBA_OBJECTS where owner='FUCLEUS' and status='INVALID';
ALTER VIEW FUCLEUS_DAM.V_BKCHK COMPILE;
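If utlrp.sql leaves objects invalid, the manual ALTER ... COMPILE statements can be generated from a spool of the query above. A sketch that assumes the spool holds OWNER, OBJECT_NAME, and OBJECT_TYPE columns in that order (multi-word types other than PACKAGE BODY, such as MATERIALIZED VIEW, would need extra handling):

```shell
#!/bin/sh
# Emit ALTER ... COMPILE statements from a spooled list of invalid objects.
# PACKAGE BODY needs "COMPILE BODY"; other types use the generic form.
gen_compile() {
    awk '{
        owner = $1; name = $2
        if ($3 == "PACKAGE" && $4 == "BODY")
            printf "ALTER PACKAGE %s.%s COMPILE BODY;\n", owner, name
        else
            printf "ALTER %s %s.%s COMPILE;\n", $3, owner, name
    }' "$1"
}

# Usage (file name is hypothetical): gen_compile invalid_objects.txt > recompile.sql
```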
Now finally, match all object counts and valid-object counts with the source database.
And our task ends here.