Monday 24 August 2015

Changing Execution Plan For SQL Query in Oracle 11G



There are times when we want a SQL query to use one particular execution plan (the one we consider best) every time it runs.
We can pick that plan from AWR snapshots and make the optimizer reuse it by setting up a SQL plan baseline.

Here are the steps for loading SQL plans into SPM (SQL Plan Management) from AWR by creating a SQL plan baseline for the badly performing query.



Step 1: Set up a SQL Baseline using the best plan (plan source: AWR snapshots)


The first condition for setting up a SQL baseline is that SQL Plan Management must be active: check the optimizer_use_sql_plan_baselines parameter, which needs to be TRUE.


SQL>show parameter optimizer_

SQL>select count(*) from dba_sql_plan_baselines;

Check whether a plan for the particular SQL_ID is currently in the cursor cache:

SQL>select * from TABLE(dbms_xplan.display_cursor('SQL_ID'));







Step 2: Create SQL Tuning Set (STS).


A SQL tuning set (STS) is a database object that stores a set of SQL statements together with their execution context and statistics.

An STS includes:


–          A set of SQL statements

–          Associated execution context, such as user schema, application module name and action, list of bind values, and the cursor compilation environment

–          Associated basic execution statistics, such as elapsed time, CPU time, buffer gets, disk reads, rows processed, cursor fetches, the number of executions, the number of complete executions, optimizer cost, and the command type

–          Associated execution plans and row source statistics for each SQL statement (optional)

Oracle 11g makes use of SQL tuning sets with the SQL Performance Analyzer, which compares the performance of the statements in a tuning set before and after a database change. The database changes can be as follows:

 *   Database, operating system, or hardware upgrades.
 *   Database, operating system, or hardware configuration changes.
 *   Database initialization parameter changes.
 *   Schema changes, such as adding indexes or materialized views.
 *   Refreshing optimizer statistics.
 *   Creating or changing SQL profiles.

Now we create a SQL Tuning Set based on the slow query with a SQL_ID of 1dy987n2d6and.

SQL>begin
  DBMS_SQLTUNE.CREATE_SQLSET(
    sqlset_name => 'SQL_ID_name',
    description => 'SQL tuning set for loading plan into sql plan baseline');
end;
/

Optionally, review the SQL Monitor report for the statement to see how it has been executing:

SQL>select dbms_sqltune.report_sql_monitor(type=>'TEXT',report_level=>'ALL',sql_id=>'******') as report from dual;


Or

Instead of creating a SQL tuning set, we can create the baseline directly from the cursor cache using dbms_spm.load_plans_from_cursor_cache.
We can then use dbms_spm.evolve_sql_plan_baseline to verify the new baseline, and finally dbms_spm.alter_sql_plan_baseline to disable the old one. A minimal sketch of the first call is shown below.
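
A minimal sketch of that route (the SQL_ID is elided as elsewhere in this post; the function returns the number of plans loaded):

SQL>declare
  plans_loaded pls_integer;
begin
  -- load the plan(s) for the given sql_id straight from the cursor cache
  plans_loaded := dbms_spm.load_plans_from_cursor_cache(sql_id => '******');
  dbms_output.put_line('Plans loaded: ' || plans_loaded);
end;
/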






Step 3: Populate the STS from AWR.


Now we identify the AWR snapshots that cover the period when the good plan was used, and load the STS based on those snapshot IDs and the SQL_ID.

SQL>select snap_id,begin_interval_time,end_interval_time from dba_hist_snapshot order by end_interval_time desc;

SQL>declare
  cursor1 dbms_sqltune.sqlset_cursor;
begin
  open cursor1 for
    select value(P)
    from table(dbms_sqltune.select_workload_repository(
                 begin_snap     => <begin_snap_id>,
                 end_snap       => <end_snap_id>,
                 basic_filter   => 'sql_id = ''*******''',
                 attribute_list => 'ALL')) P;
  dbms_sqltune.load_sqlset(sqlset_name => '*****', populate_cursor => cursor1);
end;
/





Step 4: Verify SQL Tuning Set Contents.


Now I can query the STS to verify it contains the expected data.

SQL>select * from TABLE(dbms_sqltune.select_sqlset(sqlset_name=>'*******'));
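
To see exactly which statements and plans made it into the set, you can also pick out specific attributes (these are documented SQLSET_ROW attributes):

SQL>select sql_id, plan_hash_value, elapsed_time, buffer_gets
from TABLE(dbms_sqltune.select_sqlset(sqlset_name=>'*******'));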







Step 5: Create Baseline:



With the STS populated and verified, we are ready to create the baseline from it; the actual load is shown in the next step.

Step 6: Load desired plan from STS as SQL Plan Baseline

Now I will load the known good plan from the STS into the Baseline.

declare
  best_plans pls_integer;
begin
  best_plans := dbms_spm.load_plans_from_sqlset(
                  sqlset_name  => '******',
                  basic_filter => 'plan_hash_value = ********');
end;
/







Step 7: Verify the Baselines:


Now verify the Baseline contains the desired plan.

SQL>select count(*) from dba_sql_plan_baselines;
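
The count only proves that something was loaded. To confirm the desired plan is present and accepted, list the baselines (standard DBA_SQL_PLAN_BASELINES columns; a manually loaded plan shows ORIGIN = MANUAL-LOAD):

SQL>select sql_handle, plan_name, origin, enabled, accepted, fixed
from dba_sql_plan_baselines;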





Step 8: Flush the current poor-performing SQL plan.


After loading the baseline, the current cursor must be flushed from the cache to make sure the new plan will be used on next execution of the sql_id.

First find the cursor's address and hash value in V$SQLAREA, then pass them to DBMS_SHARED_POOL.PURGE as a single comma-separated string with the 'C' (cursor) flag:

SQL>exec dbms_shared_pool.purge('<address>,<hash_value>','C');
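
A sketch of the full sequence, using the example SQL_ID from Step 2 (enter the values returned by the query when SQL*Plus prompts for the substitution variables):

SQL>select address, hash_value from v$sqlarea where sql_id = '1dy987n2d6and';
SQL>exec sys.dbms_shared_pool.purge('&address,&hash_value','C');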




Sunday 9 August 2015

Oracle RAC 11gR2 Interconnect Performance Tuning


We are going to discuss RAC interconnect performance tuning today.
We assume Linux as the operating system and UDP (User Datagram Protocol) as the interconnect protocol. The nodes communicate with each other through the interconnect, and load balancing depends entirely on it, so interconnect performance is a major component of overall cluster performance.

So first of all we have to choose our hardware wisely: it must support a high-speed interconnect.

Interconnect hardware

To implement a faster interconnect we need faster hardware, such as 10 Gigabit Ethernet (10 GigE) or InfiniBand.
The first option for increasing the throughput of your interconnect is 10 GigE technology, which represents the next level of Ethernet. Although it is becoming increasingly common, note that 10 GigE does require specific certification on a platform-by-platform basis; check with Oracle Support for your platform.

InfiniBand is available and supported with two options. Reliable Datagram Sockets (RDS) protocol is the preferred option, because it offers up to 30 times the bandwidth advantage and 30 times the latency reduction over Gigabit Ethernet. IP over InfiniBand (IPoIB) is the other option, which does not do as well as RDS, since it uses the standard UDP or TCP, but it does still provide much better bandwidth and much lower latency than Gigabit Ethernet.


UDP buffers

We should pick the fastest possible network to be used for the interconnect. To maximize speed and efficiency on the interconnect, we should ensure that the User Datagram Protocol (UDP) buffers are set to the correct values. On Linux, you can check this via the following command:
sysctl net.core.rmem_max net.core.wmem_max net.core.rmem_default net.core.wmem_default
net.core.rmem_max = 4194300
net.core.wmem_max = 1048500
net.core.wmem_default = 262100
You can get the correct values for your platform from Oracle Support. We can also read these values directly from the files in the directory /proc/sys/net/core, and reset them to the correct numbers using sysctl commands:
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=1048576
sysctl -w net.core.rmem_default=262144
sysctl -w net.core.wmem_default=262144
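
The sysctl -w settings do not survive a reboot. To make them persistent (standard Linux administration, not Oracle-specific), add the same values to /etc/sysctl.conf and reload:

net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
net.core.rmem_default = 262144
net.core.wmem_default = 262144

sysctl -p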

The numbers in this example are the recommended values for Oracle RAC on Linux and are more than sufficient for the majority of configurations. The values determined by rmem_max and wmem_max are on a per-socket basis. So if you set rmem_max to 8MB and you have 800 processes running, each with a socket open for communications on the interconnect, then each of these 800 processes could potentially use 8MB, meaning the total memory usage could be 6.4GB just for this UDP buffer space.

If rmem_default is set to 1MB and rmem_max is set to 8MB, you are sure that at least 800MB will be allocated (1MB per socket). Anything more than that is allocated only as needed, up to the max value. So the total memory usage depends on rmem_default, rmem_max, the number of open sockets, and the variable piece of how much buffer space each process is actually using. That in turn can depend on network latency and other characteristics of how well the network is performing and how much network load there is altogether. To count the Oracle-related open UDP sockets:

netstat -anup | grep ora | wc -l


Jumbo Frames:

Jumbo frames raise the interface MTU (typically to about 9000 bytes) so that an 8 KB database block can cross the interconnect in a single frame instead of being fragmented and reassembled. If your NICs and switches support them end to end on the private network, enabling jumbo frames reduces CPU overhead and reassembly failures; as with 10 GigE, verify support for your platform.

Link Aggregation and HAIP:

Now let us check the performance of an interconnect that is already implemented, assuming our RAC is running and has been released to business users.

If we have NIC errors, or one of the NICs fails, in 11gR2 this no longer means a complete loss of interconnect between the nodes, thanks to link aggregation.

Prior to Oracle 11gR2, systems with a single private network were designed with a single point of failure. (Link aggregation is also known as NIC bonding, NIC teaming, or port trunking.) The central idea behind link aggregation is to have two private networks act as one. The two private networks are combined to appear to the operating system as one unit: to the OS, the network adapters look like one adapter. If one of the physical network adapters fails, the OS hardly notices and network traffic proceeds through the remaining adapter.
Oracle Grid Infrastructure now provides RAC HAIP, which is link aggregation moved to the clusterware level. Instead of bonding the network adapters on the OS side, Grid Infrastructure is instructed to use multiple network adapters. Grid Infrastructure will still start HAIP even if the system is configured with only one private network adapter.


To find out whether we have NIC-related issues, check for the "gc cr lost blocks" wait event in Automatic Workload Repository (AWR) / sysstat.

If we find the "gc cr lost blocks" wait event, we need to check for the following errors on the NIC:

Dropped packets/fragments
Buffer overflows
Packet reassembly failures or timeouts
TX/RX errors

We will use the following commands to find any errors:

netstat -s
ifconfig -a
ORADEBUG (e.g. oradebug ipc from SQL*Plus, after oradebug setmypid, to dump interconnect information)
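
The lost-block counters can also be read from inside the database; a quick sketch against GV$SYSSTAT (statistic names as in 11g):

SQL>select inst_id, name, value
from gv$sysstat
where name in ('gc blocks lost','gc cr blocks received','gc current blocks received')
order by inst_id, name;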
Now we are going to identify interconnect performance issues from AWR.
These wait events from AWR/sysstat can indicate contention related to RAC:

GC current block busy
GC cr block busy
GC current buffer busy
GC buffer busy acquire/release


These wait events in the AWR indicate that there might be a hot block causing them. From the AWR Segment Statistics, you can find the objects involved:


Enq: TX - index contention
GC buffer busy
GC current block busy
GC current split



Under Global Cache and Enqueue Services – Workload Characteristics, we will check:

Avg global cache cr block receive time (ms): should be <= 15 ms

Under Global Cache and Enqueue Services – Messaging Statistics:

Avg message sent queue time on ksxp (ms): should be < 1 ms

Under Interconnect Ping Latency Stats:

Avg Latency 8K msg should be close to Avg Latency 500B msg.

The sketch after this list shows how to compute the first of these averages directly from GV$SYSSTAT.
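
A sketch of that calculation; the receive-time statistic is kept in centiseconds, so multiply by 10 for milliseconds:

SQL>select t.inst_id,
round((t.value * 10) / nullif(r.value, 0), 2) as avg_cr_block_receive_ms
from gv$sysstat t, gv$sysstat r
where t.name = 'gc cr block receive time'
and r.name = 'gc cr blocks received'
and t.inst_id = r.inst_id;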


We will see the following issues if multiple sessions are inserting into a single object or are using a sequence, and the indexed column is sequentially increasing. To address these specific issues (a sketch follows the list):


Identify the indexes and global-hash-partition them.
Increase the sequence cache if ordering is not a problem.
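
A sketch of both fixes (the table, index, and sequence names here are hypothetical):

SQL>create index orders_id_ix on orders (order_id)
global partition by hash (order_id) partitions 16;

SQL>alter sequence orders_seq cache 1000;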


Sunday 2 August 2015

when to use Bitmap Index in Oracle

            Bitmap Index – when to use it?


We are all well aware of the definition of a bitmap index in the Oracle Database. Now the question is: when should we use it?


1. Low cardinality:


When the cardinality of a column is low, it's better to use a bitmap index. (Cardinality => the number of distinct values a column holds.) B-tree indexes are most effective for high-cardinality data: that is, data with many possible values, such as CUSTOMER_NAME or PHONE_NUMBER.

2. No or little insert/update :



Updating bitmap indexes takes a lot of resources: each index maintained by an INSERT, DELETE, or UPDATE of the indexed keys requires about three times as much resource as the actual DML operation on the table. So if you INSERT into a table with three indexes, it will be approximately 10 times slower than an INSERT into a table with no indexes (1 unit for the row plus 3 units for each of the 3 indexes).


3. Multiple bitmap indexes can be merged:


One good thing about bitmap indexes is that multiple bitmap indexes can be merged, and the columns do not have to be selective. Combining bitmap indexes on non-selective columns allows efficient AND and OR operations over a great number of rowids with minimal I/O, so the optimizer can use more than one indexed column on the table to improve the performance of a scan.
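
A sketch of the merge benefit (the table and column names are hypothetical; each column alone is non-selective):

SQL>create bitmap index cust_gender_bix on customers_demo (gender);
SQL>create bitmap index cust_region_bix on customers_demo (region);

SQL>select count(*) from customers_demo where gender = 'F' and region = 'EAST';

The optimizer can combine the two bitmaps with a BITMAP AND and answer the query with minimal I/O.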

Create a data type that will only hold 0s and 1s

We can do this with a constrained PL/SQL subtype:





DECLARE
  SUBTYPE flagtype IS PLS_INTEGER RANGE 0..1;
  x flagtype;
BEGIN
  BEGIN
    x := 0;
    dbms_output.put_line('Success: ' || TO_CHAR(x));
  EXCEPTION
    WHEN others THEN
      dbms_output.put_line('Can not assign 0 to flagtype');
  END;
  BEGIN
    x := 1;
    dbms_output.put_line('Success: ' || TO_CHAR(x));
  EXCEPTION
    WHEN others THEN
      dbms_output.put_line('Can not assign 1 to flagtype');
  END;
  ....
END;
/

How Many Scan Listeners? Add SCAN listener-SCAN-ORACLE RAC 11GR2

          How Many Scan Listeners?   



SCAN is a new concept introduced with Oracle 11gR2. SCAN stands for Single Client Access Name: the address used by clients connecting to the cluster.
The SCAN is a fully qualified host name, located in the GNS subdomain, registered to three IP addresses.
From 11gR2 onwards Oracle has enhanced database availability from the client's perspective by introducing SCAN. With SCAN, clients can use the SCAN name (resolved to 3 VIPs, in the default configuration using GNS/DNS) instead of listing all the RAC nodes in the connect string, as illustrated below. The SCAN listeners (defined on the SCAN VIPs) are created as the first point of contact and coordinate with the node listeners (defined on the node VIPs); even if a node fails, any SCAN listener running on that node is relocated to a surviving node while the remaining SCAN listeners keep serving as normal.
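
For illustration, a client connect descriptor then needs only the SCAN name (tnsnames.ora style; the service name here is hypothetical):

TESTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = testscan11gr2p-scan.fiatz.klz)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = testdb_srv))
  )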


Usually 3 SCAN listeners can handle hundreds of new connections within a few seconds, but if your environment needs more than 3 of them you can add a few more. A GNS-based dynamic IP scheme still does not support changing the number of SCAN listeners, but in a DNS-based static IP configuration you can add extra SCAN listeners.




              ADD SCAN LISTENER




Process of adding one extra SCAN listener in our DNS-based static IP configuration.
It can be done in the following simple steps without affecting the availability of the cluster:
  1. Add an additional unused IP from the same subnet into the DNS configuration for the scan-name.
  2. Restart the DNS named service.
  3. Update/modify the SCAN in the cluster.
  4. Update/modify the scan_listener in the cluster.
  5. Verify the changed configuration.
  6. Start the newly added SCAN.
  7. Verify service registration on the newly added SCAN listener.

The scan_name testscan11gr2p-scan currently resolves to 3 IPs (***.***.1.1/2/3):

[root@tnpdns ~]# nslookup testscan11gr2p-scan
Server: 1**.1**.1.43
Address: 1**.1**.1.43#53
Name: testscan11gr2p-scan.fiatz.klz
Address: 1**.1**.2.3
Name: testscan11gr2p-scan.fiatz.klz
Address: 1**.1**.2.1
Name: testscan11gr2p-scan.fiatz.klz
Address: 1**.1**.2.2

We have identified IP 1**.1**.2.4 from the same subnet and added it against scan_name testscan11gr2p-scan:

The network team will do this for us, or you can configure your DNS yourself.

To reflect the changes we need to do restart of named service:

[root@tnpdns named]# service named restart
Stopping named: . [ OK ]
Starting named: [ OK ]

Verify that the scan name now resolves to 4 IPs:


[root@tnpdns named]# nslookup testscan11gr2p-scan
Server: 1**.1**.1.43
Address: 1**.1**.1.43#53
Name: testscan11gr2p-scan.fiatz.klz
Address: 1**.1**.2.2
Name: testscan11gr2p-scan.fiatz.klz
Address: 1**.1**.2.3
Name: testscan11gr2p-scan.fiatz.klz
Address: 1**.1**.2.4
Name: testscan11gr2p-scan.fiatz.klz
Address: 1**.1**.2.1

Change the SCAN configuration:

Check the current SCAN configuration:

[root@testscan11gr2p.fiatz.klz] srvctl config scan
SCAN name: testscan11gr2p-scan, Network: 1/1**.1**.2.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /testscan11gr2p-scan.fiatz.klz/1**.1**.2.1
SCAN VIP name: scan2, IP: /testscan11gr2p-scan.fiatz.klz/1**.1**.2.2
SCAN VIP name: scan3, IP: /testscan11gr2p-scan.fiatz.klz/1**.1**.2.3


Login to database cluster with root and execute following:


[root@testscan11gr2p.fiatz.klz] srvctl modify scan -n testscan11gr2p-scan


 Check the configuration again if changes are being reflected:


[root@testscan11gr2p.fiatz.klz] srvctl config scan

SCAN name: testscan11gr2p-scan, Network: 1/1**.1**.2.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /testscan11gr2p-scan.fiatz.klz/1**.1**.2.1
SCAN VIP name: scan2, IP: /testscan11gr2p-scan.fiatz.klz/1**.1**.2.2
SCAN VIP name: scan3, IP: /testscan11gr2p-scan.fiatz.klz/1**.1**.2.3
SCAN VIP name: scan4, IP: /testscan11gr2p-scan.fiatz.klz/1**.1**.2.4


Update the scan listener configuration:

Check current scan listener configuration:

[root@testscan11gr2p.fiatz.klz] srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

update the scan listener configuration:

[root@testscan11gr2p.fiatz.klz] srvctl modify scan_listener -u

Verify the scan listener configuration after the update:
[root@testscan11gr2p.fiatz.klz] srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN4 exists. Port: TCP:1521

Start the listener_scan4:

[root@testscan11gr2p1.lgk.nmk] srvctl start scan_listener -i 4

Check the status of scan listener:

[root@testscan11gr2p1.lgk.nmk] srvctl status scan_listener

SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node testscan11gr2p2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node testscan11gr2p1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node testscan11gr2p1
SCAN Listener LISTENER_SCAN4 is enabled
SCAN listener LISTENER_SCAN4 is running on node testscan11gr2p2

Check the status of the new scan listener and confirm that services are being registered:
lsnrctl status LISTENER_SCAN4

Friday 31 July 2015

OCR Replacement-Replace Existing OCR Disk with New One-Oracle RAC 11GR2




----------------------Prerequisite for Activity--------------


1. OCR backup, manually and automatically, to another location:

You can use the ocrconfig -manualbackup command to force Oracle Clusterware to perform a backup of the OCR at any time, rather than wait for the automatic backup that occurs at 4-hour intervals.

cd $GRID_HOME/bin
./ocrconfig -manualbackup
./ocrconfig -showbackup

2. Export OCR backup:

./ocrconfig -export /tmp/ocr_export_date.exp

3. Import OCR backup (only if a restore is required; Oracle Clusterware must be stopped on all nodes first):

./ocrconfig -import /tmp/ocr_export_date.exp


---------------------Voting Disk----------------------------------


1. Require 3 LUNs of 500 MB each, on different storage:

/dev/mapper/VOTE_V1  ---Violin Storage
/dev/mapper/VOTE_H1 ----Hitachi  Storage
/dev/mapper/VOTE_V2 ----Violin Storage
Create 3 LUNs of 500 MB each. If possible, put each LUN on a different controller, array, or storage.

2. Disk creation from the root user on one of the nodes (node 1), i.e. 1.***.43.**:

2.1) For Voting

oracleasm createdisk VOTE_V1 /dev/mapper/VOTE_V1
oracleasm createdisk VOTE_H1 /dev/mapper/VOTE_H1
oracleasm createdisk VOTE_V2 /dev/mapper/VOTE_V2
oracleasm scandisks
oracleasm listdisks

3. Scan disks on the other node (node 2), i.e. 1.***.43.**, from the root user:

oracleasm scandisks

4. Create the voting diskgroup with NORMAL REDUNDANCY and three failgroups (voting files on a normal-redundancy diskgroup need three failgroups):

CREATE DISKGROUP VOTE_GRP NORMAL REDUNDANCY
 FAILGROUP VOTE_FAIL_V1 DISK 'ORCL:VOTE_V1'
 FAILGROUP VOTE_FAIL_H1 DISK 'ORCL:VOTE_H1'
 FAILGROUP VOTE_FAIL_V2 DISK 'ORCL:VOTE_V2'
ATTRIBUTE 'compatible.asm' = '11.2.0.1.0';

---------------------OCR Disk----------------------------------



1. Require 2 LUNs of 500 MB each, on different storage:

/dev/mapper/CRS_V1  ---Violin Storage
/dev/mapper/CRS_H1  ----Hitachi  Storage

2. Disk creation from the root user on node 1, i.e. 1.***.43.**:

oracleasm createdisk CRS_V1 /dev/mapper/CRS_V1
oracleasm createdisk CRS_H1 /dev/mapper/CRS_H1
oracleasm scandisks
oracleasm listdisks

3. Scan disks on the other node (node 2), i.e. 1.***.43.**, from the root user:

oracleasm scandisks

4. Create the CRS diskgroup with NORMAL REDUNDANCY and two failgroups:

CREATE DISKGROUP CRS_GRP NORMAL REDUNDANCY
 FAILGROUP CRS_FAIL_V1 DISK 'ORCL:CRS_V1'
 FAILGROUP CRS_FAIL_H1 DISK 'ORCL:CRS_H1'
 ATTRIBUTE 'compatible.asm' = '11.2.0.1.0';

---------------------------------Verification----------------------------------


To check whether the diskgroups are provisioned:

SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A10
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME, PATH, HEADER_STATUS, FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER IN (SELECT GROUP_NUMBER FROM V$ASM_DISKGROUP WHERE NAME IN ('CRS_GRP','VOTE_GRP'));

Mount the diskgroups on the other node (node 2), i.e. 1.***.43.**, from the ASM instance (as sysasm):

SQL>alter diskgroup VOTE_GRP mount;
SQL>alter diskgroup CRS_GRP mount;

-----------------------CURRENT OCR/VOTING CONFIGURATION---------

------------------------OCR--------------------------------------


[root@NEWDB1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3768
Available space (kbytes) : 258352
ID : 1579853954
Device/File Name : +OCRDATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded

------------------------VOTING--------------------------------------

[grid@NEWDB1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 8f6bd01f079d4f06bf3beb78f18e9bfc (ORCL:OCRDATA1) [OCRDATA]
Located 1 voting disk(s).

Moving Voting Files from existing disk group to New DiskGroup

This should be run during a maintenance window. Database instances should be down on all nodes.

============From grid user under GRID_HOME============

$crsctl replace votedisk +VOTE_GRP    (VOTE_GRP is the new diskgroup for the voting files)

To check the current voting disk configuration:

$crsctl query css votedisk

To check whether the diskgroup is provisioned:

SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A10
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME, PATH, HEADER_STATUS, FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER IN (SELECT GROUP_NUMBER FROM V$ASM_DISKGROUP WHERE NAME IN ('VOTE_GRP'));

Moving OCR Files from existing disk group to New DiskGroup

# /u01/app/11.2.0/grid/bin/ocrconfig -add +CRS_GRP     (adds +CRS_GRP as an OCR location)
# /u01/app/11.2.0/grid/bin/ocrcheck
# /u01/app/11.2.0/grid/bin/ocrconfig -add +VOTE_GRP    (adds a second, mirror OCR location)
# /u01/app/11.2.0/grid/bin/ocrcheck

--------------------Replace Existing OCR Disk with New One------------------------------


# ocrconfig -replace +OCRDATA -replacement +CRS_GRP

DELETE EARLIER DISKGROUP INFORMATION FROM OCR

#/u01/app/11.2.0/grid/bin/ocrconfig -delete +OCRDATA
#/u01/app/11.2.0/grid/bin/ocrcheck

---------------------DROP OLD OCR DISKGROUP LATER----------------------


Log in as the grid user and connect as sys as sysasm.
sql>alter diskgroup OCRDATA dismount;    -- run on every other node: the diskgroup must be dismounted everywhere except the node issuing the drop
sql>alter diskgroup OCRDATA mount;       -- make sure it is mounted on the node issuing the drop
sql>drop diskgroup OCRDATA including contents;

Table Level Export Import in 9i Oracle Database-Most Time Effective Way



From the source database, generate the table creation script using the command below:

select dbms_metadata.get_ddl('TABLE','BB_CONNECTION_HISTORY_12AUG','LMSOGS') from dual;
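
Note: dbms_metadata.get_ddl returns a CLOB, so raise the SQL*Plus display limits first or the generated script will be truncated (standard SQL*Plus settings):

SQL>set long 1000000 longchunksize 1000000 pagesize 0 linesize 200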

Create the table structure in the destination database under the same user, using the script taken from the source:

 CREATE TABLE "LMSOGS"."BB_CONNECTION_HISTORY_12AUG"
   (    "KAR_ACC_ORG" NUMBER(3,0) NOT NULL ENABLE,
        "KAR_ACC_ACCT" VARCHAR2(19) NOT NULL ENABLE,
        "KAR_REC_TYPE" VARCHAR2(1) NOT NULL ENABLE,
        "KAR_CURRENCY" VARCHAR2(3) NOT NULL ENABLE,
        "KAR_ACC_INSERT_DT" DATE NOT NULL ENABLE,
        "KAR_REC_STATUS" VARCHAR2(1),
        "KAR_LOGO" NUMBER(3,0),
        "KAR_FIAT_OCCURRENCE_1" NUMBER(3,0),
        "KAR_FIAT_DATE_1" DATE,
        "KAR_FIAT_CYCLE_DUE_1" NUMBER(1,0),
        "KAR_FIAT_RECENCY_1" NUMBER(1,0),
        "KAR_FIAT_REVERSAL_IND_1" NUMBER(1,0),
        "KAR_FIAT_PMT_AMT_1" NUMBER(23,6),
        "KAR_FIAT_PRE_PMT_AMT_1" NUMBER(23,6),
        "KAR_FIAT_AMT_TOT_DUE_1" NUMBER(23,6),
        "KAR_FIAT_REL_PMT_AMT_1" NUMBER(23,6),
        "KAR_FIAT_ORIG_PMT_AMT_1" NUMBER(23,6),
        "KAR_FIAT_ORIG_LM_1" NUMBER(2,0),
        "KAR_FIAT_DPT_AMT_1_1" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_2_1" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_3_1" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_4_1" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_5_1" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_6_1" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_7_1" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_8_1" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_9_1" NUMBER(23,6),
        "KAR_FIAT_OCCURRENCE_2" NUMBER(3,0),
        "KAR_FIAT_DATE_2" DATE,
        "KAR_FIAT_CYCLE_DUE_2" NUMBER(1,0),
        "KAR_FIAT_RECENCY_2" NUMBER(1,0),
        "KAR_FIAT_REVERSAL_IND_2" NUMBER(1,0),
        "KAR_FIAT_PMT_AMT_2" NUMBER(23,6),
        "KAR_FIAT_PRE_PMT_AMT_2" NUMBER(23,6),
        "KAR_FIAT_AMT_TOT_DUE_2" NUMBER(23,6),
        "KAR_FIAT_REL_PMT_AMT_2" NUMBER(23,6),
        "KAR_FIAT_ORIG_PMT_AMT_2" NUMBER(23,6),
        "KAR_FIAT_ORIG_LM_2" NUMBER(2,0),
        "KAR_FIAT_DPT_AMT_1_2" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_2_2" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_3_2" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_4_2" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_5_2" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_6_2" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_7_2" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_8_2" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_9_2" NUMBER(23,6),
        "KAR_FIAT_OCCURRENCE_3" NUMBER(3,0),
        "KAR_FIAT_DATE_3" DATE,
        "KAR_FIAT_CYCLE_DUE_3" NUMBER(1,0),
        "KAR_FIAT_RECENCY_3" NUMBER(1,0),
        "KAR_FIAT_REVERSAL_IND_3" NUMBER(1,0),
        "KAR_FIAT_PMT_AMT_3" NUMBER(23,6),
        "KAR_FIAT_PRE_PMT_AMT_3" NUMBER(23,6),
        "KAR_FIAT_AMT_TOT_DUE_3" NUMBER(23,6),
        "KAR_FIAT_REL_PMT_AMT_3" NUMBER(23,6),
        "KAR_FIAT_ORIG_PMT_AMT_3" NUMBER(23,6),
        "KAR_FIAT_ORIG_LM_3" NUMBER(2,0),
        "KAR_FIAT_DPT_AMT_1_3" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_2_3" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_3_3" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_4_3" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_5_3" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_6_3" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_7_3" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_8_3" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_9_3" NUMBER(23,6),
        "KAR_FIAT_OCCURRENCE_4" NUMBER(3,0),
        "KAR_FIAT_DATE_4" DATE,
        "KAR_FIAT_CYCLE_DUE_4" NUMBER(1,0),
        "KAR_FIAT_RECENCY_4" NUMBER(1,0),
        "KAR_FIAT_REVERSAL_IND_4" NUMBER(1,0),
        "KAR_FIAT_PMT_AMT_4" NUMBER(23,6),
        "KAR_FIAT_PRE_PMT_AMT_4" NUMBER(23,6),
        "KAR_FIAT_AMT_TOT_DUE_4" NUMBER(23,6),
        "KAR_FIAT_REL_PMT_AMT_4" NUMBER(23,6),
        "KAR_FIAT_ORIG_PMT_AMT_4" NUMBER(23,6),
        "KAR_FIAT_ORIG_LM_4" NUMBER(2,0),
        "KAR_FIAT_DPT_AMT_1_4" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_2_4" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_3_4" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_4_4" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_5_4" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_6_4" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_7_4" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_8_4" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_9_4" NUMBER(23,6),
        "KAR_FIAT_OCCURRENCE_5" NUMBER(3,0),
        "KAR_FIAT_DATE_5" DATE,
        "KAR_FIAT_CYCLE_DUE_5" NUMBER(1,0),
        "KAR_FIAT_RECENCY_5" NUMBER(1,0),
        "KAR_FIAT_REVERSAL_IND_5" NUMBER(1,0),
        "KAR_FIAT_PMT_AMT_5" NUMBER(23,6),
        "KAR_FIAT_PRE_PMT_AMT_5" NUMBER(23,6),
        "KAR_FIAT_AMT_TOT_DUE_5" NUMBER(23,6),
        "KAR_FIAT_REL_PMT_AMT_5" NUMBER(23,6),
        "KAR_FIAT_ORIG_PMT_AMT_5" NUMBER(23,6),
        "KAR_FIAT_ORIG_LM_5" NUMBER(2,0),
        "KAR_FIAT_DPT_AMT_1_5" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_2_5" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_3_5" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_4_5" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_5_5" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_6_5" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_7_5" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_8_5" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_9_5" NUMBER(23,6),
        "KAR_FIAT_OCCURRENCE_6" NUMBER(3,0),
        "KAR_FIAT_DATE_6" DATE,
        "KAR_FIAT_CYCLE_DUE_6" NUMBER(1,0),
        "KAR_FIAT_RECENCY_6" NUMBER(1,0),
        "KAR_FIAT_REVERSAL_IND_6" NUMBER(1,0),
        "KAR_FIAT_PMT_AMT_6" NUMBER(23,6),
        "KAR_FIAT_PRE_PMT_AMT_6" NUMBER(23,6),
        "KAR_FIAT_AMT_TOT_DUE_6" NUMBER(23,6),
        "KAR_FIAT_REL_PMT_AMT_6" NUMBER(23,6),
        "KAR_FIAT_ORIG_PMT_AMT_6" NUMBER(23,6),
        "KAR_FIAT_ORIG_LM_6" NUMBER(2,0),
        "KAR_FIAT_DPT_AMT_1_6" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_2_6" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_3_6" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_4_6" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_5_6" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_6_6" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_7_6" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_8_6" NUMBER(23,6),
        "KAR_FIAT_DPT_AMT_9_6" NUMBER(23,6),
        "KAR_UPDATE_DT" DATE,
        "KAR_CREATE_DT" DATE
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS NOLOGGING
  STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "TS_KAR_HIST_DATA";


Export the table data only from source database:

exp USERNAME TABLES=LMSOGS.BB_CONNECTION_HISTORY_12AUG GRANTS=N INDEXES=N ROWS=Y TRIGGERS=N CONSTRAINTS=N IGNORE=Y FILE=BB_CONNECTION_HISTORY_12AUG.dmp LOG=exp_BB_CONNECTION_HISTORY_12AUG.log
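
If the table is large, the export can often be sped up with direct path; DIRECT=Y and RECORDLENGTH are standard exp parameters, but test them on your 9i release first:

exp USERNAME TABLES=LMSOGS.BB_CONNECTION_HISTORY_12AUG DIRECT=Y RECORDLENGTH=65535 GRANTS=N INDEXES=N ROWS=Y TRIGGERS=N CONSTRAINTS=N FILE=BB_CONNECTION_HISTORY_12AUG.dmp LOG=exp_direct_BB_CONNECTION_HISTORY_12AUG.log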

SCP the dump file to the destination database server.

Import the table into the destination database schema:

imp LMSOGS/********* FROMUSER=LMSOGS TOUSER=LMSOGS TABLES=BB_CONNECTION_HISTORY_12AUG GRANTS=N INDEXES=N rows=y  CONSTRAINTS=N IGNORE=Y FILE=BB_CONNECTION_HISTORY_12AUG.dmp LOG=imp_BB_CONNECTION_HISTORY_12AUG.log


Create the indexes manually (or do a structure-only export/import), and create the primary key, referential constraints, etc. as per the source.