Sunday 28 June 2015

Standalone 11GR2 Preinstallation Requirements for OEL 5


 1.  Disk Space Requirements.
 2.  Preinstallation Requirements


1. Disk Space Requirements.


·         10 GB of space in the /tmp directory

·         Two 100 GB mount points for the GI and RDBMS binaries, i.e. /u01 and /u02. The rest of the space depends on the database size and will be shared later.
2. Preinstallation Requirements.



************O.S. Requirement********************

    For Linux x86-64

·         On Oracle Linux 5 Update 2

2.6.18 or later (compatible with Red Hat Enterprise kernel).

·         On Oracle Linux 5 Update 5 with the Unbreakable Enterprise Kernel for Linux

      2.6.32-100.0.19 or later.

Currently we are using 2.6.18-194.el5 on the UAT/production servers.
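
To confirm the running kernel on a server, a quick check:

uname -r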


****** Kernel Requirements*******************


kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
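
These values go into /etc/sysctl.conf. A minimal sketch of applying them without a reboot (run as root; validate shmmax/shmall against the server's physical memory before applying):

# append the parameters above to /etc/sysctl.conf, then reload:
sysctl -p

# spot-check a single parameter:
sysctl -n kernel.shmmax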

**********Security Limits*********************

oracle               soft    nproc   2047
oracle               hard    nproc   16384
oracle               soft    nofile  1024
oracle               hard    nofile  65536

session required /lib/security/pam_limits.so

session required pam_limits.so
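
For reference, a sketch of where these entries live (assuming the standard OEL 5 layout): the four oracle lines go into /etc/security/limits.conf, and the session line goes into /etc/pam.d/login (use the pam_limits.so form appropriate to your PAM version). Verify as the oracle user:

su - oracle
ulimit -u    # soft nproc, should report 2047
ulimit -n    # soft nofile, should report 1024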

****************DISABLE SELINUX***********************

----------------DISABLE FIREWALL-------------------------------

---------Enable NTP for Standalone Instances---------------------
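
A minimal sketch of these three steps on OEL 5 (run as root; site security policy may require opening specific ports instead of disabling the firewall outright):

# SELinux: permissive immediately; set SELINUX=disabled in /etc/selinux/config for a persistent disable at reboot
setenforce 0

# firewall off
service iptables stop
chkconfig iptables off

# NTP on for a standalone host (Oracle recommends the -x slew option in /etc/sysconfig/ntpd)
service ntpd start
chkconfig ntpd on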

*****************Packages required**********************
binutils-2.*
compat-libstdc++-33*
elfutils-libelf-0.*
elfutils-libelf-devel-*
gcc-4.*
gcc-c++-4.*
glibc-2.*
glibc-common-2.*
glibc-devel-2.*
glibc-headers-2.*
ksh-2*
libaio-0.*
libaio-devel-0.*
libgcc-4.*
libstdc++-4.*
libstdc++-devel-4.*
make-3.*
sysstat-7.*
unixODBC-2.*
unixODBC-devel-2.*
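
A quick way to see which of these are already installed (any missing package prints as "not installed"):

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel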

**************************ASMLIB Package for Linux*******************************

Download the ASMLib RPMs matching your kernel version:

·         oracleasm-support
·         oracleasm-<kernel version>
·         oracleasmlib
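
A sketch of installing and configuring them (assuming root and the three RPMs in the current directory; the configure prompts ask for the driver owner/group, typically oracle/dba):

rpm -Uvh oracleasm-support-* oracleasm-`uname -r`-* oracleasmlib-*
/etc/init.d/oracleasm configure
/etc/init.d/oracleasm status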





************Java Version**********************

JDK 6 (Java SE Development Kit 1.6.0_21)
or
JDK 5 (1.5.0_24)

RAC 11GR2 Preinstallation Requirements on Oracle Linux 5 Update 2




 1.  Disk Space Requirements.
 2.  Preinstallation Requirements


1. Disk Space Requirements.


·         10 GB of space in the /tmp directory

·         Two 100 GB mount points for the GI and RDBMS binaries, i.e. /u01 and /u02. The rest of the space depends on the database size and will be shared later.



2. Preinstallation Requirements.



************O.S. Requirement********************


For Linux x86-64

·         On Oracle Linux 5 Update 2

2.6.18 or later (compatible with Red Hat Enterprise kernel).

·         On Oracle Linux 5 Update 5 with the Unbreakable Enterprise Kernel for Linux

      2.6.32-100.0.19 or later.



Currently we are using 2.6.18-194.el5 on the UAT/production servers.


****** Kernel Requirements*******************

kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576


**********Security Limits**********************

oracle               soft    nproc   2047
oracle               hard    nproc   16384
oracle               soft    nofile  1024
oracle               hard    nofile  65536

session required /lib/security/pam_limits.so

session required pam_limits.so



****************DISABLE SELINUX***********************

----------------DISABLE FIREWALL-------------------------------

---------Disable NTP for RAC Instances---------------------
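
For RAC, the Clusterware CTSS daemon takes over time synchronization once NTP is deconfigured. A minimal sketch of disabling NTP on each node (OEL 5 commands; the config file must be moved aside, not just the daemon stopped - see the cluvfy section later in this document):

service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf_bk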


*****************Packages required**********************
binutils-2.*
compat-libstdc++-33*
elfutils-libelf-0.*
elfutils-libelf-devel-*
gcc-4.*
gcc-c++-4.*
glibc-2.*
glibc-common-2.*
glibc-devel-2.*
glibc-headers-2.*
ksh-2*
libaio-0.*
libaio-devel-0.*
libgcc-4.*
libstdc++-4.*
libstdc++-devel-4.*
make-3.*
sysstat-7.*
unixODBC-2.*
unixODBC-devel-2.*



**************************ASMLIB Package for Linux*******************************


Download the ASMLib RPMs matching your kernel version:

·         oracleasm-support
·         oracleasm-<kernel version>
·         oracleasmlib




************Java Version**********************

JDK 6 (Java SE Development Kit 1.6.0_21)
or
JDK 5 (1.5.0_24)



Two (2) Node RAC Database Creation in Oracle - The Simplest Way




I had to create a 2-node RAC Oracle database and add it to an existing RAC setup.
I used the following approach:



Please note: take care of the character set the application team has specified to you.

Command to check the character set:

SQL> select * from nls_database_parameters where parameter like '%CHARACTERSET';

PARAMETER                      VALUE
------------------------------ ----------------------------------------
NLS_CHARACTERSET               US7ASCII
NLS_NCHAR_CHARACTERSET         AL16UTF16


Step 1:


Log in to the Linux/Unix box as the oracle user:

Set the environment variables (for the first node):

export ORACLE_BASE=/u02/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
export ORACLE_SID=rine4db1


Step 2:


Verify the existing RAC setup:

[oracle@rinerac1 dbs]$ crs_stat -t

Name           Type           Target    State     Host
------------------------------------------------------------
ora....L1.inst application    ONLINE    ONLINE    rinerac1
ora....L2.inst application    ONLINE    ONLINE    rinerac2
ora.ORCL.db    application    ONLINE    ONLINE    rinerac1

[oracle@rinerac1 dbs]$ olsnodes -i -p
rinerac1   rinerac1-priv      rinerac1-vip
rinerac2   rinerac2-priv      rinerac2-vip

[oracle@rinerac1 dbs]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1

[oracle@rinerac1 dbs]$ asmcmd
ASMCMD> ls -ltr
State    Type    Rebal  Name
MOUNTED  EXTERN  N      FRA/
MOUNTED  EXTERN  N      DATA/
ASMCMD>

Note: The domain name is dbrac.com (example: rinerac1.dbrac.com).


Step 3 :



Prepare the initialization parameter file (initrine4db.ora) and store it in the $ORACLE_HOME/dbs directory.

Note: Comment out all RAC-related parameters before running the create database script, as sketched below.
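
For example, based on the pfile below, the entries to comment out for the initial single-instance startup would look like this (a sketch; they are re-enabled in Step 10):

#*.cluster_database=TRUE
#*.cluster_database_instances=2
#rine4db1.thread=1
#rine4db2.thread=2
#rine4db1.instance_number=1
#rine4db2.instance_number=2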

Pfile for node-1
-------------------------------------------------------------------------------------------------------
rine4db1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
*._cursor_features_enabled=1026
*._gc_read_mostly_locking=FALSE
*.archive_lag_target=1800
*.audit_file_dest='/oraaudit/rine4db1'
*.audit_sys_operations=TRUE
*.audit_trail='OS'
*.cluster_database_instances=2
*.cluster_database=TRUE
*.compatible='11.2.0.3.0'
*.control_files='+MDLWARE_DATA/rine4db/controlfile/control01.ctl','+MDLWARE_DATA/rine4db/controlfile/control02.ctl'
*.cursor_sharing='FORCE'
*.db_block_size=8192
*.db_cache_advice='OFF'
*.db_cache_size=5120M
*.db_domain='r5.money.pe.com'
*.db_file_multiblock_read_count=16
*.db_files=1000
*.db_keep_cache_size=500M
*.db_name='rine4db'
*.db_create_file_dest='+MDLWARE_DATA'
*.db_recovery_file_dest='+MDLWARE_FRA'
*.db_recovery_file_dest_size=150G
*.db_securefile='PERMITTED'
*.diagnostic_dest='/u01/app/oracle/admin'
*.fast_start_mttr_target=300
rine4db1.instance_name='rine4db1'
rine4db2.instance_name='rine4db2'
rine4db1.instance_number=1
rine4db2.instance_number=2
*.java_pool_size=512M
*.job_queue_processes=1000
*.large_pool_size=512M
rine4db2.local_listener='(address=(protocol=tcp)(host=G2428INPLMWDB2P-vip.r5.money.pe.com)(port=1875))'
rine4db1.local_listener='(address=(protocol=tcp)(host=G2428INPLMWDB1P-vip.r5.money.pe.com)(port=1875))'
*.log_buffer=10485760
*.open_cursors=10000
*.os_authent_prefix=''
*.parallel_max_servers=10
*.pga_aggregate_target=1024M
*.processes=1500
*.remote_listener='mwaredb-scan.r5.money.pe.com:1875'
*.remote_login_passwordfile='EXCLUSIVE'
*.service_names='rine4db.r5.money.pe.com'
*.session_cached_cursors=400
*.sessions=1000
*.shared_pool_size=2048M
*.star_transformation_enabled='TRUE'
*.streams_pool_size=200M
rine4db1.thread=1
rine4db2.thread=2
*.undo_retention=10800
rine4db1.undo_tablespace='UNDOTBS1'
rine4db2.undo_tablespace='UNDOTBS2'
------------------------------------------------------------


Pfile for node-2
--------------------
rine4db2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
*._cursor_features_enabled=1026
*._gc_read_mostly_locking=FALSE
*.archive_lag_target=1800
*.audit_file_dest='/oraaudit/rine4db2'
*.audit_sys_operations=TRUE
*.audit_trail='OS'
*.compatible='11.2.0.3.0'
*.control_files='+MDLWARE_DATA/rine4db/controlfile/control01.ctl','+MDLWARE_DATA/rine4db/controlfile/control02.ctl'
*.cursor_sharing='FORCE'
*.db_block_size=8192
*.db_cache_advice='OFF'
*.db_cache_size=5120M
*.db_domain='r5.money.pe.com'
*.db_file_multiblock_read_count=16
*.db_files=1000
*.db_keep_cache_size=500M
*.db_name='rine4db'
*.db_recovery_file_dest='+MDLWARE_FRA'
*.db_recovery_file_dest_size=150G
*.db_securefile='PERMITTED'
*.diagnostic_dest='/u01/app/oracle/admin'
*.fast_start_mttr_target=300
*.java_pool_size=512M
*.job_queue_processes=1000
*.large_pool_size=512M
*.log_buffer=10485760
*.open_cursors=10000
*.os_authent_prefix=''
*.parallel_max_servers=10
*.pga_aggregate_target=1024M
*.processes=1500
*.remote_login_passwordfile='EXCLUSIVE'
*.session_cached_cursors=400
*.sessions=1000
*.shared_pool_size=2048M
*.star_transformation_enabled='TRUE'
*.streams_pool_size=200M
*.undo_retention=10800
####################################################################
############# Cluster Parameters####################################
####################################################################
*.cluster_database_instances=2
*.cluster_database=TRUE
rine4db1.undo_tablespace='UNDOTBS1'
rine4db2.undo_tablespace='UNDOTBS2'
rine4db1.thread=1
rine4db2.thread=2
rine4db1.instance_name='rine4db1'
rine4db2.instance_name='rine4db2'
rine4db1.instance_number=1
rine4db2.instance_number=2
rine4db2.local_listener='(address=(protocol=tcp)(host=G2428INPLMWDB2P-vip.r5.money.pe.com)(port=1875))'
rine4db1.local_listener='(address=(protocol=tcp)(host=G2428INPLMWDB1P-vip.r5.money.pe.com)(port=1875))'
*.remote_listener='mwaredb-scan.r5.money.pe.com:1875'
*.service_names='rine4db.r5.money.pe.com'


Step 4:  



Create the password file:

orapwd file=$ORACLE_HOME/dbs/orapwrine4db1 password=pem0ney# entries=10 ignorecase=y
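
The same should be done on the second node with its instance-specific file name (a sketch following the naming convention above; the password must be identical on all nodes):

orapwd file=$ORACLE_HOME/dbs/orapwrine4db2 password=pem0ney# entries=10 ignorecase=y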


Step 5 :



Configure both the listener.ora and tnsnames.ora files for all nodes in the cluster.

-- Node #1 Listener.ora.

LISTENER_rinerac1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rinerac1-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 151.***.1.50)(PORT = 1521)(IP = FIRST))
    )
  )

Node #2 Listener.ora.

LISTENER_rinerac2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rinerac2-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 151.***.1.51)(PORT = 1521)(IP = FIRST))
    )
  )

We did not have any local listener and were using SCAN listeners only, so we did not configure any listener.

tnsnames.ora file:

rine4db =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = mwaredb-scan.r5.money.pe.com)(PORT = 1875))
    )
    (CONNECT_DATA =
      (SERVICE_NAME  = rine4db.r5.money.pe.com)
      (SERVER = DEDICATED)
    )
  )

rine4db1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 1.***.50.41)(PORT = 1875))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rine4DB.r5.money.pe.com)
    )
  )
LISTENERS_rine4db =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rinerac1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rinerac2-vip)(PORT = 1521))
  )

rine4db =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rinerac1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rinerac2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rine4db.DBPROD.COM)
    )
  )

rine4db2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rinerac2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rine4db.DBPROD.COM)
      (INSTANCE_NAME = rine4db2)
    )
  )

rine4db1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rinerac1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rine4db.DBPROD.COM)
      (INSTANCE_NAME = rine4db1)
    )
  )



Step 6 : 


Create directories for the database files and diagnostic files.

[oracle@rinerac1 ~]$ mkdir -p $ORACLE_BASE/admin/$ORACLE_SID/{adump,dpdump,hdump}
[oracle@rinerac1 ~]$ cd $ORACLE_BASE/admin/$ORACLE_SID/
[oracle@rinerac1 rine4db1]$ ls -ltr
total 12
drwxr-xr-x 2 oracle oinstall 4096 Oct 16 18:42 hdump
drwxr-xr-x 2 oracle oinstall 4096 Oct 16 18:42 dpdump
drwxr-xr-x 2 oracle oinstall 4096 Oct 16 18:42 adump

[oracle@rinerac1 dbs]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base for ORACLE_HOME=/u02/app/oracle/product/11.1.0/asm_1 is /u02/app/oracle
[oracle@rinerac1 dbs]$ asmcmd mkdir +DATA/rine4db
[oracle@rinerac1 dbs]$ asmcmd ls +DATA
rine4db/ -- Directory has been created.
ORCL/
N.B.:
Log in as the grid user:
su - grid
export ORACLE_SID=+ASM1
asmcmd
Create all paths mentioned in the pfile.


Step 7: 



Execute the CREATE DATABASE command.

SQL> startup nomount pfile=pfilename
create database  rine4db CONTROLFILE REUSE
  logfile   group 1 ('+MDLWARE_DATA/rine4db/logfile/redo1a.log', '+MDLWARE_DATA/rine4db/logfile/redo1b.log') size 512m reuse,
            group 2 ('+MDLWARE_DATA/rine4db/logfile/redo2a.log', '+MDLWARE_DATA/rine4db/logfile/redo2b.log') size 512m reuse,
            group 3 ('+MDLWARE_DATA/rine4db/logfile/redo3a.log', '+MDLWARE_DATA/rine4db/logfile/redo3b.log') size 512m reuse,
            group 4 ('+MDLWARE_DATA/rine4db/logfile/redo4a.log', '+MDLWARE_DATA/rine4db/logfile/redo4b.log') size 512m reuse,
            group 5 ('+MDLWARE_DATA/rine4db/logfile/redo5a.log', '+MDLWARE_DATA/rine4db/logfile/redo5b.log') size 512m reuse
character set AL32UTF8
national character set AL16UTF16
  datafile '+MDLWARE_DATA/rine4db/datafile/system01.dbf'
            size 1024M
            autoextend on
            next 100M maxsize unlimited
            extent management local
  sysaux datafile '+MDLWARE_DATA/rine4db/datafile/sysaux01.dbf'
            size 1024M
            autoextend on
            next 500M
            maxsize 10240M
  undo tablespace undotbs1
            datafile '+MDLWARE_DATA/rine4db/datafile/undotbs1_1.dbf'
            size 1024M
  default temporary tablespace temp
            tempfile '+MDLWARE_DATA/rine4db/tempfile/temp1.dbf'
            size 1024M;
----------------------------------------------------------------------------------------------------------------



Step 8: 


Once the database is created, execute all the usual good stuff - the catalog and catproc scripts (under $ORACLE_HOME/rdbms/admin) and pupbld (under $ORACLE_HOME/sqlplus/admin).

Run the following scripts as the SYS user.


SQL> CONNECT / AS SYSDBA
Connected.
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catblock.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> SHOW PARAMETER CASE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon             boolean     TRUE

SQL> ALTER SYSTEM SET sec_case_sensitive_logon=FALSE;

System altered.

Run the following scripts as the SYSTEM user.

SQL> connect system/oracle
Connected.
SQL> @?/sqlplus/admin/pupbld.sql
SQL> @?/sqlplus/admin/help/hlpbld.sql helpus.sql



Step 9: 



Create an UNDO tablespace for the remaining nodes.


create undo tablespace UNDOTBS2 datafile '+MDLWARE_DATA' size 100M autoextend on next 50M;


Step 10: 



Add redo log files for the remaining nodes and run the catclust.sql script as SYS.

Add a new REDO thread and enable it
(otherwise starting the 2nd instance will fail with: ORA-01620: no public threads are available for mounting).

alter database add logfile thread 2 group 6 '+MDLWARE_DATA' size 512M;
alter database add logfile thread 2 group 7 '+MDLWARE_DATA' size 512M;
alter database enable public thread 2;
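
To confirm the new thread's groups before starting the 2nd instance, a quick check:

SQL> select thread#, group#, bytes/1024/1024 mb, status from v$log order by thread#, group#;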
Uncomment all RAC-related parameters, then run:

SQL> @?/rdbms/admin/catclust.sql


Step 11:


Log in to the 2nd node as the oracle user:

export ORACLE_BASE=/u02/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
export ORACLE_SID=rine4db2
mkdir -p $ORACLE_BASE/admin/$ORACLE_SID/{adump,dpdump,hdump}
sqlplus / as sysdba
startup

Now create an spfile on both nodes:

sqlplus / as sysdba
create spfile from pfile;

Shut down both instances with SHUTDOWN IMMEDIATE and start them up again.

We have created two separate pfiles, so there is no need to keep the pfile/spfile in a shared location.
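
To confirm each instance restarted on its spfile, a quick check on each node:

SQL> show parameter spfile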

I did not run the following steps and my RAC DB is running fine:

Note: Make sure that all the instances are down before continuing with the steps below. If not, bring all the instances down.


[oracle@rinerac1 ~]$ export ORACLE_HOME=/u02/app/oracle/product/11.1.0/db_1
[oracle@rinerac1 ~]$ srvctl add database -d RINE4DB -o $ORACLE_HOME -m DBRAC.COM
[oracle@rinerac1 ~]$ srvctl add instance -d RINE4DB -i rine4db1 -n rinerac1
[oracle@rinerac1 ~]$ srvctl add instance -d RINE4DB -i rine4db2 -n rinerac2

[oracle@rinerac1 ~]$ crs_stat -t -v

Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....B1.inst application    0/5    0/0    OFFLINE   OFFLINE
ora....B2.inst application    0/5    0/0    OFFLINE   OFFLINE
ora.rine4db.db   application    0/0    0/1    OFFLINE   OFFLINE
ora....L1.inst application    0/5    0/0    ONLINE    ONLINE    rinerac1
ora....L2.inst application    0/5    0/0    ONLINE    ONLINE    rinerac2
[oracle@rinerac1 ~]$ srvctl start database -d rine4db
[oracle@rinerac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....B1.inst application    0/5    0/0    ONLINE    ONLINE    rinerac1
ora....B2.inst application    0/5    0/0    ONLINE    ONLINE    rinerac2
ora.rine4db.db   application    0/0    0/1    ONLINE    ONLINE    rinerac2
SQL> col host_name for a25
SQL> col instance_name for a20
SQL> select instance_name,host_name from gv$instance;

INSTANCE_NAME        HOST_NAME
-------------------- -------------------------
rine4db2               rinerac2.dbrac.com
rine4db1               rinerac1.dbrac.com


[oracle@rinerac1 ~]$ srvctl status database -d rine4db
Instance rine4db1 is running on node rinerac1
Instance rine4db2 is running on node rinerac2
[oracle@rinerac1 ~]$ srvctl status asm -n rinerac1
ASM instance +ASM1 is running on node rinerac1.
[oracle@rinerac1 ~]$ srvctl status asm -n rinerac2
ASM instance +ASM2 is running on node rinerac2.
[oracle@rinerac1 ~]$ srvctl status instance -d rine4db -i rine4db1
Instance rine4db1 is running on node rinerac1
[oracle@rinerac1 ~]$ srvctl status instance -d rine4db -i rine4db2
Instance rine4db2 is running on node rinerac2
[oracle@rinerac1 ~]$ srvctl config database -d rine4db
rinerac1 rine4db1 /u02/app/oracle/product/11.1.0/db_1
rinerac2 rine4db2 /u02/app/oracle/product/11.1.0/db_1

DB Creation ends here
================================================================================================================================================

Thursday 25 June 2015

CREATE INDEX In Oracle - In Most Time Efficient Way




Create an index in the shortest time: in a production support environment the main motto is to save time; there is always a fire on the floor.
In production databases, indexes are huge, and index creation normally takes a lot of time. If we create the index the following way, it will take about half the usual time.



02:02:30 SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
newnpp1

Elapsed: 00:00:00.00




Step-1:

Create the index as UNUSABLE:

It will be created in a fraction of a second.

02:06:26 SQL> CREATE INDEX NPP.NU_E_V_M_ID ON NPP.UW_PHON_VERSIONS(ENTITY_INDICATOR, PHON_VERSION_MASTER_ID) tablespace POS_INDX02 unusable;

Index created.

Elapsed: 00:00:00.03



Step-2:

Rebuild the index online in parallel:

Parallelizing the rebuild speeds it up considerably.

03:07:26 SQL> alter index NPP.NU_E_V_M_ID rebuild online parallel 8;

Index altered.

Elapsed: 01:33:59.90




Step-3:

Set the index back to NOPARALLEL.

05:01:13 SQL> alter index NPP.NU_E_V_M_ID noparallel;

Index altered.

Elapsed: 00:00:00.01
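
Optionally, verify that the index is now usable and no longer parallel, a quick check:

SQL> select status, degree from dba_indexes where owner = 'NPP' and index_name = 'NU_E_V_M_ID';

STATUS should be VALID and DEGREE should be 1.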




Step-4:

Gather statistics on the table that the index is created on.

05:02:55 SQL> exec dbms_stats.gather_table_stats('NPP', 'UW_PHON_VERSIONS',degree=>6,cascade=>true);

PL/SQL procedure successfully completed.

Elapsed: 00:09:37.08

Wednesday 24 June 2015

RAC Health Check OR Cluster Health Check - Oracle 11GR2 RAC







This document covers RAC health check (cluster health check) in Oracle 11GR2 RAC, along with common errors faced and the resolutions applied to them:



Switch to the oracle user:

[H21401113@a1212testingracdb1d ~]$ su - oracle
Password:

First, check which database instances are available on the database server:

[oracle@a1212testingracdb1d ~]$ ps -ef|grep pmon
grid     32509     1  0 Jun07 ?        00:09:20 asm_pmon_+ASM1
oracle   37038 36979  0 12:09 pts/4    00:00:00 grep pmon
oracle   42969     1  0 Jun07 ?        00:11:53 ora_pmon_RACTESTDB1
oracle   43326     1  0 Jun07 ?        00:53:52 ora_pmon_REMARKSDBPRD1
oracle   43743     1  0 Jun07 ?        00:12:01 ora_pmon_SALPRDDB1


Switch to the grid user:

[oracle@a1212testingracdb1d ~]$ su - grid
Password:



Go to the $GRID_HOME/bin directory:


[grid@a1212testingracdb1d ~]$ cd $GRID_HOME/bin




Check all cluster resources from a top-level view (not detailed):


Use Command: ./crsctl check cluster -all


[grid@a1212testingracdb1d bin]$ ./crsctl check cluster -all
**************************************************************
a1212testingracdb1d:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
a1212testingracdb2d:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************





Check the status of all cluster resources:

Use Command: ./crsctl stat res -t


[grid@a1212testingracdb1d bin]$ ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CARDONE.dg
               ONLINE  ONLINE       a1212testingracdb1d
               ONLINE  ONLINE       a1212testingracdb2d
ora.FRA_CARDONE.dg
               ONLINE  ONLINE       a1212testingracdb1d
               ONLINE  ONLINE       a1212testingracdb2d
ora.LISTENER.lsnr
               ONLINE  ONLINE       a1212testingracdb1d
               ONLINE  ONLINE       a1212testingracdb2d
ora.OCRDATA.dg
               ONLINE  ONLINE       a1212testingracdb1d
               ONLINE  ONLINE       a1212testingracdb2d
ora.asm
               ONLINE  ONLINE       a1212testingracdb1d          Started
               ONLINE  ONLINE       a1212testingracdb2d          Started
ora.gsd
               OFFLINE OFFLINE      a1212testingracdb1d
               OFFLINE OFFLINE      a1212testingracdb2d
ora.net1.network
               ONLINE  ONLINE       a1212testingracdb1d
               ONLINE  ONLINE       a1212testingracdb2d
ora.ons
               ONLINE  ONLINE       a1212testingracdb1d
               ONLINE  ONLINE       a1212testingracdb2d
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       a1212testingracdb1d
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       a1212testingracdb2d
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       a1212testingracdb2d
ora.c1proddb.db
      1        ONLINE  ONLINE       a1212testingracdb1d          Open
      2        ONLINE  ONLINE       a1212testingracdb2d          Open
ora.cvu
      1        ONLINE  ONLINE       a1212testingracdb2d
ora.a1212testingracdb1d.vip
      1        ONLINE  ONLINE       a1212testingracdb1d
ora.a1212testingracdb2d.vip
      1        ONLINE  ONLINE       a1212testingracdb2d
ora.oc4j
      1        ONLINE  ONLINE       a1212testingracdb1d
ora.scan1.vip
      1        ONLINE  ONLINE       a1212testingracdb1d
ora.scan2.vip
      1        ONLINE  ONLINE       a1212testingracdb2d
ora.scan3.vip
      1        ONLINE  ONLINE       a1212testingracdb2d





Please Note:


If ora.gsd is offline:

ora.gsd OFFLINE OFFLINE
ora.labsx86-1.gsd OFFLINE OFFLINE
ora.labsx86-2.gsd OFFLINE OFFLINE

You only need to enable this if you are running Oracle 9i RAC in the cluster. Otherwise there is nothing to worry about.



If ora.cvu is offline:

If you don't want to use the cluster verification utility then it's fine. Otherwise, enable it if you want to use the cluvfy utility.


Check if all cluster nodes are online and visible to all nodes:


Run the following command from each cluster node:

Use Command: ./olsnodes

[grid@a1212testingracdb1d bin]$ ./olsnodes
a1212testingracdb1d
a1212testingracdb2d


CLUVFY:

If you suspect that you are facing a cluster-configuration-related issue, you can check your RAC configuration settings using the following command:

Use Command: cluvfy stage -post crsinst -n a1212testingracdb1d,a1212testingracdb2d



[grid@a1212testingracdb1d bin]$ cluvfy stage -post crsinst -n a1212testingracdb1d,a1212testingracdb2d
Performing post-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "a1212testingracdb1d"

Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "bond0"
Node connectivity passed for interface "bond0"
TCP connectivity check passed for subnet "3.143.50.0"

Check: Node connectivity for interface "bond1"
Node connectivity passed for interface "bond1"
TCP connectivity check passed for subnet "194.168.7.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "3.143.50.0".
Subnet mask consistency check passed for subnet "194.168.7.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "1.***.50.0" for multicast communication with multicast group "200.0.1.0"...
Check of subnet "1.***.50.0" for multicast communication with multicast group "200.0.1.0" passed.
Checking subnet "111.***.7.0" for multicast communication with multicast group "200.0.1.0"...
Check of subnet "111.***.7.0" for multicast communication with multicast group "200.0.1.0" passed.
Check of multicast communication passed.
Time zone consistency check passed
Checking Oracle Cluster Voting Disk configuration...
ASM Running check passed. ASM is running on all specified nodes
Oracle Cluster Voting Disk configuration check passed
Checking Cluster manager integrity...

Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed

UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations

UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations
Default user file creation mask check passed
Checking cluster integrity...

Cluster integrity check passed

Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations

ASM Running check passed. ASM is running on all specified nodes
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful

Disk group for ocr location "+OCRDATA" available on all the nodes

NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of GSD node application (optional)
GSD node application is offline on nodes "a1212testingracdb1d,a1212testingracdb2d"
Checking existence of ONS node application (optional)
ONS node application check passed

Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "cardonedb-scan.r5.money.ge.com"...
Verification of SCAN VIP and Listener setup passed
Checking OLR integrity...
Checking OLR config file...
OLR config file check successful

Checking OLR file attributes...
OLR file check successful

WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed
User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed

Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check failed for "ntpd"
Check failed on nodes:
        a1212testingracdb1d,a1212testingracdb2d
PRVF-5494 : The NTP Daemon or Service was not alive on all nodes
PRVF-5415 : Check to see if NTP daemon or service is running failed
Clock synchronization check using Network Time Protocol(NTP) failed

PRVF-9652 : Cluster Time Synchronization Services check failed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Post-check for cluster services setup was unsuccessful.
Checks did not pass for the following node(s):
        a1212testingracdb1d,a1212testingracdb2d



----------------------------------------------------------------------------------------------------------------------

I am getting "INFO: PRVF-9652 : Cluster Time Synchronization Services check failed"

When testing, the Cluster Time Synchronization Services check failed as below.


Another way to verify this:

Use command: ./cluvfy comp clocksync



[grid@node-01 bin]$ ./cluvfy comp clocksync

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
CTSS resource check passed


Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP


Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
PRVF-5402 : Warning: Could not find NTP configuration file "/etc/ntp.conf" on node "node-01"
PRVF-5405 : The NTP configuration file "/etc/ntp.conf" does not exist on all nodes
PRVF-5414 : Check of NTP Config file failed on all nodes. Cannot proceed further for the NTP tests

Checking daemon liveness...
Liveness check failed for "ntpd"
Check failed on nodes:
        node-01
PRVF-5494 : The NTP Daemon or Service was not alive on all nodes
PRVF-5415 : Check to see if NTP daemon or service is running failed
Clock synchronization check using Network Time Protocol(NTP) failed


--------------------------------------------------------------------------------------------------------------------------


Resolution:

Move the NTP configuration files aside so that Oracle CTSS takes over time synchronization (switching from Observer to Active mode):

mv /etc/sysconfig/ntpd /etc/sysconfig/ntpd_bk

mv /etc/ntp.conf /etc/ntp.conf_bk

[grid@a1212testingracdb1d bin]$ mv  /etc/sysconfig/ntpd  /etc/sysconfig/ntpd_bk

[oracle@a1212testingracdb1d ~]$ mv /etc/ntp.conf /etc/ntp.conf_bk
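
The ntpd service should also be stopped and disabled so it does not restart on reboot (standard OEL 5 commands, run as root on every node; a sketch):

service ntpd stop
chkconfig ntpd off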


Then run "cluvfy comp clocksync" on both nodes.


[grid@node-01 ~]$ cd $GRID_HOME/bin
[grid@node-01 ~]$ cluvfy comp clocksync

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
CTSS resource check passed


Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was successful.
[grid@node-01 ~]$