RAC Troubleshooting

This is the one section that will be updated frequently as my experience with RAC grows. As RAC has been around for a while, most problems can be resolved with a simple Google lookup, but a basic understanding of where to look for the problem is still required. In this section I will point you to where to look for problems. Every instance in the cluster has its own alert log, which is where you would start looking. Alert logs contain startup and shutdown information, nodes joining and leaving the cluster, etc.
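
A simple way to watch an alert log during startup is to tail it on each node; the path below is only an example, your ORACLE_BASE, database name and SID will differ.

## tail the alert log on the local node (example path, adjust for your environment)
$ tail -f $ORACLE_BASE/admin/<dbname>/bdump/alert_<SID>.log
## scan for recent errors
$ grep "ORA-" $ORACLE_BASE/admin/<dbname>/bdump/alert_<SID>.log | tail -20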

Here is the complete alert log file from my two-node RAC starting up.

The cluster itself has a number of log files that can be examined to gain insight into occurring problems; the table below describes where to find the information for the various CRS components.

$ORA_CRS_HOME/crs/log    contains trace files for the CRS resources
$ORA_CRS_HOME/crs/init   contains trace files for the CRS daemon during startup, a good place to start
$ORA_CRS_HOME/css/log    contains cluster reconfigurations, missed check-ins, connects and disconnects from the client CSS listener. Look here to determine when reboots occurred
$ORA_CRS_HOME/css/init   contains core dumps from the Cluster Synchronization Services daemon (OCSSD)
$ORA_CRS_HOME/evm/log    log files for the Event Manager (EVM) and evmlogger daemon
$ORA_CRS_HOME/evm/init   pid and lock files for EVM
$ORA_CRS_HOME/srvm/log   log files for the Oracle Cluster Registry (OCR)
$ORA_CRS_HOME/log        log files for Oracle Clusterware, which contain diagnostic messages at the Oracle cluster level
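
If you are not sure which component is at fault, a quick scan of all the Clusterware logs for recent errors is a good first pass; the commands below are just a sketch and assume the standard directory layout above.

## list clusterware logs that mention errors (rough first pass)
$ grep -li "error" $ORA_CRS_HOME/*/log/*
## look at the most recent CSS errors, useful when chasing reboots and evictions
$ grep -i "error" $ORA_CRS_HOME/css/log/* | tail -20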

As in a normal Oracle single-instance environment, a RAC environment contains the standard RDBMS log and trace files; their locations are controlled by parameters such as background_dump_dest, user_dump_dest and core_dump_dest. The most important of these are

$ORACLE_BASE/admin/udump contains any trace file generated by a user process
$ORACLE_BASE/admin/cdump contains core files that are generated due to a core dump in a user process
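
You can confirm where these directories actually point on your system from SQL*Plus:

## confirm the dump destinations on each instance
SQL> show parameter dump_dest
## or query them directly
SQL> select name, value from v$parameter where name like '%dump_dest';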

Now let's look at a two-node startup and the sequence of events.

First you must check that the RAC environment is using the correct interconnect; this can be done by any of the following

logfile            ## The location of my alert log, yours may be different
oifcfg command     oifcfg getif
table check        select inst_id, pub_ksxpia, picked_ksxpia, ip_ksxpia from x$ksxpia;
oradebug           SQL> oradebug setmypid
                   SQL> oradebug ipc

                   Note: check the trace file, which can be located via the parameter user_dump_dest
system parameter   cluster_interconnects

                   Note: used to specify which address to use
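
For reference, oifcfg getif output looks something like the below; the interface names and subnets are made-up examples, yours will differ.

$ oifcfg getif
eth0  192.168.2.0  global  public
eth1  10.0.0.0     global  cluster_interconnect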

When the instance starts up, the Lock Monitor's (LMON) job is to register with the Node Monitor (NM) (see the log entry below). Remember, when a node joins or leaves the cluster the GRD undergoes a reconfiguration event; as seen in the logfile it is a seven-step process (see below for more details on the seven-step process).

The LMON trace file also has details about reconfigurations, including the reason for the event:

reconfiguration reason
reason 1   means that the NM initiated the reconfiguration event, typical when a node joins or leaves a cluster
reason 2   means that an instance has died

           How does RAC detect an instance death? Every instance updates the controlfile with a heartbeat through its checkpoint process (CKPT); if the heartbeat information is missing for x amount of time, the instance is considered to be dead and the Instance Membership Recovery (IMR) process initiates a reconfiguration.
reason 3   means communication failure of a node or nodes. Messages are sent across the interconnect; if a message is not received within an amount of time then a communication failure is assumed. By default UDP is used, which can be unreliable, so keep an eye on the logs if too many reconfigurations happen for reason 3.
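
You can pull the reason straight out of the LMON trace file; the exact message text varies between versions, so treat the pattern and path below as a starting point.

## find reconfiguration events and their reason in the LMON trace (example path)
$ grep -i "reconfiguration" $ORACLE_BASE/admin/<dbname>/bdump/<SID>_lmon_*.trc
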
Example of a reconfiguration, taken from the alert log:

Sat Mar 20 11:35:53 2010
Reconfiguration started (old inc 2, new inc 4)
List of nodes:
  0 1
  Global Resource Directory frozen
  * allocate domain 0, invalid = TRUE
  Communication channels reestablished
  Master broadcasted resource hash value bitmaps
  Non-local Process blocks cleaned out
Sat Mar 20 11:35:53 2010
  LMS 0: 0 GCS shadows cancelled, 0 closed
  Set master node info
  Submitted all remote-enqueue requests
  Dwn-cvts replayed, VALBLKs dubious
  All grantable enqueues granted
  Post SMON to start 1st pass IR
Sat Mar 20 11:35:53 2010
  LMS 0: 0 GCS shadows traversed, 3291 replayed
Sat Mar 20 11:35:53 2010
  Submitted all GCS remote-cache requests
  Post SMON to start 1st pass IR
  Fix write in gcs resources
Reconfiguration complete

Note: when a reconfiguration happens the GRD is frozen until the reconfiguration is completed

Confirm that the database has been started in cluster mode; the log file will state the following

cluster mode   Sat Mar 20 11:36:02 2010
               Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
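
You can also confirm cluster mode from SQL*Plus rather than the log file:

## check that the instance is running with the RAC option
SQL> select name, value from v$parameter where name = 'cluster_database';
SQL> select inst_id, instance_name, status from gv$instance;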

Starting with 10g the SCN is broadcast across all nodes; the system will have to wait until all nodes have seen the commit SCN. You can change the broadcast method using the system parameter _lgwr_async_broadcasts.

Lamport Algorithm

The Lamport algorithm generates SCNs in parallel and assigns them to transactions on a first come, first served basis, which is different from a single-instance environment. The broadcast method is used after a commit operation; it is more CPU intensive as it has to broadcast the SCN for every commit, but the other nodes can see the committed SCN immediately.

The initialization parameter max_commit_propagation_delay limits the maximum delay allowed for SCN propagation; by default it is 7 seconds (a value of 700, as the parameter is specified in hundredths of a second). When set to less than 100 the broadcast-on-commit algorithm is used.
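
To check what your system is using, query the parameter; digging out the hidden _lgwr_async_broadcasts parameter requires the usual x$ksppi/x$ksppcv trick as SYS, sketched below.

SQL> show parameter max_commit_propagation_delay

## hidden parameters have to be dug out of the x$ tables (connect as SYS)
SQL> select a.ksppinm name, b.ksppstvl value
       from x$ksppi a, x$ksppcv b
      where a.indx = b.indx
        and a.ksppinm = '_lgwr_async_broadcasts';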

Disable/Enable Oracle RAC

There are times when you may wish to disable RAC; this can only be done in a Unix environment (there is no Windows option).

Disable Oracle RAC (Unix only)
  1. Log in as Oracle in all nodes
  2. shutdown all instances using either normal or immediate option
  3. change to the working directory $ORACLE_HOME/lib
  4. run the below make command to relink the Oracle binaries without the RAC option (should take a few minutes)

        make -f ins_rdbms.mk rac_off
  5. Now relink the Oracle binaries

        make -f ins_rdbms.mk ioracle
Enable Oracle RAC (Unix only)
  1. Log in as Oracle in all nodes
  2. shutdown all instances using either normal or immediate option
  3. change to the working directory $ORACLE_HOME/lib
  4. run the below make command to relink the Oracle binaries with the RAC option (should take a few minutes)

        make -f ins_rdbms.mk rac_on
  5. Now relink the Oracle binaries

        make -f ins_rdbms.mk ioracle
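
After relinking and restarting, you can verify whether the instance came up with the RAC option; the PARALLEL column of v$instance shows YES when running in cluster mode.

## verify whether the RAC option is active after the relink
SQL> select parallel from v$instance;

PAR
---
YES    ## YES = RAC enabled, NO = single instance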

Performance Issues

Oracle can suffer a number of different performance problems; one of the most severe is a hung database.

A hung database is basically an internal deadlock between two processes. Usually Oracle will detect the deadlock and roll back one of the processes; however, if the situation involves internal kernel-level resources (latches or pins), Oracle is unable to automatically detect and resolve the deadlock, thus hanging the database. When this event occurs you must obtain dumps from each of the instances (three dumps per instance at regular intervals); the trace files will be very large.

capture information

## Using alter session
SQL> alter session set max_dump_file_size = unlimited;
SQL> alter session set events 'immediate trace name systemstate level 10';

## using oradebug
SQL> select * from dual;
SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug dump systemstate 10

## using oradebug from another instance
SQL> select * from dual;
SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug -g all dump systemstate 10

Note: the select statement above is to avoid problems on pre-8 versions of Oracle

SQLPlus - problems connecting   ## If you have problems connecting with SQL*Plus, use the command below
$ sqlplus -prelim
Enter user-name: / as sysdba

A severe performance problem can be mistaken for a hang; this usually happens because of contention problems. A systemstate dump is normally used to analyze this problem; however, a systemstate dump takes a long time to complete and also has a number of limitations.

To overcome these limitations, a new utility called hanganalyze was released with 8i, which provides cluster-wide information in a RAC environment in a single shot.

sql method   alter session set events 'immediate trace name hanganalyze level <level>';
oradebug     SQL> oradebug hanganalyze <level>

## Another way using oradebug
SQL> oradebug setmypid
SQL> oradebug setinst all
SQL> oradebug -g def hanganalyze <level>

Note: you will be told where the output will be dumped to
hanganalyze levels
1-2   only hanganalyze output, no process dump at all
3     Level 2 + Dump only processes thought to be in a hang (IN_HANG state)
4     Level 3 + Dump leaf nodes (blockers) in wait chains (LEAF, LEAF_NW, IGN_DMP state)
5     Level 4 + Dump all processes involved in wait chains (NLEAF state)
10    Dump all processes (IGN state)

The hanganalyze command uses internal kernel calls to determine whether a session is waiting for a resource and reports the relationships between blockers and waiters. A systemstate dump is more thorough, but if you are overwhelmed try hanganalyze first.

Debugging Node Eviction

A node is evicted from the cluster after it kills itself because it is not able to service the application; this generally happens when you have communication problems. For node eviction problems look for ORA-29740 errors in the alert log and LMON trace files.

To understand eviction problems you need to know the basics of how node membership and Instance Membership Recovery (IMR) work. When a communication failure happens, the heartbeat information in the controlfile cannot be updated, so data corruption could occur. IMR will remove from the cluster any nodes that it deems to be a problem; IMR will ensure that the larger part of the cluster survives and kills any remaining nodes. IMR is part of the service offered by Cluster Group Services (CGS). LMON handles many of the CGS functionalities; this works at the cluster level and can work with third-party software (Sun Cluster, Veritas Cluster). The Node Monitor (NM) provides information about nodes and their health by registering and communicating with the Cluster Manager (CM). Node membership is represented as a bitmap in the GRD. LMON will let other nodes know of any changes in membership; for example, if a node joins or leaves the cluster, the bitmap is rebuilt and communicated to all nodes.

Node registering (alert log) lmon registered with NM - instance id 1 (internal mem no 0)

One thing to remember is that all nodes must be able to read from and write to the controlfile. CGS makes sure that members are valid; it uses a voting mechanism to check the validity of each member. I have already discussed the voting disk in my architecture section. As stated above, membership is held in a bitmap in the GRD. The CKPT process updates the controlfile every 3 seconds in an operation known as a heartbeat; it writes into a single block that is unique for each instance, so intra-instance coordination is not required. This block is called the checkpoint progress record. You can see the controlfile records using the gv$controlfile_record_section view. All members attempt to obtain a lock on the controlfile record for updating; the instance that obtains the lock tallies the votes from all members. The group membership must conform to the decided (voted) membership before the GCS/GES reconfiguration is allowed to proceed. The controlfile vote result is stored in the same block as the heartbeat, in the controlfile checkpoint progress record.
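
You can look at the checkpoint progress records in the controlfile yourself:

## view the checkpoint progress record section of the controlfile
SQL> select inst_id, type, record_size, records_total, records_used
       from gv$controlfile_record_section
      where type = 'CHECKPOINT PROGRESS';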

A cluster reconfiguration is performed in seven steps:

  1. Name service is frozen; the CGS contains an internal database of all the members/instances in the cluster, with all their configuration and servicing details.
  2. Lock database (IDLM) is frozen; this prevents processes from obtaining locks on resources that were mastered by the departing/dead instance.
  3. Determination of membership, validation and IMR.
  4. Bitmap rebuild takes place, including instance name and uniqueness verification; GCS must synchronize the cluster to be sure that all members get the reconfiguration event and that they all see the same bitmap.
  5. Delete all dead instance entries and republish all names newly configured.
  6. Unfreeze and release name service for use.
  7. Hand over reconfiguration to GES/GCS.

Debugging CRS and GSD

Oracle server management configuration tools include a diagnostic and tracing facility for verbose output for SRVCTL, GSD, GSDCTL or SRVCONFIG.

To capture diagnostic output, follow the steps below:

  1. use vi to edit the gsd.sh/srvctl/srvconfig file in the $ORACLE_HOME/bin directory
  2. At the end of the file look for the below line

    exec $JRE -classpath $CLASSPATH oracle.ops.mgmt.daemon.OPSMDaemon $MY_OHOME
  3. Add the following just before the -classpath in the exec $JRE line

    -DTRACING.ENABLED=true -DTRACING.LEVEL=2
  4. the string should look like this

    exec $JRE -DTRACING.ENABLED=true -DTRACING.LEVEL=2 -classpath...........

In Oracle Database 10g, setting the variable below accomplishes the same thing; set it to blank to remove the debugging.

Enable tracing  $ export SRVM_TRACE=true
Disable tracing  $ export SRVM_TRACE=""
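
As a usage example, you would typically export the variable, rerun the failing command and then inspect the verbose output; the srvctl call is just an illustration.

## enable tracing, rerun the failing command, then turn tracing off again
$ export SRVM_TRACE=true
$ srvctl status database -d <dbname>
$ export SRVM_TRACE=""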
