RAC : Rollback PSU Patch From Standard Cluster 12c

Written By askMLabs on Wednesday, March 12, 2014 | 10:29 AM


In this article, we will walk through the steps to roll back a PSU patch from a RAC 12c standard cluster. Make sure all the cluster services are up and running before starting the rollback.


1. Make sure you have all the cluster services up and running.
 /u01/app/12.1.0/grid_1/bin/crsctl status res -t

[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
               ONLINE  ONLINE       askmrac3                 STABLE
ora.OCR_VOTE.dg
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
               ONLINE  ONLINE       askmrac3                 STABLE
ora.asm
               ONLINE  ONLINE       askmrac1                 Started,STABLE
               ONLINE  ONLINE       askmrac2                 Started,STABLE
               ONLINE  ONLINE       askmrac3                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
               ONLINE  ONLINE       askmrac3                 STABLE
ora.ons
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
               ONLINE  ONLINE       askmrac3                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       askmrac2                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       askmrac2                 169.254.200.16 10.10
                                                             .10.232,STABLE
ora.askmrac1.vip
      1        ONLINE  ONLINE       askmrac1                 STABLE
ora.askmrac2.vip
      1        ONLINE  ONLINE       askmrac2                 STABLE
ora.askmrac3.vip
      1        ONLINE  ONLINE       askmrac3                 STABLE
ora.cvu
      1        ONLINE  ONLINE       askmrac2                 STABLE
ora.oc4j
      1        ONLINE  ONLINE       askmrac3                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       askmrac1                 Open,STABLE
      2        ONLINE  ONLINE       askmrac2                 Open,STABLE
      3        ONLINE  ONLINE       askmrac3                 Open,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       askmrac2                 STABLE
--------------------------------------------------------------------------------
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/crsctl status res -t | grep -i offline
[root@askmrac1 ~]#
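The OFFLINE check above can also be scripted so it fails loudly instead of printing nothing. A minimal sketch (the helper name and wording are mine, not from the article); it reads `crsctl status res -t` output on stdin:

```shell
# Fail fast if any cluster resource line shows OFFLINE.
check_no_offline() {
    if grep -qi 'OFFLINE'; then
        echo "Found OFFLINE resources - fix them before the rollback" >&2
        return 1
    fi
    echo "All resources ONLINE"
}

# Against a live cluster (grid home path as used in this article):
# /u01/app/12.1.0/grid_1/bin/crsctl status res -t | check_no_offline
```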
2. Use the following syntax to roll back the patch from the 12c standard cluster home.
opatchauto rollback <patch location>
[root@askmrac1 ~]# which opatchauto
/u01/app/12.1.0/grid_1/OPatch/opatchauto
[root@askmrac1 ~]# opatchauto rollback /mnt/software/RAC/1201_PSU/17272829
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/12.1.0/grid_1
opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-08_05-34-09_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0/dbhome_1
Configuration Validation: Successful
Patch Location: /mnt/software/RAC/1201_PSU/17272829
Grid Infrastructure Patch(es): 17027533 17077442 17303297
RAC Patch(es): 17027533 17077442
Patch Validation: Successful
Stopping RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: orcl
Rolling back patch(es) from"/u01/app/oracle/product/12.1.0/dbhome_1" ...
Patch "17027533,17077442" successfully rolled back from "/u01/app/oracle/product/12.1.0/dbhome_1".
Stopping CRS ... Successful
Rolling back patch(es) from"/u01/app/12.1.0/grid_1" ...
Patch "17027533,17077442,17303297" successfully rolled back from "/u01/app/12.1.0/grid_1".
Starting CRS ... Successful
Starting RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
[WARNING] SQL changes, if any, could not be rolled back on the following database(s): ORCL ... Please refer to the log file for more details.
Rollback Summary:
Following patch(es) are successfully rolled back:
GI Home: /u01/app/12.1.0/grid_1: 17027533, 17077442, 17303297
RAC Home: /u01/app/oracle/product/12.1.0/dbhome_1: 17027533, 17077442
opatchauto succeeded.
[root@askmrac1 ~]# 

Use the above syntax to roll back the patch from the grid home and the database home on all the other cluster nodes.

3. Verify that the patch is rolled back.
From Grid Home:
[root@askmrac1 ~]# su - grid
[grid@askmrac1 ~]$ opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.1.0/grid_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0/grid_1/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-03-08_06-11-01AM_1.log
Lsinventory Output file location : /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/lsinv/lsinventory2014-03-08_06-11-01AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c                                       12.1.0.1.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Patch level status of Cluster nodes :
 Patching Level                  Nodes
 --------------                  -----
 0                               askmrac3,askmrac2,askmrac1
--------------------------------------------------------------------------------
OPatch succeeded.
[grid@askmrac1 ~]$

From Oracle Home : 
[root@askmrac1 ~]# su - oracle
[oracle@askmrac1 ~]$ opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2014-03-08_06-11-27AM_1.log
Lsinventory Output file location : /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2014-03-08_06-11-27AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 12c                                                  12.1.0.1.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes
  Local node = askmrac1
  Remote node = askmrac2
  Remote node = askmrac3
--------------------------------------------------------------------------------
OPatch succeeded.
[oracle@askmrac1 ~]$
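The two lsinventory checks above can be reduced to a one-line assertion per home. A small sketch of my own; the zero-patch wording it greps for is taken verbatim from the output shown above:

```shell
# Succeeds only when an `opatch lsinventory` report (on stdin) confirms
# that no interim patches remain in the home.
no_interim_patches() {
    grep -q 'There are no Interim patches installed'
}

# /u01/app/12.1.0/grid_1/OPatch/opatch lsinventory | no_interim_patches && echo "rollback confirmed"
```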

Please see my other articles on how to apply PSU patches to a standard cluster and a flex cluster.
RAC 12c : JAN2014 PSU to 4 node flex cluster
RAC : Apply PSU patch to Standard Cluster 12c

Hope This Helps
SRI

RAC : Apply PSU Patch To Standard Cluster 12c


In this article, we will see how to apply a PSU patch to a standard cluster. I have a three-node RAC 12c standard cluster installed.

Environment Details : 
RAC Version    : 12c (12.1.0.1.0)
Cluster Type   : Standard Cluster
Cluster Nodes  : askmrac1/2/3
DB Running on  : All Cluster Nodes
Task           : Applying OCT2013 PSU Patch



Environment Configuration :
Type             Path                                     Owner   Version   Shared
Grid Infra Home  /u01/app/12.1.0/grid_1                   grid    12.1.0.1  False
Database Home    /u01/app/oracle/product/12.1.0/dbhome_1  oracle  12.1.0.1  False


Step By Step
  1. Download and Unzip the Latest OPatch to all cluster nodes
  2. Validate and Record Pre-Patch information
  3. Create OCM response file if it does not exist
  4. Download and Unzip the OCT2013 PSU patch for GI 12c, i.e., 12.1.0.1.1
  5. One-off patch Conflicts detection and Resolution
  6. Patch application
  7. Verification of Patch application
  8. Issues and Resolutions




1. Download and Unzip the Latest OPatch to all cluster nodes:
You must use OPatch version 12.1.0.1.1 or later to apply this patch. Oracle recommends that you use the latest released OPatch for 12.1 releases, which is available for download from My Oracle Support as patch 6880880.

It is always a best practice to keep both the GRID_HOME and the DATABASE_HOME at the same OPatch level, so update OPatch in the DATABASE_HOME as well.

Repeat this step on all the nodes in the cluster.
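Whether the installed OPatch already meets the 12.1.0.1.1 minimum can be checked with a small version comparison. A sketch of my own (the awk field for `opatch version` output is an assumption about its format):

```shell
# version_ge VER MIN: returns 0 when VER >= MIN, comparing dot-separated
# numeric version fields via a numeric sort.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n -k5,5n | head -1)" = "$2" ]
}

# ver=$(/u01/app/12.1.0/grid_1/OPatch/opatch version | awk '/Version/ {print $3}')
# version_ge "$ver" 12.1.0.1.1 || echo "OPatch too old - install patch 6880880"
```

If either home's OPatch is older, unzip patch 6880880 over the corresponding OPatch directory on every node.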


2. Validate and Record Pre-Patch information :
Validate using the following commands :
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/12.1.0/grid_1
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/dbhome_1

Log in to each node in the RAC as the grid user and execute the following commands.
$GRID_ORACLE_HOME/OPatch/opatch lsinventory
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'

Log in to each node in the RAC as the oracle user and execute the following commands.
$ORACLE_HOME/OPatch/opatch lsinventory
$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'

Connect to each instance and record registry information.
SQL> select comp_name,version,status from dba_registry;

3. Create OCM Response File If It Does Not Exist : 
Create ocm response file using the following command and provide appropriate values for the prompts.
$GRID_ORACLE_HOME/OPatch/ocm/bin/emocmrsp
Verify the created file using,
$GRID_ORACLE_HOME/OPatch/ocm/bin/emocmrsp –verbose ocm.rsp
NOTE: The OPatch utility prompts for your OCM (Oracle Configuration Manager) response file when it is run; without one, it cannot proceed further.

Copy this response file ocm.rsp to the same location on all the nodes in the cluster, or create a new response file on each node using the same method as above.
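Staging the response file on the remaining nodes can be done with a short loop. A hedged sketch (the helper name and the DRY_RUN preview switch are my own additions; node names are from this article):

```shell
# stage_rsp FILE NODE...: copy FILE to the same path on every node.
# DRY_RUN=1 only prints the scp commands, for a safe preview.
stage_rsp() {
    rsp=$1; shift
    for node in "$@"; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "scp $rsp $node:$rsp"
        else
            scp "$rsp" "$node:$rsp"
        fi
    done
}

# DRY_RUN=1 stage_rsp /tmp/askm/ocm.rsp askmrac2 askmrac3
```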

4. Download and Unzip the OCT2013 PSU patch : (as grid user)
Patch 17272829 is the OCT2013 PSU patch. It is downloaded and unzipped to the location "/mnt/software/RAC/1201_PSU/".

5. One-off patch Conflicts detection and Resolution :
Determine whether any currently installed one-off patches conflict with the PSU patch.
$GRID_HOME/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/17272829 -analyze

I don't have any patches applied to my homes yet, so I can skip this step. If any conflicts are identified in the GI home or the DB home, follow MOS note 1061295.1 to resolve them.

6. Patch Application : 
The patch can now be applied using the following syntax.
# opatchauto apply <PATH_TO_PATCH_DIRECTORY> -ocmrf <ocm response file>
On all Cluster nodes ( askmrac1/2/3)
[root@askmrac1 ~]# export PATH=/u01/app/12.1.0/grid/OPatch:$PATH
[root@askmrac1 ~]# which opatchauto
/u01/app/12.1.0/grid/OPatch/opatchauto
[root@askmrac1 ~]# opatchauto apply /mnt/software/RAC/1201_PSU/17272829 -ocmrf /tmp/askm/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/12.1.0/grid_1
opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-08_04-17-23_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0/dbhome_1
Configuration Validation: Successful
Patch Location: /mnt/software/RAC/1201_PSU/17272829
Grid Infrastructure Patch(es): 17027533 17077442 17303297
RAC Patch(es): 17027533 17077442
Patch Validation: Successful
Stopping RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: orcl
Applying patch(es) to "/u01/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/mnt/software/RAC/1201_PSU/17272829/17027533" is already installed on "/u01/app/oracle/product/12.1.0/dbhome_1". Please rollback the existing identical patch first.
Patch "/mnt/software/RAC/1201_PSU/17272829/17077442" is already installed on "/u01/app/oracle/product/12.1.0/dbhome_1". Please rollback the existing identical patch first.
Stopping CRS ... Successful
Applying patch(es) to "/u01/app/12.1.0/grid_1" ...
Patch "/mnt/software/RAC/1201_PSU/17272829/17027533" successfully applied to "/u01/app/12.1.0/grid_1".
Patch "/mnt/software/RAC/1201_PSU/17272829/17077442" successfully applied to "/u01/app/12.1.0/grid_1".
Patch "/mnt/software/RAC/1201_PSU/17272829/17303297" successfully applied to "/u01/app/12.1.0/grid_1".
Starting CRS ... Successful
Starting RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
[WARNING] SQL changes, if any, could not be applied on the following database(s): ORCL ... Please refer to the log file for more details.
Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/12.1.0/grid_1: 17027533, 17077442, 17303297
opatchauto ran into some warnings during patch installation (Please see log file for details):
RAC Home: /u01/app/oracle/product/12.1.0/dbhome_1: 17027533, 17077442
opatchauto succeeded.
[root@askmrac1 ~]#

Repeat the above step on all the cluster nodes in the standard cluster.

7. Verification of Patch application : 
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/12.1.0/grid_1
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/dbhome_1

Log in to each node in the RAC as the grid user and execute the following commands.
$GRID_ORACLE_HOME/OPatch/opatch lsinventory
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'

Log in to each node in the RAC as the oracle user and execute the following commands.
$ORACLE_HOME/OPatch/opatch lsinventory
$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'

Connect to each instance and verify registry information.
SQL> select comp_name,version,status from dba_registry;

[root@askmrac1 ~]# su - grid
[grid@askmrac1 ~]$ opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.1.0/grid_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0/grid_1/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-03-08_04-46-22AM_1.log
Lsinventory Output file location : /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/lsinv/lsinventory2014-03-08_04-46-22AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c                                       12.1.0.1.0
There are 1 products installed in this Oracle Home.

Interim patches (3) :
Patch  17303297     : applied on Sat Mar 08 04:35:19 EST 2014
Unique Patch ID:  16881795
Patch description:  "ACFS Patch Set Update 12.1.0.1.1"
   Created on 14 Oct 2013, 07:25:50 hrs US/Central
   Bugs fixed:
     14487556, 16398970, 16552813, 16930184, 16420645, 16170117, 16436434
     16476044, 16458315, 16463033, 16095100, 16545876, 16429953, 14826673
     16001893, 16482869, 16371746, 16435343, 14476443, 16294308, 16671486
     16386110, 15978267, 16085530, 16347837, 16814544, 16022372, 16167084
     14510092, 16450287, 16399406
Patch  17077442     : applied on Sat Mar 08 04:31:12 EST 2014
Unique Patch ID:  16881794
Patch description:  "Oracle Clusterware Patch Set Update 12.1.0.1.1"
   Created on 12 Oct 2013, 06:33:53 hrs US/Central
   Bugs fixed:
     16505840, 16505255, 16390989, 16399322, 16505617, 16505717, 17486244
     16168869, 16444109, 16505361, 13866165, 16505763, 16208257, 16904822
     17299876, 16246222, 16505214, 16505540, 15936039, 16580269, 16838292
     16505449, 16801843, 16309853, 16505395, 17507349, 17475155, 16493242
     17039197, 16196609, 17463260, 16505667, 15970176, 16488665, 16670327
Patch  17027533     : applied on Sat Mar 08 04:27:54 EST 2014
Unique Patch ID:  16677152
Patch description:  "Database Patch Set Update : 12.1.0.1.1 (17027533)"
   Created on 27 Sep 2013, 05:30:33 hrs PST8PDT
   Bugs fixed:
     17034172, 16694728, 16448848, 16863422, 16634384, 16465158, 16320173
     16313881, 16910734, 16816103, 16911800, 16715647, 16825779, 16707927
     16392068, 14197853, 16712618, 17273253, 16902138, 16524071, 16856570
     16465149, 16705020, 16689109, 16372203, 16864864, 16849982, 16946613
     16837842, 16964279, 16459685, 16978185, 16845022, 16195633, 14536110
     16964686, 16787973, 16850996, 16674842, 16838328, 16178562, 15996344
     16503473, 16842274, 16935643, 17000176, 14355775, 16362358, 16994576
     16485876, 16919176, 16928832, 16864359, 16617325, 16921340, 16679874
     16788832, 16483559, 16733884, 16784167, 16286774, 15986012, 16660558
     16674666, 16191248, 16697600, 16993424, 16946990, 16589507, 16173738
     16784143, 16772060, 16991789, 17346196, 16495802, 16859937, 16590848
     16910001, 16603924, 16427054, 16730813, 16227068, 16663303, 16784901
     16836849, 16186165, 16457621, 16007562, 16170787, 16663465, 16524968
     16543323, 17027533, 16675710, 17005047, 16795944, 16668226, 16070351
     16212405, 16523150, 16698577, 16621274, 16930325, 17330580, 16443657


Patch level status of Cluster nodes :
 Patching Level                  Nodes
 --------------                  -----
 1650217826                      askmrac1
 0                               askmrac3,askmrac2
--------------------------------------------------------------------------------
OPatch succeeded.
[grid@askmrac1 ~]$

[root@askmrac1 ~]# su - oracle
[oracle@askmrac1 ~]$ opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2014-03-08_04-46-46AM_1.log
Lsinventory Output file location : /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2014-03-08_04-46-46AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 12c                                                  12.1.0.1.0
There are 1 products installed in this Oracle Home.

Interim patches (2) :
Patch  17077442     : applied on Fri Mar 07 12:48:25 EST 2014
Unique Patch ID:  16881794
Patch description:  "Oracle Clusterware Patch Set Update 12.1.0.1.1"
   Created on 12 Oct 2013, 06:33:53 hrs US/Central
   Bugs fixed:
     16505840, 16505255, 16390989, 16399322, 16505617, 16505717, 17486244
     16168869, 16444109, 16505361, 13866165, 16505763, 16208257, 16904822
     17299876, 16246222, 16505214, 16505540, 15936039, 16580269, 16838292
     16505449, 16801843, 16309853, 16505395, 17507349, 17475155, 16493242
     17039197, 16196609, 17463260, 16505667, 15970176, 16488665, 16670327
Patch  17027533     : applied on Fri Mar 07 12:48:00 EST 2014
Unique Patch ID:  16677152
Patch description:  "Database Patch Set Update : 12.1.0.1.1 (17027533)"
   Created on 27 Sep 2013, 05:30:33 hrs PST8PDT
   Bugs fixed:
     17034172, 16694728, 16448848, 16863422, 16634384, 16465158, 16320173
     16313881, 16910734, 16816103, 16911800, 16715647, 16825779, 16707927
     16392068, 14197853, 16712618, 17273253, 16902138, 16524071, 16856570
     16465149, 16705020, 16689109, 16372203, 16864864, 16849982, 16946613
     16837842, 16964279, 16459685, 16978185, 16845022, 16195633, 14536110
     16964686, 16787973, 16850996, 16674842, 16838328, 16178562, 15996344
     16503473, 16842274, 16935643, 17000176, 14355775, 16362358, 16994576
     16485876, 16919176, 16928832, 16864359, 16617325, 16921340, 16679874
     16788832, 16483559, 16733884, 16784167, 16286774, 15986012, 16660558
     16674666, 16191248, 16697600, 16993424, 16946990, 16589507, 16173738
     16784143, 16772060, 16991789, 17346196, 16495802, 16859937, 16590848
     16910001, 16603924, 16427054, 16730813, 16227068, 16663303, 16784901
     16836849, 16186165, 16457621, 16007562, 16170787, 16663465, 16524968
     16543323, 17027533, 16675710, 17005047, 16795944, 16668226, 16070351
     16212405, 16523150, 16698577, 16621274, 16930325, 17330580, 16443657


Rac system comprising of multiple nodes
  Local node = askmrac1
  Remote node = askmrac2
  Remote node = askmrac3
--------------------------------------------------------------------------------
OPatch succeeded.
[oracle@askmrac1 ~]$
8. Issues and Resolutions : 
Issue 1:
There is not enough free space in the mount point where the database home is created. OPatch requires a minimum of 5528.488MB of free space in the Oracle home mount point.
[root@askmrac1 askm]# export PATH=$PATH:/u01/app/12.1.0/grid_1/OPatch
[root@askmrac1 askm]# which opatchauto
/u01/app/12.1.0/grid_1/OPatch/opatchauto
[root@askmrac1 askm]# opatchauto apply /tmp/askm/17272829 -ocmrf /tmp/askm/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/12.1.0/grid_1
opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-07_09-46-11_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0/dbhome_1
Configuration Validation: Successful
Patch Location: /tmp/askm/17272829
Grid Infrastructure Patch(es): 17027533 17077442 17303297
RAC Patch(es): 17027533 17077442
Patch Validation: Successful
Stopping RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: orcl
Applying patch(es) to "/u01/app/oracle/product/12.1.0/dbhome_1" ...
Command "/u01/app/oracle/product/12.1.0/dbhome_1/OPatch/opatch napply -phBaseFile /tmp/OraDB12Home1_patchList -local  -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/oracle/product/12.1.0/dbhome_1 -silent -ocmrf /tmp/askm/ocm.rsp" execution failed:
UtilSession failed:
Prerequisite check "CheckSystemSpace" failed.
Log file Location for the failed command: /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2014-03-07_09-51-07AM_1.log
[WARNING] The local database instance 'ORCL1' from '/u01/app/oracle/product/12.1.0/dbhome_1' is not running. SQL changes, if any,  will not be applied. Please refer to the log file for more details.
For more details, please refer to the log file "/u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-07_09-46-11_deploy.debug.log".
Apply Summary:
Following patch(es) failed to be installed:
GI Home: /u01/app/12.1.0/grid_1: 17027533, 17077442, 17303297
RAC Home: /u01/app/oracle/product/12.1.0/dbhome_1: 17027533, 17077442
opatchauto failed with error code 2.
[root@askmrac1 askm]#
Verification :
Verify the log file :

/u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-07_09-46-11_deploy.log
COMMAND EXECUTION FAILURE :
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0/grid_1/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2014-03-07_09-51-07AM_1.log
Verifying environment and performing prerequisite checks...
Prerequisite check "CheckSystemSpace" failed.
The details are:
Required amount of space(5528.488MB) is not available.
Log file location: /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2014-03-07_09-51-07AM_1.log
OPatch failed with error code 73
Resolution : 
Make sure you have at least 6GB of free space in the database Oracle home mount point.
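This free-space prerequisite can be checked before launching opatchauto. A minimal sketch of my own (the helper name and the 6144MB threshold, matching the ~5.5GB requirement above, are assumptions):

```shell
# has_free_mb DIR NEED_MB: succeeds when the filesystem holding DIR
# has at least NEED_MB megabytes available (per df).
has_free_mb() {
    dir=$1; need=$2
    avail=$(df -Pm "$dir" | awk 'NR==2 {print $4}')
    [ "$avail" -ge "$need" ]
}

# has_free_mb /u01/app/oracle/product/12.1.0/dbhome_1 6144 || echo "free up space before patching"
```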


Issue 2:
The cluster stack could not be stopped, even with the force option, during the patch application.
[root@askmrac1 askm]# opatchauto apply /tmp/askm/17272829 -ocmrf /tmp/askm/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/12.1.0/grid_1
opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-07_10-38-20_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0/dbhome_1
Configuration Validation: Successful
Patch Location: /tmp/askm/17272829
Grid Infrastructure Patch(es): 17027533 17077442 17303297
RAC Patch(es): 17027533 17077442
Patch Validation: Successful
Stopping RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
Applying patch(es) to "/u01/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/tmp/askm/17272829/17027533" successfully applied to "/u01/app/oracle/product/12.1.0/dbhome_1".
Patch "/tmp/askm/17272829/17077442" successfully applied to "/u01/app/oracle/product/12.1.0/dbhome_1".
Stopping CRS ... Failed
Command "/usr/bin/perl /u01/app/12.1.0/grid_1/crs/install/rootcrs.pl -prepatch" execution failed:
Died at /u01/app/12.1.0/grid_1/crs/install/crspatch.pm line 609.

[WARNING] The local database instance 'ORCL1' from '/u01/app/oracle/product/12.1.0/dbhome_1' is not running. SQL changes, if any,  will not be applied. Please refer to the log file for more details.
For more details, please refer to the log file "/u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-07_10-38-20_deploy.debug.log".
Apply Summary:
Following patch(es) are successfully installed:
RAC Home: /u01/app/oracle/product/12.1.0/dbhome_1: 17027533, 17077442
Following patch(es) failed to be installed:
GI Home: /u01/app/12.1.0/grid_1: 17027533, 17077442, 17303297
opatchauto failed with error code 2.
[root@askmrac1 askm]#
Verification :
From Log file :

/u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-07_10-38-20_deploy.log
COMMAND EXECUTION FAILURE :
Using configuration parameter file: /u01/app/12.1.0/grid_1/crs/install/crsconfig_params
Oracle Clusterware active version on the cluster is [12.1.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
2014/03/07 10:49:18 CLSRSC-349: The Oracle Clusterware stack failed to stop
ERROR:
Died at /u01/app/12.1.0/grid_1/crs/install/crspatch.pm line 609.
From the logfile:
/u01/app/12.1.0/grid_1/cfgtoollogs/crsconfig/crspatch_askmrac1_2014-03-07_10-46-59AM.log
>  CRS-1151: The cluster was successfully set to rolling patch mode.
>End Command output
2014-03-07 10:47:17: Successfully set the cluster in rolling patch mode
2014-03-07 10:47:17: Stopping Oracle Clusterware stack ...
2014-03-07 10:47:17: Executing cmd: /u01/app/12.1.0/grid_1/bin/crsctl stop crs -f
2014-03-07 10:49:18: Command output:
>  CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'askmrac1'
>  CRS-2673: Attempting to stop 'ora.crsd' on 'askmrac1'
>  CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'askmrac1'
>  CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'askmrac1'
>  CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'askmrac1'
>  CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'askmrac1'
>  CRS-2673: Attempting to stop 'ora.oc4j' on 'askmrac1'
>  CRS-2673: Attempting to stop 'ora.cvu' on 'askmrac1'
>  CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'askmrac1'
>  CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'askmrac1' succeeded
>  CRS-2673: Attempting to stop 'ora.askmrac1.vip' on 'askmrac1'
>  CRS-2677: Stop of 'ora.MGMTLSNR' on 'askmrac1' succeeded
>  CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'askmrac3'
>  CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'askmrac1' succeeded
>  CRS-2673: Attempting to stop 'ora.scan1.vip' on 'askmrac1'
>  CRS-2677: Stop of 'ora.askmrac1.vip' on 'askmrac1' succeeded
>  CRS-2672: Attempting to start 'ora.askmrac1.vip' on 'askmrac3'
>  CRS-2677: Stop of 'ora.scan1.vip' on 'askmrac1' succeeded
>  CRS-2672: Attempting to start 'ora.scan1.vip' on 'askmrac3'
>  CRS-2676: Start of 'ora.askmrac1.vip' on 'askmrac3' succeeded
>  CRS-2676: Start of 'ora.scan1.vip' on 'askmrac3' succeeded
>  CRS-2677: Stop of 'ora.cvu' on 'askmrac1' succeeded
>  CRS-2672: Attempting to start 'ora.cvu' on 'askmrac2'
>  CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'askmrac3'
>  CRS-2676: Start of 'ora.cvu' on 'askmrac2' succeeded
>  CRS-2676: Start of 'ora.MGMTLSNR' on 'askmrac3' succeeded
>  CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'askmrac3' succeeded
>  CRS-2677: Stop of 'ora.oc4j' on 'askmrac1' succeeded
>  CRS-2672: Attempting to start 'ora.oc4j' on 'askmrac2'
>  CRS-2675: Stop of 'ora.OCR_VOTE.dg' on 'askmrac1' failed
>  CRS-2679: Attempting to clean 'ora.OCR_VOTE.dg' on 'askmrac1'
>  CRS-2678: 'ora.OCR_VOTE.dg' on 'askmrac1' has experienced an unrecoverable failure
>  CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'askmrac1'
>  CRS-2675: Stop of 'ora.OCR_VOTE.dg' on 'askmrac1' failed
>  CRS-2679: Attempting to clean 'ora.OCR_VOTE.dg' on 'askmrac1'
>  CRS-2676: Start of 'ora.oc4j' on 'askmrac2' succeeded
>  CRS-2678: 'ora.OCR_VOTE.dg' on 'askmrac1' has experienced an unrecoverable failure
>  CRS-2799: Failed to shut down resource 'ora.OCR_VOTE.dg' on 'askmrac1'
>  CRS-2799: Failed to shut down resource 'ora.askmrac1.ASM1.asm' on 'askmrac1'
>  CRS-2799: Failed to shut down resource 'ora.asm' on 'askmrac1'
>  CRS-2794: Shutdown of Cluster Ready Services-managed resources on 'askmrac1' has failed
>  CRS-2675: Stop of 'ora.crsd' on 'askmrac1' failed
>  CRS-2799: Failed to shut down resource 'ora.crsd' on 'askmrac1'
>  CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'askmrac1' has failed
>  CRS-4687: Shutdown command has completed with errors.
>  CRS-4000: Command Stop failed, or completed with errors.
Resolution : 
The issue seems to be intermittent and environment-specific. After some research, I found that there was a problem with the disk group holding my OCR and voting files. Restarting the node resolved the issue.
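Spotting this failure in a long deploy log can be scripted; the CLSRSC-349 message it greps for is taken from the log above (the helper itself is my own sketch):

```shell
# Succeeds when an opatchauto deploy log (on stdin) contains the
# "Clusterware stack failed to stop" error, CLSRSC-349.
stack_stop_failed() {
    grep -q 'CLSRSC-349'
}

# stack_stop_failed < /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-07_10-38-20_deploy.log \
#     && echo "stack failed to stop - check the crspatch log, then consider a node restart"
```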

Issue 3:
There is not enough free space in the mount point where the grid infrastructure home is created. OPatch requires a minimum of 10578.277MB of free space in the grid home mount point.
[root@askmrac1 askm]# opatchauto apply /tmp/askm/17272829 -ocmrf /tmp/askm/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/12.1.0/grid_1
opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-07_14-38-30_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0/dbhome_1
Configuration Validation: Successful
Patch Location: /tmp/askm/17272829
Grid Infrastructure Patch(es): 17027533 17077442 17303297
RAC Patch(es): 17027533 17077442
Patch Validation: Successful
Stopping RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: orcl
Applying patch(es) to "/u01/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/tmp/askm/17272829/17027533" is already installed on "/u01/app/oracle/product/12.1.0/dbhome_1". Please rollback the existing identical patch first.
Patch "/tmp/askm/17272829/17077442" is already installed on "/u01/app/oracle/product/12.1.0/dbhome_1". Please rollback the existing identical patch first.
Stopping CRS ... Successful
Applying patch(es) to "/u01/app/12.1.0/grid_1" ...
Command "/u01/app/12.1.0/grid_1/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home1_patchList -local  -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1 -silent -ocmrf /tmp/askm/ocm.rsp" execution failed:
UtilSession failed:
Prerequisite check "CheckSystemSpace" failed.
Log file Location for the failed command: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-03-07_14-43-51PM_1.log
[WARNING] The local database instance 'ORCL1' from '/u01/app/oracle/product/12.1.0/dbhome_1' is not running. SQL changes, if any,  will not be applied. Please refer to the log file for more details.
For more details, please refer to the log file "/u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-07_14-38-30_deploy.debug.log".
Apply Summary:
opatchauto ran into some warnings during patch installation (Please see log file for details):
RAC Home: /u01/app/oracle/product/12.1.0/dbhome_1: 17027533, 17077442
Following patch(es) failed to be installed:
GI Home: /u01/app/12.1.0/grid_1: 17027533, 17077442, 17303297
opatchauto failed with error code 2.
[root@askmrac1 askm]# 

Verification :
From the logfile :

/u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-07_14-38-30_deploy.log
COMMAND EXECUTION FAILURE :
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.1.0/grid_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0/grid_1/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-03-07_14-43-51PM_1.log
Verifying environment and performing prerequisite checks...
Prerequisite check "CheckSystemSpace" failed.
The details are:
Required amount of space(10578.277MB) is not available.
Log file location: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-03-07_14-43-51PM_1.log
OPatch failed with error code 73
ERROR:
UtilSession failed:
Prerequisite check "CheckSystemSpace" failed.
Resolution : 
Make sure there is enough free space (at least ~11 GB for this PSU) in the mount point where the Grid Infrastructure home is installed.
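The CheckSystemSpace prerequisite can be verified up front instead of letting opatchauto fail mid-session. Below is a minimal sketch of such a pre-flight check; the Grid home path and the 11 GB threshold are taken from this environment and PSU, so adjust both for yours.

```shell
#!/bin/sh
# check_free_space DIR MIN_MB: returns 0 when DIR's filesystem has at least
# MIN_MB megabytes available, 1 otherwise.
check_free_space() {
    dir=$1
    min_mb=$2
    # df -Pm prints POSIX-format (non-wrapping) output in 1MB blocks;
    # row 2, column 4 is the available space.
    free_mb=$(df -Pm "$dir" | awk 'NR==2 {print $4}')
    if [ "$free_mb" -lt "$min_mb" ]; then
        echo "FAIL: only ${free_mb}MB free under $dir (need ${min_mb}MB)"
        return 1
    fi
    echo "OK: ${free_mb}MB free under $dir"
}

# Example: this PSU asked for ~10579MB in the Grid home mount point, so
# check for 11GB before launching opatchauto:
# check_free_space /u01/app/12.1.0/grid_1 11264
```

Running this on each node before opatchauto avoids the half-patched state described above, where the RAC home was patched but the GI home failed.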

Issue 4:
Permission problem on the directory where the patch was unzipped: the patch source files are not readable by the Oracle software owners.
[root@askmrac1 ~]# opatchauto apply /mnt/software/RAC/1201_PSU/17272829 -ocmrf /tmp/askm/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/12.1.0/grid_1
opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-08_03-28-09_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0/dbhome_1
Configuration Validation: Successful
Patch Location: /mnt/software/RAC/1201_PSU/17272829
Grid Infrastructure Patch(es): 17027533 17077442 17303297
RAC Patch(es): 17027533 17077442
Patch Validation: Successful
Stopping RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: orcl
Applying patch(es) to "/u01/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/mnt/software/RAC/1201_PSU/17272829/17027533" is already installed on "/u01/app/oracle/product/12.1.0/dbhome_1". Please rollback the existing identical patch first.
Patch "/mnt/software/RAC/1201_PSU/17272829/17077442" is already installed on "/u01/app/oracle/product/12.1.0/dbhome_1". Please rollback the existing identical patch first.
Stopping CRS ... Successful
Applying patch(es) to "/u01/app/12.1.0/grid_1" ...
Command "/u01/app/12.1.0/grid_1/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home1_patchList -local  -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1 -silent -ocmrf /tmp/askm/ocm.rsp" execution failed:
UtilSession failed:
Prerequisite check "CheckApplicable" failed.
Log file Location for the failed command: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-03-08_03-33-17AM_1.log
[WARNING] The local database instance 'ORCL1' from '/u01/app/oracle/product/12.1.0/dbhome_1' is not running. SQL changes, if any,  will not be applied. Please refer to the log file for more details.
For more details, please refer to the log file "/u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-08_03-28-09_deploy.debug.log".
Apply Summary:
opatchauto ran into some warnings during patch installation (Please see log file for details):
RAC Home: /u01/app/oracle/product/12.1.0/dbhome_1: 17027533, 17077442
Following patch(es) failed to be installed:
GI Home: /u01/app/12.1.0/grid_1: 17027533, 17077442, 17303297
opatchauto failed with error code 2.
[root@askmrac1 ~]#

Verification :
From the logfile :

/u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/17272829/opatch_gi_2014-03-08_03-28-09_deploy.log
Verifying environment and performing prerequisite checks...
Prerequisite check "CheckApplicable" failed.
The details are:
Patch 17077442:
Copy Action: Source File "/mnt/software/RAC/1201_PSU/17272829/17077442/files/bin/appvipcfg.pl" does not exists or is not readable
'oracle.crs, 12.1.0.1.0': Cannot copy file from 'appvipcfg.pl' to '/u01/app/12.1.0/grid_1/bin/appvipcfg.pl'
Copy Action: Source File "/mnt/software/RAC/1201_PSU/17272829/17077442/files/bin/oclumon.bin" does not exists or is not readable
'oracle.crs, 12.1.0.1.0': Cannot copy file from 'oclumon.bin' to '/u01/app/12.1.0/grid_1/bin/oclumon.bin'
Copy Action: Source File "/mnt/software/RAC/1201_PSU/17272829/17077442/files/bin/ologgerd" does not exists or is not readable
'oracle.crs, 12.1.0.1.0': Cannot copy file from 'ologgerd' to '/u01/app/12.1.0/grid_1/bin/ologgerd'
Copy Action: Source File "/mnt/software/RAC/1201_PSU/17272829/17077442/files/bin/osysmond.bin" does not exists or is not readable
'oracle.crs, 12.1.0.1.0': Cannot copy file from 'osysmond.bin' to '/u01/app/12.1.0/grid_1/bin/osysmond.bin'
Copy Action: Source File "/mnt/software/RAC/1201_PSU/17272829/17077442/files/crs/demo/coldfailover/act_db.pl" does not exists or is not readable
'oracle.crs, 12.1.0.1.0': Cannot copy file from 'act_db.pl' to '/u01/app/12.1.0/grid_1/crs/demo/coldfailover/act_db.pl'
Copy Action: Source File "/mnt/software/RAC/1201_PSU/17272829/17077442/files/crs/demo/coldfailover/act_listener.pl" does not exists or is not readable
'oracle.crs, 12.1.0.1.0': Cannot copy file from 'act_listener.pl' to '/u01/app/12.1.0/grid_1/crs/demo/coldfailover/act_listener.pl'
Copy Action: Source File "/mnt/software/RAC/1201_PSU/17272829/17077442/files/crs/demo/coldfailover/act_resgroup.pl" does not exists or is not readable
'oracle.crs, 12.1.0.1.0': Cannot copy file from 'act_resgroup.pl' to '/u01/app/12.1.0/grid_1/crs/demo/coldfailover/act_resgroup.pl'
Copy Action: Source File "/mnt/software/RAC/1201_PSU/17272829/17077442/files/crs/demo/demoActionScript" does not exists or is not readable
'oracle.crs, 12.1.0.1.0': Cannot copy file from 'demoActionScript' to '/u01/app/12.1.0/grid_1/crs/demo/demoActionScript'

Log file location: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-03-08_03-33-17AM_1.log
OPatch failed with error code 73
Resolution : 
Grant read permission on the directory where the PSU patch was unzipped (we used chmod -R 777).
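The CheckApplicable failure above came from patch files that root could read but the grid/oracle owners could not (opatchauto runs as root but re-invokes opatch as the home owners). A minimal sketch of a tighter fix than the blanket 777: a+rX adds read for everyone and execute only on directories and already-executable files.

```shell
# fix_patch_perms DIR: make an unzipped patch directory readable by all
# users, then print anything still not world-readable (no output = success).
fix_patch_perms() {
    chmod -R a+rX "$1" || return 1
    find "$1" ! -perm -o+r -print
}

# Example (staging path from the session above):
# fix_patch_perms /mnt/software/RAC/1201_PSU/17272829
```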

Logfile Location For PSU Patch : 
/u01/app/12.1.0/grid_1/cfgtoollogs/opatch/
/u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/

Important Note :
If patch application fails partway through, any services that were shut down as part of the patching may remain down after the error. After resolving the issue and before re-applying the patch, make sure the cluster stack and database services on that node are started.
Use the following commands to start the services:
crsctl start has  or crsctl start cluster ( don't use the -all option; only the local node's services need to be started )
srvctl start instance -d ORCL -i ORCL1

Hope This Helps
SRI

RAC : Node Deletion - Standard Cluster 12c

In this article we walk through the step-by-step procedure to delete a node from a standard 12c cluster. A standard cluster in 12c is the same as the cluster in pre-12c releases. This procedure also applies to RAC environments running release 11.2.0.2 and later.

  1. About the environment
  2. Removing a node
    a. Removing an Oracle database instance
    b. Removing RDBMS software
    c. Removing the node from the cluster
    d. Verification
    e. Removing remaining components
1. About the environment :
This environment is a three-node 12c RAC Grid Infrastructure installation with ASM as shared storage. The installation uses role separation, i.e., separate OS users for grid and oracle: Grid Infrastructure is installed as the grid user and the RDBMS as the oracle user. The three nodes in the cluster are askmrac1/2/3 and the database instances are orcl1/2/3. In this demo we will delete askmrac3 from the environment.

2. Removing a Node : 

Before starting to remove a node, make sure the following tasks are completed:
  • Remove OEM DB Console if it is installed in your environment.
  • Back up the OCR.
  • If any services list the instance to be deleted as a preferred or available instance, modify those services.
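The checklist above can be sketched as a dry-run helper that prints the commands to review before running them. The GRID_HOME path matches this environment, but the service name app_svc is a hypothetical example; run the OCR backup as root.

```shell
# preremoval_plan: print (not execute) the pre-removal housekeeping commands.
preremoval_plan() {
    GRID_HOME=${GRID_HOME:-/u01/app/12.1.0/grid_1}
    # Take a manual OCR backup before touching the cluster (as root):
    echo "$GRID_HOME/bin/ocrconfig -manualbackup"
    # Re-home a service that lists orcl3 as preferred/available
    # (app_svc is a hypothetical service name):
    echo "srvctl modify service -d ORCL -s app_svc -n -i ORCL1,ORCL2"
}

preremoval_plan
```

Remove the echo wrappers (or copy the printed lines) to execute the steps for real.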
[root@askmrac1 ~]# su - grid
[grid@askmrac1 ~]$ olsnodes
askmrac1
askmrac2
askmrac3
[grid@askmrac1 ~]$ crsctl get cluster mode status
Cluster is running in "standard" mode
[grid@askmrac1 ~]$ srvctl config gns
PRKF-1110 : Neither GNS server nor GNS client is configured on this cluster
[grid@askmrac1 ~]$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
[grid@askmrac1 ~]$ crsctl get node role config
Node 'askmrac1' configured role is 'hub'
[grid@askmrac1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode disabled
[grid@askmrac1 ~]$ asmcmd showclusterstate
Normal
[grid@askmrac1 ~]$ srvctl status asm -detail
ASM is running on askmrac3,askmrac2,askmrac1
ASM is enabled.
[grid@askmrac1 ~]$ crsctl get node role config -all
Node 'askmrac1' configured role is 'hub'
Node 'askmrac2' configured role is 'hub'
Node 'askmrac3' configured role is 'hub'
[grid@askmrac1 ~]$ crsctl get node role status -all
Node 'askmrac1' active role is 'hub'
Node 'askmrac2' active role is 'hub'
Node 'askmrac3' active role is 'hub'
[grid@askmrac1 ~]$ clear
[grid@askmrac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
               ONLINE  ONLINE       askmrac3                 STABLE
ora.OCR_VOTE.dg
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
               ONLINE  ONLINE       askmrac3                 STABLE
ora.asm
               ONLINE  ONLINE       askmrac1                 Started,STABLE
               ONLINE  ONLINE       askmrac2                 Started,STABLE
               ONLINE  ONLINE       askmrac3                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
               ONLINE  ONLINE       askmrac3                 STABLE
ora.ons
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
               ONLINE  ONLINE       askmrac3                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       askmrac2                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       askmrac2                 169.254.200.16 10.10
                                                             .10.232,STABLE
ora.askmrac1.vip
      1        ONLINE  ONLINE       askmrac1                 STABLE
ora.askmrac2.vip
      1        ONLINE  ONLINE       askmrac2                 STABLE
ora.askmrac3.vip
      1        ONLINE  ONLINE       askmrac3                 STABLE
ora.cvu
      1        ONLINE  ONLINE       askmrac2                 STABLE
ora.oc4j
      1        ONLINE  ONLINE       askmrac3                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       askmrac1                 Open,STABLE
      2        ONLINE  ONLINE       askmrac2                 Open,STABLE
      3        ONLINE  ONLINE       askmrac3                 Open,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       askmrac2                 STABLE
--------------------------------------------------------------------------------
[grid@askmrac1 ~]$ clear
[grid@askmrac1 ~]$ exit
logout
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/olsnodes -s
askmrac1        Active
askmrac2        Active
askmrac3        Active
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a6aeac6ed4274f91bfe7e12d4592e4f0 (/dev/xvdc1) [OCR_VOTE]
Located 1 voting disk(s).
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       7784
         Available space (kbytes) :     401784
         ID                       : 2080220530
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/srvctl status database -d orcl
Instance ORCL1 is running on node askmrac1
Instance ORCL2 is running on node askmrac2
Instance ORCL3 is running on node askmrac3
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/srvctl config service -d ORCL
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/srvctl status service -d orcl

2.a Removing an Oracle database instance : 
As the Oracle software owner, run the Database Configuration Assistant (DBCA) in silent mode from a node that will remain in the cluster to remove the orcl3 instance from the existing cluster database. The instance being removed by DBCA must be up and running.

[root@askmrac1 ~]# su - oracle
[oracle@askmrac1 ~]$ which dbca
/u01/app/oracle/product/12.1.0/dbhome_1/bin/dbca
[oracle@askmrac1 ~]$ dbca -silent -deleteInstance -nodeList askmrac3 -gdbName ORCL -instanceName ORCL3 -sysDBAUserName sys -sysDBAPassword oracle
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/ORCL.log" for further details.
[oracle@askmrac1 ~]$ srvctl status database -d orcl
Instance ORCL1 is running on node askmrac1
Instance ORCL2 is running on node askmrac2
[oracle@askmrac1 ~]$ srvctl config database -d orcl -v
Database unique name: ORCL
Database name: ORCL
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +OCR_VOTE/orcl/spfileorcl.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL
Database instances: ORCL1,ORCL2
Disk Groups: OCR_VOTE
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed
[oracle@askmrac1 ~]$ clear
[oracle@askmrac1 ~]$ sqlplus '/as sysdba'

SQL*Plus: Release 12.1.0.1.0 Production on Sat Mar 8 10:51:53 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select inst_id, instance_name, status, to_char(startup_time,'DD-MON-YYYY HH24:MI:SS') as "START_TIME" from gv$instance order by inst_id;

   INST_ID INSTANCE_NAME    STATUS       START_TIME
---------- ---------------- ------------ --------------------
         1 ORCL1            OPEN         08-MAR-2014 07:25:23
         2 ORCL2            OPEN         08-MAR-2014 07:25:26

Check whether the redo log thread and UNDO tablespace of the deleted instance have been removed (in this example they were). If not, remove them manually.

SQL> select thread# from v$thread where instance='ORCL';

no rows selected

SQL> select thread# from v$thread where upper(instance) = upper('orcl');

no rows selected

SQL> select group# from v$log where thread# = 3;

no rows selected

SQL> select member from v$logfile ;

MEMBER
--------------------------------------------------------------------------------
+OCR_VOTE/orcl/onlinelog/group_2.262.841375503
+OCR_VOTE/orcl/onlinelog/group_1.261.841375497
+OCR_VOTE/orcl/onlinelog/group_3.268.841375693
+OCR_VOTE/orcl/onlinelog/group_4.269.841375695

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
[oracle@askmrac1 ~]$ clear
[oracle@askmrac1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
  /u01/app/12.1.0/grid_1 on node(s) askmrac3,askmrac2,askmrac1
End points: TCP:1521
[oracle@askmrac1 ~]$ ssh 192.168.1.233
Last login: Sat Mar  8 06:17:06 2014 from askmrac1.localdomain
[oracle@askmrac3 ~]$ exit

If you find any redo log or undo references to the deleted instance still in the database, use the following commands to remove them.

alter database disable thread 3;
alter database drop logfile group 5;
alter database drop logfile group 6;
drop tablespace undotbs3 including contents and datafiles;
alter system reset undo_tablespace scope=spfile sid = 'orcl3';
alter system reset instance_number scope=spfile sid = 'orcl3';

2.b Removing RDBMS software :
On the node being deleted from the cluster, run the following command to update the node list for the local Oracle home:

[root@askmrac3 ~]# su - oracle
[oracle@askmrac3 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0/dbhome_1
[oracle@askmrac3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@askmrac3 bin]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/oui/bin
[oracle@askmrac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={askmrac3}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2044 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@askmrac3 bin]$

Now run the following command on node 3 to deinstall the Oracle home from this node.

[oracle@askmrac3 ~]$ cd $ORACLE_HOME/deinstall
[oracle@askmrac3 deinstall]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/deinstall
[oracle@askmrac3 deinstall]$ ./deinstall -local

On any node that remains in the cluster, run the following command:

[oracle@askmrac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@askmrac1 bin]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/oui/bin
[oracle@askmrac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={askmrac1,askmrac2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2038 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@askmrac1 bin]$

Now verify the inventories and make sure node 3 has been removed from the database home's node list.

On askmrac1 or 2 :
[oracle@askmrac1 bin]$ cd /u01/app/oraInventory/ContentsXML/
[oracle@askmrac1 ContentsXML]$ ls
comps.xml  inventory.xml  libs.xml
[oracle@askmrac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.1.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGrid11gR2" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraRAC11gR2" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0/grid_1" TYPE="O" IDX="3" CRS="true">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="4">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@askmrac1 ContentsXML]$

On askmrac3 :
[oracle@askmrac3 bin]$ cd /u01/app/oraInventory/ContentsXML
[oracle@askmrac3 ContentsXML]$ ls
comps.xml  inventory.xml  libs.xml
[oracle@askmrac3 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.1.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGrid11gR2" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraRAC11gR2" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0/grid_1" TYPE="O" IDX="3" CRS="true">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="4">
   <NODE_LIST>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@askmrac3 ContentsXML]$

2.c Removing Node From Cluster : 
Run the following command as root to determine whether the node you want to delete is active and whether it is pinned.
[root@askmrac1 ~]# export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
[root@askmrac1 ~]# export GRID_HOME=/u01/app/12.1.0/grid_1
[root@askmrac1 ~]# $GRID_HOME/bin/olsnodes -s -t
askmrac1        Active  Unpinned
askmrac2        Active  Unpinned
askmrac3        Active  Unpinned
[root@askmrac1 ~]#

[root@askmrac3 ~]# export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
[root@askmrac3 ~]# export GRID_HOME=/u01/app/12.1.0/grid_1
[root@askmrac3 ~]# $GRID_HOME/bin/olsnodes -s -t
askmrac1        Active  Unpinned
askmrac2        Active  Unpinned
askmrac3        Active  Unpinned
[root@askmrac3 ~]#

Disable the Oracle Clusterware applications and daemons running on the node to be deleted from the cluster. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted. 

[root@askmrac3 ~]# cd $GRID_HOME/crs/install
[root@askmrac3 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network 1 exists
Subnet IPv4: 192.168.1.0/255.255.255.0/eth0, static
Subnet IPv6:
VIP exists: network number 1, hosting node askmrac1
VIP Name: askmrac1-vip
VIP IPv4 Address: 192.168.1.234
VIP IPv6 Address:
VIP exists: network number 1, hosting node askmrac2
VIP Name: askmrac2-vip
VIP IPv4 Address: 192.168.1.235
VIP IPv6 Address:
VIP exists: network number 1, hosting node askmrac3
VIP Name: askmrac3-vip
VIP IPv4 Address: 192.168.1.236
VIP IPv6 Address:
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'askmrac3'
CRS-2673: Attempting to stop 'ora.crsd' on 'askmrac3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'askmrac3'
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.oc4j' on 'askmrac3'
CRS-2677: Stop of 'ora.oc4j' on 'askmrac3' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'askmrac1'
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'askmrac3'
CRS-2677: Stop of 'ora.asm' on 'askmrac3' succeeded
CRS-2676: Start of 'ora.oc4j' on 'askmrac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'askmrac3' has completed
CRS-2677: Stop of 'ora.crsd' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.evmd' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.crf' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'askmrac3'
CRS-2677: Stop of 'ora.storage' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'askmrac3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.crf' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.asm' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'askmrac3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'askmrac3'
CRS-2677: Stop of 'ora.cssd' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'askmrac3'
CRS-2677: Stop of 'ora.gipcd' on 'askmrac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'askmrac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2014/03/08 11:22:19 CLSRSC-336: Successfully deconfigured Oracle clusterware stack on this node

From a node that is to remain a member of the Oracle RAC, run the following command from the Grid_home/bin directory as root to update the Clusterware configuration to delete the node from the cluster.

[root@askmrac1 ~]# $GRID_HOME/bin/crsctl delete node -n askmrac3
CRS-4661: Node askmrac3 successfully deleted.
[root@askmrac1 ~]# $GRID_HOME/bin/olsnodes -s -t
askmrac1        Active  Unpinned
askmrac2        Active  Unpinned
[root@askmrac1 ~]#

As the Oracle Grid Infrastructure owner, execute runInstaller from Grid_home/oui/bin on the node being removed to update the inventory.

[grid@askmrac3 ~]$ cd $ORACLE_HOME/oui/bin
[grid@askmrac3 bin]$ pwd
/u01/app/12.1.0/grid_1/oui/bin
[grid@askmrac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid_1 "CLUSTER_NODES={askmrac3}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[grid@askmrac3 bin]$

Run deinstall as the Grid Infrastructure software owner from the node to be removed in order to delete the Oracle Grid Infrastructure software.

Pay extra attention when responding to the prompts. When supplying listener values, specify only the local listener; do not specify the SCAN listener for deletion.

On Node3:(as grid s/w owner)
[grid@askmrac3 ~]$ cd /u01/app/12.1.0/grid_1/deinstall/
[grid@askmrac3 deinstall]$ pwd
/u01/app/12.1.0/grid_1/deinstall
[grid@askmrac3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

From one of the nodes that is to remain part of the cluster, execute runInstaller (without the -local option) as the Grid Infrastructure software owner to update the inventories with a list of the nodes that are to remain in the cluster.

On Node1 or Node2:
[grid@askmrac1 ~]$ cd $ORACLE_HOME/oui/bin
[grid@askmrac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid_1 "CLUSTER_NODES={askmrac1,askmrac2}" CRS=TRUE -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2038 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[grid@askmrac1 bin]$ 

2.d Verification : 
[grid@askmrac1 bin]$ cd /u01/app/oraInventory/ContentsXML/
[grid@askmrac1 ContentsXML]$ clear
[grid@askmrac1 ContentsXML]$ ls
comps.xml  inventory.xml  libs.xml
[grid@askmrac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.1.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGrid11gR2" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraRAC11gR2" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0/grid_1" TYPE="O" IDX="3" CRS="true">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="4">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[grid@askmrac1 ContentsXML]$ clear
[grid@askmrac1 ContentsXML]$ cd
[grid@askmrac1 ~]$ which cluvfy
/u01/app/12.1.0/grid_1/bin/cluvfy
[grid@askmrac1 ~]$ cluvfy stage -post nodedel -n askmrac3 -verbose

Performing post-checks for node removal

Checking CRS integrity...
The Oracle Clusterware is healthy on node "askmrac1"

CRS integrity check passed

Clusterware version consistency passed.
Result:
Node removal check passed

Post-check for node removal was successful.
[grid@askmrac1 ~]$ olsnodes -s -t
askmrac1        Active  Unpinned
askmrac2        Active  Unpinned
[grid@askmrac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
ora.OCR_VOTE.dg
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
ora.asm
               ONLINE  ONLINE       askmrac1                 Started,STABLE
               ONLINE  ONLINE       askmrac2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
ora.ons
               ONLINE  ONLINE       askmrac1                 STABLE
               ONLINE  ONLINE       askmrac2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       askmrac2                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       askmrac2                 169.254.200.16 10.10
                                                             .10.232,STABLE
ora.askmrac1.vip
      1        ONLINE  ONLINE       askmrac1                 STABLE
ora.askmrac2.vip
      1        ONLINE  ONLINE       askmrac2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       askmrac2                 STABLE
ora.oc4j
      1        ONLINE  ONLINE       askmrac1                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       askmrac1                 Open,STABLE
      2        ONLINE  ONLINE       askmrac2                 Open,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       askmrac2                 STABLE
--------------------------------------------------------------------------------
[grid@askmrac1 ~]$ crsctl status res -t | grep -i askmrac3
[grid@askmrac1 ~]$
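The `olsnodes` and `crsctl ... | grep` checks above can be combined into one small verification script that fails loudly if the removed node is still registered anywhere. A sketch, assuming the grid user's environment puts `olsnodes` and `crsctl` on the PATH:

```shell
# Sketch: confirm the removed node is gone from both the cluster node
# list and the registered resources. Exits non-zero if it still appears.
REMOVED_NODE=askmrac3
if olsnodes -s -t | grep -qi "$REMOVED_NODE"; then
    echo "$REMOVED_NODE still listed by olsnodes" >&2
    exit 1
fi
if crsctl status res -t | grep -qi "$REMOVED_NODE"; then
    echo "$REMOVED_NODE still has resources registered" >&2
    exit 1
fi
echo "$REMOVED_NODE fully removed from the cluster"
```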


On Node 3:
[grid@askmrac3 bin]$ cd /u01/app/oraInventory/ContentsXML/
[grid@askmrac3 ContentsXML]$ ls
comps.xml  inventory.xml  libs.xml
[grid@askmrac3 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.1.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGrid11gR2" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraRAC11gR2" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="askmrac1"/>
      <NODE NAME="askmrac2"/>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0/grid_1" TYPE="O" IDX="3" CRS="true">
   <NODE_LIST>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="4">
   <NODE_LIST>
      <NODE NAME="askmrac3"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[grid@askmrac3 ContentsXML]$ 

2.e Removing remaining components : 
  • Remove ASMLib if you are using ASMLib for ASM storage.
  • Remove the udev rules if you are using udev rules for ASM storage.
  • Remove the oracle and grid OS users, along with their corresponding groups.
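On RHEL/OEL-style systems, the cleanup bullets above might look like the following. This is a hypothetical sketch: the package names, the udev rules filename, and the group names are common defaults rather than values taken from this article, so verify each one against your own setup before running anything as root on the removed node:

```shell
# Run as root on askmrac3 only, after the node is out of the cluster.

# ASMLib cleanup (only if ASMLib was used for ASM storage):
oracleasm exit                        # unload the ASMLib driver
rpm -e oracleasmlib oracleasm-support # package names are typical defaults

# udev cleanup (only if udev rules were used instead; the filename below
# is a common convention, adjust to whatever rules file your site created):
rm -f /etc/udev/rules.d/99-oracle-asmdevices.rules
udevadm control --reload-rules

# Remove the software owners and their groups (typical names shown):
userdel -r oracle
userdel -r grid
groupdel dba
groupdel oinstall
```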

Hope this helps.
SRI

 