RAC : Deleting Leaf Node From Flex Cluster - 12101

Written By askMLabs on Monday, March 10, 2014 | 12:49 PM

In this article, we will walk through the step-by-step procedure to delete a leaf node from a Flex Cluster. The Flex Cluster architecture was introduced in database version 12.1.0.1. A leaf node in a Flex Cluster is not attached to the storage and does not run a database instance, but it is still a member of the cluster and runs all the high availability cluster services. During the 12c beta, Oracle indicated that even a leaf node could host a database instance, but release 12.1.0.1 still does not support running a database on a leaf node.

So as of the time of writing this article (version 12.1.0.1), a leaf node is not supported to run database instances, but it can be used to provide high availability to other application services such as an application server, E-Business Suite, etc.

During the demo, I have tried to highlight this point as well.
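
As a side note, node roles in a Flex Cluster can be queried and changed with crsctl. The commands below are only a rough sketch from my 12.1 notes; please verify the exact syntax (the set command in particular) against the documentation before using it.

crsctl get node role config -all ( To see the configured role, hub or leaf, of every node )
crsctl set node role leaf ( Run locally on a node, typically as root, to change its configured role; the new role takes effect only after the clusterware stack is restarted on that node )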



Having covered the above theory about leaf nodes, note that the procedure for deleting and adding a leaf node in a RAC Flex Cluster is different from the procedure we used up to the 11gR2 releases.

Before proceeding with the procedure, let me explain a little about the environment where I am going to perform the leaf node deletion. I have a 4-node cluster environment running as a 12c Flex Cluster, with 3 hub nodes and one leaf node, and I am going to delete the existing leaf node from this 4-node cluster. rac12cnode1/2/3 are my hub nodes and rac12cnode4 is the leaf node.

1) Flex cluster status ( execute the following commands as the clusterware owner, i.e. grid )
olsnodes -s -t  ( To list the cluster nodes in my environment )
crsctl get cluster mode config ( To know the cluster mode, i.e. flex or standard )
crsctl get cluster mode status ( To know the status of the cluster )
crsctl get node role config -all ( To know each node's configured role, i.e. hub or leaf )
crsctl get node role status -all  ( To know the status of each node's role )
crsctl status res -t ( To know the complete cluster status )
srvctl status asm -detail ( Note from this command that ASM runs on the hub nodes, not on the leaf node )
srvctl status database -d orcl ( Note : the database is running only on the hub nodes, not on the leaf node )
srvctl config gns ( GNS is enabled in my environment )
oifcfg getif ( Note that I am using a separate network for ASM in my environment )

Session Log From Environment :
[root@rac12cnode1 ~]# su - grid
[grid@rac12cnode1 ~]$ olsnodes -s -t
rac12cnode1     Active  Unpinned
rac12cnode2     Active  Unpinned
rac12cnode3     Active  Unpinned
rac12cnode4     Active  Unpinned
[grid@rac12cnode1 ~]$ crsctl get cluster mode config
Cluster is configured as "flex"
[grid@rac12cnode1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
Node 'rac12cnode3' configured role is 'hub'
Node 'rac12cnode4' configured role is 'leaf'
[grid@rac12cnode1 ~]$ crsctl get node role status -all
Node 'rac12cnode1' active role is 'hub'
Node 'rac12cnode2' active role is 'hub'
Node 'rac12cnode3' active role is 'hub'
Node 'rac12cnode4' active role is 'leaf'
[grid@rac12cnode1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.LISTENER_LEAF.lsnr
               OFFLINE OFFLINE      rac12cnode4              STABLE
ora.net1.network
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.ons
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.asm
      1        ONLINE  ONLINE       rac12cnode1              STABLE
      2        ONLINE  ONLINE       rac12cnode2              STABLE
      3        ONLINE  ONLINE       rac12cnode3              STABLE
ora.cvu
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.gns
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.gns.vip
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.orcl.db
      1        ONLINE  ONLINE       rac12cnode1              Open,STABLE
      2        ONLINE  ONLINE       rac12cnode2              Open,STABLE
      3        ONLINE  ONLINE       rac12cnode3              Open,STABLE
ora.rac12cnode1.vip
      1        ONLINE  ONLINE       rac12cnode1              STABLE
ora.rac12cnode2.vip
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.rac12cnode3.vip
      1        ONLINE  ONLINE       rac12cnode3              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac12cnode2              STABLE
--------------------------------------------------------------------------------
[grid@rac12cnode1 ~]$ clear
[grid@rac12cnode1 ~]$ srvctl status asm -details
PRKO-2002 : Invalid command line option: -details
[grid@rac12cnode1 ~]$ srvctl status asm -detail
ASM is running on rac12cnode1,rac12cnode2,rac12cnode3
ASM is enabled.
[grid@rac12cnode1 ~]$ srvctl status database -d orcl
Instance ORCL1 is running on node rac12cnode1
Instance ORCL2 is running on node rac12cnode2
Instance ORCL3 is running on node rac12cnode3
[grid@rac12cnode1 ~]$

Session Log On rac12cnode4 : ( which confirms that this node is part of the Flex Cluster )
[root@rac12cnode4 ~]# su - grid
[grid@rac12cnode4 ~]$ ps ucx
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
grid      1299  0.0  1.0 712564 31572 ?        Ssl  Mar07   1:17 oraagent.bin
grid      1312  0.0  0.6 193248 19864 ?        Ssl  Mar07   0:04 mdnsd.bin
grid      1314  0.5  1.0 683336 32984 ?        Ssl  Mar07  14:35 evmd.bin
grid      1327  0.0  1.0 539796 30812 ?        Ssl  Mar07   0:40 gpnpd.bin
grid      1336  2.2 12.4 955216 383348 ?       Sl   Mar07  58:39 gipcd.bin
grid      1350  0.0  0.4 189812 14680 ?        S    Mar07   0:00 evmlogger.bin
grid      1407  0.3  4.2 683808 131232 ?       SLl  Mar07   9:20 ocssdrim.bin
grid      8210  0.0  0.0 108384  1808 pts/0    S+   07:15   0:00 bash
grid      8353  0.5  0.0 108384  1716 pts/1    S    07:53   0:00 bash
grid      8376  0.0  0.0 110232  1116 pts/1    R+   07:53   0:00 ps
[grid@rac12cnode4 ~]$ /u01/app/12.1.0/grid/bin/crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.LISTENER_LEAF.lsnr
               OFFLINE OFFLINE      rac12cnode4              STABLE
ora.net1.network
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.ons
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.asm
      1        ONLINE  ONLINE       rac12cnode1              STABLE
      2        ONLINE  ONLINE       rac12cnode2              STABLE
      3        ONLINE  ONLINE       rac12cnode3              STABLE
ora.cvu
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.gns
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.gns.vip
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.orcl.db
      1        ONLINE  ONLINE       rac12cnode1              Open,STABLE
      2        ONLINE  ONLINE       rac12cnode2              Open,STABLE
      3        ONLINE  ONLINE       rac12cnode3              Open,STABLE
ora.rac12cnode1.vip
      1        ONLINE  ONLINE       rac12cnode1              STABLE
ora.rac12cnode2.vip
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.rac12cnode3.vip
      1        ONLINE  ONLINE       rac12cnode3              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac12cnode2              STABLE
--------------------------------------------------------------------------------
[grid@rac12cnode4 ~]$ /u01/app/12.1.0/grid/bin/srvctl status database -d orcl
Instance ORCL1 is running on node rac12cnode1
Instance ORCL2 is running on node rac12cnode2
Instance ORCL3 is running on node rac12cnode3
[grid@rac12cnode4 ~]$

You can also verify the contents of the /u01/app/oraInventory/ContentsXML/inventory.xml file on each node before and after the node deletion. This was one of the confirmation steps we used to perform in 11gR2.
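
As a minimal sketch of that inventory check (assuming the default central inventory location shown above), the following grep lists the node entries recorded in the local inventory; run it on each node before and after the deletion and compare the output:

grep -i "NODE NAME" /u01/app/oraInventory/ContentsXML/inventory.xml  ( Lists the nodes against which the Grid home is registered in that node's inventory )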

2) Run the following command on the node you want to delete; in our case it is rac12cnode4. The -updateNodeList invocation with CLUSTER_NODES={rac12cnode4} updates the local inventory so that the Grid home on this node is associated only with the node being removed.
[grid@rac12cnode4 ~]$ cd /u01/app/12.1.0/grid/
[grid@rac12cnode4 grid]$ echo $ORACLE_HOME
[grid@rac12cnode4 grid]$ export ORACLE_HOME=/u01/app/12.1.0/grid
[grid@rac12cnode4 grid]$ cd ./oui/bin
[grid@rac12cnode4 bin]$ clear
[grid@rac12cnode4 bin]$ pwd
/u01/app/12.1.0/grid/oui/bin
[grid@rac12cnode4 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={rac12cnode4}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[grid@rac12cnode4 bin]$

3) Now run the deinstall utility on the node you want to delete from the cluster. I do not have a shared Grid Infrastructure home, so I am executing the following command. If you have a shared Grid home, please refer to the documentation for the correct commands to run.
NOTE : It is very important to run the deinstall script on the node to be deleted in local mode. If we do not specify the -local option with deinstall, it will deinstall the complete cluster.
[grid@rac12cnode4 bin]$ pwd
/u01/app/12.1.0/grid/oui/bin
[grid@rac12cnode4 bin]$ cd ../../deinstall/
[grid@rac12cnode4 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/12.1.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/12.1.0/grid
The following nodes are part of this cluster: rac12cnode4,rac12cnode1,rac12cnode2,rac12cnode3
Checking for sufficient temp space availability on node(s) : 'rac12cnode4,rac12cnode1,rac12cnode2,rac12cnode3'
## [END] Install check configuration ##
Traces log file: /u01/app/oraInventory/logs//crsdc_2014-03-09_08-13-20AM.log
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2014-03-09_08-13-22-AM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2014-03-09_08-13-22-AM.log
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2014-03-09_08-13-22-AM.log
Database Check Configuration END
######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/12.1.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac12cnode4,rac12cnode1,rac12cnode2,rac12cnode3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac12cnode4'.
Oracle Home selected for deinstall is: /u01/app/12.1.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-03-09_08-12-29-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-03-09_08-12-29-AM.err'
######################## CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2014-03-09_08-13-48-AM.log
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2014-03-09_08-13-48-AM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2014-03-09_08-13-48-AM.log
Network Configuration clean config END

---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "rac12cnode4".
/tmp/deinstall2014-03-09_08-09-52AM/perl/bin/perl -I/tmp/deinstall2014-03-09_08-09-52AM/perl/lib -I/tmp/deinstall2014-03-09_08-09-52AM/crs/install /tmp/deinstall2014-03-09_08-09-52AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2014-03-09_08-09-52AM/response/deinstall_OraGrid12c.rsp"
Press Enter after you finish running the above commands
<----------------------------------------

At this point it prompts you to run the deconfiguration command on the node to be deleted as the root user. Run the specified command from another session on rac12cnode4.

From a different session on node 4 as the root user :
[root@rac12cnode4 ~]# /tmp/deinstall2014-03-09_08-09-52AM/perl/bin/perl -I/tmp/deinstall2014-03-09_08-09-52AM/perl/lib -I/tmp/deinstall2014-03-09_08-09-52AM/crs/install /tmp/deinstall2014-03-09_08-09-52AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2014-03-09_08-09-52AM/response/deinstall_OraGrid12c.rsp"
Using configuration parameter file: /tmp/deinstall2014-03-09_08-09-52AM/response/deinstall_OraGrid12c.rsp
Network 1 exists
Subnet IPv4: 192.168.1.0/255.255.255.0/eth0, static
Subnet IPv6:
VIP exists: network number 1, hosting node rac12cnode1
VIP Name: rac12cnode1-vip
VIP IPv4 Address: 192.168.1.135
VIP IPv6 Address:
VIP exists: network number 1, hosting node rac12cnode2
VIP Name: rac12cnode2-vip
VIP IPv4 Address: 192.168.1.136
VIP IPv6 Address:
VIP exists: network number 1, hosting node rac12cnode3
VIP Name: rac12cnode3-vip
VIP IPv4 Address: 192.168.1.137
VIP IPv6 Address:
ONS exists: Local port 6100, remote port 6200, EM port 2016
PRCC-1017 : ons was already stopped on rac12cnode4
PRCR-1005 : Resource ora.ons is already stopped
PRKO-2440 : Network resource is already stopped.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cnode4'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac12cnode4'
CRS-2677: Stop of 'ora.crsd' on 'rac12cnode4' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac12cnode4'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac12cnode4'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac12cnode4'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac12cnode4'
CRS-2673: Attempting to stop 'ora.storage' on 'rac12cnode4'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac12cnode4'
CRS-2677: Stop of 'ora.storage' on 'rac12cnode4' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac12cnode4' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac12cnode4' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac12cnode4' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac12cnode4' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac12cnode4' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac12cnode4'
CRS-2677: Stop of 'ora.cssd' on 'rac12cnode4' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac12cnode4'
CRS-2677: Stop of 'ora.gipcd' on 'rac12cnode4' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cnode4' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2014/03/09 08:16:54 CLSRSC-336: Successfully deconfigured Oracle clusterware stack on this node
[root@rac12cnode4 ~]#


Now press Enter in the previous session, where it prompted you to run the above command.

---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "rac12cnode4".
/tmp/deinstall2014-03-09_08-09-52AM/perl/bin/perl -I/tmp/deinstall2014-03-09_08-09-52AM/perl/lib -I/tmp/deinstall2014-03-09_08-09-52AM/crs/install /tmp/deinstall2014-03-09_08-09-52AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2014-03-09_08-09-52AM/response/deinstall_OraGrid12c.rsp"
Press Enter after you finish running the above commands
<----------------------------------------
Remove the directory: /tmp/deinstall2014-03-09_08-09-52AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/12.1.0/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/12.1.0/grid' on the local node : Done
The Oracle Base directory '/u01/app/grid' will not be removed on local node. The directory is not empty.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END

## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2014-03-09_08-09-52AM' on node 'rac12cnode4'
Clean install operation removing temporary directory '/tmp/deinstall2014-03-09_08-09-52AM' on node 'rac12cnode1,rac12cnode2,rac12cnode3'
## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Oracle Clusterware is stopped and successfully de-configured on node "rac12cnode4"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/12.1.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/12.1.0/grid' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
[grid@rac12cnode4 deinstall]$

4) Now verify the leaf node deletion in the Flex Cluster.
Verify on rac12cnode4
[grid@rac12cnode4 deinstall]$ cd
[grid@rac12cnode4 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1020(asmadmin),1021(asmdba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[grid@rac12cnode4 ~]$ ps ucx
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
grid      8210  0.0  0.0 108384  1808 pts/0    S+   07:15   0:00 bash
grid      8353  0.0  0.0 108384  1816 pts/1    S    07:53   0:00 bash
grid     15639  0.0  0.0 110236  1124 pts/1    R+   08:21   0:00 ps
[grid@rac12cnode4 ~]$ cd /u01/app/12.1.0/
[grid@rac12cnode4 12.1.0]$ ls
[grid@rac12cnode4 12.1.0]$ 
Verify from any other node in the cluster 
[grid@rac12cnode1 ~]$ olsnodes -s -t
rac12cnode1     Active  Unpinned
rac12cnode2     Active  Unpinned
rac12cnode3     Active  Unpinned
[grid@rac12cnode1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode
[grid@rac12cnode1 ~]$ srvctl config gns
GNS is enabled.
[grid@rac12cnode1 ~]$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
eth2  10.11.0.0  global  asm
[grid@rac12cnode1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
[grid@rac12cnode1 ~]$ asmcmd showclusterstate
Normal
[grid@rac12cnode1 ~]$ clear
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
Node 'rac12cnode3' configured role is 'hub'
[grid@rac12cnode1 ~]$ crsctl get node role status -all
Node 'rac12cnode1' active role is 'hub'
Node 'rac12cnode2' active role is 'hub'
Node 'rac12cnode3' active role is 'hub'
[grid@rac12cnode1 ~]$
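
As an optional extra sanity check (not part of the documented procedure, just a quick filter over the full resource listing), the command below, run from a surviving hub node, should return nothing if no clusterware resource still references the deleted node:

crsctl status res -t | grep -i rac12cnode4  ( Empty output means nothing is left referencing rac12cnode4 )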

5) Final verification check with our great tool cluvfy :
cluvfy stage -post nodedel -n rac12cnode4 -verbose
[grid@rac12cnode1 ~]$ cluvfy stage -post nodedel -n rac12cnode4 -verbose
Performing post-checks for node removal
Checking CRS integrity...
The Oracle Clusterware is healthy on node "rac12cnode1"
CRS integrity check passed
Clusterware version consistency passed.
Result:
Node removal check passed
Post-check for node removal was successful.
[grid@rac12cnode1 ~]$

Hope this helps
SRI
