RAC : Leaf Node Addition To Flex Cluster - 12101

Written By askMLabs on Tuesday, March 11, 2014 | 8:20 AM

In this article, we will see the step-by-step procedure to add a leaf node to a Flex Cluster. The Flex Cluster was introduced in database version 12.1.0.1. A leaf node in a Flex Cluster is not attached directly to the storage, and no database runs on this node; it is nevertheless part of the cluster and runs all the high availability cluster services. During the beta of the 12c database, Oracle initially indicated that even a leaf node could host a database instance, but release 12.1.0.1 still does not support running a database on a leaf node.

So as of this writing (version 12.1.0.1), a leaf node is not supported to run database instances, but it can be used to provide high availability to other application services such as an application server, E-Business Suite, etc.


Having covered the above theory about leaf nodes, note that the procedure for adding or deleting a leaf node in a RAC Flex Cluster is different from the procedure we have used up through the 11gR2 versions.

Before proceeding with the procedure, let me explain a little about the environment where I am going to perform the leaf node addition. I have a 3-node cluster environment with the 12c Flex Cluster enabled: 3 hub nodes and 0 leaf nodes. I am going to add a leaf node to this 3-node Flex Cluster. rac12cnode1/2/3 are my hub nodes, and rac12cnode4 is the leaf node to be added to the cluster.

We need to prepare the node before adding it to the cluster. I am listing some important points here for your reference; complete them before starting the actual node addition. (A brief command sketch for a few of these steps follows the list.)
  1. Make physical connections to the node.
  2. Install the OS on the node. (Make sure it is the same version as on all other cluster nodes.)
  3. Create the necessary user accounts. We are using role separation, so create the grid and oracle users.
  4. Complete all network configuration. Update your DNS or /etc/hosts file to properly resolve the new node in the cluster. Please note that a leaf node may or may not have a VIP; it is not compulsory.
  5. Make sure time synchronization is configured, using either CTSS or ntpd.
  6. Create any directory structure needed on the new server.
  7. Complete the SSH setup (use the automated script available at $GRID_HOME/oui/bin/runSSHSetup.sh).
  8. Set kernel parameters as per the documentation.
  9. It is not mandatory for shared storage to be attached to a leaf node in the cluster.
  10. Complete all the required package installations.
  11. Verify that you have enough free space for the Grid Infrastructure Management Repository. (This step is optional; it is required only if you have Cluster Health Monitor configured in your environment.)

( Refer to my RAC installation articles to complete all the above steps )
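
As a quick illustration of steps 3, 7 and 8, here is a minimal sketch of the kind of commands involved on the new node; the group/user IDs and the kernel parameter values are assumptions taken from my environment (they also show up in the cluvfy output later in this article), so adjust them to match yours:

groupadd -g 1000 oinstall    ( run these as root on rac12cnode4 )
groupadd -g 1020 asmadmin
groupadd -g 1021 asmdba
groupadd -g 1031 dba
useradd -u 1100 -g oinstall -G asmadmin,asmdba grid
useradd -g oinstall -G dba oracle
echo "kernel.sem = 250 32000 100 128" >> /etc/sysctl.conf    ( one example parameter; set the full list as per the documentation )
sysctl -p    ( load the new kernel parameter values )
$GRID_HOME/oui/bin/runSSHSetup.sh    ( run as grid from an existing node to distribute the SSH keys; see the script's usage for its options )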

1) Flex cluster status ... (execute the following commands as the clusterware owner, i.e. the grid user)

olsnodes -s -t  ( To list the cluster nodes in my environment )
crsctl get cluster mode config  ( To know the configured cluster mode, i.e. flex or standard )
crsctl get cluster mode status  ( To know the running mode of the cluster )
crsctl get node role config -all  ( To know each node's configured role, i.e. hub or leaf )
crsctl get node role status -all  ( To know each node's active role )
crsctl status res -t  ( To see the complete cluster status )
srvctl status asm -detail  ( Note from this command that ASM runs on the hub nodes, not on leaf nodes )
srvctl status database -d orcl  ( Note: the database runs only on the hub nodes, not on leaf nodes )
srvctl config gns  ( GNS is enabled in my environment )
oifcfg getif  ( Note that I am using a separate network for ASM in my environment )

Session Log:

[grid@rac12cnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1020(asmadmin),1021(asmdba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[grid@rac12cnode1 ~]$ olsnodes
rac12cnode1
rac12cnode2
rac12cnode3
[grid@rac12cnode1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
Node 'rac12cnode3' configured role is 'hub'
[grid@rac12cnode1 ~]$ crsctl get node role status -all
Node 'rac12cnode1' active role is 'hub'
Node 'rac12cnode2' active role is 'hub'
Node 'rac12cnode3' active role is 'hub'
[grid@rac12cnode1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
[grid@rac12cnode1 ~]$ srvctl config asm
ASM home: /u01/app/12.1.0/grid
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[grid@rac12cnode1 ~]$ srvctl status asm -detail
ASM is running on rac12cnode1,rac12cnode2,rac12cnode3
ASM is enabled.
[grid@rac12cnode1 ~]$
[grid@rac12cnode1 ~]$ oclumon manage -get repsize
CRS-9003-Cluster Health Monitor is not supported in this configuration.
[grid@rac12cnode1 ~]$

2) Verify the node's readiness to be added to the cluster with the cluvfy utility ...
cluvfy stage -post hwos -n rac12cnode4
cluvfy comp peer -refnode rac12cnode1 -n rac12cnode4 -orainv oinstall -osdba dba -verbose
cluvfy stage -pre nodeadd -n rac12cnode4 -fixup -verbose
Session log in my environment for the above commands:
[grid@rac12cnode1 ~]$ cluvfy stage -post hwos -n rac12cnode4
Performing post-checks for hardware and operating system setup
Checking node reachability...
Node reachability check passed from node "rac12cnode1"

Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "10.11.0.0"
Node connectivity passed for subnet "10.11.0.0" with node(s) rac12cnode4
TCP connectivity check passed for subnet "10.11.0.0"

Check: Node connectivity using interfaces on subnet "10.10.10.0"
Node connectivity passed for subnet "10.10.10.0" with node(s) rac12cnode4
TCP connectivity check passed for subnet "10.10.10.0"

Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) rac12cnode4
TCP connectivity check passed for subnet "192.168.1.0"

Node connectivity check passed
Checking multicast communication...
Checking subnet "10.10.10.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "10.10.10.0" for multicast communication with multicast group "224.0.0.251" passed.
Checking subnet "10.11.0.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "10.11.0.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed
Checking shared storage accessibility...
WARNING:
rac12cnode4:PRVF-7017 : Package cvuqdisk not installed
        rac12cnode4
No shared storage found

Shared storage check failed on nodes "rac12cnode4"
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Post-check for hardware and operating system setup was unsuccessful on all the nodes.
[grid@rac12cnode1 ~]$
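
The PRVF-7017 warning above indicates that the cvuqdisk package is missing on the new node. The RPM ships with the Grid Infrastructure software, so one way to clear the warning is to copy it from an existing node and install it as root. A minimal sketch, assuming the Grid home path from my environment (the remaining checks below were run before this fix, which is why the same failure appears again in the pre-nodeadd output):

scp /u01/app/12.1.0/grid/cv/rpm/cvuqdisk-1.0.9-1.rpm rac12cnode4:/tmp/    ( run as grid from rac12cnode1 )
export CVUQDISK_GRP=oinstall    ( run as root on rac12cnode4 )
rpm -ivh /tmp/cvuqdisk-1.0.9-1.rpm    ( run as root on rac12cnode4 )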
[grid@rac12cnode1 ~]$ cluvfy comp peer -refnode rac12cnode1 -n rac12cnode4 -orainv oinstall -osdba dba -verbose
Verifying peer compatibility
Checking peer compatibility...
Compatibility check: Physical memory [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   2.9326GB (3075076.0KB)    2.9326GB (3075076.0KB)    matched
Physical memory check passed
Compatibility check: Available memory [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   2.7669GB (2901284.0KB)    908.1406MB (929936.0KB)   matched
Available memory check passed
Compatibility check: Swap space [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   2GB (2097148.0KB)         2GB (2097148.0KB)         matched
Swap space check passed
Compatibility check: Free disk space for "/usr" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   7.5781GB (7946240.0KB)    7.6396GB (8010752.0KB)    mismatched
Free disk space check failed
Compatibility check: Free disk space for "/var" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   7.5781GB (7946240.0KB)    7.6396GB (8010752.0KB)    mismatched
Free disk space check failed
Compatibility check: Free disk space for "/etc" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   7.5781GB (7946240.0KB)    7.6396GB (8010752.0KB)    mismatched
Free disk space check failed
Compatibility check: Free disk space for "/sbin" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   7.5781GB (7946240.0KB)    7.6396GB (8010752.0KB)    mismatched
Free disk space check failed
Compatibility check: Free disk space for "/tmp" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   7.5781GB (7946240.0KB)    7.6396GB (8010752.0KB)    mismatched
Free disk space check failed
Compatibility check: Free disk space for "/u01/app/12.1.0/grid" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   24.4902GB (2.5679872E7KB)  17.9102GB (1.878016E7KB)  matched
Free disk space check passed
Compatibility check: User existence for "grid" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   grid(1100)                grid(1100)                matched
User existence for "grid" check passed
Compatibility check: Group existence for "oinstall" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   oinstall(1000)            oinstall(1000)            matched
Group existence for "oinstall" check passed
Compatibility check: Group existence for "dba" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   dba(1031)                 dba(1031)                 matched
Group existence for "dba" check passed
Compatibility check: Group membership for "grid" in "oinstall (Primary)" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   yes                       yes                       matched
Group membership for "grid" in "oinstall (Primary)" check passed
Compatibility check: Group membership for "grid" in "dba" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   no                        no                        matched
Group membership for "grid" in "dba" check passed
Compatibility check: Run level [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   3                         3                         matched
Run level check passed
Compatibility check: System architecture [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   x86_64                    x86_64                    matched
System architecture check passed
Compatibility check: Kernel version [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   2.6.39-400.109.1.el6uek.x86_64  2.6.39-400.109.1.el6uek.x86_64  matched
Kernel version check passed
Compatibility check: Kernel param "semmsl" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   250                       250                       matched
Kernel param "semmsl" check passed
Compatibility check: Kernel param "semmns" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   32000                     32000                     matched
Kernel param "semmns" check passed
Compatibility check: Kernel param "semopm" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   100                       100                       matched
Kernel param "semopm" check passed
Compatibility check: Kernel param "semmni" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   128                       128                       matched
Kernel param "semmni" check passed
Compatibility check: Kernel param "shmmax" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   4398046511104             4398046511104             matched
Kernel param "shmmax" check passed
Compatibility check: Kernel param "shmmni" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   4096                      4096                      matched
Kernel param "shmmni" check passed
Compatibility check: Kernel param "shmall" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   0                         0                         matched
Kernel param "shmall" check passed
Compatibility check: Kernel param "file-max" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   6815744                   6815744                   matched
Kernel param "file-max" check passed
Compatibility check: Kernel param "ip_local_port_range" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   9000 65500                9000 65500                matched
Kernel param "ip_local_port_range" check passed
Compatibility check: Kernel param "rmem_default" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   262144                    262144                    matched
Kernel param "rmem_default" check passed
Compatibility check: Kernel param "rmem_max" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   4194304                   4194304                   matched
Kernel param "rmem_max" check passed
Compatibility check: Kernel param "wmem_default" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   262144                    262144                    matched
Kernel param "wmem_default" check passed
Compatibility check: Kernel param "wmem_max" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   1048576                   1048576                   matched
Kernel param "wmem_max" check passed
Compatibility check: Kernel param "aio-max-nr" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   1048576                   1048576                   matched
Kernel param "aio-max-nr" check passed
Compatibility check: Package existence for "binutils" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2-5.36.el6  matched
Package existence for "binutils" check passed
Compatibility check: Package existence for "compat-libcap1" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   compat-libcap1-1.10-1     compat-libcap1-1.10-1     matched
Package existence for "compat-libcap1" check passed
Compatibility check: Package existence for "compat-libstdc++-33 (x86_64)" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   compat-libstdc++-33-3.2.3-69.el6 (x86_64)  compat-libstdc++-33-3.2.3-69.el6 (x86_64)  matched
Package existence for "compat-libstdc++-33 (x86_64)" check passed
Compatibility check: Package existence for "libgcc (x86_64)" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   libgcc-4.4.7-3.el6 (x86_64)  libgcc-4.4.7-3.el6 (x86_64)  matched
Package existence for "libgcc (x86_64)" check passed
Compatibility check: Package existence for "libstdc++ (x86_64)" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   libstdc++-4.4.7-3.el6 (x86_64)  libstdc++-4.4.7-3.el6 (x86_64)  matched
Package existence for "libstdc++ (x86_64)" check passed
Compatibility check: Package existence for "libstdc++-devel (x86_64)" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   libstdc++-devel-4.4.7-3.el6 (x86_64)  libstdc++-devel-4.4.7-3.el6 (x86_64)  matched
Package existence for "libstdc++-devel (x86_64)" check passed
Compatibility check: Package existence for "sysstat" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   sysstat-9.0.4-20.el6      sysstat-9.0.4-20.el6      matched
Package existence for "sysstat" check passed
Compatibility check: Package existence for "gcc" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   gcc-4.4.7-3.el6           gcc-4.4.7-3.el6           matched
Package existence for "gcc" check passed
Compatibility check: Package existence for "gcc-c++" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   gcc-c++-4.4.7-3.el6       gcc-c++-4.4.7-3.el6       matched
Package existence for "gcc-c++" check passed
Compatibility check: Package existence for "ksh" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   ksh-20100621-19.el6_4.4   ksh-20100621-19.el6_4.4   matched
Package existence for "ksh" check passed
Compatibility check: Package existence for "make" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   make-3.81-20.el6          make-3.81-20.el6          matched
Package existence for "make" check passed
Compatibility check: Package existence for "glibc (x86_64)" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   glibc-2.12-1.107.el6_4.2 (x86_64)  glibc-2.12-1.107.el6_4.2 (x86_64)  matched
Package existence for "glibc (x86_64)" check passed
Compatibility check: Package existence for "glibc-devel (x86_64)" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   glibc-devel-2.12-1.107.el6_4.2 (x86_64)  glibc-devel-2.12-1.107.el6_4.2 (x86_64)  matched
Package existence for "glibc-devel (x86_64)" check passed
Compatibility check: Package existence for "libaio (x86_64)" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   libaio-0.3.107-10.el6 (x86_64)  libaio-0.3.107-10.el6 (x86_64)  matched
Package existence for "libaio (x86_64)" check passed
Compatibility check: Package existence for "libaio-devel (x86_64)" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   libaio-devel-0.3.107-10.el6 (x86_64)  libaio-devel-0.3.107-10.el6 (x86_64)  matched
Package existence for "libaio-devel (x86_64)" check passed
Compatibility check: Package existence for "nfs-utils" [reference node: rac12cnode1]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode4   nfs-utils-1.2.3-36.el6    nfs-utils-1.2.3-36.el6    matched
Package existence for "nfs-utils" check passed
Verification of peer compatibility was unsuccessful.
Checks did not pass for the following node(s):
        rac12cnode4
[grid@rac12cnode1 ~]$
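
A quick note on the peer comparison output above: the free disk space checks are reported as "mismatched" simply because the values on rac12cnode4 differ slightly from the reference node, not because space is insufficient. If you want to confirm that the new node actually has enough absolute free space, a quick manual check (the mount points are from my environment):

df -h / /u01    ( run on rac12cnode4; / and /u01 are the mount points backing the checked paths in my environment )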
[grid@rac12cnode1 ~]$ cluvfy stage -pre nodeadd -n rac12cnode4 -fixup -verbose
Performing pre-checks for node addition
Checking node reachability...
Check: Node reachability from node "rac12cnode1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac12cnode4                           yes
Result: Node reachability check passed from node "rac12cnode1"

Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac12cnode4                           passed
Result: User equivalence check passed for user "grid"
Check: Package existence for "cvuqdisk"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode2   cvuqdisk-1.0.9-1          cvuqdisk-1.0.9-1          passed
  rac12cnode1   cvuqdisk-1.0.9-1          cvuqdisk-1.0.9-1          passed
  rac12cnode4   missing                   cvuqdisk-1.0.9-1          failed
  rac12cnode3   cvuqdisk-1.0.9-1          cvuqdisk-1.0.9-1          passed
Result: Package existence check failed for "cvuqdisk"
Checking CRS integrity...
The Oracle Clusterware is healthy on node "rac12cnode1"
CRS integrity check passed
Clusterware version consistency passed.
Checking shared resources...
Checking CRS home location...
Location check passed for: "/u01/app/12.1.0/grid"
Result: Shared resources check for node addition passed

Checking node connectivity...
Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac12cnode1                           passed
  rac12cnode2                           passed
  rac12cnode3                           passed
  rac12cnode4                           passed
Verification of the hosts config file successful

Interface information for node "rac12cnode1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.131   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:22 1500
 eth0   192.168.1.135   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:22 1500
 eth1   10.10.10.131    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:09 1500
 eth1   169.254.251.99  169.254.0.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:09 1500
 eth2   10.11.0.135     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:0B 1500

Interface information for node "rac12cnode2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.132   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth0   192.168.1.140   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth0   192.168.1.139   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth0   192.168.1.136   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth1   10.10.10.132    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:0C 1500
 eth1   169.254.169.69  169.254.0.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:0C 1500
 eth2   10.11.0.136     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:11 1500

Interface information for node "rac12cnode3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.133   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:24 1500
 eth0   192.168.1.137   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:24 1500
 eth1   10.10.10.133    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:12 1500
 eth1   169.254.177.41  169.254.0.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:12 1500
 eth2   10.11.0.137     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:13 1500

Interface information for node "rac12cnode4"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.134   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:1E 1500
 eth1   10.10.10.134    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:14 1500
 eth2   10.11.0.138     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:15 1500

Check: Node connectivity using interfaces on subnet "192.168.1.0"
Check: Node connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode2[192.168.1.139]      rac12cnode2[192.168.1.132]      yes
  rac12cnode2[192.168.1.139]      rac12cnode3[192.168.1.133]      yes
  rac12cnode2[192.168.1.139]      rac12cnode1[192.168.1.135]      yes
  rac12cnode2[192.168.1.139]      rac12cnode2[192.168.1.140]      yes
  rac12cnode2[192.168.1.139]      rac12cnode1[192.168.1.131]      yes
  rac12cnode2[192.168.1.139]      rac12cnode3[192.168.1.137]      yes
  rac12cnode2[192.168.1.139]      rac12cnode2[192.168.1.136]      yes
  rac12cnode2[192.168.1.139]      rac12cnode4[192.168.1.134]      yes
  rac12cnode2[192.168.1.132]      rac12cnode3[192.168.1.133]      yes
  rac12cnode2[192.168.1.132]      rac12cnode1[192.168.1.135]      yes
  rac12cnode2[192.168.1.132]      rac12cnode2[192.168.1.140]      yes
  rac12cnode2[192.168.1.132]      rac12cnode1[192.168.1.131]      yes
  rac12cnode2[192.168.1.132]      rac12cnode3[192.168.1.137]      yes
  rac12cnode2[192.168.1.132]      rac12cnode2[192.168.1.136]      yes
  rac12cnode2[192.168.1.132]      rac12cnode4[192.168.1.134]      yes
  rac12cnode3[192.168.1.133]      rac12cnode1[192.168.1.135]      yes
  rac12cnode3[192.168.1.133]      rac12cnode2[192.168.1.140]      yes
  rac12cnode3[192.168.1.133]      rac12cnode1[192.168.1.131]      yes
  rac12cnode3[192.168.1.133]      rac12cnode3[192.168.1.137]      yes
  rac12cnode3[192.168.1.133]      rac12cnode2[192.168.1.136]      yes
  rac12cnode3[192.168.1.133]      rac12cnode4[192.168.1.134]      yes
  rac12cnode1[192.168.1.135]      rac12cnode2[192.168.1.140]      yes
  rac12cnode1[192.168.1.135]      rac12cnode1[192.168.1.131]      yes
  rac12cnode1[192.168.1.135]      rac12cnode3[192.168.1.137]      yes
  rac12cnode1[192.168.1.135]      rac12cnode2[192.168.1.136]      yes
  rac12cnode1[192.168.1.135]      rac12cnode4[192.168.1.134]      yes
  rac12cnode2[192.168.1.140]      rac12cnode1[192.168.1.131]      yes
  rac12cnode2[192.168.1.140]      rac12cnode3[192.168.1.137]      yes
  rac12cnode2[192.168.1.140]      rac12cnode2[192.168.1.136]      yes
  rac12cnode2[192.168.1.140]      rac12cnode4[192.168.1.134]      yes
  rac12cnode1[192.168.1.131]      rac12cnode3[192.168.1.137]      yes
  rac12cnode1[192.168.1.131]      rac12cnode2[192.168.1.136]      yes
  rac12cnode1[192.168.1.131]      rac12cnode4[192.168.1.134]      yes
  rac12cnode3[192.168.1.137]      rac12cnode2[192.168.1.136]      yes
  rac12cnode3[192.168.1.137]      rac12cnode4[192.168.1.134]      yes
  rac12cnode2[192.168.1.136]      rac12cnode4[192.168.1.134]      yes
Result: Node connectivity passed for subnet "192.168.1.0" with node(s) rac12cnode2,rac12cnode3,rac12cnode1,rac12cnode4

Check: TCP connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode2:192.168.1.139       rac12cnode2:192.168.1.132       passed
  rac12cnode2:192.168.1.139       rac12cnode3:192.168.1.133       passed
  rac12cnode2:192.168.1.139       rac12cnode1:192.168.1.135       passed
  rac12cnode2:192.168.1.139       rac12cnode2:192.168.1.140       passed
  rac12cnode2:192.168.1.139       rac12cnode1:192.168.1.131       passed
  rac12cnode2:192.168.1.139       rac12cnode3:192.168.1.137       passed
  rac12cnode2:192.168.1.139       rac12cnode2:192.168.1.136       passed
  rac12cnode2:192.168.1.139       rac12cnode4:192.168.1.134       passed
Result: TCP connectivity check passed for subnet "192.168.1.0"

Check: Node connectivity using interfaces on subnet "10.10.10.0"
Check: Node connectivity of subnet "10.10.10.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode1[10.10.10.131]       rac12cnode4[10.10.10.134]       yes
  rac12cnode1[10.10.10.131]       rac12cnode2[10.10.10.132]       yes
  rac12cnode1[10.10.10.131]       rac12cnode3[10.10.10.133]       yes
  rac12cnode4[10.10.10.134]       rac12cnode2[10.10.10.132]       yes
  rac12cnode4[10.10.10.134]       rac12cnode3[10.10.10.133]       yes
  rac12cnode2[10.10.10.132]       rac12cnode3[10.10.10.133]       yes
Result: Node connectivity passed for subnet "10.10.10.0" with node(s) rac12cnode1,rac12cnode4,rac12cnode2,rac12cnode3

Check: TCP connectivity of subnet "10.10.10.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode1:10.10.10.131        rac12cnode4:10.10.10.134        passed
  rac12cnode1:10.10.10.131        rac12cnode2:10.10.10.132        passed
  rac12cnode1:10.10.10.131        rac12cnode3:10.10.10.133        passed
Result: TCP connectivity check passed for subnet "10.10.10.0"

Check: Node connectivity using interfaces on subnet "10.11.0.0"
Check: Node connectivity of subnet "10.11.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode4[10.11.0.138]        rac12cnode1[10.11.0.135]        yes
  rac12cnode4[10.11.0.138]        rac12cnode2[10.11.0.136]        yes
  rac12cnode4[10.11.0.138]        rac12cnode3[10.11.0.137]        yes
  rac12cnode1[10.11.0.135]        rac12cnode2[10.11.0.136]        yes
  rac12cnode1[10.11.0.135]        rac12cnode3[10.11.0.137]        yes
  rac12cnode2[10.11.0.136]        rac12cnode3[10.11.0.137]        yes
Result: Node connectivity passed for subnet "10.11.0.0" with node(s) rac12cnode4,rac12cnode1,rac12cnode2,rac12cnode3

Check: TCP connectivity of subnet "10.11.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode4:10.11.0.138         rac12cnode1:10.11.0.135         passed
  rac12cnode4:10.11.0.138         rac12cnode2:10.11.0.136         passed
  rac12cnode4:10.11.0.138         rac12cnode3:10.11.0.137         passed
Result: TCP connectivity check passed for subnet "10.11.0.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed for subnet "10.11.0.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "10.10.10.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "10.10.10.0" for multicast communication with multicast group "224.0.0.251" passed.
Checking subnet "10.11.0.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "10.11.0.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Task ASM Integrity check started...
Checking if connectivity exists across cluster nodes on the ASM network
Checking node connectivity...
Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac12cnode4                           passed
Verification of the hosts config file successful

Interface information for node "rac12cnode4"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.134   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:1E 1500
 eth1   10.10.10.134    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:14 1500
 eth2   10.11.0.138     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:15 1500

Check: Node connectivity using interfaces on subnet "10.11.0.0"
Check: Node connectivity of subnet "10.11.0.0"
Result: Node connectivity passed for subnet "10.11.0.0" with node(s) rac12cnode4

Check: TCP connectivity of subnet "10.11.0.0"
Result: TCP connectivity check passed for subnet "10.11.0.0"

Result: Node connectivity check passed
Network connectivity check across cluster nodes on the ASM network passed
Task ASM Integrity check passed...
Check: Total memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   2.9326GB (3075076.0KB)    4GB (4194304.0KB)         failed
  rac12cnode4   2.9326GB (3075076.0KB)    4GB (4194304.0KB)         failed
Result: Total memory check failed
Check: Available memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   892.875MB (914304.0KB)    50MB (51200.0KB)          passed
  rac12cnode4   2.766GB (2900356.0KB)     50MB (51200.0KB)          passed
Result: Available memory check passed
Check: Swap space
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   2GB (2097148.0KB)         2.9326GB (3075076.0KB)    failed
  rac12cnode4   2GB (2097148.0KB)         2.9326GB (3075076.0KB)    failed
Result: Swap space check failed
Check: Free disk space for "rac12cnode1:/usr,rac12cnode1:/var,rac12cnode1:/etc,rac12cnode1:/sbin,rac12cnode1:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              rac12cnode1   /             7.6396GB      1.0635GB      passed
  /var              rac12cnode1   /             7.6396GB      1.0635GB      passed
  /etc              rac12cnode1   /             7.6396GB      1.0635GB      passed
  /sbin             rac12cnode1   /             7.6396GB      1.0635GB      passed
  /tmp              rac12cnode1   /             7.6396GB      1.0635GB      passed
Result: Free disk space check passed for "rac12cnode1:/usr,rac12cnode1:/var,rac12cnode1:/etc,rac12cnode1:/sbin,rac12cnode1:/tmp"
Check: Free disk space for "rac12cnode4:/usr,rac12cnode4:/var,rac12cnode4:/etc,rac12cnode4:/sbin,rac12cnode4:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              rac12cnode4   /             7.5762GB      1.0635GB      passed
  /var              rac12cnode4   /             7.5762GB      1.0635GB      passed
  /etc              rac12cnode4   /             7.5762GB      1.0635GB      passed
  /sbin             rac12cnode4   /             7.5762GB      1.0635GB      passed
  /tmp              rac12cnode4   /             7.5762GB      1.0635GB      passed
Result: Free disk space check passed for "rac12cnode4:/usr,rac12cnode4:/var,rac12cnode4:/etc,rac12cnode4:/sbin,rac12cnode4:/tmp"
Check: Free disk space for "rac12cnode1:/u01/app/12.1.0/grid"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/12.1.0/grid  rac12cnode1   /u01          17.8799GB     6.9GB         passed
Result: Free disk space check passed for "rac12cnode1:/u01/app/12.1.0/grid"
Check: Free disk space for "rac12cnode4:/u01/app/12.1.0/grid"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/12.1.0/grid  rac12cnode4   /u01          24.4902GB     6.9GB         passed
Result: Free disk space check passed for "rac12cnode4:/u01/app/12.1.0/grid"
Check: User existence for "grid"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac12cnode1   passed                    exists(1100)
  rac12cnode4   passed                    exists(1100)
Checking for multiple users with UID value 1100
Result: Check for multiple users with UID value 1100 passed
Result: User existence check passed for "grid"
Check: Run level
  Node Name     run level                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   3                         3,5                       passed
  rac12cnode4   3                         3,5                       passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac12cnode1       hard          65536         65536         passed
  rac12cnode4       hard          65536         65536         passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac12cnode1       soft          65536         1024          passed
  rac12cnode4       soft          1024          1024          passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac12cnode1       hard          16384         16384         passed
  rac12cnode4       hard          16384         16384         passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac12cnode1       soft          16384         2047          passed
  rac12cnode4       soft          16384         2047          passed
Result: Soft limits check passed for "maximum user processes"
Check: System architecture
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   x86_64                    x86_64                    passed
  rac12cnode4   x86_64                    x86_64                    passed
Result: System architecture check passed
Check: Kernel version
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   2.6.39-400.109.1.el6uek.x86_64  2.6.32                    passed
  rac12cnode4   2.6.39-400.109.1.el6uek.x86_64  2.6.32                    passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       250           250           250           passed
  rac12cnode4       250           250           250           passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       32000         32000         32000         passed
  rac12cnode4       32000         32000         32000         passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       100           100           100           passed
  rac12cnode4       100           100           100           passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       128           128           128           passed
  rac12cnode4       128           128           128           passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       4398046511104  4398046511104  1574438912    passed
  rac12cnode4       4398046511104  4398046511104  1574438912    passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       4096          4096          4096          passed
  rac12cnode4       4096          4096          4096          passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       1073741824    1073741824    307507        passed
  rac12cnode4       1073741824    1073741824    307507        passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       6815744       6815744       6815744       passed
  rac12cnode4       6815744       6815744       6815744       passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed
  rac12cnode4       between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       262144        262144        262144        passed
  rac12cnode4       262144        262144        262144        passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       4194304       4194304       4194304       passed
  rac12cnode4       4194304       4194304       4194304       passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       262144        262144        262144        passed
  rac12cnode4       262144        262144        262144        passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       1048576       1048576       1048576       passed
  rac12cnode4       1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac12cnode1       1048576       1048576       1048576       passed
  rac12cnode4       1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "binutils"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed
  rac12cnode4   binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed
Result: Package existence check passed for "binutils"
Check: Package existence for "compat-libcap1"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   compat-libcap1-1.10-1     compat-libcap1-1.10       passed
  rac12cnode4   compat-libcap1-1.10-1     compat-libcap1-1.10       passed
Result: Package existence check passed for "compat-libcap1"
Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
  rac12cnode4   compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "libgcc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4      passed
  rac12cnode4   libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4      passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4   passed
  rac12cnode4   libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4   passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed
  rac12cnode4   libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   sysstat-9.0.4-20.el6      sysstat-9.0.4             passed
  rac12cnode4   sysstat-9.0.4-20.el6      sysstat-9.0.4             passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "gcc"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   gcc-4.4.7-3.el6           gcc-4.4.4                 passed
  rac12cnode4   gcc-4.4.7-3.el6           gcc-4.4.4                 passed
Result: Package existence check passed for "gcc"
Check: Package existence for "gcc-c++"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   gcc-c++-4.4.7-3.el6       gcc-c++-4.4.4             passed
  rac12cnode4   gcc-c++-4.4.7-3.el6       gcc-c++-4.4.4             passed
Result: Package existence check passed for "gcc-c++"
Check: Package existence for "ksh"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   ksh-20100621-19.el6_4.4   ksh-...                   passed
  rac12cnode4   ksh-20100621-19.el6_4.4   ksh-...                   passed
Result: Package existence check passed for "ksh"
Check: Package existence for "make"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   make-3.81-20.el6          make-3.81                 passed
  rac12cnode4   make-3.81-20.el6          make-3.81                 passed
Result: Package existence check passed for "make"
Check: Package existence for "glibc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   glibc(x86_64)-2.12-1.107.el6_4.2  glibc(x86_64)-2.12        passed
  rac12cnode4   glibc(x86_64)-2.12-1.107.el6_4.2  glibc(x86_64)-2.12        passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "glibc-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   glibc-devel(x86_64)-2.12-1.107.el6_4.2  glibc-devel(x86_64)-2.12  passed
  rac12cnode4   glibc-devel(x86_64)-2.12-1.107.el6_4.2  glibc-devel(x86_64)-2.12  passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "libaio(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed
  rac12cnode4   libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
  rac12cnode4   libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "nfs-utils"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   nfs-utils-1.2.3-36.el6    nfs-utils-1.2.3-15        passed
  rac12cnode4   nfs-utils-1.2.3-36.el6    nfs-utils-1.2.3-15        passed
Result: Package existence check passed for "nfs-utils"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
  Node Name                             Status
  ------------------------------------  ------------------------
  rac12cnode1                           passed
  rac12cnode4                           passed
Check for consistency of root user's primary group passed
Check: Group existence for "asmadmin"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac12cnode1   passed                    exists
  rac12cnode4   passed                    exists
Result: Group existence check passed for "asmadmin"
Check: Group existence for "asmdba"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac12cnode1   passed                    exists
  rac12cnode4   passed                    exists
Result: Group existence check passed for "asmdba"
Checking ASMLib configuration.
  Node Name                             Status
  ------------------------------------  ------------------------
  rac12cnode1                           passed
  rac12cnode4                           passed
Result: Check for ASMLib configuration passed.
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Check: Time zone consistency
Result: Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
  Node Name                             Running?
  ------------------------------------  ------------------------
  rac12cnode1                           yes
  rac12cnode4                           yes
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Checking whether NTP daemon or service is using UDP port 123 on all nodes
Check for NTP daemon or service using UDP port 123
  Node Name                             Port Open?
  ------------------------------------  ------------------------
  rac12cnode1                           yes
  rac12cnode4                           yes
NTP common Time Server Check started...
NTP Time Server ".LOCL." is common to all nodes on which the NTP daemon is running
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Checking on nodes "[rac12cnode1, rac12cnode4]"...
Check: Clock time offset from NTP Time Server
Time Server: .LOCL.
Time Offset Limit: 1000.0 msecs
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  rac12cnode1   0.0                       passed
  rac12cnode4   0.0                       passed
Time Server ".LOCL." has time offsets that are within permissible limits for nodes "[rac12cnode1, rac12cnode4]".
Clock time offset check passed
Result: Clock synchronization check using Network Time Protocol(NTP) passed

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac12cnode1   passed                    does not exist
  rac12cnode4   passed                    does not exist
Result: User "grid" is not part of "root" group. Check passed
Checking integrity of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
"domain" and "search" entries do not coexist in any  "/etc/resolv.conf" file
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
"domain" entry does not exist in any "/etc/resolv.conf" file
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
"search" entry does not exist in any "/etc/resolv.conf" file
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  rac12cnode1                           failed
  rac12cnode4                           failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac12cnode1,rac12cnode4
Check for integrity of file "/etc/resolv.conf" failed

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Checking GNS integrity...
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.1.0, 192.168.1.0" match with the GNS VIP "192.168.1.0, 192.168.1.0"
Checking if the GNS VIP is a valid address...
GNS VIP "192.168.1.140" resolves to a valid IP address
Checking the status of GNS VIP...
Checking status of GNS resource...
  Node          Running?                  Enabled?
  ------------  ------------------------  ------------------------
  rac12cnode1   no                        yes
  rac12cnode2   yes                       yes
  rac12cnode3   no                        yes
GNS resource configuration check passed
Checking status of GNS VIP resource...
  Node          Running?                  Enabled?
  ------------  ------------------------  ------------------------
  rac12cnode1   no                        yes
  rac12cnode2   yes                       yes
  rac12cnode3   no                        yes
GNS VIP resource configuration check passed.
GNS integrity check passed
Checking Flex Cluster node role configuration...
Flex Cluster node role configuration check passed
******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************
--------------                ---------------     ----------------
Check failed.                 Failed on nodes     Reboot required?
--------------                ---------------     ----------------
Package: cvuqdisk-1.0.9-1     rac12cnode4         no


Execute "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" as root user on nodes "rac12cnode4" to perform the fix up operations manually
Press ENTER key to continue after execution of "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" has completed on nodes "rac12cnode4"



Fix: Package: cvuqdisk-1.0.9-1
  Node Name                             Status
  ------------------------------------  ------------------------
  rac12cnode4                           failed
ERROR:
PRVG-9023 : Manual fix up command "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" was not issued by root user on node "rac12cnode4"
Result: "Package: cvuqdisk-1.0.9-1" could not be fixed on nodes "rac12cnode4"
Fix up operations for selected fixable prerequisites were unsuccessful on nodes "rac12cnode4"
Pre-check for node addition was unsuccessful on all the nodes.
[grid@rac12cnode1 ~]$ 
[root@rac12cnode4 ~]# /tmp/CVU_12.1.0.1.0_grid/runfixup.sh
All Fix-up operations were completed successfully.
[root@rac12cnode4 ~]#  

I had a DNS issue that caused some of the checks to fail, but I proceeded anyway to see whether it would actually block the node addition. In pre-12c versions (standard cluster) we could get by without DNS, using just /etc/hosts. In 12c, particularly with a flex cluster, the node addition would not let me continue until I had proper DNS entries. I will show the failure shortly, and then fix my DNS so that all the RAC servers resolve properly.
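Before touching DNS, it helps to see exactly what cluvfy is objecting to. A minimal check, assuming the passwordless SSH we set up earlier for the grid user is working, is to compare /etc/resolv.conf across all four nodes:

for n in rac12cnode1 rac12cnode2 rac12cnode3 rac12cnode4
do
  echo "--- ${n}: /etc/resolv.conf ---"
  ssh ${n} cat /etc/resolv.conf
done

The file should be identical on every node and should point at a nameserver that can resolve (and reverse-resolve) all of the cluster hostnames.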

3) Now let me start adding the leaf node to the flex cluster.
addnode.sh -silent "CLUSTER_NEW_NODES={rac12cnode4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac12cnode4-vip}" "CLUSTER_NEW_NODE_ROLES={leaf}"
or
addnode.sh -silent "CLUSTER_NEW_NODES={rac12cnode4}" "CLUSTER_NEW_NODE_ROLES={leaf}"
NOTE : When adding a node as a leaf node to the cluster, you may or may not specify a VIP for it; it is not mandatory. I chose to run without the VIP details.
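If you later decide the leaf node does need a VIP, it can be added after the node addition with srvctl. A sketch only; the netmask and network number below are assumptions for this environment, not values taken from my setup:

srvctl add vip -node rac12cnode4 -address rac12cnode4-vip/255.255.255.0 -netnum 1
srvctl start vip -node rac12cnode4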
[grid@rac12cnode1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={rac12cnode4}" "CLUSTER_NEW_NODE_ROLES={leaf}"
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB.   Actual 7460 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 1072 MB    Passed
[FATAL] [INS-13013] Target environment does not meet some mandatory requirements.
   CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2014-03-11_12-51-25AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2014-03-11_12-51-25AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
[grid@rac12cnode1 addnode]$ 
The node addition failed. I reviewed the log file and found it had failed on some prerequisite checks, so I tried the same command again with the variable IGNORE_PREADDNODE_CHECKS=Y set, as we used to do in pre-12c releases.
[grid@rac12cnode1 addnode]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@rac12cnode1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={rac12cnode4}" "CLUSTER_NEW_NODE_ROLES={leaf}"
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB.   Actual 7460 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 1073 MB    Passed
[FATAL] [INS-13013] Target environment does not meet some mandatory requirements.
   CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2014-03-11_12-59-26AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2014-03-11_12-59-26AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
[grid@rac12cnode1 addnode]$
I reviewed the logs again and found three failed checks: one critical and two ignorable.
INFO: ------------------List of failed Tasks------------------
INFO: *********************************************
INFO: Physical Memory: This is a prerequisite condition to test whether the system has at least 4GB (4194304.0KB) of total physical memory.
INFO: Severity:IGNORABLE
INFO: OverallStatus:VERIFICATION_FAILED
INFO: *********************************************
INFO: Swap Size: This is a prerequisite condition to test whether sufficient total swap space is available on the system.
INFO: Severity:IGNORABLE
INFO: OverallStatus:VERIFICATION_FAILED
INFO: *********************************************
INFO: Task resolv.conf Integrity: This task checks consistency of file /etc/resolv.conf file across nodes
INFO: Severity:CRITICAL
INFO: OverallStatus:VERIFICATION_FAILED
INFO: -----------------End of failed Tasks List----------------
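The full addnode log is verbose; to pull out just this failed-task summary, a sed range print over the markers shown above works (using the log path from the FATAL message):

sed -n '/List of failed Tasks/,/End of failed Tasks List/p' /u01/app/oraInventory/logs/addNodeActions2014-03-11_12-59-26AM.log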
This is the point where 12c stopped me: the workarounds we used in previous releases no longer get past the resolv.conf integrity check, and proper DNS entries are required to proceed. So I completed the DNS configuration and added all the cluster nodes to the DNS server.
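The entries needed amount to one A record and one PTR record per node. In BIND-style zone files they would look roughly like the sketch below (an illustration only; adapt it to whatever DNS server you actually run):

; forward zone "localdomain"
rac12cnode4    IN  A    192.168.1.134

; reverse zone "1.168.192.in-addr.arpa"
134            IN  PTR  rac12cnode4.localdomain.

The nslookup output below confirms that forward and reverse resolution now works, both from node 1 and from the new node.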
[grid@rac12cnode1 addnode]$ nslookup rac12cnode4
Server:         192.168.1.51
Address:        192.168.1.51#53
Name:   rac12cnode4.localdomain
Address: 192.168.1.134
[grid@rac12cnode1 addnode]$ nslookup 192.168.1.134
Server:         192.168.1.51
Address:        192.168.1.51#53
134.1.168.192.in-addr.arpa      name = rac12cnode4.localdomain.
[grid@rac12cnode1 addnode]$ nslookup rac12cnode3
Server:         192.168.1.51
Address:        192.168.1.51#53
Name:   rac12cnode3.localdomain
Address: 192.168.1.133
[grid@rac12cnode1 addnode]$ nslookup rac12cnode2
Server:         192.168.1.51
Address:        192.168.1.51#53
Name:   rac12cnode2.localdomain
Address: 192.168.1.132
[grid@rac12cnode1 addnode]$
[root@rac12cnode4 ~]# nslookup 192.168.1.134
Server:         192.168.1.51
Address:        192.168.1.51#53
134.1.168.192.in-addr.arpa      name = rac12cnode4.localdomain.
[root@rac12cnode4 ~]# nslookup 192.168.1.133
Server:         192.168.1.51
Address:        192.168.1.51#53
133.1.168.192.in-addr.arpa      name = rac12cnode3.localdomain.
[root@rac12cnode4 ~]# nslookup 192.168.1.132
Server:         192.168.1.51
Address:        192.168.1.51#53
132.1.168.192.in-addr.arpa      name = rac12cnode2.localdomain.
[root@rac12cnode4 ~]# nslookup 192.168.1.131
Server:         192.168.1.51
Address:        192.168.1.51#53
131.1.168.192.in-addr.arpa      name = rac12cnode1.localdomain.
[root@rac12cnode4 ~]# nslookup rac12cnode4
Server:         192.168.1.51
Address:        192.168.1.51#53
Name:   rac12cnode4.localdomain
Address: 192.168.1.134
[root@rac12cnode4 ~]# nslookup rac12cnode3
Server:         192.168.1.51
Address:        192.168.1.51#53
Name:   rac12cnode3.localdomain
Address: 192.168.1.133
[root@rac12cnode4 ~]# nslookup rac12cnode2
Server:         192.168.1.51
Address:        192.168.1.51#53
Name:   rac12cnode2.localdomain
Address: 192.168.1.132
[root@rac12cnode4 ~]# nslookup rac12cnode1
Server:         192.168.1.51
Address:        192.168.1.51#53
Name:   rac12cnode1.localdomain
Address: 192.168.1.131
[root@rac12cnode4 ~]#
With DNS fixed, the addnode command completed successfully. As part of the run, it asked me to execute root.sh on the new node.
[grid@rac12cnode1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={rac12cnode4}" "CLUSTER_NEW_NODE_ROLES={leaf}"
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB.   Actual 7460 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 1058 MB    Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2014-03-11_01-37-37AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2014-03-11_01-37-37AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
Prepare Configuration in progress.
Prepare Configuration successful.
..................................................   9% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2014-03-11_01-37-37AM.log
Instantiate files in progress.
Instantiate files successful.
..................................................   15% Done.
Copying files to node in progress.
Copying files to node successful.
..................................................   79% Done.
Saving cluster inventory in progress.
..................................................   87% Done.
Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
As a root user, execute the following script(s):
        1. /u01/app/12.1.0/grid/root.sh
Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[rac12cnode4]
The scripts can be executed in parallel on all the nodes. If there are any policy managed databases managed by cluster, proceed with the addnode procedure without executing the root.sh script. Ensure that root.sh script is executed after all the policy managed databases managed by clusterware are extended to the new nodes.
..........
Update Inventory in progress.
..................................................   100% Done.
Update Inventory successful.
Successfully Setup Software.
[grid@rac12cnode1 addnode]$
When I ran root.sh, however, it did not configure the clusterware stack on the new node. It took me some time to debug this issue.
[root@rac12cnode4 12.1.0]# /u01/app/12.1.0/grid/root.sh
Check /u01/app/12.1.0/grid/install/root_rac12cnode4_2014-03-11_02-19-08.log for the output of root script
[root@rac12cnode4 12.1.0]# cat /u01/app/12.1.0/grid/install/root_rac12cnode4_2014-03-11_02-19-08.log
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
/u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/roothas.pl

To configure Grid Infrastructure for a Cluster execute the following command as grid user:
/u01/app/12.1.0/grid/crs/config/config.sh
This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.
[root@rac12cnode4 12.1.0]#

So root.sh was merely asking me to run config.sh to configure the cluster, which is not what a node addition should do, and I did not know why it was asking for this script. Thinking it might be a 12c change, I started config.sh, but experience quickly told me I was heading in the wrong direction, so I began debugging root.sh and the scripts it calls.
The problem turned out to be in /u01/app/12.1.0/grid/crs/config/rootconfig.sh. After careful review, I took a backup and edited the Oracle-delivered script, uncommenting the lines shown below so that the script would configure the clusterware stack on the leaf node.
[root@rac12cnode4 ~]# cp  /u01/app/12.1.0/grid/crs/config/rootconfig.sh  /u01/app/12.1.0/grid/crs/config/rootconfig.sh_bak_askm
[root@rac12cnode4 ~]# vi /u01/app/12.1.0/grid/crs/config/rootconfig.sh
[root@rac12cnode4 ~]# diff /u01/app/12.1.0/grid/crs/config/rootconfig.sh /u01/app/12.1.0/grid/crs/config/rootconfig.sh_bak_askm
33,37c33,37
< if [ "$ADDNODE" = "true" ]; then
<   SW_ONLY=false
<   HA_CONFIG=false
< fi
---
> #if [ "$ADDNODE" = "true" ]; then
> #  SW_ONLY=false
> #  HA_CONFIG=false
> #fi
[root@rac12cnode4 ~]#
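For anyone repeating this on another node, the same edit can be scripted. A sketch, assuming the commented block sits at lines 33-37 exactly as in the diff above:

cp /u01/app/12.1.0/grid/crs/config/rootconfig.sh /u01/app/12.1.0/grid/crs/config/rootconfig.sh_bak_askm
sed -i '33,37 s/^#//' /u01/app/12.1.0/grid/crs/config/rootconfig.sh

Note that this modifies an Oracle-delivered script, so treat it as a workaround and check with Oracle Support for a proper fix.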
Then I ran root.sh again, and it completed successfully this time.
[root@rac12cnode4 ~]# /u01/app/12.1.0/grid/root.sh
Check /u01/app/12.1.0/grid/install/root_rac12cnode4_2014-03-11_04-14-13.log for the output of root script
[root@rac12cnode4 ~]#
The log file showed the following output during execution:
[root@rac12cnode4 ~]# tail -f /u01/app/12.1.0/grid/install/root_rac12cnode4_2014-03-11_04-14-13.log
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
OLR initialization - successful
2014/03/11 04:16:23 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cnode4'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cnode4'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cnode4' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cnode4' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.evmd' on 'rac12cnode4'
CRS-2676: Start of 'ora.evmd' on 'rac12cnode4' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12cnode4'
CRS-2676: Start of 'ora.gpnpd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12cnode4'
CRS-2676: Start of 'ora.gipcd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12cnode4'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12cnode4'
CRS-2676: Start of 'ora.diskmon' on 'rac12cnode4' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rac12cnode4'
CRS-2676: Start of 'ora.cssd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac12cnode4'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac12cnode4' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac12cnode4'
CRS-2676: Start of 'ora.storage' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac12cnode4'
CRS-2676: Start of 'ora.crsd' on 'rac12cnode4' succeeded
CRS-6017: Processing resource auto-start for servers: rac12cnode4
CRS-6016: Resource auto-start has completed for server rac12cnode4
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/03/11 04:21:03 CLSRSC-343: Successfully started Oracle clusterware stack
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/03/11 04:21:11 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
^C
[root@rac12cnode4 ~]#
4) Post-verification steps:
On node 1:
[grid@rac12cnode1 addnode]$ olsnodes -s -t
rac12cnode1     Active  Unpinned
rac12cnode2     Active  Unpinned
rac12cnode3     Active  Unpinned
rac12cnode4     Active  Unpinned
[grid@rac12cnode1 addnode]$ crsctl get cluster mode status
Cluster is running in "flex" mode
[grid@rac12cnode1 addnode]$ srvctl config gns
GNS is enabled.
[grid@rac12cnode1 addnode]$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
eth2  10.11.0.0  global  asm
[grid@rac12cnode1 addnode]$ crsctl get node role config
Node 'rac12cnode1' configured role is 'hub'
[grid@rac12cnode1 addnode]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
[grid@rac12cnode1 addnode]$ asmcmd showclusterstate
Normal
[grid@rac12cnode1 addnode]$ srvctl status asm -detail
ASM is running on rac12cnode1,rac12cnode2,rac12cnode3
ASM is enabled.
[grid@rac12cnode1 addnode]$ srvctl config asm
ASM home: /u01/app/12.1.0/grid
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[grid@rac12cnode1 addnode]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
Node 'rac12cnode3' configured role is 'hub'
Node 'rac12cnode4' configured role is 'leaf'
[grid@rac12cnode1 addnode]$ crsctl get node role status -all
Node 'rac12cnode1' active role is 'hub'
Node 'rac12cnode2' active role is 'hub'
Node 'rac12cnode3' active role is 'hub'
Node 'rac12cnode4' active role is 'leaf'
[grid@rac12cnode1 addnode]$ crsctl status res -t | grep -i offline
               OFFLINE OFFLINE      rac12cnode4              STABLE
      1        OFFLINE OFFLINE                               STABLE
[grid@rac12cnode1 addnode]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.LISTENER_LEAF.lsnr
               OFFLINE OFFLINE      rac12cnode4              STABLE
ora.net1.network
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.ons
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac12cnode1              STABLE
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.asm
      1        ONLINE  ONLINE       rac12cnode1              STABLE
      2        ONLINE  ONLINE       rac12cnode2              STABLE
      3        ONLINE  ONLINE       rac12cnode3              STABLE
ora.cvu
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.gns
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.gns.vip
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.orcl.db
      1        ONLINE  ONLINE       rac12cnode1              Open,STABLE
      2        ONLINE  ONLINE       rac12cnode2              Open,STABLE
      3        ONLINE  ONLINE       rac12cnode3              Open,STABLE
ora.rac12cnode1.vip
      1        ONLINE  ONLINE       rac12cnode1              STABLE
ora.rac12cnode2.vip
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.rac12cnode3.vip
      1        ONLINE  ONLINE       rac12cnode3              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac12cnode2              STABLE
--------------------------------------------------------------------------------
[grid@rac12cnode1 addnode]$ srvctl start listener -listener LISTENER_LEAF
[grid@rac12cnode1 addnode]$
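To confirm the leaf listener actually came up after starting it, check its status and the underlying cluster resource:

srvctl status listener -listener LISTENER_LEAF
crsctl status res ora.LISTENER_LEAF.lsnr -t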

On node 4 (leaf node):
[grid@rac12cnode4 ~]$ /u01/app/12.1.0/grid/bin/crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
Node 'rac12cnode3' configured role is 'hub'
Node 'rac12cnode4' configured role is 'leaf'
[grid@rac12cnode4 ~]$ /u01/app/12.1.0/grid/bin/crsctl get node role status  -all
Node 'rac12cnode1' active role is 'hub'
Node 'rac12cnode2' active role is 'hub'
Node 'rac12cnode3' active role is 'hub'
Node 'rac12cnode4' active role is 'leaf'
[grid@rac12cnode4 ~]$ ps ucx
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
grid     10841  0.0  0.0  98216  2180 ?        S    03:51   0:01 sshd
grid     10842  0.0  0.0 108384  1696 pts/2    Ss   03:51   0:00 bash
grid     10864  0.0  0.0 106140  1200 pts/2    S+   03:52   0:00 config.sh
grid     10866  0.0  0.1 127920  5208 pts/2    S+   03:52   0:00 perl
grid     10869  0.6  4.3 1628076 134544 pts/2  Sl+  03:52   0:16 java
grid     23614  0.0  1.0 712668 31480 ?        Ssl  04:19   0:01 oraagent.bin
grid     23627  0.6  1.0 682972 32520 ?        Ssl  04:19   0:06 evmd.bin
grid     23629  0.0  0.6 193248 19892 ?        Ssl  04:19   0:00 mdnsd.bin
grid     23653  0.0  0.4 189816 14688 ?        S    04:19   0:00 evmlogger.bin
grid     23657  0.1  0.9 539796 30556 ?        Ssl  04:19   0:01 gpnpd.bin
grid     23663  4.0  1.5 617292 48008 ?        Sl   04:19   0:42 gipcd.bin
grid     23708  0.5  4.2 683812 130852 ?       SLl  04:20   0:05 ocssdrim.bin
grid     24167  0.0  0.0 108384  1804 pts/0    S    04:36   0:00 bash
grid     24197  0.0  0.0 110236  1120 pts/0    R+   04:37   0:00 ps
[grid@rac12cnode4 ~]$
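Notice that the leaf node runs ocssdrim.bin, the leaf-node counterpart of ocssd.bin, and has no ASM or database processes, which matches its role. For a quick health check of the stack on the new node:

/u01/app/12.1.0/grid/bin/crsctl check crs

This should report Oracle High Availability Services, CRS, CSS and EVM as online.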
5) Now it is time for the final verification step: cluvfy.
cluvfy stage -post nodeadd -n rac12cnode4 -verbose
[grid@rac12cnode1 ~]$ cluvfy stage -post nodeadd -n rac12cnode4 -verbose
Performing post-checks for node addition
Checking node reachability...
Check: Node reachability from node "rac12cnode1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac12cnode4                           yes
Result: Node reachability check passed from node "rac12cnode1"

Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac12cnode4                           passed
Result: User equivalence check passed for user "grid"
WARNING:
Unable to obtain VIP information from node "rac12cnode4".

Checking node connectivity...
Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac12cnode1                           passed
  rac12cnode2                           passed
  rac12cnode4                           passed
  rac12cnode3                           passed
Verification of the hosts config file successful

Interface information for node "rac12cnode1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.131   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:22 1500
 eth0   192.168.1.135   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:22 1500
 eth1   10.10.10.131    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:09 1500
 eth1   169.254.251.99  169.254.0.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:09 1500
 eth2   10.11.0.135     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:0B 1500

Interface information for node "rac12cnode2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.132   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth0   192.168.1.140   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth0   192.168.1.139   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth0   192.168.1.136   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth1   10.10.10.132    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:0C 1500
 eth1   169.254.169.69  169.254.0.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:0C 1500
 eth2   10.11.0.136     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:11 1500

Interface information for node "rac12cnode4"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.134   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:1E 1500
 eth1   10.10.10.134    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:14 1500
 eth2   10.11.0.138     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:15 1500

Interface information for node "rac12cnode3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.133   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:24 1500
 eth0   192.168.1.137   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:24 1500
 eth1   10.10.10.133    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:12 1500
 eth1   169.254.177.41  169.254.0.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:12 1500
 eth2   10.11.0.137     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:13 1500

Check: Node connectivity using interfaces on subnet "192.168.1.0"
Check: Node connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode2[192.168.1.136]      rac12cnode3[192.168.1.137]      yes
  rac12cnode2[192.168.1.136]      rac12cnode4[192.168.1.134]      yes
  rac12cnode2[192.168.1.136]      rac12cnode1[192.168.1.135]      yes
  rac12cnode2[192.168.1.136]      rac12cnode2[192.168.1.139]      yes
  rac12cnode2[192.168.1.136]      rac12cnode1[192.168.1.131]      yes
  rac12cnode2[192.168.1.136]      rac12cnode2[192.168.1.132]      yes
  rac12cnode2[192.168.1.136]      rac12cnode2[192.168.1.140]      yes
  rac12cnode2[192.168.1.136]      rac12cnode3[192.168.1.133]      yes
  rac12cnode3[192.168.1.137]      rac12cnode4[192.168.1.134]      yes
  rac12cnode3[192.168.1.137]      rac12cnode1[192.168.1.135]      yes
  rac12cnode3[192.168.1.137]      rac12cnode2[192.168.1.139]      yes
  rac12cnode3[192.168.1.137]      rac12cnode1[192.168.1.131]      yes
  rac12cnode3[192.168.1.137]      rac12cnode2[192.168.1.132]      yes
  rac12cnode3[192.168.1.137]      rac12cnode2[192.168.1.140]      yes
  rac12cnode3[192.168.1.137]      rac12cnode3[192.168.1.133]      yes
  rac12cnode4[192.168.1.134]      rac12cnode1[192.168.1.135]      yes
  rac12cnode4[192.168.1.134]      rac12cnode2[192.168.1.139]      yes
  rac12cnode4[192.168.1.134]      rac12cnode1[192.168.1.131]      yes
  rac12cnode4[192.168.1.134]      rac12cnode2[192.168.1.132]      yes
  rac12cnode4[192.168.1.134]      rac12cnode2[192.168.1.140]      yes
  rac12cnode4[192.168.1.134]      rac12cnode3[192.168.1.133]      yes
  rac12cnode1[192.168.1.135]      rac12cnode2[192.168.1.139]      yes
  rac12cnode1[192.168.1.135]      rac12cnode1[192.168.1.131]      yes
  rac12cnode1[192.168.1.135]      rac12cnode2[192.168.1.132]      yes
  rac12cnode1[192.168.1.135]      rac12cnode2[192.168.1.140]      yes
  rac12cnode1[192.168.1.135]      rac12cnode3[192.168.1.133]      yes
  rac12cnode2[192.168.1.139]      rac12cnode1[192.168.1.131]      yes
  rac12cnode2[192.168.1.139]      rac12cnode2[192.168.1.132]      yes
  rac12cnode2[192.168.1.139]      rac12cnode2[192.168.1.140]      yes
  rac12cnode2[192.168.1.139]      rac12cnode3[192.168.1.133]      yes
  rac12cnode1[192.168.1.131]      rac12cnode2[192.168.1.132]      yes
  rac12cnode1[192.168.1.131]      rac12cnode2[192.168.1.140]      yes
  rac12cnode1[192.168.1.131]      rac12cnode3[192.168.1.133]      yes
  rac12cnode2[192.168.1.132]      rac12cnode2[192.168.1.140]      yes
  rac12cnode2[192.168.1.132]      rac12cnode3[192.168.1.133]      yes
  rac12cnode2[192.168.1.140]      rac12cnode3[192.168.1.133]      yes
Result: Node connectivity passed for subnet "192.168.1.0" with node(s) rac12cnode2,rac12cnode3,rac12cnode4,rac12cnode1

Check: TCP connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode2:192.168.1.136       rac12cnode3:192.168.1.137       passed
  rac12cnode2:192.168.1.136       rac12cnode4:192.168.1.134       passed
  rac12cnode2:192.168.1.136       rac12cnode1:192.168.1.135       passed
  rac12cnode2:192.168.1.136       rac12cnode2:192.168.1.139       passed
  rac12cnode2:192.168.1.136       rac12cnode1:192.168.1.131       passed
  rac12cnode2:192.168.1.136       rac12cnode2:192.168.1.132       passed
  rac12cnode2:192.168.1.136       rac12cnode2:192.168.1.140       passed
  rac12cnode2:192.168.1.136       rac12cnode3:192.168.1.133       passed
Result: TCP connectivity check passed for subnet "192.168.1.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Checking cluster integrity...
  Node Name
  ------------------------------------
  rac12cnode1
  rac12cnode2
  rac12cnode3
  rac12cnode4
Cluster integrity check passed

Checking CRS integrity...
The Oracle Clusterware is healthy on node "rac12cnode1"
CRS integrity check passed
Clusterware version consistency passed.
Checking shared resources...
Checking CRS home location...
"/u01/app/12.1.0/grid" is not shared
Result: Shared resources check for node addition passed

Checking node connectivity...
Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac12cnode1                           passed
  rac12cnode2                           passed
  rac12cnode4                           passed
  rac12cnode3                           passed
Verification of the hosts config file successful

Interface information for node "rac12cnode1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.131   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:22 1500
 eth0   192.168.1.135   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:22 1500
 eth1   10.10.10.131    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:09 1500
 eth1   169.254.251.99  169.254.0.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:09 1500
 eth2   10.11.0.135     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:0B 1500

Interface information for node "rac12cnode2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.132   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth0   192.168.1.140   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth0   192.168.1.139   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth0   192.168.1.136   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:23 1500
 eth1   10.10.10.132    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:0C 1500
 eth1   169.254.169.69  169.254.0.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:0C 1500
 eth2   10.11.0.136     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:11 1500

Interface information for node "rac12cnode4"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.134   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:1E 1500
 eth1   10.10.10.134    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:14 1500
 eth2   10.11.0.138     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:15 1500

Interface information for node "rac12cnode3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.133   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:24 1500
 eth0   192.168.1.137   192.168.1.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:24 1500
 eth1   10.10.10.133    10.10.10.0      0.0.0.0         192.168.1.1     00:21:F6:00:00:12 1500
 eth1   169.254.177.41  169.254.0.0     0.0.0.0         192.168.1.1     00:21:F6:00:00:12 1500
 eth2   10.11.0.137     10.11.0.0       0.0.0.0         192.168.1.1     00:21:F6:00:00:13 1500

Check: Node connectivity using interfaces on subnet "10.10.10.0"
Check: Node connectivity of subnet "10.10.10.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode4[10.10.10.134]       rac12cnode3[10.10.10.133]       yes
  rac12cnode4[10.10.10.134]       rac12cnode2[10.10.10.132]       yes
  rac12cnode4[10.10.10.134]       rac12cnode1[10.10.10.131]       yes
  rac12cnode3[10.10.10.133]       rac12cnode2[10.10.10.132]       yes
  rac12cnode3[10.10.10.133]       rac12cnode1[10.10.10.131]       yes
  rac12cnode2[10.10.10.132]       rac12cnode1[10.10.10.131]       yes
Result: Node connectivity passed for subnet "10.10.10.0" with node(s) rac12cnode4,rac12cnode3,rac12cnode2,rac12cnode1

Check: TCP connectivity of subnet "10.10.10.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode4:10.10.10.134        rac12cnode3:10.10.10.133        passed
  rac12cnode4:10.10.10.134        rac12cnode2:10.10.10.132        passed
  rac12cnode4:10.10.10.134        rac12cnode1:10.10.10.131        passed
Result: TCP connectivity check passed for subnet "10.10.10.0"

Check: Node connectivity using interfaces on subnet "192.168.1.0"
Check: Node connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode1[192.168.1.131]      rac12cnode1[192.168.1.135]      yes
  rac12cnode1[192.168.1.131]      rac12cnode3[192.168.1.137]      yes
  rac12cnode1[192.168.1.131]      rac12cnode2[192.168.1.132]      yes
  rac12cnode1[192.168.1.131]      rac12cnode3[192.168.1.133]      yes
  rac12cnode1[192.168.1.131]      rac12cnode2[192.168.1.139]      yes
  rac12cnode1[192.168.1.131]      rac12cnode2[192.168.1.136]      yes
  rac12cnode1[192.168.1.131]      rac12cnode2[192.168.1.140]      yes
  rac12cnode1[192.168.1.131]      rac12cnode4[192.168.1.134]      yes
  rac12cnode1[192.168.1.135]      rac12cnode3[192.168.1.137]      yes
  rac12cnode1[192.168.1.135]      rac12cnode2[192.168.1.132]      yes
  rac12cnode1[192.168.1.135]      rac12cnode3[192.168.1.133]      yes
  rac12cnode1[192.168.1.135]      rac12cnode2[192.168.1.139]      yes
  rac12cnode1[192.168.1.135]      rac12cnode2[192.168.1.136]      yes
  rac12cnode1[192.168.1.135]      rac12cnode2[192.168.1.140]      yes
  rac12cnode1[192.168.1.135]      rac12cnode4[192.168.1.134]      yes
  rac12cnode3[192.168.1.137]      rac12cnode2[192.168.1.132]      yes
  rac12cnode3[192.168.1.137]      rac12cnode3[192.168.1.133]      yes
  rac12cnode3[192.168.1.137]      rac12cnode2[192.168.1.139]      yes
  rac12cnode3[192.168.1.137]      rac12cnode2[192.168.1.136]      yes
  rac12cnode3[192.168.1.137]      rac12cnode2[192.168.1.140]      yes
  rac12cnode3[192.168.1.137]      rac12cnode4[192.168.1.134]      yes
  rac12cnode2[192.168.1.132]      rac12cnode3[192.168.1.133]      yes
  rac12cnode2[192.168.1.132]      rac12cnode2[192.168.1.139]      yes
  rac12cnode2[192.168.1.132]      rac12cnode2[192.168.1.136]      yes
  rac12cnode2[192.168.1.132]      rac12cnode2[192.168.1.140]      yes
  rac12cnode2[192.168.1.132]      rac12cnode4[192.168.1.134]      yes
  rac12cnode3[192.168.1.133]      rac12cnode2[192.168.1.139]      yes
  rac12cnode3[192.168.1.133]      rac12cnode2[192.168.1.136]      yes
  rac12cnode3[192.168.1.133]      rac12cnode2[192.168.1.140]      yes
  rac12cnode3[192.168.1.133]      rac12cnode4[192.168.1.134]      yes
  rac12cnode2[192.168.1.139]      rac12cnode2[192.168.1.136]      yes
  rac12cnode2[192.168.1.139]      rac12cnode2[192.168.1.140]      yes
  rac12cnode2[192.168.1.139]      rac12cnode4[192.168.1.134]      yes
  rac12cnode2[192.168.1.136]      rac12cnode2[192.168.1.140]      yes
  rac12cnode2[192.168.1.136]      rac12cnode4[192.168.1.134]      yes
  rac12cnode2[192.168.1.140]      rac12cnode4[192.168.1.134]      yes
Result: Node connectivity passed for subnet "192.168.1.0" with node(s) rac12cnode1,rac12cnode3,rac12cnode2,rac12cnode4

Check: TCP connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode1:192.168.1.131       rac12cnode1:192.168.1.135       passed
  rac12cnode1:192.168.1.131       rac12cnode3:192.168.1.137       passed
  rac12cnode1:192.168.1.131       rac12cnode2:192.168.1.132       passed
  rac12cnode1:192.168.1.131       rac12cnode3:192.168.1.133       passed
  rac12cnode1:192.168.1.131       rac12cnode2:192.168.1.139       passed
  rac12cnode1:192.168.1.131       rac12cnode2:192.168.1.136       passed
  rac12cnode1:192.168.1.131       rac12cnode2:192.168.1.140       passed
  rac12cnode1:192.168.1.131       rac12cnode4:192.168.1.134       passed
Result: TCP connectivity check passed for subnet "192.168.1.0"

Check: Node connectivity using interfaces on subnet "10.11.0.0"
Check: Node connectivity of subnet "10.11.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode2[10.11.0.136]        rac12cnode1[10.11.0.135]        yes
  rac12cnode2[10.11.0.136]        rac12cnode3[10.11.0.137]        yes
  rac12cnode2[10.11.0.136]        rac12cnode4[10.11.0.138]        yes
  rac12cnode1[10.11.0.135]        rac12cnode3[10.11.0.137]        yes
  rac12cnode1[10.11.0.135]        rac12cnode4[10.11.0.138]        yes
  rac12cnode3[10.11.0.137]        rac12cnode4[10.11.0.138]        yes
Result: Node connectivity passed for subnet "10.11.0.0" with node(s) rac12cnode2,rac12cnode1,rac12cnode3,rac12cnode4

Check: TCP connectivity of subnet "10.11.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac12cnode2:10.11.0.136         rac12cnode1:10.11.0.135         passed
  rac12cnode2:10.11.0.136         rac12cnode3:10.11.0.137         passed
  rac12cnode2:10.11.0.136         rac12cnode4:10.11.0.138         passed
Result: TCP connectivity check passed for subnet "10.11.0.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed for subnet "10.11.0.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "10.10.10.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "10.10.10.0" for multicast communication with multicast group "224.0.0.251" passed.
Checking subnet "10.11.0.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "10.11.0.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Checking node application existence...
Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   yes                       yes                       passed
  rac12cnode2   yes                       yes                       passed
  rac12cnode3   yes                       yes                       passed
VIP node application check passed
Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   yes                       yes                       passed
  rac12cnode2   yes                       yes                       passed
  rac12cnode3   yes                       yes                       passed
NETWORK node application check passed
Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac12cnode1   no                        yes                       passed
  rac12cnode2   no                        yes                       passed
  rac12cnode3   no                        yes                       passed
ONS node application check passed

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac12cnode4   passed                    does not exist
Result: User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  rac12cnode4                           passed
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  rac12cnode4                           Observer
CTSS is in Observer state. Switching over to clock synchronization checks using NTP

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
  Node Name                             Running?
  ------------------------------------  ------------------------
  rac12cnode4                           yes
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Checking whether NTP daemon or service is using UDP port 123 on all nodes
Check for NTP daemon or service using UDP port 123
  Node Name                             Port Open?
  ------------------------------------  ------------------------
  rac12cnode4                           yes
NTP common Time Server Check started...
NTP Time Server ".LOCL." is common to all nodes on which the NTP daemon is running
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Checking on nodes "[rac12cnode4]"...
Check: Clock time offset from NTP Time Server
Time Server: .LOCL.
Time Offset Limit: 1000.0 msecs
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  rac12cnode4   0.0                       passed
Time Server ".LOCL." has time offsets that are within permissible limits for nodes "[rac12cnode4]".
Clock time offset check passed
Result: Clock synchronization check using Network Time Protocol(NTP) passed

Oracle Cluster Time Synchronization Services check passed
Post-check for node addition was successful.
[grid@rac12cnode1 ~]$

Hope It Helps
SRI
