
RAC : Clone Flex Cluster To Extend Cluster By Adding Hub and Lead Nodes

Written By askMLabs on Saturday, March 15, 2014 | 11:57 PM


In this article, we will see how to clone a RAC environment to extend an existing cluster. Cloning can also be used to prepare a new cluster environment, but here we focus on extending an existing cluster using the clone method. A cluster can also be extended using the addnode method; cloning is simply an alternative way to extend the cluster.

Environment Details :
RAC Version   : 12c (12.1.0.1.0)
Cluster Type  : Flex Cluster
Hub Nodes     : rac12cnode1/2
Leaf Nodes    : None
DB Running on : All Hub Nodes
Task          : Clone cluster to extend from 2 nodes to 4 nodes

[grid@rac12cnode1 ~]$ olsnodes -s -t
rac12cnode1     Active  Unpinned
rac12cnode2     Active  Unpinned
[grid@rac12cnode1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode
[grid@rac12cnode1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
[grid@rac12cnode1 ~]$


Step By Step : 

  1. Prepare the new cluster nodes
  2. Prepare the existing cluster nodes
  3. Deploy the Grid Infrastructure to the new cluster nodes
  4. Run clone.pl on the new cluster nodes
  5. Run orainstRoot.sh on all target cluster nodes
  6. Execute addnode.sh in silent mode to add the hub node and leaf node
  7. Copy config files from the source system to all target cluster nodes
  8. Run root.sh on all the target nodes to configure the cluster
  9. Verify the cloned cluster with cluvfy
1. Prepare the new cluster nodes:
Our task is to add two nodes to the existing flex cluster using the clone method. First, we need to prepare the nodes that are to be added to the cluster.

The following points are taken from the documentation; you can also follow my other articles/videos on RAC to prepare nodes for cluster addition.
On each destination node, perform the following preinstallation steps:
  1. Specify the kernel parameters
  2. Configure block devices for Oracle Clusterware devices
  3. Ensure that you have set the block device permissions correctly
  4. Use short, nondomain-qualified names for all of the names in the /etc/hosts file
  5. Test whether the interconnect interfaces are reachable using the ping command
  6. Verify that the VIP addresses are not active at the start of the cloning process by using the ping command (the ping command of the VIP address must fail)
  7. On AIX systems, and on Solaris x86-64-bit systems running vendor clusterware, if you add a node to the cluster, then you must run the rootpre.sh script (located at the mount point if you install Oracle Clusterware from a DVD, or in the directory where you unzip the tar file if you download the software) on the node before you add the node to the cluster
  8. Run CVU to verify your hardware and operating system environment
Complete all of the above steps so that the nodes are ready to be added to the cluster via cloning.
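A minimal sketch of a few of these checks, run as root on each new node (rac12cnode1-priv is an assumed private/interconnect hostname for this environment; adjust to your naming):

/sbin/sysctl -a | grep -E 'kernel.sem|shmmax|shmall|file-max'   # spot-check kernel parameters
ping -c 2 rac12cnode1-priv                                      # interconnect must be reachable
ping -c 2 rac12cnode3-vip                                       # expected to FAIL - the VIP must not be active yet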

Verify that the nodes are ready to be added to the cluster using the following commands:
[grid@rac12cnode1 ~]$ nslookup rac12cnode3
Server:         192.168.1.51
Address:        192.168.1.51#53
Name:   rac12cnode3.localdomain
Address: 192.168.1.133
[grid@rac12cnode1 ~]$ nslookup rac12cnode4
Server:         192.168.1.51
Address:        192.168.1.51#53
Name:   rac12cnode4.localdomain
Address: 192.168.1.134
[grid@rac12cnode1 ~]$ clear
[grid@rac12cnode1 ~]$ cluvfy stage -pre nodeadd -n rac12cnode3,rac12cnode4 -fixup -verbose
Performing pre-checks for node addition......
2. Prepare the existing cluster nodes:
In this step, we need to create a copy of the existing Oracle Grid Infrastructure home and remove the unnecessary files from the copied home. You can perform this step while the clusterware is up and running.
Create an exclusion list of files and directories to skip while creating the tar backup.
[root@rac12cnode1 askm]# cat excl_list.txt
/u01/app/12.1.0/grid/rac12cnode1
/u01/app/12.1.0/grid/log/host_name
/u01/app/12.1.0/grid/gpnp/host_name
/u01/app/12.1.0/grid/crs/init/*
/u01/app/12.1.0/grid/cdata/*
/u01/app/12.1.0/grid/crf/*
/u01/app/12.1.0/grid/network/admin/*.ora
/u01/app/12.1.0/grid/root.sh*
*.ouibak
*.ouibak1
[root@rac12cnode1 askm]#
Create a compressed copy of the Oracle Grid Infrastructure home using the tar utility. Execute the following command on any existing node in the cluster:
[root@rac12cnode1 12.1.0]# tar -czf gridHome.tar.gz -X /tmp/askm/excl_list.txt /u01/app/12.1.0/grid
tar: Removing leading `/' from member names
tar: /u01/app/12.1.0/grid/log/rac12cnode1/gipcd/gipcd.log: file changed as we read it
tar: /u01/app/12.1.0/grid/log/rac12cnode1/agent/crsd/oraagent_grid/oraagent_grid.log: file changed as we read it
tar: /u01/app/12.1.0/grid/log/rac12cnode1/ctssd/octssd.log: file changed as we read it
tar: /u01/app/12.1.0/grid/log/rac12cnode1/cssd/ocssd.log: file changed as we read it
tar: /u01/app/12.1.0/grid/rdbms/audit: file changed as we read it
[root@rac12cnode1 12.1.0]# ls -lrt
total 3100700
drwxr-xr-x. 74 root oinstall       4096 Mar 13 03:41 grid
-rw-r--r--.  1 root root     3172006616 Mar 14 04:53 gridHome.tar.gz
[root@rac12cnode1 12.1.0]#

3. Deploy the Grid Infrastructure to new cluster nodes:
Now copy the compressed backup of the Oracle Grid Infrastructure home created in step 2 above to all the target nodes, i.e., rac12cnode3/4.
[root@rac12cnode1 12.1.0]# ls -lrt
total 3100700
drwxr-xr-x. 74 root oinstall       4096 Mar 13 03:41 grid
-rw-r--r--.  1 root root     3172006616 Mar 14 04:53 gridHome.tar.gz
[root@rac12cnode1 12.1.0]# scp gridHome.tar.gz root@rac12cnode3:/u01/app/12.1.0
root@rac12cnode3's password:
gridHome.tar.gz                                                                            100% 3025MB   9.7MB/s   05:12
[root@rac12cnode1 12.1.0]# ls -lrt gridHome.tar.gz
-rw-r--r--. 1 root root 3172006616 Mar 14 04:53 gridHome.tar.gz
[root@rac12cnode1 12.1.0]# du -sh gridHome.tar.gz
3.0G    gridHome.tar.gz
[root@rac12cnode1 12.1.0]# scp gridHome.tar.gz root@rac12cnode4:/u01/app/12.1.0
root@rac12cnode4's password:
gridHome.tar.gz                                                                            100% 3025MB   9.3MB/s   05:25
[root@rac12cnode1 12.1.0]#

Extract the compressed backup on node 3 and node 4.
[root@rac12cnode4 ~]# cd /u01/app/12.1.0/
[root@rac12cnode4 12.1.0]# ls
[root@rac12cnode4 12.1.0]# ls -lrt
total 3100696
-rw-r--r--. 1 root root 3172006616 Mar 14 06:33 gridHome.tar.gz
[root@rac12cnode4 12.1.0]# du -sh *
3.0G    gridHome.tar.gz
[root@rac12cnode4 12.1.0]# tar xvzf gridHome.tar.gz -C /
..
..
[root@rac12cnode4 12.1.0]#
[root@rac12cnode3 ~]# cd /u01/app/12.1.0/
[root@rac12cnode3 12.1.0]# ls -lrt
total 3100696
-rw-r--r--. 1 root root 3172006616 Mar 14 05:16 gridHome.tar.gz
[root@rac12cnode3 12.1.0]# tar xvzf gridHome.tar.gz -C /
..
..
[root@rac12cnode3 12.1.0]#
NOTE : Review the copied homes on the target nodes to check whether any files still need to be deleted from the copied homes on nodes 3 and 4.
I have created the following shell script to delete any unwanted files from the new homes on nodes 3 and 4.
[root@rac12cnode3 log]# cat /tmp/askm/file_delete.sh
cd /u01/app/12.1.0/grid/
rm -rf /u01/app/12.1.0/grid/log/rac12cnode1
rm -rf /u01/app/12.1.0/grid/gpnp/rac12cnode1
find gpnp -type f -exec rm -f {} \;
rm -rf /u01/app/12.1.0/grid/cfgtoollogs/*
rm -rf /u01/app/12.1.0/grid/crs/init/*
rm -rf /u01/app/12.1.0/grid/cdata/*
rm -rf /u01/app/12.1.0/grid/crf/*
rm -rf /u01/app/12.1.0/grid/network/admin/*.ora
rm -rf /u01/app/12.1.0/grid/crs/install/crsconfig_params
find . -name '*.ouibak' -exec rm {} \;
find . -name '*.ouibak.1' -exec rm {} \;
rm -rf /u01/app/12.1.0/grid/root.sh*
rm -rf /u01/app/12.1.0/grid/rdbms/audit/*
rm -rf /u01/app/12.1.0/grid/rdbms/log/*
rm -rf /u01/app/12.1.0/grid/inventory/backup/*
[root@rac12cnode3 log]#
Now execute the file /tmp/askm/file_delete.sh on nodes 3 and 4, for example as sketched below.
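Assuming the script has been copied to /tmp/askm on both new nodes, it can be run from node 1 over ssh (or simply run it locally as root on each node):

ssh root@rac12cnode3 "sh /tmp/askm/file_delete.sh"
ssh root@rac12cnode4 "sh /tmp/askm/file_delete.sh"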

4. Run clone.pl on the new cluster nodes:
Run the clone.pl script located in the Grid_home/clone/bin directory on Node 3 and Node 4.
On Node 3 (Hub Node):
[grid@rac12cnode3 ~]$ cd $ORACLE_HOME/clone/bin
[grid@rac12cnode3 bin]$ perl clone.pl ORACLE_HOME=/u01/app/12.1.0/grid ORACLE_HOME_NAME=OraGrid12c ORACLE_BASE=/u01/app/grid "'CLUSTER_NODES={rac12cnode1, rac12cnode2,rac12cnode3,rac12cnode4}'" "'LOCAL_NODE=rac12cnode3'" CRS=TRUE INVENTORY_LOCATION=/u01/app/oraInventory
./runInstaller -clone -waitForCompletion  "ORACLE_HOME=/u01/app/12.1.0/grid" "ORACLE_HOME_NAME=OraGrid12c" "ORACLE_BASE=/u01/app/grid" "'CLUSTER_NODES={rac12cnode1, rac12cnode2,rac12cnode3,rac12cnode4}'" "'LOCAL_NODE=rac12cnode3'" "CRS=TRUE" "INVENTORY_LOCATION=/u01/app/oraInventory" -silent -paramFile /u01/app/12.1.0/grid/clone/clone_oraparam.ini
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 500 MB.   Actual 7237 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 2042 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-03-14_08-10-41AM. Please wait ...You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2014-03-14_08-10-41AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..................................................   90% Done.
..................................................   95% Done.
Copy files in progress.
Copy files successful.
Link binaries in progress.
Link binaries successful.
Setup files in progress.
Setup files successful.
Setup Inventory in progress.
Setup Inventory successful.
Finish Setup successful.
The cloning of OraGrid12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2014-03-14_08-10-41AM.log' for more details.
As a root user, execute the following script(s):
        1. /u01/app/12.1.0/grid/root.sh

..................................................   100% Done.
[grid@rac12cnode3 bin]$

On Node 4 (Leaf Node):
[grid@rac12cnode4 ~]$ cd /u01/app/12.1.0/grid/clone/bin
[grid@rac12cnode4 bin]$ pwd
/u01/app/12.1.0/grid/clone/bin
[grid@rac12cnode4 bin]$ perl clone.pl ORACLE_HOME=/u01/app/12.1.0/grid ORACLE_HOME_NAME=OraGrid12c ORACLE_BASE=/u01/app/grid "'CLUSTER_NODES={rac12cnode1, rac12cnode2,rac12cnode3,rac12cnode4}'" "'LOCAL_NODE=rac12cnode4'" CRS=TRUE INVENTORY_LOCATION=/u01/app/oraInventory
./runInstaller -clone -waitForCompletion  "ORACLE_HOME=/u01/app/12.1.0/grid" "ORACLE_HOME_NAME=OraGrid12c" "ORACLE_BASE=/u01/app/grid" "'CLUSTER_NODES={rac12cnode1, rac12cnode2,rac12cnode3,rac12cnode4}'" "'LOCAL_NODE=rac12cnode4'" "CRS=TRUE" "INVENTORY_LOCATION=/u01/app/oraInventory" -silent -paramFile /u01/app/12.1.0/grid/clone/clone_oraparam.ini
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 500 MB.   Actual 7239 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 2041 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-03-14_08-15-13AM. Please wait ...You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2014-03-14_08-15-13AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..................................................   90% Done.
..................................................   95% Done.
Copy files in progress.
Copy files successful.
Link binaries in progress.
Link binaries successful.
Setup files in progress.
Setup files successful.
Setup Inventory in progress.
Setup Inventory successful.
Finish Setup successful.
The cloning of OraGrid12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2014-03-14_08-15-13AM.log' for more details.
As a root user, execute the following script(s):
        1. /u01/app/12.1.0/grid/root.sh

..................................................   100% Done.
[grid@rac12cnode4 bin]$

IMPORTANT: Do not run root.sh at this point.

5. Run orainstRoot.sh script on all target cluster nodes:(as root user)
This script populates the /etc/oraInst.loc file with the location of the central inventory.
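After the script completes, /etc/oraInst.loc should reference the central inventory used in this environment; it would look roughly like this (illustrative):

inventory_loc=/u01/app/oraInventory
inst_group=oinstall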
[root@rac12cnode3 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac12cnode3 ~]# 
[root@rac12cnode4 oraInventory]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac12cnode4 oraInventory]#
6. Execute addnode.sh in silent mode to add hub node and leaf node:
Run the addnode.sh script from $GRID_HOME/addnode on an existing cluster node (node 1 here).
[grid@rac12cnode1 addnode]$ ./addnode.sh -silent -noCopy ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NEW_NODES={rac12cnode3,rac12cnode4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac12cnode3-vip}" "CLUSTER_NEW_NODE_ROLES={HUB,LEAF}"
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB.   Actual 7444 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 407 MB    Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2014-03-15_11-16-57AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2014-03-15_11-16-57AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
Prepare Configuration in progress.
Prepare Configuration successful.
..................................................   40% Done.
As a root user, execute the following script(s):
        1. /u01/app/12.1.0/grid/root.sh
Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[rac12cnode3, rac12cnode4]
The scripts can be executed in parallel on all the nodes. If there are any policy managed databases managed by cluster, proceed with the addnode procedure without executing the root.sh script. Ensure that root.sh script is executed after all the policy managed databases managed by clusterware are extended to the new nodes.
..................................................   60% Done.
Update Inventory in progress.
..................................................   100% Done.
Update Inventory successful.
Successfully Setup Software.
[grid@rac12cnode1 addnode]$ 
In the preceding syntax example, Node 4 is designated as a Leaf Node and does not require that a VIP be included.

7. Copy config files from source system to all target cluster nodes:
Copy the following files from Node 1, on which you ran addnode.sh, to Node 3 and Node 4.
Grid_home/crs/install/crsconfig_addparams
Grid_home/crs/install/crsconfig_params
Grid_home/gpnp
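Note that the gpnp directory is transferred as a tarball below; creating it is not shown in the original output, but a minimal sketch on node 1 would be along these lines:

cd /u01/app/12.1.0/grid
tar czf gpnp.tar.gz gpnp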
[root@rac12cnode1 grid]# scp /u01/app/12.1.0/grid/crs/install/crsconfig_addparams /u01/app/12.1.0/grid/crs/install/crsconfig_params /u01/app/12.1.0/grid/gpnp.tar.gz 192.168.1.133:/tmp/askm
root@192.168.1.133's password:
crsconfig_addparams                                                                        100% 1089     1.1KB/s   00:00
crsconfig_params                                                                           100% 5509     5.4KB/s   00:00
gpnp.tar.gz                                                                                100%   90KB  90.1KB/s   00:00
[root@rac12cnode1 grid]# scp /u01/app/12.1.0/grid/crs/install/crsconfig_addparams /u01/app/12.1.0/grid/crs/install/crsconfig_params /u01/app/12.1.0/grid/gpnp.tar.gz 192.168.1.134:/tmp/askm
root@192.168.1.134's password:
crsconfig_addparams                                                                        100% 1089     1.1KB/s   00:00
crsconfig_params                                                                           100% 5509     5.4KB/s   00:00
gpnp.tar.gz                                                                                100%   90KB  90.1KB/s   00:00
[root@rac12cnode1 grid]# 
[root@rac12cnode4 ~]# mv /u01/app/12.1.0/grid/crs/install/crsconfig_addparams /u01/app/12.1.0/grid/crs/install/crsconfig_addparams_bak
[root@rac12cnode4 ~]# mv /u01/app/12.1.0/grid/crs/install/crsconfig_params /u01/app/12.1.0/grid/crs/install/crsconfig_params_bak
[root@rac12cnode4 ~]# mv /tmp/askm/crsconfig_addparams /u01/app/12.1.0/grid/crs/install/crsconfig_addparams
[root@rac12cnode4 ~]# mv /tmp/askm/crsconfig_params /u01/app/12.1.0/grid/crs/install/crsconfig_params
[root@rac12cnode4 ~]# chown grid:oinstall /u01/app/12.1.0/grid/crs/install/crsconfig_addparams
[root@rac12cnode4 ~]# chown grid:oinstall /u01/app/12.1.0/grid/crs/install/crsconfig_params
[root@rac12cnode4 ~]# cd /u01/app/12.1.0/grid
[root@rac12cnode4 grid]# mv gpnp gpnp_bak
[root@rac12cnode4 grid]# pwd
/u01/app/12.1.0/grid
[root@rac12cnode4 grid]# tar xzf /tmp/askm/gpnp.tar.gz
[root@rac12cnode4 grid]# ls -ld gpnp
drwxr-x---. 8 grid oinstall 4096 Mar  7 09:06 gpnp
[root@rac12cnode4 grid]#

Complete this step on both node 3 and node 4; the equivalent sequence for node 3 is sketched below.
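For node 3, the same paths apply with only the host changing (a condensed sketch of the commands shown above for node 4):

mv /u01/app/12.1.0/grid/crs/install/crsconfig_addparams /u01/app/12.1.0/grid/crs/install/crsconfig_addparams_bak
mv /u01/app/12.1.0/grid/crs/install/crsconfig_params /u01/app/12.1.0/grid/crs/install/crsconfig_params_bak
mv /tmp/askm/crsconfig_addparams /u01/app/12.1.0/grid/crs/install/crsconfig_addparams
mv /tmp/askm/crsconfig_params /u01/app/12.1.0/grid/crs/install/crsconfig_params
chown grid:oinstall /u01/app/12.1.0/grid/crs/install/crsconfig_addparams /u01/app/12.1.0/grid/crs/install/crsconfig_params
cd /u01/app/12.1.0/grid && mv gpnp gpnp_bak && tar xzf /tmp/askm/gpnp.tar.gz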

8. Run root.sh on all the target nodes to configure cluster:
On Node 3 and Node 4, run the Grid_home/root.sh script.
[root@rac12cnode3 grid]# ./root.sh
Check /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-55-20.log for the output of root script
[root@rac12cnode3 grid]# cat /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-55-20.log
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
/u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/roothas.pl

To configure Grid Infrastructure for a Cluster execute the following command as grid user:
/u01/app/12.1.0/grid/crs/config/config.sh
This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

[root@rac12cnode3 grid]#
If you get this output after executing root.sh, then your cluster is not configured correctly. You need to modify the script "/u01/app/12.1.0/grid/crs/config/rootconfig.sh" to uncomment the ADDNODE block shown in the diff below. Please refer to my video if you need more details on this part.
[root@rac12cnode4 ~]# diff /u01/app/12.1.0/grid/crs/config/rootconfig.sh /u01/app/12.1.0/grid/crs/config/rootconfig.sh_bak_askm
33,37c33,37
< if [ "$ADDNODE" = "true" ]; then
<   SW_ONLY=false
<   HA_CONFIG=false
< fi
---
> #if [ "$ADDNODE" = "true" ]; then
> #  SW_ONLY=false
> #  HA_CONFIG=false
> #fi
[root@rac12cnode4 ~]# 
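A minimal sketch of making that edit with sed, assuming the block sits at lines 33-37 as in the diff above (verify the line numbers in your copy first, and keep a backup as done here):

cd /u01/app/12.1.0/grid/crs/config
cp rootconfig.sh rootconfig.sh_bak_askm
sed -i '33,37 s/^#//' rootconfig.sh    # uncomment the ADDNODE block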
Now run root.sh again. This time it executes successfully and configures the cluster.
[root@rac12cnode3 grid]# ./root.sh
Check /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-57-18.log for the output of root script
[root@rac12cnode3 grid]#
[root@rac12cnode4 grid]# ./root.sh
Check /u01/app/12.1.0/grid/install/root_rac12cnode4_2014-03-15_12-24-46.log for the output of root script
[root@rac12cnode4 grid]#
The output of the above executions, confirming that they configured the clusterware correctly:
[root@rac12cnode3 ~]# tail -f /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-57-18.log
    ORACLE_HOME=  /u01/app/12.1.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
OLR initialization - successful
2014/03/15 11:59:52 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cnode3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cnode3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cnode3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cnode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.evmd' on 'rac12cnode3'
CRS-2676: Start of 'ora.mdnsd' on 'rac12cnode3' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12cnode3'
CRS-2676: Start of 'ora.gpnpd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12cnode3'
CRS-2676: Start of 'ora.gipcd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12cnode3'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12cnode3'
CRS-2676: Start of 'ora.diskmon' on 'rac12cnode3' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rac12cnode3'
CRS-2676: Start of 'ora.cssd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac12cnode3'
CRS-2676: Start of 'ora.ctssd' on 'rac12cnode3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac12cnode3'
CRS-2676: Start of 'ora.asm' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac12cnode3'
CRS-2676: Start of 'ora.storage' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac12cnode3'
CRS-2676: Start of 'ora.crsd' on 'rac12cnode3' succeeded
CRS-6017: Processing resource auto-start for servers: rac12cnode3
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.ons' on 'rac12cnode3'
CRS-2676: Start of 'ora.ons' on 'rac12cnode3' succeeded
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac12cnode3'
CRS-2676: Start of 'ora.asm' on 'rac12cnode3' succeeded
CRS-2664: Resource 'ora.DATA.dg' is already running on 'rac12cnode1'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode1'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode2'
CRS-2664: Resource 'ora.DATA.dg' is already running on 'rac12cnode2'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode1'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode2'
CRS-6016: Resource auto-start has completed for server rac12cnode3
CRS-2672: Attempting to start 'ora.proxy_advm' on 'rac12cnode3'
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/03/15 12:10:16 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/03/15 12:11:15 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac12cnode3 ~]#
[root@rac12cnode4 ~]# tail -f /u01/app/12.1.0/grid/install/root_rac12cnode4_2014-03-15_12-24-46.log
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
OLR initialization - successful
2014/03/15 12:27:14 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cnode4'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cnode4'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cnode4' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cnode4' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.evmd' on 'rac12cnode4'
CRS-2676: Start of 'ora.evmd' on 'rac12cnode4' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12cnode4'
CRS-2676: Start of 'ora.gpnpd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12cnode4'
CRS-2676: Start of 'ora.gipcd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12cnode4'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12cnode4'
CRS-2676: Start of 'ora.diskmon' on 'rac12cnode4' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rac12cnode4'
CRS-2676: Start of 'ora.cssd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac12cnode4'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac12cnode4' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac12cnode4'
CRS-2676: Start of 'ora.storage' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac12cnode4'
CRS-2676: Start of 'ora.crsd' on 'rac12cnode4' succeeded
CRS-6017: Processing resource auto-start for servers: rac12cnode4
CRS-6016: Resource auto-start has completed for server rac12cnode4
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/03/15 12:31:56 CLSRSC-343: Successfully started Oracle clusterware stack
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/03/15 12:32:05 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac12cnode4 ~]#

9. Verify the cloned cluster with cluvfy:
[root@rac12cnode1 grid]# su - grid
[grid@rac12cnode1 ~]$ olsnodes -s -t
rac12cnode1     Active  Unpinned
rac12cnode2     Active  Unpinned
rac12cnode3     Active  Unpinned
rac12cnode4     Active  Unpinned
[grid@rac12cnode1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode
[grid@rac12cnode1 ~]$ srvctl config gns
GNS is enabled.
[grid@rac12cnode1 ~]$ crsctl get node role config
Node 'rac12cnode1' configured role is 'hub'
[grid@rac12cnode1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
[grid@rac12cnode1 ~]$ srvctl status asm -detail
ASM is running on rac12cnode1,rac12cnode2,rac12cnode3
ASM is enabled.
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
Node 'rac12cnode3' configured role is 'hub'
Node 'rac12cnode4' configured role is 'leaf'
[grid@rac12cnode1 ~]$ clear
[grid@rac12cnode1 ~]$ cluvfy stage -post nodeadd -n rac12cnode3,rac12cnode4 -verbose
...
....
Post-check for node addition was successful.
[grid@rac12cnode1 ~]$ 

Hope this helps
SRI
