Duplicate Database Using RMAN Without Connecting To Target Database

Written By askMLabs on Tuesday, April 1, 2014 | 12:57 PM

In this document, we explain, step by step, how to duplicate a database from a previous incarnation without connecting to the target database.

I have seen many articles on this topic, but none with a practical approach. Most describe duplicating a database without connecting to the target database by copying the backups to the auxiliary database server. In this article, I will show the practical approach: how a backup stored on TSM can be made available to the auxiliary server.

1. Environment :

Environment   Database   Host                 Version   TSM Configuration   RMAN Catalog
-----------   --------   ------------------   -------   -----------------   ------------
TARGET        CRPROD     dbprl.askmlabs.com   11gR2     tsm5                rcatprod
AUXILIARY     CRPERF     dbrfl.askmlabs.com   11gR2     tsm6                rcatdev
Please note that the target environment is the source we duplicate from, and the auxiliary database is the database that will be created from the target database backups.

Please note the complexity of the environment: the target and auxiliary databases are configured with different TSM servers. The target database backups are on tsm5, and we need to present these backups to the dbrfl server.

2. Task :
We need to create a new duplicate database CRPERF from the backups of CRPROD, restored to a point in time before CRPROD was opened with RESETLOGS, i.e. we need to duplicate CRPROD to CRPERF from the parent incarnation of CRPROD. There are different approaches to this task. In this document, we use "duplicating a database without connecting to the target database" as described in the Oracle documentation here.

RMAN> list incarnation of database crprod;

List of Database Incarnations
DB Key  Inc Key   DB Name  DB ID       STATUS   Reset SCN      Reset Time
------- --------- -------- ----------- -------- -------------- ----------
2721859 2721860   CRPROD   1190017710  PARENT   1              18-MAR-11
2721859 233687610 CRPROD   1190017710  CURRENT  2321471232053  21-MAR-14

CRPROD was opened with RESETLOGS on 21-MAR-2014, which started a new incarnation. Our aim is to restore and recover CRPROD as of 13-MAR-2014. That date is not in the current incarnation; it is in the parent incarnation. We get the following error if we try to duplicate while connected to the target database:

RMAN-06004: ORACLE error from recovery catalog database: RMAN-20207: UNTIL TIME or RECOVERY WINDOW is before RESETLOGS time

3. Procedure
3.1 Prepare auxiliary environment :
Calculate the space requirements and make sure you have enough space available for duplicating database.
Prepare the init.ora file for the auxiliary database and make sure it includes the following two parameters:
db_file_name_convert =("/u02/oradata/crprod/", "/u02/oradata/crperf/")
log_file_name_convert =("/u02/oradata/crprod/", "/u02/oradata/crperf/")
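The two convert parameters above perform a simple prefix substitution on every datafile and online log path. A minimal shell sketch of the mapping they apply (the filename is an illustrative example, not from this environment):

```shell
#!/bin/sh
# Sketch: mimic the prefix substitution that db_file_name_convert /
# log_file_name_convert perform on each file path during duplication.
src_prefix="/u02/oradata/crprod/"
dst_prefix="/u02/oradata/crperf/"
f="/u02/oradata/crprod/system01.dbf"     # hypothetical source datafile
mapped="${dst_prefix}${f#"$src_prefix"}" # strip source prefix, prepend target
echo "$mapped"                           # prints /u02/oradata/crperf/system01.dbf
```

Any datafile whose path does not start with the source prefix is left untranslated, which is why both directories must be spelled exactly as they appear in the source database.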

Please refer to my other article, RMAN DUPLICATION FROM TAPE BACKUPS, for detailed steps on configuring the environment for duplication.

3.2 Configure TSM on auxiliary environment:
The TSM backups for production are on tsm5, whereas the CRPERF environment is configured to keep its backups on tsm6. So we need the following configuration to make the production backups on tsm5 available to the CRPERF server.

Create a temporary directory on dbrfl.askmlabs.com to keep all the tsm5 configuration files.
mkdir $HOME/dup_perf

Copy the following files to the directory created above from the server dbprl.askmlabs.com.
  • dsm.opt.tsm5   ( this configuration file specifies whether TSM backups are on tsm5 or tsm6 )
  • CRPROD_tdpo.opt  ( TSM tape configuration file used to connect to tsm5 )
  • TDPO.tdpdbprl  ( password file from the production server dbprl.askmlabs.com )
Now create the following symlinks to point to the configuration files.
$ cd $HOME/dup_perf
$ ln -s dsm.opt.tsm5   dsm.opt
$ ln -s /opt/tivoli/tsm/client/ba/bin/dsm.sys  dsm.sys

Now the directory  $HOME/dup_perf  should look as below ...
 [oracle@dbrfl ~]$ ls -lrt /home/oracle/dup_perf
total 32
-rw-r--r-- 1 oracle dba  48 Mar 27 16:34 TDPO.tdpdbprl
-rw-r--r-- 1 oracle dba 744 Mar 27 16:39 dsm.opt.tsm5
lrwxrwxrwx 1 oracle dba  12 Mar 27 16:40 dsm.opt -> dsm.opt.tsm5
lrwxrwxrwx 1 oracle dba  37 Mar 27 16:40 dsm.sys -> /opt/tivoli/tsm/client/ba/bin/dsm.sys
-rwxr-xr-x 1 oracle dba 693 Mar 27 16:45 CRPROD_tdpo.opt
[oracle@dbrfl ~]$
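A quick sanity check that the relative symlink resolves as intended can be scripted; this is a sketch using a scratch directory and an empty stand-in config file, not the real TSM files:

```shell
#!/bin/sh
# Sketch: verify the dsm.opt -> dsm.opt.tsm5 symlink pattern used above.
# The scratch directory and placeholder file are assumptions for the sketch.
dir="${TMPDIR:-/tmp}/dup_perf_check"
mkdir -p "$dir"
: > "$dir/dsm.opt.tsm5"                    # stand-in for the copied config file
cd "$dir" && ln -sf dsm.opt.tsm5 dsm.opt   # relative symlink, as in the article
# -L: it is a symlink; -e: the link target actually exists
[ -L "$dir/dsm.opt" ] && [ -e "$dir/dsm.opt" ] && echo "dsm.opt -> $(readlink "$dir/dsm.opt")"
```

A dangling link here (for example, if dsm.opt.tsm5 was never copied over) would pass `-L` but fail `-e`, which is exactly the failure mode worth catching before running tdpoconf.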

Edit the CRPROD_tdpo.opt file to point the password file (TDPO_PSWDPATH) and DSM configuration file (DSMI_ORC_CONFIG) to the location created above.
DSMI_ORC_CONFIG    /home/oracle/dup_perf/dsm.opt
TDPO_PSWDPATH      /home/oracle/dup_perf

Execute the following command to verify the TSM configuration on the perf server:
# tdpoconf showenv -TDPO_OPT=/home/oracle/dup_perf/CRPROD_tdpo.opt
( in the output, verify that Server Name, Server Address and Node Name reflect the correct values )


3.3 Execute the duplicate command :
Connect to the production database and get the database ID, which will be used in the RMAN duplicate command.

Connect to the auxiliary database server and start the instance in NOMOUNT.

Connect to RMAN as below ( NOTE : we are not connecting to the target database )

rman auxiliary / catalog rmancat/xxxxxxxx@rcatprod
RMAN> run {
configure auxiliary channel 1 device type sbt parms="ENV=(TDPO_OPTFILE=/home/oracle/dup_perf/CRPROD_tdpo.opt)";
DUPLICATE DATABASE crprod DBID 1895637710 TO crperf UNTIL TIME "TO_DATE('03/13/2014', 'MM/DD/YYYY')" NOFILENAMECHECK;
}
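A duplicate from tape can run for hours, so it is convenient to save the RUN block to a command file and execute it with RMAN's `cmdfile=` option (optionally under nohup) instead of typing it interactively. A sketch; the script path is an assumption:

```shell
#!/bin/sh
# Sketch: write the duplicate RUN block to a command file so a long-running
# duplicate survives a dropped terminal session. Path is illustrative.
cat > /tmp/dup_crperf.rcv <<'EOF'
run {
  configure auxiliary channel 1 device type sbt
    parms="ENV=(TDPO_OPTFILE=/home/oracle/dup_perf/CRPROD_tdpo.opt)";
  duplicate database crprod dbid 1895637710 to crperf
    until time "TO_DATE('03/13/2014', 'MM/DD/YYYY')"
    nofilenamecheck;
}
EOF
echo "script written: /tmp/dup_crperf.rcv"
```

It can then be launched as, for example, `nohup rman auxiliary / catalog rmancat/xxxxxxxx@rcatprod cmdfile=/tmp/dup_crperf.rcv &` so the session can be monitored through the nohup output file.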


4. Verification:
Connect to the newly duplicated database and execute the following commands:
SQL> select instance_name,status from v$instance;
SQL> select created from v$database;
SQL> archive log list;   ( disable archivelog mode if it is enabled )

5. Post Duplication Steps :
Register the CRPERF database with the RMAN dev catalog.

Contact me if you have any doubts in this process.

Hope this helps
SRI

RAC 12c : JAN2014 PSU To 4 Node Flex Cluster

Written By askMLabs on Sunday, March 16, 2014 | 2:25 PM

In this article, we will see how to apply a PSU patch to a flex cluster. I have a RAC 12c flex cluster installed with 4 nodes: 3 hub nodes and 1 leaf node.
Environment Details :

RAC Version    12c (12.1.0.1.0)
Cluster Type   Flex Cluster
Hub Nodes      rac12cnode1/2/3
Leaf Nodes     rac12cnode4
DB Running on  All Hub Nodes
Task           Applying JAN2014 PSU Patch

Environment Configuration :

Type              Path                                      Owner    Version    Shared
----              ----                                      -----    -------    ------
Grid Infra Home   /u01/app/12.1.0/grid                      grid     12.1.0.1   False
Database Home     /u01/app/oracle/product/12.1.0/dbhome_1   oracle   12.1.0.1   False

[root@rac12cnode1 ~]# su - grid
[grid@rac12cnode1 ~]$ olsnodes -s -t
rac12cnode1     Active  Unpinned
rac12cnode2     Active  Unpinned
rac12cnode3     Active  Unpinned
rac12cnode4     Active  Unpinned
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
Node 'rac12cnode3' configured role is 'hub'
Node 'rac12cnode4' configured role is 'leaf'
[grid@rac12cnode1 ~]$ crsctl get node role status -all
Node 'rac12cnode1' active role is 'hub'
Node 'rac12cnode2' active role is 'hub'
Node 'rac12cnode3' active role is 'hub'
Node 'rac12cnode4' active role is 'leaf'
[grid@rac12cnode1 ~]$

Step By Step
  1. Download and Unzip the Latest OPatch to all cluster nodes
  2. Validate and Record Pre-Patch information
  3. Create OCM response file if it does not exist
  4. Download and Unzip the JAN2014 PSU patch for GI 12c, i.e. 12.1.0.1.2
  5. One-off patch Conflicts detection and Resolution
  6. Patch application
  7. Verification of Patch application
  8. Issues and Resolutions

1. Download and Unzip the Latest OPatch to all cluster nodes:
You must use OPatch version 12.1.0.1.1 or later to apply this patch. Oracle recommends that you use the latest released OPatch for 12.1 releases, which is available for download from My Oracle Support as patch 6880880.

[grid@rac12cnode1 1201_PSU]$ cd $ORACLE_HOME
[grid@rac12cnode1 grid]$ ls -ld OPatch
drwxr-xr-x. 7 grid oinstall 4096 Mar  7 08:36 OPatch
[grid@rac12cnode1 grid]$ which opatch
/u01/app/12.1.0/grid/OPatch/opatch
[grid@rac12cnode1 grid]$ opatch version
OPatch Version: 12.1.0.1.0
OPatch succeeded.
[grid@rac12cnode1 grid]$ pwd
/u01/app/12.1.0/grid
[grid@rac12cnode1 grid]$ mv OPatch OPatch_bak
mv: cannot move `OPatch' to `OPatch_bak': Permission denied
[grid@rac12cnode1 grid]$ exit
logout
[root@rac12cnode1 ~]# cd /u01/app/12.1.0/grid/
[root@rac12cnode1 grid]# mv OPatch OPatch_bak
[root@rac12cnode1 grid]# unzip /mnt/software/RAC/1201_PSU/p6880880_121010_Linux-x86-64.zip
Archive:  /mnt/software/RAC/1201_PSU/p6880880_121010_Linux-x86-64.zip
..
..
[root@rac12cnode1 grid]# ls -ld OPatch
drwxr-xr-x. 7 root root 4096 Oct  9 20:25 OPatch
[root@rac12cnode1 grid]# chown -R grid:oinstall OPatch
[root@rac12cnode1 grid]# ls -ld OPatch
drwxr-xr-x. 7 grid oinstall 4096 Oct  9 20:25 OPatch
[root@rac12cnode1 grid]# pwd
/u01/app/12.1.0/grid
[root@rac12cnode1 grid]# cd OPatch
[root@rac12cnode1 OPatch]# ls
datapatch      docs         jlib  opatch      opatch.bat  opatch.pl      operr      operr_readme.txt  README.txt
datapatch.bat  emdpatch.pl  ocm   opatchauto  opatch.ini  opatchprereqs  operr.bat  oplan             version.txt
[root@rac12cnode1 OPatch]# ./opatch version
OPatch Version: 12.1.0.1.2
OPatch succeeded.
[root@rac12cnode1 OPatch]#
It is a best practice to keep both the GRID home and the DATABASE home at the same OPatch level, so update OPatch in the DATABASE home as well.
[root@rac12cnode1 OPatch]# cd /u01/app/oracle/product/12.1.0/
[root@rac12cnode1 12.1.0]# ls
dbhome_1
[root@rac12cnode1 12.1.0]# cd dbhome_1/
[root@rac12cnode1 dbhome_1]# ls -ld OPatch
drwxr-xr-x. 7 oracle oinstall 4096 May 27  2013 OPatch
[root@rac12cnode1 dbhome_1]# mv OPatch OPatch_bak
[root@rac12cnode1 dbhome_1]# unzip /mnt/software/RAC/1201_PSU/p6880880_121010_Linux-x86-64.zip
Archive:  /mnt/software/RAC/1201_PSU/p6880880_121010_Linux-x86-64.zip
..
..
[root@rac12cnode1 dbhome_1]# chown -R oracle:oinstall OPatch
[root@rac12cnode1 dbhome_1]# cd Opatch
-bash: cd: Opatch: No such file or directory
[root@rac12cnode1 dbhome_1]# ./opatch version
-bash: ./opatch: No such file or directory
[root@rac12cnode1 dbhome_1]# cd OPatch
[root@rac12cnode1 OPatch]# ./opatch version
OPatch Version: 12.1.0.1.2
OPatch succeeded.
[root@rac12cnode1 OPatch]#
Repeat the above step on all the nodes in the cluster.
[root@rac12cnode2 OPatch]# ./opatch version
OPatch Version: 12.1.0.1.2
OPatch succeeded.
[root@rac12cnode2 OPatch]#
[root@rac12cnode3 OPatch]# ./opatch version
OPatch Version: 12.1.0.1.2
OPatch succeeded.
[root@rac12cnode3 OPatch]#
[root@rac12cnode4 OPatch]# ./opatch version
OPatch Version: 12.1.0.1.2
OPatch succeeded.
[root@rac12cnode4 OPatch]#
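The minimum-version requirement stated above can also be checked mechanically rather than by eye. A sketch that compares dotted version strings numerically (the `have` value is a placeholder for the output of `opatch version`):

```shell
#!/bin/sh
# Sketch: check that a reported OPatch version meets the 12.1.0.1.1 minimum
# required by this PSU, by sorting the two dotted versions field-by-field.
min="12.1.0.1.1"
have="12.1.0.1.2"   # placeholder: substitute the output of `opatch version`
lowest=$(printf '%s\n%s\n' "$min" "$have" |
  sort -t . -k1,1n -k2,2n -k3,3n -k4,4n -k5,5n | head -n 1)
if [ "$lowest" = "$min" ]; then
  echo "OPatch $have meets minimum $min"
else
  echo "OPatch $have is older than required $min"
fi
```

Plain string comparison would mis-order versions like 12.1.0.1.10 vs 12.1.0.1.2, which is why each dotted field is compared numerically.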

2. Validate and Record Pre-Patch information :
Validate using the following commands :
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/12.1.0/grid/
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/dbhome_1

Login to each node in RAC as grid user and execute the following command.
$GRID_ORACLE_HOME/OPatch/opatch lsinventory
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'

Login to each node in RAC as oracle user and execute the following command.
$ORACLE_HOME/OPatch/opatch lsinventory
$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'

Connect to each instance and record registry information.
SQL> select comp_name,version,status from dba_registry;
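Since the whole point of this step is to have a pre-patch baseline to diff against after patching, it helps to capture the inventory output into timestamped files. A sketch that only generates the capture commands (the output directory is an assumption):

```shell
#!/bin/sh
# Sketch: build one lsinventory capture command per Oracle home, writing
# each output to a timestamped file under /tmp for a post-patch diff.
stamp=$(date +%Y%m%d_%H%M%S)
cmds=""
for home in /u01/app/12.1.0/grid /u01/app/oracle/product/12.1.0/dbhome_1; do
  out="/tmp/lsinv_$(basename "$home")_${stamp}.txt"
  cmds="${cmds}${home}/OPatch/opatch lsinventory -detail -oh ${home} > ${out}
"
done
printf '%s' "$cmds"   # print the commands; run them as the home's owner
```

Each command should be executed as the owner of that home (grid for the GI home, oracle for the DB home), matching the per-user steps above.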

[grid@rac12cnode1 bin]$ opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.1.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0/grid/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/12.1.0/grid/cfgtoollogs/opatch/opatch2014-03-13_02-41-32AM_1.log
Lsinventory Output file location : /u01/app/12.1.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2014-03-13_02-41-32AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c                                       12.1.0.1.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Patch level status of Cluster nodes :
 Patching Level                  Nodes
 --------------                  -----
 0                               rac12cnode4,rac12cnode1,rac12cnode2,rac12cnode3
--------------------------------------------------------------------------------
OPatch succeeded.
[grid@rac12cnode1 bin]$
3. Create OCM Response File If It Does Not Exist : 
Create ocm response file using the following command and provide appropriate values for the prompts.
$GRID_ORACLE_HOME/OPatch/ocm/bin/emocmrsp
Verify the created file using,
$GRID_ORACLE_HOME/OPatch/ocm/bin/emocmrsp -verbose ocm.rsp
NOTE: The OPatch utility prompts for your OCM (Oracle Configuration Manager) response file when it runs; without it, we cannot proceed further.
[root@rac12cnode1 ~]# su - grid
[grid@rac12cnode1 ~]$ cd $ORACLE_HOME/OPatch/ocm/bin
[grid@rac12cnode1 bin]$ ls
emocmrsp
[grid@rac12cnode1 bin]$ ./emocmrsp
OCM Installation Response Generator 10.3.7.0.0 - Production
Copyright (c) 2005, 2012, Oracle and/or its affiliates.  All rights reserved.
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:
You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y
The OCM configuration response file (ocm.rsp) was successfully created.
[grid@rac12cnode1 bin]$ ls
emocmrsp  ocm.rsp
[grid@rac12cnode1 bin]$ ls -lrt
total 16
-rwxr-----. 1 grid oinstall 9063 Nov 27  2009 emocmrsp
-rw-r--r--. 1 grid oinstall  621 Mar 13 02:38 ocm.rsp
[grid@rac12cnode1 bin]$ chmod 777 ocm.rsp
[grid@rac12cnode1 bin]$
Copy this response file ocm.rsp to the same location on all the nodes in the cluster, or create a new response file on each node using the same method as above.
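Rather than logging in to each node, the copy commands can be generated in one loop. A sketch; the node list is this cluster's, and the generated script path is an assumption:

```shell
#!/bin/sh
# Sketch: generate one scp command per remaining node to push ocm.rsp to
# the same path everywhere. Commands are written out, not executed.
src=/u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp
for n in rac12cnode2 rac12cnode3 rac12cnode4; do
  echo "scp $src grid@$n:$src"
done > /tmp/copy_ocm.sh
cat /tmp/copy_ocm.sh
```

Review the generated file and run it as the grid user (user equivalence between nodes, already required for RAC, makes the scp non-interactive).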

4. Download and Unzip the JAN2014 PSU patch : (as grid user)
Patch 17735306 is the JAN2014 PSU patch. It is downloaded and unzipped to the location "/mnt/software/RAC/1201_PSU/JAN2014_PSU/".

5. One-off patch Conflicts detection and Resolution :
Determine whether any currently installed one-off patches conflict with the PSU patch.
$GRID_HOME/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/17735306 analyze

I don't have any patches applied to my home yet, so I can skip this step. If any conflicts are identified in the GI home or the DB home, follow MOS note 1061295.1 to resolve them.

6. Patch Application : 
While applying the patch to a flex cluster, please note the following points:
  a. The first node must be a hub node; the last node can be either a hub or a leaf node.
  b. Make sure the GI stack is up on the first hub node and at least one other hub node.
  c. Make sure the stack is up on all the leaf nodes.
  d. The opatchauto command restarts the stack and the databases on the local node.
Now patch can be applied using the following syntax ..
# opatchauto apply <PATH_TO_PATCH_DIRECTORY> -ocmrf <ocm response file>
On all HUB nodes ( rac12cnode1/2/3)
[root@rac12cnode1 ~]# export PATH=/u01/app/12.1.0/grid/OPatch:$PATH
[root@rac12cnode1 ~]# which opatchauto
/u01/app/12.1.0/grid/OPatch/opatchauto
[root@rac12cnode1 ~]# opatchauto apply /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306 -ocmrf /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/12.1.0/grid
opatchauto log file: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/17735306/opatch_gi_2014-03-13_02-55-54_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid
RAC home(s):
/u01/app/oracle/product/12.1.0/dbhome_1
Configuration Validation: Successful
Patch Location: /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306
Grid Infrastructure Patch(es): 17077442 17303297 17552800
RAC Patch(es): 17077442 17552800
Patch Validation: Successful
Stopping RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: orcl
Applying patch(es) to "/u01/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442" successfully applied to "/u01/app/oracle/product/12.1.0/dbhome_1".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17552800" successfully applied to "/u01/app/oracle/product/12.1.0/dbhome_1".
Stopping CRS ... Successful
Applying patch(es) to "/u01/app/12.1.0/grid" ...
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17303297" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17552800" successfully applied to "/u01/app/12.1.0/grid".
Starting CRS ... Successful
Starting RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
[WARNING] SQL changes, if any, could not be applied on the following database(s): ORCL ... Please refer to the log file for more details.
Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/12.1.0/grid: 17077442, 17303297, 17552800
RAC Home: /u01/app/oracle/product/12.1.0/dbhome_1: 17077442, 17552800
opatchauto succeeded.
[root@rac12cnode1 ~]#
While the patch is being applied, the services running on node 1 are relocated to other nodes in the cluster. While patching node 1 (rac12cnode1), I had the following status for the clusterware services:
[grid@rac12cnode4 ~]$ /u01/app/12.1.0/grid/bin/srvctl status database -d orcl
Instance ORCL1 is not running on node rac12cnode1
Instance ORCL2 is running on node rac12cnode2
Instance ORCL3 is running on node rac12cnode3
[grid@rac12cnode4 ~]$ /u01/app/12.1.0/grid/bin/crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.LISTENER_LEAF.lsnr
               OFFLINE OFFLINE      rac12cnode4              STABLE
ora.net1.network
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.ons
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac12cnode2              STABLE
               ONLINE  ONLINE       rac12cnode3              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.asm
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       rac12cnode2              STABLE
      3        ONLINE  ONLINE       rac12cnode3              STABLE
ora.cvu
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.gns
      1        ONLINE  ONLINE       rac12cnode3              STABLE
ora.gns.vip
      1        ONLINE  ONLINE       rac12cnode3              STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.orcl.db
      1        OFFLINE OFFLINE                               STABLE
      2        ONLINE  ONLINE       rac12cnode2              Open,STABLE
      3        ONLINE  ONLINE       rac12cnode3              Open,STABLE
ora.rac12cnode1.vip
      1        ONLINE  INTERMEDIATE rac12cnode3              FAILED OVER,STABLE
ora.rac12cnode2.vip
      1        ONLINE  ONLINE       rac12cnode2              STABLE
ora.rac12cnode3.vip
      1        ONLINE  ONLINE       rac12cnode3              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac12cnode2              STABLE
--------------------------------------------------------------------------------
[grid@rac12cnode4 ~]$

Repeat the above step on all the hub nodes in the flex cluster.

On Leaf Node :
A leaf node is not connected to the storage directly, and no database instance runs on it. So the patch application applies the patch only to the grid home. You can see the difference in the session log below from node 4 (rac12cnode4), the leaf node in my flex cluster.
[root@rac12cnode4 ~]# export PATH=/u01/app/12.1.0/grid/OPatch:$PATH
[root@rac12cnode4 ~]# which opatchauto
/u01/app/12.1.0/grid/OPatch/opatchauto
[root@rac12cnode4 ~]# opatchauto apply /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306 -ocmrf /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/12.1.0/grid
opatchauto log file: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/17735306/opatch_gi_2014-03-13_10-00-11_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid
Configuration Validation: Successful
Patch Location: /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306
Grid Infrastructure Patch(es): 17077442 17303297 17552800
RAC Patch(es): 17077442 17552800
Patch Validation: Successful
Stopping CRS ... Successful
Applying patch(es) to "/u01/app/12.1.0/grid" ...
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17303297" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17552800" successfully applied to "/u01/app/12.1.0/grid".
Starting CRS ... Successful
Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/12.1.0/grid: 17077442, 17303297, 17552800
opatchauto succeeded.
[root@rac12cnode4 ~]#

NOTE : compare the output above with the hub-node output to see the difference between patching a hub node and a leaf node.

7. Verification of Patch application : 
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/12.1.0/grid/
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/dbhome_1

Login to each node in RAC as grid user and execute the following command.
$GRID_ORACLE_HOME/OPatch/opatch lsinventory
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'

Login to each node in RAC as oracle user and execute the following command.
$ORACLE_HOME/OPatch/opatch lsinventory
$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'

Connect to each instance and verify registry information.
SQL> select comp_name,version,status from dba_registry;

8. Issues and Resolution : 
Issue 1:
While applying the patch, I got an error: "OUI-67124:Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'..."

Content From Logs:
ERROR:
UtilSession failed: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
'

2014-03-13_05-10-11 :
Failed to run this command :
/u01/app/12.1.0/grid/OPatch/opatch napply -phBaseFile /tmp/OraGrid12c_patchList -local  -invPtrLoc /u01/app/12.1.0/grid/oraInst.loc -oh /u01/app/12.1.0/grid -silent -ocmrf /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp
oracle.opatch.gi.RunExecutionSteps.runGenericShellCommands(RunExecutionSteps.java:724)
oracle.opatch.gi.RunExecutionSteps.processAllSteps(RunExecutionSteps.java:183)
oracle.opatch.gi.GIPatching.processPatchingSteps(GIPatching.java:747)
oracle.opatch.gi.OPatchautoExecution.main(OPatchautoExecution.java:101)
Command "/u01/app/12.1.0/grid/OPatch/opatch napply -phBaseFile /tmp/OraGrid12c_patchList -local  -invPtrLoc /u01/app/12.1.0/grid/oraInst.loc -oh /u01/app/12.1.0/grid -silent -ocmrf /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp" execution failed:
UtilSession failed: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
'

Log file Location for the failed command: /u01/app/12.1.0/grid/cfgtoollogs/opatch/opatch2014-03-13_04-41-37AM_1.log

==
[Mar 13, 2014 4:53:26 AM]    The following actions have failed:
[Mar 13, 2014 4:53:26 AM]    OUI-67124:Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
[Mar 13, 2014 4:53:26 AM]    Do you want to proceed? [y|n]
[Mar 13, 2014 4:53:29 AM]    N (auto-answered by -silent)
[Mar 13, 2014 4:53:29 AM]    User Responded with: N
[Mar 13, 2014 4:53:29 AM]    OUI-67124:ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
                             '
[Mar 13, 2014 4:53:29 AM]    Restoring "/u01/app/12.1.0/grid" to the state prior to running NApply...
[Mar 13, 2014 5:10:09 AM]    Checking if OPatch needs to invoke 'make' to restore some binaries...
[Mar 13, 2014 5:10:09 AM]    OPatch was able to restore your system. Look at log file and timestamp of each file to make sure your system is in the state prior to applying the patch.
[Mar 13, 2014 5:10:09 AM]    OUI-67124:
                             NApply restored the home. Please check your ORACLE_HOME to make sure:
                               - files are restored properly.
                               - binaries are re-linked correctly.
                             (use restore.[sh,bat] and make.txt (Unix only) as a reference. They are located under
                             "/u01/app/12.1.0/grid/.patch_storage/NApply/2014-03-13_04-41-37AM"
[Mar 13, 2014 5:10:10 AM]    OUI-67073:UtilSession failed: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
                             '
[Mar 13, 2014 5:10:10 AM]    --------------------------------------------------------------------------------
[Mar 13, 2014 5:10:10 AM]    The following warnings have occurred during OPatch execution:
[Mar 13, 2014 5:10:10 AM]    1) OUI-67124:Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
[Mar 13, 2014 5:10:10 AM]    2) OUI-67124:ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
                             '
[Mar 13, 2014 5:10:10 AM]    3) OUI-67124:
                             NApply restored the home. Please check your ORACLE_HOME to make sure:
                               - files are restored properly.
                               - binaries are re-linked correctly.
                             (use restore.[sh,bat] and make.txt (Unix only) as a reference. They are located under
                             "/u01/app/12.1.0/grid/.patch_storage/NApply/2014-03-13_04-41-37AM"
[Mar 13, 2014 5:10:10 AM]    --------------------------------------------------------------------------------
[Mar 13, 2014 5:10:10 AM]    Finishing UtilSession at Thu Mar 13 05:10:10 EDT 2014
[Mar 13, 2014 5:10:10 AM]    Log file location: /u01/app/12.1.0/grid/cfgtoollogs/opatch/opatch2014-03-13_04-41-37AM_1.log
[Mar 13, 2014 5:10:10 AM]    Stack Description: java.lang.RuntimeException: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...

Resolution : 
I verified permissions and found no issue accessing or copying the file manually to the target location. I then re-executed the command without any modification, and it completed fine. This may be specific to my environment, as the PSU patch is located on an NFS file system; but if you hit the same error, it may be a common, intermittent issue.

Logfile Location For PSU Patch : 
/u01/app/12.1.0/grid/cfgtoollogs/opatch/
/u01/app/12.1.0/grid/cfgtoollogs/opatchauto/

Hope This Helps
SRI



RAC : Clone Flex Cluster To Extend Cluster By Adding Hub and Leaf Nodes

Written By askMLabs on Saturday, March 15, 2014 | 11:57 PM


In this article, we will see how to clone a RAC environment to extend an existing cluster. Cloning can also be used to prepare a new cluster environment, but here we focus on extending an existing cluster using the clone method. A cluster can also be extended using the addnode method; cloning is simply a different way to achieve the same result.

Environment Details :

RAC Version    12c (12.1.0.1.0)
Cluster Type   Flex Cluster
Hub Nodes      rac12cnode1/2
Leaf Nodes     None
DB Running on  All Hub Nodes
Task           Clone cluster to extend from 2 nodes to 4 nodes

[grid@rac12cnode1 ~]$ olsnodes -s -t
rac12cnode1     Active  Unpinned
rac12cnode2     Active  Unpinned
[grid@rac12cnode1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode
[grid@rac12cnode1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
[grid@rac12cnode1 ~]$


Step By Step : 

  1. Prepare the new cluster nodes
  2. Prepare the existing cluster nodes
  3. Deploy the Grid Infrastructure to new cluster nodes
  4. Run clone.pl on all the cluster nodes
  5. Run orainstRoot.sh script on all target cluster nodes
  6. Execute addnode.sh in silent mode to add hub node and leaf node
  7. Copy config files from source system to all target cluster nodes
  8. Run root.sh on all the target nodes to configure cluster
  9. Verify the cloned cluster with cluvfy
1. Prepare the new cluster nodes:
Our task is to add two nodes to the existing flex cluster using the clone method. We first need to prepare the nodes that are to be added to the cluster.

The following points are from the documentation; you can also follow my other articles/videos on RAC to prepare nodes for cluster addition.
On each destination node, perform the following preinstallation steps:
  1. Specify the kernel parameters
  2. Configure block devices for Oracle Clusterware devices
  3. Ensure that you have set the block device permissions correctly
  4. Use short, nondomain-qualified names for all of the names in the /etc/hosts file
  5. Test whether the interconnect interfaces are reachable using the ping command
  6. Verify that the VIP addresses are not active at the start of the cloning process by using the ping command (the ping command of the VIP address must fail)
  7. On AIX systems, and on Solaris x86-64-bit systems running vendor clusterware, if you add a node to the cluster, then you must run the rootpre.sh script (located at the mount point if you install Oracle Clusterware from a DVD, or in the directory where you unzip the tar file if you download the software) on the node before you add it to the cluster
  8. Run CVU to verify your hardware and operating system environment
Complete all of the above steps so that the nodes are ready to be added/cloned into the cluster.
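For example, item 6 above (the VIP addresses must not yet be active) can be scripted. This is a minimal sketch, assuming the VIP names rac12cnode3-vip and rac12cnode4-vip resolve in DNS; check_vip_inactive is a hypothetical helper, not part of any Oracle tool:

```shell
# check_vip_inactive VIP
# Succeeds (prints OK) only when the VIP does NOT answer ping,
# which is the required state before cloning.
check_vip_inactive() {
    if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then
        echo "WARNING: $1 answers ping - bring it down before cloning"
        return 1
    fi
    echo "OK: $1 is not active"
}

for vip in rac12cnode3-vip rac12cnode4-vip; do
    check_vip_inactive "$vip"
done
```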

Verify that the nodes are ready to be added to the cluster using the following commands:
[grid@rac12cnode1 ~]$ nslookup rac12cnode3
Server:         192.168.1.51
Address:        192.168.1.51#53
Name:   rac12cnode3.localdomain
Address: 192.168.1.133
[grid@rac12cnode1 ~]$ nslookup rac12cnode4
Server:         192.168.1.51
Address:        192.168.1.51#53
Name:   rac12cnode4.localdomain
Address: 192.168.1.134
[grid@rac12cnode1 ~]$ clear
[grid@rac12cnode1 ~]$ cluvfy stage -pre nodeadd -n rac12cnode3,rac12cnode4 -fixup -verbose
Performing pre-checks for node addition......
2. Prepare the existing cluster nodes:
In this step, we create a copy of the existing Oracle Grid Infrastructure home and remove the unnecessary files from that copy. You can perform this step while the clusterware is up and running.
Create an exclusion list of files to be skipped when creating the tar backup.
[root@rac12cnode1 askm]# cat excl_list.txt
/u01/app/12.1.0/grid/rac12cnode1
/u01/app/12.1.0/grid/log/host_name
/u01/app/12.1.0/grid/gpnp/host_name
/u01/app/12.1.0/grid/crs/init/*
/u01/app/12.1.0/grid/cdata/*
/u01/app/12.1.0/grid/crf/*
/u01/app/12.1.0/grid/network/admin/*.ora
/u01/app/12.1.0/grid/root.sh*
*.ouibak
*.ouibak1
[root@rac12cnode1 askm]#
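Note that the log/host_name and gpnp/host_name entries are placeholders for the local node's actual short hostname. As a sketch, the list above could be generated per node like this (make_excl_list is a hypothetical helper, not from the original procedure):

```shell
# make_excl_list GRID_HOME HOSTNAME
# Print a tar exclusion list matching the one shown above, with the
# host_name placeholders filled in for the given node.
make_excl_list() {
    gh=$1; hn=$2
    cat <<EOF
$gh/$hn
$gh/log/$hn
$gh/gpnp/$hn
$gh/crs/init/*
$gh/cdata/*
$gh/crf/*
$gh/network/admin/*.ora
$gh/root.sh*
*.ouibak
*.ouibak1
EOF
}

mkdir -p /tmp/askm
make_excl_list /u01/app/12.1.0/grid "$(hostname -s)" > /tmp/askm/excl_list.txt
```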
Create a compressed copy of the Oracle Grid Infrastructure home using the tar utility. Execute the following command on any existing node in the cluster:
[root@rac12cnode1 12.1.0]# tar -czf gridHome.tar.gz -X /tmp/askm/excl_list.txt /u01/app/12.1.0/grid
tar: Removing leading `/' from member names
tar: /u01/app/12.1.0/grid/log/rac12cnode1/gipcd/gipcd.log: file changed as we read it
tar: /u01/app/12.1.0/grid/log/rac12cnode1/agent/crsd/oraagent_grid/oraagent_grid.log: file changed as we read it
tar: /u01/app/12.1.0/grid/log/rac12cnode1/ctssd/octssd.log: file changed as we read it
tar: /u01/app/12.1.0/grid/log/rac12cnode1/cssd/ocssd.log: file changed as we read it
tar: /u01/app/12.1.0/grid/rdbms/audit: file changed as we read it
[root@rac12cnode1 12.1.0]# ls -lrt
total 3100700
drwxr-xr-x. 74 root oinstall       4096 Mar 13 03:41 grid
-rw-r--r--.  1 root root     3172006616 Mar 14 04:53 gridHome.tar.gz
[root@rac12cnode1 12.1.0]#

3. Deploy the Grid Infrastructure to new cluster nodes:
Now copy the compressed backup of the Oracle Grid Infrastructure home created in step 2 above to all the target nodes, i.e. rac12cnode3/4.
[root@rac12cnode1 12.1.0]# ls -lrt
total 3100700
drwxr-xr-x. 74 root oinstall       4096 Mar 13 03:41 grid
-rw-r--r--.  1 root root     3172006616 Mar 14 04:53 gridHome.tar.gz
[root@rac12cnode1 12.1.0]# scp gridHome.tar.gz root@rac12cnode3:/u01/app/12.1.0
root@rac12cnode3's password:
gridHome.tar.gz                                                                            100% 3025MB   9.7MB/s   05:12
[root@rac12cnode1 12.1.0]# ls -lrt gridHome.tar.gz
-rw-r--r--. 1 root root 3172006616 Mar 14 04:53 gridHome.tar.gz
[root@rac12cnode1 12.1.0]# du -sh gridHome.tar.gz
3.0G    gridHome.tar.gz
[root@rac12cnode1 12.1.0]# scp gridHome.tar.gz root@rac12cnode4:/u01/app/12.1.0
root@rac12cnode4's password:
gridHome.tar.gz                                                                            100% 3025MB   9.3MB/s   05:25
[root@rac12cnode1 12.1.0]#

Extract the compressed backup on node 3 and node 4.
[root@rac12cnode4 ~]# cd /u01/app/12.1.0/
[root@rac12cnode4 12.1.0]# ls
[root@rac12cnode4 12.1.0]# ls -lrt
total 3100696
-rw-r--r--. 1 root root 3172006616 Mar 14 06:33 gridHome.tar.gz
[root@rac12cnode4 12.1.0]# du -sh *
3.0G    gridHome.tar.gz
[root@rac12cnode4 12.1.0]# tar xvzf gridHome.tar.gz -C /
..
..
[root@rac12cnode4 12.1.0]#
[root@rac12cnode3 ~]# cd /u01/app/12.1.0/
[root@rac12cnode3 12.1.0]# ls -lrt
total 3100696
-rw-r--r--. 1 root root 3172006616 Mar 14 05:16 gridHome.tar.gz
[root@rac12cnode3 12.1.0]# tar xvzf gridHome.tar.gz -C /
..
..
[root@rac12cnode3 12.1.0]#
NOTE : Review the copied homes on the target nodes to check whether any files still need to be deleted. I created the following shell script to delete unwanted files from the new homes on nodes 3 and 4.
[root@rac12cnode3 log]# cat /tmp/askm/file_delete.sh
cd /u01/app/12.1.0/grid/
rm -rf /u01/app/12.1.0/grid/log/rac12cnode1
rm -rf /u01/app/12.1.0/grid/gpnp/rac12cnode1
find gpnp -type f -exec rm -f {} \;
rm -rf /u01/app/12.1.0/grid/cfgtoollogs/*
rm -rf /u01/app/12.1.0/grid/crs/init/*
rm -rf /u01/app/12.1.0/grid/cdata/*
rm -rf /u01/app/12.1.0/grid/crf/*
rm -rf /u01/app/12.1.0/grid/network/admin/*.ora
rm -rf /u01/app/12.1.0/grid/crs/install/crsconfig_params
find . -name '*.ouibak' -exec rm {} \;
find . -name '*.ouibak.1' -exec rm {} \;
rm -rf /u01/app/12.1.0/grid/root.sh*
rm -rf /u01/app/12.1.0/grid/rdbms/audit/*
rm -rf /u01/app/12.1.0/grid/rdbms/log/*
rm -rf /u01/app/12.1.0/grid/inventory/backup/*
[root@rac12cnode3 log]#
Now execute the file  /tmp/askm/file_delete.sh  on nodes 3 and 4.
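Since the same cleanup has to run on both new nodes, it can be driven from node 1 over ssh. This is a sketch assuming passwordless root ssh to the new nodes and that file_delete.sh has already been copied to /tmp/askm on each of them; with DRYRUN set, it only prints the commands instead of executing them:

```shell
# cleanup_node NODE
# Run the cleanup script on the target node over ssh.
# With DRYRUN set, only print what would be executed.
cleanup_node() {
    ${DRYRUN:+echo} ssh "root@$1" sh /tmp/askm/file_delete.sh
}

DRYRUN=1                      # remove this line to actually execute
for node in rac12cnode3 rac12cnode4; do
    cleanup_node "$node"
done
```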

4. Run clone.pl on all the cluster nodes.
Run the clone.pl script located in the Grid_home/clone/bin directory on Node 3 and Node 4.
On Node 3:(HUB Node ):
[grid@rac12cnode3 ~]$ cd $ORACLE_HOME/clone/bin
[grid@rac12cnode3 bin]$ perl clone.pl ORACLE_HOME=/u01/app/12.1.0/grid ORACLE_HOME_NAME=OraGrid12c ORACLE_BASE=/u01/app/grid "'CLUSTER_NODES={rac12cnode1, rac12cnode2,rac12cnode3,rac12cnode4}'" "'LOCAL_NODE=rac12cnode3'" CRS=TRUE INVENTORY_LOCATION=/u01/app/oraInventory
./runInstaller -clone -waitForCompletion  "ORACLE_HOME=/u01/app/12.1.0/grid" "ORACLE_HOME_NAME=OraGrid12c" "ORACLE_BASE=/u01/app/grid" "'CLUSTER_NODES={rac12cnode1, rac12cnode2,rac12cnode3,rac12cnode4}'" "'LOCAL_NODE=rac12cnode3'" "CRS=TRUE" "INVENTORY_LOCATION=/u01/app/oraInventory" -silent -paramFile /u01/app/12.1.0/grid/clone/clone_oraparam.ini
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 500 MB.   Actual 7237 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 2042 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-03-14_08-10-41AM. Please wait ...You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2014-03-14_08-10-41AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..................................................   90% Done.
..................................................   95% Done.
Copy files in progress.
Copy files successful.
Link binaries in progress.
Link binaries successful.
Setup files in progress.
Setup files successful.
Setup Inventory in progress.
Setup Inventory successful.
Finish Setup successful.
The cloning of OraGrid12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2014-03-14_08-10-41AM.log' for more details.
As a root user, execute the following script(s):
        1. /u01/app/12.1.0/grid/root.sh

..................................................   100% Done.
[grid@rac12cnode3 bin]$

On Node 4: (Leaf Node)
[grid@rac12cnode4 ~]$ cd /u01/app/12.1.0/grid/clone/bin
[grid@rac12cnode4 bin]$ pwd
/u01/app/12.1.0/grid/clone/bin
[grid@rac12cnode4 bin]$ perl clone.pl ORACLE_HOME=/u01/app/12.1.0/grid ORACLE_HOME_NAME=OraGrid12c ORACLE_BASE=/u01/app/grid "'CLUSTER_NODES={rac12cnode1, rac12cnode2,rac12cnode3,rac12cnode4}'" "'LOCAL_NODE=rac12cnode4'" CRS=TRUE INVENTORY_LOCATION=/u01/app/oraInventory
./runInstaller -clone -waitForCompletion  "ORACLE_HOME=/u01/app/12.1.0/grid" "ORACLE_HOME_NAME=OraGrid12c" "ORACLE_BASE=/u01/app/grid" "'CLUSTER_NODES={rac12cnode1, rac12cnode2,rac12cnode3,rac12cnode4}'" "'LOCAL_NODE=rac12cnode4'" "CRS=TRUE" "INVENTORY_LOCATION=/u01/app/oraInventory" -silent -paramFile /u01/app/12.1.0/grid/clone/clone_oraparam.ini
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 500 MB.   Actual 7239 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 2041 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-03-14_08-15-13AM. Please wait ...You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2014-03-14_08-15-13AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..................................................   90% Done.
..................................................   95% Done.
Copy files in progress.
Copy files successful.
Link binaries in progress.
Link binaries successful.
Setup files in progress.
Setup files successful.
Setup Inventory in progress.
Setup Inventory successful.
Finish Setup successful.
The cloning of OraGrid12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2014-03-14_08-15-13AM.log' for more details.
As a root user, execute the following script(s):
        1. /u01/app/12.1.0/grid/root.sh

..................................................   100% Done.
[grid@rac12cnode4 bin]$

IMPORTANT : Do not run root.sh at this point.

5. Run orainstRoot.sh script on all target cluster nodes:(as root user)
This script populates the /etc/oraInst.loc file with the location of the central inventory.
[root@rac12cnode3 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac12cnode3 ~]# 
[root@rac12cnode4 oraInventory]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac12cnode4 oraInventory]#
6. Execute addnode.sh in silent mode to add hub node and leaf node:
Run the addnode.sh script from $GRID_HOME/addnode/
[grid@rac12cnode1 addnode]$ ./addnode.sh -silent -noCopy ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NEW_NODES={rac12cnode3,rac12cnode4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac12cnode3-vip}" "CLUSTER_NEW_NODE_ROLES={HUB,LEAF}"
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB.   Actual 7444 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 407 MB    Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2014-03-15_11-16-57AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2014-03-15_11-16-57AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
Prepare Configuration in progress.
Prepare Configuration successful.
..................................................   40% Done.
As a root user, execute the following script(s):
        1. /u01/app/12.1.0/grid/root.sh
Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[rac12cnode3, rac12cnode4]
The scripts can be executed in parallel on all the nodes. If there are any policy managed databases managed by cluster, proceed with the addnode procedure without executing the root.sh script. Ensure that root.sh script is executed after all the policy managed databases managed by clusterware are extended to the new nodes.
..................................................   60% Done.
Update Inventory in progress.
..................................................   100% Done.
Update Inventory successful.
Successfully Setup Software.
[grid@rac12cnode1 addnode]$ 
In the preceding syntax example, Node 4 is designated as a Leaf Node and does not require that a VIP be included.

7. Copy config files from source system to all target cluster nodes:
Copy the following files from Node 1, on which you ran addnode.sh, to Node 3 and Node 4.
Grid_home/crs/install/crsconfig_addparams
Grid_home/crs/install/crsconfig_params
Grid_home/gpnp
[root@rac12cnode1 grid]# scp /u01/app/12.1.0/grid/crs/install/crsconfig_addparams /u01/app/12.1.0/grid/crs/install/crsconfig_params /u01/app/12.1.0/grid/gpnp.tar.gz 192.168.1.133:/tmp/askm
root@192.168.1.133's password:
crsconfig_addparams                                                                        100% 1089     1.1KB/s   00:00
crsconfig_params                                                                           100% 5509     5.4KB/s   00:00
gpnp.tar.gz                                                                                100%   90KB  90.1KB/s   00:00
[root@rac12cnode1 grid]# scp /u01/app/12.1.0/grid/crs/install/crsconfig_addparams /u01/app/12.1.0/grid/crs/install/crsconfig_params /u01/app/12.1.0/grid/gpnp.tar.gz 192.168.1.134:/tmp/askm
root@192.168.1.134's password:
crsconfig_addparams                                                                        100% 1089     1.1KB/s   00:00
crsconfig_params                                                                           100% 5509     5.4KB/s   00:00
gpnp.tar.gz                                                                                100%   90KB  90.1KB/s   00:00
[root@rac12cnode1 grid]# 
[root@rac12cnode4 ~]# mv /u01/app/12.1.0/grid/crs/install/crsconfig_addparams /u01/app/12.1.0/grid/crs/install/crsconfig_addparams_bak
[root@rac12cnode4 ~]# mv /u01/app/12.1.0/grid/crs/install/crsconfig_params /u01/app/12.1.0/grid/crs/install/crsconfig_params_bak
[root@rac12cnode4 ~]# mv /tmp/askm/crsconfig_addparams /u01/app/12.1.0/grid/crs/install/crsconfig_addparams
[root@rac12cnode4 ~]# mv /tmp/askm/crsconfig_params /u01/app/12.1.0/grid/crs/install/crsconfig_params
[root@rac12cnode4 ~]# chown grid:oinstall /u01/app/12.1.0/grid/crs/install/crsconfig_addparams
[root@rac12cnode4 ~]# chown grid:oinstall /u01/app/12.1.0/grid/crs/install/crsconfig_params
[root@rac12cnode4 ~]# cd /u01/app/12.1.0/grid
[root@rac12cnode4 grid]# mv gpnp gpnp_bak
[root@rac12cnode4 grid]# pwd
/u01/app/12.1.0/grid
[root@rac12cnode4 grid]# tar xzf /tmp/askm/gpnp.tar.gz
[root@rac12cnode4 grid]# ls -ld gpnp
drwxr-x---. 8 grid oinstall 4096 Mar  7 09:06 gpnp
[root@rac12cnode4 grid]#

Complete this step on node 3 and node 4.
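The mv/chown/tar sequence shown above has to be repeated identically on each target node, so it can be consolidated into one function. This is a sketch to be run as root on each target node after the scp into /tmp/askm; install_config is a hypothetical helper, and OWNER is parameterized only to make the sketch easy to dry-run outside the cluster:

```shell
GRID_HOME=${GRID_HOME:-/u01/app/12.1.0/grid}
OWNER=${OWNER:-grid:oinstall}

# install_config: back up the copied-home versions of the crsconfig
# files, drop in the ones scp'd from node 1, fix ownership, and
# replace the gpnp profile directory from the tarball.
install_config() {
    for f in crsconfig_addparams crsconfig_params; do
        mv "$GRID_HOME/crs/install/$f" "$GRID_HOME/crs/install/${f}_bak"
        mv "/tmp/askm/$f" "$GRID_HOME/crs/install/$f"
        chown "$OWNER" "$GRID_HOME/crs/install/$f"
    done
    mv "$GRID_HOME/gpnp" "$GRID_HOME/gpnp_bak"
    ( cd "$GRID_HOME" && tar xzf /tmp/askm/gpnp.tar.gz )
}

# call install_config only after the three files have been copied over
```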

8. Run root.sh on all the target nodes to configure cluster:
On Node 3 and Node 4, run the Grid_home/root.sh script.
[root@rac12cnode3 grid]# ./root.sh
Check /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-55-20.log for the output of root script
[root@rac12cnode3 grid]# cat /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-55-20.log
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
/u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/roothas.pl

To configure Grid Infrastructure for a Cluster execute the following command as grid user:
/u01/app/12.1.0/grid/crs/config/config.sh
This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

[root@rac12cnode3 grid]#
If you get this output after executing root.sh, then your cluster is not configured correctly. You need to modify the script "/u01/app/12.1.0/grid/crs/config/rootconfig.sh" to uncomment part of it. Please refer to my video if you need more details on this part.
[root@rac12cnode4 ~]# diff /u01/app/12.1.0/grid/crs/config/rootconfig.sh /u01/app/12.1.0/grid/crs/config/rootconfig.sh_bak_askm
33,37c33,37
< if [ "$ADDNODE" = "true" ]; then
<   SW_ONLY=false
<   HA_CONFIG=false
< fi
---
> #if [ "$ADDNODE" = "true" ]; then
> #  SW_ONLY=false
> #  HA_CONFIG=false
> #fi
[root@rac12cnode4 ~]# 
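The edit itself (removing the leading # from the ADDNODE block) can be scripted. This is a sketch using sed with a pattern match rather than fixed line numbers, since the 33-37 range shown in the diff may shift between builds; uncomment_addnode is a hypothetical helper, and you should verify the result against the diff above:

```shell
# uncomment_addnode FILE
# Strip the leading '#' from the commented-out ADDNODE block in
# rootconfig.sh, keeping a .bak_askm backup of the original.
uncomment_addnode() {
    sed -i.bak_askm '/^#if \[ "\$ADDNODE" = "true" \]/,/^#fi/s/^#//' "$1"
}

# uncomment_addnode /u01/app/12.1.0/grid/crs/config/rootconfig.sh
```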
Now run root.sh again. This time it will execute successfully and configure the cluster.
[root@rac12cnode3 grid]# ./root.sh
Check /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-57-18.log for the output of root script
[root@rac12cnode3 grid]#
[root@rac12cnode4 grid]# ./root.sh
Check /u01/app/12.1.0/grid/install/root_rac12cnode4_2014-03-15_12-24-46.log for the output of root script
[root@rac12cnode4 grid]#
The output of the above executions, confirming that the clusterware was configured correctly:
[root@rac12cnode3 ~]# tail -f /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-57-18.log
    ORACLE_HOME=  /u01/app/12.1.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
OLR initialization - successful
2014/03/15 11:59:52 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cnode3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cnode3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cnode3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cnode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.evmd' on 'rac12cnode3'
CRS-2676: Start of 'ora.mdnsd' on 'rac12cnode3' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12cnode3'
CRS-2676: Start of 'ora.gpnpd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12cnode3'
CRS-2676: Start of 'ora.gipcd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12cnode3'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12cnode3'
CRS-2676: Start of 'ora.diskmon' on 'rac12cnode3' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rac12cnode3'
CRS-2676: Start of 'ora.cssd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac12cnode3'
CRS-2676: Start of 'ora.ctssd' on 'rac12cnode3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac12cnode3'
CRS-2676: Start of 'ora.asm' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac12cnode3'
CRS-2676: Start of 'ora.storage' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac12cnode3'
CRS-2676: Start of 'ora.crsd' on 'rac12cnode3' succeeded
CRS-6017: Processing resource auto-start for servers: rac12cnode3
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.ons' on 'rac12cnode3'
CRS-2676: Start of 'ora.ons' on 'rac12cnode3' succeeded
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac12cnode3'
CRS-2676: Start of 'ora.asm' on 'rac12cnode3' succeeded
CRS-2664: Resource 'ora.DATA.dg' is already running on 'rac12cnode1'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode1'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode2'
CRS-2664: Resource 'ora.DATA.dg' is already running on 'rac12cnode2'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode1'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode2'
CRS-6016: Resource auto-start has completed for server rac12cnode3
CRS-2672: Attempting to start 'ora.proxy_advm' on 'rac12cnode3'
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/03/15 12:10:16 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/03/15 12:11:15 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac12cnode3 ~]#
[root@rac12cnode4 ~]# tail -f /u01/app/12.1.0/grid/install/root_rac12cnode4_2014-03-15_12-24-46.log
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
OLR initialization - successful
2014/03/15 12:27:14 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cnode4'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cnode4'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cnode4' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cnode4' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.evmd' on 'rac12cnode4'
CRS-2676: Start of 'ora.evmd' on 'rac12cnode4' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12cnode4'
CRS-2676: Start of 'ora.gpnpd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12cnode4'
CRS-2676: Start of 'ora.gipcd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12cnode4'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12cnode4'
CRS-2676: Start of 'ora.diskmon' on 'rac12cnode4' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rac12cnode4'
CRS-2676: Start of 'ora.cssd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac12cnode4'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac12cnode4' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac12cnode4'
CRS-2676: Start of 'ora.storage' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac12cnode4'
CRS-2676: Start of 'ora.crsd' on 'rac12cnode4' succeeded
CRS-6017: Processing resource auto-start for servers: rac12cnode4
CRS-6016: Resource auto-start has completed for server rac12cnode4
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/03/15 12:31:56 CLSRSC-343: Successfully started Oracle clusterware stack
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/03/15 12:32:05 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac12cnode4 ~]#

9. Verify the cloned cluster with cluvfy:
[root@rac12cnode1 grid]# su - grid
[grid@rac12cnode1 ~]$ olsnodes -s -t
rac12cnode1     Active  Unpinned
rac12cnode2     Active  Unpinned
rac12cnode3     Active  Unpinned
rac12cnode4     Active  Unpinned
[grid@rac12cnode1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode
[grid@rac12cnode1 ~]$ srvctl config gns
GNS is enabled.
[grid@rac12cnode1 ~]$ crsctl get node role config
Node 'rac12cnode1' configured role is 'hub'
[grid@rac12cnode1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
[grid@rac12cnode1 ~]$ srvctl status asm -detail
ASM is running on rac12cnode1,rac12cnode2,rac12cnode3
ASM is enabled.
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
Node 'rac12cnode3' configured role is 'hub'
Node 'rac12cnode4' configured role is 'leaf'
[grid@rac12cnode1 ~]$ clear
[grid@rac12cnode1 ~]$ cluvfy stage -post nodeadd -n rac12cnode3,rac12cnode4 -verbose
...
....
Post-check for node addition was successful.
[grid@rac12cnode1 ~]$ 

Hope this helps
SRI

 