Monday, August 25, 2008

Oracle RAC installation on Solaris SPARC 64 bit

Keywords: Oracle RAC Installation, RAC Installation, Installing Oracle RAC, Oracle RAC on Solaris



A few weeks back I did a 2-node Oracle RAC installation.
The machines were Solaris 10 SPARC 64-bit (Sun-Fire-T2000).
The shared storage was NAS.
Even though Solaris 10 uses resource controls, the kernel parameters were added in /etc/system (Metalink Note:367442.1).
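The exact values came from that note and from our sizing, so take the following only as an illustration of the shape of what goes into /etc/system (the values below are placeholders, not the ones from this install):

set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=100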

The group OINSTALL and user ORACLE were created on both nodes
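I did not keep the exact commands, but on Solaris they look roughly like this (the group/user IDs match the id output further down; the home directory is taken from the SSH section below and the shell is my assumption):

groupadd -g 300 oinstall
groupadd -g 301 dba
useradd -u 300 -g oinstall -G dba -m -d /home/oracle -s /usr/bin/bash ORACLE
passwd ORACLE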

A few network parameters were tuned in /etc/rc2.d/S99nettune:

bash-3.00# more /etc/rc2.d/S99nettune
#!/bin/sh
ndd -set /dev/ip ip_forward_src_routed 0
ndd -set /dev/ip ip_forwarding 0
ndd -set /dev/tcp tcp_conn_req_max_q 16384
ndd -set /dev/tcp tcp_conn_req_max_q0 16384
ndd -set /dev/tcp tcp_xmit_hiwat 400000
ndd -set /dev/tcp tcp_recv_hiwat 400000
ndd -set /dev/tcp tcp_cwnd_max 2097152
ndd -set /dev/tcp tcp_ip_abort_interval 60000
ndd -set /dev/tcp tcp_rexmit_interval_initial 4000
ndd -set /dev/tcp tcp_rexmit_interval_max 10000
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
ndd -set /dev/tcp tcp_max_buf 4194304
ndd -set /dev/tcp tcp_maxpsz_multiplier 10
#Oracle Required
ndd -set /dev/udp udp_recv_hiwat 65535
ndd -set /dev/udp udp_xmit_hiwat 65535
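The values can be checked after the script has run, for example:

ndd -get /dev/udp udp_recv_hiwat
ndd -get /dev/tcp tcp_xmit_hiwat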

Check that /etc/system is readable by ORACLE (else the RDBMS installation will fail):

-rw-r--r-- 1 root root 2561 Apr 17 16:03 /etc/system
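If it is not world-readable, that is easy to fix as root:

chmod 644 /etc/system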
Checked the system configuration on both nodes.
For RAM
/usr/sbin/prtconf | grep "Memory size"
Memory size: 8064 Megabytes

For SWAP

/usr/sbin/swap -s
total: 4875568k bytes allocated + 135976k reserved = 5011544k used, 9800072k available
For /tmp
df -h /tmp

Filesystem size used avail capacity Mounted on

swap 9.4G 31M 9.4G 1% /tmp

For OS

/bin/isainfo -kv
64-bit sparcv9 kernel modules

For user

id -a #both UID and GID of user ORACLE should be the same on both nodes
uid=300(ORACLE) gid=300(oinstall) groups=300(oinstall),301(dba),503(tms),504(mscat),102(dwh)

User nobody should exist

id -a nobody

uid=60001(nobody) gid=60001(nobody) groups=60001(nobody)
I had the below entries in /etc/hosts on both nodes:
cat /etc/hosts

#Public:
3.208.169.203 ownserver01 ownserver01ipmp0 loghost
3.208.169.207 ownserver02 ownserver02ipmp0 loghost
#Private:
10.47.2.82 ownserver01ipmp1 # e1000g1 -Used this while installing cluster
10.47.2.85 ownserver02ipmp1 # e1000g1 -Used this while installing cluster
10.47.2.76 ownserver01ipmp2 # e1000g0
10.47.2.79 ownserver02ipmp2 # e1000g0
#Vip:
3.208.169.202 ownserverv01
3.208.169.206 ownserverv02

All the interfaces had their ipmp groups.

Confirmed that the interface names of both Private and Public are same across the nodes.
e1000g3 was the Public Interface on both nodes.
e1000g0 and e1000g1 were the Private Interface on both nodes.
-I had 2 interfaces for the private interconnect, of which I used e1000g1 during the cluster installation.
-The interface names for e1000g1 on each node were ownserver01ipmp1 and ownserver02ipmp1

Below is the 'ifconfig -a' from Ownserver01

ownserver01 [SHCL1DR1]$ ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
inet 10.47.2.76 netmask ffffffe0 broadcast 10.47.2.95
groupname ipmp2
e1000g0:1: flags=9040843 mtu 1500 index 2
inet 10.47.2.77 netmask ffffffe0 broadcast 10.47.2.95
e1000g0:2: flags=9040842 mtu 1500 index 2
inet 10.47.2.77 netmask ff000000 broadcast 10.255.255.255
e1000g1: flags=1000843 mtu 1500 index 3
inet 10.47.2.82 netmask ffffffe0 broadcast 10.47.2.95
groupname ipmp1
e1000g1:1: flags=9040843 mtu 1500 index 3
inet 10.47.2.83 netmask ffffffe0 broadcast 10.47.2.95
e1000g1:2: flags=9040842 mtu 1500 index 3
inet 10.47.2.83 netmask ff000000 broadcast 10.255.255.255
e1000g2: flags=1000843 mtu 1500 index 4
inet 10.47.2.11 netmask ffffffc0 broadcast 10.47.2.63
groupname ipmp3
e1000g2:1: flags=9040843 mtu 1500 index 4
inet 10.47.2.12 netmask ffffffc0 broadcast 10.47.2.63
e1000g2:2: flags=9040842 mtu 1500 index 4
inet 10.47.2.12 netmask ff000000 broadcast 10.255.255.255
e1000g3: flags=1000843 mtu 1500 index 5
inet 3.208.169.203 netmask ffffffc0 broadcast 3.208.169.255
groupname ipmp0
e1000g3:1: flags=9040843 mtu 1500 index 5
inet 3.208.169.204 netmask ffffffc0 broadcast 3.208.169.255
e1000g3:2: flags=9040842 mtu 1500 index 5
inet 3.208.169.204 netmask ff000000 broadcast 3.255.255.255
nxge0: flags=69040843 mtu 1500 index 6
inet 10.47.2.13 netmask ffffffc0 broadcast 10.47.2.63
groupname ipmp3
nxge1: flags=69040843 mtu 1500 index 7
inet 3.208.169.205 netmask ffffffc0 broadcast 3.208.169.255
groupname ipmp0
nxge2: flags=69040843 mtu 1500 index 8
inet 10.47.2.78 netmask ffffffe0 broadcast 10.47.2.95
groupname ipmp2
nxge3: flags=69040843 mtu 1500 index 9
inet 10.47.2.84 netmask ffffffe0 broadcast 10.47.2.95
groupname ipmp1



From the above output it's clear that on Ownserver01

e1000g0 is 10.47.2.76 and e1000g1 is 10.47.2.82 - For Private Interconnect
e1000g2 is 10.47.2.11 - For Shared Storage (NAS)
e1000g3 is 3.208.169.203 - For Public

On Ownserver02

e1000g0 is 10.47.2.79 and e1000g1 is 10.47.2.85 - For Private Interconnect
e1000g2 is 10.47.2.14 - For Shared Storage (NAS)
e1000g3 is 3.208.169.207 - For Public

Checked for SSH and SCP in /usr/local/bin/

The Cluster Verification Utility (runcluvfy.sh) checks for scp and ssh in /usr/local/bin/.
Create soft links for ssh and scp in /usr/local/bin/ if they are not there (see the example after the listing).
cd /usr/local/bin/
ls -l
lrwxrwxrwx 1 root root 12 Apr 25 16:57 /usr/local/bin/scp -> /usr/bin/scp
lrwxrwxrwx 1 root root 12 Apr 25 16:57 /usr/local/bin/ssh -> /usr/bin/ssh
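If the links do not exist, they can be created as root:

ln -s /usr/bin/ssh /usr/local/bin/ssh
ln -s /usr/bin/scp /usr/local/bin/scp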

Checked SSH equivalency between the nodes

ownserver01 [SHCL1DR1]$ ssh ownserver01 date
ssh_exchange_identification: Connection closed by remote host
ownserver01 [SHCL1DR1]$
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
touch ~/.ssh/authorized_keys
ssh ownserver01 cat /home/oracle/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Password: *****
ssh ownserver01 cat /home/oracle/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Password: *****
ssh ownserver02 cat /home/oracle/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Password: *****
ssh ownserver02 cat /home/oracle/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Password: *****
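The assembled authorized_keys file must be in place on both nodes. Assuming it was built in ~/.ssh on Ownserver01 as above, roughly:

scp ~/.ssh/authorized_keys ownserver02:/home/oracle/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh ownserver02 chmod 600 /home/oracle/.ssh/authorized_keys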
From Ownserver01
ownserver01 [SHCL1DR1]$ ssh OWNSERVER01 date
ownserver01 [SHCL1DR1]$ ssh OWNSERVER02 date
From Ownserver02
ownserver02 [SHCL1DR2]$ ssh OWNSERVER01 date
ownserver02 [SHCL1DR2]$ ssh OWNSERVER02 date
The time on both nodes was the same at any given moment.
I made sure that from Ownserver01 I could SSH to ownserver02 as well as to ownserver01 itself, and the same from Ownserver02.

Note: also mv /etc/issue to /etc/issue.bak, or user equivalence will fail. This clears the login banner (this may apply to Solaris only).

Checked for the file /usr/lib/libdce.so [Metalink Note:333348.1]

The 10gR2 installer on Solaris 64-bit fails if the file /usr/lib/libdce.so is present.
Check Metalink Note:333348.1 for the workaround.
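If the library is present, the workaround (as I recall it from the note, so verify there first) is simply to move it aside for the duration of the install and restore it afterwards:

mv /usr/lib/libdce.so /usr/lib/libdce.so.bak
# run the installation, then:
mv /usr/lib/libdce.so.bak /usr/lib/libdce.so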

Configure the .profile of user ORACLE

stty cs8 -istrip -parenb
PATH=/usr/bin:/usr/local/bin
EDITOR=/usr/bin/vi
#umask 077
umask 022
ulimit -c 0
export PATH EDITOR
set -o vi
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export OH=$ORACLE_HOME
export ORA_CRS_HOME=$ORACLE_BASE/product/crs
export CH=$ORA_CRS_HOME
export ORACLE_SID=SHCL1DR1
#export NLS_LANG=Japanese_Japan.UTF8
export NLS_LANG=AMERICAN_AMERICA.UTF8
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/sbin:/usr/bin:/usr/ccs/bin:/usr/ucb:/etc:/usr/X/bin:/usr/openwin/bin:/usr/local/bin:/usr/sbin
export PS1=`hostname`" [$ORACLE_SID]\$ "
####
alias bdump='cd /u01/app/oracle/admin/SHCL1/bdump/'
alias talert='tail -f $ORACLE_BASE/admin/SHCL1/bdump/alert_$ORACLE_SID.log'
alias tns='cd $ORACLE_HOME/network/admin'
alias udump='cd /u01/app/oracle/admin/SHCL1/udump/'
alias valert='view $ORACLE_BASE/admin/SHCL1/bdump/alert_$ORACLE_SID.log'
alias home='cd $ORACLE_HOME'

Created the directories on both nodes

For ORACLE_BASE

mkdir -p /u01/app/oracle
chown -R ORACLE:oinstall /u01/app/oracle
chmod -R 770 /u01/app/oracle

For ORA_CRS_HOME

mkdir -p /u01/app/oracle/product/crs
chown -R root:oinstall /u01/app/oracle/product/crs

For ORACLE_HOME [RDBMS]

mkdir -p /u01/app/oracle/product/10.2.0/db_1
chown -R ORACLE:oinstall /u01/app/oracle/product/10.2.0/db_1

For OCR and Voting disks

mkdir -p /u02/oracle/crs/
mkdir -p /u03/oracle/crs/
mkdir -p /u04/oracle/crs/
Check the privileges on the directories [should be ORACLE:oinstall]
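A quick way to confirm the ownership on both nodes:

ls -ld /u02/oracle/crs /u03/oracle/crs /u04/oracle/crs
# and, if needed, as root:
chown ORACLE:oinstall /u02/oracle/crs /u03/oracle/crs /u04/oracle/crs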

Created OCR and Voting Disk files

In Linux, the Oracle-provided cluster file system OCFS2 is typically used on the shared disk, so simply 'touch'ing ocr_disk1 and vote_disk1 would do.
But since I used NAS as the shared storage (which is mounted on each node), I had to create the OCR and voting disk files on the NFS mounts (see the sketch below), then set their ownership and permissions.
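The exact creation commands and sizes weren't captured; the files were zero-filled with dd, along these lines (sizes are illustrative only):

dd if=/dev/zero of=/u02/oracle/crs/ocr_disk1 bs=1024k count=256
dd if=/dev/zero of=/u03/oracle/crs/ocr_disk2 bs=1024k count=256
dd if=/dev/zero of=/u02/oracle/crs/vote_disk1 bs=1024k count=256
dd if=/dev/zero of=/u03/oracle/crs/vote_disk2 bs=1024k count=256
dd if=/dev/zero of=/u04/oracle/crs/vote_disk3 bs=1024k count=256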
OCR

chown root:oinstall /u02/oracle/crs/ocr_disk1
chown root:oinstall /u03/oracle/crs/ocr_disk2
chmod 660 /u02/oracle/crs/ocr_disk1
chmod 660 /u03/oracle/crs/ocr_disk2

VOTING DISK

chown ORACLE:oinstall /u02/oracle/crs/vote_disk1
chown ORACLE:oinstall /u03/oracle/crs/vote_disk2
chown ORACLE:oinstall /u04/oracle/crs/vote_disk3
chmod 660 /u02/oracle/crs/vote_disk1
chmod 660 /u03/oracle/crs/vote_disk2
chmod 660 /u04/oracle/crs/vote_disk3

Downloaded and unpacked the Oracle 10.2.0.1 installation files:

10gr2_cluster_sol.cpio
10gr2_companion_sol.cpio
10gr2_db_sol.cpio
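Each archive unpacks into its own directory with cpio, for example:

cpio -idmv < 10gr2_cluster_sol.cpio
cpio -idmv < 10gr2_db_sol.cpio
cpio -idmv < 10gr2_companion_sol.cpio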

Ran the Cluster Verification Utility shipped in 10gr2_cluster_sol.cpio:
ownserver01 [SHCL1DR1]$ ./runcluvfy.sh stage -pre crsinst -n OWNSERVER01,OWNSERVER02 -verbose
Performing pre-checks for cluster services setup
Checking node reachability…
Check: Node reachability from node "ownserver01"
Destination Node Reachable?
———————————— ————————
OWNSERVER01 yes
OWNSERVER02 yes
Result: Node reachability check passed from node "ownserver01".
Checking user equivalence…
Check: User equivalence for user "ORACLE"
Node Name Comment
———————————— ————————
OWNSERVER02 passed
OWNSERVER01 passed
Result: User equivalence check passed for user "ORACLE".
Checking administrative privileges…
Check: Existence of user "ORACLE"
Node Name User Exists Comment
———— ———————— ————————
OWNSERVER02 yes passed
OWNSERVER01 yes passed
Result: User existence check passed for "ORACLE".
Check: Existence of group "oinstall"
Node Name Status Group ID
———— ———————— ————————
OWNSERVER02 exists 300
OWNSERVER01 exists 300
Result: Group existence check passed for "oinstall".
Check: Membership of user "ORACLE" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Comment
—————- ———— ———— ———— ———— ————
OWNSERVER02 yes yes yes yes passed
OWNSERVER01 yes yes yes yes passed
Result: Membership check for user "ORACLE" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity…
Interface information for node "OWNSERVER02"
Interface Name IP Address Subnet
—————————— —————————— —————-
e1000g0 10.47.2.79 10.47.2.64
e1000g0 10.47.2.80 10.47.2.64
e1000g0 10.47.2.80 10.0.0.0
e1000g1 10.47.2.85 10.47.2.64
e1000g1 10.47.2.86 10.47.2.64
e1000g1 10.47.2.86 10.0.0.0
e1000g2 10.47.2.14 10.47.2.0
e1000g2 10.47.2.15 10.47.2.0
e1000g2 10.47.2.15 10.0.0.0
e1000g3 3.208.169.207 3.208.169.192
e1000g3 3.208.169.208 3.208.169.192
e1000g3 3.208.169.208 3.0.0.0
nxge0 10.47.2.16 10.47.2.0
nxge1 3.208.169.209 3.208.169.192
nxge2 10.47.2.81 10.47.2.64
nxge3 10.47.2.87 10.47.2.64
Interface information for node "OWNSERVER01"
Interface Name IP Address Subnet
—————————— —————————— —————-
e1000g0 10.47.2.76 10.47.2.64
e1000g0 10.47.2.77 10.47.2.64
e1000g0 10.47.2.77 10.0.0.0
e1000g1 10.47.2.82 10.47.2.64
e1000g1 10.47.2.83 10.47.2.64
e1000g1 10.47.2.83 10.0.0.0
e1000g2 10.47.2.11 10.47.2.0
e1000g2 10.47.2.12 10.47.2.0
e1000g2 10.47.2.12 10.0.0.0
e1000g3 3.208.169.203 3.208.169.192
e1000g3 3.208.169.204 3.208.169.192
e1000g3 3.208.169.204 3.0.0.0
nxge0 10.47.2.13 10.47.2.0
nxge1 3.208.169.205 3.208.169.192
nxge2 10.47.2.78 10.47.2.64
nxge3 10.47.2.84 10.47.2.64
Check: Node connectivity of subnet "10.47.2.64"
Source Destination Connected?
—————————— —————————— —————-
OWNSERVER02:e1000g0 OWNSERVER02:e1000g0 yes
OWNSERVER02:e1000g0 OWNSERVER02:e1000g1 yes
OWNSERVER02:e1000g0 OWNSERVER02:e1000g1 yes
OWNSERVER02:e1000g0 OWNSERVER02:nxge2 yes
OWNSERVER02:e1000g0 OWNSERVER02:nxge3 yes
OWNSERVER02:e1000g0 OWNSERVER01:e1000g0 yes
OWNSERVER02:e1000g0 OWNSERVER01:e1000g0 yes
OWNSERVER02:e1000g0 OWNSERVER01:e1000g1 yes
OWNSERVER02:e1000g0 OWNSERVER01:e1000g1 yes
OWNSERVER02:e1000g0 OWNSERVER01:nxge2 yes
OWNSERVER02:e1000g0 OWNSERVER01:nxge3 yes
OWNSERVER02:e1000g0 OWNSERVER02:e1000g1 yes
OWNSERVER02:e1000g0 OWNSERVER02:e1000g1 yes
OWNSERVER02:e1000g0 OWNSERVER02:nxge2 yes
OWNSERVER02:e1000g0 OWNSERVER02:nxge3 yes
OWNSERVER02:e1000g0 OWNSERVER01:e1000g0 yes
OWNSERVER02:e1000g0 OWNSERVER01:e1000g0 yes
OWNSERVER02:e1000g0 OWNSERVER01:e1000g1 yes
OWNSERVER02:e1000g0 OWNSERVER01:e1000g1 yes
OWNSERVER02:e1000g0 OWNSERVER01:nxge2 yes
OWNSERVER02:e1000g0 OWNSERVER01:nxge3 yes
OWNSERVER02:e1000g1 OWNSERVER02:e1000g1 yes
OWNSERVER02:e1000g1 OWNSERVER02:nxge2 yes
OWNSERVER02:e1000g1 OWNSERVER02:nxge3 yes
OWNSERVER02:e1000g1 OWNSERVER01:e1000g0 yes
OWNSERVER02:e1000g1 OWNSERVER01:e1000g0 yes
OWNSERVER02:e1000g1 OWNSERVER01:e1000g1 yes
OWNSERVER02:e1000g1 OWNSERVER01:e1000g1 yes
OWNSERVER02:e1000g1 OWNSERVER01:nxge2 yes
OWNSERVER02:e1000g1 OWNSERVER01:nxge3 yes
OWNSERVER02:e1000g1 OWNSERVER02:nxge2 yes
OWNSERVER02:e1000g1 OWNSERVER02:nxge3 yes
OWNSERVER02:e1000g1 OWNSERVER01:e1000g0 yes
OWNSERVER02:e1000g1 OWNSERVER01:e1000g0 yes
OWNSERVER02:e1000g1 OWNSERVER01:e1000g1 yes
OWNSERVER02:e1000g1 OWNSERVER01:e1000g1 yes
OWNSERVER02:e1000g1 OWNSERVER01:nxge2 yes
OWNSERVER02:e1000g1 OWNSERVER01:nxge3 yes
OWNSERVER02:nxge2 OWNSERVER02:nxge3 yes
OWNSERVER02:nxge2 OWNSERVER01:e1000g0 yes
OWNSERVER02:nxge2 OWNSERVER01:e1000g0 yes
OWNSERVER02:nxge2 OWNSERVER01:e1000g1 yes
OWNSERVER02:nxge2 OWNSERVER01:e1000g1 yes
OWNSERVER02:nxge2 OWNSERVER01:nxge2 yes
OWNSERVER02:nxge2 OWNSERVER01:nxge3 yes
OWNSERVER02:nxge3 OWNSERVER01:e1000g0 yes
OWNSERVER02:nxge3 OWNSERVER01:e1000g0 yes
OWNSERVER02:nxge3 OWNSERVER01:e1000g1 yes
OWNSERVER02:nxge3 OWNSERVER01:e1000g1 yes
OWNSERVER02:nxge3 OWNSERVER01:nxge2 yes
OWNSERVER02:nxge3 OWNSERVER01:nxge3 yes
OWNSERVER01:e1000g0 OWNSERVER01:e1000g0 yes
OWNSERVER01:e1000g0 OWNSERVER01:e1000g1 yes
OWNSERVER01:e1000g0 OWNSERVER01:e1000g1 yes
OWNSERVER01:e1000g0 OWNSERVER01:nxge2 yes
OWNSERVER01:e1000g0 OWNSERVER01:nxge3 yes
OWNSERVER01:e1000g0 OWNSERVER01:e1000g1 yes
OWNSERVER01:e1000g0 OWNSERVER01:e1000g1 yes
OWNSERVER01:e1000g0 OWNSERVER01:nxge2 yes
OWNSERVER01:e1000g0 OWNSERVER01:nxge3 yes
OWNSERVER01:e1000g1 OWNSERVER01:e1000g1 yes
OWNSERVER01:e1000g1 OWNSERVER01:nxge2 yes
OWNSERVER01:e1000g1 OWNSERVER01:nxge3 yes
OWNSERVER01:e1000g1 OWNSERVER01:nxge2 yes
OWNSERVER01:e1000g1 OWNSERVER01:nxge3 yes
OWNSERVER01:nxge2 OWNSERVER01:nxge3 yes
Result: Node connectivity check passed for subnet "10.47.2.64" with node(s) OWNSERVER02,OWNSERVER01.
Check: Node connectivity of subnet "10.0.0.0"
Source Destination Connected?
—————————— —————————— —————-
OWNSERVER02:e1000g0 OWNSERVER02:e1000g1 yes
OWNSERVER02:e1000g0 OWNSERVER02:e1000g2 yes
OWNSERVER02:e1000g0 OWNSERVER01:e1000g0 yes
OWNSERVER02:e1000g0 OWNSERVER01:e1000g1 yes
OWNSERVER02:e1000g0 OWNSERVER01:e1000g2 yes
OWNSERVER02:e1000g1 OWNSERVER02:e1000g2 yes
OWNSERVER02:e1000g1 OWNSERVER01:e1000g0 yes
OWNSERVER02:e1000g1 OWNSERVER01:e1000g1 yes
OWNSERVER02:e1000g1 OWNSERVER01:e1000g2 yes
OWNSERVER02:e1000g2 OWNSERVER01:e1000g0 yes
OWNSERVER02:e1000g2 OWNSERVER01:e1000g1 yes
OWNSERVER02:e1000g2 OWNSERVER01:e1000g2 yes
OWNSERVER01:e1000g0 OWNSERVER01:e1000g1 yes
OWNSERVER01:e1000g0 OWNSERVER01:e1000g2 yes
OWNSERVER01:e1000g1 OWNSERVER01:e1000g2 yes
Result: Node connectivity check passed for subnet "10.0.0.0" with node(s) OWNSERVER02,OWNSERVER01.
Check: Node connectivity of subnet "10.47.2.0"
Source Destination Connected?
—————————— —————————— —————-
OWNSERVER02:e1000g2 OWNSERVER02:e1000g2 yes
OWNSERVER02:e1000g2 OWNSERVER02:nxge0 yes
OWNSERVER02:e1000g2 OWNSERVER01:e1000g2 yes
OWNSERVER02:e1000g2 OWNSERVER01:e1000g2 yes
OWNSERVER02:e1000g2 OWNSERVER01:nxge0 yes
OWNSERVER02:e1000g2 OWNSERVER02:nxge0 yes
OWNSERVER02:e1000g2 OWNSERVER01:e1000g2 yes
OWNSERVER02:e1000g2 OWNSERVER01:e1000g2 yes
OWNSERVER02:e1000g2 OWNSERVER01:nxge0 yes
OWNSERVER02:nxge0 OWNSERVER01:e1000g2 yes
OWNSERVER02:nxge0 OWNSERVER01:e1000g2 yes
OWNSERVER02:nxge0 OWNSERVER01:nxge0 yes
OWNSERVER01:e1000g2 OWNSERVER01:e1000g2 yes
OWNSERVER01:e1000g2 OWNSERVER01:nxge0 yes
OWNSERVER01:e1000g2 OWNSERVER01:nxge0 yes
Result: Node connectivity check passed for subnet "10.47.2.0" with node(s) OWNSERVER02,OWNSERVER01.
Check: Node connectivity of subnet "3.208.169.192"
Source Destination Connected?
—————————— —————————— —————-
OWNSERVER02:e1000g3 OWNSERVER02:e1000g3 yes
OWNSERVER02:e1000g3 OWNSERVER02:nxge1 yes
OWNSERVER02:e1000g3 OWNSERVER01:e1000g3 yes
OWNSERVER02:e1000g3 OWNSERVER01:e1000g3 yes
OWNSERVER02:e1000g3 OWNSERVER01:nxge1 yes
OWNSERVER02:e1000g3 OWNSERVER02:nxge1 yes
OWNSERVER02:e1000g3 OWNSERVER01:e1000g3 yes
OWNSERVER02:e1000g3 OWNSERVER01:e1000g3 yes
OWNSERVER02:e1000g3 OWNSERVER01:nxge1 yes
OWNSERVER02:nxge1 OWNSERVER01:e1000g3 yes
OWNSERVER02:nxge1 OWNSERVER01:e1000g3 yes
OWNSERVER02:nxge1 OWNSERVER01:nxge1 yes
OWNSERVER01:e1000g3 OWNSERVER01:e1000g3 yes
OWNSERVER01:e1000g3 OWNSERVER01:nxge1 yes
OWNSERVER01:e1000g3 OWNSERVER01:nxge1 yes
Result: Node connectivity check passed for subnet "3.208.169.192" with node(s) OWNSERVER02,OWNSERVER01.
Check: Node connectivity of subnet "3.0.0.0"
Source Destination Connected?
—————————— —————————— —————-
OWNSERVER02:e1000g3 OWNSERVER01:e1000g3 yes
Result: Node connectivity check passed for subnet "3.0.0.0" with node(s) OWNSERVER02,OWNSERVER01.
Suitable interfaces for VIP on subnet "3.208.169.192":
OWNSERVER02 e1000g3:3.208.169.207 e1000g3:3.208.169.208
OWNSERVER01 e1000g3:3.208.169.203 e1000g3:3.208.169.204
Suitable interfaces for VIP on subnet "3.208.169.192":
OWNSERVER02 nxge1:3.208.169.209
OWNSERVER01 nxge1:3.208.169.205
I ignored the last messages about the VIP, as I knew there wasn't any problem.

On the installer's cluster configuration screen, click "Add" to add the second node:
Public Node Name: ownserver02
Private Node Name: ownserver02ipmp1
Virtual Host Name: ownserverv02

On Ownserver01
bash-3.00# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
bash-3.00# ls -l /u01/app/oracle/product/crs/root.sh
-rwxr-xr-x 1 ORACLE oinstall 105 Apr 30 11:14 /u01/app/oracle/product/crs/root.sh
bash-3.00# /u01/app/oracle/product/crs/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: ownserver01 ownserver01ipmp1 ownserver01
node 2: ownserver02 ownserver02ipmp1 ownserver02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /u02/oracle/crs/vote_disk1
Now formatting voting device: /u03/oracle/crs/vote_disk2
Now formatting voting device: /u04/oracle/crs/vote_disk3
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
ownserver01
CSS is inactive on these nodes.
ownserver02
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
On Ownserver02
bash-3.00# /u01/app/oracle/product/crs/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: ownserver01 ownserver01ipmp1 ownserver01
node 2: ownserver02 ownserver02ipmp1 ownserver02
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
ownserver01
ownserver02
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes…
Creating GSD application resource on (2) nodes…
Creating ONS application resource on (2) nodes…
Starting VIP application resource on (2) nodes…
Starting GSD application resource on (2) nodes…
Starting ONS application resource on (2) nodes…
Done.
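
At this point the clusterware can be sanity-checked from either node (this check is not part of the original output above; just a minimal example):

/u01/app/oracle/product/crs/bin/olsnodes -n
/u01/app/oracle/product/crs/bin/crs_stat -t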