
Unit 22:

Add Node Lab



Agenda

• Configure the hardware

• Set up operating system environment for the new node

• Extending the Oracle Grid Infrastructure Home to the New Node

• Extending the Oracle RAC Home Directory

• Add database instances using DBCA



Reconfigure hardware
• Public and private network connections should already be in place
• Provide access from the new node to the existing cluster's shared storage



Setup activities: Day 1 lab
• Conventions
• Set up tools on the student laptop
• Synchronize time between nodes
• Verify and install required AIX filesets and fixes (see the example after this list)
• Configure tuning parameters
• Configure networks
• Create operating system users, groups, and shell environment
• Set up user equivalence (ssh)
• Configure shared storage
• Set up directories for Oracle binaries
• Run the Oracle RDA / HCVE script
• Set up Oracle staging directories and final checks
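A quick way to confirm that some commonly required filesets are installed (a sketch; the exact fileset and fix list depends on your AIX level and Oracle release, so treat these names as illustrative):

# lslpp -l bos.adt.base bos.adt.lib bos.adt.libm \
    bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools
# instfix -i -k "IV12345"   # IV12345 is a placeholder APAR; substitute the fixes required for your release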



Setup notes
• Configure network
– Public and private networks on the new node
– Add entries in /etc/hosts on every node
• Entries in the existing nodes' /etc/hosts for the new node
• Entries in the new node's /etc/hosts for the existing nodes
• Don't forget the VIP for the new node
• Users
– Make sure the user IDs (grid, oracle) and group IDs (dba) are the same on all nodes
– Check and grant privileges
– Modify the users' shell limits
• ssh equivalence (see the example after this list)
– Merge /home/oracle/.ssh/authorized_keys
– Populate the known_hosts file by logging in (using ssh) between the existing nodes and the new node
• Short and fully qualified hostnames
• Include the private network
– If using GPFS, also set up host equivalence for root
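A minimal sketch of the ssh equivalence setup, assuming OpenSSH and the example hostnames used in this unit (rac30, rac31, new node rac32; the fully qualified name below is a placeholder):

On the new node, as the oracle user:
$ ssh-keygen -t rsa          # accept the defaults; use an empty passphrase
$ cat ~/.ssh/id_rsa.pub      # append this key to ~/.ssh/authorized_keys on every node

Then log in once in each direction so known_hosts is populated for the short, fully qualified, and private names, for example:
$ ssh rac32 date
$ ssh rac32-priv date
$ ssh rac32.example.com date

Repeat for the grid user, and for root if GPFS is used.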



Setup notes: Shared storage (1 of 4)
• To determine the size of the shared LUNs, use the bootinfo command:
For example:
# bootinfo -s hdisk0
70006
(the size is reported in MB; 70006 MB is a 70 GB nominal disk)



Setup notes: Shared storage (2 of 4)
• CAUTION: Do not set or clear PVIDs on hdisks that are in use (see the check below)
• To check disk mapping:
– On LPARs physically attached to the SAN, use "pcmpath query device" (as the root user) and match up the serial IDs
– Otherwise, use lquerypv to dump and match up the hdisk header
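Before setting or clearing anything, a quick way to see which hdisks already carry a PVID or belong to a volume group (the output columns are hdisk name, PVID, volume group, and state):

# lspv     # disks showing a PVID of "none" and no volume group are not in use by the LVM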



Setup notes: Shared storage (3 of 4)
For ASM disks, the ORCLDISK signature is visible in the header:
# lquerypv -h /dev/rhdisk2
00000000 00820101 00000000 80000000 D2193DF6 |..............=.|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 4F52434C 4449534B 00000000 00000000 |ORCLDISK........|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 0A100000 00000103 4447315F 30303030 |........DG1_0000|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 44473100 00000000 |........DG1.....|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 4447315F 30303030 |........DG1_0000|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 01F5D870 1DA82400 |...........p..$.|
000000D0 01F5D870 1E2DA400 02001000 00100000 |...p.-..........|
000000E0 0001BC80 00001400 00000002 00000001 |................|
000000F0 00000002 00000002 00000000 00000000 |................|



Setup notes: Shared storage (4 of 4)
• On the new node
– Make grid the owner of the ASM disks and set permissions.
For example, if your shared LUNs are hdisk1 through hdisk6:
# chown grid:dba /dev/*hdisk[123456]
# chmod 660 /dev/*hdisk[123456]
– DO NOT use "dd" to write to any of the disks!!
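To confirm the result (a quick check using the same example disk names):

# ls -l /dev/*hdisk[123456]

The character devices (rhdisk1 through rhdisk6) should show owner grid, group dba, and mode crw-rw----.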



GPFS add node (1 of 2)
• Check GPFS filesets on target node:
For example:
# lslpp -L | grep gpfs
gpfs.base 3.5.0.12 C F GPFS File Manager
gpfs.docs.data 3.5.0.4 C F GPFS Server Manpages and
gpfs.gnr 3.5.0.8 C F GPFS Native RAID
• Verify remote command execution:
For example (with rsh):
# rsh lpnx date
Wed Jun 6 11:57:19 PDT 2007
• Check cluster membership on an existing node:
For example:
# /usr/lpp/mmfs/bin/mmlsnode -a .
GPFS nodeset Node list
------------- ----------------------------------
rac30-priv rac30-priv rac31-priv



GPFS add node (2 of 2)
• Accept license on the new node:
# mmchlicense server --accept -N rac32-priv
• Add the node:
For example:
# /usr/lpp/mmfs/bin/mmaddnode rac32-priv
Wed Jun 6 11:57:19 PDT 2007: 6027-1664 mmaddnode: Processing node
lp61-priv.smc.iic.ihost.com
mmaddnode: Command successfully completed
mmaddnode: 6027-1371 Propagating the changes to all affected nodes.
This is an asynchronous process.
# /usr/lpp/mmfs/bin/mmlsnode -C .
GPFS nodeset Node list
------------- -------------------------------------------------------
rac30-priv rac30-priv rac31-priv rac32-priv
• Check to make sure all nodes are active:
# mmgetstate -a
If any nodes are down, start GPFS on them:
# mmstartup -N <node name>
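Once GPFS is active on the new node, the existing GPFS file systems also need to be mounted there; a minimal sketch using the node name from the slides:

# /usr/lpp/mmfs/bin/mmmount all -N rac32-priv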



Extending the Oracle Grid Infrastructure Home to the New Node

• At any existing node, as the grid user, run addNode.sh:

[grid]$ cd $GRID_HOME/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac32}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac32-vip}"

• When the script finishes, run the orainstRoot.sh and root.sh scripts as the root user on the new node
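Optionally, verify the node addition before continuing; cluvfy ships with the Grid Infrastructure home (a hedged check):

[grid]$ cluvfy stage -post nodeadd -n rac32 -verbose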



Extending the Oracle RAC Home Directory

• At any existing node, as the oracle user, run addNode.sh:

[oracle]$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac32}"

• When the script finishes, run the root.sh script as the root user on the new node
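A quick sanity check that the database binaries now exist on the new node (a sketch; it assumes the same $ORACLE_HOME path on all nodes, which is the usual layout):

[oracle]$ ssh rac32 "ls -ld $ORACLE_HOME/bin/oracle"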



Check all nodes resources
$ ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....ROUP.dg ora....up.type ONLINE ONLINE rac30
ora....ER.lsnr ora....er.type ONLINE ONLINE rac30
ora....N1.lsnr ora....er.type ONLINE ONLINE rac30
ora....TEE2.dg ora....up.type ONLINE ONLINE rac30
ora....ROUP.dg ora....up.type ONLINE ONLINE rac30
ora.asm ora.asm.type ONLINE ONLINE rac30
ora.asmdb2.db ora....se.type ONLINE OFFLINE
ora.dgtest.db ora....se.type ONLINE ONLINE rac31
ora.eons ora.eons.type ONLINE ONLINE rac30
ora.gpfsdb.db ora....se.type OFFLINE OFFLINE
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac30
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE rac30
ora....SM2.asm application ONLINE ONLINE rac30
ora....30.lsnr application ONLINE ONLINE rac30
ora.rac30.gsd application OFFLINE OFFLINE
ora.rac30.ons application ONLINE ONLINE rac30
ora.rac30.vip ora....t1.type ONLINE ONLINE rac30
ora....SM1.asm application ONLINE ONLINE rac31
ora....31.lsnr application ONLINE ONLINE rac31
ora.rac31.gsd application OFFLINE OFFLINE
ora.rac31.ons application ONLINE ONLINE rac31
ora.rac31.vip ora....t1.type ONLINE ONLINE rac31
ora....SM3.asm application ONLINE ONLINE rac32
ora....32.lsnr application ONLINE ONLINE rac32
ora.rac32.gsd application OFFLINE OFFLINE
ora.rac32.ons application ONLINE ONLINE rac32
ora.rac32.vip ora....t1.type ONLINE ONLINE rac32
ora.scan1.vip ora....ip.type ONLINE ONLINE rac30
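Note: on 11gR2 and later, crs_stat is deprecated; the same information can be displayed with:

$ crsctl stat res -t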



Add new instance for RAC database

[oracle]$ ./dbca
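The interactive DBCA screens walk through selecting the RAC database, the new node, and the new instance name. The same step can also be scripted; a sketch using the database and node names from this unit (the password value is a placeholder):

[oracle]$ dbca -silent -addInstance -nodeList rac32 \
    -gdbName gpfsdb -instanceName gpfsdb3 \
    -sysDBAUserName sys -sysDBAPassword <password>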


Check the new instance
• On any node:
$ sqlplus / as sysdba
SQL> select INSTANCE_NAME,HOST_NAME,STATUS from gv$instance;
INSTANCE_NAME    HOST_NAME    STATUS
---------------- ------------ ------------
gpfsdb3 rac32 OPEN
gpfsdb1 rac31 OPEN
gpfsdb2 rac30 OPEN
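The clusterware view should agree; a quick cross-check using the database name from the example above:

$ srvctl status database -d gpfsdb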

© Copyright IBM Corporation 2013
