Setup activities: Day 1 lab • Conventions • Set up tools on student laptop • Synchronize time between nodes • Verify and install required AIX filesets and fixes • Configure tuning parameters • Configure networks • Create operating system users, groups, and shell environment • Set up user equivalence (ssh) • Configure shared storage • Set up directories for Oracle binaries • Run Oracle RDA / HCVE script • Set up Oracle staging directories and final checks
Setup notes • Configure network – Public and private networks on the new node – Add entries in /etc/hosts for every node • Entries in the existing nodes’ /etc/hosts for the new node • Entries in the new node’s /etc/hosts for the existing nodes • Don’t forget the VIP for the new node • Users – Make sure the user IDs (grid, oracle) and group IDs (dba) are the same on all nodes – Check and grant privileges – Modify the users’ shell limits • ssh equivalence – Merge /home/oracle/.ssh/authorized_keys – Populate the known_hosts file by logging in (using ssh) between the existing nodes and the new node • Short and fully qualified hostnames • Include the private network – If using GPFS, also set up host equivalence for root
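The /etc/hosts additions above can be sketched as follows. This is illustrative only: the IP addresses and the example.com domain are placeholder assumptions, and rac32 follows the node name used later in the lab; substitute your own values, and append the same entries on every node.

```shell
# Hypothetical addresses -- substitute your own. Each node needs a public,
# a private (interconnect), and a VIP entry, in short and FQDN forms.
NEWNODE=rac32
HOSTS_ENTRIES=$(cat <<EOF
10.1.1.32     ${NEWNODE}.example.com       ${NEWNODE}        # public
192.168.1.32  ${NEWNODE}-priv.example.com  ${NEWNODE}-priv   # private
10.1.1.132    ${NEWNODE}-vip.example.com   ${NEWNODE}-vip    # VIP
EOF
)
# Review, then append to /etc/hosts on the new node AND on every existing node:
echo "$HOSTS_ENTRIES"
```

The same three-entry pattern (public, private, VIP) must already exist for each existing node in the new node's /etc/hosts as well.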
Setup notes: Shared storage (1 of 4) • To determine the size of the shared LUNs, use the bootinfo command, which reports the size in MB. For example: # bootinfo -s hdisk0 70006 (a 70 GB nominal disk)
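Since bootinfo -s reports MB, the 70006 above maps to the 70 GB nominal size like this (a sketch assuming the vendor's 1 GB = 1000 MB nominal labeling):

```shell
# Convert a bootinfo -s reading (MB) to nominal GB; 70006 is the slide's value.
size_mb=70006
size_gb=$(( size_mb / 1000 ))
echo "${size_gb} GB (nominal)"
```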
Setup notes: Shared storage (2 of 4) • CAUTION: Do not set or clear PVIDs on hdisks that are in use • To check disk mapping: – On LPARs physically attached to the SAN, run “pcmpath query device” (as the root user) and match up the serial IDs – Otherwise, use lquerypv to dump the hdisk header and match it up
Setup notes: Shared storage (4 of 4) • On the new node – Make grid the owner of the ASM disks and set permissions. For example, if your shared LUNs are hdisk1 – hdisk6: # chown grid:dba /dev/*hdisk[123456] # chmod 660 /dev/*hdisk[123456] – DO NOT use “dd” to write to any of the disks!
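A small sketch for double-checking the mode just set, before handing the disks to ASM. The stat invocations here are the GNU and BSD forms (AIX itself would use ls -l /dev/*hdisk*); the device names in the commented usage follow the slide's example LUNs.

```shell
# Return success only if the file's octal permission mode matches what we
# expect (660 for ASM disks). Tries GNU stat first, falls back to BSD stat.
check_mode() {
  path=$1; want=$2
  mode=$(stat -c '%a' "$path" 2>/dev/null || stat -f '%Lp' "$path")
  [ "$mode" = "$want" ]
}

# Usage on the new node (example devices from the slide):
#   for d in /dev/rhdisk1 /dev/rhdisk2; do
#     check_mode "$d" 660 || echo "wrong mode on $d"
#   done
```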
GPFS add node (1 of 2)
• Check GPFS filesets on the target node. For example:
  # lslpp -L | grep gpfs
  gpfs.base        3.5.0.12   C   F   GPFS File Manager
  gpfs.docs.data   3.5.0.4    C   F   GPFS Server Manpages and
  gpfs.gnr         3.5.0.8    C   F   GPFS Native RAID
• Verify remote command execution. For example (with rsh):
  # rsh lpnx date
  Wed Jun 6 11:57:19 PDT 2007
• Check cluster membership on an existing node. For example:
  # /usr/lpp/mmfs/bin/mmlsnode -a
  GPFS nodeset    Node list
  -------------   ----------------------------------
  rac30-priv      rac30-priv rac31-priv
GPFS add node (2 of 2)
• Accept the license on the new node:
  # mmchlicense server --accept -N rac32-priv
• Add the node. For example:
  # /usr/lpp/mmfs/bin/mmaddnode rac32-priv
  Wed Jun 6 11:57:19 PDT 2007: 6027-1664 mmaddnode: Processing node lp61-priv.smc.iic.ihost.com
  mmaddnode: Command successfully completed
  mmaddnode: 6027-1371 Propagating the changes to all affected nodes. This is an asynchronous process.
  # /usr/lpp/mmfs/bin/mmlsnode -C
  GPFS nodeset    Node list
  -------------   -------------------------------------------------------
  rac30-priv      rac30-priv rac31-priv rac32-priv
• Check to make sure all nodes are active:
  # mmgetstate -a
• If any nodes are down:
  # mmstartup -N <node name>
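The "make sure all nodes are active" step can be scripted by filtering the mmgetstate -a output. A sketch, where the sample text below is illustrative (it is not captured from the lab cluster) and the real input would be piped from /usr/lpp/mmfs/bin/mmgetstate -a:

```shell
# Print the name of every node whose GPFS state is not "active",
# skipping the two header lines of mmgetstate -a output.
down_nodes() {
  awk 'NR > 2 && $3 != "active" { print $2 }'
}

# Illustrative mmgetstate -a output: two header lines, one row per node.
sample=' Node number  Node name   GPFS state
------------------------------------------
       1      rac30-priv   active
       2      rac32-priv   down'

down=$(echo "$sample" | down_nodes)
echo "$down"
# Each printed name is a candidate for: mmstartup -N <node name>
```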
Extending the Oracle Grid Infrastructure Home to the New Node
• On any existing node, as the grid user, run addnode.sh
[grid]$GRID_HOME/oui/bin/addnode.sh
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac32}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac32-vip}"
• When the script finishes, run the orainstRoot.sh and root.sh scripts as the root user on the new node
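The silent-mode arguments above can be derived from the new node's name. A sketch, assuming the <node>-vip naming convention shown in this lab's example:

```shell
# Build the silent addnode.sh argument list for Grid Infrastructure,
# deriving the VIP name from the <node>-vip convention used in this lab.
addnode_args() {
  node=$1
  printf '%s %s' "\"CLUSTER_NEW_NODES={$node}\"" \
                 "\"CLUSTER_NEW_VIRTUAL_HOSTNAMES={$node-vip}\""
}

addnode_args rac32
```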
addnode.sh
[oracle]$ORACLE_HOME/oui/bin/addnode.sh
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac32}"
• When the script finishes, run the root.sh script as the root user on the new node
Check the new instance
• On any node:
  $ sqlplus / as sysdba
  SQL> select INSTANCE_NAME, HOST_NAME, STATUS from gv$instance;

  INSTANCE_NAME   HOST_NAME   STATUS
  --------------- ----------- ------------
  gpfsdb3         rac32       OPEN
  gpfsdb1         rac31       OPEN
  gpfsdb2         rac30       OPEN
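The same check can be scripted by scanning the query output for any instance that is not OPEN. A sketch: the sample rows mirror the slide's output, with one status deliberately changed to MOUNTED to show a hit, and piping the real rows out of sqlplus is left to the reader.

```shell
# Print INSTANCE_NAME for every three-column data row whose STATUS
# is neither the "STATUS" header nor "OPEN".
not_open() {
  awk 'NF == 3 && $3 != "STATUS" && $3 != "OPEN" { print $1 }'
}

# Sample rows (one altered to MOUNTED for illustration):
sample='gpfsdb3 rac32 OPEN
gpfsdb1 rac31 MOUNTED
gpfsdb2 rac30 OPEN'

bad=$(echo "$sample" | not_open)
echo "${bad:-all instances OPEN}"
```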