
Quick Installation Guide: Oracle 10g RAC on IBM pSeries with AIX 5L

Version 1.10 April 2005


Document history:

Version   Date         Updates                                                        Who                                   Validated by
1.00      March 2005   Creation                                                       Frederic Michiara, Thierry Plumeau    Fabienne Lepetit, Paul Bramy, Alain Benhaim
1.10      April 2005   Added chapter "Oracle patches to apply"; added chapter         Frederic Michiara
                       "Synchronize the System Time on Cluster Nodes"

Contributors:
o Thierry Plumeau: IBM France (EMEA ORACLE/IBM Joint Solutions Center)
o Paul Bramy: Oracle Corp. (EMEA ORACLE/IBM Joint Solutions Center)
o Alain Benhaim: IBM France (EMEA ORACLE/IBM Joint Solutions Center)
o Fabienne Lepetit: Oracle France (EMEA ORACLE/IBM Joint Solutions Center)
o Rick Piasecki: IBM US
o Frederic Michiara: Oracle France (EMEA ORACLE/IBM Joint Solutions Center)
o Michel Passet: IBM France

Contact :
o EMEA ORACLE / IBM Joint Solutions Center oraclibm@fr.ibm.com


1 The aim of this document
2 What's new
3 Hardware architecture
  3.1 Network architecture example with 2 nodes
4 Installation steps
  4.1 Oracle CRS on raw devices and database on ASM
  4.2 Oracle CRS data and database on GPFS
  4.3 Oracle CRS data and database on raw devices part of an HACMP volume group resource
5 CHECK LIST to use and follow
6 Preparing the system
  6.1 Hardware requirements
  6.2 Software requirements
    6.2.1 AIX Requirements
    6.2.2 Oracle Software Requirements
  6.3 Concurrent disks requirements
    6.3.1 Raw Disks for CRS/Voting disks (ASM implementation)
    6.3.2 Disks for ASM
    6.3.3 Disks for HACMP implementation
      6.3.3.1 Creating a concurrent volume group
      6.3.3.2 Creating the concurrent logical volumes
  6.4 Users and Groups
  6.5 Configure Kernel Parameters and Shell Limits
    6.5.1 Configure Shell Limits
    6.5.2 Configure System Configuration Parameters
    6.5.3 Configure Network Tuning Parameters
  6.6 NETWORK CONFIGURATION
  6.7 ORACLE environment setup
7 GPFS Implementation
  7.1 INSTALLING GPFS
  7.2 Creating a RSCT peer domain
  7.3 Creating a GPFS cluster
  7.4 Creating a GPFS filesystem
8 HACMP implementation
  8.1 Installing HACMP
  8.2 Post install tasks
  8.3 Creating the cluster HACMP
  8.4 Creating the resource HACMP
  8.5 STARTING HACMP
9 Synchronize the System Time on Cluster Nodes
10 Important TIPS for Oracle software and patches installation instructions
  10.1 10g installation on AIX 5.3 failed with "Checking operating system version must be 5200"
  10.2 10g recommended steps before installation and applying any Patch Set on AIX
11 Install the Cluster Ready Services (CRS)
12 Install the Oracle 10g software
  12.1 VIP Configuration Assistant
13 Oracle patches to apply
  13.1 Patch to apply on top of 10.1.0.2
  13.2 Patch to apply on top of 10.1.0.3
14 CREATING THE DATABASE USING DBCA
  14.1 Database creation on GPFS
  14.2 Database creation on ASM
  14.3 Database creation on raw devices
    14.3.1 Content example of a raw device DB configuration file
  14.4 Manual database creation
    14.4.1 Tips to create a database in 10g Real Application Clusters
    14.4.2 Configure listener.ora / sqlnet.ora / tnsnames.ora
    14.4.3 Configure Oracle Enterprise Manager
15 Grid Control installation
  15.1 Grid Control Management Server installation
  15.2 Metalink Note
  15.3 Grid Control Agent deployments
16 Appendix A: Logical volumes creation
17 Appendix B: Examples of configuration files
  17.1 Network
  17.2 listener.ora and tnsnames.ora configuration example
18 Appendix C: Oracle technical notes
  18.1 CRS and 10g Real Application Clusters
  18.2 About RAC
  18.3 About CRS
  18.4 About VIP
  18.5 About manual database creation
  18.6 About Grid Control
  18.7 About TAF
  18.8 About Adding/Removing a Node
  18.9 About ASM
  18.10 Note #2064876.102: How to setup High Availability Group Services (HAGS) on IBM AIX/RS6000
  18.11 Note #115792.1: How to setup the HACMP cluster interconnect adapter
19 Appendix D: HACMP cluster verification output
20 Appendix F: Filesets to be installed on the machines of the cluster
  20.1 RSCT 2.3.4 (provided with AIX 5.2) for all implementations
  20.2 When implementing HACMP
    20.2.1 HACMP 5.1 filesets
    20.2.2 RSCT 2.3.4 filesets for the HACMP implementation
  20.3 For GPFS implementation


1 THE AIM OF THIS DOCUMENT

This document is written to help you install Oracle 10g Real Application Clusters release 1 (10.1) on IBM pSeries servers with AIX 5L. We describe, step by step, three different architectures:
- Oracle CRS (Cluster Ready Services) on raw devices and database on ASM (Automatic Storage Management),
- Oracle CRS data and database on GPFS (General Parallel File System),
- Oracle CRS data and database on raw devices that are part of an HACMP volume group resource.
With these three implementations, you will be able to install other combinations as well.

Metalink: http://metalink.oracle.com/metalink/plsql/ml2_gui.startup
Title                                                                                            Origin            Reference
Oracle Real Application Clusters Installation and Configuration Guide, 10g (10.1) for AIX-Based Systems   Oracle Metalink   B10766-08
Oracle Real Application Clusters Administrator's Guide, 10g Release 1 (10.1)                    Oracle Metalink   B10765-01
Oracle Database Release Notes, 10g Release 1 (10.1.0.2.0) for AIX-Based Systems                 Oracle Metalink   B13611-05
Oracle Universal Installer Concepts Guide, Release 2.2                                          Oracle Metalink   A96697-01
GPFS for AIX 5L: AIX Clusters Concepts, Planning, and Installation Guide                        IBM               GA22-7895-01
GPFS for AIX 5L: AIX Clusters Administration and Programming Reference                          IBM               SA22-7896-01
GPFS on AIX Clusters: High Performance File System Administration Simplified                    IBM               SG24-6035-00

The information contained in this paper resulted from:
- Oracle and IBM documentation
- Workshop experience gained in the Oracle/IBM Joint Solutions Center
- Benchmarks and POC implementations performed for customers by EMEA PSSC Montpellier
- This documentation is a joint effort of Oracle and IBM specialists.

Please also refer to the Oracle online documentation for more information:
http://docs.oracle.com
http://tahiti.oracle.com
http://technet.oracle.com/docs/products/oracle9i/doc_library/release2/index.htm
http://otn.oracle.com/products/oracle9i/content.html
Oracle RAC home page: http://www.oracle.com/database/rac_home.html
For HACMP documentation, refer to: http://www-1.ibm.com/servers/eserver/pseries/library/hacmp_docs.html
For GPFS documentation, refer to: http://www-1.ibm.com/servers/eserver/pseries/library/gpfs.html
For more information: http://www-1.ibm.com/servers/eserver/pseries/software/sp/gpfs_faq.html

Your comments are important for us. We want our technical papers to be as helpful as possible. Please send us your comments about this document to the EMEA Oracle/IBM Joint Solutions Center. Use our email address :

oraclibm@fr.ibm.com
or our phone number :

+33 (0)4 67 34 67 49

2 WHAT'S NEW

As shown in the figure below, the main difference between Oracle 9i RAC and Oracle 10g RAC is the clusterware. The three main implementations are:
- Oracle CRS data and database on raw devices that are part of an HACMP volume group resource,
- Oracle CRS (Cluster Ready Services) on raw devices and database on ASM (Automatic Storage Management),
- Oracle CRS data and database on GPFS (General Parallel File System).

Figure: implementation choices for Oracle 9i RAC and Oracle 10g RAC on AIX (IBM/Oracle Joint Solutions Center workshop material, 2004):
- With Oracle 9i RAC on AIX, HACMP was mandatory. Two implementation types:
  - RAC with HACMP, database on concurrent raw devices
  - RAC with HACMP and GPFS, database on a cluster file system
- With Oracle 10g RAC on AIX, HACMP is no longer mandatory. Three implementation types:
  - 10g RAC with HACMP, database on concurrent raw devices (the only case where HACMP is mandatory)
  - 10g RAC with ASM (Oracle Automatic Storage Management), without HACMP or GPFS
  - 10g RAC with GPFS (without HACMP), database on a cluster file system


3 HARDWARE ARCHITECTURE

The cluster is composed of three partitions on an IBM pSeries 595 using AIX 5L.

A private network (for instance a gigabit Ethernet network, using a gigabit switch to link the cluster nodes) is dedicated to the Oracle interconnect (cache fusion between instances). This dedicated network is mandatory.

Note: A gigabit switch is mandatory for production, even for a two-node architecture (a cross-over cable can be used for test purposes only; it is not supported by Oracle Support, please read the RAC FAQ on http://metalink.oracle.com). This network can also be used for GPFS. A second gigabit Ethernet interconnect, with a different network mask, can be set up for security or performance reasons.
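As a quick check (not part of the original procedure), you can list the configured interfaces on each node to confirm that the public network and the dedicated interconnect are on separate adapters and subnets; the addresses should match the public and private entries declared later in /etc/hosts (chapter 6.6):

node1:root-/> netstat -in
node1:root-/> ifconfig -a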
The Oracle 10g RAC code is installed on internal disks on the three machines of the cluster. Data for CRS and for the database are stored on DSX000 disks connected to all the nodes. All the disks must be physically attached to the three nodes; because these disks are connected to all the nodes, they can be accessed by every node in concurrent mode. For this test we used an IBM DS4000. The cluster, named rac10g_cluster, is composed of three IBM pSeries partitions using AIX 5L.

The IBM AIX clustering layer (the HACMP filesets) should not be installed if you have chosen an implementation without HACMP. If this layer is implemented for other purposes, the raw devices necessary to install and run the CRS data will have to be part of an HACMP volume group resource.

3.1 NETWORK ARCHITECTURE EXAMPLE WITH 2 NODES


4 INSTALLATION STEPS

4.1 ORACLE CRS ON RAW DEVICES AND DATABASE ON ASM

1. Hardware requirements (chap. 6.1)
2. Software requirements (chap. 6.2)
3. Disks for CRS on raw disks (chap. 6.3.1)
4. Disks for ASM (chap. 6.3.2)
5. Users and Groups (chap. 6.4)
6. Configure Kernel Parameters and Shell Limits (chap. 6.5)
7. Network configuration (chap. 6.6)
8. Oracle environment setup (chap. 6.7)
9. Synchronize the System Time on Cluster Nodes (chap. 9)
10. Install the Cluster Ready Services (chap. 11)
11. Install the Oracle 10g software (chap. 12)
12. VIPCA utility (chap. 12.1)
13. Create the database using DBCA (chap. 14 / 14.2): ASM database


4.2 ORACLE CRS DATA AND DATABASE ON GPFS

1. Hardware requirements (chap. 6.1)
2. Software requirements (chap. 6.2)
3. Disks for CRS on raw devices (chap. 6.3.1)
4. Users and Groups (chap. 6.4)
5. Configure Kernel Parameters and Shell Limits (chap. 6.5)
6. Network configuration (chap. 6.6)
7. Oracle environment setup (chap. 6.7)
8. GPFS implementation (chap. 7)
9. Synchronize the System Time on Cluster Nodes (chap. 9)
10. Install the Cluster Ready Services (chap. 11): OCR and voting disks on GPFS
11. Install the Oracle 10g software (chap. 12)
12. VIPCA utility (chap. 12.1)
13. Create the database using DBCA (chap. 14 / 14.1): GPFS database


4.3 ORACLE CRS DATA AND DATABASE ON RAW DEVICES PART OF AN HACMP VOLUME GROUP RESOURCE

1. Hardware requirements (chap. 6.1)
2. Software requirements (chap. 6.2)
3. Disks for HACMP implementation (chap. 6.3.3)
4. Users and Groups (chap. 6.4)
5. Configure Kernel Parameters and Shell Limits (chap. 6.5)
6. Network configuration (chap. 6.6)
7. Oracle environment setup (chap. 6.7)
8. HACMP implementation (chap. 8)
9. Synchronize the System Time on Cluster Nodes (chap. 9)
10. Install the Cluster Ready Services (chap. 11): OCR and voting disks on an HACMP resource
11. Install the Oracle 10g software (chap. 12)
12. VIPCA utility (chap. 12.1)
13. Create the database using DBCA (chap. 14 / 14.3): raw device database


5 CHECK LIST TO USE AND FOLLOW

This is the list of operations you should complete before moving on to the Oracle installation steps:

This checklist should be completed on every node (Node 1, Node 2, Node 3, Node 4, Node 5, ...); mark each operation as done (Yes/No) per node.

1. Check the Hardware Requirements
2. Check the Network Requirements
3. Check the Software Requirements
4. Create the Required UNIX Groups and User
5. Configure Kernel Parameters and Shell Limits
6. Identify the Required Software Directories
7. Identify or Create an Oracle Base Directory
8. Create the CRS Home Directory
9. Choose a Storage Option for Oracle CRS, Database, and Recovery Files
10. Create Directories for Oracle CRS, Database, or Recovery Files
11. Configure Disks for Automatic Storage Management, or configure Raw Devices and verify the Cluster Software Configuration, or configure GPFS for a shared file system
12. Verify that the disk storage can be accessed concurrently from all nodes participating in the RAC cluster
13. Synchronize the System Time on Cluster Nodes
14. Stop Existing Oracle Processes
15. Configure the oracle User's Environment


6 PREPARING THE SYSTEM

6.1 HARDWARE REQUIREMENTS

- RAM >= 512 MB. Command to check the physical memory: lsattr -El sys0 -a realmem
- Internal disk >= 4 GB for the Oracle code.
- Paging space = 2 x RAM, with a minimum of 400 MB and a maximum of 2 GB. To check the configured paging space: lsps -a
- Temporary disk space: the Oracle Universal Installer requires up to 400 MB of free space in the /tmp directory. To check the free temporary space available: df -k /tmp
  You can use another filesystem instead of /tmp. Set the TEMP environment variable (used by Oracle) and the TMPDIR environment variable to the new location. For example:
    export TEMP=/new_tmp
    export TMPDIR=/new_tmp
- CD-ROM drive.
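If rsh user equivalence is already in place (see chapter 6.6), the checks above can be run on every node from a single point with a small loop. This is only a convenience sketch; the node names are the example names used throughout this document:

for n in node1 node2 node3; do
  echo "==== $n ===="
  rsh $n "lsattr -El sys0 -a realmem; lsps -a; df -k /tmp"
done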

6.2 SOFTWARE REQUIREMENTS

6.2.1 AIX Requirements

Note: To have the latest information, please refer to Metalink Note 282036.1 on http://metalink.oracle.com; that note's latest update is Version 7 (March 29, 2005).

Check that the required software and patches are installed on the system.

Check for Required Software


Depending on the products that you intend to install, verify that the following software is installed on the system. The procedure following the table describes how to check these requirements.
AIX releases supported with Oracle 10g RAC:
- AIX 5L Version 5.2 ML04 and higher
  Note: Maintenance Level 04 (ML04) contains both:
  1) Recommended maintenance updates for all AIX 5L v5.2 installations.
  2) Support code for POWER5 processor-based servers.
- AIX 5L Version 5.3 and higher

To determine which version of AIX is installed, enter the following command:


# oslevel -r

If the operating system version is lower than AIX 5.2.0.0 Maintenance Level 4 (5200-04), upgrade your operating system to this level. AIX 5L version 5.2 maintenance packages are available from the following Web site:
http://www-912.ibm.com/eserver/support/fixes/

AIX filesets required

OS Release   Maintenance Level           Filesets
AIX 5.2      ML 4 minimum and higher     bos.adt.base, bos.adt.lib, bos.adt.libm, bos.perf.libperfstat, bos.perf.perfstat, bos.perf.proctools, rsct.basic.rte
AIX 5.3                                  bos.adt.base, bos.adt.lib, bos.adt.libm, bos.perf.libperfstat, bos.perf.perfstat, bos.perf.proctools, rsct.basic.rte

To ensure that the system meets these requirements, follow these steps.

To determine whether the required filesets are installed and committed, enter a command similar to the following:

# lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat \
  bos.perf.libperfstat bos.perf.proctools rsct.basic.rte

If a fileset is not installed and committed, then install it. Refer to your operating system or software documentation for information about installing filesets.
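A small loop such as the following (a sketch, not taken from the Oracle documentation) reports each required fileset as installed or missing:

for f in bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat \
         bos.perf.perfstat bos.perf.proctools rsct.basic.rte; do
  if lslpp -l $f > /dev/null 2>&1; then
    echo "$f : installed"
  else
    echo "$f : MISSING"
  fi
done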

Check for Required Patches


Depending on the products that you intend to install, verify that the following patches are installed on the system. The procedure following the lists describes how to check these requirements.

AIX patches (APARs) needed
(Note: If a PTF is not downloadable, customers should request an efix through AIX customer support.)

OS Release 5.2, Maintenance Level ML 4 minimum and higher, for all implementations (ASM, GPFS, RAW):
IY43980: libperfstat.h not ANSI-compliant
IY44810: DSI IN BMRECYCLE
IY45462: Definition of isnan() in math.h incorrect
IY45707: J2 READAHEAD/CIO INTERACTION
IY46214: dropping partial connections leaves them on so_q0
IY46605: exec of 32 bit application can fail on 64 bit kernel
IY48525: SDK 1.4.1 32-BIT SR1: CA141-20030930
IY51801: race condition in aio_nwait_timeout
IY56024: CIO WRITE RETURNS INCORRECT LENGTH

The following program temporary fix (PTF) is required if you intend to use ASM for database file storage:
U496549: bos.rte.aio.5.2.0.15

IY64978: Possible system hang while concurrently renaming and unlinking under JFS. This APAR is currently available from the Fix Central download Web site located at: http://www-1.ibm.com/servers/eserver/support/pseries/aixfixes.html

IY63366: Loader may fail to find a symbol even though the symbol is present in the symbol table. This can cause applications that use dynamically loaded modules to fail. Prior to APAR availability, an emergency fix is available at: ftp://service.software.ibm.com/aix/efixes/iy63366/

IY59082: Heavily loaded systems running JFS2 file systems may hang. AIX 5.2 systems with the kernel filesets (bos.mp, bos.mp64, bos.up) at the 5.2.0.40 level and using JFS2 file systems should apply the fix for this APAR.

In addition, customers may need to install one or more of the following emergency fixes:
1. Systems running bos.rte.lvm 5.2.0.41 or later should install APAR IY64691. APAR IY64691 fixes a problem with the chvg -B command that can cause data corruption on Big volume groups which were converted from normal volume groups. Prior to APAR availability, obtain the emergency fix for APAR IY64691 from: ftp://service.software.ibm.com/aix/efixes/iy64691/
2. Systems running bos.rte.lvm 5.2.0.50 should install APAR IY65001. APAR IY65001 fixes a possible corruption issue with mirrored logical volumes. This APAR also contains the fix for APAR IY64691. Prior to APAR availability, obtain the emergency fix for APAR IY65001 from: ftp://service.software.ibm.com/aix/efixes/iy65001/
3. Systems running bos.rte.aio 5.2.0.50 should install APAR IY64737. APAR IY64737 fixes a problem where applications that use Asynchronous I/O (AIO) can cause a system hang. Prior to APAR availability, obtain the emergency fix for APAR IY64737 from: ftp://service.software.ibm.com/aix/efixes/iy64737/
-----------------------------------------------------------------------------------------------------
APARs and PTFs required for HACMP:
Note: HACMP is required only if you want to use raw logical volumes for Oracle CRS or database file storage; however, it is supported for all installations.
IY42783: CT:LX: RMC daemon may hang if managed nodes recycle
IY43602: DISK FAILURES CAUSING QUORUM TO BE LOST IS NOT
IY45695: BASE FIXES FOR HACMP 5.1.0
U496124: cluster.es.server.rte.5.1.0.2
-----------------------------------------------------------------------------------------------------

General Parallel File System (GPFS)

Note: GPFS is required only if you want to use a cluster file system for Oracle CRS or database files.
GPFS 2.3.0.1 (GPFS 2.3 plus APAR IY63969, or later). All new Oracle and GPFS customers on AIX 5L v5.2 should use GPFS 2.3.0.1 or later. Current GPFS 2.1 and 2.2 customers are encouraged to upgrade to GPFS 2.3.0.1 or later. For more information regarding GPFS 2.3 and Oracle, see Metalink Note 302806.1.
If installing GPFS on AIX 5L Version 5.2 ML04, the AIX PTF for APAR IY60609 is mandatory.

APARs required for GPFS v2.2:
GPFS 2.2.1.1 (GPFS PTF 6)
IY54739: GPFS 2.2 mandatory service
-OR-
APARs and PTFs required for GPFS v2.1:
IY52454: DirectIO fixes for Linux, Inerop: backup extended
U489058: mmfs.base.cmds.3.5.0.6
U496347: mmfs.gpfs.rte.3.5.0.10
U496395: mmfs.gpfs.rte.2.1.0.10
-----------------------------------------------------------------------------------------------------
PTFs required for the VisualAge C compiler (if needed):
U489726: vac.C.6.0.0.4 (or later)
-----------------------------------------------------------------------------------------------------
APARs required for the JDK, depending on which version is installed:
Note: These APARs are required only if you are using the associated JDK version.
JDK 1.4.1.1 (64-bit): IY48526
JDK 1.3.1.11 (32-bit): IY47055
JDK 1.2.2.18: IY40034

AIX patches (APARs) needed
(Note: If a PTF is not downloadable, customers should request an efix through AIX customer support.)

OS Release 5.3, Maintenance Level and higher, for all implementations (ASM, GPFS, RAW):
IY58143: REQUIRED UPDATE FOR AIX 5.3
IY59386: libdepend.mk files are all empty
IY60930: Unable to delete network routes
IY66513: LDR_CNTRL TURNS ON UNDESIRABLE OPTION WHEN INITIALIZED WITH INCORRECT VALUE (customers should request an efix through AIX customer support)
-----------------------------------------------------------------------------------------------------
Note: HACMP is required only if you want to use raw logical volumes for Oracle CRS or database file storage; however, it is supported for all installations.
For either HACMP 5.2 or HACMP 5.1 (raw devices implementation):
IY61034: INCORRECT GW ADDR FOR DEFAULT ROUTE AFTER ALIAS DELETE
IY61770: MISC SERVICE UPDATES
IY62191: THREADS BLOCKED IN _GLOBAL_LOCK_COMMON
HACMP V5.2 plus (raw devices implementation):
IY60759: ODM PERMISSION CHANGES REQUIRED TO SUPPORT ORACLE 9i
HACMP V5.1 plus (raw devices implementation):
U498114 and all co-requisite PTFs.
-----------------------------------------------------------------------------------------------------

General Parallel File System (GPFS)

Note: GPFS is required only if you want to use a cluster file system for Oracle CRS or database files.
GPFS (cluster file system implementation): for the latest GPFS information, see: http://publib.boulder.ibm.com/clresctr/library/gpfs_faqs.html
GPFS 2.3.0.1 (GPFS 2.3 plus APAR IY63969, or later). All Oracle and GPFS customers on AIX 5L v5.3 should use GPFS 2.3.0.1. For more information regarding GPFS 2.3 and Oracle, see Metalink Note 302806.1.
-OR-
GPFS 2.2.1.0 (GPFS PTF 5) plus:
IY59339: Misc Service Updates
-OR-
GPFS 2.2.1.1 (GPFS PTF 6)
-----------------------------------------------------------------------------------------------------
APARs required for the JDK, depending on which version is installed:
IBM JDK 1.3.1.11 (32-bit): IY47055
IBM JDK 1.4.1.2 (32-bit): IY47536
IBM JDK 1.4.1.1 (64-bit): IY47538
-----------------------------------------------------------------------------------------------------
PTFs required for the VisualAge C compiler (if needed):
VAC 6 C and C++ for AIX July PTF (6.0.0.4): U489726

To ensure that the system meets these requirements, follow these steps:

1. To determine whether an APAR is installed, enter a command similar to the following:
   # /usr/sbin/instfix -i -k "IY43980 IY44810 IY45462 IY45707 IY46214 IY46605 \
     IY48525 IY51801 IY56024"
   If an APAR is not installed, download it from the following Web site and install it:
   http://www-912.ibm.com/eserver/support/fixes/
2. To determine whether a PTF is installed, enter a command similar to the following:
   # lslpp -l -B U489726 U485561 ...
   If a PTF is not installed, download it from the following Web site and install it:
   http://www-912.ibm.com/eserver/support/fixes/

6.2.2 Oracle Software Requirements

Oracle CDs needed for the RAC installation:
- Oracle Database 10g Release 1 (10.1.0.2) Enterprise/Standard Edition for AIX 5L Based Systems
- Oracle Cluster Ready Services Release 1 (10.1.0.2) for AIX 5L Based Systems (1 CD)


6.3 CONCURRENT DISKS REQUIREMENTS

Choose a Storage Option for Oracle CRS, Database, and Recovery Files
The following table shows the storage options supported for storing Oracle Cluster Ready Services (CRS) files, Oracle database files, and Oracle database recovery files. Oracle database files include datafiles, control files, redo log files, the server parameter file, and the password file. Oracle CRS files include the Oracle Cluster Registry (OCR) and the CRS voting disk. For all installations, you must choose the storage option that you want to use for Oracle CRS files and Oracle database files. If you want to enable automated backups during the installation, you must also choose the storage option that you want to use for recovery files (the flash recovery area).

Note: For the most up-to-date information about supported storage options for RAC installations, refer to the Certify pages on the OracleMetaLink Web site: http://metalink.oracle.com

File Types Supported

Storage Option                                        CRS    Database   Recovery
Automatic Storage Management                          No     Yes        Yes
Cluster file system (Note: requires GPFS)             Yes    Yes        Yes
Shared raw logical volumes (Note: requires HACMP)     Yes    Yes        No
Shared raw disk devices                               Yes    Yes        No

Use the following guidelines when choosing the storage options that you want to use for each file type:
- You can choose any combination of the supported storage options for each file type, as long as you satisfy any requirements listed for the chosen storage options.
- For Standard Edition installations, ASM is the only supported storage option for database or recovery files.
- Automatic Storage Management cannot be used to store Oracle CRS files, because these files must be accessible before any Oracle instance starts.
- If you are not using HACMP, you cannot use shared raw logical volumes for CRS or database file storage.
- For information about how to configure disk storage before you start the installation, refer to one of the following sections, depending on your choice:
  o To use ASM for database or recovery file storage, refer to chapter 6.3.1 to prepare the raw disks for the OCR/voting disks and to chapter 6.3.2 to prepare the disks for the ASM implementation.
  o To use the GPFS cluster file system for Oracle CRS, database, or recovery file storage, refer to chapter 7 (GPFS implementation) to prepare the cluster file system for the OCR/voting disks (seen as files).
  o To use raw devices (disks or logical volumes) for Oracle CRS or database file storage, refer to chapter 8 for the HACMP installation/implementation and to chapter 6.3.3 to prepare the raw devices for the OCR/voting disks.

6.3.1 Raw Disks for CRS/Voting disks (ASM implementation)

Identify two disks available for the OCR and voting disks, on each node. The output below shows the LUN mapping for the nodes used in our cluster. The LUNs for the OCR disk and the voting disk have IDs 1 and 2; these IDs will help us identify which hdisk is to be used on each node.

Note: Be careful: hdisk2 on node1 is not necessarily hdisk2 on node2.


node1:root-/> lscfg -vl hdisk2
  hdisk2   U1.9-P1-I1/Q1-W200300A0B80C5404-L1000000000000   3552 (500) Disk Array Device
node1:root-/> lscfg -vl hdisk3
  hdisk3   U1.9-P1-I1/Q1-W200300A0B80C5404-L2000000000000   3552 (500) Disk Array Device

node1:root-/> rsh node2
node2:root-/> lscfg -vl hdisk3
  hdisk3   U1.9-P1-I1/Q1-W200300A0B80C5404-L1000000000000   3552 (500) Disk Array Device
node2:root-/> lscfg -vl hdisk4
  hdisk4   U1.9-P1-I1/Q1-W200300A0B80C5404-L2000000000000   3552 (500) Disk Array Device

- If these disks do not have a PVID, assign one to each of them as follows:


node1:root-/> lspv
hdisk0     0033c670e214eac5     rootvg     active
hdisk1     none                 None
hdisk2     none                 None
hdisk3     none                 None
hdisk4     none                 None
hdisk5     none                 None

node1:root-/> chdev -l hdisk2 -a pv=yes
node1:root-/> chdev -l hdisk3 -a pv=yes

node1:root-/> lspv
hdisk0     0033c670e214eac5     rootvg     active
hdisk1     none                 None
hdisk2     0033c670b0f3ee84     None
hdisk3     0033c670b0f3fb87     None
hdisk4     none                 None
hdisk5     none                 None

- Change the reserve_policy attribute to no_reserve for the OCR and voting disks on each node:

node1:root-/> chdev -l hdisk2 -a reserve_policy=no_reserve
node1:root-/> chdev -l hdisk3 -a reserve_policy=no_reserve
node1:root-/> rsh node2
node2:root-/> chdev -l hdisk3 -a reserve_policy=no_reserve
...

- Issue the command lsattr -E -l hdisk2 to display all the attributes of hdisk2.
- As described before, the disks might have different names from one node to another; for example, hdisk2 on node1 might be hdisk3 on node2. Create a device on each node called /dev/ocr_disk and /dev/vote_disk with the right major and minor numbers.
node1:root-/> ls -l /dev/*hdisk2
brw-------   1 root   system   20, 4  Oct 18 17:52  /dev/hdisk2
crw-------   1 root   system   20, 4  Oct 18 17:52  /dev/rhdisk2

node1:root-/> mknod /dev/ocr_disk c 20 4
node1:root-/> mknod /dev/vote_disk c 20 5

node1:root-/> rsh node2
node2:root-/> ls -l /dev/*hdisk3
brw-------   1 root   system   12, 9  Oct 18 16:22  /dev/hdisk3
crw-------   1 root   system   12, 9  Oct 18 16:22  /dev/rhdisk3

node2:root-/> mknod /dev/ocr_disk c 12 9
node2:root-/> mknod /dev/vote_disk c 12 10
...

- Change the owner, group, and permissions of /dev/ocr_disk and /dev/vote_disk on each node of the cluster:

node1:root-/> chown oracle.dba /dev/ocr_disk
node1:root-/> chown oracle.dba /dev/vote_disk
node1:root-/> chmod 660 /dev/ocr_disk
node1:root-/> chmod 660 /dev/vote_disk

node1:root-/> ls -l /dev/* | grep "20, 4"
brw-------   1 root     system   20, 4  Oct 18 17:52  /dev/hdisk2
crw-------   1 root     system   20, 4  Oct 18 17:52  /dev/rhdisk2
crw-rw----   1 oracle   dba      20, 4  Oct 19 13:27  /dev/ocr_disk

node1:root-/> ls -l /dev/* | grep "20, 5"
brw-------   1 root     system   20, 5  Oct 18 17:52  /dev/hdisk3
crw-------   1 root     system   20, 5  Oct 18 17:52  /dev/rhdisk3
crw-rw----   1 oracle   dba      20, 5  Oct 19 13:28  /dev/vote_disk

node1:root-/> rsh node2
node2:root-/> chown oracle.dba /dev/ocr_disk
...
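As a final sanity check (not part of the original procedure), verify from every node, as the oracle user, that the new devices can be read; the dd commands below only read the first blocks and write nothing to the disks:

node1:root-/> su - oracle
node1:oracle-/home/oracle> dd if=/dev/ocr_disk of=/dev/null bs=8192 count=1
node1:oracle-/home/oracle> dd if=/dev/vote_disk of=/dev/null bs=8192 count=1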

6.3.2 Disks for ASM

- Identify two disks available for ASM (see 6.3.1 Raw Disks for CRS/Voting disks):


node1:root-/> lsdev -Ccdisk

- If these disks do not have a PVID, assign one and change the reserve_policy attribute for each disk dedicated to ASM, on each node of the cluster:

node1:root-/> chdev -l hdisk6 -a pv=yes
node1:root-/> chdev -l hdisk6 -a reserve_policy=no_reserve
...

- Change the owner, group, and permissions of the ASM disks on each node of the cluster:

node1:root-/> chown oracle.dba /dev/rhdisk6
node1:root-/> chown oracle.dba /dev/rhdisk7
node1:root-/> rsh node2
node2:root-/> chown oracle.dba /dev/rhdisk6
node2:root-/> chown oracle.dba /dev/rhdisk11
...
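To double-check one ASM disk (hdisk6 here is just the example name used above), you can display its PVID and its reserve_policy attribute on each node; the PVID is what lets you match the same physical disk across nodes even when the hdisk numbers differ:

node1:root-/> lspv | grep hdisk6
node1:root-/> lsattr -E -l hdisk6 -a reserve_policy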

6.3.3 Disks for HACMP implementation

6.3.3.1 Creating a concurrent volume group

The database files and/or the OCR and voting disks are stored on external concurrent disks.


Creating a concurrent Volume Group (VG)

(The original document illustrates this procedure with a diagram of the VG state on each node at every step. In summary: the VG is created concurrent-capable and varied on on node 1, then varied off; it is then imported on node 2 and varied on and off there as a test. The VG stays defined but offline on every node until HACMP is started, at which point HACMP varies it on in concurrent mode on all the nodes.)

- Check that the target disks are physically shared between the servers of the cluster and identify which ones will be used, as shown in 6.3.1 Raw Disks for CRS/Voting disks.

Note: Remember that the disk names can be different on each machine, depending on the other disks connected to it.
- Create, at the AIX level on the first machine (node1), a concurrent volume group, oradatavg:

node1:root-/> smit vg

Add a Volume Group
Type or select values in entry fields. Press Enter AFTER making all desired changes.
                                                              [Entry Fields]
  VOLUME GROUP name                                           [oradatavg]
  Physical partition SIZE in megabytes                         64
* PHYSICAL VOLUME names                                       [hdisk4]
  Activate volume group AUTOMATICALLY at system restart?       no            (see Note 1)
  Volume Group MAJOR NUMBER                                   [80]           (see Note 2)
  Create VG Concurrent Capable?                                yes
  Auto-varyon in Concurrent Mode?                              no

Note 1: Never choose YES for "Activate volume group automatically at system restart", nor for "Auto-varyon in concurrent mode". These tasks have to be managed by HACMP. The volume group just has to be created with the concurrent capability.

Note 2: You must choose the major number to be sure the volume group has the same major number on all the nodes (caution: before choosing this number, be sure it is free on all the nodes). The lvlstmajor command lists the first free major number on a machine.
- The major number for the oradatavg volume group is 80:

node1:root-/> ls -al /dev/oradatavg
crw-rw----   1 root   system   80, 2  Oct 12 13:39  /dev/oradatavg
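For reference, a command-line equivalent of the smit screen above might look as follows (a sketch only: -C creates the volume group concurrent-capable and -V sets the major number; the exact flags can vary with the AIX and CLVM levels installed, so check the mkvg man page on your system). Run lvlstmajor first on every node to confirm that major number 80 is free everywhere:

node1:root-/> lvlstmajor
node1:root-/> mkvg -C -y oradatavg -s 64 -V 80 hdisk4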

- Manually start (vary on) the volume group on node1:

node1:root-/> varyonvg oradatavg

- Create all the logical volumes (raw devices) you need for your database, including the raw devices for the OCR, voting and shared configuration disks.
- Stop oradatavg on node1:

node1:root-/> varyoffvg oradatavg

- Import the oradatavg volume group on the second machine (node2):

node2:root-/> smit vg

Import a Volume Group
Type or select values in entry fields. Press Enter AFTER making all desired changes.
                                                   [Entry Fields]
  VOLUME GROUP name                                [oradatavg]
* PHYSICAL VOLUME name                             [hdisk4]
  Volume Group MAJOR NUMBER                        [80]
  Make this VG Concurrent Capable?                  yes
  Make default varyon of VG Concurrent?             no

The physical volume name (hdisk) may not have the same number on both sides: check the PVID of the disk (lspv), because it is the only reliable and unique piece of information throughout the cluster. Be sure to use the same major number; this number has to be free on all the nodes.

Note: The import of a volume group resets the field "Activate volume group automatically at system restart" to YES, whereas it has to be set to NO. To fix it manually: chvg -a n oradatavg, or you can also modify this flag using smit vg.

- Varyon manually the volume group on node2 :


node2:root-/> varyonvg oradatavg

- Repeat the import of oradatavg on the third node. Do not forget to stop (vary off) the volume group on the previous node. The new volume group is now defined on all the machines of the cluster, with the concurrent-capable feature set on.
- Test the availability of the volume group on the nodes. The volume group is now defined on all the nodes of the cluster, with concurrent capability, but it is not yet varied on in concurrent mode: this is the job of HACMP. Before HACMP is up and running, the oradatavg volume group is defined on all the nodes but can be varied on only on a single machine at a time. Don't try to vary on oradatavg manually in concurrent mode; this task is devoted to HACMP only.
- To list the volume groups defined on the machine: lsvg
- To list the active volume groups: lsvg -o

- To switch the volume group from one node to the other one, do:
  varyoffvg oradatavg    (on the machine where it is varied on)
  varyonvg oradatavg     (on the other node)
- To list the volume group characteristics: lsvg oradatavg

VOLUME GROUP:   oradatavg                VG IDENTIFIER:   0033c67000004c00000000ff87432e75
VG STATE:       active                   PP SIZE:         64 megabyte(s)
VG PERMISSION:  read/write               TOTAL PPs:       79 (5056 megabytes)
MAX LVs:        512                      FREE PPs:        79 (5056 megabytes)
LVs:            0                        USED PPs:        0 (0 megabytes)
OPEN LVs:       0                        QUORUM:          2
TOTAL PVs:      1                        VG DESCRIPTORS:  2
STALE PVs:      0                        STALE PPs:       0
ACTIVE PVs:     1                        AUTO ON:         no
Concurrent:     Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:        Non-Concurrent
MAX PPs per PV: 1016                     MAX PVs:         128
LTG size:       128 kilobyte(s)          AUTO SYNC:       no
HOT SPARE:      no                       BB POLICY:       relocatable

You should have a similar output on all the nodes.

Note: At this point of the disk setup, the VG Mode is not concurrent yet. But it must be concurrent after HACMP has been started!


6.3.3.2 Creating the concurrent logical volumes


Adding a Logical Volume in an active Volume Group (VG), via the Logical Volume Manager

(The original document illustrates this with a diagram of the VG state on each node. In summary: stop HACMP, vary the VG on on one node, create the logical volume with mklv (at this point the new LV is unknown on the other nodes), vary the VG off, then export and re-import the VG on each of the other nodes so that the new LV becomes known there, and finally restart HACMP.)

Adding a Logical Volume in an active Volume Group (VG), via HACMP

(With HACMP up and the VG active, the logical volume can be added directly through HACMP: smit hacmp -> Cluster manager -> Concurrent LVM -> Concurrent Logical Volumes -> Add a concurrent Logical Volume. No manual export/import sequence is needed in this case.)

In a raw-device implementation, each logical volume created is used as a raw device and represents one datafile (or redo log or control file). We will need one raw device for the OCR disk and another one for the voting disk. See table #1 below for the list of logical volumes required for a minimal database implementation.

Table #1. A minimal database implementation requires the following raw devices:

Size     Logical volume name   Raw device name        Purpose
100 MB   ocr_disk              /dev/rocr_disk         OCR disk
20 MB    vote_disk             /dev/rvote_disk        Voting disk
512 MB   rac_system            /dev/rrac_system       SYSTEM tablespace
512 MB   rac_undotbs01         /dev/rrac_undotbs1     UNDO tablespace (instance #1)
512 MB   rac_undotbs02         /dev/rrac_undotbs2     UNDO tablespace (instance #2)
200 MB   log11                 /dev/rlog11            Redo log, thread #1, group #1
200 MB   log12                 /dev/rlog12            Redo log, thread #1, group #2
200 MB   log21                 /dev/rlog21            Redo log, thread #2, group #1
200 MB   log22                 /dev/rlog22            Redo log, thread #2, group #2
32 MB    rac_control01         /dev/rrac_control01    Control file #1
32 MB    rac_control02         /dev/rrac_control02    Control file #2
32 MB    rac_control03         /dev/rrac_control03    Control file #3
32 MB    rac_spfile            /dev/rrac_spfile       For the spfile (if used)
128 MB   srvconfig             /dev/rsrvconfig        For the srvctl tool (if used)
512 MB   rac_data              /dev/rrac_data         DATA tablespace
512 MB   rac_index             /dev/rrac_index        INDEX tablespace
256 MB   rac_temp              /dev/rrac_temp         TEMP tablespace

Note: The redo log and undo logical volumes have to be dedicated to each instance.

A script is provided in Appendix A (Logical volumes creation) to create the volume group and the logical volumes specified above. A sample command from this appendix (the trailing number is the size in logical partitions):

node1:root-/> mklv -y'rac_systemlv' -t'jfs2' oradatavg 8

The volume group oradatavg is built with a physical partition (PP) size of 64 MB, for example. The size of a logical volume is expressed in number of PPs, not in kilobytes. Once a logical volume new_lv is created on a node, two new entries appear in the /dev directory:
- /dev/new_lv, which is normally used by the LVM for file systems;
- /dev/rnew_lv, where the r stands for raw device. This is the device name to use with Oracle for a raw-device implementation.
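For instance (an illustrative calculation consistent with table #1, not copied from Appendix A): with a 64 MB PP size, a 512 MB logical volume needs 512 / 64 = 8 logical partitions, while a 32 MB control-file volume fits in a single partition:

node1:root-/> mklv -y'rac_data' -t'jfs2' oradatavg 8
node1:root-/> mklv -y'rac_control01' -t'jfs2' oradatavg 1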

Important, for a raw-device implementation: check that the owner of /dev/rnew_lv is oracle, with group dba, and that oracle has the read and write privileges. If needed, do as root (on all the nodes):

node1:root-/> chown oracle:dba /dev/*rac_*
node1:root-/> chmod go+rw /dev/*rac_*

Once new logical volumes are created or modified on a node, the other nodes have to update their ODM (the AIX internal repository) by retrieving the information from the concurrent disk:

node1:root-/> redefinevg -d <disk name> oradatavg

This can also be done with the following sequence:

node1:root-/> varyoffvg oradatavg
node1:root-/> exportvg oradatavg
node1:root-/> importvg -V <major number> -y oradatavg <disk name>
node1:root-/> varyonvg oradatavg

The same commands have to be executed on the other nodes as soon as an update has been made on one node. It is important for HACMP that the ODM databases of all the nodes have the same level of information about the logical volumes of the concurrent disks. This point is checked by HACMP during the synchronization process.

You should have a similar output on all the nodes.


node1:root-/> lsvg -l oradatavg   (raw-devices implementation, according to table #1)
oradatavg:
LV NAME          TYPE   LPs  PPs  PVs  LV STATE     MOUNT POINT
ocr_disk         jfs2   2    2    1    open/syncd   N/A
vote_disk        jfs2   2    2    1    open/syncd   N/A
rac_system       jfs2   8    8    1    open/syncd   N/A
undotbs01        jfs2   8    8    1    open/syncd   N/A
undotbs02        jfs2   8    8    1    open/syncd   N/A
log11            jfs2   4    4    1    open/syncd   N/A
log12            jfs2   4    4    1    open/syncd   N/A
log21            jfs2   4    4    1    open/syncd   N/A
log22            jfs2   4    4    1    open/syncd   N/A
rac_control01    jfs2   1    1    1    open/syncd   N/A
rac_control02    jfs2   1    1    1    open/syncd   N/A
rac_control03    jfs2   1    1    1    open/syncd   N/A
rac_srvconfig    jfs2   2    2    1    open/syncd   N/A
rac_data         jfs2   8    8    1    open/syncd   N/A
rac_index        jfs2   8    8    1    open/syncd   N/A
rac_temp         jfs2   4    4    1    open/syncd   N/A


6.4 USERS AND GROUPS

Note: This setup has to be done on all the nodes of the cluster. Be sure that all the group and user IDs (203, 204 and 205 in our case) are identical across the nodes.

- smit group to create the following groups:
  dba        Primary group for the oracle user.
  hagsuser   For high availability (if not already created, and if HACMP is used).
  oinstall   The Oracle inventory group. This group is not mandatory. If it exists, it will be the group owner of the Oracle code files. This group is a secondary group for the oracle user.
- smit user to create the user:
  oracle     Owner of the database.

The oracle user must have dba as its primary group, and oinstall and hagsuser as secondary groups. Also add the secondary group hagsuser (if HACMP is used) to the root account.

Verification: check that the file /etc/group contains lines such as the following (the numbers may be different):

hagsuser:!:203:oracle,root
dba:!:204:oracle
oinstall:!:205:oracle

- Check whether there are AIX default limitations (especially on the file size):
  File size limitation: ulimit -f
  All limitations: ulimit -a

See also the file /etc/security/limits, which shows the limits for each user. The default stanza applies to all new users to be created. This file can be modified by root with vi.

The default limits should be set to unlimited (-1 in the file /etc/security/limits), except for core. To turn a user limitation to unlimited, use smit users.

- Set a password for the oracle user, the same on all the nodes of the cluster, with the command: passwd oracle
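Command-line equivalents of these smit screens could look like the following sketch; the group IDs are the example values quoted above, while the oracle user ID (301 here) is purely illustrative and simply has to be identical on all the nodes. Remember to also add hagsuser to root's secondary groups (for example through smit user) when HACMP is used:

node1:root-/> mkgroup id=203 hagsuser
node1:root-/> mkgroup id=204 dba
node1:root-/> mkgroup id=205 oinstall
node1:root-/> mkuser id=301 pgrp=dba groups=dba,oinstall,hagsuser home=/home/oracle oracle
node1:root-/> passwd oracle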


6.5 CONFIGURE KERNEL PARAMETERS AND SHELL LIMITS

Configuring Shell Limits, System Configuration, and Network Tuning Parameters (extract from the Oracle documentation)

Note: The parameter and shell limit values shown in this section are recommended values only. For production database systems, Oracle recommends that you tune these values to optimize the performance of the system. Refer to your operating system documentation for more information about tuning kernel parameters.

Note: Oracle recommends that you set shell limits, system configuration parameters, and network tuning parameters as described in this section on all cluster nodes.
6.5.1 Configure Shell Limits

Verify that the shell limits shown in the following table are set to the values shown. The procedure following the table describes how to verify and set the values.

Shell Limit (as shown in smit)   Recommended Value
Soft FILE size                   -1 (Unlimited)
Soft CPU time                    -1 (Unlimited)   (Note: this is the default value)
Soft DATA segment                -1 (Unlimited)
Soft STACK size                  -1 (Unlimited)

To view the current value specified for these shell limits, and to change them if necessary, follow these steps:
1. Enter the following command:
   # smit chuser
2. In the User NAME field, enter the user name of the Oracle software owner, for example oracle.
3. Scroll down the list and verify that the value shown for the soft limits listed in the previous table is -1. If necessary, edit the existing value.
4. When you have finished making changes, press F10 to exit.
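Non-interactively, the same soft limits can be set and verified with chuser/lsuser (a sketch; -1 means unlimited, and the commands must be run as root on every node):

node1:root-/> chuser fsize=-1 cpu=-1 data=-1 stack=-1 oracle
node1:root-/> lsuser -a fsize cpu data stack oracle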


6.5.2 Configure System Configuration Parameters

Verify that the maximum number of processes allowed per user is set to 2048 or greater.

Note: For production systems, this value should be at least 128 plus the sum of the PROCESSES and PARALLEL_MAX_SERVERS initialization parameters for each database running on the system.

1. Enter the following command:
   # smit chgsys
2. Verify that the value shown for "Maximum number of PROCESSES allowed per user" is greater than or equal to 2048. If necessary, edit the existing value.
3. When you have finished making changes, press F10 to exit.
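The same parameter can also be checked and changed without smit, using lsattr and chdev on sys0 (a sketch; run it on every node):

node1:root-/> lsattr -E -l sys0 -a maxuproc
node1:root-/> chdev -l sys0 -a maxuproc=2048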

6.5.3 Configure Network Tuning Parameters

Verify that the network tuning parameters shown in the following table are set to the values shown, or to higher values. The procedure following the table describes how to verify and set the values.

Network Tuning Parameter   Recommended Value
ipqmaxlen                  512
rfc1323                    1
sb_max                     2*655360
tcp_recvspace              65536
tcp_sendspace              65536
udp_recvspace              655360
                           Note: The recommended value of this parameter is 10 times the value of the udp_sendspace parameter. The value must be less than the value of the sb_max parameter.
udp_sendspace              65536
                           Note: This value is suitable for a default database installation. For production databases, the minimum value for this parameter is 4 KB plus the value of the database DB_BLOCK_SIZE initialization parameter multiplied by the value of the DB_MULTIBLOCK_READ_COUNT initialization parameter: (DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB

To view the current value specified for these parameters, and to change them if necessary, follow these steps :


1. To check the current values of the network tuning parameters, enter commands similar to the following:
   # /usr/sbin/no -a | more
2. If you must change the value of any parameter, enter the following command to determine whether the system is running in compatibility mode:
   # /usr/sbin/lsattr -E -l sys0 -a pre520tune
   If the system is running in compatibility mode, the output is similar to the following, showing that the value of the pre520tune attribute is enable:
   pre520tune enable Pre-520 tuning compatibility mode True
3. If the system is running in compatibility mode, follow these steps to change the parameter values:
   a. Enter commands similar to the following to change the value of each parameter:
      # /usr/sbin/no -o parameter_name=value
      For example:
      # /usr/sbin/no -o udp_recvspace=655360
   b. Add entries similar to the following to the /etc/rc.net file for each parameter that you changed in the previous step:
      if [ -f /usr/sbin/no ] ; then
        /usr/sbin/no -o udp_sendspace=65536
        /usr/sbin/no -o udp_recvspace=655360
        /usr/sbin/no -o tcp_sendspace=65536
        /usr/sbin/no -o tcp_recvspace=65536
        /usr/sbin/no -o rfc1323=1
        /usr/sbin/no -o sb_max=2*655360
        /usr/sbin/no -o ipqmaxlen=512
      fi
      By adding these lines to the /etc/rc.net file, the values persist when the system restarts.
4. If the system is not running in compatibility mode, enter commands similar to the following to change the parameter values:
   For the ipqmaxlen parameter:
   /usr/sbin/no -r -o ipqmaxlen=512
   For the other parameters:
   /usr/sbin/no -p -o parameter=value

Note: If you modify the ipqmaxlen parameter, you must restart the system. These commands modify the /etc/tunables/nextboot file, causing the attribute values to persist when the system restarts.
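To verify the resulting values after a change (and after the reboot required when ipqmaxlen is modified), a quick filter such as the following can be used (a sketch):

# /usr/sbin/no -a | egrep "ipqmaxlen|rfc1323|sb_max|tcp_recvspace|tcp_sendspace|udp_recvspace|udp_sendspace"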

6.6 NETWORK CONFIGURATION

Set up user equivalence for the oracle account, to enable the rsh, rcp and rlogin commands. On each node you should have the entries shown below in /etc/hosts, in /etc/hosts.equiv, and in $HOME/.rhosts of the root and oracle home directories.

node1:root-/> pg /etc/hosts
# Public Network
10.2.12.81      node1
10.2.12.82      node2
10.2.12.83      node3
# Virtual IP addresses
10.2.12.181     node1_vip
10.2.12.182     node2_vip
10.2.12.183     node3_vip
# Interconnect RAC & GPFS
10.10.12.81     node1_gpfs
10.10.12.82     node2_gpfs
10.10.12.83     node3_gpfs

node1:root-/> pg /etc/hosts.equiv
node1           root
node2           root
node3           root
node1_vip       root
node2_vip       root
node3_vip       root
node1_gpfs      root
node2_gpfs      root
node3_gpfs      root
node1           oracle
node2           oracle
node3           oracle
node1_vip       oracle
node2_vip       oracle
node3_vip       oracle
node1_gpfs      oracle
node2_gpfs      oracle
node3_gpfs      oracle

node1:root-/> pg $HOME/.rhosts
(same entries as /etc/hosts.equiv: each node, VIP and GPFS name, once for root and once for oracle)

node1:root-/> su - oracle
node1:oracle-/home/oracle> pg $HOME/.rhosts
(same entries as /etc/hosts.equiv)

Note: It is possible, but not advised for security reasons, to put a "+" in the hosts.equiv and .rhosts files.

Test that the user equivalence is correctly set up (node2 is the second cluster machine). You are logged on node1 as root:
node1:root-/> rcp /tmp/toto node2:/tmp/toto
node1:root-/> rlogin node2                       (=> no password asked)
node2:root-/> su - oracle
node2:oracle-/home/oracle> rsh node3 date
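To exercise the equivalence in every direction at once, a small double loop can be run from any node, first as root and then as oracle (a sketch; every combination must print the remote date without prompting for a password):

for src in node1 node2 node3; do
  for dst in node1 node2 node3; do
    echo "== $src -> $dst =="
    rsh $src rsh $dst date
  done
done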

6.7 ORACLE ENVIRONMENT SETUP

Oracle environment : $HOME/.profile file in Oracles home directory


Installing Oracle 10g RAC on IBM pSeries with AIX 5L page 40/ 40

export ORACLE_BASE=/oh10g
export ORACLE_HOME=$ORACLE_BASE/db10g
export ORACLE_CRS=$ORACLE_BASE/crs
export PATH=$ORACLE_HOME/bin:$PATH
export AIXTHREAD_SCOPE=S
umask 022

To be done on each node.


The Oracle code can be located on an internal disk and propagated to the other machines of the cluster. The Oracle Universal Installer manages the cluster-wide installation, which is done only once. Regular file systems are used for the Oracle code. On both nodes, create the file system for the Oracle code. This 4 GB file system is generally located on an internal disk.

- To list the internal disks:

node1:root-/> lsdev -Cc disk | grep SCSI

- Create a volume group called oraclevg :


node1:root-/> mkvg -f -y'oraclevg' -s'32' hdisk1

- Create a 4GB file system /oracle in the previous volume group (large file enabled) :
node1:root-/> crfs -v jfs2 -a bf=true -g'oraclevg' -a size='8388608' -m'/oracle' -A'yes' -p'rw' -t'no' -a nbpi='8192' -a ag='64'
node1:root-/> mount /oracle
node1:root-/> chown oracle:dba /oracle
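A quick sanity check that the new file system is in place, using standard AIX commands (oraclevg and /oracle are the names chosen above):

node1:root-/> lsvg -l oraclevg
node1:root-/> df -m /oracle
node1:root-/> ls -ld /oracle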

The Oracle code can also be located on shared concurrent disks on a GPFS file-system.

7 GPFS IMPLEMENTATION

7.1 INSTALLING GPFS

For the latest IBM GPFS information, see:
http://publib.boulder.ibm.com/clresctr/library/gpfs_faqs.html

GPFS 2.3.0.1 (GPFS 2.3 plus APAR IY63969, or later): all new Oracle and GPFS customers on AIX 5L v5.2 should use GPFS 2.3.0.1 or later. Current GPFS 2.1 and 2.2 customers are encouraged to upgrade to GPFS 2.3.0.1 or later.

For more information regarding GPFS 2.3 and Oracle, see Metalink Note 302806.1:
Doc ID: Note:302806.1
Subject: IBM General Parallel File System (GPFS) and Oracle RAC on AIX 5L and IBM eServer pSeries
Type: BULLETIN          Status: PUBLISHED          Content Type: TEXT/XHTML
Creation Date: 25-MAR-2005          Last Revision Date: 31-MAR-2005


In the directory containing all the filesets, check that the hidden file .toc exists. If it does not, run inutoc . from that directory. If you are using a CD-ROM as the source for this install, this file always exists on the installation media.

smit install
specify the directory containing the filesets

Press F4 to list the filesets to install, rather than choosing "all latest". To be sure of what you are doing, you can set the field "preview only" before proceeding to the real install.

Filesets: mmfs.xx.xxx

See Appendix H: Filesets to be installed on the machines of the cluster, paragraph GPFS, to check that all the necessary filesets have been installed. This appendix provides the result of the command:
lslpp -L | grep mmfs

Post-install task: add the GPFS binaries to root's PATH.


export PATH=$PATH:/usr/lpp/mmfs/bin

Note: the steps listed here are for a GPFS implementation with GPFS version 2.2; the specifics of version 2.3 are added where necessary.

7.2 CREATING AN RSCT PEER DOMAIN

Note for GPFS 2.3: an RSCT peer domain is not needed. Skip this paragraph and go to 7.3 Creating a GPFS cluster.

- Prepare the nodes to be part of the peer domain with the preprpnode command. You must issue the command for each node in the peer domain, on every node. On node1:
node1:/> preprpnode node1
node1:/> preprpnode node2
node1:/> preprpnode node3

On node2:
node2:/> preprpnode node1
node2:/> preprpnode node2
node2:/> preprpnode node3
..

From this point on, the commands are issued only once, on one of the nodes in the peer domain.

- Create the node list file, which will contain the hostnames that are part of the peer domain. Example:

node1:/> cat /var/mmfs/etc/rpd_node.list
node1
node2
node3

- Create a new peer domain. Example:

node1:/> mkrpdomain -f /var/mmfs/etc/rpd_node.list -V demo10g

- Start the peer domain:


node1:/> startrpdomain demo10g

Check the configuration and verify that all nodes are online:



node1:/> lsrpdomain
Name     OpState  RSCTActiveVersion  MixedVersions  TSPort  GSPort
demo10g  Online   2.3.3.0            No             12347   12348

node1:/> lsrpdnode
Name   OpState  RSCTVersion
node1  Online   2.3.3.0
node2  Online   2.3.3.0
node3  Online   2.3.3.0

7.3 CREATING A GPFS CLUSTER

Follow these steps to configure GPFS:

- Include the path of the GPFS binaries in root's PATH, if not already done:

export PATH=$PATH:/usr/lpp/mmfs/bin

- Create the GPFS node file, which will contain the IP hostnames of the GPFS interconnect network.

Example:
node1:/> cat /var/mmfs/etc/node.list
node1_gpfs
node2_gpfs
node3_gpfs

Note for GPFS 2.3:

node1:/> cat /var/mmfs/etc/node.list
node1_gpfs:1:quorum
node2_gpfs:2:quorum
node3_gpfs:3:quorum

- Create the GPFS cluster:

node1:/> mmcrcluster -t lc -n /var/mmfs/etc/node.list -p node1_gpfs -s node2_gpfs

Note for GPFS 2.3:

node1:/> mmcrcluster -p node1_gpfs -n /var/mmfs/etc/node.list -C MyCluster -A -s node2_gpfs

- Create the nodeset:

node1:/> mmconfig -n /var/mmfs/etc/node.list -A -C MyCluster -D /tmp

Note for GPFS 2.3: this step is not needed (this command no longer exists in 2.3).

You can check the GPFS cluster creation with mmlscluster and the cluster configuration with mmlsconfig. In our example you see:
node1:/> mmlscluster

GPFS cluster information
  GPFS cluster type:          lc
  GPFS cluster id:            gpfs1083860813
  RSCT peer domain name:      demo10g
  Remote shell command:       /usr/bin/rsh
  Remote file copy command:   /usr/bin/rcp

GPFS system data repository servers:
-------------------------------------
  Primary server:    node1_gpfs
  Secondary server:  node2_gpfs

Nodes for nodeset MyCluster:
-------------------------------------------------
  1  node1_gpfs  10.10.12.81  node1_gpfs
  2  node2_gpfs  10.10.12.82  node2_gpfs
  3  node3_gpfs  10.10.12.83  node3_gpfs

Note for GPFS 2.3: some lines change or disappear in the previous output.

node1:/> mmlsconfig
Configuration data for nodeset MyCluster:
------------------------------------
pagepool 80M
dataStructureDump /tmp/mmfs
multinode yes
autoload yes
useSingleNodeQuorum no
wait4RVSD no
comm_protocol TCP
clusterType hacmp
group Gpfs.set1
recgroup GpfsRec.MyCluster

File systems in nodeset MyCluster:
-----------------------------
(none)

Note for GPFS 2.3: some lines change or disappear in the previous output.

- Start GPFS:

node1:/> mmstartup -C MyCluster
Wed Sep 02 17:38:26 NFT 2004: mmstartup: Starting GPFS ...
node1_gpfs: 0513-059 The mmfs Subsystem has been started. Subsystem PID is 65653.
node2_gpfs: 0513-059 The mmfs Subsystem has been started. Subsystem PID is 48626.
node3_gpfs: 0513-059 The mmfs Subsystem has been started. Subsystem PID is 58734.

Note for GPFS 2.3: some lines change or disappear in the previous output.

You can check that the GPFS subsystem is started:

node1:/> lssrc -s mmfsd
Subsystem    Group    PID      Status
mmfsd        aixmm    65653    active

Note for GPFS 2.3: do not check the status of the subsystem with this command; even if GPFS is started, the output of lssrc will show an inoperative status.

7.4 CREATING A GPFS FILESYSTEM

- Create a disk descriptor file containing the disk devices that will be shared in the GPFS cluster.

Example :
node1:/> cat /var/mmfs/etc/nsddisk.list

hdisk12:node1_gpfs:node2_gpfs:dataAndMetadata:
hdisk15:node1_gpfs:node2_gpfs:dataAndMetadata:

- Create the network shared disks with the mmcrnsd command:


node1:/> mmcrnsd -F /var/mmfs/etc/nsddisk.list

-F  <list of disks to use, with descriptors>

You can list the NSDs that were created with the mmlsnsd command:

node1:/> mmlsnsd
File system    Disk name   Primary node   Backup node
-----------------------------------------------------
(free disk)    gpfs1nsd    node1_gpfs     node2_gpfs
(free disk)    gpfs2nsd    node1_gpfs     node2_gpfs

- Create the file system with mmcrfs:

node1:/> mmcrfs /oradata_gpfs /dev/oradata_gpfs -F /var/mmfs/etc/nsddisk.list -A yes -B 512k -n 4 -N 80000

-F  <list of disks and descriptors> (the descriptor file, as updated by mmcrnsd)
-A  Automatically mount the file system at GPFS startup
-B  Block size and stripe size
-n  Estimated number of nodes
-N  Number of i-nodes

node1:/> mmlsnsd
File system          Disk name   Primary node   Backup node
-----------------------------------------------------------
/dev/oradata_gpfs    gpfs1nsd    node1_gpfs     node2_gpfs
/dev/oradata_gpfs    gpfs2nsd    node1_gpfs     node2_gpfs

node1:/> cat /etc/filesystems | grep -p /oradata_gpfs
/oradata_gpfs:
        dev      = /dev/oradata_gpfs
        vfs      = mmfs
        nodename =
        mount    = false
        type     = mmfs
        account  = false

- Mount the newly created file system:

node1:/> mount /oradata_gpfs

- Set the permissions and the ownership of the file system:

node1:/> chown oracle:dba /oradata_gpfs
node1:/> chmod go+rw /oradata_gpfs

Note:
- The GPFS block size should be around 256K, 512K or 1024K.
- The GPFS block size is the file system stripe size.
- It is not that important for regular database I/O, since Direct I/O is used.
- It is very important for operations that increase data file size.

Note for GPFS 2.3: if you have a 2-node cluster you must add a tie-breaker disk, so you should have kept a disk dedicated to the tie-breaker. To do this:

- create a disk descriptor file:
node1:/> cat /var/mmfs/etc/nsddisk_tiebreaker.list
hdisk17:node1_gpfs:node2_gpfs::

- create the NSD:


node1:/> mmcrnsd -F /var/mmfs/etc/nsddisk_tiebreaker.list

- do not mount a GPFS file system on this network shared disk;
- if GPFS is started, stop it with mmshutdown;
- define the newly created NSD as the tie-breaker disk:

node1:/> mmchconfig tiebreakerDisks=<name_of_the_disk>

(you can see the name of the disk in the disk descriptor file after it has been updated by the mmcrnsd command)

Tuning suggestions

- aioservers: AIO should be enabled. The general rule for heavy I/O environments is to initially set the maxservers value to at least 10 * (number of disks accessed asynchronously); the minservers value can be set to half of the maxservers value. For an 8-disk GPFS file system, an example of the commands to run would be:

node1:/> chdev -l aio0 -a maxservers='80'
node1:/> chdev -l aio0 -a minservers='40'
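You can review the asynchronous I/O settings before and after the change with a standard AIX query (aio0 is the device changed above):

node1:/> lsattr -El aio0 | grep servers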

See the Tuning Asynchronous Disk I/O section in the AIX 5L Version 5.1 Performance Management Guide for more discussion and details.

- pagepool: the pagepool is used to cache user data and indirect blocks. It is the GPFS pagepool mechanism that allows GPFS to implement read as well as write requests asynchronously. Increasing the size of the pagepool increases the amount of pinned memory available for the GPFS buffer cache. The default value for pagepool is 20 MB and the maximum allowable value is 512 MB. Applications that may benefit from a larger pagepool (compared to the default) include those that reuse data, those that have a random I/O pattern, and/or those that have a higher per-client performance requirement. The size of the pagepool will depend on the "working set" of I/O data that needs to be cached. For instance, to change pagepool to 100 MB:
node1:/> mmchconfig pagepool=100M
node1:/> mmlsconfig
Configuration data for nodeset MyCluster:
------------------------------------
pagepool 100M
dataStructureDump /tmp/mmfs
multinode yes
autoload yes
useSingleNodeQuorum no
wait4RVSD no
comm_protocol TCP
clusterType hacmp
group Gpfs.set1
recgroup GpfsRec.MyCluster

File systems in nodeset MyCluster:
-----------------------------
/oradata_gpfs

- ipqmaxlen: the ipqmaxlen network option controls the number of incoming packets that can exist on the IP interrupt queue. Since both GPFS and IBM Virtual Shared Disk use IP, the default value of 128 is often insufficient. This is especially important if your virtual shared disks are configured over IP. The recommended setting is 512.

node1:/> no -o ipqmaxlen=512
node1:/> rsh node2 no -o ipqmaxlen=512


8 HACMP IMPLEMENTATION

8.1 INSTALLING HACMP

HACMP (High Availability Cluster Multi-Processing) is a product which provides high availability on a cluster of machines (from 2 to 32 nodes). HACMP/ES is the enhanced version of HACMP (Enhanced Scalability). The HACMP/CRM (Concurrent Resource Manager) layer includes the CLVM, which enables the concurrent logical volume manager. Throughout this cookbook, HACMP means HACMP/ES/CRM.

The instances of the same parallel database have concurrent access to the same external disks: it is concurrent access, not shared access. In this Oracle 10g RAC installation guide, the purpose is not to set up every HACMP parameter; we will focus on the concurrent volume groups, which is enough to install and run Oracle 10g RAC.

In this cookbook, HACMP 5.1 ptf 5 is used with RSCT 2.3.4. RSCT (RS/6000 Cluster Technology) provides the base filesets used by HACMP to manage the cluster. Exactly the same filesets have to be installed on all the machines of the cluster (node1, node2 & node3).

All the official HACMP documentation is available at: http://hacmp.aix.dfw.ibm.com/

How to install HACMP using smit install: in the directory containing all the filesets, check that the hidden file .toc exists. If it does not, run inutoc . from that directory. If you are using a CD-ROM as the source for this install, this file always exists on the installation media.

smit install
specify the directory containing the filesets

Press F4 to list the filesets to install, rather than choosing "all latest". To be sure of what you are doing, you can set the field "preview only" before proceeding to the real install.

See 20.2.1 HACMP 5.1 filesets on page 172 and 20.2.2 RSCT 2.3.4 filesets for HACMP Implementation on page 172 to check that all the necessary filesets have been installed. This appendix provides the result of the commands:

node1:/> lslpp -l | grep cluster
node1:/> lslpp -l | grep rsct

During the installation, select all the common filesets and the HACMP support one, but not the PSSP support one.

8.2 POST INSTALL TASKS

You must have at least the filesets listed in 20.2.1 HACMP 5.1 filesets on page 172 and 20.2.2 RSCT 2.3.4 filesets for HACMP Implementation on page 172, with a release number equal to or higher than those listed.

node1:/> lslpp -l | grep cluster
node1:/> lslpp -l | grep rsct

Add the following directories to the PATH environment variable (see the example below):
/usr/es/sbin/cluster
/usr/es/sbin/cluster/utilities
/usr/es/sbin/cluster/sbin
/usr/es/sbin/cluster/diag
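A minimal sketch of how to do this in root's $HOME/.profile (assuming a ksh-style profile; adjust to your environment):

export PATH=$PATH:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/diag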


After both RSCT and HACMP have been installed successfully on all the nodes, all the machines have to be rebooted before going on with the HACMP configuration.

8.3 CREATING THE HACMP CLUSTER

Main steps for creating an HACMP configuration. Consult the HACMP documentation, available at http://hacmp.aix.dfw.ibm.com/

All these steps have to be done using one node only. During the synchronization process, HACMP will copy the whole configuration to every node in the cluster. This functionality is called SPOC (Single Point Of Control).

Every time you modify the topology or resources of the cluster, don't forget to synchronize. Don't try to make cluster configuration changes at the same time on two nodes.
Configuring HACMP using Smit menus:

Cluster Configuration
  Extended Topology
    Configure HACMP Cluster
      Add/Change/Show an HACMP Cluster
    Configure HACMP Nodes
      Add a Node to the HACMP Cluster
    Configure HACMP Networks
      Add an IP-based Network to the HACMP Cluster
    Configure HACMP Communication Interfaces/Devices
      Add Communication Interfaces/Devices
    Configure HACMP Networks
      Change/Show a Network in the HACMP Cluster
  HACMP Verification and Synchronization
  Discover HACMP-related Information from Configured Nodes
  Extended Resource Configuration
    HACMP Extended Resource Group Configuration
      Add a Resource Group
      Change/Show Resources and Attributes for a Resource Group
  HACMP Verification and Synchronization


On one of the nodes of the cluster, configure the cluster: run smit hacmp and follow the screens. First of all, define the cluster hardware topology, which includes the nodes participating in the cluster, the network, and the network interfaces.


In the step below, you give an identifier and a name to your cluster.


Configure cluster nodes: here, you enter the names of the nodes participating in the cluster. Those names are the hostnames, not the names planned for the interconnect network (in our case node1, node2 and node3).


Configure the network: in the case of the implementation of a concurrent volume group, only a private network is needed. It is not necessary to define boot, service and standby addresses as in cascading or rotating HACMP configurations. In RAC environments, you will need a high-performance private network between the machines participating in the RAC cluster. The network type of this interconnect can be HPS (High Performance Switch) or Ethernet (use a point-to-point Gigabit Ethernet, for example).


Configure the adapters: there must be one adapter per machine. This adapter is configured for the interconnect network. It is not the hostname.


Synchronize the cluster topology. It is better, but not mandatory, that HACMP is stopped when you start the synchronization.

8.4 CREATING THE HACMP RESOURCE

The second configuration step is to define the concurrent volume group, which has to be varied on by HACMP. This is called the resource.

Discover concurrent resources


Define a resource group: the node relationship must be concurrent, and all participating nodes must be entered.


Be careful to enter the concurrent volume group name in the line Concurrent Volume Group, and not Volume Group!

Synchronize the cluster resources

Important: your cluster must be synchronized after each new modification (topology or resource).

To check your HACMP cluster configuration, see 19 Appendix D: HACMP Cluster verification output, page 168.

8.5 STARTING HACMP

Is HACMP correctly configured? Checklist:
- Check whether the concurrent volume group is active on both nodes. The concurrent volume group is managed by HACMP: when HACMP is stopped, the volume group is not available (varied off); when HACMP is started, the volume group is varied on in concurrent access mode.
- To check the available volume groups, run lsvg -o (see the example below). In the command output, you should see rac92_cvg only when HACMP is up and running.
- If the volume group is available on one of the nodes while HACMP is stopped, run varyoffvg rac92_cvg on this node.

Do the following on each node of the cluster.
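A minimal check sequence to run on each node, using only the commands named above (rac92_cvg is the concurrent volume group of this example):

node1:root-/> lsvg -o
node1:root-/> varyoffvg rac92_cvg      (only if the volume group is varied on while HACMP is stopped)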


The lock manager does not have to be started: the locks are managed by Oracle.

Is HACMP up and running? When you get OK at the previous smit screen, this does not mean that HACMP is running yet; it means that the start process is engaged. Because HACMP takes a long time to come up (synchronization between the nodes), wait for up to 10 minutes. If after this time HACMP is not running, there is a problem and you have to look into the logs.

HACMP logs: /tmp/hacmp.out on each node. A useful command: tail -f /tmp/hacmp.out

clstat, xclstat: programs displaying the state of the HACMP cluster and of every participating node.

You can also check the HAGS socket and the cluster sub-systems.

HAGS socket
The HAGS socket needs to be writable by "oracle", and the "cldomain" executable needs to be executable by "oracle". By configuring the group and permissions for the "grpsvcsdsocket.<domain>" file, the instance will be able to communicate with HAGS and the Oracle instance will mount. On all the nodes of the cluster, perform the following tasks:

- check that the hagsuser group exists, else create it
- place "oracle" into the "hagsuser" group
- change the permissions on the "cldomain" executable:
  # chmod a+x /usr/sbin/cluster/utilities/cldomain
- change the group to "hagsuser" for the "grpsvcsdsocket.rac92_cluster" socket:
  # chgrp hagsuser /var/ha/soc/grpsvcsdsocket.rac92_cluster
- change the group permissions for the "grpsvcsdsocket.rac92_cluster" socket:
  # chmod g+w /var/ha/soc/grpsvcsdsocket.rac92_cluster
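For the first two tasks, a minimal sketch using standard AIX user-administration commands (adapt to your site's user management tools):

# mkgroup hagsuser                    (create the group if it does not already exist)
# chgrpmem -m + oracle hagsuser       (add the oracle user to the hagsuser group)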

You should have something like the following:

# ls -l /var/ha/soc
total 4
srw-rw-rw-   1 root  haemrm     0 Jun 19 16:58 em.clsrv.rac92_cluster
srw-rw----   1 root  haemrm     0 Jun 19 16:58 em.rmsrv.rac92_cluster
drwxrwxrwx   2 root  system   512 Jun 19 17:51 grpsvcs.clients.rac92_cluster
srw-rw-rw-   1 root  hagsuser   0 Jun 19 16:57 grpsvcsdsocket.rac92_cluster
drwxrwx---   2 root  haemrm   512 Jun 19 16:57 haem
drwxr-x---   2 root  system   512 Apr 02 23:30 hats
drwxr-xr-x   2 root  system   512 Jun 19 16:57 topsvcs

Daemons, etc. Execute the following command:

lssrc -a | egrep 'svcs|ES'

It should give the following output:

topsvcs      topsvcs    65698   active
grpsvcs      grpsvcs    63872   active
grpglsm      grpsvcs    68862   active
emsvcs       emsvcs     69132   active
emaixos      emsvcs     62336   active
clstrmgrES   cluster    70086   active
clsmuxpdES   cluster    65306   active

You also have another way to see whether HACMP is up: list the online volume groups. If your concurrent volume group oradatavg is varied on on all the nodes, all is OK! This is the only use of HACMP for RAC databases: providing concurrent access to disks.

To list the online volume groups, run lsvg -o on each node. To see the details of the volume group, run lsvg oradatavg:

VOLUME GROUP:   oradatavg                VG IDENTIFIER:   0033c67000004c00000000ff87432e75
VG STATE:       active                   PP SIZE:         64 megabyte(s)
VG PERMISSION:  read/write               TOTAL PPs:       79 (5056 megabytes)
MAX LVs:        512                      FREE PPs:        73 (4672 megabytes)
LVs:            2                        USED PPs:        6 (384 megabytes)
OPEN LVs:       0                        QUORUM:          2
TOTAL PVs:      1                        VG DESCRIPTORS:  2
STALE PVs:      0                        STALE PPs:       0
ACTIVE PVs:     1                        AUTO ON:         no
Concurrent:     Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:        Concurrent               Active Nodes:    2 3
Node ID:        1                        MAX PVs:         128
MAX PPs per PV: 1016                     AUTO SYNC:       no
LTG size:       128 kilobyte(s)          BB POLICY:       relocatable
HOT SPARE:      no


9 SYNCHRONIZE THE SYSTEM TIME ON CLUSTER NODES

MANDATORY

To ensure that RAC operates efficiently, you must synchronize the system time on all cluster nodes. Oracle recommends that you use xntpd for this purpose. xntpd is a complete implementation of the Network Time Protocol (NTP) version 3 standard and is more accurate than timed. To configure xntpd, follow these steps on each cluster node:

1. Enter the following command to create the required files, if necessary:
# touch /etc/ntp.drift /etc/ntp.trace /etc/ntp.conf

2. Using any text editor, edit the /etc/ntp.conf file:


# vi /etc/ntp.conf

3. Add entries similar to the following to the file:


# Sample NTP Configuration file
# Specify the IP addresses of three clock server systems.
server ip_address1
server ip_address2
server ip_address3
# Most of the routers are broadcasting NTP time information. If your
# router is broadcasting, then the following line enables xntpd
# to listen for broadcasts.
broadcastclient
# Write clock drift parameters to a file. This enables the system
# clock to quickly synchronize to the true time on restart.
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace

4. To start xntpd, follow these steps: a. Enter the following command:


# /usr/bin/smitty xntpd

b. Choose Start Using the xntpd Subsystem, then choose BOTH.
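To verify afterwards that the daemon is running, a quick check with a standard AIX command:

# lssrc -s xntpd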


10 IMPORTANT TIPS FOR ORACLE SOFTWARE AND PATCHES INSTALLATION INSTRUCTIONS


10.1 10G INSTALLATION ON AIX 5.3, FAILED WITH CHECKING OPERATING SYSTEM VERSION MUST BE 5200

Doc ID: Note:293750.1
Subject: 10g Installation on AIX 5.3, Failed with "Checking operating system version: must be 5200. Failed"
Type: PROBLEM          Status: MODERATED
Content Type: TEXT/X-HTML          Creation Date: 13-DEC-2004          Last Revision Date: 02-MAR-2005

This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process, and therefore has not been subject to an independent technical review.

The information in this document applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.1.0.2
AIX 5.3 Based Systems (64-bit)

Symptoms
10g installation on AIX 5.3 fails with "Checking operating system version: must be 5200. Failed". The installation log shows the following details:

Using paramFile: /u06/OraDb10g/Disk1/install/oraparam.ini
Checking installer requirements...
Checking operating system version: must be 5200 Failed <<

Cause
This issue is the same as the following Oracle bug fix for AIX 5L v5.3 interoperability: when running the Oracle Universal Installer (OUI) the following message or similar may appear:
"OUI-18001: The operating system AIX Version 5300.0x is not supported."

Fix

The workaround is to run the OUI as follows:

./runInstaller -ignoreSysPrereqs

This parameter tells the installer not to stop when it encounters an OS version greater than expected.

References
Note 282036.1 - Minimum software versions and patches required to Support Oracle Products on IBM pSeries.


10.2 10G RECOMMENDED STEPS BEFORE INSTALLATION AND APPLYING ANY PATCH SET ON AIX

Description
The patch set instructions for installations or patch sets on AIX platforms do not include instructions to run "slibclean" before installing. This can lead to write errors and/or other strange errors during the application of the patch set, or during upgrade/operation of a database after the patch set has been applied.

The recommended steps before installation and applying any patch set on AIX are (a worked example follows the list):
1. Shut down all instances which use the target ORACLE_HOME (being sure to exit the SQL*Plus session used to shut down each instance).
2. Stop all listeners started from the target ORACLE_HOME.
3. Stop all client application code / daemons which use the target ORACLE_HOME.
4. Run the AIX command "/usr/sbin/slibclean" as "root" to clean all unreferenced libraries from memory.
5. Follow the install steps for the patch set.
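A minimal sketch of steps 1, 2 and 4 on one node (assuming the oracle user's environment points at the target ORACLE_HOME; your instance and listener names may differ):

$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> exit
$ lsnrctl stop
# /usr/sbin/slibclean          (as root)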

Possible Symptoms
Write errors during patch set installation: <Note:169705.1> describes some of the "write" errors which can occur during application of the patch set if slibclean is not run.

Explanation
When AIX loads a shared library into memory, the image of that library is kept in memory even if no process is using the library. If the on-disk copy of the library is altered, then applications using that library still use the in-memory copy and not the updated disk copy. This is normal, expected behaviour on AIX. In the case of applying an Oracle patch set, shutting down all the instances, listeners and applications still leaves shared libraries in memory (e.g. libjox9.a stays in memory). Application of the patch set updates the disk copy, but a subsequent startup of an instance uses the in-memory library images (if they are still present). Hence the version banner can show the old release, and upgrade steps may fail because the instance is running an unsupported combination of libraries. Running "slibclean" before starting the upgrade flushes libraries which are not currently in use from memory.
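One way to see whether a given Oracle library image is still cached in the AIX shared library segment is to list the loaded shared objects; this is only an illustrative check, and libjox is just an example name:

# genkld | grep libjox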


11 INSTALL THE CLUSTER READY SERVICES (CRS)

Please read the following Metalink note about CRS and 10g RAC: CRS and 10g Real Application Clusters - Note 259301.1 - 15-MAR-2005 - Oracle Server Enterprise Edition - Generic - BULLETIN

The Oracle Cluster Ready Services installation is necessary and mandatory. This installation has to be started from one node only. Once the first node is installed, the Oracle OUI automatically starts the copy of the mandatory files to the other nodes, using the rcp command. This step should not last long. But in any case, don't assume the OUI is stalled; look at the network traffic before canceling the installation!


You can create a staging area. The name of the subdirectories is in the format Disk1.

As root user, execute: xhost +

Log in as oracle and follow the procedure hereunder.

Set up and export your DISPLAY, TMP and TEMP variables:

- export DISPLAY=.......
- export TMP=/tmp
- export TEMP=/tmp

with /tmp or another destination having enough free space, about 500 MB.

If AIX 5L release 5.3 is used, do modify the files oraparam.ini and cluster.ini in Disk1/installer: update the entries AIX5200 to AIX5300 in both files, and execute:

$ /<cdrom_mount_point>/runInstaller

Or execute: ./runInstaller -ignoreSysPrereqs

The OUI (Oracle Universal Installer) checks the operating system requirements for AIX 5L 5.2.0.0. If AIX maintenance levels 1, 2, 3 and/or 4 are installed, the installer will notice (no further action) and will go to the next step. To check the AIX maintenance level installed on each node:

-> instfix -i | grep ML
All filesets for 5.2.0.0_AIX_ML were found.
All filesets for 5200-01_AIX_ML were found.
All filesets for 5200-02_AIX_ML were found.
All filesets for 5200-03_AIX_ML were found.
All filesets for 5200-04_AIX_ML were found.

The OUI (Oracle Universal Installer) asks you to run rootpre.sh as root. At the OUI Welcome screen, click Next.

Make sure to execute rootpre.sh on each node before you click to go to the next step.


At the OUI Welcome screen

Just click Next ...

Specify where you want to create the inventory directory. By default it will be created in $ORACLE_BASE. The operating system group name should be set to dba.

Then click Next ...

The OUI asks the user to run orainstRoot.sh in a separate window if it is the first Oracle product install on this machine. This script creates the file /etc/oraInst.loc, which is used by the OUI for the list of installed products. Connect as root on node 1 and run the orainstRoot.sh located in $ORACLE_BASE/oraInventory. This will change the permissions, and the group name to dba, on the /etc/oraInst.loc file.

Then click Continue ...


Specify file locations: don't change the source path. Specify an ORACLE_HOME name and destination directory for the CRS installation. The destination directory should be inside $ORACLE_BASE.

Then click Next ...

Language Selection: just choose the languages you need. English should always be selected in addition to the others. In our example, we'll just need English.

Then click Next ...


Cluster Configuration:
Specify a cluster name, crs in our example. Specify the Public Node Name and Private Node Name for all nodes to be part of the Oracle cluster. Public corresponds to the IP address connected to the public network; Private corresponds to the IP address linked to the RAC interconnect. In our case, we are using the same interconnect link for GPFS and RAC.

Then click Next ...

If you have problems at this stage when clicking Next (error messages), check your network configuration and the user equivalence configuration on all nodes.

Private Interconnect Enforcement: specify which network card corresponds to the public network, and which one corresponds to the private network (RAC interconnect). In our example: en0 must exist as private on each node; en1 must exist as public on each node.

Then click Next ...

At this stage, you must have already configured the shared storage, at least for:
- the Oracle Cluster Registry
- the Voting Disk

You must know where to install these 2 files so that they are reachable by all nodes participating in the RAC cluster. You have 3 implementation choices:
- Raw disks with Oracle ASM (Automatic Storage Management)
- Cluster file system with IBM GPFS
- Concurrent raw devices with HACMP

Oracle Cluster Registry: specify the OCR location. This must be a location on the shared storage reachable from all nodes, and you must have read/write permissions on it from all nodes.
- If the ASM implementation is used, specify a raw disk location
- If the GPFS implementation is used, specify a file system location
- If the Raw Devices implementation is used, specify a raw device location

Then click Next ...

Voting Disk: specify the Voting Disk location. This must be a location on the shared storage reachable from all nodes, and you must have read/write permissions on it from all nodes.
- If the ASM implementation is used, specify a raw disk location
- If the GPFS implementation is used, specify a file system location
- If the Raw Devices implementation is used, specify a raw device location

Then click Next ...


orainstRoot.sh: execute orainstRoot.sh on all nodes. The file is located in the CRS installation path on each node.

Then click Continue ...

Summary: check the Cluster Nodes and Remote Nodes lists. The OUI will install the Oracle CRS software onto the local node, and then copy it to the other selected nodes.

Then click Install ...

Install: the Oracle Universal Installer will proceed with the installation on the first node, then automatically copy the code to the 2 other selected nodes. During the copy, you may be prompted to enter the source location for Disk2.

Just wait for the next screen ...


Install: copying installed files from node 1 to the other selected nodes, nodes 2 and 3 in our example.

Just wait for the next screen ...

Install: Setup Privileges

At this stage, you need to run the root.sh script on each selected node.

Start with node 1 and wait for the result before executing it on node 2; when finished on node 2, execute it on node 3. This file is located in the $ORACLE_BASE/crs directory on each node.

Install: Setup Privileges

Step 1: execute root.sh on node 1 and wait.


Install: Setup Privileges

Step 1: when finished, the CSS daemon should be active on node 1 and inactive on nodes 2 and 3. Check for the line "CSS is active on these nodes."
  node1

Step 2: execute root.sh on node 2. When finished, the CSS daemon should be active on nodes 1 and 2, and inactive on node 3. Check for the line "CSS is active on these nodes."
  node1
  node2

Step 3: execute root.sh on node 3. When finished, the CSS daemon should be active on nodes 1, 2, and 3. Check for the line "CSS is active on these nodes."
  node1
  node2
  node3

If CSS is not active on all nodes, or on one of the nodes, you could have a problem with the network configuration, or with the shared disk configuration for accessing the OCR and Voting Disks. Check your network and shared disk configuration, and the owner and access permissions (read/write) on the OCR and Voting disks from each participating node. Then execute the root.sh script again on the node having the problem.


Metalink notes to use in case of problems with CRS:

1  - 10G: How to Stop the Cluster Ready Services (CRS) - Note 263897.1 - 14-SEP-2004 - Oracle Server Enterprise Edition - Generic - HOWTO
2  - How to verify if CRS install is Valid - Note 295871.1 - 04-FEB-2005 - Oracle Server Enterprise Edition - Generic - HOWTO
3  - 10g RAC: Troubleshooting CRS Reboots - Note 265769.1 - 16-MAR-2005 - Oracle Server Enterprise Edition - Generic - TROUBLESHOOTING
4  - CRS and 10g Real Application Clusters - Note 259301.1 - 15-MAR-2005 - Oracle Server Enterprise Edition - Generic - BULLETIN
5  - Repairing or Restoring an Inconsistent OCR in RAC - Note 268937.1 - 02-MAR-2005 - Oracle Server Enterprise Edition - Generic - BULLETIN
6  - Placement of voting and OCR disk files in 10gRAC - Note 293819.1 - 01-MAR-2005 - Oracle Server Enterprise Edition - Generic - BULLETIN
7  - 10g RAC: How to Clean Up After a Failed CRS Install - Note 239998.1 - 23-FEB-2005 - Oracle Server Enterprise Edition - Generic - BULLETIN
8  - CRS 10g Diagnostic Collection Guide - Note 272332.1 - 15-NOV-2004 - Oracle Server Enterprise Edition - Generic - BULLETIN
10 - 10g RAC: Stopping Reboot Loops When CRS Problems Occur - Note 239989.1 - 03-MAY-2004 - Oracle Server Enterprise Edition - Generic - PROBLEM
11 - HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE - Note 298073.1 - 11-FEB-2005 - Enterprise Manager for RDBMS - Generic - BULLETIN
12 - HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE - Note 298069.1 - 11-FEB-2005 - Enterprise Manager for RDBMS - Generic - BULLETIN
14 - CRS Home Is Only Partially Copied to Remote Node - Note 284949.1 - 05-DEC-2004 - Oracle Server Enterprise Edition - Generic - PROBLEM


Install: Setup Privileges. Coming back to the previous screen.

Just click OK to continue ...

Configuration Assistants: 2 configuration assistants will be executed automatically. Check that their results are successful.

Then click Next ...

End of Installation

Then click Exit ...


Post-Installation Oracle unix profile update: to be done on each node.

Oracle environment: vi the $HOME/.profile file in Oracle's home directory and add the following entries:

export ORACLE_BASE=/oh10g
export AIXTHREAD_SCOPE=S          (S for system-wide thread scope)
umask 022
export ORACLE_CRS=$ORACLE_BASE/crs
export ORACLE_CRS_HOME=$ORACLE_BASE/crs
export LD_LIBRARY_PATH=$ORACLE_CRS/lib:$ORACLE_CRS/lib32
export PATH=$ORACLE_CRS/bin:$PATH

At this stage:
- The Oracle Cluster Registry and Voting Disk are created and configured.
- The Oracle Cluster Ready Services software is installed and started on all nodes.

Post-Installation check list:

- Check the CRS processes on each node: ps -ef | grep d.bin
- Execute crs_stat on each node as the oracle user (see the example below).

Commands to start/stop the CRS daemons:
/etc/init.crs start     to start the CRS
/etc/init.crs stop      to stop the CRS

If necessary, and only for troubleshooting purposes, disable the automatic reboot of AIX nodes when a node fails to communicate with the CRS daemons, or fails to access the OCR and Voting disk.
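A minimal sketch of the two checks on one node (crs_stat -t prints a compact table of the registered CRS resources; run it as the oracle user):

node1:root-/> ps -ef | grep d.bin
node1:oracle-/home/oracle> crs_stat -t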


12 INSTALL THE ORACLE 10G SOFTWARE

The Oracle RAC option installation has to be started from one node only. Once the first node is installed, the Oracle OUI automatically starts the copy of the mandatory files to the second node, using the rcp command. This step can take a long time, depending on the network speed (up to an hour), without any message. So don't assume the OUI is stalled; look at the network traffic before canceling the installation!

You can also create a staging area. The name of the subdirectories is in the format Disk1 to Disk3.

As root user, execute: xhost +

Log in as oracle and follow the procedure hereunder.

Set up and export your DISPLAY, TMP and TEMP variables:

- export DISPLAY=.......
- export TMP=/tmp
- export TEMP=/tmp

with /tmp or another destination having enough free space, about 500 MB.


If AIX 5L release 5.3 is used, do modify the files oraparam.ini and cluster.ini in Disk1/installer: update the entries AIX5200 to AIX5300 in both files, and execute:

$ /<cdrom_mount_point>/runInstaller

Or execute: ./runInstaller -ignoreSysPrereqs

The OUI (Oracle Universal Installer) checks the operating system requirements for AIX 5L 5.2.0.0. If AIX maintenance levels 1, 2, 3 and/or 4 are installed, the installer will notice (no further action) and will go to the next step. To check the AIX maintenance level installed on each node:

-> instfix -i | grep ML
All filesets for 5.2.0.0_AIX_ML were found.
All filesets for 5200-01_AIX_ML were found.
All filesets for 5200-02_AIX_ML were found.
All filesets for 5200-03_AIX_ML were found.
All filesets for 5200-04_AIX_ML were found.

The OUI (Oracle Universal Installer) asks you to run rootpre.sh as root. At the OUI Welcome screen, click Next.

Make sure to execute rootpre.sh on each node (it should already have been done with the CRS installation) before you click to go to the next step.


At the OUI Welcome screen

Just click Next ...

Specify File Locations: do not change the Source field. Specify a different ORACLE_HOME name with its own directory for the Oracle software installation.

This ORACLE_HOME must be different from the CRS ORACLE_HOME.

Then click Next ...

If you don't see the following screen with node selection, it might be that your CRS is down on one or all nodes. Please check that CRS is up and running on all nodes.

Specify Hardware Cluster Installation Mode: select the other nodes onto which the Oracle RDBMS software will be installed. It is not necessary to select the node on which the OUI is currently running.

Then click Next ...


Select the installation type : You have the option to choose Enterprise, Standard Edition, or Custom to proceed. Choose the Custom option to avoid creating a database by default.

Then click Next ...

The installer will check some product-specific prerequisites. Don't worry about the 3 checks with status "Not executed"; these are just warnings because the AIX maintenance level might be higher than 5200, which is the case in our example (ML04).

Then click Next ...

Available Product Components : Select the product components for Oracle Database 10g.

Make sure to select the Oracle Real Application Clusters 10.1.0.2.0 option Then click Next ...


Privileged Operating System Groups: verify the UNIX primary group name of the user which controls the installation of the Oracle 10g software (use the unix command id to find out), and set the Privileged Operating System Groups to the value found. In our example, this must be dba (primary group of the unix oracle user) for both entries. Then click Next ...

Create Database :

Choose No; we don't want to create a database at this stage.

Then click Next ...

Summary: the Summary screen will be presented. Confirm that the RAC database software and other selected options will be installed. Check the Cluster Nodes and Remote Nodes lists. The OUI will install the Oracle 10g software onto the local node, and then copy it to the other selected nodes. Then click Install ...


Install: the Oracle Universal Installer will proceed with the installation on the first node, then automatically copy the code to the 2 other selected nodes. Just wait for the next screen ...

During the copy, you may be prompted to enter the source location for Disk2 and Disk3.

Setup Privileges: at this stage, run the root.sh script on each selected node.

Set the DISPLAY value as root in your X-Windows terminal to be able to proceed with the VIP graphical assistant when executing root.sh on the first node.

Execute root.sh on each node ...

12.1 VIP CONFIGURATION ASSISTANT

On the first node, as root user, you must set up the DISPLAY before running the root.sh script.

Wait for the next screen ...


The VIP Welcome graphical screen will appear at the end of the root.sh script

Then click Next ...

1 of 2: Select one and only one network interface. Select the network interface corresponding to the public network. Remember that each public network card on each node must have the same name, en1 for example in our case; en0 is the RAC interconnect, or private network. Please check with ifconfig -a on each node as root. Select en1 in our case. Then click Next ...

2 of 2: In the Virtual IPs for cluster nodes screen, you must provide the VIP node name for node1 and press the TAB key to automatically fill in the rest. Check the validity of the entries before proceeding. Then click Next ...


The Summary screen will appear, please validate the entries.

Then click Finish ...

The VIP configuration Assistant will proceed with creation, configuration and startup of all application resources on all selected nodes. VIP, GSD and ONS will be the application resources to be created.

Wait while progressing ...

If you don't get any errors, you'll be prompted to click OK once the configuration is 100% completed.

Then click OK ...


Check the Configuration results.

Then click Exit ...

Run the root.sh script on each other node before proceeding to the next stage. No VIP screen assistant will be displayed on the other nodes when executing the root.sh script. In our example, we'll run root.sh on nodes 2 and 3 as root user.

Then click Next ...

ifconfig -a

Using ifconfig -a on each node, check that each network card configured for the public network is mapping a virtual IP.

If problems occur with the VIP configuration assistant, please use the Metalink notes specified in this chapter.


Metalink notes to use for VIP:

1 - Configuring the IBM AIX 5L Operating System for the Oracle 10g VIP - Note 296856.1 - 17-FEB-2005 - Oracle Server - Enterprise Edition - BULLETIN
2 - Changing the check interval for the Oracle 10g VIP - Note 294336.1 - 23-FEB-2005 - Oracle Server - Enterprise Edition - Generic - BULLETIN
3 - Modifying the VIP of a Cluster Node - Note 276434.1 - 13-JAN-2005 - Oracle Server - Enterprise Edition - Generic
4 - Modifying the default gateway address used by the Oracle 10g VIP - Note 298895.1 - 07-MAR-2005 - Oracle Server - Enterprise Edition - Generic - BULLETIN
5 - How to Configure Virtual IPs for 10g RAC - Note 264847.1 - 24-FEB-2005 - Oracle Server - Enterprise Edition - Generic - HOWTO

Install: Setup Privileges. Coming back to the previous screen.

Just click OK to continue ...


Configuration Assistants : The Universal Installer will proceed with the Oracle Net Configuration Assistant

Wait for next screen ...

Select Perform Typical Configuration

Then click Next ...

End of Installation : This screen will automatically appear. Check that it is successful and write down the URL list of the J2EE applications that have been deployed (isqlplus, ). Then click Exit ...


Post-Installation Oracle unix profile update: to be done on each node.

Oracle environment: vi the $HOME/.profile file in Oracle's home directory and add the following entries:

export ORACLE_BASE=/oh10g
export AIXTHREAD_SCOPE=S
umask 022
export ORACLE_CRS=$ORACLE_BASE/crs
export ORACLE_CRS_HOME=$ORACLE_BASE/crs
export ORACLE_HOME=$ORACLE_BASE/db10g
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_CRS/lib:$ORACLE_HOME/lib32:$ORACLE_CRS/lib32
export PATH=$ORACLE_HOME/bin:$ORACLE_CRS/bin:$PATH

At this stage:
- The Oracle 10g software with the Real Application Clusters option is installed.
- The VIP (Virtual IP), GSD and ONS application resources are configured on all nodes.
- The listeners are configured and started.

Post-Installation check list:

- Execute crs_stat on each node as the oracle user.
- Make sure the VIP is present and configured on each node: as root, execute ifconfig -a


13 ORACLE PATCHES TO APPLY


Before any patch update, back up your installation, database, and OCR (Oracle Cluster Registry) disks.

If you apply patch set 1 (10.1.0.3), do apply:

Patch - Description - Release - Updated - Size
3761843 - Oracle Database Family: Patchset - PATCH SET FOR ORACLE DATABASE SERVER - 10.1.0.3 - 17-AUG-2004 - 916M

And then go to 13.2 Patch to apply on top of 10.1.0.3.

13.1 PATCH TO APPLY ON TOP OF 10.1.0.2

Extracted from http://metalink.oracle.com (5th of April 2005); please consult Metalink for the latest patch list. Not all patches are necessary; just apply the patches that solve problems you may encounter.

Results for platform: AIX5L Based Systems (64-bit)

Patch - Description - Release - Updated - Size
4003051 - Oracle Database Family: Patch PLEASE PROVIDE MLR ON TOP OF 10.1.0.2 FOR SECURITY ROLLUP - 10.1.0.2 - 21-JAN-2005 - 29M
3838804 - Oracle Database Family: Patch MERGE LABEL REQUEST ON TOP OF 10.1.0.2 - 10.1.0.2 - 10-NOV-2004 - 4.0M
3458343 - Oracle Database Family: Patch SRVCTL CONFIG DATABASE FAILS FOR 9.2 TARGET - 10.1.0.2 - 28-SEP-2004 - 16K
3785451 - Oracle Database Family: Patch NOT USING VALUES RETURNED BY IOCTL WHEN THE DEVTYPE IS DD_SCDISK. - 10.1.0.2 - 02-SEP-2004 - 109K
3811942 - Oracle Database Family: Patch MERGE LABEL REQUEST ON TOP OF 10.1.0.2 - 10.1.0.2 - 31-AUG-2004 - 19M
3572753 - RDBMS Server: Patch ORA-600: [KGHSTACK_UNDERFLOW_INTERNAL_2] TAKING BACKUP OF CONTROLFILE VIA RMAN - 10.1.0.2 - 15-AUG-2004 - 91K
3712203 - RDBMS Server: Patch BACKPORT THE ZSUBTYPE PATCH FOR VERITAS DEVICE FOR 9.2.0.X AND 10.1.0.X. - 10.1.0.2 - 06-AUG-2004 - 338K
3452566 - RDBMS Server: Patch ORA-15131: BLOCK 43008 OF FILE 3 IN DISKGROUP 1 COULD NOT BE READ - 10.1.0.2 - 29-JUL-2004 - 34K
3728908 - RDBMS Server: Patch ORA-600:[KCCSBCK_FIRST] HAPPEN AFTER RESTART INSTANCE BY CRS WITH HACMP - 10.1.0.2 - 23-JUL-2004 - 342K
3612581 - RDBMS Server: Patch KPOFDR-LONG INTERNAL ERROR DUING SAP-SD BENCHMARK - 10.1.0.2 - 06-JUL-2004 - 15K
3417593 - RDBMS Server: Patch PIECEWISE FETCHING OF VARCHAR2 COLUMN FAILED IN SOLARIS CLIENT -> NT SERVER MODE - 10.1.0.2 - 20-JUN-2004 - 14K
3564573 - RDBMS Server: Patch ORA-01017 WHEN 10G CLIENT CONNECTS TO 9I OR 8I DATABASE WITH DIFF CHAR SET - 10.1.0.2 - 12-MAY-2004 - 16K
3527628 - Oracle Database Family: Patch ROOT.SH SHOULD BE MODIFIED TO RECOGNISE SILENT INSTALLS - 10.1.0.2 - 11-MAY-2004 - 35K
3385592 - RDBMS Server: Patch 10G-SH9: ASM RESOURCE RACGMAIN CORE DUMP WITH SHARED ORACLE HOME - 10.1.0.2 - 07-MAY-2004 - 37K
3492889 - RDBMS Server: Patch THREAD SPAWN FAILURES IN THE OCR LOGS CAUSING SESSIONS TO HANG - 10.1.0.2 - 30-APR-2004 - 421K
3540898 - RDBMS Server: Patch MERGE LABEL REQUEST ON TOP OF 10.1.0.2 - 10.1.0.2 - 21-APR-2004 - 15K

13.2 PATCH TO APPLY ON TOP OF 10.1.0.3

Extracted from http://metalink.oracle.com (5th of April 2005); please consult Metalink for the latest patch list. Not all patches are necessary; just apply the patches that solve problems you may encounter.

Results for platform: AIX5L Based Systems (64-bit)

Patch - Description - Release - Updated - Size
4130275 - RDBMS Server: Patch ORA-354 ON LOGICAL STANDBY HOT MINING AFTER PRIMARY TRANSPORT ERROR - 10.1.0.3 - 05-APR-2005 - 342K
4054910 - RDBMS Server: Patch THE DEFAULT VALUE FOR FAIL_WHEN_ALL_LINK_DOWN SHOULD BE 1 (TRUE) ON AIX5L - 10.1.0.3 - 05-APR-2005 - 21K
4274434 - RDBMS Server: Patch 10103ROOT.SH REQUESTING LIB/LIBCLNTSH.A(SHR.O) - 10.1.0.3 - 04-APR-2005 - 330K
4061535 - RDBMS Server: Patch ORA-07445 [KNLCINLINEDATA()+793] IN CAPTURE PROCESS - 10.1.0.3 - 04-APR-2005 - 45K
3897915 - RDBMS Server: Patch ORA-7445 KOKLMIFRDHT WHEN RUNNING LOB RELATED TESTS WITH ARCHIVING ON - 10.1.0.3 - 01-APR-2005 - 59K
4258825 - RDBMS Server: Patch CORRUPTED R-TREE INDEX - CONTAINS ORPHAN ROWID'S - 10.1.0.3 - 31-MAR-2005 - 14K
4128156 - RDBMS Server: Patch ASM FREE SPACE NOT USED WHEN TABLESPACE CREATION USES MANY, SMALLER FILES - 10.1.0.3 - 24-MAR-2005 - 17K
3818542 - RDBMS Server: Patch APPSST10G: PLS-907 CANNOT LOAD LIBRARY UNIT - 10.1.0.3 - 16-MAR-2005 - 29K
4230634 - RDBMS Server: Patch MERGE LABEL REQUEST ON TOP OF 10.1.0.3 FOR BUGS 4149653 3819875 - 10.1.0.3 - 15-MAR-2005 - 566K
2686629 - RDBMS Server: Patch IMPORT DOES NOT WORK WELL WHEN USE A PARTICULAR XML SCHEMA - 10.1.0.3 - 14-MAR-2005 - 846K
4090879 - RDBMS Server: Patch MERGE LABEL REQUEST ON TOP OF 10.1.0.3 FOR BUGS 3777747 3849723 3590881 3863235 - 10.1.0.3 - 09-MAR-2005 - 1.2M
3417593 - RDBMS Server: Patch PIECEWISE FETCHING OF VARCHAR2 COLUMN FAILED IN SOLARIS CLIENT -> NT SERVER MODE - 10.1.0.3 - 09-MAR-2005 - 14K
4091881 - Oracle Database Family: Patch MERGE LABEL REQUEST ON TOP OF 10.1.0.3 FOR BUGS 3982434 4087215 - 10.1.0.3 - 07-MAR-2005 - 2.0M
4206159 - RDBMS Server: Patch OPROCD IS PRONE TO TIME REGRESSION DUE TO CURRENT API USED - 10.1.0.3 - 03-MAR-2005 - 18M
3910149 - RDBMS Server: Patch KGLHDDEP SGA LEAK IN RAC - 10.1.0.3 - 23-FEB-2005 - 55K
4090777 - RDBMS Server: Patch CSS CORE DUMPS ON ALL 5 NODES - 10.1.0.3 - 22-FEB-2005 - 732K
3635177 - RDBMS Server: Patch PSRC: ORA-7445 [KXCCUIN] POSSIBLE FROM DML - 10.1.0.3 - 21-FEB-2005 - 15K
4113001 - RDBMS Server: Patch MERGE LABEL REQUEST FOR PEOPLESOFT - 10.1.0.3 - 18-FEB-2005 - 52K
3972424 - Oracle Database Family: Patch 10.1.0.3.0 LISTENER HANGS INTERMITTENTLY (ONS) - 10.1.0.3 - 10-FEB-2005 - 491K
3857039 - RDBMS Server: Patch RMAN COMMAND FAILS WITH ORA-6502 ERROR - 10.1.0.3 - 02-FEB-2005 - 740K
3228560 - RDBMS Server: Patch ORA-00600: [QKEBVALIDATEOPN:2], [198] - 10.1.0.3 - 01-FEB-2005 - 69K
4063442 - RDBMS Server: Patch INCORRECT EXCEPTION HANDLING IN QMX0.C:QMXUPARSEXOBWITHPROPCS() - 10.1.0.3 - 27-JAN-2005 - 76K
4003062 - RDBMS Server: Patch PLEASE PROVIDE MLR ON TOP OF 10.1.0.3 FOR SECURITY ROLLUP - 10.1.0.3 - 18-JAN-2005 - 2.8M
4034746 - RDBMS Server: Patch WRONG RESULTS WHEN USING HASH JOIN WITH PARALLEL EXECTION. - 10.1.0.3 - 17-JAN-2005 - 18K
4120185 - RDBMS Server: Patch MERGE LABEL REQUEST ON TOP OF 10.1.0.3 FOR BUGS 3672097 4011806 - 10.1.0.3 - 17-JAN-2005 - 96K
3572753 - RDBMS Server: Patch ORA-600: [KGHSTACK_UNDERFLOW_INTERNAL_2] TAKING BACKUP OF CONTROLFILE VIA RMAN - 10.1.0.3 - 16-JAN-2005 - 91K
3966811 - RDBMS Server: Patch NEW INIT.CSSD PROVIDED IN BUG#3923542 MIGHT CAUSE INCORRECT BEHAVIOR - 10.1.0.3 - 06-JAN-2005 - 877K
3828269 - RDBMS Server: Patch WRONG RESULT OCCURS WHEN SPACE CHARACTER IS SCANNED WITH UNIQUE INDEX - 10.1.0.3 - 28-DEC-2004 - 56K
4026166 - RDBMS Server: Patch CAPTURE PROCESS DISCARDS MVDD INFO FOR RENAMED TABLES - 10.1.0.3 - 21-DEC-2004 - 74K
3843277 - RDBMS Server: Patch EXTRANEOUS TRACE FILES PRODUCED - 10.1.0.3 - 08-DEC-2004 - 193K
4051111 - RDBMS Server: Patch BUNDLE FOR BUGS 3992673 AND 3942568 - 10.1.0.3 - 06-DEC-2004 - 1.6M
3455036 - RDBMS Server: Patch CRSD/EVMD PIDFILE/LOCK MECHANISM IS FAILURE PRONE. - 10.1.0.3 - 06-DEC-2004 - 21K
3854375 - RDBMS Server: Patch ORA-7445 WHEN UPDATING A VIEW WITH A TRIGGER ON THE VIEW - 10.1.0.3 - 29-NOV-2004 - 20K
3844049 - RDBMS Server: Patch APPSPERF: RDBMS: 600 ERROR [QCTCTE1] - 10.1.0.3 - 25-NOV-2004 - 20K
4000840 - RDBMS Server: Patch UPDATE STATEMENT FAILS WITH OERI [QERTBFETCHBYROWID], OERI [4511],.. - 10.1.0.3 - 16-NOV-2004 - 45K
3712203 - RDBMS Server: Patch BACKPORT THE ZSUBTYPE PATCH FOR VERITAS DEVICE FOR 9.2.0.X AND 10.1.0.X. - 10.1.0.3 - 16-NOV-2004 - 316K
3953420 - Oracle Database Family: Patch MLR: ONS ROLLUP ON TOP OF 10.1.0.3 - 10.1.0.3 - 12-NOV-2004 - 540K
3924482 - RDBMS Server: Patch APPSST10103: ORA-31693 ORA-604 PLS-306 ON 1159 APPS DB DURING DATAPUMP EXPORT - 10.1.0.3 - 09-NOV-2004 - 70K
3814603 - RDBMS Server: Patch ORA-600[KCCSBCK_FIRST] - 10.1.0.3 - 05-NOV-2004 - 728K
3625392 - RDBMS Server: Patch WRONG RESULTS OR ORA-7445 [EVAOPN2] WITH STAR TRANSFORMATION HINT - 10.1.0.3 - 05-NOV-2004 - 251K
3941443 - RDBMS Server: Patch ISSCHEMAVALID CRASH IN QMXINSERTNODEBEFORE WITH CDATA TAG DOCUMENT - 10.1.0.3 - 04-NOV-2004 - 111K
3942568 - RDBMS Server: Patch NODE CRASHING WHEN HEARTBEAT CABLE IS PULLED ON OTHER (MASTER) NODE - 10.1.0.3 - 01-NOV-2004 - 733K
3672097 - RDBMS Server: Patch WRONG RESULTS JOINING CHAR OR NCHAR FROM 10G TO 9I & 8I ACROSS DB LINK - 10.1.0.3 - 31-OCT-2004 - 21K
3880640 - RDBMS Server: Patch QUERY WITH ALL OPERATOR HANGS - 10.1.0.3 - 18-OCT-2004 - 44K
3800614 - RDBMS Server: Patch ORA-600 [12410] ON SIMPLE QUERY WITH ANALYTIC FUNCTION - 10.1.0.3 - 18-OCT-2004 - 14K
3412553 - RDBMS Server: Patch FXD: ORA-942 DURING FUNCTIONAL INVOCATION OF CONTAINS ON ANOTHER USER'S TABLE - 10.1.0.3 - 13-OCT-2004 - 29K
3555800 - RDBMS Server: Patch 0G CLIENT CAN'T CONNECT VIA ASO TO 10G DB ON DIFFERENT ENDIAN ORDER PLATFORM - 10.1.0.3 - 08-OCT-2004 - 41K
3476889 - RDBMS Server: Patch DP EXPORT FAILS WITH ORA-600 [KTCSETSCN-1] WITH LARGE SCN VALUE - 10.1.0.3 - 07-OCT-2004 - 50K
3785451 - Oracle Database Family: Patch NOT USING VALUES RETURNED BY IOCTL WHEN THE DEVTYPE IS DD_SCDISK. - 10.1.0.3 - 01-OCT-2004 - 115K
3612581 - RDBMS Server: Patch KPOFDR-LONG INTERNAL ERROR DUING SAP-SD BENCHMARK - 10.1.0.3 - 01-OCT-2004 - 15K
3755693 - RDBMS Server: Patch NO ROWS FOUND WHEN USING BINDVARIABLES AND CBO - 10.1.0.3 - 28-SEP-2004 - 57K
3554494 - RDBMS Server: Patch ALTER TABLE SPLIT PARTITION FAILS WITH ORA00933 ERROR FROM RECURSIVE SQL - 10.1.0.3 - 20-SEP-2004 - 50K
3686661 - RDBMS Server: Patch EXCESSIVE REDO GENERATION ON 10G PRODUCTION RELEASE WHEN COMPARED TO ORACLE 9I - 10.1.0.3 - 20-SEP-2004 - 53K
3877070 - RDBMS Server: Patch MLR FOR CSFB BUG FIXES ON TOP OF 10.1.0.3 - 10.1.0.3 - 13-SEP-2004 - 1.8M

Installing Oracle 10g RAC on IBM

pSeries with AIX 5L

page 98/ 98

14 CREATING THE DATABASE USING DBCA

Connect as the oracle unix user on the first node and set up your DISPLAY (see the sketch below). Execute dbca & to launch the Database Configuration Assistant.

DBCA Welcome Screen
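A minimal sketch of that preparation, run as the oracle user on node1. The X server address 10.2.12.10:0.0 is only an example; replace it with your own workstation display.

# run as the oracle user on the first node
export DISPLAY=10.2.12.10:0.0     # address of your X server (example value)
xclock &                          # optional quick test that the DISPLAY is reachable
dbca &                            # launch the Database Configuration Assistant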

Select the Oracle Real Application Cluster Database option.

Then click Next ...

Step 1: Operations

Select the Create a Database option.

Then click Next ...

Step 2 : Node Selection

Make sure to select all RAC nodes.

Then click Next ...


Step 3 : Database Templates

Select General Purpose Or Custom Database if you want to generate the creation scripts.

Then click Next ...

Step 4 : Database Identification

Specify the Global Database Name. The SID Prefix is updated automatically (by default it is the Global Database Name).

Then click Next ...

Step 5 : Management Options. Check Configure the database with Enterprise Manager if you want to use the Database Control (local administration), or leave it unchecked if you plan to administer the database using Grid Control (global network administration). Then click Next ...


Step 6 : Database Credentials. Specify the same password for all administrative users, or specify an individual password for each user.

Then click Next ...

Step 7 : Storage Options

Select the storage option corresponding to the storage setup you chose in the storage setup part:
o Cluster File System (GPFS)
o Automatic Storage Management (ASM)
o Raw Devices (HACMP)

Then click Next ...

Depending on the storage option you selected, go to the step specified below:
o Cluster File System (GPFS): go to 14.1 DATABASE CREATION ON GPFS
o Automatic Storage Management (ASM): go to 14.2 DATABASE CREATION ON ASM
o Raw Devices (HACMP): go to 14.3 DATABASE CREATION ON RAW DEVICES


14.1 DATABASE CREATION ON GPFS

Step 8-a : Storage Options

Choose Cluster File System

Then click Next ...


Step 9-a : Database File Locations

Specify the way you want the database files to be managed, and specify a directory if necessary (option Common location for all database files).

Then click Next ...

Step 10-a : Recovery Configuration. Specify whether you want to use a Flash Recovery Area and, if so, specify the destination directory and the area size. Then click Next ...

Step 11-a : Database Content

Select Sample Schemas if you need them, and add custom scripts if necessary. Then click Next ...


Step 12-a : Database Services

Define the TAF (Transparent Application Failover) strategy at this point, or move on to the next step; TAF can also be configured later. Then click Next ...

Step 13-a : Initialization Parameters

Check the default parameters or specify your own, then move to the next step. Then click Next ...

Step 14-a : Database Storage

Check the entries. Then click Next or Finish ...


Step 15-a : Summary Read/save the summary, and move to next step

Then click OK ...

Step 16-a : Creation Options

Just wait while processing ...

Step 17-a : Passwords Management

Enter password management if you need to change passwords or unlock user accounts that are locked by default (for security purposes). Then click Exit ...


At this stage:
The GPFS Oracle cluster database is created on the shared cluster file system configured with IBM GPFS, with 1 database and 2 database instances (1 on each node). Export ORACLE_SID=GPFS1 on node1 to access the database instance with sqlplus.


14.2 DATABASE CREATION ON ASM

MANDATORY: run the following commands on each node, for all ASM disks (LUNs) that are candidates to be used in the ASM disk group (n standing for the disk number):

# chown oracle:dba /dev/rhdiskn
# chmod 660 /dev/rhdiskn
# /usr/sbin/chdev -l hdiskn -a reserve_lock=no
# /usr/sbin/chdev -l hdiskn -a reserve_policy=no_reserve
# /usr/sbin/chdev -l hdiskn -a pv=yes

A loop such as the sketch below can be used to apply these settings to all candidate disks.
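A minimal ksh sketch, to be run as root on every node. The disk names hdisk4 hdisk5 hdisk6 are placeholders for your own candidate LUNs; depending on the disk driver, only one of the two reserve attributes (reserve_lock for non-MPIO devices, reserve_policy for MPIO devices) applies and the other chdev call may simply be rejected.

for DISK in hdisk4 hdisk5 hdisk6
do
    chown oracle:dba /dev/r${DISK}                              # raw device owned by oracle:dba
    chmod 660 /dev/r${DISK}                                     # read/write for oracle and the dba group only
    /usr/sbin/chdev -l ${DISK} -a reserve_lock=no               # release the SCSI reserve (non-MPIO driver)
    /usr/sbin/chdev -l ${DISK} -a reserve_policy=no_reserve     # same setting for MPIO devices
    /usr/sbin/chdev -l ${DISK} -a pv=yes                        # assign a PVID so the disk is seen as a physical volume
done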


Step 8-b : Storage Options

Choose Automatic Storage Management (ASM)

Then click Next ...

Step 8-b Create ASM Instance Specify SYS password for the ASM Instance Choose the type of parameter file

Then click Next ...

Step 9-b Create ASM Instance

Then click OK..


ASM Disk Groups

Then Click on Create New ...

Step 10-b : Create Disk Group Select Show Candidates Then Select Disks And External (for no ASM mirroring)

Then click Ok ...

Step 11-b : Create Disk Group ASM Disk Group Creation

Just Wait while processing ...


Step 12-b : Create Disk Group Now the ASM Disks Group is created Select your DiskGroup for the database

Then click Next ...

Step 13-b : Database File Locations Select Use Oracle-Managed Files

Then click Next ...

Step 14-b : Create Disk Group For Flash Recovery Area Select Show Candidates Then Select Disks And External (for no ASM mirroring) Then click Ok ...


Step 15-b : Create Disk Group Now the ASM Disks Group is created Select your DiskGroup for the Flash Recovery Area

Then click Ok ...

Step 16-b : Database File Locations Select Use Oracle-Managed Files

Then click Next ...

Step 17-b : Database Content Select the options needed And Enterprise Manager Repository

Then click Next ...


Step 18-b : Database Services and TAF (Transparent Application Failover) policy

Then click Next ...

Step 19-b : Initialization Parameters Select the parameters needed

Then click Next ...

Step 20-b : Database Storage Check the datafiles organization

Then click Next ...


Step 21-b : Creation Options Select the options needed Create Database Save as a Database Template Generate Database Creation Scripts

Then click Finish ...

Step 22-b : Summary Check the description Save the HTML summary file if needed Then click Ok ...

Step 23-b : The template is created. Then click Ok ...


Step 24-b : The scripts have been generated. Then click Ok ...

Step 25-b : Database creation in progress. Just wait while processing ...

Step 26-b : Passwords Management

Enter password management if you need to change passwords or unlock user accounts that are locked by default (for security purposes). Then click Exit ...


At this stage:
The ASM Oracle cluster database is created on the shared ASM disk group, with 1 database, 2 database instances (1 on each node) and 2 ASM instances (1 on each node). Export ORACLE_SID=+ASM1 on node1 to access the ASM instance with sqlplus, and export ORACLE_SID=ASMDB1 on node1 to access the database instance with sqlplus (see the sketch below).
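A quick check from the oracle user on node1, as a sketch only; the instance names +ASM1 and ASMDB1 are the ones created above, adapt them to your own SID prefix.

# connect to the ASM instance on node1
export ORACLE_SID=+ASM1
sqlplus '/ as sysdba' <<EOF
select name, state from v\$asm_diskgroup;
exit
EOF

# connect to the database instance on node1
export ORACLE_SID=ASMDB1
sqlplus '/ as sysdba' <<EOF
select instance_name, status from gv\$instance;
exit
EOF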


14.3 DATABASE CREATION ON RAW DEVICES

Create a parameter file $HOME/dbca_raw_config which will be used by DBCA to map the typical tablespaces to raw devices. This parameter file needs to be pointed to by the environment variable DBCA_RAW_CONFIG and makes it easier to create the database. Set the DBCA_RAW_CONFIG environment variable in the oracle user's environment before running DBCA:

export DBCA_RAW_CONFIG=$HOME/dbca_raw_config

14.3.1 Content example of the raw device DB configuration file

$HOME/dbca_raw_config (raw devices parameter file for the Database Configuration Assistant):

system=/dev/rrac_system
undo1=/dev/rrac_undotbs01
undo2=/dev/rrac_undotbs02
redo01=/dev/rlog11
redo02=/dev/rlog12
redo03=/dev/rlog21
redo04=/dev/rlog22
control1=/dev/rrac_control01
control2=/dev/rrac_control02
control3=/dev/rrac_control03
data=/dev/rrac_data
index=/dev/rrac_index
temp=/dev/rrac_temp
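As a quick sanity check before launching DBCA (a sketch only, using the device names from the example file above), you can verify from the oracle user that every raw device referenced in the file exists and has the expected ownership:

# list the raw devices named in $HOME/dbca_raw_config and check owner/permissions
grep '=' $HOME/dbca_raw_config | awk -F= '{print $2}' | while read DEV
do
    if [ -c "$DEV" ]; then
        ls -l "$DEV"                      # expect owner oracle, group dba, mode crw-rw----
    else
        echo "missing raw device: $DEV"
    fi
done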


Step 8-c : Storage Options

Choose Raw Devices

Then click Next ...

At this stage:
The Oracle cluster database is created on concurrent shared raw devices, with 1 database and 2 database instances (1 on each node). Export ORACLE_SID=RACDB1 on node1 to access the database instance with sqlplus.


14.4 MANUAL DATABASE CREATION

Doc ID: Note:240052.1
Subject: 10g Manual Database Creation in Oracle (Single Instance and RAC)
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 30-MAY-2003
Last Revision Date: 07-DEC-2004

PURPOSE
-------
The purpose of this bulletin is to give an example of a manual database creation in 10g.

MANDATORY
=========
From 10g onwards the SYSAUX tablespace is mandatory, for the Statistics Workload Repository facilities (SWRF).

GOOD PRACTICE
=============
Create a default tablespace while creating the database. Whenever the DBA creates a new user, it will then use the default permanent tablespace, unless the DBA mentions the DEFAULT TABLESPACE clause while creating the user. To use the default tablespace option, the init.ora parameter Compatible must be >= 10.0.

SCOPE & APPLICATION
-------------------
Oracle recommends using the Database Configuration Assistant (DBCA) to create your database. These steps are available for DBAs who want to manually create a 10g database, either in single instance or Real Application Clusters mode.

14.4.1 Tips to create a database in 10g Real Application Clusters

==============================================================
Manual Database Creation steps for Real Application Clusters
==============================================================

Here are the steps to be followed to create a Real Application Clusters database:

1. Make an init<SID>.ora in your $ORACLE_HOME/dbs directory. On Windows this file is in $ORACLE_HOME\database. To simplify, you can copy init.ora to init<SID>.ora and modify the file. Remember that your control file must be pointing to a pre-existing raw device or cluster file system location.

*** Path names, file names, and sizes will need to be modified

Example parameter settings for the first instance:

Cluster-Wide Parameters for Database "RAC":
db_block_size=8192
db_cache_size=52428800
background_dump_dest=/u01/32bit/app/oracle/product/9.0.1/rdbms/log
core_dump_dest=/u01/32bit/app/oracle/product/9.0.1/rdbms/log
user_dump_dest=/u01/32bit/app/oracle/product/9.0.1/rdbms/log
timed_statistics=TRUE
control_files=("/dev/RAC/control_01.ctl", "/dev/RAC/control_02.ctl")

db_name=RAC
shared_pool_size=52428800
sort_area_size=524288
undo_management=AUTO
cluster_database=true
cluster_database_instances=2
remote_listener=LISTENERS_RAC

Instance Specific Parameters for Instance "RAC1":
instance_name=RAC1
instance_number=1
local_listener=LISTENER_RAC1
thread=1
undo_tablespace=UNDOTBS

* The local_listener parameter requires that you first add the listener address to the TNSNAMES.ORA - remember to do so on both Node 1 and Node 2.
** You can also use an spfile as described in Note 136327.1.

2. Run the following sqlplus command to connect to the database:
sqlplus '/ as sysdba'

3. Start up the database in NOMOUNT mode:
SQL> startup nomount

4. Create the database (all raw devices must be pre-created):
*** Path names, file names, and sizes will need to be modified
CREATE DATABASE <db_name>
  CONTROLFILE REUSE
  MAXDATAFILES 254
  MAXINSTANCES 32
  MAXLOGHISTORY 100
  MAXLOGMEMBERS 5
  MAXLOGFILES 64
  DATAFILE '/dev/RAC/system_01_400.dbf' SIZE 900M segment space management auto REUSE
    AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
  UNDO TABLESPACE "UNDOTBS" DATAFILE '/dev/RAC/undotbs_01_210.dbf' SIZE 200M REUSE
  DEFAULT TABLESPACE USER_DEFAULT DATAFILE '/u01/oracle/rbdb1/user_default_1.dbf' size 2000M REUSE
    segment space management auto
  SYSAUX DATAFILE '/u01/oracle/rbdb1/sysaux_1.dbf' size 500M REUSE
    segment space management auto
  CHARACTER SET US7ASCII
  LOGFILE GROUP 1 ('/dev/RAC/redo1_01_100.dbf') SIZE 100M REUSE,
          GROUP 2 ('/dev/RAC/redo1_02_100.dbf') SIZE 100M REUSE;

5. Create a temporary tablespace:
*** Path names, file names, and sizes will need to be modified
CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE '/dev/RAC/temp_01_50.dbf' SIZE 40M REUSE;

6. Create a 2nd undo tablespace:

*** Path names, file names, and sizes will need to be modified
CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '/dev/RAC/undotbs_02_210.dbf' SIZE 200M REUSE
  NEXT 5120K MAXSIZE UNLIMITED;

7. Run the necessary scripts to build views, synonyms, etc. The primary scripts that you must run are:
i>   CATALOG.SQL - creates the views of data dictionary tables and the dynamic performance views
ii>  CATPROC.SQL - establishes the usage of PL/SQL functionality and creates many of the PL/SQL Oracle supplied packages
iii> CATPARR.SQL - creates RAC specific views

8. Edit init<SID>.ora and set appropriate values for the 2nd instance on the 2nd node:
*** Names may need to be modified
instance_name=RAC2
instance_number=2
local_listener=LISTENER_RAC2
thread=2
undo_tablespace=UNDOTBS2

9. From the first instance, run the following commands:
*** Path names, file names, and sizes will need to be modified
alter database add logfile thread 2
  group 3 ('/dev/RAC/redo2_01_100.dbf') size 100M,
  group 4 ('/dev/RAC/redo2_02_100.dbf') size 100M;
alter database enable public thread 2;

10. Start the second instance (assuming that your cluster configuration is up and running).
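A minimal sketch of step 7, run as SYSDBA on the first instance; the script names are the ones listed above and are assumed to be present under $ORACLE_HOME/rdbms/admin (the ? is the sqlplus shorthand for ORACLE_HOME).

sqlplus '/ as sysdba' <<EOF
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
@?/rdbms/admin/catparr.sql
exit
EOF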

14.4.2 Configure listener.ora / sqlnet.ora / tnsnames.ora

Use netca and/or netmgr to check the configuration of the listener and to configure Oracle Net services (by default the Net service may be equal to the global database name, see the instance parameter service_names).

14.4.3 Configure Oracle Enterprise Manager

Then start the OEM agent:
$ agentctl start

Check /etc/oratab: the file should contain a reference to the database name, not to the instance name. The last field should always be N in a RAC environment, to avoid two instances of the same name being started.

Register the database with srvctl (this is not necessary if the database was created by DBCA):
srvctl add database -d <db_name> -o <ORACLE_HOME path>
srvctl add instance -d <db_name> -i <SID1> -n <node1>
srvctl add instance -d <db_name> -i <SID2> -n <node2>
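For example, with a database named RACDB and instances RACDB1/RACDB2 on node1/node2 (all names here are illustrative only), the registration and a quick verification could look like this:

srvctl add database -d RACDB -o /oracle/product/10.1.0
srvctl add instance -d RACDB -i RACDB1 -n node1
srvctl add instance -d RACDB -i RACDB2 -n node2
srvctl status database -d RACDB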


15 GRID CONTROL INSTALLATION

15.1 GRID CONTROL MANAGEMENT SERVER INSTALLATION

Port 1521 must not be used by any other Oracle listener or any other process.

If AIX 5L release 5.3 is used, apply the same AIX APARs/filesets required for the RAC installation on AIX 5.3 (read the first chapters of this cookbook).

As the root user, execute: xhost +
Log in as oracle on the server which will host the Grid Control Management Server.

The Grid Control server must be different from the RAC nodes.

Then follow the procedure hereunder.

Set up and export your DISPLAY, TMP and TEMP variables:
- export DISPLAY=.......
- export TMP=/tmp
- export TEMP=/tmp
with /tmp or another destination having enough free space (about 500 MB).


If AIX 5L release 5.3 is used, modify the files oraparam.ini and cluster.ini in Disk1/installer: update the entries AIX5200 to AIX5300 in both files (see the sketch after this block), then execute:

$ /<cdrom_mount_point>/runInstaller

or execute: ./runInstaller -ignoreSysPrereqs

OUI (Oracle Universal Installer) checks the operating system requirements for AIX 5L 5.2.0.0. If AIX maintenance level 1, 2, 3 and/or 4 is installed, the installer will notice it (no further action) and will go to the next step. To check the AIX maintenance level installed on each node:

-> instfix -i | grep ML
All filesets for 5.2.0.0_AIX_ML were found.
All filesets for 5200-01_AIX_ML were found.
All filesets for 5200-02_AIX_ML were found.
All filesets for 5200-03_AIX_ML were found.
All filesets for 5200-04_AIX_ML were found.

OUI (Oracle Universal Installer) asks you to run rootpre.sh as root. At the OUI Welcome screen, click Next.
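A minimal sketch of the AIX5200 to AIX5300 update. The mount point /cdrom is an assumption; if the media is read-only, copy Disk1 to a writable location first.

cd /cdrom/Disk1/installer
for FILE in oraparam.ini cluster.ini
do
    cp ${FILE} ${FILE}.orig                          # keep a backup of the original file
    sed 's/AIX5200/AIX5300/g' ${FILE}.orig > ${FILE} # replace the certified OS release entry
done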

Make sure to execute rootpre.sh before you go to the next step.

Running rootpre.sh:
- Installs the Oracle Kernel Extension Loader for AIX
- Configures and loads pwsyscall.64bit_kernel
- Configures Asynchronous I/O and POSIX Asynchronous I/O
Then return to the previous screen to answer yes ...


RunInstaller and Java Cryptography Extension (JCE)

Provide the path to the jce-1_2_2.zip file

Then Return ...

RunInstaller and Java Cryptography Extension (JCE) Checking the Operating System Level Then Return ...

Welcome Screen

Then click Next ...


Specify Inventory directory and credentials: enter the inventory directory or validate the default directory, and choose the group dba.

Then click Next ...

OrainstRoot.sh

As root, execute orainstRoot.sh in the specified directory.

Then click Next ...

{node1:root}/oragridctl/oraInventory -> ./orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing groupname of /oragridctl/oraInventory to dba.
{node1:root}/oragridctl/oraInventory ->

Content of /etc/oraInst.loc :
inventory_loc=/oragridctl/oraInventory
inst_group=dba

OrainstRoot.sh

Then click Next ...


Specify File Locations

Enter the Grid Control ORACLE_HOME directory and name. Then click Next ...

Select a Product to Install

Select Enterprise Manager 10g Grid Control Using a New Database.

Then click Next ...

Product-specific Prerequisite Checks

Don't worry about the Not executed message; it only appears because the installed AIX maintenance level is higher than the one specified.

Then click Next ...


Set Up Administrator for Enterprise Manager

Enter password for sysman user

Choose email notification or not Then click Next ...

Sysman Password

The SYSMAN password must be at least 5 characters long and contain at least one number.

Then click OK ...

Specify Database Passwords

Specify the same password for all administrative users, or specify an individual password for each user.

Then click Next ...


Metalink and Proxy Information :

Enter Metalink user and password, if available ...

Then click Next ...

Privileged Operating System Groups

Specify the group dba for both groups.

Then click Next ...

Database Identification

Specify the Global Database Name. The SID Prefix is updated automatically (by default it is the Global Database Name).

Then click Next ...


Database File Location

Specify directory destination for Grid Control database datafiles

Then click Next ...

Summary

Then click Next ...

Install

Just wait while processing ...


Install : Setup Privileges

Need to execute root.sh script as root ...

Execute root.sh as root in the specified directory.

Then return to previous screen to click on Ok ...

Configuration Assistants

Check all status to be Succeeded

Then click Next ...


Database Configuration Assistant

Just wait while processing ...

Configuration Assistants

Carry on processing Check all status to be Succeeded

Then click Next ...


Configuration Assistants

Launching Second Installation Session for Oracle Agent

Then click Next ...

Specify File locations :

Enter the ORACLE_HOME name and directory destination for the Grid Agent. It must be a different ORACLE_HOME than the Grid Control ORACLE_HOME. Accept the default proposed values.

Then click Next ...

Select a product to install :

Choose Additional Management Agent

Then click Next ...


Specify Oracle Management Service Location :

Specify the name of the management server (the actual server on which you are running the Grid Control installation). Memorize the default management service port: 4889.

Then click Next ...

Warning :

Read the warning windows

Then click OK ...

Summary

Check the details and

Then click Install ...


Installation in progress Wait while processing ...

Then click Next ...

Setup Privileges :

Open a telnet session, and ... Execute root.sh as root

...

Executing root.sh as root

Then go back to previous screen to click OK ...


Configuration Assistants : the assistants will execute automatically. Verify that the status is Succeeded at the end, and retry if necessary.

Then click Next ...

End of Installation :

Access the README_EM.txt file for technical details.

Then click Exit ...


Check the running processes:

ps -ef | grep emrep
oracle 123082      1 0 12:15:58     -  0:00 ora_qmn0_emrep
oracle 172102      1 0 12:15:58     -  0:02 ora_ckpt_emrep
oracle 217310      1 0 12:15:58     -  0:00 ora_lgwr_emrep
oracle 233494      1 0 12:15:58     -  0:00 ora_cjq0_emrep
oracle 258186      1 0 12:15:58     -  0:00 ora_smon_emrep
oracle 303274      1 0 12:15:58     -  0:00 ora_dbw0_emrep
oracle 307318      1 0 12:15:58     -  0:00 ora_pmon_emrep
oracle 331942 139416 0 13:06:48 pts/0  0:00 grep emrep
oracle 479436      1 0 12:22:20     -  0:00 ora_j002_emrep
oracle 487664      1 0 12:15:58     -  0:00 ora_reco_emrep
oracle 503952      1 0 12:22:47     -  0:00 oracleemrep (LOCAL=NO)
oracle 516274      1 0 12:22:20     -  0:00 ora_j001_emrep
oracle 520250      1 0 12:22:51     -  0:00 oracleemrep (LOCAL=NO)
oracle 561186      1 0 12:22:20     -  0:12 ora_j000_emrep
oracle 589866      1 0 12:22:20     -  0:00 ora_j004_emrep
oracle 602154      1 0 12:22:20     -  0:00 ora_j003_emrep
oracle 610348      1 0 12:22:21     -  0:00 ora_j005_emrep
oracle 626846      1 0 12:22:52     -  0:09 oracleemrep (LOCAL=NO)
oracle 655480      1 0 12:22:51     -  0:01 oracleemrep (LOCAL=NO)
oracle 659588      1 0 12:22:51     -  0:00 oracleemrep (LOCAL=NO)
oracle 663696      1 0 12:22:52     -  0:00 oracleemrep (LOCAL=NO)
oracle 671816      1 0 12:22:51     -  0:01 oracleemrep (LOCAL=NO)
oracle 729092      1 0 12:59:39     -  0:02 oracleemrep (LOCAL=NO)
oracle 741454      1 0 12:59:39     -  0:00 oracleemrep (LOCAL=NO)

Then click Next ...

Check the Grid agent status:

emctl status agent
Oracle Enterprise Manager 10g Release 10.1.0.2.0.
Copyright (c) 1996, 2004 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Version                                      : 4.0.1.0.0
Agent Home                                   : /oragridctl/grid10g
Agent Process ID                             : 581686
Parent Process ID                            : 557086
Agent URL                                    : http://node4:1830/emd/main/
Started at                                   : 2004-09-21 12:21:51
Started by user                              : oracle
Last Reload                                  : 2004-09-21 12:21:51
Last successful upload                       : (none)
Last attempted upload                        : (none)
Total Megabytes of XML files uploaded so far : 0.00
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0.00
Available disk space on upload filesystem    : 37.82%
---------------------------------------------------------------
Agent is Running and Ready

Then click Next ...


To start/stop all components of Grid Control (OMS, HTTP Server, OC4J, WebCache, ...):

$ORACLE_HOME/opmn/bin/opmnctl startall
opmnctl: starting opmn and all managed processes...

$ORACLE_HOME/opmn/bin/opmnctl status

$ORACLE_HOME/opmn/bin/opmnctl stopall
opmnctl: stopping opmn and all managed processes...

To access the Enterprise Manager Grid Control HTML interface, add an entry for the OMS server in your local winnt/system32/drivers/etc/hosts file, e.g.:

10.2.12.21   dbs13

Then open:

http://<OmsServerName>:4889/em
User to log in : sysman
Password : ********


Post-Installation: Oracle Grid Control unix profile update (to be done on each node).

Oracle environment: edit the $HOME/.profile file in oracle's home directory and add the following entries:

export ORACLE_BASE=/oh10g
export ORACLE_HOME=$ORACLE_BASE/gridcontrol
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export PATH=$ORACLE_HOME/bin:$PATH
export AIXTHREAD_SCOPE=S
umask 022

15.2 METALINK NOTE

Doc ID: Note:266769.1
Subject: EM 10g GRID Control Release Notes 10.1.0.2.0
Type: README
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 29-MAR-2004
Last Revision Date: 04-AUG-2004


15.3 GRID CONTROL AGENT DEPLOYMENTS

As the root user on the first RAC cluster node, execute: xhost +
Log in as oracle and follow the procedure hereunder.

Set up and export your DISPLAY, TMP and TEMP variables:
export DISPLAY=.......
export TMP=/tmp
export TEMP=/tmp
with /tmp or another destination having enough free space (about 500 MB).

If AIX 5L release 5.3 is used, modify the files oraparam.ini and cluster.ini in Disk1/installer: update the entries AIX5200 to AIX5300 in both files, then execute:

$ /<cdrom_mount_point>/runInstaller

or execute: ./runInstaller -ignoreSysPrereqs

OUI (Oracle Universal Installer) checks the operating system requirements for AIX 5L 5.2.0.0. If AIX maintenance level 1, 2, 3 and/or 4 is installed, the installer will notice it (no further action) and will go to the next step. To check the AIX maintenance level installed on each node:

-> instfix -i | grep ML
All filesets for 5.2.0.0_AIX_ML were found.
All filesets for 5200-01_AIX_ML were found.
All filesets for 5200-02_AIX_ML were found.
All filesets for 5200-03_AIX_ML were found.
All filesets for 5200-04_AIX_ML were found.

OUI (Oracle Universal Installer) asks you to run rootpre.sh as root. At the OUI Welcome screen, click Next.

You must run rootpre.sh now if it has not already been done!


RunInstaller and Java Cryptography Extension (JCE)

Provide the path to the jce-1_2_2.zip file

Then Return ...

RunInstaller and Java Cryptography Extension (JCE) Checking the Operating System Level Then Return ...

Welcome Screen

Then click Next ...


Specify File Locations ... Specific ORACLE_HOME Name And ORACLE_HOME directory

Then click Next ...

Specify Hardware Cluster Installation Mode : Cluster Installation (select the node names), or Local Installation (you would then need to run the full installation process on each node). Then click Next ...

Select a product to install : Additional Management Agent

Then click Next ...


Checking the summary for remote nodes ...

Checking the summary ...

Then click Install ...

Installation in progress ...

Then click Next ...


Installation in progress ... Copying binaries from node 1 to remote nodes. Then click Next ...

Setup Privileges : A root.sh script has to be run on each node where Grid Control Agent is deployed.

Then click Next ...


Setup Privileges : Running root.sh on each nodes where the agent is deployed

Then click Next ...

Configuration Assistants : Going thru the Agent configuration assistant.

Then click Next ...


Configuration Assistants : If you get this error message, it means that the Grid Control Agent has been deployed and configured correctly on node 1 (able to start and stop the agent), but not completely on the other nodes. Solutions: run a local Grid Control Agent installation on each of the other nodes, or copy the binaries from node 1 to the others and apply the configuration file modifications on each node. Then click OK ...

Configuration Assistants :

Then click Next to Finish ...

To start/stop the Oracle Grid Control Intelligent Agent :
agentctl start
agentctl stop
agentctl status


16 APPENDIX A : LOGICAL VOLUMES CREATION.

This is for the raw devices implementation.

#!/bin/ksh
# Creation of a concurrent volume group
# To be executed on the primary node (node1)
# The major number (80, given with -V) has to be unused on all the nodes
mkvg -f -c -y oradatvg -V'80' -s'64' <disk list>

# Creation of the minimum logical volumes (raw devices)
# The number is the size of the LV (number of 64 MB physical partitions)
# To be executed on the primary node (node1)
mklv -y'ocr_disk'      -t'jfs2' oradatvg 2 ext_disk    # 128 MB
mklv -y'vote_disk'     -t'jfs2' oradatvg 2 ext_disk    # 128 MB

mklv -y'rac_system'    -t'jfs2' oradatvg 8 ext_disk    # 512 MB
mklv -y'rac_undotbs01' -t'jfs2' oradatvg 8 ext_disk    # 512 MB
mklv -y'rac_undotbs02' -t'jfs2' oradatvg 8 ext_disk    # 512 MB

mklv -y'log11'         -t'jfs2' oradatvg 4 ext_disk    # 256 MB
mklv -y'log12'         -t'jfs2' oradatvg 4 ext_disk    # 256 MB
mklv -y'log21'         -t'jfs2' oradatvg 4 ext_disk    # 256 MB
mklv -y'log22'         -t'jfs2' oradatvg 4 ext_disk    # 256 MB

mklv -y'rac_control01' -t'jfs2' oradatvg 1 ext_disk    # 64 MB
mklv -y'rac_control02' -t'jfs2' oradatvg 1 ext_disk    # 64 MB
mklv -y'rac_control03' -t'jfs2' oradatvg 1 ext_disk    # 64 MB

mklv -y'rac_spfile'    -t'jfs2' oradatvg 1 ext_disk    # 64 MB
mklv -y'rac_srvconfig' -t'jfs2' oradatvg 2 ext_disk    # 128 MB

mklv -y'rac_data'      -t'jfs2' oradatvg 8 ext_disk    # 512 MB
mklv -y'rac_index'     -t'jfs2' oradatvg 8 ext_disk    # 512 MB
mklv -y'rac_temp'      -t'jfs2' oradatvg 8 ext_disk    # 512 MB

varyoffvg oradatvg

# To be executed on the secondary node (node2)
importvg -y oradatvg -V80 <one of the disks>
varyonvg oradatvg
chvg -c oradatvg

# To be executed on all the nodes (node1 & node2) when oradatvg is varied on
chown oracle.dba /dev/r*
chmod go+rw /dev/r*
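A quick way to verify the result on each node, as a sketch only; the volume group and logical volume names are the ones used in the script above.

# on each node, once oradatvg is varied on
lsvg -l oradatvg                        # every logical volume should be listed with its LP/PP counts
ls -l /dev/rocr_disk /dev/rvote_disk /dev/rrac_* /dev/rlog*   # raw devices must be owned by oracle:dba and group/other writable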


17 APPENDIX B : EXAMPLES OF CONFIGURATION FILES

This appendix provides examples of the configuration files that are mentioned in the document.

17.1 NETWORK

/etc/hosts (on all nodes)

# Public Network
10.2.12.81    node1
10.2.12.82    node2
10.2.12.83    node3

# Virtual IP addresses
10.2.12.181   node1_vip
10.2.12.182   node2_vip
10.2.12.183   node3_vip

# Interconnect RAC & GPFS
10.10.12.81   node1_gpfs
10.10.12.82   node2_gpfs
10.10.12.83   node3_gpfs

/etc/hosts.equiv (on all nodes)

node1        root
node2        root
node3        root
node1_vip    root
node2_vip    root
node3_vip    root
node1_gpfs   root
node2_gpfs   root
node3_gpfs   root
node1        oracle
node2        oracle
node3        oracle
node1_vip    oracle
node2_vip    oracle
node3_vip    oracle
node1_gpfs   oracle
node2_gpfs   oracle
node3_gpfs   oracle

.rhosts

In root's and oracle's home directories, put the list of machines.

$HOME/.rhosts
node1        root
node2        root
node3        root
node1_vip    root
node2_vip    root
node3_vip    root
node1_gpfs   root
node2_gpfs   root
node3_gpfs   root
node1        oracle
node2        oracle
node3        oracle
node1_vip    oracle
node2_vip    oracle
node3_vip    oracle
node1_gpfs   oracle
node2_gpfs   oracle
node3_gpfs   oracle
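With the equivalence files in place, a quick check from node1 (hostnames as in the example above; a sketch only) is to verify that remote commands run without a password prompt, both as oracle and as root:

for NODE in node1 node2 node3
do
    rsh ${NODE} hostname     # must print the node name without asking for a password
done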


17.2 LISTENER.ORA AND TNSNAMES.ORA CONFIGURATION EXAMPLE

listener.ora

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = NODE1_VIP)(PORT = 1521))
      )
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /oracle/product/10.1.0)
      (SID_NAME = RAC1)
    )
  )

tnsnames.ora implementing TAF (Transparent Application Failover)

RAC2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2_vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = RAC)
      (INSTANCE_NAME = RAC2)
    )
  )

RAC1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1_vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = RAC)
      (INSTANCE_NAME = RAC1)
    )
  )

RAC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1_vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2_vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = RAC)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = PRECONNECT)
        (RETRIES = 20)
        (DELAY = 60)
      )
    )
  )
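To check the aliases from a client or from one of the nodes, a sketch only: RAC is the TAF alias defined above, and the system password is obviously site-specific.

# resolve and reach the listeners through the TAF alias
tnsping RAC

# connect through the TAF alias and display which instance served the session
sqlplus system/<password>@RAC <<EOF
select i.instance_name, s.failover_type, s.failover_method, s.failed_over
from   v\$session s, v\$instance i
where  s.audsid = userenv('SESSIONID');
exit
EOF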


18 APPENDIX C : ORACLE TECHNICAL NOTES

This appendix provides some useful notes coming from Oracle Support. These notes can be found on Metalink.

18.1 CRS AND 10G REAL APPLICATION CLUSTERS

Doc ID: Note:259301.1
Subject: CRS and 10g Real Application Clusters
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/XHTML
Creation Date: 05-DEC-2003
Last Revision Date: 15-MAR-2005

PURPOSE
-------

This document is to provide additional information on CRS (Cluster Ready Services) in 10g Real Application Clusters.

SCOPE & APPLICATION
-------------------
This document is intended for RAC Database Administrators and Oracle support engineers.

CRS and 10g REAL APPLICATION CLUSTERS
-------------------------------------
CRS (Cluster Ready Services) is a new feature for 10g Real Application Clusters that provides a standard cluster interface on all platforms and performs new high availability operations not available in previous versions.

CRS KEY FACTS
-------------
Prior to installing CRS and 10g RAC, there are some key points to remember about CRS and 10g RAC:

- CRS is REQUIRED to be installed and running prior to installing 10g RAC.
- CRS can either run on top of the vendor clusterware (such as Sun Cluster, HP Serviceguard, IBM HACMP, TruCluster, Veritas Cluster, Fujitsu Primecluster, etc...) or can run without the vendor clusterware. The vendor clusterware was required in 9i RAC but is optional in 10g RAC.
- The CRS HOME and ORACLE_HOME must be installed in DIFFERENT locations.
- Shared location(s) or devices for the Voting File and OCR (Oracle

Configuration Repository) file must be available PRIOR to installing CRS. The voting file should be at least 20MB and the OCR file should be at least 100MB.
- CRS and RAC require that the following network interfaces be configured prior to installing CRS or RAC:
  - Public Interface
  - Private Interface
  - Virtual (Public) Interface
  For more information on this, see Note 264847.1.
- The root.sh script at the end of the CRS installation starts the CRS stack. If your CRS stack does not start, see Note 240001.1.
- Only one set of CRS daemons can be running per RAC node.
- On Unix, the CRS stack is run from entries in /etc/inittab with "respawn".
- If there is a network split (nodes lose communication with each other), one or more nodes may reboot automatically to prevent data corruption.
- The supported method to start CRS is booting the machine. MANUAL STARTUP OF THE CRS STACK IS NOT SUPPORTED UNTIL 10.1.0.4 OR HIGHER.
- The supported method to stop it is to shut down the machine or to use "init.crs stop".
- Killing CRS daemons is not supported unless you are removing the CRS installation via Note 239998.1, because flag files can become mismatched.
- For maintenance, go to single user mode at the OS level.

Once the stack is started, you should be able to see all of the daemon processes with a ps -ef command:

[rac1]/u01/home/beta> ps -ef | grep crs
oracle  1363   999  0 11:23:21  ?  0:00 /u01/crs_home/bin/evmlogger.bin -o /u01
oracle   999     1  0 11:21:39  ?  0:01 /u01/crs_home/bin/evmd.bin
root    1003     1  0 11:21:39  ?  0:01 /u01/crs_home/bin/crsd.bin
oracle  1002     1  0 11:21:39  ?  0:01 /u01/crs_home/bin/ocssd.bin

CRS DAEMON FUNCTIONALITY
------------------------
Here is a short description of each of the CRS daemon processes:

CRSD:
- Engine for HA operation
- Manages 'application resources'
- Starts, stops, and fails 'application resources' over
- Spawns separate 'actions' to start/stop/check application resources

- Maintains configuration profiles in the OCR (Oracle Configuration Repository)
- Stores current known state in the OCR
- Runs as root
- Is restarted automatically on failure

OCSSD:
- OCSSD is part of RAC and of Single Instance with ASM
- Provides access to node membership
- Provides group services
- Provides basic cluster locking
- Integrates with existing vendor clusterware, when present
- Can also run without integration to vendor clusterware
- Runs as oracle
- Failure exit causes machine reboot --- this is a feature to prevent data corruption in the event of a split brain

EVMD:
- Generates events when things happen
- Spawns a permanent child evmlogger
- Evmlogger, on demand, spawns children
- Scans the callout directory and invokes callouts
- Runs as oracle
- Restarted automatically on failure

CRS LOG DIRECTORIES
-------------------
When troubleshooting CRS problems, it is important to review the directories under the CRS Home.

$ORA_CRS_HOME/crs/log - This directory includes traces for CRS resources that are joining, leaving, restarting, and relocating as identified by CRS.

$ORA_CRS_HOME/crs/init - Any core dumps for the crsd.bin daemon should be written here. Note 1812.1 could be used to debug these.

$ORA_CRS_HOME/css/log - The css logs indicate all actions such as reconfigurations, missed checkins, connects, and disconnects from the client CSS listener. In some cases the logger logs messages with the category of (auth.crit) for the reboots done by Oracle. This could be used for checking the exact time when the reboot occurred.

$ORA_CRS_HOME/css/init - Core dumps from the ocssd primarily, and the pid for the css daemon whose death is treated as fatal, are located here. If there are abnormal restarts for css then the core files will have the format core.<pid>. Note 1812.1 could be used to debug these.

$ORA_CRS_HOME/evm/log - Log files for the evm and evmlogger daemons. Not used as often for debugging as the CRS and CSS directories.

$ORA_CRS_HOME/evm/init - Pid and lock files for EVM. Core files for EVM should also be written here. Note 1812.1 could be used to debug these.

$ORA_CRS_HOME/srvm/log - Log files for OCR.

STATUS FOR CRS RESOURCES
------------------------
After installing RAC and running the VIPCA (Virtual IP Configuration Assistant) launched with the RAC root.sh, you should be able to see all of your CRS resources with crs_stat. Example:

cd $ORA_CRS_HOME/bin
./crs_stat

NAME=ora.rac1.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac1.oem
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac1.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac1.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.oem
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE


There is also a script available to view CRS resources in a format that is easier to read. Just create a shell script with:

--------------------------- Begin Shell Script ------------------------------
#!/usr/bin/ksh
#
# Sample 10g CRS resource status query script
#
# Description:
#    - Returns a formatted version of crs_stat -t, in tabular
#      format, with the complete rsc names and filtering keywords
#    - The argument, $RSC_KEY, is optional and if passed to the script, will
#      limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
#    - $ORA_CRS_HOME should be set in your environment

RSC_KEY=$1
QSTAT=-u
AWK=/usr/xpg4/bin/awk    # if not available use /usr/bin/awk

# Table header:
echo ""
$AWK \
  'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
          printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'

# Table body:
$ORA_CRS_HOME/bin/crs_stat $QSTAT | $AWK \
  'BEGIN { FS="="; state = 0; }
  $1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
  state == 0 {next;}
  $1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
  $1~/STATE/ && state == 2 {appstate = $2; state=3;}
  state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'
--------------------------- End Shell Script --------------------------------

Example output:

[opcbsol1]/u01/home/usupport> ./crsstat
HA Resource                           Target     State
-----------                           ------     -----
ora.V10SN.V10SN1.inst                 ONLINE     ONLINE on opcbsol1
ora.V10SN.V10SN2.inst                 ONLINE     ONLINE on opcbsol2
ora.V10SN.db                          ONLINE     ONLINE on opcbsol2
ora.opcbsol1.ASM1.asm                 ONLINE     ONLINE on opcbsol1
ora.opcbsol1.LISTENER_OPCBSOL1.lsnr   ONLINE     ONLINE on opcbsol1
ora.opcbsol1.gsd                      ONLINE     ONLINE on opcbsol1
ora.opcbsol1.ons                      ONLINE     ONLINE on opcbsol1
ora.opcbsol1.vip                      ONLINE     ONLINE on opcbsol1
ora.opcbsol2.ASM2.asm                 ONLINE     ONLINE on opcbsol2
ora.opcbsol2.LISTENER_OPCBSOL2.lsnr   ONLINE     ONLINE on opcbsol2
ora.opcbsol2.gsd                      ONLINE     ONLINE on opcbsol2
ora.opcbsol2.ons                      ONLINE     ONLINE on opcbsol2
ora.opcbsol2.vip                      ONLINE     ONLINE on opcbsol2

CRS RESOURCE ADMINISTRATION

---------------------------
You can use srvctl to manage these resources. Below are syntax and examples.

------------------------------------------------------------------------------
CRS RESOURCE STATUS

srvctl status database -d <database-name> [-f] [-v] [-S <level>]
srvctl status instance -d <database-name> -i <instance-name>[,<instance-name-list>] [-f] [-v] [-S <level>]
srvctl status service -d <database-name> -s <service-name>[,<service-name-list>] [-f] [-v] [-S <level>]
srvctl status nodeapps [-n <node-name>]
srvctl status asm -n <node_name>

EXAMPLES:
Status of the database, all instances and all services.
  srvctl status database -d ORACLE -v
Status of named instances with their current services.
  srvctl status instance -d ORACLE -i RAC01, RAC02 -v
Status of a named service.
  srvctl status service -d ORACLE -s ERP -v
Status of all nodes supporting database applications.
  srvctl status node

------------------------------------------------------------------------------
START CRS RESOURCES

srvctl start database -d <database-name> [-o <start-options>] [-c <connect-string> | -q]
srvctl start instance -d <database-name> -i <instance-name>[,<instance-name-list>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start service -d <database-name> [-s <service-name>[,<service-name-list>]] [-i <instance-name>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start nodeapps -n <node-name>
srvctl start asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]

EXAMPLES:
Start the database with all enabled instances.
  srvctl start database -d ORACLE
Start named instances.
  srvctl start instance -d ORACLE -i RAC03, RAC04
Start named services. Dependent instances are started as needed.
  srvctl start service -d ORACLE -s CRM
Start a service at the named instance.
  srvctl start service -d ORACLE -s CRM -i RAC04
Start node applications.
  srvctl start nodeapps -n myclust-4

------------------------------------------------------------------------------
STOP CRS RESOURCES

srvctl stop database -d <database-name> [-o <stop-options>] [-c <connect-string> | -q]
srvctl stop instance -d <database-name> -i <instance-name>[,<instance-name-list>] [-o <stop-options>] [-c <connect-string> | -q]
srvctl stop service -d <database-name> [-s <service-name>[,<service-name-list>]] [-i <instance-name>] [-c <connect-string> | -q] [-f]
srvctl stop nodeapps -n <node-name>
srvctl stop asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]

EXAMPLES:
Stop the database, all instances and all services.
  srvctl stop database -d ORACLE
Stop named instances, first relocating all existing services.
  srvctl stop instance -d ORACLE -i RAC03,RAC04
Stop the service.
  srvctl stop service -d ORACLE -s CRM
Stop the service at the named instances.
  srvctl stop service -d ORACLE -s CRM -i RAC04
Stop node applications. Note that instances and services also stop.
  srvctl stop nodeapps -n myclust-4

------------------------------------------------------------------------------
ADD CRS RESOURCES

srvctl add database -d <name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-A <name|ip>/netmask] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}] [-s <start_options>] [-n <db_name>]
srvctl add instance -d <name> -i <inst_name> -n <node_name>
srvctl add service -d <name> -s <service_name> -r <preferred_list> [-a <available_list>] [-P <TAF_policy>] [-u]
srvctl add nodeapps -n <node_name> -o <oracle_home> [-A <name|ip>/netmask[/if1[|if2|...]]]
srvctl add asm -n <node_name> -i <asm_inst_name> -o <oracle_home>

OPTIONS:
-A   vip range, node, and database, address specification. The format of the address string is:
     [<logical host name>]/<VIP address>/<net mask>[/<host interface1[ | host interface2 |..]>] [,] [<logical host name>]/<VIP address>/<net mask>[/<host interface1[ | host interface2 |..]>]
-a   for services, list of available instances; this list cannot include preferred instances
-m   domain name with the format us.mydomain.com
-n   node name that will support one or more instances
-o   $ORACLE_HOME to locate Oracle binaries
-P   for services, TAF preconnect policy - NONE, PRECONNECT
-r   for services, list of preferred instances; this list cannot include available instances
-s   spfile name
-u   updates the preferred or available list for the service to support the specified instance. Only one instance may be specified with the -u switch. Instances that already support the service should not be included.

EXAMPLES:
Add a new node:
  srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A 139.184.201.1/255.255.255.0/hme0
Add a new database.
  srvctl add database -d ORACLE -o $ORACLE_HOME
Add named instances to an existing database.
  srvctl add instance -d ORACLE -i RAC01 -n myclust-1
  srvctl add instance -d ORACLE -i RAC02 -n myclust-2
  srvctl add instance -d ORACLE -i RAC03 -n myclust-3
Add a service to an existing database with preferred instances (-r) and available instances (-a). Use basic failover to the available instances.
  srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04
Add a service to an existing database with preferred instances in list one and available instances in list two. Use preconnect at the available instances.
  srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04 -P PRECONNECT

------------------------------------------------------------------------------
REMOVE CRS RESOURCES

srvctl remove database -d <database-name>
srvctl remove instance -d <database-name> [-i <instance-name>]
srvctl remove service -d <database-name> -s <service-name> [-i <instance-name>]
srvctl remove nodeapps -n <node-name>

EXAMPLES:
Remove the applications for a database.
  srvctl remove database -d ORACLE
Remove the applications for named instances of an existing database.
  srvctl remove instance -d ORACLE -i RAC03
  srvctl remove instance -d ORACLE -i RAC04
Remove the service.
  srvctl remove service -d ORACLE -s STD_BATCH
Remove the service from the instances.
  srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04
Remove all node applications from a node.
  srvctl remove nodeapps -n myclust-4


-----------------------------------------------------------------------------MODIFY CRS RESOURCES srvctl modify database -d <name> [-n <db_name] [-o <ohome>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}] [-s <start_options>] srvctl modify instance -d <database-name> -i <instance-name> -n <node-name> srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r} srvctl modify service -d <database-name> -s <service_name> -i <instancename> -t <instance-name> [-f] srvctl modify service -d <database-name> -s <service_name> -i <instancename> -r [-f] srvctl modify nodeapps -n <node-name> [-A <address-description> ] [-x] OPTIONS: -i <instance-name> -t <instance-name> the instance name (-i) is replaced by the instance name (-t) -i <instance-name> -r the named instance is modified to be a preferred instance -A address-list for VIP application, at node level -s <asm_inst_name> add or remove ASM dependency EXAMPLES: Modify an instance to execute on another node. srvctl modify instance -d ORACLE -n myclust-4 Modify a service to execute on another node. srvctl modify service -d ORACLE -s HOT_BATCH -i RAC01 -t RAC02 Modify an instance to be a preferred instance for a service. srvctl modify service -d ORACLE -s HOT_BATCH -i RAC02 r -----------------------------------------------------------------------------RELOCATE SERVICES srvctl relocate service -d <database-name> -s <service-name> [-i <instancename >]-t<instance-name > [-f] EXAMPLES: Relocate a service from one instance to another srvctl relocate service -d ORACLE -s CRM -i RAC04 -t RAC01


-----------------------------------------------------------------------------ENABLE CRS RESOURCES (The resource may be up or down to use this function) srvctl enable database -d <database-name> srvctl enable instance -d <database-name> -i <instance-name> [,<instancename-list>] srvctl enable service -d <database-name> -s <service-name>] [, <servicename-list>] [-i <instance-name>] EXAMPLES: Enable the database. srvctl enable database -d ORACLE Enable the named instances. srvctl enable instance -d ORACLE -i RAC01, RAC02 Enable the service. srvctl enable service -d ORACLE -s ERP,CRM Enable the service at the named instance. srvctl enable service -d ORACLE -s CRM -i RAC03 -----------------------------------------------------------------------------DISABLE CRS RESOURCES (The resource must be down to use this function) srvctl disable database -d <database-name> srvctl disable instance -d <database-name> -i <instance-name> [,<instancename-list>] srvctl disable service -d <database-name> -s <service-name>] [,<servicename-list>] [-i <instance-name>] EXAMPLES: Disable the database globally. srvctl disable database -d ORACLE Disable the named instances. srvctl disable instance -d ORACLE -i RAC01, RAC02 Disable the service globally. srvctl disable service -d ORACLE -s ERP,CRM Disable the service at the named instance. srvctl disable service -d ORACLE -s CRM -i RAC03,RAC04 -----------------------------------------------------------------------------For more information on this see the Oracle10g Real Application Clusters Administrators Guide - Appendix B RELATED DOCUMENTS ----------------Oracle10g Real Application Clusters Installation and Configuration Oracle10g Real Application Clusters Administrators Guide


18.2 ABOUT RAC ...

Subject / Doc ID / Modified Date / Product / Platform / Type

1  Minimum software versions and patches required to Support Oracle Products on ...
   282036.1 - 21-FEB-2005 - Oracle Server - Enterprise Edition - IBM RS 6000 AIX 5L - BULLETIN
2  Pre-Install checks for 10g RDBMS on AIX
   283743.1 - 10-DEC-2004 - OSS Support Tools - IBM RS 6000 AIX 5L - RULE SETS
3  RAC: Frequently Asked Questions
   220970.1 - 16-MAR-2005 - Oracle Server - Enterprise Edition - Generic - FAQ
4  Raw Devices and Cluster Filesystems With Real Application Clusters
   183408.1 - 19-FEB-2005 - Oracle Server - Enterprise Edition - Generic - BULLETIN
5  10g Installation on Aix 5.3, Failed with Checking operating system version mu...
   293750.1 - 02-MAR-2005 - Oracle Server - Enterprise Edition - AIX-Based Systems (64-bit) - PROBLEM


18.3 ABOUT CRS ...

Subject / Doc ID / Modified Date / Product / Platform / Type

1  10G: How to Stop the Cluster Ready Services (CRS)
   263897.1 - 14-SEP-2004 - Oracle Server - Enterprise Edition - Generic - HOWTO
2  How to verify if CRS install is Valid
   295871.1 - 04-FEB-2005 - Oracle Server - Enterprise Edition - Generic - HOWTO
3  10g RAC: Troubleshooting CRS Reboots
   265769.1 - 16-MAR-2005 - Oracle Server - Enterprise Edition - Generic - TROUBLESHOOTING
4  CRS and 10g Real Application Clusters
   259301.1 - 15-MAR-2005 - Oracle Server - Enterprise Edition - Generic - BULLETIN
5  Repairing or Restoring an Inconsistent OCR in RAC
   268937.1 - 02-MAR-2005 - Oracle Server - Enterprise Edition - Generic - BULLETIN
6  Placement of voting and OCR disk files in 10gRAC
   293819.1 - 01-MAR-2005 - Oracle Server - Enterprise Edition - Generic - BULLETIN
7  10g RAC: How to Clean Up After a Failed CRS Install
   239998.1 - 23-FEB-2005 - Oracle Server - Enterprise Edition - Generic - BULLETIN
8  CRS 10g Diagnostic Collection Guide
   272332.1 - 15-NOV-2004 - Oracle Server - Enterprise Edition - Generic - BULLETIN
9  How to Restore a Lost Voting Disk in 10g
   279793.1 - 09-OCT-2004 - Oracle Server - Enterprise Edition - Generic - PROBLEM
10 10g RAC: Stopping Reboot Loops When CRS Problems Occur
   239989.1 - 03-MAY-2004 - Oracle Server - Enterprise Edition - Generic - PROBLEM
11 HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
   298073.1 - 11-FEB-2005 - Enterprise Manager for RDBMS - Generic - BULLETIN
12 HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
   298069.1 - 11-FEB-2005 - Enterprise Manager for RDBMS - Generic - BULLETIN
14 CRS Home Is Only Partially Copied to Remote Node
   284949.1 - 05-DEC-2004 - Oracle Server - Enterprise Edition - Generic - PROBLEM
15 How to recreate ONS,GSD,VIP deleted from ocr by crs_unregister
   285046.1 - 08-OCT-2004 - Oracle Server - Enterprise Edition - Generic - HOWTO

18.4 ABOUT VIP ...

Subject / Doc ID / Modified Date / Product / Platform / Type

1  Configuring the IBM AIX 5L Operating System for the Oracle 10g VIP
   296856.1 - 17-FEB-2005 - Oracle Server - Enterprise Edition - BULLETIN
2  Changing the check interval for the Oracle 10g VIP
   294336.1 - 23-FEB-2005 - Oracle Server - Enterprise Edition - Generic - BULLETIN
2  Modifying the VIP of a Cluster Node
   276434.1 - 13-JAN-2005 - Oracle Server - Enterprise Edition - Generic - BULLETIN
3  Modifying the default gateway address used by the Oracle 10g VIP
   298895.1 - 07-MAR-2005 - Oracle Server - Enterprise Edition - Generic - BULLETIN
4  How to Configure Virtual IPs for 10g RAC
   264847.1 - 24-FEB-2005 - Oracle Server - Enterprise Edition - Generic - HOWTO

18.5 ABOUT MANUAL DATABASE CREATION ...

1  10g Manual Database Creation in Oracle (Single Instance and RAC)
   240052.1 - 07-DEC-2004 - Oracle Diagnostics Pack - Generic - BULLETIN

18.6 ABOUT GRID CONTROL ...

1  Enterprise Manager Grid Control 10.1.0.3.0 Release Notes
   284707.1 - 04-OCT-2004 - Oracle Enterprise Manager - Generic - BULLETIN
2  EM 10G Grid Control Preinstall Steps for AIX 5.2
   277420.1 - 06-JUL-2004 - Enterprise Manager Core - IBM RS 6000 AIX 5L

18.7 ABOUT TAF ...

1  Troubleshooting TAF Issues in 10g RAC
   271297.1 - 11-MAY-2004 - Oracle Net Services - Generic - BULLETIN

18.8 ABOUT ADDING/REMOVING NODE ...

1  Removing a Node from a 10g RAC Cluster
   269320.1 - 06-OCT-2004 - Oracle Server - Enterprise Edition - Generic - BULLETIN
2  Adding a Node to a 10g RAC Cluster
   270512.1 - 14-JAN-2005 - Oracle Server - Enterprise Edition - Generic - BULLETIN

18.9 ABOUT ASM ...

    Subject                                                                   Doc ID     Modified Date
1   10G New Storage Features and Enhancements                                 243245.1   04-AUG-2004
2   Re-creating ASM Instances and Diskgroups                                  268481.1   24-FEB-2005
3   SGA sizing for ASM instances and databases that use ASM                   282777.1   10-SEP-2004
4   Creating an ASM-enabled Database                                          274738.1   05-AUG-2004
5   New Feature on ASM (Automatic Storage Manager)                            249992.1   05-AUG-2004
6   Steps To Migrate Database From Non-ASM to ASM And Vice-Versa              252219.1   04-JUN-2004
7   How To Move Archive Files from ASM                                        293234.1   07-DEC-2004
8   Manage ASM instance: creating diskgroup, adding/dropping/resizing disks   270066.1   05-AUG-2004
9   How To Delete Archive Log Files Out Of +ASM?                              300472.1   04-MAR-2005
10  ASM Technical Best Practices                                              265633.1   16-JUN-2004
    (for the full article, download Automatic Storage Management (154K/pdf)
    from http://metalink.oracle.com/metalink/plsql/docs/ASM.pdf)
11  Oracle ASM and Multi-Pathing Technologies                                 294869.1   03-JAN-2005


18.10 NOTE #2064876.102 : HOW TO SETUP HIGH AVAILABILITY GROUP SERVICES (HAGS) ON IBM AIX/RS6000.

Article-ID:       <Note:2064876.102>
Circulation:      PUBLISHED (EXTERNAL)
Folder:           server.OPS.Parallelserver
Topic:            - - - IBM RS6000 and SP
Title:            How to setup High Availability Group Services (HAGS) on IBM AIX/RS6000
Document-Type:    BULLETIN
Impact:           LOW
Skill-Level:      CASUAL
Server-Version:   08.01.XX.XX.XX
Updated-Date:     04-FEB-2002 08:21:25
References:
Shared-Refs:
Authors:          BLEVE.US
Attachments:      NONE
Content-Type:     TEXT/PLAIN
Keywords:         LMON; OPS; PARALLEL; SERVER; SUBCOMP-OPS;
Products:         236/RDBMS (08.01.XX.XX.XX);
Platforms:        319 (4.3);

Purpose
=======

This article gives quick reference instructions on how to configure High Availability
Group Services (HAGS) on IBM AIX RS6000 for Oracle 8.1.X.

Scope and Application
=====================
These instructions are helpful to any customer using Oracle on IBM AIX/RS6000 on which
HACMP is installed.

How to Configure High Availability Group Services (HAGS)
=========================================================
In order to configure High Availability Group Services (HAGS), you need to be connected
as root. Do the following on all nodes that form the cluster:

1. Create the "hagsuser" group and place "oracle" into the "hagsuser" group.

   Verify that the group does not exist:

      # grep hagsuser /etc/group

   If this returns nothing, do the following:

      # smitty groups

   Select "Add a Group" and fill in the following:

      Group Name   ----> hagsuser
      USER list    ----> oracle

   You can take the defaults for the other settings. Also note that after the group is
   created you will have to log out and log back in as "oracle" to be sure "oracle" is
   part of the "hagsuser" group.


2. Change the permissions on the "cldomain" executable:

      # chmod a+x /usr/sbin/cluster/utilities/cldomain

3. Change the group to "hagsuser" for the "grpsvcsdsocket.rac_cluster" socket:

      # chgrp hagsuser /var/ha/soc/grpsvcsdsocket.rac_cluster

4. Change the group permissions for the "grpsvcsdsocket.rac_cluster" socket:

      # chmod g+w /var/ha/soc/grpsvcsdsocket.rac_cluster

The HAGS socket needs to be writeable by "oracle" and the "cldomain" executable needs to
be executable by "oracle". By configuring the group and permissions for the
"grpsvcsdsocket.rac_cluster" file, the instance will be able to communicate with HAGS and
the instance will mount.

References
==========
Oracle Installation Guide for AIX RS6000, release 8.1.5.

Search Words
============
OPS HAGS RS6000

You should have something like the following:

   # ls -l /var/ha/soc
   total 4
   srw-rw-rw-   1 root   haemrm       0 Jun 19 16:58 em.clsrv.rac92_cluster
   srw-rw----   1 root   haemrm       0 Jun 19 16:58 em.rmsrv.rac92_cluster
   drwxrwxrwx   2 root   system     512 Jun 19 17:51 grpsvcs.clients.rac92_cluster
   srw-rw-rw-   1 root   hagsuser     0 Jun 19 16:57 grpsvcsdsocket.rac92_cluster
   drwxrwx---   2 root   haemrm     512 Jun 19 16:57 haem
   drwxr-x---   2 root   system     512 Apr 02 23:30 hats
   drwxr-xr-x   2 root   system     512 Jun 19 16:57 topsvcs
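Taken together, the steps from this note can be collected into one small script to run as
root on every node. This is only a sketch: the cluster name rac_cluster is the example
used in the note and must be replaced by your own HACMP cluster name, and the group can
just as well be created through smitty groups as shown above.

   #!/bin/ksh
   # Sketch of the HAGS setup from Note 2064876.102 - run as root on every cluster node.
   CLUSTER=rac_cluster                      # replace with your HACMP cluster name

   # 1. Create the hagsuser group with oracle as a member (command-line equivalent
   #    of the smitty dialogue above), only if it does not exist yet.
   grep -q "^hagsuser:" /etc/group || mkgroup users=oracle hagsuser

   # 2. Make the cldomain utility executable by everyone (including oracle).
   chmod a+x /usr/sbin/cluster/utilities/cldomain

   # 3. and 4. Give the hagsuser group write access to the HAGS socket.
   chgrp hagsuser /var/ha/soc/grpsvcsdsocket.${CLUSTER}
   chmod g+w      /var/ha/soc/grpsvcsdsocket.${CLUSTER}

Remember that the oracle user still has to log out and log back in before the new group
membership becomes effective.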


18.11 NOTE #115792.1 : HOW TO SETUP THE HACMP CLUSTER INTERCONNECT ADAPTER.

Article-ID:       <Note:115792.1>
Circulation:      PENDING_DELETE (EXTERNAL)
Folder:           server.OPS.Parallelserver
Topic:            - - - IBM RS6000 and SP
Title:            AIX: ORA-600 [KCCSBCK_FIRST] starting up second OPS Instance
Document-Type:    BULLETIN
Impact:           LOW
Skill-Level:      NOVICE
Server-Version:   08.01.06.0X to 08.01.06.0X
Updated-Date:     05-FEB-2002 13:10:49
References:
Shared-Refs:
Authors:          RKIRCHHE.DE
Attachments:      NONE
Content-Type:     TEXT/PLAIN
Keywords:         HACMP; OPS; ORA-600; [KCCSBCK_FIRST];
Errors:
Products:         5/RDBMS;
Platforms:        319;

*************************************************************
This article is being delivered in Draft form and may contain
errors. Please use the MetaLink "Feedback" button to advise
Oracle of any issues related to this article.
*************************************************************

PURPOSE
-------
This article helps to resolve problems with Oracle Parallel Server startup related to the
HACMP configuration.

SCOPE & APPLICATION
-------------------
How to setup the HACMP cluster interconnect adapter
----------------------------------------------------
Oracle Parallel Server software is successfully installed. The first OPS instance starts
without errors. Trying to start a second OPS instance on another cluster node fails with
ORA-600 [KCCSBCK_FIRST].

$ORACLE_HOME/bin/lsnodes will list all cluster nodes.
/usr/sbin/cluster/diag/clverify doesn't show any errors.

Check the HACMP interconnect network adapter configuration with
/usr/sbin/cluster/utilities/cllsif. cllsif on a working configuration should look like
this:

   Adapter              Type      Network        Net Type  Attribute  Node    IP Address     Interface Name
   interconnect_node1   service   rac92_network  ether     private    node1   10.10.11.141   en3
   interconnect_node2   service   rac92_network  ether     private    node2   10.10.11.142   en5

RELATED DOCUMENTS
-----------------
<List related manuals, articles and other documents.>
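As a quick way to run the checks this note describes, the three commands it references
can be executed in sequence. The paths below are the ones given in the note; they may
differ on later HACMP (5.x) installations:

   # Checks suggested by Note 115792.1 when the second instance fails with ORA-600 [KCCSBCK_FIRST]
   $ORACLE_HOME/bin/lsnodes                     # must list all cluster nodes
   /usr/sbin/cluster/diag/clverify              # must not report any errors
   /usr/sbin/cluster/utilities/cllsif           # interconnect adapters must appear as shown above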



19 APPENDIX D : HACMP CLUSTER VERIFICATION OUTPUT

All the following outputs come from an HACMP configuration for a raw devices
implementation, which included a parallel instance database (HACMP concurrent mode) and a
single instance database (HACMP rotating mode). (Information about the concurrent mode
configuration is in bold.)

Cluster description : output from a cllscf command.

   Node node1
      Interfaces to network net_ether_01
         Communication Interface: Name node1_gpfs, Attribute private, IP address 10.10.12.81
   Node node2
      Interfaces to network net_ether_01
         Communication Interface: Name node2_gpfs, Attribute private, IP address 10.10.12.82

   Cluster Description of Cluster ora9irac
   There were 1 networks defined: net_ether_01
   There are 3 nodes in this cluster

   NODE node1:
      This node has 1 service IP label(s):
      Service IP Label node1_gpfs:
         IP address:         10.10.12.81
         Hardware Address:
         Network:            net_ether_01
         Attribute:          private
         Aliased Address?:   Not Supported
      Service IP Label node1_gpfs has no communication interfaces.
      Service IP Label node1_gpfs has no communication interfaces for recovery.

   NODE node8:
      This node has 2 service IP label(s):
      Service IP Label node8_gpfs:
         IP address:         10.10.12.88
         Hardware Address:
         Network:            net_ether_02
         Attribute:          private
         Aliased Address?:   Not Supported
      Service IP Label node8_gpfs has no communication interfaces.
      Service IP Label node8_gpfs has no communication interfaces for recovery.
      Service IP Label node8:
         IP address:         10.2.12.88
         Hardware Address:
         Network:            net_ether_03
         Attribute:          private
         Aliased Address?:   Not Supported
      Service IP Label node8 has no communication interfaces.
      Service IP Label node8 has no communication interfaces for recovery.

   Breakdown of network connections:

   Connections to network net_ether_02
      Node node5 is connected to network net_ether_02 by these interfaces:
         node5_gpfs
      Node node8 is connected to network net_ether_02 by these interfaces:
         node8_gpfs

   Connections to network net_ether_03
      Node node5 is connected to network net_ether_03 by these interfaces:
         node5
      Node node8 is connected to network net_ether_03 by these interfaces:
         node8

Cluster network interfaces : output from a cllsif command.


   Adapter              Type      Network        Net Type  Attribute  Node    IP Address     Interface Name  Netmask
   interconnect_node1   service   rac92_network  ether     private    node1   10.10.12.141   en3
   interconnect_node2   service   rac92_network  ether     private    node2   10.10.12.142   en5
   node5_gpfs           service   net_ether_02   ether     private    node5   10.10.12.85    en0             255.255.255.0
   node5                service   net_ether_03   ether     private    node5   10.2.12.85     en1             255.255.255.0
   node8_gpfs           service   net_ether_02   ether     private    node8   10.10.12.88    en1             255.255.255.0
   node8                service   net_ether_03   ether     private    node8   10.2.12.88     en0             255.255.255.0


Cluster resource group : output from a clshowres command.


   Resource Group Name                          racdisk
   Node Relationship                            concurrent
   Participating Node Name(s)                   node1 node2
   Dynamic Node Priority
   Service IP Label
   Filesystems
   Filesystems Consistency Check                fsck
   Filesystems Recovery Method                  sequential
   Filesystems/Directories to be exported
   Filesystems to be NFS mounted
   Network For NFS Mount
   Volume Groups
   Concurrent Volume Groups                     oradatavg
   Disks
   Connections Services
   Fast Connect Services
   Shared Tape Resources
   Application Servers
   Highly Available Communication Links
   Miscellaneous Data
   Automatically Import Volume Groups           false
   Inactive Takeover                            false
   Cascading Without Fallback                   false
   9333 Disk Fencing                            false
   SSA Disk Fencing                             false
   Filesystems mounted before IP configured     false

   Run Time Parameters:

   Node Name                                    node1
   Debug Level                                  high
   Host uses NIS or Name Server                 false
   Format for hacmp.out                         Standard

   Node Name                                    node2
   Debug Level                                  high
   Host uses NIS or Name Server                 false
   Format for hacmp.out                         Standard
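For reference, the three listings above can be regenerated on your own cluster with the
standard HACMP utilities. The path used below is an assumption (HACMP/ES 5.1 normally
installs the utilities under /usr/es/sbin/cluster/utilities, older levels use
/usr/sbin/cluster/utilities):

   # Collect the HACMP verification output shown in this appendix (run as root on one node)
   UTILS=/usr/es/sbin/cluster/utilities         # adjust to your installation
   $UTILS/cllscf      > /tmp/cllscf.out         # cluster description
   $UTILS/cllsif      > /tmp/cllsif.out         # cluster network interfaces
   $UTILS/clshowres   > /tmp/clshowres.out      # cluster resource groups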


20 APPENDIX F : FILESETS TO BE INSTALLED ON THE MACHINES OF THE CLUSTER

This appendix provides the list of filesets which must be installed for each kind of
implementation.

20.1 RSCT 2.3.4 (PROVIDED WITH AIX 5.2) FOR ALL IMPLEMENTATIONS

   rsct.basic.rte                 2.3.4.0   RSCT Basic Function
   rsct.compat.basic.rte          2.3.4.0   RSCT Event Management Basic Function
   rsct.compat.clients.rte        2.3.4.0   RSCT Event Management Client Function
   rsct.core.auditrm              2.3.4.0   RSCT Audit Log Resource Manager
   rsct.core.errm                 2.3.4.0   RSCT Event Response Resource Manager
   rsct.core.fsrm                 2.3.4.0   RSCT File System Resource Manager
   rsct.core.gui                  2.3.4.0   RSCT Graphical User Interface
   rsct.core.hostrm               2.3.4.0   RSCT Host Resource Manager
   rsct.core.rmc                  2.3.4.0   RSCT Resource Monitoring and Control
   rsct.core.sec                  2.3.4.0   RSCT Security
   rsct.core.sensorrm             2.3.4.0   RSCT Sensor Resource Manager
   rsct.core.sr                   2.3.4.0   RSCT Registry
   rsct.core.utils                2.3.4.0   RSCT Utilities
   rsct.msg.EN_US.core.auditrm
   rsct.msg.EN_US.core.errm       2.3.0.0
   rsct.msg.EN_US.core.fsrm       2.3.0.0
   rsct.msg.EN_US.core.gui        2.3.0.0
   rsct.msg.EN_US.core.hostrm
   rsct.msg.EN_US.core.rmc        2.3.0.0
   rsct.msg.EN_US.core.sec        2.3.0.0
   rsct.msg.EN_US.core.sensorrm
   rsct.msg.EN_US.core.sr         2.3.0.0
   rsct.msg.EN_US.core.utils      2.3.0.0
   rsct.msg.en_US.core.auditrm
   rsct.msg.en_US.core.errm       2.3.0.0
   rsct.msg.en_US.core.fsrm       2.3.0.0
   rsct.msg.en_US.core.gui        2.3.0.0
   rsct.msg.en_US.core.gui.com
   rsct.msg.en_US.core.hostrm
   rsct.msg.en_US.core.rmc        2.3.0.0
   rsct.msg.en_US.core.rmc.com
   rsct.msg.en_US.core.sec        2.3.0.0
   rsct.msg.en_US.core.sensorrm
   rsct.msg.en_US.core.sr         2.3.0.0
   rsct.msg.en_US.core.utils      2.3.0.0

20.2 WHEN IMPLEMENTING HACMP

20.2.1 HACMP 5.1 filesets

Result of the command : lslpp -L | grep cluster

   cluster.adt.es.client.demos
   cluster.adt.es.client.include
   cluster.adt.es.client.samples.clinfo
   cluster.adt.es.client.samples.clstat
   cluster.adt.es.client.samples.demos
   cluster.adt.es.client.samples.libcl
   cluster.adt.es.java.demo.monitor
   cluster.adt.es.server.demos
   cluster.adt.es.server.samples.demos
   cluster.adt.es.server.samples.images
   cluster.es.client.lib          5.1.0.5   ES Client Libraries
   cluster.es.client.rte          5.1.0.3   ES Client Runtime
   cluster.es.client.utils        5.1.0.4   ES Client Utilities
   cluster.es.clvm.rte            5.1.0.0   ES for AIX Concurrent Access
   cluster.es.cspoc.cmds          5.1.0.5   ES CSPOC Commands
   cluster.es.cspoc.dsh           5.1.0.0   ES CSPOC dsh
   cluster.es.cspoc.rte           5.1.0.5   ES CSPOC Runtime Commands
   cluster.es.server.diag         5.1.0.4   ES Server Diags
   cluster.es.server.events       5.1.0.5   ES Server Events
   cluster.es.server.rte          5.1.0.5   ES Base Server Runtime
   cluster.es.server.utils        5.1.0.5   ES Server Utilities
   cluster.license                5.1.0.0   HACMP Electronic License

20.2.2 RSCT 2.3.4 filesets for HACMP implementation

Add these three filesets to the ones mentioned in section 20.1 (RSCT 2.3.4, provided with
AIX 5.2, for all implementations):

   rsct.basic.hacmp               2.3.4.0   RSCT Basic Function (HACMP/ES Support)
   rsct.compat.basic.hacmp        2.3.4.0   RSCT Event Management Basic Function (HACMP/ES Support)
   rsct.compat.clients.hacmp      2.3.4.0   RSCT Event Management Client Function (HACMP/ES Support)

20.3 FOR GPFS IMPLEMENTATION

   mmfs.base.cmds                 3.5.0.6   GPFS File Manager Commands
   mmfs.base.rte                  3.5.0.9   GPFS File Manager
   mmfs.gpfs.rte                  2.1.0.9   GPFS File Manager
   mmfs.gpfsdocs.data             3.5.0.0   GPFS Server Manpages and Documentation
   mmfs.msg.en_US                 3.5.0.0   GPFS Server Messages - U.S. English
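To check quickly that the required filesets are present on each node before going
further, the lslpp command already used above can be extended. The grep pattern is just
an example and should be adapted to the implementation (HACMP, GPFS or both) you
selected:

   # List the RSCT, HACMP and GPFS filesets installed on this node
   lslpp -L | egrep "rsct|cluster|mmfs"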

