
Step-by-Step Guide to Installing Oracle 11g R2 RAC

Server Information

In this class each student is provided with three machines. We will perform the two-node RAC installation first, and use the third node for the node addition and removal classes.

Name            Public IP      Private IP     VIP            OS Version   Database Version
wysheid11gr21   10.17.57.121   10.17.56.121   10.17.57.221   OEL 5.4      11.2.0.1
wysheid11gr22   10.17.57.122   10.17.56.122   10.17.57.222   OEL 5.4      11.2.0.1
wysheid11gr23   10.17.57.123   10.17.56.123   10.17.57.223   OEL 5.4      11.2.0.1

SCAN addresses (resolved by scan-cluster.wysheid; see Additional Information below):
10.17.57.160, 10.17.57.161, 10.17.57.162

RPM Requirements

64-bit (x86_64) Installations:

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel-4.1.2
make-3.81
pdksh-5.2.14
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)

A quick check for missing packages is sketched after this list.
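To find out which of these packages are missing, you can run a loop like the following as root (a minimal sketch: it checks package names only, not the minimum versions or the 32-bit variants, which you should verify against the list above):

for pkg in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
           gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh \
           libaio libaio-devel libgcc libstdc++ libstdc++-devel make \
           pdksh sysstat unixODBC unixODBC-devel; do
    # Report any required package that is not installed
    rpm -q "$pkg" >/dev/null 2>&1 || echo "MISSING: $pkg"
done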

Client Software Required

From Lab:
1) Putty
2) Xmanager
This software is installed on your lab machines.

From Home:
1) Putty
2) NX Client for Windows (please refer to the NX client setup mail for setting it up)

Additional Information

SCAN cluster name : scan-cluster.wysheid
      $ nslookup scan-cluster.wysheid        -- returns 3 IP addresses
Cluster name      : scan-cluster
Software location : /wysheid on each node

The scan cluster name will be provided separately for each installation.

By convention, the # symbol marks commands to be executed as the root user, and the $ symbol marks commands to be executed as the grid or oracle user.

Prerequisite Check

1) A minimum of 10 GB free space is required on all nodes of the cluster.
   # df -h
2) A minimum of 1.5 GB of RAM is required.
   # free -m
3) Make sure the /etc/hosts file contains entries for all the nodes involved in the installation. You can edit /etc/hosts as the root user.

You must have entries similar to these in each machine's /etc/hosts file:

================================================
10.17.57.121   wysheid11gr21       wysheid11gr21.wysheid
10.17.57.122   wysheid11gr22       wysheid11gr22.wysheid
10.17.57.221   wysheid11gr21-vip   wysheid11gr21-vip.wysheid
10.17.57.222   wysheid11gr22-vip   wysheid11gr22-vip.wysheid
10.17.56.102   openfiler1
================================================

4) Verify that /etc/resolv.conf points to the DNS server. You should have an entry similar to this:

   nameserver 10.17.57.155

Log in as the root user and edit the file if the nameserver entry is missing or incorrect.

5) Make sure you are able to resolve the scan cluster name given to you.

   # nslookup scan-cluster.wysheid

This command should return 3 IP addresses.
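A consolidated sanity check covering items 1-5 might look like this (a sketch, assuming the node names and the /wysheid location used in this guide; run as root on each node):

df -h /wysheid                      # item 1: at least 10 GB free
free -m                             # item 2: at least 1.5 GB RAM
for h in wysheid11gr21 wysheid11gr22 wysheid11gr21-vip wysheid11gr22-vip; do
    # item 3: each name must resolve via /etc/hosts or DNS
    getent hosts "$h" >/dev/null || echo "CANNOT RESOLVE: $h"
done
grep nameserver /etc/resolv.conf    # item 4: should show your DNS server
nslookup scan-cluster.wysheid       # item 5: should return 3 IP addresses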

Common Terminology

$GRID_HOME : the location at which the Grid Infrastructure software is installed
$ORACLE_HOME : the location at which the RDBMS software is installed
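For the lab directories used in this guide, these variables might be set in the users' shell profiles like this (illustrative only; the paths come from the "Create directories" step below):

# In the grid user's ~/.bash_profile (sketch):
export ORACLE_BASE=/wysheid/11.2.0
export GRID_HOME=/wysheid/grid_home

# In the oracle user's ~/.bash_profile (sketch):
export ORACLE_BASE=/wysheid/11.2.0
export ORACLE_HOME=/wysheid/db_home1
export PATH=$ORACLE_HOME/bin:$PATH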

Installation Steps

User creation

These steps have to be performed on both nodes of the cluster.

# groupadd -g 500 oinstall
# groupadd -g 501 dba
# groupadd -g 503 asmadmin
# groupadd -g 504 asmdba
# groupadd -g 505 asmoper

# useradd -u 501 -g oinstall -G asmadmin,asmoper,asmdba grid
# useradd -u 500 -g oinstall -G dba,asmdba oracle
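You can verify the users and their group memberships with id; given the commands above, the output should look like this (group ordering may vary):

# id grid
uid=501(grid) gid=500(oinstall) groups=500(oinstall),503(asmadmin),504(asmdba),505(asmoper)
# id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba),504(asmdba)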

Unzip the software

# cd /wysheid
# unzip linux.x64_11gR2_database_1of2.zip
# unzip linux.x64_11gR2_database_2of2.zip
# unzip linux.x64_11gR2_grid.zip

Create directories and grant required permissions (has to be done on all nodes of the cluster)

Oracle Base
# mkdir -p /wysheid/11.2.0
# chown -R grid:oinstall /wysheid/11.2.0

Oracle Inventory
# mkdir -p /wysheid/orainventory
# chown -R grid:oinstall /wysheid/orainventory

Grid Home
# mkdir -p /wysheid/grid_home
# chown -R grid:oinstall /wysheid/grid_home

Oracle Home
# mkdir -p /wysheid/db_home1
# chown -R oracle:oinstall /wysheid/db_home1

# chmod -R 775 <all directories>
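The same can be done in one pass with a small loop (a sketch using only the paths and ownership from the steps above; run as root on every node):

# Grid-owned directories
for d in /wysheid/11.2.0 /wysheid/orainventory /wysheid/grid_home; do
    mkdir -p "$d"
    chown -R grid:oinstall "$d"
    chmod -R 775 "$d"
done
# Oracle-owned RDBMS home
mkdir -p /wysheid/db_home1
chown -R oracle:oinstall /wysheid/db_home1
chmod -R 775 /wysheid/db_home1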

Configure oracleasm (has to be done on both nodes)

# rpm -qa | grep oracleasm

# /usr/sbin/oracleasm configure -i

When prompted for the ASM owner, provide grid. When prompted for the ASM group, provide asmadmin.
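The interactive session looks roughly like this (prompt wording may differ slightly between oracleasm versions):

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y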

Load the oracleasm driver as follows:

# /usr/sbin/oracleasm init

Check the status of the oracleasm driver. The status should be ON before you proceed with the next steps:

# /usr/sbin/oracleasm status

Create ASM disks for the OCR/voting disk and the database (this has to be done from any one node)

The partitioned, unformatted disks presented to you are as follows:

o /dev/sdb1 - 2 GB in size, used for CRS/voting disk
o /dev/sdc1 - 8 GB in size, used for the database

You can confirm the above with the following command, which displays the partition names and sizes:

# fdisk -l

Create the ASM disks as follows. This is to be done from one node only:

# /usr/sbin/oracleasm createdisk VOTE1 /dev/sdb1    -- give the 2 GB disk here
# /usr/sbin/oracleasm createdisk DATA1 /dev/sdc1    -- give the 8 GB disk here

Scan and list the disks (do this from both nodes):

# /usr/sbin/oracleasm scandisks
# /usr/sbin/oracleasm listdisks    -- this should list all the disks created above
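Given the createdisk commands above, listdisks should print:

VOTE1
DATA1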

Grid infrastructure installation

We need to perform the installation from the first node. The installer will then propagate the installation to the other node as well.

$ log in as the grid user
$ cd /wysheid/grid        -- assuming this is the directory where the grid software is unzipped
$ set the DISPLAY as explained in the lab; make sure it is working with the xclock command
$ start the installation as follows

(i) $ ./runInstaller
(ii) Select the option "Install grid infrastructure for a cluster"
(iii) Select the ADVANCED option
(iv) Select the language as ENGLISH
(v) Uncheck the GNS option
(vi) Provide the cluster name as scan-cluster, the SCAN name as scan-cluster.wysheid, and the SCAN port as 1521
(vii) On the Configure Nodes page, the details of the node from which the installation was started will be available by default. Click Add and provide the public name and VIP name of the second node as specified in the /etc/hosts file.

Click on SSH Connectivity and set up SSH. Test it and make sure it is working fine. You have to provide the grid user's password to set up SSH.

(viii) On the network configuration page, select eth1 as the private interconnect
(ix) On the Configure Storage page, select ASM
(x) On the Create ASM Disk Group page, give the disk group name as CRS, select redundancy as External, and select ORCL:VOTE1 as the disk
(xi) On the Specify ASM Password page, give a common password for all users
(xii) Do not use IPMI
(xiii) Configure the operating system privileged groups as follows:
      ASM instance administrator    asmadmin
      ASM database administrator    asmdba
      ASM operator                  asmoper
      Please pay special attention on this page, or the installation may fail.
(xiv) Set the installation locations as follows:
      Oracle Base = /wysheid/11.2.0
      Oracle Home = /wysheid/grid_home        -- this is your Grid Infrastructure home
(xv) Specify the inventory location as /wysheid/orainventory
(xvi) Run the prerequisite checks and fix any errors. Use the fixup scripts if necessary.
(xvii) Click FINISH to start the installation
(xviii) Run orainstRoot.sh and root.sh on both nodes as specified in the GUI. Please be careful at this step; the scripts must be run in order: complete orainstRoot.sh on node1 and then node2, then run root.sh on node1 and then node2.

The sample order is:

Node               Script
======             ==========
wysheid11gr21      /wysheid/orainventory/orainstRoot.sh
wysheid11gr22      /wysheid/orainventory/orainstRoot.sh
wysheid11gr21      /wysheid/grid_home/root.sh
wysheid11gr22      /wysheid/grid_home/root.sh

(xix) Continue the installation once root.sh has completed on both nodes

Verifying that the cluster installation succeeded

$ log in as the grid user on the first node
$ . oraenv        -- give +ASM1 as the SID
$ crsctl check cluster -all

If the installation succeeded, we can see the following background processes online on both nodes:

ohasd
crsd
cssd
evmd
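One quick way to confirm the daemons are actually running on a node (a sketch; note that the cssd process appears as ocssd.bin in the process list):

# ps -ef | egrep 'ohasd|crsd|ocssd|evmd' | grep -v grep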

Change permissions on the oracle executable

To avoid a bug in which DBCA fails to display the disk groups for the oracle user, perform the following:

# cd $GRID_HOME/bin        -- use the path of the Grid Home
# ls -al oracle
# chmod 6751 oracle
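After the chmod, ls -al should show the setuid and setgid bits on the binary, something like this (size and date will differ):

# ls -al oracle
-rwsr-s--x 1 grid oinstall <size> <date> oracle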

Disk Group Creation for Database

This disk group is used to store the database files.

Log in as the grid user on the first node and set the DISPLAY.

$ . oraenv        -- note there is a space after the dot; when prompted, give +ASM1
$ asmca

In the GUI, select the Create New Disk Group option. Give the disk group the name DATA, choose External redundancy, and choose the disk ORCL:DATA1. Select Create Disk Group and make sure that the disk group is mounted on both nodes.
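As an alternative to the GUI, the same disk group can be created from SQL*Plus on the ASM instance (a sketch; asmca remains the method used in this class):

$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA1';

-- and on the second node's ASM instance, mount it:
SQL> ALTER DISKGROUP DATA MOUNT;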

RDBMS Installation

$ log in as the oracle user
$ cd /wysheid/database        -- assuming this is the location where the RDBMS software is unzipped
$ ./runInstaller

Choose the RAC installation
Choose the "Install software only" option
Choose Enterprise Edition
Choose the install locations as follows:
      ORACLE_HOME = /wysheid/db_home1
      ORACLE_BASE = /wysheid/11.2.0

Choose the privileged operating system groups as follows:
      SYSDBA group    dba
      SYSOPER group   dba

Complete the prerequisite checks. Run the fixup script if required. Click FINISH to complete the installation.

Database Creation

$ log in as the oracle user on node1 and set the DISPLAY
$ cd $ORACLE_HOME/bin
$ ./dbca

Choose RAC database
Choose Custom Database
Choose ASM as the storage option
Choose a common location for all datafiles; choose the disk group +DATA
Choose the same disk group for the Flash Recovery Area as well
Choose the memory as 40%
Complete the database creation

Confirming the success of the installation

$ log in as the grid user on the first node
$ . oraenv        -- give the instance name as +ASM1
$ crs_stat -t

If the installation is successful, we can see all the resources online except GSD and OC4J.
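You can also check the database resource specifically (a sketch; replace <dbname> with the name you gave in DBCA):

$ srvctl status database -d <dbname>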
