
Start the whole database (all database instances)

$ srvctl start database -d SID

Check the status of the database

$ srvctl status database -d SID

Stop the database


$ srvctl stop database -d SID
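The status output of these commands is regular enough to check from a script. A minimal sketch, using sample text in place of live output (on a real cluster you would pipe `srvctl status database -d SID` in instead):

```shell
#!/bin/sh
# Sketch: count how many instances srvctl reports as running.
# The sample text imitates `srvctl status database` output shown
# later in this document; it is not live cluster output.
count_running() {
    grep -c ' is running on node'
}

sample='Instance AKA1 is running on node gentic
Instance AKA2 is running on node cellar'

printf '%s\n' "$sample" | count_running
```

Printing `2` here means both instances are up; the same filter works for any database name.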

Managing RAC instances with srvctl


Start the instance on one of the nodes (in this case we start the instance on node 2)
$ srvctl start instance -d SID -i SID2

Check the status of the instance


$ srvctl status instance -d SID -i SID2

Stop the instance
$ srvctl stop instance -d SID -i SID2

Managing the listener per node with srvctl


Start the listener on a node
$ srvctl start listener -l SID_ORACLE1

Check the status of the listener on a node


$ srvctl status listener -l SID_ORACLE1

Stop the listener on a node


$ srvctl stop listener -l SID_ORACLE1

Managing RAC node applications (VIP, GSD, ASM listener, ONS) with srvctl

Start the RAC node applications (VIP, GSD, ASM listener, ONS) on a node.
$ srvctl start nodeapps -n oracle1

The instances and services are started as well.

Check the status of VIP, GSD, ASM listener, ONS on a node.

$ srvctl status nodeapps -n oracle1

Stop the RAC node applications (VIP, GSD, ASM listener, ONS) on a node.
$ srvctl stop nodeapps -n oracle1
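When the same nodeapps commands have to run on every node, a loop keeps them consistent. A sketch, with hypothetical node names and a dry-run guard so the commands are only printed, not executed:

```shell
#!/bin/sh
# Sketch: run the same nodeapps command on each node in turn.
# NODES and the DRY_RUN guard are assumptions for illustration;
# unset DRY_RUN to execute the real srvctl commands.
DRY_RUN=1
NODES="oracle1 oracle2"

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

for node in $NODES; do
    run srvctl status nodeapps -n "$node"
done
```

The same `run` wrapper works for the start and stop variants; only the srvctl arguments change.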

Check the created database

The srvctl utility shows the current configuration and status of the RAC database.

Display the configuration for the AKA cluster database:

oracle@gentic> srvctl config database -d AKA
gentic AKA1 /u01/app/oracle/product/11.1.0
cellar AKA2 /u01/app/oracle/product/11.1.0

Status of all instances and services:

oracle@gentic> srvctl status database -d AKA
Instance AKA1 is running on node gentic
Instance AKA2 is running on node cellar

Status of node applications on a particular node:

oracle@gentic> srvctl status nodeapps -n gentic
VIP is running on node: gentic
GSD is running on node: gentic
Listener is running on node: gentic
ONS daemon is running on node: gentic

Display the configuration for node applications (VIP, GSD, ONS, Listener):

oracle@gentic> srvctl config nodeapps -n gentic -a -g -s -l
VIP exists.: /gentic-vip/192.168.138.130/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.

All running instances in the cluster:

sqlplus system/manager@AKA1

SELECT inst_id, instance_number, instance_name, parallel, status,
       database_status, active_state, host_name host
FROM   gv$instance
ORDER BY inst_id;

   INST_ID INSTANCE_NUMBER INSTANCE_NAME    PAR STATUS   DATABASE_STATUS ACTIVE_ST HOST
---------- --------------- ---------------- --- -------- --------------- --------- -------
         1               1 AKA1             YES OPEN     ACTIVE          NORMAL    gentic
         2               2 AKA2             YES OPEN     ACTIVE          NORMAL    cellar

SELECT name   FROM v$datafile
UNION
SELECT member FROM v$logfile
UNION
SELECT name   FROM v$controlfile
UNION
SELECT name   FROM v$tempfile;

NAME
--------------------------------------
/u01/oradat/AKA/control01.ctl
/u01/oradat/AKA/control02.ctl
/u01/oradat/AKA/control03.ctl
/u01/oradat/AKA/redo01.log
/u01/oradat/AKA/redo02.log
/u01/oradat/AKA/redo03.log
/u01/oradat/AKA/redo04.log
/u01/oradat/AKA/sysaux01.dbf
/u01/oradat/AKA/system01.dbf
/u01/oradat/AKA/temp01.dbf
/u01/oradat/AKA/undotbs01.dbf
/u01/oradat/AKA/undotbs02.dbf
/u01/oradat/AKA/users01.dbf

The V$ACTIVE_INSTANCES view can also display the current status of the instances.

SELECT * FROM v$active_instances;

INST_NUMBER INST_NAME
----------- ------------------
          1 gentic:AKA1
          2 cellar:AKA2

Finally, the GV$ views allow you to display global information for the whole RAC.

SELECT inst_id, username, sid, serial#
FROM   gv$session
WHERE  username IS NOT NULL;

   INST_ID USERNAME          SID    SERIAL#
---------- ---------- ---------- ----------
         1 SYSTEM            113        137
         1 DBSNMP            114        264
         1 DBSNMP            116         27
         1 SYSMAN            118          4
         1 SYSMAN            121         11
         1 SYSMAN            124         25
         1 SYS               125         18
         1 SYSMAN            126         14
         1 SYS               127          7
         1 DBSNMP            128        370
         1 SYS               130         52
         1 SYS               144          9
         1 SYSTEM            170        608
         2 DBSNMP            117        393
         2 SYSTEM            119       1997
         2 SYSMAN            123         53
         2 DBSNMP            124         52
         2 SYS               127        115
         2 SYS               128        126
         2 SYSMAN            129        771
         2 SYSMAN            134         18
         2 DBSNMP            135          5
         2 SYSMAN            146         42
         2 SYSMAN            170         49

The Enterprise Manager Console is shared across the whole cluster; in our example it listens on https://gentic:1158/em

Stopping the Oracle RAC 11g Environment

At this point, we've installed and configured Oracle RAC 11g entirely and have a fully functional clustered database. All services, including Oracle Clusterware, all Oracle instances, and the Enterprise Manager Database Console, will start automatically on each reboot of the Linux nodes. There are times, however, when you might want to shut down a node and manually start it back up, or you may find that Enterprise Manager is not running and need to start it.

oracle@gentic> emctl stop dbconsole
oracle@gentic> srvctl stop instance -d AKA -i AKA1
oracle@gentic> srvctl stop nodeapps -n gentic

oracle@cellar> emctl stop dbconsole
oracle@cellar> srvctl stop instance -d AKA -i AKA2
oracle@cellar> srvctl stop nodeapps -n cellar

Starting the Oracle RAC 11g Environment

The first step is to start the node applications (Virtual IP, GSD, TNS Listener, and ONS). When the node applications are successfully started, then bring up the Oracle instance (and related services) and the Enterprise Manager Database Console.

oracle@gentic> srvctl start nodeapps -n gentic
oracle@gentic> srvctl start instance -d AKA -i AKA1
oracle@gentic> emctl start dbconsole

oracle@cellar> srvctl start nodeapps -n cellar
oracle@cellar> srvctl start instance -d AKA -i AKA2
oracle@cellar> emctl start dbconsole

Start/Stop All Instances with SRVCTL

Start or stop all the instances and their enabled services:

oracle@gentic> srvctl start database -d AKA
oracle@gentic> srvctl stop database -d AKA

SQL> COLUMN instance_name   FORMAT a13
     COLUMN host_name       FORMAT a9
     COLUMN failover_method FORMAT a15
     COLUMN failed_over     FORMAT a11

SELECT DISTINCT
       v.instance_name   AS instance_name,
       v.host_name       AS host_name,
       s.failover_type   AS failover_type,
       s.failover_method AS failover_method,
       s.failed_over     AS failed_over
FROM   v$instance v, v$session s
WHERE  s.username = 'SYSTEM';

INSTANCE_NAME HOST_NAME FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- --------- ------------- --------------- -----------
AKA2          cellar    SELECT        BASIC           NO

oracle@cellar> srvctl status database -d AKA
Instance AKA1 is running on node gentic
Instance AKA2 is running on node cellar

oracle@cellar> srvctl stop instance -d AKA -i AKA2 -o abort

oracle@cellar> srvctl status database -d AKA
Instance AKA1 is running on node gentic
Instance AKA2 is not running on node cellar

Now let's go back to our SQL session on VIPER and rerun the SQL statement:

SQL> COLUMN instance_name   FORMAT a13
     COLUMN host_name       FORMAT a9
     COLUMN failover_method FORMAT a15
     COLUMN failed_over     FORMAT a11

SELECT DISTINCT
       v.instance_name   AS instance_name,
       v.host_name       AS host_name,
       s.failover_type   AS failover_type,
       s.failover_method AS failover_method,
       s.failed_over     AS failed_over
FROM   v$instance v, v$session s
WHERE  s.username = 'SYSTEM';

INSTANCE_NAME HOST_NAME FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- --------- ------------- --------------- -----------
AKA1          gentic    SELECT        BASIC           YES
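Status lines like these can also be checked mechanically, for example when scripting a failover test. A sketch using sample text in place of live srvctl output:

```shell
#!/bin/sh
# Sketch: decide from `srvctl status database` output whether a
# given instance is down. The sample text stands in for the live
# command on a real cluster.
status='Instance AKA1 is running on node gentic
Instance AKA2 is not running on node cellar'

is_down() {
    printf '%s\n' "$status" | grep -q "Instance $1 is not running"
}

if is_down AKA2; then
    echo "AKA2 is down"
fi
```

Matching on the full "Instance NAME is not running" phrase avoids false positives from the "is running" lines of healthy instances.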

# Start the database
$ srvctl start database -d DAPRO

# Check the status of the database
$ srvctl status database -d DAPRO
Instance DAPRO01 is running on node saturno01lx
Instance DAPRO02 is running on node saturno02lx
Instance DAPRO03 is running on node saturno03lx
Instance DAPRO04 is running on node saturno04lx

# Stop the database
$ srvctl stop database -d DAPRO

b) Start / stop / check the status of a single instance. We will use instance 2 of the RAC as the example.

# Start the instance
$ srvctl start instance -d DAPRO -i DAPRO02

# Check the status of the instance
$ srvctl status instance -d DAPRO -i DAPRO02
Instance DAPRO02 is running on node saturno02lx

# Stop the instance
$ srvctl stop instance -d DAPRO -i DAPRO02

c) Start / stop / check the status of ASM instances.


For this example we will use instance number three, where we will run the tests.

# Start the ASM instance on node 3.
$ srvctl start asm -n saturno03lx

# Check the status of the ASM instance.
$ srvctl status asm -n saturno03lx
ASM instance +ASM3 is running on node saturno03lx.

# Stop the ASM instance on node 3.
$ srvctl stop asm -n saturno03lx

d) Start / stop / check the status of listeners.


# Start the listener on a node.
$ srvctl start listener -l DAPRO_SATURNO01LX

# Check the status of the listener on a node.
$ srvctl status listener -l DAPRO_SATURNO01LX

# Stop the listener on a node.
$ srvctl stop listener -l DAPRO_SATURNO01LX

e) Start / stop / check the status of VIP, GSD, ASM listener, ONS.


# Start the RAC node applications (VIP, GSD, ASM listener, ONS) on a node.
$ srvctl start nodeapps -n saturno01lx

# Check the status of VIP, GSD, ASM listener, ONS on a node.
$ srvctl status nodeapps -n saturno01lx
VIP is running on node: saturno01lx
GSD is running on node: saturno01lx
Listener is running on node: saturno01lx
ONS daemon is running on node: saturno01lx

# Stop the RAC node applications (VIP, GSD, ASM listener, ONS) on a node.
$ srvctl stop nodeapps -n saturno01lx

Start a RAC database (order: nodeapps, ASM, database):

srvctl start nodeapps -n nodename
srvctl start asm -n nodename
srvctl start database -d dbname

Options are:

srvctl start database -d dbname -o open | -o mount | -o nomount

Stop a RAC database (order: database, ASM, nodeapps):

srvctl stop database -d dbname -o immediate

Options are:

srvctl stop database -d dbname -o normal | -o transactional | -o immediate | -o abort
srvctl stop asm -n nodename

Options are:

srvctl stop asm -n nodename -o immediate
srvctl stop nodeapps -n nodename

To check status and configurations:

Nodeapps:
srvctl status nodeapps -n nodename
srvctl config nodeapps -n nodename

ASM:
srvctl status asm -n nodename
srvctl config asm -n nodename

Database:
srvctl status database -d dbname
srvctl config database -d dbname (shows instance names, node, and Oracle home)

Instance:
srvctl status instance -d dbname -i instancename

Services:
srvctl status service -d dbname

To start and stop instances:

srvctl start instance -d dbname -i instancename
srvctl stop instance -d dbname -i instancename

To start, stop, and manage services:

srvctl status service -d dbname
srvctl config service -d dbname
srvctl start service -d dbname -s servicename
srvctl stop service -d dbname -s servicename
srvctl relocate service -d dbname -s servicename -i instancename -t newinstancename [-
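The start/stop ordering above (nodeapps, then ASM, then the database on the way up; the reverse on the way down) can be wrapped in small shell functions. A sketch with placeholder database and node names and a dry-run guard so the commands are only printed:

```shell
#!/bin/sh
# Sketch of the documented start/stop ordering. DB and NODES are
# placeholders, and DRY_RUN makes the script print commands instead
# of executing them; unset it to run the real srvctl calls.
DRY_RUN=1
DB=dbname
NODES="node1 node2"

run() {
    if [ -n "$DRY_RUN" ]; then echo "would run: $*"; else "$@"; fi
}

start_rac() {
    # order going up: nodeapps, then ASM, then the database
    for n in $NODES; do run srvctl start nodeapps -n "$n"; done
    for n in $NODES; do run srvctl start asm -n "$n"; done
    run srvctl start database -d "$DB"
}

stop_rac() {
    # order going down: database, then ASM, then nodeapps
    run srvctl stop database -d "$DB" -o immediate
    for n in $NODES; do run srvctl stop asm -n "$n"; done
    for n in $NODES; do run srvctl stop nodeapps -n "$n"; done
}

start_rac
```

Encoding the ordering in functions means a single call per direction, so nobody has to remember which resource comes first on a 4 a.m. restart.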
