How to manage DB Control 10.2 for RAC Database with emca
Doc ID: 395162.1
Modified Date: 21-JUL-2008
In this Document
Purpose
Scope and Application
How to manage DB Control 10.2 for RAC Database with emca
Difference between DB Control for RAC 10.1.x.x and DB Control for RAC 10.2.x.x
Environment: RAC Database 10.2.0.2 with 3 instances running on a Cluster with 3 nodes
Configure DB Control for a RAC database running 3 instances on a 3 RAC cluster node
Reconfigure DB Control to have 2 dbconsole started and running to manage a RAC database running 3
instances on a 3 RAC cluster node
Remove an instance from DB Control monitoring
Add an instance to DB Control monitoring
Drop DB Control keeping the repository
Example: DB Control deconfiguration removing the repository
References
Applies to:
Enterprise Manager Grid Control - Version: 10.2.0.2 to 10.2.0.4
Information in this document applies to any platform.
Purpose
This bulletin clarifies how to set up, configure, and deconfigure DB Control 10.2.x.x for
a RAC database 10.2.x.x.
It illustrates the relevant emca commands with examples and explains the resulting
directory structure.
This bulletin is not intended to replace the Oracle documentation, but to illustrate it.
For a complete overview of emca, please refer to the following documentation available
at OTN:
http://download.oracle.com/docs/cd/B16240_01/doc/nav/portal_booklist.htm
Enterprise Manager Advanced Configuration (Available in both HTML and PDF format)
Chapter 1.2.6 Configuring Database Control During and After the Oracle Database 10g
Installation
Topic 1.2.6.5 Using EMCA with Real Application Clusters
The RAC database has been created manually without using dbca; therefore emca has not
been run, and no DB Control repository has been created in the RAC database.
The RAC database is also not the hosting database for a Grid Control repository.
This can be checked by running the following SQL statement while connected to the
database as a DBA user:
SQL> select username from DBA_USERS where username = 'SYSMAN';
If the SQL statement returns 'SYSMAN', a DB Control repository or a Grid Control
repository is already present in the database.
To distinguish a Grid Control repository from a DB Control repository, check the
tablespaces: a DB Control repository is created with its objects in the SYSAUX
tablespace, while a Grid Control repository is created with its objects in
MGMT_TABLESPACE and MGMT_ECM_DEPOT_TS.
The tablespaces can be listed with:
SQL> select tablespace_name from dba_tablespaces order by 1;
If a Grid Control repository or a DB Control repository already exists and you want to
drop it because you are sure it is decommissioned, you can use the deconfiguration
process described later in this document (see "Example: DB Control deconfiguration
removing the repository").
The resulting sub-directories of the RDBMS ORACLE_HOME will be the same on each
node of the cluster:
$ORACLE_HOME/node1.mycompany.com_EM1021
$ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_node1.mycompany.com_EM1021
$ORACLE_HOME/node2.mycompany.com_EM1022
$ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_node2.mycompany.com_EM1022
$ORACLE_HOME/node3.mycompany.com_EM1023
$ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_node3.mycompany.com_EM1023
The other directories are merely "witnesses" of the current DB Control configuration on
the cluster; only the active directory on each node is actually used.
Note: If you deploy another DB Control for another RAC database on the same cluster, a
new set of directories will likewise be created under the RDBMS ORACLE_HOME on each
node of the cluster.
The sub-directories are always named <hostname>_<SID>.
On node i, the active sub-directory is the one named after node i's hostname and
instance SID (for example, node1.mycompany.com_EM1021 on node1).
To check the configuration files, log files, targets.xml, or the contents of the upload
and recv directories, you must look only into the "active" directory.
This means that to check the agent's emd.properties file on node1, you must log in to
node1 and look into
$ORACLE_HOME/node1.mycompany.com_EM1021/sysman/config.
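Following this naming rule, the path to a node's agent configuration can be derived mechanically. A minimal sketch, using the hypothetical hostname, SID, and ORACLE_HOME values from this note:

```shell
# Build the path to the agent's emd.properties inside the "active"
# DB Control directory for a given node. The values below are the
# example values used throughout this note, not a real environment.
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db
NODE_HOSTNAME=node1.mycompany.com
ORACLE_SID=EM1021

# The active directory is always <hostname>_<SID> under ORACLE_HOME.
echo "${ORACLE_HOME}/${NODE_HOSTNAME}_${ORACLE_SID}/sysman/config/emd.properties"
# prints: /u01/app/oracle/product/10.2.0/db/node1.mycompany.com_EM1021/sysman/config/emd.properties
```

The same construction applies on every node; only the hostname and SID change.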
Similarly, if you want to check the targets.xml of the agent on node3, you must log in
to node3 and look into the active directory, or:
Log in to node3, set the following environment variables:
- ORACLE_HOME
- ORACLE_SID
- PATH (so that it includes $ORACLE_HOME/bin)
and run:
$ emctl config agent listtargets
<Targets AGENT_SEED="231030912">
<Target TYPE="oracle_emd" NAME="node1.mycompany.com:3938"/>
<Target TYPE="host" NAME="node1.mycompany.com">
<CompositeMembership>
<MemberOf TYPE="cluster" NAME="mycrsname"
ASSOCIATION="cluster_member"/>
</CompositeMembership>
</Target>
<Target TYPE="cluster" NAME="mycrsname">
<Property NAME="OracleHome" VALUE="/u01/app/oracle/product/10.2.0/crs"/>
</Target>
<Target TYPE="oracle_database" NAME="EM102_EM1021">
<Property NAME="MachineName" VALUE="node1vip.mycompany.com"/>
<Property NAME="Port" VALUE="1521"/>
<Property NAME="SID" VALUE="EM1021"/>
<Property NAME="OracleHome" VALUE="/u01/app/oracle/product/10.2.0/db"/>
<Property NAME="UserName" VALUE="8454cd093bd33db3"
ENCRYPTED="TRUE"/>
<Property NAME="password" VALUE="b9ae96b172e0d873" ENCRYPTED="TRUE"/>
<CompositeMembership>
<MemberOf TYPE="rac_database" NAME="EM102"
ASSOCIATION="cluster_member"/>
</CompositeMembership>
</Target>
<Target TYPE="rac_database" NAME="EM102">
<Property NAME="MachineName" VALUE="node1vip.mycompany.com"/>
<Property NAME="Port" VALUE="1521"/>
<Property NAME="SID" VALUE="EM1021"/>
<Property NAME="OracleHome" VALUE="/u01/app/oracle/product/10.2.0/db"/>
<Property NAME="UserName" VALUE="8454cd093bd33db3"
ENCRYPTED="TRUE"/>
<Property NAME="password" VALUE="b9ae96b172e0d873" ENCRYPTED="TRUE"/>
<Property NAME="ClusterName" VALUE="mycrsname"/>
<Property NAME="ServiceName" VALUE="EM102"/>
</Target>
<Target TYPE="oracle_listener"
NAME="LISTENER_node1.mycompany.com_node1.mycompany.com">
<Property NAME="Machine" VALUE="node1vip.mycompany.com"/>
<Property NAME="LsnrName" VALUE="LISTENER_node1.mycompany.com"/>
<Property NAME="Port" VALUE="1521"/>
<Property NAME="ListenerOraDir"
VALUE="/u01/app/oracle/product/10.2.0/db/network/admin"/>
<Property NAME="OracleHome" VALUE="/u01/app/oracle/product/10.2.0/db"/>
</Target>
<Target TYPE="osm_instance" NAME="+ASM1_node1.mycompany.com">
<Property NAME="SID" VALUE="+ASM1"/>
<Property NAME="MachineName" VALUE="node1vip.mycompany.com"/>
<Property NAME="OracleHome" VALUE="/u01/app/oracle/product/10.2.0/db"/>
<Property NAME="UserName" VALUE="SYS"/>
<Property NAME="password" VALUE="b9ae96b172e0d873" ENCRYPTED="TRUE"/>
<Property NAME="Role" VALUE="SYSDBA"/>
<Property NAME="Port" VALUE="1521"/>
</Target>
</Targets>
INFO:
• Three instances are monitored (EM1021, EM1022 and EM1023), on nodes node1, node2
and node3 respectively.
• The agents on nodes node1, node2 and node3 all report to the dbconsole running on
node1.mycompany.com, because emca was run from node1.mycompany.com. If emca had been
run from node2.mycompany.com, all agents would report to node2.mycompany.com.
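When the listtargets output is long, it can help to reduce it to TYPE/NAME pairs. A hedged sketch: the sample file below is a trimmed, hypothetical excerpt, and in a real environment the input would come from "emctl config agent listtargets" on the node being checked.

```shell
# Summarize target TYPE/NAME pairs from a listtargets-style listing.
# The sample below is a hypothetical two-target excerpt for illustration.
cat > /tmp/listtargets_sample.xml <<'EOF'
<Targets AGENT_SEED="231030912">
<Target TYPE="oracle_database" NAME="EM102_EM1021"/>
<Target TYPE="rac_database" NAME="EM102"/>
</Targets>
EOF

# Extract each TYPE="..." NAME="..." pair and print it as "type: name".
grep -o 'Target TYPE="[^"]*" NAME="[^"]*"' /tmp/listtargets_sample.xml |
  sed 's/Target TYPE="\([^"]*\)" NAME="\([^"]*\)"/\1: \2/'
# prints:
# oracle_database: EM102_EM1021
# rac_database: EM102
```

This quickly shows which database, rac_database, listener, and osm_instance targets an agent is monitoring.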
Login to DB Control
At the end of the deployment, emca displays the URL to log in to the dbconsole.
This URL is also available in the dbconsole log file and in the file
$ORACLE_HOME/install/readme.txt.
The console ports and agent ports can be checked in the file
$ORACLE_HOME/install/portlist.ini
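The port entries can be pulled out of portlist.ini with a one-line filter. A minimal sketch: the sample file contents and port values below are hypothetical, and the real file lives in $ORACLE_HOME/install/portlist.ini.

```shell
# Extract the console and agent ports from a portlist.ini-style file.
# The sample entries below are hypothetical example values.
cat > /tmp/portlist_sample.ini <<'EOF'
Enterprise Manager Console HTTP Port (EM1021) = 1158
Enterprise Manager Agent Port (EM1021) = 3938
EOF

# Split each line on " = " and print the port for the matching entry.
awk -F' = ' '/Console HTTP Port/ { print "console port: " $2 }
             /Agent Port/        { print "agent port: " $2 }' /tmp/portlist_sample.ini
# prints:
# console port: 1158
# agent port: 3938
```

The console port is the one that appears in the dbconsole login URL.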
For example, suppose we want a dbconsole running on node1 and on node2, with the agent
on node1 reporting to the dbconsole on node1, and the agents on node2 and node3
reporting to the dbconsole on node2.
This emca command can be run from any node in the cluster.
According to the documentation and the prompt help, we would then run:
$ emca -reconfig dbcontrol -cluster -EM_NODE node2 -EM_SID_LIST EM1022,EM1023
or, answering the prompts interactively:
$ emca -reconfig dbcontrol -cluster
This command can be run from any node in the cluster, except the node running the
instance whose monitoring you want to stop.
Please refer to:
Note 394445.1 - emca -deleteInst db fails with Database Instance unavailable
For example, to stop monitoring the instance on node2, run the following command from
either node1 or node3:
$ emca -deleteInst db
Enter the following information:
Node name: node2
Database unique name: EM102
Database SID: EM1022
• Here we stopped monitoring the instance EM1022, which dropped the dbconsole on node2.
• However, the agent on node3 was reporting to this dbconsole; emca therefore also
reconfigured the agent on node3 to upload to the dbconsole on node1.
$ emca -addInst db
Enter the following information:
Node name: node2
Database unique name: EM102
Database SID: EM1022
Known issue
emca completes successfully but reports a warning that it is unable to remove the DBMS
jobs.
This issue has been logged in an internal bug (not visible through METALINK):
Bug: WHILE DECONFIG DBCONTROL FROM RAC DB, UNABLE TO REMOVE DBMS JOBS