
Oracle Database 12c Available for Download

Finally, Oracle has released the much-awaited Oracle Database 12c (Oracle 12.1.0.1). It is available for download from the Oracle Software Delivery Cloud (formerly known as eDelivery) and OTN (Oracle Technology Network) for 64-bit Linux and Solaris. Oracle 12c is not available for 32-bit platforms. Oracle has not yet released the 12c software for AIX and Windows; hopefully it will be released soon. Here are the links to download the Oracle software:
eDelivery : Click here to download from eDelivery.
OTN : Click here to download from OTN.
Documentation : Click here to download the 12c documentation.
There are some very exciting features in Oracle Database 12c. One of them is the "Pluggable Database", which allows a single Oracle database instance to hold many other databases, allowing for more efficient use of system resources and easier management. I will soon download it and post about these features.
An official Oracle price list, which was updated Tuesday, showed a "multitenant" database option
priced at $17,500 per processor. A processor license for the main Enterprise Edition remained priced
at $47,500 per processor.

ORA-00020 and Impact on database on increasing processes values


The maximum number of processes is specified by the initialization parameter PROCESSES. When this maximum number of processes is reached, no more requests will be processed, and any attempt to connect to the database fails with the error below. For testing purposes, I have set my PROCESSES value to 30 for this demo.
[oracle@server1 ~]$ sqlplus "sys/xxx as sysdba"
SQL*Plus: Release 11.2.0.1.0 Production on Fri May 24 13:36:53 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
ERROR:
ORA-00020: maximum number of processes (30) exceeded
I usually check the alert logfile for any Oracle errors, and found the following entries.
Alert logfile
Fri May 24 13:38:30 2013
ORA-00020: No more process state objects available
ORA-20 errors will not be written to the alert log for
the next minute. Please look at trace files to see all
the ORA-20 errors.
According to the Oracle docs:
Error: ORA-00020
Text: maximum number of processes <num> exceeded
Cause : An operation requested a resource that was unavailable. The maximum number of processes is specified by the initialization parameter PROCESSES. When this maximum is reached, no more requests are processed.
Action : Try the operation again in a few minutes. If this message occurs often, shut down Oracle, increase the PROCESSES parameter in the initialization parameter file, and restart Oracle.
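Before blindly raising the limit, it helps to see how close the instance actually runs to the current ceiling. A minimal check, assuming access to the V$RESOURCE_LIMIT view, might look like this:

SQL> select resource_name, current_utilization, max_utilization, limit_value
     from v$resource_limit
     where resource_name in ('processes', 'sessions') ;

If MAX_UTILIZATION sits persistently at LIMIT_VALUE, the parameter is genuinely undersized; if utilization normally stays low and then spikes, a connection leak in the application is the more likely culprit.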
Finally, I decided to increase the number of processes, but I didn't find any exact formula or optimal value for this parameter, so I have set it to 200 for now. Another issue is that we cannot even connect to Oracle normally, since every new connection fails with ORA-00020. Here is one trick: create a session by using the "-prelim" option. The interesting thing about this option is that we can only use the "shutdown abort" command, nothing else (AFAIK). Here are the steps to change the PROCESSES value:

[oracle@server1 ~]$ sqlplus -prelim "sys/xxxx as sysdba"


SQL*Plus: Release 11.2.0.1.0 Production on Fri May 24 13:38:55 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
SQL> shut immediate
ORA-01012: not logged on
SQL> shut abort
ORACLE instance shut down.
SQL> exit
Once the instance is down, we can easily increase the PROCESSES value at the mount stage.
[oracle@server1 ~]$ sqlplus "sys/sys as sysdba"
SQL*Plus: Release 11.2.0.1.0 Production on Fri May 24 13:51:40 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area  418484224 bytes
Fixed Size                  1336932 bytes
Variable Size             310380956 bytes
Database Buffers          100663296 bytes
Redo Buffers                6103040 bytes
Database mounted.
SQL> alter system set processes=200 scope=spfile;
System altered.
SQL> shut immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  418484224 bytes
Fixed Size                  1336932 bytes
Variable Size             310380956 bytes
Database Buffers          100663296 bytes
Redo Buffers                6103040 bytes
Database mounted.
Database opened.
SQL> select name, open_mode from v$database ;

NAME      OPEN_MODE
--------- --------------------
ORCL      READ WRITE
Impact on the database of increasing PROCESSES :
While googling, I found a very useful comment by Jonathan Lewis. According to him, increasing PROCESSES from, say, 1000 to 5000 increases the amount of shared memory that needs to be reserved at the O/S level and disrupts the memory layout in the SGA; the OS must be configured to support the larger amount of shared memory.
The impact at the O/S level is that every process that starts up will want to build a memory map for the SGA. Depending on the way memory pages are configured and the strategy our O/S adopts for building those maps, this could demand a huge amount of O/S memory in a short time. The technology we need to avoid this issue comes in two different flavours: large memory pages and shared memory maps.
The impact on the SGA is two-fold: each process and session has to create an entry in v$process and v$session and allocate various memory structures in the SGA. Acquiring the rows in v$session and v$process is a serial action, and the memory allocation in the SGA can cause massive flushing of the library cache.
So it is advisable to increase the number of processes only while keeping this impact in mind.

Manual Upgradation From Oracle 9i to 10g


Upgradation is the process of replacing our existing software with a newer version of the same product, for example replacing an Oracle 9i release with an Oracle 10g release. Upgrading our applications usually does not require special tools; our existing reports should look and behave the same in both products, although minor changes may sometimes be seen. Upgradation is done at the software level.

I received a mail from a reader regarding the upgradation of a database. He wants to upgrade his database from 9i to 10g. Here I would like to advise that it is better to upgrade from 9i to 11g rather than from 9i to 10g, because Oracle extended support for 10gR2 ends on 31-Jul-2013 and there are more features available in Oracle 11g. Oracle 11g supports direct upgrades from versions 9.2.0.4 (or newer), 10.1 and 10.2; for older releases the upgrade paths are:
7.3.3 -> 7.3.4 -> 9.2.0.8 -> 11.1

8.0.5 -> 8.0.6 -> 9.2.0.8 -> 11.1

8.1.7 -> 8.1.7.4 -> 9.2.0.8 -> 11.1

9.0.1.3-> 9.0.1.4 -> 9.2.0.8 -> 11.1

9.2.0.3 (or lower) -> 9.2.0.8 -> 11.1


The Oracle 11g client can access Oracle databases of versions 8i, 9i and 10g.
There are generally four methods to upgrade an Oracle database:
1.) Manual upgradation
2.) Upgradation using the DBUA
3.) Export/Import
4.) Data copying
Let's have a look at manual upgradation.

Manual Upgradation : A manual upgrade consists of running SQL scripts and utilities from a
command line to upgrade a database to the new Oracle Database 10g release. While a manual
upgrade gives us finer control over the upgrade process, it is more susceptible to error if any of
the upgrade or pre-upgrade steps are either not followed or are performed out of order. Below are the steps.
1.) Install the Oracle 10g software : Invoke setup.exe (Windows) or runInstaller (Unix/Linux) and select "Install software only" to install the Oracle software.
2.) Take a full database backup : Take a full backup of the database which is to be upgraded.
3.) Check the invalid objects : Check and recompile invalid objects by running the utlrp.sql script, and note how many objects remain invalid (a count query is shown below):

SQL> @ORACLE_HOME/rdbms/admin/utlrp.sql
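To record the invalid-object count for the comparison in step 8, a simple count can be noted down (this is just a sketch, assuming DBA privileges):

SQL> select count(*) from dba_objects where status = 'INVALID' ;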
4.) Log in to the 9i database and run utlu102i.sql : This script is located in the Oracle 10g home.
SQL> spool pre_upgrd.sql
SQL> @<ORACLE_10G_HOME>/rdbms/admin/utlu102i.sql
SQL> spool off

The above script checks a number of areas to make sure the instance is suitable for upgrade, including:
Database version

Log file sizes

Tablespace sizes

Server options

Initialization parameters (updated, deprecated and obsolete)

Database components

Miscellaneous Warnings

SYSAUX tablespace present

Cluster information
The issues indicated by this script should be resolved before a manual upgrade is attempted.
Once we have resolved the above warnings, re-run the script once more to cross-check.
5.) Check for the timestamp with timezone Datatype : The time zone files that are
supplied with Oracle Database 10g have been updated from version 1 to version 2 to reflect
changes in transition rules for some time zone regions. The changes may affect existing data of
TIMESTAMP WITH TIME ZONE datatype. To preserve this TIMESTAMP data for updating according
to the new time zone transition rules, we must run the utltzuv2.sql script on the database
before upgrading. This script analyzes our database for TIMESTAMP WITH TIME ZONE columns
that are affected by the updated time zone transition rules.
SQL> @ORACLE_10G_HOME/rdbms/admin/utltzuv2.sql
SQL> select * from sys.sys_tzuv2_temptab;
If the utltzuv2.sql script identifies columns with time zone data affected by a database
upgrade, then back up the data in character format before we upgrade the database. After the
upgrade, we must update the tables to ensure that the data is stored based on the new
rules. If we export the tables before upgrading and import them after the upgrade, the
conversion will happen automatically during the import.
6.) Shutdown the database :
Shut down the database and copy the spfile (or pfile) and password file from the 9i home to the 10g home.
7.) Upgrade the database : Set the following environment variables for 10g and log in as the SYS user. The upgrade takes roughly half an hour to complete. Spool the output to a file so that you can review it afterward.
ORACLE_SID=<sid>
ORACLE_HOME=<10g home>
PATH=<10g path>
sqlplus / as sysdba
SQL> startup upgrade
SQL>spool upgrd_log.sql
SQL>@catupgrd.sql
SQL> spool off
8.) Recompile any invalid objects : Compare the number of invalid objects with the number noted in step 3; it should hopefully be the same or less.

SQL>@ORACLE_HOME/rdbms/admin/utlrp.sql
9.) Check the status of the upgrade :
SQL> @ORACLE_HOME/rdbms/admin/utlu102s.sql
The above script queries the DBA_SERVER_REGISTRY to determine upgrade status and provides
information about invalid or incorrect component upgrades. It also provides names of scripts to rerun
to fix the errors.
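As a quick cross-check, the component registry can also be queried directly; a sketch (the list of components will vary per installation):

SQL> select comp_name, version, status from dba_registry ;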
10.) Edit the spfile : Create a pfile from spfile as
SQL>create pfile from spfile ;
Open the pfile and set the compatible parameter to 10.2.0.0.0 . Shutdown the database and create
the new modified spfile .
SQL>shut immediate
SQL> create spfile from pfile ;
11.) Start the database normally
SQL> startup
and finally configure Oracle Net and remove the old Oracle 9i software using the OUI.

Control File Parallel Read & Write wait Event


A control file contains information about the associated database that is required for access by an
instance, both at startup and during normal operation . It is the Oracle control file(s) that records
information about the consistency of a database's physical structures and operational statuses . The
database state changes through activities such as adding data files, altering the size or location of
datafiles, redo being generated, archive logs being created, backups being taken, SCN numbers
changing, or checkpoints being taken.
Why do control file waits occur ?
Control file waits occur for the following reasons:
1.) A server process is updating all copies of the controlfile, i.e. the session is writing physical blocks to all control files at the same time.
2.) The session commits a transaction whose changes must be recorded in the controlfile.
3.) A generic entry in the controlfile is being changed, and the new value is being written to all controlfiles.
4.) The controlfile resides on a disk which is heavily used, i.e. facing lots of I/Os.
We can check the waits experienced by a session using the v$session_wait view as
SQL> select event, wait_time, p1, p2, p3 from v$session_wait where event like '%control%' ;
Here wait_time is the elapsed time for reads or writes.
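For the cumulative picture since instance startup, rather than a point-in-time snapshot, the system-level wait statistics can be checked as well; a sketch using V$SYSTEM_EVENT:

SQL> select event, total_waits, time_waited, average_wait
     from v$system_event
     where event like 'control file%' ;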
Possible steps to reduce this wait :
1.) Reduce the number of controlfile copies to the minimum that ensures that not all copies can be lost
at the same time.
2.) Move the controlfile copies to less saturated storage locations.
3.) Reduce frequent log switches. To find the optimal time and size for a log switch, check this post.
In my experience, control file access is governed by activities such as redo logfile switching and checkpointing, so it can only be influenced indirectly by tuning. This wait rarely appears in my reports, and it usually gets resolved automatically when I address the other wait metrics, especially "log file sync" and related waits.

Interpreting Raw Sql Trace File

SQL_TRACE is the main method for collecting SQL Execution information in Oracle. It records a wide
range of information and statistics that can be used to tune SQL operations. The sql trace file contains
a great deal of information . Each cursor that is opened after tracing has been enabled will be recorded
in the trace file.
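For reference, tracing can be switched on in several ways; one common approach on 10g and later uses DBMS_MONITOR (a sketch only - the sid and serial# values below are placeholders):

SQL> exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE)

or, for the current session,

SQL> alter session set events '10046 trace name context forever, level 12' ;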
Entries in the raw trace file are keyed by the cursor number, e.g. PARSING IN CURSOR #3. EXECs, FETCHes and WAITs are recorded against a cursor, and the information applies to the most recently parsed statement within that cursor. Firstly, let's have a look at the wait events.
WAIT #6: nam='db file sequential read' ela= 8458 file#=110 block#=63682 blocks=1 obj#=221 tim=506028963546

WAIT = An event that we waited for.
nam  = What was being waited for. The wait events here are the same as are seen in the view V$SESSION_WAIT.
ela  = Elapsed time for the operation (microseconds).
p1   = P1 for the given wait event.
p2   = P2 for the given wait event.
p3   = P3 for the given wait event.
Example No. 1 : WAIT #6: nam='db file sequential read' ela= 8458 file#=110 block#=63682
blocks=1 obj#=221 tim=506028963546
The above line can be translated as: we completed a wait under cursor #6 for "db file sequential read". We waited 8458 microseconds, i.e. approx. 8.5 milliseconds, for a read of file 110, starting at block 63682, for 1 Oracle block of object number 221. The timestamp was 506028963546.
Example no.2 : WAIT #1: nam='library cache: mutex X' ela= 814 idn=3606132107
value=3302829850624 where=4 obj#=-1 tim=995364327604
The above line can be translated as: we completed a wait under cursor #1 for "library cache: mutex X". We waited 814 microseconds, i.e. approx. 0.8 milliseconds, to get an eXclusive library cache mutex with identifier 3606132107, value 3302829850624, at location 4. It was not associated with any particular object (obj#=-1). The timestamp was 995364327604.
The trace file also shows the processing of the SQL statements. Oracle processes SQL statements as follows:
Stage 1: Create a Cursor
Stage 2: Parse the Statement
Stage 3: Describe Results
Stage 4: Defining Output
Stage 5: Bind Any Variables
Stage 6: Execute the Statement
Stage 7: Parallelize the Statement
Stage 8: Fetch Rows of a Query Result
Stage 9: Close the Cursor
Now let's move to another important term PARSING IN CURSOR #n . EXECutes, FETCHes and WAITs
are recorded against a cursor. The information applies to the most recently parsed statement within
that cursor.
PARSING IN CURSOR #n :
Cursor : In order for Oracle to process a SQL statement, it needs to create an area of memory known as the context area; this holds the information needed to process the statement. This information includes the number of rows processed by the statement and a pointer to the parsed representation of the statement (parsing a SQL statement is the process whereby the statement is transferred to the server, at which point it is evaluated as being valid).
A cursor is a handle, or pointer, to the context area. Through the cursor, a PL/SQL program can control
the context area and what happens to it as the statement is processed. Two important features about
the cursor are
1.) Cursors allow you to fetch and process rows returned by a SELECT statement, one row at a time.

2.) A cursor is named so that it can be referenced.


Parsing : Parsing is the first step in the processing of any database statement. The PARSE record is accompanied by the cursor number. Let's have a look at the "PARSING IN CURSOR" entry of a particular trace file.
PARSING IN CURSOR #2 len=92 dep=0 uid=0 oct=3 lid=0 tim=277930332201 hv=1039576264
ad='15d51e60' sqlid='dsz47ssyzdb68'
select p.PID,p.SPID,s.SID from v$process p,v$session s where s.paddr = p.addr and s.sid = 12
END OF STMT
PARSE #2:c=31250,e=19173,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=836746634,tim=277930332198
EXEC #2:c=0,e=86,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=836746634,tim=77930335666
WAIT #2: nam='SQL*Net message to client' ela= 10 driver id=1413697536 #bytes=1 p3=0
obj#=116 tim=77930335778
FETCH #2:c=0,e=805,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=836746634,tim=77930336684
WAIT #2: nam='SQL*Net message from client' ela= 363 driver id=1413697536 #bytes=1 p3=0
obj#=116 tim=77930337227
FETCH #2:c=0,e=31,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=836746634,tim=77930337421
STAT #2 id=1 cnt=1 pid=0 pos=1 obj=0 op='NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us cost=0
size=152 card=1)'
STAT #2 id=2 cnt=27 pid=1 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=156
us cost=0 size=96 card=1)'
STAT #2 id=3 cnt=1 pid=2 pos=1 obj=0 op='NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us cost=0
size=39 card=1)'
STAT #2 id=4 cnt=1 pid=3 pos=1 obj=0 op='FIXED TABLE FIXED INDEX X$KSLWT (ind:1) (cr=0 pr=0
pw=0 time=0 us cost=0 size=26 card=1)'
STAT #2 id=5 cnt=1 pid=3 pos=2 obj=0 op='FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0
pw=0 time=0 us cost=0 size=13 card=1)'
STAT #2 id=6 cnt=27 pid=2 pos=2 obj=0 op='BUFFER SORT (cr=0 pr=0 pw=0 time=78 us cost=0
size=57 card=1)'
STAT #2 id=7 cnt=27 pid=6 pos=1 obj=0 op='FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0
time=130 us cost=0 size=57 card=1)'
STAT #2 id=8 cnt=1 pid=1 pos=2 obj=0 op='FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0
pw=0 time=0 us cost=0 size=56 card=1)'
WAIT #2: nam='SQL*Net message to client' ela= 7 driver id=1413697536 #bytes=1 p3=0 obj#=116
tim=77930338248
*** 2012-05-19 15:07:22.843
WAIT #2: nam='SQL*Net message from client' ela= 38291082 driver id=1413697536 #bytes=1 p3=0
obj#=116 tim=77968629417
CLOSE #2:c=0,e=30,dep=0,type=0,tim=77968629737
len = the number of characters in the SQL statement.
dep = the application/trigger depth at which the SQL statement was executed. dep=0 indicates that it was executed by the client application; dep=1 indicates that the SQL statement was executed by a trigger, the Oracle optimizer, or a space management call; dep=2 indicates that the SQL statement was called from a trigger; dep=3 indicates that the SQL statement was called from a trigger that was itself called from a trigger.
uid = Schema id under which the SQL was parsed.
oct = Oracle command type.
lid = Privilege user id.
tim = Timestamp.
hv  = Hash id.
ad  = SQLTEXT address.

PARSE #3: c=15625, e=177782, p=2, cr=3, cu=0, mis=1, r=0, dep=0, og=1, plh=272002086, tim=276565143470

c   = CPU time (microseconds, rounded to centisecond granularity on 9i and above)
e   = Elapsed time (centiseconds prior to 9i, microseconds thereafter)
p   = Number of physical reads
cr  = Number of buffers retrieved for CR (consistent) reads
cu  = Number of buffers retrieved in current mode
mis = Cursor missed in the cache
r   = Number of rows processed
dep = Recursive call depth (0 = user SQL, >0 = recursive)
og  = Optimizer goal: 1=All_Rows, 2=First_Rows, 3=Rule, 4=Choose
From the above PARSE line it is clear that the total elapsed time for parsing the statement was about 0.178 seconds (e=177782 microseconds) and that 2 physical reads were done.

Bind Variables : If the SQL statement references bind variables, then a section of text associated with each bind variable follows the statement shown in the cursor. For each bind variable a number of attributes are listed; the ones we are interested in here are:
mxl   = the maximum length, i.e. the maximum number of bytes occupied by the variable. E.g. dty=2 and mxl=22 denotes a NUMBER(22) column.
scl   = the scale (for NUMBER columns)
pre   = the precision (for NUMBER columns)
value = the value of the bind variable
dty   = the datatype. Typical values are:
        1   VARCHAR2 or NVARCHAR2
        2   NUMBER
        8   LONG
        11  ROWID
        12  DATE
        23  RAW
        24  LONG RAW
        96  CHAR
        112 CLOB or NCLOB
        113 BLOB
        114 BFILE

EXEC : Execute a pre-parsed statement. At this point, Oracle has all necessary information and
resources, so the statement is executed. For example
EXEC #2:c=0,e=225,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=3684871272,tim=282618239403
Fetch : Fetch rows from a cursor. For example
FETCH #4:c=0,e=8864,p=1,cr=26,cu=0,mis=0,r=1,dep=0,og=1,plh=3564694750,tim=282618267037
STAT : Lines report explain plan statistics for the numbered [CURSOR]. These let us know the 'run
time' explain plan. For example
STAT #1 id=1 cnt=7 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=0 pr=0 pw=0 time=0 us cost=2
size=2128 card=1)'
id  = Line of the explain plan which the row count applies to (starts at line 1). This is effectively the row source row count for all row sources in the execution tree.
cnt = Number of rows for this row source.
pid = Parent id of this row source.
pos = Position in the explain plan.
obj = Object id of the row source (if this is a base object).
op  = The row source access operation.
XCTEND : A transaction end marker. For example XCTEND rlbk=0, rd_only=1, tim=282636050491
rlbk    = 1 if a rollback was performed, 0 if no rollback (commit).
rd_only = 1 if the transaction was read only, 0 if changes occurred.
CLOSE : The cursor is closed. For example CLOSE #4:c=0,e=32,dep=0,type=0,tim=282636050688
c    = CPU time (microseconds, rounded to centisecond granularity on 9i and above)
e    = Elapsed time (centiseconds prior to 9i, microseconds thereafter)
dep  = Recursive depth of the cursor
type = Type of close operation

Note : Timestamps are used to determine the time between any two operations.
Reference : Metalink [ID 39817.1]

Difference between LGWR SYNC and ASYNC in Oracle DataGuard


Oracle Data Guard redo log transport offers a synchronous log transport mode (LogXptMode = 'SYNC') or an asynchronous log transport mode (LogXptMode = 'ASYNC'). The difference is all about when the COMMIT happens.
LogXptMode = ('SYNC'): As the name implies, SYNC mode synchronizes the primary with the
standby database and all DML on the primary server will NOT be committed until the logs have been
successfully transported to the standby servers. The synchronous log transport mode is required for
the Maximum Protection and Maximum Availability data protection modes.
LogXptMode = ('ASYNC'): Conversely, asynchronous mode (ASYNC) allows updates (DML) to be
committed on the primary server before the log file arrives on the standby servers. The asynchronous
log transport mode is required for the Maximum Performance data protection mode.
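For illustration only, the two modes map to different attributes of the LOG_ARCHIVE_DEST_n parameter. In the sketch below the destination number and the service name 'stby' are hypothetical:

SQL> alter system set log_archive_dest_2='SERVICE=stby LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby' ;
SQL> alter system set log_archive_dest_2='SERVICE=stby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby' ;

The first form corresponds to LogXptMode = 'SYNC', the second to LogXptMode = 'ASYNC'.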
There is a very good post written by Shawn Kelley on SYNC and ASYNC in Data Guard.
LGWR is an attribute of the LOG_ARCHIVE_DEST_n parameter which is used to specify the network transmission mode. Specifying the SYNC attribute (which is the default) tells the LGWR process to synchronously archive to the local online redo log files at the same time it transmits redo data to the archival destinations. Specifically, the SYNC attribute performs all network I/O synchronously in conjunction with each write operation to the online redo log file. Transactions are not committed on the primary database until the redo data necessary to recover the transactions has been received by the destination.
The ASYNC attribute performs all network I/O asynchronously, and control is returned to the executing application or user immediately. When this attribute is specified, the LGWR process archives to the local online redo log file and submits the network I/O request to the network server (the LNSn process for that destination), and the LGWR process continues processing the next request without waiting for the network I/O to complete.
What happens if the network between the Primary and Standby [database] is lost with
LGWR SYNC and ASYNC ? or What happens if the standby database is shutdown with LGWR
SYNC and ASYNC?
This is dependent upon the database mode we have set. If we have set Maximum Protection, we have
chosen a configuration that guarantees that no data loss will occur. We have set this up by specifying
the LGWR, SYNC, and AFFIRM attributes of the LOG_ARCHIVE_DEST_n parameter for at least one
standby database. This mode provides the highest level of data protection possible and to achieve this
the redo data needed to recover each transaction must be written to both the local online redo log and
the standby redo log on at least one standby database before the transaction commits. To ensure data

loss cannot occur, the primary database shuts down if a fault (such as the network going down)
prevents it from writing its redo stream to at least one remote standby redo log.
If we have set the Maximum Availability mode, we have chosen a configuration that provides the highest level of data protection that is possible without compromising the availability of the primary database. Unlike maximum protection mode, the primary database does not shut down if a fault prevents it from writing its redo stream to a remote standby redo log. Instead, the primary database operates in maximum performance mode until the fault is corrected and all gaps in redo log files are resolved. When all gaps are resolved, the primary database automatically resumes operating in maximum availability mode. This guarantees that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.
If we have set the Maximum Performance mode (the default), we have chosen a mode that provides
the highest level of data protection that is possible without affecting the performance of the primary
database. This is accomplished by allowing a transaction to commit as soon as the redo data needed
to recover the transaction is written to the local online redo log. The primary database's redo data
stream is also written to at least one standby database, but the redo stream is written
asynchronously with respect to the commitment of the transactions that create the redo data.
The maximum performance mode enables us to either set the LGWR and ASYNC attributes, or set the
ARCH attribute on the LOG_ARCHIVE_DEST_n parameter for the standby database destination. If the
primary database fails, we can reduce the amount of data that is not received on the standby
destination by setting the LGWR and ASYNC attributes.
If LGWR SYNC or ASYNC is deployed, what process(es) bring(s) the standby database back
into sync with the primary [database] if the network is lost and is then restored? How does
it do it?
Again, this is dependent upon the mode we have chosen for our database. The LGWR process (and
possibly the LNSn process if we have multiple standby databases) is responsible for closing the gap.
My biggest question is, when the network to the standby is lost with SYNC or ASYNC, where is the
information queued and how is it retransmitted once the network has been re-established?
This implies that our database has been set to either maximum availability or maximum performance
mode. We cannot use the ASYNC attribute with maximum protection mode. The information is queued
in the local online redo log and the LGWR (and the LNSn) process will transmit the data to the standby
database's standby redo log file to close the gap once the network connectivity has been re-established.
Gap recovery is handled through the polling mechanism. For physical and logical standby databases,
Oracle Change Data Capture, and Oracle Streams, Data Guard performs gap detection and resolution
by automatically retrieving missing archived redo log files from the primary database. No extra
configuration settings are required to poll the standby database(s) to detect any gaps or to resolve the
gaps.
The important consideration here is that automatic gap recovery is contingent upon the availability of the primary database. If the primary database is not available and we have a configuration with multiple physical standby databases, we can set up additional initialization parameters so that Redo Apply can resolve archive gaps from another standby database.
It is possible to manually determine if a gap exists and to resolve those archive gaps. To manually
determine if a gap exists, query the V$ARCHIVE_GAP view on our physical standby database. If a gap
is found, we will then need to locate the archived log files on our primary database, copy them to our
standby database, and register them.
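As a sketch, the gap check on the physical standby is a one-liner:

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap ;

Any rows returned identify the missing log sequences that must be copied over from the primary and registered on the standby (for example with ALTER DATABASE REGISTER LOGFILE).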

Understanding Indexes Concept

Indexes play a crucial role in the performance tuning of a database. It is very important to know how an index works, i.e. how indexes fetch data from tables. There is a very good post by rleishman on the working of indexes. Let's have a look.
What is an Index ?
An index is a schema object that contains an entry for each value that appears in the indexed column(s) of the table or cluster and provides direct, fast access to rows. Just as the index in a manual helps us locate information faster than if there were no index, an Oracle Database index provides a faster access path to table data.
Blocks
First we need to understand a block. A block - or page for Microsoft boffins - is the smallest unit of
disk that Oracle will read or write. All data in Oracle - tables, indexes, clusters - is stored in blocks.
The block size is configurable for any given database but is usually one of 4Kb, 8Kb, 16Kb, or 32Kb.
Rows in a table are usually much smaller than this, so many rows will generally fit into a single block.
So we never read "just one row"; we will always read the entire block and ignore the rows we don't
need. Minimising this wastage is one of the fundamentals of Oracle Performance Tuning.
Oracle uses two different index architectures: b-Tree indexes and bitmap indexes. Cluster indexes,
bitmap join indexes, function-based indexes, reverse key indexes and text indexes are all just
variations on the two main types. b-Tree is the "normal" index .
The "-Tree" in b-Tree
A b-Tree index is a data structure in the form of a tree - no surprises there - but it is a tree of
database blocks, not rows. Imagine the leaf blocks of the index as the pages of a phone book . Each
page in the book (leaf block in the index) contains many entries, which consist of a name (indexed
column value) and an address (ROWID) that tells us the physical location of the telephone (row in the
table).
The names on each page are sorted, and the pages, when sorted correctly, contain a complete sorted list of every name and address.
A sorted list in a phone book is fine for humans, because we have mastered "the flick" - the ability to fan through the book looking for the page that will contain our target without reading the entire page. When we flick through the phone book, we are just reading the first name on each page, which is usually in a larger font in the page header. Oracle cannot read a single name (row) and ignore the rest of the page (block); it needs to read the entire block.
If we had no thumbs, we may find it convenient to create a separate ordered list containing the first
name on each page of the phone book along with the page number. This is how the branch-blocks of
an index work; a reduced list that contains the first row of each block plus the address of that block.
In a large phone book, this reduced list containing one entry per page will still cover many pages, so
the process is repeated, creating the next level up in the index, and so on until we are left with a
single page: the root of the tree.
For example :
To find the name Gallileo in this b-Tree phone book, we:
=> Read page 1. This tells us that page 6 starts with Fermat and that page 7 starts with Hawking.
=> Read page 6. This tells us that page 350 starts with Fyshe and that page 351 starts with Garibaldi.
=> Read page 350, which is a leaf block; we find Gallileo's address and phone number.
=> That's it; 3 blocks to find a specific row in a million row table. In reality, index blocks often fit 100
or more rows, so b-Trees are typically quite shallow. I have never seen an index with more than 5
levels. Curious? Try this:
SQL> select index_name, blevel+1 from user_indexes order by 2 ;

user_indexes.blevel is the number of branch levels. Always add 1 to include the leaf level; this tells us
the number of blocks a unique index scan must read to reach the leaf-block. If we're really, really,
insatiably curious; try this in SQL*Plus:
SQL> accept index_name prompt "Index Name: "
SQL> alter session set tracefile_identifier='&index_name' ;
SQL> column object_id new_value object_id
SQL> select object_id from user_objects where object_type = 'INDEX' and
object_name=upper('&index_name');
SQL> alter session set events 'Immediate trace name treedump level &object_id';
SQL> alter session set tracefile_identifier='' ;
SQL> show parameter user_dump_dest

Give the name of an index on a smallish table (because this will create a BIG file). Now, on the Oracle
server, go to the directory shown by the final SHOW PARAMETER user_dump_dest command and find
the trace file - the file name will contain the index name. Here is a sample:
---- begin tree dump
branch: 0x68066c8 109078216 (0: nrow: 325, level: 1)
leaf: 0x68066c9 109078217 (-1: nrow: 694 rrow: 694)
leaf: 0x68066ca 109078218 (0: nrow: 693 rrow: 693)
leaf: 0x68066cb 109078219 (1: nrow: 693 rrow: 693)
leaf: 0x68066cc 109078220 (2: nrow: 693 rrow: 693)
leaf: 0x68066cd 109078221 (3: nrow: 693 rrow: 693)
...
...
leaf: 0x68069cf 109078991 (320: nrow: 763 rrow: 763)
leaf: 0x68069d0 109078992 (321: nrow: 761 rrow: 761)
leaf: 0x68069d1 109078993 (322: nrow: 798 rrow: 798)
leaf: 0x68069d2 109078994 (323: nrow: 807 rrow: 807)
----- end tree dump
This index has only a root branch, with 325 leaf nodes (numbered -1 to 323). Each leaf node contains a variable number of index entries, up to 807! A deeper index would be more interesting, but it would take a while to dump.
"B" is for...
Contrary to popular belief, b is not for binary; it's balanced.
As we insert new rows into the table, new rows are inserted into index leaf blocks. When a leaf block
is full, another insert will cause the block to be split into two blocks, which means an entry for the new
block must be added to the parent branch-block. If the branch-block is also full, it too is split. The
process propagates back up the tree until the parent of the split block has space for one more entry, or the root
is reached. A new root is created if the root node splits. Staggeringly, this process ensures that every
branch will be the same length.
How are Indexes used ?
Indexes have three main uses:

To quickly find specific rows by avoiding a Full Table Scan


We've already seen above how a Unique Scan works. Using the phone book metaphor, it's not hard to
understand how a Range Scan works in much the same way to find all people named "Gallileo", or all
of the names alphabetically between "Smith" and "Smythe". Range Scans can occur when we use >,
<, LIKE, or BETWEEN in a WHERE clause. A range scan will find the first row in the range using the
same technique as the Unique Scan, but will then keep reading the index up to the end of the range.
It is OK if the range covers many blocks.

To avoid a table access altogether


If all we wanted to do when looking up Gallileo in the phone book was to find his address or phone
number, the job would be done. However if we wanted to know his date of birth, we'd have to phone
and ask. This takes time. If it was something that we needed all the time, like an email address, we
could save time by adding it to the phone book.
Oracle does the same thing. If the information is in the index, then it doesn't bother to read the table.
It is a reasonably common technique to add columns to an index, not because they will be used as
part of the index scan, but because they save a table access. In fact, Oracle may even perform a Fast
Full Scan of an index that it cannot use in a Range or Unique scan just to avoid a table access.
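As a hypothetical sketch of this technique on the familiar HR demo schema: if queries frequently fetch only the email of employees in a given department, appending EMAIL to an index on DEPARTMENT_ID lets such queries be answered from the index alone, with no table access.

SQL> create index emp_dept_email_idx on employees (department_id, email) ;
SQL> select email from employees where department_id = 50 ;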
To avoid a sort
This one is not so well known, largely because it is so poorly documented (and in many cases unpredictably implemented by the Optimizer as well). Oracle performs a sort for many reasons: ORDER BY, GROUP BY, DISTINCT, set operations (e.g. UNION), sort-merge joins, uncorrelated IN subqueries, and analytic functions. If a sort operation requires rows in the same order as the index, then Oracle may read the table rows via the index, and the sort operation is not necessary since the rows are returned in sorted order.
Despite all of the instances listed above where a sort is performed, I have only seen three cases where
a sort is actually avoided.
1. GROUP BY :
SQL> select src_sys, sum(actl_expns_amt), count(*) from ef_actl_expns
where src_sys = 'CDW' and actl_expns_amt > 0
group by src_sys ;
-----------------------------------------------------------------------
| Id  | Operation                           | Name          |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |
|   1 | SORT GROUP BY NOSORT      <------   |               |
|*  2 | TABLE ACCESS BY GLOBAL INDEX ROWID  | EF_ACTL_EXPNS |
|*  3 | INDEX RANGE SCAN                    | EF_AEXP_PK    |
-----------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ACTL_EXPNS_AMT">0)
   3 - access("SRC_SYS"='CDW')

Note the NOSORT qualifier in Step 1.
2. ORDER BY :
SQL> select * from ef_actl_expns
where src_sys = 'CDW' and actl_expns_amt > 0
order by src_sys
-----------------------------------------------------------------------
| Id  | Operation                           | Name          |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |
|*  1 | TABLE ACCESS BY GLOBAL INDEX ROWID  | EF_ACTL_EXPNS |
|*  2 | INDEX RANGE SCAN                    | EF_AEXP_PK    |
-----------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("ACTL_EXPNS_AMT">0)
   2 - access("SRC_SYS"='CDW')

Note that there is no SORT operation, despite the ORDER BY clause. Compare this to the following:
SQL> select * from ef_actl_expns
where src_sys = 'CDW' and actl_expns_amt > 0
order by actl_expns_amt ;
-----------------------------------------------------------------------
| Id  | Operation                           | Name          |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |
|   1 | SORT ORDER BY                       |               |
|*  2 | TABLE ACCESS BY GLOBAL INDEX ROWID  | EF_ACTL_EXPNS |
|*  3 | INDEX RANGE SCAN                    | EF_AEXP_PK    |
-----------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ACTL_EXPNS_AMT">0)
   3 - access("SRC_SYS"='CDW')
3. DISTINCT :
SQL> select distinct src_sys from ef_actl_expns
where src_sys = 'CDW' and actl_expns_amt > 0 ;
-----------------------------------------------------------------------
| Id  | Operation                           | Name          |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |
|   1 | SORT UNIQUE NOSORT                  |               |
|*  2 | TABLE ACCESS BY GLOBAL INDEX ROWID  | EF_ACTL_EXPNS |
|*  3 | INDEX RANGE SCAN                    | EF_AEXP_PK    |
-----------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ACTL_EXPNS_AMT">0)
   3 - access("SRC_SYS"='CDW')
Again, note the NOSORT qualifier.
This is an extraordinary tuning technique in OLTP systems like SQL*Forms that return one page of
detail at a time to the screen. A SQL with a DISTINCT, GROUP BY, or ORDER BY that uses an index to
sort can return just the first page of matching rows without having to fetch the entire result set for a
sort. This can be the difference between sub-second response time and several minutes or hours.
Full table Scans are not bad :
Up to now, we've seen how indexes can be good. It's not always the case; sometimes indexes are no
help at all, or worse: they make a query slower.
A b-Tree index will be no help at all in a reduced scan unless the WHERE clause compares indexed columns using =, >, <, LIKE, IN, or BETWEEN operators. A b-Tree index cannot be used to scan for any NOT-style operators, e.g. !=, NOT IN, NOT LIKE. There are lots of conditions, caveats, and complexities

regarding joins, sub-queries, OR predicates, functions (inc. arithmetic and concatenation), and casting
that are outside the scope of this article. Consult a good SQL tuning manual.
Much more interesting - and important - are the cases where an index makes a SQL slower. These are
particularly common in batch systems that process large quantities of data.
To explain the problem, we need a new metaphor. Imagine a large deciduous tree in our front yard.
It's Autumn, and it's our job to pick up all of the leaves on the lawn. Clearly, the fastest way to do this
(without a rake, or a leaf-vac...) would be get down on hands and knees with a bag and work our way
back and forth over the lawn, stuffing leaves in the bag as we go. This is a Full Table Scan, selecting
rows in no particular order, except that they are nearest to hand. This metaphor works on a couple of
levels: we would grab leaves in handfuls, not one by one. A Full Table Scan does the same thing: when
a block is read from disk, Oracle caches the next few blocks with the expectation that it will be asked
for them very soon. Type this in SQL*Plus:
SQL> show parameter db_file_multiblock_read_count
Just to shake things up a bit (and to feed an undiagnosed obsessive compulsive disorder), we decide
to pick up the leaves in order of size. In support of this endeavour, we take a digital photograph of the
lawn, write an image analysis program to identify and measure every leaf, then load the results into a
Virtual Reality headset that will highlight the smallest leaf left on the lawn. Ingenious, yes; but this is
clearly going to take a lot longer than a full table scan because we cover much more distance walking
from leaf to leaf.
So obviously Full Table Scan is the faster way to pick up every leaf. But just as obvious is that the
index (virtual reality headset) is the faster way to pick up just the smallest leaf, or even the 100
smallest leaves. As the number rises, we approach a break-even point; a number beyond which it is
faster to just full table scan. This number varies depending on the table, the index, the database
settings, the hardware, and the load on the server; generally it is somewhere between 1% and 10% of
the table.
The main reasons for this are :

As implied above, reading a table in indexed order means more movement for the disk head.

Oracle cannot read single rows. To read a row via an index, the entire block must be read with
all but one row discarded. So an index scan of 100 rows would read 100 blocks, but a FTS might read
100 rows in a single block.

The db_file_multiblock_read_count setting described earlier means FTS requires fewer visits to
the physical disk.

Even if none of these things was true, accessing the entire index and the entire table is still
more IO than just accessing the table.
So what's the lesson here? Know our data! If our query needs 50% of the rows in the table to resolve
our query, an index scan just won't help. Not only should we not bother creating or investigating the
existence of an index, we should check to make sure Oracle is not already using an index. There are a
number of ways to influence index usage; once again, consult a tuning manual. The exception to this
rule - there's always one - is when all of the columns referenced in the SQL are contained in the index.
If Oracle does not have to access the table then there is no break-even point; it is generally quicker to
scan the index even for 100% of the rows.
Summary :

Indexes are not a dark-art; they work in an entirely predictable and even intuitive way. Understanding
how they work moves Performance Tuning from the realm of guesswork to that of science; so embrace
the technology and read the manual.

How to Identify the Static and Dynamic Parameter in Oracle


Sometimes we may not be sure whether an Oracle parameter is static (a database restart is required for a change to take effect) or dynamic (it can be changed without restarting). We can check this by using the v$parameter2 view, which is very similar to v$parameter but has a few extra rows for list parameters. Another difference between v$parameter and v$parameter2 is the format of the output. For example, a parameter value of "x, y" in V$PARAMETER does not tell us whether the parameter has two values ("x" and "y") or one value ("x, y"), whereas V$PARAMETER2 lists each value on its own row, making the distinction between list parameter values clear.
SQL> select value from v$parameter WHERE name LIKE 'control_files' ;

SQL> select value from v$parameter2 WHERE name LIKE 'control_files' ;

Here, if ISSES_MODIFIABLE is TRUE, the parameter can be changed at the session level with ALTER SESSION. If ISSYS_MODIFIABLE is IMMEDIATE or DEFERRED, the parameter can be changed at the system level with ALTER SYSTEM without a restart; if ISSYS_MODIFIABLE is FALSE, the parameter is static. ISINSTANCE_MODIFIABLE indicates whether the value can differ between instances in a RAC database. Here is an example:
SQL> select name, value, isses_modifiable, issys_modifiable, isinstance_modifiable
     from v$parameter2 where name like '%target%' ;
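To list the parameters that are truly static, i.e. changeable neither at session nor at system level without a restart, a sketch like the following can be used:

SQL> select name from v$parameter2
     where isses_modifiable = 'FALSE' and issys_modifiable = 'FALSE'
     order by name ;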

What is redo log thread in oracle ?


While googling about the redo log thread, I have not found documentation that clearly explains what a redo log thread is. Here I am trying to cover redo log threads in the case of a single instance and of RAC, taking reference from the AskTom site.
Each instance has its own personal set of redo, and each redo thread is made up of at least two groups that have one or more members (files). Two instances will never write to the same redo files; each instance has its own set of redo logs to write to. Another instance may well READ some other instance's redo logs, for example after that other instance fails, in order to perform recovery. Here is a scenario which helps us understand the thread concept.
Most V$ views work by selecting information from the corresponding GV$ view with a predicate "where inst_id = <that instance>". So on instance 1, V$SESSION is effectively
SQL> select * from gv$session where inst_id = 1 ;

On a three node RAC database, if we select from v$session, we get sessions from that instance only.
Selecting from GV$SESSION creates parallel query slaves on the other instances and gets the
information back to our session.
This works fine in almost all cases. There are few exceptions: in case of redo logs, the RAC instance
must see all the redo logs of other instances as they become important for its recovery. Therefore,
V$LOG actually shows all the redo logs, of all the instances, not just of its own. Contrast this with
V$SESSION, which shows only sessions of that instance, not all. So, if there are 3 log file groups per
instance (actually, per "thread") and there are 3 instances, V$LOG on any instance will show all 9
logfile groups, not 3.
When we select from GV$LOG, remember, the session gets the information from other instances as
well. Unfortunately, the PQ servers on those instances also get 9 records each, since they also see the
same information seen by the first instance. On a three instance RAC, we will get 3X9 = 27 records in
GV$LOG!
To avoid this:
1.) Always select from V$LOG, V$LOGFILE and V$THREAD in a RAC instance; the GV$ views are misleading here, or
2.) Add a predicate to match THREAD# with INST_ID (beware: thread numbers are by default the same as the instance id, but we may have defined a different thread number while creating the database), as
SQL> select * from gv$log where inst_id = thread# ;
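To see how many redo threads the database has and which instance each one belongs to, V$THREAD can be queried directly; a minimal sketch:

SQL> select thread#, status, enabled, instance from v$thread ;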

User Managed Hot Backups in Oracle


A cold backup has the somewhat bad side effect of wiping out our shared pool and buffer cache, and of preventing our users from logging in to do work. Our database is like a car: it runs better when it is warmed up. If we cold start it, we should be prepared for rough running when we restart, as we have to rebuild the shared pool, the buffer cache and so on. I would never pick cold over hot given the chance; there is no benefit, only downsides (according to Tom Kyte). The only kind of backup we do on our production systems here is hot.
There are two ways to perform Oracle backup and recovery :
1.) Recovery Manager (RMAN) : It is an Oracle utility that can backup, restore, and recover
database files. It is a feature of the Oracle database server and does not require separate installation.
2.) User-Managed backup and recovery : We use operating system commands for backups and
SQL*Plus for recovery. This method is called user-managed backup and recovery and is fully
supported by Oracle, although use of RMAN is highly recommended because it is more robust and
greatly simplifies administration.
There are basically two types of backup:

1.) Consistent backup : This is also known as a cold backup. A consistent backup is one in which the files being backed up contain all changes up to the same system change number (SCN). This means that the files in the backup contain all the data taken from the same point in time.
2.) Inconsistent backup : This is also known as a hot backup. An inconsistent backup is a backup in which the files being backed up do not contain all the changes made at all the SCNs. This can occur because the datafiles are being modified as the backups are being taken.

There are some DBAs who prefer Oracle user-managed backups. They put their database into backup mode prior to backing up and take it out of backup mode afterwards. If we're going to perform user-managed backups, we must back up all of the following files:
Datafiles

Control files

Online redo logs (if performing a cold backup)

The parameter file (not mandatory )

Archived redo logs

Password file if used


[Diagram: whole database backup options]

A hot backup requires quite a bit more work than a cold backup. Below are the steps required for a hot backup.
Step 1 : Check the log mode of the database. For a hot backup, the database must be in archivelog mode.
SQL> SELECT LOG_MODE FROM V$DATABASE ;

LOG_MODE
------------
ARCHIVELOG
Step 2 : Put the database into backup mode. If we are using Oracle 10gR2 or later, we can put the entire database into backup mode; with releases prior to 10gR2 we have to put each tablespace into backup mode individually. In my case I am on 11gR2.
SQL> alter database begin backup ;
Database altered.
For Oracle releases prior to 10gR2, use the commands below:
SQL> set echo off
SQL> set heading off
SQL> set feedback off
SQL> set termout off
SQL> spool backmode.sql
SQL> select 'alter tablespace '||name||' begin backup ;' "Tablespace in backup mode" from
v$tablespace;
SQL> spool off
SQL> @C:\backmode.sql

Step 3 : Backup all the datafiles. Copy all the datafiles using operating system commands and paste them in the desired backup location. Meanwhile, we can verify the status of the datafiles by using the v$backup view.
SQL> select * from v$backup ;

     FILE# STATUS        CHANGE# TIME
---------- ---------- ---------- ---------
         1 ACTIVE        3967181 03-APR-12
         2 ACTIVE        3967187 03-APR-12
         3 ACTIVE        3967193 03-APR-12
         4 ACTIVE        3967199 03-APR-12
         5 ACTIVE        3967205 03-APR-12
         6 ACTIVE        3967211 03-APR-12
         7 ACTIVE        3967217 03-APR-12
         8 ACTIVE        3967223 03-APR-12
         9 ACTIVE        3967229 03-APR-12

The column STATUS=ACTIVE shows that the datafiles are in backup mode.
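One convenient way to drive this step is to generate the O/S copy commands from V$DATAFILE and run them from a shell; this is only a sketch, the destination /backup/hotbkp is hypothetical, and cp assumes a Unix-style host:

SQL> select 'cp ' || name || ' /backup/hotbkp/' as copy_cmd from v$datafile ;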

Step 4 : Take the database out of backup mode. If we are using 10gR2 or a later version of Oracle, we use the command below to take the database out of backup mode:
SQL> alter database end backup ;
Database Altered
If our version is prior to 10gR2, then we use the tablespace-level commands, as before:
SQL> set echo off
SQL> set heading off
SQL> set feedback off
SQL> set termout off
SQL> spool end_mode.sql
SQL> select 'alter tablespace '||name||' end backup ;' "tablespace in backup mode" from
v$tablespace ;
SQL> spool off
SQL> @C:\endmode.sql

Step 5 : Switch the redolog file and backup the archivelogs. After taking the database out of hot backup mode we must switch the logfile (preferably more than once) and back up the archivelogs generated. We may back up archivelogs while the database is in backup mode, but we must also back up the first archivelog(s) generated after the end backup. The best method to do both is to run the SQL command ALTER SYSTEM ARCHIVE LOG CURRENT. This switches the logfile but does not return the prompt until the previous redo log has been archived. We can run ALTER SYSTEM SWITCH LOGFILE instead, but then we won't be sure that the latest redo log has been archived before we move on to the next step.
SQL> alter system archive log current ;
System altered.
SQL> /
System altered.
Now backup the archivelogs to the backup location .
Step 6 : Back up the control file. Now we can back up the controlfile both as a binary file and in human-readable form. We should use both methods; either one may come in handy at different times. The commands are:
(Human readable)
SQL> alter database backup controlfile to trace ;
Database altered.
or
SQL> alter database backup controlfile to trace as '<backup location>' ;
Database altered.
(Binary format)
SQL> alter database backup controlfile to '<backup location>' ;
Database altered.

Step 7 : Back up the password file and spfile. We can back up the password file and spfile, though it is not mandatory.

Some Points Worth Remembering


We need to back up all the archived log files; these files are very important for recovery.

It is advisable to back up all tablespaces (except read-only tablespaces), else complete recovery is not possible.

Backups of the online redo log files are not required, as the online log file has the end-of-backup marker and would cause corruption if used in recovery.

It is preferable to start hot backups at a low-activity time.

When hot backups are in progress we "cannot" shut down the database in NORMAL or IMMEDIATE mode (and it is also not desirable to ABORT).

Difference between Dataguard and Active Dataguard

I have found people are a bit confused between Data Guard and Active Data Guard. They assume that Active Data Guard has a different configuration or different properties. Here I have tried to cover Active Data Guard.
Active Data Guard is a new option for Oracle Database 11g Enterprise Edition. It enables read-only access to a physical standby database, i.e. the ability to query our physical standby database while it is continuously updated to reflect the state of the primary database. It is an addition to 11g Data Guard and comes with an extra charge.
Oracle Active Data Guard enhances the Quality of Service (QoS) for production databases by offloading resource-intensive operations to one or more standby databases, which are synchronized
copies of the production database. With Oracle Active Data Guard, a physical standby database can be
used for real-time reporting, with minimal latency between reporting and production data. Compared
with traditional replication methods, Active Data Guard is very simple to use, transparently supports
all datatypes, and offers very high performance. Oracle Active Data Guard also allows backup
operations to be off-loaded to the standby database, and be done very fast using intelligent
incremental backups. Oracle Active Data Guard thus is a very effective way to insulate interactive
users and critical business tasks on the production system from the impact of such long-running
operations. Oracle Active Data Guard provides the additional benefit of high availability and disaster
protection by quickly failing over to the standby database in the event of a planned or an unplanned
outage at the production site.
The Active Data Guard contains the following features :

Physical Standby with Real-time Query

Fast Incremental Backup on Physical Standby

Automatic Block Repair


If a physical standby database in a Data Guard configuration has any of the above features enabled,
then the Active Data Guard option must be licensed for every such physical standby, and also for the
primary database.
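One way to see whether these features have been used on a database is to query dba_feature_usage_statistics; note that the exact feature name filter below is an assumption and can vary between versions.
SQL> select name, detected_usages, currently_used from dba_feature_usage_statistics where name like 'Active Data Guard%' ;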
Conversion from Physical standby to Active Data Guard :
We can convert the physical standby into an Active Data Guard standby. Below are the steps
1.) Stop Apply Services
SQL> alter database recover managed standby database cancel ;
2.) Shut the database and bring it up in mount stage
SQL> shut immediate
SQL> startup mount
3.) Open the database read only and then restart redo apply
SQL> alter database open ;
SQL> alter database recover managed standby database using current logfile disconnect ;
This gives us a physical standby that is open read only, while redo apply continues in the background.
How to Check if Active Data Guard is Enabled or Not
Use the following query to confirm that Data Guard is in active mode:
SQL> select 'Using Active Data Guard' ADG from v$managed_standby m, v$database d where m.process like 'MRP%' ;

ADG
-----------------------
Using Active Data Guard
or, from the standby database :
SQL> select open_mode, controlfile_type from v$database ;

OPEN_MODE              CONTROLFILE
--------------------   -----------
READ ONLY WITH APPLY   STANDBY

If the first query does not return the above result, and instead returns "no rows selected", then Active Data Guard is not enabled.

Difference Between Unique Indexes and Unique Constraints
There is a very common confusion that whenever we create a unique key constraint or a primary key, a corresponding unique index is created. A primary key or unique key usually creates a unique index, but this is not always true. Let's have a look ...
SQL> create table T1 (id number ) ;
Table created.
SQL> alter table T1 add constraint T_ID_IDX unique(id) ;
Table altered.
SQL> select index_name, table_name, uniqueness from dba_indexes where table_name='T1' ;

INDEX_NAME   TABLE_NAME   UNIQUENESS
----------   ----------   ----------
T_ID_IDX     T1           UNIQUE

SQL> select constraint_name, constraint_type, table_name from dba_constraints where table_name='T1' and owner='HR' ;

CONSTRAINT_NAME   C   TABLE_NAME
---------------   -   ----------
T_ID_IDX          U   T1
Here we see that when the unique constraint was added, a unique index was created along with it. Now have another look ..
SQL> create table T2 (id number ) ;
Table created.
SQL> create unique index T2_id_idx on T2(id) ;
Index created.
SQL> select index_name, table_name, uniqueness from dba_indexes where table_name='T2' ;

INDEX_NAME   TABLE_NAME   UNIQUENESS
----------   ----------   ----------
T2_ID_IDX    T2           UNIQUE

SQL> select constraint_name, constraint_type, table_name from dba_constraints where table_name='T2' and owner='HR' ;

no rows selected

SQL> alter table T2 add constraint T2_ID_IDX unique(id) ;
Table altered.
Now we might expect two indexes, i.e. one from the unique index and another from the unique constraint. Let's look at the query below :
SQL> select constraint_name, constraint_type, table_name from dba_constraints where table_name='T2' and owner='HR' ;

CONSTRAINT_NAME   C   TABLE_NAME
---------------   -   ----------
T2_ID_IDX         U   T2
SQL> drop index T2_ID_IDX;
drop index T2_ID_IDX
*
ERROR at line 1:
ORA-02429: cannot drop index used for enforcement of unique/primary key
The demo above shows only one index, and that existing index is now used to enforce the constraint (which is why it cannot be dropped). Hence, from the above demo we can say that "a unique constraint does not necessarily create an index, nor does it necessarily create a UNIQUE index".
If we want a unique index in place, it is suggested that we explicitly create it using CREATE UNIQUE INDEX. A primary key or unique constraint is not guaranteed to create a new index, nor is the index it uses guaranteed to be a unique index. Therefore, if we want a unique index for query performance reasons, we should explicitly create one.
A question may arise: why do we need a unique constraint when we already have a unique index?
The reasons are
1. ) The difference between a unique index and a unique constraint starts with the fact that the
constraint is a rule while the index is a database object that is used to provide improved performance
in the retrieval of rows from a table. It is a physical object that takes space and is created with the
DDL command .
2.) We can use either a unique or a non-unique index to support a unique constraint. Constraints are metadata, and more metadata is good. We can define a foreign key that references a unique constraint, but not one that references a unique index (see the sketch after this list).
3.) A constraint has a different meaning from an index. It gives the optimizer more information and allows us to have foreign keys on the column, whereas a unique index alone doesn't. But most importantly, it is the right way to do it.
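To illustrate points 2 and 3, here is a small sketch (the names T3, T3_ID_IDX, T3_ID_UK and T3_CHILD are hypothetical, and the output is what we would expect): a unique constraint enforced by an ordinary non-unique index, which a foreign key can then reference.
SQL> create table T3 (id number) ;
Table created.
SQL> create index T3_ID_IDX on T3(id) ;
Index created.
SQL> alter table T3 add constraint T3_ID_UK unique(id) using index T3_ID_IDX ;
Table altered.
SQL> select index_name, uniqueness from dba_indexes where table_name='T3' ;

INDEX_NAME   UNIQUENESS
----------   ----------
T3_ID_IDX    NONUNIQUE

SQL> create table T3_CHILD (id number, constraint T3_FK foreign key (id) references T3(id)) ;
Table created.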

What is bootstrap?
Bootstrap is a technique for loading the first few instructions of a computer program into active
memory and then using them to bring in the rest of the program.

What is bootstrap in Oracle ?
In Oracle, bootstrap refers to the loading of metadata (the data dictionary) before we OPEN the database. Objects (tables / indexes / clusters) with an object_id below 56 are classified as bootstrap objects. These objects are mandatory to bring up an instance, as they contain the most important metadata of the database.
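Once the database is open, we can list these core objects from the data dictionary; a minimal sketch (assuming access to DBA_OBJECTS):
SQL> select object_id, object_name, object_type from dba_objects where object_id < 56 order by object_id ;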

What happens on database startup?
This can be explained by setting SQL trace while opening the database. Connect as sysdba and do the following
SQL> startup mount
SQL> alter session set events '10046 trace name context forever, level 12' ;
SQL> alter database open ;
SQL> alter session set events '10046 trace name context off' ;
SQL> ORADEBUG SETMYPID
SQL> ORADEBUG TRACEFILE_NAME
The sql_trace of the above process explains the following operations behind startup. The bootstrap
operation happens between MOUNT stage and OPEN stage.
1.) The first SQL in the above trace shows the creation of the bootstrap$ table, something similar to the following:
create table bootstrap$ ( line# number not null, obj# number not null, sql_text varchar2(4000) not
null) storage (initial 50K objno 56 extents (file 1 block 377))
This sys.bootstrap$ table contains the DDLs for the other bootstrap tables (object_id below 56). These tables were actually created internally at database creation time (by sql.bsq); the create DDLs passed between the MOUNT and OPEN stages are executed through different driver routines. In simple words, these are not standard CREATE DDLs.
While starting up the database, Oracle loads these objects into memory (the shared pool), i.e. it assigns the relevant object number and refers to the datafile and the block associated with it. Such operations happen only during a warm startup.
@ The internals of the above are explained in kqlb.c.

2.) Next, a query is executed against the sys.bootstrap$ table, which holds the create SQLs for the other base tables.
select line#, sql_text from bootstrap$ where obj# != :1 (56)
Subsequently it will create those objects by running those queries.
Object number 0 (System Rollback Segment)
Object number 2 to 55 (Other base tables)
Object number 1 is NOT used by any of the objects.
3.) Performs various operations to keep the bootstrap objects in a consistent state.
Upon successful completion of the bootstrap, the database carries out its other tasks, such as recovery, and opens the database.

Which objects are classified as bootstrap objects in an Oracle database?
Objects with an object_id less than 56 are classified as core bootstrap objects. In addition, the following objects are also treated as bootstrap objects :
hist_head$
histgrm$
i_hh_obj#_col#
i_hh_obj#_intcol#
i_obj#_intcol#
i_h_obj#_col#
c_obj#_intcol#
From 10.1 the following objects have been added:
fixed_obj$
tab_stats$
ind_stats$
i_fixed_obj$_obj#
i_tab_stats$_obj#
i_ind_stats$_obj#
object_usage
These additional objects can be re-classified (or ignored) by the following methods :
1. Opening the database in migrate mode
2. Using event 38003
Event 38003 affects the bootstrap process of loading the fixed cache in kqlblfc(). Per default certain
objects are marked as bootstrap objects (even though they are not defined as such in sys.bootstrap$)
but by setting the event they will be left as non-bootstrapped.

What is bootstrap process failure? (ORA-00704)
The ORA-00704 error is SERIOUS if reported at startup. It refers to a problem during the bootstrap operation. Any ORA-00704 error on STARTUP / RECOVER is serious; this error normally arises due to some inconsistency with the bootstrap segments, data corruption on bootstrap$, or corruption on any of the base tables below object_id 56. After this error the database may not open.

When can ORA-00704 occur?
1. This error is likely when unsupported operations are attempted to force open the database.
2. This error can also occur when the SYSTEM datafile has corrupted blocks (ORA-01578).
3. In earlier releases of Oracle (prior to 7.3.4 and 8.0.3) this issue could arise due to Bug 434596.
The option is to restore from a good backup and recover it.
-> If the underlying cause is physical corruption due to hardware problems, then do complete recovery.
-> If the issue is not related to any physical corruption, then the problem could be due to some unsupported actions on bootstrap, and a Point In Time Recovery would be an option in such cases.

SQL*Plus Error Logging in Oracle 11g
SQL*Plus is one of the most commonly used tools by DBAs. SQL*Plus error logging is one of the useful new features in Oracle 11g. It provides an additional method of trapping errors: when error logging is enabled, it records SQL, PL/SQL and SQL*Plus errors and associated parameters in an error log table (SPERRORLOG by default), and we can then query the log table to review errors resulting from a query.
Note : It is an 11g SQL*Plus feature, not a database engine feature.
Why Error Logging ?
We normally spool the output to capture the errors from scripts and then track the spool logs for the error output. This works fine for a single script or a few scripts, but is cumbersome when multiple scripts are involved. Secondly, we need an OS path to store the spool files, the right permissions, and so on. To overcome these scenarios, error logging is a useful feature to capture and locate the errors in a database table rather than in OS files.
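For example, once error logging is enabled (the steps follow below), we can run our scripts and then review everything that failed from the log table; the script path here is only a hypothetical example.
SQL> set errorlogging on
SQL> @/home/oracle/scripts/nightly_load.sql
SQL> select script, statement, message from sperrorlog ;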
Steps to Activate the Error Logging :

1.) Check the status of Error Logging : To check the status of error logging , fire the below
command
SQL> show errorlogging
errorlogging is OFF
Note: Error logging is set OFF by default.

2.) Enable the Error Logging : Whenever we enable error logging, the default table SPERRORLOG is created. Enable it by using the command below
SQL> set errorlogging on
SQL> show errorlogging
errorlogging is ON TABLE SCOTT.SPERRORLOG
As we can see, the default table SPERRORLOG is created in the SCOTT schema, since the current user is SCOTT. Hence, the SPERRORLOG table is created under the current user.
Creating a User Defined Error Log Table :
We can create one or more error log tables to use instead of the default. Before specifying a user-defined error log table, let's have a look at the structure of the default error log table
SQL> desc sperrorlog
 Name          Null?    Type
 ------------  -------  ----------------
 USERNAME               VARCHAR2(256)
 TIMESTAMP              TIMESTAMP(6)
 SCRIPT                 VARCHAR2(1024)
 IDENTIFIER             VARCHAR2(256)
 MESSAGE                CLOB
 STATEMENT              CLOB
For each error, the error logging feature records the pieces of information shown above. To use a user-defined log table, we must have permission to access it, and we must issue the SET ERRORLOGGING command with the TABLE schema.tablename option to identify the error log table (and the schema, if applicable). Here is the syntax to specify a user-defined table :
SQL> set errorlogging on table [schema].[table]
for example :
SQL> set errorlogging on table hr.Error_log_table
Demo to create user-defined table :

Step 1 : Create the table : If we point error logging at a user-defined table that does not exist, we get the error below
SQL> set errorlogging on table hr.Error_log_table
SP2-1507: Errorlogging table, role or privilege is missing or not accessible
Create the table as
SQL> create table Hr.Error_log_table ( username varchar(256), timestamp TIMESTAMP, script
varchar(1024), identifier varchar(256), message CLOB, statement CLOB) ;
Table created.
Step 2 : Create user-defined error logging table
SQL> show errorlogging
errorlogging is OFF
SQL> set errorlogging on table hr.Error_log_table
SQL> show errorlogging
errorlogging is ON TABLE hr.Error_log_table
Step 3 : Generate some errors
SQL> selet * from employees ;
SP2-0734: unknown command beginning "selet * fr..." - rest of line ignored.
SQL> select * from employe ;
select * from employe
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> set linesze 2000
SP2-0158: unknown SET option "linesze"
Step 4 : Check the error logging from the user-defined errorlog
SQL> select * from hr.Error_log_table ;
SQL> commit ;
Without a commit, other sessions won't see this information. Here I have committed and taken the output from another session for the sake of proper formatting.
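To keep the output readable, we can limit how much of the CLOB columns is displayed; the column widths below are just a suggestion.
SQL> set long 200
SQL> col username format a10
SQL> col identifier format a10
SQL> col message format a50
SQL> select username, identifier, message from hr.Error_log_table ;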

We can truncate the error log table, which clears all existing rows and begins recording errors from the current session :
SQL> set errorlogging on truncate
SQL> select * from Error_log_table ;
No rows selected

We can also set a unique identifier to make it easier to find the logged records. We can use it to identify errors from a particular session or from a particular version of a query.
SQL> set errorlogging on identifier 'MARK'
SQL> select * from employ ;
select * from employ
*
ERROR at line 1:
ORA-00942: table or view does not exist
Now check the identifier :
SQL> select * from hr.Error_log_table ;
We can delete records from it just like a regular table.
SQL> delete hr.Error_log_table where IDENTIFIER='MARK' ;
SQL> commit;
Disable Error Logging :
SQL> set errorlogging OFF
SQL> show errorlogging
errorlogging is OFF
