Finally, Oracle has released the most awaited Oracle Database, i.e. 12c (Oracle 12.1.0.1). It is available
for download from the Oracle Software Delivery Cloud (formerly known as eDelivery) and OTN (Oracle
Technology Network) for 64-bit Linux and Solaris. Oracle 12c is not available for 32-bit platforms.
Oracle has not yet released the 12c database software for AIX and Windows; hopefully it will be
released soon. Here are the links for downloading the Oracle software.
eDelivery : Click here to download from eDelivery.
OTN : Click here to download from OTN.
Documentation : Click here to download the 12c documentation.
There are some very exciting features in Oracle 12c Database. One of them is the "Pluggable Database"
feature, which allows a single Oracle database instance to hold many other databases, allowing for more
efficient use of system resources and easier management. I will soon download it and post about
this feature.
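As a rough sketch of what the new multitenant syntax looks like (the PDB name, admin user, password and file paths below are purely illustrative; I have not tested this yet):

```sql
-- Create a pluggable database inside a 12c container database
-- (pdb1, pdbadmin and the paths are hypothetical examples)
CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdbadmin IDENTIFIED BY secret
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                       '/u01/oradata/cdb1/pdb1/');

ALTER PLUGGABLE DATABASE pdb1 OPEN;
```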
An official Oracle price list, which was updated Tuesday, showed a "multitenant" database option
priced at $17,500 per processor. A processor license for the main Enterprise Edition remained priced
at $47,500 per processor.
at the O/S level and disrupt the memory in the SGA. The OS must be configured to support the larger
amount of shared memory.
The impact at the O/S level is that every process that starts up will want to build a memory map for the
SGA; depending on the way we have configured memory pages and the strategy our O/S adopts to build
maps, it could demand a huge amount of O/S memory in a short time. The technology we need to avoid
this issue comes in two different flavours: large memory pages, and shared memory maps.
The impact on the SGA is two-fold: each process and session has to create an entry in v$process and
v$session, and allocate various memory structures in the SGA. Acquiring the rows in v$session and
v$process are serial actions, and the memory allocation in the SGA can cause massive flushing of the
library cache.
So, it is advisable to increase the number of processes only while keeping its impact in mind.
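Before raising the limit, it is worth checking how close the instance actually gets to it. A sketch (the value 500 is just an example):

```sql
-- Current and high-water-mark usage against the configured limits
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');

-- Raising processes requires an spfile change and an instance restart
ALTER SYSTEM SET processes = 500 SCOPE = SPFILE;
```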
I received a mail from a reader regarding upgrading a database. He wants to upgrade his
database from 9i to 10g. Here, I would like to advise that it is better to upgrade from 9i to 11g
rather than from 9i to 10g, because Oracle extended support for 10gR2 ends on 31-Jul-2013, and there
are also more features available in Oracle 11g. We can upgrade directly to Oracle 11g if our current
database is 9.2.0.4 or newer; direct upgrades are supported from versions 9.2.0.4, 10.1 and 10.2.
Older versions must be upgraded in steps, for example:
7.3.3 -> 7.3.4 -> 9.2.0.8 -> 11.1
Manual Upgrade : A manual upgrade consists of running SQL scripts and utilities from a
command line to upgrade a database to the new Oracle Database 10g release. While a manual
upgrade gives us finer control over the upgrade process, it is more susceptible to error if any of
the upgrade or pre-upgrade steps are either not followed or are performed out of order. Below are
the steps
1.) Install Oracle 10g software : For the upgrade, invoke setup.exe (Windows) or runInstaller (Unix)
and select "Install software only" to install the Oracle software.
2.) Take a Full Database Backup : Take a full backup of the database which is to be upgraded.
3.) Check for Invalid Objects : Check for invalid objects by running the utlrp.sql script as
SQL> @ORACLE_HOME/rdbms/admin/utlrp.sql
4.) Log in to the 9i instance and run utlu102i.sql : This script is located in the Oracle 10g home.
SQL> spool pre_upgrd.sql
SQL> @<ORACLE_10G_HOME>/rdbms/admin/utlu102i.sql
SQL> spool off
The above script checks a number of areas to make sure the instance is suitable for upgrade,
including:
Database version
Tablespace sizes
Server options
Database components
Miscellaneous Warnings
Cluster information
The issues indicated by this script should be resolved before a manual upgrade is attempted.
Once we have resolved the above warnings, re-run the script once more to cross-check.
5.) Check for the timestamp with timezone Datatype : The time zone files that are
supplied with Oracle Database 10g have been updated from version 1 to version 2 to reflect
changes in transition rules for some time zone regions. The changes may affect existing data of
TIMESTAMP WITH TIME ZONE datatype. To preserve this TIMESTAMP data for updating according
to the new time zone transition rules, we must run the utltzuv2.sql script on the database
before upgrading. This script analyzes our database for TIMESTAMP WITH TIME ZONE columns
that are affected by the updated time zone transition rules.
SQL> @ORACLE_10G_HOME/rdbms/admin/utltzuv2.sql
SQL> select * from sys.sys_tzuv2_temptab;
If the utltzuv2.sql script identifies columns with time zone data affected by a database
upgrade, then back up the data in character format before we upgrade the database. After the
upgrade, we must update the tables to ensure that the data is stored based on the new
rules. If we export the tables before upgrading and import them after the upgrade, the
conversion will happen automatically during the import.
6.) Shutdown the database :
Shut down the database and copy the spfile (or pfile) and password file from the 9i home to the 10g home.
7.) Upgrade Database : Set following environment for 10g and login using "SYS" user . It takes
roughly half an hour to complete. Spool the output to a file so that you can review it afterward.
ORACLE_SID=<sid>
ORACLE_HOME=<10g home>
PATH=<10g path>
sqlplus / as sysdba
SQL> startup upgrade
SQL> spool upgrd_log.sql
SQL> @catupgrd.sql
SQL> spool off
8.) Recompile any invalid objects : Compare the number of invalid objects with the number noted
in step 4 . It should hopefully be the same or less.
SQL>@ORACLE_HOME/rdbms/admin/utlrp.sql
9.) Check the status of the upgrade :
SQL> @ORACLE_HOME/rdbms/admin/utlu102s.sql
The above script queries the DBA_SERVER_REGISTRY to determine upgrade status and provides
information about invalid or incorrect component upgrades. It also provides names of scripts to rerun
to fix the errors.
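The same component status can also be queried directly; a small sketch:

```sql
-- After the upgrade every component should show STATUS = 'VALID'
SELECT comp_name, version, status
FROM   dba_server_registry;
```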
10.) Edit the spfile : Create a pfile from spfile as
SQL>create pfile from spfile ;
Open the pfile and set the compatible parameter to 10.2.0.0.0 . Shutdown the database and create
the new modified spfile .
SQL>shut immediate
SQL> create spfile from pfile ;
11.) Start the database normally
SQL> startup
and finally configure Oracle Net and deinstall the old Oracle 9i database software using the OUI.
SQL_TRACE is the main method for collecting SQL Execution information in Oracle. It records a wide
range of information and statistics that can be used to tune SQL operations. The sql trace file contains
a great deal of information . Each cursor that is opened after tracing has been enabled will be recorded
in the trace file.
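As a sketch, tracing can be enabled for the current session like this (level 8 also records the wait events discussed below; the identifier is just an example):

```sql
-- Tag the trace file name so it is easy to find in user_dump_dest
ALTER SESSION SET tracefile_identifier = 'demo_trace';

-- Level 8 = SQL trace plus wait events
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- ... run the statements to be traced ...

ALTER SESSION SET events '10046 trace name context off';
```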
The raw trace file mostly refers to statements by cursor number, e.g. PARSING IN CURSOR #3. EXECutes,
FETCHes and WAITs are recorded against a cursor. The information applies to the most recently parsed
statement within that cursor. Firstly, let's have a look at "Wait Events".
WAIT #6: nam='db file sequential read' ela= 8458 file#=110 block#=63682 blocks=1 obj#=221
tim=506028963546
WAIT = An event that we waited for.
nam = What was being waited for. The wait events here are the same as those seen in the view V$SESSION_WAIT.
ela = Elapsed time for the operation (microseconds).
p1 = P1 for the given wait event.
p2 = P2 for the given wait event.
p3 = P3 for the given wait event.
Example No. 1 : WAIT #6: nam='db file sequential read' ela= 8458 file#=110 block#=63682
blocks=1 obj#=221 tim=506028963546
The above line can be translated as: completed waiting under cursor no. 6 for "db file sequential
read". We waited 8458 microseconds, i.e. approx. 8.5 milliseconds, for a read of file 110, start block
63682, for 1 Oracle block of object number 221. The timestamp was 506028963546.
Example no.2 : WAIT #1: nam='library cache: mutex X' ela= 814 idn=3606132107
value=3302829850624 where=4 obj#=-1 tim=995364327604
The above line can be translated as: completed waiting under cursor no. 1 for "library cache: mutex
X". We waited 814 microseconds, i.e. approx. 0.8 milliseconds, to get an eXclusive library cache mutex
with identifier 3606132107, value 3302829850624, location 4. It was not associated with any
particular object (obj#=-1). Timestamp 995364327604.
The trace file also shows the processing of the SQL statements. Oracle processes SQL statements as
follows:
Stage 1: Create a Cursor
Stage 2: Parse the Statement
Stage 3: Describe Results
Stage 4: Defining Output
Stage 5: Bind Any Variables
Stage 6: Execute the Statement
Stage 7: Parallelize the Statement
Stage 8: Fetch Rows of a Query Result
Stage 9: Close the Cursor
Now let's move on to another important term, PARSING IN CURSOR #n.
PARSING IN CURSOR# :
Cursor : In order for Oracle to process an SQL statement, it needs to create an area of memory
known as the context area; this holds the information needed to process the statement. This
information includes the number of rows processed by the statement and a pointer to the parsed
representation of the statement (parsing an SQL statement is the process whereby the statement is
transferred to the server, at which point it is evaluated as being valid).
A cursor is a handle, or pointer, to the context area. Through the cursor, a PL/SQL program can control
the context area and what happens to it as the statement is processed. Among the important features of
the cursor:
1.) Cursors allow you to fetch and process rows returned by a SELECT statement, one row at a time.
PARSE #3: c=15625, e=177782, p=2, cr=3, cu=0, mis=1, r=0, dep=0, og=1, plh=272002086,
tim=276565143470
c = CPU time (microseconds, rounded to centisecond granularity on 9i and above).
e = Elapsed time (centiseconds prior to 9i, microseconds thereafter).
p = Number of physical reads.
cr = Number of buffers retrieved for CR (consistent) reads.
cu = Number of buffers retrieved in current mode.
mis = Cursor missed in the cache.
r = Number of rows processed.
dep = Recursive call depth (0 = user SQL, >0 = recursive).
og = Optimizer goal: 1=All_Rows, 2=First_Rows, 3=Rule, 4=Choose.
From the above parse line it is very clear that the total time taken to parse the statement is
approximately 0.178 seconds (e=177782 microseconds) and that the number of physical reads performed is 2.
Bind Variables : If the SQL statement references bind variables, then the cursor's entry in the trace
file includes a section of text describing each bind variable. For each bind variable there are a
number of attributes listed. The following are the ones we are interested in here:
mxl = The maximum length, i.e. the maximum number of bytes occupied by the variable. E.g. dty=2 and mxl=22 denotes a NUMBER column of up to 22 bytes.
scl = The scale (for NUMBER columns).
pre = The precision (for NUMBER columns).
value = The value of the bind variable.
dty = The datatype. Typical values are:
1 = VARCHAR2 or NVARCHAR2
2 = NUMBER
8 = LONG
11 = ROWID
12 = DATE
23 = RAW
24 = LONG RAW
96 = CHAR
112 = CLOB or NCLOB
113 = BLOB
114 = BFILE
EXEC : Execute a pre-parsed statement. At this point, Oracle has all necessary information and
resources, so the statement is executed. For example
EXEC #2:c=0,e=225,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=3684871272,tim=282618239403
Fetch : Fetch rows from a cursor. For example
FETCH #4:c=0,e=8864,p=1,cr=26,cu=0,mis=0,r=1,dep=0,og=1,plh=3564694750,tim=282618267037
STAT : Lines report explain plan statistics for the numbered [CURSOR]. These let us know the 'run
time' explain plan. For example
STAT #1 id=1 cnt=7 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=0 pr=0 pw=0 time=0 us cost=2
size=2128 card=1)'
id = Line of the explain plan to which the row count applies (starts at line 1). This is effectively the row source row count for all row sources in the execution tree.
cnt = Number of rows for this row source.
Note : Timestamps are used to determine the time between any two operations.
Reference : Metalink [ID 39817.1]
If we have set the Maximum Protection mode, we have chosen to guarantee that data loss cannot occur:
the primary database shuts down if a fault (such as the network going down) prevents it from writing
its redo stream to at least one remote standby redo log.
If we have set the Maximum Availability mode, we have chosen a configuration that provides the
highest level of data protection that is possible without compromising the availability of the primary
database. Unlike maximum protection mode, the primary database does not shut down if a fault prevents
it from writing its redo stream to a remote standby redo log. Instead, the primary database operates in
maximum performance mode until the fault is corrected and all gaps in redo log files are resolved. When
all gaps are resolved, the primary database automatically resumes operating in maximum availability
mode. This guarantees that no data loss will occur if the primary database fails, but only if a second
fault does not prevent the complete set of redo data from being sent from the primary database to at
least one standby database.
If we have set the Maximum Performance mode (the default), we have chosen a mode that provides
the highest level of data protection that is possible without affecting the performance of the primary
database. This is accomplished by allowing a transaction to commit as soon as the redo data needed
to recover the transaction is written to the local online redo log. The primary database's redo data
stream is also written to at least one standby database, but the redo stream is written
asynchronously with respect to the commitment of the transactions that create the redo data.
The maximum performance mode enables us to either set the LGWR and ASYNC attributes, or set the
ARCH attribute on the LOG_ARCHIVE_DEST_n parameter for the standby database destination. If the
primary database fails, we can reduce the amount of data that is not received on the standby
destination by setting the LGWR and ASYNC attributes.
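A sketch of such a destination (the destination number and the service name stby are examples only):

```sql
-- Ship redo asynchronously via LGWR to the standby database "stby"
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
  SCOPE = BOTH;
```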
If LGWR SYNC or ASYNC is deployed, what process(es) bring(s) the standby database back
into sync with the primary [database] if the network is lost and is then restored? How does
it do it?
Again, this is dependent upon the mode we have chosen for our database. The LGWR process (and
possibly the LNSn process if we have multiple standby databases) is responsible for closing the gap.
My biggest question is, when the network to the standby is lost with SYNC or ASYNC, where is the
information queued and how is it retransmitted once the network has been re-established?
This implies that our database has been set to either maximum availability or maximum performance
mode. We cannot use the ASYNC attribute with maximum protection mode. The information is queued
in the local online redo log, and the LGWR (and the LNSn) process will transmit the data to the standby
database's standby redo log files to close the gap once the network connectivity has been re-established.
Gap recovery is handled through the polling mechanism. For physical and logical standby databases,
Oracle Change Data Capture, and Oracle Streams, Data Guard performs gap detection and resolution
by automatically retrieving missing archived redo log files from the primary database. No extra
configuration settings are required to poll the standby database(s) to detect any gaps or to resolve the
gaps.
The important consideration here is that automatic gap recovery is contingent upon the availability of
the primary database. If the primary database is not available and we have a configuration with
multiple physical standby databases, we can set up additional initialization parameters so that
Redo Apply can resolve archive gaps from another standby database.
It is possible to manually determine if a gap exists and to resolve those archive gaps. To manually
determine if a gap exists, query the V$ARCHIVE_GAP view on our physical standby database. If a gap
is found, we will then need to locate the archived log files on our primary database, copy them to our
standby database, and register them.
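A sketch of the manual check and registration (the archived log file name is a hypothetical example):

```sql
-- On the physical standby: any rows returned indicate a gap
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;

-- After copying the missing logs across from the primary,
-- register each one so Redo Apply can use it
ALTER DATABASE REGISTER LOGFILE '/u01/arch/arch_1_545.arc';
```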
Indexes play a crucial role in the performance tuning of a database. It is very important to
know how an index works, i.e. how indexes fetch data from a table. There is a very good post
by rleishman on the working of indexes. Let's have a look.
What is an Index ?
An index is a schema object that contains an entry for each value that appears in the indexed
column(s) of the table or cluster and provides direct, fast access to rows. Just as the index in a
manual helps us to locate information faster than if there were no index, an Oracle Database index
provides a faster access path to table data.
Blocks
First we need to understand a block. A block - or page for Microsoft boffins - is the smallest unit of
disk that Oracle will read or write. All data in Oracle - tables, indexes, clusters - is stored in blocks.
The block size is configurable for any given database but is usually one of 4Kb, 8Kb, 16Kb, or 32Kb.
Rows in a table are usually much smaller than this, so many rows will generally fit into a single block.
So we never read "just one row"; we will always read the entire block and ignore the rows we don't
need. Minimising this wastage is one of the fundamentals of Oracle Performance Tuning.
Oracle uses two different index architectures: b-Tree indexes and bitmap indexes. Cluster indexes,
bitmap join indexes, function-based indexes, reverse key indexes and text indexes are all just
variations on the two main types. b-Tree is the "normal" index .
The "-Tree" in b-Tree
A b-Tree index is a data structure in the form of a tree - no surprises there - but it is a tree of
database blocks, not rows. Imagine the leaf blocks of the index as the pages of a phone book . Each
page in the book (leaf block in the index) contains many entries, which consist of a name (indexed
column value) and an address (ROWID) that tells us the physical location of the telephone (row in the
table).
The names on each page are sorted, and the pages - when sorted correctly - contain a complete
sorted list of every name and address.
A sorted list in a phone book is fine for humans, because we have mastered "the flick" - the ability to
fan through the book looking for the page that will contain our target without reading the entire page.
When we flick through the phone book, we are just reading the first name on each page, which is
usually in a larger font in the page header. Oracle cannot read a single name (row) and ignore the
rest of the page (block); it needs to read the entire block.
If we had no thumbs, we may find it convenient to create a separate ordered list containing the first
name on each page of the phone book along with the page number. This is how the branch-blocks of
an index work; a reduced list that contains the first row of each block plus the address of that block.
In a large phone book, this reduced list containing one entry per page will still cover many pages, so
the process is repeated, creating the next level up in the index, and so on until we are left with a
single page: the root of the tree.
For example :
To find the name Galileo in this b-Tree phone book, we:
=> Read page 1. This tells us that page 6 starts with Fermat and that page 7 starts with Hawking.
=> Read page 6. This tells us that page 350 starts with Fyshe and that page 351 starts with Garibaldi.
=> Read page 350, which is a leaf block; we find Galileo's address and phone number.
=> That's it; 3 blocks to find a specific row in a million row table. In reality, index blocks often fit 100
or more rows, so b-Trees are typically quite shallow. I have never seen an index with more than 5
levels. Curious? Try this:
SQL> select index_name, blevel+1 from user_indexes order by 2 ;
user_indexes.blevel is the number of branch levels. Always add 1 to include the leaf level; this tells us
the number of blocks a unique index scan must read to reach the leaf-block. If we're really, really,
insatiably curious; try this in SQL*Plus:
SQL> accept index_name prompt "Index Name: "
SQL> alter session set tracefile_identifier='&index_name' ;
SQL> column object_id new_value object_id
SQL> select object_id from user_objects where object_type = 'INDEX' and
object_name=upper('&index_name');
SQL> alter session set events 'Immediate trace name treedump level &object_id';
SQL> alter session set tracefile_identifier='' ;
SQL> show parameter user_dump_dest
Give the name of an index on a smallish table (because this will create a BIG file). Now, on the Oracle
server, go to the directory shown by the final SHOW PARAMETER user_dump_dest command and find
the trace file - the file name will contain the index name. Here is a sample:
---- begin tree dump
branch: 0x68066c8 109078216 (0: nrow: 325, level: 1)
leaf: 0x68066c9 109078217 (-1: nrow: 694 rrow: 694)
leaf: 0x68066ca 109078218 (0: nrow: 693 rrow: 693)
leaf: 0x68066cb 109078219 (1: nrow: 693 rrow: 693)
leaf: 0x68066cc 109078220 (2: nrow: 693 rrow: 693)
leaf: 0x68066cd 109078221 (3: nrow: 693 rrow: 693)
...
...
leaf: 0x68069cf 109078991 (320: nrow: 763 rrow: 763)
leaf: 0x68069d0 109078992 (321: nrow: 761 rrow: 761)
leaf: 0x68069d1 109078993 (322: nrow: 798 rrow: 798)
leaf: 0x68069d2 109078994 (323: nrow: 807 rrow: 807)
----- end tree dump
This index has a root branch block with 325 leaf blocks beneath it (nrow: 325; the leaves are numbered
-1 to 323). Each leaf block contains a variable number of index entries, up to 807! A deeper index
would be more interesting, but it would take a while to dump.
"B" is for...
Contrary to popular belief, b is not for binary; it's balanced.
As we insert new rows into the table, new entries are inserted into index leaf blocks. When a leaf block
is full, another insert will cause the block to be split into two blocks, which means an entry for the new
block must be added to the parent branch-block. If the branch-block is also full, it too is split. The
process propagates back up the tree until the parent of the split block has space for one more entry, or
the root is reached. A new root is created if the root node splits. Staggeringly, this process ensures
that every path from root to leaf will be the same length.
How are Indexes used ?
Indexes have three main uses:
regarding joins, sub-queries, OR predicates, functions (inc. arithmetic and concatenation), and casting
that are outside the scope of this article. Consult a good SQL tuning manual.
Much more interesting - and important - are the cases where an index makes a SQL slower. These are
particularly common in batch systems that process large quantities of data.
To explain the problem, we need a new metaphor. Imagine a large deciduous tree in our front yard.
It's Autumn, and it's our job to pick up all of the leaves on the lawn. Clearly, the fastest way to do this
(without a rake, or a leaf-vac...) would be get down on hands and knees with a bag and work our way
back and forth over the lawn, stuffing leaves in the bag as we go. This is a Full Table Scan, selecting
rows in no particular order, except that they are nearest to hand. This metaphor works on a couple of
levels: we would grab leaves in handfuls, not one by one. A Full Table Scan does the same thing: when
a block is read from disk, Oracle caches the next few blocks with the expectation that it will be asked
for them very soon. Type this in SQL*Plus:
SQL> show parameter db_file_multiblock_read_count
Just to shake things up a bit (and to feed an undiagnosed obsessive compulsive disorder), we decide
to pick up the leaves in order of size. In support of this endeavour, we take a digital photograph of the
lawn, write an image analysis program to identify and measure every leaf, then load the results into a
Virtual Reality headset that will highlight the smallest leaf left on the lawn. Ingenious, yes; but this is
clearly going to take a lot longer than a full table scan because we cover much more distance walking
from leaf to leaf.
So obviously Full Table Scan is the faster way to pick up every leaf. But just as obvious is that the
index (virtual reality headset) is the faster way to pick up just the smallest leaf, or even the 100
smallest leaves. As the number rises, we approach a break-even point; a number beyond which it is
faster to just full table scan. This number varies depending on the table, the index, the database
settings, the hardware, and the load on the server; generally it is somewhere between 1% and 10% of
the table.
The main reasons for this are:
=> As implied above, reading a table in indexed order means more movement for the disk head.
=> Oracle cannot read single rows. To read a row via an index, the entire block must be read, with all
but one row discarded. So an index scan of 100 rows would read 100 blocks, but a FTS might read 100
rows in a single block.
=> The db_file_multiblock_read_count setting described earlier means a FTS requires fewer visits to
the physical disk.
=> Even if none of these things were true, accessing the entire index and the entire table is still
more IO than just accessing the table.
So what's the lesson here? Know our data! If our query needs 50% of the rows in the table, an index
scan just won't help. Not only should we not bother creating or investigating the existence of an
index, we should check to make sure Oracle is not already using an index. There are a number of ways
to influence index usage; once again, consult a tuning manual. The exception to this
rule - there's always one - is when all of the columns referenced in the SQL are contained in the index.
If Oracle does not have to access the table then there is no break-even point; it is generally quicker to
scan the index even for 100% of the rows.
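A sketch of this exception (the table and column names are invented for illustration):

```sql
-- Hypothetical covering index: the query below touches only
-- indexed columns, so Oracle can scan the index and skip the table
CREATE INDEX emp_dept_sal_idx ON emp (dept_id, salary);

SELECT dept_id, MAX(salary)
FROM   emp
GROUP  BY dept_id;
```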
Summary :
Indexes are not a dark-art; they work in an entirely predictable and even intuitive way. Understanding
how they work moves Performance Tuning from the realm of guesswork to that of science; so embrace
the technology and read the manual.
Here, if the ISSES_MODIFIABLE column is TRUE, the parameter can be changed at the session level,
and if ISSYS_MODIFIABLE allows it (or, per instance in RAC, ISINSTANCE_MODIFIABLE is TRUE), the
parameter can be changed at the system level. Here is an example:
SQL> SELECT name,Value ,ISSES_MODIFIABLE , ISINSTANCE_MODIFIABLE FROM v$parameter2
WHERE name LIKE '%target%' ;
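For instance (the parameters and values here are only illustrations):

```sql
-- A session-modifiable parameter (ISSES_MODIFIABLE = TRUE)
ALTER SESSION SET nls_date_format = 'DD-MON-YYYY';

-- A system-modifiable parameter, changed instance-wide
ALTER SYSTEM SET pga_aggregate_target = 200M SCOPE = BOTH;
```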
On a three node RAC database, if we select from v$session, we get sessions from that instance only.
Selecting from GV$SESSION creates parallel query slaves on the other instances and gets the
information back to our session.
This works fine in almost all cases. There are a few exceptions: in the case of redo logs, the RAC instance
must see all the redo logs of other instances as they become important for its recovery. Therefore,
V$LOG actually shows all the redo logs, of all the instances, not just of its own. Contrast this with
V$SESSION, which shows only sessions of that instance, not all. So, if there are 3 log file groups per
instance (actually, per "thread") and there are 3 instances, V$LOG on any instance will show all 9
logfile groups, not 3.
When we select from GV$LOG, remember, the session gets the information from other instances as
well. Unfortunately, the PQ slaves on those instances also get 9 records each, since they also see the
same information seen by the first instance. On a three-instance RAC, we will get 3x9 = 27 records in
GV$LOG!
To avoid this:
1.) Always select from V$LOG, V$LOGFILE and V$THREAD in a RAC instance; the GV$ views are
misleading. Or,
2.) Add a predicate to match THREAD# with INST_ID. (Beware: thread numbers are by default the
same as the instance id, but we may have defined a different thread number while creating the
database.)
SQL> select * from gv$log where inst_id = thread# ;
1.) Consistent Backup : This is also known as a Cold Backup. A consistent backup is one in which the
files being backed up contain all changes up to the same system change number (SCN). This means
that the files in the backup contain all the data taken from the same point in time.
2.) Inconsistent Backup : This is also known as a Hot Backup. An inconsistent backup is a backup in
which the files being backed up do not contain all the changes made at all the SCNs. This can occur
because the datafiles are being modified as backups are being taken.
There are some DBAs who prefer Oracle user-managed backups. They put their database into backup
mode prior to backing up and take it out of backup mode after the backup. If we are going to perform
user-managed backups, we must back up all of the following files:
Datafiles
Control files
A hot backup requires quite a bit more work than a cold backup. Below are the steps required for a hot backup.
Step 1 : Check the log mode of the database. Whenever we go for a hot backup, the database
must be in archivelog mode.
SQL> SELECT LOG_MODE FROM V$DATABASE ;
LOG_MODE
------------
ARCHIVELOG
Step 2 : Put the database into backup mode. If we are using Oracle 10gR2 or later, we can put the
entire database into backup mode; if we are using a release prior to 10gR2, we have to put each
tablespace into backup mode individually. In my case, I am using 11gR2.
SQL> alter database begin backup ;
Database altered.
In case of oracle prior to 10gR2 use the below command as
SQL> set echo off
SQL> set heading off
SQL> set feedback off
SQL> set termout off
SQL> spool backmode.sql
SQL> select 'alter tablespace '||name||' begin backup ;' "Tablespace in backup mode" from
v$tablespace;
SQL> spool off
SQL> @C:\backmode.sql
Step 3 : Backup all the datafiles. Copy all the datafiles using operating system commands and
paste them in the desired backup location. Meanwhile, we can check the status of the datafiles by
querying the v$backup view.
SQL> select * from v$backup ;
     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------
         1 ACTIVE                3967181 03-APR-12
         2 ACTIVE                3967187 03-APR-12
         3 ACTIVE                3967193 03-APR-12
         4 ACTIVE                3967199 03-APR-12
         5 ACTIVE                3967205 03-APR-12
         6 ACTIVE                3967211 03-APR-12
         7 ACTIVE                3967217 03-APR-12
         8 ACTIVE                3967223 03-APR-12
         9 ACTIVE                3967229 03-APR-12
The Column STATUS=ACTIVE shows that the datafiles are in backup mode .
Step 4 : Take the database out of backup mode. If we are using 10gR2 or a later version of
Oracle, we use the below command to take the database out of backup mode:
SQL> alter database end backup ;
Database Altered
If we are on a version prior to 10gR2, then we use a spool script as above:
SQL> set echo off
SQL> set heading off
SQL> set feedback off
SQL> set termout off
SQL> spool end_mode.sql
SQL> select 'alter tablespace '||name||' end backup ;' "tablespace in backup mode" from
v$tablespace ;
SQL> spool off
SQL> @C:\end_mode.sql
Step 5 : Switch the redolog file and backup archivelogs. After taking the database out of hot
backup mode, we must switch the logfile (preferably more than once) and back up the archivelogs
generated. We may back up archivelogs while the database is in backup mode, but we must also back up
the first archivelog(s) generated after the end backup. The best method to do both is to run the SQL
command "alter system archive log current". This switches the logfile but does not return the prompt
until the previous redo log has been archived. We can run "alter system switch logfile", but then we
won't be sure that the latest redo log has been archived before we move on to the next step.
SQL> alter system archive log current ;
System altered.
SQL> /
System altered.
Now backup the archivelogs to the backup location .
Step 6 : Back up the control file. Now we can back up the controlfile both as a binary file and in
human-readable form. We should use both methods to back up the control file; either one may come in
handy at different times. The commands are:
(Human readable)
SQL> alter database backup controlfile to trace ;
Database altered.
or
SQL> alter database backup controlfile to trace as '<backup location>' ;
Database altered.
(Binary format)
SQL> alter database backup controlfile to '<backup location>' ;
Database altered.
Step 7 : Backup the password file and spfile. We can back up the password file and spfile, though it
is not mandatory.
Backup of online redo log files are not required, as the online log file has the end of backup
marker and would cause corruption if used in recovery.
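As a quick sketch (the SID "ORCL" and the backup paths here are illustrative, not from the original post), the spfile can be backed up from within SQL*Plus and the password file copied at the OS level :
SQL> create pfile='/backup/initORCL.ora' from spfile ;
File created.
$ cp $ORACLE_HOME/dbs/orapwORCL /backup/orapwORCL
The pfile created this way is a human-readable copy of the spfile, from which the spfile can later be rebuilt with "create spfile from pfile".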
I have found that people are a bit confused between Data Guard and Active Data Guard. They assume
that Active Data Guard has a different configuration or properties. Here, I have tried to cover Active
Data Guard.
Active Data Guard is a new option for Oracle Database 11g Enterprise Edition. Oracle Active Data
Guard enables read-only access to a physical standby database: we can query our physical standby
database while it is continuously updated to reflect the state of the primary database. It is an
addition to 11g Data Guard and comes with an extra charge.
Oracle Active Data Guard enhances the Quality of Service (QoS) for production databases by offloading resource-intensive operations to one or more standby databases, which are synchronized
copies of the production database. With Oracle Active Data Guard, a physical standby database can be
used for real-time reporting, with minimal latency between reporting and production data. Compared
with traditional replication methods, Active Data Guard is very simple to use, transparently supports
all datatypes, and offers very high performance. Oracle Active Data Guard also allows backup
operations to be off-loaded to the standby database, and be done very fast using intelligent
incremental backups. Oracle Active Data Guard thus is a very effective way to insulate interactive
users and critical business tasks on the production system from the impact of such long-running
operations. Oracle Active Data Guard provides the additional benefit of high availability and disaster
protection by quickly failing over to the standby database in the event of a planned or an unplanned
outage at the production site.
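One simple way to check whether Active Data Guard is enabled is to query v$database on the standby (a sketch; the column alias and filter here are illustrative) :
SQL> select open_mode, controlfile_type "CONTROLFILE"
     from v$database
     where open_mode = 'READ ONLY WITH APPLY'
     and controlfile_type = 'STANDBY' ;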
If Active Data Guard is enabled, the check query returns the following :
OPEN_MODE            CONTROLFILE
-------------------- -----------
READ ONLY WITH APPLY STANDBY
If the query does not return the above result, and instead returns "no rows selected", then Active Data
Guard is not enabled.
Here we see that when we created a table with a unique constraint, a unique index was created
automatically. Now have another look :
SQL> create table T2 (id number ) ;
Table created.
SQL> create unique index T2_id_idx on T2(id) ;
Index created.
SQL> select index_name,table_name,uniqueness from dba_indexes where table_name='T2' ;

INDEX_NAME      TABLE_NAME      UNIQUENESS
--------------- --------------- ----------
T2_ID_IDX       T2              UNIQUE
SQL> select constraint_name, constraint_type, table_name
     from dba_constraints
     where table_name='T2' and owner='HR' ;

no rows selected

SQL> alter table T2 add constraint T2_ID_IDX unique(id) ;

Table altered.

Now we are expecting two indexes, i.e., one from the unique index and the other from the unique
constraint. Let's look at the below query :

SQL> select constraint_name, constraint_type, table_name
     from dba_constraints
     where table_name='T2' and owner='HR' ;

CONSTRAINT_NAME      C TABLE_NAME
-------------------- - --------------------
T2_ID_IDX            U T2
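Notice that only one constraint was added and no second index was created: Oracle reused the existing unique index T2_id_idx to enforce the new constraint. We can confirm this by querying dba_indexes again (a sketch of the expected output) :
SQL> select index_name, uniqueness from dba_indexes where table_name='T2' ;
INDEX_NAME      UNIQUENESS
--------------- ----------
T2_ID_IDX       UNIQUE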
What is bootstrap?
Bootstrap is a technique for loading the first few instructions of a computer program into active
memory and then using them to bring in the rest of the program.
To trace the bootstrap operation, we can start the instance in mount mode, enable SQL trace, open the
database, and then turn the trace off :
SQL> startup mount ;
SQL> oradebug SETMYPID
SQL> alter session set events '10046 trace name context forever, level 12' ;
SQL> alter database open ;
SQL> alter session set events '10046 trace name context off' ;
The sql_trace of the above process reveals the following operations behind startup. The bootstrap
operation happens between the MOUNT stage and the OPEN stage.
1.) The first SQL in the above trace shows the creation of the bootstrap$ table, something
similar to the following :
create table bootstrap$ ( line# number not null, obj# number not null, sql_text varchar2(4000) not
null) storage (initial 50K objno 56 extents (file 1 block 377))
This sys.bootstrap$ table contains the DDLs for the other bootstrap tables (object_id below 56). Actually,
these tables were created internally at the time of database creation (by sql.bsq); the create DDLs
passed between the MOUNT and OPEN stages are executed through different driver routines. In simple
words, these are not standard CREATE DDLs.
While starting up the database, Oracle loads these objects into memory (the shared pool), i.e., it
assigns the relevant object numbers and refers to the datafile and the block associated with each.
Such operations happen only during a warm startup.
The internals of the above are explained in kqlb.c.
2.) Now a query is executed against the sys.bootstrap$ table, which holds the create SQLs for the other
base tables :
select line#, sql_text from bootstrap$ where obj# != :1 (56)
Subsequently it will create those objects by running those queries.
Object number 0 (System Rollback Segment)
Object number 2 to 55 (Other base tables)
Object number 1 is NOT used by any of the objects.
3.) Performs various operations to keep the bootstrap objects in consistent state.
Upon the successful completion of the bootstrap, the database performs its other tasks, such as recovery, and
opens the database.
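Once the database is open, we can inspect sys.bootstrap$ ourselves, for example (a sketch; the column trimming is just for readability) :
SQL> select line#, substr(sql_text, 1, 60) sql_text
     from sys.bootstrap$
     where rownum <= 5 ;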
The following additional objects are treated as bootstrap objects even though they are not defined as
such in sys.bootstrap$ :
i_tab_stats$_obj#
i_ind_stats$_obj#
object_usage
These additional objects can be re-classified (or) ignored by the following methods :
1. Opening the database in migrate mode
2. Using event 38003
Event 38003 affects the bootstrap process of loading the fixed cache in kqlblfc(). Per default certain
objects are marked as bootstrap objects (even though they are not defined as such in sys.bootstrap$)
but by setting the event they will be left as non-bootstrapped.
1.) Check the status of Error Logging : To check the status of error logging , fire the below
command
SQL> show errorlogging
errorlogging is OFF
Note: Error logging is set OFF by default.
2.) Enable the Error Logging : Whenever we enable error logging, the default table
SPERRORLOG is created. Enable it using the below command :
SQL> set errorlogging on
SQL> show errorlogging
errorlogging is ON TABLE SCOTT.SPERRORLOG
As we see, the default table "SPERRORLOG" is created in the SCOTT schema, since the current user is
SCOTT. Hence, the SPERRORLOG table is created in the current user's schema.
Creating a User Defined Error Log Table :
We can create one or more error log tables to use instead of the default. Before specifying a user-defined
error log table, let's have a look at the default one :
SQL> desc sperrorlog
 Name                 Null?    Type
 -------------------- -------- ----------------
 USERNAME                      VARCHAR2(256)
 TIMESTAMP                     TIMESTAMP(6)
 SCRIPT                        VARCHAR2(1024)
 IDENTIFIER                    VARCHAR2(256)
 MESSAGE                       CLOB
 STATEMENT                     CLOB
For each error, the error logging feature logs the above pieces of information. To use a user-defined
log table, we must have permission to access the table, and we must issue the SET ERRORLOGGING
command with the TABLE schema.tablename option to identify the error log table and the schema, if
applicable. Here is the syntax to use a user-defined table :
SQL> set errorlogging on table [schema].[table]
for example :
SQL> set errorlogging on table hr.Error_log_table
Demo to create user-defined table :
Step 1 : Create the table : If we try to enable error logging with a user-defined table that
does not exist, we get the below error :
SQL> set errorlogging on table hr.Error_log_table
SP2-1507: Errorlogging table, role or privilege is missing or not accessible
So we create the table first :
SQL> create table Hr.Error_log_table ( username varchar(256), timestamp TIMESTAMP, script
varchar(1024), identifier varchar(256), message CLOB, statement CLOB) ;
Table created.
Step 2 : Create user-defined error logging table
SQL> show errorlogging
errorlogging is OFF
SQL> set errorlogging on table hr.Error_log_table
Without a commit, other sessions won't see this information. Here I have committed and taken the output
from another session for the sake of proper formatting.
We can use the TRUNCATE option to clear all existing rows in the error log table and begin recording
errors from the current session :
SQL> set errorlogging on truncate
SQL> select * from Error_log_table ;
No rows selected
We can also set a unique identifier to make it easier to find the logged records. We can use it to
identify errors from a particular session or from a particular version of a query.
SQL> set errorlogging on identifier 'MARK'
SQL> select * from employ ;
select * from employ
*
ERROR at line 1:
ORA-00942: table or view does not exist
Now check the identifier :
SQL> select * from hr.Error_log_table ;