
Hands-On Guide

FOR
ORACLE DBAs

By
Anup Kumar Srivastav

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or
transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or
otherwise, without the prior permission of the author.

© Anup Kumar Srivastav


Published by
Anup Kumar Srivastav

Powered by
Pothi.com
http://pothi.com

ABOUT ME
I am Anup Kumar Srivastav. I have over 7 years of experience in the IT industry: 3 years as an
ERP techno-functional consultant, 1.2 years as an Oracle core DBA, and 3 years 8 months as an
Oracle Applications DBA, working with major clients such as Container Corporation of India, FLEX
Industries Ltd, Shiv-Vani Oil & Exploration Services Ltd, Jhunjulwala Vanaspati Ltd and Lord
Distillery.
I have broad experience in the implementation, management and upgrading of Oracle Database and
Oracle Applications (EBS) on the Windows, Linux and Sun Solaris platforms.
At present I am working as a Senior System Engineer with HCL COMNET LIMITED, Noida.
Thanking you,
Yours truly,
Anup Kumar Srivastava

Content

Description of Content .......................................................... Page No

1.  Oracle Architecture ......................................................... 7-16
    How Oracle works ............................................................ 7
    Oracle Architecture (Diagram) ............................................... 7
    Basics of Oracle Architecture ............................................... 7
    Starting up a database ...................................................... 11
    Shutting down the database .................................................. 12
    Managing an Oracle Instance ................................................. 13
    Oracle Background Processes ................................................. 13

2.  Oracle 10g R1/R2 Software/Binary Installation ............................... 17-22
    10gR2 Installation on Linux ................................................. 17
    10gR2 Installation on Solaris ............................................... 20

3.  Database Creation (Manually) ................................................ 23-45
    For File System Storage Mechanism on Linux/Solaris ......................... 23
    For OMF-Enabled Database on Linux/Solaris .................................. 24
    For ASM Mechanism on Linux ................................................. 25
    For ASM Mechanism on Solaris ............................................... 33
    For RAW Device on Linux .................................................... 37
    For RAW Device on Solaris .................................................. 41
    For ASM Mechanism on Windows ............................................... 42

4.  Managing Tablespaces ........................................................ 46-49
    How to Manage Tablespaces .................................................. 46
    Queries Related to Tablespaces ............................................. 48

5.  Managing the Database ....................................................... 50-64
    Managing Datafiles ......................................................... 50
    How to Drop a Datafile ..................................................... 52
    Managing Control Files ..................................................... 54
    Managing Redo Log Files .................................................... 56
    Managing Temporary Tablespaces ............................................. 58
    Managing Undo Tablespaces .................................................. 61

6.  Creating and Managing 10g ASM ............................................... 65-80
    About ASM: Functionality and Advantages .................................... 65
    ASM Instance, Architecture and Management .................................. 66
    Managing ASM Directories ................................................... 66
    Managing ASM Disk Groups ................................................... 69
    Moving a Non-ASM Database to ASM ........................................... 72
    How to Access ASM Files .................................................... 75
    ASM Disk Crash Case Study .................................................. 76
    Useful Queries for ASM ..................................................... 78

7.  User-Managed Backup/Recovery ................................................ 81-90
    User-Managed Backup Terminology ............................................ 81
    Types of Recovery; Difference Between Resetlogs and Noresetlogs Options ... 83
    Recovery from a Missing or Corrupted Datafile .............................. 85
    Recovery from a Missing or Corrupted Redo Log Group ........................ 86
    Disaster Recovery .......................................................... 87

8.  Recovery Manager (RMAN) ..................................................... 91-104
    Basic RMAN Tutorial ........................................................ 91
    Types of RMAN Backup ....................................................... 93
    RMAN: RESTORE Concept ...................................................... 96
    How to Configure RMAN ...................................................... 101
    Using RMAN with a Recovery Catalog ......................................... 103

9.  RMAN Recovery Case Studies .................................................. 105-120
    Datafile Recovery .......................................................... 105
    Controlfile Recovery ....................................................... 109
    Redo Log File Recovery ..................................................... 112
    Disaster Recovery .......................................................... 114

10. Standby Database ............................................................ 121-136
    Creating a Standby Database Manually ....................................... 121
    Creating a Standby Database Using RMAN ..................................... 127
    Standby Database Maintenance ............................................... 131
    Database Switchover/Failover ............................................... 131
    Standby Diagnosis Queries for the Primary Node ............................. 132
    Standby Diagnosis Queries for the Standby Node ............................. 135

11. Oracle Database Upgrades .................................................... 137-170
    Why Upgrade an Oracle Database to a Higher Version ......................... 137
    Role of the DBA During a Database Upgrade .................................. 137
    Upgrade Methods: Options, Differences and Process .......................... 137
    Difference Between Upgrade and Migration ................................... 140
    About Oracle Releases and Upgrade Paths .................................... 140
    Converting a Database to 64-bit ............................................ 143
    Moving from Standard Edition to Enterprise Edition and Vice Versa ......... 145
    Upgrade Projects

12. Upgrade Projects
    Project A: Upgrade 8.1.x (x < 7) to 8.1.7 .................................. 147
    Project B: Upgrade 8.1.7 to 9.2.0 .......................................... 154
    Project C: Upgrade 8.1.7 to 10.2.0 ......................................... 163
    Project D: Upgrade 9.2.0 to 10.2.0 ......................................... 168

    Migration of Oracle Database Instances Across OS Platforms ................. 171-187
    Migration via Export/Import ................................................ 171
    Migration via RMAN Convert Database on the Source Host ..................... 172
    Migration via RMAN Convert Database on the Destination Host ................ 176
    Migration via Transportable Tablespaces .................................... 181

Oracle Architecture
How oracle works?
An instance currently running on the computer that is executing Oracle is called the database server.
A user runs an application on a local machine in a user process. The client application attempts to
establish a connection to the server using the proper Net8 driver.
When the Oracle server detects the connection request from the client, it checks the client's
authentication; if authentication passes, the Oracle server creates a (dedicated) server process on
behalf of the user process. Suppose the user then executes a SQL statement and commits the
transaction; for example, the user changes a name in a row of a table. The server process receives
the statement and checks the shared pool for a shared SQL area that contains an identical SQL
statement. If a shared SQL area is found, the server process checks the user's access privileges to
the requested data and the previously existing shared SQL area is used to process the statement;
if not, a new shared SQL area is allocated for the statement so that it can be parsed and
processed. The server process retrieves any necessary data values from the actual datafiles or
from those already stored in the system global area, and modifies data blocks in the system
global area. The DBWn process writes modified blocks permanently to disk when doing so is
efficient. Because the transaction committed, the LGWR process immediately records the
transaction in the online redo log file. If the transaction is successful, the server process sends a
message across the network to the application; if it is not successful, an appropriate error
message is transmitted. Throughout this entire procedure, the other background processes run,
watching for conditions that require intervention.
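In SQL, the committed transaction described above looks like this (the table and column names are hypothetical, for illustration only):

```sql
-- The user changes a name in one row and commits;
-- DBWn writes the modified block later, LGWR logs the redo at commit time.
UPDATE emp SET ename = 'SMITH' WHERE empno = 100;
COMMIT;
```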
Oracle Architecture Diagram

The Basics of Oracle Architecture


As an Oracle DBA, you must be understand the concepts of Oracle architecture clearly. It is a
basic step or main point that you need before you go to manage your database. By this article, I
will try to share my knowledge about it. Hope it can be useful for you.

What is An Oracle Database?


Basically, an Oracle database has two main components: the instance and the database itself. An
instance consists of memory structures (the SGA) and the background processes.

The figure above shows the two main components of an Oracle database.


Instance
As covered above, an instance consists of the memory structures and background processes. The
memory structures are the System Global Area (SGA) and the Program Global Area (PGA). The
mandatory background processes are Database Writer (DBWn), Log Writer (LGWR), Checkpoint
(CKPT), System Monitor (SMON), and Process Monitor (PMON). Optional background processes
include Archiver (ARCn), Recoverer (RECO), etc.

The figure shows the instance components.


System Global Area
The SGA is the primary memory structure. It is broken into several parts: the Buffer
Cache, Shared Pool, Redo Log Buffer, Large Pool, and Java Pool.

Buffer Cache
The buffer cache stores copies of data blocks retrieved from datafiles. That is, when a
user retrieves data from the database, the data is stored in the buffer cache. Its size can be
set via the DB_CACHE_SIZE parameter in the init.ora initialization parameter file.
Shared Pool
The shared pool is broken into two smaller memories: the library cache and the dictionary cache.
The library cache stores information about commonly used SQL and PL/SQL statements and is
managed by a least recently used (LRU) algorithm; it also enables the sharing of those statements
among users. The dictionary cache stores information about object definitions in the database,
such as columns, tables, indexes, users, privileges, etc.
The shared pool size can be set via the SHARED_POOL_SIZE parameter in the init.ora initialization
parameter file.
Redo Log Buffer
Each DML statement (INSERT, UPDATE, or DELETE) executed by a user generates a redo entry:
information about all the data changes made by the user. Redo entries are stored in the redo log
buffer before being written to the redo log files. To size the redo log buffer, use the
LOG_BUFFER parameter in the init.ora initialization parameter file.
Large Pool
The large pool is an optional area of memory in the SGA. It relieves the burden placed on the
shared pool and is also used for I/O processes. The large pool size can be set by the
LARGE_POOL_SIZE parameter in the init.ora initialization parameter file.
Java Pool
As its name suggests, the Java pool services the parsing of Java commands. Its size can be set by
the JAVA_POOL_SIZE parameter in the init.ora initialization parameter file.
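The SGA components above are sized together in the init.ora file; the sketch below shows the parameters side by side with illustrative values only, not recommendations:

```
# init.ora fragment (sketch); all sizes are illustrative
db_cache_size    = 256M   # buffer cache
shared_pool_size = 128M   # library cache + dictionary cache
log_buffer       = 1048576 # redo log buffer, in bytes
large_pool_size  = 32M    # optional
java_pool_size   = 32M    # optional
```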
Oracle Background Processes
Oracle background processes are the processes behind the scenes that work together with the
memory structures.
DBWn
The database writer (DBWn) process writes data from the buffer cache into the datafiles.
Historically, the database writer was named DBWR. But since some Oracle versions allow us to
have more than one database writer, the name was changed to DBWn, where n is a number from 0
to 9.
LGWR
The log writer (LGWR) process is similar to DBWn: it writes the redo entries from the redo log
buffer into the redo log files.
CKPT
Checkpoint (CKPT) is a process that signals DBWn to write data from the buffer cache into the
datafiles. It also updates the datafile and control file headers when a log switch occurs.
SMON

The System Monitor (SMON) process recovers the database after a system crash or instance
failure by applying the entries in the redo log files to the datafiles.
PMON
The Process Monitor (PMON) process cleans up after failed processes by rolling back their
transactions and releasing other resources.
Database
A database can be broken up into two main structures: logical structures and physical
structures.
Logical Structures
The logical units are the tablespace, segment, extent, and data block.
The figure below illustrates the relationships between these units.

Tablespace
A tablespace is a logical grouping of database objects. A database must have one or more
tablespaces. In Figure 3, we have three tablespaces: the SYSTEM tablespace, Tablespace 1,
and Tablespace 2. A tablespace is composed of one or more datafiles.
Segment
A tablespace is further broken into segments. A segment stores a single type of object.
That is, every table in the database is stored in its own specific segment (a data segment) and
every index in the database is stored in its own segment (an index segment). The other
segment types are temporary segments and rollback segments.
Extent
A segment is further broken into extents. An extent consists of one or more data blocks. When a
database object grows, a new extent is allocated to it. Unlike a tablespace or a segment, an
extent cannot be named.
Data Block
A data block is the smallest unit of storage in the Oracle database. The data block size is a
specific number of bytes; every block in a tablespace has the same size.
Physical Structures
The physical structures are structures of an Oracle database (in this case the disk files) that are
not directly manipulated by users. The physical structure consists of datafiles, redo log files, and
control files.
Datafiles
A datafile is a file that corresponds to a tablespace. One datafile can be used by only one
tablespace, but one tablespace can have more than one datafile.
Redo Log Files
Redo log files are the files that store the redo entries generated by DML statements. They can be
used for recovery processes.
Control Files
Control files store information about the physical structure of the database, such as datafile
sizes and locations, redo log file locations, etc.
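The physical structures just described can be listed from the dynamic performance views; a quick sketch (run as a privileged user):

```sql
SELECT name   FROM v$datafile;     -- datafiles
SELECT member FROM v$logfile;      -- redo log files
SELECT name   FROM v$controlfile;  -- control files
```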
Starting up a database
This article explains the procedures involved in starting an Oracle instance and database.
First Stage: The Oracle engine starts an Oracle instance
When Oracle starts an instance, it reads the initialization parameter file to determine the values of
initialization parameters. Then, it allocates an SGA, which is a shared area of memory used for
database information, and creates background processes. At this point, no database is associated
with these memory structures and processes.
Second Stage: Mount the Database
To mount the database, the instance finds the database control files and opens them. Control files
are specified in the CONTROL_FILES initialization parameter in the parameter file used to start
the instance. Oracle then reads the control files to get the names of the database's datafiles and
redo log files.
At this point, the database is still closed and is accessible only to the database administrator. The
database administrator can keep the database closed while completing specific maintenance
operations. However, the database is not yet available for normal operations.
Final Stage: Database opens for normal operation


Opening a mounted database makes it available for normal database operations. Usually, a
database administrator opens the database to make it available for general use.
When you open the database, Oracle opens the online datafiles and online redo log files. If a
tablespace was offline when the database was previously shut down, the tablespace and its
corresponding datafiles will still be offline when you reopen the database.
If any of the datafiles or redo log files are not present when you attempt to open the database,
then Oracle returns an error. You must perform recovery on a backup of any damaged or missing
files before you can open the database.
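The three stages above map directly onto SQL*Plus commands; a sketch, run as SYSDBA:

```sql
STARTUP NOMOUNT;       -- stage 1: instance started, no database attached
ALTER DATABASE MOUNT;  -- stage 2: control files opened, database still closed
ALTER DATABASE OPEN;   -- stage 3: datafiles and redo logs opened for normal use
```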
Open a Database in Read-Only Mode
You can open any database in read-only mode to prevent its data from being modified by user
transactions. Read-only mode restricts database access to read-only transactions, which cannot
write to the datafiles or to the redo log files.
Disk writes to other files, such as control files, operating system audit trails, trace files, and alert
files, can continue in read-only mode. Temporary tablespaces for sort operations are not affected
by the database being open in read-only mode. However, you cannot take permanent tablespaces
offline while a database is open in read-only mode. Also, job queues are not available in
read-only mode.
Read-only mode does not restrict database recovery or operations that change the database's state
without generating redo data. For example, in read-only mode: datafiles can be taken offline and
online; offline datafiles and tablespaces can be recovered; and the control file remains available
for updates about the state of the database. One useful application of read-only mode is that
standby databases can function as temporary reporting databases.
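Opening a database read-only, as described above, is a short sketch in SQL*Plus:

```sql
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;  -- user transactions cannot write datafiles or redo
```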
Database Shutdown
The three steps to shutting down a database and its associated instance are:
Close the database.
Unmount the database.
Shut down the instance.
Close a Database
When you close a database, Oracle writes all database data and recovery data in the SGA to the
datafiles and redo log files, respectively. Next, Oracle closes all online datafiles and online redo
log files. At this point, the database is closed and inaccessible for normal operations. The control
files remain open after a database is closed but still mounted.
Close the Database by Terminating the Instance
In rare emergency situations, you can terminate the instance of an open database to close and
completely shut down the database instantaneously. This process is fast, because the operation of
writing all data in the buffers of the SGA to the datafiles and redo log files is skipped. The
subsequent reopening of the database requires recovery, which Oracle performs automatically.


Unmount a Database
After the database is closed, Oracle unmounts the database to disassociate it from the
instance. At this point, the instance remains in the memory of your computer.
After a database is unmounted, Oracle closes the control files of the database.
Shut Down an Instance
The final step in database shutdown is shutting down the instance. When you shut down an
instance, the SGA is removed from memory and the background processes are terminated.
Abnormal Instance Shutdown
In unusual circumstances, shutdown of an instance might not occur cleanly; all memory
structures might not be removed from memory or one of the background processes might not be
terminated. When remnants of a previous instance exist, a subsequent instance startup most likely
will fail. In such situations, the database administrator can force the new instance to start up by
first removing the remnants of the previous instance and then starting a new instance, or by
issuing a SHUTDOWN ABORT statement in Enterprise Manager.
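The clean and abnormal shutdown paths above correspond to these SQL*Plus commands; a sketch, run as SYSDBA:

```sql
SHUTDOWN IMMEDIATE;  -- close, unmount, and shut down the instance cleanly
SHUTDOWN ABORT;      -- terminate the instance; Oracle recovers automatically on next startup
```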
Managing an oracle instance
When the Oracle engine starts an instance, it reads the initialization parameter file to determine the
values of initialization parameters. Then, it allocates an SGA and creates background processes.
At this point, no database is associated with these memory structures and processes.
Types of initialization file:

Static (PFILE)
Text file
Modified with an OS editor
Modifications made manually

Persistent (SPFILE)
Binary file
Cannot be edited directly
Maintained by the server

Initialization parameter file contents:

Instance parameters
Name of the database
Memory structures of the SGA
Names and locations of the control files
Information about undo segments
Locations of the udump, bdump and cdump directories
Creating an SPFILE:
CREATE SPFILE='<spfile_name>.ora' FROM PFILE='<pfile_name>.ora';
Note:
Requires the SYSDBA privilege. Can be executed before or after instance startup.
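A concrete form of the command above; the file paths are illustrative only:

```sql
CREATE SPFILE = '/oracle/app/dbs/spfileTEST.ora'
  FROM PFILE = '/oracle/app/dbs/initTEST.ora';
```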
Oracle Background Processes
An Oracle instance runs two types of processes
Server Process
Background Process


Before doing any work, a user must connect to an instance. When a user logs on to the Oracle
server, the Oracle engine creates a process called a server process. The server process
communicates with the Oracle instance on behalf of the user process.
Each background process serves a specific purpose and its role is well defined.
Background processes are invoked automatically when the instance is started.
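The background processes described in this section can be inspected in a running instance; a sketch query against the v$bgprocess view:

```sql
-- running background processes have a non-null process address
SELECT name, description
  FROM v$bgprocess
 WHERE paddr <> '00';
```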
Database Writer (DBWn)
Process Name: DBW0 through DBW9 and DBWa through DBWj
Max Processes: 20
This process writes the dirty buffers of the database buffer cache to the data files. One database
writer process is sufficient for most systems; more can be configured if essential. The
initialization parameter DB_WRITER_PROCESSES specifies the number of database writer
processes to start.
The DBWn process writes dirty buffers to disk under the following conditions:

When a checkpoint is issued.
When a server process cannot find a clean reusable buffer after scanning a threshold
number of buffers.
Every 3 seconds.
When a normal or temporary tablespace is placed offline or made read-only.
When a table is dropped or truncated.
When a tablespace is put into backup mode.

Log Writer (LGWR)

Process Name: LGWR
Max Processes: 1
The log writer process writes data from the redo log buffer to the redo log files on disk.
The writer is activated under the following conditions:

When a transaction is committed: a System Change Number (SCN) is generated and
tagged to it, and log writer puts a commit record in the redo log buffer and writes it to disk
immediately, along with the transaction's redo entries.
Every 3 seconds.
When the redo log buffer is 1/3 full.
When DBWn signals the writing of redo records to disk. All redo records associated with
changes in the block buffers must be written to disk first (the write-ahead protocol).
While writing dirty buffers, if the DBWn process finds that some redo information has
not been written, it signals LGWR to write the information and waits until control
is returned.

Log writer will write synchronously to the redo log groups in a circular fashion. If any damage is
identified with a redo log file, the log writer will log an error in the LGWR trace file and the
system Alert Log. Sometimes, when additional redo log buffer space is required, the LGWR will


even write uncommitted redo log entries to release the held buffers. LGWR can also use group
commits (multiple committed transaction's redo entries taken together) to write to redo logs when
a database is undergoing heavy write operations.
The log writer must always be running for an instance.
System Monitor
Process Name: SMON
Max Processes: 1
This process is responsible for instance recovery, if necessary, at instance startup. SMON also
cleans up temporary segments that are no longer in use. SMON wakes up about every 5 minutes
to perform housekeeping activities. SMON must always be running for an instance.
Process Monitor
Process Name: PMON
Max Processes: 1
This process is responsible for performing recovery if a user process fails; it rolls back
uncommitted transactions. PMON is also responsible for cleaning up the database buffer cache
and freeing resources that were allocated to a failed process. PMON also registers information
about the instance and dispatcher processes with the network listener.
PMON wakes up every 3 seconds to perform housekeeping activities. PMON must always be
running for an instance.
Checkpoint Process
Process Name: CKPT
Max processes: 1
The checkpoint process signals the synchronization of all database files with the checkpoint
information. It ensures data consistency and faster database recovery in case of a crash.
CKPT ensures that all database changes present in the buffer cache at that point are written to the
data files; the actual writing is done by the database writer process. The datafile headers and the
control files are then updated with the latest SCN (of when the checkpoint occurred); this update
is performed by the CKPT process itself.
The CKPT process is invoked under the following conditions:

When a log switch is done.
When the time specified by the initialization parameter LOG_CHECKPOINT_TIMEOUT
(in seconds) exists between the incremental checkpoint and the tail of the log.
When the number of blocks specified by the initialization parameter
LOG_CHECKPOINT_INTERVAL (in OS blocks) exists between the incremental checkpoint
and the tail of the log.
When the number of buffers specified by the initialization parameter
FAST_START_IO_TARGET required to perform roll-forward is reached.
From Oracle 9i onwards, when the time specified by the initialization parameter
FAST_START_MTTR_TARGET is reached; this is in seconds and specifies the time
required for a crash recovery. FAST_START_MTTR_TARGET replaces
LOG_CHECKPOINT_INTERVAL and FAST_START_IO_TARGET, but these
parameters can still be used.
When the ALTER SYSTEM SWITCH LOGFILE command is issued.
When the ALTER SYSTEM CHECKPOINT command is issued.

Incremental Checkpoints initiate the writing of recovery information to datafile headers and
controlfiles. Database writer is not signaled to perform buffer cache flushing activity here.


Oracle 10g R1/R2 Software/Binary Installation

(10gR2 Installation on Linux)

Note: The steps below are for 10g Release 1 & 2 (32-bit/64-bit) on Red Hat Enterprise
Linux 4.
Note: When you install Linux for Oracle, you should create separate file systems for the Oracle
software and the Oracle database:
2.5 GB of disk space is required by the Oracle software.
1.3 GB of disk space is required by a General Purpose database.

Here we have created two mount points, /oracle and /database, for the Oracle software and database.
Prerequisite Steps:
Check memory
At least 512 MB of RAM is required.
How to check the size of physical memory:
$ grep MemTotal /proc/meminfo
Check swap space
Swap space should be 1 GB or twice the size of RAM.
How to check the size of swap space:
$ grep SwapTotal /proc/meminfo
If you don't have 1 GB (or twice the size of RAM) of swap space, you must add temporary swap
space to your system by creating a temporary swap file. Below I describe how to add swap space.
How to Add Swap Space?
Log in as root:
$ dd if=/dev/zero of=tmpswap bs=1k count=900000
$ chmod 600 tmpswap
$ mkswap tmpswap
$ swapon tmpswap
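The sizing rule above (swap should be twice the RAM, with a 1 GB floor) can be sketched as a small shell helper; the function name and sample values are mine, not part of the original steps:

```shell
# Sketch: recommended swap size in kB given physical RAM in kB,
# per the rule above: twice the RAM, but never less than 1 GB.
recommended_swap_kb() {
  mem_kb=$1
  swap_kb=$((mem_kb * 2))
  min_kb=$((1024 * 1024))   # 1 GB expressed in kB
  if [ "$swap_kb" -lt "$min_kb" ]; then
    swap_kb=$min_kb
  fi
  echo "$swap_kb"
}

# On a real host, feed it the MemTotal value from /proc/meminfo:
#   recommended_swap_kb "$(awk '/MemTotal/ {print $2}' /proc/meminfo)"
recommended_swap_kb 524288   # 512 MB of RAM -> 1048576 (the 1 GB floor)
```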
Check TMP Space
Oracle Universal Installer (OUI) requires up to 400 MB of free space in the /tmp directory.
How to check the space in /tmp:
$ df /tmp
If you don't have enough space in the /tmp filesystem, you can temporarily create a tmp
directory in another filesystem. Below I describe all the steps for adding temp space.


Step 1 Log in as the root user.
Step 2 mkdir /<AnotherFilesystem>/tmp
Step 3 chown root:root /<AnotherFilesystem>/tmp
Step 4 chmod 1777 /<AnotherFilesystem>/tmp
Step 5 export TEMP=/<AnotherFilesystem>/tmp
Step 6 export TMPDIR=/<AnotherFilesystem>/tmp
After completion of the installation, you must remove the temporary tmp directory.
Below I describe how to remove the temporary tmp directory:
Step 1 Log in as root.
Step 2 $ rmdir /<AnotherFilesystem>/tmp
Step 3 $ unset TEMP
Step 4 $ unset TMPDIR
Check Software RPM Package
Before installing the Oracle Database 10g software, check the system for the required RPMs:
make-3.79
binutils-2.14
gcc-3.2
libaio-0.3.96
glibc-2.3.2-95.27
Check kernel parameters
How to see all kernel parameters:
Step 1 Log in as the root user.
Step 2 $ sysctl -a
Below are the kernel parameters that must be set to values greater than or equal to the
recommended values; they can be changed via the /proc filesystem:
shmmax = 2147483648
shmmni = 4096
shmall = 2097152
shmmin = 1
shmseg = 10
semmsl = 250
semmns = 32000
semopm = 100
semmni = 128
file-max = 65536
ip_local_port_range = 1024 65000
NOTE: Do not change the value of any kernel parameter on a system where it is already higher
than the listed minimum requirement.


Note: Add the minimum required values to the /etc/sysctl.conf file, which is read during the boot
process:
kernel.shmmax=2147483648
kernel.sem=250 32000 100 128
fs.file-max=65536
net.ipv4.ip_local_port_range=1024 65000
net.core.rmem_default=1048576
net.core.rmem_max=1048576
net.core.wmem_default=262144
net.core.wmem_max=262144
To make these settings become effective immediately, execute the following command:
Step 1 Log in as root.
Step 2 $ sysctl -p
Installation Steps:
Creating the Oracle user account and groups
Step 1 Log in as root.
Step 2 $ groupadd dba      # group of users to be granted the SYSDBA system privilege
Step 3 $ groupadd oinstall # group owner of Oracle files
Step 4 $ useradd -c "Oracle software owner" -d /home/oracle -g oinstall -G dba oracle
Step 5 $ chown oracle:dba /home/oracle /oracle
Step 6 $ chown oracle:dba /home/oracle /database
Step 7 $ passwd oracle
Start the Installation
Starting Oracle Universal Installer: insert the Oracle CD that contains the image of the software. If
you install Oracle 10g from a CD, mount the CD by running the following commands in another
terminal:
$ su root
$ mount /mnt/cdrom
Then run the installer:
$ ./runInstaller
Note: If you get the message "Cannot connect to X11 window server":
Step 1 Log in as the root user.
Step 2 $ xhost +
Step 3 $ su oracle
Step 4 $ export DISPLAY=localhost:0.0
Step 5 $ ./runInstaller
Post-Installation Steps:
Put the following entries in the oracle user's .bash_profile file:
export ORACLE_HOME=$ORACLE_BASE/oracle/product/10.2.0/db_1
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib


Oracle 10g R1/R2 Software/Binary Installation

(10gR2 Installation on Solaris)

Note: The steps below are for 10g Release 1 & 2 (32-bit/64-bit) on Solaris 10.
Note: When you install Solaris for Oracle, you should create separate file systems for the Oracle
software and the Oracle database:
2.5 GB of disk space is required by the Oracle software.
1.3 GB of disk space is required by a General Purpose database.

Here we have created two mount points, /oracle and /database, for the Oracle software and database.
Note: No specific operating system patches are required with the Solaris 10 OS.
Prerequisite Steps:
Make sure that the following software packages have been installed:
SUNWarc
SUNWbtool
SUNWhea
SUNWlibm
SUNWlibms
SUNWsprot
SUNWtoo
SUNWi1of
SUNWxwfnt
SUNWi1cs
SUNWsprox
SUNWi15cs
We can verify whether the packages are installed by using the following command: $ pkginfo -i
Check that the following executable files are present in /usr/ccs/bin:
make
ar
ld
nm
Check swap space
Swap space should be 512 MB or twice the size of RAM. Use the following commands to find out
about physical memory and swap space:
$ /usr/sbin/prtconf | grep size
$ /usr/sbin/swap -l


At least 400 MB of free space is needed in the /tmp directory.


Check kernel parameters
Set the following kernel parameters in the /etc/system file and reboot the server:
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=100
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmni=100
Create groups.
$ groupadd -g 300 dba *
$ groupadd -g 301 oinstall **
* The DBA group will be used by the Oracle software owner and database administrators.
** The OINSTALL group is used when you install multiple copies of the Oracle software on
one server and want some logins to be able to log onto some databases with DBA
privileges but not others.
Create User
Create a UNIX user that will be the Oracle software owner.
$ useradd -c "Oracle Software Owner" -d /oracle -g oinstall -G dba -m -u 300 -s /usr/bin/ksh
oracle
$ passwd oracle
Create Directory Structure
Create directories for the Oracle software and database.
$ mkdir /oracle/app /oracle/oradata
$chown oracle:dba /oracle
$chmod 775 /oracle
Create the /var/opt/oracle directory
$mkdir /var/opt/oracle
$chown oracle:dba /var/opt/oracle
$chmod 755 /var/opt/oracle
Edit profile file
Edit the .profile file and add the following environment variables:
export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/app


export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
Set up the X Window environment.
Log in as root with a CDE (Common Desktop Environment) session:
$ DISPLAY=:0.0
$ export DISPLAY
$ xhost +
$ su oracle
$ DISPLAY=:0.0
$ export DISPLAY
$ /usr/openwin/bin/xclock
Execute runInstaller
Log in as the oracle user and execute the installer.
$ ./runInstaller


Database Creation (Manually)


(For File System Storage Mechanism on Linux/Solaris)
Step 1 Create an init<SID>.ora file (example: initTEST.ora) in the $ORACLE_HOME/dbs/ directory.
Example: $ORACLE_HOME/dbs/initTEST.ora
Put following entry in initTEST.ora file
background_dump_dest=<put BDUMP log destination>
core_dump_dest=<put CDUMP log destination>
user_dump_dest=<put UDUMP log destination>
control_files = (/<Destination>/control1.ctl, /<Destination>/control2.ctl, /<Destination>/control3.ctl)
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = TEST
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
Step 2 Create a password file
$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd<sid>.ora
password=<password> entries=5
Step 3 Set your ORACLE_SID and ORACLE_HOME
$ export ORACLE_SID=TEST
$ export ORACLE_HOME=/<Destination>
Step 4 Run the following sqlplus command to connect to the database and startup the instance.
$sqlplus '/ as sysdba'
SQL> startup nomount
Step 5 Create the Database. Use following scripts
create database test
logfile group 1 ('<Destination>/redo1.log') size 100M,
group 2 ('<Destination>/redo2.log') size 100M,
group 3 ('<Destination>/redo3.log') size 100M
character set WE8ISO8859P1
national character set utf8
datafile '<Destination>/system.dbf' size 500M autoextend on next 10M maxsize unlimited extent
management local
sysaux datafile '<Destination>/sysaux.dbf' size 100M autoextend on next 10M maxsize unlimited
undo tablespace undotbs1 datafile '<Destination>/undotbs1.dbf' size 100M
default temporary tablespace temp tempfile '<Destination>/temp01.dbf' size 100M;
Step 6 Run the scripts necessary to build views, synonyms, etc.:


CATALOG.SQL -- creates the views on the data dictionary tables and the dynamic
performance views.
CATPROC.SQL -- establishes use of PL/SQL functionality and creates many of the
Oracle-supplied PL/SQL packages.
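In SQL*Plus these two scripts are typically run as SYSDBA straight from the Oracle home (the ? shorthand expands to ORACLE_HOME):

```sql
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
```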
(For OMF Enable Database on Linux/Solaris)

Step 1 Create an init<SID>.ora file (example: initTEST.ora) in the $ORACLE_HOME/dbs/ directory.


Example: $ORACLE_HOME/dbs/initTEST.ora
Put following entry in initTEST.ora file
background_dump_dest=<put BDUMP log destination>
core_dump_dest=<put CDUMP log destination>
user_dump_dest=<put UDUMP log destination>
control_files = (/<Destination>/control1.ctl, /<Destination>/control2.ctl, /<Destination>/control3.ctl)
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = test
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
db_create_file_dest = /<Put DB File Destination> #OMF
db_create_online_log_dest_1 = /<Put first redo and control file destination> #OMF
db_create_online_log_dest_2 = /<Put second redo and control file destination> #OMF
db_recovery_file_dest = /<put flash recovery area destination> #OMF
Step 2 Create a password file
$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd<sid>.ora
password=<password> entries=5
Step 3 Set your ORACLE_SID
export ORACLE_SID=test
export ORACLE_HOME=/<oracle home path>
Step 4 Run the following sqlplus command to connect to the database and startup the instance.
sqlplus '/ as sysdba'
SQL> startup nomount
Step 5 Create the database
create database test
character set WE8ISO8859P1
national character set utf8
undo tablespace undotbs1
default temporary tablespace temp;
Step 6 Run catalog and catproc


@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
(Implement ASM Enable Database on RHEL 4)
Now we are ready to implement an ASM-enabled database on RHEL. We divide the work into the
following sections:
Section A: Prepare/Install RHEL Machine
Section B: Prepare ASM Disk.
Section C: Configure Oracle Automatic Storage Management *
Section D: Create ASM Instance Manually
Section E: Create ASM Enable Database
*Important: We can use two methods to configure Oracle Automatic Storage Management
on Linux:
ASM with ASMLib I/O -- the database is created on raw block devices with this method.
ASM with standard Linux I/O -- the database is created on raw character devices with this method.
NOTE: Here we use the ASM with ASMLib I/O method in our task; ASM with standard
Linux I/O is also discussed in Section C.
SECTION - A First we prepare RHEL4 machine.
Step 1 Install the RHEL 4

Here, we create a partition (/dev/sda2) for the Oracle software at the time of OS installation.
Step 2 Create the oracle user (log in as a ROOT and execute following)
# groupadd oinstall
# groupadd dba
# useradd -d /oracle -g oinstall -G dba -s /bin/ksh oracle
# chown oracle:dba /oracle
# passwd oracle
New Password:
Re-enter new Password:


passwd: password successfully changed for oracle


Step 3 Install the ASMLib packages:
oracleasmlib-2.0.2-1.i386.rpm, oracleasm-support-2.0.3-2 and oracleasm-2.6.9-42.0.0.0.1.ELsmp-2.0.3-2
EXAMPLE: # rpm -Uvh oracleasmlib-2.0.2-1.i386.rpm
Step 4 Configure the kernel parameters.
Add the entries listed below to the /etc/sysctl.conf file. To make the changes effective immediately, execute
/sbin/sysctl -p.
# more /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144
Step 5 Create the oracle user environment file.
/oracle/.profile
export EDITOR=vi
export ORACLE_SID=CONPROD
export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/10.2.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
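A minimal sketch (assuming the same /oracle base used above) to sanity-check how the derived variables expand:

```shell
# Reproduce the derivations from /oracle/.profile and echo the results
export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/10.2.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
echo "$ORACLE_HOME"        # prints /oracle/10.2.0
echo "$LD_LIBRARY_PATH"    # prints /oracle/10.2.0/lib
```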
SECTION-B Prepare ASM Disk.
We will create one physical partition on each of /dev/sdb and /dev/sdc. We create a single partition
spanning the whole device. Because there is one controller per disk in this case, this gives faster I/O
and redundancy.
Step 1 Shut down the machine, attach the hard disks, and start the machine again.
Step 2 Create disk partitions for Oracle ASM storage (prepare a set of raw disks for Oracle ASM:
/dev/sdb, /dev/sdc).
Step 3 Execute: # fdisk -l


The newly added disks will be listed here.

Step 4 Follow the screen shot below.

Step 5 Follow the screen shot below.


SECTION - C Configure Oracle Automatic Storage Management (ASM)


Follow the steps below if you want ASM with ASMLib I/O.
Step 1 Configure ASMLib. Execute following command:
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
Step 2 Create ASM disks. Log in as the root user and execute:
# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
Marking disk "/dev/sdb1" as an ASM disk: [ OK ]

# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc1


Marking disk "/dev/sdc1" as an ASM disk: [ OK ]
Verify that the ASM disks are visible from every node.
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
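To double-check an individual volume, ASMLib can also report which device backs a given disk label (a sketch; the exact output varies by ASMLib version):

```shell
# Query a single ASMLib disk label (run as root)
/etc/init.d/oracleasm querydisk VOL1
```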
Follow the steps below if you want ASM with standard Linux I/O.
Step 1 Map raw devices for ASM disks.
A raw device mapping is required only if you plan to create ASM disks using standard
Linux I/O. The raw devices have to be bound to the block devices each time a node boots.
Add the following lines in /etc/sysconfig/rawdevices.
/etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1
To make the mapping effective immediately, execute the following commands as the root user:
# /sbin/service rawdevices restart
Assigning devices:
/dev/raw/raw1 --> /dev/sdb1
/dev/raw/raw1: bound to major 8, minor 33
/dev/raw/raw2 --> /dev/sdc1
/dev/raw/raw2: bound to major 8, minor 49
done
# chown oracle:dba /dev/raw/raw[1-2]
# chmod 660 /dev/raw/raw[1-2]
# ls -lat /dev/raw/raw*
crw-rw---- 1 oracle dba 162, 2 Nov 4 07:04 /dev/raw/raw2
crw-rw---- 1 oracle dba 162, 1 Nov 4 07:04 /dev/raw/raw1
Modify /etc/udev/permissions.d/50-udev.permissions.
Raw devices are remapped on boot, and their ownership reverts to the root user by default.
ASM will have problems accessing the shared partitions if they are not owned by the oracle
user. Comment out the original line, raw/*:root:disk:0660, in
/etc/udev/permissions.d/50-udev.permissions and add a new line,
raw/*:oracle:dba:0660.
/etc/udev/permissions.d/50-udev.permissions


# raw devices
ram*:root:disk:0660
#raw/*:root:disk:0660
raw/*:oracle:dba:0660

SECTION-D Create ASM Instance (Manually)


Step 1 If the CSS service is not there, create it by executing the following script:
$ORACLE_HOME/bin/localconfig add
Important: Cluster Synchronization Services (CSS) is required to enable synchronization
between an Automatic Storage Management (ASM) instance and the database instances.
Step 2 Create Admin Directories
$mkdir /oracle/ASM/bdump
$mkdir /oracle/ASM/cdump
$mkdir /oracle/ASM/udump
$mkdir /oracle/ASM/pfile
Step 3 Create the ASM instance parameter file (init+ASM.ora) in the /oracle/ASM/pfile directory.
INSTANCE_TYPE=ASM
DB_UNIQUE_NAME=+ASM
LARGE_POOL_SIZE=16M
BACKGROUND_DUMP_DEST = '/oracle/ASM/bdump'
USER_DUMP_DEST=/oracle/ASM/udump
CORE_DUMP_DEST = '/oracle/ASM/cdump'
ASM_DISKGROUPS='DB_DATA'
ASM_DISKSTRING='ORCL:*'
Important:

If you do not want to use the above parameters, just set one parameter:
INSTANCE_TYPE=ASM. The ASM instance will start with default values for the other
parameters.
Below are the five key parameters that you should configure for an ASM instance.

INSTANCE_TYPE
DB_UNIQUE_NAME
ASM_POWER_LIMIT (the maximum speed of this ASM instance during a disk rebalancing
operation; the default is 1 and the range is 1 to 11)
ASM_DISKSTRING (the disk locations for Oracle to consider during the disk-discovery
process)


ASM_DISKGROUPS (the disk groups that the ASM instance should mount automatically
at instance startup)

The ASM instance uses the LARGE_POOL_SIZE memory buffer. We should allocate at least
8 MB.

Step 4 Creating ASM Instance


$export ORACLE_SID=+ASM
$sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount pfile='/oracle/ASM/pfile/init+ASM.ora'
IMPORTANT: We may see the error "ORA-29701: unable to connect to Cluster Manager" when
starting the ASM instance. The cause is that the connection to the Cluster Manager failed or timed
out. Just delete the local CSS configuration and add it again, using ./localconfig delete and
./localconfig add. This shell script resides in the $ORACLE_HOME/bin directory.
Step 5 Create ASM Disk Group
Let's start by determining whether Oracle can find the new disks. The view V$ASM_DISK can
be queried from the ASM instance to determine which disks are being used, or may potentially be
used, as ASM disks.
$export oracle_sid=+ASM
$sqlplus "/ as sysdba"
SQL> SELECT group_number, disk_number, mount_status, header_status, state, path
FROM v$asm_disk;
Note:
A value of zero in the GROUP_NUMBER column indicates that a disk is available but hasn't
yet been assigned to a disk group.
Using SQL*Plus, the following will create a disk group with normal redundancy and two failure
groups:
SQL> CREATE DISKGROUP DB_DATA NORMAL REDUNDANCY
FAILGROUP controller1 DISK 'ORCL:VOL1'
FAILGROUP controller2 DISK 'ORCL:VOL2';
Diskgroup created.
Step 6 Mount the disk groups: ALTER DISKGROUP ALL MOUNT;
Your ASM instance has now been created. Restart the ASM instance.
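Once the disk group is mounted, its state and capacity can be confirmed from the ASM instance (V$ASM_DISKGROUP is a standard dynamic view; the sizes reported will depend on your disks):

```sql
-- Verify the newly created disk group from the ASM instance
SELECT name, state, type, total_mb, free_mb
FROM v$asm_diskgroup;
```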


SECTION - E Create ASM Enable Database


Step 1 Set your ORACLE_SID
export ORACLE_SID=INDIAN
Step 2 Create a minimal init.ora ( In Default Location -$ORACLE_HOME/dbs/init<sid>.ora)
control_files = +DB_DATA
undo_management = AUTO
db_name = indian
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
db_create_file_dest = +DB_DATA
db_create_online_log_dest_1 = +DB_DATA

Step 3 Create a password file


$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd<sid>.ora password=oracle
entries=5
Step 4 Start the instance
sqlplus / as sysdba
startup nomount
Step 5 Create the database
create database indian
character set WE8ISO8859P1
national character set utf8
undo tablespace undotbs1
default temporary tablespace temp;
Step 6 Run catalog and catproc
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql


(Implement ASM Enable Database on Solaris)


Now we are ready to implement an ASM instance on Solaris. We divide the work into the following
sections:
Section A: Install Oracle binaries
Section B: Prepare Disk for ASM (Read Adding Disk/ Create Partition in Solaris section)
Section C: Set ownership of Disk
Section D: Create ASM Instance Manually
Section E: Create ASM Enable Database
SECTION-A Install Oracle binaries
Go to page 20.
SECTION - B Prepare Disk for ASM
Take help of System Administrator
SECTION - C Set ownership of Disk
The default owner, root:sys, needs to be changed to oracle:dba (here oracle is the Oracle
owner and dba is the group). As root, we take slice s0 of each disk (c0d1s0 and c1d1s0).
Check the ownership of the disks. Execute the following commands as the root user.
bash-3.00# ls -lhL /dev/rdsk/c0d1s0
crw-r----- 1 root sys
118, 64 Feb 16 02:10 /dev/rdsk/c0d1s0
bash-3.00# ls -lhL /dev/rdsk/c1d1s0
crw-r----- 1 root sys
118, 64 Feb 16 02:10 /dev/rdsk/c1d1s0
bash-3.00# chown oracle:dba /dev/rdsk/c0d1s0
bash-3.00# chown oracle:dba /dev/rdsk/c1d1s0
SECTION - D Create ASM Instance (Manually)
Step 1 If the CSS service is not there, create it by executing the following script:
$ORACLE_HOME/bin/localconfig add
Important: Cluster Synchronization Services (CSS) is required to enable synchronization
between an Automatic Storage Management (ASM) instance and the database instances.
Step 2 Create Admin Directories
$mkdir /oracle/ASM/bdump
$mkdir /oracle/ASM/cdump
$mkdir /oracle/ASM/udump


$mkdir /oracle/ASM/pfile
Step 3 Create the ASM instance parameter file (init+ASM.ora) in the /oracle/ASM/pfile directory.
INSTANCE_TYPE=ASM
DB_UNIQUE_NAME=+ASM
LARGE_POOL_SIZE=16M
BACKGROUND_DUMP_DEST = '/oracle/ASM/bdump'
USER_DUMP_DEST=/oracle/ASM/udump
CORE_DUMP_DEST = '/oracle/ASM/cdump'
ASM_DISKGROUPS='DB_DATA'
ASM_DISKSTRING='/dev/rdsk/*'
Important:
If you do not want to use the above parameters, just set one parameter: INSTANCE_TYPE=ASM.
The ASM instance will start with default values for the other parameters.
Below are the five key parameters that you should configure for an ASM instance.
INSTANCE_TYPE
DB_UNIQUE_NAME
ASM_POWER_LIMIT (the maximum speed of this ASM instance during a disk rebalancing
operation; the default is 1 and the range is 1 to 11)
ASM_DISKSTRING (the disk locations for Oracle to consider during the disk-discovery
process)
ASM_DISKGROUPS (the disk groups that the ASM instance should mount automatically at
instance startup)
The ASM instance uses the LARGE_POOL_SIZE memory buffer. We should allocate at least 8 MB.

Step 4 Creating ASM Instance


$export ORACLE_SID=+ASM
$sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount pfile='/oracle/ASM/pfile/init+ASM.ora'
IMPORTANT: We may see the error "ORA-29701: unable to connect to Cluster Manager" when
starting the ASM instance. The cause is that the connection to the Cluster Manager failed or timed
out. Just delete the local CSS configuration and add it again, using ./localconfig delete and
./localconfig add. This shell script resides in the $ORACLE_HOME/bin directory.
Step 5 Create ASM Disk Group


Let's start by determining whether Oracle can find the new disks. The view V$ASM_DISK can
be queried from the ASM instance to determine which disks are being used, or may potentially be
used, as ASM disks.
$export oracle_sid=+ASM
$sqlplus "/ as sysdba"
SQL> SELECT group_number, disk_number, mount_status, header_status, state, path
FROM v$asm_disk;
Note:
A value of zero in the GROUP_NUMBER column indicates that a disk is available but hasn't
yet been assigned to a disk group.
Using SQL*Plus, the following will create a disk group with normal redundancy and two failure
groups:
SQL> CREATE DISKGROUP DB_DATA NORMAL REDUNDANCY
FAILGROUP controller1 DISK '/dev/rdsk/c0d1s0'
FAILGROUP controller2 DISK '/dev/rdsk/c1d1s0';
Diskgroup created.
Step 6 Mount the disk groups: ALTER DISKGROUP ALL MOUNT;
Your ASM instance has now been created. Restart the ASM instance.
SECTION - E Create ASM Enable Database
Step 1 Set your ORACLE_SID
export ORACLE_SID=INDIAN
Step 2 Create a minimal init.ora ( In Default Location -$ORACLE_HOME/dbs/init<sid>.ora)
control_files = +DB_DATA
undo_management = AUTO
db_name = indian
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
db_create_file_dest = +DB_DATA
db_create_online_log_dest_1 = +DB_DATA

Step 3 Create a password file


$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd<sid>.ora password=oracle
entries=5
Step 4 Start the instance


sqlplus / as sysdba
startup nomount
Step 5 Create the database
create database indian
character set WE8ISO8859P1
national character set utf8
undo tablespace undotbs1
default temporary tablespace temp;

Step 6 Run catalog and catproc


@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql


(Implement Database on Linux RAW Device)


Now we will implement a database on Linux raw devices.
How we will achieve this:
We will use the entire /dev/sdb hard drive for the Oracle database.
Next, we will create an LVM physical volume on the entire /dev/sdb hard drive.
Next, we will create an LVM volume group.
After creating the LVM volume group, we will create an LVM logical volume for
each datafile, control file, and online redo log file of our new database.
Finally, we will bind raw devices to the LVM logical volumes.
Step 1 Removing Any Partitions on Hard Drive
# dd if=/dev/zero of=/dev/sdb bs=1K count=1
1+0 records in
1+0 records out
# blockdev --rereadpt /dev/sdb
At this point, you should have a hard drive recognized as /dev/sdb with no partitions
Step 2 Create the LVM Physical Volume
We will create an LVM physical volume on the entire hard disk /dev/sdb.
pvcreate /dev/sdb
pvcreate -- physical volume "/dev/sdb" successfully created
Step 3 Create the LVM Volume Group
We will create an LVM volume group that will contain the physical volume /dev/sdb.
# vgcreate -l 256 -p 256 -s 128k /dev/vg1 /dev/sdb
vgcreate -- volume group "vg1" successfully created and activated
Step 4 Create Logical Volumes for database
Now we create all of the Logical Volumes that will be used to house all of the Oracle database
files.
# lvcreate -L 15m /dev/vg1       # control file 1
lvcreate -- logical volume "lvol0" successfully created
# lvcreate -L 15m /dev/vg1       # control file 2
lvcreate -- logical volume "lvol1" successfully created
# lvcreate -L 15m /dev/vg1       # control file 3
lvcreate -- logical volume "lvol2" successfully created
# lvcreate -L 20m /dev/vg1       # redo group 1
lvcreate -- logical volume "lvol3" successfully created
# lvcreate -L 20m /dev/vg1       # redo group 2
lvcreate -- logical volume "lvol4" successfully created
# lvcreate -L 20m /dev/vg1       # redo group 3
lvcreate -- logical volume "lvol5" successfully created
# lvcreate -L 500m /dev/vg1      # SYSTEM tablespace
lvcreate -- logical volume "lvol6" successfully created
# lvcreate -L 500m /dev/vg1      # SYSAUX tablespace
lvcreate -- logical volume "lvol7" successfully created
# lvcreate -L 1024m /dev/vg1     # UNDO tablespace
lvcreate -- logical volume "lvol8" successfully created
# lvcreate -L 1024m /dev/vg1     # TEMP tablespace
lvcreate -- logical volume "lvol9" successfully created
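Before binding the raw devices, it is worth confirming that all ten logical volumes exist (a sketch; the exact output format depends on the LVM version, and the commands must run as root):

```shell
# List the logical volumes just created in volume group vg1
lvscan
ls -l /dev/vg1
```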
Step 5 Create RAW Bindings
# vgchange -a y /dev/vg1
vgchange -- volume group "vg1" successfully activated
# /usr/bin/raw /dev/raw/raw1 /dev/vg1/lvol0
# /usr/bin/raw /dev/raw/raw2 /dev/vg1/lvol1
# /usr/bin/raw /dev/raw/raw3 /dev/vg1/lvol2
# /usr/bin/raw /dev/raw/raw4 /dev/vg1/lvol3
# /usr/bin/raw /dev/raw/raw5 /dev/vg1/lvol4
# /usr/bin/raw /dev/raw/raw6 /dev/vg1/lvol5
# /usr/bin/raw /dev/raw/raw7 /dev/vg1/lvol6
# /usr/bin/raw /dev/raw/raw8 /dev/vg1/lvol7
# /usr/bin/raw /dev/raw/raw9 /dev/vg1/lvol8


# /usr/bin/raw /dev/raw/raw10 /dev/vg1/lvol9


Step 6 modify ownership on RAW Devices
# /bin/chmod 600 /dev/raw/raw1
# /bin/chmod 600 /dev/raw/raw2
# /bin/chmod 600 /dev/raw/raw3
# /bin/chmod 600 /dev/raw/raw4
# /bin/chmod 600 /dev/raw/raw5
# /bin/chmod 600 /dev/raw/raw6
# /bin/chmod 600 /dev/raw/raw7
# /bin/chmod 600 /dev/raw/raw8
# /bin/chmod 600 /dev/raw/raw9
# /bin/chmod 600 /dev/raw/raw10
# /bin/chown oracle:dba /dev/raw/raw1
# /bin/chown oracle:dba /dev/raw/raw2
# /bin/chown oracle:dba /dev/raw/raw3
# /bin/chown oracle:dba /dev/raw/raw4
# /bin/chown oracle:dba /dev/raw/raw5
# /bin/chown oracle:dba /dev/raw/raw6
# /bin/chown oracle:dba /dev/raw/raw7
# /bin/chown oracle:dba /dev/raw/raw8
# /bin/chown oracle:dba /dev/raw/raw9
# /bin/chown oracle:dba /dev/raw/raw10
Step 7 Create the database.
Now we are ready to create the database on raw devices.
7.1 Create the initSID.ora file and put it in the default location. Point the control_files parameter
at the raw control-file devices (/dev/raw/raw1, /dev/raw/raw2, /dev/raw/raw3).
7.2 Set the environment variables mentioned below.
export ORACLE_HOME=<Oracle_Home Destination>
export ORACLE_SID=PROD
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
7.3 Invoke SQL*Plus, start the instance in NOMOUNT mode, and create the database.
$sqlplus /nolog
SQL> conn / as sysdba
SQL>startup nomount
SQL> create database prod
logfile group 1 ('/dev/raw/raw4'),
group 2 ('/dev/raw/raw5'),
group 3 ('/dev/raw/raw6')
datafile '/dev/raw/raw7'
sysaux datafile '/dev/raw/raw8'


undo tablespace undotbs1 datafile '/dev/raw/raw9'


default temporary tablespace temp tempfile '/dev/raw/raw10';
7.4 Run catalog and catproc
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql


(Implement Database on Solaris RAW Device)


Now we will implement a database on Solaris raw devices.
How we will achieve this:
We will use the entire c0d1 hard drive for the Oracle database.
Next, we will create the raw partitions.
We will create the raw partitions mentioned below:
Disk    Slice   Size     Use for
c0d1    s0      50MB     Control file
c0d1    s1      10MB     Redo group 1
c0d1    s3      10MB     Redo group 2
c0d1    s4      500MB    System
c0d1    s5      500MB    Sysaux
c0d1    s7      1024MB   Undo
c1d1    s0      1024MB   Temp
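On Solaris, the slice layout above can be verified with prtvtoc before handing the slices to Oracle (run as root; by convention s2 maps the whole disk):

```shell
# Print the volume table of contents for each disk
prtvtoc /dev/rdsk/c0d1s2
prtvtoc /dev/rdsk/c1d1s2
```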
Step 1 Change ownership:
The default owner, root:sys, needs to be changed to oracle:dba (here oracle is the Oracle
owner and dba is the group). As root, we take slice s0 of each disk (c0d1s0 and c1d1s0).
Check the ownership of the disks. Execute the following commands as the root user.
bash-3.00# ls -lhL /dev/rdsk/c0d1s0
crw-r----- 1 root sys
118, 64 Feb 16 02:10 /dev/rdsk/c0d1s0
bash-3.00# ls -lhL /dev/rdsk/c1d1s0
crw-r----- 1 root sys
118, 64 Feb 16 02:10 /dev/rdsk/c1d1s0
bash-3.00# chown oracle:dba /dev/rdsk/c0d1s0
bash-3.00# chown oracle:dba /dev/rdsk/c1d1s0
Note: Execute one by one for all slices
Step 2 Create the database.
Now we are ready to create the database on raw devices.
2.1 Create the initSID.ora file and put it in the default location. Point the control_files parameter
at the raw control-file slice (/dev/rdsk/c0d1s0).
2.2 Set the environment variables mentioned below.
export ORACLE_HOME=<Oracle_Home Destination>
export ORACLE_SID=PROD
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
2.3 Invoke SQL*Plus, start the instance in NOMOUNT mode, and create the database.
$sqlplus /nolog
SQL> conn / as sysdba
SQL>startup nomount


SQL> create database prod
logfile group 1 ('/dev/rdsk/c0d1s1'),
group 2 ('/dev/rdsk/c0d1s3')
datafile '/dev/rdsk/c0d1s4'
sysaux datafile '/dev/rdsk/c0d1s5';
2.4 Create the undo tablespace
CREATE UNDO TABLESPACE UNDOTBS
DATAFILE '/dev/rdsk/c0d1s7' REUSE AUTOEXTEND ON;
2.5 Create the temporary tablespace
CREATE TEMPORARY TABLESPACE temp
TEMPFILE '/dev/rdsk/c1d1s0'
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
2.6 Run catalog and catproc
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql

(Implement ASM Enable Database on Windows)


Step 1 Create new partitions for the device files:
G:\
H:\
Step 2 If CSS service is not there; create it by executing the following batch file:
<orahome>\bin\localconfig add
Step 3 Building the ASM Candidate "disks".
ASMTOOL -create G:\DISK1 1024
ASMTOOL -create H:\DISK2 1024
ASMTOOL -create G:\DISK3 1024
ASMTOOL -create H:\DISK4 1024
Step 4 Create Admin Directories
D:\DATABASE\admin\+ASM\bdump
D:\DATABASE\admin\+ASM\cdump
D:\DATABASE\admin\+ASM\hdump
D:\DATABASE\admin\+ASM\pfile
D:\DATABASE\admin\+ASM\udump


Step 5 Create ASM Instance Parameter File


INSTANCE_TYPE=ASM
_ASM_ALLOW_ONLY_RAW_DISKS = FALSE
DB_UNIQUE_NAME = +ASM
ASM_DISKSTRING ='G:\DISK*','H:\DISK*'
LARGE_POOL_SIZE = 16M
BACKGROUND_DUMP_DEST = 'D:\DATABASE\admin\+ASM\bdump'
USER_DUMP_DEST = 'D:\DATABASE\admin\+ASM\udump'
CORE_DUMP_DEST = 'D:\DATABASE\admin\+ASM\cdump'
ASM_DISKGROUPS='DB_DATA'
Step 6 Creating ASM Instance
C:\> oradim -new -asmsid +ASM -syspwd asm123 -pfile d:\database\admin\+ASM\pfile\init.ora
-startmode a
Step 7 Starting the ASM Instance
SQL> startup nomount pfile='d:\database\admin\+ASM\pfile\init.ora';
ASM instance started
Total System Global Area  125829120 bytes
Fixed Size                   769268 bytes
Variable Size             125059852 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
Step 8 Create ASM Disk Groups
Let's start by determining if Oracle can find these four new disks:
The view V$ASM_DISK can be queried from the ASM instance to determine which disks are
being used or may potentially be used as ASM disks.
set oracle_sid=+ASM
C:>sqlplus "/ as sysdba"
SQL> SELECT group_number, disk_number, mount_status, header_status, state, path
FROM v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE  PATH
------------ ----------- ------- ------------ ------ -----------------------
           0           0 CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK1
           0           1 CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK2
           0           2 CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK3
           0           3 CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK4

Note:
The value of zero in the GROUP_NUMBER column for all four disks indicates that a disk
is available but hasn't yet been assigned to a disk group. Using SQL*Plus, the following will
create a disk group with normal redundancy and two failure groups:
set oracle_sid=+ASM
sqlplus "/ as sysdba"
SQL> CREATE DISKGROUP DB_DATA NORMAL REDUNDANCY
FAILGROUP controller1 DISK 'G:\DISK1', 'H:\DISK2'
FAILGROUP controller2 DISK 'G:\DISK3', 'H:\DISK4';
Diskgroup created.
Step 9 Mount the disk groups: ALTER DISKGROUP ALL MOUNT;
Your ASM instance has now been created. Restart the ASM instance.
Step 10 Create an initSID.ora file (Example: initTEST.ora) in the $ORACLE_HOME/database/
directory.
Example: $ORACLE_HOME/database/initTEST.ora
Put following entry in initTEST.ora file
background_dump_dest=<put BDUMP log destination>
core_dump_dest=<put CDUMP log destination>
user_dump_dest=<put UDUMP log destination>
control_files = +DB_DATA
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = test
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
db_create_file_dest = +DB_DATA #OMF
db_create_online_log_dest_1 = +DB_DATA #OMF
db_create_online_log_dest_2 = +DB_DATA #OMF
#db_recovery_file_dest = +DB_DATA

Step 11 Creating Database TEST Instance


C :\> oradim -new -sid TEST -syspwd test123 -startmode a
Step 12 Starting the Database Instance
SQL> startup nomount
Step 13 Create the database
create database test


character set WE8ISO8859P1


national character set utf8
undo tablespace undotbs1
default temporary tablespace temp;
Step 14 Run catalog and catproc
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql


Managing Tablespace
How to Manage Tablespace
A tablespace is a logical storage unit. We say logical because a tablespace is not visible in
the file system. Oracle stores data physically in datafiles. A tablespace consists of one or more
datafiles.
Type of Tablespace
System Tablespace
Created with the database
Required in all database
Contain the data dictionary
Non-System Tablespace:
Separate undo, temporary, application data, and application index segments
Control the amount of space allocated to users' objects
Enable more flexibility in database administration
How to Create Tablespace?
CREATE TABLESPACE "tablespace name"
DATAFILE <datafile location> SIZE <size of datafile> REUSE
MINIMUM EXTENT (ensures that every used extent size in the tablespace is a multiple of
the integer)
BLOCKSIZE
LOGGING | NOLOGGING (LOGGING: by default, all changes to the tablespace are written to redo;
NOLOGGING: changes to the tablespace are not written to redo)
ONLINE | OFFLINE (OFFLINE: the tablespace is unavailable immediately after creation)
PERMANENT | TEMPORARY (PERMANENT: the tablespace can hold permanent objects;
TEMPORARY: the tablespace can hold temporary objects)
EXTENT MANAGEMENT clause
Example:
CREATE TABLESPACE "USER1"
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 10M REUSE
BLOCKSIZE 8192
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL
How to manage space in Tablespace?
Note: Tablespaces allocate space in extents.
Locally managed tablespace:


The extents are managed within the tablespace via bitmaps. In a locally managed tablespace, all
extent information is stored in the datafile headers, and the data dictionary tables are not used to
store this information. The advantage of a locally managed tablespace is that no data-dictionary
DML is generated, which reduces contention on the data dictionary tables, and no undo is
generated when space allocation or deallocation occurs.
Extent Management [Local | Dictionary]
The storage parameters NEXT, PCTINCREASE, MINEXTENTS, MAXEXTENTS, and
DEFAULT STORAGE are not valid for segments stored in locally managed tablespaces.
To create a locally managed tablespace, you specify LOCAL in the extent management clause of
the CREATE TABLESPACE statement. You then have two options. You can have Oracle
manage extents for you automatically with the AUTOALLOCATE option, or you can specify
that the tablespace is managed with uniform extents of a specific size (UNIFORM SIZE).
If the tablespace is expected to contain objects of varying sizes requiring different extent sizes
and having many extents, then AUTOALLOCATE is the best choice.
If you do not specify either AUTOALLOCATE or UNIFORM with the LOCAL parameter, then
AUTOALLOCATE is the default.
Dictionary Managed Tablespace
When we declare a tablespace as dictionary managed, the data dictionary manages the
extents. The Oracle server updates the appropriate tables (sys.fet$ and sys.uet$) in the data
dictionary whenever an extent is allocated or deallocated.
How to Create a Locally Managed Tablespace?
The following statement creates a locally managed tablespace named USERS, where
AUTOALLOCATE causes Oracle to automatically manage extent size.
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Alternatively, this tablespace could be created specifying the UNIFORM clause. In this example,
a 512K extent size is specified. Each 512K extent (which is equivalent to 64 Oracle blocks of
8K) is represented by a bit in the bitmap for this file.
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K;
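The "64 Oracle blocks" figure above is just the 512K uniform extent divided by the 8K block size, which a one-line shell check confirms:

```shell
# 512K uniform extent / 8K block = blocks per extent
echo $(( (512 * 1024) / (8 * 1024) ))   # prints 64
```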
How to Create a Dictionary Managed Tablespace?
The following is an example of creating a DICTIONARY managed tablespace in Oracle9i:
CREATE TABLESPACE users


DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M


EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE (
INITIAL 64K
NEXT 64K
MINEXTENTS 2
MAXEXTENTS 121
PCTINCREASE 0);
What are the Segment Space Management Options?
There are two choices for segment space management: MANUAL (the default) and AUTO.
Manual: This is the default option. It uses free lists to manage free space within segments.
Free lists are lists of data blocks that have space available for inserting new rows.
Auto: This option uses bitmaps to manage free space within segments. This is typically called
automatic segment space management.
Example:
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 10M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
SEGMENT SPACE MANAGEMENT AUTO
PERMANENT
ONLINE;
How to Convert between LMT and DMT Tablespace?
The DBMS_SPACE_ADMIN package allows DBAs to quickly and easily convert between LMT
and DMT mode. Look at these examples:
SQL> exec dbms_space_admin.Tablespace_Migrate_TO_Local('ts1');
PL/SQL procedure successfully completed.
SQL> exec dbms_space_admin.Tablespace_Migrate_FROM_Local('ts2');
PL/SQL procedure successfully completed.
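After migrating, the current extent management mode of each tablespace can be confirmed from the data dictionary (DBA_TABLESPACES is a standard view):

```sql
-- Shows LOCAL or DICTIONARY per tablespace
SELECT tablespace_name, extent_management, allocation_type
FROM dba_tablespaces;
```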
Important Query
How to retrieve tablespace default storage Parameters?
SELECT TABLESPACE_NAME "TABLESPACE",
INITIAL_EXTENT "INITIAL_EXT",
NEXT_EXTENT "NEXT_EXT",
MIN_EXTENTS "MIN_EXT",
MAX_EXTENTS "MAX_EXT",


PCT_INCREASE
FROM DBA_TABLESPACES;
How to retrieve information about tablespaces and their associated datafiles?
SELECT FILE_NAME, BLOCKS, TABLESPACE_NAME
FROM DBA_DATA_FILES;
How to retrieve Statistics for Free Space (Extents) of Each Tablespace?
SELECT TABLESPACE_NAME "TABLESPACE", FILE_ID,
COUNT(*) "PIECES",
MAX(blocks) "MAXIMUM",
MIN(blocks) "MINIMUM",
AVG(blocks) "AVERAGE",
SUM(blocks) "TOTAL"
FROM DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME, FILE_ID;
PIECES shows the number of free-space extents in the tablespace file; MAXIMUM and MINIMUM
show the largest and smallest contiguous areas of space in database blocks; AVERAGE shows the
average size in blocks of a free-space extent; and TOTAL shows the amount of free space in each
tablespace file in blocks. This query is useful when you are going to create a new object, or you
know that a segment is about to extend, and you want to make sure that there is enough space in
the containing tablespace.
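A compact variant of the same idea sums the free space per tablespace in megabytes rather than blocks:

```sql
-- Total free space per tablespace, in MB
SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024) AS free_mb
FROM dba_free_space
GROUP BY tablespace_name;
```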


Managing Database

What is a datafile?
Datafiles are physical operating-system files that store the data of all logical structures in the
database. At least one datafile must be created for each tablespace.
How to determine the number of datafiles?
At least one datafile is required for the SYSTEM tablespace. We can create separate datafiles for
other tablespaces. When we create a database, MAXDATAFILES may or may not be specified in the
CREATE DATABASE statement. Oracle assigns DB_FILES a default value of 200. We can also
specify the number of datafiles in the init file.
When we start the Oracle instance, the DB_FILES initialization parameter reserves space in the
SGA for datafile information and sets the maximum number of datafiles. We can change the value
of DB_FILES (by changing the initialization parameter setting), but the new value does not take
effect until you shut down and restart the instance.
Important:

If the value of DB_FILES is too low, you cannot add datafiles beyond the DB_FILES limit.
Example: if the init parameter DB_FILES is set to 2, then you cannot add more than 2 datafiles
to your database.
If the value of DB_FILES is too high, memory is unnecessarily consumed.
When you issue CREATE DATABASE or CREATE CONTROLFILE statements, the
MAXDATAFILES parameter specifies an initial size. However, if you attempt to add a new
file whose number is greater than MAXDATAFILES, but less than or equal to DB_FILES, the
control file will expand automatically so that the datafiles section can accommodate more
files.
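As a sketch of the points above (assuming the instance uses an spfile; the value 200 is only an illustration, not a recommendation):

```sql
-- Check the current limit (output varies per database)
SHOW PARAMETER db_files;

-- Raise the limit in the spfile; the new value only takes
-- effect after the instance is restarted
ALTER SYSTEM SET DB_FILES = 200 SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
```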

Note:
If you add new datafiles to a tablespace and do not fully specify the filenames, the database
creates the datafiles in the default database directory. Oracle recommends that you always specify a
fully qualified name for a datafile. Unless you want to reuse existing files, make sure the new
filenames do not conflict with other files. Old files that have been previously dropped will be
overwritten.
How to add a datafile to an existing tablespace?
alter tablespace <Tablespace_Name> add datafile '<datafile_path>.dbf' size 10m autoextend on;
How to resize the datafile?
alter database datafile '/............../......./file01.dbf' resize 100M;
How to bring datafile online and offline?
alter database datafile '/............../......./file01.dbf' online;
alter database datafile '/............../......./file01.dbf' offline;

How to rename datafiles in a single tablespace?


Step: 1 Take the tablespace that contains the datafiles offline. The database must be open.
alter tablespace <Tablespace_Name> offline normal;
Step: 2 Rename the datafiles using the operating system.
Step: 3 Use the ALTER TABLESPACE statement with the RENAME DATAFILE clause to change the
filenames within the database.
alter tablespace <Tablespace_Name> rename datafile '/...../..../..../user.dbf' to
'/..../..../.../users1.dbf';
Step 4: Back up the database. After making any structural changes to a database, always perform
an immediate and complete backup.
How to relocate datafiles in a single tablespace?
Step: 1 Use the following query to find the specific file names and sizes.
select file_name,bytes from dba_data_files where tablespace_name='<tablespace_name>';
Step: 2 Take the tablespace containing the datafiles offline:
alter tablespace <Tablespace_Name> offline normal;
Step: 3 Copy the datafiles to their new locations and rename them using the operating system.
Step: 4 Rename the datafiles within the database.
ALTER TABLESPACE <Tablespace_Name> RENAME DATAFILE
'/u02/oracle/rbdb1/users01.dbf', '/u02/oracle/rbdb1/users02.dbf'
TO '/u03/oracle/rbdb1/users01.dbf','/u04/oracle/rbdb1/users02.dbf';
Step: 5 back up the database. After making any structural changes to a database, always perform
an immediate and complete backup.
How to Renaming and Relocating Datafiles in Multiple Tablespaces?
Step:1 Ensure that the database is mounted but closed.
Step:2 Copy the datafiles to be renamed to their new locations and new names, using the
operating system.
Step:3 Use ALTER DATABASE to rename the file pointers in the database control file.
ALTER DATABASE
RENAME FILE


'/u02/oracle/rbdb1/sort01.dbf',
'/u02/oracle/rbdb1/user3.dbf'
TO '/u02/oracle/rbdb1/temp01.dbf',
'/u02/oracle/rbdb1/users03.dbf';
Step:4 Back up the database. After making any structural changes to a database, always perform
an immediate and complete backup.
How to drop a datafile from a tablespace?
Important: Oracle does not provide an interface for dropping datafiles in the same way you
would drop a schema object such as a table or a user.
Reasons why you want to remove a datafile from a tablespace:

You may have mistakenly added a file to a tablespace.


You may have made the file much larger than intended and now want to remove it.
You may be involved in a recovery scenario and the database won't start because a
datafile is missing.

Important: Once the DBA creates a datafile for a tablespace, the datafile cannot be removed. If
you want to do any critical operation like dropping datafiles, ensure you have a full backup of the
database.
Step: 1 Determine how many datafiles make up a tablespace.
To determine how many and which datafiles make up a tablespace, you can use the following
query:
SELECT
file_name, tablespace_name
FROM
dba_data_files
WHERE
tablespace_name ='<name of tablespace>';
Case 1
If you have only one datafile in the tablespace and you want to remove it. You can simply drop
the entire tablespace using the following:
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
The above command will remove the tablespace, the datafile, and the tablespace's contents from
the data dictionary.
Important: Oracle will not drop the physical datafile after the DROP TABLESPACE command.
This action needs to be performed at the operating system.
Case 2


If you have more than one datafile in the tablespace, and you want to remove all datafiles and
no longer need the information contained in that tablespace, then use the same command as above:
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
Case 3
If you have more than one datafile in the tablespace and you want to remove only one or two (not
all) datafiles in the tablespace, or you want to keep the objects that reside in the other datafile(s)
which are part of this tablespace, then you must export all the objects inside the tablespace.
Step: 1 Gather information on the current datafiles within the tablespace by running the
following query in SQL*Plus:
SELECT
file_name, tablespace_name
FROM
dba_data_files
WHERE
tablespace_name ='<name of tablespace>';
Step: 2 you now need to identify which objects are inside the tablespace for the purpose of
running an export. To do this, run the following query:
SELECT
owner, segment_name, segment_type
FROM
dba_segments
WHERE
tablespace_name='<name of tablespace>';
Step: 3 Now, export all the objects that you wish to keep.
Step: 4 Once the export is done, issue the
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
Step: 5 Delete the datafiles belonging to this tablespace using the operating system.
Step: 6 Recreate the tablespace with the datafile(s) desired, then import the objects into that
tablespace.
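As a hedged sketch of Case 3 with hypothetical names (user scott, tablespace ts_data, example paths) and the classic exp/imp utilities of that era:

```sql
-- Hypothetical sketch; names, paths and passwords are illustrative only.
-- Step 3: export the objects to keep (run at the OS prompt):
--   exp system/<password> file=/tmp/ts_data.dmp owner=scott
-- Step 4: drop the tablespace and its contents:
DROP TABLESPACE ts_data INCLUDING CONTENTS;
-- Step 5: delete the old datafiles at the OS prompt.
-- Step 6: recreate the tablespace with only the desired datafile(s), then import:
CREATE TABLESPACE ts_data
  DATAFILE '/u01/oradata/db1/ts_data01.dbf' SIZE 100M;
--   imp system/<password> file=/tmp/ts_data.dmp fromuser=scott touser=scott
```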
Case 4
If you do not want to follow any of these procedures, there are other things that can be done
besides dropping the tablespace.

If the reason you wanted to drop the file is because you mistakenly created the file of the
wrong size, then consider using the RESIZE command.

If you really added the datafile by mistake, and Oracle has not yet allocated any space
within this datafile, then you can use the ALTER DATABASE DATAFILE '<filename>'
RESIZE <size>; command to make the file smaller than 5 Oracle blocks. If the datafile is resized
to smaller than 5 Oracle blocks, then it will never be considered for extent allocation. At
some later date, the tablespace can be rebuilt to exclude the incorrect datafile.

Important : The ALTER DATABASE DATAFILE <datafile name> OFFLINE DROP


command is not meant to allow you to remove a datafile. What the command really means is that
you are offlining the datafile with the intention of dropping the tablespace.


Important : If you are running in archivelog mode, you can also use: ALTER DATABASE
DATAFILE <datafile name> OFFLINE; instead of OFFLINE DROP. Once the datafile is
offline, Oracle no longer attempts to access it, but it is still considered part of that tablespace.
This datafile is marked only as offline in the controlfile and there is no SCN comparison done
between the controlfile and the datafile during startup (This also allows you to startup a database
with a non-critical datafile missing). The entry for that datafile is not deleted from the controlfile
to give us the opportunity to recover that datafile.
Managing Control Files
A control file is a small binary file that records the physical structure of the database: the
database name, the names and locations of associated datafiles and online redo log files, the
timestamp of database creation, the current log sequence number, and checkpoint information.
Note:

Without the control file, the database cannot be mounted.


You should create two or more copies of the control file during database creation.

Role of Control File:


When the database instance mounts, Oracle recognizes all files listed in the control file and opens them.
Oracle writes and maintains all listed control files during database operation.
Important:
If you do not specify files for CONTROL_FILES before database creation, and you are
not using the Oracle Managed Files feature, Oracle creates a control file in the
<DISK>:\ORACLE_HOME\DATABASE\ location and uses a default filename. The
default name is operating system specific.
Every Oracle database should have at least two control files, each stored on a different
disk. If a control file is damaged due to a disk failure, the associated instance must be shut
down.
Oracle writes to all filenames listed for the initialization parameter CONTROL_FILES in
the database's initialization parameter file.
The first file listed in the CONTROL_FILES parameter is the only file read by the Oracle
database server during database operation.
If any of the control files become unavailable during database operation, the instance
becomes inoperable and should be aborted.
How to Create Control file at the time of database creation:
The initial control files of an Oracle database are created when you issue the CREATE
DATABASE statement. The names of the control files are specified by the CONTROL_FILES
parameter in the initialization parameter file used during database creation.
How to Create Additional Copies, Renaming, and Relocating Control Files
Step: 1 shut down the database.
Step: 2 Copy an existing control file to a different location, using operating system commands.


Step: 3 edit the CONTROL_FILES parameter in the database's initialization parameter file to add
the new control file's name, or to change the existing control filename.
Step: 4 restart the database.
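The four steps above might look like this on Linux with an spfile; all paths are examples, not a prescription. Note that with an spfile the parameter is changed before the shutdown, since ALTER SYSTEM must run against an open instance:

```sql
-- 1. Register the additional copy; takes effect at the next restart
ALTER SYSTEM SET CONTROL_FILES =
  '/u01/oradata/db1/control01.ctl',
  '/u02/oradata/db1/control02.ctl'
  SCOPE = SPFILE;
-- 2. Shut down cleanly
SHUTDOWN IMMEDIATE;
-- 3. Copy the file at the OS prompt:
--      cp /u01/oradata/db1/control01.ctl /u02/oradata/db1/control02.ctl
-- 4. Restart; Oracle now writes to and maintains both copies
STARTUP;
```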
When should you create new control files?

All control files for the database have been permanently damaged and you do not have a
control file backup.
You want to change one of the permanent database parameter settings originally specified
in the CREATE DATABASE statement. These settings include the database's name and the
following parameters: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY,
MAXDATAFILES, and MAXINSTANCES.

Steps for Creating New Control Files


Step: 1 Make a list of all datafiles and online redo log files of the database.
SELECT MEMBER FROM V$LOGFILE;
SELECT NAME FROM V$DATAFILE;
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'CONTROL_FILES';
Step: 2 shut down the database.
Step: 3 back up all datafiles and online redo log files of the database.
Step: 4 Start up a new instance, but do not mount or open the database:
STARTUP NOMOUNT
Step: 5 create a new control file for the database using the CREATE CONTROLFILE statement.
Example:
CREATE CONTROLFILE REUSE DATABASE "<DB_NAME>" NORESETLOGS
NOARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 '<DISK>:\Directory\REDO01.LOG' SIZE 5024K,
GROUP 2 '<DISK>:\Directory\REDO02.LOG' SIZE 5024K,
GROUP 3 '<DISK>:\Directory\REDO03.LOG' SIZE 5024K
# STANDBY LOGFILE
DATAFILE
'<DISK>:\Directory\SYSTEM.DBF',
'<DISK>:\Directory\UNDOTBS.DBF'
CHARACTER SET WE8MSWIN1252
;
Step: 6 open the database using one of the following methods:


If you specified NORESETLOGS when creating the control file, use the following command:
ALTER DATABASE OPEN;
If you specified RESETLOGS when creating the control file, use the ALTER
DATABASE statement, indicating RESETLOGS.
ALTER DATABASE OPEN RESETLOGS;

TIPS:
When creating a new control file, select the RESETLOGS option if you have lost any online redo
log groups in addition to control files. In this case, you will need to recover from the loss of the
redo logs. You must also specify the RESETLOGS option if you have renamed the database.
Otherwise, select the NORESETLOGS option.
Backing Up Control Files
Method 1:
Back up the control file to a binary file (duplicate of existing control file) using the following
statement:
ALTER DATABASE BACKUP CONTROLFILE TO '<DISK>:\Directory\control.bkp';
Method 2:
Produce SQL statements that can later be used to re-create your control file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
How to retrieve information related to Control File:

V$DATABASE (Displays database information from the control file)


V$CONTROLFILE (Lists the names of control files)
V$CONTROLFILE_RECORD_SECTION (Displays information about control file
record sections)
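For example, the views listed above can be queried directly:

```sql
-- Names of all control file copies in use
SELECT NAME FROM V$CONTROLFILE;

-- Database details recorded in the control file
SELECT NAME, CREATED, LOG_MODE FROM V$DATABASE;
```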
Managing Redo Log Files

The redo log consists of two or more preallocated files that store all changes made to the database.
Every instance of an Oracle database has an associated online redo log to protect the database in
case of an instance failure.
Main points to consider before creating redo log files?

Members of the same group should be stored on separate disks so that no single disk failure
can cause LGWR and the database instance to fail.
Set the archive destination to a separate disk, other than the redo log members, to avoid
contention between LGWR and ARCn.
With mirrored groups of online redo logs, all members of the same group must be the
same size.

What are the parameters related to Redo log files?



Parameters related to redo log files are:
MAXLOGFILES
MAXLOGMEMBERS
The MAXLOGFILES and MAXLOGMEMBERS parameters are defined at database creation.
You can increase these parameters by recreating the control file.
How do you create an online redo log group?
Alter database add logfile group <group Number>
('<DISK>:\Directory\<LOG_FILE_NAME>.log',
'<DISK>:\Directory\<LOG_FILE_NAME>.log') size 500K;
How to check the status of added redo log group?
Select * from v$log;
Interpretation:
Here you will observe that the status is UNUSED, meaning this redo log file is not yet being used
by Oracle. ARC is the archived column in v$log; it is by default YES when you create a redo
log file. It returns to NO if the system is not in archive log mode and this file is used by
Oracle. SEQUENCE# 0 also indicates that it is not being used as yet.
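A narrower query returns just the columns interpreted above (column names as in the 10g V$LOG view):

```sql
-- A freshly added group typically shows STATUS = UNUSED,
-- ARCHIVED (ARC) = YES and SEQUENCE# = 0
SELECT GROUP#, SEQUENCE#, ARCHIVED, STATUS
FROM   V$LOG
ORDER  BY GROUP#;
```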
How to create an online redo log member?
alter database add logfile member
'<DISK>:\Directory\<LOG_FILE_NAME>.log', '<DISK>:\Directory\<LOG_FILE_NAME>.log'
to group <GROUP NUMBER>;
How to rename and relocate online redo log members?
Important: Take the backup before renaming and relocating.
Step: 1 Shut down the database.
Step: 2 Start up the database with STARTUP MOUNT.
Step: 3 Copy the desired redo log files to the new location. You can change the name of the redo
log file in the new location.
Step: 4 Alter database rename file '<DISK>:\Directory\<LOG_FILE_NAME>.log' to
'<new path>\<LOG_FILE_NAME>.log';
Step: 5 alter database open;


Step: 6 Shutdown the database normal and take the backup.


How to drop online redo log group?


Important:

You must have at least two online redo log groups.
You cannot drop an active online redo log group. If it is active, switch it with alter system
switch logfile before dropping.
Also make sure that the online redo log group is archived (if archiving is enabled).

Syntax:

If you want to drop log group:

Alter database drop logfile group <GROUP_NUMBER>;

If you want to drop a logfile member:

Alter database drop logfile member '<DISK>:\Directory\<LOG_FILE_NAME>.log';


How to view online redo log information?
Select * from v$log;
Select * from v$logfile;
Note: If STATUS is blank for a member, then the file is in use.
Temporary Tablespace
First we will discuss the use of the temporary tablespace. We use it to manage space for database
sort operations. For example, if we join two large tables, Oracle requires space for the sort
operation because it cannot always do the sorting in memory. This sort operation is done in the
temporary tablespace. We must assign a temporary tablespace to each user in the database; if we
don't assign a temporary tablespace to a user, Oracle allocates sort space in the SYSTEM
tablespace by default.
Important:

A temporary tablespace cannot contain permanent objects and therefore doesn't need
to be backed up.
When we create a TEMPFILE, Oracle only writes to the header and last block of the file.
This is why it is much quicker to create a TEMPFILE than to create a normal database
file.
TEMPFILEs are not recorded in the database's control file.
We cannot remove datafiles from a tablespace until you drop the entire tablespace but we
can remove a TEMPFILE from a database:


SQL> ALTER DATABASE TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf'
DROP INCLUDING DATAFILES;

Except for adding a tempfile, you cannot use the ALTER TABLESPACE statement for a
locally managed temporary tablespace (operations like rename, set to read only, recover,
etc. will fail).

How do we create a temporary tablespace?


CREATE TEMPORARY TABLESPACE temp
TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' size 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;
For best performance, the UNIFORM SIZE must be a multiple of the SORT_AREA_SIZE
parameter.
How do we define the default temporary tablespace?
We can define a Default Temporary Tablespace at database creation time, or by issuing an
"ALTER DATABASE" statement:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
Important:

The default Default Temporary Tablespace is SYSTEM.


Each database can be assigned one and only one Default Temporary Tablespace.
Temporary Tablespace is automatically assigned to users.

Restriction:
The following restrictions apply to default temporary tablespaces:

The Default Temporary Tablespace must be of type TEMPORARY


The DEFAULT TEMPORARY TABLESPACE cannot be taken off-line
The DEFAULT TEMPORARY TABLESPACE cannot be dropped until you create
another one.

How to see the default temporary tablespace for a database?


SELECT * FROM DATABASE_PROPERTIES where
PROPERTY_NAME='DEFAULT_TEMP_TABLESPACE';
How to monitor temporary tablespaces and sorting?
Use following query to view temp file information:
Select * from dba_temp_files; or Select * from v$tempfile;
Use following query for monitor temporary segment


Select * from v$sort_segment; or Select * from v$sort_usage;


Use following query for free space in tablespace :
select TABLESPACE_NAME,BYTES_USED, BYTES_FREE from
V$TEMP_SPACE_HEADER;
How to drop and recreate a temporary tablespace? (Method)
This should be performed during off hours with no users logged on performing work.
If you are working with a temporary tablespace that is NOT the default temporary tablespace for
the database, this process is very simple. Simply drop and recreate the temporary tablespace:
Step: 1 Drop the Tablespace
DROP TABLESPACE temp;
Tablespace dropped.
Step: 2 Create new temporary tablespace.
CREATE TEMPORARY TABLESPACE TEMP
TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' SIZE 500M REUSE
AUTOEXTEND ON NEXT 100M MAXSIZE unlimited
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
How to drop and recreate the default temporary tablespace? (Method)
You will know fairly quickly if the tablespace is a default temporary tablespace when you are
greeted with the following exception:
DROP TABLESPACE temp;
drop tablespace temp
*
ERROR at line 1:
ORA-12906: cannot drop default temporary tablespace
Step: 1 Create another temporary tablespace.
CREATE TEMPORARY TABLESPACE temp2
TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' SIZE 5M REUSE
AUTOEXTEND ON NEXT 1M MAXSIZE unlimited
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
Tablespace created.
Step: 2 Make it the default temporary tablespace.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
Database altered.
Step: 3 Drop the old default tablespace.


DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;


Tablespace dropped.
Most Important:
You do not need to assign a temporary tablespace while creating a database user. The temporary
tablespace is automatically assigned. The name of the temporary tablespace is determined by the
DEFAULT_TEMP_TABLESPACE property in the data dictionary view DATABASE_PROPERTIES.
Example:
Step: 1 Create database user
create user test identified by test default TABLESPACE users;
User created.
Step: 2 View information
SELECT USERNAME, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE FROM
DBA_USERS WHERE USERNAME='TEST';
USERNAME   DEFAULT_TABLESPACE   TEMPORARY_TABLESPACE
--------   ------------------   --------------------
TEST       USERS                TEMP

NOTE: Temporary Tablespace TEMP is automatically assigned to the user TEST.


Certain Restrictions
The default temporary tablespace cannot be dropped.
The default temporary tablespace cannot be taken offline.
Managing UNDO TABLESPACE

Before a commit, Oracle Database keeps records of the actions of transactions because Oracle
needs this information to roll back or undo the changes.
What are the main Init.ora Parameters for Automatic Undo Management?
UNDO_MANAGEMENT:
The default value for this parameter is MANUAL. If you want to set the database in an
automated mode, set this value to AUTO. (UNDO_MANAGEMENT = AUTO)
UNDO_TABLESPACE:
UNDO_TABLESPACE defines the tablespaces that are to be used as Undo Tablespaces. If no
value is specified, Oracle will use the system rollback segment to startup. This value is dynamic
and can be changed online (UNDO_TABLESPACE = <Tablespace_Name>)
UNDO_RETENTION:


The default value for this parameter is 900 seconds. This value specifies the amount of time undo
is kept in the tablespace. This applies to both committed and uncommitted transactions, since the
Flashback Query feature introduced in Oracle needs this information to create a read-consistent
copy of the data in the past.
UNDO_SUPPRESS_ERRORS:
The default value is FALSE. Set this to TRUE to suppress the errors generated when manual
management SQL operations are issued in automatic undo management mode.
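Taken together, the parameters above might appear in an init.ora file as in the fragment below; the tablespace name and retention value are illustrative, not required values:

```
# Hypothetical init.ora fragment for automatic undo management
undo_management = AUTO
undo_tablespace = UNDOTBS1   # name is an example
undo_retention  = 900        # seconds
```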
How to create an UNDO tablespace?
An UNDO tablespace can be created at database creation time or can be added to an
existing database using the CREATE UNDO TABLESPACE command.
Scripts at the time of Database creation:
CREATE DATABASE <DB_NAME>
MAXINSTANCES 1
MAXLOGHISTORY 1
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
DATAFILE '<DISK>:\Directory\<FILE_NAME>.DBF' SIZE 204800K REUSE
AUTOEXTEND ON NEXT 20480K MAXSIZE 32767M
UNDO TABLESPACE "<UNDO_TABLESPACE_NAME>"
DATAFILE '<DISK>:\DIRECTORY\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON NEXT 1024K MAXSIZE 32767M
CHARACTER SET WE8MSWIN1252
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K,
GROUP 2 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K,
GROUP 3 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K;
Scripts after creating Database:
CREATE UNDO TABLESPACE "<UNDO_TABLESPACE_NAME>"
DATAFILE '<DISK>:\DIRECTORY\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON;
How to drop an undo tablespace?
You cannot drop an active undo tablespace; that is, an undo tablespace can only be dropped if it is
not currently used by any instance. Use the DROP TABLESPACE statement to drop an undo
tablespace; all contents of the undo tablespace are removed.
Example:
DROP TABLESPACE <UNDO_TABLESPACE_NAME> including contents;
How to switch undo tablespaces?


We can switch from one undo tablespace to another undo tablespace. Because the
UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM
SET statement can be used to assign a new undo tablespace.
Step 1: Create another UNDO TABLESPACE
CREATE UNDO TABLESPACE "<ANOTHER_UNDO_TABLESPACE>"
DATAFILE '<DISK>:\Directory\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON;
Step 2: Switches to a new undo Tablespace:
alter system set UNDO_TABLESPACE=<UNDO_TABLESPACE>;
Step 3: Drop old UNDO TABLESPACE
drop tablespace <UNDO_TABLESPACE> including contents;
IMPORTANT:
The database is online while the switch operation is performed, and user transactions can be
executed while this command is being executed. When the switch operation completes
successfully, all transactions started after the switch operation began are assigned to transaction
tables in the new undo Tablespace.
The switch operation does not wait for transactions in the old undo Tablespace to commit. If
there are any pending transactions in the old undo Tablespace, the old undo Tablespace enters into
a PENDING OFFLINE mode (status). In this mode, existing transactions can continue to
execute, but undo records for new user transactions cannot be stored in this undo Tablespace.
An undo Tablespace can exist in this PENDING OFFLINE mode, even after the switch operation
completes successfully. A PENDING OFFLINE undo Tablespace cannot be used by another
instance, nor can it be dropped. Eventually, after all active transactions have committed, the undo
Tablespace automatically goes from the PENDING OFFLINE mode to the OFFLINE mode.
From then on, the undo Tablespace is available for other instances (in an Oracle Real Application
Cluster environment).
If the parameter value for UNDO TABLESPACE is set to '' (two single quotes), the current undo
Tablespace will be switched out without switching in any other undo Tablespace. This can be
used, for example, to unassign an undo Tablespace in the event that you want to revert to manual
undo management mode.
The following example unassigns the current undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = '';
How to Monitoring Undo Space?
The V$UNDOSTAT view is useful for monitoring the effects of transaction execution on undo
space in the current instance. Statistics are available for undo space consumption, transaction
concurrency, and length of queries in the instance.
The following example shows the results of a query on the V$UNDOSTAT view.


SELECT BEGIN_TIME, END_TIME, UNDOTSN, UNDOBLKS, TXNCOUNT,
MAXCONCURRENCY AS "MAXCON" FROM V$UNDOSTAT;


Chapter 6: Creating and Managing 10G ASM


About ASM, Functionality and Advantage
ASM is the most powerful feature in 10g. As DBAs we don't worry about database file (data
files, control files, redo log files, archive logs, RMAN backup sets and so on) management tasks,
because ASM completely automates file management. We only deal with a few disk groups,
instead of directly handling data files. (Example: with an ASM-enabled database, we don't need
to refer to tablespaces by filename; we just use a simple disk group name.)
Another big advantage is that 10g gives us the power to bypass a third-party LVM by mirroring
and striping our disks directly, because ASM works as an LVM and also handles striping and
mirroring functions.
As a DBA, there is no need to relocate data because ASM prevents disk fragmentation. ASM
performs mirroring and striping. Mirroring is applied on a per-file basis rather than a per-disk
basis.
The three major components are the ASM instance, disk groups and ASM files. In an ASM-enabled
database, we first start the ASM instance (this is not a full database instance, just the memory
structures, and as such is very small and lightweight); the ASM instance manages the disk groups
and helps the database access the ASM files.
Important
The default ASM instance name is +ASM
The default RAC instance names are +ASMn, where n is the node number
The ASM instance may be managed with the srvctl command
Normally auto-managed by Oracle
Functionality of ASM
Manages groups of disks, called disk groups.
Manages disk redundancy within a disk group.
Provides near-optimal I/O balancing without any manual tuning.
Supports large files.
Advantage of ASM
Disk Addition: Adding a disk becomes very easy. No downtime is required and file
extents are redistributed automatically.
I/O Distribution: I/O is spread over all the available disks automatically, without manual
intervention.
Stripe Width: Striping can be fine-grained as for redo log files (128K for a faster transfer
rate) and coarse for datafiles (1MB for transfer of a large number of blocks at one time).
Buffering: The ASM filesystem is not buffered, making it direct I/O capable by design.
Kernelized Asynch I/O: There is no special setup necessary to enable kernelized
asynchronous I/O, without using raw or third-party filesystems such as Veritas Quick I/O.
Mirroring: Software mirroring can be set up easily, if hardware mirroring is not
available.


ASM Instance, Architecture and Management


The ASM instance maintains the ASM file metadata, which databases use to access the files
directly. ASM does not mount the database files, and ASM does not have a data dictionary. The
ASM instance only manages the disk groups, protects the disk groups, and communicates file
metadata to database instances.
ASM has its own background processes, such as the SMON, PMON and LGWR processes.
On the ASM instance side, two new background processes are introduced: ASM Rebalance
Master (RBAL) and ASM Rebalance (ARBn).
ASM Rebalance Master (RBAL) is in charge of coordinating disk activity.
ASM Rebalance (ARBn) performs the actual rebalancing work, like moving data extents
around.
On the database instance side, two new background processes are introduced: RBAL and ASMB.
The RBAL process performs a global open of the disks in the ASM disk groups.
The ASMB background process connects as a foreground process into your ASM instance.
Starting ASM Instance
We can use the STARTUP command with the NOMOUNT, MOUNT, RESTRICT and
FORCE options.
A database STARTUP command reads the control file and mounts the files specified in
the control file, but an ASM STARTUP command mounts the disk groups, because an ASM
instance does not have any file system to mount.
With the STARTUP FORCE command, the ASM instance is shut down with SHUTDOWN
ABORT before restarting.
The STARTUP RESTRICT command prevents any client Oracle database instance from
connecting to the ASM instance.
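As a sketch, starting an ASM instance from SQL*Plus might look like this (assuming ORACLE_SID is set to +ASM and a SYSDBA connection):

```sql
-- At the OS prompt: export ORACLE_SID=+ASM
-- Then, connected as SYSDBA:
STARTUP;            -- mounts the disk groups, not a database
-- or, to keep client database instances from connecting:
STARTUP RESTRICT;
```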
Shutting Down Instance
SHUTDOWN and SHUTDOWN NORMAL (the instance waits for all connected database
instances to terminate their ASM connections before shutting down).
SHUTDOWN IMMEDIATE and SHUTDOWN TRANSACTIONAL (the ASM instance
waits until all currently executing SQL in the dependent databases completes, but does not
wait for the database instances to disconnect).
SHUTDOWN ABORT (the ASM instance aborts instantly; all open connections to the ASM
instance are terminated and all dependent Oracle databases terminate immediately).
Managing ASM Directories, Aliases, Files, Metadata and Template
Important Note
We cannot use ASM for alert, trace, binary and password files.
Oracle automatically creates ASM files with a specified redundancy level and striping
policy. The redundancy and striping policies are set permanently for the file, and we
specify these attributes when we create the disk group.
Only files created by ASM will be automatically deleted.
We never need to supply a filename when creating database files, creating tablespaces, or
adding and assigning files to tablespaces.


The operating system cannot see ASM files, but RMAN and other Oracle utilities can.

Types of ASM Filenames


Fully Qualified ASM filenames
Numeric ASM filenames
Alias ASM filename
Incomplete ASM filename
Fully Qualified ASM Filename: We use this only for referencing existing ASM files.
Example: +group/dbname/file_type/tag.file.incarnation
group is the disk group name.
dbname is the name of the database.
file_type is the ASM file type.
tag is type-specific information about the file.
file.incarnation is the file/incarnation number pair, used to ensure uniqueness.
Important: We cannot supply a fully qualified ASM filename while creating a new file.
Numeric ASM filename: - The numeric ASM filename can be used for referencing existing
files. It is derived from the fully qualified ASM filename and takes the form:
+group.file.incarnation
Numeric ASM filenames can be used in any interface that requires an existing file name.
An example of a numeric ASM filename is:
+dgroup2.257.541956473
Alias ASM filename: We can use ASM alias filenames both when creating new ASM files and when
referring to existing files.
Incomplete ASM filename: We can use an incomplete ASM filename only when creating
files. Incomplete filenames include just the disk group name and nothing else.
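As a sketch contrasting the four forms (the disk group, directory and file numbers reuse the examples from this section):

```sql
-- Fully qualified and numeric forms: reference existing files only
-- +DB_DATA/mydb/datafile/my_ts.342.3
-- +DB_DATA.342.3

-- Alias form: usable both when creating new files and for existing files
CREATE TABLESPACE my_ts DATAFILE '+DB_DATA/my_dir/my_file.dbf' SIZE 100M;

-- Incomplete form: new files only; ASM generates the actual file name
CREATE TABLESPACE my_ts2 DATAFILE '+DB_DATA' SIZE 100M;
```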
ASM Directories
How to create a directory:
ALTER DISKGROUP DB_DATA ADD DIRECTORY '+DB_DATA/my_dir';
How to rename a directory:
ALTER DISKGROUP DB_DATA RENAME DIRECTORY '+DB_DATA/my_dir' TO
'+DB_DATA/my_dir_2';
How to delete a directory and all its contents:
ALTER DISKGROUP DB_DATA DROP DIRECTORY '+DB_DATA/my_dir_2' FORCE;

Aliases
Aliases allow you to reference ASM files using user-friendly names, rather than the fully
qualified ASM filenames.
How to create an alias using the fully qualified filename:
ALTER DISKGROUP DB_DATA ADD ALIAS '+DB_DATA/my_dir/my_file.dbf'
FOR '+DB_DATA/mydb/datafile/my_ts.342.3';
How to create an alias using the numeric form filename:
ALTER DISKGROUP DB_DATA ADD ALIAS '+DB_DATA/my_dir/my_file.dbf'
FOR '+DB_DATA.342.3';
How to rename an alias:
ALTER DISKGROUP DB_DATA RENAME ALIAS '+DB_DATA/my_dir/my_file.dbf'
TO '+DB_DATA/my_dir/my_file2.dbf';
How to delete an alias:
ALTER DISKGROUP DB_DATA DELETE ALIAS '+DB_DATA/my_dir/my_file.dbf';
Files
Files are not deleted automatically if they are created using aliases (as they are not Oracle
Managed Files), or if a recovery is done to a point in time before the file was created. In
these circumstances it is necessary to delete the files manually, as shown below.
How to Drop file using an alias?
ALTER DISKGROUP DB_DATA DROP FILE '+DB_DATA/my_dir/my_file.dbf';
How to Drop file using a numeric form filename?
ALTER DISKGROUP DB_DATA DROP FILE '+DB_DATA.342.3';
How to Drop file using a fully qualified filename?
ALTER DISKGROUP DB_DATA DROP FILE '+DB_DATA/mydb/datafile/my_ts.342.3';
Metadata
The internal consistency of disk group metadata can be checked in a number of ways using the
CHECK clause of the ALTER DISKGROUP statement.
How to check metadata for a specific file?


ALTER DISKGROUP DB_DATA CHECK FILE '+DB_DATA/my_dir/my_file.dbf';


How to check metadata for a specific failure group in the disk group?
ALTER DISKGROUP DB_DATA CHECK FAILGROUP failure_group_1;
How to check metadata for a specific disk in the disk group?
ALTER DISKGROUP DB_DATA CHECK DISK diska1;
How to check metadata for all disks in the disk group?
ALTER DISKGROUP DB_DATA CHECK ALL;
Templates
Templates are named groups of attributes that can be applied to the files within a disk group. The
following examples show how templates can be created, altered and dropped.
How to create a new template?
ALTER DISKGROUP DB_DATA ADD TEMPLATE my_template ATTRIBUTES (MIRROR
FINE);
How to modify template?
ALTER DISKGROUP DB_DATA ALTER TEMPLATE my_template ATTRIBUTES
(COARSE);
How to Drop template?
ALTER DISKGROUP DB_DATA DROP TEMPLATE my_template;
Important:
Available attributes include:
- UNPROTECTED - No mirroring or striping regardless of the redundancy setting.
- MIRROR - Two-way mirroring for normal redundancy and three-way mirroring for high
  redundancy. This attribute cannot be set for external redundancy.
- COARSE - Specifies lower granularity for striping. This attribute cannot be set for
  external redundancy.
- FINE - Specifies higher granularity for striping. This attribute cannot be set for external
  redundancy.
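Once defined, a template can be applied by naming it after the disk group in the file-creation form; for example (a sketch reusing the my_template and DB_DATA names above):

```sql
-- The new datafile inherits my_template's MIRROR/FINE attributes
CREATE TABLESPACE my_ts DATAFILE '+DB_DATA(my_template)' SIZE 100M;
```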
Managing ASM Disk Group
An ASM disk group is a collection of disks. When we add storage to our ASM system, we simply
add disks to an ASM disk group.


We can create disk group by using Database Control Disk Group Administration page or
manually by using CREATE DISKGROUP command.
How to Create DISK GROUP

Example:

Suppose we have two disk controllers and a total of six disks. DiskA1 through DiskA3 are on
one SCSI controller and DiskB1 through DiskB3 are on another. Here we create two failure
groups, each with three disks: the first three disks (DiskA1, DiskA2 and DiskA3) are on disk
controller 1 and the second three disks (DiskB1, DiskB2 and DiskB3) are on disk controller 2.
Now we start the ASM instance in NOMOUNT mode; the ASM instance is then ready to create a
disk group. We create the disk group with the two corresponding failure groups:
SQL> create diskgroup DB_DATA normal redundancy
  failgroup groupA disk '/devices/DiskA1', '/devices/DiskA2', '/devices/DiskA3'
  failgroup groupB disk '/devices/DiskB1', '/devices/DiskB2', '/devices/DiskB3';
When Oracle writes data to the disks in the first failure group, groupA, it also writes those
extents to disks in the other failure group, groupB.
Important:

When you don't specify a FAILGROUP clause, the disk is placed in its own failure group.

Adding Disks to a Disk Group


ALTER DISKGROUP DB_DATA ADD DISK '/devices/DiskA4', '/devices/DiskB4';
Note:
1. When a disk is added, it is formatted and then rebalanced.
2. When you don't specify a FAILGROUP clause, the disk is placed in its own failure group.
3. If you don't specify the NAME clause, Oracle assigns its own system-generated names.
4. If the disk already belongs to a disk group, the statement will fail.
Dropping disks and disk groups
ALTER DISKGROUP DB_DATA DROP DISK DiskA4;
DROP DISKGROUP DB_DATA INCLUDING CONTENTS;


Note:
- The DROP DISKGROUP statement requires the instance to be in MOUNT state.
- When a disk is dropped, the disk group is rebalanced by moving all of the file extents
  from the dropped disk to other disks in the disk group. The header on the dropped disk is
  then cleared.
- If you specify the FORCE clause for the drop operation, the disk is dropped even if
  Automatic Storage Management cannot read or write to the disk.
- You can also drop all of the disks in specified failure groups using the DROP DISKS IN
  FAILGROUP clause.
Rebalance Disk Group
ASM rebalances a disk group automatically whenever we add or remove disks from a disk group.
Disk groups can be rebalanced manually using the REBALANCE clause of the ALTER DISKGROUP
statement. If the POWER clause is omitted the ASM_POWER_LIMIT parameter value is used.
ALTER DISKGROUP DB_DATA REBALANCE POWER 5;
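A running rebalance can be watched from the ASM instance through the V$ASM_OPERATION view; for example:

```sql
-- One row per active long-running operation (OPERATION = 'REBAL');
-- EST_MINUTES is the estimated time remaining at the current POWER
SELECT group_number, operation, state, power, est_minutes
FROM v$asm_operation;
```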
Resize the Disk

Resize a specific disk.

ALTER DISKGROUP DB_DATA RESIZE DISK DISK1 SIZE 100G;

Resize all disks in a disk group.

ALTER DISKGROUP DB_DATA RESIZE ALL SIZE 100G;

Undrop Disk

The UNDROP DISKS clause of the ALTER DISKGROUP statement allows pending disk drops to be
undone. It will not revert drops that have completed, or disk drops associated with the dropping
of a disk group.
ALTER DISKGROUP disk_group_1 UNDROP DISKS;
Mount and Dismount the ASM DISKGROUP
Disk groups are mounted at ASM instance startup and unmounted at ASM instance shutdown.
Manual mounting and dismounting can be accomplished using the ALTER DISKGROUP statement
as seen below.
ALTER DISKGROUP ALL DISMOUNT;
ALTER DISKGROUP ALL MOUNT;
ALTER DISKGROUP disk_group_1 DISMOUNT;


ALTER DISKGROUP disk_group_1 MOUNT;


Move Non-ASM database to ASM
Step 1 Obtain the current data file, control file and redo log file locations using V$DATAFILE,
V$CONTROLFILE and V$LOGFILE.
Step 2 Shut down the database.
Step 3 Edit the init.ora (if the target system uses an init file on the local system; otherwise use
the ALTER SYSTEM command) to point the control file and database file destinations to ASM.
Example: if your disk group names are +DB_DATA and +FLASH_RECOVERY_AREA:
control_files='+DB_DATA'
db_create_file_dest='+DB_DATA'
db_recovery_file_dest='+FLASH_RECOVERY_AREA'
Step 4 Startup the Database in nomount state
SQL> Startup nomount
Step 5 Open an RMAN session and copy the control file from the old location to the new location.
RMAN> connect target
RMAN> restore controlfile from '<OLD_LOCATION.CTL>';
Step 6 Mount the database.
RMAN> Alter database mount;
Step 7 Copy the data files from non-ASM storage to ASM using RMAN:
RMAN>Backup as copy database format '+DB_DATA';
Step 8 Rename the Datafile using RMAN
RMAN>Switch database to copy;
Step 9 Perform incomplete recovery and open the database using the RESETLOGS option. From
a SQL*Plus session:
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ORA-00279: change 7937583 generated at 08/14/2006 20:33:55 needed for thread 1
ORA-00289: suggestion : +FLASH_RECOVERY_AREA
ORA-00280: change 7937583 for thread 1 is in sequence #36
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.
SQL> ALTER DATABASE OPEN RESETLOGS;
Database altered.
Step 10 Re-create any tempfiles that are still currently on the local file system to ASM. This is
done by simply dropping the tempfiles from the local file system and re-creating them in ASM.


SQL>select tablespace_name, file_name, bytes from dba_temp_files;


SQL>alter database tempfile '<Path of local file>' drop including datafiles;
SQL> alter tablespace temp add tempfile size 512m autoextend on next 250m maxsize unlimited;
Tablespace altered.
Step 11 Re-create online redo logfiles that are still currently on the local file system to ASM.
This is done by simply dropping the logfiles from the local file system and re-creating them in
ASM.
1. Determine the current online redo logfiles to move to ASM by examining the file names (and
sizes) from V$LOGFILE:
SQL> select a.group#, a.member, b.bytes from v$logfile a, v$log b where a.group# = b.group#;
2. Force a log switch until the last redo log is marked "CURRENT" by issuing the following
command:
SQL> select group#, status from v$log;

    GROUP# STATUS
---------- ----------------
         1 CURRENT
         2 INACTIVE
         3 INACTIVE

SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> select group#, status from v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 INACTIVE
         3 CURRENT
3. After making the last online redo log file the CURRENT one, drop the first online redo log:
SQL> alter database drop logfile group 1;
Database altered.
4. Re-create the dropped redo log group in ASM (and a different size if desired):


SQL> alter database add logfile group 1 size 250m;


Database altered.
5. After re-creating the online redo log group, force a log switch. The online redo log group just
created should become the CURRENT one:
SQL> select group#, status from v$log;

    GROUP# STATUS
---------- ----------------
         1 UNUSED
         2 INACTIVE
         3 CURRENT

SQL> alter system switch logfile;
SQL> select group#, status from v$log;

    GROUP# STATUS
---------- ----------------
         1 CURRENT
         2 INACTIVE
         3 ACTIVE
6. After re-creating the first online redo log group, loop back to drop / re-create the next online
redo logfile until all logs are rebuilt in ASM.
7. Verify all online redo logfiles have been created in ASM:
SQL> select a.group#, a.member, b.bytes from v$logfile a, v$log b where a.group# = b.group#;
IMPORTANT:
You cannot drop the current logfile group. Trying to drop the current logfile group results in
the following error:
SQL> ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 1
*
ERROR at line 1:
ORA-01624: log 1 needed for crash recovery of instance TESTDB (thread 1)
ORA-00312: online log 1 thread 1: '<file_name>'
This is an easy problem to resolve: simply perform a checkpoint on the database:
SQL> ALTER SYSTEM CHECKPOINT GLOBAL;
System altered.
SQL> ALTER DATABASE DROP LOGFILE GROUP 1;
Database altered.
Step 12 Perform the following steps to relocate the SPFILE from the local file system to an ASM
disk group.


1. Create a text-based initialization parameter file from the current binary SPFILE located on the
local file system:
SQL> CREATE PFILE FROM SPFILE;
File created.
2. Create new SPFILE in an ASM disk group:
SQL> CREATE SPFILE='+DB_DATA1/TEST/spfileTEST.ora' FROM PFILE='Location of
pfile on local system';
File created.
3. Shutdown the Oracle database:
SQL> SHUTDOWN IMMEDIATE
Database closed.
Database dismounted.
ORACLE instance shut down.
4. Remove (actually rename) the old SPFILE on the local file system so that the new text-based
init<SID>.ora will be used:
5. Open the Oracle database using the new SPFILE:
SQL> STARTUP
Step 13 Verify that all database files have been created in ASM and delete old file.
$ sqlplus "/ as sysdba"
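Condensed, the core command sequence of the migration above is roughly as follows (the disk group names and the control file placeholder are the assumptions used in this example):

```sql
-- init.ora: control_files='+DB_DATA', db_create_file_dest='+DB_DATA',
--           db_recovery_file_dest='+FLASH_RECOVERY_AREA'
SQL> STARTUP NOMOUNT
RMAN> RESTORE CONTROLFILE FROM '<OLD_LOCATION.CTL>';
RMAN> ALTER DATABASE MOUNT;
RMAN> BACKUP AS COPY DATABASE FORMAT '+DB_DATA';
RMAN> SWITCH DATABASE TO COPY;
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
SQL> ALTER DATABASE OPEN RESETLOGS;
-- then re-create tempfiles and online redo logs in ASM, and move the SPFILE
```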

Access ASM Disk Files through FTP and HTTP

Configuring XDB makes it possible to use FTP and HTTP sessions on Unix, or a browser on
Windows, to access ASM files. Files can easily be moved in and out of ASM in this way.
Step 1 Follow Metalink Note 243554.1, "How to Deinstall and Reinstall XML Database (XDB)",
to install XDB if it is not already installed.
Step 2 Configure the FTP and HTTP ports of XDB using:
connect / as sysdba
execute dbms_xdb.sethttpport(8080);
execute dbms_xdb.setftpport(2100);
commit;
Step 3 Restart the database and the listener:
sqlplus '/ as sysdba'
shutdown immediate
startup


lsnrctl stop <LISTENER NAME>


lsnrctl start <LISTENER NAME>
Step 4 Check the listener status; you will notice that the following listening endpoints were
automatically registered with your listener:
(DESCRIPTION =(ADDRESS = (PROTOCOL = tcp)(HOST = testdb1)(PORT = 2100))
(Presentation = FTP)(Session = RAW))
(DESCRIPTION = (ADDRESS = (PROTOCOL = tcp)(HOST = testdb1)(PORT = 8080))
(Presentation = HTTP)(Session = RAW))
Step 5 Connect via FTP as follows:
OS> ftp -n
open testdb 2100
user system <password>
cd sys
cd asm
...
Enter the user and password as SYSTEM and <password>.
With FTP you can navigate through the ASM directories and transfer files in and out of
ASM.
Reference:
Metalink Note 243554.1: How to Deinstall and Reinstall XML Database (XDB)

ASM Disk Crash Case Study


Case: All disks in a failgroup are lost after a disk/LUN failure.

Redundancy: NORMAL

FAILGROUP      DISK
Controller1    VOL1, VOL2
Controller2    VOL4, VOL5

Suppose all disks (VOL4, VOL5) in failgroup Controller2 are lost.


Workaround:

Execute the following query on the ASM instance (this shows the ASM disk status):

select group_number, disk_number, mount_status, state, redundancy,
       total_mb, free_mb, failgroup, name from v$asm_disk;

GROUP DISK MOUNT_S STATE  REDUNDA TOTAL_MB FREE_MB FAILGROUP   NAME
    1    3 MISSING NORMAL UNKNOWN        0       0             VOL4
    1    4 MISSING NORMAL UNKNOWN        0       0             VOL5
    1    0 CACHED  NORMAL UNKNOWN     2047    1269 CONTROLLER1 VOL1

The result shows that disks VOL4 and VOL5 are missing.

Add the disks physically, create partitions, then scan and list the ASM disks. The listing
will still show the failed disks (VOL4 and VOL5).


Again execute the below query:

select group_number, disk_number, mount_status, state, redundancy,
       total_mb, free_mb, failgroup, name from v$asm_disk;

GROUP DISK MOUNT_S STATE  REDUNDA TOTAL_MB FREE_MB FAILGROUP   NAME
    0    0 CLOSED  NORMAL UNKNOWN     2047       0
    0    1 CLOSED  NORMAL UNKNOWN     3067       0
    1    4 MISSING HUNG   UNKNOWN        0       0             VOL5
    1    3 MISSING HUNG   UNKNOWN        0       0             VOL4
    1    0 CACHED  NORMAL UNKNOWN     2047    1269 CONTROLLER1 VOL1

Now delete the ASM disks (VOL4 and VOL5), then scan and list the ASM disks:

/etc/init.d/oracleasm deletedisk VOL4
/etc/init.d/oracleasm deletedisk VOL5

The ASM disk listing will no longer show disks VOL4 and VOL5.

Create the ASM disks with different disk names, then scan and list them:

/etc/init.d/oracleasm createdisk VOL4NEW /dev/sdc1
/etc/init.d/oracleasm createdisk VOL5NEW /dev/sdd1

The ASM disk listing will now show disks VOL4NEW and VOL5NEW.

Now we add the newly created ASM disks to the disk group:

ALTER DISKGROUP <DG_NAME> ADD FAILGROUP <FAILGROUP_NAME>
  DISK 'ORCL:VOL4NEW', 'ORCL:VOL5NEW';

Execute the following query and check your disk status:

select group_number, disk_number, mount_status, state, redundancy,
       total_mb, free_mb, failgroup, name from v$asm_disk;
------------END----------------

Case: One or two disks (but not all disks) in a failgroup are lost after a disk/LUN failure.

Redundancy: NORMAL

FAILGROUP      DISK
Controller1    VOL1, VOL2
Controller2    VOL4, VOL5

Suppose one disk (VOL5) in failgroup Controller2 is lost.

Workaround:

Execute the following query on the ASM instance (this shows the ASM disk status):

select group_number, disk_number, mount_status, state, redundancy,
       total_mb, free_mb, failgroup, name from v$asm_disk;

GROUP DISK MOUNT_S STATE  REDUNDA TOTAL_MB FREE_MB FAILGROUP   NAME
    1    2 MISSING NORMAL UNKNOWN        0       0             VOL5
    1    0 CACHED  NORMAL UNKNOWN     2047    1269 CONTROLLER1 VOL1
    1    1 CACHED  NORMAL UNKNOWN     2047    1732 CONTROLLER2 VOL4


The result shows that disk VOL5 is missing.

Add the disk physically, create a partition, then scan and list the ASM disks. The listing
will still show the failed disk (VOL5).

Again execute the below query:

select group_number, disk_number, mount_status, state, redundancy,
       total_mb, free_mb, failgroup, name from v$asm_disk;

GROUP DISK MOUNT_S STATE  REDUNDA TOTAL_MB FREE_MB FAILGROUP   NAME
    0    0 CLOSED  NORMAL UNKNOWN     3067       0
    1    0 CACHED  NORMAL UNKNOWN     2047    1269 CONTROLLER1 VOL1
    1    1 CACHED  NORMAL UNKNOWN     2047    1269 CONTROLLER2 VOL4NEW

Now delete the ASM disk (VOL5), then scan and list the ASM disks:

/etc/init.d/oracleasm deletedisk VOL5

The ASM disk listing will no longer show disk VOL5.

Create the ASM disk, then scan and list the ASM disks:

/etc/init.d/oracleasm createdisk VOL5 /dev/sdd1

The ASM disk listing will now show disk VOL5.

Now we add the newly created ASM disk to the disk group:

ALTER DISKGROUP <DG_NAME> ADD FAILGROUP <FAILGROUP_NAME>
  DISK 'ORCL:VOL5';

Execute the following query and check your disk status:

select group_number, disk_number, mount_status, state, redundancy,
       total_mb, free_mb, failgroup, name from v$asm_disk;
Useful Query for ASM
How to check Disk Group Details?
select group_number, name, total_mb, free_mb, state, type from v$asm_diskgroup;
How to Check ASM Disk Details?
SELECT group_number, disk_number, mount_status, header_status, state, path
FROM v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE  PATH
           0           0 CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK1
           0           1 CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK2
           0           2 CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK3
           0           3 CLOSED  CANDIDATE    NORMAL C:\ASMDISKS\_FILE_DISK4


Note:
The value of zero in the GROUP_NUMBER column for all four disks indicates that the disks
are available but haven't yet been assigned to a disk group.
Dynamic Performance Views
V$ASM_DISKGROUP
This view provides information about a disk group. In a database instance, this view
contains one row for every ASM disk group mounted by the ASM instance.

V$ASM_CLIENT
This view identifies all the client databases using various disk groups. In a Database
instance, the view contains one row for the ASM instance if the database has any open
ASM files.

V$ASM_DISK
This view contains one row for every disk discovered by the ASM instance. In a database
instance, the view will only contain rows for disks in use by that database instance.

V$ASM_FILE
This view contains one row for every ASM file in every disk group mounted by the ASM
instance.

V$ASM_TEMPLATE
This view contains one row for every template present in every disk group mounted by
the ASM instance.

Guideline for Shared pool Size in ASM instance


Increase the shared pool size based on the following guidelines:
- For disk groups using external redundancy: every 100 GB of space needs 1 MB of extra shared
  pool plus a fixed amount of 2 MB of shared pool.
- For disk groups using normal redundancy: every 50 GB of space needs 1 MB of extra shared
  pool plus a fixed amount of 4 MB of shared pool.
- For disk groups using high redundancy: every 33 GB of space needs 1 MB of extra shared pool
  plus a fixed amount of 6 MB of shared pool.
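The guideline reduces to a simple linear formula; as a sketch (the function name is illustrative, not an Oracle API):

```python
def asm_shared_pool_extra_mb(space_gb, redundancy):
    """Extra shared pool (MB) suggested by the guideline above."""
    # redundancy -> (GB of disk group space per 1 MB of extra pool, fixed MB)
    rules = {"external": (100, 2), "normal": (50, 4), "high": (33, 6)}
    per_gb, fixed = rules[redundancy]
    return space_gb / per_gb + fixed

# e.g. 200 GB of external-redundancy disk groups: 200/100 + 2 = 4 MB extra
print(asm_shared_pool_extra_mb(200, "external"))
```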
How to check database Size?
To obtain the current database storage size that is either already on ASM or will be stored in
ASM:
SELECT d+l+t DB_SPACE
FROM (SELECT SUM(bytes)/(1024*1024*1024) d FROM v$datafile),
     (SELECT SUM(bytes)/(1024*1024*1024) l
      FROM v$logfile a, v$log b WHERE a.group#=b.group#),
     (SELECT SUM(bytes)/(1024*1024*1024) t
      FROM v$tempfile WHERE status='ONLINE');


Chapter 7 User Managed Backup/recovery

User-Managed Backup Terminology


(Operating system commands are used to make backups while the database is closed or open.)
A whole database backup refers to a backup of all data files, control files and log files of the
database. A whole database backup can be performed while the database is open or closed.
A backup taken while the database is closed is called a consistent backup (because the database
file headers are consistent with the control file, and when restored completely the database can
be opened without any recovery).
A backup taken while the database is open and operational is called an inconsistent backup
(because the database file headers are not consistent with the control file).
Physical Backup Method

Database Operation Mode    Recovery Scenario
Archive log mode           recover to the point of failure
No archive log mode        recover to the point of the last backup

Querying Views to Obtain Database File Information

- v$datafile (used for obtaining data file information)
- v$controlfile (used for obtaining control file information)
- v$logfile (used for obtaining log file information)
Use the v$tablespace and v$datafile views to obtain a list of all datafiles and their
respective tablespaces:
SQL> SELECT T.NAME TABLESPACE, F.NAME DATAFILE FROM V$TABLESPACE T,
V$DATAFILE F WHERE T.TS# = F.TS# ORDER BY T.NAME;
Making a Consistent Whole Database Backup

- Shut down the database.
- Back up all data files, control files and log files using an operating system command. We
  can also include the password file and parameter file.
- Restart the Oracle database/instance.
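Put together, a minimal cold backup session might look like this (the file paths are illustrative):

```sql
SQL> SHUTDOWN IMMEDIATE
-- copy every data file, control file and log file with an OS command, e.g.:
SQL> HOST copy c:\oracle\oradata\*.* e:\backup\
SQL> STARTUP
```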

Making an Inconsistent Whole Database Backup

Important:
- The database must be set to ARCHIVELOG mode.
- You must ensure that the online redo logs are archived, for example by enabling the Oracle
  automatic archiving (ARCn) process.


Making a Backup of an Online Tablespace or Data File

- Set the datafile or tablespace in backup mode by issuing the following command:

SQL> ALTER TABLESPACE <TABLESPACE_NAME> BEGIN BACKUP;

(Note: This prevents the sequence number in the datafile header from changing.)

- Use an operating system backup utility to copy all datafiles in the tablespace to backup
  storage:

copy c:\datafile_path e:\datafilepath

- After the datafiles of the tablespace have been backed up, take them out of backup mode by
  issuing the following command:

SQL> ALTER TABLESPACE <TABLESPACE_NAME> END BACKUP;

- Archive the unarchived redo logs:

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

Repeat these steps for all tablespaces.
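For a single tablespace the whole cycle looks like this (the tablespace and path names are illustrative):

```sql
SQL> ALTER TABLESPACE users BEGIN BACKUP;
SQL> HOST copy c:\oracle\oradata\users01.dbf e:\backup\
SQL> ALTER TABLESPACE users END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
```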
Mechanism of Open Database Backup
When a datafile is placed in backup mode, more redo log entries may be generated, because the
log writer writes whole block images of changed blocks of the datafile to the redo log
instead of just the row change information.
Backup Status Information (when performing an open database backup)
select * from v$backup; (view to determine which files are in backup mode; when the alter
tablespace begin backup command is issued the status changes to ACTIVE.)
Manual Control File Backups

- Creating a binary image:

ALTER DATABASE BACKUP CONTROLFILE TO 'control.bak';

- Creating a text trace file:

ALTER DATABASE BACKUP CONTROLFILE TO TRACE;


Backing Up the Initialization Parameter File
CREATE PFILE FROM SPFILE; (for the default location)
CREATE PFILE='C:\BACKUP\INIT.ORA' FROM SPFILE;
Backup Verification (Command line Interface)
Used to ensure that a backed-up database or datafile is valid before a restore:
>dbv file='path of file location' start=1 logfile='enter path for log file generation'
Backup Issues with the LOGGING and NOLOGGING Options
Tablespaces, tables and indexes may be set to NOLOGGING mode for faster loading of data
when using direct load operations like SQL*Loader (because the redo logs do not contain the
values that were inserted while the object was in NOLOGGING mode, such objects should be
backed up after the load).


Type of Recovery and Difference between Resetlog and No resetlogs Option


Complete Recovery
Complete recovery can be done with the database OPEN unless the SYSTEM or UNDO
tablespaces are damaged (this will terminate the instance).
When your database is running in noarchivelog mode and the entire database is restored to the
point of the last whole closed backup, this is called complete recovery.
When your database is running in archivelog mode and you only need to restore lost files and
recover all data to the time of failure, this is also called complete recovery.
Short steps for complete recovery:
- Datafiles/tablespaces to restore must be offline.
- Restore only lost or damaged datafiles.
- Recover the datafiles using the RECOVER command.
- Bring the recovered datafiles online.

Incomplete Recovery
Incomplete recovery occurs when complete recovery is impossible, or when you want to discard
some information that was entered by mistake.
In other words, you do not apply all of the redo records generated after the most recent backup.
You usually perform incomplete recovery of the whole database in the following situations:

Media failure destroys some or all of the online redo logs.

A user error causes data loss, for example, a user inadvertently drops a table.

You cannot perform complete recovery because an archived redo log is missing.

You lose your current control file and must use a backup control file to open the database.

To perform incomplete media recovery, you must restore all datafiles from backups created prior
to the time to which you want to recover and then open the database with the RESETLOGS
option when recovery completes.
Difference between the RESETLOGS and NORESETLOGS options
After incomplete recovery (where the entire redo stream wasn't applied) we use the RESETLOGS
option. RESETLOGS will initialize the logs, reset your log sequence number, and start a new
"incarnation" of the database.
After complete recovery (when the entire redo stream was applied) we use the NORESETLOGS
option. Oracle will continue using the existing (valid) log files.
What is a cancel-based recovery?
A cancel-based recovery is a user-managed incomplete recovery that is performed by specifying
the UNTIL CANCEL clause with the RECOVER command. The UNTIL CANCEL clause performs
recovery until the user manually cancels the recovery process. Cancel-based recovery is usually
performed when there is a requirement to recover up to a particular archived redo log file.
If the user does not specify CANCEL, the recovery process will automatically stop when all
redo has been applied to the database.
When Cancel Based Recovery required (Scenario)?


- For example, consider a situation where someone dropped a table, one of the online
  redo logs is missing and is not archived, and the table needs to be recovered.
- Another case is where your backup control file does not know anything about the
  archive logs that were created after your last backup.
- Another scenario is where you have lost all logs past a specific sequence, say X (for
  example, you may know that you have lost all logs past sequence 1234, so you want to
  cancel recovery after log 1233 is applied), and you want to control which archived log
  terminates recovery. Or a scenario where one of the archived redo log files required for
  complete recovery is corrupt or missing, and the only recovery option is to recover up
  to the missing archived redo log file.

NOTE: Remember that the online logs must be reset after you perform an incomplete recovery
or a recovery with a backup control file. So finally you will need to open the database with the
RESETLOGS option: to synchronize datafiles with the control files and redo logs, open the
database using RESETLOGS.
What is a point in time recovery?
A point in time recovery is a method to recover your database to any point in time since the last
database backup.
We use RECOVER DATABASE UNTIL TIME statement to begin time-based recovery. The
time is always specified using the following format, delimited by single quotation marks:
'YYYY-MM-DD:HH24:MI:SS'.
Example: RECOVER DATABASE UNTIL TIME '2000-12-31:12:47:30'
If a backup of the control file is being used with this incomplete recovery, then indicate this in
the statement used to start recovery.
Example: RECOVER DATABASE UNTIL TIME '2000-12-31:12:47:30' USING BACKUP
CONTROLFILE
In this type of recovery, apply redo logs until the last required redo log has been applied to the
restored datafiles. Oracle automatically terminates the recovery when it reaches the correct time,
and returns a message indicating whether recovery is successful.
What is change-based recovery?
Recovers until the specified SCN.
Change-based recovery is a recovery technique using which a database is recovered up to a
specified system change number (SCN). Using the UNTIL CHANGE clause with the RECOVER
command performs a manual change-based recovery. However, RMAN uses the UNTIL SCN
clause to perform a change-based recovery.
Begin change-based recovery, specifying the SCN for recovery termination. The SCN is
specified as a decimal number without quotation marks. For example, to recover through SCN
10034 issue:
RECOVER DATABASE UNTIL CHANGE 10034;
Continue applying redo log files until the last required redo log file has been applied to the
restored datafiles. Oracle automatically terminates the recovery when it reaches the correct SCN,
and returns a message indicating whether recovery is successful.


User Managed Database Recovery


Recovery from missing or corrupted datafile(s):
Scenario (when you take a cold backup, i.e. a consistent backup):
Your database is running in archive log mode. Every Sunday you take a cold (consistent)
backup of all data files, control files and redo log files; Monday to Saturday you take only
backups of the archive log files.
Case 1:
You have the Sunday cold backup and you also have the Monday to Wednesday archive log file
backups. Suppose a data file is corrupted or lost on Thursday; how will you recover the
database up to Wednesday?

When you start the database using the STARTUP command, the system shows the following error:

SQL> startup
ORACLE instance started.
Total System Global Area 122755896 bytes
Fixed Size 453432 bytes
Variable Size 67108864 bytes
Database Buffers 54525952 bytes
Redo Buffers 667648 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'C:\O\ORADATA\SYSTEM.DBF'

Read the DBWR trace file or the alert log file to find details of the missing data files. Restore
the missing files from the backup storage area using an OS copy command, and try to open the
database using the ALTER DATABASE OPEN command.

SQL> alter database open;


alter database open
*
ERROR at line 1:
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: 'C:\O\ORADATA\SYSTEM.DBF'
(Error means data file 1 needs media recovery)

Recover the database using the following syntax:

SQL> recover datafile 1;


ORA-00279: change 222132 generated at 06/02/2006 10:41:42 needed for thread 1
ORA-00289: suggestion : C:\O\ADMIN\ARCH\ARC00100052
ORA-00280: change 222132 for thread 1 is in sequence #52
Note: Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
Media recovery complete.

Open the database:

SQL> alter database open;


Your database is recovered up to Wednesday.


Recovery from missing or corrupted redo log group:


Case 1: A multiplexed copy of the missing log is available.
If a redo log is missing, it should be restored from a multiplexed copy, if possible. Here's an
example, where I attempt a startup from SQL*Plus when a redo log is missing:
SQL> startup
ORACLE instance started.
Total System Global Area 131555128 bytes
Fixed Size 454456 bytes
Variable Size 88080384 bytes
Database Buffers 41943040 bytes
Redo Buffers 1077248 bytes
Database mounted.
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: 'D:\ORACLE_DATA\LOGS\ORCL\REDO03.LOG'
SQL>
To fix this we simply copy REDO03.LOG from its multiplexed location on E: to the above
location on D:
SQL> alter database open;
Database altered.
SQL>
That's it - the database is open for use.

Case 2: Only a redo log file backup copy is available

If a redo log is missing, it should be restored from a cold backup (if a redo log backup is
available in the Sunday cold backup), if possible. Here's an example, where I attempt a startup
from SQL*Plus when a redo log is missing:
SQL> startup
ORACLE instance started.
Total System Global Area  122755896 bytes
Fixed Size                   453432 bytes
Variable Size              67108864 bytes
Database Buffers           54525952 bytes
Redo Buffers                 667648 bytes
Database mounted.
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'C:\O\ORADATA\REDO01.LOG'
SQL>
To fix this we simply copy REDO01.LOG from Cold Backup.

86

SQL> alter database clear unarchived logfile group 1;


SQL> alter database open;
Database altered.
SQL>
That's it - the database is open for use.
Case 3: All redo log files, or any one redo log file, are missing and we have no backup
copy and no multiplexed copy of the redo log files.
If all or some redo logs are missing, here's an example, where I attempt to startup from
SQL*Plus when a redo log is missing:
SQL> startup
ORACLE instance started.
Total System Global Area 122755896 bytes
Fixed Size                   453432 bytes
Variable Size              67108864 bytes
Database Buffers           54525952 bytes
Redo Buffers                 667648 bytes
Database mounted.
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'C:\O\ORADATA\REDO01.LOG'
SQL> recover database until cancel;
Media recovery complete.
SQL> alter database open resetlogs;
Database altered.
That's it - the database is open for use.
How to perform a disaster recovery of an Oracle server using Manual Backup Method
1. Pre-requisites:
The following are the pre-requisites to fully recover an Oracle database server in the event a
disaster occurs:
A FULL Oracle database backup (all data files, control files and redo log files) taken
with operating system copy commands. When making this backup, make sure the Oracle
database is shut down. This backup set will contain a FULL CLOSED Oracle database backup.
A FULL Oracle database backup should be performed every time any changes are made to the
physical and/or logical structure of the Oracle database, and it forms the base for
recovering the database server to fully working order.
Archive log file backups up to the time of the server failure.
The control file as of the time of the server failure.
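The cold-backup prerequisite above can be sketched as a small copy routine. This is an illustrative sketch only - the `cold_backup` function name and the paths in the usage are invented, not part of any Oracle tooling - and a real cold backup must only be taken while the database is shut down:

```python
# Sketch of the manual cold-backup step described above, assuming the
# database is already shut down. All names and paths are illustrative.
import shutil
from pathlib import Path

def cold_backup(files, backup_dir):
    """Copy every database file (datafiles, control files, redo logs)
    into backup_dir, preserving file names. Returns the copied paths."""
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in files:
        target = dest / Path(f).name
        shutil.copy2(f, target)  # copy2 also preserves timestamps
        copied.append(target)
    return copied
```

In practice the file list would come from V$DATAFILE, V$CONTROLFILE and V$LOGFILE before shutdown, and the backup directory would be dated (for example one directory per Sunday).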

87

Scenario
Your database is running in archive log mode. Every Sunday you take a full/cold backup of
the database (all data files, control files and redo log files), and every day from Monday
to Saturday you take only an archive log file backup. If your database server is destroyed
on Saturday, how will you recover data up to Saturday?
Steps
Build the server
You need a server to host the database, so the first step is to acquire or build the new machine.
This is not strictly a DBA task, so we won't delve into details here. The main point to keep in
mind is that the replacement server should, as far as possible, be identical to the old one. In
particular, pay attention to the following areas:
Disk layout and capacity: Ideally the server should have the same number of disks as the original.
This avoids messy renaming of files during recovery. Obviously, the new disks should also have
enough space to hold all software and data that was on the original server.
Operating system, service pack and patches: The operating system environment should be the
same as the original, right up to service pack and patch level.
Memory: The new server must have enough memory to cater to Oracle and operating system /
other software requirements. Oracle memory structures (Shared pool, db buffer caches etc) will
be sized identically to the original database instance. Use of the backup server parameter file will
ensure this.
Install Oracle Software
Now we get to the meat of the database recovery process. The next step is to install Oracle
software on the machine. The following points should be kept in mind when installing the
software:
Install the same version of Oracle as was on the destroyed server. The version number should
match right down to the patch level, so this may be a multi-step process involving installation
followed by the application of one or more patchsets and patches.
Do not create a new database at this stage.
Create a listener using the Network Configuration Assistant. Ensure that it has the same name
and listening ports as the original listener. Relevant listener configuration information can be
found in the backed up listener.ora file.
Create directory structure for database files
After software installation is completed, create all directories required for datafiles, (online and
archived) logs, control files and backups. All directory paths should match those on the original
server. This, though not mandatory, saves additional steps associated with renaming files during
recovery.
Don't worry if you do not know where the database files should be located. You can obtain the
required information from the backup spfile and control file at a later stage. Continue reading - we'll come back to this later.

88

Create Oracle service


An Oracle service must exist before a database is created. The service is created using the
oradim utility, which must be run from the command line. The following commands show how
to create a service:
--create a new service with auto startup
C:\>oradim -new -sid ORCL -intpwd ORCL -startmode a
Unfortunately oradim does not give any feedback, but you can check that the service exists via
the Services administrative panel. The service has been configured to start automatically when
the computer is powered up. Note that oradim offers options to delete, startup and shutdown a
service. See the documentation for details.
Restore backup from tape
The next step is to get your backup from tape on to disk.
Restore and recover database
If an Oracle database server experienced a disaster such as a hard disk failure, use this procedure
to recover the server and the Oracle databases:
Shutdown database
SQL> SHUTDOWN IMMEDIATE
Restore all data files and redo log files from the cold backup, and restore all archive log
files taken between the cold backup and the disaster.
Restore the control file that was current at the time of the disaster.
When the restore operation completes, move to the Oracle database server.
Start SQL*Plus, connect as SYSDBA and start the database, but only mount it, by typing:
SQL> STARTUP MOUNT
When the database is mounted, type:
RECOVER DATABASE USING BACKUP CONTROLFILE
Note: Oracle will respond to this command by returning the following message, suggesting a log
sequence to apply.
ORA-00279: Change 36579 generated at <time/date> needed for thread 1
ORA-00289: Suggestion : \Oracle_Home\Oradata\<SID>\%SID%T00036579.ARC
ORA-00280: {<RET>=Suggested | filename | AUTO | FROM logsource | CANCEL}
At the prompt, type:
AUTO
Then press <Enter>
This will automatically apply all archived log sequences required to recover the database
(assuming all archived redo logs are available in the location specified in the init.ora parameter
and that the format corresponds to the format specified).

89

It is possible that a final non-archived log sequence is requested to complete the recovery. This
will only hold one System Change Number (SCN) and no transactions relating to the database,
up to, and including the time of the FULL ONLINE Oracle backup. If this is the case, the
following message will be returned by Oracle:
ORA-00308: cannot open archived log
'E:\ORACLE\ORADATA\KIMSTAD\ARCHIVE\KIMSTADT00036949.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
To finish the recovery, stay in server manager with the database mounted, and type:
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE
Then press <Enter>
When Oracle requests this final sequence again, type:
CANCEL
Then press <Enter>
Oracle will return the following message:
Media recovery canceled
The media recovery of the database is complete.
To open the database and to synchronize the log sequence, type:
ALTER DATABASE OPEN RESETLOGS
Then press <Enter>
The Oracle database server is now restored to full working order up to the time of the latest full
online Oracle backup.

90

Recovery Manager (RMAN)


Oracle provides a tool for database backup and restore operations called RMAN.
Recovery Manager is a client/server application that uses database server sessions to perform
backup and recovery. It stores metadata about its operations in the control file of the target
database and, optionally, in a recovery catalog schema in an Oracle database.
Difference between RMAN and Traditional backup methods
RMAN is Oracle's backup and recovery utility. With RMAN, backups become as easy as:
BACKUP DATABASE;
RMAN reduces the complexity of backup and recovery. RMAN can determine what needs to be
backed up or restored.
Why Should we use RMAN
Ability to perform incremental backups.
Ability to recover one block of a datafile.
Ability to perform the backup and restore with parallelization.
Ability to automatically delete archived redo logs after they are backed up.
Ability to automatically backup the control file and the SPFILE.
Ability to restart a failed backup without having to start from the beginning.
Ability to verify the integrity of the backup.
Ability to test the restore process without having to actually perform the restore.
Comparison of RMAN Automated and User-Managed Procedures
With user-managed backup and recovery based on operating system commands, a DBA must
manually keep track of all database files and backups. RMAN performs these same tasks
automatically.
Understanding the RMAN Architecture
An RMAN environment comprises the RMAN executable (which can be invoked even from a
client machine), the target database (the database which needs to be backed up) and the
recovery catalog (the recovery catalog is optional; otherwise backup details are stored
in the target database control file).
About the RMAN Repository
The RMAN repository is a set of metadata that RMAN uses to store information about the target
database and its backup and recovery operations. RMAN stores information about:
Backup sets and pieces
Image copies (including archived redo logs)
Proxy copies
The target database schema
Persistent configuration settings
If you start RMAN without specifying either CATALOG or NOCATALOG on the command
line, then RMAN makes no connection to a repository. If you run a command that requires the

91

repository, and if no CONNECT CATALOG command has been issued yet, then RMAN
automatically connects in the default NOCATALOG mode. After that point, the CONNECT
CATALOG command is not valid in the session.
Types of Database Connections
You can connect to the following types of databases.
Target database
RMAN connects you to the target database with the SYSDBA privilege. If you do not
have this privilege, then the connection fails.
Recovery catalog database
This database is optional: you can also use RMAN with the default NOCATALOG option.
Auxiliary database
You can connect to a standby database, duplicate database, or auxiliary instance (standby
instance or tablespace point-in-time recovery instance).
Note that a SYSDBA privilege is not required when connecting to the recovery catalog. The
only requirement is that the RECOVERY_CATALOG_OWNER role be granted to the schema
owner.
Using Basic RMAN Commands
After you have learned how to connect to a target database, you can immediately begin
performing backup and recovery operations. Use the examples in this section to go through a
basic backup and restore scenario using a test database. These examples assume the following:
The test database is in ARCHIVELOG mode.
You are running in the default NOCATALOG mode.
The RMAN executable is running on the same host as the test database.
Connecting to the Target Database
rman TARGET /
If the database is already mounted or open, then RMAN displays output similar to the following:
Recovery Manager: Release 9.2.0.0.0
connected to target database: RMAN (DBID=1237603294)
Reporting the Current Schema of the Target Database
In this example, you generate a report describing the target datafiles. Run the report schema
command as follows:
RMAN> REPORT SCHEMA; (RMAN displays the datafiles currently in the target database.)
Backing Up the Database

92

In this task, you back up the database to the default disk location. Because you do not specify the
format parameter in this example, RMAN assigns the backup a unique filename.
You can make two basic types of backups: full and incremental.
Making a Full Backup
Run the backup command at the RMAN prompt as follows to make a full backup of the datafiles,
control file, and current server parameter file (if the instance is started with a server parameter
file) to the default device type:
RMAN> BACKUP DATABASE;
Making an Incremental Backup
Incremental backups are a convenient way to conserve storage space because they back up only
database blocks that have changed. RMAN compares the current datafiles to a base backup, also
called a level 0 backup, to determine which blocks to back up.
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
Backing Up Archived Logs
Typically, database administrators back up archived logs on disk to a third-party storage medium
such as tape. You can also back up archived logs to disk. In either case, you can delete the input
logs automatically after the backup completes. To back up all archived logs and delete the input
logs (from the primary archiving destination only), run the backup command at the RMAN
prompt as follows:
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;
Listing Backups and Copies
To list the backup sets and image copies that you have created, run the list command as follows:
RMAN> LIST BACKUP;
To list image copies, run the following command:
RMAN> LIST COPY;
Validating the Restore of a Backup
Check that you are able to restore the backups that you created without actually restoring them.
Run the RESTORE ... VALIDATE command as follows:
RMAN> RESTORE DATABASE VALIDATE;
Type of RMAN Backup Tutorial
Full Backups
A full backup reads the entire file and copies all blocks into the backup set, only skipping datafile
blocks that have never been used.
About Incremental Backups

93

RMAN can create backups containing only the blocks changed since a previous backup. You can
use RMAN to create incremental backups of datafiles, tablespaces, or the whole database.
How Incremental Backups Work
Each data block in a datafile contains a system change number (SCN), which is the SCN at which
the most recent change was made to the block. During an incremental backup, RMAN reads the
SCN of each data block in the input file and compares it to the checkpoint SCN of the parent
incremental backup. RMAN reads the entire file every time whether or not the blocks have been
used.
The parent backup is the backup that RMAN uses for comparing the SCNs. If the current
incremental is a differential backup at level n, then the parent is the most recent incremental of
level n or less. If the current incremental is a cumulative backup at level n, then the parent is the
most recent incremental of level n-1 or less. If the SCN in the input data block is greater than or
equal to the checkpoint SCN of the parent, then RMAN copies the block.
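The SCN comparison described above can be sketched in a few lines. This is an illustrative model of the rule, not RMAN code; the `blocks_to_back_up` function name and the block/SCN values in the test data are invented:

```python
# Model of the incremental-backup block selection rule: a block is copied
# when its SCN is greater than or equal to the checkpoint SCN of the
# parent backup. Block numbers and SCNs here are purely illustrative.
def blocks_to_back_up(block_scns, parent_checkpoint_scn):
    """block_scns: {block_number: scn_of_most_recent_change}.
    Returns the sorted block numbers that the incremental must copy."""
    return sorted(block for block, scn in block_scns.items()
                  if scn >= parent_checkpoint_scn)
```

So a block last changed at SCN 250 is copied by an incremental whose parent checkpointed at SCN 250, while a block last changed at SCN 100 is skipped.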
Multilevel Incremental Backups
RMAN can create multilevel incremental backups. Each incremental level is denoted by an
integer, for example, 0, 1, 2, and so forth. A level 0 incremental backup, which is the base for
subsequent incremental backups, copies all blocks containing data. The only difference between
a level 0 backup and a full backup is that a full backup is never included in an incremental
strategy.
If no level 0 backup exists when you run a level 1 or higher backup, RMAN makes a level 0
backup automatically to serve as the base.
The benefit of performing multilevel incremental backups is that RMAN does not back up all
blocks all of the time.
Differential Incremental Backups
In a differential level n incremental backup, RMAN backs up all blocks that have changed since
the most recent backup at level n or lower.
For example, in differential level 2 backups, RMAN determines which level 2 or level 1 backup
occurred most recently and backs up all blocks modified after that backup. If no level 1 is
available, RMAN copies all blocks changed since the base level 0 backup. If no level 0 backup is
available, RMAN makes a new base level 0 backup for this file.
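The differential parent-selection rule above can be sketched as follows. The `differential_parent` function name and the (tag, level) history format are invented for illustration:

```python
# Parent selection for a differential level-n incremental: the parent is
# the most recent backup at level n or lower. History is a list of
# (tag, level) tuples, oldest first; the data used is invented.
def differential_parent(history, n):
    for tag, level in reversed(history):
        if level <= n:
            return tag
    return None  # no eligible parent: a base level 0 would be taken
```

For example, with history [("sun", 0), ("mon", 2), ("tue", 2)], a new level 2 differential uses "tue" as its parent, while a level 1 differential reaches back to "sun".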
Case 1: if you want to implement an incremental backup strategy as a DBA in your organization:

Use Command for incremental Level Backup

94

RMAN> backup incremental level 0 database tag="SUNDAY";


RMAN> backup incremental level 3 database tag="MONDAY";
RMAN> backup incremental level 3 database tag="TUESDAY";
RMAN> backup incremental level 3 database tag="WEDNESDAY";
RMAN> backup incremental level 2 database tag="THURSDAY";
RMAN> backup incremental level 3 database tag="FRIDAY";
RMAN> backup incremental level 3 database tag="SATURDAY";
Backup Example (you can view your incremental backup details by using the following query)
select incremental_level, incremental_change#, checkpoint_change#, blocks from
v$backup_datafile;
Result of above Query:

INC_LEVEL  INC_CHANGE#  CHECKPOINT_CHANGE#  BLOCKS
        0            0              271365   59595
        3       271365              271369       2
        3       271369              271371       1
        3       271371              271374       2
        2       271365              271378       2
        3       271378              271380       1
        3       271380              271383       2

Cumulative Incremental Backups


RMAN provides an option to make cumulative incremental backups at level 1 or greater. In a
cumulative level n backup, RMAN backs up all the blocks used since the most recent backup at
level n-1 or lower.
For example, in cumulative level 2 backups, RMAN determines which level 1 backup occurred
most recently and copies all blocks changed since that backup. If no level 1 backups is available,
RMAN copies all blocks changed since the base level 0 backup.
Cumulative incremental backups reduce the work needed for a restore by ensuring that you only
need one incremental backup from any particular level. Cumulative backups require more space
and time than differential backups, however, because they duplicate the work done by previous
backups at the same level.
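The cumulative parent rule differs from the differential rule only in the cutoff level: n-1 instead of n. A sketch, using the same invented (tag, level) history format:

```python
# Parent selection for a cumulative level-n incremental: the parent is
# the most recent backup at level n-1 or lower, so every cumulative at
# the same level re-copies the work of its predecessors. Data is invented.
def cumulative_parent(history, n):
    for tag, level in reversed(history):
        if level <= n - 1:
            return tag
    return None  # no eligible parent: a base level 0 would be taken
```

With history [("base", 0), ("mon", 2), ("tue", 2)], a new level 2 cumulative always uses "base" as its parent, which is why restores need only one cumulative per level but each cumulative grows through the week.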
Case 2: if you want to implement a cumulative backup strategy as a DBA in your organization:

Use Command for Cumulative Level Backup


backup incremental level=0 database tag='base';
backup incremental level=2 cumulative database tag='monday';

95

backup incremental level=2 cumulative database tag='tuesday';


backup incremental level=2 cumulative database tag='wednesday';
backup incremental level=2 cumulative database tag='thursday';
backup incremental level=2 cumulative database tag='friday';
backup incremental level=2 cumulative database tag='saturday';
backup incremental level=1 cumulative database tag='weekly';
Incremental backup implementation
RMAN determines the incremental SCN for each datafile by finding the backup with the
highest checkpoint SCN that:
belongs to the incarnation of the datafile and matches the given file#
is an incremental backup/copy at level n or less if noncumulative, or at level n-1 or
less if cumulative
belongs to an available backup set or copy
Incremental Backup Strategy
You can implement a three-level backup scheme so that a full or level 0 backup is taken monthly,
a cumulative level 1 backup is taken weekly, and a cumulative level 2 is taken daily. In this
scheme, you never have to apply more than a day's worth of redo for complete recovery. When
deciding how often to take full or level 0 backups, a good rule of thumb is to take a new level 0
whenever 50% or more of the data has changed. If the rate of change to your database is
predictable, then you can observe the size of your incremental backups to determine when a new
level 0 is appropriate. The following query displays the number of blocks written to a backup set
for each datafile with at least 50% of its blocks backed up:
SELECT
FILE#,
INCREMENTAL_LEVEL,
COMPLETION_TIME,
BLOCKS,
DATAFILE_BLOCKS
FROM V$BACKUP_DATAFILE
WHERE INCREMENTAL_LEVEL > 0 AND BLOCKS / DATAFILE_BLOCKS > .5
ORDER BY COMPLETION_TIME;
Compare the number of blocks in differential or cumulative backups to a base level 0 backup.
For example, if you only create level 1 cumulative backups, then when the most recent level 1
backup is about half of the size of the base level 0 backup, take a new level 0.
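The 50% rule of thumb can be applied to rows fetched with the query above. This is an illustrative helper only: the `needs_new_level0` function name and the row data are invented, though the keys mirror the V$BACKUP_DATAFILE columns in the query:

```python
# Apply the 50% rule of thumb: a datafile whose incremental backup wrote
# more than half of its blocks is a candidate for a fresh level 0.
# Row values mimic V$BACKUP_DATAFILE columns but are invented.
def needs_new_level0(rows, threshold=0.5):
    """rows: dicts with FILE#, BLOCKS, DATAFILE_BLOCKS keys.
    Returns the FILE# values whose backed-up fraction exceeds threshold."""
    return [r["FILE#"] for r in rows
            if r["BLOCKS"] / r["DATAFILE_BLOCKS"] > threshold]
```

Feeding in the query results periodically (for example from a monitoring script) gives an early signal that the incremental strategy should be re-based.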
RMAN: RESTORE Concept
Use the RMAN RESTORE command to restore the following types of files from copies on disk
or backups on other media:
Database (all datafiles)
Tablespaces
Control files
Archived redo logs
Server parameter files
Process of Restore Operations
RMAN automates the procedure for restoring files. When you issue a RESTORE command,
RMAN restores the correct backups and copies to either:
The default location, overwriting the old files with the same name
A new location, which you can specify with the SET NEWNAME command
For example:

96

If you restore datafile 'C:\ORADATA\USER_DATA.DBF' to its default location, then RMAN restores
the file 'C:\ORADATA\USER_DATA.DBF' and overwrites any file that it finds with the same filename.
If you run a SET NEWNAME command before you restore a file, then RMAN creates a datafile
copy with the name that you specify. For example, assume that you run the following commands:
RUN
{
SET NEWNAME FOR DATAFILE 'C:\ORADATA\USER_DATA.DBF' TO 'D:\ORADATA\USER_DATA.DBF';
RESTORE DATAFILE 'C:\ORADATA\USER_DATA.DBF';
SWITCH DATAFILE 'C:\ORADATA\USER_DATA.DBF' TO DATAFILECOPY 'D:\ORADATA\USER_DATA.DBF';
}
In this case, RMAN creates a datafile copy of 'C:\ORADATA\USER_DATA.DBF' named
'D:\ORADATA\USER_DATA.DBF' and records it in the repository. To change the name for datafile
'C:\ORADATA\USER_DATA.DBF' to 'D:\ORADATA\USER_DATA.DBF' in the control file, run a SWITCH
command so that RMAN considers the restored file as the current database file.
RMAN Recovery: Basic Steps
If possible, make the recovery catalog available to perform the media recovery. If it is not
available, then RMAN uses metadata from the target database control file. Assuming that you
have backups of the datafiles and at least one autobackup of the control file.
The generic steps for media recovery using RMAN are as follows:
Place the database in the appropriate state: mounted or open. For example, mount the database
when performing whole database recovery, or open the database when performing online
tablespace recovery.
Restore the necessary files using the RESTORE command.
Recover the datafiles using the RECOVER command.
Place the database in its normal state.
Mechanism of Restore and Recovery operation:
The DBA runs the following commands:
RESTORE DATABASE;
RECOVER DATABASE;
The RMAN recovery catalog obtains its metadata from the target database control file. RMAN
decides which backup sets to restore, and which incremental backups and archived logs to use for
recovery. A server session on the target database instance performs the actual work of restore and
recovery.
Mechanics of Recovery: Incremental Backups and Redo Logs

97

RMAN does not need to apply incremental backups to a restored level 0 incremental backup: it
can also apply archived logs. RMAN simply restores the datafiles that it needs from available
backups and copies, and then applies incremental backups to the datafiles if it can; if
not, it applies archived redo logs.
How RMAN Searches for Archived Redo Logs During Recovery
If RMAN cannot find an incremental backup, then it looks in the repository for the names of
archived redo logs to use for recovery. Oracle records an archived log in the control file
whenever one of the following occurs:
The archiver process archives a redo log
RMAN restores an archived log
The RMAN COPY command copies a log
The RMAN CATALOG command catalogs a user-managed backup of an archived log
RMAN propagates archived log data into the recovery catalog during resynchronization,
classifying archived logs as image copies. You can view the log information through:
The LIST command
The V$ARCHIVED_LOG control file view
The RC_ARCHIVED_LOG recovery catalog view
During recovery, RMAN looks for the needed logs using the filenames specified in the
V$ARCHIVED_LOG view. If the logs were created in multiple destinations or were generated
by the COPY, CATALOG, or RESTORE commands, then multiple, identical copies of each log
sequence number exist on disk.
If the RMAN repository indicates that a log has been deleted or uncataloged, then RMAN ceases
to consider it as available for recovery. For example, assume that the database archives log 100 to
directories /dest1 and /dest2. The RMAN repository indicates that /dest1/log100.arc and
/dest2/log100.arc exist. If you delete /dest1/log100.arc with the DELETE command, then the
repository indicates that only /dest2/log100.arc is available for recovery.
If the RMAN repository indicates that no copies of a needed log sequence number exist on disk,
then RMAN looks in backups and restores archived redo logs as needed to perform the media
recovery. By default, RMAN restores the archived redo logs to the first local archiving
destination specified in the initialization parameter file. You can run the SET ARCHIVELOG
DESTINATION command to specify a different restore location. If you specify the DELETE
ARCHIVELOG option on RECOVER, then RMAN deletes the archived logs after restoring and
applying them. If you also specify MAXSIZE integer on the RECOVER command, then RMAN
staggers the restores so that they consume no more than integer amount of disk space at a time.
Incomplete Recovery

98

RMAN can perform either complete or incomplete recovery. You can specify a time, SCN, or log
sequence number as a limit for incomplete recovery with the SET UNTIL command or with an
UNTIL clause specified directly on the RESTORE and RECOVER commands. After
performing incomplete recovery, you must open the database with the RESETLOGS option.
Disaster Recovery with a Control File Autobackup
Assume that you lose both the target database and the recovery catalog. All that you have
remaining is a tape with RMAN backups of the target database and archived redo logs. Can you
still recover the database? Yes, assuming that you enabled the control file autobackup feature. In
a disaster recovery situation, RMAN can determine the name of a control file autobackup even
without a repository available. You can then restore this control file, mount the database, and
perform media recovery.
About Block Media Recovery
You can also use the RMAN BLOCKRECOVER command to perform block media recovery.
Block media recovery recovers an individual corrupt datablock or set of datablocks within a
datafile. In cases when a small number of blocks require media recovery, you can selectively
restore and recover damaged blocks rather than whole datafiles.
Note: Restrictions of block media recovery:

You can only perform block media recovery with Recovery Manager. No SQL*Plus
recovery interface is available.
You can only perform complete recovery of individual blocks. In other words, you cannot
stop recovery before all redo has been applied to the block.
You can only recover blocks marked media corrupt. The
V$DATABASE_BLOCK_CORRUPTION view indicates which blocks in a file were
marked corrupt since the most recent BACKUP, BACKUP ... VALIDATE, or COPY
command was run against the file.
You must have a full RMAN backup. Incremental backups are not allowed.
Blocks that are marked media corrupt are not accessible to users until recovery is
complete. Any attempt to use a block undergoing media recovery results in an error
message indicating that the block is media corrupt.

When Block Media Recovery Should Be Used


For example, you may discover the following messages in a user trace file:
ORA-01578: ORACLE data block corrupted (file # 7, block # 3)
ORA-01110: data file 7: '/oracle/oradata/trgt/tools01.dbf'
ORA-01578: ORACLE data block corrupted (file # 2, block # 235)
ORA-01110: data file 2: '/oracle/oradata/trgt/undotbs01.dbf'
You can then specify the corrupt blocks in the BLOCKRECOVER command as follows:
BLOCKRECOVER DATAFILE 7 BLOCK 3 DATAFILE 2 BLOCK 235;
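Since each BLOCKRECOVER clause maps directly onto one ORA-01578 message, building the command from a trace file can be automated. This is a sketch assuming the message layout shown above (real trace files may vary, and the `blockrecover_command` helper is invented for illustration):

```python
# Turn ORA-01578 corruption messages, as shown in the trace-file excerpt
# above, into a single BLOCKRECOVER command. The regex matches the
# "file # N, block # M" layout used in this section.
import re

def blockrecover_command(trace_text):
    pairs = re.findall(
        r"ORA-01578: ORACLE data block corrupted "
        r"\(file # (\d+), block # (\d+)\)",
        trace_text)
    if not pairs:
        return None  # nothing corrupt was reported
    clauses = " ".join(f"DATAFILE {f} BLOCK {b}" for f, b in pairs)
    return f"BLOCKRECOVER {clauses};"
```

Run against the two messages above, this reproduces the exact command shown in the text.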
Block Media Recovery When Redo Is Missing

99

Like datafile media recovery, block media recovery cannot survive a missing or inaccessible
archived log. Whereas datafile media recovery requires an unbroken series of redo changes from the
beginning of recovery to the end, block media recovery only requires an unbroken set of redo
changes for the blocks being recovered.
When RMAN first detects missing or corrupt redo records during block media recovery, it does
not immediately signal an error because the block undergoing recovery may become a newed
block later in the redo stream. When a block is newed all previous redo for that block becomes
irrelevant because the redo applies to an old incarnation of the block. For example, Oracle can
new a block when users delete all the rows recorded in the block or drop a table.
Deciding Whether to Use RMAN with a Recovery Catalog
By default, RMAN connects to the target database in NOCATALOG mode, meaning that it uses
the control file in the target database as the sole repository of RMAN metadata. Perhaps the most
important decision you make when using RMAN is whether to create a recovery catalog as the
RMAN repository for normal production operations. A recovery catalog is a schema created in a
separate database that contains metadata obtained from the target control file.
Benefits of Using the Recovery Catalog as the RMAN Repository
When you use a recovery catalog, RMAN can perform a wider variety of automated backup and
recovery functions than when you use the control file in the target database as the sole repository
of metadata.
The following features are available only with a catalog:

You can store metadata about multiple target databases in a single catalog.

You can store metadata about multiple incarnations of a single target database in the
catalog. Hence, you can restore backups from any incarnation.

By resynchronizing the recovery catalog at intervals less than the

You can report the target database schema at a noncurrent time.

You can store RMAN scripts in the recovery catalog.

When restoring and recovering to a time when the database files that exist in the database
are different from the files recorded in the mounted control file, the recovery catalog
specifies which files are needed. Without a catalog, you must first restore a control
file backup that lists the correct set of database files.

If the control file is lost and must be restored from backup, and if persistent
configurations have been made to automate the tape channel allocation, these
configurations are still available when the database is not mounted.

Costs of Using the Recovery Catalog as the RMAN Repository


The main cost of using a catalog is the maintenance overhead required for this additional
database.
For example, you have to:

Find a database other than the target database to store the recovery catalog (otherwise,
the benefits of maintaining the catalog are lost), or create a new database

Create enough space on the database for the RMAN metadata

Back up the recovery catalog metadata

Upgrade the recovery catalog when necessary


100

Types of Files That RMAN Can Back Up


The BACKUP command can back up the following types of files:
Database, which includes all datafiles as well as the current control file and current
server parameter file:

Tablespaces (except for locally-managed temporary tablespaces)

Current datafiles

Current control file

Archived redo logs

Current server parameter file

Backup sets

RMAN does not back up the following:

Online redo logs

Transported tablespaces before they have been made read/write

Client-side initialization parameter files or noncurrent server parameter files


How to Configure RMAN

RMAN can be invoked from the command line on the database host machine like so:
C:\>rman target sys/sys_password
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
Connected to target database: ORCL (DBID=1036216947)
RMAN> show all;
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
'e:\backup\ctl_sp_bak_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak'
MAXPIECESIZE 4G;
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak'
MAXPIECESIZE 4G;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default

CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFORCL.ORA'; # default
RMAN>
Retention Policy:
This instructs RMAN on which backups are eligible for deletion. For example: a retention policy
with redundancy 2 means that two backups - the latest and the one prior to that - should be
retained. All other backups are candidates for deletion.
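Backups that fall outside the retention policy can be listed, and then deleted, with:
RMAN> report obsolete;
RMAN> delete obsolete;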
Default Device Type:
This can be "disk" or "sbt" (system backup to tape). We will backup to disk and then have our
OS backup utility copy the completed backup, and other supporting files, to tape.
Controlfile Autobackup:
This can be set to "on" or "off". When set to "on", RMAN takes a backup of the controlfile AND
server parameter file each time a backup is performed. Note that "off" is the default.
Controlfile Autobackup Format:
This tells RMAN where the controlfile backup is to be stored. The "%F" in the file name
instructs RMAN to append the database identifier and backup timestamp to the backup filename.
The database identifier, or DBID, is a unique integer identifier for the database.
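If the database is open, the DBID can also be queried directly:
SQL> select dbid, name from v$database;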
Parallelism:
This tells RMAN how many server processes you want dedicated to performing the backups.
Device Type Format:
This specifies the location and name of the backup files. We need to specify the
format for each channel. The "%U" ensures that Oracle appends a unique identifier to the backup
file name. The MAXPIECESIZE attribute sets a maximum file size for each file in the backup
set.
Any of the above parameters can be changed using the commands displayed by the "show all"
command.
For example, one can turn off controlfile autobackups by issuing:
RMAN> configure controlfile autobackup off;
using target database controlfile instead of recovery catalog old RMAN configuration
parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP OFF;

new RMAN configuration parameters are successfully stored
RMAN>
Complete Steps for Using RMAN through Catalog
Recovery manager is a platform independent utility for coordinating your backup and restoration
procedures across multiple servers.
How to Create Recovery Catalog
First create a user to hold the recovery catalog:
CONNECT sys/password@w2k1 AS SYSDBA
Step 1 Create tablespace to hold repository
CREATE TABLESPACE "RMAN"
DATAFILE 'C:\ORACLE\ORADATA\W2K1\RMAN01.DBF' SIZE 6208K REUSE
AUTOEXTEND ON NEXT 64K MAXSIZE 32767M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
Step 2 Create rman schema owner
CREATE USER rman IDENTIFIED BY rman
TEMPORARY TABLESPACE temp
DEFAULT TABLESPACE rman
QUOTA UNLIMITED ON rman;
GRANT connect, resource, recovery_catalog_owner TO rman;
Step 3 then create the recovery catalog:
C:>rman catalog=rman/rman@w2k1
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
Connected to recovery catalog database
Recovery catalog is not installed
RMAN> create catalog tablespace "RMAN";
Recovery catalog created
RMAN> exit
Recovery Manager complete.
C:>
Step 4 Register Database
Each database to be backed up by RMAN must be registered:
C:>rman catalog=rman/rman@w2k1 target=sys/password@w2k2
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: W2K2 (DBID=1371963417)
connected to recovery catalog database
RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN>
Full Backup
First we configure several persistent parameters for this instance:
RMAN> configure retention policy to recovery window of 7 days;
RMAN> configure default device type to disk;
RMAN> configure controlfile autobackup on;
RMAN> configure channel device type disk format
'C:\Oracle\Admin\W2K2\Backup%d_DB_%u_%s_%p';
Next we perform a complete database backup using a single command:
RMAN> run
{backup database plus archivelog;
delete noprompt obsolete;
}
The recovery catalog should be resynchronized on a regular basis so that changes to the database
structure and the presence of new archive logs are recorded. Some commands perform partial and
full resyncs implicitly, but if you are in doubt you can perform a full resync using the following
command:
RMAN> resync catalog;


RMAN Recovery Case Study


Recovery from missing or corrupted datafile(s):
Case 1: Recovery from corrupted or missing datafile
This scenario deals with a situation where a datafile has gone missing, or is corrupted beyond
repair. For concreteness, we look at a case where a datafile is missing. Below is a transcript of an
SQL Plus session that attempts to open a database with a missing datafile (typed commands in
bold, lines in italics are my comments, all other lines are feedback from SQL Plus):
--open SQL Plus from the command line without
--logging on to database
C:\>sqlplus /nolog
SQL*Plus: Release 9.2.0.4.0 - Production on Tue Jan 25 14:52:41 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
--Connect to the idle Oracle process as a privileged user and start up instance
SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 131555128 bytes
Fixed Size 454456 bytes
Variable Size 88080384 bytes
Database Buffers 41943040 bytes
Redo Buffers 1077248 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF'

SQL>
The error message tells us that file# 4 is missing. Note that although the startup command has
failed, the database is in the mount state. Thus, the database control file, which is also the RMAN
repository can be accessed by the instance and by RMAN. We now recover the missing file using
RMAN. The transcript of the recovery session is reproduced below (bold lines are typed
commands, comments in italics, the rest is feedback from RMAN):
--logon to RMAN
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--restore missing datafile
RMAN> restore datafile 4;


Starting restore at 26/JAN/05


using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=14 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=15 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00004 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=D:\BACKUP\0QGB0UEC_1_1.BAK tag=TAG20050124T152708 params=NULL
channel ORA_DISK_1: restore complete
Finished restore at 26/JAN/05
--recover restored datafile - RMAN applies all logs automatically
RMAN> recover datafile 4;
Starting recover at 26/JAN/05 using channel ORA_DISK_1
using channel ORA_DISK_2
starting media recovery
archive log thread 1 sequence 4 is already on disk as file
E:\ORACLE_ARCHIVE\ORCL\1_4.ARC
archive log thread 1 sequence 5 is already on disk as file
C:\ORACLE_ARCHIVE\ORCL\1_5.ARC
archive log thread 1 sequence 6 is already on disk as file
E:\ORACLE_ARCHIVE\ORCL\1_6.ARC
archive log thread 1 sequence 7 is already on disk as file
E:\ORACLE_ARCHIVE\ORCL\1_7.ARC
archive log filename=E:\ORACLE_ARCHIVE\ORCL\1_4.ARC thread=1 sequence=4
archive log filename=C:\ORACLE_ARCHIVE\ORCL\1_5.ARC thread=1 sequence=5
media recovery complete
Finished recover at 26/JAN/05
--open database for general use
RMAN> alter database open;
database opened
RMAN>
In the above scenario, the database is already in the mount state before the RMAN session is
initiated. If the database is not mounted, you should issue a "startup mount" command before
attempting to restore the missing datafile. The database must be mounted before any datafile
recovery can be done.
If the database is already open when datafile corruption is detected, you can recover the datafile
without shutting down the database. The only additional step is to take the relevant tablespace
offline before starting recovery. In this case you would perform recovery at the tablespace level.
The commands are:
C:\>rman target /


Recovery Manager: Release 9.2.0.4.0 - Production


Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--offline affected Tablespace
RMAN> sql 'alter tablespace USERS offline immediate';
using target database controlfile instead of recovery catalog
sql statement: alter tablespace USERS offline immediate
--recover offlined Tablespace
RMAN> recover tablespace USERS;
Starting recover at 26/JAN/05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=14 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=12 devtype=DISK
starting media recovery
media recovery complete
Finished recover at 26/JAN/05
--online recovered Tablespace
RMAN> sql 'alter tablespace USERS online';
sql statement: alter tablespace USERS online
RMAN>
Here we have used the SQL command, which allows us to execute arbitrary SQL from within
RMAN.
Case 2: Recovery from block corruption
It is possible to recover corrupted blocks using RMAN backups. This is a somewhat exotic
scenario, but it can be useful in certain circumstances, as illustrated by the following example.
Here's the situation: a user connected to SQLPlus gets a data block corruption error when she
queries a table. Here's a part of the session transcript:
SQL> connect testuser/testpassword
Connected.
SQL> select count(*) from test_table;
select count(*) from test_table
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 2015)
ORA-01110: data file 4: 'D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF'


Since we know the file and block number, we can perform block level recovery using RMAN. This is best illustrated by
example:

C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--restore AND recover specific block
RMAN> blockrecover datafile 4 block 2015;
Starting blockrecover at 26/JAN/05
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=19 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=20 devtype=DISK
channel ORA_DISK_1: restoring block(s)
channel ORA_DISK_1: specifying block(s) to restore from backup set
restoring blocks of datafile 00004
channel ORA_DISK_1: restored block(s) from backup piece 1
piece handle=E:\BACKUP\0QGB0UEC_1_1.BAK tag=TAG20050124T152708 params=NULL
channel ORA_DISK_1: block restore complete
starting media recovery
media recovery complete
Finished blockrecover at 26/JAN/05
RMAN>
Now our user should be able to query the table from her SQLPlus session. Here's her session
transcript after block recovery.
SQL> select count(*) from test_table;
COUNT(*)
----------
217001
SQL>
A couple of important points regarding block recovery:
1. Block recovery can only be done using RMAN.
2. The entire database can be open while performing block recovery.
3. Check all database files for corruption. This is important - there could be other corrupted
blocks. Verification of database files can be done using RMAN or the dbverify utility. To verify
using RMAN simply do a complete database backup with default settings. If RMAN detects
block corruption, it will exit with an error message pointing out the guilty file/block.
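As a sketch, a single datafile can be checked with the dbverify utility (the file path and block
size shown are illustrative - use your own datafile path and database block size):
C:\>dbv file=D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF blocksize=8192
Alternatively, RMAN can scan the database for corruption without producing a backup:
RMAN> backup validate database;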


Recovery from missing or corrupted control file


Case 1: A multiplexed copy of the control file is available.
On startup Oracle must read the control file in order to find out where the datafiles and online
logs are located. Oracle expects to find control files at the locations specified in the
CONTROL_FILES initialization parameter. The instance will fail to mount the database if any one
of the control files is missing or corrupt. Here's an example:
SQL> startup
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
ORA-00205: error in identifying controlfile, check alert log for more info
SQL>
On checking the alert log, as suggested, we find the following:
ORA-00202: controlfile: 'e:\oracle_dup_dest\controlfile\ORCL\control02.ctl'
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 5447783)
The above corruption was introduced by manually editing the control file when the database was
closed.
The solution is simple, provided you have at least one uncorrupted control file - replace the
corrupted control file with a copy using operating system commands. Remember to rename the
copied file. The database should now start up without any problems.
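For example, assuming control02.ctl is the corrupted copy and an intact multiplexed copy exists
at the illustrative path below, the repair amounts to (with the database shut down):
C:\>copy D:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL01.CTL e:\oracle_dup_dest\controlfile\ORCL\control02.ctl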
Case 2: All control files lost
What if you lose all your control files? In that case you have no option but to use a backup
control file. The recovery needs to be performed from within RMAN, and requires that all logs
(archived and current online logs) since the last backup are available. The logs are required
because all datafiles must also be restored from backup. The database will then have to be
recovered up to the time the control files went missing. This can only be done if all intervening
logs are available. Here's an annotated transcript of a recovery session (as usual, lines in bold are
commands to be typed, lines in italics are explanatory comments, other lines are RMAN
feedback):
-- Connect to RMAN
C:\rman
Recovery Manager: Release 9.0.1.1.1 - Production
(c) Copyright 2001 Oracle Corporation. All rights reserved.
RMAN> set dbid 4102753520
executing command: SET DBID
set DBID - get this from the name of the controlfile autobackup. For example, if the autobackup
name is CTL_SP_BAK_C-1507972899-20050124-00, the DBID is 1507972899. This step is not
required if the database is mounted, since RMAN can then read the DBID from the control file.
RMAN> connect target sys/change_on_install
connected to target database: (not mounted)
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (not mounted)
RMAN> restore controlfile from autobackup;
Starting restore at 26/JAN/05
using channel ORA_DISK_1
channel ORA_DISK_1: restoring controlfile
channel ORA_DISK_1: restore complete
replicating controlfile
input filename=D:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL01.CTL
output filename=E:\ORACLE_DUP_DEST\CONTROLFILE\ORCL\CONTROL02.CTL
output filename=C:\ORACLE_DUP_DEST\CONTROLFILE\ORCL\CONTROL03.CTL
Finished restore at 26/JAN/05
-- Now that control files have been restored, the instance can mount the
-- database.
RMAN> mount database;
database mounted
-- All datafiles must be restored, since the controlfile is older than the current
-- datafiles. Datafile restore must be followed by recovery up to the current log.
RMAN> restore database;
Starting restore at 26/JAN/05
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to D:\ORACLE_DATA\DATAFILES\ORCL\SYSTEM01.DBF
restoring datafile 00004 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=E:\BACKUP\0DGB0I79_1_1.BAK tag=TAG20050124T115832 params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00002 to D:\ORACLE_DATA\DATAFILES\ORCL\UNDOTBS01.DBF
restoring datafile 00003 to D:\ORACLE_DATA\DATAFILES\ORCL\TOOLS01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=E:\BACKUP\0CGB0I78_1_1.BAK tag=TAG20050124T115832 params=NULL
channel ORA_DISK_1: restore complete
Finished restore at 26/JAN/05
--Database must be recovered because all datafiles have been restored from
-- backup


RMAN> recover database;


Starting recover at 26/JAN/05
using channel ORA_DISK_1
starting media recovery
archive log thread 1 sequence 2 is already on disk as file
E:\ORACLE_ARCHIVE\ORCL\1_2.ARC
archive log thread 1 sequence 4 is already on disk as file
D:\ORACLE_DATA\LOGS\ORCL\REDO02A.LOG
archive log thread 1 sequence 5 is already on disk as file
D:\ORACLE_DATA\LOGS\ORCL\REDO01A.LOG
archive log thread 1 sequence 6 is already on disk as file
D:\ORACLE_DATA\LOGS\ORCL\REDO03A.LOG
archive log filename=E:\ORACLE_ARCHIVE\ORCL\1_2.ARC thread=1 sequence=2
archive log filename=E:\ORACLE_ARCHIVE\ORCL\1_3.ARC thread=1 sequence=3
archive log filename=E:\ORACLE_DATA\LOGS\ORCL\REDO02A.LOG thread=1 sequence=4
archive log filename=E:\ORACLE_DATA\LOGS\ORCL\REDO01A.LOG thread=1 sequence=5
archive log filename=E:\ORACLE_DATA\LOGS\ORCL\REDO03A.LOG thread=1 sequence=6
media recovery complete
Finished recover at 26/JAN/05
-- Recovery completed. The database must be opened with RESETLOGS
-- because a backup control file was used. Can also use
-- "alter database open resetlogs" instead.
RMAN> open resetlogs database;
database opened
Several points are worth emphasizing.
1. Recovery using a backup controlfile should be done only if a current control file is
unavailable.
2. All datafiles must be restored from backup. This means the database will need to be recovered
using archived and online redo logs. These MUST be available for recovery until the time of
failure.
3. As with any database recovery involving RESETLOGS, take a fresh backup immediately.
4. Technically the above is an example of complete recovery - since all committed transactions
were recovered. However, some references consider this to be incomplete recovery because the
database log sequence had to be reset.
After recovery using a backup controlfile, all temporary files associated with locally-managed
tablespaces are no longer available. You can check that this is so by querying the view
V$TEMPFILE - no rows will be returned. Therefore tempfiles must be added (or recreated)
before the database is made available for general use. In the case at hand, the tempfile already
exists so we merely add it to the temporary tablespace. This can be done using SQLPlus or any
tool of your choice:
SQL> alter tablespace temp add tempfile
'D:\oracle_data\datafiles\ORCL\TEMP01.DBF';
Tablespace altered.
SQL>
Check that the file is available by querying v$TEMPFILE
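For example:
SQL> select name, status from v$tempfile;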
Recovery from missing or corrupted redo log group
Case 1: A multiplexed copy of the missing log is available.
If a redo log is missing, it should be restored from a multiplexed copy, if possible. Here's an
example, where I attempt to startup from SQLPLUS when a redo log is missing:
SQL> startup
ORACLE instance started.
Total System Global Area 131555128 bytes
Fixed Size 454456 bytes
Variable Size 88080384 bytes
Database Buffers 41943040 bytes
Redo Buffers 1077248 bytes
Database mounted.
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: 'D:\ORACLE_DATA\LOGS\ORCL\REDO03A.LOG'
SQL>
To fix this we simply copy REDO03A.LOG from its multiplexed location on E: to the above
location on D:
SQL> alter database open;
Database altered.
SQL>
That's it - the database is open for use.
Case 2: All members of a log group lost.
In this case an incomplete recovery is the best we can do. We will lose all transactions from the
missing log and all subsequent logs. We illustrate using the same example as above. The error
message indicates that members of log group 3 are missing. We don't have a copy of this file, so
we know that an incomplete recovery is required. The first step is to determine how much can be
recovered. In order to do this, we query the V$LOG view (when in the mount state) to find the
system change number (SCN) that we can recover to (Reminder: the SCN is a monotonically
increasing number that is incremented whenever a commit is issued)
--The database should be in the mount state for v$log access
SQL> select first_change# from v$log where group#=3;
FIRST_CHANGE#
-------------
370255
SQL>
The FIRST_CHANGE# is the first SCN stamped in the missing log. This implies that the last
SCN stamped in the previous log is 370254 (FIRST_CHANGE#-1). This is the highest SCN that
we can recover to. In order to do the recovery we must first restore ALL datafiles to this SCN,
followed by recovery (also up to this SCN). This is an incomplete recovery, so we must open the
database resetlogs after we're done. Here's a transcript of the recovery session (typed commands
in bold, comments in italics, all other lines are RMAN feedback):
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--Restore ENTIRE database to determined SCN
RMAN> restore database until scn 370254;
Starting restore at 26/JAN/05
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to D:\ORACLE_DATA\DATAFILES\ORCL\SYSTEM01.DBF
restoring datafile 00004 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF
channel ORA_DISK_2: starting datafile backupset restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
restoring datafile 00002 to D:\ORACLE_DATA\DATAFILES\ORCL\UNDOTBS01.DBF
restoring datafile 00003 to D:\ORACLE_DATA\DATAFILES\ORCL\TOOLS01.DBF
channel ORA_DISK_2: restored backup piece 1
piece handle=E:\BACKUP\13GB14IB_1_1.BAK tag=TAG20050124T171139 params=NUL
channel ORA_DISK_2: restore complete
channel ORA_DISK_1: restored backup piece 1
piece handle=E:\BACKUP\14GB14IB_1_1.BAK tag=TAG20050124T171139 params=NUL
channel ORA_DISK_1: restore complete
Finished restore at 26/JAN/05
--Recover database
RMAN> recover database until scn 370254;
Starting recover at 26/JAN/05
using channel ORA_DISK_1
using channel ORA_DISK_2
starting media recovery
archive log thread 1 sequence 9 is already on disk as file
E:\ORACLE_ARCHIVE\ORCL\1_9.ARC
archive log thread 1 sequence 10 is already on disk as file
E:\ORACLE_ARCHIVE\ORCL\1_10.ARC
archive log thread 1 sequence 11 is already on disk as file
E:\ORACLE_ARCHIVE\ORCL\1_11.ARC

archive log thread 1 sequence 12 is already on disk as file
E:\ORACLE_ARCHIVE\ORCL\1_12.ARC
archive log filename=E:\ORACLE_ARCHIVE\ORCL\1_9.ARC thread=1 sequence=9
archive log filename=E:\ORACLE_ARCHIVE\ORCL\1_10.ARC thread=1 sequence=10
media recovery complete
Finished recover at 26/JAN/05
--open database with RESETLOGS (see comments below)
RMAN> alter database open resetlogs;
database opened

RMAN>
The following points should be noted:
1. The entire database must be restored to the SCN that has been determined by querying v$log.
2. All changes beyond that SCN are lost. This method of recovery should be used only if you are
sure that you cannot do better. Be sure to multiplex your redo logs, and (space permitting) your
archived logs!
3. The database must be opened with RESETLOGS, as a required log has not been applied. This
resets the log sequence to zero, thereby rendering all prior backups worthless. Therefore, the first
step after opening a database RESETLOGS is to take a fresh backup. Note that the
RESETLOGS option must be used for any incomplete recovery.
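For instance, after the RESETLOGS open above, a fresh backup can be taken straight away:
RMAN> backup database plus archivelog;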
Disaster Recovery
Introduction:
This section deals with disaster recovery: a situation in which your database server has been
destroyed and has taken all your database
files (control files, logs and data files) with it. Obviously, recovery from a disaster of this nature
is dependent on what you have in terms of backups and hardware resources. We assume you
have the following available after the disaster:

A server with the same disk layout as the original.


The last full hot backup on tape.

With the above items at hand, it is possible to recover all data up to the last full backup. One
can do better if archive logs generated after the last backup are available. In our case these
aren't available, since our only archive destination was on the destroyed server. Oracle provides
methods to achieve better data protection. We will discuss some of these towards the end of the
section.
Now on with the task at hand, the high-level steps involved in disaster recovery are:

Build replacement server.

Restore backup from tape.

Install database software.

Create directory structure for database files.

Create Oracle service.

Restore and recover database.

Step: 1 Build the server


You need a server to host the database, so the first step is to acquire or build the new machine.
This is not strictly a DBA task, so we won't delve into details here. The main point to keep in
mind is that the replacement server should, as far as possible, be identical to the old one. In
particular, pay attention to the following areas:

Ideally the server should have the same number of disks as the original. The new disks
should also have enough space to hold all software and data that was on the original
server.

The operating system environment should be the same as the original, right up to service
pack and patch level.

The new server must have enough memory to cater to Oracle and operating system / other
software requirements. Oracle memory structures (Shared pool, db buffer caches etc) will
be sized identically to the original database instance. Use of the backup server parameter
file will ensure this.

Step: 2 Restore backup from tape


The next step is to get your backup from tape on to disk.
Step: 3 Install Oracle Software
The next step is to install Oracle software on the machine. The following points should be kept in
mind when installing the software:

Install the same version of Oracle as was on the destroyed server. The version number
should match right down to the patch level, so this may be a multi-step process involving
installation followed by the application of one or more patch sets and patches.

Do not create a new database at this stage.

Create a listener using the Network Configuration Assistant. Ensure that it has the same
name and listening ports as the original listener. Relevant listener configuration
information can be found in the backed up listener.ora file.

Step: 4 Create directory structure for database files

After software installation is completed, create all directories required for datafiles,
(online and archived) logs, control files and backups. All directory paths should match
those on the original server.

Don't worry if you do not know where the database files should be located. You can
obtain the required information from the backup spfile and control file at a later stage.
Continue reading - we'll come back to this later.

Step: 5 Create Oracle service


An Oracle service must exist before a database is created. The service is created using the
oradim utility, which must be run from the command line. The following commands show how
to create and modify a service (comments in italics, typed commands in bold):


--create a new service with auto startup


C:\>oradim -new -sid ORCL -intpwd ORCL -startmode a
Unfortunately oradim does not give any feedback, but you can check that the service exists via
the Services administrative panel. The service has been configured to start automatically when
the computer is powered up.
Step: 6 Restore and recover database
Now it is time to get down to the nuts and bolts of database recovery. There are several steps, so
we'll list them in order:

Copy PASSWORD and TNSNAMES file from backup: The backed up password file and
tnsnames.ora file should be copied from the backup directory to their proper locations. The
default locations for the password and tnsnames files are ORACLE_HOME\database and
ORACLE_HOME\network\admin respectively.

Set ORACLE_SID environment variable: ORACLE_SID should be set to the proper SID
name (ORCL in our case). This can be set either in the registry (registry key:
HKLM\Software\Oracle\HOME<X>\ORACLE_SID) or from the system applet in the
control panel
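From a command prompt the variable can also be set for the current session only, e.g.:
C:\>set ORACLE_SID=ORCL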

Invoke RMAN and set the DBID: We invoke rman and connect to the target database as
usual. No login credentials are required since we connect from an OS account belonging
to ORA_DBA. Note that RMAN accepts a connection to the database although the
database is yet to be recovered. RMAN doesn't as yet "know" which database we intend
to connect to. We therefore need to identify the (to be restored) database to RMAN. This
is done through the database identifier (DBID). The DBID can be figured out from the
name of the controlfile autobackup. Example: with the autobackup format configured
earlier ('ctl_sp_bak_%F'), your controlfile backup name will be something like
"CTL_SP_BAK_C-1507972899-20050228-00". In this case the DBID is 1507972899. Here's
a transcript illustrating the process of setting the DBID:

C:\>rman
Recovery Manager: Release 9.2.0.4.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
RMAN> set dbid 1507972899
executing command: SET DBID
RMAN>connect target /
connected to target database (not started)
RMAN>
Restore spfile from backup: To restore the spfile, you first need to startup the database in the
nomount state. This starts up the database using a dummy parameter file. After that you can
restore the spfile from the backup (which has been restored from tape). Finally you restart the
database in nomount state. Here is an example RMAN transcript for the foregoing procedure.
Note the difference in SGA size and components between the two startups:
RMAN> startup nomount

startup failed: ORA-01078: failure in processing system parameters


LRM-00109: could not open parameter file
'C:\ORACLE\ORA92\DATABASE\INITORCL.ORA'
trying to start the Oracle instance without parameter files ...
Oracle instance started
Total System Global Area 97590928 bytes
Fixed Size 454288 bytes
Variable Size 46137344 bytes
Database Buffers 50331648 bytes
Redo Buffers 667648 bytes
RMAN> restore spfile from 'e:\backup\CTL_SP_BAK_C-1507972899-20050228-00';
Starting restore at 01/MAR/05
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=9 devtype=DISK
channel ORA_DISK_1: autobackup found: e:\backup\CTL_SP_BAK_C-1507972899-20050228-00
channel ORA_DISK_1: SPFILE restore from autobackup complete
Finished restore at 01/MAR/05
RMAN> startup force nomount
Oracle instance started
Total System Global Area 1520937712 bytes
Fixed Size 457456 bytes
Variable Size 763363328 bytes
Database Buffers 754974720 bytes
Redo Buffers 2142208 bytes

RMAN>

The instance is now started up with the correct initialization parameters.


We are now in a position to determine the locations of control file and archive destination, as this
information sits in the spfile. This is done via SQL Plus as follows:
C:\>sqlplus /nolog
SQL>connect / as sysdba
Connected.
SQL> show parameter control_file
SQL> show parameter log_archive_dest
The directories listed in the CONTROL_FILES and LOG_ARCHIVE_DEST_N parameters should
be created at this stage if they haven't been created earlier.
Restore control file from backup: The instance now "knows" where the control files should be
restored, as this is listed in the CONTROL_FILES initialization parameter. Therefore, the next
step is to restore these files from backup. Once the control files are restored, the instance should
be restarted in mount mode. A restart is required because the instance must read the initialization
parameter file in order to determine the control file locations. At the end of this step RMAN also
has its proper configuration parameters, as these are stored in the control file.
Here is a RMAN session transcript showing the steps detailed here:
RMAN> restore controlfile from 'e:\backup\CTL_SP_BAK_C-1507972899-20050228-00';
Starting restore at 01/MAR/05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=13 devtype=DISK
channel ORA_DISK_1: restoring controlfile
channel ORA_DISK_1: restore complete
replicating controlfile
input filename=D:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL01.CTL
output filename=E:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL02.CTL
output filename=C:\ORACLE_DUP_DEST\CONTROLFILE\ORCL\CONTROL03.CTL
Finished restore at 01/MAR/05
RMAN> shutdown
Oracle instance shut down

RMAN> exit
Recovery Manager complete.
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database (not started)
RMAN>startup mount;
Oracle instance started
database mounted
Total System Global Area 1520937712 bytes
Fixed Size 457456 bytes
Variable Size 763363328 bytes
Database Buffers 754974720 bytes
Redo Buffers 2142208 bytes
RMAN> show all;
using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
'e:\backup\ctl_sp_bak_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;


CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default


CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak'
MAXPIECESIZE 4G;
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak'
MAXPIECESIZE 4G;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFORCL.ORA'; # default
RMAN>
At this stage we can determine the locations of data files and redo logs if we don't know where
they should go. This is done from SQL Plus as follows:
C:\>sqlplus /nolog
SQL>connect / as sysdba
Connected.
SQL>select name from v$datafile;
SQL>select member from v$logfile;
SQL>
The directories shown in the output should be created manually if this hasn't been done earlier.
Restore all datafiles: This is easy. Simply issue a "restore database" command from RMAN, and
it will do all the rest for you:
RMAN> restore database;
Starting restore at 01/MAR/05
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=8 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to D:\ORACLE_DATA\DATAFILES\ORCL\SYSTEM01.DBF
restoring datafile 00003 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF
restoring datafile 00004 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS02.DBF
channel ORA_DISK_2: starting datafile backupset restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
restoring datafile 00002 to D:\ORACLE_DATA\DATAFILES\ORCL\UNDOTBS01.DBF
restoring datafile 00005 to D:\ORACLE_DATA\DATAFILES\ORCL\TOOLS01.DBF
restoring datafile 00006 to D:\ORACLE_DATA\DATAFILES\ORCL\TOOLS02.DBF
channel ORA_DISK_2: restored backup piece 1
piece handle=E:\BACKUP\80G6E1TT_1_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restored backup piece 1
piece handle=E:\BACKUP\81G6E1TU_1_1.BAK tag=TAG20041130T222501 params=NULL


channel ORA_DISK_2: restored backup piece 2


piece handle=E:\BACKUP\80G6E1TT_2_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restored backup piece 2
piece handle=E:\BACKUP\81G6E1TU_2_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restored backup piece 3
piece handle=E:\BACKUP\81G6E1TU_3_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_2: restored backup piece 3
piece handle=E:\BACKUP\80G6E1TT_3_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_2: restore complete
Finished restore at 01/MAR/05
RMAN>
Recover database: The final step is to recover the database. Obviously recovery is dependent on
the available archived (and online) redo logs. Since we have lost our database server and have no
remote archive destination, we can recover only up to the time of the backup. Further, since this
is an incomplete recovery, we will have to open the database with resetlogs. Here's a sample
RMAN session illustrating this:
RMAN> recover database;
Starting recover at 01/MAR/05
using channel ORA_DISK_1
using channel ORA_DISK_2
starting media recovery
unable to find archive log
archive log thread=1 sequence=1388
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/01/2005 14:14:43
RMAN-06054: media recovery requesting unknown log: thread 1 scn 32230460
RMAN>alter database open resetlogs;
database opened
RMAN>
Note that RMAN automatically applies all available archive logs. It first applies the backed up
log and then searches for subsequent logs in the archive destination. This opens the door for
further recovery if the necessary logs are available. In our case, however, we have no more redo
so we open the database with resetlogs. The error message above simply indicates that RMAN
has searched, unsuccessfully, for subsequent logs.
That's it. The database has been recovered, from scratch, to the last available backup. Now
having done this, it is worth spending some time in discussing how one can do better - i.e.
recover up to a point beyond the backup. We do this in the next section.


Standby Database
Oracle Standby Database
This section is a guide to setting up standby databases for maximum protection from the
command line, avoiding the GUI. For this purpose Oracle9i has a feature called Data Guard,
and the following sections describe the tasks undertaken to set up primary and standby
servers on a pair of Windows servers.
Database PROD is replicated from the production server to the standby server via Data Guard.
Data Guard Operational Prerequisites:
The same Oracle software release must be used for both primary and standby databases. The
operating system running at the primary and standby locations must be the same, but the
operating system release need not be.
The primary database must run in ARCHIVELOG mode.
The hardware and operating system architecture at the primary and standby locations must be
the same.
Each primary and standby database must have its own control file.
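These prerequisites can be checked quickly from SQL*Plus on the primary. The queries below are a sketch using standard dictionary views:

```sql
-- Sanity checks before building the standby (run as SYSDBA on the primary).
SELECT log_mode FROM v$database;   -- must return ARCHIVELOG
SELECT banner FROM v$version;      -- Oracle release must match the standby's
```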
Architecture:
The Oracle9i Data Guard architecture incorporates the following items:
Primary Database - A production database that is used to create standby databases. The archive
logs from the primary database are transferred and applied to standby databases. Each standby
can only be associated with a single primary database, but a single primary database can be
associated with multiple standby databases.
Standby Database - A replica of the primary database.
Log Transport Services - Control the automatic transfer of archive redo log files from the
primary database to one or more standby destinations.
Network Configuration - The primary database is connected to one or more standby databases
using Oracle Net.
Log Apply Services - Apply the archived redo logs to the standby database. The Managed
Recovery Process (MRP) actually does the work of maintaining and applying the archived redo
logs.
Role Management Services - Control the changing of database roles from primary to standby.
The services include switchover, switchback and fail over.
The services required on the primary database are:
Log Writer Process (LGWR) - Collects redo information and updates the online redo logs. It can
also create local archived redo logs and transmit online redo to standby databases.


Archiver Process (ARCn) - One or more archiver processes make copies of online redo logs
either locally or remotely for standby databases.
Fetch Archive Log (FAL) Server - Services requests for archive redo logs from FAL clients
running on multiple standby databases. Multiple FAL servers can be run on a primary database,
one for each FAL request.
The services required on the standby database are:
Fetch Archive Log (FAL) Client - Pulls archived redo log files from the primary site. Initiates
transfer of archived redo logs when it detects a gap sequence.
Remote File Server (RFS) - Receives archived and/or standby redo logs from the primary
database.
Archiver (ARCn) Processes - Archive the standby redo logs applied by the managed recovery
process (MRP).
Managed Recovery Process (MRP) - Applies archive redo log information to the standby
database.
Step-by-Step Standby Database Configuration:
Step1: Configure listeners on the production server and the standby server.
TIPS: Create the listener on the standby server by using the Net Configuration Assistant.
TIPS: This guide assumes a listener named PROD is already configured on the primary node. If
no listener is configured on the primary node, create one by using the Net Configuration
Assistant on the primary server.
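If you prefer to edit the network files by hand rather than use the Net Configuration Assistant, a listener.ora entry along the following lines can be used on the standby server. This is a sketch only: the host name, Oracle home and SID here are hypothetical placeholders to adjust for your environment.

```
# Hypothetical listener.ora entry for the standby server.
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    )
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PROD)
      (ORACLE_HOME = C:\oracle\ora92)
    )
  )
```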
Step2: Configure TNSNAMES.ORA on the production server and the standby server. Add the
following TNSNAMES.ORA entries on both the production database and the standby database:


# Connection string for Primary Instance.
PROD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = Production IP)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = PROD)
)
)
# Connecting string for Standby Instance
STANDBY =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = STANDBY IP)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = PROD)
)
)


Step3: Put your production database in ARCHIVELOG mode if it is not already running in
ARCHIVELOG mode, and add the following entries to the init.ora file on the production server:
LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_DEST_1='LOCATION=C:\oracle\database\archive MANDATORY
REOPEN=30'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY REOPEN=300'
LOG_ARCHIVE_DEST_STATE_1=enable
LOG_ARCHIVE_DEST_STATE_2=enable
LOG_ARCHIVE_FORMAT=ARC%S.arc
REMOTE_ARCHIVE_ENABLE=true
STANDBY_FILE_MANAGEMENT=AUTO
STANDBY_ARCHIVE_DEST = 'C:\standby\archive'
# db_file_name_convert: do not need; same directory structure
# log_file_name_convert: do not need; same directory structure
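If the database is still in NOARCHIVELOG mode, it can be switched over from SQL*Plus roughly as follows. This is a sketch assuming a SYSDBA connection; on Oracle9i the LOG_ARCHIVE_START parameter above enables the automatic archiver at startup.

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST    -- should now report "Archive Mode"
```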
Step4: After adding the above entries to the init.ora file, copy it from the production server to
the standby server, into the Oracle_Home\Database folder.
Step5: Set up the same directory structure on both systems.
Step6: Place the production database in FORCE LOGGING mode by using the following statement:
SQL> alter database force logging;

Database altered.
Step7: Identify the primary database data files:
SQL> select name from v$datafile;
Step8: Make a copy of the production data files and redo log files by performing the following steps:
Shut down the primary database:
SQL> shutdown immediate
(If the database was not yet in ARCHIVELOG mode, enable archiving first and then shut it down.)
Copy the data files and redo log files to the standby location by using OS commands.
Note: The primary database must be shut down while copying the files.
Step9: Restart the production database:
SQL> startup;
Step10: Create a control file for the standby database. Issue the following command on the
production database:
SQL> Alter database create standby controlfile as 'c:\controlfile_standby.ctl';
Database altered.
Note: The filename for the newly created standby control file must be different from that of the
current control file of the production database. Also, the control file for the standby database
must be created after the last timestamp of the backed-up data files.
Step11: Create an init.ora file for the standby database.
Copy the init.ora file from the production server to the standby server, into the Database folder
in the Oracle home directory, and add the following entries:
LOG_ARCHIVE_START = TRUE
LOG_ARCHIVE_DEST_1 = 'LOCATION=c:\oracle\database\archive MANDATORY'
LOG_ARCHIVE_FORMAT = arch%s.arc
REMOTE_ARCHIVE_ENABLE = true
STANDBY_FILE_MANAGEMENT = AUTO
LOG_ARCHIVE_MIN_SUCCEED_DEST=1
STANDBY_ARCHIVE_DEST = 'C:\standby\archive'
fal_server = PROD
fal_client = STANDBY
# db_file_name_convert: do not need; same directory structure
# log_file_name_convert: do not need; same directory structure
Note: Although most of the initialization parameter settings in the text initialization parameter
file that you copied from the primary system are also appropriate for the physical standby
database, some modifications need to be made.
Edit the pfile created from the primary database as follows:
control_files - Specify the path name and filename for the standby control file.
standby_archive_dest - Specify the location of the archived redo logs that will be received from
the primary database.
db_file_name_convert - Specify the location of the primary database datafiles followed by the
standby location of the datafiles. This parameter will convert the filename of the primary
database datafiles to the filename of the standby datafile filenames. If the standby database is on
the same system as the primary database or if the directory structure where the datafiles are
located on the standby site is different from the primary site then this parameter is required.
log_file_name_convert - Specify the location of the primary database logs followed by the
standby location of the logs. This parameter will convert the filename of the primary database log
to the filenames of the standby log. If the standby database is on the same system as the primary
database or if the directory structure where the logs are located on the standby site is different
from the primary site then this parameter is required.
log_archive_dest_1 - Specify the location where the redo logs are to be archived on the standby
system. (If a switchover occurs and this instance becomes the primary database, then this
parameter will specify the location where the online redo logs will be archived.)
standby_file_management - Set to AUTO.
remote_archive_enable - Set to TRUE.
instance_name - If this parameter is defined, specify a different value for the standby database
than the primary database when the primary and standby databases reside on the same host.
lock_name_space - Specify the standby database instance name. Use this parameter when you
create the physical standby database on the same system as the primary database. Change the
INSTANCE_NAME parameter to a value other than its primary database value, and set this
LOCK_NAME_SPACE initialization parameter to the same value that you specified for the
standby database INSTANCE_NAME initialization parameter.
Also change the values of the parameters background_dump_dest, core_dump_dest and
user_dump_dest to specify locations appropriate to the standby database.
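Putting the notes above together, a standby-side pfile might contain entries along the following lines. All paths and names here are hypothetical and serve only to illustrate the parameters; the conversion parameters are shown even though they are unnecessary when the directory structures match.

```
# Hypothetical standby pfile entries illustrating the notes above.
control_files = ('C:\standby\control\standby01.ctl')
standby_archive_dest = 'C:\standby\archive'
log_archive_dest_1 = 'LOCATION=C:\standby\archive MANDATORY'
db_file_name_convert = ('D:\oracle_data\datafiles\orcl', 'C:\standby\datafiles\orcl')
log_file_name_convert = ('D:\oracle_data\logs\orcl', 'C:\standby\logs\orcl')
standby_file_management = AUTO
remote_archive_enable = TRUE
instance_name = PRODSB        # only if the standby shares the primary's host
lock_name_space = PRODSB      # only if the standby shares the primary's host
background_dump_dest = C:\standby\admin\bdump
core_dump_dest = C:\standby\admin\cdump
user_dump_dest = C:\standby\admin\udump
```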
Step12 : Create a Window service in Standby Server.
If standby database is running on windows system, then oradim utility is used to create windows
service. Issue following command from the command prompt window
C:\>oradim -new -sid PROD -intpwd PROD -startmode a
Step13: Start the physical standby database.
Start up the standby database using the following commands:
C:\>set oracle_sid=PROD
C:\>sqlplus /nolog
SQL> conn sys/prod as sysdba
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
SQL> alter database mount standby database;
Database altered.
Step14: Initiate log apply services. The example includes the DISCONNECT FROM SESSION
option so that log apply services run in a background session.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.
Step15: Now go to the production database prompt:
SQL> alter system switch logfile;
System altered.
Step16: Verify the standby database. On the standby database, query the V$ARCHIVED_LOG
view to verify that redo logs have been received.


SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
Result:
SEQUENCE# FIRST_TIME         NEXT_TIME
       14 25-APR-05 16:50:34 25-APR-05 16:50:42
       15 25-APR-05 16:50:42 25-APR-05 16:50:47
       16 25-APR-05 16:50:47 25-APR-05 16:51:52
Archive the current log on the primary database using following statement.
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
On standby database query the V$ARCHIVED_LOG view
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG
ORDER BY SEQUENCE#;
Result:
SEQUENCE# FIRST_TIME         NEXT_TIME
       14 25-APR-05 16:50:34 25-APR-05 16:50:42
       15 25-APR-05 16:50:42 25-APR-05 16:50:47
       16 25-APR-05 16:50:47 25-APR-05 16:51:52
       17 25-APR-05 16:51:52 25-APR-05 17:34:00
TIPS: Now connect as system/manager on the production database and create a table or insert a
row into any table.
Then connect as sys on the production database and execute the following SQL statement:
SQL> alter system switch logfile;
On standby database execute following SQL statements
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT
FROM SESSION;
Database altered.
SQL> recover managed standby database cancel;
Media recovery complete.
SQL> alter database open read only;
Database altered.
Then check whether the changes were applied on the standby database.
Step17: Query V$MANAGED_STANDBY.
Query the physical standby database to monitor log apply and log transport services activity at
the standby site.
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM
V$MANAGED_STANDBY;


Result:
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
MRP0 WAIT_FOR_LOG 1 4205 0 0
RFS RECEIVING 0 0 0 0
RFS RECEIVING 1 3524 2445 2445
RFS WRITING 1 4205 14947 20480
If we do the same query on the production database
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM
V$MANAGED_STANDBY;
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CLOSING 1 4203 2049 124
ARCH CLOSING 1 4204 1 1551
LGWR WRITING 1 4205 14947 1
From the query on the primary database, we see the current sequence being written to in the redo
log area is 4205, and on the standby database we also see the current archive log being applied is
for sequence 4205. In the directory that receives archive files on the standby database, the file
DWH0P01_0000004205.arc will exist and will be the same size as the redo log on the primary
database. However the primary database will not have DWH0P01_0000004205.arc as a file in
the archive area, as a log switch will not have occurred yet, but both databases are synchronized
at the same sequence and block number, 14947.
Step18: Log files to check on both systems.
On the production database, in the bdump directory, the alert log and the trace files generated by
lgwr and lnsx can be checked for any problems. On the standby database, in the bdump
directory, the alert log and the trace files generated by mrpx can be checked for any problems.
Standby Oracle Database by using RMAN
You can use the Recovery Manager DUPLICATE TARGET DATABASE FOR STANDBY
command to create a standby database.
RMAN automates the following steps of the creation procedure:
Restores the standby control file.
Restores the primary datafile backups and copies.
Optionally, RMAN recovers the standby database (after the control file has been mounted) up
to the specified time or to the latest archived redo log generated.
RMAN leaves the database mounted so that the user can activate it, place it in manual or
managed recovery mode, or open it in read-only mode.
After the standby database is created, RMAN can back up the standby database and archived
redo logs as part of your backup strategy. These standby backups are fully interchangeable with
primary backups. In other words, you can restore a backup of a standby datafile to the primary
database, and you can restore a backup of a primary datafile to the standby database.


Step-by-Step Standby Database Configuration (using RMAN):


Step1: Configure listeners on the production server and the standby server.
TIPS: Create the listener on the standby server by using the Net Configuration Assistant.
TIPS: This guide assumes a listener named PROD is already configured on the primary node.
If no listener is configured on the primary node, create one by using the Net Configuration
Assistant on the primary server.
Step2: Configure TNSNAMES.ORA on the production server and the standby server. Add the
following TNSNAMES.ORA entries on both the production database and the standby database:
# Connection string for Primary Instance.
PROD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = Production IP)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = PROD)
)
)
# Connecting string for Standby Instance
STANDBY =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = STANDBY IP)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = PROD)
)
)
Step3: Put your production database in ARCHIVELOG mode if it is not already running in
ARCHIVELOG mode, and add the following entries to the init.ora file on the production server:
LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_DEST_1='LOCATION=C:\oracle\database\archive MANDATORY
REOPEN=30'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY REOPEN=300'
LOG_ARCHIVE_DEST_STATE_1=enable
LOG_ARCHIVE_DEST_STATE_2=enable
LOG_ARCHIVE_FORMAT=ARC%S.arc
REMOTE_ARCHIVE_ENABLE=true
STANDBY_FILE_MANAGEMENT=AUTO
STANDBY_ARCHIVE_DEST = 'C:\standby\archive'
# db_file_name_convert: do not need; same directory structure
# log_file_name_convert: do not need; same directory structure
Step4: Configure RMAN on the production instance if it has not been configured earlier.


Step5: Take a full valid backup of the production instance:
RMAN> backup database plus archivelog;
Step6: Go to the standby machine and create a service for the standby instance.
Step7: Create a standby control file backup on the production machine:
RMAN> backup current controlfile for standby format='c:\rman_backup\stby_cfile.%U';
Step8: Record the last log sequence and back up the new archive logs:
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
--------------
           100
RMAN> backup archivelog all;
Step9: Make the RMAN backups available to the standby server.
Step10: Set up the same directory structure on both systems.
Step11: Create an init.ora file for the standby database. Copy the init.ora file from the
production server to the standby server, into the Database folder in the Oracle home directory,
and add the following entries:
LOG_ARCHIVE_START = TRUE
LOG_ARCHIVE_DEST_1 = 'LOCATION=c:\oracle\database\archive MANDATORY'
LOG_ARCHIVE_FORMAT = arch%s.arc
REMOTE_ARCHIVE_ENABLE = true
STANDBY_FILE_MANAGEMENT = AUTO
LOG_ARCHIVE_MIN_SUCCEED_DEST=1
STANDBY_ARCHIVE_DEST = 'C:\standby\archive'
fal_server = PROD
fal_client = STANDBY
# db_file_name_convert: do not need; same directory structure
# log_file_name_convert: do not need; same directory structure
Note: Although most of the initialization parameter settings in the text initialization parameter
file that you copied from the primary system are also appropriate for the physical standby
database, some modifications need to be made.
Step12: Start the physical standby database.
Start up the standby database using the following commands:


C:\>set oracle_sid=PROD
C:\>sqlplus /nolog
SQL> conn sys/prod as sysdba
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
Step13: Go to the standby server, connect with RMAN, and run the following:
CMD> rman target sys/change_on_install@prod_conn_string
RMAN > connect auxiliary sys/change_on_install
Step14: The following RUN block can be used to fully duplicate the target database from the
latest full backup. This will create the standby database:
run {
# Set the last log sequence number
set until sequence = 100 thread = 1;
# Allocate the channel for the duplicate work
allocate auxiliary channel ch1 type disk;
# Duplicate the target database as a standby
duplicate target database for standby dorecover nofilenamecheck ;
}
RMAN> exit
Step15: Put the standby database in managed recovery mode.
On the standby database, run the following:
C:\>sqlplus "/ as sysdba"
SQL> recover standby database;
SQL> alter database recover managed standby database disconnect;
Database altered.


Standby Database Maintenance


Cancel/Stop Managed Standby Recovery
While connected to the standby database, issue the following statements:

ALTER DATABASE SET STANDBY DATABASE UNPROTECTED;
RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;

The database can subsequently be switched back to recovery mode as follows:


Start-up managed recovery on standby database

CONNECT / AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database Switchover
A database can be in one of two mutually exclusive modes (primary or standby). These roles can
be altered at runtime without loss of data or resetting of redo logs. This process is known as a
Switchover and can be performed using the following statements:
While connected to the primary database, issue the following commands:

CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;
SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
DISCONNECT FROM SESSION;

Now the original Primary database is in Standby mode and waiting for the new Primary database
to activate, which is done while connected to the standby database (not the original primary):

CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
SHUTDOWN IMMEDIATE;
STARTUP

This process has no effect on alternative standby locations. The process of converting the
instances back to their original roles is known as a Switchback. The switchback is accomplished
by performing another switchover.


Database Fail Over


A graceful database failover occurs when a failover causes a standby database to be converted
to a primary database:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE ACTIVATE STANDBY DATABASE;

This process will recover all or some of the application data using the standby redo logs,
thereby avoiding reinstantiation of other standby databases. If completed successfully, only the
old primary database will need to be reinstantiated as a standby database.
Standby Diagnosis Query for Primary Node
Query 1: protection_level should match the protection_mode after the next log switch
select name,database_role role,log_mode, protection_mode,protection_level from v$database;
NAME  ROLE     LOG_MODE    PROTECTION_MODE      PROTECTION_LEVEL
TEST  PRIMARY  ARCHIVELOG  MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE
1 row selected.
Query 2: ARCHIVER can be (STOPPED | STARTED | FAILED). FAILED means that the
archiver failed to archive a log last time, but will try again within 5 minutes.
LOG_SWITCH_WAIT The ARCHIVE LOG/CLEAR LOG/CHECKPOINT event log switching
is waiting for. Note that if ALTER SYSTEM SWITCH LOGFILE is hung, but there is room in
the current online redo log, then the value is NULL.
select instance_name,host_name,version,archiver,log_switch_wait from v$instance;
INSTANCE_NAME  HOST_NAME    VERSION    ARCHIVE  LOG_SWITCH_
TEST           flex-suntdb  9.2.0.5.0  STARTED
1 row selected.
Query 3: Query gives us information about catpatch.


select version, modified, status from dba_registry where comp_id = 'CATPROC';
VERSION    MODIFIED              STATUS
9.2.0.5.0  19-NOV-2004 10:12:27  VALID
1 row selected.

Query 4: Force logging is not mandatory but is recommended. Supplemental logging must be
enabled if the standby associated with this primary is a logical standby. During normal operations
it is acceptable for SWITCHOVER_STATUS to be SESSIONS ACTIVE or TO STANDBY.


select force_logging,remote_archive,supplemental_log_data_pk,supplemental_log_data_ui,
switchover_status,dataguard_broker from v$database;
FORCE_LOGGING  REMOTE_ARCHIVE  SUP  SUP  SWITCHOVER_STATUS  DATAGUARD_BROKER
NO             ENABLED         NO   NO   SESSIONS ACTIVE    DISABLED
1 row selected.
Query 5: This query produces a list of all archive destinations. It shows if they are enabled, what
process is servicing that destination, if the destination is local or remote, and if remote what the
current mount ID is.
select dest_id "ID",destination,status,target,schedule,process,mountid mid from v$archive_dest
order by dest_id;
ID  DESTINATION         STATUS  TARGET   SCHEDULE  PROCESS  MID
 1  /applprod/archprod  VALID   PRIMARY  ACTIVE    ARCH       0
 2  STANDBY             VALID   STANDBY  ACTIVE    ARCH       0
........
........
10 rows selected.
Query 6: This select will give further detail on the destinations as to what options have been set.
Register indicates whether or not the archived redo log is registered in the remote destination
control file.
select dest_id "ID",archiver,transmit_mode,affirm,async_blocks async, net_timeout
net_time,delay_mins delay,reopen_secs reopen, register,binding from v$archive_dest order by
dest_id;
ID  ARCHIVER  TRANSMIT_MOD  AFF  ASYNC  NET_TIME  DELAY  REOPEN  REG  BINDING
 1  ARCH      SYNCHRONOUS   YES                             300  NO   MANDATORY
 2  ARCH      SYNCHRONOUS   YES                             300  NO   OPTIONAL
...
...
10 rows selected.
Query 7: The following select will show any errors that occurred the last time archiving to the
destination was attempted. If ERROR is blank and STATUS is VALID then the archive
completed correctly.


select dest_id,status,error from v$archive_dest;


DEST_ID  STATUS      ERROR
      1  VALID
      2  VALID
      3  INACTIVE
      4  .........
      5  ...........
10 rows selected.

Query 8: The query below will determine if any error conditions have been reached by querying
the v$dataguard_status view (view only available in 9.2.0 and above):
select message, timestamp from v$dataguard_status where severity in ('Error','Fatal') order by
timestamp;
no rows selected
Query 9: The following query will determine the current sequence number and the last sequence
archived. If you are remotely archiving using the LGWR process then the archived sequence
should be one higher than the current sequence. If remotely archiving using the ARCH process
then the archived sequence should be equal to the current sequence. The applied sequence
information is updated at log switch time.
select ads.dest_id,max(sequence#) "Current Sequence", max(log_sequence) "Last Archived"
from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads where
ad.dest_id=al.dest_id and al.dest_id=ads.dest_id group by ads.dest_id;
DEST_ID  Current Sequence  Last Archived
      1               233            233
      2               233            233
2 rows selected.
Query 10: The following select will attempt to gather as much information as possible from the
standby. SRLs are not supported with Logical Standby until Version 10.1
select dest_id id,database_mode db_mode,recovery_mode,
protection_mode,standby_logfile_count "SRLs", standby_logfile_active ACTIVE, archived_seq#
from v$archive_dest_status;
ID  DB_MODE          RECOVER  PROTECTION_MODE      SRLs  ACTIVE  ARCHIVED_SEQ#
 1  OPEN             IDLE     MAXIMUM PERFORMANCE     0       0            233
 2  MOUNTED-STANDBY  IDLE     MAXIMUM PERFORMANCE     0       0            233
...
10 rows selected.


Query 11: Query v$managed_standby to see the status of the processes involved in shipping
redo on this system. This does not include the processes needed to apply redo.
select process,status,client_process,sequence# from v$managed_standby;
PROCESS  STATUS   CLIENT_P  SEQUENCE#
ARCH     CLOSING  ARCH            233
ARCH     CLOSING  ARCH            232
2 rows selected.
Query 12: The following query is run on the primary to see if SRL's have been created in
preparation for switchover.
select group#,sequence#,bytes from v$standby_log;
no rows selected
Query 13: The SRLs above should match in number and size the ORLs returned below:
select group#,thread#,sequence#,bytes,archived,status from v$log;
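If Query 12 returns no rows, standby redo logs can be added on the primary in preparation for switchover. The statement below is a sketch: the group number, path and size are hypothetical, and the size should match the online redo logs reported by the v$log query above.

```sql
-- Hypothetical group number, path and size - match them to v$log.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('C:\oracle\database\stbyredo04.log') SIZE 100M;
```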

Standby Diagnosis Query for Standby Node


Query 1: ARCHIVER can be (STOPPED | STARTED | FAILED). FAILED means that the
archiver failed to archive a log last time, but will try again within 5 minutes.
LOG_SWITCH_WAIT The ARCHIVE LOG/CLEAR LOG/CHECKPOINT event log switching
is waiting for. Note that if ALTER SYSTEM SWITCH LOGFILE is hung, but there is room in
the current online redo log, then value is NULL
select instance_name,host_name,version,archiver,log_switch_wait from v$instance;
INSTANCE_NAME HOST_NAME VERSION ARCHIVE LOG_SWITCH_
TEST flex-sprod 9.2.0.5.0 STARTED
1 row selected.
Query 2: The following select will give us the generic information about how this standby is
setup. The database_role should be standby as that is what this script is intended to be run on. If
protection_level is different than protection_mode then for some reason the mode listed in
protection_mode experienced a need to downgrade. Once the error condition has been corrected
the protection_level should match the protection_mode after the next log switch.
select name,database_role,log_mode,controlfile_type,protection_mode,protection_level from
v$database;
Query 3: Force logging is not mandatory but is recommended. Supplemental logging should be
enabled on the standby if a logical standby is in the configuration. During normal operations it is
acceptable for SWITCHOVER_STATUS to be SESSIONS ACTIVE or NOT ALLOWED.
select
force_logging,remote_archive,supplemental_log_data_pk,supplemental_log_data_ui,
switchover_status,dataguard_broker from v$database;
FORCE_LOGGING  REMOTE_ARCHIVE  SUP  SUP  SWITCHOVER_STATUS  DATAGUARD_BROKER
NO             ENABLED         NO   NO   SESSIONS ACTIVE    DISABLED

1 row selected.


Query 4: This query produces a list of all archive destinations and shows if they are enabled,
what process is servicing that destination, if the destination is local or remote, and if remote what
the current mount ID is. For a physical standby we should have at least one remote destination
that points to the primary, but it should be deferred.
select dest_id "ID",destination,status,target,archiver,schedule,process,mountid from
v$archive_dest;

Query 5: If the protection mode of the standby is set to anything higher than max performance
then we need to make sure the remote destination that points to the primary is set with the correct
options else we will have issues during switchover.
select dest_id,process,transmit_mode,async_blocks,net_timeout,delay_mins,reopen_secs,register,binding from
v$archive_dest;

Query 6: The following select will show any errors that occurred the last time an attempt to
archive to the destination was attempted. If ERROR is blank and status is VALID then the
archive completed correctly.
select dest_id,status,error from v$archive_dest;
Query 7: Determine if any error conditions have been reached by querying the
v$dataguard_status view (view only available in 9.2.0 and above):
select message, timestamp from v$dataguard_status where severity in ('Error','Fatal') order by
timestamp;
Query 8: The following query is run to get the status of the SRLs on the standby. If the primary
is archiving with the LGWR process and SRLs are present (in the correct number and size) then
we should see a group# active.
select group#,sequence#,bytes,used,archived,status from v$standby_log;
Query 9: The above SRLs should match in number and in size with the ORLs returned below:
select group#,thread#,sequence#,bytes,archived,status from v$log;
Query 10: Query v$managed_standby to see the status of processes involved in the
configuration.
select process,status,client_process,sequence#,block#,active_agents,known_agents from
v$managed_standby;

Query 11: Verify the last sequence# received and the last sequence# applied on the standby
database.
select max(al.sequence#) "Last Seq Received", max(lh.sequence#) "Last Seq Applied" from
v$archived_log al, v$log_history lh;
Query 12: The V$ARCHIVE_GAP fixed view on a physical standby database only returns the
next gap that is currently blocking redo apply from continuing. After resolving the identified gap
and starting redo apply, query the V$ARCHIVE_GAP fixed view again on the physical standby
database to determine the next gap sequence, if there is one.
select * from v$archive_gap;
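Because v$archive_gap reports only the next blocking gap, resolving gaps is an iterative process. As an illustration of the underlying bookkeeping, this Python sketch computes every missing (low, high) range from a hypothetical list of received sequence numbers:

```python
def archive_gaps(received):
    """Given a sorted list of log sequence# values received on the standby,
    return the (low, high) ranges that are missing -- the same information
    v$archive_gap reports one gap at a time."""
    gaps = []
    for prev, cur in zip(received, received[1:]):
        if cur > prev + 1:                      # a hole between two sequences
            gaps.append((prev + 1, cur - 1))
    return gaps

print(archive_gaps([230, 231, 235, 236, 240]))  # [(232, 234), (237, 239)]
```

Each reported range corresponds to archived logs that must be registered on the standby before redo apply can continue past the gap.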


Oracle Database Up-gradation

Why upgrade an Oracle database to a higher version


Oracle released the 10g version about a year ago and recently released 11g. Many companies are
still using the 8i (8.1.x), 8 (8.0.x) and 7.x database versions. For the most part, everything that is
available in a lower version will be available in a higher version as well. To give you an idea of
what is "new", consider this: Oracle 8i introduced many new features for the developer. With 8i,
you could run Java in the database, you had expanded tools to help with object-oriented
development, and 8i introduced some enhancements to support larger databases (materialized
views, additions to partitioning). Oracle 9i introduced many new features to help the DBA, such
as the ability to change database configuration "on the fly", enhanced availability and enhanced
manageability. The advantage of a higher version is that you have more features and better
capabilities. You also stay current with the latest "supported" versions. The disadvantage of these
new systems is that you have to convert your older databases to the newer versions. This can
sometimes cause application changes as well. The advantage of staying at a lower version is that
you know it works and you don't have to change a thing. The disadvantage is that you can't use
any of the latest and greatest features and that you may lose support.
Role of the Database Administrator during the Upgrade
The database administrator (DBA) is the most important person in the upgrade process. He is
involved in each step, except application testing.
The DBA handles the following tasks for a database upgrade:

Analyze current database status and make a plan for upgrade process.
Meeting with everyone involved in the upgrade process and clearly defining their roles
Performing test upgrades
Scheduling the test and production upgrades
Performing backups of the production database
Completing the upgrade of the production database
Performing backups of the newly upgraded Oracle Database production database

Different options for the upgrade method: differences and process


We can use three methods to upgrade our oracle database:

Export/Import
DBUA
Manually by using Scripts

Export/Import
Export/Import utilities only physically copy data from current database to a new database. The
current database's Export utility copies specified parts of the database into an export dump file.
Then, the Import utility of the new Oracle Database loads the exported data into a new database.
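As an illustration of this method, the sketch below assembles typical exp/imp command lines. The SIDs, owner, credentials and file names are hypothetical placeholders, not values from the book:

```python
def exp_imp_commands(source_sid, target_sid, owner, dumpfile="full.dmp"):
    """Build the classic export and import command lines for one schema.
    All connect strings and names here are illustrative assumptions."""
    exp = (f"exp system/manager@{source_sid} "
           f"owner={owner} file={dumpfile} log=exp_{owner}.log")
    imp = (f"imp system/manager@{target_sid} "
           f"fromuser={owner} touser={owner} file={dumpfile} log=imp_{owner}.log")
    return exp, imp

exp_cmd, imp_cmd = exp_imp_commands("ORA8I", "ORA92", "SCOTT")
print(exp_cmd)
print(imp_cmd)
```

The export runs against the old database, the import against the newly created one; the dump file is the only artifact carried between them.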


Database Upgrade Assistant


The Database Upgrade Assistant (DBUA) utility configures the database for the new Oracle
Database. The Database Upgrade Assistant automates the upgrade process by performing all of
the tasks normally performed manually.
The Database Upgrade Assistant performs the following pre-upgrade steps:

It checks for any invalid user accounts or roles


It checks for any invalid data types
It checks for any desupported character sets
It checks for adequate resources, including rollback segments, tablespaces, and free disk
space
It optionally backs up all necessary files

The Database Upgrade Assistant does not begin the upgrade until it completes all of the pre-upgrade steps.
The Database Upgrade Assistant automatically modifies or creates new required tablespaces,
invokes the appropriate upgrade scripts, archives the redo logs, and disables archiving during the
upgrade phase.
While the upgrade is running, the Database Upgrade Assistant shows the upgrade progress for
each component. The Database Upgrade Assistant writes detailed trace and log files and
produces a complete HTML report for later reference. To enhance security, the Database
Upgrade Assistant automatically locks new user accounts in the upgraded database. The
Database Upgrade Assistant then proceeds to create new configuration files (parameter and
listener files) in the new Oracle home.
Manual Upgrade
A manual upgrade consists of running SQL scripts and utilities from a command line to upgrade
a database to the new Oracle Database release.
While a manual upgrade gives you finer control over the upgrade process, it is more susceptible
to error if any of the upgrade or pre-upgrade steps are either not followed or are performed out of
order.
When manually upgrading a database, you must perform the following pre-upgrade steps:
Analyze the database using the Pre-Upgrade Information Tool. The Upgrade Information Tool is
a SQL script that ships with the new Oracle Database 10g release, and must be run in the
environment of the database being upgraded.
The Upgrade Information Tool displays warnings about possible upgrade issues with the
database. It also displays information about required initialization parameters for the new Oracle
Database 10g release. Before starting up the new Oracle Database 10g release, make the
necessary adjustments to the database.

Perform a backup of the database.


Add free space to any tablespaces in the database that require additional space, and drop
and re-create any redo log files whose size is insufficient for the upgrade.

Adjust the parameter file for the upgrade, removing obsolete initialization parameters and
adjusting initialization parameters that might cause upgrade problems.

Depending on the release of the database being upgraded, you may need to perform
additional pre-upgrade steps.

After the Upgrade


View the status of the upgrade using the Post-Upgrade Status Tool. The Upgrade Status Tool is a
SQL script that ships with the new Oracle Database 10g release, and must be run in the
environment of the new Oracle Database 10g release.


The figure below shows the Oracle database up-gradation steps

Difference between Up-gradation and Migration


There is a difference between upgrade and migration. If you are simply moving a
software/database to a newer version, that is called an upgrade, for example upgrading an 8i
database to 9i or 10g. If there is a change in technology, such as client/server based to Web
based, a different platform (moving a database from Windows to a UNIX-based system or vice
versa), or a different vendor (moving a SQL Server database to Oracle), it is called migration.
Oracle Release Numbers
The figure below describes what each part of a release number represents.


The release number 10.1.0.1.0 is displayed. The significance of each number (reading from left
to right) is shown in the following table:
Number  Significance
10      Major database release number
1       Database maintenance release number
0       Application server release number
1       Component specific release number
0       Platform specific release number
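The numbering scheme above can be captured in a small helper. This sketch splits a release string into the five documented parts:

```python
def parse_release(release):
    """Split an Oracle release string into its five documented parts,
    left to right: major, maintenance, appserver, component, platform."""
    labels = ["major", "maintenance", "appserver", "component", "platform"]
    return dict(zip(labels, (int(p) for p in release.split("."))))

print(parse_release("10.1.0.1.0"))
# {'major': 10, 'maintenance': 1, 'appserver': 0, 'component': 1, 'platform': 0}
```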

Upgrade Path
Upgrade path for Oracle database 11g release 1
Direct Upgrade Path
Source Database         Target Database
9.2.0.4.0 (or higher)   11.1.x
10.1.0.2.0 (or higher)  11.1.x
10.2.0.1.0 (or higher)  11.1.x

Indirect Upgrade Path
Source Database       Intermediate Upgrade Path  Target Database
7.3.3.0.0 (or lower)  7.3.4.x > 9.2.0.8          11.1.x
8.0.5.0.0 (or lower)  8.0.6.x > 9.2.0.8          11.1.x
8.1.7.0.0 (or lower)  8.1.7.4 > 9.2.0.8          11.1.x
9.0.1.3.0 (or lower)  9.0.1.4 > 9.2.0.8          11.1.x

Upgrade path for Oracle Database 10g Release 2

Direct Upgrade Path
Source Database       Target Database
8.1.7.4 (or higher)   10.2.x
9.0.1.4 (or higher)   10.2.x
9.2.0.4 (or higher)   10.2.x
10.1.0.2 (or higher)  10.2.x

Indirect Upgrade Path
Source Database       Intermediate Upgrade Path  Target Database
7.3.3.0.0 (or lower)  7.3.4 -> 8.1.7.4           10.2.x
7.3.4                 8.1.7.4                    10.2.x
8.0.n                 8.1.7.4                    10.2.x
8.1.n                 8.1.7.4                    10.2.x

Converting 32-bit Oracle Databases to 64-bit Oracle Database


If you are using a 32-bit Oracle database on any 32-bit/64-bit platform, then you can convert or
migrate the 32-bit database to a 64-bit database. I will describe two scenarios.
First scenario: Suppose you are using a 32-bit database (9iR2) on any platform and you want to
convert your 32-bit database (9iR2) to a 64-bit database.
Second scenario: Suppose you are using a 32-bit database (9iR2) and want to upgrade your
current 32-bit database (9iR2) to a 64-bit higher version database like 10g or 11g.
In the second scenario, the database will automatically be converted to 64-bit during the upgrade
to the higher Oracle database version.
Here we describe only the first scenario in this section.
Project
Convert / Migrate a 32-bit Oracle 8.0.x, Oracle 8i, Oracle 9i, Oracle 10g and oracle 11g Database
to 64-bit.
Step 1

Startup SQL*PLUS, connect with 32-bit database instance AS SYSDBA and shutdown database
by using SHUTDOWN IMMEDIATE command.
Step 2

Take a full cold backup of the database.


Step 3

Install the 64-bit version of the same Oracle software release in a different ORACLE_HOME.
Step 4

Copy the initialization parameter files, e.g. initSID.ora and spfileSID.ora, from the old
ORACLE_HOME to the new 64-bit ORACLE_HOME.
Step 5

Change your environment to point at the new 64-bit ORACLE_HOME.


Step 6

Add the following parameters to the new 64-bit initialization parameter file.


aq_tm_processes=0
job_queue_processes=0
_system_trig_enabled= false
Step 7

Start SQL*Plus and connect to the 64-bit database instance AS SYSDBA.


Step 8


If you are working with an Oracle 8.0.x, Oracle8i or Oracle9i 9.0.x database, run STARTUP
RESTRICT:
SQL> STARTUP RESTRICT
If you are working with an Oracle9i 9.2.0.x database, run STARTUP MIGRATE:
SQL> STARTUP MIGRATE
If you are working with an Oracle10g or Oracle11g database, run STARTUP UPGRADE:
SQL> STARTUP UPGRADE
Step 9

Set the system to spool results to a log file for later verification of success:
SQL> SPOOL catoutw.log
Step 10

Run utlirp.sql:
SQL> @$ORACLE_HOME/rdbms/admin/utlirp.sql
This script recompiles existing PL/SQL modules in the format required by the new database.
This script first alters certain dictionary tables. Then, it reloads package STANDARD and
DBMS_STANDARD, which are necessary for using PL/SQL.
Optional Steps:
If the patchset level is not being changed (for example, you are migrating a 9.2.0.8 32-bit
database to 9.2.0.8 64-bit), then the optional steps are not needed.
If the patchset level is being changed, then you need to run the optional steps.
If you are working with an Oracle 8.0, Oracle8i or Oracle9i 9.0.x database, run the following
script:
SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
If you are working with an Oracle9i 9.2.0.x database, run the following script:
SQL> @$ORACLE_HOME/rdbms/admin/catpatch.sql
If you are migrating an Oracle10g 10.1.0.x or 10.2.0.x database, run the following script:


SQL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql
Step 11

Run utlrp.sql:
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
This script recompiles all invalid objects.
TIPS:
If you are using the same machine for converting 32-bit to 64-bit, you only create a new
ORACLE_HOME for the 64-bit Oracle software and use the same physical database structure.
If you are using a different machine for converting 32-bit to 64-bit, you install the 64-bit Oracle
software on the different machine and clone your 32-bit database onto the new machine.
If you are using a UNIX-based OS and want to use a different machine for converting 32-bit to
64-bit, it is better to create the same database file structure and restore from the old box to the
new box.
Moving From the Standard Edition to the Enterprise Edition and Vice Versa
If you are using a Standard Edition database (release prior to 11gR1), then you can change it to
an Enterprise Edition database.
Step 1
The Standard Edition database software version should be the same as the Enterprise Edition
database software version.
Step 2
Shutdown the database
Step 3
Shut down all your Oracle services, including the Oracle database
Step 4
De-install the Standard Edition oracle software
Step 5
Install the Enterprise Edition server software using the Oracle Universal Installer.
Step 6
Select the same Oracle home that was used for the de-installed Standard Edition. During the
installation, be sure to select the Enterprise Edition. When prompted, choose Software only from
the Database Configuration screen.
Step 7
Start up your database.
Your database is now upgraded to the Enterprise Edition.


Tips:
1. You can only convert a Standard Edition database to the Enterprise Edition database by
using the above method.
2. If you want to convert from an Enterprise Edition database to a Standard Edition
database, you must use an Export/Import operation. Without Export/Import you cannot
convert.
Inside Story:
1. The Enterprise Edition contains data dictionary objects which are not available in the
Standard Edition. If you just install the Standard Edition software, then you will end up
with data dictionary objects which are useless. Some of them might be invalid and
possibly create problems when maintaining the database.
2. The Export/Import operation does not introduce data dictionary objects specific to the
Enterprise Edition, because the SYS schema objects are not exported. Oracle recommends
using the Standard Edition EXP utility to export the data.
3. After the Import in the Standard Edition database, you are only required to drop all user
schemas related to Enterprise Edition features, such as the MDSYS account used with
Oracle Spatial.


Upgrade Project
Project A
Upgrade Oracle Database from Version 8.1.6 to 8.1.7

Here I describe the manual method to upgrade the Oracle database.


Step 1 (Prepare the Existing Database to Be Upgraded)
1.1. Make sure the DB_DOMAIN initialization parameter in your initialization parameter file is
set to one of the following:

.WORLD
A valid domain setting for your environment

1.2. Make sure the _SYSTEM_TRIG_ENABLED initialization parameter is set to FALSE in the
initialization parameter file. If this initialization parameter is not currently set, then explicitly set
it to FALSE:

_SYSTEM_TRIG_ENABLED = FALSE

1.3. Check free space in the SYSTEM and rollback segment tablespaces


Upgrading to a new release requires more space in your SYSTEM tablespace where you store
rollback segments. In general, you need at least 20 MB of free space in your SYSTEM tablespace
to upgrade.
Query for checking the free space.
clear buffer
clear columns
clear breaks
set linesize 500
set pagesize 5000
column a1 heading 'Tablespace' format a15
column a2 heading 'Data File' format a45
column a3 heading 'TotalSpace [MB]' format 99999.99
column a4 heading 'FreeSpace [MB]' format 99999.99
column a5 heading 'Free%' format 9999.99
break on a1 on report
compute sum of a3 on a1
compute sum of a4 on a1
compute sum of a3 on report
compute sum of a4 on report
SELECT a.tablespace_name a1, a.file_name a2, a.avail a3, NVL(b.free,0) a4,
NVL(ROUND(((free/avail)*100),2),0) a5
FROM
(SELECT tablespace_name, SUBSTR(file_name,1,45) file_name, file_id,
ROUND(SUM(bytes/(1024*1024)),3) avail
FROM sys.dba_data_files GROUP BY tablespace_name, SUBSTR(file_name,1,45), file_id) a,
(SELECT tablespace_name, file_id, ROUND(SUM(bytes/(1024*1024)),3) free
FROM sys.dba_free_space GROUP BY tablespace_name, file_id) b
WHERE a.file_id = b.file_id (+)
ORDER BY 1, 2;
If you need to add more space in your system tablespace, execute following steps:
How to add more space to the SYSTEM tablespace?
ALTER TABLESPACE system
ADD DATAFILE ''
SIZE 16M
AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;
1.4. Run SHUTDOWN IMMEDIATE on the database and backup your database.
SQL> SHUTDOWN IMMEDIATE
Step 2 (If you are using the Windows platform, stop and delete the services)

2.1. Stop the OracleServiceSID Oracle service of the Oracle 8.1.6 database, if you are using
Windows:
C:\> NET STOP OracleServiceSID
2.2. Delete the OracleServiceSID at the command line of the 8.1.6 Home, if you are using
Windows:
C:\> ORADIM -DELETE -SID <SID>
Step 3 Install the Oracle 8.1.7 database software only, in a new Oracle home.
Step 4 Create the new Oracle database 8.1.7 service at the command prompt using the following
command, if you are using the Windows platform:
C:\> ORADIM -NEW -SID <SID> -INTPWD <password> -STARTMODE A
Step 5 Put your init file from the 8.1.6 Oracle home into the 8.1.7 Oracle home default location
and adjust the initialization parameter file for use with the new 8.1.7 release.

db_domain = .WORLD
optimizer_mode = choose
job_queue_processes = 0
aq_tm_processes = 0

Step 6 Connect to the new Oracle 8.1.7 instance as a user with SYSDBA privilege and issue the
following command:
SQL>STARTUP RESTRICT


You don't need to use the PFILE option to specify the location of your initialization parameter
file in our case, because we are using the INIT file in the default location (which resides in the
8.1.7 home). We have just put the init file at the new Oracle 8.1.7 home from 8.1.6.
Step 7 Execute following scripts:
SPOOL c:\revoke_restricted_session.log;
SELECT 'REVOKE restricted session FROM '||username||';' FROM dba_users
WHERE username NOT IN ('SYS','SYSTEM');
SPOOL OFF;
Step 8 Run Spool File:
@c:\revoke_restricted_session.log
Step 9 Enable Restricted Session
SQL>ALTER SYSTEM ENABLE RESTRICTED SESSION;
Step 10 Run the migration script, which resides in the 8.1.7 oracle_home/rdbms/admin location
SPOOL catoutu.log
SET ECHO ON
@u0801060.sql # Script for 8.1.6 -> 8.1.7
SET ECHO OFF
SPOOL OFF
Step 11 ALTER SYSTEM DISABLE RESTRICTED SESSION;
Step 12 SHUTDOWN IMMEDIATE
NOTE:
This script creates and alters certain dictionary tables. It also runs the catalog.sql and catproc.sql
scripts that come with the release to which you are upgrading, which create the system catalog
views and all the necessary packages for using PL/SQL.
Step 13: (Post migration Steps)
13.1. Start up the database and execute the additional scripts:
# Run all sql scripts for replication option
@$ORACLE_HOME/rdbms/admin/catrep.sql
# Collect I/O per table (actually object) statistics by statistical sampling


@$ORACLE_HOME/rdbms/admin/catio.sql
# This package creates a table into which references to the chained rows for an IOT (Index-Only
Table) can be placed using the ANALYZE command.
@$ORACLE_HOME/rdbms/admin/dbmsiotc.sql
# Wrap package which creates IOTs (Index-Only Tables)
@$ORACLE_HOME/rdbms/admin/prvtiotc.plb
# This package allows you to display the sizes of objects in the shared pool, and mark them for
keeping or unkeeping in order to reduce memory fragmentation.
@$ORACLE_HOME/rdbms/admin/dbmspool.sql
# Creates the default table for storing the output of the ANALYZE LIST CHAINED ROWS
command
@$ORACLE_HOME/rdbms/admin/utlchain.sql
# Creates the EXCEPTION table
@$ORACLE_HOME/rdbms/admin/utlexcpt.sql
# Grant public access to all views used by TKPROF with verbose=y option
@$ORACLE_HOME/rdbms/admin/utltkprf.sql
# Create table PLAN_TABLE that is used by the EXPLAIN PLAN statement. The explain
statement requires the presence of this table in order to store the descriptions of the row sources.
@$ORACLE_HOME/rdbms/admin/utlxplan.sql
# Create performance tuning views
@$ORACLE_HOME/rdbms/admin/catperf.sql
# Create v7 style export/import views against the v8 RDBMS so that EXP/IMP v7 can be used to
read out data in a v8 RDBMS. These views are necessary if you want to export from Oracle8 and
import in an Oracle7 database.
@$ORACLE_HOME/rdbms/admin/catexp7.sql
# Create views of oracle locks
@$ORACLE_HOME/rdbms/admin/catblock.sql
# Print out the lock wait-for graph in a tree structured fashion


@$ORACLE_HOME/rdbms/admin/utllockt.sql
# Creates the default table for storing the output of the analyze validate command on a
partitioned table
@$ORACLE_HOME/rdbms/admin/utlvalid.sql
# PL/SQL Package of utility routines for raw datatypes
@$ORACLE_HOME/rdbms/admin/utlraw.sql
@$ORACLE_HOME/rdbms/admin/prvtrawb.plb
# Contains the PL/SQL interface to the cryptographic toolkit
@$ORACLE_HOME/rdbms/admin/dbmsoctk.sql
@$ORACLE_HOME/rdbms/admin/prvtoctk.plb
# This package provides a built-in random number generator. It is faster than generators written
in PL/SQL because it calls Oracle's internal random number generator.
@$ORACLE_HOME/rdbms/admin/dbmsrand.sql
# DBMS package specification for Oracle8 Large Object This package provides routines for
operations on BLOB and CLOB datatypes.
@$ORACLE_HOME/rdbms/admin/dbmslob.sql
# Procedures for instrumenting database applications DBMS_APPLICATION_INFO package
spec.
@$ORACLE_HOME/rdbms/admin/dbmsapin.sql
# Run obfuscation toolkit script.
@$ORACLE_HOME/rdbms/admin/catobtk.sql
# Create Heterogeneous Services data dictionary objects.
@$ORACLE_HOME/rdbms/admin/caths.sql
# Stored procedures for Oracle Trace server
@$ORACLE_HOME/rdbms/admin/otrcsvr.sql
# Oracle8i Profiler for PL/SQL. Profilers are helpful tools to investigate programs and identify
slow program parts and bottlenecks. Furthermore you can determine which procedure, function
or any other code part is executed how many times. To be able to use the DBMS_PROFILER
package you have to install the following packages once for your database. Do this as user SYS.
@$ORACLE_HOME/rdbms/admin/profload.sql


@$ORACLE_HOME/rdbms/admin/proftab.sql
@$ORACLE_HOME/rdbms/admin/dbmspbp.sql
@$ORACLE_HOME/rdbms/admin/prvtpbp.plb
13.2. Recompiling Invalid PL/SQL Modules
Run the UTLRP.sql script to recompile all INVALID objects, such as packages, procedures, types,
etc.
SQL>@$ORACLE_HOME/rdbms/admin/utlrp.sql
13.3. Additional Checks after the Migration
Check for Bad Date Constraints
A bad date constraint involves invalid date manipulation, which is a date manipulation that
implicitly assumes the century in the date, causing problems at the year 2000. The utlconst.sql
script runs through all of the check constraints in the database and marks constraints as bad if
they include any invalid date manipulation. This script selects all the bad constraints at the end.
Oracle7 let you create constraints with a two-digit year date constant. However, version 8 returns
an error if the check constraint's date constant does not include a four-digit year.
To run the utlconst.sql script, complete the following steps:
SQL> SPOOL utlresult.log
SQL> @utlconst.sql
SQL> SPOOL OFF
Server Output ON
Statement processed.
Statement processed.
Checking for bad date constraints
Finished checking -- All constraints OK!
After you run the script, the utlresult.log log file includes all the constraints that have invalid date
constraints. The utlconst.sql script does not correct bad constraints, but instead it disables them.
You should either drop the bad constraints or recreate them after you make the necessary
changes.
13.4. Rebuild Unusable Bitmap Indexes
During migration, some bitmap indexes may become unusable. To find these indexes, issue the
following SQL statement:
SELECT index_name, index_type, table_owner, status FROM dba_indexes
WHERE index_type = 'BITMAP' AND status = 'UNUSABLE';
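Each row that query returns needs an index rebuild. A small sketch that turns hypothetical (owner, index_name) rows into the corresponding DDL:

```python
# Hypothetical rows fetched from the unusable-bitmap-index query above;
# the owners and index names are illustrative only.
unusable = [("SCOTT", "EMP_BM_IX"), ("HR", "DEPT_BM_IX")]

def rebuild_statements(rows):
    """Turn (owner, index_name) pairs into ALTER INDEX ... REBUILD DDL."""
    return [f"ALTER INDEX {owner}.{name} REBUILD;" for owner, name in rows]

for stmt in rebuild_statements(unusable):
    print(stmt)
```

Spooling such generated statements to a script and running it is the usual way to rebuild many indexes at once.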
13.5. Rebuild Unusable Function-Based Indexes
During upgrade, some function-based indexes may become unusable. To find these indexes,
issue the following SQL statement:


SELECT owner, index_name, funcidx_status FROM dba_indexes
WHERE funcidx_status = 'DISABLED';
13.6. Change the Password for the OUTLN User
The OUTLN user is created automatically during installation of Oracle8i. This user has DBA
privileges. Use the ALTER USER statement to change the password for this user.


Project B
Upgrade Oracle Database from Version 8.1.7 to 9.2.0

We can use three methods to upgrade our Oracle database (we prefer the manual method to
upgrade our database):

Export/Import
Database Upgrade Assistant
Manually by using scripts
Step 1 (Prepare the Database to be Upgraded)


1. Log in to the system as the owner of the Oracle home directory of the database being upgraded
and Start SQL*Plus.
2. Connect to the database instance as a user with SYSDBA privileges.
3. Add space to your SYSTEM tablespace and to the tablespaces where you store rollback
segments, if necessary.
Release  SYSTEM Tablespace  Additional Tablespace
9.0.1    16 MB              30 MB
8.1.7    52 MB              80 MB
8.0.6    70 MB              N/A
7.3.4    85 MB              N/A

How to add more space to the SYSTEM tablespace:


ALTER TABLESPACE system ADD DATAFILE 'Path of Datafile' SIZE 16M AUTOEXTEND
ON NEXT 10M MAXSIZE UNLIMITED;
ALTER ROLLBACK SEGMENT rbs_name STORAGE (MAXEXTENTS UNLIMITED);
NOTE: A rollback segment of at least 70 MB is recommended.
4. Run SHUTDOWN IMMEDIATE on the database and backup your database.
SQL> SHUTDOWN IMMEDIATE

Step 2 (Upgrade the Database)

1. Stop the OracleServiceSID Oracle service of the Oracle 8i database:
C:\> NET STOP OracleServiceSID
2. Delete the OracleServiceSID at the command line of the 8i Home:
C:\> ORADIM -DELETE -SID <SID>
3. Create the new Oracle database 9i service at the command prompt using the following
command:
C:\> ORADIM -NEW -SID <SID> -INTPWD <password> -STARTMODE A
4. Put your init file in the database folder at the new Oracle 9i home from the 8i home.
5. Remove obsolete initialization parameters and adjust deprecated initialization parameters.

Make sure the SHARED_POOL_SIZE initialization parameter is set to at least 48 MB.


Make sure the PGA_AGGREGATE_TARGET initialization parameter is set to at least
24 MB.
Make sure the LARGE_POOL_SIZE initialization parameter is set to at least 8 MB.
Make sure the COMPATIBLE initialization parameter is properly set for the new
Oracle9i release. If COMPATIBLE is set below 8.1.0, then you will encounter the
following error when you attempt to start up your release 9.2 database.

ORA-00401: the value for parameter compatible is not supported by this release.
Details about COMPATIBLE issue:
If you are upgrading from Release: 7.3.4 then remove COMPATIBLE from your parameter file,
or set COMPATIBLE to 8.1.0.
If you are upgrading from Release: 8.0.6 then remove COMPATIBLE from your parameter file,
or set COMPATIBLE to 8.1.0
If you are upgrading from Release 8.1.7: if COMPATIBLE is set to 8.0.x, then either remove
COMPATIBLE from your parameter file or set COMPATIBLE to 8.1.0. If COMPATIBLE is set
to 8.1.x, then leave the setting as is.
If you are upgrading from Release: 9.0.1, then if one or more automatic segment-space managed
tablespaces exist in the database, then set COMPATIBLE to 9.0.1.3 Otherwise, leave the setting
as is.
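The COMPATIBLE advice above branches on the old release. A sketch encoding those rules (a return value of None means remove the parameter or leave the setting as is):

```python
def compatible_for_92(old_release, current_compatible=None, has_assm=False):
    """Sketch of the COMPATIBLE advice for a 9.2 upgrade.
    has_assm: whether any automatic segment-space managed tablespaces exist."""
    if old_release in ("7.3.4", "8.0.6"):
        return "8.1.0"                 # remove, or set to 8.1.0
    if old_release == "8.1.7":
        if current_compatible and current_compatible.startswith("8.0"):
            return "8.1.0"             # 8.0.x must be raised to 8.1.0
        return None                    # an 8.1.x setting can stay
    if old_release == "9.0.1":
        return "9.0.1.3" if has_assm else None
    return None

print(compatible_for_92("8.1.7", "8.0.5"))  # 8.1.0
```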
6. Connect to the new Oracle 9i instance as a user with SYSDBA privileges and issue the
following command:
SQL>STARTUP migrate
You don't need to use the PFILE option to specify the location of your initialization parameter
file in our case, because we are using the INIT file in the default location (which resides in
Oracle9iHome/database). We have just put the init file at the new Oracle 9i home from 8i.


7. Set the system to spool results to a log file for later verification of success:
SQL> SPOOL upgrade.log
8. Run the upgrade scripts.
SQL>@u0801070.sql
Details about upgrade scripts (run the script according to your old release):

Old Release  Run Script
7.3.4        u0703040.sql
8.0.6        u0800060.sql
8.1.7        u0801070.sql
9.0.1        u0900010.sql

You only need to run one script. For example, if your old release was 8.1.7, then you only need
to run u0801070.sql.
The script you run creates and alters certain dictionary tables. It also runs the catalog.sql and
catproc.sql scripts that come with the new 9.2 release, which create the system catalog views and
all the necessary packages for using PL/SQL.
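The script selection above is a one-line lookup. A sketch, assuming the release string is given exactly as in the table:

```python
# Upgrade-to-9.2 script per old release, as listed in the table above.
UPGRADE_SCRIPTS = {"7.3.4": "u0703040.sql", "8.0.6": "u0800060.sql",
                   "8.1.7": "u0801070.sql", "9.0.1": "u0900010.sql"}

def upgrade_script(old_release):
    """Return the single upgrade script to run for the given old release."""
    try:
        return UPGRADE_SCRIPTS[old_release]
    except KeyError:
        raise ValueError(f"no direct 9.2 upgrade script for {old_release}")

print(upgrade_script("8.1.7"))  # u0801070.sql
```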
9. Display the contents of the component registry to determine which components need to be
upgraded:
SQL> SELECT comp_name, version, status FROM dba_registry;
COMP_NAME                            VERSION    STATUS
Oracle9i Catalog Views               9.2.0.8.0  VALID
Oracle9i Packages and Types          9.2.0.8.0  VALID
JServer JAVA Virtual Machine         8.1.7      LOADED
Java Packages                        8.1.7      LOADED
Oracle XDK for Java                  8.1.7      LOADED
Oracle interMedia Text               8.1.7      LOADED
Oracle interMedia                    8.1.7.0.0  LOADED
Oracle Spatial                       8.1.7.0.0  LOADED
Oracle Visual Information Retrieval  8.1.7.0.0  LOADED


10. Run the cmpdbmig.sql script to upgrade components that can be upgraded while connected
with SYSDBA privileges:
SQL>@cmpdbmig.sql
The following components are upgraded by running the cmpdbmig.sql script:
JServer JAVA Virtual Machine
Oracle9i Java Packages
Oracle XDK for Java
Messaging Gateway
Oracle9i Real Application Clusters
Oracle Workspace Manager
Oracle Data Mining
OLAP Catalog
OLAP Analytic Workspace
Oracle Label Security
11. Display the contents of the component registry to determine which components were
upgraded:
SQL> SELECT comp_name, version, status FROM dba_registry;
12. Turn off the spooling of script results to the log file:
SQL> SPOOL OFF
13. Then, check the spool file and verify that the packages and procedures compiled successfully.
Correct any problems you find in this file and rerun the appropriate upgrade scripts if necessary.
14. Shut down and restart the instance to reinitialize the system parameters for normal operation.
SQL> SHUTDOWN IMMEDIATE
15. Upgrade any remaining components that existed in the previous database.
The following components require separate upgrade steps:
Oracle Text
Oracle Ultra Search
Oracle Spatial
Oracle interMedia
Oracle Visual Information Retrieval
16. Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.
SQL> @utlrp.sql
17. Verify that all expected packages and classes are valid:

SQL> SELECT count(*) FROM dba_objects WHERE status='INVALID';


SQL> SELECT DISTINCT object_name FROM dba_objects WHERE status='INVALID';
18. Verify that all components are valid and have been upgraded to release 9.2:
SQL> SELECT comp_name, version, status FROM dba_registry;
Steps for Upgrading remaining components that existed in the previous database.

Upgrading Oracle Spatial

(From Release 8.1.5, 8.1.6 or 8.1.7 to 9i Release 9.2.0)


1. Connect as SYSDBA.
2. Run the following script. This will grant the required privileges to the MDSYS user:
SQL>@ORACLE_HOME\md\admin\mdprivs.sql
3. Connect as the MDSYS user.
4. Run the following script:
SQL>@ORACLE_HOME\md\admin\c81xu9x.sql
Upgrading Oracle interMedia
(From Release 8.1.5, 8.1.6 or 8.1.7 to 9i release 9.2.0)
After upgrading your database, perform the following steps to upgrade interMedia manually:
1. Invoke the imdbma script to determine whether you need to upgrade.
2. Connect as SYSDBA.
3. Run the following script:
SQL>@\ord\im\admin\imdbma.sql
This script displays one of the following strings:
NOT_INSTALLED - if no prior release of the interMedia components was installed on your
system. You must install interMedia, rather than upgrade.
INSTALLED - if the current interMedia release is already installed
u0nnnnn0.sql - the script that performs the upgrade. nnnnn is the release of interMedia or Image
Cartridge that is currently installed. For example, u0800060.sql upgrades from Image Cartridge
release 8.0.6.0.0.
4. If an upgrade is required and your system is ready, perform the upgrade.

a) Connect / as SYSDBA
b) First upgrade Oracle interMedia Common Files.
SQL>@<ORACLE_HOME>\ord\admin\u0nnnnn0.sql
c) Then upgrade interMedia.
SQL>@<ORACLE_HOME>\ord\im\admin\u0nnnnn0.sql
5. Verify the upgrade:
a) Connect as the ORDSYS user.
b) Run the following command:
SQL>@<ORACLE_HOME>\ord\im\admin\imchk.sql
Upgrading Oracle Visual Information Retrieval
(From Release 8.1.5, 8.1.6 or 8.1.7 to 9i release 9.2.0)
1. Invoke the virdbma SQL script to decide whether or not you need to upgrade.
2. Connect as SYSDBA
SQL> @<ORACLE_HOME>\ord\vir\admin\virdbma.sql
This script displays one of the following strings:
NOT_INSTALLED - if no prior Visual Information Retrieval release was installed on your
system.
INSTALLED - if Visual Information Retrieval Compatible API is already installed.
u0nnnnn0.sql - the script for upgrade. nnnnn is the release of Visual Information Retrieval that
you have currently installed. For example, u0801070.sql upgrades from Visual Information
Retrieval release 8.1.7.0.0.
3. If an upgrade is required, perform the upgrade:
SQL>@<ORACLE_HOME>\ord\vir\admin\u0nnnnn0.sql
where u0nnnnn0.sql is the upgrade script displayed in step 2, if an upgrade is necessary.

Upgrading Oracle Text

If the Oracle system has Oracle Text installed, then complete the following steps:
1. Log in to the system as the owner of the Oracle home directory of the new release.


At a system prompt, change to the ORACLE_HOME/ctx/admin directory.


2. Start SQL*Plus.
3. Connect to the database instance as a user with SYSDBA privileges.
4. If the instance is running, shut it down using SHUTDOWN IMMEDIATE
SQL> SHUTDOWN IMMEDIATE
5. Start up the instance in RESTRICT mode:
SQL> STARTUP RESTRICT
You may need to use the PFILE option to specify the location of your initialization parameter
file.
6. Set the system to spool results to a log file for later verification of success:
SQL> SPOOL text_upgrade.log
If you are upgrading from release 8.1.7, then complete the following steps. Skip to Step 9 if you
are upgrading from release 9.0.1.
7. Run s0900010.sql:
SQL>@s0900010.sql
This script grants new, required database privileges to user CTXSYS.
8. Connect to the database instance as user CTXSYS.
Run u0900010.sql
SQL> @u0900010.sql
Connect to the database instance as a user with SYSDBA privileges.
9. If you are upgrading from release 8.1.7 or release 9.0.1, then complete the following steps.
Run S0902000.sql
SQL> @s0902000.sql
This script grants new, required database privileges to user CTXSYS.
10. Connect to the database instance as user CTXSYS.
Run u0902000.sql


SQL> @u0902000.sql
This script upgrades the CTXSYS schema to release 9.2.
Connect to the database instance as a user with SYSDBA privileges. Check for any invalid
CTXSYS objects and recompile them as needed. Turn off the spooling of script results to the log
file:
SQL> SPOOL OFF
Then, check the spool file and verify that the packages and procedures compiled successfully.
11. Shut down the instance:
SQL> SHUTDOWN IMMEDIATE
Exit SQL*Plus.

Upgrade User NCHAR Columns (Tasks to Complete Only After Upgrading a Release
8.1.7 Database)

If you upgraded from a version 8 release and your database contains user tables with NCHAR
columns, you must upgrade the NCHAR columns before they can be used in the Oracle
Database.
You will encounter the following error when attempting to use the NCHAR columns in the
Oracle Database until you perform the steps in this section:
ORA-12714: invalid national character set specified
To upgrade user tables with NCHAR columns, perform the following steps:
1. Connect to the database instance as a user with SYSDBA privileges.
2. If the instance is running, shut it down using SHUTDOWN IMMEDIATE:
SQL> SHUTDOWN IMMEDIATE
3. Start up the instance in RESTRICT mode:
SQL> STARTUP RESTRICT
4. Run utlnchar.sql:
SQL> @utlnchar.sql
Alternatively, to override the default upgrade selection, run n_switch.sql:
SQL> @n_switch.sql


5. Shut down the instance:


SQL> SHUTDOWN IMMEDIATE
6. Exit SQL* PLUS


Project C
Upgrade Oracle Database from Version 8.1.7 to 10.2.0

If your source database version is 8.1.7.4 (or higher), then you can directly upgrade your current
(source) database to 10gR2; otherwise you must follow the indirect upgrade path shown in the
table below.
Indirect Upgrade Path

Source Database        Intermediate Upgrade Path    Target Database
7.3.3.0.0 (or lower)   7.3.4 -> 8.1.7.4             10.2.x
7.3.4                  8.1.7.4                      10.2.x
8.0.n                  8.1.7.4                      10.2.x
8.1.n                  8.1.7.4                      10.2.x

Here I am describing Manual methods to upgrade our oracle database.


Step 1 (Pre upgrade Task)
1.1 Install the oracle 10g database software in new oracle home.
1.2 Connect to the 8i database and run the pre-upgrade script (utlu102i.sql), which is stored
in Oracle 10g Home/rdbms/admin.
1.3 Follow the steps suggested by the output of the script.
The output will show you the following information:
Database Name, Version and Compatible
Log Files Section: Log files should not be smaller than 4 MB. If a log file is smaller than 4 MB,
create additional log files of the suggested size (15 MB) and drop the smaller ones.
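The add-then-drop cycle described above can be sketched as follows; the group numbers and file paths are placeholders, not values from this database:

SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/orcl/redo04.log') SIZE 15M;
SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/orcl/redo05.log') SIZE 15M;
SQL> ALTER SYSTEM SWITCH LOGFILE;   -- repeat until the small group is INACTIVE
SQL> ALTER DATABASE DROP LOGFILE GROUP 1;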
Tablespaces Section: This section shows the required sizes of the SYSTEM, TEMP and DRSYS
tablespaces. If a tablespace is not large enough, you must increase its size.
For the SYSTEM tablespace the minimum required size is 598 MB
For the TEMP tablespace the minimum required size is 59 MB
For the DRSYS tablespace the minimum required size is 5 MB
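One way to meet these minimums is to resize an existing datafile of the tablespace; the file paths below are placeholders for your own layout:

SQL> ALTER DATABASE DATAFILE '/u01/oradata/orcl/system01.dbf' RESIZE 598M;
SQL> ALTER DATABASE DATAFILE '/u01/oradata/orcl/temp01.dbf' RESIZE 59M;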
Rollback Segments Section: This section shows the required size of the rollback segments. A
rollback segment must be at least 70 MB; if it is smaller, alter its storage clause to
MAXEXTENTS UNLIMITED using the ALTER ROLLBACK SEGMENT command.
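A sketch of the complete command referred to above; rbs01 stands in for the actual rollback segment name reported by the pre-upgrade script:

SQL> ALTER ROLLBACK SEGMENT rbs01 STORAGE (MAXEXTENTS UNLIMITED);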
Parameter Section: This section suggests parameter values for 10g:
"compatible" must be set to at least 9.2.0
"shared_pool_size" needs to be increased to at least 187332895


"java_pool_size" needs to be increased to at least 67108864


"streams_pool_size" is not currently defined and needs a value of at least 50331648
"large_pool_size" needs to be increased to at least 8388608
"pga_aggregate_target" is not currently defined and needs a value of at least 25165824
"session_max_open_files" needs to be increased to at least 20
Deprecated Parameters Section: This section shows deprecated parameters; "mts_dispatchers"
is now named "dispatchers".
Obsolete Parameters Section: This section shows obsolete parameters:
"job_queue_interval"
"distributed_transactions"
"oracle_trace_collection_name"
"max_enabled_roles"
Component Section: This section shows which database components will be upgraded or installed.
Oracle Catalog Views [upgrade]
Oracle Packages and Types [upgrade]
JServer JAVA Virtual Machine [upgrade]
The 'JServer JAVA Virtual Machine' JAccelerator (NCOMP) is required to be installed from the
10g Companion CD.
Oracle XDK for Java [upgrade]
Oracle Java Packages [install]
Oracle Text [upgrade]
Oracle XML Database [install]
Oracle interMedia [upgrade]
The 'Oracle interMedia Image Accelerator' is required to be installed from the 10g Companion
CD.
Spatial [upgrade]
Miscellaneous Warnings Section:
WARNING: --> your database is using an obsolete NCHAR character set.
In Oracle Database 10g, the NCHAR data types (NCHAR, NVARCHAR2, and NCLOB)
are limited to the Unicode character set encoding (UTF8 and AL16UTF16), only.
See "Database Character Sets" in chapter 5 of the Oracle 10g Database Upgrade
Guide for further information.
WARNING: --> Deprecated CONNECT role granted to some user/roles.
CONNECT role after upgrade has only CREATE SESSION privilege.
WARNING: --> Database contains INVALID objects prior to upgrade.
USER MDSYS has 4 INVALID objects.
USER SYS has 41 INVALID objects.


SYSAUX Tablespace Section: Create the tablespace in the Oracle Database 10.2 environment. The
new SYSAUX tablespace's minimum required size for database upgrade is 500 MB.
Tips:
If you are using SPFILE in current database, you must create INIT file in default location.
Step 2 (Only for the Windows platform; UNIX platform users only shut down the database and
listener services)
2.1. Shutdown the 8i instance
SQL>SHUTDOWN IMMEDIATE
2.2. Stop the OracleServiceSID Oracle service of the Oracle 8i database:
C:\> NET STOP OracleServiceSID
2.3. Delete the OracleServiceSID service at the command line of the 8i home:
C:\> ORADIM -DELETE -SID SID
Step 3 (Only for the Windows platform; if you are using a UNIX-based platform, just follow step
3.2)
3.1. Create the new Oracle Database 10g service at the command prompt using the following
command:
C:\> ORADIM -NEW -SID SID -INTPWD PASSWORD -STARTMODE A
3.2. Put your init file in the database folder of the new Oracle 10g home, copied from the 8i home.
Step 4
Connect to the new oracle 10g instance as a user SYSDBA privilege and issue following
command:
SQL>STARTUP UPGRADE
You do not need to use the PFILE option to specify the location of your initialization parameter
file in this case, because the init file is in the default location (Oracle 10g Home/database);
we have just copied the init file from the 8i home to the new 10g home.
IMPORTANT:
An error may occur when you attempt to start the new Oracle Database 10g release. If you
receive one, issue the SHUTDOWN ABORT command to shut down the database, then correct
the problem.
Step 5


Create SYSAUX table space for the database.


CREATE TABLESPACE sysaux DATAFILE 'sysaux01.dbf'
SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;
NOTE:
If you are upgrading from 10.1, skip Step 5; otherwise create the SYSAUX tablespace. In Oracle
10g, the SYSAUX tablespace is used to consolidate data from a number of tablespaces that were
separate in previous releases.
Step 6
6.1. Set the system to spool results to a log file for later verification of success:
SQL> SPOOL upgrade.log
6.2. Run the upgrade scripts.
SQL>@catupgrd.sql
The catupgrd.sql script determines which scripts need to be run and then runs each necessary
script.
6.3. Run the result of the upgrade display report.
SQL>@utlu102s.sql
The Post-Upgrade Status Tool displays the status of the database components in the upgraded
database and the time required to complete each component upgrade.
6.4. Turn off the spooling of script result to the log file
SQL>spool off;
6.5. Shutdown the instance and restart.
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
Step 7 (Post migration Task)
7.1. Remove the obsolete initialization parameter from parameter file.
7.2. Check Invalid objects in database by using following Query:
SQL> select count(*) from dba_objects where status='INVALID';


7.3. Run the utlrp.sql script to recompile any remaining stored PL/SQL and Java code.
SQL> @utlrp.sql
7.4. Upgrade User NCHAR Columns (Tasks to Complete Only After Upgrading a Release 8.1.7
Database)
If you upgraded from a version 8 release and your database contains user tables with NCHAR
columns, you must upgrade the NCHAR columns before they can be used in the Oracle
Database.
You will encounter the following error when attempting to use the NCHAR columns in the
Oracle Database until you perform the steps in this section:
ORA-12714: invalid national character set specified
To upgrade user tables with NCHAR columns, perform the following steps:
7.4.1. Connect to the database instance as a user with SYSDBA privileges.
7.4.2. If the instance is running, shut it down using SHUTDOWN IMMEDIATE:
SQL> SHUTDOWN IMMEDIATE
7.4.3. Start up the instance in RESTRICT mode:
SQL> STARTUP RESTRICT
7.4.4. Run utlnchar.sql:
SQL> @utlnchar.sql
Alternatively, to override the default upgrade selection, run n_switch.sql:
SQL> @n_switch.sql
5. Shut down the instance:
SQL> SHUTDOWN IMMEDIATE
6. Exit SQL* PLUS


Project D
Upgrade Oracle Database from Version 9.2.0 to 10.2.0

If your source database version is 9.0.1.4 (or higher) or 9.2.0.4 (or higher) then you can directly
upgrade your current or source database to 10gR2; otherwise you must apply the latest patch set
on the source database. You can download the latest patch set from Metalink.
Here I am describing Manual methods to upgrade our oracle database.
Suppose your current oracle 9.2.0 software reside in /data01 and all database reside in /data02
mount point in UNIX based platform. If you are using Window based platform suppose your
oracle software reside in D drive and database reside in E drive.
Step 1 (Pre upgrade Task)
1.1 Install the oracle 10g database software in new oracle home.
1.2 Connect to the 9i database and run the pre-upgrade script (utlu102i.sql), which is stored
in Oracle 10g Home/rdbms/admin.
1.3 Follow the steps suggested by the output of the script.
Tips:
If you are using SPFILE in current database, you must create INIT file in default location.
Step 2 (Only for the Windows platform; if you are using a UNIX-based platform, only shut down
the database service and listener services)
2.1 Shutdown the 9i instance
SQL>SHUTDOWN IMMEDIATE
2.2 Stop the OracleServiceSID Oracle service of the Oracle 9i database:
C:\> NET STOP OracleServiceSID
2.3 Delete the OracleServiceSID service at the command line of the 9i home:
C:\> ORADIM -DELETE -SID SID
Step 3 (Only for the Windows platform; if you are using a UNIX-based platform, just follow step
3.2)
3.1 Create the new Oracle Database 10g service at the command prompt using the following
command:
C:\> ORADIM -NEW -SID SID -INTPWD PASSWORD -STARTMODE A
3.2 Put your init file in the 10g default location, copied from the 9i home.

Step 4
Connect to the new oracle 10g instance as a user sysdba privilege and issue following command:


SQL>STARTUP UPGRADE
You do not need to use the PFILE option to specify the location of your initialization parameter
file in this case, because the init file is in the default location (Oracle 10g Home/database);
we have just copied the init file from the 9i home to the new 10g home.
An error may occur when you attempt to start the new Oracle Database 10g release. If you
receive one, issue the SHUTDOWN ABORT command to shut down the database, then correct
the problem.
Step 5 (Create SYSAUX tablespace)
Create SYSAUX table space for the database.
CREATE TABLESPACE sysaux DATAFILE 'sysaux01.dbf'
SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;
NOTE:
If you are upgrading from 10.1, skip Step 5; otherwise create the SYSAUX tablespace. In Oracle
10g, the SYSAUX tablespace is used to consolidate data from a number of tablespaces that were
separate in previous releases.
Step 6
6.1 Set the system to spool results to a log file for later verification of success:
SQL> SPOOL upgrade.log
6.2 Run the upgrade scripts.
SQL>@catupgrd.sql
The catupgrd.sql script determines which scripts need to be run and then runs each necessary
script.
6.3 Run the result of the upgrade display report.
SQL>@utlu102s.sql
The Post-Upgrade Status Tool displays the status of the database components in the upgraded
database and the time required to complete each component upgrade.
6.4 Turn off the spooling of script result to the log file
SQL>spool off;


6.5 Shutdown the instance and restart.


SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
Step 7 (Post migration Task)
7.1. Remove the obsolete initialization parameter from parameter file.
7.2. Check Invalid objects in database by using following Query:
SQL> select count(*) from dba_objects where status='INVALID';
7.3. Run the utlrp.sql script to recompile any remaining stored PL/SQL and Java code.
SQL > @utlrp.sql
7.4. Exit SQL* PLUS


Migration of Oracle Database Instances across OS Platforms


In this section, we will discuss how to migrate an existing database from one OS platform to
another (i.e.: Windows to Solaris).
We can migrate a database across OS platforms as part of an Oracle version upgrade (e.g.
Oracle 8i to Oracle 11g) or within the same Oracle version (e.g. 10.2.0 to 10.2.0).
We cannot use the migration utility (DBUA) or upgrade scripts to perform a cross-platform
migration.
We can only re-build the database instance and move the data using one of the following methods:

Export / Import
Transportable Tablespaces (10g or later)
RMAN CONVERT DATABASE functions (10g or later)

Project
Migration of Database across OS Platform through Export and Import
We can use the Export and Import utilities to move an existing Oracle database from one platform
to another (i.e. UNIX to NT or vice versa).
A full database export and import can be used in all Oracle versions to transfer a database across
platform.
Example:
-Source Database is 32-bit 9.2.0 database on 32-Bit Windows platform
- Target Database is 64-bit 10.2.0 database on 64-bit any UNIX based platform.
Step 1
Query the source database views dba_tablespaces, dba_data_files and dba_temp_files. You will
need this information later in the process.
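Queries along these lines capture the layout you will need to re-create on the target:

SQL> SELECT tablespace_name, status FROM dba_tablespaces;
SQL> SELECT file_name, tablespace_name, bytes FROM dba_data_files;
SQL> SELECT file_name, tablespace_name, bytes FROM dba_temp_files;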
Step 2
Perform a full export from the source database:
exp system/manager FULL=y FILE=exp_full.dmp LOG=exp_full.log
Step 3
Transfer the export dump file in binary mode to the target server (HP-UX 11.22 in this example).
Step 4
Create a new database on the target server.


Step 5
Before importing the dump file, you must first create your tablespace structure, using the
information obtained in Step 1.
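For each application tablespace found in Step 1, create an equivalent on the target before the import; the tablespace name and datafile path here are placeholders:

SQL> CREATE TABLESPACE users DATAFILE '/u02/oradata/TESTDB/users01.dbf' SIZE 500M EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;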
Step 6
Perform a full import with the IGNORE parameter enabled:
imp system/manager FULL=y FILE=exp_full.dmp LOG=imp_full.log IGNORE=y
Using IGNORE=y instructs Oracle to ignore any creation errors during the import and permit the
import to complete.

Project
Use RMAN CONVERT DATABASE on Source Host for Cross platform Migration
Here I am explaining the CONVERT DATABASE procedure on source host.
Restriction:

The principal restriction on cross-platform transportable database is that the source and
destination platform must share the same endian format.

Redo log files and control files from the source database are not transported. New control
files and redo log files are created for the new database during the transport process, and
an OPEN RESETLOGS is performed once the new database is created. Similarly,
tempfiles belonging to locally managed temporary tablespaces are not transported. The
temporary tablespace will be re-created on the target platform when the transport script is
run.

BFILEs, External tables and directories, Password files are not transported.

Step 1
Check that the source and destination platform belong to same ENDIAN format. We will try to
transport a database from Windows (32-bit) to Linux (32-bit).
SQL> select PLATFORM_NAME, ENDIAN_FORMAT from V$TRANSPORTABLE_PLATFORM;
Note: If the two platforms are not on the same ENDIAN format, you will need to use
TRANSPORTABLE TABLESPACE instead of CONVERT DATABASE

Step 2
Check that the database can be transported to the destination platform and that the current state
of the database (e.g. no incorrect compatibility settings, no in-doubt or active transactions)
permits transport.


Make sure your database is open in READ ONLY mode and call DBMS_TDB.CHECK_DB.
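Opening the database read-only before the check can be done in the usual way:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE OPEN READ ONLY;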
For example, since we need to transport to Linux 32-bit, we call the procedure with the following
argument:
SQL> set serveroutput on
SQL> declare
db_ready boolean;
begin
db_ready := dbms_tdb.check_db('Linux IA (32-bit)');
end;
/
PL/SQL procedure successfully completed.

If you call DBMS_TDB.CHECK_DB and no messages are displayed indicating conditions
preventing transport before the "PL/SQL procedure successfully completed" message, then your
database is ready for transport.
Step 3
Identify any external tables, directories or BFILEs. Use DBMS_TDB.CHECK_EXTERNAL
package for this because RMAN cannot automate the transport of such files as mentioned above.
SQL> set serveroutput on
SQL> declare
external boolean;
begin
external := dbms_tdb.check_external;
end;
/

The following directories exist in the database: SYS.DATA_PUMP_DIR


PL/SQL procedure successfully completed.

If there are no external objects, then this procedure completes with no output. If there are external
objects, however, the output will be somewhat similar to the above.
Step 4
If the above three steps have completed successfully, the database is ready for transport. We will
use the RMAN CONVERT DATABASE command and specify a destination platform.
The steps below create the transport script, which contains the SQL statements used to create the
new database on the destination platform:
C:\>rman target / nocatalog
RMAN> CONVERT DATABASE NEW DATABASE 'TESTL'
TRANSPORT SCRIPT 'D:\Transport.sql'
TO PLATFORM 'Linux IA (32-bit)';


Note: This converts the datafiles and puts them in the Windows default location
(for example, ORACLE_HOME/database); we can use the FORMAT parameter to place the
converted datafiles somewhere other than the default.
Use the command below to create the converted datafiles in a non-default
location:
CONVERT DATABASE NEW DATABASE 'TESTL'
TRANSPORT SCRIPT 'D:\Transport.sql'
TO PLATFORM 'Linux IA (32-bit)'
FORMAT='D:\%U';

Step 5
After completion of above task, now copy the Transport.sql, converted datafiles, and the pfile
from Windows OS to Linux.
Step 6
Go to Destination Machine and edit the PFILE to change any settings for the destination
database.
Step 7
Go to Destination Machine and edit the TRANSPORT.SQL script to reflect the new path for
datafiles in the CREATE CONTROLFILE section of the script.
Step 8
Go to Destination Machine and run Transport.sql Scripts:
$ export ORACLE_HOME=/home/oracle/product/ora10g
$ export ORACLE_SID=TESTL
$ export PATH=$ORACLE_HOME/bin:$PATH
$ sqlplus "/ as sysdba"
Connected to an idle instance.
SQL> @TRANSPORT.SQL
ORACLE instance started.
Total System Global Area  201326592 bytes
Fixed Size                  1218484 bytes
Variable Size              67110988 bytes
Database Buffers          125829120 bytes
Redo Buffers                7168000 bytes

Control file created.


Database altered.
Database closed.
Database dismounted.
ORACLE instance shut down.


ORACLE instance started.


Total System Global Area  201326592 bytes
Fixed Size                  1218484 bytes
Variable Size              67110988 bytes
Database Buffers          125829120 bytes
Redo Buffers                7168000 bytes
Database mounted.
Database opened.

...
...
...

Step 9
Check error during recompilation:
SQL> select COUNT(*) "ERRORS DURING RECOMPILATION" from utl_recomp_errors;
ERRORS DURING RECOMPILATION
---------------------------
                          0

Step 10
Run component validation procedure
SQL> SET serveroutput on
SQL> EXECUTE dbms_registry_sys.validate_components;
PL/SQL procedure successfully completed.
SQL> SET serveroutput off

Step 11
Change database identifier
1. Put your database in mount stage.
2. To verify the DBID and database name
SQL> SELECT dbid, name FROM v$database;
3. set PATH and execute nid command in terminal
$ nid target=/
-----Change database ID of database ORCLLNX? (Y/[N]) => Y
Proceeding with operation
.
.


Instance shut down


Successfully changed database ID.
DBNEWID - Completed succesfully.
SQL> startup mount;
SQL> alter database open resetlogs;
Database altered.
Step 12
Check database integrity
SQL> select tablespace_name from dba_tablespaces;
SQL> select file_name from dba_data_files;
SQL> SELECT COMP_NAME, STATUS FROM DBA_REGISTRY;
Post Migration Steps
Directory objects must be created on the target system. Query DBA_DIRECTORIES on the target
database to determine the filesystem locations that must exist for the directory objects to be usable.
Then create each location and redefine the directory object with CREATE OR REPLACE
DIRECTORY (updating the dictionary view directly is not supported):

$ mkdir -p /d01/oracle/apexdb/admin/testdb/dpdump
SQL> select directory_name, directory_path from dba_directories;
SQL> create or replace directory DATA_PUMP_DIR as '/d01/oracle/apexdb/admin/testdb/dpdump';

Project
Cross-Platform Migration on Destination Host Using RMAN Convert Database
Here I am explaining the CONVERT DATABASE procedure on the destination host.
Restriction:

The principal restriction on cross-platform transportable database is that the source and
destination platform must share the same endian format.

Redo log files and control files from the source database are not transported. New control
files and redo log files are created for the new database during the transport process, and
an OPEN RESETLOGS is performed once the new database is created. Similarly,
tempfiles belonging to locally managed temporary tablespaces are not transported. The
temporary tablespace will be re-created on the target platform when the transport script is
run.

BFILEs, External tables and directories, Password files are not transported.

The Source and the target database version must be equal / greater than 10.2.0. version


Step 1
Check that the source and destination platform belong to same ENDIAN format. We will try to
transport a database from Windows (32-bit) to Linux (32-bit).
SQL> select PLATFORM_NAME, ENDIAN_FORMAT from V$TRANSPORTABLE_PLATFORM;
Note: If the two platforms are not on the same ENDIAN format, you will need to use
TRANSPORTABLE TABLESPACE instead of CONVERT DATABASE

Step 2
Check that the database can be transported to the destination platform and that the current state
of the database (e.g. no incorrect compatibility settings, no in-doubt or active transactions)
permits transport.
Make sure your database is open in READ ONLY mode and call DBMS_TDB.CHECK_DB.
For example, since we need to transport to Linux 32-bit, we call the procedure with the following
argument:
SQL> set serveroutput on
SQL> declare
db_ready boolean;
begin
db_ready := dbms_tdb.check_db('Linux IA (32-bit)');
end;
/
PL/SQL procedure successfully completed.

If you call DBMS_TDB.CHECK_DB and no messages are displayed indicating conditions
preventing transport before the "PL/SQL procedure successfully completed" message, then your
database is ready for transport.
Step 3
Identify any external tables, directories or BFILEs. Use DBMS_TDB.CHECK_EXTERNAL
package for this because RMAN cannot automate the transport of such files as mentioned above.
SQL> set serveroutput on
SQL> declare
external boolean;
begin
external := dbms_tdb.check_external;
end;
/

The following directories exist in the database: SYS.DATA_PUMP_DIR


PL/SQL procedure successfully completed.


If there are no external objects, then this procedure completes with no output. If there are external
objects, however, the output will be somewhat similar to above.
Step 4
If above three steps has been completed successfully, the database is ready for transport. We will
use RMAN CONVERT DATABASE command and specify a destination platform.
C:\>rman target / nocatalog
RMAN> CONVERT DATABASE ON TARGET PLATFORM
CONVERT SCRIPT 'D:\convertscript.rman'
TRANSPORT SCRIPT 'D:\transportscript.sql'
new database 'TESTL'
FORMAT 'D:\%U';
Note:
The CONVERT DATABASE ON TARGET PLATFORM command generates a transport
script and PFILE, and also generates a convert script for all datafiles of the database.
It does not produce converted datafile copies.

Step 5
After completion of the above task, copy transportscript.sql, the datafiles, convertscript.rman and
the PFILE from the Windows OS to Linux.
Step 6
Go to Destination Machine and edit the PFILE to change any settings for the destination
database.
Step 7
Go to Destination Machine and Create a dummy Controlfile.
$ export ORACLE_HOME=/home/oracle/product/ora10g
$ export ORACLE_SID=TESTL
$ export PATH=$ORACLE_HOME/bin:$PATH
$ sqlplus "/ as sysdba"
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
SQL> <run the CREATE CONTROLFILE statement here>
Control file created.
Step 8
Now edit the file convertscript.rman and make the necessary changes to the filesystem paths and
file names. Once the changes are done, run the script from the RMAN prompt:


$ rman target / nocatalog @CONVERTSCRIPT.RMAN


Recovery Manager complete.
Step 9

Now shut down the database and delete the dummy controlfile.

Step 10
Now edit the transport SQL script to reflect the new paths for datafiles and redo log files in the
CREATE CONTROLFILE section of the script.
Step 11
Once the PFILE and transport SQL script are suitably modified, invoke SQL*Plus on the
destination host after setting the Oracle environment parameters, and then run the transport
script as follows:
$ export ORACLE_HOME=/u01/oracle/product/ora10g
$ export ORACLE_SID=win10g
$ export PATH=$ORACLE_HOME/bin:$PATH
$ sqlplus "/ as sysdba"

Connected to an idle instance.


SQL> @TRANSPORTSCRIPT.SQL
ORACLE instance started.
Total System Global Area 201326592 bytes
Fixed Size 1218484 bytes
Variable Size 67110988 bytes
Database Buffers 125829120 bytes
Redo Buffers 7168000 bytes
Control file created.
Database altered.
Database closed.
Database dismounted.
ORACLE instance shut down.
ORACLE instance started.
Total System Global Area 201326592 bytes
Fixed Size 1218484 bytes
Variable Size 67110988 bytes
Database Buffers 125829120 bytes
Redo Buffers 7168000 bytes
Database mounted.
Database opened.
...
...

When the transport script finishes, the creation of the new database is complete.
Step 12
Check error during recompilation:


SQL> select COUNT(*) "ERRORS DURING RECOMPILATION" from utl_recomp_errors;


ERRORS DURING RECOMPILATION
---------------------------
                          0

Step 13
Run component validation procedure
SQL> SET serveroutput on
SQL> EXECUTE dbms_registry_sys.validate_components;
PL/SQL procedure successfully completed.
SQL> SET serveroutput off

Step 14
Change database identifier
1. Put your database in the mount stage.
2. Verify the DBID and database name:
SQL> SELECT dbid, name FROM v$database;
3. Set the PATH and execute the nid command in a terminal:
$ nid target=/
-----Change database ID of database ORCLLNX? (Y/[N]) => Y
Proceeding with operation
.
.

Instance shut down


Successfully changed database ID.
DBNEWID - Completed succesfully.
SQL> startup mount;
SQL> alter database open resetlogs;
Database altered.
Step 15
Check database integrity
SQL> select tablespace_name from dba_tablespaces;
SQL> select file_name from dba_data_files;


SQL> SELECT COMP_NAME, STATUS FROM DBA_REGISTRY;

Project
Migration Oracle Databases across Platforms by using Transporting Tablespace
Prior to Oracle 10g, one of the only supported ways to move an Oracle database across platforms
was to export the data from the existing database and import it into a new database on the new
server.
The Export/Import approach works well if your database is small, but it can require an
unreasonable amount of down time if your database is large. In Oracle 10g, the transportable
tablespace feature has been enhanced in a way that makes it possible to move large databases (or
portions of them) across platforms much more quickly and simply than the export/import method.
Note:
In Oracle 8i and Oracle 9i, tablespaces could only be transported into databases that ran on the
same hardware platform and operating system. So if your database ran on Windows and you wanted
to migrate to Linux, you could not use transportable tablespaces to copy data efficiently between
the databases.
Beginning in Oracle 10g release 1, cross-platform support for transportable tablespaces is
available for several of the most commonly used platforms. The process is similar to transporting
tablespaces in previous Oracle releases, except there are a few possible extra steps, and there are
more limitations and restrictions. Oracle 10g Release 2 goes one step further and offers the
ability to transport an entire database across platforms in one step, but the limitations there
are even stricter.
Important:

- Data pump cannot transport XMLTypes, while original export and import can.
- Data pump offers many benefits over original export and import in the areas of performance
  and job management, but these benefits have little impact when transporting tablespaces
  because metadata export and import is usually very fast to begin with.
- Original export and import cannot transport BINARY_FLOAT and BINARY_DOUBLE data types,
  while data pump can.
- When original export and import transport a tablespace that contains materialized views, the
  materialized views are converted into regular tables on the target database. Data pump, on
  the other hand, keeps them as materialized views.

Limitation:

- The source and target database must use the same character set and national character set.
- We cannot transport a tablespace to a target database in which a tablespace with the same
  name already exists.
- Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain
  XMLTypes, but you must use the IMP and EXP utilities, not Data Pump. When using EXP, ensure
  that the CONSTRAINTS and TRIGGERS parameters are set to Y (the default).
How to retrieve the list of tablespaces that contain XMLType data:
SELECT DISTINCT p.tablespace_name
FROM dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
WHERE t.table_name = x.table_name
AND t.tablespace_name = p.tablespace_name
AND x.owner = u.username;
Transporting tablespaces with XMLTypes has the following limitations:

- The target database must have XML DB installed.
- Schemas referenced by XMLType tables cannot be the XML DB standard schemas.
- Schemas referenced by XMLType tables cannot have cyclic dependencies.
- Any row-level security on XMLType tables is lost upon import.
- If the schema for a transported XMLType table is not present in the target database, it is
  imported and registered.
- If the schema already exists in the target database, an error is returned unless the
  ignore=y option is set.

Floating-point numbers: BINARY_FLOAT and BINARY_DOUBLE types are transportable using Data
Pump but not the original export utility, EXP.

Step 1: Check Platform Support and File Conversion Requirement


First we need to verify that Oracle supports cross-platform tablespace transport between the two
platforms we have in mind. The v$database view shows the name of the platform (as Oracle sees
it) and the v$transportable_platform view shows a list of all supported platforms. If we join these
two together in a query, we get the information we need.
On the source database:
SQL> SELECT A.platform_id, A.platform_name, B.endian_format
     FROM v$database A, v$transportable_platform B
     WHERE B.platform_id (+) = A.platform_id;

PLATFORM_ID PLATFORM_NAME                       ENDIAN_FORMAT
----------- ----------------------------------- -------------
          2 Solaris[tm] OE (64-bit)             Big

SQL>
On the target database:
SQL> SELECT A.platform_id, A.platform_name, B.endian_format
     FROM v$database A, v$transportable_platform B
     WHERE B.platform_id (+) = A.platform_id;

PLATFORM_ID PLATFORM_NAME                       ENDIAN_FORMAT
----------- ----------------------------------- -------------
         10 Linux IA (32-bit)                   Little

SQL>
Note:
The ENDIAN_FORMAT column can show one of three values: Big, Little, or blank.
If the source and target platform have the same endian format, then file conversion will not be
necessary. If the endian formats differ, however, then file conversion will be required. (Endian
format describes the order in which the processor architecture natively places bytes in memory,
a CPU register, or a file.)
A blank indicates that the platform is not supported for cross-platform tablespace transport.
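The endian difference above can be illustrated outside the database. This short Python sketch (illustrative only; it assumes nothing about Oracle file formats) shows how the same 32-bit value is laid out byte by byte under each byte order:

```python
import struct

value = 0x0A0B0C0D

# Big-endian (e.g. Solaris on SPARC): most significant byte stored first
big = struct.pack(">I", value)

# Little-endian (e.g. Linux on x86): least significant byte stored first
little = struct.pack("<I", value)

print(big.hex())     # -> "0a0b0c0d"
print(little.hex())  # -> "0d0c0b0a"
```

The same four bytes read back on the "wrong" platform would yield a different number, which is exactly why datafiles need conversion when the endian formats differ.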
Step 2: Identify Tablespaces to be transported and verify Self-containment
Now we figure out which tablespaces we want to transport. There is no need to transport the
SYSTEM, undo, or temporary tablespaces.
We use the following query on the source database to identify the tablespaces that should be transported:
SQL> SELECT tablespace_name, segment_type, COUNT(*),
            SUM(bytes) / 1024 / 1024 mb
     FROM dba_segments
     WHERE owner NOT IN ('SYS', 'SYSTEM')
     GROUP BY tablespace_name, segment_type
     ORDER BY 1, 2 DESC;
Self-contained means that objects in the tablespace set cannot reference or depend on objects
that reside outside of the set. For example, if a table in the EMP2 tablespace had an index in
the IND1 tablespace, then transporting the IND1 tablespace (without the EMP2 tablespace) would
present a problem: once IND1 is plugged into the target database, there would be an index on a
non-existent table. Oracle will not allow this and will point out the problem while exporting
the metadata.
Use the following block on the source database to verify there are no self-containment problems:
SQL> BEGIN
       SYS.dbms_tts.transport_set_check(
         '<TABLESPACE_NAME>,<TABLESPACE_NAME>',
         incl_constraints => TRUE,
         full_check => FALSE);
     END;
     /
PL/SQL procedure successfully completed.
SQL> SELECT * FROM SYS.transport_set_violations;


no rows selected
SQL>
If there had been an index in tablespace IND1 that belonged to a table outside of the tablespace
set, we would have seen a violation like:
SQL> SELECT * FROM SYS.transport_set_violations;

VIOLATIONS
-----------------------------------------------------------------------
Index MY_SCHEMA.MY_INDEX in tablespace IND1 points to table
MY_SCHEMA.MY_TABLE in tablespace EMP2

SQL>

If there had been a table in the EMP1 tablespace with a foreign key referencing a table outside of
the tablespace set, we would have seen a violation like:
SQL> SELECT * FROM SYS.transport_set_violations;
VIOLATIONS
Constraint MY_CHILD_TABLE_FK1 between table MY_SCHEMA.MY_PARENT_TABLE in
tablespace EMP2 and table MY_SCHEMA.MY_CHILD_TABLE in tablespace EMP1
SQL>
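The self-containment rule that transport_set_check enforces can be sketched in a few lines. The following Python toy model (hypothetical object and tablespace names, not Oracle code) flags objects inside the transport set that reference objects outside it:

```python
def transport_set_violations(objects, transport_set):
    """Return objects in the transport set that depend on objects outside it.

    objects maps object name -> (tablespace, list of referenced objects).
    """
    violations = []
    for name, (tbs, refs) in objects.items():
        if tbs not in transport_set:
            continue  # object is not being transported
        for ref in refs:
            ref_tbs = objects[ref][0]
            if ref_tbs not in transport_set:
                violations.append(f"{name} in {tbs} references {ref} in {ref_tbs}")
    return violations

# Toy catalog mirroring the index/table example above
objects = {
    "MY_TABLE": ("EMP2", []),
    "MY_INDEX": ("IND1", ["MY_TABLE"]),
}

print(transport_set_violations(objects, {"IND1"}))          # one violation
print(transport_set_violations(objects, {"IND1", "EMP2"}))  # self-contained: []
```

Including the referenced tablespace in the set clears the violation, which is exactly the remediation for a real transport_set_violations hit.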
Step 3: Check for Problematic Data Types
As we pointed out earlier, data pump is not able to transport XMLTypes, while original export
and import are not able to transport BINARY_FLOAT or BINARY_DOUBLE data.
Furthermore, there are several opaque data types including RAW, LONG RAW, BFILE,
ANYTYPE, and user-defined data types. Because of the unstructured nature of these data types,
Oracle does not know if data in these columns will be platform-independent or require byte
swapping for endian format change. Oracle simply transports these data types as-is and leaves
conversion to the application.
We ran the following queries on the source database in order to survey the data types used in our
tablespace set:
SELECT B.data_type, COUNT(*)
FROM dba_tables A, dba_tab_columns B
WHERE A.owner NOT IN ('SYS', 'SYSTEM')
AND B.owner = A.owner
AND B.table_name = A.table_name
GROUP BY B.data_type
ORDER BY B.data_type;
SELECT B.owner, B.table_name
FROM dba_xml_tables A, all_all_tables B
WHERE B.owner = A.owner
AND B.table_name = A.table_name
AND B.tablespace_name IN ('USERS', 'EXAMPLE');

no rows selected

SQL>
The tablespaces we are planning to transport do not appear to have problematic data types.
Step 4: Check for Missing Schemas and Duplicate Tablespace and Object Names
At this point we check schema, tablespace, and object names on the source and target databases
in order to ensure that the transport will go smoothly. We may need to create schemas on the
target database if they are missing. We may also need to rename tablespaces or schema objects
on either the source or target database if there are duplications.
At this time we should determine what schemas will be required on the target database and create
them if they don't already exist.
SELECT owner, COUNT (*)
FROM dba_segments
WHERE tablespace_name IN ('USERS', 'EXAMPLE')
GROUP BY owner;
Next we verify that this schema did not yet exist on the target database:
SQL> SELECT username
FROM dba_users
WHERE username != 'SYS';
We create the missing schema on the target database now and give it all roles and system
privileges required by the application:
SQL> CREATE USER dbrd IDENTIFIED BY password;
User created.
SQL> GRANT connect, resource TO dbrd;
Grant succeeded.
SQL> GRANT create library TO dbrd;
Grant succeeded.
SQL> REVOKE unlimited tablespace FROM dbrd;
Revoke succeeded.
SQL>


Next we move on to checking for duplicate tablespace or object names. It will not be possible to
transport our tablespace set into the target database if a tablespace already exists there with the
same name as one of the tablespaces in our set. We can quickly check the target database for a
duplicate tablespace name:
SQL> SELECT tablespace_name
FROM dba_tablespaces
WHERE tablespace_name IN ('USERS', 'EXAMPLE');
If there had been a duplication of tablespace names, we could simply rename a tablespace (on the
source or target database) with a statement such as:
SQL> ALTER TABLESPACE old_tablespace_name RENAME TO
new_tablespace_name;

Step 5: Make Tablespaces Read-only in Source Database


The tablespaces to be transported need to be made read-only on the source database for long
enough to copy them and extract the metadata.
We put the tablespaces into read-only mode with the following statements:
SQL> ALTER TABLESPACE <TABLESPACE_NAME> READ ONLY;
Tablespace altered.
Step 6: Extract Metadata from Source Database
We are now ready to extract the metadata from the source database. We could use either data
pump or original export to do this. The export command and output were as follows:
$ exp "'/ as sysdba'" file=Export.dmp transport_tablespace=y
tablespaces=<TABLESPACE_NAME>,<TABLESPACE_NAME>
Note that the export must be performed while connected to the database with the SYSDBA privilege.
Step 7: Copy Files to Target Server and Convert if Necessary
We are now ready to copy the export file and all of the data files that make up the transported
tablespace set to the server where the target database resides.
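After copying the dump file and data files (for example with scp, or ftp in binary mode), it is prudent to confirm they arrived intact before converting them. A minimal Python sketch, assuming only local file paths, computes a checksum you can compare on both servers:

```python
import hashlib
import os
import tempfile

def sha256sum(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a throwaway file standing in for a copied datafile
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
    path = tmp.name

digest = sha256sum(path)
os.unlink(path)
print(digest)
```

Run the same checksum on the source and target servers; matching digests confirm the copy was not corrupted in transit.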
The RMAN command and output to convert data files on the target database were as follows:

$ rman
Recovery Manager: Release 10.2.0.2.0 - Production on Wed Dec 20 10:11:38 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.


RMAN> CONNECT TARGET


connected to target database: PROD463 (DBID=2124019545)
RMAN> CONVERT DATAFILE '/u01/stage/tab101.dbf',
'/u01/stage/tab102.dbf',
'/u01/stage/tab103.dbf',
'/u01/stage/ind101.dbf'
FROM PLATFORM "Solaris[tm] OE (64-bit)"
DB_FILE_NAME_CONVERT ('/u01/stage/',
'/u01/oradata/PROD463/');
Finished backup at 20-DEC-06
RMAN> EXIT
Recovery Manager complete.
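Conceptually, the endian conversion that RMAN CONVERT performs boils down to reversing the byte order within each word of the file. The toy Python function below shows only that core idea; the real conversion is far more involved, since RMAN understands Oracle block structures, headers, and checksums:

```python
def swap_endian(buf, word_size=4):
    """Reverse the byte order within each fixed-size word of a buffer."""
    if len(buf) % word_size != 0:
        raise ValueError("buffer length must be a multiple of word_size")
    return b"".join(buf[i:i + word_size][::-1]
                    for i in range(0, len(buf), word_size))

data = bytes.fromhex("0a0b0c0d01020304")   # two big-endian 32-bit words
print(swap_endian(data).hex())             # -> "0d0c0b0a04030201"
```

This is why conversion is only needed when the endian formats differ: on same-endian platforms the bytes are already in the order the target CPU expects.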

Step 8: Import Metadata into Target Database


We are now ready to load the metadata extracted from the source database into the target
database. This will make the data files we copied to the target database server part of the target
database and will add objects to the target database's data dictionary. This step is sometimes
called "plugging in" the tablespaces. Again we can use data pump or original import, and we've
chosen to use original import. The import command was as follows:
$ imp "'/ as sysdba'" file=PROD417_tab1ind1.dmp transport_tablespace=y \
  datafiles=/u01/oradata/PROD463/ind101.dbf,/u01/oradata/PROD463/tab101.dbf,/u01/oradata/PROD463/tab102.dbf,/u01/oradata/PROD463/tab103.dbf
$
We changed the tablespaces to read-write mode on the target database so that transaction activity
could begin:
SQL> ALTER TABLESPACE <TABLESPACE_NAME> READ WRITE;
Tablespace altered.
SQL>
