
Oracle Study – Module – III

Network Environment Challenges - Configuring the network environment, maintaining the network, tuning, troubleshooting and monitoring the network, implementing security in the network, integrating legacy systems.
Simple Network – Two tier – This is called client-server architecture; it involves client processes that request services from server processes, communicating over a given protocol. It is not easily scalable.
Simple to Complex Network – N-Tier – In N-tier the middle tier can host multiple services such as translation services, scalability services and network agent services.
Complex Network – Different hardware platforms, different operating systems, multiple protocols, different syntax between connected applications, applications in different geographical locations.
Oracle Networking Solutions
Connectivity – Oracle Net Services – Provides protocol independence, comprehensive platform support, integrated GUI administration tools, multiple configuration options, a tracing and diagnostic toolset and basic security.
Database connectivity with IIOP (Internet Inter-ORB Protocol) and HTTP – Using these protocols we can run applications with a web browser, the Internet File System and Enterprise JavaBeans, and using SSL can provide added security.
Directory Naming – The process of resolving a network alias using an LDAP-compliant directory server; clients must be configured to use the LDAP-compliant server. It provides a standard interface to an enterprise directory service such as Oracle Internet Directory, supports development and deployment of directory-enabled applications, and offers an array of programming interfaces that enable seamless deployment of internet-ready applications.
Naming Methods – Host naming; Local naming (uses a tnsnames.ora file); Oracle Names naming (uses an Oracle Names server with Oracle 8i Server and earlier); Directory naming (uses Oracle Internet Directory).

Directory Services – Oracle Internet Directory is Oracle's LDAP-compliant directory service. It integrates tightly with Oracle 9i, simplifies network administration, provides secure and reliable directory services, and integrates existing public key certificates, e-wallets and access privileges.
Scalability – Oracle Shared Server – This enables a large number of users to connect to a database simultaneously; database resources are shared, resulting in efficient use of memory and processing; connections are routed via a dispatcher; server processes are not dedicated to each client; server processes serve client processes as needed.
Dispatcher – A process which handles the management of connections to the available server processes; it can support multiple client connections concurrently.
Server process – Handles the retrieval and saving of data to the database and any other CPU processing the application needs.
Scalability – Connection Manager – It offers multiplexing of connections: it can handle several incoming connections and transmit them over a single outgoing connection, allowing a large number of users to access the server (available only over TCP/IP); cross-protocol connectivity, so a client and server can communicate using different protocols; and network access control, so designated clients can connect to certain servers based on TCP/IP rules. Usage – supports more users, enables cross-protocol communication, can act as an access control mechanism, can act as a proxy server.

Security – Encryption (DES, RSA, 3DES); Authentication – authenticates users through third-party services and SSL; Data Integrity – ensures data integrity during transmission using MD5 and SHA; single sign-on.
Oracle Net Services and Firewalls – Oracle provides support for all major firewalls. The Oracle Net application proxy kit allows firewall vendors to provide connection support for Oracle environments; it supports proxy-based and stateful packet inspection firewalls.

Accessibility – Heterogeneous Services – Enables access to legacy data as if it resided in a single local relational database, and enables Oracle procedure calls to access non-Oracle systems, services or APIs (databases such as Informix, DB2, SQL Server and Sybase).
Accessibility – External Procedures – These are functions written in a 3GL language that can be called from PL/SQL. Support for external procedures allows the developer more flexibility than SQL/PLSQL alone. The Oracle listener can listen for external procedure calls, and connections to external procedures can be configured during or after server installation. PL/SQL passes the following information to an external procedure: shared library name, external procedure name, and parameters.
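As a rough sketch, an external procedure is registered through a library object and a call specification (the library path, library name and function name below are hypothetical):

-- Register the shared library (hypothetical path and name)
CREATE OR REPLACE LIBRARY c_utils AS '/home/oracle/lib/libcutils.so';

-- Publish a C function to PL/SQL through a call specification
CREATE OR REPLACE FUNCTION calc_tax (p_amount IN BINARY_INTEGER)
  RETURN BINARY_INTEGER
AS LANGUAGE C
  LIBRARY c_utils
  NAME "calc_tax"
  PARAMETERS (p_amount INT);

A call such as SELECT calc_tax(1000) FROM dual then goes through the listener to the extproc agent, which loads the library and executes the function.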

Oracle Net Configuration Assistant – Used to configure listener names and protocols, the naming methods the client will use to resolve connect identifiers, net service names in a tnsnames.ora file, and directory server access.
Oracle Net Manager – Used to configure and manage Oracle networking. The components are: Service naming – create or modify network descriptions for database services in a tnsnames.ora file, directory server, or an Oracle Names server; Listener – the process on the server that handles client connection requests; Profile – client parameters; Oracle Names server – create or modify configuration settings on an Oracle Names server.
Oracle Net Connections

Oracle Net is used to establish connections between applications on a network, depending on the network configuration, the location of the nodes, the application and the network protocol. The connection types are client/server applications, Java applications and web client applications.

Client/Server Application – Oracle Net enables the connection between client and server; it resides on both the client and the server and is layered on top of the network protocol.
The layers – Client: client application (OCI), Two-Task Common, Oracle Net foundation layer, Oracle protocol support, network protocol.
Database server: network protocol, Oracle protocol support, Oracle Net foundation layer, Two-Task Common, Oracle RDBMS (OPI).
Client application – uses OCI (Oracle Call Interface) to communicate with the server; it provides an interface between the client application and the server (SQL).
Two-Task Common – performs character set and data type conversion between differing character sets or formats on the client and server; it sits in the presentation layer.
Oracle Net foundation layer – responsible for establishing and maintaining the connection between client and server: the location of the server, whether one or more connections are involved, and how to handle exceptions and interrupts.
Oracle protocol support – TCP/IP, TCP/IP with SSL, Named Pipes, LU6.2, Virtual Interface.

Web Client Application Connections – Web browsers can connect to an Oracle server using the JDBC Oracle Call Interface driver, or connect directly by using IIOP and HTTP.
Application web server Java application stack – Java application, JDBC OCI driver, TTC, Oracle Net foundation layer, Oracle protocol support, network protocol.
Database server – network protocol, Oracle protocol support, Oracle Net foundation layer, TTC, Oracle RDBMS.
Application web server Java applet client stack – Java applet, JDBC thin driver, JavaTTC, JavaNet, TCP/IP network protocol.
Database server – TCP/IP network protocol, Oracle protocol support, Oracle Net foundation layer, TTC, Oracle RDBMS.

Web application connection with no middle tier – Using HTTP and IIOP the client can communicate directly with the server. The listener can be configured to accept HTTP and IIOP connections; HTTP is used to access the Oracle Internet File System and IIOP for connections to Enterprise JavaBeans and Common Object Request Broker Architecture (CORBA) applications.
Connectivity concepts and terminology – Database services: a service name is a LOGICAL representation of a database, the way a database is presented to clients. Connect descriptor – the location of the database and the name of the database service. Listener – receives client connection requests and hands each request to the database server.
Service registration – the database registers information with the listener about the service handlers available for each instance. Service handlers – connection points (dispatchers or dedicated server processes).

Oracle Net Configuration Models

Localized management – a local file on each computer in the network; Centralized management – an LDAP-compliant directory server or an Oracle Names server.

Oracle Net configuration files – ldap.ora: located on the database server and client, holds the parameters used to access the directory server. listener.ora: located on the database server; the configuration file includes the protocol addresses on which the listener accepts connection requests, the database and non-database services it listens for, and control parameters used by the listener. names.ora: located on a Names server; includes the location of domain information and optional configuration parameters of the Names server. sqlnet.ora: located on both the database server and the client. tnsnames.ora: located on the client; contains net service names mapped to connect descriptors.
Chapter – 3 Basic Oracle Net Server-Side Configuration

Listener Process – A listener process can listen for incoming connections on behalf of one database or a number of databases. It can listen for more than one database, multiple listeners can listen for one database, and it can listen on multiple protocols; the name of each listener must be unique within a listener.ora file.

Connection Methods
Spawn and Bequeath – the listener passes, or bequeaths, the connection to a spawned dedicated server process; this occurs only in a dedicated server configuration. Direct Hand-off – the listener hands off a connection to a dispatcher when Oracle Shared Server is used (shared server only). Redirected (Message) – a connection may be redirected by the listener to a dispatcher if a shared server is used. Whether the session is bequeathed, handed off or redirected is transparent to the user.

Spawn and Bequeath and Direct Hand-off connections – this ability depends on the operating system. When the listener forks a dedicated server process and bequeaths the connection to that server process, it is called a bequeath session.

Direct hand-off with Oracle Shared Server – if the operating system supports it, the listener hands the connection directly to the dispatcher process; this is a direct hand-off connection.

Redirected session – In a redirect, the client establishes the connection by sending a connect packet; the listener checks the SID and opens a new process to service the connection. An IPC connection is established between the listener and the new process. The new process selects a new TCP/IP port and passes it to the listener; the listener inserts this new port into a redirect packet and sends it back to the client, and the original TCP socket between client and listener is reset. A new TCP connection is established to the address in the redirect packet, and the dedicated process can then accept the incoming connection and send back an accept message.

Service Configuration and Registration

Dynamic service registration – does not require configuration in the listener.ora file; the listener relies on the PMON process. If the Oracle9i JVM is installed, HTTP and IIOP listening endpoints can be registered dynamically with the listener. If the listener is not up, PMON will try to contact it again after the listener is started; the listener will reject any connection for an unregistered service. Connection-time failover is enabled, and connection load balancing is enabled for shared servers.
Static service configuration – used for Oracle 8 and earlier releases; requires listener.ora configuration; required for OEM and some other services.
The listener.ora defaults are: listener name LISTENER, port 1521, protocols TCP/IP and IPC, SID name the default instance, host name the default host name.

LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = abvc)(PORT = 1521)))

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /home/oracle)
      (GLOBAL_DBNAME = orcl.in.idfc.co)
      (SID_NAME = idfc)))

Configuring services, logging and tracing can be done.

Dynamic service registration – the following initialization parameters should be configured for service registration to work: SERVICE_NAMES and INSTANCE_NAME. By default PMON registers with a local listener on the server on the default local address of TCP/IP port 1521. PMON can register with a non-default listener if the LOCAL_LISTENER initialization parameter is defined, or if the LISTENER attribute of the DISPATCHERS initialization parameter is defined for Oracle Shared Server.
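A minimal sketch of the relevant initialization parameters (the service name, instance name and listener alias shown are illustrative, not defaults):

SERVICE_NAMES  = orcl.example.com
INSTANCE_NAME  = orcl
# Only needed when the listener is not on the default local address (TCP port 1521):
LOCAL_LISTENER = listener02_alias

The alias given in LOCAL_LISTENER would itself be resolved through tnsnames.ora or a directory server.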

Configuring the Listener for the Oracle9i JVM: IIOP and HTTP

The listener can be configured to accept connections from clients using IIOP and HTTP. Use static listener registration if the database is Oracle 8i or earlier; if both the listener and the database are Oracle9i, configuration occurs dynamically during service registration.
Listener Control utility – lsnrctl <command name>, or commands at the LSNRCTL> prompt, for example LSNRCTL> SET CURRENT_LISTENER listener02, LSNRCTL> START, LSNRCTL> STOP. Additional commands: change_password, exit, help, quit, reload, save_config, services, set parameter, show parameter, status.
Changing listener parameters with SET
LSNRCTL> SET TRC_LEVEL ADMIN, LSNRCTL> SHOW TRC_DIRECTORY. Other commands: set current_listener, set log_directory, set log_file, set log_status, set password, set save_config_on_stop, set trc_directory, set trc_file, set trc_level.
Chapter – 4 Naming Method configuration

Naming methods are used by a client application to resolve a connect identifier to a connect descriptor when attempting to connect to a database service; there are five naming methods.
Host naming – a client can connect to a server using a host name under the following conditions: the client connects to an Oracle database service using Oracle Net Services client software; client and server use TCP/IP; host names are resolved through an IP address translation mechanism such as DNS or a local /etc/hosts file; and no advanced features such as Connection Manager or security options are used. This method requires minimal user configuration, eliminates the need to create and maintain a local names configuration file, and eliminates the need for Oracle Names or Oracle Internet Directory. Client-side requirements – Oracle Net Services and TCP/IP; the name can be resolved through DNS or a hosts file.
Host naming – server side – In 9i, information about the database is automatically registered with the listener, including the global database name; the default listener runs on TCP/IP on port 1521; the LOCAL_LISTENER parameter is set in the initialization parameter file.
Naming methods configuration – the configuration assistant is packaged with the Java Runtime Environment; we can select a naming method such as host naming, and the choice is written to the sqlnet.ora file. Local naming – provides a relatively straightforward method for resolving service name addresses, resolves service names across networks running different protocols, and can be easily configured using a graphical configuration tool; it requires net service names to be stored in tnsnames.ora. A net service name is a short, convenient name that is mapped to a network address contained in tnsnames.ora; entries can be added, reconfigured, deleted and renamed. We then choose the database version (Oracle 8i or later, or Oracle release 8.0 or before), enter the database service name (this should be the global database name, db_name and db_domain as specified in the SERVICE_NAMES parameter), choose the network protocol (tcp/tcps/ipc), enter the host name and port number, and optionally perform a test connection and enter the net service name. The generated file is tnsnames.ora; its contents are the net service name, description, address, protocol, host, port and connect_data, as sketched below.
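A typical tnsnames.ora entry, assuming an illustrative host dbhost01 and service orcl.example.com, looks roughly like this:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl.example.com)))

A client can then connect with sqlplus scott/tiger@ORCL, and the alias ORCL is resolved to this connect descriptor.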
sqlnet.ora – this file controls the behaviour of Oracle Net Services. The default location is $ORACLE_HOME/network/admin; this can be overridden by the TNS_ADMIN environment variable. Once sqlnet.ora and tnsnames.ora are configured we can connect to Oracle from the client.
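As a sketch, a minimal sqlnet.ora that tries local naming first and then host naming (the default domain shown is illustrative) could be:

NAMES.DIRECTORY_PATH = (TNSNAMES, HOSTNAME)
NAMES.DEFAULT_DOMAIN = example.com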
Troubleshooting – verify that the tnsnames.ora file exists, that its location is specified in TNS_ADMIN, that the connection string is correct, that there are no duplicate sqlnet.ora files, and that the correct database name was entered.
Usage and configuration of Oracle Shared Server

Server configurations – dedicated server process, shared server process

Dedicated – The user process and server process are separate; each user process has its own server process; the user and server process can run on different machines to take advantage of distributed processing; there is a one-to-one ratio between user and server processes; when there is no request, the dedicated server still exists but remains idle.

Oracle Shared Server – This configuration enables shared servers and dedicated servers to be combined; both kinds of user and server processes exist within the same instance. Here client user processes connect to a dispatcher; the PMON process registers the location and load of the dispatchers with the listeners, enabling the listener to forward requests to the least utilized dispatcher. A dispatcher can support multiple client connections concurrently; each client connection is a virtual circuit. A virtual circuit is a piece of shared memory used by the dispatcher for client database connection requests and replies. The dispatcher places the virtual circuit in a common queue; a shared server picks the virtual circuit up from the common queue, services it and relinquishes it before attempting to retrieve another one. Dispatchers work only over TCP/IP.
Benefits – Reduces the number of processes against an instance, increases the number of possible users, achieves load balancing, reduces the number of idle server processes, and reduces memory usage and system overhead.
Using a dedicated server with Oracle Shared Server – when submitting batch jobs or connecting as SYSDBA for startup, shutdown or recovery, a dedicated server should be used. To request a dedicated server, the clause SERVER=DEDICATED should be given in the Oracle TNS connect descriptor in tnsnames.ora, as sketched below.
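A sketch of such an entry (the host and service names are illustrative):

FIN_DEDICATED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = fin.example.com)
      (SERVER = DEDICATED)))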
Connecting to an Oracle Shared Server – the listener process waits for connection requests from user processes. When a request arrives, the listener determines whether to connect the user process to a dispatcher or assign it to a dedicated server process. If the user process is to be connected to a dispatcher, the listener gives the user process the address of a dispatcher process; if the user process requests a dedicated server, the listener creates a dedicated server process and connects the user process to it. Once the connection has been established, either through a dispatcher or a dedicated server, it is maintained for the duration of the session.
Processing a request – A user sends a request to its dispatcher; the dispatcher places the request in the request queue in the SGA; a shared server picks the request up from the request queue and processes it; the shared server places the response on the calling dispatcher's response queue; the response is handed off to the dispatcher; and the dispatcher returns the response to the user.
Request queue – One request queue is shared by all dispatchers; shared servers monitor the request queue for new requests; requests are processed on a FIFO basis; shared servers place completed requests on the calling dispatcher's response queue; each dispatcher has its own response queue in the SGA; the dispatcher is responsible for sending the response back to the user; users stay connected to the same dispatcher for the duration of the session.

SGA and PGA –

Dedicated server: user session data is kept in the PGA. SGA – shared pool and other memory structures. PGA – stack space, user session data, cursor state.
Oracle Shared Server: user session data is held in the SGA. SGA – user session data, cursor state, shared pool and other memory structures. PGA – stack space.

Configuring Oracle Shared Server – DISPATCHERS and SHARED_SERVERS are required; Oracle Net Services is also required. Optional parameters – MAX_DISPATCHERS, MAX_SHARED_SERVERS, CIRCUITS, SHARED_SERVER_SESSIONS.

Dispatchers – Specifies the number of dispatchers initially started for a given protocol.

DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=2)", "(PROTOCOL=IPC)(DISPATCHERS=1)"

Only one of the following attributes is required: protocol, address or description. The other attributes are dispatchers, sessions, listener and connections.
Initial number of dispatchers = CEIL(average number of concurrent sessions / connections per dispatcher)
MAX_DISPATCHERS=5 – specifies the maximum number of dispatchers that can run simultaneously. Issue the ALTER SYSTEM command to add more dispatchers than were initially started:
ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=2)';

SHARED_SERVERS – Specifies the number of shared server processes created when an instance is started up, e.g. SHARED_SERVERS=6. The number of server processes required for a database depends on how many users connect to it and how much processing they need; one shared server is usually adequate for 10 to 20 users. It is better to keep the initial number of shared servers low, because they remain allocated even when idle; Oracle terminates servers that are idle when there are more servers than the minimum limit set. If SHARED_SERVERS is set to 0, Oracle terminates all shared servers once they become idle and will not start new ones, which disables shared server temporarily; this can be done with ALTER SYSTEM SET SHARED_SERVERS=0. Type: integer; class: dynamic; default value: 0; range of values: OS dependent.

MAX_SHARED_SERVERS – Specifies the maximum number of shared servers that can be started; allows shared servers to be allocated dynamically based on the length of the request queue. MAX_SHARED_SERVERS=10. Type: integer; class: static; default value: derived from SHARED_SERVERS; range of values: OS dependent.
CIRCUITS – specifies the total number of virtual circuits that are available for inbound and outbound network sessions; contributes to total SGA size. CIRCUITS=100. Type: integer; class: static; default value: if shared server is configured, the value of SESSIONS.
SHARED_SERVER_SESSIONS – Specifies the total number of Oracle Shared Server user sessions to allow; setting this parameter enables you to reserve user sessions for dedicated servers. SHARED_SERVER_SESSIONS=100. Type: integer; class: static; default value: the lesser of CIRCUITS and SESSIONS − 5; range of values: up to SESSIONS − 5.
Related parameters –
LARGE_POOL_SIZE – specifies the size in bytes of the large pool allocation heap. Oracle Shared Server may force the default value to be set too high, causing performance problems or problems starting the database; the large pool should be used to allocate the shared-server-related UGA. To see which pool an object's memory resides in, look at the POOL column of the V$SGASTAT view.
SESSIONS – specifies the maximum number of sessions that can be created in the system.

Verifying setup – verify that the dispatchers have registered with the listener when the instance was started by issuing lsnrctl services. Verify that you are connected through a shared server by making a single connection and then querying the V$CIRCUIT view, which shows one entry per shared server connection.
Dynamic views – V$CIRCUIT, V$SHARED_SERVER, V$DISPATCHER, V$SHARED_SERVER_MONITOR (tuning of shared server processes), V$QUEUE, V$SESSION.
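As a quick, hedged check from SQL*Plus (the columns named here are from the documented 9i view definitions):

-- One row per dispatcher, with its protocol address and status
SELECT name, network, status FROM v$dispatcher;

-- One row per shared server connection (virtual circuit)
SELECT circuit, dispatcher, status FROM v$circuit;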
Chapter – 6 Backup and recovery Issues

Protect the database from numerous types of failures, increase the Mean Time Between Failures (MTBF), decrease the Mean Time To Recover (MTTR), and minimize data loss.
Categories of failure – statement failure, user process failure, user error, network failure, instance failure, media failure.
Causes of statement failures – a logic error in an application; an attempt to enter invalid data into a table; an attempt to perform an operation with insufficient privileges; an attempt to create a table that exceeds the allotted quota limits; an attempt at an INSERT or UPDATE to a table, causing an extent to be allocated, with insufficient free space available in the tablespace.

Resolutions for statement failures – correct the logical flow of the program, modify and reissue the SQL statement, provide the necessary database privileges, change the user quota limit by using the ALTER USER command, add file space to the tablespace, enable resumable space allocation.
Causes of user process failures – the user performed an abnormal disconnect in the session, the user session was abnormally terminated, or the user's program raised an address exception which terminated the session.
Resolution of user process failures – the PMON process detects an abnormally terminated user process, rolls back the transaction and releases any resources and locks held by it.
Possible user errors – a user accidentally drops or truncates a table, a user deletes all rows in a table, a user commits data but then discovers an error in the committed data.
Resolution of user errors – train the database users, recover from a valid backup, import the table from an export file, use LogMiner to determine the time of the error, recover with a point-in-time recovery, use LogMiner to perform object-level recovery, use Flashback to view and repair historical data.
Instance failure – can happen for many reasons: power outage, hardware problems, failure of an Oracle server background process. Recovery is automatic; once you start the database the Oracle server performs both the roll-forward and roll-back phases, and the alert log can be checked to find the reason for the failure.
Causes of media failure – a head crash on a disk drive, physical problems reading from or writing to database files, a file accidentally erased. Resolutions – the recovery strategy depends on which backup method was chosen and which files are affected; if available, apply archived redo log files to recover data committed since the last backup.
Defining a backup and recovery strategy – business requirements (mean time to recover, mean time between failures, evolutionary process); operational requirements (24-hour operations, testing and validating backups, database volatility); technical considerations (resources: hardware, software, manpower and time; physical image copies of the OS files; logical copies of the objects in the database; database configuration; transaction volume, which affects the desired frequency of backups); management concurrence.
Disaster recovery issues – how will the business be affected in the event of a major disaster such as an earthquake, fire or flood, the complete loss of a machine, a malfunction of storage hardware or software, or the loss of key personnel such as the DBA? Plan for disaster recovery.
Chapter – 7 Instance and Media Recovery Structures

Memory Structures

Database buffer cache – memory area used to store blocks read from data files; data is read into the blocks by server processes and written out by DBWn asynchronously.
Redo log buffer – memory containing before- and after-image copies of changed data to be written to the redo logs.
Large pool – an optional area in the SGA providing large memory allocations for backup and restore I/O operations and shared memory for shared servers.
Shared pool – stores parsed versions of SQL statements, PL/SQL procedures and data dictionary information.
Java pool – Java code and data within the Java VM.

Background Processes

DBWn – writes dirty buffers from the database buffer cache to the data files.
LGWR – writes data from the redo log buffer to the redo log files.
SMON – performs automatic instance recovery, recovers space in temporary segments, and merges contiguous areas of free space depending on parameters.
PMON – cleans up failed user processes, performs rollback and releases the resources held by the failed process.
CKPT – synchronizes the headers of the data files and control files with the current redo log and checkpoint numbers.
ARCn – copies redo logs that have been marked for archiving.

User process – server process

Oracle database – data files, redo logs, control files, archived logs (optional).
Other files – initialization parameter file, server parameter file, password file.
Dynamic views – V$SGA, V$INSTANCE, V$PROCESS, V$BGPROCESS, V$DATABASE, V$DATAFILE.

Large pool – can be configured as a separate memory area in the SGA, used for Oracle backup and restore operations, I/O server processes and session memory for shared servers; it is sized by the LARGE_POOL_SIZE parameter. RMAN uses the large pool for backup and restore; if the large pool is configured but Oracle cannot obtain the memory it needs, it does not take buffers from the shared pool; instead it obtains I/O buffer memory from local process memory and writes a message to the alert file.
Large pool parameters
LARGE_POOL_SIZE – if not set, there is no large pool; details can be obtained from V$SGASTAT.
DBWR_IO_SLAVES – if set above 0, the number of I/O slaves used by ARCn, LGWR and the Recovery Manager process is set to 4.
BACKUP_TAPE_IO_SLAVES – used when backing up, copying or restoring data to tape; value TRUE or FALSE.
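A sketch of how these might appear in the parameter file (the sizes are illustrative only):

LARGE_POOL_SIZE       = 32M
DBWR_IO_SLAVES        = 4
BACKUP_TAPE_IO_SLAVES = TRUE

Whether the large pool was actually allocated can then be confirmed with:

SELECT pool, name, bytes FROM v$sgastat WHERE pool = 'large pool';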
Database buffer cache – an area in the SGA used to store recently used data blocks. The server process reads tables, indexes and undo segments from the data files into the buffer cache, where it makes changes to data blocks when required. The cache uses an LRU algorithm to determine which buffers can be overwritten to accommodate new blocks.

DBWn background process – the database writer process writes dirty buffers from the database buffer cache to the data files. It makes sure that a sufficient number of free buffers is available in the buffer cache; the database writer synchronizes the buffer cache and the data files at checkpoints, which are triggered in various situations. Multiple DBWn processes can be configured, although one is often enough.
Data files – contain both system and user data, and can contain committed and uncommitted data.
Tablespaces – a tablespace can contain one or more data files; a database should have system, temporary, undo, read-only and index/data tablespaces.

Redo log buffer – a circular buffer that holds information about changes made to the database, stored as redo entries. It contains the information needed to redo changes made by INSERT, DELETE, UPDATE, CREATE, ALTER and DROP; redo entries are copied by Oracle server processes from the user memory space into the redo log buffer.
LGWR – writes redo entries from the redo log buffer to the redo log files: when the redo log buffer is one-third full, when a timeout occurs, when there is 1 MB of redo, before DBWn writes modified blocks in the buffer cache to the data files, and when a transaction commits.
Redo log files – store all changes made to the database. If the database must be recovered to a point in time when it was operational, the redo logs are used. LGWR writes to the redo log files in a circular fashion; at least two log groups are required.
Redo log switches – at a log switch, the current redo log group is assigned a log sequence number that identifies the information stored in that group. A switch happens when LGWR stops writing to one redo log group and begins writing to another, i.e. when it has filled one group; a switch can also be forced with ALTER SYSTEM SWITCH LOGFILE. A checkpoint occurs at every log switch, and processing can continue as long as at least one group is available.
Dynamic views – V$LOG (group number, sequence number, size of group, number of members, status, checkpoint change numbers), V$LOGFILE (lists the name, status and group of each log file), V$LOG_HISTORY (contains the log history from the control file).

Multiplexed redo log files – multiplexing requires at least two redo log members per group; all members contain identical information and are updated simultaneously; each group should have the same number of members of the same size. The Oracle server changes only the pointer in the control file after the files are renamed. To change the location of a log file: switch the log file, copy it to the new location with an operating system command, then use ALTER DATABASE RENAME FILE to make the change in the control file. A member can be added with ALTER DATABASE ADD LOGFILE MEMBER ... TO GROUP n, and dropped with ALTER DATABASE DROP LOGFILE MEMBER 'filename'; you cannot drop a member of the current or an active redo log group.
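A hedged sketch of the commands involved (the file paths and group numbers are illustrative):

ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/orcl/redo01b.log' TO GROUP 1;
ALTER DATABASE DROP LOGFILE MEMBER '/u02/oradata/orcl/redo01b.log';
ALTER DATABASE RENAME FILE '/u01/oradata/orcl/redo02a.log' TO '/u03/oradata/orcl/redo02a.log';

Note that RENAME FILE only updates the control file pointer; the file itself must first be copied to the new location with operating system commands.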

Checkpoint – checkpoints are used to determine where recovery should start. The checkpoint position is where recovery starts; the checkpoint lag is the distance between that position and the end of the redo log.
Checkpoint queue – a linked list of dirty blocks arranged in the order in which each block was first dirtied. Each entry in the queue includes the identifier of the data block and the location in the redo log where the block was first dirtied, called the RBA; the first entry in the queue identifies the oldest dirty block in the buffer cache.
Types of checkpoints –
Full checkpoint – all dirty buffers are written: SHUTDOWN NORMAL, IMMEDIATE or TRANSACTIONAL, and ALTER SYSTEM CHECKPOINT.
Incremental checkpoint – periodic writes; only the oldest blocks are written.
Partial checkpoint – dirty buffers belonging to a tablespace: ALTER TABLESPACE ... BEGIN BACKUP, ALTER TABLESPACE ... OFFLINE NORMAL.
CKPT process – responsible for signalling DBWn at checkpoints and updating the data file and control file headers. Every 3 seconds it records the RBA; the RBA represents the point in the redo log at which instance recovery is to begin in case of failure. At a log switch it writes checkpoint information to the headers of the data files.
Multiplexed control files – the control file is a binary file which describes the structure of the database; it must always be available. One file is enough, but it is recommended to have at least two. Control file contents – database name, timestamp of creation, synchronization information needed for recovery, names and locations of data files and redo log files, archiving mode of the database, current log sequence number, and Recovery Manager backup metadata. The name and location can be queried with SELECT name FROM v$controlfile.
ARCn process and archived log files – ARCn is an optional process used to archive redo log files; it is useful for backup and recovery where the database must be operational 24*7. It starts when a log switch occurs and copies one member of the last redo log group to the destinations specified. LGWR waits until the online redo log file is archived before reusing it; if one member is corrupt, another member is used.
Database synchronization – all data files (except offline and read-only ones) must be synchronized for the database to open. Synchronization is based on the current checkpoint number; applying the changes recorded in the redo log files synchronizes the data files, and the required redo log files are requested automatically by the Oracle server.
Phases of instance recovery – data files are out of sync; in the roll-forward (redo) phase all changes recorded in the redo log files since the checkpoint are reapplied to the data files, which also regenerates rollback data; the data files then contain both committed and uncommitted data and the database opens; in the roll-back (undo) phase any changes not committed are rolled back, leaving only committed data in the files.

Tuning crash and instance recovery performance – tune the duration of instance and crash recovery, and tune the phases of instance recovery. Tuning can be done by keeping the checkpoint position only a few blocks behind the most recent redo log record, but this may affect performance. Set initialization parameters to influence the number of redo log records and data blocks involved in recovery, size the redo log files to influence checkpointing frequency, issue SQL statements to initiate checkpoints, and parallelize instance recovery operations.

Initialization parameters influencing checkpoints

FAST_START_MTTR_TARGET – the expected MTTR in seconds. LOG_CHECKPOINT_TIMEOUT – the amount of time, in seconds, that has passed since the incremental checkpoint at the position where the last write to the redo log occurred. LOG_CHECKPOINT_INTERVAL – the number of redo log file blocks that can exist between an incremental checkpoint and the last block written to the redo log file.
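A hedged example of setting these in the parameter file (the values are illustrative, not recommendations):

FAST_START_MTTR_TARGET  = 60        # aim for roughly 60 seconds of crash recovery
LOG_CHECKPOINT_TIMEOUT  = 1800      # seconds
LOG_CHECKPOINT_INTERVAL = 10000     # redo log blocks

FAST_START_MTTR_TARGET is dynamic, so it can also be adjusted with ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60;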
The V$INSTANCE_RECOVERY view provides information on:
RECOVERY_ESTIMATED_IOS – the number of dirty buffers in the buffer cache.
ACTUAL_REDO_BLKS – the current number of redo blocks required to be read for recovery.
TARGET_REDO_BLKS – the goal for the maximum number of redo blocks to be processed during recovery.
LOG_FILE_SIZE_REDO_BLKS – the number of redo blocks that must be processed during recovery corresponding to 90% of the size of the smallest log file.
LOG_CHKPT_TIMEOUT_REDO_BLKS – the number of redo blocks that must be processed during recovery to satisfy LOG_CHECKPOINT_TIMEOUT.
FAST_START_IO_TARGET_REDO_BLKS – this field is obsolete; it is retained for backward compatibility and is always NULL.
TARGET_MTTR – the effective mean time to recover target, in seconds.
ESTIMATED_MTTR – the current estimated mean time to recover, in seconds, based on the number of dirty buffers and log blocks.
CKPT_BLOCK_WRITES – the number of blocks written by checkpoint writes.

V$INSTANCE_RECOVERY – used to monitor the mechanisms available to limit recovery I/O; statistics from this view can be used to calculate which parameter has the greatest influence on checkpointing.
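A simple hedged query against this view, using the columns listed above:

SELECT recovery_estimated_ios, actual_redo_blks, target_redo_blks,
       target_mttr, estimated_mttr
FROM   v$instance_recovery;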
Tuning the phases of crash and instance recovery – tune the roll-forward and roll-back phases.

Rollforward phase – parallel block recovery; RECOVERY_PARALLELISM specifies the number of concurrent recovery processes.

Rollback phase – fast-start on-demand rollback: a server process encountering data that needs to be rolled back performs the following: it rolls back the block containing the required row, and hands off further recovery, which may be done in parallel, to SMON. The advantage is that a transaction does not have to wait until all the work of a long transaction has been rolled back.
Fast-start parallel rollback – this enables SMON to act as a coordinator and use multiple server processes to complete the rollback operation; parallel rollback is automatically started when SMON determines that a dead transaction has generated a large number of undo blocks. The FAST_START_PARALLEL_ROLLBACK parameter controls the maximum number of parallel recovery servers: FALSE – none, LOW (default) – 2 * CPU_COUNT, HIGH – 4 * CPU_COUNT.
Monitoring parallel rollback – V$FAST_START_SERVERS, V$FAST_START_TRANSACTIONS (usn, state, undoblksdone, undoblkstotal).
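A hedged example of enabling it and watching progress:

ALTER SYSTEM SET FAST_START_PARALLEL_ROLLBACK = HIGH;

SELECT usn, state, undoblksdone, undoblkstotal
FROM   v$fast_start_transactions;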
Chapter – 8 configuring database archiving mode

Redo log history

Archivelog mode – online redo log files are copied to archived log files. The database is protected from loss of data when a media failure occurs; backups can be taken while the database is online; when a tablespace other than SYSTEM goes offline, it can be recovered while the database is open.
Media recovery options – a backup copy of the damaged files can be restored and archived log files used to bring the data files up to date, while they are offline or online; the database can be recovered to a specific point in time; the database can be recovered to the end of a specified archived log file; the database can be recovered to a specific SCN.

A user must have the ALTER SYSTEM privilege to alter the archivelog mode.
To change the mode – shut down the database (normal, immediate or transactional); start the database in the MOUNT state so that the mode can be altered (STARTUP MOUNT); set the database to archivelog mode with ALTER DATABASE ARCHIVELOG; open the database with ALTER DATABASE OPEN; then take a full backup of the database. From now on the old backup is of no use; the new backup is the one to which future archived log files will be applied. Note that this does not by itself enable the archiver process; a sketch of the sequence follows.
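A minimal sketch of the sequence in SQL*Plus, connected as SYSDBA:

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- then take a whole database backup, since earlier backups no longer apply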
Setting the archive mode – automatic or manual. In automatic mode, ARCn processes are enabled and copy redo log files as they are filled; in manual mode the files have to be copied by hand. Before deciding on an archiving mode you must set the database in archivelog mode; failing to do this will prevent the redo log files from being copied. The database should be shut down cleanly before enabling the archive process.

Specifying multiple ARCn processes

The dynamic parameter LOG_ARCHIVE_MAX_PROCESSES controls the number of archive processes at instance startup; a maximum of ten ARCn processes can be specified, and the number can be changed with ALTER SYSTEM. During a period of heavy transaction load or activity, additional archive processes can be started temporarily to eliminate archiving bottlenecks: ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=3. If the database is in archivelog mode, archiver processes can be started every time the instance is started, controlled by LOG_ARCHIVE_START=TRUE or FALSE.
Enabling automatic archiving after instance startup
ALTER SYSTEM ARCHIVE LOG START 'file path'. To disable: ALTER SYSTEM ARCHIVE LOG STOP, then set LOG_ARCHIVE_START=FALSE in the initialization parameter file.
Manually archiving online redo log files – ALTER SYSTEM ARCHIVE LOG CURRENT. When manually archiving, the following options can be used: THREAD (the thread containing the redo log file), SEQUENCE (archives the log file identified by sequence number), CHANGE (based on SCN), GROUP (based on group), CURRENT (the current file of the specified thread), LOGFILE (the redo log file group with members identified by file name), NEXT (the oldest online group), ALL (all files from the specified thread), START (enables archiving), TO (specifies the location to which the group is to be archived), STOP (disables archiving).

Specifying archive log destinations – use LOG_ARCHIVE_DEST_n to specify up to ten archival destinations; use LOG_ARCHIVE_FORMAT to include the log sequence number and thread number as part of the file name; the LOG_ARCHIVE_DUPLEX_DEST parameter defines a backup destination. Multiple archive log destinations can be on local disk or a remote standby database by using LOG_ARCHIVE_DEST_n. An archive location can be set as MANDATORY or OPTIONAL, and a retry time can be defined in case of failure. A minimum number of successful local destinations can also be specified with LOG_ARCHIVE_MIN_SUCCEED_DEST. An online redo log group can be reused only if archiving has been done to all mandatory locations and the number of locations archived is greater than or equal to the value of the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter.
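A hedged sketch of the parameters (the paths, standby service name and format are illustrative):

LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/arch MANDATORY'
LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_db OPTIONAL REOPEN=60'
LOG_ARCHIVE_MIN_SUCCEED_DEST = 1
LOG_ARCHIVE_FORMAT = 'arch_%t_%s.arc'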

An archival destination can be disabled by using the dynamic initialization parameter ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER, and enabled again with ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE.

Specifying the filename format – LOG_ARCHIVE_FORMAT = extension, where the extension should include the variable %s or %S for the log sequence number.
Obtaining archive log information
V$ARCHIVED_LOG, V$ARCHIVE_DEST, V$LOG_HISTORY, V$DATABASE, V$ARCHIVE_PROCESSES.
The ARCHIVE LOG LIST command provides information about the log mode and the archiving status of the database.
Chapter 9 Oracle recovery Manager overview and configuration

Recovery Manager features – RMAN provides a flexible way to back up the database, tablespaces, data files, control files and archive logs; store frequently executed backup and recovery operations; perform incremental block-level backups; skip unused blocks; and specify limits for backups. RMAN is a command-line utility; OEM provides a graphical interface. It offers many features, such as storing frequently executed operations as scripts in the database. Using the incremental block-level backup feature a backup can be limited to only those blocks that have changed since the previous backup; RMAN can be used to manage the size of backup pieces and save time by parallelizing the backup operation; and RMAN operations can be integrated with the operating system scheduler to automate backups.
Other features – detection of corrupted blocks during backup, increased performance through automatic parallelization, generation of less redo, limiting of I/O for backups, tape streaming, and management of backup and recovery tasks.
Recovery Manager components – the Recovery Manager executable, OEM, server sessions, the target database, the RMAN repository (the data used by RMAN for backup and restore, called RMAN metadata), channels, and the media management library (used for reading from and writing to tapes).

RMAN repository – using the control file. The RMAN repository is metadata about the target database and its backup and recovery operations; the repository is always stored in the control file of the target database. CONTROL_FILE_RECORD_KEEP_TIME determines the minimum age in days of a record before it can be overwritten. The control file can grow in size; before it grows, certain steps are performed: free space in the control file is used first, then entries older than CONTROL_FILE_RECORD_KEEP_TIME are overwritten, and if no more space is available the control file grows until it reaches the operating system's maximum file size.
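For example, to keep repository records for two weeks (an illustrative value; the parameter is dynamic):

ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME = 14;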

Channel allocation – a channel is a stream of data to a backup device; it must be allocated before a backup or recovery is started. Each channel establishes a connection from the RMAN executable to a target or auxiliary database instance by starting a server session on the instance; only the one RMAN session communicates with the allocated server sessions. Manual or automatic channel allocation can be used. Manual allocation is done with the ALLOCATE CHANNEL command inside a RUN block, or with the ALLOCATE CHANNEL FOR MAINTENANCE command issued at the RMAN prompt; manual channel allocation overrides automatic channel allocation. Automatic allocation provides a preconfigured disk channel that can be used for backups and copies to disk; in addition a set of persistent automatic channels can be configured.
Automatic allocation is configured with: change of default device type – CONFIGURE DEFAULT DEVICE TYPE TO sbt; parallelism for automatic channels – CONFIGURE DEVICE TYPE DISK PARALLELISM 3; automatic channel options – CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/rman/%u'; CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2G. When a channel is allocated automatically its name is in the format ORA_devicetype_n. By default RMAN has a preconfigured disk channel, so a backup to disk can be taken without any manual configuration.
Manual channel allocation – the BACKUP, COPY, RESTORE and RECOVER commands require at least one channel; allocating a channel starts a server process on the target database; the number of channels affects the degree of parallelism; channels can write to different media types; a channel can be used to impose limits. For example:
RMAN> RUN {
  ALLOCATE CHANNEL c1 TYPE DISK FORMAT '/db01/backup/usr0520.bak';
  BACKUP DATAFILE '/db01/oradata/user01.dbf';
}
Every command issued in RMAN requires at least one channel; the maximum number of channels allocated determines the degree of parallelization used during the operation. Performance can be tuned by giving the ALLOCATE CHANNEL command the RATE, MAXPIECESIZE and MAXOPENFILES parameters.

Media management – to use tapes for backup storage, Oracle requires a media manager. It is responsible for loading, labelling and unloading sequential media such as tape drives for backup and restore; Oracle calls MML routines for this purpose. The MML is not required if backups are taken only to disk. The Oracle Backup Solutions Program provides a range of media management products.
Procedure – 1. Install and configure the media management software on the server. 2. Ensure that normal flat-file backups can be taken with it. 3. Obtain and install the Oracle media management module from the vendor.
The media manager handles restore as well as backup operations – the server requests the restore of a particular file; the media manager identifies the tape containing the file and reads the tape; the media manager passes the data back to the Oracle server; and the Oracle session writes the file to disk.

Types of connections with RMAN – target database (connect to the target database with SYSDBA privileges); recovery catalog database (an optional database which is configured as an RMAN repository); auxiliary database (a database created using the RMAN DUPLICATE command, a temporary database used during tablespace point-in-time recovery, or a standby database, which is a copy of the production database that can be used for disaster recovery).
Connecting to the target database without a catalog
Local connection – set ORACLE_SID=orcl, then rman target / nocatalog
Remote connection – rman target sys/target_pwd@db01
Connection process – a user process is created for Recovery Manager, which creates two server processes: one connected to the target database for executing SQL commands, resynchronizing the control file and rolling forward during recovery, and one polling process connected to the target database to locate RPC completions. Backup and recovery information is retrieved from the control file.

Additional RMAN command-line arguments

Writing RMAN output to a log file – rman target sys/oracle log $HOME/oradata/rman.log append
Executing a command file when RMAN is invoked – rman target sys/oracle log $HOME/oradata/u03/rman.log append @'$HOME/student/script.rsv'

Recovery Manager modes

Interactive mode – use it when doing analysis, minimize regular usage, avoid using the log option.
Batch mode – meant for automated jobs; minimizes operator errors; set the log file to obtain information.
RMAN can also be driven through OEM.

RMAN commands –
Stand-alone commands – executed only at the RMAN prompt, executed individually, cannot appear as subcommands within RUN. E.g. CHANGE, CONNECT, CREATE CATALOG, RESYNC CATALOG, CREATE SCRIPT, DELETE SCRIPT, REPLACE SCRIPT, REPORT SCHEMA.
Job commands – must appear within the braces of a RUN block and are executed as a group.

Example job command:
RUN {
  BACKUP INCREMENTAL LEVEL 0
    FORMAT '/u01/db01/backup/%d_%s_%p'
    FILESPERSET 5
    (DATABASE INCLUDE CURRENT CONTROLFILE);
  SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
}
RMAN executes the block as a unit; if one command in the block fails, the entire block stops.

RMAN configuration settings – RMAN is preset with default configuration settings; the CONFIGURE command can be used to configure automatic channels, specify the backup retention policy, specify the number of backup copies to be created, limit the size of backup sets, exempt a tablespace from backup, and enable or disable backup optimization.

Configuring automatic channels – RMAN> configure channel device type disk format '/db01/backup/%u'; Implement a retention policy by specifying a recovery window – RMAN> configure retention policy to recovery window of 7 days; implement a retention policy by specifying redundancy – RMAN> configure retention policy to redundancy 2; a retention policy can be implemented in one of these two mutually exclusive ways: specify a recovery window, or specify a redundancy value.

Configure duplexed backup sets –

RMAN> configure datafile backup copies for device type disk to 2;
Configure backup optimization – RMAN> configure backup optimization on;
Use the CLEAR option to return to the default value – RMAN> configure retention policy clear;
RMAN> configure channel device type sbt clear;

SHOW command – displays persistent configuration settings; it can display automatic channel configuration settings, backup retention policy settings, the number of backup copies to be created, the backup set size limit, tablespaces excluded from backups, and the backup optimization status. RMAN> show all; (displays all settings)
Automatic channel – show channel; show device type; show default device type.
Retention policy – show retention policy. Number of backup copies – show datafile backup copies. Maximum size of backup sets – show maxsetsize. Tablespaces excluded from whole backups – show exclude. Status of backup optimization – show backup optimization.

LIST command – lists backup sets and copies of data files; lists backup sets and copies of any data file of a specified tablespace; lists backup sets and copies containing archive logs for a specified range.
List backups of all files in the database – RMAN> list backup of database;
List backup sets containing the file user01.dbf – RMAN> list backup of datafile '/u01/usr01.dbf';
List all copies of data files in the SYSTEM tablespace – RMAN> list copy of tablespace "SYSTEM";

REPORT command – produces a detailed analysis of the repository: which files need a backup, which backup files can be deleted, and which files are unrecoverable.
RMAN> report schema; report need backup; report obsolete; report unrecoverable;

REPORT NEED BACKUP – lists all data files requiring a backup; assumes the most recent backup would be used during a restore; provides three options – incremental, days, redundancy. Without an option it takes the configured retention policy into account.
RMAN> report need backup incremental 3; report need backup days 3; report need backup redundancy 3;

Recovery Manager uses PL/SQL packages as its interface to the target databases and the recovery catalog. These packages are DBMS_RCVCAT, which maintains information in the recovery catalog, and DBMS_RCVMAN, which queries the control file or recovery catalog; both packages are created by catproc.sql.
The DBMS_BACKUP_RESTORE package is created by dbmsbrks.sql and prvtbkrs.sql, called from catproc.sql; this package is used to interface with Oracle and the operating system to create, restore and recover backups of data files and archived redo log files.

RMAN – usage considerations – resources: shared memory and more processes. Privileges given to users – database: SYSDBA; OS: access to devices. Remote operations – set up the password file and ensure that it is backed up. Globalization – the globalization (NLS) environment variables determine the format used for the time parameters in RMAN commands.
Chapter – 10 User Managed Backups

Whole database backup – the target database may be open or closed; it is a backup of all data files and the control file. A consistent whole database backup is taken when the database has been closed with NORMAL, TRANSACTIONAL or IMMEDIATE. If it was closed with ABORT the database is inconsistent, so recovery is required. When the database must be available 24*7, inconsistent (open) backups have to be used, and these can be performed only on a database running in archivelog mode.
Partial database backups – tablespace, data file, control file. Tablespace backups can be done only in archivelog mode, because redo entries are required to make them consistent. Data file backup – a single data file can be backed up in archivelog mode; in noarchivelog mode this is possible only when the tablespace is read-only or the database is closed. Control file – RMAN can be configured to copy the control file automatically after a copy or backup; it can also be backed up through SQL commands.

User-managed backup and recovery – files are backed up with operating system commands, backups are restored with operating system commands, and recovery is done using SQL*Plus commands. Oracle recommends RMAN.
Query views to obtain database file information:
V$DATAFILE (name, status), V$CONTROLFILE (name), V$LOGFILE, DBA_DATA_FILES.

Backup methods – an OS backup without archiving is used to recover to the point of the last backup after a media failure; an OS backup with archiving is used to recover to the point of failure after a media failure.
Consistent whole database backup (closed) – this backup is taken when the database is closed. It can include the redo log files, parameter files and password file. Ensure that complete path names are noted and used properly during backup and recovery.
Advantages of whole database backups – conceptually simple, easy to perform, require little operator interaction.
Performing a consistent whole backup – compile the list of files that need to be backed up, shut down Oracle in normal, transactional or immediate mode, back up all data files and control files using a backup utility, then restart the Oracle instance.

Open database backup – if the business requires 24*7 availability, only open backups can be used. Backups of all tablespaces or data files are performed while they are online or offline; the online redo log files are not required to be backed up. Advantages: it maintains high database availability, can be done at the tablespace or data file level, and supports non-stop business operations. Considerations – more training is required for the DBA; tested and automated scripts are recommended for performing open database backups. Open database backup requirements – the database is set to archivelog mode, and online redo log files are archived by the ARCn process or by manually archiving the redo log files.

Open database backup options – backup can be enabled for all data files of a specific tablespace, or for an individual data file of a tablespace, while the database is online and available for transactions. When a data file is placed in backup mode, more redo log entries may be generated, because whole images of changed blocks of the data file in backup mode are written to the redo log instead of just row information. This can affect the performance of the log writer.
Performing an online tablespace backup
1. Set the data file or tablespace in backup mode by issuing ALTER TABLESPACE tablespace_name BEGIN BACKUP. This prevents the sequence number in the data file header from changing, so that logs are applied in recovery from the backup start time. Even in backup mode the tablespace is available for normal transactions.
2. Use an OS command to copy all the data files in the tablespace to backup storage; the log sequence in the backup files can differ when each tablespace is backed up sequentially, e.g. copy c:\users\users01.dbf e:\users\users01.dbf
3. After the data files are copied, issue the command ALTER TABLESPACE users END BACKUP.
4. Archive the unarchived redo logs so that the redo required to recover the tablespace backup is archived, as follows: ALTER SYSTEM ARCHIVE LOG CURRENT.
This can be done for all tablespaces, including the SYSTEM and undo tablespaces. The time between the BEGIN BACKUP and END BACKUP commands should be minimized, and it is best to back up one tablespace at a time. A complete sketch is shown below.
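A hedged end-to-end sketch for the USERS tablespace (the file paths are illustrative):

ALTER TABLESPACE users BEGIN BACKUP;
-- copy the datafiles with an OS command, e.g.
-- host copy c:\oradata\users01.dbf e:\backup\users01.dbf
ALTER TABLESPACE users END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;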

Backup status information – V$BACKUP – used to determine which files are in backup mode; when the BEGIN BACKUP command is issued, the status becomes ACTIVE. V$DATAFILE_HEADER (name, status, fuzzy) – information about data files that are in backup mode can also be found in this view.

Failure during an online tablespace backup – the system may crash, the database may be shut down, or some other major event may happen. If this occurs, the backup files will be unusable if the OS did not complete the copy, and the backup must be taken again. The database files that were in online backup will not be synchronized with the database, because the header is frozen when the backup starts. The database will not open, because the server assumes that the files have been restored from a backup. ALTER DATABASE END BACKUP must be used to take the data files out of backup mode; use it only when you are sure that the files were put in backup mode and not restored from a backup.

Ending an online backup after a failure – query V$BACKUP to check the backup status, then issue the ALTER DATABASE command to change the status and unfreeze the headers (ALTER DATABASE DATAFILE 'filename' END BACKUP; or ALTER DATABASE END BACKUP in 9i). The database can then be opened with ALTER DATABASE OPEN.
Read-only tablespace – ALTER TABLESPACE query_data READ ONLY.
Read-only tablespace operations – 1. The status of the tablespace is changed from read-write to read-only by
using ALTER TABLESPACE query_data READ ONLY. 2. When the ALTER TABLESPACE command is issued, a
checkpoint is performed for all datafiles associated with the tablespace, and the file headers
are frozen with the current SCN. 3. When we make a tablespace read-only we must back up
all datafiles in the tablespace. 4. The DBW0 process writes only to datafiles whose tablespaces
are in read-write mode, and normal checkpoints occur only in those files.
Read-only tablespace backup issues – Only one backup is needed after altering a tablespace
to read-only; resume a normal backup schedule for that tablespace after making it read-
write again. The control file must correctly identify the tablespace as read-only, otherwise
you must recover it. Because no writes are performed on read-only tablespaces, the only time they
need recovery is when they are damaged.

Backup Issues with LOGGING and NOLOGGING options
Logging – all changes are recorded in the redo, the object is fully recoverable from the last backup, no
additional backup is needed.
Nologging – minimal redo is recorded, the object is not recoverable from the last backup, and it may require
an additional backup.
Tablespaces, tables, indexes, or partitions may be set to NOLOGGING mode for faster loads of
data when using direct-load operations; when this option is set, direct-load
INSERT statements are not logged in the redo log files. Because the redo logs do not contain the
values that were inserted while the table was in NOLOGGING mode, the datafiles pertaining to the
table should be backed up immediately upon completion of the direct-load operation (see the sketch below).
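A sketch of such a load and the follow-up backup, using hypothetical table names (SALES, SALES_STAGING) and example paths:
SQL> alter table sales nologging;
SQL> insert /*+ APPEND */ into sales select * from sales_staging;
SQL> commit;
Then back up the affected datafile immediately, for example: host copy d:\oradata\sales01.dbf e:\backup\sales01.dbf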

Manual control file backups – Creating a binary image: ALTER DATABASE BACKUP CONTROLFILE
TO 'control1.bkp'; Creating a text trace file: ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
If RMAN is not used we should manually back up the control file. Certain information in the control
file, such as the names of the database's current online redo log files, is used by the server during instance
or media recovery.
Guidelines – Multiplex the control files and name them in the init.ora file by using the CONTROL_FILES
parameter. The ALTER DATABASE BACKUP CONTROLFILE TO TRACE command creates a script to
re-create the control file; the file is located in the directory specified by USER_DUMP_DEST, and it does not
contain RMAN metadata. We can also back up using the ALTER DATABASE BACKUP CONTROLFILE
command, which provides a binary copy of the control file. During a full closed backup, shut down the
instance and back up the control file along with the datafiles.

Backing up the initialization parameter file – CREATE PFILE FROM SPFILE. CREATE PFILE
='/backup/init.ora' FROM SPFILE; We can use the CREATE PFILE command to create a backup of the
server parameter file; it exports the file in text format. We can create the file in the
default location or in a specified location.

Verifying backups using the DBVERIFY utility – This utility enables you to perform
verification of a datafile by checking the structural integrity of the data blocks within specified
datafiles; it is external to the database. 1. The utility can be used to verify online datafiles. 2. It can be
invoked on a portion of a datafile. 3. It can be used to verify offline
datafiles. 4. Its output can be directed to an error log.
Command-line interface – an external command-line utility, used to ensure that a backup
database or datafile is valid before a restore. It can be a helpful diagnostic aid when data
corruption problems are encountered.
Eg. dbv file=/oradata/user01.dbf start=1 end=500 – this checks the integrity of user01.dbf
from block 1 to block 500.
DBVERIFY parameters – FILE (database file name), START (starting block address), END (ending block
address), BLOCKSIZE (required only if the file has a block size greater than 2 KB), LOGFILE (file for
logging output), FEEDBACK (displays a single dot for every n pages verified), HELP (on-screen
help), PARFILE (specifies the name of a parameter file to use). An example run is shown below.
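For example, a run that also writes its results to a log file and reports progress (the file name and block size are assumptions):
dbv file=/oradata/user01.dbf blocksize=8192 logfile=dbv_user01.log feedback=100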
Chapter 11 – RMAN Backups

RMAN Backup concepts


A Recovery Manager backup is a server-managed backup. It uses Oracle server sessions for
backup operations and can back up the entire database, all datafiles in a tablespace, selected
datafiles, control files, and archived redo log files.
Recovery Manager backups – Image copies are copies of datafiles, control files, or archived
redo log files. They can be made by RMAN or by an OS utility. An image copy contains all blocks of the
file, including unused blocks. Image copies cannot be multiplexed.
Backup sets can contain one or more files and come in two types:
Full backup – one or more files are backed up; all blocks containing data for the files
specified are backed up.
Incremental – includes only blocks that have changed since the last incremental backup;
this type requires a base-level (level 0) backup, which backs up all blocks containing data for the
files specified.
Backup sets – A backup set consists of one or more physical files stored in an RMAN-specific format on
disk or tape. There are two types: datafile backup sets (control file and datafiles) and archived log backup
sets (only archived redo log files). A control file can be included in a backup explicitly by using the
INCLUDE CURRENT CONTROLFILE syntax, or implicitly by backing up file 1 (the SYSTEM datafile). The RMAN
BACKUP command is used to back up archived log, data, and control files; it can place one or more files into
one or more backup sets on disk or tape, can run when the database is open or closed, and can produce
incremental or full backups.

Characteristics of backup sets – The BACKUP command creates backup sets; backup sets
usually contain more than one file; backup sets can be written to disk or tape. A restore
operation is required to extract files from a backup set. Datafile backup sets can be
incremental or full. Backup sets do not include never-used blocks. Archived redo log file
backup sets cannot be incremental.
Backup piece – A backup piece is a file in a backup set. A backup piece can contain
blocks from more than one datafile. For a large database a backup set can exceed one tape
or OS file; the size of each piece can be limited with MAXPIECESIZE on the CONFIGURE CHANNEL or
ALLOCATE CHANNEL commands.
Backup piece size can be limited as follows: RMAN> run { allocate channel t1 type 'sbt'
maxpiecesize = 4G; backup format 'df_%t_%s_%p' filesperset 3 (tablespace users);}

We can control the number of backup sets that the server produces as well as the number of files that
RMAN places into a single backup set. When using the BACKUP command we must mount or
open the target database. Recovery Manager allows us to make an inconsistent backup if
the database is in ARCHIVELOG mode; redo logs must then be applied during recovery to make it
consistent. Manual channel allocation is required if automatic allocation is not configured.
We can include the control file by using the INCLUDE CURRENT CONTROLFILE option.
Command Options
Full – copies all blocks, skipping blocks that have never been used. It does not skip blocks when backing up
archived logs or control files.
Incremental level integer – The session copies data blocks that have changed since the last
incremental n backup, where n is an integer from 1 to 4. When incremental is chosen, RMAN checks
for a level 0 backup; with incremental you must also name what to back up, such as datafile,
datafilecopy, tablespace, or database.
Filesperset integer – RMAN compares the FILESPERSET value to a calculated value (the number of files
backed up per channel) and takes the lower of the two, thereby ensuring that all channels are used. If
FILESPERSET is not specified, the default is 64 and the lower of the two values is still taken.
Skip – skips offline, read-only, or inaccessible datafiles or archived log files.
Maxsetsize integer – specifies the maximum size of a backup set in KB, MB, or GB.
Diskratio integer – directs RMAN to assign to a backup set only datafiles spread across the specified
number of drives; useful when datafiles are striped or stored on different disks.
Delete input – deletes the input files upon successful creation of the backup set; applies only to archived
redo log files and datafile copies.
Include current controlfile – takes a snapshot of the current control file.
Format – specifies the format of the name of the output pieces:
%c – specifies the copy number of the backup piece; %p – specifies the backup piece number within the
backup set; %s – specifies the backup set number (the number is a counter in the control file);
%d – specifies the database name;
%n – specifies the database name padded on the right with x characters to a total of eight characters;
%t – specifies the backup set time stamp; %u – specifies an 8-character name built from the
backup set number and time; %U – specifies a convenient shorthand for %u_%p_%c that guarantees
uniqueness in the file name. An example follows.
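For instance, a backup whose pieces are named from the database name, timestamp, set number, and piece number might look like this (the directory is an example):
RMAN> backup format '/backup/df_%d_%t_%s_%p' (tablespace users);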

Multiplexed backup sets – Multiplexing reads two or more datafiles concurrently into one backup set, which helps
keep a tape drive streaming. It is controlled by the FILESPERSET parameter of the BACKUP command and the
MAXOPENFILES parameter of the ALLOCATE CHANNEL and CONFIGURE CHANNEL commands.
Parallelization of backup sets – If parallelism is set with the CONFIGURE command, RMAN
parallelizes its operation and writes multiple backup sets in parallel; likewise, when multiple
backup sets are created and multiple channels are allocated, RMAN automatically parallelizes. Parallelism
of backup sets is therefore achieved by configuring PARALLELISM to a value greater than 1 or by allocating
multiple channels.
Duplexed backup sets – We can create up to four identical copies of each backup piece by
duplexing the backup set, using the following commands:
BACKUP COPIES, SET BACKUP COPIES, CONFIGURE ... BACKUP COPIES.
Eg. RMAN> backup copies 2 datafile 1, datafile 2 format '/backup/%u','/backup1/%u';
Here one identical copy is placed in /backup and one in /backup1. We can also take a backup of a
backup set as an additional way to manage backups, using the RMAN BACKUP
BACKUPSET command for disk-to-disk and disk-to-tape backups.

Archived redo log file backups – An online redo log file switch is performed automatically, and archived
log failover is performed (RMAN uses another copy if one archive destination copy is missing or corrupt).
Archived redo log backup sets – include only archived redo log files and are always full
backups. RMAN> backup format '/disk1/backup/ar_%t_%s_%p' archivelog all delete
all input;
A common problem is not knowing whether an archived log has been
completely copied to the archive destination before attempting to back it up. RMAN
has access to the control file or recovery catalog, so it knows which logs have been archived
and can be restored during recovery. We can back up archive log files with the BACKUP ARCHIVELOG
command or include them when taking a backup of datafiles and control files.
Characteristics of archived log backup sets – can include archived log files only, and are
always full backups.
Backup constraints – The database must be mounted or open. Online redo log backups
are not supported. Only clean (consistent) backups are usable in NOARCHIVELOG mode. Only current
datafile backups are usable in ARCHIVELOG mode. If a recovery catalog is used, the recovery
catalog database must be open.
Image copies – An image copy contains a single datafile, archived redo log file, or control
file. An image copy can be created with the RMAN COPY command or with an OS command.
When the copy is created with RMAN, the server session validates the blocks in the file and records the
copy in the control file.
Characteristics of image copies – Can be written only to disk. Can be used for recovery
immediately; it does not need to be restored. Is a physical copy of a single datafile, archived
log, or control file. Is most like an OS backup. Can be part of an incremental strategy: it
can be designated as a level 0 backup in an incremental backup strategy; no other levels are
possible.
Image copy example – run { allocate channel c1 type disk;
copy datafile '/oradata/users.dbf' to '/backup/user.dbf', archivelog 'arch1.arc' to
'arch1.bak';}
The RMAN COPY command writes the output file to disk. You can copy datafiles, archived redo
log files, or control files. If the whole database is to be backed up this way, each
datafile must be copied separately with a COPY command. During the copy operation the Oracle server process
computes a checksum for each block to detect corruption, and RMAN verifies the checksum
when restoring the copy. We can suppress this with the NOCHECKSUM option; if the database already
maintains block checksums the option has no effect. We can also use the CHECK LOGICAL option
to test data and index blocks that pass the physical corruption checks for logical corruption.
With SET MAXCORRUPT we can set a threshold for logical and physical corruption; as long as the
threshold is not exceeded RMAN completes the copy and populates the details in V$COPY_CORRUPTION.
Image copy parallelization – The copy process itself is serial, but we can parallelize by using CONFIGURE
DEVICE TYPE ... PARALLELISM or by allocating multiple channels and specifying multiple files in one COPY
command. One COPY command can copy multiple files; a second
COPY command starts only after the first has completed.
Copying the whole database – Mount the database for a whole consistent backup. Use the
REPORT SCHEMA command to list the files. Use the COPY command to make an image copy
of each datafile. Use the LIST COPY command to verify the copies. We can include the current
control file in the copy with the CURRENT CONTROLFILE option. If CONFIGURE CONTROLFILE
AUTOBACKUP is ON, RMAN automatically backs up the control file.

Making Incremental Backups.


Full backups contain all used datafile blocks. A differential incremental backup at level n contains only
blocks modified since the most recent backup at level n or lower. A sketch of both commands follows.
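A sketch of a simple incremental strategy using these two commands:
RMAN> backup incremental level 0 database;   # base-level backup of all used blocks
RMAN> backup incremental level 1 database;   # only blocks changed since the last level 1 or level 0 backup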
Backup in NOARCHIVELOG Mode

Ensure sufficient space for the backup. Shutdown using the normal or immediate clause.
Mount the database. Allocate multiple channels if not using automatic. Run the backup
command. Verify that the backup is finished and cataloged. Open the database for normal
use.
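A sketch of these steps in one RMAN session, assuming automatic channels are already configured:
RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> backup database;
RMAN> alter database open;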

RMAN control file autobackups – Use the CONFIGURE CONTROLFILE AUTOBACKUP command to
enable this feature; by default it is OFF. When enabled, RMAN automatically performs a control file
autobackup after BACKUP or COPY commands. The backup is given a default name: RMAN names the control
file copy using the format %F. We can change this format using the CONFIGURE CONTROLFILE AUTOBACKUP
FORMAT and SET CONTROLFILE AUTOBACKUP FORMAT commands. The %F variable translates into
c-IIIIIIIIII-YYYYMMDD-QQ, where I stands for the DBID printed in decimal, YYYYMMDD is a time
stamp of the date, and QQ is a sequence number in hexadecimal that starts at 00 and has a maximum
of FF. Example commands follow.
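For example (the disk path is an assumption; the format must contain %F):
RMAN> configure controlfile autobackup on;
RMAN> configure controlfile autobackup format for device type disk to '/backup/cf_%F';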

Tags for backups and image copies – A tag is a meaningful name assigned to a backup
set or image copy. It provides a useful reference to backups and can be used in the LIST, RESTORE, and
SWITCH commands. The same tag can be used for multiple backup sets or file copies. An example follows.
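For example, a tagged backup that can later be referenced by that name in LIST or RESTORE (the tag name is arbitrary):
RMAN> backup database tag 'weekly_full';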
RMAN dynamic views – V$ARCHIVED_LOG: shows which archives have been created,
backed up, and cleared in the database. V$BACKUP_CORRUPTION: shows which blocks were
found corrupt during the backup of a backup set. V$COPY_CORRUPTION: shows block
corruption in image copies. V$BACKUP_DATAFILE: useful for creating equal-sized backup sets by
finding the number of blocks in each datafile; it also shows the number of corrupt blocks in a datafile.
V$BACKUP_REDOLOG: shows archived logs stored in backup sets. V$BACKUP_SET: shows
backup sets that have been created. V$BACKUP_PIECE: shows backup pieces created for
backup sets.
Monitoring RMAN backups – Correlate server sessions with channels by using the SET
COMMAND ID command. Query V$PROCESS and V$SESSION to determine which sessions
correspond to which RMAN channels. Query V$SESSION_LONGOPS to monitor the progress
of backups and copies (see the example query below). Use an OS utility to monitor the threads or processes.
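For example, a query of this kind estimates how far each running backup or restore has progressed (the percentage arithmetic is illustrative):
SQL> select sid, serial#, opname, round(sofar/totalwork*100, 2) pct_done
     from v$session_longops
     where opname like 'RMAN%' and totalwork > 0 and sofar <> totalwork;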

Miscellaneous RMAN issues – Abnormal termination: if an RMAN backup terminates abnormally,
the backup files will be incomplete; they may still exist at the OS level, but RMAN will not use these
files, so we need to remove them.
Detecting physical and logical block corruption – RMAN detects and stops any attempt
to perform an operation that would result in unusable backup files. Information about corruption
is logged in the control file and the alert log. Corrupt blocks recorded in the control file can be
viewed through V$BACKUP_CORRUPTION and V$COPY_CORRUPTION.
Detecting fractured blocks during open backups – RMAN reads whole database blocks and
detects whether a block is fractured by comparing the header and footer; if it detects a fracture it rereads
the block until it gets a consistent image. Because of this it is not required to put a tablespace into
backup mode when taking a backup using RMAN.
Chapter – 12 User Managed Complete Recovery

Media recovery – Used to recover a lost or damaged current datafile or control file.
Requires explicit invocation. Files are restored from backups. Redo data is applied to the
restored files from archived redo log files and online redo log files.
Recovery steps – Damaged or missing files are restored from a backup. Changes from
archived redo log files and online redo log files are applied as necessary; undo blocks
are generated as the redo is applied. This is termed rolling forward or cache recovery. The database may now
contain both committed and uncommitted changes. Undo blocks are then used to roll back any
uncommitted changes, known as rolling back. The database is now in a recovered state.

Restoration and datafile Media recovery with user managed procedures – restore files
using OS commands, Recover files using SQL*Plus recover command.

Factors affecting the mode


NOARCHIVELOG mode – data loss between backups can be tolerated, it is faster to reapply the
lost transactions manually, and data rarely changes.
ARCHIVELOG mode – the database cannot be shut down for a closed backup, data loss cannot be
tolerated, and it is easier to recover using archived redo log files than to reapply transactions.
Recovery in NOARCHIVELOG mode – restore all datafiles and control files; we can also restore the redo
log files, password file, and parameter files.
Advantages – Easy to perform with a low risk of error; recovery time is just the time needed to restore
all files. Disadvantages – Data is lost and must be reapplied manually, and the entire database is
restored to the point of the last whole closed backup.

Recovery in NOARCHIVELOG mode with redo log file backups – shut down the instance and restore all
of the Oracle files (datafiles, control files, and redo log files).
Recovery in NOARCHIVELOG mode without redo log file backups – shut down the instance.
Restore the datafiles and control files from the most recent whole database backup.
Perform cancel-based recovery – RECOVER DATABASE UNTIL CANCEL USING BACKUP
CONTROLFILE. Open the database with the RESETLOGS option – ALTER DATABASE OPEN RESETLOGS.

Recovery in Archivelog mode.


Complete Recovery – Uses redo data or incremental backups to update restored files,
Updates the database to the most current point in time, Applies all redo changes.
Incomplete recovery – Uses backup and redo logs to produce a noncurrent version of the
database.
Complete recovery – Make sure that the datafiles to be restored are offline. Restore only lost or
damaged datafiles. Do not restore the control files, redo log files, password file, or
parameter file. Recover the datafiles. Requirements: a valid backup containing the lost datafiles taken
after the database was set to ARCHIVELOG mode, all archived logs from that backup to the present, and
the online redo log files containing transactions not yet archived.
Advantages – Only the lost files need to be restored, and all data is recovered to the time of failure.
Recovery time is the time it takes to restore the lost files and apply all archived log files.
Disadvantages – You must have all archived log files since the backup from which you are
restoring.

Determining which files need recovery – Query V$RECOVER_FILE to determine which
datafiles need recovery. V$ARCHIVED_LOG gives a list of all archived redo log files for
the database, and V$RECOVERY_LOG gives a list of all archived redo log files required for
recovery.
User-managed recovery procedures
Recover a mounted or open database as appropriate:
Recover [automatic] database – closed database recovery
Recover [automatic] tablespace <number> | name – open database recovery
Recover [automatic] datafile <number> | name – open or closed database recovery

Using Archived Redolog files during recovery

To change archive location use the – Alter system archive log .. command
To apply redo log files automatically – Issue the set autorecovery on command before
starting media recovery. Enter auto when prompted for an archived log file. Use the
Recover Automatic command.
The Oracle server automatically applies redo log files; before a redo log file is applied the Oracle
server suggests the log file name to apply. If archived log files were not restored to the
LOG_ARCHIVE_DEST directory, the Oracle server must be told the new directory
path. Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
ALTER SYSTEM ARCHIVE LOG START TO '<new location>';
RECOVER FROM '<new location>' DATABASE;

To apply redo log files automatically – set autorecovery on

Restoring Datafile to a new location with user-managed procedures


1. Use an OS command to restore the datafile to the new location. 2. Start and mount
the instance. 3. Use the ALTER DATABASE RENAME FILE command to update the record in the
control file. A sketch with example paths follows.
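A sketch with example file names and drives (all paths are assumptions):
SQL> host copy e:\backup\users01.dbf f:\oradata\users01.dbf
SQL> startup mount
SQL> alter database rename file 'd:\oradata\users01.dbf' to 'f:\oradata\users01.dbf';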

Complete Recovery Methods –


Method 1 – Recovering a closed database. Used when the database is not operational
24*7, the recovered files belong to the SYSTEM or undo segment tablespace, or the whole
database or a majority of the datafiles need recovery.
Method 2 – Recovering an open database that was initially open. Used when file
corruption or accidental loss of a file has occurred but has not caused the database to be
shut down, the database is operational 24*7 and downtime must be kept to a minimum, and the
recovered files do not belong to the SYSTEM or undo segment tablespaces.
Method 3 – Recovering an open database that was initially closed. Used when a hardware
failure has shut down the system, the database is operational 24*7, downtime must be kept to a
minimum, and the restored files do not belong to the SYSTEM or undo segment tablespace.
Method 4 – Recovering a datafile with no backup. Used when media or user
failure has resulted in the loss of a datafile that was never backed up, all archived logs exist
since the file was created, and the file does not belong to the SYSTEM or undo tablespace.

Complete recovery of a closed database – Eg.

We have one tablespace u01 that contains corrupt blocks in its datafile abc; this can be
found by querying the V$DATAFILE and V$TABLESPACE views. This datafile belongs to the SYSTEM
tablespace, so a closed database recovery is required.
1. Shut down the instance – SHUTDOWN ABORT
2. Restore the file from backup – copy c:\backup\abc.dbf d:\data\
3. Start the instance and recover – STARTUP MOUNT, then RECOVER DATABASE
4. When recovery is complete, open the database – ALTER DATABASE OPEN

Open Database Recovery when the Database is Initially Open. – Eg.

The database is open and one datafile (file number 2) has been removed using OS commands. We
can find which tablespace the datafile belongs to by using the SQL below:
SELECT file_id f#, file_name, tablespace_name, status FROM dba_data_files;
1. Determine whether datafile 2 must be taken offline – select d.file#, d.name, d.status,
h.status from v$datafile d, v$datafile_header h where d.file# = h.file#;
2. Once it is offline, restore it from the backup – copy c:\backup\df2.dbf d:\data\
3. Use the RECOVER command to apply the archived and online redo logs to the restored datafile
– recover datafile '/disk2/backup/df2.dbf' or recover tablespace user_data
4. When recovery is finished all datafiles are synchronized and the datafile can be brought online
– alter database datafile 'disk2/data/db2.dbf' online or alter tablespace user_data
online;

Open database recovery when the database Is initially closed

One hard disk fails. Datafile 2, located on that disk, has to be recovered. It is not a SYSTEM or undo
datafile, and its unavailability will not stop users from running their reports.
1. Mount the database – it will not open because datafile 2 cannot be opened; we need
to take the datafile offline.
2. ALTER DATABASE DATAFILE 'd:\data\db2.dbf' OFFLINE; (ALTER TABLESPACE cannot be used
because the database is not yet open.)
3. ALTER DATABASE OPEN;
4. Now restore the file to another disk – copy c:\backup\db2.dbf
e:\data\db2.dbf – then change the location of the file in the Oracle server: ALTER DATABASE
RENAME FILE 'd:\data\db2.dbf' TO 'e:\data\db2.dbf';
When the database is open and recovery is to be done, query which tablespace
owns this datafile: SELECT file_id, file_name, tablespace_name, status FROM
dba_data_files;
5. Use the RECOVER or ALTER DATABASE RECOVER command to start applying the archived
redo log files and online redo log files – recover datafile 'e:\data\db2.dbf' or
recover tablespace user_data.
6. When recovery is finished all datafiles are synchronized; bring the datafile online –
ALTER DATABASE DATAFILE 'e:\data\db2.dbf' ONLINE or ALTER TABLESPACE user_data
ONLINE;

Recovery of a datafile without a backup

A datafile is lost that was never backed up. This method cannot be used when the file belongs to the SYSTEM
tablespace, or if the control file has been re-created since the file was added.
Recreating lost datafiles without a backup – Used when a missing datafile cannot be
restored because it was never backed up, but its description is still in the
data dictionary and the control file. Recreate the datafile – ALTER DATABASE CREATE DATAFILE
'filename' – or recreate the datafile with a different filename – ALTER DATABASE CREATE
DATAFILE 'filename' AS 'new file name';
Recovery without a backup example – Datafile 4 was lost; when restoring the datafile we find
that it was never backed up and it contains important data. Since it is not a SYSTEM or undo
datafile, and since we have all the archived log files, we can proceed as follows:
1. If the database is closed, mount the database and take the datafile without a backup offline.
If the database is already open, take the datafile or tablespace
offline – ALTER TABLESPACE table_data OFFLINE IMMEDIATE; Confirm the
recovery status by querying V$RECOVER_FILE: SELECT * FROM v$recover_file;
2. Issue a command to recreate the file – ALTER DATABASE CREATE DATAFILE
'd:\data\db2.dbf' AS 'e:\data\db2.dbf'; then issue the query SELECT * FROM
v$recover_file; again.
3. Use the RECOVER or ALTER DATABASE RECOVER command to start applying the archived
and online redo log files and rebuild the contents of the datafile – recover tablespace table_data;
4. When recovery is finished, bring the tablespace online – ALTER TABLESPACE
table_data ONLINE;

Read-Only Tablespace recovery

Case 1 – The tablespace being recovered was read-only when the last backup occurred and is still read-only.
Here we can simply restore the tablespace; there is no need to apply redo information.
Case 2 – The tablespace being recovered is read-write, but was read-only when the last
backup occurred. Here we restore the tablespace and apply the redo information from the point
at which the tablespace was made read-write.
Case 3 – The tablespace being recovered is read-only, but was read-write when the last backup
occurred. Here we must restore the tablespace from the backup and recover up to the time
the tablespace was made read-only.

Read-only tablespace recovery issues – Special considerations apply to
read-only tablespaces when: 1. Recreating a control file – issue ALTER
DATABASE BACKUP CONTROLFILE TO TRACE to get a listing of the procedure. 2. Renaming
datafiles – if we cannot restore a read-only tablespace to its correct destination, we should use the
ALTER DATABASE RENAME FILE command to place the file in the new location. 3. Using a backup control
file – the procedure for recovering a read-only tablespace with a backup control file is essentially the
same as for offline normal tablespaces, except that the tablespace must be brought online
after the database is open.

Loss of control files – We need to create control files when all control files are lost because
of a failure, when the name of the database needs to be changed, or when current settings in the
control file need to be changed.
Recovering a control file – Current control file: use a current copy to restore a lost file; this
assumes that not all copies are lost. Create a new control file: use the CREATE CONTROLFILE
command; to do this we need to know all the files of the database, and backing up the control file
to trace on an occasional basis facilitates this process.
Use a backup control file: use the RECOVER DATABASE USING BACKUP CONTROLFILE command;
this is needed if all control files are lost. A sketch of this last case is shown below.
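For example, after restoring a backup control file (and any damaged datafiles), the recovery and open might look like this sketch:
SQL> recover database using backup controlfile;
SQL> alter database open resetlogs;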
RMAN – complete recovery

Restoration and datafile media recovery using RMAN – Restore files from backup sets
or image copies using the RMAN RESTORE command. RMAN uses a server session to restore
the correct backups and copies; the repository is used to select the best available backup
sets or image copies. Recover files by using the RMAN RECOVER command – when we use the
RECOVER command, RMAN applies changes from the online redo log files and archived redo
log files, or uses incremental backups, to recover the restored files. Using RMAN we can
perform recovery at the database, tablespace, or datafile level.
Using RMAN to recover a database in ARCHIVELOG mode:
rman target /
RMAN> startup mount; RMAN> restore database; RMAN> recover database;
RMAN> alter database open;
This example assumes that a full backup taken with RMAN is available
on disk, the current control files were not damaged and do not need to be restored, and all
datafiles are damaged or lost.

Using RMAN to restore datafiles to a new location


Use the SET NEWNAME command to restore a datafile to a new location: set newname for
datafile 1 to '/<newdir>/system01.dbf';
Use the SWITCH command to record the change in the control file: switch datafile all;
Eg. rman target /, then RMAN> startup mount; then use RMAN to restore the datafile to the new
location and record the change in the control file:
run { set newname for datafile 1 to '/abc/system01.dbf'; restore database; switch datafile all;
recover database; alter database open; }

Using RMAN to recover a tablespace. Restore Tablespace, Recover Tablespace


Run { sql “alter tablespace users offline immediate”; restore tablespace users; recover
tablespace users; sql “alter tablespace users online”; }

Using RMAN to relocate a tablespace - use the set newname command to restore the
files. Use the switch command to record the new names in the control file. Use the
recover tablespace command to recover the datafiles of the tablespace.

Eg – 1. Check the location and size of the file from V$DATAFILE. 2. Make sure that the tablespace is
offline. 3. Since the file is restored to a new location with SET NEWNAME, the restored file must be made
current by notifying the Oracle server of the new file location with the SWITCH command. 4.
Use the RECOVER command to apply incremental and cumulative backups, archived redo log
files, and online redo log files. 5. When recovery is finished, bring the tablespace online and inform users
that they should re-enter uncommitted data.
Eg. run { sql "alter tablespace users offline immediate"; set newname for datafile
'/abc/users01.dbf' to '/def/users01.dbf'; restore tablespace users; switch datafile 3;
recover tablespace users; sql "alter tablespace users online"; }
Chapter 14 – User-Managed Incomplete Recovery

Incomplete Recovery – Incomplete recovery reconstructs the database to a prior point in
time. It can be a difficult and time-consuming operation. To perform
incomplete recovery we need a valid offline or online backup of all datafiles made before the
recovery point, and all archived logs from the backup until the specified time of recovery.
This type of recovery is also used when complete recovery fails.

Situations requiring incomplete recovery – Complete recovery fails because an archived
log is lost. All unarchived redo log files and a datafile are lost. User error – an important
table was dropped, or invalid data was committed to a table. The current control file is lost and a
backup control file must be used to open the database.
Types – Time-based recovery – terminated after all changes up to a specified point in
time are committed. Used when unwanted changes were made or an important table was dropped and the
approximate time of the error is known (recovery time and data loss are smaller the sooner the error is
reported), or when the approximate time at which a non-mirrored online redo log became corrupt is known
(mirroring of logs should prevent the need for this type of recovery). Cancel-based recovery – terminated
by entering CANCEL at the recovery prompt. Used when a current redo log
file or group is damaged and is not available for recovery (mirroring will prevent the need
for this type of recovery), or when an archived redo log file needed for recovery is lost (frequent
backups and multiple destinations for archive files can reduce these problems).
Change-based recovery – terminated after all changes up to the
specified system change number (SCN) are committed. Typically used in distributed environments.
Recovery using a backup control file – terminated when the specified
type of recovery (one of the methods above) has completed or the control files are
recovered. We must specify in the RECOVER DATABASE command that an old copy of the control file is
used for recovery. This method is used when the control file cannot be re-created, or when restoring
a database with a different structure than the current database to a prior point in time.

Incomplete recovery guidelines. – follow all steps carefully, take whole database backups
before and after recovery. Always verify that the recovery was successful. Back up and
remove archived logs.
Incomplete recovery and the alert log – check the alert log before and after recovery.
Contains error information, hints and SCNs.
User-Managed procedures for Incomplete recovery - SHUTDOWN AND BACKUP the
database. Restore all datafiles. Do not restore the control file, redo logs, password file and
parameter file. Mount the database. Recover the datafile to a point before the point of
failure. Open the database with resetlogs. Perform a closed database backup.

Recover command overview – Recover a database until cancel: RECOVER DATABASE UNTIL
CANCEL. Recover a database until a time: RECOVER DATABASE UNTIL TIME '2003-03-
04:14:22:03'.
Recover using a backup control file: RECOVER DATABASE UNTIL TIME '2003-03-04:14:22:03'
USING BACKUP CONTROLFILE.
Syntax – RECOVER [AUTOMATIC] DATABASE <option>; AUTOMATIC automatically applies the redo
log files. Options – UNTIL TIME 'yyyy-mm-dd:hh24:mi:ss', UNTIL CANCEL, UNTIL SCN <integer>,
USING BACKUP CONTROLFILE.

Time-based recovery eg. – Scenario – The current time is 12:00 p.m. on March 9, 2001; the
EMPLOYEES table has been dropped. The table was dropped at approximately 11:45 a.m. Database
activity is minimal because most staff are currently in a meeting. The table must be
recovered.

Incomplete recovery using UNTIL TIME – Since the time of failure is known and the database
structure has not changed since then, we can use UNTIL TIME.
1. Shut down the database using the NORMAL, IMMEDIATE, or TRANSACTIONAL option.
2. Restore all datafiles from backup (the most recent if possible). You may also need to
restore archived logs. If there is enough space available, restore them to the
LOG_ARCHIVE_DEST location, or use ALTER SYSTEM ARCHIVE LOG START TO <location> or
SET LOGSOURCE <location> to change the location.
3. Mount the database.
4. Recover the database – SQL> recover database until time '2001-03-09:11:44:00'
5. To synchronize the datafiles with the control files and redo log files, open the database
using the RESETLOGS option – SQL> alter database open resetlogs;
6. Check whether the EMPLOYEES table exists, then take a whole closed backup.

Cancel-based recovery – Scenario – The current time is 12:00 p.m. on March 9, 2001.
The EMPLOYEES table was dropped while someone was trying to fix bad blocks. The redo log files
exist on the same disk. The table was dropped at approximately 11:45 a.m. Staff are
currently in a meeting.
Block corruption in the EMPLOYEES table resulted from a disk error. Because the redo logs are on
the same disk, we need to check the status of the redo logs and archived logs:
SQL> select * from v$logfile; SQL> select * from v$log; We find that redo log log2a.rdo
cannot be located and has not been archived, therefore we cannot recover past this point.
We can query V$LOG_HISTORY to confirm the absence of log sequence 48 and to find out how many
minutes' worth of data will be lost if the database is recovered to just before log2a.rdo.
We can recover the database as follows:
1. Shut down the database. 2. Restore all datafiles from the most recent backup. 3.
You already have a valid backup, so mount the database. 4. Recover the database
until log sequence 48 – SQL> recover database until cancel
5. At the prompt Specify log: {<return>=suggested | filename | auto | cancel}, type CANCEL (media
recovery cancelled).
6. Open the database using the RESETLOGS option. 7. Check that the EMPLOYEES table
exists. When recovery is complete, make a whole backup.

Using a backup control file during recovery – Scenario – The current time is 12:00 p.m. on March 9,
2001. The tablespace containing the EMPLOYEES table has been dropped. The error occurred around
11:45 a.m. Many employee records were updated this morning, but none after 11:00 a.m.
Backups are taken every night.
1. The tablespace containing the EMPLOYEES table has been dropped – SQL> drop tablespace
emp_ts including contents; 2. We inform users and put the system in restricted session –
SQL> alter system enable restricted session; 3. We need to find the binary control files
from a backup; since the control file will be replaced, we should first record the database structure
information – SQL> select * from v$log; 4. Confirm the time of the error by checking alert.log.
5. Shut down the database, back up the current control files, then restore all datafiles and control files
for the database from a time when the tablespace existed. If opened now, the database would not be
synchronized. 6. Verify whether any offline datafiles exist and place them online – SELECT * FROM
v$recover_file; if any datafile is offline, bring it online – alter database datafile 4 online;
7. Perform recovery – recover database until time '2001-03-09:11:44:00' using backup
controlfile; 8. To synchronize, open the database with the RESETLOGS option. 9. Verify that the
EMPLOYEES table exists, then make a whole backup.

Loss of current redo log files – If the database is closed: attempt to open the database,
find the current log sequence number, recover the database until cancel, drop and
recreate the log files if necessary, open the database using RESETLOGS, and perform a whole
database backup.
If the database is closed because a media failure has occurred, the steps to follow are:
1. Attempting to open the database will immediately identify the missing current redo log
group through messages such as: database mounted, ORA-00313: open failed for members
of log group 2 of thread 1, ORA-00312: online log 2 thread 1:
'/disk1/archive/log2a.rdo' no such file or directory. We cannot clear log group 2
because it is the current one.
2. Incomplete recovery is required. Note the current log sequence number with SELECT
* FROM v$log; we find that log sequence 61 is the current one.
3. Restore all files from the previous backup, then use RECOVER ... UNTIL CANCEL and stop
before redo log 61 is applied – SQL> recover database until cancel
4. Open the database using the RESETLOGS option. 5. The database will be operational because the
missing redo log files are recreated at RESETLOGS. 6. Since this is an incomplete recovery, the database
should be backed up.
Chapter 15 – RMAN Incomplete Recovery

Incomplete recovery of a database using RMAN- Mount the database. Allocate multiple
channels for parallelization. Restore all datafiles. Recover the database by using until
time, until sequence, or until SCN. Open the database by using resetlogs. Perform a
whole database backup.
The restore and recovery process with RMAN is almost the same as complete recovery, except that all datafiles
need to be restored from a backup taken before the recovery point. The target database must be in the mounted
state, and the files being restored must be offline. We can restore only backups that were taken with RMAN.

RMAN Incomplete recovery until time eg.


RMAN> run { allocate channel c1 type disk; allocate channel c2 type disk; set until
time '2002-12-09:11:44:00'; restore database; recover database; alter database open
resetlogs; }

1. If the target database is open, perform a clean shutdown. 2. Mount the target database. 3.
Ensure that the NLS_LANG and NLS_DATE_FORMAT environment variables are set correctly. 4. Start
Recovery Manager – rman target rman/rman@DB00. 5. We can allocate multiple channels
to improve performance – RMAN> run { allocate channel c1 type disk; allocate channel
c2 type disk; ... 6. Specify the time for recovery and restore all datafiles from a backup with the
RMAN commands – RMAN> set until time '2002-12-09:11:44:00'; RMAN> restore database;
7. Recover the database to the time specified in the SET UNTIL command – RMAN>
recover database; 8. Open the database using the RESETLOGS option – RMAN> alter database
open resetlogs; 9. Check that the table exists and perform a backup. 10. If using a recovery
catalog, register the new incarnation of the database – RMAN> reset database;

RMAN Incomplete recovery until sequence example


RMAN> run { set until sequence 120 thread 1; alter database mount; restore database;
recover database;   # recovers through log 119
alter database open resetlogs; }
The UNTIL SEQUENCE clause specifies a redo log sequence number and thread as an upper limit.
RMAN performs the operation on files up to, but not including, the specified log sequence
number. This example assumes that log sequence 120 was lost due to a disk crash and the
database needs to be recovered using the available archived redo logs.
Chapter 16 – RMAN Maintenance

Cross checking Backups and Copies

We can use the CROSSCHECK command to ensure that repository information is synchronized
with the actual files, check the status of a backup or copy, and update the repository when files
have been deleted with OS commands. We can use the LIST command to find the backups
and copies that have been made and then check that these files exist using CROSSCHECK. If a file
does not exist, RMAN updates its repository record to EXPIRED. We can use LIST EXPIRED to find the
expired sets and then delete them with DELETE EXPIRED.
CROSSCHECK COMMAND
Crosscheck all backups in the database – crosscheck backupset of database;
Crosscheck all copies in the database – crosscheck copy;
By default all files in the database with status AVAILABLE or EXPIRED are checked. This
command can also be used with options to restrict the check to a specific backup
piece, backup set, datafile, or control file copy.
Deleting Backups and copies – We can use the delete command to delete physical
backups and image copies, update repository status to deleted. Remove records from the
recovery catalog
Delete command -
Delete a specific backup set – Delete backupset 102;
Delete an expired backup without confirmation – delete noprompt expired backup of
tablespace users;
Delete all backups copies and archived redo log files based on the configured retention
policy – delete obsolete;
NOPROMPT – deletes without confirmation. EXPIRED – deletes only objects marked as
expired. OBSOLETE – deletes files that are obsolete under the retention policy. We can
configure the retention policy by using CONFIGURE RETENTION POLICY. We can also use the
REDUNDANCY and RECOVERY WINDOW options on the DELETE command.
Eg. delete obsolete recovery window of 7 days;

Deleting backups and copies – Use the BACKUP ... DELETE INPUT command to
delete the input files upon successful creation of the backup set: archived redo log files,
datafile copies, and backup sets. We can use this option when backing up archived redo
log files, datafile copies, or backup sets. If we use DELETE ALL INPUT when backing up
archived redo log files, all copies of the corresponding archived redo logs that match the
selection criteria are deleted. The BACKUP ARCHIVELOG command backs up one copy of each distinct log
sequence number; if we specify the DELETE INPUT option without ALL, RMAN deletes only the copy of
the file that it backs up.

Changing the availability of RMAN Backups and copies - Change the status of backup
or copy to unavailable with the change … unavailable command. Return the status to
Available with the change .. available command.
Changing the Availability status –
Change the status of a specific datafile copy – change datafilecopy '/db01/backup/user01.dbf'
unavailable; Change the status of a control file backup – change backup of controlfile
unavailable; change backup of controlfile available; Change the status of archived redo
log files – change copy of archivelog sequence between 230 and 240 unavailable;
If we do not specify an option for BACKUP, then CHANGE BACKUP operates on all backups
recorded in the repository. The COPY keyword with the CHANGE command indicates on which files
the command should operate; if no option is specified, it operates on all copies
recorded in the repository.
Exempting a backup or copy from the retention policy – Use the CHANGE ... KEEP command
to exempt a backup or copy from the retention policy. Use the CHANGE ... NOKEEP
command to cancel the exemption.
Change keep command – Create a long-term backup – change backupset 123
keep forever nologs; Make a datafile copy exempt from the retention policy for 60 days – change
datafilecopy '/db01/backup/users01.dbf' keep until time 'sysdate+60';
The FOREVER parameter implies that the backup or copy never expires. We must use a recovery
catalog when FOREVER is specified, otherwise the backup records eventually age out of the
control file. When the LOGS parameter is specified, all archived logs required to recover
this backup are retained for as long as the backup is kept. NOLOGS specifies that the backup or copy
cannot be rolled forward because the required archived logs will not be kept; it can only be used to
restore the database to the point in time the backup or copy was taken. We can use KEEP UNTIL TIME
'date string' to specify the date until which the backup should be kept.

Cataloging Archived Redo Log files and user-managed backups – we can use the catalog
command to add information to the repository about an OS datafile copy, An archived
redo log copy, A control file copy.

Cataloging files in the repository – During OS backups nothing is cataloged. To be cataloged, a file
must be accessible on disk and be a complete image copy of a single file: a consistent or
inconsistent datafile, control file, or archived redo log backup. If inconsistent, it must have been
created with BEGIN BACKUP / END BACKUP. A control file copy must have been created with the ALTER
DATABASE BACKUP CONTROLFILE statement. RMAN treats OS backups as datafile copies; during
cataloging RMAN checks only the file header, it does not check whether the file was correctly copied.
Catalog command – Catalog a backup taken with an OS command – Catalog datafilecopy
‘/db01/backup/user01.dbf’; Catalog archived redo log files – Catalog archivelog
‘/oradata/archive1/arch_12.arc’, ‘/oradata/archive1/arch_13.arc’;
We can use the CONTROLFILECOPY parameter to specify the filename of a control file copy
to be added to or updated in the repository. DATAFILECOPY specifies the filename of a
datafile copy to be added or updated in the repository. We use the ARCHIVELOG parameter
to specify the filename of an archived redo log file to be added or updated in the
repository.
Uncataloging RMAN records – Use the CHANGE ... UNCATALOG command to update the
record in the repository to DELETED status, or to delete a specific backup or copy record from
the recovery catalog.
CHANGE ... UNCATALOG command – This is used to update the repository only; it does not remove
physical copies or backups. Remove records for deleted archived redo log files – change
archivelog ... uncatalog; Remove the record for a deleted datafile copy – change datafilecopy
'/db01/backup/user01.dbf' uncatalog;
Chapter 17 – Recovery Catalog Creation and Maintenance

Overview – The recovery catalog is a schema that is created in a separate database. It contains
the RMAN metadata obtained from the target database control file. If we have multiple
target databases to manage, it is advisable to keep this information in a recovery catalog.
It is maintained by RMAN when we do the following: 1. Register the target database
in the catalog. 2. Resynchronize the catalog with the control file of the target database.
3. Reset the database to a previous incarnation. 4. Change information about backups
or files. 5. Perform a backup, restore, or recovery operation. We can use the REPORT and LIST
commands to obtain information from the recovery catalog. We can also store scripts in the recovery
catalog.

Recovery catalog contents – The recovery catalog is an optional repository containing
information on: datafile and archived redo log file backup sets and backup pieces;
datafile copies (it records the time stamp and name of datafile copies); archived redo log files
(it maintains a record of which archived logs have been created by the server and any
copies made by RMAN); the physical structure of the target database (information similar to that
in the target database control file); persistent RMAN configuration
settings; and stored job scripts.

Benefits of using a recovery catalog – Metadata about multiple target databases in one
catalog; metadata about multiple incarnations of a single target database; historical
metadata; reporting on the target database as of a noncurrent time.

Creating a recovery Catalog


1. Connect to the catalog database and create a tablespace for the catalog – SQL>
create tablespace rman_ts datafile 'pathname' size 20M default storage (initial
100K next 100K pctincrease 0);
2. Create a user and schema for the recovery catalog – SQL> create user rman_db01
identified by rman_db01 default tablespace rman_ts quota unlimited on rman_ts;
3. Grant the roles and privileges the user needs to maintain the recovery catalog and perform
backup and recovery operations – SQL> grant recovery_catalog_owner to rman_db01;
SQL> grant connect, resource to rman_db01;
4. Log in to the OS and issue the rman command to invoke the RMAN command interpreter,
then create the catalog; use the LOG option to capture output messages – % rman catalog
rman_db01/rman_db01@catdb log=catalog.log, then RMAN> create catalog tablespace rman_ts;
and exit. When the LOG option is used, output is sent to the file, so the RMAN
prompt is not displayed; enter the commands when the cursor appears on the next line.
5. Connect to the target database. We must log in with SYSDBA privileges on the target
database to perform all backup and recovery operations – % rman target
sys/oracle@db01 (connecting to the target database); RMAN> connect catalog
rman_db01/rman_db01@catdb (connecting to the recovery catalog database)
6. Register the target database in the catalog; if it is not registered, the catalog cannot store its
metadata (the command is shown below). RMAN uses an internal DBID, calculated when the database
is created, as a unique identifier. If we try to register a new database that was created by copying
another database and then changing the database name, registration will fail; we can avoid this problem
with the DUPLICATE command, which copies the database and generates a new identifier. To
back up a copied database otherwise, create a new catalog owner and create a catalog in the
new schema.
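The registration itself, issued after connecting to both the target and the catalog as in step 5, is a single command:
RMAN> register database;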

Connecting using a recovery catalog

Eg. UNIX – $ ORACLE_SID=db01; export ORACLE_SID – $ rman target / – RMAN> connect
catalog rman_db01/rman_db01@catdb
Eg. Windows NT – C:\> set ORACLE_SID=db01 – C:\> rman target / – RMAN> connect catalog
rman_db01/rman_db01@catdb

How to connect to recovery manager

1. Initiate an RMAN session from the target database


2. connect to the recovery catalog database
Recovery catalog maintenance – The catalog maintenance commands CATALOG,
CHANGE, and DELETE are used to update the recovery catalog manually.

Resynchronization of the recovery catalog – Resynchronization ensures that the catalog metadata is
current with the target database control file. It can be full or partial. In a partial resynchronization RMAN
reads the current control file to update changed data, but does not resynchronize the metadata about
datafiles, tablespaces, redo threads, rollback segments, and redo logs. In a full resynchronization it
updates all changed records. RMAN automatically decides whether a partial or full resynchronization is
needed and executes it as required. We can force a full resynchronization by issuing the RESYNC CATALOG
command; the catalog should be resynchronized at least every n days, where n is the value of
CONTROL_FILE_RECORD_KEEP_TIME.
Using the RESYNC CATALOG command – We should issue this command after we add or drop a
tablespace, add or drop a datafile, or relocate a database file.
$ rman target / catalog rman/rman@catdb – RMAN> resync catalog;
The RESYNC CATALOG command updates the following records – Log history: created when a
log switch occurs. Archived redo log: associated with archived logs. Backup history:
associated with backup sets, backup pieces, backup set members, proxy copies, and image
copies. Physical schema: associated with datafiles and tablespaces.

Resetting a database incarnation – Use the RESET DATABASE command to direct RMAN to
create a new database incarnation record, and to distinguish an OPEN RESETLOGS
from an accidental restore of an old control file. Open with RESETLOGS only after
RESET DATABASE.
An incarnation of a database is a number used to identify a version of the database prior
to the log sequence number being reset. This prevents archived and online redo logs
from being applied to an incorrect incarnation of the database. This command is used by
RMAN to store the incarnation number in the recovery catalog; all subsequent backups and
log archives will be associated with the new incarnation.
RESET DATABASE;
RESET DATABASE TO INCARNATION <identifier>;
Recovery Catalog reporting
Report command – To analyze various aspects of the backup, copy, restore and recovery
operations. List command – To display info on backup sets, file copies and archived logs
which are stored in recovery catalog. Views – the views that are created when recovery
catalog is created – rc_database, rc_datafile, rc_stored_script, rc_stored_script_line,
rc_tablespace.
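For example, a quick check of which target databases are registered, using columns of the RC_DATABASE view:
SQL> select dbid, name from rc_database;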
Stored scripts – An RMAN script is a set of commands that is stored and can be used for
backup, recovery, and restore operations. It is created with the CREATE SCRIPT command, is
stored in the recovery catalog, and is called with the RUN command. Stored scripts minimize the potential
for operator errors. We can list stored scripts from the RC_STORED_SCRIPT view, use the
RC_STORED_SCRIPT_LINE view to list the text stored in a specific script, or use the
PRINT SCRIPT command.
Script eg – RMAN> create script level0back { backup incremental level 0 format
'/u01/backup/%d_%s_%p' filesperset 5 (database include current controlfile); sql 'alter
system archive log current'; }
We can use EXECUTE SCRIPT inside RUN to run a script – RMAN> run { execute script level0back; }
We can use REPLACE SCRIPT to rewrite a script – RMAN> replace script level0back { ... }
We can use DELETE SCRIPT to remove a script – RMAN> delete script level0back;
Use PRINT SCRIPT to display a script – RMAN> print script level0back;

Backup of the recovery catalog – Take whole database backups of the database containing the
recovery catalog, or tablespace backups of the tablespace containing the recovery catalog.
Export – if the catalog database is not very large we can export it at regular intervals; if it is
large we can export just the schema containing the recovery catalog. The recovery catalog should always
be kept in a database separate from the target databases.
Recovering the recovery catalog – Create a database from a previous backup of the recovery
catalog database, relocate the catalog into another database and import the data, or import
the entire database from an export. When the recovery catalog has been rebuilt, we should
resynchronize the catalog with the control file of the target database immediately. During the
resync, RMAN adds records for files that no longer exist (because files being re-cataloged
are not verified); we should remove these records by using the CHANGE ... UNCATALOG command.
