Oracle9i Database: Fundamentals II
Andrew C. Simkovsky
Andrew Simkovsky is an Oracle
Certified Professional Database
Administrator and published
author. His previous works include
Oracle training manuals for
Element K Press, and Oracle8i:
OCP Virtual Test Center (Sybex).
Currently, he is a Senior Database
Administrator for World Fuel
Services Corporation in Miami,
Florida. Andrew is also a SYSOP
for the world-renowned Quest
Software/RevealNet Labs DBA
Pipeline (www.quest-pipelines.
com). Prior to working at World
Fuel Services, Andrew held several
Oracle-related positions, including
IT manager, DBA, consultant, and
OCP instructor. He has managed
Oracle databases ranging in size
from 5 gigabytes to over 20
terabytes, and his Oracle
experience includes Internet
startups, telecommunications,
retail, and teaching.
ORACLE9I DATABASE: FUNDAMENTALS II
Course Number: 079176
Course Edition: 1.0
For software version: 9.2.0.1.0
ACKNOWLEDGEMENTS
Project Team
Curriculum Developer and Technical Writer: Andrew C. Simkovsky • Sr. Copy Editors: Angie J. French and
Christy D. Johnson • Material Editor: Lance Anderson • Graphic/Print Designer: Isolina Salgado Toner •
Project Technical Support: Michael Toscano
Project Support
Content Manager: Susan B. SanFilippo • Project Coordinator: Nicole Heinsler • Development Assistance:
Warren Capps
NOTICES
DISCLAIMER: While Element K Content LLC takes care to ensure the accuracy and quality of these materials, we cannot guarantee their accuracy, and all materials are
provided without any warranty whatsoever, including, but not limited to, the implied warranties of merchantability or fitness for a particular purpose. The name used in the data
files for this course is that of a fictitious company. Any resemblance to current or future companies is purely coincidental. We do not believe we have used anyone’s name in
creating this course, but if we have, please notify us and we will change the name in the next revision of the course. Element K is an independent provider of integrated
training solutions for individuals, businesses, educational institutions, and government agencies. Use of screenshots, photographs of another entity’s products, or another
entity’s product name or service in this book is for editorial purposes only. No such use should be construed to imply sponsorship or endorsement of the book by, nor any
affiliation of such entity with Element K. This courseware may contain links to sites on the Internet that are owned and operated by third parties (the “External Sites”). Element
K is not responsible for the availability of, or the content located on or through, any External Site. Please contact Element K if you have any concerns regarding such links or
External Sites.
TRADEMARK NOTICES: Element K and the Element K logo are trademarks of Element K LLC and its affiliates. Oracle9i is a registered trademark of Oracle Corporation in
the U.S. and other countries; the Oracle products and services discussed or described may be trademarks of Oracle Corporation. All other product names and services used
throughout this book may be common law or registered trademarks of their respective proprietors.
Copyright © 2003 Element K Content LLC. All rights reserved. Screenshots used for illustrative purposes are the property of the software proprietor. This publication, or any
part thereof, may not be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, storage in an information
retrieval system, or otherwise, without express written permission of Element K, 500 Canal View Boulevard, Rochester, NY 14623, (585) 240-7500, (800) 434-3466. Element K
Courseware LLC’s World Wide Web site is located at www.elementkcourseware.com.
This book conveys no rights in the software or other products about which it was written; all use or licensing of such software or other products is the responsibility of the
user according to terms and conditions of the owner. Do not make illegal copies of books or software. If you believe that this book, related materials, or any other Element K
materials are being reproduced or transmitted without permission, please call 1-800-478-7788.
CONTENTS
About This Course . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Course Setup Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
How To Use This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Task 3C-2 Recovering After Loss of Control File . . . . . . . . . . . . . . . . 182
Loss of the Current Online Log Group . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Task 3C-3 Recover From the Loss of the Current Online Log Group . . . . . 190
Read-only Tablespaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Task 3C-4 Identify Recovery Considerations for Read-only Tablespaces . . 198
Tablespace Point-in-Time Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Task 3C-5 Describe Tablespace Point-in-Time Recovery . . . . . . . . . . . . . 201
Lesson Review 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .367
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .371
ABOUT THIS COURSE

This course introduces the concepts of Oracle9i® network configuration and database backup and recovery. You will learn how to configure the Oracle9i network environment to support connections to and from the database. You will also learn how to perform backups of the Oracle database using various techniques and how to recover the database in various failure scenarios. Finally, you will learn how to load, move, and reorganize large volumes of data very quickly using Oracle-provided utilities. This course helps prepare students for Oracle's exam, Oracle9i Database: Fundamentals II (1Z0-032).
Course Prerequisites
To ensure your success, we recommend you first take the following Element K
courses or have equivalent knowledge:
• Oracle9i: SQL, PL/SQL, and SQL*Plus
• Oracle9i Database: Fundamentals I
Course Objectives
When you’re done working your way through this course, you’ll be able to:
Class Requirements
In order for the class to run properly, perform the procedures described below.
1. As you prepare for the installation of the Oracle software, it is expected that you will have your computer configured with the previously outlined requirements. Please be sure that your hard drives are partitioned into at least two partitions. One partition should be designated as the C drive for the operating system software. The system drive should be at least 2 GB in size and formatted as either FAT or NTFS. The second partition should be designated as the D drive and reserved for the Oracle installation. This drive needs to be at least 8 GB and formatted as either FAT or NTFS.

Note: This course will normally refer to the D:\oracle drive. In reality, you may be using the D drive, the E drive, or some other drive letter. If your Oracle partition is declared as a letter other than D, you will need to replace the references in this manual with your respective drive letter.

2. Once the computer's operating system is fully installed and configured, start Windows 2000 Professional and log on as Administrator.

Note: If you are installing Oracle by using the downloaded installation files from Oracle Corporation's Web site, rather than the installation CD-ROMs, follow the instructions that came with the downloaded media. Additionally, the installation files require considerable space on disk. Therefore, once the Oracle9i software is installed, you should immediately delete the downloaded installation files from the system.
3. Once the installer loads, you will see the Welcome screen. Click Next.
Note: If you are installing Oracle from the downloaded installation files, the
Source path may be different than the one given here.
You will see a progress meter in the upper-right corner of the installer win-
dow as the products list is loaded. This progress meter will appear several
times throughout the installation process.
5. The Available Products screen asks you to select a product to install. Select
Oracle9i Database 9.2.0.1.0, and then click Next.
Note: The software version you may actually be installing might have a
slightly different version number, such as 9.2.0.2.0. This course is designed to
work with Oracle9i version 9.2.0.1.0 or higher.
6. The Installation Types screen asks you to select an installation type. Select
Enterprise Edition, and then click Next.
8. The Oracle Services for Microsoft Transaction Server screen asks you to
enter the port number on which the Oracle MTS Recovery Service will lis-
ten for requests. Leave the value at its default setting, and then click Next.
9. The Database Identification screen asks you to enter the Global Database
Name for the new database that will be created. For the Global Database
Name, type ora92. The same value for SID will automatically be entered for
you. Click Next.
10. The Database File Location screen allows you to select a directory to store database files. If the Directory For Database Files box does not already show it, set the path to D:\oracle\oradata. Click Next.
11. The Database Character Set screen asks you which character set should be
used in your database. The default character set should already be selected.
Click Next.
12. The Summary screen provides you with a summary of the installation set-
tings you have selected. Review the settings, and then click Install.
The Install screen shows a progress meter as Oracle is installed. Long pauses
in the progress of the meter are normal.
You will see the Database Configuration Assistant dialog box appear. The
dialog box will show the steps being taken to create and start the database,
along with a progress meter.
Once the database has been created and started, a new dialog box will
appear asking you to change the passwords for the SYS and SYSTEM users.
Set the password to ora92 for both the SYS and SYSTEM users, and then
click OK.
You will be returned to the Configuration Tools screen of the Oracle Univer-
sal Installer. You will see the installer execute additional configuration tools.
Once these tools have completed executing, the installer will move on to the
End Of Installation screen. This screen provides information about the HTTP
server that was installed with the database. Click Exit. A question box will
appear asking if you really want to exit. Click Yes.
14. After a few moments, the Oracle Enterprise Manager (OEM) Console will
appear. In the left pane of the Network tree, click the plus sign next to Data-
bases to expand the tree.
You will see an icon for the ORA92 database. Double-click the ORA92 database icon. The Database Connect Information dialog box will appear.
In the Username text box, type sys, and in the Password text box, type
ora92. Click the drop-down list for the Connect As entry, and select
SYSDBA. Click OK. After a moment, the Database Connect Information
dialog box will disappear, and the tree below the ORA92 database will
expand.
15. In the ORA92 tree, click the plus sign next to the Security icon to expand
the tree further.
Click the Users icon. The right pane will change to show a list of user
accounts that exist in the database.
16. In the right pane, find the user name RMAN in the list of users. Right-click
the RMAN user name to bring up a command menu. Click Remove.
A dialog box will appear stating that RMAN still owns objects in the database, and asking if you are sure that you want to drop the user and its objects using the CASCADE option. Click Yes.
After a few moments, the RMAN user name will disappear from the list of
users.
18. Remember to delete the installation files as specified in the note at step 2.
To install the student files for this class, perform the following procedures:
1. Insert the CD-ROM that accompanies this course into the CD-ROM drive.
3. Once the self-extracting WinZip file opens, confirm that it will write your files to a folder named 079176 on the C drive, or change the destination accordingly.
As a Learning Guide
Each lesson covers one broad topic or set of related topics. Lessons are arranged
in order of increasing proficiency with Oracle9i; skills you acquire in one lesson
are used and developed in subsequent lessons. For this reason, you should work
through the lessons in sequence.
We organized each lesson into explanatory topics and step-by-step activities. Top-
ics provide the theory you need to master Oracle9i, and activities allow you to
apply this theory to practical hands-on examples.
You get to try out each new skill on a specially prepared sample file. This saves
you typing time and allows you to concentrate on the technique at hand. Through
the use of sample files, hands-on activities, illustrations that give you feedback at
crucial steps, and supporting background information, this book provides you
with the foundation and structure to learn Oracle9i quickly and easily.
As a Review Tool
Any method of instruction is only as effective as the time and effort you are will-
ing to invest in it. For this reason, we encourage you to spend some time
reviewing the book’s more challenging topics and activities.
As a Reference
You can use the Concepts sections in this book as a first source for definitions of
terms, background information on given topics, and summaries of procedures.
Overview

Data Files: none
Lesson Time: 4 hours, 25 minutes

Oracle's networking environment, called Oracle Net, enables connectivity between clients and database servers, and also between multiple database servers, thereby enabling distributed, enterprise-level database applications. In this lesson, you will learn about the features provided by Oracle Net and how to configure your environment to provide optimal connectivity to your database servers. You will learn how to configure the database server to accept connections and also how to configure the client to initiate connections. Additionally, you will learn how to optimize your Oracle Net configuration to scale your environment to allow dozens, even hundreds, of connections to the database if necessary.
Objectives
To describe and configure the Oracle Network Environment, you will:
single-task connection: A connection in which the client and server code run within a single process, rather than as separate processes communicating over a network.
2. Why do you think the concept of net service names was invented?
Doesn’t this just add another unnecessary layer to the already complex
process of connecting to a database?
While technically you can bypass net service names, in practice they make
life much simpler. Although some initial configuration is required, net service
names are easy to maintain and can greatly simplify the command string
that needs to be passed at connection time. Consider this connect string
which specifies a connect descriptor directly:
sqlplus scott/tiger@(description=(address=(protocol=tcp)(host=rnd.acme.org)(port=1521))(connect_data=(service_name=dev.acme.org)))
A connection request such as this is too cumbersome to use on a regular
basis and is extremely error prone. By storing the full definition of a connect
descriptor, either locally in the tnsnames.ora file or in a centralized naming
server, the connect descriptor can be easily maintained and resolved on
demand when a connection request is initiated.
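For comparison, the same connect descriptor stored as a net service name in tnsnames.ora might look like the following sketch (the alias DEV is illustrative, not a value from the course files):

```
DEV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rnd.acme.org)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = dev.acme.org))
  )
```

The connect string then collapses to sqlplus scott/tiger@dev, which is far easier to type correctly.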
Figure 1-7: The Oracle communication stack for JDBC OCI connections.
The JDBC thin driver is somewhat different in that it provides database connectivity to very small Java applications, such as Java applets. Using the JDBC thin driver, a Java applet can connect directly to the database through Java sockets, which bypasses most of the communication stack and can greatly improve connection performance.

Figure 1-8: The Oracle communication stack for JDBC thin clients.

In this figure, you will see that the communication stack for JDBC thin clients has fewer layers than the standard Oracle communication stack. The OCI layer is not used, and the Presentation layer is using a Java implementation of TTC. The Oracle Net layers have been replaced with JavaNet, which is a Java version of the Oracle Net components. Both the names resolution and security service components have been eliminated, which reduces the amount of network traffic required to establish and sustain a connection to the database. You should also note that the only networking protocol supported by the JDBC thin driver is TCP/IP, which must be used throughout the connection between client and server.
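Because the thin driver performs no names resolution of its own, the database is identified entirely within the JDBC connection URL. A hedged sketch of the URL format (the host, port, and SID shown are illustrative values):

```
jdbc:oracle:thin:@rnd.acme.org:1521:ora92
```

The host, port, and SID are embedded directly in the URL, so no tnsnames.ora or other client-side Oracle Net configuration is required.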
TASK 1A-3
Describe Oracle Net Layered Architecture
1. In a client-server application connection, in which layer of the Oracle
Net stack are naming resolution methods applied?
a. Oracle Protocol Support
✓ b. Oracle Net Foundation layer
c. Network Protocol
d. Presentation – TTC
4. Describe the communication stack layers that are involved for connect-
ing a JDBC thin client to the database.
Using the JDBC thin driver, a Java applet can connect directly to the data-
base through Java sockets, which bypasses most of the communication stack
and can greatly improve connection performance. The only other network
support required by the JDBC thin driver is a lightweight Java implementa-
tion of the Presentation – TTC layer and a Java version of the Oracle Net
layers, called JavaNet.
Connection Manager
In addition to the capabilities and features of Oracle Net, Oracle has included
another networking service, Connection Manager (CMAN). CMAN is designed to
support a large number of concurrent users in a 3-tier environment and can
handle more than 1,000 concurrent users. Connection Manager can be installed on
any node that has Oracle Net and more or less functions as a virtual traffic cop
for database connections. Oracle recommends that CMAN be configured on the
middle tier of a 3-tier architecture to take advantage of all the features it
provides. These features include:
• Connection Pooling
• Multi-protocol Connectivity
• Secure Network Access
Heterogeneous Connectivity
heterogeneous connectivity: A feature that allows an Oracle client to use standard Oracle SQL and procedure calls to seamlessly access non-Oracle data sources.

In today's business world, it is common for many companies to take advantage of database software products from numerous vendors to support various data systems. It can be greatly beneficial for the company to combine the data from all of these systems and transform it into top-notch information about all aspects of the company's operation. However, doing so is not such an easy task. Each system usually stores its data in a different way, accesses the data differently, and has different ways to handle errors. Although SQL is an industry-standard query language, most database products have their own dialect of the language. Even simple datatype translations and basic transaction management can be different from one system to the next. To simplify this seemingly daunting task, Oracle provides heterogeneous connectivity services to allow clients to use standard Oracle SQL and procedure calls to seamlessly access non-Oracle data sources.
Oracle provides heterogeneous connectivity through the use of its Oracle Transparent Gateway products. A gateway is primarily an interface between the Oracle database and another non-Oracle relational database system. Each gateway that Oracle offers is designed to provide connectivity to a specific database system on a specific platform. For example, Oracle offers the Oracle Transparent Gateway for DB2 on IBM RS/6000, which allows the Oracle database to access data on a DB2 database that is running on an IBM RS/6000 server. For each third-party database product you want to access, you must install the appropriate gateway. Figure 1-11 illustrates the use of Oracle Transparent Gateways to access non-Oracle databases.
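Once a gateway is installed and reachable through Oracle Net, access is typically set up with a database link. The statements below are an illustrative sketch only; the link name, credentials, net service name, and table are hypothetical, not values from the course:

```sql
-- Create a link whose connect string resolves to the gateway
CREATE DATABASE LINK db2_link
  CONNECT TO db2user IDENTIFIED BY db2pass
  USING 'db2gw';

-- Query the remote DB2 table with ordinary Oracle SQL
SELECT * FROM customers@db2_link;
```

From the client's perspective, the remote table behaves like any other Oracle table referenced through a database link.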
Topic 1B
Basic Oracle Net Server-side Configuration
listener: A constantly running process that listens on the network for incoming connection requests.

Configuring the database server to accept incoming network connections consists primarily of configuring the listener process. The listener is an Oracle program, separate from the database server, that runs on the server or a middle tier. A listener is so-called because it "listens" for client connection requests over one or more addresses on behalf of one or more Oracle database instances. Upon receiving a connection request from a client, the listener will send the requesting client the network address of a prespawned server process where the target database resides. If there is no prespawned server process available, it will direct the database to spawn a new dedicated server process for the client. The actual server process that handles the connection request is referred to as a service handler. Once the listener hands a client process off to either a dispatcher or server process, the listener no longer has any part in the communications of that session and returns to the task of listening for connection requests.
Create a Listener
Listener configuration information is stored in a file called listener.ora, which is
located in the ORACLE_HOME/network/admin directory. This file contains all
the configuration information necessary for a listener to accept connection
requests and direct clients to the proper databases. A single listener.ora file can be
used to configure multiple listeners on a single server. If this file does not exist,
the listener process can still be started, but it will use a default configuration,
which may or may not be correct for the environment. Figure 1-13 shows a
sample listener.ora file.
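The file follows Oracle Net's keyword-value syntax. The fragment below is a hedged sketch of that structure; the listener name, host, port, SID, and Oracle home shown are illustrative, not the exact contents of Figure 1-13:

```
LISTENER1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myserver)(PORT = 1522))
  )

SID_LIST_LISTENER1 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ora92)
      (ORACLE_HOME = D:\oracle\ora92)
    )
  )
```

Each listener gets its own top-level entry named after the listener, plus an optional SID_LIST entry naming the database instances it services.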
1. From the desktop, launch the Oracle Net Manager utility by choosing
Start→Programs→Oracle – OraHome92→Configuration And Migration
Tools→Net Manager.
3. Double-click the Listeners icon to show the listeners that are currently
configured on your system. You should see one listener, named LISTENER,
in the tree.
Notice that the Port number for this listener is 1521. You will configure a new listener for your database, but using a different port so that the two listeners do not conflict with each other.
A dialog box will appear asking you to choose a listener name. Enter the name LISTENER1, and then click OK.
The dialog box will disappear, and LISTENER1 will be added to the tree in the left pane.
The right pane will change to show that no database services have been set
up for this listener and that Oracle8i release 8.1 databases will dynamically
register with the listener.
9. Click the Add Database button in the right pane. You will see a Data-
base1 tab appear in the right pane.
10. The information currently displayed in the Database1 tab is incorrect for
your database.
Congratulations! You have just configured a new listener process for your database.
Three of the commands listed in this table, help, set, and show, have
extended command options. When used alone, the help command will display
the list of commands available for the Listener Control utility. To get specific
information about a particular command, you would type:
help command
In this syntax, command is the command you wish to know more about. Listener
Control will display a short description about the specified command.
The set command is used to set a number of different configuration parameters.
Its syntax is:
set config_param new_value
current_listener: Sets the name of the target listener that all subsequent commands will apply to.
displaymode: Sets the amount of output that will be displayed for the services and status commands. Valid values include RAW, COMPAT, NORMAL, and VERBOSE. By default, the display mode is NORMAL.
log_directory: The directory path where the listener log file will be written.
log_file: The name of the listener log file.
log_status: Enables or disables listener logging. Valid values include ON and OFF. By default, the log status is set to ON.
password: Specifies the password for the current Listener Control session.
save_config_on_stop: Instructs Listener Control to save all unsaved configuration changes for the current listener back to the listener.ora file when the listener is stopped using the stop command.
trc_directory: Sets the directory path where listener trace files will be written if listener tracing is enabled.
trc_file: Sets the name of the trace file that the listener will write tracing information to if listener tracing is enabled.
trc_level: Enables listener tracing at the specified trace level. Valid trace levels include OFF, USER, ADMIN, and SUPPORT. By default, the listener trace level is OFF.
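For example, a short Listener Control session might adjust and inspect these parameters as follows. This is an illustrative sketch; the exact response text can vary by version:

```
LSNRCTL> set current_listener listener1
Current Listener is listener1
LSNRCTL> set displaymode compat
Service display mode is COMPAT
LSNRCTL> show log_status
```

The show command echoes the current value of the named parameter for the target listener, while set changes it for the session.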
All configuration parameters that can be changed with the set command can also be displayed with the show command, with the exception of the password parameter. This parameter sets the password for the current Listener Control session only; it does not change the password of the listener itself, which is done with the change_password command. In other words, the change_password command is used to change the stored password for the target listener. Once a password has been set, all administrative tasks for that listener will require the password prior to executing. To supply that password, you would use the set password command, which will prompt you for the password. The password is only validated against the actual listener password when you attempt to perform an administrative task on the listener, such as stopping it. Only then will the listener compare the value you supplied with the set password command to the value specified with the change_password command.
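Putting the two commands together, securing a listener and then authenticating a later session might look like the following sketch (the listener name is an example, passwords are masked, and the prompts are paraphrased rather than verbatim):

```
LSNRCTL> change_password
Old password:           (press Enter if none is set)
New password:           ****
Reenter new password:   ****
Password changed for listener1
LSNRCTL> set password
Password:               ****
The command completed successfully
LSNRCTL> stop
```

Note that set password succeeds immediately; the value is only checked against the stored password when an administrative command such as stop is issued.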
TASK 1B-2
Using the Listener Control (lsnrctl) Utility
Objective: To use the Listener Control utility to manage the listener
process.
1. From the desktop, choose Start→Run. The Run dialog box will appear.
In the Run text box, type cmd and press Enter. A command window will
appear.
You will see that the name of the listener that you are currently working
with is LISTENER.
4. To see the current configuration of this listener, you will use the status
command. First, you will format the output of this command by setting the
displaymode option to compat, which will minimize the output to a
short summary. At the lsnrctl prompt, type set displaymode compat and
press Enter.
The utility will respond with the message “Service display mode is
COMPAT.”
At the prompt, type status and press Enter. The utility will display a summary of the configuration of this listener, including its uptime and the services it supports.
5. To switch the listener that you are currently working with to your new lis-
tener, type set current_listener listener1 and press Enter.
6. To see the current status of the new listener, type status and press Enter.
Since LISTENER1 is a brand new listener and has never been started before,
the corresponding Windows service for this listener has not yet been created.
Therefore, the lsnrctl utility will return the errors “TNS-12541: TNS:no lis-
tener,” “TNS-12560: TNS:protocol adapter error,” and “TNS-00511: No
listener.”
After a few moments, the utility will respond with a series of errors and messages. This output shows you that the utility is attempting to find the Windows service related to this listener, but cannot find it. Once it determines that the service does not exist, the utility will create the Windows service and start the listener.
8. To shut down this listener, you would use the stop command. At the lsnrctl
prompt, type stop and press Enter.
The utility will display messages stating that it connected to the listener at
its network address, and that the command completed successfully.
9. Since this listener is not currently running, attempting to request its status
will return an error. At the prompt, type status and press Enter.
The utility will display several error messages, indicating that the listener
could not be reached.
10. To start the listener again, type start and press Enter.
The listener will start, and the utility will display the listener’s current status.
You will be prompted for your old password. Currently, there is no password for this listener, so just press Enter.
You will then be prompted for your new password. Type ora92 and press
Enter. For security purposes, the characters you type will not be displayed
on the screen. When prompted to re-enter the new password, type ora92
and press Enter again.
The utility will display the message “Password changed for listener1.”
12. Now that the listener has been given a password, any administrative com-
mands sent to the listener will generate an error if the proper password isn’t
set for the current lsnrctl session. At the prompt, type reload and press
Enter.
Since the proper password has not been set for this lsnrctl session, the utility
will display the error “TNS-01169: The listener has not recognized the
password.”
13. To provide the password for this session, type set password and press
Enter.
The lsnrctl utility will prompt you to enter the password. Type ora92 and
press Enter.
The utility will display the message “The command completed successfully.”
14. Now that the session password has been set, administrative commands will
be allowed. At the prompt, type reload and press Enter.
The utility will display the message “The command completed successfully.”
15. To exit the lsnrctl utility, type exit and press Enter. You will be returned to
the command prompt.
To exit the command window, type exit and press Enter again. The com-
mand window will close.
TASK 1B-3
Configuring the Listener Manually
Objective: To configure the listener process manually.
1. From the desktop, choose Start→Run. The Run dialog box will appear.
3. The simplest way to manually add a listener definition to the listener.ora file is to copy the definition of an existing listener. In the listener.ora file, highlight the definition for the LISTENER1 listener, and then choose Edit→Copy.
Move the insertion point to the blank line after the last line of the LISTENER1 definition. Press Enter to add a blank line, then choose Edit→Paste. Add another blank line after the new LISTENER1 definition you just pasted.
In the definition you just pasted, change the name of the listener to LISTENER2.
Scroll down through the file to find the entry for SID_LIST_LISTENER1. This entry defines which databases the LISTENER1 listener will service. Highlight the entire SID_LIST_LISTENER1 entry. Copy the entry and paste a new one below it using the same method as the listener definition. In the new entry, change the name to SID_LIST_LISTENER2.
5. Choose File→Exit. You will be asked if you want to save the changes.
Click Yes. The file will close.
6. Now that you have added the appropriate definitions to the listener.ora file, you will use the lsnrctl utility to start the listener.
From the desktop, choose Start→Run. In the Run text box, type cmd and
press Enter.
At the command prompt, type lsnrctl and press Enter. This will launch the
lsnrctl utility.
7. You must first set the current listener to listener2. At the lsnrctl prompt, type
set current_listener listener2 and press Enter. The lsnrctl utility will dis-
play the message “Current Listener is listener2.”
Just as in the previous activity, the lsnrctl utility will first display the "Failed to open service" error message. This indicates that the Windows service for this listener has not yet been created.
9. Exit from the lsnrctl utility, and then exit from the command prompt.
Close all open windows.
Topic 1C
Basic Oracle Net Client-side Configuration
To connect to a remote database, the client must specify a user name and pass-
word to the database. However, the database must first be identified and located
on the network. When submitting a connection request, the client specifies an
alias to represent the intended database, which is resolved to the location on the
network where a listener for that database resides. Depending on how the client
is configured, the client will use that alias to perform a lookup, called a names
resolution method, to find the correct network location of the listener.

names resolution method: A configured method that is used to translate a net
service name to the network address where the intended database resides.

When submitting a connection request, the client will determine which names
resolution method to use by looking in its local sqlnet.ora file. This file is found
in the ORACLE_HOME/network/admin folder and contains basic configuration
information for all connections originating from the client. This file can be con-
figured through the Profile icon in the Net Manager utility, or it can be manually
configured. The NAMES.DIRECTORY_PATH parameter in the sqlnet.ora file
provides a list of names resolution methods that the client is configured to use.
The client will attempt to use the first method listed for this parameter. If the
attempt is unsuccessful, it will attempt to use the next method listed. If all of the
listed methods fail, the connection request fails and the client software will return
an error. Figure 1-15 shows a sample sqlnet.ora file.
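For instance, a sqlnet.ora configured to try local naming first and then host naming would include a line such as the following (method names as used later in this topic):

```
NAMES.DIRECTORY_PATH = (TNSNAMES, HOSTNAME)
```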
TASK 1C-1
Identify Names Resolution Methods
1. Which names resolution method would be best in an environment where
there are several dozen individual Oracle client systems and each client
needs the ability to connect to multiple Oracle databases on the net-
work? Explain your answer.
A centralized naming method would be best for this type of environment. All
connection definitions could easily be stored in a central repository where
connection information can be quickly looked up on demand. This greatly
reduces the maintenance overhead of configuring each client separately,
especially in an environment that is constantly changing, such as when cli-
ents and servers are constantly added, removed, or relocated within the
enterprise.
The output shows the method that was used to resolve the ora92 connect
string. The message “Used TNSNAMES adapter to resolve the alias” indi-
cates that the Oracle Net client used local naming by looking up the ora92
entry in the tnsnames.ora file. You will change the names resolution method
to use HOSTNAME instead of TNSNAMES.
3. Leaving the command prompt open, launch the Oracle Net Manager by choosing
Start→Programs→Oracle – OraHome92→Configuration and Migration
Tools→Net Manager.
4. In the left pane, to expand the tree, click the plus sign next to the Local
icon. In the tree, click the Profile icon.
The right pane will change to show the names resolution methods configured
for your Oracle Net client. The current settings reflect the parameters that
are currently set in your sqlnet.ora file.
8. You will now modify your system’s hosts file to include the ora92 connect
string.
From the desktop, choose Start→Run. In the Run text box, type C:\winnt\
system32\drivers\etc and click OK.
A window for the C:\winnt\system32\drivers\etc folder will appear.
Note: Your Windows directory may be named either winnt or windows,
depending on how your system was configured.
10. Move the insertion point to the end of the word localhost. Press Tab,
and then type ora92 and press Enter.
11. Choose File→Exit. You will be asked if you want to save the changes. Click
Yes. The hosts file will close.
The output will now show that the HOSTNAME adapter was used to resolve
the alias. You have successfully configured your client to use the host names
resolution method.
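After step 10, the modified line of the hosts file looks something like this (the loopback address is from a default Windows install; yours may differ):

```
127.0.0.1       localhost       ora92
```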
TASK 1C-3
Configuring Local Naming
Objective: To configure the Oracle client to use local naming.
2. You will first configure your client to use local naming as the first names
resolution method.
Click the plus sign next to the Local icon to expand the tree.
Click the Profile icon to display the list of naming methods currently
configured for your client.
4. You will now add a new net service name for your client.
5. On the toolbar at the far left of the Oracle Net Manager window, click the
large plus sign.
6. In the Net Service Name text box, type localdb and click Next.
8. On the Protocol Settings page, you are prompted for the Host Name of the
server where the target database is located. Since the database resides on the
same machine as your client, in the Host Name text box, type localhost
Leave the Port Number at its default of 1521 and click Next.
10. The Oracle Net Manager utility now has all the information it needs to cre-
ate the new net service name. On the Test page, you can test your new
connection to confirm that your settings were correct. To test your new net
service name, click the Test button.
The Connection Test window will appear showing that the test has started.
After a few moments, the window will show a message that states the con-
nection test was successful.
11. When you are finished looking at the results of the test, click Close to close
the Connection Test window.
You will see that the localdb net service name has been added to your cli-
ent’s configuration.
13. You will now use your new localdb net service name to connect to the
database.
14. In the Log On dialog box, type system for User Name, ora92 for Pass-
word, and localdb for Host String. Click OK.
After a moment, you will be connected to the ora92 database as the system
user. You have successfully configured your client to locate and connect to
the database using local naming.
15. At the SQL*Plus prompt, type exit and press Enter to close SQL*Plus.
In many cases, connectivity issues are the result of something so blatantly obvi-
ous that it becomes easy to miss. For example, if the database is not up and
running, then no client will be able to connect. However, if only a single user
actually uses that database, that person will probably be the only one complaining
about connection problems. It is almost too easy to jump to conclusions that it is
the client configuration that is incorrect, when it is really the database that has the
problem.
Once you have established that it truly is a single client that is having connection
problems, you can begin to drill down to determine exactly what that client’s
problem is. This is where the tnsping utility becomes extremely useful. All Oracle
client installations include this utility, which can be used to test connections from
the client to the target listener. Any errors returned from the tnsping utility will
usually lead you straight to the root of the problem. The following table lists three
of the most common errors that tnsping may return.

Common Errors From tnsping
Error: TNS-03505: Failed to resolve name
Description: The service name did not match a valid name listed in the
tnsnames.ora file. Typically this results from an incorrectly entered service
name at the application level.

Error: TNS-12541: No listener
Description: Indicates the listener has not been started (or has failed) on the
server. This can also happen if the actual port the listener is listening on is
different from that listed in the tnsnames.ora file.

Error: TNS-12154: TNS: Could not resolve service name
Description: Indicates name resolution was not possible. Usually results when
the tnsnames.ora file is unavailable or empty. This error differs from
TNS-03505, where resolution was possible but was not successful.
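For example, testing the ora92 connect string used earlier in this lesson (the response time shown is illustrative):

```
C:\> tnsping ora92
Used TNSNAMES adapter to resolve the alias
OK (30 msec)
```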
If the steps in your checklist do not lead you to the root of the problem, you can
enable logging and tracing for network connections using Oracle Net.
TASK 1C-4
Configuring Client and Server Tracing
Objective: To configure tracing for both the client and database server.
2. Click the plus sign next to the Local icon to expand the tree.
In the tree, click the Profile icon. The right pane will change to show the
current naming methods for your Oracle client.
3. In the right pane, click the drop-down list and select General.
4. The Tracing tab is used to enable tracing for both the database server and
the client by automatically adding the pertinent parameters to the sqlnet.ora
file.
In the Client Information section, click the Trace Level drop-down list,
and select Admin.
The Unique Trace File Name feature instructs the Oracle client to add the
process ID of each database connection to its trace file. This will ensure that
the trace file names will be unique, and will not be overwritten by subsequent
connections.
In the Server Information section, click the Trace Level drop-down list
and select Admin.
In the Log On dialog box, type system for User Name, ora92 for Pass-
word, and ora92 for Host String. Click OK.
After a moment, SQL*Plus will connect to your database. Since all you need
is a single connection to generate some basic client and server tracing, you
can simply close the connection. At the SQL*Plus prompt, type exit and
press Enter.
7. From the desktop, choose Start→Run. In the Run text box, type D:\oracle\
ora92\network\trace and click OK.
Note: The process ID numbers used in the trace file names on your system
will most likely be different than what is shown here. A trace file that includes
“_1” before the .trc extension is a continuation of the first trace file of the
same name. For example, the contents of the client_trace_508_1.trc file is a
continuation of the contents in the client_trace_508.trc file.
8. Double-click the first client_trace*.trc file. If the Open With dialog box
appears, select Notepad from the list of applications and click OK. The
file will open in a Notepad window.
Take a moment to look through the trace file. Keep in mind that the con-
tent of this file shows what is happening within the Oracle client while
trying to contact the database server and establish a connection.
Take a moment to look through the trace file. Keep in mind that the con-
tent of this file shows what is happening within the Oracle database server
while trying to establish a connection for an incoming connection request.
When you are done looking at the trace file, close the Notepad window.
10. The final file to check is the log file generated by the listener process. This
file is found in the D:\oracle\ora92\network\log folder.
From the desktop, choose Start→Run. In the Run text box, type D:\oracle\
ora92\network\log and click OK.
11. Your system currently has three listener processes configured, therefore you
have three listener log files. As defined in the tnsnames.ora file, the ora92
net service name contacts the default listener, named simply LISTENER, at
port 1521.
Double-click the listener.log file to open it. If the Open With dialog box
appears, select Notepad from the list of applications and click OK. The
file will open in a Notepad window.
The listener process will log all incoming connection requests, including
those that are not honored due to privilege settings or misconfigurations.
Take a moment to browse through this file. See if you can find the exact
moment in time your last connection request contacted the listener.
Topic 1D
Oracle Shared Server
The two types of configurations to handle users’ requests for data are single-
task and two-task configurations. In a single-task configuration, the user and
server processes are actually a single, combined process. The one process
executes both the application code and the Oracle code when processing a
request. Very few systems are capable of handling a single-task configuration.

shared server process: A single server process that can support database
requests from multiple clients simultaneously.

Two-task is the most commonly found configuration among Oracle setups. The
user process and server process are separate. The user process handles the appli-
cation code, while the server process handles the interaction with the database on
behalf of the user process. The server process can be either a dedicated or a
shared process. A dedicated server process serves only a single user process at a
time, while a shared server process can handle multiple user connections
simultaneously. A client/server environment is a common example of a two-task
configuration. The client handles the application or user process, while the server
handles the Oracle code to manipulate data in the database. Figure 1-19 (A
Dedicated Server Configuration) illustrates a client/server environment with a
dedicated server configuration.
TASK 1D-1
Describing Oracle Shared Server Architecture
1. List and describe the three types of configurations to handle users’
requests for data.
• Single-task configuration—User and server processes are a single, com-
bined process. Handles both the application code and the Oracle code.
• Two-task configuration—User and server processes are separate. User
process handles application code, while server process handles Oracle
code.
• Shared server configuration—Modified two-task configuration. Each
shared server process is capable of connecting to multiple user pro-
cesses to handle requests to the database.
3. True or False? Once you have Oracle Shared Server configured on your
system, all connections to the database must use a shared server
process. Explain your answer.
False. A user process can still request a dedicated server. Some operations
require a dedicated server, such as connecting to start and stop the data-
base, or simply connecting as SYS with the SYSDBA privilege.
LOCAL_LISTENER
The dispatcher processes must be registered with the listener process in order for
the listener to be aware that OSS has been configured for the database. The
LOCAL_LISTENER parameter specifies the service name for the listener, or mul-
tiple listeners, with which the dispatchers must register. The values given to this
parameter in the initialization file must exactly match the service name specified
in the connect descriptor as it is shown in the client’s tnsnames.ora file. For
instance, the following is an example service name shown in the tnsnames.ora
file:
O92SHARED =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = elementk)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = ora92oss)
)
)
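As a sketch, the parameter can be given either an alias defined in the server's own tnsnames.ora or a full address descriptor; the following reuses the host and port from the example above:

```
LOCAL_LISTENER = "(ADDRESS=(PROTOCOL=TCP)(HOST=elementk)(PORT=1521))"
```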
DISPATCHERS
The DISPATCHERS parameter specifies the number of dispatchers that are auto-
matically spawned when the database is started up. The default value is null,
which results in no dispatchers spawned at start up. You can also configure the
system to accept connections from multiple network protocols, and assign one or
more dispatchers to each protocol.
For each dispatcher, you can list only one of the following specifications to set
the network protocol:
• PROTOCOL—Specifies the network protocol the dispatcher will use.
• ADDRESS—Specifies the actual network address to listen on.
• DESCRIPTION—Specifies a network description in Oracle Net syntax.
The following is an example of a correct assignment of the DISPATCHERS ini-
tialization parameter:
DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)"
For multiple network protocols, the parameter will list each protocol separately,
assigning each one a specific number of dispatchers. The following is an example
of assigning the DISPATCHERS multiple network protocols:
Note: The double right arrow (⇒) indicates that the syntax should be typed as
if on a single line. The double right arrow is not part of the syntax.

DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3) ⇒
(PROTOCOL=IPC)(DISPATCHERS=2)"

The DISPATCHERS attribute (shown in parentheses) indicates the number of dis-
patchers to spawn at startup. If this keyword is left uninitialized, or omitted
completely, the attribute will default to 1 dispatcher spawned at startup. You
can also supply additional network attributes to the ADDRESS and DESCRIP-
TION attributes. This supports the possibility of having a host with multiple
Oracle homes. The additional attributes you can set include, but are not limited
to:
• SESSIONS—The maximum number of network sessions to be supported by
each dispatcher.
• LISTENER—The network name of an address or list of addresses for a
listener.
• SERVICE—The service name for the dispatcher to register with.
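A sketch combining these attributes (the values are illustrative, and SERVICE reuses the ora92oss name from the tnsnames.ora example earlier in this topic):

```
DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=2)(SESSIONS=500)(SERVICE=ora92oss)"
```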
MAX_DISPATCHERS
This parameter specifies the overall maximum number of dispatchers that are to
be created for the database. The default value is either 5 or the value of DIS-
PATCHERS, whichever is higher. The following is an example of properly setting
this parameter:
MAX_DISPATCHERS = 3
SHARED_SERVERS
The SHARED_SERVERS parameter specifies the number of shared server pro-
cesses to spawn when the database is started. The following is an example of this
parameter in the initialization file:
SHARED_SERVERS = 3
As with the dispatcher process, these server processes will be spawned at data-
base start up, and named. The name of each process will include an s for “shared
server”, a number, and the database system identifier. If you have specified that
three shared servers for the ora92 database are to be spawned at start up, those
server processes would be named:
ora_s001_ora92
ora_s002_ora92
ora_s003_ora92
The default value for this parameter is 0, which indicates that no shared servers
will be started for the database. Also, you can modify this parameter using the
ALTER SYSTEM command for the current instance.
MAX_SHARED_SERVERS
This parameter specifies the maximum number of shared server processes that are
to be created for the database. The default for this parameter is 20, or two times
SHARED_SERVERS, whichever is higher. The following is an example of set-
ting this parameter in the initialization file:
MAX_SHARED_SERVERS = 10
This parameter cannot be changed dynamically with the ALTER SYSTEM com-
mand for the current instance. It can only be modified in the parameter file or
spfile, which requires the database to be restarted for the parameter to take effect.
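With an spfile, the change is still made through the ALTER SYSTEM command, but scoped so that it is only recorded for the next restart rather than applied to the running instance (a sketch):

```
ALTER SYSTEM SET MAX_SHARED_SERVERS = 10 SCOPE=SPFILE;
```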
3. True or False? All users must share the User Global Area within the
SGA. Explain your answer.
False. Each user will have a private User Global Area within the SGA.
V$CIRCUIT
The V$CIRCUIT view contains information about each OSS connection. If there
is a problem with a specific process in the database, you can use this view to
gather information on the specific user.
V$DISPATCHER
You can monitor all your dispatchers with the V$DISPATCHER data dictionary
view. The most important columns in this view are the STATUS, IDLE, and
BUSY columns. The STATUS column has several possible values. The following
table lists those values and their descriptions.
The IDLE and BUSY columns provide the amount of time the dispatcher spent
idle or busy. You would determine the current total busy rate for the dispatchers
of each network by taking the ratio of total time busy to the total time the dis-
patcher existed. The following formula is used to determine the busy rate for
each dispatcher:
busy / (busy + idle)
This formula simply gives you the ratio of time spent busy to the total time the
dispatcher has been up and running. The following query will access
V$DISPATCHER for the total dispatcher busy rate for each protocol configured:
SELECT network,
SUM (busy) / (SUM (busy) + SUM (idle)) * 100 rate
FROM v$dispatcher
GROUP BY network
/
The following shows a sample output from this query, which in this case indi-
cates that only a single dispatcher is configured, and it has a busy rate of just
under 35 percent.
NETWORK RATE
-------------------------------------------------- ------
(ADDRESS=(PROTOCOL=tcp)(HOST=elementk)(PORT=1030)) 34.25
V$SHARED_SERVER
This view is used to monitor the shared server processes. As with the dispatchers,
you would monitor the shared servers by determining the current workload on the
servers. You would determine the busy rate by using the same formula used for
the dispatcher busy rate. If the total workload of the shared servers exceeds 50
percent, you should add shared servers. Shared server processes can be added by
changing the SHARED_SERVERS parameter. The following is an example of
increasing shared servers from three to five using the ALTER SYSTEM command
for the current instance and all subsequent restarts:
ALTER SYSTEM SET SHARED_SERVERS = 5 SCOPE=BOTH;
V$SHARED_SERVER_MONITOR
The V$SHARED_SERVER_MONITOR view provides information that can be
helpful in deciding the best value for the MAX_SHARED_SERVERS parameter.
The columns found in this view are listed in the following table.
COLUMN DESCRIPTION
MAXIMUM_CONNECTIONS Maximum number of connections that each dispatcher can
support (OS dependent).
MAXIMUM_SESSIONS Highest number of shared server sessions in use at one time
since the instance started.
SERVERS_STARTED Total number of additional shared servers started since instance
startup (above and beyond the specified number at instance start
up).
SERVERS_TERMINATED Total number of shared servers stopped since instance startup.
SERVERS_HIGHWATER The highest number of shared servers since database creation.
TASK 1D-3
Describe Methods to Monitor and Tune Oracle Shared
Server
1. How would you determine whether or not more dispatchers need to be
added to your configuration?
You would determine whether or not you need to add dispatchers by deter-
mining the overall busy rate of the dispatcher processes. You would use the
following formula with values from the IDLE and BUSY columns of the
V$DISPATCHER view:
busy / (busy + idle)
If the results of this formula show that the dispatchers are busy 50 percent
of the time or more, you should add dispatchers.
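If the rate is too high, dispatchers can be added dynamically by resetting the DISPATCHERS parameter; for example, raising the TCP dispatcher count to five (a sketch following the earlier examples):

```
ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=5)';
```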
Figure 1-23: The communications stack with the IIOP and GIOP layers.
In this figure, the client machine accesses the Internet in general using IIOP pro-
tocols, specifically HTTP or FTP. All requests from the client are sent through the
IIOP presentation layer using industry-standard object requests. The requests are
sent to the database through the network using TCP/IP and are translated by the
database using the standard TCP/IP protocol support layer. The request is then
sent through the database’s GIOP presentation layer to hand off to the database.
To support Web client connections, the database server must have Oracle Shared
Server configured with at least one available dispatcher. The listener can be con-
figured to listen for both traditional database requests and for Web client requests.
The IIOP protocol is used by the client to contact the listener, which is listening
on a specially configured port for Web connections. The listener will hand the
connection off to the appropriate dispatcher that is configured to receive requests
from the GIOP layer.
To configure the GIOP layer on the database side, you would simply include the
PRESENTATION argument with the set value for the DISPATCHERS initializa-
tion parameter. This argument is set to Oracle’s predefined GIOP presentation
layer that is included with the Oracle software. For example, the following shows
the DISPATCHERS parameter correctly configured to support Web client connec-
tions through the GIOP presentation layer.
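As a sketch, such a setting takes roughly the following form; the presentation class name oracle.aurora.server.GiopServer is taken from the Oracle9i documentation and should be verified against your release:

```
DISPATCHERS = "(PROTOCOL=TCP)(PRESENTATION=oracle.aurora.server.GiopServer)"
```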
2. True or False? IIOP connection requests can bypass the listener and
directly contact dispatchers through the network. Explain your answer.
True. IIOP clients can be configured to directly contact dispatchers at pre-
assigned ports. Each dispatcher can be assigned a port by specifying a host
and port number for the ADDRESS keyword of the DISPATCHERS
parameter.
Summary
In this lesson, you learned about the features of Oracle Net and how to con-
figure both the Oracle database server and the client to establish connections
to the database. You also learned how to configure Oracle Shared Server to
improve the scalability of the Oracle database and accept large numbers of
concurrent connections. Additionally, you learned about the additional net-
working options that are available for Oracle and the features they provide.
Which protocols does the Oracle protocol support layer currently sup-
port? Choose all that apply.
✓ a. Named Pipes
✓ b. TCP/IP
✓ c. TCP/IP with SSL
d. UDP
Even if you set an administrative password for the listener, how can the
security and integrity of the listener process be compromised by an
unscrupulous individual?
Since the listener is just a process that runs at the OS level, any user with
enough privileges through the OS can kill the process or stop its correspond-
ing Windows service. Additionally, since the listener configuration is stored
in a simple text file, it too can be compromised by any privileged user
through the OS. A user with Administrator or root privileges can kill the
listener process, change its configuration in the listener.ora file, and restart
it.
1D What happens after a user process issues a SQL command to the dis-
patcher process?
The dispatcher process places the SQL command in the request queue to be
picked up by the next available shared server process.
What query to the V$DISPATCHER view will give you the total dis-
patcher busy rate for each protocol configured?
SELECT network, SUM(busy) / (SUM(busy) + SUM(idle)) * 100
rate
FROM v$dispatcher
GROUP BY network;
Objectives
To describe the basic concepts of database backup and recovery in Oracle9i, you
will:
Statement Failure
When statement failure occurs, a single SQL statement, usually a SELECT state-
ment, from a single Oracle client has failed. This type of failure can occur for a
wide variety of reasons. It is possible that the statement issued from the client
was syntactically incorrect, or the statement referenced an object that either didn’t
exist or the user didn’t have privileges to access. Statement failure can also occur
if a statement performs a large amount of sorting in the temporary tablespace,
and the temporary tablespace runs out of space during the operation. This type of
failure usually has no damaging effects on the database, therefore no recovery is
required. The client issuing the statement will simply receive an error stating the
reason why the statement failed.
Transaction Failure
undo segments: Database segments that temporarily hold the original versions
of changed data from a transaction in case the transaction needs to be rolled
back and the original version replaced.

Transaction failure differs from statement failure only in that transaction failure
occurs while a client is in the middle of manipulating the data. The client issues
a DML statement to perform one or more INSERT, UPDATE, or DELETE
operations on the data, and the entire transaction fails. To recover from transac-
tion failure, Oracle makes use of undo segments in the database. As a transaction
executes, the data that is to be changed is first copied to the undo segments. Once
the transaction is committed, the undo segment the transaction was using is
released. If the transaction fails, Oracle will automatically roll back the entire
transaction, using the original copies of the data in the undo segment to return the
data to its original state before the transaction started.
Each transaction can consist of one or more DML statements in the database. For
data consistency, Oracle treats every transaction as an atomic unit of work. This
means that the entire transaction is successful or the entire transaction is rolled
back; Oracle does not allow the partial completion of a transaction. A transaction
starts when the first DML statement is issued, and completes when the transaction
is committed. There can be any number of DML operations between the first
DML statement and the commit. For example, the following series of statements
consist of a single transaction, which ends when the COMMIT statement is issued.
INSERT INTO dept VALUES (50, 'SHIPPING', 'MIAMI');
INSERT INTO emp VALUES (7999, 'SIMS', 50);
UPDATE emp
SET salary = salary * 1.1;
COMMIT;
In this example, the transaction starts when the first statement is issued, which is
the INSERT statement that creates a row in the DEPT table. The transaction con-
sists of a total of three DML statements, and is complete when the COMMIT
statement is issued. If the transaction fails at any point after the INSERT state-
ment begins and before the COMMIT is issued, all changes will be rolled back. A
transaction can be committed by either explicit or implicit commits. An explicit
commit occurs when the client issues the COMMIT statement. An implicit commit
occurs when some new activity requires that all transactions within the session be
committed. All DDL statements cause an implicit commit.
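For instance, issuing a DDL statement mid-session implicitly commits any pending DML (the object names here are illustrative):

```sql
UPDATE emp SET salary = salary * 1.1;
-- The DDL statement below implicitly commits the UPDATE above
CREATE INDEX emp_sal_idx ON emp (salary);
```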
A transaction failure can be caused by any number of reasons, but is usually
caused by the shortage of database resources. For example, a long-running trans-
action may generate so much undo information that it causes the undo segment
extent to grow to the point that the tablespace is full. If the undo segment cannot
extend, the transaction will fail. Oracle automatically handles all transaction fail-
ures through the use of the undo segments, so no DBA intervention is necessary.
If a transaction fails, the client issuing the transaction will receive an error stating
why the transaction failed.
Session Failure
Session failure occurs when a client loses its connection to the database com-
pletely, and usually occurs when any component of a database connection, such
as client, server, or network, becomes unavailable. For example, if the client loses
its connectivity to the network, or if the client machine crashes entirely, it will
also have lost contact with the database. Depending on the failure, a trace file
concerning the error may be generated, and the client may either receive an error
back immediately from Oracle that states what the error is, or the error will be
returned the next time the client attempts to access the database. Of course, if the
session failure occurred because the client machine crashed completely, then there
will be no error returned, but the reason for the session failure will be obvious to
the user.
Session failure may also cause lesser failures, such as the failure of any state-
ments or transactions the client was issuing to the database at the time. Statement
failure has no effect on the database, and a failed transaction will be rolled back
automatically by Oracle using the undo segments. The only other action that must
occur is releasing any database or system resources consumed by the client con-
nection, such as PGA space in memory and locks and latches in the database.
Such resource “cleanup” is handled by the process monitor (PMON) background
process. If PMON detects that any client has lost connectivity to the database, it
will immediately attempt to release all resources consumed by the failed process,
thus freeing up the resources to be used by other new or existing database
connections.
Media Failure
While instance failure is the most widespread, media failure is actually the most
damaging, and can usually cause the most downtime. Media failure occurs when
a hardware component of the server where the database resides, usually a disk or
disk array, has failed, become damaged, or has become otherwise unavailable.
Media failure may or may not cause instance failure. If the disk
media in question contains the Oracle binary executables, the datafiles of the
SYSTEM tablespace, all copies of the control file, or the current online redo log,
then the instance will fail and cannot be restarted until the affected
files are restored and recovered. However, if the disk contains one or more
datafiles of non-SYSTEM tablespaces, then the instance can usually stay up and
running, but Oracle will automatically take the affected datafiles offline. Oracle
will write error messages to the alert log for each of these datafiles at every
attempt to write to them until the datafiles are recovered.
Media failure is considered the most severe type of failure because, while it may
not cause instance failure, it can result in the actual loss of data. Instance failure
is the most widespread type of failure because it affects all sessions, but it can
usually be resolved very quickly by simply restarting the instance. Media failure usually
requires additional steps to restore the database to its original state, and if not
done properly, the loss of data can be permanent and extremely costly. Even if
the instance is up and running, if the core data that an application depends on is
unavailable, the application can no longer function. This is also considered
downtime.
TASK 2A-2
Describe Backup and Recovery Basics
1. Define the terms MTBF and MTTR. How do these concepts influence
your backup and recovery strategy?
MTBF stands for mean-time between failures, which indicates how much
time passes between one major failure and the next. MTTR stands for mean-
time to recovery, which indicates the total amount of time required between
a failure and a resolution to that failure.
To improve database uptime and availability, it should be the DBA’s goal to
maximize the amount of time between failures and minimize the amount of
time between a failure and its resolution. The amount of downtime a particu-
lar database can allow will be highly dependent on the business
requirements for that database and will play heavily into the backup and
recovery strategy. A database that can tolerate the occasional failure may
require only a moderate backup and recovery strategy, but a database that
cannot allow any downtime whatsoever will require a very in-depth high
availability and backup and recovery strategy.
Figure 2-4: Redo log groups and their log file members.
The minimum redo log configuration is two groups, each with one log
file member. For performance reasons, it is recommended that you configure
at least three redo log groups. The size of the log file members can be manually
set when they are created, but should be sized appropriately for your system. Log
file members that are sized too small for a busy system will cause the log files to
fill up very quickly, which results in excessive log switching and can hurt
performance. It is also recommended that you configure at least two members for
each log file group so that you can place the members of each group on separate
disks, protecting you if your system experiences media failure on a disk where
one of your log file members resides.
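As a sketch of that recommendation, a new group with two members on separate disks could be added like this. The group number, file names, size, and the second disk (E:\) are illustrative, not taken from the course environment:

```sql
-- Add a new redo log group with two members, each on a different disk,
-- so the loss of one disk does not destroy the whole group.
ALTER DATABASE ADD LOGFILE GROUP 4
  ('D:\oracle\oradata\ora92\REDO04A.LOG',
   'E:\oracle\oradata\ora92\REDO04B.LOG') SIZE 100M;
```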
Control File
The control file is the single most important file in the database, both for normal
operations and for recovery operations. It contains the current file structure and
the most current SCN for the database, along with a list of the most recent
archive log files that have been generated for the database. A database can have
one or more copies of the control file, which are found at the location specified
by the CONTROL_FILES parameter. The parameter CONTROL_FILE_
RECORD_KEEP_TIME determines how long the information in the control file
will be kept before it is overwritten with new information. This is particularly
important for the archive log information, which is ever-growing for databases in
ARCHIVELOG mode.
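For example, you could list the control file copies and adjust the record keep time as follows. The 30-day value is only an illustration; size it against your own archive log volume:

```sql
-- Show every control file copy the instance is using.
SELECT name FROM v$controlfile;

-- Keep reusable control file records, such as the archive log history,
-- for 30 days before Oracle may overwrite them.
ALTER SYSTEM SET control_file_record_keep_time = 30;
```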
Each copy of the control file contains identical information, but it is recom-
mended that you configure multiple copies to eliminate a single point of failure.
If all copies of the control file (or the only copy) become corrupted or lost, the
database would crash and could not restart until a valid control file exists.
During recovery, Oracle uses the SCN in the control file as a reference point to
determine which datafiles need recovery. If the header of a datafile contains an
SCN that is lower than what is contained in the control file, that datafile will
need to be recovered. As the roll forward phase of recovery begins, Oracle
applies the missing changes to the datafile, and the SCN for that datafile
increments. Once the SCN for that datafile matches that of the control file, the
datafile is considered to be recovered, and once all datafiles are recovered, the
database can then be opened for general use.
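You can observe this comparison yourself by querying the SCN the control file records for each datafile (V$DATAFILE) against the SCN stored in the datafile header (V$DATAFILE_HEADER). The column aliases below are illustrative:

```sql
-- A header SCN lower than the control file SCN means the file needs recovery.
SELECT f.name,
       f.checkpoint_change# AS controlfile_scn,
       h.checkpoint_change# AS header_scn
FROM   v$datafile f, v$datafile_header h
WHERE  f.file# = h.file#;
```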
TASK 2A-3
Identify Instance and Media Recovery Structures
1. Which component of the Oracle database is used as the reference point
to determine whether or not the database needs recovery?
✓ a. Control file
b. Data dictionary
c. Redo logs
d. SYSTEM tablespace datafile
2. Which background process updates the datafiles and control files with
the current system change number (SCN)?
✓ a. CKPT
b. DBWR
c. LGWR
d. SMON
4. What would be the effect on the database if Oracle did not have a
built-in redo logging mechanism?
Without the built-in redo logging mechanism, there would be no record of
the exact transactions that took place when changes to data are made. If a
serious failure in the system requires the database to be restored from
backup, those transactions would not be available to allow a re-enactment of
the changes. The database could only be recoverable to the point in time
when the last database backup was generated, and any changes that
occurred after that point in time would be lost. The ability to log all transactions
somewhere outside of the datafiles makes it possible to apply the
latest changes to the database if the datafiles had to be restored from the
last backup, providing the database with true point-of-failure recoverability.
Topic 2B
Cold Backup and Recovery Concepts
Performing a cold backup is as simple as shutting down the database cleanly and
making a copy of the spfile, control file, datafiles, and redo log files. Once the
copies are done, you simply start the database back up. While this method of
backup requires database downtime, it can be used to create a backup that is
guaranteed to be consistent with absolutely no loss of data or transactions.
Finding Files to Back Up

Prior to shutting down the database, you would need to know the exact names
and locations of all the files you intend to copy. The V$PARAMETER view provides
the locations of the spfile and control files. The following query can be
used to find this information.
SELECT name, value
FROM v$parameter
WHERE name IN ('control_files','spfile');
While a database may have multiple mirrors of the control file, you only need to
back up one of them, since all the mirrors are identical. There is no harm, however,
in backing up all of them. Control files take up very little space, and if one
backup copy is bad, you would still have others you could use to recover from.
The DBA_DATA_FILES view provides a list of all datafiles in the database. It
also provides the sizes of the datafiles, which can be useful in determining
whether or not the location where you intend to store your backup has sufficient
space. The following query can be used to find the names, locations, and sizes of
your datafiles.
SELECT file_name, bytes/1024/1024 mb
FROM dba_data_files;
TASK 2B-1
Performing a Cold Database Backup
Objective: To perform a full, cold backup of the database.
1. Before backing up the database, you should identify the files that should be
included in your backup set. The backup set should include the spfile, con-
trol file, datafiles, and redo log files. The names and locations for these files
are found in the V$PARAMETER, DBA_DATA_FILES, and V$LOGFILE
data dictionary views. You will query these views to identify which files
should be backed up.
2. You will first query the V$PARAMETER view. This view will give you the
locations of the spfile and control files.
First, format the output of your query by typing the following com-
mands at the SQL*Plus prompt. Press Enter after each command.
COLUMN name FORMAT a15
COLUMN value FORMAT a50
The results of your query will show you the names and locations of the
spfile and control files for your database.
The control_files parameter shows that all three of your control files are
located in the D:\oracle\oradata\ora92 folder.
3. You will now query the DBA_DATA_FILES view to determine the names
and locations of all the datafiles that currently exist for your database.
First, format the output of your query with the following command:
COLUMN file_name FORMAT a50
The output will show the names and locations of all the datafiles in your
database. You will see that all your datafiles are currently stored in the
D:\oracle\oradata\ora92 folder.
4. You will now query the V$LOGFILE view to determine the names and loca-
tions of your redo log files.
First, format the output of your query with the following command:
COLUMN member FORMAT a50
The output will show the names and locations of the redo logs for your
database. Like your datafiles, all of your redo logs are currently stored in the
D:\oracle\oradata\ora92 folder.
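The query itself would be along these lines (the MEMBER column is the one formatted above):

```sql
-- List each redo log member and the group it belongs to.
SELECT group#, member
FROM   v$logfile
ORDER BY group#;
```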
5. Before taking a cold backup of the database, you must first shut it down.
6. Now that you have identified the names and locations of all the files you
need to back up, you will need to identify the location where you will copy
the files.
Leaving the SQL*Plus window open, choose Start→Run. In the Run text
box, type D:\oracle and click OK. A window for the D:\oracle folder will
appear.
From the D:\oracle folder, double-click the ora92 folder. The window will
change to show the contents of the ora92 folder.
Double-click the database folder. You will see a list of files. Select
SPFILEORA92.ORA. Choose Edit→Copy.
Choose Edit→Paste.
8. You will now create backups of all your control files, datafiles, and redo log
files.
From the cold_bup folder, press Backspace to move back to the D:\oracle
folder.
You will see a list of the control files, datafiles, and redo log files for your
database.
Choose Edit→Select All. All the files in the ora92 folder will be
highlighted.
Choose Edit→Copy.
A progress meter will appear while the files are copied. It will take a few
minutes to copy all of the files to the backup destination.
Once the file copies are finished, the backup is complete. Close the cold_
bup window.
After a few moments, the output will show that the instance was started, and
the database was mounted and opened.
10. Type exit and press Enter to close the SQL*Plus window.
Argument Purpose
file The path and file name of the datafile to verify.
start The datablock number to start scanning from. If omitted, DBVerify will start
at the first datablock in the file.
end The datablock number at which to stop scanning. If omitted, DBVerify will
stop after the last datablock in the file.
blocksize The size in bytes of the datablocks in the datafile. The default value is
2048.
logfile The path and file name of the log file to generate. If omitted, no log file
will be generated.
feedback Instructs DBVerify to display a period (.) on the screen for every n
datablocks verified, where n is the value specified. The default is 0,
which means no feedback is displayed.
parfile Specifies an optional parameter file that contains additional arguments.
userid User name and password of the database user to use when logging into
the target database if you are scanning a segment specified by the
segment_id argument.
segment_id Specifies the segment to scan.
C:\>
If any corruption in the file is found, the output would show the number of
data or index blocks that were corrupted. To scan a specific segment in the
database, you would use the following syntax:
dbv userid=username/pwd segment_id=tsn.segfile.segblock
In this syntax, username/pwd specifies the database user and password to log
in to the database with. If you are scanning a specific segment, you must execute
DBVerify from the same machine where the database resides. Therefore, you can-
not specify a connect string for a remote database.
For the segment_id argument, tsn represents the tablespace number,
segfile represents the target file number, and segblock represents the block
number where the header of the target segment resides. The values you need to
specify can be determined by querying the appropriate data dictionary views for
the segment in question, such as DBA_TABLES and DBA_SEGMENTS. The
following example shows DBVerify scanning a single segment:
C:\>dbv userid=system/ora92 segment_id=5.5.5897
C:\>
In this example, a single segment that resides at block number 5897 in datafile 5
of tablespace 5 was scanned for corruption.
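The three segment_id values can be looked up ahead of time with a query such as the following sketch. The OE.ORDERS segment named here is only an example:

```sql
-- tsn comes from V$TABLESPACE; segfile and segblock come from DBA_SEGMENTS.
SELECT t.ts#          AS tsn,
       s.header_file  AS segfile,
       s.header_block AS segblock
FROM   dba_segments s, v$tablespace t
WHERE  s.tablespace_name = t.name
AND    s.owner        = 'OE'
AND    s.segment_name = 'ORDERS';
```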
TASK 2B-2
Perform a Cold Database Recovery
Objective: To perform a cold recovery of the database using the latest
full, cold backup set.
1. Before performing a cold recovery, you will first simulate disk failure on the
disk where the control files, datafiles, and redo log files reside.
In the Log On dialog box, type sys for User Name, ora92 for Password,
and ora92 as sysdba for Host String. Click OK.
After a few moments, the output will state that the database was closed, dis-
mounted, and the instance shut down.
3. Leaving the SQL*Plus window open, choose Start→Run. In the Run text
box, type D:\oracle\oradata\ora92 and click OK.
4. Choose Edit→Select All. All the files in the window will be selected.
Press Shift+Delete. A question box will appear asking if you are sure you
want to delete these 17 files. Click Yes. All of your control files, datafiles,
and redo log files will be permanently erased from the disk. This simulates
complete disk failure.
After a few moments, you will see the instance start, but a message will
indicate that the database could not be mounted.
Double-click the alert_ora92.log file. The alert log will open in a Notepad
window. Scroll to the bottom of the file.
The last several lines in the alert log show the reason why the database
could not be mounted. The operating system could not find the CONTROL01.
CTL file in the D:\oracle\oradata\ora92 directory. Upon further investigation,
you realize that the entire disk where that control file resides has crashed.
7. A new disk has been installed, and you will now restore all the necessary
files from your last cold backup. Before doing so, you must first shut down
the instance.
At the SQL*Plus prompt, type shutdown immediate; and press Enter. After
a moment, the output will show that the instance was shut down.
8. Leaving the SQL*Plus window open, choose Start→Run. In the Run text
box, type D:\oracle\cold_bup and click OK. A window for the cold_bup
folder will appear.
Choose Edit→Copy.
Double-click the oradata folder to open it. In the oradata folder, double-
click the ora92 folder to open it.
Choose Edit→Paste. A progress meter will appear while the files are copied.
After a few moments, you will see the instance start, and the database mount
and open. You have successfully performed a cold recovery of the database
from the last cold backup.
10. Type exit and press Enter to close the SQL*Plus window.
Topic 2C
Hot Backup and Recovery Concepts
While hot backups are more complicated than cold backups, the ability to back
up the database while it remains up and running provides much more power and
flexibility. With hot backups, you can back up either the entire database or a
single tablespace at a time. This allows you to spread your backups over a longer
period of time, which can be very handy if your database is very large. Additionally,
you have the ability to restore just a single tablespace, or even a single
datafile, if the situation calls for it.
The LOG_ARCHIVE_FORMAT parameter can be set only in the spfile and can
be assigned any combination of four pattern variables plus any additional envi-
ronment variables or literal characters you wish to include. For example, the
following is an ALTER SYSTEM command to set the LOG_ARCHIVE_
FORMAT parameter using the %S pattern variable and the $ORACLE_SID
environment variable in a Unix environment.
ALTER SYSTEM SET log_archive_format='log_$ORACLE_SID_%S.arc'
SCOPE=SPFILE;
In this example, if the $ORACLE_SID variable was set to ora92, and the log
sequence number of the redo log being archived was 17, the name of the archive
log to be generated would be log_ora92_00017.arc. Including the $ORACLE_SID
environment variable can be useful to identify exactly which database a list of
archive logs came from. This can be handy if you have several databases
archiving their log files to the same or similar directory structure. The default
LOG_ARCHIVE_FORMAT parameter is OS dependent.
Another parameter that can be set is the LOG_ARCHIVE_START parameter.
This parameter accepts only the values TRUE or FALSE and is optional, but it is
highly recommended that you set it. If LOG_ARCHIVE_START is left set to
its default of FALSE, the archiver process will archive the redo logs only when
told to do so by the ALTER SYSTEM ARCHIVE LOG ALL command.
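Since LOG_ARCHIVE_START is a static parameter, a typical way to set it would be:

```sql
-- Takes effect at the next instance startup; must be written to the spfile.
ALTER SYSTEM SET log_archive_start = TRUE SCOPE = SPFILE;
```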
Figure 2-6: The output from the ARCHIVE LOG LIST command.
After setting the appropriate parameters, you can then enable ARCHIVELOG
mode. First, the database must be shut down cleanly, meaning that the instance
must not have been aborted either through instance failure or the SHUTDOWN
ABORT command. The database must then be started up using the STARTUP
MOUNT command. Once in the mount mode, you would enable archive logging
by issuing the ALTER DATABASE ARCHIVELOG command. Once this com-
mand is executed, you can open the database for general use. It is recommended
that once you enable ARCHIVELOG mode that you manually switch the redo
logs a few times to ensure that the logs are archiving, and that they are being
archived to the correct location. The following sample output shows the sequence
of events to enable ARCHIVELOG mode for a database.
Database altered.
Database altered.
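The full sequence described above would look something like this from SQL*Plus; the two "Database altered." messages correspond to the two ALTER DATABASE commands:

```sql
-- Clean shutdown (never SHUTDOWN ABORT before enabling ARCHIVELOG mode)
SHUTDOWN IMMEDIATE
-- Mount, but do not open, the database
STARTUP MOUNT
-- Enable archive logging, then open for general use
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```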
In the Log On dialog box, type sys for User Name, ora92 for Password,
and ora92 as sysdba for Host String. Click OK.
2. Before changing the configuration of the database, you should determine the
current configuration.
At the SQL*Plus prompt, type ARCHIVE LOG LIST; and press Enter.
The output will show that the database is currently in No Archive Mode.
Currently, hot backups cannot be performed for this database.

The log sequence numbers for your system may be slightly different from what is
shown here.
4. You must now restart the database to have these changes take effect.
After a moment, the output will show that the database has been closed and
dismounted, and the instance shut down.
5. You must bring the database up to the mount stage to enable ARCHIVELOG
mode.
After a moment, you will see that the instance is started and the database is
mounted.
The output will show you that the database is in ARCHIVELOG mode, that
automatic archiving is enabled, and that full redo logs will be archived to the
D:\oracle\ora92\database\archive folder.
9. To verify that your settings are correct, you will force a log switch. This will
cause the archiver process to copy the current online redo log to the archive
log destination.
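The switch is forced with a single command:

```sql
-- Force a log switch; the archiver then copies the completed redo log
-- to the archive destination.
ALTER SYSTEM SWITCH LOGFILE;
```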
10. Leaving the SQL*Plus window open, choose Start→Run. In the Run text
box, type D:\oracle\ora92\database\archive and click OK. A window for
the D:\oracle\ora92\database\archive folder will open.
You will see a single archive log file. This shows that you have indeed configured
the database in ARCHIVELOG mode with the proper settings. Hot
backups can now be performed for this database.
11. At the SQL*Plus prompt, type exit and press Enter to close SQL*Plus.
Column Description
FILE# File ID number.
ONLINE Left over from previous versions of Oracle and is obsolete. Will
always show the same value as the ONLINE_STATUS column.
ONLINE_STATUS Online status of the datafile.
ERROR Reason why the datafile needs recovery. Will be null if the reason is
unknown.
CHANGE# SCN where recovery must start.
TIME Time of the starting recovery SCN.
The most important column of this view is the ERROR column, which tells you
exactly why the file needs recovery. For example, if this column shows the value
'FILE NOT FOUND', then either the file was accidentally deleted from the file
system, or the entire disk where the file resides has gone offline or crashed. The
FILE# column lists only the ID number of each datafile that needs recovery. You
can use the file ID to look up the actual path and name of the file in the
V$DATAFILE view. The file ID also resides in the DBA_DATA_FILES view, but
this view is only available while the database is open; the V$DATAFILE view is
available when the database is in the mounted state.
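A join along the following lines translates the file numbers into full paths while the database is still only mounted (the column list is illustrative):

```sql
-- Show each datafile needing recovery, why, and the SCN to start from.
SELECT r.file#, d.name, r.error, r.change#
FROM   v$recover_file r, v$datafile d
WHERE  r.file# = d.file#;
```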
Once you have determined why a datafile needs recovery, you can decide what
steps to take next. If the datafile exists but is simply offline due to other errors,
then you can simply begin the recovery process to bring the file up to date with
the rest of the database, and then bring the datafile or tablespace online. If the file
is completely missing, then you must first restore it from a hot backup before
initiating recovery. If the entire disk where the file resides is damaged or unus-
able, then you must restore the datafile to a different location.
Once the datafiles in question are in place, you can begin the recovery process
with the RECOVER command. You can perform the recovery at the datafile,
tablespace, or database levels. If you issue the RECOVER DATAFILE command,
only the specified datafile will be recovered, while RECOVER TABLESPACE
command will recover all datafiles that make up the specified tablespace. If you
issue the RECOVER DATABASE command, all datafiles in the database that need
recovery will be recovered. In most cases, using the RECOVER DATABASE com-
mand is easiest for a large database since it will cause Oracle to automatically
recover all datafiles that need recovery without having to list each datafile or
tablespace separately.
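As a sketch, the three recovery targets look like this from SQL*Plus; the datafile path and tablespace name are illustrative:

```sql
-- Recover one restored datafile.
RECOVER DATAFILE 'D:\oracle\oradata\ora92\USERS01.DBF';

-- Recover every datafile of one tablespace.
RECOVER TABLESPACE users;

-- Recover all datafiles in the database that need recovery.
RECOVER DATABASE;
```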
During the recovery process, Oracle will compare the SCN in the control file
with the SCNs in the datafiles being recovered. If any datafile SCNs are lower
than what is shown in the control file, then Oracle will determine where the redo
information necessary for recovery resides, either in the current online redo log or
an archived log. If the redo information resides in the current online redo log,
Summary
In this lesson, you learned about the types of failures that may occur in an
Oracle9i database and the features provided to resolve them. You also
learned how to create a cold backup of the database and how to use that
backup to restore the database after a failure. Finally, you learned about the
mechanisms in place to back up and recover an Oracle9i database with
minimal data loss.
Each redo log group in your database consists of two redo log file
members. What will happen if media failure causes the loss of one log
file member of one log file group?
In this scenario, the Oracle database will stay up and running. Since each log
file member in a single group contains the same information, Oracle will be
able to keep generating changes and switching redo logs as long as at least
one redo log member is available in each group. The primary purpose of
multiplexing the redo log files is to ensure the database is durable enough to
withstand losing a single log file member without crashing.
2B Which data dictionary views could you use to determine the names and
locations of the files that you need to back up as part of a full, cold
database backup? Choose all that apply.
✓ a. DBA_DATA_FILES
b. DBA_TABLESPACES
✓ c. V$PARAMETER
✓ d. V$LOGFILE
True or False? When restoring the database from a cold backup, you
must restore only the files that were lost due to media failure. Explain
your answer.
False. You must restore all files from the cold backup, including the control
files and redo log files, not just the files that were lost due to media failure.
Objectives
To perform user-managed backups and recoveries of the database, you will:
There is one small but very important difference between a tablespace in hot
backup mode and a tablespace not in hot backup mode. While a tablespace is in
hot backup mode, all changes to datablocks that are stored in the tablespace are
still written from the buffer cache to the datafiles by the DBWR process when
checkpoints occur. However, the checkpoints are deferred for tablespaces in hot
backup mode. That is to say that CKPT does not update the headers of the
datafiles of a tablespace while that tablespace is in hot backup mode. The
datafiles will still contain the SCN that was current at the point in time when the
tablespace was placed in hot backup mode. Since the control file tracks the cur-
rent SCN for the entire database, once a tablespace is taken out of hot backup
mode, Oracle simply advances the tablespace’s datafile headers to the latest
checkpoint to bring them up to date with the rest of the database. While the
tablespace is in hot backup mode, its datafiles may be changing while the OS is
copying them, which can create an inconsistent copy of the files. However, this is
easily rectified by Oracle’s recovery mechanisms when the backup datafiles
are restored and recovered.
The V$BACKUP data dictionary view shows which datafiles are currently in hot
backup mode, along with the SCN and timestamp recorded when each datafile
was last placed in hot backup mode. The following table shows the columns of the
V$BACKUP view and their descriptions.
Column Description
FILE# The file ID number.
STATUS Current status of the datafile. Values include ACTIVE if the datafile is in hot
backup mode, or NOT ACTIVE if the datafile is not in hot backup mode.
CHANGE# The SCN of the datafile when it was last placed in hot backup mode.
TIME The timestamp of the datafile when it was last placed in hot backup mode.
Since the V$BACKUP view contains only the file number of each datafile, it is
much easier to identify the tablespaces and their datafiles by joining this view
with the V$DATAFILE and V$TABLESPACE views to provide a complete listing
of tablespaces with their datafiles. The following is an example query of these
three views showing the backup status of all the tablespaces and their datafiles.
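One way to write that query is sketched below (the column aliases are illustrative):

```sql
-- Backup status of every datafile, listed by tablespace.
SELECT t.name AS tablespace_name,
       d.name AS file_name,
       b.status
FROM   v$backup b, v$datafile d, v$tablespace t
WHERE  b.file# = d.file#
AND    d.ts#   = t.ts#
ORDER BY t.name;
```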
TASK 3A-1
Hot Backup Tablespaces
Objective: To perform hot backups of the tablespaces.
1. Before beginning the tablespace hot backup, you will create a folder to store
the backed up datafiles.
From the desktop, choose Start→Run. In the Run text box, type D:\oracle
and click OK. A window for the D:\oracle directory will open.
2. Leaving the D:\oracle window open, launch SQL*Plus from the Start
menu.
In the Log On dialog box, type sys for User Name, ora92 for Password,
and ora92 as sysdba for Host String. Click OK.
3. Before setting any tablespaces to backup mode, you should determine the
current backup status of the tablespaces.
The bup_status.sql script will open. This script queries from the
The output will show that none of the tablespaces are currently in backup
mode.
4. You will back up the EXAMPLE and SYSTEM tablespaces. At the prompt,
issue the following commands:
ALTER TABLESPACE example BEGIN BACKUP;
ALTER TABLESPACE system BEGIN BACKUP;
You will see that the values in the STATUS column for EXAMPLE and
SYSTEM tablespaces have changed to ACTIVE. You can now begin backing
up the datafiles for these tablespaces.
Double-click the oradata folder to open it, and then double-click the
ora92 folder.
Select EXAMPLE01.DBF, and then, while holding down the Ctrl key, select
SYSTEM01.DBF as well.
A status bar will appear while the files are copied. It will take a few
moments until the copies are complete.
7. At the SQL*Plus prompt, disable backup mode for the EXAMPLE and
SYSTEM tablespaces with the following commands:
ALTER TABLESPACE example END BACKUP;
ALTER TABLESPACE system END BACKUP;
Oracle will display the message “Tablespace altered” after each command.
The output will show that the EXAMPLE and SYSTEM tablespaces are no
longer in backup mode.
FILE_NAME
------------------------------------------------
D:\oracle\admin\ora92\udump\ora92_ora_2636.trc
With a little creativity, this query can be used within a backup script to back up
the control file to trace, then automatically find the trace file and copy it to the
backup location along with the rest of the backup datafiles and control files.
TASK 3A-2
Backing Up the Control File
Objective: To perform backups of the control file using multiple methods.
2. You will first back up the control file to a standard image copy.
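The image copy can be produced with a command along these lines; the destination path matches the hot_bup folder and CONTROL.BAK file referenced later in this task:

```sql
-- Write a binary image copy of the current control file.
ALTER DATABASE BACKUP CONTROLFILE TO 'D:\oracle\hot_bup\CONTROL.BAK';
```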
3. Choose Start→Run. In the Run text box, type D:\oracle\hot_bup and click
OK.
You will see three files. The *.DBF files are backups of datafiles you created
earlier in this lesson. The CONTROL.BAK file is the backup copy of the
control file. Close the hot_bup window.
4. You will now generate a trace file that can be used as a script to re-create
the control file.
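The trace file is generated with:

```sql
-- Write a script that can re-create the control file to a trace file
-- in the directory pointed to by the user_dump_dest parameter.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```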
5. To determine the current path and file name of the trace file that was gener-
ated, type @C:\079176\get_trace.sql and press Enter.
Oracle will display the path and file name of the trace file that was
generated. The file name on your system may be different from what is
shown here.
7. The udump folder will contain one or more trace files. Find and open the
file that was identified by the get_trace.sql script.
The trace file contains a SQL script that can be edited and used to re-create
the control file in the event that all control files for the database are lost.
Take a few moments to browse through the file and see its contents.
Read-only Tablespaces
In Oracle, tablespaces can be set to a read-only status, which guarantees that the
tablespace will not be written to for any reason. A tablespace is set to read-only
or read-write with the ALTER TABLESPACE command, as shown in the following
syntax.

Setting a Tablespace to Read-only or Read-write
ALTER TABLESPACE tablespace_name READ [ONLY|WRITE];
When the ALTER TABLESPACE command is issued to set a tablespace to read-
only, a checkpoint is issued, and the datafile headers for that tablespace are
updated one last time with the latest SCN from the control file. The datafiles are
then frozen, and any process that tries to write to the tablespace will receive an
error. The STATUS column of the DBA_TABLESPACES view indicates which
tablespaces are set to read-only.
Once a tablespace is in read-only mode, the datafiles for that tablespace need to
be backed up only once. Since the datafiles are guaranteed to remain static, there
is no need to include these datafiles in every database backup. Additionally, you
do not need to place a read-only tablespace into backup mode; you can simply
use any appropriate OS-level command to make a copy of the datafiles. Once the
tablespace is set back to read-write mode, you should immediately back up the
tablespace, then resume including the tablespace in regular database backups.
It’s important to note that any time you change the status of a tablespace from
read-write to read-only, or vice versa, that you back up the control file. This
ensures that your latest control file backup contains the most current status infor-
mation about all tablespaces in the database. If you place a tablespace in read-only
mode and you do not back up the control file, loss of the control file at a later
time could leave you recovering with a control file backup that does not reflect
the tablespace’s current status.
TASK 3A-3
Identify Backup Considerations for Read-only
Tablespaces
1. How many times must you back up a read-only tablespace? Explain
your answer.
A tablespace in read-only mode needs to be backed up only once, as long
as that backup occurs after the tablespace was placed in read-only mode.
Once the tablespace is placed in read-write mode, it should be backed up
immediately. All future backups of the tablespace should follow the same
logic as all other tablespaces. If the tablespace is placed in read-only mode
again, only one backup of the tablespace after that point in time is
necessary.
TASK 3A-4
Resolve a Failed Hot Backup
Objective: To resolve a failed hot backup in the event of failure while a
tablespace is in backup mode.
1. You will first simulate instance failure while a hot backup is in progress.
This can be done by issuing the SHUTDOWN ABORT command while
tablespaces are in backup mode.
The output will show that no tablespaces are currently in backup mode.
At the prompt, issue the following commands. After each command, Oracle
will display the message “Tablespace altered.”
ALTER TABLESPACE system BEGIN BACKUP;
ALTER TABLESPACE undotbs1 BEGIN BACKUP;
ALTER TABLESPACE example BEGIN BACKUP;
ALTER TABLESPACE tools BEGIN BACKUP;
ALTER TABLESPACE users BEGIN BACKUP;
The output will show that the SYSTEM, UNDOTBS1, EXAMPLE, TOOLS,
and USERS tablespaces are now in backup mode.
Oracle will simply display the message “ORACLE instance shut down.”
6. Since the instance was aborted while several tablespaces were in backup
mode, the tablespaces will not be consistent with the rest of the database
upon startup. This means that any attempt to open the database without
addressing this issue will fail.
The output will show that Oracle was able to start the instance and mount
the database, but could not open the database because the tablespaces that
were in backup mode at the time of the failure require some sort of
recovery.
7. To resolve the failed hot backup, you will take all tablespaces out of hot
backup mode with a single command.
At the prompt, type ALTER DATABASE END BACKUP; and press Enter.
After a few moments, Oracle will display the message “Database altered.”
9. To confirm that all tablespaces have been taken out of backup mode, type
@C:\079176\bup_status.sql and press Enter.
The output will show that no tablespaces are currently in backup mode. To
resolve a hot backup after a failure, you would take the tablespaces out of
backup mode while the database is mounted but not open. The ALTER
DATABASE END BACKUP command can be used to take all tablespaces out
of backup mode simultaneously.
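The bup_status.sql script itself is not shown here, but a query of this kind (an assumption, not necessarily the script's actual contents) reports any datafiles still in backup mode by checking the V$BACKUP view:

```sql
-- Datafiles with STATUS = 'ACTIVE' in V$BACKUP are still in backup mode.
SELECT d.tablespace_name, b.file#, b.status
  FROM v$backup b
  JOIN dba_data_files d ON d.file_id = b.file#
 WHERE b.status = 'ACTIVE';
```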
Figure 3-2: A datafile that has been restored after media failure.
In this figure, the INDX_01.DBF datafile was restored from backup. The SCN
stored in the header of this datafile (328644) indicates the last SCN that was writ-
ten to the datafile when the INDX tablespace was put into backup mode just prior
to the datafile being backed up. Since the control file shows a later SCN
(454143), upon recovery, Oracle will determine that the newly-restored datafile is
missing some changes that need to be applied in order to bring it up to date with
the rest of the database.
The actual database recovery is initiated with the RECOVER command from
SQL*Plus and can be performed at the file, tablespace, or database level; these
are known as recovery targets. If recovery is performed at the file level, only the
target datafile is recovered, while if the recovery is performed at the tablespace
level, all restored datafiles that make up the target tablespace are recovered. If the
recovery is performed at the database level, all restored datafiles in the database
are recovered.
Note: The RECOVER command is identical to the ALTER DATABASE RECOVER
command.
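The three recovery targets can be illustrated with the following commands; the datafile path is only an example.

```sql
-- File-level recovery: only the named datafile is recovered.
RECOVER DATAFILE 'D:\oracle\oradata\ora92\example01.dbf';

-- Tablespace-level recovery: all restored datafiles of the tablespace.
RECOVER TABLESPACE example;

-- Database-level recovery: all restored datafiles in the database.
RECOVER DATABASE;
```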
TASK 3B-1
Recover a Tablespace After a Failure
Objective: To recover a tablespace after a failure.
The output shows a list of tables owned by the user OE. All of the tables are
stored in the EXAMPLE tablespace.
3. You will create a new table in the OE schema, and store it in the
EXAMPLE tablespace. At the prompt, issue the following command:
CREATE TABLE oe.test_recovery
( col1 NUMBER(5) )
TABLESPACE example;
Oracle will display the message “Table created.” Next, insert a row containing
the value 5 into the TEST_RECOVERY table and commit the transaction; Oracle
will then display the messages “1 row created” and “Commit complete.”
To see the row of data you just inserted into the test table, issue the follow-
ing query:
SELECT *
FROM oe.test_recovery;
The output will show the value 5 in the COL1 column. This row did not
exist when the EXAMPLE tablespace was backed up.
5. To see the complete list of tables owned by OE, issue the following query
again:
SELECT table_name, tablespace_name
FROM dba_tables
WHERE owner='OE';
6. The redo information generated by the CREATE TABLE and INSERT state-
ments was stored in the current online redo log. If you force a log switch,
the current redo log will be archived so it can be used in the event that a
recovery is necessary any time in the future.
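A log switch is typically forced with the following command, which archives the current log group and makes the next group current:

```sql
ALTER SYSTEM SWITCH LOGFILE;
```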
7. You will now simulate loss of the datafile that makes up the EXAMPLE
tablespace.
First, determine the name of the EXAMPLE datafile by issuing the fol-
lowing query:
SELECT file_name
FROM dba_data_files
WHERE tablespace_name='EXAMPLE';
8. To shut down the database, type shutdown immediate and press Enter.
The output will show that the database was closed and dismounted, and the
instance shut down.
9. Leaving the SQL*Plus window open, choose Start→Run. In the Run text
box, type D:\oracle\oradata\ora92 and click OK. A window for the
D:\oracle\oradata\ora92 folder will appear.
10. Leaving the ora92 window open, switch back to the SQL*Plus window.
The output will show the instance start and the database mount. However,
the database will not open because it cannot find the EXAMPLE01.DBF
datafile.
11. You will now restore the datafile from backup, and perform a complete
recovery of the database using the redo information stored in the online and
archived redo logs.
Press the Backspace key twice to move back to the D:\oracle folder.
Double-click the oradata folder to open it. Double-click the ora92 folder
to open it.
13. Now that the EXAMPLE01.DBF datafile has been restored, you must
recover the database.
To accept the suggested archive log, press Enter. If Oracle prompts for
additional logs, keep pressing Enter to accept the suggested logs. After
each log, Oracle will display the message “Log applied.”
Once all archived changes have been applied to the database, Oracle will
display the message “Media recovery complete.”
14. The EXAMPLE tablespace has been recovered, and you may now open the
database.
15. To ensure that your TEST_RECOVERY table has been recovered, issue the
following query once more:
SELECT table_name, tablespace_name
FROM dba_tables
WHERE owner='OE';
16. To ensure that your data is intact, issue the following query:
SELECT *
FROM oe.test_recovery;
17. You have just performed a user-managed complete recovery of your database.
Trial Recovery
Occasionally, a database recovery may fail in mid-operation for one reason or
another, such as block corruption in the backup sets or archive logs. This requires
that the entire operation be restarted, which can be very time-consuming, especially
if the database is very large, and can increase your mean time to recovery. In
order to avoid this problem, Oracle provides the ability to execute a trial recovery
to determine if a problem will be encountered during an actual recovery. The trial
recovery is much faster than an actual recovery, and if any problems are reported
during trial recovery, those problems can be dealt with ahead of time.
trial recovery: A test recovery of the database that is used to determine whether
or not a real recovery will be successful.
To execute a trial recovery, you would simply include the option TEST with the
RECOVER command. The recovery process will appear to execute as normal,
prompting for archive logs as it proceeds, except that no changes are actually
written to the datafiles. All changes from the archive logs are stored only in the
database buffers, but will be rolled back at the end of the trial recovery. All errors
that are encountered during trial recovery are recorded in the alert log. The TEST
option can be included with any type of recovery that can be performed with the
RECOVER command. The following statement shows an example RECOVER com-
mand with the TEST option.
RECOVER DATABASE TEST;
If the trial recovery detects only a few corrupted blocks, you can choose to pro-
ceed with the actual recovery, and direct the recovery process to allow those
corruptions. This is done with the ALLOW n CORRUPTION clause of the
RECOVER command. The n specifies the number of corrupt blocks to allow dur-
ing the recovery. To recover a database and allow three corrupt blocks (found
during trial recovery), you would issue the following command.
RECOVER DATABASE ALLOW 3 CORRUPTION;
It’s important to note that the ALLOW n CORRUPTION clause is provided only
as a means to move past a block corruption issue to finish the recovery process.
Since you have instructed Oracle to allow those block corruptions during the
recovery process, the corruptions will still exist when you bring the recovery tar-
get online. Any subsequent attempts to access those datablocks through normal
database operations will result in an error and will need to be rectified. Block
corruption can be handled multiple ways, which you will learn about later in the
course.
1. In this activity, you will simulate the loss of a datafile, and then perform a
trial recovery of the tablespace. This will allow you to determine if the
recovery will encounter corruption issues without actually performing the
recovery.
4. Leaving the ora92 folder open, switch back to the SQL*Plus window.
5. You will now restore the datafile from the hot backup of the database you
created earlier.
6. You will now run a trial recovery of the EXAMPLE tablespace. This will
help to identify potential problems that might arise during an actual
recovery. At the SQL*Plus prompt, issue the following command:
RECOVER TABLESPACE example TEST;
The first message listed, starting from the bottom, states that test recovery
cannot apply redo that may modify the control file. This means that the trial
recovery successfully completed. The recovery started with the earliest
required entry in the redo logs, and stopped when it reached the last known
entry in the redo log. If any more redo entries were applied, the datafile
would contain a later SCN than the control file, and the control file would
have to be modified to support it, which is not allowed in trial recovery.
Note: If you do not receive the last two errors and instead receive the error
“ORA-10570: Test recovery complete,” then issue the following command
before moving on with this activity: CONNECT sys AS sysdba
The next message, stating that the test recovery was canceled due to errors,
is purely informational. The third message from the bottom states the SCN
range that was tested during the trial recovery. This range starts from the
earliest known redo entry required for the datafile that was restored, and
ends with the current SCN known to the control file.
The final message states that the test recovery did not corrupt any data
block. This means that the trial recovery was successful in that it did not
encounter any corrupted data blocks during the recovery process. If a block
corruption had been encountered, it would have been recorded in the alert log.
7. Before bringing the EXAMPLE tablespace back online, you must first per-
form an actual recovery of the tablespace. At the SQL*Plus prompt, issue
the following command:
RECOVER TABLESPACE example;
1. You will first simulate the loss of the disk where the EXAMPLE01.DBF
datafile is stored.
The output will show that the EXAMPLE01.DBF datafile is currently stored
in the D:\oracle\oradata\ora92 folder.
3. To shut down the database, type shutdown immediate and press Enter.
The output will show that the database has been closed and dismounted, and
the instance shut down.
5. Leaving the ora92 window open, switch back to the SQL*Plus window.
The output will show that the instance was started and the database mounted,
but the database could not be opened because the EXAMPLE01.DBF datafile
is missing.
6. The original disk with the EXAMPLE01.DBF datafile was lost, therefore,
you must restore the datafile to a new location. Since it is assumed that your
system has only a single disk, you will simulate restoring the datafile by cre-
ating a new folder on the D drive.
8. Before recovering the database, you must change Oracle’s internal pointer to
the EXAMPLE01.DBF datafile. This is done with the ALTER DATABASE
RENAME FILE command.
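Using the folder names from this activity, the rename might look like this (the database must be mounted, and the file must already have been restored to the new location):

```sql
ALTER DATABASE RENAME FILE
  'D:\oracle\oradata\ora92\example01.dbf'
  TO 'D:\oracle\oradata\new_loc\example01.dbf';
```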
9. You will now recover the database using the redo information stored in the
archived redo logs.
Oracle detects that the changes needed to recover this datafile have been
archived already, and prompts you for the name and path of the required
archive log.
To accept the suggested archive log, press Enter. If Oracle prompts for
additional logs, keep pressing Enter to accept the suggested logs. After
each log, Oracle will display the message “Log applied.”
Once all archived changes have been applied to the database, Oracle will
display the message “Media recovery complete.”
10. To open the database, type ALTER DATABASE OPEN; and press Enter.
The output will show that the EXAMPLE01.DBF datafile is now stored in
the D:\oracle\oradata\new_loc folder, instead of the D:\oracle\oradata\ora92
folder. You have successfully restored a datafile to a new location and recov-
ered the database.
Topic 3C
User-managed Incomplete Recovery
When an incomplete recovery is performed, the database is recovered to a point
in time prior to the point of failure. This is done by not applying all redo infor-
mation to the database, from either the archive logs or the current online redo
Incomplete Recovery log. An incomplete recovery may be required, and in some cases desired, in cer-
Scenarios tain situations, which include:
• Loss of all copies of the current control file.
• Loss of current redo log group.
• A gap in the archive log sequence during normal recovery.
• Intentional point-in-time recovery.
As you learned earlier, the SCN recorded in the control file is considered the
most current SCN for the database and is used as a reference point during data-
base recovery. If the headers of any datafiles contain SCNs that are lower than
that of the control file, then Oracle determines that those files require recovery.
During recovery, the SCNs in the target datafiles are incremented as changes from
the redo or archive logs are applied. Only when the SCNs in the target datafiles
match the SCN in the control file will Oracle consider the datafiles recovered.
Losing the current control file means that you have lost that reference point, and
Oracle will not know when the datafiles are completely recovered. In this case,
you must apply all available redo information to the datafiles, but because Oracle
has no reference point marking where to stop, it can never confirm that the
recovery is done. Such an operation is considered an incomplete recovery.
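Such a recovery is driven by the backup control file and, because Oracle cannot detect its own stopping point, it must be ended explicitly and followed by a RESETLOGS open. A minimal sketch:

```sql
-- Apply all available redo, then type CANCEL when no more logs exist.
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;

-- An incomplete recovery always ends with a RESETLOGS open.
ALTER DATABASE OPEN RESETLOGS;
```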
Another situation in which you may need to perform an incomplete recovery is
losing part or all of the redo stream. If media failure causes the loss of the cur-
rent online redo log group, the LGWR will not be able to write the redo log
buffer out to that redo log group, and will summarily terminate the instance.
Upon restarting the instance, instance recovery will begin, and Oracle will
attempt to begin the roll forward phase of recovery using the redo information
from the current redo log group. Since that group is no longer there, any changes
it may have contained, committed or uncommitted, will be lost. You must termi-
nate the recovery and open the database without applying those missing changes.
2. What are the three types of user-managed incomplete recovery that can
be performed for an Oracle9i database?
✓ a. Cancel-based
✓ b. Change-based
c. Commit-based
✓ d. Time-based
TASK 3C-2
Recovering After Loss of Control File
Objective: To recover the database after a complete loss of all copies of
the current control file.
Note: The control file is critical to the operation of the Oracle database. Use
extreme caution and follow the directions in this activity very carefully. Failure to
do so may result in the complete loss or corruption of the current control files
and/or the backup control file, which may render your database useless. It is rec-
ommended that you read through the entire activity first before actually beginning
the activity on a live database.
1. You will first simulate the loss of all copies of the current control file.
3. Create a new backup copy of the current control file by issuing the fol-
lowing command:
ALTER DATABASE BACKUP CONTROLFILE TO
'D:\oracle\hot_bup\control.bak' REUSE;
4. To shut down the database, type shutdown immediate; and press Enter.
5. After the database is shut down, leave SQL*Plus open, and choose Start→
Run. In the Run text box, type D:\oracle\oradata\ora92 and click OK. A
window for the D:\oracle\oradata\ora92 folder will appear.
Delete all three control files from the folder. The files are named
CONTROL01.CTL, CONTROL02.CTL, and CONTROL03.CTL. Be careful
not to delete any other files in this folder.
The instance will start, but the database will not mount, and Oracle will dis-
play the message “ORA-00205: error in identifying controlfile, check alert
log for more info.”
Open the alert_ora92.log file, and scroll to the very bottom. The alert log
will show that the CONTROL01.CTL control file could not be found.
When you are done looking at the alert log, close Notepad and the window
for the bdump folder.
8. You will now use your backup control file to restore all three of the current
control files.
Open a window to the D:\oracle\hot_bup folder from the Run text box.
Once you have restored all three of your control files, the ora92 folder
will once again contain CONTROL01.CTL, CONTROL02.CTL, and
CONTROL03.CTL.
9. You will now mount the database and perform an incomplete database
recovery.
Oracle will determine the SCN that is required for recovery, and will ask for
the archive log that contains that change.
One of two possibilities may occur. Oracle will either find the suggested
archive log, apply it to the database, and ask for another log, or it will dis-
play a message stating that the specified archive log could not be found. If
the specified archive log is found and applied, press Enter again to apply
the next archive log. Keep pressing Enter repeatedly until Oracle displays
a message stating that the suggested archive log could not be found.
12. After failing to find the last archive log, Oracle may also display the follow-
ing messages:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS
would get error below
ORA-01152: file 1 was not restored from a sufficiently old
backup
ORA-01110: data file 1: 'D:\ORACLE\ORADATA\ORA92\SYSTEM01.DBF'
If these messages are not displayed, then skip the remainder of this step
and go to step 13 to continue.
The ORA-01547 and ORA-01152 errors occur because the very last changes
that need to be applied to the database are still in the current redo log and
have not yet been archived. You will begin the recovery again, and you will
specify the online redo log for recovery instead of an archive log.
If Oracle states that it needs changes generated later than what is contained
in REDO01.LOG, then issue the RECOVER DATABASE USING
BACKUP CONTROLFILE UNTIL CANCEL; command again. This time,
specify D:\oracle\oradata\ora92\redo02.log as the archive log to apply. If
necessary, repeat the process again for the D:\oracle\oradata\ora92\
redo03.log.
Once all the required changes have been applied to the database, Oracle will
display the messages “Log applied” and “Media recovery complete.”
After a few moments, Oracle will display the message “Database altered.”
You have just recovered from a complete loss of all copies of the current
control file.
14. To see the current log sequence number, type ARCHIVE LOG LIST; and
press Enter.
The output will show that the current log sequence number has been reset to
1. Also, the oldest online log sequence has been set to 1, which means that
all archive log history has been removed from the control file.
Note: Your system may show that the oldest online log sequence number has
been set to 0, instead of 1.
15. Since you now have a new incarnation of the database, the previous control
file backup is useless. You should immediately back up the control file in
case there is another failure in the near future.
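Both forms of control file backup appear elsewhere in this lesson; repeating them immediately after the RESETLOGS open protects the new incarnation (the backup path shown is the one used in this activity):

```sql
-- Binary backup of the control file.
ALTER DATABASE BACKUP CONTROLFILE TO 'D:\oracle\hot_bup\control.bak' REUSE;

-- Readable script version, written to a trace file.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```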
Figure 3-4: Recovery after loss of the current redo log group.
In the top part of this figure, the database has been operating normally, perform-
ing log switches and archiving the logs as usual. Then, a media failure occurred
causing the loss of the current redo log group, which happened to be log
sequence 25. The instance crashed and all changes in this log group were perma-
nently lost.
The bottom part of Figure 3-4 shows the steps that were taken to recover the
database. The last full hot backup was restored, and the archive logs were applied
to recover the datafiles. However, the database can only be recovered to the point
in time just prior to log sequence 25 becoming the current log group, which
means you can only recover up to and including log sequence 24.
TASK 3C-3
Recover From the Loss of the Current Online Log Group
Objective: To recover the database from the loss of the current online
redo log group.
1. To recover the database from the loss of the current online redo log group,
you must have a full hot backup of the database available. You will create
this backup now.
It will take several minutes for the hot backup to complete. Once it’s com-
plete, the command prompt window will disappear, you will be returned to
the SQL*Plus prompt, and Oracle will display the message “Backup
complete.”
To determine which redo log group is the current group, type @C:\079176\
logfiles.sql and press Enter.
The STATUS column in the output will show which log group is the current
group. Additionally, the SEQ# column will show the log sequence number of
each group. The sequence number increments infinitely, and is used to iden-
tify the log file once it is archived. In the example shown here, the current
log group is log group 2, with a log sequence number of 2.
Note: The log sequence number shown in your output may be different from
what is shown here.
Oracle will display the messages “1 row created” and “Commit complete.”
The redo information generated from the CREATE TABLE and INSERT
statements was first stored in redo log buffer, then written to the current redo
log group.
5. You will now force a log switch. The current online redo log will be
archived, and Oracle will switch to the next log group.
The output will show that Oracle has switched to the next redo log group.
The redo log that contains the CREATE TABLE command and INSERT
statement has been archived. In this example, log group 3 is now the current
log group, which also has a log sequence number of 3. Log group 2 has
been archived.
Oracle will display the messages “1 row created” and “Commit complete.”
The redo information for the CREATE TABLE command and the first
INSERT statement is stored in log group 2, which has been archived. The
redo information for the last row inserted was stored in the current online
log group, which is now log group 3. Losing the current online log group
will mean that all changes stored in that log group are permanently lost.
However, all changes in the previous group can be recovered from the
archive log.
Make a note of which log group is current, along with its log sequence
number from the SEQ# column.
Note: The group number of the current log group may be different from the
log sequence number. This is normal.
In the ora92 folder, find the redo log file of the current online redo log
group. In the example used here, the file would be REDO03.LOG. Select
REDO03.LOG.
A question box will appear asking if you are sure you want to change the
file extension. Click Yes.
10. Leaving the ora92 folder open, switch back to the SQL*Plus window.
The output will show that the instance was started and the database
mounted, but the database could not be opened because Oracle could not
find the current online redo log group.
A blank command prompt will appear while the backed up datafiles are cop-
ied to their original locations. The restore will take a few minutes to
complete.
Once the restore is complete, the command prompt will disappear, and
Oracle will display the message “Database restored.”
12. You will now perform the incomplete recovery of the database. At the
prompt, issue the following command:
RECOVER DATABASE UNTIL CANCEL;
Oracle will detect which archive log contains the changes needed for
recovery. You will apply all archive logs up to, but not including, the log
sequence number of the current log group, which you should have made
note of in step 7 from the SEQ# column of the logfiles.sql output. In the
example used here, the current log sequence number was 3.
Press Enter to accept the suggested archive logs until Oracle asks for the
log sequence number of the current log group. Do not press Enter for
the current log group. Instead, type CANCEL and press Enter.
After several moments, Oracle will display the message “Database altered.”
14. To see which changes were recovered and which were lost, you will now
query from the TEST_LOG_FAIL table.
The output will show that only the first row, containing the value A, was
recovered. The second row, containing the value B, which was only stored in
the current online redo log group, was lost.
16. In the ora92 folder, you will see your original redo log file that you renamed
with the .OLD extension. However, you will also see that Oracle automati-
cally created a new file for the current online redo log when the ALTER
DATABASE OPEN RESETLOGS command was issued.
TASK 3C-4
Identify Recovery Considerations for Read-only
Tablespaces
1. The USERS tablespace was placed in read-only mode at 4:00 P.M. The
last backup of the control file and the USERS datafiles occurred at 5:00
P.M. A disk failure results in the loss of all the datafiles of the USERS
tablespace and all copies of the control file. How would you recover
from this failure?
Since the current control file is lost, you must use the backup control file for
this recovery. Since the control file was backed up after the tablespace was
placed in read-only mode, the backup control file knows that the USERS
tablespace is in read-only mode. You only need to restore the backup control
file and the USERS datafiles, take the USERS datafiles offline, and perform a
standard database recovery, then open the database using the RESETLOGS
option. Once the database is open, you can bring the USERS tablespace
back online to make it available for use.
3. Given the following scenario, which control file would you use to recover
the EXAMPLE tablespace? Explain your answer.
• A full, hot backup was executed at 10:00 A.M. All tablespaces were in
read-write mode at that time, with the exception of the EXAMPLE
tablespace, which was in read-only mode.
• The hot backup included all datafiles, a standard backup of the control
file, and a backup of the control file to trace.
• The EXAMPLE tablespace was placed in read-write mode at 1:00 P.M.
• The database suffered media failure at 3:00 P.M., which resulted in the
loss of all copies of the current control file and all the datafiles of the
EXAMPLE tablespace.
In this scenario, as in most recovery scenarios, the ideal solution is to use
the current control file, since it will contain the most current information
about all datafiles in the database. However, since the current control file is
not available in this case, this is not an option.
The backup control file that was created at 10:00 A.M. only contains infor-
mation about the datafiles as they existed at that point in time. Since the
EXAMPLE tablespace was in read-only mode at the time the backup was
taken, the backup control file will reflect that information. If the backup con-
trol file is used for recovery, the EXAMPLE tablespace will be treated as if
it’s still in read-only mode, and none of the transactions that occurred in
that tablespace after it was set to read-write mode will be recovered.
Since there is currently no control file, current or backup, that contains the
information that the EXAMPLE tablespace was ever in read-write mode, the
only other option is to create a new control file by using the script stored in
the control file backup trace file generated at 10:00 A.M. The script can be
modified to reflect the existing configuration of the database and can be used
to create a new control file that will allow the full recovery of the
EXAMPLE tablespace.
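The trace file contains a CREATE CONTROLFILE script along the lines of the following simplified, hypothetical excerpt; the log and datafile lists, sizes, and the RESETLOGS/NORESETLOGS choice would be edited to match the actual database before running it:

```sql
CREATE CONTROLFILE REUSE DATABASE "ORA92" RESETLOGS ARCHIVELOG
  LOGFILE
    GROUP 1 'D:\oracle\oradata\ora92\redo01.log' SIZE 100M,
    GROUP 2 'D:\oracle\oradata\ora92\redo02.log' SIZE 100M
  DATAFILE
    'D:\oracle\oradata\ora92\system01.dbf',
    'D:\oracle\oradata\ora92\example01.dbf';
```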
Tablespace Point-in-Time Recovery
TASK 3C-5
Describe Tablespace Point-in-Time Recovery
1. Is it possible to recover a tablespace to a different point in time than
that of the rest of the database? Explain your answer.
Technically, it is not possible to recover a tablespace to a different point in
time than that of the rest of the database. Oracle will only consider a
tablespace recovered when it is consistent with the rest of the database. If
you stop a tablespace recovery before it becomes consistent with the data-
base, Oracle will return an error when you attempt to bring the tablespace
online.
Lesson Review
3A What happens internally in the database when a tablespace is placed in
hot backup mode?
While a tablespace is in hot backup mode, all changes to datablocks that are
stored in the tablespace are still written from the buffer cache to the datafiles
by the DBWR process when checkpoints occur. However, the checkpoints are
deferred for tablespaces in hot backup mode. This means that CKPT does not
update the headers of the datafiles of a tablespace while that tablespace is
in hot backup mode. When the tablespace is taken out of hot backup mode,
Oracle simply advances the tablespace’s datafile headers to the latest check-
point to bring them up to date with the rest of the database.
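This deferral can be observed directly; while a tablespace is in backup mode, a query such as the following (an illustrative sketch) shows its datafile header checkpoint SCNs frozen while other files advance:

```sql
SELECT h.file#, h.checkpoint_change#, b.status
  FROM v$datafile_header h
  JOIN v$backup b ON b.file# = h.file#
 ORDER BY h.file#;
```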
3B What step must you take from within Oracle after restoring a datafile to
a location that is different from its original location?
a. Bring the tablespace online.
✓ b. Change Oracle’s internal pointer to the file.
c. Bring the tablespace offline.
d. Recover the tablespace.
3C The current time is 8:15 A.M. The last hot backup of your database was
completed today at 6:30 A.M. Which one of the following times is valid
to use for an incomplete recovery?
a. 7:15 P.M.
b. 8:38 A.M.
c. 8:25 P.M.
✓ d. 8:14 A.M.
You have restored the control file from backup and applied all archive
logs that were generated between the point in time the backup was
taken and the point of failure. However, when issuing the CANCEL com-
mand, Oracle returned the following error. Why? How would you
resolve it?
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS
would get error below
ORA-01152: file 1 was not restored from a sufficiently old
backup
ORA-01110: data file 1: 'D:\ORACLE\ORADATA\ORA92\SYSTEM01.DBF'
This error is caused by Oracle believing that there are more changes that
can be applied to the database, even though you have applied all available
archive logs. It is possible that the current online redo log contains the miss-
ing changes, but Oracle did not have a chance to archive this log prior to
the media failure that caused the instance to crash. To work through this
error, the recovery can be started again and the current online redo log
group can be specified for recovery instead of an archive log. If the current
log group contains the changes, Oracle will apply them to the database.
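Following the same steps shown in Task 3C-2, the fix can be sketched as follows; the redo log path is the one used in this lesson's examples:

```sql
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- When prompted for an archive log, supply the online redo log instead,
-- for example: D:\oracle\oradata\ora92\redo01.log
-- If the log contains the needed changes, Oracle reports
-- "Media recovery complete."
```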
Lesson 4
Overview
Data Files: none
Lesson Time: 7 hours
Manually performing all backup and recovery operations for the database is
considered user-managed backup and recovery. However, Oracle provides a
powerful utility called Recovery Manager, better known as RMAN, which
provides the capability to create a centralized solution to handle all backup
and recovery operations for all Oracle databases in a single environment. In
this lesson, you will learn about the features RMAN provides and its archi-
tecture and process flow. You will also learn how to configure RMAN, how
to use it to perform server-managed hot and cold backups, and how to main-
tain its centralized information repository.
Objectives
To perform backups of the database using RMAN, you will:
4A Describe the RMAN environment and the basic steps RMAN uses to
back up and recover an Oracle9i database.
RMAN provides a powerful, centralized solution to simplify backup and
recovery operations for all Oracle databases in an environment. In this
topic, you will learn about the features of RMAN, its centralized informa-
tion repository, and the process flow RMAN uses to perform backups and
recoveries. You will also learn about the different types of backup sets
that RMAN creates and how to decide which type of backup set to use
for various backup and recovery strategies.
TASK 4A-1
List and Describe RMAN Features
1. How can RMAN be used to simplify the automation of database backup
procedures?
Using the command-line interface, the DBA can create and execute backup
and recovery scripts and can store them in RMAN’s repository for use at
any time. Backup and recovery jobs can be scheduled using any third-party
scheduling utility to execute these stored scripts. With OEM, the DBA can
easily configure custom backup and recovery jobs and schedule them to
execute using OEM’s scheduling features, which eliminates the need to even
write RMAN scripts.
3. What feature does RMAN provide to reduce the amount of time it takes
to perform a backup of a large database?
RMAN can perform incremental backups of datafiles, which allows RMAN to
back up only those datablocks which have actually changed since the last
time the database was backed up. This can greatly reduce the amount of
time it takes to back up a very large and active database. Hot incremental
backups can be performed on databases that are running in ARCHIVELOG
mode, and cold incremental backups can be performed on databases running
in NOARCHIVELOG mode.
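From the RMAN prompt, an incremental strategy might be sketched as follows (assuming channels are already configured):

```
# Baseline: copies all used blocks in the database.
BACKUP INCREMENTAL LEVEL 0 DATABASE;

# Later: copies only blocks changed since the level 0 backup.
BACKUP INCREMENTAL LEVEL 1 DATABASE;
```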
4. How does RMAN simplify the restore and recovery process of a data-
base that has experienced media failure?
RMAN can automatically inspect the recovery target and, within seconds,
determine which files need to be restored, then look in its repository to
locate where in the backup location those files reside, automatically restore
them, and then identify, locate, and restore all archive logs required to per-
form the recovery. This entire operation can be performed with a single
command from the DBA.
TASK 4A-3
Describe the RMAN Process Flow
1. After connecting to both the target database and the repository, what
must RMAN do before performing any backup or recovery operations
on the target database?
a. Verify all control files and datafiles.
b. Update the repository.
c. Back up the control file.
✓ d. Allocate a channel.
2. You are performing a backup operation that will include datafiles, the
control file, and archive logs, and you are not creating image copies.
What is the minimum number of backup sets that will be created for
this operation?
a. 1
✓ b. 2
c. 3
d. 4
3. True or False? If your backup set is 100 GB, but you only need to
restore a 5 MB file, you will still need to read through the entire 100 GB
backup set to find that small datafile. Explain your answer.
True. As RMAN reads the input files to be backed up, it meshes the files
together to create the backup set. When recovering from a backup set,
RMAN requires that all pieces of the set be available to read from, even
though you may only need to restore a small datafile from it.
Topic 4B
Configuring the RMAN Environment
Prior to using RMAN to back up target databases, each target database must have
an available user account on the target that RMAN can use to log in with. For
simplicity, it is recommended that the user accounts on each database have the
same user names, even if each account uses a different password. Additionally,
the RMAN utility must have access to at least one Oracle Net service name per
remote target database that RMAN supports.
If you are using the control file of the target database as the RMAN repository,
then no additional configuration is necessary. You would simply invoke RMAN
and issue the CONNECT command with the TARGET keyword to specify the user
name, password, and connect string for the target database. The syntax of the
CONNECT command that uses the control file as the repository is:
connect target username/password@connectstring
1. You will first create a database user, called rman, which the RMAN utility
will use to connect to the database.
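The exact statements for this step are not reproduced here; a minimal sketch, assuming the password buprec used by the RMAN connect strings later in this lesson (the tablespace clauses and the SYSDBA grant are assumptions), might look like:

```sql
-- Hypothetical sketch: create the user that RMAN will log in with.
-- The password buprec matches the connect strings used later in the lesson.
CREATE USER rman IDENTIFIED BY buprec
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp;

-- A single grant, matching the single "Grant succeeded." message.
-- RMAN target connections require SYSDBA privilege.
GRANT sysdba TO rman;
```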
Oracle will display the messages “User created” and “Grant succeeded.”
Exit from SQL*Plus.
The output will show that you have connected to the specified target data-
base, which is currently the ORA92 database. The output will also show that
you are currently using the control file of the target database instead of the
recovery catalog as the repository.
4. To see the current datafile layout for the target database, type report
schema; and press Enter.
The output will show the names and sizes of all the datafiles in each
tablespace of the ORA92 database.
5. At the prompt, type exit and press Enter to exit from the RMAN utility
and return to the command prompt.
Figure 4-7: RMAN connecting to both target and recovery catalog databases.
In this example, the recovery catalog is located in the database specified by the
RECCAT connect string, and the target database is specified by the ORA92 con-
nect string. The user name rman with the password buprec was used for both
locations.
TASK 4B-2
Creating the Recovery Catalog
Objective: To create the RMAN recovery catalog to store the RMAN
repository inside the database.
1. While it is not advisable to store your recovery catalog in the same database
that must be backed up, for the purpose of this course, you will create a
tablespace in the ORA92 database to store the RMAN recovery catalog.
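The statements themselves are not shown in this excerpt; a sketch of what this step might look like, with assumed names, sizes, and paths, is:

```sql
-- Hypothetical sketch: a tablespace to hold the recovery catalog.
-- The tablespace name, datafile path, and size are assumptions.
CREATE TABLESPACE rman_ts
  DATAFILE 'D:\oracle\oradata\ora92\rman_ts01.dbf' SIZE 20M;

-- Point the rman user at the new tablespace.
ALTER USER rman
  DEFAULT TABLESPACE rman_ts
  QUOTA UNLIMITED ON rman_ts;

-- The catalog owner needs this role to create and maintain the catalog.
GRANT recovery_catalog_owner TO rman;
```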
After a few moments, Oracle will display the message “Tablespace created.”
Oracle will display the messages “User altered” and “Grant succeeded.”
4. Leaving the SQL*Plus window open, open a command prompt from the
Start menu.
You will now launch RMAN, and simultaneously connect to the target data-
base and the database where the recovery catalog is stored. For this course,
both refer to the ORA92 database. At the command prompt, issue the fol-
lowing command:
rman rcvcat rman/buprec@ora92 target rman/buprec@ora92
The RMAN utility will launch and, after a moment, it will connect to the
ORA92 database as both the target and the recovery catalog database.
RMAN will also display a message stating that the recovery catalog is not
installed.
5. To create the recovery catalog, type create catalog; and press Enter.
6. To exit from the RMAN utility, type exit and press Enter.
7. You will now query the data dictionary to see a list of the tables that were
created in the RMAN schema.
The output will show that 30 tables have been created in the RMAN
schema. These tables make up the main storage structures of the RMAN
recovery catalog. You may have to scroll up to see all of the output.
TASK 4B-3
Registering a Target Database
Objective: To register a target database in the RMAN recovery catalog.
1. You will first launch RMAN, and connect to both the recovery catalog and
the target database simultaneously.
2. To generate a list of the tablespaces that currently make up the target data-
base, type report schema; and press Enter.
RMAN will display an error message stack. The errors state that the target
database is not found in the recovery catalog. Before performing any opera-
tions with RMAN against a target database while using the recovery catalog
as the repository, you must first register the database with the recovery
catalog.
3. To register the target database with the recovery catalog, type register
database; and press Enter.
The output will show that the database has been registered with the recovery
catalog, and a full resync was completed.
4. Now that the metadata about the target database is stored in the recovery
catalog, you will be able to generate a list of the tablespaces that currently
make up the target database.
RMAN will display a report of the database schema. This report consists of
all the datafiles in the database, and the size of each one. The report will
also show which datafiles contain rollback segments.
5. To exit from the RMAN utility, type exit and press Enter.
Variable Usage
%d Specifies the name of the target database.
%D Specifies the current day of the month using the format DD.
%F Combines the database identifier (DBID), day, month, year, and a sequential
number into a unique and repeatable generated name.
%M Specifies the month number using the format MM.
%p Specifies the backup piece number within a single backup set. The first piece
will automatically use the value 1, and the value is incremented for each
subsequent piece in the set.
%s Specifies the backup set number. This number is actually stored in the control
file of the target database and is incremented for each backup set created for
the target database.
%t Specifies the timestamp of the backup set.
%T Specifies a full year, month, and date using the format YYYYMMDD.
%u Instructs RMAN to generate a unique name for each output file.
%U Shorthand for %u_%p_%c. This combination guarantees uniqueness for all
backup sets generated for a single target database.
%Y Specifies the current year using the format YYYY.
1. You will first connect to the recovery catalog and the target database.
RMAN will launch and connect to both the recovery catalog and the target
database.
2. To see the current settings of the configuration parameters, type show all;
and press Enter.
The output will show all configuration parameters that are currently set for
RMAN. This list is not all inclusive; if a parameter currently has no setting,
it will not appear in the list.
3. You will set some configuration parameters to tailor the RMAN environment
for your system. The changes you will make include:
• Destination path and file name format for backup sets.
• Control file autobackup.
• Backup control file path and file name format.
• Backup optimization.
• Tablespace exclusions.
To set the destination path and file name format for backup sets, issue the
following command at the RMAN prompt:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT
'D:\oracle\rman_bup\%d_%s_%p';
The output will show that RMAN stored the new parameter setting and performed a full resync of the recovery catalog.
You should notice that RMAN accepts the path and format setting even
though the D:\oracle\rman_bup folder does not yet exist. However, if the
path specified is still invalid when RMAN attempts to perform a backup, the
backup will fail. Therefore, you will create the D:\oracle\rman_bup folder
now.
4. You will now configure RMAN to automatically include the control file in
all backups. This includes enabling control file autobackup, as well as setting
the path and file name format for the backed up control file.
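The commands for this step are not reproduced in this excerpt; based on standard RMAN CONFIGURE syntax, and using the %F format variable described earlier, the step presumably resembles:

```sql
-- Assumed commands for this step; the path matches the backup
-- destination used elsewhere in the lesson.
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT
  FOR DEVICE TYPE DISK TO 'D:\oracle\rman_bup\%F';
```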
5. Certain RMAN features can help reduce the amount of time required to back
up the entire database, such as backup optimization and excluding non-
critical tablespaces. You will enable backup optimization to direct RMAN to
bypass datafiles that have already been backed up and have not changed
since the last backup. You will also set an exclusion for the EXAMPLE
tablespace.
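The exact commands are not shown here; using standard CONFIGURE syntax, the step would presumably be:

```sql
-- Assumed commands for this step, using standard CONFIGURE syntax.
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE EXCLUDE FOR TABLESPACE example;
```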
6. To confirm all of your configuration changes, type show all; and press
Enter.
The output will show that the path and file name format for backup sets and
the control file have been set. You will also see that control file autobackup
has been enabled. Backup optimization has been enabled, and the
EXAMPLE tablespace will be excluded from all backups.
These parameter settings will be used for all future backups of this target
database until the settings are changed again.
Figure 4-10: Shutting down and mounting the target database from within RMAN.
Once the database is mounted, the backup can begin.
The BACKUP Command
RMAN will apply all persistent configuration parameters to the current operation,
unless the parameters are explicitly overridden for the operation. If the parameters
are set appropriately, then the set of commands to initiate a cold database backup
may simply consist of just the BACKUP command that specifies the target to back
up, such as one or more datafiles, one or more tablespaces, the entire database,
the control file, or one or more archive logs. The following command will back
up the entire database to the backup location specified by the default channel con-
figuration parameter:
BACKUP DATABASE;
COPY
DATAFILE 'D:\oracle\oradata\ora92\system01.dbf' TO
'D:\oracle\backup\copies\system01.bak';
COPY
DATAFILE 2 TO 'D:\oracle\backup\copies\undo01.bak',
DATAFILE 5 TO 'D:\oracle\backup\copies\example01.bak';
COPY
CURRENT CONTROLFILE TO 'D:\oracle\backup\copies\control01.bak',
DATAFILE 1 TO 'D:\oracle\backup\copies\system01.bak';
COPY
ARCHIVELOG 'D:\oracle\oradata\ora92\oraarch\ORA92_3833.arc' TO
'D:\oracle\backup\copies\ORA92_3833_arc.bak';
In the first COPY command, an image copy of the SYSTEM01.DBF datafile is
created in the D:\oracle\backup\copies folder. The input file is specified with an
absolute path and file name of the datafile. In the second example, multiple
datafiles are to be copied and are listed by their ID numbers, which can be deter-
mined from the data dictionary. In the third example, the COPY command
includes both the current control file and the SYSTEM01.DBF datafile. In the last
example, a single archive log is being copied from its original location to the
D:\oracle\backup\copies folder.
TASK 4C-1
Performing Cold Database Backups Using RMAN
Objective: To perform a full, cold backup of the database using RMAN,
and to back up the database by creating image copies.
The output will show that the database was closed and dismounted, and the
database shut down.
The output will show that the Oracle instance was started, and the database
mounted.
The output will show RMAN’s current configuration for this target database,
some of which you had set in an earlier activity. You will see that datafiles
will be backed up to the D:\oracle\rman_bup folder. You will also see that
4. Since all the necessary configuration parameters are already set, you simply
need to issue the backup command to perform a full, cold backup.
You will see RMAN begin the backup process. First, RMAN allocates a
channel, then generates a list of datafiles to be included in the backup set.
Once the datafiles are backed up, RMAN will automatically back up the
control file and spfile. The entire backup process may take several minutes
to complete.
The folder contains two files. The first file is the backup copy of the control
file, and has a name that begins with the letter C, followed by a string of
numbers. As indicated by the configuration parameters, the control file will
automatically be backed up for each backup of the database. The other file is
the backup piece containing the backed up datafiles.
The names of the files you see on your system may not be the same as those
shown here.
6. Leaving the rman_bup window open, switch back to the RMAN window.
You will now create an image copy of the datafile that makes up the USERS
tablespace. To determine the name and location of the USERS datafile, type
report schema; and press Enter.
The output of the schema report may be wider than your window, which
causes longer lines of text to wrap to the next line.
The output will show a list of the datafiles in the target database. Find the
datafile for the USERS tablespace. Make note of the file number of that
datafile, which is found in the first column of the report. In the example
shown here, the file number of the USERS datafile is 9.
7. To create an image copy of the USERS datafile, issue the following com-
mand, making sure to substitute the file number of your USERS datafile
from the schema report:
copy datafile 9 to 'D:\oracle\rman_bup\users_img.dbf';
You will see RMAN begin the copy process. First, RMAN allocates a chan-
nel, then copies the USERS01.DBF datafile to the D:\oracle\rman_bup
folder, and names the copy USERS_IMG.DBF. Once the file is copied,
RMAN will automatically back up the control file.
You will see two new files in the folder. The first new file is USERS_IMG.
DBF, which is the image copy of the USERS01.DBF datafile. The other new
file is another backup copy of the control file, which was automatically gen-
erated by RMAN.
Close the rman_bup window and switch back to the RMAN prompt.
9. You will now open the database to make it available for general use.
At the RMAN prompt, type alter database open; and press Enter.
TASK 4C-2
Perform a Hot Backup of the Database Using RMAN
Objective: To perform a hot backup of the database using RMAN.
You will see RMAN begin the backup process. With the exception of
excluded datafiles, all datafiles, the control file, and the spfile were included
in the backup. While this backup is occurring, the database remains open
and available to users with little or no impact on performance.
TASK 4C-3
Perform Incremental Hot Backups using RMAN
Objective: To perform incremental hot backups using RMAN.
1. Before performing hot backups in true increments, you must first perform a
level 0 incremental hot backup.
Open a command prompt. Launch RMAN and connect with the follow-
ing command:
rman target rman/buprec@ora92 rcvcat rman/buprec@ora92
You will see RMAN launch and connect to both the target database and the
recovery catalog.
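The backup command for this step is not reproduced in this excerpt; a level 0 incremental backup of the whole database would presumably be:

```sql
BACKUP INCREMENTAL LEVEL 0 DATABASE;
```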
The output will show RMAN allocate a channel, determine which files to
include in the backup set, back up the datafiles, then back up the control file
and spfile. A level 0 incremental hot backup is very similar to a basic full
hot backup. All the datafiles, with the exception of any datafiles specified by
3. In the output, find the line that starts with the words “piece handle”. This
line tells you the name of the file that was generated from this backup set.
Make a note of the name of the file that was generated. In the example
shown in step 2, the file name is ORA92_13_1.
In the rman_bup window, if you cannot see the sizes of the individual files,
choose View→Details. The contents of the window will change to show the
full details of its files.
Find the file that was just generated from your level 0 backup. The size
of the file, as shown in the Size column, should be relatively large. As
shown in the example here, the ORA92_13_1 file is 397,000 kilobytes.
4. A level 1 incremental hot backup will back up only the datablocks that have
changed since the last level 1 or level 0 hot backup. To show this, you will
create a table in the USERS tablespace, which modifies datablocks. Only the
modified datablocks will actually be backed up.
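The statements for this step are not shown in this excerpt; a sketch, with a hypothetical table name and columns, might look like:

```sql
-- In SQL*Plus: create a table to modify some datablocks in USERS.
-- The table name and column are hypothetical.
CREATE TABLE inc_test (id NUMBER) TABLESPACE users;

-- Then, at the RMAN prompt, take the level 1 incremental backup:
BACKUP INCREMENTAL LEVEL 1 DATABASE;
```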
The output will look almost identical to that of the level 0 hot backup you
just performed. Although it looks as if RMAN is adding all the datafiles to
the backup set, it is really just examining the datafiles to determine if any
blocks have changed. RMAN will back up only those datablocks that have
changed.
6. In the RMAN output, find the line that starts with the words “piece
handle” to determine the name of the file that was just generated. In the
example shown here, that file name is ORA92_15_1.
Switch to the rman_bup window. Find the file that was just generated
by your level 1 incremental backup. The size of this file should be much
smaller than the file generated by your level 0 incremental backup in step 2.
In the example shown here, the level 0 backup piece, file ORA92_13_1, is
397,000 KB, while the level 1 backup piece, ORA92_15_1, is only 1,048
KB. Only the datablocks that were modified by your CREATE TABLE com-
mand in step 4 were included in the backup set, resulting in a much smaller
backup piece.
You should see a list of archive logs that have been generated by your
database. You will use RMAN to back up these archive logs, and then
automatically delete them from the system after the backup.
The number of archive logs and their details on your system may be
different than what is shown here.
2. Leaving the archive folder open, open a command prompt, and then
launch RMAN with the following command:
rman target rman/buprec@ora92 rcvcat rman/buprec@ora92
RMAN will launch and connect to both the target database and the recovery
catalog.
3. At the RMAN prompt, back up and delete the archive logs by issuing the
following command:
backup archivelog all delete all input;
The output will show that RMAN archived the current redo log, then allo-
cated a channel and generated a list of archive logs to include in the backup
set.
The number of archive logs shown in the D:\oracle\ora92\database\archive
folder may not match the number of archive logs added to the RMAN
backup set. This is because some of the archive logs were rendered useless
when you opened the database with the RESETLOGS option earlier in the
course. RMAN detects this and excludes the unneeded archive logs from
the backup.
In the database window, you will see that the backed up archive logs have
been deleted from this folder. They were backed up by RMAN to the
D:\oracle\rman_bup folder, and then removed from the file system. Any
archive logs still in this folder are left over from an earlier ALTER
DATABASE OPEN RESETLOGS command.
Topic 4D
RMAN Catalog Maintenance and Management
Since the RMAN recovery catalog consists of basic tables within the database,
RMAN provides the ability to store frequently used backup and recovery scripts
right in the catalog. Creating, maintaining, and executing stored scripts is
extremely easy and can greatly reduce the maintenance overhead of managing
these scripts using other methods, such as in standard text files in the file system.
RMAN scripts are created and managed right from the RMAN prompt. To create
a script, you would use the CREATE SCRIPT command, provide a script name,
and list the script commands within a set of curly braces. You do not need to add
a semicolon after the closing curly brace to terminate the command. The follow-
ing example creates a stored script to perform a full hot backup of the target
database:
Creating an RMAN Stored Script
RMAN> CREATE SCRIPT full_hot_backup
2> {
3> BACKUP DATABASE PLUS ARCHIVELOG;
4> }
To modify an existing stored script, you would use the REPLACE SCRIPT com-
mand, which overwrites the stored script with a new set of commands:
RMAN> REPLACE SCRIPT full_hot_backup
2> {
3> BACKUP DATABASE PLUS ARCHIVELOG DELETE ALL INPUT;
4> }
In this example, the full_hot_backup script that was already stored in the recov-
ery catalog was replaced with a new set of commands.
To delete a stored script, you would use the DELETE SCRIPT command and
specify the name of the script you would like to delete, along with a terminating
semicolon. You must be very careful with this command; once executed, the
specified script is deleted immediately without further prompting from the DBA,
and there is no way to roll back the operation. The following example deletes the
full_hot_backup script:
RMAN> DELETE SCRIPT full_hot_backup;
To see the names of the scripts stored in the recovery catalog, you can query the
RC_STORED_SCRIPT view from the SQL*Plus prompt:
SQL> SELECT script_name
2 FROM rc_stored_script;
SCRIPT_NAME
------------------------------------------------
backup_archive_logs
backup_control_file
full_cold_backup
full_hot_backup
SQL>
The following example shows a query to the RC_STORED_SCRIPT_LINE view
to bring up the contents of the full_hot_backup script:
SQL> SELECT text
2 FROM rc_stored_script_line
3 WHERE script_name='full_hot_backup';
TEXT
-----------------------------------------------------
{
BACKUP DATABASE PLUS ARCHIVELOG DELETE ALL INPUT;
}
SQL>
It’s important to note that a stored script is only valid for the target database you
are currently connected to. In other words, if RMAN is connected to the ORA92
database as the target, and you execute the CREATE SCRIPT command, the
script you create can only be run against the ORA92 database. However, because
RMAN scripts are stored in tables within RMAN’s schema, it is very easy to add
or reassign RMAN scripts for other databases using basic SQL commands against
the RC_STORED_SCRIPT and RC_STORED_SCRIPT_LINE views from the
SQL*Plus prompt.
TASK 4D-1
Creating, Using, and Managing RMAN Scripts
Objective: To create, use, and manage RMAN scripts to simplify database
backups.
1. You will create an RMAN script to perform a full, hot backup of the
database. The script will be stored in the recovery catalog.
Open a command prompt, and launch RMAN with the following com-
mand:
rman target rman/buprec@ora92 rcvcat rman/buprec@ora92
2. Your RMAN script will be named full_hot_backup, and will only back up
the database. Since all the essential configuration parameters are already set,
your script only needs to contain the BACKUP DATABASE command.
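Since the script only needs the BACKUP DATABASE command, the command for this step presumably looks like:

```sql
CREATE SCRIPT full_hot_backup
{
  BACKUP DATABASE;
}
```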
RMAN will display the message “created script full_hot_backup.” The script
is now stored in the recovery catalog and can be viewed through the
RC_STORED_SCRIPT and RC_STORED_SCRIPT_LINE views.
3. To see the contents of any RMAN script, you would issue the PRINT
SCRIPT command.
4. After looking at the script, you decide that you want the script to also back
up the archive logs, then delete them from the file system afterwards. You
also want to tag the backup set for easy identification later. To change the
script, you would use the REPLACE SCRIPT command.
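The exact command is not shown in this excerpt; a sketch, using the test_script tag name referenced later in the lesson, might be:

```sql
-- Assumed wording; the tag name test_script comes from the LIST BACKUP
-- TAG step later in this lesson.
REPLACE SCRIPT full_hot_backup
{
  BACKUP DATABASE TAG test_script
  PLUS ARCHIVELOG DELETE ALL INPUT;
}
```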
5. To see the changed content of your script, type print script full_hot_
backup; and press Enter.
6. To execute a stored RMAN script, you would issue the RUN command, and
enclose the EXECUTE SCRIPT command, and the script name, inside a set
of curly braces.
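Putting that together, the command for this step would be:

```sql
RUN { EXECUTE SCRIPT full_hot_backup; }
```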
You will see RMAN execute the full hot backup as specified in the script.
The backup will include all datafiles except those specified by the EXCLUDE
parameter, the archive logs, the control file, and the spfile. Additionally, all
archive logs will be deleted from the file system after they are backed up.
The SCHEMA option can be used to report on the current layout of the target
database or the layout of the target at a specified point in time. You can specify
an alternate point in time with the AT clause along with a date and time string, a
specific SCN, or a log sequence number. Consider the following examples:
REPORT SCHEMA;
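Only the first example survives in this excerpt; sketches of the AT clause variants, with made-up time, SCN, and sequence values, would be:

```sql
-- Hypothetical values for illustration.
REPORT SCHEMA AT TIME 'SYSDATE-7';
REPORT SCHEMA AT SCN 10000;
REPORT SCHEMA AT SEQUENCE 55 THREAD 1;
```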
1. Open a command prompt, then launch RMAN with the following com-
mand:
rman target rman/buprec@ora92 rcvcat rman/buprec@ora92
RMAN will launch and connect to both the target database and recovery
catalog.
2. To see a report of the current layout of the database, type report schema;
and press Enter.
The output will show a list of datafiles that currently exist in the ORA92
database. This information is very important when designing a backup and
recovery strategy for your database.
3. To generate a list of all backups of all files generated by RMAN for the tar-
get database, type list backup; and press Enter.
RMAN will generate a long list of backups that were generated. This list
will include all files that were backed up, including datafiles, control files,
archive logs, and the spfile.
The output may be so long that it will scroll up beyond the maximum limit
of the command prompt, and you may not be able to see all of it. This is
normal.
4. Since the LIST BACKUP command alone may generate more information
than can be easily read, you can filter the output based on the type of file
backed up.
To see a list of backed up datafiles only, type list backup of database; and
press Enter.
The output will show a list of all backup copies of the datafiles, but will not
include control files, archive logs, or the spfile.
5. To generate a list of backups of just the control file, type list backup of
controlfile; and press Enter.
6. To generate a list of archive logs that have been backed up, type list backup
of archivelog all; and press Enter.
The output will show only information about backup copies of archive logs
and which backup sets they belong to.
7. To generate a list of backups created as image copies, type list copy; and
press Enter.
The output will show only information about image copies that were gener-
ated by RMAN.
8. Earlier in this lesson, you generated an RMAN script to perform a full, hot
backup of the database. In that script, you specified a tag name for that
backup set, called test_script. You can generate a list of backup sets by their
tag name.
At the prompt, type list backup tag test_script; and press Enter.
9. To determine the current retention policy for the target database, type show
retention policy; and press Enter.
The output will show that the current retention policy is set to 1, which
means that RMAN is configured to maintain exactly one backup copy of all
critical database files. To comply with the retention policy, any backup cop-
ies other than the most current one will be considered obsolete. Also, any
files that have not been backed up, or files whose last backup is older than
the current full backup, will be considered in need of a backup.
10. To determine which datafiles are in need of backing up to comply with the
retention policy, type report need backup; and press Enter.
11. To determine which backups are obsolete and can be deleted, type report
obsolete; and press Enter.
1. You will first delete a backup set from the file system manually, and then
update the recovery catalog using the CROSSCHECK and DELETE
commands.
Find the backup set file with the highest backup set ID. The backup set
ID is indicated by the number immediately following the database name
ORA92. In the example shown here, that file is ORA92_21_1.
Select the backup file with the highest backup set ID and press
Shift+Delete. A question box will appear asking if you are sure you want to
delete the file. Click Yes.
2. Leaving the rman_bup window open, open a command prompt, and then
launch RMAN with the following command:
rman target rman/buprec@ora92 rcvcat rman/buprec@ora92
RMAN will launch and connect to both the target database and recovery
catalog.
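The command for this step does not appear in this excerpt; to compare the catalog entries against what actually exists on disk, the command would presumably be:

```sql
CROSSCHECK BACKUP;
```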
You will see RMAN allocate a channel, then compare each backup entry in
the recovery catalog against what actually exists on disk. Towards the end of
the output, you will see that the file you deleted could not be found on disk
and has been marked as expired.
Your results will vary depending on the number of archive logs.
4. Since the backup set is no longer available on disk, the related entries for
that backup set in the recovery catalog can be purged.
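The command itself is not shown in this excerpt; to purge the expired entries, it would presumably be:

```sql
DELETE EXPIRED BACKUP;
```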
The output will show that the expired backup was deleted, which means that
the recovery catalog entries related to the expired backup set were purged
from the catalog.
5. Obsolete backups are backups that are no longer needed according to the
configured retention policy.
To see a list of obsolete backups, type report obsolete; and press Enter.
The output will show a list of files that RMAN has determined are obsolete.
To delete them, type delete obsolete; and press Enter. RMAN will ask if
you really want to delete the objects. Type YES and press Enter.
You will see that the folder contains only a few files. One of these files is
the current backup of the control file. The other files contain a complete set
of backed up datafiles and archive logs that would be required to perform a
complete recovery in the event of a failure. RMAN deleted all other backup
sets, which were no longer needed according to the retention policy.
CATALOG ARCHIVELOG
'D:\oracle\oradata\ora92\archive\ARC_0003.ARC';
Once the information about manually created backups has been cataloged in the
recovery catalog, RMAN can immediately consider those backup files as candi-
dates for restore if necessary.
TASK 4D-4
Cataloging OS-level Database Backups
Objective: To catalog database backups that were performed through OS
commands with the recovery catalog.
1. Earlier in the course, you had performed a user-managed full cold backup of
the database and stored the backed up files in the D:\oracle\cold_bup folder.
Open a window to the D:\oracle\cold_bup folder.
These files were created through the use of standard operating system com-
mands and are not currently listed in the recovery catalog. You will use
RMAN to create entries for some of these files in the recovery catalog so the
backup can be restored by RMAN if necessary.
2. Open a command prompt, and launch RMAN with the following com-
mand:
rman target rman/buprec@ora92 rcvcat rman/buprec@ora92
RMAN will launch and connect to both the target database and recovery
catalog.
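The commands for cataloging the datafile copies are not reproduced here; following the CATALOG ARCHIVELOG example shown earlier, and assuming typical file names for the cold backup, they might look like:

```sql
-- Hypothetical file names; the actual names come from the cold backup
-- taken earlier in the course.
CATALOG DATAFILECOPY 'D:\oracle\cold_bup\system01.dbf';
CATALOG DATAFILECOPY 'D:\oracle\cold_bup\users01.dbf';
CATALOG DATAFILECOPY 'D:\oracle\cold_bup\tools01.dbf';
```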
After each datafile, RMAN will return the message “cataloged datafile copy”
and display the file name, record id, and a stamp ID to uniquely identify the
file in the recovery catalog.
5. To confirm that the backup files have been registered with the recovery cata-
log, type list copy of database; and press Enter.
The output will show that you have successfully cataloged the datafiles of
the SYSTEM, USERS, and TOOLS tablespaces with the recovery catalog. If
the need arises, RMAN will be able to locate, restore, and recover these
backup datafiles in the event of a disk failure or file corruption.
While in this activity you are only cataloging some of the files of a cold
backup, you must remember that all files of a cold backup must be available
in order to restore and recover the database with that backup, including the
datafiles and the control file.
TASK 4D-5
Validate Backup and Restore Operations
Objective: To ensure that a specified backup or restore operation can suc-
ceed as expected.
1. You will perform validation checks to ensure that the database can be
backed up and restored using the existing RMAN configuration.
Open a command prompt and launch RMAN with the following com-
mand:
rman target rman/buprec@ora92 rcvcat rman/buprec@ora92
RMAN will launch and connect to both the target database and the recovery
catalog.
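The command for this step is not shown in this excerpt; it would presumably be:

```sql
BACKUP VALIDATE DATABASE;
```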
The output will look very similar to a standard backup, except RMAN does
not actually perform the backup. RMAN acts out the backup process but
does not generate a backup set, nor does it record any information to the
recovery catalog. If any issues are found during the process, such as a chan-
nel configuration for a backup tape that does not exist, messages would be
included in the output to provide information about the issues.
3. Once the backup validation is complete, type restore validate database; and
press Enter.
The output will show that RMAN starts the validation of the datafile backup
set, and acts out a full restore of the database using the latest full backup. If
any issues are found during the process, messages would be included in the
output to provide information about the issues. In this case, RMAN has
reported that there is no backup or image copy of datafile 5 found to restore.
This is because the EXAMPLE01.DBF datafile has been excluded from all
database backups. This information could be critical when determining if the
current database backups are sufficient to recover from a major failure.
Lesson Review
4A Your database has a 30-megabyte datafile that was completely full.
After truncating several tables, the volume of data in the datafile dropped
to 12 megabytes. How many megabytes of this datafile will be
backed up by RMAN?
a. 12
b. 26
✓ c. 30
d. The file will not be backed up.
Which of the following can be stored in the repository when using the
recovery catalog as a repository, but not with the control file?
a. Target-specific configuration parameter settings.
✓ b. Stored scripts.
c. Information about corruptions in backup files.
d. Complete archive log history.
4B What command would you use to invoke RMAN and connect to both
the target database and recovery catalog simultaneously?
rman target username/password@db1 catalog
username/password@db2
What information about the target database is loaded into the recovery
catalog when the database is first registered with the catalog? Choose all
that apply.
a. Control file layout
✓ b. Database name
✓ c. Incarnation information
✓ d. Oracle version
True or False? Setting the EXCLUDE parameter for the INDX tablespace
will cause the tablespace to be excluded from all backups of all data-
bases supported by that repository. Explain your answer.
False. A tablespace exclusion is target database-specific, meaning that a
tablespace that has been added to the exclusion list will only be excluded
from future backups of the target database RMAN is currently connected to.
Each database will have its own tablespace exclusion list.
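As a sketch of this behavior, while connected to a given target database you would add the INDX tablespace to that target's exclusion list with:

```
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE indx;
```

The setting is recorded for the currently connected target only; other databases registered in the same recovery catalog are unaffected.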
4C For an RMAN cold database backup, why does the database have to be
in the mount state?
This allows RMAN to log in to the database and read pertinent information
from the data dictionary and control file to update the RMAN repository.
4D True or False? RMAN checks the syntax of a stored script at run time.
Explain your answer.
False. RMAN checks the syntax of a stored script when the script is created.
Objectives
To perform complete and incomplete recoveries of the database using RMAN,
you will:
1. Before attempting to restore datafiles, you will ensure that you have one
good, hot backup of the database. You will perform this backup with RMAN
while using the target database control file as the RMAN repository.
Open a command prompt, and launch RMAN with the following com-
mand:
rman target rman/buprec@ora92 nocatalog
RMAN will launch and connect to the target database and will use the con-
trol file as the RMAN repository.
2. You will perform a backup of all datafiles in the database, including the
EXAMPLE datafile. Currently, the EXAMPLE tablespace is excluded from all
backups, as specified by the EXCLUDE parameter.
3. You will now perform the full, hot backup. The backup will include all
datafiles, the control file, the spfile, and all archive logs. You will also delete
the archive logs from the file system after they have been backed up.
At the RMAN prompt, issue the following command:
BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
Your results may vary depending on the number of archive logs.
You will see RMAN begin the backup process. It will take a few minutes for
the backup to complete.
4. You will now simulate the loss of all datafiles, then restore them from your
backup using RMAN.
To shut down the database, type shutdown immediate; and press Enter.
You will see Oracle close and dismount the database, then shut down the
instance.
You will delete all Oracle datafiles from this folder. You will not delete the
control files, the redo log files, or the TEMP01.DBF file. Select the follow-
ing files to delete:
• CWMLITE01.DBF
• DRSYS01.DBF
• INDX01.DBF
• ODM01.DBF
• RMAN01.DBF
• SYSTEM01.DBF
• TOOLS01.DBF
• UNDOTBS01.DBF
• USERS01.DBF
• XDB01.DBF
Delete the selected files, and only these files, from the folder. Be careful
not to delete any other files.
6. Leaving the ora92 folder open, switch back to the SQL*Plus window.
You will see Oracle start the instance and mount the database, but the data-
base will not open because it cannot find the SYSTEM01.DBF datafile. You
have simulated the loss of all datafiles in the database.
7. You will use RMAN to restore all the datafiles from the last backup.
Open a command prompt, and then launch RMAN with the following
command:
rman target rman/buprec@ora92 nocatalog
8. To restore the datafiles from the last backup, type restore database; and
press Enter.
RMAN will determine the type of recovery that is required and which
backup copies of the datafiles should be used to restore the database. RMAN
will then extract those files from the appropriate backup set and restore them
to their original locations.
9. Once the restore is complete, exit from both RMAN and the command
prompt.
Although the datafiles have been restored, you must remember that restoring
datafiles only involves copying the datafiles from their backup locations to
their original locations. It does not, however, recover the database to roll
forward changes contained in the archive logs. The database will remain in
an unusable state until the recovery is performed. You will do this in the
next activity.
TASK 5A-2
Perform a Complete Recovery
Objective: To perform a complete recovery after restoring the database
from backup.
2. In the previous task, you simulated the loss of all datafiles in the database,
then restored all the datafiles from the last backup. Since the backup copy
was valid and all archive logs since the backup was performed are available,
you will be able to perform a complete recovery. You will perform this
recovery using RMAN.
Open a command prompt, and launch RMAN with the following com-
mand:
rman target rman/buprec@ora92 nocatalog
RMAN will launch and connect to the target database using the control file
as the RMAN repository.
3. When performing a database recovery with RMAN, the RMAN utility will
determine the type of recovery required, and will automatically extract any
necessary archive logs from the appropriate backup set, and apply their con-
tents to the database.
4. Once the recovery is complete, you must open the database to make it avail-
able for general use.
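The recovery and open steps described above correspond to the following standard RMAN commands (a sketch of the sequence; the output at your prompt will differ):

```
RMAN> recover database;
RMAN> alter database open;
```

Because all archive logs since the backup are available, RMAN applies them to roll the database forward to the point of failure before the database is opened normally.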
TASK 5A-3
Describe Block Media Recovery
1. True or False? Block media recovery can be performed from both the
RMAN utility and the SQL*Plus prompt. Explain your answer.
False. Block media recovery is only available from within the RMAN utility.
If the recovery catalog is used, and the catalog is stored in a database that is
independent from the target database, then restoring the control file and recover-
ing the target database is very simple and can use the commands shown. RMAN
will use the information in the recovery catalog to determine which backup con-
trol file to restore. However, if the target database’s control file is used as the
repository, then the recovery becomes slightly more difficult. The control file is
lost, which means RMAN has no repository for the target database. However, it
is still possible to use RMAN for the recovery.
If you have configured control file autobackups with the CONFIGURE command,
and have at least one control file autobackup that was created no more than the
number of days specified by CONTROL_FILE_RECORD_KEEP_TIME, then
you can restore the control file from its autobackup. Even without the repository,
you would just need to tell RMAN which database you are recovering, and
RMAN will scan the backup location specified by the channel configuration for
any control file autobackup copies for the specified database. To tell RMAN
TASK 5B-1
Describe Recovery with RMAN After Loss of Control
File
1. Which one of the following statements is true?
a. Cancel-based recovery is unavailable for user-managed recovery,
and change-based recovery is unavailable for RMAN recovery.
✓ b. Sequence-based recovery is unavailable for user-managed recovery,
and cancel-based recovery is unavailable for RMAN recovery.
c. Cancel-based recovery is unavailable for user-managed recovery,
and sequence-based recovery is unavailable for RMAN recovery.
d. Change-based recovery is unavailable for user-managed recovery,
and time-based recovery is unavailable for RMAN recovery.
3. If the target database is down, how could you determine the correct
value to specify for the SET DBID command?
a. By looking at the name of the backup sets generated by RMAN.
✓ b. By looking at the name of the control file autobackup.
c. By looking at the name of the spfile autobackup.
d. By opening the backup control file in a text editor.
TASK 5B-2
Using RMAN to Recover After Loss of the Current Redo
Log
Objective: To perform an incomplete recovery using RMAN from the
loss of the current online redo log group
1. You will first simulate the loss of the current online redo log.
2. To determine which redo log group is the current group, type
@C:\079176\logfiles.sql and press Enter.
The STATUS column in the output will show which log group is the current
group, and the SEQ# column will show the log sequence number of each
group. In the example shown here, the current log group is log group 3, with
a log sequence number of 7.
Oracle will display the messages “1 row created” and “Commit complete.”
The redo information generated from the CREATE TABLE and INSERT
statements is currently stored in the current online redo log group.
4. You will now force a log switch. The current online redo log will be
archived, and Oracle will switch to the next log group.
The output will show that Oracle has switched to the next redo log group.
The redo log that contains the CREATE TABLE and INSERT statements has
been archived. In this example, log group 1 is now the current log group,
with a log sequence number of 8. Log group 3 has been archived.
Oracle will display the messages “1 row created” and “Commit complete.”
The redo information for the CREATE TABLE and the first INSERT state-
ments is stored in log group 3, which has been archived. The redo
information for the last row inserted was stored in the current online log
group, which is now log group 1. Losing the current online log group will
mean that all changes stored in that log group are permanently lost.
7. You will now simulate the loss of the current online redo log group.
In the ora92 folder, delete the redo log file that corresponds with the cur-
rent online redo log group for your database. In the example here, the file
would be REDO01.LOG. Be careful to delete the correct file, and only
that file.
The output will show that the instance was started and the database
mounted, but the database could not be opened because Oracle could not
find the current online redo log group.
10. To recover from this type of loss, you must restore all datafiles from backup
and perform an incomplete recovery up to the last archive log that was
generated. All changes stored in the current log group will be lost. You will
restore and recover the database using RMAN.
11. To perform a full restore of the database, type restore database; and press
Enter.
RMAN will allocate a channel and begin the restore process. All backup
copies of the datafiles will be extracted from the backup set and restored to
their original locations, overwriting the existing datafiles. It will take a few
minutes for the restore to complete.
12. You will now use RMAN to perform an incomplete recovery of the
database. Since the failure occurred due to loss of the current online redo
log, the most efficient method of recovery is to specify the log sequence
number of the current online log group as the stopping point during recovery
using the UNTIL SEQUENCE option of the RECOVER command.
To recover the database until the current log sequence number, issue the
following command. For the x, specify the log sequence number of the
current online redo log group for your database, which you determined
in step 6. In this example, the log sequence number for the current redo log
group is 8.
recover database until sequence x thread 1;
RMAN will determine the archive logs and their locations that are needed to
perform the specified recovery. RMAN will then extract any archive logs
from the latest backup set, if necessary, then apply the contents of the logs
to roll the database forward. All logs that are available are applied to the
database, with the exception of the log that normally would have been gen-
erated when the current online log group was archived.
Once the database has been recovered to the specified log sequence number,
the recovery is complete.
13. The database has been recovered to include all available changes. The cur-
rent online redo log group is no longer available, therefore its changes will
be lost.
At the RMAN prompt, type alter database open resetlogs; and press Enter.
After a few moments, RMAN will display the message “database opened.”
Exit from both the RMAN utility and the command prompt.
14. When you opened the database with the RESETLOGS option, you created a
new incarnation of the database, and all backups of the previous incarnation
of the database are now useless for recovery. You should immediately per-
form another full, cold backup of the database in case there is some sort of
failure in the near future.
After a moment, RMAN will report that the instance has started and the
database was mounted.
A full, cold backup of the database will begin. It will take several minutes
for the backup to complete.
15. Once the cold backup is complete, you will also create an image copy of the
control file. This will allow you to recover the database if the recovery cata-
log is not available.
Exit from both the RMAN utility and the command prompt window.
16. At the SQL*Plus prompt, type Connect sys@ora92 as sysdba and press
Enter. Type ora92 for the password and press Enter.
17. To see which changes were recovered and which were lost, you will now
query from the TEST_RMAN_REDO table.
The output will show that only the first row, containing the value A, was
recovered. The second row, containing the value B, which was only stored in
the current online redo log group, was lost.
After performing an incomplete recovery and opening the database using the
RESETLOGS option, all information about tempfiles belonging to any tem-
porary tablespaces is lost. You will add the tempfile back to your temporary
tablespace.
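A sketch of the command you would use, assuming the temporary tablespace is named TEMP and its tempfile is the TEMP01.DBF file mentioned earlier (the path and size shown are assumptions; match them to your environment):

```
ALTER TABLESPACE temp
  ADD TEMPFILE 'C:\oracle\oradata\ora92\TEMP01.DBF'
  SIZE 40M REUSE;
```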
Parameter Description
lock_name_space Set to the new name of the clone database.
db_file_name_convert Specifies the directory path where the auxiliary file set will be restored to.
log_file_name_convert Specifies the directory path where the redo logs will be re-created when the clone is opened with the RESETLOGS option.
control_files Specifies the directory path and file names where the control files will be restored to.
log_archive_start Must be set to FALSE, since the clone must be running in NOARCHIVELOG mode.
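In the clone's parameter file, these settings might look like the following sketch (the clone name CLONE and all directory paths are assumptions for illustration):

```
lock_name_space       = CLONE
db_file_name_convert  = ('C:\oracle\oradata\ora92', 'C:\oracle\oradata\clone')
log_file_name_convert = ('C:\oracle\oradata\ora92', 'C:\oracle\oradata\clone')
control_files         = ('C:\oracle\oradata\clone\control01.ctl')
log_archive_start     = FALSE
```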
TASK 5B-3
Describe Tablespace Point-in-Time Recovery Using
RMAN
1. What is the difference between the recovery set and the auxiliary set?
The recovery set contains all the datafiles of the tablespace that is to be
recovered. The auxiliary set contains the minimum set of files from the
primary database that must be restored to make the clone database fully
operational, which includes the control file, the SYSTEM tablespace
datafiles, and the undo segment tablespace datafiles. The auxiliary set does
not include any of the datafiles of the tablespace to be recovered.
3. All of your backup copies are stored on tape. However, when perform-
ing tablespace point-in-time recovery with RMAN, you must still
allocate at least one channel of type DISK. Why?
The datafiles of the recovery and auxiliary sets will be restored from tape,
but RMAN will replicate the current control file from the original database
to create the control file for the auxiliary database. Since this file resides on
disk, a channel of type DISK must be allocated.
Summary
In this lesson, you learned how to perform both complete and incomplete
recoveries with RMAN. You learned how to restore lost files, including
datafiles, control files, archive logs, and the spfile. You also learned how to
recover the database to the point of failure and to alternative points in time
using the RMAN utility. Finally, you learned how to use RMAN to perform
a tablespace point-in-time recovery by creating a clone database, recovering
that clone to a specific point in time, and transferring the recovered
tablespace back to the original database.
Lesson Review
5A Which RMAN command is equivalent to the ALTER DATABASE
RENAME FILE command?
a. CREATE DATAFILE...AS
b. RESET DATAFILE
c. SET NEWNAME
✓ d. SWITCH
5B Why is it usually better to issue the SET UNTIL command rather than
include the UNTIL command during an RMAN incomplete recovery?
If the UNTIL command is used, RMAN will try to restore the last known
good backup of the recovery target, which may have been created after the
point in time specified by the UNTIL clause. If you issue the SET UNTIL
command before the RESTORE command, RMAN will automatically choose
the most appropriate backup of the recovery target and avoid potential
errors during recovery.
Your database has suffered a loss of all copies of the current control file.
By looking in the alert log for the last redo log switch, you have deter-
mined that the current log sequence number for the database was 32.
When using RMAN to perform an incomplete recovery, what log
sequence number should you specify for the UNTIL clause?
a. 31
✓ b. 32
c. 33
d. None, RMAN will automatically determine when to stop the recov-
ery
Your database has suffered a loss of the current redo log group. By look-
ing in the alert log for the last redo log switch, you have determined
that the current log sequence number for the database was 32. When
using RMAN to perform an incomplete recovery, what log sequence
number should you specify for the UNTIL clause?
✓ a. 31
b. 32
c. 33
d. None, RMAN will automatically determine when to stop the recov-
ery
Overview
Oracle provides several utilities to load and unload data in and out of the
database. For example, the SQL*Loader utility is used to bulk load data
from flat ASCII files into the database. The export utility is used to export
data from the database into a file in binary format, which can then be
imported back into the same database or another database. In this lesson,
you will learn how to use these utilities to load, unload, and move data in
large volumes.
Data Files
inventory.dat
listplan.sql
item_ext.sql
final_setup.pif
Lesson Time
4 hours
Objectives
To move data around quickly and easily using Oracle-provided utilities, you will:
Types of Loads
SQL*Loader supports two methods of loading data: conventional path and direct
path. In conventional path loading, SQL*Loader reads the data from the flat file
and generates standard INSERT statements to pass to the database. This method
of loading provides maximum flexibility because Oracle functions, such as
UPPER or TO_CHAR, can be applied to the data prior to inserting it into a
table. In direct path loading, SQL*Loader reads the data from the flat file,
but bypasses the SQL processing engine of the database and stores the data
directly in the target table. Because the SQL processing engine has been
bypassed, Oracle functions cannot be used against the data while it is
loading, but the speed and performance of direct path loads is often far
superior to that of conventional path loads. The decision to use one over the
other will depend on the requirements of the data to be loaded. You will
learn how to use both loading methods later in this topic.
conventional path load: A method of loading data into Oracle using
standard SQL calls.
direct path load: A method of loading data into Oracle that writes directly
to the datablocks in the datafiles, bypassing the SQL processing engine.
The SQL*Loader Control File
The SQL*Loader control file is a simple text file that can easily be created by
hand. The control file contains the names and locations of the datafiles where
the data is stored, as well as the format of the data within those datafiles. It
also specifies the name of the target table and the columns the data will be
loaded into. Any Oracle functions to be applied to the data during the load are
listed for each column. The control file also contains any options that are to
be used during loading. Additionally, the control file can contain the actual
data that is to be loaded; the data must be listed at the end of the file, after
all other control file contents. This allows simple maintenance and ease of
management when dealing with multiple datafiles.
Since SQL*Loader is such a powerful utility, the control file syntax allows a long
list of available commands and options. Shown here is the basic syntax for a
SQL*Loader control file.
Refer to the Oracle9i Utilities manual for a complete list of SQL*Loader
control file options and keywords.
[RECOVERABLE | UNRECOVERABLE]
[CONTINUE] LOAD DATA
INFILE file_name[,...]
BADFILE file_name
DISCARDFILE file_name
FIELDS
TERMINATED BY [WHITESPACE | 'string']
ENCLOSED BY 'string'
OPTIONALLY ENCLOSED BY ' string'
TRAILING NULLCOLS
[INSERT | APPEND | REPLACE | TRUNCATE]
INTO TABLE table_name
WHEN (conditions)
(column_list)
[BEGIN DATA]
Datafile Formats
SQL*Loader can read two different types of datafile formats: fixed-width and
delimited. In a fixed-width datafile, the columns of all rows in the file are set
at a fixed width, much like a spreadsheet. To load the data, you would include
in the control file the starting position and ending position of each column in
the datafile. Figure 6-2 shows a sample datafile in a fixed-width format.
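For a fixed-width load, the control file maps each column to its character positions with the POSITION keyword. The following sketch is hypothetical and is not taken from Figure 6-2 (the table name, columns, and positions are assumptions):

```
LOAD DATA
INFILE 'C:\data\employees.dat'
INTO TABLE employees
(
  emp_id    POSITION(1:6)   INTEGER EXTERNAL,
  emp_name  POSITION(7:26)  CHAR,
  hire_date POSITION(27:37) DATE "DD-MON-YYYY"
)
```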
SQL*Loader Commands
SQL*Loader is a command line utility and, therefore, comes with several argu-
ments that can be passed to the utility when executed. The entire SQL*Loader
command is entered at the command prompt with all arguments on a single line,
and each argument is assigned a value. The table shown here provides a list of
valid arguments with a description of appropriate values for each.
Argument Description
userid The Oracle user name and password to execute the load with. This user must
have INSERT privileges on the target table.
control The name of the control file to use for the load, including full path and file
name.
bad The path and file name of the bad file where rows that cause errors will be
stored.
data The path and file name of the datafile to load from.
discard The path and file name of the file to store discarded rows.
discardmax The maximum number of rows allowed to be discarded. Once this limit is
reached, the load is terminated. This argument defaults to allow all discards.
skip The number of rows in the datafile to skip starting from the first row. This is
useful when you must continue a previous load that failed. The default is 0.
load The number of rows to load. The default is all rows.
errors The maximum number of errors to allow before terminating the load. The
default is 50.
rows The number of rows in a conventional path bind array, or the number of
rows between saves in a direct path load. For conventional path, the default
is 64, for direct path the default is all rows.
bindsize The size of the conventional path bind array in bytes.
silent Suppress all screen-displayed messages during run. Useful for loads run
from a batch file or script.
direct Use direct path. Default is FALSE.
parfile The path and file name of an optional parameter file that contains command
line arguments.
parallel Perform parallel load (multiple datafiles into a single table). Default is
FALSE.
file The name of the database datafile in which to first start loading the data. The
table may extend beyond this datafile if necessary.
readsize Size of the read buffer in bytes.
You may have noticed that several arguments serve the same purpose as portions
of the control file syntax. You have a choice of listing those options as
arguments on the command line, or as clauses in the control file. If an option
appears in both the command line and the control file, the command line value
will take precedence. The following example shows a sample SQL*Loader
command, complete with arguments.
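A representative command built from the arguments in the table above might look like this (the user name, paths, and file names are hypothetical):

```
sqlldr userid=scott/tiger control=C:\load\inv.ctl log=C:\load\inv.log bad=C:\load\inv.bad errors=100 direct=false
```

As noted earlier, the entire command is entered on a single line at the command prompt.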
TASK 6A-1
Using SQL*Loader
Objective: To quickly load data into the database using SQL*Loader.
1. The inventory.dat file contains 5000 rows of sample inventory data that you
will load into a table in the database.
The inventory.dat file will open in a Notepad window. The contents of this
file are in a delimited format, with each field terminated by a comma. You
will load the contents of this file into a table in the database. When you are
finished examining the file, close the Notepad window.
2. You will first create the ITEM table where the data will be stored.
Leaving the C:\079176 window open, launch SQL*Plus and log in as sys.
3. You will now create the control file for SQL*Loader to load this data.
4. In this control file, you will specify that the data to be loaded is located in
the file C:\079176\inventory.dat. You will direct the utility to automatically
create a bad file called C:\079176\inventory.bad to catch any rows that cause
errors. You will also specify that each field in the datafile is terminated by a
comma.
Once you have completed the contents of the file, save the file to the
C:\079176 directory. Change the Save As Type to All Files, and name
the file INVENTORY.CTL.
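Combining the requirements of steps 3 and 4, the INVENTORY.CTL file might contain something like the following sketch (the ITEM column list is an assumption based on the table structure used later in this lesson):

```
LOAD DATA
INFILE 'C:\079176\inventory.dat'
BADFILE 'C:\079176\inventory.bad'
INSERT
INTO TABLE item
FIELDS TERMINATED BY ','
(item_id, vendor_id, item_name, descr, unit, retail_price, qty_on_hand)
```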
Once you press Enter, SQL*Loader will execute and load all the data from
the INVENTORY.DAT file into the ITEM table. You will see the output
from the utility scroll by as it loads rows of data. Once you are finished
looking at the output, exit from the command prompt.
6. Once the load is complete, you should check the SQL*Loader log and bad
files to see if any errors were encountered that caused rows to be rejected.
Switch back to the C:\079176 window. You will see that a bad file was not
generated, which means that no rows were rejected during the load. Open
the INVENTORY.LOG file.
The log file contains a summary of the activity that took place during the
load. Take a moment to look through the file. Once you are done looking at
7. You will now confirm that all 5000 rows have indeed been loaded into the
ITEM table.
The output will show that the ITEM table now contains all 5000 rows from
the INVENTORY.DAT file.
8. You can query the ITEM table to see some of its contents.
First, format the output of your query with the following commands:
The output will show that the ITEM table now contains the rows of data
from the INVENTORY.DAT file.
TASK 6A-2
Loading Data Using Direct Load Inserts
Objective: To load data into a table with a direct load insert.
1. To see the difference between a direct load insert and a conventional insert,
you will look at the differences between the execution plans of each type of
statement. You will first need to create the PLAN_TABLE, which will hold
the execution plan generated by the EXPLAIN PLAN command.
Oracle will display the message “Table created.” You can now generate
execution plans with the EXPLAIN PLAN command.
2. You will create an empty table, called ITEM_DIRECT, with a structure iden-
tical to that of the ITEM table. This table will serve as the target of your
direct load insert.
To verify that the ITEM and ITEM_DIRECT tables are identical, issue the
following commands:
DESCRIBE item
DESCRIBE item_direct
The output will show that the two tables are indeed identical.
3. You will now generate an execution plan for a conventional INSERT state-
ment which reads all the rows from the ITEM table and inserts them into the
ITEM_DIRECT table.
5. You will now generate an execution plan for a direct load INSERT state-
ment to read all the rows from the ITEM table, and append them to the
ITEM_DIRECT table.
To clear out the plan table, type TRUNCATE TABLE plan_table; and
press Enter.
6. To see the new execution plan you just created, type
@C:\079176\listplan.sql and press Enter.
The output will show a different execution plan for this INSERT statement
than the previous statement. This execution plan shows an INSERT state-
ment and a table access of the ITEM table, just as before, but this time the
7. You will now execute the direct load insert into the ITEM_DIRECT table.
Oracle will display the message “5000 rows created.” Type COMMIT; and
press Enter.
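For reference, a direct load insert is written as a conventional INSERT...SELECT with the APPEND hint, which is what changes the execution plan; a sketch consistent with the tables created in this task:

```
INSERT /*+ APPEND */ INTO item_direct
SELECT * FROM item;

COMMIT;
```

The COMMIT is required before the table can be queried again in the same session, because a direct load insert writes above the table's high-water mark.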
External Tables
Oracle9i provides a feature that enables you to access external data using stan-
dard SQL as if you were querying a table or a view. You can create external
tables, which are table-like structures that direct Oracle to read the data from
external files and return it to the calling user. Creating and using external
tables eliminates the need to use a separate process that loads the data into the
database before processing it. Oracle provides the clause ORGANIZATION
EXTERNAL for the CREATE TABLE command to define external tables.
external table: A table-like structure in Oracle that accesses an external flat
file containing data.
The definition of an external table looks very much like a combination of the
standard CREATE TABLE syntax and a SQL*Loader control file. The CREATE
TABLE command includes the logical structure of the table, meaning the columns
and their data types. It also includes the location where the external file can be
found and information about how the data is formatted in the file. When the
external table is accessed, the file that contains the data is read sequentially to
fetch the data into the buffer cache. External tables are read-only tables, and you
cannot create indexes on them. Also, you must take extra care to ensure the
security of the data, because any user who has sufficient privileges through the
operating system can open and read the datafile.
TASK 6A-3
Creating and Accessing External Tables
Objective: To create and access external tables to read and load data from
outside the database.
1. You will first create the directories that will hold the datafile, the log file,
and the bad file. The three directories will be called data, log, and bad,
respectively.
In the C:\079176 directory, create the data, log, and bad directories.
2. You will use the inventory.dat file as the data source for your external table.
Before you can access the file, you must first tell Oracle where this file is
located. This is done by creating a directory within the database that points
to the actual path where the file is located. You must also create directories
to tell Oracle where to create the log and bad files.
4. You will now create the external table. This table will have the following
structure.
COLUMN DATATYPE
item_id NUMBER(12,0)
vendor_id NUMBER(12,0)
item_name VARCHAR2(64)
descr VARCHAR2(1000)
unit VARCHAR2(5)
retail_price NUMBER(8,2)
qty_on_hand NUMBER(8,0)
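Based on the structure above and the data, log, and bad directories created in step 1, the CREATE TABLE statement might look like the following sketch (the directory object names data_dir, log_dir, and bad_dir are assumptions, standing in for whatever names you created in step 2):

```
CREATE TABLE item_ext
( item_id      NUMBER(12,0),
  vendor_id    NUMBER(12,0),
  item_name    VARCHAR2(64),
  descr        VARCHAR2(1000),
  unit         VARCHAR2(5),
  retail_price NUMBER(8,2),
  qty_on_hand  NUMBER(8,0)
)
ORGANIZATION EXTERNAL
( TYPE oracle_loader
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    BADFILE bad_dir:'inventory.bad'
    LOGFILE log_dir:'inventory.log'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('inventory.dat')
);
```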
Oracle will display the column layout of the ITEM_EXT table. You will see
that it looks identical to a typical Oracle table.
6. Now that the external table has been created, you can query from it just like
any other table.
To see the total number of rows in the table, issue the following query:
SELECT COUNT(*)
FROM item_ext;
The output will show that there are 5000 rows in the ITEM_EXT table.
7. To find how many items were purchased from each vendor, issue the follow-
ing query:
SELECT vendor_id, COUNT(item_id)
FROM item_ext
GROUP BY vendor_id;
8. Since an external table can be queried just like any other table, this becomes
a convenient and powerful method for loading data into the database. You
will query from the table and direct the data to be loaded into another table.
You will create a new table based on the data from the ITEM_EXT exter-
nal table. This new table will be called HIGHER_ITEMS, and will contain
only items that have a retail price that is greater than 40 dollars.
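A sketch of the statement described above, using standard CREATE TABLE ... AS SELECT syntax:

```
CREATE TABLE higher_items AS
  SELECT *
  FROM item_ext
  WHERE retail_price > 40;
```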
9. To see how many rows exist in your new table, issue the following query:
SELECT COUNT(*)
FROM higher_items;
10. To see the lowest retail price of the items in this table, issue the following
query:
SELECT MIN(retail_price)
FROM higher_items;
Oracle will display the lowest retail price that exists in the HIGHER_ITEMS
table. Only the items with a retail price greater than 40 dollars were
retrieved from the ITEM_EXT external table and loaded into the HIGHER_
ITEMS table.
Topic 6B
The Export and Import Utilities
The export and import utilities are command line executables that allow you to
quickly reorganize data in a database or to move data from one database to
another. The flat file created by the export utility is in binary format and can only
be read by Oracle’s import utility, but it can be imported into another database,
even if the target database resides on another operating system. This is
especially useful if you need to quickly move an entire database from one
platform to another.
Argument Value
userid User name and password of the user executing the utility. Must be the first
argument passed at the command line.
buffer Size of the data buffer in bytes.
file Path and file name of the export file to create.
compress Indicates that each object to be exported will later be imported into a single
extent. The default is Y, to indicate that objects will be compressed.
grants Indicates whether user privileges will be included in the export. The default
is Y.
indexes Indicates whether indexes will be included in the export. The default is Y.
rows Indicates whether rows of data will be included in the export. The default is
Y. If set to N, then only the structure of the objects will be exported.
constraints Indicates whether constraints will be included in the export. The default is
Y.
log Path and file name of the log file to record screen output.
direct Indicates whether the export should be done in direct mode.
feedback Sets the feedback interval, in rows. The utility will display a dot (.) on the
screen each time this many rows have been exported, to show its progress.
The default is 0, which disables the feedback display.
filesize Sets the maximum size of each export file. When the current file reaches
this size, and multiple files are listed by the file argument, the export
utility will automatically begin writing to the next file. If multiple files
are not listed, the export will halt with an error.
query Lists the conditions to filter the data prior to exporting.
full Indicates whether to perform a full database export. The default is N. Can-
not be used in conjunction with owner, tables, or
tablespaces.
owner Lists the users to export for a user-level export. Cannot be used in
conjunction with full, tables, or tablespaces.
tables Lists the tables to export for a table-level export. Cannot be used in
conjunction with full or owner.
recordlength Specifies the length of the file record in bytes. The default is operating
system dependent.
parfile Path and file name of optional parameter file to hold export parameters.
The export utility also provides a help screen that can be accessed by running
the export utility with only the help=y argument. The help screen will display a list
of all available arguments with their descriptions. The following shows a sample
export command to take a user-level export of the user Scott.
exp userid=sys/oracle file=d:\export\scott.dmp ⇒
log=d:\export\scott.log direct=y feedback=1000 owner=scott
In this example, Scott’s entire schema will be exported into the SCOTT.DMP
export file. The export will be performed in direct mode and will display a dot
for every 1000 rows exported. An export file should never be opened or edited.
Doing so could corrupt the file, which will render it useless. The only recourse
for dealing with a corrupted export file is to take another export.
TASK 6B-1
Exporting the Database
Objective: To export the contents of the database using the export utility.
1. You will first perform a full database export. You will perform the export in
direct mode, and the export file will be named exp_ora92_full.dmp.
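Following the pattern of the sample command shown earlier, a full export might be invoked along these lines; the user ID and file paths are assumptions, while the export file name comes from this step.

```
exp userid=sys/oracle full=y direct=y ⇒
file=d:\export\exp_ora92_full.dmp log=d:\export\exp_ora92_full.log
```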
Once you press Enter, the export will begin. You will see the window fill
with informational messages as the utility processes each step of the export.
All objects in the database, except for those owned by SYS, will be exported.
2. You will now perform an export of just a single schema. The schema you
will use for this export is OE, which is the user for the order entry example
schema.
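A user-level export of the OE schema might use the owner argument, as in this sketch (user ID and paths are assumptions):

```
exp userid=sys/oracle owner=oe direct=y ⇒
file=d:\export\exp_ora92_oe_schema.dmp log=d:\export\exp_ora92_oe_schema.log
```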
Once you press Enter, the export will begin. This time, only the objects
owned by the OE user will be exported to the exp_ora92_oe_schema.dmp
file. It will take a few minutes for the export to complete.
3. You will perform an export of only those objects that reside in the
EXAMPLE tablespace. All objects in this tablespace, regardless of which
user owns them, will be exported to the exp_ora92_example_ts.dmp file. The
export will also include any indexes on the tables in the EXAMPLE
tablespace, even if those indexes are stored in a different tablespace.
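A tablespace-level export uses the tablespaces argument; the command might resemble the following (user ID and paths are assumptions):

```
exp userid=sys/oracle tablespaces=example ⇒
file=d:\export\exp_ora92_example_ts.dmp log=d:\export\exp_ora92_example_ts.log
```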
Once you press Enter, the export will begin. All objects in the EXAMPLE
tablespace will be exported. It will take a few minutes for the export to
complete.
4. You will now perform an export of only a single table. The table you will
use is the SCOTT.EMP table.
5. You will now perform an export of several tables owned by the SH user, all
of which have names that begin with the letter C.
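The tables argument accepts a naming pattern, so an export of all SH tables beginning with C might look like this sketch; the export file name matches the one referenced later in the course, while the user ID and paths are assumptions.

```
exp userid=sys/oracle tables=sh.c% ⇒
file=d:\export\exp_ora92_wildcard.dmp log=d:\export\exp_ora92_wildcard.log
```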
Once you press Enter, all the tables owned by SH that begin with the letter
C will be exported.
The export utility provides a wide variety of features to allow DBAs and
users a powerful method of extracting data from an Oracle database. The
utility can export the entire database, one or more users’ schemas, the con-
tents of a tablespace, one or more specific tables, or even a list of tables that
match a certain naming pattern.
Argument Value
userid User name and password of the user executing the utility. Must be the
first argument passed at the command line.
buffer Size of the data buffer in bytes.
file Path and file name of the file to import from.
show Instructs the import utility to only show the contents of the source file.
No data will be imported. The default is N.
ignore Specifies whether to ignore create errors during import due to the
object already existing. The default is N.
grants Indicates whether user privileges will be imported. The default is Y.
indexes Indicates whether indexes will be imported. The default is Y.
rows Indicates whether rows of data will be imported. The default is Y. If set
to N, then only the structure of the objects will be imported.
constraints Indicates whether constraints will be imported. The default is Y.
log Path and file name of the log file to record screen output.
commit Specifies whether to send a COMMIT command each time all the data
in the import buffer has been written to the datafiles. The default is N.
Like the export utility, the import utility also provides a help screen that can be
accessed by passing the argument help=y at the command line. The following
shows a sample import command to perform a full import.
imp userid=sys/oracle file=d:\import\scott.dmp ⇒
log=d:\import\scott.log feedback=1000 full=y
In this example, the entire contents of the SCOTT.DMP file will be imported into
the database. All screen output will be recorded in the SCOTT.LOG file, and the
utility will generate a dot for every 1000 rows imported to show its progress.
Once the tables are imported, the utility will pass the appropriate commands to
Oracle to enable any referential integrity constraints on the tables. If any con-
straints cannot be enabled for any reason, such as a foreign key that references a
table that does not exist, the import utility will display a warning to the screen,
but will continue with the import. Additionally, any PL/SQL packages, proce-
dures, and functions that are included in an export file, namely a full or user-level
export, will be automatically created and compiled at the end of the import
process. If any errors occur while compiling these objects, the objects are still
created, but are left in an invalid state. The import utility will display a warning
message about the error and continue its process.
1. In the previous task, you performed several exports from the database, one
of which was a full database export. From that full export, you will perform
a table level import to import a single table from the HR schema into the
SCOTT schema. The table you will import is called JOB_HISTORY.
First, you will confirm that Scott does not currently own a table called JOB_
HISTORY.
The output will show that Scott owns four tables, but none of them are
named JOB_HISTORY.
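The import command for this step might resemble the following sketch; the fromuser and touser arguments remap HR's table into Scott's schema, and the file paths are assumptions.

```
imp userid=sys/oracle file=d:\export\exp_ora92_full.dmp ⇒
fromuser=hr touser=scott tables=job_history ⇒
log=d:\import\job_history.log
```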
Once you press Enter, the import will begin. The import utility will find the
JOB_HISTORY table almost immediately and import it from the HR schema
into the SCOTT schema.
After a few moments, however, the utility will display several error messages.
The JOB_HISTORY table was imported into Scott’s schema, but several
foreign key constraints could not be enabled because they reference tables
that Scott does not currently own. However, the final message in the output,
which states “Import terminated successfully with warnings,” indicates that
the table was imported even though some warnings were raised during the
process.
3. You will now confirm that the JOB_HISTORY table does indeed exist in
Scott’s schema.
Your previous query will execute again. This time, the output will show that
Scott now owns a table called JOB_HISTORY.
TASK 6B-3
Transporting a Tablespace
Objective: To transport a tablespace using the export and import utilities.
1. To determine the current status of the RMAN tablespace, issue the follow-
ing query:
SELECT tablespace_name, status
FROM dba_tablespaces
WHERE tablespace_name='RMAN';
2. Before transporting a set of tablespaces, you must ensure that the tablespace
set is self-contained. This is done with the TRANSPORT_SET_CHECK pro-
cedure of the DBMS_TTS package.
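The check can be run from SQL*Plus as follows; the second argument asks the procedure to also verify constraint dependencies, and any violations found are reported in the TRANSPORT_SET_VIOLATIONS view.

```sql
-- Check whether the RMAN tablespace is self-contained.
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('RMAN', TRUE);

-- Any dependency violations found by the check appear here.
SELECT * FROM transport_set_violations;
```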
Oracle will display the message “no rows selected.” This indicates that there
are no dependencies between the objects in the RMAN tablespace and
objects in any other tablespace.
At the prompt, type ALTER TABLESPACE rman READ ONLY; and press
Enter.
The output will show that the RMAN tablespace is indeed in read-only
mode. The tablespace is now ready for transport.
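A transportable tablespace export must be performed by a user connected with SYSDBA privileges; the command might look like this sketch (the connect string and file path are assumptions, while the export file name comes from this task):

```
exp userid="sys/oracle as sysdba" transport_tablespace=y ⇒
tablespaces=rman file=d:\export\tts_ora92_rman.dmp
```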
Once you press Enter, the export utility will begin to extract the metadata
information for the RMAN tablespace and its contents. However, no data
from the tables in the RMAN tablespace will be exported.
Once the export is complete, the tts_ora92_rman.dmp file contains all the
information necessary to import the RMAN tablespace into another database.
The export file and all datafiles for the tablespace can be copied to another
server, and the tablespace can be imported.
6. Because your system has only one database, you will assume that the
datafile and export file have both been copied to another system, and you are
about to import the tablespace into another database. However, before
importing the tablespace, you must first drop the existing RMAN tablespace
from the database.
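The existing tablespace might be dropped with a statement like the following sketch. The datafile itself is deliberately left on disk, since it will be reused during the import.

```sql
-- INCLUDING CONTENTS drops the objects in the tablespace; without
-- an AND DATAFILES clause, the datafile remains on disk for reuse.
DROP TABLESPACE rman INCLUDING CONTENTS;
```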
Leaving the command prompt open, switch back to the SQL*Plus window.
To confirm that the RMAN tablespace has been dropped, issue the following
query again:
SELECT tablespace_name, status
FROM dba_tablespaces
WHERE tablespace_name='RMAN';
Oracle will display the message “no rows selected.” The RMAN tablespace
no longer exists in the database.
7. You will now use the export file you just created to import the RMAN
tablespace into the database.
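The import side of the transport might be invoked as shown below; the datafiles argument must point to the copied datafile, and the connect string and paths shown are assumptions.

```
imp userid="sys/oracle as sysdba" transport_tablespace=y ⇒
file=d:\export\tts_ora92_rman.dmp ⇒
datafiles=d:\oracle\oradata\ora92\rman01.dbf
```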
Once you press Enter, the import process will begin. The output may look as
if individual tables were being imported; however, only the metadata for
those tables is actually imported.
Once the import is done, the last line of the output will state “Import termi-
nated successfully without warnings.” Close the command prompt.
8. You will now confirm that the RMAN tablespace has been imported as
expected.
The output will show that the RMAN tablespace has been imported and is
currently still set to read-only.
10. You have successfully transported a tablespace from one database to another.
You are the DBA for a database that currently supports an application that
has been in development for several weeks, and the application is scheduled
to go live tomorrow. However, a developer calls you to say that he has acci-
dentally deleted all 50,000 rows from the SH.CUSTOMERS table, and
committed the transaction. He states that the contents of the table are abso-
lutely critical for the application and must be recovered at all costs.
Your task is to recover the contents of SH.CUSTOMERS table. You may use
any skills and techniques you have learned throughout this course, as well as
any resources available on your system. The recovery of the
SH.CUSTOMERS table contents is of the highest priority; the content of any
other user tables in the database is irrelevant. Your first steps should include
making a list of possible recovery methods, and identifying which method is
best for this scenario.
While attempting to recover the table, you may encounter other issues with
the system. These issues may involve any aspect of the system, including the
operating system, disk media, Oracle installation, datafiles, and/or client and
server configurations. If you do encounter any other issues, you should
research, identify, and resolve those issues as quickly as possible so you can
move towards your immediate goal of recovering the table.
Summary
In this lesson, you learned how to move large volumes of data quickly by
using Oracle-provided utilities. You learned how to use SQL*Loader to bulk
load data from flat ASCII files into the database. You also learned how to
perform direct-load inserts from one table to another, and you learned how
to create external tables to access data outside of the database through a
table-like structure inside the database. You learned to use the export utility
to export data from the database into a binary file, and then import it back
in using the import utility. Finally, you learned how to move entire
tablespaces by using Oracle’s transportable tablespace feature.
When loading data with SQL*Loader, which control file option can be
used to load data into an empty table? Choose all that apply.
✓ a. INSERT
✓ b. APPEND
✓ c. REPLACE
✓ d. TRUNCATE
If the data to be loaded resides in the same file as the control file syntax,
how should the INFILE clause be specified?
a. INFILE=true
✓ b. INFILE *
c. INFILE
d. It should be omitted
Your SQL*Loader control file contains the BADFILE clause, and the
command you are using at the command prompt contains the BAD
argument. Which one will be used to determine the name and location
of the bad file?
✓ a. The command line
b. The control file
c. Both will be ignored
d. An error will be returned
LESSON 6
Topic 6B
The final_setup file launches the go.exe executable. This executable makes changes to the state of the system
which will cause the students to encounter several issues while attempting to recover the SH.CUSTOMERS table.
The go.exe executable makes the following changes:
• All rows in the SH.CUSTOMERS table are deleted, and commit is issued.
• The instance is aborted with the SHUTDOWN ABORT command.
• All copies of the current control file are deleted.
• The EXAMPLE01.DBF datafile is deleted.
• The primary listener is stopped.
In order to bring the database to a state where the SH.CUSTOMERS table can be recovered, the students must:
1. Restart the listener.
2. Manually restore all copies of the control file from the control file image copy created in Task 5B-2, step 15.
3. Use RMAN to restore the EXAMPLE01.DBF datafile from the backup set created in Task 5B-2, step 14. This
must be done while RMAN is connected to the control file as the repository.
Once the students have reached this point, the best course of action is to perform a tablespace point-in-time
recovery of the database to the SCN just prior to the DELETE statement that was issued to delete all the rows
from the SH.CUSTOMERS table. That would ensure that all data, including the most recent transactions, has
been recovered with the exception of the DELETE statement. In the current environment, the furthest the students
are expected to get is to restore the database to the last log sequence number prior to the current redo log. The
students would get extra kudos if they mention using the LogMiner utility to extract the last transactions from
the current redo log, although they are not expected to perform this task.
After restoring the datafiles, if the students perform an incomplete recovery using all redo logs, and open the
database with the RESETLOGS option, they will notice that the SH.CUSTOMERS table is still empty. This is
because the redo information related to the DELETE statement is included in the current redo log file.
If some students seem to have difficulty performing a tablespace point-in-time recovery, it is acceptable to apply
all redo information, including the current redo log, open the database with the RESETLOGS option, then import
the SH.CUSTOMERS table from either the exp_ora92_full.dmp or exp_ora92_wildcard.dmp export files created in
Task 6B-2. Although this will not guarantee that the latest transactions are included in the restored table, the
table can be restored to its last known good state, and any outstanding transactions can either be manually
applied, or extracted from the redo and archive logs using LogMiner and then applied. To reiterate, it is not
expected that the students use LogMiner to extract and apply redo information, but if it is at least mentioned,
this is considered a bonus.
Glossary 367
GLOSSARY
direct load insert
A method of loading data from one table to another using a direct path load.

direct path load
A method of loading data into Oracle that writes directly to the datablocks in the datafiles, bypassing the SQL processing engine.

hot backup
A backup of the database, either full or partial, performed while the database remains up and running.

IIOP
(Internet Inter-ORB Protocol) A presentation protocol designed to implement CORBA technologies across the Internet.
transportable tablespace
An Oracle feature that allows you to copy
an entire tablespace from one database to
another by simply exporting the tablespace
metadata from the data dictionary.
trial recovery
A test recovery of the database that is used
to determine whether or not a real recovery
will be successful.
UGA
(User Global Area) The memory area at the
OS level on the server that exists only in a
shared server configuration.
undo segments
Database segments that temporarily hold
the original versions of changed data from
a transaction in case the transaction needs
to be rolled back and the original version
replaced.
B
backup
  cold, 114-120
  cold, RMAN, 236-243
  control file, 148-153
  hot, 127-135
  hot, RMAN, 243-245
  read-only tablespaces, 153-154
  resolving failed hot backup, 154-158
  RMAN, 236-243
  user-managed, 142-147
backup sets
  See: RMAN
bequeath connection
  See: user connection process
block corruption, 168-171
  Also see: block media recovery
Block Media Recovery
  See: BMR
block media recovery (BMR), 295-296
  Also see: RMAN
BMR, 295-296

C
centralized naming
  See: names resolution methods
checkpoint (CKPT) process, 110
client-server architecture, 3-6
cold backup

D
database writer (DBWR) process, 110-111
DES encryption
  See: Advanced Security Option
direct load inserts, 329-333
dispatcher process, 76-77, 81-82

E
export
  types, 342-344
external naming
  See: names resolution methods
external tables, 333-341

F
failures
  instance failure, 98
  media failure, 98
  session failure, 97
  statement, 96
  transaction failure, 96-97

G
GIOP, 87-90

H
heterogeneous connectivity, 22-23
Index 371
INDEX
host naming
  See: names resolution methods
hot backup
  definition, 101-103
HTTP/FTP presentations
  See: layered architecture

I
IIOP, 87-90
image copies
  See: RMAN
import
  types, 348-352
incomplete recovery
  See: recovery
incremental backups
  See: RMAN
inserts
  direct load, 329-333
instance failure
  See: failures
instance recovery
  See: recovery

J
Java client communication stack
  See: layered architecture
Java thin client, 15-16
JDBC, 15-16

L
layered architecture
  Java client communication stack, 15-16
  OCI layer, 13-15
  OPI layer, 13-15
  Oracle communication stack, 13-15
  Oracle communications stack, 12-18
  Web client communication stack, 16-17
Listener Control utility (lsnrctl), 35-41
listener process, 11
  configuring for Web clients, 87-90
  Also see: layered architecture
  creating, 25-27
  creating manually, 42-46
  Also see: Listener Control utility (lsnrctl)
local naming
  See: names resolution methods
log writer (LGWR) process, 109
logging
  Oracle Net, 66-68
lsnrctl
  See: Listener Control utility (lsnrctl)

M
media failure
  See: failures
media recovery
  See: recovery
MTBF, 100-105
MTS
  See: Oracle Shared Server
MTTR, 100-105
multi-protocol connectivity
  See: Connection Manager
Multi-threaded Server
  See: Oracle Shared Server

N
N-tier architecture, 3-6
names resolution methods
  centralized naming, 46-49
  external naming, 46-49
  host naming, 46-49
  local naming, 46-49, 55-64
net service name, 9, 10
networking
  client-side configuration, 46-49
  server-side configuration, 25-35
  troubleshooting, 64-74
  Also see: tnsping utility
  Web client configuration, 87-90
  Also see: layered architecture
T
tablespace point-in-time recovery
  RMAN, 308-314
  user-managed, 200-201
tnsping utility, 50
tracing
  Oracle Net, 66-68
transaction failure
  See: failures
transparent gateway, 22-23
transportable tablespaces, 353-361
  Also see: export
  Also see: import
trial recovery, 168-171
  Also see: block media recovery
TSPITR
  See: tablespace point-in-time recovery
two-task connection
  See: user connection process

U
undo segments, 106-107
user connection process, 7-12
  bequeath connection, 7-12
  Oracle Shared Server, 76-77

W
Web client communication stack
  See: layered architecture
Web client configuration, 87-90