
Oracle DBA for SAP Basis


Oracle Architecture


Oracle Server:
Oracle Database Server is made of two major components.

Oracle Server = Oracle Instance (RAM) + Oracle Database (Physical disk)

Oracle Instance

o Created in the RAM of the server at database startup.
o Memory size and other related parameters are defined in the parameter file.
o Used to access the database (the physical files).
o Always connected to exactly one database.
o Made up of memory structures and background processes.

When a user starts a tool such as Oracle Forms, another application, or SQL*Plus to connect to the Oracle database, a user process starts on the user's machine. It requests a connection to the database, so a server process starts on the server hosting the database. The user process communicates with the Oracle instance via the server process, and the server process communicates with the database. A user can connect to the Oracle server in three different ways:

1. 1-tier connection: the user logs on to the same machine on which the Oracle database resides.
2. 2-tier connection: the user logs on to a separate client machine and connects to the database over the network. This is also called a client-server connection.
3. 3-tier (n-tier) connection: the user logs on to a separate client machine and connects to a middle-tier application server, which in turn connects to the Oracle database.

Session: a session is a single connection between a user and the Oracle server. It is created when the user is authenticated by the Oracle server.

Oracle Database

It is the physical storage on disk on the server: the physical files that belong to the database. These files are of three main types:

o Data files - contain the actual data.
o Online redo log files - record changes to the data.
o Control files - contain the information needed to maintain and operate the database.

The Oracle server also has other files that are not part of the database:

o Password file - authenticates users allowed to start up and shut down the database.
o Parameter file - defines the characteristics of an instance, such as the sizes of the different memory structures of the SGA.
o Archive log files - offline copies of the online redo log files.

Oracle Instance Details


Oracle Instance = Memory Structure + Background processes.

Memory Structure
It is made of two memory areas:

System Global Area (SGA): created at instance startup.
Program Global Area (PGA): created at the startup of a server process.

System Global Area (SGA): the System Global Area is made up of the following memory areas:

o Database Buffer Cache
o Shared Pool
o Redo Log Buffer
o Large Pool (optional)
o Java Pool (optional)
o Other miscellaneous areas, such as memory for locks, latches, and other process-related structures.

The System Global Area (SGA) is allocated in the virtual memory of the server on which the Oracle database resides, at the startup of the Oracle instance. It is sized by the initialization parameter SGA_MAX_SIZE. The function and size of the different areas of the SGA are controlled by the INIT.ORA (initialization parameter) file. The SGA contains data and control information for the Oracle server, and Oracle processes share this information. From Oracle 9i onwards the SGA is dynamic, meaning the sizes of the individual areas of the SGA can be changed without shutting down the Oracle instance. The total of all the areas of the SGA cannot exceed SGA_MAX_SIZE. The various areas of the SGA are allocated and de-allocated in units called granules. A granule is a multiple of contiguous Oracle blocks, and its size depends on SGA_MAX_SIZE:

o 4 MB if the SGA is smaller than 128 MB
o 16 MB otherwise

A minimum of three granules is allocated at instance startup:

1. One for the fixed SGA, which includes the redo buffers.
2. One for the Database Buffer Cache.
3. One for the Shared Pool.
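In recent releases (10g and later - an assumption about the version in use), the granule size the instance actually chose can be read from the V$SGAINFO view:

```sql
-- Shows the granule size picked at instance startup
-- (V$SGAINFO is available from Oracle 10g onwards)
SELECT name, bytes
FROM   v$sgainfo
WHERE  name = 'Granule Size';
```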

The size of the SGA is the total of the following initialization parameters and their related memory areas:

Total maximum SGA size (SGA_MAX_SIZE) =
DB_CACHE_SIZE - the size of the database buffer cache in bytes, used to hold data blocks in the cache. Default: 52 MB on Windows, 48 MB on Unix.
+ LOG_BUFFER - the size of the redo log buffer, used to hold changed data blocks.
+ SHARED_POOL_SIZE - the size in bytes of the area used to store SQL, PL/SQL, and data dictionary information. Default: 16 MB, or 64 MB on 64-bit systems.
+ LARGE_POOL_SIZE - the size in bytes of the area normally used for I/O-related processing and the shared server environment. Default: zero (in a normal configuration).
+ JAVA_POOL_SIZE - the size in bytes of the area used for Java-based modules. Default: 24 MB.

The memory allocation can be checked with:

SQL> show sga

Total System Global Area  621879952 bytes
Fixed Size                   455312 bytes
Variable Size             352321536 bytes
Database Buffers          268435456 bytes
Redo Buffers                 667648 bytes

Total System Global Area = Fixed Size (the area used by Oracle background processes and instance management) + Variable Size (the total of SHARED_POOL_SIZE + LARGE_POOL_SIZE + JAVA_POOL_SIZE) + Database Buffers (DB_CACHE_SIZE) + Redo Buffers (LOG_BUFFER).
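Because the SGA is dynamic from 9i onwards, the individual areas can be resized online, as long as the total stays within SGA_MAX_SIZE. A sketch, with illustrative sizes:

```sql
-- Resize two SGA components without restarting the instance;
-- the new totals must still fit inside SGA_MAX_SIZE
ALTER SYSTEM SET db_cache_size = 256M;
ALTER SYSTEM SET large_pool_size = 32M;
```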
Shared Pool

The Shared Pool is a very important area of the SGA. It is sized by the initialization parameter SHARED_POOL_SIZE. The size is dynamic and can be changed with the following command:

SQL> alter system set shared_pool_size = 156m;

It is made up of two important performance-related memory structures:

1. Library Cache - stores recently executed SQL statements.
2. Data Dictionary Cache - stores the most recently used data definitions.

The sizes of the individual areas cannot be controlled by the DBA; Oracle sizes the Library Cache and Data Dictionary Cache using an internal algorithm.

Library Cache

The Library Cache is a part of the Shared Pool. It is sized by the SHARED_POOL_SIZE initialization parameter and cannot be sized separately.

o Sized by SHARED_POOL_SIZE.
o Stores information for the most recently used SQL and PL/SQL statements.
o Allows sharing of commonly used SQL and PL/SQL.
o Space is managed by a Least Recently Used (LRU) algorithm.
o Consists of two main areas: the Shared SQL Area and the Shared PL/SQL Area.

Memory is allocated when a statement is parsed. If the Library Cache is too small, statements are continually reloaded into it, which reduces performance. When a new SQL statement arrives and free space is needed, the least recently used statement is aged out and the new statement gets the memory.

Shared SQL Area: this area stores the execution plan and parse tree for a statement and shares them with other sessions. If the same statement is run a second time, it takes advantage of the parse information and execution plan already in the Library Cache, which avoids re-parsing and speeds up execution. For a SQL statement to be shared, the schema, statement text, and bind variables must be the same.

Shared PL/SQL Area: this area stores the most recently used PL/SQL statements and parsed and compiled functions, packages, and triggers.
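To illustrate the sharing rule: the two literal statements below parse as two different cursors, while the bind-variable form is shared across executions. The EMP table and the :id bind are illustrative names, not objects from this document:

```sql
-- Not shared: the statement text differs, so each is parsed separately
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;

-- Shared: identical text every time; only the bind value changes
SELECT ename FROM emp WHERE empno = :id;
```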
Data Dictionary Cache

The Data Dictionary Cache is also a part of the Shared Pool. It is sized by the SHARED_POOL_SIZE initialization parameter and cannot be sized separately.

o Sized by SHARED_POOL_SIZE.
o Stores the most recently used definitions in the database: information about data files, tables, indexes, columns, users, privileges, and so on, from the data dictionary.

During parsing, the server process looks up this information. By caching it, the next lookup is served from the cache, which makes execution faster. If the Data Dictionary Cache is too small, the server process has to repeatedly query the data dictionary for the same information; these repeated queries are called recursive calls, and they slow down performance.
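As a rough health check, the cumulative recursive-call count can be compared against user calls in v$sysstat. A sketch; the statistic names are standard, but what counts as "too high" is a judgment call for your workload:

```sql
-- Cumulative counts since instance startup; a high ratio of
-- recursive calls to user calls can point at an undersized
-- dictionary cache (among other causes)
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('recursive calls', 'user calls');
```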

Tablespace Administration
A database is divided into logical storage units called tablespaces, which group together related logical structures (such as tables, views, and other database objects). For example, all application objects can be grouped into a single tablespace to simplify maintenance operations. A tablespace consists of one or more physical datafiles. Database objects assigned to a tablespace are stored in the physical datafiles of that tablespace. When you create an Oracle database, some tablespaces already exist, such as SYSTEM and USERS. Tablespaces provide a means to physically locate data on storage. When you define the datafiles that make up a tablespace, you specify a storage location for these files. For example, you might specify a datafile location for a certain tablespace as a designated host directory (implying a certain disk volume) or designated Automatic Storage Management disk group. Any schema objects assigned to that tablespace are then located in the specified storage location. Tablespaces also provide a unit of backup and recovery: the backup and recovery features of Oracle Database enable you to back up or recover at the tablespace level.

The default tablespaces:

EXAMPLE - Contains the sample schemas that are included with Oracle Database. The sample schemas provide a common platform for examples; Oracle documentation and educational materials contain examples based on them.

SYSTEM - Automatically created at database creation. Oracle Database uses it to manage the database. It contains the data dictionary, which is the central set of tables and views used as a read-only reference for a particular database, as well as various tables and views holding administrative information about the database. These are all contained in the SYS schema and can be accessed only by the SYS user or other administrative users with the required privilege.

SYSAUX - An auxiliary tablespace to the SYSTEM tablespace. Some components and products that used the SYSTEM tablespace or their own tablespaces in releases prior to Oracle Database 10g now use the SYSAUX tablespace. Using SYSAUX reduces the load on the SYSTEM tablespace and reduces maintenance because there are fewer tablespaces to monitor and maintain. Every Oracle Database 10g or later release must have a SYSAUX tablespace. Components that use SYSAUX as their default tablespace during installation include Automatic Workload Repository, Oracle Streams, Oracle Text, and the Database Control Repository.

TEMP - Stores temporary data generated when processing SQL statements; for example, it would be used for query sorting. Every database should have a temporary tablespace that is assigned to users as their temporary tablespace. In the preconfigured database, TEMP is the default temporary tablespace: if no temporary tablespace is specified when a user account is created, Oracle Database assigns this tablespace to the user.

UNDOTBS1 - The undo tablespace used by the database to store undo information. Every database must have an undo tablespace.

USERS - Used to store permanent user objects and data. Just as every database should have a temporary tablespace, it should also have a tablespace for permanent user data that is assigned to users; otherwise, user objects are created in the SYSTEM tablespace, which is not good practice. In the preconfigured database, USERS is the default tablespace for all new users.

Even though you can create more than one undo tablespace, only one can be active. If you want to switch the undo tablespace used by the database instance, you can create a new one and instruct the database to use it instead. The undo tablespace no longer in use can then be dropped.

Temporary tablespaces are used for storing temporary data, such as that created when SQL statements perform sort operations. An Oracle database gets a temporary tablespace when the database is created; you would create another temporary tablespace only if you were creating a temporary tablespace group. Under typical circumstances, you do not need additional temporary tablespaces, though you might configure them for an extremely large database. The physical files that make up a temporary tablespace are called tempfiles, as opposed to datafiles. The TEMP tablespace is typically used as the default temporary tablespace for users who are not explicitly assigned one.
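The undo-tablespace switch described above can be sketched as follows; the file path and sizes are illustrative:

```sql
-- 1. Create the replacement undo tablespace
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u02/oradata/SID/undotbs02.dbf' SIZE 500M;

-- 2. Tell the instance to use it from now on
ALTER SYSTEM SET undo_tablespace = undotbs2;

-- 3. Drop the old one once no active transactions still need it
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
```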

Tablespace Status

You can set the tablespace status as follows:

Read Write - Users can read and write to the tablespace after it is created. This is the default.

Read Only - If the tablespace is created Read Only, it cannot be written to until its status is changed to Read Write. It is unlikely that you would create a Read Only tablespace, but you might change a tablespace to that status after you have written data to it that you do not want modified.

Offline - If the tablespace is created Offline, no users can access it. It is unlikely that you would create an Offline tablespace, but you might later change its status to Offline to perform maintenance on its datafiles.

Autoextend Tablespace

You can set a tablespace to automatically extend itself by a specified amount when it reaches its size limit. If you do not enable autoextend, you are alerted when the tablespace reaches its critical or warning threshold size. The critical and warning threshold parameters have default values that you can change at any time. These parameters also cause alerts to be generated for autoextending tablespaces that are approaching their specified size limit. You can respond to size alerts by manually increasing the tablespace size, either by increasing the size of one or more of the tablespace's datafiles or by adding another datafile to the tablespace.
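Autoextend is set per datafile. A sketch of both responses described above, with illustrative paths and sizes:

```sql
-- Enable autoextend on an existing datafile, growing 10 MB
-- at a time up to a 2 GB cap
ALTER DATABASE DATAFILE '/u01/oradata/SID/users01.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 2G;

-- Alternatively, respond to a size alert by adding another datafile
ALTER TABLESPACE users
  ADD DATAFILE '/u02/oradata/SID/users02.dbf' SIZE 100M;
```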

Database Tuning

Here are some Oracle database tuning topics that I have picked out from some of my experiences and other sources. They are in no particular order - just whatever came across a listserv or prompted me from a book or manual. Tuning is an ongoing process, but don't let it dominate your life! You might want to check out your configuration every month or so, or when there has been some large structural change made to it, or if your users are noticing a slowdown, but it's usually not something that demands your constant attention. I've tried to focus on statistics that are immediately available in tables, rather than having to run statistics-gathering routines such as utlbstat/utlestat, since the reports from those can contain a majority of information that you will probably never use and will have to sift through to find what you are really looking for. Note that if you have just started up your Oracle database instance, this information will probably be irrelevant; you should wait several hours after startup to get a representative sample of your users' interactions with the database. Also, be aware that some statistics may be expressed as a "hit ratio", while others may be expressed as a "miss ratio" - they are different, and you can convert one to the other by subtracting it from 1. All of this information is generic to Oracle. Check back here again for additions and updates!

Tuning Topics
o Redo Log Buffer Latches
o Database Buffer Cache Size
o Shared Pool Size
o Tuning Scripts

Redo Log Buffer Latches

When a transaction is ready to write its changes to the redo log, it first has to grab the Redo Allocation Latch, of which there is only one, to keep others from writing to the log at the same time. If someone else has that latch, it has to wait for the latch, resulting in a "miss". Once it grabs that latch, if the change is larger than log_small_entry_max_size bytes and the server has multiple CPUs, it then tries to grab a Redo Copy Latch, of which there can be up to 2 times the number of CPUs, which would allow it to release the Redo Allocation Latch for someone else to use. If none is available, resulting in an "immediate miss", it will not wait for a Redo Copy Latch (thus, the "immediate") but instead hangs on to the Redo Allocation Latch until the change is written. Oracle keeps statistics for these latches in v$latch, including the number of gets and misses for the Redo Allocation Latch and the number of immediate gets and immediate misses for the Redo Copy Latches; these are cumulative values since instance startup. If you've got a 100% hit ratio for either of those latch types, that's a good thing. It just means that all of your transactions were able to grab and use the latch without retrying.

It's when you get below a 99% hit ratio that you need to start looking out. The following SQL figures the current hit ratios for those latches:

column latch_name format a20
select name latch_name, gets, misses,
       round(decode(gets-misses,0,1,gets-misses)/
             decode(gets,0,1,gets),3) hit_ratio
from v$latch where name = 'redo allocation';

column latch_name format a20
select name latch_name, immediate_gets, immediate_misses,
       round(decode(immediate_gets-immediate_misses,0,1,
                    immediate_gets-immediate_misses)/
             decode(immediate_gets,0,1,immediate_gets),3) hit_ratio
from v$latch where name = 'redo copy';

If your Redo Allocation Latch hit ratio consistently falls below 99%, and you have a multi-CPU machine, you can lower the value of log_small_entry_max_size (see below) in your init.ora file (ours is currently 800 bytes, but maybe 100 or so bytes may be better - you'll have to try out different values over time), which says that any change smaller than that will hang onto the Redo Allocation Latch until Oracle is finished writing that change. Anything larger than that grabs a Redo Copy Latch, if one is currently available, and releases the Redo Allocation Latch for another transaction to use. If your Redo Copy Latch hit ratio consistently falls below 99%, and you have a multi-CPU machine, you can raise the value of log_simultaneous_copies in your init.ora file, up to twice the number of CPUs, to provide more Redo Copy Latches (there is only one Redo Allocation Latch, so it is at a premium). Remember that you have to shut down your database instance and restart it to reread the new parameter values in the init.ora file ($ORACLE_HOME/dbs/initSID.ora).
The following SQL shows the current values for those associated parameters:

column name format a30
column value format a10
select name, value from v$parameter
where name in ('log_small_entry_max_size','log_simultaneous_copies','cpu_count');

Database Buffer Cache Size

The Database Buffer Cache is part of the Shared Global Area (SGA) in memory for a single database instance (SID) and holds the blocks of data and indexes that you and everyone else are currently using. It may even contain multiple copies of the same data block if, for example, more than one transaction is making changes to it but has not yet committed, or if you are looking at the original copy (select) and someone else is looking at their modified but uncommitted copy (insert, update, or delete). The

parameters db_block_buffers and db_block_size in your init.ora file determine the size of the buffer cache. db_block_size, in bytes, is set at database creation and cannot be changed (unless you recreate the database from scratch), so the only thing you can adjust is the number of blocks in db_block_buffers (one buffer holds one block). The Cache Hit Ratio shows how many blocks were already in memory (logical reads, which include "db block gets" for blocks you are using and "consistent gets" of original blocks from rollback segments that others are updating) versus how many blocks had to be read from disk ("physical reads"). Oracle recommends that this ratio be at least 80%, but I like at least 90% myself. The ratio can be obtained from values in v$sysstat, which are constantly being updated and show statistics since database startup (it is only accessible from a DBA user account). You will get a more representative sample if the database has been running several hours with normal user transactions taking place. The Cache Hit Ratio is determined as follows:

select (1-(pr.value/(dbg.value+cg.value)))*100
from v$sysstat pr, v$sysstat dbg, v$sysstat cg
where pr.name = 'physical reads'
  and dbg.name = 'db block gets'
  and cg.name = 'consistent gets';

If you have a low Cache Hit Ratio, you can test the effect of adding buffers by putting "db_block_lru_extended_statistics = 1000" in the init.ora file, doing a shutdown and startup of the database, and waiting a few hours to get a representative sample. Oracle determines how many Additional Cache Hits (ACH) would occur for each query and transaction for each of the 1000 buffer increments (or whatever other maximum value you might want to try out), and places them into the x$kcbrbh table, which is only accessible from user "sys".
To measure the new Cache Hit Ratio with, for example, 100 extra buffers, determine ACH as follows:

select sum(count) "ACH" from x$kcbrbh where indx < 100;

and plug that value into the Cache Hit Ratio formula as follows:

select (1-((pr.value-&ACH)/(dbg.value+cg.value)))*100
from v$sysstat pr, v$sysstat dbg, v$sysstat cg
where pr.name = 'physical reads'
  and dbg.name = 'db block gets'
  and cg.name = 'consistent gets';

If the ratio originally was lower than 80% and is now higher with ACH, you may want to increase db_block_buffers by that number of extra buffers, restarting your database to put the increase into effect. Be sure to try several values for the number of extra buffers to find an optimum for your workload. Also, remove db_block_lru_extended_statistics from your init.ora file before restarting your database to stop gathering statistics, which

tends to slow down transaction time. (Removing that clears the x$kcbrbh table.) Also, make sure that your server has enough memory to accommodate the increase! If you are running really tight on memory, and the Cache Hit Ratio is running well above 80%, you might want to check the effect of lowering the number of buffers, which would release Oracle memory that could then be used by other processes, but would also potentially slow down database transactions. To test this, put "db_block_lru_statistics = true" in your init.ora file and restart your database. This gathers statistics for Additional Cache Misses (ACM) that would occur for each query and transaction for each of the buffer decrements up to the current db_block_buffers value, placing them into the x$kcbcbh table, also only accessible from user "sys". To measure the new Cache Hit Ratio with, for example, 100 fewer buffers, determine ACM as follows:

select sum(count) "ACM" from x$kcbcbh
where indx >= (select max(indx)+1-100 from x$kcbcbh);

and plug that value into the Cache Hit Ratio formula as follows:

select (1-((pr.value+&ACM)/(dbg.value+cg.value)))*100
from v$sysstat pr, v$sysstat dbg, v$sysstat cg
where pr.name = 'physical reads'
  and dbg.name = 'db block gets'
  and cg.name = 'consistent gets';

If the ratio is still above 80%, you may want to decrease db_block_buffers by that number of fewer buffers, restarting your database to put the decrease into effect. Be sure to try several values for the number of fewer buffers to find an optimum for your workload. Also, remove db_block_lru_statistics from your init.ora file before restarting your database to stop gathering statistics, which tends to slow down transaction time. (Removing that clears the x$kcbcbh table.) I have three scripts which you can use to figure your instance's optimum number of db_block_buffers. The cache_hit_ratio.sql script computes the current ratio for the database buffer cache, and can be run from any DBA account.
The adding_buffers.sql script computes the resulting ratio for an increase in the buffer cache size of the given number of buffer blocks (figuring ACH itself). It must be run from user "sys", after a representative sampling time with db_block_lru_extended_statistics in place. The removing_buffers.sql script computes the resulting ratio for a decrease in the buffer cache size of the given number of buffer blocks (figuring ACM itself). It must be run from user "sys", after a representative sampling time with db_block_lru_statistics in place.

Shared Pool Size

The Shared Pool is also part of the Shared Global Area (SGA) in memory for a single database instance (SID) and holds the Library Cache with the most recently used SQL

statements and parse trees along with PL/SQL blocks, and the Data Dictionary Cache with definitions of tables, views, and other dictionary objects. Both of those sets of cached objects can be used by one or more users, and are aged out (Least Recently Used) as other objects need the space. (You can pin large, frequently used objects in the Shared Pool for performance and other reasons, but I won't go into that here.) There are several ratios that you can check after a representative sample time that may indicate that you need to enlarge the shared pool, which is set by the shared_pool_size parameter in your init.ora file and defaults to 3500000 (3.5 MB). One indicator is the Library Cache Get Hit Ratio, which shows how many cursors are being shared (SQL statements (gets) which were already found and parsed (gethits) in the shared pool, with no parsing or re-parsing needed), and is determined by:

select gethits, gets, gethitratio from v$librarycache
where namespace = 'SQL AREA';

If the gethitratio is less than 90%, you should consider increasing the shared pool size. Another indicator is the reloads per pin ratio, which shows how many parsed statements (pins) have been aged out (reloaded) of the shared pool for lack of space (ideally 0), and is determined by:

select reloads, pins, reloads/pins from v$librarycache
where namespace = 'SQL AREA';

If the reloads/pins ratio is more than 1%, you should consider increasing the shared pool size. A third indicator, which is not as important as the first two, is the dictionary object getmisses per get ratio, which shows how many cached dictionary object definitions in the dictionary cache are encountering too many misses (aged out?), and is determined by:

select sum(getmisses), sum(gets), sum(getmisses)/sum(gets) from v$rowcache;

If the getmisses/gets ratio is more than 15%, you should consider increasing the shared pool size.
If these ratios indicate that your shared pool is too small, you can estimate the size of the shared pool as follows. Set shared_pool_size to a very large number, maybe a fourth or more of your system's available memory, depending on how many other instances and processes you have running that are also using memory; then shut down and start up your database and let it run for a representative time (like all day, or while a large batch job that you want to accommodate is running); then figure the memory required for packages and views, the memory required for frequently used SQL statements, and the memory required for users' executed SQL statements, as shown below:

select sum(sharable_mem) "Packages/Views" from v$db_object_cache;

select sum(sharable_mem) "SQL Statements" from v$sqlarea where executions > 5;

select sum(250 * users_opening) "SQL Users" from v$sqlarea;

Then add the above three numbers and multiply the result by 2.5. Use this estimated size as a guideline for the value of shared_pool_size, changing that parameter to the estimated size or back to the original size and doing another shutdown/startup to put the value into effect. The shared_pool_size.sql script can be used to figure these values for you, using an example of the Select From Selects tip:

select sum(a.spspv) "Packages/Views",
       sum(a.spssql) "SQL Statements",
       sum(a.spsusr) "SQL Users",
       round((sum(a.spspv) + sum(a.spssql) + sum(a.spsusr)) * 2.5,-6)
         "Estimated shared_pool_size"
from (select sum(sharable_mem) spspv, 0 spssql, 0 spsusr
      from v$db_object_cache
      union all
      select 0, sum(sharable_mem), 0 from v$sqlarea where executions > 5
      union all
      select 0, 0, sum(250 * users_opening) from v$sqlarea) a;

Oracle Database File Management

Control Files


An Oracle database cannot be started without at least one control file. The control file contains data on system structures, log status, transaction numbers, and other important information about the database. The control file is generally less than one megabyte in size. It is wise to have at least two copies of your control file on different disks, three for OFA compliance. Oracle will maintain them as mirror images of each other. This ensures that the loss of a single control file will not knock your database out of the water. You cannot bring a control file back from a backup; it is a living file that corresponds to current database status. In both Oracle7 and Oracle8 there is a CREATE CONTROLFILE command that allows recovery from loss of a control file; however, you must have detailed knowledge of your database to use it properly. The section of the recovery chapter that deals with backup and recovery of control files explains in detail how to protect yourself from the loss of a control file. It is easier to maintain extra control file copies. In Oracle8 and Oracle8i the use of RMAN may drive control file sizes to tens of megabytes. The control files are also specified in the initSID.ora file. Relevant views on control files are V$CONTROLFILE and V$CONTROLFILE_RECORD_SECTION.
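To see where the current copies live, and how multiplexing is declared, something like the following; the paths are illustrative, and after changing the parameter the extra copy must be made at the OS level before restarting the instance:

```sql
-- List the current control file copies
SELECT name FROM v$controlfile;

-- In initSID.ora, multiplexing is declared like (illustrative paths):
--   control_files = (/u01/oradata/SID/control01.ctl,
--                    /u02/oradata/SID/control02.ctl,
--                    /u03/oradata/SID/control03.ctl)
```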

Redo Logs
As their name implies, redo logs are used to restore transactions after a system crash or other system failure. The redo logs store data about transactions that alter database information. According to Oracle, each database should have at least two groups of two logs each on separate physical non-RAID5 drives if no archive logging is taking place, and three or more groups with archive logging in effect. These are relatively active files, and if they are unavailable the database cannot function. They can be placed anywhere except in the same location as the archive logs. Archive logs are archived copies of filled redo logs and are used for point-in-time recovery from a major disk or system failure. Since they are backups of the redo logs, it would not be logical to place the redo logs and archives in the same physical location. The size of the redo logs determines how much data is lost in a disaster affecting the database. I have found three sets of multiplexed logs to be the absolute minimum to prevent checkpoint problems and other redo-related wait conditions; under archive logging, three groups are a requirement. Relevant views on redo log files are V$LOG, V$LOGFILE, and V$LOGHIST.
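Inspecting the existing groups and adding a multiplexed group look roughly like this; group number, paths, and size are illustrative:

```sql
-- Inspect the existing groups, their member counts, and status
SELECT group#, members, bytes, status FROM v$log;

-- Add a multiplexed group with members on two different disks
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/SID/redo04a.log',
   '/u02/oradata/SID/redo04b.log') SIZE 50M;
```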

Datafiles
Datafiles are the operating system files that hold the data within the database. The data is written to these files in an Oracle proprietary format that cannot be read by other programs. Tempfiles are a special class of datafiles that are associated only with temporary tablespaces. Datafiles can be broken down into the following components:

Segments and Extents - A segment contains a specific type of database object. For example, tables are stored in data segments, whereas indexes are stored in index segments. An extent is a contiguous set of data blocks within a segment. Oracle initially allocates an extent of a specified size for a segment, but if that extent fills, more extents can be allocated.

Data Blocks - Data blocks, also called database blocks, are the smallest unit of I/O to database storage. An extent consists of several contiguous data blocks. The database uses a default block size set at database creation. After the database has been created, it is not possible to change the default block size without re-creating the database. Nevertheless, it is possible to create a tablespace with a block size different from the default.

Relevant views on datafiles are V$DATAFILE, V$DATAFILE_COPY, and V$DATAFILE_HEADER.
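Creating a tablespace with a non-default block size, as mentioned above, also requires a matching buffer cache for that block size. A 9i-and-later sketch; path and sizes are illustrative:

```sql
-- A cache for 16 KB blocks must exist before a 16 KB tablespace can
-- be created in a database whose default block size is smaller
ALTER SYSTEM SET db_16k_cache_size = 16M;

CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/SID/ts_16k_01.dbf' SIZE 100M
  BLOCKSIZE 16K;
```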

Backup

1) General Backup and Recovery questions


Why and when should I backup my database?

Backup and recovery is one of the most important aspects of a DBA's job. If you lose your company's data, you could very well lose your job. Hardware and software can always be replaced, but your data may be irreplaceable! Normally one would schedule a hierarchy of daily, weekly and monthly backups; however, consult with your users before deciding on a backup schedule. Backup frequency normally depends on the following factors:

- Rate of data change/transaction rate
- Database availability/can you shut down for cold backups?
- Criticality of the data/value of the data to the company
- A read-only tablespace needs backing up just once, right after you make it read-only
- If you are running in ARCHIVELOG mode you can back up parts of a database over an extended cycle of days
- If archive logging is enabled, archived log files must be backed up in a timely manner to prevent database freezes
- Etc.

Carefully plan backup retention periods. Ensure enough backup media (tapes) are available and that old backups are expired in time to make media available for new backups. Off-site vaulting is also highly recommended. Frequently test your ability to recover and document all possible scenarios. Remember, it's the little things that will get you. Most failed recoveries are a result of organizational errors and miscommunication.

What strategies are available for backing-up an Oracle database?


The following methods are valid for backing-up an Oracle database:

Export/Import - Exports are "logical" database backups in that they extract logical definitions and data from the database to a file. See the Import/Export FAQ for more details.
Cold or Off-line Backups - Shut the database down and back up ALL data, log, and control files.
Hot or On-line Backups - If the database is available and in ARCHIVELOG mode, set the tablespaces into backup mode and back up their files. Also remember to back up the control files and archived redo log files.
RMAN Backups - While the database is off-line or on-line, use the "rman" utility to back up the database.
BRTOOLS Backups - Backups can be taken offline and online using SAP BRTOOLS.

It is advisable to use more than one of these methods to backup your database. For example, if you choose to do on-line database backups, also cover yourself by doing database exports. Also test ALL backup and recovery scenarios carefully. It is better to be safe than sorry. Regardless of your strategy, also remember to backup all required software libraries, parameter files, password files, etc. If your database is in ARCHIVELOG mode, you also need to backup archived log files.

What is the difference between online and offline backups?


A hot (or on-line) backup is a backup performed while the database is open and available for use (read and write activity). Except for Oracle exports, one can only do on-line backups when the database is in ARCHIVELOG mode. A cold (or off-line) backup is a backup performed while the database is off-line and unavailable to its users. Cold backups can be taken regardless of whether the database is in ARCHIVELOG or NOARCHIVELOG mode. It is easier to restore from off-line backups as no recovery (from archived logs) is required to make the database consistent. Nevertheless, on-line backups are less disruptive and don't require database downtime. Point-in-time recovery (regardless of whether you do on-line or off-line backups) is only available when the database is in ARCHIVELOG mode.
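A quick way to verify the current mode before planning hot backups (a sketch; run as a privileged user):

```sql
-- Returns ARCHIVELOG or NOARCHIVELOG
SELECT log_mode FROM v$database;
```

In SQL*Plus, the command ARCHIVE LOG LIST shows the same information together with the archive destination and current log sequence.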

What is the difference between restoring and recovering?


Restoring involves copying backup files from secondary storage (backup media) to disk. This can be done to replace damaged files or to copy/move a database to a new location. Recovery is the process of applying redo logs to the database to roll it forward. One can roll forward to a specific point in time (before the disaster occurred), or roll forward until the last transaction recorded in the log files.
SQL> connect SYS as SYSDBA
SQL> RECOVER DATABASE UNTIL TIME '2009-03-06:16:00:00' USING BACKUP CONTROLFILE;

RMAN> run {
  set until time "to_date('04-Apr-2009 00:00:00', 'DD-MON-YYYY HH24:MI:SS')";
  restore database;
  recover database;
}

My database is down and I cannot restore. What now?


This is probably not the appropriate time to be sarcastic, but recovery without backups is not supported. You know that you should have tested your recovery strategy, and that you should always back up a corrupted database before attempting to restore/recover it. Nevertheless, Oracle Consulting can sometimes extract data from an offline database using a utility called DUL (Disk UnLoad - Life is DUL without it!). This utility reads data in the data

files and unloads it into SQL*Loader or export dump files. Hopefully you'll then be able to load the data into a working database. Note that DUL does not care about rollback segments, corrupted blocks, etc, and can thus not guarantee that the data is not logically corrupt. It is intended as an absolute last resort and will most likely cost your company a lot of money! DUDE (Database Unloading by Data Extraction) is another non-Oracle utility that can be used to extract data from a dead database. More info about DUDE is available at http://www.ora600.nl/.

How does one backup a database using the export utility?


Oracle exports are "logical" database backups (not physical) as they extract data and logical definitions from the database into a file. Other backup strategies normally back up the physical data files. One of the advantages of exports is that one can selectively re-import tables; however, one cannot roll forward from a restored export. To completely restore a database from an export file one practically needs to recreate the entire database. Always do full system-level exports (FULL=YES). Full exports include more information about the database in the export file than user-level exports. For more information, see the Oracle export and import utilities documentation.
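A classic full export with the exp utility looks like the sketch below; the password and file names are placeholders, not values from this document:

```
exp system/<password> FULL=y FILE=full_db.dmp LOG=full_db.log
```

The resulting dump file can later be read back with the imp utility, optionally restricting the import to selected schemas or tables.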

How does one put a database into ARCHIVELOG mode?


The main reason for running in archivelog mode is that one can provide 24-hour availability and guarantee complete data recoverability. It is also necessary to enable ARCHIVELOG mode before one can start to use on-line database backups. Issue the following commands to put a database into ARCHIVELOG mode:
SQL> CONNECT sys AS SYSDBA
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ARCHIVE LOG START;
SQL> ALTER DATABASE OPEN;

Alternatively, add the above commands to your database's startup command script and bounce the database. The following parameters need to be set for databases in ARCHIVELOG mode:
log_archive_start = TRUE
log_archive_dest_1 = 'LOCATION=/arch_dir_name'
log_archive_dest_state_1 = ENABLE
log_archive_format = %d_%t_%s.arc

NOTE 1: Remember to take a baseline database backup right after enabling archivelog mode. Without it one would not be able to recover. Also, implement regular backups of the archived logs to prevent the archive log directory from filling up.

NOTE 2: ARCHIVELOG mode was introduced with Oracle 6, and is essential for database point-in-time recovery. Archiving can be used in combination with on-line and off-line database backups.

NOTE 3: You may want to set the following INIT.ORA parameters when enabling ARCHIVELOG mode: log_archive_start=TRUE, log_archive_dest=..., and log_archive_format=...

NOTE 4: You can change the archive log destination of a database on-line with the ARCHIVE LOG START TO 'directory'; statement. This statement is often used to switch archiving between a set of directories.

NOTE 5: When running Oracle Real Application Clusters (RAC), you need to shut down all nodes before changing the database to ARCHIVELOG mode. See the RAC FAQ for more details.

I've lost an archived/online REDO LOG file, can I get my DB back?


The following INIT.ORA/SPFILE parameter can be used if your current redo logs are corrupted or blown away. It may also be handy if you do database recovery and one of the archived log files is missing and cannot be restored. NOTE: Caution is advised when enabling this parameter as you might end up losing your entire database. Please contact Oracle Support before using it.
_allow_resetlogs_corruption = true

This should allow you to open the database. However, after using this parameter your database will be inconsistent (some committed transactions may be lost or partially applied). Steps:

Do a "SHUTDOWN NORMAL" of the database Set the above parameter Do a "STARTUP MOUNT" and "ALTER DATABASE OPEN RESETLOGS;" If the database asks for recovery, use an UNTIL CANCEL type recovery and apply all available archive and on-line redo logs, then issue CANCEL and reissue the "ALTER DATABASE OPEN RESETLOGS;" command. Wait a couple of minutes for Oracle to sort itself out Do a "SHUTDOWN NORMAL" Remove the above parameter! Do a database "STARTUP" and check your ALERT.LOG file for errors. Extract the data and rebuild the entire database

2) User managed backup and recovery


This section deals with user managed, or non-RMAN backups.

How does one do off-line database backups?


Shut down the database from SQL*Plus or Server Manager. Back up all files to secondary storage (e.g. tapes). Ensure that you back up all data files, all control files and all log files. When completed, restart your database. Run the following queries to get a list of all files that need to be backed up:
select name from sys.v_$datafile;
select member from sys.v_$logfile;
select name from sys.v_$controlfile;

Sometimes Oracle takes forever to shut down with the "immediate" option. As a workaround, shut down using these commands:
alter system checkpoint;
shutdown abort
startup restrict
shutdown immediate

Note that if your database is in ARCHIVELOG mode, one can still use archived log files to roll forward from an off-line backup. If you cannot take your database down for a cold (off-line) backup at a convenient time, switch your database into ARCHIVELOG mode and perform hot (on-line) backups.

How does one do on-line database backups?


Each tablespace that needs to be backed up must be switched into backup mode before copying the files out to secondary storage (tapes). Look at this simple example:
ALTER TABLESPACE xyz BEGIN BACKUP;
! cp xyzFile1 /backupDir/
ALTER TABLESPACE xyz END BACKUP;

It is better to back up tablespace by tablespace than to put all tablespaces in backup mode at once. Backing them up separately incurs less overhead. When done, remember to back up your control files. Look at this example:
ALTER SYSTEM SWITCH LOGFILE; -- Force log switch to update control file headers
ALTER DATABASE BACKUP CONTROLFILE TO '/backupDir/control.dbf';

NOTE: Do not run on-line backups during peak processing periods. Oracle will write complete database blocks instead of the normal deltas to the redo log files while in backup mode. This will lead to excessive archiving and can even cause database freezes.

My database was terminated while in BACKUP MODE, do I need to recover?

If a database was terminated while one of its tablespaces was in BACKUP MODE (ALTER TABLESPACE xyz BEGIN BACKUP;), it will tell you that media recovery is required when you try to restart the database. The DBA is then required to recover the database and apply all archived logs to the database. However, from Oracle 7.2, one can simply take the individual datafiles out of backup mode and restart the database.
ALTER DATABASE DATAFILE '/path/filename' END BACKUP;

One can select from V$BACKUP to see which datafiles are in backup mode. This normally saves a significant amount of database downtime. See script end_backup2.sql in the Scripts section of this site. From Oracle9i onwards, the following command can be used to take all of the datafiles out of hot backup mode:
ALTER DATABASE END BACKUP;

This command must be issued when the database is mounted, but not yet opened.
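To see which datafiles are still in backup mode, one can query V$BACKUP, for example (a sketch):

```sql
-- Files with status 'ACTIVE' are still in hot backup mode
SELECT d.name, b.status, b.time
  FROM v$backup b JOIN v$datafile d ON b.file# = d.file#
 WHERE b.status = 'ACTIVE';
```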

Does Oracle write to data files in begin/hot backup mode?


When a tablespace is in backup mode, Oracle will stop updating its file headers, but will continue to write to the data files. When in backup mode, Oracle will write complete changed blocks to the redo log files. Normally only deltas (change vectors) are logged to the redo logs. This is done to enable reconstruction of a block if only half of it was backed up (split blocks). Because of this, one should notice increased log activity and archiving during on-line backups. To solve this problem, simply switch to RMAN backups.

3) RMAN backup and recovery


This section deals with RMAN backups:

What is RMAN and how does one use it?


Recovery Manager (or RMAN) is an Oracle-provided utility for backing up, restoring and recovering Oracle databases. RMAN ships with the database server and doesn't require a separate installation. The RMAN executable is located in your ORACLE_HOME/bin directory. In fact, RMAN is just a Pro*C application that translates commands into a PL/SQL interface. The PL/SQL calls are statically linked into the Oracle kernel and do not require the database to be opened (they are mapped from the ?/rdbms/admin/recover.bsq file).

RMAN can do off-line and on-line database backups. It cannot, however, write directly to tape, but various 3rd-party tools (like Veritas, OmniBack, etc.) can integrate with RMAN to handle tape library management. RMAN can be operated from Oracle Enterprise Manager or from the command line. Here are the command line arguments:
Argument   Value          Description
-----------------------------------------------------------------------------
target     quoted-string  connect-string for target database
catalog    quoted-string  connect-string for recovery catalog
nocatalog  none           if specified, then no recovery catalog
cmdfile    quoted-string  name of input command file
log        quoted-string  name of output message log file
trace      quoted-string  name of output debugging message log file
append     none           if specified, log is opened in append mode
debug      optional-args  activate debugging
msgno      none           show RMAN-nnnn prefix for all messages
send       quoted-string  send a command to the media manager
pipe       string         building block for pipe names
timeout    integer        number of seconds to wait for pipe input
-----------------------------------------------------------------------------

Here is an example:
[oracle@localhost oracle]$ rman
Recovery Manager: Release 10.1.0.2.0 - Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
RMAN> connect target;
connected to target database: ORCL (DBID=1058957020)
RMAN> backup database;
...

How does one backup and restore a database using RMAN?


The biggest advantage of RMAN is that it only backs up used space in the database. RMAN doesn't put tablespaces in backup mode, saving on redo generation overhead. RMAN will re-read database blocks until it gets a consistent image of them. Look at this simple backup example:
rman target sys/*** nocatalog
run {
  allocate channel t1 type disk;
  backup
    format '/app/oracle/backup/%d_t%t_s%s_p%p'
    (database);
  release channel t1;
}

Example RMAN restore:


rman target sys/*** nocatalog
run {
  allocate channel t1 type disk;
  # set until time 'Aug 07 2000 :51';
  restore tablespace users;
  recover tablespace users;
  release channel t1;
}

The examples above are extremely simplistic and only useful for illustrating basic concepts. By default Oracle uses the database control files to store information about backups. Normally one would rather set up an RMAN catalog database to store the RMAN metadata in. Read the Oracle Backup and Recovery Guide before implementing any RMAN backups.

Note: RMAN cannot write image copies directly to tape. One needs to use a third-party media manager that integrates with RMAN to backup directly to tape. Alternatively one can backup to disk and then manually copy the backups to tape.

How does one backup and restore archived log files?


One can back up archived log files using RMAN or any operating system backup utility. Remember to delete files after backing them up to prevent the archive log directory from filling up. If the archive log directory becomes full, your database will hang! Look at this simple RMAN backup script:
RMAN> run {
2> allocate channel dev1 type disk;
3> backup
4> format '/app/oracle/archback/log_%t_%sp%p'
5> (archivelog all delete input);
6> release channel dev1;
7> }

The "delete input" clause will delete the archived logs as they as backed-up. List all archivelog backups for the past 24 hours:
RMAN> LIST BACKUP OF ARCHIVELOG FROM TIME 'sysdate-1';

Here is a restore example:


RMAN> run {
2> allocate channel dev1 type disk;
3> restore (archivelog low logseq 78311 high logseq 78340 thread 1 all);
4> release channel dev1;
5> }

How does one create a RMAN recovery catalog?


Start by creating a database schema (usually called rman). Assign an appropriate tablespace to it and grant it the recovery_catalog_owner role. Look at this example:
sqlplus sys
SQL> create user rman identified by rman;
SQL> alter user rman default tablespace tools temporary tablespace temp;
SQL> alter user rman quota unlimited on tools;
SQL> grant connect, resource, recovery_catalog_owner to rman;
SQL> exit;

Next, log in to rman and create the catalog schema. Prior to Oracle 8i this was done by running the catrman.sql script.
rman catalog rman/rman
RMAN> create catalog tablespace tools;
RMAN> exit;

You can now continue by registering your databases in the catalog. Look at this example:
rman catalog rman/rman target backdba/backdba
RMAN> register database;

One can also use the "upgrade catalog;" command to upgrade to a new RMAN release, or the "drop catalog;" command to remove an RMAN catalog. These commands need to be entered twice to confirm the operation.
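As a sketch (assuming the rman/rman schema created above), upgrading a catalog looks like this; RMAN asks for the command a second time to confirm the operation:

```
rman catalog rman/rman
RMAN> upgrade catalog;
RMAN> upgrade catalog;
RMAN> exit;
```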

How does one integrate RMAN with third-party Media Managers?


The following Media Management Software Vendors have integrated their media management software with RMAN (Oracle Recovery Manager):

Veritas NetBackup - http://www.veritas.com/
EMC Data Manager (EDM) - http://www.emc.com/
HP OmniBack/DataProtector - http://www.hp.com/
IBM Tivoli Storage Manager (formerly ADSM) - http://www.tivoli.com/storage/
EMC NetWorker - http://www.emc.com/
BrightStor ARCserve Backup - http://www.ca.com/us/data-loss-prevention.aspx
Sterling Software's SAMS:Alexandria (formerly from Spectralogic) - http://www.sterling.com/sams/
SUN Solstice Backup - http://www.sun.com/software/whitepapers/backup-n-storage/
CommVault Galaxy - http://www.commvault.com/
etc...

The above Media Management Vendors will provide first-line technical support (and installation guides) for their respective products. A complete list of supported Media Management Vendors can be found at: http://www.oracle.com/technology/deploy/availability/htdocs/bsp.htm

When allocating channels one can specify Media Management specific parameters. Here are some examples:

Netbackup on Solaris:
allocate channel t1 type 'SBT_TAPE' PARMS='SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so.1';

Netbackup on Windows:
allocate channel t1 type 'SBT_TAPE' send "NB_ORA_CLIENT=client_machine_name";

Omniback/ DataProtector on HP-UX:


allocate channel t1 type 'SBT_TAPE' PARMS='SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.sl';

or:
allocate channel 'dev_1' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=orcl,OB2BARLIST=machinename_orcl_archlogs)';

How does one clone/duplicate a database with RMAN?

The first step to clone or duplicate a database with RMAN is to create a new INIT.ORA and password file (use the orapwd utility) on the machine you need to clone the database to. Review all parameters and make the required changes. For example, set the DB_NAME parameter to the new database's name. Secondly, you need to change your environment variables and do a STARTUP NOMOUNT from SQL*Plus. This database is referred to as the AUXILIARY in the script below. Lastly, write an RMAN script like this to do the cloning, and call it with "rman cmdfile dupdb.rcv":
connect target sys/secure@origdb
connect catalog rman/rman@catdb
connect auxiliary /
run {
  set newname for datafile 1 to '/ORADATA/u01/system01.dbf';
  set newname for datafile 2 to '/ORADATA/u02/undotbs01.dbf';
  set newname for datafile 3 to '/ORADATA/u03/users01.dbf';
  set newname for datafile 4 to '/ORADATA/u03/indx01.dbf';
  set newname for datafile 5 to '/ORADATA/u02/example01.dbf';
  allocate auxiliary channel dupdb1 type disk;
  set until sequence 2 thread 1;
  duplicate target database to dupdb
    logfile
      GROUP 1 ('/ORADATA/u02/redo01.log') SIZE 200k REUSE,
      GROUP 2 ('/ORADATA/u03/redo02.log') SIZE 200k REUSE;
}

The above script will connect to the "target" (the database that will be cloned), the recovery catalog (to get backup info), and the auxiliary database (the new duplicate DB). Previous backups will be restored and the database recovered to the "set until" point specified in the script. Notes: the "set newname" commands are only required if your datafile names will differ from those of the target database. The newly cloned DB will have its own unique DBID.
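One can verify on the cloned instance that a new DBID was in fact generated (a sketch; run against the duplicate database):

```sql
-- Compare this DBID with the one reported for the source database
SELECT dbid, name FROM v$database;
```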

Can one restore RMAN backups without a CONTROLFILE and RECOVERY CATALOG?
Details of RMAN backups are stored in the database control files and, optionally, in a recovery catalog. If both of these are gone, RMAN cannot restore the database. In such a situation one must extract a control file (or other files) from the backup pieces written out when the last backup was taken. Let's look at an example, taking a partial backup for illustrative purposes:
$ rman target / nocatalog
Recovery Manager: Release 10.1.0.2.0 - 64bit Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
connected to target database: ORCL (DBID=1046662649)
using target database controlfile instead of recovery catalog
RMAN> backup datafile 1;
Starting backup at 20-AUG-04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=146 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/oradata/orcl/system01.dbf
channel ORA_DISK_1: starting piece 1 at 20-AUG-04
channel ORA_DISK_1: finished piece 1 at 20-AUG-04
piece handle=/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_nnndf_TAG20040820T153256_0lczd9tf_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 20-AUG-04
channel ORA_DISK_1: finished piece 1 at 20-AUG-04
piece handle=/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_ncsnf_TAG20040820T153256_0lczfrx8_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
Finished backup at 20-AUG-04

Now, let's destroy one of the control files:


SQL> show parameters CONTROL_FILES
NAME             TYPE    VALUE
---------------- ------- ------------------------------------------------------
control_files    string  /oradata/orcl/control01.ctl, /oradata/orcl/control02.ctl, /oradata/orcl/control03.ctl
SQL> shutdown abort;
ORACLE instance shut down.
SQL> ! mv /oradata/orcl/control01.ctl /tmp/control01.ctl

Now, let's see if we can restore it. First we need to start the database in NOMOUNT mode:
SQL> startup NOMOUNT
ORACLE instance started.
Total System Global Area  289406976 bytes
Fixed Size                  1301536 bytes
Variable Size             262677472 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes

Now, from SQL*Plus, run the following PL/SQL block to restore the file:
DECLARE
  v_devtype   VARCHAR2(100);
  v_done      BOOLEAN;
  v_maxPieces NUMBER;
  TYPE t_pieceName IS TABLE OF VARCHAR2(255) INDEX BY BINARY_INTEGER;
  v_pieceName t_pieceName;
BEGIN
  -- Define the backup pieces... (names from the RMAN log file)
  v_pieceName(1) := '/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_ncsnf_TAG20040820T153256_0lczfrx8_.bkp';
  v_pieceName(2) := '/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_nnndf_TAG20040820T153256_0lczd9tf_.bkp';
  v_maxPieces := 2;
  -- Allocate a channel... (use type=>NULL for DISK, type=>'sbt_tape' for TAPE)
  v_devtype := DBMS_BACKUP_RESTORE.deviceAllocate(type=>NULL, ident=>'d1');
  -- Restore the first control file...
  DBMS_BACKUP_RESTORE.restoreSetDataFile;
  -- CFNAME must be the exact path and filename of a controlfile that was backed up
  DBMS_BACKUP_RESTORE.restoreControlFileTo(cfname=>'/app/oracle/oradata/orcl/control01.ctl');
  dbms_output.put_line('Start restoring '||v_maxPieces||' pieces.');
  FOR i IN 1..v_maxPieces LOOP
    dbms_output.put_line('Restoring from piece '||v_pieceName(i));
    DBMS_BACKUP_RESTORE.restoreBackupPiece(handle=>v_pieceName(i), done=>v_done, params=>NULL);
    EXIT WHEN v_done;
  END LOOP;
  -- Deallocate the channel...
  DBMS_BACKUP_RESTORE.deviceDeAllocate('d1');
EXCEPTION
  WHEN OTHERS THEN
    DBMS_BACKUP_RESTORE.deviceDeAllocate;
    RAISE;
END;
/

Let's see if the controlfile was restored:

SQL> ! ls -l /oradata/orcl/control01.ctl
-rw-r-----  1 oracle  3096576 Aug 20 16:45 /oradata/orcl/control01.ctl


We should now be able to MOUNT the database and continue recovery...


SQL> ! cp /oradata/orcl/control01.ctl /oradata/orcl/control02.ctl
SQL> ! cp /oradata/orcl/control01.ctl /oradata/orcl/control03.ctl
SQL> alter database mount;
SQL> recover database using backup controlfile;
ORA-00279: change 7917452 generated at 08/20/2004 16:40:59 needed for thread 1
ORA-00289: suggestion : /flash_recovery_area/ORCL/archivelog/2004_08_20/o1_mf_1_671_%u_.arc
ORA-00280: change 7917452 for thread 1 is in sequence #671
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/oradata/orcl/redo02.log
Log applied.
Media recovery complete.
SQL> alter database open resetlogs;
Database altered.

The SAP tool BRSPACE for Oracle databases enables you to manage the space in your database. Its functions are grouped as follows:
Instance administration:

o Start up database
o Shut down database
o Alter database instance
o Alter database parameter
o Recreate database
Tablespace administration:

o Extend tablespace
o Create tablespace
o Drop tablespace
o Alter tablespace
o Alter data file
o Move data file
Segment management:

o Reorganize tables
o Rebuild indexes
o Export tables
o Import tables
o Alter tables
o Alter indexes
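BRSPACE is normally called with a function name on the command line; the sketch below shows the common invocation form (the tablespace name PSAPSR3 and the exact options are examples only — check the BR*Tools documentation for your release):

```
# Extend a tablespace (prompts interactively for the data file details)
brspace -u / -f tbextend -t PSAPSR3

# Rebuild indexes
brspace -u / -f idrebuild
```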
