
In the Name of “Allah” the most Gracious the most Merciful - Oracle 11G Upgrade for 9i OCPS

Database Replay

#1 Top Feature of Oracle 11g Database – according to Arup Nanda.

Changes to the database, such as creating an index, applying a patch, changing a table from non-partitioned to partitioned (or changing the partitioning scheme from hash to range), or changing the storage layout from raw devices to ASM, all carry risk. Ongoing essential tasks, such as gathering statistics or adding new indexes as the business changes, raise the same concern.

"Inevitable question: will the change break something else somewhere?"

Goal: assess the risk to the whole system as a result of the change.

Concerns with the traditional approach:
* Performing the changes in a test system prior to making them in production.
* Relying on a third-party load generator tool, typically used by QA, to simulate the load.

Capturing the load in the production system (the actual SQL statements that run in production): using the Database Replay feature, you record the workload from your production system.

Best cases: when you are concerned about any change in the environment of the database, for example:
* Host changed from Solaris to HP-UX.
* Effect of a patch, such as the quarterly CPU patches.
* Effect of converting the database from a single instance to RAC.

You can then replay the workload on the test system. Whatever you are replaying is an accurate reflection of what goes on in your production database.

"Truthful prediction of the load in the production system."

Topic Area: "Oracle 11g Change Management"

Database Replay:

"Testing can take time and be very expensive, and therefore upgrades and changes that can have a very positive impact on your system are delayed or not performed simply because of the cost and time involved."

Using Oracle Database Replay:

Oracle Database Replay addresses the issues associated with environmental changes by providing the ability to test the impact of those changes on a test system. Thus, you can gauge the impact of these changes before you move them to production.

Important workload attributes such as concurrency and transactional dependencies are maintained to make testing as real-world as possible.

Scenarios where Database Replay is recommended:
- Database upgrades
- Applying Oracle patches
- RAC related changes (adding nodes, interconnect changes, and so on)
- OS platform changes and upgrades
- Hardware changes (CPU, memory, or storage)

What you can infer from the test results:
- Oracle bugs in new versions of Oracle
- Significant performance issues

It is important to ensure that the test environment mimics the production environment as closely as possible.

Database Replay: Overview

A shadow capture process records the transactions occurring on the database into log files. All database-related requests are captured by this shadow processing in a way that has minimal impact on the database (TCP overhead of roughly 4.5%).

Each session requires about 64 KB of additional memory overhead during workload capture.

Additional disk space is required for the shadow files, and it should be located on a different set of physical disks than your database disks so that the resulting disk I/O does not impact the database disks.

RAC: you will have a separate shadow recording process and shadow files on each node of the cluster. During replay these shadow files have to be relocated to shared storage so all nodes have access to them.

Database Replay

After the workload is captured, it needs to be preprocessed before it can be used for replay. The preprocessing is a one-time action and is done on the same version of Oracle that will perform the replay.

During replay, replay clients are used to consume the workload recording from the shadow files and replay the workload on the database. To the database they look like normal external clients making requests.

Database Replay supported workloads:

1. All SQL operations, including most with binds.
2. All large object (LOB) operations.
3. Local transactions.
4. Logins and logoffs.
5. Session switching.

6. Some PL/SQL remote procedure calls.

The following operations are not recorded in the workload:

1. Direct path load operations
2. OCI and REF binds
3. Streams and non-PL/SQL based AQ
4. Distributed transactions
5. Flashback operations
6. Shared server operations

Database Replay: Workload Capture

* Sufficient disk space for the captured workload.
* The directories that will be used must be accessible by the Oracle software owner account.

Recording User: the account that is used to manage the workload capture is known as the recording user.
Replay User: the account that is used to manage the replay is called the replay user.
** SYSDBA privileges.
** Back up the database before the capture process begins so that you can restore this backup to the test system.
Restore the test system to the SCN at which the workload capture started.

Be aware of the impact of in-flight transactions and be prepared to deal with them when analyzing the replay results.

Pre-steps before capturing the workload:
** Make sure your database is in ARCHIVELOG mode and back it up.
** Ensure there is enough disk space for the shadow logs to hold the captured workload.
** Determine whether you need to recycle (restart) the database.
** Shadow files are the wcr* files.
** 30% impact with workload capture.

Workload capture can be done using OEM or manually.

OEM Capture of Workload:

Options:
** You can define whether or not you want to restart the database prior to the start of the capture process.
** Apply workload filters to the capture process (include or exclude the schemas you need).

Capture Workload: Parameters Page
** Select the Oracle directory where the shadow workload files are stored from the drop-down list.
** Initialization parameter file (new or default).

Capture Workload: Schedule Page
** Schedule when you want to run the workload capture process.

Capture Workload: Review Page
** Summary of the actions to be taken; allows you to begin the workload capture process.

View Capture Workload Page
** Status: In Progress

Manual Capture of Workload:

The package used to manually capture a workload is DBMS_WORKLOAD_CAPTURE.

Manual steps:
1. Define filters.
2. Start the workload capture.

Defining filters:

DBMS_WORKLOAD_CAPTURE.ADD_FILTER(fname, fattribute, fvalue)

fattribute can be: program, module, action, service, instance_number or user.

DBMS_WORKLOAD_CAPTURE.DELETE_FILTER(fname)

Start the capture using the START_CAPTURE procedure:

DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name, dir, duration)

name – name of the workload capture
dir – directory where the shadow files are stored
duration – duration of the capture

default_action – whether the defined filters include or exclude workload
auto_unrestrict – relates to starting the database in a known (restricted) state to avoid data divergence due to in-flight transactions

Data Dictionary: DBA_WORKLOAD_CAPTURES

Database Replay: Stop Workload Capture

OEM: click the Stop Capture button.

Manual:
DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE

The default timeout is 30 seconds; an optional timeout parameter allows you to define a shorter or longer wait. Another parameter, reason, lets you record the reason for stopping the capture.

WRR$_AUTO_STOP_CAPTURE_nn (automatic job to stop the capture)

Database Replay: Delete a Capture

OEM: use the View Workload Capture History link and choose the captures to delete.

Manual method:
DBMS_WORKLOAD_CAPTURE.DELETE_CAPTURE_INFO(capture_id)
** This does not physically remove the files created during the capture process.
** You can get the capture id from the DBA_WORKLOAD_CAPTURES data dictionary view.

Database Capture: Data Dictionary Views
** DBA_WORKLOAD_CAPTURES
** DBA_WORKLOAD_FILTERS

Database Replay: Preprocess the Captured Workload

Preprocessing the workload from OEM: use the Preprocess Captured Workload page.
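Putting the manual capture steps together, a minimal sketch (the directory object, filter value and capture name below are illustrative, not from the notes):

create directory capture_dir as '/u01/app/oracle/capture';

begin
  -- define a filter on the USER attribute (whether filters include or
  -- exclude workload depends on default_action)
  dbms_workload_capture.add_filter(
    fname      => 'app_user_filter',
    fattribute => 'USER',
    fvalue     => 'APP_USER');

  -- record for one hour into the directory object created above
  dbms_workload_capture.start_capture(
    name     => 'pre_upgrade_capture',
    dir      => 'CAPTURE_DIR',
    duration => 3600);
end;
/

-- later, stop the capture explicitly
begin
  dbms_workload_capture.finish_capture(timeout => 30, reason => 'capture complete');
end;
/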

Manual preprocess of the workload:

This is done using the DBMS_WORKLOAD_REPLAY package:

DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir)

The preprocessing creates additional files in the capture directory:
** EXTB files
** wcr_login.pp file
** wcr_process.wmd file

Database Replay Workload:

Database Replay: Set Up the Replay Database (Test Database)

** First, we need to recreate the test database, restoring it to the SCN at which the capture process was started.

Using RMAN to duplicate the database up to that SCN:

DUPLICATE TARGET DATABASE TO auxdb
  FROM ACTIVE DATABASE
  UNTIL SCN 7445675
  SPFILE NOFILENAMECHECK;

Database Replay: Replay Options

1. Synchronization mode
2. Connection time scale
3. Think time scale

DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY

Synchronization Mode:
You can use this option to enable or disable SCN-based synchronization of the replay (when enabled, the commit order of the transactions within the workload being replayed is preserved).

Connection Time Scale:
Controlled by the connect_time_scale parameter, which defaults to 100. It scales the time between the start of the replay and the point at which each session connects, relative to the time between the start of the workload capture and the original connection.

Think Time Scale:
Controlled by the think_time_scale parameter, which defaults to 100. It adjusts the user think time during database replay.

Database Replay

** Move the capture files to the replay system.
** Consider using Flashback Database or a standby database.
** Just as on the capture system, you need to create a directory object on the replay system.

Consider using Flashback Database / a standby database:
** Use Flashback Database in the replay database to flash the database back after you execute a replay operation, allowing you to replay many times.
** Alternatively, use a standby database.

Deal with external references:

DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION

Dictionary view: DBA_WORKLOAD_CONNECTION_MAP
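A minimal sketch of the preprocessing step (the directory object name is illustrative):

-- run on a database of the same version that will perform the replay
begin
  dbms_workload_replay.process_capture(capture_dir => 'CAPTURE_DIR');
end;
/

-- once the replay has been initialized (next page), the captured
-- connection strings can be inspected before remapping them
select * from dba_workload_connection_map;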

Execute Database Replay: OEM

OEM Replay Workload page: this page prompts you for the directory object that contains the workload you wish to replay.

Operations needed before replaying the workload:
** Restore the database for replay.
** Perform any system changes that you wish to test.
** Resolve any references to external systems (for example, database links).
** Set up the replay clients; they are installed when you install the database software.

Replay Workload: Customize Options page:
** Review the connection strings found in the workload.
** Replay parameters: synchronization, connect_time_scale and think_time_scale.

Replay Workload: Wait for Client Connections page
Replay Workload: Review page

Replay clients:
The replay clients are started from the command line by running the wrc executable.

wrc mode=replay userid=xxx password=xxx

In your testing you may need a number of replay clients, depending on the workload of the system and the level of concurrency.

mode = replay | calibrate | list_hosts

calibrate: used to estimate the number of replay clients and CPUs that will be needed to replay the workload.

wrc -help (lists the options for the workload replay client)

Execute Database Replay: Manual

** Copy the shadow files from production to the test / standby system.
** Initialize the replay data.
** Prepare for database replay.
** Initiate database replay.

Initialize Database Replay:

DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name, replay_dir);

This initializes the workload in the database.

DBMS_WORKLOAD_REPLAY.DELETE_REPLAY_INFO

This removes a replay record from the system. The dictionary view that lists the replays is DBA_WORKLOAD_REPLAYS.

Prepare for Database Replay:
Having initialized the workload and mapped any external connections, the next step is to prepare for the replay operation.

DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY

This procedure takes all the replay options, such as synchronization.
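Taken together, the manual preparation might look like the following sketch (replay name, directory object, connection string and option values are illustrative):

begin
  -- register the preprocessed workload with the test database
  dbms_workload_replay.initialize_replay(
    replay_name => 'upgrade_test_replay',
    replay_dir  => 'CAPTURE_DIR');

  -- remap captured connect strings to the test system if needed
  -- dbms_workload_replay.remap_connection(connection_id => 1,
  --                                       replay_connection => 'testhost:1521/testdb');

  -- set the replay options described above
  dbms_workload_replay.prepare_replay(
    synchronization    => TRUE,
    connect_time_scale => 100,
    think_time_scale   => 100);
end;
/

Then, from the operating system, start one or more replay clients:

wrc mode=calibrate replaydir=/u01/app/oracle/capture
wrc userid=system password=xxx mode=replay replaydir=/u01/app/oracle/capture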

Initiate Database Replay:

** Start the replay clients using the wrc command line interface.
** Start the workload replay using the START_REPLAY procedure:

DBMS_WORKLOAD_REPLAY.START_REPLAY

Data dictionary views to monitor the replay:

** DBA_WORKLOAD_REPLAYS
** DBA_WORKLOAD_REPLAY_DIVERGENCE
** V$WORKLOAD_REPLAY_THREAD

Stopping Workload Replay:

Stop the replay using the cancel replay procedure:

DBMS_WORKLOAD_REPLAY.CANCEL_REPLAY

CAPTURE – REPLAY – SAME DATABASE

Steps:
** Capture the workload.
** Preprocess the captured files.
** Create a restore point using the command "create restore point pre_replay;".
** Start the replay.
** Flash the database back to the restore point using the following commands:
   shutdown immediate
   startup mount
   flashback database to restore point pre_replay;
   alter database open resetlogs;

Another choice is to use a standby database.

Reporting: Workload Capture and Replay

Capture:

DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO(dir)
DBMS_WORKLOAD_CAPTURE.REPORT(capture_id, format)

Replay:

DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO(dir)
DBMS_WORKLOAD_REPLAY.REPORT(replay_id, format)
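For example, a text report of a capture and of a replay can be generated like this (the directory object name is illustrative):

declare
  l_cap_id NUMBER;
  l_rep_id NUMBER;
  l_report CLOB;
begin
  -- load the capture metadata from the capture directory and report on it
  l_cap_id := dbms_workload_capture.get_capture_info(dir => 'CAPTURE_DIR');
  l_report := dbms_workload_capture.report(capture_id => l_cap_id, format => 'TEXT');
  dbms_output.put_line(dbms_lob.substr(l_report, 4000, 1));

  -- likewise for the replay
  l_rep_id := dbms_workload_replay.get_replay_info(dir => 'CAPTURE_DIR');
  l_report := dbms_workload_replay.report(replay_id => l_rep_id, format => 'TEXT');
  dbms_output.put_line(dbms_lob.substr(l_report, 4000, 1));
end;
/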

Temporary Tablespace Enhancements (Oracle 11g)

Temporary Tablespace:

Temporary tablespaces are used for special operations, particularly for sorting data on disk. For SQL that returns millions of rows, the sort operation may be too large for the RAM area and must occur on disk.

** A temporary tablespace is created when the database is created. Each database should have at least one temporary tablespace.
** A temporary tablespace has tempfiles rather than data files like other tablespaces.
** Tempfiles do not immediately grow to the size they have been allocated.

Dictionary views:
DBA_TABLESPACES
DBA_TEMP_FILES

Temporary Tablespace Features

** ALTER TABLESPACE ... SHRINK SPACE
** ALTER TABLESPACE ... SHRINK TEMPFILE

New view added: DBA_TEMP_FREE_SPACE

Temporary Tablespace Shrink:

Certain database operations may require unusually large amounts of temporary tablespace, which leads to large tempfiles. If these operations are rare, or one-time operations, the tempfiles can be shrunk.

alter tablespace temporary_tablespace_name shrink space;

This causes Oracle to reduce the overall size of the temporary tablespace back towards its originally defined size.

alter tablespace temporary_tablespace_name shrink tempfile file_location;

This command allows you to shrink a given tempfile towards its originally defined size; it shrinks only the specified file.

KEEP option: indicates that you want the temporary tablespace or tempfile to retain a minimum of the keep size.

Examples:

alter tablespace temp shrink space keep 100m;

alter tablespace temp shrink tempfile '/oracle01/oradata/orcl/temp01.dbf' keep 100m;

View: DBA_TEMP_FREE_SPACE

DBA_TEMP_FREE_SPACE has been added to the Oracle 11g data dictionary to make it easy to manage temporary tablespaces. The view reports the space allocated and the free space:

** tablespace_name
** tablespace_size
** allocated_space
** free_space
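For example, the view can be queried directly using the columns listed above:

select tablespace_name, tablespace_size, allocated_space, free_space
  from dba_temp_free_space;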

Temporary Tablespace Groups

Tablespace groups allow users to use more than one tablespace for storing temporary segments.

** The tablespace group is created implicitly when the first tablespace is assigned to it.

Examples:

** Creating a group by adding an existing temporary tablespace to it:

alter tablespace temp tablespace group temp_ts_group;

** Adding another temporary tablespace to the group:

create temporary tablespace temp2
  tempfile 'location'
  tablespace group temp_ts_group;

** The view that holds information about temporary tablespace groups is DBA_TABLESPACE_GROUPS.

Tablespaces and tablespace groups share the same namespace, so a group and a tablespace cannot have the same name.

Once the group is created, it can be assigned just like a tablespace to a user, or as the database default temporary tablespace:

alter user username temporary tablespace temp_ts_group;

alter database default temporary tablespace temp_ts_group;

To remove a tablespace from the group:

alter tablespace temp2 tablespace group '';

There is no maximum limit to the number of tablespaces in a tablespace group, but it must contain at least one. The group is implicitly removed when its last member is removed.
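Group membership can be checked in the view mentioned above (column names per the 11g data dictionary):

select group_name, tablespace_name
  from dba_tablespace_groups;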

Oracle 11g Result Cache

The shared pool component of the SGA stores the parsed and compiled versions of SQL queries, which lets the database quickly execute frequently run SQL statements and PL/SQL functions.

In Oracle 11g, a new component, the result cache, stores the results of both SQL queries and PL/SQL functions, letting the database quickly return query and function results without having to re-execute the code.

Improved database performance: a second session executing the same query can return the results directly from the cache instead of from disk.

The result caching is done at the database-wide level.

If objects are modified, the database automatically invalidates the cached results that reference the modified objects.

Good candidates for caching: queries that access many rows and return only a few rows (typical of data warehouses).

The result cache consists of the following components:
** SQL Query Result Cache
** PL/SQL Function Result Cache
** Client Result Cache

Result Cache Memory Pool:
** SQL Query Result Cache pool
** PL/SQL Function Result Cache pool

Its size depends on the following parameters:
~ memory_target
~ sga_target
~ shared_pool_size

Managing the Result Cache:

Initialization parameters:

~ result_cache_max_size: size of the result cache (0 disables it; otherwise a system-determined default applies).

alter system set result_cache_max_size=0;

Recommended sizes:
0.25% of memory_target, or
0.5% of sga_target

~ result_cache_max_result: the maximum percentage of the result cache that a single cached result can use. Default: 5%.

~ result_cache_remote_expiration: the length of time for which a result that depends on remote objects remains valid.

Caching SQL Results with a RESULT_CACHE Hint:

Initialization parameter: result_cache_mode=manual

The results of a SQL statement can be cached using the result_cache hint. Example:

select /*+ result_cache */ *
  from table_name
 where condition ..;

The result cache operator causes the database to check the result cache every time you execute this query, to see whether its results are already cached from a previous execution. If yes, they are fetched from the cache; if not, the query is run and its results are stored in the cache.

Initialization parameter: result_cache_mode=force

With this mode set, the database tries to cache query results wherever it can. You can bypass this for a given query with the no_result_cache hint.

The hints take precedence over the initialization parameter mode.

DBMS_RESULT_CACHE operations:
** Check whether the result cache is open or closed.
** Report statistics on the usage of the result cache.
** Flush the result cache.

DBMS_RESULT_CACHE.MEMORY_REPORT
Reports the different memory sizes, including block size, maximum cache size, maximum result size and total memory.
If result_cache_max_size = 0, MEMORY_REPORT reports that the cache is disabled.

DBMS_RESULT_CACHE.STATUS
Returns the status of the result cache (enabled / disabled).

DBMS_RESULT_CACHE.FLUSH
Use this procedure / function to clear the contents of the cache. Both the function and the procedure clear the cache of existing results and return the freed memory to the system. Make sure you disable the cache before executing DBMS_RESULT_CACHE.FLUSH.

Result Cache: Dynamic Performance Views

V$RESULT_CACHE_STATISTICS - memory usage statistics and cache settings
V$RESULT_CACHE_OBJECTS - lists all cached objects and their attributes
V$RESULT_CACHE_DEPENDENCY - dependencies between cached results and objects
V$RESULT_CACHE_MEMORY - shows all memory blocks and their statistics

V$RESULT_CACHE_OBJECTS statuses: new, published, expired, invalid, bypass

SQL QUERY RESULT CACHE

Steps to set up the SQL query result cache or PL/SQL function result cache:
** Set the initialization parameter result_cache_mode (values: manual | force; default: manual).
** An LRU algorithm is used to age out results, making room for fresh query results.
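A small illustration of the two modes (the table and column names are illustrative):

-- manual mode (default): only hinted statements use the cache
alter session set result_cache_mode = manual;
select /*+ result_cache */ count(*) from sales where region = 'WEST';

-- force mode: results are cached wherever possible, unless bypassed
alter session set result_cache_mode = force;
select /*+ no_result_cache */ count(*) from sales where region = 'WEST';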

Restrictions on using the SQL query result cache:
** Temporary tables
** Dictionary views
** Non-deterministic PL/SQL functions
** The currval / nextval sequence pseudocolumns

When caching a user-written function used in a function-based index, ensure that the function is declared with the DETERMINISTIC keyword.

You can cache subqueries and inline views using the /*+ result_cache */ hint.

PL/SQL Function Result Cache

It caches the results of PL/SQL functions in the result cache component of the SGA.

Candidates: functions that the database invokes frequently but which depend on information that changes infrequently or never.

The database uses the input parameter values of the function as the lookup key.

Creating a cacheable function:

create or replace function function_name return return_type
  result_cache relies_on (table_name, ..)

PL/SQL function cache: the function body executes (rather than returning a cached result) in these cases:
** When the cached result for the parameter values is invalid because an object in the RELIES_ON clause has changed.
** When the function bypasses the cache.
** When the cached result for that set of parameter values has aged out because the system needs memory.

Restrictions (PL/SQL function result cache):
** It can't be a pipelined table function.
** It can't have any OUT or IN OUT parameters.
** It must be a named function.
** It can't be defined in a module that has invoker's rights.
** It can't have any IN parameters belonging to the LOB, REF CURSOR, collection, object or record types.
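A complete sketch of a cacheable function, assuming an HR-style departments table (all names are illustrative):

create or replace function get_dept_name (p_dept_id number)
  return varchar2
  result_cache relies_on (departments)
is
  v_name varchar2(30);
begin
  -- the result for each p_dept_id value is cached until departments changes
  select department_name
    into v_name
    from departments
   where department_id = p_dept_id;
  return v_name;
end;
/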

Client Query Result Cache

The OCI result cache enables client-side caching of SQL result sets. The OCI result cache, which is transparent to OCI applications, keeps the result sets consistent with changes to session attributes or to the database itself.

Benefits:
** Performance
** Server scalability

The OCI result cache is kept on a per-process basis, so multiple client sessions can share the same cached result sets.

The database keeps the client result sets transparently consistent with changes on the server.

Candidates: static data, lookup tables, and queries producing small, repeatable result sets.

Enabling and Disabling the Client Result Cache:

** Same approach as server-side result caching.

~ client_result_cache_size: determines the maximum size of the client per-process result cache (in bytes). By default the database allocates to every OCI client process the maximum size specified by this parameter.
~ client_result_cache_lag
~ oci_result_cache_max_size
~ oci_result_cache_max_rset_size
~ oci_result_cache_max_rset_rows

Monitoring: dynamic performance view CLIENT_RESULT_CACHE_STATS$
** Statistics are available at the session / system level.

Oracle 11g Invisible Index

In Oracle 11g, you can create an invisible index.

Invisible index: the index is similar to a regular index except that the optimizer cannot see it; it is invisible to the optimizer.

Uses:
** You can use an invisible index as a temporary index for a specific operation without forcing all operations to use that index.
** You can test the effect of removing an index before you drop it for good.

Initialization parameter: optimizer_use_invisible_indexes = true

Views: the VISIBILITY column in the DBA_INDEXES view tells whether an index is visible or invisible.

** The database continues to maintain an invisible index during DML statements.
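For example (table and index names are illustrative):

-- create an index the optimizer will not use by default
create index emp_ename_ix on emp (ename) invisible;

-- allow the optimizer to use invisible indexes in this session only
alter session set optimizer_use_invisible_indexes = true;

-- make the index visible (or invisible again) once testing is complete
alter index emp_ename_ix visible;
alter index emp_ename_ix invisible;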

Oracle 11g New Locking Mechanisms

Oracle 11g provides more efficient capabilities relating to the implementation of object locking:
** DDL statements can wait for a DML lock instead of failing if one can't be obtained right away.
** The database makes less use of exclusive locks.

Allowing DDL Locks to Wait for DML Locks

** In Oracle 11g, you can specify a time interval for which a DDL statement will wait for a DML lock, instead of the DDL failing automatically when it can't get an immediate lock.

ddl_lock_timeout: the length of time a DDL statement waits for a DML lock.

Maximum value: 1,000,000 seconds (about 11.5 days).

Explicit Table Locking

Oracle Database 11g has enhanced the LOCK TABLE statement so you can specify the time the statement will wait for a DML lock on that table.

Any DDL statement requires an exclusive lock on the table to perform the DDL operation; it fails if the database can't immediately acquire an exclusive lock on the table.

The mode parameter can take two values: WAIT | NOWAIT.
** The NOWAIT option immediately returns control to you if the table is already locked by another session.
** The WAIT option lets the statement wait for the period you specify; you can set any value for the wait parameter.
** If you omit the mode parameter altogether, the database locks the table once it becomes available and then returns control to you.

Reduced Need for Exclusive Locks

Oracle Database 11g removes the requirement for an exclusive lock on tables during the following operations:
** create index ... online
** create materialized view log
** enable constraint ... novalidate
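For example (the table name and wait times are illustrative):

-- let DDL in this session wait up to 30 seconds for DML locks
alter session set ddl_lock_timeout = 30;

-- explicit table locking with a wait limit, or no wait at all
lock table emp in exclusive mode wait 60;
lock table emp in exclusive mode nowait;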

Flashback Data Archive

Oracle 9i introduced flashback based on undo: when data is updated, the past image of the block is kept in the undo segments, even after the data is committed. A long-running query that started before the data was changed sees the pre-change data, even if the changes have since been committed. This also allows retrieving data as of a specified time or SCN.

Flashback transactions are used for auditing and debugging purposes: they give us a history of data changes, how they changed and who changed them.

Since undo eventually evaporates, a flashback transaction fails when no undo is available. Flashback Data Archive stores this undo data durably.

create flashback archive archive_name
  tablespace tablespace_name
  quota 1g
  retention 1 year;

alter table table_name flashback archive archive_name;

When you enable a table for flashback archive, Oracle performs the functional equivalent of writing triggers to capture the pre-change data and populate internal history tables.

This could also be done with user-written triggers, but the Flashback Data Archive differs:
1. There are no internal triggers; the Oracle software code does the archiving to the flashback archive. There is no trigger-related performance impact, context switching or dependency checking.
2. Flexibility for flashback queries without being limited to the undo segments only.

Uses:
1. A quick auditing tool without turning on auditing.
2. Both auditing and flashback are written to disk, producing I/O. Auditing is done through an autonomous transaction, whereas flashback archives are written by a special background process called FBDA (Flashback Data Archiver).
3. You can use cheaper storage for the flashback data archive.
4. Auditing is stored in the SYSTEM tablespace, whereas a flashback data archive can be stored in any user-specified tablespace.

New Features – Oracle Flashback

1. Oracle Flashback Transaction Backout: allows you to back out transactions that are already committed.
2. Oracle Flashback Data Archive: provides the ability to track changes to a table over its lifetime.

Oracle Flashback Transaction Backout

This feature allows you to back out a committed transaction and all dependent transactions while the database is still online.

Only the selected transactions and their dependent transactions are backed out; other transactions are untouched.

DBMS_FLASHBACK.TRANSACTION_BACKOUT

Setting up for Flashback Transaction Backout:

Prerequisites:
1. Enable supplemental logging with primary key logging.
2. Grant execute privileges on DBMS_FLASHBACK to the user performing the flashback transaction backout.

3. Grant SELECT ANY TRANSACTION to the user who will be performing the flashback transaction backout.

Flashback transaction backout can be done using OEM:

- Go to the Schema tab.
- Select the table on which to execute the transaction backout.
- Select the transaction backout operation.

Flashback Transaction: Perform Query Page

- Select the SCN / time range over which you wish to look for the transaction.
- Oracle uses LogMiner to mine all the transactions on the selected table over the given period of time.
- The mined transactions are presented.
- OEM gives an estimate of how long the mining process should last and an option to cancel the operation.

Compensating transactions: the statements used to back out the transaction are known as compensating transactions.

Executing a Flashback Transaction Backout Manually:
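A sketch of the manual route, following the prerequisites above (the grantee, transaction ID and parameter values are illustrative; the DBMS_FLASHBACK parameter names shown are not from these notes):

-- prerequisites: supplemental logging with primary key logging
alter database add supplemental log data;
alter database add supplemental log data (primary key) columns;

-- privileges for the user performing the backout
grant execute on dbms_flashback to app_admin;
grant select any transaction to app_admin;

-- back out one committed transaction, identified by its XID
declare
  v_xids sys.xid_array := sys.xid_array(hextoraw('0500090066030000'));
begin
  dbms_flashback.transaction_backout(
    numtxns => 1,
    xids    => v_xids,
    options => dbms_flashback.nocascade);
end;
/

-- review the compensating changes, then commit to make them permanent
commit;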

Online Table
Redefinition
The definition of the table can be
changed while it is accessible by the
application users.

Ex: A non partitioned table can be


partitioned while application users are
using it.

It is a capability for High Availability


OLTP Applications.

Restrictions (Oracle 11g):

1. Tables in SYS and SYSTEM


schema cannot be redefined
online.

2. Tables with LONG and LONG RAW columns cannot be redefined online unless those columns are converted to CLOBs and BLOBs.

3. Temporary tables cannot be redefined online.

4. Tables with BFILE columns cannot be redefined online.

5. Tables with FGA control cannot be redefined online.

6. The overflow table of an index-organized table (IOT) has to be redefined at the same time as the base IOT.

7. After redefining a table that has materialized view logs, the dependent materialized views must be refreshed with a complete refresh.

The package used for redefinition is DBMS_REDEFINITION.

Steps:

1. Verify that the table can be redefined by executing the CAN_REDEF_TABLE procedure.

2. Create an interim table in the same schema, with the desired attributes of the redefined table.

3. Start the redefinition process using the START_REDEF_TABLE procedure.

4. Create any triggers, indexes, grants or constraints required on the interim table.

5. Finish the redefinition process by executing the FINISH_REDEF_TABLE procedure.

Note: to avoid manual step 4, you can use the COPY_TABLE_DEPENDENTS procedure to create all dependent objects on the interim table. Dependent objects supported via this method are triggers, indexes, constraints and grants.

Code:

Package: DBMS_REDEFINITION

- Check whether the table can be redefined:

execute DBMS_REDEFINITION.CAN_REDEF_TABLE('schema','table_name');

- Create an interim table with the desired changes.

- Start the redefinition process (provide column mappings if needed):

execute DBMS_REDEFINITION.START_REDEF_TABLE('schema','table_name','interim_table_name');

- Copy the dependent objects:

execute DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS

- Finish the redefinition process:

execute DBMS_REDEFINITION.FINISH_REDEF_TABLE('schema','table_name','interim_table_name');

- Abort the redefinition process, if needed, using the command below:

execute DBMS_REDEFINITION.ABORT_REDEF_TABLE('schema','table_name','interim_table_name');

Online Redefinition for Tables with MV Logs

The materialized view log is cloned onto the interim table during the redefinition process, just as triggers and indexes are. One requirement is that at the end of the redefinition process a complete refresh must be performed on the dependent materialized views.

Minimal Invalidation of Dependent Objects

Prior to Oracle 11g, Oracle automatically invalidated all dependent objects and PL/SQL packages during online redefinition, even if those objects weren't logically affected.

In Oracle 11g, for example, if a column is dropped, only the dependent objects that reference the dropped column are invalidated. This concept is called fine-grained dependency management.

Note: triggers continue to be invalidated automatically, as before, during an online redefinition. All triggers defined on a redefined table are invalidated, but the database automatically revalidates them when the next DML statement executes.

Note: you must ensure that you have enough space to hold the original table and a copy of it, because the DBMS_REDEFINITION process works against a full copy of the source table (the interim table).

Online redefinition as a method to migrate to SecureFiles:
SecureFiles can also be migrated using the partition exchange method. Revisit this when reading about SecureFile CLOBs.
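Putting the DBMS_REDEFINITION calls above into one worked sketch (schema, table, columns and the partitioning scheme are illustrative; COPY_TABLE_DEPENDENTS takes additional parameters with defaults that are not shown in these notes):

-- 1. Check that the table can be redefined (uses the primary key by default)
begin
  dbms_redefinition.can_redef_table('SCOTT', 'EMP');
end;
/

-- 2. Create the interim table with the desired structure (here: hash partitioned)
create table scott.emp_interim (
  empno  number primary key,
  ename  varchar2(10),
  deptno number
)
partition by hash (deptno) partitions 4;

-- 3. Start the redefinition (a column mapping can be passed as the fourth argument)
begin
  dbms_redefinition.start_redef_table('SCOTT', 'EMP', 'EMP_INTERIM');
end;
/

-- 4. Copy dependent objects (triggers, indexes, constraints, grants)
declare
  v_errors pls_integer;
begin
  dbms_redefinition.copy_table_dependents(
    uname      => 'SCOTT',
    orig_table => 'EMP',
    int_table  => 'EMP_INTERIM',
    num_errors => v_errors);
end;
/

-- 5. Finish the redefinition (or abort it with ABORT_REDEF_TABLE)
begin
  dbms_redefinition.finish_redef_table('SCOTT', 'EMP', 'EMP_INTERIM');
end;
/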
