Miscellaneous
TRUE, FALSE and NULL 24 May 1999
Using the "not equal" operators (<> or !=) gives me back a row count of zero when there are actually 16 valid rows with
NULL in the field. Surely even though a column is NULL, Oracle should still report it as having a value, that value
being NULL. Is this expected behavior?
There has been a lot of debate in the literature (both academic and popular) on this issue over the years. It spans logic,
philosophy and semantics, as well as relational theory. Some claim that if a predicate (Boolean expression) is not TRUE
then it is FALSE. Most allow that it might also be UNKNOWN. Some however claim that we should allow even more
truth values such as UNDEFINED and others.
Nevertheless, in practice we have three truth-values (TRUE, FALSE and NULL) in Oracle and this is enough to
implement any value system you like. If you want 2-valued logic, you can just make the columns not nullable. If you
want 3-valued logic without NULLs, you just use default clauses on the columns to ensure that absent "values" are
assigned a special constant value outside the domain of the field. If you want 4-valued logic or higher, you can define
multiple such special constants, or allow NULLs and just add a metadata column to distinguish between the meanings of
NULL.
Of course, from a performance point of view this really starts to get interesting when you consider the treatment of
NULLs in indexes ... but that's another topic.
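As a concrete illustration of the behaviour in the question (the table and column names here are hypothetical):

```sql
-- Suppose t.status is nullable and some rows have status NULL.
select count(*) from t where status <> 'CLOSED';
-- Rows with a NULL status are not counted: NULL <> 'CLOSED' evaluates
-- to neither TRUE nor FALSE, and only TRUE predicates return rows.

-- To include the NULLs, test for them explicitly ...
select count(*) from t where status <> 'CLOSED' or status is null;
-- ... or map NULL to a constant outside the domain of the column.
select count(*) from t where nvl(status, '~') <> 'CLOSED';
```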
Copying a table 8 October 1999
When duplicating a table, is it more efficient to use the COPY command or to use CREATE TABLE AS SELECT?
CREATE TABLE AS SELECT uses the direct (unbuffered) I/O path and can be done
NOLOGGING/UNRECOVERABLE and in parallel, all of which is a distinct advantage. One thing to watch out for is
that the new table inherits the degree of parallelism with which it was created, which may not be what you want.
If the table is prone to index range scans, it is worth taking the trouble to ensure that the rows are ordered by that index
key. In recent releases, you can include an ORDER BY clause in the query. In older releases, you need to use an
INDEX hint, which inhibits parallelism. You can then use the NOSORT option when creating that index, but more
importantly, the index will have an optimal clustering factor, and that will give your index range scans a big boost.
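On a release that accepts the ORDER BY clause, the copy might be sketched like this (the table, column and index names are placeholders):

```sql
create table emp_copy nologging as
select *
  from emp
 order by empno;

-- The rows are already in key order, so the sort can be skipped and
-- the index gets an optimal clustering factor.
create index emp_copy_i1 on emp_copy (empno) nosort;
```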
Sequence of statements 2 November 1999
I need to find the sequence of the SQL statements being issued by a front end whose source code I do not have.
You could set sql_trace in the parameter file. That would give the sequence of SQL statements within each process,
including their recursive SQL, but it will be difficult to resolve the order in which the statements were executed by
different processes. If you use event 10046, level 8 instead, you will be able to estimate the time at which each
statement was executed, by adding up the elapsed time for all the waits.
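A hedged sketch of setting the event (the SID and SERIAL# values are placeholders taken from V$SESSION):

```sql
-- In your own session:
alter session set events '10046 trace name context forever, level 8';

-- For another session, such as the front end, as SYS:
execute sys.dbms_system.set_ev(12, 345, 10046, 8, '');
```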
ASTs 5 November 1999
In the OPS chapter in your book you are consistently referring to ASTs (Asynchronous System Traps) as a notification
mechanism. AST is a VAX/VMS mechanism for notification after an I/O is done. Processes can even queue an AST to
another process, given sufficient privileges. DEC's CISC CPUs (NVAX and CVAX chips) had special registers for AST
queuing, and AST delivery even elevated the IPL of the CPU. As no such provisions exist in the Unix world, what does
Oracle use on Unix instead of ASTs? My guess is that signals are used, but I cannot be sure of that.
Oracle uses sockets on Unix instead. You can trace this with truss. Signals are also used locally on each node, but only
to awaken processes that were blocked on a timer wait, in which case a SIGALRM is used.
http://www.ixora.com.au/q+a/misc.htm
MTU size 19 November 1999
I want to set the SDU size in listener.ora and tnsnames.ora. But this parameter is related to the size of the MTU
(Maximum Transmission Unit). How do I find the current setting of the MTU size -- on HP-UX in particular?
It is conceivably different for each network interface. You can see it with netstat -i.
Sequence performance 30 November 1999
We have a problem with waits for both locks and latches. In particular, we spend a lot of time waiting for the cache
buffers chains latches, although the sleep rate is only 0.04%. There are also lots of sleeps on the redo allocation and
enqueues latches.
My hunch is that you may have a sequence with NOCACHE or ORDER specified. If not, then following the latch level
rule you should tune the redo allocation latch first (log_small_entry_max_size = 0, reduce commit frequency, etc).
Thereafter, you will have to determine which cache hash chains and database blocks are causing the contention by
joining V$LATCH_CHILDREN to X$BH.
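The join might be sketched as follows (run as SYS, since X$BH is needed; the column names are as of release 8):

```sql
-- Find the cache buffers chains child latches with the most sleeps,
-- and the file and block numbers of the buffers that hash to them.
select l.addr, l.sleeps, b.dbarfil, b.dbablk
  from v$latch_children l, x$bh b
 where l.name = 'cache buffers chains'
   and l.sleeps > 0
   and b.hladdr = l.addr
 order by l.sleeps desc;
```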
Performance of objects 22 December 1999
I need information about performance considerations for Oracle8 "objects" versus "relational" methods
• for DML operations
• for query operations
Can you help me?
I have not personally investigated objects in any depth. However all of the papers that I have seen that have made any
attempt at performance comparisons have concluded emphatically that objects are much slower than relational methods.
In short, unless you are using OCI and its object navigation features, you are best advised to leave objects entirely
alone.
oerr on NT 24 December 1999
Is there a command on NT similar to oerr on Unix?
There is a bit of PL/SQL to do the job on Jonathan Lewis' web site. If you don't already know Jonathan's web site, it is
one of the best.
Releasable hashed locking 27 December 1999
Your book says, "... the lock handles on releasable hashed lock elements are never actually released, despite the lock
elements being releasable". What benefit is there in them being releasable then?
The LCKn processes retain their handles on releasable hashed lock elements because they are "hashed". That is, those
lock elements are permanently allocated to protect the range of blocks that hash onto them. The only significance in
them being "releasable" is that they are taken from gc_releasable_locks, and those locks are called releasable because
they were initially introduced for fine-grained locking. Maybe it would be less confusing if Oracle called them
"acquirable" lock elements instead.
Creating an ordered table 29 December 1999
How should I create an ordered table, since ORDER BY is not allowed in CREATE TABLE AS SELECT?
It is version specific. The ORDER BY clause is allowed in 8i and some 7.3.4 versions. The normal workaround on other
versions is to use an /*+ index(...) */ hint in the CREATE TABLE AS SELECT statement.
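A sketch of the hint workaround (the table, index and column names are placeholders; a predicate on the indexed column is needed for the hint to take effect):

```sql
create table emp_copy as
select /*+ index(emp emp_pk) */ *
  from emp
 where empno > 0;
```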
listener.log 20 January 2000
My listener.log file is enormous! Would it be safe to trim it with both the instance and listener up, or do I need to bring
the listener down?
Yes, you need to bounce the listener. The listener keeps the log file open, so after the rename it will keep writing to the
renamed file until it is restarted. Try the following.
mv listener.log listener.log.prev
lsnrctl stop
lsnrctl start
You may want to put this in a script and have it run periodically from cron.
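A minimal sketch of such a script. The directory is an assumption, a plain file stands in for the real listener.log so the pattern can be demonstrated anywhere, and the lsnrctl calls are skipped if the executable is not on the path.

```shell
#!/bin/sh
# Hypothetical location; the real log usually lives under $ORACLE_HOME/network/log.
LOG_DIR=/tmp/listener_log_demo
mkdir -p "$LOG_DIR"
touch "$LOG_DIR/listener.log"        # stand-in for the real listener.log

# Rename first; the listener keeps writing to the renamed file until
# it is bounced, so no entries are lost.
mv "$LOG_DIR/listener.log" "$LOG_DIR/listener.log.prev"

# Bounce the listener so that it creates and opens a fresh listener.log.
if command -v lsnrctl >/dev/null 2>&1; then
    lsnrctl stop
    lsnrctl start
fi
```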
Update mechanism 20 January 2000
I have been trying (for quite some time now) to get an understanding of the processes that occur during the life of a
transaction - say an update. Things like, What does an ITL entry contain; When is it made; What is the lock byte; What
is fast commit and delayed block clean out; How does a transaction know which rollback segment to go to; What does
the before image look like; and What happens at commit time?
This is a BIG question. I will not answer in detail, but ... undo data is not a before image of the block, it consists of
opcodes and arguments, much as redo data does. For an update, a rollback segment is chosen (the next in rotation
among those with the fewest active transactions); a slot is chosen in the transaction table of the rollback segment header
block; an initial undo data block is chosen for the transaction's undo; undo is prepared for the change to the rollback
segment header block; undo is prepared for the change to the table data block; redo is prepared for all these changes; the
redo is copied into the log buffer; the redo is applied to the blocks in cache. One of the data block changes is the
establishment of an ITL in the block header for the transaction. The ITL encodes both the transaction id (which is based
on the rollback segment number, slot number and wrap number) and the undo data block address (UBA) for the undo
for the changes to the block. The lock byte goes into the 3-byte row header for the row being updated, and points to the
ITL entry. The UBA in the rollback segment transaction table is the most recent UBA for the transaction. The commit
generates a special opcode into the log buffer, updates the transaction table in the rollback segment header block, and
syncs the log buffer.
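The ITL entries and row lock bytes for a particular block can be inspected with a block dump (the file and block numbers here are placeholders; the dump is written to a trace file in user_dump_dest):

```sql
alter system dump datafile 5 block 1234;
```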
State object free lists 25 January 2000
What is a process, session or transaction state object? Is the transaction state object free list the same thing as a
transaction freelist?
A state object is just a structure in the SGA containing information about the state of a database resource and a list of
pointers to subordinate resources. If a process or session dies, or is killed, then it is the state object hierarchy for that
process or session that is used by PMON to clean up and free the resources that were being used by it. There are distinct
arrays in the SGA for process, session and transaction state objects. The size of these arrays is set via the processes,
sessions and transactions initialization parameters. You can see parts of these structures exported via the fixed tables
X$KSUPR, X$KSUSE and X$KTXCB.
One of the members of each of these structures is a pointer to the next free slot in the array. If a process, session or
transaction state object is free, it has a valid pointer to another free state object in the same array (unless it is at the end
of the free list). These pointers implement a free list through the state object arrays. They are modified under the
protection of the process allocation, session allocation and transaction allocation latches respectively. State objects are
always added to and removed from the heads of their free lists.
The transaction state object free list is a totally different thing from transaction freelists. Transaction freelists are linked
lists of data blocks in a particular segment. When a transaction frees space in a data block so that the space used falls
below PCTUSED, then that block is linked to a transaction freelist.
Sequence problem 29 January 2000
We have just switched our database over to a new server. We used a full database export and import to transfer the data.
The export was done with the old database in STARTUP RESTRICT mode. Now we are getting unique constraint
violation errors when trying to insert new records, because the LAST_NUMBER for one sequence is less than the
maximum value in the table. How can this have happened? Also, I tried updating SYS.SEQ$.LAST_NUMBER with the
correct next value. But when I run the program, I get the same error. What can I do?
Did you test the full database import against the new database before the actual switchover? If so, you may have
forgotten to drop the sequence before the final import. You don't need to tamper with the data dictionary to fix the
problem. Just execute the following commands ...
By the way, I don't regard my book as a basic tuning book. It is an advanced internals book, and has very little to say
about basic tuning.
oiconnect 5 February 2000
You mentioned $ORACLE_HOME/bin/oiconnect. What is this utility expected to do? I cannot find it in my 8.1.5/bin
directory.
It just checks whether it is possible to connect to the instance. It is not there in 8.1.5. Have a look at our db_check.sh
script for an alternative.
USERENV 5 February 2000
What are the parameters that can be passed to the USERENV function?
It is basically what is documented in the SQL Reference manual, plus ...
As to whether you should go, that depends on whether you want or need a really deep understanding of Oracle. Probably only
1% of DBAs need to know this stuff.
Encrypted passwords 16 March 2000
Are the encrypted passwords shown in DBA_USERS portable across platforms and versions? Do you know what the
encryption algorithm is?
I don't know what the encryption algorithm is, nor would I tell you if I did! But it is not platform or version dependent.
The only variables are the username and password strings.
Here documents in shell scripts 19 March 2000
How come the following doesn't work?
if $var=1; then
sqlplus -s apps/$APPS_PASS <<EOF1
-- sql statements
EOF1
fi
If there is leading white space on the line containing the here document terminator, then the here document must be
introduced with the <<- token. However, I don't like here documents. Apart from being ugly, they create an unnecessary
temporary file, which if done intensively can stress some parts of the operating system. I would write the shell code
above as
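A hedged sketch of one such rewrite, piping the statements in rather than using a here document. Here cat stands in for sqlplus -s apps/$APPS_PASS so the pattern can be run anywhere; note also that the test in the question needs to be written as [ "$var" = 1 ].

```shell
#!/bin/sh
# cat is a stand-in for: sqlplus -s apps/$APPS_PASS
SQLPLUS="cat"
var=1
if [ "$var" = 1 ]; then
    # printf feeds the statements through a pipe, so no temporary
    # file is created as there would be for a here document.
    printf '%s\n' \
        "select sysdate from dual;" \
        "exit" |
    $SQLPLUS
fi
```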
OK, here's a complete list with a brief description of each type of process.
Dedicated Servers
Each of these processes is dedicated to serving a single client process. They parse and execute its SQL
statements, read database blocks from the data files, make changes to buffered database blocks, and return
results to the client process. Dedicated server processes are also called shadow processes, particularly when
their client process is running on the same computer.
Shared Servers – Snnn
Shared server processes are used in MTS configurations. They perform the same tasks as dedicated server
processes, however, they are not associated with any particular client process. Instead they service requests
from the call request queue and return results to a result queue.
Dispatchers – Dnnn
Dispatcher processes mediate between MTS client processes and the shared servers. Incoming calls are
placed in the request queue for processing by a shared server, and outgoing responses are taken from the
response queue and transmitted to the clients. Each dispatcher process can host multiple connections if they
share the same network transport protocol.
Job Queue Processes – SNPn
The job queue processes perform scheduled tasks such as executing jobs scheduled with the DBMS_JOB
package, refreshing local snapshots, and propagating advanced queuing messages and replicated
transactions to other databases. Up to 36 of these processes may be configured.
Queue Monitor Processes – QMNn
Up to 10 of these processes may be used by the Advanced Queuing facility for message delay, expiration
and retry, and to collect interval statistics.
Event Monitor – EMN0
This process is also associated with Advanced Queuing. It is responsible for the asynchronous notification
of subscribers that have registered a callback request for particular messages.
Parallel Execution Slaves – Pnnn
An instance may have up to 256 parallel execution slaves. These slaves are used for parallel SQL execution,
parallel recovery, and parallel propagation of replicated transactions. When one of these operations is
performed in parallel, the driving dedicated server, shared server or job queue process acts as the
coordinator for a set of parallel execution slaves.
Log Writer – LGWR
Whenever a change is made to the database and whenever a transaction ends, the low-level database change
operations and transaction markers are recorded in the log buffer. The LGWR process writes this
information from the log buffer to disk in order to safeguard recovery.
Archivers – ARCn
In ARCHIVELOG mode, used log files must be archived to another location before they can be reused. The
ARCn processes are responsible for automatic archiving. Up to 10 archive processes may be used.
Database Writers – DBWn
When a change is made to the database, modified database blocks are seldom written immediately to disk.
Instead they are retained in the database buffer cache and written to disk in the background by the DBWn
processes. Up to 10 database writer processes may be used.
Checkpoint Process – CKPT
In a checkpoint the DBWn processes write out all database blocks that were modified before the checkpoint
SCN. This is intended to limit the redo that must be applied in the event of instance recovery. For all
checkpoints other than incremental checkpoints, the last step in checkpoint processing is to update the
checkpoint SCN in headers of all affected data files. This task is performed by the CKPT process.
System Monitor – SMON
When necessary, the system monitor process performs crash recovery in single instance Oracle and instance
recovery in parallel server configurations. SMON is also responsible for a range of space management
housekeeping functions such as shrinking rollback segments that have extended beyond their optimal size,
coalescing contiguous free extents in the free extent table of the data dictionary, cleaning up obsolete
temporary segments, and allocating space in sort segments.
Process Monitor – PMON
Oracle server processes are sometimes killed from the operating system, or die unexpectedly due to
unrecoverable errors. The instance is aborted if any of the essential processes dies or is killed. Otherwise,
the PMON process frees database resources that were being used by the dead process, rolls back any
uncommitted transactions, and restarts the process if possible.
Recoverer – RECO
Oracle automatically uses a two-phase commit mechanism for distributed transactions to ensure that all the
transaction branches either commit or roll back together. Local transaction branches are said to be in-doubt
between the two phases of a two-phase commit. If the second commit phase cannot be completed due to a
network or instance failure, then it is uncertain whether the transaction is to be committed or rolled
back. In this case RECO waits until the connection can be restored, and then automatically completes the
transaction.
I/O Slaves – Innn
On operating systems that do not support asynchronous I/O, each process can only perform one I/O
operation at a time. This constraint is untenable for processes that require high I/O throughput. Oracle
allows multiple I/O slaves to be configured to achieve parallelism in such cases.
Lock Manager Daemons – LMDn
In parallel server environments Oracle synchronizes access to resources that are shared between instances
using instance locks. Each instance manages a subset of the instance locks. The LMDn processes service
requests for locally managed locks directly, and forward requests for remotely managed locks to the remote
node. The LMDn processes are also responsible for the detection of distributed deadlocks. Multiple lock
manager processes can be configured for each instance, but one is normally sufficient because the messages
are passed asynchronously.
Lock Processes – LCKn
The instance locks that protect globally cached resources are held by the lock processes on behalf of their
instance. Other instance locks are held directly by the processes with an interest in the resource. Up to 10
lock processes may be configured, but one is normally sufficient, as these processes also operate
asynchronously. If there are multiple lock processes, then the responsibility for PCM locks is evenly
distributed between them.
Lock Monitor – LMON
The lock monitor process handles the redistribution of instance locks as instances are started up and shut
down, and the recovery of instance lock information prior to instance recovery. LMON also cooperates with
PMON in the recovery of dead processes holding instance locks, and has a role in the establishment of
communication paths with remote parallel execution slaves.
Buffer Server Processes – BSPn
If a process requires a consistent read image of a buffer for which its instance does not have the required
instance lock, then from release 8.1 onwards it requests that buffer from the instance holding the blocking
instance lock. A buffer server process in that instance receives the request, constructs the required CR
buffer, and sends it back to the requesting process directly over the interconnect. Up to 10 buffer server
processes may be started dynamically in each instance depending on the load.
Wakeup Monitor – WMON
When an Oracle process sleeps, waiting for an event, the sleep ends either when the process is posted,
indicating that the event has occurred, or when the sleep times out. The alarm() system call is used to set
the time out. However, prior to release 8.1, if the wakeup monitor process was enabled, then most sleeps
would never time out. Instead, the sleeping process was posted by WMON.
The wakeup monitor can be enabled by setting the reduce_alarm parameter to TRUE. The interval at which
it checks for sleeping processes to wakeup is set by the _wakeup_timeout parameter. This parameter is set
in centiseconds and defaults to 100 centiseconds, that is one second. This information is included here
merely for completeness. Do not attempt to enable the wakeup monitor in a busy production environment.
The effect on performance would almost certainly be disastrous.
Trace Writer – TRWR
TRWR is the most obscure Oracle process. It writes out trace files from an internal tracing facility that is
seldom used. This facility is controlled by a set of hidden parameters, and by the ALTER TRACING
command. Trace messages are stored in a circular buffer in memory, and written to disk asynchronously by
TRWR.
Copyright © Ixora Pty Ltd
Questions and Answers
PL/SQL
Space freed by recursive transactions 13 April 1999
When I rebuild a set of indexes using ALTER INDEX REBUILD from within a PL/SQL block using DBMS_SQL, the
space required to perform the rebuilds is not released until all the indexes in the set have been rebuilt. I have tried
issuing a COMMIT after each index is rebuilt, but this has no effect, nor would I expect it to. At the extreme, if I rebuild
all indexes in the tablespace, I need twice as much space available as I need to store the indexes.
If I create a script containing the ALTER INDEX REBUILD commands and run it from SQL*Plus, space is released
after each index is rebuilt, as I would expect.
Is there a fix or workaround that would allow me to rebuild a set of indexes through DBMS_SQL calls that does not
require keeping my index tablespace more than 2 times its current usage?
Because your index rebuilds are not top level transactions, but recursive transactions, the call to free the temporary
segments that they have used will always fail because the row cache entry will be marked as dirty and the top level
transaction is still active. So Oracle cannot be confident that a parent transaction is not using the segment. This is why it
is necessary to have SMON clean up temporary segments.
Your SQL*Plus workaround can be made into a complete solution by writing another script to generate the rebuild
index script file dynamically and then execute it.
PL/SQL timing 7 July 1999
Is there any utility / tool that can tell us the time taken to execute a PL/SQL package?
Within PL/SQL you can use DBMS_UTILITY.GET_TIME. If you are just executing it from SQL*Plus, then try SET
TIMING ON.
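A minimal timing sketch using GET_TIME, which returns centiseconds (my_package.my_proc is a placeholder):

```sql
set serveroutput on
declare
  t0 binary_integer;
begin
  t0 := dbms_utility.get_time;
  my_package.my_proc;
  dbms_output.put_line('elapsed: ' ||
    (dbms_utility.get_time - t0) || ' centiseconds');
end;
/
```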
SLAX: parser 5 October 1999
I'm getting major performance differences between my production system and my development system. A couple of the
observed differences are that the development system has both the Oracle7 Distributed Database option and the SLAX:
parser option. What in the world is the SLAX: parser option? I can't find any mention of it on my 7.3.4 documentation
CD.
PL/SQL needs a slightly different parser for each host language context. The way Oracle have done this is to use a
parser generator called SLAX to generate a PL/SQL parser dynamically when linking the other products. From the
installer's point of view, SLAX is a separate product that is included according to dependency rules if required.
Somewhat ambiguously, the name SLAX is also used for the generated parsers.
BAMIMA: Bam buffer 11 November 1999
What is "BAMIMA: Bam buffer" as shown in the KSMLRCOM column of X$KSMLRU?
http://www.ixora.com.au/q+a/plsql.htm
ALTER SYSTEM in PL/SQL 9 December 1999
I'm trying to write a stored procedure (Oracle 7.3.4 / SunOS 5.6) to kill all sessions for a given user. It compiles; it
executes; but it doesn't actually kill the sessions, nor mark them as killed in V$SESSION. I've changed the command to
various other DDL statements and it works as expected. I've traced the SQL, and the parse, execute and fetch all appear
to succeed. If I cut and paste the output, it works exactly as it should but not when run from the stored procedure. Do
you have any ideas why this might not be working? Am I missing something simple?
Yes, check the documentation. ALTER SYSTEM is not a DDL statement. It is a "system control statement". And as the
book says "ALTER SYSTEM is not supported in PL/SQL". However, you can work around the restriction using
DBMS_SQL.
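A hedged sketch of the workaround (the sid and serial# are placeholders; DBMS_SQL executes non-query statements like this at parse time, so no call to EXECUTE is needed):

```sql
declare
  c integer;
begin
  c := dbms_sql.open_cursor;
  dbms_sql.parse(c, 'alter system kill session ''12,345''',
                 dbms_sql.native);
  dbms_sql.close_cursor(c);
end;
/
```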
PL/SQL or C? 29 December 1999
We need to do some batch processing on about 1,000,000 rows per night. Given that PL/SQL is stored in the database in
a compiled form, is it worth the effort of rewriting the code in C?
The "compiled" form of PL/SQL is called PCODE, which stands for pseudo code, and is interpreted at runtime by the
PL/SQL engine. That is, it is not really compiled; it is just pre-compiled. There is also a machine dependent form of
pseudo code used in some cases called MPCODE. Oracle is working on native compilation for their Java code, and
there is a chance that they will do the same for PL/SQL. Nevertheless, on current versions compiled C code can be
expected to outperform PL/SQL by a long way. It depends on what you are doing, but I would be surprised if the
performance gain were less than 20%.
Wrapped PL/SQL 5 January 2000
Does wrapped PL/SQL code take less memory in the SGA?
No, wrapped code takes more space in the data dictionary, but the same amount of memory in the library cache when it
is loaded.
expected symbol name is missing 13 January 2000
This is a question regarding PL/SQL program unit compilation. When I compile/recompile certain procedures, I get a
message, "expected symbol name is missing", but the compilation itself goes through successfully. I looked at the
procedure body, but didn't find anything wrong. Can you tell me what is happening here, and why I get the message?
This is an SQL*Plus error. You may be using the & character in your program unit. SQL*Plus expects a variable name
after that character.
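If the & is wanted as literal text, substitution can be turned off in SQL*Plus before compiling, for example:

```sql
set define off
create or replace procedure demo_proc is
begin
  dbms_output.put_line('Tom & Jerry');  -- & is no longer a prompt
end;
/
set define on
```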
Partitions dropped from PL/SQL 24 March 2000
I have been working on a PL/SQL package to support rolling partitions. During every run, it gets a spurious out of space
error. If I fix the script to sleep for 30 seconds and retry, it works. What's the problem?
The problem is that the space used by segments dropped from a PL/SQL procedure is not available for reuse until the
procedure exits, or until a space management error occurs which posts SMON to do the cleanup.
Questions and Answers
Backup and Recovery
In my opinion, cold backups are useful for taking a baseline before and after database changes, and that is about all.
Using dd versus cp for raw backups 15 June 1999
My understanding is that the dd command must be used for backups when working with raw devices on Unix (AIX in
this case).
A colleague of mine tried cp (not dd), restored a raw device file, and then successfully brought up the database. Of
course, we all insist that he use the dd command, but I admit that I don't know the exact reason for that requirement.
Can you shed some light on this, and why he was able to succeed?
Maybe your colleague used the cp command in recursive mode (-R). In this mode cp just copies the specification of
device files by performing a mknod(). It does NOT read or write the contents of the devices. This would give the
appearance of a valid backup and restore, but in truth nothing has been done to the datafiles whatsoever.
The cp command can be used to copy the contents of a raw device onto a file within a file system, but it cannot be used
to copy back the other way. It is a one way operation. However, beware that on some Unix brands, datafiles copied from
raw to file system in this way cannot be used to open the database, because the alignment of the first block will be
wrong.
The dd command is the only native Unix facility that can be used to both backup and restore the contents of raw
devices. However, there are many third party backup and restore products, such as ADSM, Alexandria, Omniback, and
so on, that can do the job too. In my opinion, these solutions are preferable to in-house scripts based on dd.
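The dd pattern can be sketched as follows; a plain file stands in for the raw device (real paths such as /dev/rdsk/... and appropriate block sizes would vary), so the example can be run anywhere.

```shell
#!/bin/sh
RAW=/tmp/demo_raw_device            # stand-in for a raw device path
BACKUP=/tmp/demo_raw_device.bak
# Fake some device contents.
dd if=/dev/zero of="$RAW" bs=1024 count=64 2>/dev/null

# Backup: dd reads the device contents block by block.
dd if="$RAW" of="$BACKUP" bs=64k 2>/dev/null

# Restore: dd can also write back to the device, which cp cannot do.
dd if="$BACKUP" of="$RAW" bs=64k 2>/dev/null
cmp -s "$RAW" "$BACKUP" && echo "restore verified"
```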
Split blocks during hot backups 6 September 1999
Can you explain about split blocks and the impact of _log_blocks_during_backup = FALSE in online backups?
A "split block" is one that was being written by Oracle at the same time that it was being read for the backup. Say the
first SCN on the block was 100, and the write was for SCN 200. Then the backup may contain the first half of the block
as at SCN 100, but the second half as at SCN 200.
By default, whole blocks are logged to the redo stream before the first change to any block during backups to allow for
the recovery of split blocks. For example, in this case the change would log the entire block as at SCN 100 into the redo
stream, followed by the change vectors for the change. Then during recovery roll forward, the whole block can be
overwritten with the correct image before applying the change vectors.
The flags field in the buffer header for each buffer contains an indication of whether the block has been logged. If the
flag is set, the block is not logged for subsequent changes. However, if a buffer is reused, and the block is then read
back into the cache for further modification, the whole block will be logged once again prior to such modification.
http://www.ixora.com.au/q+a/recovery.htm
Can NOLOGGING be disabled? 27 October 1999
Is there any undocumented way of disabling the use of the NOLOGGING feature? We are setting up a Hot Standby
system, and want to make sure that NOLOGGING cannot be used.
So far as I am aware, there is no way to do this. But there is an enhancement request already open for it (#651582).
Archiving over NFS 5 November 1999
Is archiving over NFS safe?
Files written and then closed over NFS are not guaranteed to be synced. Therefore, it would be possible for Oracle to
complete archiving, close the archived log file, begin to reuse the online log file and then for the NFS server to crash
before the final blocks written to that file had been synced. This would leave a hole in your redo stream, which is bad
news for any recovery.
Also, beware of using NFS for a secondary archival destination. Under most NFS implementations, if the remote NFS
server crashes, ARCn will hang at the next log switch, and eventually so will LGWR and everything else.
Backup and recovery internals 21 December 1999
Have you written anything about the internals of Oracle backup and recovery?
The Oracle Press Backup and Recovery Handbook by Rama Velpuri is the best treatment of Backup and Recovery
internals available.
EMC BCVs 30 December 1999
We are considering using EMC's BCV technology to facilitate the hot backups of a 500 G database. What do you
think?
The basic idea of splitting mirrors for backups is not new, and certainly not unique to TimeFinder. However, there is no
such thing as a free lunch, and the cost in this case is the performance hit on the system when the mirrors have to be
re-synced. This is done in the background by the EMC hardware, and only out-of-sync disk tracks need to be copied. So it
only affects foreground I/O response time noticeably if a lot of tracks have been changed and if reads cannot be satisfied
from cache, or if writes need to be destaged during the attach operation.
Single-task export 26 January 2000
I remember reading somewhere that one can speed up exports by running them in single-task mode. How is this done
under Oracle 7.3 on HP-UX?
To perform a single-task export, you must use the expst executable instead of just exp. You will probably need to create
the executable first, as follows.
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk singletask
If you are intending the single-task executables to be used by a user other than oracle, then you will need to set the
setuid bits as follows.