
Oracle Data Pump

The last couple of articles I have written focused on meta-data, or DDL, extraction for Oracle. The search for a Part III to those
articles led me to Oracle's Data Pump utility, not so much for the data movement piece but because it has an API for
meta-data. Even though I have been using 10g for quite some time, I had yet to use Data Pump, and I thought this would be
a great way to introduce myself, and possibly you the reader, to this new utility. This article will serve as a basic introduction
to Data Pump; in subsequent articles we will walk through the new command line options for Data Pump's export and
import (expdp & impdp) and look at the PL/SQL packages DBMS_DATAPUMP and DBMS_METADATA.

What is Oracle's Data Pump?


Oracle simply states that Data Pump is a "very high-speed" mechanism for moving data and meta-data from one database to
another. Is Data Pump a complete replacement for export (exp) and import (imp)? Oracle clearly states that the only time you
would, or should, use the original exp & imp is when you need backward compatibility with an earlier version that does not have
Data Pump export & import (expdp & impdp). Basically, that means importing a 10g export into a pre-10g database, or
importing into a 10g database from a pre-10g export. As we go through this evaluation of Data Pump and its usage, it will
become clearer whether there are any other such situations. I venture to say there are.

Data Pump has three distinct components.


What makes Data Pump unique is that all processing is done through the database server and through DIRECTORY objects. To do
this there are two packages internal to Oracle and an optional replacement for the original export and import command line
utilities (exp & imp).

expdp & impdp

These two command line utilities are very close to the old standby export & import (exp & imp) utilities. They are not stand-
alone utilities in the sense that they use the DBMS_DATAPUMP PL/SQL package to execute the export and import functions.
They accept a variety of command line options that, like exp & imp, allow you to pick and choose the objects to be exported
and imported.

DBMS_DATAPUMP

This is the Data Pump API, and it can be used independently of expdp & impdp. It is the package that is accessed to move data
and / or meta-data between databases.
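
To give a flavor of the API before we dig into it in a later part, here is a minimal sketch of a schema-mode export driven directly through DBMS_DATAPUMP. The directory name DATAPUMP, the file names, and the SCOTT schema are assumptions for illustration only; the full parameter set will be covered later in this series.

DECLARE
  h NUMBER;
BEGIN
  -- Open a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- Dump and log files are written through a DIRECTORY object (DATAPUMP is an assumed name here)
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'api_test.dmp', directory => 'DATAPUMP');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'api_test.log', directory => 'DATAPUMP',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Restrict the job to a single schema (SCOTT is an assumption)
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''SCOTT'')');
  -- Start the job and detach; the job keeps running inside the server
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/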

DBMS_METADATA

This is the meta-data API in Oracle, and it too can be used independently of expdp & impdp. If you remember, this is the package we
were using in the last two articles for extracting meta-data. I am very interested in how it interfaces with Data Pump.
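
As a quick reminder of that package, a single call is enough to pull the DDL for an object; the table and schema names here are simply the familiar demo objects used later in this series, and the SET LONG is only needed so SQL*Plus shows the whole CLOB.

SQL> SET LONG 20000
SQL> SELECT DBMS_METADATA.GET_DDL('TABLE','DEPT','SCOTT') FROM dual;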

Some features I thought interesting


o Data Pump export and import are not compatible with the old exp & imp, so if you need to import into a pre-10g
database it is best to stick with the original export utility (exp).
o There are new 10g features that are only supported (exported & imported) with the new Data Pump utility.
o With Data Pump you can allocate parallel threads to fine-tune resource consumption during export and import. You can
also have multiple client processes (expdp & impdp) attach to a Data Pump job. Both of these options allow you to throw
more resources at a Data Pump job to get it completed (a sample command follows this list). Data Pump also works on a
set of dump files rather than a single sequential dump file, and you can add dump files to a running dump process. I am
sure we will have to be concerned here with issuing too many I/O operations against a single disk area.
o Restartability is supported for export jobs, and you can also stop and restart jobs as you see fit. So if you see a
job consuming too many resources, or it is running too long and is going to affect production, you can just stop the
job and then restart it at a later time.
o Because Data Pump works on the database server through packaged procedures and directory structures, you now
have the ability to perform export and import operations over the network.
o There are various features that allow for the re-mapping, re-naming, inclusion, or exclusion of database objects and database
structures. Just as we saw with the DBMS_METADATA package in the last two articles, where we compared across schemas and
translated one schema's ownership to another, Data Pump has these same types of abilities for moving objects and data.
o Versioning capabilities of database objects.
o To use Data Pump you must have the EXP_FULL_DATABASE or IMP_FULL_DATABASE role, depending on whether you will be performing
export or import operations. These allow you to expdp & impdp across ownership for items such as grants, resource
plans, and schema definitions, and to re-map, re-name, or re-distribute database objects or structures.
o Access to data is through direct path and external tables. Under the covers both have the same data format, so Oracle
can switch between the two depending on which is the best method to move your data. Some of this is dependent upon the
targeted structure, as direct path cannot be used for some object types and database structures.
o Monitoring mechanisms have been put in place to see how a Data Pump job is performing. You can monitor through the
log file created or through one of three views that assist in the monitoring of Data Pump jobs.

- DBA_DATAPUMP_JOBS shows all the active Data Pump jobs and details on the state of each job.

- DBA_DATAPUMP_SESSIONS shows the user sessions that are attached to a Data Pump job.

- V$SESSION_LONGOPS shows the progress of a Data Pump job in terms of the amount of work it needs to do and
how much it has done.

o Through the use of internal tables, and what Oracle calls master & worker processes, there is intercommunication
between the processes performing work on behalf of Data Pump, as well as internal logging information, that allows
both the user and Oracle to understand where Data Pump is in the process.
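
As a taste of the parallel and multiple-dump-file features mentioned in the list above, a command along these lines spreads the work across four worker processes and a set of numbered dump files generated by the %U substitution variable. The user name and directory simply mirror the examples later in this series and are otherwise assumptions.

E:> expdp jkoop/pwd FULL=y DIRECTORY=datapump PARALLEL=4 DUMPFILE=expdata%U.dmp LOGFILE=expdata.log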

My security concerns
o Having the EXP_FULL_DATABASE & IMP_FULL_DATABASE privileges opens users up to seeing too much and affecting
too many objects across the database, where you may want a higher granularity of security.
o Operations across the network. This opens the possibility that unsecured network nodes could be infiltrated and then
used as a point from which your production servers are accessed and compromised.
o By definition, Oracle gives permission to the files in a DIRECTORY that a user would not normally have access to.
Also be aware that there is a default server-based directory object, DATA_PUMP_DIR, that Data Pump will look for if a
DIRECTORY is not specified. While I have not seen this directory created by default in an Oracle installation, and the
manuals say it needs to be created by a DBA, it is still something to be on the lookout for in subsequent releases as
it could create a security hole.
o Couple the possibility of network holes with the ability to spawn multiple client (expdp & impdp) processes, and a server
could easily be overcome with processes that would either become a bottleneck or bring your system to a halt.
o It is highly recommended that monitoring and auditing of these new server processes be undertaken, or that you at
least validate that they are shut down or restricted to limited use (a simple check follows this list).
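
One simple starting point for that validation, offered here only as a sketch, is to see who currently holds the two Data Pump roles:

SQL> SELECT grantee, granted_role FROM dba_role_privs
     WHERE granted_role IN ('EXP_FULL_DATABASE','IMP_FULL_DATABASE');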

Data Pump seems to have a lot of nice features to facilitate the movement of data and meta-data between databases. The
monitoring and tuning of Data Pump jobs seems to be the greatest benefit of moving to this new form of export and
import. My only concern is the opening up of the database to see the O/S through database links and directory objects.
Hopefully this article served as a nice introduction to Data Pump; I know I learned a lot just by writing this high level
overview. Come back for subsequent parts to this introduction and we will explore how Data Pump really works and hopefully
put to bed any security concerns of mine.
Since we are all familiar with Oracle's original export (exp) utility, and in my opinion Data Pump will be replacing exp soon, I
thought it would be good to start getting familiar with this utility through some relatively simple Data Pump exports (expdp)
that are similar to the way we have used exp in the past, in particular the FULL export.

As a word of caution, the Data Pump exports (expdp) are not compatible with exp. So as you go forth and play with this
utility, please at least name the exports something that signifies that the dump file was created by expdp so that you won't
get confused. Also, since this utility is not backward compatible, if you have any databases prior to 10g and you are using exp,
you may want to hold off on implementing the new expdp utility, as you will not be able to import into any pre-10g databases.

Where are my dmp files?


Unlike the original export utility in Oracle, Data Pump runs only on the server side. You may initiate the export from
a client, but the job(s) themselves will run inside an Oracle server. No dump files (expdat.dmp) or log files will
be created on your local machine if you initiate a Data Pump export (expdp) there. But, as we will see through a couple of
examples, if you have an Oracle server on your local machine you can get Oracle to produce dump files there.

Oracle creates dump and log files through DIRECTORY objects, so before you can use Data Pump you must create a DIRECTORY
object. There are a few different "default" mechanisms for Oracle to determine an appropriate DIRECTORY to use, mostly
through environment variables and a default directory name that Oracle will look for. But as we all should know, we should
not leave this to chance; we should explicitly create and use a directory object name of our choice. As soon as you create a
default object, a DIRECTORY here, you open yourself up to security breaches, and thus the practice should be
avoided. So here I have logged into my S1 database and will create a DIRECTORY named datapump.

SQL-S1> CREATE DIRECTORY datapump AS 'C:\user\datafile\datapump';
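
It only takes a quick query to see which directory objects, including any default DATA_PUMP_DIR, already exist in the database; this is just a verification step and assumes you have access to the DBA views.

SQL-S1> SELECT directory_name, directory_path FROM dba_directories;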

Then, as you use Data Pump, you can reference this DIRECTORY as an export parameter indicating where you would like the dump or
log files to end up. It is worth noting here that a log file will overwrite an existing log file of the same name, but a dump file
with the same name as an existing one will only create an error condition and error out the Data Pump job. This was not the case
with Oracle's original export utility (exp), where subsequent exports would overwrite all files. With Data Pump this is a nice
safeguard, but it can also create problems for those of us who did nightly exports to the same location and file names. Now we
have to think about cleanup routines. A small price to pay for additional security that could save your life one day when the
routine that scrapes old dump files fails.

Just like the original exp utility, Data Pump requires some authorization to allow users to export. Here I am granting
EXP_FULL_DATABASE to a user JKOOP on database S1, which will allow the user to perform a full database export; without it
the JKOOP user could only export their own schema. I also need to grant READ and WRITE privileges on the recently created
DIRECTORY, also on database S1.

SQL-S1 > GRANT EXP_FULL_DATABASE to jkoop;


SQL-S1 > GRANT READ, WRITE ON DIRECTORY datapump to jkoop;

Now for the Examples


We are all familiar with the FULL database export. Data Pump easily performs this with the following command line. Notice
there are just a few name changes, and instead of specifying the directory path in the file locations, the additional parameter
for your DIRECTORY is supplied. This command line assumes you are on the database server and environment variables are
properly set for a direct connection.

E:> expdp jkoop/pwd FULL=y DIRECTORY=datapump DUMPFILE=expdata.dmp LOGFILE=expdata.log

We have also used the exp utility to connect through a TNS entry to perform an export on a remote database. Data Pump can
also easily do this by adding a connection identifier to the user/password parameter, the exact same way it is done in exp.

E:> expdp jkoop/pwd@S1 FULL=y DIRECTORY=datapump DUMPFILE=byTNS.dmp LOGFILE=byTNS.log

Now for a few export tricks. These next two examples assume an additional database named S2. They allow for a
connection to the target database that we want to export through a database link, so the first thing to do is create a
database link.

SQL-S2> CREATE DATABASE LINK S1 CONNECT TO JKOOP IDENTIFIED BY PWD USING 'S1';

The key item to remember with Data Pump, and where files will end up, is that wherever Data Pump runs it
requires a DIRECTORY in which to place dump and log files. Since we will be connecting to the S2 database, a
DIRECTORY for placing these files will be required there. Here I create a new DIRECTORY named mydump on database S2.

SQL-S2> CREATE DIRECTORY mydump AS 'D:\mydump';

Now for the command line options. Here we are running on the server where database S2 resides and will be producing a full
dump of database S1 through the NETWORK_LINK, but placing the dump and log files on the server where database S2
resides. This was great news for me, as when I first read the documentation I thought all dumps would have to reside on the
server of the database being exported. Now I can almost produce an environment where a single database is a 'gateway' for my
database exports if needed.

E:> expdp jkoop/pwd FULL=y DIRECTORY=mydump NETWORK_LINK=S1 DUMPFILE=byDB.dmp LOGFILE=byDB.log

Ok, suppose we do produce that gateway for exports. Do we need to execute all commands from that server? No! With Data
Pump we need only connect to the S2 database through a TNS entry and then supply the appropriate NETWORK_LINK to the
database we want to export.

E:> expdp jkoop/pwd@S2 FULL=y DIRECTORY=mydump NETWORK_LINK=S1 DUMPFILE=TNSDB.dmp LOGFILE=TNSDB.log

Introduction to Monitoring Data Pump


A simple way to gain insight into the status of a Data Pump job is to look at a few views maintained within the Oracle
instance where the Data Pump job is running. These views are DBA_DATAPUMP_JOBS, DBA_DATAPUMP_SESSIONS, and
V$SESSION_LONGOPS. These views are critical in the monitoring of your export jobs so that, as we will see in a later article, you
can attach to a Data Pump job and modify the execution of that job.

DBA_DATAPUMP_JOBS
This view will show the active Data Pump jobs, their state, degree of parallelism, and the number of sessions attached.

SQL> select * from dba_datapump_jobs

OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE DEGREE ATTACHED_SESSIONS

---------- ---------------------- ---------- ---------- ------------- --------- -----------------

JKOOP SYS_EXPORT_FULL_01 EXPORT FULL EXECUTING 1 1

JKOOP SYS_EXPORT_SCHEMA_01 EXPORT SCHEMA EXECUTING 1 1
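
The JOB_NAME column is also what you would use to re-attach a client to a running job. A hedged example is shown below; the job name is one of the system-generated names from the output above, and the interactive commands available after attaching (such as stopping and restarting the job) are covered in a later article.

E:> expdp jkoop/pwd ATTACH=SYS_EXPORT_FULL_01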

DBA_DATAPUMP_SESSIONS
This view give gives the SADDR that assist in determining why a Data Pump session may be having problems. Join to the
V$SESSION view for further information.

SQL> SELECT * FROM DBA_DATAPUMP_SESSIONS

OWNER_NAME JOB_NAME SADDR

---------- ------------------------------ --------

JKOOPMANN SYS_EXPORT_FULL_01 225BDEDC

JKOOPMANN SYS_EXPORT_SCHEMA_01 225B2B7C
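
A sketch of that join, using the SADDR column as the key, might look like this; the columns selected from V$SESSION are just a reasonable starting set.

SQL> SELECT s.sid, s.serial#, s.username, s.program
     FROM v$session s, dba_datapump_sessions d
     WHERE s.saddr = d.saddr;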

V$SESSION_LONGOPS
This view helps determine how well a Data Pump export is doing. Basically gives you a progress indicator through the
MESSAGE column.

SQL> select username,opname,target_desc,sofar,totalwork,message from V$SESSION_LONGOPS

USERNAME OPNAME TARGET_DES SOFAR TOTALWORK MESSAGE

-------- -------------------- ---------- ----- ---------- ------------------------------------------------

JKOOP SYS_EXPORT_FULL_01 EXPORT 132 132 SYS_EXPORT_FULL_01:EXPORT:132 out of 132 MB done


JKOOP SYS_EXPORT_FULL_01 EXPORT 90 132 SYS_EXPORT_FULL_01:EXPORT:90 out of 132 MB done

JKOOP SYS_EXPORT_SCHEMA_01 EXPORT 17 17 SYS_EXPORT_SCHEMA_01:EXPORT:17 out of 17 MB done

JKOOP SYS_EXPORT_SCHEMA_01 EXPORT 19 19 SYS_EXPORT_SCHEMA_01:EXPORT:19 out of 19 MB done
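
If you prefer a percentage over the raw MESSAGE text, the same view supports a simple calculation. This is only a sketch: TOTALWORK can be zero early in a job, hence the filter, and the OPNAME pattern assumes the default system-generated job names shown above.

SQL> SELECT opname, ROUND(sofar/totalwork*100, 2) pct_done
     FROM v$session_longops
     WHERE totalwork > 0 AND opname LIKE 'SYS_EXPORT%';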

The original export utility (exp) may or may not be going away soon. The documentation clearly states that Data Pump will
handle data types that exp will not, and that we should begin our migration to this new utility, except for those instances where
you must export between 10g and pre-10g databases. This article stepped through the process of performing FULL exports, as
these are typical in Oracle environments. If you are doing schema or table exports the change is simple, and we will visit those
in subsequent parts to this series.

On our quest to learn about Oracle's Data Pump utility, it has often been compared to the old export and import (exp & imp)
utilities that we have all grown to love (or hate). This article is where Data Pump takes a detour from these old
utilities and begins to shine. This article will explore some of the export modes available and give examples of how to export
selected object types and the dependencies of those objects.

In order to use Data Pump, we learned in Part II of this series that a datapump directory is required to export from and import
into databases. Here are the three setup and authorization commands needed to get started.

SQL> CREATE DIRECTORY datapump AS 'C:\oradata\datapump';

SQL> GRANT EXP_FULL_DATABASE to scott;

SQL> GRANT READ, WRITE ON DIRECTORY datapump to scott;

In the last article various FULL exports were performed. These are termed 'FULL mode' exports for the obvious reason and had
the following format.

C:\>expdp scott/tiger FULL=y DIRECTORY=datapump DUMPFILE=full.dmp LOGFILE=full.log

A slight change to this example, replacing the FULL keyword with SCHEMAS, allows us to perform a SCHEMA mode export where a
particular schema will be exported. Anyone familiar with the old export / import (exp / imp) utilities should feel right at
home here. To export multiple schemas you need only separate each schema with commas.

C:\>expdp scott/tiger SCHEMAS=SCOTT DIRECTORY=datapump DUMPFILE=scott.dmp LOGFILE=scott.log
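
A hedged multi-schema variant of the same command would simply list the schemas; HR here is just an assumed second schema, and the file names are made up for illustration.

C:\>expdp scott/tiger SCHEMAS=SCOTT,HR DIRECTORY=datapump DUMPFILE=both.dmp LOGFILE=both.log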

Likewise we could change the SCHEMAS option and export all objects in a particular tablespace by switching to the
TABLESPACES export mode.

C:\>expdp scott/tiger TABLESPACES=USERS DIRECTORY=datapump DUMPFILE=TSusers.dmp LOGFILE=TSusers.log

If you wanted to export a single table, you need only switch to TABLE mode and use the following export command.

C:\>expdp scott/tiger TABLES=SCOTT.DEPT DIRECTORY=datapump DUMPFILE=dept.dmp LOGFILE=dept.log

The interesting point when issuing these commands is to take a close look at the export log for each of these
export modes. When taking a full schema export you will notice that the export pulls out various additional object types such
as grants, roles, sequences, and views, to name just a few. Here is the log from the SCHEMA export performed above.

Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/********

SCHEMAS=SCOTT DIRECTORY=datapump DUMPFILE=scott.dmp LOGFILE=scott.log


Estimate in progress using BLOCKS method...

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 1024 KB

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/TABLE/COMMENT

Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION

Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE

Processing object type SCHEMA_EXPORT/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION

Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE

Processing object type SCHEMA_EXPORT/VIEW/VIEW

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/TRIGGER

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

. . exported "SCOTT"."DEMO_IMAGES" 56.57 KB 11 rows

. . exported "SCOTT"."DEMO_CUSTOMERS" 8.976 KB 7 rows

. . exported "SCOTT"."DEMO_ORDERS" 6.421 KB 10 rows

. . exported "SCOTT"."DEMO_ORDER_ITEMS" 6.578 KB 16 rows

. . exported "SCOTT"."DEMO_PAGE_HIERARCHY" 5.984 KB 18 rows

. . exported "SCOTT"."DEMO_PRODUCT_INFO" 7.656 KB 10 rows

. . exported "SCOTT"."DEMO_STATES" 6.046 KB 51 rows

. . exported "SCOTT"."DEMO_USERS" 7.179 KB 2 rows

. . exported "SCOTT"."DEPT" 5.648 KB 4 rows

. . exported "SCOTT"."EMP" 7.804 KB 14 rows

. . exported "SCOTT"."HT_ISSUES" 14.28 KB 29 rows

. . exported "SCOTT"."HT_PEOPLE" 7.203 KB 18 rows

. . exported "SCOTT"."HT_PROJECTS" 6.406 KB 5 rows

. . exported "SCOTT"."SALGRADE" 5.570 KB 5 rows

. . exported "SCOTT"."BONUS" 0 KB 0 rows


Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:

C:\ORADATA\DATAPUMP\SCOTT.DMP

Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 18:49:55

If we then take a look at the export log for a table, you will quickly notice that not all the object types that were exported in
SCHEMA mode have been exported in TABLE mode. Some of this is because, in our example, the DEPT table does not
have certain dependent objects, and some of it is because other object types are not exported at all even though they would seem
to have a dependency. For instance, indexes, triggers, and statistics will be exported under TABLE mode, but a view on the DEPT
table will not. So as a caution, be careful and examine your export logs; you may not be getting everything you think is a
dependent object.

Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/********

TABLES=SCOTT.DEPT DIRECTORY=datapump DUMPFILE=dept.dmp LOGFILE=dept.log

Estimate in progress using BLOCKS method...

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 64 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

. . exported "SCOTT"."DEPT" 5.648 KB 4 rows

Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:

C:\ORADATA\DATAPUMP\DEPT.DMP

Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 18:46:56

One way to determine the objects that will or can be exported for the different modes is to look at the three views
DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS. Each of these views, if queried, will
give you a list and short description of the specific object paths that you can expect INCLUDE or EXCLUDE to act on, depending
on the object you are exporting or importing. For instance, if you were to query the TABLE_EXPORT_OBJECTS view
with the following SQL, you would get a list of all object types that are dependent on exporting a table. As you can see, there is
no entry for exporting views based on a table export. In actuality there are 86 INCLUDE/EXCLUDE paths just in the
TABLE_EXPORT_OBJECTS view and many more in the other two export views. I would encourage you to select the object paths
from each of the views and get acquainted with what you can export.

SQL> SELECT object_path, comments FROM table_export_objects where object_path like 'TABLE%';

OBJECT_PATH COMMENTS

------------------------------------------------------- --------------------------------------------------

TABLE/AUDIT_OBJ Object audits on the selected tables

TABLE/COMMENT Table and column comments on the selected tables


TABLE/CONSTRAINT Constraints (including referential constraints)

TABLE/CONSTRAINT/REF_CONSTRAINT Referential constraints

TABLE/FGA_POLICY Fine-grained auditing policies

TABLE/GRANT Object grants on the selected tables

TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT Object grants on the selected tables

TABLE/INDEX Indexes

TABLE/INDEX/STATISTICS Precomputed statistics

TABLE/INSTANCE_CALLOUT Instance callouts

TABLE/POST_INSTANCE/GRANT/PROCDEPOBJ_GRANT Grants on instance procedural objects

TABLE/POST_INSTANCE/PROCDEPOBJ Instance procedural objects

TABLE/POST_INSTANCE/PROCDEPOBJ_AUDIT Audits on instance procedural objects

TABLE/POST_TABLE_ACTION Post-table actions

TABLE/PRE_TABLE_ACTION Pre-table actions

TABLE/PROCACT_INSTANCE Instance procedural actions

TABLE/RLS_CONTEXT Fine-grained access control contexts

TABLE/RLS_GROUP Fine-grained access control policy groups

TABLE/RLS_POLICY Fine-grained access control policies

TABLE/TRIGGER Triggers on the selected tables

TABLE_EXPORT/TABLE/AUDIT_OBJ Object audits on the selected tables

TABLE_EXPORT/TABLE/COMMENT Table and column comments on the selected tables

TABLE_EXPORT/TABLE/CONSTRAINT Constraints (including referential constraints)

TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT Referential constraints

TABLE_EXPORT/TABLE/FGA_POLICY Fine-grained auditing policies

TABLE_EXPORT/TABLE/GRANT Object grants on the selected tables

TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT Object grants on the selected tables

TABLE_EXPORT/TABLE/INDEX Indexes

TABLE_EXPORT/TABLE/INDEX/STATISTICS Precomputed statistics

TABLE_EXPORT/TABLE/INSTANCE_CALLOUT Instance callouts

TABLE_EXPORT/TABLE/POST_INSTANCE/GRANT/PROCDEPOBJ_GRANT Grants on instance procedural objects

TABLE_EXPORT/TABLE/POST_INSTANCE/PROCDEPOBJ Instance procedural objects

TABLE_EXPORT/TABLE/POST_INSTANCE/PROCDEPOBJ_AUDIT Audits on instance procedural objects

TABLE_EXPORT/TABLE/POST_TABLE_ACTION Post-table actions

TABLE_EXPORT/TABLE/PRE_TABLE_ACTION Pre-table actions

TABLE_EXPORT/TABLE/PROCACT_INSTANCE Instance procedural actions

TABLE_EXPORT/TABLE/RLS_CONTEXT Fine-grained access control contexts

TABLE_EXPORT/TABLE/RLS_GROUP Fine-grained access control policy groups

TABLE_EXPORT/TABLE/RLS_POLICY Fine-grained access control policies

TABLE_EXPORT/TABLE/TRIGGER Triggers on the selected tables


Through the INCLUDE/EXCLUDE options you can fine-tune your exports to pull exactly what you want from your databases. So,
to take advantage of these INCLUDE/EXCLUDE object types, we can perform an export on a table and not include statistics
with the following export command.

C:\>expdp scott/tiger TABLES=SCOTT.DEPT EXCLUDE=STATISTICS DIRECTORY=datapump DUMPFILE=dept.dmp LOGFILE=dept.log

Export: Release 10.2.0.1.0 - Production on Tuesday, 23 August, 2005 19:40:25

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production

With the Partitioning, OLAP and Data Mining options

Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/********

TABLES=SCOTT.DEPT EXCLUDE=STATISTICS DIRECTORY=datapump DUMPFILE=5dept.dmp LOGFILE=dept.log

Estimate in progress using BLOCKS method...

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 64 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

. . exported "SCOTT"."DEPT" 5.648 KB 4 rows

Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:

C:\ORADATA\DATAPUMP\DEPT.DMP

Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 19:40:40

When playing around with Data Pump export and using the INCLUDE / EXCLUDE feature, I soon found out that it was much
easier to use a parameter file (parfile) when specifying the different INCLUDE / EXCLUDE options. This is the same concept as
the old export and import (exp & imp) utilities. It is easier because, between trying to put all of the potential
options on one command line and the fact that "special" characters must be escaped when specifying INCLUDE /
EXCLUDE options, you will soon find a parfile much simpler to add to and subtract from. I tried a number of times
putting these options on a single command line but had numerous issues, so I would suggest just getting used to the parfile
from the start.
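
To illustrate the escaping problem, here is roughly what such a filter looks like when forced onto a Windows command line; the exact escaping needed is operating system and shell dependent, which is precisely why the parfile is easier.

C:\>expdp scott/tiger SCHEMAS=SCOTT INCLUDE=TABLE:\"IN ('DEPT')\" DIRECTORY=datapump DUMPFILE=dept.dmp LOGFILE=dept.log
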
For an example of using the parfile, I decided to export the DEPT table from the SCOTT schema and include views. Remember,
as noted earlier in this article, views are not available to export under a table, so if you look at the views
also noted above, you need to go up to at least a schema export to include views. So I created the following parfile. This will
actually export all views in the SCOTT schema; if you knew the view names associated with the DEPT table you could also
create an IN list much like the INCLUDE statement for the DEPT table (a hedged illustration follows the parfile).

Parfile dept.par

SCHEMAS=SCOTT
INCLUDE=TABLE:"IN ('DEPT')"

INCLUDE=VIEW

DIRECTORY=datapump

DUMPFILE=dept.dmp

LOGFILE=dept.log
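
If the dependent view names were known, the second INCLUDE line could be narrowed the same way as the table line; DEPT_V here is purely a hypothetical view name used for illustration.

INCLUDE=VIEW:"IN ('DEPT_V')"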

Here is the command line that would be issued. It looks very similar to the old export utility exp.

C:\>expdp scott/tiger parfile=dept.par

Export: Release 10.2.0.1.0 - Production on Tuesday, 23 August, 2005 19:54:46

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production

With the Partitioning, OLAP and Data Mining options

Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/******** parfile=dept.par

Estimate in progress using BLOCKS method...

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 64 KB

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/VIEW/VIEW

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

. . exported "SCOTT"."DEPT" 5.648 KB 4 rows

Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:

C:\ORADATA\DATAPUMP\DEPT.DMP

Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 19:55:04

Data Pump's import command is much the same as the export command. I actually just replaced expdp with the impdp
command in these examples and had no problems importing back into my database. Many times, though, we want to import
into a different schema, and this is accomplished with the REMAP_SCHEMA option. Here is an example where I imported the
DEPT table into a different schema.

C:\>impdp system/tiger REMAP_SCHEMA=SCOTT:JKOOPMANN DIRECTORY=datapump DUMPFILE=dept.dmp LOGFILE=dept.log

Oracle's Data Pump utility has many options that allow you to fine-tune what you export from a database. Just remember
to query the views (DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS) that dictate
the dependent objects that will be exported under certain scenarios. Also keep in mind you can just as easily exclude these
object types to pull out exactly what you want.
