
Materialized views
------------------

-- We create a sample materialized view (MVIEW)


-- As we do not indicate otherwise, the MVIEW is populated immediately (BUILD IMMEDIATE)
CREATE MATERIALIZED VIEW SALES_MV AS
SELECT t.calendar_year, p.prod_id, SUM(s.amount_sold) AS sum_sales
FROM times t, products p, sales s
WHERE t.time_id = s.time_id
AND p.prod_id = s.prod_id
GROUP BY t.calendar_year, p.prod_id;
-- We check the contents
SELECT * FROM SALES_MV WHERE ROWNUM < 5;
-- We check the MVIEW's attributes
SELECT * FROM DBA_MVIEWS WHERE MVIEW_NAME='SALES_MV';
-- We drop the MVIEW
DROP MATERIALIZED VIEW SALES_MV;
-- An example of building a MVIEW without the initial data load (BUILD DEFERRED)
CREATE MATERIALIZED VIEW sales_mv BUILD DEFERRED AS
SELECT t.calendar_year, p.prod_id, SUM(s.amount_sold) AS sum_sales
FROM times t, products p, sales s
WHERE t.time_id = s.time_id
AND p.prod_id = s.prod_id
GROUP BY t.calendar_year, p.prod_id;
-- We check that it has no records
SELECT * FROM SALES_MV WHERE ROWNUM < 5;
-- We launch a manual refresh
EXEC DBMS_MVIEW.REFRESH('SALES_MV');
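-- DBMS_MVIEW.REFRESH also accepts the refresh method as a second argument; a
-- minimal sketch ('C' = complete, 'F' = fast, '?' = force):
EXEC DBMS_MVIEW.REFRESH('SALES_MV', 'C');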
-- We try other ways to refresh MVIEWs
-- Refresh all MVIEWs that depend on the SALES table
VARIABLE failures NUMBER;
EXEC DBMS_MVIEW.REFRESH_DEPENDENT(:failures,'SALES');
-- Refresh all MVIEWs (run as SYSDBA)
EXEC DBMS_MVIEW.REFRESH_ALL_MVIEWS(:failures);
-- We check that the MVIEW is now populated
SELECT * FROM SALES_MV WHERE ROWNUM < 5;
-- We drop the MVIEW
DROP MATERIALIZED VIEW SALES_MV;
-- We create the MVIEW LOGs
-- Afterwards you can see that several tables are created:
-- MLOG$_SALES
-- MLOG$_PRODUCTS
-- MLOG$_TIMES
-- As the MVIEW uses aggregate functions, we add INCLUDING NEW VALUES and SEQUENCE
-- ROWID is normally always added to allow Fast Refresh
-- In general, COMMIT SCN provides better performance for MVIEW LOGs, with some exceptions:
-- COMMIT SCN is not supported on tables with one or more LOB columns
-- All master tables of a MVIEW must have their logs created with COMMIT SCN
-- We also add the columns referenced in the SELECT or JOIN of the MVIEW
CREATE MATERIALIZED VIEW LOG ON sales
WITH SEQUENCE, ROWID (prod_id, time_id, amount_sold),
COMMIT SCN INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON products


WITH PRIMARY KEY, SEQUENCE, ROWID, COMMIT SCN
INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON times


WITH PRIMARY KEY, SEQUENCE, ROWID (calendar_year),
COMMIT SCN INCLUDING NEW VALUES;
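
-- A quick way to verify them (sketch): the logs are listed in DBA_MVIEW_LOGS
SELECT LOG_OWNER, MASTER, LOG_TABLE FROM DBA_MVIEW_LOGS
WHERE MASTER IN ('SALES','PRODUCTS','TIMES');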

-- We create a MVIEW with incremental (FAST) refresh on demand
CREATE MATERIALIZED VIEW sales_mv REFRESH FAST AS
SELECT t.calendar_year, p.prod_id, SUM(s.amount_sold) AS sum_sales
FROM times t, products p, sales s
WHERE t.time_id = s.time_id
AND p.prod_id = s.prod_id
GROUP BY t.calendar_year, p.prod_id;

-- We create a MVIEW that refreshes every day
CREATE MATERIALIZED VIEW sales_mv_daily REFRESH FAST NEXT SYSDATE+1 AS
SELECT t.calendar_year, p.prod_id, SUM(s.amount_sold) AS sum_sales
FROM times t, products p, sales s
WHERE t.time_id = s.time_id
AND p.prod_id = s.prod_id
GROUP BY t.calendar_year, p.prod_id;

-- We can specify that the MVIEW is updated every time a COMMIT is made
CREATE MATERIALIZED VIEW sales_mv_current REFRESH FAST ON COMMIT AS
SELECT t.calendar_year, p.prod_id, SUM(s.amount_sold) AS sum_sales
FROM times t, products p, sales s
WHERE t.time_id = s.time_id
AND p.prod_id = s.prod_id
GROUP BY t.calendar_year, p.prod_id;
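
-- A minimal check of the ON COMMIT behavior (a sketch; assumes UPDATE privileges
-- on SALES, and the no-op update is only there to generate a committed change):
UPDATE sales SET amount_sold = amount_sold WHERE ROWNUM = 1;
COMMIT;
SELECT * FROM sales_mv_current WHERE ROWNUM < 5;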

-- We can change the refresh frequency (every minute)
ALTER MATERIALIZED VIEW sales_MV REFRESH FAST NEXT SYSDATE+(1/1440);

-- We get the date of the next refresh of the MVIEW
SELECT NAME, NEXT FROM DBA_SNAPSHOTS WHERE NAME LIKE 'SALES_MV%';
-- We verify that the QUERY_REWRITE_ENABLED parameter is enabled
-- This parameter can take three values:
-- FALSE: Query Rewrite disabled
-- TRUE: the lower-cost plan is chosen, with or without Query Rewrite
-- FORCE: whenever it is possible to use Query Rewrite, it will be used
SHOW PARAMETER QUERY_REWRITE_ENABLED
-- We check whether our MVIEWs have Query Rewrite enabled
-- By default it is not enabled (REWRITE_ENABLED = N)
SELECT MVIEW_NAME, REWRITE_ENABLED FROM DBA_MVIEWS;
-- We enable the display of execution plans
SET AUTOTRACE TRACE EXPLAIN
-- We check the execution plan of the following query
-- We see that the plan accesses all the tables involved
SELECT t.calendar_year, p.prod_id, SUM(s.amount_sold) AS sum_sales
FROM times t, products p, sales s
WHERE t.time_id = s.time_id
AND p.prod_id = s.prod_id
AND s.prod_id = 13
GROUP BY t.calendar_year, p.prod_id;
-- We enable Query Rewrite for the SALES_MV MVIEW
ALTER MATERIALIZED VIEW SALES_MV ENABLE QUERY REWRITE;
-- We run the same query again to see if the plan changes
-- The operation "MAT_VIEW REWRITE ACCESS FULL" on SALES_MV confirms that it worked
SELECT t.calendar_year, p.prod_id, SUM(s.amount_sold) AS sum_sales
FROM times t, products p, sales s
WHERE t.time_id = s.time_id
AND p.prod_id = s.prod_id
AND s.prod_id = 13
GROUP BY t.calendar_year, p.prod_id;
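-- If the rewrite does not happen, DBMS_MVIEW.EXPLAIN_REWRITE explains why; a
-- sketch (the REWRITE_TABLE it writes to is created by utlxrw.sql):
@?/rdbms/admin/utlxrw.sql
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query => 'SELECT t.calendar_year, p.prod_id, SUM(s.amount_sold) ' ||
             'FROM times t, products p, sales s ' ||
             'WHERE t.time_id = s.time_id AND p.prod_id = s.prod_id ' ||
             'GROUP BY t.calendar_year, p.prod_id',
    mv    => 'SALES_MV');
END;
/
SELECT MESSAGE FROM REWRITE_TABLE;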
-- We run the script that creates the MV_CAPABILITIES_TABLE, where the report goes
@?/rdbms/admin/utlxmv.sql
-- We launch the MVIEW analysis
EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW('SALES_MV');
-- We consult the report
-- The report says that to have FAST REFRESH after any DML (REFRESH_FAST_AFTER_ONETAB_DML):
-- we need to add COUNT(expr) => COUNT(s.amount_sold) to the MVIEW
-- we need to add COUNT(*) to the MVIEW
-- PCT "Partition Change Tracking" does not apply in many cases because TIMES and
-- PRODUCTS are not partitioned
SELECT * FROM MV_CAPABILITIES_TABLE;
-- We recreate the view to enable all possible capabilities
DROP MATERIALIZED VIEW SALES_MV;
CREATE MATERIALIZED VIEW sales_mv REFRESH FAST AS
SELECT COUNT(*), COUNT(s.amount_sold), t.calendar_year, p.prod_id,
SUM(s.amount_sold) AS sum_sales
FROM times t, products p, sales s
WHERE t.time_id = s.time_id
AND p.prod_id = s.prod_id
GROUP BY t.calendar_year, p.prod_id;
-- We obtain the MVIEW capabilities report again
-- Now FAST REFRESH is possible after any type of DML
TRUNCATE TABLE MV_CAPABILITIES_TABLE;
EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW('SALES_MV');
SELECT * FROM MV_CAPABILITIES_TABLE;
-- We drop the MVIEWs and MVIEW LOGs
DROP MATERIALIZED VIEW SALES_MV;
DROP MATERIALIZED VIEW SALES_MV_DAILY;
DROP MATERIALIZED VIEW SALES_MV_CURRENT;
DROP MATERIALIZED VIEW LOG ON TIMES;
DROP MATERIALIZED VIEW LOG ON PRODUCTS;
DROP MATERIALIZED VIEW LOG ON SALES;

-- We create a couple of materialized views
CREATE MATERIALIZED VIEW SALES_G AS
SELECT PROD_ID, CUST_ID, TIME_ID, CHANNEL_ID, PROMO_ID, QUANTITY_SOLD, AMOUNT_SOLD
FROM SH.SALES;
CREATE MATERIALIZED VIEW PRODUCTS_G AS
SELECT PROD_ID, PROD_NAME, PROD_DESC
FROM SH.PRODUCTS;
-- We create the refresh group
-- Although we specify a refresh interval for the MVIEWs, it is not taken into
-- account because they are ON DEMAND
EXEC DBMS_REFRESH.MAKE('TEST_GROUP','SALES_G,PRODUCTS_G', SYSDATE, 'SYSDATE+1');
-- We launch a manual refresh
EXEC DBMS_REFRESH.REFRESH('TEST_GROUP');
-- We check the date of the last refresh
SELECT MVIEW_NAME, TO_CHAR(LAST_REFRESH_DATE,'YYYY/MM/DD HH24:MI:SS')
from DBA_MVIEWS WHERE MVIEW_NAME IN ('SALES_G','PRODUCTS_G');
-- We destroy the refresh group
EXEC DBMS_REFRESH.DESTROY('TEST_GROUP');
-- We drop the materialized views
DROP MATERIALIZED VIEW SALES_G;
DROP MATERIALIZED VIEW PRODUCTS_G;
# We create the DB LINK to the OCM database in OEM
# First we add the OCM entry to the tnsnames.ora file if we do not have it yet
vi $ORACLE_HOME/network/admin/tnsnames.ora
# Add these lines
OCM =
(DESCRIPTION=
(ADDRESS = (PROTOCOL = tcp) (HOST = ocm.dbajunior.com) (PORT = 1521))
(CONNECT_DATA=
(SERVICE_NAME=OCM)))
# Check that we have connectivity
tnsping ocm
-- We create the MVIEW LOG on the EMPLOYEES table
CREATE MATERIALIZED VIEW LOG ON EMPLOYEES;
-- We create the DB LINK
CREATE PUBLIC DATABASE LINK OCM CONNECT TO HR IDENTIFIED BY "hr" USING 'OCM';
-- We test the connectivity
SELECT COUNT(*) FROM EMPLOYEES@OCM;
-- We create the EMP MVIEW pointing at the EMPLOYEES table
CREATE MATERIALIZED VIEW EMP REFRESH FAST AS
SELECT * FROM EMPLOYEES@OCM;
-- We test the Fast Refresh
EXEC DBMS_MVIEW.REFRESH('EMP','F');
-- We create a MVIEW LOG on the DEPARTMENTS table (OCM database)
CREATE MATERIALIZED VIEW LOG ON DEPARTMENTS;

-- We create a second materialized view in the OEM database
CREATE MATERIALIZED VIEW DEP REFRESH FAST AS
SELECT * FROM DEPARTMENTS@OCM;
-- We create the refresh group
-- Information about refresh groups can be found in these two views:
-- DBA_REFRESH
-- DBA_REFRESH_CHILDREN
BEGIN
DBMS_REFRESH.MAKE (
NAME => 'REFRESH_GROUP_TEST',
LIST => 'EMP,DEP',
NEXT_DATE => SYSDATE,
INTERVAL => 'SYSDATE + 1/1440',
IMPLICIT_DESTROY => TRUE);
END;
/
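-- A quick check of the group's members (sketch):
SELECT RNAME, NAME, TYPE FROM DBA_REFRESH_CHILDREN WHERE RNAME = 'REFRESH_GROUP_TEST';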
-- We clean up the (OEM) environment
EXEC DBMS_REFRESH.DESTROY('REFRESH_GROUP_TEST');
DROP MATERIALIZED VIEW EMP;
DROP MATERIALIZED VIEW DEP;
DROP PUBLIC DATABASE LINK OCM;

-- We drop the MVIEW LOGs in OCM
DROP MATERIALIZED VIEW LOG ON EMPLOYEES;
DROP MATERIALIZED VIEW LOG ON DEPARTMENTS;
Create and Manage Encrypted Tablespaces
---------------------------------------
# We create the path where we will store the wallet
mkdir -p /u01/app/oracle/product/11.2.0/dbhome_1/wallets
# We edit the sqlnet.ora file
vi $ORACLE_HOME/network/admin/sqlnet.ora
# We add the following lines at the end of the file
ENCRYPTION_WALLET_LOCATION=
(SOURCE=
(METHOD=file)
(METHOD_DATA=
(DIRECTORY=/u01/app/oracle/product/11.2.0/dbhome_1/wallets)))
-- The database must have the COMPATIBLE parameter >= 11.1
-- We validate that we fulfill this requirement
SHOW PARAMETER COMPATIBLE
-- We create the Wallet
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "oracle_4U";
-- We restart the database in order to open the WALLET
-- Whenever the instance is restarted, we must open the wallet again
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "oracle_4U";
ALTER DATABASE OPEN;

-- We select 3DES168 encryption
-- We can choose from the following encryption methods:
-- AES192
-- 3DES168
-- AES128
-- AES256
CREATE TABLESPACE TBSTEST01 DATAFILE '/u01/app/oracle/oradata/OCM/tbstest0101.dbf'
SIZE 100M ENCRYPTION USING '3DES168' DEFAULT STORAGE(ENCRYPT);
-- If the encryption type is not specified, the default is AES128
CREATE TABLESPACE TBSTEST02 DATAFILE '/u01/app/oracle/oradata/OCM/tbstest0201.dbf'
SIZE 100M ENCRYPTION DEFAULT STORAGE(ENCRYPT);
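-- A minimal sanity check (TDE_TEST is a hypothetical table): data stored in an
-- encrypted tablespace is encrypted on disk transparently, with no application changes
CREATE TABLE TDE_TEST (TXT VARCHAR2(30)) TABLESPACE TBSTEST01;
INSERT INTO TDE_TEST VALUES ('secret data');
COMMIT;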
-- We check the status of the WALLET
SELECT * FROM V$ENCRYPTION_WALLET;
-- We check which tablespaces are encrypted
SELECT TABLESPACE_NAME, ENCRYPTED FROM DBA_TABLESPACES;
-- We check the configuration of the encrypted tablespaces
SELECT * FROM V$ENCRYPTED_TABLESPACES;
-- We drop the tablespaces we created
DROP TABLESPACE TBSTEST01 INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE TBSTEST02 INCLUDING CONTENTS AND DATAFILES;
-- We close the WALLET
ALTER SYSTEM SET ENCRYPTION WALLET CLOSE IDENTIFIED BY "oracle_4U";
# The first method is to use "orapki"
# We create the AUTO LOGIN WALLET associated with the PKCS#12 wallet we already have
orapki wallet create -wallet /u01/app/oracle/product/11.2.0/dbhome_1/wallets -auto_login -pwd oracle_4U
# We can also display information about the wallet we created
orapki wallet display -wallet /u01/app/oracle/product/11.2.0/dbhome_1/wallets
# The second way to set up AUTO LOGIN is the graphical utility "owm"
# The "owm" wizard also lets us configure the WALLET and perform other tasks
# We execute it from a graphical terminal
owm
Transportable Tablespaces
-------------------------
COLUMN PLATFORM_NAME FORMAT A36
SELECT * FROM V$TRANSPORTABLE_PLATFORM ORDER BY PLATFORM_NAME;
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
The following is the query result from the source platform:
PLATFORM_NAME                      ENDIAN_FORMAT
---------------------------------- -------------
Solaris[tm] OE (32-bit)            Big
The following is the result from the destination platform:
PLATFORM_NAME                      ENDIAN_FORMAT
---------------------------------- -------------
Microsoft Windows IA (32-bit)      Little
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE);
SELECT * FROM TRANSPORT_SET_VIOLATIONS;
VIOLATIONS
---------------------------------------------------------------------------
Constraint DEPT_FK between table JIM.EMP in tablespace SALES_1 and table
JIM.DEPT in tablespace OTHER
Partitioned table JIM.SALES is partially contained in the transportable set
SQL> ALTER TABLESPACE sales_1 READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE sales_2 READ ONLY;
Tablespace altered.
SQL> HOST
$ expdp system dumpfile=expdat.dmp directory=data_pump_dir transport_tablespaces=sales_1,sales_2 logfile=tts_export.log
OR
$ expdp system dumpfile=expdat.dmp directory=data_pump_dir transport_tablespaces=sales_1,sales_2 transport_full_check=y logfile=tts_export.log
Output
------
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/u01/app/oracle/admin/salesdb/dpdump/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace SALES_1:
/u01/app/oracle/oradata/salesdb/sales_101.dbf
Datafiles required for transportable tablespace SALES_2:
/u01/app/oracle/oradata/salesdb/sales_201.dbf
Cross Platform conversion
-------------------------
RMAN> CONVERT TABLESPACE sales_1,sales_2
2> TO PLATFORM 'Microsoft Windows IA (32-bit)'
3> FORMAT '/tmp/%U';
Starting conversion at source at 30-SEP-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00007 name=/u01/app/oracle/oradata/salesdb/sales_101.dbf
converted datafile=/tmp/data_D-SALESDB_I-1192614013_TS-SALES_1_FNO-7_03jru08s
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00008 name=/u01/app/oracle/oradata/salesdb/sales_201.dbf
converted datafile=/tmp/data_D-SALESDB_I-1192614013_TS-SALES_2_FNO-8_04jru0aa
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:25

Finished conversion at source at 30-SEP-08


On Source database
ALTER TABLESPACE sales_1 READ WRITE;
ALTER TABLESPACE sales_2 READ WRITE;
On Destination Database
impdp system dumpfile=expdat.dmp directory=data_pump_dir
transport_datafiles=
c:\app\orauser\oradata\orawin\sales_101.dbf,
c:\app\orauser\oradata\orawin\sales_201.dbf
remap_schema=sales1:crm1 remap_schema=sales2:crm2
logfile=tts_import.log
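After the import, the transported tablespaces should also be put back in READ WRITE
on the destination (a sketch, mirroring the source-side commands above):
ALTER TABLESPACE sales_1 READ WRITE;
ALTER TABLESPACE sales_2 READ WRITE;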
Configure a Schema to Support a Star Transformation Query
---------------------------------------------------------
-- Before enabling the STAR QUERY optimization we take a sample query
-- We review the value of the STAR_TRANSFORMATION_ENABLED parameter (default FALSE)
SHOW PARAMETER STAR_TRANSFORMATION_ENABLED
-- We enable the display of the execution plan
SET AUTOTRACE TRACE EXPLAIN
-- We launch the query from the documentation to see the execution plan
-- It is basically several chained HASH JOINs over the tables involved
SELECT ch.channel_class, c.cust_city, t.calendar_quarter_desc,
SUM(s.amount_sold) sales_amount
FROM sh.sales s, sh.times t, sh.customers c, sh.channels ch
WHERE s.time_id = t.time_id
AND s.cust_id = c.cust_id
AND s.channel_id = ch.channel_id
AND c.cust_state_province = 'CA'
AND ch.channel_desc IN ('Internet','Catalog')
AND t.calendar_quarter_desc IN ('1999-01','1999-02')
GROUP BY ch.channel_class, c.cust_city, t.calendar_quarter_desc;
-- We enable the STAR_TRANSFORMATION_ENABLED parameter for the whole database
-- It can also be enabled for the current session only (ALTER SESSION)
ALTER SYSTEM SET STAR_TRANSFORMATION_ENABLED=TRUE;
-- We launch the query again to see the new optimized execution plan
-- The cost of the query has dropped considerably, as has the number of blocks to be read
-- This optimization can provide huge time improvements in a DW
SELECT ch.channel_class, c.cust_city, t.calendar_quarter_desc,
SUM(s.amount_sold) sales_amount
FROM sh.sales s, sh.times t, sh.customers c, sh.channels ch
WHERE s.time_id = t.time_id
AND s.cust_id = c.cust_id
AND s.channel_id = ch.channel_id
AND c.cust_state_province = 'CA'
AND ch.channel_desc IN ('Internet','Catalog')
AND t.calendar_quarter_desc IN ('1999-01','1999-02')
GROUP BY ch.channel_class, c.cust_city, t.calendar_quarter_desc;

Star Transformation with a Bitmap Join Index


CREATE BITMAP INDEX sales_c_state_bjix
ON sales(customers.cust_state_province)
FROM sales, customers
WHERE sales.cust_id = customers.cust_id
LOCAL NOLOGGING COMPUTE STATISTICS;
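-- Besides the bitmap join index, star transformation normally relies on plain
-- bitmap indexes on the fact table's foreign key columns; a sketch (index names
-- are hypothetical, and the SH sample schema already ships with equivalents):
CREATE BITMAP INDEX sales_cust_bix ON sales(cust_id) LOCAL NOLOGGING;
CREATE BITMAP INDEX sales_time_bix ON sales(time_id) LOCAL NOLOGGING;
CREATE BITMAP INDEX sales_channel_bix ON sales(channel_id) LOCAL NOLOGGING;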
Administer External Tables
--------------------------
# We create the directory structure we will work with
mkdir -p /u01/stage/data
mkdir -p /u01/stage/log
mkdir -p /u01/stage/bad
# We create the file /u01/stage/data/empxt1.dat
vi /u01/stage/data/empxt1.dat
# We add the following lines
360,Jane,Janus,ST_CLERK,121,17-MAY-2001,3000,0,50,jjanus
361,Mark,Jasper,SA_REP,145,17-MAY-2001,8000,.1,80,mjasper
362,Brenda,Starr,AD_ASST,200,17-MAY-2001,5500,0,10,bstarr
363,Alex,Alda,AC_MGR,145,17-MAY-2001,9000,.15,80,aalda
# We create a second file /u01/stage/data/empxt2.dat
vi /u01/stage/data/empxt2.dat
# We add the following lines
401,Jesse,Cromwell,HR_REP,203,17-MAY-2001,7000,0,40,jcromwel
402,Abby,Applegate,IT_PROG,103,17-MAY-2001,9000,.2,60,aapplega
403,Carol,Cousins,AD_VP,100,17-MAY-2001,27000,.3,90,ccousins
404,John,Richardson,AC_ACCOUNT,205,17-MAY-2001,5000,0,110,jrichard
-- Before creating the table we have to create the directories
CREATE OR REPLACE DIRECTORY admin_dat_dir AS '/u01/stage/data';
CREATE OR REPLACE DIRECTORY admin_log_dir AS '/u01/stage/log';
CREATE OR REPLACE DIRECTORY admin_bad_dir AS '/u01/stage/bad';
GRANT READ ON DIRECTORY admin_dat_dir TO hr;
GRANT WRITE ON DIRECTORY admin_log_dir TO hr;
GRANT WRITE ON DIRECTORY admin_bad_dir TO hr;
-- We connect as the HR user to create the table
CONN HR/hr
-- We launch the creation statement
-- We review the main clauses:
-- TYPE => driver used to access the data (ORACLE_LOADER or ORACLE_DATAPUMP)
-- ACCESS PARAMETERS => the parameters for the driver (for example ORACLE_LOADER)
-- PARALLEL => allows parallelism in statements if certain conditions are met
-- REJECT_LIMIT => maximum rejections allowed (per PX server)
-- All the parameters of the ORACLE_LOADER driver:
-- http://docs.oracle.com/cd/E11882_01/server.112/e22490/et_params.htm#SUTIL012
CREATE TABLE admin_ext_employees
(employee_id     NUMBER(4),
 first_name      VARCHAR2(20),
 last_name       VARCHAR2(25),
 job_id          VARCHAR2(10),
 manager_id      NUMBER(4),
 hire_date       DATE,
 salary          NUMBER(8,2),
 commission_pct  NUMBER(2,2),
 department_id   NUMBER(4),
 email           VARCHAR2(25)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY admin_dat_dir
ACCESS PARAMETERS
(
records delimited by newline
badfile admin_bad_dir:'empxt%a_%p.bad'
logfile admin_log_dir:'empxt%a_%p.log'
fields terminated by ','
missing field values are null
( employee_id, first_name, last_name, job_id, manager_id,
hire_date char date_format date mask "dd-mon-yyyy",
salary, commission_pct, department_id, email
)
)
LOCATION ('empxt1.dat', 'empxt2.dat')
)
PARALLEL
REJECT LIMIT UNLIMITED;
-- We launch a test query
SELECT * FROM ADMIN_EXT_EMPLOYEES;
-- If we need to do an INSERT ... SELECT with a lot of data => enable PARALLEL DML
-- E.g. INSERT INTO EMPLOYEES (...) SELECT * FROM ADMIN_EXT_EMPLOYEES;
ALTER SESSION ENABLE PARALLEL DML;
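-- A minimal sketch of such a load (EMPLOYEES_COPY is a hypothetical target table
-- created beforehand; APPEND requests a direct-path insert):
INSERT /*+ APPEND PARALLEL(EMPLOYEES_COPY, 4) */ INTO EMPLOYEES_COPY
SELECT * FROM ADMIN_EXT_EMPLOYEES;
COMMIT;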
Preprocessor
------------
# We create a directory for storing the compressed files
mkdir -p /u01/stage/zdata
# We compress the two source files from the previous exercise
cp /u01/stage/data/*.dat /u01/stage/zdata
cd /u01/stage/zdata
gzip *.dat
# We also need to copy the "zcat" executable used to read the compressed files
cp /bin/zcat /u01/stage/zdata
-- We create the DIRECTORY in Oracle
CREATE OR REPLACE DIRECTORY admin_zdat_dir AS '/u01/stage/zdata';
-- In addition to READ, we need to grant EXECUTE permission in order to use "zcat"
GRANT READ, EXECUTE ON DIRECTORY admin_zdat_dir TO HR;
-- We create the table; the difference is the PREPROCESSOR argument, which indicates that "zcat" is used
CREATE TABLE admin_ext_employees_gzip
(employee_id     NUMBER(4),
 first_name      VARCHAR2(20),
 last_name       VARCHAR2(25),
 job_id          VARCHAR2(10),
 manager_id      NUMBER(4),
 hire_date       DATE,
 salary          NUMBER(8,2),
 commission_pct  NUMBER(2,2),
 department_id   NUMBER(4),
 email           VARCHAR2(25)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY admin_zdat_dir
ACCESS PARAMETERS
(
records delimited by newline
badfile admin_bad_dir:'empxt%a_%p.bad'
preprocessor admin_zdat_dir:'zcat'
logfile admin_log_dir:'empxt%a_%p.log'
fields terminated by ','
missing field values are null
( employee_id, first_name, last_name, job_id, manager_id,
hire_date char date_format date mask "dd-mon-yyyy",
salary, commission_pct, department_id, email
)
)
LOCATION ('empxt1.dat.gz', 'empxt2.dat.gz')
)
PARALLEL
REJECT LIMIT UNLIMITED;
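-- We verify that rows are read through the preprocessor (sketch):
SELECT COUNT(*) FROM admin_ext_employees_gzip;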
Compression
-----------
-- We see an example of using the ORACLE_DATAPUMP driver
-- We make the ADMIN_DAT_DIR directory writable for HR
GRANT WRITE ON DIRECTORY admin_dat_dir TO HR;
-- We connect as the HR user
CONN HR/hr
-- We create the table with a CTAS, loading data from another table
-- COMPRESSION => enabled (ENABLED)
CREATE TABLE admin_ext_employees_dump
ORGANIZATION EXTERNAL (TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY admin_dat_dir
ACCESS PARAMETERS (COMPRESSION ENABLED) LOCATION ('emp.dmp'))
AS
SELECT * FROM EMPLOYEES;
-- We can use the generated DUMP to create a new external table
CREATE TABLE admin_ext_employees_dump2
(EMPLOYEE_ID     NUMBER(6),
 FIRST_NAME      VARCHAR2(20),
 LAST_NAME       VARCHAR2(25),
 EMAIL           VARCHAR2(25),
 PHONE_NUMBER    VARCHAR2(20),
 HIRE_DATE       DATE,
 JOB_ID          VARCHAR2(10),
 SALARY          NUMBER(8,2),
 COMMISSION_PCT  NUMBER(2,2),
 MANAGER_ID      NUMBER(6),
 DEPARTMENT_ID   NUMBER(4)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY admin_dat_dir
LOCATION ('emp.dmp')
);
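-- A quick sanity check (sketch): the new table reads the rows back from the dump
SELECT COUNT(*) FROM admin_ext_employees_dump2;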
Implement Data Pump Export and Import Jobs for Data Transfer
------------------------------------------------------------
-- We check the path pointed to by the DATA_PUMP_DIR directory
SELECT DIRECTORY_PATH FROM DBA_DIRECTORIES WHERE DIRECTORY_NAME='DATA_PUMP_DIR';
expdp system SCHEMAS=HR DIRECTORY=DATA_PUMP_DIR DUMPFILE=exp_hr_20130613.dmp LOGFILE=exp_hr_20130613.log
expdp USERID=\"/ as sysdba\" FULL=Y DIRECTORY=DATA_PUMP_DIR COMPRESSION=ALL DUMPFILE=exp_full_20130613.dmp
# Let's see how we can use a file for all the parameters of the Export
vi /u01/stage/expdp_parfile.par
# Add the following lines
CONTENT=METADATA_ONLY
TABLES=HR.EMPLOYEES,HR.DEPARTMENTS
EXCLUDE=STATISTICS
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=exp_employees_20130613.dmp
LOGFILE=exp_employees_20130613.log
# We run the export with the parameter file as input
# CONTENT=METADATA_ONLY => only the metadata is exported (table definitions)
# EXCLUDE=STATISTICS => we decide not to export the statistics of these objects
expdp system PARFILE=/u01/stage/expdp_parfile.par
-- We create a DIRECTORY for an alternative location
CREATE DIRECTORY TEMP_DIR AS '/u01/stage';
# We launch an Export of the SH schema with several parallel processes (PARALLEL=8)
# We see that it generates the following files:
# /u01/app/oracle/admin/OCM/dpdump/exp_sales_01_20130613.dmp
# /u01/stage/exp_sales_01_20130613.dmp
# /u01/app/oracle/admin/OCM/dpdump/exp_sales_02_20130613.dmp
# /u01/stage/exp_sales_02_20130613.dmp
# /u01/app/oracle/admin/OCM/dpdump/exp_sales_03_20130613.dmp
# /u01/stage/exp_sales_03_20130613.dmp
# /u01/app/oracle/admin/OCM/dpdump/exp_sales_04_20130613.dmp
expdp system PARALLEL=8 SCHEMAS=SH \
DUMPFILE=DATA_PUMP_DIR:exp_sales_%U_20130613.dmp,TEMP_DIR:exp_sales_%U_20130613.dmp \
LOGFILE=DATA_PUMP_DIR:exp_sales_20130613.log
# We activate the FLASHBACK_TIME parameter for an EXPORT
# For this export the HR user needs READ and WRITE permissions on the directory:
# GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO HR;


expdp hr FLASHBACK_TIME=SYSTIMESTAMP CONTENT=DATA_ONLY \
DIRECTORY=DATA_PUMP_DIR DUMPFILE=exp_hr_fbk_20130613.dmp LOGFILE=exp_hr_fbk_20130613.log
# We launch the EXPORT again, but using a date in the past
# Many escape characters are needed here; in a parameter file they are not necessary
expdp hr FLASHBACK_TIME=\"TO_TIMESTAMP\(\'13-06-2013 16:00:00\', \'DD-MM-YYYY HH
24:MI:SS\'\)\" \
CONTENT=DATA_ONLY DIRECTORY=DATA_PUMP_DIR DUMPFILE=exp_hr_past_20130613.dmp \
LOGFILE = exp_hr_past_20130613.log
# We perform the Import of the SH.SALES table
# EXCLUDE => we can exclude the objects we choose from the Import (e.g. INDEX, CONSTRAINT)
# REMAP_TABLE => the table is renamed during the Import
impdp system PARALLEL=8 \
DIRECTORY=DATA_PUMP_DIR \
TABLES=SH.SALES \
EXCLUDE=INDEX,CONSTRAINT \
REMAP_TABLE=SH.SALES:SALES_TEMP \
DUMPFILE=DATA_PUMP_DIR:exp_sales_%U_20130613.dmp,TEMP_DIR:exp_sales_%U_20130613.dmp \
LOGFILE=DATA_PUMP_DIR:imp_sales_20130613.log
# We launch a full export of the database in one session
expdp userid=\"/ as sysdba\" DIRECTORY=DATA_PUMP_DIR FULL=Y DUMPFILE=full.dmp LOGFILE=full.log
-- In another session we look up information about the running job
COL OPERATION FORMAT A20;
SELECT OWNER_NAME, JOB_NAME, STATE, OPERATION, DEGREE FROM DBA_DATAPUMP_JOBS;
# In the second session we attach to this job with expdp's ATTACH clause
expdp userid=\"/ as sysdba\" ATTACH=SYS.SYS_EXPORT_FULL_01
# We can change the degree of parallelization
PARALLEL=4
# We can pause the job
STOP_JOB
# We attach again to resume the job
expdp userid=\"/ as sysdba\" ATTACH=SYS.SYS_EXPORT_FULL_01
# We resume the job (the job resumes in our session)
CONTINUE_CLIENT
# We can kill the job (but first we press CONTROL+C)
KILL_JOB
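# While attached, the STATUS command shows the progress of the job (sketch)
STATUS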
Implement Data Pump To and From Remote Databases
------------------------------------------------
-- We create the DB LINK to OEM in the OCM database
-- You must use the same user in the DB LINK and in the export/import
CREATE PUBLIC DATABASE LINK OEM CONNECT TO SYSTEM IDENTIFIED BY "********" USING 'OEM';
-- We test the access
SELECT COUNT(1) FROM RMAN.BP@OEM;

# We start by testing an EXPORT of several RMAN tables
expdp system NETWORK_LINK=OEM TABLES=RMAN.DB,RMAN.DBINC,RMAN.BP \
DIRECTORY=DATA_PUMP_DIR DUMPFILE=exp_rman.dmp LOGFILE=exp_rman.log
# Now we launch an IMPORT and take the opportunity to see more parameters that we have not used so far
# REMAP_SCHEMA => changes the owner of the imported objects
# REMAP_TABLESPACE => changes the destination tablespace of the imported objects
# EXCLUDE => we choose the objects we want to leave out of the IMPORT
# TABLE_EXISTS_ACTION => lets you specify what to do when the table already exists
impdp system NETWORK_LINK=OEM TABLES=RMAN.DB,RMAN.DBINC,RMAN.BP \
REMAP_SCHEMA=RMAN:HR REMAP_TABLESPACE=RCAT:USERS \
EXCLUDE=CONSTRAINT TABLE_EXISTS_ACTION=APPEND \
DIRECTORY=DATA_PUMP_DIR LOGFILE=exp_rman.log
Configure and Use Parallel Execution for Queries
------------------------------------------------
-- We get the execution plan for the query
EXPLAIN PLAN FOR
SELECT /*+ PARALLEL(4) */ customers.cust_first_name, customers.cust_last_name,
MAX(QUANTITY_SOLD), AVG(QUANTITY_SOLD)
FROM sh.sales, sh.customers
WHERE sales.cust_id=customers.cust_id
GROUP BY customers.cust_first_name, customers.cust_last_name;
-- We display the execution plan
@?/rdbms/admin/utlxplp
-- We get the execution plan of the query without HINTs
EXPLAIN PLAN FOR
SELECT customers.cust_first_name, customers.cust_last_name,
MAX(QUANTITY_SOLD), AVG(QUANTITY_SOLD)
FROM sh.sales, sh.customers
WHERE sales.cust_id=customers.cust_id
GROUP BY customers.cust_first_name, customers.cust_last_name;
-- We display the execution plan
-- We see that PX (Parallel Execution) operations no longer appear
@?/rdbms/admin/utlxplp
-- We check the number of threads per instance used to read the tables, i.e. the DOP
-- We see that we have no parallelization configured on these tables (DEGREE = 1)
-- When DEGREE = DEFAULT => Oracle uses Auto DOP to calculate the degree
SELECT TABLE_NAME, DEGREE FROM DBA_TABLES WHERE TABLE_NAME IN ('SALES','CUSTOMERS') AND OWNER = 'SH';
-- We modify the DOP of the tables
ALTER TABLE SH.SALES PARALLEL 8;
ALTER TABLE SH.CUSTOMERS PARALLEL 4;
-- We validate the change
SELECT TABLE_NAME, DEGREE FROM DBA_TABLES WHERE TABLE_NAME IN ('SALES','CUSTOMERS') AND OWNER = 'SH';
-- We get the plan of the same query, but after the DOP change
EXPLAIN PLAN FOR
SELECT customers.cust_first_name, customers.cust_last_name,
MAX(QUANTITY_SOLD), AVG(QUANTITY_SOLD)
FROM sh.sales, sh.customers
WHERE sales.cust_id=customers.cust_id
GROUP BY customers.cust_first_name, customers.cust_last_name;
-- We use another way to display the execution plan obtained with EXPLAIN PLAN
-- We see that we no longer need the HINT to get Parallel Execution
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Now we configure the tables so that Oracle chooses the DOP
ALTER TABLE SH.SALES PARALLEL (DEGREE DEFAULT);
ALTER TABLE SH.CUSTOMERS PARALLEL (DEGREE DEFAULT);
-- We validate the change
SELECT TABLE_NAME, DEGREE FROM DBA_TABLES WHERE TABLE_NAME IN ('SALES','CUSTOMERS') AND OWNER = 'SH';
-- Now if we got the execution plan again, it would also use Parallel Execution
-- We reset the DOP of the tables to the original value
ALTER TABLE SH.SALES PARALLEL 1;
ALTER TABLE SH.CUSTOMERS PARALLEL 1;
-- PARALLEL_DEGREE_POLICY = MANUAL => automatic DOP disabled; only explicitly specified degrees are used
-- PARALLEL_DEGREE_POLICY = LIMITED => automatic DOP only for tables and indexes with DEGREE = DEFAULT
-- PARALLEL_DEGREE_POLICY = AUTO => automatic DOP in all cases
-- We activate the policy
ALTER SYSTEM SET PARALLEL_DEGREE_POLICY=AUTO SCOPE=BOTH;
-- We check whether Oracle applies Parallel Execution
EXPLAIN PLAN FOR
SELECT customers.cust_first_name, customers.cust_last_name,
MAX(QUANTITY_SOLD), AVG(QUANTITY_SOLD)
FROM sh.sales, sh.customers
WHERE sales.cust_id=customers.cust_id
GROUP BY customers.cust_first_name, customers.cust_last_name;
-- We will see that it is not applied because we have not gathered I/O calibration statistics
-- Note
-- -----
--    automatic DOP: skipped because of IO calibrate statistics are missing
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY);
-- We calibrate the I/O capabilities of our database
-- Documentation: http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_resmgr.htm#CJGHGFEA
SET SERVEROUTPUT ON
DECLARE
lat INTEGER;
iops INTEGER;
mbps INTEGER;
BEGIN
DBMS_RESOURCE_MANAGER.CALIBRATE_IO (1, 10, iops, mbps, lat);
DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
DBMS_OUTPUT.PUT_LINE ('latency = ' || lat);
DBMS_OUTPUT.PUT_LINE ('max_mbps = ' || mbps);

end;
/
-- We check the result
SELECT * FROM DBA_RSRC_IO_CALIBRATE;
-- We check whether Oracle now applies Parallel Execution
EXPLAIN PLAN FOR
SELECT customers.cust_first_name, customers.cust_last_name,
MAX(QUANTITY_SOLD), AVG(QUANTITY_SOLD)
FROM sh.sales, sh.customers
WHERE sales.cust_id=customers.cust_id
GROUP BY customers.cust_first_name, customers.cust_last_name;
-- We will see that it is not applied because the estimated runtime is below the minimum threshold
-- A statement must take more than PARALLEL_MIN_TIME_THRESHOLD seconds for PX to be applied
-- Note
-- -----
--    automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY);
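-- A sketch of lowering that threshold so that shorter statements also qualify
-- (the default is AUTO, roughly 10 seconds):
ALTER SYSTEM SET PARALLEL_MIN_TIME_THRESHOLD=1 SCOPE=BOTH;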
-- We set the PARALLEL_DEGREE_LIMIT parameter to AUTO
-- We change the DEGREE to DEFAULT on the objects where we want to use PX
ALTER TABLE SH.SALES PARALLEL 4;
ALTER TABLE SH.SALES PARALLEL (DEGREE DEFAULT);
SELECT /*+ PARALLEL(SALES,4) */ SUM(AMOUNT_SOLD) FROM SH.SALES;
ALTER SESSION ENABLE PARALLEL QUERY;
ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION ENABLE PARALLEL DDL;
ALTER SESSION FORCE PARALLEL QUERY;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 5;
ALTER SESSION FORCE PARALLEL DML;
ALTER SESSION FORCE PARALLEL DDL;

-- V$PX_BUFFER_ADVICE : statistics about PX buffers; useful for resizing the SGA
-- V$PX_SESSION : information about PX sessions (DEGREE, ...)
-- V$PX_SESSTAT : cross-reference between V$PX_SESSION and V$SESSTAT
-- V$PX_PROCESS : information about the PX processes
-- V$PX_PROCESS_SYSSTAT : information about the PX processes with buffer statistics
-- V$PQ_SESSTAT : statistics about query execution with PX; useful to adjust certain parameters
-- V$PQ_TQSTAT : information about the message traffic queues at each PX process
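-- A quick look at the active PX servers and their sessions (sketch):
SELECT p.SERVER_NAME, s.SID, s.DEGREE
FROM V$PX_PROCESS p, V$PX_SESSION s
WHERE p.SID = s.SID;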
