
Real-Time Downstream Environment

Note: Only one real-time downstream capture process is allowed at a single downstream database.

Creating the streams tablespace:
=========================

-- Create the tablespace on both sides, but most importantly at the downstream site:

conn /as sysdba

CREATE TABLESPACE DWSTREAMS DATAFILE
'/u02/oracle/product/10.2.0/upstrm/data/dwstreams01.dbf' SIZE 500M;

Creating the streams admin on both sides:
===============================

Create the Streams admin user on both databases:

CREATE USER DWSTREAMS IDENTIFIED BY DWSTREAMS DEFAULT TABLESPACE DWSTREAMS
QUOTA UNLIMITED ON DWSTREAMS;

GRANT DBA TO DWSTREAMS;

BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'DWSTREAMS',
grant_privileges => true);
END;
/

Check that the Streams admin user was created:

SELECT * FROM dba_streams_administrator;

USERNAME                       LOCAL_PRI ACCESS_FR
------------------------------ --------- ---------
DWSTREAMS                      YES       YES

Creating connection between source and downstream:


=========================================
Check that $TNS_ADMIN points to the location of tnsnames.ora.
Make sure the service names of both databases exist in each other's tnsnames.ora files.
Check that the listeners are running and the databases are registered with them.
Make sure that source and target can connect to each other; you can use TNSPING to verify.
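
For reference, a minimal sketch of the tnsnames.ora entry on the source side pointing at the downstream database (the host name is taken from this note; the port and protocol are assumptions, adjust for your environment):

UPSTRM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = uslaxorcstgdb17)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = UPSTRM))
  )

A matching DSTRM entry pointing at uslaxorcstgdb15 is needed on the downstream side.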

Setting the SYS password:

The SYS password needs to be the same on both source and target so that the connection used to send the redo data can be established between the source and downstream sites.

 orapwd file=orapwUPSTRM password=manager entries=5


Set GLOBAL_NAMES = TRUE on both sides:

Alter system set global_names=TRUE scope=BOTH;

Create a dblink from downstream to source for administration purposes:

conn dwstreams/dwstreams

CREATE DATABASE LINK DSTRM CONNECT TO DWSTREAMS IDENTIFIED BY DWSTREAMS USING 'DSTRM';

select * from dual@DSTRM;

Preparing the Downstream site (uslaxorcstgdb17)

Setting parameters for downstream archiving:


============================

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=SPFILE;

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/u02/oracle/product/10.2.0/upstrm/flash_recovery_area/
VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)'
SCOPE=SPFILE;

LOCATION - the directory where archived logs will be written from the standby redo logs arriving from the source site.

VALID_FOR - specify either (STANDBY_LOGFILE,PRIMARY_ROLE) or (STANDBY_LOGFILE,ALL_ROLES).

Note: In our test we did not specify a LOCATION for log_archive_dest_2, since we did not want the archived logs created in any location other than the default one used when archivelog mode was enabled.

-- Specify the source and downstream databases in LOG_ARCHIVE_CONFIG, using the DB_UNIQUE_NAME of both sites:

SQL> alter system set log_archive_config='DG_CONFIG=(DSTRM,UPSTRM)' scope=both;

System altered.
Creating standby redo-logs to receive redo data from Source:
==============================================

-- From the source:

1) Determine the log file size used on the source database:

conn /as sysdba

select THREAD#, GROUP#, BYTES/1024/1024 from V$LOG;

Note:
- The standby log file size must be the same as (or larger than) the source database log file size.
- The number of standby log file groups must be at least one more than the number of online log file groups on the source database.

From the Source : uslaxorcstgdb15

SQL> select THREAD#, GROUP#, BYTES/1024/1024 from V$LOG;

   THREAD#     GROUP# BYTES/1024/1024
---------- ---------- ---------------
         1          1              50
         1          2              50
         1          3              50

Add standby logs in the Destination Database (uslaxorcstgdb17)

For example, the source database has three online redo log file groups and each log
file size of 50 MB. In this case, use the following statements to create the appropriate
standby log file groups:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
('/u02/oracle/product/10.2.0/upstrm/data/slog4.rdf') SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
('/u02/oracle/product/10.2.0/upstrm/data/slog5.rdf') SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
('/u02/oracle/product/10.2.0/upstrm/data/slog6.rdf') SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 7
('/u02/oracle/product/10.2.0/upstrm/data/slog7.rdf') SIZE 50M;

Database altered.

SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      UPSTRM

SQL> SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;

    GROUP#    THREAD#  SEQUENCE# ARCHIVED STATUS
---------- ---------- ---------- -------- ----------
         4          0          0 YES      UNASSIGNED
         5          0          0 YES      UNASSIGNED
         6          0          0 YES      UNASSIGNED
         7          0          0 YES      UNASSIGNED

Get the downstream database into archivelog mode:
====================================

1) Enable archivelog mode on the uslaxorcstgdb17 database.
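
A minimal sketch of the standard sequence, run as SYSDBA on the downstream site:

conn / as sysdba
shutdown immediate
startup mount
alter database archivelog;
alter database open;
archive log list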

-- Increase the number of archiving processes:

ALTER SYSTEM SET log_archive_max_processes=5 SCOPE=BOTH;

**************************************************
*** Preparing the Source site (ORCL102C) ****
**************************************************

1) Enable Shipping of online redo log data from Source to Downstream database:
==================================================

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=UPSTRM LGWR SYNC NOREGISTER
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=UPSTRM' SCOPE=BOTH;

System altered.

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=SPFILE;

SERVICE - the identifier of the downstream database, taken from the source's tnsnames.ora.

LGWR ASYNC or LGWR SYNC - the redo transport mode. The advantage of LGWR SYNC is that redo data reaches the downstream database faster than with LGWR ASYNC. You can specify LGWR SYNC for a real-time downstream capture process only.

NOREGISTER - specify this attribute so that the location of the archived redo log files is not recorded in the downstream database control file.

VALID_FOR - specify either (ONLINE_LOGFILE,PRIMARY_ROLE) or (ONLINE_LOGFILE,ALL_ROLES).

DB_UNIQUE_NAME - The value of db_unique_name of the downstream database.
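
After the next log switch you can sanity-check this destination from the source using the standard V$ARCHIVE_DEST view; a non-blank ERROR column points at connection or password-file problems:

select dest_id, status, error from v$archive_dest where dest_id = 2;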

-- Specify the source and downstream databases in LOG_ARCHIVE_CONFIG, using the DB_UNIQUE_NAME of both sites:

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(UPSTRM,DSTRM)' SCOPE=SPFILE;

Get the source database into archivelog mode:
===============================

Enable archivelog mode on the source database (same sequence as shown for the downstream site above).

*********************************************
**** Setting up Streams Replication ****
*********************************************

1. Create a schema for the Streams replication:

create user TSTDWSTREAMS identified by TSTDWSTREAMS;
grant connect, resource, create table to TSTDWSTREAMS;
conn TSTDWSTREAMS/TSTDWSTREAMS
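
The replication test at the end of this note inserts into a table named TEST in this schema; a minimal sketch of such a table (the column layout is an assumption inferred from that insert):

create table test (id number, name varchar2(30));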

Creating the streams queue on the downstream site:


========================================

SQL> conn dwstreams
Enter password:
Connected.

SQL> BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'dwstreams.DOWNSTREAM_Q_TABLE',
queue_name => 'dwstreams.DOWNSTREAM_Q',
queue_user => 'dwstreams');
END;
/

PL/SQL procedure successfully completed.

SQL> select name, queue_table from user_queues;

NAME                           QUEUE_TABLE
------------------------------ ------------------------------
AQ$_DOWNSTREAM_Q_TABLE_E       DOWNSTREAM_Q_TABLE
DOWNSTREAM_Q                   DOWNSTREAM_Q_TABLE

Creating the apply process at the downstream site:


========================================

SQL> BEGIN
DBMS_APPLY_ADM.CREATE_APPLY(
queue_name => 'dwstreams.DOWNSTREAM_Q',
apply_name => 'DOWNSTREAM_APPLY',
apply_captured => TRUE);
END;
/

PL/SQL procedure successfully completed.


SELECT apply_name, status, queue_name FROM DBA_APPLY;

APPLY_NAME                     STATUS   QUEUE_NAME
------------------------------ -------- ------------------------------
APPLY_STREAM                   ENABLED  APPLY_Q
DOWNSTREAM_APPLY               DISABLED DOWNSTREAM_Q

SELECT parameter, value, set_by_user
FROM DBA_APPLY_PARAMETERS
WHERE apply_name = 'DOWNSTREAM_APPLY';

PARAMETER                 VALUE    SET_BY_USER
------------------------- -------- -----------
ALLOW_DUPLICATE_ROWS      N        NO
COMMIT_SERIALIZATION      FULL     NO
DISABLE_ON_ERROR          Y        NO
DISABLE_ON_LIMIT          N        NO
MAXIMUM_SCN               INFINITE NO
PARALLELISM               1        NO
STARTUP_SECONDS           0        NO
TIME_LIMIT                INFINITE NO
TRACE_LEVEL               0        NO
TRANSACTION_LIMIT         INFINITE NO
TXN_LCR_SPILL_THRESHOLD   10000    NO
WRITE_ALERT_LOG           Y        NO

Creating the capture process at the downstream site:
=======================================

conn dwstreams/dwstreams

SQL> BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE(
queue_name => 'dwstreams.DOWNSTREAM_Q',
capture_name => 'DOWNSTREAM_CAPTURE',
rule_set_name => NULL,
start_scn => NULL,
source_database => 'DSTRM',
use_database_link => true,
first_scn => NULL,
logfile_assignment => 'implicit');
END;
/

PL/SQL procedure successfully completed.

SQL> SELECT capture_name, status from dba_capture;

CAPTURE_NAME                   STATUS
------------------------------ --------
CAPTURE_STREAM                 ENABLED
DOWNSTREAM_CAPTURE             DISABLED

SELECT parameter, value, set_by_user FROM DBA_CAPTURE_PARAMETERS
WHERE capture_name = 'DOWNSTREAM_CAPTURE';

PARAMETER                 VALUE    SET_BY_USER
------------------------- -------- -----------
PARALLELISM               1        NO
STARTUP_SECONDS           0        NO
TRACE_LEVEL               0        NO
TIME_LIMIT                INFINITE NO
MESSAGE_LIMIT             INFINITE NO
MAXIMUM_SCN               INFINITE NO
WRITE_ALERT_LOG           Y        NO
DISABLE_ON_LIMIT          N        NO
DOWNSTREAM_REAL_TIME_MINE Y        YES
Set the capture process for real-time capturing of changes:

SQL> BEGIN
DBMS_CAPTURE_ADM.SET_PARAMETER(
capture_name => 'DOWNSTREAM_CAPTURE',
parameter => 'downstream_real_time_mine',
value => 'y');
END;
/

PL/SQL procedure successfully completed.
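
To confirm the parameter took effect, query the standard DBA_CAPTURE_PARAMETERS view:

select capture_name, value, set_by_user
from dba_capture_parameters
where parameter = 'DOWNSTREAM_REAL_TIME_MINE';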

-- Archive the current log file from the source database.

conn /as sysdba

ALTER SYSTEM ARCHIVE LOG CURRENT;

Note: Archiving the current log file at the source database starts real-time mining of the source database redo log.

Add rules to tell the capture process what to capture:
=============================================

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'TSTDWSTREAMS',
streams_type => 'capture',
streams_name => 'downstream_capture',
queue_name => 'dwstreams.downstream_q',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'DSTRM',
inclusion_rule => TRUE);
END;
/

SELECT rule_name, rule_condition


FROM DBA_STREAMS_SCHEMA_RULES
WHERE streams_name = 'DOWNSTREAM_CAPTURE'
AND streams_type = 'CAPTURE';

RULE_NAME       RULE_CONDITION
--------------- --------------------------------------------------------------
TSTDWSTREAMS40  ((:dml.get_object_owner() = 'TSTDWSTREAMS') and :dml.is_null_tag() = 'Y' and :dml.get_source_database_name() = 'DSTRM')
TSTDWSTREAMS41  ((:ddl.get_object_owner() = 'TSTDWSTREAMS' or :ddl.get_base_table_owner() = 'TSTDWSTREAMS') and :ddl.is_null_tag() = 'Y' and :ddl.get_source_database_name() = 'DSTRM')
Instantiating the replicated objects:

In our example we want to replicate the schema TSTDWSTREAMS, so here is what we shall do.

Using ordinary exp/imp to instantiate:

-- From the source:

exp system/manager owner=tstdwstreams file=tstdwstreams.dump log=tstdwstreams.log object_consistent=Y

* object_consistent must be set to Y so that the exported data is consistent with a single SCN at the source.

-- From the downstream:

imp system/manager file=tstdwstreams.dump fromuser=tstdwstreams touser=tstdwstreams ignore=y STREAMS_INSTANTIATION=Y

Note:
When the import is run with STREAMS_INSTANTIATION=Y and the export was taken with object_consistent=Y, the instantiation SCN for the apply process is set to the SCN at the time the export was taken, which ensures that the data at the target is consistent with the data at the source.

Other method to instantiate:

If you want to instantiate manually:

A- Create the replicated objects at the downstream site.

You need to create the same objects at the downstream site, and if the tables contain data, you will need to copy it by any means, such as insert into downstream_table select * from source_table@dblink.
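
For example, using the hypothetical TEST table from the test schema in this note:

insert into tstdwstreams.test select * from tstdwstreams.test@DSTRM;
commit;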

B- Set instantiation for the replicated objects:

-- Run the following from the downstream site:

conn dwstreams/dwstreams

DECLARE
iscn NUMBER; -- variable to hold the instantiation SCN value
BEGIN
-- get the current SCN from the source
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@DSTRM;
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
source_schema_name => 'TSTDWSTREAMS',
source_database_name => 'DSTRM',
instantiation_scn => iscn,
recursive => TRUE);
END;
/

* You must make sure that the objects at the source and downstream sites are consistent at the time you set the instantiation SCN manually, or the apply process may later fail with ORA-01403 (no data found).
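
If the apply process does hit errors, the failed transactions land in the error queue; a quick check using the standard dictionary view:

select apply_name, local_transaction_id, error_number, error_message
from dba_apply_error;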

-- After instantiating, check that instantiation is done:

select * from DBA_APPLY_INSTANTIATED_OBJECTS;

select * from DBA_APPLY_INSTANTIATED_SCHEMAS;


Ref doc: 753158.1

Start the apply process:


=================

conn dwstreams/dwstreams

exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'DOWNSTREAM_APPLY');

select apply_name, status from dba_apply;

Start the capture process:


==================

conn dwstreams/dwstreams

exec DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'DOWNSTREAM_CAPTURE');

select capture_name, status from dba_capture;

***********************
*** Testing... ****
***********************

Source database:

conn TSTDWSTREAMS/TSTDWSTREAMS

SQL> insert into test values (1,'DBA');

1 row created.

SQL> commit;

Target database:

conn TSTDWSTREAMS/TSTDWSTREAMS

SQL> select * from test;

Check whether the inserted data was replicated to the target database.

Monitoring Scripts:
===============

Current Capture State


1) select CAPTURE_NAME, STATE, CAPTURE_TIME, CAPTURE_MESSAGE_CREATE_TIME,
TOTAL_MESSAGES_CAPTURED from v$streams_capture;

CAPTURE_NAME         STATE            CAPTURE_TIME         CAPTURE_MESSAGE_CREA TOTAL_MESSAGES_CAPTURED
-------------------- ---------------- -------------------- -------------------- -----------------------
DOWNSTREAM_CAPTURE   WAITING FOR REDO 12-Feb-2011 02:03:31 12-Feb-2011 01:34:37                24526143

CAPTURING CHANGES - Scanning the redo log for changes that evaluate to TRUE against the
capture process rule sets.

CREATING LCR - Converting a change into an LCR.

WAITING FOR DICTIONARY REDO - Waiting for redo log files containing the dictionary
build related to the first SCN to be added to the capture process session. A capture process cannot
begin to scan the redo log files until all of the log files containing the dictionary build have been
added.

PAUSED FOR FLOW CONTROL - Unable to enqueue LCRs, either because of low memory or because propagations and apply processes are consuming messages more slowly than the capture process is creating them. This state indicates flow control, which is used to reduce spilling of captured messages when propagation or apply has fallen behind or is unavailable.

ENQUEUING MESSAGE - Enqueuing an LCR that satisfies the capture process rule sets into
the capture process queue.
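
A rough capture-latency check (seconds between the redo timestamp of the most recently captured message and the current time), assuming the capture process is running:

select capture_name,
(sysdate - capture_message_create_time) * 86400 latency_seconds
from v$streams_capture;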

Checking Capture Process


select CAPTURE_NAME,start_scn,first_scn,STATUS,CAPTURED_SCN,APPLIED_SCN from dba_capture;

CAPTURE_NAME START_SCN FIRST_SCN STATUS CAPTURED_SCN APPLIED_SCN


-------------------- -------------- -------------- -------- -------------- --------------
DOWNSTREAM_CAPTURE 8364550439439 8364550439439 ENABLED 8365085461809 8365083512332

Checking Apply Process


select apply_name, status, to_char(status_change_time,'DD-MON-YYYY HH24:MI:SS') from dba_apply;

APPLY_NAME STATUS TO_CHAR(STATUS_CHANGE_TIME


------------------------------ -------- --------------------------
DOWNSTREAM_APPLY ENABLED 10-Feb-2011 14:35:26
Checking Rules

select r.rule_name, r.rule_owner, c.apply_name
from dba_rule_set_rules rs, dba_apply c, dba_rules r
where c.rule_set_name = rs.rule_set_name
and c.rule_set_owner = rs.rule_set_owner
and rs.rule_name = r.rule_name
and rs.rule_owner = r.rule_owner
and upper(r.rule_condition) like '%:DML%'
order by 3, 1;

2) SELECT CAPTURE_NAME,
(ELAPSED_CAPTURE_TIME/100) ELAPSED_CAPTURE_TIME,
(ELAPSED_RULE_TIME/100) ELAPSED_RULE_TIME,
(ELAPSED_ENQUEUE_TIME/100) ELAPSED_ENQUEUE_TIME,
(ELAPSED_LCR_TIME/100) ELAPSED_LCR_TIME,
(ELAPSED_PAUSE_TIME/100) ELAPSED_PAUSE_TIME
FROM V$STREAMS_CAPTURE;

3) Identifying the SID of the capture process.


SELECT c.CAPTURE_NAME,
SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
c.SID, c.SERIAL#, c.STATE,
c.TOTAL_MESSAGES_CAPTURED,
c.TOTAL_MESSAGES_ENQUEUED
FROM V$STREAMS_CAPTURE c, V$SESSION s
WHERE c.SID = s.SID AND c.SERIAL# = s.SERIAL#;

Start and Stop the Capture and Apply process


Start and Stop Capture Process

exec DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'DOWNSTREAM_CAPTURE');

exec DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'DOWNSTREAM_CAPTURE');

exec DBMS_CAPTURE_ADM.DROP_CAPTURE (capture_name => 'DOWNSTREAM_CAPTURE');

Start and stop Apply process

exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'DOWNSTREAM_APPLY');

exec DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'DOWNSTREAM_APPLY');


APPLY PROCESS STATUS

SQL> select APPLY_NAME, STATE from v$streams_apply_reader;

APPLY_NAME                     STATE
------------------------------ -------------------------------------
DOWNSTREAM_APPLY               DEQUEUE MESSAGES

SQL> select APPLY_NAME, STATE from v$streams_apply_server;

APPLY_NAME                     STATE
------------------------------ -------------------------------------
DOWNSTREAM_APPLY               IDLE

SQL> select APPLY_NAME, STATE from v$streams_apply_coordinator;

APPLY_NAME                     STATE
------------------------------ -------------------------------------
DOWNSTREAM_APPLY               IDLE
