9 – ETL
Implementation Steps
PeopleBooks Contributors: Teams from PeopleSoft Enterprise Product Documentation and Development.
The Programs (which include both the software and documentation) contain proprietary information; they are provided
under a license agreement containing restrictions on use and disclosure and are also protected by copyright, patent,
and other intellectual and industrial property laws. Reverse engineering, disassembly, or decompilation of the
Programs, except to the extent required to obtain interoperability with other independently created software or as
specified by law, is prohibited.
The information contained in this document is subject to change without notice. If you find any problems in the
documentation, please report them to us in writing. This document is not warranted to be error-free. Except as may be
expressly permitted in your license agreement for these Programs, no part of these Programs may be reproduced or
transmitted in any form or by any means, electronic or mechanical, for any purpose.
If the Programs are delivered to the United States Government or anyone licensing or using the Programs on behalf of
the United States Government, the following notice is applicable:
The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other inherently dangerous
applications. It shall be the licensee's responsibility to take all appropriate fail-safe, backup, redundancy and other
measures to ensure the safe use of such applications if the Programs are used for such purposes, and we disclaim
liability for any damages caused by such use of the Programs.
The Programs may provide links to Web sites and access to content, products, and services from third parties. Oracle
is not responsible for the availability of, or any content provided on, third-party Web sites. You bear all risks associated
with the use of such content. If you choose to purchase any products or services from a third party, the relationship is
directly between you and the third party. Oracle is not responsible for: (a) the quality of third-party products or services;
or (b) fulfilling any of the terms of the agreement with the third party, including delivery of products or services and
warranty obligations related to purchased products or services. Oracle is not responsible for any loss or damage of any
sort that you may incur from dealing with any third party.
Oracle, JD Edwards, and PeopleSoft are registered trademarks of Oracle Corporation and/or its affiliates. Other names
may be trademarks of their respective owners.
Oracle takes no responsibility for its use or distribution of any open source or shareware software or documentation and
disclaims any and all liability or damages resulting from use of said software or documentation. The following open
source software may be used in Oracle’s PeopleSoft products and the following disclaimers are provided.
Apache Software Foundation
This product includes software developed by the Apache Software Foundation (http://www.apache.org/). Copyright
1999-2000. The Apache Software Foundation. All rights reserved.
THIS SOFTWARE IS PROVIDED “AS IS” AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING,
BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE APACHE SOFTWARE FOUNDATION
OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
PeopleSoft Enterprise Performance Management 8.9 Implementing ETL: Frequently Asked Questions
OpenSSL
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit
(http://www.openssl.org/).
THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT “AS IS” AND ANY EXPRESSED OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL
PROJECT OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
SSLeay
This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). This product includes
software written by Tim Hudson (tjh@cryptsoft.com). Copyright (C) 1995-1998 Eric Young. All rights reserved.
THIS SOFTWARE IS PROVIDED BY ERIC YOUNG “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Loki Library
Copyright 2001 by Andrei Alexandrescu. This code accompanies the book: Alexandrescu, Andrei. “Modern C++
Design: Generic Programming and Design Patterns Applied”. Copyright (c) 2001. Addison-Wesley. Permission to
use, copy, modify, distribute and sell this software for any purpose is hereby granted without fee, provided that the
above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in
supporting documentation.
Helma Project
Copyright 1999-2004 Helma Project. All rights reserved. THIS SOFTWARE IS PROVIDED “AS IS” AND ANY
EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE HELMA PROJECT OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Helma includes third party software released under different specific license terms. See the licenses directory in
the Helma distribution for a list of these licenses.
Sarissa
This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public
License for more details.
You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free
Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
Table of Contents
4. Job Execution
4.1 Running Jobs
4.2 Job Validation
4.3 Debugging Jobs
4.4 Common Problems
4.5 Recovery
4.6 Reporting Errors
5. Customizations and Performance Enhancement
5.1 Customizations
5.2 Performance Enhancements
5.3 Miscellaneous
1. Identify all the relevant technical, implementation, and configuration documents that come with EPM 8.9.
Study this documentation before implementation. For a list of documentation, refer to the Appendix A FAQ.
2. Make a detailed list of all the EPM 8.9 products that have been purchased and their license codes. From
these, identify and enumerate the products you are going to implement and in what order.
3. Decide on a detailed implementation schedule depending on the EPM data marts/business units that you
are going to implement.
4. Review the list of ETL application software components (DSX files, Parameter files, DSParams files, XML
files, and so on) and identify the ones necessary for your requirements based on the implementation schedule. For
this, refer to "DSX Files Import Description.xls", to "EPM89InstallationGuide.PDF/Chapter 2: Configuring
Ascential DataStage for PeopleSoft Enterprise Performance Management", and to "Parameter and Source data
Files Information.xls".
5. Identify the list of database tables that are to be populated and the list of corresponding jobs that have to
be executed to populate these tables. Note that apart from the jobs that directly populate the relevant target
tables, you also have to identify all the dependent jobs, hash file load jobs, and so on. For this task, refer to
the Lineage document posted in Update ID 618278.
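The dependency tracing in this step amounts to a transitive closure over the lineage map. A minimal sketch in Python (the job names and the LINEAGE dictionary below are hypothetical illustrations, not taken from the Lineage document):

```python
from collections import deque

# Hypothetical lineage map: each job lists the jobs that must run before it
# (dependent sequences, hash file load jobs, and so on).
LINEAGE = {
    "J_Fact_PS_LEDGER":   ["J_Dim_PS_BUS_UNIT", "J_Hash_PS_CAL_DETP"],
    "J_Dim_PS_BUS_UNIT":  ["J_Hash_PS_BUS_UNIT"],
    "J_Hash_PS_CAL_DETP": [],
    "J_Hash_PS_BUS_UNIT": [],
}

def required_jobs(targets, lineage):
    """Collect every job needed, directly or transitively, to load the targets."""
    needed, queue = set(), deque(targets)
    while queue:
        job = queue.popleft()
        if job not in needed:
            needed.add(job)
            queue.extend(lineage.get(job, []))
    return needed
```

The point of the closure is that picking one target fact table pulls in its dimension jobs and their hash file load jobs automatically, mirroring what the Lineage document is used for.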
6. Perform all the non-ETL prerequisites. For this, refer to the relevant documentation and to the Setup
Manager.
2. Refer to the relevant database sizing document and perform sizing taking into consideration all the tables
that are going to be populated by the ETL application as well as tables used for reporting.
3. Run the delivered script for inserting a "Not Available" row into all relevant tables. This script inserts one
"Not Available" row into every relevant table; this row is a prerequisite for the ETL application. The script is
included in each Common Objects and Maps Bundle (from Bundle 9 onward) whenever there is a change
to it.
5. Find out the number of hash files that will be created for the subset of the ETL application that you are
going to implement. This can be done using the list of jobs that you have enumerated in earlier steps and
the list of hash files that are supplied along with EPM 8.9.
6. Calculate the space required for storing all of these hash files. Hash file sizing must take into account the
hash file properties and structure, as well as the amount of data associated with each hash file. Further
instructions for performing this are provided in Appendix A FAQ. Note that a buffer must be allocated for
future incremental data, and hence for growth in hash file size.
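This sizing can be roughed out with a simple calculation. A sketch, assuming a flat per-file storage overhead factor and a growth buffer (the hash file names, row counts, record widths, and both factors below are hypothetical; derive real figures from your hash file properties and Appendix A FAQ):

```python
def hash_file_bytes(row_count, avg_row_bytes, overhead=1.3, growth=0.5):
    """Rough size estimate for one hash file: raw data volume, scaled by an
    assumed storage-overhead factor, with headroom for future incremental loads."""
    return int(row_count * avg_row_bytes * overhead * (1 + growth))

# Hypothetical inventory of hash files in scope: name -> (rows, avg bytes/row).
hash_files = {
    "H_PS_BUS_UNIT": (5_000, 120),
    "H_PS_CAL_DETP": (200_000, 80),
}

# Total space to reserve on the volume that will hold the hash files.
total = sum(hash_file_bytes(rows, width) for rows, width in hash_files.values())
```

Tightening the overhead and growth factors per hash file, once real run data exists, gives a more defensible reservation than a single global multiplier.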
7. Decide where the hash files will be physically stored: in the DataStage server directory or elsewhere.
Space is also required for the DataStage server log file; refer to the documentation.
8. Allocate space for all the other input data files, such as XML files, parameter files, and .dat files.
2. Install the DataStage servers. Create separate servers for Dev/QA and for Production. Perform all steps
required for database configuration, depending on your source and target databases. More information on
this can be obtained from the Install and Upgrade Guide.
3. Install the DataStage client. Follow the steps in the Install and Upgrade guide.
4. Apply the latest patches for server and client. For more information on where to get the latest patches refer
to Appendix A FAQ.
1. Do a detailed analysis of the strategy for project creation. Refer to Appendix A FAQ. You may want to
decide whether to have a single project for the whole EPM 8.9 application or to have separate projects for
each mart. Refer to the configuration chapter for more information.
2. Create DataStage projects, one for Development, one for QA and one for Production. It is recommended
that the Production Project be on a separate DataStage server.
3. Import the DSParams file and set up the environmental variables. Refer to Appendix A FAQ for more
information. Enter the values for the environmental variables appropriately after studying your environment.
Read the “List of Environment Parameters.xls” for the list of environmental variables.
4. Classify the jobs into high, medium, and low volume based on the underlying database tables. Set
appropriate project defaults for array size, transaction size, IPC buffer, and other performance parameters.
Handle any exceptions and special cases by changing the value at the job level. Further fine-tuning can be
done by trial and error, based on the runtime performance data. Read the FAQ for more information.
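The classification above can be captured as a small lookup table with per-job overrides. A sketch, with entirely hypothetical thresholds and parameter values (your defaults must come from your own table sizes and trial runs):

```python
# Hypothetical project defaults per volume class; the real values must be
# tuned by trial and error against runtime performance data.
DEFAULTS = {
    "high":   {"array_size": 5000, "transaction_size": 10000, "ipc_buffer_kb": 1024},
    "medium": {"array_size": 1000, "transaction_size": 2000,  "ipc_buffer_kb": 512},
    "low":    {"array_size": 100,  "transaction_size": 500,   "ipc_buffer_kb": 128},
}

def classify(row_count):
    """Bucket a job by the size of its underlying database table (assumed cutoffs)."""
    if row_count > 1_000_000:
        return "high"
    if row_count > 50_000:
        return "medium"
    return "low"

def job_parameters(row_count, overrides=None):
    """Project defaults for the job's volume class, with per-job exceptions applied."""
    params = dict(DEFAULTS[classify(row_count)])
    params.update(overrides or {})
    return params
```

Keeping exceptions as explicit overrides, rather than editing the class defaults, preserves a clean baseline to return to between tuning experiments.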
5. Using the DataStage Manager, import the DSX files that have been identified as necessary into the
server.
6. Copy the necessary input files like XML files, Parameter files and *.dat files into appropriate folders.
7. Analyze the various job categories and the various job types.
8. Open a sample job from each category. Learn and understand the filter conditions in the source, the
update strategy, job design, job parameters, and so on. Refer to Appendix A FAQ, which refers to design
documentation.
9. Review the master run utility and create appropriate sequential file inputs. Analyze this feature and decide
on the different categories that you want to run using this utility. Refer to Appendix A FAQ, which gives
further information on this utility, its design, and its usage. Also see Chapter 5, which has detailed
information on this utility.
10. Analyze and study the master sequencers. You may have to customize a master sequencer before you
can implement; this may involve removing jobs that are not within the scope of your implementation.
Refer to Appendix A FAQ to see more on customizations.
11. Take a business process as an example and identify all the jobs that are required to run it. This information
can be obtained from the Lineage documentation. Run this as an example to learn how the jobs are
ordered, the interdependencies, the hash file usage, and so on.
2. Plan a scheduling strategy. Use either the scheduler available within DataStage Director or a third-party
tool. Do a sample run using the scheduling tool to verify that it meets all the expectations for scheduling
the application.
3. Review the utilities that are provided; refer to the relevant documentation. Examples of the functionality
provided by the utilities are hash file cleaning, job run status, and hash file backup.
4. Define the error validation strategy that you are going to use. For more information on the error handling in
jobs and the relevant environmental variables, ERR_VALIDATE and ERR_THRESHOLD, review “List of
Environment Parameters.xls” and Appendix A FAQ.
5. For more information on debugging jobs, and for related tips and techniques, refer to the relevant section
in Appendix A FAQ. Another feature that can be useful during job execution is performance statistics,
which show the rows written on each link and the rows written per second.
Once you have completed the import, verify that all the necessary jobs are imported along with all the routines,
shared containers, and so on. Also make sure that the jobs are in the compiled state.
In the Attach to Project dialog box, enter the server name/IP in Host system, enter the User name and Password
used to connect to the machine where the DataStage Server is installed, and select the project you want to
verify.
Routines Verification
Expand Routines in the left pane; you should see a category called "EPM 89_Routines". If it is not present,
the import of Common_Utilities_E_E1.dsx is not complete. Refer to EPM89_Routines_Details.xls for more details
about the routines.
Expand Shared Containers in the left pane; you should see two categories, "Incremental_Logic" (it should have
5 components) and "Language_Swap" (it should have 1 component), as shown below. If these are not present,
the import of Common_Utilities_E_E1.dsx is not complete.
SECTION 2.2: TO VIEW AND VERIFY THAT ALL JOBS ARE COMPILED AFTER IMPORT
Open DataStage Director by selecting Start, Programs, Ascential DataStage, DataStage Director.
In the Attach to Project dialog box, enter the server name/IP in Host system, enter the User name and Password
used to connect to the machine where the DataStage Server is installed, and select the project you want to
verify.
The folder structure looks like this (this is just a sample; your structure is not expected to look the same as the
screenshot):
Ensure View, Status is selected. Then verify that all jobs are in the Compiled state.
Note. If any job status shows as not compiled, perform the procedures in sections 2.2 and 2.3 to compile the jobs.
Otherwise move to the next task.
In the Attach to Project dialog box, enter the server name/IP in Host system, enter the User name and Password
used to connect to the machine where the DataStage Server is installed, and select the project for which you
need to compile the jobs.
Select Tools, Run Multiple Job Compile. In the DataStage Batch Job Compilation Wizard, select the Server,
Sequence, Only select uncompiled jobs, and Show job selection page check boxes.
The next window will list any jobs that are not compiled in the right pane.
The status for each job will be Compiled OK after the process is complete.
Note. For details on compiling jobs, refer to page 2-23 of the Ascential DataStage Designer Guide
(coredevgde.pdf), "Your First DataStage Project," Compiling a Job.
Setup – Dimension Mapper: job naming convention J_DimMap_PS_<TableName> (for example,
J_DimMap_PS_BUS_UNIT_GL).
The following screenshot shows an example of how an individual job in a project uses a subset of these parameters.
In the above screenshots, the Default Value is given as $PROJDEF. In this case at job runtime, the project default value of the
environment variable is retrieved. When a job is run individually, you can override the $PROJDEF with any required value.
For example, when a sequencer job is run, the window that appears is similar to this:
Once you have completed the database configuration, DSParam file changes, DSX files import and import
verification process, you can start running the jobs based on the SKUs for your warehouse.
The following example will explain the jobs that correspond to the Base tables. If your warehouse supports Related
Language tables, then you need to follow the same steps for the jobs that are present under the Language folder
for every module wherever applicable. However, the Language jobs have to be executed after the successful
completion of corresponding Base jobs.
The example will explain the jobs for Enterprise transaction systems (E source). If your warehouse has E1 source
then follow the same steps for E1 jobs.
The example assumes that your system has OWE applications. If it doesn’t support OWE, you can ignore the steps
that correspond to OWE jobs.
Warning! Do not run the jobs that are present under the category ReusableJobs. These are not specifically used
to load any target tables. These jobs will be automatically triggered by various Sequence jobs.
Note. All the server jobs relating to Hash files under the Load_Hash_Files category must be run before the
Sequence jobs under the Load_Tables category, because these hash files are used in other server jobs.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash on
your source system and choose to bypass the load for that hash file, any map that uses that hash file as a
lookup will fail because the hash does not exist.
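Given this note, it can be worth running a pre-flight check that the lookup hash files a map needs actually exist before the map is started. A sketch, assuming hash files appear as entries under a known directory (the map name, hash file names, and the MAP_LOOKUPS mapping are hypothetical illustrations):

```python
from pathlib import Path

# Hypothetical mapping of maps to the hash files they use as lookups.
MAP_LOOKUPS = {
    "J_Dim_PS_BUS_UNIT": ["H_PS_BUS_UNIT", "H_PS_CAL_DETP"],
}

def missing_hash_files(map_name, hash_dir, lookups=MAP_LOOKUPS):
    """Report lookup hash files that do not exist yet, so the map can be
    held back instead of failing at runtime."""
    return [h for h in lookups.get(map_name, [])
            if not (Path(hash_dir) / h).exists()]
```

Running such a check across all in-scope maps before a scheduled run surfaces the "hash does not exist" failure mode described above ahead of time, rather than mid-load.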
Option 1. Expand the Setup_E category on the left pane and then OWS, (Source Transactional System),
Base, Load_Hash_Files, Server
If you want the job to use the values that are defined in the DataStage Administrator, then click the Run
button. If you want to override the values, then type the appropriate values and click the Run button.
Option 2. Run the Master_Run_Utility present under Utilities\Job_Utils\ to create and load the Hash files.
Option 1. Expand the Setup_E category on the left pane and then OWS, (Source Transactional System),
Base, Load_Tables, Sequence
Now, run all the Sequence jobs that are present under this folder. This will load the OWS setup tables in
the EPM database for the required source transactional system.
Note. For E1 Setup OWS, you need to run the Hash load jobs and table jobs that are present under Setup_E1,
OWS, Base category.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash on
your source system and choose to bypass the load for that hash file, any map that uses that hash file as a
lookup will fail because the hash does not exist.
Option 1. Expand the Setup_E category on the left pane and then select Dimension_Mapper, Base,
Load_Hash_Files, Server.
If you want the job to use the values that are defined in the DataStage Administrator, then click the Run
button. If you want to override the values, then type the appropriate values and click the Run button.
Option 2. Run the Master_Run_Utility present under Utilities\Job_Utils\ to create and load the Hash files.
Option 1. Expand the Setup_E category on the left pane and then select Dimension_Mapper, Base,
Load_Tables, Sequence
Now, run all the Sequence jobs that are present under this folder. This will load the Dimension Mapper
setup tables in the EPM database.
Note. For E1 Dimension Mapper, you need to run the Hash load jobs and table jobs that are present under
Setup_E1, Dimension_Mapper, Base category.
Before running these jobs, make sure to complete all the setup in the PIA pages. The navigation paths are EPM
Foundation, EPM setup, Warehouse Sources & Bus. Units, and EPM Foundation, EPM setup, Common
Definitions.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Expand the Shared_Lookups category in the left pane, go through each folder, and run the jobs that are present
under those folders.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Option 1. Expand the Setup_E category on the left pane and then OWE, Base, Load_Tables, Sequence.
Run all the Sequence jobs that are present under this folder. This will load the OWE setup tables in the
EPM database.
Note. For E1 Setup OWE, you need to run the Hash load jobs and table jobs that are present under Setup_E1,
OWE, Base category
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Common dimension table jobs are spread across five folders: Business_Unit, Calendar, Currency,
Language, and Unit_Of_Measure. Each folder has Master Sequences to run the jobs.
The following is the order in which the master sequences need to be run for the E transactional system.
1. Calendar:
1.1 MSEQ_E_Hash_Calendar
1.2 MSEQ_E_OWE_BaseDim_Calendar
1.3 MSEQ_E_OWS_BaseDim_Calendar
2. Business_Unit
2.1 MSEQ_E_Hash_BU
2.2 MSEQ_E_OWE_BaseDim_BU
2.3 MSEQ_E_OWS_BaseDim_BU
3. Currency
3.1 MSEQ_E_Hash_Currency
3.2 MSEQ_E_OWE_BaseDim_Currency
4. Unit_Of_Measure
4.1 MSEQ_E_Hash_UOM
4.2 MSEQ_E_OWE_BaseDim_UOM
4.3 MSEQ_E_OWS_BaseDim_UOM
5. Language
5.1 MSEQ_E_Hash_Language
5.2 MSEQ_E_OWE_BaseDim_Language
The following is the order in which the master sequences need to be run for the E1 transactional system.
1. Calendar:
1.1 MSEQ_E1_Hash_Calendar
1.2 MSEQ_E1_BaseDim_Calendar
2. Currency
2.1 MSEQ_E1_Hash_Currency
2.2 MSEQ_E1_OWE_BaseDim_Currency
3. Business_unit:
3.1 MSEQ_E1_Hash_BU
3.2 MSEQ_E1_BaseDim_BU
4. Unit Of Measure
4.1 MSEQ_E1_Hash_UOM
4.2 MSEQ_E1_BaseDim_UOM
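The orders above are strict: each Hash master sequence must complete before its BaseDim counterpart. One way to drive such an ordered run from a script is through the DataStage `dsjob` command-line client; the sketch below builds the commands and stops at the first failure. The exact `dsjob` flag set and the project name are assumptions here; verify them against your DataStage server documentation.

```python
import subprocess

# Run order for the E1 transactional system, as listed above.
E1_ORDER = [
    "MSEQ_E1_Hash_Calendar", "MSEQ_E1_BaseDim_Calendar",
    "MSEQ_E1_Hash_Currency", "MSEQ_E1_OWE_BaseDim_Currency",
    "MSEQ_E1_Hash_BU", "MSEQ_E1_BaseDim_BU",
    "MSEQ_E1_Hash_UOM", "MSEQ_E1_BaseDim_UOM",
]

def dsjob_command(project, job):
    """Build a dsjob invocation that runs the job and waits for its status.
    The flag set is an assumption; check your dsjob command reference."""
    return ["dsjob", "-run", "-jobstatus", project, job]

def run_in_order(project, jobs, dry_run=True):
    """Run each master sequence in turn, stopping at the first failure so a
    BaseDim sequence never starts before its Hash sequence has succeeded."""
    for job in jobs:
        cmd = dsjob_command(project, job)
        if dry_run:
            print(" ".join(cmd))  # review the order without executing anything
            continue
        if subprocess.run(cmd).returncode != 0:
            raise RuntimeError(f"{job} failed; fix and restart from this job")
```

With dry_run=True the utility only prints the commands, which is a safe way to review the sequence before a real run.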
Note. For all the Dimension load jobs (Common dimension, Global Dimension, Local Dimension, and MDW
Dimension, D00s), you can customize error validation by setting the environment variables appropriately. To
skip error validation, set $ERR_VALIDATE to 'N'. To perform error validation, set $ERR_VALIDATE to 'Y'. You
can also specify a threshold limit for the error validation: for example, if you want the job to abort when a lookup
fails more than 50 times, set $ERR_VALIDATE to 'Y' and $ERR_THRESHOLD to 50.
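The behaviour described by these two variables can be summarized as a small decision rule. A sketch of the documented semantics only (the actual enforcement happens inside the delivered jobs, not in this code):

```python
def should_abort(err_validate, lookup_failures, err_threshold):
    """Mirror of the documented rule: with $ERR_VALIDATE set to 'Y', a job
    aborts once lookup failures exceed $ERR_THRESHOLD; with 'N', error
    validation is skipped entirely."""
    if err_validate != "Y":
        return False
    return lookup_failures > err_threshold
```

Note that with a threshold of 50, exactly 50 lookup failures does not abort the job; the 51st failure does.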
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash on
your source system and choose to bypass the load for that hash file, any map that uses that hash file as a
lookup will fail because the hash does not exist.
Option 1. Expand the CRM_E category on the left pane and then OWS, Base, Load_Hash_Files,
Sequence. If there are no jobs under the Sequence folder, then go to the Server folder instead.
Run all the jobs that are present under that folder. The idea is to create and load the Hash files that
are required to run the OWS jobs.
Option 2. Run the Master_Run_Utility present under Utilities\Job_Utils\ to create and load the Hash files.
Option 1. Expand the CRM_E category on the left pane and then OWS, Base, Load_Tables, Sequence.
Run the Sequence jobs under this category.
Option 2. Run the Master_Run_Utility present under Utilities\Job_Utils\ to load all the CRM OWS tables in
the EPM database.
Note. (1) The OWS folder has the jobs corresponding to a particular warehouse, for example CRM or HCM. The
prerequisite, therefore, is to identify the OWS jobs related to your warehouse(s) and delete the unwanted jobs;
alternatively, create your own master sequencer, drag and drop the jobs relating to your requirements into it,
and then execute that master sequencer. Otherwise, use the Master_Run_Utility to run the required sequence
jobs.
(2) For E1 OWS, you need to run the Hash load jobs and OWS table jobs that are present under the OWS_E1,
Base category.
(3) The OWS_E1 folder has the jobs corresponding to all warehouse OWS tables. The prerequisite, therefore, is
to identify the OWS jobs related to your warehouse(s) and delete the unwanted jobs; alternatively, create your
own master sequencer, drag and drop the jobs relating to your requirements into it, and then execute that
master sequencer. Otherwise, use the Master_Run_Utility to run the required sequence jobs.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash on
your source system and choose to bypass the load for that hash file, any map that uses that hash file as a
lookup will fail because the hash does not exist.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run the entire
Hash File load Server jobs.
Global_Dimensions_E, Master_Sequence
Option 2. Expand the Global_Dimensions_E category on the left pane and then OWS_To_MDW, Base,
Load_Hash_Files, Sequence. If there are no jobs under the Sequence folder, then go to the Server folder
instead. Run all the jobs that are present under that folder. The idea is to create and load the Hash files
that are required for Global dimension loading (dimensions shared across warehouses).
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run the entire
Dimension load Server jobs. This will load the Global dimension tables in the EPM database:
GLOBAL_DIMENSIONS_E, MASTER_SEQUENCE
Option 2. Expand the Global_Dimensions_E category on the left pane and then OWS_To_MDW, Base,
Load_Tables, Sequence.
Similarly, repeat the steps for OWE_To_MDW jobs under Global_Dimensions_E (if any).
Note. The GLOBAL_DIMENSION folder has the jobs corresponding to all warehouses. The prerequisite,
therefore, is to identify the jobs related to your requirements and delete the unwanted jobs; alternatively, create
your own master sequencer, drag and drop the jobs relating to your requirements into it, and then execute that
master sequencer.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash on
your source system and choose to bypass the load for that hash file, any map that uses that hash file as a
lookup will fail because the hash does not exist.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the
Hash File load Server jobs. The navigation path is given below.
Option 2. Expand the CRM_E category on the left pane and then select Local_Dimensions,
OWS_To_MDW, Base, Load_Hash_Files, Sequence. If there is no job under the Sequence folder, then go to
the Server folder (i.e.) CRM_E, Local_Dimensions, OWS_To_MDW, Base, Load_Hash_Files, Server.
Now, run all the jobs that are present under the current folder. The idea is to create and load the Hash files
that are required for Local dimension (Dimension shared within warehouse) loading.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the
Dimension load Server jobs. This will load the Local dimension tables in the EPM database.
Option 2. Expand the CRM_E category on the left pane and then Local_Dimensions, OWS_To_MDW,
Base, Load_Tables, Sequence.
Note. The LOCAL_DIMENSION folder has the jobs corresponding to all the SKU within a particular warehouse.
So, the prerequisite is to identify those jobs related to your requirements and delete the unwanted jobs or you can
create your own master sequencer, into which you can drag and drop the jobs relating to your requirement(s) and
then execute the master sequencer.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
For the CRM SKU, let's discuss the Sales Mart as an example. Each SKU will have a set of Business Processes.
We will discuss the Order Capture (Business Process) under the SKU Sales_Mart.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the
Hash File load Server jobs.
Expand the CRM_E category on the left pane and then Sales_Mart (SKU_Name), Order_Capture
(Business_Process), Master_Sequence.
Option 2. Expand the CRM_E category on the left pane and then Sales_Mart (SKU_Name),
Order_Capture (Business_Process), OWS_To_MDW, Dimensions, Base, Load_Hash_Files, Sequence.
If there is no job under the Sequence folder, then go to the Server folder (i.e.) CRM_E, Sales_Mart
(SKU_Name), Order_Capture (Business_Process), OWS_To_MDW, Base, Load_Hash_Files, Server.
Now, run all the jobs that are present under the current folder. The idea is to create and load the Hash files
that are required for Local dimension (Dimension shared within warehouse) loading.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the
Dimension load Server jobs. This will load the dimension tables in the EPM database.
Expand the CRM_E category on the left pane and then Sales_Mart (SKU_Name), Order_Capture
(Business_Process), Master_Sequence.
Option 2. Expand the CRM_E category on the left pane and then Sales_Mart (SKU_Name),
Order_Capture (Business_Process), OWS_To_MDW, Dimensions, Base, Load_Tables, Sequence.
If not running through the Master Sequence, run all of the Sequence jobs that are present under this folder
to populate all the dimensions in the EPM database.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the
Fact load Server jobs. This will load the fact tables in the EPM database.
Expand the CRM_E category on the left pane and then Sales_Mart (SKU_Name), Order_Capture
(Business_Process), Master_Sequence.
Option 2. Expand the CRM_E category on the left pane and then Sales_Mart (SKU_Name),
Order_Capture (Business_Process), OWS_To_MDW, Facts, Base, Load_Tables, Sequence.
If not running the Master Sequence, run all of the Sequence jobs that are present under this folder. This will
load the CRM Fact tables for the Sales Mart in the EPM database.
Similarly, repeat the process for other SKUs such as the Customer Mart, Marketing Mart, and Service Mart.
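The per-business-process run order walked through above (hash files, then dimensions, then facts) can be expressed as a fixed pipeline if you script the runs. This is a sketch only, assuming a caller-supplied run callback; the job names used below are placeholders, not delivered job names.

```python
# Phase order per business process, per the procedure above.
PHASE_ORDER = ("hash_files", "dimensions", "facts")

def run_business_process(phase_jobs, run_job):
    """Run every job of one business process, finishing each phase
    before the next so facts never load ahead of their dimensions.
    phase_jobs maps phase name -> list of job names."""
    executed = []
    for phase in PHASE_ORDER:
        for job in phase_jobs.get(phase, []):
            run_job(job)
            executed.append(job)
    return executed
```

The fixed tuple makes the ordering rule explicit: even if the dictionary lists facts first, they still run last.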
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Expand the OWE_E category on the left pane and then Global_D00, Base, Load_Hash_Files, Server. Now, run all the
jobs that are present under the current folder. The idea is to create and load the Hash files that are required to perform
the lookup operations for the Global OWE table load.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Expand the OWE_E category on the left pane and then Global_D00, Base, Load_Tables, Sequence.
© Copyright PeopleSoft Corporation 2004. All rights reserved. 31
PeopleSoft Enterprise Performance Management 8.9 Implementing ETL: Frequently Asked Questions
Now, run the Master sequence (if any); otherwise, run all the Sequence jobs that are present under this
folder. This will load the Global OWE tables in the EPM database.
Note. The Global_D00 folder has the jobs corresponding to all warehouses. So, the prerequisite is to identify those
jobs related to your requirements and delete the unwanted jobs, or you can create your own master sequencer,
into which you can drag and drop the jobs relating to your requirement(s) and then execute the master sequencer.
Note. For the loading sequence of Global D00 ETL Jobs, see Appendix-C Section 3.1
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. You need to follow these steps only if your system has OWE applications. Otherwise, you can ignore these
steps.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Option 1. Expand the OWE_E category on the left pane and then CRM (Warehouse_Name), Base,
Load_Hash_Files, Sequence.
If there is no job under the Sequence folder, then go to the Server folder (i.e.) OWE_E, CRM, Base,
Load_Hash_Files, Server. Also, go to CRM (Warehouse_Name), D00, Base, Load_Hash_Files, Server.
Now, run all the jobs that are present under these Server folders. The idea is to create and load the Hash
files that are required to perform the lookup operations for the OWE table load.
Option 1. Expand the OWE_E category on the left pane and then CRM (Warehouse_Name), Base,
Load_Tables, Sequence. Run all the Sequence jobs that are present under this folder.
Option 1. Expand the OWE_E category on the left pane and then CRM (Warehouse_Name), D00, Base,
Load_Tables, Sequence. Run all the Sequence jobs that are present under this folder. This will load the
CRM OWE D00 tables in the EPM database. Similarly, repeat the process for Language D00s.
Note. For the loading sequence of CRM D00 ETL Jobs, see Appendix-C Section 1.1
Option 1. Expand the OWE_E category on the left pane and then CRM (Warehouse_Name), F00, Base,
Load_Tables, Sequence. Run all the Sequence jobs that are present under this folder. This will load the
CRM OWE F00 tables in the EPM database.
Note. The OWE folder has the jobs corresponding to the CRM specific applications. So, the prerequisite is to
identify those jobs related to your requirements and delete the unwanted jobs or you can create your own master
sequencer, into which you can drag and drop the jobs relating to your requirement(s) and then execute the master
sequencer. Otherwise, use the Master_Run_Utility to run the required Sequence jobs.
This section gives an example of running jobs in the FMS Warehouse for Enterprise transaction systems.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Option 1. Expand the FMS_E category on the left pane and then OWS, Base, Load_Hash_Files,
Sequence.
Run the Master Sequence jobs that are present under this folder (if any) to run all of the Hash load Server
jobs. If there is no job under the Sequence folder, then go to the Server folder (i.e.) FMS_E, OWS, Base,
Load_Hash_Files, Server.
Now, run all the jobs that are present under the current folder. The idea is to create and load the Hash files
that are required to run OWS jobs.
Option 2. Run the Master_Run_Utility present under Utilities\Job_Utils\ to create and load the Hash files.
Option 1. Expand the FMS_E category on the left pane and then OWS, Base, Load_Tables, Sequence
and run the Sequence jobs under this category.
Note. The OWS folder has the jobs corresponding to a particular warehouse, for example CRM or HCM etc. So,
the prerequisite is to identify those OWS jobs related to your warehouse(s) and delete the unwanted jobs or you
can create your own master sequencer, into which you can drag and drop the jobs relating to your requirement(s)
and then execute the master sequencer.
Note. (1) For E1 OWS, you need to run the Hash load jobs and OWS table jobs that are present under OWS_E1,
Base category.
(2) The OWS_E1 folder has the jobs corresponding to all warehouse OWS tables. So, the prerequisite is to identify
the OWS jobs related to your warehouse(s) and delete the unwanted jobs, or you can create your own master
sequencer, into which you can drag and drop the jobs relating to your requirement(s) and then execute the master
sequencer.
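Because the OWS_E1 folder mixes jobs for every warehouse, trimming it down is essentially a partition step. Here is a sketch of that triage, assuming each job can be tagged with its warehouse; the tagging callback and the job names are hypothetical, since delivered job names do not encode the warehouse in any single uniform way.

```python
def partition_ows_jobs(jobs, warehouse_of, keep):
    """Split delivered OWS jobs into those to keep and those to delete,
    based on which warehouse each job belongs to. warehouse_of is a
    caller-supplied mapping from job name to warehouse tag."""
    kept, unwanted = [], []
    for job in jobs:
        (kept if warehouse_of(job) in keep else unwanted).append(job)
    return kept, unwanted
```

The "unwanted" list is what you would either delete from the project or simply leave out of your master sequencer.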
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the
Hash File load Server jobs: Global_Dimensions_E, Master_Sequence
Option 2. Expand the Global_Dimensions_E category on the left pane and then OWS_To_MDW, Base,
Load_Hash_Files, Sequence. If there is no job under the Sequence folder, then go to the Server folder (i.e.)
Global_Dimensions_E, OWS_To_MDW, Base, Load_Hash_Files, Server.
Now, run all the jobs that are present under the current folder. The idea is to create and load the Hash files
that are required for Global dimension (Dimension shared across warehouse) loading.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the
Dimension load Server jobs. This will load the Global dimension tables in the EPM database:
Global_Dimensions_E, Master_Sequence
Option 2. Expand the Global_Dimensions_E category on the left pane and then OWS_To_MDW, Base,
Load_Tables, Sequence and run all Sequence jobs.
Similarly, repeat steps 1 and 2 for OWE_To_MDW jobs under Global_Dimensions_E (if any).
Note. The GLOBAL_DIMENSION folder has the jobs corresponding to all warehouses. So, the prerequisite is to
identify those jobs related to your requirements and delete the unwanted jobs or you can create your own master
sequencer, into which you can drag and drop the jobs relating to your requirement(s) and then execute the master
sequencer.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the
Hash File load Server jobs: FMS_E, Local_Dimensions, Master_Sequence
Option 2. Expand the FMS_E category on the left pane and then Local_Dimensions, OWS_To_MDW,
Base, Load_Hash_Files, Sequence.
If there is no job under the Sequence folder, then go to the Server folder (i.e.) FMS_E, Local_Dimensions,
OWS_To_MDW, Base, Load_Hash_Files, Server.
Now, run all the jobs that are present under the current folder. The idea is to create and load the Hash files
that are required for Local dimension (Dimension shared within warehouse) loading.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the
Dimension load Server jobs. This will load the Local dimension tables in the EPM database: FMS_E,
Local_Dimensions, Master_Sequence
Option 2. Expand the FMS_E category on the left pane and then Local_Dimensions, OWS_To_MDW,
Base, Load_Tables, Sequence and run the Sequence jobs.
Note. The LOCAL_DIMENSION folder has the jobs corresponding to all the SKU within a particular warehouse.
So, the prerequisite is to identify those jobs related to your requirements and delete the unwanted jobs or you can
create your own master sequencer, into which you can drag and drop the jobs relating to your requirement(s) and
then execute the master sequencer.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
For the FMS SKU, let's discuss the ESA Mart as an example. Each SKU will have a set of Business Processes. In
this example we will discuss the Contracts (Business Process) under the SKU ESA_Mart. The master sequencers
(if any) that are to be run will be per business unit.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the
Hash File load Server jobs.
Expand the FMS_E category on the left pane and then ESA_Mart (SKU_Name), Contracts
(Business_Process), Master_Sequence.
Option 2. Expand the FMS_E category on the left pane and then ESA_Mart (SKU_Name), Contracts
(Business_Process), OWS_To_MDW, Dimensions, Base, Load_Hash_Files, Sequence.
If there is no job under the Sequence folder, then go to the Server folder (i.e.) FMS_E, Local_Dimensions,
OWS_To_MDW, Base, Load_Hash_Files, Server.
Now, run all the jobs that are present under the current folder. The idea is to create and load the Hash files
that are required for Local dimension (Dimension shared within warehouse) loading.
Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the Dimension
load Server jobs. This will load the dimension tables in the EPM database.
Expand the FMS_E category on the left pane and then ESA_Mart (SKU_Name), Contracts
(Business_Process), Master_Sequence
Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the Fact load Server jobs.
This will load the FMS Fact tables for the ESA Mart in the EPM database.
Expand the FMS_E category on the left pane and then ESA_Mart (SKU_Name), Contracts
(Business_Process), Master_Sequence.
Similarly, repeat the process for other SKUs like GL and Profitability Mart, Payables Mart, and Receivables
Mart.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Expand the OWE_E category on the left pane and then Global_D00, Base, Load_Hash_Files, Server. Now, run all the
jobs that are present under the current folder. The idea is to create and load the Hash files that are required to perform
the lookup operations for the Global OWE table load.
Expand the OWE_E category on the left pane and then Global_D00, Base, Load_Tables, Sequence.
Now, run the Master sequence (if any); otherwise, run all the Sequence jobs that are present under this
folder. This will load the Global OWE tables in the EPM database.
Note. The Global_D00 folder has the jobs corresponding to all warehouses. So, the prerequisite is to identify those
jobs related to your requirements and delete the unwanted jobs or you can create your own master sequencer, into
which you can drag and drop the jobs relating to your requirement(s) and then execute the master sequencer.
Note. For the loading sequence of Global D00 ETL Jobs, see Appendix-C Section 3.1
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. You need to follow these steps only if your system has OWE applications. Otherwise, you can ignore these
steps.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Option 1. Expand the OWE_E category on the left pane and then FMS (Warehouse_Name), Base,
Load_Hash_Files, Sequence. If there is no job under the Sequence folder, then go to the Server folder
(i.e.) OWE_E, FMS, Base, Load_Hash_Files, Server. Also, go to the folder FMS (Warehouse_Name),
D00, Base, Load_Hash_Files, Server.
Now, run all the jobs that are present under these Server folders. The idea is to create and load the Hash
files that are required to perform the lookup operations for the OWE table load.
Option 1. Expand the OWE_E category on the left pane and then FMS (Warehouse_Name), Base,
Load_Tables, Sequence. Run all the Sequence jobs that are present under this folder.
Option 1. Expand the OWE_E category on the left pane and then FMS (Warehouse_Name), D00, Base,
Load_Tables, Sequence. Run all the Sequences under this folder. This will load the FMS OWE D00
tables in the EPM database. Similarly, repeat the process for Language D00s.
Note. For the loading sequence of FMS D00 ETL Jobs, see Appendix-C Section 2.1
Option 1. Expand the OWE_E category on the left pane and then FMS (Warehouse_Name), F00, Base,
Load_Tables, Sequence. Run all the Sequence jobs that are present under this folder. This will load the
FMS OWE F00 tables in the EPM database.
Note. The OWE folder has the jobs corresponding to the FMS specific applications. So, the prerequisite is to
identify those jobs related to your requirements and delete the unwanted jobs or you can create your own master
sequencer, into which you can drag and drop the jobs relating to your requirement(s) and then execute the master
sequencer.
Note. The FMS Job J_F00_PS_JOB_REQ has dependency on two of the HCM warehouse staging jobs, namely
J_Stage_PS_HRS_JOB_OPENING and J_Stage_PS_HRS_OPNAPR_XREF. In other words, this fact will work
only if the customer has the PeopleSoft HRMS system and the staging tables PS_HRS_JOB_OPENING and
PS_HRS_OPNAPR_XREF are populated by running the corresponding staging jobs.
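This dependency can be guarded in a driver script by checking the two staging tables before launching J_F00_PS_JOB_REQ. The sketch below is illustrative only; the row-count callback stands in for whatever query mechanism you actually use against the EPM database.

```python
# Prerequisite staging tables named in the note above.
JOB_REQ_PREREQS = ("PS_HRS_JOB_OPENING", "PS_HRS_OPNAPR_XREF")

def can_run_job_req(row_count):
    """Return True only when both prerequisite HCM staging tables hold
    rows; otherwise J_F00_PS_JOB_REQ should be skipped.
    row_count is a caller-supplied function: table name -> row count."""
    return all(row_count(table) > 0 for table in JOB_REQ_PREREQS)
```

If either staging job has not been run, the check fails and the fact job is skipped rather than launched against empty lookups.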
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Option 1. Expand the HCM_E category on the left pane and then OWS, Base, Load_Hash_Files,
Sequence.
Run the Master Sequence jobs that are present under this folder (if any) to run all of the Hash load Server
jobs. If there is no job under the Sequence folder, then go to the Server folder (i.e.) HCM_E, OWS, Base,
Load_Hash_Files, Server.
Now, run all the jobs that are present under the current folder. The idea is to create and load the Hash files
that are required to run OWS jobs.
Option 2. Run the Master_Run_Utility present under Utilities\Job_Utils\ to create and load the Hash files.
Option 1. Expand the HCM_E category on the left pane and then OWS, Base, Load_Tables, Sequence
and run all the Sequence jobs that are present under this folder.
Option 2. Run the Master_Run_Utility present under Utilities\Job_Utils\ to run the Sequence jobs.
Note. The OWS folder has the jobs corresponding to a particular warehouse, for example CRM or HCM etc. So,
the prerequisite is to identify those OWS jobs related to your warehouse(s) and delete the unwanted jobs or you
can create your own master sequencer, into which you can drag and drop the jobs relating to your requirement(s)
and then execute the master sequencer. Otherwise, use the Master_Run_Utility to run the required Sequence jobs.
Note. (1) For E1 OWS, you need to run the Hash load jobs and OWS table jobs that are present under OWS_E1,
Base category.
(2) The OWS_E1 folder has the jobs corresponding to all warehouse OWS tables. So, the prerequisite is to identify
the OWS jobs related to your warehouse(s) and delete the unwanted jobs, or you can create your own master
sequencer, into which you can drag and drop the jobs relating to your requirement(s) and then execute the master
sequencer.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Option 1. Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the
Hash load Server jobs: Global_Dimensions_E, Master_Sequence.
Option 2. Expand the Global_Dimensions_E category on the left pane and then OWS_To_MDW, Base,
Load_Hash_Files, Sequence. If there is no job under the Sequence folder, then go to the Server folder (i.e.)
Global_Dimensions_E, OWS_To_MDW, Base, Load_Hash_Files, Server.
Now, run all the jobs that are present under the current folder. The idea is to create and load the Hash files
that are required for Global dimension (Dimension shared across warehouse) loading.
Run the Master Sequence jobs that are in the folder named Master_Sequence to run all of the Dimension
load Server jobs. This will load the Global dimension tables in the EPM database: Global_Dimensions_E,
Master_Sequence.
Expand the Global_Dimensions_E category on the left pane and then OWS_To_MDW, Base,
Load_Tables, Sequence to monitor the job status in detail.
Similarly, repeat the steps Loading Hash Files and Loading Dimensions for OWE_To_MDW jobs under
Global_Dimensions_E (if any).
Note. The GLOBAL_DIMENSION folder has the jobs corresponding to all warehouses. So, the prerequisite is to
identify those jobs related to your requirements and delete the unwanted jobs or you can create your own master
sequencer, into which you can drag and drop the jobs relating to your requirement(s) and then execute the master
sequencer.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Run the Master Sequence job (MSEQ_HCM_E_OWS_Hash_LocalDimensions) that is present under this
folder (if any) to run all of the Hash load Server jobs: HCM_E, Local_Dimensions, Master_Sequence.
This will run all the jobs that are present under the Load_Hash_Files folder. The idea is to create and load the
Hash files that are required for Local dimension (dimensions shared within a warehouse) loading.
Expand the HCM_E category on the left pane and then Local_Dimensions, OWS_To_MDW, Base,
Load_Hash_Files, Server to monitor the job status in detail.
Now, run the Master sequence (if any); otherwise, run all the Sequence jobs that are present under this folder. This will
load the Local dimension tables in the EPM database.
Expand the HCM_E category on the left pane and then Local_Dimensions, OWS_To_MDW, Base,
Load_Tables, Server to monitor the job status in detail. Similarly, repeat the process for Language
Dimensions.
Note. The LOCAL_DIMENSION folder has the jobs corresponding to all the SKU within a particular warehouse.
So, the prerequisite is to identify those jobs related to your requirements and delete the unwanted jobs or you can
create your own master sequencer, into which you can drag and drop the jobs relating to your requirement(s) and
then execute the master sequencer.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
For the HCM SKU, we will use the Compensation Mart as an example. Each SKU will have a set of Business
Processes. We will discuss the Benefits (Business Process) under the SKU Compensation_Mart.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Expand the HCM_E category on the left pane and then Compensation_Mart (SKU_Name), Benefits
(Business_Process), Master_Sequence.
Run the Master Sequence job (MSEQ_E_OWS_Hash_Benefits) that is present under this folder (if any) to
run all of the Hash load Server jobs. Expand the following folder to monitor the job status in detail:
HCM_E, Local_Dimensions, OWS_To_MDW, Base, Load_Hash_Files, Server.
The idea is to create and load the Hash files that are required for the HCM SKU-based dimensions.
Expand the HCM_E category on the left pane and then Compensation_Mart (SKU_Name), Benefits
(Business_Process), Master_Sequence.
Now, run the Master sequence (MSEQ_E_OWS_BaseDim_Benefits); otherwise, run all the Sequence jobs
that are present under this folder. This will load the dimension tables in the EPM database. Similarly,
repeat the process for Language Dimensions.
Expand the HCM_E category on the left pane and then Compensation_Mart (SKU_Name), Benefits
(Business_Process), Master_Sequence.
Now, run the Master sequence (MSEQ_E_OWS_Fact_Benefits); otherwise, run all the Sequence jobs that are present
under this folder. This will load the HCM Fact tables for the Compensation Mart in the EPM database.
Similarly, repeat the process for other SKUs such as the Learning And Development Mart, Recruiting Mart, and
Workforce Mart.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause those maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate them. For example, if a hash file has no source on your source system and you
choose to bypass its load, the missing hash file will cause any map that uses it as a lookup to fail.
Expand the OWE_E category on the left pane and then Global_D00, Base, Load_Hash_Files, Server.
Run all the jobs that are present under the Server folder. The idea is to create and load the Hash files that are required
to perform lookup operation for Global OWE table load.
Expand the OWE_E category on the left pane and then Global_D00, Base, Load_Tables, Sequence.
Run the Master sequence (if any); otherwise, run all the Sequence jobs that are present under this folder. This will
load the Global OWE tables in the EPM database.
Note. The Global_D00 folder has the jobs corresponding to all warehouses. So, the prerequisite is to identify those
jobs related to your requirements and delete the unwanted jobs or you can create your own master sequencer, into
which you can drag and drop the jobs relating to your requirement(s) and then execute the master sequencer.
Note. For the loading sequence of Global D00 ETL Jobs, see Appendix-C Section 3.1
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. You need to follow these steps only if your system has OWE applications. Otherwise, you can ignore these
steps.
Please use the following strategy to load HCM D00 and F00 ETL Jobs:
• Run all the initial Hash jobs in the folders OWE_E, HCM, D00, Base, Load_Hash_Files, Server
and OWE_E, HCM, F00, Base, Load_Hash_Files, Server.
• Run all the D00 jobs present under OWE_E, HCM, D00, Base, Load_Tables, Sequence in the
order specified. (See Appendix-C Section 5.1.)
• Run the selective HCM F00 ETL jobs present under OWE_E, HCM, F00, Base,
Load_Tables, Sequence in the order specified. (See Appendix-C Section 5.2.)
• Run the HCM F00 Hash jobs present under OWE_E, HCM, F00, Base, Load_Hash_Files, Server.
(See Appendix-C Section 5.3.)
• Run the HCM D00 jobs that are dependent on HCM F00 jobs in the order mentioned. (See
Appendix-C Section 5.4.)
• Run all the F00 jobs present under OWE_E, HCM, F00, Base, Load_Tables, Sequence in the order
specified. (See Appendix-C Section 5.5.)
Note. The OWE folder contains the jobs corresponding to the HCM-specific applications. As a prerequisite,
identify the jobs related to your requirements and delete the unwanted jobs, or create your own master
sequencer, drag and drop the jobs relating to your requirement(s) into it, and then execute the master
sequencer.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups on maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash file on
your source system and choose to bypass the load for that hash file, any map that uses the hash file as a lookup
will fail because the hash file does not exist.
Expand the SCM_E category on the left pane and then OWS, Base, Load_Hash_Files, Sequence.
If there is no job under the Sequence folder, go to the Server folder (that is, SCM_E, OWS, Base,
Load_Hash_Files, Server) and run all the jobs that are present there.
These jobs create and load the Hash files that are required to run the OWS jobs.
Expand the SCM_E category on the left pane and then OWS, Base, Load_Tables, Sequence. Run the Master Utility to
load the SCM OWS tables in the EPM database.
Note. (1) The OWS folder contains the jobs corresponding to a particular warehouse, for example CRM or HCM.
As a prerequisite, identify the OWS jobs related to your warehouse(s) and delete the unwanted jobs, or create
your own master sequencer, drag and drop the jobs relating to your requirement(s) into it, and then execute the
master sequencer.
(2) For E1 OWS, run the Hash load jobs and OWS table jobs that are present under the OWS_E1, Base
category.
(3) The OWS_E1 folder contains the jobs corresponding to the OWS tables of all warehouses. Again, identify
the OWS jobs related to your warehouse(s) and delete the unwanted jobs, or create your own master
sequencer, drag and drop the jobs relating to your requirement(s) into it, and then execute the master
sequencer.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups on maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash file on
your source system and choose to bypass the load for that hash file, any map that uses the hash file as a lookup
will fail because the hash file does not exist.
Run the Master Sequence jobs in the folder named Master_Sequence to run all of the Hash File
load Server jobs. The navigation path is Global_Dimensions_E, Master_Sequence.
Expand the Global_Dimensions_E category on the left pane and then OWS_To_MDW, Base,
Load_Hash_Files, Sequence.
If there is no job under the Sequence folder, go to the Server folder (that is, Global_Dimensions_E,
OWS_To_MDW, Base, Load_Hash_Files, Server) and run all the jobs that are present there.
These jobs create and load the Hash files that are required for Global dimension loading (dimensions
shared across warehouses).
Run the Master Sequence jobs in the folder named Master_Sequence to run all of the Dimension
load Server jobs. This loads the Global dimension tables into the EPM database. The navigation path is
Global_Dimensions_E, Master_Sequence.
Note. The GLOBAL_DIMENSION folder contains the jobs corresponding to all warehouses. As a prerequisite,
identify the jobs related to your requirements and delete the unwanted jobs, or create your own master
sequencer, drag and drop the jobs relating to your requirement(s) into it, and then execute the master
sequencer.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups on maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash file on
your source system and choose to bypass the load for that hash file, any map that uses the hash file as a lookup
will fail because the hash file does not exist.
Run the Master Sequence jobs in the folder named Master_Sequence to run all of the Hash File load Server
jobs.
Expand the SCM_E category on the left pane and then Local_Dimensions, OWS_To_MDW, Base,
Load_Hash_Files, Sequence. If there is no job under the Sequence folder, go to the Server folder (that is,
SCM_E, Local_Dimensions, OWS_To_MDW, Base, Load_Hash_Files, Server).
Now, run all the jobs that are present under that folder. These jobs create and load the Hash files
that are required for Local dimension loading (dimensions shared within a warehouse).
Run the Master Sequence jobs in the folder named Master_Sequence to run all of the Dimension
load Server jobs. This loads the Local dimension tables into the EPM database.
Note. The LOCAL_DIMENSION folder contains the jobs corresponding to all the SKUs within a particular
warehouse. As a prerequisite, identify the jobs related to your requirements and delete the unwanted jobs, or
create your own master sequencer, drag and drop the jobs relating to your requirement(s) into it, and then
execute the master sequencer.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
For the SCM SKU, we use the Fulfillment and Billing Mart as an example. Each SKU has a set of business
processes; in this example we use the Billing business process under the Fulfillment_And_Billing_Mart SKU.
Note. Hash files are required to perform some lookups on maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash file on
your source system and choose to bypass the load for that hash file, any map that uses the hash file as a lookup
will fail because the hash file does not exist.
Run the Master Sequence jobs in the folder named Master_Sequence to run all of the Hash File
load Server jobs. Expand the SCM_E category on the left pane and then Fulfillment_And_Billing_Mart
(SKU_Name), Billing (Business_Process), Master_Sequence.
Expand the SCM_E category on the left pane and then Fulfillment_And_Billing_Mart (SKU_Name), Billing
(Business_Process), OWS_To_MDW, Dimensions, Base, Load_Hash_Files, Sequence.
If there is no job under the Sequence folder, go to the Server folder (that is, SCM_E,
Fulfillment_And_Billing_Mart (SKU_Name), Billing (Business_Process), OWS_To_MDW, Base,
Load_Hash_Files, Server).
Now, run all the jobs that are present under that folder. These jobs create and load the Hash files
that are required for Local dimension loading (dimensions shared within a warehouse).
Run the Master Sequence jobs in the folder named Master_Sequence to run all of the Dimension load
Server jobs. This loads the dimension tables into the EPM database.
Expand the SCM_E category on the left pane and then Fulfillment_And_Billing_Mart (SKU_Name), Billing
(Business_Process), Master_Sequence.
Run the Master Sequence jobs in the folder named Master_Sequence to run all of the Fact load
Server jobs. This loads the SCM Fact tables for the Fulfillment and Billing Mart into the EPM database.
Expand the SCM_E category on the left pane and then Fulfillment_And_Billing_Mart (SKU_Name), Billing
(Business_Process), Master_Sequence.
Similarly, repeat the process for the other SKUs, such as Inventory Mart, Manufacturing Mart, Procurement
Mart, Spend Mart, and Supply Chain Planning Mart.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. Hash files are required to perform some lookups on maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash file on
your source system and choose to bypass the load for that hash file, any map that uses the hash file as a lookup
will fail because the hash file does not exist.
Expand the OWE_E category on the left pane and then Global_D00, Base, Load_Hash_Files, Server. Now, run all the
jobs that are present under the current folder. These jobs create and load the Hash files that are required to perform
lookup operations for the Global OWE table load.
Expand the OWE_E category on the left pane and then Global_D00, Base, Load_Tables, Sequence.
Now, run the Master sequence (if any); otherwise, run all the Sequence jobs that are present under this folder. This
loads the Global OWE tables into the EPM database.
Note. The Global_D00 folder contains the jobs corresponding to all warehouses. As a prerequisite, identify the
jobs related to your requirements and delete the unwanted jobs, or create your own master sequencer, drag and
drop the jobs relating to your requirement(s) into it, and then execute the master sequencer.
Note. For the loading sequence of the Global D00 ETL jobs, see Appendix-C, Section 3.1.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Note. You need to follow these steps only if your system has OWE applications. Otherwise, you can ignore these
steps.
Note. Hash files are required to perform some lookups on maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash file on
your source system and choose to bypass the load for that hash file, any map that uses the hash file as a lookup
will fail because the hash file does not exist.
Option 1. Expand the OWE_E category on the left pane and then SCM (Warehouse_Name), D00, Base,
Load_Hash_Files, Sequence. If there is no job under the Sequence folder, go to the Server folder (that is,
OWE_E, SCM, D00, Base, Load_Hash_Files, Server).
Now, run all the jobs that are present under that folder. These jobs create and load the Hash files
that are required to perform lookup operations for the OWE table load.
Option 2. Run the Master_Run_Utility present under Utilities\Job_Utils to create and load the Hash files.
Option 1. Expand the OWE_E category on the left pane and then SCM (Warehouse_Name), D00, Base,
Load_Tables, Sequence. Run all the Sequence jobs that are present under this folder. This loads the
SCM OWE D00 tables into the EPM database. Similarly, repeat the process for the Language D00s.
Note. For the loading sequence of the SCM D00 ETL jobs, see Appendix-C, Section 4.1.
Option 1. Expand the OWE_E category on the left pane and then SCM (Warehouse_Name), F00, Base,
Load_Tables, Sequence. Run all the Sequence jobs that are present under this folder. This loads the
SCM OWE F00 tables into the EPM database.
Note. The OWE folder contains the jobs corresponding to the SCM-specific applications. As a prerequisite,
identify the jobs related to your requirements and delete the unwanted jobs, or create your own master
sequencer, drag and drop the jobs relating to your requirement(s) into it, and then execute the master
sequencer.
Note. This step is required only if your warehouse or data mart has Trees or Recursive hierarchy data.
Before running this step, make sure all the underlying data tables (OWS/Dimension/D00) are populated.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
This step is required only when there is a source DB Tree Hierarchy or an E1 Address Book based Recursive Hierarchy.
For a source DB Tree Hierarchy, expand the EPM89_Utilities category on the left pane and then
Tree_Recursive_Hierarchy, EsourceTrees, StagingTreeMetadata. Depending on the source transactional system,
select the required category (for example, CRM). Then expand the Sequence category and run all the sequence jobs
in this category.
For an E1 Address Book based Recursive Hierarchy, run the Sequence job(s) under EPM89_Utilities,
Tree_Recursive_Hierarchy, Recursive_Hierarchy, E1_AddrBookRH, Staging.
Complete the Setup for Tree and Recursive Hierarchy definitions in the PIA pages. The Navigation path is EPM
Foundation, EPM Setup, Common Definitions, Hierarchy Group Definition.
Note. Hash files are required to perform some lookups on maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash file on
your source system and choose to bypass the load for that hash file, any map that uses the hash file as a lookup
will fail because the hash file does not exist.
Expand the EPM89_Utilities category on the left pane and then Tree_Recursive_Hierarchy, Load_Hash_Files. Run
all the hash file jobs in this category.
Expand the EPM89_Utilities category on the left pane and then Init_Process.
Run the job J_Hierarchy_Startup_Process. It has two job parameters: Hierarchy Group ID and Hierarchy Sequence
Number. To process all the hierarchies under a single group, specify the Hierarchy Group ID value and leave the
Hierarchy Sequence Number blank. To process a single hierarchy, specify both the Hierarchy Group ID and the
Hierarchy Sequence Number.
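As a rough illustration of how these two parameters select the processing scope, the following Python sketch mimics the documented behavior. The function name is hypothetical; the real J_Hierarchy_Startup_Process job evaluates its parameters internally.

```python
def hierarchy_run_scope(group_id, seq_num=""):
    """Describe which hierarchies a run would process for a given
    Hierarchy Group ID and optional Hierarchy Sequence Number.

    A blank sequence number means every hierarchy in the group is
    processed; a specific number restricts the run to one hierarchy.
    (Illustrative only.)
    """
    if not group_id:
        raise ValueError("Hierarchy Group ID is required")
    if seq_num == "":
        return "all hierarchies in group " + group_id
    return "hierarchy " + seq_num + " in group " + group_id
```

For example, passing only a group ID selects every hierarchy in that group, while also passing a sequence number narrows the run to that single hierarchy.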
Note. If the source and EPM database base languages are different, you must ensure that the EPM base table
has descriptions in the EPM base language and that the related language tables have descriptions in the EPM
installed foreign languages. The Language Swap ETL utility is provided for this purpose.
Language Swap is an OWS post-process; before running this utility, make sure to run all the Staging jobs
for base and language tables.
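A minimal sketch of the swap idea, under stated assumptions (the function, its parameters, and the dictionary representation are hypothetical; the real utility works on EPM base and related-language tables):

```python
def language_swap(epm_base_lang, base_desc_lang, base_desc, lang_descs):
    """Ensure the base record holds the description in the EPM base
    language, moving the current base-language description into the
    related-language set.

    base_desc_lang -- language of the description currently on the base row
    lang_descs     -- dict of language code -> description from the
                      related language table
    """
    if base_desc_lang == epm_base_lang:
        return base_desc, lang_descs          # already consistent
    if epm_base_lang in lang_descs:
        new_base = lang_descs.pop(epm_base_lang)
        lang_descs[base_desc_lang] = base_desc  # old base desc moves out
        return new_base, lang_descs
    return base_desc, lang_descs              # no translation available
```

For instance, if the EPM base language is French but the staged base row carries an English description, the French description is promoted to the base row and the English one moves to the related-language set.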
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
To use the values that are defined in the DataStage Administrator, click the Run button. To override the values,
enter the appropriate values and then click the Run button.
Note. This step is required only if you need to populate the reporting amount and reporting currency code columns
in fact tables.
Complete the Setup for MDW Currency Conversion definitions in the PIA pages. The navigation path is EPM
Foundation, EPM Setup, Common Definitions.
MDW Currency conversion is a post-process that is applied after the initial data load of all Fact (F00)
tables. Before you run this utility, make sure to run all the Fact jobs.
Open the DataStage Director client by selecting Start, Programs, Ascential DataStage, DataStage Director.
Run the Sequence job SEQ_J_Run_CurrencyConversion and supply the required Currency Conversion Rule for the
fact(s) to process. These rules are the ones defined during setup in PIA. Currency conversion is performed for all
the facts grouped under the specified rule.
At the end of the run, the utility generates a job status report in the log file.
The Master_Run_Utility is available in the Common_Utilities_E_E1.dsx file. After you import the dsx file, you can
locate this job under the category Utilities\Job_Utils, as shown in the following screen shot.
The input file for the master run utility is a flat file that contains the list of job names for the utility to run.
The file must be available on the DataStage server. You can include comments within the flat file by preceding
those lines with an asterisk ("*"). The flat file can also contain blank lines; the master run utility ignores them.
After each job name you need to include a comma (“,”), then a space, and then either the character “N” or “D”; this
character signifies the job dependency. “N” indicates an independent job, and “D” indicates a dependent job. If any
of the independent jobs fail, the utility will log the information and proceed to the next job. If any of the dependent
jobs fail then the utility is aborted.
The following screenshot shows the contents of an example flat file, named
“Sample_HCM_E_OWS_Base_HashFiles.txt.”
Notice that the first three lines are comments, since they begin with "*".
Let's consider a simple example of running all the HCM OWS Hash Load jobs. These jobs are not dependent on
each other; therefore, the flat file should list the jobs with "N" as the dependency indicator, as shown in the
following screenshot.
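The file format and run semantics described above can be sketched in Python. Only the format ("*" comments, blank lines, and the ", N"/", D" dependency flag) and the continue-vs-abort behavior come from the document; the sample job names and helper functions are made up for illustration.

```python
def parse_job_list(text):
    """Parse a Master_Run_Utility input file: lines starting with '*'
    are comments, blank lines are ignored, and each entry is
    'JOB_NAME, N' (independent) or 'JOB_NAME, D' (dependent)."""
    jobs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("*"):
            continue
        name, flag = [part.strip() for part in line.rsplit(",", 1)]
        if flag not in ("N", "D"):
            raise ValueError("bad dependency flag: " + line)
        jobs.append((name, flag))
    return jobs

def run_all(jobs, run_job):
    """Mimic the documented control flow: log and continue past a
    failed independent ('N') job, but abort the utility when a
    dependent ('D') job fails."""
    succeeded, failed = [], []
    for name, flag in jobs:
        if run_job(name):
            succeeded.append(name)
        else:
            failed.append(name)
            if flag == "D":
                raise RuntimeError("dependent job failed: " + name)
    return succeeded, failed

# Hypothetical sample file, modeled on Sample_HCM_E_OWS_Base_HashFiles.txt
sample = """\
* HCM OWS hash-load jobs
* All jobs are independent of one another

J_Hash_PS_JOB_A, N
J_Hash_PS_JOB_B, N
"""
```

Because both entries carry "N", a failure of either job is recorded and the run continues to the next entry rather than aborting.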
To run the master run utility, open the DataStage Director and navigate to Category Utilities, Job_Utils. Select
the job Master_Run_Utility and click Run. The system prompts for the parameters, as shown in the following
screen shot:
Using our example, in the Enter the File path parameter you would use the value
"Sample_HCM_E_OWS_Base_HashFiles.txt", specify "No" for Restart Recovery, and then click the Run button.
Choose View, Log from the Director Menu to view the logfile.
The system runs the jobs in serial mode and at the end of the run, it generates a report that includes the following
information:
Jobs that are in a "not compiled" state, or jobs whose names are incorrect.
If, for example, there are 100 jobs in the input flat file, and 95 of the jobs complete successfully and 5 jobs finish
with warnings, then the utility generates the following report.
The report does not include entries for aborted jobs, because none apply in this example.
If you double-click the highlighted lines (log entries containing the keyword COMPLETED SUCCESSFULLY), the
system shows the list of jobs that have completed:
If you double-click the highlighted lines (log entries containing the keyword ABORTED), the system shows the
list of jobs that have aborted:
Even though 5 jobs aborted in this example, the status of Master_Run_Utility will be "Finished with
warnings", because no dependent job failed.
If you want to enforce a dependency, suffix the job name with 'D' (dependent job) in the input
flat file. If any dependent job fails, the utility does not proceed to the next job; instead, the
Master_Run_Utility itself aborts.
For example, suppose the input file is SAMPLE_HCM_E_GLOBAL_DIMENSIONS_Base_Tables.txt. This file
contains the list of Global Dimension jobs that are owned by HCM.
In this case, all the dimensions are dependent on each other. So, when you run the utility initially, specify
the complete file path for SAMPLE_HCM_E_GLOBAL_DIMENSIONS_Base_Tables.txt and "No" for Restart
Recovery, then click the Run button.
If SEQ_J_Dim_PS_D_POS aborts, the entire utility aborts and generates the report. Suppose you diagnose
the issue in the SEQ_J_Dim_PS_D_POS job and apply a fix. The next time you run the utility, you can
specify "Yes" for Restart Recovery so that it resumes directly from SEQ_J_Dim_PS_D_POS, skipping
the jobs that already ran successfully the previous time.
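Under stated assumptions (the utility resumes from the aborted job, inclusive, and the real utility tracks this state itself), the restart-recovery selection can be sketched as:

```python
def jobs_to_run(all_jobs, aborted_job):
    """Restart-recovery sketch: skip every job that finished in the
    previous run and resume from the aborted job (inclusive).
    Illustrative only; the helper name and job list are hypothetical,
    apart from SEQ_J_Dim_PS_D_POS, which comes from the example above."""
    start = all_jobs.index(aborted_job)
    return all_jobs[start:]
```

With Restart Recovery set to "Yes", only the aborted job and the jobs after it in the input file would be attempted again.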
This utility can be used to run any jobs, such as OWS, MDW, or OWE jobs.
Features of Get_Job_RunTime_Statistics
Location of Get_Job_RunTime_Statistics
The Get_Job_RunTime_Statistics is available in the Common_Utilities_E_E1.dsx file. After you import this dsx file,
you can locate this job under the category Utilities\Job_Utils, as shown:
Open the DataStage Director and expand the category Utilities, Job_Utils. Select the job
Get_Job_RunTime_Statistics and run the job. The system prompts for the Output file, as shown:
Specify the complete path of the output csv file and run the job. When the job is complete, it will generate the
output csv file with all the information about the jobs that are present in that project.
Features of PurgeAllHashFiles
Deletes all the Hash Files (including the Hash File structure) in a project.
This PurgeAllHashFiles is available in the Common_Utilities_E_E1.dsx file. After you import this dsx file, you can
locate this utility under the category Utilities\Hash_Utils\PurgeAll, as shown:
Open the DataStage Director and expand the category Utilities, Hash_Utils, PurgeAll. Select the job
PurgeAllHashFiles and Run this job. The system prompts for the Purging Type as shown in the following screen
shot:
Select C to clear the Hash files content, or D to delete the Hash file from the server.
Note. This utility will clear/delete the content of all the Hash files in a project.
Features of Clear_Job_Log
The Clear_Job_Log utility is available in the Common_Utilities_E_E1.dsx file. After you import the dsx file, you can
locate this job under the category Utilities\Job_Utils, as shown:
Open the DataStage Director and expand the category Utilities, Job_Utils. Select the job Clear_Job_Log and run
this job. The system prompts for the Output File name as shown:
Enter the complete path for the file to generate the output text file in the DataStage server machine.
Features of Backup_DateTime_HashFiles
Backs up the hash file(s) that contain the last run datetime values for all jobs to a sequential file.
Location of Backup_DateTime_HashFiles
The Backup_DateTime_HashFiles utility is available in the Common_Utilities_E_E1.dsx file. After you import this
dsx file, you can locate this job under the category Utilities\Hash_Utils\Backup, as shown:
Open the DataStage Director and expand the category Utilities, Hash_Utils, Backup. Select the job
Backup_DateTime_HashFiles and run this job. It will prompt for the Backup File Directory as shown:
To run the utility using the values that are defined in the DataStage Administrator, click the Run button. To override
the values, type the appropriate values and then click the Run button.
Once the job is complete, you can view the control data in the following backup files:
Features of Backup_SurrogateKey_HashFile
Backs up the hash file that contains the next available Surrogate Key values to a sequential file.
Location of Backup_SurrogateKey_HashFile
The Backup_SurrogateKey_HashFile utility is available in the Common_Utilities_E_E1.dsx file. After you import the
dsx file, you can locate this job under the category Utilities\Hash_Utils\Backup, as shown:
Open the DataStage Director and expand the category Utilities, Hash_Utils, Backup. Select the job
Backup_SurrogateKey_HashFile and run this job. The system prompts for the Backup File Directory, as shown.
To run the utility using the values that are defined in the DataStage Administrator, click the Run button. To override
the values, enter the appropriate values and then click the Run button.
Once the job is complete, you can view the results in the file <Backup File Directory>\SID_SEQFile.txt.
Features of Recovery_DateTime_HashFiles
Recovers the hash file(s) that contain the last run datetime values for all jobs from the backed up sequential file into
the current project.
Location of Recovery_DateTime_HashFiles
The Recovery_DateTime_HashFiles utility is available in the Common_Utilities_E_E1.dsx file. After you import the
dsx file, you can locate this job under the category Utilities\Hash_Utils\Recovery, as shown:
Open the DataStage Director and expand the category Utilities, Hash_Utils, Recovery. Select the job
Recovery_DateTime_HashFiles and run this job. The system prompts for the Backup File Directory, as shown.
To run the utility using the values that are defined in the DataStage Administrator, click the Run button. To override
the values, enter the appropriate values and then click the Run button.
This utility recovers the next available surrogate key value control data for all the SID parameter values (dimension
table names or the unique ID 'EPM') into the current project.
Features of Recovery_SurrogateKey_HashFile
Recovers the hash file that contains the next available Surrogate Key values from the backed up sequential file into
the current project.
Location of Recovery_SurrogateKey_HashFile
The Recovery_SurrogateKey_HashFile utility is available in the Common_Utilities_E_E1.dsx file. After you import
the dsx file, you can locate this job under the category Utilities\Hash_Utils\Recovery, as shown:
Open the DataStage Director and expand the category Utilities, Hash_Utils, Recovery. Select the job
Recovery_SurrogateKey_HashFile and run this job. The system prompts for the Backup File Directory, as shown.
To run the utility using the values that are defined in the DataStage Administrator, click the Run button. To override
the values, enter the appropriate values and then click the Run button.
Features of Batch::Reset_MaxDateTime
Resets the last run datetime value of a single job to the specified value.
Resets the last run datetime value of all the jobs to the specified value.
If the reset datetime value is not specified, then it is reset to a minimum default value.
Location of Batch::Reset_MaxDateTime
The Batch::Reset_MaxDateTime utility is available in the Common_Utilities_E_E1.dsx file. After you import the dsx
file, you can locate this job under the category Utilities\Hash_Utils\ Reset, as shown:
Open the DataStage Director and expand the category Utilities, Hash_Utils, Reset. Select the job
Batch::Reset_MaxDateTime and run this job. The system prompts for the following parameters, as shown.
For the first parameter, enter the name of the single job whose datetime must be reset. If the reset is required for
all jobs, enter 'All'.
For the second parameter, enter 'E' for any (or all) jobs except E1 staging jobs, or 'E1' for any (or all) E1
staging jobs.
For the third parameter, enter the reset datetime value. If it is left empty, the value is reset to '1753-01-01 00:00:00'
when the second parameter is 'E', or to '-1' when it is 'E1'.
Features of Batch::Reset_SurrogateKey
Resets the Surrogate Key value (SID) to 1 for the specified dimension or for all dimensions.
Location of Batch::Reset_SurrogateKey
The Batch::Reset_SurrogateKey utility is available in the Common_Utilities_E_E1.dsx file. After you import the dsx
file, you can locate this job under the category Utilities\Hash_Utils\ Reset, as shown:
Open the DataStage Director and expand the category Utilities, Hash_Utils, Reset. Select the job
Batch::Reset_SurrogateKey and run this job. The system prompts for the following parameter, as shown.
If the SID reset is required for a single dimension, then enter that dimension name. If reset is required for all
dimensions, then enter ‘All’.
Appendix A – FAQ
1. INSTALLATION
EPM 8.9 supports Ascential DataStage version 7.5 server edition and Ascential DataStage version 7.5.1A.
• What if the customer is already using an earlier version of DataStage, either as part of a non-PeopleSoft
install or as part of EPM 8.8 SP2?
Since EPM 8.9 supports only Ascential DataStage version 7.5, a fresh install of DataStage version 7.5 must be
performed on the server. Keep in mind that two DataStage server versions cannot coexist on one server, while
DataStage client version 7.5 can coexist with an earlier client version.
What are the main differences between the prior DataStage version 7.1 and the latest version 7.5?
An exhaustive list of enhancements in version 7.5 is available in the DataStage release notes, which can be
accessed (after the DataStage install) by selecting Start, Programs, Ascential DataStage, DataStage Release Notes.
What are the differences in ETL content between EPM 8.9 and previous EPM releases?
                         EPM 8.9                         Previous EPM releases
Types of Staging jobs    CRC, DTTM (E & E1)              DTTM (E & E1)
Source to Staging        Ascential DataStage (E & E1)    App. Engine/Informatica (E & E1)
Staging to MDW           Ascential DataStage (E & E1)    DataMart Builder (E), Informatica (E1)
What are the different Ascential software components that are being delivered along with EPM 8.9 to
customers as a part of the OEM agreement?
Ascential DataStage 7.5 and Ascential MetaStage 7.5 are the two Ascential software products that are
delivered.
What are QualityStage, ProfileStage, AuditStage, MetaStage and Parallel Extender? How can these help
the customer?
Ascential QualityStage
Ascential QualityStage provides a powerful framework for developing and deploying data investigation,
standardization, enrichment, probabilistic matching and survivorship operations. For use in transactional,
operational, or analytic applications, in batch and real-time, the same services are seamlessly deployed to
facilitate data validation, cleansing or master data entity consolidation for customers, locations and
products.
Ascential ProfileStage
Ascential ProfileStage, a key component in Ascential Enterprise Integration Suite, is a data profiling and
source system analysis solution. Ascential ProfileStage completely automates this all-important first step in
data integration, dramatically reducing the time it takes to profile data from months to weeks or even days.
Ascential ProfileStage also drastically reduces the overall time it takes to complete large-scale data
integration projects, by automatically creating ETL job definitions that are run by Ascential DataStage.
Ascential AuditStage
Ascential MetaStage
Ascential MetaStage, an enterprise metadata directory, provides business users and IT professionals with
analysis, reporting, and management capabilities for corporate-wide metadata integration. It delivers
graphical assessment of the impact of change and stewardship of the business and technical meaning and
origin of a corporation's data.
DataStage Parallel Extender (DS-PX) is a highly scalable parallel processing infrastructure package for the
development and execution of data integration, data warehousing, business intelligence and analytical
applications.
Password- and role-based security can be implemented effectively in DataStage at the project level from the
DataStage Administrator. For detailed information on implementing this, read Section 2-1 in the "DataStage
Administrator Guide" if you have the server installed on Windows, or Section 3-2 in the "DataStage
Administrator Guide" if you have the server installed on Unix.
EPM 8.9 accesses data on databases using the DRS stage. The user ID and password for accessing the
databases are parameterized as environment variables, and the password parameter can be set as an
encrypted field in the DataStage Administrator. This ensures data security by restricting access to the
database passwords.
1.2 Documentation
What are the ETL related documents that are provided to customers as a part of the EPM 8.9
documentation?
The following ETL-related documents are supplied along with the EPM 8.9
release:
Getting Started - EPM 8.9 Install-Implementation Task Order and Checklist.PDF – Describes the steps
to be performed for install verification. (Refer to the appendix of this documentation)
EPM 8.9 ETL Implementation Steps.doc – A red paper document that helps a customer during EPM
8.9 ETL implementation. (Refer to Resolution 618278)
DSX Files Import Description.xls – An Excel worksheet listing the dsx files that are generally
available and the ones a customer should receive from the EPM CD or the Bundle resolution, as
part of the licenses purchased.
List of Environment Parameters.xls – An Excel worksheet giving the complete list of environment
variables and sample values. To be used along with the configuration document.
Parameter and Source data Files Information.xls – An Excel sheet listing the parameter files and
source files used by the EPM 8.9 application and the jobs that use them. Also to be used
along with the configuration document.
What are the various DataStage Technical Documentations that are available?
The DataStage online technical documentation can be accessed from Start > Programs > Ascential
DataStage > Online Manuals > DataStage Documentation (after installing the DataStage client). The
documents of immediate interest are the following:
Apart from these, the DataStage client components also provide contextual help and
indexed help, which can be accessed from the toolbar.
The "DataStage Install and Upgrade Guide," available under Start, Programs, Ascential DataStage,
Online Manuals, DataStage Documentation, contains detailed information on the prerequisites as well as
the pre-installation steps.
For Windows, read section 2-1 on pre-install steps and section 2-4 on hardware and software
requirements.
For Unix, read section 3-3 on the Unix Install Checklist, section 3-6 on Pre-Install Checks and Setup, and
section 3-12 on hardware and software requirements.
The DataStage Install and Upgrade Guide gives the following minimum requirements for Server install on
Windows:
o Sufficient storage space for any data to be held in DataStage tables or files.
The minimum requirements for a DataStage server install on Unix are:
o Sufficient storage space for any data that is to be held in DataStage tables or files.
o Additional space to allow for temporary data storage while a DataStage job is running.
Note that the EPM 8.9 ETL design makes heavy use of hash files, which are stored in the project
directory on the server. Sizing should therefore take hash files into consideration, as well as future
requirements: as the amount of data increases, so do the hash file sizes. The server directory must also
hold the flat files and any XML file inputs that the ETL process requires.
As a rough rule, every staging table and every dimension table has a corresponding hash file, so the
total size of the hash files is a function of the size of the data stored in the staging and dimension
tables. Remember, however, that only the relevant columns of a table are loaded into its hash file. To
size the space requirement for hash files, take a few sample hash files and compare them with their
underlying tables: do a rough comparison of the structure of each table with the number of its columns
that are actually loaded into the hash file. It is also very important to keep a sufficient buffer for future
incremental data, since the hash files grow in size as the data grows over time.
Another way to do this is with an unsupported tool provided on the Ascential DataStage CD:
HFC.exe, short for Hash File Calculator, available in the folder Utilities\Unsupported.
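For a first rough sizing pass, the back-of-envelope rule above can be sketched as follows. This is an illustrative estimate only; the overhead and growth factors are assumptions, not DataStage-documented values, and the column widths shown are hypothetical. Prefer HFC.exe or a comparison against real sample hash files for final numbers.

```python
# Rough sizing sketch for hash file disk space (hypothetical figures).
# Assumption: a hash file holds only the lookup-relevant columns of its
# base table, so its size is roughly row count x loaded-column width,
# padded for file overhead and future incremental growth.

def estimate_hash_file_bytes(row_count, loaded_column_widths,
                             overhead_factor=1.3, growth_buffer=1.5):
    """Estimate disk space for one hash file, with headroom for growth."""
    row_bytes = sum(loaded_column_widths)   # bytes per loaded row
    raw = row_count * row_bytes             # raw data volume
    return int(raw * overhead_factor * growth_buffer)

# Example: a dimension with 200,000 rows where only 4 columns
# (10 + 30 + 8 + 12 bytes) are loaded into the hash file.
size = estimate_hash_file_bytes(200_000, [10, 30, 8, 12])
print(f"Estimated space: {size / 1_048_576:.1f} MB")   # → Estimated space: 22.3 MB
```

Repeating this per staging and dimension table and summing the results gives a starting point for project directory sizing.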
Read the “DataStage Install and Upgrade Guide” which is available under Start, Programs, Ascential
DataStage, Online Manuals, DataStage Documentation.
For Windows read the section 2 on “Installing on Windows Systems”. For Unix read the section 3 on
“Installing on Unix Systems”.
Also See EPM8.9 Installation Instructions.pdf, chapter 2 “Configuring Ascential DataStage for PeopleSoft
Enterprise Performance Management Applications or Data Marts”.
To verify installation of DataStage Server, check that the files are installed in the specified directory during
installation.
In Windows, go to the Control Panel and verify that the DataStage icon is present. Click the icon to start
the DataStage Control Panel, and verify that the services are running.
In Unix, execute the command “ps –ef | grep dsrpcd”. This should return a dsrpcd process running in the
background.
For more details, please consult the “DataStage Install and Upgrade Guide.”
The DataStage server has to be installed with NLS set to ON. During the DataStage server install you will
be prompted to choose a language. If you choose English (the default), the next screen asks whether
you want to install NLS; NLS is installed automatically if you select any other language. A form appears
with the message "Do you wish to install National Language Support (NLS) for DataStage server"; the
check box "Install NLS for DataStage server" must be checked. Also read the relevant Unix and Windows
sections in the "DataStage Install and Upgrade Guide" for further instructions and screen shots.
After installation, select the relevant project in the DataStage Administrator and press the NLS command
button to select NLS options. These options are available only if the DataStage server was installed with
NLS on.
Next the correct NLS map has to be selected as the project default. This value will be used in all the jobs to map
the Unicode data.
In case the required map name is not available, press the Install button to see all the maps and load the one needed.
In individual jobs, the NLS tab in the DRS stage shows the map as the Project Default. This value can be
overridden at the job level if required, by changing it in the DRS stage.
What are the patches that have to be applied for DataStage server and client? How do I get the list of
patches to be applied?
Please refer to the PeopleSoft resolution number 610322 posted in the ICE portal.
Are there any other relevant patches that I have to apply other than DataStage patches?
The only known database patch at this point is for users of DB2-OS390 version 8. These users need to apply patch
fp8 from IBM. The Ascential case # - 422291.
2. CONFIGURATION
What are the different methods of maintaining projects if the customer has E / E1 sourced jobs or more
than one warehouse?
The EPM 8.9 ETL design supports spreading the jobs across multiple projects or keeping them in a single project.
However, the following are some options for creating projects based on customer needs.
Option 3: One project for both source systems (E and E1)
If the customer has both E and E1, they can have one project containing all the E and E1 related jobs.
What are the different configuration settings that the customer has to take care of after creating the projects?
Projects are to be created from the DataStage administrator. Read Task 2.3 in the “EPM89InstallationGuide.PDF/
Chapter 2: Configuring Ascential DataStage for PeopleSoft Enterprise Performance Management” for detailed
instructions on creating projects. Detailed online technical information for adding/deleting projects can also be found
in section 1-5 in “DataStage Administrator Guide”.
2.2 Packaging
What are the various EPM 8.9 dsx files and how will they be packaged in the CD / future bundles.
There are 51 dsx files delivered in the EPM 8.9 release for the Enterprise and Enterprise One product lines. Under
each product line you will see dsx files for the four warehouses and one for Common, which contains the dsx files
that are common across warehouses. For more information refer to "DSX Files Import Description.xls".
What are the various EPM 8.9 bundle files and how will they be packaged in the bundle1/bundle2/ICE
Resolution?
Refer to the Bundle posting related information from Customer Connection.
What are the various E/E1 source application release versions that have been used for EPM 8.9 release?
For Enterprise
FSCM Source Release – FSCM 88
CRM Source Release - CRM 89
HCM Source Release - HCM 89
2.3 Import/Export
What are the steps to be followed for project import? How do I verify successful import?
Refer to the section 'Task 2-5 Import .dsx Files' in the 'EPM89InstallationGuide.PDF/ Chapter 2: Configuring
Ascential DataStage for PeopleSoft Enterprise Performance Management' document.
What are the DataStage categories (folders) and sub-categories that I will see after project import?
Refer to the topic 'Understanding the Project Structure' in this document.
What needs to be done at a customer site with respect to copying the DSParams file the very first time vs
from one project (eg: Test) to another project (eg: Development)?
The methodology described below provides a workaround for moving or sharing the global parameters without
having to re-type them in the Administrator. The workaround consists of replacing and/or editing this file to add the
parameters. Be sure to back up the original file before any other activity occurs.
New Project
For a new project that has not yet defined any global parameters, simply copy the existing DSParams file to the new
project. Be sure to rename the existing DSParams file first! Ensure all DataStage clients (Designers, etc.) are logged
off, and stop and start the DataStage services to activate the change. Then go into the DataStage Administrator; all
the parameters should be visible in the user-defined section of the environment screen. At this point, edit the default
values for each parameter. This information is also available in the section 'Task 2-4 Configure Environment
Parameters' in the 'EPM89InstallationGuide.PDF/ Chapter 2: Configuring Ascential DataStage for PeopleSoft
Enterprise Performance Management' document.
Existing Project
For an existing project that has already defined some global parameters, the DSParams file must be
edited to add the desired parameters. The process below describes how to do this.
The user-defined parameters are in two sections of the DSParams file: one section defines the parameters
[EnvVarDefns] and the second section contains the default values [EnvVarValues]. The approach is to
copy the correct lines from the original source project file into the target project DSParams file.
1. Rename/Backup the DSParams file in the target project directory and backup the source project
DSParams file as well.
2. Edit the source project DSParams. Go to the end of the [EnvVarDefns] section and find the user-defined
parameters, which are at the end of the section. Select the lines up to, but not including, the line which
contains "[PROJECT]".
3. Copy these lines and paste them into the target project DSParams file before the "[PROJECT]" section.
4. Go back and edit the source project DSParams file. Find the section starting with the line
"[EnvVarValues]". This is usually at the end of the file. Copy all of the lines of that section, or select all the
lines for the specific parameters to be moved.
5. Locate the end of the DSParams file in the target project directory. See if it has a section called
"[EnvVarValues]". If it does not, add it. If it does, then go to the next step.
6. Paste the lines into the target project DSParams file at the end of the "[EnvVarValues]" section and
before the end of file.
Ensure all DS clients (Designers, etc.) are logged off, and stop and start the DataStage services to activate
it. Then go into the DataStage Administrator and all the parameters should be visible in the user-defined
section of the environment screen. At this point, change the default values for each parameter.
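The manual steps above can be sketched as a small script. This is a hedged illustration, not a supported tool: it treats DSParams as plain text with bracketed section headers and copies the sections verbatim, whereas step 2 tells you to select only the user-defined lines. The sample strings are hypothetical; back up both files first, as the procedure warns.

```python
# Sketch of steps 1-6: lift the [EnvVarDefns] and [EnvVarValues] entries
# out of a source project's DSParams text and splice them into the target's.
# Internal line formats are copied verbatim and never parsed.

def section_lines(text, header):
    """Collect the verbatim lines of one [Section], stopping at the next [Header]."""
    active, out = False, []
    for line in text.splitlines():
        if line.startswith('['):
            active = (line.strip() == header)
            continue
        if active:
            out.append(line)
    return out

def merge_dsparams(source_text, target_text):
    """Return target DSParams text with the source's sections spliced in."""
    merged = target_text.rstrip('\n')
    for header in ('[EnvVarDefns]', '[EnvVarValues]'):
        block = '\n'.join(section_lines(source_text, header))
        if header not in merged:        # step 5: add the section if missing
            merged += f'\n{header}'
        # paste copied lines just after the matching section header,
        # so they land inside that section of the target file
        idx = merged.index(header) + len(header)
        merged = merged[:idx] + '\n' + block + merged[idx:]
    return merged

src = "[EnvVarDefns]\nNEW_PARAM_DEF\n[EnvVarValues]\nNEW_PARAM_VALUE\n"
tgt = "[EnvVarDefns]\nEXISTING_DEF\n[PROJECT]\nproject settings\n"
merged = merge_dsparams(src, tgt)
print("NEW_PARAM_DEF" in merged, merged.count("[EnvVarValues]"))   # → True 1
```

After replacing the target file, log all clients off and restart the DataStage services, exactly as in the manual procedure.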
What are Array Size and Transaction Size? What should be remembered regarding these
parameters?
Array Size
Array size specifies the number of rows written to a database at a time; in other words, the number of
rows transferred in one call between DataStage and the database before they are written. Generally,
increasing the array size improves performance, since client memory is used to cache records, resulting
in fewer server hits. The maximum array size is 32767, but increasing it too far strains client memory, so
an optimal value has to be arrived at considering the client memory available.
For flexibility this has been parameterized as an environment variable. Separate environment variables
are available for each source as well as for OWS, OWE, and MDW.
Transaction Size
Transaction size is the number of rows written to the database before the data is committed. A
transaction size of zero ensures that no commit happens until all the records are written; the default
value is 0. If the transaction size is set to 100, the database table commits are performed every 100
rows. Again, an optimal value has to be arrived at considering the load on the database server and the
number of records.
For flexibility this too has been parameterized as an environment variable. Separate environment
variables are available for each source as well as for OWS, OWE, and MDW.
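The interaction of the two parameters can be shown with a small model. This is purely illustrative arithmetic, not DataStage code: it counts how many array transfers and how many commits a load of a given size would produce.

```python
# Illustrative model of array size vs. transaction size: rows travel to the
# database in arrays of `array_size`, and a commit is issued every
# `transaction_size` rows (0 = a single commit after all rows are written).

def write_rows(rows, array_size, transaction_size):
    """Return (number of array transfers, number of commits) for a load."""
    transfers = (len(rows) + array_size - 1) // array_size   # ceil division
    if transaction_size == 0:
        commits = 1                       # commit once, at the very end
    else:
        commits = (len(rows) + transaction_size - 1) // transaction_size
    return transfers, commits

# 10,000 rows, arrays of 500, commit every 1,000 rows:
print(write_rows(list(range(10_000)), 500, 1_000))   # → (20, 10)
# Same load with transaction size 0: one commit only.
print(write_rows(list(range(10_000)), 500, 0))       # → (20, 1)
```

The model makes the tuning trade-off visible: larger arrays mean fewer transfers (less network overhead, more client memory), and larger transaction sizes mean fewer commits (less database overhead, more rows at risk on failure).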
The DATA_ORIGIN needs to be toggled to 'E' only when running those sets of jobs whose flow is from
OWE to MDW.
What is the configuration information that needs to be known for different server platforms?
Refer to the Hardware and Software implementation guide (Implement, Optimize + Upgrade; Implementation
Guide; Implementation Documentation and Software; Hardware and Software Requirements; Enterprise;
Enterprise Performance Management) for EPM 8.9 for more details.
DataStage Server :
o Windows 2000
o Sun Solaris
o IBM AIX
o CompacTru64
o HP-UX
o Linux
DataStage Client:
o Windows 2000
o Windows XP
Databases and data sources:
o Oracle
o MSSQL Server
o XML files
For MSSQL Server, the options to support functional index (MSCONCATCOL), in the database should be
enabled.
SET OPTIONS
o SET ANSI_NULLS
o QUOTED_IDENTIFIER
o CONCAT_NULL_YIELDS_NULL
o ANSI_WARNINGS
o ANSI_PADDING
© Copyright PeopleSoft Corporation 2004. All rights reserved. 85
PeopleSoft Enterprise Performance Management 8.9 Implementing ETL: Frequently Asked Questions
How do I identify the jobs that are needed, based on the license code purchased or a customer's
implementation plans? What is the best way to do this?
EPM 8.9 ETL - Job Count; EPM 8.9 ETL Lineage Report – Common; EPM 8.9 ETL Lineage Report – CRM;
EPM 8.9 ETL Lineage Report – FMS; EPM 8.9 ETL Lineage Report – HCM; EPM 8.9 ETL Lineage Report
- SCM
Refer to the section - Steps for EPM 8.9 ETL Implementation in this document.
3. JOB DESIGN
o OWE to MDW
What are the different kinds of staging loads that are supported?
Incremental/Destructive
How are we handling incremental load, if the source fields have Null values for the Datetime stamp?
If the Datetime column is a nullable field on the source database, then the source filter will include a
condition to bring that data as well, along with the incremental data.
Is there any special handling required between the first run and the subsequent runs?
No, as long as the original delivered jobs are used without any customization.
How are deletes in the source tables handled? Is there any special handling required for these?
No.
What are the different kinds of data sources handled in EPM 8.9?
Are there any control tables (similar to the past releases) used in the EPM 8.9 ETL design?
Does EPM 8.9 handle surrogate keys? If so, how are Surrogate IDs generated and handled?
DataStage manages the universe file SDKSequences, which holds the surrogate key sequences for a
particular key. The DataStage built-in routine KeyMgtNextValueConcurent() is used for generating a
surrogate key. This routine receives one input parameter: a name identifying the sequence.
The surrogate key can be unique per single dimension target ("D") or unique across the whole warehouse
("W"). To provide this capability, there is an environment parameter called SID_UNIQUENESS, whose
value is provided at run time. If the value is "D", the routine is called with the name of the dimension
job for which a surrogate key has to be assigned, and it returns the next available number. If not, the
routine is called with "EPM" as the sequence identifier.
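The scheme just described can be sketched as follows. This is a minimal stand-in, not the DataStage routine: a dict takes the place of the SDKSequences file, and the function mimics the behavior attributed to the key-management routine. All names and values are illustrative.

```python
# Minimal sketch of SID_UNIQUENESS behavior. The sequence name is either
# the dimension job name (SID_UNIQUENESS = "D") or the shared identifier
# "EPM" (SID_UNIQUENESS = "W"); each named sequence counts independently.

_sequences = {}   # sequence name -> last issued value (stands in for SDKSequences)

def next_sid(dimension_job, sid_uniqueness):
    """Return the next surrogate key for a dimension load."""
    name = dimension_job if sid_uniqueness == "D" else "EPM"
    _sequences[name] = _sequences.get(name, 0) + 1
    return _sequences[name]

# Per-dimension uniqueness: each dimension gets its own number line.
print(next_sid("J_Dim_Customer", "D"))   # → 1
print(next_sid("J_Dim_Product", "D"))    # → 1
# Warehouse-wide uniqueness: every dimension draws from one sequence.
print(next_sid("J_Dim_Customer", "W"))   # → 1
print(next_sid("J_Dim_Product", "W"))    # → 2
```

The example shows why the setting matters: with "D", two dimensions can both issue SID 1; with "W", SIDs never collide across the warehouse.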
In EPM 8.9, the dimension D_EMPL_JOB from HCM warehouse is designed as Type 2 (Slowly Changing
Dimension) and all the other dimension loads are Type 1. However, the lookup operation supports Type 2.
i.e. Whenever there is lookup on other dimension, it will have effective dated logic.
Refer the document “Steps to convert Type 1 job to Type 2 design Based on EFFDT” in the red paper
document “EPM 8.9 Implementing Slowly Changing Dimensions”.
What is the strategy used for handling non-available columns and for value defaults. How are these
handled and what are the specific default values used for different data types.
If there is no mapping for a target column, then a default is passed based on logical area and data type:
OWS: Char '-', Num 0, Date Null
OWE: Char '', Num 0, Date Null
MDW: Char '-', Num 0, Date Null
Is there any support for ETL rollbacks? If so, how is this handled?
Rollback is possible through the transaction size parameter. If the transaction size is set to zero and the
job aborts in the middle, the job rolls back the transactions, since no commit is issued until all the
records are written. If the transaction size is anything other than zero and the job fails in the middle, the
job will have committed the rows processed up to the point of the error.
The Aggregator stage is avoided in job design as much as possible, since the database can perform
aggregation functions more efficiently than DataStage.
Whenever aggregation needs to be performed on the source data, it is achieved within the DRS source
stage itself. For generated SQL queries, aggregate functions are specified against columns in the
corresponding derivation fields, and the GROUP BY clause is given in the 'Other clauses' text area.
Wherever the User Defined SQL option is selected, the query is specified appropriately with the
aggregate function.
In specific instances where an aggregation function has to be performed on data that is transformed
rather than read directly from the database, and where the number of records is going to be large, a
temporary table is created; the data is temporarily written there and then read back so that the
aggregation functions can be performed.
Models are delivered with the indexes out of the box. Before loading the target tables, drop the indexes and
then build them after load. This would improve the ETL performance.
Lookups are usually handled using the Hashed File stage; the exception is relational joins, which are
handled through the DRS stage.
3.2 Categories
What are categories? How are categories used in ETL? What is the significance?
In DataStage, all jobs and other components are organized into categories (folders). The categories are
based on warehouse, source system, and data mart name.
What are the different kinds of job parameterizations done to increase run time flexibility?
Parameterization lets users supply run-time values without resorting to changing jobs. This can be
done in many ways, and this section touches on a few of them.
Run-time information such as the database type, the database connection parameters, and parameter file
directories is set in environment variables, which are used in individual jobs. Refer to 'List of
Environment Parameters.xls' for those used in EPM 8.9 ETL.
Parameter files are used for jobs that read user-supplied input variable values, or lists of values,
which may change from run to run. The variables and their respective values are given in parameter files.
Refer to 'Parameter and Source data Files Information.xls' for those used in EPM 8.9 ETL.
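The idea of a parameter file can be sketched as below. The NAME=value format with '#' comments is an assumption made for illustration only; consult 'Parameter and Source data Files Information.xls' for the real layouts used by the EPM jobs.

```python
# Sketch of reading a run-time parameter file of name=value pairs.
# The format shown here (one NAME=value per line, '#' comments) is a
# hypothetical example, not the documented EPM layout.

def load_parameter_file(text):
    """Parse NAME=value lines into a dict, skipping blanks and comments."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        name, _, value = line.partition('=')
        params[name.strip()] = value.strip()
    return params

sample = """
# run-time values that may change from run to run
AS_OF_DATE=2004-12-31
BUSINESS_UNIT=US001
"""
print(load_parameter_file(sample))
# → {'AS_OF_DATE': '2004-12-31', 'BUSINESS_UNIT': 'US001'}
```

Keeping such values in a file rather than in the job itself is what lets them change from run to run without modifying or recompiling the job.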
How is the process flow being handled by EPM? How are job interdependencies being handled?
A DataStage Sequence job allows you to run several jobs in a controlled manner, and can specify
different courses of action depending on whether a job in the Sequence succeeds or fails.
Every ETL load has a Sequence job. Also, each business process within a data mart is provided with a
master sequence to trigger all the jobs belonging to it.
Triggers are used to control the flow of a Sequence job in triggering various other Sequence/Server child
jobs. The most commonly used ones are of type ‘Failed – Conditional’, ‘Warning – Conditional’, ‘OK –
Conditional’, and ‘Unconditional’.
The Hash Files are used for enhancing the performance of the ETL job. Hash Files are typically used for
lookups in an ETL job.
In EPM 8.9, there are jobs to initialize hash files. These jobs create the hash files before the jobs
requiring them for lookups are executed. These hash files are also updated once the target table is loaded
in the ETL job. This method enables multiple jobs to utilize the same hash file, as long as the required
structure is the same.
Another method is to load the hash file within the same job using them as a lookup. This method requires the hash
files to be reloaded every time the job executes.
Note. Hash files are required to perform some lookups in maps later in the process. Failure to create the
necessary hash files can cause maps to fail. The hash files themselves MUST exist for the maps that use them,
even if you do not populate the sources for the hash file. For example, if you do not use a source for a hash on
your source system and choose to bypass the load for that hash file, the fact that the hash does not exist will cause
maps that use the hash file as a lookup to fail.
For more information please refer to “Chapter 6 : Hashed File Stages” of the “DataStage Server: Server
Job Developer’s Guide”.
What are the things that are to be kept in mind for managing Hash Files
By default, hash files are project-specific and cannot be shared across projects. The validity of a hash
file depends on the base table from which it is generated. The base table should only be updated by
the ETL jobs provided in EPM 8.9; otherwise, the hash file and the table will be out of sync, which may
result in faulty data when the hash file is used in an ETL job.
There are several Hash File utilities provided in EPM 8.9. These are located in the Utilities\Hash_Utils
category. Please refer to 'CHAPTER 5 - DATASTAGE PROJECT UTILITIES: USAGE AND
IMPLEMENTATION' of this document for more information.
What is the Language Swap utility? When is this to be used and for what purpose?
If the source and EPM database base languages are different, we need to ensure that the EPM base table has
descriptions in the EPM base language and that the related language tables have descriptions in the EPM installed
foreign languages. The Language Swap ETL utility exists for this purpose.
What is the Currency Conversion utility? When is this to be used and how?
This utility is used to populate the reporting amount and reporting currency code columns in fact tables. This
population is considered an ETL post-process. Before running the ETL, the setup for MDW Currency Conversion
definitions should be completed in the PIA pages. The Navigation path is EPM Foundation, EPM Setup, Common
Definitions. Refer to ‘CHAPTER 4 – IMPLEMENTING EPM 8.9 ETL UTILITIES/ Section 4.3 Running MDW
Currency Conversion Utility’ of this document for steps to run these jobs.
Tree processing jobs - Details relating to these kinds of jobs and the corresponding Tree metadata
If a warehouse or data mart has tree or recursive hierarchy data, then the ETL utility that processes this data
needs to be triggered. The utility flattens and denormalizes the set of hierarchies. These hierarchy definitions need
to be defined in PIA pages before running the ETL jobs. The navigation path is EPM Foundation, EPM Setup,
Common Definition, Hierarchy Group Definition. Refer to 'CHAPTER 4 – IMPLEMENTING EPM 8.9 ETL UTILITIES/
Section 4.1 Running Tree and Recursive Hierarchy Jobs' of this document for steps to run these jobs.
Jobs relating to Related Language Tables and how these jobs are pre-packaged?
In EPM 8.9, every table that requires language translation has a corresponding related language table, and ETL
jobs to populate these language tables are created. These jobs are packaged along with the base table jobs.
Running them is optional, since not all customers require multi-language functionality.
Where can one find the details for all the EPM 8.9 Routines?
Refer to 'EPM 89_routines.xls'.
This mapper tool utilizes data from several other tables, such as PF_SRC_SETCNTRL, PF_SRC_BU_NAMES, and
PF_SRC_BU_ROLES, which are loaded by ETL jobs. These jobs are packaged in the Common_E.dsx and
Common_E1.dsx files for E and E1, respectively.
The output tables of the Dimension Mapper tool are PF_SETID_LOOKUP, PF_BUS_UNIT_MAP,
BUS_UNIT_TBL_PF, BUS_UNIT_TBL_FS, SETID_TBL, SET_SNTRL_TBL, SET_CNTRL_GROUP, and
SET_CNTRL_REC. These tables are used as lookups in ETL job design.
What is the Error validation mechanism built into EPM 8.9 ETL design?
In EPM 8.9, error validation is done only at the MDW and OWE layers. Unlike previous releases of EPM,
the OWS layer does not have any error validation.
The error validation mechanism developed in EPM 8.9 works as follows: when a lookup fails, the row that
failed is moved to the error table. Every OWS table that acts as a source to dimensions, facts, D00s, and
F00s has a corresponding error table, named PS_E_<TABLE_NAME> or PS_ES_<TABLE_NAME>. The error
table helps developers understand which lookup failed, and for what data, so that they can go back to
the Source/OWS layer, correct the data, and rerun the job so that the data is populated into the
corresponding target table.
As a pre-packaged delivery, the jobs have error validation functionality for each and every lookup. Based
on business needs, customers can choose to have error validation on or off; environment variables are
provided that let customers set error validation ON or OFF as per their requirements.
How is Error Validation done using Error tables? What kind of errors are handled and what are not
handled?
$ERR_VALIDATE is a parameter that takes the value "Y" or "N". By default the value is "Y", which means
error validation is ON and all error records go to the error table if a lookup fails. If $ERR_VALIDATE is set
to "N", the data goes to the target table even though the lookup fails.
$ERR_THRESHOLD is another parameter, which gives customers the ability to set a limit on error records.
If $ERR_THRESHOLD is set to 50, the job is forcefully aborted if the number of rows with errors crosses
50. This helps customers check the source/OWS data and make sure the lookup condition succeeds, so
that the data goes into the target table. In the job, constraints check whether $ERR_VALIDATE is Y or N.
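The behavior of the two parameters can be sketched in plain code. This is an illustration only: plain Python stands in for the transformer's stage variables and constraints, and the row and lookup shapes are hypothetical.

```python
# Sketch of the $ERR_VALIDATE / $ERR_THRESHOLD routing described above.

class ThresholdExceeded(Exception):
    """Stands in for the forced job abort when errors pass the threshold."""

def route_rows(rows, lookup, err_validate="Y", err_threshold=50):
    """Split rows into (target, error) lists based on a lookup, honoring the flags."""
    target, errors = [], []
    for row in rows:
        lookup_failed = row["key"] not in lookup
        if lookup_failed and err_validate == "Y":
            errors.append(row)            # failed row goes to the PS_E_* table
            if len(errors) > err_threshold:
                raise ThresholdExceeded("too many error rows; aborting job")
        else:
            target.append(row)            # ERR_VALIDATE=N lets failures through
    return target, errors

rows = [{"key": "A"}, {"key": "X"}, {"key": "B"}]
lookup = {"A", "B"}
target, errors = route_rows(rows, lookup)
print(len(target), len(errors))   # → 2 1
```

With err_validate="N" the same input would send all three rows to the target, matching the description of $ERR_VALIDATE above.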
The screenshot below helps you identify the error table that is used. The table name is
PS_E_BU_TBL_IN; if the lookup on PF_BUS_UNIT_MAP fails, the data goes into this error table.
In the next screenshot you can see the stage variables used for error validation. $ERR_VALIDATE and
$ERR_THRESHOLD are used in the stage variables ErrorFound and Abort Job.
Below that, you can see the constraint checked for error validation. If the constraint is true, error
validation is ON and the error data is populated in the error table; if the constraint is false, the data is
populated in the target table.
The final screenshot has the constraint ErrorFound = N, meaning that data that has passed error
validation is populated in the corresponding target table.
To support Unicode databases, the DataStage Server needs to be installed with NLS enabled. Also, the
proper character set should be selected based on the requirements by the user, in the DataStage
Administrator.
The D00 jobs from OWS to OWE load data from staging tables to _D00 tables in the OWE database.
The F00 jobs from OWS to OWE load data from staging tables to _F00 tables in the OWE database.
The Global Dimension jobs load data from staging tables to dimension tables. Global Dimensions are
dimension tables that are shared across warehouses.
The Local Dimension jobs load data from staging tables to dimension tables. Local Dimensions are
dimension tables that are shared across different marts in a warehouse.
The dimension and fact jobs load data from staging tables to dimension or fact tables. A dimension
contains a key (SID) value and attributes used for slicing and dicing measures located in a fact table.
The dimension and fact jobs can also load data from OWE tables (D00 or F00) to dimension or fact
tables.
A dimension job typically loads data from an OWS or OWE table. The basic flow of a dimension
job starts with a DRS source stage and a transformer to validate values for SID lookups, if required, on
another dimension table. The job also contains lookups for attribute values such as description fields.
See the snapshot below:
The job then performs a lookup on a hash file of the dimension table to check whether equivalent business
keys are already present. If so, the existing SID is used; otherwise a new SID is generated. The data
is then loaded into the target DRS stage, and the hash file used for incremental loading is updated.
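The lookup-or-generate step of this flow can be sketched as follows. A dict stands in for the hash file, the business key fields are hypothetical, and new SIDs are numbered sequentially for illustration; the real jobs obtain SIDs through the key-management routine described earlier.

```python
# Sketch of the dimension-load step: look up the business key in a
# hash-file-like dict, reuse the existing SID if found, otherwise assign a
# new one and update the hash for the next incremental run.

def load_dimension(rows, sid_hash):
    """Assign a SID per business key, reusing keys already seen."""
    out = []
    for row in rows:
        bkey = (row["setid"], row["cust_id"])     # illustrative business key
        if bkey not in sid_hash:
            sid_hash[bkey] = len(sid_hash) + 1    # new row: generate a SID
        out.append({**row, "sid": sid_hash[bkey]})
    return out

sid_hash = {}
first = load_dimension([{"setid": "SHARE", "cust_id": "C1"}], sid_hash)
again = load_dimension([{"setid": "SHARE", "cust_id": "C1"}], sid_hash)
print(first[0]["sid"], again[0]["sid"])   # → 1 1
```

The second run reuses SID 1 because the hash file persists between runs; this is what makes incremental dimension loads stable.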
A D00 job typically will load data from an OWS table or on some cases from an OWE table. The basic flow
of a D00 job starts with a DRS source stage then a transformer to perform lookup validations against OWS,
OWE, or MDW tables base on requirements. If the target D00 table is SETID based then a PF_SETID
lookup is also performed. The lookup validation performed in a D00 job does not necessarily extract a
value from the lookup table. Instead it is just performed to verify the existence of the value in the lookup
table. If a record fails any lookup validation, it will be inserted in the error table. To skip the lookup
validations in D00 jobs, set the parameter $ERR_VALIDATE to ‘N’.
The job also performs a lookup against a hash file of the D00 table to check whether an equivalent business key is already present. If so, the original created date/time is extracted from this lookup. The data is then loaded into the target DRS stage, and the hash file used for incremental loading is updated. See snapshot below:
A fact job typically loads data from an OWS or OWE table. The basic flow of a fact job starts with a DRS source stage and a transformer that validates values for SID lookups on dimension tables. If SETID indirection is required, the job performs a lookup on a set control table for the SETID value to be used in the SID lookups against a dimension table. See snapshot below:
The fact job then performs SID lookups against several dimension tables. These lookups provide the values for the SID columns in the fact table. The composite primary key of the fact table is typically composed of these SID columns. See snapshot below:
After performing the lookups for the SID values, the data is loaded into the fact table. If error validation is enabled for the job, records with values that fail the validated lookups are not inserted into the fact table; instead they are inserted into error tables. See snapshot below:
Other data transformations, such as aggregation of values or string manipulation, can also be performed in transformers. A hash file used for incremental loading is updated in cases where the fact table is loaded incrementally. If a fact table is loaded destructively, the server job truncates the target table prior to loading data.
An F00 job typically loads data from an OWS table or, in some cases, an OWE table. The basic flow of an F00 job starts with a DRS source stage and a transformer that looks up business unit values from the PS_PF_BUS_UNIT_MAP hash file. The job then performs a lookup on a set control table for the SETID value to be used in lookup validations against D00 or other F00 tables. See snapshot below:
After performing the lookups for the SETID values, the job performs the lookup validations. The lookup validation performed in an F00 job does not necessarily extract a value from the lookup table; it is performed simply to verify that the value exists in the lookup table. If a record fails any lookup validation, it is inserted into the error table. To skip the lookup validations in F00 jobs, set the parameter $ERR_VALIDATE to ‘N’. See snapshot below:
Other data transformations, such as aggregation of values or string manipulation, can also be performed in transformers. After the lookup validations, the data is loaded into the target table. A hash file used for incremental loading is updated in cases where the F00 table is loaded incrementally. If an F00 table is loaded destructively, the server job truncates the target table prior to loading data. See snapshot below:
4. JOB EXECUTION
All server jobs and sequencers must be compiled before running. Uncompiled jobs will not run and have to be compiled using the Designer or Manager prior to running.
Can we use PeopleSoft’s PeopleTools Process Scheduler for triggering jobs?
No, you cannot use PeopleSoft’s PeopleTools Process Scheduler. For scheduling, you can use the DataStage Scheduler or any other third-party scheduler.
When should the Master Run utility in the utilities folder be used? How does it work?
The Master_Run_Utility can be used to run the set of jobs listed in a flat file on the DataStage server. The utility reads the list of jobs in the file and triggers them serially, honoring the dependency logic specified in the input flat file. Master_Run_Utility can be used to run any jobs in a dependent or independent mode. For more information, refer to 'CHAPTER 5 - DATASTAGE PROJECT UTILITIES: USAGE AND IMPLEMENTATION' of this document.
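Conceptually, the utility's serial, dependency-aware execution can be sketched like this. The flat-file format and the runner callback are assumptions for illustration; the utility's actual interface is documented in Chapter 5.

```python
# Conceptual sketch of the Master_Run_Utility: read a flat file listing
# jobs in dependency order and trigger them one at a time, stopping the
# chain when a job fails (so dependents of a failed job do not run).

def run_job_list(lines, run_job):
    """lines: job names, one per line, in dependency order.
    run_job(name) -> True when the job reaches Finished status.
    Returns the list of job names actually triggered."""
    triggered = []
    for name in (line.strip() for line in lines):
        if not name:
            continue                 # skip blank lines in the flat file
        triggered.append(name)
        if not run_job(name):        # serial mode: stop on failure
            break
    return triggered
```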
OWE – Make sure the OWS data is populated first, as the source for OWE jobs is OWS. Run the OWE job and make sure the job is in ‘Finished’ status. Compare the source and target databases to make sure the data populated in the target tables matches your expected result set.
MDW – Make sure the OWS data is populated first, as the source for MDW jobs is OWS. In some cases the source can be OWE; in such cases the OWE jobs must be executed before running the MDW jobs. Run the MDW job and make sure the job is in ‘Finished’ status. Compare the source and target databases and check that the data populated in the target database matches your expected results.
The above screen shot shows the job log (detailed view) for an aborted job. The job log consists of multiple events, each with a color coding.
Reading the log shows that a hash file, HASH_PS_ADDRESSES_LOOKUP, which the job uses as a lookup, is missing.
You can also open the job design in the DataStage Designer and view the aborted job with performance statistics turned on. Successful links appear in green while failed links appear in red, which helps identify the part of the job design that failed. The performance statistics also give the number of rows transmitted through each link, information that can be useful for debugging a job.
The DataStage Designer also provides advanced debugging features that let developers set breakpoints, watch variable values, and step through the job. For more information on the Debugger, read Chapter 16 in the “Server Job Developer's Guide”, available in the DataStage online documentation.
o A common cause of jobs aborting is that dependent hash files do not exist. This happens when a hash file that a job performs a lookup on has not been pre-created; the hash file load jobs have to be run first. A sample error message showing this error will look like:
o Run the job for fewer rows by using the limit rows option. This restricts the job run time, and the log will also be smaller and more manageable.
While running any job in DataStage, I am getting the error message “Could not load drsoci.so” when pointing to an Oracle database, and the DataStage server is on UNIX. How do I resolve this issue?
First, make sure that all the necessary DRS patches are applied on the DataStage server. Then verify the dsenv file, which is a centralized file for storing environment variables on the DataStage server. It resides in $DSHOME, where $DSHOME identifies the DataStage main directory (for example, /u1/dsadm/Ascential/DataStage/DSEngine).
The dsenv file is a series of Bourne shell commands that are referenced during DataStage server startup and can be referenced by interactive users or other programs or scripts. For a connection using a non-wire protocol driver, you generally need to specify the following in the dsenv file:
Certain plug-ins require shared libraries to be loaded, and you need to include the library path in an environment variable. The name of the library path environment variable is platform dependent:
# Oracle 8i
ORACLE_HOME=/space/oracle8i
ORAHOME=/space/oracle8i
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
ORACLE_SID=WSMK5
ORASID=WSMK5
export ORACLE_HOME ORAHOME ORACLE_SID ORASID
Refer to the section “Post-Install Checks and Configuration” in the “Ascential Install and Upgrade Guide”.
While running any job in DataStage, I am getting the error message “Could not load drsdb2.so” when pointing to a DB2 database, and the DataStage server is on UNIX. How do I resolve this issue?
First, make sure that all the necessary DRS patches are applied on the DataStage server. Then verify the dsenv file, which is a centralized file for storing environment variables on the DataStage server. It resides in $DSHOME, where $DSHOME identifies the DataStage main directory (for example, /u1/dsadm/Ascential/DataStage/DSEngine).
The dsenv file is a series of Bourne shell commands that are referenced during DataStage server startup and can be referenced by interactive users or other programs or scripts. For a connection using a non-wire protocol driver, you generally need to specify the following in the dsenv file:
Certain plug-ins require shared libraries to be loaded, and you need to include the library path in an environment variable. The name of the library path environment variable is platform dependent:
#DB2 6.1
DB2DIR=/opt/IBMDB2/V6.1;export DB2DIR
DB2INSTANCE=DB2inst1; export DB2INSTANCE
INSTHOME=/export/home/DB2inst1;export INSTHOME
PATH=$INSTHOME/sqllib/bin:$INSTHOME/sqllib/adm:$INSTHOME/sqllib/misc:$PATH
export PATH
LD_LIBRARY_PATH=$INSTHOME/sqllib/lib:$LD_LIBRARY_PATH;export LD_LIBRARY_PATH
THREADS_FLAG=native;export THREADS_FLAG
4.5 Recovery
How do I back up the hash files, since they contain control data relating to the last generated SID?
Refer to 'CHAPTER 5 - DATASTAGE PROJECT UTILITIES: USAGE AND IMPLEMENTATION\ Section 5.6
Backup Surrogate Key Hash File Utility ' of this document.
How do I handle cases when there is a need to switch to a new project? Is there a utility to manage project corruption?
There might be a need to switch to a new project when the warehouse tables have already been loaded for some time. In such cases, some project-specific control data needs to be restored onto the new project. For this purpose, it is a good idea to back up this control data at regular intervals, after each significant chunk of ETL loading completes.
There are utilities that take care of this backup/recovery process. Refer to 'CHAPTER 5 - DATASTAGE PROJECT UTILITIES: USAGE AND IMPLEMENTATION’ and then the following sections of this document for the procedure to run these utilities.
Section 5.5 Backup DateTime HashFiles Utility
Section 5.6 Backup Surrogate Key Hash File Utility
Section 5.7 Recovery DateTime HashFiles Utility
Section 5.8 Recovery Surrogate Key Hash File Utility
What will happen if a job aborts after half of one million rows are written to the tables?
If the transaction size is set to zero and the job aborts in the middle, the job rolls back the transaction, since the entire load is treated as a single transaction. If the transaction size is anything other than zero and the job fails in the middle, the rows committed in completed transactions before the error remain in the table.
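The difference between the two transaction-size settings can be illustrated with a small simulation. This models the behavior described above; the real commit and rollback mechanics live in the database stage, and the function name is illustrative.

```python
# Simulation of transaction-size behavior: with size 0 the whole load is
# one transaction that rolls back on failure; with size N, rows are
# committed in batches of N, so batches committed before the error stay.

def load_with_txn_size(rows, txn_size, fail_at):
    """Simulate a load that aborts at row index fail_at (None = no abort).
    Returns the number of rows durably committed."""
    committed = 0
    pending = 0
    for i, _ in enumerate(rows):
        if i == fail_at:
            return committed        # abort: pending rows are rolled back
        pending += 1
        if txn_size and pending == txn_size:
            committed += pending    # batch commit every txn_size rows
            pending = 0
    committed += pending            # final commit on a clean finish
    return committed
```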
How do I report an issue with the job log for a job that ended with warnings or errors?
When reporting an issue, send the job log of the last run. To do this, go to the Director and view the log for the job in the detailed view mode. Select Project, Print, as shown in the screen shot below.
You will then see the screen below. Select “Full details” for Print what, “All entries” for Range, and “Print to file”. Press OK; you will then be prompted to enter the name of a flat file to which the log will be saved. Send the log along with the issue description and other pertinent information.
5.1 Customizations
1. In the source DRS stage, the selection criteria contain a where clause specifying the last update date time. This has to be removed.
2. The target DRS stage update action has to be changed to “Truncate table then insert rows”.
3. Delete the link and the container stage used for updating the last update date time into the hash file. In the particular screen shot shown below, this entails deleting the link StoreMaxDateTime and the container stage StoreMaxLastUpdDttm.
4. In the transformer, a lookup is done on the target table using a hash file. This identifies new rows and is done to retain the CREATED_EW_DTTM of existing rows. Since we are changing the job to a destructive load, we can remove the target lookup and the stage variable “newRow”. The target field CREATED_EW_DTTM can be changed to the value DSJobStartTimestamp, which is a DS macro (the same as for the field LASTUPD_EW_DTTM). This entails deleting the stages DRS_LKP_PS_BI_HDR, IPC_in1, and HASH_PS_BI_HDR.
5. Delete the job parameter LastModifiedDateTime. This will complete the changes in the server job.
6. In the sequence job that calls the above server job, remove the job activity stage that calls the routine GetLastUpdDateTime.
A server job that uses CRC logic cannot easily be changed to perform a destructive load without changing at least 80% of the design. Hence it is suggested that these jobs not be changed; instead, build a new destructive load job from scratch. Go through the numbered steps in the section above to see the general changes required for converting a job to a destructive load.
How do I run ETL in destructive mode after it has been running incremental loads?
The methodology given below minimizes the changes made to the original job. However, a new job for destructive loads can also be created from the original job (refer to the FAQ question - How to change an incremental job to a destructive job).
CRC job:
The update action needs to be changed to ‘Truncate table then insert rows’ in the job. If you do not want the job to be modified, truncate the target table from the back end. Then rerun the hash file load job followed by the sequence job to load the table. You may want to change the update action of the job back if it will be used for incremental loads later.
DateTime job:
The update action needs to be changed to ‘Truncate table then insert rows’ in the job. If you do not want the job to be modified, truncate the target table from the back end. Then run the Batch::Reset_MaxDateTime job in the Utilities\Hash_Utils_Reset category, followed by the sequence job to load the table. You may want to change the update action of the job back if it will be used for incremental loads later.
If the customer wants to specify the storage location for hash files, what needs to be done?
The current EPM 8.9 design does not specify the physical storage location for hash files. By default, the DataStage server stores these hash files in the DataStage project directory. The customer might choose to move the hash file storage to a user-specified directory for lack of space in the project directory or for more control.
1. Type the directory path in the hash file stage, as shown in the screenshot below.
2. For added flexibility, this hash file directory path can be parameterized as an environment variable. See the screen shot below.
3. If the directory path has been parameterized as an environment variable, then the environment variable should be declared in the DataStage Administrator along with a project default value. The environment variable should also be defined in the properties of the DataStage server job.
2. Declare a job-level parameter corresponding to HASH_FILE_DIR in the job that is going to use it. This can be done in the Job Properties, Parameters tab, by clicking “Add Environment Variables”.
3. Choose the new environment variable; it will now be part of the job parameters. The default value can be changed to $PROJDEF in the job parameters to signify that the value shall be taken from the project default unless overridden.
4. Now this job parameter can be used in the job like any other job parameter. Sequencer jobs, if any, that call the changed server job have to be updated to reflect the change.
5. The environment variable has to be added to the sequencer parameters. This parameter value has to be added to the job activity stage where the value is passed to the called job.
How does a customer handle a change requiring a new attribute in a dimension table? Discuss the impact on the model as well as on ETL.
If there is a change in the data model, with respect to the addition of a new attribute to the EPM database, the customer has to update the corresponding dimension job to incorporate the new attribute; otherwise the job will fail. If there is no source for the new attribute, the dimension job can assign it a default value by using a delivered routine.
How does a customer handle a change requiring a new dimension in a fact table? Discuss the impact on the model as well as on ETL.
If a new dimension key is added to a fact table in the database, this is a change to the data model. Since the database has an additional dimension key for the fact table, this will result in changes to the ETL job. If this is a new dimension, a new job has to be developed for it. The fact job needs to be updated accordingly with the correct dimension key and corresponding SID population in the fact table.
How does a customer handle a change requiring a new measure in a fact table? Discuss the impact on the model as well as on ETL.
If a new measure is added to a fact table in the database, this is a change to the data model. Since the database has an additional measure for the fact table, this will result in changes to the ETL job. The fact job needs to be updated accordingly, with the new measure assigned a value that either comes directly from the source or is derived from whatever logic is required to populate the measure per your requirements.
How does a customer handle a change requiring a new dimension table? Discuss the impact on the model
as well as on ETL.
A new ETL job has to be developed for this new dimension table as per the requirements.
How does a customer handle a change requiring a new fact table? Discuss the impact on the model as
well as on ETL.
A new ETL job has to be developed for this new fact table as per the requirements.
How do I enhance the parallel processing capabilities of server jobs, and where will this not be useful for an out-of-the-box vanilla implementation?
InterProcess Stage
The IPC stage is used to implement pipeline parallelism. It can be used to explicitly de-link two passive stages so that their activities run in separate processes. As a good practice, an IPC stage can be inserted before a database write stage.
1. In the job properties of J_Load_BATCH_INFO, ensure that the “Allow Multiple Instance” check box is turned on.
2. In the job properties of J_UPDATEHASH, ensure that the “Allow Multiple Instance” check box is turned on.
3. The routine “GetNextBatchNumber” needs to be modified: an extra argument, “Invocation_ID”, needs to be added, and a line of code inside the routine needs to be changed.
4. The final step is to modify all the sequencer jobs and compile them.
Refer to the detailed steps below for instructions to implement the changes mentioned in steps 3 and 4.
An extra argument, “Invocation_ID”, needs to be added. See the screenshot below.
The delivered sequence jobs implement two types of logic, namely incremental and CRC.
Since the arguments of “GetNextBatchNumber” were changed in step 3, all the sequence jobs become invalid. A value must be passed for the newly added argument “Invocation_ID”; it requires a unique value, so the macro DSJobName is passed.
In the routine activity GetNextBatchNumber, pass DSJobName as input to the Invocation_ID argument (as explained above).
Open the properties of the job activity J_UPDATEHASH. There will be a new text prompt for Invocation Id with the value “J_UPDATEHASH”. This needs to be changed to DSJobName. (The screenshot below shows the change.)
Once all these changes have been made, all the sequence jobs need to be compiled. They can now be run in parallel. Master sequencers can be created as required to run these sequencers in parallel.
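One plausible model of why Invocation_ID must be unique can be sketched as follows: each invocation gets its own batch-number sequence, so parallel instances of the multiple-instance job do not collide. The counter structure and function name are assumptions for illustration, not the delivered BASIC routine.

```python
# Illustration of per-invocation batch numbering: a shared structure keyed
# by invocation ID, guarded by a lock so concurrent callers do not hand
# out conflicting numbers.

import threading

_lock = threading.Lock()
_batch_numbers = {}

def get_next_batch_number(invocation_id):
    """Return the next batch number for this invocation (starting at 1)."""
    with _lock:
        _batch_numbers[invocation_id] = _batch_numbers.get(invocation_id, 0) + 1
        return _batch_numbers[invocation_id]
```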
5.3 Miscellaneous
Where are the Survey-related jobs located in the dsx files? After import, where can I find the Survey jobs in the DataStage project?
Survey jobs are present in the OWE and MDW modules of the HCM warehouse. In the OWE module, there are some D00 jobs that read flat file data as the source and load the R00 tables. These jobs can be located in OWE_E.dsx, and after the import, they will be present under the \OWE_E\HCM\D00\Base\Load_Tables\Server category.
In the MDW section, the R00 tables are used as the source to load the Survey dimension tables. These jobs can be located in the WHR_WORKFORCE_PROFILE_MART_E.dsx file, and after the import, they will be present under the \HCM_E\WORKFORCE_PROFILE_MART\Survey\OWE_To_MDW\Dimensions\Base\Load_Tables\Server category.
In EPM 8.9, the dimension D_EMPL_JOB from the HCM warehouse is designed as Type 2 (slowly changing dimension), while all the other dimension loads are Type 1. However, the lookup operation supports Type 2; that is, whenever there is a lookup on another dimension, it uses effective-dated logic.
These are the jobs that read the source flat files or the temp tables and load the R00/D00 tables. These jobs can be located in the OWE_E.dsx file and will be present under the \OWE_E\HCM\D00\Base\Load_Tables\Server category.
These are the jobs that read the data loaded in the above step and load the F00 tables. These jobs can be located in the OWE_E.dsx file and will be present under the \OWE_E\HCM\F00\Base\Load_Tables\Server category.
These jobs load the competency details of the employee from the OWS tables. These jobs can be located in the OWE_E.dsx file and will be present under the \OWE_E\HCM\F00\Base\Load_Tables\Server category.
If the DataStage server is on Windows, the Survey jobs have to be modified by clicking the Sequential File stage and changing the Line Termination to “DOS Style (CR LF)”. Then save, compile, and run the job.
In certain situations, importing a .dsx file can result in an error similar to the following:
“Error on CREATE.FILE command (Creating file "RT_CONFIG5651" as type 30.
mkdbfile: cannot create file RT_CONFIG5651
Unable to create operating system file "RT_CONFIG5651".
*** Processing cannot continue ***)”
There is a limit on the number of links (hard links) that can be created in a directory on some file systems in a Unix/Linux environment. Ascential has confirmed that if the number of links in a directory exceeds about 32000 files, the user will get an error when trying to create new files in that directory. This is a Unix limitation.
This might cause problems during the import of DSX files if the DataStage server software is installed on a machine running a Unix/Linux/AIX operating system with certain file systems (ext2/ext3). This is because a single job created or imported in DataStage with one hash file lookup can create up to six subfolders in your DataStage server project directory.
The best practice to avoid hitting this threshold is to create a separate project for each of the warehouses purchased.
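A quick pre-import check for this limit can be sketched in a few lines. The 32000 threshold comes from the description above; the function name is illustrative, and whether subdirectory count is the binding constraint depends on the file system.

```python
# Sketch of a pre-import check for the per-directory hard-link limit
# described above: count the subdirectories of the project directory and
# report the remaining headroom against an assumed ~32000 ceiling.

import os

def subdir_headroom(project_dir, limit=32000):
    """Return how many more subdirectories the directory can hold."""
    used = sum(1 for entry in os.scandir(project_dir) if entry.is_dir())
    return limit - used
```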
You need to grant the CONNECT privilege to the DataStage user who will run the job that creates/drops Universe tables at run time.
In order to be able to grant and revoke the CONNECT privilege, you must have the DBauth privilege yourself.
The following details how to grant user privileges so that users can run the DataStage jobs that create/drop Universe tables:
1. Log in to the DataStage Administrator, select the project, go to the command prompt, and type in the command shown in the following screenshot.
3. Log in to the DataStage Administrator as user “Administrator” (or as any user who has DBauth privilege = ‘YES’ in the UV_USERS table).
4. As shown below, grant CONNECT privileges to the user who will run the DataStage job that creates/drops Universe tables at run time.
The lineage of each ETL job is documented, and the information is available in the following documents. It includes details such as the sequence job name, server job name, source and target table names, lookup table names, the hash files used, the table owner, and so on.
Expand the OWE_E category in the left pane, and then CRM (Warehouse_Name), D00, Base, Load_Tables, Sequence. Run all the sequence jobs present under this folder in the order listed below. This will load the CRM OWE D00 tables in the EPM database.
SEQ_J_D00_PS_CASE
SEQ_J_D00_PS_INTERACT
SEQ_J_D00_PS_MKT_WAVE
SEQ_J_D00_PS_TERRITORY
SEQ_J_D00_PS_SALES_REP
SEQ_J_D00_PS_LEAD
SEQ_J_D00_PS_OPPORTUNITY
SEQ_J_D00_PS_ORDER
SEQ_J_D00_PS_SUPP_ORG
SEQ_J_D00_PS_TIME_FRAME
Expand the OWE_E category in the left pane, and then FMS (Warehouse_Name), D00, Base, Load_Tables, Sequence. Run all the sequence jobs present under this folder in the order listed below. This will load the FMS OWE D00 tables in the EPM database.
SEQ_J_BASE_PS_ACCT_TYPE_TBL
SEQ_J_BASE_PS_BOOK_CODE_TBL
SEQ_J_BASE_PS_ADJUST_TYPE_TBL
SEQ_J_BASE_PS_BUD_REF_TBL
SEQ_J_BASE_PS_CHARTFIELD1_TBL
SEQ_J_BASE_PS_CHARTFIELD2_TBL
SEQ_J_BASE_PS_CHARTFIELD3_TBL
SEQ_J_BASE_PS_CLASS_CF_TBL
SEQ_J_BASE_PS_FUND_TBL
SEQ_J_BASE_PS_OPER_UNIT
SEQ_J_BASE_PS_PRODUCT_TBL
SEQ_J_BASE_PS_PROGRAM_TBL
SEQ_J_BASE_PS_BU_LED_GRP_TBL
SEQ_J_BASE_PS_BD_ASSET
SEQ_J_BASE_PS_BD_ASSET_DEPR
SEQ_J_BASE_PS_BD_ASSET_ITEMS
SEQ_J_BASE_PS_BD_SCENARIO_TBL
SEQ_J_BASE_PS_BP_ASSET_ITEMS
SEQ_J_BASE_PS_BP_LEDG_DTL_VW
SEQ_J_BASE_PS_BP_LEDGER_BDEXP
SEQ_J_BASE_PS_BU_BOOK_TBL
SEQ_J_BASE_PS_BU_LED_COMB_TBL
SEQ_J_BASE_PS_BUL_CNTL_BUD
SEQ_J_BASE_PS_CAP
SEQ_J_BASE_PS_CAP_DET
SEQ_J_BASE_PS_CAP_TYPE_TBL
SEQ_J_BASE_PS_COMBO_CF_DEFN
SEQ_J_BASE_PS_COMBO_CF_TBL
SEQ_J_BASE_PS_COMBO_CF2_REQ
SEQ_J_BASE_PS_COMBO_CF2_TBL
SEQ_J_BASE_PS_COMBO_DATA_BDP
SEQ_J_BASE_PS_COMBO_DATA_BUDG
SEQ_J_BASE_PS_COMBO_EDIT_TMPL
SEQ_J_BASE_PS_COMBO_FLDS_TBL
SEQ_J_BASE_PS_COMBO_GROUP_TBL
SEQ_J_BASE_PS_COMBO_GRRUL_TBL
SEQ_J_BASE_PS_COMBO_RULE_TBL
SEQ_J_BASE_PS_COMBO_SEL_01
SEQ_J_BASE_PS_COMBO_SEL_02
SEQ_J_BASE_PS_COMBO_SEL_03
SEQ_J_BASE_PS_COMBO_SEL_04
SEQ_J_BASE_PS_COMBO_SEL_05
SEQ_J_BASE_PS_COMBO_SEL_06
SEQ_J_BASE_PS_COMBO_SEL_07
SEQ_J_BASE_PS_COMBO_SEL_08
SEQ_J_BASE_PS_COMBO_SEL_09
SEQ_J_BASE_PS_COMBO_SEL_10
SEQ_J_BASE_PS_COMBO_SEL_11
SEQ_J_BASE_PS_COMBO_SEL_12
SEQ_J_BASE_PS_COMBO_SEL_13
SEQ_J_BASE_PS_COMBO_SEL_14
SEQ_J_BASE_PS_COMBO_SEL_15
SEQ_J_BASE_PS_COMBO_SEL_16
SEQ_J_BASE_PS_COMBO_SEL_17
SEQ_J_BASE_PS_COMBO_SEL_18
SEQ_J_BASE_PS_COMBO_SEL_19
SEQ_J_BASE_PS_COMBO_SEL_20
SEQ_J_BASE_PS_COMBO_SEL_21
SEQ_J_BASE_PS_COMBO_SEL_22
SEQ_J_BASE_PS_COMBO_SEL_23
SEQ_J_BASE_PS_COMBO_SEL_24
SEQ_J_BASE_PS_COMBO_SEL_25
SEQ_J_BASE_PS_COMBO_SEL_26
SEQ_J_BASE_PS_COMBO_SEL_27
SEQ_J_BASE_PS_COMBO_SEL_28
SEQ_J_BASE_PS_COMBO_SEL_29
SEQ_J_BASE_PS_COMBO_SEL_30
SEQ_J_BASE_PS_COMBO_VAL2_TBL
SEQ_J_BASE_PS_FS_ACTIVITY_TBL
SEQ_J_BASE_PS_KK_ACT_TYPE_SET
SEQ_J_BASE_PS_KK_BD_DFLT_ACCT
SEQ_J_BASE_PS_KK_BD_SETID
SEQ_J_BASE_PS_KK_BUDGET_TYPE
SEQ_J_BASE_PS_KK_CF_VALUE
SEQ_J_BASE_PS_KK_EX_ACCT_TYPE
SEQ_J_BASE_PS_KK_EX_ACCT_VAL
SEQ_J_BASE_PS_KK_FILTER
SEQ_J_BASE_PS_KK_KEY_CF
SEQ_J_BASE_PS_KK_SUBTYPE
SEQ_J_BASE_PS_LED_DEFN_TBL
SEQ_J_BASE_PS_LED_GRP_LED_TBL
SEQ_J_BASE_PS_LED_GRP_TBL
SEQ_J_BASE_PS_LED_TMPLT_TBL
SEQ_J_BASE_PS_PC_INT_TMPL
SEQ_J_BASE_PS_PC_INT_TMPL_GL
SEQ_J_BASE_PS_PF_LED_DEFN_TBL
SEQ_J_BASE_PS_PF_LED_GRP_LED
SEQ_J_BASE_PS_PF_LED_GRP_TBL
SEQ_J_BASE_PS_PROJ_ANTYPE_TBL
SEQ_J_BASE_PS_PROJ_CATG_TBL
SEQ_J_BASE_PS_PROJ_N_REQ_TBL
SEQ_J_BASE_PS_STAT_TBL
SEQ_J_BASE_PS_DIMENSION1
SEQ_J_BASE_PS_DIMENSION2
SEQ_J_BASE_PS_DIMENSION3
SEQ_J_BASE_PS_LINE_OF_BUS
SEQ_J_BASE_PS_PAYMENT
SEQ_J_BASE_PS_PC_RT_EMPL
SEQ_J_BASE_PS_PC_RT_JOBC
SEQ_J_BASE_PS_PC_RT_ROLE
SEQ_J_BASE_PS_PRJ_ANGRPMP
Expand the OWE_E category in the left pane, and then Global_D00, Base, Load_Tables, Sequence. Run all the sequence jobs present under this folder in the order listed below. This will load the OWE Global D00 tables in the EPM database.
SEQ_J_BASE_PS_ALTACCT_TBL
SEQ_J_BASE_PS_DEPARTMENT_TBL
SEQ_J_BASE_PS_GL_ACCOUNT_TBL
SEQ_J_D00_PS_PROJECT
SEQ_J_BASE_PS_PROJ_ACTIVITY
SEQ_J_BASE_PS_PROJ_RES_TYPE
SEQ_J_BASE_PS_PROJ_SUBCAT_TBL
SEQ_J_D00_PS_BUDGET_PER
SEQ_J_D00_PS_CHANNEL
SEQ_J_D00_PS_LOCATION
SEQ_J_D00_PS_COMPANY
SEQ_J_D00_PS_CUST_MSTR
SEQ_J_D00_PS_JOB_FAMILY_TBL
SEQ_J_D00_PS_SAL_PLAN
SEQ_J_D00_PS_SAL_GRADE
SEQ_J_D00_PS_SAL_STEP
SEQ_J_D00_PS_UNION_TBL
SEQ_J_D00_PS_JOBCD_TRNPR
SEQ_J_D00_PS_JOBCODE
SEQ_J_D00_PS_PERSONAL
SEQ_J_D00_PS_PRODUCT
SEQ_J_D00_PS_PRODUCT_1
Expand the OWE_E category in the left pane, and then SCM (Warehouse_Name), D00, Base, Load_Tables, Sequence. Run all the sequence jobs present under this folder in the order listed below. This will load the SCM OWE D00 tables in the EPM database.
SEQ_J_D00_PS_MASTER_ITEM
SEQ_J_D00_PS_CUSTOMER
SEQ_J_D00_PS_CUST_ADDR
SEQ_J_BASE_PS_INV_ITEM_UOM
SEQ_J_BASE_PS_INV_VALUE_WRK
SEQ_J_BASE_PS_UOM_TYPE_INV
SEQ_J_D00_PS_VENDOR
SEQ_J_D00_PS_VENDOR_LOC
SEQ_J_D00_PS_PO
SEQ_J_D00_PS_RECV_HDR
SEQ_J_D00_PS_RMA_HDR
SEQ_J_D00_PS_VOUCHE
Expand the OWE_E category in the left pane, and then HCM (Warehouse_Name), D00, Base, Load_Tables, Sequence. Run all the sequence jobs present under this folder in the order listed below. This will load the HCM OWE D00 tables in the EPM database.
SEQ_J_D00_PS_ABS_CLASS
SEQ_J_D00_PS_ABS_TYPE
SEQ_J_D00_PS_ABS_CODE
SEQ_J_D00_PS_ABSV_PLAN
SEQ_J_D00_PS_ACCOMP
SEQ_J_D00_PS_ACCT_CD
SEQ_J_D00_PS_ACTN_REASON_TBL
SEQ_J_D00_PS_EARNINGS_TBL
SEQ_J_D00_PS_COMP_RATECD_TBL
SEQ_J_D00_PS_POSITION
SEQ_J_D00_PS_APPLICANT
SEQ_J_D00_PS_DEDUCTION_TBL
SEQ_J_D00_PS_PAYGROUP_TBL
SEQ_J_D00_PS_WA_RTGMDL
SEQ_J_D00_PS_WA_SURVEY
SEQ_J_D00_PS_ADDL_PAY_DATA
SEQ_J_D00_PS_APP_DIS
SEQ_J_D00_PS_BEN_DEFN_OPTN
SEQ_J_D00_PS_BEN_PROG_PARTIC
SEQ_J_D00_PS_BENEF_PLAN_TBL
SEQ_J_D00_PS_BENEFIT_PARTIC
SEQ_J_D00_PS_BP_JOB
SEQ_J_D00_PS_BP_POSITION
SEQ_J_D00_PS_BUDGET_BU
SEQ_J_D00_PS_CM_CLUSTER
SEQ_J_D00_PS_CM_TYPE
SEQ_J_D00_PS_COMPANY_TBL
SEQ_J_D00_PS_COMPETENCY
SEQ_J_D00_PS_COURSE
SEQ_J_D00_PS_COURSE_COMP
SEQ_J_D00_PS_CRSE_SESSN
SEQ_J_D00_PS_DEDUCTION_FREQ
SEQ_J_D00_PS_DEPT_BUDERN
SEQ_J_D00_PS_DEPT_BUDGET
SEQ_J_D00_PS_DEPT_BUDGET_DED
SEQ_J_D00_PS_DISABILITY
SEQ_J_D00_PS_EMPLOYEE_SURVEY_BUS_UNIT
SEQ_J_D00_PS_EMPLOYEE_SURVEY_DEPTID
SEQ_J_D00_PS_EMPLOYEE_SURVEY_JOBCODE
SEQ_J_D00_PS_EMPLOYEE_SURVEY_LOCATION
SEQ_J_D00_PS_EMPLOYEES
SEQ_J_D00_PS_JOB_EARNDST
SEQ_J_D00_PS_JOB_TASK
SEQ_J_D00_PS_MAJOR_TBL
SEQ_J_D00_PS_NAMES
SEQ_J_D00_PS_PAY_CHECK
SEQ_J_D00_PS_PAY_EARNINGS
SEQ_J_D00_PS_PAY_OTH_EARNS
SEQ_J_D00_PS_PERS_DETAIL
SEQ_J_D00_PS_POSITION_DATA
SEQ_J_D00_PS_RTRMNT_PLAN
SEQ_J_D00_PS_RTRMNT_PLAN_TBL
SEQ_J_D00_PS_SAL_MTRXTBL
SEQ_J_D00_PS_SAL_RATECD
SEQ_J_D00_PS_SCHOOL
SEQ_J_D00_PS_SCHOOL_TYPE
SEQ_J_D00_PS_STOCK_PLAN
SEQ_J_D00_PS_TOTAL_DED
SEQ_J_D00_PS_VC_PLAN
SEQ_J_D00_PS_WA_AUTHOR
SEQ_J_D00_PS_WA_BENCHMRK_GEO_MAP_DFN
SEQ_J_D00_PS_WA_BENCHMRK_IND_MAP_DFN
SEQ_J_D00_PS_WA_BENCHMRK_TMP
SEQ_J_D00_PS_WA_BENCHMRK_UNIT_MAP_DFN
SEQ_J_D00_PS_WA_COMP_MAP_DFN
SEQ_J_D00_PS_WA_COVG_CD
SEQ_J_D00_PS_WA_DEP_BEN
SEQ_J_D00_PS_WA_EESURVEY_TMP
SEQ_J_D00_PS_WA_FIN_MAP_DFN
SEQ_J_D00_PS_WA_FIN_MAP_DFN_BENCHMARK
SEQ_J_D00_PS_WA_GEO_MAP_DFN
SEQ_J_D00_PS_WA_IND_MAP_DFN
SEQ_J_D00_PS_WA_INJ_ILL
SEQ_J_D00_PS_WA_JOB_MAP_DFN
SEQ_J_D00_PS_WA_REVWRTG
SEQ_J_D00_PS_WA_ROLE
SEQ_J_D00_PS_WA_ROLEACMP
SEQ_J_D00_PS_WA_STD_LTR
SEQ_J_D00_PS_WA_SUR_COMP
SEQ_J_D00_PS_WA_SUR_FIN
SEQ_J_D00_PS_WA_SUR_GEO
SEQ_J_D00_PS_WA_SUR_IND
SEQ_J_D00_PS_WA_SUR_JOB
SEQ_J_D00_PS_WA_SUR_UNIT
SEQ_J_D00_PS_WA_SURVALUE_TMP
SEQ_J_D00_PS_WA_TASK
SEQ_J_D00_PS_WA_TASKCMPT
SEQ_J_D00_PS_WA_TRN_HST
SEQ_J_D00_PS_WA_TRNSESSN
SEQ_J_D00_PS_WA_UNIT_MAP_DFN
SEQ_J_Stage_PS_BP_BNFT_DIS_EXP
SEQ_J_Stage_PS_BP_EARN_DIS_EXP
SEQ_J_Stage_PS_BP_JOB_EXP
SEQ_J_Stage_PS_BP_POSITION_EXP
SEQ_J_Stage_PS_BP_SAL_DIS_EXP
SEQ_J_Stage_PS_BP_TAX_DIS_EXP
Expand the OWE_E category in the left pane, and then HCM (Warehouse_Name), F00, Base, Load_Tables, Sequence. Run the selected sequence jobs present under this folder in the order listed below. This will load the HCM OWE F00 tables in the EPM database that are required as lookups in the HCM D00 ETL jobs.
SEQ_J_F00_PS_JOB
SEQ_J_F00_PS_WA_EESURVEY
Note. In the OWE_E category, the HCM D00 ETL jobs have a dependency on the F00 ETL jobs. Hence, run the above F00 jobs immediately after completely running the HCM D00 ETL jobs, as mentioned in Appendix C, Section 5.1.
Expand the OWE_E category in the left pane, and then HCM (Warehouse_Name), F00, Base, Load_Hash_Files, Server. Run the selected server jobs present under this folder, listed below. This will load the HCM OWE F00 hash files in the EPM database that are required as lookups in the HCM D00 ETL jobs.
J_HASH_PS_WA_EESURVEY_F00
J_HASH_PS_JOB_F00_MXSQ_VW_LOOKUP
J_HASH_PS_JOB_F00_LOOKUP
J_HASH_PS_JOB_F00_SUBQRY
J_HASH_PS_JOB_F00
Note. In the OWE_E category, the HCM D00 ETL jobs have a dependency on the HCM F00 ETL jobs. Hence, run the above F00 hash file jobs immediately after completely running the HCM F00 ETL jobs, as mentioned in Appendix C, Section 5.2. This will populate the F00 hash files required as lookups in the HCM D00 ETL jobs.
5.4 Loading Sequence for HCM D00 ETL Jobs Dependent on HCM F00 ETL Jobs
Expand the OWE_E category in the left pane, and then HCM (Warehouse_Name), D00, Base, Load_Tables, Sequence. Run the sequence jobs present under this folder, listed below. This will load the HCM OWE D00 tables in the EPM database that were dependent on the HCM F00 ETL jobs.
SEQ_J_D00_PS_EMPLOYEE_SURVEY_BUS_UNIT
SEQ_J_D00_PS_EMPLOYEE_SURVEY_DEPTID
SEQ_J_D00_PS_EMPLOYEE_SURVEY_JOBCODE
SEQ_J_D00_PS_EMPLOYEE_SURVEY_LOCATION
SEQ_J_D00_PS_EMPLOYEES
SEQ_J_D00_PS_JOB_EARNDST
SEQ_J_D00_PS_WA_INJ_ILL
SEQ_J_D00_PS_PERS_DETAIL
Expand the OWE_E category in the left pane, and then HCM (Warehouse_Name), F00, Base, Load_Tables, Sequence. Run all the sequence jobs present under this folder in the order listed below. This will load the HCM OWE F00 tables in the EPM database.
SEQ_J_F00_PS_JOB
SEQ_J_F00_PS_ABS_HIST
SEQ_J_F00_PS_ABSV_ACCR
SEQ_J_F00_PS_BP_COMP
SEQ_J_F00_PS_BP_JOB
SEQ_J_F00_PS_EMPL_REVW
SEQ_J_F00_PS_JOB_INTRADY
SEQ_J_F00_PS_STAFFING
SEQ_J_F00_PS_WA_ACMPLISH
SEQ_J_F00_PS_WA_BEN_HST
SEQ_J_F00_PS_WA_BENCHMRK
SEQ_J_F00_PS_WA_CMPTN_EE
SEQ_J_F00_PS_WA_COMP_HST1
SEQ_J_F00_PS_WA_COMP_HST2
SEQ_J_F00_PS_WA_COMP_HST3
SEQ_J_F00_PS_WA_COMP_HST4
SEQ_J_F00_PS_WA_COMP_HST5
SEQ_J_F00_PS_WA_COMP_HST6
SEQ_J_F00_PS_WA_COMPTNCY1
SEQ_J_F00_PS_WA_COMPTNCY2
SEQ_J_F00_PS_WA_EESURVEY
SEQ_J_F00_PS_WA_RECR_EXP
SEQ_J_F00_PS_WA_RECR_OFR
SEQ_J_F00_PS_WA_SURVALUE
SEQ_J_F00_PS_WA_TRNCST