An Informatica mapping variable represents a value that can change during a session.
The Integration Service saves the value of a mapping variable to the repository at the end
of each successful session run, and uses that saved value the next time we run the
session.
At the beginning of a session, the Integration Service evaluates references to a variable
to determine its start value. We can define the start value as the initial default value,
or we can define the variable name and value in a parameter file. Variable functions
such as SetMaxVariable, SetMinVariable, SetVariable, and SetCountVariable are used in the
mapping to change the value of the variable. At the end of a successful session, the
Integration Service saves the final value of the variable to the repository. The next time
we run the session, the Integration Service evaluates references to the variable against the
saved value. To override the saved value, define the start value of the variable in the
parameter file or assign a value in the pre-session variable assignment in the session
properties.
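As a sketch, a parameter file entry that overrides the saved start value might look like this (the folder, workflow, and session names here are hypothetical; only the $$CDC_DT variable name comes from the example below):

```
[MyFolder.WF:wf_daily_load.ST:s_m_load_orders]
$$CDC_DT=01/01/2024 00:00:00
```

When this file is assigned to the session, the Integration Service uses 01/01/2024 as the start value instead of the value saved in the repository.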
For example, suppose we create a new mapping that uses a Datetime mapping variable, $$CDC_DT,
and we do not configure an initial value for the variable or define it in a parameter
file. The first time we run the session, the Integration Service uses the default value
for the Datetime datatype, i.e. 1/1/1.
The Integration Service holds two different values for a mapping variable during a session
run.
Start value of a mapping variable: The start value is the value of the variable at
the start of the session.
Current value of a mapping variable: The current value is the value of the variable
as the session progresses.
When a session starts, the current value of a variable is the same as the start value.
As the session progresses, the Integration Service calculates the current value
using the variable function (SetMaxVariable, SetMinVariable, SetVariable, or
SetCountVariable) used in the mapping. Use a variable function only once for each
mapping variable in a pipeline; otherwise the results can be inconsistent. The Integration
Service evaluates the current value of a variable as each row passes through the
mapping. The final current value of a variable is saved to the repository at the end
of a successful session. When a session fails to complete, the Integration Service
does not update the value of the variable in the repository. The Integration Service
also writes the value saved to the repository for each mapping variable to the session
log.
Note: If no variable function is used to calculate the current value of
a mapping variable, the start value of the variable is saved to the repository.
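As a sketch of how such a variable is typically advanced (the ORDERS table and LAST_UPDATE_TS port are hypothetical), an Expression transformation can raise $$CDC_DT with SETMAXVARIABLE as each row passes through, while the Source Qualifier filter uses the saved start value to read only changed rows:

```
-- Expression transformation, variable port:
SETMAXVARIABLE($$CDC_DT, LAST_UPDATE_TS)

-- Source Qualifier source filter:
ORDERS.LAST_UPDATE_TS > '$$CDC_DT'
```

On a successful run, the maximum LAST_UPDATE_TS seen is saved to the repository and becomes the start value for the next run.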
The Integration Service checks the initial or start value of the variable in the following
order of precedence:
1. Value in the parameter file
2. Value in the pre-session variable assignment
3. Value saved in the repository
4. Initial value defined in the mapping
5. Datatype default value
Steps in Data Modelling:
Step 1: Find the entities. Entities are things that have attributes, e.g. a customer has attributes
such as name and gender.
Step 2: Find the type of relationship among the entities, such as 1:M, 1:1, or M:M.
Step 3: Prepare the LOGICAL model, i.e. establish the primary key-foreign key relationships. The
foreign key is placed in the table on the 'M' side of the relationship. The logical model is
independent of any DBMS.
Step 4: Normalize the data wherever you see duplicates, up to 3NF. Note that normalization makes
retrieval slower.
Step 5: Create the physical model. If required, de-normalize the data in the case of OLAP models,
and add the validation rules, indexes, and stored procedures. Denormalize up to 2NF in the case of
OLAP models.
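The first three steps can be sketched in SQL, run here through Python's sqlite3 (the CUSTOMER and ORDERS tables and their columns are hypothetical). Note how the foreign key lands in the table on the 'M' side of the 1:M relationship:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Step 1: entities with attributes (a customer has name, gender)
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        gender      TEXT
    )""")

# Steps 2-3: customer-to-orders is 1:M, so the foreign key
# lives in ORDERS, the table on the 'M' side.
conn.execute("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        amount      REAL
    )""")

conn.execute("INSERT INTO customer VALUES (1, 'Alice', 'F')")
conn.execute("INSERT INTO orders VALUES (10, 1, 99.5)")
conn.execute("INSERT INTO orders VALUES (11, 1, 20.0)")

# One customer, many orders: join across the PK-FK relationship
rows = conn.execute("""
    SELECT c.name, COUNT(*)
    FROM customer c
    JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.name""").fetchall()
print(rows)  # [('Alice', 2)]
```

The schema above is already in 3NF for these attributes; Step 5 (physical model) would add indexes, triggers, and DBMS-specific naming on top of it.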
ER Modelling:
Normalized; used in OLTP
No historical data
More tables
Dimensional Modelling (STAR):
Denormalized
Historical data
Fewer tables
2. Physical Modelling
a. Includes tables, columns, keys, data types, validation rules, database triggers, stored
procedures, domains, and access constraints
b. Uses more specific, less generic names for tables and columns, such as
abbreviated column names, limited by the database management system (DBMS)
and any company-defined standards
c. Includes primary keys and indexes for fast data access
What is ER Modelling?
1. Entity Relationship data modeling is used in OLTP systems, which are transaction-oriented
2. Focus of OLTP Design
2.1. Individual Data elements
2.2. Data Relationships
3. Design Goals
3.1. Accurately model business
3.2. Remove redundancy (Normalized)
ER Modelling Shortcomings:
Complex
Unfamiliar to business people
Incomplete history
Slow query performance
What is Dimensional Modelling?
1. A logical data model used to represent the measures and dimensions that pertain to one or
more business subject areas
a. Dimensional model = Star schema
2. Can easily translate into a multi-dimensional database design if required
3. Overcomes ER design shortcomings
4. Uses fact and dimension tables
DM Advantages:
1. Understandable
2. Systematically represent history
3. Reliable Join paths
4. High performance query
5. Enterprise scalability
Definition: A dimension table is one that describes the business entities of an enterprise, such as
time, departments, locations, and products. Dimension tables are sometimes called lookup or
reference tables.
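A minimal star-schema sketch (the table and column names are hypothetical), again via Python's sqlite3: a fact table of sales measures keyed to time and product dimension tables, queried through the reliable join paths that dimensional models are known for:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Dimension tables: descriptive business entities (lookup/reference tables)
conn.execute(
    "CREATE TABLE dim_time (time_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER)")
conn.execute(
    "CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT)")

# Fact table: measures, with a foreign key to each dimension
conn.execute("""
    CREATE TABLE fact_sales (
        time_id    INTEGER REFERENCES dim_time(time_id),
        product_id INTEGER REFERENCES dim_product(product_id),
        amount     REAL
    )""")

conn.execute("INSERT INTO dim_time VALUES (1, 2024, 1)")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget')")
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(1, 1, 100.0), (1, 1, 50.0)])

# Star join: the fact table joined to each of its dimensions
rows = conn.execute("""
    SELECT t.year, p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_time t ON t.time_id = f.time_id
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY t.year, p.name""").fetchall()
print(rows)  # [(2024, 'Widget', 150.0)]
```

Every query follows the same fact-to-dimension join path, which is why star schemas are both understandable and fast for aggregation queries.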