
Mapplet

When you use a mapplet in a mapping, you use an instance of the mapplet. Like a reusable transformation,
any change made to the mapplet is inherited by all instances of the mapplet.

To use a mapplet in a mapping, you must first configure it for input and output. In addition to the
transformation logic that you configure, a mapplet has the following components:

• Mapplet input
• Mapplet output
• Mapplet ports

Note:

Follow these rules and guidelines when you edit a mapplet that is used by mappings:

• Do not delete a port from the mapplet. The Designer deletes mapplet ports in the mapping when you
delete links to an Input or Output transformation or when you delete ports connected to an Input or
Output transformation.

• Do not change the datatype, precision, or scale of a mapplet port. The datatype, precision, and scale
of a mapplet port are defined by the transformation port to which it is connected in the mapplet.
Therefore, if you edit a mapplet to change the datatype, precision, or scale of a port connected to a port
in an Input or Output transformation, you change the mapplet port.

• Do not change the mapplet type. If you remove all active transformations from an active mapplet, the
mapplet becomes passive. If you add an active transformation to a passive mapplet, the mapplet
becomes active.

Mapplets help simplify mappings in the following ways:

• Include source definitions. Use multiple source definitions and source qualifiers to provide source
data for a mapping.
• Accept data from sources in a mapping. If you want the mapplet to receive data from the
mapping, use an Input transformation to receive source data.
• Include multiple transformations. A mapplet can contain as many transformations as you need.
• Pass data to multiple transformations. You can create a mapplet to feed data to multiple
transformations. Each Output transformation in a mapplet represents one output group in a
mapplet.
• Contain unused ports. You do not have to connect all mapplet input and output ports in a
mapping.

Use the following rules and guidelines when you add transformations to a mapplet:

• If you use a Sequence Generator transformation, you must use a reusable Sequence Generator
transformation.
• If you use a Stored Procedure transformation, you must configure the Stored Procedure Type to be
Normal.

Metadata Extension

PowerCenter allows end users and partners to extend the metadata stored in the repository by
associating information with individual repository objects. For example, when you create a mapping,
you can store your contact information with the mapping. You associate information with repository
metadata using metadata extensions.

PowerCenter Client applications can contain the following types of metadata extensions:

Vendor-defined: Third-party application vendors create vendor-defined metadata extensions. You
can view and change the values of vendor-defined metadata extensions, but you cannot create,
delete, or redefine them.

User-defined: You create user-defined metadata extensions using PowerCenter. You can create,
edit, delete, and view user-defined metadata extensions. You can also change the values of
user-defined extensions.

Note:

All metadata extensions exist within a domain. You see the domains when you create, edit, or view
metadata extensions.

Vendor-defined metadata extensions exist within a particular vendor domain. If you use third-party
applications or other Informatica products, you may see domains such as Ariba or PowerExchange
for Siebel. You cannot edit vendor-defined domains or change the metadata extensions in them.

User-defined metadata extensions exist within the User Defined Metadata Domain. When you create
metadata extensions for repository objects, you add them to this domain.

Both vendor and user-defined metadata extensions can exist for the following repository objects:

1. Source definitions
2. Target definitions
3. Transformations
4. Mappings
5. Mapplets
6. Sessions
7. Tasks
8. Workflows
9. Worklets

Difference between the bulk mode and normal mode load properties of a target:

Bulk load:

PowerCenter loads the data bypassing the database log, which improves session performance.
The disadvantage is that the target database cannot perform rollback/recovery from a failed
session. You also need to disable or remove key constraints on the target before loading in
bulk mode.

Normal mode:

The database log is not bypassed, so the target database can recover from an incomplete
session. Session performance is not as high as with bulk load.

Transaction Control Transformation

Transaction Control Transformation in Mapping

The Transaction Control transformation defines or redefines the transaction boundaries in a mapping.
It creates a new transaction boundary or drops any incoming transaction boundary coming from an
upstream active source or Transaction Control transformation.
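
You control the boundaries with a transaction control expression evaluated for each row, built from
the built-in variables TC_CONTINUE_TRANSACTION, TC_COMMIT_BEFORE, TC_COMMIT_AFTER,
TC_ROLLBACK_BEFORE, and TC_ROLLBACK_AFTER. A minimal sketch, assuming a hypothetical
port NEW_ORDER_FLAG that marks the first row of each order:

    IIF(NEW_ORDER_FLAG = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)

This commits the open transaction before each row that starts a new order and otherwise leaves the
transaction boundary unchanged.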

A Transaction Control transformation can be effective or ineffective for the downstream transformations
and targets in the mapping. It becomes ineffective for downstream transformations or targets if you
place a transformation that drops the incoming transaction boundaries after it. The following
transformations drop transaction boundaries:

• Aggregator transformation with transformation scope set to All Input
• Joiner transformation with transformation scope set to All Input
• Rank transformation with transformation scope set to All Input
• Sorter transformation with transformation scope set to All Input
• Custom transformation with transformation scope set to All Input
• Custom transformation configured to generate transactions
• Transaction Control transformation
• A multiple input group transformation, such as a Custom transformation, connected to multiple
upstream transaction control points

Mapping Guidelines and Validation

Use the following rules and guidelines when you create a mapping with a Transaction Control
transformation:

• If the mapping includes an XML target, and you choose to append or create a new document on
commit, the input groups must receive data from the same transaction control point.
• Transaction Control transformations connected to any target other than relational, XML, or dynamic
MQSeries targets are ineffective for those targets.
• You must connect each target instance to a Transaction Control transformation.
• You can connect multiple targets to a single Transaction Control transformation.
• You can connect only one effective Transaction Control transformation to a target.
• You cannot place a Transaction Control transformation in a pipeline branch that starts with a
Sequence Generator transformation.
• If you use a dynamic Lookup transformation and a Transaction Control transformation in the same
mapping, a rolled-back transaction might result in unsynchronized target data.
• A Transaction Control transformation may be effective for one target and ineffective for another
target. If each target is connected to an effective Transaction Control transformation, the mapping
is valid.
• Either all targets or none of the targets in the mapping should be connected to an effective
Transaction Control transformation.

LOOKUP

Connected vs. unconnected Lookup:

• Placement: a connected Lookup is connected to the pipeline; an unconnected Lookup receives its
input from a :LKP expression in another transformation.
• Cache: a connected Lookup can use a static or dynamic cache; an unconnected Lookup can use
only a static cache.
• Return values: a connected Lookup can return more than one value; an unconnected Lookup
returns only one value.
• Caching: a connected Lookup caches all the lookup ports; an unconnected Lookup caches only the
lookup condition port(s) and the return port.
• Default values: a connected Lookup supports user-defined default values, which it returns when the
lookup condition is not satisfied; an unconnected Lookup does not support user-defined default
values.
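
A minimal sketch of calling an unconnected Lookup from an Expression transformation output port
(the lookup name LKP_GET_DEPT_NAME and the input port DEPT_ID are hypothetical):

    :LKP.LKP_GET_DEPT_NAME(DEPT_ID)

The Lookup evaluates its lookup condition against the passed value and returns the value of its
designated return port.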
Active and Passive transformation

Active: An active transformation performs any of the following operations:

1. Changes the number of rows between the transformation input and output. Example: Filter.
2. Changes the transaction boundary by defining commit or rollback points. Example: Transaction Control.
3. Changes the row type to insert, update, delete, or reject. Example: Update Strategy.

Passive: A passive transformation does not change the number of rows that pass through it, and
changes neither the transaction boundary nor the row type. Example: Expression transformation.

Difference between Router and Filter

• Groups: a Filter has a single input and a single output; a Router has a single input and multiple
output groups.
• Dropped rows: a Filter passes a row if it satisfies the condition and blocks it otherwise; a Router
does not itself block any row. If a row does not satisfy any group condition, the Router sends it to
the default group.
• SQL analogy: a Filter acts like a WHERE clause; a Router acts like a CASE expression.
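
A rough SQL sketch of the analogy (the table and column names are hypothetical):

    -- Filter: rows that fail the condition are dropped, like WHERE
    SELECT * FROM ORDERS WHERE AMOUNT > 100;

    -- Router: every row is tagged with a group, like CASE; rows that
    -- match no condition fall into the default group
    SELECT ORDER_ID,
           CASE WHEN AMOUNT > 100 THEN 'HIGH'
                WHEN AMOUNT > 10  THEN 'MEDIUM'
                ELSE 'DEFAULT'
           END AS OUTPUT_GROUP
    FROM   ORDERS;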

How can we improve the performance of the Aggregator?


By passing sorted input and checking the Sorted Input option in the Aggregator transformation.
Note: if you check Sorted Input in the Aggregator and the input is not sorted, the session fails.
The session also fails if the input is not sorted on the same ports, and in the same order, as the
Aggregator's group by ports.

Types of Lookup:
Cached lookup: static (read-only) cache or dynamic cache
Uncached lookup
Persistent or non-persistent cache

Static cache: the cache is not modified once it is built and remains the same while the session runs.
Dynamic cache: the cache is refreshed during the session run; records are inserted into or updated
in the cache based on the incoming data from the source.
Note: by default, the Informatica lookup cache is static.
Persistent cache: Informatica retains the cache files even after the session run.
Non-persistent cache: Informatica deletes the cache files after the session completes.
Note: the lookup maintains a table structure for the cache.
Note: a dynamic cache is kept synchronized with the target.
When does a Sorter work as an active transformation?
When the Distinct option is selected in its properties, because dropping duplicate rows can change
the number of rows.

When does a Lookup work as an active transformation?

When the Lookup is configured to return all rows that match the lookup condition (the Lookup Policy
on Multiple Match property set to Use All Values), because it can then return more rows than it
receives.

Difference between STOP and ABORT


In Informatica, when we STOP a session, the Integration Service stops reading data from the source
but continues processing and writing the already-read data to the target.
When we ABORT a session, a timeout period of 60 seconds applies. If the Integration Service cannot
finish processing and writing data to the target within the timeout period, it kills the DTM process
and terminates the session.

How can we prevent duplicate records from being loaded into the target?


By checking the Select Distinct option in the Source Qualifier.

What is Tracing level?


It determines the amount of detail that is written to the session log file.
There are four levels: Terse, Normal, Verbose Initialization, and Verbose Data.
Verbose Data is the most detailed level.

List of transformations that support sorted input


The Aggregator, Joiner, and Lookup transformations support sorted input to increase session performance.

How many sessions can we take in a batch?


Any number of sessions. As a best practice, however, keep the number of sessions in a batch small,
which helps at the time of migration.

Name any 4 files that the Informatica server creates when it runs a session
Session log, workflow log, error log, and bad (reject) file

Update Override in the target instance

By default, the target is updated using the primary key. If you want to update the target using some
other column, you can modify the WHERE clause of the default UPDATE query in the target's update
override.
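
A minimal sketch of a target update override (the target T_EMPLOYEES and its ports are hypothetical;
:TU refers to the target's update ports):

    UPDATE T_EMPLOYEES
    SET    EMP_NAME = :TU.EMP_NAME,
           SALARY   = :TU.SALARY
    WHERE  EMP_CODE = :TU.EMP_CODE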
SQL Override: an option available in the Source Qualifier and Lookup transformations, where we can
include joins, filters, GROUP BY, and ORDER BY.
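
For example, a Source Qualifier SQL override might look like the following sketch (the EMP and DEPT
tables and their columns are hypothetical):

    SELECT e.EMP_ID, e.EMP_NAME, d.DEPT_NAME
    FROM   EMP e
    JOIN   DEPT d ON d.DEPT_ID = e.DEPT_ID
    WHERE  e.STATUS = 'ACTIVE'
    ORDER  BY e.EMP_ID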
Can we return two columns using a Lookup?
Yes, if the lookup is connected.
Can we return two columns using an unconnected Lookup?
Yes, by concatenating the two columns, but there will be only one return port.
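
A sketch of the idea, with hypothetical table and port names: build a single return column in the
lookup SQL override, then split it downstream in an Expression transformation:

    -- lookup SQL override: one concatenated return column
    SELECT EMP_ID, FIRST_NAME || '~' || LAST_NAME AS FULL_NAME FROM EMP

    -- Expression transformation: split the returned value
    SUBSTR(FULL_NAME, 1, INSTR(FULL_NAME, '~') - 1)
    SUBSTR(FULL_NAME, INSTR(FULL_NAME, '~') + 1)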
There are 1000 rows that I am passing through an Aggregator transformation, and there is no
group by port in the Aggregator. How many rows will come in the output?

Only one row: by default the Aggregator returns the last row, with any aggregate expressions
evaluated across all 1000 rows.

Difference between Union, Joiner and Lookup

Union: combines rows from two or more sources without requiring a common port (it behaves like
SQL UNION ALL).
Joiner: can join any two heterogeneous data sources, but a common join key is mandatory.
Lookup: can join two sources using a SQL override, and can check whether a particular record
exists or not.

There are 10000 records in a flat file and we want to load record number 500 to record
number 600 into the target. How can we do it?
Use a Sequence Generator to generate a row number for each record, then use a Filter to pass only
row numbers 500 to 600.
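
A sketch of the Filter condition, assuming the Sequence Generator's NEXTVAL port is linked to a
hypothetical ROW_NUM input port on the Filter:

    ROW_NUM >= 500 AND ROW_NUM <= 600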

Why and when should a Sequence Generator be a reusable transformation?

When more than one mapping uses the Sequence Generator to load data from a source into the same
target. Making the Sequence Generator reusable ensures that it generates a unique value for every
record inserted into the target across all those mappings.

Types of SCD-2:
Versioning
Flag
Date
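
A sketch of how the three flavors track history for the same business key (all column names are
hypothetical):

    Versioning: CUST_KEY, CUST_ID, CITY, VERSION        -- 1, 2, 3, ... per change
    Flag:       CUST_KEY, CUST_ID, CITY, CURRENT_FLAG   -- 1 on the current row, 0 on history
    Date:       CUST_KEY, CUST_ID, CITY, EFF_DATE, END_DATE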

Different threads in DTM:


Master thread
Mapping thread
Reader thread
Transformation thread
Writer thread
Pre-session thread
Post-session thread

Types of scheduling options to run a session:


Run only on demand: the session runs manually.
Run once: the Informatica server runs the session once, at the specified date and time.
Run every: the Informatica server runs the session at regular intervals, as configured.
Customized repeat: the Informatica server runs the session at the dates and times specified in the
Repeat dialog box.

How can we store the previous session log and prevent the current session log from overwriting it?
Run the session in timestamp mode (set Save Session Log By to session timestamp).

Mapplet:
A set of reusable transformations that lets you apply the same business logic in multiple mappings.

List of different tasks in Workflow Manager:

Session task
Assignment task
Command task
Control task
Decision task
E-mail task
Event-raise task
Event-wait task
Timer task

(Links connect tasks in a workflow but are not themselves tasks.)
