
Mapping : How to find the number of success, rejected and bad records in the same mapping.

Explanation :
In this mapping we will see how to find the number of success, rejected and bad records in one mapping.

The source file is a flat file in .csv format. Click here to download the source file. The table appears as shown below:

EMPNO NAME HIREDATE SEX
100 RAJ 21-APR M
101 JOHN 21-APR-08 M
102 MAON 08-APR M
103 NULL 22-APR-08 M
105 SANTA 22-APR-08 F
104 SMITHA 22-APR-08 F
106 NULL NULL M
 In the above table, a few values are missing and the date format of some records is improper. These must be considered invalid records and loaded into the BAD_RECORDS table (a relational target).
 Apart from records 2, 5 and 6 (EMPNO 101, 105 and 104), all records are invalid because of NULL values, an improper DATE format, or both.
 INVALID & VALID RECORDS ::
 First we separate this data using an Expression transformation, which flags each row with 1 or 0. The condition is as follows:

IIF(NOT IS_DATE(HIREDATE,'DD-MON-YY') OR ISNULL(EMPNO) OR ISNULL(NAME) OR ISNULL(HIREDATE) OR ISNULL(SEX), 1, 0)

 FLAG = 1 is considered invalid data and FLAG = 0 valid data. The rows are routed to the next transformation using a Router transformation with two user-defined groups: one with FLAG = 1 for invalid data and the other with FLAG = 0 for valid data.
 The FLAG = 1 rows are forwarded to an Expression transformation. Here we take one variable port and two output ports: one for the increment (the running record count) and the other to flag the row.
 INVALID RECORDS
 INCREMENT ::

PORT :: EDIT EXPRESSION
COUNT_INVALID (output) :: V_PORT
V_PORT (variable) :: V_PORT + 1
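A note on how this counter works: Informatica evaluates variable ports before output ports on each row, so V_PORT increments first and COUNT_INVALID then picks up the new value; V_PORT also keeps its value across rows, which is what makes it a running count. A minimal sketch of the same two ports:

V_PORT (variable) :: V_PORT + 1
COUNT_INVALID (output) :: V_PORT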
 INVALID DATE ::

PORT :: EDIT EXPRESSION
INVALID_DATE :: IIF(IS_DATE(O_HIREDATE,'DD-MON-YY'), O_HIREDATE, 'INVALID DATE')
 This data will be moved to the BAD_RECORDS table. Look at the table below ::

EMPNO NAME HIREDATE SEX COUNT
100 RAJ INVALID DATE M 1
102 MAON INVALID DATE M 2
103 NULL 22-APR-08 M 3
106 NULL NULL M 4
 VALID RECORDS ::
 This path carries the valid records. But here we do not want the employees who are 'F' (female); our goal is to load the MALE employee info into the SUCCESS_RECORDS target table.
 For this we use a Router transformation and declare the user-defined group as follows:
 IIF(SEX = 'M', TRUE, FALSE)
 The default group captures the rejected records, which are nothing but the FEMALE employees.
 This data is passed to the REUSABLE Expression transformation, where the increment logic is applied to get the count of the success and rejected records passing through it, and is then loaded into the target tables.
 Look at the tables below ::
 SUCCESS_RECORDS ::

EMPNO NAME HIREDATE SEX COUNT
101 JOHN 21-APR-08 M 1
 REJECTED_RECORDS ::

EMPNO NAME HIREDATE SEX COUNT
105 SANTA 22-APR-08 F 1
104 SMITHA 22-APR-08 F 2
 

Mapping : Incremental data load using SQL override logic


Explanation : In this mapping we will see how to implement incremental loading.

We go for incremental loading to speed up the data load and reduce the amount of data actually loaded.
There are different ways to implement these incremental loads.

One of those methods is writing a SQL override that uses a mapping variable.

Explanation of the mapping : First we have to create one mapping variable of type Date.

In the Source Qualifier, write the SQL override as follows:


SELECT SRC_INCRE_LOAD.EMPNO, SRC_INCRE_LOAD.ENAME, SRC_INCRE_LOAD.HIREDATE
FROM SRC_INCRE_LOAD
WHERE SRC_INCRE_LOAD.HIREDATE > '$$V_LOAD_DATE'
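At run time the server expands the mapping variable before it sends the query to the database. As an illustrative sketch only (the exact literal format depends on the variable's datatype and the database settings), if the persisted value from the previous run were 22-APR-08, the database would receive something like:

SELECT SRC_INCRE_LOAD.EMPNO, SRC_INCRE_LOAD.ENAME, SRC_INCRE_LOAD.HIREDATE
FROM SRC_INCRE_LOAD
WHERE SRC_INCRE_LOAD.HIREDATE > '22-APR-08'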

In an Expression transformation, assign SYSDATE to the mapping variable to update it; from the next load onward, only records with a date greater than the last run date are picked up, i.e. only recent records are accepted.

Output port :: INCRE_LOAD
SETVARIABLE($$V_LOAD_DATE, SYSDATE)

Mapping : How to filter the Null Value data ?


Explanation :
In this mapping we will understand how to filter improper NULL-value data.

To detect NULL values in a table, we use the ISNULL() function.

 In general, when the source is a flat file or a relational table, we find a few fields with NULL values. If, in your business process, this null data is invalid data, then it is necessary to filter it out. The table below shows ::

NAME AGE
JOHN 23
SMITH NULL
NULL 34
LUCKY 24
 In the above table there are 4 records, but only two are valid; the other two are invalid because in the 2nd record the AGE value is NULL and in the 3rd record the NAME is NULL. So it is essential to filter out the 2nd and 3rd records before loading into the target table. The table below shows how the target table appears with valid data:

NAME AGE
JOHN 23
LUCKY 24
 The 2nd and 3rd records can be filtered by using the ISNULL() function. This is declared in the Filter Transformation condition.
 The function appears like this: IIF(NOT ISNULL(NAME), TRUE, FALSE)
 The Filter transformation passes only the TRUE records to the target and drops the FALSE records, which are nothing but the NULL-value records.
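Since a row is invalid here when either column is NULL, the same idea extends to several columns. A sketch of the combined filter condition:

IIF(NOT ISNULL(NAME) AND NOT ISNULL(AGE), TRUE, FALSE)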

Mapping : How to filter the improper Date format data?


Explanation : In this mapping we will understand how to filter improper date-format data.

To check the date format, we use the IS_DATE() function.

When the source is a flat file, there are chances of seeing an invalid DATE format, as shown in the source table below.

DATE
20-Apr
11-Mar-2008
12-Feb-2008
Feb-2008
Look at the 1st and 4th records of the above table: the DATE value format is clearly not correct. In the 1st record the year is missing and in the 4th the day is missing. In this case the invalid data must be removed before it is loaded into the target, as shown in the target table below:

DATE
11-Mar-2008
12-Feb-2008
 The above can be achieved by using the IS_DATE() function. This is declared in the Filter Transformation condition.
 The function appears like this: IIF(IS_DATE(DATE, 'DD-MON-YYYY'), TRUE, FALSE)
Mapping : How to correct data values that contain unwanted spaces?
Explanation :
In this mapping we will understand how to remove spaces in the data. It is very important that the data values are aligned equally; there should not be any stray spaces.

To trim these spaces, we use the LTRIM() and RTRIM() functions.

When the source is a flat file with a NAME field, the following table is the source data ::
data::

NAME
         Ravi
Ramesh
    Swathi
    Jack
In the above table you can clearly see that the NAME values are not aligned properly; to align the data equally we have to use the TRIM concept. In our mapping we load the above data into the target table, where it appears as shown below:

NAME
Ravi
Ramesh
Swathi
Jack
The above can be achieved by using the LTRIM() function in the expression editor. This is declared in an Expression transformation output port.

The function appears like this: LTRIM(NAME)
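If trailing spaces are also possible in the data, the two functions can be nested so both sides are trimmed:

LTRIM(RTRIM(NAME))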

Mapping : How to convert a flat-file date field (String datatype) into a relational Date datatype?
Explanation :

In this mapping we will understand how to use the TO_DATE() function. It is mainly used to convert a String datatype to a Date datatype.

When the source is a flat file with a DATE field, its datatype is generally String. The following table shows the source table ::

Field Name Datatype

DATE String
If you want to load the above source data into a relational target, it is mandatory to match the datatypes. In relational databases the date datatype is generally Date. The following table shows the target table ::

Field Name Datatype

DATE Date

To match the datatypes of the ports, we use the TO_DATE() function in the Expression transformation editor. The function appears like this: TO_DATE(DATE, 'DD-MON-YY')
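Since TO_DATE fails the row when the string does not match the format, a common defensive sketch is to validate first and convert only the good values:

IIF(IS_DATE(DATE, 'DD-MON-YY'), TO_DATE(DATE, 'DD-MON-YY'), NULL)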

Mapping : Design a mapping to validate the records which are extracted from a flatfile.
Solution :
Source : Flatfile
Target : Relational
Informatica version 7.1.1
Database : Oracle
 

Mapping : How to use "REPLACECHR" functiion in mapping. Explanation :


Replaces characters in a string with a single character or no character. REPLACECHR searches
the input string for the characters you specify and replaces all occurrences of all characters with
the new character you specify.

Syntax

REPLACECHR( CaseFlag , InputString , OldCharSet , NewChar )

Example

REPLACECHR(0, IN_PHONE, 'abcdefghijklmnopqrstuvwxyz!@#$%^&*(1234567890', NULL)

0 represents case-insensitive; 1 represents case-sensitive.
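As a simpler worked example (the input value is illustrative), stripping the formatting characters from a phone string:

REPLACECHR(0, '(040)-234-5678', '()-', NULL) returns '0402345678'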

Mapping : How to use "SUBSTR" functiion in mapping.


Explanation :
Returns a portion of a string. SUBSTR counts all characters, including blanks, starting at the
beginning of the string.

Syntax

SUBSTR( string , start [, length ] )


Example

SUBSTR(IN_PHONE, 1, 3)
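For example, if IN_PHONE held the illustrative value '040-234-5678', SUBSTR(IN_PHONE, 1, 3) would return '040': three characters counted from position 1.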

Mapping : Design a mapping to load the valid source records into warehouse.
Solution :
Source : Flatfile
Target : Relational
Informatica version 7.1.1
Database : Oracle


Logic :: Use four conditions in the Router transformation:

FIRST :: NEXTVAL % 2 = 0.5 OR NEXTVAL % 4 = 1
SECOND :: NEXTVAL % 4 = 2
THIRD :: NEXTVAL % 2 = 1.5 OR NEXTVAL % 4 = 3
FOURTH :: NEXTVAL % 4 = 0
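Note that for integer NEXTVAL values the fractional comparisons (NEXTVAL % 2 = 0.5 and NEXTVAL % 2 = 1.5) can never be true, so the routing effectively follows NEXTVAL % 4 alone: rows 1, 5, 9, ... go to FIRST; rows 2, 6, 10, ... to SECOND; rows 3, 7, 11, ... to THIRD; and rows 4, 8, 12, ... to FOURTH.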
 

Mapping : Design a mapping which generates a sequence of numbers using the SETVARIABLE() function in an Expression transformation (without using a Sequence Generator). A sketch follows.
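A minimal sketch of the Expression ports, assuming a numeric mapping variable $$V_SEQ (port names are illustrative): a variable port does the counting within the run, and SETVARIABLE persists the last value for the next run.

V_COUNT (variable) :: V_COUNT + 1
O_SEQ (output) :: V_COUNT
O_SAVE (output) :: SETVARIABLE($$V_SEQ, V_COUNT)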
 
Mapping : Design a mapping as shown below

Source Definition ::

Level1 Level2
P1 P2
P2 P3
P3 P4

Target Definition ::

level1 level2 level3 level4
p1 p1 p1 p1
p1 p2 p2 p2
p1 p2 p3 p3
p1 p2 p3 p4

Solution :
Source : Flatfile
Target : Flatfile

Mapping : Design a mapping as shown below

Source Definition ::

ID Name
1 100
2 Ramesh
3 INDIA
1 101
2 Rakesh
3 INDIA
1 102
2 Johny
3 USA

Target Definition ::

EID ENAME COUNTRY
100 Ramesh INDIA
101 Rakesh INDIA
102 Johny USA

Solution :
Source : Flatfile
Target : Relational

 

Mapping : Design a mapping that generates a sequence of numbers without using a Sequence Generator?
Solution :
Source : Flatfile
Target : Relational
Database : Oracle
Note : usage of the SETMAXVARIABLE() function and mapping variables (see the sketch below)!
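A hedged sketch of that idea (names are illustrative, and it assumes references to $$MAX_SEQ return the value persisted by the previous run): a variable port counts the rows, and SETMAXVARIABLE saves the high-water mark so the next session run continues where the previous one stopped.

V_COUNT (variable) :: V_COUNT + 1
O_SEQ (output) :: SETMAXVARIABLE($$MAX_SEQ, $$MAX_SEQ + V_COUNT)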

Mapping : Design a mapping to separate out or remove duplicate records?

Solution :
Source : Flatfile
Target : Relational
Database : Oracle
Tip : Implemented using a mapping variable port, as sketched below. The same can be done using a dynamic lookup or an Aggregator transformation.
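A sketch of the variable-port approach, assuming the input is sorted on the key (here EMPNO; port names are illustrative). Variable ports are evaluated in order, so the comparison runs before the previous-key port is overwritten:

V_IS_DUP (variable) :: IIF(EMPNO = V_PREV_EMPNO, 1, 0)
V_PREV_EMPNO (variable) :: EMPNO
O_IS_DUP (output) :: V_IS_DUP

A downstream Filter with the condition O_IS_DUP = 0 then keeps only the first row of each key.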

Mapping : Load the first half of the records into one target and the second half into the other.
Solution :
Source : Flatfile
Target : Relational
Database : Oracle
Tip : use a stored procedure to count the records, as sketched below.
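A sketch of the idea (names are illustrative): the stored procedure supplies the total row count, say into $$TOTAL_COUNT, a variable-port counter numbers each row as O_ROWNUM, and a Router splits on the midpoint:

FIRST_HALF :: O_ROWNUM <= $$TOTAL_COUNT / 2
SECOND_HALF :: O_ROWNUM > $$TOTAL_COUNT / 2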

Transformations

Transformation : Source Qualifier (sq_)
Explanation :

 Source Qualifier (SQ) is an active transformation.
 SQ is used to read relational or flat-file source tables.
 SQL override in SQ : to override the default query.
 Join condition in SQ : to join two or more tables.
 Filter condition in SQ : to filter the source records.
 Sorted ports : add the ports to the ORDER BY clause of the default query.
 Distinct : select the distinct records from the source.
 Pre-SQL : the PowerCenter Server (PCS) executes the pre-SQL query before it reads the source data.
 Post-SQL : the PCS executes the post-SQL query after it writes the data to the target.

Transformation : Sorter (sor_)
Explanation :

Sorter is an active and connected transformation.
It sorts data based on a sort key. You can sort data in ascending or descending order according to the specified sort key.

Sorter Cache Size
 The default Sorter cache size is 1 MB, and you can specify anything between 1 MB and 4 GB.
 If the total configured session cache size is 2 GB (2,147,483,648 bytes) or greater, you must run the session on a 64-bit PowerCenter Server.
 If the amount of incoming data is significantly greater than the Sorter cache size, the PowerCenter Server may require much more than twice the amount of disk space available to the work directory.
 To determine the size of the incoming data: input rows × [(Σ column size in bytes) + 16]
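For example, 10,000 incoming rows with a total column size of 100 bytes per row would need roughly 10,000 × (100 + 16) = 1,160,000 bytes, a little over 1 MB of cache.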


Case Sensitive
 When you enable the Case Sensitive property, the PowerCenter Server sorts uppercase
characters higher than lowercase characters

 
Work Directory
 You must specify a work directory the PowerCenter Server uses to create temporary files
while it sorts data

 
Distinct Output Rows
 It gives distinct output rows; it discards all the duplicate rows.

 
Null Treated Low
 Enable this property if you want the PowerCenter Server to treat null values as lower than
any other value when it performs the sort operation

Transformation : Sequence Generator (Seq_)
Explanation :

Sequence Generator is a passive and connected transformation.
It is used to generate unique number values, for example to avoid primary key violations in slowly changing dimensions.
It contains two output ports:

1. NEXTVAL :: the PCS generates a sequence of numbers.
2. CURRVAL :: NEXTVAL plus the Increment By value.

 
Reusable ::
 
 A Sequence Generator can be made reusable.
 The Sequence Generator must be reusable when you pass multiple loads into a single target; this is necessary to avoid duplicate sequence values.
 If you use different Sequence Generators, they will generate duplicate values.

 
Properties ::

Start Value :: If you select Cycle, the PowerCenter Server cycles back to this value when it reaches the end value. The default value is 0.
End Value :: The maximum value the PowerCenter Server generates.
Current Value :: The first value of the sequence.
Cycle :: The PCS cycles through the sequence range after it reaches the end value.
Number of Cached Values :: The number of sequence values the PCS caches at a time. Use this option when multiple sessions use the same reusable Sequence Generator.
Reset :: Check this option to generate values from the Current Value each time you run the session; otherwise generation continues from the last value generated in the previous session.

Transformation : Rank (ran_)
Explanation :

Rank transformation is an active transformation.
The Rank transformation allows you to select only the top or bottom rank of data.
During the session, the PowerCenter Server caches input data until it can perform the rank calculations.
 
Rank Properties::
 
Data Movement ::
 When the PowerCenter Server runs in the ASCII data movement mode, it sorts session
data using a binary sort order.
 When the PowerCenter Server runs in Unicode data movement mode, the PowerCenter
Server uses the sort order configured for the session.

 
Case-Sensitive String Comparison ::
 If the session sort order is case-sensitive, select this option to enable case-sensitive string
comparisons, and clear this option to have the PowerCenter Server ignore case for
strings. If the sort order is not case-sensitive, the PowerCenter Server ignores this setting.
By default, this option is selected.

** Note :: If you do not check the Case-Sensitive String Comparison option, first preference goes to lowercase, and vice versa.
 
CACHE ::
 Data cache size for the transformation. Default is 2,000,000 bytes.
 Index cache size for the transformation . Default is 1,000,000 bytes.
 If the total configured session cache size is 2 GB (2,147,483,648 bytes) or more, you
must run the session on a 64-bit PowerCenter Server.

 
Rank Index ::
 
The Designer automatically creates a RANKINDEX port for each Rank transformation. The
PowerCenter Server uses the Rank Index port to store the ranking position for each row in a
group

Transformation : Filter (fil_)
Explanation :

Filter transformation is an active transformation.
The Filter transformation allows you to filter rows in a mapping; only rows that meet the condition pass through the Filter transformation.
 
Filter performance:
 
 Use the Filter transformation early in the mapping.
 Use the Source Qualifier transformation to filter.

Note:: The Source Qualifier transformation only lets you filter rows from relational sources
 
Important ::
 
 Case sensitivity. The filter condition is case-sensitive.
 Appended spaces. Use the LTRIM or RTRIM function to remove additional spaces.

 
Condition Examples::
 
 SAL > 3000

 SALES > 30 AND LOC = 'INDIA'

 IIF(ISNULL(NAME), FALSE, TRUE)
Transformation : Router (rtr_)
Explanation :
Router transformation is an active transformation.
Routes data into multiple transformations based on group conditions.
A Router transformation tests data for one or more conditions and gives you the option to route
rows of data that do not meet any of the conditions to a default output group.
 
GROUPS ::
 
User Defined Groups
 
 You create a user-defined group to test a condition based on incoming data. A user-defined group consists of output ports and a group filter condition.
 You can create and edit user-defined groups.

 
Default Group
 
 The Designer creates the default group after you create one new user-defined group. The
Designer does not allow you to edit or delete the default group.
 If you want the PowerCenter Server to drop all rows in the default group, do not connect
it to a transformation or a target in a mapping

 
REMEMBER ::
 
Zero (0) is the equivalent of FALSE, and any non-zero value is the equivalent of TRUE

Transformation : Expression (exp_)

Explanation :
Expression transformation is a passive and connected transformation.
Use the Expression transformation to calculate values in a single row before you write to the target.
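For example, a single-row calculation such as total compensation can be declared in an output port (port and column names are illustrative):

TOTAL_COMP (output) :: SAL + IIF(ISNULL(COMM), 0, COMM)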

Transformation : Aggregator (Agg_)
Explanation :

Aggregator transformation is an active and connected transformation.
It performs aggregate calculations.
Components ::
 Aggregate expression. Entered in an output port. Can include non-aggregate expressions
and conditional clauses.
 Group by port. Indicates how to create groups
 Sorted input. Use to improve session performance. To use sorted input, you must pass
data to the Aggregator transformation sorted by group by port, in ascending or
descending order.
 Aggregate cache. The PowerCenter Server stores data in the aggregate cache until it
completes aggregate calculations. It stores group values in an index cache(10000) and
row data in the data cache(20000).

 
Aggregate Functions ::
AVG COUNT FIRST LAST
MAX MEDIAN MIN PERCENTILE
STDDEV SUM VARIANCE  
 
** Use them in an expression within an Aggregator transformation
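For example, an aggregate expression can carry a conditional clause. With DEPTNO as the group by port, the following output port (names are illustrative) sums the salaries of only the rows where LOC is 'INDIA' within each department:

SUM(SAL, LOC = 'INDIA')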

Transformation : Lookup (Lkp_)
Explanation :

Lookup transformation is a passive transformation.
Use a Lookup transformation in a mapping to look up data in a flat file or a relational table.
 
 
Lookup perform many tasks ::
 
 Get a related value
 Perform a calculation
 Update slowly changing dimension tables.

 
Lookup Types ::
 
 Connected or unconnected
 Relational or flat file lookup.
 Cached or uncached

 
Difference between Connected and Unconnected Lookup ::

 Connected :: receives input values directly from the pipeline. Unconnected :: receives input values from the result of a :LKP expression in another transformation.
 Connected :: can use a dynamic or static cache. Unconnected :: uses a static cache.
 Connected :: supports user-defined default values. Unconnected :: does not accept user-defined default values.
 Connected :: returns multiple columns and passes multiple output values. Unconnected :: returns only one column and passes one output value.
 Connected :: the cache includes all lookup columns used in the mapping. Unconnected :: the cache includes all lookup/output ports in the lookup condition and the lookup/return port.
 No match for the lookup condition :: connected returns the default value for all output ports; unconnected returns NULL.
 Match for the lookup condition :: connected returns the result of the lookup condition for all lookup/output ports; unconnected returns the result into the return port.
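As a sketch of the unconnected call, assuming a Lookup transformation named LKP_GET_ENAME with EMPNO as the input port and ENAME as the return port (names are illustrative), an Expression output port could use:

IIF(ISNULL(:LKP.LKP_GET_ENAME(EMPNO)), 'NOT FOUND', :LKP.LKP_GET_ENAME(EMPNO))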
 
Lookup Cache ::

Persistent :: You can save the lookup cache files and reuse them the next time the PCS processes a Lookup transformation configured to use the cache.
Recache :: If the persistent cache is not synchronized with the lookup table, you can configure the Lookup transformation to rebuild the lookup cache.
Static :: The PCS caches the lookup file or table and looks up values in the cache for each row that comes into the transformation. When the lookup condition is true, the PowerCenter Server returns a value from the lookup cache. The PCS does not update the cache while it processes the Lookup transformation.
Dynamic :: If you want to cache the target table and insert new rows or update existing rows in the cache and the target, you can create a Lookup transformation that uses a dynamic cache. The PowerCenter Server dynamically inserts or updates data in the lookup cache and passes data to the target table. You cannot use a dynamic cache with a flat-file lookup.
Shared :: You can share the lookup cache between multiple transformations. You can share an unnamed cache between transformations in the same mapping, and a named cache between transformations in the same or different mappings.

Transformation : Update Strategy (upd_)
Explanation :

Update Strategy is an active and connected transformation.
It determines whether to insert, delete, update, or reject rows.
It flags the rows for insert, update, delete, or reject; this is used to maintain historical data in the target table.

Mapping & Session
 Within a mapping, you use the Update Strategy transformation to flag rows for insert, delete, update, or reject. When you configure a session, you can instead set it to treat all rows the same way (e.g. as inserts).
 If you do not choose Data Driven, the PowerCenter Server flags all rows for the database operation you specify in the Treat Source Rows As option and does not use the Update Strategy transformations in the mapping to flag the rows.

 
Constants

Operation :: Constant :: Value
Insert :: DD_INSERT :: 0
Update :: DD_UPDATE :: 1
Delete :: DD_DELETE :: 2
Reject :: DD_REJECT :: 3
 
Update Strategy Expressions
 
 IIF(SUBMIT_DATE > LAST_DATE, DD_INSERT, DD_UPDATE)
 'DD' :: Data Driven

Transformation : Joiner (joi_)
Explanation :

Joiner transformation is an active and connected transformation.
It joins data from different databases or flat-file systems, or from the same sources.
 
Join Type ::
 
 Normal Join
The PowerCenter Server discards all rows of data from the master and detail source that
do not match, based on the condition.
 Master Outer Join
A master outer join keeps all rows of data from the detail source and the matching rows
from the master source. It discards the unmatched rows from the master source.
 Detail Outer Join
A detail outer join keeps all rows of data from the master source and the matching rows
from the detail source. It discards the unmatched rows from the detail source.
 Full Outer Join
A full outer join keeps all rows of data from both the master and detail sources.

You cannot use a Joiner transformation when ::

 Either input pipeline contains an Update Strategy transformation.
 You connect a Sequence Generator transformation directly before the Joiner transformation.

 
Perform joins in a database when possible
 Create a pre-session stored procedure to join the tables in a database.
 Use the Source Qualifier transformation to perform the join.
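As a sketch of the second point, the join can be pushed into the Source Qualifier with a user-defined join or SQL override; assuming illustrative EMP and DEPT tables in the same database:

SELECT EMP.EMPNO, EMP.ENAME, DEPT.DNAME
FROM EMP, DEPT
WHERE EMP.DEPTNO = DEPT.DEPTNO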

Transformation : Union (uni_)

Explanation :
Union transformation is an active transformation.
Merges data from different databases or flat files.
Similar to the UNION ALL statement, the Union transformation does not remove duplicate
rows.
You can connect heterogeneous sources to a Union transformation.
 
NOTE::
 
All input groups and the output group must have matching ports. The precision, datatype, and
scale must be identical across all groups.

Transformation : Normalizer (nrm_)
Explanation :

Normalizer transformation is an active and connected transformation.
 
 
DESIGNER ::
 
Use the Normalizer transformation with COBOL sources, which are often stored in a
denormalized format
You can also use the Normalizer transformation with relational sources to create multiple rows
from a single row of data.
 
MAPPING ::
 
Objective : To create a mapping which converts a single row into multiple rows.

Mapping Flow : Source Definition (Flat File) > Source Qualifier > Expression (column names) >
Normalizer transformation (converts single row into multiple rows)> Target Definition (flat file)
Designing : We designed this mapping using INFORMATICA tool version 7.1.1 .

Description :

Source Definition

ENO ENAME ESAL EAGE
100 JOHN 2000 23
101 SMITH 5000 41
102 LUCKY 6000 32

Target Definition

ENO COL_NAME COL_VAL
100 ENAME JOHN
100 ESAL 2000
100 EAGE 23
101 ENAME SMITH
101 ESAL 5000
101 EAGE 41
102 ENAME LUCKY
102 ESAL 6000
102 EAGE 32

Transformation : Stored Procedure (sp_)

Explanation : Stored Procedure is a passive and connected/unconnected transformation.

 Create stored procedures to automate tasks that are too complicated for standard SQL statements.
 Stored procedures are stored and run within the database.
 Stored procedures allow user-defined variables, conditional statements, and other powerful programming features.

 
DESIGNER ::
 
Use stored procedures to do the following tasks:

 Check the status of a target database before loading data into it.
 Determine if enough space exists in a database.
 Perform a specialized calculation.
 Drop and recreate indexes.

Stored Procedure

create or replace procedure revised_sal(SAL in number, R_SAL out number)
is
begin
  R_SAL := SAL + 1000;
end;
/

Template Appearance

SAL
REV_SAL
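If the transformation is used unconnected, it can be called from an Expression output port, with PROC_RESULT capturing the OUT parameter (a sketch, assuming the Stored Procedure transformation is named SP_REVISED_SAL):

:SP.SP_REVISED_SAL(SAL, PROC_RESULT)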

Mapplet One : The objective of this mapplet is to obtain the total employees who are getting SAL > 1000.
Explanation : A mapplet is a reusable object that you create in the Mapplet Designer. It contains a set of transformations and allows you to reuse that transformation logic in multiple mappings.

The advantage of a mapplet is reusing the same set of transformations in multiple mappings.
 

Mapplet Two : The objective of this mapplet is to obtain the info of the employee who is getting the maximum salary.
Explanation :
A mapplet is a reusable object that you create in the Mapplet Designer. It contains a set of
transformations and allows you to reuse that transformation logic in multiple mappings.

Mapplet Input :: You can pass data into a mapplet using source definitions and/or Input
transformations. When you use an Input transformation, you connect it to the source pipeline in
the mapping

Mapplet Output :: Each mapplet must contain one or more Output transformations to pass data
from the mapplet into the mapping.

Mapplet Mapping Flow

Input transformation > Sorter > Filter > Expression > Output transformation

Warehouse Mapping Flow

source definition > source qualifier > mapplet > target definition

Advantage

A mapplet can be reused in different mappings/repositories.

Certification Point ::
You cannot include the following objects in a mapplet:

 Normalizer transformations
 Cobol sources
 XML Source Qualifier transformations
 XML sources
 Target definitions
 Pre- and post- session stored procedures
 Other mapplets

How to configure ODBC Connection for EXCEL

1. Create a worksheet.

1.1 Select the required rows to be read into PowerCenter.


1.2 Choose Insert | Name | Define and give the range a name then click OK.
1.3 Save the worksheet.

2. Create the ODBC connection.

2.1 System DSN | Microsoft Excel Driver (*.xls).


2.2 Configure and select workbook.

3. Import into Designer.

3.1 Sources | Import From Database.


3.2 ODBC data source must match (2.) above.
3.3 Leave username, password and ownername blank.
3.6 Click Connect
3.7 Expand the worksheet name and select the range created in (1.) above.

4. Create ODBC connection in Workflow Manager

4.1 In Workflow Manager go to Connections | Relational | New... | ODBC


4.2 Enter a name for the connection.
4.3 Username=pmnulluser
4.4 Password=pmnullpasswd
4.5 Connect string=
4.6 Use this connection in session mapping for source.
5. Please note that sheet names with spaces can be problematic.

Informatica 7.x vs 8.x

Ans
Informatica 8.1 adds new transformations and supports different kinds of unstructured data.
Introduced:

1. SQL transformation
2. Java transformation
3. Support for unstructured data such as emails, Word documents, and PDFs.
4. In the Custom transformation we can build the transformation using Java or VC++.
5. The concept of flat-file update is also introduced in 8.x.

Object Permissions

Effective in version 8.1.1, you can assign object permissions to users when
you add a user account, set user permissions, or edit an object.

Gateway and Logging Configuration

Effective in version 8.1, you configure the gateway node and location for
log event files on the Properties tab for the domain. Log events describe
operations and error messages for core and application services, workflows
and sessions.

Log Manager runs on the master gateway node in a domain.

We can configure the maximum size of logs for automatic purge in megabytes. PowerCenter 8.1 also provides enhancements to the Log Viewer and log event formatting.

Unicode compliance

Effective in version 8.1, all fields in the Administration Console accept Unicode characters. One can choose the UTF-8 character set as the repository code page to store multiple languages.

Memory and CPU Resource Usage

You may notice an increase in memory and CPU resource usage on machines
running PowerCenter Services.

Domain Configuration Database

PowerCenter stores the domain configuration in a database.

License Usage

Effective in version 8.0, the Service Manager registers license information.

High Availability

High availability is the PowerCenter option that eliminates a single point of failure in the PowerCenter environment and provides minimal service interruption in the event of a failure. High availability provides the following functionality:
Resilience :: the ability of services to tolerate transient failures, such as loss of connectivity to the database or network failure.
