
Retail vertical

Mostly data migration

Interview structure: 1st part resume, 2nd part Informatica, 3rd part remaining things. As per me:
1) SCD Type 1, Type 2, Type 3 -- the total flow, which transformations are used, and unit test cases on that
----------------------- July 28th
------------------------- How to extract a particular row from a fixed-width flat file using Informatica
------------------------ Interview questions related to indexing and partitioning
---------------------------- Why use variables & parameters?

---------------------------------------==========================================
Ishu Garg
Hello everyone,
I want to extract the week number of the year from a date field.
But the main requirement here is that the week should start from Sunday and run till Saturday, i.e.
04-JAN-2015 should be in the 2nd week of year 2015.
Please suggest how I can get this done in Informatica.
Thanks in advance!!
. No, this will not work... actually I need to group records which fall
in the same week, and the week should start from Sunday and end on Saturday.
So I used the function TO_INTEGER(TO_CHAR(DATE_FIELD,'WW')) to extract the week number.
E.g. input dates 07-JAN-2015, 08-JAN-2015:
for 07-JAN-2015 the output is 1 and for the other date the output is 2,
and this record is getting rejected. According to my requirement, both dates should fall in the 2nd week.
You can use TO_CHAR(Inputdate,'W') to calculate the week of the month and TO_CHAR(Inputdate,'WW') to calculate the week of the year.
SELECT ROUND((NEXT_DAY(TO_DATE('04-JAN-2015','DD-MON-YYYY') - 1, 'SATURDAY')
            - NEXT_DAY(TO_DATE('01-JAN-' || EXTRACT(YEAR FROM TO_DATE('04-JAN-2015','DD-MON-YYYY')), 'DD-MON-YYYY') - 7, 'SUNDAY')) / 7) WEEK_NUMBER
FROM DUAL;
Use TO_NUMBER(TO_CHAR(your_date + 1, 'IW')) -- shifting the date by one day makes the ISO (Monday-based) week number behave like a Sunday-based one for most of the year.
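A minimal Oracle sketch for a Sunday-based week-of-year that also avoids the ISO year-boundary caveat: anchor on the Sunday on or before 01-JAN and count 7-day blocks from there (dt stands in for your date column; NEXT_DAY assumes NLS_DATE_LANGUAGE is English):
SELECT dt,
       FLOOR((dt - NEXT_DAY(TRUNC(dt, 'YYYY') - 7, 'SUNDAY')) / 7) + 1 AS week_number
FROM (SELECT DATE '2015-01-04' AS dt FROM dual);
-- 04-JAN-2015 (a Sunday) returns 2; 07-JAN and 08-JAN also return 2, as required.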
=======================================
=======================================

I have 3 columns. I want to add one more column; in total, I want four columns in my
target. Not using SQL -- using Informatica.
Hi,
What you can do is go to the Target Designer and add a column, and then from the Targets menu
use the Generate/Execute SQL option. This will add a port and will affect the mapping too.
---------=======================================
difference between update strategy at mapping level and session level
========================================
I have a column in a flat file with the value '2015-06-09 13:53:29+03' and I have to load
this value into Teradata as a Timestamp value only.
I did this conversion in an Expression transformation as below:
TO_DATE(SUBSTR('2015-06-09 13:53:29+03',1,19),'YYYY-MM-DD HH24:MI:SS')
It did load into the database, but no records are found. We have used a TPT connection.
When I tried loading the same mapping with a relational connection it works fine.
Please help me with this issue.
In the flat file we have this column as string datatype, and while loading I am converting
it into Date/Time with the above conversion script.
. The issue was with the target column data type. Since it is stage loading,
the target is a DB, so when we create the target definition the column data type
timestamp(0) is changed to timestamp(6). So I manually changed it to timestamp(0),
and then it loaded without any failures. I hope this issue with its solution will
help you in the future. Thanks
========================================================
SQL queries for SCD-1, 2, 3?
Write a merge script. Try searching "merge script for SCD 1/2".
1 INSERT INTO Customer_Master
2 SELECT Source_Cust_ID, First_Name, Last_Name, Eff_Date, End_Date, Current_Flag
3 FROM
4 ( MERGE Customer_Master CM
5 USING Customer_Source CS
6 ON (CM.Source_Cust_ID = CS.Source_Cust_ID)
7 WHEN NOT MATCHED THEN
8 INSERT VALUES (CS.Source_Cust_ID, CS.First_Name, CS.Last_Name, CONVERT(CHAR(10), GETDATE()-1, 101), '12/31/2199', 'y')
9 WHEN MATCHED AND CM.Current_Flag = 'y'
10 AND (CM.Last_Name <> CS.Last_Name) THEN
11 UPDATE SET CM.Current_Flag = 'n', CM.End_date = CONVERT(CHAR(10), GETDATE()-2, 101)
12 OUTPUT $Action Action_Out, CS.Source_Cust_ID, CS.First_Name, CS.Last_Name, CONVERT(CHAR(10), GETDATE()-1, 101) Eff_Date, '12/31/2199' End_Date, 'y' Current_Flag
13 ) AS MERGE_OUT
14 WHERE MERGE_OUT.Action_Out = 'UPDATE';
I feel the complexity is in lines 12-14.
Could you please explain the functionality of $Action and Action_Out = 'UPDATE'?
In SQL Server's MERGE, $Action is a pseudo-column available in the OUTPUT clause that returns 'INSERT', 'UPDATE' or 'DELETE' for each affected row; here it is aliased as Action_Out, so the outer WHERE keeps only the rows whose old version was just expired by the UPDATE branch, and the outer INSERT creates the new current version of those rows -- the classic SCD-2 pattern.
==========================================
What is the creation of balancing and reconciliation modules?

===========================================
There are 3 sources, 2 flat files and 1 relational table. I would like to load
these into one flat file. This is a 1-to-1 mapping. How can I do this?
============================================================
What are the roles and responsibilities of an Informatica developer?
1) ETL 2) mapping 3) session development 4) execution 5) scheduling the jobs 6) performance
tuning 7) unit testing 8) developing reusable transformations and mapplets
9) developing the workflows 10) developing the test cases
==============================================
The data contains special characters; when loading into UNIX the size of
the data increases to 51 from 50, and hence the data gets rejected, as my target
file datatype is string(50).
Please let me know how I can handle this. Thanks.
Note: the code page at all places is defined as UTF-8,
and in the database the Oracle datatype is defined as
VARCHAR2(50 CHAR), so it causes no issue at the DB level.

Change the character set in the Integration Service to ASCII or Super Latin. It will
work. I faced this problem earlier.
====================================
============================
The source has 10 records; two records should be loaded to the target every time the
session runs. Can anyone solve this scenario? I faced this problem in an interview.
Create two mapping-level variables:
$$V1 = 0
$$V2 = 0
In the Expression transformation, D1, D2, D3 are your data ports.
Add variable ports:
row_num = row_num + 1
exp1_v = IIF(row_num = 1, SETMAXVARIABLE($$V1, $$V2))
exp2_v = IIF(row_num = 1, SETMAXVARIABLE($$V2, $$V2 + 2))
Now add two output ports to get the mapping-variable values we just set
(SETMAXVARIABLE with a variable's own value is a no-op that returns its current value):
v1_o = SETMAXVARIABLE($$V1, $$V1)
v2_o = SETMAXVARIABLE($$V2, $$V2)
row_num_o = row_num
Take your data ports out together with these last three ports.
Now in the Filter transformation, the condition: row_num_o > v1_o AND row_num_o <= v2_o
==========================================
Faced interview questions like these:
a) How many columns can we create in a table?

b) How many sessions can we run in a batch?

c) What do the 'g' and 'i' stand for in Oracle 10g and 9i/8i?
d) Which type of shell scripting do you use?
I hope these are helpful.
a) 1000 (in Oracle)
c) 'i' stands for Internet and 'g' stands for Grid
https://docs.oracle.com/cd/B28359_01/server.111/b28320/limits003.htm
==============================================
I am trying to load data into a SQL Server 2008 database, but the session failed with the
error "this type of connection is not supported by your OS". The Informatica server is on
UNIX...
Please suggest how I can load data into the SQL Server database.
This error occurs due to an incompatible Integration Service that you're running the
workflow on. Please contact your Informatica administrator to modify the existing IS or
create a new IS that is compatible with MSSQL databases. Following the creation of such an
IS, add the MSSQL server details to the ODBC file of the repository and create an ODBC-type
relational connection in the Workflow Manager, and use that for the MSSQL target you'd like
to load data to. Please let me know if you need further info. Thanks!
===========================================
July 24
Hi guys, I have one table with millions of records, so I created 20 partitions
based on one LIMITKEY column.
Now, how can I loop over each partition one by one while extracting records?
I want a SQL query -- can you please help me with that?
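A minimal Oracle sketch, assuming the table is named big_table: the PARTITION extension clause addresses one partition at a time, and USER_TAB_PARTITIONS gives you the names to loop over (from a shell script or a PL/SQL cursor):
SELECT * FROM big_table PARTITION (p1);

-- list the partition names to drive the loop
SELECT partition_name
FROM user_tab_partitions
WHERE table_name = 'BIG_TABLE'
ORDER BY partition_position;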
===================================================
In how many ways can we select duplicate rows in Oracle? Can anyone tell me?
Query 1:
SELECT a, b, c, d FROM <table_name> GROUP BY a, b, c, d HAVING COUNT(*) > 1;
Query 2 (note: ROW_NUMBER, not RANK -- RANK gives all tied rows rank 1, so rnk >= 2 would never match):
SELECT a, b, c, d FROM (SELECT a, b, c, d, ROW_NUMBER() OVER (PARTITION BY a, b, c, d ORDER BY NULL) AS rnk
FROM <table_name>) t WHERE rnk >= 2;
Query 3:
SELECT * FROM <table_name> WHERE rowid NOT IN (SELECT MAX(rowid) FROM <table_name> GROUP BY a, b, c, d);
=======================================================================
Period: 2014,Apr-06 P1
If this is the value of the Period column,
and if the Period value contains P1, then take the day as the 1st day of the month; the
format should be MM/DD/YYYY in the target column. See below.

Requirement
Column Period -> Extract date: 04/01/2014
Please help -- how do I implement this in an Expression transformation in Informatica?
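A hedged Expression-transformation sketch (the port names v_year, v_mon, o_extract_date are illustrative, and the layout 'Period:YYYY,Mon-DD P1' is assumed fixed):
v_year = SUBSTR(PERIOD, INSTR(PERIOD, ':') + 1, 4)
v_mon  = SUBSTR(PERIOD, INSTR(PERIOD, ',') + 1, 3)
o_extract_date = IIF(INSTR(PERIOD, 'P1') > 0,
                     TO_CHAR(TO_DATE('01-' || v_mon || '-' || v_year, 'DD-MON-YYYY'), 'MM/DD/YYYY'))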
==========================================================================
Is there any way we can parameterize "Number of rows to skip" under the source file
properties? Any other workaround to achieve this?
I have a few files with and without headers.
We cannot parameterize "number of rows to skip", as we set it while importing the
file (as per my knowledge)...
But as you know, we can achieve this by using a variable or a Sequence Generator plus a
Filter...
==========================================================================
ID NAME
100 HYD
100 BANG
100 CHEN
I WANT OUTPUT LIKE BELOW IN ORACLE
ID NAME1 NAME2 NAME3
100 HYD BANG CHEN
THANKS IN ADVANCE

SELECT id,
       MAX(DECODE(rnk, 1, name)) NAME1,
       MAX(DECODE(rnk, 2, name)) NAME2,
       MAX(DECODE(rnk, 3, name)) NAME3
FROM (SELECT id, name, ROW_NUMBER() OVER (PARTITION BY id ORDER BY name) AS rnk
      FROM table_name)
GROUP BY id;
(ROW_NUMBER partitioned by id, rather than a global RANK, keeps the pivot correct when there is more than one id.)
========================================================================
The ORDER BY clause only sorts the data for display; it does not rearrange the data stored
internally in the database.
======================================================
How to load XML data + flat file + Oracle data + SQL Server data into the same table
via a single mapping?
=================================================
How to pass a Boolean value to an Informatica mapping from a parameter file?
I am going to pass this Boolean value to an Oracle procedure for further operations.
===============================
Hi guys, I am using an OR condition in a lookup override: (A=B OR A=C) AND (D=E). How
can I use an OR condition in a Joiner? Is there any such way? Please suggest.

You can create multiple join conditions, but a Joiner ANDs its conditions -- so don't lose
the relationship.
===========================================
Hi guys,
If the source is a flat file with millions of records and the throughput is as low as 50,
what could be the solution to increase the performance? Thanks :)
We can use key-range partitions on the target table and improve the performance.

If the bottleneck is on the reader side, then you need to improve the performance by
issuing the SQL query on the source database side. Then check the performance; if the
issue is still the same, you need to tune the query. We have several ways: run the query
through EXPLAIN PLAN and check whether it does a full table scan -- if it does, the cost of
the query on the database side is high. Check the performance again; if it still hasn't
improved, create indexes on the joining columns, analyze the query, and then go for hints,
etc. After these steps you will finally get the performance you are expecting. Hope this
helps.
==========================================
Is there any other query or command (way) to retrieve data other than SELECT?
A cursor is a way to retrieve data from the database other than a plain SELECT.
===============================================
What is the major difference between Informatica 8.x and 9.x?
1. The major difference is the Lookup transformation, which can be active (returns all the
matched rows) in Informatica 9.0.
2. Can access two tables with the same name (e.g. the same table name, but one in uppercase
and one in lowercase).
3. Can process tables with a '.' (dot) in their table names.
4. Informatica 9 supports data integration for the cloud as well as on premise.
You can integrate the data in cloud applications, as well as run Informatica 9 on cloud
infrastructure.
5. Informatica Analyst is a new tool available in Informatica 9.
6. Session log file rollover. You can limit the size of session logs for real-time
sessions. You can limit the size by time or by file size. You can also limit
the number of log files for a session.
7. Database deadlock resilience. In previous releases, when the Integration Service
encountered a database deadlock during a lookup, the session failed. Effective
in 9.0, the session will not fail. When a deadlock occurs, the Integration Service
attempts to re-run the last statement in a lookup. You can configure the number
of retry attempts and the time period between attempts.
8. Passive transformation. You can configure the SQL transformation to run in passive
mode instead of active mode. When the SQL transformation runs in passive mode, it
returns one output row for each input row.
9. SQL overrides for uncached lookups. In previous versions you could create a
SQL override for cached lookups only. Now you can create an SQL override for an uncached
lookup, and you can include lookup ports in the SQL query.
=================================================================
Master Outer Join with the master port -- how does the Integration Service handle it?
Please explain.

You should take the source with fewer records as the master.

A master outer join keeps all records from the detail source and only the matching records
from the master (which normally contains fewer records than the other source).
2. This improves the join performance, since the master rows are the ones that get cached.
=================================================================
How to delete the 5th line in Unix... thanks
sed '5d' filename
(For the 5th line from the end: tac filename | sed '5d' | tac)
==================================================================
AWK command to find duplicates in a file:
awk 'seen[$0]++' filename   (prints every repeated line; seen[$0]++ is 0, i.e. false, only on the first occurrence)
======================================================
How to make index rebuilding on the tables faster? I am first disabling indexes, then doing
the data load, then rebuilding the indexes. Are there any optimization techniques we can
use for this?
Write pre-/post-SQL overrides on the target table to drop and recreate the constraints and
indexes.
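A minimal Oracle sketch (the index name and degree are illustrative): rebuilding with NOLOGGING and PARALLEL usually cuts the rebuild time sharply, at the cost of redo-based recoverability until the next backup:
ALTER INDEX my_idx REBUILD NOLOGGING PARALLEL 4;
ALTER INDEX my_idx NOPARALLEL;  -- reset the parallel degree after the rebuild
ALTER INDEX my_idx LOGGING;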
=====================================================================

My flat file contains "" in one field, and I want to convert it to Korean characters
while loading into an Oracle table.
Kindly do the needful.
Use code pages in Informatica; there is a particular code page for the Korean language. Use
it to convert into the required language...
=========================================================================
src1
id name
100 null
101 hyd
102 bng
src2
id name
100 bng
101 null
102 pune
trgt

id name
100 bng
101 hyd
102 bng,pune
How can I achieve this.......?
This is a simple way:
SELECT e.id, TRIM(BOTH ',' FROM e.name || ',' || f.name) AS name
FROM src1 e, src2 f
WHERE e.id = f.id;
============================================
What are Check In / Check Out / Versioning? If a mapping is checked out and someone tries
to modify it, what error is thrown? And how do you delete an invalid mapping which has been
checked in by someone in the folder?
If you want to edit a mapping, session, task, etc., you first check it out (which locks it,
so nobody else can modify it), then do your modifications, and then check it in; after that
everybody can view, access and execute it.
Developers can maintain multiple copies of source code, with labels to identify changes;
you check out when you want to make a change and check in once you are done. Was there no
XML backup taken? Or copy that mapping from PROD.
========================================================
How to achieve this? Please help on this scenario.
source
------------- date_col
01-jan-2015
01-may-2015
01-jul-2015
target
------------- date_col
01-jan-2015
01-feb-2015
01-mar-2015
01-apr-2015
01-may-2015
01-jun-2015
01-jul-2015
One way in Oracle: generate a month-by-month calendar between the minimum and maximum
source dates, returning one row per month:
SELECT ADD_MONTHS(mn, LEVEL - 1) AS date_col
FROM (SELECT MIN(date_col) mn, MAX(date_col) mx FROM src)
CONNECT BY LEVEL <= MONTHS_BETWEEN(mx, mn) + 1;

===========================================================================
Explain the DTM architecture and the Integration Service process in detail if possible.
The DTM performs extraction, transformation and loading of the data. Whenever you execute a
workflow in the Workflow Manager, the Integration Service connects to the Repository
Service; the Repository Service connects to the repository database and extracts the code,
which gets into the Integration Service buffer. Working from the code in the buffer, it
performs extraction, transformation and loading. All of this you can watch in the Workflow
Monitor.
===================================================================
What is change data capture, when is it used, and what is the difference between SCD and CDC?
CDC means capturing the changes to the data in the source -- that is, whenever data is
updated, new records are inserted, or some records are deleted, we capture those changes.
In my project we use PowerExchange to capture the changed data.
===========================================================================
I have 1000 flat files and I have already loaded 700 of them. How do you load the remaining
300 flat files?
If your question is how to continue with the remaining 300 files when the workflow
failed/stopped in the middle, then it is better to have a DB table and store each file name
with a flag after the file is processed. If you re-run the workflow, the mapping has to
check the file name and flag: if a file is already processed, skip it, and load only the
files which have not been processed.
=============================================================================
Are incremental load, delta load and CDC synonyms for the same concept, or are they
different concepts?
Change data capture: in short, it captures the DML commands on the table on which you have
applied CDC -- whatever you updated, inserted or deleted.
2. Incremental loading: the initial data is already in the table; again you have to load
some more data, which can be one record or more. Inserting just those particular records
(for example, by filtering on a last-run timestamp held in a mapping variable) is what is
known as incremental loading.
Delta load: generally used interchangeably with incremental load -- loading only the data
that changed since the previous run, as opposed to the first-time full load.

=====================================================================
****************************************************************************
emp
boss
sales transaction
hra a/c
Among these, which are the fact and which are the dimension tables?

Sales transaction, we can say, might be the fact table.

emp is a dimension table.
---------------- In your project, how do you give the fact table notations?
We don't give any; based on the dimension tables and client requirements we create the fact
table, through which the client analyses the business via a report.
---------------------------

Transactions could be the fact and the rest all dimensions.

--------------- Actually, fact tables are those which store quantitative data. Here Transaction
is the only table that can be identified as a fact. The rest of the tables hold the
descriptive context of the fact table, like who performed this transaction.
------------- One of my interview questions: how many fact tables and dimension tables are
there in your project?
That again depends on your project's data model. For us, we don't have a star schema; ours
is a snowflake schema.
-------------------*****************************************************************************
===================================================================
I want to load unique records into one target table and duplicate records into another
target table. How is it done?
Sorter ---> Expression (variable port v_count = IIF(col = v_prev_col, v_count + 1, 1), then
v_prev_col = col, output o_count = v_count) ---> Router (o_count = 1 -> unique target,
o_count > 1 -> duplicates target)
=========================================
http://ssssupport.blogspot.in/2015/07/migrate-informatica-code-from-unix.html
======================================================================
Hi all. I have a requirement in my project. In order to improve performance, if 20k records
are coming from the source, I need to have 4 workflows that execute in parallel, each
processing 5k records: 1st workflow 1-5k, 2nd workflow 5001-10k, 3rd workflow 10001-15k,
4th workflow 15001-20k. Please advise on the method and approach. Source and target are
both Oracle databases, source to staging. I have different sessions in a worklet.
After extracting columns from the SQ, take a Sequence Generator (or any other way to
generate numbers) and in a Filter place this condition:
seqno >= $$START_NUMBER AND seqno <= $$END_NUMBER
For this same mapping create 4 sessions and parameterize $$START_NUMBER and $$END_NUMBER
with different sequence numbers... create a worklet for these 4 sessions and execute it
using a workflow.
========================================================================
The source contains data; I want to load the first 2 records into the target table on the
first run, and on the second run the next 2 records have to be loaded, and so on. How can I
achieve this? Please help me.
A Sequence Generator with cycle/reset restarts at 1 every run, so a plain seqval <= 2
filter would pass the same rows each time. Instead, use persistent mapping variables as in
the 10-records scenario above: keep a variable holding the last loaded row number, advance
it by 2 with SETMAXVARIABLE on each run, and filter on row_num > $$last AND
row_num <= $$last + 2.
===========================================================================
Why does the session fail if we check Sorted Input in the Aggregator when the incoming data
is unsorted? Please explain.
When you check the Sorted Input option, the Aggregator assumes it will now receive sorted
data and performs its group calculations on that basis. But if we don't send sorted data to
the Aggregator transformation, the session fails.
However, even if we check the Sorted Input option and don't provide sorted data, in 3 cases
the session can still succeed:
1. Incremental aggregation
2. Nested aggregate functions
3. The input data is partitioned
In the above 3 cases the session will succeed even though we haven't provided sorted
data...
================================================================================
Please give the solution for the below scenario:
Source
____________
Name month amount credit/debit
A 1-Jan-2015 2000 credit
A 2-Jan-2015 500 debit
A 5-Apr-2015 1000 credit
B 3-Jan-2015 300 debit
B 4-Mar-2015 800 credit
B 5-May-2015 400 credit
Target
____________
Name month Amount credit/debit Balance
A Jan-2015 1500 credit 1500
A Feb-2015 0 credit 1500
A Mar-2015 0 credit 1500
A Apr-2015 1000 credit 2500
B Jan-2015 300 debit -300
B Feb-2015 0 credit -300
B Mar-2015 800 credit 500
B Apr-2015 0 credit 500
B May-2015 400 credit 900
For this, use a Sorter to sort the data and then aggregate the data based on name and
month. Then use an Expression with variables holding the previous record's value for that
group; based on credit or debit it does the calculation: if the amount is a debit, subtract
it from the previously stored value for the group, else add it. Sequence of ports in the
Expression:
1) Input port
2) v_old = v_new
3) v_new = IIF(operation = 'debit', v_old - amount, v_old + amount)
4) Output port = v_new
============================================================================

What is a factless fact table, what is a conformed dimension, and what is a degenerate
dimension? Explain clearly with examples.
Factless fact example: a student attendance record (only keys, no measures).
Conformed dimensions: dimensions shared across fact tables or data marts with the same
meaning, e.g. a Country table or an Employee table used by several facts.
Degenerate dimension: a dimension attribute stored in the fact table itself with no
dimension table of its own, e.g. an invoice or order number.
=========================================================================
Hi, how to achieve this?
Source:
Department
4509810
4509810
4509810
4509811
4509811
4509811
4509813
4509813
4509813
target
Id Department
1 4509810
1 4509810
1 4509810
2 4509811
2 4509811
2 4509811
3 4509813
3 4509813
3 4509813
Based on department, a unique Id should be generated.
You can use an Expression: increment the unique id whenever the department id changes.
Sort by department, then in the Expression transformation create the ports in this order:
v_curr_dept = department
v_id = IIF(v_curr_dept = v_prev_dept, v_id, v_id + 1)
v_prev_dept = department
o_id = v_id
(This is effectively a dense rank.)
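A one-line Oracle equivalent (the table and column names are assumptions):
SELECT DENSE_RANK() OVER (ORDER BY department) AS id, department
FROM src_departments;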
===========================================================
"Integration service logs initialization and status information
when you select tracing level as terse so it ll keep the log status,reject recor
d informatition and imitialization
===============================================================
Source:
Product Quantity
Samsung null
IPhone 3
Lg 0
Nokia 4

Target
Product Quantity
IPhone 3
IPhone 3
IPhone 3
Nokia 4
Nokia 4
Nokia 4
Nokia 4
Using a Java transformation we can achieve this: write simple Java code and use the
method generateRow(). It will emit multiple records into the target based on the source
column.
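A hedged sketch of the On Input Row code for such a Java transformation (the port names PRODUCT/QUANTITY and their _OUT twins are assumptions about the mapping):
// On Input Row tab
if (!isNull("QUANTITY") && QUANTITY > 0) {        // drop null and zero quantities
    for (int i = 0; i < QUANTITY; i++) {
        PRODUCT_OUT  = PRODUCT;                   // copy input ports to output ports
        QUANTITY_OUT = QUANTITY;
        generateRow();                            // emit one output row per iteration
    }
}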
===================================================
Code implementation:
when the code is estimated, or is being moved to production.
=======================================================
Query:
i/p:
1
1
2
3
3
5
6
6
6
O/P:
1
2
3
5
6
SQL query to delete the extra duplicates:
DELETE FROM <table_name> WHERE rowid NOT IN (SELECT MAX(rowid) FROM <table_name>
GROUP BY column1);
====================================================================
I need to delete rows based on an Update Strategy. There are already 2 indexes
present: one index is defined on a particular column, and the other one is automatically
created because of the composite PK. The column on which we need to delete data is part of
that key too, so the performance is slow. How to improve it?
==================================================================
In interviews everybody asks "tell me about your business". Can you please help me with how
to answer this?
They want to know what kind of operations you are performing in your project. Creating
mappings and workflows is one thing, but from a real-world perspective, what is your
project doing? Is it telling the business about the sales/profit/losses or is it serving
some other objective? Everything in a project is business-driven; IT is just a part of it.
==========================================================
Source 1 (flat file): A, B, C, D
Source 2 (table): 1, 2, 3, 4
TARGET
TRG_TABLE
---------
A1
B2
C3
D4
Can you explain how to write the function? One source is a flat file and the other is a
table.
Go for a Joiner by generating a surrogate key (row number) for each source, join on it,
then concatenate the two columns.
=============================================================
Interview question:
SourceId Descr
1 A
2 B
3 C
1 D
2 E
Target:
1 A,D
2 B,E
3 C
This can be achieved in two ways using Oracle SQL list-aggregation functions. One is
LISTAGG, which is standard and always preferable; there is also a system-specific function
called WM_CONCAT, which may not be available on all databases. Please find the queries
below (the column is renamed Descr because DESC is an Oracle reserved word):
SELECT id, TO_CHAR(WM_CONCAT(descr))
FROM table_name
GROUP BY id;
SELECT id,
LISTAGG(descr, ',') WITHIN GROUP (ORDER BY descr)
FROM table_name
GROUP BY id;
==================================================
I HAVE A RECORD LIKE
ID STARTDATE ENDDATE
12 05-APR-1991 24-JUN-1991
TRGT SHOULD BE
ID STARTDATE ENDDATE
12 05-APR-1991 30-APR-1991
12 01-MAY-1991 31-MAY-1991
12 01-JUN-1991 24-JUN-1991
HOW TO ACHIEVE IT...

Someone help me.
Use a procedure: pass input parameters, then define variables and use a FOR loop like
FOR i IN 1 .. MONTHS_BETWEEN(date2, date1) LOOP, then the logic... hope you can achieve it.
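A set-based Oracle sketch for a single input row (for multiple rows the CONNECT BY needs the usual PRIOR sys_guid() trick):
SELECT id,
       GREATEST(startdate, ADD_MONTHS(TRUNC(startdate, 'MM'), LEVEL - 1)) AS startdate,
       LEAST(enddate, LAST_DAY(ADD_MONTHS(TRUNC(startdate, 'MM'), LEVEL - 1))) AS enddate
FROM t
CONNECT BY LEVEL <= MONTHS_BETWEEN(TRUNC(enddate, 'MM'), TRUNC(startdate, 'MM')) + 1;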
==============================================================
Can you use a HAVING clause without GROUP BY? Yes/no, and why?
Yes, we can, but logically it may or may not return the result you expect, so it totally
depends.
Example: SELECT MAX(Sal) Sal FROM Emp
HAVING MAX(Sal) > 3000;
Without GROUP BY, the whole table is treated as one group, so this query returns the single
MAX value when the HAVING condition is satisfied, and no rows otherwise.

=================================================================
source
student table like:
sid, sub, marks
1, maths, 50
2, maths, 30
3, maths, 40
1, science, 50
2, science, 30
3, science, 40
1, english, 50
2, english, 30
3, english, 40
Target
sid, maths, science, english
1,50,50,50
2,30,30,30
3,40,40,40
How to achieve this, and in how many ways can it be achieved?
First sort the data, then generate a sequence number grouped by sid; then pass all the
ports to an Expression transformation and DECODE the subject into three output ports
(maths, science, english); then map all the ports to an Aggregator grouping by sid with
MAX(maths), MAX(science), MAX(english) as output ports, then to the target.
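The SQL way, as a sketch against the student table above:
SELECT sid,
       MAX(DECODE(sub, 'maths',   marks)) AS maths,
       MAX(DECODE(sub, 'science', marks)) AS science,
       MAX(DECODE(sub, 'english', marks)) AS english
FROM student
GROUP BY sid
ORDER BY sid;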
==============================================================
Hi friends, can anyone tell me how to generate a parameter file using SQL, and how we need
to call that parameter file in our workflow?
I need to create a table for this parameter file; in that table I need to specify
all the details regarding that param file.

To create the table, use SQL queries and insert the data into it. Once that is done, you
can create a template parameter file which has dummy names, plus a script which pulls data
from the table, copies the template file, and replaces the dummy patterns with the original
values to create the .prm file. E.g. replace date_sample with the actual date. Then use it.
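A hedged shell sketch of that script (the connection string, token, and file names are all placeholders):
#!/bin/sh
# pull today's value from the database
RUN_DATE=`sqlplus -s user/pass@db <<EOF
set heading off feedback off pagesize 0
select to_char(sysdate,'MM/DD/YYYY') from dual;
EOF`
# substitute the dummy token in the template to produce the real parameter file
sed "s#date_sample#${RUN_DATE}#" template.prm > wf_daily.prm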

===============================================================
How do you handle Excel files? How do you import Excel files into Informatica?
The easiest and generally followed approach is to convert them to CSV and import them to
the Informatica server. But there is an approach to import Excel files directly, by
selecting the data in the Excel sheet that you want imported and creating a named range
(work group) for it, then importing it via ODBC. I only partially recollect the process,
but we can surely find it on Google. It is followed almost nowhere as far as I know.
Thanks!
==============================================================
"Column reference 'rowid' not supported for views"
I keep getting this error when attempting to write to a Netezza table. The mapping has the
strategy "treat source rows as UPDATE". When I do inserts only it works, but UPDATEs give
the message above. It's thinking it's writing to a view. Can someone help with this? What's
the issue?
I groomed the table and it worked... I don't know what grooming does, but oh well... lol
==============================================================
Source:
Dept_no, Employee_name
20, R
10, A
10, D
20, P
10, B
10, C
20, Q
20, S
Tgt1:
Dept_no, Employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, A,B,C,D,P
20, A,B,C,D,P,Q
20, A,B,C,D,P,Q,R
20, A,B,C,D,P,Q,R,S
Tgt2:
Deptno, Employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, P
20, P,Q
20, P,Q,R
20, P,Q,R,S
How to achieve this?
Thanks......

2) First sort the data on both columns, then map all the ports to an Expression
transformation and create the ports in this order:
v_curr_dept = dept_no
v_list = IIF(v_curr_dept != v_prev_dept, employee_name, v_list || ',' || employee_name)
v_prev_dept = dept_no
o_list = v_list
Pass o_list to the target (the list resets per department, giving Tgt2).
1) Same as above, but never reset: first sort the data, then in the Expression use
v_list = IIF(ISNULL(v_list), employee_name, v_list || ',' || employee_name)
and map its output port to the target (a single running list, giving Tgt1).
====================================================
Hi everyone,
Can anyone please help with this scenario?
I have a flat file source which has 3 columns: Eno, Location, Deptid.
The values in the source file are like below:
Eno,Location,Deptid
1,HY,INDIA,10
2,PH,AMERICA,20
3,BU,SINGAPOOR,30
I want this data in the target table like:
Eno Location Depid
1 IN INDIA 10
2 US AMERICA 20
3 BU SINGAPOORE 30
Use DECODE in an Expression for the second column:
DECODE(location, 'HY', 'IN', 'PH', 'US', 'BU', 'BU', location)
======================================================
Please, can anyone tell me in how many ways we can resolve the below issue?
I have 10 lakh rows that need to be loaded into the target using Informatica.
But after 2 lakh rows are loaded my session fails. So what do we have to do, and
how can I load the remaining data into the target without duplicates?
Thanks
In the recovery strategy, use "Resume from last checkpoint".
=========================================
TEST PLAN in Informatica -- give one example with explanation.

Test plans in Informatica are the test cases that are used to test the code developed by
the developer. These test plans are also used for unit testing, i.e. what scenarios you
should check to confirm that the user's requirements are fulfilled. Test plans are always
prepared before starting the development.
Kalyan Kumar: Monika Gharwal, would you share the same test plan with the testing team, or
will they maintain other plans?
Monika Gharwal: Ideally the test plans should be prepared by the testing team and shared
with the developers, so developers and testers use the same test plan. But if the
developers think that some more test cases should be added which are not covered in the
test plan, then they can ask the testing team to add them.
So there is no difference between unit testing and actual testing?
Monika Gharwal: We can say that... but we can't be sure that developers test all the
scenarios present in the test plans at the time of unit testing; that's why in actual
testing (QA testing) all the scenarios in the test plans need to be tested...
========================================================
"Write Backward Compatible Session Log File" --
what happens if this property is checked or unchecked?
Please explain the difference.
If you do not check it, the IS generates the session log as a .bin file, which is not
readable. If you check it, it creates both files, .bin and .txt; the .txt can be viewed in
any text editor and you can check the session log there.
===========================================================
ACCENTURE INTERVIEW QUESTIONS
> SQL override: if you have given it at both mapping level and session level, which one
does Informatica take?
> Explain how you can handle a new project with 2 new resources.
> What is the datatype of all the flat-file fields before Informatica? Answer: string.
> What delimiters are used in the flat files in your project?
> The concept of data profiling...
> Why do we use a stored procedure for a sequence generator?
Usually pipe is used as the delimiter. Using a stored procedure for the sequence generator
is not a great practice; especially in case of an upgrade it causes problems. Informatica
will take the session-level override. How will I handle a project with 2 resources? By
praying to god that they will work. Lol
=================================================================
Convert "Sarath Chandra" to "Chandra Sarath".
We can achieve this using INSTR and SUBSTR: with INSTR, find the position of the space, and
use SUBSTR to build the result.
Can anyone write the query please?
In Informatica, use REG_EXTRACT:
REG_EXTRACT(col, '(\w+) (\w+)', 2) || ' ' || REG_EXTRACT(col, '(\w+) (\w+)', 1)
(see the REG_EXTRACT syntax on Google).

If you are using Unix:
column1=`echo "$string" | cut -d' ' -f1`
column2=`echo "$string" | cut -d' ' -f2`
var="$column2 $column1"
echo "$var"
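And the Oracle SQL version, as a sketch using the INSTR/SUBSTR idea suggested above:
SELECT SUBSTR(name, INSTR(name, ' ') + 1) || ' ' || SUBSTR(name, 1, INSTR(name, ' ') - 1) AS swapped
FROM (SELECT 'Sarath Chandra' AS name FROM dual);
-- returns 'Chandra Sarath'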

==========================================================================
ADP Interview
> There is an SSN number column in the SRC and the client doesn't want SSN as a main
column in the TGT table; you have to create an ID column. What is your approach?
> If your src is a CSV file, what are the issues that you face?
> Performance tuning
> Project methodology
=======================================================
We are running PWX CDC Oracle Express 9.5.1. The Informatica CDC workflows after a while
start to hang. The only message in the session log is TIMEOUT BASED COMMIT. We are unable
to stop the session, so we have to abort. The writer timeout on the Integration Service is
the default of 60 and the commit interval on the session is 10000. DTM size is 1G. DTM
buffer block size is 8KB. The CDC workflow has several lookup transformations. Why does
this message occur and how can it be prevented? What are the recommended settings for CDC
workflows? Should the default writer timeout be increased? Am I hitting DTM deadlocks?
Increase the buffer block size in session >> config object and check. It may help.
=======================================================
If I have a source with 14 rows and I want to load only every 4th row into the target, how
can I achieve it?
Use a Sequence Generator with start value 1, end value 4, and the Cycle box checked; a
Filter with NEXTVAL = 4 will achieve this.
================================================
1. What is data integration?
2. What is PDO? What if the source is a flat file?
3. Explain about a batch.
1) What is Pushdown Optimization?
2) Data integration?
3) What is a parameter file?
======================================================
Need help with the below error.
I have imported one table from SQL Server,
done with the ODBC connection from the Workflow Manager, and also set the relational
connections in the task settings (Task -> Mappings). The mapping validates, too, but the
session fails and the session log shows the error below:
"Database driver error...
CMN_1022 The server doesn't support the database type specified on this OS platform"
============================================
I have 3 targets. When I run the session the first time, it should load the data into the
first target; the 2nd session run, into the 2nd target; the 3rd run, into the 3rd target.
How can I achieve this scenario?
Create one mapping variable and increment it by 1 each time you run the workflow; in an
Expression transformation, reset it to 1 once it reaches 3, and route the data with a
Router based on this mapping-variable value.
==================================================

Pre- and post-session variable assignment? With a simple real-time example?
Suppose you want to use one mapping variable's value in another mapping; then you can use
pre-/post-session variable assignment. You need to create one workflow variable to transfer
variable values between mappings using the pre-/post-session assignments.
=====================================================
1. Tell me about yourself
2. What are normal forms
3. What is dimensions
4. What is factless fact
5. How to count records using expression transformation
6. Write sql query for the below scenario
CALLING
C
A
L
L
I
N
G
i.e, splitting into multiple columns a string using sql.
7. What are cursors and type of cursors.
8. What is correlated query. Explain its functionality i.e. how correlated query
works.
9. What are indexes. Type of indexes. Why do we use indexes.
10. What is dense rank and rank
11. What are different types of joins.
12. What is master outer join and detail outer join.
13. Given the below scenario:
we have SQ1 -> Router -> Joiner, with SQ2 as the other Joiner input.
Informatica validates the mapping but fails when we run the session. Why?
What is the option that we need to change to run this mapping?
14. What is row_num () function in informatica.
15. scenario
INFORMATICA
using unix command convert it as
I
N
F
O
R
M
A
T
I
C
A.
16. Can we delete a file using find command.
17. Apart from exec command in the above scenario can we use any other command.
18. What is the difference between find and grep.
pls help me out
5) Count of records in an Expression, in 2 ways: one is to take a mapping variable with
initial value 0, and in the Expression take one output port that uses the SETCOUNTVARIABLE
function to update the mapping-level variable; the other is to take a variable port in the
Expression, assign v_count + 1 to it, and assign this variable port to o_count.

4) A factless fact table has no numeric measures, only keys -- no figures like sales
amounts for a customer or product.
10) DENSE_RANK gives sequential ranking like 1 1 2 3; RANK gives non-sequential ranking
with gaps, like 1 1 3 4.
16) Yes, with the find command we can delete files; rm is also used to delete files.
9) Indexes are used to retrieve data from a table quickly. Types of indexes: B-tree index,
bitmap index, clustered index, unique index, non-unique index, function-based index.
7) Cursors are of two types: implicit cursors and explicit cursors.
6) SELECT SUBSTR('CALLING', LEVEL, 1) FROM DUAL CONNECT BY LEVEL <= LENGTH('CALLING');
3) Dimensions store the textual or descriptive information about the facts, e.g. customers,
products, suppliers, stores, etc. 4) A factless fact table is used to record events; it
consists of only keys, no measures. 11) There are four types of joins in the Joiner: normal
join, master outer join, detail outer join, full outer join. 12) A master outer join keeps
all rows from the detail source and only the matching rows from the master; a detail outer
join keeps all rows from the master source and only the matching rows from the detail.
17. exec, or you can use the -delete action
18. grep searches for a matching pattern inside files, whereas find searches for files and
directories
16. find . -name "file name" -exec rm {} \;
(or)
find . -name "file name" -type f -delete
15. Command -> fold -w1
=====================================================================

How to remove Ctrl-M characters in Informatica? In Unix I am aware of how, and I do not
want to use it in pre-/post-session commands. Is there any way to remove them in
Informatica only?
We need to use REPLACECHR for CHR(13), the ASCII value for Ctrl-M.
CHR(13) is the carriage-return character, which is exactly what Ctrl-M (^M) is. It can be
removed with REPLACESTR or REPLACECHR, e.g. REPLACECHR(0, COL, CHR(13), NULL).
===========================================================================
Hi all,
1) What are fatal and non-fatal errors? Examples?
2) Define "reject truncated/overflowed rows".
3) Define "enable high precision" data.
4) What are the types of error logs?
5) What are the types of errors in PowerCenter?
6) What are the types of data movement modes?

=========================================================
Difference between source-based commit and target-based commit:
With source-based commit, the Informatica server commits data into the target table based
on the number of source rows read, at the configured commit interval. With target-based
commit, the server commits based on the rows reaching the targets: it tries to fill the
writer buffer, and once the writer buffer block is filled and the commit interval is
reached, it commits the data into the target table.
=========================================
Source data:
col1 Date
1 01/01/2015
3 04/01/2015
4 12/03/2015
2 17/04/2015
3 18/04/2015
3 02/06/2015
5 13/08/2015
Target:
Col1 Month
4 Jan
0 Feb
4 Mar
5 Apr
0 May
3 Jun
0 Jul
5 Aug
0 Sep
0 Oct
0 Nov
0 Dec
Can anyone tell me a SQL query for this?
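A hedged Oracle sketch (assumes the table is src with a DATE column dt; it sums col1 per month and left-joins a generated 12-month calendar to get the zero months):
SELECT m.mon, NVL(s.total, 0) AS col1
FROM (SELECT TO_CHAR(ADD_MONTHS(DATE '2015-01-01', LEVEL - 1), 'Mon') AS mon, LEVEL AS pos
      FROM dual CONNECT BY LEVEL <= 12) m
LEFT JOIN (SELECT TO_CHAR(dt, 'Mon') AS mon, SUM(col1) AS total
           FROM src GROUP BY TO_CHAR(dt, 'Mon')) s
  ON m.mon = s.mon
ORDER BY m.pos;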
==============================================
Source:
Name State
A Mumbai
A Delhi
B Mumbai
B Delhi
C Mumbai
C Agra
Could anyone tell me a SQL query to get the names which have both Mumbai and Delhi as
states?
SELECT * FROM source a WHERE state = 'Mumbai' AND EXISTS
(SELECT 1 FROM source b WHERE a.name = b.name AND b.state = 'Delhi');
==================================
Hi all,
Facing a problem with an Update Strategy. There are some 22m records in the target. I need
to delete them on the basis of one key (Store_Num: around 756 stores, around 20m rows in
total). Going through the Update Strategy, it is taking ~2 hrs to delete the records. Can
you please give a suggestion to speed up the process?
Is the key indexed, and is the primary key used for the deletion?
Try increasing the commit interval to 1 lakh and the DTM buffer size too.
Else check once whether writing the deletion as pre-/post-session SQL helps.
=============================================================
Difference between a predefined variable denoted by $ and a system variable denoted by $$$
in Informatica?
===============================================
How to convert a single column value into multiple rows using Oracle (query)?
ex:
input:
apple
output:
a
p
p
l
e
SELECT SUBSTR('apple', LEVEL, 1)
FROM dual
CONNECT BY LEVEL <= LENGTH('apple');

Assuming the target is an Oracle DB, do you have any idea how we do this in Informatica
(without a Java transformation)?
Use a lookup on some dummy table containing 100 to 1000 records; we just use it to get a
ROWNUM value (1, 2, 3, 4, 5, ...). Before the lookup, in an Expression, take the length of
the word coming from the source and pass that length to the lookup as the join condition:
if you pass 5 as input, 5 lookup records will match and you get 5 output rows. Then, using
1..5 with SUBSTR, you can get the required output. Please try it and let me know if there
are any issues.

=============================================

What are the load window and data window in a DWH project?


==================================================
When a workflow is triggered, what happens in the background?
When a workflow is triggered, the Integration Service first reads the parameter file and
checks that all the workflow parameters are defined, and then runs the tasks attached to
the workflow in sequence.
================================================
What is the error threshold? ...Give an example.
====================================================
How to convert a single value like "datawarehouse" into multiple rows:
d
a
t
a
w
a
r
e
h
o
u
s
e
using Oracle SQL, and also in Unix?
SELECT SUBSTR('datawarehouse', LEVEL, 1)
FROM dual
CONNECT BY LEVEL <= LENGTH('datawarehouse');
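For the Unix part, a one-liner sketch:
echo "datawarehouse" | fold -w1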
=======================================================
Source: INFORMATICA
Target:
I
N
F
O
R
M
A
T
I
C
A
How to do it?
SELECT SUBSTR('INFORMATICA', LEVEL, 1) FROM dual
CONNECT BY LEVEL <= LENGTH('INFORMATICA');
https://www.youtube.com/watch?v=ZRGqJaZM6eM

===================================================================
1) How to update a target table without a primary key?
2) How to update a target without using an Update Strategy transformation?
We can update a target table without a primary key by using a target-level update override
(the :TU qualifier), with "Treat source rows as Update" at the session level.
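A sketch of such a target update override (the table and port names are illustrative; :TU references the ports of the target definition):
UPDATE t_customer
SET    customer_name = :TU.customer_name
WHERE  customer_code = :TU.customer_code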
====================================================
I have a requirement where the staging table has 5 columns. 4 columns of data are directly
populated from source to target, but the 5th column's transformation logic is a union on
the same table with a different filter condition. When I write the query I am facing an
error like "union columns mismatch count". Please help me with the correct SQL query.
Source:
Ip_id,bank,ccy,component,rate
1,hdfc,inr,pnl_bal,1250
2,hdfc,inr,int_rate,1350
3,hdfc,inr,pnl_rate,1100
The query is (this is what I tried, and it errors out):
SELECT stg.ip_id, stg.bank, stg.ccy, stg.component, rt.union_col
FROM (SELECT * FROM stg_table WHERE rate IN (SELECT 'inr_bal' || rate FROM stg_table WHERE component = 'int_bal')
UNION
(SELECT 'inr_bal' || rate FROM stg_table WHERE component = 'panel_bal')
The expected output is:
union_col,ip_id,bank,ccy,component,rate
inr_bal1250,1,hdfc,inr,pnl_bal,1250
Here 4 columns should be populated as-is from the stage table, but for the 5th column we
should use a union with another stage table. Help me with the correct SQL query. Thanks
SELECT b.union_col, a.ip_id, a.bank, a.ccy, a.component, a.rate
FROM stage a,
     (SELECT CONCAT('inr_bal', TO_CHAR(rate)) AS union_col FROM stage) b;
==============================================================
Input is "oracle";
how can we display the output as
o
or
ora
orac
oracl
oracle
????
SELECT SUBSTR('oracle', 1, LEVEL)
FROM dual
CONNECT BY LEVEL <= LENGTH('oracle');

=====================================================
How to stop a workflow automatically after 5 runs?
==================================================================
CGI Interview questions
> Types of schemas
> Difference between TRUNCATE and DELETE
> SCD theory
> DML statements and DDL statements
> Connected lookup and unconnected lookup
> Version port in SCD 2
> Triggering concept
> Difference between primary key, unique key and surrogate key
> Have you used the SP transformation, and when have you used it?
> What is a view, what is a materialized view, and why do we use a view instead of a table?
> Dynamic lookup cache and static cache
Types of schemas:
I guess there are two types of schema.
1. System schema: contains all metadata-related information like grants, DB link
information, DB user information.
ex. SYS, SYSDBA, SYSTEM etc.
2. User schema: common to a user; can contain all database-related objects
like tables, views, properties of columns, etc.
=======================================================
There is a table called abc_wk (random name, not the actual one, lol) in production, into
which data gets inserted every Monday for the past week, based on some other tables.
In a SQL transformation there is a huge INSERT INTO statement that inserts data into this
table. So now there are about 500 million records in the abc_wk table.
Now we have added two new columns in DEV as per a requirement. But when we migrate to
production, what will happen to the historical data? Will those two columns be null for all
history?
Of course they will be NULL, unless the history data is truncated and refreshed in bulk
again from the source, or the added columns are manually updated.
===========================================
