
Adding a column to an existing table:
alter table STG_ATC_GROUPS ADD (
  ATCG_ATC_SCO_ID  VARCHAR2(50 BYTE),
  ATCG_ATC_SCO_UID VARCHAR2(240 BYTE)
);

ALTER TABLE <table_name> TRUNCATE PARTITION <partition_name>;

To delete old records which are older than 3 years, call the cleanup procedure:

Begin Del_old_and_repeated_records.Del_3yr_old_F_SA_BE_REG; end;

To prevent the unnecessary re-execution of the aggregation (sum(sales)), temporary tables can be created:
1. Create a table named T1 to hold the total sales for all stores.
2. Create a table named T2 to hold the number of stores.
3. Create a table named T3 to hold the store name and the sum of sales for each store.
A fourth SQL statement then uses tables T1, T2 and T3 to replicate the output from the original query:

create table t1 as select sum(quantity) all_sales from sales;
create table t2 as select count(*) nbr_stores from stores;
create table t3 as select store_name, sum(quantity) store_sales from stores natural join sales group by store_name;

select store_name from t1, t2, t3 where store_sales > (all_sales / nbr_stores);
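The same output can also come from a single statement with analytic functions, avoiding the temporary tables entirely; a sketch, assuming the stores/sales schema above and that every store has at least one sale:

-- One-statement alternative: window functions compute the grand total and
-- the store count once, so no T1/T2/T3 scratch tables are needed.
SELECT store_name
FROM   (SELECT store_name,
               SUM(quantity)              AS store_sales,   -- per-store total
               SUM(SUM(quantity)) OVER () AS all_sales,     -- grand total
               COUNT(*) OVER ()           AS nbr_stores     -- number of store groups
        FROM   stores NATURAL JOIN sales
        GROUP  BY store_name)
WHERE  store_sales > all_sales / nbr_stores;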

There are three groups of pgm_main_category_id (17, 18 and 19). Now, if you want to select the top 2 records from each group, the query is:

select pgm_main_category_id, pgm_sub_category_id, file_path
from ( select pgm_main_category_id, pgm_sub_category_id, file_path,
              rank() over (partition by pgm_main_category_id
                           order by pgm_sub_category_id asc) as rankid
       from photo_test ) photo_test
where rankid < 3   -- replace 3 by any number for top 2, top 3, etc.
order by pgm_main_category_id, pgm_sub_category_id;

The result is:

pgm_main_category_id  pgm_sub_category_id  file_path
17                    15                   photo/bb1.jpg
17                    16                   photo/cricket1.jpg
18                    18                   photo/forest1.jpg
18                    19                   photo/tree1.jpg
19                    21                   photo/laptop1.jpg
19                    22                   photo/camer1.jpg

http://dbcrusade.blogspot.com/2009/06/informatica-setting-codepage.html

Informatica: Lookups Versus Outer Joins
A plain lookup over a dimension (fetching ROW_WID) can be replaced by an outer join. I have created a prototype to demonstrate this. The SILOS.TEST_BULK_SIL mapping was created as a copy of SILOS.SIL_PayrollFact. The original mapping had a mapplet mplt_SIL_PayrollFact with 12 lookups over various dimensions; it takes input (datasource_num_id, integration_id etc.) from the fact staging table. I removed this mapplet completely and incorporated all 12 dimensions in the parent SQL itself, outer joining them to W_PAYROLL_FS. All the expressions that were built in the mapplet were moved into the parent mapping. Following are the results:

Mapping                          Records Loaded (million)  Time Taken (hr.)  RPS (Reader)
SIL_PayrollFact (uses lookup)    183.3                     16.3              3132
TEST_BULK_SIL (uses Outer Join)  183.3                     6.02              8429

The results show that the outer-join-based mapping ran approx. 2.7 times faster than the one based on lookups.

Again, lookups which involve some complex calculations may not be replaceable by an outer join.

http://www.miky-schreiber.com/Blog/CategoryView,category,BI,Informatica.aspx

Years between two dates: TO_INTEGER(TO_CHAR(SYSDATE,'YYYY')) - TO_INTEGER(TO_CHAR(END_DATE,'YYYY'))

To abort a session when a prior SQL script fails, create a variable port and code it:
IIF(ScriptResult = 'FAILED', ABORT('Previous SQL script failed to execute'))

Q): Add the digits of an integer. E.g. for 12345 I want the result 1+2+3+4+5 = 15. How to do this in Informatica for each row of this field?
ANS): Follow the steps below:
1) Insert the value id = 12345 in your source table.
2) Bring it into Informatica (source -> Source Qualifier).
3) Add an Expression transformation with these ports:
v_id1 = TO_INTEGER(SUBSTR(TO_CHAR(id), 1, 1))
v_id2 = TO_INTEGER(SUBSTR(TO_CHAR(id), 2, 1))
v_id3 = TO_INTEGER(SUBSTR(TO_CHAR(id), 3, 1))
v_id4 = TO_INTEGER(SUBSTR(TO_CHAR(id), 4, 1))
v_id5 = TO_INTEGER(SUBSTR(TO_CHAR(id), 5, 1))
O_id = v_id1 + v_id2 + v_id3 + v_id4 + v_id5
4) Pass O_id to the target.
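For reference, the same digit sum can be done in plain Oracle SQL; a sketch, assuming a hypothetical table src with a numeric column id:

-- Walk the digits of id with a row generator on dual and sum them.
SELECT s.id,
       (SELECT SUM(TO_NUMBER(SUBSTR(TO_CHAR(s.id), LEVEL, 1)))
          FROM dual
        CONNECT BY LEVEL <= LENGTH(TO_CHAR(s.id))) AS digit_sum
  FROM src s;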

I'm using the CUME() function in my expression to calculate a running total. However, the CUME function doesn't reset the value when a new key (program name in my case) is encountered. I would like to know if there is a way to group the CUME value by program name.

ANS): Use these ports, in this order (variable ports evaluate top to bottom, so the running total must be computed before the previous-key variable is overwritten):
Port 1: KEY (input)
Port 2: VAL (input)
Port 3: V_TOTAL (variable) = IIF(KEY != V_KEY, VAL, V_TOTAL + VAL)
Port 4: V_KEY (variable) = KEY
Port 5: RUNNING_TOTAL (output) = V_TOTAL
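In SQL the same keyed running total is a windowed SUM; a sketch, with the table and column names (t, key_col, val, seq_col) assumed for illustration:

-- PARTITION BY restarts the total for each key, which is exactly what
-- CUME() could not do.
SELECT key_col, val,
       SUM(val) OVER (PARTITION BY key_col
                      ORDER BY seq_col
                      ROWS UNBOUNDED PRECEDING) AS running_total
  FROM t;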
Q): I am trying to implement a Sequence Generator with an Expression transformation. I have ports ID_MAX_v and ID_MAX_o and a mapping variable $$MAX_ID in the Expression transformation. The logic is as follows:

ANS):
$$MAX_ID ---> SETCOUNTVARIABLE($$MAX_ID)
ID_MAX_v ---> IIF(ISNULL($$MAX_ID) OR $$MAX_ID = 0, 1, SETCOUNTVARIABLE($$MAX_ID))
ID_MAX_o ---> ID_MAX_v
Q): I want to generate multiple rows from a single row based on values in a field. How can we achieve this in Informatica / SQL? E.g. the input is a table with the following values:

Field1  Field2
AAA     1,2
BBB     3,4,5

Output should be:

Field1  Field2
AAA     1
AAA     2
BBB     3
BBB     4
BBB     5

ANS): After the Source Qualifier, use an Expression transformation:
Field1 (I/O, String)
Field2 (I/O, String)
Field2_var (V, String) = REPLACECHR(0, Field2, ',', NULL)
Field2_len_var (V, Integer) = LENGTH(Field2_var)
Field2_out (O, String) = Field2_var
Field2_len (O, Integer) = Field2_len_var
Connect Field1, Field2_out and Field2_len to a Java transformation (when you create the Java transformation you have to create one input group and one output group), put the following code in the Java Code tab (On Input Row) and compile it.

Use the Java transformation:

if (!isNull("IN_field2")) {
    int len = IN_field2_len;               // number of values (commas were stripped upstream)
    String str = IN_field2;
    for (int i = 0; i < len; i++) {
        OUT_field2 = str.substring(i, i + 1);  // emit one value per generated row
        generateRow();
    }
}
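If the data sits in Oracle, the split can also be pushed into the source SQL instead of a Java transformation; a sketch, assuming a table t(field1, field2) holding the comma-separated values (Oracle 11g+ for REGEXP_COUNT):

-- Generate one row per comma-separated token; the PRIOR clauses keep the
-- hierarchical expansion scoped to each source row.
SELECT t.field1,
       TRIM(REGEXP_SUBSTR(t.field2, '[^,]+', 1, LEVEL)) AS field2_value
  FROM t
CONNECT BY LEVEL <= REGEXP_COUNT(t.field2, '[^,]+')
       AND PRIOR field1 = field1
       AND PRIOR SYS_GUID() IS NOT NULL;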

Q): If Address1 is null, Address2 has to be shown in Address1.

Source:
Name   Address1      Address2    Ph
john   10/4 Newyork  11/4 texas  123
Paul                 3/4 delas   234

Output:
Name   Address1      Address2    Ph
john   10/4 Newyork  11/4 texas  123
Paul   3/4 delas     3/4 delas   234

ANS): Create a new port for Address1 in the Expression transformation, define it as an output port, and assign the following value to it: Address11 (for Address1) = IIF(ISNULL(Address1), Address2, Address1)
Q): How to load the last 'n' records to the target if my source is a flat file?
ANS): While you import the file in the Source Analyzer...

there you can set "import from line"; or you can use Input Type = Command in the session properties and choose tail -n 5 EMP.txt. You define all of this in the session properties for the source flat file: Input Type: command (by default it is file). Once you choose this option the other options change too, and in place of FILENAME you will now see a COMMAND option, where you can give tail -n 5 EMP.txt, or with a path, tail -n 5 /path/EMP.txt.

http://radhakrishnasarma.blogspot.com
http://www.informaticans.com/blog/

Date validation: IIF(ISNULL(order_date), NULL, IIF(IS_DATE(order_date, 'MM/DD/YYYY'), order_date, 'INVALID DATE')). It is easier to validate the condition of each field separately in the Expression transformation, and then in the Filter transformation use a condition such as ID != 'NULL' AND (ORDER_DT != 'NULL' AND ORDER_DT != 'INVALID DATE'), so a row is dropped as soon as any field fails its check.

http://arpit.leading-technology.net/Info1000004.htm
http://randomusefulnotes.blogspot.com/2010/09/difference-between-yyyy-and-rrrr-format.html

http://infapc.blogspot.com/
http://www.etl-developer.com/

Q): How to find the 3 duplicate records among 1 million records?
My understanding is that you want the duplicate records (and not to filter them out). If that's the case you can:
1) In an Aggregator, group by the key columns. Also create an output port with COUNT(*), which gives the number of rows in each group.
2) Have a Filter after the Aggregator to pass only those records where COUNT(*) > 1.

Q): All employees have to undergo certain tasks. Under one task there are multiple activities. We have Activity_Status as 'PASS' or 'FAIL' for each activity. Now if, for instance in the example below, the employee with EmpNo=1500 has passed both activities for a particular task, i.e. FLD_100, I want to populate a column Task_Status = 'COMPLETE' in the target, since he has Activity_Status = 'PASS' in both activities for that task.

Similarly, if any one Activity_Status = 'FAIL' for a particular task ('FLD_101' in this case), then Task_Status = 'INCOMPLETE' in the target table. Now for EmpNo=1500, he has two tasks (FLD_100 with Task_Status = 'COMPLETE' and FLD_101 with Task_Status = 'INCOMPLETE'), so I want to populate a column Emp_Status for all these records as 'DISQUALIFIED', since he has at least one incomplete task. For another employee, EmpNo=1459, the desired output has Emp_Status='QUALIFIED' because Task_Status='COMPLETE' for both FLD_105 and FLD_106. The issue is that there are multiple employees, and the number of tasks and activities is not fixed.

ANS 1):
1. Use a Sorter (sort by EMP_ID and ACTVY_STATUS), so that if any activity status is FAIL, that record comes first.
2. Drag all the columns into an Expression. Keep the same order of input, variable and output ports as below, and uncheck the output option for EMP_ID:
a. v_FLAG (variable, char 1) = IIF(prev_EMP_ID = EMP_ID, 'N', 'Y')
b. v_EMP_STATUS (variable, char 15) = IIF(v_FLAG = 'Y', IIF(ACTVY_STATUS = 'PASS', 'QUALIFIED', 'DISQUALIFIED'), v_EMP_STATUS)
c. prev_EMP_ID (variable) = EMP_ID
d. o_EMP_STATUS (output) = v_EMP_STATUS
e. o_TASK_STATUS (output) = IIF(ACTVY_STATUS = 'PASS', 'COMPLETE', 'INCOMPLETE')

ANS 2): This can be done with two instances of the source (drag the same source twice) in the mapping:
1) From the 1st instance drag all ports to an Expression and add an output port Task_Status = IIF(Activity_Status = 'PASS', 'COMPLETE', 'INCOMPLETE').
2) i) From the 2nd instance drag only EmpNo and Activity_Status to a Sorter and check the keys EmpNo (ascending) and Activity_Status (ascending). ii) Pass all ports to an Aggregator, group by EmpNo, and add an output port Status_out = FIRST(Activity_Status).
3) Take a Joiner and drag in the ports from the Expression (1st instance, detail) and the Aggregator (2nd instance, master); make the join condition EmpNo = EmpNo1.
4) Pass all ports from the Joiner (except EmpNo1) to an Expression and add an output port Emp_Status = IIF(Activity_Status = 'PASS' AND Status_out = 'PASS', 'QUALIFIED', 'DISQUALIFIED').
5) Drag the required ports to the target. A set-based SQL sketch of the same logic follows below.
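A set-based sketch of the same rules, assuming a hypothetical table emp_task_activity(empno, task, activity_status), one output row per task:

-- 'FAIL' sorts before 'PASS', so MIN(activity_status) = 'PASS' means every
-- activity of the task passed; the nested MIN ... OVER extends the test to
-- all of the employee's tasks.
SELECT empno, task,
       CASE WHEN MIN(activity_status) = 'PASS'
            THEN 'COMPLETE' ELSE 'INCOMPLETE' END AS task_status,
       CASE WHEN MIN(MIN(activity_status)) OVER (PARTITION BY empno) = 'PASS'
            THEN 'QUALIFIED' ELSE 'DISQUALIFIED' END AS emp_status
  FROM emp_task_activity
 GROUP BY empno, task;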

Cannot find specified parameter file


Found the problem. In Windows, under Folder Options, there was a checkbox 'Hide extensions for known file types'. I had created the file as

param.txt. Since the above option was checked, I had to give the parameter file name as param.txt.txt in the session properties, or rename the file to param (the .txt extension being hidden).

Q): Compare rows from two sources. Let us consider the example you have provided:
xml source (ID, NAME, DOB):
1234, Ram, 01/10/2010
123, Ram, 01/10/2010
DB2 source:
1234, Ram, 01/10/2010
1234, Sam, 01/10/2010
Connect all the ports from the xml source to an Expression, and in the same way connect all the ports from DB2 to the same Expression. The flow should be: SQ ---> EXP ---> TAR. The Expression will contain ports ID, NAME, DOB (the first three from xml) and ID1, NAME1, DOB1 (the next three from DB2). Now create variable and output ports as follows:
column_compare_v (V, String) = IIF((ID=ID1) AND (NAME=NAME1) AND (DOB=DOB1), 'VALID', 'INVALID')
xml_id (O) = ID
db2_id (O) = ID1
xml_name (O) = NAME
db2_name (O) = NAME1
xml_dob (O) = DOB
db2_dob (O) = DOB1
rec_status (O) = column_compare_v

Q): I'm looking to transform the following source:
Identifier Year Amt
1 2009 50
1 2008 60
1 2007 55
1 2006 40
1 2005 30
2 2009 50
3 2008 60
4 2007 55
5 2006 40

6 2005 30
into the following target:
Identifier Year1Amt Year2Amt Year3Amt Year4Amt Year5Amt
1 50 60 55 40 30
2 50 60 55 40 30
Note: I'll always have 5 years' worth of data for each identifier.
ANS): Use a Sorter, an Expression and a Filter (a set-based SQL sketch follows below).
1) Sort your data on Identifier, Year.
2) Sorter to Expression:
Identifier (IN/OUT)
Year (IN/OUT)
Amt (IN/OUT)
var_counter (VAR) = IIF(ISNULL(var_counter) OR var_counter = 5, 1, var_counter + 1)
var_year1amt (VAR) = IIF(var_counter = 1, Amt, var_year1amt)
var_year2amt (VAR) = IIF(var_counter = 2, Amt, var_year2amt)
var_year3amt (VAR) = IIF(var_counter = 3, Amt, var_year3amt)
var_year4amt (VAR) = IIF(var_counter = 4, Amt, var_year4amt)
var_year5amt (VAR) = IIF(var_counter = 5, Amt, var_year5amt)
out_counter (OUT) = var_counter
year1amt (OUT) = var_year1amt
year2amt (OUT) = var_year2amt
year3amt (OUT) = var_year3amt
year4amt (OUT) = var_year4amt
year5amt (OUT) = var_year5amt
3) Push the following ports from the Expression to a Filter: Identifier, out_counter, year1amt, year2amt, year3amt, year4amt, year5amt, with the filter condition IIF(out_counter = 5, TRUE, FALSE).
4) From the Filter, map your required fields to the target.
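The same pivot in one SQL statement; a sketch, assuming a source table src(identifier, year, amt):

-- Number the five years per identifier, then fold them into columns.
SELECT identifier,
       MAX(CASE WHEN rn = 1 THEN amt END) AS year1amt,
       MAX(CASE WHEN rn = 2 THEN amt END) AS year2amt,
       MAX(CASE WHEN rn = 3 THEN amt END) AS year3amt,
       MAX(CASE WHEN rn = 4 THEN amt END) AS year4amt,
       MAX(CASE WHEN rn = 5 THEN amt END) AS year5amt
  FROM (SELECT identifier, amt,
               ROW_NUMBER() OVER (PARTITION BY identifier
                                  ORDER BY year DESC) AS rn
          FROM src)
 GROUP BY identifier;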

Source definition:
Customer Phone_Number
A        9848403211
A        9812675432
A        9112356788
A        9876503276
B        9567890765
B        9876098567

Required target:
Customer Home        Office      Other
A        9848403211  9812675432  9112356788,9876503276
B        9567890765  9876098567  null

ANS): SQ ---> EXP ---> AGG ---> TAR. Follow the same port order in the Expression:
Customer (I & O)
Phone_Number (I)
cnt (V, Integer) = IIF(prev_cus = '' OR Customer = prev_cus, cnt + 1, 1)
prev_cus (V) = Customer
home_v (V) = IIF(cnt = 1, Phone_Number, home_v)
office_v (V) = IIF(cnt = 2, Phone_Number, office_v)
others_v (V) = IIF(cnt = 3, Phone_Number, IIF(cnt > 3, others_v || ',' || Phone_Number, NULL))
home (O) = home_v
office (O) = IIF(cnt = 1, NULL, office_v)
others (O) = IIF(cnt = 1, NULL, others_v)
Pass the output columns to the Aggregator and group by Customer (see the SQL sketch below).

Q): Convert a day of the year into a date.
ANS): You can use ADD_TO_DATE(Start_date_of_year, 'DD', 152); set Start_date_of_year to 1-JAN-2010. GET_DATE_PART(ADD_TO_DATE(CURRENT_YEAR, 'DD', 152), 'MM') returns the month number; then use DECODE to get the month name. Use the current year as a variable if you want to process multiple years' data.

Q): Coding an advanced filter to give data that is greater than or equal to seven days before today's date.
ANS): Put it in the SQ as a where condition: SystemModstamp >= trunc(sysdate - 7)
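Returning to the phone-number pivot above, a SQL sketch, assuming a hypothetical table phones(customer, phone_number) and Oracle 11gR2+ for LISTAGG:

-- The first two numbers per customer become Home and Office; LISTAGG skips
-- NULLs, so the remaining numbers collapse into the comma-separated Other.
SELECT customer,
       MAX(CASE WHEN rn = 1 THEN phone_number END) AS home,
       MAX(CASE WHEN rn = 2 THEN phone_number END) AS office,
       LISTAGG(CASE WHEN rn >= 3 THEN phone_number END, ',')
         WITHIN GROUP (ORDER BY rn) AS other
  FROM (SELECT customer, phone_number,
               ROW_NUMBER() OVER (PARTITION BY customer
                                  ORDER BY phone_number) AS rn
          FROM phones)
 GROUP BY customer;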

http://informatica-elango.blogspot.com/
http://blogs.hexaware.com/informatica-way/output-files-in-informatica
http://infiniteandmore.blogspot.com/search/label/Informatica
http://www.itap.purdue.edu/ea/standards/powermart.cfm
http://orafaq.com/node/55
http://troels.arvin.dk/db/rdbms/
http://nelrickrodrigues-informatica.blogspot.com/2010_05_01_archive.html
http://learn-infa.blogspot.com
http://www.nisheetsingh.com/

Q): I am trying to write the count of rows into a trailer file. I am using the MAX(COUNT(COLUMN)) function to do this. My problem is, say the count is 43: is there a way to make Informatica write the count as 0000000043?
ANS): The expression is LPAD(TO_CHAR(MAX(COUNT(x))), 10, '0').

Scenario #01: Change detection using a timestamp on source rows. In this typical scenario the source rows have two extra columns, say row_created_time and last_modified_time.
1. In the mapping, create a mapping variable $$LAST_ETL_RUN_TIME of datetime data type.
2. Evaluate the condition SetMaxVariable($$LAST_ETL_RUN_TIME, SessionStartTime); this step stores the time at which the session started.
3. Use $$LAST_ETL_RUN_TIME in the where clause of the source SQL. During the first run (initial seed) the variable still holds its initial value.
4. Now let us assume the session is run on 01/01/2010 00:00:000 for the initial seed.
5. When the session is executed on 02/01/2010 00:00:000, the SQL would look like the sketch below.
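-- Sketch of step 5: the Integration Service expands $$LAST_ETL_RUN_TIME,
-- which after the initial seed holds 01/01/2010 00:00:00.
SELECT *
  FROM employee
 WHERE last_modified_time > TO_DATE('01/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS');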

------------------------------------------------------------------------------------------------
1) Create a mapping variable $$last_load_date of Date datatype with aggregation Max.
2) Let the initial value of the variable be 01/15/2010 00:00:00.
3) In the Expression have a variable port like last_load_date, set with SETMAXVARIABLE($$last_load_date, ACCT_REGISTER_DATE).
4) The SQ query should be as follows:
Select a, b from table where ACCT_REGISTER_DATE >= TRUNC(TO_DATE('$$last_load_date'))

Dynamically named flat files: SQ ---> EXP ---> TC ---> TAR
EXP:
1) Pull all the columns from the source to the Expression.
2) Create two variable and two output ports, in the same order as follows:
file_status_v (VAR) = IIF((expgrp_prev = '') OR (ExportGroup = expgrp_prev), 'N', 'Y')
expgrp_prev (VAR) = ExportGroup
file_status (OUT) = file_status_v
filename_o (OUT) = CONCAT('xxx', ExportGroup)
Transaction Control:
1) Pull columns 1-3 and the two output ports into the TC.
2) Put this expression in the TC:

IIF(file_status = 'N', TC_CONTINUE_TRANSACTION, TC_COMMIT_BEFORE)
then connect to the target.

Q): How to get email notification of the workflow start time and completion time in Informatica 8.6.1?
ANS): start ---> session ---> assignment ---> e-mail
1) Create two workflow variables of Date/Time datatype: $$wkf_start_time and $$wkf_end_time.
2) In the Assignment task assign values to the variables as follows:
$$wkf_start_time = WORKFLOWSTARTTIME
$$wkf_end_time = $s_session.EndTime (if you have multiple sessions then it should be the end time of the last session)
3) E-mail task: the e-mail text should be as follows:
Workflow_Starttime = $$wkf_start_time
Workflow_Endtime = $$wkf_end_time

------------------------------------------------------------------------------------------------
Q): My mapping contains an Expression that receives input ports from a flat file source. I want an output port's formula to come from a parameter, but as far as I know the Expression will treat the parameter as a string instead of evaluating it. E.g. let's say I have 5 columns coming from a flat file source (Column1, Column2, ..., Column5). Column1 is an input port mapped from the source, and OP_Column1 is the output port. In the parameter file I have defined this parameter:
$$Para1=IIF(Column1, SUBSTR(Column2, INSTR(Column3, '_', 1, 1), length(Column2)), LTRIM(RTRIM(Column4)) || TO_CHAR(SYSDATE,'ddmmyy'))
I want OP_Column1 to be assigned a value by executing the above formula.

Solution: The Integration Service expands mapping parameters and variables in an expression after it parses the expression. If you have an expression that changes frequently, you can define the expression string in a parameter file. To do this, create a mapping parameter or variable to store the expression string, and set the parameter or variable to the expression string in the parameter file. For example, to define the expression IIF(color='red',5) in a parameter file, perform the following steps:
1. In the mapping that uses the expression, create a mapping parameter $$Exp. Set IsExprVar to true and set the datatype to String.
2. In the Expression Editor, set the expression to the name of the mapping parameter as follows:

$$Exp
3. Configure the session or workflow to use a parameter file.
4. In the parameter file, set the value of $$Exp to the expression string as follows:
$$Exp=IIF(color='red',5)

Q): I have two ports: 1. IC_NUMBER (varchar2), e.g. IC_NUMBER = 123456789H. 2. DESCRIPTION (varchar2). I need to check the first 2 characters of IC_NUMBER; if they are '12' then DESCRIPTION (output port) should be 'PASS'.
Solution: IIF(SUBSTR(RTRIM(LTRIM(IC_NUMBER)), 1, 2) = '12', 'PASS', 'FAIL')

The Street Address item will contain the values from ADDRESS_LINE_1 and ADDRESS_LINE_2 of a view. Leading and trailing spaces will be stripped from the address strings of the Peradigm view. If ADDRESS_LINE_2 contains information, it will appear as a second line of information in the Street Address item, below the ADDRESS_LINE_1 information. If ADDRESS_LINE_1 contains information and ADDRESS_LINE_2 is blank, only the ADDRESS_LINE_1 information will appear in Street Address (and without a following blank line or space). If ADDRESS_LINE_2 contains information and ADDRESS_LINE_1 is blank, only the ADDRESS_LINE_2 information will appear in Street Address (and without a preceding blank line or space).
SOLUTION: The following query can be used in the SQL override and gives the desired result, as per the requirements:
SELECT CASE
         WHEN NVL(TRIM(add_1),'NULL') != 'NULL' AND NVL(TRIM(add_2),'NULL') = 'NULL'
           THEN TRIM(add_1)
         WHEN NVL(TRIM(add_1),'NULL') = 'NULL' AND NVL(TRIM(add_2),'NULL') != 'NULL'
           THEN TRIM(add_2)
         ELSE REPLACE(TRIM(add_1) || ',,' || TRIM(add_2), ',,', CHR(10))
       END AS Street_Address
FROM Address

DEPLOYMENT FROM DEV TO PRODUCTION GUIDELINES: Run the following query, export the result as an XML file and import this file in the prod environment:
1) Connect to "Repository Manager" and then select Tools --> Queries
2) Select the "New" option
3) Select the Parameter Name "Object Type"
4) Select "Object Type" is equal to "Workflow"
5) Select the Parameter Name "Folder"
6) Select "Folder" is equal to "Mapping"
7) Save it and execute the query.

Has any one of you worked with Pushdown Optimization? There are certain limitations of pushdown optimization, like:

(1) Some ETL design aspects cannot be pushed to the database level, like the use of variable ports or a Rank transformation.
(2) Teradata sessions fail when you use full pushdown optimization for a session containing a Sorter transformation, because a sub-query in a view cannot have an ORDER BY clause.
(3) All types of partitioning are not possible in one ETL session.
(4) When the Integration Service pushes transformation logic to the database, it cannot track errors that occur in the database. Also, the session log does not contain details for transformations processed on the database.
Q): The workflow must stop when there is an error in the log file or rejected rows.
ANS): In the session properties there is an option "Stop on errors"; change it from 0 to 1. Also check the "Suspend on error" checkbox in the workflow properties.

How to create an Oracle sequence:
SQL> create sequence pubs2
  2  start with 8
  3  increment by 2
  4  maxvalue 10000
  5  cycle
  6  cache 5;

Q): Suppress repeating values. Source:
Col1  Col2  Col3
aa    bb    100
aa    bb    200
cc    xx    20
cc    xx    100

My output should be:
Col1  Col2  Col3
aa    bb    100
Null  Null  200
cc    xx    20
Null  Null  100

The logic: if Col1/Col2 of the previous row and the current row are the same, output just Col3.
ANS): Send sorted data to the Expression transformation and use this port order:
For Col1:
Col1 (input)
v_var1 (V port) = IIF(col1 = v_col1, 'NULL', col1)
v_col1 (V port) = col1
o_col1 (O port) = v_var1
For Col2:
Col2 (input)
v_var2 (V port) = IIF(col2 = v_col2, 'NULL', col2)
v_col2 (V port) = col2
o_col2 (O port) = v_var2
A LAG-based SQL sketch of the same logic follows below.
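The same suppression with analytic LAG; a sketch, assuming a table t(col1, col2, col3) and the sort order used above:

-- Blank out Col1/Col2 when they repeat the previous row's values.
SELECT CASE WHEN col1 = LAG(col1) OVER (ORDER BY col1, col2, col3)
             AND col2 = LAG(col2) OVER (ORDER BY col1, col2, col3)
            THEN NULL ELSE col1 END AS col1,
       CASE WHEN col1 = LAG(col1) OVER (ORDER BY col1, col2, col3)
             AND col2 = LAG(col2) OVER (ORDER BY col1, col2, col3)
            THEN NULL ELSE col2 END AS col2,
       col3
  FROM t;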
Q): How to populate the name of the source file into one of the target fields?

ANS): Informatica flat file sources have a built-in feature to supply the file name along with the file data: the port "Add Currently Processed Flat File Name Port". To do: drag and drop your source in the Source Analyzer, open it, and enable the last row in the Properties tab.

After loading, the CITY (string) column has some blank values. Those values are not NULLs/spaces. I used both the ISNULL and IS_SPACES functions to filter those records, but to no avail. After loading I am seeing them as blanks in the target.

SOLUTION:-

The issue is with an empty string: an empty string ('') doesn't contain spaces. In the Filter I have given the condition CITY != ''.

Q): I want to know, if the input string is all characters, whether there is any function like ISNULL and IS_NUMBER for char, to check if the string has only letters.
SOLUTION: REG_MATCH(input_string, '[a-zA-Z]+'), for example REG_MATCH('This is a test', '[a-zA-Z]+').

Q): We are using Informatica 8.6.1 for ETL. Some of the previously developed mappings have used User Defined Functions, but when I try to edit or create a new UDF, the link is disabled. What can be the reason?
ANS): I will tell you the reason: in the navigation pane click on the User Defined Functions node and then go to Tools --> UDF; now you will see enabled everything that you want. Without selecting UDF, the options in Tools stay disabled. There are many such cases; for instance, it happens the same way with the labelling wizard in Repository Manager.

Q): Dynamically creating flat files in Informatica?
SOLUTION):
empno deptno
100   10
100   20
100   30
200   10
200   20
200   20
200   10
300   10
300   10
300   10

Step 1: Drag and drop the columns from the Source Qualifier to a Sorter; set the sort key on DEPTNO (ascending).
Step 2: Drag and drop the ports to an Expression and create ports like:
PORTNAME     TYPE  EXPRESSION
PREVDEPTNO   V     VAL
EMPNO        I/O
DEPTNO       I/O
VAL          V     DEPTNO
CHECK_FILE   V     DECODE(DEPTNO, PREVDEPTNO, 1, 0)
OUTPUT_FILE  O     CHECK_FILE
Step 3: Connect EMPNO, DEPTNO and OUTPUT_FILE to a Transaction Control transformation and write DECODE(OUTPUT_FILE, 1, TC_CONTINUE_TRANSACTION, 0, TC_COMMIT_BEFORE).
Step 4: Connect to the file target, with DEPTNO feeding the target's FileName port.

Q): How to add a quotation mark (') in the parameter file?


chr(39) || 'USA' || chr(39)

Q): Write a SQL query for the following source:
subject mark
maths   30
science 20
social  80

TABLE NAME IS "MARK"


Required output:
maths science social
30    20      80

ANS):
select (select mark from MARK where subject = 'maths')   maths,
       (select mark from MARK where subject = 'science') science,
       (select mark from MARK where subject = 'social')  social
from dual;
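An equivalent conditional-aggregation pivot that scans MARK only once:

SELECT MAX(DECODE(subject, 'maths',   mark)) AS maths,
       MAX(DECODE(subject, 'science', mark)) AS science,
       MAX(DECODE(subject, 'social',  mark)) AS social
  FROM MARK;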

Q): My source contains data like this:
eno ename phno
100 john  9989020508
101 ram   7246599999
I want to load the data into the target as:
eno name phno
100 john (998)-9020-508
101 ram  (724)-6599-999
SOLUTION): phoneno = '(' || SUBSTR(phno,1,3) || ')-' || SUBSTR(phno,4,4) || '-' || SUBSTR(phno,8,3)
REPLACECHR(0, tech_data_value_in, CHR(39), CHR(34)) -- 39 is a single quote, 34 a double quote. He advised me to use the REPLACESTR function instead of REPLACECHR.

Script to remove the header and footer from a file:


Here is the script:

#!/bin/sh
sed '1d' < filename > tempfile      # drop the first line (header)
chmod 777 tempfile
sed '$d' < tempfile > sourcefile    # drop the last line (footer)
chmod 777 sourcefile
echo "Header & Footer removed successfully!"
exit 0

Search for Truncate Table Option
Suppose you want to find out on which of your sessions the truncate target table option is set. Instead of manually opening each task and checking if the truncate table option is on, you may use the below query:

select task_name, 'Truncate Target Table' ATTR, decode(attr_value, 1, 'Yes', 'No') Value
from OPB_EXTN_ATTR OEA, REP_ALL_TASKS RAT
where OEA.SESSION_ID = rat.TASK_ID and attr_id = 9

Find Mappings where SQL Override is used
The below query will give you a count of the mapping instances where a SQL Override has been used. The count is presented folder by folder.

WITH detail AS (SELECT c.subject_area, c.mapping_name, d.instance_name source_qualifier_name, CASE WHEN a.attr_value IS NOT NULL THEN 1 ELSE 0 END as OVR_OK FROM rep_all_mappings c, opb_widget_inst d, opb_widget_attr a WHERE c.mapping_id = d.mapping_id AND c.mapping_version_number = d.version_number AND d.widget_type = 3 AND d.widget_id = a.widget_id AND a.widget_type = d.widget_type AND a.attr_id = 1 ) SELECT subject_area, 'SQ_OVERIDE' STATUS, COUNT (DISTINCT mapping_name) NO_OF_Mapping, COUNT (DISTINCT (mapping_name || source_qualifier_name)) NO_OF_SQ_IN_MAPPING, COUNT (DISTINCT (source_qualifier_name)) NO_OF_DISTINCT_SQ FROM detail WHERE OVR_OK =1 GROUP BY subject_area UNION SELECT subject_area, 'SQ_NON_OVERIDE', COUNT (DISTINCT mapping_name) nb_mapping, COUNT (DISTINCT (mapping_name || source_qualifier_name)) nb_map_inst, COUNT (DISTINCT (source_qualifier_name)) nb_inst FROM detail WHERE OVR_OK =0 GROUP BY subject_area

Find Tracing Levels for Informatica Sessions
Sessions can have different tracing levels (Terse to Verbose). Often in the Development environment we test our mappings with verbose tracing levels and then forget to reduce the level while promoting to Production environments. This creates issues like abnormal "SessLogs" growth, slower mapping performance etc.

This query will give tracing information along with session names, so that you can quickly identify unintended tracing levels without opening each session manually.

select

task_name, decode (attr_value, 0,'None', 1,'Terse', 2,'Normal', 3,'Verbose Initialisation', 4,'Verbose Data','') Tracing_Level from REP_SESS_CONFIG_PARM CFG, opb_task TSK WHERE CFG.SESSION_ID=TSK.TASK_ID and tsk.TASK_TYPE=68 and attr_id=204 and attr_type=6

Find the names of all stored procedures used in Stored Procedure transformations
This query is helpful when you need to know the name of every stored procedure being used in an Informatica Stored Procedure transformation.

select attr_value from OPB_WIDGET_ATTR where widget_type=6 and attr_id=1

Find who modified (saved) a mapping last
This information is available under mapping properties, but the below query lists this information for all the mappings in one place.

SELECT substr(rpl.event_time,7,4) || substr(rpl.event_time,6,1) || substr(rpl.event_time,1,5) || ' ' || substr(rpl.event_time,12,11) "EventTimestamp",
       usr.user_name "Username",
       DECODE(rpl.object_type_id, 21, s21.subj_name, '(' || rpl.object_type_id || ')') "Folder",
       obt.object_type_name "Type",
       DECODE(rpl.object_type_id, 21, map.mapping_name, '(' || rpl.object_type_id || ')') "Object"
FROM opb_reposit_log rpl, opb_object_type obt, opb_subject fld, opb_mapping map, opb_users usr,

opb_subject s21 WHERE obt.object_type_name = 'Mapping' AND rpl.object_type_id = obt.object_type_id AND rpl.object_id = map.mapping_id(+) AND rpl.object_id = fld.subj_id(+) AND rpl.event_uid = usr.user_id AND map.subject_id = s21.subj_id(+) ORDER BY 1 DESC

Find Lookup Information from Repository This query will give information about lookup transformations like lookup name,Tablename ,Mapping name etc. Select distinct wid.WIDGET_ID, all_map.mapping_name, wid.INSTANCE_NAME Lkp_name, Decode(widat.attr_id,2,widat.attr_value) Table_name, decode (widat.attr_id,6,widat.attr_value) src_tgt FROM rep_all_mappings ALL_MAP, rep_widget_inst wid, OPB_WIDGET_ATTR widat where all_map.mapping_id=wid.mapping_id and wid.WIDGET_ID=widat.WIDGET_ID and wid.WIDGET_TYPE=11 and widat.WIDGET_TYPE=11 and widat.ATTR_ID in (2,6)

Find all Invalid workflows from Metadata repository This query will list of all invalid workflows under the given subject area (folder) select opb_subject.subj_name, opb_task.task_name from opb_task, opb_subject where task_type = 71 and is_valid = 0

and opb_subject.subj_id = opb_task.subject_id and UPPER(opb_subject.SUBJ_NAME) like UPPER('YOUR_FOLDER_NAME')

Generate a list of failed sessions from the repository Here is a query that will display list of failed / aborted / terminated sessions from each of the folders in the repository SELECT RSL.SUBJECT_AREA AS FOLDER, RW.WORKFLOW_NAME AS WORKFLOW, RSL.SESSION_NAME AS SESSION_NAME, DECODE(RSL.RUN_STATUS_CODE, 3,'FAILED', 4,'STOPPED', 5,'ABORTED', 15,'TERMINATED','UNKNOWN') AS STATUS, RSL.FIRST_ERROR_CODE AS FIRST_ERROR, RSL.FIRST_ERROR_MSG AS ERROR_MSG, RSL.ACTUAL_START AS START_TIME, RSL.SESSION_TIMESTAMP AS END_TIME FROM REP_SESS_LOG RSL, REP_WORKFLOWS RW WHERE RSL.RUN_STATUS_CODE IN (3,4,5,14,15) AND RW.WORKFLOW_ID = RSL.WORKFLOW_ID AND RW.SUBJECT_ID = RSL.SUBJECT_ID AND RSL.SUBJECT_AREA ='CDW_FDR2_SERVICEQUALITY' ORDER BY RSL.SESSION_TIMESTAMP DESC

SELECT STDY_ID,
       RTRIM(XMLAGG(XMLELEMENT(ISO_COUNTRY_CD, ISO_COUNTRY_CD || ',')
             ORDER BY ISO_COUNTRY_CD).EXTRACT('//text()'), ',') AS concatval
FROM study_milestone_ods
GROUP BY STDY_ID
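On Oracle 11gR2 and later, LISTAGG gives the same comma-separated list more directly than the XMLAGG trick:

SELECT stdy_id,
       LISTAGG(iso_country_cd, ',') WITHIN GROUP (ORDER BY iso_country_cd) AS concatval
  FROM study_milestone_ods
 GROUP BY stdy_id;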

SELECT DECODE (ATTR_TYPE, 1,'Source', 2,'Target', 3,'Source Qualifier', 4,'Update Strategy', 5,'expression', 6,'Stored Procedures', 7,'Sequence Generator',

8,'External Procedures', 9,'Aggregator', 10,'Filter', 11,'Lookup', 12,'Joiner', 14,'Normalizer', 15,'Router', 26,'Rank', 44,'mapplet', 46,'mapplet input', 47,'mapplet output', 55,'XML source Qualifier', 80,'Sorter', 97,'Custom Transformation',' ' ) TRANS FROM ghhdev9.OPB_ATTR WHERE ATTR_NAME ='Tracing Level'

SELECT

decode (attr_value, 0,'None', 1,'Terse', 2,'Normal', 3,'Verbose Initialisation', 4,'Verbose Data','') Tracing_Level FROM ghhdev9.OPB_ATTR WHERE ATTR_NAME='Tracing Level'

1. select * from opb_wflow_run -- here you will get the workflow_id of the workflow you need to analyse.
2. select * from opb_task_inst_run where workflow_id = xxxx -- here you will get the details of each and every task present in that workflow, along with their start and end times.
Note: xxxx is the workflow_id which you have taken from opb_wflow_run. I hope this is what you are looking for. Additional knowledge you may need:

1. select * from opb_sess_task_log -- this gives the details about the mapping (success rows, failure rows, etc.)
2. select * from opb_swidginst_log -- this is very important when we need to analyse the performance of each transformation in a session/mapping. It gives details of each transformation along with its start and end time.

http://www.itnirvanas.com/2009/02/informatica-metadata-queries-part1.html

SELECT ATTR.INSTANCE_NAME, ATTR.MAPPING_ID, ATTR.WIDGET_TYPE, ATTR.WIDGET_ID,

SUBJ_NAME, MAPPING_NAME, Tracing_Level FROM

( SELECT A.*, Tracing_Level
  FROM (SELECT INSTANCE_NAME, A.MAPPING_ID, A.WIDGET_TYPE, A.WIDGET_ID,
               ROW_NUMBER() OVER (PARTITION BY A.MAPPING_ID, A.WIDGET_ID, INSTANCE_NAME
                                  ORDER BY A.VERSION_NUMBER DESC) AS ID
        FROM ghhdev9.OPB_WIDGET_INST A) A,
       (SELECT WIDGET_ID,
               DECODE(ATTR_VALUE, 1,'Terse', 3,'Verbose Initialization', 4,'Verbose Data',' ') AS Tracing_Level,
               ROW_NUMBER() OVER (PARTITION BY WIDGET_ID, ATTR_ID
                                  ORDER BY VERSION_NUMBER DESC) AS ID
        FROM ghhdev9.REP_WIDGET_ATTR B
        WHERE ATTR_NAME = 'Tracing Level' AND ATTR_VALUE IN ('1','3','4')) B
  WHERE A.WIDGET_ID = B.WIDGET_ID AND A.ID = 1 AND B.ID = 1 ) ATTR,
( SELECT MAPPING_NAME, SUBJ_NAME, SUBJECT_ID, MAPPING_ID, ID
  FROM (SELECT MAPPING_NAME, SUBJ.SUBJ_NAME, SUBJECT_ID, MAPPING_ID, VERSION_NUMBER,
               ROW_NUMBER() OVER (PARTITION BY MAPPING_NAME ORDER BY VERSION_NUMBER DESC) AS ID
        FROM ghhdev9.OPB_MAPPING M, ghhdev9.OPB_SUBJECT SUBJ
        WHERE SUBJ.SUBJ_NAME = 'ALIGNCOM_DS_reddveer'
          AND MAPPING_NAME NOT LIKE 'x%'
          AND IS_VALID = 1 AND IS_VISIBLE = 0
          AND M.SUBJECT_ID = SUBJ.SUBJ_ID)) M
WHERE ATTR.MAPPING_ID = M.MAPPING_ID AND M.ID = 1
--AND ATTR.WIDGET_TYPE NOT IN (1,2,3,97)

CREATE TABLE C_TEMP_EDA_CUST_PROD_CTRT_TST AS
(SELECT C_FGO_ATTR_ID, C_POBJ_ID, C_POBJ_MGR_ID, C_ATTR_NAME, C_ATTR_VALUE,
        C_ATTR_TYPE, C_ATTR_FORMAT, C_FILTERABLE, C_REALM_NUM, C_VER_NUM,
        C_END_VER_NUM, C_VER_STATE, C_DATE_CREATED, C_MEMBER_ID_CREATED,
        C_DATE_UPDATED, C_MEMBER_ID_UPDATED, C_VER_START_DATE, C_VER_END_DATE,
        C_BULK_OP_ID, C_MGR_ID, C_CONT_OBJ_MGR_ID, C_CONT_OBJ_ID,
        C_EFF_START_DATE, C_EFF_END_DATE, C_SRVC_NAME, C_TABLE_NAME,
        C_PK_COLUMN, C_RME_ENTITY_FLAG
 FROM (SELECT C_FGO_ATTR_ID, C_POBJ_ID, C_POBJ_MGR_ID, C_ATTR_NAME, C_ATTR_VALUE,
              C_ATTR_TYPE, C_ATTR_FORMAT, C_FILTERABLE, C_REALM_NUM, C_VER_NUM,
              C_END_VER_NUM, C_VER_STATE, C_DATE_CREATED, C_MEMBER_ID_CREATED,
              C_DATE_UPDATED, C_MEMBER_ID_UPDATED, C_VER_START_DATE, C_VER_END_DATE,
              C_BULK_OP_ID, C_MGR_ID, C_CONT_OBJ_MGR_ID, C_CONT_OBJ_ID,
              C_EFF_START_DATE, C_EFF_END_DATE, C_SRVC_NAME, C_TABLE_NAME,
              C_PK_COLUMN, C_RME_ENTITY_FLAG
       FROM C_MN_TEMP_EDA_CUST_PROD_CTRT p,
            (SELECT rownum repeat
             FROM dual
             CONNECT BY LEVEL <= (SELECT MAX(C_FGO_ATTR_ID) AS C_FGO_ATTR_ID
                                  FROM C_MN_TEMP_EDA_CUST_PROD_CTRT)) R
       WHERE p.C_FGO_ATTR_ID >= r.repeat
         AND ROWNUM <= 7000000))

http://www.disoln.org/2012/06/split-your-target-file-dynamically.html



IIF(ISNULL(GEO_DW_ID), 'NEW', 'OLD')
TO_DATE('3000-12-31', 'YYYY-MM-DD')

