
Triaging Performance Bugs

Motivation
* Ask for the right information
* Diagnose the problem correctly
* Reduce the time a performance bug/issue remains open
* Resolve the problem quickly

How to Approach a Performance Issue


* Define the problem
* Gather the necessary information to analyze the issue
* Identify the root cause of the problem
* Find a solution/workaround that addresses the root cause of the problem

Define the problem


* Get a clear understanding of the performance issue
* What is slow?
* Single process or screen
* Everything in a module
* Everything in a product family
* Entire instance
* Where is it slow? Test / Dev / Production / All
* Is it slow at particular times of the day?
* Is it slow when there are only a few users on the system,
or only when the system is loaded?
* Is it slow only for particular users?
* Since when has this been slow?
* Was it always slow? If not, what could have changed since the last time
it ran OK?
* Patches
* Gather stats
* Change in data volumes, etc.
* If the BDE performance bug template has been completed, go through
it carefully
* Check if the problem is reproducible internally

Identify the cause of the problem


* Analyze the problem first before you ask the customer to apply any patches.
* Identify the components that do the processing
* Start with the SQL Trace/TKPROF file
* Are you able to account for the user-reported response time?
* If yes (which will be the case in > 90% of cases), you have identified the issue
as SQL/PLSQL related
* If no, check whether the raw trace file is truncated.
* If that is not the case, then the issue is probably other,
non-database processing being done by the program:
* Forms
* Reports
* Java

Identify the cause of the problem - SQL/PLSQL


* Are you able to identify the expensive SQL in the trace?
* Typically, less than 10% of statements account for > 90% of the time
* Is the execution plan optimal?
If not, check the basics:
* Are you missing any filters?
* Are you missing any indexes? If the filter column is part of a composite
index, have all the leading columns been included in the filter criteria?
* Do you have any functions on your filter columns?
* Are you missing any join conditions?
* Do you have any functions, e.g. nvl(), decode(), on the join conditions?
* Are you doing a LIKE on a number column?
* Are your views too complex?
* Are you outer joining to views?
* Do you see any throw-away rows?
* If you do not find anything bad with the query, and the plan is still bad,
you will need to ask for more detailed information (discussed later).
* Is the number of executions as expected?
If it is high:
* Locate the code where it could be coming from.
* Has the code been unnecessarily left in a loop?
* Are you using user-defined functions in your select/where clause?
* Could you cache these values?
* Are you doing row-by-row processing?
* Can it be changed to use bulk operations? (See the sketch after this list.)
* If you are not able to identify the issue, ask for additional information
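
Where row-by-row processing is the culprit, converting the loop to bulk operations is often the fix. Below is a minimal PL/SQL sketch of the pattern, not code from any product; the table and filter are illustrative only:

DECLARE
  -- Illustrative only: collect the keys once, then do one bulk DML
  TYPE t_id_tab IS TABLE OF hz_parties.party_id%TYPE;
  l_ids t_id_tab;
BEGIN
  -- One round trip instead of a row-by-row cursor loop
  SELECT party_id
    BULK COLLECT INTO l_ids
    FROM hz_parties
   WHERE status = 'A';

  -- One bulk statement instead of one UPDATE per loop iteration
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE hz_parties
       SET last_update_date = SYSDATE
     WHERE party_id = l_ids(i);
END;
/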

Identify the cause of the problem - non-SQL


* If it is a Forms application, get a Forms Diagnostic Trace
* If it is a Report, get a Reports Trace (aka Reports Profiler output)
* If it is a Java-based application, you will need to profile the
application using a Java profiling tool such as OptimizeIt or JDeveloper.
* In the case of a concurrent program, make sure to check the Actual Start Date
and Actual Completion Date to get the exact run time for the program
(see the query below).
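
For a concurrent program, the wall-clock run time can be derived directly from FND_CONCURRENT_REQUESTS. A quick sketch (run as APPS; :p_request_id is the request in question):

SELECT request_id,
       actual_start_date,
       actual_completion_date,
       ROUND((actual_completion_date - actual_start_date) * 24 * 60, 1) run_minutes
  FROM fnd_concurrent_requests
 WHERE request_id = :p_request_id;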

Information you might need for problem analysis


* SQL Traces - raw trace and sorted TKPROF output
* Output of the SQLTXPLAIN utility
* Enhanced Explain Plan and related diagnostic info like indexes,
statistics, histograms etc. for a given sql statement.
* Refer customer to Metalink Note 215187.1 for information and
instructions on how to setup and use this utility.
* Output of afxplain.sql (11.5.10 and beyond).
* Event 10053 trace, also known as the CBO trace
* Trace file contains information on the CBO costs calculations which
provides insights into why a particular plan was chosen
* In certain cases, the performance team may ask for this output.
Developers are not expected to understand this output.
* Export dump of customer statistics
* On rare occasions, we need to simulate the behavior of the CBO on
the customer's database in-house:
* Create a stat table using FND_STATS.CREATE_STAT_TABLE
* Back up customer stats using FND_STATS.BACKUP_TABLE_STATS for each table
* Use the exp command to export the stat table (see the sketch below)
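
A rough sketch of those three steps is shown below. The stat table name is made up, and the exact FND_STATS argument names/order are assumptions here, so confirm them against the FND_STATS package spec in your release before running:

-- Create a table to hold the backed-up statistics (name is illustrative)
SQL> exec FND_STATS.CREATE_STAT_TABLE('APPLSYS','CUST_STATS');
-- Back up the current statistics for each table of interest
SQL> exec FND_STATS.BACKUP_TABLE_STATS('ONT','OE_ORDER_LINES_ALL');
-- Then export the stat table from the OS prompt:
-- $ exp apps/<password> tables=APPLSYS.CUST_STATS file=cust_stats.dmp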
* PL/SQL Profiler output
* Required to drill down on PL/SQL bottlenecks identified from the SQL Trace
* Forms Runtime Diagnostics Trace
* Output provides detailed information on the processing done by the
Forms Runtime process while executing a form
* Reports Trace
* Output provides detailed information on the processing done by the
Reports runtime process while executing a report

SQL Trace - why/when would you use it


* It is a diagnostic/debugging facility that provides runtime performance
statistics for individual SQL statements that your application runs.
* It also records the execution plan that was used to execute each
SQL statement.
* In most cases, it is enabled at the session level.
* The output is a raw text file.
* To find out which statements are causing the performance issue.
* To see what values are being bound at runtime.
* To get the data types of the bind variables.
* To find out what events the session waited for during SQL execution.
* To find out which tables are being accessed by a piece of code.
* If you are getting a runtime error like ORA-942 or ORA-904, you can
find out which statement is causing it.
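
A related practical point: the trace file is written to user_dump_dest with the server process id (SPID) in its name. One common way to find your own session's SPID (assuming SELECT access to the V$ views):

SELECT p.spid
  FROM v$process p, v$session s
 WHERE p.addr = s.paddr
   AND s.audsid = USERENV('SESSIONID');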

SQL Trace - What information does it provide?


Provides the following information for each statement.
* Parse, execute, and fetch counts
* CPU and elapsed times
* Physical reads and logical reads
* Number of rows processed
* Misses on the library cache
* Username under which each parse occurred
* Each commit and rollback
* Runtime Execution Plan
SQL Trace - What it looks like
PARSING IN CURSOR #1
len=53 dep=0 uid=33 oct=3 lid=33 tim=4181264358 hv=3778312068 ad='350e5c48'
select party_name from hz_parties where party_id=:b1
END OF STMT
PARSE #1:c=1,e=1,p=0,cr=2,cu=0,mis=1,r=0,dep=0,og=0,tim=4181264358
BINDS #1:
bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=80000001000cbc78 bln=22 avl=04 flg=05 value=144867
EXEC #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=9223372041149743104,dep=0,og=4,tim=4181264358
WAIT #1: nam='SQL*Net message to client' ela= 0 p1=1650815232 p2=1 p3=0
WAIT #1: nam='db file sequential read' ela= 1 p1=328 p2=129038 p3=1
WAIT #1: nam='file open' ela= 1 p1=0 p2=0 p3=0
WAIT #1: nam='db file sequential read' ela= 3 p1=327 p2=128915 p3=1
WAIT #1: nam='file open' ela= 0 p1=0 p2=0 p3=0
WAIT #1: nam='db file sequential read' ela= 2 p1=330 p2=121062 p3=1
FETCH #1:c=0,e=6,p=3,cr=4,cu=0,mis=0,r=9223376430606319617,dep=0,og=4,tim=4181264364
WAIT #1: nam='SQL*Net message from client' ela= 0 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=9223376430606319616,dep=0,og=0,tim=4181264364
WAIT #1: nam='SQL*Net message to client' ela= 0 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message from client' ela= 999 p1=1650815232 p2=1 p3=0
STAT #1 id=1 cnt=1 pid=0 pos=0 obj=20595 op='TABLE ACCESS BY INDEX ROWID HZ_PARTIES '
STAT #1 id=2 cnt=1 pid=1 pos=1 obj=34093 op='INDEX UNIQUE SCAN '

SQL Trace - Levels


* Level 1 - Standard Sql Trace

* Level 4 - Level 1 PLUS Bind values


BINDS #1:
bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24
offset=0
bfp=80000001000cbc78 bln=22 avl=04 flg=05 value=144867

* Level 8 - Level 1 PLUS Wait Statistics


WAIT #1: nam='file open' ela= 0 p1=0 p2=0 p3=0
WAIT #1: nam='db file sequential read' ela= 2 p1=330 p2=121062 p3=1

* Level 12 - Level 1 PLUS Bind values and Wait Statistics


In > 95% of cases, a standard SQL Trace (level 1) is good enough.

SQL Trace - init.ora parameters

* sql_trace=true|false
* When true, enables SQL tracing for the entire instance
* timed_statistics=true
* Enables collection of CPU and elapsed times
* max_dump_file_size=<n>[K|M] | unlimited
* Specifies the maximum size of the trace file
* user_dump_dest=<directory>
* Specifies the location where trace files are written
* tracefile_identifier='string'
* Creates trace files with the name <sid>_ora_<pid>_string.trc
* _trace_files_public=true
* Creates trace files with mode 644 instead of 600
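
Most of these can also be set at the session level rather than instance-wide, which is usually what you want for a one-off trace. For example (the identifier string is just an illustrative tag):

ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET max_dump_file_size = unlimited;
ALTER SESSION SET tracefile_identifier = 'PERF_BUG_1234';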

Enabling Trace - SQL*Plus / PL/SQL


* Enabling trace for the current session
* alter session set sql_trace=true;
* alter session set events '10046 trace name context forever, level <x>';
* dbms_session.set_sql_trace(true);
* dbms_support.start_trace(waits=>true,binds=>true);

* Enabling trace for a different session


* dbms_system.set_sql_trace_in_session (SID,SERIAL#,TRUE);
* DBMS_SUPPORT.START_TRACE_IN_SESSION(SID , SERIAL#, waits=>TRUE,
binds=>TRUE )
DO NOT PUT THESE CALLS IN YOUR CODE
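
Putting it together, a typical interactive tracing session looks like the sketch below (the identifier and output file names are illustrative):

-- Tag the trace file so it is easy to find in user_dump_dest
ALTER SESSION SET tracefile_identifier = 'SLOW_QUERY';
-- Level 12 = binds + waits; level 1 is enough in most cases
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... run the slow statement(s) here ...

ALTER SESSION SET events '10046 trace name context off';

Then format the raw file with TKPROF, sorting the most expensive statements to the top:

$ tkprof <sid>_ora_<pid>_SLOW_QUERY.trc slow_query.prf sort=prsela,exeela,fchela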

Gather Schema Statistics


Gathering Statistics

Do not gather statistics excessively on entire schemas or the entire database such as
nightly or weekly. Do not gather statistics on permanent objects during peak intervals.

Gathering statistics invalidates cursors


Gathering statistics requires dictionary and object-level locks.
Plans are unlikely to change if the data distribution has not changed.
If only certain tables are growing at a rapid rate, gather statistics on just those tables.

Use only FND_STATS or the Gather Schema and Gather Table Statistics concurrent
programs.

Do NOT use the ANALYZE command or DBMS_STATS directly. Doing so is not supported
and can result in sub-optimal plans.
Review the table and index statistics for the objects which appear in the top SQL section
of Statspack.

Gathering Statistics Enhancements (11i10) - Gather Schema Statistics

Auto Gather option
* Gathers statistics only on tables which have changed
* The change threshold is user definable
* Utilizes the Table Monitoring feature. To enable schema monitoring:
SQL> exec fnd_stats.ENABLE_SCHEMA_MONITORING()
Auto List option
* Lists the objects which have changed
Maintain history of statistics collection
No Invalidate option
* Does not invalidate cursors

Avoid collecting statistics on GLOBAL TEMPORARY TABLES; instead, force the index
for best performance.

Global Temporary Table Performance Issue

Cause: Poor performance was noted on the "Fund Mgmt page" of FII (Oracle Financials
Intelligence). Trace file analysis showed full table scans with high cost on two global
temporary tables (FII_GL_SNAP_SUM_F, FII_PMV_AGGRT_GT).

Fix:
When 100% statistics were collected on these two tables, the full table scans on the
global temporary tables were avoided:

SQL> exec FND_STATS.GATHER_TABLE_STATS
(ownname => 'FII', tabname => 'FII_GL_SNAP_SUM_F', percent => 99.999999);
SQL> exec FND_STATS.GATHER_TABLE_STATS
(ownname => 'FII', tabname => 'FII_PMV_AGGRT_GT', percent => 99.999999);
================================================================
1 TABLE ACCESS (BY INDEX ROWID) of FII_COMPANY_HIERARCHIES
2 INDEX (UNIQUE SCAN) of FII_COMPANY_HIERARCHIES_U1 (UNIQUE)

Per the APPS performance team, statistics should not be collected on global
temporary tables; instead, they forced the index FII_GL_SNAP_SUM_F_N1 in
all portlet queries that use the FII_GL_SNAP_SUM_F table.

Eg: SELECT /*+ index(f fii_gl_snap_sum_f_n1) */ '||p_snap_aggrt_viewby_id||'
viewby_id,
............
More details can be found in TAR 4672213.993.

Verifying the Statistics

SQL> set serveroutput on


SQL> exec apps.fnd_stats.verify_stats ('ONT','OE_ORDER_LINES_ALL');
================================================================
Table OE_ORDER_LINES_ALL
================================================================
last analyzed     sample_size  num_rows   blocks
12-03-2004 22:59  3726829      37268290   3527197

Index name          last analyzed     num_rows  LB     DK        LB/key  DB/key  CF
------------------------------------------------------------------------------------------
OE_ORDER_LINES_N1   12-03-2004 22:12  36018080  87310  2097282   1       5       11719150
OE_ORDER_LINES_N10  12-03-2004 22:12  26519270  68610  2230418   1       4       10949180
OE_ORDER_LINES_N11  12-03-2004 22:12  16310880  96200  11196380  1       1       7668050
OE_ORDER_LINES_N12  12-03-2004 22:12  50        1      20        1       1       19
OE_ORDER_LINES_N13  12-03-2004 22:12  1363149   3132   138681    1       3       547938
...
...
OE_ORDER_LINES_N5   12-03-2004 22:12  0         0      0         0       0       0
OE_ORDER_LINES_N6   12-03-2004 22:12  158583    410    158670    1       1       85032
OE_ORDER_LINES_N7   12-03-2004 22:12  103459    238    18873     1       2       48075
OE_ORDER_LINES_N9   12-03-2004 22:12  4276030   11060  3101971   1       1       2444110
OE_ORDER_LINES_U1   12-03-2004 22:12  37510470  91790  37510470  1       1       17574490
------------------------------------------------------------------------------------------
Histogram Stats
Schema  Table Name          Status   last analyzed     Column Name
------------------------------------------------------------------------------------------
ONT     OE_ORDER_LINES_ALL  present  03-12-2004 22:59  OPEN_FLAG

How to avoid statistics collection on TABLES by using procedure FND_STATS.LOAD_XCLUD_TAB

Objective: Evaluate the FND_STATS.LOAD_XCLUD_TAB procedure to exclude a
table from statistics collection.

The following is the test scenario. The test worked fine with FND_STATS package
version 115.47.

SID: samrki (Aramark stage instance)
SID: stesti (test database)

(A) Test using the "Gather Schema Statistics" job

Step 1. Executed fnd_stats.LOAD_XCLUD_TAB to insert the table into the
FND_EXCLUDE_TABLE_STATS table.

SQL> exec fnd_stats.LOAD_XCLUD_TAB('I',0,'FND_CONCURRENT_REQUESTS');

PL/SQL procedure successfully completed.

SQL> select * from FND_EXCLUDE_TABLE_STATS where
table_name='FND_CONCURRENT_REQUESTS';

APPLICATION_ID TABLE_NAME                     PARTITION
-------------- ------------------------------ ------------------------------
APPROX_NUM_ROWS CREATION_DATE   CREATED_BY LAST_UPDATE_DAT LAST_UPDATED_BY
--------------- --------------- ---------- --------------- ---------------
LAST_UPDATE_LOGIN
-----------------
             0 FND_CONCURRENT_REQUESTS
               19-MAY-06                1 19-MAY-06                     1

Step 2. Deleted the statistics on FND_CONCURRENT_REQUESTS.

Step 3. Gather Schema Statistics ran during the weekend.

Step 4. Checked that statistics for table FND_CONCURRENT_REQUESTS were not
gathered:

SQL> select last_analyzed, num_rows from dba_tables where
table_name='FND_CONCURRENT_REQUESTS';

LAST_ANALYZED   NUM_ROWS
--------------- ----------
(B) Test using fnd_stats.gather_schema_statistics (SID = STESTI)

SQL> select last_analyzed, num_rows, table_name from dba_tables where
table_name='FND_CONCURRENT_REQUESTS';

LAST_ANALYZED   NUM_ROWS   TABLE_NAME
--------------- ---------- ------------------------------
                           FND_CONCURRENT_REQUESTS

SQL> execute fnd_stats.gather_schema_statistics(schemaname=>'APPLSYS',
estimate_percent=>2,degree=>3,internal_flag=>'NOBACKUP');

PL/SQL procedure successfully completed.

SQL> select last_analyzed, num_rows, table_name from dba_tables where
table_name='FND_CONCURRENT_REQUESTS';

LAST_ANALYZED   NUM_ROWS   TABLE_NAME
--------------- ---------- ------------------------------
                           FND_CONCURRENT_REQUESTS

(Note: The above test worked fine for fnd_stats.gather_schema_statistics, but when
using fnd_stats.gather_table_stats it still collected table statistics, which is expected.)

(C) Test on FND_STATS package version 115.46 (the test did not work in this case)

Steps done on the Manitowoc (test) instance:

1. SQL> exec fnd_stats.LOAD_XCLUD_TAB('I',0,'FND_CONCURRENT_REQUESTS');

PL/SQL procedure successfully completed.

Though it indicates "PL/SQL procedure successfully completed.", it does not insert any
rows into the table FND_EXCLUDE_TABLE_STATS.
2. Inserted a row for the FND_CONCURRENT_REQUESTS table into
FND_EXCLUDE_TABLE_STATS using an INSERT statement.
3. Gathered statistics on the APPLSYS schema using fnd_stats.gather_schema_statistics,
and it collected schema statistics.
4. The change made to FND_EXCLUDE_TABLE_STATS was then reverted.

Gather Schema Statistics concurrent request with Gather Auto option

As directed by the Apps performance team, we need to use the Gather Auto option
when running the Gather Schema Statistics concurrent request.
To implement the Gather Schema Statistics concurrent request with the Gather Auto
option, we need to put the tables in monitoring mode.
The following steps implement the change, first in the test instance and
then in production.

STEP 1.

Bring down the concurrent managers.

STEP 2.

SQL> select count(1), monitoring from dba_tables group by monitoring;

STEP 3.
connect apps/xxxxxx
exec fnd_stats.enable_schema_monitoring('ALL');

In case of errors during the process of changing the mode to monitoring please bounce all
the middle tier components along with the database and retry.

STEP 4.
SQL> select count(1), monitoring from dba_tables group by monitoring;

STEP 5.
Restart concurrent managers.

STEP 6.
Schedule the Gather Schema Statistics concurrent request with the Gather Auto option
to run after business hours, every Saturday at 1 AM, for ALL schemas, with
Estimate = 10%, Degree = 3, Backup Flag = NOBACKUP, History Mode = None, and
Gather Options = Gather Auto.
Application Profiles
FND: Enable Cancel Query

Set the profile "FND: Enable Cancel Query" to No.

Forms Cancel Query should not be enabled unless you are on
Forms patchset 14 (Forms version 6.0.8.23.x or higher). Refer to MetaLink Note
138159.1 for instructions on how to enable and tune Cancel Query related parameters.
Cancel Query increases middle-tier CPU as well as DB CPU.

Concurrent: Wait for Available TM


Set the profile "Concurrent:Wait for Available TM" to 1 (second).

TP:INV Transaction processing mode


Set "TP:INV Transaction processing mode" to "On-line processing" for small inventory
requests from the UI.

Concurrent: Allow Debugging

The profile "Concurrent: Allow Debugging" should be set to "Yes."

OM: Debug Level


Ensure that the profile "OM: Debug Level" is set to zero (0).

QP: Debug
Ensure that the profile "QP: Debug" is set to "N".

Profile Option "Sign-on Audit"


Sign-on Audit should be set to "FORM" to trace FORM connections back through the FND tables.

Concurrent Programs

Concurrent form VIEW/SUBMIT is too slow

Cause:
Performance of the concurrent form VIEW/SUBMIT is too slow (TAR 15927120.600).

The customer is experiencing a performance issue with the query on the Concurrent
Requests form.

Patch 2502208 ("AFTER PATCH 2502208 BUT NO MAJOR IMPROVEMENT ON
ACCESSING FND_CONC_REQ_SUMMARY_V") is already applied, but we do not see any
significant improvement after collecting 100% statistics with this patch. As a
temporary workaround, we had the customer delete the statistics on the following
five concurrent tables, after which the execution plan was very optimal.

OWNER    TABLE_NAME                   TABLESPA  CHAIN_CNT  NUM_ROWS  LAST_ANALYZED  INSTANCES
-------- ---------------------------- --------  ---------  --------  -------------  ---------
APPLSYS  FND_CONCURRENT_PROGRAMS      APPLSYSD  0          6698      21-JAN-06      1
APPLSYS  FND_CONCURRENT_PROGRAMS_TL   APPLSYSD  0          6698      21-JAN-06      1
APPLSYS  FND_CONCURRENT_REQUESTS      APPLSYSD  0          421063    21-JAN-06      1
APPLSYS  FND_PRINTER_STYLES_TL        APPLSYSD  0          72        21-JAN-06      1
APPLSYS  FND_USER                     APPLSYSD  0          178       21-JAN-06      1

After patch 2502208 and 100% statistics on the tables, the execution plan shows
multiple HASH joins, with the elapsed time going up to 9.48 seconds.

************************************************************************
SELECT CONCURRENT_PROGRAM_ID, PROGRAM_APPLICATION_ID, PRINTER,
PROGRAM_SHORT_NAME, ARGUMENT_TEXT, PRINT_STYLE, USER_PRINT_STYLE,
SAVE_OUTPUT_FLAG, ROW_ID, ACTUAL_COMPLETION_DATE, COMPLETION_TEXT,
PARENT_REQUEST_ID, REQUEST_TYPE, FCP_PRINTER, FCP_PRINT_STYLE,
FCP_REQUIRED_STYLE, LAST_UPDATE_DATE, LAST_UPDATED_BY, REQUESTED_BY,
HAS_SUB_REQUEST, IS_SUB_REQUEST, UPDATE_PROTECTED, QUEUE_METHOD_CODE,
RESPONSIBILITY_APPLICATION_ID, RESPONSIBILITY_ID, CONTROLLING_MANAGER,
LAST_UPDATE_LOGIN, PRIORITY_REQUEST_ID, ENABLED, REQUESTED_START_DATE,
PHASE_CODE, HOLD_FLAG, STATUS_CODE, REQUEST_ID, PROGRAM, REQUESTOR, PRIORITY
FROM FND_CONC_REQ_SUMMARY_V
WHERE nvl(request_type, 'X') != 'S' and (trunc(request_date) >=
trunc(sysdate-:1)) order by REQUEST_ID DESC

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.01 0.02 1 2 0 0
Execute 27 0.00 0.02 0 0 0 0
Fetch 27 6.17 9.43 119373 122438 0 297
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 56 6.18 9.48 119374 122440 0 297

 1  SELECT CONCURRENT_PROGRAM_ID,PROGRAM_APPLICATION_ID,PRINTER,
 2  PROGRAM_SHORT_NAME,ARGUMENT_TEXT,PRINT_STYLE,USER_PRINT_STYLE,
 3  SAVE_OUTPUT_FLAG,ROW_ID,ACTUAL_COMPLETION_DATE,COMPLETION_TEXT,
 4  PARENT_REQUEST_ID,REQUEST_TYPE,FCP_PRINTER,FCP_PRINT_STYLE,
 5  FCP_REQUIRED_STYLE,LAST_UPDATE_DATE,LAST_UPDATED_BY,REQUESTED_BY,
 6  HAS_SUB_REQUEST,IS_SUB_REQUEST,UPDATE_PROTECTED,QUEUE_METHOD_CODE,
 7  RESPONSIBILITY_APPLICATION_ID,RESPONSIBILITY_ID,CONTROLLING_MANAGER,
 8  LAST_UPDATE_LOGIN,PRIORITY_REQUEST_ID,ENABLED,REQUESTED_START_DATE,
 9  PHASE_CODE,HOLD_FLAG,STATUS_CODE,REQUEST_ID,PROGRAM,REQUESTOR,PRIORITY
10  FROM
11  FND_CONC_REQ_SUMMARY_V WHERE nvl(request_type, 'X') != 'S' and
12* (trunc(request_date) >= trunc(sysdate-:1)) order by REQUEST_ID DESC

EXPLAIN_PLAN
---------------------------------------------------------------------------------------------
11 SELECT STATEMENT Opt_Mode:CHOOSE
10 SORT (ORDER BY)
 9 . HASH JOIN
 1 .. TABLE ACCESS ***(FULL)*** OF 'APPLSYS.FND_CONCURRENT_PROGRAMS_TL' (ROWS:6698 BLKS:120)
 8 .. HASH JOIN
 2 ... TABLE ACCESS ***(FULL)*** OF 'APPLSYS.FND_USER' (ROWS:178 BLKS:20)
 7 ... HASH JOIN
 3 .... TABLE ACCESS ***(FULL)*** OF 'APPLSYS.FND_CONCURRENT_PROGRAMS' (ROWS:6698 BLKS:150)
 6 .... HASH JOIN (OUTER)
 4 ....+ TABLE ACCESS ***(FULL)*** OF 'APPLSYS.FND_CONCURRENT_REQUESTS' (ROWS:416042 BLKS:41295)
 5 ....+ TABLE ACCESS ***(FULL)*** OF 'APPLSYS.FND_PRINTER_STYLES_TL' (ROWS:72 BLKS:4)

EXPLAIN_PLAN
---------------------------------------------------------------------------------------------
SELECT STATEMENT
SORT (ORDER BY)
. HASH JOIN
.. TABLE ACCESS (FULL) FND_CONCURRENT_PROGRAMS_TL (ROWS:6698 BLKS:120)
.. HASH JOIN
... TABLE ACCESS (FULL) FND_USER (ROWS:178 BLKS:20)
... HASH JOIN
.... TABLE ACCESS (FULL) FND_CONCURRENT_PROGRAMS (ROWS:6698 BLKS:150)
.... HASH JOIN (OUTER)
....+ TABLE ACCESS (FULL) FND_CONCURRENT_REQUESTS (ROWS:416042 BLKS:41295)
....+ TABLE ACCESS (FULL) FND_PRINTER_STYLES_TL (ROWS:72 BLKS:4)

IRS:(INDEX RANGE SCAN) IUS:(INDEX UNIQUE SCAN) IFS:(INDEX FULL SCAN) BIR:(BY INDEX ROWID)

OPERATION                                           COST CARDINALITY   BYTES CPU_COST IO_COST TEMP_SPACE
--------------------------------------------------- ---- ----------- ------- -------- ------- ----------
SELECT STATEMENT ................................. 7282 17809 5805734 7282
SORT (ORDER BY) ................................. 7282 17809 5805734 7282 12182000
. HASH JOIN ...................................... 6408 17809 5805734 6408
.. TABLE ACCESS (FULL) FND_CONCURRENT_PROGRAMS_TL. 20 6698 308108 20
.. HASH JOIN ..................................... 6370 17809 4986520 6370
... TABLE ACCESS (FULL) FND_USER.................. 5 178 2314 5
... HASH JOIN .................................... 6364 17809 4755003 6364
.... TABLE ACCESS (FULL) FND_CONCURRENT_PROGRAMS.. 24 6698 214336 24
.... HASH JOIN (OUTER) ........................... 6328 17809 4185115 6328 3728000
....+ TABLE ACCESS (FULL) FND_CONCURRENT_REQUESTS. 6269 17809 3508373 6269
....+ TABLE ACCESS (FULL) FND_PRINTER_STYLES_TL... 2 72 2736 2

TABLES AND INDEXES
==================

TBL  TABLE_NAME                     NUM_ROWS (ACTUAL)  BLOCKS
---  -----------------------------  -----------------  ------
01   FND_CONCURRENT_PROGRAMS        6698               150
02   FND_CONCURRENT_PROGRAMS_TL     6698               120
03   FND_CONCURRENT_REQUESTS        416042             41295
04   FND_PRINTER_STYLES_TL          72                 4

ACTIONS:
Rakesh Tikku from the Apps performance development group evaluated the
"Concurrent Form submit/view" performance issue and recommended the
following actions. When the profile option below is set to "No", the concurrent
form will not requery the FND_CONCURRENT_REQUESTS table, which we have
previously seen cause slow response.

This action plan should be implemented on one of your development/test instances
first; once the results are acceptable to you, implement the same changes on
your production instance.

 1  SELECT CONCURRENT_PROGRAM_ID,PROGRAM_APPLICATION_ID,PRINTER,
 2  PROGRAM_SHORT_NAME,ARGUMENT_TEXT,PRINT_STYLE,USER_PRINT_STYLE,
 3  SAVE_OUTPUT_FLAG,ROW_ID,ACTUAL_COMPLETION_DATE,COMPLETION_TEXT,
 4  PARENT_REQUEST_ID,REQUEST_TYPE,FCP_PRINTER,FCP_PRINT_STYLE,
 5  FCP_REQUIRED_STYLE,LAST_UPDATE_DATE,LAST_UPDATED_BY,REQUESTED_BY,
 6  HAS_SUB_REQUEST,IS_SUB_REQUEST,UPDATE_PROTECTED,QUEUE_METHOD_CODE,
 7  RESPONSIBILITY_APPLICATION_ID,RESPONSIBILITY_ID,CONTROLLING_MANAGER,
 8  LAST_UPDATE_LOGIN,PRIORITY_REQUEST_ID,ENABLED,REQUESTED_START_DATE,
 9  PHASE_CODE,HOLD_FLAG,STATUS_CODE,REQUEST_ID,PROGRAM,REQUESTOR,PRIORITY
10  FROM
11  FND_CONC_REQ_SUMMARY_V WHERE nvl(request_type, 'X') != 'S' and
12  (trunc(request_date) >= trunc(sysdate-:1)) and (REQUESTED_BY=:2) order by
13* REQUEST_ID DESC

ACTION PLAN for non-production instance:
========================================

1. Set the profile option "Concurrent: Show Requests Summary After Each Request
Submission" to No.
2. Change this first for a specific user; when the customer confirms it is OK,
then move forward to production.

EXPLAIN PLANS (rk1.sql AUOHSE03O07_PE03OI_S0015)
=============

EXPLAIN_PLAN
--------------------------------------------------------------------------------------------------
16 SELECT STATEMENT Opt_Mode:CHOOSE
15 SORT (ORDER BY)
14 . NESTED LOOPS
11 .. NESTED LOOPS
 8 ... NESTED LOOPS (OUTER)
 5 .... NESTED LOOPS
 2 ....+ TABLE ACCESS (BY INDEX ROWID) OF 'APPLSYS.FND_USER'
 1 ....+. INDEX (UNIQUE SCAN) OF 'APPLSYS.FND_USER_U1' (UNIQUE) SEARCH_COLUMNS=1
 4 ....+ TABLE ACCESS (BY INDEX ROWID) OF 'APPLSYS.FND_CONCURRENT_REQUESTS'
 3 ....+. INDEX (RANGE SCAN) OF 'APPLSYS.FND_CONCURRENT_REQUESTS_N1' (NON-UNIQUE) SEARCH_COLUMNS=1
 7 .... TABLE ACCESS (BY INDEX ROWID) OF 'APPLSYS.FND_PRINTER_STYLES_TL'
 6 ....+ INDEX (UNIQUE SCAN) OF 'APPLSYS.FND_PRINTER_STYLES_TL_U1' (UNIQUE) SEARCH_COLUMNS=2
10 ... TABLE ACCESS (BY INDEX ROWID) OF 'APPLSYS.FND_CONCURRENT_PROGRAMS'
 9 .... INDEX (UNIQUE SCAN) OF 'APPLSYS.FND_CONCURRENT_PROGRAMS_U1' (UNIQUE) SEARCH_COLUMNS=2
13 .. TABLE ACCESS (BY INDEX ROWID) OF 'APPLSYS.FND_CONCURRENT_PROGRAMS_TL'
12 ... INDEX (UNIQUE SCAN) OF 'APPLSYS.FND_CONCURRENT_PROGRAMS_TL_U1' (UNIQUE) SEARCH_COLUMNS=3

EXPLAIN_PLAN
---------------------------------------------------------------------------------------
SELECT STATEMENT
SORT (ORDER BY)
. NESTED LOOPS
.. NESTED LOOPS
... NESTED LOOPS (OUTER)
.... NESTED LOOPS
....+ TABLE ACCESS (BIR) FND_USER
....+. IUS FND_USER_U1 USER_ID
....+ TABLE ACCESS (BIR) FND_CONCURRENT_REQUESTS
....+. IRS FND_CONCURRENT_REQUESTS_N1 REQUESTED_BY ACTUAL_COMPLETION_DATE
.... TABLE ACCESS (BIR) FND_PRINTER_STYLES_TL
....+ IUS FND_PRINTER_STYLES_TL_U1 PRINTER_STYLE_NAME LANGUAGE
... TABLE ACCESS (BIR) FND_CONCURRENT_PROGRAMS
.... IUS FND_CONCURRENT_PROGRAMS_U1 APPLICATION_ID CONCURRENT_PROGRAM_ID
.. TABLE ACCESS (BIR) FND_CONCURRENT_PROGRAMS_TL
... IUS FND_CONCURRENT_PROGRAMS_TL_U1 APPLICATION_ID CONCURRENT_PROGRAM_ID LANGUAGE

IRS:(INDEX RANGE SCAN) IUS:(INDEX UNIQUE SCAN) IFS:(INDEX FULL SCAN) BIR:(BY INDEX ROWID)

OPERATION                                          COST CARDINALITY BYTES CPU_COST IO_COST TEMP_SPACE
-------------------------------------------------- ---- ----------- ----- -------- ------- ----------
SELECT STATEMENT ................................. 171 5 1630 171
SORT (ORDER BY) ................................. 171 5 1630 171
. NESTED LOOPS ................................... 147 5 1630 147
.. NESTED LOOPS .................................. 142 5 1400 142
... NESTED LOOPS (OUTER) ......................... 137 5 1240 137
.... NESTED LOOPS ................................ 132 5 1050 132
....+ TABLE ACCESS (BIR) FND_USER................. 1 1 13 1
....+. IUS FND_USER_U1 ........................... 1
....+ TABLE ACCESS (BIR) FND_CONCURRENT_REQUESTS.. 131 5 985 131
....+. IRS FND_CONCURRENT_REQUESTS_N1 ............ 49 114 49
.... TABLE ACCESS (BIR) FND_PRINTER_STYLES_TL..... 1 1 38 1
....+ IUS FND_PRINTER_STYLES_TL_U1 ............... 1
.... TABLE ACCESS (BIR) FND_CONCURRENT_PROGRAMS.... 1 1 32 1
.... IUS FND_CONCURRENT_PROGRAMS_U1 .............. 1
.. TABLE ACCESS (BIR) FND_CONCURRENT_PROGRAMS_TL.. 1 1 46 1
... IUS FND_CONCURRENT_PROGRAMS_TL_U1 ............ 1
TABLES AND INDEXES
==================

TBL  TABLE_NAME                     NUM_ROWS (ACTUAL)  BLOCKS
---  -----------------------------  -----------------  ------
01   FND_CONCURRENT_PROGRAMS        6698               150
02   FND_CONCURRENT_PROGRAMS_TL     6698               120
03   FND_CONCURRENT_REQUESTS        425705             43725
04   FND_PRINTER_STYLES_TL          72                 4
05   FND_USER                       178                20

Purge Concurrent Request and/or Manager

Cause:
With a huge number of records in the FND_CONCURRENT_REQUESTS table, execution of the
concurrent program was taking a very long time (> 300 and < 500 minutes) to complete,
even with a 15-day purge policy. Patch 3591639 was expected to resolve the issue, but it
did not improve performance.
Trace file analysis found the following SQL statement:
SELECT COUNT(*) FROM FND_FILE_TEMP T WHERE T.FILE_ID = :B1
where CPU time went up to 3736.33 seconds and total elapsed time went up to 38683.68
seconds, with 63.5 million disk reads and 75.3 million buffer gets to access just 14653
rows.
Fix:
There is a known issue related to RAC. WebIV Note 304748.1 says to set up FNDFS
appropriately so that files on one node can be accessed by requests from another node. It
is also recommended to set the init.ora parameter max_commit_propagation_delay=0;
the default value of this parameter is 700.
Another problem is related to the FND_FILE_TEMP table. With no indexes on
that table, performance degrades; AOL product support recommended that we
create two non-unique indexes, on the REQUEST_ID and FILE_ID columns, as
sketched below.
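
A sketch of those two indexes (the index names here are hypothetical; follow your own naming standard):

CREATE INDEX FND_FILE_TEMP_N1 ON FND_FILE_TEMP (REQUEST_ID);
CREATE INDEX FND_FILE_TEMP_N2 ON FND_FILE_TEMP (FILE_ID);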
INCTM - Concurrent Manager Consumes a Lot of Memory

INCTM Using Excessive Memory After 11.5.9 Upgrade Reference Note : 275528.1
Patch : 3517095

Cost Rollup - No Report & Update Standard Cost

Performance issues observed with the "Cost Rollup - No Report" and "Update Standard
Cost" programs.
Cause: Large number of rows in temporary tables.

Reference Note: 142101.1

Synchronize Workflow

Cause:
Unused space in the WF LOCAL tables is wasted.

Fix:
Run the bulk synchronization concurrent program called "Synchronize WF LOCAL
tables". By default this request set should be scheduled to run once a day, which
provides a minimal level of synchronization; schedule the request set to
synchronize more frequently if needed.

Purge Obsolete Generic File Manager Data

Growth in FND_LOBS table

Cause:
If the query below shows too many rows (perhaps > 10,000), the recommendation is to
purge the old (expired) data.
select program_name, count(*) from fnd_lobs group by program_name;

Fix:

Change PCTVERSION to zero for the FND_LOBS table:
ALTER TABLE FND_LOBS MODIFY LOB (FILE_DATA) (PCTVERSION 0);

Then add the concurrent program "Purge Obsolete Generic File Manager Data" through
SYSADMIN, scheduled to run with default parameters, which purges only
expired (junk) documents.

Note 171272.1 How to Drop Old/Expired Export and Attachment Data From
FND_LOBS.

Also check bug 4393550, logged against development, for clarification.

SFM SM Interface Test Service

Top process on the server map to "java" processes using high CPU

Cause:
Investigation of the underlying Java process indicates the CCM "SFM SM
Interface Test Service" manager writing to the log file endlessly.

Fix:
Request the customer to review MetaLink note 304946.1 and provide a CRT to disable
the manager.

Purge Debug Log And System Alerts

Purge Debug Log And System Alerts (running for long hours): Bug 4505316; the
solution is provided in WebIV Note 332103.1. Env: 11.5.10 CU1. TARs:
15529024.6 & 15530194.6 (ENGENIOI); TAR 4630500.993 (AMERICAN AIRLINES INC).

Related text-index maintenance programs (each is covered below):
Rebuild Help Search Index
Rebuilding Intermedia Index for Task Names
Synchronize JTF_NOTES_TL_C1 index
Customer text data creation and indexing
Service Request Synchronize Index

DR$PENDING and DR$WAITING tables to be purged.


Cause:
More details, with a successful action plan, are in TAR 4533149.993.

DR$PENDING and DR$WAITING have too many rows. Customers facing this problem
usually run Oracle Apps. WebIV Note 311583.1 provides more details on this issue:
FND_LOBS is a table that has the index FND_LOBS_CTX.
The table grows while the index is not being synchronized, so the rows pending
synchronization are inserted into dr$pending and dr$waiting. A complete
explanation is in Note 104262.1.

Solution:
To implement the solution, execute the following steps:
1. Purge FND_LOBS following the instructions in Note 298698.1 and Note 171272.1.
2. Sync the index FND_LOBS_CTX periodically; Note 119172.1 explains how to
synchronize a text index.
3. Execute the following query to find out which indexes still have pending
rows in DR$PENDING (connect as ctxsys):

select u.username, i.idx_name from dr$index i, dba_users u where
u.user_id=i.idx_owner# and idx_id in (select pnd_cid from dr$pending);

4. Synchronize those indexes: exec ctx_ddl.sync_index('USERNAME.INDEX');

5. Remove any rows remaining in DR$PENDING and DR$WAITING.

From the AOL point of view, and referring to the FND_LOBS_CTX index, there is
a specific concurrent request called 'Rebuild Help Search Index' (AFLOBBLD) which
maintains the FND_LOBS_CTX index. This concurrent request can be scheduled to run
on a periodic basis. There is no standard FND way to maintain all the text indexes
used by Oracle Applications; they are usually maintained by running a specific
concurrent request owned by the product owning the index.

For the JTF_TASKS_TL_IM index, there is a concurrent request owned by JTF called
'Rebuilding Intermedia Index for Task Names' (JTFTKIMD).
For the JTF_NOTES_TL_C1 index, there is a concurrent request owned by JTF called
'Synchronize JTF_NOTES_TL_C1 index' (JTF_NOTES_TL_C1_SYNC).

For the HZ_CUST_ACCT_SITES_ALL_T1 index, there is a concurrent request owned by
AR called 'Customer text data creation and indexing' (ARXCSTXT).
For the SUMMARY_CTX_INDEX index, there is a concurrent request owned by CS called
'Service Request Synchronize Index' (CS_SR_SYNC_INDEX).

Get details of a particular concurrent request

Script available through Note 134035.1

prompt ICM Status


DECLARE
l_activep NUMBER;
l_targetp NUMBER;
l_pmon_method VARCHAR2(80);
l_callstat NUMBER;
BEGIN
FND_CONCURRENT.GET_MANAGER_STATUS
( applid => 0,
managerid => 1,
targetp => l_targetp,
activep => l_activep,
pmon_method => l_pmon_method,
callstat => l_callstat);
IF l_callstat <> 0 THEN
dbms_output.put_line('Could not verify whether ICM is running -> '
|| l_callstat);
ELSE IF l_activep > 0 THEN
dbms_output.put_line('ICM is running');
ELSE
dbms_output.put_line('ICM is down -> Actual: '
|| l_activep || ', Target: '|| l_targetp);
END IF;
END IF;
END;
/

RAXTRX (Autoinvoice Import Program)

During routine monitoring, I noticed one critical issue with execution
of the concurrent program RAXTRX (Autoinvoice Import Program). The program
spikes CPU usage up to 99%. The customer is running this program very
frequently, and each execution spikes CPU very high for a short time.

Here is the detailed analysis and the action plan to implement:

(1)
The top command shows process id 29755 spiking to 99% CPU;
this maps to the concurrent program RAXTRX (Autoinvoice Import Program),
request id 2340521, initiated by user JCHACKO.

============================================================
11:27am up 63 days, 15:06, 3 users, load average: 1.72, 0.85, 0.75
248 processes: 242 sleeping, 5 running, 0 zombie, 1 stopped
CPU0 states: 96.1% user, 3.3% system, 0.0% nice, 0.0% idle
CPU1 states: 99.4% user, 0.1% system, 0.0% nice, 50% idle
Mem: 8227704K av, 8222872K used, 4832K free, 1152896K shrd, 68932K buff
Swap: 12578768K av, 684016K used, 11894752K free 6054660K cached

PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
29755 orpe03oi 25 0 427M 426M 420M R 99.6 5.3 0:48 oracle
14039 orpe03oi 15 0 101M 100M 99.8M S 0.3 1.2 0:08 oracle
24710 kmoizudd 15 0 1076 1076 752 R 0.3 0.0 0:08 top
12166 orpe03oi 15 0 99184 94M 97276 S 0.1 1.1 0:00 oracle
12177 orpe03oi 15 0 99704 95M 97796 S 0.1 1.1 0:00 oracle
============================================================

Database Details of this process:

file# Sid Db Status RequestID


Waiting block# Server Serial# CP PhSCode PriorityID Concurrent
ReqStart
SWait On Event id(Reason) Client SQLHash User ParentID Program Name
Minutes
----- ------------ ------------ ------- ---------- ---------- ---------- -------------------- ----------
3 db file sequ 437 24482 194 ACTIVE 2340298 Autoinvoice Import P
14-jan-06
ential read 25790 31416 1734 RR 2340257 rogram 12:00:41
1 835260576 JCHACKO 2340292 1.43M
13 latch free 1992479964 25458 144 ACTIVE 2340338 Autoinvoice
Import P 14-jan-06
98 982 49345 RR 2340313 rogram 12:05:35
0 835260576 JCHACKO 2340334 .98M

(2)

We also traced this concurrent program from the backend and observed the following
SQL at the top: an INSERT INTO RA_INTERFACE_ERRORS statement doing
10807521 buffer gets (over 10 million) in a single execution, which is quite
expensive; CPU time went up to 57.69 seconds and elapsed time to 63.55 seconds,
while writing 0 rows.
INSERT INTO RA_INTERFACE_ERRORS
(INTERFACE_LINE_ID,
MESSAGE_TEXT,
INVALID_VALUE)
SELECT
INTERFACE_LINE_ID,
:b_err_msg6,
'trx_number='||T.TRX_NUMBER||','||'customer_trx_id='||TL.CUSTOMER_TRX_ID
FROM RA_INTERFACE_LINES_GT IL, RA_CUSTOMER_TRX_LINES TL,
RA_CUSTOMER_TRX T
WHERE IL.REQUEST_ID = :b1
AND IL.INTERFACE_LINE_CONTEXT = 'ORDER ENTRY'
AND T.CUSTOMER_TRX_ID = TL.CUSTOMER_TRX_ID
AND IL.INTERFACE_LINE_CONTEXT = TL.INTERFACE_LINE_CONTEXT
AND IL.INTERFACE_LINE_ATTRIBUTE1 = TL.INTERFACE_LINE_ATTRIBUTE1
AND IL.INTERFACE_LINE_ATTRIBUTE2 = TL.INTERFACE_LINE_ATTRIBUTE2
AND IL.INTERFACE_LINE_ATTRIBUTE3 = TL.INTERFACE_LINE_ATTRIBUTE3
AND IL.INTERFACE_LINE_ATTRIBUTE4 = TL.INTERFACE_LINE_ATTRIBUTE4
AND IL.INTERFACE_LINE_ATTRIBUTE5 = TL.INTERFACE_LINE_ATTRIBUTE5
AND IL.INTERFACE_LINE_ATTRIBUTE6 = TL.INTERFACE_LINE_ATTRIBUTE6
AND IL.INTERFACE_LINE_ATTRIBUTE7 = TL.INTERFACE_LINE_ATTRIBUTE7
AND IL.INTERFACE_LINE_ATTRIBUTE8 = TL.INTERFACE_LINE_ATTRIBUTE8
AND IL.INTERFACE_LINE_ATTRIBUTE9 = TL.INTERFACE_LINE_ATTRIBUTE9
AND IL.INTERFACE_LINE_ATTRIBUTE10 =
TL.INTERFACE_LINE_ATTRIBUTE10
AND IL.INTERFACE_LINE_ATTRIBUTE11 =
TL.INTERFACE_LINE_ATTRIBUTE11
AND IL.INTERFACE_LINE_ATTRIBUTE12 =
TL.INTERFACE_LINE_ATTRIBUTE12
AND IL.INTERFACE_LINE_ATTRIBUTE13 =
TL.INTERFACE_LINE_ATTRIBUTE13
AND IL.INTERFACE_LINE_ATTRIBUTE14 =
TL.INTERFACE_LINE_ATTRIBUTE14

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 1 57.69 63.54 19466 10807521 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 57.69 63.55 19466 10807521 0 0
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 173 (APPS)
Rows Row Source Operation
------- ---------------------------------------------------
0 TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_LINES_ALL
15973921 NESTED LOOPS
2805012 MERGE JOIN CARTESIAN
36 TABLE ACCESS BY INDEX ROWID RA_INTERFACE_LINES_ALL
36 INDEX RANGE SCAN RA_INTERFACE_LINES_N1 (object id 28270)
2805012 BUFFER SORT
77917 TABLE ACCESS FULL RA_CUSTOMER_TRX_ALL
13168908 INDEX RANGE SCAN RA_CUSTOMER_TRX_LINES_N2 (object id 29156)
Rows Execution Plan
------- ---------------------------------------------------
       0 INSERT STATEMENT GOAL: CHOOSE
       0  TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'RA_CUSTOMER_TRX_LINES_ALL'
15973921   NESTED LOOPS
 2805012    MERGE JOIN (CARTESIAN)
      36     TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'RA_INTERFACE_LINES_ALL'
      36      INDEX GOAL: ANALYZED (RANGE SCAN) OF 'RA_INTERFACE_LINES_N1' (NON-UNIQUE)
 2805012     BUFFER (SORT)
   77917      TABLE ACCESS GOAL: ANALYZED (FULL) OF 'RA_CUSTOMER_TRX_ALL'
13168908    INDEX GOAL: ANALYZED (RANGE SCAN) OF 'RA_CUSTOMER_TRX_LINES_N2' (NON-UNIQUE)

************************************************************************
********

(3)

Each execution of the "Autoinvoice Import Program" spikes the CPU load (up to 99%),
and the customer runs this program very frequently (> 100 executions per day).
WebIV Note 301423.1 tells us that when there are no indexes on the line_attribute%
columns, Autoinvoice performance will be poor when inserting rows into the
RA_INTERFACE_ERRORS table; to fix this, the note recommends creating a
concatenated index on the attribute columns.

SELECT COUNT( DISTINCT interface_line_attribute1) ATTR1,
COUNT( DISTINCT interface_line_attribute2) ATTR2,
COUNT( DISTINCT Interface_line_attribute3) ATTR3,
COUNT( DISTINCT Interface_line_attribute4) ATTR4,
COUNT( DISTINCT Interface_line_attribute5) ATTR5,
COUNT( DISTINCT Interface_line_attribute6) ATTR6,
COUNT( DISTINCT Interface_line_attribute7) ATTR7,
COUNT( DISTINCT Interface_line_attribute8) ATTR8,
COUNT( DISTINCT Interface_line_attribute9) ATTR9,
COUNT( DISTINCT Interface_line_attribute10) ATTR10,
COUNT( DISTINCT Interface_line_attribute11) ATTR11,
COUNT( DISTINCT Interface_line_attribute12) ATTR12,
COUNT( DISTINCT Interface_line_attribute13) ATTR13,
COUNT( DISTINCT Interface_line_attribute14) ATTR14
FROM ra_interface_lines_all;

ATTR1 ATTR2 ATTR3 ATTR4 ATTR5 ATTR6 ATTR7 ATTR8 ATTR9 ATTR10 ATTR11 ATTR12 ATTR13 ATTR14
------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------
64230 14 60318 1022 2 197655 1 1 29 15 1 7 1 1

SELECT COUNT( DISTINCT interface_line_attribute1) ATTR1,
COUNT( DISTINCT interface_line_attribute2) ATTR2,
COUNT( DISTINCT Interface_line_attribute3) ATTR3,
COUNT( DISTINCT Interface_line_attribute4) ATTR4,
COUNT( DISTINCT Interface_line_attribute5) ATTR5,
COUNT( DISTINCT Interface_line_attribute6) ATTR6,
COUNT( DISTINCT Interface_line_attribute7) ATTR7,
COUNT( DISTINCT Interface_line_attribute8) ATTR8,
COUNT( DISTINCT Interface_line_attribute9) ATTR9,
COUNT( DISTINCT Interface_line_attribute10) ATTR10,
COUNT( DISTINCT Interface_line_attribute11) ATTR11,
COUNT( DISTINCT Interface_line_attribute12) ATTR12,
COUNT( DISTINCT Interface_line_attribute13) ATTR13,
COUNT( DISTINCT Interface_line_attribute14) ATTR14
FROM RA_CUSTOMER_TRX_LINES_ALL;

ATTR1 ATTR2 ATTR3 ATTR4 ATTR5 ATTR6 ATTR7 ATTR8 ATTR9 ATTR10 ATTR11 ATTR12 ATTR13 ATTR14
------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------
76222 60 68326 1193 2 200638 1 3 29 15 1 7 1 1

SELECT COUNT( DISTINCT interface_header_attribute1) ATTR1,
COUNT( DISTINCT interface_header_attribute2) ATTR2,
COUNT( DISTINCT Interface_header_attribute3) ATTR3,
COUNT( DISTINCT Interface_header_attribute4) ATTR4,
COUNT( DISTINCT Interface_header_attribute5) ATTR5,
COUNT( DISTINCT Interface_header_attribute6) ATTR6,
COUNT( DISTINCT Interface_header_attribute7) ATTR7,
COUNT( DISTINCT Interface_header_attribute8) ATTR8,
COUNT( DISTINCT Interface_header_attribute9) ATTR9,
COUNT( DISTINCT Interface_header_attribute10) ATTR10,
COUNT( DISTINCT Interface_header_attribute11) ATTR11,
COUNT( DISTINCT Interface_header_attribute12) ATTR12,
COUNT( DISTINCT Interface_header_attribute13) ATTR13,
COUNT( DISTINCT Interface_header_attribute14) ATTR14
FROM RA_CUSTOMER_TRX_ALL;
ATTR1 ATTR2 ATTR3 ATTR4 ATTR5 ATTR6 ATTR7 ATTR8 ATTR9 ATTR10 ATTR11 ATTR12 ATTR13 ATTR14
------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------
76213 29 68119 1190 2 75049 1 3 23 15 1 5 1 1
From the above output of all three SQL queries:

The interface_line_attribute1 and interface_line_attribute6 columns in
RA_INTERFACE_LINES_ALL and RA_CUSTOMER_TRX_LINES_ALL have the most
distinct values.
In the case of RA_CUSTOMER_TRX_ALL, the interface_header_attribute1 and
interface_header_attribute6 columns have the most distinct values.

(4)

ACTION PLAN PER THE ABOVE ANALYSIS:
=============================

Create the indexes as follows.

Note: Select the same attribute columns for indexing in all the tables below:
RA_INTERFACE_LINES_ALL
RA_CUSTOMER_TRX_LINES_ALL
RA_CUSTOMER_TRX_ALL
RA_INTERFACE_SALESCREDITS_ALL

a) Create the following indexes:
CREATE INDEX RA_INTERFACE_LINES_ALL_ATR ON
RA_INTERFACE_LINES_ALL (
INTERFACE_LINE_CONTEXT,
interface_line_attribute1,
interface_line_attribute6) STORAGE (
INITIAL 4K
NEXT 2K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELIST GROUPS 4
FREELISTS 4) TABLESPACE ARX INITRANS 4;
CREATE INDEX RA_CUSTOMER_TRX_LINES_ALL_ATR ON
RA_CUSTOMER_TRX_LINES_ALL (
INTERFACE_LINE_CONTEXT,
interface_line_attribute1,
interface_line_attribute6) STORAGE (
INITIAL 4K
NEXT 2K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELIST GROUPS 4
FREELISTS 4) TABLESPACE ARX INITRANS 4;
CREATE INDEX RA_CUSTOMER_TRX_ALL_ATR ON
RA_CUSTOMER_TRX_ALL (
INTERFACE_HEADER_CONTEXT,
interface_header_attribute1,
interface_header_attribute6) STORAGE (
INITIAL 4K
NEXT 2K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELIST GROUPS 4
FREELISTS 4) TABLESPACE ARX INITRANS 4;

ALTER index AR.RA_INTERFACE_LINES_ALL_ATR rebuild online;


ALTER index AR.RA_CUSTOMER_TRX_LINES_ALL_ATR rebuild online;
ALTER index AR.RA_CUSTOMER_TRX_ALL_ATR rebuild online;
exec FND_STATS.GATHER_INDEX_STATS( ownname => 'AR', indname =>
'RA_CUSTOMER_TRX_ALL_ATR', percent => 99.999999);
exec FND_STATS.GATHER_INDEX_STATS( ownname => 'AR', indname =>
'RA_CUSTOMER_TRX_LINES_ALL_ATR', percent => 99.999999);
exec FND_STATS.GATHER_INDEX_STATS( ownname => 'AR', indname =>
'RA_INTERFACE_LINES_ALL_ATR', percent => 99.999999);

The customer needs to provide a CRT for the action plan listed in (4), implement
it first on a development instance, and validate the performance of the
Autoinvoice Import program by reviewing the trace; once satisfied, implement
the same action plan in the production instance.

Results Before and After:

BEFORE :

CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
102,186,750 27 3,784,694.4 43.4 466.77 686.92 835260576
Module: RAXTRX
INSERT INTO RA_INTERFACE_ERRORS (INTERFACE_LINE_ID, MESSAGE_
TEXT, INVALID_VALUE) SELECT INTERFACE_LINE_ID, :b_err_msg6, '
trx_number'||T.TRX_NUMBER||','||'customer_trx_id'||TL.CUSTOMER
TRX_ID FROM RA_INTERFACE_LINES_GT IL, RA_CUSTOMER_TRX_LINES TL,
RA_CUSTOMER_TRX T WHERE IL.REQUEST_ID = :b1 AND IL.INTERFAC
CPU Elapsd
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
461,116 27 17,078.4 2.6 466.77 686.92 835260576
Module: RAXTRX
INSERT INTO RA_INTERFACE_ERRORS (INTERFACE_LINE_ID, MESSAGE
TEXT, INVALID_VALUE) SELECT INTERFACE_LINE_ID, :b_err_msg6, '
trx_number'||T.TRX_NUMBER||','||'customer_trx_id'||TL.CUSTOMER
_TRX_ID FROM RA_INTERFACE_LINES_GT IL, RA_CUSTOMER_TRX_LINES
TL,
RA_CUSTOMER_TRX T WHERE IL.REQUEST_ID = :b1 AND IL.INTERFAC

EXPLAIN PLANS (sql2.txt AUOHSE03O07_PE03OI_S0006)
=============

EXPLAIN_PLAN
-----------------------------------------------------------------
9 INSERT STATEMENT Opt_Mode:CHOOSE
8 TABLE ACCESS (BY INDEX ROWID) OF 'AR.RA_CUSTOMER_TRX_LINES_ALL'
7 . NESTED LOOPS
5 .. MERGE JOIN (CARTESIAN)
2 ... TABLE ACCESS (BY INDEX ROWID) OF 'AR.RA_INTERFACE_LINES_ALL'
1 .... INDEX (RANGE SCAN) OF 'AR.RA_INTERFACE_LINES_N1' (NON-UNIQUE) SEARCH_COLUMNS=1
4 ... BUFFER (SORT)
3 .... TABLE ACCESS ***(FULL)*** OF 'AR.RA_CUSTOMER_TRX_ALL' (ROWS:83644 BLKS:6720)
6 .. INDEX (RANGE SCAN) OF 'AR.RA_CUSTOMER_TRX_LINES_N2' (NON-UNIQUE) SEARCH_COLUMNS=1

EXPLAIN_PLAN
-----------------------------------------------------------------
INSERT STATEMENT
TABLE ACCESS (BIR) RA_CUSTOMER_TRX_LINES_ALL
. NESTED LOOPS
.. MERGE JOIN (CARTESIAN)
... TABLE ACCESS (BIR) RA_INTERFACE_LINES_ALL
.... IRS RA_INTERFACE_LINES_N1 REQUEST_ID
... BUFFER (SORT)
.... TABLE ACCESS (FULL) RA_CUSTOMER_TRX_ALL (ROWS:83644 BLKS:6720)
.. IRS RA_CUSTOMER_TRX_LINES_N2 CUSTOMER_TRX_ID LINE_NUMBER

IRS:(INDEX RANGE SCAN) IUS:(INDEX UNIQUE SCAN) IFS:(INDEX FULL SCAN) BIR:(BY INDEX ROWID)

OPERATION                                       COST CARDINALITY    BYTES CPU_COST IO_COST TEMP_SPACE
----------------------------------------------- ---- ----------- -------- -------- ------- ----------
INSERT STATEMENT ............................. 2163 1 301 2163
TABLE ACCESS (BIR) RA_CUSTOMER_TRX_LINES_ALL. 5 1 108 5
. NESTED LOOPS ............................... 2163 1 301 2163
.. MERGE JOIN (CARTESIAN) .................... 1023 228 44004 1023
... TABLE ACCESS (BIR) RA_INTERFACE_LINES_ALL. 2 1 178 2
.... IRS RA_INTERFACE_LINES_N1 ............... 1 54 1
... BUFFER (SORT) ............................ 1021 840 12600 1021
.... TABLE ACCESS (FULL) RA_CUSTOMER_TRX_ALL.. 1021 840 12600 1021
.. IRS RA_CUSTOMER_TRX_LINES_N2 .............. 2 6 2

AFTER :

EXPLAIN_PLAN
-----------------------------------------------------------------
9 INSERT STATEMENT Opt_Mode:CHOOSE
8 NESTED LOOPS
5 . NESTED LOOPS
2 .. TABLE ACCESS (BY INDEX ROWID) OF 'AR.RA_INTERFACE_LINES_ALL'
1 ... INDEX (RANGE SCAN) OF 'AR.RA_INTERFACE_LINES_N1' (NON-UNIQUE) SEARCH_COLUMNS=1
4 .. TABLE ACCESS (BY INDEX ROWID) OF 'AR.RA_CUSTOMER_TRX_LINES_ALL'
3 ... INDEX (RANGE SCAN) OF 'AR.RA_CUSTOMER_TRX_LINES_ALL_ATR' (NON-UNIQUE) SEARCH_COLUMNS=3
7 . TABLE ACCESS (BY INDEX ROWID) OF 'AR.RA_CUSTOMER_TRX_ALL'
6 .. INDEX (UNIQUE SCAN) OF 'AR.RA_CUSTOMER_TRX_U1' (UNIQUE) SEARCH_COLUMNS=1

EXPLAIN_PLAN
-----------------------------------------------------------------
INSERT STATEMENT
NESTED LOOPS
. NESTED LOOPS
.. TABLE ACCESS (BIR) RA_INTERFACE_LINES_ALL
... IRS RA_INTERFACE_LINES_N1 REQUEST_ID
.. TABLE ACCESS (BIR) RA_CUSTOMER_TRX_LINES_ALL
... IRS RA_CUSTOMER_TRX_LINES_ALL_ATR INTERFACE_LINE_CONTEXT
INTERFACE_LINE_ATTRIBUTE1 INTERFACE_LINE_ATTRIBUTE6
. TABLE ACCESS (BIR) RA_CUSTOMER_TRX_ALL
.. IUS RA_CUSTOMER_TRX_U1 CUSTOMER_TRX_ID

IRS:(INDEX RANGE SCAN) IUS:(INDEX UNIQUE SCAN) IFS:(INDEX FULL SCAN) BIR:(BY INDEX ROWID)

OPERATION                                          COST CARDINALITY BYTES CPU_COST IO_COST TEMP_SPACE
-------------------------------------------------- ---- ----------- ----- -------- ------- ----------
INSERT STATEMENT ................................ 6 1 301 6
NESTED LOOPS ................................... 6 1 301 6
. NESTED LOOPS .................................. 5 1 286 5
.. TABLE ACCESS (BIR) RA_INTERFACE_LINES_ALL..... 2 1 178 2
... IRS RA_INTERFACE_LINES_N1 ................... 1 54 1
.. TABLE ACCESS (BIR) RA_CUSTOMER_TRX_LINES_ALL.. 3 1 108 3
... IRS RA_CUSTOMER_TRX_LINES_ALL_ATR ........... 2 1 2
. TABLE ACCESS (BIR) RA_CUSTOMER_TRX_ALL......... 1 1 15 1
.. IUS RA_CUSTOMER_TRX_U1 ....................... 1

Purge ATP Temp Tables

MRP_ATP_DETAILS_TEMP and MRP_ATP_SCHEDULE_TEMP Tables Grow
Rapidly and Are Not Purged

fact: Oracle Advanced Supply Chain Planning 11.5.6
fact: Oracle Order Management 11.5.5
symptom: The MRP_ATP_DETAILS_TEMP table grows rapidly and is not purged
symptom: The MRP_ATP_SCHEDULE_TEMP table grows rapidly and is not purged
cause: These tables are populated every time an ATP check is done and are currently not
self-cleaning.
fix: Truncate the MRP_ATP_DETAILS_TEMP and MRP_ATP_SCHEDULE_TEMP
tables (see the sketch below). In 11.5.9, there is a concurrent request available for
purging these tables; it is available to the System Administrator - Purge ATP Temp Tables.
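
On releases without that concurrent request, the fix above is a manual truncate. A sketch (run as the schema owning the tables, during a quiet window, since TRUNCATE is immediate and unrecoverable):

TRUNCATE TABLE MRP_ATP_DETAILS_TEMP;
TRUNCATE TABLE MRP_ATP_SCHEDULE_TEMP;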

Purging
FND_LOG_TRANSACTION_CONTEXT
Problem Description :
====================

Per the README description of patch 3119516:

When logging has been completely disabled (i.e. the profile option AFLOG_ENABLED
is set to 'N'), the table FND_LOG_TRANSACTION_CONTEXT is still populated with
context information, even though this information is only used when logging has been
enabled. On large systems with many transactions this table can grow quite large and
needs to be purged regularly. After applying this patch, this table will not be
populated if logging is disabled.

The customer had already applied this patch on 29-MAY-05, set the profile option
AFLOG_ENABLED to 'N', and truncated the table
FND_LOG_TRANSACTION_CONTEXT.
Patch Applied Details:
==================

ORIG_BUG_NUMBER A REASON_NOT_APPLIED S APPLI LAST_UPDATE_DAT


------------------------------ - -------------------- - ----- ---------------
3119516 N No active actions Y FND 29-MAY-05

Profile Setting Details
===============================
USER_PROFILE_OPTION_NAME   PROFILE_OPTION_NAME   VALUE

FND: Debug Log Enabled AFLOG_ENABLED NO


FND: Debug Log Filename AFLOG_FILENAME
FND: Debug Log Level AFLOG_LEVEL
FND: Debug Log Mode AFLOG_BUFFER_MODE
FND: Debug Log Module AFLOG_MODULE

Rows are still being inserted into the FND_LOG_TRANSACTION_CONTEXT table,
impacting performance of application navigation and viewing of concurrent requests.

The patch README text ("this table will not be populated if logging is disabled")
is misleading.

Product support
===============
Requested action: truncate the following tables, in the given order:
FND_EXCEPTION_NOTES;
FND_OAM_BIZEX_SENT_NOTIF;
FND_LOG_METRICS;
FND_LOG_UNIQUE_EXCEPTIONS;
FND_LOG_EXCEPTIONS;
FND_LOG_MESSAGES;
FND_LOG_TRANSACTION_CONTEXT;
FND_LOG_ATTACHMENTS

Then execute the following steps:

1. update fnd_log_unique_exceptions set STATUS ='C';
   commit;

2. Submit multiple Purge CP requests with a shorter time range.

FND_LOG_TRANSACTION_CONTEXT table in APPS 11.5.10

Doc ID: 332103.1


Purge Debug Log And System Alerts Performance Issues (Ref SR 4586434.993)
Applies to:
Oracle Application Object Library - Version: 11.5.10 to CU1
This problem can occur on any platform.
Symptoms
The FNDLGPRG Purge Debug Log program does not delete all records from the tables.
There are records which are over a month old. This program is consuming excessive CPU
resources.

It looks like the query on table fnd_log_messages is consuming a lot of time.

This problem began after applying 11.5.10 CU1.


Cause
There are too many rows in the fnd_log* tables.
This is Bug 4505316: (32) PURGE DEBUG LOG FNDLGPRG CONSUMING LARGE
AMOUNT OF CPU.

The customer is experiencing the exact symptoms of the Purge Debug Log program
consuming CPU and taking a long time to complete.
Solution
To implement the solution, please execute the following steps:

1. SQL> update fnd_log_unique_exceptions set STATUS ='C';


SQL> commit;

2. Submit multiple Purge CP requests with shorter time-range.

OR

Truncate the following tables, in the given order:


FND_EXCEPTION_NOTES;
FND_OAM_BIZEX_SENT_NOTIF;
FND_LOG_METRICS;
FND_LOG_UNIQUE_EXCEPTIONS;
FND_LOG_EXCEPTIONS;
FND_LOG_MESSAGES;
FND_LOG_TRANSACTION_CONTEXT;
FND_LOG_ATTACHMENTS

If there are too many System Alerts, the purge program starts taking a lot of time.
Perform the following to avoid this problem from recurring:

1. Apply patch 4347939.

2. Schedule the Purge CP to run daily and purge all messages older than 7 days.

3. Monitor alerts daily using the OAM Alerts Dashboard. Once issue is
resolved, manually change alert state to 'Closed' to allow these alerts to
be purged by Purge CP.

In 11.5.10 CU3 a provision will be added to stop the system alerts altogether.
Customers will be able to disable this feature so that no data is collected. Hence
we will not face a situation where excessive new system alerts exist in the
system.
References
Bug 4505316 - Purge Debug Log Fndlgprg Consuming Large Amount Of Cpu
Bug 4347939 - FND Logging Patch for 11.5.10

Large number of rows in DR$WAITING and DR$PENDING tables

Dr$Pending And / Or Dr$Waiting Purge In Oracle


Symptoms
DR$PENDING and DR$WAITING have too many rows.
Cause
The FND_LOBS table has an Oracle Text index, FND_LOBS_CTX.
The table grows but the index is not being synchronized, so the rows pending
synchronization are inserted into DR$PENDING and DR$WAITING.
Solution
To implement the solution, please execute the following steps:
(1) Run the following scripts to identify issues with CTX indexes:
select u.username, i.idx_name, drp.pnd_cid, drp.cnt_rows
from ctxsys.dr$index i,
     dba_users u,
     ( select pnd_cid, count(*) cnt_rows from ctxsys.dr$pending group by pnd_cid ) drp
where u.user_id = i.idx_owner#
and i.idx_id = drp.pnd_cid;

select wtg_cid, count(*) from ctxsys.dr$waiting group by wtg_cid;
(2) If the counts in the above output exceed 10,000 rows, rebuild the FND_LOBS_CTX
index by running the following script (a lighter-weight alternative is sketched
after these steps):
$FND_TOP/sql/aflobbld.sql APPLSYS APPS
(3) Add a purge job "Purge Obsolete Generic File Manager Data" and submit it by
following the instructions listed in Appendix A.
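
Where a full rebuild is too disruptive, a lighter-weight alternative is to
synchronize the index manually. A minimal sketch, assuming the session has the
privileges needed to call CTX_DDL; for backlogs far above 10,000 rows the rebuild
above is usually the safer route:

SQL> exec ctx_ddl.sync_index('APPLSYS.FND_LOBS_CTX');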

Appendix A: How to add and submit the job "Purge Obsolete Generic File Manager Data"

By default, it is not registered as a sysadmin program, so do this:
1. Navigate to Security/Responsibility/Request.
2. Query up the group "System Administrator Reports".
3. Scroll all the way to the bottom of the list of reports.
4. Click the plus icon on the top left to add a record.
5. Add the record: Program ... Purge Obsolete Generic File Manager Data.
6. Click the yellow disk icon to save it.
7. You should now be able to see this in the list of Requests that you can run
(Request/Run/...).

Enable the Concurrent Manager program "Purge Obsolete Generic File Manager Data" by
assigning it to the SYSADMIN group. Once this program is enabled, you can run it
with the parameter of "export" to purge the old/expired data.
Note: The expiration date is automatically set by Applications in the
FND_LOBS.EXPIRATION_DATE column.

========== BUG:2148975 ============
WHAT DOES 'PURGE OBSOLETE GENERIC FILE MANAGER DATA' CONC REQUEST DO?
This PL/SQL procedure (FNDGFMPR) works much like the 'Purge Obsolete Workflow
Runtime Data' program: it gets rid of old obsolete uploaded files (loaded to the
database) for the programs FND_HELP, export and FND_ATTACH. These are programs that
run under the FNDGFU (Generic File Manager Access Utility). This program is used to
purge data related to the Exports and Attachments functionality and also Help
Builder. It purges data from the FND_LOBS and FND_LOB_ACCESS tables. At this time
this Concurrent Program is not assigned to a Responsibility.

========== BUG:2148975 ============
Program Parameters:
Expired - Enter Y if you want to purge expired data only. Enter N if you want the
purge to include all data. The default is Y.
Program Name - Enter the program name(s) to process. Leave blank to process all
programs.
Program Tag - Enter the program tag(s) to process. Leave blank to process all
program tags.
=====================================
References:
Note 104262.1 - Technical Overview: DML Processing in Oracle Text
Note 171272.1 - How to Drop Old/Expired Export and Attachment Data From FND_LOBS.
Note 298698.1 - Avoiding abnormal growth of FND_LOBS table in Applications 11i
For more details, please check the document
DR_waiting_pending-CheckDetails

Apps Technology Stack


DMZ <redirection> Internal MT (problem occurred intermittently)

Cause:

The DMZ login was redirecting to the internal MT server, and sometimes the internal
MT was redirecting to the DMZ server.
Fix:

The FND_NODES table plays a major role in deriving the login page URL. After the
upgrade, someone had renamed the table and recreated a new one; all the existing
synonyms, triggers, and indexes were still pointing to the renamed table.
1. Run the following SELECT statement on the target node to check that the triggers
exist and their status is ENABLED:
SQL> SELECT trigger_name, status FROM user_triggers
WHERE table_name = 'FND_NODES';

TRIGGER_NAME STATUS
------------------------------ ----------
FNDSM ENABLED
UPNAME ENABLED
- If the status of the triggers shows as DISABLED, enable them as follows:

SQL> alter trigger UPNAME enable;
SQL> alter trigger FNDSM enable;

2. The other possible cause is a mismatch between FND_NODES and the
APPL_SERVER_ID in the dbc file; this can produce intermittent URL redirection.
Execute below query to find server_id of all the middle tiers of the instance.
SQL> col NODE_NAME for a30
SQL> col SERVER_ID for a75
SQL> col SERVER_ADDRESS for a20
SQL> set lines 1000
SQL> select NODE_NAME,SERVER_ID,SERVER_ADDRESS from fnd_nodes;
Verify the SERVER_ID output from the above result against the APPL_SERVER_ID entry
in the dbc files of the corresponding nodes. The dbc file is located in
$FND_TOP/secure/<db hostname>_<sid>.dbc ($FND_TOP/secure may be a softlink to the
$HOME/secure directory). If there is any mismatch in the APPL_SERVER_ID entry in
the dbc files, edit the dbc file with the correct APPL_SERVER_ID entry. If
FND_NODES is also not populated with proper values, regenerate the dbc files on
that particular node.
Recreate dbc files on Middle tier:
$cd $COMMON_TOP/admin/install/<instance name>_<MT hostname>
$./adgendbc.sh apps <appspass>
This script creates dbc files with correct APPL_SERVER_ID entry and also it populates
fnd_nodes table with correct entries.

3. Verify that the correct Guest password is provided in the dbc file. Use the
query below to check whether the Guest password given in the dbc file is correct:
SQL> select fnd_web_sec.validate_login('GUEST','<guest password>') from dual;
If this query returns Y, the password is correct. If it returns N, change the dbc
file with the correct Guest password. These changes do not require a bounce of any
service.
Note: Before making any changes to the FND_NODES table, take a backup:
SQL> create table FND_NODE_bk as select * from FND_NODES;

Redo Logfile

Investigating high redo log switches, using LogMiner.

1. Check if logminer is installed.


SQL> connect /as sysdba
SQL> desc dbms_logmnr
If this is not installed, refer
Note 62508.1 The LogMiner Utility
Note 148616.1 Oracle9i LogMiner New Features
for installing and using LogMiner.

2. Identify an archive log generated during the period of excessive switching and
issue the following:
SQL> execute DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>
'/panhgi/arch/ArchiveOnLine/PANHGI_1_23238.arc', OPTIONS => DBMS_LOGMNR.NEW);
SQL> execute DBMS_LOGMNR.START_LOGMNR(OPTIONS =>
DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

3. Now query V$LOGMNR_CONTENTS :

a) To get a general view of what operation was causing the excessive redo
switching, e.g.:
SQL> select operation, seg_owner, count(*) from v$logmnr_contents
     group by operation, seg_owner;

OPERATION  SEG_OWNER  COUNT(*)
DELETE     INV        6
DELETE     WIP        2
DELETE     WSH        123230

b) To get specific info on the SQL statements and the objects contributing to the
redo, e.g.:
SQL> select count(*), seg_name from v$logmnr_contents
     where seg_owner = 'WSH' group by seg_name;
SQL> select sql_redo, rollback from v$logmnr_contents
     where operation = 'DELETE' and seg_owner = 'WSH';
Note 1: If the ROLLBACK flag is set to 1, the statement shown under SQL_REDO is
performing a rollback operation.
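
When the investigation is complete, end the LogMiner session to release its
resources:

SQL> execute DBMS_LOGMNR.END_LOGMNR;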

FRD for forms servlet mode deployment


Note 62664.1 Forms Server Logging and Forms Runtime Diagnostics (FRD)
NOTE: There is no such connection activity logging functionality for the new Forms
Server.

Coalesce all IOTs/indexes

Procedure to manually coalesce all the IOTs/indexes associated with Advanced
Queueing tables, to maintain Enqueue/Dequeue performance and reduce QMON CPU usage
and redo generation (a minimal sketch follows).
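
A minimal sketch of the coalesce operations themselves, using hypothetical object
names; the actual list of AQ IOTs and indexes should be taken from the note
referenced below:

-- hypothetical names: substitute the real AQ IOT and index names
SQL> alter table APPLSYS.AQ$_EXAMPLE_QTABLE_H coalesce;
SQL> alter index APPLSYS.EXAMPLE_AQ_INDEX coalesce;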
Ref. Note : 271855

FND_PROFILE Access SQL

Patch 3819096 -- will reduce the number of executions of the FND_PROFILE SQLs;
these SQLs are currently being executed several times.

E-Business Suite performance patches

Review the MetaLink note "Recommended Performance Patches for the Oracle E-Business
Suite", 244040.1. Recommended performance patches for all the modules and tech
stack components are consolidated in this note.

OA Framework Applications

Refer to MetaLink note 123456.1 (Recommended Patches for Applications) and
MetaLink note 275880.1 (Framework Roadmap).

If running FWK 5.7, ensure that you are running the latest rollup patch for 5.7H.
Refer to MetaLink note 258333.1.

MRP
Methodology and initial investigation for MRP issues.

( A ) Proactive Actions :

( 1 ) - Clean up tables BOM_EXPLOSIONS_TEMP and CST_EXPLOSIONS_TEMP.

Run $BOM_TOP/sql/CSTCSROL.sql. This will delete rows from
BOM_EXPLOSIONS_TEMP and CST_EXPLOSIONS_TEMP 5000 at a time.
NOTE: If CST_EXPLOSIONS_TEMP has been truncated, CSTCSROL.sql will not delete rows
from BOM_EXPLOSIONS_TEMP. If CSTCSROL.sql errors out, there are probably too many
rows in BOM_EXPLOSIONS_TEMP and the table will need to be truncated manually.
( Ref. note 101015.1 point : 4.5.1 )

( 2 ) - Schedule the concurrent job "Purge ATP Temp Tables" ( for 11.5.9 and above,
Ref. note 329398.1 ).

Schedule : Once a week, preferably during weekend off hours, from the start of the
prior run.
Parameter : Age of Entry ( In Hours ) = < as per the input received from customer >
Note : With the parameter 'Age of Entry (in hours)' you can specify the age of the
data you want to delete. For example, if you enter '1', this program will delete
any data that is more than 1 hour old. By default, this parameter can be set to 24
hours. However, since this is a business decision, the customer should approve the
parameter value.
( 3 ) Ensure that no unwanted tracing is enabled on the MRP-related processes.
The following are the two profile options to check; disable tracing if it is not
required for any troubleshooting (a query sketch for checking their current values
follows the profile descriptions):

MRP:Debug Mode
Indicate whether to enable debug messages within Oracle Master Scheduling/MRP.
Available values:
Yes Enable debug messages.
No Do not enable debug messages.
This profile has a predefined value of No upon installation.
You can update this profile at all levels.

MRP:Trace Mode
Indicate whether to enable the trace option within Oracle Master Scheduling/MRP
and Supply Chain Planning. Available values are listed below:
Yes Enable the trace option within Oracle Master Scheduling/MRP and Supply Chain
Planning.
No Do not enable the trace option within Oracle Master Scheduling/MRP and Supply
Chain Planning.
This profile has a predefined value of No upon installation.
You can update this profile at all levels.
( ref. note : 111955.1 )
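
A minimal sketch for checking the current values of these two profiles from SQL.
The internal option names used here (MRP_DEBUG, MRP_TRACE) are assumptions and
should be verified against the profile definitions in your release:

select po.profile_option_name, pov.level_id, pov.profile_option_value
from fnd_profile_options po, fnd_profile_option_values pov
where po.profile_option_id = pov.profile_option_id
and po.application_id = pov.application_id
and po.profile_option_name in ('MRP_DEBUG', 'MRP_TRACE');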

( 4 ) Minimum desired processes for Standard Manager :

The Planning Manager kicks off Planning Manager workers, which run under the
Standard Manager by default. It is therefore necessary to have a sufficient number
of Standard Manager processes. Validate that you have AT LEAST as many Standard
Manager processes as given by the following formula.
Minimum recommended value for Standard Manager processes, in relation to the value
of the profile option "MRP:Snapshot Workers" :
4 + (2 * value of MRP:Snapshot Workers)
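For example, with MRP:Snapshot Workers set to 8, the minimum recommended number of
Standard Manager processes is 4 + (2 * 8) = 20.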
Ref Note : 101015.1 , Tar # 16360058.6

( B ) Methodology for Reactive MRP related performance TARs :

( 1 ) Identifying the long-running child job and taking a trace :

Generally, an MRP job spawns many child jobs, and the difficult task is to identify
the long-running child job. The following steps will help in identifying the
long-running child jobs :

Steps :

1. Ensure that NONE of the MRP concurrent programs have trace enabled.
2. Ensure that the profile MRP: Trace Mode is set to No at the site level and Yes
at the user level for the submitting user, to get a trace of the program submitted
by that user.
3. Ensure that the profile MRP: Debug Mode is set to No at the site level and Yes
at the user level for the submitting user, to get the trace of the program
submitted by that user.
4. Request the customer to run the plan.
5. The customer provides the request ID of the main MRP concurrent program.
6. Run the following script to identify the trace files associated with each child
concurrent program, and upload the output to the TAR. The output will also show
which child request took more time.

set linesize 250 verify off heading on


spool requests.out
column "Program Name" format A37
column "Delay" format 999.99
column "Elapsed" format 999.99
select /*+ ORDERED USE_NL(x fcr fcp fcptl) */
fcr.request_id "Request ID",
fcr.oracle_process_id "Process ID",
fcptl.user_concurrent_program_name "Program Name",
fcr.phase_code,
fcr.status_code,
to_char(fcr.request_date,'DD-MON-YYYY HH24:MI:SS') "Submitted",
(fcr.actual_start_date - fcr.request_date)*1440 "Delay",
to_char(fcr.actual_start_date,'DD-MON-YYYY HH24:MI:SS') "Start Time",
to_char(fcr.actual_completion_date, 'DD-MON-YYYY HH24:MI:SS') "End Time",
(fcr.actual_completion_date - fcr.actual_start_date)*1440 "Elapsed"
from (select /*+ index (fcr1 FND_CONCURRENT_REQUESTS_N3) */
fcr1.request_id
from fnd_concurrent_requests fcr1
where 1=1
start with fcr1.request_id = &parent_request_id
connect by prior fcr1.request_id = fcr1.parent_request_id) x,
fnd_concurrent_requests fcr,
fnd_concurrent_programs fcp,
fnd_concurrent_programs_tl fcptl
where fcr.request_id = x.request_id
and fcr.concurrent_program_id = fcp.concurrent_program_id
and fcr.program_application_id = fcp.application_id
and fcp.application_id = fcptl.application_id
and fcp.concurrent_program_id = fcptl.concurrent_program_id
and fcptl.language = 'US'
order by 1
/
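
Once the long-running child request is identified, its raw trace file can usually
be located from the "Process ID" column reported above. A minimal sketch, assuming
a trace written to user_dump_dest with the conventional <SID>_ora_<process id>.trc
naming (location and naming vary by platform and release):

SQL> select value from v$parameter where name = 'user_dump_dest';
-- trace file: <user_dump_dest>/<ORACLE_SID>_ora_<Process ID>.trc
$ tkprof <raw trace file> <tkprof output file> sort=prsela,exeela,fchela sys=no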

( 2 ) Scripts to find out the number of rows and possible fragmentation issues :

Use the following SQL to identify a possible fragmentation issue which may be
causing the performance problem.
Find out how many rows are in the MRP tables:
select table_name, num_rows, last_analyzed
from all_tables
where table_name like 'MRP%';
Check the actual bytes allocated to these tables :

SELECT owner, SUBSTR(segment_name,1,40) table_or_index_name,
       bytes table_or_index_size,
       tablespace_name,
       partition_name, pct_increase, blocks, extents, initial_extent, next_extent,
       min_extents, max_extents
FROM dba_segments
WHERE segment_name LIKE 'MRP%'
AND bytes > 100000000;

( the above will consider tables of 100 MB or more )


Check the fragmentation of the tables involved in MRP plan runs

set linesize 200 pagesize 30 verify off trimspool on trimout on


set pages 3000
column "Segment Name" format A36
column "Tablespace" format A12
column "Segment Type" format A16
column "Partition Name" format A30
column "Bytes" format 999,999,999,999
select
owner || '.' || segment_name "Segment Name",
partition_name "Partition Name",
segment_type "Segment Type",
tablespace_name "Tablespace",
initial_extent "Init Ext",
next_extent "NextExt",
extents "Extents",
max_extents "Max Ext",
bytes "Bytes",
blocks "Blocks",
pct_increase "Pct Incr"
from dba_segments
Where owner in ('MRP','BOM','INV','WIP','ONT','PO')
and (segment_name like 'MRP%'
or segment_name like 'BOM%'
or segment_name like 'INV%'
or segment_name like 'WIP%'
or segment_name like 'ONT%'
or segment_name like 'PO%')
order by segment_name;
( 3 ) Some General Guidelines
Performance issues with MRP are usually a maintenance issue.
1) All the notes referred to in Note 101015.1 can be provided to the customer for
their consideration. A summary of these notes includes:
2) Purge all unnecessary MDS/MPS/MRP/DRP plans.
3) Purge all unnecessary forecasts.
4) Forecast at the weekly and period level as much as possible. Daily buckets
consume much more memory in the process.
5) Gather statistics at greater than 50% for MRP, BOM, PO, ONT, CST, WIP and
FND on at least a weekly basis (a sketch follows this list).
6) Keep the number of extents on DB tables and indexes low.
7) Use soft pegging when pegging is needed, if at all possible. End-assembly
pegging takes up more processing. Avoid pegging where it is not needed.
8) Never run MRP plans when another big process, such as period close, is running.
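
As an illustration of guideline 5, statistics for Applications schemas are normally
gathered with FND_STATS rather than plain DBMS_STATS. A minimal sketch, assuming
the standard schema-name and estimate-percent arguments:

SQL> exec fnd_stats.gather_schema_stats('MRP', 50);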

( 4 ) Useful Notes :
279156.1 - RAC Configuration Setup For Running MRP Planning, APS Planning, and
Data Collection Processes
101015.1 - Troubleshooting Performance Issues Relating to MRP, DRP, SCP
