Motivation
* Ask for the right information
* Diagnose the problem correctly
* Reduce the time a performance bug/issue remains open
* Resolve the problem quickly
Do not gather statistics excessively on entire schemas or the entire database, such as
nightly or weekly. Do not gather statistics on permanent objects during peak intervals.
Use only FND_STATS or the Gather Schema Statistics and Gather Table Statistics
concurrent programs.
Do NOT use the ANALYZE or DBMS_STATS commands directly. This is not supported, and
results in sub-optimal plans.
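For illustration, a minimal sketch of the supported calls from SQL*Plus; the schema, table, and sampling percent below are example values, not prescriptions:

```sql
-- Supported: the FND_STATS wrappers, never ANALYZE or DBMS_STATS directly.
exec fnd_stats.gather_schema_statistics('APPLSYS');
-- Or for a single table, at an example 10% sample:
exec fnd_stats.gather_table_stats('APPLSYS', 'FND_USER', 10);
```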
Review the table and index statistics for the objects that appear in the top SQL section
of Statspack.
Avoid collecting statistics on global temporary tables; instead, force the index for best
performance.
Cause: Poor performance was noted in the "Fund Mgmt page" of FII (Oracle Financials
Intelligence). Trace file analysis showed high-cost full table scans on two global
temporary tables (FII_GL_SNAP_SUM_F, FII_PMV_AGGRT_GT).
Fix:
After 100% statistics were collected on these two tables, the full table scans on the
global temporary tables were avoided.
The following test worked fine for FND_STATS package version 115.47.
Test scenario:
LAST_ANALYZED NUM_ROWS
--------------- ----------
( B ) Test using "fnd_stats.gather_schema_statistics" ( SID=STESTI )
FND_CONCURRENT_REQUESTS
( Note: The above test worked fine for "fnd_stats.gather_schema_statistics", but when
using "fnd_stats.gather_table_stats" it collected table statistics, which is expected. )
( C ) Test on FND_STATS package version 115.46. ( The test did not work in this case. )
Though it indicates "PL/SQL procedure successfully completed.", it does not insert any
rows into the table FND_EXCLUDE_TABLE_STATS.
2. Inserted a row for the FND_CONCURRENT_REQUESTS table into
FND_EXCLUDE_TABLE_STATS using an INSERT statement.
3. Gathered statistics on the APPLSYS schema using "fnd_stats.gather_schema_statistics",
and it collected schema statistics.
4. The change made to FND_EXCLUDE_TABLE_STATS was then reverted.
As per direction from the Apps performance team, use the Gather Auto option when
running the Gather Schema Statistics concurrent request.
To implement the Gather Schema Statistics concurrent request with the Gather Auto option,
the tables must be placed in monitoring mode.
The following steps implement the change, first in the test instance and then in
production.
STEP 1.
STEP 2.
STEP 3.
connect apps/xxxxxx
exec fnd_stats.enable_schema_monitoring('ALL');
If errors occur while changing tables to monitoring mode, bounce all middle-tier
components along with the database and retry.
STEP 4.
SQL> select count(1), monitoring from dba_tables group by monitoring;
STEP 5.
Restart concurrent managers.
STEP 6.
Schedule the Gather Schema Statistics concurrent request with the Gather Auto option to
run after business hours, every Saturday at 1 AM, for ALL schemas with
Estimate=10%, Degree=3, Backup Flag=NOBACKUP, History Mode=None,
Gather Options=Gather Auto.
Application Profiles
FND: Enable Cancel Query
Concurrent: Allow Debugging
The profile "Concurrent: Allow Debugging" should be set to "Yes."
QP: Debug
Ensure that the profile "QP: Debug" is set to 'N'.
Concurrent Program
Concurrent Program Concurrent Form VIEW/SUBMIT is too slow
Cause:
Performance of the concurrent form VIEW/SUBMIT is too slow (TAR 15927120.600).
After patch 2502208 and 100% statistics on the tables, the execution plan shows
multiple hash joins, with elapsed time going up to 9.48 seconds.
********************************************************************************
SELECT CONCURRENT_PROGRAM_ID, PROGRAM_APPLICATION_ID, PRINTER,
       PROGRAM_SHORT_NAME, ARGUMENT_TEXT, PRINT_STYLE, USER_PRINT_STYLE,
       SAVE_OUTPUT_FLAG, ROW_ID, ACTUAL_COMPLETION_DATE, COMPLETION_TEXT,
       PARENT_REQUEST_ID, REQUEST_TYPE, FCP_PRINTER, FCP_PRINT_STYLE,
       FCP_REQUIRED_STYLE, LAST_UPDATE_DATE, LAST_UPDATED_BY, REQUESTED_BY,
       HAS_SUB_REQUEST, IS_SUB_REQUEST, UPDATE_PROTECTED, QUEUE_METHOD_CODE,
       RESPONSIBILITY_APPLICATION_ID, RESPONSIBILITY_ID, CONTROLLING_MANAGER,
       LAST_UPDATE_LOGIN, PRIORITY_REQUEST_ID, ENABLED, REQUESTED_START_DATE,
       PHASE_CODE, HOLD_FLAG, STATUS_CODE, REQUEST_ID, PROGRAM, REQUESTOR, PRIORITY
  FROM FND_CONC_REQ_SUMMARY_V
 WHERE nvl(request_type, 'X') != 'S'
   AND (trunc(request_date) >= trunc(sysdate - :1))
 ORDER BY REQUEST_ID DESC
EXPLAIN_PLAN
---------------------------------------------------------------------------------------------
11 SELECT STATEMENT Opt_Mode:CHOOSE
10 SORT (ORDER BY)
9 . HASH JOIN
1 .. TABLE ACCESS ***(FULL)*** OF 'APPLSYS.FND_CONCURRENT_PROGRAMS_TL' (ROWS:6698 BLKS:120)
8 .. HASH JOIN
2 ... TABLE ACCESS ***(FULL)*** OF 'APPLSYS.FND_USER' (ROWS:178 BLKS:20)
7 ... HASH JOIN
3 .... TABLE ACCESS ***(FULL)*** OF 'APPLSYS.FND_CONCURRENT_PROGRAMS' (ROWS:6698 BLKS:150)
6 .... HASH JOIN (OUTER)
4 ....+ TABLE ACCESS ***(FULL)*** OF 'APPLSYS.FND_CONCURRENT_REQUESTS' (ROWS:416042 BLKS:41295)
5 ....+ TABLE ACCESS ***(FULL)*** OF 'APPLSYS.FND_PRINTER_STYLES_TL' (ROWS:72 BLKS:4)
EXPLAIN_PLAN
---------------------------------------------------------------------------------------------
SELECT STATEMENT
SORT (ORDER BY)
. HASH JOIN
.. TABLE ACCESS (FULL) FND_CONCURRENT_PROGRAMS_TL (ROWS:6698 BLKS:120)
.. HASH JOIN
... TABLE ACCESS (FULL) FND_USER (ROWS:178 BLKS:20)
... HASH JOIN
.... TABLE ACCESS (FULL) FND_CONCURRENT_PROGRAMS (ROWS:6698 BLKS:150)
.... HASH JOIN (OUTER)
....+ TABLE ACCESS (FULL) FND_CONCURRENT_REQUESTS (ROWS:416042 BLKS:41295)
....+ TABLE ACCESS (FULL) FND_PRINTER_STYLES_TL (ROWS:72 BLKS:4)
Legend: IRS = INDEX RANGE SCAN, IUS = INDEX UNIQUE SCAN, IFS = INDEX FULL SCAN, BIR = BY INDEX ROWID
TBL TABLE_NAME                     NUM_ROWS (ACTUAL) BLOCKS
--- ------------------------------ ----------------- ------
01  FND_CONCURRENT_PROGRAMS        6698              150
02  FND_CONCURRENT_PROGRAMS_TL     6698              120
03  FND_CONCURRENT_REQUESTS        416042            41295
04  FND_PRINTER_STYLES_TL          72                4
ACTIONS:
Rakesh Tikku from the Apps performance development group evaluated the
"Concurrent Form submit/view" performance issue and recommended the following
actions. When the profile option below is set to "No", the concurrent form will not
re-query the FND_CONCURRENT_REQUESTS table, which had previously shown slow
response.
SELECT CONCURRENT_PROGRAM_ID, PROGRAM_APPLICATION_ID, PRINTER,
       PROGRAM_SHORT_NAME, ARGUMENT_TEXT, PRINT_STYLE, USER_PRINT_STYLE,
       SAVE_OUTPUT_FLAG, ROW_ID, ACTUAL_COMPLETION_DATE, COMPLETION_TEXT,
       PARENT_REQUEST_ID, REQUEST_TYPE, FCP_PRINTER, FCP_PRINT_STYLE,
       FCP_REQUIRED_STYLE, LAST_UPDATE_DATE, LAST_UPDATED_BY, REQUESTED_BY,
       HAS_SUB_REQUEST, IS_SUB_REQUEST, UPDATE_PROTECTED, QUEUE_METHOD_CODE,
       RESPONSIBILITY_APPLICATION_ID, RESPONSIBILITY_ID, CONTROLLING_MANAGER,
       LAST_UPDATE_LOGIN, PRIORITY_REQUEST_ID, ENABLED, REQUESTED_START_DATE,
       PHASE_CODE, HOLD_FLAG, STATUS_CODE, REQUEST_ID, PROGRAM, REQUESTOR, PRIORITY
  FROM FND_CONC_REQ_SUMMARY_V
 WHERE nvl(request_type, 'X') != 'S'
   AND (trunc(request_date) >= trunc(sysdate - :1))
   AND (REQUESTED_BY = :2)
 ORDER BY REQUEST_ID DESC
1. Set the profile option "Concurrent: Show Requests Summary After Each Request
Submission" to No.
2. Change this first for a specific user, and once the customer confirms it is OK, move
forward to
EXPLAIN PLANS (rk1.sql AUOHSE03O07_PE03OI_S0015)
=============
EXPLAIN_PLAN
--------------------------------------------------------------------------------------------------
16 SELECT STATEMENT Opt_Mode:CHOOSE
15 SORT (ORDER BY)
14 . NESTED LOOPS
11 .. NESTED LOOPS
8 ... NESTED LOOPS (OUTER)
5 .... NESTED LOOPS
2 ....+ TABLE ACCESS (BY INDEX ROWID) OF 'APPLSYS.FND_USER'
1 ....+. INDEX (UNIQUE SCAN) OF 'APPLSYS.FND_USER_U1' (UNIQUE) SEARCH_COLUMNS=1
4 ....+ TABLE ACCESS (BY INDEX ROWID) OF 'APPLSYS.FND_CONCURRENT_REQUESTS'
3 ....+. INDEX (RANGE SCAN) OF 'APPLSYS.FND_CONCURRENT_REQUESTS_N1' (NON-UNIQUE) SEARCH_COLUMNS=1
7 .... TABLE ACCESS (BY INDEX ROWID) OF 'APPLSYS.FND_PRINTER_STYLES_TL'
6 ....+ INDEX (UNIQUE SCAN) OF 'APPLSYS.FND_PRINTER_STYLES_TL_U1' (UNIQUE) SEARCH_COLUMNS=2
10 ... TABLE ACCESS (BY INDEX ROWID) OF 'APPLSYS.FND_CONCURRENT_PROGRAMS'
9 .... INDEX (UNIQUE SCAN) OF 'APPLSYS.FND_CONCURRENT_PROGRAMS_U1' (UNIQUE) SEARCH_COLUMNS=2
13 .. TABLE ACCESS (BY INDEX ROWID) OF 'APPLSYS.FND_CONCURRENT_PROGRAMS_TL'
12 ... INDEX (UNIQUE SCAN) OF 'APPLSYS.FND_CONCURRENT_PROGRAMS_TL_U1' (UNIQUE) SEARCH_COLUMNS=3
EXPLAIN_PLAN
---------------------------------------------------------------------------------------
SELECT STATEMENT
SORT (ORDER BY)
. NESTED LOOPS
.. NESTED LOOPS
... NESTED LOOPS (OUTER)
.... NESTED LOOPS
....+ TABLE ACCESS (BIR) FND_USER
....+. IUS FND_USER_U1 USER_ID
....+ TABLE ACCESS (BIR) FND_CONCURRENT_REQUESTS
....+. IRS FND_CONCURRENT_REQUESTS_N1 REQUESTED_BY ACTUAL_COMPLETION_DATE
.... TABLE ACCESS (BIR) FND_PRINTER_STYLES_TL
....+ IUS FND_PRINTER_STYLES_TL_U1 PRINTER_STYLE_NAME LANGUAGE
... TABLE ACCESS (BIR) FND_CONCURRENT_PROGRAMS
.... IUS FND_CONCURRENT_PROGRAMS_U1 APPLICATION_ID CONCURRENT_PROGRAM_ID
.. TABLE ACCESS (BIR) FND_CONCURRENT_PROGRAMS_TL
... IUS FND_CONCURRENT_PROGRAMS_TL_U1 APPLICATION_ID CONCURRENT_PROGRAM_ID LANGUAGE
TBL TABLE_NAME                     NUM_ROWS (ACTUAL) BLOCKS
--- ------------------------------ ----------------- ------
01  FND_CONCURRENT_PROGRAMS        6698              150
02  FND_CONCURRENT_PROGRAMS_TL     6698              120
03  FND_CONCURRENT_REQUESTS        425705            43725
04  FND_PRINTER_STYLES_TL          72                4
05  FND_USER                       178               20

Purge Concurrent Request and/or Manager
Cause:
A huge number of records were present in the FND_CONCURRENT_REQUESTS table, and
execution of the concurrent program was taking a very long time (>300 and <500 minutes)
to complete under a 15-day purge policy. Patch 3591639 was expected to resolve the issue
but did not improve performance.
Trace file analysis found the following SQL statement:
SELECT COUNT(*) FROM FND_FILE_TEMP T WHERE T.FILE_ID = :B1
CPU time went up to 3736.33 seconds and total elapsed time up to 38683.68 seconds, with
63.5 million disk reads and 75.3 million buffer gets to access just 14653 rows.
Fix:
There is a known issue related to RAC. WebIV Note 304748.1 describes how to set up FNDFS
appropriately so that files on one node can be accessed by requests from another node. It
is also recommended to set the init.ora parameter max_commit_propagation_delay=0;
the default value of this parameter is 700.
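As a sketch, on an spfile-based instance the parameter change could look like the following; it is a static parameter, so an instance restart is required (verify the exact behavior against your database version's documentation):

```sql
-- max_commit_propagation_delay is static: change it in the spfile and restart.
ALTER SYSTEM SET max_commit_propagation_delay = 0 SCOPE = SPFILE;
```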
Another problem is related to the FND_FILE_TEMP table. With no indexes on that table,
performance degrades; AOL product support recommended creating two non-unique
indexes, on the REQUEST_ID and FILE_ID columns.

INCTM - Concurrent Manager Consumes a Lot of Memory
INCTM Using Excessive Memory After 11.5.9 Upgrade. Reference Note 275528.1;
Patch 3517095.
Performance issues were observed with the "Cost Rollup - No Report" and
"Update Standard Cost" programs.
Cause: Large number of rows in temporary tables.
Synchronize Workflow
Cause:
Unused space is wasted.
Fix:
Run the bulk synchronization concurrent program "Synchronize WF LOCAL tables". By
default this request set should be scheduled to run once a day to provide a minimal
level of synchronization, or it can be scheduled to synchronize more frequently.

Purge Obsolete Generic File Manager Data
Cause:
If the following query shows too many rows (perhaps >10,000), the recommendation is to
purge the old (expired) data:
select program_name, count(*) from fnd_lobs group by program_name;
Fix:
Change PCTVERSION to zero for the FND_LOBS table:
ALTER TABLE FND_LOBS MODIFY LOB (FILE_DATA) (PCTVERSION 0);
Then add the concurrent program "Purge Obsolete Generic File Manager Data" through
SYSADMIN and schedule it to run with default parameters, which purges only
expired (junk) documents.
Note 171272.1 - How to Drop Old/Expired Export and Attachment Data From FND_LOBS.
Also check bug 4393550, logged against development, for clarification.

SFM SM Interface Test Service
Top processes on the server map to "java" processes using high CPU.
Cause:
Investigation of the underlying Java processes indicates that the CCM "SFM SM
Interface Test Service" manager is writing to its log file endlessly.
Fix:
Request the customer to review MetaLink note 304946.1 and provide a CRT to disable the
manager.

Purge Debug Log And System Alerts
Purge Debug Log And System Alerts (running for long hours): Bug 4505316; the solution
is provided in WebIV Note 332103.1. Env: 11.5.10 CU1. TARs: 15529024.6 & 15530194.6
(ENGENIOI); TAR 4630500.993 (AMERICAN AIRLINES INC).

Rebuild Help Search Index
Rebuilding Intermedia Index for Task Names
Synchronize JTF_NOTES_TL_C1 index
Customer text data creation and indexing
Service Request Synchronize Index
DR$PENDING and DR$WAITING have too many rows. Customers facing this problem
usually run Oracle Apps. WebIV Note 311583.1 provides more details on this issue:
FND_LOBS is a table that has the index FND_LOBS_CTX.
The table grows and the index is not synchronized, so the pending rows to be
synchronized are inserted into DR$PENDING and DR$WAITING. A complete explanation is
in Note 104262.1.
Solution:
To implement the solution, please execute the following steps:
1. Purge FND_LOBS following instructions in these notes: Note Id:298698.1 Note
Id:171272.1
2. Sync the index FND_LOBS_CTX periodically. Note 119172.1 explains how to synchronize
a text index.
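As a minimal sketch (assuming the required CTXSYS grants are in place; see Note 119172.1 for the supported procedure), a manual synchronization of the pending rows would look like:

```sql
-- Process the DR$PENDING entries for the FND_LOBS text index.
exec ctx_ddl.sync_index('FND_LOBS_CTX');
```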
3. Execute the following query to find out what are the indexes that still have pending
rows in DR$PENDING:
connect as ctxsys:
select u.username, i.idx_name from dr$index i, dba_users u where
u.user_id=i.idx_owner# and idx_id in (select pnd_cid from dr$pending);
There is no standard FND way to maintain all the text indexes used by Oracle
Applications. They are usually maintained by running specific concurrent requests owned
by the product that owns the index.

Get Details of a Particular Concurrent Request
A script is available through note 134035.1.
(1)
The top command shows process id 29755 spiking at 99% CPU; this maps to concurrent
program RAXTRX (Autoinvoice Import Program), request id 2340521, initiated by user
JCHACKO.
============================================================
11:27am up 63 days, 15:06, 3 users, load average: 1.72, 0.85, 0.75
248 processes: 242 sleeping, 5 running, 0 zombie, 1 stopped
CPU0 states: 96.1% user, 3.3% system, 0.0% nice, 0.0% idle
CPU1 states: 99.4% user, 0.1% system, 0.0% nice, 50% idle
Mem: 8227704K av, 8222872K used, 4832K free, 1152896K shrd, 68932K buff
Swap: 12578768K av, 684016K used, 11894752K free 6054660K cached
PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
29755 orpe03oi 25 0 427M 426M 420M R 99.6 5.3 0:48 oracle
14039 orpe03oi 15 0 101M 100M 99.8M S 0.3 1.2 0:08 oracle
24710 kmoizudd 15 0 1076 1076 752 R 0.3 0.0 0:08 top
12166 orpe03oi 15 0 99184 94M 97276 S 0.1 1.1 0:00 oracle
12177 orpe03oi 15 0 99704 95M 97796 S 0.1 1.1 0:00 oracle
============================================================
(2)
A trace of this concurrent program taken from the backend showed that the top SQL was
an INSERT INTO RA_INTERFACE_ERRORS statement performing 10807521 (about 10 million)
buffer gets for a single execution, which is quite expensive; elapsed CPU time went up
to 63.58 seconds while writing 0 rows.
********************************************************************************
INSERT INTO RA_INTERFACE_ERRORS
(INTERFACE_LINE_ID,
MESSAGE_TEXT,
INVALID_VALUE)
SELECT
INTERFACE_LINE_ID,
:b_err_msg6,
'trx_number='||T.TRX_NUMBER||','||'customer_trx_id='||TL.CUSTOMER_TRX_ID
FROM RA_INTERFACE_LINES_GT IL, RA_CUSTOMER_TRX_LINES TL,
RA_CUSTOMER_TRX T
WHERE IL.REQUEST_ID = :b1
AND IL.INTERFACE_LINE_CONTEXT = 'ORDER ENTRY'
AND T.CUSTOMER_TRX_ID = TL.CUSTOMER_TRX_ID
AND IL.INTERFACE_LINE_CONTEXT = TL.INTERFACE_LINE_CONTEXT
AND IL.INTERFACE_LINE_ATTRIBUTE1 = TL.INTERFACE_LINE_ATTRIBUTE1
AND IL.INTERFACE_LINE_ATTRIBUTE2 = TL.INTERFACE_LINE_ATTRIBUTE2
AND IL.INTERFACE_LINE_ATTRIBUTE3 = TL.INTERFACE_LINE_ATTRIBUTE3
AND IL.INTERFACE_LINE_ATTRIBUTE4 = TL.INTERFACE_LINE_ATTRIBUTE4
AND IL.INTERFACE_LINE_ATTRIBUTE5 = TL.INTERFACE_LINE_ATTRIBUTE5
AND IL.INTERFACE_LINE_ATTRIBUTE6 = TL.INTERFACE_LINE_ATTRIBUTE6
AND IL.INTERFACE_LINE_ATTRIBUTE7 = TL.INTERFACE_LINE_ATTRIBUTE7
AND IL.INTERFACE_LINE_ATTRIBUTE8 = TL.INTERFACE_LINE_ATTRIBUTE8
AND IL.INTERFACE_LINE_ATTRIBUTE9 = TL.INTERFACE_LINE_ATTRIBUTE9
AND IL.INTERFACE_LINE_ATTRIBUTE10 = TL.INTERFACE_LINE_ATTRIBUTE10
AND IL.INTERFACE_LINE_ATTRIBUTE11 = TL.INTERFACE_LINE_ATTRIBUTE11
AND IL.INTERFACE_LINE_ATTRIBUTE12 = TL.INTERFACE_LINE_ATTRIBUTE12
AND IL.INTERFACE_LINE_ATTRIBUTE13 = TL.INTERFACE_LINE_ATTRIBUTE13
AND IL.INTERFACE_LINE_ATTRIBUTE14 = TL.INTERFACE_LINE_ATTRIBUTE14
********************************************************************************
(3)
Each execution of the "Autoinvoice Import Program" spikes the CPU load (up to 99%), and
the customer runs this program very frequently (more than 100 executions per day).
WebIV Note 301423.1 explains that when there are no indexes on the line_attribute%
columns, Autoinvoice performance is poor when inserting rows into the
RA_INTERFACE_ERRORS table; the documented fix is to create a concatenated index on the
attribute columns.
(4)
Note: Select the same attribute columns for indexing in all the tables below:
RA_INTERFACE_LINES_ALL
RA_CUSTOMER_TRX_LINES_ALL
RA_CUSTOMER_TRX_ALL
RA_INTERFACE_SALESCREDITS_ALL
The customer needs to provide a CRT for the action plan listed in (4), implement it
first on the development instance, validate the performance of the "Autoinvoice Import
Program" by reviewing the trace, and then implement the same action plan in the
production instance once satisfied.
BEFORE :
CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
102,186,750 27 3,784,694.4 43.4 466.77 686.92 835260576
Module: RAXTRX
INSERT INTO RA_INTERFACE_ERRORS (INTERFACE_LINE_ID, MESSAGE_
TEXT, INVALID_VALUE) SELECT INTERFACE_LINE_ID, :b_err_msg6, '
trx_number'||T.TRX_NUMBER||','||'customer_trx_id'||TL.CUSTOMER
TRX_ID FROM RA_INTERFACE_LINES_GT IL, RA_CUSTOMER_TRX_LINES TL,
RA_CUSTOMER_TRX T WHERE IL.REQUEST_ID = :b1 AND IL.INTERFAC
CPU Elapsd
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
461,116 27 17,078.4 2.6 466.77 686.92 835260576
Module: RAXTRX
INSERT INTO RA_INTERFACE_ERRORS (INTERFACE_LINE_ID, MESSAGE
TEXT, INVALID_VALUE) SELECT INTERFACE_LINE_ID, :b_err_msg6, '
trx_number'||T.TRX_NUMBER||','||'customer_trx_id'||TL.CUSTOMER
_TRX_ID FROM RA_INTERFACE_LINES_GT IL, RA_CUSTOMER_TRX_LINES
TL,
RA_CUSTOMER_TRX T WHERE IL.REQUEST_ID = :b1 AND IL.INTERFAC
EXPLAIN_PLAN
-----------------------------------------------------------------
9 INSERT STATEMENT Opt_Mode:CHOOSE
8 TABLE ACCESS (BY INDEX ROWID) OF 'AR.RA_CUSTOMER_TRX_LINES_ALL'
7 . NESTED LOOPS
5 .. MERGE JOIN (CARTESIAN)
2 ... TABLE ACCESS (BY INDEX ROWID) OF 'AR.RA_INTERFACE_LINES_ALL'
1 .... INDEX (RANGE SCAN) OF 'AR.RA_INTERFACE_LINES_N1' (NON-UNIQUE) SEARCH_COLUMNS=1
4 ... BUFFER (SORT)
3 .... TABLE ACCESS ***(FULL)*** OF 'AR.RA_CUSTOMER_TRX_ALL' (ROWS:83644 BLKS:6720)
6 .. INDEX (RANGE SCAN) OF 'AR.RA_CUSTOMER_TRX_LINES_N2' (NON-UNIQUE) SEARCH_COLUMNS=1
EXPLAIN_PLAN
-----------------------------------------------------------------
INSERT STATEMENT
TABLE ACCESS (BIR) RA_CUSTOMER_TRX_LINES_ALL
. NESTED LOOPS
.. MERGE JOIN (CARTESIAN)
... TABLE ACCESS (BIR) RA_INTERFACE_LINES_ALL
.... IRS RA_INTERFACE_LINES_N1 REQUEST_ID
... BUFFER (SORT)
.... TABLE ACCESS (FULL) RA_CUSTOMER_TRX_ALL (ROWS:83644 BLKS:6720)
.. IRS RA_CUSTOMER_TRX_LINES_N2 CUSTOMER_TRX_ID LINE_NUMBER
AFTER :
EXPLAIN_PLAN
-----------------------------------------------------------------
9 INSERT STATEMENT Opt_Mode:CHOOSE
8 NESTED LOOPS
5 . NESTED LOOPS
2 .. TABLE ACCESS (BY INDEX ROWID) OF 'AR.RA_INTERFACE_LINES_ALL'
1 ... INDEX (RANGE SCAN) OF 'AR.RA_INTERFACE_LINES_N1' (NON-UNIQUE) SEARCH_COLUMNS=1
4 .. TABLE ACCESS (BY INDEX ROWID) OF 'AR.RA_CUSTOMER_TRX_LINES_ALL'
3 ... INDEX (RANGE SCAN) OF 'AR.RA_CUSTOMER_TRX_LINES_ALL_ATR' (NON-UNIQUE) SEARCH_COLUMNS=3
7 . TABLE ACCESS (BY INDEX ROWID) OF 'AR.RA_CUSTOMER_TRX_ALL'
6 .. INDEX (UNIQUE SCAN) OF 'AR.RA_CUSTOMER_TRX_U1' (UNIQUE) SEARCH_COLUMNS=1
EXPLAIN_PLAN
-----------------------------------------------------------------
INSERT STATEMENT
NESTED LOOPS
. NESTED LOOPS
.. TABLE ACCESS (BIR) RA_INTERFACE_LINES_ALL
... IRS RA_INTERFACE_LINES_N1 REQUEST_ID
.. TABLE ACCESS (BIR) RA_CUSTOMER_TRX_LINES_ALL
... IRS RA_CUSTOMER_TRX_LINES_ALL_ATR INTERFACE_LINE_CONTEXT INTERFACE_LINE_ATTRIBUTE1 INTERFACE_LINE_ATTRIBUTE6
. TABLE ACCESS (BIR) RA_CUSTOMER_TRX_ALL
.. IUS RA_CUSTOMER_TRX_U1 CUSTOMER_TRX_ID
Purging FND_LOG_TRANSACTION_CONTEXT
Problem Description :
====================
When logging has been completely disabled (i.e., the profile option AFLOG_ENABLED
is set to 'N'), the table FND_LOG_TRANSACTION_CONTEXT is still populated with
context information, even though this information is only used when logging is
enabled. On large systems with many transactions this table can grow quite large and
needs to be purged regularly. After applying this patch, the table is no longer
populated when logging is disabled.
Customer already applied this patch on 29-MAY-05 and had set the profile option
AFLOG_ENABLED to 'N' and truncated the table
FND_LOG_TRANSACTION_CONTEXT.
Patch Applied Details:
==================
The patch README text ("this table will not be populated if logging is disabled") is
misleading. The customer is experiencing the exact symptoms of the Purge Debug Log
program consuming CPU and taking a long time to complete.
Solution
To implement the solution, please execute the following steps:
OR
If there are too many System Alerts, the purge program starts taking a lot of time.
Perform the following to prevent this problem from recurring:
2. Schedule the Purge CP to run daily and purge all messages older than 7 days.
3. Monitor alerts daily using the OAM Alerts Dashboard. Once an issue is resolved,
manually change the alert state to 'Closed' to allow these alerts to be purged by the
Purge CP.
In 11.5.10 CU3, a provision will be added to stop system alerts altogether. Customers
will be able to disable this feature so that no data is collected, avoiding situations
where excessive new system alerts accumulate in the system.
References
Bug 4505316 - Purge Debug Log Fndlgprg Consuming Large Amount Of Cpu
Bug 4347939 - FND Logging Patch for 11.5.10

Large Number of Rows in DR$WAITING and DR$PENDING Tables
Appendix A: How to Add and Submit the Job "Purge Obsolete Generic File Manager Data"
By default, it is not registered as a sysadmin program, so do the following:
1. Navigate to Security/Responsibility/Request.
2. Query up the group "System Administrator Reports".
3. Scroll all the way to the bottom of the list of reports.
4. Click the plus icon on the top left to add a record.
5. Add the record: Program ... Purge Obsolete Generic File Manager Data.
6. Click the yellow disk icon to save it.
7. You should now see it in the list of Requests that you can run (Request/Run/...).
This enables the Concurrent Manager program "Purge Obsolete Generic File Manager Data"
by assigning it to the SYSADMIN group. Once the program is enabled, you can run it with
the parameter "export" to purge the old/expired data. Note: The expiration date is
automatically set by Applications in the FND_LOBS.EXPIRATION_DATE column.
========== BUG:2148975 ==========
WHAT DOES THE 'PURGE OBSOLETE GENERIC FILE MANAGER DATA' CONCURRENT REQUEST DO?
This PL/SQL procedure (FNDGFMPR) works much like 'Purge Obsolete Workflow Runtime
Data': it removes old, obsolete uploaded files (loaded to the database) for the programs
FND_HELP, export, and FND_ATTACH, all of which run under FNDGFU (Generic File Manager
Access Utility). The program purges data related to the Exports and Attachments
functionality and to Help Builder; it purges data from the FND_LOBS and FND_LOB_ACCESS
tables. At this time this concurrent program is not assigned to a responsibility.
========== BUG:2148975 ==========
Program Parameters:
- Expired - Enter Y to purge expired data only, or N to include all data. The default is Y.
- Program Name - Enter the program name(s) to process. Leave blank to process all programs.
- Program Tag - Enter the program tag(s) to process. Leave blank to process all program tags.
=====================================
References:
Note 104262.1 - Technical Overview: DML Processing in Oracle Text
Note 171272.1 - How to Drop Old/Expired Export and Attachment Data From FND_LOBS
Note 298698.1 - Avoiding abnormal growth of FND_LOBS table in Applications 11i
For more details, please check the document DR_waiting_pending-CheckDetails.
Cause:
DMZ login was redirecting to the internal MT server, and at times the internal MT was
redirecting to the DMZ server.
Fix:
The FND_NODES table plays a major role in deriving the login page URL. After the
upgrade, someone renamed the table and recreated a new one, while all the existing
synonyms, triggers, and indexes still pointed to the renamed table.
1. Run the following SQL statement on the target node to check that the triggers exist
and their status is ENABLED:
SQL> SELECT trigger_name , status FROM user_triggers
WHERE table_name = 'FND_NODES' ;
TRIGGER_NAME STATUS
------------------------------ ----------
FNDSM ENABLED
UPNAME ENABLED
- If the status of the triggers shows DISABLED, enable them as follows:
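For example, using the trigger names returned by the query above (run as the trigger owner, e.g. APPLSYS):

```sql
ALTER TRIGGER FNDSM ENABLE;
ALTER TRIGGER UPNAME ENABLE;
```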
2. The other possible cause is a mismatch between FND_NODES and the APPL_SERVER_ID in
the dbc file, which can cause intermittent URL redirection.
Execute the query below to find the server_id of all middle tiers of the instance.
SQL> col NODE_NAME for a30
SQL> col SERVER_ID for a75
SQL> col SERVER_ADDRESS for a20
SQL> set lines 1000
SQL> select NODE_NAME,SERVER_ID,SERVER_ADDRESS from fnd_nodes;
Verify the SERVER_ID values from the above result against the APPL_SERVER_ID entry in
the dbc files of the corresponding nodes. The dbc file is located in
$FND_TOP/secure/<db hostname>_<sid>.dbc; $FND_TOP/secure may be a softlink to the
$HOME/secure directory. If there is a mismatch, edit the dbc file with the correct
APPL_SERVER_ID entry. If FND_NODES is also not populated with proper values, regenerate
the dbc files on that node.
Recreate dbc files on Middle tier:
$cd $COMMON_TOP/admin/install/<instance name>_<MT hostname>
$./adgendbc.sh apps <appspass>
This script creates dbc files with correct APPL_SERVER_ID entry and also it populates
fnd_nodes table with correct entries.
3. Verify that the correct GUEST password is provided in the dbc file. Use the query
below to check:
SQL> select fnd_web_sec.validate_login('GUEST','<guest password>') from dual;
If the query returns Y, the password is correct. If it returns N, update the dbc file
with the correct GUEST password. These changes do not require a bounce of any service.
Note: Before making any changes to the FND_NODES table, take a backup:
create table FND_NODE_bk as select * from FND_NODES;

Redo Logfile
Investigating high redo log switches using LogMiner.
2. Identify an archivelog generated during excessive switch and issue the following:
SQL> execute DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>
'/panhgi/arch/ArchiveOnLine/PANHGI_1_23238.arc', OPTIONS => DBMS_LOGMNR.NEW);
SQL> execute DBMS_LOGMNR.START_LOGMNR(OPTIONS =>
DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
a) To get a general view of what operation was causing excessive redo switching:
eg:
SQL> select operation,seg_owner,count(*) from v$logmnr_contents group by
operation,seg_owner;
DELETE INV 6
DELETE WIP 2
DELETE WSH 123230
b) To get specific info on the SQL statements and the objects contributing to the redo:
eg:
SQL> select count(*), seg_name from v$logmnr_contents where seg_owner = 'WSH'
group by seg_name;
SQL> select sql_redo, rollback from v$logmnr_contents where operation = 'DELETE'
and seg_owner = 'WSH';
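When the investigation is complete, the LogMiner session can be closed with the standard call to release its resources:

```sql
SQL> execute DBMS_LOGMNR.END_LOGMNR;
```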
Note 1: If the ROLLBACK flag is set to 1, the statement shown under SQL_REDO is
performing a rollback operation.

Caution:
Patch 3819096 will reduce the number of executions of the FND_PROFILE SQLs; these SQLs
are currently executed several times.

E-Business Suite Performance Patches
Review MetaLink note 244040.1, "Recommended Performance Patches for the Oracle
E-Business Suite".
Recommended performance patches for all modules and tech stack components are
consolidated in that note.

OA Framework Applications
If running FWK 5.7, ensure that you are running the latest rollup patch for 5.7H.
Refer to MetaLink note 258333.1.
MRP
Methodology and initial investigation for MRP issues.
( A ) Proactive Actions :
( 2 ) Schedule the concurrent job "Purge ATP Temp Tables" (for 11.5.9 and above; ref.
note 329398.1).
Schedule: Once a week, preferably during weekend off hours, measured from the start of
the prior run.
Parameter: Age of Entry ( In Hours ) = < As per the input received from customer >
Note: The parameter 'Age of Entry (in hours)' specifies the age of the data you want to
delete. For example, if you enter '1', the program deletes any data that is more than
1 hour old. By default this parameter can be set to 24 hours; however, since this is a
business decision, the customer should approve the parameter value.
( 3 ) Ensure that no unwanted tracing is enabled on the MRP-related processes.
The following two profile options control tracing; disable tracing if it is not
required for troubleshooting:
MRP:Debug Mode
Indicate whether to enable debug messages within Oracle Master Scheduling/MRP.
Available values:
Yes Enable debug messages.
No Do not enable debug messages.
This profile has a predefined value of No upon installation.
You can update this profile at all levels.
MRP:Trace Mode
Indicate whether to enable the trace option within Oracle Master Scheduling/MRP
and Supply Chain Planning. Available values are listed below:
Yes Enable the trace option within Oracle Master Scheduling/MRP and Supply Chain
Planning.
No Do not enable the trace option within Oracle Master Scheduling/MRP and Supply
Chain Planning.
This profile has a predefined value of No upon installation.
You can update this profile at all levels.
( ref. note : 111955.1 )
Steps :
1. Ensure that NONE of the MRP concurrent programs have trace enabled.
2. Ensure that the profile MRP: Trace Mode is set to No at the site level, and to Yes at
the user level for the user whose submitted program should be traced.
3. Ensure that the profile MRP: Debug Mode is set to No at the site level, and to Yes at
the user level for the user whose submitted program should be traced.
4. Request the customer to run the plan.
5. The customer provides the request ID of the main MRP concurrent program.
6. Run the following script to identify the trace files associated with each child
concurrent program, and upload the output to the TAR. The output will also show which
child request took the most time.
Use the following SQL to identify possible fragmentation issues that may be causing the
performance problem.
Find out how many rows are in the MRP_ tables:
select table_name, num_rows, last_analyzed from all_tables where table_name like 'MRP%';
Check the actual bytes allocated to these tables:
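One way to check, as a sketch against DBA_SEGMENTS (the name pattern mirrors the row-count query above and will also match MRP indexes unless filtered by segment type):

```sql
-- Space actually allocated to MRP_ tables, in MB.
select segment_name, round(sum(bytes)/1024/1024, 1) mb
  from dba_segments
 where segment_name like 'MRP%'
   and segment_type = 'TABLE'
 group by segment_name
 order by mb desc;
```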
( 4 ) Useful Notes:
279156.1 - RAC Configuration Setup For Running MRP Planning, APS Planning, and
Data Collection Processes
101015.1 - Troubleshooting Performance Issues Relating to MRP, DRP, SCP