
Troubleshooting of jobs / Ad-hoc job execution / Ad-hoc requests

Price Book IDoc Failures / EDI845 Outbound Failures


PCH Trigger jobs failure
PCE jobs
PQ Job Failures
Retro Failures
Long running job issues
IP Job Failure due to missing profitability
Month-end Monitoring
CPA_IL Lines Release
Validation Failure
Run History, How to Skip a day in CB Processing

Price Book IDoc Failures / EDI845 Outbound IDoc Failures

Whenever there is a failure, send an email to EDI-MAINT and EDI-Technology asking for the special characters as well as the line numbers that caused the failures. They'll reply with information like in the screenshot below.

PQ1 access is not needed to remove the special characters.


The IDoc number will be given in the mail from the EDI team.

Go to the IDoc in WE05 and to the line number mentioned.

Find the material number for the corresponding Customer Material Number.

In WE19, remove the trailing space from the customer material field for the identified material.

Repeat the process for all materials that have trailing non-ASCII characters.

Once the trailing characters have been removed, click on standard outbound processing to re-generate the IDoc without the special characters.

Get the new IDoc number and send it to the EDI team so they can confirm transmission from their end.

The last step is to follow up with the Master Data team to delete the special characters from the source.
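The trailing-character check described above can be pre-screened outside SAP before opening WE19. A minimal sketch, assuming the field value is available as a plain string (the sample values are illustrative):

```python
def trailing_non_ascii(value: str) -> str:
    """Return the run of trailing non-ASCII or whitespace characters
    in a field value (an empty string means the value is clean)."""
    tail = ""
    for ch in reversed(value):
        if ord(ch) > 127 or ch.isspace():
            tail = ch + tail
        else:
            break
    return tail

# A customer material number ending in a non-breaking space (U+00A0)
# would be flagged, while a clean value returns "".
```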

PCH Trigger jobs failure


We've seen two kinds of failures in PCH Trigger jobs so far:

Failure due to an invalid date update (where the date format is wrong, e.g. 12/23/0123)
Failure due to memory overflow (the most frequent one)

Failure due to invalid date update

We convert the trigger to C status and reach out to the user who updated the agreement.

Failure due to memory overflow


AGR / MBL Job Failures

When an AGR or MBL job fails with no details in the job log about the failure, we
have seen that this is due to memory overflows in the batch job.

For AGR records, this is due to the A804 and A807 condition table triggers. In this case, we reset the failed records and reprocess the A804 and A807 records manually in blocks of 50 records each.

Pull up the in-process records from the AGR table with the Job Name and Job ID.

Download that data to Excel to get the GUIDs

Sort the Excel by the TABNAME column to identify the records for A804 and A807
that need to be processed manually.

Reset all of the records using ZMOTC_RESET_PCH_TABLES

Reprocess the A804 and A807 records by GUID in blocks of 50 using the program
ZMOTC_BATCH_UPD_AGR.

Monitor the job to completion
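Splitting the downloaded GUIDs into blocks of 50 for manual reprocessing, as the steps above describe, can be sketched as follows (the GUID values are illustrative):

```python
def chunk_guids(guids, block_size=50):
    """Split a GUID list into fixed-size blocks for manual reprocessing
    with ZMOTC_BATCH_UPD_AGR (block size 50, per the procedure above)."""
    return [guids[i:i + block_size] for i in range(0, len(guids), block_size)]

# 120 GUIDs -> blocks of 50, 50 and 20
blocks = chunk_guids([f"GUID{n:04d}" for n in range(120)])
```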

PCE Job Failures

Handling PCE job failures due to Runtime Errors


Step 1
Whenever a PCE job fails because of runtime errors, no spool is generated (unlike a data error, where we have the spool symbol along with the failed job).

To confirm, click on the job and go to the job log.

The job log shows ST22 dumps.

Step 2
Get the end time of the job to go through ST22 and narrow down the search.

Step 3
Go to TCode ST22 and give the time slot where the job failed. (In the above case the job failed at 03:00, so I gave 02:59:00 to 03:01:00 in the ST22 criteria.)

Click on Start

Dump details confirming BCD_FIELD_OVERFLOW

Step 4
Double click on Error Analysis and search for MARA

Highlighted parts showing Material number that caused issues.

Step 5
Get the records from the ZMTOTC_PRCHGEVTS table in B status by giving the job name and job number as parameters.

Extract the records into one Excel file and look for the records containing the particular material shown in the dumps. In this case there were 3 records for this material. Look for the change that generated this event.

In the above case, one condition record was added (as there is no old value mentioned). Go to the KONH table to get more details on this specific condition record number.

The KONH table points to table A016 and condition type ZB01 (which is an STS condition type for gross price).
Step 6
Go to table A016 and enter the condition record number to get the purchasing requisition and other details required.

Getting more details to enter into MEK3

Step 7
Go to TCode MEK3 (Condition Record Display)

Enter the details that we got from table A016.

Gross Price was added as $0.00 for Material # M6160-2, causing issues for PCH as well as PQ jobs. (Note: if we try to run PQ for this material with UoM CS, the PQ job will also fail with a dump.)

Step 8
Next steps: process the records other than this particular record.
Apart from the records containing the erroneous material, convert everything else (all other GUIDs from this particular job) to status A.
Program name: ZMOTC_RESET_PCH_TABLES

In our case, there were only 2 such records.

These records will be picked up by the normal PCE cyclic jobs; look for status C once they are processed.

Records converted to C and the PCH table updated with the information.

Handling PCE job failures due to Duplicate Insert Errors

Extract the GUIDs based on the job number and status B.


Rather than resetting everything in one go (using program ZMOTC_RESET_PCH_TABLES), reset in smaller chunks (for 10,000 records we generally use 2,000 as the limit for a single batch).
Wait for one chunk to be processed completely, then reset the next batch.
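The chunked reset described above amounts to ceiling division over the batch limit; a small sketch (the limit of 2,000 comes from the text, the function name is mine):

```python
BATCH_LIMIT = 2000  # reset limit we generally use per batch

def reset_batches_needed(total_records, limit=BATCH_LIMIT):
    """How many sequential reset runs are needed when resetting the
    in-process records in fixed-size chunks (each run waits for the
    previous chunk to be fully processed before starting)."""
    return -(-total_records // limit)  # ceiling division

# 10,000 records at a 2,000-record limit -> 5 sequential resets
```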

Long-term solution

We need an exception mechanism to avoid the failure while inserting the records into the ZMTOTC_PCH_DEL table.

CPA_IL Execution

Execution of CPA_IL report

Purpose: an order changes to incomplete after being invoiced when the expected price falls outside the price tolerance; this scenario mainly occurs when the customer must be given the expected price in view of further business requirements. These orders need to be released.

Steps to be followed for running the CPA_IL report:

Log in to the SAP production system PE2.

Transaction code: ZMOTC_CPA_IL
Enter the following details on the selection screen (2140/10&30/00):
Sales Org: 2140
Distribution Channel: 10 (normal), 20, 30 (value link customer)
Division: 00
Execute (F8).
After executing, review the result and perform the action below:
If any orders are found in the report list, release those orders and inform your respective team lead / the client side via email that the orders have been released.

Retro Trigger job failure

The RETRO_TRIGGER_MASTERJOB_MBL0518 job was long running. Since this job was long running and would run into business hours, we had to kill it with the help of the Basis team. The master job, Retro Monitor, was cancelled automatically while the child job, i.e. the membership trigger job, was long running.

The job (membership job) was cancelled at step 231, and hence the GUIDs have to be extracted from step 232.

Place the cursor on step 232.

Click Goto.

Click Variant.

From this screen the GUIDs are copied to an Excel sheet, and the process is repeated until the GUIDs of all the steps (232-243) are extracted and saved in an Excel file.

These are the extracted guids.

Then in SM36 a background job is created by giving the job name; in the step we give the program name and variant name, and then the 51 GUIDs of one step are copied and pasted from the Excel sheet onto the edit-variant screen, where there is a field for GUIDs.

The job is scheduled and released immediately.

The same process is repeated by copying the job, editing its name and changing the GUIDs until all the GUIDs are processed.

The GUIDs that were extracted have been processed manually.

Chargeback IP Job Failure


Taking care of accrual job failure due to the reason "The controlling area was not found"

Go to the VBRP table after extracting all the billing documents from the child job which failed due to the missing profitability segment.

Then copy and paste all the billing documents that were extracted into the VBRP selection, give the selection criteria for the profitability segment as '=' (blank) and execute.

The billing documents which have no profitability segment will be shown; note them down.

These billing documents are then excluded in all the billing extract jobs and the accrual job.

Then we exclude the billing documents from the extracted billing documents, and all the remaining documents are copied and pasted as the variant in the new child job that was copied from the failed job.
Now the new copied job is released.
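The VBRP filter above (profitability segment '=' blank) can be sketched offline; the two-column row layout here is an assumption for illustration:

```python
def docs_missing_profitability(vbrp_rows):
    """vbrp_rows: iterable of (billing_document, profitability_segment)
    pairs as extracted from VBRP. Returns the billing documents whose
    profitability segment is blank, i.e. the ones to exclude from the
    billing extract jobs and the accrual job."""
    return sorted({doc for doc, segment in vbrp_rows if not str(segment).strip()})

# Illustrative data: one document has no profitability segment
rows = [("90000001", "0001234567"),
        ("90000002", ""),
        ("90000003", "0001234568")]
```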

ZERO Dollar Billing Document

Copy the child job and exclude the billing document (from the subsequent billing extract jobs as well).

After the job completes, unblock the failed IP Create job.

CB CSV Validation Job Failure


Brief Overview of Validation job

Compares the line items that we have in Recon and in the file (we compare the dollar value as well).
We send IDocs to PI and receive the file from PI via MFT.
We receive one log file as well, along with the files.
This log file contains a summary of the file name, line count and amount.
It creates triggers for loading the files into Hadoop.

Job

Program Name

Variant

Components of Variants

Transmission Medium
We've 2 options here: 1 for validating the CB CSV file, the other for the marketing trace file. Marketing trace file validation will normally run during month-end; however, if there is an ad-hoc file transmission request in between, we'll need to schedule an ad-hoc marketing trace file validation job to push the files to Hadoop.

Manual Run
A manual run is used whenever there is a failure and we've a couple of vendors pending validation; we'll have to re-run the job for the pending vendors by hardcoding the vendors in the vendor selection box.

Ad-hoc Run
This option is used for running an on-demand validation job (to cater to our ad-hoc sales trace requests). There needs to be a .done file before we can run any ad-hoc validation job.
We need the process date as well as the sourcing vendor to run an ad-hoc job.

Sleep time
The program looks for updates to the log file; if the log file is not updated for 30 minutes, it assumes that the log file has been received completely and starts comparing the log file contents against the Recon entries. Sleep time is the interval at which the program checks the update timestamp on the log file. For example, in the screenshot we've 10 minutes (600 secs), which means the system will check the timestamp on the log file every 10 minutes. It can't be very low, as that would keep the program busy checking the log file at very short intervals.
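The sleep-time logic described above reduces to a check on the file's modification timestamp. A minimal sketch in seconds, assuming the 30-minute idle limit and the 600-second sleep time from the text (the function name is mine):

```python
IDLE_LIMIT = 30 * 60  # log file assumed complete after 30 min with no updates
SLEEP_TIME = 600      # polling interval from the variant (10 minutes here)

def log_file_complete(last_modified, now, idle_limit=IDLE_LIMIT):
    """True once the log file's modification timestamp has been unchanged
    for the idle limit, at which point comparison against Recon starts."""
    return (now - last_modified) >= idle_limit
```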

File Formats and Paths on App Server


For log files
/infosystem/PE2/100/PRI/in

As shown in the screenshot:
Sub-directory MSTR_audit_files: for marketing trace file validation
Sub-directory CBClaim_audit_files: for CB CSV file validation

Contents of log file


Example
CBClaim_502312_0000600221_0000302311_P_20151207_20151208.CSV,7,5372.00

CBClaim
Signifies that the file is a CB CSV and not a Market Trace CSV
502312
ZT Vendor
0000600221
PI Vendor
0000302311
VN (Sourcing Vendor)
P
Signifies that it is a paid file, i.e. the vendor can see it on the portal and download it; if the value is U, the CAH team will send the PDF file to the vendor
20151207
The process date for which the file was generated
20151208
The date when the file was created; it is appended dynamically by PI while generating the file
7
The line count
5372.00
The total extended CB amount
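The field breakdown above can be verified with a small parser over the example line (a sketch; the dictionary keys are my own names, not SAP field names):

```python
def parse_audit_line(line):
    """Parse one audit-log line: <file name>,<line count>,<amount>.
    The file name encodes prefix, ZT vendor, PI vendor, sourcing vendor,
    paid flag, process date and creation date, separated by underscores."""
    name, count, amount = line.rsplit(",", 2)
    stem = name.rsplit(".", 1)[0]
    (prefix, zt_vendor, pi_vendor, sourcing_vendor,
     paid_flag, process_date, created_date) = stem.split("_")
    return {
        "file_type": prefix,            # CBClaim or MSTR
        "zt_vendor": zt_vendor,
        "pi_vendor": pi_vendor,
        "sourcing_vendor": sourcing_vendor,
        "paid": paid_flag == "P",       # U means CAH sends a PDF instead
        "process_date": process_date,
        "created_date": created_date,
        "line_count": int(count),
        "amount": float(amount),
    }

info = parse_audit_line(
    "CBClaim_502312_0000600221_0000302311_P_20151207_20151208.CSV,7,5372.00")
```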

WYT3 Table can be used to see vendor functions

Sample snapshot of the Market Trace file (it will have MSTR instead of CBClaim as the prefix in the file name)

Different type of file formats being used for Validation job


.log
Received via MFT when the file gets transferred; it is the initial file used for validation.

.wip
The .log file is converted to .wip during processing.

.adh
The temporary file extension used during processing of an ad-hoc validation.

.done
The .log file is converted to a .done file once processing is complete and successful.

.mis and .cor
These files are created only in case of failures.
The .mis file contains the vendor list that missed validation (either these vendors have fewer lines than expected or have duplicate lines).
The .cor file contains the vendor list for which validation passed successfully.

Taking care of CB CSV Validation Job


1. Failure due to delay in receiving the complete log file
This is the most prominent failure that we've in the validation process.
Sometimes there is a delay in receiving the complete log file from MFT: updates to the log file stop for about 2-3 hours, and our validation job assumes the complete log file has been received and starts comparing the records.
Here, all successfully validated files are copied to the .cor file and the pending vendors are copied to the .mis file.
Step 1
Once the complete log file is received, get the list of vendors pending validation either from the job logs or from the .mis file.
Step 2
For running the ad-hoc job we need a .done file for the same process date; create the done file manually.
Step 3
Schedule an ad-hoc job for all the missing vendors for the same process date.
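Creating the .done marker for the ad-hoc run (Step 2) might look like the sketch below. The directory follows the paths section earlier; treat the exact naming convention and the empty content as assumptions:

```python
from pathlib import Path

# Audit directory for CB CSV validation, per the paths section above
AUDIT_DIR = Path("/infosystem/PE2/100/PRI/in/CBClaim_audit_files")

def create_done_file(log_name, audit_dir=AUDIT_DIR):
    """Create an empty <name>.done marker so an ad-hoc validation job
    can run for the same process date (assumption: an empty file with
    the .done extension is sufficient)."""
    done = Path(audit_dir) / (Path(log_name).stem + ".done")
    done.touch()
    return done
```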

2. Failure due to incomplete files received


We need to check SM58 for any IDoc failures and contact Basis to reprocess those IDocs manually (one by one). We need to be sure that the messages have not already been processed in PI, else it will cause duplicates.

Once all IDocs are processed from SM58, we'll receive one supplemental .log file, and from there the steps are the same as mentioned above.

3. Failure due to duplicated lines


This scenario is rare; we've encountered it only a few times (2-3 times in the last 2 years).
Steps to resolve
1. Either discard the complete file that contains the duplicate lines and resend the IDocs for that file (we can get the IDocs by querying Recon based on Process Date, Transmission Medium and Sourcing Vendor). We need to reset their status to 30 and use the send job to send these IDocs.
2. If we've only 1 or 2 duplicate lines, we can choose to remove the duplicated lines manually and create the triggers manually too (not recommended).

Trigger Path

Content of Trigger File

It is a plain text file that has the same file name as the one that needs to move into Hadoop. Its content is the word "validated".
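A minimal sketch of writing such a trigger file, per the description above (the directory argument is illustrative):

```python
from pathlib import Path

def write_trigger(trigger_dir, file_name):
    """The trigger is a plain-text file with the same name as the file
    to be loaded into Hadoop; its content is the word 'validated'."""
    trigger = Path(trigger_dir) / file_name
    trigger.write_text("validated")
    return trigger
```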

MFT Dashboard to confirm the files for Hadoop Load