
Mass Billing and Convergent Invoicing

Performance and scalability of SAP's EDR management and convergent invoicing solution with IBM System p5 AIX and DB2 9

SAP EVENT DETAIL BILLING - A CROSS INDUSTRY SOLUTION

SAP Solution for Mass Billing and Invoicing

Cross Industry Billing Engine
DB2 9 for Linux, UNIX and Windows
IBM DS8300 Storage
IBM eServer POWER5

Mass Billing and Invoicing

SAP Cross Industry Billing Engine with IBM POWER5 AIX and DB2 9
- Proof of Performance and Scalability -

IBM SAP International Competence Center


Walldorf, Germany

IBM SAP Solutions Center, IBM PSSC, Montpellier, France

DB2 Solutions Center, IBM PSSC, Montpellier, France

Systems Performance and Tuning, IT Delivery, IBM SO, Milan, Italy

e-business Solutions Technical Sales Support for SAP, IBM Hamburg, Germany

SAP Performance, Data Management and Scalability group


SAP AG, Walldorf, Germany

SAP Industry Business Unit Travel and Logistics Services


SAP AG, Walldorf, Germany

SAP Industry Business Unit Utilities


SAP AG, Walldorf, Germany

Version 1.1
March 2008


1 Preface...........................................................................................................................................5
1.1 Document Scope.....................................................................................................................5
1.2 Special Notices.......................................................................................................................5
1.3 Authors of this Document.......................................................................................................5
1.4 With gratitude and acknowledgement of our sponsors..........................................................6
1.5 Project Team...........................................................................................................................6
2 Introduction...................................................................................................................................7
3 Executive Summary.......................................................................................................................8
3.1 Proof of Concept Goals..........................................................................................................8
3.2 Proof of Concept Achievements.............................................................................................9
4 Overview of the Proof of Concept...............................................................................................14
4.1 Solution Overview................................................................................................................14
4.2 Test Scope and Design ........................................................................................................15
4.3 Critical Path Overview: Runtime and Resources................................................................15
4.4 Event Detail Billing Test System: Architecture Overview..................................................20
4.5 Data Model..........................................................................................................................21
4.5.1 Master Data....................................................................................................................21
4.5.2 Historical data................................................................................................................21
4.6 Processing Cycle...................................................................................................................22
4.6.1 Transfer of Event Detail Records..................................................................................22
4.6.2 Creation of billing orders...............................................................................................22
4.6.3 Billing in Contracts Accounts Receivable and Payable................................................23
4.6.4 Invoicing in Contracts Accounts Receivable and Payable............................................23
4.6.5 Payment Processing.......................................................................................................23
4.6.6 Payment Media Generation...........................................................................................24
4.6.7 Correspondence Print....................................................................................................24
4.6.8 Dunning Proposal..........................................................................................................24
4.6.9 Dunning Activity...........................................................................................................25
4.6.10 Deferred Revenues......................................................................................................25
4.7 Load Test Overview.............................................................................................................26
4.8 Detailed Test Requirements .................................................................................................26
4.8.1 Transfer of Event Detail Records (BAPI - Inbound Interface).....................................27
4.8.2 Creation of billing orders .............................................................................................33
4.8.3 Billing in Contracts Accounts Receivable and Payable ..............................................34
4.8.4 Invoicing in Contracts Accounts Receivable and Payable ..........................................42
4.8.5 Payment Run .................................................................................................................48
4.8.6 Payment media creation ................................................................................................52
4.8.7 Correspondence Print....................................................................................................56
4.8.8 Dunning proposal run....................................................................................................60
4.8.9 Dunning activity run .....................................................................................................60
4.8.10 Deferred Revenues......................................................................................................61
5 Solution Expert Sizing.................................................................................................................63
5.1 Solution Sizing.....................................................................................................................63
6 General recommendations...........................................................................................................88


6.1 Distribution of Objects ........................................................................................................88


6.2 Row Compression tests and results.....................................................................................90
7 General Database Observations..................................................................................................91
7.1 General comments................................................................................................................91
7.2 Partitioning...........................................................................................................................91
7.3 Row Compression.................................................................................................................92
7.4 DB2 Configuration used in the project.................................................................................93
8 Database Technology ................................................................................................................94
8.1 Properties of DB2 for Linux, UNIX and Windows..............................................................94
8.2 Ease of use and configuration..............................................................................................94
8.2.1 Automatic Storage Management...................................................................................94
8.2.2 Automatic memory management ..................................................................................95
8.2.3 Automated Table Maintenance......................................................................................95
8.2.4 Deep Compression.........................................................................................................96
8.2.5 Multi-dimensional Clustering (MDC)...........................................................................96
8.2.6 DB2 “optimized for SAP”.............................................................................................98
9 Hardware....................................................................................................................................99
9.1 Logical Landscape Overview...............................................................................................99
9.2 Logical Partition Layout.....................................................................................................100
9.3 Physical Hardware Infrastructure.......................................................................................101
9.4 Storage Layout....................................................................................................................102
10 Hardware Technology.............................................................................................................103
10.1 Attributes of the IBM POWER5 Server...........................................................................103
10.2 Storage Technology.........................................................................................................105
11 Appendix: ...............................................................................................................................108
11.1 Software............................................................................................................................108
11.2 AIX Parameters...............................................................................................................109
11.3 SAP Profiles.....................................................................................................................109
11.4 Database Parameters........................................................................................................112
12 Copyrights and Trademarks.....................................................................................................116


1 Preface
1.1 Document Scope

This joint IBM/SAP whitepaper describes a performance project for billing and invoicing in SAP's
contracts accounts receivable and payable component. This solution is used in various service
industries such as electronic toll collection, communications, media, public transport,
and postal services. The new component is an extension to SAP's contracts accounts
receivable and payable component, available with SAP ECC 6.0 Enhancement Package 2.

This document covers the customizing choices for the billing and invoicing functionality and
describes the scenarios tested. It covers the infrastructure basis, how the SAP components were
configured on the infrastructure and the reasons for the design. The document also covers the
database approach, design and tuning recommendations as implemented on the IBM DB2
database. In this project, the IBM System p server (p5 595) provided the highly scalable server
infrastructure, and the IBM DS8300 storage server fulfilled the high-performance storage
requirements.

This whitepaper was written to support sizing efforts and implementation best practices, with the
expectation that this information can benefit other teams designing, implementing, or
restructuring the business processes for billing and invoicing in the contracts accounts receivable
and payable component.

1.2 Special Notices


Copyright © IBM Corporation, 2007. All Rights Reserved.
All trademarks or registered trademarks mentioned herein are the property of their respective
holders.

1.3 Authors of this Document


o Carol Davis, Senior pSeries Technical Support, ISICC, IBM
o Marc-Stefan Tauchert, e-business solutions Technical Sales Support for SAP, IBM
o Dr. Gerrit Graefe, IBU Utilities, Senior Field Service Expert, SAP AG
o Hans Gerhard Landgraf, IBU Travel and Logistics Services, Solution Manager, SAP AG
o Ursula Zachmann, IT Specialist for SAP Solution Sizing, ISICC, IBM
With Technical Contributions from:
o Thomas Aiche, DB Specialist, DB2 Solutions Center, PSSC, IBM
o Brigitte Blaeser, DB Specialist, SAP DB2 Development/Porting Center, IBM
o Franck Lespinasse, Advisory IT Specialist, PSSC, IBM


1.4 With gratitude and acknowledgement of our sponsors


Roland Buerkle, Senior Vice President, Service Industry Development, SAP AG
Ulrich Marquard, Senior Vice President, Performance, Data Management and Scalability, SAP
Robert Reuben – System p Technical Sales Support Manager, NE and SW IOT, IBM
Laurent Montaron - SAP Strategy and Enablement Manager, Beaverton USA, IBM

1.5 Project Team


Role / Area Person responsible Company
Project Lead IBM Carol Davis IBM Germany
Performance and Benchmark Expert Marc-Stephan Tauchert IBM Germany
Technology – Database DB2 Thomas Aiche IBM France
Technology – Database DB2 Umberto Turini IBM Italy
Technology – Infrastructure, Wdf Jan Muench IBM Germany
Technology – Infrastructure, Mop Sebastian Chabrolles IBM France
Technology – AIX Joergen Berg IBM France
Technology – AIX Majidkhan Remtoula IBM France
Project Steering and Management Eric Cicchiello IBM France
Project Steering and Management Franck Lespinasse IBM France
Project Lead SAP Hans G. Landgraf SAP AG
SAP Basis and Application Expert Gerrit Graefe SAP AG
Performance Expert – SAP Basis Heiko Gerwens SAP AG
Application Expert Klaus Kistl SAP AG
Application Expert Klaus Schnoklake SAP AG
Sizing Expert IBM Platforms Ursula Zachmann IBM Germany

With support and guidance for DB2 from the colleagues of the SAP DB2 Development/Porting
Center, the IBM DB2 CC Böblingen, and the DB2 Lab Toronto.


2 Introduction
This new billing engine from SAP is designed to handle the massive volumes of data generated and
required in industries like road billing, congestion charging, public transport, postal services,
telecommunications and even internet shopping. The profile of the data is massive volumes of
service events: each time a vehicle passes a gantry on the motorway, for example, a charging
event is generated. Expand this to cover schemes where all traffic is monitored as soon as
a vehicle moves, and the billing event volumes become staggering. This solution finds
application not only in the travel and transportation industries but also in such diverse areas as
internet shopping, where millions of internet users purchase small items such as music.
Each of these transactions can trigger a complete payment scenario of tiny increments: a
portion to the copyright owner, a portion to the internet shop, a portion to other service providers.
A highly scalable billing engine is vital for such mass billing volumes, and SAP and IBM provide
the solution.

The EDB Performance and Scalability Project was successfully carried out in November and
December 2007 at the request of the SAP IBU for Travel and Logistics and the IBU for
Communications.
This cross-industry product component focuses on high-end scalability for handling the
massive data volumes expected from implementations such as road tolling, congestion charging,
communications and internet sales. The product is used for a diverse spectrum of industry
applications which all share a common volume profile: extremely large volumes of
event-generated data. A billing engine built for this load volume must show high performance
and prove to be highly scalable. This proof of concept for billing and invoicing in the contracts
accounts receivable and payable component is a collaboration between IBM and SAP, done within
the scope of the IBM/SAP Alliance Co-Innovation initiative for product enablement. These
projects are done to pave the way for a new SAP product rollout, or a first-of-a-kind
implementation at the high end using the IBM scalable infrastructure. The objective is to reduce
risk for customer implementations by ensuring stability, scalability and performance of the
application design and database integration. The results of these projects are implementation
best practices for the application, database and infrastructure platform, and expert sizing data
for the end-to-end solution.

These projects combine skills across a number of specialty areas within both SAP and IBM,
assembled according to project requirements. The projects are driven by the
ISICC Walldorf in collaboration with SAP development and the IBM SAP Solution Center
Montpellier. SAP and IBM Germany provided the specific skills around the application, business
processes and SAP performance. The IBM manufacturing and benchmark center in Montpellier,
home of the SAP Solution Center, traditionally supports major customer stress tests and high-end
benchmark requirements; it provided the high-end landscape and infrastructure expertise.

These enablement projects are done without the focus of a specific customer's requirements;
instead they combine the known requirements coming from a number of active SAP engagements.
This approach allows one PoC to provide important input to multiple customer situations. The
major benefit coming from these engagement projects is reusable collateral around first of a kind,
or new SAP product sizing and implementation.

3 Executive Summary
This extremely flexible and adaptable billing and invoicing engine suits many different
industries. It allows easy integration of additional services, is completely independent of the
technology collecting the service records (event detail records), and is a powerful backbone that
helps keep a business compliant with future legal requirements while providing
interoperability and value-added services that help make the most of such a system.
In order to help ensure the reliability, scalability and performance of this solution, this proof of
concept focuses on scalability for extreme data volumes across various implementation models. The
results from this test show that the solution scales and is able to handle the amount of data
required by your industry.

3.1 Proof of Concept Goals

1) The business focus of the project was to run an end-to-end "service to cash" business process
for 1 million business partners and determine the maximum number of Event Detail Records
(EDRs) per hour on the given hardware.

2) Determine the scalability potential of the billing engine, and determine the capacity requirements
and processing window for a volume span of 2-6 million EDR records. This span covers
the requirements of two known implementation models: 1) a truck road billing scenario, and 2)
a metropolitan congestion charging scenario.

3) Simulate a large national all-vehicle road tolling scenario with 50 million EDRs per day, in
which each of the 50 million EDRs is billed the same day. This is the most demanding billing
scenario; the daily processing window is normally between 8 and 10 hours. The objective of
this work unit is to determine the scalability such that a sizing can be proven for two points:
a) Financial: the smallest amount of hardware investment to achieve the business window
b) Business growth: the smallest possible window of time necessary for the volume, using
parallelization to allow for increases in load over time.


3.2 Proof of Concept Achievements

Service to Cash Business Process


Transfer of EDRs -> Creation of Billing Orders -> Billing -> Invoicing -> Payment Run ->
Payment Media -> Dunning Proposal -> Dunning Activity -> Correspondence Print

End to End processing of 1 million Billing Accounts and 50 Million EDRs = 1.5 Hrs

The proof of concept was designed around a scenario with 1 million business accounts, each with
50 EDRs per day. This simulates the large national all-vehicle road tolling scenario described in
KPI 3 above. The window for processing this scenario is, per the KPI, set between 8 and 10 hours.

The high-end throughput focus was on the new business processes which handle the mass
volume for the billing engine. The critical path is depicted below. The dataflow into the system
via RFC must handle the extreme mass of raw input coming from the collection points; this is
the first critical-path step. The EDR records created by the transfer process are then aggregated
per billing account in the billing step, and billing documents are created. The invoicing step merges
all billing documents from multiple sources into invoice documents for each contract account.

Large National Vehicle Road Tolling

The following figures are based on a study done for an actual European scenario:

74,455,200,000 EDRs per year to be processed in FI-CA billing and invoicing!

This model equates to an average of 200 million EDRs per day, with an expected 4 million contract
accounts to be processed per day using a monthly billing cycle. The results of this proof of
concept, using IBM POWER Systems and the DB2 database infrastructure, show that this
volume could be handled "end to end" in less than 6 hours.
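
A quick back-of-the-envelope check shows how the daily average follows from the yearly figure
(a sketch in Python; the flat 365-day year is an assumption):

    edrs_per_year = 74_455_200_000
    days_per_year = 365                        # assumed; the study may count billing days

    edrs_per_day = edrs_per_year / days_per_year
    print(f"{edrs_per_day:,.0f}")              # 203,986,849 -> roughly 200 million per day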


High End Scalability Achievement – Service to Cash

Number of processed objects per hour for the critical process steps:

Process Step                                              Processed objects   Object type
Transfer of Event Detail Records                          197,292,857         Event Detail Records / hour
Billing in Contracts Accounts Receivable and Payable      9,417,219           Billing Accounts / hour
Invoicing in Contracts Accounts Receivable and Payable    4,886,413           Contract Accounts / hour
Correspondence Print                                      2,373,626           Contract Accounts / hour

Throughput of all process steps "Service to Cash":

Process Step       Objects             Per Minute    Per Hour
Transfer of EDRs   EDRs                3,288,214     197,292,857
Billing Orders     Billing Accounts    326,374       19,582,418
Billing            Billing Accounts    156,954       9,417,219
Invoicing          Contract Accounts   81,440        4,886,413
Payment Run        Contract Accounts   284,211       17,052,632
Payment Medium     Contract Accounts   751,899       45,113,924
Dunning Proposal   Contract Accounts   654,545       39,272,727
Dunning Activity   Contract Accounts   276,923       16,615,385
Corr. Print        Contract Accounts   39,560        2,373,626

The table above documents the throughput of each step achieved by the scalability proof of
concept, by object type per minute and per hour.

The scalability proof of concept fulfilled all of the defined KPIs and demonstrated the capacity
of this solution to address extremely large billing scenarios. Although the project focused on the
known requirements of active road billing and congestion charging scenarios, the same
processing model can be used for many other industries: telecommunications; billing of postal
services such as letter, parcel and courier express; smart card scenarios as known from public
transport; and internet purchasing. This proof of concept was necessary to answer one very
critical question: can the solution handle the extremely high volumes of data such applications
will generate? Based on this PoC, we believe it can.

Resource Requirements

Business Growth
The high-end measurements were achieved using the machine capacity depicted in the
graph below. The model used here focuses on scalability: achieving the highest volume of
throughput in the shortest possible time. The measured utilization for the
highest throughput is 94.2K SAPS for the complete end-to-end processing in 1.5 hours.

[Figure: Physical CPU load per business step, split between DB+CI and the application servers]

Measured Model – 1.5 Hrs, 94.2K SAPS, 50 Million EDRs, 1 Million Accounts


The graph below shows the memory utilization for the measured high-end achievement. It
depicts the sum of the static memory defined for the two application server instances, the
memory dependent on the number of work processes defined, and the sum of the variable user
context memory, which depends on the number of parallel jobs per step.

Memory per business step (static memory defined by application server parameters; dynamic
extended memory defined by the user contexts of the parallel jobs):

Static memory: 4 GB shared memory plus 2 GB for the configured work processes.

Dynamic user context memory per step:
Transfer:  7.8 GB (100 jobs)
BillOrder: 0.2 GB (1 job)
Billing:   2.9 GB (80 jobs)
Invoice:   2.9 GB (140 jobs)
PayRun:    2.6 GB (100 jobs)
PayMed:    3.3 GB (100 jobs)
Print:     5.4 GB (120 jobs)
DunnPro:   1.5 GB (120 jobs)
DunnAct:   2.0 GB (120 jobs)
DefRev:    6.9 GB (60 jobs)

The DB is expected to remain static as the memory footprint is determined by the buffer pool
settings. A large portion of the application server memory is also determined by the statically
configured buffer pool settings, and the SAP work process private memory. The overall
utilization can vary depending on the size of the job user context (extended shared memory in
SAP). The process memory increases with the number of work processes configured. The SAP
extended memory increases with the level of parallelization.
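
The application server memory model described above can be pictured with a small sketch (my
own helper, not an SAP formula; the per-work-process constant is illustrative, while the Transfer
figures come from the memory overview above):

    def appserver_memory_gb(static_gb: float, per_wp_gb: float, num_wp: int,
                            ctx_per_job_gb: float, parallel_jobs: int) -> float:
        """Static shared memory, plus private memory per configured work
        process, plus extended memory growing with the job user contexts."""
        return static_gb + per_wp_gb * num_wp + ctx_per_job_gb * parallel_jobs

    # Transfer step: 4 GB static + 2 GB process memory + 7.8 GB context (100 jobs)
    print(appserver_memory_gb(static_gb=4.0, per_wp_gb=0.02, num_wp=100,
                              ctx_per_job_gb=0.078, parallel_jobs=100))  # ~13.8 GB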

Benefits of the Power5 Infrastructure

The proof of concept was executed using the IBM Power5 hardware virtualization, with focus on
the shared processor pool. With processor sharing, the system automatically and immediately
realigns itself to changes in the load profile and load distribution. Looking at the measured model
CPU utilization above, we can see how the CPU requirements, and the
requirement distribution over DB and application server, vary in the multiple steps of the
processing chain. In a three tier architecture, with dedicated resources, 60 CPUs would be
required to cover both the peak for the application servers (correspondence print) and the DB
(Invoicing). The shared processor pool reacts with flexible resource redistribution to cover the
necessary peaks using 52 physical CPUs. The CPU capacity is not reserved, but can also be
shared with other workloads, resident on the same Power5 server. Resource distribution can be
easily controlled via a simple priority scheme or more complex resource guarantees, depending
on the SLA’s of the competing workloads. Virtualization is explained in more detail in chapter
10. The benefit to the PoC was the flexibility of realignment without allowing any bottlenecks to
occur.


Benefits of the DB2 Database Technology

The event detail record billing scenario has several characteristics which directly relate to the
database technology. The inflow of mass billing data (EDR records) into the system results in
several extremely large database tables; the table holding the EDR records for this PoC reached
nearly 500 GB. The entries in this table are maintained for a given period of time, depending
on legal requirements, and then dropped by period. To facilitate this dropping of data by
period, some type of table partitioning is recommended. DB2 has two features which
fulfill this requirement: range partitioning and Multi-Dimensional Clustering (MDC). The DB2
Deep Compression feature was also used to compress these very large tables, saving disk space
and memory while reducing I/O overhead. This is a unique feature of DB2 and is
described in the database chapter of this document.
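
As an illustration of how partitioning and compression combine, a period-partitioned,
row-compressed table could be declared in DB2 9 roughly as sketched below. This is a minimal
sketch using the ibm_db Python driver; the table, column and connection details are invented for
illustration and are not the SAP objects used in the PoC.

    import ibm_db

    conn = ibm_db.connect(
        "DATABASE=EDB;HOSTNAME=dbhost;PORT=50000;PROTOCOL=TCPIP;"
        "UID=user;PWD=secret", "", "")

    ddl = """
    CREATE TABLE EDR_DEMO (
        ACCOUNT_ID  BIGINT        NOT NULL,
        EVENT_TS    TIMESTAMP     NOT NULL,
        BILL_PERIOD INTEGER       NOT NULL,
        AMOUNT      DECIMAL(15,2) NOT NULL
    )
    PARTITION BY RANGE (BILL_PERIOD)
        (STARTING FROM (200801) ENDING (200812) EVERY (1))
    COMPRESS YES
    """
    ibm_db.exec_immediate(conn, ddl)   # in DB2 9.1 the compression dictionary
                                       # is built by a subsequent offline REORG

    # Dropping a period later becomes a cheap partition detach rather than a
    # mass delete ('PART0' assumes the auto-generated partition name):
    ibm_db.exec_immediate(
        conn, "ALTER TABLE EDR_DEMO DETACH PARTITION PART0 INTO EDR_DEMO_200801")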


4 Overview of the Proof of Concept


4.1 Solution Overview

Event Detail Billing and Convergent Invoicing with Contract Accounting

The SAP Customer Relationship Management (SAP CRM) application is used to enter, change or
end customer contracts. It offers a complete call center application, marketing capabilities and
customer self-services, as well as all necessary customer care components. The CRM piece was
not part of this test, as the major challenge is seen in the storage, handling and processing of the
millions of records received by the back office on a daily basis. The records are typically received
from external systems, rated, and then transferred to the billing system in the SAP ERP
application, where they are stored, billed with the next bill cycle and then invoiced. This entire
process was part of this test - see chapter 4.2 (Test Scope and Design).

Each license plate number (LPN) or onboard unit (transponder) is assigned to one billing
account. One or multiple billing accounts can be assigned to a contract account, which is the level
on which invoicing takes place. The transfer of EDR records into SAP ERP is done via an API,
as we see neither feasibility nor advantage in using an XI interface for such high volumes.

This architecture offers great adaptability and flexibility to adjust to specific project needs. It
allows services from several different providers, and several different services, to be billed in one
single invoice.


Several billing systems can feed into convergent invoicing; input could be sales orders from sales
and distribution systems or any other billing documents, including those from external sources.

4.2 Test Scope and Design

The scope of the project includes all steps of the process chain shown below. The major focus of the
performance and scalability work, however, was steps 1 and 3, as the billing is really the "new land"
for which there were previously no high-end sizing or scalability check points.

Entire Business Process used for this test:

1. Inbound interface: upload of incoming EDR records
2. Trigger generation: creation of billing triggers to permit billing
3. EDR billing: billing of the EDR records
4. Convergent invoicing: invoicing of billed EDR records, creation of open items
5. Payment run: payment of open items, creation of files to be transferred to the house bank for direct debit customers
6. Bank file creation: payment medium program, preparation of files to be sent to banks
7. Dunning proposal: creation of a list of customers that didn't pay in time
8. Dunning activity: creation of the dunning letters
9. Correspondence printing: printing of letters to the customer for installment plans, returns and security deposit requests
10. G/L account posting: posting of aggregated sub-ledger items to the general ledger account
11. Account maintenance, balancing: balancing of accounts having partial payments, payments on account and credit memos

Steps 1 through 11 are nightly batch processes: each process usually runs once during the night,
the whole chain starts at a predefined time, and the processes are typically executed in the order
listed here. Two further batch processes are executed repeatedly during the daytime, concurrently
with online load, each triggered by the arrival of a statement file from the bank: payment lot
processing (upload of incoming payments from the house bank) and return processing (upload of
failed payments from the house bank).

4.3 Critical Path Overview: Runtime and Resources

RUNTIME
bank
The following two graphs depict the relative time ratios of the individual steps in the processing
chain. This ratio shows where time is spent in the critical path, and which steps are most
important in reducing the overall processing runtime. The graphs below show two different
scenarios with different volumes of incoming data for the same number of billing accounts: one
with 50 million EDRs and one with 100 million EDRs. The EDRs are aggregated in the billing
step; therefore the effect of the EDR volume is only really evident in the loading and billing
steps.


View of runtime critical path with 1 million accounts and 50 million EDRs.
[Figure: Critical path runtime, 50 million EDRs - percentage of total runtime per process step:
33%, 20%, 16%, 8%, 5%, 5%, 5%, 4%, 2%, 2%]
End to end runtime: 1hr 16 minutes.

Critical path: Runtime with 1 million accounts and 100 million EDRs.
[Figure: Critical path runtime, 100 million EDRs - percentage of total runtime per process step:
31%, 26%, 13%, 13%, 4%, 4%, 4%, 3%, 2%, 1%]
End to end runtime for this scenario: 1hr 37 minutes.

The following graph shows the change in the critical path when the same 100 million EDRs (as
in the graph above) are distributed over 5 million billing accounts. The physical resource
requirements remain the same, as the parallelization (number of concurrent batch jobs) has
not changed. The critical-path focus for total runtime shifts from the "transfer EDR" step to
correspondence printing and invoicing. There is an overhead in the downstream processing for
each billing account; if the number of billing accounts increases, this overhead shows up in the
processing time for invoicing and correspondence print.


[Figure: Critical path runtime, 100 million EDRs over 5 million billing accounts - percentage of
total runtime per process step: 40%, 19%, 10%, 6%, 6%, 6%, 5%, 4%, 2%, 2%]
Calculated runtime, based on measured throughput: 5hr 15 minutes.

The best measured throughput per step is used as input for the expert sizing method. The
highest throughput is calculated using the runtime of the longest-running job. This
automatically provides a small sizing uplift while still remaining firmly
anchored in actual measured data.
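
A sketch of that calculation (function and variable names are mine, not SAP's):

    def step_throughput_per_hour(objects_processed, job_runtimes_sec):
        """Throughput figure used for expert sizing: total objects divided by
        the runtime of the longest-running parallel job. Since the slowest job
        bounds the step, a small sizing uplift is built into the number while
        it stays anchored in measured data."""
        return objects_processed / max(job_runtimes_sec) * 3600.0

    # Example: 50 million EDRs, parallel jobs finishing between 14 and 16 minutes
    print(step_throughput_per_hour(50_000_000, [840.0, 900.0, 960.0]))  # 187.5M/h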


Resources

This graph is based on the end-to-end processing of 100 million EDRs for 1 million billing
accounts (100 lines per account), which is documented in the critical path for runtimes above.
It shows the SAP resources used in the individual steps to process that run. The SAPS
are depicted for two groups: the database plus CI, and the combined production application
servers. The measured hardware footprint needed to cover the peaks for this run, maintaining the
runtimes as depicted, is 94.2K SAPS. The peak is determined by the invoicing and
correspondence print processing steps.

[Figure: Physical CPU load per business step, split between the application servers (APPS) and DB+CI]

Measured Model – 1.5 Hrs, 94.2K SAPS, 50 Million EDRs, 1 Million Accounts

Business Growth
The following model is based on the throughput achieved for each processing step in the high-
end scalability tests. It is developed using the expert sizing method, which focuses on
the most effective scalability over the processing chain. The model shows how quickly 50
million EDRs can be processed with the most efficient parallelization for the best
price/performance at the high end. The results show the same throughput capacity for the same
runtime, but with lower capacity requirements.


[Figure: Physical CPUs for DB and application servers over the whole 1.5-hour run, per process step]

Optimal Sizing Model – 1.5 Hrs – 89.4K SAPS, 50 Million EDRs, 1 Million Accounts
Result of expert sizing, high-end requirements with improved CPU utilization.

Financial Focus
[Figure: Physical CPUs for DB and application servers over the whole 8-hour run, per process step]

Cost/Performance Model – 8 Hr Runtime – 14.3K SAPS, 50 Million EDRs, 1 Million Accounts

This model represents the smallest landscape which can process the target data volume in a
processing window of 8 hrs.


4.4 Event Detail Billing Test System: Architecture Overview

Below is an overview of the EDB test system. The system is implemented as a 3-tier landscape,
with two production application server instances on one server and the database and central
instance on a separate server. The hardware is a large IBM Power5 system which supports logical
partitioning and shared processor virtualization. Each SAP server is implemented as a logical
partition. All batch activity takes place on the application server LPAR (LPAR2). This separation
makes it easy to attribute the measurement metrics to the active components: LPAR1 = DB+CI,
LPAR2 = APPs. The production application server communicates with the CI and DB via a
high-capacity backbone network. Measurement metrics on this backbone capture the bandwidth
requirements between the batch dispatcher/work processes and the DB/CI.

[Figure: EDB test system overview]
- User network / external network into the application servers.
- LPAR2 / l2_prod - application servers: dialog/batch instance D01 and dialog instance D02, each
  configured with 80 batch and 20 dialog work processes. Measurement metric: SAP application
  server utilization.
- Backbone network connecting LPAR2 to LPAR1. Measurement metric: network bandwidth
  requirement between application servers and DB.
- LPAR1 / l1_prod - database and central instance: central instance DVEBMGS00 (dialog = 3,
  batch = 3, enqueue = 1, spool = 1, VB1 = 1, VB2 = 1) and the EDB database on DB2 V9.1 FP3.
  Measurement metric: DB/CI SAPS utilization.
- SAN network connecting the database to the EDB DB on the DS8300 storage server.
  Measurement metric: SAN bandwidth, DB to storage.

The storage layout is described in detail in the infrastructure section. The DB is connected to the
storage server via a set of dedicated SAN fibers. Measurements gathered for the fiber adapters
capture the bandwidth utilization between the DB and the storage server.


4.5 Data Model


This chapter describes the data model used in this test. In general it is kept close to the
SAP standard, without any additional enhancements. This should give the reader an idea of where
the results presented herein might not be reached in his/her normal business environment.

4.5.1 Master Data


The relevant master data objects in this test are
1. Business partners
2. Contract accounts
3. Billing accounts
Usually the relationship can be 1:n:m with 1 <= n <= m, but here it is chosen to be 1:1:1: each
business partner has exactly one contract account, and exactly one billing account is assigned to
each contract account. In this test 1,000,000 business partners are used, thus leading to 1,000,000
contract accounts and 1,000,000 billing accounts as well.
The data model used for this test would allow an implementation scenario of 30 million accounts,
billed on a monthly basis.
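
The 30-million figure follows from spreading a monthly billing cycle over daily runs (a sketch;
the 30-day spread is an assumption):

    accounts_billed_per_day = 1_000_000   # daily volume demonstrated in this PoC
    billing_days_per_cycle = 30           # assumed: monthly cycle, one slice per day

    print(accounts_billed_per_day * billing_days_per_cycle)   # 30,000,000 accounts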

4.5.2 Historical data


In order to provide a more realistic test scenario, historical data has been generated for one year.
It is assumed that every customer is billed each week according to the scenario described herein.
With 52 weeks of historical data created before the tests, the tests themselves are executed
with data of weeks 53 and 54. This should give more realistic runtimes, as usually, for fiscal
reasons, the data has to remain in the system for a maximum period of 12-14 months. However,
in some countries data has to stay online for a longer period of time due to legal requirements.
Whether this has to be in the system or in an online archive depends on the requirements and is
not discussed here.

The data created includes EDRs, bills, invoices and letters to the customers, as well as the deferred
revenues. All open items have been balanced, but no dunning notices have been issued. As a
result, the dunning table is not as full as would be expected after one year; this is the only
exception where no historical data was created. This one deviation from the normal data volume
is expected to have no impact on the test results.


The following table contains the five largest tables, together with the number of rows in each
before the test started:

Table name                                       Rows            DATA_OBJECT_PAGES   INDEX_OBJECT_PAGES   Size in bytes (data + index)
DFKKBIEDRTC (EDRs)                               1,502,091,526   16,041,189          13,532,623           484,537,335,808
FKKDEFREV (deferred revenue triggers)            462,387,820     8,328,646           5,304,277            223,361,810,432
DFKKINVDOC_I (invoicing document items)          182,574,960     4,984,901           1,227,749            101,788,057,600
DFKKOPK (contracts A/R and A/P document items)   137,302,459     3,029,903           752,809              61,975,953,408
DFKKINVDOC_P (invoice posting reference)         131,488,383     594,850             858,861              23,817,601,024

Maximum size of tables on the database, using 16K pages.
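
The size column can be reproduced from the page counts (a small verification sketch; the 16K
page size is taken from the caption):

    PAGE_SIZE = 16 * 1024   # 16K pages

    def table_size_bytes(data_pages, index_pages):
        """Size of a table as (data + index) pages times the page size."""
        return (data_pages + index_pages) * PAGE_SIZE

    # DFKKBIEDRTC, the EDR table:
    print(table_size_bytes(16_041_189, 13_532_623))   # 484,537,335,808 bytes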

4.6 Processing Cycle


This section describes the steps in some more detail and gives some hints on when and how to
scale the different activities. Looking at the big picture, each of the steps described puts a certain
load on the system; how much is described in other chapters of this document. These chapters
have to be consulted whenever a rescaling is required in order to arrive at the proper hardware. It is
recommended to perform this with sizing experts from SAP and/or the hardware partner, as they
can usually provide additional information in cases where no figures exist yet.

4.6.1 Transfer of Event Detail Records


For each billing account, 50 EDR records are uploaded and written to the database (table
DFKKBIEDRTC) by calling the BAPI BAPI_EDR_TCOLL_CREATEMULTIPLE. This
corresponds to the normal scenario, where the records are entered into the system by a call to this
BAPI from an external system. Within this test the external system had to be simulated by the
SAP system itself, so the runtime of the transfer job is higher than one would expect under
normal circumstances; the runtime spent outside the BAPI was about 18% of the total runtime of
the job.
The effect of a higher number of input lines is investigated separately.
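
For illustration, an external system could feed its packages to this BAPI over RFC as sketched
below, here with the open-source pyrfc connector. The BAPI name is taken from the text above;
the table parameter name and record layout are placeholders, not the actual BAPI signature.

    from pyrfc import Connection

    conn = Connection(ashost="sapedb", sysnr="00", client="100",
                      user="EDR_UPLOAD", passwd="secret")

    edr_package = [...]   # one package of raw event detail records

    # EDR_RECORDS is an illustrative placeholder for the BAPI's table parameter
    conn.call("BAPI_EDR_TCOLL_CREATEMULTIPLE", EDR_RECORDS=edr_package)
    conn.call("BAPI_TRANSACTION_COMMIT")   # commit once per package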

4.6.2 Creation of billing orders


Each relevant billing account gets a billing order, which is the basis for the next billing step.


4.6.3 Billing in Contracts Accounts Receivable and Payable


In this step the EDR records are grouped together according to certain criteria defined in
configuration settings. Each grouping criterion yields one billing line which will be invoiced later.
Unless stated otherwise, the EDR records are almost evenly distributed over three groups; the
grouping is therefore quite simple and doesn't put much load on the system.
This might be different in customer systems where the grouping algorithm is more
complex. According to the SAP model each EDR already contains the final price information, but
in reality the customer might program re-pricing algorithms depending on the contract with the
business partner. In this case the results herein represent only the lower sizing limit for
this activity, and the customer is encouraged to perform their own tests to make sure the system is
capable of handling the load.
The effect of a higher number of input lines is investigated separately.
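
Conceptually, the aggregation is a group-by over the EDRs, as in the sketch below (the record
field names are invented for illustration; the service type matches the aggregation level described
later for the transfer tests):

    from collections import defaultdict

    def billing_lines(edrs):
        """Group EDRs per (billing account, service type); each group becomes
        one billing line carrying the summed amount."""
        lines = defaultdict(float)
        for edr in edrs:
            lines[(edr["bill_account"], edr["service_type"])] += edr["amount"]
        return lines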

4.6.4 Invoicing in Contracts Accounts Receivable and Payable


In this step the billing lines created in the previous step are invoiced. Taxes are added and the
new due amount is calculated, taking into account the current balance of the corresponding
contract account.
Unless stated otherwise, the model uses three billing lines as input and sums them up to one
invoicing line per contract account. Only VAT is added as tax, and the contract accounts contain
neither debts nor deposits.
This should correspond to the standard business case where most accounts are balanced. All
payments are posted to the same general ledger account, which is a weakness in the model;
however, as long as the number of G/L accounts used is small (five or fewer), the effect
can be expected to be negligible. The model should therefore cover most customers'
needs.
The effect of a higher number of input lines is investigated separately.

One particular activity is executed in this step which might not apply to every customer: in
order to test deferred revenues (see below), each amount is split into 10 equal pieces and
memorized for later posting. The contribution to the runtime of convergent invoicing is measured
to be between 5 and 10% and depends mainly on the insert time into table FKKDEFREV.
Customers not using deferred revenues can downsize this step accordingly.
Customers using deferred revenues should keep in mind that the relative time grows with the
number of rows inserted into this table, so splitting the amount into 20 instead of 10 pieces
should double the time required for this action.
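
The 10-way split can be pictured with a small helper (my own sketch, not SAP code; putting the
rounding remainder into the last portion is an assumption):

    from decimal import Decimal

    def split_amount(total, pieces=10):
        """Split an invoice amount into equal deferred-revenue portions; any
        rounding remainder goes into the last portion so the sum stays exact."""
        base = (total / pieces).quantize(Decimal("0.01"))
        portions = [base] * (pieces - 1)
        portions.append(total - base * (pieces - 1))
        return portions

    print(split_amount(Decimal("100.10")))   # ten portions summing to 100.10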

4.6.5 Payment Processing


In the data model used, exactly 90% of the business partners have opted for direct debit; the
remaining 10% are technically cash payers. This doesn't mean they turn up at a cash desk in a
company office; they might also ask their own bank to transfer the money from their account to
the company's bank account. This ratio was chosen because it appears to be a common standard
in northern countries.


Technically there is no difference whether a business partner permits the company to take the
money from his/her bank account or credit card; therefore no distinction is made here between
the two.
For countries with different business models a downscaling might be necessary; however, this
should be done with care. Technically, all open items must first be selected from the database,
and then for each of them it must be determined whether it belongs to a business partner with direct
debit or not. Thus the first part depends only on the number of open items, while the next step,
the balancing of the items, depends on the number of direct debits.
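
That two-part scaling can be captured in a simple model (my own sketch; the per-item
coefficients would have to be measured on the target system):

    def payment_run_seconds(open_items, direct_debit_items,
                            sec_per_selection, sec_per_balancing):
        """Selection scales with all open items; the balancing that follows
        scales only with the items of direct-debit payers."""
        return (open_items * sec_per_selection
                + direct_debit_items * sec_per_balancing)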

4.6.6 Payment Media Generation


This step simulates the generation of lists with direct debit customers' due amounts to be sent to
the house banks. Some companies replace this functionality with their own programs to extract this
data. Nevertheless, it has been included here because, if reasonably programmed, such customer
programs should place a similar load on the system and run in a comparable time frame to the SAP
standard programs. Credit card payments are considered to follow the same pattern as direct debits;
only the recipient of the list and the handling of the debit differ, and the latter takes place in a
completely different system, so it doesn't matter here.
Companies that have a large number of cash payers can downscale this activity from the 90%
level. Those who have only advance payments can remove this activity completely from the list.

4.6.7 Correspondence Print


In this step all correspondences to the business partners are printed. In the test model the
following letters are printed:
1. One invoice for each business partner
2. One dunning letter for the 10% of business partners not paying on time
The print goes to the spool in simple RDI format because, in our experience, these types of letters
are not printed directly from the SAP system; usually some kind of print software picks up the spool
files and then prints the physical letter.
In some scenarios the correspondence print is not necessary, because business partners can see
their bill via the internet, or pay in advance and get a receipt from outside the SAP system.
In this case the correspondence print can be taken out of the sizing, or downsized.
However, in the case of internet access to bills, some load will be put on the system whenever a
business partner wants to see his/her bill; there is no experience so far of how much load this can
put on the system. In most cases customers use their own programs to extract the data, as they
have different legal obligations, so some sizing assumptions must be made by the customer.

4.6.8 Dunning Proposal


The dunning proposal creates a list of business partners having overdue open items. It has been
split from the dunning activity run in order to allow customers to remove selected business
partners from the dunning proposal list before starting the dunning activity run. This makes sense
if e.g. the company is in contractual negotiations with a business partner.


Within the test scenario it is assumed that 10% of the business partners do not pay in time and
therefore have to be dunned. This figure comes from customers in northern countries. The
business partners not meeting their obligations are distributed equally amongst business
partners paying by direct debit and cash payers: from each group, 10% are not paying.
Customers who always ask for advance payment don’t need this functionality and can take it out
for sizing purposes.
This activity scales with the number of overdue open items.

4.6.9 Dunning Activity


In this step the business partners on the dunning list created in the previous step are dunned. In the
test model three activities are carried out:
1. A dunning letter is created
2. A dunning fee is added
3. A payment form is added to the dunning letter
These activities roughly reflect the normal scenario for a first-level dunning, where the business
partner gets a reminder. No further actions, such as blocking the contract account or transferring
debts to a collection agency, are performed in the test, as these affect only a very small minority
of the business partners.
Customers who always ask for advance payment don’t need this functionality and can take it out
for sizing purposes.
This activity scales with the number of business partners not paying on time. Customers where
this number is smaller or bigger can scale this activity accordingly.

4.6.10 Deferred Revenues
In some scenarios the money collected in advance from the business partner cannot be
immediately posted as revenue, as the service might not have been delivered yet - for example,
if an advance payment by the business partner is rewarded on a buy-10-get-2-for-free basis.
In this test scenario each invoice created is split into 10 amounts to be posted at intervals of four
weeks each. As a whole, the scenario doesn't make much sense from a business point of view, but
technically it allows the transfer of deferred revenues to be measured.
For each run one item is to be transferred per business partner. The activity scales with the
number of items to be transferred. Customers where this number is smaller or bigger can scale
this activity accordingly.


4.7 Load Test Overview

Step 2 - Inbound interface (prove scalability in terms of parallel jobs in the 7/3 scheme):
  A-2a: 1 job; B-2a: 20 jobs; C-2a: 40 jobs; D-2a: 60 jobs; E-2a: 80 jobs; F-2a: 100 jobs
Step 4a - EDR billing (prove scalability in terms of parallel jobs in the 7/3 scheme):
  B-4a: 20 jobs; C-4a: 40 jobs; D-4a: 60 jobs; E-4a: 80 jobs; F-4a: 100 jobs
Step 4b - EDR billing (prove scalability in terms of billing lines, 20 jobs each):
  A-4b: 3 lines per BA; B-4b: 10 lines per BA; C-4b: 100 lines per BA; D-4b: 1000 lines per BA
Step 5a - Convergent invoicing (prove scalability in terms of parallel jobs in the 7/3 scheme):
  A-5a: 1 job; B-5a: 20 jobs; C-5a: 40 jobs; D-5a: 60 jobs; E-5a: 80 jobs; F-5a: 100 jobs;
  G-5a: 120 jobs; H-5a: 140 jobs; I-5a: 160 jobs
Step 5b - Convergent invoicing (prove scalability in terms of invoicing lines, 20 jobs each):
  A-5b: 1 line per CA; B-5b: 10 lines per CA; C-5b: 100 lines per CA; D-5b: 1000 lines per CA
Step 6 - Payment run (prove scalability):
  B-6: 20 jobs; C-6: 40 jobs; D-6: 60 jobs; E-6: 80 jobs; F-6: 100 jobs
Step 7 - Payment medium processing (prove scalability):
  B-7: 20 jobs; C-7: 40 jobs; D-7: 60 jobs; E-7: 80 jobs; F-7: 100 jobs
Step 8 - Correspondence print (prove scalability):
  B-8: 20 jobs; C-8: 40 jobs; D-8: 60 jobs; E-8: 80 jobs; F-8: 100 jobs; G-8: 120 jobs
Step 11 - Dunning proposal (prove scalability):
  B-11: 20 jobs; C-11: 40 jobs; D-11: 60 jobs; E-11: 80 jobs; F-11: 100 jobs; G-11: 120 jobs
Step 12 - Dunning activity (prove scalability):
  B-12: 20 jobs; C-12: 40 jobs; D-12: 60 jobs; E-12: 80 jobs; F-12: 100 jobs; G-12: 120 jobs
Step 13 - Compression test (study the influence of compression of the largest tables
DFKKBIEDRTC, DFKKOP, DFKKKO):
  A-13: 8000, 100 jobs; B-13: 2600, 120 jobs; C-13: PAYP, 40 jobs; D-13: PAYM, 40 jobs
The table above shows the detailed test plan for the different steps. Each of these
steps is described in more detail below.

4.8 Detailed Test Requirements


The following detailed test requirements describe each of the process steps tested in the PoC and
the tests carried out to understand the requirements and load profile of the step. Each of the major
process steps is documented in regard to its load profile, resource requirement, and scalability
potential. Trade-offs in the business model implementation design, considerations on
cost/effectiveness for scalability, and recommendations are documented where appropriate.


4.8.1 Transfer of Event Detail Records (BAPI - Inbound Interface)


In the PoC tests the incoming data was read from a database table within SAP, where it had been
generated earlier in a prior step that is not measured in this PoC. In our test scenario the SAP
system itself reads the generated data and injects it into the system via the BAPI, in the same
manner an external system would enter data in a real-life scenario. The BAPI
interface converts the raw input data to EDR records and stores them in the EDR management
tables.

Purpose:
Prove scalability of this step for two scenarios:
1. Small number of business partners with many EDRs.
This scenario was tested with a constant number of jobs but a varying number of
EDR records and service types per bill account. It represents corporate accounts
that could potentially have larger numbers of EDR records to be billed per bill
cycle.
2. Large number of business partners with few EDRs.
In this scenario we worked with a varying number of parallel jobs and increasing
data volume. Each bill account had an average of 50 EDR records, randomly
assigned to 3 different EDR types (the service category used as aggregation
level). These accounts represent the vast majority of the customers, with an
average number of EDR records per bill cycle.

Total throughput [obj/sec]


[Figure: Measured transfer throughput (transferred EDRs per second) versus number of parallel
jobs (0-120), compared with throughput under optimal scaling]

Resource Utilization for Throughput Achieved – Scalability Costs


The following chart is based on the average physical CPU utilization during the high-load phase
for each of the scalability points. This graph shows the increase in the job runtimes and the


increase in physical resources. Each parallel job handles the same amount of data, so as the
level of parallelism increases, both the throughput and the runtimes increase.

[Figure: Resources and runtimes (transfer) - physical CPUs for DB and application servers plus
job runtimes (hh:mm:ss), for 20 to 100 parallel jobs]

The graph below shows the total physical CPU utilization in relation to the objects per second
being uploaded via all the parallel jobs. It shows the increase in throughput achieved as the
degree of parallelism increases. In the trend lines we see what we would expect: the
throughput gain per CPU used decreases at the high end. The best price/performance is
achieved with 20 parallel processes.
[Figure: Resources vs. Throughput (Transfer) — total physical CPUs and objects per second, with linear trend lines, over 20–100 parallel jobs]

[Figure: Throughput per Physical CPU (Transfer) — total CPUs and objects/sec/CPU, with linear trend line, over 20–100 parallel jobs]

This chart shows the throughput per CPU as the load is scaled up. Here the trend emphasizes the
drop of throughput per CPU between 20 and 40 parallel processes. This trend may have been the
result of the commit size discussed below.

I/O Requirements
The I/O requirements shown here are not entirely related to the upload process. In the PoC, a batch process was used to simulate the external system, and it is not possible to separate the I/O requirements of the simulation from those of the actual processing. These values must therefore be considered inflated.

Backbone Network Trend


[Figure: Backbone Network Utilization (Transfer) — read and write KB/sec over 20–100 parallel jobs]

This graph shows the total KB/sec, in both read and write, which took place on the backbone
network for each level of parallelism. Each additional parallel job represents an active application
server/database server connection over the backbone network.


Backbone Utilization in relation to objects processed per second.


[Figure: Backbone Network Utilization vs. Throughput (Transfer) — KB/sec read and written, and objects/sec with linear trend line, over 20–100 parallel jobs]

The graph above relates the activity across the backbone network to the number of objects processed per second.

[Figure: Backbone Network Utilization per Job and Object Throughput (Transfer) — KB/sec read and written per job, and objects/sec per job, over 20–100 parallel jobs]

This graph shows KB/s, in read and write, per job in relation to the throughput (obj/sec) per job. The throughput per job is diminishing, and with it the related network utilization.


Storage Area Network Trends

[Figure: SAN Bandwidth vs. Throughput (Transfer) — read and write KB/sec and total objects/sec over 20–100 parallel jobs]

The graph above shows the SAN utilization in KB/sec, in both read and write, in relation to the
throughput achieved (objects/second).

[Figure: Total SAN Bandwidth vs. Throughput (Transfer) — total KB/sec and total objects/sec over 20–100 parallel jobs]

This graph shows the total SAN bandwidth utilization in relation to the throughput achieved
(objects/second).


Commit Size
Some of the mass activities used here do not handle objects one by one but in packages. This allows array inserts and updates on the database, which can increase the performance and hence the throughput. Frequently these mass activities have a parameter that controls the package size. Investigations were done to determine how the package size influences the throughput; for this purpose, the transfer of EDR records was used. The commit size depends on the number of rows transferred from outside to the BAPI, which commits only once after having written all the information to the database.
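To make the package/commit mechanics concrete, here is a minimal Python sketch of the batching pattern described above. The names transfer_edr_package and commit_work are hypothetical stand-ins for the BAPI call and the database commit; they are not actual SAP interfaces.

    def transfer_edr_package(package):
        """Placeholder for the BAPI call that array-inserts one package of EDRs."""
        pass

    def commit_work():
        """Placeholder for the database commit issued once per package."""
        pass

    def transfer_in_packages(edrs, package_size=2000):
        # One BAPI call and one commit per package, so the package size
        # equals the commit size examined in this test.
        for start in range(0, len(edrs), package_size):
            transfer_edr_package(edrs[start:start + package_size])
            commit_work()

With the measured optimum of 2,000 EDRs per call, the 25,000,000 transferred EDRs correspond to 12,500 BAPI calls in total, spread here over 10 parallel jobs.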

In this test 25,000,000 EDRs have been transferred by 10 parallel jobs into the system. The graph
below shows the runtime of the jobs versus the package size (and hence the commit size).

[Figure: Time required for transfer (seconds) vs. commit size (0–18,000) — 25,000,000 EDRs, 10 parallel jobs]

The best throughput was achieved with a package size of 2,000 EDR records per BAPI call. It should also be noted that the deviation from the best value can be quite large, as can be seen in the next picture.


[Figure: Deviation from best value (percent) vs. commit size (100–16,000)]

If the BAPI has to handle packages of 100 EDRs instead of 2,000, the throughput goes down by almost 70%. So it should be clear that this parameter should be given some attention.

4.8.2 Creation of billing orders


Business partners usually get a monthly, weekly, or daily invoice once they have "consumed" a service. They are therefore usually not billed every day and might belong to different bill cycles depending on customer groups. This step selects those that are due to get a bill and actually have something to pay. This pre-selection step avoids the processing of all billing accounts in subsequent steps.
Purpose:
Creation of billing orders which are the basis to run billing.

The time for creating billing orders has been measured for different numbers of billing accounts. For each billing account one order was generated. The graph below shows the result:


[Figure: Billing Order Creation — runtime of the creation job in seconds (0–80) vs. billing orders generated (0–600,000)]

The linear scaling is clearly visible, and there is no reason to believe that this will change at higher volumes. As the performance is sufficient (an extrapolation gives 140 seconds per one million billing accounts), this task was not made a mass activity. Nevertheless, it would be possible to start this program in parallel, where each job would create billing orders for a different range of billing accounts.
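As an illustration of this range-based parallelization, the following minimal Python sketch splits a contiguous billing-account range into one sub-range per job. It assumes accounts are numbered contiguously; the actual job submission is left out.

    def split_account_range(first, last, jobs):
        # Divide [first, last] into `jobs` nearly equal sub-ranges; the first
        # `rest` jobs receive one extra account when the split is not even.
        total = last - first + 1
        size, rest = divmod(total, jobs)
        ranges, lo = [], first
        for i in range(jobs):
            hi = lo + size - 1 + (1 if i < rest else 0)
            ranges.append((lo, hi))
            lo = hi + 1
        return ranges

    # Example: 1,000,000 accounts over 10 jobs -> 10 ranges of 100,000 each.
    for lo, hi in split_account_range(1, 1_000_000, 10):
        print(f"create billing orders for accounts {lo}..{hi}")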

4.8.3 Billing in Contracts Accounts Receivable and Payable


During the billing process, the EDR records are aggregated per EDR type and other configuration settings, and the respective billing documents are created for each bill account.

Purpose
Show the scalability of this step for the two business scenarios outlined in section Transfer of Event
Detail Records above.


[Figure: Measured Throughput Billing vs. optimal throughput — total throughput in objects/second (0–4,500) over 20–100 parallel jobs]

This graph shows the throughput achieved for billing in relation to the optimal scaling curve. In one case the EDR table was partitioned; in the other it was a non-partitioned table. The expectation is that partitioning is performance neutral: it is implemented for ease of use when removing expired EDR data en masse when it is no longer required.

Resource Utilization for Throughput Achieved – Scalability Costs

[Figure: CPU Utilization vs. Runtimes (Billing) — physical CPUs rising from 15 at 20 jobs to 36 at 100 jobs, and runtimes (hh:mm:ss), over 20–100 parallel jobs]

The graph above shows the cost of the throughput in time and resources. Each parallel job is processing the same amount of data. From 80 parallel processes on, we see a definite trend toward longer total runtimes and relatively lower resource utilization, which indicates that a bottleneck has been reached. The graph below shows the total CPU (DB+APPS) in relation to the throughput achieved in billing accounts processed per second.


[Figure: Physical CPUs vs. Billing Objects/Sec (Billing) — CPUs rise from 15 to 36 while throughput rises from 823 objects/sec at 20 jobs to 2,616 at 80 jobs, falling back to 2,605 at 100 jobs]

The trend for parallelization shows 80 parallel jobs to be the optimal limit for these tests. Above this point, the system uses slightly more CPU resources with less throughput. As the system had no CPU limitation at this point, the bottleneck was elsewhere.

[Figure: SAP Statistics Breakdown vs. Throughput (Billing) — total work-process CPU time and DB2 response time (ms) against objects/sec, over 20–100 parallel jobs]

This graph looks at the SAP statistics for total CPU utilization in the work process and total DB
response time in relation to the throughput. At 100 parallel jobs, we see an increase in both DB
response time and CPU utilization. The total CPU utilization is a result of the number of objects
processed in total and not of the throughput per second. It is used here as a sanity check to verify
the data volume processed. Seen in this light, it is in line with the other parallelization levels.
The DB response times do indicate a bottleneck.


[Figure: SAP Statistics for Time Distribution (Billing) — CPU time, enqueue time, and wait-for-dispatcher time (ms) over 20–100 parallel jobs]

In the search for the cause of the bottleneck at 100 parallel processes, we look at the SAP statistics for the total run: the total application server CPU time, the time spent waiting for dispatch into a work process, and the time spent waiting on enqueue requests. Only the enqueue time shows an increasing trend; the others are linear.

[Figure: Enqueue Behaviour (Billing) — enqueue time (ms) and total number of enqueues over 20–100 parallel jobs]

The graph above shows the behavior of the SAP enqueue process managing logical locking. The SAP statistics also show an increase in the enqueue response time between 80 and 100 parallel jobs. The number of enqueue requests per job remains constant and therefore increases linearly with the level of parallelization. The time needed to handle the enqueue requests had reached a near vertical trend line. In the SAP configuration used for these tests, there was a single enqueue process. If the enqueue process itself was the bottleneck, this can be alleviated to a degree by adding another enqueue process; if the enqueue mechanism as such was the bottleneck, this poses a potential limitation on the further scalability of the billing process.
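For illustration, in a classic SAP instance the number of enqueue work processes is set in the instance profile; a sketch with an example value (not the PoC configuration):

    # SAP instance profile excerpt (example value, not the PoC setting)
    rdisp/wp_no_enq = 2   # run two enqueue work processes instead of one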

DB2 Observations for Billing

DB READ
Buffer pool hit ratios:
• Data buffer pool hit ratio upwards of 99.8%
• Index buffer pool hit ratio upwards of 99%

This left only a few logical read requests actually resulting in physical I/O: roughly 6,000 data I/O requests per minute and 70,000 index I/O requests per minute. Overall, those 76,000 I/O requests per minute performed at about 1.5 ms average per I/O.
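For reference, the hit ratios quoted here follow the usual buffer pool formula (1 minus physical reads over logical reads). A minimal Python sketch with illustrative counter values — the numbers below are made up to reproduce the quoted percentages, not PoC measurements:

    def hit_ratio(logical_reads, physical_reads):
        # Buffer pool hit ratio = 1 - physical reads / logical reads,
        # computed separately for data and index pages.
        return 1.0 - physical_reads / logical_reads

    print(f"data  hit ratio: {hit_ratio(5_000_000, 10_000):.2%}")   # 99.80%
    print(f"index hit ratio: {hit_ratio(7_000_000, 70_000):.2%}")   # 99.00%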

DB LOGGING
Logging is clearly an area requiring close attention, particularly in such workloads driving significant database change activity (insert, delete, update). We actually measured very stable logging performance, with about 70,000 log writes per minute at an average I/O time of 0.4 ms.

DB LOCKING
We observed an average of fewer than 100 lock waits per minute, with an average lock wait duration of 10.5 ms.

I/O Requirements

Backbone Network Trend

This graph shows the network bandwidth utilization between the application servers and the
database.

[Figure: Backbone Network (Billing) — read and write bandwidth over 20–100 parallel jobs]


Below is the correlation between the network bandwidth and the billing objects being processed per second via all parallel jobs. The throughput per second is related to the throughput volume and drops off after 80 parallel jobs, when the bottleneck takes effect. On average, the bandwidth requirement is 30 KB/s per object processed per second.
[Figure: Backbone Utilization and Billing Objects Processed — read and write KB/sec and billing objects/sec over 20–100 parallel jobs]

SAN Bandwidth Trends

[Figure: SAN Bandwidth (Billing) — read and write bandwidth over 20–100 parallel jobs]

The graph above shows the bandwidth utilization on the storage area network for the increasing
levels of parallelization. Below is the correlation between billing objects processed and the SAN
utilization.


[Figure: SAN Utilization and Billing Objects — MB/sec on SAN and billing objects/sec over 20–100 parallel jobs]
The SAN utilization trend shows an average bandwidth requirement of 38 KB/sec per object
processed per second.
[Figure: SAN Utilization and Parallelization (Billing) — total MB/sec and MB/sec per job over 20–100 parallel jobs]
This graph shows that the database I/O behavior has no direct relationship to the number of parallel jobs. As the total throughput in objects/sec increases, the bandwidth utilization on the SAN increases; as the parallelization increases, the bandwidth per job decreases. Although the parallelism affects the total processing throughput, it is not directly related to the I/O rate: the I/O rate is per object.


Scaling behavior¹

In the following series, 10 jobs are used to run billing for 1,000 billing accounts. The number of EDRs per billing account was varied from 5 to 50,000. Normally one would expect that a job processing 100 billing accounts with 50,000 EDRs each requires 10,000 times more time than a job where only 5 EDRs per account have to be processed.

[Figure: Billing Runtime — runtime in seconds (0–1,000) vs. EDRs per billing account (5, 50, 500, 5,000, 50,000)]

The graph shows that the difference between 5 and 50 EDRs per billing account is minor. This can be explained by the fact that, independent of the number of EDRs processed, checks have to be done, e.g. whether a billing account exists, is valid, and has not yet been billed. Only after these checks does the EDR processing begin. With 500, 5,000, and 50,000 EDRs per account the total runtime is higher, but not as high as one could expect: increasing the EDRs processed per billing account from 5 to 50,000 does not give a runtime 10,000 times higher, but only 90 times higher.

For billing, the scaling behavior with a growing number of input lines is better than linear. Nevertheless, it should be clear that linear scaling is to be expected if the higher number of input lines is compensated by an increasing number of billing accounts such that the number of lines per account remains constant.
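One simple cost model consistent with these numbers — a sketch, not a fit performed in the PoC — assumes a fixed per-account overhead c_0 for the checks plus a linear per-EDR cost c_1:

    T(n) \approx c_0 + c_1 n, \qquad
    \frac{T(50000)}{T(5)} \approx \frac{c_0 + 50000\, c_1}{c_0} \approx 90
    \quad\Rightarrow\quad c_1 \approx \frac{89\, c_0}{50000}

Under such a model the fixed overhead dominates at 5 and 50 EDRs (hence the minor difference), while at 50,000 EDRs it is amortized, yielding the observed factor of about 90 rather than 10,000.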

¹ All tests described so far in this document have been conducted according to the data model and on the environment described elsewhere in this document. For billing an average of 50 EDRs were considered as input, while for invoicing 3 billing lines served as input, as this was considered to be a good estimate.

However, it is important to know what happens if these numbers increase and whether the runtime of the job scales at least linearly with the number of rows processed. As this question unfortunately arose only after the environment described herein had already been dismantled, these tests had to be executed on a smaller environment. Therefore absolute runtimes cannot be compared with those described elsewhere in this document, but a comparison of the measured runtimes within these series already answers the question.


4.8.4 Invoicing in Contracts Accounts Receivable and Payable


This step merges all billing documents from multiple sources into a single invoice document per contract account and performs an aggregation where possible. In addition, open items, additional charges, and other items on the account can be selected and shown on the invoice. Finally, the G/L accounts are determined to prepare for summarized postings in the general ledger.

Purpose
Show the scalability of this step for the two business scenarios outlined in section Transfer of Event
Detail Records above.

Throughput achieved versus optimal throughput for linear scaling.


[Figure: Throughput Invoicing (maximal runtime) vs. optimal throughput — processed contract accounts/second (0–3,000) over 1–160 parallel jobs]

The trend shows the optimal parallelization at 100 jobs. The following graphs show that beyond this point, more CPU is utilized for only a minimal increase in throughput. The table below shows the total throughput, the throughput per job, and the throughput per minute per job.

Total accounts   No. of jobs   Processed acc. per job   Processed/minute/job
20,000           1             20,000                   904.8
40,000           20            2,000                    869.4
80,000           40            2,000                    762.6
120,000          60            2,000                    634.8
495,000          80            6,188                    762.6
990,000          100           9,900                    754.8
990,000          120           8,250                    650.4
990,000          140           7,071                    576.6
990,000          160           6,188                    495.0
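The columns of this table are related by simple arithmetic; the short Python sketch below reproduces the per-job figures for the 100-job run (the per-job runtime is derived from the table, not a separately reported measurement):

    def per_job_figures(total_accounts, jobs, per_minute_per_job):
        per_job = total_accounts / jobs
        runtime_min = per_job / per_minute_per_job  # derived, not measured
        return per_job, runtime_min

    per_job, runtime = per_job_figures(990_000, 100, 754.8)
    print(f"{per_job:.0f} accounts/job, ~{runtime:.1f} min per job")  # 9900, ~13.1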


CPU Utilization
[Figure: Physical CPU Utilization (Invoicing) — DB+CI and application server LPARs over 20–160 parallel jobs]

Up to 140 parallel jobs, the trend in CPU utilization is related to throughput. At 160 there appears
to be a bottleneck which is reducing both total CPU utilization and throughput. The total system
is still only consuming 52-54 physical CPUs of the available 64. Each LPAR has access to 48
physical CPUs and the largest LPAR consumption here is only 38-40 physical CPUs so there is
no resource constraint in this area.

[Figure: Total CPU vs. Objects/Sec (Invoicing) — total physical CPUs rising from 14.5 at 20 jobs to 53.5 at 140 jobs, falling to 51.8 at 160, against objects/sec]

The graph above shows the total system utilization in CPUs in relation to the throughput in objects processed per second. Here we verify that what we see at the OS level is reflected in the throughput behavior. The further parallelization, from 140 to 160, has caused a negative trend in behavior.


[Figure: CPU vs. Objects/Sec (Invoicing) — DB+CI and application server CPUs with linear trend lines against objects/sec, over 20–160 parallel jobs]

This graph analyses the trends over the components. We see the largest deviation in the
application server requirements.

[Figure: SAP Statistics with Throughput (Invoicing) — CPU time and DB time (ms) against objects/sec, over 20–160 parallel jobs]

The graph above shows the response time statistics from SAP: the CPU time utilized by the application server and the database response times. As the last four measurement points (from 100 jobs on) all processed the same volume of objects (990,000), the total CPU time remains constant, as it is a function of the total number of processed objects. As the parallelization increases, the DB response times increase. Interestingly, the throughput per second continues to increase with the parallelization up to 140 parallel jobs.

The graph below shows the total number of DB requests in relation to the number of billing accounts being processed. It shows that the number of DB requests is not increasing; the DB response time increase is a result of the parallelization.


[Figure: Billing Accounts vs. DB Requests (Invoicing) — processed accounts and total DB requests over 20–160 parallel jobs]

I/O Requirements

Backbone Network Trend


[Figure: Backbone Network and Processing Throughput (Invoicing) — read and written KB/s and objects/sec over 20–160 parallel jobs]

The average requirement over all runs is 49 KB/s per object processed per second. The peak bandwidth requirement was 66.3 KB/s per object processed per second.

SAN Trends

The graph below documents the SAN trends. These are the actual measured values for these tests and show the effect of the buffer pool state on the SAN requirements. We see a marked difference in the read bandwidth over these runs, indicating that in several cases we had optimal hit ratios in the DB2 buffer pools. To carry out repetitive tests with the same data volume, it was often necessary to reset the database (using a flash-copy recovery). Although warm-up runs were always done after a reset, the buffer pools continue to improve their content over time. We see the results here in the read bandwidth requirements.


[Figure: SAN and Processing Throughput (Invoicing) — read and written KB/s and objects/sec over 20–160 parallel jobs]

The average requirement per object processed per second is calculated at 66.3 KB/sec. The safer value is the maximum requirement, which is 105 KB/sec per object processed.
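As a sizing illustration using these coefficients, the sketch below estimates the SAN bandwidth for an assumed target throughput; only the per-object values (66.3 and 105 KB/sec) come from the measurements above, while the target of 1,500 objects/sec is an example:

    def san_bandwidth_mb_per_s(objects_per_sec, kb_per_obj):
        # Required SAN bandwidth = throughput x per-object bandwidth.
        return objects_per_sec * kb_per_obj / 1024.0

    target = 1500  # assumed target throughput in objects/sec
    print(f"avg case: {san_bandwidth_mb_per_s(target, 66.3):.0f} MB/s")   # ~97 MB/s
    print(f"max case: {san_bandwidth_mb_per_s(target, 105.0):.0f} MB/s")  # ~154 MB/s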

DB2 Observations for Invoicing

DB READ
Buffer pool hit ratios for the 20–60-way runs:
• Data buffer pool hit ratio upwards of 99.94%
• Index buffer pool hit ratio upwards of 99.4%

Physical read I/Os grow from 4,000/min to 30,000/min, with an average duration growing from 4.7 ms to 7.4 ms, for the runs from 20 to 60 parallel jobs. The run with 160 jobs reaches 100,000 I/Os/min at 5.1 ms average.

DB LOGGING
Here we see a steady correlation between activity and the number of I/Os, but I/O times are very stable at 0.2 to 0.3 ms average per I/O. The I/O throughputs for each run are:

Parallel jobs   Avg. log write I/Os (thousands/min)
1               4
20              10
40              33
60              50
160             106

DB LOCKING
There was very minimal locking activity observed for invoicing: fewer than 10 lock waits/min at an average lock wait time of 2.2 ms.


Scaling behavior²

In general, invoicing shows the same behavior as billing. To obtain this series, 10 jobs were started, each job processing 100 contract accounts. The number of billing lines per contract account was 5, 50, 500, 5,000, and 50,000 respectively.

[Figure: Invoicing Runtime — runtime in seconds vs. billing lines per contract account: 5 lines: 10 s, 50 lines: 13 s, 500 lines: 47 s, 5,000 lines: 401 s, 50,000 lines: 3,811 s]

The increase in the number of billing lines being handled by one job does not lead to a similar increase in runtime; in fact, the difference between 5 and 50 billing lines is 30% and not a factor of 10. Going to 50,000 billing lines per contract account increases the runtime by a factor of 380, far less than the 10,000 one could have expected.

For invoicing, the scaling behavior with a growing number of input lines is better than linear. Nevertheless, it should be clear that linear scaling is to be expected if the higher number of input lines is compensated by an increasing number of contract accounts such that the number of lines per account remains constant.

4.8.5 Payment Run


Invoices generated during convergent invoicing create open items in the sub-ledger accounts. This step balances the open items for all direct debit customers. (For this test 90% of the business partners are set to direct debit; we did not see a need to distinguish between bank and credit card payments, as the processing time is comparable.)

² See footnote 1: these series, too, were measured on a smaller environment after the PoC landscape had been dismantled, so absolute runtimes cannot be compared with those described elsewhere in this document.

Purpose:
Show the scalability of this step for a large number of business partners with few invoicing records. Please note that at this point in the process there is no need to distinguish between the two scenarios, as aggregation is already at the highest level and each invoice creates exactly one open item, independent of the type of scenario and customer group.

[Figure: Measured Throughput Payment Processing vs. throughput with optimal scaling — processed contract accounts/second (0–7,000) over 1–100 parallel jobs]

The scalability graph above shows very linear behavior up to 80 parallel jobs, after which we see further throughput growth but with diminishing gains. This could be a break-even point.

The view from the CPU side shows the same picture: the 100-way run is still achieving good price/performance in terms of total CPU utilization, but a break-even point is approaching.

[Figure: CPU Utilization vs. Throughput (PayRun) — total physical CPUs (14, 22, 29, 32, 35) against objects/sec, over 20–100 parallel jobs]


The graph below is shown with three trend-lines. The indication is that the decrease in
productivity at 100 parallel jobs may be coming from the application server side.

[Figure: CPU Utilization vs. Throughput (PayRun) — DB+CI and application server CPUs with linear trend lines against objects/sec, over 20–100 parallel jobs]

The graph below is taken from the SAP statistics summarized for each of the runs. It shows the total DB requests processed for the run and the average database response time. The volume of data processed for the runs from 40 to 100 parallel jobs is identical; we can see this in the total number of DB requests processed for each. Therefore the change in DB behavior is a result of the parallelization only. The DB response time begins to increase markedly with 80 parallel jobs. The optimal parallelization appears to be between 60 and 80 parallel jobs from the view of the DB, and 80 parallel jobs from the view of CPU resources.

[Figure: SAP Statistics – DB (PayRun) — total DB requests and DB response times (ms) over 20–100 parallel jobs]


[Figure: Total Runtimes vs. DB Request Times (PayRun) — DB request times (ms) and job runtimes (hh:mm:ss) over 20–100 parallel jobs]

The graph above shows the runtimes in relation to the DB request times. As the degree of parallelization increases and the DB response times increase, the overall runtimes for the jobs are nevertheless decreasing. The total throughput per second is increased such that the identical volume handled from points 40 to 100 is processed more quickly as a result of the further parallelization.

I/O Requirements

Backbone Network Trend

[Figure: Backbone Network (PayRun) — read and write KB/sec and objects/sec over 20–100 parallel jobs]

The following bandwidth requirement is measured over the backbone network per object processed per second: average 9.07 KB/sec, maximum 10.69 KB/sec.


SAN Network Trend

[Figure: SAN Network (PayRun) — read and write KB/sec and objects/sec over 20–100 parallel jobs]

The following bandwidth requirement is measured over the storage area network per object processed per second: average 31.99 KB/sec, maximum 39.01 KB/sec.

4.8.6 Payment media creation


This step is automatically triggered by the previous step and cannot run independently. It creates the files used for the direct debit collection initiated by the banks.

Purpose:
Show scalability using a varying number of parallel jobs and increasing data volume. Each processed contract account had one invoice.


[Figure: Measured throughput of payments written to file vs. throughput with optimal scaling — processed contract accounts/second (0–14,000) over 1–100 parallel jobs]

The graph below shows the CPU utilization vs the throughput for payment medium creation.

[Figure: Resources vs. Runtimes (PayMed) — total physical CPUs and total runtimes (hh:mm:ss) over 20–100 parallel jobs]


[Figure: CPU Utilization and Throughput (PayMed) — DB+CI and application server CPUs against objects/sec, over 20–100 parallel jobs]

This graph shows the measured CPU utilization versus the throughput measured for the payment medium creation. There are two major deviations in this graph: the DB utilization at 60 parallel jobs, and the final measurement point. The graph below tries to put this into perspective in regard to the runtimes. The trend shows that the higher CPU utilization yields shorter runtimes. The total runtimes for these jobs are only a few minutes, such that very precise measurements are hard to capture; a single CPU peak can play a role in the average CPU utilization over these short runtimes. Nevertheless, these measurements show the trends for CPU utilization and throughput for payment medium creation.

[Figure: Resources vs. Runtimes (PayMed) — total physical CPUs and total runtimes (hh:mm:ss) over 20–100 parallel jobs]


I/O Requirements

[Figure: Average Bandwidth (KB/s) per Object/sec (PayMed) — SAN and backbone network KB/s per object/sec with linear trend lines, over 20–100 parallel jobs]

The graph above, using two trend lines, gives an idea of the I/O bandwidth this processing step requires per object/sec processed. The measurements captured for this application, with its short runtimes, do not show a consistent behavior. The trend lines show that the I/O requirements are in no way critical.


4.8.7 Correspondence Print


This step creates the spool files for all invoices posted during convergent invoicing.

Purpose:
Show scalability using a varying number of parallel jobs and increasing invoice volume. Every processed contract account had one invoice; each job had to process the same number of invoices.

[Figure: Measured Throughput Correspondence Print vs. throughput with optimal scaling — invoices/second (0–1,200) over 1–120 parallel jobs]

The correspondence printing shows very reliable scalability. There is a hint of degradation in the 120-way run, which is explained in the following graphs: it is a limitation of physical resources.

The system is configured to allow each LPAR to use a maximum of 48 virtual CPUs (VCPUs). This means that an LPAR can consume a maximum of 48 physical CPUs of the available 64 in the shared pool. This is an optional setting and, for this processing step, a limitation. For this scenario it would have been better to configure a larger number of VCPUs for the application servers. Below, in the 100-way parallel run, we see the physical CPU utilization in averaged snapshots of 30 seconds duration. In these averages, the application server LPAR shows a peak of 45 physical CPUs of the possible 48; the short-term peaks would have been higher. At 100 parallel processes, the system is still maintaining the scalability point in line with expectations.

ISICC-Press CTB-2008-1.1 55
SAP EVENT DETAIL BILLING -
A CROSS INDUSTRY SOLUTION

[Figure: Physical CPU – 48 Limit per LPAR (Print) — application server LPAR snapshot peaks between 44.5 and 47.4 physical CPUs against much lower DB+CI utilization]

At 120 parallel processes, the snapshot peaks have reached 47.4 of the possible 48 physical CPUs, leaving little additional headroom for short-term peaks. The throughput has still increased between 100 and 120 jobs, showing that the system is coping well with this load, even at a utilization of well over 90%.

The graph below shows the physical CPU utilization in relation to the objects-per-second throughput. These jobs run for only 2 to 3 minutes (despite the volume they handle), so the comparison data based on 30-second snapshots does not capture enough measurement intervals to be entirely precise. According to the picture below, the price/performance break-even point is between 80 and 100 parallel processes on the application server side. For 100 and 120 parallel jobs, the database utilization remains constant, as correspondence printing is extremely application server oriented.

[Figure: Physical CPU vs. Throughput (Print) — application server CPUs rising from 14 to 47 against nearly constant DB+CI utilization and objects/sec throughput, over 20–120 parallel jobs]


[Figure: Throughput per CPU (Print) — objects/sec per average and per peak physical CPU, and total physical CPU utilization, over 20–120 parallel jobs]

This graph shows the throughput per second per physical CPU utilized (both the average over the run and the peak value). Looked at from this angle, the best price/performance (highest throughput per CPU utilized) is at 20 parallel jobs. The application continues to scale, with a slight decrease above 80 jobs in throughput per additional CPU used. This processing step parallelizes excellently.

I/O Requirements

Backbone Network Trend


[Figure: Backbone Network Utilization (Print) — KB/s read and written and total objects/sec over 20–120 parallel jobs]


The graph above shows the network utilization between application server and database in
relation to the objects/sec being processed. The bandwidth requirement per object/sec is on
average 80.3 KB/sec and maximum 85.1 KB/sec.

SAN Utilization Trend


[Figure: SAN Bandwidth related to Workload (Print) — KB/s read and written and total objects/sec over 20–120 parallel jobs]

This graph shows the SAN utilization measured per object/sec being processed. The bandwidth
requirement per object/sec being processed is on average 63.9 KB/sec, with a maximum of 84.7
KB/sec.


4.8.8 Dunning proposal run


This step scans the accounts for overdue items in order to create a list of possible dunning candidates.

Purpose:
Show scalability using a varying number of parallel jobs and increasing data volume. In our test, 10% of all processed contract accounts did not pay the open item and went into the dunning process.
[Figure: Measured Throughput Dunning Proposal vs. throughput with optimal scaling — processed contract accounts/second (0–16,000) over 1–120 parallel jobs]

The average runtime of these jobs was 14 seconds. There are no resource utilization
measurements for these runs.

4.8.9 Dunning activity run


This step is mandatory after the previous step; it creates dunning notices for all customers identified as having overdue open items during the previous step.

Purpose:
See dunning proposal.

ISICC-Press CTB-2008-1.1 59
SAP EVENT DETAIL BILLING -
A CROSS INDUSTRY SOLUTION

[Figure: Measured Throughput Dunnings vs. throughput with optimal scaling — dunning activities/second (0–8,000) over 1–120 parallel jobs]

The runtime of these jobs was less than 30 seconds. There are no resource utilization
measurements for these runs.

4.8.10 Deferred Revenues
This is an optional step that is not necessarily required for all scenarios. It does the posting of deferred revenue.

Purpose: Show the scalability of this step with increasing job numbers and data volume.

[Figure: Measured Throughput Deferred Revenues vs. throughput with optimal scaling — deferred revenue postings/second (0–8,000) over 40–80 parallel jobs]


[Figure: Deferred Revenue — physical CPUs (DB+CI and application server, 3.67–6.60) vs. objects/sec throughput, over 40–80 parallel processes]

For deferred revenue, there are only three measurement points. The average runtime is 4 minutes.
The average throughput achieved is 280 accounts/sec per CPU utilized. The CPU utilization is
small in relation to other critical path processing steps.


5 Solution Expert Sizing


5.1 Solution Sizing
This chapter focuses on the conclusions drawn from the solution's proof of concept and uses this information to provide tips and recommendations on the infrastructure sizing for this solution.

The PoC was designed around a scenario with 1 million business accounts, each with 50 EDRs per day: processing a total of 50 million EDRs per day.

The diagrams below show the load distribution ratios between database requirements and application server requirements for each of the processing steps. In many of the steps, there is a "fixed cost" at the low end, and the ratio changes with increasing parallelization. This is one reason why it is less accurate to take a low-end test and scale it into the high end. For the purposes of expert sizing, an average of the ratio is taken over the mid- to high-end scaling, and this average is used. Each value is based on the maximum number of physical CPUs used for the DB and application servers, because sizing has to take peaks into account; consequently, some spikes will be seen in the graphs. For steps with only a few measurement values, the average ratio is built from the ratio trend line.
The graphs on component ratio are to be understood in the following manner: how much application server capacity is needed per unit of DB server capacity, expressed in SAPS. For example, if the CPU load needs 1,200 SAPS, a ratio of 1:3 means for a 3-tier environment: 300 SAPS for the DB server and 900 SAPS for the application servers.
The optimal scalability is determined on a cost/performance basis. The break-even point selected is not necessarily a boundary, but the point where the benefits of more CPU investment are declining.
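A minimal Python sketch of how such a ratio translates into a 3-tier split, using the 1,200 SAPS example from the text:

    def split_saps(total_saps, n):
        # Split total SAPS by a DB:App ratio of 1:n.
        db = total_saps / (1 + n)
        return db, total_saps - db

    db, app = split_saps(1200, 3)
    print(f"DB server: {db:.0f} SAPS, application servers: {app:.0f} SAPS")
    # -> DB server: 300 SAPS, application servers: 900 SAPS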

Transfer of EDR
[Figure: DB/Application server ratio for Transfer of EDR — ratio and ratio average over 20–100 parallel jobs, nearly 1:1]
Component Ratio – 1:1 DB:APPs


[Figure: Measured Throughput Transfer vs. throughput with optimal scaling — transferred EDRs/second over 0–120 parallel jobs]

Transfer of EDR: Parallelization and Throughput – Optimal Parallelization: 20

The observed upload processes used approximately 100 MB of memory per process. The commit size used was 15,000. To find the optimal value for the commit size, it is recommended to start with a size of 2,000; more information can be found in chapter 4.8.1, "Transfer of EDR – Commit Size".

This data is based on test run results which loaded 50 million EDR records in 15 minutes. Included in the load scenario is a simulation overhead which generates the incoming EDRs. Within the scope of the PoC it was not possible to completely separate the load of the simulation from the actual EDR data loading via BAPI; the data for this one step therefore overestimates the resource requirement. The original expectation from SAP development, who designed the simulation, was a 10% overhead for the simulation load.

Creation of Billing Orders:

The trigger process cannot run in parallel; however, the single process was able to handle 1 million triggers in 3 minutes, so it does not yet pose any bottleneck threat to the end-to-end processing. It scales almost linearly with the data volume.


[Figure: Billing Order Creation — runtime of the creation job in seconds vs. billing orders generated (0–600,000)]

The requirements for this step cannot exceed a single CPU, as it is a single-process activity. It is therefore represented in the sizing with this maximum value.


Billing in the Contracts Accounts Receivable and Payable:

The following ratio chart includes both runs in which the database used range-partitioned tables and runs with non-partitioned tables. Range partitioning has shown no significant performance implications; rather, it proves to be performance neutral. More information on range partitioning and its usage can be found in the database chapter.
The data is based on the processing of 1 million billing accounts.
[Figure: DB/Application server ratio for Billing in FI-CA — with and without range partitioning, and average ratio, over 20–100 parallel jobs; ratio 1:2.5]

Billing - 1:2.5 DB:APPs

In the billing step, the profile changes significantly between the low and high end parallelization.
This is evident in both partitioned and non-partitioned database runs. In the low to medium
parallel runs, the DB server ratio is very low, and this changes between 20 and 40 parallel
processes.

[Figure: Physical CPUs vs. Billing Objects/Sec — CPUs 15–36 and throughput 823–2,616 objects/sec over 20–100 parallel jobs]

Billing: Parallelization and Throughput – Optimal Parallelization: 80


The billing scenario shows excellent scalability up to 60 parallel processes. At this point the scenario is utilizing 30 of the available 64 physical CPUs, so there is no physical limitation. The optimal level of parallelization has been set to 80. Above 80-way, the SAP enqueue response time increased. The configuration used for the PoC had a single enqueue process, and the resulting recommendation is to use two per application server. With this modification it can be assumed that the load could be further parallelized. However, there are other indications that the load is beginning to flatten out between 60 and 80 parallel processes, and therefore any extrapolation beyond 80-way is risky without a proof point. 80 parallel jobs is therefore taken as optimal in the sizing model.

Invoicing in the Contracts Accounts Receivable and Payable:

Assumption: 1 million contract accounts

[Figure: DB/Application server ratio for Invoicing in FI-CA — ratio, average, and linear trend over 20–160 parallel jobs; ratio 1:2]

Invoicing – 1 million contract accounts, 1:2 DB:APPs


[Figure: Measured Throughput Invoicing vs. optimal throughput — objects/second (0–3,000) over 1–160 parallel jobs]

Invoicing – 1 million contract accounts, Optimal Parallelization: 100

[Figure: Invoicing — number of physical CPUs (16 up to 64) vs. throughput (290 up to 1,345 objects/sec) over 20–160 parallel jobs]

Invoicing – CPU Utilization

The invoicing has exhausted the physical CPUs at 140 parallel processes. At this point all 64
CPUs are utilized. This profile will scale beyond what was achieved in the PoC given more
physical resources.


Payment Run
[Figure: DB/Application server ratio for the Payment Run — ratio, average, and linear trend over 20–100 parallel jobs; ratio 1:3.5]

Payment Run: 1:3.5 DB to Application Server Ratio

[Figure: Measured Throughput Payment Processing vs. throughput with optimal scaling — processed contract accounts/second (0–7,000) over 1–100 parallel jobs]

Payment Run: Optimal Parallelization is 80

The scalability graph above shows very linear behavior up to 80 parallel jobs, after which we see further throughput growth but with diminishing gains. This could be a break-even point.


Payment Medium
[Figure: DB/Application server ratio for Payment Medium — ratio, average, and linear trend over 20–100 parallel jobs; ratio 1:12]

Payment Medium Run: 1:12 DB to Application Server Ratio

The payment medium run is very application server centric. The values above are taken from the
peak physical CPU utilization; the average shows a 1 to 12 ratio.

[Figure: Measured throughput of payments written to file vs. throughput with optimal scaling — processed contract accounts/second (0–14,000) over 1–100 parallel jobs]

Payment Medium Run: Optimal Parallelization is 100

The payment medium run also scales very linearly and shows good performance up to a parallelization of 100 jobs.


Correspondence Print:

[Figure: DB/Application server ratio for Correspondence Print — ratio and average over 20–120 parallel jobs; ratio 1:8]

Correspondence Print: 1:8 DB to Application Server Ratio

[Figure: Physical CPU vs. Throughput for Correspondence Print — application server CPUs rising from 14 to 47 against nearly constant DB+CI utilization and objects/sec throughput, over 20–120 parallel jobs]

Correspondence Print: Parallelization >120

This chart shows that the bottleneck for further scalability lies with the physical resources.
Correspondence print shows a constant, nearly linear scalability in throughput.


Dunning Proposal
The precision of the measurements for the dunning proposal run in the PoC, covering 1 million accounts, is limited because the duration of this run was only 11 seconds. With an 11-second runtime, however, this is also not considered a critical path component. For purposes of sizing it can be ignored.

[Figure: DB/Application server ratio for the Dunning Proposal — ratio and average over 20–120 parallel jobs, nearly 1:2]

Dunning Proposal: 1:2 DB to Application Server Ratio

[Figure: Measured Throughput Dunning Proposal vs. throughput with optimal scaling — processed contract accounts/second (0–16,000) over 1–120 parallel jobs]

Optimal Parallelization: 120


Dunning Activity

[Figure: DB/Application server ratio for the Dunning Activity — ratio and average over 20–120 parallel jobs; ratio 1:3]

Dunning Activity - ratio average: 1:3

[Figure: Measured Throughput Dunnings vs. throughput with optimal scaling — dunning activities/second (0–8,000) over 1–120 parallel jobs]

Dunning Activity - optimal parallelization is > 120

This chart shows a minor deviation of the throughput from the purely linear scalability. The
dunning activities run without bottlenecks.


Deferred Revenue

[Figure: DB/Application server ratio for Deferred Revenue — ratio and average over 40–80 parallel jobs; ratio 1:1.5]

Deferred Revenue - ratio average: 1:1.5

[Figure: Deferred Revenue — average and maximum physical CPUs (8.7–19.2) vs. throughput/sec over 40–80 parallel jobs]

Deferred Revenue - optimal parallelization is 60


The deferred revenue scenario shows excellent scalability up to 60 parallel processes. At this point the scenario is utilizing 27 of the available 64 physical CPUs, so there is no physical limitation. These runs were done in relatively close sequence. In the first run, the database had not yet reached good buffer quality; we see the CPU resource requirement for the DB dropping over the three runs as the buffer quality improves. The throughput at 80 parallel processes drops back, which would indicate an application restriction at this level of parallelization. 60 parallel jobs is therefore chosen as optimal.


Summary

The following table is based on the run with 50 million EDR records and 1 million billing accounts. It shows the optimal parallelization ("break-even") points for each of the job steps, and the DB to application server ratio. This data is used to generate the expert sizing examples below.

Process step                                     DB/Appl. ratio   Break-even point of parallelization
Transfer of EDR                                  1:1              20
Creation of Billing Orders                       1:1              1
Billing in the Contracts Accounts
Receivable and Payable                           1:2.5            80
Invoicing in the Contracts Accounts
Receivable and Payable                           1:2              100
Payment Run                                      1:3.5            80
Payment Medium                                   1:12             100
Correspondence Print                             1:8              120
Dunning Proposal                                 1:2              120
Dunning Activity                                 1:3              120
Deferred Revenue                                 1:1.5            60

[Figure: Distribution of DB vs. application server load over the whole run — per process step, the DB share follows the ratios in the table above, from 50% (Transfer of EDR, Creation of Billing Orders) through 29% (Billing), 33% (Invoicing, Dunning Proposal), 22% (Payment Run), 8% (Payment Medium), 11% (Correspondence Print), and 25% (Dunning Activity), to 40% (Deferred Revenue)]


Ratio Fluctuation over Full Processing Chain


[Figure: Average DB/Application server ratio over the whole run — ratio 1:n per process step, varying between 1:1 and 1:12]
The graph above depicts the shift in component load ratios over the multiple processing steps. The ratio between DB and application server varies between 1:1 and 1:12. It is therefore highly recommended to use a 2-tier architecture to be more flexible in sharing resources.

• The most efficient way to deploy an SAP system from a throughput perspective is a 2-tier system:
  – Tier 1 = DB server & application server
  – Tier 2 = front end (SAP GUI, browser)
• Motivations for 3-tier landscapes are basically:
  – Scalability
  – Redundancy (clustering)
  – Security
  – TCA = pricing aspects
    • Scale-out (blades) vs. scale-up (mainframe)
    • Technology mix


Parallelization

As shown in chapter 4.8 (Detailed Test Requirements), the throughput of each process step scales quite linearly up to the break-even point of parallelization depicted in the table above.

The following example shows how the whole scenario can run in a timeframe of 8 hours:

Scenario 1:
Minimal configuration = 8 physical CPUs, equating to 14,259 normalized SAPS³

Process step                    Total runtime in %   Runtime in minutes   # of parallel jobs   Physical CPUs   Normalized SAPS
Transfer of EDR                 7%                   33.60                13                   6.45            11,350
Creation of Billing Order       4%                   19.20                1                    1.00            1,760
Billing in Contracts Accounts
Receivable and Payable          8%                   38.40                10                   7.50            13,203
Invoicing in Contracts Accounts
Receivable and Payable          22%                  105.60               11                   7.97            14,036
Payment Run                     7%                   33.60                9                    4.54            7,986
Payment Medium                  5%                   24.00                5                    5.10            8,974
Correspondence Print            37%                  177.60               11                   8.10            14,259
Dunning Proposal                4%                   19.20                7                    7.00            12,322
Dunning Activity                4%                   19.20                14                   3.95            6,948
Deferred Revenue                2%                   9.60                 19                   7.65            13,473
Total                           100%                 480.00                                    8.10 (max)

³ SAPS is a definition by SAP: SAP Application Performance Standard – 100 SAPS equals 6,000 dialog steps and 2,000 postings, or 2,400 SAP transactions. Normalized SAPS in this white paper relates to the POWER5 system capacity on which the PoC was run.
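The normalized SAPS column can be reproduced from the physical CPU column. The sketch below uses the conversion factor implied by the table itself (1.00 physical CPU corresponds to 1,760 normalized SAPS on the PoC's POWER5 hardware); small differences to the table values are rounding:

    # Normalized SAPS = physical CPUs x per-CPU capacity. The factor of
    # ~1,760 SAPS/CPU is implied by the table (1.00 CPU -> 1,760 SAPS).
    SAPS_PER_CPU = 1760

    for step, cpus in [("Transfer of EDR", 6.45),
                       ("Correspondence Print", 8.10)]:
        print(f"{step}: {cpus * SAPS_PER_CPU:.0f} SAPS")
    # 6.45 * 1760 = 11,352 (table: 11,350); 8.10 * 1760 = 14,256 (table: 14,259)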


[Figure: Runtime over Parallelization, 8 hours — percentage of total runtime (blue columns) and parallelization (red line) per process step]

The red line in the picture above shows the optimal parallelization over the whole run. The blue
columns show the distribution of the time in the critical path.
The Correspondence Print consumes about one-third of the total timeframe.

[Figure: Physical CPUs over runtime in minutes (8 hours) — physical CPUs and runtime per process step]

This graph depicts the number of physical CPUs used over the whole business process. Billing, invoicing, correspondence print, and deferred revenue require the highest numbers of CPUs, but the graph shows a very balanced use of CPUs.


What are the costs for reducing the timeframe to 4 hours?

Scenario 2:
The number of CPUs increases to 23, equating to 40,184 normalized SAPS.

Process step                     Runtime   Runtime   Parallel   Physical   Normalized
                                 in %      in min    jobs       CPUs       SAPS
Transfer of EDR                    10%       24.00      18        8.93       15,716
Creation of Billing Order           4%        9.60       1        1.00        1,760
Billing in Contracts Accounts
  Receivable and Payable            8%       19.20      20       15.00       26,406
Invoicing in Contracts Accounts
  Receivable and Payable           22%       52.80      22       15.95       28,071
Payment Run                         7%       16.80      17        8.57       15,084
Payment Medium                      5%       12.00      10       10.20       17,949
Correspondence Print               27%       64.80      31       22.83       40,184
Dunning Proposal                    5%       12.00      11       11.00       19,363
Dunning Activity                    7%       16.80      16        4.51        7,941
Deferred Revenue                    5%       12.00      15        6.04       10,637
Total                             100%      240.00               22.83

[Figure: Runtime over Parallelization, 4 hours (scenario 2) - columns show each step's percentage of total runtime (Correspondence Print 27%, Invoicing 22%, Transfer of EDR 10%, the remaining steps 4-8%); the line shows the number of parallel jobs per step.]


The graphs clearly show a peak for the Correspondence Print process.
This peak applies both to the runtime and to the number of CPUs used, and it determines the resources for the
whole run.
The system is no longer balanced!

[Figure: physical CPUs over runtime in minutes, 4 hours (scenario 2) - columns show the physical CPUs used per process step (from 1 up to 23 for Correspondence Print); the line shows the runtime in minutes.]

More than double the CPUs were needed to halve the runtime from 8 hours to 4 hours.
The reason for this is the limitation of the parallelization of the Transfer of EDR run to 20 jobs.
If this limit is ignored, accepting the small risk of not getting the whole run through in the
timeframe of four hours, a better overall result can be achieved:


Scenario 3:
The number of CPUs increases to 17, equating to 28,500 normalized SAPS.

Process step                     Runtime   Runtime   Parallel   Physical   Normalized
                                 in %      in min    jobs       CPUs       SAPS
Transfer of EDR                     7%       16.80      25       12.40       21,827
Creation of Billing Order           4%        9.60       1        1.00        1,760
Billing in Contracts Accounts
  Receivable and Payable            8%       19.20      20       15.00       26,406
Invoicing in Contracts Accounts
  Receivable and Payable           22%       52.80      22       15.95       28,071
Payment Run                         7%       16.80      17        8.57       15,084
Payment Medium                      5%       12.00      10       10.20       17,949
Correspondence Print               37%       88.80      22       16.20       28,517
Dunning Proposal                    4%        9.60      14       14.00       24,644
Dunning Activity                    4%        9.60      28        7.89       13,896
Deferred Revenue                    2%        4.80      37       14.91       26,237
Total                             100%      240.00               16.20

[Figure: Runtime over Parallelization, 4 hours (scenario 3) - columns show each step's percentage of total runtime (Correspondence Print 37%, Invoicing 22%, the remaining steps 2-8%); the line shows the number of parallel jobs per step.]


[Figure: physical CPUs over runtime in minutes, 4 hours (scenario 3) - columns show the physical CPUs used per process step (from 1 up to about 16); the line shows the runtime in minutes.]

A nearly balanced system could be achieved.


The risk of exceeding the parallelization limit is shown in the picture below:

[Figure: Transfer of EDR - objects/sec per physical CPU (CPU indicator) and objects/sec per job (job indicator) over the number of parallel jobs, 20 to 100.]

The throughput decreases dramatically as the parallelization increases. For a
parallelization of 25 jobs, this disadvantage can be accepted in exchange for the advantage of a balanced
system.

Optimal Parallelization and the risk, if exceeded


The following graphs can be used to understand the possible risk of exceeding the optimal, or
break-even, point of parallelization. In some cases, the price/performance gap is not significant
and the saving in runtime for this resource investment is worthwhile. In other cases the cost can
be extreme. The expert sizing focuses on the optimal parallelization and provides these graphs
to support decision making for individual cases.
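To illustrate how such graphs support the decision, the sketch below (with invented sample values, not PoC measurements) reads the break-even point as the job count beyond which the total throughput stops improving:

    # Sketch: locate the break-even point of parallelization from measured
    # samples. The (jobs, total objects/sec) pairs are invented for illustration.
    samples = [(20, 30_000), (40, 52_000), (60, 64_000), (80, 68_000), (100, 66_000)]

    best_jobs, best_rate = max(samples, key=lambda s: s[1])
    print(f"break-even at ~{best_jobs} jobs ({best_rate:,} objects/sec)")

    # Marginal throughput per added job; once it turns negative, additional
    # parallel jobs cost CPU without buying any runtime.
    for (j0, r0), (j1, r1) in zip(samples, samples[1:]):
        print(f"  {j0:3d} -> {j1:3d} jobs: {(r1 - r0) / (j1 - j0):+7.0f} objects/sec per job")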


[Figure: Billing - objects/sec per physical CPU (CPU indicator) and objects/sec per job (job indicator) over the number of parallel jobs, 20 to 100.]

With more than 80 parallel jobs the throughput in the EDR Billing run decreases while the
number of CPUs remains static.

[Figure: Convergent Invoicing - objects/sec per physical CPU and per job over the number of parallel jobs, 20 to 160.]

[Figure: Payment Run - objects/sec per physical CPU and per job over the number of parallel jobs, 20 to 100.]

The Convergent Invoicing run and the Payment Run show a similar behaviour when the optimal
number of parallel jobs is exceeded.


[Figure: Payment Medium - objects/sec per physical CPU and per job over the number of parallel jobs, 20 to 100.]

The Payment Medium run scales almost linearly up to a parallelization of 100. If this limit is
exceeded, the total throughput decreases.

[Figure: Correspondence Print - objects/sec per physical CPU and per job over the number of parallel jobs, 20 to 120.]

This graphic confirms the linear throughput scalability of the Printing run. Up to 120 parallel
jobs, and above, no significant performance deficit can be seen.


Memory Requirements

As described in chapter 3.2, the memory requirements depend on the number of work
processes defined and on the sum of the variable user context memory, which depends on the
number of parallel jobs per step.
Overall, the maximum memory requirement was 3 GB/CPU, which is less than is recommended
for an SAP solution installation.

[Figure: memory used per run in MB (0 to 6,000) for each process step, comparing the optimal run over 8 hours with the run over 1.5 hours.]

Memory over the optimized run over 8 hours

[Diagram: static memory defined by application server parameters (shared memory approx. 4 GB, processes approx. 2 GB) plus dynamic user context in extended memory (approx. 2 GB). User context per step:]

EDR-In     1,012 MB (13 jobs)
BillOrder    204 MB (1 job)
Billing      370 MB (10 jobs)
Invoice      225 MB (11 jobs)
PayRun       231 MB (9 jobs)
PayMed       165 MB (5 jobs)
Print        495 MB (11 jobs)
DunnPro       86 MB (7 jobs)
DunnAct      230 MB (14 jobs)
DefRev     2,195 MB (19 jobs)


Memory over the optimized run over 1.5 hours

[Diagram: static memory defined by application server parameters (shared memory approx. 4 GB, processes approx. 2 GB) plus dynamic user context in extended memory. User context per step:]

EDR-In     1.5 GB (20 jobs)
BillOrder  0.2 GB (1 job)
Billing    2.3 GB (61 jobs)
Invoice    1.4 GB (69 jobs)
PayRun     2.0 GB (80 jobs)
PayMed     1.5 GB (46 jobs)
Print      3.1 GB (69 jobs)
DunnPro    0.6 GB (49 jobs)
DunnAct    1.6 GB (98 jobs)
DefRev     5.7 GB (50 jobs)

The chart depicts that in the 1.5-hour run as well, Deferred Revenue (DefRev) determines the
memory high watermark.


6 General recommendations
6.1 Distribution of Objects

Within the mass activity framework, the distribution of objects over the jobs is done via so-called
intervals. They can be generated in two ways: either with a fixed number of objects per interval or
with a fixed number of intervals. In the former case the number of intervals is not known a priori,
while in the latter the interval size is unknown.
The correct choice needs some consideration, as shown below.

[Figure: Single Job Duration - runtime in seconds per job (Job_001 to Job_059); average 721 s, maximum 761 s.]

As can be seen clearly, 30 of the 60 jobs took nearly 50 seconds longer than the others, because the
number of intervals to be processed was not evenly divisible by the number of jobs. In this case there were
870 intervals to be processed by 60 jobs, which leads to 30 jobs processing 15 intervals and
30 jobs processing 14 intervals.
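A minimal sketch of this arithmetic (illustrative helper code, not part of the mass activity framework):

    # Sketch: distribute 870 intervals over 60 jobs and estimate the skew.
    intervals, jobs = 870, 60
    base, extra = divmod(intervals, jobs)   # base = 14 intervals, extra = 30 jobs
    print(f"{extra} jobs process {base + 1} intervals, "
          f"{jobs - extra} jobs process {base}")

    # The slowest jobs gate the step: 15/14 means about 7% extra runtime,
    # i.e. roughly 50 s on the observed average of ~721 s.
    skew = (base + 1) / base - 1
    print(f"runtime skew = {skew:.1%} = about {721 * skew:.0f} s")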

Assuming a better distribution of the billing accounts into the intervals, the maximum
runtime could have been lower and therefore a higher throughput achieved, as the following
picture shows:


[Figure: Comparison: Throughput Billing - billing accounts per second (0 to 3,000) over parallel jobs (0 to 120), one curve based on the average job runtime and one on the maximum job runtime.]

The upper curve assumes an equal distribution of the billing accounts over the jobs and shows the
maximum throughput that could be achieved. The lower one gives the throughput with the
uneven distribution.
For this case, the next picture shows the difference between the curves in percent:

[Figure: Improvement with even Data Distribution - improvement in runtime (%, scale 0 to 5) over 1 to 100 parallel jobs.]

This case can be considered a normal one. In extreme cases, where for example two jobs have
to handle three intervals, the gain can be as big as 25%: one job processes two intervals while an
even split would give 1.5, so an even distribution would cut the maximum runtime by a quarter.


6.2 Row Compression tests and results


Detecting or proving the benefits of the row compression feature of DB2 LUW was not a major
goal of this test series. However, some measurements were done in order to see possible effects,
positive as well as negative ones. They should ensure that this important feature for decreasing
the database size does not have a substantial negative impact on performance.

For this purpose, seven of the biggest tables were compressed. The following table shows the
chosen tables, the runs that could be influenced and the compression factor:

Table         Activity: inserting   Activity: selecting           Compression
              into table            from table                    factor
DFKKBIEDRTC   EDR Transfer          Billing                       59%
FKKDEFREV     Invoicing             ---                           83%
DFKKCOHI      Invoicing             ---                           77%
DFKKCOH       Invoicing             ---                           78%
DFKKOPK       Invoicing             Payment run, Payment media    84%
DFKKOP        Invoicing             Payment run, Payment media    76%
DFKKKO        Invoicing             Payment run, Payment media    75%

The effects of compression can be seen on the graph below:

[Figure: Throughput Uncompressed vs. Compressed Tables - processed objects per second (0 to 8,000) per mass activity (Billing, Invoicing, Payment Run, Payment Media), for uncompressed and compressed tables.]

While for billing and invoicing almost no effect is visible, payment run and payment media
processing clearly benefit from compression, with a throughput up to 25% higher.


This leads to the conclusion that most benefits from row compression can be expected for
activities that mainly read and update compressed tables. Those inserting new records into
compressed tables see fewer or no benefits and possibly a slight disadvantage (invoicing).

7 General Database Observations


By Thomas Aiche, DB2 Specialist, DB2 Solutions Center, PSSC Montpellier

7.1 General comments


A careful analysis at SQL cache level, trying to identify long-running SQL statements, showed
that the vast majority of SQL statements issued by the application had fairly straightforward
access plans, primarily favoring direct reads through index access. We observed very limited
sorting activity.
Actually, considering that the application is batch oriented, one would have expected some
long-running SQL statements involving scans. It seems the application architecture favors processing
reasonably sized chunks of data at a time, resolved through direct reads. The drawback of this
architecture is that you cannot then benefit from overlapping I/O with processing through pre-fetching.
Therefore, all reads become blocking and introduce a latency effect in processing. This design
could represent a possible application bottleneck to scalability.

7.2 Partitioning
Contrary to some RDBMS implementations, the DB2 database can manage very large tables
without the need to use range partitioning. In fact, it works so well that SAP actually does not
recommend implementing range partitioning and provides no explicit support for it in its internal
data dictionary. This means that range partitioning has to be handled manually, directly at
the database layer, and re-implemented across SAP maintenance operations. Nevertheless, range
partitioning is supported by DB2 and can provide a convenient way of rolling out stale data. The
DB2 Multi-Dimensional Clustering functionality would also provide a partitioning mechanism,
and this option is fully supported by SAP.
For this application, a partitioning scheme is desirable from the point of view of data
management. The EDR records are grouped by period, and dropped by period when no longer
required. For the proof of concept, range partitioning was tested to demonstrate that it could be
implemented for the purpose of application convenience, but it was not required for performance
reasons. Therefore, even though we performed most of the tests with normal, non-partitioned
tables, we tested partitioning for a set of tables to prove it would be a viable alternative. The
scope of the PoC did not actually cover the maintenance processes connected to old data rollout;
however, we did observe that partitioning had no measurable effect on production processing
performance, one way or another. This demonstrated that it is not necessary to partition large
tables for performance, but that they can be partitioned for other purposes at no cost to
performance.
Another alternative which was considered was Multi-Dimensional Clustering (MDC); this is the preferred
approach for SAP, and unique to DB2. For the purpose of the PoC, range partitioning was
selected to specifically address the performance challenge coming from another competitive
RDBMS technology which does not have MDC.
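As an illustration of the kind of scheme tested, the following sketch shows a range-partitioned table with roll-out by period, using standard DB2 9 syntax (table, column and partition names are hypothetical, not the PoC DDL):

    -- Sketch: range partitioning by billing period with roll-out (DB2 9).
    CREATE TABLE EDR_DETAIL (
        PERIOD  INTEGER NOT NULL,
        DOC_ID  BIGINT,
        AMOUNT  DECIMAL(15,2)
    ) PARTITION BY RANGE (PERIOD) (
        PARTITION P200701 STARTING (200701) ENDING (200701),
        PARTITION P200702 STARTING (200702) ENDING (200702)
    );

    -- Rolling out a stale period detaches it into its own table, which can
    -- then be archived or dropped; new periods are added with ADD PARTITION.
    ALTER TABLE EDR_DETAIL DETACH PARTITION P200701 INTO TABLE EDR_2007_01;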


7.3 Row Compression


One of the key technologies introduced with DB2 9 is row compression.
It has the potential to dramatically reduce storage costs.
Normally, the performance impact is minimal, provided you don't run at full CPU capacity.
In some cases, compression can even provide performance improvements, particularly in heavily
I/O bound workloads with large scans, thanks to improved I/O efficiency. This improved I/O
efficiency can also translate into a smaller memory footprint, as compression is carried all the
way into the buffer pools.

In our case, we selected some of the largest key tables and demonstrated storage benefits with
performance-neutral behavior, due to the specific I/O characteristics of the application workload.

Hereafter is a short summary of key compression figures, including elapsed times for loading the
tables, compressing them and collecting statistics. The last column provides the actual storage
savings, in percent, for the chosen tables.

Table              LOAD      Index for REORG   REORG     Tempspace used      RUNSTATS   Approx. pages saved by
                   (hours)   using [index]     (hours)   during REORG (GB)   (hours)    row compression (%)
DFKKBIEDRTC_COMP    2:00          ~0             7:16        215.65            1:54            59
FKKDEFREV_COMP      4:04          ~0            10:36        169.65            1:18            83
DFKKCOHI_COMP       0:15          ~0             0:21          3.17            0:10            77
DFKKCOH_COMP1       0:18          ~0             0:27     not monitored        0:16            78
DFKKCOH_COMP2       0:18          ~0             0:25     not monitored        0:15            78
DFKKOPK_COMP        0:41          ~0             2:52     not monitored        0:44            84
DFKKOP_COMP         1:06          ~0             1:50     not monitored        0:38            76
DFKKKO_COMP         0:33          ~0             1:38     not monitored        0:16            75

The figures exhibit significant savings of 75 to 85% for most tables.

However, because current DB2 technology does not presently compress indexes, which make up
a large part of the overall database size, this translates into overall savings of 30 to 40%,
corresponding to the generally communicated figure.
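For reference, the standard DB2 9 sequence for compressing an existing table looks like the sketch below, mirroring the LOAD/REORG/RUNSTATS columns of the table above (the schema name SAPEDB is an assumption; the exact commands used in the PoC are not documented here):

    db2 "ALTER TABLE SAPEDB.DFKKOP_COMP COMPRESS YES"
    # An offline (classic) REORG builds the compression dictionary and
    # compresses the existing rows; RESETDICTIONARY forces a fresh dictionary.
    db2 "REORG TABLE SAPEDB.DFKKOP_COMP RESETDICTIONARY"
    # Refresh the optimizer statistics afterwards (the RUNSTATS column above).
    db2 "RUNSTATS ON TABLE SAPEDB.DFKKOP_COMP WITH DISTRIBUTION AND INDEXES ALL"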


7.4 DB2 Configuration used in the project

The EDB test system was set up with DB2 9 FixPak 2. It was installed with a uniform page size
of 16 KB and all tablespaces managed by automatic storage.

One other remarkable aspect is the fact that DB2 does not require you to partition very large
tables, as is the case with some other DBMS. Although range partitioning is possible, it is not
explicitly supported by SAP.
Not being forced to partition tables allows for a much simpler out-of-the-box implementation.
The system was configured with standard parameter settings according to SAP note 899322. The
following configuration parameters were adjusted:

Database manager configuration

Adjustments were made to the number of agents as described in SAP note 899322. The monitor
heap was increased to 220 MB (56128 4KB pages).

Database configuration
Although STMM was initially allowed to automatically adjust most DB2 settings, it was decided
to turn it off for the later runs, mainly with the goal of ensuring that we kept a stable set of
configuration parameters. In the PoC it was necessary to reset the system (via FlashCopy restore)
multiple times in order to regain a specific status of the data for repeatable processing tests. This
never allowed the STMM to reach its best configuration, and therefore we enforced the
memory settings. This would not be the case in normal processing, where it would be
recommended to use the benefits of STMM.
Adjustments were made to the configuration parameters that control locking, logging, sort
memory and certain other parameters, as shown in the table below:

Parameter          New value                    Remark

LOGBUFSZ           8 MB (2048 4KB pages)        Log buffer
LOCKLIST           2 GB (524288 4KB pages)      Lock list
CATALOGCACHE_SZ    10 MB (2560 4KB pages)       Catalog cache
PCKCACHESZ         200 MB (50000 4KB pages)     Package cache
SORTHEAP           60 MB (15000 4KB pages)      Sort heap

Table 7.4-1: Database configuration parameters

The number of primary log files was increased to 40. The buffer pool (we used only one) was
set to 3 million 16K pages, or 48 GB.

DB2 registry variables

In addition to DB2_WORKLOAD=SAP, DB2_PARALLEL_IO='*' was set, which enables
parallel I/O for the tablespace containers.
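A sketch of how these settings can be applied from the DB2 command line (standard DB2 9 commands with the values documented above; the bufferpool name IBMDEFAULTBP is an assumption):

    db2set DB2_WORKLOAD=SAP        # enables the SAP-optimized registry defaults
    db2set DB2_PARALLEL_IO='*'     # parallel I/O for all tablespace containers

    db2 update dbm cfg using MON_HEAP_SZ 56128    # monitor heap, ~220 MB
    db2 update db cfg for EDB using LOGBUFSZ 2048 LOCKLIST 524288 \
        CATALOGCACHE_SZ 2560 PCKCACHESZ 50000 SORTHEAP 15000 LOGPRIMARY 40

    db2 connect to EDB
    db2 "ALTER BUFFERPOOL IBMDEFAULTBP SIZE 3000000"   # 3M 16K pages = 48 GB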


8 Database Technology
By Brigitte Blaeser, DB Specialist, SAP DB2 Development/Porting Center, IBM

DB2 for Linux, UNIX and Windows has some unique features which make it the ideal choice for
supporting large and critical SAP solutions.

8.1 Properties of DB2 for Linux, UNIX and Windows

• High-end scalability

DB2 provides high-end scalability through two classes of parallelism:

– Intra-partition parallelism can be used on any server machine with more than one CPU.
With intra-partition parallelism, DB2 can start subagents to work on a subset of the data.
This kind of parallelism requires no administrative overhead.

– Inter-partition parallelism is available with the Database Partitioning Feature (DPF).

DPF can be used in environments with several database server machines, or database partitions
can be created logically on large SMP machines. A database partition is a part of a database that
consists of its own data, indexes, configuration files, and transaction logs. Tables can be
distributed over several database partitions. Even though the database is partitioned, it appears to
be a single database to users and applications. Queries are processed in parallel in each database
partition, as are maintenance operations like table reorganization, index creation and data
loads. For backup and restore, once the first database partition (the catalog partition) is processed,
the other database partitions can be backed up and restored in parallel.

DPF is supported for the SAP NetWeaver Business Intelligence (SAP NetWeaver BI) component
and all applications based on SAP NetWeaver BI. This includes the SAP Supply Chain
Management (SAP SCM) application.
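For illustration, a minimal sketch of how database partitions are declared and a table distributed across them with DPF (standard DB2 commands; host and object names are hypothetical):

    # Sketch: four logical partitions on one SMP host, declared in db2nodes.cfg:
    #   0 edbhost 0
    #   1 edbhost 1
    #   2 edbhost 2
    #   3 edbhost 3

    db2 "CREATE DATABASE PARTITION GROUP PG_ALL ON DBPARTITIONNUMS (0 TO 3)"
    db2 "CREATE TABLESPACE TS_FACT IN DATABASE PARTITION GROUP PG_ALL"
    db2 "CREATE TABLE FACT_SALES (ID BIGINT NOT NULL, AMOUNT DECIMAL(15,2))
         IN TS_FACT DISTRIBUTE BY HASH (ID)"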

8.2 Ease of use and configuration

8.2.1 Automatic Storage Management

Automatic Storage Management was introduced in DB2 UDB V8.2.3 for single-partition
databases. In DB2 9, it is also available for multi-partition databases.


With automatic storage management, DB2 can manage its storage space by itself. Tablespace
containers are either created in the home directory of the DB2 instance owner, on a path or drive
specified during database creation or on multiple storage paths specified either during database
creation or added later. Instead of the complete database, single tablespaces can also be created as
automatic storage tablespaces. The containers are then created in the storage paths available for
the database.

Automatic storage management simplifies space management considerably. Database
administrators only have to ensure that there is enough space in the storage paths defined for the
database, or add additional ones. When tablespaces get full, containers are extended automatically
as long as there is space available in the storage paths.

SAP NetWeaver 2004s systems except SAP NetWeaver BI are installed with automatic storage
by default. Beginning with SAP NetWeaver BI 2004s SR2, SAP NetWeaver BI systems on DB2
9 are also installed with automatic storage management by default.
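A minimal sketch of both variants, using standard DB2 9 syntax (the storage paths are hypothetical):

    # Sketch: create a database whose storage DB2 manages by itself.
    db2 "CREATE DATABASE EDB AUTOMATIC STORAGE YES
         ON /db2/EDB/sapdata1, /db2/EDB/sapdata2 DBPATH ON /db2/EDB"

    # A single tablespace can also be created as an automatic storage
    # tablespace; its containers are then allocated on the storage paths.
    db2 "CREATE TABLESPACE TS_EDR MANAGED BY AUTOMATIC STORAGE"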

8.2.2 Automatic memory management

DB2 9 contains the self-tuning memory manager (STMM). With STMM, database memory
parameters, like size of the sort area, buffer pools, lock list and package cache, are adapted
automatically depending on the workload.

STMM works in two modes which are distinguished by the setting for the database configuration
parameter DATABASE_MEMORY. This parameter specifies the total amount of shared memory
available to the database.

In the first mode, DATABASE_MEMORY is set to a numerical value or to COMPUTE.
COMPUTE means that DATABASE_MEMORY is calculated by DB2 at database activation
time. If the database configuration parameter SELF_TUNING_MEM is set to ON and at least
two memory consumers are set to AUTOMATIC, STMM starts to work by balancing the overall
available memory resources between the consumers which are set to AUTOMATIC.

In the second mode of STMM, DATABASE_MEMORY itself is set to AUTOMATIC. In
this mode STMM also takes memory from the operating system if it needs it and if it is available.
Memory can also be given back to the operating system if no longer needed. This mode is only
available on the Windows and AIX platforms.
In new SAP DB2 9 installations, STMM is switched on by default for all memory consumers. For
more information, see SAP OSS notes 899322 (special section about STMM) and 976285.
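A sketch of how STMM is enabled with standard DB2 9 commands (in this PoC, as described in section 7.4, STMM was deliberately switched off again for the final runs):

    # Sketch: switch on STMM and hand memory consumers over to it.
    db2 update db cfg for EDB using SELF_TUNING_MEM ON DATABASE_MEMORY AUTOMATIC
    db2 update db cfg for EDB using LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC \
        PCKCACHESZ AUTOMATIC SORTHEAP AUTOMATIC SHEAPTHRES_SHR AUTOMATIC
    db2 "ALTER BUFFERPOOL IBMDEFAULTBP SIZE AUTOMATIC"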

8.2.3 Automated Table Maintenance

DB2 9 offers automatic statistics collection and table reorganization.


8.2.4 Deep Compression

Deep compression uses a compression technique with a static dictionary to compress the data
rows of a table. The dictionary has to be created after a representative sample of table data is
available. It is created either by an offline table reorganization, which also compresses the existing
table data, or via the DB2 INSPECT command, which only creates the dictionary without
compressing the existing data. In this process, recurring patterns in the table data are identified
and replaced by shorter symbol strings. The patterns can span multiple columns. The dictionary is
stored in the table header and loaded into memory when the table is accessed.

With deep compression, only table data is compressed but no indexes, LONG and LOB data. The
data is compressed on disk and in the bufferpool. Log records for compressed data contain the
data in compressed format. This has the following advantages:
• Disk storage can be saved. Tests with customer data have shown that for SAP data
compression ratios up to 80% can be achieved. In some customer cases, SAP NetWeaver
BI databases could be reduced to 50% of the consumed disk space overall. This reduces
the TCO considerably.
• The bufferpool hit ratio is increased because the data is stored compressed in the
bufferpool.
• The IO data transfer can be potentially reduced (for example for range queries and
prefetching)
• Log records are shorter (except for some kinds of updates)

Deep compression is released for all SAP applications. SAP OSS note 980067 contains general
recommendations and tools for using deep compression in SAP applications. Additional support
for managing compression for InfoCubes, DataStore objects and PSA in SAP NetWeaver BI and
its business information warehouse functionality are available (see SAP OSS notes 926919 and
906765).
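As a sketch, the INSPECT path looks as follows with standard DB2 9 commands (schema and file names are assumptions; the result file is written to the diagnostic data directory):

    db2 "ALTER TABLE SAPEDB.DFKKOP COMPRESS YES"
    # Estimates the savings and, for a COMPRESS YES table, builds the
    # dictionary without compressing the existing rows.
    db2 inspect rowcompestimate table name DFKKOP schema SAPEDB \
        results keep dfkkop.insp
    # db2inspf formats the stored result file for reading:
    db2inspf /db2/EDB/db2dump/dfkkop.insp dfkkop.txt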

8.2.5 Multi-dimensional Clustering (MDC)

Multi-dimensional clustering (MDC) was especially designed for data warehousing applications
to improve the performance of queries that have restrictions on multiple columns.

MDC provides a means for sorting the data rows in a table along multiple dimensions. The
clustering is always guaranteed. The following figure compares MDC to standard row-based
indexes:


[Figure: standard row-based indexes, with a single clustering index on one column (e.g. Region), compared with MDC, where the data is clustered along multiple dimensions (e.g. Region and Year).]

Diagram 8.2-1: MDC overview

With standard row-based indexes one index can be defined as the clustering index. In the
example above, this is the index on column Region. Queries that restrict on Region benefit from
sequential I/O while queries that restrict on another column usually require random I/O. With
MDC, the data can be sorted along multiple columns, like Region and Year. In the example
above, both queries that restrict on Region and queries that restrict on Year benefit from
sequential I/O.

Each unique combination of MDC dimension values forms a logical cell, which can physically
consist of one or more blocks of pages. The size of a block equals the extent size of the
tablespace in which the table is located, so that block boundaries line up with extent boundaries.
This is illustrated in the following figure:

[Figure: a table created with 'create table ... organize by (Year, Region)'. Each unique (Year, Region) value pair forms a logical cell occupying one or more blocks, with system-generated block indexes on Year, on Region and on the compound dimensions.]

Diagram 8.2-2: MDC dimension values


MDC introduces indexes that are block-based. These indexes are much smaller than regular
record-based indexes. Thus block indexes consume a lot less disk space and can be scanned
faster. With block indexes, the blocks or extents allocated for the MDC cells are indexed.

MDC also provides support for fast data roll-in and roll-out operations in data warehouse
applications:
• When a large amount of data is inserted into an MDC cell, the allocated blocks are locked
instead of every single row. This feature is called BLOCKLOCKING and can be enabled
with the ALTER TABLE statement for MDC tables.
• When data is deleted along one or more MDC dimensions, only the data pages of the
MDC blocks are marked as deleted instead of every single row.
Furthermore, the maintenance of the block indexes on the MDC dimensions is much more
efficient than the maintenance of standard row-based indexes. For both data roll-in and roll-out,
this reduces index maintenance time considerably.
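For illustration, a minimal MDC table matching the figures above, in standard DB2 syntax (the names are the illustrative ones from the diagrams):

    -- Sketch: an MDC table clustered along two dimensions.
    CREATE TABLE SALES (
        YEAR     INTEGER,
        REGION   CHAR(8),
        CUSTOMER CHAR(6),
        REVENUE  DECIMAL(15,2)
    ) ORGANIZE BY DIMENSIONS (YEAR, REGION);

    -- Roll-out along a dimension marks whole blocks as deleted and maintains
    -- only the small block indexes, not one index entry per row.
    DELETE FROM SALES WHERE YEAR = 2004;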

SAP supports MDC for PSA, InfoCubes, and DataStore objects in SAP NetWeaver 2004s BI. A
backport with limited functionality to SAP NetWeaver 2004 BI and earlier SAP Business
Information Warehouse component releases is available in SAP OSS note 942909.

8.2.6 DB2 “optimized for SAP”

DB2 is jointly developed with SAP to help customers ease configuration, enhance performance,
and increase availability of their SAP solutions running on DB2. The joint effort involves closely
linked development teams at SAP headquarters in Walldorf, Germany, and IBM laboratories in
Böblingen, Germany; Toronto, Canada; and Silicon Valley, USA.

Features like automatic storage management, STMM, Deep Compression, removal of tablespace
size limits and extensions to MDC and the SQL optimizer were designed and developed in close
cooperation with SAP to optimally support SAP applications.

DB2 provides a parameter for configuring DB2 for SAP workloads. By setting the DB2 registry
variable DB2_WORKLOAD=SAP a number of other DB2 registry settings for running SAP
applications are enabled automatically.

SAP is also an extensive in-house user of DB2. More than 800 SAP development systems run
with DB2, among them the most important ones for SAP NetWeaver development. SAP IT
adopted DB2 for business systems in 2002. The productive ERP, CRM and HR systems of SAP
run on DB2.


9 Hardware
UNIX server: System p5-595, 64-way, 256 GB memory, using logical partitioning and virtualization
Storage server: DS8300 with 30 TB of 15K rpm disk capacity in RAID 5 and RAID 0

9.1 Logical Landscape Overview

The diagram below shows the logical layout of the test system. The Event Detail Billing system
(EDB) was implemented in a 3-tier architecture with 2 logical partitions. These two LPARs were
connected internally (via the machine hypervisor) by means of a virtual Ethernet. This was done
to allow the tracking of the bandwidth required on the backbone network while at the same time
ensuring sufficient capacity to avoid any bottleneck in the network area, and to avoid any external
disturbance on the network. The internal vnet supports TCP/IP large packets and behaves like an
EtherChannel.

[Diagram: p5-595 64-way, 256 GB, with a shared processor pool of 64 CPUs. LPAR1: DB + CI (+ capacity for a 2-tier application server), VCPU=48, 96 GB memory. LPAR2: 2 application server instances, VCPU=48, 64-80 GB. LPAR3: DB master copy + dev CI, VCPU=10-32, 32-80 GB, activated as necessary. LPAR1 and LPAR2 communicate over a virtual Ethernet with large packets; a Gbit Ethernet serves the frontend network. The DS8300 on the SAN holds the master copy (used for dev and reset) and a production copy created with the FlashCopy functionality.]

The landscape contained 3 LPARs, two for production and one for “development”. LPAR3 is an
exact clone of LPAR1 and is used to make permanent changes in the landscape. In order to be
able to reset the system to a state which would allow reuse of the data, the FlashCopy
functionality of the DS8300 storage server was used. The actual “production” machine is a
FlashCopy of the “development” or master system in LPAR3. Tests were done in the production
system, and then the data was reset to the original state via FlashCopy to allow the same tests to
be rerun. There were up to 3 copies of the database on the storage server to allow a reset to
different points in the processing status.


9.2 Logical Partition Layout

IBM System p5 595 server


Installed memory 262144 MB
Physical processors 64 x 2.3 GHz

LPAR1 (is03d1) Database + CI


Processing mode Shared
Entitlement 24
Virtual processors 48
Sharing mode Uncapped Weight=128
Memory 98GB

LPAR2 (is03d2) Application Servers


Processing mode Shared
Entitlement 24
Virtual processors 48
Sharing mode Uncapped Weight=128
Memory 64GB

LPAR3 (Clone – Master Copy)


Processing mode Shared
Processing units 14
Virtual processors 32
Sharing mode Uncapped Weight=128
Memory 80GB

The logical partitions are set up such that each can access at most 48 physical processors in the
shared processor pool of 64 processors. With this configuration, the production system (LPAR1
and LPAR2) has a total of 96 active virtual CPUs (VCPUs). The 64-way shared processor pool
is therefore over-committed by 32 processors. This means that there would have been a possibility of a
sharing benefit in this range if the LPARs had needed this capacity. Normally, when the SAP
system load oscillates between component loads (DB and apps) or when there is a large
deviation in the load profiles over the different job steps, this processor sharing can bring a great
benefit, as the hardware instantly and automatically readjusts to cover the changing peaks.
In the case of this scenario, the load profiles vary in overall capacity requirements for the
different steps, but the profile of the load distribution remains fairly constant, and the processor
requirement did not push the system into a sharing mode. The system capacity exceeded the
processor requirement of the test scenarios. This was done for high-end scalability, to ensure that
there was no hardware restriction, and to allow for easier monitoring of resource utilization. Sizing
extrapolation is therefore also easier, as no virtualization effects need to be factored in for
eventual non-virtual environments. Nevertheless, these tests do not demonstrate the full benefit of
the virtualization technology.


9.3 Physical Hardware Infrastructure

[Diagram: p5-595 (64 CPUs at 2.3 GHz, 256 GB RAM) hosting LPAR1 (24 CPU/48 VCPU, 128 GB), LPAR2 (24 CPU/48 VCPU, 96-128 GB) and LPAR3 (5 CPU/10 VCPU, 32-64 GB); 4 fibers per LPAR into a 4 Gb SAN attaching the DS8300 (30 TB); a 1 Gb Ethernet frontend network (10.1.1.0, VLAN 262), an internal virtual network (10.10.1.0) with large packets enabled, and a 100 Mb admin network (10.3.62.0, VLAN 362) connecting the HMC.]


9.4 Storage Layout

The chart below shows the storage layout for the production system. On this DS8300, there are 4
copies of the test system. Only the production system is active during the performance tests. Each
of the systems has an additional temp2 volume, which was built to allow data reorganization and
cleanup; it is not part of the actual database. The sizes of the volumes documented below are
given in gigabytes.

Production System Data and Logs


AIX VG Hostname Comments
vg_temp2 l1_prod Data Area for Reorg
ndatavg l1_prod Data VG
nlogvg l1_prod Log VG
Other l1_prod Other systems (Flash Copy )

Storage server: DS8300, 20 TB of 15K rpm disk capacity in RAID 5 and RAID 0

DS S/N 75X5792           DS total capacity (GB)   DS available capacity (GB)
NB ranks: 24                   20,386                     2,945
Ext size w/o spare                909                       519
Ext size w/ spare                 779                       388
RAID type: 5

LUN sizes per rank (GB); SPARE marks ranks containing a spare drive; the last column is the free capacity per rank. The 24 ranks are spread over three device adapter (DA) pairs:

LUNs:  1    2    3   4    5    6    7   8   9  10          Free
      150  150   0   0    0    0    0   0   0   0  SPARE     88
      150  150   0   0    0    0    0   0   0   0           219
       50   50  15  15  190  190  190  10  10  10  SPARE     49
       50   50  15  15  190  190  190  10  10  10  SPARE     49
       50   50  15  15  190  190  190  10  10  10           179
       50   50  15  15  190  190  190  10  10  10           179
       50   50  15  15  190  190  190  10  10  10           179
       50   50  15  15  190  190  190  10  10  10           179
       50   50  15  15  190  190  190  10  10  10  SPARE     49
       50   50  15  15  190  190  190  10  10  10  SPARE     49
       50   50  15  15  190  190  190  10  10  10  SPARE     49
       50   50  15  15  190  190  190  10  10  10  SPARE     49
       50   50  15  15  190  190  190  10  10  10           179
       50   50  15  15  190  190  190  10  10  10           179
       50   50  15  15  190  190  190  10  10  10           179
       50   50  15  15  190  190  190  10  10  10           179
       50   50  15  15  190  190  190  10  10  10  SPARE     49
       50   50  15  15  190  190  190  10  10  10  SPARE     49
       50   50  15  15  190  190  190  10  10  10  SPARE     49
       50   50  15  15  190  190  190  10  10  10  SPARE     49
       50   50  15  15  190  190  190  10  10  10           179
       50   50  15  15  190  190  190  10  10  10           179
       50   50  15  15  190  190  190  10  10  10           179
       50   50  15  15  190  190  190  10  10  10           179


10 Hardware Technology
10.1 Attributes of the IBM POWER5 Server
64-bit POWER5 processing power
The P5 (2.3 GHz) chips are two-way simultaneous multithreaded dual-core chips, designed to
maximize the utilization of the computing power. They also include dynamic resource balancing
to ensure each thread receives its fair share of system resources. These cutting-edge processors
each appear as a four-way symmetric multiprocessor to the application layer: each hardware
thread appears as a logical processor.

Advanced scalability
Built on IBM's advanced MCMs (multichip modules), the p595 is designed to scale up to 64
processing cores in a single system. These 8-core MCMs place the processors extremely close
together, to enable faster movement of data and to increase reliability. The IBM p595 comes with
8 GB of DDR2 memory, which can scale up to 2 TB.

Consolidate with virtualization and partitioning


The p595 utilizes logical partitioning (LPAR) technology with IBM's Virtualization Engine™ to
support the consolidation of multiple UNIX and Linux workloads. The IBM System p5 595 offers
advanced consolidation technologies such as Dynamic Logical Partitioning, Micro-Partitioning™
and the Virtual I/O Server to maximize the resource utilization of the p595 system.
Micro-partitioning allows the consolidation of multiple independent AIX 5L and Linux
workloads with finely tuned performance. Micro-partitioning is based on the concept of a shared
processor pool. The individual LPARs access the CPU resources in the shared processor pool via
“virtual CPUs” (VCPUs). Virtual servers can have a resource entitlement as small as 1/10th of a
processor, in increments as small as 1/100th of a processor. By use of entitlements and LPAR
weightings, an effective priority system can be implemented to control the CPU resource
distribution. Physical CPU allocation is done at a microsecond level, allowing an extremely quick
response to shifting load requirements.

P5 Virtualization

In this document, virtualization is used to refer to the P5 micro-partitioning or the shared


processor pool functionality. This functionality implements a level of virtualization above the
physical processors, and allows LPARs to share the actual physical processor resources. In SAP
landscapes, this functionality is expected to improve the efficiency of resource utilization. The
graphs below depict the consolidation of 4 individual SAP load profiles, with different peak
requirements and peak times, into a single resource pool.


[Figure: four individual SAP load profiles (SAP BI, batch, web services, mySAP ERP) over a 24-hour day, and the combined load in the shared processor pool as percent overall server utilization.]

The idle resources that are inherent when hardware is sized to manage the peak requirements of
individual SAP systems can now be utilized to cover combined peak periods where the peaks do
not occur simultaneously. The distribution is on millisecond time slices, so even concurrently
active workloads benefit from resource sharing. The shared processor pool provides mechanisms
to allow the shared resources to be distributed according to policy. It is possible to restrain the
resource consumption of a partition, for example, by “capping” it. Capping basically sets a hard
limit for the LPAR. Uncapped partitions are guaranteed a minimum entitlement and then,
according to their priority, are able to use resources far in excess of their entitlement.

Diagram 10.1-3: Virtual CPUs for processor sharing

[Diagram: LPAR1 to LPAR3, each with VCPUs that the hypervisor schedules onto the physical CPUs of the shared processor pool.]

Micro-partitions, or shared pool LPARs, see virtual CPUs rather than physical CPUs. Virtual
CPUs are scheduled by the hypervisor much as processes are scheduled by the OS. Each VCPU is
given a processing time-slice according to its entitlement. One VCPU can utilize up to the
capacity of one physical CPU in the pool. An LPAR can have as many VCPUs as there are
physical CPUs in the shared pool. For further information:
http://www.redbooks.ibm.com/redbooks/SG247463/wwhelp/wwhimpl/java/html/wwhelp.htm

For this series of tests, processor virtualization was used to accommodate CPU resource sharing
between 2 LPARs which implemented the SAP system in a 3-tier configuration.

Virtual I/O Servers

When multiple systems are consolidated on a single machine, virtual I/O can provide further
physical resource sharing. The physical allocation of host bus adapters can be replaced by a full


virtualization of I/O: all adapters can be grouped under one or more VIO servers. VIO servers
provide links between the individual partitions and all the Ethernet and/or storage requirements;
and thus provide outstanding flexibility.

10.2 Storage Technology

IBM System Storage DS8000 Turbo

By Franck Lespinasse, IBM PSSC Montpellier

The IBM System Storage™ DS8000™ Turbo models are the newest members of
the IBM System Storage DS8000 series and offer even higher performance, higher
capacity storage systems that are designed to deliver a new higher standard in
performance, scalability, resiliency and total value. Created specifically for the
mission-critical workloads of medium and large enterprises, the DS8000 series can
help consolidate system storage, enable tiered storage solutions, simplify storage
management and support system availability to address the needs of businesses
operating in an on demand world.

The DS8000 Turbo models are designed to provide exceptional performance while adding
virtualization capabilities that can help organizations allocate system resources more effectively
and better control application quality of service. The DS8000 Turbo models offer powerful data
backup, remote mirroring and recovery functions that can help protect data from unforeseen
events. In addition, the DS8000 supports non-disruptive microcode changes. These functions are
designed to help maintain data availability, which can benefit businesses in markets where
information must be accessible around the clock, every day of the year.

Offering scalability of physical capacity from 1.1TB up to 320TB and up to seven times the
performance of the previous generation enterprise disk system, the DS8000 offers dramatic
opportunities for increased storage consolidation, the first step in simplifying storage and systems
infrastructures. The DS8000 offers the opportunity to realize dramatic, previously unavailable
cost savings. Organizations can mix and match disk packages that contain 73GB(15k rpm),
146GB(10k or 15k rpm) or 300GB(10k rpm) Fiber Channel disk drives or 500GB(7200 rpm)
Fiber Channel ATA (FATA) disk drives to construct a tailored system that addresses their
specific price, performance and capacity requirements. The DS8000 offers Fiber Channel ATA
(FATA) disk drive packages to help meet second-tier storage needs. These FATA drives can
provide cost-effective storage for large amounts of less frequently accessed information such as
backup data, archiving, document imaging, retention, and reference data. IBM’s RAID-5, as well
as RAID-10, implementations are both available and intermixable within a single DS8000 Turbo
model.

Multiple DS8000 Turbo models are available. The DS8100 Turbo offers the latest POWER5+™
processors in a dual 2-way cluster configuration and is designed to meet or exceed the vast
majority of customer capacity and performance requirements, with physical capacities ranging
from 1.1TB up to 192TB. The DS8300 Turbo also comes standard with the latest POWER5+


processors; however, these are configured in a dual 4-way cluster, providing even further
scalability of up to 320TB. The DS8300 9B2 model can optionally offer the added flexibility and
cost advantages of storage system LPARs (logical partitions), which are made possible by IBM's
Virtualization Engine functionality.

Resiliency for Business Continuance


The following DS8000 series functions can help keep your business running
• FlashCopy® - provides point-in-time copies of your data
• Metro Mirror (Synchronous mirroring)
• Global Mirror (Asynchronous mirroring)
• Metro/Global Mirror (3-site mirroring)
• Global Copy (Extended distance mirroring)
• FlashCopy for data protection
FlashCopy is a data duplication feature of the DS8000 and is used to create duplicate, point-in-time
volumes/LUNs with no impact to host resources. This data is then available to be used to
perform routine functions such as backup or application testing. Both source and target data are
available for use by applications immediately after a FlashCopy operation is initiated.
Remote Mirror and Copy for disaster tolerance
The copy and mirroring functions of the DS8000 are designed to offer remote data replication
capabilities. They provide several options to implement a replication solution based on your
company's needs. These DS8000 copy and mirroring functions are application independent.
Because the copying function occurs at the disk system level, the application has no knowledge
of its existence.
Metro Mirror
Metro Mirror is a synchronous protocol that allows real-time mirroring of data from one Logical
Unit (LUN) to another LUN. LUNs can be in the same DS8000 or in another DS8000 located up
to 300 km away when using Fibre Channel communication links.
Metro Mirror is a synchronous copy solution where write operations must be completed on the
secondary copy before they are acknowledged to the host, so the data is always kept in sync.
Global Copy is a non-synchronous long-distance copy option. It was specifically designed to
provide long-distance data copy, data migration, remote backup, and other applications like
fast database log forwarding. Fundamentally, Global Copy provides a virtually unlimited-distance
data-level copy facility.
Global Mirror is designed to provide a long-distance remote copy solution across two sites
using asynchronous technology. It operates over high-speed, Fibre Channel communication links
and is designed to provide the following:
• Support for virtually unlimited distance between the local and remote sites. This can better
enable you to choose your remote site location based on business needs and enables site
separation to add protection from localized disasters.
• A consistent and restartable copy of the data at the remote site, created with minimal impact to
applications at the local site.


• Data currency where, for many environments, the remote site lags behind the local site an
average of 3 to 5 seconds, minimizing the amount of data exposure in the event of an unplanned
outage.
Metro/Global Mirror
For supported mainframe and open systems servers, IBM Metro/Global Mirror is designed to
enable a disk mirroring function for the DS8000 that combines Metro Mirror with Global Mirror for a
long-distance data replication and disaster recovery/backup solution.


11 Appendix:
11.1 Software
AIX 5.3 SL 5
DB2 9.1 FP3 SAP Special Build (Technical Name: U811590_17979)
SAP ECC 6.0 Enhancement Package 2


11.2 AIX Parameters

DB + CI LPAR1 Settings Default


lru_file_repage 0 1
maxclient% 8 80
maxperm% 8 80
minperm% 3 20
rfc1323 1 0
tcp_recvspace 262144 16384
tcp_sendspace 262144 16384
udp_recvspace 65636 42080
nfs_max_threads 16 3891
nfs_rfc1323 1 0
memory_affinity 0
strict_maxclient 1
strict_maxperm 0
minfree 960
maxfree 1088

APP-Servers LPAR2
lru_file_repage 0 1
maxclient% 8 80
maxperm% 8 80
minperm% 3 20
rfc1323 1 0
tcp_recvspace 524288 16384
tcp_sendspace 524288 16384
udp_recvspace 65636 42080
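These tunables are applied with the standard AIX 5.3 tuning commands; below is a sketch for the LPAR1 settings (-p makes a change persistent across reboots):

    # Sketch: apply the documented LPAR1 tunables (AIX 5.3).
    vmo  -p -o lru_file_repage=0 -o maxclient%=8 -o maxperm%=8 -o minperm%=3 \
         -o strict_maxclient=1 -o strict_maxperm=0 -o minfree=960 -o maxfree=1088
    vmo  -r -o memory_affinity=0     # boot-time tunable, effective after reboot
    no   -p -o rfc1323=1 -o tcp_recvspace=262144 -o tcp_sendspace=262144 \
         -o udp_recvspace=65636
    nfso -p -o nfs_max_threads=16 -o nfs_rfc1323=1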

11.3 SAP Profiles

Profile CI
enque/table_size = 8192
rsdb/cua/buffersize = 10000
rtbb/max_tables = 4000
rtbb/buffer_length = 40000
zcsa/db_max_buftab = 10000
zcsa/table_buffer_area = 100000000
abap/buffersize = 1000000
SAPLOCALHOST = is03d1
ES/TABLE = SHM_SEGS
ES/SHM_PROC_SEG_COUNT = 64
ES/SHM_MAX_PRIV_SEGS = 63
ES/SHM_USER_COUNT = 4000
ES/SHM_SEG_COUNT = 4000
abap/heap_area_total = 4000000000


abap/heap_area_nondia = 3000000000
ztta/roll_extension_dia = 3000000000
abap/heap_area_dia = 2000000000
ztta/roll_area = 5000000
SAPSYSTEMNAME = EDB
SAPSYSTEM = 00
INSTANCE_NAME = DVEBMGS00
DIR_CT_RUN = $(DIR_EXE_ROOT)/run
DIR_EXECUTABLE = $(DIR_INSTANCE)/exe
login/no_automatic_user_sapstar = 0
PHYS_MEMSIZE = 512
exe/saposcol = $(DIR_CT_RUN)/saposcol
rdisp/wp_no_dia = 3
rdisp/wp_no_btc = 3
exe/icmbnd = $(DIR_CT_RUN)/icmbnd
icm/server_port_0 = PROT=HTTP,PORT=80$$
#-----------------------------------------------------------------------
# SAP Messaging Service parameters are set in the DEFAULT.PFL
#-----------------------------------------------------------------------
ms/server_port_0 = PROT=HTTP,PORT=81$$
rdisp/wp_no_enq = 1
rdisp/wp_no_vb = 1
em/initial_size_MB = 65536
rdisp/wp_no_vb2 = 1
rdisp/wp_no_spo = 1
#---------------------------------------------------------------------
#Values for shared memory pool sizes
#---------------------------------------------------------------------
ipc/shm_psize_10 = 320000000
ipc/shm_psize_14 = 0
ipc/shm_psize_18 = 0
ipc/shm_psize_19 = 0
ipc/shm_psize_30 = -10
ipc/shm_psize_40 = 200000000
ipc/shm_psize_41 = 0
ipc/shm_psize_51 = -10
ipc/shm_psize_52 = -10
ipc/shm_psize_54 = -10
ipc/shm_psize_55 = -10
ipc/shm_psize_57 = -10
ipc/shm_psize_58 = -10

Profile Application Servers (D01 and D02)


rsdb/ntab/irbdsize = 12000
rsdb/ntab/ftabsize = 60000
rsdb/obj/max_objects = 5000
rsdb/obj/buffersize = 32768
zcsa/presentation_buffer_area = 8192000
rsdb/cua/buffersize = 10000
rtbb/max_tables = 4000
rtbb/buffer_length = 40000
zcsa/db_max_buftab = 10000
zcsa/table_buffer_area = 100000000
abap/buffersize = 1000000
SAPLOCALHOST = is03d2


ES/TABLE = SHM_SEGS
ES/SHM_PROC_SEG_COUNT = 64
ES/SHM_MAX_PRIV_SEGS = 63
ES/SHM_USER_COUNT = 4000
ES/SHM_SEG_COUNT = 4000
abap/heap_area_total = 4000000000
abap/heap_area_nondia = 3000000000
ztta/roll_extension_dia = 3000000000
abap/heap_area_dia = 2000000000
ztta/roll_area = 5000000
SAPSYSTEMNAME = EDB
SAPSYSTEM = 01
INSTANCE_NAME = D01
DIR_CT_RUN = $(DIR_EXE_ROOT)/run
DIR_EXECUTABLE = $(DIR_INSTANCE)/exe
login/no_automatic_user_sapstar = 0
PHYS_MEMSIZE = 512
exe/saposcol = $(DIR_CT_RUN)/saposcol
rdisp/wp_no_dia = 20
rdisp/PG_SHM = 32768
rdisp/wp_no_btc = 80
exe/icmbnd = $(DIR_CT_RUN)/icmbnd
icm/server_port_0 = PROT=HTTP,PORT=80$$
#-----------------------------------------------------------------
# SAP Messaging Service parameters are set in the DEFAULT.PFL
#-----------------------------------------------------------------
ms/server_port_0 = PROT=HTTP,PORT=81$$
em/initial_size_MB = 65536
#-----------------------------------------------------------------
#Values used shared memory pool sizes
#-----------------------------------------------------------------
ipc/shm_psize_10 = 320000000
ipc/shm_psize_14 = 0
ipc/shm_psize_18 = 0
ipc/shm_psize_19 = 0
ipc/shm_psize_30 = -10
ipc/shm_psize_40 = 200000000
ipc/shm_psize_41 = 0
ipc/shm_psize_51 = -10
ipc/shm_psize_52 = -10
ipc/shm_psize_54 = -10
ipc/shm_psize_55 = -10
ipc/shm_psize_57 = -10
ipc/shm_psize_58 = -10


11.4 Database Parameters

Database Manager Configuration


Node type = Enterprise Server Edition with local and remote clients
Database manager configuration release level = 0x0b00
CPU speed (millisec/instruction) (CPUSPEED) = 4.133012e-07
Communications bandwidth (MB/sec) (COMM_BANDWIDTH) = 1.000000e+02
Max number of concurrently active databases (NUMDB) = 8
Federated Database System Support (FEDERATED) = NO
Transaction processor monitor name (TP_MON_NAME) =
Default charge-back account (DFT_ACCOUNT_STR) =
Java Development Kit installation path (JDK_PATH) =
/db2/db2edb/sqllib/java/jdk64

Diagnostic error capture level (DIAGLEVEL) = 3


Notify Level (NOTIFYLEVEL) = 3
Diagnostic data directory path (DIAGPATH) = /db2/EDB/db2dump
Default database monitor switches
Buffer pool (DFT_MON_BUFPOOL) = ON
Lock (DFT_MON_LOCK) = ON
Sort (DFT_MON_SORT) = ON
Statement (DFT_MON_STMT) = ON
Table (DFT_MON_TABLE) = ON
Timestamp (DFT_MON_TIMESTAMP) = ON
Unit of work (DFT_MON_UOW) = ON
Monitor health of instance and databases (HEALTH_MON) = OFF

SYSADM group name (SYSADM_GROUP) = DBEDBADM


SYSCTRL group name (SYSCTRL_GROUP) = DBEDBCTL
SYSMAINT group name (SYSMAINT_GROUP) = DBEDBMNT
SYSMON group name (SYSMON_GROUP) =

Client Userid-Password Plugin (CLNT_PW_PLUGIN) =


Client Kerberos Plugin (CLNT_KRB_PLUGIN) =
Group Plugin (GROUP_PLUGIN) =
GSS Plugin for Local Authorization (LOCAL_GSSPLUGIN) =
Server Plugin Mode (SRV_PLUGIN_MODE) = UNFENCED
Server List of GSS Plugins (SRVCON_GSSPLUGIN_LIST) =
Server Userid-Password Plugin (SRVCON_PW_PLUGIN) =
Server Connection Authentication (SRVCON_AUTH) = NOT_SPECIFIED
Database manager authentication (AUTHENTICATION) = SERVER_ENCRYPT
Cataloging allowed without authority (CATALOG_NOAUTH) = NO
Trust all clients (TRUST_ALLCLNTS) = YES
Trusted client authentication (TRUST_CLNTAUTH) = CLIENT
Bypass federated authentication (FED_NOAUTH) = NO
Default database path (DFTDBPATH) = /db2/EDB
Database monitor heap size (4KB) (MON_HEAP_SZ) = 512
Java Virtual Machine heap size (4KB) (JAVA_HEAP_SZ) = 2048
Audit buffer size (4KB) (AUDIT_BUF_SZ) = 0
Size of instance shared memory (4KB) (INSTANCE_MEMORY) = AUTOMATIC
Backup buffer default size (4KB) (BACKBUFSZ) = 1024
Restore buffer default size (4KB) (RESTBUFSZ) = 1024


Sort heap threshold (4KB) (SHEAPTHRES) = 0


Directory cache support (DIR_CACHE) = NO
Application support layer heap size (4KB) (ASLHEAPSZ) = 16
Max requester I/O block size (bytes) (RQRIOBLK) = 65000
Query heap size (4KB) (QUERY_HEAP_SZ) = 2000
Workload impact by throttled utilities(UTIL_IMPACT_LIM) = 10
Priority of agents (AGENTPRI) = SYSTEM
Max number of existing agents (MAXAGENTS) = 1024
Agent pool size (NUM_POOLAGENTS) = 10
Initial number of agents in pool (NUM_INITAGENTS) = 5
Max number of coordinating agents (MAX_COORDAGENTS) = MAXAGENTS
Max no. of concurrent coordinating agents (MAXCAGENTS) = MAX_COORDAGENTS
Max number of client connections (MAX_CONNECTIONS) = MAX_COORDAGENTS
Keep fenced process (KEEPFENCED) = NO
Number of pooled fenced processes (FENCED_POOL) = 5
Initial number of fenced processes (NUM_INITFENCED) = 0
Index re-creation time and redo index build (INDEXREC) = RESTART
Transaction manager database name (TM_DATABASE) = 1ST_CONN
Transaction resync interval (sec) (RESYNC_INTERVAL) = 180
SPM name (SPM_NAME) =
SPM log size (SPM_LOG_FILE_SZ) = 256
SPM resync agent limit (SPM_MAX_RESYNC) = 20
SPM log path (SPM_LOG_PATH) =
TCP/IP Service name (SVCENAME) = sapdb2EDB
Discovery mode (DISCOVER) = SEARCH
Discover server instance (DISCOVER_INST) = ENABLE

Maximum query degree of parallelism (MAX_QUERYDEGREE) = 1


Enable intra-partition parallelism (INTRA_PARALLEL) = NO

Maximum Asynchronous TQs per query (FEDERATED_ASYNC) = 0

No. of int. communication buffers(4KB)(FCM_NUM_BUFFERS) = AUTOMATIC


No. of int. communication channels (FCM_NUM_CHANNELS) = AUTOMATIC
Node connection elapse time (sec) (CONN_ELAPSE) = 10
Max number of node connection retries (MAX_CONNRETRIES) = 5
Max time difference between nodes (min) (MAX_TIME_DIFF) = 60

db2start/db2stop timeout (min) (START_STOP_TIME) = 10

Database Configuration for Database EDB

Database configuration release level = 0x0b00


Database release level = 0x0b00

Database territory = en_US


Database code page = 1208
Database code set = UTF-8
Database country/region code = 1
Database collating sequence = IDENTITY_16BIT
Alternate collating sequence (ALT_COLLATE) =
Database page size = 16384

Dynamic SQL Query management (DYN_QUERY_MGMT) = DISABLE


Discovery support for this database (DISCOVER_DB) = ENABLE

Restrict access = NO
Default query optimization class (DFT_QUERYOPT) = 5
Degree of parallelism (DFT_DEGREE) = ANY
Continue upon arithmetic exceptions (DFT_SQLMATHWARN) = NO
Default refresh age (DFT_REFRESH_AGE) = 0
Default maintained table types for opt (DFT_MTTB_TYPES) = SYSTEM
Number of frequent values retained (NUM_FREQVALUES) = 10
Number of quantiles retained (NUM_QUANTILES) = 20

Backup pending = NO

Database is consistent = NO
Rollforward pending = NO
Restore pending = NO

Multi-page file allocation enabled = YES

Log retain for recovery status = NO


User exit for logging status = NO

Self tuning memory (SELF_TUNING_MEM) = OFF


Size of database shared memory (4KB) (DATABASE_MEMORY) = COMPUTED
Database memory threshold (DB_MEM_THRESH) = 10
Max storage for lock list (4KB) (LOCKLIST) = 524288
Percent. of lock lists per application (MAXLOCKS) = 98
Package cache size (4KB) (PCKCACHESZ) = 50000
Sort heap thres for shared sorts (4KB) (SHEAPTHRES_SHR) = 100000
Sort list heap (4KB) (SORTHEAP) = 15000

Database heap (4KB) (DBHEAP) = 25000


Catalog cache size (4KB) (CATALOGCACHE_SZ) = 2560
Log buffer size (4KB) (LOGBUFSZ) = 2048
Utilities heap size (4KB) (UTIL_HEAP_SZ) = 10000
Buffer pool size (pages) (BUFFPAGE) = 10000
Max size of appl. group mem set (4KB) (APPGROUP_MEM_SZ) = 128000
Percent of mem for appl. group heap (GROUPHEAP_RATIO) = 25
Max appl. control heap size (4KB) (APP_CTL_HEAP_SZ) = 1600

SQL statement heap (4KB) (STMTHEAP) = 5120


Default application heap (4KB) (APPLHEAPSZ) = 4096
Statistics heap size (4KB) (STAT_HEAP_SZ) = 15000

Interval for checking deadlock (ms) (DLCHKTIME) = 10000


Lock timeout (sec) (LOCKTIMEOUT) = 3600

Changed pages threshold (CHNGPGS_THRESH) = 40


Number of asynchronous page cleaners (NUM_IOCLEANERS) = AUTOMATIC
Number of I/O servers (NUM_IOSERVERS) = AUTOMATIC
Index sort flag (INDEXSORT) = YES
Sequential detect flag (SEQDETECT) = YES
Default prefetch size (pages) (DFT_PREFETCH_SZ) = AUTOMATIC

Track modified pages (TRACKMOD) = ON


Default number of containers = 1


Default tablespace extentsize (pages) (DFT_EXTENT_SZ) = 2

Max number of active applications (MAXAPPLS) = AUTOMATIC


Average number of active applications (AVG_APPLS) = AUTOMATIC
Max DB files open per application (MAXFILOP) = 1950

Log file size (4KB) (LOGFILSIZ) = 65536


Number of primary log files (LOGPRIMARY) = 40
Number of secondary log files (LOGSECOND) = 40
Changed path to log files (NEWLOGPATH) =
Path to log files =
/db2/EDB/log_dir/NODE0000/
Overflow log path (OVERFLOWLOGPATH) =
Mirror log path (MIRRORLOGPATH) =
First active log file =
Block log on disk full (BLK_LOG_DSK_FUL) = YES
Percent max primary log space by transaction (MAX_LOG) = 0
Num. of active log files for 1 active UOW(NUM_LOG_SPAN) = 0

Group commit count (MINCOMMIT) = 1


Percent log file reclaimed before soft chckpt (SOFTMAX) = 500
Log retain for recovery enabled (LOGRETAIN) = OFF
User exit for logging enabled (USEREXIT) = OFF

HADR database role = STANDARD


HADR local host name (HADR_LOCAL_HOST) =
HADR local service name (HADR_LOCAL_SVC) =
HADR remote host name (HADR_REMOTE_HOST) =
HADR remote service name (HADR_REMOTE_SVC) =
HADR instance name of remote server (HADR_REMOTE_INST) =
HADR timeout value (HADR_TIMEOUT) = 120
HADR log write synchronization mode (HADR_SYNCMODE) = NEARSYNC

First log archive method (LOGARCHMETH1) = OFF


Options for logarchmeth1 (LOGARCHOPT1) =
Second log archive method (LOGARCHMETH2) = OFF
Options for logarchmeth2 (LOGARCHOPT2) =
Failover log archive path (FAILARCHPATH) =
Number of log archive retries on error (NUMARCHRETRY) = 5
Log archive retry Delay (secs) (ARCHRETRYDELAY) = 20
Vendor options (VENDOROPT) =

Auto restart enabled (AUTORESTART) = ON


Index re-creation time and redo index build (INDEXREC) = SYSTEM (RESTART)
Log pages during index build (LOGINDEXBUILD) = OFF
Default number of loadrec sessions (DFT_LOADREC_SES) = 1
Number of database backups to retain (NUM_DB_BACKUPS) = 12
Recovery history retention (days) (REC_HIS_RETENTN) = 60

TSM management class (TSM_MGMTCLASS) =


TSM node name (TSM_NODENAME) =
TSM owner (TSM_OWNER) =
TSM password (TSM_PASSWORD) =


Automatic maintenance (AUTO_MAINT) = OFF


Automatic database backup (AUTO_DB_BACKUP) = OFF
Automatic table maintenance (AUTO_TBL_MAINT) = OFF
Automatic runstats (AUTO_RUNSTATS) = OFF
Automatic statistics profiling (AUTO_STATS_PROF) = OFF
Automatic profile updates (AUTO_PROF_UPD) = OFF
Automatic reorganization (AUTO_REORG) = OFF
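
The database-level parameters for EDB can be inspected and adjusted in the same way; a minimal sketch, again assuming standard DB2 9 CLP syntax, with values taken from the listing above:

   db2 connect to EDB
   db2 get db cfg for EDB show detail         # shows current and deferred (pending) values
   db2 update db cfg for EDB using LOGFILSIZ 65536 LOGPRIMARY 40 LOGSECOND 40
   db2 update db cfg for EDB using SORTHEAP 15000 SHEAPTHRES_SHR 100000
   db2 terminate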

12 Copyrights and Trademarks


© IBM Corporation 1994-2005. All rights reserved. References in this document to IBM products or services do not
imply that IBM intends to make them available in every country.

The following terms are registered trademarks of International Business Machines Corporation in the United States
and/or other countries: AIX, AIX/L, AIX/L(logo), DB2, e(logo)server, IBM, IBM(logo), System P5, System/390, z/OS,
zSeries.

The following terms are trademarks of International Business Machines Corporation in the United States and/or other
countries: Advanced Micro-Partitioning, AIX/L(logo), AIX 5L, DB2 Universal Database, eServer, i5/OS, IBM
Virtualization Engine, Micro-Partitioning, iSeries, POWER, POWER4, POWER4+, POWER5, POWER5+,
POWER6.

A full list of U.S. trademarks owned by IBM may be found at: http://www.ibm.com/legal/copytrade.shtml.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

SAP, the SAP logo, and R/3 are trademarks or registered trademarks of SAP AG in Germany and in many other
countries.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.

Other company, product or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement
material, or other publicly available sources and does not constitute an endorsement of such products by IBM.
Sources for non-IBM list prices and performance numbers are taken from publicly available information, including
vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm
the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the
capability of non-IBM products should be addressed to the supplier of those products.

More about SAP trademarks at: http://www.sap.com/company/legal/copyright/trademark.asp

ISICC-Press CTB-2008-1.1

IBM SAP International Competence Center, Walldorf
