SAP TECHNICAL EDUCATION CONFERENCE 2002
WORKSHOP: SAP liveCache Administration & Monitoring
Werner Thesing
PRINT ON DEMAND
sponsored by
1
1
Learning Objectives
About the workshop
Agenda
(10) Recovery
(11) Configuration
(13) Summary
- In this workshop you will learn the main tasks of a liveCache administrator. In addition, the architecture and concepts of the liveCache are introduced, giving you an understanding of liveCache behavior and ideas on how to analyze and overcome performance bottlenecks.
- This workshop refers to liveCache release 7.4.
EUROPEAN SAP TECHNICAL EDUCATION CONFERENCE 2002
WORKSHOP: liveCache Administration & Monitoring
Sept. 30 - Oct. 2, 2002, Bremen, Germany

liveCache Concepts and Architecture
- In this unit you will learn the concepts and the architecture of the liveCache.
Why the liveCache has been developed (1)
- For the development of the Advanced Planning and Optimization (APO) component, a database system was needed that allows fast access to data organized in a complex network.
- Using conventional relational database management systems as data sources for APO showed poor performance, since disk I/O and the inappropriate description of the data in a relational schema limited throughput.
Why the liveCache has been developed (2)
(Figure: bringing data to the application - reading from the application buffer takes ~0.1 ms, while reading via the database server takes up to ~10 ms when disk I/O is involved; one data page = 8 KB. Comprehensive computation triggers huge data traffic between application and database server.)
- Reading data from an application buffer that lies in the same address space as the application takes about 0.1 ms. Reading data from a database takes about 1 ms if the corresponding record is already in the database buffer, and up to 10 ms if the record first has to be read from disk.
- Working with an application whose buffer is too small to accommodate all required data causes heavy data traffic between the application and the database server.
- An additional problem of traditional buffering is that, after being read into the application buffer, the data is still organized in a relational schema, which is not appropriate for describing complex networks.
- To achieve good performance for applications that require access to a large amount of data (e.g. APO), it is necessary to bring the application logic and the application data together in one address space. One possible solution would be to shift the application logic from the application server to the database server via stored procedures; however, this impairs the scalability of R/3. Alternatively, one could shift all required data to the application server, but this requires each server to be equipped with very large main memory. Furthermore, synchronizing the data changed on each server with the data stored in the database server is rather complicated.
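The impact of buffer locality on overall request time can be illustrated with a small back-of-the-envelope model. This is a sketch using the rough figures from this slide (~0.1 ms, ~1 ms, ~10 ms); the numbers are illustrative orders of magnitude, not measurements:

```python
# Rough cost model for n record accesses with the latencies quoted above:
#   application buffer ~0.1 ms, database buffer ~1 ms, disk ~10 ms.
APP_BUFFER_MS = 0.1
DB_BUFFER_MS = 1.0
DISK_MS = 10.0

def total_request_time_ms(n_accesses, app_hit_ratio, db_hit_ratio):
    """Average total time for n_accesses record reads.

    app_hit_ratio: fraction served from the application buffer.
    db_hit_ratio:  fraction of the remaining reads served from the
                   database buffer (the rest goes to disk).
    """
    app_hits = n_accesses * app_hit_ratio
    remaining = n_accesses - app_hits
    db_hits = remaining * db_hit_ratio
    disk_reads = remaining - db_hits
    return (app_hits * APP_BUFFER_MS
            + db_hits * DB_BUFFER_MS
            + disk_reads * DISK_MS)

# 10,000 accesses, everything already in the application buffer:
fast = total_request_time_ms(10_000, 1.0, 0.0)
# 10,000 accesses, nothing local, 90% database buffer hit rate:
slow = total_request_time_ms(10_000, 0.0, 0.9)
print(fast, slow)   # roughly 1000 ms vs 19000 ms
```

Even with a good database buffer hit rate, keeping data out of the application's address space costs more than an order of magnitude here, which is exactly the gap liveCache is designed to close.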
Why the liveCache has been developed (3)
(Figure: the liveCache as a dedicated planning server next to the R/3 landscape - message-based coupling between the application servers and the Advanced Planning Server, minor performance impact on transactional processing, concurrency and transactional consistency on comprehensive computation, buffered data structures optimized for advanced business applications, running on a dedicated hardware/software system alongside the database.)
What is the liveCache (1)
What is the liveCache (2)
- The APO application uses a complex object-oriented application model. This model is easier to implement with object-oriented programming than with the structures of a relational database. Therefore, liveCache supports object-oriented programming by providing adequate C++ methods/functions.
- liveCache provides the application with the concept of consistent views to isolate the data of an application from simultaneous updates by other users (reader isolation).
- COM routines are implemented in liveCache as stored procedures. Therefore, calling a COM routine from ABAP is quite simple using EXEC SQL.
liveCache objective
(Figure: request times - ABAP access via SQL: > 1 ms; C++ access within liveCache: < 10 µs.)
- In a standard SAP system, typical database request times are above 1 ms. For data-intensive applications, a new technology is required in order to achieve better response times. liveCache has been developed to reduce typical database request times to below 10 µs. Key factors in achieving these response times are:
  - Accesses to liveCache data usually do not involve any disk I/O.
  - The processes accessing the data are optimized C++ routines that run in the process context of the liveCache on the liveCache server.
  - Object orientation enables the use of efficient programming techniques. Moreover, compared to a relational database, where many related tables may have to be accessed to retrieve all requested information, one object contains all the relevant information, so the need to access numerous objects or tables is eliminated. In other words, the typical liveCache data structure is NOT a relational data table.
  - Objects are referenced via logical pointers (OIDs). In contrast to referencing records via keys (as in standard SQL), no search in an index tree is required.
- APO is the first product to use liveCache technology.
liveCache architecture (1)
(Figure: the SAP instances communicate with the liveCache, inside which the COM objects (DLLs) run; the liveCache stores its data on the liveCache devices.)
- ABAP programs and the APO optimizers use Native SQL to communicate with liveCache through the standard SAP DB interface. liveCache has an SQL interface that is used to communicate with the SAP instances. With Native SQL, ABAP programs call stored procedures in the liveCache that point to Component Object Model (COM) routines written in C++. An SQL class provides SQL methods to access the SQL data from within the COM routines.
- The COM routines are part of a dynamic link library that runs in the process context of the liveCache instance. In the Windows NT implementation of liveCache, COM routines and their interfaces are registered in the Windows NT Registry. For the UNIX implementation, a registry file is provided by liveCache. A persistent C++ class provides the COM routines with access to the corresponding Object Management System (OMS) data stored in the liveCache.
- COM routines in APO are delivered as DLL libraries (SAPXXX.DLL and SAPXXX.LST) on NT, or as shared libraries (SAPXXX.ISO and SAPXXX.LST) on UNIX. The application-specific knowledge is built into these COM routines based on the concept of object orientation.
liveCache architecture (2)
(Figure: inside the liveCache kernel, the command analyzer passes requests to the SQL class framework with OMS application embedding, which calls the COM objects (DLLs); below them, the SQL basis (B* trees) and the OMS basis (page chains) rest on the common DBMS basis, which accesses the SQL/object data devices and the log devices.)
liveCache administration tools
- liveCache, like the standard SAP RDBMS, can be administered from within the SAP system. The SAP transaction LC10 makes it possible to monitor, configure, and administer the liveCache.
- LC10 uses the Database Manager CLI (DBMCLI) to administer the liveCache. Consequently, all of this functionality is also available without an SAP system and can be performed with the "native" database administration tool DBMCLI.
- In addition to DBMCLI, the administration tool DBMGUI is available, a graphical user interface to the liveCache management client tool DBMCLI.
- While DBMGUI runs only on Windows NT/2000, running the Web DBM requires only an internet browser and the DBM Web Server, which can be installed anywhere in the network.
- DBMCLI, DBMGUI, and Web DBM should not be used for starting or stopping the liveCache, even though LC10 itself calls DBMCLI for starting and stopping. They should only be used for changing liveCache parameters, defining backup media, and monitoring the liveCache. The reason is that LC10, in addition to starting and stopping, also runs application-specific initialization reports and registers the COM routines each time the liveCache is started.
liveCache Integration into R/3 via LC10
- In this unit you will learn how to integrate an existing liveCache into the CCMS.
Transaction LC10
liveCache integration into LC10 (1)
- Choose 'Integration' on the initial screen of LC10 to reach the integration screen.
- The integration data is required for the multi-DB connection from an R/3 system to the liveCache via Native SQL. It is stored in the tables DBCON and DBCONUSR on the RDBMS.
- The 'Name of the database connection' is used for a Native SQL connection from an R/3 system.
- The 'liveCache name' is the name of the liveCache database. It can differ from the name of the database connection.
- The server name in 'liveCache server name' is case-sensitive. It must be identical to the output of the command 'hostname' at a DOS prompt or UNIX shell.
- The default user/password combinations are control/control for the DBM operator and sapr3/sap for the standard liveCache user.
- The APO application server has to be stopped and restarted after changes to the liveCache connection information. This guarantees that the R/3 system connects to the correct liveCache instance.
liveCache integration into LC10 (2)
Central authorization:
- Central authorization data is stored in the APO database in table DBCONUSR.
- This authorization method is recommended and is the default.
- It is new with version 46D -> APO 3.1.
liveCache integration into LC10 (3)
liveCache integration into LC10 (4)
- When the alert monitor is activated, a number of performance-critical values (e.g. heap and device usage, cache hit rates) are collected periodically and displayed in the alert monitor, which can be reached by pressing 'Alert monitor' on the initial screen of LC10.
- The alert monitor is activated by default if the liveCache was installed with the standard installation tool (LCSETUP).
Basic Administration
- At the end of this unit you will be able to start, stop, and initialize a liveCache, and you will know where to find the liveCache diagnosis files.
liveCache status
(Screenshot: basic status information)
- This is the main screen of LC10, which can be reached by pressing 'liveCache Monitoring' on the initial screen of LC10. It offers all services and information needed to administer the liveCache.
- Before this window appears, the R/3 system sends a status request to the liveCache. The liveCache name and liveCache server information are stored in the table DBCON as described on the previous slides. The remaining information displays the output of the status request. If the connection to a liveCache is not available, an error message is displayed.
- The left frame of the screen shows a tree that contains all information and services needed to administer the liveCache. The tree branches with the most important information and services are opened by default.
- The right frame displays the details that belong to the activated branch of the service tree.
- Initially, the screen belonging to the 'Properties' icon is activated.
- The 'DBM server version' displays the version of the Database Manager server, which is responsible for the dbmcli communication with the liveCache.
- The 'liveCache version' shows the liveCache kernel build version.
- The traffic light at 'liveCache status' indicates the operation mode of the liveCache.
liveCache operation modes
Starting, initializing and stopping the liveCache
(Screenshot: starting, stopping and initializing the liveCache)
Initialize liveCache (1)
- Initializing the liveCache always formats the log volumes. If the data volumes do not already exist, they are created and formatted as well.
- All liveCache data is lost after initialization and has to be loaded again via the APO system or via a recovery.
- The program LCINIT.BAT with the option init is used to initialize the liveCache.
- The initialization process is logged in the log file LCINIT.LOG, which is displayed automatically at the end of the initialization process.
Initialize liveCache (2)
liveCache Message Files: lcinit.log
- Each time the liveCache is started, stopped, or initialized, a log file (LCINIT.LOG) is written, which can be viewed in the branch 'Logs->Initialization->Currently' of the service tree.
- The log file of the previous starts, stops, or initializations is displayed in 'Logs->Initialization->History'.
- The tab 'Controlfile' of the selection 'Problem Analysis->Logs->Initialization' displays the script LCINIT.BAT, which is used to start, stop, and initialize the liveCache.
- Whenever the liveCache is started, stopped, or initialized successfully, you will find the message
  liveCache <connection name> successfully started/stopped/initialized
  at the end of the log file.
liveCache message files: knldiag
(Screenshot: liveCache system message files)
- The knldiag file logs messages about current liveCache activities. The logged actions include liveCache start, user logons, the writing of savepoints, errors, and liveCache shutdown. This makes the file one of the most important diagnostic files for analyzing database problems or performance bottlenecks.
- The knldiag file is recreated at every liveCache start. The previous one is saved as 'knldiag.old' ('Problem Analysis->Messages->Kernel->Previous'), which means that the content of any given knldiag file is definitely lost after two consecutive restarts. To avoid losing the information about fatal errors that occurred during two consecutive startup failures, errors are additionally appended to the file 'knldiag.err'.
- To prevent the knldiag file from growing without limit while the database is in the online operation mode, it has a fixed length, which can be set via a database configuration parameter. The system messages are written cyclically within this fixed length. Therefore, after a long operation time, the knldiag file may no longer contain all system messages. This is another reason why all error messages are also written to the file 'knldiag.err'.
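The interplay of the two files - a bounded, cyclically overwritten message log plus an append-only error log - can be pictured with a small sketch. This is purely illustrative (the capacity and message texts are invented); it models the described behavior, not the kernel's actual file handling:

```python
from collections import deque

class DiagnosticLog:
    """Sketch of the two-file scheme described above: a bounded,
    cyclically overwritten message log (knldiag) plus an append-only
    error log (knldiag.err) that survives overwrites and restarts."""

    def __init__(self, max_messages=5):
        # deque with maxlen drops the oldest entry once full,
        # mimicking the cyclic overwrite of the fixed-size knldiag.
        self.knldiag = deque(maxlen=max_messages)
        self.knldiag_err = []          # never truncated

    def log(self, message, error=False):
        self.knldiag.append(message)
        if error:
            self.knldiag_err.append(message)

    def restart(self):
        # On every liveCache start knldiag is recreated;
        # knldiag.err is left untouched.
        self.knldiag.clear()

log = DiagnosticLog(max_messages=3)
for i in range(5):
    log.log(f"savepoint {i}")
log.log("I/O error on data volume", error=True)
print(list(log.knldiag))     # only the 3 most recent messages remain
print(log.knldiag_err)       # the error is preserved permanently
```

After two restarts the `knldiag` side would be empty, while the error entry is still available in `knldiag_err` - which is exactly why the next slide recommends knldiag.err for post-mortem analysis.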
liveCache message files: knldiag.err
- In contrast to the knldiag file, knldiag.err is neither overwritten cyclically nor reinitialized during a restart. It consecutively logs the starting times of the database and any serious errors.
- This file is needed to analyze errors whose original messages in the knldiag files have already been overwritten.
liveCache directories
(Figure: directory structure below /sapdb, with subdirectories for <SID> and the db tree.)
liveCache directories: example (1)
- /sapdb/data/wrk/LCA/dbahist: detailed log files for each backup and restore of the database.
liveCache files: example (2)
(Screenshot annotations: documentation files; root directory for the SAP DB Web Server; scripts for the creation of system tables; list of installed files.)
- /sapdb/LCA/db/sap: dynamic link libraries (or shared object files, respectively) that contain the application code executed via COM in the database. Here you can also find the script LCINIT.BAT used to start, stop, and initialize the liveCache.
Complete Data Backup
- At the end of this unit you will be able to perform a complete backup of the liveCache using the database administration tool DBMGUI.
Complete data backup
(Figure: a complete data backup writes all occupied pages of the data volumes (Data 1 ... Data n) and the parameter file from /sapdb/data/config/<SID> to the backup medium under the label DAT_00001; the log volumes (Log 1, Log 2, ...) are not part of the data backup.)
- A complete backup saves all occupied pages of the data volumes. In addition, the liveCache parameter file is written to the backup.
- The complete backup, as well as the incremental backups (see later), is always consistent at the transaction level, since the before images of running transactions are stored in the data area, i.e. they are included in the backup.
- Each backup gets a label reflecting the sequence of the backups. This label is used by the administrative tools to distinguish the backups. A mapping from the logical backup medium name to the backup label can be found in the file dbm.mdf in the <Rundirectory> of the liveCache.
- For each backup, a log entry is written to the file dbm.knl in the <Rundirectory>.
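The labelling scheme can be sketched as a simple counter per backup kind together with the medium-to-label mapping. This sketch only illustrates the numbering described above (DAT_00001 for data, LOG_00001 for log backups); the real on-disk formats of dbm.mdf and dbm.knl are not modeled:

```python
class BackupHistory:
    """Sketch of the backup labelling described above: each backup
    gets a sequential label (DAT_00001, DAT_00002, ..., LOG_00001,
    ...), and the mapping from the logical medium name to that label
    is recorded (in the real system: file dbm.mdf in the
    <Rundirectory>)."""

    PREFIX = {"DATA": "DAT", "LOG": "LOG"}

    def __init__(self):
        self.counters = {"DATA": 0, "LOG": 0}
        self.medium_to_label = {}      # stands in for dbm.mdf

    def take_backup(self, kind, medium_name):
        self.counters[kind] += 1
        label = f"{self.PREFIX[kind]}_{self.counters[kind]:05d}"
        self.medium_to_label[medium_name] = label
        return label

hist = BackupHistory()
print(hist.take_backup("DATA", "datamedium"))   # DAT_00001
print(hist.take_backup("LOG", "logmedium"))     # LOG_00001
print(hist.medium_to_label)
```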
Backup (1) : Start the DBMGUI
- To perform an initial backup of the liveCache we use the DBMGUI, which can be called by choosing 'Tools->Database Manager (GUI)'. After the selection, you will be asked for the user name of the Database Manager operator and its password, which are usually CONTROL/CONTROL.
- Since the backup and restore procedures of a liveCache are identical to those of an OLTP instance of the SAP DB, these functions are not directly included in the liveCache-specific transaction LC10 but can be accessed via the general administration tool DBMGUI.
- To use the DBMGUI, it has to be installed on the local PC.
Backup (2) : Create a backup media
Backup (3a) : Create a backup media
- You can choose almost any name for the medium name. Only a few names are reserved for external backup tools: ADSM, NSR, BACK. If your medium name begins with one of these strings, an external backup tool is expected.
- Besides the medium name, you have to specify a location. Enter the complete path of the medium; if you specify only a file name, the file will be created in the <Rundirectory> of the database.
- There are four backup types:
  - Complete: complete backup of the data, saves all occupied pages of the data volumes.
  - Incremental: incremental backup of the data, saves all pages changed since the last complete data backup.
  - Log: interactive backup of the full log area (in units of log segments).
  - AutoLog: automatic log backup; when a log segment is completed, it is written to the defined medium.
- For a complete or incremental data backup you can choose one of three device types: file, tape, or pipe. For a log backup you can choose file or pipe; it is not possible to save log segments directly to tape.
- After you have entered the necessary information, press the 'OK' button (green check mark).
- The media definition is stored in the file dbm.mmm in the <Rundirectory> of the database.
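The reserved-prefix rule for medium names can be expressed as a small check. This is a sketch of the rule stated above with only the three prefixes from the slide; treating the comparison as case-insensitive is an assumption for illustration:

```python
# Media whose names begin with one of these strings are handed to an
# external backup tool instead of being treated as file/tape/pipe media.
EXTERNAL_TOOL_PREFIXES = ("ADSM", "NSR", "BACK")

def uses_external_backup_tool(medium_name):
    """True if the medium name selects an external backup tool.
    Note: the case-insensitive comparison is an assumption here."""
    return medium_name.upper().startswith(EXTERNAL_TOOL_PREFIXES)

print(uses_external_backup_tool("NSR_complete"))   # True  -> external tool
print(uses_external_backup_tool("datamedium"))     # False -> ordinary medium
```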
Backup (3b) : Create media for external backup tools
Backup (4) : Start complete data backup
- To create a complete data backup, select 'Backup->Complete'. In the central window you are offered all media that are available for this operation.
- After you have chosen a medium, confirm your choice by pressing the 'Next Step' button. The following window repeats your choice and asks you to confirm it. Once this is done, the backup process starts, and you can follow its progress in a progress bar.
Backup (5) : Final backup report
Running a backup in the background
  - backup type
- Before the report can be executed, the backup media must be defined, which can be done with the DBMGUI as shown on the previous slides.
Data Storage
- At the conclusion of this unit you will be able to monitor the data page usage of the liveCache.
liveCache objects
- The liveCache was designed to store instances of C++ classes that are defined within COM routines. At runtime, a COM routine generates instances of classes. These instances are called "persistent objects" since they survive their creators (the COM routines). They are stored in the liveCache and on physical disks.
- The example above shows the definition of a class (MyObj) used to generate persistent objects in the liveCache, and its usage by a COM object (TestComponent).
- By inheriting from the template OmsKeyedObject, all instances of the class MyObj inherit the ability to be stored persistently in the liveCache. The template OmsKeyedObject belongs to the API supplied by the liveCache; it offers transaction control (rollback, commit), lock mechanisms, access methods, and the ability to be stored persistently to all derived classes.
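The behavior that derived classes inherit - keyed access plus transactional commit/rollback - can be mimicked in a short sketch. This is an illustration of the described semantics in Python, not the actual C++ OMS API (class and key names are invented):

```python
class KeyedObjectStore:
    """Sketch of a keyed persistent container: objects are stored
    under a key, and changes made since the last commit can be
    rolled back - mimicking the commit/rollback facilities that
    OmsKeyedObject provides to derived classes."""

    def __init__(self):
        self.committed = {}   # persistent state
        self.pending = {}     # uncommitted changes of the open transaction

    def store(self, key, obj):
        self.pending[key] = obj

    def get(self, key):
        if key in self.pending:           # transaction sees its own changes
            return self.pending[key]
        return self.committed.get(key)

    def commit(self):
        self.committed.update(self.pending)
        self.pending.clear()

    def rollback(self):
        self.pending.clear()

store = KeyedObjectStore()
store.store("order-1", {"qty": 10})
store.commit()
store.store("order-1", {"qty": 99})   # change inside a new transaction
store.rollback()                      # undo the uncommitted change
print(store.get("order-1"))           # {'qty': 10}
```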
liveCache data storage
- SQL data is stored on SQL pages and sorted using the B* tree algorithm. Access occurs via a key and requires a search for the record position in the index. In contrast, object data is stored on OMS pages, which are linked to build page chains. Objects are accessed via an OID. The OID already contains the object position, so no further search is required.
- In the liveCache, all data is stored in data volume pages regardless of the data type (SQL data or object data).
- The size of a data page is 8 KB.
Object access: RDBMS approach
(Figure: two tables logically linked by relational data via a primary key; to follow a logical reference, the primary index is searched for the record position, and the data records are then retrieved from the data pages via buffers or disk.)
- Application data in APO is organized as a network of linked data records. Data records contain application data and usually one or more links to other records, used for navigating the data network.
- In a traditional relational database management system, data is stored in relational tables. Tables containing related data are logically linked through one or more fields (which may, but do not have to, carry the same names). Usually the primary key of the tables is used as the link criterion.
- To retrieve data from a table, an index is used: either the primary index containing the primary key, or a secondary index. Normally more than one access to index data is necessary to navigate to the table data in the data pages.
- Navigation over a network of data stored in one or several tables is performed through several round trips between the application program and the database:
  1. The database reads the first record and returns it to the application program.
  2. The application program obtains the primary key of the next record from data stored in the first record.
  3. The database reads the next record and returns it to the application program.
  4. Steps 1-3 repeat until all data is read.
- If most of the accessed pages are buffered in the database's RAM, no disk access is required; otherwise, the database software has to read information from the hard disks to fulfill the data request. Physical disk access slows down the performance of the database.
Object access: liveCache approach
(Figure: objects of class 1 are stored in container 1 and objects of class 2 in container 2; objects reference each other via a physical reference, the OID, which consists of a page number plus an offset, e.g. into page 5 or page 20.)
- In liveCache the data (objects) is stored in class containers, which consist of doubly linked page chains. Navigation between objects is very fast because objects are referenced using a physical reference: the object ID (OID), which contains the page number and the page offset.
- Direct access to the body of an object, e.g. by searching for data inside the body, is not possible. The only alternative is keyed objects, where the application may define a key on the object. Features like LIKE, GT, LT, etc. are not supported; only a key-range iterator is supplied.
- liveCache can also store data in relational tables and access it accordingly, but this is used only for a minority of the data.
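The difference between key-based access in an RDBMS and OID-based access in liveCache can be sketched side by side. The data structures below are illustrative only; page numbers, offsets, and record contents are invented:

```python
import bisect

# --- RDBMS-style access: search a sorted index for the key first. ---
index = [("A", 0), ("B", 1), ("C", 2)]          # sorted (key, slot) pairs
heap = ["record A", "record B", "record C"]

def get_by_key(key):
    """Find the record position via an index search, then fetch it."""
    pos = bisect.bisect_left(index, (key,))
    if pos < len(index) and index[pos][0] == key:
        return heap[index[pos][1]]
    return None

# --- liveCache-style access: the OID *is* the position. ---
pages = {5: ["obj 1", "obj 2"], 20: ["obj 3"]}  # page number -> frames

def get_by_oid(oid):
    """OID = (page number, offset): a direct dereference, no search."""
    page_no, offset = oid
    return pages[page_no][offset]

print(get_by_key("B"))        # found after an index search
print(get_by_oid((20, 0)))    # direct dereference, no search needed
```

The key lookup costs a search (logarithmic in the index size, and in a real B* tree one page access per tree level), while the OID dereference is a constant-time jump to page and offset - which is why OID navigation stays fast regardless of the data volume.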
Class container for objects of fixed length
(Figure: a class container's page chain with its optional index and the 'next free' pointer into the free chain.)
- The liveCache supplies two kinds of class containers for storing objects: one for objects of fixed length and one for objects of variable length.
- Class containers for objects of fixed length contain solely objects that are instances of one class and are thus all of the same length.
- Containers consist of chains of doubly linked pages. All pages that contain free space to accommodate further objects are additionally linked in a free chain.
- Since new objects are always inserted into the first page of the free chain, a class container can be partitioned into more than one chain to avoid bottlenecks during massively parallel inserts of objects.
- The root page of each chain includes administrative data, e.g. the pointer to the first page in the chain that still has space for another object.
- An index can be defined for a class container (but at most one), which maps a key of fixed length onto an OID. Objects of such containers can then also be accessed via a key. The index is organized as one or several B* trees.
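The free-chain insertion described above can be sketched as follows. This is a simplified model: the page capacity is invented, the doubly linked page chain is reduced to Python lists, and partitioning into multiple chains is omitted:

```python
class Page:
    def __init__(self, number, capacity):
        self.number = number
        self.capacity = capacity
        self.objects = []

    def has_space(self):
        return len(self.objects) < self.capacity

class FixedLengthContainer:
    """Sketch of a fixed-length class container: all pages live in the
    page chain, and pages with free object frames are additionally
    linked in a free chain. New objects always go to the first page
    of the free chain."""

    def __init__(self, capacity_per_page=2):
        self.capacity = capacity_per_page
        self.page_chain = []
        self.free_chain = []

    def insert(self, obj):
        if not self.free_chain:                   # no free frame anywhere:
            page = Page(len(self.page_chain) + 1, self.capacity)
            self.page_chain.append(page)          # allocate a new page
            self.free_chain.append(page)
        page = self.free_chain[0]                 # first page of free chain
        offset = len(page.objects)
        page.objects.append(obj)
        if not page.has_space():                  # full -> leave free chain
            self.free_chain.pop(0)
        return (page.number, offset)              # the object's OID

c = FixedLengthContainer(capacity_per_page=2)
oids = [c.insert(f"obj {i}") for i in range(3)]
print(oids)                  # [(1, 0), (1, 1), (2, 0)]
```

Because every insert hits the head of the free chain, a single chain would serialize parallel inserts - which is the motivation for partitioning a container into several chains, as noted above.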
Data page structure for objects of fixed length
(Figure: a data page holding an array of object frames; free frames are linked via a 'pointer to next free object frame', the remaining frames are occupied object frames.)
- Each page contains objects instantiated from the same class, i.e. all objects on a page are of the same length. Therefore, they are stored in an array of object frames. With this approach there is no space fragmentation on a data page.
- An object frame consists of a 24-byte header with internal data, followed by the data body that is visible to the COM routines. The header stores, for instance, the lock state of the object, the pointer to the next free object frame, and the pointer to the before image of the object.
- The length of a data page is 8 KB. Each page has a header of 80 bytes and a trailer of 12 bytes. These parts of the page are not used for object frames but are filled with structural data such as the page number, the numbers of the previous and next pages in the page chain, a checksum to detect I/O errors, the number of occupied/free object frames on the page, and the offset of the first free frame.
- The length of a fixed-length object is limited to the usable page size of slightly less than 8 KB.
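With the sizes given above, the number of object frames per page follows from simple arithmetic. This sketch ignores any alignment or padding the kernel may apply, so real numbers could differ slightly:

```python
PAGE_SIZE = 8 * 1024          # 8 KB data page
PAGE_HEADER = 80              # bytes of structural data at the start
PAGE_TRAILER = 12             # bytes of structural data at the end
FRAME_HEADER = 24             # per-object header (lock state, pointers, ...)

def frames_per_page(body_size):
    """How many objects of the given body size fit on one data page."""
    usable = PAGE_SIZE - PAGE_HEADER - PAGE_TRAILER     # 8100 bytes
    return usable // (FRAME_HEADER + body_size)

print(frames_per_page(100))    # 65 objects with a 100-byte body per page
print(frames_per_page(8000))   # 1  -> close to the maximum fixed length
print(frames_per_page(8100))   # 0  -> exceeds "slightly less than 8 KB"
```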
Class container for objects with variable length
(Figure: a class container for objects of variable length, consisting of the primary container and its continuation containers (2nd, ..., i-th, ..., j-th continuation container), each again built from page chains.)
- Objects of variable length may be distributed over several pages and have a theoretical maximum length of 2 GB.
- To store such objects, they are divided into pieces of less than 8 KB. The pieces are stored in class containers for objects of variable length. Each of these class containers consists of one primary container and six continuation containers. The primary container can accommodate objects smaller than 126 bytes. The i-th continuation container contains object frames that can host objects with a length of about 126*(2^i) bytes, for i = 1, ..., 6.
- To insert an object, the liveCache chooses a free object frame from the primary container. If the object is smaller than 126 bytes, it is put into this free frame; otherwise, the object is put into a frame of the continuation container with the smallest object frames that can still accommodate the object. The OID of the frame where the object is actually stored is put into the chosen frame in the primary container.
- The OID used by the application to identify an object is always the OID from the primary container. This guarantees that the object can always be accessed via the same OID, even if its length changed and it was moved to another continuation container.
- The construction of the page chains and the pages of the continuation containers is similar to that of the fixed-length class containers, except that the object frames in the continuation containers are only 8 bytes long.
- No index can be defined for objects of variable length.
- Accesses to objects of variable length longer than 126 bytes are more expensive than accesses to ordinary objects, since each access to such an object requires more than one page access.
- Primary containers as well as continuation containers can be partitioned, too.
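The choice of the continuation container for a given object length follows directly from the ~126*(2^i) size classes. This sketch uses those round numbers for illustration; the exact frame sizes in the kernel may differ slightly:

```python
BASE = 126          # primary container frames hold objects < 126 bytes
N_CONTINUATION = 6  # continuation containers i = 1..6

def choose_container(length):
    """Return 0 for the primary container, or the index i of the
    smallest continuation container whose frames (~126 * 2**i bytes)
    can accommodate an object of the given length."""
    if length < BASE:
        return 0
    for i in range(1, N_CONTINUATION + 1):
        if length <= BASE * 2 ** i:
            return i
    # Larger objects are split into pieces of less than 8 KB.
    raise ValueError("object must be split over several frames")

print(choose_container(100))    # 0 -> fits into the primary container
print(choose_container(300))    # 2 -> frames of ~504 bytes
print(choose_container(8000))   # 6 -> frames of ~8064 bytes
```

Note that whichever container the body lands in, the application-visible OID stays the one from the primary container, as described above.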
Analysis of the class containers with the LC10
- LC10 offers detailed data about all class containers stored in the liveCache. The class container monitor can be reached via 'Problem Analysis->Performance->Class container'.
- The data in the class container monitor is:
  - Class ID: unique internal number of each class container. The ID is assigned in the order of the creation dates of the containers.
  - Class name: name of the class whose instances are stored in the container.
  - ContainerNo: external number of a class container. This number is used by the application to identify a class container.
  - Container size: number of data pages occupied by the container.
  - Free container pages: number of container pages that contain free object frames.
  - Empty container pages: number of container pages that contain no occupied object frame.
  - Container use: percentage of the usable space on the data pages that is occupied by object frames.
  - Schema: name of the schema the class container is assigned to. Each container must be assigned to a schema. A schema can be considered a namespace that can be dropped together with all its class containers at once.
  - Class GUID: external unique identifier of the class.
COM routines
(Screenshot: view of the registered COM routines)
- Objects stored in the class containers can be accessed and manipulated only via COM routines, which are methods of COM objects.
- The selection 'Current Status->Configuration->Database Procedures' displays a list of all COM objects and their methods that are currently registered in the database. For each COM routine, a detailed parameter description is available when the triangle to the left of the routine name is pressed.
- The COM routines can be executed through stored procedure calls. For instance, the COM routine CREATE_SCHEMA from the example above can be executed by the SQL command "call CREATE_SCHEMA ('MyFirstSchema')".
- The registration of the COM routines is done automatically when the liveCache is started via LC10.
Advanced Administration
- At the conclusion of this unit you will be able to save the log, perform an incremental backup, add a data volume, and configure the liveCache to save the log automatically.
Log full situation (1)
(Screenshot: the liveCache icon in the DBMGUI)
- When performing the last exercise, the liveCache ran into a log-full situation, which caused a standstill of the liveCache. All users trying to write an entry to the log were suspended. However, users can still connect to the database, and as long as they only read, they can continue to work with the database.
- The filling level of the data and log volumes can be observed via transaction LC10 or the DBMGUI. Within LC10, the selection 'Current Status->Memory Areas->Devspaces' displays a detailed list of the occupation of the data and log devices. However, it is more convenient to watch the bars at the top of the DBMGUI. By double-clicking the liveCache icon, you can also get detailed information about the data and log devices in the central screen. If the log filling level reaches critical values, you will additionally find warning messages in the knldiag file.
Log full situation (2)
z You can verify that no database task, in particular no user task, is active in the log-full
situation by choosing ‘Check->Server’. By clicking the selection ‘TASKS’ you get an
overview of what each database task is currently doing.
z When the log device is full you find the archive log writer task in the state ‘log-full’.
z User tasks that have tried to write entries into the archive log are in the state
‘LogIOwait’.
z Tasks that serve other users are not suspended and are in the state ‘Command wait’, i.e.
these users can still use the database for read accesses.
z Notice that a user task can be suspended even before the log filling level reaches 100%.
This is because a small amount of the log is reserved and cannot be used by user tasks.
This reserved part is required to guarantee that the liveCache can be shut down even in a
log-full situation.
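The suspension behavior described above can be illustrated with a small sketch. The page counts and the size of the reserved part are invented for illustration; the real reserve is an internal liveCache detail.

```python
# Illustrative sketch (not actual liveCache code): writers are suspended
# before the log is 100% full because a reserved portion is kept back
# so the kernel can still shut down cleanly in a log-full situation.

LOG_PAGES = 1000       # hypothetical total log size in pages
RESERVED_PAGES = 50    # hypothetical reserve, not usable by user tasks

def writer_state(used_pages):
    """Return the state a writing user task would be in."""
    if used_pages >= LOG_PAGES - RESERVED_PAGES:
        return "LogIOwait"   # suspended: only the reserve is left
    return "Running"
```

With these example numbers a writer is already suspended at 95% filling, before the log is physically full.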
Solution to log full situation
z At first glance one could think that a log-full situation could be overcome simply by adding
another log volume. However, the liveCache/SAP DB writes the log cyclically onto the
volumes as if they were a single device. This means that even if a new log volume is
added, log writing has to continue after the last written entry. Therefore, a log
volume cannot be used immediately after it was added; the log has to be backed up
first (SAVE LOG – interactive log backup).
z Note: The prerequisite for a log backup is a data backup.
Interactive log backup (1)
[Diagram: data and log volumes; the log backup gets the label LOG_00001.]
z An interactive log backup (SAVE LOG) backs up all occupied log segments from the log
volumes that have not been saved before.
z Only version files are supported as media.
z We recommend backing up the log into version files. One version file is created for
each log segment. The version files get a number as extension (e.g. L_BackUpFile.001,
L_BackUpFile.002, ...).
z The label versions are independent of the labels generated by the complete data backup
(SAVE DATA) and the incremental data backup (SAVE PAGES).
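The numbering of the version files can be sketched as follows; the medium name is taken from the example above, the helper function itself is hypothetical.

```python
# Sketch of the version-file naming used for log backups: each saved
# log segment becomes one file, numbered with a three-digit extension
# in ascending order.

def backup_file_names(medium, segments):
    """Generate one file name per saved log segment."""
    return [f"{medium}.{n:03d}" for n in range(1, segments + 1)]

names = backup_file_names("L_BackUpFile", 3)
# names == ['L_BackUpFile.001', 'L_BackUpFile.002', 'L_BackUpFile.003']
```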
Interactive log backup (2)
z Choosing ‘Backup->Log’ in the DBMGUI activates the central window, which allows you to
back up all log segments (interactive log backup – SAVE LOG). After activating ‘Backup-
>Log’ the central window displays a list of all log backup media defined so far that can be
used to save the current log. If this window is empty or all defined media are already in use,
you must first define a new log backup medium.
Interactive log backup (3)
z For the definition of the log backup medium you have to enter a name and a location for the
medium. Pressing the green tick confirms the input. By following the footprint
icon you can now continue the log backup. No further input is required. At the end of the
backup you get a report about the save.
z You can also define a log backup medium as well as a data backup medium by choosing
‘Configuration->Backup Media’.
z The log is logically divided into a number of log segments. The size of these segments is a
configuration parameter of the liveCache. After the first of these segments is saved, all
tasks that were suspended due to the log-full situation are immediately resumed. That
means suspended tasks continue working during the backup of the log area, provided there
is more than one log segment.
Autosave log mode
z To prevent the database from further standstills due to a full log device you can activate the
autosave log mode (AutoLog mode). When the AutoLog mode is activated, the log is
automatically written to files whenever a log segment is full. Each segment is saved in a
new backup file. The backup files are named after the corresponding medium plus a
three-digit number as suffix. The numbers are assigned in ascending order according to the
order of the saves.
z You can switch on the AutoLog mode by selecting ‘Backup->AutoLog on/off’. There you
can select a medium which stores the automatically written log files. Alternatively, you can
define a new medium by pressing the ‘Tape’ icon. After you have confirmed your medium
selection with the AutoLog icon, the AutoLog mode is activated.
z By pressing the tape icon on the lower taskbar of the central window you can also create a
new backup medium.
z You can easily find the current status of the AutoLog mode by checking the column
AutoLog in the upper right window of the DBMGUI.
Add data volume (1)
z After your last exercise the database is nearly full. Therefore, another data volume should
be added to prevent the liveCache from a standstill due to a database full situation.
z In the LC10 you can add a data volume by selecting ‘Administration->Configuration-
>Devspaces’. After pressing the ‘Add Devspace’ button in the upper left corner a new
dialog window appears where you have to specify the size and the location of the new
volume.
z The new volume is immediately available after you have saved and confirmed the input
values.
z Data and log volumes can also be added using the DBMGUI (‘Configuration->Data
Volumes’).
Add data volume (2)
Incremental data backup (1)
[Diagram: data volumes with the backup labels DAT_00001, PAG_00002, PAG_00003, DAT_00004, PAG_00005 and PAG_00006; complete data backups are labeled DAT, incremental data backups PAG.]
z In addition to a complete data backup, data pages can also be backed up with an
incremental data backup.
z In contrast to a complete data backup, an incremental data backup stores only those pages
which have changed since the last complete data backup.
z Notice that the incremental backup differs from that of previous liveCache releases (<7.4),
where the incremental backup contained all pages which changed since the last
incremental or complete data backup.
z The label version is increased with each complete and incremental data backup.
z To decide whether to make an incremental rather than a complete backup, check
the number of pages which have been changed since the last complete backup. You can
find this number on the tab ‘Data area’ in the selection ‘Current Status->Memory
Areas->Data Area’. An incremental backup is useful if the number of changed pages is
small compared to the number of used pages.
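The rule of thumb for choosing between an incremental and a complete backup can be sketched like this; the 20% threshold is an assumed example value, not a liveCache parameter.

```python
# Sketch of the decision rule above: prefer an incremental backup when
# only a small fraction of the used pages changed since the last
# complete backup. The 20% cut-off is an illustrative assumption.

def suggest_backup(changed_pages, used_pages, threshold=0.2):
    if used_pages == 0:
        return "complete"
    ratio = changed_pages / used_pages
    return "incremental" if ratio < threshold else "complete"
```

For example, with 5,000 changed out of 100,000 used pages (5%) an incremental backup is suggested; with 60,000 changed pages a complete backup is cheaper in the long run.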
Incremental data backup (2)
z An incremental data backup can be performed via the DBMGUI by selecting ‘Backup-
>Incremental’. As for the complete data backup, you have to choose a medium for the
backup. Via the icons on the lower task bar of the central window you can also create and
delete media or change the properties of existing media. The ‘Next Step’ button guides you
through the further backup process. At its end a backup report is shown.
z In this unit the concept and the consequences of consistent views are explained.
Consistent views
All read accesses provide the image of an object that was committed
at a certain time. This point in time is the same for all accesses
within one transaction.
Example of reading within implicit consistent views:
[Timeline: T1 sets s=3 and commits; T3 later sets s=7 and commits. T4 performs its first read of s before T3 commits and therefore reads s=3 for all its reads; T5 starts after T3's commit and reads s=7.]
z liveCache uses consistent views to isolate read accesses to objects from concurrent
changes to the data by other applications.
z Consistent views see all liveCache data as it was when the consistent view was created.
Changes by other simultaneously running applications are invisible to the transaction.
z Databases like Oracle support similar concepts, but mostly only for single statements
(consistent read).
z Transactions are always performed as consistent views. The dedicated point in time which
decides about the appropriate before image to be read is the first access to a persistent
object; the view ends with COMMIT or ROLLBACK (implicit consistent view).
z liveCache also knows the concept of named consistent views, called versions. These
views do not end with commit or rollback but can span several transactions (see later)
and may be active for several hours. Such named consistent views are used by APO for
transactional simulations.
z Reading within consistent views makes it possible to provide only committed images
without waiting for the end of any other transaction.
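The visibility rule of a consistent view can be sketched as follows. This is a minimal model assuming each committed change is tagged with its commit time; the function and the timestamps are illustrative only.

```python
# Sketch of a consistent view: a reader sees the newest value that was
# committed before the view started, regardless of later commits.

def read_consistent(history, view_start):
    """history: list of (commit_time, value), ascending by commit_time.
    Returns the value visible to a view that started at view_start."""
    visible = [v for t, v in history if t <= view_start]
    return visible[-1] if visible else None

# Commits as in the slide: one transaction sets s=3 (commit at t=2),
# a later one sets s=7 (commit at t=5).
history_of_s = [(2, 3), (5, 7)]
```

A view started at t=4 reads s=3 for its whole lifetime; a view started at t=6 reads s=7.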
Why consistent views
History files
[Diagram: a COM routine updates an object (pObj->value = y; pObj->omsStore(*this);) and commits. The transaction list points to the history files of the open transactions, which record the update of OID 23.409; the class container x(k1) resides in the data cache.]
z Read consistency requires that all old images of objects which were updated by a
transaction T are kept not only until T has committed but until the last consistent view
which was opened before T committed is closed.
z The storage of before images is realized with the help of history files. When an object is
updated, the old value of the object (the before image) is copied to a history file which
exists for each transaction. Then the new object is copied to the data page, and a pointer
in the page points to the former object version in the history file.
z History files of open transactions are not only used for the consistent read; they can
also be used for the rollback of transactions.
z In case of a rollback, the old image is copied from the history file back to its original data
page and the history file is destroyed. If the transaction ends with a commit, its history
file survives the transaction end and is inserted into a history file list.
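The before-image mechanism can be sketched with a toy transaction class. This is an illustration of the principle, not liveCache code; a commit would simply keep the history list for later consistent readers instead of clearing it.

```python
# Sketch of the before-image mechanism: an update copies the old value
# into the transaction's history file; rollback restores it.

class Transaction:
    def __init__(self, store):
        self.store = store          # shared "data page" (a dict here)
        self.history = []           # this transaction's history file

    def update(self, oid, new_value):
        self.history.append((oid, self.store[oid]))  # save before image
        self.store[oid] = new_value

    def rollback(self):
        for oid, before in reversed(self.history):
            self.store[oid] = before                 # restore old images
        self.history.clear()

store = {"s": 3}
t = Transaction(store)
t.update("s", 7)        # store["s"] == 7, before image 3 in history
t.rollback()            # store["s"] == 3 again
```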
Consistent reading via history files
[Timeline: T2 sets s=15, T3 sets s=3 and T5 sets s=7, each committing; T6 sets s=8. T4 performs its first read after T2's commit and therefore reads s=15. The history files keep the linked versions s(15), s(3) and s(7), while s(8) is the current value in the class container.]
z Several changes of an object made by different transactions are recorded in the history
files. These different versions of an object are linked in the history files.
z Depending on the start times of the active consistent views (transactions or named
consistent views) it may be necessary to keep several versions of an object.
z The before image of an object can be deleted when no consistent view may need to
access it anymore.
Garbage Collection (1)
Problem
Objects are marked as deleted only but not removed
Solution
Garbage is collected by garbage collector tasks:
History files that cannot be accessed by consistent views anymore are
deleted
All deleted objects in the OMS pages that will not be accessed by
consistent views anymore are released
Free pages are released when all objects in the data or history page
are released
Garbage collectors are scheduled every 30 seconds and will start
working when data cache usage is higher than 80%
z Due to the consistent read, no transaction that removes an object can remove the object
directly, since a consistent view of another transaction might still access this object
or one of its before images. Therefore, objects are only marked as deleted when a
transaction deletes them.
z Objects marked as deleted are actually removed by special server tasks called garbage
collectors. Scanning the history pages, they remove objects when no consistent view can
access the objects anymore.
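The garbage collectors' basic decision can be sketched as follows, assuming each history file is tagged with the commit time of its transaction. Scheduling details (30-second interval, 80% cache threshold) are not modeled.

```python
# Sketch of the garbage-collector decision: a history file can be
# dropped once every open consistent view started after the writing
# transaction committed, because such views already see the new state.

def collectable(history_files, open_view_starts):
    """history_files: list of (name, commit_time).
    open_view_starts: start times of all open consistent views."""
    oldest_view = min(open_view_starts, default=float("inf"))
    return [name for name, committed in history_files
            if committed < oldest_view]

files = [("hist_T3", 5), ("hist_T5", 12)]
# With open views started at t=10 and t=20, only hist_T3 (committed
# at t=5) can be collected; hist_T5 may still be needed.
```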
Garbage Collection (2)
[Timeline: T3 deletes s and commits, T6 deletes t and u and commits, while T4 and T5 are still open. The garbage collection afterwards removes the objects that were only marked as deleted, stopping where an open consistent view may still access a before image.]
z The garbage collectors periodically scan the history file list for history files of
transactions which cannot be accessed anymore by open consistent views. When a
garbage collector finds such a file, it looks for all log entries which point to deleted
objects and finally removes these objects, i.e. afterwards the corresponding object
frames in the class container file are free and can be reused. After all delete entries in
the history file have been found and the corresponding objects have been removed, the
complete file is dropped.
z The garbage collector also checks whether the class containers contain too many
empty pages. If more than 20% of the pages of a file are empty, the GC removes all
empty pages. The GC finds the empty pages by following the chain of free pages which
belongs to each container.
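The 20% rule for empty pages can be expressed as a one-line check; the function name and the page counts are illustrative.

```python
# Sketch of the 20% rule: when more than 20% of a container's pages
# are empty, the garbage collector releases all empty pages (found via
# the container's free-page chain).

def pages_to_release(total_pages, empty_pages, threshold=0.20):
    if total_pages and empty_pages / total_pages > threshold:
        return empty_pages     # GC removes all empty pages
    return 0
```

For example, a container with 100 pages of which 25 are empty (25%) has all 25 empty pages released; with only 10 empty pages (10%) nothing is removed.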
Garbage Collection (3)
z As long as transactions are not committed or named consistent views are not dropped,
the before images of objects stored in the history files cannot be released, because they
may still be accessed by the consistent views. Remember that a consistent view wants to
see the liveCache as it was when the consistent view started, so before images in the
history files that are younger than the consistent view may be needed to reconstruct the
status of the liveCache at the start of the consistent view. As a result the history files may
grow.
z When a transaction or a named consistent view is active for a long time, this may become
a problem for liveCache performance and availability:
y When the data cache is too small to hold the history files, data is swapped to disk.
When the data is accessed again (by the application or the garbage collectors) it must
first be read into the data cache. This leads to physical I/O, which has to be avoided
for liveCache.
y When history files grow further, this may lead to a ‘database full’ situation. The result
is a standstill of the application.
z liveCache tries to optimize garbage collection, because scanning large history files is
CPU- and I/O-intensive.
z The total usage of the data cache as well as the occupation with history and data pages
can be monitored with transaction LC10 via ‘Current Status->Memory Areas->Data
Cache’.
Loss of consistent views
[Timeline: as before, T2, T3 and T5 set s to 15, 3 and 7 and commit, T6 sets s=8. When the database filling exceeds the threshold, before images such as s(15) are removed from the history files and a reader in T4 gets the error ‘object history not found’.]
z If the data cache filling exceeds the limit of 95%, consistent views may become
incomplete since old object images that belong to the view are removed. The access
to such a removed old image causes the error ‘too old OID’ or ‘object history not found’.
z When the data cache filling level is above 95%, before images that are not accessed by
any consistent view are removed. However, since the before images are linked in a
chain, the connection to older images that might be visible in a consistent view is lost.
z When the database filling reaches the limit of 90%, even before images that are visible
in consistent views are removed.
Memory Areas
z In this unit you will get to know the two main memory areas of the liveCache: the data
cache and the OMS heap.
Calling a liveCache method in ABAP
ABAP coding running on the APO application server:

  ...
  set connection: <liveCache>
  ...
  exec sql call OID_UPD_OBJ (:KeyNo);
  ...
  exec sql commit;
  ...

[Diagram: the calls are executed in the liveCache (liveCache basis), which holds the data cache.]
z A COM routine is called as a stored procedure in ABAP from the APO application server.
z Within a transaction (terminated by COMMIT or ROLLBACK), several COM routines can
be called. All these routines work within the same session context in the liveCache. An
important feature of a session context is that global data is copied into a private memory
area (OMS heap) and all subsequent operations work on these private copies.
Access to private data is much faster than accessing global data in the data cache,
leading to a considerable performance gain at the cost of memory consumption. The
changes on the private copies are transferred into the global memory after a COMMIT, and
the private memory is released (one exception are versions). The released memory is not
returned to the operating system but is only freed to be used again for new private caches.
Therefore, the OMS heap memory can never shrink.
Memory areas in the liveCache
z liveCache uses two main memory areas in the physical memory of the liveCache server:
the data cache and the OMS heap.
z Data cache
y The data cache is allocated in full size when the liveCache is started. The size is configured by
the liveCache parameter CACHE_SIZE.
y The data cache contains
• data pages with the persistent objects (OMS data pages)
• history pages with before images of changed or deleted objects (history pages)
• swapped named consistent views, keys for keyed objects and SQL pages. All pages which are
organized as B* trees are called SQL pages.
y All these pages may be swapped to the data volumes if the data cache is too small to hold all data.
z OMS heap
y The liveCache heap grows when additional heap memory is requested. The maximum size is
configured by the liveCache configuration parameter OMS_HEAP_LIMIT.
y The heap contains
• local copies of OMS objects (private cache for consistent views)
• local memory of a COM routine allocated by omsMalloc() and new()
y No swapping mechanism for heap memory is implemented, except for inactive named
consistent views.
Interaction of data cache and OMS heap (1)
[Diagram: the object is first searched in the instance map of the session's private cache (OMS heap); on a miss (‘object not found’) it must be fetched from the object pages in the data cache of the liveCache basis.]
z When an object is accessed via its OID, the object is searched in the private cache of the
session first. The OIDs of the private cache are stored in a hash table.
z When the object cannot be found in the private cache, the object is read from the global
data cache. The OID contains the physical page number of the page that contains the
object.
z If the page is not already in global data cache, it will be read from the data volumes.
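The two-level lookup described above can be sketched as follows. The OID layout (page number plus offset) follows the description above; the dictionaries standing in for the caches are of course a simplification.

```python
# Sketch of the two-level object lookup: the OID is first looked up in
# the session's private cache (a hash table); on a miss, the page
# number and offset encoded in the OID locate the object in the global
# data cache, and the object is copied into the private cache.

def dereference(oid, private_cache, data_cache):
    """oid: (page_no, offset). private_cache: dict keyed by oid.
    data_cache: dict mapping page_no -> list of object slots."""
    if oid in private_cache:                 # fast path: private copy
        return private_cache[oid]
    page_no, offset = oid
    obj = data_cache[page_no][offset]        # global cache via page/offset
    private_cache[oid] = obj                 # copy into private cache
    return obj

data_cache = {13: ["B0", "B1"], 25: ["A0"]}
private = {}
obj = dereference((13, 1), private, data_cache)   # fetched from page 13
```

The second access to the same OID is then served entirely from the private cache, which is exactly the fast path described in the notes.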
Interaction of data cache and OMS heap (2)
[Diagram: the OID 13.1 locates page 13 in the data cache; offset 1 identifies object B inside the page. B is copied into the private cache and the instance map is updated.]
z When the page that contains the searched object is located in the global data cache, the
page offset which is part of the OID is used to locate the object inside the page.
z The object is copied to private cache and the hash table of the private cache is updated.
Interaction of data cache and OMS heap (3)
[Diagram: further accesses to object B are served from the private cache; the copies in the data cache of the liveCache basis remain unchanged.]
z All further accesses to the object will be handled in the private cache.
z All changes on the object will be made on the local copy of the object.
z The global version of the object in data cache remains unchanged until the transaction
performs a commit. If the transaction ends with a rollback the private cache is released
without changing any global version of the object.
z The subtransactions are completely handled within the private cache.
z When the object is used by a version, the object will never be copied back to global
cache, but will be released when the version is dropped.
Monitoring interaction of data cache and OMS heap
Data cache and OMS heap configuration
Monitoring the OMS heap usage
z The selection ‘Current Status->Memory Areas->Heap usage’ yields information about the usage of
the OMS heap.
z ‘Available heap’ is the memory that was allocated for the heap from the operating system. It reflects
the maximum heap size that was needed by the COM routines since the start of the liveCache.
z ‘Total Heap usage’ is the currently used heap. When additional memory is needed, the liveCache uses
the already allocated heap until ‘Available’ is reached. Further memory requests result in
additional memory requests to the operating system, and the value of ‘Reserved’ grows.
(‘Available heap’ > ‘Total Heap usage’)
z It is important to monitor the maximum heap usage. When the ‘Available heap’ reaches
OMS_HEAP_LIMIT, errors in COM routines may occur due to insufficient memory. This
should be avoided.
z ‘OMS malloc usage’: memory currently in use that has been allocated via calls of the method
'omsMalloc' (‘Total Heap usage’ > ‘OMS malloc usage’)
z 'Temp. heap at memory shortage': size of the emergency chunk. If a db-procedure runs out of
memory, the emergency chunk is assigned to the corresponding session and the following memory
requests are fulfilled from the emergency chunk. This ensures that the db-procedure can
clean up correctly even if no more memory is available. After the db-procedure call the emergency
chunk is returned to the public pool.
z 'Temporary emergency reserve space': memory of the emergency chunk currently in use.
('Temp. heap at memory shortage' >= 'Temporary emergency reserve space')
z 'Max. emergency reserve space used': maximum usage of the emergency chunk.
('Temp. heap at memory shortage' >= 'Max. emergency reserve space used')
Monitoring the data cache usage
z The menu path ‘Current Status->Memory Areas->Data cache’ leads to a screen which
displays all information about the liveCache data cache like data cache size, used data
cache and the usage and hit ratios for the different types of liveCache data.
z In an optimally configured system
y the data cache usage should be below 100%
y the data cache hit rate should be 100%
y if data cache usage is higher than 80%, the number of OMS data pages should be
higher than the number of OMS history pages
z Use the refresh button to monitor the failed accesses to the data cache. Each failed
access results in a physical disk I/O and should be avoided.
z More detailed information about the cache accesses can be found selecting ‘Problem
Analysis->Performance->Monitor->Caches’.
z Compare the size of OMS data with OMS history. If data cache usage is higher than 80 %
and OMS history has nearly the same size as OMS data, use the ‘Problem Analysis-
>Performance->Monitor->OMS Versions’ screen to find out if named consistent views
(versions) are open for a long time. Maximum age should be four hours.
Versions and named consistent views
[Timeline: T3 creates a version, reads s=3 and t=1, sets t=3 and closes the version; outside the version T2 sets s=2 and commits, so T4 reads s=2 and t=1. After the version is re-opened, T5 reads s=3 and t=3 before the version is dropped.]
All transactions running in one version have the same consistent view. It was
started when the version was created. Such a consistent view is called a named
consistent view.
All updates, creations and deletions of objects performed within a version remain
in the private cache of the session.
→ Complete detachment of a user from the actions of other users.
Versions can be closed temporarily and re-opened. Closed versions are called
inactive.
Monitoring versions
z One reason for a large consumption of OMS heap and data cache could be a long-running
version which accumulates heap memory and which prevents the garbage collector from
releasing old object images.
z With the selection ‘Problem Analysis->Performance->Monitor->OMS versions’ you can
monitor the memory usage by versions.
z The column ‘Memory usage’ displays the current usage of OMS heap memory. The
columns ‘Time’ and ‘Age (hours)’ display the starting time of the version and the time
elapsed since the start. Note that there should never be any version older than 4 hours.
To avoid this situation, the report /SAPAPO/OM_REORG_DAILY must be scheduled at
least once a day.
z Versions can be closed and re-opened in another session. To gain heap memory, versions
can be rolled out into the global data cache, where they are stored on temporary pages. The
column ‘Rolled out’ displays whether the version cache was rolled out into the data cache. In
the column ‘Rolled out pages’ you find the number of temporary pages in the data cache
which are occupied by the rolled-out version cache.
z Long-running transactions can cause the same memory shortage as versions. To display
the starting time of all open transactions use ‘Problem Analysis->Performance-
>Transactions’.
Controlling the OMS heap consumption
After each COMMIT the liveCache checks whether the active version in
the current session consumes more than OMS_VERS_THRESHOLD KB
of the OMS heap, or whether more than OMS_HEAP_THRESHOLD % of
OMS_HEAP_LIMIT is in use.
If YES:
- Unchanged objects are removed from the cache of the current version
- The current version cache is rolled out into the data cache.
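The check can be sketched as follows. The parameter names are taken from the slide, but all numeric values are invented examples; the real defaults are release-dependent.

```python
# Sketch of the post-commit check described above. Parameter names
# follow the slide; the values below are illustrative assumptions.

OMS_VERS_THRESHOLD_KB = 2_000_000   # per-version limit (example value)
OMS_HEAP_THRESHOLD_PCT = 80         # % of OMS_HEAP_LIMIT (example value)
OMS_HEAP_LIMIT_KB = 8_000_000       # total heap limit (example value)

def must_roll_out(version_kb, heap_used_kb):
    """True when the version cache should be trimmed and rolled out."""
    over_version = version_kb > OMS_VERS_THRESHOLD_KB
    over_heap = heap_used_kb * 100 > OMS_HEAP_LIMIT_KB * OMS_HEAP_THRESHOLD_PCT
    return over_version or over_heap
```

With these example values a version of 3 GB triggers the roll-out on its own, and so does a total heap usage above 80% of the limit even for a small version.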
Task Structure
z At the conclusion of this unit you will be able to monitor the tasks running inside your
liveCache server.
Process, thread and task structure
[Diagram: the liveCache process consists of several threads, e.g. Clock, Timer and Dev 0–n, and of UKTs hosting tasks such as Dcom 0–n, DataWriter and ALogWriter.]
z The operating system sees the liveCache as one single OS process. The process is
divided into several OS threads (Windows and UNIX). liveCache calls these threads
UKTs (user kernel threads).
z Some threads contain several specialized liveCache tasks whose dispatching is under the
control of the liveCache.
z Other threads contain just one single task.
z The tasks that perform the application requests are called user tasks. User tasks are
contained in UKTs which contain exclusively user tasks.
z Each APO work process is connected to one or two user tasks.
z Starting with liveCache 7.2.5.4 (APO SP 13) the number of CPUs used by user tasks can
be limited by the parameter MAXCPU. MAXCPU defines the number of UKTs which
accommodate user tasks. Since the user tasks consume the majority of the CPU
time, MAXCPU approximately defines how many CPUs of the liveCache server
are occupied by the liveCache.
Coordinator: initialization / UKT coordination
Console: diagnosis
Timer: time monitoring
Dev0 thread: master for I/O on volumes; Dev<i> slave threads
Async0 thread: master for backup I/O; AsDev<i> threads
Task description
User
Executes commands from applications
and interactive components
Server Performs I/O during backups
TraceWriter
Flushes the kernel trace to the kernel
trace file
GarbageCollector
Removes outdated history files and
object data
Task distribution
z The task distribution of the liveCache can be viewed within the LC10 through the
selection ‘Current Status->Kernel threads->Thread Overview’.
z liveCache configuration: all garbage collector tasks always run in one thread.
z In the example above all user tasks run in one thread. Accordingly, the configuration
parameter MAXCPU is one.
Current task state
liveCache Console
liveCache: Console
z The ‘liveCache: Console’ window displays information about the liveCache status, much
of which is also shown by the selection ‘Current Status->Kernel threads’ in the
‘liveCache: Monitoring’ window (see previous slide). However, while the output of the
‘liveCache: Monitoring’ window is always based on SQL queries to the liveCache, the
‘liveCache: Console’ gets its results directly from the runtime
environment of the liveCache. That means that in situations where you can no longer
connect to the liveCache you can still use the liveCache console to investigate
the liveCache status.
z All data shown in the various selections of the console screen can also be obtained by
calling the command ‘x_cons <liveCache name> show all’ on a command line.
Cumulative task state
z A comprehensive description of all objects of the liveCache runtime environment (RTE)
is displayed when the ‘liveCache: Console’ screen is used. RTE objects are tasks,
disks, memory, semaphores (synchronization objects which here are called regions)
and waiting queues.
z In addition to the information about the current task states, which can also be displayed
as shown on the previous slides, the selection ‘Task activities’ displays cumulated
information about the task activities. In particular the dispatcher count is given, which
counts how often a task was dispatched by the task scheduler. As long as this number
is constant, the task is inactive.
Other important parameters are:
y command_cnt: counts the number of application commands executed by the task.
y exclusive_cnt: number of accesses to regions (synchronization objects)
y state_vwait: counts the cases where the task had to wait for objects locked by
another task
z Among the other information which can be displayed by the liveCache console, the
number of disk accesses, the accesses to critical regions (see slides in unit
‘Performance analysis’) and the PSE data are most important. Everything else is
intended to be used only by liveCache developers. Therefore the displayed values
may sometimes seem a little cryptic.
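The observation about the dispatcher count can be turned into a tiny sketch: compare two console-style snapshots and report the tasks whose count did not change. The task names and counts are invented.

```python
# Sketch: a task whose dispatcher count is identical in two snapshots
# has not been scheduled in between and can be considered inactive.

def inactive_tasks(snapshot_a, snapshot_b):
    """Snapshots map task id -> dispatcher count.
    Returns the ids of tasks whose count did not change."""
    return sorted(t for t in snapshot_a
                  if snapshot_b.get(t) == snapshot_a[t])

before = {"T60": 1500, "T61": 2300, "T62": 900}
after  = {"T60": 1500, "T61": 2410, "T62": 900}
idle = inactive_tasks(before, after)   # T60 and T62 were not dispatched
```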
Recovery
z At the conclusion of this unit you will be able to restore your liveCache.
Restart
[Diagram: timeline of transactions T1–T6 relative to the restart point; C marks a commit (T1, T4, T6), R a rollback (T3, T5).]
Recovery process
[Diagram: a recovery uses the complete data backup DAT_00004, the incremental backup PAG_00005 and the archive log backups LOG_00010 and LOG_00011.]
z Recovery always starts with a RESTORE DATA in the operation mode ADMIN. During
the restore, pages are written back to the volumes.
z RESTORE PAGES overwrites the pages in the volumes with the modified images.
z Log recovery is based on the last savepoint executed with SAVE DATA/PAGES. After
the last RESTORE DATA/PAGES the database immediately performs a restart if the log
entries belonging to the savepoint still persist in the archive log. The restart reapplies the
log entries.
z RESTORE LOG must be run if the savepoint belonging to the complete/incremental
backup was overwritten in the archive log.
z The database reads the log entries from the backup media until it finds the next entry
in the log.
Recovery (1) : Start recovery process
Switch into
OFFLINE/ADMIN/ONLINE mode
Recover database
z To perform a recovery it is necessary to bring the database into the ADMIN mode, which
can be done in the DBMGUI by pressing the yellow light in the traffic light symbol in the
upper left corner.
z To start the recovery you have to change to the selection ‘Recovery->Database’. In the
central window you can then choose which complete backup should be the basis for
the recovery of the database. You can take the last complete backup (uppermost radio
button) but also any other complete backup (middle radio button). With the ‘Next Step’
icon you continue the recovery process.
Recovery (2) : Choose backup to start with
z All previously made complete data backups are shown in this list. To continue the
recovery mark the backup which you want to use as the basis for the recovery and
press the button ‘Next Step‘.
Recovery (3) : Choose strategy
Recovery strategies
• The simplest recovery strategy is shown here. In the example above it is to restore the incremental backup after the complete backup. No further log backups are required since all needed log information is still on the log device.
• Instead of restoring the incremental backup you could restore the log backups. To do so, mark one of the log backups; all further needed backups will then be marked automatically.
Recovery (4) : Start physical recovery
Start recovery
Recovery (5) : Restart liveCache
Restart
• After the recovery from the backup media has finished, the DBMGUI informs you that the liveCache can be restarted. During the restart the log entries from the log volumes are redone.
• When the restart is finished, the liveCache is in ONLINE mode and all its data and functionality are available again.
Configuration
• This unit introduces the key parameters of the liveCache configuration and demonstrates how they can be changed.
Displaying configuration parameters
Display parameters and their history
Change configuration parameters
Store changes
Change parameters
Performance Analysis
• At the conclusion of this unit you will be able to use the LC10 and the DBMGUI to find out whether the performance of your liveCache is limited by a bottleneck. Moreover, you will be given ideas on how to improve the performance.
Monitoring APO and liveCache performance
• When analyzing an APO system for liveCache workload and bottlenecks, three different areas must be covered:
  - Estimating the liveCache share of the total APO response time and identifying the APO transactions that cause the high liveCache workload.
  - Monitoring the liveCache server and identifying bottlenecks.
  - Detailed analysis of specific APO transactions that are identified as performance-critical.
• These three areas are covered by different sets of SAP monitoring transactions:
  - Workload analysis transaction ST03N.
  - liveCache monitor transaction LC10.
  - A combination of the runtime analysis transaction SE30, the SQL trace transaction ST05, and the liveCache monitoring transaction LC10.
• This workshop focuses on monitoring the liveCache server. However, a complete performance analysis must always include all three parts shown above.
Reasons for poor liveCache performance
• There are several causes of poor liveCache performance. The most important are:
  - A high rate of I/O operations performed by the user tasks.
  - Serialization on liveCache synchronization objects. These objects are used to synchronize the parallel access to shared liveCache resources, such as the data cache.
  - Too many users running COM routines.
  - Algorithmic errors in the COM routines or in the liveCache itself.
How to increase performance
• The most important measure to improve the liveCache performance is to optimize the settings of the liveCache configuration parameters.
• If a shortage of main memory or CPU capacity is detected (see next slides), you should enlarge the main memory or increase the number of CPUs.
• Whenever the performance is poor for unclear reasons, you should contact the APO/liveCache support.
Prerequisite for performance analysis
Performance parameters: I/O
! Should be 100% !
• Although the liveCache is designed to keep all data in the data cache while it is in ONLINE mode, the liveCache can accommodate more data than the data cache can host. If this happens, liveCache performance can suffer heavily from the I/O operations needed to swap pages between the data cache and the data devices.
• To detect bottlenecks due to I/O operations, use the selection ‘Current Status->Memory->Areas->Data cache’. There you can find information about the data cache filling level as well as about the data cache accesses.
• For optimal liveCache performance (i.e. to avoid I/O operations when accessing data and history pages), the data cache usage should be below 100%.
• Whether the performance is significantly affected by I/O operations can be seen from the number of failed cache accesses. The average data cache hit rate should be above 99.9%; a lower rate hints at a too small data cache. A situation like the one shown above indicates rather poor performance.
• With the SQL command ‘monitor init’ you can reset the access counters to zero. This allows you to display the current hit rates.
• Note that after the start of the liveCache the data cache is empty, and it takes some time until the hit rate shows a stable value that is relevant for an analysis.
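• The hit-rate rule of thumb can be expressed as a small check. This is a sketch only; the counter names are made up, and only the 99.9% threshold comes from the text:

```python
def data_cache_hit_rate(successful_accesses, failed_accesses):
    """Average data cache hit rate in percent."""
    total = successful_accesses + failed_accesses
    if total == 0:
        return 100.0  # no accesses yet, e.g. right after 'monitor init'
    return 100.0 * successful_accesses / total

def cache_too_small(successful_accesses, failed_accesses):
    """A hit rate below 99.9% is a hint for a too small data cache."""
    return data_cache_hit_rate(successful_accesses, failed_accesses) < 99.9
```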
Possible reasons for poor data cache hit rates
• The main reason for a poor cache hit rate is a data cache that is configured too small. However, sometimes the hit rate is poor due to long-running versions or transactions. To keep the consistent view of these versions or transactions, the liveCache is forced to store a large number of history pages, which fill the cache and lead to data and history pages being rolled out to the data devices.
• To find out whether a bad hit rate is caused by versions or transactions, check the selections ‘Problem Analysis->Performance->Monitor->OMS versions’ and ‘Problem Analysis->Performance->Transactions’. There should be no version older than four hours.
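• The four-hour rule can be checked programmatically against a list of version creation times. This is a hypothetical sketch; in practice you would read the values from the OMS versions monitor:

```python
from datetime import datetime, timedelta

def stale_oms_versions(version_create_times, now, max_age_hours=4):
    """Return the creation times of OMS versions older than max_age_hours.

    There should be no version older than four hours; any entry returned
    here is a candidate explanation for a poor data cache hit rate.
    """
    limit = timedelta(hours=max_age_hours)
    return [t for t in version_create_times if now - t > limit]
```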
How to configure the data cache
• The above formula gives a suggestion for the configuration parameter CACHE_SIZE, which determines the size of the data cache. However, depending on your particular profile, CACHE_SIZE may have to deviate from the suggestion.
• If your cache hit rate is below 100% although CACHE_SIZE is set as shown above, the physical memory of your liveCache server should be enlarged.
• MAX_VIRTUAL_MEMORY describes the maximum memory that can be accessed by the liveCache. On NT this limit is displayed in the knldiag file; on UNIX use the command ‘ulimit -a’.
• On Windows NT you should use the Enterprise Edition to increase MAX_VIRTUAL_MEMORY from 2 GB to 3 GB.
How to configure the OMS heap
If (#OutOfMemoryExceptions > 0)
increase the OMS heap, if necessary at the cost of the data cache.
• The free memory available for the data cache and the OMS heap should be divided in the ratio 40/60, where the OMS heap gets the larger part of the memory.
• In contrast to the data cache, the OMS heap is not allocated at the start of the liveCache, and thus there is no need to define OMS_HEAP_LIMIT in the configuration file. By setting the OMS heap limit to 0 you allow the liveCache to allocate as much heap memory as it can get from the operating system. However, on Windows NT and AIX the liveCache could crash if the OS cannot allocate any more memory; therefore you should set OMS_HEAP_LIMIT to the value suggested above. If OMS_HEAP_LIMIT is not zero, the liveCache stops requesting heap memory from the OS once OMS_HEAP_LIMIT is reached; instead, all COM routines requesting further memory are aborted.
• The heap memory is of sufficient size if no OutOfMemory exceptions occur. They must be avoided since they abort the affected COM routine. The occurrence of OutOfMemory exceptions can be checked by executing the SQL command
select sum (OutOfMemoryExceptions) from Monitor_OMS
or by checking the column ‘OutOfMemory excpt.’ in the tab ‘Transaction counter’ of the selection ‘Problem Analysis->Performance->OMS monitor’ in the LC10.
• If you find that the number of OutOfMemory exceptions grows, you should increase the OMS heap (if necessary by making the data cache smaller).
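• The suggested 40/60 division of the free memory can be sketched as follows (sizes in MB; purely illustrative, the function is not a liveCache tool):

```python
def split_free_memory(free_memory_mb):
    """Divide free memory between data cache and OMS heap in the ratio 40/60.

    The OMS heap gets the larger part, as recommended above.
    Returns (cache_size_mb, oms_heap_limit_mb).
    """
    cache_size_mb = free_memory_mb * 40 // 100
    oms_heap_limit_mb = free_memory_mb - cache_size_mb
    return cache_size_mb, oms_heap_limit_mb
```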
Performance parameters: regions (1)
[Figure: data cache striped into regions (Data1 region, Data3 region); user tasks u1, u2, u4 access cache pages through the regions]
• When monitoring the liveCache task activities in ‘liveCache: Console->Active Tasks’ (or by executing dbmcli -d <liveCache_name> -u control,control show act), you should ideally find the user tasks in the state ‘Running’ or ‘DcomObjCalled’. If user tasks are instead often in the state ‘Vbegexcl’, your performance may be suffering from serialized access to internal liveCache locks. The liveCache calls these internal locks regions (they correspond to latches in Oracle). Regions are used to synchronize the parallel access to shared resources. For instance, searching for a page in the data cache is protected by regions; within each region, at most one task can search for a page at a time.
• If a task requests a region that is already occupied by another task, the requesting task is suspended for as long as it cannot enter the region. This situation is displayed by the status ‘Vbegexcl’ in the task monitor ‘liveCache: Console->Active Tasks’.
Performance parameters: regions (2)
Collision rate
• The number of collisions, i.e. situations where a task had to be suspended because it requested an occupied region, is displayed in the ‘liveCache: Console’ screen for each region.
• The collision rate of frequently used regions should not exceed 10%. Otherwise the liveCache performance is at risk.
• To reduce critical collision rates, the configuration parameter defining the number of regions used to stripe the corresponding resource can be increased. However, since a high collision rate can also be an indicator of algorithmic errors, this should be done only in collaboration with the liveCache support.
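• The 10% threshold can be checked as follows. This is a sketch; the counter names are assumptions, and the values would come from the ‘liveCache: Console’ screen:

```python
def collision_rate(collisions, accesses):
    """Collision rate of a region in percent (collisions per access)."""
    if accesses == 0:
        return 0.0
    return 100.0 * collisions / accesses

def region_at_risk(collisions, accesses, threshold=10.0):
    """Frequently used regions should stay at or below a 10% collision rate."""
    return collision_rate(collisions, accesses) > threshold
```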
Performance parameters: MAXCPU
Analysis of COM routine performance
• Even if the liveCache itself works fine, the COM routines can cause poor performance due to algorithmic errors. To analyze such problems the liveCache supplies an expert tool for investigating the performance of COM routines. It lists the runtime, memory consumption, and number of object accesses for each COM routine. All these data give hints as to which COM routine could be problematic. However, since the analysis is not simple, this monitor should be used only by the APO support.
• Tab explanation:
  - ‘Runtime’: total and average runtime of each COM routine.
  - ‘Object Accesses’: number of object accesses from the private cache and from the data cache for each routine (see also slide).
  - ‘Transaction counter’: number of exceptions thrown within the routine, and number of commits and rollbacks for subtransactions.
  - ‘Cost summary’: summary of the previous four tabs.
Tracing internal liveCache activities
liveCache tracing
Activate/deactivate tracing
Flush trace
• To analyze the internal activities of the liveCache, the liveCache can write a trace file. This file is very helpful when looking for the reasons for bad performance that may be due to algorithmic or programming errors within the liveCache. The file should be interpreted only by the liveCache support.
• The trace is not written automatically but must be activated using the DBMGUI. In the selection ‘Check->Tracing’ you can choose which operations should be traced. After activation the trace is written into a main memory structure to avoid slowing down the system with trace I/O operations. To actually write the trace to a file, it must be flushed. The resulting file is not yet readable but still an image of the memory structure; a readable file can be created in the tab ‘Protocol’.
Summary
Further Information
→ Public Web:
www.sap.com → Solutions → Supply Chain Management
www.sapdb.org
→ Service Marketplace:
http://service.sap.com → mySAP SCM Technology
Q&A
Feedback
http://www.sap.com/teched/bremen/ → Conference Activities
Copyright 2002 SAP AG. All Rights Reserved
No part of this publication may be reproduced or transmitted in any form or for any purpose without the express
permission of SAP AG. The information contained herein may be changed without prior notice.
Some software products marketed by SAP AG and its distributors contain proprietary software components of other
software vendors.
Microsoft®, WINDOWS®, NT®, EXCEL®, Word®, PowerPoint® and SQL Server® are registered trademarks of
Microsoft Corporation.
IBM®, DB2®, DB2 Universal Database, OS/2®, Parallel Sysplex®, MVS/ESA, AIX®, S/390®, AS/400®, OS/390®,
OS/400®, iSeries, pSeries, xSeries, zSeries, z/OS, AFP, Intelligent Miner, WebSphere®, Netfinity®, Tivoli®, Informix
and Informix® Dynamic Server™ are trademarks of IBM Corporation in the USA and/or other countries.
ORACLE® is a registered trademark of ORACLE Corporation.
UNIX®, X/Open®, OSF/1®, and Motif® are registered trademarks of the Open Group.
Citrix®, the Citrix logo, ICA®, Program Neighborhood®, MetaFrame®, WinFrame®, VideoFrame®, MultiWin® and
other Citrix product names referenced herein are trademarks of Citrix Systems, Inc.
HTML, DHTML, XML, XHTML are trademarks or registered trademarks of W3C®, World Wide Web Consortium,
Massachusetts Institute of Technology.
JAVA® is a registered trademark of Sun Microsystems, Inc.
JAVASCRIPT® is a registered trademark of Sun Microsystems, Inc., used under license for technology invented and
implemented by Netscape.
MarketSet and Enterprise Buyer are jointly owned trademarks of SAP Markets and Commerce One.
SAP, SAP Logo, R/2, R/3, mySAP, mySAP.com and other SAP products and services mentioned herein as well as
their respective logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries
all over the world. All other product and service names mentioned are trademarks of their respective companies.