ABSTRACT
The growing complexity of Web service platforms and their dynamically varying
workloads make manually managing their performance a difficult and time-consuming
task. Autonomic computing systems that are self-configuring and self-managing
have emerged as a promising approach to dealing with this increasing complexity. In
this paper we propose a framework for an Autonomic Web Services Environment
(AWSE), constructed as a collaborative framework of Autonomic Managers that
endows each component in the AWSE with self-managing capabilities. We present
a unique approach to designing the Autonomic Managers for the AWSE framework
by combining reflective programming techniques with extended database
functionality. The validity of our approach is illustrated by a number of experiments
performed using prototype implementations of our autonomic managers. The
experimental results demonstrate the self-tuning capabilities of the autonomic managers
in achieving preset performance goals geared towards satisfying a predefined
Service Level Agreement for an Autonomic Web Services environment.
...including the separation of the management interface from the business
interface, and pushing core metric collection down to the Web services
infrastructure. They use intermediate Web services that act as event collectors
and managers. Our approach uses common tools provided by most database
management systems, as well as reflective programming techniques, to
incorporate self-awareness into the autonomic system.

...server in our experimental setup. The HTTP server,

[Figure: experimental setup — Application Server, SOAP Engine, Web Services Site (WS1, WS2), LAN, Legacy/Backend DBMS]

[Figure 7: A dynamic workload — two panels, x-axis Elapsed Time (seconds), 0-160]
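The reflective technique mentioned above can be illustrated with a short sketch. This is not the paper's implementation; the class and method names (`ManagedBufferPool`, the `set_*`/`get_*` convention) are hypothetical, and the sketch shows only how run-time introspection lets a manager discover a managed element's sensors and effectors without hard-coding them:

```python
import inspect

class ManagedBufferPool:
    """A managed element exposing one effector and one sensor (hypothetical)."""
    def __init__(self):
        self.size_pages = 1000   # tunable resource
        self.hit_rate = 0.0      # observed metric

    def set_size_pages(self, pages):
        self.size_pages = pages

    def get_hit_rate(self):
        return self.hit_rate

class AutonomicManager:
    """Uses reflection to discover the element's effectors (set_*) and sensors (get_*)."""
    def __init__(self, element):
        self.element = element
        members = inspect.getmembers(element, inspect.ismethod)
        self.effectors = {n: m for n, m in members if n.startswith("set_")}
        self.sensors = {n: m for n, m in members if n.startswith("get_")}

mgr = AutonomicManager(ManagedBufferPool())
print(sorted(mgr.effectors))  # ['set_size_pages']
print(sorted(mgr.sensors))    # ['get_hit_rate']
```

The discovered effector/sensor maps give the manager a uniform way to monitor and adjust any element that follows the naming convention.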
Using the workload shown in Fig. 7, we ran the experiment under the control of
the autonomic manager. The connection allocation schemes used by the autonomic
manager are shown in Fig. 9; the manager gives more importance to the OLAP
workload, as mentioned earlier in this section. We see that the number of
connections allocated to OLAP Web service calls increases gradually in response
to the increases in the OLAP workload. The resulting response times are shown in
Fig. 10. The surge of the OLTP response time between time 60 and 72 is caused by
the heavily biased allocation of 14 connections to the OLAP workload and only 1
connection to the OLTP workload, during the presence of a higher workload of 20
OLAP users and 15 OLTP users. This allocation is made due to the priority policy
used by the autonomic manager to mitigate the violation of the response time goal
of the OLAP service. We observe that the response time for the OLAP service
hovers close to the response time goal of 5000 ms up to time 120. Up to this time,
13% of the completed requests violated the goal, but the violation is less than 4%
of the goal. After time 120, the system demonstrates an extreme allocation scheme
in which all the connections are allocated to the OLAP service. However, the
response time of the OLAP service still remains far above the 5000 ms goal. This
is caused by the heavy OLAP workload of 30 users, as shown in Fig. 7. We can
therefore conclude that 15 connections are not sufficient to satisfy the
requirements of such a high (more than 85%) OLAP workload.

4.2 Buffer Pool

A single buffer pool is used to cache the database data for both the OLTP and the
OLAP services. The goal for the buffer pool is to maximize the hit rate, thus
minimizing physical I/O. Given the nature of the OLAP Web service workload,
which involves mainly sequential scans of the table, the hit rate will increase only
when the entire table fits in the buffer pool. In this experiment, we begin with a
pure OLAP workload and observe how the system adjusts the buffer pool size to
achieve the response time goal. Once this goal has been reached, we introduce a
second workload, the OLTP workload, which interferes with the OLAP
performance. We expect that the system will adjust the buffer pool size further to
accommodate both workloads. For the purpose of this experiment, we assume that
there is sufficient memory in the system to satisfy the required buffer pool
changes.

The OLAP service call scans a table that is close to 30 MB in size, thus requiring
approximately 7700 4K pages to accommodate the table in the buffer pool. We
start with a buffer pool size of 1000 4K pages. The system monitors the hit rate
every 20 seconds, and increments the buffer pool size by 1000 pages whenever the
hit rate is found to be below 80%. Fig. 11 shows the autonomic adjustment of the
buffer pool size, and Fig. 12 shows the corresponding changes in the buffer pool
hit rate at different times, corresponding to different buffer pool sizes. In Fig. 11,
we observe that the buffer pool reaches 8000 4K pages, enough to hold the entire
table in the buffer pool, at around time 140. Fig. 12 shows that around
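The tuning rule just described (sample the hit rate every 20 seconds; grow the pool by 1000 pages whenever the hit rate is below 80%) can be sketched as follows. The `read_hit_rate` and `resize_pool` hooks are hypothetical stand-ins for the DBMS's monitoring and reconfiguration interfaces, and the toy hit-rate model simply assumes the rate jumps once the ~7700-page table fits:

```python
POLL_INTERVAL_S = 20      # sampling period used in the experiment
INCREMENT_PAGES = 1000    # growth step (4K pages)
HIT_RATE_GOAL = 0.80      # threshold below which the pool is grown

def tune_buffer_pool(read_hit_rate, resize_pool, size_pages=1000, steps=10):
    """One resize decision per monitoring interval.

    read_hit_rate(pages) and resize_pool(pages) are hypothetical hooks into
    the DBMS (snapshot monitors and a buffer pool resize command in a real
    system). Returns the (size, hit rate) observed at each interval.
    """
    history = []
    for _ in range(steps):
        hit_rate = read_hit_rate(size_pages)
        if hit_rate < HIT_RATE_GOAL:
            size_pages += INCREMENT_PAGES
            resize_pool(size_pages)
        history.append((size_pages, hit_rate))
        # a live system would sleep POLL_INTERVAL_S seconds here
    return history

# Toy model: the hit rate stays low until the whole ~7700-page table fits.
hits = tune_buffer_pool(
    read_hit_rate=lambda pages: 1.0 if pages >= 7700 else 0.1,
    resize_pool=lambda pages: None,
    size_pages=1000, steps=10)
print(hits[-1][0])  # pool settles at 8000 pages, enough for the table
```

Under this model the pool grows in 1000-page steps and stops at 8000 pages, mirroring the behavior reported for Fig. 11.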
[Figure 9: Number of Connections (OLTP, OLAP) vs. Elapsed Time (seconds), 0-160]

[Figure 11: Buffer Pool Size (pages) vs. Elapsed Time (seconds), 0-280]
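The hit rate monitored in these experiments is conventionally derived from the DBMS's read counters as the fraction of page requests served without physical I/O; a minimal sketch, with generic counter names not tied to any particular DBMS:

```python
def buffer_pool_hit_rate(logical_reads, physical_reads):
    """Fraction of page requests satisfied from the buffer pool.

    logical_reads: total page requests; physical_reads: requests that
    required disk I/O. Generic names, not a specific DBMS's counters.
    """
    if logical_reads == 0:
        return 1.0  # no activity: treat as fully hit
    return 1.0 - physical_reads / logical_reads

print(buffer_pool_hit_rate(10000, 2000))  # 0.8
```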
the same time (140 s) the hit rate remains at 100% and no further adjustment is
necessary at that point. At time 180, we introduce the OLTP workload, which
competes with the existing OLAP workload for buffer pages. As a result, we
observe that the hit rate dramatically drops to 0, and the tuning process is invoked
again. The tuning process completes when the buffer pool reaches 10000 pages,
which is large enough to accommodate both tables.

Fig. 13 shows the resulting reductions in response times for the OLAP and OLTP
service calls. The spikes are caused by the overhead of reconfiguring the buffer
pool size.

[Figure 12: Buffer pool hit rate adjustments — hit rate vs. Elapsed Time (seconds), 0-280]

[Figure 13: Web service response time with autonomic buffer pool adjustment — Response Time (ms), OLAP and OLTP, vs. Elapsed Time (seconds), 0-280]

5 CONCLUSIONS

...system for system control as well as for managing and storing the vast array of
knowledge required for an autonomic system.

The experimental results presented in this paper show that goal-oriented,
reflection-based managers are effective in achieving autonomic resource
management. The manager automatically controls an element's resources to meet
its predefined individual performance goals and, thereby, the system's overall
performance goals. The self-managing components thus render a complete
autonomic environment through the organization hierarchy and communication of
the autonomic managers.

AWSE can extend the implementation of Web services more efficiently to areas
such as Enterprise Resource Management (ERM) [19], Enterprise Project
Management (EPM) [33], and network and systems management [34]. The general
framework for the implementation of autonomic managers can be adapted to
design more complex heterogeneous systems by identifying the system
components and leveraging them into autonomic elements.

Our current research focuses on implementing WSDM [19] standards in AWSE to
enhance standards-based communication among the autonomic managers. We are
also working on the development of a site manager to control the hierarchy of
managers and to perform goal distribution and assignment, which involves
breaking down a high-level system goal into appropriate component-level goals
for hierarchical management in AWSE. We also plan to extend the framework by
incorporating SLA negotiation, automating Web service discovery, deriving
appropriate models for different types of autonomic managers, and generating
metrics from the performance data to produce QoS statistics that facilitate
QoS-based service discovery.
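The goal-oriented control evaluated in Section 4 can be summarized with one more sketch: a priority policy that shifts scarce database connections toward the service whose response-time goal is being violated. All names are illustrative, not the AWSE implementation, and the one-connection floor for the donor is a simplification (in the experiment the allocation eventually became fully biased toward OLAP):

```python
TOTAL_CONNECTIONS = 15  # pool size used in the experiment

def allocate_connections(resp_ms, goal_ms, alloc, priority=("OLAP", "OLTP")):
    """Shift one connection per tuning interval toward the highest-priority
    service violating its response-time goal. Purely illustrative policy:
    the donor is the lowest-priority service that still has more than one
    connection.
    """
    for svc in priority:
        if resp_ms[svc] > goal_ms[svc]:
            for donor in reversed(priority):
                if donor != svc and alloc[donor] > 1:
                    alloc[donor] -= 1
                    alloc[svc] += 1
                    return alloc
    return alloc

alloc = {"OLAP": 8, "OLTP": 7}
# OLAP violates its 5000 ms goal while OLTP is comfortably within its goal:
alloc = allocate_connections({"OLAP": 5400, "OLTP": 300},
                             {"OLAP": 5000, "OLTP": 1000}, alloc)
print(alloc)  # {'OLAP': 9, 'OLTP': 6}
```

Repeating this decision every interval reproduces the gradual drift toward heavily OLAP-biased allocations (e.g. 14:1) observed in Fig. 9 when the OLAP workload grows.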