International Journal of Emerging Technologies and Engineering (IJETE)

Volume 1, Issue 9, October 2014, ISSN 2348-8050

MANAGING OVERLOADED MECHANISMS IN VM ENVIRONMENTS UNDER PID CONTROL SCHEMES
G. Karthika 1, C. Radhakrishnan 2, N. Premkumar 3
1 PG Student, Dept. of CSE, Kongunadu College of Engg & Technology, Trichy, Tamilnadu
2,3 Assistant Professor, Dept. of CSE, Kongunadu College of Engg & Technology, Trichy, Tamilnadu

Abstract
Cloud computing has emerged as a new and alternative method for providing computing services. As cloud size increases, the probability that all workloads simultaneously scale up to their maximum required capacity decreases. This observation allows cloud resources to be multiplexed among numerous workloads, significantly improving resource use. The ability to host virtualized workloads such that the available physical capacity is smaller than the sum of the maximal demands of the workloads is referred to as over-commit or over-subscription. Elasticity is one of the characteristics of cloud computing that improves flexibility for clients: it permits them to adjust the quantity of physical resources associated with their services over time on an on-demand basis. However, elasticity creates problems for cloud providers, as it may lead to poor resource utilization, especially in combination with other factors such as user overestimations and pre-defined VM sizes. This work uses the elasticity feature of VM cloud environments and estimation of resource allocation to design a self-adapting system, implemented as a distributed PID controller that changes the level of risk accepted in data centers.
Keywords: resource utilization, virtualization, elasticity, cloud computing

1. Introduction
In recent years, academic and scientific entities as well as some companies owned huge mainframes, clusters, or even supercomputers that had to be shared among their users to satisfy their computing requirements. Since these were expensive machines, resources were managed essentially with performance metrics in mind: throughput, response time, load balancing, etc.
Nowadays, more and more distributed applications are being provided in the Cloud as a composition of virtualized services. For example, in an IaaS cloud, each service is deployed as a set of virtualized components, i.e. Virtual Machines (VMs), that are activated according to their workflow pattern each time a request arrives from the end users. One of the main features provided by clouds is elasticity, which allows users to dynamically adjust resource allocations depending on their current needs. Thus, customers only have to pay for what they use. Although this characteristic is one of the main advantages from the client viewpoint, it may have an impact on the cloud infrastructure provider's side: capacity planning becomes challenging when the exact number of Virtual Machines (VMs) that each running service is going to need in the future is unknown. Consolidation and virtualization deliver more computing resources to organizations, but failure to tune applications to run on virtualized resources means that un-tuned applications waste processing cycles. To avoid wasting computing and storage resources, it is necessary to optimize the management of these novel cloud system architectures and virtualized servers. The use of virtualization techniques allows for the seamless allocation of each component of a distributed service inside the Cloud. It also simplifies horizontal elasticity, i.e. adding or removing duplicate VMs for each component during runtime to maintain a certain level of performance for the overall service when there are variations in the workload.

2. Related work
In [1], A. Ali-Eldin, J. Tordsson, and E. Elmroth focus on horizontal elasticity, i.e. the ability of a cloud infrastructure to add or remove virtual machines allocated to a service deployed in the cloud. They model a cloud service using queuing theory and, using that model, build two adaptive controllers that estimate the future workload on a service. They explore different possible scenarios for deploying a proactive elasticity controller coupled with a reactive elasticity controller in the cloud. Simulations with workload traces from the FIFA World Cup web servers show that a hybrid controller, which uses the reactive controller for scaling up coupled with the proactive controllers for scaling down, reduces SLA violations by a factor of 2 to 10 compared to a regression-based controller or a completely reactive controller.
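
To make the hybrid idea concrete, the following Python sketch combines a reactive rule for scaling up with a proactive, prediction-based rule for scaling down. It is an illustration of the concept only, not the controller from [1]; the function names, the prediction input, and the safety margin are assumptions.

    # Illustrative hybrid elasticity controller: reactive scale-up,
    # proactive (prediction-based) scale-down. Thresholds are assumed.

    def reactive_scale_up(current_load, capacity, vm_capacity):
        """Add VMs immediately when observed load exceeds capacity."""
        if current_load > capacity:
            deficit = current_load - capacity
            return -(-deficit // vm_capacity)  # ceiling division: VMs to add
        return 0

    def proactive_scale_down(predicted_load, capacity, vm_capacity, margin=1.2):
        """Remove VMs only when the *predicted* load leaves safe headroom."""
        surplus = capacity - predicted_load * margin
        if surplus >= vm_capacity:
            return int(surplus // vm_capacity)  # VMs that can be released
        return 0

    def hybrid_decision(current_load, predicted_load, num_vms, vm_capacity):
        capacity = num_vms * vm_capacity
        add = reactive_scale_up(current_load, capacity, vm_capacity)
        if add > 0:
            return add       # scale up reactively, right away
        return -proactive_scale_down(predicted_load, capacity, vm_capacity)

    # Example: 4 VMs of 100 req/s each, load spikes to 450 req/s
    print(hybrid_decision(450, 300, 4, 100))   # -> 1 (add one VM)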
In [2], A. Sulistio, K. H. Kim, and R. Buyya use the overbooking concept from revenue management to manage cancellations and no-shows of reservations in a Grid system. Without overbooking, resource managers face the prospect of lost income and lower system utilization. The model therefore aims to find an ideal limit that exceeds maximum capacity without incurring greater compensation costs. Moreover, they introduce novel strategies for selecting which overbooked reservation to deny, based on compensation cost and user class level, namely Lottery, Denied Cost First (DCF), and Lower Class DCF. The results show that by overbooking reservations, a resource gains an extra 6-9% in total net revenue.
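
The Denied Cost First idea can be illustrated with a short sketch: reservations with the highest compensation cost are served first, so the ones that are cheapest to compensate are denied when capacity runs out. This is a simplified reading of [2], not the authors' implementation; the data layout and numbers are invented for the example.

    # Denied Cost First (DCF), simplified: deny the reservations whose
    # compensation cost is lowest when demand exceeds capacity.

    def dcf_select_denials(reservations, capacity):
        """reservations: list of (reservation_id, compensation_cost, demand)."""
        accepted, denied, used = [], [], 0
        # Serve expensive-to-deny reservations first.
        for rid, cost, demand in sorted(reservations, key=lambda r: -r[1]):
            if used + demand <= capacity:
                accepted.append(rid)
                used += demand
            else:
                denied.append(rid)  # cheapest compensation is denied first
        return accepted, denied

    res = [("a", 5.0, 40), ("b", 1.0, 40), ("c", 3.0, 40)]
    print(dcf_select_denials(res, 80))  # -> (['a', 'c'], ['b'])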
In [3], L. Tomas and J. Tordsson observe that, despite the potential offered by the combination of multi-tenancy and virtualization, resource utilization is still low in today's data centers. They identify three key characteristics of cloud services and infrastructure-as-a-service management practices: burstiness in service workloads, fluctuation of resource usage over time in virtual machines, and being limited to pre-defined virtual machine sizes only. Based on these characteristics, they propose admission control and scheduling algorithms that incorporate resource overbooking to improve resource utilization. A combination of monitoring, modeling, and prediction techniques is used to avoid exceeding the total infrastructure capacity. A performance evaluation using a combination of workload traces demonstrates the potential for significant improvements in resource utilization while avoiding exceeding the total capacity.
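
A minimal sketch of overbooking-aware admission in the spirit of [3] follows: nominal requests are discounted by a predicted usage ratio, and a safety margin guards the total infrastructure capacity. The usage ratio, the margin, and the function shape are assumptions for illustration, not the authors' algorithm.

    # Overbooking-aware admission sketch: admit based on predicted real
    # usage rather than nominal requests, with a capacity safety margin.

    def admit(requested, running_predicted_usage, total_capacity,
              usage_ratio=0.6, safety_margin=0.9):
        """Admit if predicted usage (not nominal requests) fits capacity."""
        predicted_new = requested * usage_ratio        # modeled real usage
        predicted_total = running_predicted_usage + predicted_new
        return predicted_total <= total_capacity * safety_margin

    print(admit(requested=30, running_predicted_usage=70, total_capacity=100))
    # -> True: 70 + 18 = 88 <= 90, accepted despite 100% nominal allocation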
In [4], elasticity is identified as a key characteristic of the cloud paradigm, where the pay-per-use nature provides greater flexibility for cloud users. However, elasticity complicates long-term capacity planning for cloud providers, as the exact amount of resources required over time becomes uncertain. Admission control techniques are needed to handle the trade-off between resource utilization and potential overload. The authors define admission control algorithms that combine risk assessment methods with a fuzzy logic framework. Experimental results using a mixture of bursty and steady applications demonstrate that the algorithms can increase resource utilization while limiting overload problems to a few percent.
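
The following sketch illustrates how such a fuzzy risk check might look: the utilization that would result from accepting a request is mapped to a "high utilization" membership degree, which is then compared against a risk threshold. The membership shape and the threshold value are assumptions, not the framework defined in [4].

    # Fuzzy-style risk check, illustrative only: membership in "high
    # utilization" serves as the acceptance risk of a new request.

    def membership_high(u, lo=0.6, hi=0.95):
        """Degree (0..1) to which utilization u counts as 'high'."""
        if u <= lo:
            return 0.0
        if u >= hi:
            return 1.0
        return (u - lo) / (hi - lo)

    def fuzzy_risk(free, requested, total_capacity):
        """Risk of accepting: how 'high' utilization becomes afterwards."""
        used_after = total_capacity - free + requested
        return membership_high(used_after / total_capacity)

    def accept(free, requested, total_capacity, risk_threshold=0.5):
        return fuzzy_risk(free, requested, total_capacity) <= risk_threshold

    print(accept(free=30, requested=10, total_capacity=100))
    # -> False: risk is about 0.57, above the 0.5 threshold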

In [6], T.-F. Chen and J.-L. Baer note that memory latency and bandwidth are progressing at a much slower rate than processor performance. The paper describes the performance of three variations of a hardware function unit whose goal is to assist a data cache in prefetching data accesses so that memory latency is hidden as often as possible. The basic idea of the prefetching scheme is to keep track of data access patterns in a Reference Prediction Table (RPT) organized like an instruction cache. The three designs differ mostly in the timing of the prefetches. In the simplest scheme (basic), prefetches can be generated one iteration ahead of use. The lookahead variation takes advantage of a lookahead program counter that ideally stays one memory latency time ahead of the real program counter and is used as the control mechanism to generate the prefetches. Finally, the correlated scheme uses a more sophisticated design to detect patterns across loop levels. These designs are evaluated by simulating the ten SPEC benchmarks on a cycle-by-cycle basis. The results show that 1) the three hardware prefetching schemes yield significant reductions in the data access penalty when compared with regular caches, 2) the benefits are higher when the hardware assist augments small on-chip caches, and 3) the lookahead scheme is preferable in cost-performance terms.
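
As a software illustration of the RPT mechanism, the sketch below tracks the last address and stride per instruction and issues a prefetch of addr + stride once the stride is steady. It is a simplified teaching model of the basic scheme in [6], not the hardware state machine itself.

    # Software model of a Reference Prediction Table (RPT): per-PC
    # entries hold the last address and stride; a repeated stride moves
    # the entry towards "steady", which triggers a prefetch.

    class RPTEntry:
        def __init__(self, addr):
            self.last_addr = addr
            self.stride = 0
            self.state = "initial"   # initial -> transient -> steady

    class RPT:
        def __init__(self):
            self.table = {}          # indexed by instruction PC

        def access(self, pc, addr):
            """Record a data access; return an address to prefetch, or None."""
            e = self.table.get(pc)
            if e is None:
                self.table[pc] = RPTEntry(addr)
                return None
            stride = addr - e.last_addr
            if stride == e.stride:
                e.state = "steady" if e.state != "initial" else "transient"
            else:
                e.state, e.stride = "transient", stride
            e.last_addr = addr
            # Basic scheme: prefetch one iteration ahead once steady.
            return addr + e.stride if e.state == "steady" else None

    rpt = RPT()
    for a in (100, 108, 116, 124):          # unit-stride array walk (stride 8)
        print(rpt.access(pc=0x40, addr=a))  # None, None, 124, 132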

3. Collocation Virtual Machine


A strict cloud solution is not something every organization is interested in; many of the same benefits can be obtained using collocation services. Locating your data centers in two geographically separate areas allows you to virtually guarantee that a single disaster will not affect both data centers at the same time. This offers the high level of redundancy you require without relying on a third-party cloud solution. Working with VM technology, it is achievable to set up these systems so that if one location experiences an outage of any sort, all processing and data storage will automatically switch over to the secondary system. This allows for virtually continuous and seamless operation of the whole business. Setting up co-location services with VM technology also lets you place only your most critical systems in a server rack in another data center, so you can get the reliability and redundancy you require without the high costs. Co-location is an outstanding alternative for many companies looking for world-class IT solutions on a budget. VMs are utilized as independent computing servers regardless of whether they are co-located on a single physical host machine or separated on two different host machines. Each VM maintains information about its co-located VMs.

4. Automated optimization in VM
This level represents the risk-aware admission controller. If a service is rejected by the admission controller, the risk-aware controller analyzes the capacity, passes it to the fuzzy assessment for comparison against the risk threshold values, and collocates VMs based on the requested service capacity. The algorithm is used to find a suitable server for task scheduling and to calculate fairness values for the overbooking. It summarizes the capacity of all tasks running in the VM environment, evaluates the risk associated with the new incoming request by calling the fuzzy risk-assessment module, and calculates the acceptance risk and the data center risk thresholds. Resource allocation assigns the available resources in an economic way; it may be decided by computer programs, applied to a specific domain, that automatically and dynamically distribute resources to applicants. The user sends a request, either a resource request or a file request, to the admission controller, which analyzes the demands of the user request. The knowledge DB records data center behavior, that is, CPU, memory, and I/O utilization; it also tracks running and idle tasks, and calculates VM execution time and memory. The admission controller then decides whether the request is accepted. If the service is accepted, the request is sent to the overbooking scheduler to analyze horizontal elasticity in the virtual machines. If the service is rejected, the request is sent to the risk-assessment controller to analyze VM capacity. The decision is based on the following parameters:
Req - the CPU, memory, and I/O capacity required by the new incoming service.
UnReq - the difference between the total data center capacity and the capacity requested by all running services.
Free - the difference between the total data center capacity and the capacity used by all running services.
The fuzzy assessment analyzes the risk threshold values, and the PID controller computes error values describing the machine status: the present error (P), the accumulated error (I), and the prediction of future errors (D). The collocated-VM concept is implemented to merge two or more predefined VMs.
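
A minimal discrete PID controller along the lines described above can be sketched as follows; the gains, the sampling interval, and the target utilization are illustrative assumptions, not values prescribed by this design.

    # Discrete PID sketch: P is the present error, I the accumulated
    # error, D a prediction of future error from its rate of change.

    class PID:
        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint      # e.g. target data center utilization
            self.integral = 0.0           # accumulated error (I)
            self.prev_error = 0.0

        def update(self, measured, dt=1.0):
            error = self.setpoint - measured             # present error (P)
            self.integral += error * dt                  # accumulated error (I)
            derivative = (error - self.prev_error) / dt  # future-error trend (D)
            self.prev_error = error
            return (self.kp * error + self.ki * self.integral
                    + self.kd * derivative)

    # Steer the accepted risk level towards 80% utilization:
    pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=0.80)
    for u in (0.60, 0.70, 0.75):
        print(round(pid.update(u), 4))    # control signal shrinks as u -> 0.80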
Algorithm:
Input: A CPU utilization history
Output: A decision on whether to migrate a VM
1: if the CPU utilization history size > Tl then
2:   Convert the last CPU utilization value to a state
3:   Invoke the Multisize Sliding Window estimation to obtain the estimates of the transition probabilities
4:   Invoke the MVMOD-OPT algorithm
5:   return the decision made by MVMOD-OPT
6: end if
7: return false
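
The pseudocode above can be rendered in Python roughly as follows. The Multisize Sliding Window estimator and the MVMOD-OPT algorithm are not specified in this paper, so they are taken here as injected callables; their names and signatures, and the Tl value, are assumptions.

    # Hedged rendering of the migration-decision pseudocode above.

    T_L = 30  # assumed minimum history length before deciding (Tl)

    def to_state(utilization, num_states=10):
        """Discretize a CPU utilization value in [0, 1] into a state index."""
        return min(int(utilization * num_states), num_states - 1)

    def should_migrate(cpu_history, multisize_sliding_window, mvmod_opt):
        """cpu_history: list of utilization samples; the two callables
        stand in for the estimator and MVMOD-OPT (not defined here)."""
        if len(cpu_history) > T_L:
            state = to_state(cpu_history[-1])
            # Estimate transition probabilities from the history.
            probs = multisize_sliding_window(cpu_history, state)
            return mvmod_opt(probs)     # decision by MVMOD-OPT
        return False                    # not enough history yet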

Figure 1: PID Controller

Figure 2: System Architecture

5. Conclusion
Resource allocation models in cloud computing infrastructures tend to allow large fractions of resources to sit idle at any given time. Overbooking has proven to increase data center utilization. However, it may impact application performance, and unexpected events such as flash crowds or resource failures may aggravate the situation. Adapting to capacity changes has become an important problem, given the recent trends towards virtualization, shared data centers, and cloud computing. However, measuring a distributed system's capacity directly is infeasible, making this a challenging problem. We proposed a novel method to plan cloud capacity, allowing significant additional gains in over-commit ratio, and implemented efficient admission control mechanisms to provide migration of VMs between machines.

References
[1] A. Ali-Eldin, J. Tordsson, and E. Elmroth, "An adaptive hybrid elasticity controller for cloud infrastructures," in Proc. of Network Operations and Management Symposium (NOMS), 2012, pp. 204-212.
[2] A. Sulistio, K. H. Kim, and R. Buyya, "Managing cancellations and no-shows of reservations with overbooking to increase resource revenue," in Proc. of Intl. Symposium on Cluster Computing and the Grid (CCGrid), 2008, pp. 267-276.
[3] L. Tomas and J. Tordsson, "Improving Cloud Infrastructure Utilization through Overbooking," in Proc. of ACM Cloud and Autonomic Computing Conference (CAC), 2013.
[4] "Cloudy with a chance of load spikes: Admission control with fuzzy risk assessments," in Proc. of 6th IEEE/ACM Intl. Conference on Utility and Cloud Computing, 2013, pp. 155-162.
[5] K. J. Åström and R. M. Murray, Feedback Systems: An Introduction for Scientists and Engineers. Princeton University Press, 2008.
[6] T.-F. Chen and J.-L. Baer, "Effective hardware-based data prefetching for high-performance processors," IEEE Transactions on Computers, vol. 44, no. 5, pp. 609-623, 1995.
[7] J. Flich, S. Rodrigo, J. Duato, T. Sodring, A. Solheim, T. Skeie, and O. Lysne, "On the potential of NoC virtualization for multicore chips," in Proc. of Intl. Conference on Complex, Intelligent and Software Intensive Systems (CISIS), 2008, pp. 801-807.
[8] J. Duato, "A new theory of deadlock-free adaptive routing in wormhole networks," IEEE Transactions on Parallel and Distributed Systems, vol. 4, no. 12, pp. 1320-1331, 1993.
[9] B. B. Chen and P.-B. Primet, "Scheduling deadline-constrained bulk data transfers to minimize network congestion," in Proc. of IEEE 7th Intl. Symposium on Cluster Computing and the Grid (CCGrid), 2007, pp. 410-417.
[10] A. J. Bernstein and J. C. Sharp, "A policy-driven scheduler for a time-sharing system," Communications of the ACM, vol. 14, no. 2, pp. 74-78, 1971.