Dynamic Programming in Mobile Cloud Computing
Author: K. Manikandan
CSE Department
R.M.K Engineering College
Chennai, Tamil Nadu
kmanikandanslm@yahoo.com
[Fig. 1: Example]
V. Simulation Results
First, we verify the convergence of the ADP algorithm. The figure plots the rate of convergence of the parameter vector under three different stepsize recipes, with the corresponding parameter given in brackets. It shows that the parameter vector converges around the point (4400, 4800)^T as the iteration proceeds; in particular, we observe apparent convergence after around 500 iterations. This is a fair rate of convergence considering the complexity of the multidimensional exogenous stochastic information.

Second, we compare against alternative policies. We simulate two alternative policies for comparison: SALSA(V), as proposed in prior work, where V is a control knob, and the minimum-delay policy therein. For ease of comparison, we define a performance metric Ep, referred to as the average energy consumption per packet, which jointly evaluates system throughput and energy consumption.
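The effect of the stepsize recipe on convergence can be illustrated with a minimal stochastic-approximation sketch. This is not the paper's exact ADP recursion: the target vector, the noise level, and the stepsize constants below are assumptions chosen purely for illustration.

```python
import random

def run(stepsize, target=(4400.0, 4800.0), iters=500, noise=200.0, seed=0):
    """Stochastic approximation: theta <- theta + alpha_n * (sample - theta).

    `sample` is a noisy observation of the target vector, mimicking the
    noisy value estimates an ADP algorithm trains its parameter vector on.
    """
    rng = random.Random(seed)
    theta = [0.0, 0.0]
    for n in range(1, iters + 1):
        a = stepsize(n)
        for i in range(2):
            sample = target[i] + rng.gauss(0.0, noise)
            theta[i] += a * (sample - theta[i])
    return theta

# Three common stepsize recipes; the constants in brackets are illustrative.
recipes = {
    "constant(0.05)":  lambda n: 0.05,
    "harmonic(1/n)":   lambda n: 1.0 / n,
    "generalized(20)": lambda n: 20.0 / (20.0 + n),  # a/(a+n) with a = 20
}

for name, rule in recipes.items():
    print(name, [round(x) for x in run(rule)])
```

All three recipes drive the estimate toward the target within a few hundred iterations; they differ mainly in how aggressively they average out noise, which is why the choice of recipe shows up in the convergence-rate plot.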
VI. Implementation Issues
The ADP algorithm was tested through Monte Carlo simulation in an offline mode. In practice, to speed up the rate of convergence, mobile users can first use collected workload and link traces to train the parameter vector offline for a period of time, and then deploy it online for on-the-fly training. For mobile applications with stringent delay bounds, offline training is particularly important for performance guarantees; it may not be necessary, however, for delay-tolerant applications with longer delay thresholds.
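The offline-then-online workflow described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the trace format, the linear value estimate, the update rule, and the stepsizes are all assumptions.

```python
import random

def sgd_update(theta, x, target, alpha):
    """One stochastic-gradient step on a linear value estimate theta . x."""
    pred = sum(t * xi for t, xi in zip(theta, x))
    err = target - pred
    return [t + alpha * err * xi for t, xi in zip(theta, x)]

def train_offline(trace, theta, alpha=0.01, epochs=20):
    """Replay a collected workload/link trace repeatedly before deployment."""
    for _ in range(epochs):
        for x, target in trace:
            theta = sgd_update(theta, x, target, alpha)
    return theta

def train_online(theta, sample_stream, alpha=0.001):
    """Continue refining the parameter vector on live samples, on the fly."""
    for x, target in sample_stream:
        theta = sgd_update(theta, x, target, alpha)
    return theta

# Hypothetical trace: feature [workload, link_rate] -> observed cost,
# generated here from an assumed linear model plus noise.
rng = random.Random(1)
trace = [([w, r], 2.0 * w - 0.5 * r + rng.gauss(0.0, 0.1))
         for w, r in ((rng.random(), rng.random()) for _ in range(200))]

theta = train_offline(trace, [0.0, 0.0])   # warm start from the trace
theta = train_online(theta, trace[:50])    # then keep training online
```

The smaller online stepsize reflects the design choice above: the offline phase does the heavy lifting on recorded traces, so the online phase only needs gentle corrections, which matters for applications with stringent delay bounds.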
Our future work will focus on implementing the ADP
algorithm on a smartphone chip, and testing its
performance on real devices.