
2014 IEEE 13th International Conference on Trust, Security and Privacy in Computing and Communications

An Evaluation Method for Web Service with Large Numbers of Historical Records
Lianyong Qi1,2,*, Jiancheng Ni2, Xiaona Xia2, Chao Yan2, Hua Wang2, Wanli Huang2
1 State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210093, China
2 College of Computer Science, Qufu Normal University, Rizhao, 276826, China
Email: lianyongqi@gmail.com

Abstract - Due to unstable network environments or exaggerated advertising, the QoS (Quality of Service) data published by web service providers is not always trustworthy. Therefore, it is promising to evaluate service quality based on the historical QoS records generated from past invocations of web services. However, for web services with large numbers of historical QoS records, the evaluation efficiency is usually low and cannot meet the quick response requirements of subsequent service selection or service composition. Moreover, treating all historical QoS records equally in service evaluation leads to the Lagging Effect (i.e., the evaluation result cannot reflect the up-to-date quality change trend of a web service). In view of these challenges, a novel service evaluation method named Partial-HR (Partial Historical Records-based service evaluation method) is put forward in this paper, in which only partial important historical QoS records are employed to evaluate service quality. Finally, a set of experiments is designed and deployed to validate the feasibility of our proposal in terms of evaluation accuracy and efficiency.
Keywords - web service; quality evaluation; historical QoS record; large numbers

I. INTRODUCTION

With the popularity and maturity of services computing technology, a user can deploy his/her business applications economically and conveniently by selecting and integrating various web services. However, because so many web services on the Internet share the same or similar functionality [1-3], it becomes necessary to evaluate and rank all the functionally qualified candidate services based on their QoS (Quality of Service) information. Unfortunately, due to unstable network running environments or exaggerated propaganda, the QoS information advertised by service providers is not always trustworthy [4-5], which prevents a user from selecting a trusted web service.

Fortunately, the historical QoS records generated from past service invocations provide a promising way to evaluate the real quality of a web service [6-7]. By comparing the historical QoS data with the service's promised SLA (Service Level Agreement), the service quality can be calculated and evaluated in a more trusted manner. However, for web services that are invoked frequently, large numbers of historical QoS records are present and ready for quality evaluation, which may lead to low evaluation efficiency and fail to meet the quick response requirements of users in service selection or further service composition. Moreover, for a web service, its large numbers of historical QoS records vary in invoked time, and correspondingly, their contribution and significance for service evaluation also differ. Therefore, treating all the historical QoS records of an identical web service equally leads to the Lagging Effect [8] of service evaluation (i.e., the evaluation result cannot reflect the up-to-date quality change trend of a web service).

In view of these challenges, in this paper we put forward a novel service evaluation method, Partial-HR (Partial Historical Records-based service evaluation method). Through Partial-HR, we assign different weights to the large numbers of historical QoS records of a web service, so as to discriminate their respective importance; finally, only the partial important historical QoS records are recruited for service quality evaluation, which improves the evaluation efficiency.

The remainder of this paper is organized as follows. In Section 2, two scenarios are introduced to demonstrate the motivation of our paper. In Section 3, a novel evaluation method for web service, Partial-HR, is put forward. In Section 4, based on a real web service QoS dataset, WS-DREAM [9], a set of experiments is deployed to validate the feasibility of our proposal in terms of evaluation accuracy and efficiency. In Section 5, the Partial-HR method is evaluated, and in Section 6, we summarize the paper and point out future research directions.

II. FORMALIZATION AND MOTIVATION

In this section, we first formalize the service evaluation problem in service selection based on historical QoS records; afterwards, two scenarios are introduced to demonstrate the motivation of our paper.
A. Formalization
To ease the subsequent discussions, some basic concepts associated with service evaluation are formalized below.


1. TASK = {task1, ..., taskN}, where taski (1 ≤ i ≤ N) is a single task node that belongs to the service composition workflow.

2. WS = {ws1, ..., wsn}, where wsj (1 ≤ j ≤ n) is a candidate web service for a certain task node in set TASK.

3. HR = {hr1, ..., hrL}, where hrk (1 ≤ k ≤ L) is a historical QoS record of a web service in set WS, and the invoked time of hrk is denoted by tk. Here, we use taski.wsj.hrk to denote the k-th historical QoS record of taski's j-th candidate web service.
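For readers who prefer code to set notation, the following is a minimal, illustrative Python sketch of how these sets might be represented in practice; it is our own illustration rather than code from the paper, and the class and field names are assumptions.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class QoSRecord:                 # one element hr_k of HR
    invoked_time: float          # t_k, the invocation timestamp
    values: Dict[str, float]     # hr_k(q_r) for each QoS criterion q_r

@dataclass
class WebService:                # one element ws_j of WS
    name: str
    records: List[QoSRecord] = field(default_factory=list)      # hr_1, ..., hr_L

@dataclass
class TaskNode:                  # one element task_i of TASK
    name: str
    candidates: List[WebService] = field(default_factory=list)  # ws_1, ..., ws_n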

B. Motivation

With the above formalization, we can motivate our paper with the scenarios in Fig. 1 and Fig. 2. In Fig. 1, we assume that each task taski (1 ≤ i ≤ N) owns n candidate services and each service owns L historical QoS records. Then n*L historical QoS records in total should be considered in order to select a quality-optimal candidate service for each task taski (1 ≤ i ≤ N), and (n*L)^N historical QoS records in total should be considered to find a quality-optimal service composition solution. Therefore, when L is large, the time cost of subsequent service selection or service composition is usually large and cannot meet users' quick response requirements. In Fig. 2, the response times of service ws's ten historical QoS records are different; therefore, the evaluation result for response time would not reflect the up-to-date change trend (i.e., the Lagging Effect) if the ten historical QoS records were treated equally. In view of these two challenges, an evaluation method, Partial-HR, is put forward in the next section.

Figure 1. Three-layer structure of service evaluation based on historical QoS records (workflow layer: task1, ..., taskN; candidate service layer: ws1, ..., wsn; historical QoS record layer: hr1, ..., hrL)

Figure 2. Response time of web service ws's historical invocations

III. A SERVICE EVALUATION METHOD: PARTIAL-HR

In this section, a novel service quality evaluation method, Partial-HR, is introduced. The main idea of Partial-HR is as follows. First, we assign an appropriate weight to each historical QoS record of a candidate service, according to the invoked time of the record. Second, we determine the K newest historical QoS records such that the sum of their weights equals or exceeds the weight threshold set by the user. Third, we evaluate the service quality based on the derived K historical QoS records. Concretely, the three steps of our proposed Partial-HR method are listed in Fig. 3. Here, without loss of generality, we assume that web service ws's L historical QoS records hr1, ..., hrL are ranked in ascending order of invoked time beforehand, i.e., t1 < t2 < ... < tL holds.


Step 1: For a candidate web service ws, model the relationship between the weight Wk of ws's historical QoS record hrk (1 ≤ k ≤ L) and the invoked time tk of hrk, i.e., Wk = f(tk). Determine the parameters in Wk = f(tk) by the 20-80 principle.
Step 2: Determine the K newest historical QoS records hrL-K+1, ..., hrL, so that Σ Wk (L-K+1 ≤ k ≤ L) ≥ P%, where P% is the weight threshold given by the user.
Step 3: Evaluate the quality of service ws, based on the K historical QoS records hrL-K+1, ..., hrL obtained in Step 2.

Figure 3. Three steps of the service evaluation method Partial-HR



(1) Step 1: For a candidate web service ws, model the relationship between the weight Wk of ws's historical QoS record hrk (1 ≤ k ≤ L) and the invoked time tk of hrk, i.e., Wk = f(tk). Determine the parameters in Wk = f(tk) by the 20-80 principle.

According to the Volatility Effect [10-11] of quality feedback, the newer a piece of feedback is, the more important it is for quality evaluation. Therefore, we rank all the historical QoS records hrk (1 ≤ k ≤ L) of service ws by their invoked time tk, and assign a larger weight Wk to a record hrk whose invoked time tk is larger. Namely, there is a positive correlation between Wk and tk, i.e., Wk = f(tk) with first-order derivative f'(tk) > 0.

Moreover, the weighting function Wk = f(tk) should also satisfy the Marginal Utility principle (i.e., in the marginal part, the gain from an increase, or the loss from a decrease, in the consumption of a good or service diminishes) [12]. Consider the example in Fig. 4 for illustration. In the right marginal part of the curve Wk = f(tk) (i.e., tk in [tL - δ, tL]), the curve rises increasingly slowly as tk grows; similarly, in the left marginal part (i.e., tk in [t1, t1 + δ]), the curve drops increasingly slowly as tk declines. Therefore, in order to accommodate the Volatility Effect and the Marginal Utility simultaneously in the weighting problem, as Fig. 4 shows, we utilize the arctangent function in (1) to model the relationship between hrk's weight Wk and its invoked time tk. Here, c is the coefficient, an inflection point appears when tk = θ, and λ is the vertical-axis displacement.

Wk = f(tk) = c * arctan(tk - θ) + λ    (1)

Figure 4. Weight of ws's historical QoS records with respect to invoked time (the curve Wk = f(tk) rises from W1 at t1 to WL at tL, with flattened growth near both ends)

Next, we calculate the concrete values of the parameters c, θ and λ in (1). According to the meaning of weight, equation (2) is obtained; according to the symmetry of the arctangent function, equation (3) is achieved; and equation (4) is derived from the well-known 20-80 principle [13], i.e., the weight sum of the 20% newest historical QoS records occupies 80% of the total weight. Here, ⌈x⌉ denotes the smallest integer not less than x, e.g., ⌈1.2⌉ = 2. Through equations (2)~(4), we can obtain the concrete values of c, θ and λ; namely, we can calculate the weight Wk of historical QoS record hrk based on its invoked time tk.

Σ_{k=1}^{L} Wk = 1    (2)

θ = (t1 + tL) / 2    (3)

Σ_{k=L-⌈L/5⌉+1}^{L} Wk = 0.8    (4)
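As an illustration only (not code from the paper), the following Python sketch shows one way Step 1 could be carried out: with θ fixed by (3), constraints (2) and (4) become two linear equations in c and λ, which are solved in closed form before evaluating Wk = f(tk). The function name and interface are our own assumptions.

import math

def fit_weight_function(times):
    """Return weights W_1..W_L for invocation times t_1 <= ... <= t_L.

    Illustrative reading of Step 1: theta is fixed by (3), and the
    constraints (2) and (4) are solved as two linear equations in c and lambda.
    """
    L = len(times)
    theta = (times[0] + times[-1]) / 2.0              # equation (3)
    a = [math.atan(t - theta) for t in times]         # arctan(t_k - theta)
    n20 = math.ceil(L / 5)                            # the 20% newest records
    s_all, s_new = sum(a), sum(a[L - n20:])
    # (2): c*s_all + L*lam   = 1
    # (4): c*s_new + n20*lam = 0.8
    det = s_all * n20 - s_new * L                     # non-zero when the times are not all equal
    c = (1.0 * n20 - 0.8 * L) / det
    lam = (0.8 * s_all - 1.0 * s_new) / det
    return [c * ak + lam for ak in a]                 # W_k = f(t_k), equation (1)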

(2) Step 2: Determine the K newest historical QoS records hrL-K+1, ..., hrL, so that Σ Wk (L-K+1 ≤ k ≤ L) ≥ P%, where P% is the weight threshold given by the user.

After Step 1, we have derived the weight Wk of each historical QoS record hrk (1 ≤ k ≤ L) of service ws. The main idea of Partial-HR is to find the K newest historical QoS records hrL-K+1, ..., hrL whose weight sum just equals or exceeds the weight threshold P% requested by the user; in other words, the inequality in (5) holds. Solving this inequality yields the minimal value of K. The selected K newest historical QoS records hrL-K+1, ..., hrL can then approximately represent the original L records hr1, ..., hrL, which reduces the number of records that engage in the service evaluation and thereby improves the evaluation efficiency.

Σ_{k=L-K+1}^{L} Wk ≥ P%    (5)
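A minimal sketch of Step 2, assuming the weights from Step 1 are already available: it accumulates weights from the newest record backwards until the threshold P% is reached. The function name is illustrative.

def determine_k(weights, p=0.8):
    """Smallest K whose newest-K weight sum reaches the user's threshold p
    (P% = 80% gives p = 0.8), cf. inequality (5)."""
    total, k = 0.0, 0
    for w in reversed(weights):     # walk from the newest record backwards
        total += w
        k += 1
        if total >= p:
            break
    return k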

(3) Step 3: Evaluate the quality of service ws, based on the K historical QoS records hrL-K+1, ..., hrL obtained in Step 2.

After Step 2, we obtain the K newest historical QoS records hrL-K+1, ..., hrL of candidate service ws. Next, we evaluate the quality of ws based on these K records. Here, we assume that m QoS criteria qr (1 ≤ r ≤ m) are present in each historical QoS record hrk (1 ≤ k ≤ L) of ws; hrk(qr) denotes the QoS value of criterion qr in record hrk; ws(qr) denotes ws's predicted QoS value over criterion qr; the user's weight over QoS criterion qr is ωr, and Σ_{r=1}^{m} ωr = 1 holds. Then, according to the weight Wk of each obtained historical QoS record hrk (L-K+1 ≤ k ≤ L), we predict ws(qr) as in equation (6). Further, the total utility value Utility(ws) of candidate service ws is calculated by equation (7), where Norm() denotes the normalization operation. Utility(ws) can be regarded as the quality of candidate service ws.

ws(qr) = Σ_{k=L-K+1}^{L} Wk * hrk(qr)    (6)

Utility(ws) = Σ_{r=1}^{m} ωr * Norm(ws(qr))    (7)

With the above three steps of Partial-HR, we can evaluate the quality of a candidate service ws based on its partial important historical QoS records. More formally, the pseudocode of our proposal is specified below.

Algorithm: Partial-HR (WS, HR, Q, Ω)
Input:  WS = {ws1, ..., wsn}: a set of candidate services
        HR = {hr1, ..., hrL}: a set of historical QoS records of ws
        Q  = {q1, ..., qm}: a set of QoS criteria of ws
        Ω  = {ω1, ..., ωm}: the user's weights over the QoS criteria
Output: Utility(wsi): service quality of wsi
 1: for i = 1 to n do
 2:     Utility(wsi) ← 0
 3:     rank hr1, ..., hrL of wsi in ascending order of invoked time
 4:     determine f(tk) by (1)~(4)
 5:     Wk ← f(tk)
 6:     determine K by (5)
 7:     for r = 1 to m do
 8:         wsi(qr) ← 0
 9:         for k = L-K+1 to L do
10:             wsi(qr) ← wsi(qr) + Wk * hrk(qr)
11:         end for
12:         Utility(wsi) ← Utility(wsi) + ωr * Norm(wsi(qr))
13:     end for
14: end for
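For completeness, a sketch of Step 3 in the same illustrative style, assuming the records are sorted by invoked time and the weights and K come from the sketches above. Since the excerpt does not define Norm() precisely, a simple min-max scaling is assumed here, and handling of criteria where smaller values are better (such as response time) is omitted; the function name and parameter layout are our own.

def predict_and_score(records, weights, K, criteria, omega, bounds):
    """Equations (6) and (7) for one candidate service.

    records : list of {criterion: value} dicts, oldest to newest (hr_1..hr_L)
    weights : W_1..W_L from Step 1; K : number of newest records from Step 2
    omega   : user's weights over the criteria (should sum to 1)
    bounds  : {criterion: (lo, hi)} used by the assumed min-max Norm()
    """
    tail_records = records[-K:]
    tail_weights = weights[-K:]
    utility = 0.0
    for q, w_q in zip(criteria, omega):
        ws_q = sum(w * rec[q] for w, rec in zip(tail_weights, tail_records))  # equation (6)
        lo, hi = bounds[q]
        norm_q = 0.0 if hi <= lo else (ws_q - lo) / (hi - lo)                 # assumed form of Norm()
        utility += w_q * norm_q                                               # equation (7)
    return utility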


IV. EXPERIMENT

In this section, a set of experiments is designed and deployed to validate the feasibility of our proposed Partial-HR method, in terms of evaluation accuracy and efficiency.
A. The Dataset and Experiment Deployment
The employed experiment data comes from the web service QoS dataset WS-DREAM [9], published by Dr. Zibin Zheng in 2011. WS-DREAM consists of 4532 web services collected from public sources on the web; 142 distributed computers from PlanetLab are employed to evaluate the QoS performance of these web services over 64 time intervals.
The experiments were conducted on a Lenovo PC with a 3.20 GHz processor and 2.0 GB of RAM, running Windows XP (Service Pack 3) and Matlab 7.0. Each experiment was carried out 10 times and the average results were adopted.

Figure 5. Accuracy comparison of three methods

B. Experiment Results and Analyses

Concretely, four evaluation profiles were tested in our experiments. The specifications and values of the employed parameters are listed in Table I. As only two QoS criteria, response time and throughput, are present in WS-DREAM, m = 2 holds in the experiments.

Figure 6. The needed number of records K so that the weight sum of the K newest records exceeds 80%

TABLE I. SPECIFICATION OF EXPERIMENT PARAMETERS

Parameter | Values                                                                              | Specifications
n         | Profile 1 - Profile 3: n = 1; Profile 4: n = 100, ..., 500                          | Number of candidate web services
m         | m = 2                                                                               | Number of QoS criteria of a web service
L         | Profile 1 - Profile 2: L = 64; Profile 3: L = 1000, ..., 5000; Profile 4: L = 100   | Number of historical QoS records of a web service
K         | K = L/5                                                                             | Number of historical QoS records selected by Partial-HR

Profile 1: Accuracy comparison of Partial-HR with two other methods.

In this profile, we compare the accuracy of Partial-HR with two other methods: Average [14] and Last-K [15]. The Average method treats all the historical QoS records of an identical service equally, while the Last-K method only considers the K newest historical QoS records and assigns the same weight to each of them. To measure the accuracy of the three service evaluation methods, we randomly select a web service ws and its 64 historical QoS records (hr1, ..., hr64) from WS-DREAM. First, through each of the three methods, we predict the QoS data of hr64 based on hr1, ..., hr63; afterwards, we calculate the ratio between the predicted QoS data of hr64 and the real QoS data of hr64, and regard this ratio as the accuracy of the method.

The experiment result is shown in Fig. 5. In Fig. 5, the pink line represents the real QoS data of hr64, so its accuracy is equal to 1; for the three methods, closer to the pink line is better. As can be seen from Fig. 5, the QoS data predicted by the three methods all fluctuate around the real QoS data, and in general, Partial-HR outperforms the other two methods. This is because, on the one hand, Partial-HR only considers the partial important historical QoS records in service evaluation and thus avoids the excessively averaged evaluation result of the Average method; on the other hand, Partial-HR overcomes the shortcoming of the Last-K method, which only considers a few of the newest historical QoS records and neglects many other new records.
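A small illustrative sketch of how the two baselines and the Profile 1 accuracy ratio could be computed for one QoS criterion; the function and variable names are our own, and the k = 4 default follows the description of Last-K given later (see [8]).

def average_predict(values):
    """Average method [14]: every historical value gets equal weight."""
    return sum(values) / len(values)

def last_k_predict(values, k=4):
    """Last-K method [15]: equal weight on the k newest values (k = 4, see [8])."""
    tail = values[-k:]
    return sum(tail) / len(tail)

def accuracy(predicted, real):
    """Profile 1 metric: ratio between predicted and real QoS of hr_64; closer to 1.0 is better."""
    return predicted / real

# history = [hr_1(q), ..., hr_63(q)] for one criterion of the sampled service,
# real_64 = the real value of hr_64(q); both variable names are illustrative.
# acc_avg    = accuracy(average_predict(history), real_64)
# acc_last_k = accuracy(last_k_predict(history), real_64)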

Profile 2: Weight distribution of Partial-HR compared with four other methods.

In this profile, we compare the weight distribution of Partial-HR with four other methods: Average, Last-K, AP [16] and DWF [8]. In the AP method and the DWF method, the weights of historical QoS records are assigned in the manner of an arithmetical progression and a geometric progression, respectively. To observe the weight distribution, we calculate the weight of each historical QoS record and determine the K newest historical QoS records whose weight sum is at least 0.8 (i.e., the K records occupy 80% of the total weight). The experiment result is shown in Fig. 6. As can be seen from the figure, only a few (i.e., 4; 4/64 = 6.25%) of the newest historical QoS records are recruited for service evaluation in both the Last-K and DWF methods, which means that they neglect many new historical QoS records. In contrast, too many historical QoS records are considered in the AP method (36 records, 36/64 = 56.25%) and the Average method (52 records, 52/64 = 81.25%), which means that they do not emphasize the importance of the new historical QoS records very well. In our proposed Partial-HR method, 13 historical QoS records (13/64 = 20.31%) are employed for evaluation, which satisfies the well-known 20-80 principle.


Profile 3: Time cost of three methods w.r.t. L.

In this profile, we study the time cost of the Partial-HR, Average and Last-K methods with respect to the number of historical QoS records of a service, i.e., L. The experiment result is shown in Fig. 7. As Fig. 7 shows, the time cost of the Last-K method stays relatively stable as L grows, because this method only considers the K newest historical QoS records (K = 4 in the Last-K method, see [8]). The time costs of the Partial-HR and Average methods both increase approximately linearly with the growth of L, and Partial-HR outperforms Average, although their time costs are both small (milliseconds). Therefore, Partial-HR can be applied in service evaluation where quick response is required.
Figure 7. Time cost of three methods w.r.t. L

Profile 4: Time cost of three methods w.r.t. n.

In this profile, we study the time cost of the Partial-HR, Average and Last-K methods with respect to the number of candidate web services, i.e., n. The experiment result is shown in Fig. 8. As can be seen from Fig. 8, the time costs of the three methods all increase approximately linearly with the growth of n. Besides, the time cost of Partial-HR lies between those of the Average and Last-K methods, because the number of historical QoS records considered in Partial-HR is between those of the other two methods. As Fig. 8 shows, the time cost of Partial-HR is small (milliseconds); therefore, Partial-HR can satisfy the quick response requirements of users.

Figure 8. Time cost of three methods w.r.t. n

V. EVALUATION

In this section, we first analyze the time complexity of the Partial-HR method introduced in Section 3, to evaluate the feasibility of our proposal. Afterwards, a comparison with related work is presented, followed by a discussion of the limitations and our future work.

A. Complexity Analysis

Suppose web service ws owns L historical QoS records, and each record corresponds to m QoS criteria. Then, according to (1)~(4), the time complexity of Step 1 is O(L). In Step 2, the K (1 ≤ K ≤ L) important historical QoS records are determined, whose time complexity is O(L). In Step 3, according to (6), we predict ws's quality values over the m QoS criteria based on the derived K historical QoS records, whose time complexity is O(L*m); finally, according to (7), we calculate ws's comprehensive quality by synthesizing its m QoS criteria values, whose time complexity is O(m).

From the above analyses, we can conclude that the total time complexity of Partial-HR is O(L*m).


B. Related Work and Comparison Analysis

The advertised QoS data is not always as trustworthy as service providers promise. Therefore, it is of significance to evaluate service quality based on historical QoS records, and many researchers have studied the service evaluation problem from this perspective.

In [4, 17], the historical QoS records are recruited to evaluate the trust of a web service, by comparing the historical QoS data with the SLA promised by the service provider. In [18], a service is evaluated by considering its historical QoS records. However, all the historical QoS records have to be considered in the above literature, and the weights of different historical QoS records are ignored, as in the Average method. In [15], the Last-K method studies the weighting problem, and only the K newest historical QoS records are selected for evaluation; however, this method excludes the majority of historical QoS records and may generate an inaccurate service evaluation result. In [16], each historical QoS record is assigned a weight to discriminate the records' different contributions to service evaluation; however, the weights are assigned in an arithmetical progression manner, which may lead to the Lagging Effect in service evaluation. A dynamic weighting function, DWF, is proposed in [8] to cope with the Lagging Effect, where the weights are assigned in a geometric progression manner; however, this method does not consider the Marginal Utility and is therefore not consistent with human cognitive habits. In view of this, a novel service evaluation method, Partial-HR, is proposed in this paper, which considers both the Volatility Effect and the Marginal Utility. Through Partial-HR, we select only the partial important historical QoS records for service evaluation, so as to improve the evaluation efficiency. Through a set of experiments, we validate the feasibility of our proposal.





C. Further Discussion

In this paper, we model the weight of a historical QoS record as an arctangent function of the record's invoked time. However, this model is only a simple approximation of users' complex evaluation preferences, and further refinement is required in the future to incorporate more impact factors.
VI. CONCLUSIONS

Evaluating the quality of a web service based on its historical QoS records has become a promising approach. However, when large numbers of historical QoS records are present, considering all the records may lead to low efficiency in subsequent service selection or service composition. Moreover, the Lagging Effect may arise if all the historical QoS records are treated equally. In view of these challenges, a novel evaluation method named Partial-HR is proposed in this paper, which considers both the Volatility Effect and the Marginal Utility in service evaluation and thereby better matches human cognitive habits. Through a set of experiments, we validate the feasibility of Partial-HR in terms of evaluation accuracy and efficiency. In the future, we will refine the weight model to include more impact factors.


ACKNOWLEDGMENT
This paper is supported by the Open Project of State
Key Lab. for Novel Software Technology (No.
KFKT2012B31), Natural Science Foundation of
Shandong Province of China (No. ZR2012FQ011),
Outstanding Young Scientist Award Foundation of
Shandong Province (No. BS2013NJ003), DRF and UF
(BSQD20110123, XJ201227) of QFNU.


REFERENCES
[1] Wanchun Dou, Lianyong Qi, Xuyun Zhang and Jinjun Chen. An Evaluation Method of Outsourcing Services for Developing an Elastic Cloud Platform. The Journal of Supercomputing, 63(1): 1-23, 2013.
[2] Qiang He, Jun Han, Yun Yang, John Grundy and Hai Jin. QoS-Driven Service Selection for Multi-Tenant SaaS. IEEE International Conference on Cloud Computing, pp. 566-573, 2012.
[3] Lizhe Wang, Dan Chen, Yangyang Hu, Yan Ma and Jian Wang. Towards Enabling Cyber Infrastructure as a Service in Clouds. Computers & Electrical Engineering, 39(1): 3-14, 2013.
[4] Erbin Lim, Philippe Thiran, Zakaria Maamar and Jamal Bentahar. On the Analysis of Satisfaction for Web Services Selection. IEEE International Conference on Services Computing, pp. 122-129, 2012.
[5] Shangguang Wang, Qibo Sun and Fangchun Yang. Reputation Evaluation Approach in Web Service Selection. Journal of Software, 23(6): 1350-1367, 2012.
[6] Wanchun Dou, Xuyun Zhang, Jianxun Liu and Jinjun Chen. HireSome-II: Towards Privacy-Aware Cross-Cloud Service Composition for Big Data Applications. IEEE Transactions on Parallel and Distributed Systems, 2013. (http://doi.ieeecomputersociety.org/10.1109/TPDS.2013.246)
[7] Wenmin Lin, Wanchun Dou, Xiangfeng Luo and Jinjun Chen. A History Record-Based Service Optimization Method for QoS-Aware Service Composition. IEEE International Conference on Web Services, pp. 666-673, 2011.
[8] Yan Wu, ChunGang Yan, Zhijun Ding, GuanJun Liu, Pengwei Wang, Changjun Jiang and MengChu Zhou. A Novel Method for Calculating Service Reputation. IEEE Transactions on Automation Science and Engineering, 10(3): 634-642, 2013.
[9] Yilei Zhang, Zibin Zheng and Michael R. Lyu. WSPred: A Time-Aware Personalized QoS Prediction Framework for Web Services. IEEE Symposium on Software Reliability Engineering, pp. 210-219, 2011.
[10] Christopher S. Leberknight, Soumya Sen and Mung Chiang. On the Volatility of Online Ratings: An Empirical Study. Lecture Notes in Business Information Processing, 108: 77-86, 2012.
[11] Yun Wan. The Matthew Effect in Online Review Helpfulness. International Conference on Electronic Commerce, pp. 38-49, 2013.
[12] William Vickrey. Measuring Marginal Utility by Reactions to Risk. Econometrica, 13(4): 319-333, 1945.
[13] Allan Gibbard. A Pareto-Consistent Libertarian Claim. Journal of Economic Theory, 7(4): 388-410, 1974.
[14] Lianyong Qi, Rutao Yang, Wenmin Lin, Xuyun Zhang, Wanchun Dou and Jinjun Chen. A QoS-Aware Web Service Selection Method Based on Credibility Evaluation. International Conference on High Performance and Communications, pp. 471-476, 2010.
[15] http://www.bestbuy.com/ (accessed on 2014-4-22).
[16] Zhenpeng Liu, Aiguo An, Shuhua Liu and Junbao Li. A Prediction QoS Approach Reputation-Based in Web Services. International Conference on Wireless Communications, Networking and Mobile Computing, pp. 1-4, 2009.
[17] Hamdi Yahyaoui. A Trust-Based Game Theoretical Model for Web Services Collaboration. Knowledge-Based Systems, 27: 162-169, 2012.
[18] Qi Yu and Athman Bouguettaya. Computing Service Skyline from Uncertain QoWS. IEEE Transactions on Services Computing, 3(1): 16-29, 2010.
