
Remote Usability Evaluation System (e-RUE)

Rafidah Ramli, Azizah Jaafar


Faculty of Information Science and Technology
Universiti Kebangsaan Malaysia
43600 Bangi, Selangor
{rr41034, aj}@ftsm.ukm.my

Hasiah Mohamed
Universiti Teknologi MARA (UiTM)
Faculty of Computer and Mathematical Sciences,
Dungun Campus, Dungun 23000, Terengganu,
Malaysia
hasia980@tganu.uitm.edu.my
Abstract: Usability evaluation is part of the software
development cycle. Over the years, various methodologies have
been used to evaluate software effectiveness, efficiency and
satisfaction. The capabilities of Internet technologies now make it
possible to run usability evaluations remotely online, with
respondents, experts and researchers working from different
geographical locations. This paper discusses the development of an
online remote usability evaluation system (e-RUE) in a web-based
environment. The development of e-RUE offers a possible solution
for conducting usability evaluation remotely.
Keywords: usability; usability methodology; remote evaluation
I. INTRODUCTION
Usability has been defined by the International
Organization for Standardization (ISO) as the extent to
which a product can be used by specified users to achieve
specified goals with effectiveness, efficiency and
satisfaction in a specified context of use [1; 2]. A usable
system enables users to accomplish their tasks and goals quickly,
easily and effectively, and leaves them satisfied with the outcomes.
If a system is difficult to use, people will ignore or abandon it.
Usability evaluation has been applied in a wide range of settings,
including courseware applications [3], web applications [4],
e-learning, websites, digital libraries [5], open source software
(OSS) development, content management systems [6] and intranets [6].
However, the ISO definition does not explicitly specify operational
criteria for what to evaluate; there are no fixed criteria for
determining the effectiveness, efficiency and satisfaction of a
system. Table I shows definitions and criteria used to evaluate
effectiveness, efficiency and satisfaction.
TABLE I. CRITERIA USED TO EVALUATE EFFECTIVENESS, EFFICIENCY AND SATISFACTION

Effectiveness [5]: evaluates whether the system as a whole provides information and functionality effectively; measured by how many answers are correct.
Efficiency [5]: evaluates whether the system as a whole can be used to retrieve information efficiently; measured by (1) how much time it takes to complete tasks and (2) how many steps are required.
Satisfaction [5]: looks at ease of use, organization of information, clear labeling, visual appearance, contents and error correction; measured by Likert scales and questionnaires.

Effectiveness [7]: the accuracy and completeness with which users achieve certain goals; indicators include quality of solution and error rates.
Efficiency [7]: the relation between the accuracy and completeness with which users achieve certain goals and the resources expended in achieving them; indicators include task completion time and learning time.
Satisfaction [7]: the users' comfort with and positive attitudes towards the use of the system; can be measured by attitude rating scales.

Effectiveness [8]: measured by the extent to which the intended goals of use of the overall system are achieved.
Efficiency [8]: the resources, such as time, money and mental effort, that have to be expended to achieve the intended goals.
Satisfaction [8]: the extent to which the user finds the overall system acceptable.

The criteria for effectiveness, efficiency and satisfaction
differ depending on the context of use. An effective system
provides the information users need and helps them achieve their
goals. An efficient system reduces the time and money users must
spend to complete tasks easily, quickly and without frustration. A
usable system also satisfies users' needs. Table II shows
indicators used to evaluate effectiveness, efficiency and
satisfaction.
TABLE II. INDICATORS USED TO EVALUATE EFFECTIVENESS, EFFICIENCY AND SATISFACTION [8]

Effectiveness: percentage of users successfully completing the task; number of user errors; ratio of successful interactions to errors; number of tasks completed in a given time; average accuracy of completed tasks.

Efficiency: time to complete a task; number of references to help; effort (cognitive workload); tasks completed in a given time; average accuracy of completed tasks.

Satisfaction: rating scale for user satisfaction; proportion of users who say they would prefer using the system over a specified competitor; proportion of user statements during a test that are positive versus critical; frequency of complaints.
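
To make these indicators concrete, the sketch below shows how the three measures could be computed from per-session test data collected by a remote evaluation tool. It is a minimal illustration only; the session record and field names are assumptions made for this example and are not defined in the paper.

```typescript
// Hypothetical per-session record collected during a remote usability test.
interface Session {
  completedTask: boolean;      // did the participant finish the task?
  taskTimeSeconds: number;     // time spent on the task
  errorCount: number;          // number of user errors observed
  satisfactionRating: number;  // Likert rating, e.g. 1 (worst) to 5 (best)
}

// Effectiveness: percentage of users successfully completing the task.
function effectiveness(sessions: Session[]): number {
  const completed = sessions.filter(s => s.completedTask).length;
  return (completed / sessions.length) * 100;
}

// Efficiency: mean time to complete the task, over successful sessions only.
function efficiency(sessions: Session[]): number {
  const done = sessions.filter(s => s.completedTask);
  return done.reduce((sum, s) => sum + s.taskTimeSeconds, 0) / done.length;
}

// Satisfaction: mean Likert rating across all participants.
function satisfaction(sessions: Session[]): number {
  return sessions.reduce((sum, s) => sum + s.satisfactionRating, 0) / sessions.length;
}
```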


Usability evaluation is an important part of the software
development cycle which consists of iterative cycles of
designing, prototyping and evaluating. Usability evaluation
has been practiced since 1971, and some 30 methods have been
identified for conducting usability evaluation. Among these
methods, the usability lab is recognized as the traditional
approach to usability evaluation and is commonly used as a baseline
for comparison with new usability methods [9].
II. BACKGROUND
A. Usability Lab Evaluation
In a usability lab, evaluators and participants are in the same
location at the same time but are physically separated, typically
by one-way glass or a curtained partition. The test is conducted by
inviting a participant to use a system or application on one side
while the evaluators observe from the other side. The participant's
actions are recorded on video for later analysis, and the video
from the user's computer is piped through so that the evaluators
can observe the test, as shown in Fig. 1. In a lab environment,
evaluators and participants can communicate directly before, after
or during the usability evaluation. The lab is outfitted with
software and hardware for audio and video recording of user actions
on the computer. The video usually captures the participant's
screen, hand motions and facial expressions. In addition, logging
software captures keystrokes and mouse tracks to determine what the
user is typing and which menu items are selected.
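
The same kind of interaction logging can also be done in a web-based setting with standard browser events. The sketch below is a minimal illustration, not part of the system described in the paper; the event shape and the upload endpoint are assumptions made for this example.

```typescript
// Minimal client-side interaction logger: buffers mouse positions and
// keystrokes with timestamps so they can be analysed after the session.
type InteractionEvent =
  | { kind: "mouse"; x: number; y: number; time: number }
  | { kind: "key"; key: string; time: number };

const buffer: InteractionEvent[] = [];

document.addEventListener("mousemove", (e: MouseEvent) => {
  buffer.push({ kind: "mouse", x: e.clientX, y: e.clientY, time: Date.now() });
});

document.addEventListener("keydown", (e: KeyboardEvent) => {
  buffer.push({ kind: "key", key: e.key, time: Date.now() });
});

// Every five seconds, send the buffered events to the evaluation server.
// The "/api/interaction-log" endpoint is illustrative only.
setInterval(() => {
  if (buffer.length === 0) return;
  const batch = buffer.splice(0, buffer.length);
  void fetch("/api/interaction-log", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch),
  });
}, 5000);
```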
Even though the usability lab is often regarded as the best setting
for usability testing, setting up a lab is very expensive: it
requires substantial hardware and sophisticated software. An
in-house usability lab also limits the number and variety of
participants and evaluators because of travel and lodging expenses.
Moreover, the test is run in an environment that is artificial to
the participants rather than in their own work environment. Remote
usability evaluation has been proposed as a solution to these
limitations [10].

Figure 1. Usability Lab Synchronous Approach

B. Remote Usability Evaluation (RUE)
Remote usability evaluation (RUE) methods have evolved since they
were introduced in 1997. Several studies show that RUE results are
very close to usability lab results [11], which indicates that RUE
can be implemented with good results by following the same concepts
as the usability lab. In an RUE environment, evaluators and users
are separated in space and/or time when running the evaluation
[12]. RUE enables evaluators and users to run the evaluation in
their own working environments, as shown in Fig. 2. RUE can be run
via two approaches, synchronous and asynchronous [16], and a remote
usability system is stronger when the two are combined so that they
support each other [13].
The synchronous approach requires the evaluator and the participant
to run the test at the same time, in real time, while the
asynchronous approach allows the usability test to be run without
the evaluators and the participant being online at the same time.

Figure 2. Remote Usability Environment
C. Synchronous Approach (mRUE)
The synchronous approach requires evaluators and participants to be
online at the same time to run the usability evaluation, as shown
in Fig. 3. Evaluators and participants who are geographically
separated have to log in together; they may be in different
buildings, states or countries, but they must be present virtually
at the same time because they need to communicate through the
network. Synchronous activities are the same as in a usability lab,
but with a degree of separation between evaluators and
participants. The synchronous approach enables evaluators to
communicate via text or audio, to see the participant's screen
through screen sharing and to record the participant's actions with
recording tools. Various usability methods can be applied in the
mRUE approach, such as live collaboration [9], the user-reported
critical incident method [9] and video-conference-supported
evaluation [14].
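
As a purely illustrative sketch of the screen-sharing idea, the example below shows how a participant's screen could be captured in the browser using the standard getDisplayMedia and MediaRecorder APIs. This is not the mechanism described in the paper, and the upload endpoint is an assumption made for this example.

```typescript
// Sketch: capture and record the participant's shared screen in the browser.
async function recordParticipantScreen(): Promise<MediaRecorder> {
  // Ask the participant to share their screen (requires user consent).
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });

  // Record the shared screen into webm chunks for later analysis.
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  const chunks: Blob[] = [];

  recorder.ondataavailable = (e: BlobEvent) => {
    if (e.data.size > 0) chunks.push(e.data);
  };

  recorder.onstop = () => {
    // Upload the full recording; "/api/session-recording" is illustrative only.
    const video = new Blob(chunks, { type: "video/webm" });
    void fetch("/api/session-recording", { method: "POST", body: video });
  };

  recorder.start(10_000); // emit a chunk roughly every 10 seconds
  return recorder;
}
```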


Figure 3. RUE Synchronous Approaches
D. Asynchronous Approach (aRUE)
The asynchronous approach does not require evaluators and
participants to be online at the same time to run the usability
evaluation. Participants can perform the evaluation at any time
without direct observation by the evaluators. They can hand in
their comments or have their actions logged, but they have no
direct communication with the researchers or evaluators; the
evaluators review and analyze the comments and logged actions
afterwards. Various usability methods can be applied in the aRUE
approach, such as online surveys, critical incident reporting and
automated data collection via mouse tracks, keystrokes and log
files [12].
As noted earlier, the cost and the artificial test environment are
the main drawbacks of the usability lab. Table III shows the
differences between the usability lab and remote evaluation
environments.
TABLE III. DIFFERENCES BETWEEN THE USABILITY LAB AND REMOTE EVALUATION ENVIRONMENTS

Method of separation: one-way glass or curtained room (usability lab); geographical separation in space and/or time (RUE).
Approaches: synchronous and asynchronous (usability lab); synchronous and asynchronous (RUE).
Method of data collection: video recording and questionnaires (usability lab); screen capturing and online questionnaires (RUE).
Cost: very expensive, covering lab setup, transportation and lodging (usability lab); cheaper, requiring only laptops and an Internet connection (RUE).
Evaluation environment: artificial environment for the user (usability lab); user's own working environment (RUE).
Number of test participants: limited due to cost [15] (usability lab); unlimited (RUE).
The growth of the Internet and the capabilities of today's networks
have made it possible for usability experts, researchers and
respondents to work together online. The online environment enables
data to be collected right after an application has been deployed.
Remote usability evaluation also allows usability experts and users
to run the test in their own working environments rather than in a
lab. In addition, collaboration software enables experts and
respondents to share their screens and desktops, to capture users'
mouse tracks and keystrokes, and to communicate with each other
online.

III. ONLINE REMOTE USABILITY INTERFACE
EVALUATION SYSTEM (E-RUE)
The RUE and usability lab methodologies opened the way for the
development of a remote usability evaluation system. A system
called the online remote usability evaluation system (e-RUE) was
developed using the principles of these methodologies. The system
must be able to capture and record users' reactions during the
evaluation, and it is designed to record mouse clicks, measure the
time spent on each interface, and record user interactions and
behaviors.
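
A small sketch of how such measurements could be kept per interface screen is shown below. The class name, screen identifiers and method names are assumptions made for this illustration; the paper does not specify how e-RUE stores these data.

```typescript
// Hypothetical per-interface tracker: counts mouse clicks and accumulates
// the time spent on each courseware screen, identified by an id string.
class InterfaceTracker {
  private current: string | null = null;
  private enteredAt = 0;
  readonly timeSpentMs = new Map<string, number>();
  readonly clicks = new Map<string, number>();

  // Call whenever the participant navigates to a new screen.
  enter(screenId: string): void {
    const now = Date.now();
    if (this.current !== null) {
      const previous = this.timeSpentMs.get(this.current) ?? 0;
      this.timeSpentMs.set(this.current, previous + (now - this.enteredAt));
    }
    this.current = screenId;
    this.enteredAt = now;
  }

  // Call on every mouse click while a screen is active.
  recordClick(): void {
    if (this.current === null) return;
    this.clicks.set(this.current, (this.clicks.get(this.current) ?? 0) + 1);
  }
}

// Example wiring in the browser: track clicks globally and switch screens
// when the courseware navigates (the screen id here is invented).
const tracker = new InterfaceTracker();
document.addEventListener("click", () => tracker.recordClick());
tracker.enter("lesson-1-introduction");
```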
Fig. 4 shows the e-RUE conceptual diagram. In the e-RUE
environment, evaluators and users are separated in space and/or
time while running the usability evaluation. The evaluation is
conducted remotely between the experts, respondents and researchers
over the network in a web environment. e-RUE implements both the
synchronous and the asynchronous approach: the synchronous approach
is applied to the real users and the asynchronous approach to the
surrogate users, as described in the following subsections. In
these approaches, online questionnaires, communication and
collaboration tools, and screen capturing tools are used. As a case
study, e-RUE evaluates the effectiveness, efficiency and
satisfaction of Form 1 Mathematics Smart School courseware from
different developers, focusing on the interface of the courseware.

Figure 4. e-RUE Conceptual Diagram
A. Synchronous Approach
Synchronous approach in e-RUE requires evaluators and
participant to be at the same time in order to run the
usability evaluation. In e-RUE, evaluators and participant
that are separated geographically have to log in together at
the same time to run the test concurrently. Evaluators can be
in different building, state or country from the respondent
but they have to be at the same time as they need to
communicate through the network. e-RUE implied the
mRUE approach via the Task Analysis Exploration (PAT)
technique as shown in Fig. 5. e-RUE records respondent
behaviour via video online. PAT uses MORAE (available
software video recording tool) to record respondent
interactions, actions and facial expression towards the
courseware. The tool is capable to record user actions
without using any webcam devices. However webcam could
also be used to record respondent facial expression. The
recorded video are then kept in a small video file but with
high quality for later analysis. PAT uses two sets of
instrument to evaluate the response towards the courseware
which are (i) Information of Students Assignment Analysis
(MATP) and (ii) Users Information (MP).

Figure 5. e-RUE Synchronous Approach
B. Asynchronous Approach
The asynchronous approach in e-RUE does not require evaluators and
participants to be online at the same time to run the usability
evaluation. Participants can perform the usability testing at any
time without direct observation by the evaluators. e-RUE applies
the aRUE approach through the Cognitive Walkthrough (JRK)
technique, as shown in Fig. 6. The JRK technique is used to collect
data on interface effectiveness and to benchmark the courseware
from the point of view of surrogate users such as usability
experts, developers or teachers. JRK requires the experts to answer
sets of questionnaires concerning their satisfaction with the
system and to give comments and reviews on its performance and
benchmark results. JRK uses three instruments: (i) the Cognitive
Walkthrough Evaluation Tool (ATPJRK), (ii) the Surrogate Users'
Information (MPS) and (iii) the Problem Description Report (LPM),
as shown in Fig. 6.

Figure 6. e-RUE Asynchronous Approach
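
For illustration, the sketch below shows one possible shape for an online questionnaire response and a helper that averages Likert ratings per question. The types and the function are assumptions made for this example; the actual structure of ATPJRK, MPS and LPM is not given in the paper.

```typescript
// Hypothetical online questionnaire response: one Likert rating per question.
interface LikertResponse {
  questionId: string;
  rating: 1 | 2 | 3 | 4 | 5; // 1 = strongly disagree ... 5 = strongly agree
}

// Average the ratings for each question across all surrogate users.
function meanRatingByQuestion(responses: LikertResponse[]): Map<string, number> {
  const totals = new Map<string, { sum: number; count: number }>();
  for (const r of responses) {
    const t = totals.get(r.questionId) ?? { sum: 0, count: 0 };
    t.sum += r.rating;
    t.count += 1;
    totals.set(r.questionId, t);
  }
  const means = new Map<string, number>();
  for (const [question, t] of totals) {
    means.set(question, t.sum / t.count);
  }
  return means;
}
```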
IV. DESIGNING AND DEVELOPING E-RUE
A study in [16] demonstrated that remote usability testing yields
results comparable to a lab-based test. Although this technique
potentially saves travel and facility costs, it is still a very
labor- and time-intensive process, with the observers involved
full-time for each test user's session. The earliest remote
usability testing techniques used the same basic procedures as lab
tests but allowed the test users and the observers to be in two
geographically separate locations. They used special software or
video connections that allow the observer to see what is happening
on the user's screen, together with a telephone for audio
communication. This can be thought of as a video-conference
approach to usability testing [15].
e-RUE follows the remote usability evaluation approach; in our
case, the entire evaluation is conducted over the Web. The main
objective of e-RUE is to score two mathematics courseware packages
developed by different developers. e-RUE assigns marks based on the
interface design of the courseware using a specific formula that
will be discussed in another paper. The evaluation involves three
types of participants: the surrogate users (interface and HCI
experts), Form 1 students and the administrator (observer).
The surrogate users contribute their experience and expertise
through the online instruments (ATPJRK, MPS and LPM), while the
students are observed online through their screens and the online
instruments (MATP and MP). We use the MORAE video recording tool,
which allows the observer to see what is happening on the user's
screen. All participants log in to the system from their own
working environment, whether at work, at home or at school, using
their own computers or laptops; they can be anywhere, as long as
they can access the Web. All participants are specially selected
and invited via an email message that includes a link to a Welcome
page explaining the characteristics of the test.
V. CONCLUSION
This paper has discussed the use of remote usability evaluation
through both the synchronous and the asynchronous approach.
Combining these approaches is beneficial because they support each
other and provide more precise evaluation results from two
different types of evaluators. The growing number of mathematics
courseware packages on the market gives schools, teachers and
parents many choices when selecting the best courseware for the
mathematics subject. The development of e-RUE therefore helps
schools, parents, teachers and the Ministry of Education to
identify the mathematics courseware that best meets usability
principles. Usability evaluation is an important phase in the
system development process, but it is costly; the cost of a
usability test can be reduced by conducting it remotely online.
e-RUE is therefore one of the best possible alternatives to a
usability lab for running a usability evaluation of courseware.
ACKNOWLEDGMENT
This research was conducted under the e-Science Fund Grant
(01-01-02-SF0089), Universiti Kebangsaan Malaysia.
REFERENCES
[1] J. Timo, I. Netta, M. Juha and K. Minna, in Proceedings of the Latin American Conference on Human-Computer Interaction, ACM Press, Rio de Janeiro, Brazil, 2003.
[2] J. Scholtz, National Institute of Standards and Technology, 2004.
[3] A. M. Rentroia-Bonito, A. Joaquim and P. Jorge, "An Integrated Courseware Usability Evaluation Method," Departamento de Engenharia Informatica, University of Lisbon.
[4] A. Arthi, D. Senthil, R. Hassan and N. Karthik, in Proceedings of the Third International Conference on Information Technology: New Generations (ITNG'06), 2006.
[5] J. Jeng, International Journal of Libraries and Information Services, vol. 55, 2005.
[6] J. Robertson, KM Column, May 2007.
[7] F. Erik, H. Morten and H. Kasper, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, The Hague, The Netherlands, 2000.
[8] K. S. Park and C. Hwan Lim, International Journal of Industrial Ergonomics, vol. 23, 1999.
[9] J. C. Castillo, "The User-Reported Critical Incident Method for Remote Usability Evaluation," Virginia Polytechnic Institute and State University, Blacksburg, Virginia, 1997.
[10] J. Scholtz, in Proceedings of the 34th Hawaii International Conference on System Sciences, IEEE, Hawaii, 2001.
[11] N. Kodiyalam, "Remote Usability Evaluation Tool," Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, 2003.
[12] J. C. Castillo (Ed.), Chapter 3: Remote Usability Evaluation, 1997.
[13] M. Ames, A. J. B. Brush and J. Davis, "A comparison of synchronous remote and local usability studies for an expert interface," in Proceedings of the Conference on Human Factors in Computing Systems, Vienna, Austria, ACM, New York, NY, USA, 2004.
[14] A. Canan, Faculty of Art and Design, Department of Communication Design, Yildiz Technical University, Istanbul, Turkey, in 8th ERCIM Workshop "User Interfaces For All," Palais Eschenbach, Vienna, Austria, 2004.
[15] T. Tullis, S. Fleischman, M. McNulty, C. Cianchette and M. Bergel, in Usability Professionals Association Conference, 2002.
[16] H. R. Hartson, J. C. Castillo, J. Kelso and W. C. Neale, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Common Ground, ACM Press, Vancouver, British Columbia, Canada, 1996.


