
Quantitative Metrics for User Experience: A Case Study

Roberta Capellini1,2, Francesca Tassistro2, and Rossana Actis-Grosso1

1 Department of Psychology, University of Milano-Bicocca, Milan, Italy
r.capellini@campus.unimib.it
2 Avanade Italy, Milan, Italy

Abstract. Improving the human aspects of software product development requires taking users into account. This can be done in several ways; here we investigate ways to involve them directly during the first phases of prototypical development. To this aim, rigorously constructed and validated questionnaires can help researchers, designers and developers collect users' opinions and feedback to inform design choices and implementation plans. We present a revised Italian version of a standard questionnaire that allows measuring, with little effort, both the overall User Experience of a product and the Workforce Experience. The idea is that both components are tightly intertwined and can affect each other. The questionnaire was adopted and tested to verify the quality of the redesign interventions on a working tool of an Italian Tour Operator. The results encourage the team to apply the questionnaire increasingly in other projects to collect feedback about users' impressions.

Keywords: User experience · Workforce experience · Quantitative methods

1 Introduction

The Software Development process needs to consider and integrate lessons learnt from both the Software Engineering (SE) field and the Human-Computer Interaction (HCI) research area. Taking human factors into account means involving users throughout the entire development process, in order to understand their needs and requirements in depth and to provide them with a good user experience (UX). UX, in turn, is a consequence of (i) the users' internal state, (ii) the main features of the designed system and (iii) the context within which the interaction occurs [5,6]. How is this possible and feasible? User-Centered Design (UCD) is an approach devoted to increasing user experience through user participation in the design process. The UCD process can help software designers develop a product fitting the needs of its users and tailored to satisfy their requirements [1].
One of the greatest challenges in UCD is how to measure the UX of a product.
UX metrics show how users perceive an interface, reveal something about the
© Springer International Publishing Switzerland 2015
P. Abrahamsson et al. (Eds.): PROFES 2015, LNCS 9459, pp. 490–496, 2015.
DOI: 10.1007/978-3-319-26844-6_36


interaction between the user and the product, help researchers and designers understand whether the design strategy is effective, and give objective data on which to base design recommendations [2,11]. Furthermore, the quality of UX has a direct impact on users' satisfaction: a good UX increases sales and improves brand perception.
However, while a number of standardized tests and rigorous methods are available for measuring usability, the measurement of UX is often achieved with ad hoc tools specifically designed for different products and scenarios, without any test of their reliability or effectiveness.
Here we present a case study where the UX has been evaluated for two versions (i.e. before and after the re-design) of a working tool used by the travel agents of an Italian Tour Operator. Customer satisfaction, as perceived by the travel agents, together with the perceived quality of the working experience supported by the new version of the tool, has been evaluated as well.
To compare the UX of the two versions of the tool we selected 18 (out of the original 26) items from the UEQ questionnaire [8], and developed 10 new items to cover other complementary dimensions. The validation of the UEQ items has been verified for the Italian version by calculating Cronbach's Alpha (which describes the consistency of the items of the scales) for each dimension. Cronbach's Alpha has also been calculated for the 10 new items. The values assigned to each dimension have been compared for the old and new versions of the tool, showing good consistency and an improvement of the overall UX for both the end users and the travel agents. However, as detailed below, the re-design of the tool did not improve the travel agents' UX along all the measured dimensions. This helped our team focus future work on shaping the product according to the customer UX, showing the importance of having quantitative metrics to measure UX.

2 Method

2.1 A Case Study

In the last months of 2014 our team (i.e. the Experience Design Team of Avanade Italy) was involved in a design project concerning the redesign of a working tool used by the travel agents of an Italian Tour Operator. The project aimed to create a new way of working for the operators, in order to support collaboration with end users and to help travel agents be more productive and efficient, allowing a dynamic management of work. The design process was based on the User-Centered Design methodology and involved user research activities such as stakeholders' workshops, requirements analysis, and observations of customers and travel agents, as well as design activities such as information architecture, wireframes and visual design. The interface was then implemented by a developers' team.
In the early stage of the project, before starting the design process, we involved the travel agents in a general survey in order to understand their evaluation of the current version of the tool interface, which they used in their daily work, and to define users' requirements.


After the redesign, our view was that a quantitative measure of the possible improvement in both usability and UX for travel agents could have a twofold importance. On the one hand, it could help our team focus future work on the weaknesses that are not necessarily clear to the end user but that could emerge in a quantitative study; on the other hand, it could add a special value to the product we give to our client, who could have clear feedback regarding the quality of the work we did. To this aim we decided to use a questionnaire designed to measure the main dimensions involved in usability and UX for the tool we were working on, and to statistically compare the values obtained for the old and the new version of the tool interface.
For the construction of the questionnaire we reviewed the relevant literature, relying on the standard usability questionnaires available. We then decided to extend the previous work in this field by adding a new set of items meant to measure four dimensions [7,12] mainly related to the quality of the workforce experience and the end-user satisfaction.
2.2 The Construction of the UX Questionnaire

Surveys and questionnaires are easy ways to collect data from users. They provide direct information about what users feel during the interaction with an interface, or about the overall impression of a product, and allow researchers to gather helpful feedback and insights by asking the users directly.
A questionnaire to measure the UX of interactive products has recently been developed by Laugwitz et al. (2008). The User Experience Questionnaire [8] encompasses 26 pairs of opposite adjectives that cover both usability and UX aspects. Thus, the UEQ is a semantic differential [9] (i.e. a type of rating scale designed to measure opinions, attitudes and values on a psychometrically controlled scale). In particular, the UEQ contains 6 semantic differential scales [3] designed to measure 6 different dimensions. The UEQ, which was originally developed in German, is freely available on-line in several languages [10]; an Italian version of the questionnaire is also available. We therefore decided to use this scale for our study, but we needed to shorten it in order to add some new items meant to investigate other aspects specifically related to our client's request.
We thus selected a subset of 18 pairs of contrasting adjectives covering the 6 dimensions measured by the UEQ: Attractiveness (the general impression towards the product), Efficiency (whether a product or a service allows users to work quickly and efficiently), Perspicuity (whether a product or a service is clear and intuitive), Dependability (whether a product or a service is reliable and safe), Stimulation (whether its use is interesting and exciting) and Novelty (whether the design of the product is innovative and creative). Our questionnaire included three items for each of the Novelty, Efficiency, Attractiveness and Dependability scales, two items for Stimulation and four items for Perspicuity.
To check for possible improvement in the cooperative work experience and in the end-user satisfaction (as perceived by travel agents), we created 10 items in order to measure 4 dimensions: Engagement (which includes one item), Productivity


(which includes five items), Empathy with brand (which includes two items) and Perceived Customer Experience (which includes two items). Travel agents (n = 84) were asked to fill in the questionnaire for both the old and the new version of the tool interface. The total number of items was thus 46.

3 Results

3.1 Confirming the Validation of UEQ Items

Below we present the main results of the analyses. First, we reverse scored the negatively worded questions. Then, we conducted a reliability analysis on the responses to the 18 pairs of adjectives related to the assessment of the two interfaces, paired t-tests to compare the overall user experience, and a Binomial Test on the responses to the 10 items related to the Workforce Experience.
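As an illustration, reverse scoring on a 7-point item simply mirrors the response around the scale midpoint (a response x becomes 8 - x). A minimal sketch; the responses and the positions of the negatively worded items below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical 7-point responses from one participant; items at positions
# 1 and 3 are assumed to be negatively worded and must be reverse-scored.
responses = np.array([6, 2, 5, 3, 7])
negative_items = [1, 3]  # 0-based indices of negatively worded items

scored = responses.copy()
scored[negative_items] = 8 - scored[negative_items]  # 7-point scale: x -> 8 - x

print(scored)  # [6 6 5 5 7]
```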
After the transformation into standardized z-scores, 8 participants were excluded from the analyses because they were outliers on the values of one or more dimensions (|z| > 2.5 SD). The analyses were thus conducted on a sample of 76 participants. In order to calculate an index for each of the 6 dimensions for the assessment of the overall User Experience, we averaged the two Stimulation items, because they were highly consistent both in the older interface dataset (Cronbach's α = .68) and the newer one (Cronbach's α = .84). For the same reason, due to the overall satisfactory reliabilities, we combined the three Attractiveness items (old interface Cronbach's α = .85, new interface Cronbach's α = .86), the three Efficiency items (old interface Cronbach's α = .72, new interface Cronbach's α = .81), the three Dependability items (old interface Cronbach's α = .75, new interface Cronbach's α = .78), the three Novelty items (old interface Cronbach's α = .78, new interface Cronbach's α = .75) and the four Perspicuity items (old interface Cronbach's α = .84, new interface Cronbach's α = .89).
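The two computations described above (Cronbach's alpha per scale, and the exclusion of participants whose standardized score exceeds |2.5| SD on any dimension) can be sketched as follows. The response matrix shown is illustrative only, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants x k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def exclude_outliers(scores, cutoff=2.5):
    """Keep only rows whose standardized dimension scores all satisfy |z| < cutoff."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
    return scores[(np.abs(z) < cutoff).all(axis=1)]

# Two hypothetical Stimulation items rated by five participants (7-point scale)
stimulation = np.array([[5, 6], [3, 4], [6, 6], [2, 3], [7, 7]])
print(round(cronbach_alpha(stimulation), 2))  # 0.98
```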
3.2 Comparison Between the UX of the Old and the New Interface

We compared the 6 indices related to the evaluation of the old interface and the 6 indices related to the evaluation of the new re-design by means of paired t-tests. The analysis revealed that the assessment of the redesigned interface along the Novelty dimension (M = 5.50, SD = 1.09) was more positive than the assessment of the older interface along the same scale (M = 4.72, SD = 1.45), t(75) = 4.81, p < .001. The same pattern was revealed for the other dimensions: the new interface was perceived as significantly more challenging (M = 5.65, SD = 1.16) than the older one (M = 5.17, SD = 1.27), t(75) = 3.11, p = .003, more trustworthy (M = 5.43, SD = 1.17) than the older one (M = 5.09, SD = 1.22), t(75) = 2.66, p = .01, more efficient (M = 5.46, SD = 1.17) than the previous interface (M = 5.16, SD = 1.18), t(75) = 2.38, p = .02, and more attractive (M = 5.59, SD = 1.23) than the previous one (M = 5.08, SD = 1.35), t(75) = 3.53, p < .001 (Fig. 1).
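A paired comparison of this kind can be reproduced with a standard statistics library. A minimal sketch using simulated scores; the distribution parameters below are invented for illustration and do not reproduce the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated per-participant Novelty indices for 76 participants (illustrative).
old = rng.normal(4.7, 1.4, size=76)
new = old + rng.normal(0.8, 1.0, size=76)  # assume a genuine improvement

# Paired (dependent-samples) t-test, df = n - 1 = 75
t, p = stats.ttest_rel(new, old)
print(f"t(75) = {t:.2f}, p = {p:.3g}")
```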


Fig. 1. Comparison between the old and the new interface

As shown in Fig. 1, participants assessed the overall user experience of the new working tool more positively. The design process allowed the team to improve the interface, to achieve a better evaluation and to better answer the users' needs.
The results shown in Fig. 1 did not reveal a significant difference between the evaluation of the new and the old interface along the Perspicuity dimension, t(75) = .85, p = .39. This latter result can be interpreted as follows: when participants answered the survey, the new interface was newly deployed and they had not yet received any training on how to use it. This might have influenced their impression of the clearness and perspicuity of the interface and would justify this finding. Further steps in the design have to consider this aspect and try to improve it.
3.3 Workforce Experience

Following the same procedure presented above, we collapsed the evaluations that participants provided for each of the 10 items related to the Workforce Experience. Cronbach's alphas for Productivity (.71), Empathy with brand (.81) and Perceived Customer Experience (.77) were sufficiently high to allow us to combine the traits of each dimension.
Following a technique introduced in [4], we then computed a Binomial Test on the proportion of responses on Engagement, Productivity, Empathy with brand and Perceived Customer Experience. Clear response polarizations have been detected for the following dimensions: Engagement (.07 vs. .93, p < .001), Productivity (.09 vs. .91, p < .001) and Perceived Customer Experience (.13 vs. .87, p < .001).
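A Binomial Test of this kind checks whether the observed split of positive vs. negative responses departs from a 50/50 chance level. A minimal sketch with hypothetical counts approximating the Productivity split reported above:

```python
from scipy.stats import binomtest

# Hypothetical counts: roughly the .09 vs. .91 polarization on Productivity.
n_total = 76
n_positive = 69

result = binomtest(n_positive, n_total, p=0.5, alternative='two-sided')
print(f"proportion = {n_positive / n_total:.2f}, p = {result.pvalue:.2g}")
```

(`binomtest` requires SciPy ≥ 1.7; older versions expose the equivalent `binom_test` function.)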


As shown, the new version of the tool supported travel agents in their daily work. After the re-design, participants felt more productive, felt they worked in a more efficient way, and felt able to provide a better service to their customers. Moreover, they felt more engaged and more empathic with their company brand. In line with the literature [7,12], these aspects influence the overall quality of the workforce experience in a positive way.

4 Discussion

We presented a tool that allows researchers and designers to evaluate the user experience of a product. It measures both usability aspects, like perspicuity, efficiency and dependability, and user experience aspects, like stimulation or originality. It also allows measuring the quality of the workforce experience perceived by the employees who use the interface under investigation in their daily work.
The case study we presented was our first application of the UX questionnaire in a design project; our aim was to measure the quality of our work. The results showed that the team was able to design an interface that helps travel agents build a travel tour together with their customers, giving them the possibility to choose and customize itineraries according to their wishes. The overall user experience of the working tool was improved, in terms of enhanced attractiveness and higher levels of efficiency, dependability, stimulation and novelty. The innovative interface enabled travel agents to work in a cooperative way with their clients and to experience a new way of working.
These encouraging results lead us to use this UX questionnaire ever more frequently in our projects. The use of surveys and validated questionnaires to gather objective data about the user experience of a product should become routine. Further, this should happen throughout a project: before starting the design phase, in order to obtain insights to guide the subsequent steps, and at the end of a design project, in order to measure the quality of one's work in a quantitative way.
Moreover, gathering users' requirements, bringing quantitative data, and basing the design and implementation of a product on the users' point of view might create a bridge between designers and developers. It could be the ideal way to integrate Software Engineering and Human-Computer Interaction in the Software Development process.
Acknowledgments. We'd like to thank the Avanade Experience Design Team, based in Milan (Italy), which actively participated in the project (in alphabetical order): Arianna Angaroni, Marco Buonvino, Roberto Chinelli, Celeste Cirasole, Alice Deias, Giulia Delmedico, Luca Erbifori, Alessandro Fusco, Jessica Guizzardi, Lais Kantor Caserta, Angelo Oldani, Matteo Puggioni and Silvia Soccol. We also thank Federico Cabitza of the University of Milano-Bicocca for his help with the non-parametric inference analysis and his suggestions for the scale definition.


References
1. Abras, C., Maloney-Krichmar, D., Preece, J.: User-centered design. In: Bainbridge, W. (ed.) Encyclopedia of Human-Computer Interaction, vol. 37(4), pp. 445–456. Sage Publications, Thousand Oaks (2004)
2. Albert, W., Tullis, T.: Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics. Newnes (2013)
3. Brooks, P., Hestnes, B.: User measures of quality of experience: why being objective and quantitative is important. IEEE Netw. 24(2), 8–13 (2010)
4. Cabitza, F., Simone, C., De Michelis, G.: User-driven prioritization of features for a prospective InterPersonal Health Record: perceptions from the Italian context. Comput. Biol. Med. 59, 202–210 (2015)
5. Hassenzahl, M.: User experience (UX): towards an experiential perspective on product quality. In: Proceedings of the 20th International Conference of the Association Francophone d'Interaction Homme-Machine, pp. 11–15. ACM, September 2008
6. Hassenzahl, M., Tractinsky, N.: User experience - a research agenda. Behav. Inf. Technol. 25(2), 91–97 (2006)
7. Keitt, T.J., Smith, A.: Measuring Your Workforce Experience: Use Customer Experience Insight As Your Guide. Forrester Research (2013)
8. Laugwitz, B., Held, T., Schrepp, M.: Construction and evaluation of a user experience questionnaire. In: Holzinger, A. (ed.) USAB 2008. LNCS, vol. 5298, pp. 63–76. Springer, Heidelberg (2008)
9. Osgood, C.E.: A Monograph on the Semantic Differential. University of Illinois Press, Urbana (1957)
10. Rauschenberger, M., Olschner, S., Cota, M.P., Schrepp, M., Thomaschewski, J.: Measurement of user experience: a Spanish language version of the user experience questionnaire (UEQ). In: 2012 7th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–6. IEEE, June 2012
11. Sauro, J., Lewis, J.R.: Quantifying the User Experience: Practical Statistics for User Research. Elsevier (2012)
12. Yates, S., Keitt, T.J.: Measure Workforce Experience Through Engagement, Productivity and Customer Impact. Forrester Research (2013)
