
User Interface and Design
USER INTERFACE AND DESIGN

USER INTERFACE
  Introduction
  Usability
  User interfaces in computing
    Types
    History
    Modalities and modes
    Standardization

HUMAN-COMPUTER INTERACTION
  Goals
  Differences with related fields
  Design Methodologies
  Some Notes on Terminology

USER INTERFACE DESIGN
  Processes
  Criticism against the term

USER-CENTERED DESIGN
  UCD Models and Approaches
  Purpose
  Elements
    Visibility
    Accessibility
    Legibility
    Language
  Rhetorical Situation
    Audience
    Purpose
    Context
  User-centered design according to Donald Norman
  Focus on more than just computers and single users

USABILITY
  Introduction
  Defining usability
    ISO standard
    System acceptability
    Usability considerations
    Other considerations
  Benefits of Usability
  Conclusion

USABILITY TESTING
  Goals of usability testing
  What usability testing is not
  Methods
    Hallway testing

HUMAN ACTION CYCLE
  The three stages of the human action cycle
    Goal formation stage
    Execution stage
    Evaluation stage
  Use in evaluation of user interfaces


User interface
The user interface (or human-machine interface) is the aggregate of means by which
people (the users) interact with a system: a particular machine, device, computer
program or other complex tool. The user interface provides means of:

• Input, allowing the users to manipulate the system

• Output, allowing the system to indicate the effects of the users' manipulation.

Introduction
To work with a system, users need to be able to control the system and assess the state of
the system. For example, when driving an automobile, the driver uses the steering wheel
to control the direction of the vehicle, and the accelerator pedal, brake pedal and gearstick
to control the speed of the vehicle. The driver perceives the position of the vehicle by
looking through the windscreen and the exact speed of the vehicle by reading the
speedometer. The user interface of the automobile is, on the whole, composed of the
instruments the driver can use to accomplish the tasks of driving and maintaining the
automobile.

The term user interface is often used in the context of computer systems and electronic
devices. The user interface of a mechanical system, a vehicle or an industrial installation
is sometimes referred to as the human-machine interface (HMI). Yet another term used
is "operator-interface console" (OIC).

HMI is a modification of the original term MMI (man-machine interface). In practice,
the abbreviation MMI is still frequently used, although some who still use the term
claim that MMI now stands for something different (e.g. "Management and
Manufacturing Information" or "Mammal-Machine Interface") in order to avoid
controversy.

Whether it is called MMI or HMI, the terms refer to the 'layer' that separates a human that
is operating a machine from the machine itself.

In science fiction, HMI or MMI is sometimes used to refer to what is better described as
a direct neural interface. However, this latter usage is seeing increasing real-world
application in (medical) prostheses, artificial extensions that replace a missing body
part (e.g., cochlear implants).

The system may expose several user interfaces to serve different kinds of users. For
example, a computerized library database might provide two user interfaces, one for
library patrons (limited set of functions, optimized for ease of use) and the other for
library personnel (wide set of functions, optimized for efficiency).
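The library example above can be sketched in code. The following Python is a hypothetical illustration (all class names, method names and data are invented), showing two user interfaces with different function sets over one shared backend:

```python
# Hypothetical sketch of the library example above: one backend, two
# user interfaces with different function sets. All names and data are
# invented for illustration.

class LibraryCatalog:
    """Single backend shared by both user interfaces."""

    def __init__(self):
        self._books = {"bk-001": "The Design of Everyday Things"}

    def search(self, text):
        # Available to every user class: substring search over titles.
        return [book_id for book_id, title in self._books.items()
                if text.lower() in title.lower()]

    def add_book(self, book_id, title):
        # Intended for staff only.
        self._books[book_id] = title


class PatronInterface:
    """Limited set of functions, optimized for ease of use."""

    def __init__(self, catalog):
        self._catalog = catalog

    def find(self, text):
        return self._catalog.search(text)


class StaffInterface(PatronInterface):
    """Wider set of functions, optimized for efficiency."""

    def catalog_book(self, book_id, title):
        self._catalog.add_book(book_id, title)


catalog = LibraryCatalog()
patron = PatronInterface(catalog)
staff = StaffInterface(catalog)

staff.catalog_book("bk-002", "Emotional Design")
print(patron.find("design"))  # both interfaces share one backend
```

The design point is that the interfaces differ while the underlying system stays the same: the patron interface simply omits functions rather than hiding them behind warnings.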

In some circumstances computers might observe the user and react according to their
actions without explicit commands. A means of tracking parts of the body is required,
and sensors noting the position of the head, direction of gaze and so on have been used
experimentally. This is particularly relevant to immersive interfaces.

Usability
The design of a user interface affects the amount of effort the user must expend to
provide input for the system and to interpret the output of the system, and how much
effort it takes to learn how to do this. Usability is the degree to which the design of a
particular user interface takes into account the human psychology and physiology of the
users, and makes the process of using the system effective, efficient and satisfying.

Usability is mainly a characteristic of the user interface, but it is also associated with the
functionality of the product. It describes how well a product can be used for its intended
purpose by its target users with efficiency, effectiveness, and satisfaction, taking into
account the requirements of its context of use. These functions or features are not
always part of the user interface (e.g. whether or not a car can reverse), yet
they are key elements in the usability of a product.
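The three components named above (effectiveness, efficiency, satisfaction) are commonly quantified during usability evaluation. The following Python sketch uses typical metric choices (task completion rate, mean time on task, normalized post-task rating); the metric definitions are common practice rather than drawn from this text, and the session data are invented:

```python
# Minimal sketch of quantifying the three usability components named
# above. The specific metrics (completion rate, mean time on task, mean
# normalized rating) are common practice; the data are invented.

def effectiveness(outcomes):
    """Fraction of test tasks completed successfully."""
    return sum(outcomes) / len(outcomes)

def efficiency(times_seconds):
    """Mean time on task, in seconds (lower is better)."""
    return sum(times_seconds) / len(times_seconds)

def satisfaction(ratings, scale_max=5):
    """Mean post-task rating, normalized to the range 0..1."""
    return sum(ratings) / (len(ratings) * scale_max)

# One invented test session: 5 participants attempt the same task.
completed = [True, True, False, True, True]
times = [42.0, 35.5, 60.0, 28.0, 39.5]
ratings = [4, 5, 2, 4, 4]

print(effectiveness(completed))   # 0.8
print(efficiency(times))          # 41.0
print(satisfaction(ratings))      # 0.76
```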

User interfaces in computing


In computer science and human-computer interaction, the user interface (of a computer
program) refers to the graphical, textual and auditory information the program presents to
the user, and the control sequences (such as keystrokes with the computer keyboard,
movements of the computer mouse, and selections with the touchscreen) the user
employs to control the program.

Types

Currently (as of 2005) the following types of user interface are the most common:

• Graphical user interfaces (GUI) accept input via devices such as computer
keyboard and mouse and provide articulated graphical output on the computer
monitor. There are at least two different principles widely used in GUI design:
object-oriented user interfaces (OOUIs) and application-oriented interfaces.
• Web-based user interfaces accept input and provide output by generating web
pages which are transmitted via the Internet and viewed by the user using a web
browser program. Newer implementations utilize Java, AJAX, Adobe Flex,
Microsoft .NET, or similar technologies to provide realtime control in a separate
program, eliminating the need to refresh a traditional HTML based web browser.

User interfaces that are common in various fields outside desktop computing:

• Command line interfaces, where the user provides the input by typing a
command string with the computer keyboard and the system provides output by
printing text on the computer monitor. Used for system administration tasks etc.
• Tactile interfaces supplement or replace other forms of output with haptic
feedback methods. Used in computerized simulators etc.
• Touch interfaces are graphical user interfaces using a touchscreen display as a
combined input and output device. Used in many types of industrial processes and
machines, self-service machines etc.

Other types of user interfaces:

• Attentive user interfaces manage the user's attention, deciding when to interrupt
the user, the kind of warnings to give, and the level of detail of the messages
presented to the user.
• Batch interfaces are non-interactive user interfaces, where the user specifies all
the details of the batch job in advance of processing, and receives the output
when all the processing is done. The computer does not prompt for further input
after the processing has started.
• Conversational Interface Agents attempt to personify the computer interface in
the form of an animated person, robot, or other character (such as Microsoft's
Clippy the paperclip), and present interactions in a conversational form.
• Crossing-based interfaces are graphical user interfaces in which the primary task
consists in crossing boundaries instead of pointing.
• Gesture interfaces are graphical user interfaces which accept input in the form of
hand gestures, or mouse gestures sketched with a computer mouse or a stylus.
• Intelligent user interfaces are human-machine interfaces that aim to improve the
efficiency, effectiveness, and naturalness of human-machine interaction by
representing, reasoning, and acting on models of the user, domain, task, discourse,
and media (e.g., graphics, natural language, gesture).
• Live user interfaces (LUI) utilize the power of human interaction to leverage the
user interface. With a LUI, a computer representation of a live customer service
representative could navigate with the user through the interface, and present
images, maps and video clips from within the website. The computer
representation can also help the user perform on-line purchases and complete
complex forms.
• Multi-screen interfaces employ multiple displays to provide a more flexible
interaction. This is often employed in computer game interaction, in both
commercial arcades and, more recently, the handheld market.
• Noncommand user interfaces, which observe the user to infer their needs
and intentions, without requiring that they formulate explicit commands.
• Reflexive user interfaces where the users control and redefine the entire system
via the user interface alone, for instance to change its command verbs. Typically
this is only possible with very rich graphic user interfaces.
• Tangible user interfaces, which place a greater emphasis on touch and the
physical environment or its elements.
• Text user interfaces are user interfaces which output text, but accept other forms
of input in addition to or in place of typed command strings.
• Voice user interfaces, which accept input and provide output by generating voice
prompts which are transmitted via a telephone network and heard by the user
using a telephone. The user input is made by pressing telephone keys.
• Zero-Input interfaces get inputs from a set of sensors instead of querying the
user with input dialogs.
• Zooming user interfaces are graphical user interfaces in which information
objects are represented at different levels of scale and detail, and where the user
can change the scale of the viewed area in order to show more detail.
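As a rough illustration of the zero-input and noncommand styles listed above, the following hypothetical Python sketch maps sensor readings to a system reaction without any explicit user command. The sensor names and thresholds are invented:

```python
# Hypothetical sketch of a zero-input / noncommand interface: the
# system reads sensors and decides how to react on its own, with no
# explicit command from the user. Sensor names and thresholds are
# invented for illustration.

def infer_action(readings):
    """Map raw sensor readings to a system reaction."""
    if readings["presence"] and readings["ambient_lux"] < 50:
        # A user is present and the room is dark.
        return "turn_on_backlight"
    if not readings["presence"]:
        # Nobody is there: secure the device.
        return "lock_screen"
    return "no_op"

print(infer_action({"presence": True, "ambient_lux": 20}))    # turn_on_backlight
print(infer_action({"presence": False, "ambient_lux": 400}))  # lock_screen
```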

History

The history of user interfaces can be divided into the following phases according to the
dominant type of user interface:

• Batch interface, 1945-1968
• Command-line user interface, 1969-1980
• Graphical user interface, 1981 to present (see History of the GUI for a detailed
look)
• Tangible interfaces / ubiquitous computing (ubicomp)
• Touch user interface, e.g. the iPhone

Modalities and modes

A modality is a path of communication employed by the user interface to carry input and
output. Examples of modalities:

• Input — computer keyboard allows the user to enter typed text, digitizing tablet
allows the user to create free-form drawing
• Output — computer monitor allows the system to display text and graphics
(vision modality), loudspeaker allows the system to produce sound (auditory
modality)

The user interface may employ several redundant input modalities and output modalities,
allowing the user to choose which ones to use for interaction.

A mode is a distinct method of operation within a computer program, in which the same
input can produce different perceived results depending on the state of the computer
program. Heavy use of modes often reduces the usability of a user interface, as the user
must expend effort to remember current mode states, and switch between them as
necessary.
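The effect of modes described above can be sketched in code. The following Python models a hypothetical editor loosely inspired by vi-style insert/command modes (a simplification for illustration, not a real editor): the same keystroke produces different results depending on the current mode.

```python
# Sketch of a modal interface: the same keystroke produces different
# results depending on the program's current mode. The editor is
# hypothetical, loosely modeled on vi-style insert/command modes.

class ModalEditor:
    def __init__(self):
        self.mode = "command"
        self.text = ""

    def press(self, key):
        if self.mode == "command":
            if key == "i":        # enter insert mode
                self.mode = "insert"
            elif key == "x":      # delete the last character
                self.text = self.text[:-1]
        elif self.mode == "insert":
            if key == "ESC":      # leave insert mode
                self.mode = "command"
            else:                 # any other key inserts itself
                self.text += key

editor = ModalEditor()
for key in ["i", "h", "i", "ESC", "x"]:
    editor.press(key)

# In insert mode the second "i" was typed as text; in command mode the
# same key switched modes, and "x" deleted instead of inserting.
print(editor.text)  # h
```

This is exactly the usability cost the paragraph above describes: the user must track which mode is active to predict what a keystroke will do.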

Standardization

ISO has published the standard ISO/IEC 24752, which specifies technical
requirements for the user interfaces of IT systems.

Human-computer interaction
Human–computer interaction (HCI), alternatively man–machine interaction (MMI)
or computer–human interaction (CHI) is the study of interaction between people
(users) and computers. It is often regarded as the intersection of computer science,
behavioral sciences, design and several other fields of study. Interaction between users
and computers occurs at the user interface (or simply interface), which includes both
software and hardware, for example, general purpose computer peripherals and large-
scale mechanical systems, such as aircraft and power plants.

One widely cited definition reads: "Human-computer interaction is a discipline
concerned with the design, evaluation and implementation of interactive computing
systems for human use and with the study of major phenomena surrounding them."

Goals
A basic goal of HCI is to improve the interactions between users and computers by
making computers more usable and receptive to the user's needs. Specifically, HCI is
concerned with:

• methodologies and processes for designing interfaces (i.e., given a task and a
class of users, design the best possible interface within given constraints,
optimizing for a desired property such as learnability or efficiency of use)
• methods for implementing interfaces (e.g. software toolkits and libraries; efficient
algorithms)
• techniques for evaluating and comparing interfaces
• developing new interfaces and interaction techniques
• developing descriptive and predictive models and theories of interaction

A long term goal of HCI is to design systems that minimize the barrier between the
human's cognitive model of what they want to accomplish and the computer's
understanding of the user's task.

Professional practitioners in HCI are usually designers concerned with the practical
application of design methodologies to real-world problems. Their work often revolves
around designing graphical user interfaces and web interfaces.

Researchers in HCI are interested in developing new design methodologies,
experimenting with new hardware devices, prototyping new software systems, exploring
new paradigms for interaction, and developing models and theories of interaction.

Differences with related fields


HCI differs from human factors in that it focuses more on users working with
computers rather than other kinds of machines or designed artifacts, and it has an
additional focus on how to implement the (software and hardware) mechanisms behind
computers to support human-computer interaction. HCI also differs from ergonomics in
that there is less of a focus on repetitive work-oriented tasks and procedures, and much
less emphasis on physical stress and the physical form or industrial design of physical
aspects of the user interface, such as the physical form of keyboards and mice. More
discussion of the nuances between these fields is given at [3].

Design Methodologies
A number of diverse methodologies outlining techniques for human–computer interaction
design have emerged since the rise of the field in the 1980s. Most design methodologies
stem from a model for how users, designers, and technical systems interact. Early
methodologies, for example, treated users' cognitive processes as predictable and
quantifiable and encouraged design practitioners to look to cognitive science results in
areas such as memory and attention when designing user interfaces. Modern models tend
to focus on a constant feedback and conversation between users, designers, and engineers
and push for technical systems to be wrapped around the types of experiences users want
to have, rather than wrapping user experience around a completed system.

• User-centered design: user-centered design (UCD) is a modern, widely practiced
design philosophy rooted in the idea that users must take center stage in the
design of any computer system. Users, designers and technical practitioners work
together to articulate the wants, needs and limitations of the user and create a
system that addresses these elements. Often, user-centered design projects are
informed by ethnographic studies of the environments in which users will be
interacting with the system.

• Principles of User Interface Design: these are seven principles that may be
considered at any time during the design of a user interface in any order, namely
Tolerance, Simplicity, Visibility, Affordance, Consistency, Structure and
Feedback.[1]

Some Notes on Terminology


• HCI vs MMI. MMI has been used to refer to any man-machine interaction,
including, but not limited to, computers. The term was used early on in control
room design for anything operated on or observed by an operator, e.g. dials,
switches, knobs and gauges.

• HCI vs CHI. The acronym CHI (pronounced kai), for computer-human
interaction, has been used to refer to this field, perhaps more frequently in the past
than now. However, researchers and practitioners now refer to their field of study
as HCI (pronounced as an initialism), which perhaps rose in popularity partly
because of the notion that the human, and the human's needs and time, should be
considered first, and are more important than the machine's. This notion became
increasingly relevant towards the end of the 20th century as computers became
increasingly inexpensive (as did CPU time), small, and powerful. Since the turn
of the millennium, the field of human-centered computing has emerged with an
even more pronounced focus on understanding human beings as actors within
socio–technical systems.

• Usability vs Usefulness. Design methodologies in HCI aim to create user
interfaces that are usable, i.e. that can be operated with ease and efficiency.
However, an even more basic requirement is that the user interface be useful, i.e.
that it allows the user to complete relevant tasks.

• Intuitive and Natural. Software products are often touted by marketers as being
"intuitive" and "natural" to use, often simply because they have a graphical user
interface. Many researchers in HCI view such claims as unfounded (e.g. a poorly
designed GUI may be very unusable), and some object to the use of the words
intuitive and natural as vague and/or misleading, since these are very
context-dependent terms.

User interface design


User interface design or user interface engineering is the design of computers,
appliances, machines, mobile communication devices, software applications, and
websites with the focus on the user's experience and interaction. Where traditional
graphic design seeks to make the object or application physically attractive, the goal of
user interface design is to make the user's interaction as simple and efficient as possible,
in terms of accomplishing user goals, what is often called user-centered design. Where
good graphic/industrial design is bold and eye-catching, good user interface design
facilitates finishing the task at hand rather than drawing attention to itself. Graphic design may
be utilized to apply a theme or style to the interface without compromising its usability.
The design process of an interface must balance the meaning of its visual elements,
which should conform to the users' mental model of operation, against the functionality
from a technical engineering perspective, in order to create a system that is both usable
and adaptable to changing user needs.

User interface design is involved in a wide range of projects, from computer systems to
cars, to commercial planes; all of these projects involve much of the same basic human
interaction yet also require some unique skills and knowledge. As a result, user interface
designers tend to specialize in certain types of projects and have skills centered around
their expertise, whether that be software design, user research, web design, or industrial
design.

Processes
There are several phases and processes in user interface design, some of which are
more in demand than others depending on the project. (Note: in the remainder of
this section the word system is used to denote any project, whether it is a web site,
application, or device.)

• Functionality requirements gathering - assembling a list of the functionality
required of the system to accomplish the goals of the project and the potential
needs of the users.
• User analysis - analysis of the potential users of the system either through
discussion with people who work with the users and/or the potential users
themselves. Typical questions involve:
o What would the user want the system to do?
o How would the system fit in with the user's normal workflow or daily
activities?
o How technically savvy is the user and what similar systems does the user
already use?
o What interface look & feel styles appeal to the user?
• Information architecture - development of the process and/or information flow of
the system (i.e. for phone tree systems, this would be an option tree flowchart and
for web sites this would be a site flow that shows the hierarchy of the pages).
• Prototyping - development of wireframes, either in the form of paper prototypes
or simple interactive screens. These prototypes are stripped of all look & feel
elements and most content in order to concentrate on the interface.
• Usability testing - testing of the prototypes on an actual user, often using a
technique called the think aloud protocol, where you ask the user to verbalize
their thoughts during the experience.
• Graphic interface design - actual look & feel design of the final graphical user
interface (GUI). It may be based on the findings developed during the usability
testing if usability is unpredictable, or based on communication objectives and
styles that would appeal to the user. In rare cases, the graphics may drive the
prototyping, depending on the importance of visual form versus function. If the
interface requires multiple skins, there may be multiple interface designs for one
control panel, functional feature or widget. This phase is often a collaborative
effort between a graphic designer and a user interface designer, or handled by one
who is proficient in both disciplines.
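The information-architecture step above mentions an option-tree flowchart for phone-tree systems; such a tree can be sketched directly as nested data. The menu contents below are invented for illustration:

```python
# Sketch of the information-architecture step above: a phone-tree
# system's option tree as nested dicts. Menu contents are invented.

menu = {
    "prompt": "Main menu",
    "options": {
        "1": {"prompt": "Account balance", "options": {}},
        "2": {"prompt": "Support", "options": {
            "1": {"prompt": "Billing support", "options": {}},
            "2": {"prompt": "Technical support", "options": {}},
        }},
    },
}

def navigate(tree, keys):
    """Follow a sequence of keypresses down the option tree."""
    node = tree
    for key in keys:
        node = node["options"][key]
    return node["prompt"]

print(navigate(menu, ["2", "1"]))  # Billing support
```

Writing the flow down this way, before any look & feel work, is what lets the structure be reviewed and tested on its own, which is the point of the prototyping step that follows.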

User interface design requires a good understanding of user needs.


Criticism against the term
The term is currently criticized because its focus is narrower than the overall user
experience. Too much concentration on the technical aspects of the user interface
distracts the designer from the overall activity (see Activity theory) and the real goals of
users.[1]
Nevertheless, while the terms are often discussed in methodological disputes, the
activities behind them are much the same.

User-centered design
In broad terms, user-centered design (UCD) is a design philosophy and a process in
which the needs, wants, and limitations of the end user of an interface or document are
given extensive attention at each stage of the design process. User-centered design can be
characterized as a multi-stage problem solving process that not only requires designers to
analyze and foresee how users are likely to use an interface, but also to test the validity
of their assumptions with regard to user behaviour in real-world tests with actual users.
Such testing is necessary as it is often very difficult for the designers of an interface to
understand intuitively what a first-time user of their design experiences, and what each
user's learning curve may look like.

The chief difference from other interface design philosophies is that user-centered design
tries to optimize the user interface around how people can, want, or need to work, rather
than forcing the users to change how they work to accommodate the system or function.

UCD Models and Approaches


Models of a user centered design process help software designers to fulfill the goal of a
product engineered for their users. In these models, user requirements are considered
right from the beginning and included in the whole product cycle. Their major
characteristics are the active participation of real users, as well as an iteration of design
solutions.

• Cooperative design: involving designers and users on an equal footing. This is the
Scandinavian tradition of design of IT artefacts and it has been evolving since
1970.[1]
• Participatory design (PD), a North American term for the same concept, inspired
by Cooperative Design, focusing on the participation of users. Since 1990, there
has been a biennial Participatory Design Conference.[2]
• Contextual design, "customer-centered design" in the actual context, including
some ideas from PD[3]

All these approaches follow the ISO standard Human-centered design processes for
interactive systems (ISO 13407, 1999).

Purpose
UCD answers questions about users and their tasks and goals, then uses the findings to
make decisions about development and design. UCD seeks to answer the following
questions:

• Who are the users of the document?


• What are the users’ tasks and goals?
• What are the users’ experience levels with the document, and documents like it?
• What functions do the users need from the document?
• What information might the users need, and in what form do they need it?
• How do users think the document should work?

Elements
Visibility

Visibility helps the user construct a mental model of the document. Models help the user
predict the effect(s) of their actions while using the document. Important elements (such
as those that aid navigation) should be emphasized. Users should be able to tell at a
glance what they can and cannot do with the document.

Accessibility

Users should be able to find information quickly and easily throughout the document,
whether it be long or short. Users should be offered various ways to find information
(such as navigational elements, search functions, a table of contents, clearly labeled
sections, page numbers, color coding, etc.). Navigational elements should be consistent
with the genre of the document. 'Chunking' is a useful strategy that involves breaking
information into small pieces that can be organized into some type of meaningful order
or hierarchy. The ability to skim the document allows users to find their piece of
information by scanning rather than reading; bold and italic words are often used to
support this.

Legibility

Text should be easy to read. Through analysis of the rhetorical situation, the designer
should be able to determine a useful font style. Ornamental fonts and text in all capital
letters are hard to read, but italics and bolding can be helpful when used correctly. Body
text that is too large or too small is also hard to read (9-11 pt sans serif and 11-12 pt
serif are recommended). High figure-ground contrast between text and background
increases legibility; dark text against a light background is most legible.
Language

Depending on the rhetorical situation, certain types of language are needed. Short
sentences are helpful. Unless the situation calls for it, avoid jargon and technical terms.
Many writers choose to use the active voice, verbs (instead of noun strings or
nominalizations), and simple sentence structure.

Rhetorical Situation
User-centered design is focused on the rhetorical situation. The rhetorical
situation shapes the design of an information medium. There are three elements to
consider in a rhetorical situation: audience, purpose, and context.

Audience

The audience is the people who will be using the document. The designer must consider
their age, geographical location, ethnicity, gender, education, etc.

Purpose

The purpose is how the document will be used, and what the audience will be trying to
accomplish while using it. Common purposes include purchasing a product, selling an
idea, performing a task, providing instruction, and various forms of persuasion.

Context

The context is the circumstances surrounding the situation. The context often answers the
question: What situation has prompted the need for this document? Context also includes
any social or cultural issues that may surround the situation.

User-centered design according to Donald Norman


The book "The Design of Everyday Things", originally titled "The Psychology of
Everyday Things", was first published in 1988. In this book, Donald A. Norman describes
the psychology behind what he deems 'good' and 'bad' design through examples, and
offers principles of 'good' design. He stresses the importance of design in our everyday
lives, and the consequences of errors caused by bad design.

In his book, Norman uses the term "user-centered design" to describe design based on the
needs of the user, leaving aside what he considers to be secondary issues like aesthetics.
User-centered design involves simplifying the structure of tasks, making things visible,
getting the mapping right, exploiting the powers of constraint, and designing for error.
Norman later redressed this overly reductive approach in his subsequent book
"Emotional Design".

Focus on more than just computers and single users


While user-centered design is often viewed as being focused on the development of
computer and paper interfaces, the field has a much wider application. The design
philosophy has been applied to a diverse range of user interactions, from car dashboards
to service processes such as the end-to-end experience of visiting a restaurant, including
interactions such as being seated, choosing a meal, ordering food, paying the bill etc.

When user-centered design is applied to more than single user interactions, it is often
referred to as user experience. A user experience comprises a number of separate
interfaces, human-to-human contacts, transactions, and conceptual architectures. The
restaurant example (above) illustrates this: ordering a meal and paying the bill are two
user interactions, but they are part of the "user experience" called dining out. It is not
enough for the separate interactions that comprise an experience to be usable
individually; the goal is for each interaction to integrate with every other interaction that
forms part of the single experience. In this way, the experience as a whole is rendered usable.

In product design, this is sometimes referred to as the "out of the box experience,"
referring to all tasks the user must complete from first opening the box the product is
shipped in, through unpacking, reading the directions, assembly, first use and continuing
use.

Usability
Usability is a term used to denote the ease with which people can employ a particular
tool or other human-made object in order to achieve a particular goal. Usability can also
refer to the methods of measuring usability and the study of the principles behind an
object's perceived efficiency or elegance.

In human-computer interaction and computer science, usability usually refers to the
elegance and clarity with which the interaction with a computer program or a web site is
designed. The term is also often used in the context of products like consumer
electronics, or in the areas of communication and knowledge transfer objects (such as a
cookbook, a document, or online help). It can also refer to the efficient design of
mechanical objects such as a door handle or a hammer.

Introduction
The primary notion of usability is that an object designed with the users' psychology and
physiology in mind is, for example:
• More efficient to use—it takes less time to accomplish a particular task
• Easier to learn—operation can be learned by observing the object
• More satisfying to use

Complex computer systems are finding their way into everyday life, and at the same time
the market is becoming saturated with competing brands. This has led to usability
becoming more popular and widely recognized in recent years as companies see the
benefits of researching and developing their products with user-oriented instead of
technology-oriented methods. By understanding and researching the interaction between
product and user, the usability expert can also provide insight that is unattainable by
traditional company-oriented market research. For example, after observing and
interviewing users, the usability expert may identify needed functionality or design flaws
that were not anticipated. A method called "contextual inquiry" does this in the naturally
occurring context of the user's own environment.

In the user-centered design paradigm, the product is designed with its intended users in
mind at all times. In the user-driven or participatory design paradigm, some of the users
become actual or de facto members of the design team.[1]

The term user friendly is often used as a synonym for usable, though it may also refer to
accessibility.

There is no consensus about the relation of the terms ergonomics (or human factors) and
usability. Some think of usability as the software specialization of the larger topic of
ergonomics. Others view these topics as tangential, with ergonomics focusing on
physiological matters (e.g., turning a door handle) and usability focusing on
psychological matters (e.g., recognizing that a door can be opened by turning its handle).

Defining usability
Usability is often associated with the functionalities of the product (cf. ISO definition,
below), in addition to being solely a characteristic of the user interface (cf. framework of
system acceptability, also below, which separates usefulness into utility and usability).
For example, in the context of mainstream consumer products, an automobile lacking a
reverse gear could be considered unusable according to the former view, and lacking in
utility according to the latter view.

When evaluating user interfaces for usability, the definition can be as simple as "the
perception of a target user of the effectiveness (fit for purpose) and efficiency (work or
time required to use) of the interface". Each component may be measured subjectively
against criteria (e.g., the Principles of User Interface Design) to provide a metric, often
expressed as a percentage.

It is important to distinguish between usability testing and usability engineering.
Usability testing is the measurement of ease of use of a product or piece of software. In
contrast, usability engineering (UE) is the research and design process that ensures a
product with good usability.

Usability is an example of a non-functional requirement. As with other non-functional
requirements, usability cannot be directly measured but must be quantified by means of
indirect measures or attributes such as, for example, the number of reported problems
with ease-of-use of a system.

ISO standard

The document ISO 9126 (1991) Software Engineering Product Quality, issued by the
International Organization for Standardization, defines usability as:

A set of attributes that bear on the effort needed for use, and on the individual
assessment of such use, by a stated or implied set of users.

The document ISO 9241-11 (1998) Guidance on Usability, also issued by the
International Organization for Standardization, defines usability as:

The extent to which a product can be used by specified users to achieve specified
goals with effectiveness, efficiency and satisfaction in a specified context of use.

System acceptability

Usability consultant Jakob Nielsen and computer science professor Ben Shneiderman
have written (separately) about a framework of system acceptability, where usability is a
part of "usefulness" and is composed of:

• Learnability (e.g. intuitive navigation)
• Efficiency of use
• Memorability
• Few and noncatastrophic errors
• Subjective satisfaction

Usability considerations

Usability includes considerations such as:

• Who are the users, what do they know, and what can they learn?
• What do users want or need to do?
• What is the general background of the users?
• What is the context in which the user is working?
• What has to be left to the machine?

Answers to these can be obtained by conducting user and task analysis at the start of the
project.

Other considerations

• Can users easily accomplish their intended tasks? For example, can users
accomplish intended tasks at their intended speed?
• How much training do users need?
• What documentation or other supporting materials are available to help the user?
Can users find the solutions they seek in these materials?
• What and how many errors do users make when interacting with the product?
• Can the user recover from errors? What do users have to do to recover from
errors? Does the product help users recover from errors? For example, does
software present comprehensible, informative, non-threatening error messages?
• Are there provisions for meeting the special needs of users with disabilities?
(accessibility)

Examples of ways to find answers to these and other questions are: user-focused
requirements analysis, building user profiles, and usability testing.

Benefits of Usability
The key benefits of usability are:

• Higher revenues through increased sales
• Increased user efficiency
• Reduced development costs
• Reduced support costs

Conclusion
Usability is now recognized as an important software quality attribute, earning its place
among more traditional attributes such as performance and robustness. Indeed, various
academic programs focus on usability.

Usability testing
Usability testing is a technique used to evaluate a product by testing it on users. This can
be seen as an irreplaceable usability practice, since it gives direct input on how real users
use the system.[1] This is in contrast with usability inspection methods, where experts use
different methods to evaluate a user interface without involving users.
Usability testing focuses on measuring a human-made product's capacity to meet its
intended purpose. Examples of products that commonly benefit from usability testing are
web sites or web applications, computer interfaces, documents, or devices. Usability
testing measures the usability, or ease of use, of a specific object or set of objects,
whereas general human-computer interaction studies attempt to formulate universal
principles.

Goals of usability testing


During usability testing, the aim is to observe people using the product to discover errors
and areas of improvement. Usability testing generally involves measuring how well test
subjects respond in four areas: efficiency, accuracy, recall, and emotional response. The
results of the first test can be treated as a baseline or control measurement; all subsequent
tests can then be compared to the baseline to indicate improvement.

• Efficiency -- How long does it take people to complete basic tasks? (For example,
find something to buy, create a new account, and order the item.)
• Accuracy -- How many mistakes do people make? (And are they fatal or
recoverable with the right information?)
• Recall -- How much does the person remember afterwards, or after periods of non-
use?
• Emotional response -- How does the person feel about the tasks completed? Is the
person confident or stressed? Would the user recommend this system to a friend?
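To make the first two measures concrete, a summary like the following could compare a redesign against a baseline round. All numbers, field layouts, and names here are hypothetical, offered only as a sketch of the bookkeeping involved:

```python
# Hypothetical per-participant records from two rounds of usability testing.
# Each record: (seconds to complete the task, number of errors made).
baseline_sessions = [(210, 4), (185, 2), (240, 5), (200, 3)]
redesign_sessions = [(150, 1), (165, 2), (140, 0), (170, 1)]

def summarize(sessions):
    """Return (mean completion time, mean error count) for one round."""
    times = [t for t, _ in sessions]
    errors = [e for _, e in sessions]
    return sum(times) / len(sessions), sum(errors) / len(sessions)

base_time, base_err = summarize(baseline_sessions)
new_time, new_err = summarize(redesign_sessions)

# Improvement relative to the baseline round.
print(f"Efficiency: {base_time:.0f}s -> {new_time:.0f}s "
      f"({(1 - new_time / base_time):.0%} faster)")
print(f"Accuracy:   {base_err:.1f} -> {new_err:.1f} errors per task")
```

Treating the first round as the control, later rounds can be summarized the same way to show whether each iteration actually improved efficiency and accuracy.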

What usability testing is not


Simply gathering opinions on an object or document is market research rather than
usability testing. Usability testing usually involves a controlled experiment to determine
how well people can use the product. 1

Rather than showing users a rough draft and asking, "Do you understand this?", usability
testing involves watching people trying to use something for its intended purpose. For
example, when testing instructions for assembling a toy, the test subjects should be given
the instructions and a box of parts. Instruction phrasing, illustration quality, and the toy's
design all affect the assembly process.

Methods
Setting up a usability test involves carefully creating a scenario, or realistic situation,
wherein the person performs a list of tasks using the product being tested while observers
watch and take notes. Several other test instruments such as scripted instructions, paper
prototypes, and pre- and post-test questionnaires are also used to gather feedback on the
product being tested. For example, to test the attachment function of an e-mail program, a
scenario would describe a situation where a person needs to send an e-mail attachment,
and ask him or her to undertake this task. The aim is to observe how people function in a
realistic manner, so that developers can see problem areas, and what people like.
Techniques popularly used to gather data during a usability test include think aloud
protocol and eye tracking.
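As a sketch, the scenario and observer notes described above might be captured in a structure like this. The situation text, task names, and fields are all illustrative, not taken from any standard tool:

```python
# A minimal representation of a usability-test scenario: a realistic
# situation plus the list of tasks the participant should attempt.
scenario = {
    "situation": "You need to send a colleague a report as an "
                 "e-mail attachment.",
    "tasks": [
        "Compose a new e-mail",
        "Attach the file report.pdf",
        "Send the message",
    ],
}

# Observers record one entry per task: completed or not, plus free-form notes.
observations = [{"task": t, "completed": None, "notes": ""}
                for t in scenario["tasks"]]

# Example: the observer marks the second task as a failure point.
observations[1]["completed"] = False
observations[1]["notes"] = "Could not find the attachment (paper-clip) icon."

failed = [o["task"] for o in observations if o["completed"] is False]
print("Problem areas:", failed)
```

Collecting notes per task, rather than as one running commentary, makes it easy to see afterwards which steps of the scenario caused trouble across participants.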

Hallway testing

Hallway testing (or hallway usability testing) is a specific methodology of software
usability testing. Rather than using an in-house, trained group of testers, just five to six
random people, indicative of a cross-section of end users, are brought in to test the
software (be it an application, web site, etc.); the name of the technique refers to the fact
that the testers should be random people who pass by in the hallway. The theory, as
adopted from Jakob Nielsen's research, is that 95% of usability problems can be
discovered using this technique.

In the early 1990s, Jakob Nielsen, at that time a researcher at Sun Microsystems,
popularized the concept of using numerous small usability tests -- typically with only five
test subjects each -- at various stages of the development process. His argument is that,
once it is found that two or three people are totally confused by the home page, little is
gained by watching more people suffer through the same flawed design. "Elaborate
usability tests are a waste of resources. The best results come from testing no more than 5
users and running as many small tests as you can afford." 2. Nielsen subsequently
published his research and coined the term heuristic evaluation.

The claim that "five users is enough" was later described by a mathematical model[2]
which states that the proportion of problems uncovered, U, is

U = 1 − (1 − p)^n

where p is the probability of one subject identifying a specific problem and n is the
number of subjects (or test sessions). As n grows, U asymptotically approaches 1, so the
number of problems found approaches the number of problems that actually exist.
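The formula can be evaluated directly. The sketch below tabulates U for a few values of n, using p = 0.31 (the average per-subject detection rate often cited from Nielsen and Landauer's studies) purely as an illustrative assumption:

```python
# U = 1 - (1 - p)^n: expected proportion of usability problems found
# by n test subjects, when each subject finds a given problem with
# independent probability p.

def proportion_uncovered(p: float, n: int) -> float:
    """Expected share of existing problems uncovered after n sessions."""
    return 1.0 - (1.0 - p) ** n

# Illustrative assumption: p = 0.31.
p = 0.31
for n in (1, 3, 5, 10, 15):
    print(f"n = {n:2d}  ->  U = {proportion_uncovered(p, n):.1%}")
```

With p = 0.31 the model gives U of roughly 84% at n = 5, which is the basis for the "five users" claim; with a smaller p, considerably more subjects are needed to reach the same coverage.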

In later research, Nielsen's claim has been widely questioned with both empirical
evidence 3 and more advanced mathematical models (Caulton, D.A., "Relaxing the
homogeneity assumption in usability testing", Behaviour & Information Technology,
2001, 20(1), pp. 1-7). Two of the key challenges to this assertion are: (1) since usability
is related to the specific set of users, such a small sample size is unlikely to be
representative of the total population, so the data from such a small sample is more likely
to reflect the sample group than the population it represents; and (2) many usability
problems encountered in testing are likely to prevent exposure of other usability
problems, making it impossible to predict the percentage of problems that can be
uncovered without knowing the relationship between existing problems. Most researchers
today agree that, although five users can generate a significant amount of data at any given
point in the development cycle, in many applications a sample size larger than five is
required to detect a satisfactory number of usability problems.

Bruce Tognazzini advocates close-coupled testing: "Run a test subject through the
product, figure out what's wrong, change it, and repeat until everything works. Using this
technique, I've gone through seven design iterations in three-and-a-half days, testing in
the morning, changing the prototype at noon, testing in the afternoon, and making more
elaborate changes at night." 4 This testing can be useful in research situations.

Human action cycle


The human action cycle is a psychological model which describes the steps humans take
when they interact with computer systems. The model was proposed by Donald A.
Norman, a scholar in the discipline of human-computer interaction. The model can be
used to help evaluate the efficiency of a user interface (UI). Understanding the cycle
requires an understanding of the user interface design principles of affordance, feedback,
visibility and tolerance.

The human action cycle describes how humans may form goals and then develop a series
of steps required to achieve that goal, using the computer system. The user then executes
the steps, thus the model includes both cognitive activities and physical activities.

This article describes the main features of the human action cycle. See Donald A.
Norman's book "The Design of Everyday Things" for a deeper discussion.

The three stages of the human action cycle


The model is divided into three stages of seven steps in total, and is (approximately) as
follows:

Goal formation stage

• 1. Goal formation.

Execution stage

• 2. Translation of goals into a set of (unordered) tasks required to achieve the goal.
• 3. Sequencing the tasks to create the action sequence.
• 4. Executing the action sequence.

Evaluation stage

• 5. Perceiving the results after having executed the action sequence.
• 6. Interpreting the actual outcomes based on the expected outcomes.
• 7. Comparing what happened with what the user wished to happen.
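The three stages and their seven steps can also be encoded as data, for instance to generate a per-step evaluation checklist. The step wording below paraphrases the list above, and the question template is purely illustrative:

```python
# The human action cycle as data: three stages, seven steps in total.
HUMAN_ACTION_CYCLE = {
    "Goal formation": [
        "Form the goal",
    ],
    "Execution": [
        "Translate the goal into a set of tasks",
        "Sequence the tasks into an action sequence",
        "Execute the action sequence",
    ],
    "Evaluation": [
        "Perceive the results of the action sequence",
        "Interpret the actual outcome against the expected outcome",
        "Compare what happened with what was intended",
    ],
}

# Print a numbered evaluation checklist, one question per step.
step_no = 0
for stage, steps in HUMAN_ACTION_CYCLE.items():
    print(f"-- {stage} stage --")
    for step in steps:
        step_no += 1
        print(f"  Step {step_no}: does the UI help the user "
              f"{step.lower()}?")
```

Keeping the steps as data means the same structure can drive checklists, report templates, or per-step note-taking during an evaluation.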

Use in evaluation of user interfaces


Typically, an evaluator of the user interface will pose a series of questions for each of the
cycle's steps; evaluating the answers provides useful information about where the
user interface may be inadequate or unsuitable. These questions might be:

• Step 1, Forming a goal:


o Do the users have sufficient domain and task knowledge and sufficient
understanding of their work to form goals?
o Does the UI help the users form these goals?

• Step 2, Translating the goal into a task or a set of tasks:


o Do the users have sufficient domain and task knowledge and sufficient
understanding of their work to formulate the tasks?
o Does the UI help the users formulate these tasks?

• Step 3, Planning an action sequence:


o Do the users have sufficient domain and task knowledge and sufficient
understanding of their work to formulate the action sequence?
o Does the UI help the users formulate the action sequence?

• Step 4, Executing the action sequence:


o Can typical users easily learn and use the UI?
o Do the actions provided by the system match those required by the users?
o Are the affordance and visibility of the actions good?
o Do the users have an accurate mental model of the system?
o Does the system support the development of an accurate mental model?

• Step 5, Perceiving what happened:


o Can the users perceive the system’s state?
o Does the UI provide the users with sufficient feedback about the effects of
their actions?

• Step 6, Interpreting the outcome according to the users’ expectations:


o Are the users able to make sense of the feedback?
o Does the UI provide enough feedback for this interpretation?

• Step 7, Evaluating what happened against what was intended:


o Can the users compare what happened with what they were hoping to
achieve?