

User Centered Design of an e-learning Environment:
Evaluating the User Experience

Jessica Leauanae
Educational Technology Graduate Student
University of Hawaii-Manoa
Hawaii, USA
leauanae@hawaii.edu


Abstract: The rising cost of delivering training to employees has driven many corporate
managers to migrate traditional methods of face-to-face instruction into e-learning
environments. However, the conversion from synchronous to asynchronous delivery
often poses the challenge of appropriating sufficient time and resources to the ADDIE
phases of instructional design. Limited time and budgetary constraints often force
instructional designers to employ alternative design processes that allow them to deploy
instruction rapidly, evaluate data in real time, and make dynamic design changes
intuitively. This paper explores the iterative development process of an e-learning
training environment that used User Experience evaluations to implement design
revisions. All design revisions were based on the analysis of data from a series of User
Experience evaluations aimed at measuring both the usability and the learnability, or learner
experience, of the e-learning environment. Synchronous, remote usability testing was
conducted following Steve Krug's framework for web usability, and asynchronous embedded
learner experience evaluations followed Kirkpatrick's Four-Level Training Evaluation
Model. A total of three iterative cycles occurred within a three-month period. The results
of eight usability tests and 138 learner experience evaluations are discussed.


Introduction

In today's technology-driven landscape, everyone is concerned about the experience. The ability to
harness the power of technology and use it to drive, advance, and transform what used to be just
clicking around on a page into an actual User Experience is fast becoming the basis for many design
decisions. Shopping for a new pair of black pants is no longer a series of clicks and thumbnails. Today's
User Centered Design (UCD) might include a virtual dressing room equipped with a customizable
mannequin to match your height, weight, and body shape. And pants are not just pants anymore but
rather your statement for the occasion. Are they being worn for work, school, a casual day, girls' night out, or a job
interview? In each case, the response automatically navigates the user to a slideshow of suggested
outfits built specifically for them. This is a dynamic experience that gathers user feedback to build
familiarity and connection with the user, a virtual relationship so to speak, and an appeal to our human
experience in a technology-focused world.
The purpose of this study was to evaluate the User Experience (UX)¹ of an e-learning training site that
was built using a rapid prototyping process. The project scope involved three iterative cycles of revision.
Evaluations of the User Experience were aimed at measuring two areas: usability² and learner
experience³. Results from the data analysis facilitated the revision of the course based on the users'
feedback. Participants in this study included a team of distance educators who are employed with an
online K-12 learning solutions provider. The e-learning environment is asynchronous, self-paced and
multi-modal.

¹ User Experience (UX): the overall experience of a person using a website or computer application, especially in terms of how easy or pleasing it is to use.
² Usability: the ease of use and learnability of a human-made object such as a software application or website. Usability also considers usefulness.
³ Learner Experience (LX): designing a user interface in a way that supports and enhances the cognitive and affective processes that learning involves.

Background

*Company A is an online learning solutions provider dedicated to offering an engaging online education
for students in grades 6-12. *Company A offers blended learning solutions that can be customized for the
client through a variety of implementations. In April 2013, *Company A experienced a large spike in
student enrollments that required the company to hire more than 250 teachers to meet the demand of a
growing student body. While Human Resources actively recruited for this new workforce, management
was faced with the task of providing quality training that could scale as rapidly as enrollment was
growing. At the time, the process for training a new-hire teacher consisted of a single one-on-one
synchronous training session hosted in Blackboard Collaborate. The session typically lasted three or
more hours and covered a wide range of topics. At the conclusion of the session, the participant was sent
a link to a library of printable resources that ranged from marketing documents to process flow charts
and lengthy handbooks of instruction. The new teacher was then assigned students without any further
training or mentoring. The responsibility for training the new hire lay solely on the shoulders of one
trainer during one session. The decision to design a process that was more effective and efficient was
crucial.

In May 2013, *Company A decided to explore moving its legacy synchronous training sessions to an
asynchronous method. This transformation meant the development of a training solution that could
provide simultaneous trainings to large groups of new teachers despite differing time zones, geographic
locations, time frames and learning paces. In addition, the training platform would need the dynamic
capability to be modified and updated on the fly, as processes changed quite frequently at *Company A.
Strother (2002) explored the benefits of asynchronous e-learning in her article "An Assessment of the
Effectiveness of e-learning in Corporate Training Programs." Strother found that, in interviews with
training managers at 40 top global companies, all but one had e-learning training initiatives in place.
In addition to cost savings, e-learning provides benefits such as flexibility,
convenience and self-paced instruction. However, the question then becomes: how do decision makers
and key stakeholders decide which system, method, delivery or platform of e-learning produces effective
results in terms of cost to implement and improvements in learning? This paper describes the evaluation
methodology employed to ensure effective results.

Methodology


The e-learning training development began in July 2013. Based on discussions with key stakeholders
and a thorough needs analysis, it was determined that the e-learning environment needed to meet the
following criteria:

1. Online and on-demand access
2. Multi-platform compatible (mobile, tablets and computers)
3. Multi-modal delivery (media, audio, printable docs, graphics, analog and digital)
4. Secure user login (copyright, intellectual and trade secret protected)
5. Minimal tracking capabilities (login access, time spent, modules accessed and completed); see the sketch following this list
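
The tracking requirement in item 5 maps onto a small per-user record. The following Python sketch shows one way such a record might be structured; the class and field names are illustrative assumptions and do not reflect *Company A's actual implementation.

# Minimal sketch of the per-user tracking record implied by criterion 5.
# Field names are hypothetical, not the platform's real schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Set


@dataclass
class TrainingRecord:
    user_id: str                                           # secure login identity
    logins: List[datetime] = field(default_factory=list)   # login access history
    seconds_spent: int = 0                                  # total time spent in the site
    modules_accessed: Set[str] = field(default_factory=set)
    modules_completed: Set[str] = field(default_factory=set)

    def record_module(self, module_id: str, completed: bool = False) -> None:
        """Log that a module was opened and, optionally, completed."""
        self.modules_accessed.add(module_id)
        if completed:
            self.modules_completed.add(module_id)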

The launch deadline for the beta site was September 2013 and a final launch deadline was slated for
December 2013. Because of the short development timeframe, the instructional strategy for building the
site followed a rapid prototyping approach as part of a larger iterative process. Figure 1 illustrates the
process steps:

Figure 1. Iterative Process for Rapid Prototype Design

A total of three cyclical revisions took place. At each cycle, user feedback was gathered from 15
participants who completed four asynchronous embedded evaluations. In addition, two of the 15
participants also completed a one-hour synchronous usability test. A total of eight usability tests and 138
evaluation surveys were used to conduct data analysis and provide recommendations at each stage of
revisions.

While this study involves both instructional design and evaluation, the focus of this paper was on the
evaluation of the User Experience in terms of usability and learner experience. Focusing on user
feedback would allow the implementation of rapid prototyping design revisions that were user-centered.
Table 1 shows the technology tools and applications that were used in site design as well as evaluation
and testing.



Participant Profile
The target population for this study included current Virtual Instructors at *Company A. A total of 45
participants completed the asynchronous training modules and eight completed the synchronous
usability test. A demographic survey was conducted before users entered the training site, and the
following profile emerged from the data collection. A full demographic report can be found in the
appendices.

Gender: 10% male, 90% female
Age: 18-64 years old; largest group 35-44 (38%)
Highest education: 17% Bachelor's, 69% Master's, 8% Doctorate, 6% Other
Content areas: 23% World Languages, 5% Math, 7% Science, 14% Social Studies, 12% English, 11% Career Tech, 15% Electives
Employment status: 65% part-time (student load 1-75), 35% full-time (student load 75-300)
Taught online before: 52% Yes, 48% No
Years teaching online: 1-15 years; largest group 1-2 years (21%)
Taken an online course: 94% Yes, 6% No
Technology-savvy rating: 92% passed with a scorecard of 4 or higher, 8% with a scorecard below 4 (rating scale 1-5, one lowest and five highest)

Evaluation Procedures

Two types of evaluation methods were used during this study. The first was embedded online
asynchronous evaluations and the second was synchronous remote usability testing. The goal of using two
categories of evaluation was to allow a more comprehensive evaluation of the e-learning environment
(Zacharias & Poylymenakou, 2009). Squires and Preece (1999) advocate going beyond purely functional
usability in e-learning:

Evaluating e-learning may move usability practitioners outside their comfort zone. In fact,
usability for e-learning designs is directly related to their pedagogical value. An e-learning
application may be usable but not in the pedagogical sense, and vice versa. (p. #)

The online asynchronous evaluations measured quantitative data while the usability studies measured
qualitative data. Each participant was given a unique access code to mark his or her results as part of the study.
Once a survey was complete, the user could not return and resubmit data.
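
As a rough illustration of the one-submission rule just described, the Python sketch below keys each response set to the participant's unique access code and rejects a second submission of the same survey; the function and storage names are hypothetical and not the survey platform's actual mechanism.

# Hypothetical sketch of the one-submission-per-access-code rule.
submitted: dict = {}   # (access_code, survey_id) -> recorded responses


def submit_survey(access_code: str, survey_id: str, responses: dict) -> bool:
    """Store a participant's survey once; refuse any attempt to resubmit."""
    key = (access_code, survey_id)
    if key in submitted:
        return False          # once complete, the user cannot return and resubmit
    submitted[key] = responses
    return True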

Asynchronous Evaluations and Learner Experience
There were a total of four embedded evaluations, which occurred at the conclusion of Unit 1, Unit 3b, Unit 4 and
the final unit. The primary goal of the asynchronous evaluations was to measure the results of the e-learning
environment in terms of learner experience. A total of 45 participants completed the training modules and
accompanying evaluations, and a total of 138 asynchronous evaluations from three iterative cycles were submitted
for the study. The framework for the learner experience evaluations was primarily built around Kirkpatrick's
classic model for training evaluations (Kirkpatrick's Four-Level Evaluation Model in Instructional Design,
n.d.). In a 2000 study, the American Society for Training and Development (ASTD) validated that
Kirkpatrick's classic model could be used for any training, both traditional and e-learning (Strother, 2002).
Additional learner experience metrics were derived from "Developing a Usability Evaluation Method for e-Learning
Applications: Beyond Functional Usability," a research study exploring questionnaire-based usability
evaluation methods (Zacharias & Poylymenakou, 2009). Data were collected and analyzed systematically using
a 5-point Likert scale ranging from strongly disagree to strongly agree. Any variable for which 20% or more of
participants agreed or disagreed with the statement was immediately flagged for design revision
consideration. Table 2 outlines the number of participants who completed each evaluation during each
iterative cycle. It was noted that not all users completed all of the evaluation surveys.
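
As a minimal sketch of the flagging rule just described, the Python snippet below computes, for each survey item, the share of participants responding at the disagreement end of the 5-point scale and flags items at or above the 20% threshold. The item names, the sample ratings, and the assumption that ratings of 1-2 count as disagreement are illustrative, not the study's actual data or coding.

# Illustrative sketch of the 20% flagging rule; assumes 1 = strongly disagree
# and 5 = strongly agree, and treats ratings of 1 or 2 as disagreement.
FLAG_THRESHOLD = 0.20


def flag_items(responses: dict) -> list:
    """Return survey items where at least 20% of participants rated 1 or 2."""
    flagged = []
    for item, ratings in responses.items():
        if not ratings:
            continue
        disagree_share = sum(1 for r in ratings if r <= 2) / len(ratings)
        if disagree_share >= FLAG_THRESHOLD:
            flagged.append(item)
    return flagged


# Hypothetical ratings from one iteration of Evaluation 1
example = {
    "Unit was easy to follow": [5, 4, 4, 5, 3, 4, 5, 4, 4, 5],
    "Videos loaded completely": [2, 1, 4, 2, 5, 3, 2, 4, 1, 2],
}
print(flag_items(example))   # -> ['Videos loaded completely']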

Table 2. Participants completing each evaluation per iterative cycle

Evaluation 1, following Unit 1 (Model for Proactive Student Support)
Measures: participant reaction, per Kirkpatrick
Responses: Beta 1: 15; Beta 2: 13; Beta 3: 17

Evaluation 2, following Unit 3b (Navigating the LMS and SIS)
Measures: participant reaction, per Kirkpatrick
Responses: Beta 1: 16; Beta 2: 5; Beta 3: 22

Evaluation 3, following Unit 4 (Grading the Different Courses)
Measures: learner experience, Technology Dimensions, per Beyond Functional Usability
Responses: Beta 1: 15; Beta 2: 9; Beta 3: 22

Evaluation 4, following the Final Unit (Taking Action)
Measures: learner experience, Course Dimension and Design Dimension, per Beyond Functional Usability
Responses: Beta 1: 10; Beta 2: 7; Beta 3: 13

Synchronous Remote Usability Testing
The remote usability tests aimed to measure the site's usability in terms of function and navigation. Remote
testing ranged from 45 minutes to an hour and was conducted one-on-one. A total of eleven usability tests were
conducted, but three of the tests resulted in unusable data and were dismissed from the data analysis. The
framework for the usability test was built around Steve Krug's do-it-yourself guide to fixing usability problems
(Krug, 2010). Testing was hosted remotely from a Blackboard Collaborate room
(www.tinyurl.com/jessicaleauanae) and participants logged in at a prescheduled meeting time. Each session was
recorded; participants used their microphones to communicate but turned off their computer video cameras.
A script was read and followed to ensure consistency, and participants were given a task list prior to the start of
the meeting.

Findings and Data Analysis

Asynchronous Evaluations 1 and 2: Learner Experience, Participant Reaction to the e-learning Environment
Overall feedback regarding content for the training modules was positive, meaning the ratings Agree (44-56%) or
Strongly Agree held the highest percentages (see Figure 2). Participants agreed that the units
were interesting, easy to follow, of appropriate length and content, well organized and clear, and appropriately
challenging, and that the course material was essential to successful online teaching. They also felt the unit material was
an effective use of their time and relevant to their needs and to the organization, that they were able to apply what they
learned, and that the learning motivated them to learn more.

Figure 2. Snapshot of data visualization for Evaluation 1

Asynchronous Evaluation 3: Learner Experience of Technology Dimensions
Overall feedback for the Technology Dimensions was positive, meaning the ratings Agree or
Strongly Agree held the highest percentages, at 32-57%.
Participants felt that:
1. The content fit their needs
2. The content was useful
3. They were unaffected by the training being conducted online
4. The site was easy to use
5. They were able to find content easily
6. The site responded fast enough
7. The content allowed them to evaluate their learning performance as they moved along
8. They were able to control the pace of their learning
9. They were able to access the content they needed by choice
10. They were able to share within a learning community
11. They were able to talk with a trainer, and as a whole they were successful in using the system.

Asynchronous Evaluation 4: Learner Experience of Course Dimensions
Overall feedback for the Course Dimensions was positive, meaning the ratings Agree or Strongly
Agree held the highest percentages, at 47-55%.
Participants felt that:
1. The on-demand component allowed them to complete the material more effectively
2. The advantages outweighed the disadvantages
3. The format allowed them to balance other demands
4. They saved time commuting and finished the training more quickly
5. The course topics were logical in sequence
6. The course allowed for recall of previous knowledge
7. The course used guidance strategies with case studies.

Asynchronous Evaluation 4: Learner Experience of Design Dimensions
Overall feedback was positive, meaning the ratings Agree or Strongly Agree held the highest
percentages, at 44-57%. Participants felt that:

1. Page layouts were clear and organized, with appropriate titles and headers
2. The navigation menu was clear
3. Different font colors contributed to the organization of topic information
4. Images were relevant and appropriately sized for web viewing
5. Instructions were clear and used a variety of media, including video, audio, text and images
6. Unit size and chunking were appropriate
7. Playback features were easy to use
8. Activities were engaging.

Recommendations for Improvement from the Asynchronous Learner Experience Evaluations
The most frequent complaint about the e-learning environment was the quality of the video presentations.
When asked about the major weaknesses of the unit, many participants responded similarly regarding the
syncing of the voiceover with the appropriate screenshots and demos:

1. Unit videos did not load completely
2. The trainer's voice was not in sync with video demos or screenshots
3. YouTube videos were not clear
4. The trainer's voiceover was too fast

Synchronous Remote Usability Test: Krug's Web Usability Testing, Functionality and Usability
A total of 12 participants completed the usability test, but only eight were included in the report findings. Three
were removed for incomplete data or failure to review material prior to the testing date. Each usability test was
conducted individually and lasted from 45 minutes to an hour. Sessions were held in Blackboard Collaborate
and recorded. Data were sorted and entered into a spreadsheet documenting Success, Error and Assist
rates. A Success occurred when the participant was able to complete the assigned task without any assistance
from the test administrator. An Error denotes that the participant was not able to complete the task even with
assistance from the test administrator, and an Assist identifies instances where the user was able to complete the
task with minimal procedural help from the administrator. Time on Task shows how long it took the
participant to complete the task after the instructions were given, including talk-aloud, questioning and
exploration time up until the task was finally completed. There were a total of 18 tasks to complete. The usability
reporting methodology was taken from the National Institute of Standards and Technology's Common Industry
Format for usability reporting (Common Industry Format for Usability Test Reports, 1999). Figure 3 illustrates the data collection
procedures.

Figure 3. Individual Data Gathering Tally Sheet
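
To make the rate calculation concrete, the Python sketch below computes one participant's unassisted-success, assist, and error percentages from a list of per-task outcomes. The outcome labels and sample data are hypothetical and stand in for a row of the actual tally sheet.

# Sketch of the per-participant rate calculation; labels and data are hypothetical.
from collections import Counter

OUTCOMES = ("success", "assist", "error")   # success = completed without assistance


def task_rates(outcomes: list) -> dict:
    """Return the percentage of tasks in each outcome category for one participant."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {o: round(100 * counts.get(o, 0) / total, 1) for o in OUTCOMES}


# Example: one participant's results across the 18 tasks
user_tasks = ["success"] * 12 + ["assist"] * 4 + ["error"] * 2
print(task_rates(user_tasks))   # -> {'success': 66.7, 'assist': 22.2, 'error': 11.1}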

Each group of usability test participants went through a different iteration of the site. The Beta 1 version launched
November 7, 2013, and three participants (Users 1, 2 and 3) tested the site. Beta 1 had the highest number of errors,
that is, instances where the participant could not complete the task even when assisted by the administrator; Users 1, 2 and 3 all had
uncompleted tasks (see Figure 4). The areas where the errors occurred were noted and improved for the Beta 2
version. In Beta 2 there were no errors for any user (Users 4, 5 and 6). Beta 3 (Users 7 and 8) noted one error.
Overall, Time on Task decreased for all users as the site progressed through improvement iterations.

Figure 4 illustrates the percentage of tasks each individual participant completed unassisted, completed with
assistance, or could not complete even with assistance (errors). The proportion of tasks completed unassisted
increases as the iterations progress, assisted tasks decrease slightly, and errors decrease dramatically, with the
exception of an anomaly from User 7.

Figure 4. Usability Testing Task Breakdowns
*Red line indicates when new site iteration was launched and tested

The most significant areas needing improvement revolved around the landing page and the naming conventions
of resources returned by the Search feature (see Figures 5 and 6).





Figure 5. Landing Page Usability Think-Aloud

A search for the Escalation Table revealed the following results, which were still ambiguous.


Figure 6. Search Results, Ambiguous Naming Conventions

Implications and Conclusion

The iterative redesign process of the e-learning environment allowed me to make improvements to the site, as indicated
by the learners, in a systematic and comprehensive approach that addressed both site usability and
learner experience. The learner's ultimate goal in an e-learning environment is to learn. But learning doesn't
occur just because a site passes usability testing or just because formative and summative assessments
indicate it is taking place. Adult learners are characteristically intrinsically motivated, and a positive
learner experience contributes to higher motivation to continue learning. Improvements to the learner experience
included the following items, though this list is not exhaustive of all the site improvements.

1. Migrate heavy text documents into Captivate video presentations with the ability for users to jump
between slides, tab slides as placeholders, search content within the video, and watch embedded video
demonstrations that load completely
2. Improve and add graphics for items that are difficult to visualize or that illustrate a process
3. Add animated graphical presentations to teach task-oriented procedures in the LMS and SIS
4. Change voiceovers to convey a conversational tone rather than an authoritative and robotic one
5. Migrate all demo videos to an unlisted YouTube channel and shorten them to two minutes for easy searchability
within the channel
6. Provide a naming conventions guide and rename the library of documents for accurate search results
7. Integrate more case studies and learning activities with trainer feedback responses
8. Use storytelling in Captivate modules

E-learning environments need to take into consideration the needs and wants, as well as the limitations, of the end user
and the product. Iterative design is critical to ensure the environment is moving in the right direction, and
information gathered from the learner is extremely salient to any recommendation aimed at achieving instructional and
educational goals.

The overall results of this study found that at each stage of iteration there were significant
improvements to be made in terms of usability and learner experience. The rapid prototyping
approach allowed issues to be identified quickly through testing and allowed me to quickly deploy recommended
solutions. Each phase of the iterative process pushed the e-learning environment closer to what the user
feedback suggested: a usable site and a powerful learner experience.







References
An approach to usability evaluation of e-learning applications. (n.d.). Retrieved September 14, 2013,
from http://link.springer.com.eres.library.manoa.hawaii.edu/article/10.1007/s10209-005-0008-6
Common Industry Format for Usability Test Reports. (1999). Industry Usability Reporting Project, 1(1).
Retrieved from http://zing.ncsl.nist.gov/iusr/documents/cifv1.1b.htm#_Toc467573722
Cook, D. (2005). The research we still are not doing: An agenda for the study of computer-based learning.
Academic Medicine, 80(6), 541-548.
Distance education: Guidelines for good practice. (2000). Higher Education Program and Policy Council of
the American Federation of Teachers, 25.
Kirkpatrick's Four-Level Evaluation Model in Instructional Design. (n.d.). Retrieved May 6, 2013, from
http://www.nwlink.com/~donclark/hrd/isd/kirkpatrick.html
Krug, S. (2010). Rocket surgery made easy: The do-it-yourself guide to finding and fixing usability
problems. Berkeley, CA: New Riders.
Squires, D., & Preece, J. (1999). Predicting quality in educational software: Evaluating for learning, usability and
the synergy between them. Interacting with Computers, 11(5), 467-483.
Strother, J. B. (2002). An assessment of the effectiveness of e-learning in corporate training programs.
The International Review of Research in Open and Distance Learning, 3(1). Retrieved from
http://www.irrodl.org/index.php/irrodl/article/view/83/160
Sun, P.-C., Tsai, R. J., Finger, G., Chen, Y.-Y., & Yeh, D. (2008). What drives a successful
e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers
& Education, 50, 1183-1202.
Usability of e-learning tools. (n.d.). Retrieved September 14, 2013, from
http://dl.acm.org/citation.cfm?id=989873
Wang, Y.-S. (2003). Assessment of learner satisfaction with asynchronous electronic learning systems.
Information & Management, 41, 75-86.
Zacharias, P., & Poylymenakou, A. (2009). Developing a usability evaluation method for e-learning
applications: Beyond functional usability. International Journal of Human-Computer Interaction, 25(1), 75-98.
