

Mobilegogy: Is there an app for that?


Investigating a new mobile learning model through an exploratory study of iTunes University courses and cloud-based education platforms

Prepared by B.J. Haddad, Patricia Machun, Catherine Trau, Ann M. Wellhouse, Nadia Zaid

May 12, 2012
Prepared for ED 690, Spring 2012

Table of Contents

Overview .......................................................... 4
Theoretical Backgrounds ........................................... 5
    Location, Technology, Culture, and Satisfaction (LTCS) ....... 6
    Cybergogy ..................................................... 7
    Additional Theories and Models ................................ 9
        Individual theories of learning ........................... 9
        Social theories of learning ............................... 10
    Design for Learning Uses ...................................... 13
    Design for the Mobile Learner ................................. 13
    Cloud-Based Learning .......................................... 14
Contextual Factors ................................................ 16
    Intended Audience ............................................. 16
    The Platform .................................................. 16
    Inter-rater Reliability ....................................... 17
    Mobile Devices ................................................ 18
Methodology ....................................................... 18
    Research Design ............................................... 18
    Participants .................................................. 19
    Samples ....................................................... 20
    Procedure ..................................................... 24
    Analytical Methods ............................................ 24
Findings and Analysis ............................................. 26
    iTunes U Courses .............................................. 26
    Category and Element Ratings .................................. 26
    Platform Findings and Analysis ................................ 36
    Mobilegogy .................................................... 43
Conclusions ....................................................... 46
    Uniquely Mobile Elements ...................................... 46
    Social Learning ............................................... 47
    Commonly Used Elements ........................................ 47
    Types of Content .............................................. 47
    Best Examples ................................................. 48
    Poor Utilization of Available Tools ........................... 48
    Further Study ................................................. 48
References ........................................................ 49
Appendix A: Data Tools ............................................ 53
    Coding Rubrics ................................................ 53
    Final Data Collection Tools ................................... 58
        iTunes U Courses .......................................... 58
        Open-Learning Platforms ................................... 58
    Disarticulated data analysis tools ............................ 58
        Open University vs. Stanford University ................... 58
        Highest vs. Lowest Scoring Courses ........................ 58
        Courses Using iBook vs. Courses not using iBook ........... 58
        Video Only Courses vs. Video Plus Courses ................. 58
Appendix B: Data Tables ........................................... 59

Overview
Mobile learning (mLearning), defined either as learning with mobile devices or as learning anytime, anywhere, has grabbed the attention of educators, technologists, teaching institutions, and governments around the world (Sharples et al., 2010; Wang & Shen, 2011; Xiao, Wang, & Li, 2011). In particular, 2012 has seen an explosion of high-profile educational initiatives that rely on mLearning to achieve a global reach. Technological advances have dramatically expanded the audience of learners with access to mobile devices, while simultaneously enabling educators to produce and distribute content on a massive scale. mLearning is not new, but the recent rapid expansion of the field has intensified the effort to develop standards and guidelines for this evolving arena. Our definition of mLearning synthesizes the above two schools of thought, encompassing both the mobilization of devices and the mobility of the learner (Sharples et al., 2010; Wang & Shen, 2011; Xiao, Wang, & Li, 2011). Design for mLearning should expand the scope of the learning process to capitalize on the potential and possibilities created by the mobility of both learner and technology, such as the ability to interact with one's device in a specific location and to use data capture for observational activities. mLearning should not only allow the learner to take learning out into the world, but also enable the learner to learn from the world. Even though mLearning is used widely in daily life, work, and learning, there is still a lack of principles and guidelines for effectively designing and implementing this type of learning. This study aims to help fill that gap and to create mobilegogy (a pedagogy for mobile learning).

Our efforts are guided by the Location, Technology, Culture, and Satisfaction (LTCS) model (Xiao, Wang, & Li, 2011), an innovative model that builds on Keller's (1983) ARCS model and Shih's mLearning model (Shih & Mills, 2007). In addition, some design elements were derived from Cybergogy (Wang & Kang, 2006; Wang, 2008). In this study, we used content analysis to systematically analyze a sample of mobile learning courses and platforms. By filtering and synthesizing concepts derived from existing learning theories, design models, and research in mLearning and electronic learning (eLearning), we created two comprehensive rubrics that were used to analyze (1) mobile courses and (2) educational platforms within the framework of a modified LTCS model. These rubrics were then used to analyze mobile design for courses from iTunes University (iTunes U) and for the iTunes U, MITx, Coursera, and Udacity educational platforms. Based on the results of the analysis, we provide recommendations to: enhance the definition of mLearning; improve the design of mobile educational content; more effectively use mobile devices to enhance learning experiences and outcomes around the world; and refine Wang, Xiao, and Novak's LTCS model to create an innovative model of mLearning: Mobilegogy.

Theoretical Backgrounds
The review of the literature included surveys of learning theories, models, and design principles as they relate to mobile instructional design; an examination of iTunes U as an educational platform; and an exploration of cloud computing with a focus on cloud-based education platforms. The researchers' primary objectives for the review included:
- Developing a comprehensive content analysis rubric incorporating elements derived from established and evolving learning theories, models, and design principles as they relate to mobile instructional design.
- Building a strong knowledge base of iTunes U's current practices and new initiatives to enable effective choices in sampling schemes and research methodology.
- Identifying new and emerging cloud-based education platforms to evaluate, with the goal of identifying practices that can be adopted by or adapted for mLearning.
Below is a summary of literature detailing key theories, models, and research from which we derived the variables incorporated into the analysis rubrics for iTunes U courses and cloud-based educational platforms.

Location, Technology, Culture, and Satisfaction (LTCS)


The Location, Technology, Culture, and Satisfaction model (Wang et al., 2012), a broad model that applies to the design of mobile learning for corporate and higher education, provided the framework within which we developed the rubrics. The first comprehensive model for designing mobile learning, it supports various learning theories and existing models in the social sciences and can be used to construct mobile learning platforms or to design or adapt materials and resources for mobile learning. This model builds on Keller's ARCS model and Shih's mobile learning model (Shih & Mills, 2007) and suggests that the design and implementation of mLearning need to consider four critical variables: Location, Technology, Culture, and Satisfaction.

Location refers to the learner's location at the macro level (local or global). At the micro level, location is the setting in which the learner will participate in the learning (e.g., formal classroom, café, subway). Technology includes the device and learning platform, as well as teaching and learning elements. Culture refers to the cross-cultural dimensions of globalized eLearning and mLearning (Wang et al., 2012). This portion of the model guides the learning design to remove or minimize cultural barriers, enabling multicultural learners to achieve equitable learning outcomes. Finally, Satisfaction sits at the center of the model: for learner satisfaction to be achieved, the variables of Location, Technology, and Culture must be carefully considered in the design of learning materials.

Cybergogy
At the core of cybergogy is an awareness that strategies used for face-to-face learning may not be the same as those used in the virtual environment. The Cybergogy for Engaged Learning model, as Wang and Kang (2006) present it, has three overlapping/intersecting domains: cognitive, emotive, and social (Figure 1). The authors argue that engaged learning will occur when the critical factors in each domain are well attended to, encouraging learners' cognitive, emotive, and social presence. The model was created particularly for online settings that involve more generative and constructive learning activities. For the online learning experience to be successful, students must be furnished with prior knowledge, motivated to learn, and positively engaged in the learning process. In addition, Wang and Kang suggest students must also be comfortable with the learning environment and feel a strong sense of community and social commitment.

Figure 1: Cybergogy for engaged learning (Wang & Kang, 2006)

The Cybergogy Model for Engaged Learning reflects a systemic approach to online learning. The key features of this systemic view include: a) putting the right people, elements and resources in place to succeed; b) evaluating results through learning outcomes; and c) providing feedback and taking action to maintain alignment with established educational and societal goals. Therefore, the term Cybergogy becomes a descriptive label for the strategies involved in creating engaged learning online.

Additional Theories and Models


After establishing the LTCS framework, the researchers filtered and synthesized concepts from the following theories and models to develop analytical variables specific to mLearning.

Individual theories of learning.


Dual coding theory. Dual coding theory states that humans process visual and verbal information differently, along distinct cognitive pathways (Paivio, 1990). Wang and Shen (2011) recognized that dual coding conditions exist in mLearning just as they exist in all instructional media environments. Extending the theory to mLearning suggests that instructional designers should include both verbal and visual information only where the interconnection of the two will have an additive effect on learning outcomes. Cognitive load theory. When Chandler and Sweller (1991) applied cognitive load theory to different formats of instruction, they found that instructional materials designed to minimize a learner's cognitive load tended to be more effective than conventional materials. After recognizing the dual coding conditions that exist in mLearning, Wang and Shen (2011) cautioned that an overuse of verbal and visual information could increase the learner's cognitive load. If verbal and visual information are to be used together, they should be placed in appropriate combinations to maximize instructional effectiveness (p. 3).

Social theories of learning.


Transactional distance theory. Moore's (1997) theory of transactional distance identifies three variables that determine the psychological and communications space, or transactional distance, between learners and instructors: dialogue between learners and instructors; structure of the instructional programs; and autonomy of the learners. In this theory, transactional distance is inversely related to dialogue and directly related to structure and autonomy (Moore, 2007). Instructional designers adhering to transactional distance theory should adjust these three variables to minimize the transactional distance between learners and instructors, thereby improving learning outcomes. Teall, Wang, Ng, and Callaghan (2011) used transactional distance theory to conceptually organize various frameworks and mLearning design guidelines found in the literature. They found that eLearning design principles applied directly to mLearning environments resulted in a diminishment of dialogue and structure and an increase in autonomy; this change led to an increase in transactional distance. The authors argue that this suggests mLearning is more than an evolutionary offshoot of eLearning: mLearning is not a species of the genus of eLearning, but both are genera of learning in general, although closely related (Discussions section, para. 8). Activity theory. Activity theory is a framework for analyzing the structural properties of a system of activity amongst various people, objects, and communities (Engeström, Miettinen, & Punamäki, 1999). Sharples et al. (2010) developed a framework for analyzing mLearning using activity theory to identify

tensions and contradictions between mLearning systems and more conventional education systems that might inhibit learners. They argued that conversation and context are essential components for mLearning initiatives that extend learning opportunities outside the classroom and into everyday life. Social development theory. Vygotsky (1978) identifies social interaction as fundamental to cognitive development and learning, arguing that all higher functions originate as relationships between individuals. Shih and Mills (2007) maintain that social development theory is one of the most relevant theories for mLearning: mLearning allows for an exchange of information about learner materials, interactive practice amongst groups of learners, and the sharing of experiences that expand each learner's cognitive development. These ideas are incorporated into Shih's mLearning Model, use of which enabled instructors to improve students' online learning experiences and resulted in better learning outcomes than traditional instruction.

Design Principles for mLearning

Design for multimedia presentation. Leading researchers in mobile learning, such as Churchill (2011) and Wang and Shen (2011), provide design strategies for presenting information on small screens:
- Information should be presented visually using photographs, graphs, and illustrations, without being redundant. The color, size, and position of the visuals are important as well, and designers should take download speeds and connectivity issues into consideration (Wang & Shen, 2011).
- Scenarios should support a multimedia approach when representing the concept (Churchill, 2011).
- Audio and video should be used only where they are effective for representational purposes, and should be selected to override possible environmental limitations. Wang and Shen (2011) warn that audio may not be effective in noisy environments or for the hearing impaired.
- Wang and Shen (2011) further state that designers need to use color in moderation. They recommend colors that remain constant in different environments and provide clear design standards for effective uses of color:
  o To discriminate between elements of a visual.
  o To focus attention on relevant cues.
  o To code and link logically related elements.

Design for small screens. The message design should coordinate language, images, signs, and symbols so they work in tandem to lessen the cognitive load and help retain learning. Churchill's (2011) recommendations include the following message design guides:
- Design for landscape presentation, as it offers more flexibility than portrait.
- Minimize scrolling.
- Incorporate task-centered information into the lesson.
- Create one-step interactions, such as changing the position of a slider to produce immediate updates.
- Design floating, collapsible, overlapping, semi-transparent interactive panels in order to maximize the information presented and allow for interactivity.


Design for Learning Uses


When designing for mLearning, the designer must also take into consideration how to combine the mobile device with the mobile learner. Churchill's (2011) five recommendations are:
1. Design for Observation - Scenario-based learning modules allow the learner to make correlations between the real world and the lesson.
2. Design for Analytical Use - Learning should allow users to enter data from their environment for analysis via sliders, hot-spot areas, etc.
3. Design for Experimentation - Students should be able to explore within the lesson, changing data, scenarios, etc., and see the results of their input.
4. Design for Thinking - The course should be designed using the mobile device as an investigative tool, allowing for complex situations that encourage high-level thinking.
5. Design for Reuse - Designers for mLearning should take into account the different environments and activities in which the lesson could be used.

Design for the Mobile Learner


A key to mLearning design is recognizing how the placement of images, spoken language, and printed words enhances learning outcomes (Wang & Shen, 2011). mLearning content design should be highly adaptable to fit the varied activities, locations, and contexts of mobile learners.

Performance support - Learners can access job aids or performance support via the mobile device. Job aids should be simple, short, and text-based. Push texts are used as reminders of quizzes, lesson reviews, and questions to students (Elias, 2011).

Design for location - Content is tailored to the learner's location and environment (Elias, 2011, p. 150; Churchill, 2011, p. 211).

Social learning - Learners interact with each other, develop relationships, and provide support for one another using various communication methods like email, texting, voice, and video (Elias, 2011).

Cloud-Based Learning
Cloud computing. Cloud computing, though not universally defined, has some agreed characteristics and models. Two inherent characteristics are elasticity (the scaling up of resources) and resource pooling (running various independent services) (Hirsch & Ng, 2011). The three most common service models of cloud computing are:
1. Infrastructure as a Service (IaaS) - where users can get on-demand computing and storage to host, scale, and manage applications and services (Rao, Sadidhar, & Kumar, 2010, p. 43).
2. Platform as a Service (PaaS) - where the cloud provider supplies a set of services with a development environment, for users to develop their own applications (Hirsch & Ng, 2011).
3. Software as a Service (SaaS) - where the consumer uses online or mobile applications hosted by the cloud provider (Hirsch & Ng, 2011).
At the heart of cloud computing are abstraction and virtualization set on top of a dynamic distributed server architecture, which provides users with the flexibility to create, share, save, and collaborate anytime, anywhere.


Cloud learning. Cloud education, or cloud learning, is a new and emerging concept associated with cloud computing. Cloud learning is built on the three service models (IaaS, PaaS, and SaaS) but also has various definitions. Here we define cloud learning as a shared pool of learning courses, digital assets, and resources, which instructors and learners can access via computers, laptops, IP-TVs, mobiles, and other portable devices. Learners can collaborate anywhere in the Cloud: study, experiment, explore, complete tasks, and provide assistance to others. Learners in the Cloud can also select suitable resources and record individual learning outcomes and processes. The characteristics of cloud learning are as follows:
- Storage and sharing
- Universal accessibility
- Collaborative interactions
- Learner centered

The Recursive Intelligent Learning Model (Wang, Ng, & Xiao, 2011). Mobile cloud education, a novel unification of cloud and mobile learning, is a relatively new concept that holds great promise for the future development of education. Wang proposes the Recursive Intelligent Learning Model (Wang, Ng, & Xiao, 2011), a theoretical framework for cloud-based intelligent learning. This framework can be used to guide the design and development of an intelligent mobile cloud education system, which can situate learners in an intelligent learning environment (Figure 2).


Figure 2: Intelligent Learning Model

Contextual Factors
The researchers faced limitations in the following areas, which may have an impact on the final results.

Intended Audience
One constraint is the intended audience. While a small portion of the data includes K-12 content due to the randomization of sampling, the primary focus of the project was higher education. This research focuses on formal educational message design at the college or university level, not the use of mLearning in the corporate world. Although this study's recommendations may apply equally in the private and public sectors, its purpose is not to determine whether message design differs between the two.

The Platform
Research was based on the offerings of one platform, iTunes U. As mLearning is a relatively new reality, there is not an abundance of platforms offering courses meant to be viewed primarily in a mobile setting. This study analyzed other platforms; however, it did not analyze their courses. At the start of 2012, iTunes U changed its format, allowing greater versatility, and introduced a free authoring program, iBooks Author. This new feature allowed colleges and universities to create downloadable textbooks for their courses. Due to the newness of these upgrades, it was difficult for the researchers to determine whether offerings without an iBook reflected an institution's deliberate choice or merely the recent availability of iBooks Author. iTunes U's catalog includes over 500,000 lectures, videos, and other educational resources, covering a broad scope of subjects and designed by hundreds of universities and other third-party entities. Courses selected for sampling were initially based on how often they were downloaded or viewed, which determined their appearance in the Top Charts listing. Out of the more than 500,000 offerings, only the first 200 appear in that listing, limiting the selection available when random purposive sampling was used.

Inter-rater Reliability
Some of the variables used could be subjective, and each course was reviewed by only one researcher. To maintain better inter-rater reliability, prior to actual data collection all researchers analyzed the same course using a rubric that defined each category. In reviewing the results, they were able to come to an agreement regarding what each variable was measuring and edited the rubric when applicable. However, this remains a possible source of error.


Mobile Devices
Data were collected by viewing courses on iPhones; no other types of mobile devices were included, and the study does not tackle which device is best for mLearning. While tablets may also be subject to mLearning design, any pedagogical requirements specific to tablets are not broached in this study due to time constraints. We posit that the study's limitations do not negate the validity of the results. Due to the abundance and diversity of its courses, iTunes U gave us a very broad sample to analyze. Inter-rater reliability was supported by a rubric and a consensus on what the variables meant. Because data were collected using one of the more restrictive viewing devices, a mobile smartphone, the findings should be applicable when using most other mobile devices.

Methodology
Research Design
In order to understand and explore contemporary mLearning design from both course and platform development perspectives, this study features content analyses (Fraenkel, Wallen, & Hyun, 2011) of (1) publicly available iTunes U courses and (2) multiple cloud-based educational platforms. The unit of analysis is therefore a single iTunes U course or a single cloud-based education platform. This study's design and content categorization were informed by the aforementioned literature review of mLearning theories, guidelines, and design principles, which identified effective mLearning design standards as well as theoretical frameworks and models for organizing those standards. The objective of these content analyses was to obtain descriptive information about how current mLearning design practices at both course and platform levels compare to accepted mLearning theories and guidelines.

Participants

iTunes U. iTunes U is the division of Apple's iTunes Store that manages, distributes, and controls access to over 500,000 lectures, videos, and books. Content is submitted by K-12 school districts and more than 1,000 universities and colleges in 26 countries, as well as respected museums, libraries, and public broadcasting stations. Users can download content from the store and view it on a Mac or PC, as well as on most internet-connected mobile devices. However, mobile streaming abilities, special features such as synchronized notes, highlights, and bookmarks, and enhanced materials such as instructor notes, assignments, and iBook integration are only available on iOS devices (e.g., iPhone, iPad).

MITx. MITx, the Massachusetts Institute of Technology's (MIT) online learning initiative, launched in Spring 2012 and targets a virtual community of learners around the world. MIT grants free open access to anyone. Learners watch videos, complete assignments and evaluations, access online labs, and collaborate with other students (Parry, 2011). MITx operates on an open-source, scalable software infrastructure that will be available to other educational institutions (http://mitx.mit.edu).

Udacity. Udacity is a new cloud education venture, also launched in 2012, spearheaded by Sebastian Thrun, formerly a Stanford professor and Google fellow. Thrun aspires to build a free virtual campus that could reach the whole world (Salmon, 2012) (http://www.udacity.com/us).


Coursera. Coursera, yet another 2012 launch, is a social entrepreneurship company partnering with universities such as Princeton University, Stanford University, the University of California, Berkeley, the University of Michigan-Ann Arbor, and the University of Pennsylvania to offer free online courses that anyone can take. Coursera touts key pedagogical constructs that include mastery learning, interactivity, and frequent feedback (https://www.coursera.org).

Samples

iTunes U courses. Two samples of iTunes U courses were gathered and analyzed. The first was a homogeneous sample (Fraenkel et al., 2011) of the top 20 courses in the United States on April 2, 2012, as reported by the iTunes U Top Charts course list. This sample was designed to evaluate the iTunes U courses being downloaded by the most people. After collecting and analyzing the top 20 sample, we gathered a second, more representative sample drawn from the top 200 iTunes U courses; due to limited resources and time, this sample comprised 11 courses. It was a systematic sample (Fraenkel et al., 2011) with a random start within the top 20 courses, which resulted in the first course collected being #19 of the top 20. The sample was collected on April 12, 2012 in the United States, with a sampling interval of 18 and a sampling ratio of 5.5%, yielding 11 courses from all levels of the top 200 course list.
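Expressed in code, this sampling scheme amounts to a random start followed by a fixed stride. The sketch below is our illustration rather than the study's actual tooling; the placeholder course list, the function name, and the fixed seed are assumptions.

```python
import random

def systematic_sample(ranked_courses, interval=18, seed=None):
    """Draw a systematic sample with a random start within the first interval.

    ranked_courses: a ranked list of course identifiers (e.g., a top-200 list).
    interval: the sampling interval (this study used 18).
    """
    rng = random.Random(seed)
    start = rng.randrange(interval)  # random start at index 0..interval-1
    return ranked_courses[start::interval]

# Illustrative use with 200 placeholder course ranks: an interval of 18
# yields roughly 11 courses, a sampling ratio of about 5.5% (11/200).
top_200 = ["course_%d" % rank for rank in range(1, 201)]
sample = systematic_sample(top_200, interval=18, seed=42)
print(len(sample), sample[:3])
```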

Open-learning platforms. A purposive sample (Fraenkel et al., 2011) of four open-learning platforms was evaluated: iTunes U, Udacity, MITx, and Coursera. We chose these platforms for their popularity, public availability, and varied approaches to cloud learning and mLearning design.

Instrumentation. We created two coding instruments, one for iTunes U courses and the other for open-learning platforms. These instruments are loosely based on existing instruments (Brown, Hersey, Misoni, Teall, & Wang, 2011). In addition, both instruments contained the essential mLearning guidelines identified in the literature and were categorically organized according to Xiao, Wang, and Li's (2011) LTCS Model for Designing mLearning Material:

- Location - mobility, location specific.
- Technical (Technology) - limitations, abilities, compatibilities.
- Culture - global audience, cultural sensitivity, accessibility.
- Satisfaction - motivation, interactivity, relevance.
- Design - content organization, presentation, length.

In order to determine whether or not these guidelines were being followed, we identified elements that would represent the guidelines within a given course (see Table 1) or platform (see Table 2).


Location: Multiple content; Situational learning
Technical: Data capture; Cross-device compatibility
Culture: Cultural differences; Globalized English; Captioning
Satisfaction: Adaptive learning; Practice; Assessment; Real world applications; Learner attention; Social interactions; Pace-Learner controlled; Formative feedback
Design: Effective video; Effective color; Effective audio; Effective font; Limited scrolling; Content chunking

TABLE 1: COURSE INSTRUMENT CATEGORIES AND ELEMENTS


Location: Location neutral
Technical: Sensor access & integration; Notification system; Cross-device compatibility; Content delivery
Culture: Language support; Voice recognition; Voiceover; Closed captions
Satisfaction: Direct multimedia navigation; Instant feedback; Social networking; Pace learner controlled; Content learner controlled
Design: Landscape orientation; One-step interactions; Data entry; Limited scrolling

TABLE 2: PLATFORM INSTRUMENT CATEGORIES AND ELEMENTS

For each course and platform, the corresponding elements were rated on a three-point coding scheme (see Appendix A, Tables A1 and A2, for coding rubrics): Not present (No) = 1; Included somewhat = 2; Included extensively (Yes) = 3. To maintain better inter-rater reliability, prior to actual data collection we analyzed the same course/platform using a coding rubric that defined each category. In reviewing the results, we were able to come to an agreement on what each variable was measuring and edited the rubrics when applicable. We used these rubrics to guide course/platform coding and checked all compiled data for reliability at the end of the data collection period.
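The rubric and its three-point scheme lend themselves to a simple data representation. The following Python sketch is ours, not the study's actual spreadsheet setup; the element names are transcribed from Table 1, while the function and variable names are illustrative.

```python
# Three-point coding scheme applied to every element
NOT_PRESENT, SOMEWHAT, EXTENSIVE = 1, 2, 3

# Course-instrument categories and elements, transcribed from Table 1
COURSE_INSTRUMENT = {
    "Location": ["Multiple content", "Situational learning"],
    "Technical": ["Data capture", "Cross-device compatibility"],
    "Culture": ["Cultural differences", "Globalized English", "Captioning"],
    "Satisfaction": ["Adaptive learning", "Practice", "Assessment",
                     "Real world applications", "Learner attention",
                     "Social interactions", "Pace-Learner controlled",
                     "Formative feedback"],
    "Design": ["Effective video", "Effective color", "Effective audio",
               "Effective font", "Limited scrolling", "Content chunking"],
}

def category_score(ratings, category):
    """Sum the 1-3 ratings of one category for a single coded course."""
    return sum(ratings[element] for element in COURSE_INSTRUMENT[category])

# Illustrative coded course: every Design element rated "somewhat" (2)
ratings = {element: SOMEWHAT for element in COURSE_INSTRUMENT["Design"]}
print(category_score(ratings, "Design"))  # 12 of a possible 18
```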


Procedure
We divided the iTunes U courses and open-learning platforms among the researchers, and iPhones were used to survey each course and platform. Each course or platform was coded using the corresponding instrument (see Appendix A, Tables A1 and A2, for coding rubrics). Additionally, descriptive comments were collected on particularly interesting or crucial course/platform features. All data were collected in Google spreadsheets for review and analysis.

Analytical Methods

Descriptive statistics. The statistical analysis was completed using Google spreadsheet formulas for the mean, median, mode, standard deviation, and percent. These statistics were calculated separately for the Systematic and Top 20 samples. Standard deviation was determined for the total point range of all the variables. Standard deviation was not determined for the range of ratings of each variable within each course, because those means are based on three whole-number ratings: 1 (No), 2 (Somewhat), and 3 (Yes). Standard deviation was also determined for the variables and courses when grouped.
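A rough Python equivalent of those spreadsheet formulas, using only the standard library, is sketched below. The score values are invented for illustration; the 63-point maximum per course is derived from the 21 elements of Table 1 at 3 points each.

```python
import statistics

# Hypothetical per-course total scores for one sample (values are invented)
scores = [34, 41, 38, 29, 45, 38, 33, 40, 36, 31, 38]

mean = statistics.mean(scores)
median = statistics.median(scores)
mode = statistics.mode(scores)
sd = statistics.stdev(scores)  # sample standard deviation

# Percent of total points possible: 21 elements rated at most 3 each
max_per_course = 21 * 3
percent = 100.0 * sum(scores) / (max_per_course * len(scores))

print("mean=%.1f median=%s mode=%s sd=%.2f percent=%.0f%%"
      % (mean, median, mode, sd, percent))
```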

Inferential statistics. GeoGebra was used for the analysis of variance (ANOVA) to determine whether there are significant differences among course groups: video only vs. video plus courses; UK Open University vs. Stanford University authored courses; courses that used iBook vs. courses that did not; and highest vs. lowest scoring courses. In each case, the null hypothesis states that there is no difference between the two groups. The confidence level (alpha) was set at 95% (P = 0.05); if the P value for a comparative group is 0.05 or less, the null hypothesis is rejected and the difference between the groups is considered significant.

Location (50%): Multiple forms content, C; Situational learning, F
Technical (56%): Data capture, F; Cross-device compatibility, B
Culture (60%): Cultural differences, C; Globalized English, B; Captioning, D
Satisfaction (63%): Adaptive learning, F; Practice, C; Assessment, C; Real world applications, B; Learner attention, B; Social interactions, F; Pace-Learner controlled, A; Formative feedback, C
Design (76%): Effective video, B; Effective color, C; Effective audio, B; Effective font, B; Limited scrolling, B; Content chunking, C

TABLE 3: GRADED COURSE INSTRUMENT CATEGORIES AND ELEMENTS

Grades. We determined the percent of total for both course and variable scores, and each course and variable was also assigned a letter grade. The following standard grade scale, based on deviation from the mean, was developed and used to determine the letter grades:
- Standard deviation above and below the mean = C range
- 1 standard deviation below the lower C range = F
- 1 standard deviation above the upper C range = A

The percentage of total points was determined for individual courses and individual variables separately. The number of courses was not the same for each sample and group, so determining the percent of the total points possible for each sample and group normalized the data, allowing meaningful comparisons to be made.
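Implemented directly, that grading scale could look like the sketch below. This is our reading of the scale, not the study's code; in particular, the half-width of the C band is assumed to be 0.5 standard deviations, since the exact figure is not stated above.

```python
import statistics

def letter_grade(score, all_scores, c_half_width=0.5):
    """Grade a score using standard-deviation bands around the mean.

    The source defines a C band around the mean, an F more than one
    standard deviation below the lower C bound, and an A more than one
    standard deviation above the upper C bound; B and D fall in between.
    c_half_width (0.5 SD here) is our assumption, as the exact band
    width is not given.
    """
    mean = statistics.mean(all_scores)
    sd = statistics.stdev(all_scores)
    lower_c = mean - c_half_width * sd
    upper_c = mean + c_half_width * sd
    if score < lower_c - sd:
        return "F"
    if score > upper_c + sd:
        return "A"
    if score < lower_c:
        return "D"
    if score > upper_c:
        return "B"
    return "C"

scores = [34, 41, 38, 29, 45, 38, 33, 40, 36, 31, 38]  # invented totals
print([letter_grade(s, scores) for s in scores])
```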

Group comparisons. The data for the courses were grouped in several ways for comparison:

- Original Systematic versus Top 20 course samples.
- Courses with more than video (Video Plus) versus Video Only courses.
- Providers with several courses represented: Open University and Stanford.
- Courses with an iBooks text versus those that did not use iBook.
- Three highest scoring courses versus three lowest scoring courses.
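The ANOVA comparisons described under Inferential statistics were run in GeoGebra; an equivalent one-way ANOVA for any of these groupings can be sketched in Python with SciPy. The group scores below are invented for illustration and do not reproduce the study's data.

```python
from scipy.stats import f_oneway

# Invented per-course total scores for two comparison groups
video_only = [28, 31, 26, 30]
video_plus = [40, 43, 38, 45, 41, 39, 44]

f_stat, p_value = f_oneway(video_only, video_plus)

# Null hypothesis: no difference between the groups (alpha = 0.05)
if p_value <= 0.05:
    print("p = %.4f: reject the null hypothesis; "
          "the groups differ significantly" % p_value)
else:
    print("p = %.4f: fail to reject the null hypothesis" % p_value)
```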

Findings and Analysis


iTunes U Courses
We collected data to evaluate courses within the iTunes U platform as a way to inform the development of Mobilegogy, a new model for mLearning. The major question asked was: To what extent are elements in the categories included in iTunes U courses? Table 1 (see Instrumentation) shows the elements within each category.

Category and Element Ratings


Location and technical categories. While the Location and Technical categories received fewer points, there were elements within those categories that stood out.


TABLE 4: NUMBER OF COURSES USING LOCATION ELEMENTS SOMEWHAT TO EXTENSIVELY: MAX=30

The element Multiple Forms of Content, in the Location category, received a C grade, but 75% of the highest ranked courses (grade B or better) received a Yes (3 points) for this element. This suggests that developers of quality courses considered providing multiple forms of content important for creating effective courses. This is an element that might be considered essential to mLearning models.

TABLE 5: NUMBER OF COURSES USING TECHNICAL ELEMENTS SOMEWHAT TO EXTENSIVELY: MAX=30

Although the Technical category received a low number of points overall, it contains one element with a B grade: Cross-device compatibility. This element is a very important support for worldwide learner mobility. The data suggest that most of the courses allow the learner to use multiple types of electronic devices. This is also an element that might be considered essential in an effective mLearning model. The other two elements that received an F grade, situational learning and observational learning, were specifically related to technology beyond what might be available worldwide at this time. Using GPS and data capture technology to promote learning were interesting elements to include, but may not yet have research data supporting their use as elements that improve learning. They are certainly part of mLearning's potential; they did not appear to be supported by the iTunes U platform.

Culture category. The use of English that excludes contractions and colloquialisms assists non-English-speaking students and English language learners (Brown et al., 2011). That most of the courses, and all of the high scoring courses, used globalized English indicates that iTunes U was providing courses usable by learners around the world whose only access to learning may be a mobile device such as a smartphone. The element Globalized English, in the Culture category, received a B grade, much better than the other two elements within the category.

TABLE 6: NUMBER OF COURSES USING CULTURE ELEMENTS SOMEWHAT TO EXTENSIVELY: MAX=30

Captioning improves accessibility for deaf students as well as students who do not speak English well. Only three courses included any captioning. There was no restriction in the platform regarding captioning, so this was purely a decision on the part of course providers and could easily have been added to all of the courses. The same is true for sensitivity to cultural differences: less than half of the courses included multicultural examples. This, again, was by design of the course providers. The elements in the higher scoring categories, such as Satisfaction, were a central focus of course providers.

Satisfaction category. The Satisfaction category included elements considered important for mastering the content. While the category received only 63% of the points possible, it was the second highest rated category and included the element that received the highest grade: learner control of the pace. This element received the only A grade among the elements in Table 1 (see Instrumentation). None of the course Satisfaction elements were limited by the iTunes U platform except for social interactions. Two other elements in the Satisfaction category received a B grade: Learner Attention and Real World Applications (Churchill, 2011). Shih (2007) ranked learner attention as a primary motivational element of instruction, and Shih's model recommends real world applications as important for mLearning.

TABLE 7: NUMBER OF COURSES USING SATISFACTION ELEMENTS SOMEWHAT TO EXTENSIVELY: MAX=30

The Satisfaction element Practice received a C grade, and many of the courses did not provide any practice. Martin suggested that this is an important element because he found that students whose courses included practice had the highest achievement and satisfaction (Martin, Klein, & Sullivan, 2007). Feedback also received a C grade. Hannon, Umble, Alexander, Francisco, Steckler, Tudor, and Upshaw (2002) recommended the use of class list-serves to facilitate timely feedback and to elucidate challenging content, but the courses mostly did not include those types of feedback; the feedback provided mostly consisted of answers to problem sets. The lowest rated elements made a strong statement. That the courses generally did not include them might indicate that those specific elements were not important, but could equally be interpreted to mean the courses and/or platforms were incomplete in these areas. Two of the elements that received an F grade, adaptive learning and social interactions, were considered important by many learning theorists (refer to Social Theories of Learning in the literature review above). The inclusion of adaptive learning strategies could have improved outcomes for all students who took the courses. Nothing in the platform limits designing courses for adaptive learning; this was a choice made by the course developers. Medina (2008) commented that every brain is wired differently and encouraged individualized learning. Mobile learners from around the world approach courses with many levels of experience, skill, and knowledge and could benefit from learning opportunities that adapt to their needs. Opportunities for asynchronous social interactions such as forums, blogs, and chats were found only in the TED lectures, which provided a link out to a forum. Other courses could have provided external sites for this purpose, but such sites would have to be maintained by the course developers. The platform itself did not appear to provide social learning opportunities; their absence implies a weakness in the iTunes U platform.

Design category. The Design category received the highest grade, with 76 percent of the points possible (refer to Table 8 below), and all elements in the category received at least a C grade. The courses generally had effective visuals, audio, and fonts, and scrolling was kept to a minimum (refer to Table 1 in Instrumentation). The use of color and content chunking were not as highly rated.

TABLE 8: NUMBER OF COURSES USING DESIGN ELEMENTS SOMEWHAT TO EXTENSIVELY: MAX=30

Within the Design category, the highest scoring element was Effective Use of Audio: ninety percent of the courses used audio somewhat to extensively. While Wang and Shen (2011) suggest that audio should be used primarily for representational purposes, the courses mostly included audio for many purposes, from pure entertainment to instructor/student contact as well as delivering content. This is a drawback for hearing-impaired students unless the audio is coupled with associated visuals and captioning. The element Effective Use of Visuals also scored high, so it seems that many of the courses included both effective audio and effective visuals. Captioning was available in only three courses.

Together, the design elements create the window through which the learner accesses the course, so they are important (Wang & Shen, 2011). It is significant that the iTunes U courses used those elements effectively and that the platform allowed for their effective use. Any omission or diminished use of design elements was due to course providers' design and development choices.

Category    Location%  Technical%  Culture%  Satisfaction%  Design%
Systematic  48         50          57        58             77
Top 20      52         58          61        66             77

TABLE 9: PERCENT OF TOTAL POINTS POSSIBLE WITHIN CATEGORY RECEIVED

The Course Samples. The iTunes U courses were sampled twice. Data were collected from the first twenty (20) top-ranked courses, and then a systematic sample of eleven (11) courses was analyzed. One course occurred in both samples. The systematic sampling included some of the bottom-ranked courses. The grade average for all the courses was a C. (Refer to the link for the Final Course Data Collection Tool in Appendix A.)

The Top 20 sample outscored the systematic sample in all categories and all elements, and the course scores were probably significantly different (refer to Table 13 in Appendix A). The course with the highest grade, Introduction to China (grade A), was in the Top 20 sample, and the two courses with the lowest grades (F) were in the systematic sample. The categories ranked the same in both samples and were equivalent to the category rankings for all courses. Analysis of variance (ANOVA) tests for these two samples weakly indicated that there was no difference between them in terms of inclusion of elements. (Refer to Table 13 in Appendix A for complete ANOVA results.) This suggests that our total course sampling might be representative of iTunes U courses overall.

Course groups: video only vs. video plus. Originally, iTunes U was a repository for podcasts and other video- or audio-only presentations; in Q1 of 2012 the platform was changed. The course data were separated into Video Only courses and Video Plus courses to compare the inclusion of elements within each group (Table 4). The TED lectures made up the majority of the video-only courses within the current samples. The courses that included a range of elements, the Video Plus group, outscored the Video Only courses in all categories and all elements. The ANOVA test results (from Table 13 in Appendix A) made a strong statement that there was a significant difference between the two groups. Based on these results, we concluded that courses based entirely on video do not incorporate enough of the learning elements to be recommended.

Course groups: UK Open University vs. Stanford University. Several course developer organizations have been making large numbers of their courses available through iTunes U. The United Kingdom Open University (Open U) and Stanford University had the highest number of courses in the samples.

Category  Location%  Technical%  Culture%  Satisfaction%  Design%
Open U    52         46          65        65             72
Stanford  56         56          46        62             68

TABLE 10: PERCENT OF TOTAL POINTS POSSIBLE WITHIN CATEGORY RECEIVED

Open U courses scored the highest points in the three categories that were also the highest ranked overall: Design, Satisfaction, and Culture (Table 5).

TABLE 11: NUMBER OF OPEN UNIVERSITY COURSES USING CULTURE ELEMENTS SOMEWHAT TO EXTENSIVELY: MAX=8

That these courses used most of the elements extensively provides positive support for iTunes U courses as effective learning tools.

TABLE 12: NUMBER OF OPEN UNIVERSITY COURSES USING DESIGN ELEMENTS SOMEWHAT TO EXTENSIVELY: MAX=8

TABLE 13: NUMBER OF OPEN UNIVERSITY COURSES USING SATISFACTION ELEMENTS SOMEWHAT TO EXTENSIVELY: MAX=8

It also demonstrates that the iTunes U platform allows for the inclusion of effective elements of instruction. This suggests that many Open U courses might be considered when developing a standard for quality learning opportunities in iTunes U. Open U courses took advantage of the iTunes U platform to develop courses that provided more complete learning environments.

Course groups: courses that used iBook and courses that did not. Several of the courses included iBook texts, and iBook use was supported by the iTunes U platform. These texts appeared to offer rich learning experiences. A comparison was made between these courses and the courses that did not offer iBook texts (Table 6). The courses using iBook texts scored higher in the Satisfaction and Design categories. The results favor the idea that an electronic text could enhance online courses.

Category        Location%  Technical%  Culture%  Satisfaction%  Design%
iBook used      61         54          58        69             82
iBook not used  45         56          60        61             74

TABLE 14: PERCENT OF TOTAL POINTS POSSIBLE WITHIN CATEGORY RECEIVED

Course groups: highest and lowest scoring courses. The three highest scoring courses outscored the three lowest scoring courses in all categories (Table 7). The three highest scoring courses might be good examples of how to develop courses for iTunes U.

Category                   Location%  Technical%  Culture%  Satisfaction%  Design%
3 highest scoring courses  67         50          70        78             93
3 lowest scoring courses   33         44          52        44             65
TABLE 15: PERCENT OF TOTAL POINTS POSSIBLE WITHIN CATEGORY RECEIVED

The ANOVA test results (from Table 13 in Appendix A) provided better than 97% confidence that there was a significant difference between the highest scoring courses and the lowest scoring courses in the inclusion of the learning elements and the coverage of the categories. High scoring courses included more elements in all the categories and received more points in all the categories, indicating that they also used the elements more extensively. There were good examples of the use of most of the elements recommended for quality eLearning. Some courses used multicultural examples, and all but three courses used globalized English. Across all the courses, the satisfaction elements were well represented, as were the design elements. The courses were mostly well accessed through a smartphone, in spite of some scrolling issues and some glaring exceptions.

Platform Findings and Analysis


The platform data collection tool, like the course tool, was divided into five key categories: Location, Technical, Culture, Satisfaction, and Design. This section discusses the strengths and weaknesses of the platforms, as well as the items that seemed to be of greater importance as they relate to mLearning. The following table (Table 8) displays the percentage of points the platforms received for each category. It also displays percentage data for the cloud platforms (Coursera, MITx, and Udacity) and for iTunes U, which is a mixed (network-connected app) platform.

Category      Variable Total  Maximum Total  % of Max. Tot.  Cloud Platforms  iTunes U
Location      12              12             100%            100%             100%
Technical     27              36             75%             78%              67%
Culture       27              48             56%             53%              67%
Satisfaction  47              60             78%             89%              67%
Design        32              48             67%             67%              67%

TABLE 16: PLATFORMS - PERCENTAGE POINTS RECEIVED IN EACH CATEGORY

All four platforms scored threes (100%) for Location: the content adapts to the size of the screen. This seemed to be more of an issue at the course level, as some iTunes U courses have video content that contains difficult-to-read text. Both the cloud platforms and iTunes U scored 67% in the Design category; for all platforms, 50% of the elements scored threes while 50% received ones. All platforms allow the display window to orient in landscape as well as portrait views. All platforms are location neutral and do not require the learner to scroll to view content; however, all platforms require the learner to scroll when viewing documents and completing assignments. One element that, at this time, none of the platforms possess is the ability to input data from one's surroundings, compile the data entered by all students, and display it using charts and graphs. None of the platforms are designed with floating, collapsible, overlapping, interactive panels to maximize information presentation.

The platforms averaged 75% in the Technical category. All four platforms contained a notification system or a way for the professor to communicate with all students. All platforms are compatible across devices except for iTunes U, which has limitations when accessed outside of the iTunes U app, available only for the iPad and iPhone. One element that is not necessarily imperative for the mobile learner, but would enhance learning, is the ability for the platform to interact with mobile device sensors such as the GPS, camera, microphone, compass, and accelerometer. Overall, the platforms scored lowest in the Culture category, at 56%; however, all of the platforms make cultural accommodations in one form or another. Three of the four platforms provide transcripts of lectures. The most creative implementation is the MITx platform, which has text in paragraph form running alongside all lecture and tutorial videos. Udacity and Coursera provide video transcripts, and the platforms are capable of displaying subtitles in English as well as other languages, as long as individuals volunteer to create them. An area for improvement would be the addition of voice recognition to assist those unable to use a small keyboard, and voice-overs for text to aid the visually impaired; iTunes U was the only platform that contained these two elements.


Figure 3: MITx Screen

The platforms scored 78% with regard to Satisfaction, with the cloud platforms scoring 89% and iTunes U 47%. Both Udacity and MITx give the learner control over the content and how it is acquired. MITx allows the learner to increase or decrease the speed of the audio/video (which is also very useful from a cultural perspective) as well as jump to specific places in the content by clicking on the stream of text to the right of the video (see Figure 3 above). Feedback plays a strong role in three of the four platforms; all three cloud platforms scored threes for this element. Feedback is an integral part of any learning, but it is imperative in mLearning. Instant feedback allows learners to gauge their own level of understanding and assess what remediation they may need. Coursera and Udacity allow the instructor to embed interactive questions into the lecture itself (Figures 4 and 6). Coursera's quizzes also provide immediate question-by-question feedback. MITx has a check-answer button for all labs and practice assignments (see Figure 5).


Figure 4: Coursera Quizzes

Figure 5: MITx Practice


Figure 6: Udacity Quizzes

A sense of community is another factor that has a large impact on learner satisfaction (Tan & Wang, 2011; Wang, Shen, & Pan, 2009). Three of the four platforms host active discussion boards, and Udacity offers links to the social networking sites Google+, Twitter, and Facebook directly above the presentation window (Figure 7). While it is not built into the platform, some iTunes U courses provide links to discussion boards. Discussion boards allow students to connect on a social level, seek assistance and share insights on assignments, and engage in in-depth dialogue on important topics. One problem encountered on many of the discussion boards is that there are so many active participants that contributions get buried within minutes; one remedy would be to divide the boards into sections or topics to make them easier to navigate. Udacity has implemented an innovative way to create a sense of community within its courses in the form of a badge system. Badges are awarded to recognize those who participate on the discussion boards in ways that benefit the community. For example, a participant whose question draws 300 views is awarded a "Famous Question" badge, and a participant whose answer is voted up eight times (posts are voted up or down based on perceived usefulness) earns a "Great Answer" badge.
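The badge logic described above reduces to simple thresholds. A hedged sketch follows, with the thresholds taken from the Udacity examples but the type and function names our own:

```typescript
// Illustrative threshold-based badge logic, not Udacity's actual code.

interface Post {
  views: number;
  upvotes: number;
}

function awardBadges(post: Post): string[] {
  const badges: string[] = [];
  if (post.views >= 300) badges.push("Famous Question"); // 300-view threshold
  if (post.upvotes >= 8) badges.push("Great Answer");    // eight-upvote threshold
  return badges;
}

console.log(awardBadges({ views: 412, upvotes: 9 }));
// -> ["Famous Question", "Great Answer"]
```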

Figure 7: Udacity Lecture Screen

Other factors important to mLearning that the cloud platforms offer are chunking and interaction. Udacity breaks lectures down into approximately two-minute chunks, and every other chunk contains a brief quiz that provides immediate feedback. MITx provides interactive labs (Figure 8).
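One plausible way to represent such a chunked lecture, purely as an illustration of the pattern rather than Udacity's actual data model, is an alternating sequence of short video segments and quizzes:

```typescript
// Illustrative data structure for a chunked lecture: ~two-minute video
// segments alternate with brief quizzes. Names and content are ours.

type Segment =
  | { kind: "video"; title: string; seconds: number }
  | { kind: "quiz"; prompt: string; choices: string[]; answerIndex: number };

const lesson: Segment[] = [
  { kind: "video", title: "Defining mLearning", seconds: 120 },
  {
    kind: "quiz",
    prompt: "mLearning design adapts learning to the device's...",
    choices: ["limitations only", "capabilities"],
    answerIndex: 1,
  },
  { kind: "video", title: "Learner mobility", seconds: 110 },
];

// Progress can be saved per segment, so a learner with only a few spare
// minutes still completes whole units rather than abandoning a long lecture.
const videoSeconds = lesson
  .filter((s): s is Extract<Segment, { kind: "video" }> => s.kind === "video")
  .reduce((sum, s) => sum + s.seconds, 0);
console.log(`${(videoSeconds / 60).toFixed(1)} minutes of video`);
```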


Figure 8: MITx Interactive Lab

Mobilegogy
One objective of this research was to provide suggestions, ideas for further study, and implications for a larger theory of mLearning: mobilegogy. (The term "mobilegogy" (Wang, 2009) was suggested by Dr. Minjuan Wang as an extension of cybergogy: http://edutechwiki.unige.ch/en/Cybergogy.) At its core, mobilegogy is based on the understanding that strategies exist for mLearning that do not apply to any other learning environment. The model is meant to guide designers in tailoring e-learning to the mobile realm and in expanding the scope of e-learning by incorporating the capabilities of mobile devices into instructional design. Mobilegogy as a model not only ensures that mLearning outcomes will be comparable to those already enjoyed by good e-learning designs, but takes the learning experience to the next level by effectively utilizing the capabilities of mobile technology.

The mobilegogy model consists of two tiers. The first tier describes adaptations to instructional design necessary to compensate for the limitations of mobile devices: small screens, cross-device/platform restrictions, and learner mobility. The second tier encompasses the expansion of the learning experience: things that can only be achieved with mobile devices, making them unique to mLearning. With this model we differentiate between mobilized eLearning and mLearning: mobilized eLearning design adapts learning to the limitations of mobile devices, whereas mLearning design adapts learning to the capabilities of mobile devices. mLearning looks not just at the mobility of the device but also at the mobility of the learner (Sharples et al., 2010). It is its own entity with its own potential and possibilities, such as the ability to interact with one's device in a specific location and to use data capture for observational activities. This definition of mLearning allows the learner to take learning out into the world; notable examples are Mentira and the Smithsonian project. What mobilegogy gives the world is an opportunity to take advantage of the strengths of mobile technology to create a unique experience for the learner. The mobilized eLearning portion of the model contains elements that should be present in all well-designed learning, such as practice, assessment, and real-world applications. What mobile platforms afford the designer is the ability to be creative when incorporating these elements. For example, all well-designed learning should make connections to the real world; however, imagine the implications of being able to take one's mobile device to a battlefield in order to interact with the content in the place where history was made, or to interact with people on the street to bring a language lesson to life. Other innovative ways to incorporate practice observed during this study include the use of specialized apps that aid a language learner in drawing Chinese characters by tracing them on a touchscreen, and interactive labs that allow the learner to create and test various types of circuits.

Guided by the LTCS model, we analyzed course and platform samples and created the mobilegogy model. This model is built upon the LTCS model but is more complete, including comprehensive design elements and guidelines. The mobilegogy model (Figure 9) has learner outcomes at its center. The next tier contains the previous iteration's Location, Technology, Culture, and Satisfaction categories, to which we added a fifth element: Design. The next circle represents elements for mobilized e-learning as they relate to each category, and beyond that are considerations for mLearning. Because this area is constantly evolving, the model was built to be fluid, and the outer layer is expected to expand as more innovations are introduced to the field. When using the model, the idea is to balance the incorporation of elements from each category (Location, Technology, Culture, Satisfaction, and Design) to maximize the mobile learning experience. The model reminds the designer that chunking is especially important for the mobilized eLearner, who may have only three minutes to grasp a concept while waiting for the subway, and that offering various forms of content acquisition matters because the learner's context is constantly shifting (for example, a learner on a loud bus who cannot hear audio may prefer to read a transcript). A further look at the model reminds the designer that data capture can be incorporated into a lesson: students can collect and input data, which can then be displayed using charts and graphs.
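As an illustration of that data-capture idea, the sketch below (the names and field-observation scenario are hypothetical) compiles observations submitted by individual students into counts that a chart component could display:

```typescript
// Illustrative aggregation of student-collected field data into
// chart-ready counts. Names and scenario are hypothetical.

interface Observation {
  studentId: string;
  category: string; // e.g., a bird species the learner spotted nearby
}

function tally(observations: Observation[]): Map<string, number> {
  // Compile every student's submissions into per-category counts.
  const counts = new Map<string, number>();
  for (const obs of observations) {
    counts.set(obs.category, (counts.get(obs.category) ?? 0) + 1);
  }
  return counts;
}

const submitted: Observation[] = [
  { studentId: "s1", category: "sparrow" },
  { studentId: "s2", category: "crow" },
  { studentId: "s3", category: "sparrow" },
];
console.log(tally(submitted)); // Map { "sparrow" => 2, "crow" => 1 }
```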


Figure 9: Mobilegogy Principles for Designing Effective Mobile Learning

Conclusions
As our analysis of top iTunes U courses reveals, both the courses and the platform still have room for improvement.

Uniquely Mobile Elements

Situational learning and data capture/observational learning were not included in any courses and are probably not completely supported by the platform. Interestingly, these are uniquely mobile elements.

Recommendation - Situational learning frameworks and data capture/observational learning frameworks should be supported by the iTunes U platform. Courses should also look for opportunities to take advantage of the mobile nature and common features of Apple devices and build these uniquely mobile elements into their designs.

Social Learning

The absence of social learning opportunities is a severe limitation of the iTunes U platform. As the TED lectures demonstrated, these opportunities can be added externally by course providers, but generally they were not.

Recommendation - iTunes U urgently needs to build more social interaction into the platform.

Commonly Used Elements

The data support the conclusion that globalized English, gaining and keeping learner attention, and highlighting real-world applications are already important elements in current mLearning design.

Types of Content

The highest-quality iTunes U courses provided multiple types of content. Conversely, the data suggest that video-based courses do not include enough of the elements to be good examples of mLearning.

Recommendation - Course designers should include multiple types of content for their mobile learners.


Best Examples

The courses provided by the United Kingdom's Open University incorporate most of the elements and could be used as examples of good mLearning design.

Poor Utilization of Available Tools

The iTunes U platform provided more opportunities for including the elements than most of the courses used. For example, adaptive learning, as a way of individualizing learning, is highly recommended by learning theorists; the iTunes U platform supported this type of learning, but it was not included in any of the courses.

Recommendation - Course designers should be aware of and comfortable with all the available tools within the platform before they start designing. If they are porting a traditional course to the iTunes U environment, they should be aware of the platform's strengths compared to a traditional learning environment.
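To illustrate the adaptive-learning element discussed above, a minimal sketch, assuming a per-module pre-test score, might gate the learner's start point as follows (the threshold, names, and logic are our own illustration, not iTunes U's mechanism):

```typescript
// Illustrative adaptive start-point logic: a short pre-assessment
// determines which modules the learner can skip. Names are hypothetical.

interface Module {
  id: string;
  pretestScore: number; // learner's score on this module's pre-test, 0-1
}

function startingPlan(modules: Module[], mastery = 0.8): string[] {
  // Modules at or above the mastery threshold are treated as already
  // known; the learner begins with the first unmastered module.
  return modules.filter((m) => m.pretestScore < mastery).map((m) => m.id);
}

console.log(
  startingPlan([
    { id: "intro", pretestScore: 0.95 },
    { id: "core-concepts", pretestScore: 0.55 },
    { id: "applications", pretestScore: 0.4 },
  ])
); // -> ["core-concepts", "applications"]
```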

Further Study
More research needs to be conducted on the mLearning outcomes of courses designed using mobilegogy compared to average courses designed for mobile devices. Additionally, since we have argued that mLearning is fundamentally distinct from standard and mobilized eLearning, research should be conducted to see how mLearning can accomplish standard learning practices, such as practice, assessment, and individualization, in unique and effective ways.


References
Brown, F., Hersey, S., Misoni, C., Teall, E., & Wang, M. (2011). An exploration of e- and m-learning design principles: An analysis of iTunes University. Unpublished manuscript, San Diego State University.

Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293-332.

Churchill, D. (2011). Conceptual model learning objects and design recommendations for small screens. Educational Technology & Society, 14(1), 203-216.

Elias, T. (2011). Universal instructional design principles for mobile learning. International Review of Research in Open and Distance Learning, 12(2), 143-156.

Engeström, Y., Miettinen, R., & Punamäki, R.-L. (1999). Perspectives on activity theory. Cambridge University Press.

Fraenkel, J. R., Wallen, N. E., & Hyun, H. (2011). How to design and evaluate research in education (8th ed.). New York: McGraw-Hill Humanities/Social Sciences/Languages.

Galbraith, B., & Chan, C. (2010, April 1). Developing mobile apps with web technologies, CS 96SI, Lecture 1: Web vision for mobile [podcast]. Stanford. Available at: iTunes University [Accessed: February 25, 2010].

Guy, R. (2009). The evolution of mobile teaching and learning. Santa Rosa: Informing Science Press, p. 1.

Hannon, P., Umble, K. E., Alexander, L., Francisco, D., Steckler, A., Tudor, G., & Upshaw, V. (2002). Gagné's and Laurillard's models of instruction applied to distance education: A theoretically driven evaluation of an online curriculum in public health. The International Review of Research in Open and Distance Learning, 3(2). Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/105/184

Hirsch, B., & Ng, J. W. P. (2010). Education beyond the cloud: Anytime-anywhere learning in a smart campus environment. Retrieved from http://www.ankabut.ae

Hood, L. (2011, August 29). Smartphones are bridging the digital divide. Wall Street Journal. Retrieved from http://online.wsj.com/article/SB10001424053111903327904576526732908837822.html

Keller, J. M. (1983). Development and use of the ARCS model of motivational design. Retrieved from http://www.eric.ed.gov/ERICWebPortal/detail?accno=ED313001

Martin, F., Klein, J. D., & Sullivan, H. (2007). The impact of instructional elements in computer-based instruction. British Journal of Educational Technology, 38(4), 623-636.

Medina, J. J. (2008). Brain rules: 12 principles for surviving and thriving at work, home, and school. Seattle, WA: Pear Press.

Moore, M. (1997). Theory of transactional distance. In D. Keegan (Ed.), Theoretical principles of distance education (pp. 22-38). Routledge. Retrieved from http://www.c3l.uni-oldenburg.de/cde/found/moore93.pdf

Moore, M. (2007). The theory of transactional distance. In Handbook of distance education (2nd ed.). Mahwah, NJ: L. Erlbaum Associates.

Paivio, A. (1990). Mental representations: A dual coding approach. Oxford University Press.

Parry, M. (2011, December 19). MIT will offer certificates to outside students who take its online courses. The Chronicle of Higher Education. Retrieved from http://chronicle.com/article/MIT-Will-Offer-Certificates-to/130121/

Rao, N. M., Sasidhar, C., & Kumar, V. S. (2010). Cloud computing through mobile learning. International Journal of Advanced Computer Science and Applications, 1(6), 42-47.

Salmon, F. (2012, January 23). Udacity and the future of online universities. Reuters Blogs - Felix Salmon. Retrieved from http://blogs.reuters.com/felixsalmon/2012/01/23/udacity-and-the-future-of-online-universities/

Sharples, M., Taylor, J., & Vavoula, G. (2010). A theory of learning for the mobile age. In Medienbildung in neuen Kulturräumen (pp. 87-99). VS Verlag für Sozialwissenschaften. Retrieved from http://dx.doi.org/10.1007/978-3-531-92133-4_6

Shih, Y. E., & Mills, D. (2007). Setting the new standard with mobile computing in online learning. The International Review of Research in Open and Distance Learning, 8(2).

Tan, L., & Wang, M. J. (2010). Best practices in teaching online or hybrid courses: A synthesis of principles. In P. Tsang et al. (Eds.), Lecture notes in computer science: Hybrid learning, 6248 (pp. 117-126). Berlin: Springer Publishing.

Teall, E., Wang, M., Ng, J. W. P., & Callaghan, V. (2011). An exposition of current mobile learning design guidelines and frameworks. Manuscript submitted for publication.

Vygotsky, L. (1978). Mind in society. Harvard University Press.

Wang, M. (2009, November 30). Cybergogy. Retrieved from http://edutechwiki.unige.ch/en/Cybergogy

Wang, M., & Ng, J. W. P. (2012). Intelligent mobile cloud education. The 8th International Conference on Intelligent Environments (IE'12), in press.

Wang, M., & Shen, R. (2011). Message design for mobile learning: Learning theories, human cognition and design principles. British Journal of Educational Technology, 1-15. doi:10.1111/j.1467-8535.2011.01214.x

Wang, M. J., & Kang, J. (2006). Cybergogy of engaged learning through information and communication technology: A framework for creating learner engagement. In D. Hung & M. S. Khine (Eds.), Engaged learning with emerging technologies (pp. 225-253). New York: Springer Publishing.

Wang, M. J., Shen, R. M., Novak, D., & Pan, X. Y. (2009). The impact of mobile learning on students' learning behaviours and performance: Report from a large blended classroom. British Journal of Educational Technology, 40(4), 673-695.

Xiao, J., Wang, M. J., & Li, X. (2011). A comprehensive model for designing mobile learning activities and resources. Modern Educational Technology, 21(123), 15-21.


Appendix A: Data Tools


Coding Rubrics
Table A1: iTunes U Course Coding Rubric
Culture:
- Cultural differences: Uses examples that are multicultural and/or uses cultural disclaimers.
- Globalized English: Written contractions or oral or written colloquialisms do not interfere with the learning of non-native speakers.
- Captioning: Captions available whenever audio is present, for hearing-impaired or non-native speakers.

Satisfaction:
- Adaptive learning: Starts with assessment and teaches what learners don't know; different start points or levels; remediation available.
- Assessment: Course has summative assessment.
- Real world applications: Meaningful use; job-related use; real-world problems; application to a project; used within the context of the learner's life; scenario based.
- Learner attention: Captures the learner's attention and encourages them to move forward; course supports the cognitive activities of linking mental models (verbal and visual) and engages the learner.
- Social interactions: Provides opportunities for interacting with other learners and instructors.
- Pace (learner controlled): Learner can repeat lessons as needed and move forward/backward.
- Formative feedback: Provides the learner with formative feedback.

Location:
- Multiple forms of content: Different types of content are available to accommodate the learner's changing environment and context.
- Situational learning: Learner uses the device for on-location, embedded learning (using GPS, RFID, QR codes, etc.).

Design:
- Practice: Lecture content connected to labs, assignments, and discussions; reviews; spaced intervals; practice sets; drills; written assignments; virtual labs; skill simulations.
- Effective video: Relevant; reinforces the message; not distracting or redundant; representative of the covered concept; no unnecessary complexity; course design simple and intuitive; text captions aligned with images.
- Effective color: Color is used to enhance learning and does not distract from it.
- Effective audio: Audio is used only when it is effective for representational purposes, enhances realism, or offloads the learner's cognitive processing from visuals.
- Effective font: Single font type, using color for emphasis; text not in caps; sans serif; font weight/size should support a 32-character line.
- Limited scrolling: The learner must not scroll to see associated visuals (they appear on the same screen) or to see an image together with its question; each piece of the lesson appears exclusively.
- Content chunking: Content is chunked into roughly 10-minute sections.

Technical:
- Data capture: Course utilizes mobile device sensors (camera, microphone, GPS, compass, accelerometer) for observational learning.
- Cross-device compatibility: Examples: PC, Mac, smartphone, tablet.

Table A2: Open-Learning Platform Coding Rubric


Culture:
- Language support: Examples: glossary, pop-up definitions, course available in other languages, English-learner support materials available.
- Voice recognition: Automated voice recognition is available to assist those unable to use a small keyboard.
- Voiceover: Voiceovers for text are available to assist the vision impaired.
- Closed captions: Closed captions available for hearing-impaired and non-native speakers.

Satisfaction:
- Direct multimedia navigation: Direct navigation of multimedia content using keywords, keyframes, subject matter, search, etc.
- Instant feedback: Supports instant feedback.
- Social networking: Allows access to social networking options from within the instructional window.
- Pace (learner controlled): Examples: rewind, back, forward, audio sped up or slowed down.
- Content (learner controlled): Examples: learner can repeat lessons and quizzes as needed.

Location:
- Location neutral: Platform is location neutral; content format adapts to different sized displays for use in different locations/contexts.

Design:
- Landscape orientation: Learning is oriented as landscape because it offers more flexibility than portrait.
- One-step interactions: Floating, collapsible, overlapping, semi-transparent interactive panels maximize information presentation and allow for interactivity.
- Data entry: Learners can enter data from their environment for analytics via sliders, hot-spot areas, etc.; the outcome of the learner's input can be displayed using numbers, graphs, or images.
- Limited scrolling: Examples: no need to scroll to see the entire lesson or the visual referenced in a question.

Technical:
- Sensor access and integration: Platform allows access to mobile device sensors (camera, microphone, GPS, compass, accelerometer, etc.) for observational learning.
- Notification system: Is there an integrated mobile message/notification system (for example, push or SMS)?
- Cross-device compatibility: Examples: PC, Mac, smartphone, tablet.
- Content delivery: (a) cloud (browser/portal based); (b) stored on device; (c) mixed (network-connected app).


Final Data Collection Tools

iTunes U Courses


https://docs.google.com/spreadsheet/ccc?key=0ApBtDtHY133SdDRkZmRmUFVzMHlh VkdTdzFYRG9Nbmc

Open-Learning Platforms
https://docs.google.com/spreadsheet/ccc?key=0AtZ0uw5ZUFcSdGZUMklNWDcwLTh KUjZGdTBiTHBONFE

Disarticulated Data Analysis Tools

Open University vs. Stanford University:


https://docs.google.com/spreadsheet/ccc?key=0ApBtDtHY133SdGZxLUlHWi1mOGFUaEN sVG9hdUtGX1E#gid=0

Highest vs. Lowest Scoring Courses:


https://docs.google.com/spreadsheet/ccc?key=0ApBtDtHY133SdEN3MDhjaU9ncGxKallHZ DJNUmU4c1E#gid=0

Courses Using iBook vs. Courses not using iBook:


https://docs.google.com/spreadsheet/ccc?key=0ApBtDtHY133SdFpfWmdfNk9renUzOXRjZ mpUU2FKd1E#gid=0

Video Only Courses vs. Video Plus Courses:


https://docs.google.com/spreadsheet/ccc?key=0ApBtDtHY133SdDJqLUZBaFBweFdhRE1t YUFoN3lDMEE#gid=0


Appendix B: Data Tables


Table B1: Mean and Standard Deviation by sample

Table B2: Variable (Element) Grades and Category Grade Point Averages

Culture (GPA = 2 = C):
- D - Cultural Differences: C
- E - Globalized English: B
- F - Captioning: D

Satisfaction (GPA = 2 = C):
- G - Adaptive Learning: F
- H - Assessment: C
- I - Real World Applications: B
- J - Learner Attention: B
- K - Social Interactions: F
- L - Learner Controls Pace: A
- M - Formative Feedback: C

Location (GPA = 1 = D):
- N - Multiple Forms of Content Available: C
- O - Situational Learning: F

Design (GPA = 2.6 = C+):
- P - Practice: C
- Q - Effective Visuals: B
- R - Effective Colors: C
- S - Effective Audio: B
- T - Effective Font: B
- U - Limited Scrolling: B
- V - Content Chunking: C

Technical (GPA = 1.5 = D+):
- W - Data Capture: F
- X - Compatibility: B

Table B3: Grade Point Average and Percentage Earned of the Maximum Points Possible Within Each Category

- Culture: GPA 2 = C; category total 161 of 270 maximum = 60%
- Satisfaction: GPA 2 = C; category total 392 of 630 maximum = 63%
- Location: GPA 1 = D; category total 90 of 180 maximum = 50%
- Design: GPA 2.7 = C+; category total 472 of 630 maximum = 76%
- Technical: GPA 1.5 = D+; category total 100 of 180 maximum = 56%

Table B4: Percent of Total Points Possible Within Each Category (all values are percentages)

Sample/Group        Culture  Satisfaction  Location  Design  Technical
All Courses            60        63           50        76       56
Systematic             57        58           48        77       50
Top 20                 61        66           52        77       58
Video Only             47        41           24        63       45
Video Plus             58        65           56        73       53
Open U                 65        65           52        72       46
Stanford               46        62           56        68       56
iBook used             58        69           61        82       54
iBook not used         60        61           45        74       56

High Scoring Courses:
China                  78        79           67        94       50
Philosophy             67        79           67        89       50
Fiction                67        75           67        94       50

Table B5: Results of Analysis of Variance (ANOVA) Tests* (alpha = .05)

Each row tests the null hypothesis that there is no difference between the two samples on the stated measure; rows flagged in the original as "probably false" are marked below.

Systematic & Top 20 Samples:
- Inclusion of the learning elements (variables): P = .39, F = .75. F is not close to or higher than 1, so the null hypothesis is probably correct, but with only 61% confidence.
- Coverage of the categories: P = .75, F = .11. The null hypothesis is probably correct, but with only 25% confidence.
- Course scores (probably false): P = .12, F = 2.6. F > 1, so the null hypothesis is probably not correct, with 88% confidence.

Video Only & Video Plus:
- Inclusion of the learning elements (probably false): P = 0.0, F = 11.6. F > 1, so the null hypothesis is probably not correct, with 100% confidence.
- Coverage of the categories (probably false): P = .05, F = 5.4. F > 1, so the null hypothesis is probably not correct, with 95% confidence.
- Course scores: P = .73, F = 1.2. F is higher than 1, so the null hypothesis is probably not correct, but with only 27% confidence.

UK Open University & Stanford:
- Inclusion of the learning elements (probably false): P = .22, F = 1.5. F > 1, so the null hypothesis is probably not correct, with 78% confidence.
- Coverage of the categories: P = .7, F = .16. The null hypothesis is probably correct, but with only 30% confidence.
- Course scores: P = .72, F = .13. The null hypothesis is probably correct, but with only 28% confidence.

iBook Used vs. iBook Not Used:
- Inclusion of the learning elements (more probably false than true): P = .28, F = 1.2. F > 1, so the null hypothesis may be incorrect, but with only 72% confidence.
- Coverage of the categories: P = .43, F = .68. The null hypothesis is probably correct, but with only 57% confidence.
- Course scores (more probably false than true): P = .32, F = 1.0. F = 1, so the null hypothesis may be incorrect, but with only 68% confidence.

Highest & Lowest Scoring Courses:
- Inclusion of the learning elements (probably false): P = 0.0, F = 14. F > 1, so the null hypothesis is probably not correct, with 100% confidence.
- Coverage of the categories (probably false): P = .03, F = 7.4. F > 1, so the null hypothesis is probably not correct, with 97% confidence.
- Course scores: this ANOVA test is not usable because there were too few data points.

*GeoGebra was used to run the ANOVA tests.
