
Teaching of Psychology
http://top.sagepub.com/
Students Learn Equally Well From Digital as From Paperbound Texts


Annette Kujawski Taylor
Teaching of Psychology 2011 38: 278
DOI: 10.1177/0098628311421330
The online version of this article can be found at:
http://top.sagepub.com/content/38/4/278

Published by SAGE (http://www.sagepublications.com) on behalf of the Society for the Teaching of Psychology


Version of Record: October 6, 2011



Downloaded from top.sagepub.com at Alexandru Ioan Cuza on September 2, 2014

Technology and Teaching

Students Learn Equally Well From Digital as From Paperbound Texts

Teaching of Psychology
38(4) 278-281
© The Author(s) 2011
Reprints and permission:
sagepub.com/journalsPermissions.nav
DOI: 10.1177/0098628311421330
http://top.sagepub.com

Annette Kujawski Taylor1

Abstract
Although digital texts are growing in popularity, few studies have systematically examined whether students can learn as well
when reading a digital text, compared to reading a paperbound textbook. The present study examined several variables related
to comprehension of digital versus paper textbooks, including text complexity, engagement with the text, and long-term retention. Seventy-four students, randomly assigned to read an entire chapter from either a paperbound text or its digital equivalent,
completed a 20-item multiple-choice quiz immediately after reading and 1 week later. No differences in comprehension
emerged across any variables except for a main effect of test time. More important, there were no interactions with text
medium. Therefore, the key to student comprehension appears not to be the method of text delivery but, rather, getting students to read in the first place.
Keywords
digital text, reading comprehension

Digital texts are rapidly becoming a preferred delivery method for course readings. They cost less (Kingsbury & Galloway,
2006), weigh less, are more portable (Coyle, 2008), do not
disintegrate or go out of print (Clyde, 2005), and can easily
incorporate video and interactive exercises to enhance comprehension. The drawbacks are that digital texts require access to a
computer and the Internet, there is increased eye strain (Longhurst, 2003), students prefer paper textbooks (Longhurst,
2003), less time is spent on in-depth and focused reading (Liu,
2005), and reading times are slightly slower (Belmore, 1985;
Dillon, 1992; Kang, Wang, & Lin, 2009), although Moore and
Zabrucky (1995), Reinking (1988), and Reinking and Schreiner (1985) all found that among elementary school children,
the increased reading time related to slightly better comprehension. Despite these drawbacks, digital textbooks appear
to be here to stay.
Much of the research related to digital texts has focused on
technical aspects of readability (see Dillon, 1992, for a review)
and limitations of digital media for note-taking, underlining, or
highlighting text (Brown, 2001). However, the important, and as yet
unanswered, question from a teaching perspective is: Can
students learn as well from digital texts as from paperbound
textbooks? Few published studies have addressed this question directly, and even fewer studies have examined this question among college students.
Dillon (1992) found equivocal results for comprehension,
citing three studies that showed no differences in comprehension after reading digital versus paperbound texts, one study
that showed better comprehension after reading the digital text,
and one study that showed better comprehension after reading
the paperbound text. Coiro (2003), Miall and Dobson (2001),


and Walsh, Asha, and Sprainger (2007), who focused on comprehension as a function of hypertext navigation, concluded
that comprehension could be enhanced with instruction in good
hypertext navigation. Matthew (1997) found increased comprehension among third graders who read stories from CD-ROM
interactive books, compared to paperbound versions of the stories. The CD-ROM versions included auditory components that
may have increased comprehension for words that would have
been unfamiliar in the paperbound version. Similarly, Pearman
(2008), working with second-grade students, found that comprehension increased with CD-ROM storybooks relative to
paperbound storybooks, specifically for lower reading ability
students. She concluded that the CD-ROM books reduced the
need for decoding, allowing students to allocate more time to
constructing meaning from the text. Similarly, when Reinking
and Schreiner (1985) and Reinking (1988) tested fifth and sixth
graders, they found that the difficulty of the passages mediated
comprehension differences between reading digital and paper
texts. They also found that reading ability affected comprehension differentially such that only low-ability readers had lower
comprehension for the digitally presented text. However, this

1University of San Diego, San Diego, CA, USA

Corresponding Author:
Annette Taylor, Department of Psychology, University of San Diego, 5998
Alcalá Park, San Diego, CA 92110
Email: taylor@sandiego.edu



difference disappeared when digital text reading included the
ability to access help functions.
Beach (2009) found no significant differences in comprehension between digital and paperbound texts, no differences
in comprehension between longer and shorter passages, and
no interaction between passage length and text medium. In contrast, Belmore (1985), using relatively brief passages, found
significantly slower reading times and lower comprehension
for digital text. This may have been a function of older, lower
resolution screens. Joly, Capovilla, Bighetti, Neri, and Nicolau
(2009), on the other hand, found better comprehension for
newspaper articles read on the Internet than for those read on
paper. However, the authors noted that these were relatively
easy readings. Finally, Longhurst (2003) found lower comprehension for students in a college-level world history course.
A review of the existing literature does not reveal any studies that used textbooks that might be encountered by students
on a regular basis. The purpose of the present study was to test
reading comprehension of authentic classroom materials and to
directly address the following research questions: (a) After
reading either a paper or a digital text, will college students
demonstrate the same level of comprehension? (b) Does the
complexity of the material make a difference? This has been
previously examined either by holding this variable constant
or by manipulating passage length, with longer passages
assumed to be more complex. (c) Will students retain the material equally well over time? Previous studies have examined
only immediate retrieval. (d) Does engagement with the text
via the ability to underline, make margin notes, take notes on
paper, or highlight passages affect learning? This variable has
not been examined in past studies, although much research supports enhanced comprehension resulting from engagement
with paper texts (cf. Sotiriou & Phillips, 2000).

Method
Participants
Seventy-four introductory psychology students, primarily
Caucasian, female, and traditional college age, participated for
subject pool credit. When asked directly, all claimed to have
never taken a course in economics. Results for two students
were later eliminated when a review of transcripts showed that
they had, in fact, completed a prior course in microeconomics.

Materials
From the publisher's representative, we obtained multiple
copies of two microeconomics texts marketed by the publisher
as introductory college-level texts: one a basic (core) text
(Stone, 2008), and the other a comprehensive text (Krugman &
Wells, 2005). Both texts have a paperbound version and an
equivalent e-text, and the publisher's representative provided
access to the digital counterparts for both texts. The target
reading was the chapter on the topic of supply and demand
from each text: 25 pages in the Stone text and 24 pages in the
Krugman and Wells text.

We obtained the test bank accompanying each text so that
we could develop a standardized quiz to assess comprehension.
We selected items from each test bank that could be answered
after reading either text. An accountancy major reviewed an
initial pool of 30 multiple-choice items. We then carefully examined
both texts to verify that the information needed for each item appeared in both.
The resulting quiz contained 20 items. Thus, regardless of
which text any participant read, each received the same quiz
and would have been equally prepared to answer each item.

Procedure
Following random assignment and informed consent, each participant received either the paperbound text or instruction on
how to navigate the digital text. Participants read the chapter
under similar environmental conditions in individual testing
rooms. We instructed participants in the "clean" engagement
condition not to make any marks or take any notes. We encouraged those in the "annotated" condition to underline, highlight,
and make margin notes in the paper text condition, and to take
notes or use digital highlighting in the digital text condition. We
provided pens, pencils, highlighters, and paper.
Participants alerted us when they had completed the reading.
Each participant then received the 20-item pencil-and-paper
quiz. One week later, we sent an e-mail to each participant with
a link to the same quiz items posted at an online survey site.

Results
A 2 × 2 × 2 × 2 analysis of variance (ANOVA) with one within-subjects variable (test time: immediate vs. 1-week delay) and
three between-subjects variables (medium: digital vs. paper; complexity: basic vs. comprehensive; engagement: clean vs. annotated) showed a single main effect for test time. Participants
scored lower on the quiz after 1 week (mean accuracy = 58.3%)
than immediately after reading (63.5%), F(1, 64) = 9.49,
p = .003. In particular, the critical comparison between digital
and paper texts was nonsignificant with mean quiz scores for the
paper text at 61.5% and for the digital text at 60.4%. The difference between the basic and comprehensive texts was nonsignificant, with 59.8% quiz accuracy for the basic text and 62.2% for
the comprehensive text. For the engagement variable, the mean
quiz scores were 59.1% for clean reading and 62.8% for annotated reading. This difference was also nonsignificant.
Furthermore, there were no significant interactions, at any
levels, between any of the variables. Thus, neither the immediate nor the delayed testing interval produced any difference in
comprehension between the digital and the paperbound texts.
Finally, the complexity of text did not produce any difference
in comprehension between the digital and the paperbound text.

Discussion
The purpose of this study was to determine whether students can
learn new information equally well from a digital text as from a
paperbound text. The first research question we asked was, After



reading either a paper or a digital text, can students demonstrate
the same level of comprehension? The results of this study
showed no comprehension differences between students who
read the digital version versus the traditional paperbound text version, consistent with the findings of Beach (2009) regarding college students' comprehension of digital text reading.
The second research question was, Does the complexity of
the material make a difference? Again, the results showed no
differences. Specifically, there was neither a main effect of text
complexity nor an interaction with text medium (digital vs.
paper). The lack of interaction may be attributed to the lack of
main effect and, therefore, we are unable to adequately answer
this question. We relied on the publisher's representative's advice
that the basic text was less complex. Complexity, however,
can be multifaceted, characterizing an entire text in terms of variables such as depth, breadth, use of graphics, and readability. It is
quite possible that the particular chapters selected for this study
were, in fact, quite equivalent, given their similar length.
In terms of the third research question, Will students retain
the material equally well over time? we found, as expected, that
students performed better in the immediate test than in the 1-week
delayed test. It is important, however, that there was no interaction
with test time and text medium. Thus, all students performed less
well after a week-long delay, regardless of which text they read.
Finally, we asked, Does engagement with the text, including the ability to underline, make margin notes, take notes on
paper, or highlight passages, affect learning? Again, we found
no differences. This is curious, given the wealth of information
suggesting that engagement should matter (see almost any website that promotes good study skills and engagement via underlining, highlighting, or note-taking, although evidence for this
recommendation is seldom cited).
The bottom line seems to be that if students actually sit and
read the text, the delivery method is not important. The more
important issue becomes one of getting students to read in the
first place. Perhaps a shortcoming of this study was its laboratory
nature. Students, when placed in a room for a research study and
told to read, apparently spent their time in doing so. Their performance on the quiz was marginally passing, which is not surprising given the complexity of the reading material and the
students' unfamiliarity with the topic. We intentionally selected
an unfamiliar topic so that prior knowledge would not differentially affect quiz performance across participants.
Another shortcoming of our study was that we did not assess
reading time, which many past studies of reading from digital texts have measured. We allowed 1
hour to complete participation in the study and all but one student
completed the study in the time allotted. No participant took less
than 30 minutes. Thus, it is difficult to determine any effects of
reading time on comprehension in this study. A future study might
further investigate the relationship between text medium and
reading times, as well as preference among students, their prior
personal beliefs about their comprehension as a consequence of
text medium, and their actual comprehension. These complex
interactions may provide additional insight into differences in
reading comprehension based on text delivery medium.

In summary, these data suggest that it does not matter which
type of text the students read. Students can learn equally well
from one or the other. It appears that there is no pedagogical
reason to avoid using digital texts. Convenience and cost savings may be important factors in deciding whether to adopt digital texts.
Acknowledgments
The author would like to thank Courtney Gruensfelder for her assistance in conducting this study and Rachel Blaser for feedback on earlier drafts.

Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to
the research, authorship, and/or publication of this article.

Funding
The author disclosed receipt of the following financial support for the
research, authorship, and/or publication of this article: This research
was supported in part by a Faculty Research Grant from the University
of San Diego.

References
Beach, K. L. (2009). The effect of media, text length, and reading rates
on college student reading comprehension levels (Doctoral dissertation, University of Phoenix). Available from ProQuest digital
dissertations (AAT 3322886).
Belmore, S. M. (1985). Reading computer-presented text. Bulletin of
the Psychonomic Society, 23(1), 12-14.
Brown, G. J. (2001). Beyond print: Reading digitally. Library Hi Tech,
19(4), 390-399.
Clyde, L. A. (2005). Electronic books. Teacher Librarian, 32(5), 45.
Retrieved July 28, 2009, from ERIC database. (EJ728078).
Coiro, J. (2003). Exploring literacy on the Internet. The Reading
Teacher, 56(5), 458-464.
Coyle, K. (2008). Managing technology. The Journal of Academic
Librarianship, 34(2), 160-162.
Dillon, A. (1992). Reading from paper versus screens: A critical
review of the empirical literature. Ergonomics, 35(10), 1297-1326.
Joly, M. C. R. A., Capovilla, A. S. G., Bighetti, C., Neri, M. L., & Nicolau, A. F. (2009). Reading comprehension of freshmen students:
Comparing printed and digital texts. Paper presented at the
International Conference on Multimedia and ICT in Education,
April 20-24, 2009, Lisbon, Portugal.
Kang, Y., Wang, M. J., & Lin, R. (2009). Usability evaluation of
e-books. Displays, 30, 49-52. doi:10.1016/j.displa.2008.12.002
Kingsbury, A., & Galloway, L. (2006, October 16). Textbooks enter
the digital era. U.S. News & World Report, 141(14), 63-65.
Krugman, P., & Wells, R. (2005). Microeconomics. New York: Worth
Publishers.
Liu, Z. (2005). Reading behavior in the digital environment: Changes
in reading behavior over the past ten years. Journal of Documentation, 61(6), 700-712.
Longhurst, J. (2003). World history on the World Wide Web: A student satisfaction survey and a blinding flash of the obvious. The
History Teacher, 36(3), 343-356.



Matthew, K. (1997). A comparison of the influence of interactive
CD-ROM storybooks and traditional print storybooks on reading comprehension. Journal of Research on Computing in Education, 29(3), 263-275.
Miall, D. S., & Dobson, T. (2001). Reading hypertext and the experience of literature. Journal of Digital Information, 2(1). Retrieved
July 28, 2009, from http://journals.tdl.org/jodi/rt/printerFriendly/
35/37
Moore, D., & Zabrucky, K. (1995). Adult age differences in comprehension and memory for computer-displayed and printed text.
Educational Gerontology, 21, 139-150.
Pearman, C. J. (2008). Independent reading of CD-ROM storybooks:
Measuring comprehension with oral retellings. Reading Teacher,
61(8), 594-602. doi:10.1598/RT.61.8.1

Reinking, D. (1988). Computer-mediated text and comprehension
differences: The role of reading time, reader preference, and
estimation of learning. Reading Research Quarterly, 23,
484-498.
Reinking, D., & Schreiner, R. (1985). The effects of computer-mediated text on measures of reading comprehension and reading
behavior. Reading Research Quarterly, 20, 536-552.
Sotiriou, P. E., & Phillips, A. G. (2000). Steps to reading proficiency.
Belmont, CA: Wadsworth Publishing.
Stone, G. W. (2008). Core microeconomics. New York: Worth
Publishers.
Walsh, M., Asha, J., & Sprainger, N. (2007). Reading digital texts.
Australian Journal of Language and Literacy, 30(1), 40-53.

