CONVERGENT EVIDENCE AMONG CONTENT-SPECIFIC
VERSIONS OF WORKING MEMORY TESTS
RICHARD PERLOW, D. DE WAYNE MOORE,
REBECCA KYLE, AND THOMAS KILLEN
Clemson University
The evidence pertaining to inferences derived from working memory test scores does not
appear to be as well developed as the evidence documenting the relation between working
memory and performance. Gaps in the literature include documenting the relations among
working memory tests of similar and different item content. The authors proposed that data
from these tests would best fit a correlated two-factor model in which the latent factors are
based on item content. A total of 201 undergraduates completed four versions of a working
memory measure. Confirmatory factor analytic results supported the hypothesis.
These data suggest that some working memory measures are not interchangeable.
The relationship between working memory and performance is well docu-
mented (Baddeley, Logie, Nimmo-Smith, & Brereton, 1985; Daneman &
Carpenter, 1980, 1983; Engle, Carullo, & Collins, 1991; Kyllonen & Ste-
phens, 1990; Masson & Miller, 1983; Ormrod & Cochran, 1988; Pea &
Tirre, 1992; Perlow, Jattuso, & Moore, 1997; Shute, 1991; Woltz, 1988;
Zabrucky & Moore, 1994). The fundamental role working memory appears
to have in learning has led some to suggest that working memory may be fun-
damental in understanding intellectual performance (Richardson, 1996).
Working memory involves the simultaneous processing and maintenance
of information. Theories of working memory specify that working memory
contains a central executive and two subordinate systems: the phonological
loop and the visuo-spatial sketchpad (Baddeley, 1990; Baddeley & Hitch, 1974).
We thank Timothy A. Salthouse for allowing us to use his working memory measures. We
also express appreciation to three anonymous reviewers for their comments on earlier versions of
the article. The authors presented an earlier version of this article at the 1998 annual conference
of the American Psychological Society. Correspondence concerning this article should be ad-
dressed to Richard Perlow, Department of Psychology, 48 Brackett Hall, Clemson University,
Clemson, SC 29634; e-mail: rperlow@clemson.edu.
Educational and Psychological Measurement, Vol. 59 No. 5, October 1999 866-877
1999 Sage Publications, Inc.
The central executive imports information from long-term memory
and is responsible for processing that information. The phonological loop
and the visuo-spatial sketchpad are responsible for maintaining information
for later processing. The phonological loop maintains auditory information
through rehearsal; the visuo-spatial sketchpad rehearses visual images.
Working memory capacity refers to the amount of attentional resources or
space used to process and to maintain information in the systems described
above (Tirre & Pea, 1992). Although the total capacity is fixed, the space
allocated for processing and maintenance functions varies (Daneman &
Green, 1986). Increasingly complex material requires more capacity for
processing that, in turn, leaves less capacity for maintenance purposes. Con-
versely, devoting more capacity for rehearsing information reduces the
capacity available for processing data. Because processing and maintenance
occur simultaneously, individuals performing tasks that tax working memory
must alternately shift their attention between processing
and rehearsal. Test developers typically meet this requirement by having
individuals perform some activity (e.g., reading) while remembering other
information for later recall (e.g., remembering the last word of each sentence
read).
Although most researchers in this field have incorporated both processing
and storage tasks in their tests of working memory, there is variability in the
content of the test items across measures. For example, phonological loop-
based measures of working memory have contained items involving catego-
rization skills (Swanson, 1992: Experiment I, Task 10), reading (Daneman &
Green, 1986), arithmetic (Shute, 1991), science (Kyllonen, 1993), and items
containing both words and numbers (Engle, Cantor, & Carullo, 1992; Turner &
Engle, 1989). Employing items of varying content to assess working memory
is not necessarily problematic given sufficient evidence that the inferences
derived from the test scores are valid.
Despite the importance working memory appears to have in predicting
performance, it is surprising to find little validity evidence on the inferences
of working memory test scores. Data typically show low to moderate rela-
tions among multiple measures of working memory (e.g., Jurden, 1995;
Light & Anderson, 1985; Turner & Engle, 1989; Woltz, 1988). In addition,
the relation between measures containing the same item content (e.g., read-
ing based or mathematics based) remains unclear. Understanding the relation
between tests of similar item content is important because the data provide
information about the stability of the construct and the appropriateness of
inferences drawn from test scores.
Purpose
Researchers need to address fundamental measurement issues concerning
working memory tests. One issue is the nature of the relationship among
parallel versions of the same working memory measure. Another issue con-
cerns the relations of these measures with tests containing different item
content. In the present research, we assessed the convergent evidence of four
phonological loop-based working memory measures. Two of the measures are
distinct versions of a test containing reading-based items. The other two mea-
sures are two distinct versions of a test containing mathematics-based items.
We evaluated three models specifying the relations among the four meas-
ures. As depicted in Figure 1, the first model specifies that all four working
memory measures constitute one factor considering that all of the measures
involve processing and maintaining verbal information (i.e., words or num-
bers) and therefore rely on the same working memory mechanisms.
Figure 1. The one-factor and two-factor models.
Figure 1 also contains the second and third models. The second model
specifies that the tests constitute two oblique factors based on item content,
that is, a reading-based factor and a mathematics-based factor. The third
model specifies that the tests constitute two orthogonal factors.
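For readers who prefer a schematic to a path diagram, the loading patterns implied by Figure 1 can be summarized as follows. This is an illustrative sketch reconstructed from the text, not code from the study; nonzero entries mark freely estimated loadings.

import numpy as np

# Model 1: all four tests load on a single working memory factor.
one_factor = np.array([[1.0],   # Reading-Version 1
                       [1.0],   # Reading-Version 2
                       [1.0],   # Mathematics-Version 1
                       [1.0]])  # Mathematics-Version 2

# Models 2 and 3: tests load on content-specific factors (Reading, Mathematics).
two_factor = np.array([[1.0, 0.0],   # Reading-Version 1
                       [1.0, 0.0],   # Reading-Version 2
                       [0.0, 1.0],   # Mathematics-Version 1
                       [0.0, 1.0]])  # Mathematics-Version 2

# Model 2 (oblique): the Reading-Mathematics factor correlation is freely estimated.
# Model 3 (orthogonal): the same loading pattern with that correlation fixed at zero.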
There is a theoretical basis for proposing the one-factor model and the
oblique two-factor model described in the preceding paragraph. Some
researchers view working memory as a single system that processes and
rehearses both verbal and nonverbal information. One implication of this per-
spective is that working memory capacity should not vary as a function of the
type of information processed and maintained provided that all of the infor-
mation is maintained through one of the two maintenance systems. One
would, therefore, expect consistent relations among working memory tests
derived from either reading-based or numerically based items.
There is evidence suggesting that various operationalizations of working
memory form a single factor. As part of a larger study, Shute (1991) con-
ducted an exploratory factor analysis on several working memory measures
and found that the working memory measures and a measure of numerical
reasoning were identified with one factor. Although her research is interest-
ing, it is unclear whether alternative models based on item content would
have fit the data better than a one-factor model (see Thompson & Daniel,
1996). Moreover, the findings leave open the possibility that the manifest
indicators reflected a factor other than working memory.
The belief in a unitary working memory system is not universal. Daneman
and Green (1986) proposed that there might be different working memory
systems depending on the type of information being processed. For example,
efficient readers might require less of their total working memory capacity
for processing reading material, thereby making more of the total capacity
available for maintenance than less efficient readers. Similarly, people who
are efficient in mathematics may allocate more of their total working memory
capacity for maintenance because they use less of the total capacity for proc-
essing than people who are less efficient in mathematics. The assumption that
people are not equally efficient at processing different kinds of information
suggests that there would be performance differences on working memory
tests derived from different content areas. Thus, we could expect reading-
based working memory measures to correlate differently with a third factor
than mathematics-based measures if there were intraindividual variability
across reading and mathematics tasks.
To our knowledge, no one previously has examined whether various
working memory factors derive from test item content. However, evidence is
available showing that one-factor working memory models may not fit data
well. As part of a larger study, Swanson (1992) conducted a set of factor
analyses on several diverse measures of working memory. She claimed that a
two-factor model adequately represented the data and that a one-factor model
did not fit the data well. She also proposed a three-factor model but did not
discuss the fit of that model in detail. She then appeared to interpret the two
factors following a varimax rotation (see Swanson, 1992, Experiment 1).
Although Swanson's findings are interesting, one way of extending her work
is to analyze data from parallel forms of working memory measures after
specifying a priori models of how the measures are related in a confirmatory
factor analysis.
Hypothesis
We proposed that a correlated two-factor model based on item content
would fit the working memory test data well. We proposed this two-factor
model because comprehension and numerical reasoning are separate abilities
(Horn & Hofer, 1992) and because some claim that reading-based working
memory measures exhibit a different relationship with some criteria than do
arithmetic-based working memory measures (Baddeley et al., 1985; Dane-
man & Tardif, 1987). Thus, item content should differentially affect test per-
formance to some extent because of intraindividual differences in persons'
ability to activate and to process words and numbers. We also hypothesized
that the two factors would be related because the phonological loop is used to
maintain both words and numbers. Thus, any limitations in the ability of the
rehearsal subsystem to maintain verbal information should be reflected in the
performance on all four tests.
Method
Participants
A total of 201 students at a southern university (78 males and 123 females)
participated in the research in exchange for extra credit. The participants'
average age was 19.8 years.
Procedure
Participants completed a brief demographics questionnaire and per-
formed a computerized version of the working memory tasks. Test adminis-
trators read the task instructions to the participants as they followed along on
a separate set of instructions. Participants received instructions before
administration of the first reading-based and the first mathematics-based test.
Each set of instructions included two examples illustrating how to perform
the test. Test proctors administered the four tests in counterbalanced order.
Measures. We used the computerized versions of Salthouse's (Salthouse,
1992; Salthouse & Babcock, 1991) working memory measures in the present
study. The test has two components. The first component involves processing
through verification. Participants read a sentence (e.g., "The boy ran with the
dog.") and answered a multiple-choice question based on the sentence ("Who
ran?"). Participants also saw a math problem (e.g., "4 + 2 = ?") and selected
the correct response from a set of alternatives. Responding to the multiple-
choice question assured that participants did some processing. The second
component involves maintenance. After reading and answering a predeter-
mined number of sentences or arithmetic problems, the word RECALL
appeared on the computer monitor. Participants indicated the last word of
each previously presented sentence or the last number in each arithmetic
problem presented in the trial.
All participants began the test at the one-sentence (or one-equation) span
level; that is, they only had to verify one sentence or equation and to recall one
word or number per test item. At the two-span level, participants read two
sentences or problems, answered a multiple-choice question after each sen-
tence or problem, and then recalled the last word or number of each sentence
or problem in sequence. Higher span levels included one more sentence or
problem than its preceding span level. There is no partial credit. Participants
needed to answer the multiple-choice problem correctly and to recall each
word or number in the correct sequence to obtain credit for each test item.
All span levels contained three test items (e.g., participants read a sentence
and recalled three times at the one-span level, participants read two sentences
and recalled three times at the two-span level, etc.). Participants had to
answer the multiple-choice question correctly and recall the words in
sequence on two of the three test items administered at each level to move to
the next higher level. We discontinued testing when participants failed at
least two of the three items at a level. The test score was the highest span suc-
cessfully completed.
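Stated as an algorithm, the scoring rule works as in the following hypothetical sketch (the function and variable names are invented for illustration; item results would come from the test software):

def span_score(items_passed_per_level):
    # items_passed_per_level: dict mapping span level (1, 2, 3, ...) to the number of the
    # three items at that level on which the participant both answered the multiple-choice
    # question correctly and recalled all words/numbers in sequence (0-3).
    score = 0
    for level in sorted(items_passed_per_level):
        if items_passed_per_level[level] >= 2:   # a level is passed with at least 2 of 3 items
            score = level
        else:
            break                                # discontinue after failing 2 of the 3 items
    return score                                 # highest span level successfully completed

# Example: 3/3 at span 1, 2/3 at span 2, 1/3 at span 3 -> test score of 2.
print(span_score({1: 3, 2: 2, 3: 1}))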
Salthouse and Babcock (1991, Study 1) reported a corrected split-half
reliability coefficient of .86 for an orally presented administration of a
reading-based test and .90 for an orally presented version of a mathematics-
based test. Jurden (1995) and Perlow et al. (1997) also provided data indicat-
ing that the reading-based measure produced scores with adequate internal
consistency.
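The split-half correction is presumably the standard Spearman-Brown adjustment for doubling test length (the article does not state the formula); under that assumption, the corrected coefficient of .86 corresponds to a raw half-test correlation of roughly .75:

r_{\text{corrected}} = \frac{2\,r_{\text{half}}}{1 + r_{\text{half}}}, \qquad .86 = \frac{2(.754)}{1 + .754}.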
Results
Table 1 contains descriptive statistics, variable intercorrelations, and the
variance-covariance matrix on which the confirmatory factor analysis was
based. It can be seen from the table that the four measures are related to each
other, but the magnitude of the correlations is low.
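As a quick arithmetic check on Table 1 (an illustrative sketch, not the authors' analysis code), each covariance above the diagonal equals the corresponding correlation multiplied by the two standard deviations; the small discrepancies reflect rounding of the reported values.

sd = {"read1": 1.42, "read2": 1.28, "math1": 1.99, "math2": 2.21}
r = {("read1", "read2"): 0.41, ("read1", "math1"): 0.31, ("math1", "math2"): 0.47}
for (a, b), corr in r.items():
    # cov(a, b) = r(a, b) * SD_a * SD_b
    print(a, b, round(corr * sd[a] * sd[b], 2))   # 0.75, 0.88, 2.07 vs. Table 1: 0.74, 0.89, 2.08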
Because there is little consensus concerning the best index for evaluating
model fit, we report several fit indices following the guidelines proposed by
Hoyle and Panter (1995). Table 2 contains results of the analysis of the one-
factor model. The fit indices taken collectively provide at best little support
for the model. The chi-square statistic and other indices show that the data did
not fit the model well. The chi-square analysis shows that the one-factor
model could be rejected, χ²(df = 2) = 8.393, p < .05. Furthermore, the adjusted
goodness-of-fit index (AGFI), the nonnormed fit index (NNFI), and the root
mean square residual (RMR) fit values all indicate that the model can be
improved.
Table 2 also contains data from the analysis of the oblique two-factor
model. The data fit the model well. Results from a chi-square analysis were
not statistically significant, χ²(df = 1) = .065, p > .05; thus, the model cannot
be rejected. All fit indices indicate a near-perfect fit of the model. Last, we
tested the two-factor orthogonal model. Clearly, the model does not fit the
data well (see Table 2).
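To make the model fitting concrete, the following minimal sketch refits the oblique two-factor model by maximum likelihood from the Table 1 covariances using only numpy and scipy. It is not the software used in the study (the article does not name the program), and because the reported covariances are rounded to two decimals and programs differ in whether the fit function is scaled by N or N - 1, the output should only approximate the Table 2 values.

import numpy as np
from scipy import optimize, stats

# Sample covariance matrix from Table 1 (order: Reading 1, Reading 2, Mathematics 1,
# Mathematics 2); the diagonal entries are the squared standard deviations.
S = np.array([
    [1.42**2, 0.74,    0.89,    1.11],
    [0.74,    1.28**2, 0.77,    0.91],
    [0.89,    0.77,    1.99**2, 2.08],
    [1.11,    0.91,    2.08,    2.21**2],
])
N, p = 201, 4

def implied_cov(theta):
    # theta = 4 loadings, 4 unique variances, 1 factor correlation (factor variances fixed at 1).
    lam = np.zeros((4, 2))
    lam[0, 0], lam[1, 0], lam[2, 1], lam[3, 1] = theta[:4]
    psi = np.diag(theta[4:8])
    phi = np.array([[1.0, theta[8]], [theta[8], 1.0]])
    return lam @ phi @ lam.T + psi

def f_ml(theta):
    # Maximum likelihood discrepancy: ln|Sigma| + tr(S Sigma^-1) - ln|S| - p.
    if np.any(theta[4:8] <= 0):
        return np.inf
    sigma = implied_cov(theta)
    sign, logdet = np.linalg.slogdet(sigma)
    if sign <= 0:
        return np.inf
    return logdet + np.trace(S @ np.linalg.inv(sigma)) - np.log(np.linalg.det(S)) - p

theta0 = np.r_[np.ones(4), np.ones(4), 0.5]
res = optimize.minimize(f_ml, theta0, method="Nelder-Mead",
                        options={"maxiter": 20000, "fatol": 1e-10, "xatol": 1e-8})
chi_square = (N - 1) * res.fun
df = p * (p + 1) // 2 - theta0.size   # 10 observed moments - 9 free parameters = 1
print(round(chi_square, 3), df, round(stats.chi2.sf(chi_square, df), 3))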
Nested models can be compared directly to determine which model fits
the data best by employing the chi-square difference test (Δχ²; Bentler &
Bonett, 1980). The difference in chi-squares between two nested models with
degrees of freedom equal to the difference in degrees of freedom of the first
and second model can be tested statistically, with a statistically significant
chi-square difference indicating an improvement in model fit.
Table 1
Variable Descriptive Statistics and Intercorrelations
Measure                    M     SD     1      2      3      4
1. Reading-Version 1       1.75  1.42   -      0.74   0.89   1.11
2. Reading-Version 2       2.41  1.28   0.41   -      0.77   0.91
3. Mathematics-Version 1   3.86  1.99   0.31   0.30   -      2.08
4. Mathematics-Version 2   3.85  2.21   0.35   0.32   0.47   -
Note. n = 201; correlations are presented below the diagonal, and covariances are above the diagonal. All correlations are statistically significant at the .01 level.
Table 2
Model Fit Indices
Model                      χ²       df   GFI    AGFI   NNFI    CFI     RMR
One factor                  8.393   2    .978   .892    .849    .950   .115
Two factor (oblique)         .065   1   1.00    .998   1.044   1.000   .009
Two factor (orthogonal)    43.902   2    .910   .552    .009    .670   .576
Note. n = 201; GFI = goodness-of-fit index; AGFI = adjusted goodness-of-fit index; NNFI = nonnormed fit in-
dex; CFI = comparative fit index; RMR = root mean square residual.
The models depicted in the figures are not nested considering that one model cannot
obtained from the other by imposing constraints such as fixing paths
(Bentler, 1990). To compare the models, we respecified Model 1 as a modi-
fied two-factor model in which the correlation between the two factors was
fixed at 1.00. Respecifying the model this way allowed us to retain the theo-
retical integrity of the one-factor model, and this model is statistically equiva-
lent to the one-factor model (i.e., it yields fit indexes identical to the original
single-factor model specification depicted in Figure 1). Results show that fixing
the path between the two factors markedly reduced fit, Δχ²(df = 1) = 8.33,
p < .05. Thus, fit was improved by permitting two oblique factors. Finally, the
two-factor oblique model was clearly superior to the two-factor orthogonal
model, Δχ²(df = 1) = 43.84, p < .05.
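For illustration (using scipy rather than the authors' software), the difference test for the first comparison works out as follows:

from scipy.stats import chi2

# One-factor model (factor correlation fixed at 1.00) vs. oblique two-factor model (Table 2 values).
chi2_constrained, df_constrained = 8.393, 2
chi2_free, df_free = 0.065, 1
delta_chi2 = chi2_constrained - chi2_free        # 8.328, reported as 8.33
delta_df = df_constrained - df_free              # 1
print(round(delta_chi2, 2), chi2.sf(delta_chi2, delta_df))   # p is approximately .004, below .05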
The parameter estimates for the two-factor oblique solution are presented
in Table 3. The factor parameters are consistently high. The correlation of .73
between the factors indicates that the reading-based and mathematics-based
working memory factors share about 53% of their variance, with about 47%
unique variance.
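The shared-variance figure follows from squaring the factor correlation:

.731^2 \approx .53 \;\; \text{(shared variance)}, \qquad 1 - .731^2 \approx .47 \;\; \text{(unique variance)}.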
Discussion
We took an initial step toward a better understanding of working memory
measurement by examining a set of working memory scales containing two
versions of test items that are reading and mathematics based. We proposed
that an oblique two-factor model in which the factors are based on item con-
tent would fit the data well. Our data supported our hypothesis. Additional
analyses indicated that the oblique two-factor model fit the data better than
competing one-factor and orthogonal two-factor models.
We offer two explanations of why item content can affect working mem-
ory test relations. First, the relations may be due to intraindividual differences
in reading and mathematics abilities (Horn & Hofer, 1992). An alternative
explanation of the findings is that it was not item content per se that affected
Table 3
Standardized Parameter Estimates for Final Two-Factor Oblique Model
Working Memory Measure     Reading   Mathematics   Error Variances
Reading 1                   .664      .000          .748
Reading 2                   .617      .000          .787
Mathematics 1               .000      .657          .714
Mathematics 2               .000      .722          .652
Factor correlation
  Mathematics factor        .731
the relations among the measures. Working memory measures involve both a
processing task and a storage task. Variation in the difficulty of the process-
ing task may make different demands on individuals' working memory
and/or may require that individuals use additional cognitive skills to perform
the task. The mathematics-based processing task (i.e., performing simple and
familiar math operations) may have been easier or more automatic than the
reading-based processing task, which, although simple, may have been less
automatic. Perhaps people automatically process simple arithmetic problems
because they have them memorized and routinely perform such calculations
(e.g., counting change). On the other hand, people would not be likely to have
seen the reading-based processing task items before. Although the reading-
based test is simple, it may require more resources or working memory
capacity. Consequently, there may have been fewer resources or less space
available for rehearsal of the storage items on the reading measure of working
memory. Indeed, in the present study, word recall was poorer than number
recall. Thus, working memory performance may not be equivalent for meas-
ures with processing tasks that vary in difficulty.
Whether the observed covariance pattern is due to differences across item
content or to differences in the difficulty of the processing component of
working memory measures, the important implication of our work is that
some working memory measures are not interchangeable. To consider meas-
ures as interchangeable when they are not may lead to biased conclusions
when investigating the effects of working memory on performance.
Specifically, biased results are most likely to occur when different working
memory test scores are aggregated to form a general index of working
memory.
These data also point to possible issues with relations across test content
and across rehearsal mechanisms (e.g., phonological loop and visuo-spatial
sketchpad). For example, we observed modest relations among alternate
forms of the same measure. Therefore, we should not be surprised at the find-
ings of others indicating small relations among measures based on specific
item content. These findings challenge those of Shute (1991), who found that
various working memory measures constituted one factor. The inconsistent
findings point to a need for replication studies, particularly studies that build
convergent and discriminant evidence to assess the relations of working
memory and other abilities. We recommend that this line of research employ
multiple measures of a variety of abilities to allow researchers to make more
valid inferences about people's working memory ability. Diverse measures
that are modestly correlated may yield a single factor because they are
affected by other cognitive abilities in addition to working memory. Thus,
including alternate forms is important because researchers have the flexibil-
ity of obtaining additional convergent as well as discriminant evidence.
Another finding is that the within-item-content measures exhibit lower
correlations than we would have expected. One possible reason for the lower
correlations is that the range of our sample was restricted. Working memory ability
previously has been found to be negatively related to age in adult samples
(Dobbs & Rule, 1989; Salthouse, 1992; Salthouse & Babcock, 1991). Only
three of our participants were older than 26. Including older individuals in
our sample may have provided data suggestive of more consistent
performance.
Our correlational findings are similar to the results of others (e.g., Jurden,
1995). The data reported here also provide support for Salthouse's (1990)
recommendation to use multiple working memory indicators to assess better
the construct. Although we agree with his suggestion, care should be used in
selecting the scales to combine because results here show that different com-
binations of scales may not optimally reflect the construct of working
memory.
Limitations and Research Directions
We used college students in the present study. We therefore do not recommend
generalizing the factor structure results reported here to populations other
than young adults.
We used fairly simple reading- and mathematics-based tests. Although
using simple verification tasks has its advantages, evaluation of our hypothe-
sized model with measures employing complex verification tasks may yield
different conclusions. However, using very complex verification tasks that
heavily tax abilities used in verification may change the test from a measure
of working memory to an ability measure of the mental processes employed
in the verification component of the test.
All the tests we used involved the same procedure for assessing working
memory, namely, reading sentences (or arithmetic problems), answering a
multiple-choice question pertaining to the statement, and rehearsing the last
word (or number) of each statement. Future research should investigate how
parallel versions of other item formats affect model fit, including
measures taxing the visuo-spatial sketchpad rehearsal subsystem, to assess
the generalizability of our findings.
Another boundary condition was the time respondents had to process the
sentences or arithmetic problems. We allowed subjects only 7 seconds to
answer the multiple-choice questions. Perhaps allowing unlimited time to
respond, or shortening the time for processing, would alter the relations
among the measures examined here. Unfortunately, to our knowledge, there
are no agreed-on time limits for the processing component of working mem-
ory tests.
References
Baddeley, A. (1990). Human memory: Theory and practice. Boston: Allyn & Bacon.
Baddeley, A. D., &Hitch, G. (1974). Working memory. In G. H. Bower (Ed.), The psychology of
learning and motivation: Advances in research and theory (pp. 47-89). New York: Aca-
demic Press.
Baddeley, A., Logie, R., Nimmo-Smith, I., & Brereton, N. (1985). Components of fluent reading.
Journal of Memory and Language, 24, 119-131.
Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107,
238-246.
Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of co-
variance structures. Psychological Bulletin, 88, 588-606.
Daneman, M., & Carpenter, P. A. (1980). Individual differences in working memory and read-
ing. Journal of Verbal Learning and Verbal Behavior, 19, 450-466.
Daneman, M., & Carpenter, P. A. (1983). Individual differences in integrating information be-
tween and within sentences. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 9, 561-584.
Daneman, M., & Green, I. (1986). Individual differences in comprehending and producing
words in context. Journal of Memory and Language, 25, 1-18.
Daneman, M., & Tardif, T. (1987). Working memory and reading skill re-examined. In
M. Coltheart (Ed.), Attention and performance XII: The psychology of reading (pp. 491-
508). Hove, UK: LEA.
Dobbs, A. R., & Rule, B. G. (1989). Adult age differences in working memory. Psychology &
Aging, 4, 500-503.
Engle, R. W., Cantor, J., & Carullo, J. J. (1992). Individual differences in working memory and
comprehension: A test of four hypotheses. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 18, 972-992.
Engle, R. W., Carullo, J. J., & Collins, K. W. (1991). Individual differences in working memory
for comprehension and following directions. Journal of Educational Research, 84, 253-262.
Horn, J. L., & Hofer, S. M. (1992). Major abilities and development in the adult period. In R. J.
Sternberg & C. A. Berg (Eds.), Intellectual development (pp. 44-99). Cambridge, UK: Cam-
bridge University Press.
Hoyle, R. H., & Panter, A. T. (1995). Writing about structural equation models. In R. H. Hoyle
(Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 158-176).
Thousand Oaks, CA: Sage.
Jurden, F. H. (1995). Individual differences in working memory and complex cognition. Journal
of Educational Psychology, 87, 93-102.
Kyllonen, P. C. (1993). Aptitude testing inspired by information processing: A test of the four-
sources model. Journal of General Psychology, 120, 375-405.
Kyllonen, P. C., & Stephens, D. L. (1990). Cognitive abilities as determinants of success in ac-
quiring logic skill. Learning and Individual Differences, 2, 129-169.
Light, L. L., & Anderson, P. A. (1985). Working-memory capacity, age, and memory for dis-
course. Journal of Gerontology, 40, 737-747.
Masson, M.E.J., &Miller, J. A. (1983). Working memory and individual differences in compre-
hension and memory of text. Journal of Educational Psychology, 75, 314-318.
Ormrod, J. E., & Cochran, K. F. (1988). Relationship of verbal ability and working memory to
spelling achievement and learning to spell. Reading Research and Instruction, 28, 33-43.
Pea, C., & Tirre, W. C. (1992). Cognitive factors involved in the first stage of programming skill
acquisition. Learning and Individual Differences, 4, 311-334.
Perlow, R., Jattuso, M., & Moore, D. (1997). Role of verbal working memory in complex skill
acquisition. Human Performance, 10, 283-302.
Richardson, J.T.E. (1996). Evolving issues in working memory. In J.T.E. Richardson,
R. W. Engle, L. Hasher, R. H. Logie, E. R. Stoltzfus, & R. T. Zacks (Eds.), Working memory
and human cognition (pp. 120-154). New York: Oxford University Press.
Salthouse, T. A. (1990). Working memory as a processing resource in cognitive aging. Develop-
mental Review, 10, 101-124.
Salthouse, T. A. (1992). Influence of processing speed on adult age differences in working mem-
ory. Acta Psychologica, 79, 1-16.
Salthouse, T. A., & Babcock, R. L. (1991). Decomposing adult age differences in working mem-
ory. Developmental Psychology, 27, 763-776.
Shute, V. J. (1991). Who is likely to acquire programming skills? Journal of Educational Com-
puting Research, 7, 1-24.
Swanson, H. L. (1992). Generality and modifiability of working memory among skilled and less
skilled readers. Journal of Educational Psychology, 84, 473-488.
Thompson, B., & Daniel, L. G. (1996). Factor analytic evidence for the construct validity of
scores: A historical overview and some guidelines. Educational and Psychological Meas-
urement, 56, 197-208.
Tirre, W. C., & Pea, C. M. (1992). Investigation of functional working memory in the Reading
Span Test. Journal of Educational Psychology, 84, 462-472.
Turner, M. L., & Engle, R. W. (1989). Is working memory capacity task dependent? Journal of
Memory and Language, 28, 127-154.
Woltz, D. J. (1988). An investigation of the role of working memory in procedural skill acquisi-
tion. Journal of Experimental Psychology: General, 117, 319-331.
Zabrucky, K., & Moore, D. (1994). Contributions of working memory and evaluation and regu-
lation of understanding to adults' recall of texts. Journal of Gerontology: Psychological Sci-
ences, 49, 201-212.
