
Educational Psychology Review, Vol. 16, No. 4, December 2004 (© 2004)

Methodological Issues in Questionnaire-Based Research on Student Learning in Higher Education


John T. E. Richardson (1,2)

Students' scores on questionnaires concerning their approaches to studying in higher education exhibit reasonable stability over time, moderate convergent validity with their scores on other questionnaires, and reasonable levels of discriminating power and criterion-related validity. Nevertheless, the internal consistency of the constituent scales and the construct validity of these instruments are variable, their content validity within contemporary higher education is open to question, and their wording may need to be revised when they are used with students from different social or cultural groups. Future research should investigate the possibility of response bias in such instruments and the validity of self-reports concerning study behavior.
KEY WORDS: approaches to studying; higher education; methodology; questionnaires; reliability; student learning; validity.

INTRODUCTION

The last 25 years have seen major developments in our understanding of how students in higher education set about the task of learning. Many of the key concepts in this field originated in qualitative, interview-based research, but attempts were subsequently made to operationalize these concepts in formal inventories and questionnaires that could then be used to generate quantitative data from large numbers of participants.
(1) Institute of Educational Technology, The Open University, Milton Keynes, United Kingdom.
(2) Correspondence should be addressed to Professor John T. E. Richardson, Institute of Educational Technology, The Open University, Walton Hall, Milton Keynes MK7 6AA, United Kingdom; e-mail: j.t.e.richardson@open.ac.uk.


Recently, I reviewed this research literature with a specific focus upon the issue of whether distance education students approach their studies in a different way from students at campus-based institutions (Richardson, 2000). In this paper, I discuss the methodological issues that can arise when carrying out questionnaire-based research on student learning.

Nowadays, a wide variety of instruments are used to investigate student learning, but two questionnaires have been used most frequently with campus-based students. The first is the Approaches to Studying Inventory (ASI) developed by Ramsden and Entwistle (1981; see also Entwistle and Ramsden, 1983, chap. 4). This consists of 64 items in 16 scales that are grouped under four general headings (see Table I). For each item, participants respond to a statement on a 5-point scale from "definitely agree" to "definitely disagree." The other instrument is the Study Process Questionnaire (SPQ) devised by Biggs (1982, 1985, 1987). This contains 42 items in six scales intended to measure motives and strategies on each of three dimensions (see Table II). For each item, participants respond to a statement on a 5-point scale from "this item is never or only rarely true of me" to "this item is always or almost always true of me."
Table I. Scales Contained in the Approaches to Studying Inventory

Meaning orientation
  Deep approach: Active questioning in learning
  Interrelating ideas: Relating to other parts of the course
  Use of evidence: Relating evidence to conclusions
  Intrinsic motivation: Interest in learning for learning's sake

Reproducing orientation
  Surface approach: Preoccupation with memorization
  Syllabus-boundness: Relying on staff to define learning tasks
  Fear of failure: Pessimism and anxiety about academic outcomes
  Extrinsic motivation: Interest in courses for the qualifications they offer

Achieving orientation
  Strategic approach: Awareness of implications of academic demands made by staff
  Disorganized study methods: Unable to work regularly and effectively (a)
  Negative attitudes to studying: Lack of interest and application (a)
  Achievement motivation: Competitive and confident

Styles and pathologies
  Comprehension learning: Readiness to map out subject area and think divergently
  Globetrotting: Over-ready to jump to conclusions
  Operation learning: Emphasis on facts and logical analysis
  Improvidence: Over-cautious reliance on details

Note. From Ramsden and Entwistle (1981, p. 371). (a) These scales are scored in reverse.

Table II. Scales Contained in the Study Process Questionnaire

SA: Surface
  Motive: Surface motive (SM) is instrumental: main purpose is to gain a qualification with pass-only aspirations, and a corresponding fear of failure.
  Strategy: Surface strategy (SS) is reproductive: limit target to bare essentials and reproduce through rote learning.

DA: Deep
  Motive: Deep motive (DM) is intrinsic: study to actualize interest and competence in particular academic subjects.
  Strategy: Deep strategy (DS) is meaningful: read widely, interrelate with previous relevant knowledge.

AA: Achieving
  Motive: Achieving motive (AM) is based on competition and ego-enhancement: obtain highest grades, whether or not material is interesting.
  Strategy: Achieving strategy (AS) is based on organizing: follow up all suggested readings, schedule time, behave as model student.

Note. From Biggs (1985, p. 186).
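For readers unfamiliar with instruments of this kind, the scoring is straightforward: a respondent's score on each scale is simply the sum of his or her 5-point ratings over that scale's constituent items. The sketch below illustrates this with a hypothetical item key; it is not the published key for either the ASI or the SPQ.

```python
# Minimal sketch of Likert-scale scoring with a HYPOTHETICAL item key;
# the actual keys for the ASI and SPQ are given in the original publications.

responses = {1: 4, 2: 5, 3: 2, 4: 3}  # item number -> rating (1-5)

scale_key = {                         # hypothetical mapping of items to scales
    "deep_motive": [1, 3],
    "deep_strategy": [2, 4],
}

scale_scores = {
    scale: sum(responses[item] for item in items)
    for scale, items in scale_key.items()
}
print(scale_scores)  # {'deep_motive': 6, 'deep_strategy': 8}
```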

ISSUES OF RELIABILITY

The most fundamental requirement of a research instrument is that it be reliable in the sense that it would yield consistent results if used repeatedly under the same conditions to test the same participants and is therefore relatively unaffected by errors of measurement. Most researchers have focused on internal consistency, as measured by Cronbach's coefficient alpha (Cronbach, 1951). The internal consistency of the constituent scales of the ASI and the SPQ appears to vary between .2 and .8 (Clarke, 1986; Entwistle and Ramsden, 1983, pp. 43, 228–233; Watkins and Hattie, 1980). By conventional psychometric criteria, any values of coefficient alpha below .6 are regarded as poor, even for relatively heterogeneous constructs (e.g., Robinson et al., 1991). Indeed, for measures of individual differences in cognitive processing, more stringent standards of internal consistency are expected (Childers et al., 1985; McKelvie, 1994).
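For reference, coefficient alpha for a scale of k items is k/(k - 1) multiplied by one minus the ratio of the summed item variances to the variance of the total score. A minimal sketch in Python, assuming responses have been coded as a respondents-by-items numeric matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha (Cronbach, 1951) for a (respondents x items) matrix."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated responses to a 4-item scale in a 5-point format; with random
# (i.e., unrelated) items, alpha should be close to zero.
rng = np.random.default_rng(0)
data = rng.integers(1, 6, size=(100, 4)).astype(float)
print(round(cronbach_alpha(data), 2))
```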


Administering these questionnaires on a single occasion is obviously much less arduous than locating the same individuals for testing on two separate occasions. It is therefore not surprising that fewer researchers have directly evaluated the test–retest reliability of these instruments. Over intervals between the two administrations of up to 3 months, the test–retest reliability of the constituent scales of the ASI and the SPQ appears to vary between .5 and .8 (Clarke, 1986; Murray-Harvey, 1994; see also Richardson, 1990). These figures are regarded as broadly satisfactory by conventional psychometric criteria (e.g., Cureton, 1965), and they suggest that scores on the two instruments are relatively stable with passing time.

With longer intervals between two administrations, participants receive increasing exposure to contextual influences that might lead to genuine changes in approaches to studying (Zeegers, 2001). In this case, variability in the scores obtained on different occasions does not necessarily cast doubt upon the adequacy of the test instrument. Longitudinal studies of this sort are hard to carry out because of the high probability of attrition: Participants may decline to participate in the follow-up session, or they may have withdrawn from their studies altogether, so that the participants who contribute data from the two sessions may be unrepresentative of the original sample (Watkins and Hattie, 1985). Nevertheless, Murray-Harvey (1994) obtained SPQ scores from 280 students with an interval of roughly a year; she found that the test–retest reliability of the scales varied between .42 and .64, again suggesting a high degree of stability.
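Test–retest reliability is estimated as the correlation between the scores that the same respondents obtain on the two administrations, and the matching step makes the attrition problem concrete: only students who are present on both occasions contribute to the estimate. A minimal sketch with hypothetical identifiers and scale scores:

```python
from scipy.stats import pearsonr

# Hypothetical data: student ID -> scale score at each administration.
time1 = {"s01": 18, "s02": 22, "s03": 15, "s04": 20, "s05": 17}
time2 = {"s01": 19, "s02": 21, "s04": 23, "s05": 16}  # s03 has withdrawn

# Students lost to attrition drop out of the calculation, which is why the
# retained subsample may be unrepresentative of the original sample.
common = sorted(time1.keys() & time2.keys())
r, p = pearsonr([time1[s] for s in common], [time2[s] for s in common])
print(f"test-retest r = {r:.2f} (n = {len(common)})")
```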

ISSUES OF VALIDITY

The other fundamental requirement of a research instrument is that it be valid in the sense that it measures the trait or traits that it purports to measure. The matter of construct validity is usually addressed by the use of factor analysis. Analyses of responses to individual items in the ASI or the SPQ have usually not produced satisfactory results: Characteristically, some scales are identified in the extracted solutions, but not all (Christensen et al., 1991; Kember and Gow, 1990, 1991; Meyer and Parsons, 1989). These unsatisfactory results motivated some researchers to modify these instruments by removing the scales that are less robust (e.g., Biggs et al., 2001; Richardson, 1990; Trigwell et al., 1999). Shorter instruments may also be more convenient for practical purposes.

Researchers have more often used factor analyses to examine whether the scale scores on these instruments define more global dimensions. The 16 scales of the ASI are supposed to define four different study orientations (see Table I), and the six scales of the SPQ are supposed to define three different approaches to studying (see Table II). Of course, the precise results of factor analyses depend on the choice of statistical model, the number of factors extracted, and the choice of orthogonal or oblique rotation. Nevertheless, the outcomes of such analyses have been disappointing (Biggs, 1987, p. 16; Cano-Garcia and Justicia-Justicia, 1994; Kember and Leung, 1998; Meyer, 1995; Murray-Harvey, 1994; Watkins and Regmi, 1996).
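As an illustration of such scale-level analyses, the sketch below (not taken from any of the studies cited) fits a four-factor model to hypothetical ASI scale scores using scikit-learn; as just noted, the choice of model, the number of factors, and the rotation would all affect the outcome.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical data: 200 students x 16 ASI scale scores.
rng = np.random.default_rng(1)
scale_scores = rng.normal(size=(200, 16))

# Extract four factors (one per putative study orientation) with a varimax
# rotation; an oblique rotation could equally well be argued for.
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(scale_scores)

# Loadings of each scale on each factor: in a clean solution, the four
# scales of each orientation would load together on a single factor.
print(np.round(fa.components_.T, 2))  # shape: (16 scales, 4 factors)
```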


There is considerable overlap at a conceptual level among the different questionnaires on student learning in higher education, particularly in terms of the distinction between a deep approach and a surface approach or that between a meaning orientation and a reproducing orientation (Schmeck and Geisler-Brenstein, 1989). The same basic distinctions emerge from research across all national systems of higher education, although they tend to receive more specific interpretations within each system or culture (for a review, see Richardson, 1994a). Nevertheless, finding a broad conceptual convergence is not the same as finding concrete evidence for relationships at an empirical level (i.e., convergent validity).

Ribich and Schmeck (1979) administered three different instruments to students taking an introductory psychology course: one was a precursor to the SPQ called the Study Behaviour Questionnaire (SBQ: Biggs, 1970); the second was the Learning Style Inventory (LSI) devised by Kolb et al. (1971); and the third was a questionnaire devised by Schmeck et al. (1977), the Inventory of Learning Processes (ILP). Ribich and Schmeck carried out a canonical correlation analysis on the scores on all possible pairs of these three instruments. The results indicated that there was a modest amount of shared variance between the SBQ and the ILP, but that there was much less overlap between the LSI and either the ILP or the SBQ. Other studies have similarly found little overlap between questionnaires on approaches to studying and instruments that aim to measure Kolb's learning styles (Cano-Garcia and Justicia-Justicia, 1994; Newstead, 1992).

Subsequently, Schmeck (1988) described an unpublished study in which 269 students completed both the ASI and the ILP. Although there were some significant relationships among the scale scores on these two instruments, they were fairly modest (see also Entwistle, 1988). Somewhat clearer results were obtained by Cano-Garcia and Justicia-Justicia (1994) using the same instruments. Other researchers have used short versions of the ASI and the ILP (Entwistle and Waterston, 1988; Speth and Brown, 1988), whereas Lonka and Lindblom-Ylanne (1996) used selected items from the ASI and from the Inventory of Learning Styles devised by Vermunt and van Rijswijk (1988). Typically, the results from such studies have shown associations between different instruments but provided only equivocal support for their scale structure: that is, they provide good evidence for convergent validity but only weak evidence for construct validity.
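Analyses of the kind reported by Ribich and Schmeck can be sketched with canonical correlation analysis, which finds maximally correlated linear combinations of two sets of scale scores. The data below are random placeholders, so the canonical correlations reflect only chance association; with real data, sizeable leading correlations would indicate shared variance between the two instruments.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical scale scores for the same 150 students on two instruments.
rng = np.random.default_rng(2)
instrument_a = rng.normal(size=(150, 6))  # e.g., the six SPQ scales
instrument_b = rng.normal(size=(150, 4))  # e.g., four ILP scales

cca = CCA(n_components=2)
a_scores, b_scores = cca.fit_transform(instrument_a, instrument_b)

# The canonical correlation for each pair of canonical variates.
for i in range(2):
    r = np.corrcoef(a_scores[:, i], b_scores[:, i])[0, 1]
    print(f"canonical correlation {i + 1}: {r:.2f}")
```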


Another form of validity is the extent to which an instrument yields different scores on groups or individuals who would be expected to differ in the underlying trait or traits. There is good evidence that scores on instruments such as the ASI vary with age, such that older people tend to obtain higher scores than younger people on measures of deep approach and a meaning orientation, and lower scores than younger people on measures of surface approach and a reproducing orientation (see Richardson, 1994b, for a review). There is also suggestive evidence that students' scores vary with their previous academic qualifications, their subject of study, and their level of study (see Richardson, 2000, pp. 180–181). One proviso is that studies often involve large samples of students, and so observed differences may be statistically significant and yet of little practical importance. This could be clarified if researchers on student learning reported measures of effect size in addition to significance levels (see Richardson, 1996), as illustrated in the sketch below.

Two related forms of validity are (a) the identification of subgroups of respondents who differ qualitatively in their approaches to studying and (b) the demonstration of changes in approaches to studying in response to educational interventions. Following an earlier study by Entwistle and Brennan (1971), Richardson (1997) applied cluster analysis to students' scores on the ASI to classify them as having a dominant orientation to either meaning or reproducing. Hambleton et al. (1998) used a short version of the ASI to evaluate a multimedia variant of the Personalized System of Instruction (or Keller Plan). They found higher scores on meaning orientation in comparison with the same students' scores in a conventionally taught course.

If approaches to studying vary in their efficacy, students' scores should vary with some concurrent criterion (such as self-ratings of academic progress) or with some future criterion (such as subsequent attainment). Ramsden and Entwistle (1981) showed that students' scores on the ASI were correlated in a meaningful way with their ratings of their academic progress. However, ratings of this sort may not be valid because they can be biased by students' implicit theories of personal change (Conway and Ross, 1984). There is strong evidence that scores on the ASI are correlated with students' performance in subsequent academic assessments, once again in meaningful and appropriate ways (see Richardson, 2000, pp. 182–183, for a review). (This could also be interpreted as support for the construct validity of the ASI.) Nevertheless, several investigations have found that academically unsuccessful students do not just exhibit poorer approaches to studying but fail to exhibit any coherent approaches at all (Entwistle et al., 1991; Meyer et al., 1990a,b; Meyer and Dunne, 1991).
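Returning to the point about effect sizes, Cohen's d expresses a mean difference in pooled standard deviation units, so its practical importance can be judged independently of sample size. A minimal sketch using simulated (hypothetical) deep-approach scores for older and younger students:

```python
import numpy as np

def cohens_d(group1: np.ndarray, group2: np.ndarray) -> float:
    """Cohen's d computed with a pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * group1.var(ddof=1) +
                  (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
    return (group1.mean() - group2.mean()) / np.sqrt(pooled_var)

# Hypothetical deep-approach scores for two age groups.
rng = np.random.default_rng(3)
older = rng.normal(loc=21.0, scale=4.0, size=400)
younger = rng.normal(loc=20.0, scale=4.0, size=400)

# With 400 students per group, a difference of this size is statistically
# significant, yet d is only about 0.25: a small effect in Cohen's terms.
print(f"d = {cohens_d(older, younger):.2f}")
```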

CONSTRUCTION AND PORTABILITY OF QUESTIONNAIRES

The developers of questionnaires on student learning in higher education have followed conventional procedures for constructing and selecting the individual items. Nevertheless, there are questions about their content validity. Part of the problem is that the research contexts in which these instruments were devised have changed in the intervening years. As Reber (1985, p. 809) noted, content validity is situation-specific, and a questionnaire developed in one context may have low content validity in another


context. This also has implications for what might be called the portability of questionnaires: that is, their appropriateness as research tools in different national systems of higher education.

Both the ASI and the SPQ (along with many other similar instruments) were originally constructed in the 1970s. Since that time, systems of higher education in many countries have expanded considerably. As a result, the student population has become much more representative of the general population (of young people, at least). In addition, as the result of changes in society as a whole, forms of discourse, both in academia and in everyday life, have become less formal and more flexible. Nowadays, the forms of expression in many of the items in these instruments appear wordy and elaborate. They also, in some respects, assume the particular cultural milieus present in institutions of higher education in the respective countries where they were developed. Their validity is therefore threatened to the extent that these social and cultural milieus no longer exist.

Even in the United Kingdom and Australia, the student population in higher education is vastly more heterogeneous in social, cultural, and ethnic terms than it was during the 1970s. The student population now incorporates groups within society that previously were excluded, to a large degree, from higher education. As a result, I believe there is a real concern about what today's students make of the items in questionnaires such as the ASI and the SPQ. This problem soon becomes clear when one attempts to carry out research with students from these formerly excluded groups. In my own research on students with deafness and other kinds of hearing loss, we have found it necessary to rephrase many of the original items in the ASI (Richardson et al., 2000; Richardson and Woodley, 1999). Any research instrument should be validated from scratch in each new context in which it is used.

As mentioned earlier, the same distinctions (between deep and surface approaches or between meaning and reproducing orientations) emerge from research across different systems of higher education, but they receive different interpretations within each system or culture (see Richardson, 1994a). This means that questionnaires intended to reflect those distinctions must be carefully adapted within different cultures. As I have pointed out elsewhere (Richardson, 2000, p. 185), the word "culture" should be understood broadly, so that it includes, for instance, the culture of the United States (Richardson, 1995), the culture of Access courses aimed at older students who are returning to formal education in the United Kingdom (Hayes et al., 1997), and, as just mentioned, the culture of people who are deaf (Richardson et al., 2000).


A final point concerning content validity is that the most commonly used questionnaires on student learning make little or no attempt to control for the possibility of response bias. In both the ASI and the SPQ, all of the items within any scale have the same polarity, and so there is no way of differentiating between a high score on the relevant trait and a mere response bias. Two of the scales in the ASI are scored in reverse (see Table I), and hence discrepant scores on these two scales might be taken to indicate a response bias. In contrast, Schmeck et al. (1977) controlled for response bias when constructing their ILP by including items with both positive and negative polarity within each scale.
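The control used by Schmeck et al. amounts to reverse-keying the negatively worded items before scoring, so that an acquiescent respondent (one who simply agrees with everything) no longer obtains an extreme scale score. A minimal sketch, assuming a 5-point response format and a hypothetical polarity key:

```python
# Reverse keying on a 5-point scale (1-5) with a HYPOTHETICAL polarity key;
# the actual keying of the ILP is given by Schmeck et al. (1977).
responses = {1: 5, 2: 5, 3: 5, 4: 5}     # an acquiescent respondent
polarity = {1: +1, 2: -1, 3: +1, 4: -1}  # -1 marks negatively worded items

def keyed_score(item: int) -> int:
    raw = responses[item]
    return raw if polarity[item] == 1 else 6 - raw  # reverse: 5 -> 1, ..., 1 -> 5

total = sum(keyed_score(item) for item in responses)
# A pure yea-sayer now obtains a mid-range total (12 of a possible 20) rather
# than the maximum, so acquiescence no longer mimics a high trait score.
print(total)
```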


THE VALIDITY OF SELF-REPORTS

Questionnaires such as the ASI and the SPQ are intended to monitor how students conduct their normal academic learning. These instruments can be adapted to refer to individual courses, but they do not actually refer to specific situations. Thus, it would be more correct to say that they assess students' predispositions to conduct learning in particular ways (Biggs, 1993; Kember and Gow, 1989). However, the assumption that people are able to provide valid reports of these predispositions can be questioned in the light of evidence about the nature of self-reports on mental processes.

As mentioned earlier, contemporary research on student learning in higher education originated in interview-based research on how students had conducted particular tasks such as reading an academic text (Marton, 1975). In this situation, the validity of the students' reports depends on the fact that the mental episodes in question persist as objects of focal attention in short-term memory, from which it follows that accounts obtained soon after the task in question will usually be an accurate reflection of their online cognitive processing (Ericsson and Simon, 1980, 1984, pp. 19, 25–30). However, questionnaires on student learning require respondents to give cumulative and retrospective accounts of how they conduct academic tasks, and it is most unlikely that they have retained an accurate record in long-term memory of the mental activities that were involved. In this case, their accounts are likely based, at least in part, on inferences and reconstructions derived from their subjective and implicit theories of the mental processes involved (Ericsson and Simon, 1980, 1984, pp. 19–20; Nisbett and Wilson, 1977; White, 1989).

One example of this comes from the study by Conway and Ross (1984) cited earlier. These researchers assigned students randomly to take a study-skills program or to join a waiting list for the program. They asked all the students to rate their current study skills before and after the program, and after the program they also asked all the students retrospectively to rate their study skills before the program had taken place. They found that students who had taken the program rated their study skills as having improved, but the main reason for this was that the retrospective ratings tended to denigrate their former skills. Their subsequent academic performance was actually no better than that of the students on the waiting list, but many of them continued to insist that the program had been beneficial, despite having been advised to the contrary at their debriefing (Ross and Conway, 1986). These results tend to indicate that under some circumstances students reconstruct their autobiographical memories to fit an implicit yet invalid theory about personal change (see also Ross, 1989).

Nevertheless, obtaining concurrent reports of how students set about their learning tasks may well affect the cognitive processes that they bring to bear upon those tasks (Ericsson and Simon, 1984, pp. 78–107, 1993, pp. xvii–xxxii, respectively). Instead, the solution to this conundrum is to devise questions and questionnaires that serve to reinstate the psychological context in which that learning originally occurred (Hoc and Leplat, 1983).

CONCLUSION

Students' scores on questionnaires on approaches to studying show reasonable stability over time, moderate convergent validity with their scores on other questionnaires, and reasonable levels of both discriminating power and criterion-related validity. Nevertheless, the internal consistency of their constituent scales is variable, and the construct validity of these instruments (according to the results of factor analyses carried out both on item responses and on scale scores) is disappointing. These findings have, not unreasonably, motivated attempts to devise improved instruments in the form of revised versions of the ASI (Entwistle et al., 2000; Tait and Entwistle, 1996) and the two-factor SPQ (Biggs et al., 2001). The disadvantage is that results obtained using these new instruments are not commensurate with those obtained using their predecessors, and so there is no single corpus of evidence that is based upon applications of the same instrument.

The content validity of these instruments is open to question because of changes both in higher education and in society at large since they were originally devised. The appropriateness of the original wording of these questionnaires when they are used with students from other social, cultural, or ethnic groups is highly doubtful. In the future, researchers should give more attention to the possibility of response bias and to the fundamental question of the validity of self-reports. The study by Conway and Ross (1984) suggests that under some circumstances students do not provide valid and accurate accounts of their dispositions and capabilities. This disturbing possibility needs to be examined using more carefully constructed inventories and questionnaires.


ACKNOWLEDGMENTS

I am grateful to Noel Entwistle, Kirsti Lonka, and Erkki Olkinuora for their comments on an earlier version of this paper.

REFERENCES
Biggs, J. B. (1970). Faculty patterns in study behaviour. Aust. J. Psychol. 22: 161–174.
Biggs, J. B. (1982). Student motivation and study strategies in university and college of advanced education populations. Higher Educ. Res. Dev. 1: 33–55.
Biggs, J. B. (1985). The role of metalearning in study processes. Br. J. Educ. Psychol. 55: 185–212.
Biggs, J. B. (1987). Student Approaches to Learning and Studying, Australian Council for Educational Research, Melbourne.
Biggs, J. (1993). What do inventories of students' learning processes really measure? A theoretical review and clarification. Br. J. Educ. Psychol. 63: 3–19.
Biggs, J., Kember, D., and Leung, D. Y. P. (2001). The revised two-factor Study Process Questionnaire: R-SPQ-2F. Br. J. Educ. Psychol. 71: 133–149.
Cano-Garcia, F., and Justicia-Justicia, F. (1994). Learning strategies, styles and approaches: An analysis of their interrelationships. Higher Educ. 27: 239–260.
Childers, T. L., Houston, M. J., and Heckler, S. E. (1985). Measurement of individual differences in visual versus verbal information processing. J. Consum. Res. 12: 125–134.
Christensen, C. A., Massey, D. R., and Isaacs, P. J. (1991). Cognitive strategies and study habits: An analysis of the measurement of tertiary students' learning. Br. J. Educ. Psychol. 61: 290–299.
Clarke, R. M. (1986). Students' approaches to learning in an innovative medical school: A cross-sectional study. Br. J. Educ. Psychol. 56: 309–321.
Conway, M., and Ross, M. (1984). Getting what you want by revising what you had. J. Pers. Soc. Psychol. 47: 738–748.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika 16: 297–334.
Cureton, E. E. (1965). Reliability and validity: Basic assumptions and experimental designs. Educ. Psychol. Meas. 25: 327–346.
Entwistle, N. (1988). Motivational factors in students' approaches to learning. In Schmeck, R. R. (ed.), Learning Strategies and Learning Styles, Plenum Press, New York, pp. 21–51.
Entwistle, N. J., and Brennan, T. (1971). The academic performance of students: 2. Types of successful students. Br. J. Educ. Psychol. 41: 268–276.
Entwistle, N. J., and Ramsden, P. (1983). Understanding Student Learning, Croom Helm, London.
Entwistle, N., Meyer, J. H. F., and Tait, H. (1991). Student failure: Disintegrated perceptions of studying and the learning environment. Higher Educ. 21: 249–261.
Entwistle, N., Tait, H., and McCune, V. (2000). Patterns of response to an approaches to studying inventory across contrasting groups and contexts. Eur. J. Psychol. Educ. 15: 33–48.
Entwistle, N., and Waterston, S. (1988). Approaches to studying and levels of processing in university students. Br. J. Educ. Psychol. 58: 258–265.
Ericsson, K. A., and Simon, H. A. (1980). Verbal reports as data. Psychol. Rev. 87: 215–251.
Ericsson, K. A., and Simon, H. A. (1984). Protocol Analysis: Verbal Reports as Data, MIT Press, Cambridge, MA.
Ericsson, K. A., and Simon, H. A. (1993). Protocol Analysis: Verbal Reports as Data (Rev. edn.), MIT Press, Cambridge, MA.
Hambleton, I. R., Foster, W. H., and Richardson, J. T. E. (1998). Improving student learning using the personalised system of instruction. Higher Educ. 35: 187–203.


Hayes, K., King, E., and Richardson, J. T. E. (1997). Mature students in higher education: III. Approaches to studying in Access students. Stud. Higher Educ. 22: 19–31.
Hoc, J. M., and Leplat, J. (1983). Evaluation of different modalities of verbalization in a sorting task. Int. J. Man-Machine Stud. 18: 283–306.
Kember, D., and Gow, L. (1989). A model of student approaches to learning encompassing ways to influence and change approaches. Instr. Sci. 18: 263–288.
Kember, D., and Gow, L. (1990). Cultural specificity of approaches to study. Br. J. Educ. Psychol. 60: 356–363.
Kember, D., and Gow, L. (1991). A challenge to the anecdotal stereotype of the Asian student. Stud. Higher Educ. 16: 117–128.
Kember, D., and Leung, D. Y. P. (1998). The dimensionality of approaches to learning: An investigation with confirmatory factor analysis on the structure of the SPQ and LPQ. Br. J. Educ. Psychol. 68: 395–407.
Kolb, D. A., Rubin, I. M., and McIntyre, J. M. (1971). Organizational Psychology: An Experiential Approach, Prentice-Hall, Englewood Cliffs, NJ.
Lonka, K., and Lindblom-Ylanne, S. (1996). Epistemologies, conceptions of learning and study practices in medicine and psychology. Higher Educ. 31: 5–24.
Marton, F. (1975). On non-verbatim learning: I. Level of processing and level of outcome. Scand. J. Psychol. 16: 273–279.
McKelvie, S. J. (1994). Guidelines for judging psychometric properties of imagery questionnaires as research instruments: A proposal. Perc. Mot. Skills 79: 1219–1231.
Meyer, J. H. F. (1995). Gender-group differences in the learning behaviour of entering first-year university students. Higher Educ. 29: 201–215.
Meyer, J. H. F., and Dunne, T. T. (1991). Study approaches of nursing students: Effects of an extended clinical context. Med. Educ. 25: 497–516.
Meyer, J. H. F., and Parsons, P. (1989). Approaches to studying and course perceptions using the Lancaster Inventory: A comparative study. Stud. Higher Educ. 14: 137–153.
Meyer, J. H. F., Parsons, P., and Dunne, T. T. (1990a). Individual study orchestrations and their association with learning outcome. Higher Educ. 20: 67–89.
Meyer, J. H. F., Parsons, P., and Dunne, T. T. (1990b). Study orchestration and learning outcome: Evidence of association over time among disadvantaged students. Higher Educ. 20: 245–269.
Murray-Harvey, R. (1994). Learning styles and approaches to studying: Distinguishing between concepts and instruments. Br. J. Educ. Psychol. 64: 373–388.
Newstead, S. E. (1992). A study of two quick-and-easy methods of assessing individual differences in student learning. Br. J. Educ. Psychol. 62: 299–312.
Nisbett, R. E., and Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychol. Rev. 84: 231–259.
Ramsden, P., and Entwistle, N. J. (1981). Effects of academic departments on students' approaches to studying. Br. J. Educ. Psychol. 51: 368–383.
Reber, A. S. (1985). The Penguin Dictionary of Psychology, Penguin Books, Harmondsworth, UK.
Ribich, F. D., and Schmeck, R. R. (1979). Multivariate relationships between measures of learning style and memory. J. Res. Pers. 13: 515–529.
Richardson, J. T. E. (1990). Reliability and replicability of the Approaches to Studying Questionnaire. Stud. Higher Educ. 15: 155–168.
Richardson, J. T. E. (1994a). Cultural specificity of approaches to studying in higher education: A literature survey. Higher Educ. 27: 449–468.
Richardson, J. T. E. (1994b). Mature students in higher education: I. A literature survey on approaches to studying. Stud. Higher Educ. 19: 309–325.
Richardson, J. T. E. (1995). Cultural specificity of approaches to studying in higher education: A comparative investigation using the Approaches to Studying Inventory. Educ. Psychol. Meas. 55: 300–308.
Richardson, J. T. E. (1996). Measures of effect size. Behav. Res. Meth. Instrum. Comp. 28: 12–22.


Richardson, J. T. E. (1997). Meaning orientation and reproducing orientation: A typology of approaches to studying in higher education? Educ. Psychol. 17: 301–311.
Richardson, J. T. E. (2000). Researching Student Learning: Approaches to Studying in Campus-Based and Distance Education, SRHE and Open University Press, Buckingham, UK.
Richardson, J. T. E., MacLeod-Gallinger, J., McKee, B. G., and Long, G. L. (2000). Approaches to studying in deaf and hearing students in higher education. J. Deaf Stud. Deaf Educ. 5: 156–173.
Richardson, J. T. E., and Woodley, A. (1999). Approaches to studying in people with hearing loss. Br. J. Educ. Psychol. 69: 533–546.
Robinson, J. P., Shaver, P. R., and Wrightsman, L. S. (1991). Criteria for scale selection and evaluation. In Robinson, J. P., Shaver, P. R., and Wrightsman, L. S. (eds.), Measures of Personality and Social Psychological Attitudes, Academic Press, San Diego, CA, pp. 1–16.
Ross, M. (1989). Relation of implicit theories to the construction of personal histories. Psychol. Rev. 96: 341–357.
Ross, M., and Conway, M. (1986). Remembering one's own past: The construction of personal histories. In Sorrentino, R. M., and Higgins, E. T. (eds.), Handbook of Motivation and Cognition: Foundations of Social Behavior, Guilford Press, New York, pp. 122–144.
Schmeck, R. R. (1988). Individual differences and learning strategies. In Weinstein, C. E., Goetz, E. T., and Alexander, P. A. (eds.), Learning and Study Strategies: Issues in Assessment, Instruction and Evaluation, Academic Press, San Diego, CA, pp. 171–191.
Schmeck, R. R., and Geisler-Brenstein, E. (1989). Individual differences that affect the way students approach learning. Learn. Indiv. Differ. 1: 85–124.
Schmeck, R. R., Ribich, F., and Ramanaiah, N. (1977). Development of a self-report inventory for assessing individual differences in learning processes. Appl. Psychol. Meas. 1: 413–431.
Speth, C., and Brown, R. (1988). Study approaches, processes and strategies: Are three perspectives better than one? Br. J. Educ. Psychol. 58: 247–257.
Tait, H., and Entwistle, N. (1996). Identifying students at risk through ineffective study strategies. Higher Educ. 31: 97–116.
Trigwell, K., Prosser, M., and Waterhouse, F. (1999). Relations between teachers' approaches to teaching and students' approaches to learning. Higher Educ. 37: 57–70.
Vermunt, J. D. H. M., and van Rijswijk, F. A. W. M. (1988). Analysis and development of students' skill in self-regulated learning. Higher Educ. 17: 647–682.
Watkins, D., and Hattie, J. (1980). An investigation of the internal structure of the Biggs Study Process Questionnaire. Educ. Psychol. Meas. 40: 1125–1130.
Watkins, D., and Hattie, J. (1985). A longitudinal study of the approaches to learning of Australian tertiary students. Hum. Learn. 4: 127–141.
Watkins, D., and Regmi, M. (1996). Towards the cross-cultural validation of a Western model of student approaches to learning. J. Cross-Cult. Psychol. 27: 547–560.
White, P. A. (1989). Evidence for the use of information about internal events to improve the accuracy of causal reports. Br. J. Psychol. 80: 375–382.
Zeegers, P. (2001). Approaches to learning in science: A longitudinal study. Br. J. Educ. Psychol. 71: 115–132.