Assessment & Evaluation in Higher Education, 2013
Vol. 38, No. 4, 389–406, http://dx.doi.org/10.1080/02602938.2011.637156

First-year students’ appraisal of assessment tasks: implications for efficacy, engagement and performance
Alf Lizzio* and Keithia Wilson

Griffith Institute for Higher Education, Griffith University, Queensland, Australia
*Corresponding author. Email: a.lizzio@griffith.edu.au
© 2013 Taylor & Francis

This study investigated students’ appraisals of assessment tasks and the impact
of this on both their task-related efficacy and engagement and subsequent task
performance. Two hundred and fifty-seven first-year students rated their experi-
ence of an assessment task (essay, oral presentation, laboratory report or exam)
that they had previously completed. First-year students evaluated these assess-
ment tasks in terms of two general factors: the motivational value of the task
and its manageability. Students’ evaluations were consistent across a range of characteristics and levels of academic achievement. Students’ evaluations of
motivational value generally predict their engagement and their evaluations
of task manageability generally predict their sense of task efficacy. Engagement
was a significant predictor of task performance (viz. actual mark) for exam and
laboratory report tasks but not for essay-based tasks. Findings are discussed in
terms of the implications for assessment design and management.
Keywords: first-year assessment; appraisal of assessment; student efficacy

This study seeks to understand the factors that students use to appraise or evaluate
an assessment task and the consequences of their perceptions for their subsequent
motivation and performance. An evidence-based understanding of the processes that
influence students’ engagement with assessment is particularly important for inform-
ing our educational practice with first-year or commencing students who are rela-
tively unfamiliar with the culture and context of university-level assessment.
While the way in which students approach learning may, to some extent, be a
function of their personal disposition or abilities, the nature of the learning task
itself and the environment in which it is undertaken also significantly mediate their
learning strategy (Fox, McManus, and Winder 2001). More accurately, it is stu-
dents’ perceptions, rather than any objective features of tasks, that are crucial in
shaping the depth of their engagement. In this sense, students’ learning approaches
(process factors) and academic performance (product factors) are influenced by their
appraisal of, and interaction with, the curriculum content, design and culture of their
current ‘learning system’ (presage factors) (Biggs 2003). Central to this process is a
significant body of research indicating students’ perceptions of the methods, modes
and quantity of assessment to be perhaps one of the most important influences on
their approaches to learning (Entwistle and Entwistle 1991; Lizzio, Wilson, and
Simons 2002; Ramsden 1992; Trigwell and Prosser 1991). Indeed, it has been
argued that students’ perceptions of assessment tasks ‘frame the curriculum’ and are
more influential than any intended design elements (Entwistle 1991), to the extent
of potentially overpowering other features of the learning environment (Boud
1990).
The appreciation that assessment functions not only to grade students, but also
fundamentally to facilitate their learning, is central to the paradigm evolution from
a traditional summative ‘testing culture’ to an integrated ‘assessment culture’
(Birenbaum 1997). Understanding how students appraise or construct their learning
is the foundation of design frameworks such as integrated teaching (Wehlburg
2010) and constructive alignment (Biggs 1996a) which view students’ engagement
and performance with assessment as a complex interaction between learner, task
and context. From this perspective, the consequential validity of an assessment task
or mode (viz. its positive influence on students’ learning approaches and outcomes)
is a key consideration (Dierick and Dochy 2001). The importance of students perceiving course objectives and pedagogy to be congruent, besides satisfying the test
of ‘common-sense’, has also received empirical support. For example, a curriculum
that emphasises acquisition of knowledge and a concurrent assessment package that
emphasises problem solving (Segers, Dierick, and Dochy 2001) have been found to
contribute to both sub-optimal learning and student resentment. The implications of
these findings for educational practice are quite fundamental. A foundational educa-
tional discipline would appear to be the need to distinguish between our educational
intentions (however worthy) and their impact on students. This requires us to more
closely examine the ‘hidden curriculum’ (Snyder 1971) of our assessment practices.
If we want to understand and evaluate our learning environments we need to
authentically understand how students experience them. Thus, if our aspirations are
to influence students towards deeper learning and higher-order skill development
then a prerequisite task is to appreciate students’ perceptions of the assessment tasks
with which we ask them to engage.

Students’ perceptions of assessment


Students’ preferences for different modes of assessment (Birenbaum 2007) and the
effects of different assessment methods (e.g. multiple choice formats and essays) on
whether students adopt deep or surface approaches to study (Scouller 1997; Thompson and Falchikov 1998; Trigwell and Prosser 1991) have been well-established. It is interesting to note that while there may be clear differences in the
ways that students with different study strategies approach assessment, there are
also significant commonalities in their criticisms of the processes of some traditional
assessment practices (Lindblom-Ylanne and Lonka 2001). A series of qualitative
studies have investigated students’ perceptions of the assessment characteristics that
they report as positively influencing their learning and engagement. McDowell
(1995) found that perceived fairness was particularly salient to students’ perceptions
of both the substantive validity and procedural aspects of an assessment task.
Sambell, McDowell, and Brown (1997) extended this early work and identified the
educational values (authentic tasks, perceived to have long-term benefits, applying
knowledge), educational processes (reasonable demands, encourages independence
by making expectations clear) and the consequences (rewards breadth and depth in
learning, rewards effort) of assessment that influence students’ perceptions of its
validity. More recently, Savin-Baden (2004) examined students’ experience of
assessment in problem-based learning programmes and identified two meta-themes in students’ comments, and conceptualised these as forms of student disempowerment: unrewarded learning (including the relationship between the quantity of work
and its percentage weighting), and disabling assessment mechanisms, including both
processes (e.g. lack of information and inadequate feedback) and forms (assessment
methods that did not fit with espoused forms of learning). Further empirical support
for the centrality of notions of empowerment and fairness to students comes from
studies of social justice processes in education. For example, Lizzio, Wilson, and
Hadaway (2007) found students’ perceptions of the fairness of their learning environments were strongly influenced both by the extent to which they felt personally respected by academic staff and by the adequacy of the informational and support systems provided for them to ‘do their job’, of which assessment was a core component.
What these investigations share is an appreciation of the value of a situated and
systemic investigation into the student experience. The focal question is not just
‘what type of assessment’ but ‘what type of assessment system’ are students experi-
encing on both cognitive and affective levels. Thus, there is a balanced concern
with the impact of both assessment content and assessment process (Gielen, Dochy,
and Dierick 2003) on student learning and satisfaction. Hounsell, McCune, and
Hounsell (2008) have extended this guiding notion of ‘assessment context’ by oper-
ationalising the prototypical stages of an assessment lifecycle and identifying stu-
dents’ needs and concerns as they engage with, and seek to perform, assessment
tasks (viz. their prior experiences with similar assessment, their understanding of
preliminary guidance, their need for ongoing clarification, feedback, supplementary
support and feed-forward to subsequent tasks). Hounsell et al. in particular demonstrated that the perceived adequacy of guidance and feedback as students attempted tasks was central to their success. Lizzio and Wilson (2010) utilised Hounsell,
McCune, and Hounsell’s (2008) framework to investigate first-year students’ apprai-
sal of a range of assessment tasks using focus groups and individual interviews.
These students evaluated their early university assessment tasks in terms of seven
dimensions (familiarity with type of assessment, type and level of demand/required
effort, academic stakes, level of interest or motivation, felt capacity or capability to
perform the task, perceived fairness and the level of available support). The dimen-
sions identified in this study confirm a number of the themes (e.g. demand, motiva-
tion, fairness and support) commonly identified in previous investigations of student
perceptions of assessment.
Parallel to this line of inquiry has been the development of a number of good
practice frameworks to guide the design of assessment protocols. For example,
Gibbs and Simpson (2004) identified 11 conditions under which assessment sup-
ports students’ learning, and developed the assessment experience questionnaire
(AEQ) as a measure of the extent to which these conditions (viz. good distribution
of time demands and student effort, engagement in learning, appropriateness of
feedback, students’ use of feedback) were evident in a particular learning context.
More recently, Boud and Associates (2010) developed a set of seven propositions
for assessment reform in higher education. The propositions address the centrality
of assessment to the learning process (assessment for learning placed at the centre
of subject and programme design) and both questions of academic standards (need
for assessment to be an inclusive and trustworthy representation of student achieve-
ment) and the cultural (students are inducted into the assessment practices and cul-
tures of higher education) and relational (students and teachers become responsible partners in learning and assessment) dimensions of assessment practice that can serve to help or hinder student engagement and learning. The conceptualisation of
‘assessment systems’ and the empirical validation of underlying assumptions from a
student’s perspective is a useful basis for guiding change processes in higher educa-
tion. Thus, for example, Gibbs and Simpson’s framework has been employed to
provide the evidence-base in collaborative action research projects to improve
assessment practices (McDowell et al. 2008). Similarly, Carless (2007) reports the
use of a learning-oriented assessment framework to inform action learning at both
the institutional and course level.

Broader research traditions


What can we learn about good assessment practice from broader research traditions?
There is considerable convergent support from studies of psychological needs, cognitive load and well-being, for the importance of both appreciating students’ key
role in constructing the meaning or ‘making sense’ of their experiences with assess-
ment and actively incorporating this into design and management processes.
Students’ psychological needs may be particularly influential on their approach
to assessment tasks. Cognitive evaluation theory proposes that people appraise
events in terms of the extent to which their needs to feel competent and in control
will be met (Deci and Ryan 2002). From this perspective, events or tasks that posi-
tively influence students’ perceived sense of competence will enhance their motiva-
tion. Legault, Green-Demers, and Pelletier (2006) identified four dimensions of
academic motivation (students’ ability and effort beliefs, characteristics of the task
and value placed on the task) and found that in a particular context these may inter-
act to produce either high levels of motivation or feelings of general helplessness.
Clearly, a student’s appraisal of an assessment task and what it may demand/require
from them in terms of effort and ability appear to be important considerations in
how they will both engage (or detach) and approach its performance. This of course
raises the question: what are the characteristics of assessment design that optimally
support a student’s sense of competence and autonomy? Certainly, how the assessment process is managed may be salient to students’ efficacy and approach,
as constructive task-related interpersonal support has been found to encourage
self-determined motivation (Hardre and Reeve 2003). Given that students look to
information provided from both the task and their teachers to affirm their academic
capability, appropriately matching task demands to student capabilities may be a
key design consideration.
Cognitive load theory also provides insights into the processes that are salient to
students’ experience of assessment tasks, in terms of both their efficiency and effec-
tiveness, and their appropriateness to learners’ levels of expertise. From this per-
spective, it is not just students’ performance outcome on an assessment (viz. their
grade) that is of interest, but also the degree of cognitive effort required. Students’
cognitive load typically derives from two sources, the intrinsic load of a learning
task (usually the level of interactivity or complexity of its constituent elements) and
any extraneous load placed on the student as a result of its management (e.g. poor
timing and clarity of scaffolding information to learners and mismatched instruc-
tional procedures) (Kalyuga 2011). Importantly, since load is additive, assessment
tasks with higher intrinsic load are more likely to be negatively affected by process
factors which increase extraneous load (Paas, Renkl, and Sweller 2003).
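This additive relationship can be stated compactly. The following is a standard textbook rendering of the cognitive load account rather than an equation from the article itself, and the working-memory capacity symbol C is our own illustrative notation:

```latex
% Additive cognitive load (illustrative notation, not from the article):
% learning is compromised when total load exceeds working-memory capacity C.
L_{\text{total}} = L_{\text{intrinsic}} + L_{\text{extraneous}} \leq C
```

On this reading, a well-managed assessment process cannot reduce the intrinsic term, which is fixed by the design of the task itself, but it can keep the extraneous term small enough that the sum stays within learners' capacity.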

The question of ‘managing cognitive load’ may be particularly important for novice
learners who lack the working schemas to integrate new knowledge. Tasks that
require students to find or construct essential information for themselves (unguided
or minimal guidance) have been found to result in less direct learning and knowl-
edge transfer compared to tasks where scaffolded guidance is provided (Kirschner,
Sweller, and Clark 2006).
The goal of protecting or enhancing student well-being may also be relevant to
our management of the assessment process. University students generally report
lower levels of well-being than the general population, with first-year being a time
of heightened anxiety (Cooke et al. 2006). Given the identified vulnerabilities of
this population, there may be significant ethical and mental health dimensions to
good assessment practice. Work stress and well-being can be influenced by the psy-
chosocial system within which a person functions and effective person–system
interaction is commonly conceptualised as the necessary balance of the factors of reasonable job or task demand, necessary support and opportunities for control and
positive working environment (Kanji and Chopra 2009). Whether high demands
result in positive activation or strain will depend on the presence of appropriate sup-
port and felt control (Barnes and Van Dyne 2009; Karasek 1979). These findings
can be readily generalised to the leadership of learning environments, with implica-
tions for both the design of assessment tasks and the support of students through
the performance process.

First-year students
The present study is particularly concerned with first-year or commencing students’ experiences of assessment. Early academic experiences have been
identified as critical to the formation of tentative learner identities and self-efficacy
(Christie et al. 2008). Clearly, ‘assessment backwash’ (Biggs 1996b), whereby badly
designed or poorly organised assessment can unintentionally impair students’ learn-
ing, is more likely with a commencing student population. Indeed, poorly matched
and managed assessment is arguably a major contributor to the phenomenon of pre-
mature ‘student burnout’ (viz. feelings of exhaustion and incompetence and a sense
of cynicism and detachment) (Schaufeli et al. 2002) and disengagement. From a
positive perspective, first-year learning environments potentially provide our greatest
opportunities to not only align our educational intentions and impact, but also to
work collaboratively with new students to develop an evidence-based culture of
success. An empirically supported understanding of the design and process elements
of our ‘assessment systems’ can potentially make a contribution to the important
challenges of student engagement and retention.
However, it should be noted that a holistic approach to assessment that
actively engages with students’ needs and expectations is a somewhat contested
proposition. On the one hand arguments are made for greater accommodation
and inclusion of students’ voices and circumstances around assessment. In Biren-
baum’s (2007) terms this requires moving away from an ‘inside out’ approach
where the ‘insiders’ (teachers) assume ‘what’ and ‘how’ needs to be assessed, to
a greater engagement with an ‘outside in approach’ where students’ preferences
are legitimated. On the other hand there is an increasing concern with the appar-
ent rise within the student population of notions of ‘academic entitlement’
(Greenberger et al. 2008), and an invocation to academics not to collude with
or accommodate ‘student demands’. The present paper takes the position that,
with first-year students in particular, pejorative labels of ‘demanding and entitled
students’ do little to advance practice, and that from a systems perspective, such
behaviour may also be understood as ‘students trying to assertively negotiate
with institutions’ which may be perceived as somewhat indifferent to their suc-
cess or well-being. In this sense the genesis of this observed behaviour may be
more interactive than individual.

Aims
The present study aims to contribute to our understanding of the aspects of assess-
ment that influence first-year students’ engagement, confidence and learning out-
comes. Research to date has identified a number of design and process factors that
may be particularly salient to students; however, there is a need for the structure of
these to be more clearly identified and their relative impact on first-year students’
efficacy and performance assessed.
Thus the focal research questions for the present study are:

• What are the general dimensions which first-year students use to appraise or evaluate assessment tasks?
• How do first-year students’ appraisals influence their sense of efficacy and approach to assessment tasks?
• What is the contribution of first-year students’ appraisals and approaches to their actual performance on assessment tasks?
• What are the implications of these insights for the design and management of assessment?

Method
Participants
Two hundred and fifty-seven first-year students (212 females and 45 males)
across four disciplinary programmes (Medical Science [16], Nursing [65], Psy-
chology [81] and Public Health [38]) participated in this study. The mean age
of the sample was 25 years (SD = 9.1 years) and 56% were the first in their fam-
ily to attend university. Students were in their second semester of university
study and their mean grade point average (GPA) on their first semester courses
was 5.5 on a seven-point scale.

Procedure
Students were emailed at the beginning of their second semester of study and
invited to participate in an online survey. Approximately 35% of the targeted
student population responded to the invitation. The survey process asked students
to ‘Reflect back on your first semester of university study and to recapture the
experience of ‘‘being new to it all’’. Recall your early thoughts and feelings
about the assessment tasks you had to complete’. Students were then asked to
select one type of assessment task (exam, oral presentation, essay or laboratory report) and to use the scales provided to ‘honestly tell us how you remember thinking and feeling as you approached this task’.
Table 1. Structure of first-year students’ appraisal of assessment tasks.

Factor 1 items (loadings):
1. This assessment task requires us to learn a lot of material (.88)
2. The weighting of this task is sufficiently high for me to be concerned about how it will impact on my overall grade (.87)
3. This assessment task requires us to really think about the material (.84)
4. This assessment task requires me to demonstrate mastery of skills (.83)
5. This assessment task makes sense to me given the learning objectives of this course (.72)
6. I can see how doing this assessment task will develop my academic skills (.71)
7. I have the skills required to do this assessment task (.70)
8. I have recently done something similar to this task (.68)

Factor 2 items (loadings):
9. My sense is that there will be good mechanisms in place to support students with this assessment task (.80)
10. I think the weighting of this assessment task matches the time required to do it (.76)
11. It is fairly safe to ‘have a go’ with this assessment task (.74)
12. I feel that this assessment task is organised fairly (.72)
13. I think this assessment task is at an appropriate level for first-year students (.71)
14. This assessment task allows individual students sufficient personal control over how well they will perform (.71)
15. I have a good sense of the workload involved in this task (.68)

Percentage of variance: Factor 1 = 42.43; Factor 2 = 39.21.

Some students (n = 87) completed the survey for more than one assessment task. In the first section of the
survey students rated their selected assessment task on a set of 28 items opera-
tionalising each of the seven dimensions (viz. familiarity with type of assess-
ment, type and level of demand/required effort, academic stakes, level of interest
or motivation, felt capacity or capability to perform the task, perceived fairness
and the level of available support) identified by first-year students as how they
appraise or evaluate assessment in Lizzio and Wilson’s (2010) qualitative study.
Items were piloted with a small group of students (n = 12) to establish clarity of
wording and appropriate matching of items to dimensions. In the second section
of the survey students evaluated their response to the assessment task in terms
of their self-reported levels of task-related confidence (I am confident that I will
do well on this assessment task), anxiety (I’m feeling fairly anxious about this
assessment task), achievement motivation (I want to do very well on this assess-
ment task) and approach to learning on the task (I will be approaching this task
with the aim of learning as much as possible). Students were also asked to
report their level of academic performance (percentage mark) on their selected
assessment task. Finally, students provided demographic (age and gender), back-
ground (equity group membership, prior family participation in higher education)
and academic (university entrance score, GPA for their first semester courses)
information.

Results
A series of analyses were conducted. Firstly, exploratory and confirmatory factor
analyses were used to investigate the structure of first-year students’ appraisals of
assessment tasks. Secondly, correlation analyses were used to establish if students’
appraisal processes were associated with a range of demographic, background or
academic achievement factors. Finally, structural equation modelling was used to
establish the relationships between students’ appraisal processes, their approach to
assessment tasks and their performance on those tasks.

Structure of students’ appraisal of assessment


The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy indicated the data
set was highly structured and appropriate for further factor analysis (KMO = 0.94).
Students’ responses to the 28 items describing their appraisal of the four assessment
tasks (exam, oral presentation, essay and laboratory report) were analysed using
exploratory factor analysis (Principal axis factor (PAF) analysis with varimax rota-
tion). This initial analysis yielded a two-factor solution accounting for 79.36% of
the variance, with 17 items loading on factor 1 (72.19% variance, eigenvalue
20.21) and six items on factor 2 (7.17% variance, eigenvalue 2.0). Cross-loading,
low loading and highly correlated items were removed, and a PAF analysis was
then conducted on the reduced 15 item set. This analysis produced an interpretable
two-factor solution with simple structure accounting for 81.64% of variance with
eight items loading on factor 1 (variance 42.43%) and seven items on factor 2
(variance 39.21%) (see Table 1).
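For concreteness, the screening and extraction steps reported above can be sketched in Python with the open-source factor_analyzer package. The data file and item column names below are hypothetical stand-ins (the study’s raw data are not published), and factor_analyzer’s ‘principal’ method is used as the closest available match to the principal axis factoring reported here:

```python
# Minimal sketch of the reported screening and extraction pipeline.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

# 257 students x 28 appraisal items (Likert ratings); hypothetical file
responses = pd.read_csv("appraisal_items.csv")

# Kaiser-Meyer-Olkin measure of sampling adequacy (article reports .94)
_, kmo_overall = calculate_kmo(responses)
print(f"KMO = {kmo_overall:.2f}")

# Principal-factor extraction with varimax rotation, approximating the
# article's principal axis factoring (PAF) analysis
efa = FactorAnalyzer(n_factors=2, rotation="varimax", method="principal")
efa.fit(responses)

loadings = pd.DataFrame(efa.loadings_, index=responses.columns,
                        columns=["Factor 1", "Factor 2"])
# Blank out small loadings for readability, as in Table 1
print(loadings.where(loadings.abs() >= 0.40).round(2))

# Proportion of variance explained by each rotated factor
print(efa.get_factor_variance()[1].round(4))
```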
The first factor was defined by four themes related to first-year students’ percep-
tions of the motivational content and value of an assessment task: the perceived
academic stakes of the task (How important is it to do this well?), its level of
intellectual challenge (What will it take me to do this?), the motivational value of
the task (Do I want to do this?), and students’ sense of task capability (Can I do
this?). The academic stakes of the assessment task were pragmatically reflected in
students’ instrumental perceptions of potential contribution to their grades (The
weighting of this task is sufficiently high for me to be concerned about how it will
impact on my overall grade). The intellectual challenge of an assessment task was
reflected in terms of perceived cognitive load (This assessment task requires us to
learn a lot of material), and intellectual demands (This assessment task requires us
to really think about the material; This assessment task requires me to demonstrate
mastery of skills). The motivational value of an assessment task was reflected in
terms of items related to clarity of learning outcomes (I can see how doing this
assessment task will develop my academic skills) and curriculum alignment (This
assessment task makes sense to me given the learning objectives of this course).
First-year students’ sense of capability to undertake the assessment task was also
expressed in terms of prior familiarity (I have recently done something similar to
this task) and confidence (I have the skills required to do this assessment task).
The second factor contained four themes related to students’ perceptions of the
manageability of an assessment task: fairness (How fair is this task?), support (Who
can help with this task?), self-protection (How safe is this task?) and self-
determination (Is this in my hands?). The fairness aspects were reflected in stu-
dents’ perceptions of the required difficulty (I think this assessment task is at an
appropriate level for first-year students), required investment (I think the weighting
of this assessment task matches the time required to do it) and level of organisation
(I feel that this assessment task is organised fairly). Students’ sense of appropriate
support with an assessment task was expressed through the perceived availability of
staff (My sense is that there will be good mechanisms in place to support students
with this assessment task) and workload management (I have a good sense of the
workload involved in this task). Students also associated items related to ego threat
and shame (It is fairly safe to ‘have a go’ with this assessment task without a high
risk of ‘looking stupid’ or making a mistake) and a sense of personal control (This
assessment task allows individual students sufficient personal control over how well
they will perform) on this factor.
Confirmatory factor analysis (CFA) was then used to test the fit of the data to
either a single or two-factor model. Unlike exploratory factor analysis which pro-
vides only an indirect test of a theoretical model, CFA provides a direct test of the
proposed models (Bernstein and Teng 1989). Given the high level of inter-factor
correlation (.85) the first analysis tested whether all items could be associated with
a global assessment appraisal factor. However, this yielded a very poor fit with none
of the indices meeting accepted standards. A two-factor conceptualisation of stu-
dents’ appraisal of assessment was then tested and after removal of co-varying
items yielded a good level of fit of the model to data (χ²(13) = 20.42,
Tucker–Lewis Index [TLI] = .97, Comparative Fit Index [CFI] = .98, Goodness of fit
index [GFI] = .97, Adjusted goodness of fit index [AGFI] = .94, Root mean square
error of approximation [RMSEA] = .06). A good fit is generally indicated by mea-
sures of incremental fit (closer to 1 is better, with values in excess of .9 recom-
mended) and measures of residual variance [RMSEA recommended to be not
higher than .08] (Hu and Bentler 1999). The reduced first-factor was defined by
three items concerned with the perceived motivational content and value of an
assessment task: learning outcomes (.70) (I can see how doing this assessment task
will develop my academic skills), curriculum alignment (.74) (This assessment task
makes sense to me given the learning objectives of this course) and intellectual
demand (.35) (This assessment task requires me to demonstrate mastery of skills).
The reduced second factor was defined by four items concerned with the perceived
manageability of an assessment task: fairness of required investment (.81) (I think
the weighting of this assessment task matches the time required to do it),
appropriate difficulty (.89) (I think this assessment task is at an appropriate level
for first-year students), transparency of workload (.77) (I have a good sense of what
workload is involved in this task) and appropriate support (.41) (My sense is that
there will be good mechanisms in place to support students with this assessment
task). Inter-factor correlation was reduced to .68 indicating that the confirmatory
analysis had better differentiated these two assessment appraisal dimensions.
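A minimal sketch of this confirmatory step, using the Python SEM library semopy, is given below. The lavaan-style model description and the variable names are our own hypothetical labels for the retained Table 1 items, not the authors’ specification:

```python
# Minimal sketch of the two-factor CFA under hypothetical item names.
import pandas as pd
import semopy

df = pd.read_csv("appraisal_items.csv")  # hypothetical file

desc = """
# motivational content and value: items 6, 5 and 4 of Table 1
motivation_value =~ develop_skills + curriculum_alignment + mastery_demand
# manageability: items 10, 13, 15 and 9 of Table 1
manageability =~ weighting_fair + appropriate_level + workload_clear + support
# the two appraisal factors are allowed to correlate
motivation_value ~~ manageability
"""

cfa = semopy.Model(desc)
cfa.fit(df)

# Compare against the reported fit: chi2(13) = 20.42, TLI = .97,
# CFI = .98, GFI = .97, AGFI = .94, RMSEA = .06
print(semopy.calc_stats(cfa)[["DoF", "chi2", "TLI", "CFI",
                              "GFI", "AGFI", "RMSEA"]])
```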

Relationship of appraisal factors with student characteristics


Correlations were conducted to examine the associations between student characteristics and the appraisal dimensions (see Table 2). Students’ appraisals of assessment tasks were not significantly associated with demographic (age and gender), background (first-generation status, equity group membership and English as a second language) or level of academic achievement (tertiary entrance score or first-semester GPA) variables. This suggests that students’ perceptions of assessment tasks were not significantly confounded by variations in their characteristics or circumstances. While this pattern of findings does not necessarily establish the validity of students’ evaluations, it does increase our confidence in their consistency or consensus. Results suggested that students’ appraisals of assessment were significantly associated with their task-related confidence, anxiety and task orientation. This indicated the value of further analyses of the contribution of these variables to assessment performance.

Table 2. Correlations of assessment appraisal factors and student characteristics.

                                                  1    2    3    4    5    6    7      8      9      10     11     12    13
1  Age                                            1
2  Gender                                        .16   1
3  University entrance score                     .08  .02   1
4  First in family at university                 .03  .16  .04   1
5  English as second language                    .04  .05  .05  .08   1
6  Equity group membership                       .05  .03  .06  .05  .09   1
7  Factor 1 task motivational content and value  .01  .04  .04  .06  .10  .06   1
8  Factor 2 task manageability                   .02  .08  .02  .07  .06  .09  .67**   1
9  Task confidence                               .04  .12  .02  .10  .06  .07  .06    .46**   1
10 Task anxiety                                  .04  .15  .02  .05  .03  .06  .09    .48**  .76**   1
11 Task learning orientation                     .02  .06  .06  .08  .09  .04  .38**  .39*   .41**  .66**   1
12 Task achievement orientation                  .02  .10  .05  .08  .05  .08  .37**  .34**  .38**  .68**  .77**   1
13 Semester 1 GPA                                .07  .02  .03  .16  .02  .10  .09    .08    .04    .08    .11    .09    1

Note: GPA = grade point average. *p < .05; **p < .01.
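Operationally, the screening behind Table 2 amounts to a Pearson correlation matrix with significance flags. A minimal sketch follows, with hypothetical column names and data file:

```python
# Minimal sketch of the Table 2 screening step with pandas and SciPy.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")  # hypothetical file
cols = ["age", "gender", "entrance_score", "first_in_family",
        "esl", "equity_group",
        "factor1_motivation_value", "factor2_manageability",
        "task_confidence", "task_anxiety",
        "task_learning_orientation", "task_achievement_orientation",
        "semester1_gpa"]

# Pearson correlation matrix, rounded as in Table 2
print(df[cols].corr().round(2))

# Flag significance the way the table's note does (*p < .05, **p < .01)
def flagged_r(x: str, y: str) -> str:
    r, p = stats.pearsonr(df[x], df[y])
    return f"{r:.2f}" + ("**" if p < .01 else "*" if p < .05 else "")

print(flagged_r("factor1_motivation_value", "factor2_manageability"))
```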

General relationship of students’ appraisals to assessment approach and performance
Structural equation modelling (SEM) analyses (using the AMOS programme) were
conducted to test the contribution of students’ appraisals of assessment tasks to their
approaches and their actual outcomes on a task. This allowed examination of direct
and indirect relationships between variables and the extent to which a conceptual
model adequately fitted the empirical data. The first analysis tested the global rela-
tionship, across all assessment types, of students’ scores on the two appraisal factors
(perceived assessment motivational content and value and perceived assessment
manageability) to their reported approach to the task. Students’ ratings of their lev-
els of confidence and anxiety on an assessment task formed the latent variable
assessment task efficacy and students’ ratings of their levels of achievement motiva-
tion and deep learning orientation to the assessment task formed the latent variable
assessment task engagement, which is conceptually similar to the construct of a
deep-achieving approach to learning (Donnon and Violato 2003; Fox, McManus,
and Winder 2001). This analysis produced a moderate model fit (χ²(39) = 117.244, p < .0001; χ²/df ratio = 3.04; TLI = .86; CFI = .90; GFI = .91, AGFI = .85,
RMSEA = .10) with theoretically interpretable associations between variables. See
Figure 1 for a simplified presentation of the model.
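The general model can be sketched in the same semopy syntax. For simplicity the sketch treats the two appraisal factor scores as observed predictors and uses hypothetical variable names, so it illustrates the modelling approach rather than reproducing the authors’ AMOS specification:

```python
# Minimal sketch of the Figure 1 structural model under hypothetical names.
import pandas as pd
import semopy

df = pd.read_csv("survey.csv")  # hypothetical file

desc = """
# measurement part: two latent response variables
engagement =~ learning_orientation + achievement_orientation
efficacy   =~ task_confidence + task_anxiety
# structural part: appraisal factors predict engagement and efficacy
engagement ~ factor1_motivation_value
efficacy   ~ factor2_manageability
"""

sem = semopy.Model(desc)
sem.fit(df)

print(sem.inspect(std_est=True))   # standardised path estimates
print(semopy.calc_stats(sem))      # compare with reported CFI = .90, RMSEA = .10
```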
Students’ perceptions of the motivational content and value of assessment tasks
significantly positively predicted their level of task engagement. Students appear to
be describing a pattern of engagement whereby ‘good academic behaviour’ (viz.
wanting to learn and do well) is facilitated when they experience an assessment task
to be aligned with broader learning objectives, be appropriately challenging and to
be of value to them. Students’ perceptions of the manageability of assessment (viz.
its fairness and level of support) significantly positively predicted their sense of task
efficacy. In this regard, students are making a clear association between perceived
task processes (viz. the more clear, fair, supported and in-control we feel) and their
sense of task-related efficacy. This is certainly consistent with previous findings
whereby constructive task-related interpersonal support encouraged self-determined
motivation (Hardre and Reeve 2003).

General relationship of students’ appraisals with specific assessment tasks


While the above analysis provides a level of insight as to the ‘general conceptual
model’, it is also important to understand how these processes function with specific
types of assessment tasks. Separate analyses were conducted to test relationships
between first-year students’ perceptions and approaches and assessment outcomes
for four sub-sets of the overall sample: essay, oral presentation, laboratory report
and closed-book exam tasks. Students’ reported marks on each task were converted
to a standard percentage value to form the dependent variable assessment outcome.
[Figure 1 near here: a path diagram in which Perceived Assessment Motivation Value predicts Assessment Task Engagement (indicated by Task Learning Orientation and Task Achievement Orientation) and Perceived Assessment Manageability predicts Assessment Task Efficacy (indicated by Task Confidence and Task Anxiety).]

Figure 1. Structural equation model of the influence of first-year students’ appraisal of assessment on their reported efficacy and engagement.
Note: All other associations p < .0001. *p < .005.

These analyses must be considered exploratory given that, because of smaller sam-
ple sizes (exam n = 137; laboratory report n = 85; essay n = 122; oral presentation
n = 85), they do not satisfy the recommended ratio of participants to parameters for a
latent variable analysis (Bentler and Chou 1987). Thus, while the SEM analyses for
each of these tasks produced interpretable models, because of limited sample sizes, the
levels of fit were relatively modest (exam CFI = .87, RMSEA = .10; essay CFI = .87, RMSEA = .12; oral presentation CFI = .83, RMSEA = .12; laboratory report CFI = .86,
RMSEA = .10). However, given the exploratory nature of the present study, these find-
ings are cautiously reported for theory development purposes (see Figure 2).
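Operationally, these task-specific analyses amount to refitting the same structural model on each task sub-sample, with reported marks added as the outcome variable. A minimal sketch, reusing the hypothetical names from the earlier semopy examples ('task_type' and 'mark_percent' are likewise our own labels):

```python
# Minimal sketch: the structural model refitted per task sub-sample.
import pandas as pd
import semopy

df = pd.read_csv("survey.csv")  # hypothetical file

desc = """
engagement =~ learning_orientation + achievement_orientation
efficacy   =~ task_confidence + task_anxiety
engagement ~ factor1_motivation_value
efficacy   ~ factor2_manageability
mark_percent ~ engagement + efficacy
"""

# exam, essay, laboratory report and oral presentation sub-samples
for task, sub in df.groupby("task_type"):
    m = semopy.Model(desc)
    m.fit(sub)
    fit = semopy.calc_stats(m)
    print(task, "n =", len(sub),
          "CFI =", round(fit["CFI"].values[0], 2),
          "RMSEA =", round(fit["RMSEA"].values[0], 2))
```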
While the previous overall analysis across all assessment tasks produced clear
general associations, these task-specific analyses suggest that students’ appraisals of
motivational value and manageability will have different impacts on their task-
related engagement and efficacy depending on the type of task being undertaken.
Exam-based tasks appeared to function in the most straightforward fashion for stu-
dents. Thus, the greater the perceived motivational value of an exam the greater stu-
dents’ task engagement, which, in turn, contributed to a better assessment outcome.
Students’ relative familiarity with the examination format may also explain why
their perceptions of support do not predict their level of task efficacy. Consistently,
given that performance-based tasks such as oral presentation involve a level of anxi-
ety and public evaluation, it should not be surprising that perceived manageability
influenced students’ sense of efficacy more than the structured (laboratory report),
and traditionally private (examination) forms of assessment.
However, first-year students reported a different pattern of associations with
their experience of essay-based tasks. Students’ perceptions of manageability of an
[Figure 2 near here: task-specific path diagrams. Recoverable standardised paths: Perceived Assessment Motivation Value to Assessment Task Engagement: exam (.76)*, oral presentation (.50)**, laboratory report and essay (ns). Perceived Assessment Manageability to Assessment Task Engagement: essay (.42)*, all other tasks (ns). Perceived Assessment Manageability to Assessment Task Efficacy: essay (.33)**, oral presentation (.21)*, exam and laboratory report (ns). Assessment Task Engagement to Assessment Outcome: exam (.40)***, laboratory report (.27)**, essay and oral presentation (ns). Assessment Task Efficacy to Assessment Outcome: ns for all tasks.]

Figure 2. Structural equation models of the influence of first-year students’ appraisal of exam, laboratory report, essay and oral presentation assessment tasks on their reported efficacy, engagement and outcomes.
Note: ns = non-significant association; ***p < .001; **p < .01; *p < .05.

essay task were a strong positive predictor of both their essay writing efficacy and
their engagement with the task. What might be different about commencing stu-
dents’ perceptions of essay tasks that would, unlike other assessment tasks they
undertook, result in perceived support influencing both their efficacy and engage-
ment?
It may be that essay-based tasks are particularly problematic for first-year stu-
dents. As Hounsell’s (1984) seminal work demonstrated, students (particularly com-
mencing students) have varied and often contradictory conceptions of academic
essay writing, and in addition, there is considerable cross-disciplinary variability in
the opportunities to systematically develop this capability. McCune (2004), follow-
ing a longitudinal study of first-year psychology students’ learning, similarly con-
cluded that students had considerable difficulty in developing their conceptions of
essay writing and understanding expected disciplinary discourses. Beyond this, the
guidance provided by tutors was often ‘tacit and sophisticated’, and thus less acces-
sible to students than staff might expect. Assessment tasks such as exams and labo-
ratory reports have a comparatively clearer logic and explicit structure than essay
writing in its many potential forms. Thus, in this regard, university-level essays are
‘new territory’ with ‘hard to learn’ rules. Students in these circumstances may be pejoratively labelled as ‘anxious and dependent’ (and indeed they themselves report more anxiety), but from a systems perspective they are simply seeking to negotiate
tasks that are perhaps significantly more challenging and inherently ambiguous than
insiders ‘comfortably in the discourse’ may expect or intend. Unfortunately, unlike
other more familiar (e.g. exams) or structured (e.g. laboratory reports) assessment
tasks, students’ best intentions and efforts (viz. higher task engagement) were not
routinely reflected in higher grades. First-year students, in the present sample at
least, were perhaps yet to sufficiently develop the enabling conceptions and strate-
gies that mediate or bridge engagement (their intentions) and performance (their
outcomes).
Interestingly, only students’ reported level of task engagement (I will be
approaching this task with the aim of learning as much as possible; I want to do
very well on this assessment task) was a significant positive predictor of their actual
performance on an assessment task (viz. their marks). Thus clearly, outcomes may
be less a function of how students feel (anxiety, confidence) and more what they
actively seek to do (learn and achieve) in relation to a task. The present analysis
provided little support for previous findings that position self-efficacy as a strong
predictor of success (Gore 2006). Present findings are consistent with research dem-
onstrating that constructs such as self-efficacy are less influential on performance as tasks become more complex (Chen, Casper, and Cortina 2001), and that self-
efficacy expectations are less accurate in field settings (Stajkovic and Luthans
1998). It may be that first-year students are not well-equipped to make accurate
appraisals of their abilities in relatively unfamiliar learning contexts. Indeed, student
over-optimism has been identified as a significant contributor to academic failure in
the first year of university (Haynes et al. 2006).
The stronger contribution of students’ perceptions of assessment task value to
their task performance should not be taken to indicate that we can discount the
importance of effectively responding to students’ need for guidance and support
through the assessment lifecycle. Firstly, from a cognitive load perspective while
the structure of an assessment task establishes the intrinsic load of a task, an appro-
priate assessment management process reduces extraneous load on students. Sec-
ondly, both cognitive and affective processes are important to student satisfaction
and well-being. Thus, for example, while students may be able to ‘do well’ with a
challenging task they may also ‘become unnecessarily stressed’ in the process.
Designing challenging and engaging assessment tasks and implementing these
within a fair and supportive context is likely to enhance students’ satisfaction as
well as their achievement. This is particularly the case since variables strongly defining
students’ evaluation of assessment process (e.g. fairness) have been shown to influ-
ence higher order affective outcomes such as identification and belonging (Lizzio,
Wilson, and Hadaway 2007).

Implications for practice


Present findings provide some guidance as to the elements that first-year students
use to appraise their assessment tasks. In terms of assessment task design, students
are particularly sensitive to a task’s motivational potential. In terms of assessment
task process, students are particularly aware of the perceived manageability of a
task (as judged by appropriate weighting, organisation and support). Given that stu-
dents’ perceptions of task motivational value and manageability both contribute to
their sense of efficacy and engagement in different ways, optimal learning is more
likely to be achieved if assessment is not so much ‘set for students’ as ‘discussed
with them’. This may have a twofold benefit: firstly, establishing a clear and explicit
assessment system, and secondly, cueing students to the implicit processes that may
be influencing their engagement. In this sense, staff–student dialogue contributes to
Assessment & Evaluation in Higher Education 403

the higher-order goal of enabling students to more effectively self-regulate with unfamiliar or challenging tasks.

Future research
There are a number of limitations with the present study that require consideration
and should be addressed in future research. The present study was retrospective in
design, with students reporting their ‘remembered perceptions’, which may raise
questions of accuracy of recall. Future research should seek to address these ques-
tions by prospectively monitoring students’ progress through the assessment lifecy-
cle from the point of first contact onwards. A second limitation of the retrospective
nature of this study concerns the composition of the student sample. Failed and
non-retained students are underrepresented in the present sample and thus present
findings may have limited generalisability to students with at-risk characteristics.
Only modest levels of fit were obtained due to restricted sample sizes for the sepa-
rate analyses of each type of assessment. Clearly, present findings should be repli-
cated with larger and more diverse student populations. The variability in the
pattern of associations across assessment types (particularly essay tasks) strongly
indicates that analyses should be conducted at the task specific level and only cau-
tiously aggregated to a general model. In this regard, students’ appraisals of diverse
assessment tasks (e.g. group work) should also be investigated to determine the
general applicability of present findings. Finally, consideration should be given to
investigating the extent to which individual difference variables may mediate the
relationships between students’ appraisal of assessment tasks and their subsequent
engagement and performance. For example, the extent to which students actively
deconstruct both the tacit and explicit expectations or cues of assessment (viz. the
extent to which they are strategic cue-seekers, cue-conscious or cue deaf/oblivious)
(Miller and Parlett 1974), may be particularly influential. Similarly, students’ dispo-
sitional approaches to learning should be measured to determine the differential
contribution of general presage factors and appraisals of specific assessment tasks to
their subsequent engagement and performance.

Acknowledgement
This study was funded by a grant from the Australian Learning and Teaching Council.

Notes on contributors
Alf Lizzio is the Director, Griffith Institute for Higher Education. He has led a number of
large-scale institutional projects in learning and teaching. His research interests include
learning and teaching quality, professional learning and development and academic
leadership.

Keithia Wilson is a professor of psychology at Griffith University and an Australian Learning and Teaching Council National Fellow for the first-year experience. She is engaged in a
number of projects focusing on enhancing first-year learning environments and supporting
the success of students from diverse backgrounds. Her research interests include students’
perceptions of their learning environments, learning and teaching quality and assessment
practices.

References
Barnes, C.M., and L. Van Dyne. 2009. I’m tired: Differential effects of physical and emo-
tional fatigue on workload management strategies. Human Relations 62: 59–92.
Bentler, P.M., and C.P. Chou. 1987. Practical issues in structural equation modelling. Socio-
logical Methods and Research 16: 78–117.
Bernstein, I.H., and G. Teng. 1989. Factoring items and factoring scales are different: Spuri-
ous evidence for multidimensionality due to item categorization. Psychological Bulletin
105: 467–77.
Biggs, J.B. 1996a. Enhancing teaching through constructive alignment. Higher Education
32: 347–64.
Biggs, J.B. 1996b. Assessing learning quality: Reconciling institutional, staff and educational
demands. Assessment & Evaluation in Higher Education 21: 5–15.
Biggs, J.B. 2003. Teaching for quality learning at university. Buckingham: Open University
Press.
Birenbaum, M. 1997. Assessment preferences and their relationship to learning strategies and orientations. Higher Education 33: 71–84.
Birenbaum, M. 2007. Assessment and instruction preferences and their relationship with test
anxiety and learning strategies. Higher Education 53: 749–68.
Boud, D. 1990. Assessment and the promotion of academic values. Studies in Higher Edu-
cation 15: 101–11.
Boud, D., and Associates. 2010. Assessment 2020: Seven propositions for assessment
reform in higher education. Sydney: Australian Learning and Teaching Council.
Carless, D. 2007. Learning-oriented assessment: Conceptual bases and practical implications.
Innovations in Education and Teaching International 44: 57–66.
Chen, G., W.J. Casper, and J.M. Cortina. 2001. The roles of self-efficacy and task complex-
ity in the relationships among cognitive ability, conscientiousness, and work-related per-
formance: A meta-analytic examination. Human Performance 14: 209–30.
Christie, H., L. Tett, V.E. Cree, J. Hounsell, and V. McCune. 2008. ‘A real rollercoaster of
confidence and emotions’: Learning to be a university student. Studies in Higher Educa-
tion 33: 567–81.
Cooke, R., B.M. Bewick, M. Barkham, M. Bradley, and K. Audin. 2006. Measuring, monitor-
ing and managing the psychological well-being of first-year university students. British
Journal of Guidance and Counselling 34: 505–17.
Deci, E.L., and R.M. Ryan. 2002. Overview of self-determination theory: An organismic dialecti-
cal perspective. In Handbook of self-determination research, ed. E.L. Deci and R.M. Ryan,
3–33. Rochester, NY: University of Rochester Press.
Dierick, S., and F. Dochy. 2001. New lines in edumetrics: New forms of assessment lead to
new assessment criteria. Studies in Educational Evaluation 27: 307–29.
Donnon, T., and C. Violato. 2003. Testing competing structural models of approaches to
learning in a sample of undergraduate students: A confirmatory factor analysis. Canadian
Journal of School Psychology 18: 11–22.
Entwistle, N.J. 1991. Approaches to learning and perceptions of the learning environment.
Higher Education 22: 201–4.
Entwistle, N.J., and A. Entwistle. 1991. Contrasting forms of understanding for degree exam-
inations: The student experience and its implications. Higher Education 22: 205–27.
Fox, R.A., I.C. McManus, and B.C. Winder. 2001. The shortened study process question-
naire: An investigation of its structure and longitudinal stability using confirmatory factor
analysis. British Journal of Educational Psychology 71: 511–30.
Gibbs, G., and C. Simpson. 2004. Does your assessment support your students’ learning?
Journal of Learning and Teaching in Higher Education 1: 3–31.
Gielen, S., F. Dochy, and S. Dierick. 2003. Evaluating the consequential validity of new
modes of assessment: The influence of assessment on learning, including pre, post and
true assessment effects. In Optimising for new modes of assessment: In search of quali-
ties and standards, ed. M. Segers, F. Dochy, and E. Cascallar, 37–54. Dordrecht: Kluwer
Academic.
Gore, P.A. 2006. Academic self-efficacy as a predictor of college outcomes: Two incremental
validity studies. Journal of Career Assessment 14: 92–115.
Greenberger, E., J. Lessard, C. Chen, and S.P. Farruggia. 2008. Self-entitled college students:
Contributions of personality, parenting and motivational factors. Journal of Youth and
Adolescence 37: 1193–204.
Hardre, P.L., and J. Reeve. 2003. A motivational model of rural students’ intentions to per-
sist in, versus drop out of, high school. Journal of Educational Psychology 95: 347–56.
Haynes, T.L., J.C. Ruthig, R.P. Perry, R.H. Stupnisky, and N.C. Hall. 2006. Reducing the
academic risks of over-optimism: The longitudinal effects of attribution retraining on
cognition and achievement. Research in Higher Education 47: 755–79.
Hounsell, D. 1984. Essay planning and essay writing. Higher Education Research and
Development 3: 13–31.
Hounsell, D., V. McCune, and J. Hounsell. 2008. The quality of guidance and feedback to
students. Higher Education Research and Development 27: 55–67.
Hu, L., and P.M. Bentler. 1999. Cutoff criteria for fit indices in covariance structure analysis:
Conventional criteria versus new alternatives. Structural Equation Modelling 6: 1–55.
Kalyuga, S. 2011. Cognitive load theory: How many types of load does it really need? Edu-
cational Psychology Review 23: 1–19.
Kanji, G.K., and P.K. Chopra. 2009. Psychosocial system for work well-being: On measur-
ing work stress by causal pathway. Total Quality Management 20: 563–80.
Karasek, R.A. 1979. Job demands, job decision latitude, and mental strain: Implications for
job redesign. Administrative Science Quarterly 24: 285–308.
Kirschner, P.A., J. Sweller, and R.E. Clark. 2006. Why minimal guidance during instruction
does not work: An analysis of the failure of constructivist, discovery, problem-based,
experiential and inquiry-based teaching. Educational Psychologist 41: 75–86.
Legault, L., I. Green-Demers, and L. Pelletier. 2006. Why do high school students lack moti-
vation in the classroom? Toward an understanding of academic amotivation and the role
of social support. Journal of Educational Psychology 98: 567–82.
Lindblom-Ylanne, S., and A. Lonka. 2001. Students’ perceptions of assessment practices in
a traditional medical curriculum. Advances in Health Sciences Education 6: 121–40.
Lizzio, A., and K. Wilson. 2010. Assessment in first-year: Beliefs, practices and systems.
Sydney: ATN National Assessment Conference.
Lizzio, A., K. Wilson, and V. Hadaway. 2007. University students’ perceptions of a fair
learning environment: A social justice perspective. Assessment and Evaluation in Higher
Education 32: 195–214.
Lizzio, A., K. Wilson, and R. Simons. 2002. University students’ perceptions of the learning
environment and academic outcomes: Implications for theory and practice. Studies in
Higher Education 27: 27–51.
McCune, V. 2004. Development of first-year students’ conceptions of essay writing. Higher
Education 47: 257–82.
McDowell, L. 1995. The impact of innovative assessment on student learning. Innovations
in Education and Training International 32: 302–13.
McDowell, L., J. Smailes, K. Sambell, A. Sambell, and D. Wakelin. 2008. Evaluating
assessment strategies through collaborative evidence-based practice: Can one tool fit all?
Innovations in Education and Teaching International 45: 143–53.
Miller, M.C., and M.R. Parlett. 1974. Up to the mark: A study of the examination game.
London: Society for Research into Higher Education.
Paas, F., A. Renkl, and J. Sweller. 2003. Cognitive load theory and instructional design:
Recent developments. Educational Psychologist 38: 1–4.
Ramsden, P. 1992. Learning to teach in higher education. London: Routledge.
Sambell, K., L. McDowell, and S. Brown. 1997. ‘But is it fair?’: An exploratory study of
student perceptions of the consequential validity of assessment. Studies in Educational
Evaluation 23: 349–71.
Savin-Baden, M. 2004. Understanding the impact of assessment on students in problem-
based learning. Innovations in Education and Teaching International 41: 223–33.
Schaufeli, W.B., I. Martinez, A. Marques-Pinto, M. Salanova, and A. Bakker. 2002. Burnout
and engagement in university students. Journal of Cross-Cultural Psychology 33:
464–81.
Scouller, K.M. 1997. Students’ perceptions of three assessment methods: Assignment essay,
multiple choice question examination, short-answer examination. Research and Develop-
ment in Higher Education 20: 646–53.
Segers, M., S. Dierick, and F. Dochy. 2001. Quality standards for new modes of assessment:
An exploratory study of the consequential validity of the overall test. European Journal
of the Psychology of Education 16: 569–86.
Snyder, B.R. 1971. The hidden curriculum. New York, NY: Knopf.
Stajkovic, A.D., and F. Luthans. 1998. Self-efficacy and work-related performance: A meta
analysis. Psychological Bulletin 124: 240–61.
Thompson, K., and N. Falchikov. 1998. ‘Full on until the sun comes out’: The effects of
assessment on student approaches to studying. Assessment & Evaluation in Higher Edu-
cation 23: 379–90.
Trigwell, K., and M. Prosser. 1991. Improving the quality of student learning: The influence
of learning context and student approaches to learning on learning outcomes. Higher
Education 22: 251–66.
Wehlburg, C.M. 2010. Assessment practices related to student learning: Transformative
assessment. In Guide to faculty development, ed. K. Gillespie and D. Robertson, 169–84.
San Francisco, CA: Jossey-Bass.
