To cite this article: Alf Lizzio & Keithia Wilson (2013) First-year students’ appraisal of assessment
tasks: implications for efficacy, engagement and performance, Assessment & Evaluation in Higher
Education, 38:4, 389-406, DOI: 10.1080/02602938.2011.637156
This study investigated students’ appraisals of assessment tasks and the impact
of this on both their task-related efficacy and engagement and subsequent task
performance. Two hundred and fifty-seven first-year students rated their experi-
ence of an assessment task (essay, oral presentation, laboratory report or exam)
that they had previously completed. First-year students evaluated these assess-
ment tasks in terms of two general factors: the motivational value of the task
and its manageability. Students’ evaluations were consistent across a range
of student characteristics and levels of academic achievement. Students’ evaluations of
motivational value generally predicted their engagement, and their evaluations
of task manageability generally predicted their sense of task efficacy. Engagement
was a significant predictor of task performance (viz. actual mark) for exam and
laboratory report tasks but not for essay-based tasks. Findings are discussed in
terms of the implications for assessment design and management.
Keywords: first-year assessment; appraisal of assessment; student efficacy
This study seeks to understand the factors that students use to appraise or evaluate
an assessment task and the consequences of their perceptions for their subsequent
motivation and performance. An evidence-based understanding of the processes that
influence students’ engagement with assessment is particularly important for inform-
ing our educational practice with first-year or commencing students who are rela-
tively unfamiliar with the culture and context of university-level assessment.
While the way in which students approach learning may, to some extent, be a
function of their personal disposition or abilities, the nature of the learning task
itself and the environment in which it is undertaken also significantly mediate their
learning strategy (Fox, McManus, and Winder 2001). More accurately, it is stu-
dents’ perceptions, rather than any objective features of tasks, that are crucial in
shaping the depth of their engagement. In this sense, students’ learning approaches
(process factors) and academic performance (product factors) are influenced by their
appraisal of, and interaction with, the curriculum content, design and culture of their
current ‘learning system’ (presage factors) (Biggs 2003). Central to this process is a
significant body of research indicating students’ perceptions of the methods, modes
and quantity of assessment to be perhaps one of the most important influences on
their approaches to learning (Entwistle and Entwistle 1991; Lizzio, Wilson, and
Simons 2002; Ramsden 1992; Trigwell and Prosser 1991). Indeed, it has been
argued that students’ perceptions of assessment tasks ‘frame the curriculum’ and are
more influential than any intended design elements (Entwistle 1991), to the extent
of potentially overpowering other features of the learning environment (Boud
1990).
The appreciation that assessment functions not only to grade students, but also
fundamentally to facilitate their learning, is central to the paradigm evolution from
a traditional summative ‘testing culture’ to an integrated ‘assessment culture’
(Birenbaum 1997). Understanding how students appraise or construct their learning
is the foundation of design frameworks such as integrated teaching (Wehlburg
2010) and constructive alignment (Biggs 1996a) which view students’ engagement
and performance with assessment as a complex interaction between learner, task
and context. From this perspective, the consequential validity of an assessment task
or mode (viz. its positive influence on students’ learning approaches and outcomes)
is a key consideration (Dierick and Dochy 2001). The importance of students
perceiving course objectives and pedagogy to be congruent, besides satisfying the test
of ‘common-sense’, has also received empirical support. For example, a curriculum
that emphasises acquisition of knowledge and a concurrent assessment package that
emphasises problem solving (Segers, Dierick, and Dochy 2001) have been found to
contribute to both sub-optimal learning and student resentment. The implications of
these findings for educational practice are quite fundamental. A foundational educa-
tional discipline would appear to be the need to distinguish between our educational
intentions (however worthy) and their impact on students. This requires us to more
closely examine the ‘hidden curriculum’ (Snyder 1971) of our assessment practices.
If we want to understand and evaluate our learning environments we need to
authentically understand how students experience them. Thus, if our aspirations are
to influence students towards deeper learning and higher-order skill development
then a prerequisite task is to appreciate students’ perceptions of the assessment tasks
with which we ask them to engage.
systemic investigation into the student experience. The focal question is not just
‘what type of assessment’ but ‘what type of assessment system’ are students experi-
encing on both cognitive and affective levels. Thus, there is a balanced concern
with the impact of both assessment content and assessment process (Gielen, Dochy,
and Dierick 2003) on student learning and satisfaction. Hounsell, McCune, and
Hounsell (2008) have extended this guiding notion of ‘assessment context’ by oper-
ationalising the prototypical stages of an assessment lifecycle and identifying stu-
dents’ needs and concerns as they engage with, and seek to perform, assessment
tasks (viz. their prior experiences with similar assessment, their understanding of
preliminary guidance, their need for ongoing clarification, feedback, supplementary
support and feed-forward to subsequent tasks). Hounsell et al. in particular demonstrated
that the perceived adequacy of guidance and feedback as students attempted
tasks was central to their success. Lizzio and Wilson (2010) utilised Hounsell,
McCune, and Hounsell’s (2008) framework to investigate first-year students’ apprai-
sal of a range of assessment tasks using focus groups and individual interviews.
These students evaluated their early university assessment tasks in terms of seven
dimensions (familiarity with type of assessment, type and level of demand/required
effort, academic stakes, level of interest or motivation, felt capacity or capability to
perform the task, perceived fairness and the level of available support). The dimen-
sions identified in this study confirm a number of the themes (e.g. demand, motiva-
tion, fairness and support) commonly identified in previous investigations of student
perceptions of assessment.
Parallel to this line of inquiry has been the development of a number of good
practice frameworks to guide the design of assessment protocols. For example,
Gibbs and Simpson (2004) identified 11 conditions under which assessment sup-
ports students’ learning, and developed the assessment experience questionnaire
(AEQ) as a measure of the extent to which these conditions (viz. good distribution
of time demands and student effort, engagement in learning, appropriateness of
feedback, students’ use of feedback) were evident in a particular learning context.
More recently, Boud and Associates (2010) developed a set of seven propositions
for assessment reform in higher education. The propositions address the centrality
of assessment to the learning process (assessment for learning placed at the centre
of subject and programme design) and both questions of academic standards (need
for assessment to be an inclusive and trustworthy representation of student achieve-
ment) and the cultural (students are inducted into the assessment practices and cul-
tures of higher education) and relational (students and teachers become responsible
The question of ‘managing cognitive load’ may be particularly important for novice
learners who lack the working schemas to integrate new knowledge. Tasks that
require students to find or construct essential information for themselves (unguided
or minimal guidance) have been found to result in less direct learning and knowl-
edge transfer compared to tasks where scaffolded guidance is provided (Kirschner,
Sweller, and Clark 2006).
The goal of protecting or enhancing student well-being may also be relevant to
our management of the assessment process. University students generally report
lower levels of well-being than the general population, with first-year being a time
of heightened anxiety (Cooke et al. 2006). Given the identified vulnerabilities of
this population, there may be significant ethical and mental health dimensions to
good assessment practice. Work stress and well-being can be influenced by the psy-
chosocial system within which a person functions and effective person–system
First-year students
The present study is particularly concerned with first-year or commencing
students’ experiences of assessment. Early academic experiences have been
identified as critical to the formation of tentative learner identities and self-efficacy
(Christie et al. 2008). Clearly, ‘assessment backwash’ (Biggs 1996b), whereby badly
designed or poorly organised assessment can unintentionally impair students’ learn-
ing, is more likely with a commencing student population. Indeed, poorly matched
and managed assessment is arguably a major contributor to the phenomenon of pre-
mature ‘student burnout’ (viz. feelings of exhaustion and incompetence and a sense
of cynicism and detachment) (Schaufeli et al. 2002) and disengagement. From a
positive perspective, first-year learning environments potentially provide our greatest
opportunities to not only align our educational intentions and impact, but also to
work collaboratively with new students to develop an evidence-based culture of
success. An empirically supported understanding of the design and process elements
of our ‘assessment systems’ can potentially make a contribution to the important
challenges of student engagement and retention.
However, it should be noted that a holistic approach to assessment that
actively engages with students’ needs and expectations is a somewhat contested
proposition. On the one hand arguments are made for greater accommodation
and inclusion of students’ voices and circumstances around assessment. In Biren-
baum’s (2007) terms this requires moving away from an ‘inside out’ approach
where the ‘insiders’ (teachers) assume ‘what’ and ‘how’ needs to be assessed, to
a greater engagement with an ‘outside in approach’ where students’ preferences
are legitimated. On the other hand there is an increasing concern with the appar-
ent rise within the student population of notions of ‘academic entitlement’
(Greenberger et al. 2008), and an invocation to academics not to collude with
or accommodate ‘student demands’. The present paper takes the position that,
with first-year students in particular, pejorative labels of ‘demanding and entitled
students’ do little to advance practice, and that from a systems perspective, such
behaviour may also be understood as ‘students trying to assertively negotiate
with institutions’ which may be perceived as somewhat indifferent to their suc-
cess or well-being. In this sense the genesis of this observed behaviour may be
more interactive than individual.
Aims
The present study aims to contribute to our understanding of the aspects of assess-
ment that influence first-year students’ engagement, confidence and learning out-
comes. Research to date has identified a number of design and process factors that
may be particularly salient to students; however, there is a need for the structure of
these to be more clearly identified and their relative impact on first-year students’
efficacy and performance assessed.
Thus the focal research questions for the present study are:
What are the general dimensions which first-year students use to appraise or
evaluate assessment tasks?
How do first-year students’ appraisals influence their sense of efficacy and
approach to assessment tasks?
What is the contribution of first-year students’ appraisals and approaches to
their actual performance on assessment tasks?
What are the implications of these insights for the design and management of
assessment?
Method
Participants
Two hundred and fifty-seven first-year students (212 females and 45 males)
across four disciplinary programmes (Medical Science [16], Nursing [65], Psy-
chology [81] and Public Health [38]) participated in this study. The mean age
of the sample was 25 years (SD = 9.1 years) and 56% were the first in their fam-
ily to attend university. Students were in their second semester of university
study and their mean grade point average (GPA) on their first semester courses
was 5.5 on a seven-point scale.
Procedure
Students were emailed at the beginning of their second semester of study and
invited to participate in an online survey. Approximately 35% of the targeted
student population responded to the invitation. The survey process asked students
to ‘Reflect back on your first semester of university study and to recapture the
experience of ‘‘being new to it all’’. Recall your early thoughts and feelings
about the assessment tasks you had to complete’. Students were then asked to
select one type of assessment task (exam, oral presentation, essay or laboratory
report) and to use the scales provided to ‘honestly tell us how you remember
thinking and feeling as you approached this task’. Some students (n = 87) com-
pleted the survey for more than one assessment task. In the first section of the
survey students rated their selected assessment task on a set of 28 items opera-
tionalising each of the seven dimensions (viz. familiarity with type of assess-
ment, type and level of demand/required effort, academic stakes, level of interest
or motivation, felt capacity or capability to perform the task, perceived fairness
and the level of available support) identified by first-year students as how they
appraise or evaluate assessment in Lizzio and Wilson’s (2010) qualitative study.
Items were piloted with a small group of students (n = 12) to establish clarity of
wording and appropriate matching of items to dimensions. In the second section
of the survey students evaluated their response to the assessment task in terms
of their self-reported levels of task-related confidence (I am confident that I will
do well on this assessment task), anxiety (I’m feeling fairly anxious about this
assessment task), achievement motivation (I want to do very well on this assess-
ment task) and approach to learning on the task (I will be approaching this task
with the aim of learning as much as possible). Students were also asked to
report their level of academic performance (percentage mark) on their selected
assessment task. Finally, students provided demographic (age and gender), back-
ground (equity group membership, prior family participation in higher education)
and academic (university entrance score, GPA for their first semester courses)
information.
Results
A series of analyses were conducted. Firstly, exploratory and confirmatory factor
analyses were used to investigate the structure of first-year students’ appraisals of
assessment tasks. Secondly, correlation analyses were used to establish if students’
appraisal processes were associated with a range of demographic, background or
academic achievement factors. Finally, structural equation modelling was used to
establish the relationships between students’ appraisal processes, their approach to
assessment tasks and their performance on those tasks.
The item set was highly structured and appropriate for further factor analysis (KMO = 0.94).
Students’ responses to the 28 items describing their appraisal of the four assessment
tasks (exam, oral presentation, essay and laboratory report) were analysed using
exploratory factor analysis (Principal axis factor (PAF) analysis with varimax rota-
tion). This initial analysis yielded a two-factor solution accounting for 79.36% of
the variance, with 17 items loading on factor 1 (72.19% variance, eigenvalue
20.21) and six items on factor 2 (7.17% variance, eigenvalue 2.0). Cross-loading,
low loading and highly correlated items were removed, and a PAF analysis was
then conducted on the reduced 15 item set. This analysis produced an interpretable
two-factor solution with simple structure accounting for 81.64% of variance with
eight items loading on factor 1 (variance 42.43%) and seven items on factor 2
(variance 39.21%) (see Table 1).
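The variance-explained figures reported above derive from the eigenvalues of the item correlation matrix. The following sketch illustrates that computation on a small hypothetical correlation matrix (the values are invented for illustration, not taken from the study’s data), using plain numpy rather than a dedicated factor analysis package:

```python
import numpy as np

# Toy correlation matrix for 4 items (hypothetical values, not the study's data):
# two items tap one dimension, two tap another.
R = np.array([
    [1.00, 0.80, 0.30, 0.25],
    [0.80, 1.00, 0.28, 0.32],
    [0.30, 0.28, 1.00, 0.75],
    [0.25, 0.32, 0.75, 1.00],
])

# Eigenvalues of the correlation matrix drive the "% variance explained"
# figures reported for each factor (eigenvalue / number of items).
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
pct_variance = 100 * eigenvalues / R.shape[0]

# Kaiser criterion: retain factors with eigenvalue > 1.
n_retained = int(np.sum(eigenvalues > 1))
print(n_retained)             # two factors retained for this toy matrix
print(pct_variance.round(1))  # variance explained by each component
```

Note that a full principal axis factoring replaces the diagonal of R with communality estimates and iterates, and varimax rotation then redistributes variance across the retained factors; the eigenvalue step above is only the retention-decision stage.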
The first factor was defined by four themes related to first-year students’ percep-
tions of the motivational content and value of an assessment task: the perceived
academic stakes of the task (How important is it to do this well?), its level of
intellectual challenge (What will it take me to do this?), the motivational value of
the task (Do I want to do this?), and students’ sense of task capability (Can I do
this?). The academic stakes of the assessment task were pragmatically reflected in
students’ instrumental perceptions of potential contribution to their grades (The
weighting of this task is sufficiently high for me to be concerned about how it will
impact on my overall grade). The intellectual challenge of an assessment task was
reflected in terms of perceived cognitive load (This assessment task requires us to
learn a lot of material), and intellectual demands (This assessment task requires us
to really think about the material; This assessment task requires me to demonstrate
mastery of skills). The motivational value of an assessment task was reflected in
terms of items related to clarity of learning outcomes (I can see how doing this
assessment task will develop my academic skills) and curriculum alignment (This
assessment task makes sense to me given the learning objectives of this course).
First-year students’ sense of capability to undertake the assessment task was also
expressed in terms of prior familiarity (I have recently done something similar to
this task) and confidence (I have the skills required to do this assessment task).
The second factor contained four themes related to students’ perceptions of the
manageability of an assessment task: fairness (How fair is this task?), support (Who
can help with this task?), self-protection (How safe is this task?) and self-
determination (Is this in my hands?). The fairness aspects were reflected in stu-
dents’ perceptions of the required difficulty (I think this assessment task is at an
appropriate level for first-year students), required investment (I think the weighting
of this assessment task matches the time required to do it) and level of organisation
(I feel that this assessment task is organised fairly). Students’ sense of appropriate
support with an assessment task was expressed through the perceived availability of
staff (My sense is that there will be good mechanisms in place to support students
with this assessment task) and workload management (I have a good sense of the
workload involved in this task). Students also associated items related to ego threat
and shame (It is fairly safe to ‘have a go’ with this assessment task without a high
risk of ‘looking stupid’ or making a mistake) and a sense of personal control (This
assessment task allows individual students sufficient personal control over how well
they will perform) on this factor.
Confirmatory factor analysis (CFA) was then used to test the fit of the data to
either a single or two-factor model. Unlike exploratory factor analysis, which provides
only an indirect test of a theoretical model, CFA provides a direct test of the
proposed models (Bernstein and Teng 1989). Given the high level of inter-factor
correlation (.85) the first analysis tested whether all items could be associated with
a global assessment appraisal factor. However, this yielded a very poor fit with none
of the indices meeting accepted standards. A two-factor conceptualisation of stu-
dents’ appraisal of assessment was then tested and after removal of co-varying
items yielded a good level of fit of the model to data (χ²(13) = 20.42,
Tucker–Lewis Index [TLI] = .97, Comparative Fit Index [CFI] = .98, Goodness of fit
index [GFI] = .97, Adjusted goodness of fit index [AGFI] = .94, Root mean square
error of approximation [RMSEA] = .06). A good fit is generally indicated by mea-
sures of incremental fit (closer to 1 is better, with values in excess of .9 recom-
mended) and measures of residual variance [RMSEA recommended to be not
higher than .08] (Hu and Bentler 1999). The reduced first-factor was defined by
three items concerned with the perceived motivational content and value of an
assessment task: learning outcomes (.70) (I can see how doing this assessment task
will develop my academic skills), curriculum alignment (.74) (This assessment task
makes sense to me given the learning objectives of this course) and intellectual
demand (.35) (This assessment task requires me to demonstrate mastery of skills).
The reduced second factor was defined by four items concerned with the perceived
manageability of an assessment task: fairness of required investment (.81) (I think
the weighting of this assessment task matches the time required to do it),
appropriate difficulty (.89) (I think this assessment task is at an appropriate level
for first-year students), transparency of workload (.77) (I have a good sense of what
workload is involved in this task) and appropriate support (.41) (My sense is that
there will be good mechanisms in place to support students with this assessment
task). Inter-factor correlation was reduced to .68 indicating that the confirmatory
analysis had better differentiated these two assessment appraisal dimensions.
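The RMSEA quoted above is derived from the model chi-square, its degrees of freedom and the sample size. A minimal sketch of the standard formula follows; the sample size passed in is an assumption for illustration, since the article does not restate the exact N used for this model:

```python
import math

def rmsea(chi_sq: float, df: int, n: int) -> float:
    """Root mean square error of approximation.

    Standard formula: sqrt(max(chi_sq - df, 0) / (df * (n - 1))).
    Values at or below ~.08 are conventionally taken as acceptable fit.
    """
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Chi-square and df from the two-factor CFA reported above; N = 257 is the
# full sample and is an assumption here, since the effective N for this
# analysis may differ (some students rated more than one task).
print(round(rmsea(20.42, 13, 257), 3))
```

With these inputs the formula gives roughly .05, in the neighbourhood of the .06 reported; the small gap is consistent with the effective sample size for this model differing from 257.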
Table: Correlations among student characteristics, appraisal factors and task responses.

2  Gender                                    .16  1
3  University entrance score                 .08  .02  1
4  First in family at university             .03  .16  .04  1
5  English as second language                .04  .05  .05  .08  1
6  Equity group membership                   .05  .03  .06  .05  .09  1
7  Factor 1 task motivational content/value  .01  .04  .04  .06  .10  .06  1
8  Factor 2 task manageability               .02  .08  .02  .07  .06  .09  .67** 1
9  Task confidence                           .04  .12  .02  .10  .06  .07  .06  .46** 1
10 Task anxiety                              .04  .15  .02  .05  .03  .06  .09  .48** .76** 1
11 Task learning orientation                 .02  .06  .06  .08  .09  .04  .38** .39*  .41** .66** 1
12 Task achievement orientation              .02  .10  .05  .08  .05  .08  .37** .34** .38** .68** .77** 1
13 Semester 1 GPA                            .07  .02  .03  .16  .02  .10  .09  .08  .04  .08  .11  .09  1

Note: GPA = grade point average. *p < .05; **p < .01.
and indirect relationships between variables and the extent to which a conceptual
model adequately fitted the empirical data. The first analysis tested the global rela-
tionship, across all assessment types, of students’ scores on the two appraisal factors
(perceived assessment motivational content and value and perceived assessment
manageability) to their reported approach to the task. Students’ ratings of their lev-
els of confidence and anxiety on an assessment task formed the latent variable
assessment task efficacy and students’ ratings of their levels of achievement motiva-
tion and deep learning orientation to the assessment task formed the latent variable
assessment task engagement, which is conceptually similar to the construct of a
deep-achieving approach to learning (Donnon and Violato 2003; Fox, McManus,
and Winder 2001). This analysis produced a moderate model fit (χ²(39) = 117.244,
p < .0001; χ²/df ratio = 3.04; TLI = .86; CFI = .90; GFI = .91, AGFI = .85,
RMSEA = .10) with theoretically interpretable associations between variables. See
Figure 1 for a simplified presentation of the model.
Students’ perceptions of the motivational content and value of assessment tasks
significantly positively predicted their level of task engagement. Students appear to
be describing a pattern of engagement whereby ‘good academic behaviour’ (viz.
wanting to learn and do well) is facilitated when they experience an assessment task
to be aligned with broader learning objectives, be appropriately challenging and to
be of value to them. Students’ perceptions of the manageability of assessment (viz.
its fairness and level of support) significantly positively predicted their sense of task
efficacy. In this regard, students are making a clear association between perceived
task processes (viz. the more clear, fair, supported and in-control we feel) and their
sense of task-related efficacy. This is certainly consistent with previous findings
whereby constructive task-related interpersonal support encouraged self-determined
motivation (Hardre and Reeve 2003).
[Figure 1: Structural model linking students’ appraisals (perceived assessment
motivational value and perceived assessment manageability) to assessment task
engagement and assessment task efficacy; path coefficients appear in the
original figure.]
These analyses must be considered exploratory given that, because of smaller sam-
ple sizes (exam n = 137; laboratory report n = 85; essay n = 122; oral presentation
n = 85), they do not satisfy the recommended ratio of participants to parameters for a
latent variable analysis (Bentler and Chou 1987). Thus, while the SEM analyses for
each of these tasks produced interpretable models, because of limited sample sizes, the
levels of fit were relatively modest (exam CFI = .87, RMSEA = .10; essay CFI = .87,
RMSEA = .12; oral presentation CFI = .83, RMSEA = 1.22; laboratory report CFI = .86,
RMSEA = .10). However, given the exploratory nature of the present study, these find-
ings are cautiously reported for theory development purposes (see Figure 2).
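The sample-size concern flagged here can be made concrete with the rule of thumb usually attributed to Bentler and Chou (1987) of at least five respondents per free model parameter. In the sketch below the parameter count is a hypothetical illustration, not a figure from the article:

```python
# Rule-of-thumb check (Bentler and Chou 1987): at least ~5 respondents per
# free model parameter for a latent variable analysis. The parameter count
# below is a hypothetical illustration, not taken from the article.
def meets_ratio(n_respondents: int, n_free_parameters: int, minimum: float = 5.0) -> bool:
    return n_respondents / n_free_parameters >= minimum

task_samples = {"exam": 137, "laboratory report": 85, "essay": 122, "oral presentation": 85}
n_free_parameters = 30  # hypothetical count for a model of this size

for task, n in task_samples.items():
    print(task, meets_ratio(n, n_free_parameters))  # all fall short of 5:1 here
```

Under this assumed parameter count, even the largest task subsample (exam, n = 137) falls below the 5:1 threshold, which is why the task-specific models are reported only cautiously.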
While the previous overall analysis across all assessment tasks produced clear
general associations, these task-specific analyses suggest that students’ appraisals of
motivational value and manageability will have different impacts on their task-
related engagement and efficacy depending on the type of task being undertaken.
Exam-based tasks appeared to function in the most straightforward fashion for stu-
dents. Thus, the greater the perceived motivational value of an exam the greater stu-
dents’ task engagement, which, in turn, contributed to a better assessment outcome.
Students’ relative familiarity with the examination format may also explain why
their perceptions of support do not predict their level of task efficacy. Consistently,
given that performance-based tasks such as oral presentation involve a level of anxi-
ety and public evaluation, it should not be surprising that perceived manageability
influenced students’ sense of efficacy more than the structured (laboratory report),
and traditionally private (examination) forms of assessment.
However, first-year students reported a different pattern of associations with
their experience of essay-based tasks.

[Figure 2: Task-specific path coefficients (ns = non-significant). Perceived
assessment motivational value → task engagement: exam (.76)*, oral presentation
(.50)**, laboratory report (ns), essay (ns). Task engagement → assessment
outcome: exam (.40)***, laboratory report (.27)**, essay (ns), oral presentation
(ns). Perceived assessment manageability → task engagement: essay (.42)*, other
tasks (ns). Perceived assessment manageability → task efficacy: essay (.33)**,
oral presentation (.21)*, other tasks (ns). Task efficacy → assessment outcome:
(ns) for all tasks.]

Students’ perceptions of the manageability of an
essay task were a strong positive predictor of both their essay writing efficacy and
their engagement with the task. What might be different about commencing stu-
dents’ perceptions of essay tasks that would, unlike other assessment tasks they
undertook, result in perceived support influencing both their efficacy and engage-
ment?
It may be that essay-based tasks are particularly problematic for first-year stu-
dents. As Hounsell’s (1984) seminal work demonstrated students (particularly com-
mencing students) have varied and often contradictory conceptions of academic
essay writing, and in addition, there is considerable cross-disciplinary variability in
the opportunities to systematically develop this capability. McCune (2004), follow-
ing a longitudinal study of first-year psychology students’ learning, similarly con-
cluded that students had considerable difficulty in developing their conceptions of
essay writing and understanding expected disciplinary discourses. Beyond this, the
guidance provided by tutors was often ‘tacit and sophisticated’, and thus less acces-
sible to students than staff might expect. Assessment tasks such as exams and labo-
ratory reports have a comparatively clearer logic and explicit structure than essay
writing in its many potential forms. Thus, in this regard, university-level essays are
‘new territory’ with ‘hard-to-learn’ rules. Students in these circumstances may be
pejoratively labelled as ‘anxious and dependent’ (and indeed they themselves report
more anxiety), but from a systems perspective they are simply seeking to negotiate
tasks that are perhaps significantly more challenging and inherently ambiguous than
insiders ‘comfortably in the discourse’ may expect or intend. Unfortunately, unlike
other more familiar (e.g. exams) or structured (e.g. laboratory reports) assessment
tasks, students’ best intentions and efforts (viz. higher task engagement) were not
routinely reflected in higher grades. First-year students, in the present sample at
least, were perhaps yet to sufficiently develop the enabling conceptions and strate-
gies that mediate or bridge engagement (their intentions) and performance (their
outcomes).
Interestingly, only students’ reported level of task engagement (I will be
approaching this task with the aim of learning as much as possible; I want to do
very well on this assessment task) was a significant positive predictor of their actual
performance on an assessment task (viz. their marks). Thus, clearly, outcomes may
be less a function of how students feel (anxiety, confidence) and more a function of
what they actively seek to do (learn and achieve) in relation to a task. The present analysis
provided little support for previous findings that position self-efficacy as a strong
predictor of success (Gore 2006). Present findings are consistent with research dem-
Future research
Several limitations of the present study warrant consideration and should be addressed in future research. The study was retrospective in
design, with students reporting their ‘remembered perceptions’, which may raise
questions of accuracy of recall. Future research should seek to address these ques-
tions by prospectively monitoring students’ progress through the assessment lifecy-
cle from the point of first contact onwards. A second limitation of the retrospective
nature of this study concerns the composition of the student sample. Failed and
non-retained students are underrepresented in the present sample and thus present
Acknowledgement
This study was funded by a grant from the Australian Learning and Teaching Council.
Notes on contributors
Alf Lizzio is the Director, Griffith Institute for Higher Education. He has led a number of
large-scale institutional projects in learning and teaching. His research interests include
learning and teaching quality, professional learning and development and academic
leadership.
References
Barnes, C.M., and L. Van Dyne. 2009. I’m tired: Differential effects of physical and emo-
tional fatigue on workload management strategies. Human Relations 62: 59–92.
Bentler, P.M., and C.P. Chou. 1987. Practical issues in structural equation modelling. Socio-
logical Methods and Research 16: 78–117.
Bernstein, I.H., and G. Teng. 1989. Factoring items and factoring scales are different: Spuri-
ous evidence for multidimensionality due to item categorization. Psychological Bulletin
105: 467–77.
Biggs, J.B. 1996a. Enhancing teaching through constructive alignment. Higher Education
32: 347–64.
Biggs, J.B. 1996b. Assessing learning quality: Reconciling institutional, staff and educational
demands. Assessment & Evaluation in Higher Education 21: 5–15.
Biggs, J.B. 2003. Teaching for quality learning at university. Buckingham: Open University
Press.
Greenberger, E., J. Lessard, C. Chen, and S.P. Farruggia. 2008. Self-entitled college students:
Contributions of personality, parenting and motivational factors. Journal of Youth and
Adolescence 37: 1193–204.
Hardre, P.L., and J. Reeve. 2003. A motivational model of rural students’ intentions to persist in, versus drop out of, high school. Journal of Educational Psychology 95: 347–56.
Haynes, T.L., J.C. Ruthig, R.P. Perry, R.H. Stupnisky, and N.C. Hall. 2006. Reducing the
academic risks of over-optimism: The longitudinal effects of attribution retraining on
cognition and achievement. Research in Higher Education 47: 755–79.
Hounsell, D. 1984. Essay planning and essay writing. Higher Education Research and
Development 3: 13–31.
Hounsell, D., V. McCune, and J. Hounsell. 2008. The quality of guidance and feedback to
students. Higher Education Research and Development 27: 55–67.
Hu, L., and P.M. Bentler. 1999. Cutoff criteria for fit indices in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modelling 6: 1–55.
Kalyuga, S. 2011. Cognitive load theory: How many types of load does it really need? Educational Psychology Review 23: 1–19.
Scouller, K.M. 1997. Students’ perceptions of three assessment methods: Assignment essay,
multiple choice question examination, short-answer examination. Research and Develop-
ment in Higher Education 20: 646–53.
Segers, M., S. Dierick, and F. Dochy. 2001. Quality standards for new modes of assessment:
An exploratory study of the consequential validity of the overall test. European Journal
of the Psychology of Education 16: 569–86.
Snyder, B.R. 1971. The hidden curriculum. New York, NY: Knopf.
Stajkovic, A.D., and F. Luthans. 1998. Self-efficacy and work-related performance: A meta-analysis. Psychological Bulletin 124: 240–61.
Thompson, K., and N. Falchikov. 1998. ‘Full on until the sun comes out’: The effects of
assessment on student approaches to studying. Assessment & Evaluation in Higher Edu-
cation 23: 379–90.
Trigwell, K., and M. Prosser. 1991. Improving the quality of student learning: The influence
of learning context and student approaches to learning on learning outcomes. Higher
Education 22: 251–66.