examination boards and large-scale language testers, it will be a valuable resource in
policy-making. Its high price and the fact that half of it will not be relevant to most users
make it a poor choice for a personal library. I would recommend its inclusion in univer-
sity or assessment program libraries, as I believe it could inspire much-needed research
on the topic.
Finally, I cannot help but comment that the book suffers from extremely poor copyedit-
ing. There are jarring typos in virtually every chapter. I found this surprising as the book
comes from such a distinguished publisher. The content of the articles, however, more
than makes up for this.

S. J. Ross and G. Kasper (Eds.). (2013). Assessing Second Language Pragmatics. Basingstoke:
Palgrave Macmillan. 384 pp. ISBN 9781137352132 (pbk) £23.99.

Reviewed by: Edit Ficzere Willcox, CRELLA, University of Bedfordshire, UK

Assessing Second Language Pragmatics is a recent volume from the Palgrave Advances
in Language and Linguistics series, edited by Steven J. Ross and Gabriele Kasper.
The importance of assessing second language learners’ pragmatic abilities is becoming increasingly evident. Over the last twenty years or so, there has been a proliferation of studies that
have endeavoured to specify the construct(s) of pragmatic competence and to develop
assessment instruments including elicitation tasks and rating scales. The present volume
is the first book-length collection of recent empirical studies in this field. With the
increasing research interest in assessing pragmatic abilities, it would seem difficult to capture the full diversity of research foci in a single volume. This volume, however, manages to give readers an informative and logically organized overview of the diverse lines of research in the field.
The editors have brought together researchers who, building on earlier research designs in pragmatic assessment, used not only traditional methods of data analysis but also some novel approaches to advance the field. The book does not advocate a particular method; the aim is to introduce the different approaches, thus helping readers to trace changes in professional thinking about pragmatic assessment. Within the various chapters
that make up the book, there is an extensive review of relevant published literature; how-
ever, it is worth noting that at least some knowledge of the field of pragmatics is useful
in order to get the most out of the book.
After the introductory chapter, which provides an excellent overview of the main concepts in pragmatic assessment, the remaining chapters are organized into two parts.
Chapters in Part 1 use a variety of theoretical frameworks and focus on measuring differ-
ent pragmatic constructs employing a variety of assessment instruments. Chapters in Part
2, on the other hand, use a conversation analytic (CA) framework to investigate prag-
matic aspects of interaction realized in oral language tests.
Chapters in Part 1 examine the validity of assessment instruments and procedures that
have been designed to test pragmatic competence. Some chapters investigate how already
existing measures can be used with new populations, whereas others investigate the use
of new instruments. Within these chapters, approaches to assessing productive skills and,
to a lesser degree, receptive skills (e.g. video-based instruments in chapter 3 by Rylander
et al.) can also be found. The chapters also focus on a range of assessment purposes
including measuring proficiency, diagnostic testing, and supporting the development of
learners’ pragmatic abilities. Roever, for example, in chapter 2 uses the Gricean theory of conversational implicature to design multiple-choice test items for diagnostic purposes, and finds that familiarity with cultural norms affects item difficulty. Grabowski,
in chapter 6, employs Purpura’s (2004) model of language ability, alongside a role-play
test, to measure the grammatical as well as the pragmatic dimensions of language ability,
and argues that both can be measured at multiple proficiency levels. Walters, in chapter 7, on the other hand, uses a CA framework to investigate the comparative validity of two self-devised tests – a Discourse Completion Task (DCT) and an oral/listening Conversation Analysis Informed Test (CAIT) – designed to measure corresponding pragmatic constructs. His findings indicate that the DCT measures a narrower construct than the CAIT, suggesting that the DCT is unsuitable for measuring online pragmatic competence. His argument is in line with other doubts raised regarding the effectiveness of this instrument.
Kasper (2006), for example, argues that this task format does not allow examination of
the sequential organization of speech.
The editors were also careful to include chapters in Part 1 that investigate the assessment of pragmatic knowledge amongst speakers learning an L2 other than English.
Ishihara in chapter 5, for instance, uses Vygotsky’s sociocultural theory to measure prag-
malinguistic development amongst learners of Japanese via teacher-based assessment in
a classroom setting. She points out that through written mediated dialogue with the
teacher, learners can make some progress in the use of pragmalinguistic structures, and
advocates the classroom use of this method to increase L2 pragmatic awareness. Youn and Brown, in chapter 4, use speech act and politeness theories to examine item difficulty in two tests of Korean as a foreign language (KFL) with comparable sets of data. Their findings show that familiarity with the given topic, as well as the degree of power and imposition in the test items, affects the difficulty level. They suggest that the construct of pragmatic ability in future KFL tests should also include pragmatics in interaction.
Chapters in Part 2 focus specifically on interactional practices in oral language tests
and employ CA to investigate how interaction is structured in oral proficiency interviews
(OPI) in the form of interlocutor-led interviews and role-plays. References are made mainly to assessment practices but also, to a much lesser extent, to classroom practices. Seedhouse
in chapter 8, for example, investigates how the interactional structure of the International English Language Testing System (IELTS) Speaking Test reflects practices in L2 classrooms and university settings. He finds that IELTS Speaking Test practices differ in many respects from those of classroom and university settings, and advocates that the interactional organization of the IELTS Speaking Test be made clear to students in order to make up for
this difference. In chapter 13, on the other hand, van Compernolle investigates the use of dynamic assessment to aid French L2 learners’ sociopragmatic development. He demonstrates that pedagogic intervention during the interview-format speaking task is co-
constructed by interviewer and learner, thus enabling the learners to actively engage in
the development of interactional and pragmatic competence. Tominaga, in chapter 9,
investigates pragmatic development over the period of a summer language course. Her
participants were L2 Japanese learners who were asked to perform a story-telling test
task at the beginning and at the end of their language program. Her findings indicate that there are indeed changes in the conversational structure of learners’ speech (e.g. length of turns, sequential organization); in the case of the novice participant, however, these developments were not reflected in the ratings. She argues that frequent language prob-
lems may have made interactional developments in learners’ production less obvious to
raters, especially at lower levels.
Chapters 10–12 focus on the effects of interviewers’ repair practices in OPIs. Kasper’s
study in chapter 10 shows how the timing of the interviewer’s third-position repair
(Schegloff, 1992), aiming to redirect candidates’ focus back onto the task instructions,
can affect the progress of the interview. She also raises awareness of the fact that can-
didates’ and interviewers’ orientations to the OPI are different, with candidates treating
it as real-life conversation and interviewers using it as an instrument for language test-
ing. Okada and Greer’s study in chapter 11 investigates interlocutors’ interactional
intervention when candidates struggle with understanding a role-play task. They sug-
gest that interviewer training should address the use of effective interactional strate-
gies. Ross and O’Connell, in chapter 12, also investigate the interviewer’s role in the successful outcome of role-play assessment tasks and, similarly, recommend training interlocutors in interactional strategies in order to help candidates display their interactional competence.
A distinctive feature of the volume is that although its main focus is assessment, it
does not ignore the relevance of the field of pragmatics to classroom practice. In particu-
lar, chapters 5 and 13 might be of interest to teachers who are somewhat familiar with
pragmatics and are keen on experimenting with new methods regarding the teaching and
assessment of pragmatic competence in their classroom.
What would have added to the value of this volume is a clearer discussion of what learners’ speech is benchmarked against in tests. Although different authors make some reference to this issue, it is fairly diluted within the overall themes of the chapters.
Research shows that some pragmatic task formats can be more easily benchmarked
against native speaker (NS) norms than others. Defining this expected norm, for exam-
ple, is a particularly pertinent issue when strategic competence and interactional compe-
tence come under scrutiny in role-play tasks, since there is less native speaker agreement
when sociopragmatic competence is judged (Matsumura, 2001). This omission, however, neither undermines the book nor detracts from its importance.
In conclusion, from the range of studies in this volume it seems clear that although
no consensus has been reached yet on the exact construct(s) of pragmatic competence
or the most effective tool with which to assess it, the assessment focus seems to have
shifted more towards measuring pragmatic competence in interaction. The studies in
this volume raise awareness of the extent to which particular assessment tools and
interviewers’ interactional practices can influence the outcome of pragmatic assess-
ment and of the importance of innovative approaches in shaping the future direction of this field. It is essential reading for anyone who wants to gain insight into the differ-
ent theoretical frameworks, the validity of existing and new assessment instruments,
and novel data analysis methods used in research into pragmatic assessment. L2 assessment practitioners will find useful information for extending their understanding of pragmatic assessment, while students will find it a valuable source for
master’s or PhD-level research. Language teachers may also find some innovative ideas for classroom practice.

References
Kasper, G. (2006). Speech acts in interaction: Towards discursive pragmatics. In K. Bardovi-Harlig, J. C. Félix-Brasdefer & A. S. Omar (Eds.), Pragmatics and language learning, vol. 11 (pp. 281–314). University of Hawai’i at Manoa: National Foreign Language Resource Center.
Matsumura, S. (2001). Learning the rules for offering advice: A quantitative approach to second
language socialization. Language Learning, 51(4), 635–679.
Purpura, J. (2004). Assessing grammar. Cambridge: Cambridge University Press.
Schegloff, E. A. (1992). Repair after next turn: The last structurally provided defense of intersubjectivity in conversation. American Journal of Sociology, 97(5), 1295–1345.
