
Assessing Writing 11 (2006) 113–134

Washback to the learner: Learner and teacher perspectives on IELTS preparation course expectations and outcomes
Anthony Green
Research & Validation Group, University of Cambridge ESOL Examinations, 1 Hills Road, Cambridge CB1 2EU, UK
Tel.: +44 1223 553355; fax: +44 1223 553083. E-mail address: Green.A@cambridgeesol.org

Abstract

The washback effect of tests on teaching has attracted considerable attention over recent years, but the critical question of how this translates into washback on learning remains under-explored. To address this issue, questionnaires relating to academic writing instruction were distributed to 108 learners from mainland China preparing for university study in the UK, either through studying for the IELTS test or through courses in English for Academic Purposes that did not include IELTS preparation. The same 24 questions were put to learners at course entry and at course exit and the results compared between courses and between occasions. The questions were also given to 39 teachers on IELTS and non-IELTS courses and their responses weighed against those from students. The results indicate that learner perceptions of course outcomes are affected by the course focus reported by teachers, but that the relationship is not deterministic. Although test preparation courses, as predicted by washback theory, did appear to cover a relatively narrow range of skills, there is evidence here that narrow preparation strategies were not driven primarily by learner expectations.
© 2006 Elsevier Inc. All rights reserved.
Keywords: Writing assessment; Washback; IELTS; Test preparation; Chinese students

1075-2935/$ – see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.asw.2006.07.002

1. Literature review

This study explores the influence of teacher priorities on learners preparing for a test of academic writing, the IELTS Academic Writing Module. Writing is a key skill for international students at university as it is most often the basis for assessing their work and so plays a key role in academic success.


The IELTS is a high-stakes gate-keeping test used by universities to screen applicants for language ability. Performance on the test may have serious implications for the life chances of test takers. Hence, IELTS might be expected to exert a strong influence on teacher and learner behaviour. This paper sets out to explore the interaction between teachers, course content and learners in moderating the influence of the IELTS academic writing component.

The academic module of the International English Language Testing System (IELTS) is designed to assess the readiness of candidates to study through the medium of English and is widely used as a selection tool by universities and other educational institutions. The academic writing component of the test requires candidates to complete two writing tasks within 60 min (the task instructions advise them to spend 20 min on Task 1 and 40 min on Task 2). Candidates are advised to write at least 150 words for Task 1 and 250 words for Task 2. Task 1 involves transferring information from a diagram or graph. According to the IELTS Handbook (International English Language Testing System, 2005, p. 8), this task may require candidates to organise, present and possibly compare data; describe the stages of a process or procedure; describe an object or event or sequence of events; or explain how something works. Task 2 calls on prior knowledge in the construction of a written argument or case. The IELTS Handbook suggests that the task requires candidates to present the solution to a problem; present and justify an opinion; compare and contrast evidence, opinions and implications; or evaluate and challenge ideas, evidence or an argument. Further information about the IELTS test can be found on the IELTS website: www.ielts.org.

It is now widely recognised, and a phenomenon of increasing interest to language testers, that tests (and particularly tests that are associated with important decisions, such as university admissions) have a major impact on educational systems and on the societies in which they operate. Where impact occurs in the form of teaching and learning directed towards a test, both intended, positive effects and unintended, perhaps negative, effects are generally referred to by the term 'washback'. Testing innovations are often designed to exploit the supposed potential of tests to encourage learning of targeted skills. Shohamy, Donitsa-Schmidt, and Ferman (1996), to give just one example, document how a new test of oral communication was introduced to encourage the teaching of speaking skills. However, research into washback has consistently shown that tests are not, of themselves, necessarily effective as levers for change (Pearson, 1988); successful educational innovations require both concerted system-wide reform and extensive support from those affected (Wall, 2000). It is now clear that washback involves complex interactions between tests, teachers and learners, which determine whether individuals will embrace or reject intended change. To address this complexity, Bailey (1996, 1999) suggests that washback research should, minimally, investigate both washback to the programme (the results of test-derived information provided to teachers, administrators, curriculum developers, counsellors, etc.) and washback to the learners (the effects of test-derived information provided to the test takers) (Bailey, 1996, pp. 263–264) from the perspectives of both teachers and students. Of these areas, research has most often investigated the practices of teachers, with learners generally being peripheral to the design.
While there is now a good deal of research into washback to teachers and programmes, and concerted efforts are being made to understand the role of teaching materials in mediating the effects of a test on teachers (Hamp-Lyons, 1998; Saville & Hawkey, 2004), research into the washback of tests to learners remains limited. Studies of learner and teacher expectations have highlighted differences between the expectations of learners and those of their teachers (Peacock, 1998). Such differences of perspective suggest that the learners' view should not be overlooked in washback studies. In an early questionnaire-based study into the use of learning strategies as a response to test demands, Watanabe (1992, 1996) raised questions concerning both positive and negative washback.


In the Japanese context, college entrance tests have long been censured for a negative influence. Buck (1988) and Brown (1993), for example, blame the poor standards of validity exhibited by these tests for negative effects on classroom practice. Watanabe (1992) found that learners preparing for college entrance tests unexpectedly employed a wider range of learning strategies than those admitted to college by recommendation (and hence exempted from the tests). Although an alternative interpretation of test influence in this case might be that the lack of strategy use on the part of exempted students could be attributed to credentialism, or placing too high a value on test results, the absence of a selection test removes the primary purpose for learning.

In a study of dedicated examination preparation, Alderson and Hamp-Lyons (1996) also questioned assumptions about how learners are affected by tests. Their research included interviews with students regarding their expectations of their TOEFL preparation courses. In these interviews, students suggested that 'having American friends, going to the movies, reading a lot and generally using English outside class' were ideal preparation for the test (Alderson & Hamp-Lyons, 1996, p. 286). On this basis, the researchers were able to contradict teacher claims that student demand for test preparation drove course content and to argue that it might rather be the readiness of teachers to adopt the relatively undemanding strategy of following published TOEFL preparation material that led to the narrow focus on test content observed in classes.

In Israel, Shohamy et al. (1996) and Ferman (2004) used questionnaires to access students' views of tests. Ferman (2004) found differential washback to learners from an EFL oral matriculation test according to proficiency level. Lower ability students engaged in more intensive preparation for the test, including private tutoring and memorising prompts. Students generally believed that they could boost scores through cramming and studied independently for areas of the test that were not covered in class. In the sphere investigated, washback appeared to be most intense for students in the middle ability level, who reported the highest average levels of anxiety and felt that the test had led to an improvement in their language skills.

Hayes and Read (2004) also used student questionnaires, administering one at the beginning and one at the end of the two courses included in their study. They found that, on entry, students expected their courses to boost their results and, on exit, were generally satisfied that their classes had indeed improved their chances of success (although this belief was not generally borne out by significant increases in test scores).

The IELTS Impact Studies (IIS) conducted by Cambridge ESOL (see Hawkey, 2006) included a questionnaire for students addressing learning and test-taking strategies; test preparation activities; and a retrospective section for students who had previously taken the IELTS test, calling for reflection about the experience. In all, 572 learners responded to the questionnaire, with 63% classified as pre-IELTS and 37% post-IELTS. Ninety-six percent of respondents had experienced IELTS preparation courses. In one section of the questionnaire, students were asked to indicate which activities had occurred in their IELTS preparation classes.
In another dimension of the IIS project, 83 teachers on IELTS preparation courses were given the same list and were asked to report on the occurrence of the activities in their classes. The responses to these items indicated some interesting differences between teachers and students. The teachers were generally more likely to indicate that activities (whether test related or not) occurred in their classes than were the learners. The three activities selected most frequently by teachers (learning how to organise essays; reading quickly to get the main idea of a text; learning quick and efficient ways of reading texts) ranked seventh, ninth and tenth, respectively, in frequency of student selection. Such differences raise questions about the extent to which washback to teachers can be assumed to generalise to washback to learners.


The available evidence suggests that tests often do exert influence on learners and that individual learners, like teachers, will experience this influence in different ways. Although teacher and student perceptions of test influence have been compared, there is a lack of evidence relating to how washback to teachers and programmes might interact with washback to learners. Questions addressed in this study include: what expectations do students bring to IELTS preparation courses? And how do these compare with the expectations of non-IELTS learners, with teacher perceptions of course content and with experiences reported at course exit?

2. Methods

2.1. Methodology

The current study represents one element in a broader investigation of the washback of the Academic Writing Module of IELTS. The wider study, described elsewhere (Green, 2003), involved classroom observations, interviews with participants and tests administered at course entry and exit. Practices on IELTS preparation courses (courses including dedicated preparation for the test) were compared with those on other courses in English for Academic Purposes (EAP), which were also intended to prepare learners for academic study in Britain, but were not directed at preparation for IELTS or any other proficiency test (these courses are hereafter referred to as non-IELTS courses). This paper takes as its focus a section of student questionnaires that learners on both IELTS preparation and non-IELTS courses completed at course entry and again at course exit. This section was also included on a questionnaire administered to teachers.

2.2. Instrumentation

The shared section of the three questionnaire instruments relevant to this study was developed in the first instance from teacher and student comments collected in interviews regarding salient differences between IELTS preparation and other forms of EAP instruction (see Green, 2003). As a form of qualitative validation, these comments were compared against syllabus documents and publicity for EAP and IELTS preparation courses, IELTS preparation textbooks and surveys of EAP learners (Herington, 1996; Jordan, 1997). Key areas of difference identified through these interviews included the following:

Said to be given greater emphasis on EAP courses:
- The use of books and journals as extensive input for academic writing (reflected in items C12, C15 and C20).
- Use and integration of sources in academic writing, including the nature of academic evidence (C7, C13).
- Learning of subject-specific content and vocabulary (C1, C22).
- Learning about expectations and cultural difference in academic settings (C3, C4, C11).
- Effective written communication and organisation (C8, C9).
- Editing and redrafting text (C14).
- Producing extended texts, much longer than the 250 words of IELTS Task 2 (C16).
- Concern with formal academic style (C23).


Said to be given greater emphasis on IELTS courses:
- Learning of general rather than technical vocabulary (C2).
- A close focus on the test format (C19, C24).
- Concern with strategies for improving test scores (C5, C6, C21).
- Managing study time (C17).
- A focus on grammar (C10, C18).

The questionnaire items were designed to focus on:
(a) The expectations students brought to their courses on entry (Student Questionnaire A).
(b) Students' retrospective perceptions of course focus at exit (Student Questionnaire B).
(c) Teachers' perceptions of the focus of their courses (Teacher Questionnaire).

Items comprised a sentence accompanied by a five-point Likert scale attached to descriptors ranging from 'I definitely disagree' to 'I definitely agree' (see Appendix A). The same 24 items appeared on all three questionnaires with wording adapted to the perspective of the respondents. Items on Questionnaire A typically began with, 'I expect to learn . . .'; on Questionnaire B, they began, 'I learned . . .'; on the Teacher Questionnaire they began, 'Students learned . . .'. Questionnaire A was intended to provide a point of comparison between learners entering different courses: do these students hold different expectations of IELTS preparation and other courses focused on English for Academic Purposes? Questionnaire B would serve as an indicator of student perceptions of course focus and as a means of comparing expectations with outcomes. The intention was to capture student beliefs concerning what they had gained from their courses, including an element of evaluation (with the understanding that skills may have been studied, but not learned). The Teacher Questionnaire provided a basis for similar comparisons between teachers working on IELTS preparation courses and those on non-IELTS courses. The sharing of items across questionnaires also enabled comparisons between groups: between the learners at course entry and at course exit and between the students and the teachers.

Once draft questionnaires had been assembled, the items were discussed with a group of four teachers working on a non-IELTS course, all of whom also had experience of IELTS preparation and who suggested changes in the wording and areas of redundancy. The two student questionnaires were then trialled with three classes of students at different institutions (total N = 31) on IELTS preparation and EAP courses and the items further refined. As learners tended to award high ratings to all items concerning course expectations, an additional, open-ended item was added to Questionnaire A asking respondents to identify the most important item of the 24: 'Which of these items (C1–C24) do you think is most important for you?' At the end of each section on all three questionnaires, there was also an opportunity to add brief comments.

The administration of the three questionnaires is summarised in Fig. 1. Questionnaire A was administered at course entry when students were taking initial placement tests or during the first week of classes. Questionnaire B was administered during the final week of classes. The Teacher Questionnaire was administered at the same time as Questionnaire B.

Fig. 1. Flow diagram showing sequence of administration of questionnaires.

2.3. Participants and settings

To control for differences attributable to nationality and first language, all participants in this study were L1 speakers of Chinese from mainland China (excluding the Hong Kong SAR).


They comprised 108 students: 75 learners on six non-IELTS courses and 33 studying on seven dedicated IELTS preparation courses (see Table 1). IELTS preparation classes included fewer students on average, with a lower proportion of Chinese learners; hence, the non-IELTS group was substantially larger than the IELTS preparation group. All were studying in the UK with the aim of entry to UK universities. The IELTS preparation courses included in this study ranged from 3 to 8 weeks in length and the non-IELTS courses ranged from 3 to 10 weeks. Although the courses varied in length, this variable did not predict differences in responses to the questionnaire items and so is not included in the analyses that follow. The non-IELTS courses were all pre-sessional courses provided by universities to prepare students for university entry. Students on these courses are assessed through a combination of teacher assessments and locally developed tests. Successful completion of a pre-sessional course is often accepted by a university in lieu of a test score as evidence of English language ability.

Thirty-two learners on non-IELTS courses had previously taken the test. Four of these, together with seven who had not previously sat the test, indicated that they were intending to take it within 6 months. On IELTS courses, 11 had taken the test previously and all but 3 of the 33 were intending to take IELTS following their courses, with the intention of using their scores for university admission. Fifteen of the IELTS preparation students and 37 of those on non-IELTS courses had previously been enrolled on an IELTS preparation course. From this, it can be seen that the IELTS writing test would be both challenging and important to most of the preparation course students and hence might be expected to exert a powerful influence. The large proportion of learners with previous experience of the test on both course types is indicative of the role that IELTS plays in the lives of prospective international students in the UK and no doubt influenced the expectations of these learners when they arrived on their courses. This feature of the context should be kept in mind in interpreting the results.
Table 1
Background of students participating in this study

           Non-IELTS courses (F: 48, M: 27)         IELTS preparation courses (F: 17, M: 16)
           Age (years)   Course length (weeks)      Age (years)   Course length (weeks)
Mean       25.36         5.53                       21.58         4.61
Median     24            4                          20            4
S.D.       4.21          2.27                       3.88          1.52
Minimum    18            3                          17            3
Maximum    40            10                         29            8

a Sex (F, female; M, male).

Table 2
Background of teachers participating in this study

           Non-IELTS courses (F: 18, M: 8)          IELTS preparation courses (F: 10, M: 3)
           Age (years)   ELT experience (years)     Age (years)   ELT experience (years)
Mean       42.90         12.78                      41.38         14.92
Median     45.5          12                         40            10
S.D.       8.62          7.24                       9.37          11.12
Minimum    28            2                          28            6
Maximum    58            30                         61            40

a Sex (F, female; M, male).

The teacher participants (Table 2) included 26 individuals working on non-IELTS courses in English for Academic Purposes at three institutions and 13 on IELTS preparation courses at 10 different institutions. All were working in the UK. Although teacher and student participants were drawn from the same institutions, the teachers and students were not necessarily from the same classes. However, teachers were working with the same course outlines, and questionnaire responses from those on courses of the same type indicated that their practices were consistent. This consistency between teachers was felt to justify the comparison of teacher and student responses that follows. Eight of the non-IELTS teachers and 10 of the IELTS teachers had previous experience of preparing students for the test. Seven of the IELTS teachers claimed they had received no training in IELTS preparation. Of the remaining six, four had received informal or ad hoc training, one had attended an in-house course at her institution and one had attended a week-long external course.

2.4. Analysis

As responses to the questionnaire items were not combined into scales and were not normally distributed, non-parametric statistical methods were used in the analyses. The ratings made by the students and teachers on the five-point Likert scales were compared between the three participant groups (IELTS preparation students, non-IELTS students and teachers) and the rank ordering of items (based on the summed ratings) was compared across questionnaires (Student Questionnaire A, entry; Student Questionnaire B, exit; and the Teacher Questionnaire). The significance of differences in the ratings awarded to items by IELTS and non-IELTS groups was evaluated through the non-parametric Mann–Whitney U-test. The level of agreement in the ranking of items by learners on the two course types at entry and exit and the ranking of items by the teachers was evaluated through rank order correlations (Spearman's rho).
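To make these two procedures concrete, the sketch below shows how such comparisons can be run with the scipy library. The rating vectors are invented for illustration (the study's raw ratings are not reproduced in the paper) and the variable names are hypothetical:

```python
# Minimal sketch of the two non-parametric procedures described above.
# All ratings below are invented illustrative values, not the study's data.
from scipy.stats import mannwhitneyu, spearmanr

# Five-point Likert ratings (0-4) awarded to one item by each group.
ielts_ratings = [4, 3, 4, 2, 3, 4, 4, 3]      # hypothetical IELTS-group ratings
non_ielts_ratings = [2, 3, 1, 2, 3, 2, 1, 3]  # hypothetical non-IELTS ratings

# Mann-Whitney U-test: is the difference in ratings between groups significant?
u_stat, p_value = mannwhitneyu(ielts_ratings, non_ielts_ratings,
                               alternative="two-sided")
print(f"U = {u_stat}, P = {p_value:.3f}")     # P < .01 treated as significant

# Spearman's rho: do two groups rank the 24 items in a similar order?
# Each list holds one group's summed ratings for items C1-C24 (invented).
group_a = [88, 75, 90, 62, 71, 80, 85, 77, 93, 60, 84, 89,
           70, 83, 76, 81, 65, 86, 50, 87, 82, 68, 74, 78]
group_b = [70, 72, 65, 60, 80, 66, 71, 75, 77, 82, 64, 58,
           69, 61, 63, 67, 73, 79, 81, 62, 76, 59, 68, 74]
rho, p_rho = spearmanr(group_a, group_b)
print(f"rho = {rho:.2f}, P = {p_rho:.3f}")
```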

3. Results and discussion

3.1. Student IELTS scores

As one aspect of the study, students were administered versions of the IELTS Academic Writing test on course entry and again at course exit. Effects of task difficulty were controlled through a crossover design and each script was rated by two trained IELTS examiners using the official IELTS scoring criteria and the nine-band IELTS scale.
The average of these two ratings was used in calculating the mean scores displayed in Table 3.

Table 3
Means and standard deviations of IELTS Academic Writing band scores at course entry and exit

                    Entry mean   S.D.   Exit mean   S.D.
Non-IELTS           5.22         .57    5.57        .53
IELTS preparation   4.95         .55    5.39        .50

The learners on the non-IELTS courses achieved marginally higher mean scores at both entry and exit: the gap was slightly over a quarter of a band at course entry (.27) and rather less than a quarter of a band (.18) at exit. This indicates that the learners on both course types were generally performing below the level required for entry to UK universities and would need to improve their writing ability to meet the requirements (see the IELTS website for further information on university entry requirements). Discussion of learning gains on IELTS preparation courses is beyond the scope of this article, but the interested reader is referred to Green (2005).

3.2. Students' reasons for taking their courses

Student respondents were asked to rate each of a series of items on a five-point scale and to indicate the main reason why they were taking their course by selecting one item from a list of statements completing the stem 'I am taking this course because . . .':

- I want to get a good grade on IELTS (or other test).
- I want to learn useful skills for studying at university.
- I want to improve my general ability to use English.
- I am required to take the course by my employer, my parents or other authority.
- I have a different reason (please specify) . . .

The results are displayed in Fig. 2.

Fig. 2. Reasons selected by students for taking their courses.

Although a clear majority (69%) of non-IELTS students indicated that they were studying with the primary aim of learning useful skills for university, the IELTS preparation students were evenly split between those choosing 'I want to get a good grade on IELTS' and those preferring 'I want to learn useful skills for studying at university'. Both of these items attracted high ratings from IELTS preparation students. On both course types, 18% prioritized improving their general ability to use English.

3.3. Students' expectations at course entry (Questionnaire A)

Expectations of course content were high for learners on all course types. Inevitably, students expected to learn most of the items listed. However, differences did emerge between IELTS preparation and non-IELTS EAP courses in the ratings given to items (Table 4). The four highest rating items for non-IELTS EAP students at course entry were:

1. C9 I expect to learn how to communicate my ideas effectively in writing.
2. C12 I expect to learn how to find information from books to use in writing essays.
3. C20 I expect to learn quick and efficient ways of reading books in English.
4. C23 I expect to learn how to write in a formal, academic style.

For IELTS preparation courses the highest ranking items were:

1. C23 I expect to learn how to write in a formal, academic style.
2. C9 I expect to learn how to communicate my ideas effectively in writing.
3. C18 I expect my teacher to correct my grammar mistakes in my written work.
4. C21 I expect to learn how to write successful test essays.

Of these items, all were ranked within three positions on both course types, with two exceptions: C12 ranked lower for IELTS preparation students (in 17th position) and C18 was in 15th place for the non-IELTS learners. The responses reveal a shared concern among students on both course types with effective writing in a formal, academic register. On the other hand, the following were among the lowest ranked items at course entry on both course types:

C19 I expect the activities we do in class will be similar to the ones on the IELTS test.
C10 I expect to learn grammar.


Table 4
Responses to Questionnaire A by course type

Item                              Non-IELTS        IELTS            Sig.a
                                  Mean    S.D.     Mean    S.D.
Use of books/journals
  C12                             3.88    .33      3.52    .91      .017
  C15                             3.72    .67      3.52    .91      .285
  C20                             3.88    .37      3.79    .42      .166
Academic sources and evidence
  C07                             3.76    .68      3.39    1.00     .019
  C13                             3.74    .55      3.48    .80      .054
Subject specificity
  C01                             3.79    .50      3.79    .42      .706
  C22                             3.77    .59      3.55    .71      .039
Academic expectations/culture
  C03                             3.85    .46      3.61    .66      .010
  C04                             3.41    .92      3.48    .62      .766
  C11                             3.84    .47      3.48    .71      .001
Effective writing/organisation
  C08                             3.81    .49      3.61    .79      .164
  C09                             3.93    .25      3.82    .39      .074
Redrafting and editing
  C14                             3.85    .36      3.70    .64      .222
Producing extended text
  C16                             3.81    .57      3.55    .79      .019
Formal style
  C23                             3.86    .34      3.85    .36      .822
General vocabulary
  C02                             3.50    .78      3.73    .45      .187
Test format
  C05                             3.25    1.13     3.64    .65      .119
  C06                             3.67    .62      3.70    .47      .828
  C21                             3.85    .54      3.82    .39      .231
Test preparation strategies
  C19                             2.04    1.24     3.27    1.01     .000
  C24                             3.07    1.11     3.52    .80      .051
Time management
  C17                             3.61    .64      3.06    1.25     .031
Grammar focus
  C10                             3.09    1.11     3.36    .78      .449
  C18                             3.74    .53      3.82    .39      .560

a Mann–Whitney U-test; differences at P < 0.01 treated as significant.

Item C24 'I expect to take practice tests in class' was ranked in 23rd place by the non-IELTS students, but in 16th place by IELTS students. Item C17 'I expect to learn how to organise my time for studying' was ranked in 24th position by the IELTS students, but was in 18th place for the non-IELTS students. These responses suggest that learners on both course types had limited expectations of learning grammar or of direct test practice in class.


Reflecting the different focus of the two course types, Mann–Whitney U-tests indicated significant (P < .01) differences between IELTS and non-IELTS students for two items: C11 'I expect to learn how to write university essays and reports', rated higher by non-IELTS students, and C19 'I expect the activities we do in class will be similar to the ones on the IELTS test', rated higher by IELTS preparation students. This latter item, although given relatively low ratings by learners on both course types, received a mean rating from non-IELTS students that, at 2.04, was over a point lower than for IELTS students (3.27). This was the only item that contained explicit mention of the IELTS test. Taking the responses of the learners on the two course types together suggests that while learners on non-IELTS courses were relatively clear that they would not engage in IELTS-like tasks, the IELTS learners did not expect this to be a major focus of their courses either.

Following trialling, it was anticipated that all course expectation items might be awarded high ratings. As a result, students were also asked to indicate which one of the 24 statements they felt would be most important to them. The items most often selected by the IELTS preparation students were C9 'I expect to learn how to communicate my ideas effectively in writing' and C23 'I expect to learn how to write in a formal, academic style', each selected by three students. C5 'I expect to learn ways of improving my English Language test scores' attracted two respondents. For non-IELTS students, the most popular choice was C23 'I expect to learn how to write in a formal, academic style', selected by 18 learners; C9 was also popular, being selected by 16 non-IELTS respondents. These responses confirmed the impression given by the relative ranking of these items of a shared concern with formal register and effective written communication.

It appears that the learners arrived on the courses of different types with comparable expectations. This implies that concentrating on the test and test-taking strategies was not the only consideration for the IELTS preparation students. Indeed, the prioritization of C9 and C23 (concerned with effective formal writing) and the low ranking given to items C19 and C24 (both concerned with a focus on test format) suggest that even on the IELTS preparation courses, learners prioritized the development of writing skills over test practice. The similarities between the rankings given by the learners on the two course types suggest that they arrived on their courses with broadly parallel expectations of course content. At this stage, they did not appear to hold the same clear distinction between test preparation and non-IELTS EAP courses as, it would emerge, did the teachers.

3.4. Student evaluations at course exit (Questionnaire B)

At course exit, students were asked to rate the same activities and objectives once more. While at entry they had indicated what they were expecting to study, and were able to rate a list of skills that they were hoping to acquire, now students were invited to rate what they felt they had learned during the course. As might be anticipated, because students were now reflecting back on and evaluating their experiences, the ratings were generally lower than at course entry (Table 5). At this point, C18 'My teacher corrected my grammar mistakes in my written work' was the highest ranked item on both course types.
However, the high ratings given to this item may reflect an aspect of the design of the questionnaire rather than the prevalence of grammar correction: it is perhaps easier for learners to endorse a statement about teacher behaviour than ones prefaced 'I learned . . .'.


Table 5
Responses to Questionnaire B by course type

Item                              Non-IELTS        IELTS            Sig.a
                                  Mean    S.D.     Mean    S.D.
Use of books/journals
  C12                             3.12    .86      2.88    .99      .237
  C15                             3.36    .69      2.85    1.06     .016
  C20                             2.50    1.09     2.69    1.09     .383
Academic sources and evidence
  C07                             3.53    .50      3.03    .98      .009
  C13                             3.57    .64      2.64    1.29     .000
Subject specificity
  C01                             2.22    1.30     2.21    1.43     .989
  C22                             2.92    1.04     2.00    1.44     .001
Academic expectations/culture
  C03                             3.38    .90      2.58    1.32     .000
  C04                             3.27    .90      2.39    1.27     .000
  C11                             3.27    .78      2.73    1.04     .006
Effective writing/organisation
  C08                             3.68    .47      3.00    .79      .000
  C09                             3.14    .88      2.88    .96      .147
Redrafting and editing
  C14                             3.30    .70      2.39    1.17     .000
Producing extended text
  C16                             3.57    .64      1.73    1.51     .000
Formal style
  C23                             3.50    .76      3.38    .66      .156
General vocabulary
  C02                             2.95    .92      3.33    .78      .022
Test format
  C05                             2.26    1.17     2.73    1.15     .045
  C06                             2.80    1.18     3.45    .71      .006
  C21                             2.81    .92      2.76    1.03     .945
Test preparation strategies
  C19                             2.03    1.11     3.21    .86      .000
  C24                             2.64    1.03     2.84    1.08     .232
Time management
  C17                             2.53    1.13     2.30    1.26     .403
Grammar focus
  C10                             2.91    .89      3.03    1.07     .221
  C18                             3.78    .58      3.67    .54      .109

a Mann–Whitney U-test; differences at P < 0.01 treated as significant.

The next three highest ranked items on each course type received very different mean ratings. For the non-IELTS students, the remaining high-ranking items concerned organisation, the use of references and the production of extended texts (the ranks these items received from IELTS preparation students are given in parentheses):


2. C8 I learned how to organise an essay to help the reader to understand. (8)
3. C13 I learned how to use quotations and references in academic writing. (17)
4. C16 I learned how to write long essays or reports of 1000 words or more. (24)

The importance of integrating source material and of generating and organising lengthy texts were both aspects of academic writing emphasized by the teachers in interviews and felt by them to be under-represented by IELTS. The following items received the second to fourth highest mean ratings from IELTS students; here, non-IELTS ranks are given in parentheses:

2. C6 I learned words and phrases for describing graphs and diagrams. (18)
3. C23 I learned how to write in a formal, academic style. (6)
4. C2 I learned general vocabulary. (14)

Two of the three (C6 and C2) had been identified with IELTS preparation in the pilot phase, but the third (C23) was not. C6 is a clear reflection of IELTS Task 1, which requires candidates to describe iconic data in the form of a graph, table or diagram. However, learning of vocabulary and use of formal style are less clearly identified with the test format. The high ratings given to these items (and to grammar correction) suggest that test preparation was understood by these learners to include a traditional focus on lexico-grammar, including elements specifically targeted at the test (graph description), as well as introducing a more formal style of writing.

Mann–Whitney U-tests indicated significant differences between learners on the two course types on 11 of the 24 items. Two were rated higher by IELTS preparation than non-IELTS learners: IELTS learners reported significantly (P < .01) greater emphasis on learning words and phrases for describing graphs and diagrams (the focus of IELTS Task 1) and on the use of IELTS-like tasks in the classroom:

C6 I learned words and phrases for describing graphs and diagrams.
C19 The activities we did in class were similar to the ones on the IELTS test.

The remaining items displaying significant (P < .01) differences were given higher mean ratings by non-IELTS students:

Cultural difference and academic expectations
C3 I learned about the kinds of writing tasks students do at university.
C4 I learned about differences between university education in my country and in Britain.
C11 I learned how to write university essays and reports.

Use and integration of sources
C7 I learned how to use evidence to support my written arguments.
C13 I learned how to use quotations and references in academic writing.

Effective organisation
C8 I learned how to organise an essay to help the reader to understand.

Editing and redrafting
C14 I learned how to edit and redraft my written work.

Producing extended text


C16 I learned how to write long essays or reports of 1000 words or more.

Learning subject-specific content
C22 I read books and articles about my specialist subject area.

These students reported a greater emphasis on referencing and the use of evidence (C7, C13), drafting (C14), effective written communication (C8), producing extended text (C16), learning subject-specific content (C22) and the expectations of university staff (C3, C4, C11). The differences in the ratings given to these items would seem to support the contention of the teachers that IELTS preparation might under-represent some of the academic writing skills they believed would be required for university study.

The broad agreement in expectations between the learners on the two course types, reflected in the similarity of responses to Questionnaire A, contrasts with the divergence in their experience apparent from Questionnaire B. Although teachers sometimes claimed (in interviews) that they felt obliged to focus instruction on the test, the students' responses to Questionnaire A suggest that they did not arrive on their courses with the view that teaching should be limited to what would be tested. However, at course exit clear differences emerge between the course types that would appear to reflect differences of focus. This raises the further question of the extent to which learner experiences reflect teacher perceptions of course content. The Teacher Questionnaires allowed for comparisons between learner reports of their experiences and teacher perceptions. It is therefore to the teacher responses that we turn next.

3.5. Teacher ratings

Teachers were generally more confident in their ratings than students, awarding a higher proportion of zeros and fours, the extreme points on the scales. In Mann–Whitney U-tests, only 6 of the 24 items did not display significant differences between IELTS and non-IELTS teachers (Table 6):

C1 Students learn specialist vocabulary for their university subjects.
C2 Students learn general vocabulary.
C18 I correct students' grammar mistakes in their written work.
C20 Students learn quick and efficient ways of reading books in English.
C21 Students learn how to write successful test essays.
C24 Students take practice tests in class.

Of these items, most had been identified in the development phase with IELTS preparation rather than non-IELTS classes (C2, C18, C21, C24). Teachers reported limited attention to vocabulary in their classes, whether technical or general, and classes were not said to involve taking practice tests. Both course types appeared to give similar attention to grammar correction (C18), while both IELTS and non-IELTS teachers felt that they taught students how to write successful test essays. C20, which focussed on learning to read extensively under time pressure to find material for writing, was not said to be a major focus on either course type. Of the remaining items, three were rated significantly (P < .01) higher by IELTS preparation teachers:

Table 6
Responses to Teacher Questionnaire by course type

Item                              Non-IELTS        IELTS            Sig.a
                                  Mean    S.D.     Mean    S.D.
Use of books/journals
  C12                             3.87    .34      .85     1.07     .000
  C15                             3.61    .94      1.23    1.36     .000
  C20                             2.73    1.03     2.46    1.27     .566
Academic sources and evidence
  C07                             3.91    .29      3.31    .95      .008
  C13                             4.00    .00      .69     1.11     .000
Subject specificity
  C01                             1.78    1.44     1.23    1.09     .305
  C22                             2.70    1.36     .46     .88      .000
Academic expectations/culture
  C03                             3.87    .46      1.69    1.11     .000
  C04                             2.87    1.32     1.23    .83      .001
  C11                             3.74    .54      1.23    1.09     .000
Effective writing/organisation
  C08                             4.00    .00      3.69    .48      .005
  C09                             3.96    .21      3.31    .85      .001
Redrafting and editing
  C14                             3.91    .29      2.69    1.03     .000
Producing extended text
  C16                             3.70    .93      .54     1.13     .000
Formal style
  C23                             3.61    .89      2.62    1.19     .003
General vocabulary
  C02                             3.00    1.04     3.15    .80      .870
Test format
  C05                             1.50    1.57     3.23    .83      .002
  C06                             2.66    1.36     3.77    .44      .004
  C21                             1.91    1.48     3.15    .80      .015
Test preparation strategies
  C19                             1.27    1.16     3.62    .51      .000
  C24                             1.71    1.62     3.08    1.04     .018
Time management
  C17                             3.00    1.09     1.85    1.21     .008
Grammar focus
  C10                             3.00    1.09     1.54    1.13     .001
  C18                             2.96    1.22     3.46    .88      .210

a Mann–Whitney U-test; differences at P < 0.01 treated as significant.

C5 Students learn ways of improving their English Language test scores.
C6 Students learn words and phrases for describing graphs and diagrams.
C19 The activities we do in class are similar to the ones on the IELTS test.

All three of these items were identified with a focus on test format (but did not involve direct practice with test material).


The importance given to C6 (also rated highly by IELTS preparation students) reflects the content of IELTS Task 1 (data description). The ratings awarded to these three items indicate that, although practice tests may be infrequent, the content of the test is nonetheless felt by teachers to shape the selection of class activities: this focus on the test format did clearly distinguish the preparation courses in this study from non-IELTS courses.

A greater number of items (15) were rated significantly higher by the non-IELTS teachers. Of these, three concerned issues of intertextuality, or the treatment of source material in academic writing (C12, C13 and C15). This was an area consistently identified by teachers in interviews as distinguishing other forms of EAP instruction from IELTS preparation.

C12 Students learn how to find information from books to use in writing essays.
C13 Students learn how to use quotations and references in academic writing.
C15 Students learn how to use ideas from text books or academic journals in their writing.

A further three items were explicitly concerned with the university context or with the academic subject that students were intending to study. This would suggest that the teachers on IELTS preparation courses did not consider themselves to be preparing learners for university study to the same extent as their non-IELTS colleagues, although all the learners included in the study were intending to apply for places at a university.

C3 Students learn about the kinds of writing tasks they will do at university.
C4 Students learn about differences between university education in their countries and in Britain.
C11 Students learn how to write university essays and reports.

Interestingly, there were also significant differences between these teachers in areas that would seem to be addressed by the design of the IELTS Academic Writing test and that hence might be expected, contrary to the pilot study findings, to be treated similarly by teachers across courses. These included textual organisation (C8 'Students learn how to organise an essay to help the reader to understand'); use of supporting evidence (C7 'Students learn how to use evidence to support their written arguments') and communicative quality (C9 'Students learn how to communicate their ideas effectively in writing'), all of which are addressed by the IELTS scoring criteria and so might be expected to be of value in preparing for the test.

Given the place of grammar in the IELTS scoring criteria, it is perhaps surprising that a significant (P < .01) difference should also emerge for C10 'Students learn grammar', with the non-IELTS teachers reporting more explicit instruction in grammar (see Table 6). However, this item contrasts with the apparently similar C18 'I correct students' grammar mistakes in their written work', which did not display any significant difference between groups of teachers.

In interviews, the IELTS preparation teachers often reported teaching textual organisation and use of supporting evidence as aspects of their courses, and here they were among the highest ranking items (C8 was ranked second, C7 fifth and C9 sixth by the IELTS preparation teachers). However, that they were rated higher (ranking first, fourth and third) by the non-IELTS teachers suggests that the latter may have felt more able to devote resources to these skills.
Other areas of difference that could more readily be traced to the design of the test involved the redrafting of written work, which is not feasible within the time constraints of the test (C14 'Students learn how to edit and redraft their written work'); work on extensive assignments, the test being limited to two essays of a recommended 150 and 250 words (C16 'Students learn how to write long essays or reports of 1000 words or more'); academic style, which is not explicitly rewarded in test essays (C23 'Students learn how to write in a formal, academic style'); and time management, organising time for study not being an immediate requirement of the test (C17 'Students learn how to organise their time for studying').

The teachers were more likely than the learners to endorse the differences identified between courses in the questionnaire development phase of the study. However, certain of the features said to characterise IELTS preparation, such as grammar correction or the use of practice tests in class, did not emerge as significant for these courses.

Table 7
Correlations between teacher and student ratings

                            Non-IELTS          IELTS              Non-IELTS         IELTS             Non-IELTS
                            students (entry)   students (entry)   students (exit)   students (exit)   teachers
IELTS students (entry)      .52a
Non-IELTS students (exit)   .31                .08
IELTS students (exit)       .15                .17                .13
Non-IELTS teachers          .50                .04                .75a              .06
IELTS teachers              .17                .29                .11               .67a              .10

Rank order correlations (Spearman's rho).
a Correlation is significant at the P < 0.01 level (two-tailed).

3.6. Correlations between teacher and student ratings

Between entry and exit, the two groups of students diverged from each other in the ordering of items and converged with the teachers on their respective course types (Table 7). The significant (P < .01) correlation between non-IELTS and IELTS preparation learners in their expectations for their courses (rs = .52) is not repeated at course exit when students reflect on what they have learned (rs = .13). Conversely, at course exit, but not at course entry, ratings awarded by students were significantly (P < .01) correlated with those awarded by the teachers: rs = .75 for EAP and rs = .67 for IELTS courses, respectively. This suggests that these learners' experiences of their courses reflect the priorities identified by the teachers more than do their expectations at course entry.

The relationship between teacher ratings and student ratings at course entry and exit on selected items is illustrated in Fig. 3. This plots the differences in the ratings awarded by the teachers and students associated with the two course types as a proportion of the variance in each item. Here, the movement towards teacher descriptions of course content can be traced in the divergence between the two groups of learners. It is apparent from Fig. 3 that the differentiation between course types is clearest for the teachers and that most of the course features in the questionnaire are more strongly identified with non-IELTS than with IELTS preparation courses. The figure also illustrates areas of convergence between the student perceptions of course outcomes and the teacher responses. It is, moreover, apparent that there are areas of divergence between the teachers and learners.

The differences found between teachers and students in describing course content underline the importance of seeking learners' views of washback to complement those of teachers.
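For readers who wish to reproduce this kind of analysis, the sketch below computes a Table 7-style lower-triangular matrix of pairwise Spearman correlations. The six profiles are randomly generated stand-ins for the study's actual item ratings, so the printed coefficients are illustrative only:

```python
# Illustrative sketch: pairwise rank-order correlations (Spearman's rho)
# between six groups' item-rating profiles. The profiles are randomly
# generated stand-ins for the study's actual summed ratings.
import numpy as np
from scipy.stats import spearmanr

groups = ["Non-IELTS students (entry)", "IELTS students (entry)",
          "Non-IELTS students (exit)", "IELTS students (exit)",
          "Non-IELTS teachers", "IELTS teachers"]

rng = np.random.default_rng(seed=1)
# One row per group: summed ratings for the 24 shared items (invented).
profiles = rng.integers(low=20, high=100, size=(len(groups), 24))

# Lower triangle of the correlation matrix, as laid out in Table 7.
for i in range(1, len(groups)):
    for j in range(i):
        rho, p = spearmanr(profiles[i], profiles[j])
        marker = "*" if p < 0.01 else ""
        print(f"{groups[i]} vs {groups[j]}: rho = {rho:+.2f}{marker}")
```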


Fig. 3. Comparison of mean ratings made (a) by students at course entry, (b) by students at course exit and (c) by teachers.
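The 'difference as a proportion of the variance' plotted in Fig. 3 can be read as a standardized mean difference. The sketch below shows one plausible computation of such an index from the Table 6 teacher ratings; the paper does not specify the exact formula used for the figure, so the pooled-S.D. scaling here is an assumption:

```python
# One plausible reading of Fig. 3: for each item, the difference between the
# two course types' mean teacher ratings, scaled by a pooled S.D.
# Means and S.D.s are taken from Table 6; the pooling formula is an assumption.
import math

# (item, non-IELTS mean, non-IELTS S.D., IELTS mean, IELTS S.D.)
teacher_ratings = [
    ("C06", 2.66, 1.36, 3.77, 0.44),   # graph/diagram description
    ("C13", 4.00, 0.00, 0.69, 1.11),   # quotations and references
    ("C19", 1.27, 1.16, 3.62, 0.51),   # IELTS-like class activities
]

for item, m_non, sd_non, m_ielts, sd_ielts in teacher_ratings:
    pooled_sd = math.sqrt((sd_non ** 2 + sd_ielts ** 2) / 2)
    d = (m_ielts - m_non) / pooled_sd  # positive: identified with IELTS courses
    print(f"{item}: standardized difference = {d:+.2f}")
```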


These learners did not share their teachers' understanding of the balance of activities in class and did not consistently identify the same distinctions between the IELTS preparation and non-IELTS classes. Contrary to the teacher responses, these students did not see non-IELTS courses as being more helpful in facilitating quick and efficient reading of books or using ideas from books in writing essays. The non-IELTS students appeared equally likely to believe that their courses had helped them to write successful test essays (C21), that their classes reflected IELTS content (C19), and that they had received grammar instruction (C10, C18). These results suggest that these students believed that the broader-based non-IELTS courses may have been equally effective as IELTS preparation in improving their ability to take the test.

4. Conclusions

This study included a limited number of Chinese learners on a restricted sample of courses preparing learners for academic study at UK universities. The focus has been on the skill of academic writing and on the influence of the IELTS test. Clearly there are limitations on how far the results can be expected to generalise beyond this context. Further research is needed into how teachers and learners in other contexts, preparing for tests of skills other than writing, may experience washback.

Evidence from both the teachers and the students regarding their course expectations and outcomes is indicative of substantive differences between the course types included in the study. These learners arrived on their courses with expectations of instruction which varied, to a limited extent, according to course aims, but which seemed to reflect to a far greater extent shared beliefs about learning. They left reporting divergent experiences of what they had learnt; experiences that were broadly in keeping with teacher reports about the nature of the instruction on the two course types. Teachers on the different course types adopted distinctive aims and students appeared to accommodate to these, reflecting the focus of instruction in their reports of course outcomes. However, it did not appear that the differences in course content were driven by differences in learner expectations: those arriving on test preparation courses did not prioritize closely test-related content. For explanations of why teachers tended to draw such clear distinctions between IELTS preparation and non-IELTS classes, we must look elsewhere.

Of particular interest are the course outcomes that were not anticipated, or at least were not prioritized, by the learners at course entry, as these are suggestive of the influence of the teacher and the learning context on learners. On IELTS preparation courses these included the description of graphs and diagrams; on non-IELTS courses they involved referencing, learning about university writing tasks and learning about differences in university study across cultures, areas in which it may be feared that learners preparing for IELTS might not have received sufficient instruction. Some of the differences emerging from the questionnaires can be traced to the design of the IELTS test, and this clearly played a role in shaping the content of preparation classes. For example, the description of graphs and diagrams is required in Task 1 of the Academic Writing Module and was frequently observed as a focus of IELTS preparation classes. However, it is less clear why teachers choose to focus on particular features of the test, how far they believe their choices will influence students' chances of success and how far these beliefs are justified by the outcomes.
While there is little evidence here to suggest that these students played a direct role in shaping washback to the teacher, this study has suggested that the teachers and courses might have been influential in shaping washback to the learners. Learner responses to Questionnaire B reflected teacher reports of what had been taught. On the other hand, the study has also indicated ways in which learners may perceive tests and test preparation differently to their teachers.


Future investigations of washback on learning will need to take into account the different perspectives on washback that teachers and students may bring. While the use of questionnaires in this study has provided one means of investigating such questions, more sensitive instruments will probably be required to probe them further. In-depth interviews with participants or the use of learning records, in conjunction with classroom observation, might provide the evidence to support more detailed accounts of such relationships.

The study has further suggested that there are aspects of academic writing that may be under-represented in current models of IELTS preparation. A possible implication for teaching is that broader-based approaches to test preparation might be of value in preparing learners for academic life. The findings relating to learner expectations suggest that, in this context at least, such an approach may not disappoint learners. Indeed, there are clear benefits in encouraging students to look beyond the immediate demands of the test and to consider how test tasks relate to the broader requirements of academic writing.


Appendix A. Questionnaire A items


References
Alderson, J. C., & Hamp-Lyons, L. (1996). TOEFL preparation courses: A study of washback. Language Testing, 13(3), 280–297.
Bailey, K. M. (1996). Working for washback: A review of the washback concept in language testing. Language Testing, 13(3), 257–279.
Bailey, K. M. (1999). Washback in language testing. Princeton, NJ: ETS.
Brown, J. D. (1993). Language testing hysteria in Japan. The Language Teacher, 17(2), 41–43.
Buck, G. (1988). Testing listening comprehension in Japanese university entrance examinations. JALT Journal, 10(1), 15–42.
Ferman, I. (2004). The washback of an EFL national oral matriculation test to teaching and learning. In L. Cheng, Y. Watanabe, & A. Curtis (Eds.), Washback in language testing: Research contexts and methods (pp. 191–210). Mahwah, NJ: Lawrence Erlbaum.
Green, A. (2003). Test impact and EAP: A comparative study in backwash between IELTS preparation and university pre-sessional courses. Unpublished doctoral dissertation, University of Surrey, Roehampton, UK.
Green, A. (2005). EAP study recommendations and score gains on the IELTS Academic Writing test. Assessing Writing, 10(1), 44–60.
Hamp-Lyons, L. (1998). Ethical test preparation practice: The case of the TOEFL. TESOL Quarterly, 32(2), 329–337.
Hawkey, R. (2006). Impact theory and practice: Studies of the IELTS test and Progetto Lingue 2000. Cambridge: Cambridge University Press.
Hayes, B., & Read, J. (2004). IELTS test preparation in New Zealand: Preparing students for the IELTS Academic Module. In L. Cheng, Y. Watanabe, & A. Curtis (Eds.), Washback in language testing: Research contexts and methods (pp. 97–112). Mahwah, NJ: Lawrence Erlbaum.
Herington, R. (1996). Test-taking strategies and second language proficiency: Is there a relationship? Unpublished master's thesis, Lancaster University, UK.
International English Language Testing System. (2005). The IELTS handbook. Cambridge: Cambridge ESOL Examinations/The British Council/IELTS Australia.
Jordan, R. R. (1997). English for Academic Purposes. Cambridge: Cambridge University Press.
Peacock, M. (1998). Comparing learner and teacher views on classroom activities for EFL. International Journal of Applied Linguistics, 8(2), 233–250.
Pearson, E. (1988). Learner strategies and learner interviews. ELT Journal, 42(3), 173–178.
Saville, N., & Hawkey, R. A. (2004). A study of the impact of the International English Language Testing System, with special reference to its washback on classroom materials. In L. Cheng, Y. Watanabe, & A. Curtis (Eds.), Washback in language testing: Research contexts and methods (pp. 73–96). Mahwah, NJ: Lawrence Erlbaum.
Shohamy, E., Donitsa-Schmidt, S., & Ferman, I. (1996). Test impact revisited: Washback effect over time. Language Testing, 13(3), 298–317.
Wall, D. (2000). The impact of high-stakes testing on teaching and learning. System, 28(4), 483–499.
Watanabe, Y. (1992). Washback effects of college entrance examination on language learning strategies. JACET Bulletin, 23, 175–194.
Watanabe, Y. (1996). Does grammar translation come from the entrance examination? Preliminary findings from classroom-based research. Language Testing, 13(3), 318–333.
