Music and Simultaneous Interpreting
How music can affect simultaneous interpreting training

Silvia Velardi, PhD
IULM University, Milan, Italy
silvia.velardi@gmail.com

Advanced Research in Scientific Areas 2012, International Virtual Conference, 3-7 December 2012 (http://www.arsa-conf.com)
Abstract - Musical abilities are generally considered an evolutionary by-product of more important and challenging functions such as those involved in language. Indeed, much research has been carried out on the relationships between music and language, asserting that there is a great deal these two domains of human behavior share [1]. However, no attempt has yet been made to bring together music and interpreting, two fields which share fascinating features in terms of cognitive skills.¹ The present study is part of an ongoing, larger PhD research project, whose aim is to discuss the role of research in interpreter education and training from a methodological point of view [2]. The overall objective of the research project is to answer the question of whether any cross-relationships among music (namely rhythm), language and simultaneous interpreting exist. In this paper I will concentrate on such correlations, and on the work-in-progress pilot study itself, whose aim is to explore whether music training methodologies, specifically those related to rhythm [3], could support and improve simultaneous interpreting (SI) students' learning process and thus enhance their performance skills, in terms of interpreting strategies and prosodic features. A tentative outline of expected results will also be presented.

Keywords - language; music; simultaneous interpreting; cognitive skills; training; interdisciplinary perspective; pilot study

¹ Private conversation with A. Patel (Senior Fellow at The Neurosciences Institute, San Diego) and R. Chaffin (Professor in Psychology of Music at the University of Connecticut).

I. INTRODUCTION

The connection between music and language has spawned huge interest in the literature [4]. The link has intrigued scholars because it may cast new light on the nature of both music and language, their evolution, their neural underpinnings, and their dynamics [5]. The past decade has witnessed an upsurge of interest in the comparative study of language and music as cognitive systems. Hence, it is not surprising that research targeting language and music has spread over many branches of cognitive science, such as linguistics, psychology, cognitive neuroscience, and education [6]. Interest in the relation between music and language started long before modern cognitive science: the topic has been investigated by a wide range of scholars, including philosophers, biologists, poets, composers, linguists, and musicologists [7].


II. THE ARCHITECTURE OF MUSIC AND LANGUAGE

Language and music share a common function, which supports the claim that they share a common root and common constitutive principles. Shared mechanisms for processing music and language may underlie both perception and production patterns in the two domains. In fact, one of the reasons why music appears interesting and comprehensible is that it contains various kinds of structures that the human brain is capable of apprehending and organizing into hierarchically structured sequences² [8]. This means that both music and language need to be stitched together from their parts in order to be understood, since they might otherwise be perceived as disjointed sequences of elements. The interplay between parts and wholes therefore plays a crucial role in shaping the hierarchical structure of the mental processes underlying music and language. From this perspective, music and language may rest on shared resources in terms of sound systems (phonology) [9], syntactic comprehension (syntax) [10], and the assignment of meaning to their parts (semantics) [11]. In sum, we may consider several analogies and similarities between the structure of music and that of language, which support the assumption that linguistic and musical meaning are somewhat compatible and based on common mechanisms. It is fair to maintain that many of these psychological principles are increasingly being mapped onto their corresponding neural underpinnings. Research on the nature of the overlap between music and language has also revealed interesting notions about the functional architecture of both domains and has helped refine our understanding of the role played by different brain areas in the processing of these two systems [12].
² The paradigmatic expression of the representation of tonal music in such terms remains Lerdahl and Jackendoff's A Generative Theory of Tonal Music (1983), which is grounded on the assumptions that 1) music is constructed according to a grammar (that is, there are sets of rules or constraints determining the legitimacy of musical sentences), and 2) the representation of music is often hierarchical (that is, the grammar determines elements within a musical sequence which, in turn, control the neighbouring elements, so that elements at one level become subsumed at the next higher level).

III. MUSIC AND SIMULTANEOUS INTERPRETING

Starting from the evidence that music is a form of language, one may wonder what relationship exists between learning and acquiring foreign languages and perceiving music. In this respect, linguistic interpreting may serve as a useful domain for understanding the major cognitive mechanisms and abilities underlying both language and music. In particular, one may think of music performance as a sort of translation between two languages (from notated score into instrumental performance), made possible by a range of cognitive and perceptual skills and abilities shared by music performance and simultaneous interpreting. Indeed, interpreting music and interpreting foreign languages are both extremely demanding tasks in terms of cognitive and perceptual skills.

From this perspective, musical performance might be envisaged as the musical activity that most closely resembles the process of simultaneous interpreting from a cognitive point of view, and some evidence may be drawn from the idea that underlying mechanisms intertwine the two fields of investigation. My argument is that musical performance may be likened to the process of simultaneous interpreting in terms of: (a) performing an online task. Playing music requires the performer to translate notation into performance in real time and produce expression on the spot. In line with the music performing task is the process of simultaneous interpreting, the communication process performed by the interpreter, who renders the speaker's message from the source language (SL) into the target language (TL) in real time. Like music, SI is a time-critical performance that requires a heightened level of awareness, ability, and expertise, since the spoken message in the source language must be accurately and simultaneously reformulated and produced in the target language³. This leads us to acknowledge that simultaneous interpreters and musical performers acquire important skills for controlling their resources, so that they can be considered experts in executive control. It may be fair to maintain that both sight-reading and SI are complex online processes in which extensive experience and practice may result in superior executive functioning. (b) The acquisition of visual input and a performance plan [13] pertaining to musical performance vs. the listening and comprehension effort typical of SI [14], together with the ability to encode linguistic and musical material into meaningful units so as to facilitate performance.
³ Simultaneous interpreting, however, is not strictly simultaneous. It usually relies on the ear-voice span (EVS), or lag, which gives the interpreter a few seconds in which to formulate his/her utterances while lagging slightly behind the actual speaker. Similarly, in musical sight-reading there is a delay between reading the notes in the score and actually playing them: this lag is the eye-hand span (EHS). Despite this margin, both simultaneous interpreting and sight-reading remain high-pressure, stressful real-time tasks.
The focus on the perception of visual input from the musical score in playing music, and the related phenomenon of pattern recognition as a grouping mechanism, allows us to draw a further, closely related parallel with the listening and analysis effort pertaining to simultaneous interpreting, where there is no one-to-one relation between the sound reaching the interpreter's ear and any single phoneme, word, or group of words pronounced by the speaker. Rather, the interpreter must be able to segment the seemingly unending flow of SL discourse, a problem-solving strategy that interpreters use to divide long stretches of discourse into chunks of manageable size. (c) Musical performers' and simultaneous interpreters' ability to generate expectations during the act of performance. The basic idea is that, in the process of aural perception of speech, the interpreter's mind generates expectations of certain verbal and semantic developments of the discourse. This process is based on subjective estimates of the range of probabilities (expectations) within which the verbal or semantic content of the discourse can further develop. The interpreter is called upon to translate the SL flow using background knowledge, while inferring other implicatures from the linguistic and semantic context of the speech. This means that in subsequent processing the interpreter either confirms or rejects his/her hypotheses by checking them against critical points of the ongoing discourse, concurrently on several levels. Along this line, one may ask what happens when the musician's expectations are not met by the printed score. A study by Sloboda [15] may provide an answer. Pianists performed a conventionally tonal piece of music in which several notes had been altered by a half or whole step so as to violate tonal expectations; participants were asked to play exactly what was written. As expected, many of the artificial alterations were erroneously corrected so as to sound tonal again; these were called proof-readers' errors. In essence, both in musical performance and in SI, these processes function more effectively in better music performers and professional interpreters. However, while plausible expectations are constantly constructed and usually facilitate performance, in some instances they may lead the performer/interpreter astray and cause errors that unveil the underlying mediating process. (d) The ability of musical performers to assess feedback on a musical performance vs. the skill of simultaneous interpreters in monitoring their target-language output. The feedback mechanism peculiar to the simultaneous interpreter involves monitoring both SL perception (and comprehension) and TL speech production. As in the studies conducted on musical performance, plenty of evidence of self-correction has been observed both in professional conference interpretations and in the experimental materials cited by many authors [16].


Self-correction testifies to the engagement of a feedback mechanism common to both domains. (e) The ability of the musician to control and switch between musical parameters simultaneously vs. the simultaneous interpreter's flexible control of attention among different tasks. As in musical performance, in the SI process the simultaneity of the efforts increases the cognitive workload, in terms of overlapping component tasks; the performer's role thus becomes more and more demanding, as s/he must be able to execute more than one task at a time, sharing attention among the tasks involved in the process. (f) Musical performers' and interpreters' shared memory effort, in terms of storing, processing, and manipulating (linguistic/musical) information. Short-term memory (STM) plays a paramount role in the processing of musical information, in terms of storage and manipulation. But how does the manipulation of musical information take place? Rather than processing information bit by bit, performers tend to search for patterns that allow them to process units of information. For this reason, both visual and auditory perceptual inputs are grouped into meaningful units, whose size is variable and depends on the level of expertise [17]. This ability to parse the musical flow into meaningful units depends on previous knowledge, thus involving long-term memory [18]. The short-term memory system is therefore depicted as small relative to long-term memory in terms of capacity. From this, it may be fair to maintain that STM, although meeting the requirements for encoding visual and auditory musical information, may sometimes be envisaged as a bottleneck in the information-processing capability of the memory system. There is evidence of an extra need for capacity while performing music: the performer's ability to deploy pattern-recognition mechanisms on the musical page, that is to say chunking, a memory mechanism that links the performer's perception to previously stored knowledge, and the ability to overcome the limitations of STM capacity by using long-term memory (LTM) for previously learned material⁴. That means that the ability to group and make sense of musical information depends on previous knowledge.
⁴ Common models of memory assume three different stages in how information is perceived, processed and stored. The first stage is assumed to be a sensory short-term memory that lasts only fractions of a second; if information is unattended at this stage, it is lost forever. In this, the deployment of attention is paramount to recall. Conversely, information that is selected enters short-term memory, where it can reside for varying amounts of time. STM contains currently relevant information for further processing and manipulation. If its content is meaningfully rehearsed and actively grouped (chunking), it can be transferred to long-term memory, where information can be retrieved even after a long time. An extension of the short-term memory idea is the working memory concept, which views memory as a sort of workbench on which items are held and operated on.

In line with music performance, simultaneous interpreting is a demanding and complex task that pushes working memory to its limits. Likewise, the tasks involved in simultaneous interpretation cannot be handled by working memory alone: in order to perform this feat, interpreters must undertake various tasks such as listening and comprehension, information retention, retrieval, production, and monitoring almost concurrently. Of these tasks, listening and comprehension are mainly dealt with by the language comprehension system, and production by the language production system. Both systems are supported by working memory in normal language processing, with the central executive and the memory system serving as a working space. Language conversion is dealt with by the central executive with the support of long-term memory. This means that short-term memory interacts with long-term memory, which frees up linguistic and extra-linguistic knowledge to create units of meaning. The simultaneous interpreter therefore proceeds analogously to the musical performer, by drawing up a probability of how events may unfold based on the knowledge of certain patterns and recurring facts. It is now a matter of linking these hypotheses and assumptions, drawing on the fact that music and SI share common cognitive functions and mechanisms. It seems that the question of whether any cross-relationships among music, language and simultaneous interpreting exist has been answered. Might music help in interpreting languages? Is there a link between interpreting abilities and musical perception, namely rhythm, and features of language such as voice quality, fluency, and rhythm of speech? A pilot study is being conducted at IULM University, whose aim is to explore whether music training methodologies, specifically those related to rhythm [19], could support and improve simultaneous interpreting (SI) students' learning process and thus enhance their performative skills, in terms of interpreting strategies and prosodic features. The study follows a twofold experimental design intended, first, to isolate the features of prosody in interpretation as a distinctive mode of language use in simultaneous interpreting, and second, to examine the cumulative effect of rhythm on how well a text is perceived in terms of comprehension, recall and fluency.

IV. THE STUDY: METHOD

A. Subjects

Twelve subjects at the beginning of the Second Cycle Degree at IULM University of Milan will be divided into two homogeneous groups (a control group, group A, and an experimental group, group B), according to data emerging from a preliminary questionnaire investigating their musical experience and musical vocation. They are all native Italian speakers and have English as their B language.
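The grouping step just described can be illustrated with a minimal, purely hypothetical sketch. The paper does not specify the questionnaire's scoring, so the musicality scores, the subject labels, and the alternating assignment rule below are illustrative assumptions of mine, not part of the study design; the sketch only shows one simple way twelve subjects might be split into two groups that are homogeneous with respect to musical experience.

```python
# Hypothetical sketch: split 12 subjects into two groups matched on a
# questionnaire-derived musicality score. Scores and the pairing rule
# are illustrative assumptions, not the study's actual instrument.
scores = {
    "S01": 7, "S02": 2, "S03": 5, "S04": 9, "S05": 4, "S06": 6,
    "S07": 1, "S08": 8, "S09": 3, "S10": 6, "S11": 2, "S12": 5,
}

# Rank subjects by score, then alternate assignment so that both groups
# cover the whole range of musical experience (a simple matching heuristic).
ranked = sorted(scores, key=scores.get)
group_a = ranked[0::2]   # control group
group_b = ranked[1::2]   # experimental group (receives the rhythm training)

print("Group A (control):     ", sorted(group_a))
print("Group B (experimental):", sorted(group_b))
```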


B. Materials

Drawing on the same design, two experiments will be run. The first will be based on the translation of a 1'30'' English speech selected from the European Parliament press archives; the second will deal with a 1'30'' rap song, which has a much more complex structure in terms of slang and speed. Rhythm should be most helpful with a rhythmically ambiguous text such as rap, which is therefore harder to translate. As the students' cognitive load is under pressure in terms of listening and production, this could help strengthen their capacity to translate regular speeches fluently afterwards. For both experiments, students will be asked to translate from their B language into their A language, along the lines established by Gile in his treatment of directionality [20]. Audio-splitting software and recording equipment will be used to isolate and segment the rhythmical patterns of both pieces.

C. Procedure

Prior to the training and testing phase, all subjects will be given a multiple-choice questionnaire requiring them to report their musical aptitude in terms of musical experience and vocation. Both the speech and the rap song will be rhythmically isolated, meaning that only the rhythmical structure (no words) will be retained; a purely illustrative sketch of this step is given after the Conclusion. Students will then be instructed to listen carefully to the isolated rhythmical pattern of the speech first (experiment 1) and of the song afterwards (experiment 2) and to recall it. They will also be asked to evaluate every single sentence carefully. Both rhythmical abstractions will be segmented into smaller units. The experimental group will undergo a three-day recall training on the rhythmical pattern before interpreting the speech; our assumption is that instilling rhythmical patterns in the minds of group B subjects might enhance their SI performance. The experiments will be carried out in an SI laboratory setting, and a short practice test on a related topic will be provided as a warm-up. The same procedure will be followed for experiment 2. All subjects will be run individually and, once all the tasks have been completed, they will be given a short debriefing session. Both the original and the students' renditions will be recorded and transcribed by means of computer-assisted transcription software, in order to process and analyze them. Both the rhythmical and the linguistic performances will be evaluated by professionals in each field (two interpreters and two musicians) and scored on a 1-to-10 scale. In particular, the evaluation criteria for SI will draw on analysis at the syntactic, semantic and pragmatic levels [21]. After data collection, a statistical analysis will be carried out in cooperation with an expert in statistics.

D. A tentative outline of expected results

What will the application of rhythmical training reveal about simultaneous interpreting? Ideally, four possible scenarios can be outlined:


no relevant differences in the performance of SI students between group A and group B; differences between musically-trained students (subjects who have experienced some form of music in their lives) and non-musically-trained students, in terms of whether music reliably helps, or is likely to improve, SI performative skills; group A outperforming group B; or, by way of contrast, group B achieving the higher performance scores and proving to be the better performer, which would be a clear indication that music supports SI and is the most desirable outcome.

V. CONCLUSION

Several research works have investigated language and music, collecting objective data that show a strong relationship between the two domains [22]. Starting from this assumption, and in order to gain insight into the skill acquisition process during SI training, our basic objective is to examine the extent to which simultaneous interpretation can benefit from a valuable musical resource, namely rhythm, in terms of interpreting strategies and prosodic features. The main contribution of this study would be a novel approach to SI education and training. On the basis of a definition of rhythm derived from the literature on theory and methodology, a new paradigm could be outlined for future research, centered on the communicative and pragmatic skills acquired during SI vocational training. In conclusion, rhythm could be used as an intermediate step, a sort of "training wheels" [23], before letting students perform simultaneous interpretation in a real conference setting.
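As anticipated in the Procedure section, the sketch below illustrates one possible way of reducing a recording to its rhythmical structure. It is a sketch of the general idea only, not the software actually used in the study: the choice of the librosa and soundfile Python libraries, the file names, and the default onset-detection parameters are all assumptions of mine.

```python
# Hypothetical sketch: strip a 1'30'' recording down to its rhythmic
# skeleton by detecting onsets and re-synthesising them as clicks.
# File names and parameters are illustrative assumptions.
import librosa
import soundfile as sf

# Load the original speech (or rap song) at its native sampling rate.
audio, sr = librosa.load("speech_90s.wav", sr=None)

# Detect onset times (in seconds) of notes/syllables in the recording.
onsets = librosa.onset.onset_detect(y=audio, sr=sr, units="time")

# Replace the signal with a click at every onset: rhythm only, no words.
rhythm_only = librosa.clicks(times=onsets, sr=sr, length=len(audio))
sf.write("speech_90s_rhythm_only.wav", rhythm_only, sr)
```

The resulting onset times could then also serve to segment the rhythmical abstraction into the smaller units mentioned above, for instance by cutting the click track at pauses longer than a chosen threshold.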

REFERENCES
[1] Mithen, S. (2005). The Singing Neanderthals: The Origins of Music, Language, Mind, and Body. London: Weidenfeld & Nicolson. Besson, M., & Schön, D. (2001). Comparison between language and music. In R. Zatorre & I. Peretz (Eds.), Annals of the New York Academy of Sciences: The Biological Foundations of Music (Vol. 930, pp. 232-256). New York: New York Academy of Sciences. Patel, A.D. (2008). Music, Language, and the Brain. Oxford: Oxford University Press. Patel, A.D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6, 674-681.

[2] Pöchhacker, F. (2004). Introducing Interpreting Studies. London/New York: Routledge. Sawyer, D. B. (2004). Fundamental Aspects of Curriculum and Assessment in Interpreter Education. Amsterdam/Philadelphia: John Benjamins.

[3] Grahn, J. (2009). Neuroscientific investigation of musical rhythm: recent advances and future challenges. Contemporary Music Review, 28(3), 251-277. Marienberg, S. (2011). Language, rhythm, grain of voice. In R. Manzotti (Ed.), Situated Aesthetics: Art Beyond the Skin (pp. 141-153). Exeter: Imprint Academic.

[4] Aiello, R. (1994). Music and language: Parallels and contrasts. In R. Aiello (Ed.), Musical Perceptions (pp. 40-63). New York: Oxford University Press. Cross, I. (2001). Music, cognition, culture and evolution. Annals of the New York Academy of Sciences, 930, 28-42. Deutsch, D. (2003). An evolutionary perspective on music. Review of N. L. Wallin, B. Merker, & S. Brown (Eds.), The Origins of Music. Contemporary Psychology, 48, 54-56. Huron, D. (2001). Is music an evolutionary adaptation? In R. Zatorre & I. Peretz (Eds.), Annals of the New York Academy of Sciences: The Biological Foundations of Music (Vol. 930, pp. 46-61). New York: New York Academy of Sciences. Lerdahl, F. (2003). The sounds of poetry viewed as music. In I. Peretz & R. J. Zatorre (Eds.), The Cognitive Neuroscience of Music. Oxford: Oxford University Press.

[5] Besson, M., & Schön, D. (2001). Comparison between language and music. In R. Zatorre & I. Peretz (Eds.), Annals of the New York Academy of Sciences: The Biological Foundations of Music (Vol. 930, pp. 232-256). New York: New York Academy of Sciences. Deutsch, D. (1991). The tritone paradox: An influence of language on music perception. Music Perception, 8, 335-347. Huron, D. (2001). Is music an evolutionary adaptation? In R. Zatorre & I. Peretz (Eds.), Annals of the New York Academy of Sciences: The Biological Foundations of Music (Vol. 930, pp. 46-61). New York: New York Academy of Sciences. Peretz, I. (2006). The nature of music [Special issue]. Cognition, 100(1).

[6] Bigand, E., Lalitte, P., & Dowling, W.J. (Eds.) (2009). Music and language: 25 years after Lerdahl & Jackendoff's GTTM [Special issue]. Music Perception, 26(3). Peretz, I., & Zatorre, R. (Eds.) (2003). The Cognitive Neuroscience of Music. Oxford: Oxford University Press.

[7] Cross, I. (2012). Music as a social and cognitive process. In P. Rebuschat, M. Rohrmeier, J. A. Hawkins, & I. Cross (Eds.), Language and Music as Cognitive Systems (pp. 315-328). Oxford: Oxford University Press. Bernstein, L. (1976). The Unanswered Question: Six Talks at Harvard. Cambridge, MA: Harvard University Press. Darwin, C. (1871). The Descent of Man, and Selection in Relation to Sex (1st ed.). London: John Murray. Fitch, W.T. (2010). The Evolution of Language. Cambridge: Cambridge University Press.

[8] Lerdahl, F., & Jackendoff, R. (1983). A Generative Theory of Tonal Music. Cambridge, MA: MIT Press. Sloboda, J.A. (2005). Exploring the Musical Mind: Cognition, Emotion, Ability, Function. Oxford: Oxford University Press.

[9] McMullen, E., & Saffran, J. R. (2004). Music and language: A developmental comparison. Music Perception, 21, 289-311. Scott, S., & Evans, S. (2010). Categorical speech representation in human superior temporal gyrus. Nature Neuroscience, 13, 1428-1432.

[10] Patel, A.D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6, 674-681. Lerdahl, F., & Jackendoff, R. (2006). The capacity for music: What is it, and what's special about it? Cognition, 100, 33-72. Bernstein, L. (1976). The Unanswered Question: Six Talks at Harvard. Cambridge, MA: Harvard University Press.

[11] Cooke, D. (1959). The Language of Music. Oxford: Oxford University Press. Cumming, N. (2000). The Sonic Self: Musical Subjectivity and Signification. Bloomington: Indiana University Press. Kivy, P. (2002). Introduction to a Philosophy of Music. Oxford: Oxford University Press. Meyer, L.B. (1956). Emotion and Meaning in Music. Chicago: University of Chicago Press. Monelle, R. (2000). The Sense of Music: Semiotic Essays. Princeton, NJ: Princeton University Press.

[12] Patel, A.D. (2008). Music, Language, and the Brain. Oxford: Oxford University Press. Peretz, I. (2006). The nature of music [Special issue]. Cognition, 100(1). Rebuschat, P., Rohrmeier, M., Hawkins, J.A., & Cross, I. (Eds.) (2012). Language and Music as Cognitive Systems. Oxford: Oxford University Press.

[13] Gabrielsson, A. (1999). Music performance. In D. Deutsch (Ed.), The Psychology of Music (2nd ed., pp. 501-602). San Diego: Academic Press.

[14] Gile, D. (1995). Basic Concepts and Models for Interpreter and Translator Training. Amsterdam: John Benjamins.

[15] Sloboda, J. A. (1974). The eye-hand span: an approach to the study of sight-reading. Psychology of Music, 2, 4-10.

[16] Gerver, D. (1975). A psychological approach to simultaneous interpretation. Meta, 20(2), 119-128. Barik, H.C. (1975). Simultaneous interpretation: qualitative and linguistic data. Language and Speech, 18(3), 272-297. Gile, D. (1995). Basic Concepts and Models for Interpreter and Translator Training. Amsterdam: John Benjamins.

[17] Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55-81.

[18] Baddeley, A.D. (1990). Human Memory: Theory and Practice. London: Lawrence Erlbaum Associates.

[19] Grahn, J. (2009). Neuroscientific investigation of musical rhythm: recent advances and future challenges. Contemporary Music Review, 28(3), 251-277. Deutsch, D. (Ed.) (1999). The Psychology of Music (2nd ed.). San Diego: Academic Press. Marienberg, S. (2011). Language, rhythm, grain of voice. In R. Manzotti (Ed.), Situated Aesthetics: Art Beyond the Skin (pp. 141-153). Exeter: Imprint Academic.

[20] Gile, D. (1995). Basic Concepts and Models for Interpreter and Translator Training. Amsterdam: John Benjamins.

[21] Pippa, S., & Russo, M. C. (2002). Aptitude for conference interpreting: A proposal for a testing methodology based on paraphrase. In G. Garzone & M. Viezzi (Eds.), Interpreting in the 21st Century: Challenges and Opportunities (pp. 247-258). Amsterdam/Philadelphia: John Benjamins.

[22] Patel, A.D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6, 674-681. Besson, M., & Schön, D. (2001). Comparison between language and music. In R. Zatorre & I. Peretz (Eds.), Annals of the New York Academy of Sciences: The Biological Foundations of Music (Vol. 930, pp. 232-256). New York: New York Academy of Sciences.

[23] Déjean Le Féal, K. (1997). Simultaneous interpretation with "training wheels". Meta, 42(4), 616-621.
