
Psycholinguistics: the use of language and speech as a window to the nature and structure of the human mind.


1- LANGUAGE ACQUISITION
How a child picks up his or her mother tongue so quickly and easily is one of the main concerns of linguistics and the psychology of language.
The emergence of speech is the point at which the least complicated data can be collected: we all cry as babies before we learn to speak, and this crying is a direct precursor to language and speech. It is a kind of language without speech: it helps the child learn how to produce vocal sounds and how to fill the lungs with air before producing them, and so it is a preparation for a life of vocal communication.
At the very beginning, crying is iconic: children use it to reflect internal states (hunger, for instance) by modifying their pitch; later on, at the age of one to two months, the crying becomes symbolic: they manifest more complex feelings, such as the need for attention.
Even at this earliest and most primitive stage of psycholinguistic development, we humans depend on our caretakers as we develop: this requires bonding and socialization. After several weeks of interaction, the child starts to coo, making soft gurgling sounds, which are reinforced by the relationship with the caring and feeding mother.
This cooing stage emerges at about two months of age and is succeeded, at about six months, by a babbling stage: the natural tendency of children of this age to burst out in strings of consonant-vowel syllable clusters, almost as a kind of vocalic play.
Some psycholinguists differentiate between:
- Marginal babbling: an early stage, similar to cooing, in which infants produce a few random consonants.
- Canonical babbling: at roughly eight months, the child's vocalizations narrow down to syllables that begin to approximate the syllables of the caretakers' language.

Still, what is acquired earliest is not the segmental phonemes (the individual consonants and vowels) that make up the mother tongue: children produce sound segments that may not belong to the language surrounding them. This is somewhat ironic, since sounds they may pronounce perfectly at this age, even ones absent from that language (an aspirated /p/, which does not occur in Spanish pico), may give them great difficulty when they try to learn them later in life.


Recently, psycholinguists have argued that what children really acquire during this stage are the suprasegmental features of their mother tongue (its musical pitch, rhythm and stress).
First Words
After all this comes the first word. A child crosses this linguistic Rubicon at about one year of age, but there is a further process between the emergence of these first utterances and the constitution of a true word.
Children often use idiomorphs, words they invent when they first catch on to the notion that certain sounds have a unique reference; for example, they may start using 'ka ka' for milk. The words they first learn are usually those that refer to everyday objects that surround them and that they can normally manipulate (egocentric speech): 'mama', 'dada', 'doggie', 'cookie'...
Once the first words are acquired, there is an exponential growth in vocabulary development, which begins to taper off at about six years of age.
The Birth of Grammar
Children use the single words they know as statements or requests, as skeletal sentences; this is referred to as the holophrastic stage, in which they mirror the short utterances adults use to economize language ('Milk?' = 'Do you have any milk?').
This is the beginning of the acquisition of grammar, the most studied area of developmental linguistics; this focus can be related to the rise of Transformational-Generative (TG) grammar, the most influential school of linguistics of the past four decades, which has always been concerned most centrally with the study of sentences. Another reason is that the data are easy to obtain, discrete, and can be collected while caring for the child.
Roger Brown demonstrated that children progress through different stages of
grammatical development, measured largely by the average number of words per
utterance. Children begin to create sentences after the holophrastic stage, first with two
words, then with more; studies show that this two-word stage demonstrates a
grammatical precocity:

- Children do not rotate words between first and second position. Pivots, for example, are used initially or finally, and then the other words fill the slot after or before those pivots.
- The order of the words in these two-word utterances tends to follow the normal word order of the expanded version used by adults.
- It is rare for youngsters to repeat the same word twice in their little sentences; they make each word count.

An indication of how much children have acquired by the age of two is to contrast examples of their grammar with the output collected from one of the most prominent experiments to teach a human language to a chimp; chimpanzees are taught American Sign Language (ASL) since, due to their anatomy, they can't produce human speech sounds.
- Child: it ball, see ball, get doll, want cookie...
- Chimp: eat drink eat drink, grape eat Nim eat, banana me Nim me...

The child displays greater lexical diversity and a logical syntax, elegant and simple, with little repetition, while the chimp is confined to a small stock of words that it almost enumerates, constantly repeating its own name and the food items.
To sum up, children are sensitive to the syntax of their mother tongue and seem to follow a simple set of phrase structure rules, grammatical rules which show that a series of words forms a structured phrase or clause and is not simply a list of unconnected items, while chimps don't seem to follow any structural pattern.
Evidence for Innateness
Most psycholinguists hold that the acquisition of human language is not based solely on the external influence of the child's environment; there must be some innateness. Chomsky has argued, from observations of cultures that discourage children from speaking to adults, that just as humans have some kind of genetically determined ability to learn to stand upright or to walk, so too do they possess a LAD, a Language Acquisition Device (now called Universal Grammar, UG); the capacity to produce and comprehend language is in our DNA.

Childish Creativity
Although it is clear that a child is influenced by his or her environment, there is an independent and individual factor that makes children come up with all kinds of words and expressions they have never heard in their environments: their creativity; for example, 'there's yesbody at the door'.
From two to four, it is common for children to use regular plurals for irregular ones ('mans', 'knifes', 'sheeps'), regular past-tense endings for irregular verbs ('goed', 'singed', 'eated'), and even double tensing.
This kind of tuning, to use a term that describes one type of cognitive processing, usually shows that the child has progressed to a slightly more advanced stage of language development.
Overgeneralizations are often referred to as false analogies, since it is not the child who is committing an error but the language that has a non-symmetrical pattern. This process of creative construction is further evidence of grammar acquisition; children only make mistakes because of those irregularities, which must be taught.
They may also overgeneralize some syntactic patterns, saying something like 'There Carlos is!' by deriving that structure from the correct sentences 'There is Carlos' and 'There he is!'
Any error a child commits is not due to a hearing problem or a slip of the eye (since they still can't read); it is an assumption about the patterns of their mother tongue, and those errors indicate they are sensitive to the grammatical characteristics of the language they are learning.
Stages of linguistic development
It is difficult to present a concise summary of all of the data about the acquisition of English as a mother tongue, and it is equally difficult if we investigate children who receive a bilingual education; furthermore, L1A research on later childhood is equally important for knowing what kinds of complex linguistic structures children acquire and, probably, something about their development from child to teenager. For example, the emergence of a foreign accent in the speech of bilingual children at about the age of 12 suggests to some psycholinguists that there exists a biologically determined critical period for L1A.

Roger Brown discovered in one of his studies that there was a glaring difference in the rate of language learning between children from different cultures; one of them was about a year ahead of the others, linguistically speaking, which is probably explained by a biological disposition better suited to language learning.
Still, all children, no matter how rapidly they learn the language, proceed through the same learning stages for any particular linguistic structure. Some of Brown's colleagues identified three distinct stages when investigating how children acquire WH-questions:
- Stage I: a WH word is used but no auxiliary verb is employed; 'What Daddy doing?'
- Stage II: WH word and auxiliary verb after the subject; 'Where she will go?'
- Stage III: WH word and auxiliary verb before the subject; 'Where will she go?'

Brown also investigated the developmental stages of English negatives, dividing children's grammatical development into periods based on Mean Length of Utterance (MLU) and showing that, as they acquired their mother tongue, their MLUs grew from about two words to about four words (a small computational sketch of MLU follows the list below). Stages of negative sentences:
- Stage I: NO is placed at the start of the sentence; 'No the sun shining'
- Stage II: NO appears inside the sentence, but with no auxiliary or BE verb; 'There no rabbits'
- Stage III: NOT is used with the appropriate contraction of the auxiliary or BE verb; 'It's not raining'
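As a quick illustration of the MLU yardstick mentioned above, here is a minimal Python sketch; it simply averages the number of words per utterance (Brown's actual measure counted morphemes, so this is a simplification), and the sample utterances are invented child-style examples.

```python
# Minimal MLU sketch: average number of words per utterance.
# Note: Brown's original MLU counted morphemes, not words; this is a simplification.
def mean_length_of_utterance(utterances):
    """Return the average number of words per utterance (0.0 for an empty sample)."""
    if not utterances:
        return 0.0
    word_counts = [len(u.split()) for u in utterances]
    return sum(word_counts) / len(word_counts)

# Hypothetical early-stage sample:
sample = ["No the sun shining", "want cookie", "see ball", "it ball"]
print(mean_length_of_utterance(sample))  # 2.5
```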

There may be arguments over the exact number of stages for a given structure, but what researchers have demonstrated is that children, adolescents and adults learning a foreign language differ in their rate of language acquisition (the older, the harder) but do not differ in the language learning stages.
2- LANGUAGE PRODUCTION
The process of speech production only becomes visible when it breaks down, through disability or a slip of the tongue. One of the most influential psycholinguistic models of speech production was developed by Levelt, who views it as a linear progression of four successive stages: (I) conceptualization, (II) formulation, (III) articulation, and (IV) self-monitoring.

I Conceptualization
It is difficult to say what sparks speech. However, David McNeill has gone on record with an interesting mentalistic account of how speech is first conceptualized in the human mind. His theory is that primitive linguistic concepts are formed by two concurrent and parallel modes of thought:
- Syntactic thinking: spawns the sequence of words we typically think of when we talk about how language is initiated. It is segmented and linear, and creates the strings of syllables, words, phrases, and sentences that together make up speech.
- Imagistic thinking: creates a more holistic and visual mode of communication. It is global and synthetic, and tends to develop the gestures we naturally use to illustrate our conversations.

McNeill's claim that syntactic and imagistic thought collaborate to conceptualize conversation is quite convincingly demonstrated by the way in which speech utterances and ordinary gestures seem to be tied and timed together ('There's your briefcase').
Although we know little about this stage, or about whether gestures are related to the birth of speech, speech doesn't start from nothing, and we have evidence to help us understand Levelt's second stage:
II Formulation
Lashley was able to demonstrate in one of his essays many of his discoveries and ideas about speech production. First, he showed how slips of the tongue (or of the computer keyboard) provide insights into our understanding of how speech is formulated. Second, he illustrated the power of priming in guiding the direction of speech production and comprehension.
Slips of the tongue
Unlike stammering or aphasia, slips of the tongue or typographical mistakes are normal, and, once we (or someone else) spot the mistake, we can backtrack and correct it. They allow psycholinguists to peek into the production process because we know what the speaker wanted to say, but the unintentional mistake momentarily freezes the production process and catches the linguistic mechanism in one instance of production.
Spoonerisms are slips of the tongue in which an actual word or phrase is created, often with a humorous twist on the intended meaning. Freud hypothesized that they help to reveal the unconscious mind, but this view has largely been set aside because of the danger of becoming too mentalistic and because spoonerisms reveal important linguistic patterns in their own right. Still, sounds and words are not thrown together arbitrarily; there is a clear, linear and hierarchical order in which we put them in our mouths.
There is data proving that the units of speech, such as the phoneme and the morpheme, are psychologically real. This means mistakes do not pop up just anywhere; they occur at predictable points and follow predictable patterns, as if they were meant to fill some slot.
We may produce slips of the tongue when sounds, or phonemes, have similar characteristics: we may say /l/ instead of /r/ ('leading list' instead of 'reading list') or /p/ instead of /b/ ('pig and fat' instead of 'big and fat'), since they are pronounced in the same area of the mouth. In the second example, the slip into the voiceless /p/ instead of the voiced /b/ may be explained by anticipation of the voiceless /f/ in the next word, or perhaps even by the semantic association between the words 'pig' and 'fat'. It is more difficult (though possible) for vowels to substitute for consonants and vice versa, and we may come up with invented words that still follow the sound patterns of English.
Slips of the tongue reveal that we are conditioned not only by the sound system of the language we are speaking but also by its morphology, and that this morphology is psychologically real. For example, we may say 'New Yorkan' instead of 'New Yorker' if we reason that those who live in America are called Americans; we may also say 'words of rule formation' instead of 'rules of word formation', in the same way that we know an apple pie is a pie made of apples. We simply follow the logic of the language.
We also know that when a native speaker errs ('childs' instead of 'children') he will correct himself, while a learner won't, and if he does, he will probably correct it incorrectly ('childrens'). This shows that speakers organize their utterances into smaller groups of words and fill those groups with the appropriate lexical items to express the intended meaning.

The planning of higher levels of speech


How we choose to formulate what we are about to write or speak is influenced by factors such as politeness or social appropriateness; these are central to the concerns of pragmatics (the study of what people mean when they use language in normal social interaction) and sociolinguistics (the study of why we say what we say, to whom, when and where). Still, there are many other choices to be considered: which word should we use? Should we express an idea as an affirmative or a negative, given both choices? For example, to express that something is not important we could choose to say 'unimportant', but since negative prefixes are a bit more complicated to formulate, we may say 'trivial', since all we have to remember is a word and not which prefix we must use. We may also take into account how we speak depending on what we want to express: 'It's not important' vs. 'It isn't important'.
III Articulation
Articulation is a necessary step in the production of speech: if what we have conceptualized and formulated isn't articulated, it is as if it had never even been thought.
Human organs have evolved not just to serve their primary purposes better (lungs for breathing, teeth for chewing) but also to produce certain sounds. The psycholinguist Eric Lenneberg showed that the majority of these organs evolved primarily to serve essential biological functions, but some of them have adopted secondary functions connected to the enhancement of speech articulation. A nice example of this is the larynx, which houses our vocal cords, and where food is not supposed to get stuck (hence coughing).
The larynx, where the Adam's apple is located, is one of the clearest examples of what differentiates us from other animals: thanks to it we can make the sounds of human language, but it also makes us more prone to choking on our food. It allows us not only to create those sounds and to make very fine distinctions between some of them ('look' vs. 'Luke'); it also increases resonance. All in all, the linguistic advantages outweigh the physiological disadvantages, since language has been vital to our evolutionary history.

Sounds are articulated in only a roughly linear way: the lips, the larynx and the lungs may be working at the same time, so coarticulation is the norm, not the exception. Sounds do not emerge as segments strung together sequentially; they are mixed and melded, each sound shaping its neighbours while concurrently being shaped by them. We still don't know how the lips and tongue know where to place themselves to produce particular sounds, but we have gained a little understanding of how the brain is programmed to articulate them thanks to Positron Emission Tomography (PET).
IV Self-monitoring
At this final stage we have direct evidence of what happens when people compose speech. All of us commit linguistic blunders during this process; it is only human. These errors are conditioned by the degree of stress we are under or the beverages we have imbibed.
S. Pit Corder, a pioneer in the field of SLA, classified those slips of the tongue as mistakes. Errors, on the other hand, are committed only by non-native speakers (NNSs); NNSs don't immediately recognize the deviancy and, even if told, they do not replace it with the correct form. The fact that natives commit only mistakes and not errors reveals three insights into the production process:
- It demonstrates that speakers/writers are constantly self-editing; production is a self-regulating process with a feedback loop to ensure that each previous stage of output was accurate.
- It suggests that speakers are intuitively sensitive to which stage of the production process went awry when a mistake was made, and readjust it.
- The fact that native speakers can monitor and quickly correct themselves proves Chomsky's contention that there is a distinction between performance (the words we actually say or write, our use of language) and competence (our tacit, intuitive knowledge of the languages we have mastered).

Along with mistakes such as slips of the tongue, psycholinguists have also relied on the hesitations which punctuate our unplanned, spoken discourse to gain insights into the ways we monitor the language we produce. Hesitations may seem to indicate a lack of fluency, but they appear even in articulate speakers of the language, since they are not random but self-governed, and they never violate linguistic constraints.

One final point about self-monitoring: this proves that people do not just communicate with other people, but with themselves as well. The communication process is a two-way system involving both output and the concurrent editing and modulation of that output.
To sum this chapter up, it is only when this system of effortless language flow breaks down that we appreciate its intricacy and begin to glean significant psycholinguistic insights.
3- COMPREHENSION
Research shows that in most situations, listeners and readers use a great deal of information other than the actual language being produced to help them decipher the linguistic symbols they hear or see.
The comprehension of sounds
When we listen to a sentence from which a sound is missing, we insert an appropriate sound to create a suitable word, often without noticing that anything was absent at all; this is called the phoneme restoration effect. Under these conditions, listeners do not accurately record what they hear; they report what they expected to hear from the context, even if that means adding a sound that was never actually spoken at the beginning of the target word. Several observations follow from this:
- People don't necessarily hear each of the words spoken to them; comprehension is not passive recording.
- Comprehension is strongly influenced by the slightest of changes in the discourse the listener is attending to.
- Comprehension is not a simple item-by-item analysis of words; we don't understand each word's meaning in isolation, we seek consistency even if that means adding a sound or word that was never spoken.

There are slight differences between sounds such as the /p/ in 'pool' and the /p/ in 'spring' which native speakers of English claim to perceive as identical, though they are not. Still, we register the differences between them unconsciously thanks to Voice Onset Time (VOT): our brain processes the difference between the phonemes, even if it is just a puff of air starting one-twentieth of a second later, to help us identify which phoneme has been spoken. The same subtle difference separates the initial /b/ and /p/ in 'Benny' and 'Penny', and this remarkable ability has been shown to be innate.
But when native speakers hear a sound, they don't classify it as 50% voiced and 50% voiceless, or whatever the percentage may be; they tend to classify it as one sound or the other. This is called categorical perception. It seems to qualify as one aspect of UG, and thanks to these VOT experiments we know that UG exists and that at least part of human language is modular: some parts of language reside in the mind as an independent system.
Although categorical perception of VOT is modular, it is influenced by the linguistic environment; English divides the VOT spectrum into two sets of sounds, for example the voiced and voiceless pairs of consonants /b/ /d/ /g/ vs. /p/ /t/ /k/. But not every language draws the same VOT distinctions, and when we are exposed to another language, or learning it, we use an innate ability to hear speech sounds categorically in order to acquire the appropriate VOT settings. Learning to comprehend is a merger of both nature and nurture.
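To make categorical perception concrete, here is a toy Python sketch (not from the text): it assumes a single category boundary on the VOT continuum (the 25 ms figure used here for English bilabial stops is illustrative) and maps any continuous VOT value onto exactly one category, the way listeners report hearing either /b/ or /p/ and nothing in between.

```python
# Toy sketch of categorical perception of VOT.
# BOUNDARY_MS is an illustrative value, not an empirically fixed constant.
BOUNDARY_MS = 25

def classify_bilabial_stop(vot_ms: float) -> str:
    """Map a continuous VOT value (in milliseconds) onto a discrete category."""
    return "/b/" if vot_ms < BOUNDARY_MS else "/p/"

for vot in (5, 20, 30, 60):
    print(f"VOT = {vot:2d} ms -> {classify_bilabial_stop(vot)}")
# The output behaves like a step function: no 'half /b/, half /p/' percepts.
```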
The comprehension of words
Comprehension of words is more complex than the processing of phonemes since there
are more words than sounds and they convey meanings.
Psycholinguists have adopted a model of cognition which argues that we use separate but simultaneous and parallel processes when we try to understand spoken or written language; it is called Parallel Distributed Processing (PDP).
One account of how we access the words stored in our mental lexicon is the logogen model of comprehension: when you hear or see a word, you stimulate a lexical detection device for that word. Logogens work in parallel to create comprehension; high-frequency words (like the word 'word') activate much more rapidly, while low-frequency words (like the word 'logogen') take longer to be incorporated into our system of understanding.
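The frequency effect just described can be sketched as a toy logogen in Python; the thresholds, the evidence loop and the numbers are invented for illustration and are not taken from the logogen literature.

```python
# Toy logogen sketch: each logogen accumulates evidence and 'fires' (the word is
# recognized) once its threshold is reached; frequent words get lower thresholds.
class Logogen:
    def __init__(self, word: str, frequency: float):
        self.word = word
        self.threshold = 10.0 / frequency  # illustrative: frequent words need less evidence
        self.activation = 0.0

    def receive(self, evidence: float) -> bool:
        """Add incoming evidence; return True once the logogen fires."""
        self.activation += evidence
        return self.activation >= self.threshold

def steps_to_recognize(logogen: Logogen) -> int:
    """Count how many equal-sized units of evidence are needed for recognition."""
    for step in range(1, 100):
        if logogen.receive(1.0):
            return step
    return -1

print(steps_to_recognize(Logogen("word", frequency=5.0)))     # 2  (high frequency)
print(steps_to_recognize(Logogen("logogen", frequency=0.5)))  # 20 (low frequency)
```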
Psycholinguists can account for the comprehension of words in several ways:
- In terms of their spelling (homophones: 'threw' / 'through')
- On the basis of their pronunciation (homographs: 'lead' the verb vs. 'lead' the noun)
- In terms of the grammatical functions that the word might fill ('smell' can be used as a noun or a verb, while 'hear' has to become 'hearing' to turn into a noun)
- Via the associations triggered by a word's meaning, linked through PDP ('leaf' => trees, pages in a book, similar words like 'leave'...)

A useful example of a PDP approach to the comprehension of words is the Tip-Of-the-Tongue (TOT) phenomenon, which occurs when we know a word but can't recall it, it's on the tip of our tongue, yet we can instantly recognize it if it is presented to us. Still, some aspects or fragments of the word are remembered (perhaps the first letters or the first syllable); this is called the bathtub effect: the middle of the word is submerged, and the remembered beginning lets us look the word up in an alphabetically ordered dictionary. And although we cannot reproduce the target word, we can recognize the similar words that come to mind as ones we are not trying to recall.
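As a rough illustration of how the remembered 'ends of the bathtub' can narrow a lexical search, here is a small Python sketch; the candidate word list and the matching rule (known beginning plus known ending) are invented for the example, not drawn from the TOT literature.

```python
# Toy bathtub-effect search: we remember how the elusive word begins and ends,
# but not its middle, and use those fragments to filter candidates.
CANDIDATES = ["serendipity", "serenity", "solidarity", "sensitivity"]

def tot_search(prefix: str, suffix: str, words=CANDIDATES):
    """Return the candidate words matching the remembered beginning and ending."""
    return [w for w in words if w.startswith(prefix) and w.endswith(suffix)]

print(tot_search("ser", "ity"))  # ['serendipity', 'serenity']
```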
When we notice similarities between those wrong words and our TOT term, our schematic knowledge of language, based on our life experiences, is assisting the lexical search process.
Everything mentioned here about TOT demonstrates the effects of the spreading activation networks that form part of PDP.
These processes have led to other conclusions:
- We also store words according to the rhyme of their final syllables.
- Comprehension is not an absolute state, either fully achieved or completely left in the dark; it is a dynamic, growing, and active process of spreading activation networks.
- We need both context and meaning to decode the words we hear or read.

The comprehension of sentences


Psycholinguists first began to examine the comprehension of sentences by basing their research on the model of sentence grammar proposed by Chomsky. This model claimed that sentences were generated from a phrase structure skeleton that gets fleshed out into everyday utterances by a series of transformational rules (Transformational-Generative (TG) grammar).


Using this model, psycholinguists started comparing the number of transformations
used to derive sentences and the relative difficulty native speakers experienced in
comprehending them. For example:
a) The dog is chasing the cat
b) Isn't the cat being chased by the dog?
From the standpoint of TG grammar, b) is more complex than a) (which corresponds to the underlying phrase structure sentence), since b) has undergone three transformational changes: negative, passive and interrogative. Therefore, kernel sentences are easier to comprehend than complex sentences.
Psycholinguists who first experimented with this hypothesis called it the Derivational Theory of Complexity (DTC), because difficulty in comprehension was derived from the number of transformations added to the kernel sentence. In one experiment, subjects heard a series of sentences, starting with the kernel and adding a further transformational change with each new sentence, and were then asked to remember a list of unrelated words; subjects recalled one fewer word from the list with each successive sentence, supposedly because of the additional transformations. This suggests that our brain has to work harder to comprehend more complex sentences; however, sentences that had undergone only a negative change seemed to be more difficult to comprehend than sentences that had undergone more changes, which showed that DTC was not as insightful as hoped. Later on, when the list of words was presented before the sentence was heard, researchers discovered that the source of difficulty was not syntax but semantics, especially in negative sentences.
Thus, the DTC model had to be revised in the 1960s; by that time, Chomsky had already made several changes to his TG grammar, giving a prominent role to semantics. Nowadays, transformational rules are no longer considered psycholinguistically relevant.
Then, if complexity doesn't affect comprehension, what does? Ambiguity seems to slow down the process, as demonstrated by several studies that use phoneme monitoring tasks, a method psycholinguists use to tap the process of sentence comprehension. These tasks have led to some further conclusions:


- Sentences which contain more complex information in the clause preceding the target phoneme create a greater lag in reaction time.
- Sentences seem to be understood word by word, in left-to-right order; each new word adds meaning and may help us anticipate the next word(s). This 'spreading activation' led to Augmented Transition Networks (ATNs), used to predict the next word or words; these proved less insightful than the PDP model, which was more robust.

A natural comprehension strategy is 'garden-pathing', in which we project the end of a sentence according to our linguistic expectations; we are unaware of doing so until the sentence is interrupted and we hear something different from what we were expecting, which causes a momentary block and demands more effort to grasp the sentence's meaning.
The comprehension of texts
Except for mnemonists (people who have a rare ability to recall texts that they have heard or read), our memory is rather poor for structure but quite accurate for content; we usually remember the basic content but not the grammar of the sentences. If we are given a context or a title for a text, it is easier for us to remember its content, and even some of its wording, much more accurately; top-down information is useful in the comprehension of larger units of language because it helps activate mental associations.
Comprehension concluded
VOT perception is hard-wired into the human brain, and even young children seem to classify very small differences in VOT into one phonetic category or another; this helps them pick up the significant distinctions of their mother tongue and ignore the insignificant ones.
Access to words is facilitated by logogens. Our knowledge of and about words is extensive: the meaning of a word triggers a spreading activation of associations which helps us understand it in many different contexts and may bring other related words to mind.
The grammatical structure of a sentence might initially influence the garden path we choose, but the greatest influence on sentence comprehension is meaning. It is easier to remember the 'what' than the 'how', and if we have a context it is easier to catch the full meaning of the text.

Only a complex model of comprehension like the PDP can account for the way we
receive linguistic messages every day, but we need to explore further and investigate the
innate mechanisms for language that are wired into the human mind.
4- DISSOLUTION: LANGUAGE LOSS
Neurolinguistics and language loss
The evidence from aphasia
Neurolinguistics, an offspring of psycholinguistics, investigates how the human brain creates and processes speech and language. When speaking about the human brain, we have to avoid some popular mistakes: the brain is not simply divided into two separate hemispheres; the two sides are connected by millions of association pathways which link the left and right hemispheres together, so the two of them share information.
The function of the corpus callosum, the largest sheath of association pathways connecting the two hemispheres, is often misunderstood, unknown or even ignored, which is why the popular claim that there are 'left-brained' and 'right-brained' people is nowadays treated as fact. Misconceptions like these about neurology lead to misconceptions about the relationship between the brain and mental states or linguistic structures. Sadly, we learn the most when the brain is damaged.
If we observe the left side of the brain, we will first find two strips:
- The motor cortex: the primary area of the brain for the initiation of all voluntary muscular movement.
- The sensory cortex: the primary location for processing all sensations to the brain from the body.

We are most concerned with where the control of the speech organs and the sensation of speech sounds lie within these two strips. Here's one of the oddities of our brain: the left side controls the right side of the body and vice versa, and the top of the brain controls the lower parts of the body (and vice versa as well). So, since it is the lower part of the brain that controls our head, that is the part we are most interested in:


- Broca's area: located in the bottom portion of the motor cortex, in the area slightly further forward. Named after Paul Broca, who also helped coin the term aphasia, the loss of speech or language due to brain damage. It takes care of speech production.
- Wernicke's area: located just behind Broca's area, at the lower portion of the sensory cortex. Named after Broca's Austrian contemporary, Karl Wernicke. It takes care of language comprehension.

These discoveries helped demonstrate that the human brain is not equipotential, as animal brains are, with every area as important as any other. If we suffer a stroke or some other damage to the brain, we may experience some kind of loss; luckily, the brain itself cannot feel pain, it is the tissue around it that can make us feel pain. Some injuries can affect the two language centers of the brain:
- Broca's aphasia: characterized by speech and writing which are slow and, in severe cases, completely inhibited. Although automatic speech and function words can remain almost unaffected, the production of key words such as subjects, verbs, and objects is usually inaccurate and hesitant. Nevertheless, comprehension is relatively spared.
- Wernicke's aphasia: speech production and writing are pretty much intact, but patients experience a great deal of trouble processing linguistic input. Although speech flows fluently, patients tend to ramble incoherently because of their problems processing conversational feedback.

In most cases, both of these types of aphasia occur only if those areas are damaged in the left side of the brain; if the corresponding areas on the right side are damaged, we do not get language problems but problems of other kinds, such as recalling faces or reading maps.
The surgical evidence
Recently, the sub-field of aphasiology, the study of aphasia or loss of speech, has flourished; two kinds of surgical operation have a particular bearing on questions of language dissolution:


- Hemispherectomy: consists of opening up the affected side of the skull and removing almost the entire left or right hemisphere. This is rarely performed nowadays on people older than about ten, since by then they have lost their neuroplasticity (their cognitive and linguistic functions have already been localized to specific areas). If a young brain suffers a traumatic injury, the child does not undergo the extensive functional loss that an adult does, because the primary areas of cognitive and linguistic functioning have not yet undergone canalization (become established as fixed neuronal networks). This doesn't mean that children can't have aphasia; they can, but it is less common.

- Split-brain operation: first used to treat specific and rare cases of epilepsy caused by discharges in the motor cortex of one hemisphere that are instantly transmitted to the corresponding cortex of the other hemisphere via the corpus callosum. To stop this, the surgeon cuts the corpus callosum from front to back, severing the association pathways.
This surgical operation has some unique consequences: daily functions are unaffected, but under experimental conditions some linguistic processing constraints emerge. When a word such as HEART is flashed for a fraction of a second, the patient reports only the right-hand part of the word, since that part goes to the left hemisphere; the left-hand part of the word goes to the right hemisphere, where its lexical information gets stuck and cannot reach the left side of the brain because there is no longer a corpus callosum. But the left side of the brain does not monopolize all language processing: there are secondary or tertiary linguistic areas even in the right hemisphere, and patients show an unconscious awareness of the existence of the HE part of the word HEART. This is why we cannot simply assume a left vs. right duality.

To sum up, the production and comprehension of speech are located in Broca's and Wernicke's areas respectively, but this localization is not fully completed until about the age of ten. We must also take into account that language dissolution shows that human linguistic ability does not reside only in those two small areas of the brain.


Speech and language disorders

Dissolution from non-damaged brains


Stuttering, also referred to as stammering, is one of the most common articulation problems encountered by speech pathologists, at least in most English-speaking countries. Stuttering also reveals psycholinguistic information about how speech is organized and planned; stuttering is not random: it usually occurs on the initial word of a clause, the first syllable of a word, the initial consonant of a syllable, and on stop consonants. There are two controversial and extreme theories about it:
- The Johnson theory: the extreme behavioural view which claims that stuttering originates from traumatic events in early childhood.
- The Orton/Travis theory: states that stammering is caused by the absence of unambiguous lateralization of speech to the left hemisphere, so that no primary language center is established.

Both theories note that boys are more affected than girls (girls tend to be ahead of boys linguistically, most early teachers are women, and boys are criticized more), so the cause is not just neurological but also environmental. Stuttering is not viewed the same way by all peoples around the world; some see it as a disorder, others as something normal. So we know that the disability lies not only in the mouth of the speaker but is also framed in the ears of the listeners.
Another disability is autism. There are several types of autism, and it is not just a language impairment; autistic children also show a disregard for human interaction and avoid eye and face contact. This failure to bond with people, coupled with the linguistic consequences of this constraint, creates a behavioural pathology severe enough to have been labelled a psychosis (often referred to as childhood schizophrenia).
Language loss arising from inherited disorders
Genetics should be used as a court of last resort, not as the first line of defense, but recent work in psycholinguistics has uncovered certain rare examples of how language dissolution appears to be inherited. These inherited disabilities do not attack language directly; the loss of language is a consequence of the global loss of all higher cognitive functions. The least rare of these cases is Down's syndrome, a disorder that occurs in about one of every 600 births and leaves the child moderately to severely impaired in all cognitive functions. The degree of language disability is directly proportionate to the amount of cognitive damage, and some severe cases may never acquire their mother tongue. It can also lead to poor articulation while comprehension remains relatively unaffected, in a manner reminiscent of Broca's aphasia.
Language loss through aging
As we get older, we go through a slow language loss. The most conspicuous faculty
eroded by the aging process is memory, and since language represents a major
component of Long Term Memory (LTM), it is inevitable that linguistic performance
is affected by any form of significant deficit in LTM.
Older people claim to have problems recalling certain names or words, but this is not a problem of age itself; it is a matter of limited access to LTM: the more you have to remember, the easier it is to forget.
Aged people retain LTM that is as good as that of young people. The memory constraints that may become evident as we get older seem to be due primarily to Short Term Memory (STM) constraints. No definitive research has been undertaken on the effects of the aging process on specific aspects of language, but the little evidence just reviewed on the impact of aging on lexical recall indicates that language remains robust.
Alzheimer's disease patients suffer premature deterioration of the brain, and this loss affects every aspect of a person's performance. Research shows that speech and language are not affected in isolation: linguistic functions gradually disintegrate together with those of emotion, cognition and personality. A recent study shows that those who use more complex sentences have a better chance of not succumbing to AD; the same holds for almost every aspect of human behaviour: the more complex the endeavour, the greater the degree of affliction from the disease.
Concluding summary
The everyday use of language without disorders in acquisition, production or
comprehension is a wonder of miraculous proportions.


Language is not localized in just one area of the brain; it is controlled by both hemispheres. And a puzzle remains to be solved: for neurolinguists, speech and language are independent from other aspects of behaviour, while for psycholinguists language is part and parcel of cognition and perception. Why?
Thanks to research on examples of dissolution that do not seem to be caused by brain damage (autism, aging, AD...), we discover that the data on speech and hearing disorders do not differ significantly from the information we have on normal development. So, whatever the disorder, language seems to be closely related to other aspects of human behaviour, particularly to cognition.
To sum up, the disruptions in the environment or in the genetic code that bring about speech and language disabilities never seem to single out language: they affect linguistic communication because they afflict cognition and perception as a whole. This is why psycholinguistics is drawn by language into a more general inquiry into the workings of the human mind.

