Still, what children acquire earliest is not the segmental phonemes (the individual consonants and vowels) that make up their mother tongue: they produce some speech sounds that may not belong to the language they are surrounded by. This is somewhat ironic, since at this age they may pronounce those sounds perfectly (aspirated /p/ in /pico/;
- Children do not rotate words between first and second position. Pivots, for example, are used initially or finally, and then the other words fill the slot after or before those pivots.
- The order of the words in these two-word utterances tends to follow the normal word order of the expanded version used by adults.
- It is rare for youngsters to repeat the same word twice in their little sentences; they make each word count.
Chimp: eat drink eat drink, grape eat Nim eat, banana me Nim me...
The child displays greater lexical diversity with a logical syntax, elegant and simple, and little repetition, while the chimp is confined to a small stock of words that he almost enumerates, repeating his name and the food constantly.
To sum up, children are aware of the syntax/grammar of their mother tongue and seem to follow a simple set of phrase structure rules, grammatical rules which demonstrate that a series of words forms a structured phrase or clause and is not simply a list of unconnected items, while chimps don't seem to follow any structural pattern.
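The pivot pattern described earlier can be sketched as a tiny generator. This is a hypothetical illustration: the pivot and open-class words below are invented examples, not attested child data.

```python
# Pivot grammar sketch: a pivot word keeps a fixed slot (here, first
# position) and open-class words rotate through the other slot, so the
# same word order is preserved across utterances, as in child speech.
pivots = ["more", "no"]                  # pivot words, fixed in first position
open_words = ["milk", "doggie", "shoe"]  # open-class words, fill the free slot

def two_word_utterances(pivots, open_words):
    """Generate pivot-first two-word utterances."""
    return [f"{p} {w}" for p in pivots for w in open_words]

print(two_word_utterances(pivots, open_words))
# ['more milk', 'more doggie', 'more shoe', 'no milk', 'no doggie', 'no shoe']
```

Note how, unlike the chimp's output, no word is repeated within an utterance and the pivot never wanders out of its slot.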
Evidence for Innateness
Most psycholinguists hold that the acquisition of human language is not based solely on the external influence of the child's environment; there must be some innateness.
Chomsky has argued, from observing some cultures that discourage children from speaking to adults, that just as humans have some kind of genetically determined ability to learn to stand upright or to walk, so too do they possess a LAD, a Language Acquisition Device (now called Universal Grammar, UG): the capacity to produce and comprehend language is in our DNA.
Childish Creativity
Although it is clear that a child is influenced by his environment, there is an independent and individual factor that makes children come up with all kinds of words and expressions they have never heard in their environments: their creativity; for example, "there's yesbody at the door".
From two to four, it is common to use regular plurals for irregular ones (mans, knifes, sheeps), regular past-tense endings for irregular verbs (goed, singed, eated), and even double tensing.
This kind of tuning (to use a term that describes one type of cognitive processing) usually shows that the child has progressed to a slightly more advanced stage of language development.
Overgeneralizations are often referred to as false analogies, since it is not the child committing an error, but the language that has a non-symmetrical pattern. This process of creative construction is further evidence that they are acquiring grammar; they just make mistakes because of those irregularities, which must be taught.
They may also overgeneralize some syntax patterns, saying something like "There Carlos is!" by deriving that structure from the correct sentences "There is Carlos" and "There he is!"
Any error a child commits is not due to a hearing problem or a slip of the eye (since they still can't read); it is an assumption about the patterns of their mother tongue, and those errors indicate they are sensitive to the grammatical characteristics of the language they are learning.
Stages of linguistic development
It is difficult to present a concise summary of all of the data about the acquisition of English as a mother tongue, and the same is true if we investigate those who receive a bilingual education; furthermore, L1A research in the later years of childhood is equally important for knowing what kinds of complex linguistic structures children acquire and even, probably, how they evolve from child to teenager. For example, the emergence of a foreign accent in the speech of bilingual children at about the age of 12 suggests to some psycholinguists that there exists a critical period for L1A which is biologically determined.
Roger Brown discovered in one of his studies that there was a glaring difference in the rate of language learning between children from different cultures; one of them was about a year ahead of the others, linguistically speaking, which is probably explained by a biological disposition better suited to this field.
Still, all children, no matter how rapidly they learn the language, proceed through the same learning stages for any particular linguistic structure. Some of Brown's colleagues identified three distinct stages when investigating how children acquire WH questions:
- Stage I: use of WH word at the beginning of the sentence, with no auxiliary verb; Where she go?
- Stage II: use of WH word and auxiliary verb after the subject; Where she will go?
- Stage III: use of WH word and auxiliary verb before the subject; Where will she go?
Brown also investigated the developmental stages of English negatives, and divided the children's grammatical development into periods of Mean Length of Utterance (MLU), showing that as they acquired their mother tongue, their MLUs grew from about 2 words to about 4 words. Stages of negative sentences:
- Stage I: use of NO outside the sentence, usually at the beginning; No the sun shining
- Stage II: use of NO inside the sentence but with no auxiliary or BE verb; There no rabbits
- Stage III: use of NOT with the appropriate contraction of the auxiliary or BE; It's not raining
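Brown's MLU measure lends itself to a simple sketch. A hedge: Brown actually counted morphemes per utterance, so the word count used here, following the summary above, is only an approximation, and the transcripts are invented.

```python
def mean_length_of_utterance(utterances):
    """Average number of words per utterance; Brown's original MLU
    counted morphemes, so this word count is a simplification."""
    if not utterances:
        return 0.0
    return sum(len(u.split()) for u in utterances) / len(utterances)

# Invented transcripts illustrating growth from roughly 2 to 4 words
early = ["there no rabbits", "no milk", "where she go"]
later = ["it's not raining now", "where will she go", "there are no rabbits"]

print(mean_length_of_utterance(early))  # ≈ 2.67
print(mean_length_of_utterance(later))  # 4.0
```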
There may be arguments over the exact number of stages for a given structure, but what researchers have demonstrated is that children, adolescents and adults learning a foreign language differ in their rate of language acquisition (the older, the harder) but do not differ in the language learning stages.
2- LANGUAGE PRODUCTION
The process of speech production only becomes visible to us when it breaks down, through disability or a slip of the tongue.
One of the most influential psycholinguistic models for speech production was
developed by Levelt, who views it as a linear progression of 4 successive stages: (I)
conceptualization, (II) formulation, (III) articulation, (IV) self-monitoring.
I Conceptualization
It is difficult to say what sparks speech. However, David McNeill has gone on record with an interesting mentalistic account of how speech is first conceptualized in the human mind. His theory is that primitive linguistic concepts are formed in two concurrent and parallel modes of thought:
- Syntactic thinking: linear and segmented, the mode that organizes speech into sequences of words.
- Imagistic thinking: global and instantaneous, a holistic image of what is to be said.

II Formulation

A slip of the tongue distorts what the speaker wanted to say, but the unintentional mistake freezes the production process momentarily and catches the linguistic mechanism in one instance of production.
Spoonerisms are slips of the tongue in which an actual word or phrase is created, often with a humorous twist on the meaning which was intended. Freud hypothesized that they help to reveal the unconscious mind, but this view has largely been set aside, both because of the danger of becoming too mentalistic and because spoonerisms reveal important linguistic patterns in their own right. Sounds and words are not thrown together arbitrarily; there is a clear, linear and hierarchical order in which we put them in our mouths.
There is data proving that the units of speech, such as the phoneme and the morpheme, are psychologically real. This means mistakes do not pop up just anywhere; they occur at predictable points and follow predictable patterns, as if they were meant to fill some slot.
We may produce slips of the tongue when sounds (phonemes) have similar characteristics: we may say /l/ instead of /r/ ("leading list" instead of "reading list") or /p/ instead of /b/ ("pig and fat" instead of "big and fat"), since they are pronounced in the same area of the mouth. In the second example, the slip of /p/ (voiceless) instead of /b/ (voiced) may be explained by anticipation of the voiceless /f/ in the next word, or perhaps even by the semantic association between the words "pig" and "fat". It is more difficult for vowels to substitute for consonants and vice versa (though possible), and we may come up with invented words that still obey the sound patterns of English.
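The idea that slips swap sounds sharing characteristics can be sketched with a toy feature comparison. The feature sets below are simplified illustrations, not a full phonological analysis.

```python
# Simplified articulatory features for a few English consonants.
FEATURES = {
    "p": {"bilabial", "stop", "voiceless"},
    "b": {"bilabial", "stop", "voiced"},
    "f": {"labiodental", "fricative", "voiceless"},
    "l": {"alveolar", "approximant", "voiced", "lateral"},
    "r": {"alveolar", "approximant", "voiced", "rhotic"},
}

def similarity(a, b):
    """Jaccard overlap of feature sets: a higher value means the pair
    is more confusable, hence more slip-prone, on this toy model."""
    fa, fb = FEATURES[a], FEATURES[b]
    return len(fa & fb) / len(fa | fb)

# /p/ and /b/ differ only in voicing, so they are far more confusable
# than /p/ and /r/, mirroring the "pig and fat" slip described above.
print(similarity("p", "b"))  # 0.5
print(similarity("p", "r"))  # 0.0
```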
Slips of the tongue reveal that we are conditioned not only by the sound system of the language we are speaking, but also by its morphology, which is likewise psychologically real. For example, we may say "New Yorkan" instead of "New Yorker" if we reason that those who live in America are called Americans; we may also say "words of rule formation" instead of "rules of word formation" by thinking that an apple pie is a pie made of apples. We just follow the logic.
We also know that when a native speaker errs ("childs" instead of "children") he will correct himself, while a learner won't, and if he does, he will probably correct it in an incorrect way ("childrens"). This shows that speakers organize their utterances into smaller groups of words and fill those with the appropriate lexical items to express the intended meaning.
III Articulation

Sounds are not articulated in a strictly linear way: the lips, the larynx and the lungs may all be working at the same time, so coarticulation is the norm, not the exception. Sounds do not emerge as segments strung together sequentially; they are mixed and melded, with each sound shaping its neighbours while concurrently being shaped itself. We still don't know how the lips and tongue know where to place themselves to produce particular sounds, but we have gained some limited understanding of how the brain is programmed to articulate them thanks to Positron Emission Tomography (PET).
IV Self-monitoring
At this final stage we have direct evidence of what is happening when people compose speech. All of us commit linguistic blunders during this process; it is something human. These slips are conditioned by the degree of stress we are under or the beverages we have imbibed.
S. Pit Corder, a pioneer in the field of SLA, classified those slips of the tongue as mistakes. Errors, on the other hand, are committed only by non-native speakers (NNSs): NNSs don't recognize the mistake immediately, and even if told, they do not replace the deviant form with the correct one. The fact that natives commit only mistakes and not errors reveals three insights into the production process:
- It suggests that speakers are intuitively sensitive to which stage of the production process went awry when a mistake was made, and readjust it.
- The fact that native speakers can monitor and quickly correct themselves supports Chomsky's contention that there is a distinction between performance (the words we say or write, our ability in language) and competence (our tacit, intuitive knowledge of the languages we have mastered).
Along with mistakes such as slips of the tongue, psycholinguists have also relied on the hesitations which punctuate our unplanned, spoken discourse to gain insights into the ways we monitor the language we produce. Hesitations may seem to indicate a lack of fluency, but they occur even in the speech of articulate speakers, since they are not random but self-governed, and they never violate linguistic constraints.
One final point about self-monitoring: it proves that people do not just communicate with other people, but with themselves as well. The communication process is a two-way system involving both output and the concurrent editing and modulation of that output.
To sum this chapter up, it is only when this system of effortless language flow breaks down that we appreciate its intricacy and begin to glean significant psycholinguistic insights.
3- COMPREHENSION
The research shows that in most situations, listeners and readers use a great deal of information other than the actual language being produced to help them decipher the linguistic symbols they hear or see.
The comprehension of sounds
When we listen to a sentence with a missing sound and unconsciously insert an appropriate sound to restore a word we never actually heard in full, we experience the phoneme restoration effect. Under these conditions, listeners do not accurately record what they hear; they report what they expected to hear from the context, even if that means adding a sound that was never actually spoken at the beginning of the target word. We can draw some observations from this:
- People don't necessarily hear each of the words spoken to them; comprehension doesn't mean passive recording.
There are slight differences between some sounds, like the /p/ in "pool" and the /p/ in "spring", that native speakers of English perceive as the same, though they are not. Still, we unconsciously register the differences between them thanks to Voice Onset Time (VOT): our brain processes the differences between the phonemes, even if the difference is just a puff of air starting one-twentieth of a second later, to help us identify which phoneme has been spoken. This subtle difference can be seen between the initial /b/ and /p/ in "Benny" and "Penny", and this remarkable ability has been shown to be innate.
But when natives hear such a sound, they don't classify it as 50% voiced and 50% voiceless (or whatever the percentages are); they classify it as one sound or the other. This is called categorical perception. It seems to qualify as one aspect of UG, and thanks to these VOT experiments we have evidence that UG exists and that at least part of human language is modular: some parts of language reside in the mind as an independent system.
Although categorical perception of VOT is modular, it is influenced by the linguistic environment; the English language divides the VOT spectrum into two sets of sounds, for example the voiced and voiceless pairs of consonants /b/ /d/ /g/ vs. /p/ /t/ /k/. But not every language has the same VOT boundaries, and when we are exposed to another language or are learning it, we use an innate ability to hear speech sounds categorically to acquire the appropriate VOT settings. Learning to comprehend is a merger of both nature and nurture.
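Categorical perception of VOT can be sketched as a simple boundary classifier. The 25 ms boundary below is an assumed, illustrative value for English bilabial stops, not a measured constant.

```python
def classify_stop(vot_ms, boundary_ms=25.0):
    """Listeners do not report a stop as '60% voiceless': any VOT past
    the category boundary is simply heard as voiceless /p/, anything
    below it as voiced /b/."""
    return "/p/" if vot_ms >= boundary_ms else "/b/"

# A smooth physical continuum of VOT values collapses into two categories
continuum = [0, 10, 20, 30, 40, 60]
print([classify_stop(v) for v in continuum])
# ['/b/', '/b/', '/b/', '/p/', '/p/', '/p/']
```

A different language would be modelled simply by moving the boundary, which is the sense in which the innate categorical mechanism is tuned by the linguistic environment.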
The comprehension of words
Comprehension of words is more complex than the processing of phonemes, since there are many more words than sounds and words convey meanings.
Psycholinguists have adopted a model of cognition which argues that we use separate but simultaneous and parallel processes when we try to understand spoken or written language; it is called Parallel Distributed Processing (PDP).
One account of the way we access the words stored in our mental lexicon is the logogen model of comprehension: when you hear or see a word, you stimulate a lexical detection device (a logogen) for that word. Logogens work in parallel to create comprehension; high-frequency words (like the word "word") activate much more rapidly, while low-frequency words (like the word "logogen") take longer to be incorporated into our system of understanding.
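The frequency effect in the logogen model can be sketched as detection thresholds that are lower for more frequent words. The frequency counts and the latency formula below are invented purely for illustration.

```python
import math

# Invented corpus frequencies: "word" is common, "logogen" is rare.
FREQUENCY = {"word": 9000, "logogen": 3}

def recognition_latency(word, base_ms=500.0):
    """Toy logogen latency: higher frequency lowers the detection
    threshold, so the logogen fires sooner (smaller latency)."""
    freq = FREQUENCY.get(word, 1)
    return base_ms / math.log(freq + 2)

# High-frequency words are recognized faster than low-frequency ones
print(recognition_latency("word") < recognition_latency("logogen"))  # True
```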
Psycholinguists can account for the comprehension of words in several ways:
- In terms of the grammatical functions that the word might fill ("smell" can be used as a noun or a verb, while "hear" has to become "hearing" to turn into a noun)
A useful example of a PDP approach to the comprehension of words is the Tip-Of-the-Tongue (TOT) phenomenon, which occurs when we know a word but can't recall it; it's on the tip of our tongue, yet we can instantly recognize it if it is presented to us. Still, some aspects or fragments of the word are remembered (maybe the first letters or the first syllable); this is called the bathtub effect: the middle of the word is submerged while the edges stick out, and we can look the word up in the dictionary thanks to its alphabetical ordering. And, although we can't reproduce it, we can recognize those words we are not trying to recall.
When we notice some similarity between those wrong words and our TOT term, our schematic knowledge, based on our life experiences, is assisting the lexical search process.
All of these observations about TOT demonstrate the effects of the spreading activation networks that form part of the PDP.
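The bathtub effect described above suggests a simple search strategy over the lexicon, sketched here with an invented word list.

```python
def bathtub_candidates(start, end, lexicon):
    """TOT search sketch: the edges of the word stick out of the
    'bathtub' while the middle is submerged, so candidates are
    retrieved by matching the beginning and end alone."""
    return [w for w in lexicon if w.startswith(start) and w.endswith(end)]

# Invented mini-lexicon; we remember only that the word starts with
# "l" and ends with "n" -- the middle is lost.
lexicon = ["logogen", "lexicon", "phoneme", "morpheme", "lagoon"]
print(bathtub_candidates("l", "n", lexicon))
# ['logogen', 'lexicon', 'lagoon']
```

We can reject the near-miss candidates on sight even while the target word itself stays irretrievable, which is the recognition-without-recall asymmetry the notes describe.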
We have reached other conclusions thanks to those processes:
- Sentences which contain more complex information in the clause preceding the target phoneme will create a greater lag in reaction time.
Only a complex model of comprehension like PDP can account for the way we receive linguistic messages every day, but we need to explore further and investigate the innate mechanisms for language that are wired into the human mind.
4- DISSOLUTION: LANGUAGE LOSS
Neurolinguistics and language loss
The evidence from aphasia
Neurolinguistics, an offspring of psycholinguistics, investigates how the human brain creates and processes speech and language. When speaking about the human brain, we have to avoid some popular mistakes: the brain is not simply divided into two hemispheres; the two sides are connected by millions of association pathways linking the left and right hemispheres together, so the two of them share information.
The function of the corpus callosum, the largest sheath of association pathways connecting the two hemispheres, is often misunderstood, unknown or simply ignored, which is why the notion that there are left-brained and right-brained people is so widely treated as fact. Misconceptions like these about neurology lead to misconceptions about the relationship between the brain and mental states or linguistic structures. Sadly, we learn the most when the brain is damaged.
If we observe the left side of the brain, we will first find two strips:
- The motor cortex: the primary area of the brain for the initiation of all voluntary muscular movement.
- The sensory cortex: the primary location for processing all sensations coming to the brain from the body.
We are most concerned with the location of the control of the speech organs and the sensation of speech sounds within these two strips. Here is one of the oddities of our brain: the left side controls the right side of the body and vice versa, and the top of the brain controls the lower parts of the body (and vice versa as well). So, since it is the lower part of these strips that controls our head, that is the part we are most interested in:
- Broca's area: located in the bottom portion of the motor cortex, slightly towards the front. Named after Paul Broca, who also helped coin the term aphasia: the loss of speech or language due to brain damage. It takes care of speech production.
- Wernicke's area: located just behind Broca's area, at the lower portion of the sensory cortex. Named after Broca's German contemporary, Karl Wernicke. It takes care of language comprehension.
- Broca's aphasia: comprehension is relatively well preserved, but speech production is slow, effortful and often telegraphic.
- Wernicke's aphasia: speech production and writing are pretty much intact, but patients experience a great deal of trouble processing linguistic input. Although their speech flows fluently, their problems processing conversational feedback make them tend to ramble incoherently.
In both of these types, and in most cases, aphasia occurs only if those areas are damaged in the left side of the brain; if the same areas are damaged in the right side, we do not get language problems but other kinds of problems, such as difficulty recalling faces or reading maps.
The surgical evidence
Lately, the sub-field of aphasiology, the study of aphasia or loss of speech, has especially flourished; two kinds of surgical operation have a particular bearing on questions of language dissolution:
- Split-brain operation: first used to treat specific and rare cases of epilepsy caused by discharges in the motor cortex of one hemisphere that are instantly transmitted to the corresponding cortex of the other hemisphere via the corpus callosum. To fix this, the surgeon would cut the corpus callosum front-to-back, severing the association pathways.
This surgical operation has some unique consequences: daily functions are unaffected, while under experimental conditions some linguistic processing constraints emerge. When a word such as HEART is flashed for a fraction of a second, the patient reports only the right part of the word (ART), since that part goes to the left hemisphere; the left part of the word (HE) goes to the right hemisphere, where its lexical information gets stuck and cannot cross to the left side of the brain, since there is no longer a corpus callosum. But the left side of the brain does not monopolize all language processing: there are secondary or tertiary linguistic areas even in the right hemisphere, and patients are unconsciously aware of the existence of the HE part of the word HEART. This is why we can't just assume a left vs. right duality.
- The Johnson theory: the extreme behavioural view that claims that stuttering originates from traumatic events in early childhood.
These two theories note that boys are more affected than girls (girls surpass boys in linguistic terms early on; most teachers are women; boys are criticized more), so the condition is not just neurological but environmental. Stuttering in older speakers is also not seen the same way all around the world: some cultures see it as a disorder, others as something normal. So we know that the disability is not just in the mouth of the speaker; it is also framed in the ears of the listeners.
Another disability is autism. There are several types of autism, and it is not just a language impairment; autistic children also exhibit a disregard for human interaction and avoid eye and face contact. Their failure to bond with people, coupled with the linguistic consequences of this constraint, creates a behavioural pathology severe enough to be labelled a psychosis (often referred to as childhood schizophrenia).
Language loss arising from inherited disorders
Genetics should be used as a court of last resort, not as the first line of defense, but
recent work in psycholinguistics has uncovered certain rare examples of how language
Language is not localized in just one area of the brain; it is controlled by both hemispheres. And there is a puzzle remaining to be solved: for neurolinguists, speech and language are independent from other aspects of behaviour, but for psycholinguists, language is part and parcel of cognition and perception. Why?
Thanks to research on examples of dissolution that do not seem to be caused by brain damage (autism, aging, AD...), we discover that the data on speech and hearing disorders do not differ significantly from the information we have on normal development. So, whatever the disease, language seems to be closely related to other aspects of human behaviour, particularly cognition.
To sum up, the disruptions in the environment or in the genetic code that bring about speech and language disabilities never seem to single out language: they affect linguistic communication because they afflict cognition and perception as a whole. This is why psycholinguistics is drawn by language into a more general inquiry into the workings of the human mind.