Sign-Language

Research Paper

Anthony Hunke

SPCH 202

October 16, 2014

What is the most important aspect of our contemporary society? Is it cooperation,
stability, unification, or all of the above? However one might want to spin it, at the core of every
successful society is communication. To say that effective communication is the most important
cornerstone of our civilization is a major understatement. Only a select few are consciously
aware of its significance and influence upon our everyday life. Communication is conducted
through many different media, but its basic definition is the same.
Communication is the ability to develop and share meanings. As symbol users, humans create
words that represent a particular object, and we use these words to convey meanings to each
other to accomplish a personal desire. Just one of the many forms communication can take is that
of sign-language. Sign language is just as important as verbal language. The transfer of
knowledge and information works the same way for each; the only difference is that one is
processed through hearing, and the other is processed through sign-recognition. This difference
is essential to this paper. Thus, I will elaborate upon sign-language and its basic constituents.
To begin, humans must first learn. According to the web article "How Human Beings
Learn," learning is "acquiring new or modifying existing knowledge, behavior, skills, values, or
preferences" (How Human Beings Learn, n.d.). What begins this process? Well, the five basic
senses are our most effective tools. Through sight, sound, touch, smell, and taste, we assemble
incredible amounts of knowledge. The sensory system collects these inputs and relays them to
the brain for analysis. Only when we are consciously aware can we effectively learn what we
take in; the unconscious aspects are discarded. This goes hand in hand with selective
perception. We take in, or remember, things that spark our interest and captivate our
attention; we are more likely to reject or forget those stimuli that do not pique our
interest (Wells-Papanek & Hargrove, 2010). Returning to the sensory system, more specifically,
it is the central nervous system that acts as our "card reader." The brain and the spinal cord
are its basic constituents; they read the stimulations we come into contact with and store
them where they need to be. This central nervous system "rapidly gathers, organizes,
interprets, and makes sense of the inputs, to prepare our body and mind (peripheral nervous
system sensory neurons, clusters of neurons, and nerves) to adapt and take action based on
need or circumstance" (Wells-Papanek & Hargrove, 2010). It is amazing how active the brain is,
and we are only aware of a fraction of its potential.
Since we have taken in this information from our senses, we must now comprehend it.
According to Merriam-Webster's Online Dictionary, comprehension is "the act or action of
grasping with the intellect" (Comprehension, 2014). One cannot simply gather the information
and stop; one must also understand the knowledge. The central nervous system collects the
perceptions we prefer and stores them into our short-term memory. This databank will, in turn,
draw upon our long-term memory for reference material. In other words, our mind will compare
the new stimuli with knowledge that we have collected in the past. The more humans can relate
to what they are learning, the more likely new short-term memories will link with prior
knowledge or previous experiences and therefore result in new understandings (Wells-Papanek
& Hargrove, 2010). Yingxu Wang put it another way. He stated, "Comprehension is a higher
cognitive process of the brain that searches relations between a given object or attribute and
other objects, attributes, and relations in the long-term memory, and establishes a
representational model for the object or attribute by connecting it to appropriate clusters of
memory" (Wang, 2003). Basically, the more we know, the more likely we
will be to connect the new information with the old, and this, in effect, allows us to learn even
more! However, our short-term memory has a limited capacity. The average
number of things we can attend to at a time is seven. Thus, we can rehearse an item to
ensure its footing in our minds and move it to the long-term databank, or we can forget it
(McLeod, 2008). This sounds quite familiar to college students. After all, most of the
material we get in class is forgotten unless we rehearse it directly afterward. Thus, we, as human
beings, learn and comprehend.
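To make this concrete, here is a minimal sketch, in Python, of the capacity-limited short-term store just described: a buffer capped at seven items, where new stimuli push out the oldest and only rehearsed items reach the long-term databank. The class and method names are illustrative inventions, not terms from McLeod (2008).

from collections import deque

class ShortTermMemory:
    def __init__(self, capacity=7):
        # The "magic number" seven: the buffer silently drops its oldest
        # item once the capacity is exceeded (McLeod, 2008).
        self.buffer = deque(maxlen=capacity)
        self.long_term = set()

    def attend(self, item):
        # A new stimulus enters the short-term buffer, displacing the
        # oldest item if the buffer is already full.
        self.buffer.append(item)

    def rehearse(self, item):
        # Rehearsal secures an item's footing by copying it into the
        # long-term databank before it can be forgotten.
        if item in self.buffer:
            self.long_term.add(item)

stm = ShortTermMemory()
for n in range(10):
    stm.attend("fact %d" % n)   # ten inputs, but only the last seven remain
stm.rehearse("fact 9")          # rehearsed material survives long-term
print(list(stm.buffer), stm.long_term)

Only the rehearsed item lands in long-term storage; everything pushed out of the buffer is simply gone, which mirrors the lecture material we forget unless we rehearse it directly afterward.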
The act of comprehending, itself, involves processing information. This may sound a bit
too much like a computer; well, that would not be too far off from the truth. The information that
our senses collect is processed by certain systems, such as attention, perception, and short-term
memory. These systems, in turn, convert these inputs into systematic information for our brain to
deal with. In this respect, the human brain resembles a computer and behaves like one. Both the
brain and a computer will code the information, store the information, use the information, and
produce an output. Once the eyes, ears, skin, or tongue receive stimulation, the information is
transformed into neural signals, which are transferred to the brain. Once in the brain, the
information is stored,
or in computer terminology, it is coded. Since the information has been stored, much like
students at a campus library, the information is made available to the rest of the brain for
utilization (McLeod, 2008). Hence, "the information processing approach characterizes thinking
as the environment providing input of data, which is then transformed by our senses. The
information can be stored, retrieved and transformed using mental programs, with the results
being behavioral responses" (McLeod, 2008). Information processing is defined, according to
Slamecka, as "the acquisition, recording, organization, retrieval, display, and dissemination of
information" (Slamecka, n.d.). This information processing system works in stages. Stage one
consists of the stimulations being taken in by our senses. Stage two dictates that the neural
impulses are then stored in the brain. The third stage consists of the output processes helping us
to make sense of these stimuli (McLeod, 2008). For example, the output would be our reading of a
book.
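Pursuing the computer analogy, these three stages can be sketched as a small Python program. The function names below are invented for illustration; McLeod (2008) describes the stages but, of course, specifies no such implementation.

def stage_one_input(stimulus):
    # Stage one: the senses take in raw stimulation from the environment
    # and transform it into a neural signal.
    return {"signal": stimulus}

def stage_two_storage(signal, memory_store):
    # Stage two: the neural impulse is coded and stored in the brain,
    # like a book shelved in a campus library, available for later use.
    memory_store.append(signal)

def stage_three_output(memory_store):
    # Stage three: output processes make sense of the stored stimuli
    # and produce a behavioral response.
    return "a response drawing on %d stored item(s)" % len(memory_store)

memory = []
stage_two_storage(stage_one_input("printed words on a page"), memory)
print(stage_three_output(memory))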
Once we process this information, we produce an output. These outputs are, for the most
part, methods of communication. Communication involves "the imparting or interchanging of
thoughts, opinions, or information among people by speech, writing, or signs" (Nayab, 2012).
There are several forms of communicating. Verbal is the first. Verbal communication
encompasses sounds, words, and speaking through a particular language. While verbal language
and sign-language are evidently different, verbal language is said to have developed from sounds
and gestures. However, gestures and sign-language are not always the same thing; more on this
topic will be discussed later. Verbal communication tops the list due to its high significance.
Verbal communication allows us to vocally tell our loved ones how much we love them, it allows us to
vocally convey thoughts to procure and maintain our own agendas, and it allows us to work in an
effective business environment. Verbal communication deals with public speaking as well
(Young, 2014). Many people are petrified of presenting a speech in front of a crowd, but it is
highly regarded as an important part of our society, all the same. Another form is non-verbal
communication. Sign-language fits under this category more than any other because non-verbal
communication utilizes means of communicating other than speech. Our hand gestures, our body
language, and our senses all fall under this category. Non-verbal is just as important as verbal due
to the fact that our body language will speak volumes about who we are. While we may say one
thing, our body posture and position may say another. For instance, we may say we like an idea,
but our folded arms, drooped head, and avoided eye contact would dictate otherwise. Thus, one
might see how non-verbal plays into the business sector. We must look the part and appear
professional. Slouching, moping, and dragging feet will not convey a sense of trust or
responsibility. Paralanguage is also a form of non-verbal communication. Paralanguage is the way
something is said, not what is said. Voice quality, intonation, pitch, stress, emotion, tone, and
style are all constituents of paralanguage (Nayab, 2012). One's aesthetic communication, such as
dancing and painting, one's appearance, or style of dressing and personal grooming, one's
territorial occupancy, and one's personal symbols, such as religion and status, are all important
elements of
non-verbal as well (Nayab, 2012). Written communication is yet another form. As the name
would imply, this includes any message that is written out rather than spoken. Thus, e-mails,
chats, memos, and other forms of that nature are included. Nayab states that written
communication would fall under verbal, but Young separates it into its own category. With
technology perpetually evolving, this now encompasses texting, IMs, and social networking.
One of the primary advantages to this form over the others is the fact that the message being
transmitted can be amended before being submitted. After all, they say that cruel words cannot
be taken back. Written communication allows us to take the time to revise and clear our head if
we are writing in a bad mood. A final type of communication, according to Young, is that of
Visual communication. Photography, signs, symbols and designs are all constituents. In the
contemporary era, television and YouTube videos are now considered examples. One must
realize a few facts regarding communication. First of all, each form relates to business in some
way. If one wants to be truly successful in one's professional field, regardless of
what it may be, one must perfect one's communication skills (Young, 2014). Also, not just one,
but all forms should be studied and enhanced by every individual.
Now that we know what the forms are, let us focus more specifically on sign-language.
Many scholars believe that sign language may have been the earliest form of language
communication. After all, one must reflect upon the cavemen and their depictions of the world
around them. They did not have a verbal system that we know of; perhaps they had clicks,
gutturals, or other sounds, but those would fit into paralanguage, not vocal language. In any case,
what started as gestural communication gradually developed into a spoken language (Mathur &
Napoli, 2011). The spoken form of language could have emerged as brain size increased over the
course of evolutionary development (Mathur & Napoli, 2011). According to Marilyn Daniels, sign
language is a
"term that refers to many languages that have evolved throughout the world in situations in which
spoken language was not possible." In monastic communities, there were periods of silence, so the
monks needed a silent form of communication (Daniels, 2001). Let
us go back as far as we can. One of the first documented references we have regarding sign-language
is that of Xenophon, a Greek historian. His writings, from around 431 B.C., speak of an early
exposure to a primitive form of sign-language (Corballis, 2002). Over the years, sign-language
has developed into the forms we recognize today. There are just as many different forms
of sign-language as there are vocal languages. Sign-languages have different dialects and
accents, just as spoken languages do! The signs used in this form of communication are made
with hand movements, and the hands are always placed near the body. The hands are not the only
tools involved; facial expressions also contribute. Some signs utilize one
hand, and some signs utilize both. Since there are many forms of sign language, it will be
different for each, but, as for American sign-language, or ASL, there are some generalizations.
There are five basic parts to sign-language in ASL: handshapes, movements, locations, palm
orientations, and facial expressions (a simple data-structure sketch of these parts follows this
paragraph). Facial expressions are important when multiple words have very similar hand signs;
the facial expression may be the only way to tell them apart (Bayley, Lucas, &
Valli, 2003). Signs in sign language are made up of sequences of movements and holds. Holds
"are periods of time during which all aspects of the sign are in a steady state, not changing," while
movements "are periods of time during which some part of the sign is changing" (Bayley et al.,
2003). Over the years, much debate has ensued about whether sign-language is even a real
language. Ferdinand de Saussure, a Swiss linguist, weighed in on this topic. One of the critical
features of language is the arbitrary nature of the symbols it employs; the words we use typically
bear no relation to what they represent. Saussure distinguished between the sequence of sounds
that make up a spoken word, which he called the signifiant, and what it represents, the signifié.
This distinction is often blurred in signed language, where signs have a more iconic, or pictorial,
relation to what they represent. This has sometimes been taken to imply that signed language is
not a true language but is more like a mime show (Corballis, 2002). Unfortunately, there are
those who believe the movement of the body signifies little more than a side show.
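To make the five parts named above concrete, the following sketch represents a sign as a simple Python data structure: the five parameters from Bayley, Lucas, and Valli (2003) plus a sequence of movement and hold segments. The field names and the two example signs are invented for illustration and follow no standard notation system.

from dataclasses import dataclass, field

@dataclass
class Segment:
    kind: str          # "hold" (steady state) or "movement" (some part changing)
    duration_ms: int

@dataclass
class Sign:
    gloss: str
    handshape: str
    movement: str
    location: str
    palm_orientation: str
    facial_expression: str   # may be the only cue separating similar signs
    segments: list = field(default_factory=list)

# Two hypothetical signs identical in every part except the facial
# expression, illustrating why the face can be the only way to tell
# two otherwise similar signs apart.
sign_a = Sign("EXAMPLE-A", "flat-B", "none", "chin", "toward signer",
              "neutral", [Segment("hold", 200)])
sign_b = Sign("EXAMPLE-B", "flat-B", "none", "chin", "toward signer",
              "raised brows", [Segment("hold", 200)])
print(sign_a.facial_expression != sign_b.facial_expression)  # True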
Sign-language depends upon visual cues. When someone makes a hand sign, the eyes of
the recipient of that message will interpret it. As mentioned earlier, the senses take in
information, and this information is shifted to the brain for processing. Once it has been
processed, the output, or the knowledge of the sign, has been gleaned, and thus, the recipient can
take action in response to it.
Thus far, it has been said that these visual cues have been stored, but stored where? One
would think that auditory and visual information would be processed differently, but there may
be more similarities than one thinks. Vocal language is stored in the left hemisphere of the brain.
This is supported by the fact that children who are not exposed to a language early on will
have underdeveloped left-hemisphere language areas. This is what is known as a critical period: the
period in time when a child needs to be exposed to language. If children miss this window, they
might never be able to recover and learn language capabilities (Phillips, 2002). Research
conducted by Aaron J. Newman indicated that the right angular gyrus may be essential in
comprehending ASL for children who learn it from birth. It is a different story for those people
who learn it after they hit puberty (Phillips, 2002). In recent years, more and more studies have
been conducted that show the left hemisphere has a primary role in comprehending both vocal
and signed languages.
While the left hemisphere is important for comprehending language of many forms, the
right hemisphere is important for visual-spatial comprehension. Sign-language includes
linguistics and visual-spatial processing (Bellugi, Hickok, & Klima, 2001). It has been
determined that "the neural organization of sign language has more in common with that of
spoken language than it does with the brain organization for visual-spatial processing" (Bellugi
et al., 2001). Visual-spatial processing is not confined to one area of the brain; instead, different
neural modules process visual inputs in different ways. For example, "visual inputs that carry
linguistic information would be translated into a format optimized for linguistic processing,
allowing the brain to access the meanings of signs, extract grammatical relations, and so on. But
visual stimuli that carry a different kind of information, such as the features and contours of a
drawing, would be translated into a format that is optimized for, say, carrying out motor
commands to reproduce that drawing" (Bellugi et al., 2001).
The discussion to this point has covered considerable ground, so a brief
summary follows. It has been shown that damage to the left hemisphere, the area
of the brain that deals with language, causes aphasias (Emmorey, 2002). This is
because the left hemisphere is dominant for language, specializing in analytic, rational, and
linguistic processing (Emmorey & Reilly, 1995). According to Professor Jan
Moore, a doctor of education at the University of Nebraska at Kearney, "The linguistics of sign
(hand shape, movement, palm orientation, etc.) is processed in the left hemisphere. Other parts of
communication (emotion, nonverbal cues) might be processed elsewhere" (J. Moore, personal
communication, October 15, 2014). Another piece of evidence comes from Karen Emmorey and
Harlan Lane in their book, The Signs of Language Revisited. They said that both the left and the
right hemispheres process neural signals from non-language interactions between the
environment and the body, including sensory and motor systems (Emmorey & Lane, 2000).
Another reason the right hemisphere may be more involved than we think is that the occipital
lobe runs within both the left and the right hemispheres. So, when we read a sentence or see a
sign, the occipital lobes in both the left and right hemispheres light up. Chinese characters
are logographic in nature, so, when people read these symbols, multiple areas of the right
hemisphere have been reported to light up. This is because the right hemisphere deals with visual-spatial
processing. The right hemisphere "is involved in the processing of lexically ambiguous words
and indirect forms of language use such as metaphors, suggesting that the right hemisphere
provides an alternative interpretation when the initially constructed meaning turns out to be
incompatible with contextual information" (de Groot, 2011). In addition, the right hemisphere is
said to deal with emotional, holistic, and imagistic aspects of the mind. It relates to speech
through gesture. Thus, the right hemisphere has been dubbed the "minor
hemisphere" in relation to speech (Emmorey & Reilly, 1995). Damage to the right hemisphere
has been shown to decrease the ability to organize linguistic narrations, appreciate jokes, or
understand metaphors (Emmorey & Reilly, 1995). Over the years, it has been noticed that the right
hemisphere is utilized more when listening to spoken language than when reading or speaking
(Emmorey, 2002). Since the right hemisphere is better with imaging, the right hemisphere is
efficient when processing words with an image, or referent, attached to them. This associating of an
image happens after the linguistics have been processed; thus, it is postlexical. "Once the lexicon
has been accessed, and a semantic representation retrieved, subsequent right hemisphere
semantic processing is mediated by imagery, while the left hemisphere can utilize either verbal
or imaginal codes" (Emmorey, 2002). In summary, then, the facts are these.
The left hemisphere is the dominant player when it comes to processing, interpreting, and
communicating with vocal speech. The right hemisphere is more space-oriented and contributes
less than the left. However, the fact remains that the right hemisphere does play a part in
comprehending sign-language.
Many tests have been carried out to determine the differences between verbal and signed
languages and where they are stored in the brain. One study was conducted that included two
groups of individuals. Both groups were fluent in English and could hear, as well. One group
learned ASL early in life due to the fact that they had deaf parents. The other group grew up with
English and did not learn ASL until later in their life. Both groups were exposed to English and
ASL sentences, alternating in turn. While this was happening, their brain activity was
measured using fMRI, or functional magnetic resonance imaging.
During the English sentence exposure, both groups utilized their left hemispheres.
However, when viewing ASL sentences, both groups used both their left and right hemispheres.
It was more common, though, for the native ASL signers to use the temporal lobe of the right
hemisphere than the late learners (Phillips, 2002). Another test was administered to certain
individuals. The left side of the brain underwent cortical stimulation, and ASL signers
experienced hand shape errors. People who suffered injuries to the right side of the brain
experienced no sign-language impairments (Emmorey, 2002). The Wada test, specifically, showed
that when the left hemisphere was impaired, both sign and speech became impaired as well
(Emmorey, 2002). Two men, Broca and Wernicke, conducted ground-breaking work in
their studies of the brain. Broca found that a patient had a lesion in the
middle posterior part of the inferior frontal gyrus in the left hemisphere. This lesion left him
unable to speak, but his comprehension remained intact. Wernicke found a patient with a
lesion in the left superior temporal gyrus. This patient could not understand anything, but he
could talk, although the speech was nonsense (de Groot, 2011). Thus, Broca's area deals with the
storing of motor representations of words, while Wernicke's area represents the auditory forms of
words (de Groot, 2011). Basically, there is more evidence that language processing depends on the
left hemisphere than on the right.
In conclusion, sign language is processed slightly differently than regular vocal speech.
However, there are more similarities than there are differences. This may confuse some people
since it might appear they are quite different. But when looked upon from a neurological
perspective, it may make sense. Communication is essential for any community. As John Dewey
once argued, a democratic society will fail if it does not have effective communication. Those
who rely solely on sign-language face more obstacles than the rest of us, and they face
those obstacles to accomplish the same things the rest of us take for granted. Many of
the children in local school systems will have some access to sound, but 11% of sign-only users
in local schools and schools for the deaf will need an alternative to a spoken rendering of fluent
reading because they are reading one language and rendering it face-to-face in another language.
When a child does not have fluent verbal speech, we cannot reliably assess fluency through
spoken means alone (Easterbrooks & Beal-Alvarez, 2013). Thus, the real question must be
asked: is there a measurable difference in comprehension between audio learners and
sign-language learners?
Works Cited
Bayley, R., Lucas, C., & Valli, C. (2003). What's your sign for pizza? An introduction to
variation in American Sign Language. Washington, D.C.: Gallaudet University Press.
Bellugi, U., Hickok, G., & Klima, E. S. (2001). Sign language in the brain. Retrieved from
http://www.zorna.com/spike/docs/SignLanguageInBrain/
Comprehension. (2014). In Merriam-Webster's online dictionary. Retrieved from
http://www.merriam-webster.com/dictionary/comprehension
Corballis, M. C. (2002). From hand to mouth: The origins of language. New Jersey:
Princeton University Press.
Daniels, M. (2001). Dancing with words: Signing for hearing children's literacy. Connecticut:
Bergin & Garvey.
de Groot, A. (2011). Language and cognition in bilinguals and multilinguals. New York:
Psychology Press.
Easterbrooks, S. R., & Beal-Alvarez, J. (2013). Literacy instruction for students who are deaf
and hard of hearing. New York: Oxford University Press, Inc.
Emmorey, K. (2002). Language, cognition, and the brain. New Jersey: Lawrence Erlbaum
Associates, Inc.
Emmorey, K., & Lane, H. (2000). The signs of language revisited. New Jersey: Lawrence
Erlbaum Associates, Inc.
Emmorey, K., & Reilly, J. (1995). Language, gesture, and space. New Jersey: Lawrence
Erlbaum Associates, Inc.
How human beings learn. (n.d.). Retrieved from http://visual.ly/how-humans-learn
Mathur, G., & Napoli, D. J. (2011). Deaf around the world: The impact of language. New York:
Oxford University Press, Inc.
McLeod, S. (2008). Information processing. Retrieved from
http://www.simplypsychology.org/information-processing.html
Nayab, N. (2012). How are you communicating to your team? Retrieved from
http://www.brighthubpm.com/methods-strategies/79297-comparing-various-forms-of-communication/
Phillips, M. L. (2002, April 17). Sign language and the brain. Retrieved from
http://faculty.washington.edu/chudler/sign.html
Slamecka, V. (n.d.). Information processing [Britannica Academic Edition]. Retrieved from
http://www.britannica.com/EBchecked/topic/287847/information-processing

Wang, Y. (2003). The cognitive process of comprehension. Retrieved from
http://dl.acm.org/citation.cfm?id=943406
Wells-Papanek, D., & Hargrove, W. E. (2010). How humans (we) learn. Retrieved from
https://sites.google.com/site/humancenteredactionresearch/how-humans-we-learn
Young, J. (2014). Different forms of communication. Retrieved from
http://www.beamentornow.org/different-forms-of-communication/