
Allison Agona
ENG 455: The English Language
Dr. Hall
Research Paper
November 26, 2013

Linguistic Properties in Signed and Spoken Languages

Sign languages and spoken languages differ in that the former serve those who cannot hear, while the latter serve those who can hear and speak. Although many people view sign language as something not comparable to spoken language, sign language is very much like spoken languages and shares many of their properties and characteristics. Linguistically, sign languages exhibit the same language universals as spoken languages, including morphological properties, syntax, pragmatics, and even their own phonological properties. Are sign languages, linguistically speaking, comparable to spoken languages? Sign languages developed over time and share the same linguistic properties and characteristics as spoken languages. This shows that language acquisition is present in all humans learning language, including those who learn sign language, not just those who learn spoken languages.

Although many people may consider a deaf person's lack of hearing an impairment or a deficiency, children who acquire sign language go through the same linguistic stages as hearing children. In an article presenting the challenges and successes of children who are deaf and cannot hear spoken language, Amy R. Lederberg, Brenda Schick, and Patricia E. Spencer argue that researchers have failed to recognize that the vast majority of deaf people develop language, namely sign language, if they are exposed to it (16). They also look at research concluding that sign languages not only provide a valid way to communicate but are, in fact, "true languages with all the properties of spoken languages" (16), such as the linguistic stages and the language universals. In the 1990s, many doctors were unable to identify whether a child had been born deaf.

The average age for identifying deafness in the United States was around twenty-four months unless the child was born to deaf parents (16). Thanks to new advances in technology, deafness can now be detected much earlier, which bridges the learning gaps that many children in the deaf community face when it comes to language acquisition. With these technological advances, deaf children are exposed to language at an earlier age, as the writers of the article explain: with neonatal identification, a child can be identified as deaf or hard of hearing by six months of age instead of the former average of twenty-four months (16).

The learning environment, along with age, is a critical factor for both hearing and deaf children. The critical age hypothesis is a theory that states that "the ability to learn a native language develops within a fixed period, from birth to middle childhood" (Fromkin et al. 62). This principle applies not only to hearing children but to deaf children as well. Because identification in the 1990s often came years into a child's life, many deaf children were not exposed to sign language early, which hindered their language acquisition. A lack of exposure to signing affects a deaf child just as a lack of exposure to a spoken language hinders the language acquisition of hearing children. This window is also known as the critical period, during which "language acquisition proceeds easily, swiftly, and without external intervention" (62).

The environment is also crucial to a child's language acquisition. Statistics show that approximately five percent of deaf children are born to deaf parents; the other ninety-five percent are born to hearing parents (Lederberg et al. 16). Nearly all hearing parents in the United States must seek outside support and education for their deaf children; without these steps, their child's ability to learn a language, whether spoken or signed, would suffer (Lederberg et al. 16). There is, however, a connection between hearing children and deaf children in how environment affects their language acquisition.

Probably the best-known example of a hearing child whose environment affected the critical age hypothesis is Genie. Genie was confined to a small room and denied exposure to human communication and spoken language from the time she was eighteen months old until she was fourteen years of age. Genie displayed no knowledge of a spoken language and was not able to produce grammatically correct phrases or sentences. Her speech consisted of babbles, and her brain development was stuck at the level of a two-year-old child (Fromkin et al. 63). An example of a deaf child's acquisition being hindered by environmental factors is Chelsea, a woman born deaf but falsely diagnosed as retarded. At the age of thirty-one she was correctly diagnosed as deaf, fitted with hearing aids, and put through rigorous training in order to acquire spoken language (64). Like Genie, Chelsea was unable to develop a proficient knowledge of grammar. This connection points back to the linguistic stages and the language universals as well.

The linguistic stages that hearing and deaf children go through include babbling, the holophrastic stage, and the telegraphic stage (Fromkin et al. 354). These stages are the first pieces of evidence that deaf children acquire language in the same ways that hearing children do, and they also support the argument that sign languages share the same linguistic properties as spoken languages. During the babbling stage, children exposed to spoken languages begin to process what they are hearing and babble nonsense words (354). Deaf children do just the same, except that their babbling is done not with the mouth but with the hands. As children progress in language acquisition, they enter the holophrastic stage, in which hearing children use one word to convey a message comparable to a phrase or sentence; deaf children do this with signs (354). A deaf child will use a simple, single sign just as a hearing child will use one simple word.

The next stage, the telegraphic stage, occurs when hearing children start to use two-word phrases. Deaf children start to combine signs to form two-sign phrases as well, but function signs are omitted (355). These three stages, together with the sharing of certain language universals, are indicators that signed and spoken languages are acquired through the same linguistic processes.

Along with normal language acquisition, it has been suggested that deaf children undergo bilingual language acquisition, "the simultaneous acquisition of two languages beginning in infancy" (Fromkin et al. 357). Because many deaf children are born to hearing parents, their exposure to sign language may not occur as rapidly as their exposure to spoken languages, whether through speech or writing. Fromkin et al. refer to this as simultaneous bilingualism (357). In the Lederberg et al. article "Language and Literacy Development of Deaf and Hard-of-Hearing Children: Successes and Challenges," the writers view this learning as simultaneous communication. This model was developed by educators in the 1960s and 1970s and produced sign systems, not just sign languages. These systems combine signs from natural sign languages with newly created signs to represent grammatical morphemes of spoken language (Lederberg et al. 17). Such sign systems are not the only indication that deaf children are bilingual. While learning a sign language, a deaf child is also learning to read and write a spoken language, and possibly to speak it as well, thanks to other advances in technology such as the hearing aid and the cochlear implant (19). Through bilingual language acquisition, Thomas Hopkins Gallaudet was able to learn French Sign Language alongside his spoken language and to help develop a genetically related sign language in North America: American Sign Language, or ASL.

American Sign Language was established in North America (the United States and some English-speaking sections of Canada) around 1817, when Thomas Hopkins Gallaudet and Laurent Clerc established the first school for the deaf in Hartford, Connecticut (Valli and Lucas 14).

The Connecticut Asylum for the Education and Instruction of Deaf and Dumb Persons, now called the American School for the Deaf, was created on the basis of another sign language, French Sign Language, or FSL (14). Since American Sign Language branches from French Sign Language, the two are genetically related. Sicard, a Frenchman who was the director of the Royal Institution of the Deaf in Paris, invited Gallaudet to learn French Sign Language and the school's teaching methods. While learning the language and the teaching methods, Gallaudet met one of Sicard's students, Laurent Clerc, who later traveled to the United States with Gallaudet. The two exchanged their knowledge of French Sign Language and English, built upon that knowledge, and developed a school for America's deaf community.

American Sign Language shares many characteristics with spoken English, including the language universals: morphology, phonology, syntactic structure, pragmatics, and a mental lexicon. Morphology is the study of the structure of words, which includes the formation of a word, or in this case a sign (Fromkin et al. 81). Phonology is the sound system of a language (230). English phonology can be described in three ways: auditory, acoustic, and articulatory. American Sign Language likewise has three phonological parameters: configuration, movement, and location. Location in American Sign Language is comparable to the manners and places of articulation in English. Syntax, or syntactic structure, refers to the rules of sentence formation and the speaker's knowledge, or in this case the signer's knowledge, of sentences and their structure (118). American Sign Language also has its own proper syntax for forming sentences. Along with the other language universals, there is pragmatics, which can be defined as the study of how context and situation affect meaning (207). Pragmatics can be seen in body language, or kinesics. American Sign Language clearly uses body language and kinesics to convey meaning in its signs, such as certain facial expressions that match the tense or part of speech of a sign.

The morphology of American Sign Language is similar to, but distinct from, that of English. Signs, just like words, belong to grammatical categories such as root morphemes, affix and suffix morphemes, free and bound morphemes, lexical content and grammatical morphemes, and inflectional and derivational morphemes. Along with these grammatical categories, there are morphological rules that are used to form signs (Fromkin et al. 103). The difference between American Sign Language and English is that the morphological process is not as linear as it is in spoken languages. The stem or root of a sign occurs with various movements and locations in what signers call the signing space, so that the gestures are simultaneous. Research on models of language-learning environments has found that American Sign Language has multiple morpho-syntactic characteristics that are performed simultaneously (Lederberg et al. 16). These characteristics are expressed on the hands, on the face, in space, and through different types of movement, rather than in the sequential order of spoken languages (16). In an article examining American Sign Language morphology, Dennis Galvan states that ASL "incorporates additional information into a sign via simultaneously produced layers rather than sequentially produced units" (321). This cannot occur in spoken languages (Fromkin et al. 103).

An Introduction to Language explains these categories and what each one means to a signer. Important categories include suffix morphemes, inflectional morphemes, and derivational morphemes. Suffix morphemes are used for negation: in American Sign Language, if the signer wants something, the palm faces upward; to negate that, the signer simply turns the palm downward to indicate something he or she does not want. For derivational morphemes, the root of the sign stays the same, but the movement of the hands changes, which gives the word a different contextual meaning and a different part of speech (Fromkin et al. 103).

Galvan conducted a study that examines the morphological knowledge of late signers (deaf children who did not learn ASL before the age of twelve) and native signers (deaf children born to deaf parents who learned ASL in infancy). Within the study, Galvan asked thirty deaf children to look through a story and then sign it. Gathering the most commonly used verbs in the children's stories, Galvan explains that American Sign Language incorporates changes in movement to indicate different verb forms (322). The examples Galvan uses in his research illustrate this phenomenon well. For example, repeating a verb in short, regular movements adds the recursive aspect, meaning that someone or something is performing a task over and over again, while repeating the verb in long, regular ellipses adds the durative aspect, meaning that someone or something is performing a task for a long period of time, such as sleeping (322). Finally, there are inflectional morphemes, which can indicate how close to or far from the body the signer produces a sign. For example, to indicate that he or she loves another person, the signer draws the hand closer to the body; to indicate that another person loves him or her, the signer moves the hand away from the body (Fromkin et al. 104).

Phonology is the study of sounds for spoken language, but for sign language it is, in large part, the study of location. While English phonology involves three kinds of properties (acoustic, auditory, and articulatory), sign language has three parameters as well: configuration, movement, and location. The configuration of the hand is its shape, which can be straight or an arc. Movement deals with the hands and arms and how they move toward or away from the body; there are two types of movement in signs, unidirectional (one direction) and bidirectional (moving in one direction and back). Lastly, location deals with the signing space and where the hands are located within that space (Fromkin et al. 257).

Any change in location, hand shape, or movement changes the sign and its meaning. Along with these three parameters, sign languages also have manners and places of articulation. For example, the sign for father and the sign for fine differ in only one place of articulation. American Sign Language has over thirty hand shapes, and those hand shapes, which include one-handed and two-handed signs, vary across sign languages just as words vary across spoken languages (259).

In a study of sign language phonology, Ronnie B. Wilbur explores the role of contact in each element of sign language phonology. Wilbur explains that there are approximately twelve possible locations: six for the face, head, and neck; one for the trunk; two for the arm; two for the hand; and one for neutral space (Wilbur 203). Whenever a sign makes contact with the body, it is given a location value; if there is no contact, the sign is considered a sign of neutral space (203). This study also identifies another element of sign language phonology, orientation, and refers to these four categories as the "big four" (204).

Phonology and pragmatics occur simultaneously. This means that body movement and facial expressions, or kinesics, occur while the sign is being formed, especially in certain verb classes, as Malaia et al. propose in their article on the kinematic parameters of signed verbs. Malaia et al. refer to this movement and body language as kinematics and examine the different verbs that contribute to a signer's movement and body language. The two types of verbs examined are telic verbs and atelic verbs (1677). Telic verbs describe events that involve change, while atelic verbs describe the event itself (1677). Malaia et al. also make reference to an earlier study by Wilbur, which concluded that ASL lexical verbs can be analyzed as telic and atelic based on their form (1678).

These two verb classes contribute to the phonological structure of American Sign Language and to the pragmatics of the language as well.

Each category of the "big four" is closely examined with contact as an underlying factor. For configuration, signs display an ease of articulation or, as Wilbur puts it, a slurring of the movement (204). The study also suggests that the second element, hand shape, exhibits complementary distribution, which means that the phones cannot occur in the same phonetic environment (Fromkin et al. 275). For contact and location, Wilbur describes the relationship between the two as co-occurrence, whereby certain places of formation occur with particular manners of formation, much as in the articulatory phonetics of spoken languages (Wilbur 204). Although the last element, orientation, was not closely examined, stress was examined in this study. This could lead a reader to believe that sign language phonetics also has prosodic features (another branch of pragmatics), which is suggested in another study as well. That study proposes that facial expressions, speed of movement, and body movement (what Wilbur calls lesser components of sign language phonology) convey the complex range of functions that pitch, duration, and loudness do in spoken languages (Lederberg et al. 16).

Along with kinesics and prosodic features, a different argument has been developed: iconicity. Iconicity is the relationship between form and meaning (Thompson et al. 550), and researchers believe that American Sign Language uses this principle more than spoken language does. These researchers introduce the topic by stating, "Although sign languages conform to the same grammatical constraints and linguistic principles found in spoken languages and are acquired along the same timeline, they make use of iconicity to a much greater extent than spoken languages" (550). This suggests that arbitrariness does not apply to sign languages in the same way that it applies to spoken languages, and it bears on what other scholars have concluded about sign language as well.

Arbitrariness in spoken languages means that the relationship between the way a word is pronounced and its meaning is arbitrary, with no natural connection between the two (Fromkin et al. 5); in sign language, it concerns the relationship between the way a sign is formed and the sign's meaning. These researchers conducted a study evaluating whether deaf signers, hearing signers, and non-signing hearing people saw a connection between a sign and a picture of the object itself. Native signers were able to respond faster than any other group (552), which suggests that for native signers of American Sign Language the link between form and meaning is not entirely arbitrary. Thompson et al. suggest that arbitrariness is critical to a larger lexicon and, interestingly enough, that children do not use iconicity in the first stages of language acquisition (554). Although this principle has been viewed as linguistically impossible for both kinds of language, the development of this new theory holds the potential for other new developments and theories as well.

In American Sign Language, syntactic structure has its own rules and guidelines just as it does in spoken languages. "Space is an important part of ASL grammar" (Kegl 173), namely the signing space referred to earlier. Kegl locates this signing space vertically from just below the stomach to the top of the head; horizontally, it forms a bubble in front of the signer spanning about 180 degrees (173-74). An interesting focus of this study is word order in American Sign Language. Although the topic is controversial among some researchers, American Sign Language follows a subject-verb-object order (Kegl 178). Many signers, because of their bilingual abilities, were also able to identify a proper sentence in various orders (178). Bilingualism may contribute to the syntactic structure of American Sign Language. When sentences are shown to signers in different orders or are lacking certain syntactic properties, signers fall back on English strategies in order to determine a relationship (178-79).

Kegl came to the conclusion that most structure comes from inflected verbs (182). What is interesting about this particular study is that Kegl proposed a condition suggesting that there is really no fixed word order in a language. This condition, called the Flexibility Condition, states that "the more inflected the verb, the freer the word order" (182). If this principle applies to American Sign Language, perhaps it could eventually apply to spoken languages as well, since the two share other linguistic properties. This principle, together with the evaluation of iconicity, shows that language and language acquisition are always evolving, and that humans may come to recognize such developments as new dimensions of language acquisition in the future.

While the focus of this paper is the language acquisition of both signed and spoken languages and how the two share linguistic properties, it is important to also look at other theories being developed by scholars, since language is constantly changing. Indeed, in the future there may be other advances in signed and spoken languages that will affect language acquisition. Each language universal is being reexamined through new models and new research, such as syntax having a freer order (the Flexibility Condition), phonology being closely related to contact and to kinesics, and iconicity challenging arbitrary meaning. New studies also suggest that the languages children learn through the linguistic stages remain important to these debates.

In conclusion, signed languages and spoken languages are viewed by many people as two completely different kinds of language. Linguistically, however, they are quite similar. Language acquisition is present in all humans, whether they are born deaf or born hearing. The acquisition of signed languages shares the same linguistic stages and universals as spoken languages. Through the study of how deaf children and hearing children learn language, it can be concluded that the two kinds of language share the same universals: morphology, phonology, syntax, and pragmatics.

Among the established studies of sign language, particularly American Sign Language, new studies are emerging that may provide humans with another way to acquire language in the future. Sign languages and spoken languages differ in one primary respect: one is for the hearing and one is for the deaf.

Works Cited

Fromkin, Victoria, Robert Rodman, and Nina Hyams. An Introduction to Language. 9th ed. Boston: Wadsworth, Cengage Learning, 2011. Print.

Galvan, Dennis. "Difference in the Use of American Sign Language Morphology by Deaf Children: Implication for Parents and Teachers." American Annals of the Deaf 144.4 (1999): 320-324. Education Source. Web. 19 Nov. 2013.

Kegl, Judy Anne. "ASL Syntax: Research in Progress and Proposed Research." Sign Language & Linguistics 7.2 (2004): 173-206. Academic Search Premier. Web. 19 Nov. 2013.

Lederberg, Amy R., Brenda Schick, and Patricia E. Spencer. "Language and Literacy Development of Deaf and Hard-of-Hearing Children: Successes and Challenges." Developmental Psychology 49.1 (2013): 15-30. PsycARTICLES. Web. 19 Nov. 2013.

Malaia, Evie, Ronnie B. Wilbur, and Marina Milkovic. "Kinematic Parameters of Signed Verbs." Journal of Speech, Language & Hearing Research 56.5 (2013): 1677-1688. Academic Search Premier. Web. 19 Nov. 2013.

Thompson, Robin L., David P. Vinson, and Gabriella Vigliocco. "The Link Between Form and Meaning in American Sign Language: Lexical Processing Effects." Journal of Experimental Psychology: Learning, Memory, and Cognition 35.2 (2009): 550-557. PsycARTICLES. Web. 19 Nov. 2013.

Valli, Clayton, and Ceil Lucas. Linguistics of American Sign Language: An Introduction. 3rd ed. Washington, D.C.: Gallaudet University Press, 2000. Web. 15 Nov. 2013.

Wilbur, Ronnie B. "The Role of Contact in the Phonology of ASL." Sign Language & Linguistics 13.2 (2010): 203-216. Academic Search Premier. Web. 19 Nov. 2013.
