
Mike Mackus

Professor Safir
Honors 264
December 12, 2008

Artificial Intelligence and the Evolution of Language

Recent research within the field of artificial intelligence has aimed to simulate, through the use of autonomous agents interacting within an artificial environment, how language may

have emerged and evolved. Research of this nature can be of great assistance in answering

questions on the genesis of language, especially since it offers possibilities that armchair theorizing does not. That is, while many accounts of the evolution of language appear coherent, there is almost a complete lack of empirical data on which to base such beliefs, leaving us with little more than speculation. In a debate of this nature, it is difficult to find definitive

evidence no matter how simple the question one wishes to answer is; evolution must be partly a

guessing game simply because there is no real way to know for sure (whether or not a certain

trait was the result of exaptation; whether or not a given feature is only a spandrel; etc.). The key, however, is that we are making educated, well-founded guesses and not simply

positing what might sound appealing. This is an especially daunting task when hoping to

reconstruct the evolution of something as intangible as cognition, which leaves behind only indirect means by which to infer certain capacities. Given such a state of affairs, we undoubtedly run the risk of dragging science back into its dark past; we must be cautious that all hypotheses are

testable and that theories are falsifiable. Without at least these basic groundings, the question of the evolution of language will generate a forum of discussion that can only lead to unconstructive and destructive arguments, with people in all corners choosing sides, and such is the antithesis of science. In light of this dilemma, AI offers the possibility of modeling different scenarios of evolution in order to evaluate their plausibility. We have the opportunity to gather actual,
concrete evidence as opposed to working off only indirect inference. Computational simulations

allow for the control of all variables and the ability to manipulate certain constraints within the

domains of speech construction, conceptualization, the grounding of meaning and the mapping

of that meaning, etc. While current research has produced interesting results, I feel there is good

reason that it still is not getting the attention it may appear to deserve. Moreover, after looking at

dominant patterns in AI research with a critical eye we will be able to better assess its value and,

more importantly, be able to determine the direction in which it must move in order to be of the

most assistance in answering the questions that lie at the foundation of the evolution of

language. At the very least, as noted by researchers in fields ranging from anthropology to psychology and from linguistics to paleontology, the question of how language arose is a cross-disciplinary task that can only be handled by cooperation across these disciplines, and this includes the computer scientist. Without properly bringing AI research into the discussion, it may never be properly informed about linguistics and cognition so as to model accurately that which is in question. Furthermore, this discussion must work in both directions, requiring the computer scientist to make his research explicit and open to the others working alongside him on language evolution so as to garner the feedback and critiques necessary for this cross-disciplinary venture.

Language, by definition, can only exist and be actualized by a community of social

beings. AI research takes this as an obvious starting point: simulations of language evolution

require a community of interacting autonomous agents. These agents are essentially the hardware

platform that will eventually perform and hold the language that emerges. Many researchers take

the notion of embodiment very seriously and attempt to replicate what it means to be acting

within a body inside of an environment (Steels and de Beule 2006). Thus, in many cases,

research makes use of robots acting within an artificial ecosystem. These agents require autonomy; they must be able to act upon their environment and ‘see’ that environment through

a number of sensory channels; and an agent must be able to determine its own destiny to a

certain extent.
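
To make these requirements concrete, a minimal sketch of such an agent might look like the following. The names here (Agent, Environment, sense, and so on) are hypothetical illustrations of the three requirements just listed, acting on an environment, perceiving it through several sensory channels, and choosing one's own actions, not the architecture used in the research cited.

```python
import random

class Environment:
    """A toy world: each object is a dictionary of raw sensory values."""
    def __init__(self, objects):
        self.objects = objects

class Agent:
    """Minimal autonomous agent: it senses the world only through its own
    named channels and chooses its own actions without external control."""
    def __init__(self, channels):
        self.channels = channels          # e.g. ['hue', 'size']

    def sense(self, obj):
        # Perceive an object only through the agent's own sensory channels.
        return {c: obj.get(c) for c in self.channels}

    def act(self, env):
        # Autonomy in miniature: the agent itself decides what to attend to.
        target = random.choice(env.objects)
        return self.sense(target)

if __name__ == "__main__":
    world = Environment([{'hue': 0.2, 'size': 3.0}, {'hue': 0.9, 'size': 1.5}])
    robot = Agent(channels=['hue', 'size'])
    print(robot.act(world))   # the agent's own, partial view of one object
```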

Luc Steels, a leading computer scientist who has dedicated much of his time to the

question of language evolution, uses this framework, the interaction of autonomous agents within an artificial environment, to explore two hypotheses (Steels 1998). First, Steels

proposes that language is an emergent property. This appears to be a rather vague starting point:

there is obviously an emergent nature to language given that new languages can be, and indeed are, born; this is particularly the case with sign languages, when a new community of deaf people is brought together; likewise, we can witness similar trends in a pidgin language developing into a creole. There is not much to argue with in a hypothesis that posits language is an emergent property and, as noted above, while this may be a falsifiable premise, such a hypothesis does not leave much room for criticism or construction. Nonetheless, Steels

draws three sub-hypotheses from this. First, he argues that language is a mass phenomenon that can

only be actualized by the interaction of agents within the community. Once again, this is a point

that has been addressed throughout history by philosophers and linguists alike: Wittgenstein

noting that a private language is not a language at all; Saussure stating that language can only

exist throughout the whole community; among countless others who have captured the very

same essence in their own words. Following from the notion of a mass phenomenon, Steels argues,

is the fact that there is no central authority over language. Once again, however, this is not

something that is debatable; in fact, it is the very reason that language can be arbitrary in nature.

Thus it appears that Steels is not formulating hypotheses, but rather stating necessary conditions

of language. However, Steels draws an extra conclusion from the fact that language has no

central authority, a conclusion with which contemporary linguistics would disagree: he proposes that it is not the case that each agent has the same language within his head; moreover,

de Beule’s research simulating the emergence of grammar draws on this premise (de Beule

2008). Within the framework of simulating the growth of concepts, lexical items, and phonemes,

it may be the case that each user does have differences between them: as Steels writes “In this

sense, language is like a cloud of birds which attains and keeps its coherence based on individual

behaviours enacted by each bird” (1998). This analogy, though, breaks down when applied to grammar. Surely, individual users do possess a unique idiolect: one user may have a concept that another does not; one user may possess different words than others, perhaps even attributing meanings to words differently or inaccurately (we need only think of Kripke’s puzzle about belief to see that this is so); and, of course, thinking of trends in sociolinguistics, pronunciation of phonemes often diverges along the lines of class, gender, race, sex, and age. None of these points takes away from the fact that the users are still using the same language.

However, in applying this principle, that each user possesses his own language, to grammar, Steels and de Beule run into a wall. The universals apparent throughout the world’s languages show that a grammar takes on a particular and necessary form; if this were not the case, then there should be an infinite number of possible grammars, with any type of construction being

permissible in order to represent a given proposition. We know, however, that grammars follow

constraints and if one construction is possible then another will not be. This mistake appears to

be the result of extending the notion of an individual’s unique language too far: for example, the

fact that different users may mistakenly attribute different meanings to a given word is a result of

contrasting knowledge about the world and the lack of knowledge about how the linguistic

community deploys a certain term; on the other hand, knowledge of a grammar is below the level

of consciousness and is not influenced by one’s knowledge of the world.


From his first hypothesis, Steels also infers that once the proper physiological, psychological, and social conditions have been satisfied, language will spontaneously form itself. In one sense this appears to undermine the question we are asking: how did language evolve?

Given all the necessary conditions it is apparent that language would arise. Thus should we

rather be asking what those conditions are? To an extent, yes. Here, Hauser, Chomsky and Fitch weigh in with a heavy objection to research of this nature: “… some recent attempts to model the

evolution of language begin with a hypothetical organism that is equipped with the capacity for

imitation and intentionality, as opposed to working out how these mechanisms evolved in the

first place” (2002). I believe, however, that Hauser, Chomsky and Fitch somewhat miss the mark

in this critique. AI research that could simulate the evolution of imitation and intentionality would be groundbreaking, and it might even be, in practice, impossible. The point being made by Steels is simply that, given ‘language-readiness’, language will happen. That is, accounts that call for a catastrophic miracle or the adaptation of one universal parameter at a time are not necessary (Bickerton 1990, Pinker and Bloom 1990). There may be no need for a wave of the magic wand, but, still, the question remains as to what the necessary conditions are under which language can develop.

Here is where I believe Hauser, Chomsky and Fitch should have aimed their objection. Rather

than simulating the evolution of language, I think it is equally important for AI research to simulate contrasting scenarios, even if failure is predicted, in order to highlight what is a

necessary condition and what is not. This approach would better underline what a sufficient

account of language readiness may look like.

Steels’ second hypothesis is that language and meaning coevolve. He argues, as most

would agree, that language cannot be thought of as a simple means of labeling concepts and

structures of concepts that preexisted language. Rather “the complexification of language

contributes to the ability to form richer conceptualizations which then in turn cause language
itself to become more complex” (1998, p. 385). This, however, poses a problem: one of the two must have come first, and we certainly would not assume it is language, because there must

be something for language to denote in order for it to exist. Moreover, we know that other

animals have concepts grounded in perception without any need for language (frogs that have the

concept for ‘food’ when they see a black speck coming through their line of vision, and the vervets

that have a concept for ‘danger’ when they see a leopard). So it does not appear necessary to

formulate the hypothesis in such a strong sense; it is certainly possible that humans had a

complex conception of various actions prior to language. Still, Steels’ hypothesis has value in the

sense that the need to express complex concepts requires a complex language and, in turn, the

complex language allows for growing sophistication in the understanding of concepts and

relationships between concepts. But reformulating the hypothesis in this way means we are

assuming selectional pressure that would make it advantageous to express sophisticated

concepts. Of course, it would be advantageous for any social being to be able to readily transmit information as complex as language allows, but that does not directly imply pressure for it: it would be advantageous for a monkey to evolve an extra limb so as to continue climbing as it grabs bananas, yet this does not mean there is a selectional pressure for such evolution. We also see that this hypothesis is a miscategorization of the term coevolution insofar as it assumes

coevolution requires a growing complexity. We will examine the differences between evolution

and coevolution following a review of Steels’ definition of selection and the adaptive games used

to simulate the course of evolution.

Steels sets up an interesting definition of selection, leaving out any mention of genetics.

His outline for the requirements of selection is as follows: there must be a mechanism for

preserving information; there must be a source of variation; and there must be a feedback loop

that determines the success of particular variations. So in terms of natural selection on genes we
see that the organism, or, more specifically, the DNA, stores the information of the genes;

variation is the result of random genetic mutation; and the feedback loop is the reproductive

success of a particular organism carrying those genes. In setting up the definition for selection

this way, however, Steels is able to use selection in a cultural sense. He posits that language does

not necessarily evolve in a genetic sense, but rather that it is the result of cultural selection. That is,

the individual stores the language, its rules, including the lexicon, the phonology, the grammar,

etc; variation is the result of random mistakes such as overgeneralization and mispronunciation

as well as the formation of new rules and the changing of existing ones; and the feedback loop is

based on minimizing cognitive effort while achieving maximum communicative success. In a

sense, Steels’ definition of selection captures the exact problem of AI research: it deletes the

natural. While the sort of cultural selection he emphasizes must be an important part of language

genesis, it still removes natural selection from the picture and, in turn, takes evolution out of the

question of language evolution. An approach to modeling the evolution of language that assumes

this sort of criteria for selection as an explicit premise already shows specific problems. The

cultural sense of selection emphasized pertains only to the E-language and that is exactly what

we are not looking to examine as the source of language evolution (such is the task of the

historical linguist); the goal of research of this nature should be to determine how the I-language

can evolve, that is, how competence in a language becomes possible (note that this premise may be the cause of AI researchers drawing the conclusion that innateness is not necessary). Moreover, there are general problems with such a definition, which result from Steels attempting to

squeeze his model for language into a traditional selectional account. As we have already noted,

AI research supports a view of language that gives prominence to the E-language; the individual

user does not have an entire view of the language. This poses a difficulty for cultural selection

because the individual cannot be the storage place for language; rather, Steels has made explicit
that the whole community determines the global patterns of language. This may allow for an

analogy to genetics (there are varying genes across the community and it is only the collective

population that makes the characteristics of a species appear coherent), but the analogy fails: the genes of

one organism are not determined by the genes of another; while one language user may propose

a simpler, more manageable utterance for a concept, convention will likely stand in his way.

However, a particular gene has no regard for the genes of conspecifics and will be successful if the trait it produces increases reproductive success. When put in this light, Steels’ transformation

goes from innocent and appealing to construing selection in the wrong sense and applying it to

language, where it does not fit.
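
Whatever one makes of its fit with natural selection, Steels’ three requirements for selection can at least be made concrete: a store of variants, a source of random variation, and a feedback loop rewarding communicative success at low cognitive effort. The sketch below is my own minimal illustration of such a ‘cultural selection’ loop, with hypothetical forms, names, and numbers, not Steels’ implementation.

```python
import random

# A 'variant' is any stored convention, e.g. a word form for a meaning.
# Storage: the agent keeps an inventory of variants, each with a score.
inventory = {"kavu": 0.5, "kafu": 0.5}   # two competing forms, equal standing

def vary(inventory):
    """Source of variation: occasionally a mispronunciation creates a new form."""
    if random.random() < 0.1:
        base = random.choice(list(inventory))
        mutated = base[:-1] + random.choice("aeiou")
        inventory.setdefault(mutated, 0.3)

def feedback(inventory, form, understood, effort):
    """Feedback loop: reinforce forms that communicate successfully at low
    cognitive effort, weaken the others (a cultural criterion, not
    reproductive success)."""
    delta = (1.0 if understood else -1.0) * (1.0 - effort)
    inventory[form] = min(1.0, max(0.0, inventory[form] + 0.1 * delta))

for _ in range(100):
    vary(inventory)
    form = max(inventory, key=inventory.get)        # prefer the strongest form
    understood = random.random() < inventory[form]  # toy stand-in for a hearer
    feedback(inventory, form, understood, effort=len(form) / 10)

print(inventory)   # scores after 100 rounds; the preferred form tends to be reinforced
```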

Steels, nonetheless, uses this definition within the adaptive games. Adaptive games are

essentially the interactions that lead to ‘evolution’ (if we still wish to call it that within this

framework) and the results of the games determine the course that evolution will take. The

games are interactions between two agents or the interaction of an agent with its environment.

These games have definitive measures for success and failure which should immediately warrant

hesitance. From an evolutionary standpoint, success is only measured by reproduction and thus

the prolonging of genes. These games do not handle success in such a way; success is determined

strictly by accurate communication. While this may result in the ‘selection’ of a communal

language that is proficient at transmitting messages about given information, it does not

necessarily imply success from an evolutionary point of view. That is, perhaps the beings that develop language devote so much of their conscious awareness to deploying and understanding speech that predators that were once easily avoidable now sneak up on them with ease. Sure, this

may be a stretch and we may very well be right in assuming that developing a language is highly

beneficial for a species, but we must also be aware that there is no direct implication that

developing a language is an evolutionary success. This captures the point that one must be
hesitant when surrounded by terminology such as “success” and “failure”, which is central to AI research, because, in terms of evolution, there is only one criterion for success.

Discrimination games take place as interactions between a single agent and its

environment. The agent is presented with the context and must use its sensory channels in order

to discriminate between the objects present. This is done by the agent attempting to formulate a

distinct feature set for the particular object in the context that is the topic. When the topic is

defined according to a feature set that picks it out uniquely then the game is a success; the game

is a failure when the agent cannot come up with a feature set to identify the topic and the agent

must then create new means of categorization by refining existing categories or by exploring

previously unused sensory channels. The agent, however, will not know if the change is adequate

until another discrimination game. These games show a quick buildup of a hierarchy of concepts (a minimal sketch of this loop is given below). But games of this nature pose difficulties for the question of language evolution. First,

the game must presuppose a pressure on conceptualization. Such a presupposition may not be a

large leap of faith but, nonetheless, it implies that a sophisticated set of concepts is a precursor to

language. Secondly, the game does not offer what appears to be an accurate representation of

how there might be pressure on conceptualization: in reality, an animal of a certain species may

not accurately discriminate among a number of concepts and that animal would never know and

thus never refine its means of categorizing; that is, while there is no way to have knowledge of a

failure of perceiving a given concept, the agents in the simulation are provided with objective

information about their success and failure. The only real pressure for grounding new meanings

through perception would be the goal of survival so that concepts for things like “food” and

“predator” would either be accurate or the animal would not survive or at least be less likely to

survive. As we may recall, in many animals these essential concepts appear to be the result of

genetic programming and they are for the most part predetermined. But language, and culture in
general, shows that humans have the capacity to create completely arbitrary concepts and

essentially ground distinct meaning any way we so choose: when does a bowl become a cup or

a bush become a tree? The question then does not seem to be one about whether or not the agents

will form sets of concepts grounded through perception; the games highlight the point that it is

necessary that the agents be able to absorb knowledge of the outside world. Steels states, “When

there is no distinctive feature set, the discrimination fails and there is pressure to form new

feature detectors” (Steels 1996, p. 4). This pressure is not an evolutionary pressure but simply the

result of the game that is dictated to the agents. At first thought, the only pressure that might be

supposed for an agent refining its conceptualizations is another agent trying to discuss a topic

which the original agent is unable to discriminate from the rest of the context. But such a

pressure already presupposes language. We are left with the puzzle of what could place pressure

on forming arbitrary categories not directly related to the need for survival. As suggested before,

it seems to be that the agent must have the ability to absorb knowledge about the environment; in

acquiring perceptually grounded knowledge there is then the need for perceptually grounded

meaning in order to discriminate different pieces of knowledge. Being able to communicate

information about the environment would be highly beneficial for a social animal but the

prerequisite for sharing information is first having the ability to gather this information. Steels’

discrimination games explain that a set of concepts can grow rather quickly but that much is

easily imagined especially given the outside pressure put on the agents. However, it seems more

plausible, when picturing what value an experiment of this nature has in actual practice, that the

role discrimination must have played was one directly related to the growing intelligence of

Homo sapiens: knowledge of the environment requires discrimination and thus the formation of

concepts for accurately organizing knowledge of the world. It can be assumed that humans had a

strong capacity for perceiving the world around them accurately; thus, unlike the agents in the
discrimination game, pre-linguistic humans did not need to learn how to see their surroundings;

it is more important that they had the ability to recognize and remember information about the

world. Hence, even though most would agree that concepts are a necessary precursor to

language, more emphasis should be placed on how pertinent information of the environment

requires particular conceptualizations. If such is the case, that the ability to gain knowledge plays into the development of concepts, it reinforces the point that language (or at least the necessary components

for language) and general intelligence must go hand in hand. Steels uses the discrimination game

along with the other games to argue that language and meaning coevolve. Later we will examine

this argument and, given what has just been stated about knowledge and intelligence, I will argue

that a more likely picture is that language and knowledge coevolve.
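
Before turning to the other games, the basic loop of the discrimination game described above can be sketched as follows. This is a minimal illustration under my own simplifying assumptions (binary feature detectors that threshold a single sensory channel), not Steels’ actual implementation.

```python
import random

CHANNELS = ["hue", "size", "position"]

def make_object():
    return {c: random.random() for c in CHANNELS}

class Discriminator:
    """An agent that grows binary feature detectors (channel, threshold)
    until it can tell the topic apart from the rest of the context."""
    def __init__(self):
        self.detectors = []                 # list of (channel, threshold)

    def features(self, obj):
        # The feature set of an object: which side of each threshold it falls on.
        return tuple(obj[c] > t for (c, t) in self.detectors)

    def play(self, context, topic):
        # Success if the topic's feature set is unique within the context.
        topic_feats = self.features(topic)
        others = [o for o in context if o is not topic]
        if self.detectors and all(self.features(o) != topic_feats for o in others):
            return True
        # Failure: refine categorization with a new detector on a random channel.
        channel = random.choice(CHANNELS)
        self.detectors.append((channel, random.uniform(0.0, 1.0)))
        return False

agent = Discriminator()
successes = 0
for game in range(500):
    context = [make_object() for _ in range(4)]
    topic = random.choice(context)
    successes += agent.play(context, topic)

print(len(agent.detectors), "detectors;", successes, "successful games out of 500")
```

Running this for a few hundred games shows the quick buildup of feature detectors noted above, along with the objective success signal that, as argued, has no obvious natural counterpart.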

A second type of adaptive game dictates the growth of the lexicon. These lexical games

take place as an interaction between two agents. One agent, the speaker, selects the topic out of a

given context by gesturing to it. The speaker then, similarly to the discrimination games, comes

up with a feature set that denotes the topic. The speaker codes the features according to its

lexicon and communicates it to the other agent, the hearer. The hearer then decodes the speech with

its own lexicon in order to see if the feature set provided by the speaker matches the expected

feature set of the hearer. The game is a success when the feature set decoded is the one predicted

and the game fails when the feature set decoded does not match the hearer’s expectations. Upon

failure the lexicon of one or both of the agents must be adjusted: the speaker may create a new

word for a particular feature if such a meaning is not yet lexicalized; the hearer may notice the

feature set it decoded was too general and thus refine meanings of words already in the lexicon.

In terms of language genesis, the lexical games capture the essence of how a lexicon will grow

throughout a community. One can imagine the beginning of a pidgin language where early on

certain words will be used inaccurately. Over time and after repeated interactions, the whole
community will begin to use the same words for the same meanings as a result of positive and

negative feedback. Similarly, the lexical games show growing stability in an agent’s lexicon

throughout the course of repeated interactions and the whole community will converge on a

similar set of word-meaning associations. As noted above, the agents do not have the same

lexicon; successful communication, however, requires that the lexicons of two agents must have

some similarity. One may take this moment to consider the skeptical challenge that if the

association between word and meaning is in the head of an individual and not necessarily shared

then there is no determinate meaning. When the speaker gestures towards the topic how is it that

the hearer should anticipate the feature set accurately without coming up with his own,

completely unique feature set? Similarly for the child during language acquisition: he learns

thousands of words in a relatively short amount of time making it highly unlikely that the child

considers each possible meaning of a word before figuring out its precise meaning. According to

the lexical games and what we have assumed of pidgins, the answer seems to rely on the

feedback loop providing reinforcement over a number of interactions. For the child, however, it does not appear to be the case that he requires repeated interactions to the same

extent. That is, the human language faculty must have some innate component that allows for

such rapidity in generating a vocabulary and delineating meanings that is especially active during

the acquisition period. So while the lexical games do explain how a common lexicon may arise

among a community of speakers it still does not explain how humans may have evolved the

genetic disposition to acquire a lexicon with such rapidity during the critical period. It may be

interesting to put a twist on the lexical game where a new agent with a blank slate is introduced

into a population with a fully developed lexicon. Comparing the results of typical lexical games with those of this adjusted one, it could be of interest to contrast the different rates at which a lexicon is stabilized by the community and adopted by a new “infant” agent, especially with regard to varying numbers of interactions.
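
A lexical game of the kind just described can be sketched minimally as follows, with meanings reduced to atomic labels rather than feature sets; the names here (Agent, speak, hear) are hypothetical simplifications rather than the actual research code.

```python
import random

MEANINGS = ["food", "danger", "shelter", "water"]

def new_word():
    return "".join(random.choice("ptkmnaiu") for _ in range(4))

class Agent:
    def __init__(self):
        self.lexicon = {}                        # meaning -> preferred word

    def speak(self, meaning):
        # Invent a word if this meaning is not yet lexicalized.
        if meaning not in self.lexicon:
            self.lexicon[meaning] = new_word()
        return self.lexicon[meaning]

    def hear(self, meaning, word):
        # Success if the word matches the hearer's expectation; on failure
        # the hearer adopts the speaker's word (a crude feedback loop).
        if self.lexicon.get(meaning) == word:
            return True
        self.lexicon[meaning] = word
        return False

population = [Agent() for _ in range(10)]
for game in range(3000):
    speaker, hearer = random.sample(population, 2)
    meaning = random.choice(MEANINGS)            # the 'topic' picked out by gesture
    hearer.hear(meaning, speaker.speak(meaning))

# After many interactions the community converges on shared word-meaning pairs.
for meaning in MEANINGS:
    print(meaning, {a.lexicon.get(meaning) for a in population})
```

The ‘infant’ twist proposed above would amount to appending a fresh Agent() to the population once the lexicon has stabilized and counting how many further interactions it needs before its lexicon matches the community’s.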

Another type of adaptive game uses imitation in order to show the development of a

repertoire of shared sounds. These imitation games occur between two agents. The speaker

selects a phoneme in his repertoire or produces a new phoneme. After the speaker produces the

phoneme the hearer attempts to reproduce the given phoneme. The original speaker then listens

in order to see if the hearer has reproduced accurately and can provide feedback accordingly.

These games lead to a distinct phonetic system especially if there is pressure for a larger phonetic

inventory due to a growing lexicon. While these imitation games highlight how a phonetic

continuum can be divided into discrete phonemes we may assume that the first phonetic

inventory was not built in such a way; nonetheless, the games make clear that a growing lexicon adds sufficient pressure for an expanding phonetic inventory.
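
A bare-bones rendering of such an imitation game might look like the following, with phonemes reduced to points on a one-dimensional acoustic continuum and a small probability of coining a new phoneme standing in for lexical pressure; this is my own simplification, not the representation used in the actual research.

```python
import random

NOISE = 0.05          # articulatory/perceptual noise on a 1-D acoustic continuum
INVENT = 0.01         # chance the speaker coins a brand-new phoneme (lexical pressure)

class Talker:
    def __init__(self):
        self.phonemes = [random.random()]     # repertoire: points in [0, 1]

    def produce(self, target):
        return target + random.gauss(0, NOISE)

    def closest(self, sound):
        return min(self.phonemes, key=lambda p: abs(p - sound))

def imitation_game(speaker, hearer):
    if random.random() < INVENT:
        speaker.phonemes.append(random.random())              # new phoneme
    target = random.choice(speaker.phonemes)
    heard = hearer.closest(speaker.produce(target))           # hearer's imitation
    echoed = speaker.closest(hearer.produce(heard))           # speaker evaluates it
    if echoed == target:
        return True                                           # positive feedback
    hearer.phonemes.append(target + random.gauss(0, NOISE))   # repair: new category
    return False

a, b = Talker(), Talker()
score = sum(imitation_game(*random.sample([a, b], 2)) for _ in range(5000))
print(len(a.phonemes), len(b.phonemes), "phonemes; success rate", score / 5000)
```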

De Beule, following in the footsteps of Steels, proposes that through similar adaptive games a

grammar will also be constructed and that this grammar will be compositional, hierarchical and

recursive (2008). In these interactions both the speaker and hearer are presented with the scene

but only the speaker is given the topic. Possible topics could be an event or an aspect of the scene

such as “Jones kicked Mary” given to the speaker in terms of logical predicates: “Jones(x) &

Mary(y) & kick(x,y)”. Using its lexicon the speaker will attempt to communicate the topic to the

hearer; if the speaker does not have grammatical rules to form a given construction it must

simply invent them. As we have seen through the other games, repeated interactions lead to the

formation of a coherent set of grammatical rules. More interestingly, the rules that stay in the

language tend to be compositional and offer the possibility for recursion. This research leads De

Beule to conclude that there is no necessity to posit a universal grammar that dictates

grammatical rules and that the formation of a grammar need not be the result of cross-

generational selection. First, we must be wary of any account that claims it is possible to
abandon the existence of universal grammar. Such a conclusion, as in De Beule’s case, is the

result of misunderstanding UG. De Beule’s grammar games show the rise of one particular

grammar, while UG is a set of parameters that dictate the possible form of a grammar. If there were no UG, then any type of grammar would be possible, which is not the case; if there were no UG, we would not be able to make predictions about a grammar, such as concluding, given one type of construction, that another type of construction is not permissible. Moreover, as

we noted with the lexical games, the infant during acquisition does not appear to learn grammar

as the result of repeated interactions. While the child’s acquisition does not necessarily have to

correlate with how language initially evolved, even despite De Beule’s research, there is no way

to circumvent the question of how such a refined genetic disposition to learn grammar evolved.

And secondly, the conclusion that the construction of a grammar is not a multi-generational

project does not seem in agreement with our natural experiments. To return to the formation of

pidgins, these languages are merely ad hoc and devoid of any grammatical items, lacking the

expressivity of a fully-formed language. However, it is when the children of pidgin speakers

adopt this language as their native tongue that it becomes a fully functional language with a true

grammar. And lastly, De Beule is doing this research within the framework of the Fluid

Construction Grammar, a formalized construction grammar created alongside Luc Steels (Steels

and De Beule 2006). A construction grammar envisions grammar in a much different light than do most contemporary linguists, who adhere to transformational grammar. A construction grammar

is essentially a pairing of form and content, where a proposition conforms itself to a syntactic

template. This puts emphasis on the actual constructions as opposed to syntactic categories.

Under such a model, grammar simply becomes an inventory of constructions. Approaching the

problem of how syntax may have emerged from this standpoint seems to immediately separate

the research from the question looking to be answered. Steels and De Beule state that,
“Construction grammar is receiving a growing amount of attention lately partly because it has

allowed linguists to discuss a wide range of phenomena which were difficult to handle in earlier

frameworks, and partly because it has allowed psychologists to describe in a more satisfactory

way early language development” (2006, p. 73). While there may be competing schools of thought on a number of questions concerning grammar, it is not the case that Steels and De Beule would find much support among linguists for their celebration of construction

grammar. While I do not know the difficulties that may inhibit such a project, I feel that if the research were constructed under a framework devised in close collaboration with linguists in the

generative grammar camp then we may be able to see how transformational rules could arise and

possibly see if it would result in constructions that are common among natural languages.
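
To illustrate, in rough outline, what it means for such a game to pair a predicate meaning like kick(x, y) with a constructional template, the following sketch maps a meaning onto a word-order template and invents a template when none exists. It is a toy rendering of the general idea of constructions as form-meaning pairings, with entirely hypothetical names, not De Beule’s Fluid Construction Grammar machinery.

```python
import random

# A meaning is a set of predicates over variables, e.g. Jones(x), Mary(y), kick(x, y).
meaning = [("Jones", ("x",)), ("Mary", ("y",)), ("kick", ("x", "y"))]

lexicon = {"Jones": "jones", "Mary": "mary", "kick": "kick"}

class Grammar:
    """A construction grammar as an inventory of form-meaning pairings:
    each two-place predicate is paired with a word-order template."""
    def __init__(self):
        self.constructions = {}       # predicate -> template over role slots

    def express(self, meaning):
        event = next((p, args) for p, args in meaning if len(args) == 2)
        predicate, (agent_var, patient_var) = event
        if predicate not in self.constructions:
            # Invent a template if none exists (as the speaker must in the game).
            template = random.choice([("AGENT", "VERB", "PATIENT"),
                                      ("VERB", "AGENT", "PATIENT")])
            self.constructions[predicate] = template
        names = {args[0]: lexicon[p] for p, args in meaning if len(args) == 1}
        slots = {"AGENT": names[agent_var], "VERB": lexicon[predicate],
                 "PATIENT": names[patient_var]}
        return " ".join(slots[s] for s in self.constructions[predicate])

speaker = Grammar()
print(speaker.express(meaning))   # e.g. "jones kick mary"
```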

From the collection of adaptive games, Steels argues that language is not a product of

evolution but that it is the result of coevolution (1998). We think of evolution in terms of a

species adapting to better survive in its environment but coevolution takes into account that

components of the environment may adapt in response. Imagine a species that is continually

stalked as prey by a certain predator. Natural selection may select for a new trait that allows the

prey to be able to avoid the predator which would then lead to a state of equilibrium. But the

predator may also coevolve in response to the prey’s adaptation which, Steels argues, leads to

growing complexity. He views language in a similar model stating that language and meaning

coevolve in the sense that the language must be sophisticated enough to handle given concepts

and at the same time a sophisticated language allows for a growing number of concepts. This can

be seen as the result of one game’s output being the input for another game: the growing ability

for conceptualization puts pressure on the lexicon and a growing lexicon puts a demand on the

phonetic inventory. But there are a few immediate problems with viewing language in terms of

coevolution. First, it is not self-evident that Steels’ proposal is actually relevant to coevolution in
the genetic sense: that is, this model for language development seems to hold simply because of the nature of a communication system. The parts of language, in a sense, do coevolve, but

coevolution does not directly imply growing complexity. Evolution is not about a goal that

nature wishes to reach; it is only about the survival of genes by any means necessary, with or without complexity. So it seems that a better term for Steels would simply be ‘co-development’.

Secondly, the argument that language and meaning coevolve may not be necessary. As we see

today, new meanings and words are continually invented but they do not lead to a more complex

language. Maybe new meanings lead to a more capable language but not necessarily more

complex. To return to a thought we pondered earlier, could it be that, instead of language and meaning coevolving (or co-developing), it is language and knowledge that coevolve (co-develop)? This may be a subtle distinction but nonetheless, I believe, a proper one. To develop

certain predicates, whether or not for use in language, a human must have had a level of

intelligence capable of understanding things in the world: the predicate for something like ‘give’

would require a person to know who gave what to whom, and a predicate such as ‘kick’ could

only be understood by a human capable of organizing knowledge in terms of subject and direct

object. Thus it is not necessarily the case that an agent able to discriminate objects and form

concepts accordingly will then formulate a lexicon; rather, it is much more plausible to take the

stance that language co-developed alongside a growing capacity for holding knowledge (in direct

relation with working memory and long-term memory). Many animals have a number of

different concepts, yet these species do not generate a language. The difference seems to be the ability to associate pieces of information with concepts.

Steels further posits that language exhibits self-organization, since evolution and coevolution alone cannot account for the coherence and stability that arise throughout the population of

agents. While each has a different language in its head they still all manage successful
communication. A self-organizing system exhibits growth without guidance from any external

force and the individual component parts do not show the characteristics of the whole. That is,

there are emergent properties that are not easily predictable from the properties of the parts of the

whole. While Steels may be right in saying that the local interactions do not show the complexity

of the whole language system, I believe it is still inaccurate to use the term self-organizing the

way it is used here. The simple fact is that the global pattern, the emergent language, is quite

predictable from local interactions. Even though no user has a view of the entire language it does

not change the fact that each agent, after a number of interactions, does indeed have a good

grasp; the agents must adopt a similar lexicon and thus stability and coherence throughout the

system should be predicted and a view of that stability and coherence should be somewhat

predictable from the local interactions among agents. Just as in humans, the acquisition of a

lexicon is not identical in each person; however, there still must be coherence and stability

among each person’s lexicon or else we would no longer be speaking the same language. If we

were able to extract one language user’s lexicon and see every lexical item he has the ability to deploy, it may not give us the whole picture of the language, and it may not even give a

completely accurate picture of the whole language; yet, it would still give us a firm basis on

which to draw conclusions about the lexicon of the whole language in general.

The AI approach has used its research to draw claims such as that there is no need for innateness; that, given that the individual agent does not have an entire view of the language, there is no need to posit the existence of universal grammar; and that the research shows no need for a specific module such as a human language faculty. We have seen problems with aspects of this

research and we have seen ways in which these projects could be redirected and, maybe most

importantly, more linguistically informed, but I believe the biggest downfall of the AI approach

is drawing such extreme conclusions. While Steels does warn that his research offers no empirical claims, it still does us no good to have the computer scientist conclude with a negation of that which is true and of that which should be foundational to the study of language evolution. There should

be no question that humans somehow did evolve in such a way that allows us to acquire a

language with almost no effort; there are universal parameters that every grammar abides by; and

there is some specific quality or collection of qualities unique to the human species that

allows for language. There is no reason for AI research to compete with any of these notions for

it does not further our understanding of the problem of how language evolved. Research of this

nature must be more linguistically informed and, in turn, also be made explicit to the linguist.

Language evolution is a cross-disciplinary effort and this applies as well to the role computer

science and artificial intelligence plays in it. To claim that AI research has the potential to debunk

all nativist accounts is absurd; rather, AI research could prove itself crucial in weeding out many

of the ‘just-so’ stories that have become foundational in the literature. Pinker and Bloom (1990)

claiming that each universal parameter was somehow selected for individually is a case in point: there is no reason to believe an arbitrary packet of rules for grammatical construction

was somehow selected for by natural selection; these rules are simply a reflection of the way the

brain is capable of processing language. With more information from the linguist, the

neurologist, and the psychologist, I feel AI research may be able to better simulate actual

language processing in order to see if we arrive at an artificial reflection of UG. Such a goal

would remove the emphasis current AI work has placed on the development of an E-language

within a population and instead be more directed at finding out what components are necessary for an I-language as sophisticated as that of Homo sapiens. And while artificial intelligence

may be a long way off from modeling something of such complexity, the problem of the

evolution of language also may not be solved for a long time (or ever).
Works Cited

Bickerton, Derek. Language and Species. Chicago: University of Chicago Press, 1990.

De Beule, Joachim. "The emergence of compositionality, hierarchy and recursion in peer-to-peer interactions." Evolang 2008: 7th International Conference on the Evolution of Language, 11 Mar. 2008, Barcelona.

Pinker, Steven, and Paul Bloom. "Natural Language and Natural Selection." Behavioral and Brain Sciences 13 (1990): 707-84.

Steels, Luc, and Joachim De Beule. Proceedings of the 3rd Workshop on Scalable Natural Language Understanding, Association for Computational Linguistics, June 2006, New York, NY. 73-80.

Steels, Luc. "Synthesizing the origins of language and meaning." Approaches to the Evolution of

Language. Ed. James R. Hurford. New York, NY: Cambridge UP, 1998. 384-404.
