
Natural Language Processing

1: INTRODUCTION
ARTIFICIAL INTELLIGENCE Artificial intelligence (AI) is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable. As a theory in the philosophy of mind, artificial intelligence is the view that human cognitive mental states can be duplicated in computing machinery. On this view, an intelligent system is nothing but an information processing system.

Discussions of AI commonly draw a distinction between weak and strong AI. Weak AI holds that suitably programmed machines can simulate human cognition. Strong AI, by contrast, maintains that suitably programmed machines are capable of cognitive mental states. The weak claim is unproblematic, since a machine that merely simulates human cognition need not have conscious mental states. It is the strong claim, though, that has generated the most discussion, since it does entail that a computer can have cognitive mental states. In addition to the weak/strong distinction, it is also helpful to distinguish between other related notions. First, cognitive simulation occurs when a device such as a computer simply has the same input and output as a human. Second, cognitive replication occurs when the same internal causal relations are involved in a computational device as in a human brain. Third, cognitive emulation occurs when a computational device has the same causal relations and is made of the same stuff as a human brain; this condition clearly precludes silicon-based computing machines from emulating human cognition. Proponents of weak AI commit themselves only to the first condition, namely cognitive simulation. Proponents of strong AI, by contrast, commit themselves to the second condition, namely cognitive replication, but not the third.

2: APPLICATIONS OF ARTIFICIAL INTELLIGENCE

2.1: GAME PLAYING You can buy machines that play master-level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute-force computation, looking at hundreds of thousands of positions. Beating a world champion by brute force and known reliable heuristics requires examining about 200 million positions per second.

2.2: SPEECH RECOGNITION In the 1990s, computer speech recognition reached a practical level for limited purposes. United Airlines, for example, has replaced its keyboard tree for flight information with a system that recognizes spoken flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.

2.3: UNDERSTANDING NATURAL LANGUAGE Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.

2.4: NEURAL NETWORKS The field of artificial neural networks uses data structures designed to mimic neurons in the brain to perform recognition and classification. They can be (and have been) used for a huge variety of tasks: predicting the stock market, extracting image data from radar information, controlling cars and robots, and so on. The neat thing about neural networks is that they learn. They are essentially fancy mapping functions: they map one group of input vectors to another, and they learn how to do this mapping themselves, through either supervised or unsupervised learning. They can be applied to areas such as sound and image processing or even robot control, making for interesting research and results.

2.5: ROBOTICS Robotics is almost the best of both worlds, since you get the "neat" factor of artificial intelligence coupled with the physicality of the robot, i.e., something you can touch, build, and interact with.

2.6: ARTIFICIAL LIFE Artificial life is a fast-moving field that looks at simulating life within a computer. It can be life in the most exact sense (mimicking biological phenomena such as digestion and nervous systems) or, more commonly, abstractions of life. A lot of a-life comes in the form of cellular automata (CA). CA are normally organized on a 2D plane and are governed by some very simple rules; from these simple rules some incredibly complex behaviour can arise.
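To make the idea of simple rules producing complex behaviour concrete, here is a minimal sketch of a 2D cellular automaton. The rule set used is Conway's Game of Life, which is my own choice of example; the article does not name a specific automaton.

```python
from collections import Counter

def step(live_cells):
    """Apply one generation of Game of Life rules to a set of live (row, col) cells."""
    # Count how many live neighbours each cell (live or dead) has.
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live_cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "blinker": three cells in a row oscillate between horizontal and vertical.
grid = {(1, 0), (1, 1), (1, 2)}
for _ in range(3):
    print(sorted(grid))
    grid = step(grid)
```

Even this tiny rule set already produces oscillating and moving patterns; larger grids and other rules give the "incredibly complex behaviour" described above.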

3: NATURAL LANGUAGE PROCESSING NATURAL LANGUAGE PROCESSING (NLP) is one of the emerging applications of AI. The goal of NLP is to design and build software that will analyze, understand, and generate the languages that humans use naturally, so that eventually you will be able to address your computer as though you were addressing another person. This goal is not easy to reach. "Understanding" language means, among other things, knowing what concepts a word or phrase stands for and knowing how to link those concepts together in a meaningful way. It's ironic that natural language, the symbol system that is easiest for humans to learn and use, is hardest for a computer to master. Long after machines have proven capable of inverting large matrices with speed and grace, they still fail to master the basics of our spoken and written languages.

The challenges we face stem from the highly ambiguous nature of natural language. As an English speaker you effortlessly understand a sentence like "Flying planes can be dangerous". Yet this sentence presents difficulties to a software program that lacks both your knowledge of the world and your experience with linguistic structures. Is the more plausible interpretation that the pilot is at risk, or that the danger is to people on the ground? Should "can" be analyzed as a verb or as a noun? Which of the many possible meanings of "plane" is relevant? Depending on context, "plane" could refer to, among other things, an airplane, a geometric object, or a woodworking tool. How much and what sort of context needs to be brought to bear on these questions in order to adequately disambiguate the sentence?
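To make this structural ambiguity concrete, here is a small sketch using the NLTK toolkit (my choice of tool, not one mentioned in the article) with a toy grammar of my own that yields exactly the two competing parses of "Flying planes can be dangerous":

```python
import nltk

# Toy grammar: "flying" can be a gerund (the act of flying planes is dangerous)
# or an adjective (planes that are flying are dangerous), giving two parse trees.
grammar = nltk.CFG.fromstring("""
S      -> NP VP
NP     -> Gerund NP | Adj N | N
VP     -> Modal VP | V Adj
Gerund -> 'flying'
Adj    -> 'flying' | 'dangerous'
N      -> 'planes'
Modal  -> 'can'
V      -> 'be'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse(['flying', 'planes', 'can', 'be', 'dangerous']):
    tree.pretty_print()
```

A parser alone can only enumerate the readings; deciding which one is intended requires exactly the world knowledge and context discussed above.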

We address these problems using a mix of knowledge-engineered and statistical/machine-learning techniques to disambiguate and respond to natural language input. Our work has implications for applications like text critiquing, information retrieval, question answering, summarization, gaming, and translation. The grammar checkers in Office for English, French, German, and Spanish are outgrowths of our research; Encarta uses our technology to retrieve answers to user questions; IntelliShrink uses natural language technology to compress cellphone messages; and Microsoft Product Support uses our machine translation software to translate the Microsoft Knowledge Base into other languages. As our work evolves, we expect it to enable any area where human users can benefit by communicating with their computers in a natural way.

Extensive research in NLP over the past decade has brought us one of the most useful applications of AI: machine translation. If we could one day create a program that could translate (for example) English text to Japanese and vice versa without the need for polishing by a professional translator, then bridges of communication could be significantly widened. Our current translation programs have not yet reached this level, but they may do so very soon. NLP research also deals with speech recognition: programs that convert speech into text are now widely used and fairly dependable.

Recent research in Machine Translation (MT) has focused on data-driven systems. Such systems are self-customizing in the sense that they can learn the translations of terminology and even stylistic phrasing from already translated materials. Microsoft Research's MT (MSR-MT) system is such a data-driven system, and it has been customized to translate Microsoft technical materials through the automatic processing of hundreds of thousands of sentences from Microsoft product documentation and support articles, together with their corresponding translations. This customization processing can be completed in a single night, and it yields an MT system capable of producing output on par with systems that have required months of costly human customization.

During the automatic customization (or training) of MSR-MT, pairs of corresponding source and target sentences are parsed to produce graph-like structures called Logical Forms (LFs). These LFs consist of nodes containing normalized words that are connected by arcs representing the functional relations between the words. The words are fed to a statistical word association learner that identifies corresponding single- and multi-word translation pairs. The word pairs, together with the LFs themselves, are then processed by an alignment function, which creates correspondences, or mappings, between nodes or groups of nodes in the source and target LFs. These LF mappings are stored in a special repository called MindNet, and learned word pairs may also be merged with other dictionary information. During translation of new source text (at runtime), the same parser employed during training is used to parse the text and produce a representative LF. The LF is then matched against the LF mappings stored in MindNet in a graph-matching procedure known as "MindMeld". The corresponding target portions of each LF are then stitched together during "Transfer" processing, with recourse to the dictionary of word pairs as needed, to yield a target language LF. Generation produces a target sentence (i.e., a translation) from the target LF.
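The following is a deliberately toy sketch of that train/runtime flow. Every name in it (parse_to_lf, MindNet, mind_meld) is an illustrative placeholder rather than the real MSR-MT code, and the "Logical Form" here is reduced to a bag of words with no graph structure or transfer step.

```python
def parse_to_lf(sentence):
    """Stand-in for the real parser: the 'logical form' is just a bag of words."""
    return frozenset(sentence.lower().split())

class MindNet:
    """Toy repository of source-LF -> target-LF mappings."""
    def __init__(self):
        self.mappings = {}

    def store(self, source_lf, target_lf):
        self.mappings[source_lf] = target_lf

    def mind_meld(self, source_lf):
        """Return the stored target LF whose source LF overlaps the query most."""
        best_source = max(self.mappings, key=lambda lf: len(lf & source_lf))
        return self.mappings[best_source]

# --- training: parse aligned sentence pairs and store the LF mappings ---
bitext = [("click the button", "klicken Sie auf die Schaltflaeche"),
          ("close the window", "schliessen Sie das Fenster")]
mindnet = MindNet()
for source, target in bitext:
    mindnet.store(parse_to_lf(source), parse_to_lf(target))

# --- runtime: parse new text, match it against MindNet, emit a "translation" ---
new_lf = parse_to_lf("click the button")
print(" ".join(sorted(mindnet.mind_meld(new_lf))))
```

In the real system the matching operates over graphs and a separate generation component produces fluent target-language word order; the sketch only shows how training data is turned into a reusable store of mappings that is consulted at runtime.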

The technology for the components comprising MSR-MT, including parsing, LFs, MindNet, and generation, was developed in MSR's NLP group over more than a decade of research. Portions of this technology have been used in the grammar checkers in Word, in the natural language query function of Encarta, and in other MS products. MSR-MT has already been used to translate Microsoft's Product Support Services (PSS) Knowledge Base (KB) into Spanish. In early April of this year, nearly 140,000 articles translated by MSR-MT, together with a few thousand human-translated ones, were made available online at http://support.microsoft.com. (If you go to the web site, click on International Support and choose Spain as your country. You can then enter Spanish queries for the KB and receive back machine-translated hits.) In this case, MSR-MT lowered the cost barrier to obtaining customized, higher-quality MT, and PSS is now able to provide usable translations for its entire online KB. It can also keep current with updates and additions on a weekly basis, something that was previously unthinkable in terms of both time and expense.

MindNet is a knowledge representation project that uses our broad-coverage parser to build semantic networks from dictionaries, encyclopedias, and free text. MindNets are produced by a fully automatic process that takes the input text, sentence-breaks it, parses each sentence to build a semantic dependency graph (Logical Form), aggregates these individual graphs into a single large graph, and then assigns probabilistic weights to subgraphs based on their frequency in the corpus as a whole. The project also encompasses a number of mechanisms for searching, sorting, and measuring the similarity of paths in a MindNet. We believe that automatic procedures such as these provide the only credible prospect for acquiring world knowledge on the scale needed to support common-sense reasoning.
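A toy sketch of the aggregation and weighting step might look as follows. The dependency triples are hand-written stand-ins for parser output (no broad-coverage parser is assumed here), and the weighting is a simple relative frequency rather than MindNet's actual scheme.

```python
from collections import Counter

# Each "parsed sentence" yields (head, relation, dependent) triples,
# standing in for the Logical Form graphs described above.
parsed_sentences = [
    [("car", "Part", "wheel"), ("car", "Purpose", "transport")],
    [("car", "Part", "wheel"), ("wheel", "Attribute", "round")],
    [("bicycle", "Part", "wheel"), ("wheel", "Attribute", "round")],
]

# Aggregate all triples into a single graph with frequency counts ...
graph = Counter(triple for sentence in parsed_sentences for triple in sentence)
total = sum(graph.values())

# ... and turn counts into simple probabilistic weights.
for (head, relation, dependent), count in graph.most_common():
    print(f"{head} --{relation}--> {dependent}: weight {count / total:.2f}")
```

The point of the sketch is only the shape of the process: many small per-sentence graphs are merged into one large graph, and frequently repeated subgraphs end up with higher weights.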

3.1: AMALGAM Amalgam is a novel system developed in the Natural Language Processing group at Microsoft Research for sentence realization during natural language generation. Sentence realization is the process of generating (realizing) a fluent sentence from a semantic representation. From the outset, the goal of the Amalgam project has been to build a sentence realization system in a data-driven fashion using machine learning techniques. To date, we have implemented Amalgam for both German and French, with English in the works. Amalgam accepts as input a logical form graph capturing the meaning of a sentence. The logical form shown here is for the German sentence "Die ODBCSpezifikation definiert das Feld, das die Komponente bezeichnet, die die Meldung ausgegeben hat" (from MS technical manuals), roughly: "The ODBC specification defines the field that identifies the component that issued the message."

Amalgam constrains the search for a fluent sentence realization by following a linguistically informed approach that includes such component steps as labeling of phrasal projections, raising, ordering of elements within a constituent, and extraposition of relative clauses. For the above example, the following tree illustrates the transformed tree just prior to ordering.

Proceeding through these steps, Amalgam transforms the logical form into a fully articulated tree structure from which an output sentence is read.

The contexts for each linguistic operation in the process are primarily machine-learned. The promise of machine-learned approaches to sentence realization is that they can easily be adapted to new domains and ideally to new languages merely by retraining.
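As a rough, hypothetical sketch of such a staged pipeline: the stage names below follow the article, but the trivial rule-based bodies are placeholders for Amalgam's machine-learned models, and the tiny "logical form" is hand-made for the German example above.

```python
# A tiny, hand-made "logical form": (lemma, category, grammatical role) tuples.
logical_form = [
    ("definieren", "Verb", "head"),
    ("Spezifikation", "Noun", "subject"),
    ("Feld", "Noun", "object"),
]

def label_phrasal_projections(nodes):
    # Placeholder: project each lemma to a constituent label (NP, VP, ...).
    return [(lemma, category[0] + "P", role) for lemma, category, role in nodes]

def order_constituents(nodes):
    # Placeholder for the learned ordering model: subject < verb < object,
    # which gives the right main-clause order for this simple example.
    rank = {"subject": 0, "head": 1, "object": 2}
    return sorted(nodes, key=lambda node: rank[node[2]])

def read_off_sentence(nodes):
    # Final step: read the surface string off the ordered structure
    # (inflection and function words are omitted in this sketch).
    return " ".join(lemma for lemma, _, _ in nodes) + "."

tree = logical_form
for stage in (label_phrasal_projections, order_constituents):
    tree = stage(tree)
print(read_off_sentence(tree))   # -> "Spezifikation definieren Feld."
```

In Amalgam each of these decisions (labeling, raising, ordering, extraposition) is made by a classifier trained on data, which is what makes retraining for a new domain or language plausible.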

3.2: INTELLISHRINK IntelliShrink is a plug-in for Microsoft Outlook that shipped as part of Microsoft Outlook Mobile Messenger (MOMM). IntelliShrink consists of two stages. The first stage is a message router that the user trains to identify important or urgent messages. These messages are sent to a cell phone or mobile device. The second stage modifies the messages to fit on the small display device. Since a cell phone has a much smaller display than the typical computer monitor, it would be tedious to scroll through screen after screen of the message. We perform linguistic analysis on the message to identify portions that can be omitted or abbreviated. For example, "next Monday" might reduce to "6/12", or "an apartment" might reduce to "apt". Here are some samples taken from actual email messages: A note concerning payroll processing: DrctDepositPymntsWllBAvlbleInYrAccntWthn3BsnssDysFrmPymntDteBlw. And a note from an online retailer: ThghWeHdExpctd2BAble2SndThsItm2UWe'veSnceFndThtItIsNtAvlbleFrmAnyOfOurSrcs@ThsTme.
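A toy sketch of the shrinking stage might combine a small abbreviation dictionary with vowel dropping. The dictionary entries and heuristics below are illustrative only, not IntelliShrink's actual rules, which rest on real linguistic analysis.

```python
import re

# Illustrative abbreviation dictionary; a real system would be far larger
# and would also rewrite dates, numbers, and omittable phrases.
ABBREVIATIONS = {"apartment": "apt", "appointment": "appt", "payment": "pymnt"}

def shrink_word(word):
    lower = word.lower()
    if lower in ABBREVIATIONS:
        return ABBREVIATIONS[lower]
    # Drop interior vowels, keeping the first letter so the word stays readable.
    return word[0] + re.sub(r"[aeiou]", "", word[1:], flags=re.IGNORECASE)

def shrink_message(message):
    # Capitalize each shrunken word and remove the spaces, as in the samples above.
    return "".join(shrink_word(word).capitalize() for word in message.split())

print(shrink_message("direct deposit payments will be available in your account"))
# -> "DrctDpstPymntsWllBAvlblInYrAccnt"
```

The real product's output (as the samples above show) goes further, substituting digits for words like "to" and "be" and dropping whole clauses judged unimportant.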

3.3: ELIZA

Another field of study in NLP is story understanders, as well as chatterbots. During the 1960s Joseph Weizenbaum created ELIZA. ELIZA created a storm of public interest in AI, as it seemed to help thousands of people with their personal problems. ELIZA played the role of a psychotherapist, one that posed an analytical question in response to every answer the user gave it. Though its responses may sometimes have seemed ambiguous, people actually felt ELIZA could take care of their needs just as well as any other therapist. They became emotionally involved with ELIZA; even Weizenbaum's secretary demanded to be left alone with the program. The following is a transcript of ELIZA chatting with another program, PARRY:

Parry: I don't understand your motives.
Eliza: Why don't you understand my motives?
Parry: Shouldn't I be?
Eliza: What did you say just now?
Parry: That's just the way it is.
Eliza: Suppose that's just the way it were not.
Parry: So?
Eliza: I am not sure I understand you fully.
Parry: You harp on the same questions too much.
Eliza: What does this similarity suggest to you?
Parry: I don't get you.
Eliza: Do you wish that I you did get I?
Parry: It's about time you showed some interest in my feelings.
Eliza: Oh? I showed some interest in your feelings?

When people started calling ELIZA intelligent, Joseph Weizenbaum was outraged. Technically, ELIZA was unable to understand people's personal problems to the depth that another human being could. ELIZA could only manipulate syntax (grammar) and check for certain key words. Certainly, someone with no knowledge that ELIZA was a program could easily conclude that it behaved like a human conversing, although it never really understood anything to the level of detail that humans do. Coincidentally, while ELIZA creates questions to help with people's personal problems, IQATS (Intelligent Question and Answer Test Summarizer), a program written by Sam Hsiung, creates questions for test-making purposes. Unlike ELIZA, IQATS is able to learn how to ask new questions if it is given a sample question and answer. Yet, like ELIZA, it knows and will learn only how to manipulate syntax. It will be able to ask what the capital of Saudi Arabia is; however, if it were given something a bit more complex, such as Martin Luther King's "I have a dream..." speech, it would not be able to come up with questions that force people to draw inferences (e.g., In what context was this speech given?); nor does it really understand what it is asking.
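To illustrate the kind of keyword matching and syntax manipulation ELIZA relied on, here is a minimal sketch. The patterns are my own illustrative examples, not Weizenbaum's original rules, and pronoun reflection ("my" becoming "your") is omitted for brevity.

```python
import re

# Each rule pairs a keyword pattern with a response template that reuses
# part of the user's own wording.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(match.group(1))
    return DEFAULT

print(respond("I am worried about my exams"))  # -> "How long have you been worried about my exams?"
print(respond("The weather is nice"))          # -> "Please go on."
```

Nothing here models the user's problem; the program only reshuffles the user's words, which is precisely the limitation Weizenbaum pointed out.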

Many researchers realized this limitation, and as a result CONCEPTUAL DEPENDENCY (CD) theory was created. CD systems such as SAM (Script Applier Mechanism) are story understanders.

When SAM is given a story and is later asked questions about it, it will answer many of those questions accurately (thus showing that it "understands"). It can even infer. It accomplishes this through the use of scripts. A script designates a sequence of actions that are performed in chronological order in a certain situation. A restaurant script, for example, would say that you need to sit down at a table before you are served dinner. The following is a small example of SAM paraphrasing a story (notice the inferences):

Input: John went to a restaurant. He sat down. He got mad. He left.

Paraphrase: JOHN WAS HUNGRY. HE DECIDED TO GO TO A RESTAURANT. HE WENT TO ONE. HE SAT DOWN IN A CHAIR. A WAITER DID NOT GO TO THE TABLE. JOHN BECAME UPSET. HE DECIDED HE WAS GOING TO LEAVE THE RESTAURANT. HE LEFT IT.

Scripts allow CD systems to draw links and inferences between things. They are also able to classify and distinguish primitive actions. Kicking someone, for example, could be a physical action that implies 'hurt', while loving could be an emotional expression that implies 'affection'.
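A toy sketch of script-based inference in the spirit of SAM might look like this. The restaurant script below is a hand-written simplification, and unlike SAM it does not reason about deviations such as John leaving before being served; it only fills in unmentioned steps.

```python
# A script is an ordered list of stereotypical events for a situation.
RESTAURANT_SCRIPT = ["enter", "sit down", "order", "eat", "pay", "leave"]

def paraphrase(story_events):
    """Fill in unmentioned script steps up to the last event that occurred."""
    last = max(RESTAURANT_SCRIPT.index(event) for event in story_events)
    filled = RESTAURANT_SCRIPT[: last + 1]
    return [(step, "stated" if step in story_events else "inferred")
            for step in filled]

# Story: "John went to a restaurant. He sat down. He left."
story = ["enter", "sit down", "leave"]
for step, source in paraphrase(story):
    print(f"{step:10s} ({source})")
```

The marked "inferred" steps are the script doing the work: the program can answer questions about events the story never mentioned, which is the sense in which such systems "understand" more than ELIZA does.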

3.4: THE LEGENDARY TURING TEST AND ITS WEAKNESSES

Let's move on to a controversial subject involving understanding in natural language systems: how can we be sure whether or not a machine actually understands something? In 1950, Dr. Alan Turing, a British mathematician now considered a father of AI, proposed the Turing test for intelligence. Put simply, the Turing test boils down to the question: "Can this machine trick a human into thinking that it is human?" Specifically, the machine is a natural language system that converses with human subjects. In the Turing test, a human judge is placed in one room, and the machine or another human is placed in another. The judge may ask questions of, or answer questions posed by, the computer or the other human. All communication is done through a terminal; input is typed. The judge is not told before the conversation begins whether the subject he or she is talking to is a human or a computer. Supposing that the judge is conversing with a computer, during and after the conversation he or she must be "fooled" into thinking that the machine is a human in order for the machine to pass the Turing test. There are actually many pitfalls to the Turing test, and it is in fact not very widely accepted as a test for true intelligence. Today, the Loebner Prize is a modern version of the Turing test. The criticisms surrounding the Loebner Prize deal with how the test is carried out. The goal of a contestant is to fool or trick the judge into thinking that his program is a human, and such a prospect does not encourage the advancement of AI. For example, messages are transmitted as text, and as the subject (human or computer) types, the judge sees the text being typed live. Thus, many contestants have been forced to emulate the typing behaviour of humans: output appears at varied speeds, words are sometimes misspelled and then corrected, incorrect punctuation is used, and so on (a small sketch of this trick appears below). Even then, the programs in the contest usually talk about only one subject (to talk about everything present in our culture is simply impossible, at least for a natural language system that understands only words, syntax, and semantics, and not really what things look like or what objects actually do, which will be discussed later in other essays). If the judge picks another subject to discuss, the programs usually try to divert the judge's attention. Programs have even tried to use vulgarity or an element of surprise to get the judge excited (truly, no computer could be vulgar or unpredictable, could it?). For example, you may want to see a transcript of Jason Hutchens' program, which competed for the Loebner Prize. You can also read an interview with Robby Glen Garner, winner of the 98/99 Loebner Prize.
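For illustration only, the typing trick mentioned above could be sketched as follows; this is not any Loebner entrant's actual code, just a plausible way to fake human keystrokes.

```python
import random
import sys
import time

def humanlike_type(text, typo_rate=0.05):
    """Print text with uneven keystroke timing and occasional typo-and-correction."""
    for char in text:
        if random.random() < typo_rate and char.isalpha():
            wrong = random.choice("abcdefghijklmnopqrstuvwxyz")
            sys.stdout.write(wrong)          # hit a wrong key ...
            sys.stdout.flush()
            time.sleep(random.uniform(0.1, 0.3))
            sys.stdout.write("\b \b")        # ... then backspace and correct it
        sys.stdout.write(char)
        sys.stdout.flush()
        time.sleep(random.uniform(0.05, 0.2))  # uneven delay between keystrokes
    sys.stdout.write("\n")

humanlike_type("Well, that depends on what you mean by intelligent.")
```

Note that none of this has anything to do with understanding language; it is pure stagecraft aimed at the judge, which is exactly the criticism of the contest.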

In summary, the outcome of the test is too dependent on human involvement, and so too is the question of whether a certain system is really intelligent or not. Such a question is actually quite trivial and shallow. As Tanimoto puts it, "We should be asking about the kinds, quality and quantity of knowledge in a system, the kinds of inference that it can make with this knowledge, how well-directed its search procedure is, and what means of automatic knowledge acquisition are provided. There are many dimensions of intelligence, and these interact with one another."
