
INTRODUCTION

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines." The field was founded on the claim that a central property of humans, intelligence - the sapience of Homo sapiens - can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science. AI research is highly technical and specialized, and deeply divided into subfields that often fail to communicate with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done and the application of widely differing tools. General intelligence (or "strong AI") is still among the field's long-term goals.

HISTORY

The history of artificial intelligence began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades. They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI's founders were profoundly optimistic about the future of the new field. Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation the problem of creating 'artificial intelligence' will substantially be solved". In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of one or more human experts.

By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field.

However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer lasting AI winter began. On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. AI applications are no longer the exclusive domain of Department of Defense R&D, but are now commonplace consumer items and inexpensive intelligent toys. In common usage, the term "AI" no longer seems to apply to off-the-shelf solved computing-science problems, which may have originally emerged out of years of AI research.

PROBLEMS
The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.

Deduction, Reasoning, Problem Solving

For difficult problems, most of these algorithms can require enormous computational resources: most experience a "combinatorial explosion", where the amount of memory or computer time required becomes astronomical once the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research. Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.
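To make the combinatorial explosion concrete, the short sketch below (a minimal illustration, not drawn from any particular AI system) counts the nodes in a uniform search tree as the depth grows; the branching factor of 35 is a commonly quoted average for chess.

```python
# Counts nodes in a uniform search tree to show how quickly
# complete search becomes infeasible as depth grows.

def tree_nodes(branching: int, depth: int) -> int:
    """Total nodes in a uniform tree: 1 + b + b^2 + ... + b^d."""
    return sum(branching ** level for level in range(depth + 1))

if __name__ == "__main__":
    for depth in (2, 4, 6, 8):
        print(f"depth {depth}: {tree_nodes(35, depth):,} nodes")
```

At depth 8 the tree already holds more than two trillion nodes, which is why uninformed, exhaustive search breaks down so quickly.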

Knowledge Representation

Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge; and many other, less well researched domains. One of the most difficult problems in knowledge representation is that many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.

Planning
Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future and be able to make choices that maximize the utility (or "value") of the available choices. In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be. However, if this is not true, it must periodically check whether the world matches its predictions and change its plan as this becomes necessary, requiring the agent to reason under uncertainty.

Learning
Machine learning has been central to AI research from the beginning. In 1956, at the Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine". Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
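A minimal sketch of the two supervised settings described above, using invented toy data: a one-nearest-neighbour classifier picks a category from labelled examples, and least squares fits a continuous function to numerical input/output pairs.

```python
import numpy as np

# Classification: a one-nearest-neighbour classifier assigns the
# category of the closest training example (hypothetical data).
train_x = np.array([[1.0, 1.0], [1.2, 0.9], [4.0, 4.2], [4.1, 3.9]])
train_y = np.array(["small", "small", "large", "large"])

def classify(point):
    distances = np.linalg.norm(train_x - point, axis=1)
    return train_y[np.argmin(distances)]

# Regression: least squares fits a continuous function
# (here a line, y = a*x + b) to numerical input/output examples.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.1, 2.9, 5.2, 7.1])
a, b = np.polyfit(xs, ys, deg=1)

print(classify(np.array([1.1, 1.0])))      # -> "small"
print(f"fitted line: y = {a:.2f}x + {b:.2f}")
```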

Natural Language Processing

[Image: ASIMO uses sensors and intelligent algorithms to avoid obstacles and navigate stairs.]

Natural language processing gives machines the ability to read and understand the languages that humans speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation. Natural language processing is a very attractive method of human-computer interaction. Natural language understanding is sometimes referred to as an AI-complete problem because it seems to require extensive knowledge about the outside world and the ability to manipulate it.
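As a small illustration of the information-retrieval application mentioned above, the toy sketch below ranks invented documents by how many query words they contain; real systems use weighting schemes such as tf-idf rather than raw word overlap.

```python
# A toy information-retrieval sketch: rank documents by the number
# of query words they share with the query (illustrative data only).
documents = {
    "doc1": "the robot climbed the stairs",
    "doc2": "natural language processing reads human text",
    "doc3": "machine translation converts text between languages",
}

def retrieve(query: str):
    query_words = set(query.lower().split())
    scores = {
        name: len(query_words & set(text.lower().split()))
        for name, text in documents.items()
    }
    return sorted(scores.items(), key=lambda item: -item[1])

print(retrieve("translation of natural text"))
```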

Motion and Manipulation

The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there).
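A minimal motion-planning sketch: assuming localization and mapping are already solved (the grid below is an invented occupancy map), breadth-first search figures out how to get from start to goal.

```python
from collections import deque

# Occupancy grid for motion planning: 1 = obstacle, 0 = free cell.
GRID = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def plan(start, goal):
    """Breadth-first search for a shortest obstacle-free path."""
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        (row, col), path = frontier.popleft()
        if (row, col) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (row + dr, col + dc)
            if (0 <= nxt[0] < len(GRID) and 0 <= nxt[1] < len(GRID[0])
                    and GRID[nxt[0]][nxt[1]] == 0 and nxt not in visited):
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no path exists

print(plan((0, 0), (3, 3)))
```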

Perception

Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected sub-problems are speech recognition, facial recognition and object recognition.

Social Intelligence

[Image: Kismet, a robot with rudimentary social skills.]

Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory and decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Second, for good human-computer interaction, an intelligent machine needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.

Creativity
A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative, or systems that identify and assess creativity). Related areas of computational research are Artificial Intuition and Artificial Imagination.

General Intelligence

Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.

Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.

APPROACHES
There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence, by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems? Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing? John Haugeland, who coined the term GOFAI, also proposed that AI should more properly be referred to as synthetic intelligence, a term which has since been adopted by some non-GOFAI researchers.

Cybernetics And Brain Simulation


In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England. By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

Symbolic
When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: CMU, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI". The main symbolic approaches are cognitive simulation, logic-based AI, "anti-logic" (or "scruffy") AI, and knowledge-based AI.

Sub-Symbolic
During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[92] By the 1980s, however, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.
1. Bottom-up, embodied, situated, behavior-based or nouvelle AI: Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[93] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 50s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.
2. Computational intelligence: Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle 1980s. These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.

Statistical
In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a "revolution" and "the victory of the neats."

Integrating the Approaches

1. Intelligent agent paradigm: An intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. More complicated agents include human beings and organizations of human beings (such as firms). The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works: some agents are symbolic and logical, some are sub-symbolic neural networks and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields, such as decision theory and economics, that also use concepts of abstract agents. The intelligent agent paradigm became widely accepted during the 1990s (see the sketch after this list).
2. Agent architectures and cognitive architectures: Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system. A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling. Rodney Brooks' subsumption architecture was an early proposal for such a hierarchical system.
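A minimal sketch of the paradigm: any system with a perceive-act loop counts as an agent, regardless of what happens inside. The thermostat below is a deliberately trivial, hypothetical example.

```python
# The intelligent agent paradigm reduced to its core: perceive the
# environment, then act so as to move toward the goal state.
class ThermostatAgent:
    def __init__(self, target: float):
        self.target = target

    def act(self, percept: float) -> str:
        """Map a perceived temperature to an action that should
        maximize the agent's chance of reaching its goal."""
        if percept < self.target - 0.5:
            return "heat"
        if percept > self.target + 0.5:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=21.0)
for temperature in (18.0, 21.2, 24.5):
    print(temperature, "->", agent.act(temperature))
```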

TOOLS AND METHODS

Logical AI

What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. The first article proposing this was [McC59]. [McC89] is a more recent summary. [McC96b] lists some of the concepts involved in logical AI. [Sha97] is an important text.

Search
AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains.
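The sketch below illustrates this kind of exhaustive examination of possibilities on the tiny game of Nim (players alternately remove one to three stones; whoever takes the last stone wins). It is a minimal minimax search, not an efficient game engine; memoization stands in for the domain-specific efficiency tricks mentioned above.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones: int) -> bool:
    """True if the player to move can force a win."""
    if stones == 0:
        return False  # previous player took the last stone and won
    # Examine every possible move; one winning reply is enough.
    return any(not best_outcome(stones - take)
               for take in (1, 2, 3) if take <= stones)

for n in range(1, 9):
    print(n, "stones:", "win" if best_outcome(n) else "lose")
```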

Pattern Recognition
When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.

Representation
Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.

Inference
From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is to be inferred by default, but the conclusion can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning.
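A minimal sketch of the bird/penguin example in code: the default rule concludes that birds fly, and adding a premise withdraws the conclusion, which is exactly what makes the reasoning non-monotonic.

```python
# Default (non-monotonic) reasoning: conclude "flies" by default,
# but withdraw the conclusion when contrary evidence is added.
def can_fly(facts: set) -> bool:
    if "penguin" in facts:          # exception defeats the default
        return False
    return "bird" in facts          # default rule: birds fly

facts = {"bird"}
print(can_fly(facts))               # True: inferred by default

facts.add("penguin")                # a new premise is added...
print(can_fly(facts))               # False: conclusion withdrawn
```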

Common Sense Knowledge And Reasoning

This is the area in which AI is farthest from human-level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts.

Learning From Experience


Programs can learn from experience; the approaches to AI based on connectionism and neural nets specialize in this. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.

Planning
Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
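The toy planner below follows this recipe under strong simplifying assumptions: actions (with invented names) are triples of preconditions, added facts and removed facts, and breadth-first search over world states returns a sequence of actions that achieves the goal.

```python
from collections import deque

# name: (preconditions, facts added, facts removed) -- hypothetical
ACTIONS = {
    "pick_up_key":  ({"at_door"}, {"has_key"}, set()),
    "unlock_door":  ({"at_door", "has_key"}, {"door_open"}, set()),
    "walk_to_door": (set(), {"at_door"}, set()),
}

def plan(state: frozenset, goal: set):
    """Forward search from an initial state to any goal-satisfying state."""
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        current, actions = frontier.popleft()
        if goal <= current:
            return actions
        for name, (pre, add, rem) in ACTIONS.items():
            if pre <= current:
                nxt = frozenset((current - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [name]))
    return None

print(plan(frozenset(), {"door_open"}))
# -> ['walk_to_door', 'pick_up_key', 'unlock_door']
```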

Epistemology

This is a study of the kinds of knowledge that are required for solving problems in the world.

Ontology
Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology began in the 1990s.

Heuristics
A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, may be more useful.
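A minimal sketch of both ideas, assuming search over grid positions: Manhattan distance serves as the heuristic function, and a heuristic predicate compares two nodes by that estimate.

```python
# Heuristic function: estimate how far a node (a grid position)
# seems to be from the goal.
def manhattan(node, goal):
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

# Heuristic predicate: does node_a look closer to the goal?
def better(node_a, node_b, goal) -> bool:
    return manhattan(node_a, goal) < manhattan(node_b, goal)

goal = (4, 4)
print(manhattan((0, 0), goal))        # 8
print(better((3, 4), (0, 1), goal))   # True: (3, 4) seems nearer
```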

Genetic Programming
Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations. It was developed by John Koza's group.
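The sketch below substitutes bit strings for Koza's random Lisp programs so that the mechanics (fitness, selection of the fittest, crossover, mutation over generations) fit in a few lines; it is a plain genetic algorithm on the "one-max" problem, not genuine genetic programming.

```python
import random

random.seed(0)
LENGTH, POP, GENERATIONS = 20, 30, 40

def fitness(bits):
    return sum(bits)  # "one-max": count the 1 bits

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]            # select the fittest
    children = []
    while len(children) < POP - len(survivors):
        mother, father = random.sample(survivors, 2)
        cut = random.randrange(1, LENGTH)        # one-point crossover
        child = mother[:cut] + father[cut:]
        if random.random() < 0.1:                # occasional mutation
            child[random.randrange(LENGTH)] ^= 1
        children.append(child)
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))
```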


APPLICATIONS OF ARTIFICIAL INTELLIGENCE


Almost every branch of science and engineering currently shares the tools and techniques available in the domain of artificial intelligence. However, for the sake of convenience, we mention here a few applications where AI plays a significant and decisive role in engineering automation.

Image Understanding and Computer Vision

A digital image can be regarded as a two-dimensional array of pixels containing gray levels corresponding to the intensity of the reflected illumination received by a video camera. For interpretation of a scene, its image should be passed through three basic processes: low-, medium- and high-level vision.
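A minimal sketch of the lowest of those levels, on an invented 2-D array of grey levels: thresholding separates bright foreground pixels from the background, a typical first step before mid- and high-level interpretation.

```python
import numpy as np

# A digital image as a 2-D array of grey levels (illustrative values).
image = np.array([
    [ 10,  12, 200, 210],
    [ 11, 198, 205,  13],
    [  9,  14,  12,  10],
])

binary = image > 128            # low-level vision: per-pixel threshold
print(binary.astype(int))
print("foreground pixels:", int(binary.sum()))
```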

Speech and Natural Language Understanding

In speech analysis, the main problem is to separate the syllables of a spoken word and determine features like amplitude, and fundamental and harmonic frequencies of each syllable. The words can then be identified from the extracted features by pattern classification techniques. A robot capable of understanding speech in a natural language would be of immense importance. The phonetic typewriter, which prints the words pronounced by a person, is another recent invention where speech understanding is employed in a commercial application, and it is already possible to instruct some computers using speech.
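One of those features, the fundamental frequency, can be estimated by autocorrelation. The sketch below does this for a synthetic 120 Hz tone standing in for a recorded syllable; real speech analysis adds windowing, voicing decisions and much more.

```python
import numpy as np

RATE = 8000                                   # samples per second
t = np.arange(0, 0.05, 1.0 / RATE)
frame = np.sin(2 * np.pi * 120 * t)           # synthetic "syllable"

# Autocorrelation peaks at lags equal to the pitch period.
corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
min_lag = RATE // 500                         # ignore pitches above 500 Hz
peak_lag = min_lag + int(np.argmax(corr[min_lag:]))
print("estimated fundamental frequency:", RATE / peak_lag, "Hz")
```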

Scheduling

In a scheduling problem, one has to plan the time schedule of a set of events to improve the time efficiency of the solution. Flowshop scheduling is an NP-complete problem, and determining an optimal schedule (one that minimizes the makespan) thus requires an amount of time that grows exponentially with both machine size and job size. Finding a sub-optimal solution is therefore preferred for such scheduling problems. Recently, artificial neural nets and genetic algorithms have been employed to solve this problem, and heuristic search has also been used for handling it.
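Whatever search method is used, a candidate schedule must be scored. The sketch below (with an invented processing-time table) computes the makespan of one job order in a flowshop, the quantity a genetic algorithm or heuristic search would try to minimize; exhaustive enumeration is only feasible here because there are just three jobs.

```python
from itertools import permutations

TIMES = [
    [3, 6, 2],   # processing times for job 0 on machines 0..2
    [5, 2, 4],   # job 1
    [1, 4, 3],   # job 2
]

def makespan(order):
    machines = len(TIMES[0])
    finish = [0] * machines          # completion time on each machine
    for job in order:
        for m in range(machines):
            # a job starts on machine m when both the machine and the
            # job's previous operation are finished
            start = max(finish[m], finish[m - 1] if m else 0)
            finish[m] = start + TIMES[job][m]
    return finish[-1]

best = min(permutations(range(len(TIMES))), key=makespan)
print("best order:", best, "makespan:", makespan(best))
```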

Intelligent Control

In process control, the controller is designed from the known models of the process and the required control objective. When the dynamics of the plant are not completely known, the existing techniques for controller design no longer remain valid. Rule-based control is appropriate in such situations.
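A minimal sketch of rule-based control, with hypothetical thresholds: simple condition-action rules map the error between setpoint and measurement to a control output, with no plant model required.

```python
def rule_based_controller(setpoint: float, measured: float) -> float:
    """Condition-action rules for a heating process (invented tuning)."""
    error = setpoint - measured
    if error > 5.0:        # far below target: full heating power
        return 1.0
    if error > 1.0:        # slightly below: moderate power
        return 0.4
    if error < -1.0:       # above target: switch off
        return 0.0
    return 0.2             # near target: holding power

for reading in (60.0, 68.5, 71.5):
    print(reading, "->", rule_based_controller(70.0, reading))
```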

Game Playing
Game playing involves programming computers to play games such as chess and checkers. You can buy machines that can play master-level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute-force computation, looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.

Expert Systems
A "knowledge engineer" interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practicing doctors, provided its limitations were observed.
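The rule-based core of such a system can be sketched as forward chaining: rules fire whenever their conditions hold, until no new facts appear. The medical-style rules below are invented for illustration and are not taken from MYCIN.

```python
# Each rule: (set of conditions, conclusion to add when they hold).
RULES = [
    ({"fever", "infection_suspected"}, "order_blood_culture"),
    ({"blood_culture_positive"}, "bacterial_infection"),
    ({"bacterial_infection"}, "recommend_antibiotics"),
]

def forward_chain(facts: set) -> set:
    """Fire rules repeatedly until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "infection_suspected",
                     "blood_culture_positive"}))
```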

Neural Networks
Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving artificial intelligence problems without necessarily creating a model of a real biological system. The real, biological nervous system is highly complex: artificial neural network algorithms attempt to abstract this complexity and focus on what may hypothetically matter most from an information processing point of view.
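A minimal sketch of the smallest such network: a single perceptron trained on the logical AND function with the classic perceptron learning rule. Real networks stack many units and train by gradient descent, but the abstraction is the same: weighted inputs, a threshold, and weight updates driven by error.

```python
import numpy as np

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = np.array([0, 0, 0, 1])         # AND truth table

weights = np.zeros(2)
bias = 0.0
for _ in range(20):                       # a few training epochs
    for x, target in zip(inputs, targets):
        output = int(weights @ x + bias > 0)
        error = target - output
        weights += 0.1 * error * x        # perceptron learning rule
        bias += 0.1 * error

for x in inputs:
    print(x, "->", int(weights @ x + bias > 0))
```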

Robotics
Robotics is the branch of technology that deals with the design, construction, operation, structural disposition, manufacture and application of robots. Robotics is related to the sciences of electronics, engineering, mechanics, and software.


Driverless Car
A driverless car is a vehicle equipped with an autopilot system, capable of driving from one point to another without input from a human operator. Such a car can even park itself without the driver's help.

Heuristic Classification
One of the most feasible kinds of expert system given the present knowledge of AI is to put some information in one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment and also about the item he is buying and about the establishment from which he is buying it (e.g., about whether there have been previous credit card frauds at this establishment).
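A hedged sketch of heuristic classification for the credit-card example: several sources of information are combined into a score that places the purchase into one of a fixed set of categories. The weights and thresholds are invented, not drawn from any real credit-scoring system.

```python
def classify_purchase(owner_good_record: bool,
                      amount: float,
                      merchant_fraud_history: bool) -> str:
    """Combine several information sources into a fixed category."""
    score = 0
    score += 2 if owner_good_record else -2       # payment record
    score += -1 if amount > 1000 else 1           # purchase size
    score += -3 if merchant_fraud_history else 0  # establishment history
    if score >= 2:
        return "accept"
    if score >= 0:
        return "refer to human reviewer"
    return "decline"

print(classify_purchase(True, 50.0, False))      # accept
print(classify_purchase(True, 2500.0, True))     # decline
```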


EXPERT SYSTEM
Expert Systems are computer programs that are derived from a branch of computer science research called Artificial Intelligence (AI). AI's scientific goal is to understand intelligence by building computer programs that exhibit intelligent behavior. It is concerned with the concepts and methods of symbolic inference, or reasoning, by a computer, and how the knowledge used to make those inferences will be represented inside the machine. Expert systems aim to mimic human reasoning; the methods and techniques used to build these programs are the outcome of efforts in this field. In conventional computer programs, problem-solving knowledge is encoded in program logic and program-resident data structures. Expert systems differ from conventional programs both in the way problem knowledge is stored and in the way it is used. Expert systems are especially important to organizations that rely on people who possess specialized knowledge of some problem domain, especially if this knowledge and experience cannot be easily transferred. Artificial intelligence methods and techniques have been applied to a broad range of problems and disciplines, some of which are esoteric and others which are extremely practical.


EXAMPLES OF PROBLEMS WHERE AN EXPERT ASSISTANT MAY HELP


Productivity
Bottlenecks result where there are only one or a few experienced personnel who spend much of their time helping others rather than applying their expertise to future planning or higher-level tasks. An Expert Assistant can reduce the demands on their time while raising the knowledge level of the junior employees.

Diagnosis Of Problems

Many diagnostic systems have been implemented as Expert Systems. The help desk of a computer facility lends itself to implementation as an Expert Assistant. Relevant facts can be gathered about the problem by clerical staff and the system can recommend the person most able to cope with that type of problem. Routinely occurring problems can have solutions suggested by the Expert Assistant.

Distribution of policy, knowledge or information


With an Expert Assistant, all users in an organisation can have access to the knowledge. Moreover, the knowledge will be made available in more than one location at a time, and when and where needed. Manuals of policy, procedures or regulations are often daunting to many employees. As the information becomes more complex, there is an increased probability of incorrect information being given to clients because the information is so difficult or time consuming to locate. These documents are easily represented as a knowledge base. Having the knowledge captured in a central location improves the ease and speed with which these manuals can be updated.

Loss of expertise from employee turnover or retirement


Long serving employees take expertise from the company when they retire. Developing an Expert System can retain this expertise which would otherwise be lost.


WHEN TO USE AN EXPERT SYSTEM


An expert system should be considered when it can:
1. Provide a high potential payoff or significantly reduced downside risk
2. Capture and preserve irreplaceable human expertise
3. Provide expertise needed at a number of locations at the same time or in a hostile environment that is dangerous to human health
4. Provide expertise that is expensive or rare
5. Develop a solution faster than human experts can
6. Provide expertise needed for training and development to share the wisdom of human experts with a large number of people


ADVANTAGES OF EXPERT SYSTEMS

Permanence

Expert systems do not forget, but human experts may.

Reproducibility

Many copies of an expert system can be made, but training new human experts is time-consuming and expensive. If there is a maze of rules (e.g. tax auditing), then the expert system can "unravel" the maze.

Efficiency

Expert systems can increase throughput and decrease personnel costs. Although expert systems are expensive to build and maintain, they are inexpensive to operate. Development and maintenance costs can be spread over many users. The overall cost can be quite reasonable when compared to expensive and scarce human experts. Cost savings include wages (eliminating the need for a room full of clerks) and other costs (such as minimizing loan losses).

Consistency

With expert systems, similar transactions are handled in the same way. The system will make comparable recommendations for like situations. Humans are influenced by recency effects and primacy effects.

Documentation

An expert system can provide permanent documentation of the decision process.



Completeness

An expert system can review all the transactions; a human expert can only review a sample.

Timeliness

Fraud and/or errors can be prevented. Information is available sooner for decision making

Breadth

The knowledge of multiple human experts can be combined to give a system more breadth than a single person is likely to achieve.

Entry barriers

Expert systems can help a firm create entry barriers for potential competitors

Differentiation

In some cases, an expert system can differentiate a product or can be related to the focus of the firm. Computer programs are best in those situations where the structure of the problem either already exists or can be elicited.


DISADVANTAGES OF EXPERT SYSTEMS

Common Sense

In addition to a great deal of technical knowledge, human experts have common sense. It is not yet known how to give expert systems common sense.

Creativity
Human experts can respond creatively to unusual situations; expert systems cannot.

Learning

Human experts automatically adapt to changing environments; expert systems must be explicitly updated. Case-based reasoning and neural networks are methods that can incorporate learning.

Sensory Experience

Human experts have available to them a wide range of sensory experience; expert systems are currently dependent on symbolic input.

Degradation

Expert systems are not good at recognizing when no answer exists or when the problem is outside their area of expertise.


APPLICATION OF ARTIFICIAL INTELLIGENCE IN THE BUSINESS AND COMMERCE WORLD


Do e-commerce and e-business have any relation to Artificial Intelligence (AI)? AI is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable. Then, what about the relation between e-business, e-commerce, and AI? AI techniques are extensively used in the development of e-commerce systems. The field of e-commerce can be classified as B2C e-commerce and B2B e-commerce, in terms of the AI techniques involved. In B2C e-commerce, AI is used primarily for product selection and recommendation, negotiation, auctions, solving real-world scheduling problems, enhancing server scalability, generating automated responses, and making decisions on the bundling and pricing of goods. In B2B e-commerce, AI is used mainly for supply chain management. Business applications utilise these technologies to try to make better sense of potentially enormous variability (for example, unknown patterns/relationships in sales data, customer buying habits, and so on). Within the corporate world, AI is widely used for complex problem-solving and decision-support techniques in real-time business applications. The business applicability of AI techniques is spread across functions ranging from finance management to forecasting and production.


ARTIFICIAL INTELLIGENCE (AI) IN BUSINESS

Artificial Intelligence (AI) has been used in business applications since the early eighties. As with all technologies, AI initially generated much interest, but failed to live up to the hype. However, with the advent of web-enabled infrastructure and rapid strides made by the AI development community, the application of AI techniques in real-time business applications has picked up substantially in the recent past. Computers are fundamentally well suited to performing mechanical computations, using fixed programmed rules. This allows artificial machines to perform simple monotonous tasks efficiently and reliably, which humans are ill-suited to. For more complex problems, things get more difficult. Unlike humans, computers have trouble understanding specific situations, and adapting to new situations. Artificial Intelligence aims to improve machine behavior in tackling such complex tasks.

IMPORTANCE OF ARTIFICIAL INTELLIGENCE (AI)

Enterprises that utilize AI-enhanced applications are expected to become more diverse, as the needs for the ability to analyze data across multiple variables, fraud detection and customer relationship management emerge as key business drivers to gain competitive advantage. Artificial Intelligence is a branch of science which deals with helping machines find solutions to complex problems in a more human-like fashion. This generally involves borrowing characteristics from human intelligence, and applying them as algorithms in a computer-friendly way.

ADVANTAGES
1. Increase efficiency and quality by using optimal settings from past production. 2. Artificial Intelligence can optimize your schedule beyond normal human capabilities. 3. Increase productivity by eliminating downtime due to unpredictable changes in the schedule.

DISADVANTAGES

Limited sensory input: compared to a biological mind, an artificial mind can take in only a small amount of information. This limitation is discussed in more detail in the next section.


ADVANTAGES AND DISADVANTAGES OF ARTIFICIAL INTELLIGENCE


Without getting into too many technical specifics, here are some advantages

Artificial intelligence would not need any sleep.

This would be an advantage because it would not be interrupted from its tasks for sleep, as well as other issues that plague biological minds like restroom breaks and eating.

Unemotional Consideration of Problems

While an artificial mind could theoretically have emotions, it would be better for performance if it were programmed for unemotional reasoning. When people make decisions, sometimes those decisions are based on emotion rather than logic. This is not always the best way to make decisions.

Copied Very Easily

Once an artificial mind is trained in a task, that mind can then be copied very easily, compared to the training of multiple people for the same task.

There are some disadvantages to the artificial mind as well

Limited Sensory Input


Compared to a biological mind, an artificial mind is only capable of taking in a small amount of information. This is because of the need for individual input devices. The most important input that we humans take in is the condition of our bodies. Because we feel what is going on with our own bodies, we can maintain them much more efficiently than an artificial mind. At this point, it is unclear whether that would be possible with a computer system.

The Inability to Heal

Biological systems can heal with time and treatment. For minor conditions, most biological systems can continue normally with only a small drop in performance. Most computer systems, on the other hand, often need to be shut down for maintenance.

Replacement of Humans
If robots start replacing human resources in every field, we will have to deal with serious issues like unemployment, in turn leading to depression, poverty and crime in society. Human beings deprived of their work life may not find any means to channel their energies and harness their expertise, and may be left with idle time.

Absence of Human Touch


Replacing human beings with robots in every field may not be the right decision to make. There are many jobs that require the human touch. Intelligent machines will surely not be able to substitute for the caring behaviour of hospital nurses or the reassuring voice of a doctor. Intelligent machines may not be the right choice for customer service.

Cannot Be Human
One of the major disadvantages of intelligent machines is that they cannot be human. We might be able to make them think. But will we be able to make them feel?


THE FUTURE OF ARTIFICIAL INTELLIGENCE


In spite of its great advances and strong promise, AI, in name, has suffered from low esteem in both academic and corporate settings. AI has been unfavourably associated with impractical chess-playing computers and reclusive professors trying to build a "thinking machine." As a result, many developers of AI theories and applications consciously shun the moniker, preferring instead to use the newer jargon of fuzzy applications, flexible software, and data-mining tools. In avoiding the label AI, they have found more receptive audiences among corporate decision-makers and private investors for their AI-inspired technologies. Thus, while the practices and ideas known as AI are hardly dead, the name itself is drifting toward obscurity. This is true not only because of the perceived stigma, but also as a consequence of the diversity and heterogeneity of ways in which AI concepts have been implemented. Furthermore, these concepts are verging on ubiquity in software applications programming. Such disparate objectives as building a customer order system, implementing a self-diagnostic manufacturing system, designing a sophisticated search engine, and adding voice-recognition capabilities to applications all employ AI theories and methods. Indeed, Ford Motor Company was slated to implement an engine-diagnostic neural network in its car computers beginning in the 2001 model year. With AI so entrenched in modern software development, it has lost many of its distinctions from software generally.

Artificial Intelligence is a common topic in both science fiction and projections about the future of technology and society. The existence of an artificial intelligence that rivals human intelligence raises difficult ethical issues, and the potential power of the technology inspires both hopes and fears. In fiction, Artificial Intelligence has appeared fulfilling many roles, including a servant (R2D2 in Star Wars), a law enforcer (K.I.T.T. in Knight Rider), a comrade (Lt. Commander Data in Star Trek: The Next Generation), a conqueror/overlord (The Matrix), a dictator (With Folded Hands), an assassin (Terminator), a sentient race (Battlestar Galactica/Transformers), an extension to human abilities (Ghost in the Shell) and the savior of the human race (R. Daneel Olivaw in Asimov's Robot series). Mary Shelley's Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel? If it can feel, does it have the same rights as a human? The idea also appears in modern science fiction, including the films I, Robot, Blade Runner and A.I.: Artificial Intelligence, in which humanoid machines have the ability to feel human emotions. This issue, now known as "robot rights", is currently being considered by, for example, California's Institute for the Future, although many critics believe that the discussion is premature.[155] The subject is profoundly discussed in the 2010 documentary film Plug & Pray.[156]

Martin Ford and others argue that specialized artificial intelligence applications, robotics and other forms of automation will ultimately result in significant unemployment as machines begin to match and exceed the capability of workers to perform most routine and repetitive jobs. Ford predicts that many knowledge-based occupations, and in particular entry-level jobs, will be increasingly susceptible to automation via expert systems and other AI-enhanced applications. AI-based applications may also be used to amplify the capabilities of low-wage offshore workers, making it more feasible to outsource knowledge work. Joseph Weizenbaum wrote that AI applications cannot, by definition, successfully simulate genuine human empathy and that the use of AI technology in fields such as customer service or psychotherapy was deeply misguided. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum these points suggest that AI research devalues human life.

Many futurists believe that artificial intelligence will ultimately transcend the limits of progress. Ray Kurzweil has used Moore's law, which describes the relentless exponential improvement in digital technology, to calculate that desktop computers will have the same processing power as human brains by the year 2029. He also predicts that by 2045 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "singularity". Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, which has roots in Aldous Huxley and Robert Ettinger, has been illustrated in fiction as well, for example in the manga Ghost in the Shell and the science-fiction series Dune. Edward Fredkin argues that "artificial intelligence is the next stage in evolution," an idea first proposed by Samuel Butler's "Darwin among the Machines" (1863), and expanded upon by George Dyson in his book of the same name in 1998. Pamela McCorduck writes that all these scenarios are expressions of the ancient human desire to, as she calls it, "forge the gods".


INDEX

1. Introduction
2. History
3. Problems
4. Approaches
5. Tools And Methods
6. Applications Of Artificial Intelligence
7. Expert System
8. Examples Of Problems Where An Expert Assistant May Help
9. When To Use An Expert System
10. Advantages Of Expert Systems
11. Disadvantages Of Expert Systems
12. Application Of Artificial Intelligence In The Business And Commerce World
13. Advantages And Disadvantages Of Artificial Intelligence
14. The Future Of Artificial Intelligence

