
Expert System.

An expert system is a computer program that simulates the judgement and behavior of a human or an organization that has expert knowledge and experience in a particular field. Typically, such a system contains a knowledge base of accumulated experience and a set of rules for applying that knowledge to each particular situation described to the program. Sophisticated expert systems can be enhanced with additions to the knowledge base or to the set of rules.

Medical Expert Systems:


Medical expert systems have evolved to provide physicians with both structured questions and structured responses within medical domains of specialized knowledge or experience (1). The structure is embodied in the program on the advice of one or more medical experts, who suggest the optimal questions to consider and provide the most accurate conclusions to be drawn from the answers the physician chooses. In software, these decision sequences are represented as clauses of the form "If..., Then..., Else...", with the final Else supplying a default value within the closed system of the program (5). Although the physician is free to select any one of the choices offered in each clause, he or she is limited to the choices the expert built into the program. The program is thus limited by the fixed input from the expert at the time of formulation. If the physician has new questions or new data, a medical expert system will not be able to accommodate them. It is for this basic reason that open-system programs have been developed to meet the new needs of the user (5), with the contrast paraphrased as: "Expert Systems are by experts; Open Systems are for experts".
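The "If..., Then..., Else..." decision sequences described above can be sketched in a few lines of code. This is a hypothetical illustration only: the symptoms, thresholds, and conclusions below are invented for the example, not taken from any real medical system.

```python
# Hypothetical "If..., Then..., Else..." clauses of a medical expert
# system; every threshold and conclusion here is illustrative only.
def advise(fever_c, white_cell_count):
    """Return a fixed conclusion chosen by the expert at design time."""
    if fever_c >= 38.0:
        if white_cell_count > 11000:
            return "possible bacterial infection: suggest culture"
        else:
            return "possible viral infection: suggest rest and fluids"
    else:
        # the final Else supplies the default value of the closed system
        return "no acute infection indicated"

print(advise(38.5, 12500))
```

Note how the physician can only ever reach one of the three conclusions fixed by the expert; new symptoms or questions have no place in the program, which is exactly the limitation the text describes.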

What is the difference between Artificial Intelligence and Human Intelligence?


What is Human Intelligence? Human intelligence is the quality of mind comprising the capacities to learn from past experience, adapt to new situations, handle abstract ideas, and use acquired knowledge to change one's environment. A physician learning to treat a patient with unfamiliar symptoms, or an artist modifying a painting to change the impression it makes, falls under this definition very neatly. Effective adaptation requires perception, learning, memory, logical reasoning and problem solving. Intelligence is therefore not a single mental process but the summation of these processes directed toward effective adaptation to the environment. In the example of the physician, he or she adapts by reading material about the disease, learning the meaning behind the material, memorizing the most important facts and reasoning to understand the new symptoms. As a whole, then, intelligence is not a mere ability but a combination of abilities.

What is Artificial Intelligence? Artificial Intelligence (AI) is the field of computer science dedicated to developing machines that can mimic humans and perform the same tasks a human would. AI researchers seek a feasible alternative to the human mind. The rapid development of computers since their arrival some 50 years ago has helped researchers take great steps towards this goal. Modern applications such as speech recognition and robots that play chess, play table tennis and make music have been making the researchers' dream come true. In AI philosophy, the field is divided into two major types, Weak AI and Strong AI. Weak AI focuses on developing technology capable of carrying out pre-planned moves based on rules and applying them to achieve a certain goal. Strong AI aims at technology that can think and function like humans, not merely mimic human behavior in a certain domain.

What is Natural Language Processing?


Natural language processing (NLP) is a branch of artificial intelligence that deals with analyzing, understanding and generating the languages that humans use naturally, so that people can interact with computers in both written and spoken contexts using natural human languages instead of computer languages. One of the challenges inherent in natural language processing is teaching computers to understand the way humans learn and use language. Take, for example, the sentence "Baby swallows fly." This simple sentence has multiple meanings, depending on whether the word "swallows" or the word "fly" is used as the verb, which also determines whether "baby" is used as a noun or an adjective. In the course of human communication, the meaning of the sentence depends on both the context in which it was communicated and each person's understanding of the ambiguity in human languages. This sentence poses problems for software, which must first be programmed to understand context and linguistic structures.
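The "Baby swallows fly" ambiguity can be made concrete by enumerating part-of-speech assignments. The tiny lexicon and the crude grammaticality check below are hand-made assumptions for illustration, not a real NLP pipeline.

```python
# Toy illustration of the ambiguity in "Baby swallows fly": each word
# admits more than one part of speech, and several tag sequences survive
# even a crude grammaticality filter. Lexicon is a hand-made assumption.
from itertools import product

lexicon = {
    "baby":     ["NOUN", "ADJ"],
    "swallows": ["NOUN", "VERB"],
    "fly":      ["NOUN", "VERB"],
}
sentence = ["baby", "swallows", "fly"]

def plausible(tags):
    # crude constraints: exactly one verb, and an adjective must be
    # immediately followed by a noun
    if tags.count("VERB") != 1:
        return False
    return all(tags[i + 1] == "NOUN"
               for i, t in enumerate(tags[:-1]) if t == "ADJ")

readings = [tags for tags in product(*(lexicon[w] for w in sentence))
            if plausible(tags)]
for tags in readings:
    print(list(zip(sentence, tags)))
```

Both readings from the text survive: "(baby) swallows / fly-as-noun" and "baby-as-adjective swallows / fly-as-verb". Choosing between them requires context, which is exactly what the software must be taught.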

Inference engine
An inference engine is a computer program that tries to derive answers from a knowledge base. It is the "brain" that expert systems use to reason about the information in the knowledge base for the ultimate purpose of formulating new conclusions. Inference engines are considered to be a special case of reasoning engines, which can use more general methods of reasoning.

Rule-based system
In computer science, a rule-based system uses a set of assertions together with a set of "if-then" rules that specify how to act upon those assertions. In software development, rule-based systems can be used to create software that provides an answer to a problem in place of a human expert; such a system may also be called an expert system. Rule-based systems are also used in AI (artificial intelligence) programming and systems.
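A minimal sketch of such a system, and of the inference engine described above, is forward chaining: rules fire on the current set of assertions until nothing new can be concluded. The rules below are invented for illustration.

```python
# Minimal forward-chaining sketch of a rule-based system: each rule is
# an (if-conditions, then-assertion) pair; the loop is the "inference
# engine" deriving new facts from the knowledge base. Rules are made up.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    """Fire rules until no new assertion can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"has_fever", "has_rash"}, rules)))
```

The second rule only becomes applicable after the first one fires, which is how chains of "if-then" statements stand in for an expert's reasoning.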

Robotics
The field of computer science and engineering concerned with creating robots, devices that can move and react to sensory input. Robotics is one branch of artificial intelligence.

Robots are now widely used in factories to perform high-precision jobs such as welding and riveting. They are also used in special situations that would be dangerous for humans, for example in cleaning toxic wastes or defusing bombs. Although great advances have been made in the field of robotics during the last decade, robots are still not very useful in everyday life, as they are too clumsy to perform ordinary household chores. The word robot was coined by Czech playwright Karel Capek in his play R.U.R. (Rossum's Universal Robots), which opened in Prague in 1921; robota is the Czech word for forced labor. The term robotics was introduced by writer Isaac Asimov. In his science fiction book I, Robot, published in 1950, he presented three laws of robotics:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Applications of AI
game playing
You can buy machines that play master-level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute-force computation, looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.

speech recognition
In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus United Airlines replaced its keyboard tree for flight information with a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.

understanding natural language
Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.

computer vision
The world is composed of three-dimensional objects, but the inputs to the human eye and computers' TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.

expert systems
A "knowledge engineer" interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI; when this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practicing doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments, but not patients, doctors, hospitals, death, recovery, or events occurring in time. Its interactions depended on a single patient being considered. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. In the present state of AI, this has to be true. The usefulness of current expert systems depends on their users having common sense.

heuristic classification
One of the most feasible kinds of expert system, given the present knowledge of AI, is one that puts information into one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment, the item he is buying, and the establishment from which he is buying it (e.g., whether there have been previous credit card frauds at this establishment).

Turing Machines
A Turing machine can be thought of as a primitive, abstract computer. Alan Turing, a British mathematician and cryptographer, invented the Turing machine as a tool for studying the computability of mathematical functions. Turing's hypothesis (also known as Church's thesis) is the widely held belief that a function is computable if and only if it can be computed by a Turing machine. This implies that Turing machines can solve any problem that a modern computer program can solve. There are problems that cannot be solved by a Turing machine (e.g., the halting problem); thus, these problems cannot be solved by a modern computer program either. A Turing machine has an infinite tape that consists of adjacent cells (or squares), on each of which a symbol is written. The symbols that are allowed on the tape are finite in number and include the blank symbol. Each Turing machine has its own alphabet (i.e., finite set of symbols), which determines the symbols that are allowed on the tape.
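The tape, alphabet and transition rules above can be simulated in a few lines. The machine below is an assumed example of my own: it flips every bit on the tape and halts when it reads the blank symbol.

```python
# A tiny Turing machine simulator. A transition maps (state, symbol) to
# (next state, symbol to write, head move). The example machine flips
# every bit, then halts on blank; it is an illustrative assumption.
BLANK = "_"

def run(tape, transitions, state="q0", halt="halt"):
    tape = dict(enumerate(tape))  # sparse representation of infinite tape
    head = 0
    while state != halt:
        symbol = tape.get(head, BLANK)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(BLANK)

flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", BLANK): ("halt", BLANK, "R"),
}
print(run("0110", flip))  # prints 1001
```

The sparse dictionary stands in for the infinite tape: any cell never written still reads as the blank symbol, exactly as the definition requires.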

Search Engines
Search engines are the key to finding specific information on the vast expanse of the World Wide Web. Without sophisticated search engines, it would be virtually impossible to locate anything on the Web without knowing a specific URL. But do you know how search engines work? And do you know what makes some search engines more effective than others? When people use the term search engine in relation to the Web, they are usually referring to the actual search forms that search through databases of HTML documents, initially gathered by a robot. There are basically three types of search engines: those powered by robots (called crawlers, ants or spiders); those powered by human submissions; and those that are a hybrid of the two. Crawler-based search engines use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags, and follow the links from the site, indexing all linked Web sites as well. Human-powered search engines rely on humans to submit information, which is subsequently indexed and catalogued; only information that is submitted is put into the index.
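The crawler behavior just described, visit a page, read it, then follow its links, can be sketched without any real HTTP traffic by crawling an in-memory link graph. The page names and contents below are made up for the example.

```python
# Sketch of a crawler over an in-memory link graph instead of real HTTP
# requests; page names and contents are invented for illustration.
from collections import deque

pages = {
    "index.html": {"links": ["about.html", "faq.html"], "text": "welcome"},
    "about.html": {"links": ["index.html"], "text": "about us"},
    "faq.html":   {"links": [], "text": "questions"},
}

def crawl(start):
    """Breadth-first crawl: visit each reachable page once and index it."""
    index, frontier, seen = {}, deque([start]), {start}
    while frontier:
        url = frontier.popleft()
        index[url] = pages[url]["text"]      # "read the information"
        for link in pages[url]["links"]:     # "follow the links"
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return index

print(sorted(crawl("index.html")))
```

The `seen` set is what keeps the crawler from looping forever on the index/about cycle; a real crawler adds politeness delays, robots.txt handling and much more.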

Sensors and its types


Sensors are sophisticated devices that are frequently used to detect and respond to electrical or optical signals. A sensor converts a physical parameter (for example temperature, blood pressure, humidity, or speed) into a signal which can be measured electrically. Take the example of temperature: the mercury in a glass thermometer expands and contracts with the measured temperature, and the resulting level of the liquid can be read by a viewer on the calibrated glass tube. Sensors are classified by the following criteria:
1. Primary input quantity (measurand)
2. Transduction principle (the physical and chemical effects used)
3. Material and technology
4. Property
5. Application

Classification based on property is as given below:
Temperature - thermistors, thermocouples, RTDs, IC sensors and many more.
Pressure - fibre optic, vacuum, elastic liquid-based manometers, LVDT, electronic.
Flow - electromagnetic, differential pressure, positional displacement, thermal mass, etc.
Level - differential pressure, ultrasonic, radio frequency, radar, thermal displacement, etc.
Proximity and displacement - LVDT, photoelectric, capacitive, magnetic, ultrasonic.
Biosensors - resonant mirror, electrochemical, surface plasmon resonance, light addressable potentiometric.
Image - charge-coupled devices, CMOS.
Gas and chemical - semiconductor, infrared, conductance, electrochemical.
Acceleration - gyroscopes, accelerometers.
Others - moisture, humidity, speed, mass, force, viscosity.

Computer vision
Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and, in general, high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions.[1][2][3] A theme in the development of this field has been to duplicate the abilities of human vision by electronically perceiving and understanding an image.[4] This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.[5] Computer vision has also been described as the enterprise of automating and integrating a wide range of processes and representations for vision perception.

Alpha-beta pruning
Alpha-beta pruning is a search algorithm that seeks to decrease the number of nodes evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player games (tic-tac-toe, chess, Go, etc.). It stops completely evaluating a move as soon as one possibility has been found that proves the move to be worse than a previously examined move; such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.
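The algorithm can be shown on an explicit game tree. The nested-list tree below is an assumed textbook-style example, with leaves as static evaluation scores.

```python
# Alpha-beta pruning over an explicit game tree: nested lists are inner
# nodes, numbers are leaf evaluations. The cutoffs skip branches that
# cannot change the minimax value.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: MIN avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:            # alpha cutoff: MAX avoids this line
                break
        return value

tree = [[3, 5], [6, 9], [1, 2]]          # MAX root over three MIN nodes
print(alphabeta(tree, maximizing=True))  # prints 6, same as plain minimax
```

In this tree the third MIN node is cut off after its first leaf (value 1): since MAX already has 6 available, nothing in that branch can influence the final decision.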

Predicate logic
Predicate logic is the generic term for symbolic formal systems like first-order logic, second-order logic, many-sorted logic, or infinitary logic. These formal systems are distinguished from other systems in that their formulae contain variables which can be quantified. Two common quantifiers are the existential ("there exists") and universal ("for all") quantifiers. The variables may range over elements in the universe under discussion, or perhaps over relations or functions over that universe. For instance, an existential quantifier over a function symbol would be interpreted as the modifier "there is a function".
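Over a finite universe, the two quantifiers can be evaluated directly, which makes their meaning concrete. The universe and predicate below are arbitrary choices for illustration.

```python
# Evaluating quantified formulas over a small finite universe; Python's
# all() and any() play the roles of "for all" and "there exists".
universe = range(1, 6)  # the universe of discourse: {1, 2, 3, 4, 5}

def even(x):
    return x % 2 == 0

# "there exists x such that even(x)"
exists_even = any(even(x) for x in universe)
# "for all x, even(x)"
all_even = all(even(x) for x in universe)
# nested quantifiers: "for all x there exists y with y > x"
# (false here: 5 has no larger element in this universe)
forall_exists = all(any(y > x for y in universe) for x in universe)

print(exists_even, all_even, forall_exists)
```

Nesting the generators mirrors nesting the quantifiers, and reversing their order changes the meaning of the formula, just as in the logic itself.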

Speech Recognition.
Speech recognition is the process by which a computer maps an acoustic speech signal to text. It is different from speech understanding, which is the process by which a computer maps an acoustic speech signal to some form of abstract meaning of the speech. The process depends on the speaker and how he or she speaks the language. With respect to the speaker, there are three different types of system:
* Speaker-dependent systems
* Speaker-independent systems
* Speaker-adaptive systems

Speech synthesis is the computer imitation of speech: computer-generated audio output that resembles human speech. In one form it is a computer-controlled recording system in which basic sounds, numerals, words, or phrases are individually stored for playback under computer control as the reply to a keyboarded query. More generally, it is the process of generating an acoustic speech signal that communicates an intended message, so that a machine can respond to a request for information by talking to a human user.

What are frames and scripts in artificial intelligence?

Frames are a variant of semantic networks and one of the popular ways of representing non-procedural knowledge in an expert system. In a frame, all of the data relevant to a particular concept is stored in a single complex entity, like a record. Frames support inheritance. They are often used to capture knowledge about typical objects or events, such as a car, a lion, or even a mathematical object like a rectangle. We may represent some knowledge about a lion in a frame structure as follows:

Mammal:
    Subclass: Animal
    warm_blooded: YES

Instead of "frame", different names such as Script, Schema, Prototype and Object are used in the computer science literature.
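A frame with inheritance can be sketched as a dictionary whose "is_a" slot points to a parent frame; slot lookups climb the chain. The slot names and values below extend the Mammal example with invented illustrative details.

```python
# Frames as dicts with an "is_a" slot; lookups inherit from parent
# frames when a slot is absent. Slot values are illustrative assumptions.
frames = {
    "Animal": {"alive": True},
    "Mammal": {"is_a": "Animal", "warm_blooded": True},
    "Lion":   {"is_a": "Mammal", "diet": "carnivore"},
}

def get_slot(frame_name, slot):
    """Look up a slot, climbing the is_a chain when it is absent."""
    frame = frames[frame_name]
    if slot in frame:
        return frame[slot]
    if "is_a" in frame:
        return get_slot(frame["is_a"], slot)
    raise KeyError(slot)

print(get_slot("Lion", "warm_blooded"))  # inherited from Mammal
```

Asking the Lion frame whether it is warm-blooded succeeds even though the slot is stored two levels up, which is the practical payoff of inheritance in frame systems.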

Natural language generation


Natural Language Generation (NLG) is the natural language processing task of generating natural language from a machine representation system such as a knowledge base or a logical form. Psycholinguists prefer the term language production when such formal representations are interpreted as models of mental representations. It could be said that an NLG system is like a translator that converts a computer-based representation into a natural language representation. However, the methods used to produce the final language differ from those of a compiler, owing to the inherent expressivity of natural languages.

Cognitive Science
Cognitive Science is the interdisciplinary study of minds. It is characterized in two important ways: theoretically and methodologically. Theoretically, cognitive science differs from some other fields that study minds in that it tends to focus on a particular level of analysis: that of information processing.

SEARCH IN ARTIFICIAL INTELLIGENCE


Search plays a major role in solving many Artificial Intelligence (AI) problems; it is a universal problem-solving mechanism in AI. In many problems the sequence of steps required to solve them is not known in advance but must be determined by systematic trial-and-error exploration of alternatives. The problems addressed by AI search algorithms fall into three general classes: single-agent path-finding problems, two-player games, and constraint-satisfaction problems.

SINGLE-AGENT PATH-FINDING PROBLEMS
Classic examples of path-finding problems in the AI literature are sliding-tile puzzles, Rubik's Cube and theorem proving. The sliding-tile puzzles are common test beds for research in AI search algorithms, as they are very simple to represent and manipulate. Real-world problems include the traveling salesman problem, vehicle navigation, and the wiring of VLSI circuits. In each case, the task is to find a sequence of operations that maps an initial state to a goal state.

TWO-PLAYER GAMES
Two-player games here means two-player perfect-information games. Chess, checkers, and Othello are examples.

CONSTRAINT-SATISFACTION PROBLEMS
The eight-queens problem is the best-known example: the task is to place eight queens on an 8x8 chessboard such that no two queens are on the same row, column or diagonal. Real-world examples of constraint-satisfaction problems are planning and scheduling applications.
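The eight-queens problem mentioned above illustrates trial-and-error exploration directly: place one queen per row, backtracking whenever a constraint fails. This is a minimal sketch, not an optimized solver.

```python
# Backtracking search for the eight-queens constraint-satisfaction
# problem: one queen per row, chosen column by column, undoing choices
# (backtracking) whenever the row/column/diagonal constraints fail.
def solve(n=8, placed=()):
    row = len(placed)
    if row == n:
        return placed                    # all queens placed
    for col in range(n):
        # constraints: no shared column, no shared diagonal
        # (rows are distinct by construction)
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            solution = solve(n, placed + (col,))
            if solution:
                return solution
    return None                          # dead end: force backtracking

print(solve())  # a tuple giving the queen's column in each row
```

The systematic exploration of alternatives happens in the `for col` loop; returning `None` from a dead end is exactly the trial-and-error mechanism the text describes.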

RULE BASED EXPERT SYSTEMS


Rules are the popular paradigm for representing knowledge. A rule based expert system is one whose knowledge base contains the domain knowledge coded in the form of rules.

ELEMENTS OF A RULE BASED EXPERT SYSTEM


A rule based expert system consists of the following components:

USER INTERFACE
This is a mechanism to support communication between the user and the system. The user interface may be a simple text-oriented display or a sophisticated, high-resolution display; it is determined at the time of designing the system. Nowadays graphical user interfaces are very common for their user-friendliness.

EXPLANATION FACILITY
This explains to the user the reasoning process of the system. By keeping track of the rules that are fired, an explanation facility presents a chain of reasoning that led to a certain conclusion, so the explanation facility is also called a justifier. This feature makes a big difference between expert systems and other conventional systems. Almost all commercial expert system shells do trace-based explanation, that is, they explain the inferencing on a specific input data set. Some systems explain the knowledge base itself, and some explain the control strategy as well.
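Trace-based explanation follows naturally from forward chaining: record each rule as it fires, then replay the chain. The rule names and the car-diagnosis facts below are invented for the sketch.

```python
# Sketch of trace-based explanation: the engine records every rule it
# fires so the chain of reasoning can be replayed. Rules are made up.
diagnostic_rules = [
    ("R1", {"engine_wont_start", "lights_dim"}, "battery_weak"),
    ("R2", {"battery_weak"}, "recommend_charge_battery"),
]

def infer_with_trace(initial_facts):
    facts, trace = set(initial_facts), []
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in diagnostic_rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{name}: {sorted(conditions)} -> {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer_with_trace({"engine_wont_start", "lights_dim"})
for step in trace:
    print(step)
```

Printing the trace is the "justifier": the user sees that R1 fired on the observed symptoms and that R2 fired on R1's conclusion, which is the chain of reasoning behind the final recommendation.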

WORKING MEMORY
This is a database used to store the collection of facts which will later be used by the rules. (More effort may go into the design and implementation of the user interface than into the expert system knowledge base.) Working memory is used by the inference engine to obtain facts and match them against the rules; new facts may be added to the working memory by applying the rules.

INFERENCE ENGINE
The inference engine was described above: it matches the rules against the facts in working memory and fires the applicable rules to derive new conclusions.

KNOWLEDGE ACQUISITION FACILITY


This allows the user to enter knowledge into the system, thereby avoiding the need for a knowledge engineer to explicitly code the knowledge. It is an optional feature on many expert systems. Simple rules can be created using rule induction. In rule-based expert systems the knowledge base is also called production memory, as rules in the form of if-then statements are called productions.

ADVANTAGES OF RULE BASED EXPERT SYSTEMS


Modular nature: this allows knowledge to be encapsulated and the expert system to be expanded in an easy way.

Explanation facilities: rules make it easy to build explanation facilities. By keeping track of the rules that are fired, an explanation facility can present the chain of reasoning that led to a certain conclusion.

Similarity to the human cognitive process: Newell and Simon showed that rules are a natural way of modeling how humans solve problems. Rules also make it easy to explain the structure of the knowledge to the experts.

WHAT IS THE TURING TEST?


Alan Turing's 1950 article Computing Machinery and Intelligence [Tur50] discussed conditions for considering a machine to be intelligent. He argued that if a machine could successfully pretend to be human to a knowledgeable observer, then you certainly should consider it intelligent. This test would satisfy most people, but not all philosophers. The observer could interact with the machine and a human by teletype (to avoid requiring that the machine imitate the appearance or voice of a person); the human would try to persuade the observer that he or she was human, and the machine would try to fool the observer. The Turing test is a one-sided test: a machine that passes it should certainly be considered intelligent, but a machine could still be considered intelligent without knowing enough about humans to imitate one. Daniel Dennett's book Brainchildren [Den98] has an excellent discussion of the Turing test and the various partial Turing tests that have been implemented, i.e., tests with restrictions on the observer's knowledge of AI and on the subject matter of questioning. It turns out that some people are easily led into believing that a rather dumb program is intelligent.
