
HILL CLIMBING PROCEDURE

Hill climbing is a variant of depth-first (generate-and-test) search.

Feedback is used here to decide on the direction of motion in the search space. In depth-first search, the test function merely accepts or rejects a solution. In hill climbing, the test function is augmented with a heuristic function that provides an estimate of how close a given state is to the goal state. The hill climbing procedure is as follows:

1. Generate the first proposed solution, as in the depth-first procedure. See if it is a solution. If so, quit; else continue.
2. From this solution generate a new set of solutions, using some applicable rules.
3. For each element of this set:
(i) Apply the test function. If it is a solution, quit.
(ii) Else see whether it is closer to the goal state than any solution already generated. If yes, remember it; else discard it.
4. Take the best element generated so far and use it as the next proposed solution. This step corresponds to a move through the problem space in the direction of the goal state.
5. Go back to step 2.

Sometimes this procedure may lead to a position which is not a solution, but from which there is no move that improves things. This will happen if we have reached one of the following three states:
(a) A "local maximum", which is a state better than all its neighbors but not better than some other states farther away. Local maxima sometimes occur within sight of a solution; in such cases they are called "foothills".
(b) A "plateau", which is a flat area of the search space in which neighboring states have the same value. On a plateau it is not possible to determine the best direction in which to move by making local comparisons.
(c) A "ridge", which is an area of the search space that is higher than the surrounding areas but cannot be traversed by single moves.

To overcome these problems we can:
(a) Backtrack to some earlier node and try a different direction. This is a good way of dealing with local maxima.

(b) Make a big jump in some direction to a new area of the search space. This can be done by applying two or more rules, or the same rule several times, before testing. This is a good strategy for dealing with plateaus and ridges.

Hill climbing becomes inefficient in large problem spaces and when combinatorial explosion occurs, but it is useful when combined with other methods.

Hill climbing as an optimization technique

Hill climbing is an optimization technique for solving computationally hard problems. It is best used in problems with the property that the state description itself contains all the information needed for a solution (Russell & Norvig, 2003).[1] The algorithm is memory efficient since it does not maintain a search tree: it looks only at the current state and immediate future states. Hill climbing attempts to iteratively improve the current state by means of an evaluation function. Consider all the possible states laid out on the surface of a landscape; the height of any point on the landscape corresponds to the evaluation function of the state at that point (Russell & Norvig, 2003).[1] In contrast with other iterative improvement algorithms, hill climbing always attempts to make changes that improve the current state. In other words, hill climbing can only advance if there is a higher point in the adjacent landscape.

Iterative improvement and local maxima

The main problem that hill climbing can encounter is that of local maxima. This occurs when the algorithm stops making progress towards an optimal solution, mainly due to the lack of immediate improvement in adjacent states. Local maxima can be avoided by a variety of methods: simulated annealing tackles this issue by allowing some steps to be taken which decrease the immediate optimality of the current state.
Algorithms such as simulated annealing can sometimes make changes that make things worse, at least temporarily (Russell & Norvig, 2003).[1] This allows for the avoidance of dead ends in the search path.

Random-restart hill climbing

Another way of addressing the local maxima problem involves repeated exploration of the problem space. Random-restart hill climbing conducts a series of hill-climbing searches from randomly generated initial states, running each until it halts or makes no discernible progress (Russell & Norvig, 2003).[1] This enables comparison of many optimization trials; finding the best solution then becomes a question of using sufficient iterations on the data.

Algorithm in pseudocode

function HILL-CLIMBING(problem) returns a solution state
  inputs: problem, a problem
  static: current, a node
          next, a node
  current ← MAKE-NODE(INITIAL-STATE[problem])

  loop do
    next ← a highest-valued successor of current
    if VALUE[next] ≤ VALUE[current] then return current
    current ← next
  end
(Russell & Norvig, 2003)[1]

Computational complexity

Since the evaluation function looks only at the current state, hill climbing does not suffer from space complexity issues. The source of its computational complexity is the time required to explore the problem space. Random-restart hill climbing can arrive at optimal solutions within polynomial time for most problem spaces. However, for some NP-complete problems the number of local maxima can cause exponential computation time. To address these problems, some researchers have looked at using probability theory and local sampling to direct the restarting of hill-climbing algorithms (Cohen, Greiner, & Schuurmans, 1994).[2]

Applications

Hill climbing can be applied to any problem where the current state allows for an accurate evaluation function: for example, the travelling salesman problem, the eight-queens problem, circuit design, and a variety of other real-world problems. Hill climbing has been used in inductive learning models. One such example is PALO, a probabilistic hill-climbing system which models inductive and speed-up learning. Some applications of this system have been fit into explanation-based learning systems and utility analysis models (Cohen, Greiner, & Schuurmans, 1994).[2] Hill climbing has also been used in robotics to manage multiple-robot teams. One such example is the Parish algorithm, which allows for scalable and efficient coordination in multi-robot systems. The researchers designed a team of robots that must coordinate their actions so as to guarantee location of a skilled evader (Gerkey, Thrun, & Gordon, 2005).[3] Their algorithm allows robots to choose whether to work alone or in teams by using hill climbing.
Robots executing Parish are therefore collectively hill-climbing according to local progress gradients, but stochastically make lateral or downward moves to help the system escape from local maxima.
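The basic loop above, together with random restarts, can be sketched in a few lines of Python. The toy landscape, the integer step function, and the restart count are illustrative assumptions made for this sketch, not part of the original description.

```python
import random

def hill_climb(evaluate, neighbors, start):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor until no neighbor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=evaluate, default=current)
        if evaluate(best) <= evaluate(current):
            return current              # stuck: local maximum or plateau
        current = best

def random_restart_hill_climb(evaluate, neighbors, random_state, restarts=20):
    """Run several hill climbs from random initial states and keep
    the best local maximum found (random-restart hill climbing)."""
    results = [hill_climb(evaluate, neighbors, random_state())
               for _ in range(restarts)]
    return max(results, key=evaluate)

# Toy landscape: f(x) = -(x - 3)^2 over the integers, with its peak at x = 3.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
best = random_restart_hill_climb(f, step, lambda: random.randint(-10, 10))
```

On this unimodal landscape every climb reaches x = 3; on a landscape with foothills, only the restarts would rescue the search.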

HEURISTIC SEARCH

To solve complex problems efficiently, it is necessary to compromise between the requirements of mobility and systematicity. A control structure has to be constructed that no longer guarantees the best solution, but that will almost always find a very good answer. Such a technique is said to be heuristic (a rule of thumb). A heuristic search improves the efficiency of the search process but sacrifices the claim of completeness. However, it improves the quality of the paths that are explored. Using good heuristics we can get good solutions to hard problems, such as the traveling salesman problem. Applying it to the traveling salesman problem produces the following procedure:

1. Arbitrarily select a starting city.
2. To select the next city, look at all cities not yet visited. Select the one closest to the current city and go to it next.
3. Repeat step 2 until all the cities have been visited.

This procedure executes in time proportional to N * N instead of N!, and it is possible to prove an upper bound on the error it incurs. In many AI problems, however, it is not possible to produce such bounds. This is true for two reasons:

i) For real-world problems it is often hard to measure precisely the goodness of a particular solution. For instance, answers to questions like "Why has inflation increased?" cannot be precise.
ii) For real-world problems it is often useful to introduce heuristics based on relatively unstructured knowledge, because a mathematical analysis is often not possible.

Without heuristics it is not possible to tackle combinatorial explosion. Moreover, we do not always insist on an optimum solution; we stop with satisfactory solutions that meet some set of requirements, even though there might be better solutions. A good example of this is the search for a parking space: most people stop as soon as they find a fairly good space, even if there might be a slightly better space ahead.
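The three-step nearest-city procedure can be sketched as follows; the city names and coordinates are made-up sample data.

```python
import math

def nearest_neighbour_tour(cities, start):
    """Greedy TSP heuristic: from the current city, always visit the
    closest unvisited city next. Runs in O(N^2) time instead of the
    O(N!) of exhaustive search, with no guarantee of optimality."""
    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(cities[here], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Hypothetical coordinates for illustration.
cities = {"A": (0, 0), "B": (1, 0), "C": (5, 0), "D": (6, 1)}
tour = nearest_neighbour_tour(cities, "A")
```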

Heuristic Search Techniques

Introduction: Many problems are too complex to be solvable by direct techniques; they have to be solved by suitable heuristic search techniques. Though the heuristic techniques can be described independently, they are domain specific. They are called "weak methods", since they are

vulnerable to combinatorial explosion. Even so, they provide the framework into which domain-specific knowledge can be placed. Every search process can be viewed as a traversal of a directed graph, in which the nodes represent problem states and the arcs represent relationships between states. The search process must find a path through this graph, starting at an initial state and ending in one or more final states. The following issues have to be considered before going for a search. Heuristic Search Techniques.

Heuristic techniques are called weak methods since they are vulnerable to combinatorial explosion; even so, they provide a framework into which domain-specific knowledge can be placed, either by hand or as a result of learning. The following are some general-purpose control strategies (often called weak methods):

Generate and test

Hill climbing
Breadth first search
Depth first search
Best first search (A* search)
Problem reduction (AO* search)
Constraint satisfaction
Means-ends analysis

A heuristic procedure, or heuristic, is defined as having the following properties:

1. It will usually find good, although not necessarily optimum, solutions.

2. It is faster and easier to implement than any known exact algorithm (one which guarantees an optimum solution).

In general, heuristic search improves the quality of the paths that are explored. Using good heuristics we can hope to get good solutions to hard problems, such as the traveling salesman problem, in less than exponential time. There are some good general-purpose heuristics that are useful in a wide variety of problems. It is also possible to construct special-purpose heuristics to solve particular problems; for example, consider the traveling salesman problem.
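As an illustration of a special-purpose heuristic (not taken from the text above), the well-known Manhattan-distance function for the 8-puzzle estimates how far a board is from the goal state. The tuple encoding of boards below is an assumption made for this sketch.

```python
def manhattan_distance(state, goal):
    """Sum, over all tiles, of the horizontal plus vertical distance
    between each tile's current position and its goal position.
    Boards are tuples of 9 entries in row-major order; 0 is the blank."""
    total = 0
    for tile in range(1, 9):                 # the blank is not counted
        i, j = divmod(state.index(tile), 3)
        gi, gj = divmod(goal.index(tile), 3)
        total += abs(i - gi) + abs(j - gj)
    return total

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 5, 6, 7, 0, 8)          # one move away from the goal
```

The heuristic never overestimates the number of moves needed, which is what makes it useful for guiding search.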

HEURISTIC FUNCTIONS

A heuristic technique helps in solving problems, even though there is no guarantee that it will never lead in the wrong direction. There are heuristics of general applicability as well as domain-specific ones. The strategies listed above are general-purpose heuristics; in order to use them in a specific domain they are coupled with some domain-specific heuristics. There are two major ways in which domain-specific heuristic information can be incorporated into a rule-based search procedure:
- In the rules themselves
- As a heuristic function that evaluates individual problem states and determines how desirable they are

A heuristic function is a function that maps from problem state descriptions to measures of desirability, usually represented as numbers (weights). The value of a heuristic function at a given node in the search process gives a good estimate of that node being on the desired path to a solution. Well-designed heuristic functions can provide a fairly good estimate of whether a path is good or not. ("The sum of the distances traveled so far" is a simple heuristic function in the traveling salesman problem.) The purpose of a heuristic function is to guide the search process in the most profitable direction, by suggesting which path to follow first when more than one is available. However, in many problems the cost of computing the value of a heuristic function would be more than the effort saved in the search process. Hence there is generally a trade-off between the cost of evaluating a heuristic function and the savings in search that the function provides.

THE AO* ALGORITHM

The problem reduction algorithm we just described is a simplification of an algorithm described by Martelli and Montanari and by Nilsson. Nilsson calls it the AO* algorithm, the name we assume.

1. Place the start node s on OPEN.
2. Using the search tree constructed thus far, compute the most promising solution tree T.
3. Select a node n that is both on OPEN and a part of T. Remove n from OPEN and place it on CLOSED.
4. If n is a terminal goal node, label n as SOLVED. If the solution of n results in any of n's ancestors being solved, label all those ancestors as SOLVED. If the start node s is SOLVED, exit with success, where T is the solution tree. Remove from OPEN all nodes with a SOLVED ancestor.
5. If n is not a solvable node (operators cannot be applied), label n as UNSOLVABLE. If the start node is labeled UNSOLVABLE, exit with failure. If any of n's ancestors become unsolvable because n is, label them UNSOLVABLE as well. Remove from OPEN all nodes with unsolvable ancestors.
6. Otherwise, expand node n, generating all of its successors. For each successor node that represents more than one sub-problem, generate its successors to give the individual sub-problems. Attach to each newly generated node a back pointer to its predecessor. Compute the cost estimate h* for each newly generated node and place all such nodes that do not yet have descendants on OPEN. Next, recompute the values of h* at n and at each ancestor of n.
7. Return to step 2.

THE A* ALGORITHM

The best first search algorithm is a simplification of the A* algorithm, which was first presented by Hart.

Algorithm:
Step 1: Put the initial node on a list START.
Step 2: If START is empty or START = GOAL, terminate the search.
Step 3: Remove the first node from START; call this node a.
Step 4: If a = GOAL, terminate the search with success.
Step 5: Else, if node a has successors, generate all of them. Estimate the fitness number of each successor by totaling the evaluation function value and the cost function value, and sort the successors by fitness number.
Step 6: Name the new list START 1.
Step 7: Replace START with START 1.

Step 8: Go to step 2.

What do you know about BFS?

Breadth First Search (BFS): This is also a brute-force search procedure like DFS, but the search progresses level by level, unlike DFS, which goes deep into the tree. An operator is employed to generate all possible children of a node. BFS, being a brute-force search, generates all the nodes for identifying the goal.

ALGORITHM:
Step 1: Put the initial node on a list START.
Step 2: If START is empty or the goal is reached, terminate the search.
Step 3: Remove the first node from START and call this node a.
Step 4: If a = GOAL, terminate the search with success.
Step 5: Else, if node a has successors, generate all of them and add them at the tail of START.
Step 6: Go to step 2.

Advantages:
1. BFS will not get trapped exploring a blind alley.
2. If there is a solution, then BFS is guaranteed to find it.

Disadvantages:
1. The amount of time needed to generate all the nodes is considerable because of the time complexity.
2. Memory constraint is also a major problem because of the space complexity.
3. The search process remembers all unwanted nodes, which are of no practical use for the search.

GENERATE AND TEST

This is the simplest search strategy. It consists of the following steps:
1. Generate a possible solution. For some problems, this means generating a particular point in the problem space; for others it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point, or the end point of the chosen path, to the set of acceptable goal states.

3. If a solution has been found, quit; otherwise return to step 1.

The generate-and-test algorithm is a depth-first search procedure, because complete possible solutions are generated before testing. Since the same states are likely to appear often in a tree, it can be implemented on a search graph rather than a tree.

PROBLEM REDUCTION (AND-OR graphs and the AO* Algorithm)

When a problem can be divided into a set of sub-problems, where each sub-problem can be solved separately and a combination of the solutions will be a solution to the whole, AND-OR graphs or AND-OR trees are used for representing the solution. The decomposition of the problem, or problem reduction, generates AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved. Several arcs may also emerge from a single node, indicating several possible solutions; hence the graph is known as an AND-OR graph rather than simply an AND graph. Figure shows an AND-OR graph.
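Before turning to the search algorithm, it may help to see one simple, assumed way of representing an AND-OR graph in code: each node carries a list of arcs, and each arc lists the successor nodes that must all be solved together (a one-element arc is an ordinary OR edge). The node names are illustrative.

```python
# Each node maps to its arcs; an arc is the list of successor nodes
# that must ALL be solved (a one-element arc is a plain OR edge).
and_or_graph = {
    "A": [["B"], ["C", "D"]],   # solve A via B alone, or via C AND D
    "B": [], "C": [], "D": [],  # terminal nodes: no outgoing arcs
}

def solvable(node, graph, terminals):
    """A node is solved if it is terminal, or if every node of at
    least one of its arcs is itself solvable."""
    if node in terminals:
        return True
    return any(all(solvable(s, graph, terminals) for s in arc)
               for arc in graph.get(node, []))
```

For example, A is solvable if B alone is a solved terminal, or if both C and D are; C alone is not enough, because it sits on an AND arc with D.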

An algorithm to find a solution in an AND-OR graph must handle AND arcs appropriately. The A* algorithm cannot search AND-OR graphs efficiently. This can be understood from the given figure.

FIGURE: AND-OR graph

In figure (a) the top node A has been expanded, producing two arcs, one leading to B and one leading to C-D. The numbers at each node represent the value of f' at that node (the cost of getting to the goal state from the current state). For simplicity, it is assumed that every operation (i.e., applying a rule) has unit cost: each arc with a single successor has a cost of 1, and so does each component of an AND arc. With the information available so far, it appears that C is the most promising node to expand, since its f' = 3 is the lowest. But going through B would be better, since to use C we must also use D, and the total cost would be 9 (3 + 4 + 1 + 1). Through B it would be 6 (5 + 1). Thus the choice of the next node to expand depends not only on its f' value but also on whether that node is part of the current best path from the initial node. Figure (b) makes this clearer. In figure (b) the node G appears to be the most promising node, with the least f' value. But G is not on the current best path, since to use G we must use the arc G-H, with a cost of 9, and this in turn demands that further arcs be used (with a cost of 27). The path from A through B, E-F is better, with a total cost of 18 (17 + 1). Thus we can see that to search an AND-OR graph, the following three things must be done:

1. Traverse the graph, starting at the initial node and following the current best path, and accumulate the set of nodes that are on that path and have not yet been expanded.
2. Pick one of these unexpanded nodes and expand it. Add its successors to the graph and compute f' (the cost of the remaining distance) for each of them.
3. Change the f' estimate of the newly expanded node to reflect the new information provided by its successors. Propagate this change backward through the graph, deciding at each node which of its arcs lies on the current best path.

This propagation of revised cost estimates backward through the tree is not necessary in the A* algorithm. It is needed in AO* because expanded nodes are re-examined so that the current best path can be selected. The working of the AO* algorithm is illustrated in the figure as follows:

Referring to the figure: the initial node is expanded, and D is marked initially as the most promising node. D is expanded, producing an AND arc E-F. The f' value of D is updated to 10. Going backwards, we can see that the AND arc B-C is now better; it is marked as the current best path, and B and C have to be expanded next. This process continues until either a solution is found or all paths have led to dead ends, indicating that there is no solution. In the A* algorithm, by contrast, the path from one node to another is always the one of lowest cost, and it is independent of the paths through other nodes.
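The backward cost revision just described can be sketched recursively. The sketch below charges one unit per edge, as the text assumes, and uses the h' values quoted for figure (a) (B = 5, C = 3, D = 4); the dictionary encoding is an assumption of this sketch.

```python
def revised_cost(node, graph, h, edge_cost=1):
    """f'(node): for a leaf, the heuristic estimate h'; otherwise the
    minimum over its arcs of the sum of (edge cost + f'(child)) for
    every child in the arc - an AND arc charges all of its components."""
    arcs = graph.get(node, [])
    if not arcs:
        return h[node]
    return min(sum(edge_cost + revised_cost(child, graph, h) for child in arc)
               for arc in arcs)

graph = {"A": [["B"], ["C", "D"]]}      # OR arc to B; AND arc to C and D
h = {"B": 5, "C": 3, "D": 4}
```

Here revised_cost("A", graph, h) is min(1 + 5, (1 + 3) + (1 + 4)) = 6, matching the conclusion that the path through B beats the cheaper-looking C.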

The algorithm for performing a heuristic search of an AND-OR graph is given below. Unlike the A* algorithm, which used two lists, OPEN and CLOSED, the AO* algorithm uses a single structure G. G represents the part of the search graph generated so far. Each node in G points down to its immediate successors and up to its immediate predecessors, and also carries the value h', the cost of a path from itself to a set of solution nodes. The cost of getting from the start node to the current node, g, is not stored as in the A* algorithm, because it is not possible to compute a single such value: there may be many paths to the same state. In the AO* algorithm, h' serves as the estimate of the goodness of a node. A threshold value called FUTILITY is also used: if the estimated cost of a solution becomes greater than FUTILITY, the search is abandoned as too expensive to be practical.

AO* ALGORITHM:
1. Let G consist only of the node representing the initial state. Call this node INIT. Compute h'(INIT).
2. Until INIT is labeled SOLVED or h'(INIT) becomes greater than FUTILITY, repeat the following procedure:
(I) Trace the marked arcs from INIT and select an unexpanded node NODE.

(II) Generate the successors of NODE. If there are none, then assign FUTILITY as h'(NODE); this means that NODE is not solvable. If there are successors, then for each one, called SUCCESSOR, that is not also an ancestor of NODE, do the following:

(a) Add SUCCESSOR to graph G.
(b) If SUCCESSOR is a terminal node, mark it SOLVED and assign zero to its h' value.
(c) If SUCCESSOR is not a terminal node, compute its h' value.
(III) Propagate the newly discovered information up the graph by doing the following. Let S be a set of nodes that have been marked SOLVED. Initialize S to NODE. Until S is empty, repeat the following procedure:
(a) Select a node from S, call it CURRENT, and remove it from S.
(b) Compute the cost of each of the arcs emerging from CURRENT. Assign the minimum of these as the new h' of CURRENT.

(c) Mark the minimum-cost path as the best arc out of CURRENT.
(d) Mark CURRENT SOLVED if all of the nodes connected to it through the newly marked arcs have been labeled SOLVED.

(e) If CURRENT has been marked SOLVED or its h' has just changed, its new status must be propagated back up the graph; hence all the ancestors of CURRENT are added to S. (Referred from Artificial Intelligence, TMH)

AO* SEARCH PROCEDURE
1. Place the start node on OPEN.
2. Using the search tree, compute the most promising solution tree TP.
3. Select a node n that is both on OPEN and a part of TP. Remove n from OPEN and place it on CLOSED.
4. If n is a goal node, label n as SOLVED. If the start node is SOLVED, exit with success, where TP is the solution tree. Remove from OPEN all nodes with a SOLVED ancestor.
5. If n is not a solvable node, label n as UNSOLVABLE. If the start node is labeled UNSOLVABLE, exit with failure. Remove from OPEN all nodes with unsolvable ancestors.
6. Otherwise, expand node n, generating all of its successors. Compute the cost for each newly generated node and place all such nodes on OPEN.
7. Go back to step 2.

Note: AO* will always find a minimum-cost solution.

BEST FIRST SEARCH PROCEDURE WITH THE A* ALGORITHM

A is the initial node, which is expanded to B, C and D. A heuristic function, say the cost of reaching the goal, is applied to each of these nodes. Since D is the most promising, it is expanded next, producing two successor nodes E and F. The heuristic function is applied to them. Now, of the four remaining nodes (B, C, E and F), B looks more promising and hence it is expanded, generating nodes G and H. When these are evaluated, E appears to be the best next step, so it is expanded, giving rise to nodes I and J. In the next step, J has to be expanded, since it is more promising. This process continues until a solution is found.

The figure above shows the best-first search tree. Since a search tree may generate duplicate nodes, usually a search graph is preferred. Best-first search is implemented by an algorithm known as the A* algorithm. The algorithm searches a directed graph in which each node represents a

point in the problem space. Each node contains a description of the problem state it represents, links to its parent and successor nodes, and an indication of how promising it is for the search process. The A* algorithm uses two lists. The list OPEN contains nodes that have been generated and had heuristic functions applied to them, but whose successors have not yet been generated. The list CLOSED contains nodes which have been examined, i.e., whose successors have been generated. A heuristic function f estimates the merit of each generated node. This function f has two components, g and h. The function g gives the cost of getting from the initial state to the current node. The function h is an estimate of the additional cost of getting from the current node to a goal state. The function f (= g + h) thus gives the cost of getting from the initial state to a goal state via the current node.

THE A* ALGORITHM:
1. Start with OPEN containing the initial node. Its g = 0 and f' = h'. Set CLOSED to the empty list.
2. Repeat:
If OPEN is empty, stop and return failure.
Else pick the BESTNODE on OPEN with the lowest f' value and place it on CLOSED.
If BESTNODE is a goal state, return success and stop.
Else generate the successors of BESTNODE. For each SUCCESSOR, do the following:
1. Set SUCCESSOR to point back to BESTNODE. (These back links will help to recover the path.)
2. Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR.
3. If SUCCESSOR is the same as any node on OPEN, call that node OLD and add OLD to BESTNODE's successors. Compare g(OLD) and g(SUCCESSOR). If g(SUCCESSOR) is cheaper, then reset OLD's parent link to point to BESTNODE and update g(OLD) and f'(OLD).
4. If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED OLD, proceed as in the previous step, and set the parent link and the g and f' values appropriately.

5. If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and add it to the list of BESTNODE's successors. Compute f'(SUCCESSOR) = g(SUCCESSOR) + h'(SUCCESSOR).

Best-first search will always find good paths to a goal without exploring the entire state space. All that is required is that a good measure of goal distance be used.

Breadth First Search Procedure: see the BFS algorithm given earlier.
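A minimal sketch of the OPEN/CLOSED loop described above, using Python's heapq to keep OPEN ordered by f' = g + h'. The graph, edge costs, and heuristic values are made-up sample data, and for brevity the sketch re-pushes improved nodes onto the heap instead of re-parenting an OLD node in place.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: node -> list of (successor, edge_cost); h: heuristic h'(n).
    OPEN is a heap ordered by f' = g + h'; CLOSED holds expanded nodes."""
    open_heap = [(h[start], 0, start, [start])]   # (f', g, node, path)
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        if node in closed:
            continue                               # stale duplicate entry
        closed.add(node)
        for succ, cost in graph.get(node, []):
            if succ not in closed:
                g2 = g + cost
                heapq.heappush(open_heap, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float("inf")

# Sample graph with assumed admissible heuristic values.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
path, cost = a_star(graph, h, "S", "G")
```

With an admissible h' (one that never overestimates), the first time the goal is popped from OPEN its path is optimal.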

DEPTH FIRST SEARCH PROCEDURE

ALGORITHM: DEPTH-FIRST SEARCH
1. Place the starting node in the queue.
2. If the queue is empty, return failure and stop.
3. If the first element on the queue is a goal node g, return success and stop. Otherwise,
4. Remove and expand the first element, and place the children at the front of the queue.
5. Go back to step 2.

What do you know about DFS?

Depth first search: This is a very simple type of brute-force searching technique. The search begins by expanding the initial node, i.e., by using an operator to generate all successors of the initial node, and testing them. This procedure finds whether the goal can be reached or not, but the path it has to follow is not specified. DFS performs its search by diving downward into the tree as quickly as possible.

Algorithm:
Step 1: Put the initial node on a list START.
Step 2: If START is empty or START = GOAL, terminate the search.
Step 3: Remove the first node from START; call this node a.
Step 4: If a = GOAL, terminate the search with success.
Step 5: Else, if node a has successors, generate all of them and add them at the beginning of START.
Step 6: Go to step 2.

The major drawback of DFS is the determination of the depth to which the search has to proceed; this depth is called the cutoff depth. The value of the cutoff depth is essential, because otherwise the search may go on and on. If the cutoff depth is small, the solution may not be found, and if the cutoff depth is large, the time complexity will be greater.

Advantages:
1. DFS requires less memory, since only the nodes on the current path are stored.
2. By chance, DFS may find a solution without examining much of the search space at all.
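The DFS procedure above, including the cutoff depth the text discusses, can be sketched as follows; the example tree is made up. Note that adding children at the tail of the list instead of the front would turn this into BFS.

```python
def depth_first_search(successors, start, goal, cutoff=20):
    """DFS with a depth cutoff. The START list is used as a stack:
    children go on the front, so the search dives downward first."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop(0)              # remove the first node
        if node == goal:
            return path
        if len(path) < cutoff:                 # cutoff-depth check
            children = [(c, path + [c]) for c in successors.get(node, [])]
            stack = children + stack           # add at the beginning of START
    return None

# Hypothetical tree for illustration.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
path = depth_first_search(tree, "A", "G")
```

Too small a cutoff misses the goal, exactly as the text warns: with cutoff=2 the search above returns no solution.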

ARTIFICIAL INTELLIGENCE APPLICATIONS

1. Problem Solving: This is the first application area of AI research. The objective of this area of research is how to implement procedures on AI systems that solve problems the way human beings do.

2. Game Playing: Much of the early research in state space search was done using common board games such as checkers, chess and the 8-puzzle. Most games are played using a well-defined set of rules. This makes it easy to generate the search space and frees the researcher from many of the ambiguities and complexities inherent in less structured problems. The board configurations used in playing these games are easily represented in a computer, requiring no complex formalisms. Solving large and complex AI problems requires techniques such as heuristics; much of what we commonly call intelligence seems to reside in the heuristics used by human beings to solve problems.

3. Theorem Proving: Theorem proving is another application area of AI research, e.g., proving Boolean algebra theorems. As humans, we first try to prove a lemma, which tells us whether the theorem has a feasible solution or not. If the theorem has a feasible solution we try to prove it; otherwise we discard it. Whether an AI system will likewise try to prove a lemma before attempting to prove a theorem is the focus of this application area of research.

4. Natural Language Understanding: The main goal here is that we can ask a question of the computer in our mother tongue, and the system can receive that language and give its response in the same language. The effective use of a computer has traditionally involved the use of a programming language or a set of commands with which we must communicate with the computer. The goal of natural language processing is to enable people to communicate with computers in natural language, such as English, rather than in a computer language. It can be divided into two sub-fields.
Natural Language Understanding: this investigates methods of allowing the computer to comprehend instructions given in ordinary English, so that computers can understand people more easily.

Natural Language Generation: this aims to have computers produce ordinary English, so that people can understand computers more easily.

5. Perception: The process of perception usually involves a set of operations: touching, smelling, listening, tasting and seeing. Incorporating these perceptual activities into intelligent computer systems is concerned mainly with the areas of natural language understanding and processing, and computer vision. There are two major challenges in the application area of perception:
1. Speech Recognition
2. Pattern Recognition

Speech Recognition: The main goal of this problem is how the computer system can recognize our speech. (The next step is to understand that speech and process it, i.e., encoding and decoding, producing the result in the same language.) This is very difficult. Speech recognition can be described in two ways:
1. Discrete Speech Recognition: people can interact with the computer in their mother tongue, but must insert a time gap between two words or two sentences (in this type of speech recognition the computer takes some time for searching the database).
2. Continuous Speech Recognition: when we interact with the computer in our mother tongue, we need not insert a time gap between two words or sentences; i.e., we can talk continuously with the computer (for this purpose the speed of the computer must be increased).

Pattern Recognition: Here the computer can identify real-world objects with the help of a camera. This too is very difficult, because:
- To identify regular-shaped objects, we can see the object from any angle and imagine its actual shape (i.e., picture which part the light has fallen on); through this we can identify the total structure of that particular object.

- To identify irregular-shaped things, seeing the thing from one angle does not let us imagine the actual structure. Here we can attach the Camera to the computer and picture the part of the image on which the light falls; with the help of that, can the AI system recognize the actual structure of the image or not? This is somewhat difficult compared to regular-shaped things, and research on it is still going on. It is related to the application area of Computer Vision.

A Pattern is a quantitative or structural description of an object or some other entity of interest in an Image. A pattern is formed by an arrangement of descriptors. Pattern recognition is the research area that studies the operation and design of systems that recognize patterns in data. It encloses discriminant analysis, feature extraction, error estimation, cluster analysis, and parsing (sometimes called syntactical pattern recognition). Important application areas are image analysis, character recognition, speech recognition and analysis, man and machine diagnostics, person identification and industrial inspection.

Closely Related Areas : 1. Pattern Recognition 2. Artificial Intelligence 3. Expert Systems and Machine Learning 4. Neural Networks 5. Computer Vision 6. Cognition 7. Perception 8. Image Processing.

6. Image Processing : Whereas in pattern recognition we catch the image of real-world things with the help of a Camera, the goal of Image Processing is to identify the relations between the parts of an image. It is a simple task to attach a Camera to a computer so that the computer can receive visual images. People generally use Vision as their primary means of sensing their environment; we generally see more than we hear. The question is how we can provide such perceptual faculties - touch, smell, taste, hearing - to the AI System. The goal of Computer Vision research is to give computers this powerful facility for understanding their surroundings.
Currently, one of the primary uses of Computer Vision is in the area of Robotics. Ex : We can take a Satellite image to identify the roads and forests; we can digitize the whole image and place it on disk, using a particular scale to convert the image into dot form, so that we can identify that particular image at any time. This is a time-consuming process; how to reduce the time needed to process an image is a question on which AI research is still going on. In Image Processing the process of image recognition can be broken into the following main stages.

Image capture, Edge detection, Segmentation, Recognition and Analysis.

Image capture can be performed by a simple Camera, which converts light signals into a set of electrical signals, as is done by the human visual system. These light signals are obtained as a set of 0s and 1s: each pixel takes on one of a number of possible values, often from 0 to 255. Color images are broken down in the same way, but with varying colors instead of gray scales. When a computer receives an image from a sensor in the form of a set of pixels, these pixels are integrated to give the computer an understanding of what it is perceiving.

Once an image has been obtained, the very first stage of analysis, called edge detection, is to determine where the edges are in the image. Almost all objects in the real world have solid edges of one kind or another, so detecting those edges is the first step in determining which objects are present in a scene. Once the edges have been detected in an image, this information can be used to segment the image into homogeneous areas. There are other methods available for segmenting an image apart from edge detection, such as the threshold method. This method involves finding the color of each pixel in an image and considering adjacent pixels to be in the same area as long as their colors are similar enough. A related method for segmenting images is splitting and merging. Splitting involves taking an area that is not homogeneous and splitting it into two or more smaller areas, each of which is homogeneous. Merging involves taking two areas that are the same as each other, and adjacent to each other, and combining them into one larger area. Together these provide a sophisticated iterative approach to segmenting an image. These stages are commonly grouped into Low Level, Intermediate Level and High Level Processing.

7. Expert System : An expert is a person who has complete knowledge of a particular field.
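The threshold method described above can be sketched in a few lines of Python. This is a toy illustration on a 3x3 gray-scale grid; the function name, grid values, and threshold are invented for the example, not taken from any particular vision library:

```python
# Threshold-based segmentation sketch: adjacent pixels are placed in the
# same region when their gray values differ by at most a threshold.
from collections import deque

def segment(image, threshold):
    """Label connected regions of similar pixels in a 2-D gray-scale grid."""
    rows, cols = len(image), len(image[0])
    labels = [[None] * cols for _ in range(rows)]
    region = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            region += 1                      # start a new homogeneous area
            queue = deque([(r, c)])
            labels[r][c] = region
            while queue:                     # grow the area breadth-first
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] is None
                            and abs(image[ny][nx] - image[y][x]) <= threshold):
                        labels[ny][nx] = region
                        queue.append((ny, nx))
    return labels

img = [[10, 12, 200],
       [11, 13, 210],
       [240, 250, 205]]
print(segment(img, 20))  # → [[1, 1, 2], [1, 1, 2], [3, 3, 2]]
```

The dark patch, the bright right column, and the bright bottom-left corner come out as three separate homogeneous regions, exactly the kind of grouping that the recognition stage would then operate on.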
The main aim of this problem is, with the help of experts, to load their tricks onto the computer and make those tricks available to other users. The expert can solve the problems within the available time.

The goal of this problem is how to load the tricks and ideas of an expert onto the computer; research on this is still going on.

8. Computer Vision : It is a simple task to attach a camera to a computer so that the computer can receive visual images. People generally use vision as their primary means of sensing their environment; we generally see more than we hear, feel, smell, or taste. The goal of computer vision research is to give computers this powerful facility for understanding their surroundings. Currently, one of the primary uses of computer vision is in the area of Robotics.

9. Robotics : A robot is an electro-mechanical device that can be programmed to perform manual tasks. The Robotics Industries Association formally defines a Robot as a programmable multi-functional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks. Not all robotics is considered to be part of AI. A Robot that performs only the actions that it has been pre-programmed to perform is considered to be a dumb robot; an intelligent robot includes some kind of sensory apparatus, such as a camera, that allows it to respond to changes in its environment, rather than just follow instructions mindlessly.

10. Intelligent Computer-Assisted Instruction : Computer-Assisted Instruction (CAI) has been used to bring the power of the computer to bear on the educational process. Now AI methods are being applied to the development of intelligent computerized tutors that shape their teaching techniques to fit the learning patterns of individual students.

11. Automatic Programming : Programming is the process of telling the computer exactly what we want it to do. The goal of automatic programming is to create special programs that act as intelligent tools to assist programmers and expedite each phase of the programming process.
The ultimate aim of automatic programming is a computer system that could develop programs by itself, in response to and in accordance with the specifications of the program developer.

12. Planning and Decision Support Systems : When we have a goal, either we rely on luck and providence to achieve that goal, or we design and implement a plan. The realization of a complex goal may require the construction of a formal and detailed plan. Intelligent planning programs are designed to provide active assistance in the planning process and are

expected to be particularly helpful to managers with decision-making responsibilities.

13. Engineering Design & Chemical Analysis : Artificial Intelligence applications are playing a major role in Engineering Drawing and Chemical Analysis, helping to design expert drawings and chemical syntheses.

14. Neural Architecture : People are more intelligent than Computers, but AI researchers are trying to make Computers intelligent. Humans are better at interpreting noisy input, such as recognizing a face in a darkened room from an odd angle. Even where humans may not be able to solve some problem, we generally can make a reasonable guess as to its solution. Neural architectures capture knowledge in a large number of units, and they are robust because knowledge is distributed somewhat uniformly around the network. Neural architectures also provide a natural model for parallelism, because each neuron is an independent unit. A serial machine slows down when searching a large database; a massively parallel architecture like the human brain would not suffer from this problem.

15. Heuristic Classification : The term Heuristic means to Find & Discover - find the problem and discover the solution. Solving complex AI problems requires a lot of knowledge and some representation mechanisms in the form of Heuristic Search Techniques; this is referred to as Heuristic Classification.

Statistical Reasoning and Fuzzy Logic. STATISTICAL REASONING : There are several techniques that can be used to augment knowledge representation techniques with statistical measures that describe levels of evidence and belief. An important goal for many problem-solving systems is to collect evidence as the system goes along and to modify its behavior accordingly; for this we need a statistical theory of evidence. Bayesian statistics is such a theory, which stresses conditional probability as its fundamental notion.
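As a small illustration of the conditional-probability idea, Bayes' rule P(H|E) = P(E|H)P(H) / P(E) can be computed directly. The prior and likelihood numbers below are invented purely for the example:

```python
# Bayes' rule: update belief in a hypothesis H after seeing evidence E.
def bayes(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from the prior P(H) and the two likelihoods."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Suppose a disease has prior probability 0.01, a test detects it with
# probability 0.9, and it false-alarms with probability 0.05.
posterior = bayes(0.01, 0.9, 0.05)
print(round(posterior, 3))  # → 0.154
```

Even strong evidence only raises the belief to about 15% here, because the prior was so low; this is exactly the kind of evidence-by-evidence belief revision a statistical reasoning system performs.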
FUZZY LOGIC In fuzzy logic, we consider what happens if we make fundamental changes to our idea of set membership and corresponding changes to our definitions of logical operations. While traditional set-theory defines set membership as a Boolean predicate, fuzzy set theory allows us to represent set membership as

a possibility distribution, for example over the set of tall people and the set of very tall people. This contrasts with the standard Boolean definition of tall people, where one is either tall or not and there must be a specific height that defines the boundary. The same is true for very tall. In fuzzy logic, one's tallness increases with one's height until the value 1 is reached, so it is a distribution. Once set membership has been redefined in this way, it is possible to define a reasoning system based on techniques for combining distributions. Such reasoners have been applied in control systems for devices as diverse as trains and washing machines.

Script. A script is a structured representation of background world knowledge. This structure contains knowledge about objects, actions, and situations that are described in the input text. Consider, for example, the knowledge we have about shopping or about entering a restaurant. This kind of stored knowledge about stereotypical events is called a Script.

Frames. FRAMES : Semantic networks and conceptual dependency can be used to represent specific events or experiences. A frame structure is used to analyze new situations from scratch and then build new knowledge structures to describe those situations. Typically, a frame describes a class of objects, such as CHAIR or ROOM. It consists of a collection of slots that describe aspects of the objects. Associated with each slot may be a set of conditions that must be met by any filler for it. Each slot may also be filled with a default value, so that in the absence of specific information, things can be assumed to be as they usually are. Procedural information may also be associated with particular slots. AI systems exploit not one but many frames; related frames can be grouped together to form a frame system. Frames represent an object as a group of attributes, and each attribute in a particular frame is stored in a separate slot.
For example, when a furniture salesman says "I have a nice chair that I want you to see," the word chair would immediately trigger in our minds a series of expectations. We would probably expect to see an object with four legs, a seat, a back and possibly (but not necessarily) two arms. We would expect it to have a particular size and to serve as a place to sit. In an AI system, a frame CHAIR might include knowledge organized as shown below:

Frame : CHAIR
Parts : seat, back, legs, arms
Number of legs : 4
Number of arms : 0 or 2

Conceptual Dependency (CD).
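The slot-and-filler idea behind the CHAIR frame can be sketched in Python. The slot names and defaults come from the frame above; the Frame class itself is a hypothetical illustration, not a standard library:

```python
# A tiny frame: named slots with optional default fillers, plus optional
# conditions that any new filler must satisfy.
class Frame:
    def __init__(self, name, defaults=None, conditions=None):
        self.name = name
        self.slots = dict(defaults or {})         # slot -> default filler
        self.conditions = dict(conditions or {})  # slot -> predicate on fillers

    def fill(self, slot, value):
        cond = self.conditions.get(slot)
        if cond is not None and not cond(value):
            raise ValueError(f"{value!r} is not a legal filler for {slot}")
        self.slots[slot] = value

chair = Frame(
    "CHAIR",
    defaults={"parts": ["seat", "back", "legs", "arms"], "number_of_legs": 4},
    conditions={"number_of_arms": lambda n: n in (0, 2)},
)
chair.fill("number_of_arms", 2)       # accepted: 0 or 2 arms are legal
print(chair.slots["number_of_legs"])  # default value used when nothing is known
```

Filling `number_of_arms` with, say, 3 raises an error, mirroring the slot conditions described above; defaults like the four legs stand in until specific information arrives.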

This representation is used in natural language processing in order to represent the meaning of sentences in such a way that inferences can be made from them. It is independent of the language in which the sentences were originally stated. A CD representation of a sentence is built out of primitives, which are not words belonging to the language but are conceptual; these primitives are combined to form the meanings of the words. As an example consider the event represented by the sentence.

In the above representation the symbols have the following meaning:

Arrows indicate the direction of dependency. A double arrow indicates a two-way link between the actor and the action. "p" indicates past tense. ATRANS is one of the primitive acts used by the theory; it indicates transfer of possession. "o" indicates the object case relation. "R" indicates the recipient case relation.

Conceptual dependency provides a structure in which knowledge can be represented, and also a set of building blocks from which representations can be built. A typical set of primitive actions is:

ATRANS - Transfer of an abstract relationship (Eg: give)
PTRANS - Transfer of the physical location of an object (Eg: go)
PROPEL - Application of physical force to an object (Eg: push)
MOVE - Movement of a body part by its owner (Eg: kick)
GRASP - Grasping of an object by an actor (Eg: throw)
INGEST - Ingesting of an object by an animal (Eg: eat)

EXPEL - Expulsion of something from the body of an animal (Eg: cry)
MTRANS - Transfer of mental information (Eg: tell)
MBUILD - Building new information out of old (Eg: decide)
SPEAK - Production of sounds (Eg: say)
ATTEND - Focusing of a sense organ toward a stimulus (Eg: listen)

A second set of building blocks is the set of allowable dependencies among the conceptualizations described in a sentence.

Semantic Nets. DECLARATIVE REPRESENTATIONS : The following are the four declarative mechanisms used for representing knowledge:
(a) Semantic Nets : These describe both objects and events in general.
(b) Conceptual Dependency : Provides a way of representing relationships among components of an action.
(c) Frames : A general structure to represent complex objects from several different points of view.
(d) Scripts : A sophisticated structure used to represent common sequences of events.

All these structures share the common notion that complex entities can be described as a collection of attributes and associated values (hence they are often called slot-and-filler structures), i.e., these structures have the form of ordered triples OBJECT x ATTRIBUTE x VALUE. Information can be retrieved from the knowledge base by an associative search with VALUE.

Semantic Nets : It is useful to think of semantic nets using graphical notation. Here information is represented as a set of nodes connected to each other by a set of labeled arcs, which represent relationships between the nodes. A typical example (with ISA and ISPART relationships) is shown below.

The knowledge in the above semantic network is represented inside a program using some kind of attribute value structure. The following gives a LISP representation of the above semantic net.

ATOM PROPERTY LIST

CHAIR ((IS A FURNITURE))

MY-CHAIR ((IS A CHAIR)

(COLOR TAN)

(COVERING LEATHER)

(OWNER ME ))

ME ((IS A PERSON))

TAN ((IS A BROWN))

SEAT ((ISPART CHAIR))
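The same property lists can be sketched in Python as a dictionary, together with a lookup that climbs the ISA chain so that properties of a general concept are inherited by its specializations (the helper name is hypothetical):

```python
# Property lists from the semantic net above, as a Python dictionary.
net = {
    "CHAIR":    {"ISA": "FURNITURE"},
    "MY-CHAIR": {"ISA": "CHAIR", "COLOR": "TAN",
                 "COVERING": "LEATHER", "OWNER": "ME"},
    "ME":       {"ISA": "PERSON"},
    "TAN":      {"ISA": "BROWN"},
    "SEAT":     {"ISPART": "CHAIR"},
}

def lookup(node, attribute):
    """Find an attribute on a node, inheriting along the ISA chain."""
    while node is not None:
        props = net.get(node, {})
        if attribute in props:
            return props[attribute]
        node = props.get("ISA")   # climb to the more general concept
    return None

print(lookup("MY-CHAIR", "COLOR"))  # → TAN
print(lookup("MY-CHAIR", "ISA"))    # → CHAIR
```

If a slot such as COST were stored on FURNITURE, the same `lookup` would find it for MY-CHAIR via the chain MY-CHAIR → CHAIR → FURNITURE; that associative, inheritance-based retrieval is the point of the net.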

In predicate logic the above may be represented as IS A (chair, furniture)

IS A ( me, person)

COVERING ( my-chair, leather)

COLOR (my-chair, tan)

Knowledge Acquisition. KNOWLEDGE ACQUISITION : How can we build expert systems? Typically a knowledge engineer interviews a domain expert to elicit expert knowledge, which is then translated into rules. An initial system is then built, and it must be refined until it approximates expert-level performance. This process is expensive and time-consuming, so it is necessary to look for more automatic ways of constructing expert knowledge bases. No fully automatic knowledge-acquisition systems exist yet, but there are many programs that interact with domain experts to extract knowledge efficiently. These programs provide support for the following activities:
A. Entering knowledge
B. Maintaining knowledge base consistency
C. Ensuring knowledge base completeness
Further, statistical techniques, such as multivariate analysis, provide an alternative approach to building expert-level systems.

Expert Systems. EXPERT SYSTEMS
OBJECTIVES : On completion of this lesson, you should be able to
- Explain the meaning of expert system.
- Explain the capabilities of expert system.
- Explain the role of knowledge acquisition.
- Explain the importance of knowledge representation.
- Explain the approaches to knowledge representation.
- Explain the issues in knowledge representation.
- Explain what the frame problem is.
- Explain the importance of predicate logic.

- Explain the use of rules in representing knowledge.
- Explain forward versus backward reasoning.
- Explain matching.
- Explain statistical reasoning, fuzzy logic, semantic nets, frames, conceptual dependency and scripts.
- Explain case-based reasoning.
- Give a short note on DENDRAL and MYCIN.

Expert systems solve problems that are normally solved by human experts. To solve expert-level problems,

(A) Expert systems need access to a substantial domain knowledge base, which must be built as efficiently as possible.
(B) Expert systems also need to exploit one or more reasoning mechanisms to apply their knowledge to the given problems.
(C) Expert systems need a mechanism for explaining what they have done to the users who rely on them.
(D) Expert systems represent applied AI in a very broad sense.

The problems that expert systems deal with are highly diverse. There are some general issues that arise across varying domains; also, there are powerful techniques that can be defined for specific classes of problems. Some key problem characteristics play an important role in guiding the design of problem-solving systems. For example, tools that are developed to support one classification or diagnosis task are often useful for another, while different tools are useful for solving various kinds of design tasks. Expert systems are complex AI programs; almost all the techniques of AI (including heuristic techniques) are used in expert systems. The most widely used way of representing domain knowledge in expert systems is a set of production rules, which are often coupled with a frame system that defines the objects that occur in the rules. MYCIN is one such system.
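A minimal sketch of the production-rule idea follows. The rules and facts are invented for illustration; a real system such as MYCIN uses far richer rules with certainty factors, but the fire-rules-until-nothing-changes loop is the same in spirit:

```python
# Forward chaining over simple production rules: each rule pairs a set of
# condition facts with a fact to conclude.
rules = [
    ({"fever", "rash"},   "suspect-measles"),
    ({"suspect-measles"}, "recommend-isolation"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires, adding a new fact
                changed = True
    return facts

print(sorted(forward_chain({"fever", "rash"}, rules)))
```

Starting from the two observed symptoms, the first rule fires and its conclusion then enables the second rule, so the recommendation is derived in two chained steps.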

If an expert system is to be an effective tool, people must be able to interact with it easily. To facilitate this interaction, the expert system must have the following two capabilities in addition to the ability to perform its underlying task.

(A) Explain its reasoning : In many of the domains in which expert systems operate, people will not accept results unless they have been convinced of the accuracy of the reasoning process that produced those results. This is particularly true, for example, in medicine, where a doctor must accept ultimate responsibility for a diagnosis, even if that diagnosis was arrived at with considerable help from a program. Thus it is important that the reasoning process used in such programs proceed in understandable steps, and that enough meta-knowledge (knowledge about the reasoning process) be available so that explanations of those steps can be generated.

(B) Acquire new knowledge and modify old knowledge : Since expert systems derive their power from the richness of the knowledge bases they exploit, it is extremely important that those knowledge bases be as complete and as accurate as possible. But often there exists no standard codification of that knowledge; rather it exists only inside the heads of human experts. One way to extract it is to interview a human expert; another way is to have the program learn expert behavior from raw data.

Alpha-Beta Cut-Offs (Pruning). ALPHA-BETA pruning

is a method that reduces the number of nodes explored by the Minimax strategy. It reduces the time required for the search: the search is restricted so that no time is wasted exploring moves that are obviously bad for the current player. The implementation of alpha-beta keeps track of the best move for each side as it moves through the tree. We proceed in the same (preorder) way as for the minimax algorithm. For MIN nodes, the score computed starts at +infinity

and decreases with time. For MAX nodes, the score computed starts at -infinity and increases with time. The efficiency of the Alpha-Beta procedure depends on the order in which the successors of a node are examined. If we are lucky, at a MIN node we always consider the nodes in order from low to high score, and at a MAX node in order from high to low score. In general it can be shown that in the most favorable circumstances an alpha-beta search opens only as many leaves as minimax would on a game tree of half its depth. Here is an example of Alpha-Beta search.

Alpha-Beta algorithm : The algorithm maintains two values, alpha and beta, which represent the minimum

score that the maximizing player is assured of and the maximum score that the minimizing player is assured of respectively. Initially alpha is negative infinity and beta is positive infinity. As the recursion progresses the "window" becomes smaller. When beta becomes less than alpha, it means that the current position cannot be the

result of best play by both players and hence need not be explored further.
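As a concrete illustration of this window-narrowing, here is a runnable sketch of alpha-beta on a toy game tree; the tree shape and leaf scores are invented for the example:

```python
import math

# Alpha-beta search over a toy game tree: internal nodes are lists of
# children, leaves are heuristic scores.
def evaluate(node, alpha, beta, maximizing):
    if not isinstance(node, list):   # leaf: return its heuristic value
        return node
    if maximizing:
        for child in node:
            alpha = max(alpha, evaluate(child, alpha, beta, False))
            if beta <= alpha:        # cut-off: MIN will avoid this line anyway
                break
        return alpha
    else:
        for child in node:
            beta = min(beta, evaluate(child, alpha, beta, True))
            if beta <= alpha:        # cut-off: MAX will avoid this line anyway
                break
        return beta

tree = [[3, 5], [6, 9], [1, 2]]      # MAX to move; each sublist is a MIN node
print(evaluate(tree, -math.inf, math.inf, True))  # → 6
```

While exploring the third MIN node, the first leaf (1) already drives beta below alpha (6), so the leaf 2 is never examined; that skipped leaf is exactly the pruning the pseudocode below describes.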

Pseudocode for the alpha-beta algorithm is given below.

evaluate (node, alpha, beta)
    if node is a leaf
        return the heuristic value of node
    if node is a minimizing node
        for each child of node
            beta = min (beta, evaluate (child, alpha, beta))
            if beta <= alpha
                return beta
        return beta
    if node is a maximizing node
        for each child of node
            alpha = max (alpha, evaluate (child, alpha, beta))
            if beta <= alpha
                return alpha
        return alpha

Fuzzy Concepts. The notion central to fuzzy systems is that truth values (in fuzzy logic) or membership values (in fuzzy sets) are indicated by a value in the range [0.0, 1.0], with 0.0 representing absolute Falseness and 1.0 representing absolute Truth. For example, take the statement: "Jane is old." If Jane's age is 75, we might assign the statement the truth value of 0.80. The statement could be translated into set terminology as follows: "Jane is a member of the set of old people." This statement would be rendered symbolically with fuzzy sets as: mOLD(Jane) = 0.80, where m is the membership function, operating in this case on the fuzzy set of old people, which returns a value between 0.0 and 1.0.

At this juncture it is important to point out the distinction between fuzzy systems and probability. Both operate over the same numeric range, and at first glance both have similar values: 0.0 representing False (or non-membership), and 1.0 representing True (or membership). However, there is a distinction to be made between two statements. The probabilistic approach yields the natural-language statement "There is an 80% chance that Jane is old," while the fuzzy terminology corresponds to "Jane's degree of membership within the set of old people is 0.80." The semantic difference is significant: the first view supposes that Jane is or is not old (still caught in the Law of the Excluded Middle); it is just that we only have an 80% chance of knowing which set she is in. By contrast, fuzzy terminology supposes that Jane is "more or less" old, or some other term corresponding to the value of 0.80. Further distinctions arising out of the operations will be noted below.

The next step in establishing a complete system of fuzzy logic is to define the operations of EMPTY, EQUAL, COMPLEMENT (NOT), CONTAINMENT, UNION (OR), and INTERSECTION (AND). Before we can do this rigorously, we must state some formal definitions:

Definition 1 : Let X be some set of objects, with elements noted as x. Thus, X = {x}.
Definition 2 : A fuzzy set A in X is characterized by a membership function mA(x) which maps each point in X onto the real interval [0.0, 1.0]. As mA(x) approaches 1.0, the "grade of membership" of x in A increases.
Definition 3 : A is EMPTY iff for all x, mA(x) = 0.0.
Definition 4 : A = B iff for all x: mA(x) = mB(x) [or, mA = mB].
Definition 5 : mA' = 1 - mA.
Definition 6 : A is CONTAINED in B iff mA <= mB.

Definition 7 : C = A UNION B, where: mC(x) = MAX(mA(x), mB(x)).
Definition 8 : C = A INTERSECTION B, where: mC(x) = MIN(mA(x), mB(x)).

It is important to note the last two operations, UNION (OR) and INTERSECTION (AND), which represent the clearest point of departure from a probabilistic theory of sets to fuzzy sets. Operationally, the differences are as follows: For independent events, the probabilistic operation for AND is multiplication, which (it can be argued) is counterintuitive for fuzzy systems. For example, let us presume that x = Bob, S is the fuzzy set of smart people, and T is the fuzzy set of tall people. Then, if mS(x) = 0.90 and mT(x) = 0.90, the probabilistic result would be: mS(x) * mT(x) = 0.81, whereas the fuzzy result would be: MIN(mS(x), mT(x)) = 0.90. The probabilistic calculation yields a result that is lower than either of the two initial values, which when viewed as "the chance of knowing" makes good sense. However, in fuzzy terms the two membership functions would read something like "Bob is very smart" and "Bob is very tall." If we presume for the sake of argument that "very" is a stronger term than "quite," and that we would correlate "quite" with the value 0.81, then the semantic difference becomes obvious. The probabilistic calculation would yield the statement "If Bob is very smart, and Bob is very tall, then Bob is a quite tall, smart person." The fuzzy calculation, however, would yield "If Bob is very smart, and Bob is very tall, then Bob is a very tall, smart person." Another problem arises as we incorporate more factors into our equations (such as the fuzzy set of heavy people, etc.). We find that the ultimate result of a series of AND's approaches 0.0, even if all factors are initially high. Fuzzy theorists argue that this is wrong: that five factors of the value 0.90 (let us

say, "very") AND'ed together should yield a value of 0.90 (again, "very"), not 0.59 (perhaps equivalent to "somewhat"). Similarly, the probabilistic version of A OR B is (A + B - A*B), which approaches 1.0 as additional factors are considered. Fuzzy theorists argue that a string of low membership grades should not produce a high membership grade; instead, the limit of the resulting membership grade should be the strongest membership value in the collection. The skeptical observer will note that the assignment of values to linguistic meanings (such as 0.90 to "very"), and vice versa, is a most imprecise operation. Fuzzy systems, it should be noted, lay no claim to establishing a formal procedure for assignments at this level; in fact, the only argument for a particular assignment is its intuitive strength. What fuzzy logic does propose is to establish a formal method of operating on these values, once the primitives have been established.

Fuzzy Concepts. Hedges : Another important feature of fuzzy systems is the ability to define "hedges," or modifiers of fuzzy values. These operations are provided in an effort to maintain close ties to natural language, and to allow for the generation of fuzzy statements through mathematical calculations. As such, the initial definition of hedges and operations upon them will be quite a subjective process and may vary from one project to another. Nonetheless, the system ultimately derived operates with the same formality as classic logic. The simplest example is one in which we transform the statement "Jane is old" to "Jane is very old." The hedge "very" is usually defined as follows: m"very"A(x) = mA(x)^2. Thus, if mOLD(Jane) = 0.8, then mVERYOLD(Jane) = 0.64. Other common hedges are "more or less" [typically SQRT(mA(x))], "somewhat,"

"rather," "sort of," and so on. Again, their definition is entirely subjective, but their operation is consistent: they serve to transform membership/truth values in a systematic manner according to standard mathematical functions. A more involved approach to hedges is best shown through the work of Wenstop in his attempt to model organizational behavior. For his study, he constructed arrays of values for various terms, either as vectors or matrices. Each term and hedge was represented as a 7-element vector or a 7x7 matrix. He then intuitively assigned each element of every vector and matrix a value between 0.0 and 1.0, inclusive, in what he hoped was a consistent manner. For example, the term "high" was assigned the vector 0.0 0.0 0.1 0.3 0.7 1.0 1.0, and "low" was set equal to the reverse of "high," or 1.0 1.0 0.7 0.3 0.1 0.0 0.0. Wenstop was then able to combine groupings of fuzzy statements to create new fuzzy statements, using the APL function of Max-Min matrix multiplication. These values were then translated back into natural language statements, so as to allow fuzzy statements as both input to and output from his simulator. For example, when the program was asked to generate a label "lower than sort of low," it returned "very low"; "(slightly higher) than low" yielded "rather low," etc. The point of this example is to note that algorithmic procedures can be devised which translate "fuzzy" terminology into numeric values, perform reliable operations upon those values, and then return natural language statements in a reliable manner.

Fuzzy Inferencing

The process of fuzzy reasoning is incorporated into what is called a Fuzzy Inferencing System. It comprises three steps that process the system inputs into the appropriate system outputs: 1) Fuzzification, 2) Rule Evaluation, and 3) Defuzzification. The system is illustrated in the following figure.
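The membership values, connectives and hedges defined in the preceding sections can be sketched as follows. The membership numbers are the ones assumed in the Jane and Bob examples above; the variable names are invented for the sketch:

```python
import math

# Fuzzy connectives and hedges, following the definitions in the text:
# NOT = 1 - m, AND = MIN, OR = MAX, "very" = m^2, "more or less" = sqrt(m).
m_old_jane = 0.80                         # mOLD(Jane), assumed as in the text

very_old = m_old_jane ** 2                # hedge "very": 0.64
more_or_less_old = math.sqrt(m_old_jane)  # hedge "more or less"
not_old = 1.0 - m_old_jane                # complement (Definition 5)

m_smart_bob, m_tall_bob = 0.90, 0.90
smart_and_tall = min(m_smart_bob, m_tall_bob)  # fuzzy AND keeps 0.90
smart_or_tall = max(m_smart_bob, m_tall_bob)   # fuzzy OR

print(round(very_old, 2), round(not_old, 2), smart_and_tall)
```

Note that AND'ing any number of 0.90 memberships with MIN stays at 0.90, while the probabilistic product would keep shrinking toward 0.0, which is exactly the contrast the text draws.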

1. Fuzzification is the first step in the fuzzy inferencing process. This involves a domain transformation where crisp inputs are transformed into fuzzy inputs. Crisp inputs are exact inputs measured by sensors and passed into the control system for processing, such as temperature, pressure, rpm, etc. Each crisp input that is to be processed by the FIU has its own group of membership functions or sets to which it is transformed. This group of membership functions exists within a universe of discourse that holds all relevant values that the crisp input can possess. The following shows the structure of membership functions within a universe of discourse for a crisp input.

2. Degree of membership :

the degree to which a crisp value is compatible with a membership function; a value from 0 to 1, also known as the truth value or fuzzy input.
3. Membership function (MF) : defines a fuzzy set by mapping crisp values from its domain to the set's associated degree of membership.
4. Crisp inputs : distinct or exact inputs to a certain system variable, usually parameters measured external to the control system, e.g. 6 Volts.
5. Label : a descriptive name used to identify a membership function.
6. Scope : or domain, the width of the membership function; the range of concepts, usually numbers, over which a membership function is mapped.
7. Universe of discourse : the range of all possible values, or concepts, applicable to a system variable.

When designing the number of membership functions for an input variable, labels must first be determined for the membership functions. The number of labels corresponds to the number of regions into which the universe should be divided, such that each label describes a region of behavior. A scope must be assigned to each membership function that numerically identifies the range of input values that correspond to a label. The shape of the membership function should be representative of the variable; however, this shape is also restricted by the computing resources available, since complicated shapes require more complex descriptive equations or large lookup tables. The next figure shows examples of possible shapes for membership functions.

Knowledge Based Computer System (KBCS). Abstract : The IT industry in India has been growing at above 25% annually for several years now. Information Technology has emerged as a dominant sector of the Indian

Economy. Current education offers a variety of courses with all combinations of the words Computer, Information and Software with Science, Engineering and Applications. The software industry in India has seen recognized growth in the last decade and is expected to play a much bigger role in the near future for the growing Indian economy. A Knowledge Based Computer System (KBCS) is a process for extending a knowledge base. Knowledge means information that can be used in a decision-making process with understanding and accumulated experience. Many application areas of AI research have an appreciated mechanism for implementing AI systems by using knowledge bases; i.e., Knowledge Based Computer Systems, with the desired implications, are required in such application areas.

I. What is AI?

AI is a name given to scientific research; its origin is Japan. The primary goal of this AI research is to develop an intelligent computer system. The term intelligence here means making computers do things at which, at the moment, people do better; i.e., like a human brain, the computer can take its own decisions automatically depending upon the situation. Thus AI has both scientific and engineering goals. AI is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior. This definition raises the question of what intelligent behavior is. In view of the difficulty of defining intelligence, let us instead characterize it through a list of characteristics by which we can identify human intelligence. AI is also related to the similar task of using computers to understand human intelligence. The term AI refers to intelligent behavior in artifacts, where artifacts are man-made machines. Thus AI is related to psychology, cognition, and behavioral science. The following are characteristics possessed by an AI system: 1. Perception 2.
Reasoning 3. Learning 4. Communicating 5. Acting in complex environments.

What is intelligence? Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines. The following are some measurable characteristics of intelligence that can be recognized in a computer system: 1. To respond to situations very flexibly. 2. To make sense out of ambiguous or contradictory messages. 3. To recognize the relative importance of different elements of a situation. 4. To find similarities between situations despite the differences which may separate them.

5. To draw distinctions between situations despite the similarities which may link them. AI is the branch of computer science dealing with symbolic, non-algorithmic methods of problem solving. AI researchers have shown that people are more intelligent than computers; AI tries to improve the performance of computers in activities that people do better, so the goal of AI is to make computers more intelligent. AI research has shown that intelligence requires knowledge, and knowledge itself possesses some less desirable properties in real-life situations:
- It is voluminous.
- It is hard to characterize accurately.
- It is constantly changing.
AI is the branch of computer science that deals with ways of representing knowledge by using symbols rather than numbers, and with rules of thumb, or heuristic methods, for processing it. AI works with pattern-matching methods, which attempt to describe objects, events and processes in terms of their qualitative features and logical and computational relationships. While reading the above definitions, one must keep in mind that AI is a fast-developing new science.

II. Will Artificial Intelligence Applications Rule Future Information Technology?

Yes. AI applications are the talk of the IT industry today. Pattern recognition, image processing and expert systems (Knowledge Based Computer Systems) are the major concerns of AI research. The computers of today are knowledge information processing systems. Expert systems, in turn, embody modules of organized knowledge about specific areas of human expertise. They also support sophisticated problem-solving and inference functions, providing users with a source of intelligent advice on some specialized topic. Expert systems also provide human-oriented I/O in the form of natural language, speech, and picture images.
For example, an expert system for medical diagnosis could operate in a way analogous to the way a physician, a surgeon, and a patient interact and use their knowledge to make a diagnosis.

KNOWLEDGE BASED COMPUTER SYSTEM

The KBCS division carries out research and development in selected subfields of Artificial Intelligence. A KBCS is a computer program designed to act as an expert in a specific field of knowledge. Such systems are designed to solve complex problems.

A KBCS simulates the human reasoning process, applying specific knowledge and inferences.

Characteristics of a KBCS:
1. Expands the knowledge base for the domain of interest.
2. Supports heuristic analysis.
3. Applies search techniques.
4. Has the capability to infer new knowledge from existing knowledge.
5. Uses the symbol-processing approach.
6. Can explain its own reasoning.

A KBCS can help by acting as an intelligent assistant to human experts or, in certain cases, even by replacing the human experts.

STRUCTURE OF A KNOWLEDGE BASED COMPUTER SYSTEM

The major components of an expert system are the Knowledge Base, Communication Interface, Inference Engine and User Interface.

1. Knowledge Base: An expert system can give intelligent answers only with sufficient knowledge about the field. The component of the expert system that contains the system's knowledge is called the Knowledge Base. It is a vital component of a KBCS. The Knowledge Base consists of declarative knowledge and procedural knowledge. Declarative knowledge consists of facts about objects, events and situations; procedural knowledge consists of courses of action. Knowledge representation is the process of putting knowledge into the system's knowledge base in the form of facts, using these two types. The KBCS then uses reasoning to draw conclusions from the stored facts.

2. Inference Engine: If the system has knowledge, it must be capable of using that knowledge in an appropriate manner. The system must know how and when to apply the knowledge; i.e., the Inference Engine works as a control program to decide the direction of the search in a KBCS. A KBCS uses different types of search techniques to find the solutions of given problems. Search is the name given to the process of sifting through alternative solutions to reach the goal state. The search is carried

out through the search space: the problem is moved from the initial state to the goal state through the state space. The Inference Engine decides which heuristic search techniques are used and how the rules in the knowledge base are to be applied to the problem. The Inference Engine is independent of the knowledge base.

3. User Interface: The user must be able to communicate with the system. The component through which a KBCS communicates with its user is known as the user interface, and it provides bidirectional communication. The system should be able to ask for the additional information needed to arrive at a solution, and the user may want to know the reasoning behind the facts. Thus it is important to have a user interface that can be used by common people.

KBCS DEVELOPMENT: The development of a KBCS may require a team of several people working together. Two types of people are involved in the development of an expert system: 1. Domain Experts 2. Knowledge Engineers. Domain experts provide the information for the knowledge base; the knowledge engineer develops the Knowledge Based Computer System. The following are the different stages in the development of a KBCS: 1. Identification 2. Conceptualization 3. Formalization 4. Implementation 5. Testing.

1. Identification: In this phase the knowledge engineer and domain expert work closely together to describe the problem that the KBCS is expected to solve. Such an interactive procedure is typical of the entire KBCS development process. Additional resources, such as other experts, knowledge engineers and reference journals, are also identified in the identification stage.

2. Conceptualization: This stage involves analyzing the problem. The knowledge engineer prepares a graphical representation of the relationships between the objects and the processes in the problem domain. The problem is decomposed into subproblems and their interrelationships are properly conceptualized. The identification stage may then be revised; such an iterative process can occur in any stage of development.

3.
Formalization: Identification and conceptualization are both concerned with understanding the problem. Formalization means that the problem is connected to its KBCS by analyzing the relations

mentioned in conceptualization. The knowledge engineer selects the development techniques that are appropriate to the required KBCS. He must be familiar with: 1. The different types of AI techniques, i.e. the heuristic search techniques and knowledge representation mechanisms used in the development of a KBCS. 2. The KBCS tools involved in the development of a KBCS. 3. The forms a KBCS can take: rule based and model based. In a rule-based KBCS the knowledge engineer develops the set of rules, which are then modified and revised by the domain expert.

4. Implementation: During the implementation stage the knowledge engineer determines whether the correct techniques were chosen. If not, the knowledge engineer must reformalize the concepts or use new development tools.

5. Testing: Testing provides an opportunity for the knowledge engineer to identify the strengths and weaknesses of the system, which leads to identifying whether any modifications are required.

The following example shows the kind of control structure used in the development of a KBCS, using symbol manipulation and IF-THEN implication.

Symbol Manipulation: In expert systems (Knowledge Based Computer Systems), knowledge is often represented in terms of IF-THEN rules of the form:

IF Condition 1 and Condition 2 and ... and Condition n THEN implication (with significance)

If all conditions are true, then the implication is true, with an associated logical significance factor. While a set of rules is searched, an overall significance factor is maintained, and when this significance becomes unacceptably low the search is abandoned and a new set of rules is searched.
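The IF-THEN rule structure with significance factors can be sketched as follows. The rules, facts, and threshold below are illustrative assumptions, not taken from the text.

```python
# A sketch of the IF ... THEN rule structure above. Each rule carries a
# significance factor; firing a rule multiplies it into the running overall
# significance, and the search of this rule set is abandoned once that drops
# below a threshold. The rules, facts and threshold are illustrative
# assumptions, not taken from the text.

THRESHOLD = 0.2

rules = [
    # ({condition facts}, implication, significance factor)
    ({"engine_hot", "coolant_low"}, "leak_suspected", 0.9),
    ({"leak_suspected"}, "stop_engine", 0.8),
]

def evaluate(initial_facts, rules, threshold=THRESHOLD):
    """Fire rules in order, tracking the overall significance factor."""
    facts = set(initial_facts)
    significance = 1.0
    for conditions, implication, factor in rules:
        if conditions <= facts:            # all conditions true: rule fires
            significance *= factor
            if significance < threshold:   # unacceptably low: abandon search
                break
            facts.add(implication)
    return facts, round(significance, 3)

facts, sig = evaluate({"engine_hot", "coolant_low"}, rules)
print(sorted(facts), sig)
```

This single ordered pass is a simplification: a full inference engine would re-scan the rule set until no new implication can be added, and would then move on to the next set of rules when the significance falls below the threshold.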

This structure of expert systems is most closely matched by the structure of logic programming (its computational model), as in logic programming languages such as LISP and PROLOG. Prolog statements are relations of a restricted form called clauses, and the execution of such a program is a suitably controlled logical deduction from the clauses forming the program. A clause is a well-formed formula consisting of a conjunction and disjunction of literals. The following logic program for a family tree consists of four clauses:

Father (Bill, John)
Father (John, Tom)
Grandfather (X,Z) :- father (X,Y), mother (Y,Z).
Grandfather (X,Z) :- father (X,Y), father (Y,Z).

The first two clauses state that Bill is the father of John and John is the father of Tom; the second two clauses use the variables X, Y and Z to express the rule that X is the grandfather of Z if X is the father of Y and Y is either the mother or the father of Z. Such a program can be asked a range of questions, from "Is John the father of Tom?" [Father (John, Tom)?] to "Is there any A who is the grandfather of C?" [Grandfather (A, C)?].

The possible operation of a computer based on logic is illustrated below using the family tree program. Execution of, for example, Grandfather (Bill, R)? will match each Grandfather ( ) clause:

Grandfather ( X=Bill, Z=R ) :- father (Bill,Y), mother (Y,R).
Grandfather ( X=Bill, Z=R ) :- father (Bill,Y), father (Y,R).

Both clauses will attempt in parallel to satisfy their goals; such a concept is called OR parallelism. The first clause fails, being unable to satisfy its Mother( ) goal from the program. The second clause has the goals Father( ), Father( ), which it attempts to solve in parallel; such a concept is called AND parallelism. The latter involves pattern-matching methods and substitution to satisfy both individual goals.
Grandfather (X=Bill, Z=R) :- father (Bill,Y), father (Y,R).
:- father (Bill, Y=John), father (Y=Bill, R=John).   (each goal matched independently)
And with overall consistency:
:- father (Bill, Y=John), father (Y=John, R=Tom).
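The four-clause family-tree program and the Grandfather (Bill, R)? query above can be mirrored in Python. The generator below enumerates variable bindings much as Prolog's backtracking search would, though sequentially rather than with OR/AND parallelism.

```python
# A Python mirror of the four-clause family-tree program above. The generator
# enumerates variable bindings for Grandfather much as Prolog's backtracking
# search would (sequentially here, rather than with OR/AND parallelism).

father = {("Bill", "John"), ("John", "Tom")}
mother = set()   # the program above defines no mother facts

def grandfather(x=None, z=None):
    """Yield (X, Z) pairs satisfying either Grandfather clause."""
    for fx, y in father:                  # goal: father(X, Y)
        if x is not None and fx != x:
            continue
        # clause 1: ..., mother(Y, Z) -- always fails here, mother is empty
        for my, mz in mother:
            if my == y and (z is None or mz == z):
                yield (fx, mz)
        # clause 2: ..., father(Y, Z)
        for fy, fz in father:
            if fy == y and (z is None or fz == z):
                yield (fx, fz)

# The query Grandfather (Bill, R)? binds R = Tom:
print(list(grandfather("Bill")))   # [('Bill', 'Tom')]
```

Leaving both arguments as None corresponds to the fully open query Grandfather (A, C)?, which enumerates every grandfather pair derivable from the facts.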

Computer organizations supporting expert systems are highly microprogrammed (control-flow based). PROLOG machines are analogous to current LISP machines, and we can expect a number of such designs in the near future. PROLOG machines are not true logic machines, just as LISP machines are not considered reduction machines, linked by a common logic machine language and architecture.

Future Potential: Further developments in the area of AI research look hopeful. The Fifth Generation Project forms the basis of what is called intelligent consumer electronics. Further development of this type of computer is motivated by the fact that these electronics will be a major money-earning industry.

FINDINGS

Knowledge Based Computer Systems (KBCS), as depicted in AI applications, have an appreciated mechanism for implementing AI systems using knowledge bases. It is found that knowledge based computer systems are to be developed in suitable areas where knowledge can be represented appropriately with the desired implications.

CONCLUSION

While reading the above information on AI and the role of knowledge based systems depicted in AI applications, one must keep in mind that AI is a new, developing science and a Knowledge Based Computer System (KBCS) is a process for extending a knowledge base.
