
CS6659-ARTIFICIAL INTELLIGENCE

VI SEMESTER
SYLLABUS

UNIT - 1
INTRODUCTION TO AI AND PRODUCTION SYSTEMS
Introduction to AI - Problem formulation, Problem Definition - Production systems,
Control strategies, Search strategies, Problem characteristics, Production system characteristics -
Specialized production systems - Problem solving methods - Problem graphs, Matching, Indexing
and Heuristic functions - Hill Climbing - Depth first and Breadth first, Constraint satisfaction -
Related algorithms, Measure of performance and analysis of search algorithms.
Introduction to AI
What is artificial intelligence?
Artificial Intelligence (AI) is a branch of Science which deals with helping
machines find solutions to complex problems in a more human like fashion.
Major AI textbooks define artificial intelligence as "the study and design of
intelligent agents," where an intelligent agent is a system that perceives its environment
and takes actions which maximize its chances of success.
The definitions of AI according to some text books are categorized into four
approaches and are summarized in the table below :

Systems that think like humans: “The exciting new effort to make computers think ... machines with minds, in the full and literal sense.” (Haugeland, 1985)

Systems that think rationally: “The study of mental faculties through the use of computer models.” (Charniak and McDermott, 1985)

Systems that act like humans: “The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)

Systems that act rationally: “Computational intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)

Agents and environments


An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators. This simple idea is illustrated in Figure 1.2.

 A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and other
body parts for actuators.
 A robotic agent might have cameras and infrared range finders for sensors and various motors
for actuators.
 A software agent receives keystrokes, file contents, and network packets as sensory inputs and
acts on the environment by displaying on the screen, writing files, and sending network packets.
Figure 1.2 Agents interact with environments through sensors and actuators.

Percept
We use the term percept to refer to the agent's perceptual inputs at any given instant.
Percept Sequence
An agent's percept sequence is the complete history of everything the agent has ever
perceived.
Agent function
Mathematically speaking, we say that an agent's behavior is described by the agent function
that maps any given percept sequence to an action.

Agent program
Internally, the agent function for an artificial agent will be implemented by an agent program.
It is important to keep these two ideas distinct. The agent function is an abstract mathematical
description; the agent program is a concrete implementation, running on the agent architecture.

To illustrate these ideas, we will use a very simple example-the vacuum-cleaner world
shown in Figure 1.3. This particular world has just two locations: squares A and B. The
vacuum agent perceives which square it is in and whether there is dirt in the square. It can
choose to move left, move right, suck up the dirt, or do nothing. One very simple agent
function is the following: if the current square is dirty, then suck, otherwise move to the other
square. A partial tabulation of this agent function is shown in Figure 1.4.

Figure 1.3 A vacuum-cleaner world with just two locations.

Agent function
Percept Sequence Action
[A, Clean] Right
[A, Dirty] Suck
[B, Clean] Left
[B, Dirty] Suck
[A, Clean], [A, Clean] Right
[A, Clean], [A, Dirty] Suck

Figure 1.4 Partial tabulation of a simple agent function for the vacuum-cleaner
world shown in Figure 1.3.
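The agent function tabulated above can be sketched as a short Python function; the function name and the string encodings of percepts and actions are illustrative assumptions, not part of the text:

```python
# A minimal sketch of the vacuum-world agent function of Figure 1.4,
# written as a condition-action rule: suck if dirty, else move to the
# other square. (Names and encodings here are illustrative.)

def vacuum_agent(percept):
    """Map a single percept [location, status] to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# Matches the partial tabulation:
# vacuum_agent(["A", "Clean"]) -> "Right"
# vacuum_agent(["B", "Dirty"]) -> "Suck"
```

Note that this compact rule agrees with every entry of the table, because the action depends only on the most recent percept, not on the whole sequence.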

Rational Agent
A rational agent is one that does the right thing-conceptually speaking, every entry in
the table for the agent function is filled out correctly. Obviously, doing the right thing is
better than doing the wrong thing. The right action is the one that will cause the agent to be
most successful.
Performance measures
A performance measure embodies the criterion for success of an agent's behavior. When
an agent is plunked down in an environment, it generates a sequence of actions according
to the percepts it receives. This sequence of actions causes the environment to go through a
sequence of states. If the sequence is desirable, then the agent has performed well.
Rationality
What is rational at any given time depends on four things:
 The performance measure that defines the criterion of success.
 The agent's prior knowledge of the environment.
 The actions that the agent can perform.
 The agent's percept sequence to date.
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is ex-
pected to maximize its performance measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.
Omniscience, learning, and autonomy
An omniscient agent knows the actual outcome of its actions and can act accordingly; but
omniscience is impossible in reality.
Doing actions in order to modify future percepts-sometimes called information gathering-is
an important part of rationality.
Our definition requires a rational agent not only to gather information, but also to learn
as much as possible from what it perceives.
To the extent that an agent relies on the prior knowledge of its designer rather than
on its own percepts, we say that the agent lacks autonomy. A rational agent should be
autonomous-it should learn what it can to compensate for partial or incorrect prior
knowledge.

Task environments
We must think about task environments, which are essentially the "problems" to which rational
agents are the "solutions."
Specifying the task environment
The rationality of the simple vacuum-cleaner agent, needs specification of
 the performance measure
 the environment
 the agent's actuators and sensors.
PEAS
All these are grouped together under the heading of the task environment. We call this the
PEAS (Performance, Environment, Actuators, Sensors) description. In designing an agent, the
first step must always be to specify the task environment as fully as possible.

Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard, accelerometer

Figure 1.5 PEAS description of the task environment for an automated taxi.

Properties of task environments


 Fully observable vs. partially observable
 Deterministic vs. stochastic
 Episodic vs. sequential
 Static vs. dynamic
 Discrete vs. continuous
 Single agent vs. multiagent
Fully observable vs. partially observable.
If an agent's sensors give it access to the complete state of the environment at each point in
time, then we say that the task environment is fully observable. A task environment is
effectively fully observable if the sensors detect all aspects that are relevant to the choice of
action;
An environment might be partially observable because of noisy and inaccurate sensors or
because parts of the state are simply missing from the sensor data.
Deterministic vs. stochastic.
If the next state of the environment is completely determined by the current state and the
action executed by the agent, then we say the environment is deterministic; otherwise, it is
stochastic.
Episodic vs. sequential
In an episodic task environment, the agent's experience is divided into atomic episodes.
Each episode consists of the agent perceiving and then performing a single action. Crucially,
the next episode does not depend on the actions taken in previous episodes.
For example, an agent that has to spot defective parts on an assembly line bases each decision
on the current part, regardless of previous decisions;
In sequential environments, on the other hand, the current decision could affect all future
decisions. Chess and taxi driving are sequential:
Discrete vs. continuous.
The discrete/continuous distinction can be applied to the state of the environment, to the way
time is handled, and to the percepts and actions of the agent. For example, a discrete-state
environment such as a chess game has a finite number of distinct states. Chess also has a
discrete set of percepts and actions. Taxi driving is a continuous- state and continuous-time
problem: the speed and location of the taxi and of the other vehicles sweep through a range of
continuous values and do so smoothly over time. Taxi-driving actions are also continuous
(steering angles, etc.).

Single agent vs. multiagent.


An agent solving a crossword puzzle by itself is clearly in a single-agent environment,
whereas an agent playing chess is in a two-agent environment.
As one might expect, the hardest case is partially observable, stochastic, sequential, dynamic,
continuous, and multiagent.
Figure 1.7 lists the properties of a number of familiar environments.

Figure 1.7 Examples of task environments and their characteristics.

Agent programs
The agent programs all have the same skeleton: they take the current percept as input from the
sensors and return an action to the actuators. Notice the difference between the agent
program, which takes the current percept as input, and the agent function, which takes the
entire percept history. The agent program takes just the current percept as input because
nothing more is available from the environment; if the agent's actions depend on the entire
percept sequence, the agent will have to remember the percepts.
function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
Figure 1.8 The TABLE-DRIVEN-AGENT program is invoked for each new percept and
returns an action each time.
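A minimal Python rendering of this pseudocode might look as follows; the closure structure and the table contents are illustrative assumptions:

```python
# A direct sketch of the TABLE-DRIVEN-AGENT of Figure 1.8: the 'static'
# percept sequence lives in a closure, and the table is indexed by the
# entire percept sequence seen so far, as in Figure 1.4.

def make_table_driven_agent(table):
    percepts = []                         # static: percepts, initially empty
    def agent(percept):
        percepts.append(percept)          # append percept to end of percepts
        return table.get(tuple(percepts)) # action <- LOOKUP(percepts, table)
    return agent

# Illustrative partial table for the vacuum world:
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
agent(("A", "Clean"))   # -> "Right"
agent(("A", "Dirty"))   # -> "Suck" (looked up by the full 2-percept sequence)
```

The sketch makes the main drawback below concrete: the table needs one entry per possible percept sequence, so it grows exponentially with the length of the agent's lifetime.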

Drawbacks:
 Table lookup of percept-action pairs defining all possible condition-action rules
necessary to interact in an environment
 Problems
 Too big to generate and to store (Chess has about 10^120 states, for example)
 No knowledge of non-perceptual parts of the current state
 Not adaptive to changes in the environment; requires entire table to be updated if
changes occur
 Looping: Can't make actions conditional
 Take a long time to build the table
 No autonomy
 Even with learning, need a long time to learn the table entries
Some Agent Types
 Table-driven agents
 use a percept sequence/action table in memory to find the next action. They are
implemented by a (large) lookup table.
 Simple reflex agents
 are based on condition-action rules, implemented with an appropriate production
system. They are stateless devices which do not have memory of past world states.
 Agents with memory
 have internal state, which is used to keep track of past states of the world.
 Agents with goals
 are agents that, in addition to state information, have goal information that
describes desirable situations. Agents of this kind take future events into
consideration.
 Utility-based agents
 base their decisions on classic axiomatic utility theory in order to act rationally.

An AI system is composed of an agent and its environment. The agents act in their environment.
The environment may contain other agents.
What are Agent and Environment?
An agent is anything that can perceive its environment through sensors and acts upon that
environment through effectors.

 A human agent has sensory organs such as eyes, ears, nose, tongue and skin parallel to
the sensors, and other organs such as hands, legs, mouth, for effectors.

 A robotic agent replaces cameras and infrared range finders for the sensors, and various
motors and actuators for effectors.

 A software agent has encoded bit strings as its programs and actions.

Agent Terminology
 Performance Measure of Agent − It is the criterion that determines how successful
an agent is.

 Behavior of Agent − It is the action that agent performs after any given sequence of
percepts.

 Percept − It is agent’s perceptual inputs at a given instance.

 Percept Sequence − It is the history of all that an agent has perceived till date.

 Agent Function − It is a map from the percept sequence to an action.

Rationality
Rationality is nothing but status of being reasonable, sensible, and having good sense of
judgment.

Rationality is concerned with expected actions and results depending upon what the agent has
perceived. Performing actions with the aim of obtaining useful information is an important part
of rationality.
What is Ideal Rational Agent?
An ideal rational agent is the one, which is capable of doing expected actions to maximize its
performance measure, on the basis of −

 Its percept sequence


 Its built-in knowledge base
Rationality of an agent depends on the following four factors −

 The performance measures, which determine the degree of success.

 Agent’s Percept Sequence till now.

 The agent’s prior knowledge about the environment.

 The actions that the agent can carry out.

A rational agent always performs the right action, where the right action is the one that
causes the agent to be most successful for the given percept sequence. The problem the agent
solves is characterized by Performance Measure, Environment, Actuators, and Sensors (PEAS).

The Structure of Intelligent Agents


Agent’s structure can be viewed as −

 Agent = Architecture + Agent Program


 Architecture = the machinery that an agent executes on.
 Agent Program = an implementation of an agent function.
Simple Reflex Agents

 They choose actions only based on the current percept.


 They are rational only if a correct decision can be made on the basis of the current percept alone.
 Their environment is completely observable.
Condition-Action Rule − It is a rule that maps a state (condition) to an action.
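A simple reflex agent can be sketched as a list of condition-action rules scanned in order; the rules and the dictionary encoding of a percept below are illustrative assumptions:

```python
# A hypothetical sketch of a simple reflex agent: a list of
# (condition, action) rules, checked against the current percept only.

rules = [
    (lambda p: p["status"] == "Dirty", "Suck"),
    (lambda p: p["location"] == "A", "Right"),
    (lambda p: p["location"] == "B", "Left"),
]

def simple_reflex_agent(percept):
    # fire the first rule whose condition matches the current percept
    for condition, action in rules:
        if condition(percept):
            return action
```

Because the agent is stateless, two identical percepts always produce the same action, which is why such agents only work well when the environment is fully observable.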
Model Based Reflex Agents
They use a model of the world to choose their actions. They maintain an internal state.

Model − The knowledge about “how the things happen in the world”.

Internal State − It is a representation of unobserved aspects of current state depending on


percept history.

Updating the state requires the information about −

 How the world evolves.


 How the agent’s actions affect the world.
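The two update requirements above can be made concrete in a small sketch: the internal state is revised from the last action and the new percept before any rule fires. The class, the `update_state` callback, and the rule set are illustrative assumptions:

```python
# Hypothetical sketch of a model-based reflex agent: internal state is
# updated (using knowledge of how the world evolves and how actions
# affect it) before a condition-action rule is selected.

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}                  # internal model of the world
        self.last_action = None
        self.update_state = update_state # world model: evolution + action effects
        self.rules = rules               # condition-action rules

    def __call__(self, percept):
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action

# Illustrative usage: the model just records the latest sensor reading.
def update_state(state, last_action, percept):
    new = dict(state)
    new["status"] = percept
    return new

agent = ModelBasedReflexAgent(update_state, [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["status"] == "Clean", "Right"),
])
agent("Dirty")   # -> "Suck"
```

The difference from the simple reflex agent is only the `update_state` step: rules fire on the maintained internal state rather than directly on the raw percept.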
Goal Based Agents
They choose their actions in order to achieve goals. Goal-based approach is more flexible than
reflex agent since the knowledge supporting a decision is explicitly modeled, thereby allowing
for modifications.

Goal − It is the description of desirable situations.

Utility Based Agents


They choose actions based on a preference (utility) for each state. Goals are inadequate when −

 There are conflicting goals, out of which only few can be achieved.

 Goals have some uncertainty of being achieved and you need to weigh likelihood of
success against the importance of a goal.
Nature of Environments
Some programs operate in the entirely artificial environment confined to keyboard input,
database, computer file systems and character output on a screen.

In contrast, some software agents (software robots or softbots) exist in rich, unlimited softbots
domains. The simulator has a very detailed, complex environment. The software agent needs
to choose from a long array of actions in real time. A softbot designed to scan the online
preferences of the customer and show interesting items to the customer works in the real as
well as an artificial environment.

The most famous artificial environment is the Turing Test environment, in which one real
and other artificial agents are tested on equal ground. This is a very challenging environment as
it is highly difficult for a software agent to perform as well as a human.

Turing Test
The success of an intelligent behavior of a system can be measured with Turing Test.

Two persons and a machine to be evaluated participate in the test. Of the two persons, one
plays the role of the tester. Each of them sits in a different room. The tester does not know
who is the machine and who is the human. He interrogates both by typing questions and
sending them, to which he receives typed responses.

The test aims at fooling the tester. If the tester fails to distinguish the machine's response
from the human response, then the machine is said to be intelligent.

Properties of Environment
The environment has multifold properties −
 Discrete / Continuous − If there are a limited number of distinct, clearly defined, states
of the environment, the environment is discrete (For example, chess); otherwise it is
continuous (For example, driving).

 Observable / Partially Observable − If it is possible to determine the complete state of


the environment at each time point from the percepts it is observable; otherwise it is
only partially observable.

 Static / Dynamic − If the environment does not change while an agent is acting, then it
is static; otherwise it is dynamic.

 Single agent / Multiple agents − The environment may contain other agents which may
be of the same or different kind as that of the agent.

 Accessible / Inaccessible − If the agent’s sensory apparatus can have access to the
complete state of the environment, then the environment is accessible to that agent.

 Deterministic / Non-deterministic − If the next state of the environment is completely


determined by the current state and the actions of the agent, then the environment is
deterministic; otherwise it is non-deterministic.

 Episodic / Non-episodic − In an episodic environment, each episode consists of the


agent perceiving and then acting. The quality of its action depends just on the episode
itself. Subsequent episodes do not depend on the actions in the previous episodes.
Episodic environments are much simpler because the agent does not need to think ahead.
Problem Definition :
 The process of working through details of a problem to reach a solution.
 Problem solving may include mathematical or systematic operations.
 Four things are necessary to solve a problem:
1. Define the problem
The definition must include specifications of the initial situation and the final situation.
2. Analyze the problem
Apply the various techniques for solving the problem.
3. Isolate and represent the knowledge needed to solve the problem.
4. Choose the best problem-solving technique and apply it.
Example: Water Jug Problem
Consider the following problem:
A Water Jug Problem: given two jugs, a 4-gallon one and a 3-gallon one, a pump with
unlimited water which you can use to fill either jug, and the ground on which water may be
poured. Neither jug has any measuring markings on it. How can you get exactly 2 gallons of
water into the 4-gallon jug?
Solutions:
 State: (x, y), where x = 0, 1, 2, 3, or 4 is the number of gallons in the 4-gallon jug
and y = 0, 1, 2, or 3 is the number of gallons in the 3-gallon jug.

 Start state: (0, 0).

 Goal state: (2, n) for any n.

 Attempting to end up in a goal state.

State Space Search: Water Jug Problem

1. (x, y) → (4, y) if x < 4 (fill the 4-gallon jug)
2. (x, y) → (x, 3) if y < 3 (fill the 3-gallon jug)
3. (x, y) → (x - d, y) if x > 0 (pour some water d out of the 4-gallon jug)
4. (x, y) → (x, y - d) if y > 0 (pour some water d out of the 3-gallon jug)
5. (x, y) → (0, y) if x > 0 (empty the 4-gallon jug on the ground)
6. (x, y) → (x, 0) if y > 0 (empty the 3-gallon jug on the ground)
7. (x, y) → (4, y - (4 - x)) if x + y ≥ 4, y > 0 (pour from the 3-gallon jug into the 4-gallon jug until it is full)
8. (x, y) → (x - (3 - y), 3) if x + y ≥ 3, x > 0 (pour from the 4-gallon jug into the 3-gallon jug until it is full)
9. (x, y) → (x + y, 0) if x + y ≤ 4, y > 0 (pour all the water from the 3-gallon jug into the 4-gallon jug)
10. (x, y) → (0, x + y) if x + y ≤ 3, x > 0 (pour all the water from the 4-gallon jug into the 3-gallon jug)
11. (0, 2) → (2, 0) (pour the 2 gallons from the 3-gallon jug into the 4-gallon jug)
12. (2, y) → (0, y) (empty the 2 gallons in the 4-gallon jug on the ground)

Solving by production rules:

1. Current state = (0, 0)

2. Loop until reaching a goal state (2, n):

- Apply a rule whose left side matches the current state

- Set the new current state to be the resulting state

One sequence of rule applications that solves the problem:

(0, 0) → rule 1 → (4, 0): fill the 4-gallon jug

(4, 0) → rule 8 → (1, 3): pour from the 4-gallon jug into the 3-gallon jug until it is full

(1, 3) → rule 6 → (1, 0): empty the 3-gallon jug

(1, 0) → rule 10 → (0, 1): pour the remaining gallon into the 3-gallon jug

(0, 1) → rule 1 → (4, 1): fill the 4-gallon jug again

(4, 1) → rule 8 → (2, 3): pour into the 3-gallon jug until it is full, leaving 2 gallons in the 4-gallon jug
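The rule-application loop can also be run mechanically. The sketch below encodes the deterministic production rules as (guard, result) pairs (omitting rules 3, 4, 11 and 12, which either leave the amount d unspecified or are shortcuts) and explores states breadth-first; the encoding is an illustration, not part of the original text:

```python
from collections import deque

# State-space search over the water-jug production rules: each rule is a
# (guard, result) pair over the state (x, y); BFS finds a shortest rule
# sequence from (0, 0) to a state with 2 gallons in the 4-gallon jug.

RULES = [
    (lambda x, y: x < 4,                lambda x, y: (4, y)),            # 1 fill 4-gal
    (lambda x, y: y < 3,                lambda x, y: (x, 3)),            # 2 fill 3-gal
    (lambda x, y: x > 0,                lambda x, y: (0, y)),            # 5 empty 4-gal
    (lambda x, y: y > 0,                lambda x, y: (x, 0)),            # 6 empty 3-gal
    (lambda x, y: x + y >= 4 and y > 0, lambda x, y: (4, y - (4 - x))),  # 7
    (lambda x, y: x + y >= 3 and x > 0, lambda x, y: (x - (3 - y), 3)),  # 8
    (lambda x, y: x + y <= 4 and y > 0, lambda x, y: (x + y, 0)),        # 9
    (lambda x, y: x + y <= 3 and x > 0, lambda x, y: (0, x + y)),        # 10
]

def solve():
    frontier = deque([((0, 0), [(0, 0)])])
    visited = {(0, 0)}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == 2:                       # goal state (2, n) for any n
            return path
        for guard, apply_rule in RULES:
            if guard(x, y):
                nxt = apply_rule(x, y)
                if nxt not in visited:   # skip already-reached states
                    visited.add(nxt)
                    frontier.append((nxt, path + [nxt]))

print(solve())  # [(0, 0), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]
```

The path found is exactly the six-step rule sequence traced above.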
EXAMPLE 2:
The 8-puzzle
An 8-puzzle consists of a 3x3 board with eight numbered tiles and a blank space. A tile
adjacent to the blank space can slide into the space. The object is to reach the goal state,
as shown in Figure 1.21.
.Example: The 8-puzzle

Figure 1.21 A typical instance of 8-puzzle.


The problem formulation is as follows :
States : A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.

Initial state : Any state can be designated as the initial state. It can be noted that any
given goal can be reached from exactly half of the possible initial states.

Successor function : This generates the legal states that result from trying the four
actions(blank moves Left,Right,Up or down).

Goal Test : This checks whether the state matches the goal configuration
shown in figure 2.4.(Other goal configurations are possible)
Path cost : Each step costs 1,so the path cost is the number of steps in the path.
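The successor function described above can be sketched in a few lines; the encoding of a state as a 3x3 tuple of tuples, with 0 for the blank, is an assumption for illustration:

```python
# Sketch of the 8-puzzle successor function: a state is a 3x3 tuple of
# tuples with 0 marking the blank; the blank moves Up, Down, Left, Right.

MOVES = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}

def successors(state):
    """Yield (action, new_state) pairs for the legal blank moves."""
    grid = [list(row) for row in state]
    # locate the blank square
    r, c = next((i, j) for i in range(3) for j in range(3) if grid[i][j] == 0)
    for action, (dr, dc) in MOVES.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:          # stay on the board
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
            yield action, tuple(tuple(row) for row in grid)
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]  # undo
```

With the blank in a corner there are two legal moves, on an edge three, and in the centre four; combined with a goal test and a unit step cost, this completes the formulation.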

EXAMPLE 3:8-queens problem


The goal of 8-queens problem is to place 8 queens on the chessboard such that no
queen attacks any other.(A queen attacks any piece in the same row,column or
diagonal).

Figure 2.5 shows an attempted solution that fails: the queen in the rightmost column is
attacked by the queen at the top left.
An Incremental formulation involves operators that augments the state description, starting
with an empty state. For 8-queens problem, this means each action adds a queen to the
state.

A complete-state formulation starts with all 8 queens on the board and move them around.In
either case the path cost is of no interest because only the final state counts.

Figure 1.22 8-queens problem


The first incremental formulation one might try is the following:
 States: Any arrangement of 0 to 8 queens on board is a state.

 Initial state: No queen on the board.

 Successor function: Add a queen to any empty square.

 Goal Test: 8 queens are on the board, none attacked.


In this formulation, we have 64 · 63 · · · 57 ≈ 3 × 10^14 possible sequences to investigate.
A better formulation would prohibit placing a queen in any square that is already attacked.
 States: Arrangements of n queens (0 <= n < = 8), one per column in the left most
columns, with no queen attacking another are states.
 Successor function: Add a queen to any square in the left most empty column such that
it is not attacked by any other queen.

This formulation reduces the 8-queens state space from 3 × 10^14 to just 2,057, and solutions
are easy to find. For 100 queens the initial formulation has roughly 10^400 states,
whereas the improved formulation has about 10^52 states. This is a huge reduction, but the
improved state space is still too big for the algorithms to handle.
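The improved formulation can be sketched as a backtracking search that places one queen per column, only in unattacked squares; the helper names below are illustrative:

```python
# Sketch of the improved incremental 8-queens formulation: a state is a
# tuple of row positions, one per filled column (leftmost columns first),
# and a queen may only be added where no existing queen attacks it.

def unattacked(queens, row):
    """True if a queen at (row, next column) is attacked by no queen."""
    col = len(queens)                  # next leftmost empty column
    return all(q != row and abs(q - row) != abs(c - col)
               for c, q in enumerate(queens))

def place(queens=()):
    """Count complete non-attacking arrangements reachable from `queens`."""
    if len(queens) == 8:
        return 1
    return sum(place(queens + (row,))
               for row in range(8) if unattacked(queens, row))

print(place())  # 92 distinct solutions to the 8-queens problem
```

The search visits only the small pruned state space described above, which is why solutions are found almost instantly.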
Production system
Production system - uses knowledge in the form of rules to provide diagnoses or advice on
the basis of input data.
Parts
o Database of rules (knowledge base)
o Database of facts
o Inference engine which reasons about the facts using the rules
Problem solving methods
Problem graphs:
Example 1: 8-puzzle problem (GRAPH)

State space of the 8-puzzle generated by “move blank” operations

Problem Characteristics
To choose an appropriate method for a particular problem:
 Is the problem decomposable?

 Can solution steps be ignored or undone?

 Is the universe predictable?

 Is a good solution absolute or relative?

 Is the solution a state or a path?

 What is the role of knowledge?

 Does the task require human-interaction?


 Is the problem decomposable?








Can solution steps be ignored or undone?
 Ignorable problems can be solved using a simple
o control structure that never backtracks.
 Recoverable problems can be solved using backtracking.
 Irrecoverable problems can be solved by recoverable style methods via planning.

Is the universe predictable?


 The 8-Puzzle
 Every time we make a move, we know exactly what will happen.
 Certain outcome!

Is a good solution absolute or relative?
 The Travelling Salesman Problem

 We have to try all paths to find the shortest one.

 Any-path problems can be solved using heuristics that suggest good paths to explore.
For best-path problems, much more exhaustive search will be performed
Is the solution a state or a path?
Finding a consistent interpretation
“The bank president ate a dish of pasta salad with the fork”.
– Does “bank” refer to a financial institution or to the side of a river?
– Was the “dish” or the “pasta salad” eaten?
– Does “pasta salad” contain pasta? (“dog food” does not contain “dog”.)
– Which part of the sentence does “with the fork” modify?
What if “with vegetables” is there?
No record of the processing is necessary.
Is the solution a state or a path?
The Water Jug Problem
The path that leads to the goal must be reported.
 A path-solution problem can be reformulated as a state-solution problem by
describing a state as a partial path to a solution.
 The question is whether that is natural or not.

What is the role of knowledge?


 Playing Chess
 Knowledge is important only to constrain the search for a solution.
 Reading Newspaper
 Knowledge is required even to be able to recognize a solution.
SEARCHING FOR SOLUTIONS
What is Search?
Search is the systematic examination of states to find path from the start/root state to the goal
state.The set of possible states, together with operators defining their connectivity constitute
the search space.The output of a search algorithm is a solution, that is, a path from the initial
state to a state that satisfies the
goal test.
SEARCH TREE
Having formulated some problems,we now need to solve them. This is done by a search
through the state space.A search tree is generated by the initial state and the successor function
that together define the state space. In general,we may have a search graph rather than a search
tree,when the same state can be reached from multiple paths.
Types of Search
There are three broad classes of search processes:
1) Uninformed- Blind Search-
– There is no specific reason to prefer one part of the search space to any other,
in finding a path from initial state to goal state.
– Systematic, exhaustive search
 depth-first-search
 Breadth-first-search
2) Informed – Heuristic search - there is specific information to focus the search.

– Hill climbing
– Branch and bound
– Best first
– A*

3)Game playing – there are at least two partners opposing to each other.
– Minimax (α-β pruning)
– Means ends analysis

UNINFORMED SEARCH STRATEGIES


Uninformed Search Strategies have no additional information about states beyond that
provided in the problem
Definition.
Strategies that know whether one non goal state is “more promising” than another are called
Informed search or heuristic search strategies.
There are five uninformed search strategies as given below.
 Breadth-first search
 Uniform-cost search
 Depth-first search
 Depth-limited search
 Iterative deepening search
Breadth-first search
 Breadth-first search is a simple strategy in which the root node is expanded first, then
all successors of the root node are expanded next, then their successors, and so on. In
general, all the nodes are expanded at a given depth in the search tree before any nodes
at the next level are expanded.
 Breadth-first search is implemented by calling TREE-SEARCH with an empty fringe
that is a first-in-first-out (FIFO) queue, ensuring that the nodes that are visited first will
be expanded first.
 In other words, calling TREE-SEARCH(problem, FIFO-QUEUE()) results in breadth-
first search.
 The FIFO queue puts all newly generated successors at the end of the queue, which
means that shallow nodes are expanded before deeper nodes.
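The points above can be condensed into a generic sketch in which the fringe is literally a FIFO queue; the `goal_test` and `successors` callbacks are assumed, problem-specific parameters:

```python
from collections import deque

# Generic breadth-first search sketch: the fringe is a FIFO queue, so
# shallow nodes are always expanded before deeper ones. The problem is
# supplied as a start state, a goal test, and a successor function.

def breadth_first_search(start, goal_test, successors):
    fringe = deque([(start, [start])])     # FIFO queue of (state, path)
    explored = {start}
    while fringe:
        state, path = fringe.popleft()     # expand the shallowest node
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in explored:        # avoid revisiting states
                explored.add(nxt)
                fringe.append((nxt, path + [nxt]))
    return None                            # no solution exists

# Tiny illustrative problem: reach 5 from 0 by adding 1 or 2 per step.
path = breadth_first_search(0, lambda s: s == 5, lambda s: [s + 1, s + 2])
print(path)  # [0, 1, 3, 5]
```

Because the queue is FIFO, the first path returned is a shallowest one, which is what makes breadth-first search optimal when every step has the same cost.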
