VI SEMESTER
SYLLABUS
UNIT -1
INTRODUCTION TO AI AND PRODUCTION SYSTEMS
Introduction to AI - Problem formulation, Problem definition - Production systems,
Control strategies, Search strategies, Problem characteristics, Production system
characteristics - Specialized production systems - Problem solving methods - Problem
graphs, Matching, Indexing and Heuristic functions - Hill climbing - Depth first and
Breadth first, Constraint satisfaction - Related algorithms, Measure of performance
and analysis of search algorithms.
Introduction to AI
What is artificial intelligence?
Artificial Intelligence (AI) is a branch of science which deals with helping
machines find solutions to complex problems in a more human-like fashion.
Major AI textbooks define artificial intelligence as "the study and design of
intelligent agents," where an intelligent agent is a system that perceives its environment
and takes actions which maximize its chances of success.
The definitions of AI according to some textbooks are categorized into four
approaches and are summarized in the table below:
A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and other
body parts for actuators.
A robotic agent might have cameras and infrared range finders for sensors and various motors
for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory inputs and
acts on the environment by displaying on the screen, writing files, and sending network packets.
Figure 1.2 Agents interact with environments through sensors and actuators.
Percept
We use the term percept to refer to the agent's perceptual inputs at any given instant.
Percept Sequence
An agent's percept sequence is the complete history of everything the agent has ever
perceived.
Agent function
Mathematically speaking, we say that an agent's behavior is described by the agent function
that maps any given percept sequence to an action.
Agent program
Internally, the agent function for an artificial agent will be implemented by an agent program.
It is important to keep these two ideas distinct. The agent function is an abstract mathematical
description; the agent program is a concrete implementation, running on the agent architecture.
To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world
shown in Figure 1.3. This particular world has just two locations: squares A and B. The
vacuum agent perceives which square it is in and whether there is dirt in the square. It can
choose to move left, move right, suck up the dirt, or do nothing. One very simple agent
function is the following: if the current square is dirty, then suck, otherwise move to the other
square. A partial tabulation of this agent function is shown in Figure 1.4.
Agent function
Percept Sequence Action
[A, Clean] Right
[A, Dirty] Suck
[B, Clean] Left
[B, Dirty] Suck
[A, Clean], [A, Clean] Right
[A, Clean], [A, Dirty] Suck
…
Figure 1.4 Partial tabulation of a simple agent function for the vacuum-cleaner
world shown in Figure 1.3.
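The tabulated agent function above can also be written as a short program. The sketch below is an illustration; the function and percept names are ours, not from a particular library:

```python
# A minimal sketch of the vacuum-world agent function tabulated in
# Figure 1.4; percepts are (location, status) pairs.
def vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"       # clean the current square first
    if location == "A":
        return "Right"      # square A is clean, so move to B
    return "Left"           # square B is clean, so move to A
```

For example, `vacuum_agent(("A", "Dirty"))` returns `"Suck"`, matching the second row of the table.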
Rational Agent
A rational agent is one that does the right thing: conceptually speaking, every entry in
the table for the agent function is filled out correctly. Obviously, doing the right thing is
better than doing the wrong thing. The right action is the one that will cause the agent to be
most successful.
Performance measures
A performance measure embodies the criterion for success of an agent's behavior. When
an agent is plunked down in an environment, it generates a sequence of actions according
to the percepts it receives. This sequence of actions causes the environment to go through a
sequence of states. If the sequence is desirable, then the agent has performed well.
Rationality
What is rational at any given time depends on four things:
The performance measure that defines the criterion of success.
The agent's prior knowledge of the environment.
The actions that the agent can perform.
The agent's percept sequence to date.
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.
Omniscience, learning, and autonomy
An omniscient agent knows the actual outcome of its actions and can act accordingly; but
omniscience is impossible in reality.
Doing actions in order to modify future percepts, sometimes called information gathering, is
an important part of rationality.
Our definition requires a rational agent not only to gather information, but also to learn
as much as possible from what it perceives.
To the extent that an agent relies on the prior knowledge of its designer rather than
on its own percepts, we say that the agent lacks autonomy. A rational agent should be
autonomous: it should learn what it can to compensate for partial or incorrect prior
knowledge.
Task environments
We must think about task environments, which are essentially the "problems" to which rational
agents are the "solutions."
Specifying the task environment
The rationality of the simple vacuum-cleaner agent needs specification of
the performance measure
the environment
the agent's actuators and sensors.
PEAS
All these are grouped together under the heading of the task environment. We call this the
PEAS (Performance, Environment, Actuators, Sensors) description. In designing an agent, the
first step must always be to specify the task environment as fully as possible.
Figure 1.5 PEAS description of the task environment for an automated taxi.
Agent programs
The agent programs all have the same skeleton: they take the current percept as input from the
sensors and return an action to the actuators. Notice the difference between the agent
program, which takes the current percept as input, and the agent function, which takes the
entire percept history. The agent program takes just the current percept as input because
nothing more is available from the environment; if the agent's actions depend on the entire
percept sequence, the agent will have to remember the percepts.
function TABLE-DRIVEN-AGENT(percept) returns an action
    static: percepts, a sequence, initially empty
            table, a table of actions, indexed by percept sequences
    append percept to the end of percepts
    action ← LOOKUP(percepts, table)
    return action
Figure 1.8 The TABLE-DRIVEN-AGENT program is invoked for each new percept and
returns an action each time.
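The pseudocode of Figure 1.8 can be sketched in Python as follows. The table keys, percept names, and the tiny example table are illustrative assumptions, not part of the figure:

```python
# A sketch of TABLE-DRIVEN-AGENT: the table is indexed by the entire
# percept sequence, stored as a tuple of percepts seen so far.
percepts = []  # the "static" percept sequence, initially empty

def table_driven_agent(percept, table):
    percepts.append(percept)              # append percept to percepts
    return table.get(tuple(percepts))     # action <- LOOKUP(percepts, table)

# A tiny table for the vacuum world: note that even two time steps
# already need one entry per possible two-percept history.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
```

Calling the agent twice with `("A", "Dirty")` then `("A", "Clean")` returns `"Suck"` and then `"Right"`; any history missing from the table yields no action, which is exactly the scalability problem listed below.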
Drawbacks:
Table lookup of percept-action pairs defining all possible condition-action rules
necessary to interact in an environment
Problems
Too big to generate and to store (Chess has about 10^120 states, for example)
No knowledge of non-perceptual parts of the current state
Not adaptive to changes in the environment; requires entire table to be updated if
changes occur
Looping: Can't make actions conditional
Take a long time to build the table
No autonomy
Even with learning, need a long time to learn the table entries
Some Agent Types
Table-driven agents
use a percept sequence/action table in memory to find the next action. They are
implemented by a (large) lookup table.
Simple reflex agents
are based on condition-action rules, implemented with an appropriate production
system. They are stateless devices which do not have memory of past world states.
Agents with memory
have internal state, which is used to keep track of past states of the world.
Agents with goals
are agents that, in addition to state information, have goal information that
describes desirable situations. Agents of this kind take future events into
consideration.
Utility-based agents
base their decisions on classic axiomatic utility theory in order to act rationally.
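The simple reflex agent described above can be sketched as a loop over condition-action rules. Each rule here is a (predicate, action) pair, and the vacuum-world rule set is an illustrative assumption:

```python
# A minimal sketch of a simple reflex agent: the first rule whose
# condition matches the current percept fires, and no past state is kept.
def simple_reflex_agent(percept, rules):
    for condition, action in rules:
        if condition(percept):            # first matching rule fires
            return action
    return "NoOp"                         # no rule matched

vacuum_rules = [
    (lambda p: p[1] == "Dirty", "Suck"),
    (lambda p: p[0] == "A", "Right"),
    (lambda p: p[0] == "B", "Left"),
]
```

Because the agent consults only the current percept, it is stateless, which is exactly the property distinguishing it from the agents with memory described above.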
An AI system is composed of an agent and its environment. The agents act in their environment.
The environment may contain other agents.
What are Agent and Environment?
An agent is anything that can perceive its environment through sensors and acts upon that
environment through effectors.
A human agent has sensory organs such as eyes, ears, nose, tongue and skin parallel to
the sensors, and other organs such as hands, legs, mouth, for effectors.
A robotic agent has cameras and infrared range finders for the sensors, and various
motors and actuators for effectors.
A software agent has encoded bit strings as its programs and actions.
Agent Terminology
Performance Measure of Agent − It is the criterion that determines how successful
an agent is.
Behavior of Agent − It is the action that agent performs after any given sequence of
percepts.
Percept Sequence − It is the history of all that an agent has perceived till date.
Rationality
Rationality is the state of being reasonable, sensible, and having good
judgment.
Rationality is concerned with expected actions and results depending upon what the agent has
perceived. Performing actions with the aim of obtaining useful information is an important part
of rationality.
What is Ideal Rational Agent?
An ideal rational agent is one that is capable of taking the expected actions to maximize its
performance measure, on the basis of its percept sequence and its built-in knowledge.
A rational agent always performs the right action, where the right action means the action that
causes the agent to be most successful for the given percept sequence. The problem the agent
solves is characterized by Performance Measure, Environment, Actuators, and Sensors (PEAS).
Model − The knowledge about “how the things happen in the world”.
There are conflicting goals, out of which only a few can be achieved.
Goals have some uncertainty of being achieved, and one needs to weigh the likelihood of
success against the importance of a goal.
Nature of Environments
Some programs operate in an entirely artificial environment confined to keyboard input,
databases, computer file systems, and character output on a screen.
In contrast, some software agents (software robots or softbots) exist in rich, unlimited
softbot domains. The simulator has a very detailed, complex environment, and the software
agent needs to choose from a long array of actions in real time. A softbot designed to scan
the online preferences of the customer and show interesting items to the customer works in a
real as well as an artificial environment.
The most famous artificial environment is the Turing Test environment, in which one real
and other artificial agents are tested on equal ground. This is a very challenging environment as
it is highly difficult for a software agent to perform as well as a human.
Turing Test
The success of an intelligent behavior of a system can be measured with Turing Test.
Two persons and a machine to be evaluated participate in the test. One of the two persons
plays the role of the tester. Each of them sits in a different room. The tester is unaware of
who is the machine and who is the human. The tester poses questions by typing and sending
them to both intelligences, and receives typed responses.
This test aims at fooling the tester. If the tester fails to distinguish the machine's
response from the human's response, then the machine is said to be intelligent.
Properties of Environment
The environment has multifold properties −
Discrete / Continuous − If there are a limited number of distinct, clearly defined, states
of the environment, the environment is discrete (For example, chess); otherwise it is
continuous (For example, driving).
Static / Dynamic − If the environment does not change while an agent is acting, then it
is static; otherwise it is dynamic.
Single agent / Multiple agents − The environment may contain other agents which may
be of the same or different kind as that of the agent.
Accessible / Inaccessible − If the agent’s sensory apparatus can have access to the
complete state of the environment, then the environment is accessible to that agent.
EXAMPLE 1: The Water Jug Problem
You are given a 4-gallon jug and a 3-gallon jug, neither of which has measuring markers,
and a pump to fill them with water. The goal is to get exactly 2 gallons of water into the
4-gallon jug. Let (x, y) denote the state where x is the amount of water in the 4-gallon
jug and y the amount in the 3-gallon jug, with x = 0, 1, 2, 3, or 4 and y = 0, 1, 2, 3.
The production rules are:
1. (x, y) → (4, y) if x < 4 -- fill the 4-gallon jug
2. (x, y) → (x, 3) if y < 3 -- fill the 3-gallon jug
3. (x, y) → (x − d, y) if x > 0 -- pour some water out of the 4-gallon jug
4. (x, y) → (x, y − d) if y > 0 -- pour some water out of the 3-gallon jug
5. (x, y) → (0, y) if x > 0 -- empty the 4-gallon jug on the ground
6. (x, y) → (x, 0) if y > 0 -- empty the 3-gallon jug on the ground
7. (x, y) → (4, y − (4 − x)) if x + y ≥ 4, y > 0 -- pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
8. (x, y) → (x − (3 − y), 3) if x + y ≥ 3, x > 0 -- pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full
9. (x, y) → (x + y, 0) if x + y ≤ 4, y > 0 -- pour all the water from the 3-gallon jug into the 4-gallon jug
10. (x, y) → (0, x + y) if x + y ≤ 3, x > 0 -- pour all the water from the 4-gallon jug into the 3-gallon jug
11. (0, 2) → (2, 0) -- pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
12. (2, y) → (0, y) -- empty the 2 gallons in the 4-gallon jug on the ground
One solution:
Gallons in the 4-gallon jug   Gallons in the 3-gallon jug   Step
0                             0                             start with both jugs empty
4                             0                             fill the 4-gallon jug
1                             3                             pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full
1                             0                             empty the 3-gallon jug
0                             1                             fill the 3-gallon jug with the remaining water from the 4-gallon jug
4                             1                             fill the 4-gallon jug
2                             3                             pour water from the 4-gallon jug into the 3-gallon jug until it is full
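The production rules above can be driven by a simple breadth-first control strategy. The sketch below is an illustration, not part of the original rule listing: rules 3 and 4, which pour out an arbitrary amount d, are omitted, and the pour rules 7-10 are collapsed into two moves.

```python
from collections import deque

# Breadth-first search over the water-jug state space:
# x = water in the 4-gallon jug, y = water in the 3-gallon jug.
def successors(x, y):
    p43 = min(x, 3 - y)                   # amount poured 4 -> 3
    p34 = min(y, 4 - x)                   # amount poured 3 -> 4
    candidates = {
        (4, y), (x, 3),                   # rules 1-2: fill a jug
        (0, y), (x, 0),                   # rules 5-6: empty a jug
        (x - p43, y + p43),               # rules 8/10: pour 4 -> 3
        (x + p34, y - p34),               # rules 7/9: pour 3 -> 4
    }
    candidates.discard((x, y))            # drop no-op transitions
    return candidates

def solve(start=(0, 0), goal_x=2):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == goal_x:
            return path                   # shortest rule sequence
        for s in successors(*path[-1]):
            if s not in seen:
                seen.add(s)
                frontier.append(path + [s])
```

Because breadth-first search expands states level by level, `solve()` returns a shortest solution: six rule applications, matching the trace above.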
EXAMPLE 2:
The 8-puzzle
An 8-puzzle consists of a 3x3 board with eight numbered tiles and a blank space. A tile
adjacent to the blank space can slide into the space. The object is to reach the goal state,
as shown in Figure 2.4.
Initial state: Any state can be designated as the initial state. It can be noted that any
given goal can be reached from exactly half of the possible initial states.
Successor function: This generates the legal states that result from trying the four
actions (blank moves Left, Right, Up, or Down).
Goal test: This checks whether the state matches the goal configuration
shown in Figure 2.4. (Other goal configurations are possible.)
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
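The successor function above can be sketched directly. The state encoding is an assumption for illustration: a tuple of 9 entries read row by row, with 0 standing for the blank:

```python
# Successor function for the 8-puzzle: return every state reachable by
# sliding a tile into the blank (equivalently, moving the blank).
def successors(state):
    b = state.index(0)                    # position of the blank
    row, col = divmod(b, 3)
    moves = []
    if col > 0: moves.append(b - 1)       # blank moves Left
    if col < 2: moves.append(b + 1)       # blank moves Right
    if row > 0: moves.append(b - 3)       # blank moves Up
    if row < 2: moves.append(b + 3)       # blank moves Down
    result = []
    for m in moves:
        s = list(state)
        s[b], s[m] = s[m], s[b]           # slide the adjacent tile
        result.append(tuple(s))
    return result
```

A corner blank yields two successors, an edge blank three, and a center blank four, so the branching factor is between 2 and 4.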
EXAMPLE 3: The 8-queens problem
The goal of the 8-queens problem is to place eight queens on a chessboard such that no
queen attacks any other. Figure 2.5 shows an attempted solution that fails: the queen in
the rightmost column is attacked by the queen at the top left.
An incremental formulation involves operators that augment the state description, starting
with an empty state. For the 8-queens problem, this means each action adds a queen to the
state.
A complete-state formulation starts with all 8 queens on the board and moves them around. In
either case the path cost is of no interest because only the final state counts.
This formulation reduces the 8-queens state space from 3 x 10^14 to just 2,057, and solutions
are easy to find. For 100 queens the initial formulation has roughly 10^400 states
whereas the improved formulation has about 10^52 states. This is a huge reduction, but the
improved state space is still too big for the algorithms to handle.
Production system
Production system - uses knowledge in the form of rules to provide diagnoses or advice on
the basis of input data.
Parts
o Database of rules (knowledge base)
o Database of facts
o Inference engine which reasons about the facts using the rules
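The three parts listed above fit together in a recognize-act cycle: the inference engine repeatedly fires rules whose conditions match the fact base until nothing new can be derived. A minimal sketch, with an illustrative diagnosis rule base of our own invention:

```python
# Forward-chaining inference engine: each rule is (conditions, conclusion);
# a rule fires when all of its conditions are present in the fact base.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)     # fire the rule, record the fact
                changed = True
    return facts

rules = [
    (("has_fever", "has_rash"), "suspect_measles"),
    (("suspect_measles",), "advise_doctor"),
]
```

Starting from the facts `has_fever` and `has_rash`, the engine derives `suspect_measles` and then, by chaining, `advise_doctor`.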
Problem solving methods
Problem graphs:
Example 1: 8-puzzle problem (GRAPH)
Problem Characteristics
To choose an appropriate method for a particular problem:
Is the problem decomposable?
Can solution steps be ignored or undone?
Ignorable problems can be solved using a simple control structure that never backtracks.
Recoverable problems can be solved using backtracking.
Irrecoverable problems can be solved by recoverable-style methods via planning.
Any-path problems can be solved using heuristics that suggest good paths to explore.
For best-path problems, a much more exhaustive search must be performed.
Is the solution a state or a path?
Finding a consistent interpretation
“The bank president ate a dish of pasta salad with the fork”.
– Does “bank” refer to a financial institution or to the side of a river?
– Was the “dish” or the “pasta salad” eaten?
– Does “pasta salad” contain pasta? (By analogy, “dog food” does not contain dog.)
– Which part of the sentence does “with the fork” modify?
What if it were “with vegetables”?
No record of the processing is necessary.
Is the solution a state or a path?
The Water Jug Problem
The path that leads to the goal must be reported.
A path-solution problem can be reformulated as a state-solution problem by
describing a state as a partial path to a solution.
The question is whether that is natural or not.
– Hill climbing
– Branch and bound
– Best first
– A*
3) Game playing – there are at least two opponents playing against each other.
– Minimax (a, b pruning)
– Means ends analysis
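The minimax method listed above can be sketched on a small explicit game tree. The nested-list representation, where leaves are scores from the maximizing player's point of view, is an illustrative assumption (no pruning is shown):

```python
# Minimax on a game tree given as nested lists: inner lists are
# positions where a player chooses, numbers are leaf evaluations.
def minimax(node, maximizing=True):
    if not isinstance(node, list):        # leaf: return its score
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Example: MAX picks between two MIN nodes; MIN values are 3 and 2,
# so MAX achieves 3.
tree = [[3, 12, 8], [2, 4, 6]]
```

Alpha-beta (a, b) pruning improves on this sketch by skipping subtrees that cannot affect the final choice, without changing the value returned.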