Intelligent Agents
[Diagram: the agent's reasoning maps percepts to actions, carried out by actuators]
Example: Vacuum Cleaner Agent
Percept Sequence                        Action
[A, Clean]                              Right
[A, Dirty]                              Suck
[B, Clean]                              Left
[B, Dirty]                              Suck
[A, Clean], [A, Clean]                  Right
[A, Clean], [A, Dirty]                  Suck
:                                       :
[A, Clean], [A, Clean], [A, Clean]      Right
:                                       :
Agent-Related Terms
Percept sequence: A complete history of everything
the agent has ever perceived. Think of this as the state
of the world from the agent's perspective.
Agent function (or Policy): Maps percept sequences to
actions (determines agent behavior)
Agent program: Implements the agent function
Question du Jour
What's the difference between the agent
function and the agent program?
Rationality
Rationality: do the action that causes the agent to be
most successful
How do you define success? You need a performance
measure
E.g. reward the agent with one point for each clean square
at each time step (could penalize for costs and noise)
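The performance measure above can be sketched in code. This is a minimal illustration, not from the slides: it assumes a history of (square-statuses, action) pairs and awards one point per clean square per time step, with an optional movement penalty.

```python
# Illustrative performance measure for the vacuum world: one point per
# clean square at each time step, optionally minus a cost per move.

def performance(history, move_cost=0.0):
    """history: list of (squares, action) pairs, one per time step,
    where squares maps square name -> "Clean" or "Dirty"."""
    score = 0.0
    for squares, action in history:
        score += sum(1 for status in squares.values() if status == "Clean")
        if action in ("Left", "Right"):
            score -= move_cost  # optional penalty for movement cost/noise
    return score

history = [
    ({"A": "Dirty", "B": "Clean"}, "Suck"),
    ({"A": "Clean", "B": "Clean"}, "Right"),
    ({"A": "Clean", "B": "Clean"}, "NoOp"),
]
print(performance(history))  # 5.0 with no movement penalty
```

Raising `move_cost` shows how the same behavior can score differently under different performance measures.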
Rationality
Rationality depends on 4 things:
1. Performance measure of success
2. Agent's prior knowledge of the environment
3. Actions the agent can perform
4. Agent's percept sequence to date
Rational agent: for each possible percept sequence, a rational
agent should select an action that is expected to maximize its
performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent has
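The phrase "expected to maximize" can be made concrete. This is a hedged sketch, not the slides' formalism: it assumes a hypothetical probability model over action outcomes and picks the action with the highest expected performance score.

```python
# Rational action selection sketch: choose the action whose expected
# performance-measure value is highest under an outcome model.

def rational_action(actions, outcome_model, score):
    """outcome_model(action) -> list of (probability, outcome) pairs;
    score(outcome) -> performance-measure value."""
    def expected_score(action):
        return sum(p * score(o) for p, o in outcome_model(action))
    return max(actions, key=expected_score)

# Toy model (illustrative): "Suck" cleans the square with probability 0.9.
def outcome_model(action):
    if action == "Suck":
        return [(0.9, "Clean"), (0.1, "Dirty")]
    return [(1.0, "Dirty")]  # moving doesn't clean anything

score = lambda outcome: 1 if outcome == "Clean" else 0
print(rational_action(["Suck", "Right"], outcome_model, score))  # Suck
```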
Learning
Successful agents split the task of computing the policy into 3
phases:
1. Initially, the designers compute some prior
knowledge to include in the policy
2. When deciding its next action, the agent does some
computation
3. The agent learns from experience to modify its
behavior
PEAS Descriptions of Agents
Performance, Environment, Actuators, Sensors
Properties of Environments
Fully observable: agent can access the complete state of the environment at each point in time
  vs Partially observable: could be due to noisy, inaccurate, or incomplete sensor data

Deterministic: next state of the environment is completely determined by the current state and
the agent's action (Strategic: environment is deterministic except for the actions of other agents)
  vs Stochastic: a partially observable environment can appear to be stochastic

Episodic: agent's experience is divided into independent, atomic episodes; the agent
perceives and performs a single action in each episode
  vs Sequential: current decision affects all future decisions

Static: agent doesn't need to keep sensing while it decides what action to take, and doesn't
need to worry about time
  vs Dynamic: environment changes while the agent is thinking (Semidynamic: environment
doesn't change with time but the agent's performance does)
Examples of task environments
Task Environment           Observable  Deterministic  Episodic    Static   Discrete    Agents
Crossword puzzle           Fully       Deterministic  Sequential  Static   Discrete    Single
Chess with a clock         Fully       Strategic      Sequential  Semi     Discrete    Multi
Poker                      Partially   Stochastic     Sequential  Static   Discrete    Multi
Backgammon                 Fully       Stochastic     Sequential  Static   Discrete    Multi
Taxi driving               Partially   Stochastic     Sequential  Dynamic  Continuous  Multi
Medical diagnosis          Partially   Stochastic     Sequential  Dynamic  Continuous  Multi
Image analysis             Fully       Deterministic  Episodic    Semi     Continuous  Single
Part-picking robot         Partially   Stochastic     Episodic    Semi     Continuous  Single
Refinery controller        Partially   Stochastic     Sequential  Dynamic  Continuous  Single
Interactive English tutor  Partially   Stochastic     Sequential  Dynamic  Discrete    Multi
Agent Programs
Agent program: implements the policy
Simplest agent program is a table-driven agent
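A table-driven agent can be sketched directly from the vacuum table earlier in these slides. The sketch below (illustrative names) stores the agent function as a lookup table indexed by the percept sequence to date.

```python
# Table-driven agent sketch: the entire agent function is a table mapping
# percept sequences to actions. Rows reproduce the vacuum example.

table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

percepts = []  # the percept sequence to date

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts))  # None if the sequence isn't in the table

print(table_driven_agent(("A", "Clean")))  # Right
print(table_driven_agent(("A", "Dirty")))  # Suck
```

The table grows with the length of the percept sequence, which is why table-driven agents are simple in principle but impractical in general.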
4 Kinds of Agent Programs
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
state ← INTERPRET-INPUT(percept)
rule ← RULE-MATCH(state, rules)
action ← RULE-ACTION[rule]
return action
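The pseudocode above can be made runnable. This is a sketch, with RULE-MATCH reduced to a dictionary lookup and an illustrative condition→action rule set for the vacuum world.

```python
# Runnable sketch of a simple reflex agent for the vacuum world.

rules = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def interpret_input(percept):
    # In this toy world, the percept (location, status) already is the state.
    return percept

def simple_reflex_agent(percept):
    state = interpret_input(percept)   # state <- INTERPRET-INPUT(percept)
    action = rules[state]              # rule <- RULE-MATCH(state, rules)
    return action                      # action <- RULE-ACTION[rule]

print(simple_reflex_agent(("A", "Dirty")))  # Suck
```

Note the agent conditions only on the current percept, never on history.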
Simple Reflex Agents
Model-based Reflex Agents
Maintain some internal state that keeps track of the part of the
world the agent can't see now
Needs a model (encodes knowledge about how the world works)
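The internal state and model can be sketched as follows. The class and its belief-tracking scheme are illustrative assumptions, not from the slides: the agent remembers the last known status of each square so it can reason about the square it can't sense now.

```python
# Model-based reflex agent sketch: internal state (believed square
# statuses) is updated by a simple model, then a reflex rule acts on it.

class ModelBasedVacuum:
    def __init__(self):
        self.believed = {"A": None, "B": None}  # internal state

    def update_state(self, percept):
        location, status = percept
        self.believed[location] = status  # model: status persists until changed

    def choose(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        if self.believed[other] == "Clean":
            return "NoOp"  # model says everything is clean: stop
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuum()
print(agent.choose(("A", "Dirty")))  # Suck
print(agent.choose(("A", "Clean")))  # Right (B's status is still unknown)
```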
Goal-based Agents
Goal information guides the agent's actions (looks to
the future)
Sometimes achieving the goal is simple, e.g. from a
single action
Other times, the goal requires reasoning about long
sequences of actions
Flexible: simply reprogram the agent by changing its
goals
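Reasoning about sequences of actions toward a goal can be sketched with a search. This is an illustrative breadth-first search over a toy transition model, not the slides' own algorithm.

```python
# Goal-based sketch: when no single action reaches the goal, search for
# the shortest sequence of actions that does (breadth-first search).
from collections import deque

def plan(start, goal, successors):
    """Return the shortest action sequence from start to goal, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

# Toy world: agent at square A or B, can move Left/Right.
def successors(state):
    return [("Right", "B")] if state == "A" else [("Left", "A")]

print(plan("A", "B", successors))  # ['Right']
```

Changing the `goal` argument reprograms the agent's behavior without touching its rules, which is the flexibility the slide describes.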
Utility-based Agents
What if there are many paths to the goal?
Utility measures which states are preferable
to other states
Maps states to real numbers (utility or
"happiness")
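The state-to-number mapping can be sketched as a utility function used to rank actions. The state names and utility values below are purely illustrative.

```python
# Utility-based sketch: when several actions reach a goal, pick the one
# whose resulting state has the highest utility (a real number).

def utility(state):
    # Map each state to a real number ("happiness"); values are illustrative.
    return {"goal_fast": 10.0, "goal_scenic": 7.5, "stuck": -1.0}[state]

def best_action(actions, result):
    """result(action) -> resulting state; return the action maximizing utility."""
    return max(actions, key=lambda a: utility(result(a)))

result = {"highway": "goal_fast", "back_road": "goal_scenic",
          "wrong_turn": "stuck"}.get
print(best_action(["highway", "back_road", "wrong_turn"], result))  # highway
```

Unlike a bare goal test, the utility function distinguishes between the two paths that both reach the goal.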
Learning Agents
Critic: Tells the learning element how well
the agent is doing with respect to the
performance standard (because the
percepts don't tell the agent about its
success/failure)
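The critic's role can be sketched numerically. Everything here is an illustrative assumption: a fixed performance standard, a critic that scores behavior against it, and a learning element that nudges a single behavioral parameter in response.

```python
# Learning-agent sketch: the critic compares observed performance to a
# fixed performance standard; the learning element adjusts behavior.

PERFORMANCE_STANDARD = 0.8  # fraction of squares expected clean (illustrative)

def critic(fraction_clean):
    """Feedback signal: positive if above the standard, negative if below."""
    return fraction_clean - PERFORMANCE_STANDARD

def learning_element(suck_bias, feedback, rate=0.5):
    """Increase the agent's propensity to clean when feedback is negative."""
    return suck_bias - rate * feedback if feedback < 0 else suck_bias

bias = 0.5
bias = learning_element(bias, critic(0.6))  # below standard: clean more
print(round(bias, 2))  # 0.6
```

The standard itself is never modified by the agent, matching the point on the next slide that it sits outside the agent.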
Learning Agents
Think of the performance standard as outside
the agent, since you don't want it to be
changed by the agent
In-class Exercise
Develop a PEAS description of the task environment
for a face-recognition agent
Performance Measure
Environment
Actuators
Sensors
In-class Exercise
Describe the task environment
Fully Observable vs Partially Observable
Deterministic vs Stochastic
Episodic vs Sequential
Static vs Dynamic
Discrete vs Continuous
In-class Exercise
Select a suitable agent design for the
face-recognition agent