
ARTIFICIAL INTELLIGENCE

AGENTS
Institute of Information and Communication Technology
University of Sindh, Jamshoro

Dr. Zeeshan Bhatti

BSSW-PIV
Chapter 2
BY: DR. ZEESHAN BHATTI

Last Time: Acting Humanly: The Full Turing Test


Alan Turing's 1950 article Computing Machinery and Intelligence
discussed conditions for considering a machine to be intelligent
Can machines think? Can machines behave intelligently?
The Turing test (The Imitation Game): Operational definition of
intelligence.

Computer needs to possess: Natural language processing, Knowledge representation, Automated reasoning, and Machine learning.

Problems: 1) The Turing test is not reproducible, constructive, or amenable to mathematical analysis. 2) What about physical interaction with the interrogator
and environment?

Total Turing Test: Requires physical interaction and needs perception and
actuation.


Last time: The Turing Test

http://aimovie.warnerbros.com

http://www.ai.mit.edu/projects/infolab/
By: Dr. Zeeshan Bhatti

Last time: The Turing Test

http://aimovie.warnerbros.com

http://www.ai.mit.edu/projects/infolab/
By: Dr. Zeeshan Bhatti

Last time: The Turing Test

http://aimovie.warnerbros.com

http://www.ai.mit.edu/projects/infolab/
By: Dr. Zeeshan Bhatti

Last time: The Turing Test

http://aimovie.warnerbros.com

http://www.ai.mit.edu/projects/infolab/
By: Dr. Zeeshan Bhatti

Last time: The Turing Test

http://aimovie.warnerbros.com

http://www.ai.mit.edu/projects/infolab/
By: Dr. Zeeshan Bhatti

This time: Outline

Intelligent Agents (IA)


Environment types
IA Behavior
IA Structure
IA Types


What is an (Intelligent) Agent?


An over-used, over-loaded, and misused term.
Anything that can be viewed as perceiving its
environment through sensors and acting upon that
environment through its effectors or actuators to
maximize progress towards its goals.


What is an (Intelligent) Agent?


A human agent has eyes, ears, and other organs for sensors and
hands, legs, vocal tract, and so on for actuators.

A robotic agent might have cameras and infrared range finders for
sensors and various motors for actuators.
A software agent receives keystrokes, file contents, and network
packets as sensory inputs and acts on the environment by
displaying on the screen, writing files, and sending network
packets.


Agents and environments


We use the term percept to refer to the agent's perceptual inputs at any given instant.

An agent's percept sequence is the complete history of everything the agent has ever perceived.

Mathematically speaking, we say that an agent's behaviour is described by the agent function that maps any given percept sequence to an action.

Internally, the agent function for an artificial agent will be implemented by an agent program.

It is important to keep these two ideas distinct. The agent function is an abstract mathematical description; the agent program is a concrete implementation, running within some physical system.

Agents and environments

The agent function maps from percept histories to actions:

f : P* → A

The agent program runs on the physical architecture to produce f:

agent = architecture + program
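
As a rough sketch (not from the slides; the function and variable names are illustrative), a table-driven agent program makes the distinction concrete: the table is an explicit listing of the agent function f : P* → A, while the closure below is the agent program that consults it.

```python
# Minimal sketch of a table-driven agent program (illustrative names).
# The table explicitly represents the agent function f : P* -> A,
# indexed by the entire percept sequence seen so far.

def make_table_driven_agent(table):
    percepts = []                          # percept sequence to date

    def program(percept):
        percepts.append(percept)
        # Look up the action for the full percept sequence; default to NoOp.
        return table.get(tuple(percepts), 'NoOp')

    return program

# Example (vacuum world): the first percept [A, Dirty] maps to Suck.
agent = make_table_driven_agent({(('A', 'Dirty'),): 'Suck'})
print(agent(('A', 'Dirty')))               # -> Suck
```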

Example: Vacuum-cleaner world

This particular world has just two locations: squares A and B.

The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing.

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square.

Looking at Figure 2.3, we see that various vacuum-world agents can be defined simply by filling in the right-hand column in various ways.

The obvious question, then, is this: What is the right way to fill out the table? In other words, what makes an agent good or bad, intelligent or stupid? We answer these questions in the next section.
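
The simple agent function just described can be written down directly; this is a minimal sketch in Python (the tuple percept format is an assumption), not the textbook's pseudocode:

```python
# Sketch of the simple vacuum agent function: suck if the current square is
# dirty, otherwise move to the other square.

def reflex_vacuum_agent(percept):
    location, status = percept            # e.g. ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

# Example: reflex_vacuum_agent(('B', 'Clean')) -> 'Left'
```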

What is an (Intelligent) Agent?

PAGE (Percepts, Actions, Goals, Environment)


Task-specific & specialized: well-defined goals
and environment
The notion of an agent is meant to be a tool for analyzing systems; it is not a different kind of hardware or a new programming language.


Intelligent Agents and Artificial Intelligence


Example: Human mind as network of thousands or
millions of agents working in parallel. To produce real
artificial intelligence, this school holds, we should build
computer systems that also contain many agents and
systems for arbitrating among the agents' competing
results.
[Slide diagram: an agency connected to its environment through sensors and effectors]

Challenges:
Distributed decision-making and control
Action selection: what next action to choose
Conflict resolution

Agent Types
We can split agent research into two main strands:
Distributed Artificial Intelligence (DAI) (1980-1990)
Multi-Agent Systems (MAS) (1990s-present): a much broader notion of "agent", covering interface, reactive, mobile, and information agents


Rational Agents

How to design this?


[Slide diagram: the environment delivers percepts to the agent through sensors; the agent's program ("?") selects actions carried out on the environment through effectors]

Remember: the Beobot example


A Windshield Wiper Agent


How do we design an agent that can wipe the windshields when needed?

Goals?
Percepts?
Sensors?
Effectors?
Actions?
Environment?


A Windshield Wiper Agent (Contd)

Goals:
Keep windshields clean & maintain visibility
Percepts:
Raining, Dirty
Sensors:
Camera (moisture sensor)
Effectors:
Wipers (left, right, back)
Actions:
Off, Slow, Medium, Fast
Environment: Inner city, freeways, highways, weather
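
As a purely illustrative sketch (the rain-intensity input and the thresholds below are assumptions, not given on the slide), such a wiper policy could map the percepts to the four actions like this:

```python
# Hypothetical wiper policy: map percepts to Off / Slow / Medium / Fast.
# rain_intensity (0.0-1.0) and the thresholds are assumed for illustration.

def wiper_agent(raining, rain_intensity, dirty):
    if not raining and not dirty:
        return 'Off'
    if dirty or rain_intensity > 0.7:
        return 'Fast'                 # heavy rain or a dirty windshield
    if rain_intensity > 0.3:
        return 'Medium'
    return 'Slow'

# Example: wiper_agent(raining=True, rain_intensity=0.5, dirty=False) -> 'Medium'
```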


Towards Autonomous Vehicles

http://iLab.usc.edu
http://beobots.org
Interacting Agents: Exercise


Collision Avoidance Agent (CAA)
Goals:
Avoid running into obstacles
Percepts ?
Sensors?
Effectors ?
Actions ?
Environment: Freeway
Lane Keeping Agent (LKA)
Goals:
Stay in current lane
Percepts ?
Sensors?
Effectors ?
Actions ?
Environment: Freeway

Interacting Agents
Collision Avoidance Agent (CAA)
Goals:
Avoid running into obstacles
Percepts:
Obstacle distance, velocity, trajectory
Sensors:
Vision, proximity sensing
Effectors:
Steering Wheel, Accelerator, Brakes, Horn, Headlights
Actions:
Steer, speed up, brake, blow horn, signal (headlights)
Environment: Freeway
Lane Keeping Agent (LKA)
Goals:
Stay in current lane
Percepts:
Lane center, lane boundaries
Sensors:
Vision
Effectors:
Steering Wheel, Accelerator, Brakes
Actions:
Steer, speed up, brake
Environment: Freeway

Conflict Resolution by Action Selection Agents


Override: CAA overrides LKA

Arbitrate: if Obstacle is Close then CAA, else LKA (sketched in code below)

Compromise: choose an action that satisfies both agents

Any combination of the above

Challenges: doing the right thing
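
A minimal sketch of the arbitration strategy, assuming a distance threshold and a simple callable interface for the two agents (both are illustrative assumptions):

```python
# Arbitration sketch: the collision-avoidance agent (CAA) takes over when an
# obstacle is close; otherwise the lane-keeping agent (LKA) acts.

CLOSE_DISTANCE_M = 10.0          # hypothetical "close" threshold, in metres

def arbitrate(percept, caa, lka):
    if percept['obstacle_distance'] < CLOSE_DISTANCE_M:
        return caa(percept)      # e.g. brake or steer away
    return lka(percept)          # e.g. steer toward the lane centre
```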



GOOD BEHAVIOR:
THE CONCEPT OF RATIONALITY


Rational agents

Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

What is a Rational Agent?


A rational agent is one that does the right thing; conceptually speaking, every entry in the table for the agent function is filled out correctly.

Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing? We answer this by considering the consequences of the agent's behaviour.


Rational agents
Rationality is distinct from omniscience (all-knowing with
infinite knowledge)

Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration).

An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).

Rational agents: Performance Measure


An agent should strive to "do the right thing", based on
what it can perceive and the actions it can perform. The
right action is the one that will cause the agent to be
most successful.
Performance measure: An objective criterion for
success of an agent's behavior in any given sequence of
environment states.

E.g., performance measure of a vacuum-cleaner agent


could be amount of dirt cleaned up, amount of time
taken, amount of electricity consumed, amount of noise
generated, etc.

As a general rule, it is better to design performance

measures according to what one actually wants in the


environment, rather than according to how one thinks
the agent should behave


Rationality?
What is rational at any given time depends on four things:
The performance measure that defines the criterion of success.
The agent's prior knowledge of the environment.
The actions that the agent can perform.
The agent's percept sequence to date.
This leads to a definition of a rational agent:

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
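
Written as a formula (one common way to state it, not verbatim from the textbook), the rational action after the percept sequence to date is the available action with the highest expected performance:

```latex
a^{*} \;=\; \operatorname*{arg\,max}_{a \in \mathcal{A}}\;
\mathbb{E}\big[\,\text{performance measure} \;\big|\; \text{percept sequence to date},\ \text{built-in knowledge},\ a\,\big]
```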


Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the other square if not; this is the agent function tabulated in Figure 2.3. Is this a rational agent?
That depends! First, we need to say what the performance measure is,
what is known about the environment, and what sensors and actuators
the agent has. Let us assume the following:
The performance measure awards one point for each clean square
at each time step, over a lifetime of 1000 time steps.
The geography of the environment is known a priori (Figure 2.2)
but the dirt distribution and the initial location of the agent are not.
Clean squares stay clean and sucking cleans the current square. The
Left and Right actions move the agent left and right except when this
would take the agent outside the environment, in which case the agent
remains where it is.

The only available actions are Left , Right, and Suck.


The agent correctly perceives its location and whether that location contains dirt.

We claim that under these circumstances the agent is indeed rational; its expected performance is at least as high as that of any other agent.
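
One can check this claim informally with a quick simulation. The sketch below (random dirt placement, the percept format, and the reuse of the reflex_vacuum_agent sketch from earlier are all assumptions for illustration) applies the stated performance measure: one point per clean square per time step over 1000 steps.

```python
# Rough simulation of the two-square vacuum world under the performance
# measure above (one point per clean square at each of 1000 time steps).

import random

def simulate(agent, steps=1000):
    dirt = {'A': random.random() < 0.5, 'B': random.random() < 0.5}
    location = random.choice(['A', 'B'])
    score = 0
    for _ in range(steps):
        score += sum(1 for square in dirt if not dirt[square])   # reward clean squares
        percept = (location, 'Dirty' if dirt[location] else 'Clean')
        action = agent(percept)
        if action == 'Suck':
            dirt[location] = False
        elif action == 'Right':
            location = 'B'
        elif action == 'Left':
            location = 'A'
    return score

# e.g. simulate(reflex_vacuum_agent) scores close to the 2000-point maximum.
```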


One can see easily that the same agent would be irrational under
different circumstances.
For example, once all the dirt is cleaned up, the agent will oscillate
needlessly back and forth;
if the performance measure includes a penalty of one point for each
movement left or right, the agent will fare poorly.
A better agent for this case would do nothing once it is sure that all
the squares are clean.
If clean squares can become dirty again, the agent should
occasionally check and re-clean them if needed.
If the geography of the environment is unknown, the agent will
need to explore it rather than stick to squares A and B.


Exercise: Homework


Task: Let us examine the rationality of various vacuum-cleaner agent
functions.
a. Show that the simple vacuum-cleaner agent function described in
Figure 2.3 is indeed rational under the assumptions listed on page 38.
b. Describe a rational agent function for the case in which each
movement costs one point. Does the corresponding agent program
require internal state?
c. Discuss possible agent designs for the cases in which clean squares
can become dirty and the geography of the environment is unknown.
Does it make sense for the agent to learn from its experience in these
cases? If so, what should it learn? If not, why not?


The Right Thing = The Rational Action


Rational Action: The action that maximizes the
expected value of the performance measure given the
percept sequence to date

Rational = Best?
Rational = Optimal?
Rational = Omniscience?
Rational = Clairvoyant?
Rational = Successful?

(Clairvoyant = intuitive, psychic, telepathic)


The Right Thing = The Rational Action


Rational Action: The action that maximizes the
expected value of the performance measure given the
percept sequence to date

Rational = Best?        Yes, to the best of its knowledge
Rational = Optimal?     Yes, to the best of its abilities (incl. its constraints)
Rational ≠ Omniscience
Rational ≠ Clairvoyant
Rational ≠ Successful


Behavior and performance of IAs


Perception (sequence) to Action Mapping: f : P* → A
Ideal mapping: specifies which actions an agent ought to take at
any point in time
Description: Look-Up-Table, Closed Form, etc.

Performance measure: a subjective measure to characterize how successful an agent is (e.g., speed, power usage, accuracy, money, etc.)

(Degree of) Autonomy: to what extent is the agent able to make decisions and take actions on its own?

Look up table

Distance    Action
10          No action
5           Turn left 30 degrees
(closer)    Stop

[Slide diagram: an agent whose distance sensor detects an obstacle ahead]
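
A minimal sketch of this look-up-style mapping (the distance for the Stop row is not legible on the slide, so treating anything closer than 5 as Stop is an assumption):

```python
# Look-up-style mapping from sensed obstacle distance to an action.

def lookup_action(distance):
    if distance >= 10:
        return 'No action'
    if distance >= 5:
        return 'Turn left 30 degrees'
    return 'Stop'                 # assumed cut-off: anything closer than 5
```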

Closed form
Output (degree of rotation) = F(distance)
E.g., F(d) = 10/d

(distance cannot be less than 1/10)
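
The same idea expressed in closed form, as on the slide; a small sketch with the stated domain restriction (d ≥ 1/10, so the rotation never exceeds 100 degrees):

```python
# Closed-form mapping: degrees of rotation = F(d) = 10 / d, for d >= 1/10.

def rotation_degrees(distance):
    if distance < 0.1:
        raise ValueError("distance cannot be less than 1/10")
    return 10.0 / distance

# Example: rotation_degrees(5) -> 2.0 degrees
```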


Thank you

Q&A
Reference Book
Artificial Intelligence: A Modern Approach, 3rd Edition, by Stuart Russell and Peter Norvig, Prentice Hall, 2010
For Course Slides and Handouts
web page:

https://sites.google.com/site/drzeeshanacademy/

Blog:

http://zeeshanacademy.blogspot.com/

Facebook:

https://www.facebook.com/drzeeshanacademy