
Introduction to Intelligent Systems

Intelligent Agents
Objectives
- Alternative approaches to AI
- Agent and environment
- Factors affecting agent evaluation
- Properties of agent environment
- Types of agents
References: Russell & Norvig, 1.1, 2.1-2.4
Y. Xiang, CIS 3700, Introduction to Intelligent Systems 1

What AI Can Do Today


- Robotic vacuum home cleaners
- Driverless robotic cars over rough desert terrain
- Defeating the world champion in chess
- Fighting spam through learning algorithms
- Planning and scheduling scientific, industrial, and military operations
- Monitoring and trouble-shooting equipment
- Assisting medical diagnosis and treatment

What is AI?
The field of artificial intelligence attempts to understand and create intelligent entities. How can a small amount of matter perceive, understand, predict, and manipulate a world far larger and far more complicated than itself? What constitutes success?
Human-centered approach
- Thought-process oriented: systems that think like humans
- Behavior oriented: systems that act like humans
Principled approach
- Systems that think logically
- Systems that act rationally

Human Centered Approach


Acting humanly
- Represented by the Turing Test
- Must machines fly like birds? Should machines duplicate human weaknesses?
Thinking humanly
- Requires understanding of how the human mind works
- Is it necessary for intelligent entities to duplicate the human thought process?



Principled Approach
Thinking logically
- Build intelligent systems based on logic
- Not always practical to solve problems logically
Acting rationally: the rational-agent approach
- A rational agent acts so as to achieve the best outcome.
- Need not imitate humans, either internally or externally
- Need not be based on logic

Agents and Environment


Operationally, an agent perceives its environment through sensors and acts upon the environment through actuators.
[Figure: the agent program receives percepts from the environment through sensors and sends actions back through actuators; this coupling is the situatedness of the agent.]

Agent Function
- Percept sequence: everything that the agent has perceived so far
- Agent behavior can be described by an agent function that maps each percept sequence to an action.
- Ex: a simple vacuum-cleaner agent
- Relation between agent function and agent program
- Is it practical to implement the agent function as a look-up table?

Ex A Simple Vacuum-Cleaner Agent


Agents working in a household
- Environment: squares A and B may become dirty at any time.
- Sensors: perceive the current square and whether it is dirty
- Actuators: move left or right; vacuum dirt; do nothing
- Possible agent function AF. Idea: vacuum if dirty, move otherwise.
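The agent function for this world can be sketched in Python. The sketch below, with illustrative names not taken from the slides, shows why a literal look-up table over percept sequences is impractical: with |P| possible percepts there are |P|**T sequences of length T, yet here the action depends only on the latest percept.

```python
# Sketch: the agent function maps a FULL percept sequence to an action.
# A percept is a (location, status) pair; names are illustrative.

def agent_function(percept_sequence):
    """Vacuum if the current square is dirty, otherwise move to the other square."""
    location, status = percept_sequence[-1]  # only the latest percept matters here
    if status == "dirty":
        return "vacuum"
    return "move_right" if location == "A" else "move_left"

seq = (("A", "clean"), ("A", "dirty"))
print(agent_function(seq))                 # vacuum
print(agent_function((("B", "clean"),)))   # move_left
```

Because every percept sequence ending in the same percept maps to the same action, a short program captures what an exponentially large table would.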



Factors Affecting Rationality


How do we measure the rationality of an agent?
Performance measure
- PM1: average amount of dirt vacuumed per hour
- PM2: average number of clean squares at any time
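Both measures can be computed from a run of the vacuum world. A minimal sketch, in which the trace below is an illustrative assumption rather than data from the slides:

```python
# Sketch: evaluating one run under the two performance measures.
# Each step records (dirt_vacuumed_this_step, number_of_clean_squares).
# The trace is an illustrative assumption.
trace = [(1, 1), (0, 2), (0, 2), (1, 1), (0, 2), (0, 2)]  # 6 steps

pm1 = sum(d for d, _ in trace) / len(trace)  # avg dirt vacuumed per step
pm2 = sum(c for _, c in trace) / len(trace)  # avg clean squares per step

print(round(pm1, 2))  # 0.33
print(round(pm2, 2))  # 1.67
```

The choice of measure matters: an agent scored by PM1 is rewarded only when dirt appears and is vacuumed, while PM2 rewards keeping squares clean.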

Rationality vs omniscience
Ex: Given all available test results, a dying patient D is diagnosed with disease X at 99% confidence. Type-Y surgery is the only cure for disease X, but it will paralyze the patient if he does not have disease X.
- What should a rational agent do?
- What if the surgery reveals that D does not have disease X? Was the surgery a mistake?

Agent's prior knowledge of the environment
- Ex: what if the number of squares is small but unknown?
Alternative actions
- Ex: alternative therapies to cure a disease
Agent's percept sequence
- Ex: what if the agent can see the other square?

Properties of Task Environment - 1


Why study properties of the task environment?
Fully observable vs partially observable
- Fully observable: the agent's sensors give it access to the complete state of the environment at any time.
- How to represent an environment state? Variables and state space
- Ex: is the medical diagnostic environment fully observable?
- Observability depends on the performance measure.

Properties of Task Environment - 2


Deterministic vs stochastic
- Deterministic: the next state of the environment is completely determined by the current state and the agent's action. Ex: 8-puzzle, chess, checkers
- Agents in a stochastic environment have only partial control over the environment and must prepare for failure.
- A partially observable, deterministic environment has to be treated as stochastic. Ex: weather forecasting




Properties of Task Environment - 3


Episodic vs sequential
- In an episodic environment, the agent's experience is divided into episodes that are independent of each other. Ex: detecting defective parts on an assembly line
- A decision in the current episode has no consequence for future episodes.
- In sequential environments, the current action has long-term consequences. Ex: opening plays in a board game

Properties of Task Environment - 4


Static vs dynamic
- Static: the environment does not change while the agent is deliberating. Ex: board games without a clock
- The agent need not worry about the passage of time or monitor the environment while deliberating.
- Dynamic environment. Ex: a blackout crisis
- Indecision counts as deciding to do nothing and has a negative consequence.

Properties of Task Environment - 5


Discrete vs continuous
- Discrete: each environment variable typically has a finite number of possible values.
- Implication for the number of environment states
- Ex: checkers vs heating control
- A discrete representation of a continuous environment can be accurate to any desired degree but is inherently approximate.
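As a sketch of the heating-control example, a continuous temperature can be mapped into finitely many bands; the band width and names below are illustrative assumptions.

```python
import math

# Sketch: discretizing a continuous temperature reading (degrees C)
# into a finite set of states for a heating-control agent.
# The band width `step` is an illustrative assumption.

def discretize(temp_c, step=2.0):
    """Map a continuous temperature to the lower edge of its band of width `step`."""
    return math.floor(temp_c / step) * step

# Two distinct readings fall into the same discrete state...
print(discretize(20.3), discretize(21.9))  # 20.0 20.0
# ...so the representation is inherently approximate; shrinking `step`
# refines it to any desired accuracy.
print(discretize(21.9, step=0.5))  # 21.5
```

This is the trade-off the slide names: a finite state set for the agent to reason over, at the cost of blurring readings that share a band.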

Properties of Task Environment - 6


Single agent vs multiagent
- Ex: printer trouble-shooting
- Ex: board games
- Ex: a factory floor populated by mobile robots
The difficulty of agent design is affected significantly by the properties of the task environment.
- Simplest: fully observable, deterministic, episodic, static, discrete, and single agent
- Hardest: partially observable, stochastic, sequential, dynamic, continuous, and multiagent




Basic Types of Agent Program
- Simple reflex agents
- Model-based reflex agents
- Goal-based agents
- Utility-based agents

Simple Reflex Agents
- Select an action based on the current percept only, ignoring the rest of the percept history
- Partial agent program skeleton:

    simpleReflexAgent(percept) {
        static rules;                        // condition-action rules
        state  = interpretInput(percept);
        rule   = matchRule(state, rules);
        action = getAction(rule);
        return action;
    }

- Works only if the environment is fully observable


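The skeleton above can be sketched as a runnable simple reflex agent for the two-square vacuum world; the rule encoding and function names are illustrative assumptions.

```python
# Sketch of a simple reflex agent for the two-square vacuum world.
# It consults only the current percept (location, status); no history is kept.
# The rule table and names are illustrative assumptions.

RULES = {
    # interpreted state -> action (condition-action rules)
    ("A", "dirty"): "vacuum",
    ("B", "dirty"): "vacuum",
    ("A", "clean"): "move_right",
    ("B", "clean"): "move_left",
}

def interpret_input(percept):
    """Trivial here: the percept already describes the state."""
    return percept

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return RULES[state]  # match rule and extract its action

print(simple_reflex_agent(("A", "dirty")))  # vacuum
print(simple_reflex_agent(("A", "clean")))  # move_right
```

Note that the rules key on the full current percept; if the sensor dropped the location, the agent could no longer choose a direction, which is why this design needs full observability.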

Overcome Partial Observability


- To overcome partial observability, the agent can internally keep track of the unobservable part of the environment.
- Ex: if the vacuum agent cannot see perfectly, it can keep track of where it is internally.
- To do so, the agent needs to know how the agent's actions affect the environment, and how the environment evolves independently of the agent's actions.
- Model: the knowledge about how the environment works

Model-Based Reflex Agents


Partial agent program skeleton:

    modelBasedReflexAgent(percept) {
        static rules, state = initialState, action = null;
        state  = updateState(state, action, percept);
        rule   = matchRule(state, rules);
        action = getAction(rule);
        return action;
    }

Interfaces with sensors/actuators, as well as the iteration loop, are not shown.
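Mirroring the slide's example, suppose the vacuum agent's sensor reports only "dirty"/"clean" with no location. A model-based sketch can track its location internally from its own past actions; the class, names, and update rule below are illustrative assumptions.

```python
# Sketch of a model-based reflex agent for the two-square vacuum world,
# assuming the percept is only "dirty"/"clean" (no location sensed).
# The agent maintains a believed location using a model of its actions.
# Names and the update rule are illustrative assumptions.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.location = "A"      # internal state: believed location
        self.last_action = None

    def update_state(self, percept):
        # Model: moving right from A lands in B; moving left from B lands in A.
        if self.last_action == "move_right":
            self.location = "B"
        elif self.last_action == "move_left":
            self.location = "A"
        return (self.location, percept)

    def act(self, percept):
        state = self.update_state(percept)  # (believed location, dirty/clean)
        if state[1] == "dirty":
            action = "vacuum"
        else:
            action = "move_right" if state[0] == "A" else "move_left"
        self.last_action = action
        return action

agent = ModelBasedVacuumAgent()
print(agent.act("dirty"))   # vacuum (believes it is in A)
print(agent.act("clean"))   # move_right
print(agent.act("dirty"))   # vacuum (now believes it is in B)
```

The internal state plus the action model substitute for the missing location sensor, which is exactly the role of updateState in the skeleton.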




Goal-Based and Utility-Based Agents


Goal-based agents
- Knowing the current state is not always enough to decide on an action, e.g., for delivery agents.
- The agent also needs to know which states are desirable.

Utility-based agents
- Goals divide environment states into desirable and undesirable ones; they cannot represent preferences among goal states.
- Ex: treating a disease with either mild or severe side effects
- Utility-based agents use a numerical utility function to express preferences among states.
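A sketch of the therapy example: both treatments satisfy the goal (disease cured), so a goal test cannot choose between them, but a utility function can rank them by side effects. All numbers and names below are illustrative assumptions.

```python
# Sketch: a goal test cannot distinguish two goal states,
# but a utility function expresses a preference between them.
# States, names, and numeric values are illustrative assumptions.

def goal_test(state):
    return state["cured"]

def utility(state):
    # Higher is better: being cured is worth 100; side effects subtract.
    return (100 if state["cured"] else 0) - state["side_effect_severity"]

therapy_a = {"cured": True, "side_effect_severity": 5}    # mild side effects
therapy_b = {"cured": True, "side_effect_severity": 40}   # severe side effects

print(goal_test(therapy_a), goal_test(therapy_b))  # True True (goal cannot choose)
print(utility(therapy_a), utility(therapy_b))      # 95 60 (utility prefers A)
best = max([therapy_a, therapy_b], key=utility)
print(best["side_effect_severity"])                # 5
```

The goal partitions states into cured/not-cured only; the numeric utility adds the ordering among goal states that the slide says goals alone cannot represent.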
