
C463 / B551 Artificial Intelligence

Dana Vrajitoru

Intelligent Agents

Intelligent Agent
Agent: an entity in a program or environment capable of generating actions. An agent uses perception of the environment to make decisions about the actions to take. The perception capability is usually called a sensor. The actions can depend on the most recent percept or on the entire history (the percept sequence).
Artificial Intelligence D. Vrajitoru

Agent Function
The agent function is a mathematical function that maps a sequence of perceptions to an action. The function is implemented as the agent program. The part of the agent that takes an action is called an actuator.

[Diagram: the agent's Sensors perceive the Environment, producing a Percept (Observations); the Agent Function maps percepts to an Action, which the Actuator applies back to the Environment.]

Rational Agent
A rational agent is one that makes the right decision in every situation. Performance measure: a set of criteria / a test bed for the success of the agent's behavior. Performance measures should be based on the desired effect of the agent on the environment.

Rationality
The agent's rational behavior depends on:
- the performance measure that defines success,
- the agent's knowledge of the environment,
- the actions it is capable of performing,
- the current sequence of perceptions.
Definition: for every possible percept sequence, the agent is expected to take the action that maximizes its performance measure.
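As an illustration of this definition, here is a minimal rational agent for a hypothetical two-location vacuum world, where the performance measure awards points for cleaning dirty squares. The example is a sketch, not from the slides:

```python
# A rational reflex agent for a two-location vacuum world
# (locations 'A' and 'B'); the performance measure awards one
# point per square cleaned, so cleaning is always preferred.

def vacuum_agent(percept):
    """Percept is a (location, status) pair; returns an action."""
    location, status = percept
    if status == "dirty":
        return "suck"   # cleaning directly maximizes the performance measure
    # Otherwise move to the other square to check it.
    return "right" if location == "A" else "left"

print(vacuum_agent(("A", "dirty")))  # suck
print(vacuum_agent(("B", "clean")))  # left
```

For each possible percept, this agent's action maximizes the expected score, so it is rational with respect to the stated performance measure.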

Agent Autonomy
An agent is omniscient if it knows the actual outcome of its actions; this is not possible in practice. An environment can sometimes be completely known in advance. Exploration: sometimes an agent must perform an action to gather information (to increase perception). Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).

Environment
Task environment: the problem that the agent is a solution to. Properties:
- Observable (fully or partially): a fully observable environment needs less internal representation.
- Deterministic or stochastic. Strategic: deterministic except for the actions of other agents.

Environment
- Episodic or sequential. Episodic: individual, unrelated tasks for the agent to solve. Sequential: future actions depend on the previous ones.
- Static or dynamic.
- Discrete or continuous.
- Single agent or multi-agent. Multiple agents can be competitive or cooperative.
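The properties above can be recorded as a simple data record so that different task environments are easy to compare. This is an illustrative sketch; the classification of chess shown is only an example:

```python
# Recording the task-environment properties from the slides as a record.
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    observable: str      # "fully" or "partially"
    determinism: str     # "deterministic", "stochastic", or "strategic"
    episodic: bool       # True = episodic, False = sequential
    static: bool         # True = static, False = dynamic
    discrete: bool       # True = discrete, False = continuous
    agents: str          # "single" or "multi"

# Example classification: chess (without a clock) is fully observable,
# strategic, sequential, static, discrete, and multi-agent.
chess = TaskEnvironment("fully", "strategic", False, True, True, "multi")
print(chess)
```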

More Definitions of Agents


"An agent is a persistent software entity dedicated to a specific purpose." (Smith, Cypher, and Spohrer 94)

"Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires." (IBM)

"Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions." (Hayes-Roth 94)

Agent vs. Program


- Size: an agent is usually smaller than a program.
- Purpose: an agent has a specific purpose, while programs are multi-functional.
- Persistence: an agent's life span is not entirely dependent on a user launching and quitting it.
- Autonomy: an agent doesn't need the user's input to function.

Simple Agents
Table-driven agents: the function consists of a lookup table of actions to be taken for every possible state of the environment. If the environment has n variables, each with t possible states, then the table size is t^n. This only works for a small number of possible environment states.

Simple reflex agents: decide on the action to take based only on the current percept, not on the history of percepts. Based on the condition-action rule: (if (condition) action). This works if the environment is fully observable.
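A quick illustration of why the t^n table size makes table-driven agents impractical for all but tiny environments:

```python
# Size of a table-driven agent's lookup table: t**n entries for an
# environment with n variables, each having t possible states.
def table_size(t, n):
    return t ** n

print(table_size(2, 10))   # 1024 entries: still manageable
print(table_size(10, 10))  # 10000000000 entries: already infeasible
```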


(defun table_agent (percept)
  (let ((action t))
    (push percept percepts)
    (setq action (lookup percepts table))
    action))

(defun reflex_agent (percept)
  (let ((rule t) (state t) (action t))
    (setq state (interpret percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))

percepts = []
table = {}

def table_agent(percept):
    percepts.append(percept)
    action = lookup(percepts, table)
    return action

def reflex_agent(percept):
    state = interpret(percept)
    rule = match(state)
    action = decision(rule)
    return action

Model-Based Reflex Agents


If the world is not fully observable, the agent must remember observations about the parts of the environment it cannot currently observe. This usually requires an internal representation of the world (an internal state). Since this representation is a model of the world, we call this a model-based agent.

(setq state t)     ; the world model
(setq action nil)  ; latest action

(defun model_reflex_agent (percept)
  (let ((rule t))
    (setq state (update_state state action percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))

state = True    # the world model
action = False  # latest action

def model_reflex_agent(percept):
    global state, action
    state = update_state(state, action, percept)
    rule = match(state)
    action = decision(rule)
    return action

Goal-Driven Agents
The agent has a purpose, and the action to be taken depends on the current state and on what it tries to accomplish (the goal). In some cases the goal is easy to achieve; in others, it involves planning: sifting through a search space of possible solutions and developing a strategy.

Utility-based agents: the agent is aware of a utility function that estimates how close the current state is to the agent's goal.
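A utility-based choice can be sketched as follows: among the actions available in the current state, pick the one whose predicted successor state has the highest utility. The functions `actions`, `result`, and `utility` are hypothetical placeholders for a concrete problem, not names from the slides:

```python
# A minimal utility-based agent: evaluate each available action by the
# utility of the state it would lead to, and pick the best one.
def utility_agent(state, actions, result, utility):
    return max(actions(state), key=lambda a: utility(result(state, a)))

# Toy usage: states are numbers, the goal is to reach 10.
best = utility_agent(
    7,
    actions=lambda s: [-1, +1],         # move down or up
    result=lambda s, a: s + a,          # predicted successor state
    utility=lambda s: -abs(10 - s),     # closer to the goal = higher utility
)
print(best)  # 1
```

Note that the agent needs a model (`result`) to predict successor states, so utility-based agents build on the model-based design above.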

Learning Agents
Agents capable of acquiring new competence through observations and actions. Components:

- learning element (modifies the performance element)
- performance element (selects actions)
- feedback element (critic)
- exploration element (problem generator)
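As a toy sketch of how these components might interact (all names here are illustrative, not from the slides): the performance element is a condition-action table, the critic supplies numeric feedback, the learning element revises rules that earned negative feedback, and the problem generator tries a random action for unseen states.

```python
import random

class LearningAgent:
    def __init__(self, actions):
        self.rules = {}        # performance element: state -> action
        self.actions = actions

    def act(self, state):
        # Problem generator: explore a random action for unseen states.
        if state not in self.rules:
            self.rules[state] = random.choice(self.actions)
        return self.rules[state]

    def learn(self, state, reward):
        # Learning element: on negative feedback from the critic,
        # modify the performance element by revising the rule.
        if reward < 0:
            self.rules[state] = random.choice(self.actions)
```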

Other Types of Agents


- Temporally continuous: a continuously running process.
- Communicative: an agent exchanging information with other agents to complete its task.
- Mobile: an agent capable of moving from one machine to another (or from one environment to another).
- Flexible: an agent whose actions are not scripted.
- Character: an agent with conversation skills, personality, and even emotional state.

Agent Classification


Agent Example
A file manager agent.
- Sensors: commands like ls, du, pwd.
- Actuators: commands like tar, gzip, cd, rm, cp, etc.
- Purpose: compress and archive files that have not been used in a while.
- Environment: fully observable (but partially observed), deterministic (strategic), episodic, dynamic, discrete.
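This agent's core decision rule might be sketched as below. Real sensing would call os.stat on each file, but here the percept is passed in as (name, last_access) pairs so the rule is self-contained; the function name and cutoff are illustrative assumptions:

```python
import time

def stale_files(percept, max_age_days=90, now=None):
    """Select archiving candidates from a percept of
    (filename, last_access_epoch_seconds) pairs."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 24 * 3600
    return [name for name, atime in percept if atime < cutoff]

files = [("old.log", 0), ("fresh.txt", 1_000_000)]
print(stale_files(files, max_age_days=1, now=1_000_000))  # ['old.log']
```

The actuators would then apply tar/gzip to the selected files; keeping the selection rule as a pure function makes the agent's behavior easy to test.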
