ARTIFICIAL INTELLIGENCE
INTELLIGENT AGENTS
An agent is something that acts (the word agent comes from the Latin agere, to do). Computational agents, however, have other attributes that distinguish them from conventional programs: they operate under autonomous control, perceive their environment, persist over a long period of time, adapt to change, and are able to take on different goals. A rational agent acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
From the artificial intelligence point of view, under the "laws of thought" approach, all the emphasis is on making correct inferences. Making correct inferences can be part of what it means to be a rational agent, because one rational way to act is to reason to the conclusion that a given action will achieve a goal and then carry out that action. However, correct inference is not all of rationality: there are situations in which there is no provably correct thing to do, yet a decision must still be made. There are also ways of acting rationally that do not involve inference at all; for example, pulling one's hand away from a hot surface is a reflex action, and much more effective than a slower response taken after careful deliberation.
Achieving perfect rationality (always doing the right thing) is not possible in complex environments: the computational demands are simply too great. We therefore settle for limited rationality (acting appropriately when there is not enough time to compute the ideal action).
The idea is to establish a set of design principles that allow the construction of useful agents, systems that can reasonably be called intelligent.
We talk about agents, their environments, and the interaction between them. Some agents achieve better results than others, even in the same environment; that is, some agents do everything they can. The way an agent behaves depends strongly on its environment.
A human agent has eyes, ears, and other organs as sensory inputs, and legs, hands, and other body parts with which to act. A robotic agent might use cameras for sensors and motors for actuators, while a software agent receives keystrokes, file contents, and the like as input. We assume that each agent perceives its own actions, but not always their effects.
The term percept is used in this context to refer to the agent's perceptual inputs at any given instant. An agent's percept sequence is the complete history of everything the agent has perceived up to that moment. If we can specify the action an agent will take for each possible percept sequence, then we have said almost everything there is to say about that agent.
In mathematical terms, an agent's behavior is described by the agent function, which maps any given percept sequence to an action.
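To make this concrete, here is a minimal sketch in Python of an agent function for the two-square vacuum world used in the cited textbook; the function name and the encoding of percepts as (location, status) tuples are illustrative assumptions.

```python
# Sketch of an agent function for a two-square vacuum world.
# Percepts are encoded as (location, status) tuples; this encoding
# is an illustrative assumption.

def vacuum_agent_function(percept_sequence):
    """Map a complete percept sequence to an action.

    Only the most recent percept matters for this simple agent, but
    the function is defined over the whole sequence, as the
    definition of an agent function requires.
    """
    location, status = percept_sequence[-1]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(vacuum_agent_function([("A", "Dirty")]))                  # Suck
print(vacuum_agent_function([("A", "Dirty"), ("A", "Clean")]))  # Right
```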
PERFORMANCE MEASURES
A performance measure embodies the criterion for success of an agent's behavior. When an agent is placed in an environment, it performs actions according to the percepts it receives, and these actions cause the environment to pass through a sequence of states. If the sequence is desirable, then the agent has performed well. Obviously, there is no single measure suitable for all agents. We could ask the agent for its opinion of its own performance, but many agents would be unable to answer, and others would delude themselves.
As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.
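As a toy illustration, a performance measure is computed over the sequence of environment states, not over the agent's actions. The scoring rule below (one point per clean square per time step, following the vacuum-world example in the cited textbook) and the state encoding are assumptions made for the sketch.

```python
# Illustrative performance measure for the vacuum world: award one
# point for each clean square at each time step. Environment states
# are encoded as dicts from square name to "Clean"/"Dirty".

def performance(state_sequence):
    return sum(
        sum(1 for status in state.values() if status == "Clean")
        for state in state_sequence
    )

history = [
    {"A": "Dirty", "B": "Dirty"},
    {"A": "Clean", "B": "Dirty"},
    {"A": "Clean", "B": "Clean"},
]
print(performance(history))  # 0 + 1 + 2 = 3
```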
RATIONALITY
What is rational at any given moment depends on four factors:
1. The performance measure that defines the criterion of success.
2. The agent's prior knowledge of the environment.
3. The actions that the agent can perform.
4. The agent's percept sequence to date.
Based on these factors, the definition of a rational agent is: for each possible percept sequence, a rational agent should select the action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever knowledge the agent has stored.
Omniscience is impossible. Certainly an agent may know the immediate result of its action on the environment, but it cannot know the action's total impact, the influence of other agents, or how the environment will respond.
Taking actions with the intention of modifying future percepts is a process called information gathering. A rational agent should not only gather information; it should also learn from what it perceives.
An agent should also be autonomous, that is, able to compensate for incomplete or partial initial knowledge. It is reasonable to give an agent some initial knowledge as well as the ability to learn. After sufficient experience interacting with its environment, the agent's behavior becomes effectively independent of its initial knowledge.
TASK ENVIRONMENT
To specify even a simple agent, it is necessary to specify the performance measure, the environment, and the agent's actuators and sensors. Together these constitute the so-called task environment; in agent design, specifying it must be the first step, and it should be done as fully as possible.
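A task-environment description can be written down as a simple data structure; the sketch below uses the automated-taxi entries from the cited textbook, while the class and field names are illustrative assumptions.

```python
# Sketch of a task-environment (performance measure, environment,
# actuators, sensors) description. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    performance_measure: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

# The automated-taxi example from the cited textbook:
taxi = TaskEnvironment(
    performance_measure=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer"],
)
print(taxi.actuators)
```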
DETERMINISTIC VS STOCHASTIC
If the next state of the environment is completely determined by the current state and the action executed by the agent, then the environment is deterministic; otherwise it is stochastic. If the environment is deterministic except for the actions of other agents, then the environment is strategic.
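The distinction can be sketched as two transition functions; the one-dimensional world and the 10% failure probability below are assumptions chosen only for illustration.

```python
# Deterministic vs. stochastic transitions in a toy 1-D world where
# an agent moves along integer positions. Encodings are illustrative.

import random

def deterministic_step(state, action):
    # The next state is fully determined by (state, action).
    return state + (1 if action == "Right" else -1)

def stochastic_step(state, action):
    # With probability 0.1 the move fails and the state is unchanged,
    # so (state, action) no longer determines the next state.
    if random.random() < 0.1:
        return state
    return deterministic_step(state, action)

print(deterministic_step(0, "Right"))  # always 1
print(stochastic_step(0, "Right"))     # usually 1, sometimes 0
```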
STATIC VS DYNAMIC
If the environment can change while the agent is deliberating, then the environment is dynamic for that agent; otherwise it is static. Static environments are easier to deal with because the agent need not keep watching the world while it is deciding on an action, nor need it worry about the passage of time. A dynamic environment, by contrast, is continuously asking the agent what it wants to do; if it has not decided yet, that counts as deciding to do nothing. If the environment itself does not change with the passage of time but the agent's performance score does, then the environment is semidynamic.
DISCRETE VS CONTINUOUS
The distinction between discrete and continuous applies to the state of the environment, to the way time is handled, and to the agent's percepts and actions.
AGENT STRUCTURE
So far we have talked about agents by describing behavior: the action that is performed after any given sequence of percepts. Now we must bite the bullet and talk about how the insides work. The job of AI is to design an agent program that implements the agent function, the mapping from percepts to actions. We assume this program will run on some sort of computing device with physical sensors and actuators; we call this the architecture:
agent = architecture + program
The architecture might be just an ordinary PC, or it might be a robotic car with several
onboard computers, cameras, and other sensors. In general, the architecture makes
the percepts from the sensors available to the program, runs the program, and feeds
the program's action choices to the actuators as they are generated.
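Here is a minimal sketch of this division of labor, with the architecture as a loop connecting sensors, agent program, and actuators; all class and function names here are illustrative assumptions.

```python
# Sketch of the architecture: it makes percepts from the sensors
# available to the agent program, runs the program, and feeds the
# chosen actions to the actuators. All names are illustrative.

class FixedSensors:
    """Stub sensor that always reports the same percept."""
    def read(self):
        return ("A", "Dirty")

class PrintActuators:
    """Stub actuator that just prints the chosen action."""
    def execute(self, action):
        print("executing:", action)

def run(sensors, agent_program, actuators, steps=3):
    for _ in range(steps):
        percept = sensors.read()
        action = agent_program(percept)
        actuators.execute(action)

# A one-line reflex program serves as the agent program here:
run(FixedSensors(),
    lambda p: "Suck" if p[1] == "Dirty" else "Right",
    PrintActuators())
```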
The agent program implements the agent function. There is a great variety of agent-program designs, which vary in efficiency, robustness, and flexibility, and which reflect what information is made explicit and used in the decision process. For this reason, the appropriate design of the agent program depends largely on the nature of the environment. The basic classification of agent programs by structure is:
1. Simple reflex agents, which respond directly to percepts.
2. Model-based reflex agents, which maintain an internal state that allows them to keep track of aspects of the world that are not evident in the current percept.
3. Goal-based agents, which act with the intention of achieving their goals.
4. Utility-based agents, which try to maximize their own expected utility.
Each kind of agent program combines particular components in particular ways to
generate actions.
[Figure: schematic of a simple reflex agent. Sensors report "what the world is like now"; condition-action rules select the action; actuators act on the environment.]
Note that the description in terms of "rules" and "matching" is purely conceptual;
actual implementations can be as simple as a collection of logic gates implementing a
Boolean circuit.
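A minimal sketch of a simple reflex agent following the condition-action scheme in the figure; the rule table and the vacuum-world percept encoding are illustrative assumptions.

```python
# Sketch of a simple reflex agent: it matches the current percept
# against condition-action rules and ignores all percept history.
# The rule table and percept encoding are illustrative assumptions.

RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    return RULES[percept]  # pure condition-action rule lookup

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("B", "Clean")))  # Left
```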
Simple reflex agents have the admirable property of being simple, but they turn out to be of limited intelligence. The agent will work only if the correct decision can be made on the basis of the current percept alone, that is, only if the environment is fully observable. Even a little bit of unobservability can cause serious trouble. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. Escape from infinite loops is possible if the agent can randomize its actions: a randomized simple reflex agent might outperform a deterministic simple reflex agent.
Randomized behavior of the right kind can be rational in some
multiagent environments. In single-agent environments, randomization is usually not
rational. It is a useful trick that helps a simple reflex agent in some situations, but in
most cases we can do much better with more sophisticated deterministic agents.
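As a sketch of the randomization trick mentioned above: if the percept no longer includes the agent's location, a fixed rule for the "Clean" percept would loop forever, while a coin flip eventually moves the agent to the other square. The percept encoding here is an illustrative assumption.

```python
# Sketch of a randomized simple reflex agent for a vacuum world whose
# percept omits the location (partial observability). Choosing Left
# or Right at random escapes the infinite loop that any fixed rule
# would produce. The encoding is an illustrative assumption.

import random

def randomized_reflex_agent(percept):
    if percept == "Dirty":
        return "Suck"
    return random.choice(["Left", "Right"])

print(randomized_reflex_agent("Clean"))  # Left or Right, at random
```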
MODEL-BASED REFLEX AGENTS
The most effective way to handle partial observability is for the agent to keep track of the part of the world it cannot see now.
[Figure: schematic of a model-based reflex agent. Sensors and an internal model combine to estimate "what the world is like now"; from this the agent decides "what action I should do now" and sends it to the actuators.]
The interesting part is how the internal state is updated, that is, how the new internal state description is created. The details of how models and states are represented vary widely depending on the type of environment and the particular technology used in the agent design.
Regardless of the kind of representation used, it is seldom
possible for the agent to determine the current state of a partially observable
environment exactly. Instead, the box labeled "what the world is like now" represents
the agent's "best guess" (or sometimes best guesses).
A perhaps less obvious point about the internal "state" maintained by a model-based agent is that it does not have to describe "what the world is like now" in a literal sense; it can, for example, describe what the world would be like if a certain action were taken.

GOAL-BASED AGENTS
Knowing the current state of the environment is not always enough to decide what to do. As well as a current state description, the agent needs some sort of goal information that describes situations that are desirable. The agent program can combine this with the model (the same information as was used in the model-based reflex agent) to choose actions that achieve the goal.
[Figure: schematic of a goal-based agent. Sensors feed "what the world is like now"; a model of "what my actions do" and the goals are combined to choose the action sent to the actuators.]
Sometimes goal-based action selection is straightforward, for example, when goal satisfaction results immediately from a single action. Sometimes it will be more tricky, for example, when the agent has to consider long sequences of twists and turns in order to find a way to achieve the goal.
Notice that decision making of this kind is fundamentally different from the condition-action rules described earlier, in that it involves consideration of the future, both "What will happen if I do such-and-such?" and "Will that make me happy?" In the reflex agent designs, this information is not explicitly represented, because the built-in rules map directly from percepts to actions.
Although the goal-based agent appears less efficient, it is more flexible because the
knowledge that supports its decisions is represented explicitly and can be modified.
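A minimal sketch of goal-based action selection using one-step lookahead; the world model, goal test, and fallback are illustrative assumptions, and a real goal-based agent may need to search over long action sequences.

```python
# Sketch of a goal-based agent: it predicts the result of each
# available action with its model and picks one whose predicted
# outcome satisfies the goal. All encodings are illustrative.

ACTIONS = ["Left", "Right", "Suck"]

def predict(state, action):
    """Model of 'what my actions do' in a two-square vacuum world."""
    location, squares = state
    squares = dict(squares)
    if action == "Suck":
        squares[location] = "Clean"
    else:
        location = "B" if action == "Right" else "A"
    return (location, squares)

def goal_test(state):
    _, squares = state
    return all(v == "Clean" for v in squares.values())

def goal_based_agent(state):
    for action in ACTIONS:
        if goal_test(predict(state, action)):
            return action
    return ACTIONS[0]  # fallback; a real agent would search deeper

print(goal_based_agent(("A", {"A": "Dirty", "B": "Clean"})))  # Suck
```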
UTILITY-BASED AGENTS
Goals alone distinguish only between desirable and undesirable states; a utility function additionally grades how desirable each state is. The utility-based agent structure appears in the following figure.
[Figure: schematic of a utility-based agent. Sensors update the internal state; a model of "what my actions do" and a utility function over the predicted states are used to choose the action.]
It is true that such agents would be intelligent, but building them is not simple. A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning. Choosing the utility-maximizing course of action is also a difficult task, requiring ingenious algorithms. Even with these algorithms, perfect rationality is usually unachievable in practice because of computational complexity.
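A minimal sketch of utility-maximizing action selection; the utility function (a count of clean squares) and the one-step horizon are illustrative assumptions.

```python
# Sketch of a utility-based agent: predict each action's outcome and
# choose the action whose predicted state has the highest utility.
# The utility function and one-step horizon are illustrative.

def predict(state, action):
    """Simple model of a two-square vacuum world."""
    location, squares = state
    squares = dict(squares)
    if action == "Suck":
        squares[location] = "Clean"
    else:
        location = "B" if action == "Right" else "A"
    return (location, squares)

def utility(state):
    _, squares = state
    return sum(1 for v in squares.values() if v == "Clean")

def utility_based_agent(state, actions=("Left", "Right", "Suck")):
    return max(actions, key=lambda a: utility(predict(state, a)))

print(utility_based_agent(("A", {"A": "Dirty", "B": "Clean"})))  # Suck
```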
REFERENCES:
1. Russell, S. & Norvig, P., Artificial Intelligence: A Modern Approach, 3rd edition, Chapters 1-2.