An Assignment
Presented in Partial Fulfillment
of the Requirements for the Course
CPTR451: ARTIFICIAL INTELLIGENCE
By
Jozeene Springer
21st December 2012
Introduction
The modern definition of artificial intelligence (AI) is "the study and design of intelligent
agents," where an intelligent agent is a system that perceives its environment and takes actions
that maximize its chances of success. It is one of the newest sciences, with work beginning soon
after World War II; the name was coined in 1956 by John McCarthy. Artificial intelligence has
become one of the more popular fields of scientific study, along with molecular biology. AI
currently encompasses a huge variety of subfields, ranging from general-purpose areas, such as
learning and perception, to specific tasks such as playing chess, proving mathematical theorems,
writing poetry, and diagnosing diseases. It systematizes and automates intellectual tasks and is
therefore potentially relevant to any sphere of human intellectual activity. In this sense, AI is
truly a universal field. It is concerned not only with thought processes and reasoning, but also
with behavior. The success of artificial intelligence can be measured in terms of fidelity to human
performance, or against an ideal concept of intelligence, usually defined as rationality.
A system is said to be rational if it does the "right thing," given what it knows. The Turing Test,
proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of
intelligence: a computer passes the test if a human interrogator, after posing some written
questions, cannot tell whether the written responses come from a person or not (Russell and
Norvig, 2003).
Artificially intelligent systems operate in a particular environment, using a particular system of
rationality. There must be an entity that can gather environmental information and act on this
information in the form of tasks. This entity acts as an agent. Furthermore, because solutions are
carried out in the form of tasks, there must be a way for the agent to arrive at a solution on
its own. This is usually done by problem solving through searching, where the environmental
conditions are considered (searched) and a decision is made. This boils down to knowledge
(a collection of information) and reasoning (problem solving) (Bernstein and Curtis, 2009).
Intelligent Agents
One of the basic components of any intelligent system is an agent. According to Russell and
Norvig (2003), an agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators to achieve a desired goal. We have
established intelligence as rational action derived from reasoning. Therefore it follows that an
agent must simply have knowledge, i.e. it must know things. As human agents, we have eyes,
ears, and other sensory organs that act as sensors, as well as our limbs as actuators. Our robotic
equivalent could then have cameras and infrared range finders as sensors, and various motors as
actuators. Software agents, such as those that currently pervade our everyday lives, receive data
(percepts) as sensory input and act on their environment by giving relevant output. Most agents
can be aware of their own actions, but are not always aware of the effects that those actions
bring.
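The percept-to-action mapping described above can be sketched as a small program; the class names and the two-cell vacuum world here are illustrative assumptions, not from the text:

```python
# Minimal sketch of an agent: percepts in (sensors), actions out (actuators).
# All names (ReflexVacuumAgent, the cell labels "A"/"B") are illustrative.

class Agent:
    """An agent maps percepts (sensor input) to actions (actuator output)."""
    def program(self, percept):
        raise NotImplementedError

class ReflexVacuumAgent(Agent):
    """Acts only on the current percept: a (location, status) pair."""
    def program(self, percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

agent = ReflexVacuumAgent()
print(agent.program(("A", "Dirty")))  # -> Suck
print(agent.program(("A", "Clean")))  # -> Right
```

The agent here never stores state; it reacts only to the current percept, which is the simplest possible program structure.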
Rational Agents
What is the right thing? A rational agent does the right thing, that is to say, it aims to be as
successful as possible. This success is measured using a performance measure. This measure is
relative to each environment as well as to each agent. It must however be objective. This means
that those outside the environment must establish a standard of what it means to be successful
and use this as the performance measure. Russell and Norvig (2003) gives an example of a dirtremoving agent, where a reasonable measure would be the amount of dirt cleaned in an 8-hour
time period. Or on a more complicated level, the amount of electricity consumed, and the level
of noise produced. We must also be able to factor in small details; such as if the agent would
create a mess, just to clean it up. It is also important to consider how often the agent is acting.
This allows us to measure reliability and consistency within that environment.
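One way to make the dirt-removing example concrete is a scoring function over a shift; the weights and argument names below are illustrative assumptions, not from any source:

```python
# Hedged sketch of a performance measure for the dirt-removing agent
# discussed above. Weights and field names are illustrative assumptions.

def performance(dirt_cleaned, electricity_used, noise_level, mess_created):
    """Score an 8-hour shift: reward dirt removed; penalize electricity,
    noise, and any mess the agent itself created."""
    return (10 * dirt_cleaned
            - 1 * electricity_used
            - 2 * noise_level
            - 10 * mess_created)

# An agent that makes a mess just to re-clean it should not score higher:
honest = performance(dirt_cleaned=8, electricity_used=4, noise_level=1, mess_created=0)
cheat = performance(dirt_cleaned=12, electricity_used=6, noise_level=1, mess_created=4)
print(honest, cheat)  # -> 74 72
```

The heavy penalty on `mess_created` is the design choice that closes the loophole the text mentions: self-created dirt is not free performance.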
In artificial intelligence, an omniscient agent is one that knows the precise outcome of its
actions and is able to act accordingly. Omniscience is not a measure of rationality, since
rationality is concerned with expected success given what has been perceived. Thus an agent
that does not have a certain piece of knowledge cannot act on that knowledge.
Autonomy refers to the ability of an agent to act on its percepts beyond the built-in knowledge
used in constructing it for the particular environment in which it operates. Its behavior is
determined by its own experience. It is nevertheless advisable that an agent be provided with initial
knowledge as well as the ability to learn. In fact, an artificially intelligent agent lacks flexibility if
it operates solely on the basis of built-in assumptions, as it can then succeed only when those
assumptions hold true. An autonomous agent should be able to adapt and operate
successfully in various environments.
Agent Programs
The main objective of AI is to design agent programs: functions that implement an agent
mapping from percepts to actions, assuming that the program will run on some sort of computing
architecture. The architecture can range from a plain computer, to special-purpose hardware. It is
used to make the percepts available to the program, run the program, and feed its action choices
to the effectors. It can be said that an agent is essentially a program running on an
architecture. Designing that program requires knowing the possible percepts and actions, the
goals or performance measure to be achieved, and the surrounding environment.
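One simple (if impractical) agent program is a table that maps each percept sequence so far to an action; the table entries and names below are illustrative assumptions:

```python
# Sketch of an agent program as a mapping from percept sequences to actions,
# to be run on some architecture that feeds it percepts and carries out the
# returned actions. The table contents are illustrative assumptions.

def make_table_driven_agent(table):
    percepts = []                      # internal memory: the percept sequence
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")
    return program

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # -> Right
print(agent(("B", "Dirty")))  # -> Suck
```

The table grows exponentially with the length of the percept sequence, which is why practical agent programs compute actions rather than look them up.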
Environments come in different varieties, with five principal distinctions.
Accessible vs. inaccessible: an environment is accessible if the agent's sensors can detect all
aspects of the environment that are relevant to the choice of action. This is convenient because
the agent does not need to maintain an internal state to keep track of its universe.
Deterministic vs. nondeterministic: in a deterministic environment, the next state is completely
determined by the current state and the actions selected by the agent.
Episodic vs. non-episodic: here the agent's experience is divided into episodes, each consisting
of the agent perceiving and then acting. Subsequent episodes do not depend on the actions taken
in previous episodes, so the agent does not need to think ahead.
Static vs. dynamic: a dynamic environment can change while the agent is deliberating, so the
agent must keep looking at the world while it is deciding on an action. If the environment does
not change with time but the agent's performance score does, the environment is semi-dynamic.
Discrete vs. continuous: a discrete environment has a limited number of distinct, clearly defined
percepts and actions, whereas a continuous one has percepts and actions ranging over
continuous values.
Different environment types require somewhat different agent programs to deal with them
effectively. The hardest environment to deal with is one that is inaccessible, non-episodic,
dynamic, and continuous (Russell and Norvig, 2003).
A single-state problem formulation has the following components:
Initial state - the state the agent starts in
Actions/operators, or a successor function - for any state x, returns s(x), the set of states
reachable from x with one action
Goal test - determines whether a given state is a goal state
Path cost - a function that assigns a cost to a path; the cost of a path is the sum of the costs of
the individual actions along the path
The state space - all states reachable from the initial state by any sequence of actions - and the
solution path are implicit in this formulation rather than enumerated explicitly.
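The components above can be sketched as a small problem class; the toy state graph and names are illustrative assumptions:

```python
# Sketch of a single-state problem: initial state, successor function s(x),
# goal test, and additive path cost. The toy graph is an illustrative
# assumption; the state space is implicit in the successor function.

class Problem:
    def __init__(self, initial, successors, goal):
        self.initial = initial
        self.successors = successors   # maps x to s(x), one action away
        self.goal = goal

    def s(self, x):
        return self.successors.get(x, set())

    def goal_test(self, x):
        return x == self.goal

    def path_cost(self, path, step_cost=1):
        # additive: cost of a path = sum of its individual action costs
        return step_cost * (len(path) - 1)

graph = {"A": {"B", "C"}, "B": {"D"}, "C": {"D"}, "D": set()}
p = Problem("A", graph, "D")
print(sorted(p.s("A")))              # -> ['B', 'C']
print(p.path_cost(["A", "B", "D"]))  # -> 2
```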
Problem-Solving Agents:
Goals help organize behavior by limiting the objectives that the agent is trying to achieve and
hence the actions it needs to consider. Goal formulation (adopting a goal), based on the current
state and the agent's performance measure, is usually the first step in problem solving.
The agent's task is to decide how to act, now and in the future, so that it reaches a goal state.
This requires deciding (or having the programmer decide on its behalf) what set of actions and
states it should consider. Problem formulation is the process of deciding which actions and
states to consider, given a goal.
While executing the solution sequence, an agent can ignore its percepts when choosing an
action, because it knows in advance what the percepts will be. An agent that carries out its plans
blindfolded, so to speak, must be quite certain of what is going on. This is called an open-loop
system, because ignoring the percepts breaks the loop between agent and environment.
Solutions:
Problem formulation follows goal formulation. It usually requires abstracting away or
generalizing real-world details to define a state space that can feasibly be explored. Search
algorithms consider various possible action sequences, which form a search tree in which the
branches signify actions and the nodes signify states.
Search Strategies:
Search strategies are defined by picking the order of node expansion. They are evaluated along
four dimensions: completeness, time complexity, space complexity, and optimality.
Uninformed search uses only the information available in the problem definition. Examples
include breadth-first search, uniform-cost search, and depth-first search (also with a depth limit).
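As a sketch of one uninformed strategy, here is breadth-first search over a toy graph; the graph itself is an illustrative assumption:

```python
from collections import deque

# Breadth-first search: expands the shallowest unexpanded node first, using
# only the problem definition (graph, start, goal), with no heuristic.

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])        # FIFO queue of paths
    explored = set()
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for neighbor in graph.get(node, []):
            frontier.append(path + [neighbor])
    return None                        # no path exists

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}
print(breadth_first_search(graph, "S", "G"))  # -> ['S', 'A', 'G']
```

Because it expands level by level, breadth-first search is complete and returns a shallowest solution, at the cost of memory proportional to the frontier size.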
In cases where the agent's percepts do not suffice to determine the exact state, partial
observability occurs. If the agent may be in one of several possible states, then an action may
lead to one of several possible outcomes, even if the environment is deterministic. A belief state,
representing the agent's current belief about the possible physical states, is required for solving
partially observable problems.
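A minimal sketch of belief-state prediction, assuming a toy two-cell vacuum world: applying an action to a belief state yields the union of the possible outcomes. All states and transitions here are illustrative assumptions.

```python
# Belief-state update under partial observability: the agent tracks a SET of
# states it might be in, and an action maps that set to the union of its
# possible outcomes. The two-cell world below is an illustrative assumption.

def predict(belief, action, transitions):
    """Apply `action` to every state the agent might be in."""
    return set().union(*(transitions[(s, action)] for s in belief))

# A sensorless agent could be in either cell, both dirty:
transitions = {
    (("A", "Dirty"), "Suck"): {("A", "Clean")},
    (("B", "Dirty"), "Suck"): {("B", "Clean")},
}
belief = {("A", "Dirty"), ("B", "Dirty")}
belief = predict(belief, "Suck", transitions)
print(sorted(belief))  # both possibilities are now clean
```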
Informed search uses problem-specific knowledge beyond the definition of the problem itself. It
is generally more efficient than the uninformed approach, and may have access to a heuristic
function, h(n), that estimates the cost of a solution.
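A common informed strategy built on such an h(n) is A* search, which expands nodes in order of g(n) + h(n), the cost so far plus the estimated cost to go. The graph and heuristic values below are illustrative assumptions, with h chosen not to overestimate:

```python
import heapq

# A* search sketch: the frontier is ordered by f(n) = g(n) + h(n), where
# g(n) is the path cost so far and h(n) estimates the remaining cost.
# Graph edges and heuristic values are illustrative assumptions.

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, [start])]          # entries: (f, g, path)
    best_g = {}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g
        if best_g.get(node, float("inf")) <= g:  # already reached cheaper
            continue
        best_g[node] = g
        for neighbor, cost in graph.get(node, []):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[neighbor], g2, path + [neighbor]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))  # -> (['S', 'B', 'G'], 5)
```

Note how the heuristic steers the search away from the tempting cheap first step S-A (total cost 6) toward the genuinely cheaper route S-B-G (cost 5).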
Search Algorithms:
Various search algorithms exist for problems beyond finding the shortest path to a goal in an
observable, deterministic, discrete environment.
Local search methods such as hill climbing operate on complete-state formulations, keeping
only a small number of nodes in memory. Several stochastic algorithms have been developed,
including simulated annealing, which returns optimal solutions when given an appropriate
cooling schedule. Many local search methods apply also to problems in continuous spaces.
Linear programming and convex optimization problems obey certain restrictions on the shape of
the state space and the nature of the objective function, and admit polynomial-time algorithms
that are often extremely efficient in practice. A genetic algorithm is a stochastic hill-climbing
search in which a large population of states is maintained; new states are generated by mutation
and by crossover, which combines pairs of states from the population.
In nondeterministic environments, agents can apply AND-OR search to generate contingent
plans that reach the goal regardless of which outcomes occur during execution. When the
environment is partially observable, the agent can apply search algorithms in the space of belief
states, i.e. sets of possible states that the agent might be in. Incremental algorithms that construct
solutions state-by-state within a belief state are often more efficient. Sensorless problems can be
solved by applying standard search methods to a belief-state formulation of the problem, and the
more general partially observable case by belief-state AND-OR search.
Exploration problems arise when the agent has no idea about the states and actions of its
environment. For safely explorable environments, online search agents can build a map and find
a goal if one exists. Updating heuristic estimates from experience provides an effective method
of escaping from local minima (www-g.eng.cam.ac.uk, n.d.).
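As a sketch of the local-search idea, here is hill climbing on a simple one-dimensional objective; the function and the step-by-one neighborhood are illustrative assumptions:

```python
# Hill climbing sketch: keep a single complete state and repeatedly move to
# the best neighbor, stopping at a local maximum. The objective
# f(x) = -(x - 3)^2 and the +/-1 neighborhood are illustrative assumptions.

def hill_climb(x, f, steps=100):
    for _ in range(steps):
        neighbors = [x - 1, x + 1]
        best = max(neighbors, key=f)
        if f(best) <= f(x):           # no uphill neighbor: local maximum
            return x
        x = best
    return x

f = lambda x: -(x - 3) ** 2
print(hill_climb(0, f))  # -> 3
```

On this single-peaked objective hill climbing finds the global maximum; on objectives with multiple peaks it can get stuck, which is exactly what stochastic methods like simulated annealing are designed to mitigate.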
First-Order Logic:
First-Order Logic provides a flexible and compact yet sufficiently expressive representation of
knowledge. Its syntax is as follows (Bernstein and Curtis, 2009):
Sentence → AtomicSentence
         | Sentence Connective Sentence
         | Quantifier Variable, ... Sentence
         | ¬Sentence
         | (Sentence)
AtomicSentence → Predicate(Term, ...) | Term = Term
Term → Function(Term, ...) | Constant | Variable
Connective → ⇒ | ∧ | ∨ | ⇔
Quantifier → ∀ | ∃
Variable → x | y | z | ...
Constant → A | X1 | John | ...
There are three main inference algorithms for first-order logic: forward chaining, backward
chaining, and resolution. Forward chaining starts with the known atomic sentences and applies
Modus Ponens in the forward direction, adding new atomic sentences until no more inferences
can be made. Backward chaining starts from the goal: rules are worked through in chains to find
known facts that support the proof in question. It is used in logic programming, which is one of
the most widely used forms of automated reasoning. Resolution allows a clause to be added to a
set of clauses constituting a derivation; if the set is contradictory, resolution may
result in a refutation (Russell and Norvig, 2003).
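Forward chaining as just described can be sketched over definite clauses, shown propositionally for brevity; the rule base is an illustrative assumption:

```python
# Forward chaining sketch: start from known atomic sentences, repeatedly
# apply Modus Ponens to any rule whose premises are all known, and stop
# when no new facts can be inferred. Rules are illustrative assumptions,
# written propositionally for brevity (full FOL would add unification).

def forward_chain(rules, facts, query):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)      # Modus Ponens step
                changed = True
    return query in facts

rules = [
    (["P"], "Q"),
    (["Q", "L"], "M"),
    (["A", "B"], "L"),
]
print(forward_chain(rules, {"A", "B", "P"}, "M"))  # -> True
```

Note the run is data-driven: facts A, B, and P first license L and Q, which together then license M, with no reference to the query until the end.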
The knowledge-engineering process involves:
Identifying the task (which determines the knowledge that must be represented)
Encoding general knowledge of the domain (axioms for all vocabulary terms are
constructed)
Posing queries to the inference procedure and getting answers (testing the inference procedure)
Actions, events, and time can be represented either by situation calculus or by more flexible
representations such as event calculus and fluent calculus. Such representations enable an agent
to construct plans by logical inference. The mental states of agents can be represented by strings
that denote beliefs. Special-purpose representation systems, such as semantic networks and
description logics, help in organizing a hierarchy of categories. Inheritance is an important
form of inference, because it allows the properties of objects to be deduced from their respective
categories (Russell and Norvig, 2003).
Conclusion
Here we have seen the ins and outs of artificial intelligence, the basis of which involves solving
problems through the use of knowledge and reason applied to a non-human agent within a
particular environment. Artificial intelligence is a field that is continually growing due to
necessity and new technology. It cannot be said that we now have all the answers. In fact, John
McCarthy himself stated: "It's difficult to be rigorous about whether a machine really 'knows',
'thinks', etc., because we're hard put to define these things. We understand human mental
processes only slightly better than a fish understands swimming." Until we can understand
ourselves more, we are going to engineer intelligent systems with limited capabilities; systems
that are only as smart as we can perceive them to be.
REFERENCES
Bernstein, R. B., & Curtis, W. N. (2009). Artificial intelligence: New research. Nova Science
Publishers.
Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach. Upper Saddle
River, NJ: Prentice Hall.
www-g.eng.cam.ac.uk (n.d.). AI - Problem Solving and Search. [online] Retrieved from:
http://www-g.eng.cam.ac.uk/mmg/teaching/artificialintelligence/nonflash/problemframenf.htm
[Accessed: 20 Dec 2012].