
UNIVERSITY OF THE SOUTHERN CARIBBEAN

MARACAS ROYAL ROAD, ST. JOSEPH


P.O. BOX 175, PORT OF SPAIN.

RESEARCH PAPER: ARTIFICIAL INTELLIGENCE

An Assignment
Presented in Partial Fulfillment
of the Requirements for the Course
CPTR451: ARTIFICIAL INTELLIGENCE

INSTRUCTOR: Mr. George Mubita

By
Jozeene Springer
21st December 2012


Introduction
The modern definition of artificial intelligence (or AI) is "the study and design of intelligent
agents" where an intelligent agent is a system that perceives its environment and takes actions
which maximize its chances of success. It is one of the newest sciences, having begun soon after
World War II, and its name was coined in 1956 by John McCarthy. Artificial intelligence has
become one of the more popular fields of scientific study, alongside molecular biology. AI
currently encompasses a huge variety of subfields, ranging from general-purpose areas, such as
learning and perception to such specific tasks as playing chess, proving mathematical theorems,
writing poetry, and diagnosing diseases. It systematizes and automates intellectual tasks and is
therefore potentially relevant to any sphere of human intellectual activity. In this sense, AI is
truly a universal field. It is concerned not only with thought processes and reasoning, but also
behavior. The success of artificial intelligence can be measured in terms of fidelity to human
performance, or against an ideal concept of intelligence, which is usually defined as rationality.
A system is said to be rational if it does the "right thing," given what it knows. The Turing Test,
proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of
intelligence. In it, the computer passes the test if a human interrogator, after posing some written
questions, cannot tell whether the written responses come from a person or from a computer
(Russell and Norvig, 2003).
Artificially Intelligent systems operate in a particular environment, using a particular system of
rationality. There must be an entity which can gather environmental information and act on this
information in the form of tasks. This entity acts as an agent. Furthermore, because solutions are
carried out in the form of tasks, there must be a way for the agent to arrive at a solution on
its own. This is usually done by problem solving through searching, where the environmental
conditions are considered (searched) and a decision is made. This boils down to knowledge
(a collection of information) and reasoning (problem solving) (Bernstein and Curtis, 2009).

Intelligent Agents
One of the basic components of any intelligent system is an agent. According to Russell and
Norvig (2003), an agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators to achieve a desired goal. We have
established intelligence as rational action derived from reasoning. It therefore follows that an
agent must have knowledge, i.e., it must know things. As human agents, we have eyes,
ears, and other sensory organs that act as sensors, as well as our limbs for actuators. Our robotic
equivalent could then have cameras and infrared range finders as sensors, and various motors as
actuators. Software agents, such as those that now pervade our everyday lives, receive data
(percepts) as sensory input and act on their environment by giving relevant output. Most agents
can be aware of their own actions, but they are not always aware of the effects those actions
bring.

Rational Agents
What is the right thing? A rational agent does the right thing, that is to say, it aims to be as
successful as possible. This success is measured using a performance measure. This measure is
relative to each environment as well as to each agent. It must however be objective. This means
that those outside the environment must establish a standard of what it means to be successful
and use this as the performance measure. Russell and Norvig (2003) give the example of a
dirt-removing agent, where a reasonable measure would be the amount of dirt cleaned in an 8-hour
time period, or, at a more complicated level, the amount of electricity consumed and the level
of noise produced. We must also be able to factor in small details, such as whether the agent
would create a mess just to clean it up. It is also important to consider how often the agent is
acting; this allows us to measure reliability and consistency within that environment.
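To make this concrete, the following sketch scores a hypothetical dirt-removing agent of the
kind described above; the weighting of the electricity and noise penalties is an illustrative
assumption, not a prescribed formula.

def performance_measure(dirt_cleaned, electricity_used, noise_made,
                        elec_weight=0.5, noise_weight=0.2):
    """Score a hypothetical cleaning agent over an 8-hour shift.

    Rewards dirt removed and penalizes electricity consumed and noise
    produced; the weights are illustrative assumptions.
    """
    return dirt_cleaned - elec_weight * electricity_used - noise_weight * noise_made

# Example: an agent that cleans 40 units of dirt, using 10 units of
# electricity and making 5 units of noise, scores 40 - 5 - 1 = 34.
print(performance_measure(40, 10, 5))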
In artificial intelligence, an omniscient agent is one that knows the precise outcome of its
actions and is able to act accordingly. This is not necessarily a measure of rationality, since
rationality is concerned more with expected success within a given perspective. Thus an agent
that does not have a certain piece of knowledge cannot act on that knowledge.

Autonomy refers to the ability of an agent to act on its percepts beyond built-in knowledge
used in constructing that agent for the particular environment in which it operates. Its behavior is
determined by its own experience. However, it is advisable that an agent be provided with initial
knowledge as well as the ability to learn. In fact, an artificially intelligent agent lacks flexibility if
it operates solely on the basis of built-in assumptions, as it can operate successfully only if these
assumptions hold true. An agent that is autonomous should be able to adapt and operate
successfully in various environments.

Agent Programs and Environments
The main objective of AI is to design agent programs: functions that implement an agent
mapping from percepts to actions, assuming that the program will run on some sort of computing
architecture. The architecture can range from a plain computer, to special-purpose hardware. It is
used to make the percepts available to the program, run the program, and feed its action choices
to the effectors. It can be said that agents are basically composed of a program running on
hardware. A program is built upon percepts and actions; the goals or performance measure to be
achieved; and the surrounding environment.
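As a minimal sketch of this idea, the following agent program maps a percept sequence to an
action through a lookup table; the class interface and the table contents are illustrative
assumptions, not a prescribed design.

class TableDrivenAgent:
    """A minimal agent program: a mapping from percept sequences to actions.

    The lookup table is an illustrative stand-in for whatever
    percept-to-action mapping a real agent would implement.
    """

    def __init__(self, table):
        self.table = table      # maps percept tuples to actions
        self.percepts = []      # the percept sequence so far

    def program(self, percept):
        """Receive a percept from the sensors; return an action for the actuators."""
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "no-op")

# Example: a two-cell vacuum world, keyed on the first percept only.
agent = TableDrivenAgent({(("A", "dirty"),): "suck",
                          (("A", "clean"),): "move-right"})
print(agent.program(("A", "dirty")))  # -> "suck"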
Environments come in different varieties, with five principal distinctions:

Accessible vs. inaccessible: an environment is accessible if the agent's sensors give it access
to the complete state of the environment, detecting all aspects that are relevant to the choice of
action. This is convenient because the agent does not need to maintain an internal state to keep
track of its universe.

Deterministic vs. nondeterministic: in a deterministic environment, the next state is completely
determined by the current state together with the actions selected by the agent.

Episodic vs. non-episodic: here, the agent's experience is divided into episodes, each consisting
of the agent perceiving and then acting. Subsequent episodes do not depend on what actions occur
in previous episodes, so the agent does not need to think ahead.

Static vs. dynamic: a dynamic environment can change while an agent is deliberating, so the agent
must keep looking at the world while it is deciding on an action. If the environment does not
change with time but the agent's performance score does, the environment is semi-dynamic.

Discrete vs. continuous: a discrete environment has a limited number of distinct, clearly defined
percepts and actions, whereas a continuous environment has a range of continuous values.

Different environment types require somewhat different agent programs to deal with them
effectively. The hardest environment to deal with would be one that is inaccessible, non-episodic,
dynamic, and continuous (Russell and Norvig, 2003).
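These five distinctions can be recorded in a small data structure, as in the following sketch;
the class name, fields, and the taxi-driving example are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class EnvironmentProperties:
    """The five principal distinctions between environments (illustrative)."""
    accessible: bool      # do sensors reveal the complete relevant state?
    deterministic: bool   # does current state + action fix the next state?
    episodic: bool        # are episodes independent of one another?
    static: bool          # does the world stay fixed while deliberating?
    discrete: bool        # are percepts and actions limited and distinct?

    def is_hardest_case(self):
        """The hardest combination named in the text: inaccessible,
        non-episodic, dynamic, and continuous."""
        return not (self.accessible or self.episodic or self.static or self.discrete)

taxi_driving = EnvironmentProperties(False, False, False, False, False)
print(taxi_driving.is_hardest_case())  # -> True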

In the creation of an artificially intelligent system, an environment simulator is required. An
environment simulator receives one or more agents as input and arranges things so as to
repeatedly give each agent the right percepts and obtain an action in return. The simulator then
updates the environment based on the input actions, and sometimes on other dynamic processes
within the environment. An agent is typically designed to work in an environment class (an entire
set of different environments). To measure the performance of an agent, an environment generator
is needed that selects particular environments (with particular likelihoods) in which to run the
agent. Within the simulator, the agent's version of the state must be constructed from its
percepts alone, without access to the complete state information (Russell and Norvig, 2003).
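A skeletal version of such a simulator might look like the following sketch, assuming the
environment object exposes percept, update, and is_done operations (illustrative names, not a
fixed interface):

def run_simulation(environment, agents, max_steps=100):
    """Skeletal environment simulator: repeatedly hand each agent its
    percept, collect an action, and update the environment.

    `environment` is assumed to expose percept(agent), update(actions),
    and is_done() -- illustrative names, not a fixed API.
    """
    for _ in range(max_steps):
        if environment.is_done():
            break
        # Each agent sees only its own percept, never the full state.
        actions = [agent.program(environment.percept(agent)) for agent in agents]
        environment.update(actions)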

Problem Solving by Searching


An important aspect of intelligence is goal-based problem solving, which is pertinent for large
domain environments where explicit percept-to-action mappings would be too large to store and too
slow to learn. Goal-based agents are more successful because they are made to consider future
actions, as well as the desirability of those actions. These characteristics allow them to operate
in highly complex environments without much difficulty.
A solution is a sequence of actions leading from the initial state to a goal state. Many problems
can be solved by finding such a sequence: each action changes the state, and the aim is to find
the sequence of actions and states that leads from the initial (start) state to a final (goal)
state.
A well-defined problem can be described by:

Initial state

Actions/Operator or successor function - for any state x returns s(x), the set of states
reachable from x with one action

State space - all states reachable from the initial state by any sequence of actions

Path - sequence through state space

Path cost - function that assigns a cost to a path. Cost of a path is the sum of costs of
individual actions along the path

Goal test - a test to determine whether a given state is a goal state

A single-state problem formulation does not necessarily involve a state space or a path.
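These components translate directly into code. The following sketch of a problem class mirrors
the list above; the method names are illustrative assumptions.

class Problem:
    """A well-defined problem: initial state, successor function,
    goal test, and path cost (mirroring the components listed above)."""

    def __init__(self, initial_state):
        self.initial_state = initial_state

    def successors(self, state):
        """Return the (action, next_state) pairs reachable in one action."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if `state` is a goal state."""
        raise NotImplementedError

    def action_cost(self, state, action, next_state):
        """Cost of one action; a path's cost is the sum of these."""
        return 1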
Problem-Solving Agents:
Goals help organize behavior by limiting the objectives that the agent is trying to achieve and
hence the actions it needs to consider. Goal formulation (adopting a goal), based on the current
state and the agent's performance measure, is usually the first step in problem solving.
The agent's task is to work out how to act, now and in the future, so that it reaches a goal
state, by deciding (or having the programmer decide on its behalf) what set of actions and states
it should consider. Problem formulation is the process of deciding what actions and states to
consider, given a goal.

While executing the solution sequence, an agent can ignore its percepts when choosing an action,
because it knows in advance what the percepts will be. An agent that carries out its plans
blindfolded, so to speak, must be quite certain of what is going on. This is called an open-loop
system, because ignoring the percepts breaks the loop between the agent and the environment.
Solutions:
Problem formulation follows goal formulation. It usually requires abstracting away or generalising
real-world details to define a state space that can feasibly be explored. Search algorithms
consider various possible action sequences, which form a search tree in which the branches
signify actions and the nodes signify states.
Search Strategies:
Search strategies are defined by picking the order of node expansion. They are evaluated by four criteria:

Completeness: as long as a solution exists, it is found.

Time complexity: refers to the number of nodes generated

Space complexity: refers to the maximum number of nodes in memory

Optimality: the least cost solution is always found.

Uninformed Search: uses only the information available in the problem definition. Examples
include breadth-first, uniform-cost, and depth-first search (also with a depth limit).
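As a concrete instance of an uninformed strategy, here is a minimal sketch of breadth-first
search, written against the illustrative problem interface sketched earlier; it returns a list of
actions to a goal, or None if no solution exists.

from collections import deque

def breadth_first_search(problem):
    """Uninformed search: expand nodes in FIFO order, shallowest first.

    Returns a list of actions from the initial state to a goal,
    or None if the goal is unreachable.
    """
    frontier = deque([(problem.initial_state, [])])  # (state, actions so far)
    explored = {problem.initial_state}
    while frontier:
        state, path = frontier.popleft()
        if problem.goal_test(state):
            return path
        for action, next_state in problem.successors(state):
            if next_state not in explored:
                explored.add(next_state)
                frontier.append((next_state, path + [action]))
    return None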
In cases where the agent's percepts do not suffice to determine the exact state, partial
observability occurs. If the agent may be in one of several possible states, then an action may
lead to one of several possible outcomes, even if the environment is deterministic. A belief
state, representing the agent's current belief about the possible physical states, is required for
solving partially observable problems.
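In a deterministic but partially observable setting, updating a belief state amounts to applying
the action to every state the agent might be in, as in this small sketch (the transition function
and example world are illustrative assumptions):

def update_belief_state(belief_state, action, transition):
    """Predict the new belief state after `action` in a deterministic world.

    `belief_state` is a set of possible physical states; `transition`
    maps (state, action) to the resulting state (illustrative interface).
    """
    return {transition(state, action) for state in belief_state}

# Example: if the agent might be in cell 1 or cell 3, moving right
# leaves it believing it is in cell 2 or cell 4.
move_right = lambda state, action: state + 1
print(update_belief_state({1, 3}, "right", move_right))  # -> {2, 4}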
Informed Search: uses problem-specific knowledge beyond the definition of the problem. It is
more efficient than the uninformed approach, and may have access to a heuristic function, h(n),
that estimates the cost of the solution.
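A minimal sketch of an informed strategy is greedy best-first search, which always expands the
node with the smallest h(n); adding the path cost g(n) to the priority would turn it into A*
search. It reuses the illustrative problem interface from earlier.

import heapq
from itertools import count

def greedy_best_first_search(problem, h):
    """Informed search: always expand the node with the smallest h(n)."""
    tie = count()  # tie-breaker so states never need to be comparable
    frontier = [(h(problem.initial_state), next(tie), problem.initial_state, [])]
    explored = set()
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for action, next_state in problem.successors(state):
            if next_state not in explored:
                heapq.heappush(frontier,
                               (h(next_state), next(tie), next_state, path + [action]))
    return None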
Search Algorithms:
Beyond finding the shortest path to a goal in an observable, deterministic, discrete environment,
a variety of other search methods exist. Local search methods such as hill climbing operate on
complete-state formulations, keeping only a small number of nodes in memory. Several stochastic
algorithms have been developed, including simulated annealing, which returns optimal solutions
when given an appropriate cooling schedule. Many local search methods also apply to problems in
continuous spaces. Linear programming and convex optimization problems obey certain restrictions
on the shape of the state space and the nature of the objective function, and admit
polynomial-time algorithms that are often extremely efficient in practice. A genetic algorithm is
a stochastic hill-climbing search in which a large population of states is maintained; new states
are generated by mutation and by crossover, which combines pairs of states from the population.
In nondeterministic environments, agents can apply AND-OR search to generate contingent plans
that reach the goal regardless of which outcomes occur during execution. When the environment is
partially observable, the agent can apply search algorithms in the space of belief states, or sets
of possible states that the agent might be in. Incremental algorithms that construct solutions
state by state within a belief state are often more efficient. Sensorless problems can be solved
by applying standard search methods to a belief-state formulation of the problem, and the more
general partially observable case can be solved by belief-state AND-OR search. Exploration
problems arise when the agent has no idea about the states and actions of its environment. For
safely explorable environments, online search agents can build a map and find a goal if one
exists; updating heuristic estimates from experience provides an effective method of escaping
from local minima (www-g.eng.cam.ac.uk, n.d.).
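As an illustration of these local search methods, the following sketch implements simple hill
climbing; the neighbors and value functions are illustrative assumptions about how states are
generated and scored.

def hill_climbing(state, neighbors, value):
    """Local search: repeatedly move to the best neighboring state,
    stopping at a local maximum. Keeps only the current state in
    memory; `neighbors` and `value` are illustrative callables.
    """
    while True:
        candidates = neighbors(state)
        if not candidates:
            return state
        best = max(candidates, key=value)
        if value(best) <= value(state):
            return state  # local maximum reached
        state = best

# Example: maximize -(x - 3)^2 over the integers, stepping by 1.
result = hill_climbing(0,
                       neighbors=lambda x: [x - 1, x + 1],
                       value=lambda x: -(x - 3) ** 2)
print(result)  # -> 3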

Knowledge and Reasoning


Knowledge Representation (KR) and Reasoning is a subarea of Artificial Intelligence
concerned with understanding, designing, and implementing ways of representing information in
computers, and using that information to derive new information. KR is more concerned with
belief than with knowledge. Reasoning deals with deriving information that is implied by the
information already present. Knowledge representation schemes are useless without the ability to
reason with them. Given that an agent (human or computer) has certain beliefs, it must determine
what else is reasonable for it to believe, and how it is reasonable for it to act, regardless of
whether those beliefs are true and justified (Russell and Norvig, 2003). Knowledge-based
agents are required to have a knowledge base of justified beliefs, a way of putting new
beliefs into that knowledge base, and a reasoning mechanism to derive new beliefs
from those already in the knowledge base.
Logic is the study of correct reasoning. It must have syntax (rules for constructing proper
expressions); semantics (meaning of symbols and rules for determining expression meaning);
and proof theory (the rules for determining theorems). One type of logic, propositional logic,
operates within a single domain and does not analyze information below the level of the
proposition (an expression that is either true or false). There are two types of propositional
logic agents: generic knowledge-based agents and circuit-based agents.
Reasoning patterns are standard patterns of inference (inference rules) that are applied to
a problem to derive chains of conclusions leading to a desired goal. Two such reasoning patterns
in propositional logic are Modus Ponens, whereby, given an implication and its antecedent, the
consequent is inferred; and And-Elimination, whereby, given a conjunction, any of its conjuncts
can be inferred. This relates to the property of monotonicity, which states that the set of
entailed sentences can only increase as information is added to the knowledge base; therefore,
inference rules can be applied whenever suitable premises are found in the knowledge base
(Bernstein and Curtis, 2009).
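These two inference rules can be sketched over string-encoded propositions, with the knowledge
base as a set that only grows (illustrating monotonicity); the representation is an illustrative
assumption.

def modus_ponens(implication, fact, kb):
    """If `implication` is (premise, conclusion) and `fact` matches the
    premise and is in the knowledge base, add the conclusion."""
    premise, conclusion = implication
    if fact == premise and fact in kb:
        kb.add(conclusion)

def and_elimination(conjunction, kb):
    """Given a conjunction (a tuple of conjuncts), infer each conjunct."""
    for conjunct in conjunction:
        kb.add(conjunct)

# Example: from "rain" and "rain => wet-grass", infer "wet-grass".
kb = {"rain"}
modus_ponens(("rain", "wet-grass"), "rain", kb)
print(kb)  # -> {'rain', 'wet-grass'}; the set only ever grows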
First Order Logic has an ontology of objects (terms), properties (unary predicates on
terms), relations (n-ary predicates on terms), and functions (mappings from terms to terms). It is

a flexible and compact yet sufficiently expressive representation of knowledge. The syntax of
first-order logic is as follows (Bernstein and Curtis, 2009):
Sentence → AtomicSentence
         | Sentence Connective Sentence
         | Quantifier Variable Sentence
         | ¬ Sentence
         | (Sentence)

AtomicSentence → Predicate(Term, Term, ...) | Term = Term

Term → Function(Term, Term, ...) | Constant | Variable

Connective → ⇒ | ∧ | ∨ | ⇔

Quantifier → ∀ | ∃

Constant → A | John | Car1 | ...

Variable → x | y | z | ...

Predicate → Brother | Owns | ...

Function → father-of | plus | ...

There are three main first-order inference algorithms, namely forward chaining, backward
chaining, and resolution. Forward chaining starts with atomic sentences and applies Modus Ponens
in the forward direction, adding new atomic sentences until no more inferences can be made.
Backward chaining starts from the goal; rules are worked through in chains to find
known facts that support the proof in question. It is used in logic programming, one of
the most widely used forms of automated reasoning. Resolution allows new clauses to be derived
from a set of clauses that constitutes a derivation; if the set is contradictory, resolution may
result in a refutation (Russell and Norvig, 2003).
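A minimal propositional sketch of forward chaining follows; representing rules as (premises,
conclusion) pairs over ground atoms is an illustrative simplification of the first-order case.

def forward_chaining(facts, rules):
    """Keep applying rules whose premises are all known until no new
    atomic sentences can be inferred.

    `facts` is a set of known atoms; `rules` is a list of
    (premises, conclusion) pairs -- an illustrative representation.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [(["american(west)", "weapon(m1)"], "criminal(west)"),
         (["missile(m1)"], "weapon(m1)")]
print(forward_chaining({"american(west)", "missile(m1)"}, rules))
# -> includes "weapon(m1)" and "criminal(west)"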

Knowledge Engineering is the process of knowledge base construction, which includes:

Identifying the task (which determines the knowledge that must be represented)

Assembling the relevant knowledge (knowledge acquisition: extracting knowledge from real experts)

Deciding on a vocabulary of predicates, functions, and constants (translating domain-level
concepts into logic-level names)

Encoding general knowledge of the domain (constructing axioms for all vocabulary terms)

Encoding a description of the specific problem instance (constructing simple atomic sentences)

Posing queries to the inference procedure and obtaining answers (testing the inference procedure)

Debugging the knowledge base (identifying and fixing errors)

Ontological Engineering comprises the construction of a structural framework of abstract
concepts such as actions, time, physical objects, and beliefs. Large-scale knowledge
representation requires a general-purpose ontology to organize and tie together the various
specific domains of knowledge. This ontology should cover a wide variety of knowledge and should
be capable of handling an arbitrary domain. The upper ontology is based on sets and the event
calculus.

Actions, events, and time can be denoted either by situation calculus or in more open
representations such as event calculus and fluent calculus. Such representations enable an agent
to construct plans by logical inference. The rational states of agents can be represented by
strings that denote beliefs. Special-purpose representation systems, such as semantic networks
and description logics, help in the organization of a hierarchy of categories. Inheritance is an
important form of inference, because it allows the properties of objects to be deduced from their
respective categories (Russell and Norvig, 2003).

Conclusion
Here we have seen the ins and outs of artificial intelligence, the basis of which is solving
problems through the application of knowledge and reasoning by a non-human agent within a
particular environment. Artificial intelligence is a field that is continually growing due to
necessity and new technology. It cannot be said that we now have all the answers. In fact, John
McCarthy himself stated, "It's difficult to be rigorous about whether a machine really 'knows',
'thinks', etc., because we're hard put to define these things. We understand human mental
processes only slightly better than a fish understands swimming." Until we can understand
ourselves more, we are going to engineer intelligent systems with limited capabilities; systems
that are only as smart as we can perceive them to be.

REFERENCES
Bernstein, R. B., & Curtis, W. N. (2009). Artificial intelligence: New research. Nova Science
Publishers.
Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach (2nd ed.). Upper
Saddle River, NJ: Prentice Hall.
www-g.eng.cam.ac.uk (n.d.). AI - Problem solving and search. Retrieved from
http://www-g.eng.cam.ac.uk/mmg/teaching/artificialintelligence/nonflash/problemframenf.htm
[Accessed: 20 Dec 2012].
