
INTELLIGENT CONTROLLER

UNIT I

INTRODUCTION
Intelligent control is a class of control techniques that use
various AI computing approaches.
Intelligent control can be divided into the following major
sub-domains:
Neural network control
Bayesian control
Fuzzy (logic) control
Neuro-fuzzy control
Expert systems
Genetic control
Intelligent agents (cognitive/conscious control)

Expert System
An expert system is software that attempts to provide an answer to a problem,
or to clarify uncertainties, where one or more human experts would normally
need to be consulted.
Expert systems are most commonly applied within a specific, narrow problem domain.
Methods for simulating the performance of the expert are:
1) The creation of a so-called "knowledge base", which uses some knowledge
representation formalism to capture the Subject Matter Expert's (SME)
knowledge, as sketched below.
2) A process of gathering that knowledge from the SME and codifying it
according to the formalism, which is called knowledge engineering.
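
As a concrete illustration of method 1), a knowledge base for a narrow domain might capture an SME's know-how as explicit IF/THEN rules. The sketch below is hypothetical (the fault-diagnosis domain, fact names, and the consult helper are all invented for illustration), not any particular expert-system shell:

# Hypothetical toy knowledge base: an SME's know-how about a narrow domain
# (here, diagnosing a machine fault) captured as IF/THEN rules.
# Each rule pairs a set of required facts (the IF part) with a conclusion.

knowledge_base = [
    {"if": {"motor_hot", "smells_burnt"},       "then": "winding_failure"},
    {"if": {"no_power", "fuse_blown"},          "then": "replace_fuse"},
    {"if": {"vibration_high", "bearing_noisy"}, "then": "worn_bearing"},
]

def consult(observed_facts):
    """Return every conclusion whose conditions are all satisfied."""
    return [rule["then"] for rule in knowledge_base
            if rule["if"] <= observed_facts]   # subset test: all conditions hold

print(consult({"motor_hot", "smells_burnt", "vibration_high"}))
# ['winding_failure']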

Intelligent Agents (Cognitive / Conscious Control)


An intelligent agent is an autonomous entity in AI that observes and acts upon
an environment (i.e. it is an agent) and directs its activity towards achieving
goals (i.e. it is rational).
Intelligent agents may also learn or use knowledge to achieve their goals.
Example: a reflex machine such as a thermostat is an intelligent agent, as is a
human being, as is a community of human beings working together towards a goal.
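
To make the thermostat example concrete, here is a minimal reflex-agent sketch in Python. The class name, set point, and temperature values are illustrative assumptions, not taken from the text:

# Minimal reflex-agent sketch: a thermostat that perceives room temperature
# and acts (heater on/off) to pursue its goal temperature.
# The set point, hysteresis band, and sample temperatures are made up.

class ThermostatAgent:
    def __init__(self, set_point=21.0, hysteresis=0.5):
        self.set_point = set_point      # goal the agent directs its activity toward
        self.hysteresis = hysteresis    # dead band to avoid rapid switching
        self.heater_on = False

    def perceive_and_act(self, room_temp):
        """Map a percept (temperature) directly to an action (heater command)."""
        if room_temp < self.set_point - self.hysteresis:
            self.heater_on = True
        elif room_temp > self.set_point + self.hysteresis:
            self.heater_on = False
        return self.heater_on

agent = ThermostatAgent()
for temp in [19.0, 20.8, 21.8, 22.0]:
    print(temp, "->", "heat" if agent.perceive_and_act(temp) else "idle")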

APPROACHES TO INTELLIGENT
CONTROL
The field of artificial intelligence, or AI, attempts to understand intelligent
entities. The definitions of intelligence found in textbooks suggest four
possible goals to pursue in artificial intelligence:
Systems that think like humans
Systems that think rationally
Systems that act like humans
Systems that act rationally

A human-centered approach must be an empirical science, involving hypothesis
and experimental confirmation.
A rationalist approach involves a combination of mathematics and engineering.

Acting humanly: The Turing Test approach


Proposed by Alan Turing (Turing, 1950)
Designed to provide a satisfactory operational definition of intelligence.
Turing defined intelligent behavior as the ability to achieve human-level
performance in all cognitive tasks, sufficient to fool an interrogator.
Roughly speaking, the test he proposed is that the computer should be
interrogated by a human via a teletype, and passes the test if the interrogator
cannot tell if there is a computer or a human at the other end.
Avoided direct physical interaction between the interrogator and the
computer, because physical simulation of a person is unnecessary for
intelligence.
The issue of acting like a human comes up primarily when AI programs have
to interact with people, as when an expert system explains how it came to its
diagnosis, or a natural language processing system has a dialogue with a user.
These programs must behave according to certain normal conventions of
human interaction in order to make themselves understood.

Thinking humanly: The cognitive modelling approach
If we are to say that a given program thinks like a human, we must first have
some way of determining how humans think.
There are two ways to do this:
through introspection, trying to catch our own thoughts as they go by
through psychological experiments.

The interdisciplinary field of cognitive science brings together computer
models from AI and experimental techniques from psychology to try to construct
precise and testable theories of the workings of the human mind.
We will simply note that AI and cognitive science continue to fertilize each
other, especially in the areas of vision, natural language, and learning.

Thinking rationally: The laws of thought approach
These laws of thought, codified by Aristotle in his syllogisms, were supposed
to govern the operation of the mind, and their study initiated the field of logic.
The development of formal logic provided a precise
notation for statements about all kinds of things in the world
and the relations between them.
By 1965, programs existed that could, given enough time
and memory, take a description of a problem in logical
notation and find the solution to the problem, if one exists.
The so-called logicist tradition within artificial intelligence
hopes to build on such programs to create intelligent
systems.
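
As a toy illustration of this idea (not any historical system), the sketch below states a small problem in logical notation, as propositional constraints over made-up plant variables, and searches for a solution, i.e. an assignment that satisfies them, if one exists:

# Toy logicist-style solver: a problem is described as propositional
# constraints, and a brute-force search looks for a model that satisfies
# them, if one exists. Variable names and constraints are illustrative.

from itertools import product

variables = ["valve_open", "pump_on", "tank_full"]

def satisfies(m):
    # Problem description in logical notation:
    #   pump_on -> valve_open,  tank_full -> not pump_on,  pump_on or tank_full
    return ((not m["pump_on"] or m["valve_open"]) and
            (not m["tank_full"] or not m["pump_on"]) and
            (m["pump_on"] or m["tank_full"]))

for values in product([True, False], repeat=len(variables)):
    model = dict(zip(variables, values))
    if satisfies(model):
        print("solution:", model)
        break
else:
    print("no solution exists")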

Acting rationally: The rational agent approach


Acting rationally means acting so as to achieve one's goals,
given one's beliefs.
An agent is just something that perceives and acts.
In this approach, AI is viewed as the study and
construction of rational agents.

AI APPROACHES
CYBERNETICS AND BRAIN SIMULATION (CYBERNETICS AND COMPUTATIONAL NEUROSCIENCE)
There is no consensus on how closely the brain should be simulated.
In the 1940s and 1950s, a number of researchers explored the
connection between neurology, information theory, and cybernetics.
Some of them built machines that used electronic networks to exhibit
rudimentary intelligence.
By 1960, this approach was largely abandoned, although elements of it
would be revived in the 1980s.

AI APPROACHES
SYMBOLIC (GOOD OLD-FASHIONED ARTIFICIAL INTELLIGENCE)
When access to digital computers became possible in the middle 1950s, AI
research began to explore the possibility that human intelligence could be
reduced to symbol manipulation.
Cognitive simulation
Economist Herbert Simon and Allen Newell studied human problem-solving
skills and attempted to formalize them, and their work laid the foundations of
the field of artificial intelligence, as well as cognitive science, operations
research and management science.
Their research team performed psychological experiments to demonstrate the
similarities between human problem solving and their programs.

AI APPROACHES
SYMBOLIC (GOOD OLD-FASHIONED ARTIFICIAL INTELLIGENCE)
Logic based
John McCarthy felt that machines did not need to simulate human thought, but should
instead try to find the essence of abstract reasoning and problem solving, regardless
of whether people used the same algorithms.
Used formal logic to solve a wide variety of problems, including knowledge
representation, planning and learning.
"Anti-logic" or "scruffy"
Marvin Minsky and Seymour Papert found that solving difficult problems in vision
and natural language processing required ad hoc solutions; they argued that there
was no simple and general principle (like logic) that would capture all the
aspects of intelligent behavior.
Roger Schank described their "anti-logic" approaches as "scruffy". Commonsense
knowledge bases are an example of "scruffy" AI, since they must be built by hand,
one complicated concept at a time.

AI APPROACHES
SYMBOLIC (GOOD OLD-FASHIONED ARTIFICIAL INTELLIGENCE)
Knowledge based
When computers with large memories became available around 1970,
researchers from all three traditions began to build knowledge into AI
applications. This "knowledge revolution" led to the development and
deployment of expert systems (introduced by Edward Feigenbaum),
the first truly successful form of AI software.

AI APPROACHES
SUB-SYMBOLIC
Bottom-up, embodied, situated, behavior-based or nouvelle AI
Researchers from the related field of robotics, such as Rodney Brooks,
rejected symbolic AI and focused on the basic engineering problems that
would allow robots to move and survive. Their work revived the non-symbolic
viewpoint of the early cybernetics researchers of the 50s and reintroduced the
use of control theory in AI. These approaches are also conceptually related to
the embodied mind thesis.
Computational Intelligence
Interest in neural networks and "connectionism" was revived by David
Rumelhart and others in the middle 1980s. These and other sub-symbolic
approaches, such as fuzzy systems and evolutionary computation, are now
studied collectively by the emerging discipline of computational intelligence.

AI APPROACHES
STATISTICAL
In the 1990s, AI researchers developed sophisticated mathematical
tools to solve specific subproblems. These tools are truly scientific, in
the sense that their results are both measurable and verifiable, and they
have been responsible for many of AI's recent successes.
The shared mathematical language has also permitted a high level of
collaboration with more established fields (like mathematics,
economics or operations research).

RULE-BASED SYSTEMS


The simplest form of artificial intelligence, which is generally used in
industry, is the rule-based system, also known as the expert system.
A rule-based system is a way of encoding a human expert's knowledge in a
fairly narrow area into an automated system.
The knowledge of the expert is captured in a set of rules, each of which
encodes a small piece of the expert's knowledge. Each rule has a left-hand side
and a right-hand side. The left-hand side contains conditions on certain facts
and objects, which must be true in order for the rule to potentially fire
(that is, execute).
Any rules whose left-hand sides are satisfied at a given time are placed on an
agenda. One of the rules on the agenda is picked (there is no way of predicting
which one), its right-hand side is executed, and then it is removed from the
agenda. The agenda is then updated (generally using a special algorithm called
the Rete algorithm), and a new rule is picked to execute. This continues until
there are no more rules on the agenda.
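
The match-agenda-fire cycle described above can be sketched in a few lines of Python. This is a simplified illustration with naive re-matching in place of the Rete algorithm and made-up facts and rules, not a production rule engine:

# Simplified rule-based system: each rule has a left-hand side (facts that must
# hold) and a right-hand side (new facts asserted when the rule fires).
# Matching is naive; a real system would update the agenda incrementally with
# the Rete algorithm. All rule names, facts, and conclusions are illustrative.

rules = [
    {"name": "R1", "lhs": {"temperature_high"},           "rhs": {"open_valve"}},
    {"name": "R2", "lhs": {"open_valve", "pressure_low"}, "rhs": {"start_pump"}},
    {"name": "R3", "lhs": {"start_pump"},                 "rhs": {"log_event"}},
]

facts = {"temperature_high", "pressure_low"}
fired = set()

while True:
    # Build the agenda: rules whose LHS is satisfied and that have not fired yet.
    agenda = [r for r in rules if r["lhs"] <= facts and r["name"] not in fired]
    if not agenda:
        break                      # no more rules on the agenda
    rule = agenda[0]               # pick one rule (order is unspecified in general)
    facts |= rule["rhs"]           # execute its right-hand side
    fired.add(rule["name"])        # remove it from the agenda
    print("fired", rule["name"], "->", sorted(facts))

In a real system the agenda would be maintained incrementally by the Rete algorithm rather than re-scanned from scratch, and a conflict-resolution strategy would decide which matching rule fires first.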

KNOWLEDGE REPRESENTATION
Knowledge representation is an area of artificial intelligence concerned with
how to formally "think", that is, how to use a symbol system to represent "a
domain of discourse" (that which can be talked about), together with functions,
which may or may not themselves be within the domain of discourse, that allow
inference (formalized reasoning) about the objects within the domain of
discourse.
Generally speaking, some kind of logic is used both to supply a formal
semantics for how reasoning functions apply to symbols in the domain of
discourse, and to supply (depending on the particulars of the logic) operators
such as quantifiers and modal operators that, together with an interpretation
theory, give meaning to the sentences of the logic.
When we design a knowledge representation (and a knowledge representation
system to interpret sentences in the logic in order to derive inferences from
them), we have to make trade-offs across a number of design dimensions.
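
As a small, hypothetical illustration of these terms, the sketch below represents a toy domain of discourse as a set of symbols, represents predicates as sets, and evaluates quantified sentences against that interpretation; the objects, predicates, and helper functions are invented for illustration only:

# Toy knowledge representation: a domain of discourse (a set of objects),
# predicates represented as sets of symbols, and evaluation of quantified
# sentences against this interpretation. Everything here is illustrative.

domain = {"tweety", "polly", "rex"}   # objects we can talk about
bird   = {"tweety", "polly"}          # Bird(x)
flies  = {"tweety", "polly"}          # Flies(x)
dog    = {"rex"}                      # Dog(x)

def forall(pred):
    """Evaluate a universally quantified sentence over the domain."""
    return all(pred(x) for x in domain)

def exists(pred):
    """Evaluate an existentially quantified sentence over the domain."""
    return any(pred(x) for x in domain)

# "Every bird flies":  forall x. Bird(x) -> Flies(x)
print(forall(lambda x: (x not in bird) or (x in flies)))   # True

# "Some dog flies":    exists x. Dog(x) and Flies(x)
print(exists(lambda x: (x in dog) and (x in flies)))       # False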

THANK YOU
