
Artificial Intelligence versus classical Robotics

Robot control architectures

All robot control architectures are built on some ideas of Artificial Intelligence.

They also form part of what is now considered AI, in contrast to classical AI.

AL is the best example.

Is AI Engineering or Science?
Is Robotics Engineering or Science?

Construction ==> Engineering
all scientific problems already solved
representative: Feigenbaum

Science
more scientific principles still to be discovered
representative: McCarthy

What is a robot? More definitions.

An intelligent robot is a machine able to extract information from its environment and use knowledge about its world to move safely in a meaningful and purposeful manner.

A robot is a system which exists in the physical world and autonomously senses its environment and acts in it.

Robotics is the intelligent connection of perception to action (M. Brady).

How do these definitions relate to AI?

Compare them to classical robot definitions.

Alternative terms we will use:

UAV: unmanned aerial vehicle
UGV: unmanned ground vehicle
UUV: unmanned undersea vehicle

What makes a robot?

sensors

effectors/actuators

locomotion system

on-board computer system

controllers for all of the above (smart methods everywhere)

Sensing:
What can be sensed?
depends on the sensors on the robot

the robot exists in its sensor space (i.e., all possible values of its sensory readings, also called its perceptual space)

robotic sensors are very different from biological sensors; a designer needs to put his mind into the robot's sensor space

a roboticist has to try to imagine the world in the robot's sensor space

What needs to be sensed?

depends on the robot's task

State: a sufficient description of the system

observable: the robot knows its state at all times

hidden/inaccessible/unobservable: the robot does not know its state

partially-observable: the robot knows some part of its state

discrete (e.g., up, down, blue, red) or continuous (e.g., 3.765 mph)

State space: all the states a system can be in

External state: state of the world

night/day, raining/sunny, at home, etc.

sensed using the robot's sensors

Internal state: state of the robot

happy/sad, stalled/moving, battery level, velocity, etc.

can be sensed (e.g., velocity)

can be stored/remembered (e.g., happy/sad)

The robot's state is a combination of its external and internal state.
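To make these definitions concrete, here is a minimal sketch in Python of a robot state as a combination of external (sensed) and internal (sensed or remembered) components; the class names and fields are invented for illustration, not part of any standard library.

```python
from dataclasses import dataclass

@dataclass
class ExternalState:
    """State of the world, obtained through the robot's sensors."""
    is_daytime: bool        # discrete external state (e.g., night/day)
    raining: bool           # discrete external state (e.g., raining/sunny)

@dataclass
class InternalState:
    """State of the robot itself: partly sensed, partly stored/remembered."""
    battery_level: float    # continuous, can be sensed
    velocity: float         # continuous, can be sensed
    mood: str               # e.g., "happy"/"sad", stored rather than sensed

@dataclass
class RobotState:
    """The robot's state is the combination of external and internal state."""
    external: ExternalState
    internal: InternalState

# Example of a partially-observable situation: the robot senses its own
# velocity and battery, but may not know whether it is raining outside.
state = RobotState(
    external=ExternalState(is_daytime=True, raining=False),
    internal=InternalState(battery_level=0.8, velocity=0.3, mood="happy"),
)
```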

How intelligent the robot appears will strongly depend on how much, and how quickly, it can sense its environment and itself.

We will talk more about sensors in the next lectures.

Internal state can be used to remember information about the world (e.g., remember paths to the goal, remember maps, remember friends versus enemies, etc.)

This is called a representation or an internal model.

Representations/models have a lot to do with how complex a controller is!

Acting:

A robot acts through the use of its actuators, also called effectors.

Robotic actuators are very different from biological ones; both are used for:

locomotion (moving around, going places)

manipulation (handling objects)

This divides robotics into three areas:

mobile robotics
manipulator robotics
communication robotics (theatre, toys)

Acting:

Action versus Behavior :

Behavior is what an external observer sees a robot doing.

Robots are programmed to display desired behavior.

Behavior is a result of a sequence of robot actions.

Observing behavior may not tell us much about the internal control of a robot.

Control can be a black box.

Mobile robots can move around, using wheels, tracks, or legs, and usually move in two dimensions.

However, swimming and flying robots are also mobile robots; they move in three dimensions (and are therefore even harder to control).

Manipulators are various robot arms;

they can move in one or more dimensions.

the number of dimensions is called the robot's degrees of freedom (DOF).

We will learn much more about actuators/effectors later.

Autonomy:

What is autonomy?

the ability to make one's own decisions and act on them

for robots, the ability to sense the situation and act on it appropriately

Autonomy can be complete, as in autonomous robots, or partial, as in tele-operated robots.

examples of autonomous robots: R2D2

examples of tele-operated robots: NASA's robots before Pathfinder

Exo-skeletons are not robots, according to our definition.

(E.g., Ripley's exo-skeleton in the movie Aliens.)

Fundamentals of Robot Control Architectures

Distinguish between the classical control used in robots and the Robot Control Architectures, which have more to do with AI.

Control:

Robot control refers to the way in which the sensing and action of a robot are coordinated.

The many different ways in which robots can be controlled all fall along a well-defined spectrum of control.

Control Approaches:

Reactive Control : Don't think, (re)act.

Behavior-Based Control : Think the way you act.

Deliberative Control : Think hard, act later.

Hybrid Control : Think and act independently, in parallel.

Control Trade-offs:

Thinking is slow.
Reaction must be fast.
Thinking enables looking ahead (planning) to avoid bad solutions.
Thinking too long can be dangerous (e.g., falling off a cliff, being run over).
To think, the robot needs (a lot of) accurate information => world models.

Food for Thought:

Many robots you build in this class will use reactive control. What more can you build on top of it? Your dream robot?!

Are exo-skeletons (e.g., Ripley's in the movie Aliens) robots?

Is HAL (in the movie 2001) a robot?

Some intelligent Web agents are called "softbots". Are they robots?

Please review:
1. The concept of a Finite State Machine (a sequential system); a minimal sketch follows after this list.
2. The design of a reactive system may include using design automation tools (FPGA, EPLD) that you learned about in other classes.
3. Review the stages of designing FSMs.
4. Recall examples of FSMs.
5. A reactive machine may include counters, shifters, adders, sequence generators, sequence recognizers, or other blocks that we covered in ECE 271.
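As a refresher on finite state machines, here is a minimal sketch in Python; the states, input symbols, and transition table describe a made-up light-seeking robot, not any particular assignment.

```python
# Minimal FSM sketch: states, inputs, and a transition table.
# The states and inputs below are hypothetical, chosen to resemble
# a simple light-seeking robot.

TRANSITIONS = {
    # (current_state, input_symbol): next_state
    ("SEEK", "light_ahead"): "APPROACH",
    ("SEEK", "no_light"): "SEEK",
    ("APPROACH", "bright_light"): "BACK_OFF",
    ("APPROACH", "light_ahead"): "APPROACH",
    ("BACK_OFF", "no_light"): "SEEK",
}

def step(state: str, symbol: str) -> str:
    """Advance the machine by one input symbol; stay put on undefined input."""
    return TRANSITIONS.get((state, symbol), state)

# Example run over a short input sequence.
state = "SEEK"
for symbol in ["no_light", "light_ahead", "bright_light", "no_light"]:
    state = step(state, symbol)
    print(symbol, "->", state)
```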

Reactive Robot Systems
Don't think, react!

Reactive Systems:

Reactive control is a technique for tightly coupling perception (sensing) and action, to produce timely robotic response in dynamic and unstructured worlds.

Think of it as "stimulus-response".

A powerful method: many animals are largely reactive.

Limitations:

Minimal (if any) state.

No memory.

No learning.

No internal models / representations of the world.

Reactive versus Deliberative Systems

Reactive Systems
Collections of sense-act (stimulus-response) rules (see the sketch after this list)
rules implemented as assembly code, C++ code, EPLD combinational logic, FPGA state machines, state machines with stacks (memory), etc.
Inherently concurrent (parallel)
Very fast and reactive
Unable to plan ahead
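A minimal sketch of such sense-act rules, written in Python rather than logic or assembly; the sensor names, thresholds, and actions are illustrative assumptions. Rules are checked in a fixed priority order, and only the current sensor readings are used; there is no memory or planning.

```python
# Sketch of a reactive controller as prioritized sense-act rules.
# Sensor names and actions are hypothetical; no world model is kept,
# just the current readings mapped directly to an action.

def reactive_controller(sensors: dict) -> str:
    """Return an action from the current sensor readings only."""
    # Rules are checked in priority order; the first match wins.
    if sensors.get("bump", False):
        return "back_up_and_turn"        # highest priority: safety reflex
    if sensors.get("light_level", 0.0) > 0.8:
        return "turn_away_from_light"    # too bright: retreat
    if sensors.get("light_level", 0.0) > 0.2:
        return "move_toward_light"       # weak light: approach
    return "wander"                      # default behavior

print(reactive_controller({"bump": False, "light_level": 0.5}))  # move_toward_light
```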

Reactive versus Deliberative Systems

Deliberative Systems
Based on the sense->plan->act model (a small planner sketch follows below)
Inherently sequential
Planning requires search, which is slow
Search requires a world model
World models become outdated
Search and planning take too long
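To make "planning requires search over a world model" concrete, here is a small sketch of the deliberative approach: a breadth-first search over a stored grid-world model produces a path before any action is taken. The grid, start, and goal are made-up assumptions.

```python
from collections import deque

# Sketch of the deliberative sense -> plan -> act idea: search a stored
# world model (here, a tiny occupancy grid) for a path before acting.

GRID = [
    "....#",
    ".##.#",
    "....#",
    ".#...",
]

def plan(start, goal):
    """Breadth-first search over the grid model; returns a list of cells."""
    rows, cols = len(GRID), len(GRID[0])
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and GRID[nr][nc] == "." and (nr, nc) not in visited:
                visited.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None  # no plan found; the model may be wrong or outdated

print(plan((0, 0), (3, 4)))
```

If the real world changes after the model was built, the computed plan may fail, which is exactly the "world models become outdated" problem above.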

Hybrid Systems
Combine the two extremes:

reactive system on the bottom

deliberative system on the top

connected by some intermediate layer

Often called 3-layer systems

Layers must operate concurrently

Different representations and time-scales between the layers

The best or the worst of both worlds??
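The following sketch, with invented names and update rates, illustrates how the layers of a hybrid system can run concurrently on different time-scales: a fast reactive loop, a slow deliberative planner, and a shared plan acting as the intermediate connection.

```python
import threading, time

# Sketch of a 3-layer hybrid architecture: a fast reactive loop, a slow
# deliberative planner, and a shared structure (the current plan) connecting
# them. All names, rates, and the fake replanning are illustrative.

current_plan = ["forward", "left", "forward"]   # written by the planner
plan_lock = threading.Lock()

def bump_sensor() -> bool:
    return False                        # stub sensor for the sketch

def act(command: str) -> None:
    print("act:", command)

def deliberative_layer():
    while True:
        time.sleep(1.0)                 # slow time-scale: replan about once per second
        with plan_lock:
            current_plan[:] = ["forward", "right", "forward"]  # fake replanning

def reactive_layer():
    for _ in range(20):
        time.sleep(0.05)                # fast time-scale: react at roughly 20 Hz
        if bump_sensor():
            act("back_up")              # reflex overrides the plan
        else:
            with plan_lock:
                # executes the first step of the current plan; a real
                # sequencer (the intermediate layer) would pop steps
                act(current_plan[0] if current_plan else "stop")

threading.Thread(target=deliberative_layer, daemon=True).start()
reactive_layer()
```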

Behavior-Based Systems

An alternative to hybrid systems

Have the same capabilities:

the ability to act reactively
the ability to act deliberatively

There is no intermediate layer

A unified, consistent representation is used in the whole system => concurrent behaviors

That resolves the issues of time-scale

Feedback Control

Feedback: continuous monitoring of the sensors and reacting to their changes.

Feedback control = self-regulation

Two kinds of feedback:

Positive

Negative

The basis of control theory

- and + Feedback

Negative feedback

acts to regulate the state/output of the system

e.g., if too high, turn down, if too low, turn up

thermostats, toilets, bodies, robots...

Positive feedback

acts to amplify the state/output of the system

e.g., the more there is, the more is added

lynch mobs, stock market, ant trails...
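A tiny sketch of negative feedback as self-regulation, in the style of a thermostat: measure the output, compare it with a setpoint, and act to reduce the error. The setpoint, gain, and toy room model are invented for illustration.

```python
# Sketch of negative feedback (self-regulation): the controller measures the
# output, compares it with a setpoint, and acts to reduce the error.

setpoint = 20.0      # desired temperature (deg C)
temperature = 12.0   # current temperature
gain = 0.3           # proportional gain

for step in range(15):
    error = setpoint - temperature          # too low -> positive error
    heater_power = max(0.0, gain * error)   # more power when too cold, none when too hot
    temperature += 0.5 * heater_power - 0.1 # toy room: heating minus heat loss
    print(f"step {step:2d}: temp = {temperature:5.2f}, heater = {heater_power:4.2f}")
```

Positive feedback would instead add in proportion to the output itself, amplifying rather than regulating it.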

Feedback and Cybernetics

Uses of Feedback
The invention of feedback was the first simple robotics (does it fit our definition?)
The first example came from ancient Greek water systems (toilets)
Forgotten and re-invented in the Renaissance for ovens/furnaces
Really made a splash in Watt's steam engine

Cybernetics
Pioneered by Norbert Wiener (1940s) (from the Greek kybernetes, "steersman")
Marriage of control theory (feedback control), information science, and biology
Seeks principles common to animals and machines, especially for control and communication
Coupling of an organism and its environment (situatedness)

W. Grey Walter's Tortoise

Machina Speculatrix
1 photocell & 1 bump sensor, 1 motor

Behaviors:

seek light

head toward weak light

back away from bright light

turn and push

recharge battery

Reactive control

Turtle World (homework 2)


Turtle Principles
Parsimony: simple is better (e.g., clever recharging strategy)
Exploration/speculation: keeps moving (except when charging)
Attraction (positive tropism): motivation to approach light
Aversion (negative tropism): motivation to avoid obstacles, slopes
Discernment: ability to distinguish and make choices, i.e., to adapt

Turtle World (homework 2)

Braitenberg Vehicles

Valentino Braitenberg (early 1980s)

Extended Walter's model in a series of thought experiments

Also based on analog circuits

Direct connections (excitatory or inhibitory) between light sensors and motors

Complex behaviors from very simple mechanisms

By varying the connections and their strengths, numerous behaviors result, e.g.:

"fear/cowardice" - flees light

"aggression" - charges into light

"love" - following/hugging

many others, up to memory and learning!

Reactive control

Later implemented on real robots
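The sketch below captures the core idea of direct sensor-to-motor connections: with same-side excitatory connections the vehicle turns away from the light ("fear/cowardice"), while crossed connections make it turn toward the light ("aggression"). The weight matrices and sensor readings are illustrative assumptions.

```python
# Sketch of a Braitenberg-style vehicle: two light sensors wired directly to
# two motors with excitatory (+) or inhibitory (-) weights.

def motor_speeds(left_light, right_light, weights):
    """weights[i][j]: contribution of sensor j (L, R) to motor i (L, R)."""
    left_motor  = weights[0][0] * left_light + weights[0][1] * right_light
    right_motor = weights[1][0] * left_light + weights[1][1] * right_light
    return left_motor, right_motor

# Same-side excitatory connections: the brighter side drives its own motor
# harder, so the vehicle turns away from the light ("fear/cowardice").
FEAR       = [[1.0, 0.0],
              [0.0, 1.0]]

# Crossed excitatory connections: the brighter side drives the opposite motor,
# so the vehicle turns toward the light and speeds up ("aggression").
AGGRESSION = [[0.0, 1.0],
              [1.0, 0.0]]

# Light source off to the left: left sensor reads higher than the right one.
print("fear:      ", motor_speeds(0.9, 0.2, FEAR))        # left motor faster -> veers away
print("aggression:", motor_speeds(0.9, 0.2, AGGRESSION))  # right motor faster -> veers toward
```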

Artificial Intelligence

Early Artificial Intelligence


"Born" in 1955 at Dartmouth (thus both traditions are old!)
"Intelligent machine" would use internal models to search for solutions and then
try them out (M. Minsky) => deliberative model!
Planning became the tradition
Explicit symbolic representations
Hierarchical system organization
Sequential execution

Artificial Intelligence (AI)

Early AI had a strong impact on early robotics
Focused on knowledge, internal models, and reasoning/planning
Eventually (1980s) robotics developed improved and innovative approaches => behavior-based and hybrid control
AI itself has also evolved...
But before that, early robots used deliberative control

Early Robots

Early Robots: SHAKEY


At Stanford Research Institute (late 1960s)
Vision and contact sensors
STRIPS planner
Visual navigation in a special world
Deliberative

Early Robots: HILARE


LAAS in Toulouse, France (late 1970s)
Video, ultrasound, laser range-finder
Still in use!
Multi-level spatial representations
Deliberative -> Hybrid Control

Early Robots: CART/Rover

Hans Moravec

Stanford Cart (1977), followed by the CMU Rover (1983)

Sonar and vision

Deliberative control

Robotics Today
Assembly and manufacturing (largest number of robots, least autonomous)
Materials handling
Gophers (hospitals, security guards)
Hazardous environments (Chernobyl)
Remote environments (Pathfinder)
Surgery (brain, hips)
Tele-presence and virtual reality
Entertainment

Both approaches are represented.

Why is Robotics hard?

Sensors are limited and crude
Effectors are limited and crude
State (internal and external, but mostly external) is partially-observable
Environment is dynamic (changing over time)
Environment is full of potentially-useful information

Key Issues of Robotics vs. AI

Grounding in reality:
not just planning in an abstract world

Situatedness (ecological dynamics):
tight connection with the environment

Embodiment:
having a body

Emergent behavior:
arises from interaction with the environment

Scalability:
increasing task and environment complexity

Food for thought. And Exam?...

Argumentation:

Try to argue that robotics is engineering and not science.

Try to argue the opposite.

Write an Eliza-like program with two robots arguing about this topic.

Sensing:

Based on your knowledge from other classes, try to invent a new sensor that has so far not been used much in robotics, such as a smell sensor, a polarized-light sensor, or a radiation sensor.

Some sensors may need a lot of processing.

What computer software and algorithms may be useful?

Think, for instance, of having an array of directed microphones.

Food for thought. And Exam?...

State:

Give examples of the various types of states for your Turtle robot from homework 2.

Using the concept of finite state machines and their verification, how can you verify the correctness of your robot's actions, for instance that it reaches the goal or does not bump into an obstacle?

What can be proven?

How would you design a program that analyzes the reachability of your robot in a certain space? (A small reachability sketch follows below.)
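One possible starting point, sketched under the assumption that the controller is modeled as a finite state machine: compute the set of reachable states by breadth-first search, then check whether the goal state is reachable and whether an undesirable "crashed" state is not. The transition table below is a made-up example.

```python
from collections import deque

# Sketch: verify a property of a reactive FSM by computing its reachable
# states. "CRASHED" stands for an undesirable state (e.g., bumping into an
# obstacle); it should be unreachable if the controller is correct.

TRANSITIONS = {
    "START":   {"see_goal": "AT_GOAL", "see_obstacle": "AVOID"},
    "AVOID":   {"clear": "START"},
    "AT_GOAL": {},
    "CRASHED": {},
}

def reachable_states(start: str) -> set:
    """Breadth-first search over the FSM's transition graph."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for next_state in TRANSITIONS.get(state, {}).values():
            if next_state not in seen:
                seen.add(next_state)
                frontier.append(next_state)
    return seen

reached = reachable_states("START")
print("reachable:", reached)
print("can reach goal:", "AT_GOAL" in reached)   # a liveness-style check
print("can ever crash:", "CRASHED" in reached)   # a safety-style check
```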

Food for thought. And Exam?...

Control:

Using the example of your Turtle, show examples of positive and negative feedback.

Do you have to redesign your control to be able to demonstrate both?

Control Architectures:

Using your Turtle, give examples of which behaviors are reactive and which are deliberative.

Perhaps most of your Turtle's behavior is reactive.

How can you add planning on top of the reactive behaviors?

What kinds of plans will the robot be able to execute?

If a plan fails, what is the simplest solution, using the concepts that you have learned so far?

Food for thought. And Exam?...

Learning:

As you remember, any kind of behavior that transforms the stored knowledge into a new form, as a result of which the new behavior is more efficient, can be categorized as learning; for instance, modifying the table of a reactive state machine (a small table-update sketch follows below).

Add one more layer to your Turtle: the level of learning.

How will you evaluate the quality of learning?

Can a GA be a learning mechanism?

How can learning be introduced in the framework of tree search?
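One concrete reading of "modifying the table of a reactive state machine" as learning, sketched with invented stimuli, actions, and reward signal: score candidate actions by trial and error, then rewrite the table entry to use the best-scoring action.

```python
import random

# Sketch of learning as table modification: for each stimulus, try actions,
# score them by a reward signal, and rewrite the reactive table to use the
# best-scoring action.

ACTIONS = ["turn_left", "turn_right", "forward"]
scores = {("light_left", a): 0.0 for a in ACTIONS}

def reward(stimulus: str, action: str) -> float:
    """Stub environment: approaching the light on the left is rewarded."""
    return 1.0 if (stimulus, action) == ("light_left", "turn_left") else 0.0

for _ in range(100):                       # trial-and-error phase
    action = random.choice(ACTIONS)
    scores[("light_left", action)] += reward("light_left", action)

# Rewrite the reactive table entry with the best-scoring action.
table = {"light_left": max(ACTIONS, key=lambda a: scores[("light_left", a)])}
print(table)   # expected: {'light_left': 'turn_left'}
```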

Applications:

Think about all possible practical applications for your Turtle.

What would have to be added so that it could remove mines from a former battlefield?

Or so that it could find weeds and destroy them?

Characterize every such task in terms of the basic control architectures from the class.
