
Guest Editors' Introduction

Semisentient Robots: Routes to Integrated Intelligence
Luís Seabra Lopes, Universidade de Aveiro
Jonathan H. Connell, IBM T.J. Watson Research Center

Complex systems are getting to the point where it almost feels as if "someone" is there behind the interface. This impression comes across most strongly in the field of robotics because these agents are physically embodied, much as humans are. We believe that this phenomenon has four primary components: A system must be able to

• act in some reasonably complicated domain,
• communicate with humans using a language-like modality,
• reason about its actions at some level so that it has something to discuss, and
• learn and adapt to some extent on the basis of human feedback.

These components all contribute to various aspects of the definition of "sentience." Yet, we obviously can combine these basic ingredients in different ways and proportions. This special issue, therefore, examines what sort of interesting recipes people are cooking up.

Perception & action

Philosophers have studied the nature of intelligence since ancient times. In modern times, researchers from other fields, including psychology, cognitive science, and AI, have joined the debate. Unfortunately, while we can define some abilities related to intelligence, such as learning or reasoning, with some precision, the word intelligence means different things to different people. Humans are intelligent, everybody seems to agree. But are animals intelligent? Is a computer running a medical expert system intelligent? In the larger picture, is sentience even possible for a silicon-based entity such as HAL 9000?1

One precondition for intelligence seems to be a responsiveness to external stimuli. That is, an agent must perceive something about its environment and be able to perform appropriate actions as conditions change. Let's call this the animate criterion. This does not mean an agent must have a camera or a robotic arm, merely that it has some means of input and output. This might be as simple as a text-based system. The point is that an agent should not always emit the exact same response; at the least, some changes in its input should lead to changes in its output. Certainly we would not consider intelligent a robot that just sat on a table or drove forward. Given that perception and action are definitely required, determining such a system's intelligence comes down to judging its actions' appropriateness in light of the prevailing conditions. In fact, robotics is often defined as "the intelligent connection of perception to action."2 The question now becomes, how do we make this connection?

The 1950s and 1960s saw the creation of the first robotic devices, such as Elsie3 and the Hopkins Beast.4 These were inspired by biological models of simple creatures, akin to the current A-life and animat approaches. These machines did things such as react to lighting variations and follow along hallways looking for "food." Although they definitely had a quality of aliveness, they did not seem particularly intelligent in the human sense.



In the late 1960s, the first robots showing some degree of human-like intelligence and autonomy appeared. The mobile robot Shakey, developed at SRI, is generally considered a watershed in autonomous systems. Shakey could already perform some functions that we still consider fundamental for autonomous systems—namely, perception, representation, planning, execution monitoring, learning, and failure recovery. One big advance was that the system was symbol-based, so a human could clearly understand what the system was doing and why. At about the same time, Terry Winograd created SHRDLU, a natural-language-understanding system for a physically embedded agent, a simulated robot arm manipulating blocks.5 Again, this system seemed more intelligent because it could understand typed symbolic directions much as a human would.

Unfortunately, these classical AI techniques and representations did not extend easily to complex dynamic environments.6–8 Researchers had difficulty maintaining world models and connecting the symbols in these models to numerical variables derived from the outside world. Furthermore, AI systems tended to be very slow, because of the sequential processing of information and the cost of maintaining the world model. So, they had difficulty reacting to sudden changes in the environment. This is partially related to the combinatorial-explosion problem in planning, which also limits a system's decision-making capabilities. Finally, these early AI systems were brittle; they failed in situations only slightly different from those for which they were programmed. They assumed their world models were correct and complete and thus tended to run blindly, in a ballistic fashion.

These realizations sparked new interest in the animal-modeling paradigm, but now leavened with insights from traditional computer science. Probably the most influential of these new behavior-based approaches was Brooks' subsumption architecture.8 This approach supports competence levels that range from basic reflexes (reactions) all the way to reasoning and planning. The more abstract control layers exploit the functionality implemented by the lower layers but can also directly access sensor data and send actuator commands. Each layer is implemented by a set of modules that run in parallel, without any centralized control.

In such a system, a centralized world model is not necessary (or possible). Instead, the system substitutes continual sensing for potentially faulty memory, following the maxim that "the world is its own best model." This decentralization can go even further by completely removing all internal representation and reasoning from the robot control architecture: "intelligence without reason." The extent to which these robots appear intelligent is largely in the mind of the beholder—such intelligence is not explicitly encoded. Rather, the robot's functionality emerges dynamically from the interaction among its components and the environment.9
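To make the layering concrete, the following is a minimal Python sketch of fixed-priority, subsumption-style arbitration. It is only an illustration, not Brooks's implementation; the behaviors, sensor fields, and priority ordering are invented for the example. Each behavior independently maps sensor readings to an optional motor command, and the highest-priority behavior with something to say suppresses the output of the layers beneath it.

# A fixed-priority, subsumption-style arbitration sketch. Each behavior maps
# raw sensor readings to an optional motor command; behaviors later in the
# list are treated as higher layers and suppress the ones below them.

def wander(sensors):
    # Lowest layer: drift forward when nothing more important is active.
    return {"turn": 0.0, "speed": 0.2}

def seek_light(sensors):
    # Middle layer: steer toward the brighter side, but only when light is seen.
    left, right = sensors["light_left"], sensors["light_right"]
    if max(left, right) > 0.5:
        return {"turn": right - left, "speed": 0.3}
    return None                      # nothing to say; defer to other layers

def avoid(sensors):
    # In this sketch the collision reflex is allowed to override everything else.
    if sensors["range"] < 0.3:
        return {"turn": 1.0, "speed": 0.0}
    return None

LAYERS = [wander, seek_light, avoid]      # increasing priority

def arbitrate(sensors):
    # The highest active layer's command suppresses everything beneath it.
    command = None
    for behavior in LAYERS:
        output = behavior(sensors)
        if output is not None:
            command = output
    return command

print(arbitrate({"range": 1.0, "light_left": 0.8, "light_right": 0.2}))

Adding a new competence level then amounts to appending one more behavior to the list, without rewriting the layers underneath it.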
Reactive and behavior-based systems provide a robust solution to the problem of situated activity, coping with dynamically changing environments. In particular, one of the approach's successes is the fact that these reactively controlled robots could display emergent intelligent-looking behavior. However, despite the modular and incremental design that reactive systems often use, building them seems difficult, requiring considerable engineering effort to hard-code the desired functionality. This is due partly to unexpected interactions between behaviors as the system's size scales up.

In addition, the representationless character of behavior-based systems is often criticized as too restrictive. For instance, mobile-robot path planning requires some representation of space and spatial relations between objects, which is most conveniently conceptualized in a centralized framework. Moreover, with massive decentralization, dynamically imparting goals to such systems is difficult because there is no centralized "it" to give the commands to. Of course, researchers have devised clever ways around some of these restrictions.10,11

Nevertheless, while some researchers continue to investigate purely reactive systems, many others have proposed hybrid systems.7,12–17 Typically, such systems employ reactivity in low-level control and explicit reasoning for higher-level deliberation.

Reasoning

One way to make an embodied system appear more intelligent is to give it the ability to reason. By this we mean the process of stringing together a number of small inference or planning steps to achieve some goal or reach some conclusion. The analogy with humans suggests that robots should be able to plan sequences of actions for achieving given goals and should be prepared to handle unforeseen situations (exceptions) that occur when executing those actions. Planning (either for a given task or for exception handling), execution monitoring, and explanation (diagnosis) of exceptions are key reasoning functions for robots that will provide a high-level interface to humans.17,18

By providing the robot a library of primitive routines and a suitable composition methodology, we can potentially cover a large variety of situations with a relatively modest amount of knowledge. Indeed, this can even give the robot the flexibility to handle additional environmental challenges that we hadn't explicitly considered when developing the library of primitives. The robot does not necessarily need a canned operator to handle every contingency—consider the wide variety of chess board configurations that can result over time from the relatively simple moves of the pieces. In this way, reasoning makes the robot more adaptable.

Most current planning and reasoning systems are derived from either first-order predicate logic or the situation calculus.19 The FOPL approach employs logical formulas that describe the world's current state, classify objects, and predict the effects of actions. Often, in robotics, it is assumed that the sensor systems continuously update the truth values of those formulas. The situation calculus and its more modern cousin, Golog,20 deal primarily with action planning. Each operator (action) has a precondition list governing its applicability to a given situation, and a postcondition list describing its effect on the environment in terms of added and dropped assertions. Usually, a set of frame axioms describes what happens to assertions not directly referenced in the postcondition lists. Both types of systems are attractive because complete and correct proof procedures exist. Also, some hope exists that programming can be purely declarative—the user specifies a goal or points out some nonoptimal condition, and the robot automatically determines how to achieve the desired result.
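As a concrete illustration of this add-and-drop style of operator, here is a small Python sketch under assumed predicates and operator names (all invented for the example): the world state is a set of assertions, an operator applies only when its preconditions hold, and applying it adds and drops assertions while everything not mentioned carries over unchanged, which is the usual frame assumption.

# Sketch of a STRIPS-style operator: a precondition list, assertions to add,
# and assertions to drop. Everything not mentioned carries over unchanged,
# which is the usual frame assumption. All predicates here are invented.

state = {("at", "robot", "hall"), ("at", "cup", "kitchen"), ("handempty",)}

goto_kitchen = {
    "pre":  [("at", "robot", "hall")],
    "add":  [("at", "robot", "kitchen")],
    "drop": [("at", "robot", "hall")],
}

pickup_cup = {
    "pre":  [("at", "robot", "kitchen"), ("at", "cup", "kitchen"), ("handempty",)],
    "add":  [("holding", "cup")],
    "drop": [("at", "cup", "kitchen"), ("handempty",)],
}

def applicable(op, state):
    # An operator applies only when every precondition currently holds.
    return all(p in state for p in op["pre"])

def apply_op(op, state):
    # Effects are just assertions dropped from and added to the state set.
    return (state - set(op["drop"])) | set(op["add"])

for op in (goto_kitchen, pickup_cup):
    if applicable(op, state):
        state = apply_op(op, state)

print(sorted(state))        # robot now in the kitchen and holding the cup

A planner, in this view, is a search for a sequence of such operators whose cumulative effect satisfies the goal assertions.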


A variety of complementary model-based techniques, such as qualitative reasoning, diagnostic reasoning, and probabilistic reasoning, are also under investigation. Because these are potentially relevant for building truly sentient robots, they might gradually make their way into future robots.

The case-based approach to reasoning is radically different.21 This approach matches the current situation as the robot perceives it against all the prototypes in its library. Typically, the closest match above some minimum threshold becomes the model for guiding subsequent interactions with the environment. Advanced systems sometimes attempt to combine several known cases to generate better solutions. By design, these systems are good at handling imprecise specifications. However, their case-matching metrics must be carefully crafted. Moreover, these systems are prone to hallucinate and sometimes apply inappropriate models when the available information is sparse. So, case-based systems, although remaining symbol-based and escaping some of the brittleness of systems such as Shakey and SHRDLU, have their own drawbacks.
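To make the retrieve-and-reuse step concrete, here is a small Python sketch in which the feature names, prototype cases, and threshold are all invented: the closest stored case above the threshold supplies the model, and a sparse or poorly matching situation is rejected rather than trusted.

# Sketch of case-based retrieval: score the current situation against stored
# prototype cases and reuse the best match only if it clears a threshold.
# Feature names, cases, and the threshold are invented for illustration.

CASES = [
    {"features": {"door": 1, "narrow": 1, "people": 0}, "plan": "squeeze-through"},
    {"features": {"door": 1, "narrow": 0, "people": 1}, "plan": "wait-then-enter"},
    {"features": {"door": 0, "narrow": 0, "people": 1}, "plan": "detour"},
]

def similarity(a, b):
    # A crude overlap metric; real systems craft this matching metric carefully.
    keys = set(a) | set(b)
    return sum(a.get(k, 0) == b.get(k, 0) for k in keys) / len(keys)

def retrieve(situation, threshold=0.7):
    best = max(CASES, key=lambda c: similarity(situation, c["features"]))
    score = similarity(situation, best["features"])
    # Below the threshold, no stored model should be trusted.
    return best["plan"] if score >= threshold else None

print(retrieve({"door": 1, "narrow": 1, "people": 0}))   # closest prototype's plan
print(retrieve({"door": 1}))                             # sparse input: no match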
Learning

The ability to learn is another key component of intelligence. By learning, we mean the process by which a system improves its performance on certain tasks on the basis of experience. We can view learning as a combination of inference and memory processes.22 An agent with reasoning and learning capabilities is not limited by its designers' foresight, because it can tailor its behavior to new conditions as appropriate. Again, this enhances its adaptability. This capacity is crucial for things that cannot be known ahead of time, such as the layout of a user's home, or things that might change over time, such as the calibration of the robot's arm.

Learning is also useful for things that are simply difficult or tedious to directly program.17,23,24 For instance, the most pervasive approach to industrial-robot task specification is still a combination of guiding and textual programming. A teach pendant (a handheld control device) deictically indicates the positions referenced in the program, which would otherwise be onerous to enter coordinate by coordinate. This is probably the simplest and oldest form of symbol grounding25 in robotics.

Much robotics and control theory research has dealt with automatically modeling a mechanism's forward and inverse kinematics. A comparable amount of research has gone into elucidating the geometric transforms between sensor and motor subsystems. For example, Bartlett Mel shows how neural networks can progressively learn to move a robot arm to desired locations in a camera's visual field.26 A lot of this work, most notably that of the CMAC (Cerebellar Model Articulation Controller) camp,27 treats the problem as function approximation—for a given input, the system should learn to predict the commonly observed output. This form of feedback is fairly rich in that it provides the exact desired response to the robot. However, even with this assistance, appropriately generalizing over the high-dimensional spaces typically used is difficult. This is especially true given the necessarily limited amount of training data—the robot has physical inertia, so each trial move takes a significant amount of real time.
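The function-approximation view can be illustrated with a deliberately tiny Python sketch: assuming a made-up set of (arm angle, observed image position) pairs and a simple linear model, the robot fits the forward mapping from a handful of trials and then inverts it to pick a command for a desired view. Real systems face far higher-dimensional spaces and richer models, but the shape of the computation is the same.

# Sketch of the function-approximation view of sensorimotor learning: fit a
# forward model from (command, observed outcome) pairs, then invert it to
# reach a target. The sample pairs and the linear model are invented.

samples = [(-0.4, 12.0), (-0.1, 30.0), (0.2, 47.0), (0.5, 66.0)]   # (arm angle, image x)

gain, bias = 0.0, 0.0
for _ in range(2000):                        # plain gradient descent on squared error
    for angle, x in samples:
        error = (gain * angle + bias) - x
        gain -= 0.05 * error * angle
        bias -= 0.05 * error

def command_for(target_x):
    # Invert the learned forward model to pick a command for a desired view.
    return (target_x - bias) / gain

print(round(gain, 1), round(bias, 1))        # roughly 60 and 36 for these samples
print(round(command_for(50.0), 2))           # angle expected to put x near 50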
Robotics researchers have also investigated a weaker form of feedback. In reinforcement learning, a reward signal tells the robot when it has done the right thing. One problem with this approach is that the robot does not really know how close it has come to the optimal solution or how much closer it might come if it varied some of its parameters more. Also, it is difficult to propagate such a reward signal across a series of actions to determine which ones truly produced the desirable outcome. Nevertheless, researchers have used reinforcement learning both to craft the transfer functions of individual behaviors24,28 and to design the arbitration and sequencing logic for a given fixed set of behaviors.29 Generally, the smaller the learning problem and the more frequent the reward signal, the faster such robots can learn.
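The sketch below uses tabular Q-learning, one common reinforcement learning algorithm (not necessarily the one the cited systems used), on an invented one-dimensional corridor task. It shows both issues in miniature: the reward arrives only at the goal, and the update rule must propagate that delayed signal backward across the series of actions over many episodes.

# Tabular Q-learning on an invented one-dimensional corridor: reward arrives
# only at the goal cell, and the update rule slowly propagates that delayed
# signal back across the series of actions that led there.
import random

N_CELLS, GOAL = 5, 4
ACTIONS = (-1, +1)                                  # step left, step right
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(state, action):
    nxt = min(max(state + action, 0), N_CELLS - 1)
    reward = 1.0 if nxt == GOAL else 0.0            # sparse reward signal
    return nxt, reward, nxt == GOAL

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_CELLS)}
print(policy)          # every non-goal cell should end up preferring +1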
Another major area of robot learning research involves automatically building maps of the environment. Typically, these systems are unsupervised and use various clustering techniques. Sebastian Thrun has refined and extended Hans Moravec's early work on sonar occupancy grids30 into an elaborate probabilistic map construction and position estimation system.31 Geometric models based on stereo vision have also been popular and have achieved some maturity.32,33 Both approaches typically presuppose a weak model of the environment (walls and corridors), then use this bias to infer structural features from constellations of sensor readings. The only sort of feedback they use is a measure of consistency across different viewpoints.
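As a rough illustration of the occupancy-grid idea (not Moravec's or Thrun's actual formulation), the Python sketch below integrates a few sonar readings along a single one-dimensional ray: cells the beam passes through become more likely to be free, the cell at the measured range becomes more likely to be occupied, and log-odds turn each update into a simple addition. The cell size and sensor-model constants are assumed values.

# Tiny occupancy-grid sketch, one sonar ray over a 1-D strip of cells: cells
# the beam passes through become more likely to be free, the cell at the
# measured range more likely to be occupied. Log-odds make each update a
# simple addition. Cell size and sensor-model constants are invented.
import math

CELL = 0.1                                 # cell size in meters
log_odds = [0.0] * 30                      # 0.0 means a fifty-fifty prior
L_FREE, L_OCC = -0.4, 0.9                  # inverse sensor model (assumed values)

def integrate_ray(measured_range):
    hit = int(measured_range / CELL)
    for i in range(min(hit, len(log_odds))):
        log_odds[i] += L_FREE              # beam traversed this cell: likely empty
    if hit < len(log_odds):
        log_odds[hit] += L_OCC             # beam ended here: likely an obstacle

def probability(i):
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds[i]))

for r in (1.52, 1.48, 1.50):               # three noisy readings of the same wall
    integrate_ray(r)

print([round(probability(i), 2) for i in (5, 14, 15)])   # a free cell, then the wall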
Finally, learning can team up with high-level reasoning to further enhance an agent's competence. First, learning how to accomplish substeps is far easier than directly inducing a complete procedure. Second, a reasoning system can leverage even a relatively small amount of learned knowledge by using it in conjunction with a wide range of other preexisting plan steps. High-level (symbolic) learning in general has been much investigated. Powerful algorithms exist for empirically inducing representations from collections of examples. Also, deductive generalization techniques can use an underlying domain theory to explain a concrete case and, from this, derive a general solution for a whole class of problems. Combining inductive, explanation-based, and case-based approaches can significantly enhance the flexibility and adaptability of robots.17,34 Unfortunately, owing to the difficulty of lower-level issues, researchers have left symbolic learning in robotics largely untouched until now.

Communication

Linguistic communication is one of the main characteristics that distinguish humans from animals. To aspire to sentience—a fully human level of intelligence—an agent must be able to explain its reasoning and motivations. That is, its internal beliefs, desires, and intentions must be accessible, to some extent, to outside agents.

Accessibility lets an outside agent more easily specify commands or provide information to the robot. The agent can issue commands by either specifying top-level goals or directly enumerating whole sequences of requested steps (like an old command line interface). This is probably language's most significant direct use in robotics.

Yet language is also invaluable for grounding symbols. For instance, a particular room's name is not something the manufacturer can necessarily hard-code into a robot direct from the factory. But you could easily stand in a room and declare "This is the kitchen" to let the robot know what you mean by the command "Go to the kitchen." In this way, symbol grounding gives the human and robot a common vocabulary for interpreting commands and discussing alternatives.35 You can also ground other things, such as adjectives ("red") or verbs ("follow"). You can even assign shorthand invocation designations, much like procedure names, to whole sequences of steps (for example, find(x) + pick-up(x) + return-home + drop = clean-up(x)).
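The mechanics of such a shorthand can be sketched in a few lines of Python; the primitive bodies below are stubs standing in for real sensor and motor routines, and the names simply echo the example above.

# Binding a shorthand name to a whole sequence of grounded primitives, echoing
# find(x) + pick-up(x) + return-home + drop = clean-up(x). The primitive bodies
# are stubs standing in for real sensor and motor routines.
from functools import partial

def find(x):        print(f"searching for the {x}")
def pick_up(x):     print(f"grasping the {x}")
def return_home():  print("driving back to the home position")
def drop():         print("releasing the gripper")

def clean_up(x):
    # One new name the user can invoke for the whole composed sequence.
    for step in (partial(find, x), partial(pick_up, x), return_home, drop):
        step()

clean_up("red cup")     # e.g., invoked after hearing "clean up the red cup"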
Of course, not all learning must be rote. You can also use language to specify an item's generic class or explicitly call out its important properties, to help the robot induce a recognizer for it.36 For instance, "This is a bottle because of its shape" implies that the object's color, an easily observed overt property, is not relevant.



Perhaps most important, you can use language to probe the robot's goals and plans. This can be invaluable for debugging, much as the dependency-directed backtrace of rule calls in an expert system helps justify its conclusions. With such a technique, the user can correct any underlying misconceptions the robot might have. It also lets the user determine when the robot's planning system has hit an impasse and hence what additional knowledge and inference rules the robot needs. These uses are much closer to the psychological rationale for language we mentioned at this section's start.

Integration & the articles in this issue

While no robot in this issue claims to be fully sentient, progress is occurring on various fronts. And, although the integration of action, reasoning, learning, and communication is complex, some consensus seems to be emerging. For instance, to allow reasoning but retain reactivity, layered control architectures are popular. Moreover, at some level (but possibly below the emotional and hormonal responses), reasoning is centralized to promote the robot's unity of purpose. Yet these researchers do not implicitly trust (the results of) reasoning. Their systems typically incorporate a lot of machinery to check the predictions made at regular intervals. They also provide facilities for debugging and partial replanning when the robot encounters unexpected events.

Furthermore, not all decisions result from general-purpose reasoning. In several systems, complex, special-purpose engines handle navigation, image processing, and speech recognition. Similarly, researchers tend not to use a single monolithic learning agency but rather sprinkle various types of learning throughout the robot's subsystems. Finally, reasoning and language appear to have a tight, almost Whorfian, coupling. Reasoning is most easily cast as a symbol-based system; language helps ground out these symbols, whether they represent objects or actions.

Such symbol grounding for natural language is the focus of Luc Steels' article. His Talking Heads project attempts to build a common vocabulary based on agents with a similar perceptual organization. It uses a shared attentional focus derived from the distinctiveness of presented objects. The projects with Sony Aibos and SDR humanoids attempt to extend this methodology to robots with more sophisticated sensor systems (including speech recognition). Steels also suggests using the correspondingly more complicated motor systems of these robots (with many degrees of freedom) as a substrate for learning action labels.

The articles by Francois Michaud, Michael Beetz, and Hideki Asoh and their colleagues describe the state of the art in integrated robotic systems, all using variations on the hybrid deliberative-reactive control architecture. Michaud and his colleagues designed their Lolitta Hall robot specifically to address the AAAI Mobile Robot Challenge: register for the conference, schmooze, then give a talk. They use an intriguingly complex system of finite-state machines, reactive behaviors, and emotional modifiers to schedule the robot's activities.

Beetz's Rhino also performs indoor navigation. Rhino has a sophisticated probabilistic sonar system for mapping and localization and employs a text-based language interface for accepting user commands. The robot uses an interesting clustering-based learning algorithm to learn common path segments, which the robot's reasoning system can then incorporate as substeps of an overall trajectory to enhance its adaptability.

Asoh's Jijo-2 similarly navigates through an office environment but uses full speech recognition to let a human specify travel destinations and to obtain relevant information from the human. Jijo-2's interaction capabilities include turning to the user on the basis of the direction of sound and detecting and recognizing the user's face. In addition, during supervised map learning, a human can verbally indicate relevant landmarks as the robot encounters them.

Ian Horswill examines how to bind linguistic entities to sensor data and what architectures support this naturally. His use of visual routines and markers is inspiring in its effectiveness and elegance. He also shows how a largely reactive architecture can perform pseudosymbolic reasoning and why such inference is useful for a robot. Although this research is very basic, his group has succeeded in building several robots that competently operate using these principles.

Finally, Stanislao Lauria and his colleagues demonstrate in detail how a robot can interpret linguistic travel directions. This includes understanding what pieces of information are needed, requesting them when they are not apparent or ambiguous, and supplying default values as appropriate. The authors also derive an interesting set of motion primitives from a corpus of human interactions and suggest how to combine these primitives to yield higher-level named routines (such as "goto-library").

To achieve a modicum of sentience, a robot must be animate, its behavior adaptable, and its motivations accessible. Figure 1 depicts the relation of these desiderata to the basic abilities of communication, action, reasoning, and learning. Creating such a semisentient robot touches on many of the central issues of AI: planning, natural language, vision, and learning. Although classical deliberative systems are eminently suited to natural languages, they have proved difficult to apply to real-time systems. Behavior-based approaches, on the other hand, are very responsive to their environment yet have problems dealing with symbols. Past attempts to integrate the two traditions have met limited success—the flexibility to accommodate and reason about new situations seems missing.

[Figure 1. The relation between integrated robot intelligence and the definition of semisentience. The figure pairs the basic abilities (communication, reasoning, learning, and perception & action) with the corresponding desiderata: accessible, adaptable, and animate.]

The merger of these abilities is important, because to achieve a true measure of sentience a robot must not only be alert and proficient at some real task, it must also provide a friendly, high-level interface to humans. Such an interface serves not only for telling the robot what to do but also for explaining how to do it and how to ground relevant symbolic terms in the robot's sensor and motor primitives. The articles in this issue explore new ways of combining communication with reactivity, reasoning, and learning. As will become apparent, although the resulting robots might not be fully sentient by human standards, they certainly provide enthralling demos—and perhaps a glimpse of what's to come.


References

1. D. Stork, ed., HAL's Legacy: 2001's Computer as Dream and Reality, MIT Press, Cambridge, Mass., 1997.
2. M. Brady, "Artificial Intelligence and Robotics," Artificial Intelligence, vol. 26, no. 1, Apr. 1985, pp. 79–121.
3. G. Walter, The Living Brain, W.W. Norton, New York, 1953.
4. H. Moravec, Robot: Mere Machine to Transcendent Mind, Oxford Univ. Press, Oxford, UK, 1998.
5. T. Winograd, Understanding Natural Language, Academic Press, San Diego, Calif., 1972.
6. P. Agre and D. Chapman, "Pengi: An Implementation of a Theory of Action," Proc. 6th Nat'l Conf. Artificial Intelligence (AAAI 87), AAAI Press, Menlo Park, Calif., 1987, pp. 268–272.
7. R. Arkin, Behavior-Based Robotics, MIT Press, Cambridge, Mass., 1998.
8. R. Brooks, Cambrian Intelligence, MIT Press, Cambridge, Mass., 1999.
9. J. Connell, Minimalist Mobile Robotics, Academic Press, San Diego, Calif., 1990.
10. M. Mataric, "Environment Learning Using a Distributed Representation," Proc. 1990 IEEE Int'l Conf. Robotics and Automation (ICRA 90), IEEE CS Press, Los Alamitos, Calif., 1990, pp. 402–406.
11. P. Maes, "The Dynamics of Action Selection," Proc. 11th Int'l Joint Conf. Artificial Intelligence (IJCAI 89), vol. 2, Morgan Kaufmann, San Francisco, 1989, pp. 991–998.
12. J. Connell, "SSS: A Hybrid Architecture Applied to Robot Navigation," Proc. 1992 IEEE Int'l Conf. Robotics and Automation (ICRA 92), IEEE CS Press, Los Alamitos, Calif., 1992, pp. 2719–2724.
13. E. Gat, "Integrating Planning and Reacting in a Heterogeneous Asynchronous Architecture for Controlling Real-World Mobile Robots," Proc. 10th Nat'l Conf. Artificial Intelligence (AAAI 92), AAAI Press, Menlo Park, Calif., 1992, pp. 809–815.
14. J. Firby et al., "Programming CHIP for the IJCAI-95 Robot Competition," AI Magazine, vol. 17, no. 1, Spring 1996, pp. 71–81.
15. D. Isla et al., "A Layered Brain Architecture for Synthetic Creatures," Proc. 17th Int'l Joint Conf. Artificial Intelligence (IJCAI 01), Morgan Kaufmann, San Francisco, 2001, pp. 1051–1058.
16. K. Perlin and A. Goldberg, "Improv: A System for Scripting Interactive Actors in Virtual Worlds," Proc. 23rd Ann. Conf. Computer Graphics (SIGGRAPH 96), ACM Press, New York, 1996, pp. 205–216.
17. L. Seabra Lopes, Robot Learning at the Task Level: A Study in the Assembly Domain, PhD thesis, Faculdade de Ciências e Tecnologia, Univ. Nova de Lisboa, Lisbon, Portugal, 1997.
18. G.R. Meijer, Autonomous Shopfloor Systems: A Study into Exception Handling for Robot Control, PhD thesis, Dept. of Computer Systems, Univ. van Amsterdam, Amsterdam, 1991.
19. S.J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Prentice-Hall, Upper Saddle River, N.J., 1995.
20. H. Levesque et al., "Golog: A Logic Programming Language for Dynamic Domains," J. Logic Programming, vol. 31, nos. 1–3, Apr. 1997, pp. 59–83.
21. J.L. Kolodner, Case-Based Reasoning, Morgan Kaufmann, San Francisco, 1993, pp. 331–363.
22. R.S. Michalski, "Inferential Theory of Learning: Developing Foundations for Multistrategy Learning," Machine Learning: A Multistrategy Approach, vol. 4, R.S. Michalski and G. Tecuci, eds., Morgan Kaufmann, San Francisco, 1994.
23. J. Connell and S. Mahadevan, eds., Robot Learning, Kluwer Academic, Dordrecht, Netherlands, 1993.
24. K. Morik, M. Kaiser, and V. Klingspor, eds., Making Robots Smart: Behavioral Learning Combines Sensing and Action, Kluwer Academic, Dordrecht, Netherlands, 1999.
25. S. Harnad, "The Symbol Grounding Problem," Physica D, vol. 42, 1990, pp. 335–346.
26. B. Mel, Connectionist Robot Motion Planning, Academic Press, San Diego, Calif., 1990.
27. J. Albus, Brains, Behavior, and Robotics, McGraw-Hill, New York, 1981.
28. S. Mahadevan and J. Connell, "Automatic Programming of Behavior-Based Robots Using Reinforcement Learning," Artificial Intelligence, vol. 55, no. 2, June 1992, pp. 311–365.
29. P. Maes and R. Brooks, "Learning to Coordinate Behaviors," Proc. 8th Nat'l Conf. Artificial Intelligence (AAAI 90), AAAI Press, Menlo Park, Calif., 1990, pp. 796–802.
30. H. Moravec and A. Elfes, "High Resolution Maps from Wide Angle Sonar," Proc. 1985 IEEE Conf. Robotics and Automation, IEEE CS Press, Los Alamitos, Calif., 1985, pp. 116–121.
31. S. Thrun, "Probabilistic Algorithms in Robotics," AI Magazine, vol. 21, no. 4, Winter 2000, pp. 93–109.
32. N. Ayache and O. Faugeras, "Maintaining Representations of the Environment of a Mobile Robot," IEEE Trans. Robotics and Automation, vol. 5, no. 6, Dec. 1989, pp. 804–819.
33. L. Iocchi, K. Konolige, and M. Bajracharya, "Visually Realistic Mapping of a Planar Environment with Stereo," Proc. 2000 Int'l Symp. Experimental Robotics (ISER 2000), 2000.
34. L. Seabra Lopes, "Failure Recovery Planning in Assembly Based on Acquired Experience: Learning by Analogy," Proc. IEEE Int'l Symp. Assembly and Task Planning (ISATP 99), IEEE Press, Piscataway, N.J., 1999, pp. 294–300.
35. L. Seabra Lopes and A.J.S. Teixeira, "Human-Robot Interaction through Spoken Language Dialogue," Proc. IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS 00), IEEE Press, Piscataway, N.J., 2000, pp. 528–534.
36. J. Connell, "Beer on the Brain," My Dinner with R2D2: Natural Dialogues with Practical Robotic Devices (working notes from AAAI 2000 Spring Symp.), AAAI Press, Menlo Park, Calif., 2000, pp. 25–26.

The Authors

Luís Seabra Lopes is a Professor Auxiliar at Universidade de Aveiro. He also leads the Intelligent Robotics Transverse Activity at the university's Instituto de Engenharia Electrónica e Telemática. His interests include robot learning at the task level, spoken-language human–robot interaction, and service-robotics applications. He holds a licenciatura degree in computer science and a PhD in electrical engineering from Universidade Nova de Lisboa, Lisbon. He is a member of the IEEE. Contact him at the Dept. de Electrónica e Telecomunicações, Univ. de Aveiro, P-3810 Aveiro, Portugal; lsl@det.ua.pt.

Jonathan H. Connell is a research scientist at IBM's T.J. Watson Research Center, where he has worked on mobile-robot navigation, machine learning, wearable computers, natural language understanding, biometric identification, and vegetable recognition. He is a member of the Exploratory Computer Vision Group. He received his PhD in artificial intelligence from MIT. He is a member of the IEEE and AAAI and was an associate editor for IEEE Transactions on Pattern Analysis and Machine Intelligence. Contact him at the IBM T.J. Watson Research Center, 30 Saw Mill River Rd., Hawthorne, NY 10532; jconnell@us.ibm.com.
