Robot groups or colonies can exhibit an enormous variety and richness of behaviors that cannot be observed in single systems. Thus, a group of small, cheap and simple autonomous robots with very limited cognitive capabilities can execute sophisticated tasks that would be impossible or very difficult for independent, non-social robots to accomplish.
In this paper I will describe how these simple machines can be used in real life, for example in mapping unexplored terrain, foraging tasks, box pushing and clustering, technical inspections, structure formation, or any other activity in which a large number of agents is needed to cover a great surface, or in which the shape and characteristics of the robot group must change in real time.
In order to do this, we will first examine the concept of Artificial Intelligence (AI) and then relate it to Multi-Agent Systems (MAS), building the base of knowledge necessary to understand which problems arise in the development of these systems and the different approaches that can be followed to solve them.
First of all, it is necessary to define the targets of this research: physical agents. P. Maes describes agents as "computational systems that try to fulfill a set of goals in a complex, dynamic environment" (1995, 135). In real life, this translates into autonomous robots with local views (each one can see only a part of the environment), working in a decentralized system (there is no leader that guides the robots), and with social abilities (interaction with the environment and with other robots).
With their onboard sensors, these robots can take physical measurements (light, temperature, chemicals), detect objects and, most importantly, determine their position relative to other robots.
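As a rough illustration of these three properties, the following Python sketch models such a physical agent (the class and method names are my own, not taken from the cited works): a limited local view, no central leader, and the ability to locate neighbors relative to itself.

import math
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A minimal physical agent: local view, decentralized, social."""
    x: float
    y: float
    sensing_range: float = 2.0                 # local view: limited radius
    readings: dict = field(default_factory=dict)

    def sense_environment(self, light, temperature):
        # Physical measurements taken with the onboard sensors.
        self.readings = {"light": light, "temperature": temperature}

    def visible_neighbors(self, others):
        # Positions of other robots, relative to this one. Decentralized:
        # each agent only knows what falls inside its own sensing range.
        relative = []
        for other in others:
            dx, dy = other.x - self.x, other.y - self.y
            if other is not self and math.hypot(dx, dy) <= self.sensing_range:
                relative.append((dx, dy))
        return relative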
The next step is clear: use these sensing and communication abilities to establish robot ecosystems, that is, to program a social algorithm into each agent so that they can carry out tasks in groups. A very interesting way to do this, as Cao, Fukunaga and Kahng point out, is to emulate natural societies like ant colonies, "which provide striking proof that systems composed of simple agents can accomplish sophisticated tasks in the real world" (1997, 21).
In these cases, each agent follows very simple rules and is highly reactive. So, in order to execute a task, the main problem is how to distribute it among the different agents (this is a major problem in Distributed Artificial Intelligence, studied as Distributed Problem Solving).
In order to allocate tasks, agents must communicate through weighted request matrices, which are based on a protocol known as Challenge-Response-Contract. It can be understood as a dialogue across the whole system: first, a "Who can?" question is distributed. Only the relevant components respond: "I can, at this price" (the price depends on the availability of that robot to accomplish the task). Finally, a contract is set up, usually in several more short communication steps between the two sides.
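A minimal sketch of this dialogue could look like the following Python fragment (the function and robot names are hypothetical, and the real protocol involves more negotiation steps than shown here):

def run_contract(task, robots):
    """One round of the Challenge-Response-Contract dialogue."""
    # Challenge: broadcast "Who can <task>?" to every robot.
    # Response: the available robots answer "I can, at this price",
    # where the price reflects how busy or far away each robot is.
    bids = {r["name"]: r["cost"](task) for r in robots if r["can_do"](task)}
    if not bids:
        return None                            # nobody volunteered
    # Contract: award the task to the cheapest bidder (in reality this
    # step takes a few more short confirmation messages between sides).
    return min(bids, key=bids.get)

robots = [
    {"name": "r1", "can_do": lambda t: True,  "cost": lambda t: 3.0},
    {"name": "r2", "can_do": lambda t: True,  "cost": lambda t: 1.5},
    {"name": "r3", "can_do": lambda t: False, "cost": lambda t: 0.0},
]
print(run_contract("push-box", robots))        # -> r2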
This idea, which appears to be very simple, is tremendously complex to study. Even in simple environments, it is almost impossible to determine the behavioral repertoire and concrete activities of a Multi-Agent System a priori, because they emerge from internal interactions among the agents, and are therefore extremely hard to model.
So, if the reaction of an agent to a certain stimulus cannot be determined in advance, how can the system be programmed? Here appears the key idea of Artificial Intelligence: learning.
If we give each robot the ability to adapt and to learn as an individual (isolated learning) and as a group (interactive learning), the performance of the system will improve automatically over time, and the robots will be capable of dealing with dynamic changes.
AI learning has been a major object of study during the past 50 years, but its application to Multi-Agent robotics has been developed mainly by Liu and Wu. In their research, they use evolutionary algorithms to achieve interactive learning. This process, which is based on Darwinian selection, works with digital genes.
Let's propose an example to explain this concept. In a test environment, a rectangular box, a target and three robots are placed. The objective of the three agents is to move the box from its starting point to the target, collaborating in order to get the best performance. If we tried the same task with only one robot, we would see that pushing a rectangular box is not easy at all, because whenever the agent pushes along a line that does not pass through the box's center of gravity, the box spins and does not move in the right direction. So, the job of the three robots is to rearrange themselves in order to push the box evenly.

[Drawing 1: Problems of box-pushing using one robot.]
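The reason a single off-center push spins the box is the torque it creates about the box's center of gravity; the following small sketch, with made-up numbers, illustrates it:

def torque_about_center(contact, force):
    """2D torque (z component) of a push applied at point `contact`,
    measured from the box's center of gravity at (0, 0)."""
    rx, ry = contact
    fx, fy = force
    return rx * fy - ry * fx                   # cross product r x F

# A push aligned with the center of gravity produces no spin...
print(torque_about_center((0.0, -0.5), (0.0, 1.0)))   # 0.0
# ...but the same force applied off-center makes the box rotate.
print(torque_about_center((0.4, -0.5), (0.0, 1.0)))   # 0.4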
Their first tries are random, so the box may not move in the right direction. The robots will notice it and try to guess why. They will make a random change and check whether there has been any improvement in the movement of the box. If that is the case, they will keep that arrangement, but make some further minor changes. Of course, this genetic algorithm is much more complex, but its explanation is out of the scope of this essay. What we should keep in mind is that, by trial and error, the system will converge to an efficient configuration, and try to improve it in each generation. Illustration 1 shows the effect of this evolutionary algorithm, developed by Liu and Wu (2001, 161).

[Illustration 1: Box-pushing trajectories created by three group robots. The solid line corresponds to the trajectory of the box (•), whereas the others correspond to the movement traces of the three robots (*). At the beginning, the net pushing force of the robots results in a rather randomized motion of the box. After some generations of selection, a niche is found, representing a globally near-optimal collective motion strategy. © 1999 IEEE]
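The trial-and-error loop described above can be sketched as a simple mutate-and-keep-if-better routine (a much reduced stand-in for Liu and Wu's genetic algorithm; the fitness function below is an invented placeholder):

import random

def evolve(genes, fitness, generations=100, step=0.1):
    """Trial and error: mutate the arrangement; keep the mutation only
    if the box moved better than before."""
    best, best_score = genes, fitness(genes)
    for _ in range(generations):
        # Minor random change to the robots' pushing positions.
        candidate = [g + random.uniform(-step, step) for g in best]
        score = fitness(candidate)
        if score > best_score:                 # improvement: keep it
            best, best_score = candidate, score
    return best

# Invented fitness: pushes are balanced when the lateral offsets of the
# three robots cancel out, so arrangements whose offsets sum to zero win.
fitness = lambda offsets: -abs(sum(offsets))
print(evolve([0.9, -0.2, 0.5], fitness))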
Another example of interactive learning can be observed in another experimental case studied by the same authors. This time, the aim is to create a map of an unknown environment using a swarm of six microbots.
In this situation, communication between the agents is required in order to scan as much terrain as possible in a given time. So, the robots will need to agree among themselves to decide which part of the landscape each of them will scan.
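One simple way to reach such an agreement (a naive sketch of my own, not the authors' actual negotiation scheme) is for each robot to claim the nearest region that no other robot has announced yet:

import math

def allocate_regions(robot_positions, region_centers):
    """Greedy agreement: each robot claims the closest region that no
    other robot has announced yet, so nothing is scanned twice."""
    unclaimed = list(region_centers)
    assignment = {}
    for i, (rx, ry) in enumerate(robot_positions):
        region = min(unclaimed, key=lambda c: math.hypot(c[0] - rx, c[1] - ry))
        assignment["robot%d" % i] = region
        unclaimed.remove(region)               # announced: others skip it
    return assignment

robots = [(0, 0), (5, 0), (0, 5)]
regions = [(1, 1), (6, 1), (1, 6)]
print(allocate_regions(robots, regions))
# {'robot0': (1, 1), 'robot1': (6, 1), 'robot2': (1, 6)}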
Illustration 2 shows three maps. The first one represents the environment that Liu and Wu used in their experiment. The second one is a real map created by mixing the data collected by each of the six robots using a simple predefined motion strategy. Notice that some shapes are not well defined and that parts of the environment remain unexplored. In the last picture, the robots are programmed with an evolutionary strategy, so, thanks to their common agreements, they can scan a wider zone and therefore make fewer errors in the map creation.
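The mixing of the individual scans can be pictured as merging occupancy grids, as in this simplified sketch (the grid encoding is an assumption of mine: -1 for unexplored, 0 for free space, 1 for obstacle):

def merge_maps(partial_maps):
    """Combine each robot's partial grid into one global map.
    Cell values: -1 = unexplored, 0 = free, 1 = obstacle."""
    rows, cols = len(partial_maps[0]), len(partial_maps[0][0])
    merged = [[-1] * cols for _ in range(rows)]
    for grid in partial_maps:
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] != -1:           # this robot explored the cell
                    merged[r][c] = grid[r][c]
    return merged

robot_a = [[0, 1, -1],
           [0, -1, -1]]
robot_b = [[-1, -1, 0],
           [-1, 1, 0]]
print(merge_maps([robot_a, robot_b]))          # [[0, 1, 0], [0, 1, 0]]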
[Illustration 2: Maps created by 6 autonomous robots. © 1999 IEEE]

So far, we have explored the techniques, the theory and the practical uses of Multi-Agent Robotic Systems. Notice, though, that this is a topic under active research, and that the technology of the agents (robots) is being improved every year, with new advances in materials, electronics, nanotechnology and communications. So, we should not be surprised if, in 30 years' time, we start having medical nanobots that can repair organic tissues, or swarms of little robots that can explore inaccessible places.
As Weiß said, "intelligence and interaction are deeply and inevitably coupled to each other" (1996, 3). AI researchers have realized this, and it has reinforced the common saying that unity is strength.
Bibliography:
– Arkin, Ronald C., and George A. Bekey, eds. Robot Colonies. Boston, Dordrecht and London: Kluwer Academic Publishers, 1997.
– Burgard, Wolfram. Collaborative Multi-Robot Exploration. Freiburg: IEEE Press, 2000.
– Cao, Y. Uny, Alex S. Fukunaga and Andrew B. Kahng. "Cooperative Mobile Robotics: Antecedents and Directions." Autonomous Robots 4 (1997): 7–27.
– Liu, Jiming, and Jianbing Wu. Multi-Agent Robotic Systems. Florida: CRC Press International, 2001.
– Maes, P. Modeling Adaptive Autonomous Agents. Cambridge: The MIT Press, 1995.
– Weiß, G. Adaptation and Learning in Multiagent Systems: Some Remarks and a Bibliography. München: Technische Universität München, 1996.