DOI 10.1007/s10846-015-0229-8
Received: 21 March 2014 / Accepted: 27 March 2015 / Published online: 21 April 2015
© Springer Science+Business Media Dordrecht 2015
1 Introduction
Over the last few years, a large number of research projects have focused on including new technologies in education. As a result, a wide variety of educational platforms with pedagogical purposes have emerged. These platforms include videos, simulations, virtual laboratories, teleconferences, and remote laboratories. Many of these initiatives use robotics as a basis for their operation, taking advantage of the fact that robotics is a multidisciplinary field that combines issues related to electric drives, mechatronics, control systems, autonomous systems and artificial intelligence in a single system of small dimensions. This makes it a very versatile and motivating field for education at any level [13].
A long list of such applications can be found in the recent literature. For example, [1] and [2] present an introduction to technology using robots with high-school students without previous experience in robotics. The use of robotics to teach engineering technologies to undergraduate students is described in [3] and [4], whereas [5] presents VEGO, an industrial-like modular vehicle for robotics education at the postgraduate level. Most of these platforms include a set of tasks that students develop with robots. The complexity of these tasks depends mainly on two aspects: the educational level of the students and the physical limitations of the robots used.
The main advantages of this type of pedagogical platform include: direct visualization and understanding of the underlying concepts of motion-control systems; a strong basis for controller design; flexibility for testing and validation of different control schemes; and a significant reduction in the effort of setting up lab sessions. These platforms use different kinds of robots, for example, robots fixed to one physical location (e.g., robotic arms) or mobile robots (such as wheeled or tracked robots).
Mobile robots are defined in the literature as automatic machines equipped with sensors to interact with the environment and navigate through it while attempting to achieve some objectives. The complexity of the objectives can vary significantly, from light or line following to obstacle detection and avoidance in a dynamic environment, advanced signal processing, motion control, wireless communication, image processing, formation control, and many other advanced topics [4].
Research in mobile robotics is divided into a wide number of subfields [6], such as robot localization [7], relative and absolute position estimation [8], point stabilization [9], obstacle detection and avoidance [10], exploration and area mapping [11], path following and tracking control [12], etc. Despite all this development, most of these fields have been developed and implemented on single mobile robots. Much less work has been carried out on distributed or multi-robot systems, where more than one robot is coordinated.
Current research in multi-robot systems is divided into many areas, such as biological inspiration [13], cooperative mapping and exploration [14], event-based communication [15], formation control [16], motion coordination [17], etc. These research areas pose different interesting challenges, which can be used in education to teach fundamental concepts of control engineering. However, only a few of these areas have been introduced in education, mainly due to the cost and complexity of these systems, since they need a communication infrastructure to guarantee proper behavior. That is why it is important to develop simulations to introduce this kind of experiment in education.
In this sense, some simulators of multi-robot systems can be found in the literature. For example, [18] presents the free and open-source simulator ARGoS, which is focused on the real-time simulation of large heterogeneous robot swarms.
$$v_{R,L} = \omega\left(R \pm \frac{L}{2}\right) \qquad (1)$$

where $\omega$ is the rotation rate about the instantaneous center of curvature (ICC), $R$ is the distance from the ICC to the midpoint between the two wheels, and $L$ is the distance between the wheels.
In order to achieve the control objective, the distance $d$ and the angle $\alpha$ between these two points are calculated with Eqs. 4 and 5:

$$d = \sqrt{(x_p - x_c)^2 + (y_p - y_c)^2} \qquad (4)$$

$$\alpha = \tan^{-1}\left(\frac{y_p - y_c}{x_p - x_c}\right) \qquad (5)$$
The two drive velocities are always parallel vectors and, at the same time, perpendicular to the wheel axis. Furthermore, the wheels are assumed to roll without slipping. These conditions impose restrictions known as non-holonomic constraints. The robot can change its direction by varying the relative rotation of its wheels, so it does not need an additional steering movement to turn; however, the rotation rate with respect to the ICC must be the same for both wheels.
The kinematic model of the robot in Cartesian coordinates is given by Eq. 3, where $\theta$ is the heading angle of the robot [26, 27]:

$$\dot{x}_c = v\cos(\theta), \qquad \dot{y}_c = v\sin(\theta), \qquad \dot{\theta} = \omega \qquad (3)$$
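As a minimal sketch, the kinematic model of Eq. 3 and the target distance and angle of Eqs. 4 and 5 can be written in a few lines of Python (the function names are illustrative, not part of the platform):

```python
import math

def step_unicycle(x, y, theta, v, omega, dt):
    """One Euler-integration step of the unicycle kinematic model (Eq. 3)."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

def distance_and_angle(xc, yc, xp, yp):
    """Distance d (Eq. 4) and bearing alpha (Eq. 5) from (xc, yc) to (xp, yp)."""
    d = math.hypot(xp - xc, yp - yc)
    alpha = math.atan2(yp - yc, xp - xc)  # atan2 handles all four quadrants
    return d, alpha
```

Note that `atan2` is used instead of a plain arctangent so that the bearing is correct in every quadrant, which matters when the target point is behind the robot.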
The angular velocity of the robot is computed from the heading error $\theta_e = \alpha - \theta$, saturated at the maximum turning rate $\omega_{max}$:

$$\omega = \omega_{max}\sin(\theta_e) \qquad (9)$$
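A minimal sketch of such a saturated point-stabilization law is shown below; the linear-velocity saturation radius `k_r` around the goal is an assumed parameter for illustration, not a value recoverable from the source:

```python
import math

def control_law(d, alpha, theta, v_max, w_max, k_r):
    """Saturated control law: linear velocity proportional to the distance
    error d (saturated at v_max outside the assumed radius k_r), angular
    velocity proportional to the sine of the heading error (Eq. 9)."""
    theta_e = alpha - theta                     # heading error
    v = v_max if d > k_r else v_max * d / k_r   # slow down near the goal
    w = w_max * math.sin(theta_e)               # Eq. 9
    return v, w
```

With $d$ and $\alpha$ obtained from Eqs. 4 and 5, the resulting $v$ and $\omega$ feed directly into the kinematic model of Eq. 3.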
For each candidate direction, VFH* computes the new position and orientation that the robot
would have after moving a projected step distance. At every projected position, VFH+ is again
used to construct a new polar histogram based on
the map information. This histogram is then analyzed
for candidate directions, called projected candidate
directions.
2.2 Robots Formation Control
The formation control is carried out in a master-slaves
way. Each robot needs to know its current position
(xc , yc ) and the desired point P (xp , yp ) to reach the
formation (see Fig. 1). Figure 3 shows the communication flow between the robots and the computer (M
Master; S1 Slave1 ; S2 Slave2 ; S3 Slave3 ). All
robots receive their current absolute positions (PM ,
P1 , P2 , P3 ) and the desired point (reference). Once
all robots have received their positions, the master
robot sends its own position to the slave robots (Mp
(SnRef )) where n indicates the corresponding slave.
The slave robots use the masters position as a reference to reach the formation, since the formation is
always reached around the master robot. Meanwhile,
The distance and the angle to each slave's position in the formation, which is offset by $(x_d, y_d)$ from the reference point, are computed with Eqs. 10 and 11:

$$d = \sqrt{(x_p - x_c - x_d)^2 + (y_p - y_c - y_d)^2} \qquad (10)$$

$$\alpha = \tan^{-1}\left(\frac{y_p - y_c - y_d}{x_p - x_c - x_d}\right) \qquad (11)$$
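Eqs. 10 and 11 can be sketched as follows; the `circle_offsets` helper is a hypothetical illustration of how the slots of a circular formation around the master robot could be generated, not the platform's actual code:

```python
import math

def slot_distance_and_angle(xc, yc, xp, yp, xd, yd):
    """Distance (Eq. 10) and angle (Eq. 11) from a slave at (xc, yc) to its
    formation slot, offset by (xd, yd) from the reference point (xp, yp)."""
    dx = xp - xc - xd
    dy = yp - yc - yd
    return math.hypot(dx, dy), math.atan2(dy, dx)

def circle_offsets(n, radius):
    """Illustrative helper: n slot offsets evenly spaced on a circle,
    as in the circular formations of the experiments."""
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]
```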
The effect of these actions is different when the formation mode is free. In the first case, the student can set a new formation by dragging and dropping the robots. In the other cases, dragging and dropping a robot can be used as a disturbance to test a control strategy.
Panel No. 3 allows students to define the properties of the selected obstacle avoidance algorithm (VFH, VFH+ or VFH*), for example, the properties of the histogram and the vision margin size of the robots. Panel No. 4 defines physical properties of the robots (minimum turning radius and control law) and of the obstacles (size, velocity and security margin size around them). Panel No. 5 is a tabbed panel with two tabs. The first tab shows two graphs: the position of the robots and the control law signals. The second tab shows a polar histogram of the master robot. Figure 5 depicts an obstacle avoidance experiment (VFH+ algorithm) with the RFC SIM. The arena is shown on the left side with a typical configuration of obstacles (A, B, C, D). The robot can see the obstacles situated inside the vision margin (gray circumference). At each simulation step, the robot builds a histogram in which the obstacles are represented as occupied/free sectors of its environment. In the example of Fig. 5, the obstacles A, B and C are included in the histogram because they are inside the vision margin. The level of each obstacle in the representation depends on the distance from the robot to the obstacle: large values in the histogram indicate that the obstacle is very close to the robot. The occupied sectors around the robot are represented in red and the free sectors in green. With the values of the occupied/free sectors, the robot can calculate its velocity to avoid the obstacles and reach the destination point (represented with a red cross). The simulator is available at: http://unilabs.dia.uned.es/mod/ejsapp/view.php?id=1157.
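The histogram construction described above can be sketched as follows; the sector count, vision margin and weighting scheme are illustrative assumptions, not the simulator's actual parameters:

```python
import math

def polar_histogram(robot, obstacles, n_sectors=36, vision_margin=50.0):
    """VFH-style polar histogram: each sector accumulates a weight that grows
    as obstacles inside the vision margin get closer to the robot."""
    hist = [0.0] * n_sectors
    rx, ry = robot
    for ox, oy in obstacles:
        d = math.hypot(ox - rx, oy - ry)
        if d > vision_margin or d == 0.0:
            continue  # outside the gray circumference: not seen by the robot
        bearing = math.atan2(oy - ry, ox - rx) % (2 * math.pi)
        sector = int(bearing / (2 * math.pi / n_sectors)) % n_sectors
        hist[sector] += vision_margin - d  # larger value = closer obstacle
    return hist
```

Sectors with a value above a threshold would then be treated as occupied (red) and the rest as free (green) when selecting a steering direction.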
3.2 RFC EXP: Robots Formation Control
Experimental Environment
The other part of the platform is the RFC EXP, the experimental environment that has been developed to carry out formation control experiments with mobile robots in the laboratory. To develop this kind of experiment, the robots need to know their absolute positions and to communicate with each other. The setup is composed of five hardware components (PC, Moway robots, CCD Camera, IP Camera and RF USB Module) and two software components (Gateway Module and SwisTrack) that are deeply related. Figure 6 shows the architecture and the relations between these elements.
The PC has a CCD Camera connected via a FireWire port. This camera is installed on the ceiling of the laboratory and obtains live images of the experiments. These images are processed by SwisTrack, an open-source software tool developed at the École Polytechnique Fédérale de Lausanne (EPFL) for the tracking of robots [32]. This application computes the absolute position of each robot and builds a data packet with this information. This packet is sent via a TCP/IP port to the Gateway Module, an application developed in Visual C#. This module performs three main tasks: a) to process the packet received from SwisTrack; b) to send the information to the corresponding robot using wireless communication (RF USB Module); and c) to receive and respond to the requests from the RFC SIM. The RF USB Module is a hardware component for the wireless communication between the robots and the PC using radio frequency. Students can interact with the robots through the Internet using the RFC SIM and visualize the behavior of the experiments using the video streaming of the IP Camera [33, 34].
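The first Gateway task, processing the position packets from SwisTrack, might be sketched as below. The newline-delimited `id,x,y,angle` packet layout, host and port are assumptions for illustration only; the actual format depends on how SwisTrack's output component is configured:

```python
import socket

def parse_packet(line):
    """Parse one 'id,x,y,angle' record (assumed format) into typed fields."""
    robot_id, x, y, angle = line.strip().split(",")
    return robot_id, float(x), float(y), float(angle)

def send_to_robot(robot_id, x, y, angle):
    """Placeholder for the transmission through the RF USB Module."""
    print(robot_id, x, y, angle)

def run_gateway(host="127.0.0.1", port=3000):
    """Read newline-delimited pose packets from SwisTrack over TCP and
    forward each robot its absolute position and orientation."""
    with socket.create_connection((host, port)) as sock:
        buf = b""
        while chunk := sock.recv(4096):
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                send_to_robot(*parse_packet(line.decode()))
```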
The most important components of this setup are the Moway robots. They are small autonomous wheeled mobile robots designed mainly for practical applications and for teaching robotics, technology, electronics, and control. The main components of these robots are: two independent servo motors, a light sensor, a temperature sensor, two infrared line sensors, four LEDs, a three-axis accelerometer and a wireless module for communication by radio frequency. All these peripherals are connected to a PIC microcontroller that governs the robot [24].
3.3 Experimental Results with the Platform
Figure 7 shows a sequence representing a formation control experiment with the experimental environment (from t=0s to t=26s). The experiment is composed of five slave robots (marked with numbers from 1 to 5) and one master robot (marked with the letter M) in a circular formation where the master robot is in the center of the circle. At the beginning of the sequence, the robots are trying to reach their desired positions in the formation. The slave robots take more time to reach the formation because their references are the previous positions of the master robot, which keep changing during its movement. Besides, each robot considers the rest of the robots as obstacles and tries to avoid them. For these reasons, the trajectory to the destination position is not always a straight line. After
t=16s the master robot has reached its goal position and the slave robots have reached their positions in the formation. At this time, the reference of the master robot is changed again to carry out the experiment with a different reference. As in the previous case, at t=19s the master robot is close to reaching its desired position while the slave robots are still far from theirs. At t=26s all robots have reached their positions, so the formation is achieved again.
Figure 8 shows the data of the robot positions collected during the experiment. The position of the master robot is represented with red stars and the slave robots are represented with dotted lines of different colors. The axes of the coordinates are in centimeters and the black dotted line represents the circular formation around the master robot position.
Fig. 8 Experiment of robots formation control data
(c) Modify the parameters of the selected control and observe the influence on the position control.
(d) Repeat the three previous steps with other control algorithms.
2. Position control of the robot with obstacle avoidance: This experiment consists of adding obstacles to the already described scenario. The main objective is to introduce the obstacle avoidance algorithms into the position control of the robots in a dynamic environment. The student should proceed as follows:
(a) Select a predefined control strategy and add obstacles to the arena (as explained in the Applications Interfaces document).
(b) Change the destination of the robot several times and observe its behavior using the animation and the graphs. Observe the Polar Histogram representation in the corresponding tab.
(c) Uncheck the Obstacles avoidance option,
change the destination of the robot and
observe its behavior.
(d) Check the Obstacles avoidance option and
observe the behavior of the robot for each
obstacle avoidance algorithm (VFH, VFH+
and VFH*).
(e) Compare the results for the different strategies and their parameters (Min Valley, hm,
TauL and TauH).
(f) Check the Moving obstacles option. With this option, the obstacles start moving, transforming the arena into a more dynamic scenario. Observe the behavior of the robots in these conditions. Change the velocity of the obstacles and observe the results.
3. Formation control: This experiment consists of adding more robots to the arena to perform formation control. The main objective is to coordinate a group of robots to accomplish a common task. In this case, the experiments are carried out in a master–slave architecture using the communication flow diagram of Section 2.2. Another important aspect is the level of cooperation in the formation, that is, whether or not the master takes into account the positions of the slaves. The student should proceed as follows:
control law from the different predefined algorithms and observes the behavior of the robot
through the web-cam.
3. Formation control: This experiment is also similar to the one in virtual mode. As in virtual mode, the main objective is to coordinate a group of robots to accomplish a common task. The following procedure is followed:
(a) Add more robots to the arena (the robots are added by the instructor, who is in the lab).
(b) Select the formation type (Line, Circle or
Free).
(c) Change the destination point of the master
robot and observe the behavior of the system.
(d) The instructor takes a slave robot and places it at a different position as a way to introduce a disturbance into the formation. The student must observe the behavior of the formation.
(e) The previous step is repeated but with the master robot.
(f) Check the Cooperative option and change
the destination point of the master robot.
What happens?
(g) Modify the value of Kf (the cooperative level parameter) and change the destination point of the master robot. What happens to the formation? Describe the influence of Kf on the master robot's behavior.
4. Control design: In this task, the students test the controller designed in the simulation phase on the real system. To do so, the student needs to send the code to the instructor, who loads the program into the robot. This can be considered a drawback of the platform, but we are working to sort it out. The data of the experiments are stored in the server and the student can download them later.
Once the virtual and remote sessions are completed, the student has to send the instructor a report with all the results obtained in the laboratory session.
for experimentation with mobile robots at the postgraduate level. This platform is used in the Systems and Control Engineering Master Program offered by the National University of Distance Education of Spain (UNED) and the Complutense University of Madrid (UCM). RFCP is used in the robotics module, one of the eight modules into which this program is divided.
The use of the platform exposes students to hands-on learning, contributing to their development as engineers. At the same time, they get motivated by simulating and experimenting with mobile robots, possibly because they feel they are dealing with real and novel problems. Furthermore, they can test their results and quickly detect and correct their mistakes, which helps them understand relevant concepts in an attractive environment.
The platform is prepared to implement other experiments with different goals. In the near future, new challenging experiments will be proposed: new control strategies, other obstacle avoidance algorithms, and the incorporation of other kinds of robots into the platform. At the same time, some drawbacks of the platform should be alleviated, for example: a) the implementation of the robot controller in remote mode (from the client side); b) the connection of the robots to a PC to charge the battery and to load the code; and c) the introduction of disturbances into the system in remote mode.
Conflict of interests The authors declare that they have no conflict of interest.
References
1. Jimenez, E., Bravo, E., Bacca, E.: Tool for experimenting with concepts of mobile robotics as applied to children's education. IEEE Trans. Educ. 53(1), 88–95 (2010)
2. Barak, M., Zadok, Y.: Robotics projects and learning concepts in science, technology and problem solving. Int. J. Technol. Des. Educ. 19(3), 289–307 (2009)
3. Gomez-De-Gabriel, J., Mandow, A., Fernandez-Lozano, J., García-Cerezo, A.: Using LEGO NXT mobile robots with LabVIEW. IEEE Trans. Educ. 54(1), 41–47 (2011)
4. Chew, M., Demidenko, S., Huang, L., Messom, C., Sen, G., Watts, M.: Simple mobile robots for introduction into engineering. IEEE IMTC (1), 797–802 (2009)
5. Navarro, P., Fernandez, C., Sanchez, P.: Industrial-like vehicle platforms for postgraduate laboratory courses on robotics. IEEE Trans. Educ. 56(1), 34–41 (2009)
6. Dhaouadi, R., Sleiman, M.: Development of a modular mobile robot platform: Applications in motion-control education. IEEE Trans. Ind. Electron. 5(4), 35–45 (2011)
7. Hyeonwoo, Ch., Kim, S.: Mobile robot localization using biased chirp-spread-spectrum ranging. IEEE Trans. Ind. Electron. 57(8), 2826–2835 (2010)
8. Xing, X., Byung-Jae, Ch.: Position estimation algorithm based on natural landmark and fish-eye lens for indoor mobile robot. Int. Conf. Comm. Sen. Net. (1), 596–600 (2011)
9. Zhengcai, C., Yingtao, Zh., Shuguo, W.: Trajectory tracking and point stabilization of nonholonomic robot. IEEE/RSJ Int. Conf. Intell. Rob. Sys. (1), 1328–1333 (2010)
10. Bonin-Font, F., Burguera, A., Ortiz, A., Oliver, G.: Combining obstacle avoidance with robocentric localization in a reactive visual navigation task. IEEE Int. Conf. Ind. Tech. (1), 19–24 (2012)
11. Chaos, D., Chacon, J., Lopez-Orozco, J.A., Dormido, S.: Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW. Sensors 13(2), 2595–2612 (2013)
12. Aneesh, D.: Tracking controller of mobile robot. IEEE Int. Conf. Elect. Inf. Tech. (1), 343–349 (2012)
13. Lee-Johnson, C.P., Carnegie, D.A.: Mobile robot navigation modulated by artificial emotions. IEEE Trans. Syst. Man Cybern. 40(2), 469–480 (2010)
14. Defoort, M., Veluvolu, K.C.: A motion planning framework with connectivity management for multiple cooperative robots. J. Intell. Robot. Syst. 75(2), 343–357 (2014)
15. Guinaldo, M., Farias, G., Fabregas, E., Sanchez, J., Dormido-Canto, S., Dormido, S.: An interactive simulator for networked mobile robots. IEEE Netw. 26(3), 14–20 (2012)
16. Xue, D., Yao, J., Chen, G., Yu, Y.: Formation control of networked multi-agent systems. Control Theory Appl. 4(10), 2168–2176 (2010)
17. Kostic, D., Adinandra, S., Caarls, J., Nijmeijer, H.: Collision-free motion coordination of unicycle multi-agent systems. Amer. Control Conf. (1), 3186–3191 (2010)
18. Pinciroli, C., Trianni, V., O'Grady, R., Pini, G., Brutschy, A., Brambilla, M., Mathews, N., Ferrante, E., Di Caro, G., Gambardella, L.M., Ducatelle, F., Stirling, T., Gutierrez, A., Dorigo, M.: ARGoS: a modular, multi-engine simulator for heterogeneous swarm robotics. Proceedings of IROS, 5027–5034 (2011)
19. Guyot, L., Heiniger, N., Michel, O., Rohrer, F.: Teaching robotics with an open curriculum based on the e-puck robot, simulations and competitions. In: Proceedings of the 2nd International Conference on Robotics in Education, Vienna, Austria (2011)
Sebastian Dormido-Canto received the M.S. degree in Electronic Engineering from the Pontificia Comillas University, Madrid, Spain, in 1994 and the Ph.D. degree in Sciences from UNED, Madrid, in 2001. He has been an Associate Professor in the UNED Department of Computer Sciences and Automatic Control since 2003. His current research interests are the analysis and design of control systems via the Internet, automatic learning with big data, and high-performance interconnection networks for clusters of workstations.
Jose Sanchez received the M.S. degree in Computer Sciences from the Polytechnic University, Madrid, Spain, in 1994 and the Ph.D. degree in Sciences from UNED, Madrid, in 2001. He has been an Assistant Professor in the UNED Department of Computer Sciences and Automatic Control since 1993. His current research interests are networked control systems, event-based control, and engineering education.