1 Introduction
Robot navigation is the task in which an autonomous robot moves safely from
one location to another. It involves three primary questions [Leonard and
Durrant-Whyte, 1991]:
2 Motivation
In 1920, Karel Čapek presented the play R.U.R. (Rossum's Universal Robots),
which depicted a future in which all workers are automated. A worker was
called a robot, now a well known term for "a mechanical device
that sometimes resembles a human and is capable of performing a variety of
often complex human tasks on command or by being programmed in advance"
[Mifflin, 1995]. Robots would allow humans to concentrate on more
important tasks, instead of, say, cleaning one's house. Currently the
community employs other humans to do its unwanted tasks; however,
delegation to humans may not be an ideal solution. For some tasks, humans
come with major limitations such as a need for sleep, food and a desire for
money. A robot, on the other hand, does not need to sleep, eat or be paid, so
the development of robots for the community is promising.
So it is clear an autonomous mobile robot would be of great benefit to human
society. However, some may argue that robots will create unemployment.
One response to this, from a well known scientist specialising in
robotics and computer vision, is: "at the moment the highest number of robots
are in the nations with the lowest levels of unemployment. If we can make
robots for the consumer, that will create a lot of employment." [Zelinsky,
2000].
However, a robot worker would need to be mobile, which
is the problem of robot navigation mentioned in the introduction. Robot
navigation is sometimes referred to as the most fundamental problem in
robotics [Cox, 1991], so a project addressing any area of the robot navigation
problem, such as localisation, would be of great benefit to robotics and the
robot dream.
There are a number of problems faced by mobile robot localisation [Thrun
et al., 2000]:
• Global positioning - the robot is initially lost, i.e. it must be able to
maintain multiple ideas (hypotheses) about its location.
There has been much research on MCL, such as [Fox et al., 2000]
and [Thrun et al., 2000]; however, most of the well known work stems from
Sebastian Thrun [Thrun, 2006].
3 Localisation Theory
This section describes localisation theory by first outlining the information
available to a robot, then discussing methods for representing a pose
estimate, and finally outlining the approach used in this project.
3.2 Pose Estimation
Estimating the pose is very much about representing uncertain pose information
[Fox et al., 2000]. The pose of the robot is often represented by a
probability distribution. The distribution may be represented by a Gaussian,
in which case the mean and covariance give the approximation of the robot's
pose. The distribution is represented by:
Gaussian PDF localisation and Gaussian sum PDF localisation use Kalman
filters, which have proved very successful methods for solving the position
tracking problem [Leonard and Durrant-Whyte, 1992], quite effectively solving
the problem of compensating for small, incremental errors. Position
probability grids use Markov localisation to solve the global positioning
problem. However, this method suffers from two problems: its computational
expense, and the fact that accuracy is limited to the resolution of the
approximation. Monte Carlo Localisation (MCL) solves the global positioning
and kidnapping problems, and overcomes the issues of Markov
localisation, improving on it by roughly an order of magnitude in both
efficiency and accuracy.
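Whichever representation is used upstream, a sample-based estimate can always be summarised in the Gaussian form of a weighted mean and covariance over the particles. A minimal sketch, assuming NumPy (the function name and the simple linear averaging of θ are illustrative choices, not taken from the project):

```python
import numpy as np

def gaussian_pose_estimate(particles, weights):
    """Summarise a weighted particle set as a Gaussian: mean and covariance.

    particles: (M, 3) array-like of (x, y, theta) pose hypotheses.
    weights:   length-M importance weights (need not be normalised).
    Note: a circular mean would be more correct for theta; this sketch
    averages it linearly for simplicity.
    """
    particles = np.asarray(particles, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalise the weights
    mean = w @ particles                       # weighted mean pose
    centred = particles - mean
    cov = (w[:, None] * centred).T @ centred   # weighted covariance
    return mean, cov
```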
So equation (6) can now be put into recursive form, known as the Bayes filter:

p(x_t | o_t, a_{t-1}, ..., a_0, o_0)
  = [ p(o_t | x_t) ∫ p(x_t | x_{t-1}, a_{t-1}) p(x_{t-1} | o_{t-1}, a_{t-2}, ..., a_0, o_0) dx_{t-1} ] / p(o_t | a_{t-1}, d_{0...t-1})   (8)
  = η ρ ∫ α p(x_{t-1} | o_{t-1}, a_{t-2}, ..., a_0, o_0) dx_{t-1}   (9)
where η equals p(o_t | a_{t-1}, d_{0...t-1})^{-1} and is known as the normalising
constant, and α and ρ are known as the motion model and perceptual model
respectively. The equation shows that, given α and ρ and the previous
approximation of the robot's pose p(x_{t-1} | o_{t-1}, a_{t-2}, ..., a_0, o_0), the
current approximation of the robot's pose p(x_t | o_t, a_{t-1}, ..., a_0, o_0) can
be determined. Hence, given the initial pose probability distribution, the
robot's pose can always be estimated as the robot travels within its environment.
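The recursion is easiest to see in a discrete toy setting. The sketch below runs one Bayes filter step on a circular one-dimensional grid, where the integral in equation (8) becomes a sum over cells; the function and its argument names are illustrative stand-ins for α, ρ and η, not code from the project:

```python
import numpy as np

def bayes_filter_step(belief, motion_kernel, likelihood):
    """One Bayes filter step on a circular 1-D grid.

    belief        -- p(x_{t-1} | o_{t-1}, ..., o_0) over grid cells
    motion_kernel -- alpha: probability of moving k cells forward, k = 0, 1, ...
    likelihood    -- rho: p(o_t | x_t) evaluated at each cell
    """
    belief = np.asarray(belief, dtype=float)
    # Prediction: the integral of the motion model against the prior belief
    # becomes a sum over discrete shifts.
    predicted = np.zeros_like(belief)
    for shift, prob in enumerate(motion_kernel):
        predicted += prob * np.roll(belief, shift)
    # Correction: multiply by the perceptual model and normalise (eta).
    posterior = likelihood * predicted
    return posterior / posterior.sum()
```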
The Bayes filter is hard to compute directly: the probability distribution is
continuous, so storing it exactly and evaluating the integral are both
impractical. For these reasons particle filters are used, which
will be discussed in the following section.
2. The robot then moves (acquiring relative position information at−1 ).
So the samples are moved according to at−1 using the motion model.
5. Resample particles: sample new particles from the existing particle set
according to the weights. Replace the particle set with the new one.
Go to 2.
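The steps above can be sketched as a single function. The helper names (motion, sense_weight, resample) are placeholders for the motion model, perceptual model and resampling scheme discussed in the following sections, not names from the project:

```python
import random

def mcl_step(particles, motion, sense_weight, resample):
    """One iteration of the particle filter steps listed above.

    motion(p)        -- apply a_{t-1} (plus noise) to a particle: the motion model
    sense_weight(p)  -- weight a particle against the observation o_t
    resample(ps, ws) -- draw a new particle set according to the weights
    """
    moved = [motion(p) for p in particles]        # step 2: move samples by a_{t-1}
    weights = [sense_weight(p) for p in moved]    # weight each hypothesis
    return resample(moved, weights)               # step 5: replace the particle set

def weighted_resample(particles, weights):
    """A simple resampler: draw with probability proportional to weight."""
    return random.choices(particles, weights=weights, k=len(particles))
```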
3.6 Perceptual Model
The perceptual model weights a given pose through the use of a sensor, which
means a unique perceptual model is required for each sensor. However, in this
project only a ring sonar sensor was used. A sonar measures the line of sight
distance to the nearest obstacle; a ring sonar sensor is simply a collection
of sonars angularly displaced by 2π/N rad, where N is the number of sonars used.
Given a map of obstacles, one can model the sonar readings at any location
and orientation within the map. This allows the robot to compare its
current sonar readings with the sonar readings of where the robot thinks it
is. These comparisons allow each particle to be weighted via:

ω_i = 1 − (μ/N) Σ_{k=0}^{N} (a_k − b_k)²   (12)

where a_k is the sonar distance modelled at the particle's pose and b_k the
corresponding actual reading.
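As a sketch, the weighting of Equation 12 might look as follows, where `modelled` holds the a_k values predicted from the map at the particle's pose and `measured` the actual sonar readings b_k (the function name and the default μ = 1 are illustrative choices):

```python
def particle_weight(modelled, measured, mu=1.0):
    """Weight a particle per Equation 12: compare sonar distances modelled at
    the particle's pose (a_k) with the actual readings (b_k).

    mu is a scaling constant. Note the weight can go negative for very large
    errors; callers may wish to clamp it at zero.
    """
    n = len(modelled)
    sq_err = sum((a - b) ** 2 for a, b in zip(modelled, measured))
    return 1.0 - (mu / n) * sq_err
```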
3.7 Resampling
The resampling step of the particle filter can be implemented in a number
of ways [Douc et al., 2005]. This project has used one of the more
common techniques: Sample/Importance Resampling (SIR). The concept of
SIR is that the particles are separated by thresholding each particle's weight.
If a particle's weight is greater than the threshold it is kept; if it is
less than the threshold, it is removed from the set and one
of the kept particles is duplicated. Since MCL is a Monte Carlo
method, it is common practice to give the copy a slightly perturbed location
and orientation, rather than duplicating the kept particle exactly. Ideally
the threshold should be determined at each resampling step; however, for the
purposes of this project the most obvious threshold, (particle count)^{-1}, is
used at every resampling step.
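A sketch of SIR with this threshold, assuming particles are (x, y, θ) tuples (the jitter magnitude used for the "extremely similar" copies is an illustrative choice, not a value from the report):

```python
import random

def sir_resample(particles, weights, jitter=0.01):
    """Sample/Importance Resampling with the (particle count)^-1 threshold.

    Particles below the threshold are removed; each removed particle is
    replaced by a near-copy of a surviving one, perturbed slightly rather
    than duplicated exactly.
    """
    threshold = 1.0 / len(particles)
    kept = [p for p, w in zip(particles, weights) if w >= threshold]
    if not kept:                       # degenerate case: keep the original set
        kept = list(particles)
    new_set = list(kept)
    while len(new_set) < len(particles):
        x, y, theta = random.choice(kept)
        new_set.append((x + random.uniform(-jitter, jitter),
                        y + random.uniform(-jitter, jitter),
                        theta + random.uniform(-jitter, jitter)))
    return new_set
```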
The lower the number of particles, the less computation is required, so a
function has been implemented to determine the number of particles to use
after each resampling step. The basic concept is that once the x, y covariance
is small, the robot must be confident of its pose estimation; if this is the
case the particle count is reduced, otherwise it is increased. Increasing or
decreasing the particle count could be done in a number of ways to improve
performance; in this project the particle count is simply increased or
decreased by 5% of the current number of particles. Even though the robot
may be confident of its pose, it may be wrong, so the robot must test its
confidence to ensure it is not lost. Once the robot is confident of its pose,
this is done using Equation 12, where a_i is the ith sonar distance modelled
using the pose mean. If the resulting weight is not extremely high, the robot
must be lost, so the particles are uniformly redistributed across the map.
These resampling post tasks can be further understood from the algorithm in
Figure 1.

function PostTasks(pose_xy_cov, sensor_weight, particle_count) {
    if (sensor_weight < WEIGHT_CONFIDENT) {
        // Not confident in the pose: the robot is lost, so restart with
        // the maximum number of particles spread uniformly over the map.
        particle_count = PARTICLE_COUNT_MAX;
        InitialiseUniformPose(particle_count);
    } else if (pose_xy_cov < XY_COV_CONFIDENT) {
        // Confident of the pose: fewer particles are needed.
        particle_count = DecreaseParticleCount(particle_count);
    } else {
        // Uncertain: more particles are needed.
        particle_count = IncreaseParticleCount(particle_count);
    }
    return particle_count;
}

Figure 1: Resampling post tasks: decide whether the robot is lost and
determine the new particle count.
4 Results
The localiser developed was tested for accuracy over four unique routes with
different program variables: maximum particle count, number of sonars, sonar
resolution and odometry resolution. Table 1 shows an image sequence, left to
right, top to bottom, of the program output for one such experiment. The blue
dots (in fact very small lines, i.e. vectors) represent each particle pose.
The red line indicates the true robot pose (an instance of the given route).
Finally, the green lines represent the sonar readings from the robot's true
pose, in this case a 24 ring sonar sensor. The sequence shows the expected
result of the particles converging around the true robot pose as time
increases.
The first experiments were to determine the number of sonars required and the
maximum sensor resolutions that can be achieved. Table 2 shows three sets of
graphs. The first set is the entire left column, which shows the error versus
time using various size ring sonar sensors tested on identical routes. The
right column contains the two sets of resolution experiments, each showing
two different resolution settings: the top set varies the sonar resolution,
and the bottom set the odometry translation resolution. It is clear from
these graphs that the ring sonar sensor must have at least 12 sonars.
Furthermore, the sonar and odometry translation resolutions should be under
100cm and 2cm respectively.

With this information the program was set with a 24 ring sonar sensor,
1cm odometry translation resolution, 0.5 degree odometry orientation
resolution and a 4cm sonar resolution with a 4 metre range, which are
reasonable specifications for these types of sensors. These settings were
then used to test
Table 1: Program Output: the particles (blue) converge around the true
robot pose (red). The green lines represent the sonar readings from the
robot's true pose.
Table 2: Left column: one route using different numbers of sonars (3, 6, 12
and 24 ring sonar). Right column: one route using different sonar and
odometry resolutions (100cm and 1m sonar resolution; 2cm and 100cm odometry
translation resolution). Each panel plots error (metres) against time using
2000 particles.
four unique routes within the map, each route being tested with 2000 and
1000 particles. Table 3 shows these eight experiments. The left column
shows the error versus time for 2000 particles and the right column shows the
error versus time for 1000 particles. For the 2000 particle experiments, it
can be seen that the total error always converges to zero. The individual
translation and orientation errors are also shown, to demonstrate that
orientation affects the error the most, as mentioned in Section 3. For the
1000 particle experiments, only the total error is shown, as these graphs
demonstrate the robot getting lost and then recovering once again. An
animation of the program output for each route can be found at [Cole, 2006].
5 Future Work
Since MCL is still an active research area, there is much future work
possible. This project used only a sonar sensor; however, any sensor, or a
combination of sensors, could be used. This would require a perceptual model
for each sensor, and the fusion of these perceptual models would then need to
be researched.

The algorithm is currently given a few parameters; it would be better if the
algorithm determined the parameters at run time. For example, the program
could use a 24 ring sonar sensor, but as the robot gains confidence in its
pose, particular sonars could be disabled. This would decrease the required
processing (and, on a real robot, save battery power) and therefore improve
the real time response.
6 Conclusion
It is clear from the results that the algorithm works very well. However,
because all the results are from a simulation, the algorithm may not work as
well on a real robot in the real world in real time. Even though the
simulation models the sensors so that their readings are similar to the real
world, the real world can be very surprising. Nevertheless, given the time
frame of the project and the resources available, a fully functional
simulation of MCL for sonar sensors is an achievement in itself.
References
[Borenstein, 1995] J. Borenstein. Control and kinematic design of multi-
degree-of-freedom robots with compliant linkage. IEEE Transactions on
Robotics and Automation, 1995.
Table 3: Four unique routes within the map, each tested with 2000 and 1000
particles. Each panel plots error (metres) against time; the 2000 particle
experiments show translation, orientation and total error, the 1000 particle
experiments total error only.
[Cole, 2006] Luke Cole. MCL sonar experiments, 2006.
http://cole.homedns.org/fs.php?subject=thumbnails&
dir=/my media/software/localisation/sonar/.
[Couch, 2001] L. Couch. Digital and Analog Communication Systems. Pren-
tice Hall, 2001.
[Cox, 1991] I. Cox. Blanche - an experiment in guidance and navigation of
an autonomous robot vehicle. IEEE Transactions on Robotics and Au-
tomation, 7(2):193–204, 1991.
[Douc et al., 2005] Randal Douc, Olivier Cappé, and Eric Moulines. Com-
parison of resampling schemes for particle filtering. In Proceedings of the
4th International Symposium on Image and Signal Processing and Analysis
(ISPA), 2005.
[Fox et al., 1999] D. Fox, W. Burgard, and S. Thrun. Markov localization for
mobile robots in dynamic environments. Journal of Artificial Intelligence
Research, 11, 1999.
[Fox et al., 2000] D. Fox, S. Thrun, W. Burgard, and F. Dellaert. Sequential
Monte Carlo Methods in Practice. Forthcoming, 2000.
[Leonard and Durrant-Whyte, 1991] J. Leonard and H. Durrant-Whyte.
Mobile robot localization by tracking geometric beacons. IEEE Trans-
actions on Robotics and Automation, 1991.
[Leonard and Durrant-Whyte, 1992] J. Leonard and H. Durrant-Whyte. Di-
rected sonar sensing for mobile robot navigation. Kluwer, Dordrecht, The
Netherlands, 1992.
[Mifflin, 1995] H. Mifflin. The American Heritage Stedman’s Medical Dictio-
nary. Houghton Mifflin Company, 1995.
[Moravec, 1998] H. Moravec. When will the computer hardware match the
human brain? Journal of Evolution and Technology, 1, 1998.
[Rekleitis, 2003] Ioannis Rekleitis. Cooperative Localization and Multi-Robot
Exploration. School of Computer Science, McGill University, Montreal,
Quebec, Canada, 2003.
[Shoval and Borenstein, 1995] S. Shoval and J. Borenstein. Measurement of
angular position of a mobile robot using ultrasonic sensors. 1995.
[Singhal, 1997] A. Singhal. Issues in autonomous mobile robot navigation,
May 1997. A survey paper towards partial fulfillment of MS degree re-
quirements.
[Thrun et al., 2000] S. Thrun, D. Fox, W. Burgard, and F. Dellaert. Robust
monte carlo localization for mobile robots. Artificial Intelligence, 128(1-
2):99–141, 2000.
[Thrun, 2006] Sebastian Thrun, 2006. http://robots.stanford.edu/.
[Zelinsky, 2000] A. Zelinsky. The 21st century: The age of intelligent ma-
chines, May 2000. http://www.science.org.au/future/zelinsky.htm.