
Monte Carlo Localisation for Mobile Robots

Luke Cole <cole@lc.homedns.org>


http://cole.homedns.org
31st of May, 2006

1 Introduction
Robot navigation is the task in which an autonomous robot moves safely from
one location to another. It involves three primary questions [Leonard and
Durrant-Whyte, 1991]:

• Where am I? This is known as robot localisation.

• Where am I going? This is known as goal recognition.

• How do I get there? This is known as path planning.

This project investigated, implemented and experimented with a simulation
of the localisation task using Monte Carlo techniques. Answering the question
“Where am I?” is done from the entity's point of view and is the problem of
estimating the robot's pose (position and orientation) relative to its environment.
The robot's pose is typically the x and y coordinates and heading direction
(orientation) of the robot in a global coordinate system, and is often
represented by a probability distribution.
The localisation technique used in this project is based on a particle filter,
which is an algorithm using a Monte Carlo method and is commonly known
as Monte Carlo Localisation (MCL). The MCL implementation developed worked
very well in simulation: once the robot determined its pose, it was able to
continue tracking its pose for the rest of the journey.

2 Motivation
In 1920, Karel Čapek presented the play R.U.R. (Rossum’s Universal Robots),
which showed a future in which all workers are automated. A worker was
called a robot, a well known term today meaning “a mechanical device
that sometimes resembles a human and is capable of performing a variety of
often complex human tasks on command or by being programmed in advance”
[Mifflin, 1995]. Robots would allow humans to concentrate on more important
tasks, instead of, say, cleaning one's house. Currently the community
employs other humans to do its unwanted tasks, however
delegation to humans may not be an ideal solution. For some tasks, humans
come with major limitations such as a need for sleep, food and a desire for
money. A robot, on the other hand, does not need to sleep, eat or be paid, so
the development of robots for the community is promising.
So it is clear an autonomous mobile robot would be of great benefit to human
society. However, some may argue that robots will create unemployment.
One counter-argument, from a well known scientist specialising in
robotics and computer vision, is “at the moment the highest number of robots
are in the nations with the lowest levels of unemployment. If we can make
robots for the consumer, that will create a lot of employment.” [Zelinsky,
2000].
A robot worker would, however, need to be mobile, which raises
the problem of robot navigation mentioned in the introduction. Robot
navigation is occasionally referred to as the most fundamental problem of
robotics [Cox, 1991], so a project looking at any area of the robot navigation
problem, such as localisation, would be of great benefit to robotics and the
robot dream.
There are a number of problems faced by mobile robot localisation [Thrun
et al., 2000]:

• Position tracking problem - The robot is given its initial location.

• Global positioning problem - The robot is initially lost, i.e. it must deal
with multiple ideas (hypotheses) about its location.

• Kidnapped robot problem - The robot is suddenly ’kidnapped’ to a new
location. Here the robot needs to determine that it has been kidnapped
and, furthermore, must work out its new location. The global positioning
problem is a special case of the kidnapped robot problem in which the robot
is told it has been kidnapped [Fox et al., 1999].

Ideally a solution is needed that handles all these problems in real time, on a
real robot and in the real world. However, for the purposes of this project only a
simulation was used to test these problems via the MCL algorithm presented
in this report.

There has been much research on MCL, such as [Fox et al., 2000]
and [Thrun et al., 2000], however most of the well known work stems from
Sebastian Thrun [Thrun, 2006].

3 Localisation Theory
This section describes localisation theory by first outlining the information
available to a robot, then discussing methods for representing a pose
estimate, and finally outlining the approach used in this project.

3.1 Available Information


Simply speaking, a robot either drives around or senses its environment,
which gives rise to the two different kinds of information available. The
first is driving information, also known as relative position measurements,
gathered by the robot itself as it drives around. The second is sensing
information, also known as absolute position measurements, from observations of
the environment as the robot moves.
The most obvious relative position measurement is odometry, which comes
from the Greek words for road and measure [Moravec, 1998]. However, the
problem cannot be solved purely with odometry, since odometry information
generally comes from a drive system which naturally has drift and slippage,
leading to errors in both distance and orientation [Borenstein, 1995] that
accumulate over time. Generally, errors in orientation cause the largest pose
uncertainties. Differently sized wheels can also cause errors [Shoval and
Borenstein, 1995]. Odometry is also sensitive to terrain, so if the robot
navigates over rocks, for example, it may think it has travelled further than it
actually has. Even though odometry causes increasing uncertainty over time,
it is rather accurate over short periods, and since it is an inexpensive way
to gain position information, it is an important source of information for
localisation.
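
To make the accumulation concrete, the following minimal Python sketch (with
illustrative noise figures, not values from this project) shows how a small
random heading error makes the dead-reckoned position drift grow with distance
travelled:

import math
import random

random.seed(0)
x = y = theta = 0.0
for step in range(1000):
    theta += random.gauss(0.0, 0.01)   # small random heading error each step
    x += 0.05 * math.cos(theta)        # 5cm of forward travel per step
    y += 0.05 * math.sin(theta)
# Compare with the noise-free pose (50m straight ahead).
drift = math.hypot(x - 50.0, y)
print(f"position drift after 50m: {drift:.2f} m")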
Sensing information is typically obtained by representing the environment
as a set of landmarks such as lines and doors. When the global position of a
landmark is known and an observation of the environment is made, capturing
a subset of the landmarks, the robot's position can be determined.
Landmarks are represented by a map or model, which can be built in a
number of ways (topological, feature-based, grid-based or appearance-based)
and is either learned by the robot or given to the robot prior to navigation
[Singhal, 1997]. The map also provides a global reference frame in which the
robot can approximate its pose.

3.2 Pose Estimation
Estimating the pose is very much about representing uncertain pose infor-
mation [Fox et al., 2000]. The pose of the robot is often represented by a
probability distribution. The distribution may be represented by a Gaussian,
in which case, the mean and covariance give the approximation of the robot’s
pose. The distribution is represented by:

p(x_t \mid d_{0 \ldots t})    (1)


Here x is the state, x_t is the state at time t, and d_{0 \ldots t} are the
measurements from time 0 up to time t. In robot localisation there are the two
types of measurements discussed in Section 3.1: relative position measurements
will be denoted by a, for action, and absolute position measurements will be
denoted by o, for observation, making equation (1):

p(x_t \mid o_t, a_{t-1}, o_{t-1}, a_{t-2}, \ldots, a_0, o_0)    (2)


Estimating Equation (2) is the key idea of Bayesian filtering and is used
by most of the pose estimation techniques presented below.
There are many approaches to pose estimation; the earliest techniques
were based on dead reckoning, which has been used ever since
people started travelling around [Worsley, 1978]. The following list outlines
the main pose estimation techniques.
• Dead Reckoning: Given an initial pose, odometry information allows
one to calculate the final pose.

• Topological: The uncertainty of the pose information is stored as a
graph. The location of the robot is returned as a particular area or
room instead of actual coordinates.

• Gaussian PDF Localisation: The pose is stored as a Gaussian
probability density function (PDF).

• Gaussian Sum PDF Localisation: When the pose is multi-modal,
multiple Gaussian PDFs are used.

• Position Probability Grids: The pose space is divided into cells,
each representing a possible pose, so a discrete approximation of
the PDF can be made.

• Monte Carlo Localisation: The PDF is represented by a set of
samples. An efficient approximation of the PDF is made by a finite-sized
sample set.

Gaussian PDF localisation and Gaussian sum PDF localisation use Kalman
filters, which have proved very successful at solving the position
tracking problem [Leonard and Durrant-Whyte, 1992], quite effectively
compensating for small, incremental errors. Position probability
grids use Markov localisation to solve the global positioning problem.
However, this method suffers from two problems: computational expense,
and the fact that accuracy is limited to the resolution of the approximation.
Monte Carlo Localisation (MCL) solves the global positioning and
kidnapping problems, improving on grid-based Markov localisation by an order
of magnitude in both efficiency and accuracy.

3.3 Bayesian Filtering


Particle filters provide solutions to all the problems given in Section 2. To
understand the implementation of particle filters, let us first derive a Bayes
filter which is the basis of Markov Localisation. First transform equation (2)
by Bayes rule [Couch, 2001]:

p(x_t \mid o_t, a_{t-1}, o_{t-1}, a_{t-2}, \ldots, a_0, o_0) = \frac{p(o_t \mid x_t, a_{t-1}, \ldots, o_0) \, p(x_t \mid a_{t-1}, \ldots, o_0)}{p(o_t \mid a_{t-1}, d_{0 \ldots t-1})}    (3)
Now, a Bayesian filter assumes the environment is Markov; that is, if the
current state is known, past measurements are independent of future
measurements:

p(o_t \mid x_t, a_{t-1}, \ldots, o_0) = p(o_t \mid x_t)    (4)

This simplifies equation (3) to:

p(x_t \mid o_t, a_{t-1}, o_{t-1}, a_{t-2}, \ldots, a_0, o_0) = \frac{p(o_t \mid x_t) \, p(x_t \mid a_{t-1}, \ldots, o_0)}{p(o_t \mid a_{t-1}, d_{0 \ldots t-1})}    (5)
Then the final recursive form is obtained by integrating out the pose x_{t-1} at
time t-1, yielding from equation (5):

\frac{p(o_t \mid x_t)}{p(o_t \mid a_{t-1}, d_{0 \ldots t-1})} \int p(x_t \mid x_{t-1}, a_{t-1}, \ldots, o_0) \, p(x_{t-1} \mid a_{t-1}, \ldots, o_0) \, dx_{t-1}    (6)
The Markov assumption also tells us that if x_{t-1} and a_{t-1} are known, the
state x_t is conditionally independent of the past measurements o_1, \ldots, o_{t-1}
and a_1, \ldots, a_{t-2}, that is:

p(x_t \mid x_{t-1}, a_{t-1}, \ldots, o_0) = p(x_t \mid x_{t-1}, a_{t-1})    (7)

So equation (6) can now be put into the recursive form known as the Bayes filter:

\frac{p(o_t \mid x_t)}{p(o_t \mid a_{t-1}, d_{0 \ldots t-1})} \int p(x_t \mid x_{t-1}, a_{t-1}) \, p(x_{t-1} \mid o_{t-1}, a_{t-2}, \ldots, a_0, o_0) \, dx_{t-1}    (8)

= \eta \, \rho \int \alpha \, p(x_{t-1} \mid o_{t-1}, a_{t-2}, \ldots, a_0, o_0) \, dx_{t-1}    (9)

where \eta = p(o_t \mid a_{t-1}, d_{0 \ldots t-1})^{-1} is known as the normalising
constant, and \alpha = p(x_t \mid x_{t-1}, a_{t-1}) and \rho = p(o_t \mid x_t) are
known as the motion model and perceptual model respectively. The equation shows
that given \alpha, \rho and the previous approximation of the robot's pose
p(x_{t-1} \mid o_{t-1}, a_{t-2}, \ldots, a_0, o_0), the current approximation of
the robot's pose p(x_t \mid o_t, a_{t-1}, \ldots, a_0, o_0) can be determined.
Hence, given the initial pose probability distribution, the robot's pose can
always be estimated as the robot travels within its environment.
It is hard to compute the Bayes filter directly: the probability distribution
is continuous, so storing it and evaluating the integral are both difficult.
For these reasons particle filters are used, which are discussed in the
following section.
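
As an aside, the recursion itself is easy to see on a coarse grid. The
following sketch (a hypothetical 1-D example, not this project's
implementation) approximates equation (9): the integral becomes a discrete
convolution with the motion model \alpha, and the perceptual model \rho and
normaliser \eta become an element-wise multiply and rescale:

import numpy as np

def bayes_filter_step(belief, motion_kernel, likelihood):
    # Prediction: the integral over x_{t-1} becomes a discrete
    # convolution of the old belief with the motion model (alpha).
    predicted = np.convolve(belief, motion_kernel, mode="same")
    # Correction: multiply by the perceptual model (rho) and
    # renormalise (the constant eta).
    posterior = likelihood * predicted
    return posterior / posterior.sum()

belief = np.full(100, 1.0 / 100)           # uniform prior over 100 cells
motion_kernel = np.array([0.1, 0.8, 0.1])  # stay put, with some noise
cells = np.arange(100)
likelihood = np.exp(-0.5 * ((cells - 42) / 3.0) ** 2)  # sensor peaked at cell 42
for _ in range(10):
    belief = bayes_filter_step(belief, motion_kernel, likelihood)
print("most likely cell:", int(belief.argmax()))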

3.4 Implementation of Particle Filters


The idea of the particle filter algorithm is to represent Equation (2) by a set
of n weighted samples distributed according to Equation (2), that is:

\{x_i, p_i\}_{i=1,\ldots,n}    (10)

where x_i is a sample (or particle, i.e. a state) and the p_i are called the
importance factors, which determine the weight of each sample and sum to one,
making the set a probability distribution.
Before the algorithm is presented, the initialisation of the samples will
first be discussed in reference to each of the localisation problems mentioned
in Section 2. In the position tracking problem the initial pose is known, so
the initial probability distribution is typically a set of samples distributed
in Gaussian fashion around the given initial pose. In the global localisation
problem, however, the initial probability distribution is typically a set of
samples uniformly distributed over the state space (environment). The kidnapped
robot problem is typically initialised in the same way as the global
localisation problem, once the robot has determined it was kidnapped.
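
As a concrete illustration, the two initialisations might look as follows (a
hedged sketch; the map bounds and noise scales here are hypothetical):

import math
import random

def init_tracking(x0, y0, theta0, n, sigma_xy=0.1, sigma_theta=0.05):
    """Position tracking: particles in a Gaussian around the known pose."""
    return [(random.gauss(x0, sigma_xy),
             random.gauss(y0, sigma_xy),
             random.gauss(theta0, sigma_theta)) for _ in range(n)]

def init_global(x_max, y_max, n):
    """Global localisation (and kidnap recovery): uniform over the map."""
    return [(random.uniform(0.0, x_max),
             random.uniform(0.0, y_max),
             random.uniform(0.0, 2.0 * math.pi)) for _ in range(n)]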
The particle filter algorithm is generally performed as follows (a sketch of
the loop is given after the list):

1. Sample n particles x_i from p(x_{t-1} \mid o_{t-1}, a_{t-2}, \ldots, a_0, o_0)
according to the importance factors p_i.

2. The robot then moves (acquiring relative position information a_{t-1}),
so the samples are moved according to a_{t-1} using the motion model.

3. The robot then makes an observation (acquiring absolute position
information o_t), which yields the importance factors and allows the weight
of each sample x_i to be calculated using the perceptual model.

4. The new importance factors are then normalised so they sum to one.

5. Resample particles: sample new particles from the existing particle set
according to the weights, replace the particle set with the new one, and
go to step 2.
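
The loop can be summarised in a few lines of Python. This is a sketch only;
motion_update and sensor_weight are stand-ins for the motion and perceptual
models described in the next two sections:

import random

def mcl_step(particles, weights, action, observation, sonar_map):
    n = len(particles)
    # Steps 1/5: resample particles according to the importance factors.
    particles = random.choices(particles, weights=weights, k=n)
    # Step 2: move every particle according to the odometry a_{t-1}.
    particles = [motion_update(p, action) for p in particles]
    # Step 3: weight each particle via the perceptual model.
    weights = [sensor_weight(p, observation, sonar_map) for p in particles]
    # Step 4: normalise so the importance factors sum to one.
    total = sum(weights)
    return particles, [w / total for w in weights]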

Figure 3 shows an image sequence of the particle filter in progress. As
can be seen, the particle spread is reduced as the robot drives around and
makes observations.

3.5 Motion Model


The motion model provides the odometry information (translation, rotation)
from the robot and comes directly from the robot's drive system. However,
since this project is a complete simulation, this section outlines
how the odometry information is modelled.
Since this project investigates localisation, as opposed to its parent problem
of robot navigation, the program is given a set of poses which define the
robot's route within the map. The odometry information is then simulated via
the vector difference of the current pose and the previous nth pose. However,
no sensor is perfect, so various errors need to be introduced. This project
introduced two forms of error: resolution and random error. For example, with
a translation resolution of 0.002 metres and a maximum random error of 0.2
metres, a true translation of 0.54321 metres would return a reading of
0.542 ± 0.2 metres.
Once the odometry information is known, each particle is updated using
the dead reckoning equation [Rekleitis, 2003]:

\begin{pmatrix} x' \\ y' \\ \theta' \end{pmatrix} = \begin{pmatrix} x + \rho \cos(\alpha) \\ y + \rho \sin(\alpha) \\ \theta + \alpha \end{pmatrix}    (11)

where \rho and \alpha are the odometry translation and rotation respectively,
(x, y, \theta) is the previously known pose and (x', y', \theta') is the new pose.
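
A sketch of this update, combined with the translation error model described
above, might look as follows (the resolution and error bound are the
illustrative figures used earlier, applied to the odometry reading before the
dead reckoning update):

import math
import random

def motion_update(pose, odometry, trans_res=0.002, trans_err=0.2):
    x, y, theta = pose
    rho, alpha = odometry
    # Simulated sensor imperfection: quantise the translation to the
    # resolution, then add a bounded random error.
    rho = round(rho / trans_res) * trans_res + random.uniform(-trans_err, trans_err)
    # Equation (11): dead reckoning update of the particle pose.
    return (x + rho * math.cos(alpha),
            y + rho * math.sin(alpha),
            theta + alpha)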

3.6 Perceptual Model
The perceptual model weights a given pose through the use of a sensor, which
means a unique perceptual model is required for each sensor. In this
project only a ring sonar sensor was used. A sonar measures the line-of-sight
distance to the nearest obstacle; a ring sonar sensor is simply a collection of
N sonars angularly displaced by 2\pi/N rad.
Given a map of obstacles, one can model the sonar readings at any loca-
tion and orientation within the map. This allows the robot to compare its
current sonar readings with the sonar readings of where the robot thinks it
is. These comparisons allow each particle to be weighted via:
\omega_i = 1 - \frac{\mu}{N} \sum_{k=1}^{N} (a_k - b_k)^2    (12)

where N is the number of sonars, a_k is the kth modelled sonar distance, b_k is
the kth current sonar distance and \mu is the sonar reading distance limit;
\omega_i is the weight of the ith particle.
The map, which can be seen in Figure 3, is a binary image where black
indicates the walls. Clearly, given a pose, the distance to the nearest
obstacle along each of the N sonar directions can be computed. However,
errors must also be introduced to these measurements, namely: resolution,
random error and the sonar reading distance limit. Since this project
simulates MCL, the current sonar distances also have to be modelled, though
this is no different from modelling the sonar distances at any given pose.
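
A sketch of this weighting follows. Here ray_cast is a hypothetical helper
returning the modelled distance to the nearest wall along a given bearing,
capped at the range limit \mu, and the weight is clamped at zero since a direct
reading of equation (12) can go negative for very poor matches:

import math

def sensor_weight(pose, observed, sonar_map, mu=4.0):
    n = len(observed)
    # Model what each of the N sonars should read from this pose
    # (ray_cast is a hypothetical map lookup, not defined here).
    modelled = [ray_cast(sonar_map, pose, 2.0 * math.pi * k / n, mu)
                for k in range(n)]
    # Equation (12): penalise squared differences between modelled (a_k)
    # and observed (b_k) distances.
    err = sum((a - b) ** 2 for a, b in zip(modelled, observed))
    return max(0.0, 1.0 - err * mu / n)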

3.7 Resampling
The resampling step of the particle filter can be implemented in a number
of ways [Douc et al., 2005]. This project used one of the more common
techniques: Sampling/Importance Resampling (SIR), sketched below. The concept
of SIR is that the set of particles is separated by thresholding each
particle's weight. If a particle's weight is greater than the threshold it is
kept; if it is less than the threshold, it is removed from the set and one
of the kept particles is duplicated. Since MCL is a Monte Carlo method, it is
common practice to copy the kept particle with a very similar (slightly
perturbed) location and orientation, rather than duplicating it exactly.
Ideally the threshold should be determined at each resampling step, however
for the purposes of this project the most obvious threshold,
(particle count)^{-1}, is used at every resampling step.
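
A minimal sketch of this thresholded SIR step, assuming particles are
(x, y, \theta) tuples and with an illustrative jitter scale:

import random

def sir_resample(particles, weights, jitter=0.01):
    threshold = 1.0 / len(particles)
    kept = [p for p, w in zip(particles, weights) if w >= threshold]
    if not kept:                 # degenerate case: keep everything
        kept = list(particles)
    out = list(kept)
    while len(out) < len(particles):
        x, y, theta = random.choice(kept)
        # Duplicate a kept particle with a very similar pose rather
        # than an exact copy.
        out.append((x + random.gauss(0.0, jitter),
                    y + random.gauss(0.0, jitter),
                    theta + random.gauss(0.0, jitter)))
    return out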
The lower the number of particles, the less computation is required, so a
function has been implemented to determine the number of particles to use
after each resampling step (see Figure 1).
function PostTasks(pose_xy_cov, sensor_weight, particle_count) {

    // Confident pose but poor sensor agreement: the robot is lost,
    // so redistribute the particles uniformly across the map.
    if (pose_xy_cov < SMALL_XY_COV && sensor_weight < HIGH_WEIGHT) {
        particle_count = PARTICLE_COUNT_MAX;
        InitialiseUniformPose(particle_count);

    // Confident pose: fewer particles are needed.
    } else if (pose_xy_cov < SMALL_XY_COV &&
               particle_count > PARTICLE_COUNT_MIN) {
        particle_count = DecreaseParticleCount(particle_count);

    // Uncertain pose: add particles (up to the maximum).
    } else if (particle_count < PARTICLE_COUNT_MAX) {
        particle_count = IncreaseParticleCount(particle_count);
    }

    return particle_count;
}

Figure 1: Resampling post tasks: is the robot lost? Determine the new particle
count.

The basic concept is that once the x, y covariance
is small, the robot must be confident of its pose estimate, so in that
case the particle count is reduced; otherwise it is increased. Increasing or
decreasing the particle count can be done in a number of ways to improve
performance, however in this project the particle count is simply increased
or decreased by 5% of the current number of particles. Even though the robot
may be confident of its pose, it may be wrong, so the robot must test its
confidence to ensure it is not lost. This is done using Equation (12),
where a_k is the kth sonar distance modelled using the pose mean, once the
robot is confident of its pose. If the resulting weight is not very high, the
robot must be lost, so the particles are uniformly redistributed across the map.
These resampling post tasks can be further understood from the algorithm
in Figure 1.

4 Results
The localiser developed was tested for accuracy over four unique routes while
varying program parameters such as: maximum particle count, number of
sonars, sonar resolution and odometry resolution. Figure 3 shows an
image sequence, left to right, top to bottom, of the program output for one
such experiment. The blue dots (actually very small lines, i.e. vectors)
represent each particle pose. The red line indicates the true robot pose (an
instance of the given route). Finally, the green lines represent the sonar
readings from the robot's true pose, in this case from a 24 ring sonar sensor.
The sequence shows the expected result of the particles converging around the
true robot pose as time increases.
The first experiments were to determine the number of sonars required
and the coarsest sensor resolutions that can be tolerated. Figure 2 shows three
sets of graphs. The first set is the entire left column, which shows the error
versus time using various sized ring sonar sensors tested on identical routes.
The right column contains the two sets of resolution experiments; each set
shows two different resolution settings, the top set being the sonar resolution
and the bottom set being the odometry translation resolution. It is clear from
these graphs that the ring sonar sensor must have at least 12 sonars.
Furthermore, the sonar and odometry translation resolutions should be under
100cm and 2cm respectively.
[Figure 3: Program output: the particles (blue) converge around the true
robot pose (red); the green lines represent the sonar readings from the
robot's true pose.]

[Figure 2: Error (metres) versus time using 2000 particles. Left column: one
route using 3, 6, 12 and 24 ring sonars. Right column: one route using
different sonar resolutions (100cm, 1m) and odometry translation resolutions
(2cm, 100cm).]

With this information the program was set with a 24 ring sonar sensor,
1cm odometry translation resolution, 0.5 degree odometry orientation
resolution and a 4cm sonar resolution with a 4 metre range, which are
reasonable specifications for these types of sensors. These settings were
then used to test
four unique routes within the map. Each route was tested with 2000 and
1000 particles. Figure 4 shows these eight experiments: the left column
shows the error versus time for 2000 particles and the right column shows the
error versus time for 1000 particles. For the 2000 particle experiments, it
can be seen that the total error always converges to zero. The individual
translation and orientation errors are also shown, demonstrating that
orientation affects the error the most, as mentioned in Section 3.1. For the
1000 particle experiments, only the total error is shown, as these graphs
demonstrate the robot getting lost and then recovering once again. An
animation of the program output for each route can be found at [Cole, 2006].

[Figure 4: Four unique routes within the map, each tested with 2000 particles
(left column: translation, orientation and total error versus time) and 1000
particles (right column: total error versus time).]

5 Future Work
Since MCL is still an active research area, there is much future work
to be done. This project used only a sonar sensor, however any sensor, or
combination of sensors, could be used. This would require a perceptual model
for each sensor, and the fusion of these perceptual models would then need to
be researched.
The algorithm is given a few parameters, however it would be better if the
algorithm determined these parameters at run time. For example, the program
could use a 24 ring sonar sensor, but as the robot gains confidence in its
pose, particular sonars could be disabled. This would decrease the required
processing (and, on a real robot, save battery power) and therefore improve
the real-time response.

6 Conclusion
It is clear from the results that the algorithm works very well. However,
because all the results are from a simulation, this algorithm may not work at
all on a real robot in the real world in real time. Even though the simulation
models the sensors so that their readings would be similar to the real world,
the real world can be very surprising. Nevertheless, given the time frame of
the project and the resources available, a fully functional simulation of MCL
for sonar sensors is an achievement in itself.

References
[Borenstein, 1995] J. Borenstein. Control and kinematic design of multi-
degree-of-freedom robots with compliant linkage. IEEE Transactions on
Robotics and Automation, 1995.
[Cole, 2006] Luke Cole. MCL sonar experiments, 2006.
http://cole.homedns.org/fs.php?subject=thumbnails&
dir=/my media/software/localisation/sonar/.
[Couch, 2001] L. Couch. Digital and Analog Communication Systems. Pren-
tice Hall, 2001.
[Cox, 1991] I. Cox. Blanche - an experiment in guidance and navigation of
an autonomous robot vehicle. IEEE Transactions on Robotics and Au-
tomation, 7(2):193–204, 1991.
[Douc et al., 2005] Randal Douc, Olivier Cappe, and Eric Moulines. Com-
parison of resampling schemes for particle filtering. In Proceedings of the
4th International Symposium on Image and Signal Processing and Analysis
(ISPA), 2005.
[Fox et al., 1999] D. Fox, W. Burgard, and S. Thrun. Markov localization for
mobile robots in dynamic environments. Journal of Artificial Intelligence
Research, 11, 1999.
[Fox et al., 2000] D. Fox, S. Thrun, W. Burgard, and F. Dellaert. Sequential
Monte Carlo Methods in Practice. Forthcoming, 2000.
[Leonard and Durrant-Whyte, 1991] J. Leonard and H. Durrant-Whyte.
Mobile robot localization by tracking geometric beacons. IEEE Trans-
actions on Robotics and Automation, 1991.
[Leonard and Durrant-Whyte, 1992] J. Leonard and H. Durrant-Whyte. Di-
rected sonar sensing for mobile robot navigation. Kluwer, Dordrecht, The
Netherlands, 1992.
[Mifflin, 1995] H. Mifflin. The American Heritage Stedman’s Medical Dictio-
nary. Houghton Mifflin Company, 1995.
[Moravec, 1998] H. Moravec. When will the computer hardware match the
human brain? Journal of Evolution and Technology, 1, 1998.
[Rekleitis, 2003] Ioannis Rekleitis. Cooperative Localization and Multi-Robot
Exploration. School of Computer Science, McGill University, Montreal,
Quebec, Canada, 2003.
[Shoval and Borenstein, 1995] S. Shoval and J. Borenstein. Measurement of
angular position of a mobile robot using ultrasonic sensors. 1995.

[Singhal, 1997] A. Singhal. Issues in autonomous mobile robot navigation,
May 1997. A survey paper towards partial fulfillment of MS degree re-
quirements.
[Thrun et al., 2000] S. Thrun, D. Fox, W. Burgard, and F. Dellaert. Robust
monte carlo localization for mobile robots. Artificial Intelligence, 128(1-
2):99–141, 2000.
[Thrun, 2006] Sebastian Thrun, 2006. http://robots.stanford.edu/.

[Worsley, 1978] F. Worsley. Shackleton’s boat journey. New York, 1978.

[Zelinsky, 2000] A. Zelinsky. The 21st century: The age of intelligent ma-
chines, May 2000. http://www.science.org.au/future/zelinsky.htm.

