
Monte Carlo Localisation for mobile robots

Luke Cole <cole@lc.homedns.org>


http://cole.homedns.org
31st of May, 2006

1 Introduction
Robot navigation is the task where an autonomous robot moves safely from
one location to another. This involves three primary questions [Leonard and
Durrant-Whyte, 1991]:

• Where am I? which is known as robotic localisation.

• Where am I going? which is known as goal recognition.

• How do I get there? which is known as path planning.

This project investigated, implemented and experimented with the localisation
task using Monte Carlo techniques. Answering the question “Where am I?”
is done from the robot’s point of view and is the problem of estimating the
robot’s pose (position and orientation) relative to its environment. The pose
typically comprises the x and y coordinates and heading direction (orientation)
of the robot in a global coordinate system and is often represented by a
probability distribution.

2 Localisation Problem Instances


Mobile robot localisation faces a number of problem instances [Thrun
et al., 2000]:

• Position tracking problem - The robot is given its initial location and
must keep track of its pose as it moves.

• Global positioning problem - The robot is initially lost, so it must be
able to deal with multiple ideas (hypotheses) about its location.

• Kidnapped robot problem - The robot is suddenly ’kidnapped’ to a
new location. Here the robot needs to determine that it has been kid-
napped and, furthermore, must work out its new location. The global
positioning problem is a special case of the kidnapped robot problem
where the robot is told it has been kidnapped [Fox et al., 1999].

3 Localisation Algorithm
Monte Carlo Localisation is implemented via an approximation of Bayes’
rule [Couch, 2001]:

\[
p(x_t \mid o_t, a_{t-1}, o_{t-1}, a_{t-2}, \ldots, a_0, o_0)
  = \frac{p(o_t \mid x_t, a_{t-1}, \ldots, o_0)\; p(x_t \mid a_{t-1}, \ldots, o_0)}
         {p(o_t \mid a_{t-1}, d_{0 \ldots t-1})}
\tag{1}
\]
where x_t is the pose at time t, o_t is an observation at time t, a_t is an action
at time t, and d_{0...t-1} denotes the combined observation and action data up
to time t-1.
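
In practice Eq. (1) is not evaluated directly. Under the usual Markov
assumptions (a standard step in the Monte Carlo localisation literature, e.g.
[Thrun et al., 2000], made explicit here) the posterior satisfies the recursion

\[
p(x_t \mid d_{0 \ldots t}) \;\propto\;
  p(o_t \mid x_t) \int p(x_t \mid x_{t-1}, a_{t-1})\,
  p(x_{t-1} \mid d_{0 \ldots t-1})\, \mathrm{d}x_{t-1}
\]

where p(x_t | x_{t-1}, a_{t-1}) is the motion model (Section 3.1) and
p(o_t | x_t) is the perceptual model (Section 3.2).
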
Generally this recursion is evaluated with a particle filter, which represents
the posterior by a set of weighted pose samples (particles) and performs the
following recursive steps (a code sketch of one full iteration is given after the
list):

1. The robot moves, acquiring relative position information a_{t-1}. Each
sample is moved according to a_{t-1} using the motion model.

2. The robot then makes an observation, acquiring absolute position infor-
mation o_t, which yields the importance factors and allows the weight
for each sample x_i to be calculated using the perceptual model.

3. The new importance factors are then normalised so they sum up to
one.

4. Resample new particles from the existing particle set according to the
weights. Replace the particle set with this resampled set. Go to 1.
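
The C sketch below shows one iteration of this loop. The Particle and
Odometry structs and the helpers ApplyMotionModel, PerceptualWeight and
Resample are illustrative placeholders for the models described in Sections
3.1 to 3.3, not the project's actual interfaces.

#include <stddef.h>

/* Illustrative types and helpers standing in for the motion model (Sec. 3.1),
   perceptual model (Sec. 3.2) and resampling step (Sec. 3.3). */
typedef struct { double x, y, theta, weight; } Particle;
typedef struct { double d_trans, d_rot; } Odometry;

void   ApplyMotionModel(Particle *p, const Odometry *a);
double PerceptualWeight(const Particle *p, const double *sonar, size_t n_sonar);
void   Resample(Particle *p, size_t n);

/* One recursive update of the particle filter: move, weight, normalise, resample. */
void ParticleFilterStep(Particle *p, size_t n, const Odometry *a,
                        const double *sonar, size_t n_sonar)
{
    double total = 0.0;

    for (size_t i = 0; i < n; i++)          /* 1. move each sample by a_{t-1}   */
        ApplyMotionModel(&p[i], a);

    for (size_t i = 0; i < n; i++) {        /* 2. importance factor per sample  */
        p[i].weight = PerceptualWeight(&p[i], sonar, n_sonar);
        total += p[i].weight;
    }

    for (size_t i = 0; i < n; i++)          /* 3. normalise to sum to one       */
        p[i].weight /= total;

    Resample(p, n);                         /* 4. resample according to weights */
}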

3.1 Motion Model

The motion model provides the odometry information (translation, rotation)
from the robot. To simulate this, the vector difference between the current
real location and the nth previous real location provides the initial simulated
odometry information. Errors are then introduced based on two factors: in
this project these were resolution and random error. The optimal resolutions
found were 0.001 metres (translation) and 0.5 degrees (rotation).
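
A minimal C sketch of this simulation step is given below. The function
name, the use of uniform noise and the error magnitude are assumptions for
illustration; only the quantisation resolutions (0.001 m, 0.5 degrees) come
from the report.

#include <math.h>
#include <stdlib.h>

/* Illustrative only: corrupt a true odometry delta with uniform random error
   and quantise it to the given resolution (e.g. 0.001 m for translation or
   0.5 degrees for rotation). max_error is an assumed parameter. */
static double SimulateOdometry(double true_delta, double resolution,
                               double max_error)
{
    double error = ((double)rand() / RAND_MAX * 2.0 - 1.0) * max_error;
    return resolution * round((true_delta + error) / resolution);
}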

3.2 Perceptual Model
The perceptual model weights a given pose via the use of a sensor, which in
this project was a ring of N sonar sensors.
For this project the sonar sensor weight is determined by:

\[
w_i = 1 - \frac{1}{N \rho} \sum_{k=0}^{N} (\mathrm{mean} - x_k)^2
\tag{2}
\]

where N is the number of sonars and ρ is the sonar distance limit.


Please note that the distance measurements are calculated similarly to the
odometry above; that is, errors are introduced based on two factors: resolution
and random error. In this project the sonar distance limit is 4 metres, the
resolution is 0.04 metres, and N is generally 24.
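
A direct transcription of Eq. (2) as a C function is sketched below. The
names are illustrative; x[k] and mean stand for whatever per-sonar quantities
the perceptual model compares (the report does not define them explicitly),
and the sum runs over the N sonars.

#include <stddef.h>

/* Weight for one pose hypothesis from Eq. (2): N per-sonar values x[0..N-1],
   their mean, and the sonar distance limit rho (4 m in this project). */
static double SonarWeight(const double *x, size_t N, double mean, double rho)
{
    double sum = 0.0;
    for (size_t k = 0; k < N; k++)
        sum += (mean - x[k]) * (mean - x[k]);
    return 1.0 - sum / ((double)N * rho);
}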

3.3 Resampling
This project used sampling importance resampling (SIR) to resample the set
of particles. The threshold used to test the weight of each sample was chosen
to be 1/particle count. Furthermore, as an extension to the project, the number
of particles is also adapted according to the following conditions:

int GetParticleCount(double pose_xy_covariance, int particle_count,
                     double sensor_weight)
{
    /* Pose estimate has converged but the sensor weight is low: reset to
       the maximum number of particles spread uniformly over the pose space. */
    if (pose_xy_covariance < SMALL_XY_COVARIANCE &&
        sensor_weight < HIGH_WEIGHT) {
        particle_count = PARTICLE_COUNT_MAX;
        InitialiseUniformPose(particle_count);
    }
    /* Converged and still below the maximum: let the particle count grow. */
    else if (pose_xy_covariance < SMALL_XY_COVARIANCE &&
             particle_count < PARTICLE_COUNT_MAX) {
        IncreaseParticleCount(&particle_count);
    }
    /* Otherwise shrink the particle set towards the minimum. */
    else if (particle_count > PARTICLE_COUNT_MIN) {
        DecreaseParticleCount(&particle_count);
    }

    return particle_count;
}
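
For reference, a generic multinomial resampling step, one common way to
realise SIR, is sketched below; the interface and names are illustrative
assumptions rather than the project's actual code.

#include <stdlib.h>

/* Draw n indices from the current particle set in proportion to the
   normalised weights; the caller then copies the selected particles into
   the new set. A generic SIR step, not necessarily the project's version. */
static void ResampleIndices(const double *weights, size_t n, size_t *out)
{
    for (size_t i = 0; i < n; i++) {
        double u = (double)rand() / RAND_MAX;   /* uniform draw in [0, 1] */
        double cumulative = 0.0;
        size_t j = 0;
        while (j < n - 1 && cumulative + weights[j] < u)
            cumulative += weights[j++];
        out[i] = j;
    }
}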

References
[Couch, 2001] L. Couch. Digital and Analog Communication Systems. Pren-
tice Hall, 2001.
[Fox et al., 1999] D. Fox, W. Burgard, and S. Thrun. Markov localization for
mobile robots in dynamic environments. Journal of Artificial Intelligence
Research, 11, 1999.
[Leonard and Durrant-Whyte, 1991] J. Leonard and H. Durrant-Whyte.
Mobile robot localization by tracking geometric beacons. IEEE Trans-
actions on Robotics and Automation, 1991.
[Thrun et al., 2000] S. Thrun, D. Fox, W. Burgard, and F. Dellaert. Robust
Monte Carlo localization for mobile robots. Artificial Intelligence, 128(1-
2):99–141, 2000.
