
MOBILE ROBOT LOCALIZATION

Mobile robot localization is the problem of determining the pose of a robot


relative to a given map of the environment. It is often called position
estimation or position tracking. Mobile robot localization is an instance of the
general localization problem, which is the most basic perceptual problem in
robotics.
Maps are described in a global coordinate system, which is independent of the robot's pose. Localization is the process of establishing correspondence between the map coordinate system and the robot's local coordinate system. Knowing this coordinate transformation enables the robot to express the location of objects of interest within its own coordinate frame, a necessary prerequisite for robot navigation.
Unfortunately, and herein lies the problem of mobile robot localization, the pose usually cannot be sensed directly. Put differently, most robots do not possess a (noise-free!) sensor for measuring pose. The pose therefore has to be inferred from data. A key difficulty arises from the fact that a single sensor measurement is usually insufficient to determine the pose. Instead, the robot has to integrate data over time to determine its pose.

EXTENDED KALMAN FILTER


The assumptions of linear state transitions and linear measurements with
added Gaussian noise are rarely fulfilled in practice. For example, a robot that
moves with constant translational and rotational velocity typically moves on a
circular trajectory, which cannot be described by linear next state transitions.
The extended Kalman filter (EKF) overcomes one of these assumptions: the
linearity assumption. Here the assumption is that the next state probability
and the measurement probabilities are governed by nonlinear functions g and
h, respectively:
x_t = g(u_t, x_{t-1}) + ε_t
z_t = h(x_t) + δ_t

where
x_t : state at time t
u_t : control at time t
z_t : measurement at time t
ε_t, δ_t : control and measurement noise, respectively

Example of localization using the extended Kalman filter. The robot moves on
a straight line. As it progresses, its uncertainty increases gradually, as
illustrated by the error ellipses. When it observes a landmark with known
position, the uncertainty is reduced.
The extended Kalman filter (EKF) calculates an approximation to the true state. It represents this approximation by a Gaussian. In particular, the state x_t at time t is represented by a mean μ_t and a covariance Σ_t.
The key idea underlying the EKF is called linearization. There exist many techniques for linearizing nonlinear functions. EKFs utilize a method called (first order) Taylor expansion, with which we can refine our equations as

g(u_t, x_{t-1}) ≈ g(u_t, μ_{t-1}) + G_t (x_{t-1} − μ_{t-1})

G_t = ∂g(u_t, μ_{t-1}) / ∂x_{t-1} : a matrix of size n × n, with n denoting the dimension of the state. This matrix is called the Jacobian. The value of the Jacobian depends on u_t and μ_{t-1}, hence it differs for different points in time.
EKFs implement the exact same linearization for the measurement function h. Here the Taylor expansion is developed around μ̄_t, the state deemed most likely by the robot at the time it linearizes h:

h(x_t) ≈ h(μ̄_t) + H_t (x_t − μ̄_t)

H_t = ∂h(μ̄_t) / ∂x_t : the Jacobian for the measurement.
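
Where an analytical Jacobian is inconvenient to derive, it can also be approximated numerically. Below is a minimal sketch (not from the original code; the function and type names are illustrative) of a forward-difference approximation of G_t for an arbitrary process model g:

/* Process model signature: writes g(u, x) into x_next (state length n). */
typedef void (*process_model_t)(const double *u, const double *x,
                                double *x_next, int n);

/* Forward-difference approximation of the n x n Jacobian G = dg/dx,
   evaluated at (u, x).  G is stored row-major: G[i*n + j] = dg_i/dx_j. */
void numerical_jacobian(process_model_t g, const double *u,
                        const double *x, double *G, int n)
{
    double x_pert[16], gx[16], gx_pert[16];   /* assumes n <= 16 */
    const double h = 1e-6;                    /* perturbation step */
    int i, j;

    g(u, x, gx, n);
    for (j = 0; j < n; j++) {
        for (i = 0; i < n; i++)
            x_pert[i] = x[i];
        x_pert[j] += h;                       /* perturb one state component */
        g(u, x_pert, gx_pert, n);
        for (i = 0; i < n; i++)
            G[i * n + j] = (gx_pert[i] - gx[i]) / h;
    }
}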

ALGORITHM

EKF(μ_{t-1}, Σ_{t-1}, u_t, z_t):
    μ̄_t = g(u_t, μ_{t-1})
    Σ̄_t = G_t Σ_{t-1} G_t^T + R_t
    K_t = Σ̄_t H_t^T (H_t Σ̄_t H_t^T + Q_t)^{-1}
    μ_t = μ̄_t + K_t (z_t − h(μ̄_t))
    Σ_t = (I − K_t H_t) Σ̄_t
Return μ_t, Σ_t

R_t : the control noise matrix
Q_t : the measurement noise matrix
K_t : the Kalman gain matrix
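
For illustration, the following is a sketch of these equations written with the GSL matrix routines the implementation links against. The dimensions are fixed to the 3-state pose and 2-dimensional line measurement used later in this report; ekf_step and its calling convention are illustrative, not a toolbox function.

#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_linalg.h>

/* One EKF cycle at the covariance/gain level; mu is 3x1 (assumed to
   already hold mu_bar = g(u, mu)), Sigma 3x3, G 3x3 (process Jacobian),
   H 2x3 (measurement Jacobian), Rt 3x3 (control noise), Qt 2x2
   (measurement noise), v 2x1 (innovation z_t - h(mu_bar)).
   mu and Sigma are updated in place. */
void ekf_step(gsl_matrix *mu, gsl_matrix *Sigma,
              const gsl_matrix *G, const gsl_matrix *H,
              const gsl_matrix *Rt, const gsl_matrix *Qt,
              const gsl_matrix *v)
{
    gsl_matrix *GS  = gsl_matrix_alloc(3, 3);  /* G * Sigma             */
    gsl_matrix *HS  = gsl_matrix_alloc(2, 3);  /* H * Sigma_bar         */
    gsl_matrix *S   = gsl_matrix_alloc(2, 2);  /* innovation covariance */
    gsl_matrix *Si  = gsl_matrix_alloc(2, 2);  /* S^-1                  */
    gsl_matrix *SHt = gsl_matrix_alloc(3, 2);  /* Sigma_bar * H^T       */
    gsl_matrix *K   = gsl_matrix_alloc(3, 2);  /* Kalman gain           */
    gsl_matrix *IKH = gsl_matrix_alloc(3, 3);  /* I - K*H               */
    gsl_matrix *NS  = gsl_matrix_alloc(3, 3);
    gsl_permutation *p = gsl_permutation_alloc(2);
    int signum;

    /* Sigma_bar = G Sigma G^T + Rt  (stored back into Sigma) */
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, G, Sigma, 0.0, GS);
    gsl_blas_dgemm(CblasNoTrans, CblasTrans, 1.0, GS, G, 0.0, Sigma);
    gsl_matrix_add(Sigma, Rt);

    /* S = H Sigma_bar H^T + Qt */
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, H, Sigma, 0.0, HS);
    gsl_blas_dgemm(CblasNoTrans, CblasTrans, 1.0, HS, H, 0.0, S);
    gsl_matrix_add(S, Qt);

    /* K = Sigma_bar H^T S^-1 */
    gsl_blas_dgemm(CblasNoTrans, CblasTrans, 1.0, Sigma, H, 0.0, SHt);
    gsl_linalg_LU_decomp(S, p, &signum);
    gsl_linalg_LU_invert(S, p, Si);
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, SHt, Si, 0.0, K);

    /* mu = mu_bar + K v */
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, K, v, 1.0, mu);

    /* Sigma = (I - K H) Sigma_bar */
    gsl_matrix_set_identity(IKH);
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, -1.0, K, H, 1.0, IKH);
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, IKH, Sigma, 0.0, NS);
    gsl_matrix_memcpy(Sigma, NS);

    gsl_matrix_free(GS); gsl_matrix_free(HS); gsl_matrix_free(S);
    gsl_matrix_free(Si); gsl_matrix_free(SHt); gsl_matrix_free(K);
    gsl_matrix_free(IKH); gsl_matrix_free(NS);
    gsl_permutation_free(p);
}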

COMPLETE CYCLE:

IMPLEMENTATION
HARDWARE
KHEPERA III

The Khepera III is a differential-drive robot of 12 cm diameter, produced by K-Team Corporation with development assistance from the Distributed Intelligent Systems and Algorithms Laboratory (DISAL). It is a descendant of the first-generation Khepera robot, which is smaller in size (5.5 cm diameter) and has been in use worldwide since its first release in 1995; to date, around 2000 Khepera I and Khepera II robots have been sold to over 600 universities.
In its latest incarnation, the Khepera III offers a much higher computing
power than its predecessors. A KoreBot extension board provides a standard
embedded Linux operating system (Ångström distribution) on an Intel
XSCALE PXA-255 processor running at 400 MHz. Basic sensing is formed by a
ring of 9 infrared sensors (TCRT5000 Vishay Telefunken) and an additional
pair facing downwards (for table top navigation and line following), and 5
ultrasound sensors (400ST/R100 Midas Components Ltd.) placed on the front
semicircle. The infrared sensors have both active and passive functioning
modes (for reflected emitted light and ambient light measurements,
respectively), and an approximate range of 2cm-25cm. The ultrasound
sensors have an approximate range of 20cm-400cm and record a maximum of
3 echoes and their corresponding timestamps. Further, the robot has a
stackable expansion bus that enables the addition of custom robot modules.
Finally, in order to facilitate communication, an IEEE 802.11 wireless card can
be installed in the built-in CompactFlash slot.

SOFTWARE

KHEPERA 3 TOOLBOX

The toolbox was used as a base for elementary functions, and the algorithm was implemented using these modules and programs.
MODULES
khepera3, i2cal, nmea from the Khepera 3 Toolbox.
libgsl and libgslcblas from the GNU Scientific Library were used for linear algebra functions.
urg, urg_connection and urg_system for laser-related functions.
ekf_localization for the global state and covariance variables.
odometry_model for defining the mobile robot's odometry.
PROGRAMS
Ekf_localization - the implementation of the complete algorithm (the details of the functions and files are given in the specific sections that follow).
Map_building - the code for map representation in the Hough domain (discussed later in detail) and other map data for data association.
DATA STRUCTURES
The following data structures were used:

struct localization_variables
{
    gsl_matrix *state;       // global variable for state
    gsl_matrix *covariance;  // global variable for covariance
};

From the ekf_localization module.
struct sOdometryTrack {
    struct {
        int is_default;
        float wheel_distance;          // distance between wheels of khepera3
        float wheel_conversion_left;   // encoder conversion
        float wheel_conversion_right;  // encoder conversion
    } configuration;
    struct {
        int pos_left_prev;             // left encoder's last reading
        int pos_right_prev;            // right encoder's last reading
    } state;
    struct {
        float x;                       // x pose of khepera3
        float y;                       // y pose of khepera3
        float theta;                   // heading of khepera3
    } result;
    struct {                           // control variables
        float s_left;                  // left wheel displacement in meters
        float s_right;                 // right wheel displacement in meters
    } wheel_displacement;
};
struct sOdometryGoto {
struct {
int acceleration_step;
float speed_max;
float speed_min;
} configuration;
struct sOdometryTrack * track;
struct {
float goal_x;
float goal_y;
int speed_left_internal;
int speed_right_internal;
} state;
struct {
int speed_left;
int speed_right;
int closetogoal;
int veryclosetogoal;
int atgoal;
} result;
};
Both structures are from the odometry_model module.

ODOMETRY
The encoder properties:
1 revolution = 128.8 mm = 2764 measures
The motor control block acts as a slave device of the KoreBot controller on the I2C bus.

Speed:
The speed value is a division of a constant value by the time between encoder pulsations. Thus the real speed can be given as

RealSpeed = WheelCircumference / (2764 × T_pulse)

where WheelCircumference is 128.8 mm, 2764 corresponds to the number of measures per revolution of the wheel, and T_pulse is the time between encoder pulsations. In terms of the controller's speed value,

RealSpeed = SpeedValue / Kspeed

where Kspeed is 144.01.
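
A minimal pair of conversion helpers under the relations above (the constant and function names are illustrative; the Kspeed relation should be verified against the K-Team documentation):

#define MEASURES_PER_REV  2764.0   /* encoder measures per wheel revolution */
#define WHEEL_CIRC_MM     128.8    /* wheel circumference in millimeters    */
#define K_SPEED           144.01   /* speed conversion constant             */

/* Encoder measure count to traveled distance in millimeters. */
float measures_to_mm(int measures)
{
    return measures * (WHEEL_CIRC_MM / MEASURES_PER_REV);
}

/* Controller speed value to real speed, assuming the linear relation
   reconstructed above. */
float speed_value_to_real(float speed_value)
{
    return speed_value / K_SPEED;
}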

STATE PREDICTION

EQUATIONS

Control:
u_k = [Δs_l, Δs_r]^T : wheel displacements

Error Model: linear growth
U_k = diag(k_l |Δs_l|, k_r |Δs_r|), i.e. the variance of each wheel's displacement grows linearly with the distance it travels.

Nonlinear process model:
With Δs = (Δs_r + Δs_l)/2 and Δθ = (Δs_r − Δs_l)/b, where b is the wheel distance,

x̄_k = x_{k-1} + Δs cos(θ_{k-1} + Δθ/2)
ȳ_k = y_{k-1} + Δs sin(θ_{k-1} + Δθ/2)
θ̄_k = θ_{k-1} + Δθ

Jacobian with respect to state:

G_x = [ 1  0  −Δs sin(θ_{k-1} + Δθ/2)
        0  1   Δs cos(θ_{k-1} + Δθ/2)
        0  0   1 ]

Jacobian with respect to control (columns ordered as Δs_l, Δs_r; θ = θ_{k-1}):

G_u = [ (1/2)cos(θ+Δθ/2) + (Δs/2b)sin(θ+Δθ/2)    (1/2)cos(θ+Δθ/2) − (Δs/2b)sin(θ+Δθ/2)
        (1/2)sin(θ+Δθ/2) − (Δs/2b)cos(θ+Δθ/2)    (1/2)sin(θ+Δθ/2) + (Δs/2b)cos(θ+Δθ/2)
        −1/b                                      1/b ]

The predicted covariance is then Σ̄_k = G_x Σ_{k-1} G_x^T + G_u U_k G_u^T.
CODE FOR THIS MODULE:
1. state_prediction.c in ekf_localization programs folder.
2. odometry_model.c in odometry_model modules folder.
FUNCTIONS
void prediction_step_function(struct localization_variables *ekf, struct sOdometryTrack *ot)

Inputs: the localization_variables data structure and the odometry track structure. Output: the estimated state variable and covariance.

VARIABLES
μ : localization_variables->state
Σ : localization_variables->covariance
Δs_l : sOdometryTrack->wheel_displacement.s_left
Δs_r : sOdometryTrack->wheel_displacement.s_right
U_k : Uk
G_x : Gx
G_u : Gu
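
A plain-C sketch of the prediction math above (independent of the actual state_prediction.c, whose contents are not reproduced here; the function name and array layout are illustrative):

#include <math.h>

/* Propagate the pose (x, y, theta) by wheel displacements dsl, dsr
   (meters) and fill the 3x3 state Jacobian Gx and 3x2 control
   Jacobian Gu, both row-major.  b is the wheel distance. */
void predict_pose(double *x, double *y, double *theta,
                  double dsl, double dsr, double b,
                  double Gx[9], double Gu[6])
{
    double ds = 0.5 * (dsr + dsl);       /* translation of the axle center */
    double dtheta = (dsr - dsl) / b;     /* rotation increment             */
    double a = *theta + 0.5 * dtheta;    /* heading at the midpoint        */

    /* Jacobian with respect to the state (x, y, theta) */
    Gx[0] = 1; Gx[1] = 0; Gx[2] = -ds * sin(a);
    Gx[3] = 0; Gx[4] = 1; Gx[5] =  ds * cos(a);
    Gx[6] = 0; Gx[7] = 0; Gx[8] = 1;

    /* Jacobian with respect to the control (dsl, dsr) */
    Gu[0] = 0.5 * cos(a) + (ds / (2 * b)) * sin(a);
    Gu[1] = 0.5 * cos(a) - (ds / (2 * b)) * sin(a);
    Gu[2] = 0.5 * sin(a) - (ds / (2 * b)) * cos(a);
    Gu[3] = 0.5 * sin(a) + (ds / (2 * b)) * cos(a);
    Gu[4] = -1.0 / b;
    Gu[5] =  1.0 / b;

    /* Nonlinear process model */
    *x += ds * cos(a);
    *y += ds * sin(a);
    *theta += dtheta;
}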

SENSOR
Hokuyo Laser Range Finder

Properties:

Light weight (160 g), well suited to a small mobile robot.
Low power consumption (2.5 W) for longer working hours.
Wide range (5600 mm × 240°).
Accuracy: ±30 mm.
Distance and angle data output with high angular resolution (0.352°).
High quality product under Total Quality Management; designed, manufactured and inspected by HOKUYO.

FEATURE/LANDMARK EXTRACTION:

Lines were chosen as landmarks for the EKF, and the Hough transform was used to model the landmarks.

First, line segment information was extracted using the split-and-merge algorithm, probably the most popular line extraction algorithm, which originated in computer vision.

Second, the Hough transforms of the lines (of which these line segments are a part) were calculated. The figure above shows a snapshot of the data association from the offline visualization in Hough space: global map Hough points are represented by blue asterisks, while global observation Hough points are given by red dots for the associated observations and yellow dots for the non-associated ones.

Another figure shows the associated (red) and non-associated (yellow) map lines in the global/inertial frame.

Equations
Observations:
z_k = {z_i = (ρ_i, α_i) | i = 1, …, NO}, NO = number of observations
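
As a worked example of the (ρ, α) parametrization, the sketch below (a hypothetical helper using the point2d, segment and houg types defined under DATA STRUCTURES below) computes the Hough parameters of the infinite line through a segment's endpoints via the normal form ρ = x cos α + y sin α:

#include <math.h>

/* (ro, alfa) normal-form parameters of the line through a segment's
   endpoints, with ro kept non-negative. */
houg segment_to_houg(segment s)
{
    houg h;
    double dx = s.end.x - s.start.x;
    double dy = s.end.y - s.start.y;

    /* angle of the line normal */
    h.alfa = atan2(dy, dx) + M_PI / 2.0;
    /* signed distance from the origin along the normal */
    h.ro = s.start.x * cos(h.alfa) + s.start.y * sin(h.alfa);
    if (h.ro < 0) {                  /* keep ro non-negative, flip normal */
        h.ro = -h.ro;
        h.alfa += M_PI;
    }
    /* wrap alfa to (-pi, pi] */
    while (h.alfa > M_PI)   h.alfa -= 2.0 * M_PI;
    while (h.alfa <= -M_PI) h.alfa += 2.0 * M_PI;
    return h;
}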

CODE FOR THIS MODULE:


Get_observations.c in ekf_localization programs folder.
DATA STRUCTURES
struct laser_sensor_t
{
    urg_t *urg;
    long *data;        // vector data from the laser sensor
    int data_len;      // length of data
    float min_angle;   // minimum scan angle
    float max_angle;   // maximum scan angle
    float ang_res;     // angular resolution of the sensor
    float max_range;   // maximum sensor range
    int num_samples;   // number of samples per scan
};
// structure to represent points in global frame
typedef struct
{
double x, y;
} point2d;
// structure to represent a line segment in global frame
typedef struct
{
point2d start, end;
} segment;
// structure to represent a line in hough domain
typedef struct
{
double ro, alfa;
} houg;

FUNCTIONS
// function to get raw laser data from USB into
// laser_sensor_t.data
void do_laser_scan(struct laser_sensor_t *laser_sensor);

// a recursive function implementing the split-and-merge
// algorithm; it also returns the Hough domain
// representation of the lines
int line_extraction(const point2d scan[MaxRows], int scan_length,
                    int min_length, double threshold,
                    segment lines[MaxRows], houg lines_houg[MaxRows]);

// function to get all the landmark information for data
// association
int get_laser_readings(struct sOdometryTrack *ot, double **observations);
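
A hypothetical call sequence tying these functions together; the polar-to-Cartesian conversion and the min_length/threshold values are illustrative, and the real get_laser_readings performs this work internally:

#include <math.h>

#define MaxRows 1024   /* illustrative bound on points per scan */

void example_scan_to_lines(struct laser_sensor_t *laser)
{
    point2d scan[MaxRows];
    segment lines[MaxRows];
    houg lines_houg[MaxRows];
    int i, n_lines;

    do_laser_scan(laser);  /* fills laser->data with ranges (mm) */

    /* convert each polar reading to a Cartesian point in the sensor frame */
    for (i = 0; i < laser->num_samples && i < MaxRows; i++) {
        float angle = laser->min_angle + i * laser->ang_res;
        scan[i].x = laser->data[i] * 0.001 * cos(angle);  /* mm -> m */
        scan[i].y = laser->data[i] * 0.001 * sin(angle);
    }

    /* extract segments and their Hough representations; the minimum
       segment length (in points) and split threshold are illustrative */
    n_lines = line_extraction(scan, laser->num_samples, 10, 0.03,
                              lines, lines_houg);
    (void)n_lines;
}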

MAP BUILDING
Map data for association included:
1. Hough transform of global/inertial map line segments.
2. Midpoint coordinates of the global/inertial line segments.
3. Length of global/inertial line segments.
CODE FOR THIS MODULE:
Map_building folder in Programs in Khepera 3 toolbox.
Input: map.log, a file with map points.
Output: map_hough.log, a file with the above-mentioned map information.
DATA STRUCTURES:
// structure to represent points in global frame
typedef struct
{
double x, y;
} point2d;
// structure to represent a line in hough domain
typedef struct
{
double ro, alfa;
} houg;

MEASUREMENT PREDICTION
This is the coordinate frame transform from world to sensor: given the predicted state μ̄_k and the map M, it predicts the location ẑ_k and the location uncertainty of the expected observations in sensor coordinates, in our case the Hough domain.

Equations
MAP:
M = {m_j = (ρ_j, α_j) | j = 1, …, NOL}, NOL = number of landmarks

Observation Model:
The parameters (ρ_j, α_j) of a matched line segment from the global map (according to global coordinates) are transformed into the parameters (ρ̂_j, α̂_j) (according to the estimated coordinates of the robot) by

α̂_j = α_j − θ̄_k
ρ̂_j = |ρ_j − (x̄_k cos α_j + ȳ_k sin α_j)|

where |·| denotes the absolute value. Thus the observation model can be defined by

ẑ_k = h(μ̄_k, M) = [(ρ̂_1, α̂_1), …, (ρ̂_N, α̂_N)]^T, N = number of matched landmarks
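
A sketch of this transform as a C helper (hypothetical name; the actual implementation lives in State_correction.c):

#include <math.h>

/* Predicted (ro, alfa) of a global map line as seen from the
   estimated robot pose (x, y, theta). */
houg predict_line_measurement(houg map_line, double x, double y, double theta)
{
    houg z;
    z.alfa = map_line.alfa - theta;
    z.ro   = map_line.ro - (x * cos(map_line.alfa) + y * sin(map_line.alfa));
    if (z.ro < 0) {                  /* keep ro non-negative, flip normal */
        z.ro = -z.ro;
        z.alfa += M_PI;
    }
    /* wrap alfa to (-pi, pi] */
    while (z.alfa > M_PI)   z.alfa -= 2.0 * M_PI;
    while (z.alfa <= -M_PI) z.alfa += 2.0 * M_PI;
    return z;
}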

Code for this Module:


State_correction.c in ekf_localization programs folder.
Variables:
M : map
ẑ : map_local

DATA ASSOCIATION (Matching)

Local Hough space in the robot's frame, with blue asterisks as the predicted map and the associated (red) and non-associated (yellow) local observations.
This step associates the predicted measurements ẑ_k with the observations z_k in the local robot frame. The data association was done in the following steps (see the sketch after this list):
1. For every observation, each landmark was checked, and the map Hough points that satisfied a validation gate in the Hough domain (both |ρ_i − ρ̂_j| and |α_i − α̂_j| below fixed thresholds) were added to a candidate list.
2. From all the Hough points in the list, only the line segment having the minimum midpoint distance, within a threshold of half the length of that map line segment, was associated.
3. The association was maintained in an array association_list, of size equal to the number of observations, which contained the indices of the matched map segments.
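
The two-stage matching can be sketched as follows (the gate thresholds are illustrative; the actual values live in State_correction.c):

#include <math.h>

#define GATE_RO    0.3   /* illustrative ro gate, meters    */
#define GATE_ALFA  0.2   /* illustrative alfa gate, radians */

/* For one observation, return the index of the matched map segment,
   or -1.  map_houg/map_mid/map_len describe the NOL map segments
   (Hough parameters, midpoints, lengths); obs_mid is the observed
   segment's midpoint in the same frame. */
int associate(houg obs, point2d obs_mid,
              const houg map_houg[], const point2d map_mid[],
              const double map_len[], int NOL)
{
    int j, best = -1;
    double best_d = 1e9;
    double dx, dy, d;

    for (j = 0; j < NOL; j++) {
        /* step 1: validation gate in the Hough domain */
        if (fabs(obs.ro - map_houg[j].ro) > GATE_RO ||
            fabs(obs.alfa - map_houg[j].alfa) > GATE_ALFA)
            continue;
        /* step 2: minimum midpoint distance, within half the map
           segment's length */
        dx = obs_mid.x - map_mid[j].x;
        dy = obs_mid.y - map_mid[j].y;
        d = sqrt(dx * dx + dy * dy);
        if (d < best_d && d < 0.5 * map_len[j]) {
            best_d = d;
            best = j;
        }
    }
    return best;   /* stored in association_list[observation index] */
}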
CODE FOR THIS MODULE:
State_correction.c in ekf_localization programs folder.

STATE CORRECTION / UPDATE

The figure above is a snapshot from the offline visualization showing the correction for a stationary robot. The black triangle is the initial (wrong) pose, the blue one is the true pose, and the red triangle is an intermediate step of the EKF correction.
Equations
Innovation: v_k = z_k − ẑ_k
Innovation covariance: S_k = H_k Σ̄_k H_k^T + R_k, where R_k is the measurement noise covariance
Kalman gain: K_k = Σ̄_k H_k^T S_k^{-1}
Updated mean: μ_k = μ̄_k + K_k v_k
Updated covariance: Σ_k = (I − K_k H_k) Σ̄_k

Code for this Module:


State_correction.c in ekf_localization programs folder.
Variables:
S_k : S
R_k : R
K_k : K
I : I
μ_k : localization_variables->state
Σ_k : localization_variables->covariance
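
One practical detail: the α component of the innovation z_k − ẑ_k is an angle difference and should be wrapped to (−π, π] before the update, otherwise a line observed near ±π can produce a large spurious correction. A minimal helper (not from the original code):

#include <math.h>

/* Wrap an angular innovation into (-pi, pi]. */
double wrap_angle(double a)
{
    while (a > M_PI)   a -= 2.0 * M_PI;
    while (a <= -M_PI) a += 2.0 * M_PI;
    return a;
}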

VISUALIZATION
The offline visualization/simulation was done in MATLAB using the log files maintained during the onboard localization of the Khepera 3. The code for it is ekf_localization.m, with every folder bearing the test date. The visualizations were used to mitigate errors and for debugging purposes. The simulations helped in better understanding the working of the algorithm. The only information used for visualization was the log files maintained during the execution of the algorithm onboard the robot.
All the data recorded throughout the activity period is saved in the visualization folder. Inside it, data is saved in subfolders marked with the date of the experiment. The subfolders contain code named ekf_localization.m. Before executing a particular test file, please set the initial and actual pose variables for that particular test; this information is given in the readme.txt file of every subfolder. Almost all the results were recorded onboard, while the corrections were examined afterwards using these visualizations.
Some of the best results can be seen in the 13th July and 14th July subfolders, which were recorded after fully debugging the algorithm.

LOG FILES/SAVED DATA


The following log files were saved for visualization and simulation of the localization algorithm:
Map.log - map points in the global/inertial frame.
Map_hough.log - Hough transform of the map line segments and other map information.
State_info.log - estimated state after each localization step.
Covariance_info.log - covariance matrix after each localization step.
Segment_info.log - observed line segment information in the global frame.
Hough_info.log - Hough transform and midpoint information of the observations in the global frame.
Local_map.log - Hough transform of the map line segments in the frame of the robot.
Local_obv.log - Hough transform of the observed line segments in the frame of the robot.
Error_estimation.log - position and orientation error (this file was used only to check the correctness of the localization algorithm).
Data_association.log - association array, to visualize associated and non-associated Hough points.
Sample_time.log - sample time of each step of the localization algorithm.
Laser_st.log - sample time of each laser raw data request and acquisition.
Line_extraction_st.log - sample time for each step of the laser line extraction (split-and-merge recursive algorithm).
State_est.log - estimated state of the robot after each correction step.
State_odo.log - predicted state of the robot after every odometry prediction step.

TESTS AND RESULTS


The very first experiments ran the algorithm with only odometry prediction, and the results (in visualization/odometry_only) show that the robot's odometry is quite good.
Tests were then done to check whether the correction step was working correctly by feeding the complete algorithm with false observations (a transformed map without noise) and giving an initial error to both position and orientation, to check whether the error converges to zero. The results are as follows:

Orientation error (initial orientation error 10 degrees)

Position error (initial position error ~ 15 cms)


Data taken from fake date/complete_data_29.6 subfolder in visualization.

Then the tests were performed for convergence with real observations from
the laser scanner. The results are as follows

Orientation error (initial orientation error ~15 degrees)

Position error (initial position error ~ 12 cms)


Data taken from 13.7 subfolder in visualization.

Tests were then performed with the moving robot running the complete algorithm onboard. It was found that the sampling time of each EKF step was too high for the current odometry goto function to work properly. The sampling times of the various steps expected to take a large amount of time were recorded to draw inferences.

The graph above shows the sampling time of one EKF step versus the number of samples.

The next graph shows the sampling time of raw data acquisition from the laser (in magenta) and the sampling time of the laser line extraction, the split-and-merge algorithm (in blue); it shows that the line extraction takes about double the time of the data acquisition step. It was inferred that this is because the laser returns a scan over a 240° span in the form of 726 readings, which form the input of the recursive line extraction algorithm.
As an improvement to the algorithm, the sampling time can be reduced by rejecting some of the 726 raw laser readings, thus feeding the line extraction algorithm with fewer points.
The complete EKF localization algorithm can serve as a base for further research in mobile robotics, such as simultaneous localization and mapping (SLAM) and cooperative control of multi-robot systems.
