

Chapter: Mobile Robots Localisation and Mapping

9/20/2013

Dr. T. Asokan
Email: asok@iitm.ac.in

Contents
- Mobile robot localisation
  - Odometry, dead reckoning
  - Map-based localisation
  - Markov localisation
  - Kalman filter
- Autonomous map building
  - SLAM
  - EKF SLAM

Building Blocks of Navigation

[Figure: the see-think-act cycle. Perception of the Real World Environment yields a Local Map; Localization ("Position") matches it against the Global Map / Environment Model; Cognition plans a Path; Motion Control executes it.]

Localization: Where am I?

[Figure: position update loop. The Encoder drives a Prediction of Position (e.g. odometry); observations are matched against the Map (database); matched observations and the predicted position feed the Position Update (estimation).]

Odometry, Dead Reckoning

Probabilistic Map-Based Localization

[Figure: localization based on external sensors, beacons or landmarks; Perception delivers raw sensor data or extracted features as the Observation.]

Challenges of Localization
- Knowing the absolute position (e.g. from GPS) is not sufficient.
- Localization at human scale is in relation with the environment.
- Planning in the Cognition step requires more than only the position as input.
- Perception and motion play an important role.
- Sensor noise
- Sensor aliasing
- Effector noise
- Odometric position estimation

Sensor Noise
- Sensor noise is mainly influenced by the environment (e.g. surface, illumination) or by the measurement principle itself (e.g. interference between ultrasonic sensors).
- Sensor noise drastically reduces the useful information of sensor readings. The solution is:
  - to take multiple readings into account
  - to employ temporal and/or multi-sensor fusion

Sensor Aliasing
- In robots, non-uniqueness of sensor readings is the norm.
- Even with multiple sensors, there is a many-to-one mapping from environmental states to the robot's perceptual inputs.
- Therefore the amount of information perceived by the sensors is generally insufficient to identify the robot's position from a single reading.
- Robot localization is usually based on a series of readings.
- Sufficient information is recovered by the robot over time.

Effector Noise: Odometry, Dead Reckoning
- Odometry and dead reckoning: the position update is based on proprioceptive sensors.
  - Odometry: wheel sensors only
  - Dead reckoning: also heading sensors
- The movement of the robot, sensed with wheel encoders and/or heading sensors, is integrated to the position.
  - Pros: straightforward, easy
  - Cons: errors are integrated and unbounded
- Using additional heading sensors (e.g. a gyroscope) might help to reduce the cumulated errors, but the main problems remain the same.
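As a concrete sketch (not from the slides), the dead-reckoning integration above can be written for a differential-drive robot as follows; the wheelbase and the encoder increments are made-up illustrative values:

```python
import math

def odometry_update(x, y, theta, ds_left, ds_right, b):
    """Integrate one odometry step for a differential-drive robot.

    ds_left, ds_right: distances travelled by each wheel (from the encoders)
    b: wheelbase (distance between the two wheels)
    """
    ds = (ds_right + ds_left) / 2.0        # travel distance of the robot centre
    dtheta = (ds_right - ds_left) / b      # change in heading
    # Midpoint rule: evaluate the heading at theta + dtheta/2
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta

# Example: the robot drives straight, then arcs slightly left
pose = (0.0, 0.0, 0.0)
for ds_l, ds_r in [(0.10, 0.10), (0.10, 0.12), (0.10, 0.12)]:
    pose = odometry_update(*pose, ds_l, ds_r, b=0.30)
```

Because the step errors are simply summed into the pose, any systematic bias (e.g. unequal wheel diameters) accumulates without bound, which is exactly the weakness noted above.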

Odometry: Error Sources
- Deterministic (systematic) errors can be eliminated by proper calibration of the system.
- Non-deterministic (non-systematic) errors have to be described by error models and will always lead to uncertain position estimates.

Major error sources:
- Limited resolution during integration (time increments, measurement resolution)
- Misalignment of the wheels (deterministic)
- Unequal wheel diameter (deterministic)
- Variation in the contact point of the wheel
- Unequal floor contact (slipping, non-planar surface)

Types of error: range error, turn error, drift error

Odometry: Error Model for Differential Drive Robot

Robot pose: $p = \begin{bmatrix} x & y & \theta \end{bmatrix}^T$

Kinematics, with $\Delta s = (\Delta s_r + \Delta s_l)/2$ the absolute value of the travel distance and $\Delta\theta = (\Delta s_r - \Delta s_l)/b$ the change of heading:

$p' = p + \begin{bmatrix} \Delta s \cos(\theta + \Delta\theta/2) \\ \Delta s \sin(\theta + \Delta\theta/2) \\ \Delta\theta \end{bmatrix}$

Error model: the error propagation model yields the covariance model of the position estimate,

$\hat P_t = F_x P_{t-1} F_x^T + F_u Q_t F_u^T$

The Error Propagation Law
- One-dimensional case of a nonlinear error propagation problem.
- It can be shown that the output covariance matrix $C_Y$ is given by the error propagation law:

$C_Y = F_X C_X F_X^T$

where
- $C_X$: covariance matrix representing the input uncertainties
- $C_Y$: covariance matrix representing the propagated uncertainties for the outputs
- $F_X$: the Jacobian matrix, defined as $F_X = \nabla f = \left[\partial f / \partial X\right]$, which is the transpose of the gradient of $f(X)$.
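Numerically, the law above can be checked with a small sketch; the polar-to-Cartesian function f and the input variances used here are illustrative choices, not from the slides:

```python
import math

def propagate_covariance(F, C):
    """Error propagation law C_Y = F C F^T for the 2x2 case (plain lists)."""
    # First compute F C
    FC = [[sum(F[i][k] * C[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    # Then (F C) F^T
    return [[sum(FC[i][k] * F[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

# f(r, phi) = (r cos phi, r sin phi); Jacobian F_X evaluated at (r, phi)
r, phi = 2.0, math.pi / 4
F = [[math.cos(phi), -r * math.sin(phi)],
     [math.sin(phi),  r * math.cos(phi)]]
C_X = [[0.01, 0.0], [0.0, 0.001]]    # input variances in r and phi
C_Y = propagate_covariance(F, C_X)   # propagated x/y covariance
```

Note that even though the input uncertainties are uncorrelated (diagonal $C_X$), the nonlinear mapping produces correlated output uncertainties (non-zero off-diagonal terms in $C_Y$).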

Odometry: Growth of Pose Uncertainty for Straight-Line Movement

Note: errors perpendicular to the direction of movement are growing much faster!

Odometry: Growth of Pose Uncertainty for Movement on a Circle

Note: the error ellipse does not remain perpendicular to the direction of movement!

Odometry: Calibration of Errors I (Borenstein)
Self-study / seminar topic
- The unidirectional square path experiment
[BILD1 Borenstein]

Odometry: Calibration of Errors II (Borenstein [5])
- The bidirectional square path experiment
[BILD2/3 Borenstein]

Odometry: Calibration of Errors III (Borenstein [5])
- The deterministic and non-deterministic errors

Seminar topic: Position estimation for a mobile robot using vision and odometry.
Chenavier, F. (LETI-DSYS, CEA-CENG, Grenoble) and Crowley, J.L.,
Proceedings of the IEEE International Conference on Robotics and Automation, 1992, pp. 2588-2593, vol. 3.

Probabilistic, Map-Based Localization
- Consider a mobile robot moving in a known environment.
- As it starts to move, say from a precisely known location, it might keep track of its location using odometry.
- However, after some movement the robot will become very uncertain about its position.
- => Update using an observation of its environment: the observation leads to an estimate of the robot's position, which can then be fused with the odometric estimation to get the best possible update of the robot's actual position.

Terminology
- Path input: $X_t$
- Proprioceptive input: $U_t$
- Exteroceptive input: $Z_t$
- Map (landmarks): $M$
- Belief state: the best guess about the robot state,

  $bel(x_t) = p(x_t \mid z_{1:t}, u_{1:t})$

  (probability of the robot being at $x_t$ given all its past observations $z_{1:t}$ and all its past control inputs $u_{1:t}$).
- The belief state calculated before the new observation at t, just after the control input $u_t$, is defined as

  $\overline{bel}(x_t) = p(x_t \mid z_{1:t-1}, u_{1:t})$

  and is often termed the prediction; $bel(x_t)$ is the update.

Prediction (Action) Update and Perception (Measurement) Update
- Action update: action model ACT,

  $s'_t = \mathrm{ACT}(o_t, s_{t-1})$

  with $o_t$: encoder measurement, $s_{t-1}$: prior belief state.
  - Increases uncertainty.
- Perception update: perception model SEE,

  $s_t = \mathrm{SEE}(i_t, s'_t)$

  with $i_t$: exteroceptive sensor inputs, $s'_t$: predicted belief state.
  - Decreases uncertainty.

Improving the belief state by moving.

Probabilistic, Map-Based Localization
Given:
- the position estimate $\hat p(k)$ and its covariance for time k,
- the current control input $u(k)$,
- the current set of observations $Z(k+1)$, and
- the map $M(k)$,
compute the:
- new (a posteriori) position estimate $\hat p(k+1)$ and
- its covariance.

Such a procedure usually involves five steps:

The Five Steps for Map-Based Localization

[Figure: the Encoder drives the prediction of position (odometry); on-board sensors deliver the observation (raw sensor data or extracted features); the Map (database) yields predicted feature observations; Matching produces matched predictions and observations, which feed the Estimation (fusion).]

1. Prediction based on previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on prediction and map
4. Matching of observation and map
5. Estimation -> position update (a posteriori position)

Markov vs. Kalman Filter Localization
Markov localization:
- localization starting from any unknown position;
- recovers from ambiguous situations.
- However, updating the probability of all positions within the whole state space at any time requires a discrete representation of the space (grid). The required memory and computing power can thus become very large if a fine grid is used.

Kalman filter localization:
- tracks the robot and is inherently very precise and efficient.
- However, if the uncertainty of the robot becomes too large (e.g. after a collision with an object), the Kalman filter will fail and the position is definitively lost.

Markov Localization
- Markov localization uses an explicit, discrete representation for the probability of all positions in the state space.
- This is usually done by representing the environment by a grid or a topological graph with a finite number of possible states (positions).
- During each update, the probability of each state (element) of the entire space is updated.

Terminology
- $P(A)$: probability that A is true.
  e.g. $p(r_t = l)$: probability that the robot r is at position l at time t.
- We wish to compute the probability of each individual robot position given actions and sensor measures.
- $P(A \mid B)$: conditional probability of A given that we know B.
  e.g. $p(r_t = l \mid i_t)$: probability that the robot is at position l given the sensor input $i_t$.
- Product rule: $p(A \wedge B) = p(A \mid B)\, p(B)$
- Bayes rule: $p(A \mid B) = \dfrac{p(B \mid A)\, p(A)}{p(B)}$
- Total probability:

  $p(A) = \sum_B p(A \mid B)\, p(B)$ for discrete probabilities

  $p(A) = \int_B p(A \mid B)\, p(B)\, dB$ for continuous probabilities
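The rules above can be verified with a tiny numeric sketch; the two candidate positions and all probability values are made up for illustration:

```python
# Two candidate positions l1, l2; prior p(r = l) and sensor model p(i | r = l)
prior = {"l1": 0.5, "l2": 0.5}
likelihood = {"l1": 0.8, "l2": 0.2}   # hypothetical p(i | r = l)

# Total probability: p(i) = sum over l of p(i | l) p(l)
p_i = sum(likelihood[l] * prior[l] for l in prior)

# Bayes rule: p(l | i) = p(i | l) p(l) / p(i)
posterior = {l: likelihood[l] * prior[l] / p_i for l in prior}
# posterior["l1"] -> 0.8, posterior["l2"] -> 0.2
```

With a uniform prior the posterior simply mirrors the likelihoods; a non-uniform prior would shift it accordingly.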

Markov Model Localisation
- Markov assumption: the update depends only on the previous state and its most recent actions and perception.
- Prediction update: the robot estimates its current position based on the previous position and the odometric input.

  $\overline{bel}(x_t) = \sum_{x_{t-1}} p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})$ for discrete probabilities

  $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$ for continuous probabilities

  (Recall $bel(x_t) = p(x_t \mid z_{1:t}, u_{1:t})$.)
- Perception update: the robot corrects its predicted position by combining it with the information from the exteroceptive sensors.

  $bel(x_t) = p(z_t \mid x_t, M)\, \overline{bel}(x_t)$

General Algorithm for Markov Localisation

for all $x_t$ do
  $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$
  $bel(x_t) = p(z_t \mid x_t, M)\, \overline{bel}(x_t)$
end
return $bel(x_t)$
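The algorithm above can be sketched on a 1-D grid as follows; the corridor size, the motion model, and the sensor likelihoods are hypothetical, and a normalization step is added so the belief sums to one:

```python
def markov_step(belief, motion_model, z_likelihood):
    """One Markov localization step on a 1-D grid.

    belief: list, bel(x_{t-1}) over the grid cells
    motion_model[i][j]: p(x_t = i | u_t, x_{t-1} = j)
    z_likelihood[i]: p(z_t | x_t = i, M)
    """
    n = len(belief)
    # Prediction update: bel_bar(x_t) = sum over x_{t-1}
    bel_bar = [sum(motion_model[i][j] * belief[j] for j in range(n))
               for i in range(n)]
    # Perception update: bel(x_t) proportional to p(z_t|x_t,M) bel_bar(x_t)
    bel = [z_likelihood[i] * bel_bar[i] for i in range(n)]
    eta = sum(bel)                      # normalizer
    return [b / eta for b in bel]

# Example: 5-cell corridor, command "move right one cell" (deterministic here)
n = 5
belief = [1.0 / n] * n                               # uniform initial belief
move_right = [[1.0 if i == (j + 1) % n else 0.0 for j in range(n)]
              for i in range(n)]
sensor = [0.1, 0.1, 0.7, 0.05, 0.05]                 # hypothetical p(z|x,M)
belief = markov_step(belief, move_right, sensor)
```

A noisy motion model would simply spread each column of `move_right` over neighbouring cells, which is what makes the prediction update increase uncertainty.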

Markov Localization: Case Study 1 - Grid Map
- Fine fixed-decomposition grid (x, y, θ), 15 cm x 15 cm x 1°.
- The initial belief is considered to be a uniform distribution from 0 to 3, as in (a).
- Probabilistic motion model of odometry.
(Courtesy of W. Burgard)

Action update: sum over the previous possible positions and the motion model:

$\overline{bel}(x_1) = \sum_{x_0 = 0} p(x_1 \mid u_1, x_0)\, bel(x_0)$

Measurement update: assume a range finder measures the distance from the origin, with a probability as in (d):

$bel(x_1) = p(z_1 \mid x_1, M)\, \overline{bel}(x_1)$

Markov Localization: Case Study - Grid Map (3)
The 1D case:
1. Start: no knowledge at start, thus we have a uniform probability distribution bel(x).
2. Robot perceives the first pillar: seeing only one pillar, the probability P(z|x,M) of being at pillar 1, 2 or 3 is equal.
3. Robot moves: the action model enables estimating the new probability distribution bel(x) based on the previous one and the motion.
4. Robot perceives the second pillar: based on all prior knowledge, the probability bel(x) of being at pillar 2 becomes dominant.

Seminar Topics / Project
- Markov localisation using a topological map (seminar)
- Mobile robot localisation using a laser range finder (project)

Markov Localization: Case Study 2 - Grid Map (5)
Example 2: Museum, laser scan 1 (Courtesy of W. Burgard)

Markov Localization: Case Study 2 - Grid Map (6)
Example 2: Museum, laser scan 2 (Courtesy of W. Burgard)

Markov Localization: Case Study 2 - Grid Map (7)
Example 2: Museum, laser scan 3 (Courtesy of W. Burgard)

Markov Localization: Case Study 2 - Grid Map (8)
Example 2: Museum, laser scan 13 (Courtesy of W. Burgard)

Markov Localization: Case Study 2 - Grid Map (9)
Example 2: Museum, laser scan 21 (Courtesy of W. Burgard)

Markov Localization: Reducing Complexity
- Fine fixed-decomposition grids result in a huge state space:
  - very large processing power needed
  - large memory requirement
- Various approaches have been proposed for reducing complexity.
- The main goal is to reduce the number of states that are updated in each step.

Randomized Sampling / Particle Filter
- Approximate the belief state by representing only a representative subset of all states (possible locations), e.g. update only 10% of all possible locations.
- The sampling process is typically weighted, e.g. put more samples around the local peaks in the probability density function.
- However, you have to ensure some less likely locations are still tracked, otherwise the robot might get lost.
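A minimal sketch of one weighted-sampling step on a 1-D corridor; the true position, noise levels, and likelihood model are all made-up illustrative values:

```python
import math
import random

def resample(particles, weights):
    """Weighted resampling: draw a new particle set with probability
    proportional to the weights (low-variance variants also exist)."""
    return random.choices(particles, weights=weights, k=len(particles))

def particle_filter_step(particles, move, weigh):
    # 1. Motion update: propagate each particle with noise (increases spread)
    particles = [move(p) for p in particles]
    # 2. Measurement update: weight each particle by observation likelihood
    weights = [weigh(p) for p in particles]
    # 3. Resampling: concentrate particles around the likely locations
    return resample(particles, weights)

# Example: true position near x = 2.0 in a 5 m corridor (hypothetical)
random.seed(0)
particles = [random.uniform(0.0, 5.0) for _ in range(500)]
move = lambda x: x + random.gauss(0.1, 0.02)          # commanded step + noise
weigh = lambda x: math.exp(-((x - 2.0) ** 2) / 0.5)   # likelihood around 2.0
particles = particle_filter_step(particles, move, weigh)
```

The resampling step is what keeps the computation bounded: effort concentrates on the peaks of the belief, at the cost of possibly dropping low-probability locations, which is the tracking risk noted above.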

Kalman Filter Localization
- A mathematical mechanism for producing an optimal estimate of the system state, based on knowledge of the system and the measuring device, the description of system noise and measurement errors, and the uncertainty in the dynamic models.
- A powerful method for sensor fusion.
- The system is assumed to be linear, with white Gaussian noise.
- The initial position (initial belief) is also assumed to be Gaussian.

Introduction to Kalman Filter
- Two measurements $\hat q_1$ and $\hat q_2$ of the same quantity, with variances $\sigma_1^2$ and $\sigma_2^2$.
- Fused estimate: $\hat q = \hat q_1 + K(\hat q_2 - \hat q_1)$, with Kalman gain $K = \sigma_1^2 / (\sigma_1^2 + \sigma_2^2)$.
- Fused variance: $\sigma^2 = \sigma_1^2 \sigma_2^2 / (\sigma_1^2 + \sigma_2^2)$; note that $\sigma^2$ is smaller than both $\sigma_1^2$ and $\sigma_2^2$.
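The two-measurement fusion above can be checked numerically; the measurement values and variances below are illustrative:

```python
def fuse(q1, var1, q2, var2):
    """Fuse two scalar measurements of the same quantity.

    Kalman gain K = var1 / (var1 + var2); the fused variance
    var1*var2/(var1+var2) is smaller than either input variance.
    """
    K = var1 / (var1 + var2)
    q = q1 + K * (q2 - q1)
    var = var1 * var2 / (var1 + var2)   # equivalently (1 - K) * var1
    return q, var

q, var = fuse(10.0, 4.0, 12.0, 1.0)
# The fused estimate lies closer to the more certain measurement (12.0),
# and the fused variance is smaller than both 4.0 and 1.0.
```

This scalar case is the whole Kalman filter in miniature: the gain weighs the correction by the relative confidence of the two sources, and fusing always reduces the variance.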

Kalman Filter Localisation for Mobile Robots

Prediction update and measurement update:
- Prediction update: the robot's position at time step t is predicted based on its old location (time step t-1) and its movement due to the control input u(t) (odometric estimation).
- Measurement update:
  - Observation: sensor measurements $Z_t$.
  - Measurement prediction $\hat Z_t$: the predicted observations are what the robot expects to see at the predicted location $\hat x_t$, expressed in the sensor frame: $\hat z_t^j = h^j(\hat x_t, M)$.
  - Matching: produce an assignment from the observation to the prediction.
    - Innovation: a measure of the difference between predicted and observed features.
    - Validation gate.
  - Estimation.

Estimation

Innovation: $v_t^{ij} = Z_t^i - \hat Z_t^j = Z_t^i - h^j(\hat x_t, m^j)$

Innovation covariance: $\Sigma_{IN,t} = H^j \hat P_t H^{jT} + R_t^i$

Kalman gain: $K_t = \hat P_t H_t^T (\Sigma_{IN,t})^{-1}$

State update: $x_t = \hat x_t + K_t v_t$

Covariance update: $P_t = \hat P_t - K_t \Sigma_{IN,t} K_t^T$

The best estimate $x_t$ of the robot state at time t is equal to the best prediction of the state $\hat x_t$ before the new measurement $Z_t$, plus a correction term: an optimal weighting value $K_t$ times the difference between $Z_t$ and the best prediction $\hat Z_t$ at time t.

Kalman Filter for Mobile Robot Localization

Robot Position Prediction: Example (odometry)

$\hat x_t = \begin{bmatrix} x_{t-1} + \frac{\Delta s_r + \Delta s_l}{2} \cos\!\left(\theta_{t-1} + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\ y_{t-1} + \frac{\Delta s_r + \Delta s_l}{2} \sin\!\left(\theta_{t-1} + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\ \theta_{t-1} + \frac{\Delta s_r - \Delta s_l}{b} \end{bmatrix}$

$\hat P_t = F_x P_{t-1} F_x^T + F_u Q_t F_u^T, \qquad Q_t = \begin{bmatrix} k_r |\Delta s_r| & 0 \\ 0 & k_l |\Delta s_l| \end{bmatrix}$

[Figure: odometry prediction step from $X_{t-1}$ to $\hat X_t$.]
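Putting the prediction step above into code (a sketch; the Jacobians follow from differentiating the differential-drive kinematics, and the wheelbase b, error constants k_r, k_l, and pose values are illustrative):

```python
import math

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def ekf_predict(pose, P, ds_r, ds_l, b, k_r, k_l):
    x, y, th = pose
    ds = (ds_r + ds_l) / 2.0
    dth = (ds_r - ds_l) / b
    a = th + dth / 2.0
    pose_new = (x + ds * math.cos(a), y + ds * math.sin(a), th + dth)

    # Jacobian F_x of the motion model w.r.t. the state (x, y, theta)
    Fx = [[1.0, 0.0, -ds * math.sin(a)],
          [0.0, 1.0,  ds * math.cos(a)],
          [0.0, 0.0,  1.0]]
    # Jacobian F_u w.r.t. the wheel increments (ds_r, ds_l)
    c, s = math.cos(a), math.sin(a)
    Fu = [[c / 2 - ds * s / (2 * b), c / 2 + ds * s / (2 * b)],
          [s / 2 + ds * c / (2 * b), s / 2 - ds * c / (2 * b)],
          [1.0 / b, -1.0 / b]]
    # Motion noise Q_t grows with the distance travelled by each wheel
    Q = [[k_r * abs(ds_r), 0.0], [0.0, k_l * abs(ds_l)]]

    P_new = matmul(matmul(Fx, P), transpose(Fx))
    FQFt = matmul(matmul(Fu, Q), transpose(Fu))
    P_new = [[P_new[i][j] + FQFt[i][j] for j in range(3)] for i in range(3)]
    return pose_new, P_new

# One straight step from a perfectly known pose: uncertainty starts growing
pose, P = (0.0, 0.0, 0.0), [[0.0] * 3 for _ in range(3)]
pose, P = ekf_predict(pose, P, 0.1, 0.1, 0.3, 0.01, 0.01)
```

Even with zero initial covariance, the predicted covariance is non-zero after one step, which is exactly the "prediction increases uncertainty" behaviour described above.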

Kalman Filter for Mobile Robot Localization

Observation
- The second step is to obtain the observations $Z_t$ (measurements) from the robot's sensors at the new location at time t.
- The observation usually consists of a set of $n_0$ single observations $z_t^j$ extracted from the different sensor signals. It can represent raw data scans as well as features like lines, doors, or any other kind of landmarks.
- The parameters of the targets are usually observed in the sensor frame {S}. Therefore:
  - the observations have to be transformed to the world frame {W}, or
  - the measurement predictions have to be transformed to the sensor frame {S}.
  - This transformation is specified in the function $h^j$.

Kalman Filter for Mobile Robot Localization

Observation: Example
- Raw data of the laser scanner
- Extracted lines
- Extracted lines in model space

Line j is parameterised by $(\alpha_j, r_j)$; a single observation in the sensor (robot) frame is

$z_t^i = \begin{bmatrix} \alpha_t^i \\ r_t^i \end{bmatrix}$

with sensor covariance $R_t^i = \begin{bmatrix} \sigma_{\alpha\alpha} & 0 \\ 0 & \sigma_{rr} \end{bmatrix}$

Kalman Filter for Mobile Robot Localization

Measurement Prediction
- In the next step we use the predicted robot position $\hat x_t$ and the map M to generate multiple predicted observations $\hat z_t^j$.
- They have to be transformed into the sensor frame:

  $\hat z_t^j = h^j(\hat x_t, M)$

- We can now define the measurement prediction as the set containing all n predicted observations.
- The function $h^j$ is mainly the coordinate transformation between the world frame and the sensor frame.

Kalman Filter for Mobile Robot Localization

Measurement Prediction: Example
- For the prediction, only the walls that are in the field of view of the robot are selected.
- This is done by linking the individual lines to the nodes of the path.

Kalman Filter for Mobile Robot Localization

Measurement Prediction: Example
- The generated measurement predictions have to be transformed to the robot frame {R}.
- According to the figure in the previous slide, the transformation is given by

$\hat z_t^j = \begin{bmatrix} \hat\alpha_t^j \\ \hat r_t^j \end{bmatrix} = h^j(\hat x_t, m^j) = \begin{bmatrix} {}^{W}\alpha_j - \hat\theta_t \\ {}^{W}r_j - \left( \hat x_t \cos({}^{W}\alpha_j) + \hat y_t \sin({}^{W}\alpha_j) \right) \end{bmatrix}$

and its Jacobian by

$H_t^j = \begin{bmatrix} \frac{\partial\hat\alpha}{\partial\hat x} & \frac{\partial\hat\alpha}{\partial\hat y} & \frac{\partial\hat\alpha}{\partial\hat\theta} \\ \frac{\partial\hat r}{\partial\hat x} & \frac{\partial\hat r}{\partial\hat y} & \frac{\partial\hat r}{\partial\hat\theta} \end{bmatrix} = \begin{bmatrix} 0 & 0 & -1 \\ -\cos({}^{W}\alpha_j) & -\sin({}^{W}\alpha_j) & 0 \end{bmatrix}$

[Figure: observation vs. measurement prediction.]

Kalman Filter for Mobile Robot Localization

Matching
- Assignment from the observations $z_t^i$ (gained by the sensors) to the targets (stored in the map).
- For each measurement prediction for which a corresponding observation is found, we calculate the innovation:

$v_t^{ij} = Z_t^i - \hat Z_t^j = Z_t^i - h^j(\hat x_t, m^j) = \begin{bmatrix} \alpha_t^i - ({}^{W}\alpha_j - \hat\theta_t) \\ r_t^i - \left( {}^{W}r_j - \left( \hat x_t \cos({}^{W}\alpha_j) + \hat y_t \sin({}^{W}\alpha_j) \right) \right) \end{bmatrix}$

and its innovation covariance is found by applying the error propagation law:

$\Sigma_{IN,t}^{ij} = H^j \hat P_t H^{jT} + R_t^i$

with sensor covariance $R_t^i = \begin{bmatrix} \sigma_{\alpha\alpha} & 0 \\ 0 & \sigma_{rr} \end{bmatrix}$

- The validity of the correspondence between measurement and prediction can e.g. be evaluated through the Mahalanobis distance.

Kalman Filter for Mobile Robot Localization

Matching: Example

Kalman Filter for Mobile Robot Localization

Estimation: Applying the Kalman Filter
- Kalman filter gain: $K_t = \hat P_t H_t^T (\Sigma_{IN,t})^{-1}$
- Update of the robot's position estimate: $x_t = \hat x_t + K_t v_t$
- The associated covariance: $P_t = \hat P_t - K_t \Sigma_{IN,t} K_t^T$
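A sketch of these update equations for the special case of a single scalar measurement (e.g. a hypothetical compass reading of θ, so the measurement Jacobian is H = [0, 0, 1]; all numbers below are illustrative):

```python
def ekf_update_scalar(pose, P, z, h, H, R):
    """Kalman update x_t = x_hat + K v with one scalar measurement.

    z: observed value, h: predicted measurement h(x_hat),
    H: 1x3 measurement Jacobian (as a flat list), R: measurement variance.
    """
    v = z - h                                      # innovation
    PHt = [sum(P[i][j] * H[j] for j in range(3)) for i in range(3)]
    S = sum(H[i] * PHt[i] for i in range(3)) + R   # innovation covariance
    K = [PHt[i] / S for i in range(3)]             # Kalman gain
    pose = tuple(pose[i] + K[i] * v for i in range(3))
    P = [[P[i][j] - K[i] * S * K[j] for j in range(3)]   # P - K S K^T
         for i in range(3)]
    return pose, P

# Hypothetical compass: measure theta directly
pose = (1.0, 2.0, 0.1)
P = [[0.04, 0.0, 0.0], [0.0, 0.04, 0.0], [0.0, 0.0, 0.01]]
pose, P = ekf_update_scalar(pose, P, z=0.2, h=0.1, H=[0.0, 0.0, 1.0], R=0.01)
```

The heading estimate moves halfway toward the compass reading (since the prior and measurement variances are equal here), and its variance shrinks, while the x and y entries are untouched because the compass carries no information about them.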

Kalman Filter for Mobile Robot Localization

Estimation: Example
- Kalman filter estimation of the new robot position.
- By fusing the prediction of the robot position (magenta) with the innovation gained by the measurements (green), we get the updated estimate of the robot position (red).

Autonomous Map Building
- Starting from an arbitrary initial point, a mobile robot should be able to autonomously explore the environment with its on-board sensors, gain knowledge about it, interpret the scene, build an appropriate map, and localize itself relative to this map.

SLAM: Simultaneous Localization and Mapping
- SLAM is a process by which a mobile robot can build a map of an environment and at the same time use this map to deduce its location. In SLAM, both the trajectory of the platform and the locations of all landmarks are estimated online, without the need for any a priori knowledge of location.

General Map Building Schematics

Mathematical Formulation of SLAM
- $X_T$: robot path
- $U_T$: robot motion
- $M$: true map
- $Z_t$: observations in the sensor frame
- The SLAM problem is recovering the model map M and the robot path $X_T$ from the odometry $U_T$ and the observations $Z_T$.
- Online SLAM: $P(x_t, M \mid Z_t, U_T)$

Extended Kalman Filter SLAM
- Extended state vector: $y_t = (x_t, m_0, m_1, \ldots, m_{n-1})^T$
- $m_i$: features in the map
- Prediction
- Measurement model (observation)
- Estimation
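To make the extended state vector concrete, here is a minimal sketch of how the state and covariance grow when a landmark is first observed; the cross-covariances between the new landmark and the rest of the state are left at zero for brevity, whereas a full EKF-SLAM implementation must fill them in (they are what lets landmark observations correct the robot pose):

```python
def augment_state(y, P, landmark, landmark_var):
    """Append a newly observed landmark (x, y) to the EKF-SLAM state.

    y: flat state vector [robot pose, landmarks...], P: covariance matrix.
    landmark_var: (var_x, var_y) of the new landmark estimate.
    Cross-covariances are left at zero here for simplicity.
    """
    n = len(y)
    y = y + list(landmark)
    P_new = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(n):                  # copy the existing covariance block
        for j in range(n):
            P_new[i][j] = P[i][j]
    P_new[n][n] = landmark_var[0]       # uncertainty of the new landmark
    P_new[n + 1][n + 1] = landmark_var[1]
    return y, P_new

# Start with the robot pose only, then add two hypothetical landmarks
y, P = [0.0, 0.0, 0.0], [[0.0] * 3 for _ in range(3)]
y, P = augment_state(y, P, (2.0, 1.0), (0.1, 0.1))
y, P = augment_state(y, P, (4.0, -1.0), (0.1, 0.1))
# len(y) is now 7: pose (3) + two landmarks (2 each)
```

This growth of the state is also EKF-SLAM's main cost: with n landmarks the covariance matrix has O(n^2) entries, and each update touches all of them.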

Seminar Topics
- Visual SLAM and applications
- Particle filter SLAM
- Open-source SLAM software

Resources: www.openslam.org (you can choose a SLAM project from this site)

Summary
- Localisation problem
- Challenges
- Odometry and error models
- Probabilistic, map-based localisation
- Markov and Kalman filter localisation
- Map building
- SLAM
- EKF SLAM
