Howie Choset
http://voronoi.sbp.ri.cmu.edu/~choset
RI 16-735, Howie Choset, with slides from George Kantor, G.D. Hager, and D. Fox
The Problem
• What is the world around me? (mapping)
– sense from various positions
– integrate measurements to produce a map
– assumes perfect knowledge of position
Localization
Representations for Robot Localization
Three Major Map Models
Computational complexity:
– Grid maps: grows with grid size and resolution
– Landmark maps: grows with the landmark covariance (N^2)
– Topological maps: minimal complexity
Atlas Framework
• Hybrid Solution:
– Local features extracted from local grid map.
– Local map frames created at complexity limit.
– Topology consists of connected local map frames.
Authors: Chong, Kleeman; Bosse, Newman, Leonard, Soika, Feiten, Teller
H-SLAM
What does a Kalman Filter do, anyway?
What’s so great about that?
x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
y(k) = H(k) x(k) + w(k)
How does it work?
x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
y(k) = H(k) x(k) + w(k)

1. prediction based on the last estimate:
\hat{x}(k+1|k) = F(k) \hat{x}(k|k) + G(k) u(k)

2. predict the output:
\hat{y}(k+1) = H(k) \hat{x}(k+1|k)

3. update the prediction:
\hat{x}(k+1|k+1) = \hat{x}(k+1|k) + \Delta x
Finding the correction (no noise!)

y = Hx

Given the prediction \hat{x}(k+1|k) and the output y, find \Delta x so that \hat{x} = \hat{x}(k+1|k) + \Delta x is the "best" estimate of x.

The "best" estimate comes from the shortest \Delta x; the shortest \Delta x is perpendicular to \Omega = \{x \mid Hx = y\}.

[Figure: the prediction \hat{x}(k+1|k) lies off the hyperplane \Omega; \Delta x is the perpendicular drop onto it.]
Some linear algebra
a is parallel to \Omega if Ha = 0
\mathrm{Null}(H) = \{a \neq 0 \mid Ha = 0\}
Finding the correction (no noise!)

Since the shortest \Delta x is perpendicular to \Omega, it is orthogonal to \mathrm{Null}(H) and therefore lies in the row space of H:

\Delta x \perp \Omega \Rightarrow \Delta x = H^T \gamma  for some \gamma

Assume \gamma is a linear function of the innovation \nu (a guess; let's face it, it has to be some function of the innovation):

\Rightarrow \Delta x = H^T K \nu  for some m x m matrix K
We require that the corrected estimate land on \Omega:

H(\hat{x}(k+1|k) + \Delta x) = y
\Rightarrow H \Delta x = y - H\hat{x}(k+1|k) = H(x - \hat{x}(k+1|k)) = \nu

Substituting \Delta x = H^T K \nu yields

H H^T K \nu = \nu
\Rightarrow K = (H H^T)^{-1}

\Delta x = H^T (H H^T)^{-1} \nu

The fact that the linear solution solves the equation makes assuming K is linear a kosher guess.
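A quick numerical check of this result: the correction is the minimum-norm solution of H \Delta x = \nu, i.e. the right pseudoinverse of H applied to the innovation. A minimal numpy sketch (the matrices and values are made-up examples, not from the slides):

import numpy as np

H = np.array([[1.0, 0.0]])        # observe only the first state component
x_pred = np.array([2.0, 1.0])     # prediction \hat{x}(k+1|k)
y = np.array([3.0])               # measured output

nu = y - H @ x_pred                       # innovation
dx = H.T @ np.linalg.solve(H @ H.T, nu)   # H^T (H H^T)^{-1} nu
x_hat = x_pred + dx
assert np.allclose(H @ x_hat, y)          # the update lands exactly on Omega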
A Geometric Interpretation
[Figure: the correction \Delta x carries the prediction \hat{x}(k+1|k) perpendicularly onto the hyperplane \Omega = \{x \mid Hx = y\}.]
A Simple State Observer
System:
x(k+1) = F x(k) + G u(k)
y(k) = H x(k)

Observer:
1. prediction: \hat{x}(k+1|k) = F \hat{x}(k|k) + G u(k)
2. compute correction: \Delta x = H^T (H H^T)^{-1} (y(k+1) - H\hat{x}(k+1|k))
3. update: \hat{x}(k+1|k+1) = \hat{x}(k+1|k) + \Delta x
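Putting the three steps together as a loop; a minimal sketch, assuming known (F, G, H) and using the noiseless correction above (the double-integrator values are illustrative, not from the slides):

import numpy as np

def observer_step(x_hat, u, y_next, F, G, H):
    # 1. prediction
    x_pred = F @ x_hat + G @ u
    # 2. correction: minimum-norm dx satisfying H dx = innovation
    nu = y_next - H @ x_pred
    dx = H.T @ np.linalg.solve(H @ H.T, nu)
    # 3. update
    return x_pred + dx

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # position-velocity system
G = np.array([[0.5], [1.0]])
H = np.array([[1.0, 0.0]])               # measure position only
x_hat = np.zeros(2)
x_hat = observer_step(x_hat, np.array([0.1]), np.array([0.06]), F, G, H)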
Caveat #1
Estimating a distribution for x
p(x) = \frac{1}{(2\pi)^{n/2} |P|^{1/2}} \exp\left(-\tfrac{1}{2}(x - \hat{x})^T P^{-1}(x - \hat{x})\right)

where P = E[(x - \hat{x})(x - \hat{x})^T] is the covariance matrix.
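For intuition, this density can be evaluated directly; a small sketch using scipy (the numbers are arbitrary examples):

import numpy as np
from scipy.stats import multivariate_normal

x_hat = np.array([0.0, 0.0])
P = np.array([[2.0, 0.5], [0.5, 1.0]])     # covariance of the estimate
belief = multivariate_normal(mean=x_hat, cov=P)
print(belief.pdf(np.array([1.0, -0.5])))   # p(x) at one point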
Finding the correction (geometric intuition)
\Omega = \{x \mid Hx = y\}

p(x) = \frac{1}{(2\pi)^{n/2} |P|^{1/2}} \exp\left(-\tfrac{1}{2}(x - \hat{x})^T P^{-1}(x - \hat{x})\right)

[Figure: level sets of p(x) around the prediction, intersecting the hyperplane \Omega.]
A new kind of distance
Then we can define a new norm:

\|x\|^2 = \langle x, x \rangle = x^T P^{-1} x

The shortest \Delta x in this norm satisfies \langle \omega, \Delta x \rangle = 0 for \omega in T\Omega = \mathrm{Null}(H).
Finding the correction (for real this time!)
Orthogonality in this norm means P^{-1}\Delta x lies in the row space of H, so \Delta x = P H^T K \nu. Substitution yields:

H \Delta x = \nu = H P H^T K \nu
\Rightarrow K = (H P H^T)^{-1}
\therefore \Delta x = P H^T (H P H^T)^{-1} \nu
A Better State Observer

System:
x(k+1) = F x(k) + G u(k) + v(k)
y(k) = H x(k)

where v(k) is a sample of a zero-mean Gaussian distribution with covariance Q.

We can create a better state observer following the same three steps, but now we must also estimate the covariance matrix P. We start with \hat{x}(k|k) and P(k|k).

Step 1: Prediction. Taking expected values, the zero-mean noise drops out:

\hat{x}(k+1|k) = F \hat{x}(k|k) + G u(k)
P(k+1|k) = E[(x(k+1) - \hat{x}(k+1|k))(x(k+1) - \hat{x}(k+1|k))^T]
Continuing Step 1
 = E[F(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T F^T + 2F(x_k - \hat{x}_k)v_k^T + v_k v_k^T]
 = F E[(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T] F^T + E[v_k v_k^T]   (the cross term vanishes: v_k is zero-mean and independent of the state error)
 = F P(k|k) F^T + Q

P(k+1|k) = F P(k|k) F^T + Q
Step 2: Computing the correction
\Delta x = P(k+1|k) H^T (H P(k+1|k) H^T)^{-1} (y(k+1) - H\hat{x}(k+1|k))

\Delta x = W\nu,  where W = P(k+1|k) H^T (H P(k+1|k) H^T)^{-1}
Step 3: Update
\hat{x}(k+1|k+1) = \hat{x}(k+1|k) + W\nu

P_{k+1} = E[(x_{k+1} - \hat{x}_{k+1})(x_{k+1} - \hat{x}_{k+1})^T]
 = E[(x_{k+1} - \hat{x}^-_{k+1} - W\nu)(x_{k+1} - \hat{x}^-_{k+1} - W\nu)^T]

(just take my word for it…)

P(k+1|k+1) = P(k+1|k) - W H P(k+1|k) H^T W^T
Just take my word for it…
P_{k+1} = E[(x_{k+1} - \hat{x}_{k+1})(x_{k+1} - \hat{x}_{k+1})^T]
 = E[(x_{k+1} - \hat{x}^-_{k+1} - W\nu)(x_{k+1} - \hat{x}^-_{k+1} - W\nu)^T]
 = E[(x_{k+1} - \hat{x}^-_{k+1})(x_{k+1} - \hat{x}^-_{k+1})^T - 2W\nu(x_{k+1} - \hat{x}^-_{k+1})^T + W\nu(W\nu)^T]
 = P^-_{k+1} + E[-2WH(x_{k+1} - \hat{x}^-_{k+1})(x_{k+1} - \hat{x}^-_{k+1})^T + WH(x_{k+1} - \hat{x}^-_{k+1})(x_{k+1} - \hat{x}^-_{k+1})^T H^T W^T]
 = P^-_{k+1} - 2WHP^-_{k+1} + WHP^-_{k+1}H^T W^T
 = P^-_{k+1} - 2P^-_{k+1}H^T(HP^-_{k+1}H^T)^{-1}HP^-_{k+1} + WHP^-_{k+1}H^T W^T
 = P^-_{k+1} - WHP^-_{k+1}H^T W^T   (the middle term equals WHP^-_{k+1}H^T W^T)
Better State Observer Summary
1. Predict:
\hat{x}(k+1|k) = F \hat{x}(k|k) + G u(k)
P(k+1|k) = F P(k|k) F^T + Q

2. Correction:
W = P(k+1|k) H^T (H P(k+1|k) H^T)^{-1}
\Delta x = W (y(k+1) - H\hat{x}(k+1|k))

3. Update:
\hat{x}(k+1|k+1) = \hat{x}(k+1|k) + W\nu
P(k+1|k+1) = P(k+1|k) - W H P(k+1|k) H^T W^T
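A direct transcription of the three steps in numpy; a sketch under this slide's assumptions (perfect measurements, process noise covariance Q):

import numpy as np

def predict(x_hat, P, u, F, G, Q):
    # step 1: propagate mean and covariance through the dynamics
    x_pred = F @ x_hat + G @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def correct_and_update(x_pred, P_pred, y, H):
    # step 2: gain and correction (no sensor noise term yet)
    S = H @ P_pred @ H.T
    W = P_pred @ H.T @ np.linalg.inv(S)
    nu = y - H @ x_pred                  # innovation
    # step 3: update mean and covariance
    x_new = x_pred + W @ nu
    P_new = P_pred - W @ S @ W.T         # = P_pred - W H P_pred H^T W^T
    return x_new, P_new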
• Note: there is a problem with the previous slide: the covariance matrix P of the estimate will be singular. This makes sense, because with perfect sensor measurements the uncertainty in some directions is zero; there is no uncertainty in the directions perpendicular to \Omega.

P lives in the state space, and the directions associated with (noiseless) sensing collapse to zero: in the update step, a zero-noise measurement squeezes P down.

In most cases the next prediction step adds the process covariance Q to P(k|k), the result is nonsingular, and everything is OK again. There is actually nothing wrong with this, except that you can't really call the result a "covariance" matrix, because sometimes it isn't one.
Finding the correction (with output noise)
y = Hx + w

The previous results require knowing which hyperplane \Omega = \{x \mid Hx = y\} to aim for. Because there is now sensor noise w, we don't know where to aim, so we can't use our method directly.
Projecting the prediction
(putting current state estimates into sensor space)
\hat{x}(k+1|k) \to \hat{y} = H\hat{x}(k+1|k)
P(k+1|k) \to \hat{R} = H P(k+1|k) H^T

[Figure: the state estimate (\hat{x}(k+1|k), P(k+1|k)) projected to the sensor-space estimate (\hat{y}, \hat{R}).]
Finding most likely output
The predicted output (\hat{y}, \hat{R}) and the measurement (y, R) are modeled as independent, so multiply their densities, because we want both to be true at the same time.
Most likely output (cont.)
y^* = \hat{y} + \hat{R}(\hat{R} + R)^{-1}(y - \hat{y})
 = H\hat{x}(k+1|k) + H P H^T (H P H^T + R)^{-1} (y - H\hat{x}(k+1|k))
Finding the Correction
Now we can compute the correction as we did in the noiseless case, this time using y^* instead of y. In other words, y^* tells us which hyperplane to aim for:

\Omega = \{x \mid Hx = y^*\}

The result is:

\Delta x = P H^T (H P H^T)^{-1} (y^* - H\hat{x}(k+1|k))

Note that we are not going all the way to y; we split the difference according to how confident we are in the sensor noise versus the process noise.
Finding the Correction (cont.)
\Delta x = P H^T (H P H^T)^{-1} (y^* - H\hat{x}(k+1|k))
 = P H^T (H P H^T)^{-1} (H\hat{x}(k+1|k) + H P H^T (H P H^T + R)^{-1}(y - H\hat{x}(k+1|k)) - H\hat{x}(k+1|k))
 = P H^T (H P H^T + R)^{-1} (y - H\hat{x}(k+1|k))

So that \Delta x = W (y - H\hat{x}(k+1|k)), with W = P H^T (H P H^T + R)^{-1}.
Correcting the Covariance Estimate
P(k+1|k+1) = P(k+1|k) - W (H P(k+1|k) H^T + R) W^T
LTI Kalman Filter Summary
Kalman Filter:

1. Predict:
\hat{x}(k+1|k) = F \hat{x}(k|k) + G u(k)
P(k+1|k) = F P(k|k) F^T + Q

2. Correction:
S = H P(k+1|k) H^T + R
W = P(k+1|k) H^T S^{-1}
\Delta x = W (y(k+1) - H\hat{x}(k+1|k))

3. Update:
\hat{x}(k+1|k+1) = \hat{x}(k+1|k) + W\nu
P(k+1|k+1) = P(k+1|k) - W S W^T
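The whole filter fits in a few lines of numpy; a minimal sketch of the summary above, with matrix names following the slides:

import numpy as np

def kalman_step(x_hat, P, u, y, F, G, H, Q, R):
    # 1. predict
    x_pred = F @ x_hat + G @ u
    P_pred = F @ P @ F.T + Q
    # 2. correction
    S = H @ P_pred @ H.T + R              # innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    nu = y - H @ x_pred                   # innovation
    # 3. update
    x_new = x_pred + W @ nu
    P_new = P_pred - W @ S @ W.T
    return x_new, P_new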
Kalman Filters
Kalman Filter for Dead Reckoning
• Robot moves along a straight line with state x = [x_r, v_r]^T
• Newton gives the dynamics (the model equations appear as figures on the slide), with process noise drawn from a zero-mean Gaussian; one plausible setup is sketched below.
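The slide's model matrices appear as figures; purely as an assumed illustration, a discrete-time double integrator driven by force (unit mass, timestep dt) matches the "Newton tells us" setup:

import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # position integrates velocity
G = np.array([[0.5 * dt**2], [dt]])     # acceleration (force / unit mass) input
H = np.array([[1.0, 0.0]])              # sense position only
Q = 1e-3 * np.eye(2)                    # assumed process noise covariance
R = np.array([[1e-2]])                  # assumed sensor noise covariance

x_hat, P = np.zeros(2), np.eye(2)
x_hat, P = kalman_step(x_hat, P, np.array([0.0]), np.array([0.05]),
                       F, G, H, Q, R)   # one step of the filter sketched above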
Set up
Observability
Recall observability from last time. Actually, the previous example is not observable, but it is still nice to use the Kalman filter.
Extended Kalman Filter
• Life is not linear
• Predict
Extended Kalman Filter
• Update
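The EKF equations on these two slides are figures; in outline, the predict step pushes the mean through the nonlinear model f and propagates the covariance with the Jacobian A = ∂f/∂x, and the update linearizes the measurement model h with C = ∂h/∂x evaluated at the prediction. A generic sketch (f, h and their Jacobians are caller-supplied placeholders):

import numpy as np

def ekf_step(x_hat, P, u, y, f, h, jac_f, jac_h, Q, R):
    # predict: nonlinear mean, linearized covariance
    x_pred = f(x_hat, u)
    A = jac_f(x_hat, u)
    P_pred = A @ P @ A.T + Q
    # update: linearize the measurement about the prediction
    C = jac_h(x_pred)
    S = C @ P_pred @ C.T + R
    W = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + W @ (y - h(x_pred))
    P_new = P_pred - W @ S @ W.T
    return x_new, P_new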
EKF for Range-Bearing Localization
• Process Model
Be wise, and linearize..
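The measurement model here is shown as figures; the standard range-bearing measurement of a landmark at (l_x, l_y) from pose (x, y, \theta) is h = [\sqrt{(l_x - x)^2 + (l_y - y)^2}, \mathrm{atan2}(l_y - y, l_x - x) - \theta]^T. A sketch of h and its Jacobian with respect to the pose, as an assumption rather than a transcription of the slide:

import numpy as np

def h_range_bearing(pose, landmark):
    x, y, theta = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    return np.array([np.hypot(dx, dy),              # range
                     np.arctan2(dy, dx) - theta])   # bearing

def jac_range_bearing(pose, landmark):
    x, y, theta = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    q = dx**2 + dy**2
    r = np.sqrt(q)
    # rows: d(range)/d(x, y, theta) and d(bearing)/d(x, y, theta)
    return np.array([[-dx / r, -dy / r,  0.0],
                     [ dy / q, -dx / q, -1.0]])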
Data Association
• BIG PROBLEM: deciding which measurement goes with which landmark.
Hypothesize that the i-th measurement corresponds to the j-th landmark, and form the corresponding innovation (the equations appear as figures on the slide).
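A common gating test for such a pairing, plausibly what the slide's figures define, scores the innovation \nu_{ij} by its Mahalanobis distance under the innovation covariance S_{ij} and accepts the pair when the distance is below a chi-square threshold (the gate value here is an assumed example):

import numpy as np

def gate(nu, S, threshold=9.21):          # ~99% chi-square gate for 2-DOF measurements
    d2 = nu @ np.linalg.solve(S, nu)      # Mahalanobis distance: nu^T S^{-1} nu
    return d2 < threshold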
Kalman Filter for SLAM (simple)
State and process model (equations shown as figures on the slide).
Kalman Filter for SLAM
Range Bearing
Greg’s Notes: Some Examples
• Point moving on the line according to f = ma (a worked sketch follows this list)
– state is position and velocity
– input is force
– sensing should be position
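Tying the pieces together: the point on a line is the same double-integrator model as the dead-reckoning example, so the filter sketched earlier can be run end to end (all numbers are arbitrary):

import numpy as np

dt, m = 0.1, 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2 / m], [dt / m]])   # f = ma: force input
H = np.array([[1.0, 0.0]])                    # sense position
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])

x_hat, P = np.zeros(2), 0.1 * np.eye(2)
for k in range(10):
    u = np.array([1.0])                         # constant push
    y = np.array([0.5 * (dt * (k + 1))**2])     # simulated position reading
    x_hat, P = kalman_step(x_hat, P, u, y, F, G, H, Q, R)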
Kalman Filters
Bel(x_t) = N(\mu_t, \sigma_t^2)

Bel^-(x_{t+1}):
\mu^-_{t+1} = \mu_t + B u_t
(\sigma^-_{t+1})^2 = A^2 \sigma_t^2 + \sigma_{act}^2

Bel(x_{t+1}):
\mu_{t+1} = \mu^-_{t+1} + K_{t+1}(\mu_{z_{t+1}} - \mu^-_{t+1})
\sigma_{t+1}^2 = (1 - K_{t+1}) (\sigma^-_{t+1})^2

Bel^-(x_{t+2}):
\mu^-_{t+2} = \mu_{t+1} + B u_{t+1}
(\sigma^-_{t+2})^2 = A^2 \sigma_{t+1}^2 + \sigma_{act}^2
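The same recursion in scalar code; a sketch with A, B and the noise variances as assumed parameters, using the standard gain for a direct position measurement (which these slides imply):

def kf_1d(mu, var, u, z, A=1.0, B=1.0, var_act=0.1, var_obs=0.1):
    # prediction from the action
    mu_pred = A * mu + B * u
    var_pred = A**2 * var + var_act
    # correction from the measurement
    K = var_pred / (var_pred + var_obs)
    mu_new = mu_pred + K * (z - mu_pred)
    var_new = (1 - K) * var_pred
    return mu_new, var_new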
Kalman Filter Algorithm
1. Algorithm Kalman_filter(<\mu, \Sigma>, d):
2. If d is a perceptual data item z then
3.   K = \Sigma C^T (C \Sigma C^T + \Sigma_{obs})^{-1}
4.   \mu = \mu + K(z - C\mu)
5.   \Sigma = (I - KC)\Sigma
6. Else if d is an action data item u then
7.   \mu = A\mu + Bu
8.   \Sigma = A \Sigma A^T + \Sigma_{act}
9. Return <\mu, \Sigma>
Limitations
Extended Kalman Filter Algorithm
4. \mu = \mu + K(z - c(\mu, 0))      (linear KF: \mu = \mu + K(z - C\mu))
5. \Sigma = (I - KC)\Sigma           (linear KF: \Sigma = (I - KC)\Sigma)
Kalman Filter-based Systems (2)
• [Arras et al. 98]:
• Laser range-finder and vision
• High precision (<1cm accuracy)
Courtesy of K. Arras
Unscented Kalman Filter
• Instead of linearizing, pass several points from the Gaussian through the nonlinear transformation and recompute a new Gaussian.
• Better performance (in theory and practice).
[Figure: a Gaussian with mean \mu propagated through f; the EKF propagates \mu' = f(\mu, u, 0) and \Sigma = A\Sigma A^T + W\Sigma_{act}W^T, while the UKF transforms the sample points directly.]
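A minimal sketch of the unscented transform itself, using the common \kappa-parameterized sigma points (an illustration of the idea, not necessarily the exact variant behind the slide):

import numpy as np

def unscented_transform(mu, Sigma, f, kappa=1.0):
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * Sigma)   # matrix square root
    # 2n+1 sigma points: the mean plus symmetric spreads along each column
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(p) for p in pts])            # pass each point through f
    mu_new = w @ ys
    diff = ys - mu_new
    Sigma_new = (w[:, None] * diff).T @ diff      # weighted covariance
    return mu_new, Sigma_new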
Kalman Filters and SLAM
• Localization: state is the location of the robot