The Kalman Filter (an introduction)
George Kantor
x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
y(k) = H(k) x(k) + w(k)
- x(k) is the n-dimensional state vector (unknown)
- u(k) is the m-dimensional input vector (known)
- y(k) is the p-dimensional output vector (known, measured)
- F(k), G(k), H(k) are appropriately dimensioned system matrices (known)
- v(k), w(k) are zero-mean, white Gaussian noise with (known) covariance matrices Q(k), R(k)
The Kalman Filter is a recursion that provides the best estimate of the state vector x.
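Before estimating anything, it helps to have the model above in executable form. This is a minimal simulation sketch; the particular matrices F, G, H, Q, R below are invented for illustration and are not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative system with n=2 states, m=1 input, p=1 output
F = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition matrix
G = np.array([[0.005], [0.1]])           # input matrix
H = np.array([[1.0, 0.0]])               # measurement matrix
Q = 0.01 * np.eye(2)                     # process-noise covariance
R = np.array([[0.1]])                    # measurement-noise covariance

def step(x, u):
    """One step of x(k+1) = F x(k) + G u(k) + v(k), y(k) = H x(k) + w(k)."""
    v = rng.multivariate_normal(np.zeros(2), Q)   # process noise v(k)
    w = rng.multivariate_normal(np.zeros(1), R)   # measurement noise w(k)
    x_next = F @ x + G @ u + v
    y = H @ x + w
    return x_next, y

x = np.zeros(2)
for k in range(5):
    x, y = step(x, np.array([1.0]))
```

The filter's job, developed below, is to recover x(k) given only u(k) and the noisy y(k).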
Kalman Filter Introduction Carnegie Mellon University December 8, 2000
- noise smoothing (improve noisy measurements)
- state estimation (for state feedback)
- recursive (computes next estimate using only the most recent measurement)
2. compute correction: Δx = f( y(k+1), x(k+1|k) )
3. update prediction: x(k+1|k+1) = x(k+1|k) + Δx
A Geometric Interpretation

The measurement constrains the state to the set { x | Hx = y }. The correction is the shortest step from the prediction onto that set:

Δx = H^T (H H^T)^-1 ( y - H x(k+1|k) )
Observer:
1. prediction: x(k+1|k) = F x(k|k) + G u(k)
2. compute correction: Δx = H^T (H H^T)^-1 ( y(k+1) - H x(k+1|k) )
3. update: x(k+1|k+1) = x(k+1|k) + Δx
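The correction in step 2 is the minimum-norm solution of H Δx = y - H x(k+1|k). A sketch of the three steps' correction/update (NumPy; the H, prediction, and measurement values are made up for illustration):

```python
import numpy as np

H = np.array([[1.0, 1.0]])        # example 1x2 measurement matrix, full row rank
x_pred = np.array([2.0, 0.0])     # hypothetical prediction x(k+1|k)
y = np.array([3.0])               # hypothetical measurement y(k+1)

# Delta x = H^T (H H^T)^-1 (y - H x_pred): minimum-norm step onto {x : Hx = y}
innovation = y - H @ x_pred
dx = H.T @ np.linalg.solve(H @ H.T, innovation)
x_upd = x_pred + dx
```

After the update the estimate is exactly consistent with the measurement: H @ x_upd equals y (up to rounding), since H Δx reproduces the innovation.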
where P = E[ (x - x̂)(x - x̂)^T ]

p(x) = (1 / sqrt(det(2πP))) exp( -(1/2) (x - x̂)^T P^-1 (x - x̂) )

The level sets of p(x) are ellipsoids shaped by P. On the constraint set { x | Hx = y }, the most probable point is reached by a correction that is orthogonal to the set in the inner product induced by P:

⟨a, b⟩ = a^T P^-1 b

So the correction Δx should satisfy ⟨η, Δx⟩ = 0 for every η in T = null(H), and

⟨η, Δx⟩ = η^T P^-1 Δx = 0 for all η in null(H)  iff  Δx ∈ column(P H^T)
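The orthogonality claim can be checked numerically: if Δx is in column(P H^T), then η^T P^-1 Δx = η^T H^T z = (H η)^T z = 0 for η in null(H). A sketch with an arbitrary example P and H:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary SPD covariance P (3x3) and wide measurement matrix H (1x3)
A = rng.standard_normal((3, 3))
P = A @ A.T + 3 * np.eye(3)
H = np.array([[1.0, 0.0, 2.0]])

# Basis for null(H) from the SVD: rows 1.. of Vt span the null space
_, _, Vt = np.linalg.svd(H)
null_H = Vt[1:].T                 # 3x2, columns span null(H)

# A Delta x in column(P H^T)
dx = (P @ H.T @ rng.standard_normal((1, 1))).ravel()

# eta^T P^-1 dx for each null-space basis vector eta; should all vanish
inner = null_H.T @ np.linalg.solve(P, dx)
```

The solve recovers P^-1 Δx = H^T z exactly (up to rounding), and H^T z is orthogonal to null(H) in the ordinary sense.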
This gives the correction in terms of P:

Δx = P H^T (H P H^T)^-1 ( y - H x(k+1|k) )
Step 1: Prediction
x(k+1|k) = F x(k|k) + G u(k)
What about P? From the definition,

P(k+1|k) = E[ (x(k+1) - x(k+1|k)) (x(k+1) - x(k+1|k))^T ]
Continuing Step 1
To make life a little easier, let's shift notation slightly: write x_k for x(k), x̂_k for the estimate, and P_k for the error covariance.

P_{k+1} = E[ (x_{k+1} - x̂_{k+1})(x_{k+1} - x̂_{k+1})^T ]
        = E[ (F x_k + G u_k + v_k - (F x̂_k + G u_k)) (F x_k + G u_k + v_k - (F x̂_k + G u_k))^T ]
        = E[ (F (x_k - x̂_k) + v_k) (F (x_k - x̂_k) + v_k)^T ]
        = E[ F (x_k - x̂_k)(x_k - x̂_k)^T F^T + 2 F (x_k - x̂_k) v_k^T + v_k v_k^T ]
        = F E[ (x_k - x̂_k)(x_k - x̂_k)^T ] F^T + E[ v_k v_k^T ]
        = F P_k F^T + Q

(the cross term drops out because v_k is zero-mean and independent of the estimation error)

In the original notation:

P(k+1|k) = F P(k|k) F^T + Q
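The propagation P(k+1|k) = F P(k|k) F^T + Q can be sanity-checked by Monte Carlo: draw many errors with covariance P, push them through the dynamics with process noise, and compare the sample covariance to the closed form. A sketch with arbitrary example matrices:

```python
import numpy as np

rng = np.random.default_rng(2)

F = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = np.diag([0.02, 0.05])
P = np.array([[0.5, 0.1], [0.1, 0.3]])   # hypothetical P(k|k)

N = 200_000
err = rng.multivariate_normal(np.zeros(2), P, size=N)   # samples of x_k - xhat_k
v = rng.multivariate_normal(np.zeros(2), Q, size=N)     # process noise samples
err_next = err @ F.T + v                                # F (x_k - xhat_k) + v_k

P_pred_mc = np.cov(err_next.T)   # sample covariance of the propagated error
P_pred = F @ P @ F.T + Q         # closed form
```

With this many samples the two agree to a couple of decimal places.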
Step 2: Correction

W = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1
Δx = W ( y(k+1) - H x(k+1|k) )

Step 3: Update

x(k+1|k+1) = x(k+1|k) + W ( y(k+1) - H x(k+1|k) )
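Steps 2 and 3 in code, using `np.linalg.solve` on the transposed system rather than forming an explicit inverse for the gain. The numeric values are illustrative:

```python
import numpy as np

H = np.array([[1.0, 0.0]])
P_pred = np.array([[0.6, 0.2], [0.2, 0.4]])   # hypothetical P(k+1|k)
x_pred = np.array([1.0, 0.5])                 # hypothetical x(k+1|k)
y = np.array([1.4])                           # hypothetical y(k+1)

# W = P H^T (H P H^T)^-1, computed via a linear solve on the transpose
S = H @ P_pred @ H.T
W = np.linalg.solve(S.T, (P_pred @ H.T).T).T

# Step 3: apply the weighted innovation
x_upd = x_pred + W @ (y - H @ x_pred)
```

Solving instead of inverting is a standard numerical choice; it avoids explicitly forming (H P H^T)^-1.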
What happens to P? Let P_{k+1} denote the predicted covariance P(k+1|k), x̂_{k+1} the predicted estimate x(k+1|k), and Δy = y_{k+1} - H x̂_{k+1} the innovation, so the updated error is x_{k+1} - x̂_{k+1} - W Δy. Then

P(k+1|k+1) = E[ (x_{k+1} - x̂_{k+1} - W Δy)(x_{k+1} - x̂_{k+1} - W Δy)^T ]

Using Δy = H (x_{k+1} - x̂_{k+1}) (just take my word for it):

= E[ (x_{k+1} - x̂_{k+1})(x_{k+1} - x̂_{k+1})^T ] - 2 W E[ Δy (x_{k+1} - x̂_{k+1})^T ] + W E[ Δy Δy^T ] W^T
= P_{k+1} - 2 W H P_{k+1} + W H P_{k+1} H^T W^T

Substituting W = P_{k+1} H^T (H P_{k+1} H^T)^-1:

= P_{k+1} - 2 P_{k+1} H^T (H P_{k+1} H^T)^-1 H P_{k+1}
    + P_{k+1} H^T (H P_{k+1} H^T)^-1 (H P_{k+1} H^T) (H P_{k+1} H^T)^-1 H P_{k+1}
= P_{k+1} - P_{k+1} H^T (H P_{k+1} H^T)^-1 H P_{k+1}
= P_{k+1} - W H P_{k+1}

In the original notation:

P(k+1|k+1) = P(k+1|k) - W H P(k+1|k)
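That the long quadratic form collapses to P - W H P can be verified numerically; a quick check with an arbitrary SPD P and random H:

```python
import numpy as np

rng = np.random.default_rng(3)

A = rng.standard_normal((3, 3))
P = A @ A.T + 3 * np.eye(3)       # arbitrary SPD predicted covariance
H = rng.standard_normal((2, 3))   # arbitrary 2x3 measurement matrix

S = H @ P @ H.T
W = P @ H.T @ np.linalg.inv(S)    # W = P H^T (H P H^T)^-1

# Long form from the derivation: P - 2 W H P + W (H P H^T) W^T
P_long = P - 2 * W @ H @ P + W @ S @ W.T
# Collapsed form: P - W H P
P_short = P - W @ H @ P
```

The two agree to machine precision because W S W^T = P H^T S^-1 H P = W H P.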
System:
x(k+1) = F x(k) + G u(k) + v(k)
y(k) = H x(k)

Observer:
1. Prediction:
x(k+1|k) = F x(k|k) + G u(k)
P(k+1|k) = F P(k|k) F^T + Q
2. Correction:
W = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1
Δx = W ( y(k+1) - H x(k+1|k) )
3. Update:
x(k+1|k+1) = x(k+1|k) + Δx
P(k+1|k+1) = P(k+1|k) - W H P(k+1|k)
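The three observer steps above can be collected into one function; a minimal sketch (NumPy), with the example matrices in the trailing call invented for illustration:

```python
import numpy as np

def observer_step(x_est, P, u, y, F, G, H, Q):
    """One predict-correct-update cycle of the observer (no measurement noise)."""
    # 1. Prediction
    x_pred = F @ x_est + G @ u
    P_pred = F @ P @ F.T + Q
    # 2. Correction: W = P H^T (H P H^T)^-1
    S = H @ P_pred @ H.T
    W = np.linalg.solve(S.T, (P_pred @ H.T).T).T
    dx = W @ (y - H @ x_pred)
    # 3. Update
    x_new = x_pred + dx
    P_new = P_pred - W @ H @ P_pred
    return x_new, P_new

# Example call with illustrative values
F = np.array([[1.0, 0.1], [0.0, 1.0]]); G = np.array([[0.0], [0.1]])
H = np.array([[1.0, 0.0]]); Q = 0.01 * np.eye(2)
x_new, P_new = observer_step(np.zeros(2), np.eye(2), np.array([0.0]),
                             np.array([0.7]), F, G, H, Q)
```

Because there is no measurement noise in this model, the updated estimate reproduces the measurement exactly: H x(k+1|k+1) = y(k+1).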
When the measurement is also noisy (y = Hx + w), the set { x | Hx = y } is only known approximately, and the correction becomes

Δx = P H^T ( H P H^T + R )^-1 ( y - H x )
Kalman Filter

System:
x(k+1) = F x(k) + G u(k) + v(k)
y(k) = H x(k) + w(k)

1. Prediction:
x(k+1|k) = F x(k|k) + G u(k)
P(k+1|k) = F P(k|k) F^T + Q
2. Correction:
S = H P(k+1|k) H^T + R
W = P(k+1|k) H^T S^-1
x(k+1|k+1) = x(k+1|k) + W ( y(k+1) - H x(k+1|k) )
3. Update:
P(k+1|k+1) = P(k+1|k) - W S W^T
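The full recursion above fits in one function; a minimal sketch (NumPy), exercised on a toy scalar example whose matrices and noise levels are invented for illustration:

```python
import numpy as np

def kalman_step(x_est, P, u, y, F, G, H, Q, R):
    """One prediction-correction-update cycle of the Kalman filter."""
    # 1. Prediction
    x_pred = F @ x_est + G @ u
    P_pred = F @ P @ F.T + Q
    # 2. Correction
    S = H @ P_pred @ H.T + R                       # innovation covariance
    W = np.linalg.solve(S.T, (P_pred @ H.T).T).T   # gain W = P H^T S^-1
    x_new = x_pred + W @ (y - H @ x_pred)
    # 3. Update
    P_new = P_pred - W @ S @ W.T
    return x_new, P_new

# Toy run: estimate a constant scalar from noisy measurements
rng = np.random.default_rng(4)
F = np.array([[1.0]]); G = np.zeros((1, 1)); H = np.array([[1.0]])
Q = np.array([[1e-4]]); R = np.array([[0.25]])     # meas. noise std 0.5
x_true = np.array([1.0])
x_est, P = np.zeros(1), np.array([[1.0]])
for _ in range(100):
    y = H @ x_true + rng.normal(0.0, 0.5, size=1)
    x_est, P = kalman_step(x_est, P, np.zeros(1), y, F, G, H, Q, R)
```

After a hundred measurements the estimate settles near the true value and P shrinks accordingly, exactly the recursive noise-smoothing behavior promised at the start.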