
Discrete Kalman Filter

M. Sami Fadali
Professor EE
University of Nevada

1
Outline

 What is the Discrete Kalman Filter (DKF)?
 Derivation of DKF.
 Implementation of DKF.
 Example.
2
What is the DKF?

 Algorithm for the optimal recursive estimation of the state of a system.
 Needs:
 Initial state estimate and error covariance.
 Noisy measurements with known properties.
 System state-space model.

3
Derivation of DKF
Process and measurement models
$x_{k+1} = \phi_k x_k + w_k$
$z_k = H_k x_k + v_k$
$x_k$ = $n \times 1$ state vector at $t_k$
$\phi_k$ = $n \times n$ state-transition matrix at $t_k$
$z_k$ = $m \times 1$ measurement vector at $t_k$
$H_k$ = $m \times n$ measurement matrix at $t_k$

4
Noise
 $w_k$ = $n \times 1$ zero-mean white Gaussian process noise vector at $t_k$
$E[w_k w_i^T] = \begin{cases} Q_k, & i = k \\ 0, & i \neq k \end{cases}$
 $v_k$ = $m \times 1$ zero-mean white Gaussian measurement noise vector at $t_k$
$E[v_k v_i^T] = \begin{cases} R_k, & i = k \\ 0, & i \neq k \end{cases}$
 $E[w_k v_i^T] = 0$ (process and measurement noise uncorrelated)
5
Notation
 ^ = estimate
 ~ = perturbation.
 $\hat{x}_k^-$ = a priori estimate of $x_k$ (before the measurement at $t_k$).
 $\hat{x}_k^+$ = a posteriori estimate of $x_k$ (after the measurement at $t_k$).
6
Measurement Noise
 Measurement noise $v_k$ is white.
$z_k = H_k x_k + v_k$
 The a priori estimate at time $k$ (i.e., $\hat{x}_k^-$) is uncorrelated with the measurement noise at time $k$:
$\hat{x}_k^- = f(z_i, i = 0, 1, \ldots, k-1; \text{ICs})$
$E[(x_k - \hat{x}_k^-) v_k^T] = 0$
7
Estimators
 A priori estimate
$\hat{x}_k^- = f(z_i, i = 0, 1, \ldots, k-1; \text{ICs})$
 A posteriori estimate
$\hat{x}_k^+ = K_{x,k} \hat{x}_k^- + K_k z_k$
 A priori estimation error
$e_k^- = \hat{x}_k^- - x_k$
 A posteriori estimation error
$e_k^+ = \hat{x}_k^+ - x_k = K_{x,k} \hat{x}_k^- + K_k (H_k x_k + v_k) - x_k$
$= K_{x,k} \hat{x}_k^- + (K_k H_k - I_n) x_k + K_k v_k$
8
Unbiased Linear Estimator
 ෝ+
A posteriori estimate 𝒙𝑘 = 𝐾 ෝ −
𝑥,𝑘 𝑘 + 𝐾𝑘 𝒛𝑘
𝒙
 ෝ+
Error 𝒙𝑘 − 𝒙𝑘 = 𝐾 ෝ −
𝑥,𝑘 𝑘 + 𝐾𝑘 𝐻𝑘 − 𝐼𝑛 𝒙𝑘 + 𝐾𝑘 𝒗𝑘
𝒙
 Expectation must be zero for unbiased
ෝ+
𝐸 𝒙𝑘 − 𝒙𝑘 = 𝐾𝑥,𝑘 𝐸 ෝ
𝒙 −
𝑘 − 𝐼𝑛 − 𝐾𝑘 𝐻𝑘 𝐸 𝒙𝑘 = 𝟎

 Assume unbiased a priori estimate


ෝ−
𝐸 𝒙𝑘 = 𝐸 𝒙𝑘
 For unbiased estimator 𝐾𝑥,𝑘 = 𝐼𝑛 − 𝐾𝑘 𝐻𝑘
ෝ+
𝒙𝑘 = 𝐼𝑛 − 𝐾 𝐻 𝒙

𝑘 𝑘 𝑘

+ 𝐾 𝒛
𝑘 𝑘 = ෝ
𝒙 −
𝑘 + 𝐾 𝒛
𝑘 𝑘 − 𝐻 ෝ
𝒙
𝑘 𝑘

9
Error Covariance Matrices
 Assume unbiased estimates: $E[e_k] = 0$
 A priori error: $e_k^- = x_k - \hat{x}_k^-$
 A priori error covariance matrix: $P_k^- = E[e_k^- e_k^{-T}]$
 A posteriori error:
$e_k^+ = x_k - \hat{x}_k^+ = (I_n - K_k H_k) e_k^- - K_k v_k$
 A posteriori error covariance matrix:
$P_k^+ = E[e_k^+ e_k^{+T}] = \left[ E[e_{ki}^+ e_{kj}^+] \right]$
10
Derivation of DKF
 Recursively correct the estimate:
$\hat{x}_k^+ = (I_n - K_k H_k) \hat{x}_k^- + K_k z_k$
$= (I_n - K_k H_k) \hat{x}_k^- + K_k (H_k x_k + v_k)$
$= \hat{x}_k^- + K_k H_k (x_k - \hat{x}_k^-) + K_k v_k$
$e_k^+ = x_k - \hat{x}_k^+ = (I_n - K_k H_k)(x_k - \hat{x}_k^-) - K_k v_k$
$= (I_n - K_k H_k) e_k^- - K_k v_k$
 Measurement noise is orthogonal to the a priori error:
$E[(x_k - \hat{x}_k^-) v_k^T] = E[e_k^- v_k^T] = 0$
11
Error Covariance Matrix
$e_k^+ = (I_n - K_k H_k) e_k^- - K_k v_k, \qquad E[e_k^- v_k^T] = 0$
$P_k^+ = E[e_k^+ e_k^{+T}] = E[(x_k - \hat{x}_k^+)(x_k - \hat{x}_k^+)^T]$
$= (I_n - K_k H_k) E[e_k^- e_k^{-T}] (I_n - K_k H_k)^T + K_k E[v_k v_k^T] K_k^T$
 Substitute for the expectations:
$P_k^+ = (I_n - K_k H_k) P_k^- (I_n - K_k H_k)^T + K_k R_k K_k^T$
 Expand:
$P_k^+ = P_k^- - K_k H_k P_k^- - P_k^- H_k^T K_k^T + K_k (H_k P_k^- H_k^T + R_k) K_k^T$
12
Minimum Mean-square Error
 Choose the gain $K_k$ (blending factor) to minimize the mean-square error $E[e_k^{+T} e_k^+]$:
$E[e_k^{+T} e_k^+] = \mathrm{tr}\, E[e_k^+ e_k^{+T}] = \mathrm{tr}\left[ E[e_{ki}^+ e_{kj}^+] \right]$
$\sum_{i=1}^{n} E[(e_{ki}^+)^2] = \mathrm{tr}\, E[e_k^+ e_k^{+T}] = \mathrm{tr}\, P_k^+$
 Minimize over all possible choices of $K_k$:
$\partial \sum_{i=1}^{n} E[(e_{ki}^+)^2] \,/\, \partial K_k = \partial\, \mathrm{tr}\, P_k^+ / \partial K_k$
13
Derivative of Trace
 For any scalar $s$:
$\frac{ds}{dA} = \left[ \frac{ds}{da_{ij}} \right]$
$\frac{d\, \mathrm{tr}(AB)}{dA} = B^T, \quad A, B \text{ square}$
$\frac{d\, \mathrm{tr}(ACA^T)}{dA} = 2AC, \quad C \text{ symmetric}$

14
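
These formulas can be checked numerically. The sketch below (my addition, not part of the original slides) compares each analytic derivative with a finite-difference approximation for random matrices; the sizes and step are arbitrary choices.

% Finite-difference check of the trace-derivative formulas (illustrative sketch).
n = 4; h = 1e-6;
A = randn(n); B = randn(n);
C = randn(n); C = (C + C')/2;       % make C symmetric
D1 = zeros(n); D2 = zeros(n);       % numerical derivatives
for i = 1:n
    for j = 1:n
        E = zeros(n); E(i,j) = h;   % perturb one entry of A
        D1(i,j) = (trace((A+E)*B) - trace(A*B))/h;
        D2(i,j) = (trace((A+E)*C*(A+E)') - trace(A*C*A'))/h;
    end
end
disp(norm(D1 - B', 'fro'))          % ~0: d tr(AB)/dA = B'
disp(norm(D2 - 2*A*C, 'fro'))       % ~0: d tr(ACA')/dA = 2AC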
Minimization
$P_k^+ = P_k^- - K_k H_k P_k^- - P_k^- H_k^T K_k^T + K_k (H_k P_k^- H_k^T + R_k) K_k^T$
where $H_k P_k^- H_k^T + R_k$ is symmetric.
 Use $\mathrm{tr}(X) = \mathrm{tr}(X^T)$: the two linear terms have the same trace.
 Apply the trace formulas:
$\frac{\partial\, \mathrm{tr}\, P_k^+}{\partial K_k} = -2 P_k^- H_k^T + 2 K_k (H_k P_k^- H_k^T + R_k) = 0$
 Solve for the Kalman gain:
$K_k = P_k^- H_k^T \left( H_k P_k^- H_k^T + R_k \right)^{-1}$
15
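
As a numerical illustration (my addition), the script below builds a random $P_k^-$, $H_k$, and $R_k$, computes the gain above, and confirms that perturbing the gain can only increase $\mathrm{tr}\, P_k^+$ in the Joseph form.

% Check that the Kalman gain minimizes tr(P+) (illustrative sketch).
n = 3; m = 2;
S = randn(n); Pm = S*S' + eye(n);   % random a priori covariance P-
H = randn(m, n);
T = randn(m); R = T*T' + eye(m);    % random measurement covariance R
K = Pm*H'/(H*Pm*H' + R);            % Kalman gain
J = @(K) trace((eye(n)-K*H)*Pm*(eye(n)-K*H)' + K*R*K');  % tr(P+), Joseph form
J0 = J(K);
worse = 0;
for trial = 1:1000                  % random perturbations of the gain
    worse = worse + (J(K + 0.1*randn(n, m)) >= J0);
end
fprintf('tr(P+) at Kalman gain: %.4f; perturbations no better: %d/1000\n', J0, worse);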
Error Covariance Matrix Forms
 Joseph form:
$P_k^+ = (I_n - K_k H_k) P_k^- (I_n - K_k H_k)^T + K_k R_k K_k^T$
 For the optimal gain:
$P_k^+ = P_k^- - P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1} H_k P_k^-$
$= P_k^- - K_k (H_k P_k^- H_k^T + R_k) K_k^T$
$= (I - K_k H_k) P_k^-$
 Four expressions for the error covariance.
 Numerical computation: the forms behave differently.
16
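
The algebraic equivalence of the four forms at the optimal gain is easy to verify numerically (my addition):

% Verify the four error-covariance forms agree for the optimal gain (sketch).
n = 3; m = 2;
S = randn(n); Pm = S*S' + eye(n);   % a priori covariance P-
H = randn(m, n); T = randn(m); R = T*T' + eye(m);
K = Pm*H'/(H*Pm*H' + R);            % optimal Kalman gain
P1 = (eye(n)-K*H)*Pm*(eye(n)-K*H)' + K*R*K';  % Joseph form
P2 = Pm - Pm*H'/(H*Pm*H' + R)*H*Pm;           % matrix-inversion form
P3 = Pm - K*(H*Pm*H' + R)*K';                 % gain form
P4 = (eye(n)-K*H)*Pm;                         % short form
disp([norm(P1-P2), norm(P1-P3), norm(P1-P4)]) % all ~0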
Joseph Form
$P_k^+ = (I_n - K_k H_k) P_k^- (I_n - K_k H_k)^T + K_k R_k K_k^T$
 Expand to obtain the other forms.
 Best numerical computation properties.
 Use the Joseph form to reduce numerical errors.
17
Derivation of Other Forms
 Derived earlier:
$P_k^+ = P_k^- - K_k H_k P_k^- - P_k^- H_k^T K_k^T + K_k (H_k P_k^- H_k^T + R_k) K_k^T$
$K_k = P_k^- H_k^T \left( H_k P_k^- H_k^T + R_k \right)^{-1}$, with $H_k P_k^- H_k^T + R_k$ symmetric.
 Three equal terms:
$P_k^- H_k^T K_k^T = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1} H_k P_k^- = K_k H_k P_k^-$
$K_k (H_k P_k^- H_k^T + R_k) K_k^T = K_k (H_k P_k^- H_k^T + R_k)(H_k P_k^- H_k^T + R_k)^{-1} H_k P_k^- = K_k H_k P_k^-$
18
Derivation (Cont.)
$P_k^+ = P_k^- - K_k H_k P_k^- - P_k^- H_k^T K_k^T + K_k (H_k P_k^- H_k^T + R_k) K_k^T$
 Cancel two equal terms (two forms):
$P_k^+ = P_k^- - K_k (H_k P_k^- H_k^T + R_k) K_k^T = P_k^- - K_k H_k P_k^-$
 Common factor:
$P_k^+ = (I_n - K_k H_k) P_k^-$
19
Fundamental Theorem of Estimation Theory
 Minimum mean-square error estimator:
$\hat{x}_k = E[x_k \mid z_k^*]$
$z_k^* = \mathrm{col}\{ z_i, i = 0, 1, \ldots, k \}$
 $z_k^*$ = all measurements up to and including time $k$.
 The state estimate depends on all the measurements.
20
Proof
$MSE = E[(x_k - \hat{x}_k)^T (x_k - \hat{x}_k) \mid z_k^*]$
 Expand and complete squares:
$MSE = E[x_k^T x_k \mid z_k^*] + \left( \hat{x}_k - E[x_k \mid z_k^*] \right)^T \left( \hat{x}_k - E[x_k \mid z_k^*] \right) - E[x_k^T \mid z_k^*]\, E[x_k \mid z_k^*]$
 Choose $\hat{x}_k$ to minimize the mean-square error.
 Minimized if $\hat{x}_k = E[x_k \mid z_k^*]$.
21
A Priori Estimate
 Predictor:
$x_{k+1} = \phi_k x_k + w_k$
$\hat{x}_{k+1}^- = E[x_{k+1} \mid z_k^*] = \phi_k E[x_k \mid z_k^*] + 0 = \phi_k \hat{x}_k^+$
$\hat{x}_{k+1}^- = \phi_k \hat{x}_k^+$
 Estimation error:
$e_{k+1}^- = x_{k+1} - \hat{x}_{k+1}^- = \phi_k x_k + w_k - \phi_k \hat{x}_k^+$
$e_{k+1}^- = \phi_k e_k^+ + w_k$
22
A Priori Error Covariance Matrix
 Error:
$e_{k+1}^- = \phi_k e_k^+ + w_k$
 Sum of two orthogonal terms: $E[e_k^+ w_k^T] = 0$
 Error covariance matrix:
$P_{k+1}^- = E[e_{k+1}^- e_{k+1}^{-T}] = \phi_k E[e_k^+ e_k^{+T}] \phi_k^T + E[w_k w_k^T]$
$P_{k+1}^- = \phi_k P_k^+ \phi_k^T + Q_k$
23
DKF Loop
Enter the initial state estimate and its error covariance $\hat{x}_0^-$, $P_0^-$; then, for the measurements $z_0, z_1, \ldots$, loop:
1. Compute the Kalman gain: $K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}$
2. Update the estimate with measurement $z_k$: $\hat{x}_k^+ = \hat{x}_k^- + K_k (z_k - H_k \hat{x}_k^-)$
3. Compute the error covariance: $P_k^+ = (I_n - K_k H_k) P_k^-$
4. Project ahead: $\hat{x}_{k+1}^- = \phi_k \hat{x}_k^+$, $P_{k+1}^- = \phi_k P_k^+ \phi_k^T + Q_k$
The loop outputs the state estimates $\hat{x}_0^+, \hat{x}_1^+, \ldots$
24
Example: Wiener Process
 Scalar example.
 Discretize the CT system.
 Standard deviation of the measurement error = 0.5 ⇒ $R = 0.25$
$G(s) = \dfrac{1}{s} \Leftrightarrow g(t) = 1$
[Block diagram: unity-variance Gaussian white noise $u(t)$ drives an integrator $1/s$ with $x(0) = 0$; the output $x(t)$ plus measurement noise $v(t)$ gives $z(t)$.]
25
Example: Discretization
$\dot{x}(t) = u(t)$
$x_{k+1} = x_k + w_k, \quad \phi_k = 1$
$z_k = x_k + v_k, \quad H_k = 1$
$Q_k = E[w_k^2] = E\left[ \int_0^{\Delta t} u(\xi)\, d\xi \int_0^{\Delta t} u(\eta)\, d\eta \right]$
$= \int_0^{\Delta t} \int_0^{\Delta t} E[u(\xi) u(\eta)]\, d\xi\, d\eta = \int_0^{\Delta t} \int_0^{\Delta t} \delta(\xi - \eta)\, d\xi\, d\eta = \Delta t = 1$ (for $\Delta t = 1$ s)
$\hat{x}_0^- = 0, \quad P_0^- = 0, \quad R = 1/4$
26
Kalman Loop: k = 0
 Calculate the gain:
$K_0 = P_0^- H_0^T (H_0 P_0^- H_0^T + R_0)^{-1} = 0/(0 + 1/4) = 0$
 Update the estimate:
$\hat{x}_0^+ = \hat{x}_0^- + K_0 (z_0 - H \hat{x}_0^-) = 0 + 0 \cdot (z_0 - 0) = 0$
 Update the error covariance:
$P_0^+ = (I - K_0 H) P_0^- = (1 - 0) \cdot 0 = 0$
 Project ahead:
$\hat{x}_1^- = \phi\, \hat{x}_0^+ = 1 \cdot 0 = 0$
$P_1^- = \phi P_0^+ \phi^T + Q = 1 \cdot 0 \cdot 1 + 1 = 1$
27
Kalman Loop: k = 1
 Calculate the gain:
$K_1 = P_1^- H_1^T (H_1 P_1^- H_1^T + R_1)^{-1} = 1/(1 + 1/4) = 4/5$
 Update the estimate:
$\hat{x}_1^+ = \hat{x}_1^- + K_1 (z_1 - H \hat{x}_1^-) = 0 + \tfrac{4}{5}(z_1 - 0) = \tfrac{4}{5} z_1$
 Update the error covariance:
$P_1^+ = (I - K_1 H) P_1^- = (1 - 4/5) \cdot 1 = 1/5$
 Project ahead:
$\hat{x}_2^- = \phi\, \hat{x}_1^+ = 1 \cdot \tfrac{4}{5} z_1 = \tfrac{4}{5} z_1$
$P_2^- = \phi P_1^+ \phi^T + Q = 1 \cdot \tfrac{1}{5} \cdot 1 + 1 = 6/5$
28
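
A short script (my addition) reproduces the two hand iterations above; only the gain/covariance recursion is run, since it does not depend on the measurement data.

% Reproduce the hand-computed gains and covariances (illustrative sketch).
phi = 1; H = 1; Q = 1; R = 1/4;
P = 0;                              % P_0^-
for k = 0:1
    K = P*H/(H*P*H + R);            % gain: K_0 = 0, K_1 = 4/5
    Pp = (1 - K*H)*P;               % a posteriori: P_0^+ = 0, P_1^+ = 1/5
    fprintf('k = %d: K = %.4g, P+ = %.4g\n', k, K, Pp);
    P = phi*Pp*phi + Q;             % project: P_1^- = 1, P_2^- = 6/5
end
fprintf('P_2^- = %.4g\n', P);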
MATLAB DKF Implementation
% Across measurements (measurement update):
K = P*H'/(H*P*H' + R);        % Kalman gain
xhat = xhat + K*(z - H*xhat); % update the estimate with measurement z
P = (eye(n) - K*H)*P;         % update the error covariance
P = (P + P')/2;               % enforce symmetry against round-off
% Between measurements (time update):
xhat = phi*xhat;              % project the estimate ahead
P = phi*P*phi' + Q;           % project the error covariance ahead
29
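
One way to wrap these statements in a complete, runnable loop is sketched below (my addition); the scalar Wiener-process parameters and the simulated measurements are assumptions for illustration, not part of the slide.

% Complete DKF loop for the scalar Wiener example (illustrative sketch).
n = 1; N = 50;
phi = 1; H = 1; Q = 1; R = 1/4;
x = 0; xhat = 0; P = 0;             % true state, a priori estimate, covariance
for k = 1:N
    x = phi*x + sqrt(Q)*randn;      % simulate the process
    z = H*x + sqrt(R)*randn;        % simulate the noisy measurement
    % Across measurements:
    K = P*H'/(H*P*H' + R);
    xhat = xhat + K*(z - H*xhat);
    P = (eye(n) - K*H)*P;
    P = (P + P')/2;
    % Between measurements:
    xhat = phi*xhat;
    P = phi*P*phi' + Q;
end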
Example: Gauss-Markov Process
$R_{XX}(\tau) = \sigma^2 e^{-\beta |\tau|}, \qquad S_{XX}(s) = \dfrac{2\sigma^2 \beta}{-s^2 + \beta^2}$
$G(s) = S_{XX}^+(s) = \dfrac{\sqrt{2\sigma^2 \beta}}{s + \beta}, \qquad g(t) = \sqrt{2\sigma^2 \beta}\, e^{-\beta t}$
$\dot{x} = -\beta x + \sqrt{2\sigma^2 \beta}\, u, \qquad y = x$
[Block diagram: unity-variance Gaussian white noise drives the shaping filter $\sqrt{2\sigma^2 \beta}/(s + \beta)$.]
30
Example: Discretization
$\phi = e^{-\beta \Delta t}, \quad H = W = 1, \quad G = \sqrt{2\sigma^2 \beta}$
$Q_k = \int_0^{\Delta t} e^{F\xi} G W G^T e^{F^T \xi}\, d\xi = \int_0^{\Delta t} \left( e^{-\beta \xi} \sqrt{2\sigma^2 \beta} \right)^2 d\xi = \sigma^2 \left( 1 - e^{-2\beta \Delta t} \right)$
$x_{k+1} = e^{-\beta \Delta t} x_k + w_k$
$z_k = x_k + v_k$
$R = 1$
31
Initial Conditions
 Process mean and variance:
$R_{xx}(\tau) = \sigma^2 e^{-\beta |\tau|}$
$\lim_{\tau \to \infty} R_{xx}(\tau) = 0 \;\Rightarrow\; E[x(t)] = 0$
$R_{xx}(0) = E[x^2(t)] = \sigma^2 = \mathrm{var}(x(t))$
 Use the process mean and variance to initialize:
$\hat{x}_0^- = 0, \quad P_0^- = \sigma^2$
32
Simulation Results
 Use unity variance, $\sigma^2 = P_0^- = 1$ m², and unity time constant, $\beta = 1$.
 Use a sampling period $\Delta t = 0.02$ s.
 Steady-state Kalman gain $K = 0.165$.
 Steady-state error variance $P = 0.16$ m²; RMS error $\sqrt{P} = 0.4$ m.
 Close to steady state after 20 steps.
 Suboptimal filter: fix $K = 0.165$ for a simple implementation.
33
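
These steady-state numbers can be checked by iterating the gain/covariance recursion with the parameters listed above (the script is my addition):

% Steady-state gain and variance for the Gauss-Markov example (sketch).
beta = 1; sigma2 = 1; dt = 0.02;
phi = exp(-beta*dt); H = 1; R = 1;
Q = sigma2*(1 - exp(-2*beta*dt));   % discretized process noise variance
P = sigma2;                         % P_0^- = sigma^2
for k = 1:1000
    K = P*H/(H*P*H + R);            % converges to K ~ 0.165
    Pp = (1 - K*H)*P;               % converges to P+ ~ 0.16 m^2
    P = phi*Pp*phi + Q;
end
fprintf('K = %.3f, P+ = %.3f, RMS error = %.2f m\n', K, Pp, sqrt(Pp));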
Simulation Results

[Figure omitted.]
34
Discrete Lyapunov Equation
$P_{k+1}^- = \phi_k P_k^+ \phi_k^T + Q_k$
 Substitute for $P_k^+$:
$P_{k+1}^- = \phi_k (I_n - K_k H_k) P_k^- (I_n - K_k H_k)^T \phi_k^T + \phi_k K_k R_k K_k^T \phi_k^T + Q_k$
 Lyapunov equation: $P_{k+1}^- = \bar{A}_k P_k^- \bar{A}_k^T + \bar{Q}_k$
$\bar{A}_k = \phi_k (I_n - K_k H_k)$
$\bar{Q}_k = \phi_k K_k R_k K_k^T \phi_k^T + Q_k$
 Applies for any gain $K_k$ (not just the optimal Kalman gain).
35
Solution of Lyapunov Eqn.
$P_{k+1}^- = \bar{A}_k P_k^- \bar{A}_k^T + \bar{Q}_k$
$P_k^- = \Phi(k, 0) P_0^- \Phi^T(k, 0) + \sum_{i=0}^{k-1} \Phi(k, i+1)\, \bar{Q}_i\, \Phi^T(k, i+1)$
$\Phi(k, i) = \begin{cases} \bar{A}_{k-1} \bar{A}_{k-2} \cdots \bar{A}_i \\ \bar{A}^{k-i}, & \bar{A} \text{ constant} \end{cases}, \qquad \Phi(k, k) = I_n$
 Proof by induction.
36
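
For a constant $\bar{A}$, the closed-form solution is easy to check against the recursion (my addition):

% Check the Lyapunov solution against the recursion for constant Abar (sketch).
n = 2; k = 10;
Abar = 0.5*randn(n); Qbar = eye(n); P0 = eye(n);
P = P0;                             % run the recursion k times
for i = 1:k
    P = Abar*P*Abar' + Qbar;
end
Pcf = (Abar^k)*P0*(Abar^k)';        % closed form: Phi(k,0) P0 Phi'(k,0) + sum
for i = 0:k-1
    Pcf = Pcf + (Abar^(k-i-1))*Qbar*(Abar^(k-i-1))';
end
disp(norm(P - Pcf))                 % ~0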
Discrete Riccati Equation
$P_{k+1}^- = \phi_k P_k^+ \phi_k^T + Q_k$
 For the Kalman (optimal) gain (derived earlier):
$P_k^+ = P_k^- - P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1} H_k P_k^-$
 Riccati equation:
$P_{k+1}^- = \phi_k \left[ P_k^- - P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1} H_k P_k^- \right] \phi_k^T + Q_k$
37
Wiener Example: Lyapunov Equation
$\phi_k = 1, \quad H = 1, \quad Q = 1, \quad R = 0.25$
$P_{k+1}^- = \bar{A}_k P_k^- \bar{A}_k^T + \bar{Q}_k$
$\bar{A}_k = \phi_k (I_n - K_k H_k) = 1 \cdot (1 - K_k \cdot 1) = 1 - K_k$
$\bar{Q}_k = \phi_k K_k R_k K_k^T \phi_k^T + Q_k = 1 \cdot (K_k^2/4) \cdot 1 + 1 = K_k^2/4 + 1$
$P_{k+1}^- = (1 - K_k)^2 P_k^- + K_k^2/4 + 1$
38
Wiener Example: Riccati Equation
$\phi_k = 1, \quad H = 1, \quad Q = 1, \quad R = 0.25$
$P_{k+1}^- = \phi_k \left[ P_k^- - P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1} H_k P_k^- \right] \phi_k^T + Q_k$
$= 1 \cdot \left[ P_k^- - P_k^- \cdot 1 \cdot (1 \cdot P_k^- \cdot 1 + 0.25)^{-1} \cdot 1 \cdot P_k^- \right] \cdot 1 + 1$
$= \left( 1 - \dfrac{P_k^-}{P_k^- + 0.25} \right) P_k^- + 1$
$P_{k+1}^- = 1.25 - \dfrac{0.25}{4 P_k^- + 1}$
39
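
Iterating this scalar recursion (my addition) reproduces the values computed by hand earlier and converges to the steady-state a priori variance:

% Iterate the scalar Riccati equation for the Wiener example (sketch).
P = 0;                              % P_0^- = 0
for k = 0:19
    P = 1.25 - 0.25/(4*P + 1);      % P_1^- = 1, P_2^- = 6/5, ...
end
fprintf('steady-state P^- = %.4f (fixed point (1+sqrt(2))/2 = %.4f)\n', ...
        P, (1 + sqrt(2))/2);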
Deterministic Inputs
$\dot{x} = F x + B u_d + G u, \quad x(0) = x_0$
Linear system: use superposition.
a. Add the zero-state deterministic response to the projection (see the sketch after this slide):
$\hat{x}_{k+1}^- = \phi_k \hat{x}_k^+ + \int_{t_k}^{t_{k+1}} \phi(t_{k+1}, \tau) B(\tau) u_d(\tau)\, d\tau$
b. (i) Subtract the deterministic output from $z_k$, or
(ii) compute $x_{k,d}$ separately and add it to the KF estimate to obtain the state estimate:
$\hat{x}_{k,tot} = \hat{x}_k + x_{k,d}$
40
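
A minimal sketch of option (a) for a time-invariant system with a piecewise-constant deterministic input; the discrete input matrix Bd and all numerical values are my assumptions for illustration:

% Time update with a deterministic input, option (a) (illustrative sketch).
F = -1; B = 1; dt = 0.1;
phi = expm(F*dt);                   % state-transition matrix over one period
Bd = integral(@(tau) expm(F*(dt - tau))*B, 0, dt, 'ArrayValued', true);
ud = 2;                             % piecewise-constant deterministic input
xhat = 0.5;                         % a posteriori estimate xhat_k^+
xhat = phi*xhat + Bd*ud;            % project ahead, adding the forced response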
Real-time Implementation
 Data Latency: delay between data time
and current time due to sensor,
computation, and information delivery.
 Processor Loading: how much is the
processor used?
 throughput (bits/s) analysis
 dedicated vs. shared processor
 specialized code to reduce computation, e.g. exploiting sparse matrices.
41
Round-off Errors
 Use high precision offline.
 Choose the step size carefully.
 Propagate only the $n(n+1)/2$ unique terms of the symmetric $P$ matrix.
 Use array algorithms that propagate the square root of the matrix $P$ (see Kailath, covered later).
 Use a suboptimal filter (fix $K$).
42
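
As one illustration of an array algorithm (a sketch of the idea, not the full development): if $P_k^+ = S S^T$ and $Q_k = C C^T$, a QR factorization of the stacked factors propagates the square root through the time update without ever forming $P$. The matrices below are arbitrary examples.

% Square-root time update via QR (illustrative sketch of an array algorithm).
n = 2;
phi = [1 0.1; 0 1]; Q = 0.01*eye(n);
Pp = [2 0.5; 0.5 1];                % a posteriori covariance P_k^+
S = chol(Pp, 'lower'); C = chol(Q, 'lower');
[~, Rfac] = qr([(phi*S)'; C'], 0);  % economy-size QR of the stacked factors
Snext = Rfac';                      % square-root factor of P_{k+1}^-
disp(norm(Snext*Snext' - (phi*Pp*phi' + Q)))  % ~0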
The Separation Principle
 Linear system with state-estimator feedback.
 Design the controller and the state estimator separately.
 True for an observer or a Kalman filter.
43
Conclusion
 Popular recursive algorithm.
 Minimizes the mean-square error.
 Real-time implementation.
 Estimator feedback.
44
