
Crib Sheet: Linear Kalman Smoothing

Gabriel A. Terejanu
Department of Computer Science and Engineering
University at Buffalo, Buffalo, NY 14260
terejanu@buffalo.edu

1 Introduction
Smoothing can be separated into three classes [6]:

1. Fixed-interval smoothing. The goal is to obtain the estimates $\hat{x}^s_k$, for $k = 0 \ldots N$, given a
fixed observation interval $Z_N = \{ z_k \mid 0 \le k \le N \}$ [6].

2. Fixed-point smoothing. The goal is to obtain the estimate $\hat{x}^s_N$ (the time $t_N$ is fixed, $N < k$)
given a set of observations $Z_k = \{ z_i \mid 0 \le i \le k \}$ [6].

3. Fixed-lag smoothing. The goal is to obtain the estimate $\hat{x}^s_k$ given the observations
$Z_{k+N} = \{ z_i \mid 0 \le i \le k+N \}$ [6], where $k+N$ indexes the most current measurement and $N$ is a constant lag.

It is possible to employ a single smoothing scheme, based on fixed-interval smoothing, to solve all
three problems [1].

A state is said to be smoothable if an optimal smoother provides a state estimate superior to
that obtained when the final optimal filter estimate is extrapolated backwards in time [4].

Only those states which are controllable by the noise driving the system state vector are smoothable
(Weiss 1970).
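
A quick numerical check of this condition can be useful. The sketch below (Python/numpy; the function names and matrices are ours, invented for illustration, not from the references) reduces smoothability to a rank test on the controllability matrix of the pair $(A, G)$, where $G$ is any factor of the process-noise covariance, $Q = G G^T$:

    import numpy as np

    def noise_input_matrix(Q, tol=1e-9):
        """Return a factor G with Q = G G^T (valid for singular PSD Q);
        its columns span the directions excited by the process noise."""
        w, V = np.linalg.eigh(Q)
        keep = w > tol
        return V[:, keep] * np.sqrt(w[keep])

    def smoothable(A, Q, tol=1e-9):
        """Weiss-style test: every state is smoothable iff the pair (A, G)
        is controllable, i.e. [G, AG, ..., A^{n-1}G] has full rank."""
        G = noise_input_matrix(Q, tol)
        n = A.shape[0]
        C = np.hstack([np.linalg.matrix_power(A, i) @ G for i in range(n)])
        return np.linalg.matrix_rank(C, tol=tol) == n

    # Invented example: the noise drives only the second state, but it
    # couples into the first state through the dynamics, so both states
    # are smoothable.
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    Q = np.array([[0.0, 0.0],
                  [0.0, 0.5]])
    print(smoothable(A, Q))  # True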
2 Fixed Interval Smoothing
The goal is to obtain the estimates $\hat{x}^s_k$, for $k = 0 \ldots N$, given a fixed observation interval
$Z_N = \{ z_k \mid 0 \le k \le N \}$ [6].

Fraser and Potter (1969): two optimal linear filters [3, 2]

The smoothed estimate is expressed as a linear combination of the forward and backward filter
state estimates. The optimal weighting between the two is given by Millman's theorem, which
is an exact analog of the maximum-likelihood combination of independent scalar measurements [2].
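
As a scalar illustration of this weighting (our notation, not taken from [2]): fusing a forward estimate $\hat{x}_f$ with variance $\sigma^2_f$ and an independent backward estimate $\hat{x}_b$ with variance $\sigma^2_b$ gives

$$\hat{x}^s = \frac{\sigma^2_b \, \hat{x}_f + \sigma^2_f \, \hat{x}_b}{\sigma^2_f + \sigma^2_b}, \qquad \frac{1}{(\sigma^s)^2} = \frac{1}{\sigma^2_f} + \frac{1}{\sigma^2_b},$$

which is exactly the maximum-likelihood combination of two independent measurements of the same scalar.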

Figure 1: Fixed-interval smoothing: two-filter approach (a forward Kalman filter combined with a backward information filter)

Figure 2: Advantage of performing optimal smoothing [4]

Model and Observation
    $x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1}$
    $z_k = H_k x_k + v_k$

Forward Filter
    Initialization:
        $\hat{x}^+_{f,0} = \hat{x}_0$ with error covariance $P^+_{f,0}$
    Model Forecast Step / Predictor:
        $\hat{x}^-_{f,k} = A_{k-1} \hat{x}^+_{f,k-1} + B_{k-1} u_{k-1}$
        $P^-_{f,k} = A_{k-1} P^+_{f,k-1} A^T_{k-1} + Q_{k-1}$
    Data Assimilation Step / Corrector:
        $\hat{x}^+_{f,k} = \hat{x}^-_{f,k} + K_{f,k} (z_k - H_k \hat{x}^-_{f,k})$
        $K_{f,k} = P^-_{f,k} H^T_k (H_k P^-_{f,k} H^T_k + R_k)^{-1}$
        $P^+_{f,k} = (I - K_{f,k} H_k) P^-_{f,k}$

Backward Information Filter
    Initialization:
        $\hat{y}^-_{b,N} = 0$, $Y^-_{b,N} = 0$
    Backward Update:
        $\hat{y}^+_{b,k} = \hat{y}^-_{b,k} + H^T_k R^{-1}_k z_k$
        $Y^+_{b,k} = Y^-_{b,k} + H^T_k R^{-1}_k H_k$
    Backward Propagation:
        $K_{b,k} = Y^+_{b,k+1} (Y^+_{b,k+1} + Q^{-1}_k)^{-1}$
        $\hat{y}^-_{b,k} = A^T_k (I - K_{b,k}) (\hat{y}^+_{b,k+1} - Y^+_{b,k+1} B_k u_k)$
        $Y^-_{b,k} = A^T_k (I - K_{b,k}) Y^+_{b,k+1} A_k$

Smoother
    Estimate:
        $\hat{x}^s_k = (I - K^s_k) \hat{x}^+_{f,k} + P^s_k \hat{y}^-_{b,k}$
        $K^s_k = P^+_{f,k} Y^-_{b,k} (I + P^+_{f,k} Y^-_{b,k})^{-1}$
        $P^s_k = (I - K^s_k) P^+_{f,k}$

Table 1: Fraser and Potter: two optimal linear filters.

Notes

- The infinite value of $P^-_{b,N}$ and the boundary condition needed to specify $\hat{x}_{b,N}$ are difficult to apply [3]. To avoid these problems, the backward filter is represented in information form.

- The smoother error covariance is $P^s_k = \left[ (P^+_{f,k})^{-1} + (P^-_{b,k})^{-1} \right]^{-1}$ [2]. At $k = N$ the smoother estimate and covariance have to be the same as those of the forward Kalman Filter. Thus $P^s_N = P^+_{f,N}$ yields $(P^-_{b,N})^{-1} = 0$. Since the information form is used for the backward filter, $Y^-_{b,k} = (P^-_{b,k})^{-1}$ and $\hat{y}^-_{b,k} = Y^-_{b,k} \hat{x}^-_{b,k}$ are both zero for $k = N$.

- The smoothed covariance is equivalent to Joseph's stabilized form.

- The forward estimate error and the backward forecast error are uncorrelated.
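
As a concrete illustration of Table 1, here is a minimal numpy sketch of the two-filter smoother (our code, not from [3]; it assumes a time-invariant system, no control input, and measurements starting at $k = 0$, a slight re-indexing of the table):

    import numpy as np

    def fraser_potter_smoother(zs, A, H, Q, R, x0, P0):
        """Two-filter fixed-interval smoother sketched from Table 1."""
        N, n = len(zs), A.shape[0]
        I = np.eye(n)

        # Forward Kalman filter: store the posteriors x^+_{f,k}, P^+_{f,k}.
        xf, Pf = [], []
        x, P = x0, P0
        for k in range(N):
            if k > 0:                                  # forecast step
                x, P = A @ x, A @ P @ A.T + Q
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (zs[k] - H @ x)                # corrector
            P = (I - K @ H) @ P
            xf.append(x.copy()); Pf.append(P.copy())

        # Backward information filter: y^-_{b,k}, Y^-_{b,k} carry only the
        # future data z_{k+1}..z_{N-1}; both vanish at the final time.
        yb, Yb = [np.zeros(n)] * N, [np.zeros((n, n))] * N
        Qinv, Rinv = np.linalg.inv(Q), np.linalg.inv(R)
        for k in range(N - 2, -1, -1):
            yp = yb[k + 1] + H.T @ Rinv @ zs[k + 1]    # backward update
            Yp = Yb[k + 1] + H.T @ Rinv @ H
            Kb = Yp @ np.linalg.inv(Yp + Qinv)         # backward propagation
            yb[k] = A.T @ (I - Kb) @ yp
            Yb[k] = A.T @ (I - Kb) @ Yp @ A

        # Smoother: Millman combination of the two filters.
        xs = []
        for k in range(N):
            Ks = Pf[k] @ Yb[k] @ np.linalg.inv(I + Pf[k] @ Yb[k])
            Ps = (I - Ks) @ Pf[k]
            xs.append((I - Ks) @ xf[k] + Ps @ yb[k])
        return np.array(xs)

At the final time the backward information is zero, so the smoothed estimate collapses to the forward filter estimate, as required by the notes above.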

Rauch, Tung, and Striebel (1965): correction to the Kalman Filter [5, 2]

Combines the backward filter and the smoother into one single backward recursive step.

Model and Observation
    $x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1}$
    $z_k = H_k x_k + v_k$

Forward Filter
    Initialization:
        $\hat{x}^+_{f,0} = \hat{x}_0$ with error covariance $P^+_{f,0}$
    Model Forecast Step / Predictor:
        $\hat{x}^-_{f,k} = A_{k-1} \hat{x}^+_{f,k-1} + B_{k-1} u_{k-1}$
        $P^-_{f,k} = A_{k-1} P^+_{f,k-1} A^T_{k-1} + Q_{k-1}$
    Data Assimilation Step / Corrector:
        $\hat{x}^+_{f,k} = \hat{x}^-_{f,k} + K_{f,k} (z_k - H_k \hat{x}^-_{f,k})$
        $K_{f,k} = P^-_{f,k} H^T_k (H_k P^-_{f,k} H^T_k + R_k)^{-1}$
        $P^+_{f,k} = (I - K_{f,k} H_k) P^-_{f,k}$

Smoother
    Initialization:
        $\hat{x}^s_N = \hat{x}^+_{f,N}$
        $P^s_N = P^+_{f,N}$
    Update:
        $K^s_k = P^+_{f,k} A^T_k (P^-_{f,k+1})^{-1}$
        $P^s_k = P^+_{f,k} - K^s_k (P^-_{f,k+1} - P^s_{k+1}) (K^s_k)^T$
        $\hat{x}^s_k = \hat{x}^+_{f,k} + K^s_k (\hat{x}^s_{k+1} - \hat{x}^-_{f,k+1})$

Table 2: Rauch, Tung, and Striebel: correction to the Kalman Filter.

Notes

- The smoother does not depend on either the backward covariance or the backward estimate; its form is simply a correction of the forward Kalman Filter using only the data provided by the forward pass.

- The smoothed estimate does not depend on the smoothed covariance.

- To obtain the smoothed estimate, only the forward state estimates and the smoother gains have to be stored.

- It can be derived entirely from optimal control theory [2]:

Minimize

$$J(w_k) = \frac{1}{2} \sum_{k=1}^{N} \left[ (z_k - H_k x_k)^T R^{-1}_k (z_k - H_k x_k) + w^T_k Q^{-1}_k w_k \right] + \frac{1}{2} \left( \hat{x}^+_{f,0} - x_0 \right)^T (P^+_{f,0})^{-1} \left( \hat{x}^+_{f,0} - x_0 \right)$$

subject to

$$x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1}, \qquad z_k = H_k x_k + v_k;$$

this yields a Two-Point Boundary Value Problem (TPBVP).

4
It can also be derived as the MAP estimate of $x_k$ given the data set $Z_N$, $N > k$, where $\hat{x}^s_k$ maximizes
the conditional density $p(x_k \mid Z_N)$ [6, p. 367].
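
Once the forward filter stores its priors and posteriors, the RTS backward pass is a few lines. Below is a minimal numpy sketch of the Table 2 recursion (our function and variable names; time-invariant $A$, no control input):

    import numpy as np

    def rts_smoother(xf_post, Pf_post, xf_prior, Pf_prior, A):
        """RTS fixed-interval smoother (Table 2): a backward correction pass
        over stored forward-filter quantities; no backward filter is run.

        xf_post, Pf_post  : posteriors x^+_{f,k}, P^+_{f,k}, k = 0..N
        xf_prior, Pf_prior: priors     x^-_{f,k}, P^-_{f,k}, k = 0..N
        """
        N = len(xf_post) - 1
        xs, Ps = [None] * (N + 1), [None] * (N + 1)
        xs[N], Ps[N] = xf_post[N], Pf_post[N]      # initialization at k = N
        for k in range(N - 1, -1, -1):
            Ks = Pf_post[k] @ A.T @ np.linalg.inv(Pf_prior[k + 1])
            xs[k] = xf_post[k] + Ks @ (xs[k + 1] - xf_prior[k + 1])
            Ps[k] = Pf_post[k] - Ks @ (Pf_prior[k + 1] - Ps[k + 1]) @ Ks.T
        return xs, Ps

Consistent with the storage note above, the estimate recursion touches only the forward estimates and the gains; the covariance pass can be dropped entirely if $P^s_k$ is not needed.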

3 Fixed Point Smoothing


The goal is to obtain the estimate $\hat{x}^s_N$ (the time $t_N$ is fixed, $N < k$) given a set of observations
$Z_k = \{ z_i \mid 0 \le i \le k \}$ [6].

Figure 3: Fixed Point Smoothing

Meditch (1967): extended RTS for fixed-point smoothing [6, 2]

Model and Observation
    $x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1}$
    $z_k = H_k x_k + v_k$

Forward Filter
    Initialization:
        $\hat{x}^+_{f,0} = \hat{x}_0$ with error covariance $P^+_{f,0}$
    Model Forecast Step / Predictor:
        $\hat{x}^-_{f,k} = A_{k-1} \hat{x}^+_{f,k-1} + B_{k-1} u_{k-1}$
        $P^-_{f,k} = A_{k-1} P^+_{f,k-1} A^T_{k-1} + Q_{k-1}$
    Data Assimilation Step / Corrector:
        $\hat{x}^+_{f,k} = \hat{x}^-_{f,k} + K_{f,k} (z_k - H_k \hat{x}^-_{f,k})$
        $K_{f,k} = P^-_{f,k} H^T_k (H_k P^-_{f,k} H^T_k + R_k)^{-1}$
        $P^+_{f,k} = (I - K_{f,k} H_k) P^-_{f,k}$

Smoother
    Initialization:
        $\hat{x}^s_{N|N} = \hat{x}^+_{f,N}$
        $P^s_{N|N} = P^+_{f,N}$
        $M_{N|N} = I$
    Update (for $k > N$):
        $M_{N|k} = M_{N|k-1} K^s_{k-1}$
        $K^s_k = P^+_{f,k} A^T_k (P^-_{f,k+1})^{-1}$
        $P^s_{N|k} = P^s_{N|k-1} + M_{N|k} (P^+_{f,k} - P^-_{f,k}) M^T_{N|k}$
        $\hat{x}^s_{N|k} = \hat{x}^s_{N|k-1} + M_{N|k} (\hat{x}^+_{f,k} - \hat{x}^-_{f,k})$

Table 3: Meditch (1967): extended RTS for fixed-point smoothing
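
A minimal sketch of the Table 3 recursion (our class and variable names, not from [6]; the caller runs the forward filter and passes in its stored priors, posteriors, and RTS gains):

    import numpy as np

    class FixedPointSmoother:
        """Fixed-point smoother of Table 3: refine the estimate of the state
        at the fixed time t_N as measurements beyond t_N keep arriving."""

        def __init__(self, x_post_N, P_post_N):
            self.xs = x_post_N                  # x^s_{N|N} = x^+_{f,N}
            self.Ps = P_post_N                  # P^s_{N|N} = P^+_{f,N}
            self.M = np.eye(len(x_post_N))      # M_{N|N} = I

        def step(self, Ks_prev, x_post_k, x_prior_k, P_post_k, P_prior_k):
            """Absorb measurement z_k (k > N); Ks_prev is the RTS gain
            K^s_{k-1} = P^+_{f,k-1} A^T_{k-1} (P^-_{f,k})^{-1}."""
            self.M = self.M @ Ks_prev           # M_{N|k} = M_{N|k-1} K^s_{k-1}
            self.xs = self.xs + self.M @ (x_post_k - x_prior_k)
            self.Ps = self.Ps + self.M @ (P_post_k - P_prior_k) @ self.M.T
            return self.xs, self.Ps

Each call to step absorbs one forward-filter update past $t_N$ and, since $P^+_{f,k} - P^-_{f,k}$ is negative semidefinite, can only shrink the covariance of the estimate at $t_N$.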

4 Fixed Lag Smoothing
The goal is to obtain the estimate $\hat{x}^s_k$ given the observations $Z_{k+N} = \{ z_i \mid 0 \le i \le k+N \}$ [6],
where $k+N$ indexes the most current measurement and $N$ is a constant lag.

Figure 4: Fixed Lag Smoothing

Meditch (1967): extended RTS for fixed-lag smoothing [6, 2]


The fixed-lag smoothing algorithm for discrete processes is obtained by combining the fixed-interval
and fixed-point algorithms [6].

Model and Observation
    $x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1}$
    $z_k = H_k x_k + v_k$

Forward Filter
    Initialization:
        $\hat{x}^+_{f,0} = \hat{x}_0$ with error covariance $P^+_{f,0}$
    Model Forecast Step / Predictor:
        $\hat{x}^-_{f,k} = A_{k-1} \hat{x}^+_{f,k-1} + B_{k-1} u_{k-1}$
        $P^-_{f,k} = A_{k-1} P^+_{f,k-1} A^T_{k-1} + Q_{k-1}$
    Data Assimilation Step / Corrector:
        $\hat{x}^+_{f,k} = \hat{x}^-_{f,k} + K_{f,k} (z_k - H_k \hat{x}^-_{f,k})$
        $K_{f,k} = P^-_{f,k} H^T_k (H_k P^-_{f,k} H^T_k + R_k)^{-1}$
        $P^+_{f,k} = (I - K_{f,k} H_k) P^-_{f,k}$

Smoother
    Initialization:
        $\hat{x}^s_{0|N} = \hat{x}^+_{f,N}$
        $P^s_{0|N} = P^+_{f,N}$
        $M_{0|N} = I$
    Update:
        $M_{k+1|k+N+1} = M_{k|k+N} K^s_{k+N}$
        $K^s_{k+N} = P^+_{f,k+N} A^T_{k+N} (P^-_{f,k+N+1})^{-1}$
        $P^s_{k+1|k+N+1} = P^-_{f,k+1} - (K^s_k)^{-1} (P^+_{f,k} - P^s_{k|k+N}) (K^s_k)^{-T} - M_{k+1|k+N+1} K_{f,k+N+1} H_{k+N+1} P^-_{f,k+N+1} M^T_{k+1|k+N+1}$
        $\hat{x}^s_{k+1|k+N+1} = A_k \hat{x}^s_{k|k+N} + B_k u_k + Q_k A^{-T}_k (P^+_{f,k})^{-1} (\hat{x}^s_{k|k+N} - \hat{x}^+_{f,k}) + M_{k+1|k+N+1} K_{f,k+N+1} (z_{k+N+1} - H_{k+N+1} \hat{x}^-_{f,k+N+1})$

Table 4: Meditch (1967): extended RTS for fixed-lag smoothing
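
A sketch of one step of the Table 4 recursion, under the same caveats as the earlier sketches (our variable names, time-invariant matrices, forward-filter quantities supplied by the caller; $A$ must be invertible for the $A^{-T}$ term):

    import numpy as np

    def fixed_lag_step(A, B, u_k, Q, H, z_new, Kf_new,
                       xs, Ps, M,
                       x_post_k, P_post_k, P_prior_k1,
                       x_prior_new, P_prior_new, Ks_head):
        """Advance (x^s_{k|k+N}, P^s_{k|k+N}) to (x^s_{k+1|k+N+1},
        P^s_{k+1|k+N+1}) when measurement z_{k+N+1} arrives.

        Arguments ending in _new are forward-filter quantities at k+N+1;
        Ks_head = K^s_{k+N} is the RTS gain at the head of the lag window.
        """
        M_new = M @ Ks_head                                # M_{k+1|k+N+1}
        Ks_k = P_post_k @ A.T @ np.linalg.inv(P_prior_k1)  # RTS gain at tail
        Ks_k_inv = np.linalg.inv(Ks_k)
        # covariance recursion
        Ps_new = (P_prior_k1
                  - Ks_k_inv @ (P_post_k - Ps) @ Ks_k_inv.T
                  - M_new @ Kf_new @ H @ P_prior_new @ M_new.T)
        # estimate recursion
        AinvT = np.linalg.inv(A).T
        xs_new = (A @ xs + B @ u_k
                  + Q @ AinvT @ np.linalg.inv(P_post_k) @ (xs - x_post_k)
                  + M_new @ Kf_new @ (z_new - H @ x_prior_new))
        return xs_new, Ps_new, M_new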

References
[1] M. Briers, A. Doucet, and S. Maskell. Smoothing algorithms for state-space models. Submitted to
IEEE Transactions on Signal Processing, 2004.

[2] John Crassidis and John Junkins. Optimal Estimation of Dynamic Systems. CRC Press, 2004.

[3] D. Fraser and J. Potter. The optimum linear smoother as a combination of two optimum linear
filters. IEEE Transactions on Automatic Control, 14(4):387–390, Aug 1969.

[4] Arthur Gelb. Applied Optimal Estimation. The M.I.T. Press, 1974.

[5] H. E. Rauch, F. Tung, and C. T. Striebel. Maximum likelihood estimates of linear dynamic
systems. AIAA Journal, 3(8):1445–1450, 1965.

[6] Andrew Sage and James Melsa. Estimation Theory with Applications to Communications and
Control. McGraw-Hill Book Company, 1971.
