Gabriel A. Terejanu
Department of Computer Science and Engineering
University at Buffalo, Buffalo, NY 14260
terejanu@buffalo.edu
1 Introduction
Smoothing can be separated into three classes [6]:
1. Fixed-interval smoothing. The goal is to obtain the estimates $x^s_k$, for $k = 0 \ldots N$, given a fixed observation interval $Z_N = \{z_j \,|\, 0 \le j \le N\}$ [6].
2. Fixed-point smoothing. The goal is to obtain the estimate $x^s_N$ (the time $t_N$ is fixed, $N < k$) given a growing set of observations $Z_k = \{z_j \,|\, 0 \le j \le k\}$ [6].
3. Fixed-lag smoothing. The goal is to obtain the estimate $x^s_k$ given the observations $Z_{k+N} = \{z_j \,|\, 0 \le j \le k + N\}$ [6], where $k + N$ indexes the most current measurement and $N$ is a constant lag.
It is possible to employ a single smoothing scheme, based on fixed-interval smoothing to solve all
three problems [1].
Only those states which are controllable by the noise driving the system state vector are smoothable
(Weiss 1970).
2 Fixed Interval Smoothing
The goal is to obtain the estimates $x^s_k$, for $k = 0 \ldots N$, given a fixed observation interval $Z_N = \{z_j \,|\, 0 \le j \le N\}$ [6].
The smoothed estimate is expressed as a linear combination of the forward and backward filter
state estimates. The optimal weighting between the two is given by Millman's theorem, which
is an exact analog to the maximum-likelihood estimate of a scalar from independent measurements [2].
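As a minimal sketch of this combination (variable names are illustrative; the forward and backward estimates, with their covariances, are assumed to come from two independently run filters):

```python
import numpy as np

def combine_estimates(x_f, P_f, x_b, P_b):
    """Fuse forward and backward filter estimates (Millman's theorem).

    The smoothed covariance is the "parallel" combination of the two
    filter covariances, and the smoothed state is the covariance-weighted
    average of the two state estimates.
    """
    P_f_inv = np.linalg.inv(P_f)
    P_b_inv = np.linalg.inv(P_b)
    P_s = np.linalg.inv(P_f_inv + P_b_inv)       # smoothed covariance
    x_s = P_s @ (P_f_inv @ x_f + P_b_inv @ x_b)  # smoothed estimate
    return x_s, P_s
```

With equal covariances the result reduces to the plain average of the two estimates, and the smoothed covariance is half of either filter covariance, mirroring the scalar maximum-likelihood analogy.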
Model and Observation
$$x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1}$$
$$z_k = H_k x_k + v_k$$
Forward Filter
Initialization: $\hat{x}^+_{f0} = \bar{x}_0$ with error covariance $P^+_{f0}$
Notes
Rauch, Tung, and Striebel (1965): a correction to the Kalman filter [5, 2]. This formulation combines the backward filter and the smoother in one single recursive step.
Notes
The smoother depends on neither the backward covariance nor the backward estimate. Its form
reveals just a correction of the current Kalman filter estimate using only the data provided by the
forward filter.
In order to obtain the smoothed estimate, only the forward state estimate and the smoother gain
have to be stored.
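The backward recursion described above can be sketched as follows (a minimal illustration with assumed argument names; the posterior and prior quantities are taken as the stored output of a completed forward Kalman filter pass):

```python
import numpy as np

def rts_smoother(x_post, P_post, x_prior, P_prior, A):
    """Backward Rauch-Tung-Striebel pass.

    x_post[k], P_post[k]   : forward filter posterior at step k
    x_prior[k], P_prior[k] : forward filter prior (prediction) at step k
    A[k]                   : state-transition matrix from step k to k+1
    """
    N = len(x_post)
    x_s = [None] * N
    P_s = [None] * N
    x_s[-1] = x_post[-1]   # smoother starts from the final filter estimate
    P_s[-1] = P_post[-1]
    for k in range(N - 2, -1, -1):
        # smoother gain K_k = P_k^+ A_k^T (P_{k+1}^-)^{-1}
        K = P_post[k] @ A[k].T @ np.linalg.inv(P_prior[k + 1])
        # correct the forward estimate with the smoothed innovation
        x_s[k] = x_post[k] + K @ (x_s[k + 1] - x_prior[k + 1])
        P_s[k] = P_post[k] + K @ (P_s[k + 1] - P_prior[k + 1]) @ K.T
    return x_s, P_s
```

Note that, as stated above, the recursion touches only forward-filter quantities and the smoother gain; no backward filter is run.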
Minimize
$$J(w_k) = \frac{1}{2} \sum_{k=1}^{N} \left[ (z_k - H_k x_k)^T R_k^{-1} (z_k - H_k x_k) + w_k^T Q_k^{-1} w_k \right] + \frac{1}{2} \left( \hat{x}^+_{f0} - \bar{x}_0 \right)^T \left( P^+_{f0} \right)^{-1} \left( \hat{x}^+_{f0} - \bar{x}_0 \right)$$
subject to
$$x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1}$$
$$z_k = H_k x_k + v_k$$
It can be derived as the MAP estimate of $x_k$, given the data set $Z_N$, $N > k$, where $x_k$ maximizes
the conditional density $p(x_k \,|\, Z_N)$ [6, p. 367].
4 Fixed Lag Smoothing
The goal is to obtain the estimate $x^s_k$ given the observations $Z_{k+N} = \{z_j \,|\, 0 \le j \le k + N\}$ [6], where
$k + N$ indexes the most current measurement and $N$ is a constant lag.
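Since, as noted in the introduction, a single fixed-interval scheme can solve all three problems [1], fixed-lag smoothing can be sketched by rerunning a forward filter plus a backward pass over a sliding window of the last $N+1$ measurements. The snippet below does this for a scalar random-walk model with assumed noise variances `q` and `r` (the model and the diffuse initialization from the first windowed measurement are illustrative assumptions, not from the source):

```python
from collections import deque

def fixed_lag_smoother(zs, lag, q=1.0, r=1.0):
    """Scalar random-walk example: x_k = x_{k-1} + w_k, z_k = x_k + v_k."""
    window = deque(maxlen=lag + 1)
    out = []
    for z in zs:
        window.append(z)
        # forward Kalman filter over the window only
        x_post, P_post, x_prior, P_prior = [], [], [], []
        x, P = window[0], r          # diffuse start from first windowed z
        for i, zi in enumerate(window):
            if i > 0:
                xp, Pp = x, P + q    # predict (random walk)
                K = Pp / (Pp + r)    # Kalman gain
                x = xp + K * (zi - xp)
                P = (1 - K) * Pp
                x_prior.append(xp)
                P_prior.append(Pp)
            x_post.append(x)
            P_post.append(P)
        # backward (fixed-interval) pass over the window
        xs = x_post[-1]
        for i in range(len(x_post) - 2, -1, -1):
            G = P_post[i] / P_prior[i]           # smoother gain
            xs = x_post[i] + G * (xs - x_prior[i])
        out.append(xs)               # smoothed estimate, lag steps behind
    return out
```

Each emitted estimate uses the `lag` measurements that arrived after it, at the cost of delaying the output by a constant lag.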
References
[1] M. Briers, A. Doucet, and S. Maskell. Smoothing algorithms for state-space models. Submitted to
IEEE Transactions on Signal Processing, 2004.
[2] John Crassidis and John Junkins. Optimal Estimation of Dynamic Systems. CRC Press, 2004.
[3] D. Fraser and J. Potter. The optimum linear smoother as a combination of two optimum linear
filters. IEEE Transactions on Automatic Control, 14(4):387-390, Aug 1969.
[4] Arthur Gelb. Applied Optimal Estimation. The M.I.T. Press, 1974.
[5] H. E. Rauch, F. Tung, and C. T. Striebel. Maximum likelihood estimates of linear dynamic
systems. AIAA Journal, 3(8):1445-1450, 1965.
[6] Andrew Sage and James Melsa. Estimation Theory with Applications to Communications and
Control. McGraw-Hill Book Company, 1971.