Winter 2008-09
The Kalman filter

- Linear system driven by stochastic process
- Statistical steady-state
- Linear Gauss-Markov model
- Kalman filter
- Steady-state Kalman filter
now let's consider the covariance of the linear system x_{t+1} = A x_t + B u_t driven by a random process u_t; we have

    x_{t+1} - x̄_{t+1} = A(x_t - x̄_t) + B(u_t - ū_t)

and so

    Σ_x(t+1) = E[(A(x_t - x̄_t) + B(u_t - ū_t))(A(x_t - x̄_t) + B(u_t - ū_t))^T]
             = A Σ_x(t) A^T + B Σ_u(t) B^T + A Σ_xu(t) B^T + B Σ_ux(t) A^T

where Σ_xu(t) = Σ_ux(t)^T = E(x_t - x̄_t)(u_t - ū_t)^T

thus, the covariance Σ_x(t) satisfies another, Lyapunov-like linear dynamical system, driven by Σ_xu and Σ_u
consider the special case Σ_xu(t) = 0, i.e., x and u are uncorrelated, so we have the Lyapunov iteration

    Σ_x(t+1) = A Σ_x(t) A^T + B Σ_u(t) B^T,

which is stable if and only if A is stable

if A is stable and Σ_u(t) is constant, Σ_x(t) converges to Σ_x, called the steady-state covariance, which satisfies the Lyapunov equation

    Σ_x = A Σ_x A^T + B Σ_u B^T

thus, we can calculate the steady-state covariance of x exactly, by solving a Lyapunov equation (useful for starting simulations in statistical steady-state)
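as a rough illustration (not from the original notes), here is a minimal Python sketch using scipy.linalg.solve_discrete_lyapunov to compute the steady-state covariance; the matrices A, B, and Σ_u below are made-up placeholders:

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    # hypothetical stable A (spectral radius < 1), input matrix B,
    # and constant input covariance Sigma_u
    A = np.array([[0.9, 0.2],
                  [-0.1, 0.8]])
    B = np.array([[1.0],
                  [0.5]])
    Sigma_u = np.array([[1.0]])

    # solve Sigma_x = A Sigma_x A^T + B Sigma_u B^T
    Sigma_x = solve_discrete_lyapunov(A, B @ Sigma_u @ B.T)

    # sanity check: Sigma_x satisfies the Lyapunov equation
    assert np.allclose(Sigma_x, A @ Sigma_x @ A.T + B @ Sigma_u @ B.T)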
Example
we consider x_{t+1} = A x_t + w_t, with

    A = [ 0.6  -0.8 ]
        [ 0.7   0.6 ]

where w_t are IID N(0, I)

eigenvalues of A are 0.6 ± 0.75j, with magnitude 0.96, so A is stable

we solve the Lyapunov equation to find the steady-state covariance

    Σ_x = [ 13.35  -0.03 ]
          [ -0.03  11.75 ]

two initial state distributions: Σ_x(0) = 0, Σ_x(0) = 10^2 I; the plot below shows Σ_11(t) for the two cases
[Plot: Σ_11(t) versus t for the two initial conditions.]
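to reproduce the plot, here is a short sketch of my own (assuming the sign pattern of A reconstructed above) that runs the Lyapunov iteration from both initial conditions:

    import numpy as np

    A = np.array([[0.6, -0.8],
                  [0.7,  0.6]])
    W = np.eye(2)  # since w_t are IID N(0, I)

    for Sigma in (np.zeros((2, 2)), 100 * np.eye(2)):
        for t in range(100):
            Sigma = A @ Sigma @ A.T + W
        # Sigma[0, 0] approaches the steady-state value (about 13.35)
        print(Sigma[0, 0])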
[Plots: sample traces of (x_t)_1 versus t for the two initial state distributions.]
Linear Gauss-Markov model

we consider the system

    x_{t+1} = A x_t + w_t,    y_t = C x_t + v_t

- x_t ∈ R^n is the state; y_t ∈ R^p is the observed output
- w_t ∈ R^n is called process noise or state noise
- v_t ∈ R^p is called measurement noise

[Block diagram: w drives the state through the delay z^{-1}; C maps x to the output, where v enters.]
Statistical assumptions
- x_0, w_0, w_1, . . ., v_0, v_1, . . . are jointly Gaussian and independent
- w_t are IID with E w_t = 0, E w_t w_t^T = W
- v_t are IID with E v_t = 0, E v_t v_t^T = V
- E x_0 = x̄_0, E(x_0 - x̄_0)(x_0 - x̄_0)^T = Σ_0

(it's not hard to extend to the case where w_t, v_t are not zero mean)

we'll denote X_t = (x_0, . . . , x_t), etc.

since X_t and Y_t are linear functions of x_0, W_t, and V_t, we conclude they are all jointly Gaussian (i.e., the process x, w, v, y is Gaussian)
Statistical properties
- sensor noise v is independent of x
- w_t is independent of x_0, . . . , x_t and y_0, . . . , y_t
- Markov property: the process x is Markov, i.e., x_t|x_0, . . . , x_{t-1} = x_t|x_{t-1}

roughly speaking: if you know x_{t-1}, then knowledge of x_{t-2}, . . . , x_0 doesn't give any more information about x_t
State estimation
we focus on two state estimation problems:

- finding x̂_{t|t}, i.e., estimating the current state, based on the current and past observed outputs
- finding x̂_{t+1|t}, i.e., predicting the next state, based on the current and past observed outputs

since x_t, Y_t are jointly Gaussian, we can use the standard formula to find x̂_{t|t} (and similarly for x̂_{t+1|t})

    x̂_{t|t} = x̄_t + Σ_{x_t Y_t} Σ_{Y_t}^{-1} (Y_t - Ȳ_t)
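for concreteness, here is a minimal NumPy sketch of that standard conditional-mean formula for jointly Gaussian variables (the joint statistics below are made up for illustration):

    import numpy as np

    # hypothetical joint statistics of (x, y)
    x_bar = np.array([1.0, 0.0])          # mean of x
    y_bar = np.array([0.5])               # mean of y
    Sigma_xy = np.array([[0.3],
                         [0.1]])          # E (x - x_bar)(y - y_bar)^T
    Sigma_y = np.array([[2.0]])           # covariance of y

    y = np.array([1.2])                   # observed value of y

    # x_hat = x_bar + Sigma_xy Sigma_y^{-1} (y - y_bar)
    x_hat = x_bar + Sigma_xy @ np.linalg.solve(Sigma_y, y - y_bar)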
the Kalman filter is a clever method for computing x̂_{t|t} and x̂_{t+1|t} recursively
Measurement update
let's find x̂_{t|t} and Σ_{t|t} in terms of x̂_{t|t-1} and Σ_{t|t-1}

start with y_t = C x_t + v_t, and condition on Y_{t-1}:

    y_t|Y_{t-1} = C x_t|Y_{t-1} + v_t|Y_{t-1} = C x_t|Y_{t-1} + v_t

since v_t and Y_{t-1} are independent

so x_t|Y_{t-1} and y_t|Y_{t-1} are jointly Gaussian with mean and covariance

    [ x̂_{t|t-1}   ]      [ Σ_{t|t-1}     Σ_{t|t-1} C^T      ]
    [ C x̂_{t|t-1} ] ,    [ C Σ_{t|t-1}   C Σ_{t|t-1} C^T + V ]
now use the standard formula to get the mean and covariance of (x_t|Y_{t-1})|(y_t|Y_{t-1}), which is exactly the same as x_t|Y_t:

    x̂_{t|t} = x̂_{t|t-1} + Σ_{t|t-1} C^T (C Σ_{t|t-1} C^T + V)^{-1} (y_t - C x̂_{t|t-1})

    Σ_{t|t} = Σ_{t|t-1} - Σ_{t|t-1} C^T (C Σ_{t|t-1} C^T + V)^{-1} C Σ_{t|t-1}
this gives us x̂_{t|t} and Σ_{t|t} in terms of x̂_{t|t-1} and Σ_{t|t-1}; this is called the measurement update since it gives our updated estimate of x_t based on the measurement y_t becoming available
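a minimal Python sketch of the measurement update (the function and variable names are my own, not from the notes):

    import numpy as np

    def measurement_update(x_pred, Sigma_pred, y, C, V):
        # maps (x_{t|t-1}, Sigma_{t|t-1}) and y_t to (x_{t|t}, Sigma_{t|t})
        S = C @ Sigma_pred @ C.T + V               # innovation covariance
        K = Sigma_pred @ C.T @ np.linalg.inv(S)
        x_meas = x_pred + K @ (y - C @ x_pred)
        Sigma_meas = Sigma_pred - K @ C @ Sigma_pred
        return x_meas, Sigma_meas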
Time update
now let's increment time, using x_{t+1} = A x_t + w_t

condition on Y_t to get

    x_{t+1}|Y_t = A x_t|Y_t + w_t|Y_t = A x_t|Y_t + w_t

since w_t is independent of Y_t

therefore we have x̂_{t+1|t} = A x̂_{t|t} and

    Σ_{t+1|t} = E(x̂_{t+1|t} - x_{t+1})(x̂_{t+1|t} - x_{t+1})^T
              = E(A x̂_{t|t} - A x_t - w_t)(A x̂_{t|t} - A x_t - w_t)^T
              = A Σ_{t|t} A^T + W
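the corresponding sketch for the time update, paired with the measurement_update function above:

    def time_update(x_meas, Sigma_meas, A, W):
        # maps (x_{t|t}, Sigma_{t|t}) to (x_{t+1|t}, Sigma_{t+1|t})
        return A @ x_meas, A @ Sigma_meas @ A.T + W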
Kalman filter
measurement and time updates together give a recursive solution

start with the prior mean and covariance, x̂_{0|-1} = x̄_0, Σ_{0|-1} = Σ_0

apply the measurement update

    x̂_{t|t} = x̂_{t|t-1} + Σ_{t|t-1} C^T (C Σ_{t|t-1} C^T + V)^{-1} (y_t - C x̂_{t|t-1})

    Σ_{t|t} = Σ_{t|t-1} - Σ_{t|t-1} C^T (C Σ_{t|t-1} C^T + V)^{-1} C Σ_{t|t-1}

to get x̂_{0|0} and Σ_{0|0}; then apply the time update

    x̂_{t+1|t} = A x̂_{t|t},    Σ_{t+1|t} = A Σ_{t|t} A^T + W

to get x̂_{1|0} and Σ_{1|0}

now, repeat measurement and time updates . . .
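putting the two updates together gives a complete filter loop; here is a self-contained sketch (function and variable names are my own):

    import numpy as np

    def kalman_filter(ys, A, C, W, V, x0_bar, Sigma0):
        # returns the filtered estimates x_{t|t} for t = 0, 1, ...
        x_pred, Sigma_pred = x0_bar, Sigma0        # x_{0|-1}, Sigma_{0|-1}
        estimates = []
        for y in ys:
            # measurement update
            S = C @ Sigma_pred @ C.T + V
            K = Sigma_pred @ C.T @ np.linalg.inv(S)
            x_meas = x_pred + K @ (y - C @ x_pred)
            Sigma_meas = Sigma_pred - K @ C @ Sigma_pred
            estimates.append(x_meas)
            # time update
            x_pred = A @ x_meas
            Sigma_pred = A @ Sigma_meas @ A.T + W
        return estimates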
Riccati recursion
we can express the measurement and time updates for Σ as

    Σ_{t+1|t} = A Σ_{t|t-1} A^T + W - A Σ_{t|t-1} C^T (C Σ_{t|t-1} C^T + V)^{-1} C Σ_{t|t-1} A^T

which is a Riccati recursion, with initial condition Σ_{0|-1} = Σ_0
- Σ_{t|t-1} can be computed before any observations are made
- thus, we can calculate the estimation error covariance before we get any observed data
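since the recursion involves no data, the error covariances can be precomputed; a sketch (my own naming):

    import numpy as np

    def riccati_recursion(A, C, W, V, Sigma0, T):
        # computes Sigma_{t|t-1} for t = 0, ..., T, with Sigma_{0|-1} = Sigma0
        Sigmas = [Sigma0]
        for _ in range(T):
            S = Sigmas[-1]
            G = A @ S @ C.T @ np.linalg.inv(C @ S @ C.T + V)
            Sigmas.append(A @ S @ A.T + W - G @ C @ S @ A.T)
        return Sigmas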
Observer form
we can express the KF as

    x̂_{t+1|t} = A x̂_{t|t-1} + A Σ_{t|t-1} C^T (C Σ_{t|t-1} C^T + V)^{-1} (y_t - C x̂_{t|t-1})
              = A x̂_{t|t-1} + L_t (y_t - ŷ_{t|t-1})

where L_t = A Σ_{t|t-1} C^T (C Σ_{t|t-1} C^T + V)^{-1} is the observer gain

- ŷ_{t|t-1} = C x̂_{t|t-1} is our output prediction, i.e., our estimate of y_t based on y_0, . . . , y_{t-1}
- e_t = y_t - ŷ_{t|t-1} is our output prediction error
- A x̂_{t|t-1} is our prediction of x_{t+1} based on y_0, . . . , y_{t-1}
- our estimate of x_{t+1} is the prediction based on y_0, . . . , y_{t-1}, plus a linear function of the output prediction error
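in code, one observer-form step might look like this sketch (covariance propagated by the Riccati recursion; the names are mine):

    import numpy as np

    def observer_step(x_pred, Sigma_pred, y, A, C, W, V):
        # one step: (x_{t|t-1}, Sigma_{t|t-1}), y_t -> (x_{t+1|t}, Sigma_{t+1|t})
        L_t = A @ Sigma_pred @ C.T @ np.linalg.inv(C @ Sigma_pred @ C.T + V)
        e = y - C @ x_pred                         # output prediction error
        x_next = A @ x_pred + L_t @ e
        Sigma_next = A @ Sigma_pred @ A.T + W - L_t @ C @ Sigma_pred @ A.T
        return x_next, Sigma_next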
[Block diagram of the Kalman filter in observer form: the prediction error e_t = y_t - ŷ_{t|t-1} is fed back through the gain L_t.]
Steady-state Kalman filter

the steady-state filter uses the steady-state value Σ̂ of Σ_{t|t-1} (the limit of the Riccati recursion) and is a time-invariant observer:

    x̂_{t+1|t} = A x̂_{t|t-1} + L(y_t - ŷ_{t|t-1}),    ŷ_{t|t-1} = C x̂_{t|t-1}

where L = A Σ̂ C^T (C Σ̂ C^T + V)^{-1}

define the state estimation error x̃_{t|t-1} = x_t - x̂_{t|t-1}, so

    y_t - ŷ_{t|t-1} = C x_t + v_t - C x̂_{t|t-1} = C x̃_{t|t-1} + v_t

and

    x̃_{t+1|t} = x_{t+1} - x̂_{t+1|t}
              = A x_t + w_t - A x̂_{t|t-1} - L(C x̃_{t|t-1} + v_t)
              = (A - LC) x̃_{t|t-1} + w_t - L v_t
thus, the estimation error propagates according to a linear system, with closed-loop dynamics A - LC, driven by the process w_t - L v_t, which is IID zero mean with covariance W + L V L^T

provided (A, W) is controllable and (C, A) is observable, A - LC is stable
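the steady-state covariance Σ̂ solves a discrete-time algebraic Riccati equation; SciPy's solve_discrete_are handles the dual (control) form, so we pass A^T and C^T. A sketch with made-up problem data:

    import numpy as np
    from scipy.linalg import solve_discrete_are

    # hypothetical problem data
    A = np.array([[0.95, 0.10],
                  [0.00, 0.90]])
    C = np.array([[1.0, 0.0]])
    W = 0.1 * np.eye(2)                  # process noise covariance
    V = np.array([[1.0]])                # measurement noise covariance

    # estimator ARE via duality: Sigma_hat = A Sigma_hat A^T + W
    #   - A Sigma_hat C^T (C Sigma_hat C^T + V)^{-1} C Sigma_hat A^T
    Sigma_hat = solve_discrete_are(A.T, C.T, W, V)

    # steady-state observer gain
    L = A @ Sigma_hat @ C.T @ np.linalg.inv(C @ Sigma_hat @ C.T + V)

    # A - L C should have eigenvalues inside the unit circle
    print(np.abs(np.linalg.eigvals(A - L @ C)))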
Example
the system is x_{t+1} = A x_t + w_t, y_t = C x_t + v_t, with x_t ∈ R^6, y_t ∈ R

we'll take E x_0 = 0, E x_0 x_0^T = Σ_0 = 5^2 I; W = (1.5)^2 I, V = 1

eigenvalues of A: 0.9973 ± 0.0730j, 0.9995 ± 0.0324j, 0.9941 ± 0.1081j (which have magnitude one)

goal: predict y_{t+1} based on y_0, . . . , y_t
the prior output variance E y_t^2 = C Σ_x(t) C^T + V can be found from the Lyapunov iteration

    Σ_x(t+1) = A Σ_x(t) A^T + W,    Σ_x(0) = Σ_0
[Plot: E y_t^2 versus t.]
now, let's plot the prediction error variance versus t,

    E e_t^2 = E(ŷ_{t|t-1} - y_t)^2 = C Σ_{t|t-1} C^T + V,

where Σ_{t|t-1} satisfies the Riccati recursion

    Σ_{t+1|t} = A Σ_{t|t-1} A^T + W - A Σ_{t|t-1} C^T (C Σ_{t|t-1} C^T + V)^{-1} C Σ_{t|t-1} A^T

initialized by Σ_{0|-1} = Σ_0
[Plot: prediction error variance E e_t^2 versus t.]
now let's try the Kalman filter on a realization y_t; the top plot shows y_t, and the bottom plot shows e_t (on a different vertical scale)
[Plots: a sample realization y_t (top) and the prediction error e_t (bottom) versus t.]