Robert Stengel, Optimal Control and Estimation, MAE 546, Princeton University, 2013

! Estimating unknown constants from redundant measurements
! Least-squares
! Weighted least-squares
! Measurement model

    y = H x,   dim(y) = k x 1,  dim(x) = n x 1,  dim(H) = k x n

If H is square (k = n) and non-singular, x = H^{-1} y. With redundant, noisy measurements,

    z = y + n = H x + n

! Measurement residual

    \epsilon = z - \hat{y} = z - H \hat{x},   dim(\epsilon) = k x 1

! Quadratic error cost

    J = (1/2) \epsilon^T \epsilon = (1/2) (z - H\hat{x})^T (z - H\hat{x})
      = (1/2) [ z^T z - \hat{x}^T H^T z - z^T H \hat{x} + \hat{x}^T H^T H \hat{x} ]

! Necessary condition for a minimum

    \partial J / \partial \hat{x} = 0 = -z^T H + \hat{x}^T H^T H

so \hat{x}^T H^T H = z^T H, and

    \hat{x}^T = z^T H (H^T H)^{-1}   (row)
    \hat{x} = (H^T H)^{-1} H^T z     (column)
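As a quick check of the closed-form solution, here is a minimal NumPy sketch; the matrix sizes, seed, and noise level are made up for illustration:

```python
import numpy as np

# Least-squares estimate x_hat = (H^T H)^{-1} H^T z (illustrative sizes and noise)
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])                 # unknown constants (n = 2)
H = rng.normal(size=(10, 2))                   # output matrix (k = 10 measurements)
z = H @ x_true + 0.1 * rng.normal(size=10)     # z = H x + n

x_hat = np.linalg.solve(H.T @ H, H.T @ z)      # solve the normal equations
print(x_hat)                                   # close to x_true
```

Solving the normal equations directly is fine for small, well-conditioned problems; `np.linalg.lstsq` does the same job more robustly via a matrix factorization.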
! Example: estimating a scalar constant from k redundant measurements

    z_i = x + n_i,  i = 1 to k

! Express measurements as

    z = H x + n

! Output matrix

    H = [1  1  ...  1]^T

! Optimal estimate

    \hat{x} = (H^T H)^{-1} H^T z = (k)^{-1} (z_1 + z_2 + ... + z_k) = (1/k) \sum_{i=1}^{k} z_i

i.e., the least-squares estimate is the sample average.
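A short sketch (with made-up measurement values) confirming that the least-squares estimate with H = [1 ... 1]^T is the sample average:

```python
import numpy as np

# With H = [1 1 ... 1]^T, the least-squares estimate reduces to the average
z = np.array([2.1, 1.9, 2.3, 2.0, 1.7])        # k = 5 redundant measurements
H = np.ones((len(z), 1))
x_hat = np.linalg.solve(H.T @ H, H.T @ z)[0]   # (H^T H)^{-1} H^T z = (1/k) sum(z_i)
print(x_hat, z.mean())                         # both 2.0
```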
! Example: least-squares fit of a line

    z = (a_0 + a_1 x) + n
    y = a_0 + a_1 x

With one row [1  x_i] of H per sample point, the optimal estimate is

    [\hat{a}_0  \hat{a}_1]^T = (H^T H)^{-1} H^T z
    \hat{y} = \hat{a}_0 + \hat{a}_1 x
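The line fit can be sketched as follows; the sample points and coefficients are illustrative, and the data here are noise-free so the fit recovers a_0 and a_1 exactly:

```python
import numpy as np

# Fit y = a0 + a1*x to samples z_i = (a0 + a1*x_i) + n_i (n = 0 here)
x = np.linspace(0.0, 5.0, 20)
z = 1.0 + 2.0 * x                              # true line: a0 = 1, a1 = 2
H = np.column_stack([np.ones_like(x), x])      # rows [1, x_i]
a0_hat, a1_hat = np.linalg.solve(H.T @ H, H.T @ z)
print(a0_hat, a1_hat)                          # 1.0, 2.0
```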
[Figure: least-squares restoration of a degraded image]
! Weighted least squares

    z = H x + n

! Give the more uncertain measurements less weight in arriving at the minimum-cost estimate
! Let S = measure of uncertainty; then express error cost in terms of S^{-1}

    J = (1/2) \epsilon^T S^{-1} \epsilon = (1/2) (z - H\hat{x})^T S^{-1} (z - H\hat{x})
      = (1/2) [ z^T S^{-1} z - \hat{x}^T H^T S^{-1} z - z^T S^{-1} H \hat{x} + \hat{x}^T H^T S^{-1} H \hat{x} ]

! Necessary condition

    \partial J / \partial \hat{x} = 0 = (1/2) [ -(H^T S^{-1} z)^T - z^T S^{-1} H + (H^T S^{-1} H \hat{x})^T + \hat{x}^T H^T S^{-1} H ]

so

    H^T S^{-1} H \hat{x} = H^T S^{-1} z
    \hat{x} = (H^T S^{-1} H)^{-1} H^T S^{-1} z
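A minimal weighted least-squares sketch, assuming a diagonal S with one deliberately down-weighted (high-uncertainty) measurement; all values are illustrative:

```python
import numpy as np

# Weighted least squares: x_hat = (H^T S^{-1} H)^{-1} H^T S^{-1} z
z = np.array([2.0, 2.0, 10.0])         # third measurement is an outlier...
H = np.ones((3, 1))
S = np.diag([1.0, 1.0, 100.0])         # ...and is known to be far more uncertain
Si = np.linalg.inv(S)
x_hat = np.linalg.solve(H.T @ Si @ H, H.T @ Si @ z)[0]
print(x_hat)                           # ~2.04, vs. unweighted mean ~4.67
```

The estimate stays near the trusted measurements because the outlier's residual contributes almost nothing to the cost.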
a) Normalize the cost function according to expected measurement error, S_A

With z_i = x + n_i (so H = [1  1  ...  1]^T) and S_A = diag(a_11, a_22, ..., a_kk),

    J = (1/2) (z - H\hat{x})^T S_A^{-1} (z - H\hat{x})

and the weighted least-squares solution reduces to the error-weighted average

    \hat{x} = [ \sum_{i=1}^{k} a_ii^{-1} z_i ] / [ \sum_{i=1}^{k} a_ii^{-1} ]
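The error-weighted average can be checked against the general weighted least-squares formula; the a_ii values below are illustrative:

```python
import numpy as np

# Error-weighted average with S_A = diag(a_11, ..., a_kk)
z = np.array([2.0, 2.2, 3.0])
a = np.array([1.0, 1.0, 10.0])         # expected error of each measurement
x_hat = np.sum(z / a) / np.sum(1.0 / a)

# Same result from the general formula x_hat = (H^T S^{-1} H)^{-1} H^T S^{-1} z
H = np.ones((3, 1))
Si = np.diag(1.0 / a)
x_gen = np.linalg.solve(H.T @ Si @ H, H.T @ Si @ z)[0]
print(x_hat, x_gen)                    # identical
```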
b) Normalize the cost function according to expected measurement residual, S_B

    J = (1/2) \epsilon^T S_B^{-1} \epsilon = (1/2) (z - H\hat{x})^T S_B^{-1} (z - H\hat{x})

The residual is

    \epsilon = z - H\hat{x} = H(x - \hat{x}) + n = H \tilde{x} + n

so its expected outer product is

    E[\epsilon \epsilon^T] = H E[\tilde{x} \tilde{x}^T] H^T + H E[\tilde{x} n^T] + E[n \tilde{x}^T] H^T + E[n n^T]
                           \triangleq H P H^T + H M + M^T H^T + R = S_B
! Recursive least squares

! First measurement set

    z_1 = H_1 x + n_1
    dim(z_1) = dim(n_1) = k_1 x 1,  dim(H_1) = k_1 x n,  dim(R_1) = k_1 x k_1

! Optimal (weighted least-squares) estimate from the first set

    \hat{x}_1 = (H_1^T R_1^{-1} H_1)^{-1} H_1^T R_1^{-1} z_1

! Recursive approach
! Optimal estimate has been made from prior measurement set
! New measurement set is obtained
! Optimal estimate is improved by incremental change (or correction) to the prior optimal estimate
! Cost function for both measurement sets

    J_2 = (1/2) [ (z_1 - H_1\hat{x}_2)^T  (z_2 - H_2\hat{x}_2)^T ] [ R_1^{-1}    0     ] [ z_1 - H_1\hat{x}_2 ]
                                                                   [    0     R_2^{-1} ] [ z_2 - H_2\hat{x}_2 ]

! Define the error covariance of the first estimate

    P_1^{-1} \triangleq H_1^T R_1^{-1} H_1

! Error covariance incorporating both sets

    P_2 = (H_1^T R_1^{-1} H_1 + H_2^T R_2^{-1} H_2)^{-1} = (P_1^{-1} + H_2^T R_2^{-1} H_2)^{-1}

! Matrix inversion lemma

    P_2 = (P_1^{-1} + H_2^T R_2^{-1} H_2)^{-1} = P_1 - P_1 H_2^T (H_2 P_1 H_2^T + R_2)^{-1} H_2 P_1

! Improved estimate

    \hat{x}_2 = P_2 (H_1^T R_1^{-1} z_1 + H_2^T R_2^{-1} z_2)
              = \hat{x}_1 + P_1 H_2^T (H_2 P_1 H_2^T + R_2)^{-1} (z_2 - H_2 \hat{x}_1)
              \triangleq \hat{x}_1 + K (z_2 - H_2 \hat{x}_1)
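The two-step recursion can be sketched in NumPy; the measurement sets, noise level, and covariances below are made up, and the recursive result should match the batch estimate over the stacked measurements:

```python
import numpy as np

# Recursive least squares: improve a prior estimate x1 with a new batch z2
# without re-processing z1 (all numbers illustrative)
rng = np.random.default_rng(1)
x_true = np.array([1.0, -1.0])

H1 = rng.normal(size=(5, 2)); R1 = np.eye(5)
z1 = H1 @ x_true + 0.1 * rng.normal(size=5)
P1 = np.linalg.inv(H1.T @ np.linalg.inv(R1) @ H1)   # error covariance of x1
x1 = P1 @ H1.T @ np.linalg.inv(R1) @ z1             # estimate from z1 alone

H2 = rng.normal(size=(5, 2)); R2 = np.eye(5)
z2 = H2 @ x_true + 0.1 * rng.normal(size=5)

K = P1 @ H2.T @ np.linalg.inv(H2 @ P1 @ H2.T + R2)  # estimator gain
x2 = x1 + K @ (z2 - H2 @ x1)                        # corrected estimate
P2 = P1 - K @ H2 @ P1                               # updated error covariance
print(x2)
```

Only the gain, correction, and covariance update touch the new data; the prior measurements enter solely through x1 and P1. This is the same update structure the Kalman filter applies at each measurement time.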
! Example: recursive estimate of a scalar constant (H = 1, R = 1)

    z_i = x + n_i,  H_i = 1,  R_i = 1

! Recursive update

    \hat{x}_i = \hat{x}_{i-1} + K_i (z_i - H_i \hat{x}_{i-1}),  with  P_i = (P_{i-1}^{-1} + H_i^T R_i^{-1} H_i)^{-1}

! For the scalar case this becomes

    \hat{x}_i = \hat{x}_{i-1} + p_{i-1} (p_{i-1} + 1)^{-1} (z_i - \hat{x}_{i-1})
    p_i = (p_{i-1}^{-1} + 1)^{-1} = p_{i-1} (p_{i-1} + 1)^{-1}
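For the scalar case, a short sketch (illustrative measurements) showing the shrinking gain and that the recursion reproduces the running average:

```python
# Scalar case (H = 1, R = 1): the recursive estimate is the running average,
# and the gain k_i = p_{i-1}/(p_{i-1} + 1) shrinks like 1/i
z = [2.0, 4.0, 6.0, 8.0]
x_hat, p = z[0], 1.0                  # initialize with the first measurement
gains = []
for zi in z[1:]:
    k = p / (p + 1.0)                 # estimator gain
    x_hat = x_hat + k * (zi - x_hat)  # correct the prior estimate
    p = p - k * p                     # p_i = p_{i-1}/(p_{i-1} + 1)
    gains.append(k)
print(x_hat, gains)                   # 5.0 (the sample mean), gains 1/2, 1/3, 1/4
```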
! Dimensions in the general case

    dim(x) = n x 1;  dim(P) = n x n
    dim(z) = r x 1;  dim(R) = r x r
    dim(H) = r x n;  dim(K) = n x r

! Estimator gain

    K_i = P_{i-1} H_i^T (H_i P_{i-1} H_i^T + R_i)^{-1}
! Why does the gain shrink?
! Each new sample has a smaller effect on the average than the sample before
Supplemental Material

http://en.wikipedia.org/wiki/Kriging