• Virtually all branches of science, engineering, and social science call for estimation theory, whether for data analysis, for system control subject to random disturbances, or for decision making based on incomplete information.
• The projection theorem can be applied directly to the area of statistical estimation.
¦ Least squares.
¦ Maximum likelihood.
¦ Bayesian techniques.
• When all variables have Gaussian statistics, these techniques produce linear equations.
CHAPTER 4. LINEAR ESTIMATION THEORY
Preliminaries
¦ The “derivative” p(ξ) of the probability distribution P(ξ) is called the probability density function (pdf) of the variable x, i.e.,

P(ξ) = ∫_{−∞}^{ξ} p(x) dx.

. Note that ∫_{−∞}^{∞} p(x) dx = 1.
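The relation between a distribution and its density can be checked numerically. Below is a minimal Python sketch (the standard normal density is a hypothetical choice, purely for illustration), approximating P(ξ) by a midpoint Riemann sum:

```python
import math

def p(x):
    # standard normal probability density function
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def P(xi, lo=-10.0, n=200000):
    # midpoint-rule approximation of P(xi) = integral of p(x) from -inf to xi,
    # truncating the lower limit at `lo`, where p is negligible
    h = (xi - lo) / n
    return sum(p(lo + (k + 0.5) * h) for k in range(n)) * h

print(round(P(10.0), 4))  # total probability mass → 1.0
print(round(P(0.0), 4))   # symmetry of the normal density → 0.5
```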
¦ The joint probability distribution of x1, …, xn is P(ξ1, …, ξn) = Prob(x1 ≤ ξ1, …, xn ≤ ξn).
¦ If x1 and x2 are independent, then E[(x1 − E[x1])(x2 − E[x2])] = E[x1 − E[x1]] E[x2 − E[x2]] = 0, i.e., x1 and x2 are uncorrelated.
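This can be illustrated with a quick Monte Carlo check in Python; the two distributions below are arbitrary hypothetical choices, drawn independently so that the sample covariance should be near zero:

```python
import random

random.seed(0)
N = 200000
x1 = [random.gauss(0.0, 1.0) for _ in range(N)]      # N(0, 1) samples
x2 = [random.uniform(-1.0, 1.0) for _ in range(N)]   # drawn independently of x1

m1 = sum(x1) / N
m2 = sum(x2) / N
cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2)) / N
print(abs(cov) < 0.01)  # sample covariance of independent variables → True
```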
Estimation
¦ We assume a linear model, i.e., the response y is related to the input β linearly:

y = Wβ.

• It is instructive to compare this deterministic least squares setting with settings in which the measurements are corrupted by random noise.
Least Squares Model
• Given the observations y, the least squares estimate β̂ minimizes ‖y − Wβ‖² and is characterized by the normal equation

Wᵀ(y − W β̂) = 0.
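As a concrete sketch, the normal equation can be assembled and solved by hand for a hypothetical straight-line fit y ≈ β1 + β2·t (the data values below are invented for illustration):

```python
# Least squares fit of y ≈ W β for a hypothetical line y = β1 + β2·t,
# solved via the normal equation Wᵀ(y − W β̂) = 0, i.e. (WᵀW) β̂ = Wᵀ y.
t = [0.0, 1.0, 2.0, 3.0]
y = [1.1, 2.9, 5.1, 7.0]            # roughly y = 1 + 2 t
W = [[1.0, ti] for ti in t]         # design matrix, rows [1, t_i]

# Assemble WᵀW (2x2) and Wᵀy (2-vector).
a = sum(row[0] * row[0] for row in W)
b = sum(row[0] * row[1] for row in W)
d = sum(row[1] * row[1] for row in W)
u = sum(W[i][0] * y[i] for i in range(len(y)))
v = sum(W[i][1] * y[i] for i in range(len(y)))

# Solve the 2x2 normal equations by Cramer's rule.
det = a * d - b * b
beta = [(u * d - v * b) / det, (a * v - b * u) / det]
print([round(x, 2) for x in beta])  # → [1.04, 1.99]
```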
Gauss-Markov Model

y = Wβ + ε.

¦ W ∈ R^{m×n} is known.
¦ ε ∈ R^m is a random vector with zero mean and covariance E(εεᵀ) = Q.
¦ y represents the outcome of inexact measurements in R^m.
¦ We seek a linear estimator of the form

β̂ := Ky.

• Suppose the approximation is measured by minimizing the expected value of the error, i.e.,

min_{K ∈ R^{n×m}} E[‖β̂ − β‖²].
Gauss-Markov Estimate

• Observe that

E[β̂] = E[KWβ + Kε] = KW E[β].

For the estimate to be unbiased, it is expected that KW = I_n.

¦ The problem now becomes: given a symmetric and positive definite matrix Q, minimize the error variance E[‖Kε‖²] = trace(KQKᵀ) over all K satisfying KW = I_n.

• It can be argued that the resulting solution β̂_i is the minimum-variance unbiased estimate of β_i for each individual i.
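For n = 1 with diagonal Q, the Gauss-Markov estimator K = (WᵀQ⁻¹W)⁻¹WᵀQ⁻¹ reduces to a variance-weighted average, which a few lines of Python can illustrate (the measurement values and variances below are hypothetical):

```python
# Gauss-Markov estimate for a single parameter (n = 1) and diagonal Q:
# K weights measurement i by w_i / q_i, normalized so that KW = 1 (unbiasedness).
w = [1.0, 1.0, 1.0]     # W is a column of ones: y_i = β + ε_i
q = [0.25, 1.0, 4.0]    # noise variances, the diagonal of Q
y = [2.1, 1.8, 3.0]     # measurements

num = sum(wi * yi / qi for wi, qi, yi in zip(w, q, y))
den = sum(wi * wi / qi for wi, qi in zip(w, q))
beta_hat = num / den    # more accurate measurements get larger weight
print(round(beta_hat, 4))  # → 2.0857
```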
Minimum-Variance Model

¦ W ∈ R^{m×n} is known.
¦ ε ∈ R^m is a random vector with zero mean and covariance E(εεᵀ) = Q.
¦ β is a random vector in R^n with known statistical information.
¦ y represents the outcome of inexact measurements in R^m.

¦ Again we use a linear estimator

β̂ := Ky.

• The best approximation is measured by minimizing the expected value of the random error, i.e.,

min_{K ∈ R^{n×m}} E[‖β̂ − β‖²].
Minimum-Variance Estimate

• The proof is interesting! For the i-th row k_i of K, define

g(k_i) := E[(yᵀk_i − β_i)²] = ∫∫ (yᵀk_i − β_i)² f(y, β_i) dy dβ_i.

¦ It is easy to see that

∂g/∂k_{i,j} = ∫∫ 2(yᵀk_i − β_i) y_j f(y, β_i) dy dβ_i = 2E[(yᵀk_i − β_i) y_j].

Setting these partial derivatives to zero yields the optimality condition E[yyᵀ] k_i = E[β_i y] for each i.
¦ More generally, one may allow an affine estimator β̂ := Ky + b.
Equivalent Formulas

• Assume that

E[εεᵀ] = Q ∈ R^{m×m},
E[ββᵀ] = R ∈ R^{n×n},
E[εβᵀ] = 0 ∈ R^{m×n}.

• When β is considered as a random variable, the number m of observations y does not need to be large.
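The heading presumably refers to the two standard forms of the minimum-variance gain, K = RWᵀ(WRWᵀ + Q)⁻¹ and K = (WᵀQ⁻¹W + R⁻¹)⁻¹WᵀQ⁻¹; their equivalence is a standard matrix identity, and the scalar values below are hypothetical. A quick numerical check in the case m = n = 1:

```python
# Scalar check (m = n = 1, hypothetical values) that the "covariance" and
# "information" forms of the minimum-variance gain agree:
#     K = R Wᵀ (W R Wᵀ + Q)⁻¹ = (Wᵀ Q⁻¹ W + R⁻¹)⁻¹ Wᵀ Q⁻¹
W, Q, R = 2.0, 0.5, 3.0
K_cov = R * W / (W * R * W + Q)
K_inf = (1.0 / (W * W / Q + 1.0 / R)) * (W / Q)
print(abs(K_cov - K_inf) < 1e-12)  # → True
```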
¦ Basic Relationships
¦ Open-loop Model
¦ Closed-loop Model
¦ An Ideal Control
¦ An Inverse Problem
¦ Temporal Latency
• Numerical Illustration
Adaptive Optics
• Purpose:
• Challenges:
A Simplified AO System

[Figure: a simplified AO system, with pupil, collimating lens, beamsplitter, deformable mirror (DM), actuators, wavefront sensor (WFS), focusing lens, image-plane detector, and control computer.]
Basic Notation
• Three quantities: the wavefront phase φ, the sensor measurements s, and the actuator commands a.
• Two transformations: the DM response φ̂ = Ha and the WFS response modeled through W (with G = WH).
• Assuming m actuators and a linear response of the actuators to the command, model the DM surface by

φ̂(~x, t) = Σ_{i=1}^{m} a_i(t) r_i(~x),

or, in vector form,

φ̂(t) = Ha(t).
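A toy one-dimensional sketch of this superposition, with hypothetical triangular influence functions r_i centered at integer actuator positions:

```python
# A toy 1-D version of the DM surface model: phi_hat(x) = sum_i a_i * r_i(x).
# The triangular influence functions r_i are hypothetical stand-ins.
def r(i, x):
    # influence function of actuator i, centered at position x = i
    return max(0.0, 1.0 - abs(x - float(i)))

def phi_hat(a, x):
    # DM surface at x: linear superposition of the actuator responses
    return sum(ai * r(i, x) for i, ai in enumerate(a))

a = [0.2, 0.5, 0.1]                 # actuator commands, m = 3
print(round(phi_hat(a, 1.0), 4))    # only r_1 is nonzero at x = 1 → 0.5
```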
• G is used to describe the WFS slope measurement associated with the actuator command a:

ŝ_j(t) = Σ_{i=1}^{m} G_{ji} a_i(t),   G_{ji} := − ∫ d~x ∇W_j(~x) r_i(~x).

¦ In matrix form,

ŝ(t) = Ga(t),

where G = [G_{ji}] ∈ R^{ℓ×m}.

¦ The DM actuators are not capable of producing the exact wavefront phase φ(~x, t) due to their finite number of degrees of freedom. So ŝ = Ga is not an exact measurement.
[Figure: open-loop versus closed-loop configurations. Open loop: the reconstructor maps the sensor measurements s to the actuator command a, producing the corrected phase profile φ̂ = Ha. Closed loop: the sensor measures the residual ∆s = s − Ga, loop compensation updates the command a, and the residual phase profile is ∆φ = φ − Ha.]
What is Available?
Open-loop Model
φ̃ = Eopen s
Closed-loop Model
W H = G. (2)
• Then

s = Wφ + ε
  = W(Ha + ∆φ) + ε
  = WHa + (W∆φ + ε).

It follows that

∆s = W∆φ + ε. (3)

• Can estimate the residual phase error ∆φ(t) using ∆φ̃(t) from the model

∆φ̃ = Eclosed ∆s,

where the closed-loop reconstructor satisfies

Eclosed G = Eclosed (WH) = H.
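A scalar sanity check (m = n = ℓ = 1, with hypothetical values) shows how a reconstructor satisfying Eclosed·G = H recovers the residual phase up to the propagated sensor noise:

```python
# Scalar sanity check (m = n = ℓ = 1, hypothetical values): with G = W·H and
# a reconstructor satisfying Eclosed·G = H, the estimate ∆φ̃ = Eclosed·∆s
# recovers ∆φ up to the propagated sensor noise ε / W.
W, H = 2.0, 0.5
G = W * H
E_closed = H / G            # scalar solution of E_closed * G = H
dphi, eps = 0.3, 0.01       # residual phase error and measurement noise
ds = W * dphi + eps         # ∆s = W ∆φ + ε, as in equation (3)
dphi_est = E_closed * ds
print(round(dphi_est - dphi, 4))  # leftover error ε / W → 0.005
```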
Actuator Control
• An Ideal Control: compute the actuator command update directly from the residual measurement,

∆a = M ∆s. (4)
An Inverse Problem
[Figure: diagram relating the actuator-command space A, the measurement space S, and the phase space Φ through the maps G, M, H, W, and E.]
[Figure: timeline showing the estimate of ∆φ(t) relative to the times t, t + ∆t, and t + 2∆t, illustrating the control latency.]

• A general control recursion:

a^(r+2) = Σ_{k=0}^{p} c_k a^(r+1−k) + Σ_{j=0}^{q} b_j M_j ∆s^(r−j),   r = 0, 1, . . .
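The simplest instance of this recursion (p = q = 0, c_0 = 1, a single scalar channel) is an integrator acting on a measurement that arrives one step late. The sketch below simulates it with hypothetical values and shows the fed-back measurement being driven down to the noise level:

```python
import random

# Scalar simulation (hypothetical values) of the simplest instance of the
# control recursion, p = q = 0 and c0 = 1:
#     a(r+2) = a(r+1) + b0 * M * ∆s(r),
# i.e. an integrator acting on a measurement that is one step late.
random.seed(1)
W, H = 2.0, 0.5
G = W * H
M = 1.0 / G               # scalar reconstructor, M * G = 1
b0 = 0.5                  # loop gain
phi = 0.8                 # true (frozen) wavefront phase
sigma = 0.001             # WFS measurement noise level

a = [0.0, 0.0]            # a(0), a(1)
for r in range(200):
    ds = W * phi - G * a[r] + random.gauss(0.0, sigma)  # ∆s(r) = s − G a(r)
    a.append(a[r + 1] + b0 * M * ds)

# After convergence G·a ≈ W·phi: the fed-back measurement is nullified
# down to the noise level.
print(abs(G * a[-1] - W * phi) < 0.02)  # → True
```

With this gain the error obeys e(r+2) = e(r+1) − 0.5·e(r), whose characteristic roots have modulus 1/√2 < 1, so the loop is stable despite the one-step latency.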
• Suppose
• Then
¦ The WFS feedback measurement ∆s^(n) is eventually nullified by the actuators, i.e., lim_{n→∞} E[∆s^(n)] = 0.
¦ Since ∆s = W∆φ + ε, the expected residual phase error is inversely related to the expected WFS measurement noise ε via

0 = W lim_{n→∞} E[∆φ^(n)] + E[ε].

¦ Even if E[ε] = 0, E[‖lim_{n→∞} ∆φ^(n)‖²] will not necessarily be small, because W has a non-trivial null space.
• Each control a(r+j) is a random variable =⇒ The control scheme is a stochastic process.
• Each control A(r+j) is also a realization of the corresponding random variable =⇒ The
control scheme is a deterministic iteration.
• Define
Numerical Simulation
• Test data:

    surface positions        n = 5
    number of actuators      m = 4
    number of subapertures   ℓ = 3
    size of random samples   z = 2500

    H = rand(n, m)
    W = rand(ℓ, n)
    G = W H
    Lφ = rand(n, n)
    Lε = diag(rand(ℓ, 1))
    µφ = zeros(n, 1)
    µε = zeros(ℓ, 1)
• Random samples:
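A Python rendering of this setup, assuming (as the names suggest, though the notes do not say so explicitly) that Lφ and Lε are factors used to color unit-variance Gaussian samples, i.e., φ = µφ + Lφ g with g ~ N(0, I):

```python
import random

# Sketch of the MATLAB-style test data above in plain Python, assuming
# Lphi and Leps are coloring factors for unit-variance Gaussian samples.
random.seed(0)
n, m, ell, z = 5, 4, 3, 2500

def rand_mat(rows, cols):
    # analogue of rand(rows, cols): uniform entries in [0, 1)
    return [[random.random() for _ in range(cols)] for _ in range(rows)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

H = rand_mat(n, m)
W = rand_mat(ell, n)
G = matmul(W, H)                       # G = W H
Lphi = rand_mat(n, n)
Leps = [[random.random() if i == j else 0.0 for j in range(ell)]
        for i in range(ell)]           # diag(rand(ell, 1))
mu_phi = [0.0] * n                     # zeros(n, 1)
mu_eps = [0.0] * ell                   # zeros(ell, 1)

# Draw z samples: phi = mu_phi + Lphi g and eps = mu_eps + Leps g.
phis = [[mu_phi[i] + x for i, x in
         enumerate(matvec(Lphi, [random.gauss(0.0, 1.0) for _ in range(n)]))]
        for _ in range(z)]
epss = [[mu_eps[i] + Leps[i][i] * random.gauss(0.0, 1.0) for i in range(ell)]
        for _ in range(z)]
print(len(phis), len(phis[0]), len(epss))  # → 2500 5 2500
```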