© 2011 by Taejeong Kim
expectation of a k×l random matrix Z:

Z = ( Z_11 ⋯ Z_1l          EZ = ( EZ_11 ⋯ EZ_1l
      ⋮        ⋮                  ⋮         ⋮
      Z_k1 ⋯ Z_kl ),             EZ_k1 ⋯ EZ_kl )
moments
correlation matrix of X:

R_X = EXX^t = ( EX_1²    ⋯  EX_1X_k
                ⋮        ⋱      ⋮
                EX_kX_1  ⋯  EX_k² )
covariance matrix of X:

C_X = E(X − EX)(X − EX)^t = ( var(X_1)      ⋯  cov(X_1, X_k)
                              ⋮             ⋱       ⋮
                              cov(X_k, X_1) ⋯  var(X_k) )

uncorrelated X: C_X is diagonal.
iid X: C_X = σ²I, I: identity matrix
Other possibilities: uncorrelated (independent) between subvectors that are each correlated (dependent) within.
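The sample versions of R_X and C_X can be checked numerically; a minimal NumPy sketch (the 3-d mixing matrix below is an arbitrary choice to produce correlated components):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10000, 3
# hypothetical example: mix iid components to get correlated ones
M = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.0, 0.0, 1.0]])
X = rng.standard_normal((n, k)) @ M.T       # rows are samples of X

R = X.T @ X / n                             # sample R_X = E[X X^t]
m = X.mean(axis=0)                          # sample EX
C = (X - m).T @ (X - m) / n                 # sample C_X

# C_X is symmetric and non-negative definite
print(np.allclose(C, C.T), np.linalg.eigvalsh(C).min() >= -1e-10)
```

Note the exact identity R = C + m m^t, which the sample moments satisfy as well.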
definiteness of a matrix
non-negative definite (positive semidefinite) k×k matrix A: symmetric, and for every vector a, a^tAa ≥ 0.
positive definite k×k matrix A: symmetric, and for every vector a ≠ 0, a^tAa > 0.
For a symmetric k×k matrix A with eigenvalues λ_i and eigenvectors q_i (Aq_i = λ_i q_i), the eigenvalues are real and the eigenvectors can be chosen orthonormal:

q_i^t q_j = q_i1q_j1 + q_i2q_j2 + ⋯ + q_ikq_jk = δ_ij = { 1, i = j
                                                        { 0, i ≠ j

proof (real eigenvalues):
In this proof, to be more general, eigenvectors are considered complex, though they can be chosen to be all real.
λ̄_i ‖q_i‖² = λ̄_i q_i^{*t} q_i = (λ_i q_i)^{*t} q_i = (Aq_i)^{*t} q_i = q_i^{*t} A^t q_i
           = q_i^{*t} A q_i = q_i^{*t} λ_i q_i = λ_i q_i^{*t} q_i = λ_i ‖q_i‖²,
so λ̄_i = λ_i, ie, λ_i is real. (A real and symmetric: A^{*t} = A^t = A.)
q_i = (q_i1, q_i2, ..., q_ik)^t,  Q = (q_1, ..., q_k) = ( q_11 q_21 ⋯ q_k1
                                                          q_12 q_22 ⋯ q_k2
                                                          ⋮            ⋮
                                                          q_1k q_2k ⋯ q_kk )

Then Q is an orthogonal (unitary) matrix: QQ^t = Q^tQ = I,
ie, Q^t = Q^{−1}.
proof:
Q^tQ = ( q_1^t; ⋯; q_k^t )(q_1, ..., q_k)

     = ( q_1^t q_1  q_1^t q_2  ⋯  q_1^t q_k
         q_2^t q_1  q_2^t q_2  ⋯  q_2^t q_k
         ⋮                         ⋮
         q_k^t q_1  q_k^t q_2  ⋯  q_k^t q_k ) = I
Q^tAQ = ( q_1^t; ⋯; q_k^t )(λ_1 q_1, ..., λ_k q_k)

      = ( λ_1 q_1^t q_1  λ_2 q_1^t q_2  ⋯  λ_k q_1^t q_k
          λ_1 q_2^t q_1  λ_2 q_2^t q_2  ⋯  λ_k q_2^t q_k
          ⋮                                 ⋮
          λ_1 q_k^t q_1  λ_2 q_k^t q_2  ⋯  λ_k q_k^t q_k )

      = ( λ_1  0   ⋯  0
          0    λ_2 ⋯  0
          ⋮           ⋮
          0    0   ⋯  λ_k ) = Λ
notation:
x = (x_1, ..., x_k)^t;  dx = dx_1 ⋯ dx_k
‖X‖ = ( Σ_{i=1}^k X_i² )^{1/2}  [Euclidean norm]

jchf: Φ_X(u) = E e^{j(u_1X_1 + ⋯ + u_kX_k)} = E e^{ju^tX}
= Σ_x e^{ju^tx} p_X(x) or ∫ e^{ju^tx} f_X(x) dx
Φ_X(u) = Π_{i=1}^k Φ_{X_i}(u_i) if independent.
transform of a random vector: Y = G(X), ie,
(Y_1, ..., Y_l)^t = (g_1(X_1, ..., X_k), ..., g_l(X_1, ..., X_k))^t
If G is continuously differentiable and invertible (l = k),
X = H(Y), H = G^{−1}:
(X_1, ..., X_k)^t = (h_1(Y_1, ..., Y_k), ..., h_k(Y_1, ..., Y_k))^t

Δy = {y: y_1 < Y_1 ≤ y_1 + Δ_1, ..., y_k < Y_k ≤ y_k + Δ_k}
volume of Δy: |Δy| = Δ_1 ⋯ Δ_k
Δx = H(Δy), Δy = G(Δx), volume of Δx: |Δx|
P(X ∈ Δx) = P(Y ∈ Δy)
f_X(x)|Δx| ≈ f_Y(y)|Δy|, where y = G(x),
lim_{|Δy|→0} |Δx|/|Δy| = |det(dH(y))|

Jacobian of H: dH(y) = ( ∂h_1/∂y_1  ⋯  ∂h_1/∂y_k
                         ⋮          ⋱  ⋮
                         ∂h_k/∂y_1  ⋯  ∂h_k/∂y_k )

f_Y(y) = f_X(x)|det(dH(y))| = f_X(x) / |det(dG(x))|,  x = H(y)
example: X, Y iid N(0, 1); x = r cos θ, y = r sin θ, ie, H: (r, θ) ↦ (x, y)

dH(r, θ) = ( ∂x/∂r  ∂x/∂θ     ( cos θ  −r sin θ
             ∂y/∂r  ∂y/∂θ ) =   sin θ   r cos θ ),  det(dH(r, θ)) = r

f_{RΘ}(r, θ) = r f_{XY}(r cos θ, r sin θ)  (for r ≥ 0 and −π < θ ≤ π)
= r ⋅ (1/√(2π)) e^{−r² cos²θ/2} ⋅ (1/√(2π)) e^{−r² sin²θ/2} = (r/(2π)) e^{−r²/2}

f_R(r) = { r e^{−r²/2}, r ≥ 0       f_Θ(θ) = { 1/(2π), −π < θ ≤ π
         { 0,           else                 { 0,      else
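The Rayleigh/uniform result above can be checked by simulation; a sketch, assuming nothing beyond iid N(0,1) samples:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200000
x = rng.standard_normal(n)                  # X ~ N(0,1)
y = rng.standard_normal(n)                  # Y ~ N(0,1), independent
r = np.hypot(x, y)                          # R = sqrt(X^2 + Y^2)
theta = np.arctan2(y, x)                    # Theta in (-pi, pi]

# compare a histogram of R with f_R(r) = r exp(-r^2/2)
edges = np.linspace(0.0, 4.0, 41)
hist, _ = np.histogram(r, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
f_R = centers * np.exp(-centers**2 / 2)
print(np.abs(hist - f_R).max())             # small sampling error
```

The histogram of theta can be compared with the constant 1/(2π) in the same way.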
affine transform Y = G(X) = AX + b with invertible A:
Y_i = g_i(X) = Σ_{j=1}^k A_ij X_j + b_i,  ∂g_i/∂x_j = A_ij,  i = 1, ..., k
dG(x) = A;  dH(y) = A^{−1}
f_Y(y) = f_X(A^{−1}(y − b)) / |det A|
Φ_Y(v) = E e^{jv^t(AX+b)} = e^{jv^tb} E e^{j(A^tv)^tX} = e^{jv^tb} Φ_X(A^tv)
Estimation
minimum mean-squared-error (mmse) estimation of X:
Given the observation Y and some information on the jpdf,
find X̂ = g(Y) minimizing E‖X − X̂‖² = Σ_{i=1}^k E(X_i − X̂_i)², where X: k-d, Y: l-d.
linear estimator: X̂_i = Σ_{j=1}^l a_ij Y_j
∂E(X_i − X̂_i)²/∂a_ij = 0 ⟹ E(X_i − X̂_i)(−Y_j) = 0, ie,
E(X_i − X̂_i)Y_j = 0, j = 1, ..., l
note: differentiation and expectation are usually interchangeable.
orthogonality principle:
E(X_i − X̂_i)Y_j = 0, ie, EX_iY_j = EX̂_iY_j
[figure: the error X_i − X̂_i is orthogonal to Y_j]

X̂_i = Σ_{j=1}^l a_ij Y_j = a_iY, where a_i is the i-th row of A.
For j = 1, ..., l,
EX_iY_j = E a_iY Y_j = a_i(EY_1Y_j, ..., EY_lY_j)^t  [scalar, 1-d]
EX_iY^t = a_iR_Y  [row vector, l-d]
Repeating for i = 1, ..., k,  AR_Y = R_XY.  [matrix, k×l]
A = R_XY R_Y^{−1} and X̂ = R_XY R_Y^{−1} Y if R_Y is invertible.
For 1-d, it becomes X̂ = (EXY/EY²) Y, solving min_a E(X − aY)².
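A numerical sketch of A = R_XY R_Y^{−1} built from sample moments; the observation model below is a made-up example:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000
# hypothetical model: Y is a noisy observation of a 2-d X
X = rng.standard_normal((n, 2)) @ np.array([[1.0, 0.7],
                                            [0.0, 1.0]]).T
Y = X + 0.5 * rng.standard_normal((n, 2))

RY = Y.T @ Y / n                  # sample R_Y  (l x l)
RXY = X.T @ Y / n                 # sample R_XY (k x l)
A = RXY @ np.linalg.inv(RY)       # solves A R_Y = R_XY
Xhat = Y @ A.T                    # linear mmse estimate per sample

# orthogonality principle: error uncorrelated with every Y_j
err_corr = (X - Xhat).T @ Y / n
print(np.abs(err_corr).max())     # ~ 0 up to float error
```

The orthogonality holds exactly here because A is computed from the same sample moments used in the check.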
(general) mmse estimator: X̂ = g(Y) = E(X|Y)
proof: Minimize E‖X − g(Y)‖² = E E(‖X − g(Y)‖² | Y)
= ∫ E(‖X − g(y)‖² | Y = y) f_Y(y) dy.
The integrand is minimized for each y by g(y) = E(X|Y = y).
alternative proof:
orthogonality principle for functions of Y:
if E h(Y)^t(X − g(Y)) = 0 for every function h of Y, then g(Y) is the mmse estimator.
[figure: X − g(Y) is orthogonal to h(Y) and g(Y)]

proof of orthogonality principle for functions:
E‖X − f(Y)‖² = E‖X − g(Y) + g(Y) − f(Y)‖²
= E‖X − g(Y)‖² + E‖(g − f)(Y)‖² + 2E(g − f)(Y)^t(X − g(Y))
≥ E‖X − g(Y)‖² if orthogonality holds.
Gaussian random vector X ~ N(m, C):

f_X(x) = (1/((2π)^{k/2} √(det C))) exp( −(1/2)(x − m)^t C^{−1}(x − m) )

jchf [def]: Φ_X(u) = exp( jm^tu − u^tCu/2 )
A Gaussian random vector is fully characterized by its 1-st
and 2-nd moments, ie, by m and C.
For Y = AX + b with X ~ N(m_X, C_X):
Φ_Y(v) = e^{jv^tb} Φ_X(A^tv) = e^{jv^tb} exp( j(A^tv)^t m_X − (A^tv)^t C_X (A^tv)/2 )
= exp( jv^t(Am_X + b) − v^t(AC_XA^t)v/2 ),
ie, Y ~ N(Am_X + b, AC_XA^t).
Any linear or affine transformation of a Gaussian random
vector is Gaussian.
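This can be verified empirically: for Y = AX + b, the sample mean and covariance should approach A m_X + b and A C_X A^t (the numbers below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100000
mX = np.array([1.0, -1.0])
CX = np.array([[2.0, 0.6],
               [0.6, 1.0]])
X = rng.multivariate_normal(mX, CX, size=n)

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
b = np.array([0.5, 0.0])
Y = X @ A.T + b                      # affine transform of a Gaussian

print(Y.mean(axis=0), A @ mX + b)    # sample mean vs A m_X + b
print(np.cov(Y.T, bias=True))        # compare with A C_X A^t
```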
Y ~ N(0, 1);  X_1 = { Y, Y ≥ 0     X_2 = { 0, Y ≥ 0
                    { 0, Y < 0 ;         { Y, Y < 0
X_1 + X_2 = Y, but neither X_1 nor X_2 is Gaussian.
[figures: f_{X_1}(x), f_{X_2}(x)]
If the components of a Gaussian random vector are uncorrelated, they are independent.
proof (sketch): uncorrelated ⟹ C_X is diagonal ⟹ C_X^{−1} is diagonal ⟹ f_X(x) = Π_i f_{X_i}(x_i) [See 2-d case]
alternative proof:
Φ_X(u) = exp( jm^tu − u^tC_Xu/2 ) = exp( j Σ_{i=1}^k m_iu_i − (1/2) Σ_{i=1}^k σ_i²u_i² )
= Π_{i=1}^k exp( jm_iu_i − σ_i²u_i²/2 ) = Π_{i=1}^k Φ_{X_i}(u_i)
example (Gaussian marginals without joint Gaussianity): let X ~ N(0, 1) and W = ±1 with probability 1/2 each, independent of X; Y = WX. Then
F_Y(y) = (1/2)P(X ≤ y | W = 1) + (1/2)P(−X ≤ y | W = −1)
= (1/2)P(X ≤ y) + (1/2)P(X ≤ y) = P(X ≤ y) = F_X(y),
using the symmetry of N(0, 1), so Y ~ N(0, 1).
Let Y = C^{1/2}X + m, where X has k iid N(0, 1) components.
Then Y ~ N(m, C^{1/2}I(C^{1/2})^t) = N(m, C) such that
f_Y(y) = f_X(C^{−1/2}(y − m)) / |det C^{1/2}|
= (1/((2π)^{k/2} √(det C))) exp( −(1/2)(y − m)^t C^{−1}(y − m) )
We can also use QΛ^{1/2} in place of C^{1/2} = QΛ^{1/2}Q^t,
ie, Y = QΛ^{1/2}X + m.
Therefore to generate a Gaussian random vector with m and C, we proceed as follows:
k iid unif(0, 1) → k iid N(0, 1) by the transform (inverse of the cdf) → N(m, C) by the affine transform (above)
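A sketch of this recipe; the Box–Muller transform stands in for the inverse-cdf step (the Gaussian inverse cdf has no closed form), and m and C are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100000
m = np.array([1.0, 2.0])
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])

# step 1: iid unif(0,1) -> iid N(0,1)
# (Box-Muller here; the slide's inverse-cdf transform works equally well)
u1, u2 = rng.random(n), rng.random(n)
rho = np.sqrt(-2.0 * np.log(1.0 - u1))      # 1 - u1 avoids log(0)
X = np.column_stack([rho * np.cos(2 * np.pi * u2),
                     rho * np.sin(2 * np.pi * u2)])   # ~ N(0, I)

# step 2: affine transform with C^{1/2} = Q Lambda^{1/2} Q^t
lam, Q = np.linalg.eigh(C)                  # C = Q diag(lam) Q^t
Chalf = Q @ np.diag(np.sqrt(lam)) @ Q.T
Y = X @ Chalf.T + m                         # ~ N(m, C)

print(Y.mean(axis=0), np.cov(Y.T, bias=True))
```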
( X − AY )   ( I  −A )( X )
(    Y   ) = ( 0   I )( Y ) : jointly Gaussian
f_{X|Y}(x|y) = (1/((2π)^{k/2} √(det C_{X|Y}))) exp( −(1/2)(x − m_{X|y})^t C_{X|Y}^{−1}(x − m_{X|y}) ),
where m_{X|y} = E(X|Y = y) = A(y − m_Y) + m_X
and C_{X|Y} = C_X − AC_{YX}, in which A satisfies AC_Y = C_{XY}.
The vector conditional pdf is in the Gaussian jpdf form.
Note that CX|Y does not depend on y.
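A 1-d numerical instance of these formulas; all numbers are made up:

```python
# scalar case: A C_Y = C_XY gives A = C_XY / C_Y
mX, mY = 0.0, 0.0
CX, CY, CXY = 2.0, 1.0, 0.8

A = CXY / CY
y = 1.5                          # observed value of Y
m_cond = A * (y - mY) + mX       # E(X | Y = y)
C_cond = CX - A * CXY            # C_{X|Y}; note: no dependence on y
print(m_cond, C_cond)            # 1.2 and 1.36, up to float rounding
```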
q_i = (q_i1, q_i2, ..., q_ik)^t,

A = ( q_1^t     ( q_11 q_12 ⋯ q_1k
      q_2^t  =    q_21 q_22 ⋯ q_2k
      ⋮           ⋮            ⋮
      q_k^t )     q_k1 q_k2 ⋯ q_kk )   [A = Q^t]

q_i^t q_j = q_i1q_j1 + q_i2q_j2 + ⋯ + q_ikq_jk = δ_ij = { 1, i = j
                                                        { 0, i ≠ j
AA^t = A^tA = I: A is orthogonal or unitary.
Y = AX,  Y_i = q_i^tX = Σ_j q_ij X_j: transform
X = A^tY = Σ_i Y_i q_i: expansion
C_XA^t = C_X(q_1, ..., q_k) = (C_Xq_1, ..., C_Xq_k) = (λ_1q_1, ..., λ_kq_k)

AC_XA^t = ( q_1^t                                ( λ_1  0   ⋯  0
            q_2^t  (λ_1q_1, ..., λ_kq_k) =         0    λ_2 ⋯  0
            ⋮                                      ⋮           ⋮
            q_k^t )                                0    0   ⋯  λ_k )

Y has uncorrelated components. (Assume EX = 0.)
EY_iY_j = E(q_i^tX)(X^tq_j) = q_i^t(EXX^t)q_j = q_i^tC_Xq_j = q_i^t λ_j q_j = { λ_i, i = j
                                                                              { 0,   i ≠ j
If EX = 0, Y has orthogonal components.
If X is Gaussian, Y is a Gaussian random vector with independent components.
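A sketch of the whole diagonalizing transform on sampled data; the 2-d mixing matrix is an arbitrary choice to make the components correlated:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100000
# zero-mean X with correlated components
X = rng.standard_normal((n, 2)) @ np.array([[1.0, 0.9],
                                            [0.0, 0.5]]).T

CX = X.T @ X / n                 # sample covariance (EX = 0)
lam, Q = np.linalg.eigh(CX)      # CX q_i = lambda_i q_i
A = Q.T                          # rows are q_i^t  [A = Q^t]

Y = X @ A.T                      # Y = AX per sample: transform
CY = Y.T @ Y / n                 # = A CX A^t, diagonal
off_diag = CY - np.diag(np.diag(CY))
print(np.abs(off_diag).max())    # ~ 0: components uncorrelated

X_back = Y @ A                   # X = A^t Y: expansion
print(np.allclose(X_back, X))
```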
transform code
[figure]
[block diagram: Y_1, ..., Y_L quantized by Q_1, ..., Q_L, binary encoder → binary decoder, reconstructing Ŷ_1, ..., Ŷ_L]

[figure: (X_1, X_2) axes with rotated (Y_1, Y_2) axes]
T = (1/√2) ( 1   1
             1  −1 )
DCT
[scatter plots in the (Y_1, Y_2) plane]