© 2011 by Taejeong Kim
Gaussian process
Gaussian process: For any choice of t1, …, tk,
X = (X_{t1}, …, X_{tk})^T is a Gaussian random vector.
A Gaussian random process is fully characterized by its 1st
and 2nd moments, i.e., by m_X(t) and R_X(t, s).
Any linear or affine transformation of a Gaussian random
process is Gaussian, e.g., integration, differentiation, and stable linear filtering.
If samples of a Gaussian random process are uncorrelated,
they are independent.
If a Gaussian random process is wss, it is sss.
A stationary Gaussian process with an arbitrary acf or psd can
be obtained by filtering white Gaussian noise.
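The last property can be illustrated numerically in discrete time: pass white Gaussian noise through a chosen filter and check that the output psd follows |H(f)|². A minimal numpy sketch (the filter shape and all names here are illustrative assumptions, not taken from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 4096, 800

# choose a target shape |H(f)|^2 for the psd (an arbitrary low-pass bump)
f = np.fft.rfftfreq(n)                       # normalized frequencies in [0, 1/2]
H = 1.0 / np.sqrt(1.0 + (f / 0.1) ** 4)      # magnitude response of the shaping filter

# filter white Gaussian noise in the frequency domain; a linear transform of a
# Gaussian process is Gaussian, so the output is a stationary Gaussian process
psd = np.zeros_like(f)
for _ in range(reps):
    w = rng.standard_normal(n)               # white Gaussian noise, S_W(f) = 1
    x = np.fft.irfft(H * np.fft.rfft(w), n)  # shaped Gaussian process
    psd += np.abs(np.fft.rfft(x)) ** 2 / n   # periodogram
psd /= reps                                  # averaged estimate of S_X(f)
```

Averaged over many realizations, `psd` tracks `H**2`, i.e., S_X(f) = |H(f)|² S_W(f).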
White noise
white noise Xt: wss process with S_X(f) = N0/2 watts/Hz
m_X(t) = 0,  R_X(τ) = (N0/2) δ(τ)
For τ ≠ 0, X_t and X_{t+τ} are uncorrelated.
P_X = E X_t² = ∫ (N0/2) df = ∞ : infinite average power
A physically motivated (thermal-noise) psd that is flat at low frequencies:
S_X(f) = (N0/2) · (|f|/f0) / (exp(|f|/f0) − 1)
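In discrete time, white noise with level S_X(f) = N0/2 over the sampled band has R_X[k] = (N0/2)δ[k], which is easy to check empirically. A sketch with assumed parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
N0 = 2.0                        # psd level: S_X(f) = N0/2 = 1
w = rng.normal(scale=np.sqrt(N0 / 2), size=100_000)

# sample acf: R_X[k] is about N0/2 at k = 0 and about 0 elsewhere,
# i.e., distinct samples are uncorrelated
acf = [np.mean(w[k:] * w[:len(w) - k]) for k in (0, 1, 5)]
```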
Matched filter
We discuss continuous-time cases; discrete-time cases are similar.
Consider detecting a deterministic signal v(t) in wss random noise X_t with psd S_X(f).
v(t) + X_t → [h(t)] → w(t) + Y_t, sampled at t = t0 and compared with a threshold to decide H1 or H0
w(t) + Y_t = ∫ h(t − τ)(v(τ) + X_τ) dτ
w(t) = ∫ h(t − τ) v(τ) dτ
Y_t = ∫ h(t − τ) X_τ dτ
goal: Find h(t) that maximizes the signal-to-noise ratio (SNR)
at t0: R = |w(t0)|² / E Y_{t0}².
w(t0) = ∫ W(f) e^{j2πf t0} df = ∫ H(f) V(f) e^{j2πf t0} df
= ∫ [H(f) √S_X(f)] [V(f) e^{j2πf t0} / √S_X(f)] df
so, by the Schwarz inequality,
|w(t0)|² ≤ ∫ |H(f)|² S_X(f) df · ∫ (|V(f)|² / S_X(f)) df
E Y_{t0}² = ∫ S_Y(f) df = ∫ |H(f)|² S_X(f) df
R = |w(t0)|² / E Y_{t0}² ≤ ∫ (|V(f)|² / S_X(f)) df
Equality holds if and only if H(f) √S_X(f) = a V*(f) e^{−j2πf t0} / √S_X(f),
or H(f) = a V*(f) e^{−j2πf t0} / S_X(f), for some constant a
S_Y(f) = |H(f)|² S_X(f) = |a|² |V(f)|² / S_X(f)
R_Y(τ) = ∫ (|a|² |V(f)|² / S_X(f)) e^{j2πf τ} df
w(t) = ∫ W(f) e^{j2πf t} df = ∫ H(f) V(f) e^{j2πf t} df
= ∫ a (V*(f) e^{−j2πf t0} / S_X(f)) V(f) e^{j2πf t} df
= a ∫ (|V(f)|² / S_X(f)) e^{j2πf (t−t0)} df = a^{−1} R_Y(t − t0)  (taking a real)
w(t0) = a^{−1} R_Y(0) = a^{−1} E Y_t² : maximum at t0
R = E Y_t² / |a|²
For white noise, S_X(f) = N0/2:
H(f) = a V*(f) e^{−j2πf t0} / S_X(f) = (2a/N0) V*(f) e^{−j2πf t0}
h(t) = (2a/N0) v(t0 − t) (for real v): a time-reversed, shifted copy of v(t)
(figure: v(t) and the matched filter impulse response h(t), its mirror image)
w(t0) + Y_{t0} = ∫ h(t0 − τ)(v(τ) + X_τ) dτ
= (2a/N0) ∫ v(τ)(v(τ) + X_τ) dτ : correlation
= (2a/N0) ( ∫ v²(τ) dτ + ∫ v(τ) X_τ dτ )
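A discrete-time sanity check of the white-noise case: with h a time-reversed copy of v, the SNR at the sampling instant equals Σv²/σ², and no other filter does better. The pulse and noise variance below are made-up illustrative values:

```python
import numpy as np

# known pulse v[n] and white-noise variance sigma2 (illustrative values)
v = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])
sigma2 = 4.0

def snr_at_peak(h):
    """SNR at t0 = len(v) - 1 when the input is v plus white noise."""
    num = np.dot(h[::-1], v) ** 2   # |w(t0)|^2: filter output due to v alone
    den = sigma2 * np.dot(h, h)     # E Y_{t0}^2: output noise power
    return num / den

h_matched = v[::-1]                 # matched filter: time-reversed copy of v
h_boxcar = np.ones_like(v)          # an arbitrary competing filter

snr_best = np.dot(v, v) / sigma2    # predicted maximum: sum v^2 / sigma^2
```

The matched filter attains `snr_best` exactly, while the boxcar falls short.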
Wiener filter
Consider estimating a random signal Vt from a random observation signal Ut. They are assumed jointly wss with zero mean and
known psds and cross psd. We discuss continuous-time cases.
U_t → [h(t)] → V̂_t
V̂_t = ∫ h(t − τ) U_τ dτ = ∫ h(α) U_{t−α} dα : optimal estimate
Ṽ_t = ∫ h̃(t − τ) U_τ dτ = ∫ h̃(α) U_{t−α} dα : another estimate
claim: E|V_t − V̂_t|² ≤ E|V_t − Ṽ_t|² for any h̃(t).
(figure: V_t, its projection V̂_t onto the subspace spanned by {U_s}, and the error V_t − V̂_t orthogonal to that subspace)
orthogonality principle: the optimal V̂_t makes the error V_t − V̂_t orthogonal to the subspace of estimates.
Since Ṽ_t − V̂_t = ∫ h̃(α) U_{t−α} dα − ∫ h(α) U_{t−α} dα
= ∫ (h̃(α) − h(α)) U_{t−α} dα,
it is in the subspace.
E(V_t − V̂_t)(Ṽ_t − V̂_t) = 0 by orthogonality.
optimal filter, Wiener filter:
0 = E (V_t − V̂_t) ∫ h̃(α) U_{t−α} dα
= E ∫ h̃(α)(V_t − V̂_t) U_{t−α} dα
= ∫ h̃(α)(E V_t U_{t−α} − E V̂_t U_{t−α}) dα
= ∫ h̃(α)(R_{VU}(α) − R_{V̂U}(α)) dα for every h̃
⟹ R_{VU}(α) = R_{V̂U}(α) for all α
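Taking Fourier transforms of the condition R_{VU} = R_{V̂U} gives the noncausal Wiener filter H(f) = S_{VU}(f)/S_U(f); for U = V + N with independent noise this is S_V/(S_V + S_N). A tiny numeric sketch with flat (white) spectra, all parameter values assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096

# U_t = V_t + N_t with V, N independent, zero mean, flat spectra:
# S_VU = S_V and S_U = S_V + S_N, so H(f) = S_V / (S_V + S_N) (a constant here)
sv, sn = 4.0, 1.0
H = sv / (sv + sn)

v = rng.normal(scale=np.sqrt(sv), size=n)
u = v + rng.normal(scale=np.sqrt(sn), size=n)
v_hat = H * u                           # Wiener estimate

mse_wiener = np.mean((v - v_hat) ** 2)  # about sv*sn/(sv+sn) = 0.8
mse_raw = np.mean((v - u) ** 2)         # about sn = 1.0
```

The Wiener gain shrinks the noisy observation toward zero and beats using u directly.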
Poisson process
Poisson process N_t: continuous-time counting process with
P(N_t = k) = p_{N_t}(k) = ((λt)^k / k!) e^{−λt}
for some λ > 0, t ≥ 0.
As a limit of binomial counting: (1 + λt(z − 1)/n)^n → e^{λt(z−1)}, the pgf of Poi(λt), as n → ∞.
A Poisson process is also defined for t ≥ 0 using increments:
1. N_0 = 0.
2. For s < t, N_t − N_s is a Poisson rv with mean λ(t − s).
3. For t1 < t2 < ⋯ < tn, the increments
N_{t2} − N_{t1}, N_{t3} − N_{t2}, …, N_{tn} − N_{tn−1} are independent
: independent increment process
joint pmf via independent increments:
P(N_{t1} = k1, …, N_{tn} = kn)
= ((λt1)^{k1−0} / (k1 − 0)!) e^{−λt1} · ((λ(t2 − t1))^{k2−k1} / (k2 − k1)!) e^{−λ(t2−t1)}
⋯ ((λ(tn − tn−1))^{kn−kn−1} / (kn − kn−1)!) e^{−λ(tn−tn−1)}
arrival time T_k of the k-th arrival:
P(T_k > t) = P(N_t ≤ k − 1) = Σ_{i=0}^{k−1} ((λt)^i / i!) e^{−λt}
f_{T_k}(t) = −(d/dt)(1 − F_{T_k}(t)) = −(d/dt) Σ_{i=0}^{k−1} (λt)^i e^{−λt} / i!
= −Σ_{i=0}^{k−1} λ ( i(λt)^{i−1}/i! − (λt)^i/i! ) e^{−λt}   (the sum telescopes)
= λ(λt)^{k−1} e^{−λt} / (k−1)!,  t ≥ 0 : Erl(k, λ)
inter-arrival time:
X1, X2, X3, …, where X_k = T_k − T_{k−1}
(figure: timeline with T_{k−1} = t and t + x marked)
P(X_k ≤ x) = 1 − P(X_k > x)
= 1 − P(X_k > x | X1 = x1, …, X_{k−1} = x_{k−1})
= 1 − P(N_{t+x} − N_t = 0 | X1 = x1, …, X_{k−1} = x_{k−1}),
where t = T_{k−1} = x1 + ⋯ + x_{k−1}: (k−1)th arrival time
= 1 − P(N_{t+x} − N_t = 0) = 1 − ((λx)^0 / 0!) e^{−λx}, x > 0  [independent increments]
= 1 − e^{−λx}, x > 0;  0, x ≤ 0
⟹ X_k ~ exp(λ)
A counting process with iid exp() interarrival times is a Poisson process of rate .
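This characterization gives the standard way to simulate a Poisson process: draw iid exp(λ) inter-arrival times and count arrivals. The sketch below (parameter values assumed) checks E N_t = var N_t = λt:

```python
import numpy as np

rng = np.random.default_rng(4)
lam, t, trials = 3.0, 5.0, 20_000

counts = np.empty(trials, dtype=int)
for i in range(trials):
    # iid exp(lam) inter-arrival times; arrival times are their cumulative sums
    gaps = rng.exponential(1 / lam, size=120)   # 120 >> lam*t = 15 expected arrivals
    arrivals = np.cumsum(gaps)
    counts[i] = np.searchsorted(arrivals, t)    # N_t: arrivals in [0, t]

mean_nt = counts.mean()   # about lam * t = 15
var_nt = counts.var()     # Poisson: also about lam * t = 15
```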
(figure: sample paths of the counting processes M_t and N_t with inter-arrival times X_k)
Wiener process
A Wiener process W_t, also called Brownian motion, describes
the motion of a highly excited particle in a fluid, viewed in one
coordinate, that does not drift off in one direction.
Wiener process for t ≥ 0:
1. W_0 = 0.
2. For s < t, W_t − W_s is a Gaussian random variable with
mean zero and variance σ²(t − s).
3. For t1 < t2 < ⋯ < tk, the increments W_{t2} − W_{t1}, W_{t3} − W_{t2},
…, W_{tk} − W_{tk−1} are independent: independent increment
process.
4. Each sample path is a continuous function of t.
Formally, W_t = ∫_0^t X_τ dτ, where X_t is a white Gaussian noise.
scaled random walk from iid steps X_i with S_n = Σ_{i=1}^n X_i:
W_t^{(n)} := (1/√n) S_{⌊nt⌋},
i.e., steps of size 1/√n taken every 1/n time units
(figure: staircase sample path of W_t^{(n)})
As n → ∞,
1. The power of the process is maintained.
2. By the central limit theorem, W_t^{(n)} converges in distribution
to a Gaussian process.
3. As the random walk is an independent increment process, so
is its limit process.
4. Hence W_t^{(n)} converges to a Wiener process.
If the random walk is replaced by a binomial counting process, the limit process is a drifting Wiener process.
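A quick check of the scaling: at t = 1, W_1^{(n)} = S_n/√n with ±1 steps should be approximately N(0, 1). A sketch with assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 2000, 20_000

# S_n = sum of n iid +-1 steps, simulated via a binomial count of +1 steps
heads = rng.binomial(n, 0.5, size=trials)
w1 = (2 * heads - n) / np.sqrt(n)            # W_1^(n) = S_n / sqrt(n)

mean_w1 = w1.mean()                          # about 0
var_w1 = w1.var()                            # about 1: power is maintained
frac_in_1sigma = np.mean(np.abs(w1) < 1.0)   # about 0.683 by the CLT
```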
Markov process
We discuss jointly continuous cases; jointly discrete cases are similar.
Markov property:
For any t1 < t2 < ⋯ < tn and x1, …, xn,
f_{X_{tn}|X_{t1} ⋯ X_{tn−1}}(xn|x1, …, xn−1) = f_{X_{tn}|X_{tn−1}}(xn|xn−1)
(figure: timeline t1, …, tn−2, tn−1, tn)
Applying the chain rule and the Markov property,
f_{X_{t1} ⋯ X_{tn}}(x1, …, xn)
= f_{X_{t1}}(x1) f_{X_{t2}|X_{t1}}(x2|x1) f_{X_{t3}|X_{t2}}(x3|x2) ⋯ f_{X_{tn}|X_{tn−1}}(xn|xn−1)
Equivalently, the past and the future are conditionally independent given the present:
f_{X_{t1} ⋯ X_{tn−1} X_{tn+1} ⋯ X_{tn+k} | X_{tn}}(x1, …, xn−1, xn+1, …, xn+k | xn)
= f_{X_{t1} ⋯ X_{tn−1} | X_{tn}}(x1, …, xn−1 | xn) · f_{X_{tn+1} ⋯ X_{tn+k} | X_{tn}}(xn+1, …, xn+k | xn)
(figure: timeline t1, …, tn−1, tn, tn+1, …)
The equivalence implies that the time-reversed Markov process is also Markov.
proof:
f_{X_{t1} ⋯ X_{tn−1} X_{tn+1} ⋯ X_{tn+k} | X_{tn}}
= f_{X_{t1} ⋯ X_{tn−1} | X_{tn}} · f_{X_{tn+1} ⋯ X_{tn+k} | X_{t1} ⋯ X_{tn}}   [ch]
= f_{X_{t1} ⋯ X_{tn−1} | X_{tn}} · f_{X_{tn+1} | X_{t1} ⋯ X_{tn}} f_{X_{tn+2} | X_{t1} ⋯ X_{tn+1}} ⋯ f_{X_{tn+k} | X_{t1} ⋯ X_{tn+k−1}}   [ch]
= f_{X_{t1} ⋯ X_{tn−1} | X_{tn}} · f_{X_{tn+1} | X_{tn}} f_{X_{tn+2} | X_{tn+1}} ⋯ f_{X_{tn+k} | X_{tn+k−1}}   [Mp]
= f_{X_{t1} ⋯ X_{tn−1} | X_{tn}} · f_{X_{tn+1} ⋯ X_{tn+k} | X_{tn}}   [ch, Mp]
Conversely, conditional independence [ci] implies the Markov property:
f_{X_{tn} | X_{t1} ⋯ X_{tn−1}}(xn | x1, …, xn−1)
= f_{X_{t1} ⋯ X_{tn−2} X_{tn} | X_{tn−1}}(x1, …, xn−2, xn | xn−1) / f_{X_{t1} ⋯ X_{tn−2} | X_{tn−1}}(x1, …, xn−2 | xn−1)   [ch]
= f_{X_{t1} ⋯ X_{tn−2} | X_{tn−1}} f_{X_{tn} | X_{tn−1}} / f_{X_{t1} ⋯ X_{tn−2} | X_{tn−1}}   [ci]
= f_{X_{tn} | X_{tn−1}}(xn | xn−1)
(figure: timeline t1, t2, t3)
proof: f_{X_{t3}|X_{t1}}(x3|x1)
= ∫ f_{X_{t2} X_{t3} | X_{t1}}(x2, x3 | x1) dx2   [marginal]
= ∫ f_{X_{t2}|X_{t1}}(x2|x1) f_{X_{t3}|X_{t2} X_{t1}}(x3|x2, x1) dx2   [ch]
= ∫ f_{X_{t2}|X_{t1}}(x2|x1) f_{X_{t3}|X_{t2}}(x3|x2) dx2   [Mp] : Chapman-Kolmogorov equation
Likewise E(X_{t3}|X_{t1} = x1) = ∫ x3 f_{X_{t3}|X_{t1}}(x3|x1) dx3 can be computed via the C-K equation.
(figure: state-transition diagram of a random walk chain, each state moving one way with probability p and the other with probability 1 − p; example: a four-state chain with states blu/1, gry/2, blk/3, brn/4 and transition probabilities 0.1, 0.2, 0.3, 0.4, 0.5, 0.8, 0.9)
transition probability matrix of the random walk chain:
P =
[ 1−p  p    0    0   ⋯ ]
[ 1−p  0    p    0   ⋯ ]
[ 0    1−p  0    p   ⋯ ]
[ ⋮              ⋱    ]
example: the four-state chain above, with rows formed from the diagram's transition probabilities (0.1, 0.2, …, 0.9), each row summing to 1.
Σ_j p_ij = 1 (each row of P sums to 1)
p_ij^{(2)} := [P²]_ij = Σ_k p_ik p_kj
= Σ_k P(X_{n+1} = k | X_n = i) P(X_{n+2} = j | X_{n+1} = k)
= P(X_{n+2} = j | X_n = i): Chapman-Kolmogorov equation
Similarly p_ij^{(m)} := [P^m]_ij = P(X_{n+m} = j | X_n = i)
P(X_{n+m} = j) = Σ_i P(X_n = i) p_ij^{(m)}, i.e., p(n+m) = p(n)P^m, where
p(n) := (P(X_n = 1), P(X_n = 2), P(X_n = 3), …) is
the marginal pmf of Xn expressed as a row vector.
p(n) = p(0)P^n
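These relations are one-liners in numpy; the 3-state chain below is a made-up example, not the one in the notes:

```python
import numpy as np

# hypothetical 3-state transition matrix; each row sums to 1
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Chapman-Kolmogorov: 2-step transition probabilities are [P^2]_ij
P2 = P @ P

# marginal pmf evolution: p(n) = p(0) P^n, with p(0) a row vector
p0 = np.array([1.0, 0.0, 0.0])
p5 = p0 @ np.linalg.matrix_power(P, 5)

# for this regular chain, P^n converges to a matrix with identical rows
# (each row the stationary pmf), so p(n) forgets p(0)
Pinf = np.linalg.matrix_power(P, 100)
```

Each row of `Pinf` is (numerically) the stationary pmf π with πP = π.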
If the chain has a limiting pmf π = (π1, π2, …), then
lim_{n→∞} P^n =
[ π1 π2 π3 ⋯ ]
[ π1 π2 π3 ⋯ ]
[ ⋮           ]
: all rows identical, so p(n) = p(0)P^n → π regardless of p(0).
Note that C_X(t2, t2) = var(X_{t2}) and that this includes both
discrete- and continuous-time processes.
proof of only if part:
Recall that if X and Y are jointly Gaussian,
E(X|Y) = m_X + (C_{XY}/C_{YY})(Y − m_Y).
For Gaussian X_t,
E(X_{t2}|X_{t1}) = m_X(t2) + (C_X(t1,t2)/C_X(t1,t1)) [X_{t1} − m_X(t1)].
E(X_{t3}|X_{t1}) = E( E(X_{t3}|X_{t2}) | X_{t1} )   [Markov + smoothing]
= E( m_X(t3) + (C_X(t2,t3)/C_X(t2,t2)) [X_{t2} − m_X(t2)] | X_{t1} )
= m_X(t3) + (C_X(t2,t3)/C_X(t2,t2)) [E(X_{t2}|X_{t1}) − m_X(t2)]
= m_X(t3) + (C_X(t2,t3)/C_X(t2,t2)) (C_X(t1,t2)/C_X(t1,t1)) [X_{t1} − m_X(t1)]
Since E(X_{t3}|X_{t1}) = m_X(t3) + (C_X(t1,t3)/C_X(t1,t1)) [X_{t1} − m_X(t1)],
matching coefficients gives C_X(t1,t3) = C_X(t1,t2) C_X(t2,t3) / C_X(t2,t2).
Markov example: the first-order autoregressive process X_n = aX_{n−1} + W_n, with W_n iid zero-mean, is Markov.
σ_X² = E X_n² = E(aX_{n−1} + W_n)² = a²σ_X² + σ_W²
⟹ σ_X² = σ_W² / (1 − a²)
R_X(1) = E X_n X_{n−1} = E(aX_{n−1} + W_n)X_{n−1} = a σ_X²
R_X(2) = E X_n X_{n−2} = E(aX_{n−1} + W_n)X_{n−2} = a² σ_X²
R_X(τ) = E X_n X_{n−τ} = E(aX_{n−1} + W_n)X_{n−τ} = a^{|τ|} σ_X²
covariance matrix of X = (X1, X2, …, Xn)^T:
σ_X² ·
[ 1        a        a²       ⋯  a^{n−1} ]
[ a        1        a        ⋯  a^{n−2} ]
[ a²       a        1        ⋯  a^{n−3} ]
[ ⋮                          ⋱          ]
[ a^{n−1}  a^{n−2}  a^{n−3}  ⋯  1       ]
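Simulating the recursion confirms these moments; the parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
a, n = 0.7, 300_000

# X_n = a X_{n-1} + W_n with unit-variance white W_n, started in steady state
x = np.empty(n)
x[0] = rng.normal(scale=np.sqrt(1 / (1 - a**2)))
w = rng.standard_normal(n)
for i in range(1, n):
    x[i] = a * x[i - 1] + w[i]

sigma2 = 1 / (1 - a**2)                              # sigma_X^2 = sigma_W^2/(1-a^2)
r = [np.mean(x[k:] * x[:n - k]) for k in range(4)]   # R_X(k), about a^k sigma_X^2
```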
(figure: tapped-delay-line filter realization with delay elements D, feedforward taps b0, …, b4, and feedback coefficients a1, …, a4, producing X_n)
S_X(f) = σ_W² / |1 − a e^{−j2πf}|² : first order
S_X(f) = σ_W² / |1 − a1 e^{−j2πf} − ⋯ − ak e^{−j2πfk}|² : k-th order
S_X(f) = σ_W² |b0 + b1 e^{−j2πf} + ⋯ + bk e^{−j2πfk}|² : k-th order (MA)
(k, l)th order ARMA process:
X_n = a1 X_{n−1} + ⋯ + ak X_{n−k} + b0 W_n + ⋯ + bl W_{n−l},
where W_n is iid.
It is the (k, l)-th order pole-zero filter output when W_n is the input.
(figure: direct-form realizations of the ARMA filter: input W_n, delay elements D, feedforward taps b0, …, b4, feedback taps a1, …, a4, output X_n)
S_X(f) = σ_W² |b0 + b1 e^{−j2πf} + ⋯ + bl e^{−j2πfl}|² / |1 − a1 e^{−j2πf} − ⋯ − ak e^{−j2πfk}|²
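The psd formula is direct to evaluate numerically; the helper below and its coefficients are illustrative assumptions:

```python
import numpy as np

def arma_psd(f, a, b, sigma_w2=1.0):
    """S_X(f) = sigma_W^2 |b0 + b1 e^{-j2pi f} + ...|^2 / |1 - a1 e^{-j2pi f} - ...|^2
    at normalized frequencies f (cycles per sample)."""
    z = np.exp(-2j * np.pi * np.asarray(f, dtype=float))
    num = sum(bi * z**i for i, bi in enumerate(b))
    den = 1 - sum(ai * z**(i + 1) for i, ai in enumerate(a))
    return sigma_w2 * np.abs(num) ** 2 / np.abs(den) ** 2

# first-order AR as a special case: S_X(f) = sigma_W^2 / |1 - a e^{-j2pi f}|^2
f = np.linspace(0.0, 0.5, 6)
s_ar1 = arma_psd(f, a=[0.5], b=[1.0])
```

At f = 0 this gives 1/(1 − a)² and at f = 1/2 it gives 1/(1 + a)², the low-pass shape of a positive-a AR(1).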
Ergodicity
Ergodicity means equality between time averages and statistical
averages.
statistical average: EXt = mX , Eg(Xt)
time average:
E_T X_t = (1/T) Σ_{t=1}^{T} X_t or (1/(2T+1)) Σ_{t=−T}^{T} X_t, discrete-time
E_T X_t = (1/T) ∫_0^T X_t dt or (1/2T) ∫_{−T}^{T} X_t dt, continuous-time
E_T g(X_t) similarly
ĒX_t := lim_{T→∞} E_T X_t,  Ēg(X_t) := lim_{T→∞} E_T g(X_t)
Assume Xt is wss for ergodicity of first and second moment
and sss for higher moments.
We will discuss continuous-time cases.
Xt is ergodic
in the mean ⟺ ĒX_t = EX_t
in the 2nd moment ⟺ ĒX_t² = EX_t²
in the acf ⟺ ∀τ, ĒX_{t+τ}X_t = EX_{t+τ}X_t
(fully) ⟺ equality in all moments
We can compute the corresponding moment or joint moment
by time averaging the appropriate function of a sample path.
Ergodic theorems provide (necessary and) sufficient conditions for certain ergodicities.
Since the time average is a limit of a random sequence, the
senses of the above equalities need to be defined.
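A numeric illustration of both cases (the processes and parameters below are assumed): an AR(1) path is mean-ergodic, while the "frozen" process X_t = A for a random A is not.

```python
import numpy as np

rng = np.random.default_rng(7)
a, m, n = 0.9, 2.0, 200_000

# ergodic case: wss AR(1) around mean m; one long path recovers EX_t = m
x = np.empty(n)
x[0] = m
for i in range(1, n):
    x[i] = m + a * (x[i - 1] - m) + rng.standard_normal()
time_avg = x.mean()

# non-ergodic case: X_t = A for all t (A random). C_X(tau) = var(A) never
# dies out, and the time average of any single path is just A, not EA = 0.
A = rng.standard_normal()
frozen_avg = np.full(n, A).mean()
```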
theorem: X_t is ms ergodic in the mean (E_T X_t → m_X in mean square) ⟺
lim_{T→∞} (1/T) ∫_0^T (1 − τ/T) C_X(τ) dτ = 0.
var(E_T X_t) = E | (1/T) ∫_0^T X_t dt − m_X |²
= E | (1/T) ∫_0^T (X_t − m_X) dt |²
= (1/T²) ∫_0^T ∫_0^T E(X_t − m_X)(X_s − m_X) dt ds
= (1/T²) ∫_0^T ∫_0^T C_X(t − s) dt ds   (1)
= (1/T²) ∫_0^T ∫_{−s}^{T−s} C_X(τ) dτ ds   (substituting τ = t − s)
= (1/T²) ( ∫_{−T}^{0} ∫_{−τ}^{T} C_X(τ) ds dτ + ∫_0^T ∫_0^{T−τ} C_X(τ) ds dτ )
= (1/T²) ( ∫_{−T}^{0} (T + τ) C_X(τ) dτ + ∫_0^T (T − τ) C_X(τ) dτ )
= (1/T²) ∫_{−T}^{T} (T − |τ|) C_X(τ) dτ   [symmetry]
= (2/T) ∫_0^T (1 − τ/T) C_X(τ) dτ : since C_X(τ) is even.   (2)
An equivalent condition: X_t is ms ergodic in the mean ⟺ lim_{T→∞} (1/T) ∫_0^T C_X(τ) dτ = 0.
proof:
only if part:
| (1/T) ∫_0^T C_X(τ) dτ |²
= | (1/T) ∫_0^T E(X_τ − m_X)(X_0 − m_X) dτ |²
= | E (X_0 − m_X) (1/T) ∫_0^T (X_τ − m_X) dτ |²
≤ σ_X² E | (1/T) ∫_0^T (X_τ − m_X) dτ |²   [Schwarz]
= σ_X² (2/T) ∫_0^T (1 − τ/T) C_X(τ) dτ   [(2)]
which → 0 if X_t is ms ergodic in the mean.
if part: Assume lim_{T→∞} (1/T) ∫_0^T C_X(τ) dτ = 0 and consider
(1/T²) ∫_0^T ∫_0^T C_X(t − s) dt ds
= (1/T²) ( ∫_0^T ∫_0^t C_X(t − s) ds dt + ∫_0^T ∫_0^s C_X(s − t) dt ds )
= (2/T²) ∫_0^T ∫_0^t C_X(t − s) ds dt   [symmetry]
= (2/T²) ∫_0^T ( ∫_0^t C_X(τ) dτ ) dt
= (2/T²) ∫_0^{T_ε} ∫_0^t C_X(τ) dτ dt + (2/T²) ∫_{T_ε}^T ∫_0^t C_X(τ) dτ dt,
where T_ε is a constant given by this: by the assumption, for any ε > 0 there is T_ε
such that | (1/t) ∫_0^t C_X(τ) dτ | < ε for t > T_ε. Then
≤ (2/T²) ∫_0^{T_ε} | ∫_0^t C_X(τ) dτ | dt + (2/T²) ∫_{T_ε}^T t | (1/t) ∫_0^t C_X(τ) dτ | dt   [Δ ineq]
≤ (2/T²) T_ε max_{t ≤ T_ε} | ∫_0^t C_X(τ) dτ | + (2ε/T²) ∫_{T_ε}^T t dt
= (first term → 0 as T → ∞) + ε (T² − T_ε²)/T² ≤ 2ε for all T large enough.
Hence var(E_T X_t) → 0, i.e., X_t is ms ergodic in the mean.
X_t is ms ergodic in the acf: E_T X_{t+τ} X_t → R_X(τ) in mean square,
for each τ, as T → ∞.
Isserlis theorem: For jointly Gaussian zero-mean random variables X1, X2, X3, and X4,
E X1X2X3X4 = E X1X2 E X3X4 + E X1X3 E X2X4 + E X1X4 E X2X3.
In fact, for any even number k,
E X1X2 ⋯ Xk = Σ E X_{i1}X_{i2} E X_{i3}X_{i4} ⋯ E X_{i_{k−1}}X_{i_k},
where the sum is over all possible ways of partitioning
{1, 2, …, k} into k/2 pairs.
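A Monte Carlo check of the four-variable case, with an arbitrarily chosen covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(8)

# an arbitrary positive definite covariance for (X1, X2, X3, X4), zero mean
C = np.array([[2.0, 0.8, 0.5, 0.3],
              [0.8, 2.0, 0.6, 0.2],
              [0.5, 0.6, 1.5, 0.4],
              [0.3, 0.2, 0.4, 1.2]])
x = rng.multivariate_normal(np.zeros(4), C, size=1_000_000)

# Isserlis: E X1X2X3X4 = C12 C34 + C13 C24 + C14 C23
lhs = np.mean(np.prod(x, axis=1))
rhs = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
```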
proof: Let Y_t^{(τ)} := X_{t+τ} X_t. Then
C_{Y^{(τ)}}(α) = E Y_{t+α}^{(τ)} Y_t^{(τ)} − R_X(τ)².
only if: By the previous theorem, ms ergodicity in the acf gives
lim_{T→∞} (1/T) ∫_0^T C_{Y^{(τ)}}(α) dα = 0 for every τ, i.e.,
lim_{T→∞} (1/T) ∫_0^T [C_X²(α) + C_X(α + τ) C_X(α − τ)] dα = 0.
Setting τ = 0, lim_{T→∞} (2/T) ∫_0^T C_X²(α) dα = 0.
if: Assume that lim_{T→∞} (1/T) ∫_0^T C_X²(τ) dτ = 0. Then
lim_{T→∞} (1/T) ∫_0^T C_{Y^{(τ)}}(α) dα = 0 for every τ, since
| (1/T) ∫_0^T C_X(α + τ) C_X(α − τ) dα |
≤ ( (1/T) ∫_0^T C_X²(α + τ) dα )^{1/2} ( (1/T) ∫_0^T C_X²(α − τ) dα )^{1/2} → 0.   [Schwarz]