Overview:
- Definitions
- Properties
- Yule-Walker Equations
- Levinson-Durbin recursion
J. McNames, ECE 538/638, Autocorrelation, Ver. 1.09
Definitions

The normalized autocorrelation (ACF) is defined as

    \rho_x(\ell) = \frac{\gamma_x(\ell)}{\gamma_x(0)} = \frac{\gamma_x(\ell)}{\sigma_x^2}
Properties

- Bounded: -1 \le \rho_x(\ell) \le 1
- White noise, x(n) \sim WN(\mu_x, \sigma_x^2): \rho_x(\ell) = \delta(\ell)
- These properties enable us to assign meaning to values estimated from signals. For example, if \hat\rho_x(\ell) \approx \delta(\ell), we can conclude that the process consists of nearly uncorrelated samples.
Example: ACF of an AR(1) Process

Solve for the ACF

    \rho_x(\ell) = \frac{\gamma_x(\ell)}{\gamma_x(0)} = \frac{\gamma_x(\ell)}{\sigma_x^2}

for an AR(1) process driven by w(n) \sim WN(0, \sigma_w^2). Hint:

    -\alpha^n u(-n-1) \;\longleftrightarrow\; \frac{1}{1-\alpha z^{-1}}, \qquad \text{ROC } |z| < |\alpha|
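For reference, with x(n) = \alpha x(n-1) + w(n) and |\alpha| < 1 (my notation for the AR(1) case), the result works out to:

```latex
\gamma_x(\ell) = \frac{\sigma_w^2}{1-\alpha^2}\,\alpha^{|\ell|}
\qquad\Longrightarrow\qquad
\rho_x(\ell) = \alpha^{|\ell|}
```

so the ACF decays geometrically at the rate of the pole radius.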
All-Pole Models

    H(z) = \frac{b_0}{A(z)} = \frac{b_0}{1 + \sum_{k=1}^{P} a_k z^{-k}}

Since the coefficients at large lags tend to be small, this can often be well approximated by an AP(P) model.

AP Equations

Cross-multiplying by A(z) and taking the inverse z-transform,

    H(z) + \sum_{k=1}^{P} a_k H(z) z^{-k} = b_0

    h(n) + \sum_{k=1}^{P} a_k h(n-k) = b_0\,\delta(n)

Multiplying both sides by h^*(n-\ell), with a_0 = 1,

    \sum_{k=0}^{P} a_k\, h(n-k)\, h^*(n-\ell) = b_0\, h^*(n-\ell)\,\delta(n)

Summing over all n,

    \sum_{n=-\infty}^{\infty} \sum_{k=0}^{P} a_k\, h(n-k)\, h^*(n-\ell) = \sum_{n=-\infty}^{\infty} b_0\, h^*(n-\ell)\,\delta(n)

    \sum_{k=0}^{P} a_k\, r_h(\ell-k) = b_0\, h^*(-\ell)
AP Equations Continued

Since h(n) is causal, h^*(-\ell) = 0 for \ell > 0, and h(0) = b_0, so

    \sum_{k=0}^{P} a_k\, r_h(-k) = |b_0|^2 \qquad (\ell = 0)

    \sum_{k=0}^{P} a_k\, r_h(\ell-k) = 0 \qquad (\ell > 0)

Equivalently, for \ell > 0,

    r_h(\ell) = -\sum_{k=1}^{P} a_k\, r_h(\ell-k)

In matrix form (using r_h(-k) = r_h^*(k); for real h the matrix is symmetric Toeplitz),

    \begin{bmatrix}
    r_h(0)   & r_h(1)   & \cdots & r_h(P)   \\
    r_h(1)   & r_h(0)   & \cdots & r_h(P-1) \\
    \vdots   & \vdots   & \ddots & \vdots   \\
    r_h(P)   & r_h(P-1) & \cdots & r_h(0)
    \end{bmatrix}
    \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_P \end{bmatrix}
    =
    \begin{bmatrix} |b_0|^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
Solving for a

The lower P equations do not involve b_0:

    \begin{bmatrix}
    r_h(0)   & r_h(1)   & \cdots & r_h(P-1) \\
    r_h(1)   & r_h(0)   & \cdots & r_h(P-2) \\
    \vdots   & \vdots   & \ddots & \vdots   \\
    r_h(P-1) & r_h(P-2) & \cdots & r_h(0)
    \end{bmatrix}
    \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_P \end{bmatrix}
    =
    -\begin{bmatrix} r_h(1) \\ r_h(2) \\ \vdots \\ r_h(P) \end{bmatrix}

That is,

    \bar r_h + R_h\, a = 0 \quad\Longrightarrow\quad a = -R_h^{-1}\,\bar r_h

where R_h is the P \times P Toeplitz autocorrelation matrix and \bar r_h = [r_h(1), \dots, r_h(P)]^T.
Solving for b0

The first (\ell = 0) equation gives

    \begin{bmatrix} r_h(0) & r_h(1) & \cdots & r_h(P) \end{bmatrix}
    \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_P \end{bmatrix}
    = |b_0|^2

so

    |b_0|^2 = \sum_{k=0}^{P} a_k\, r_h(k) = r_h(0) + a^T \bar r_h
    \qquad
    b_0 = \sqrt{r_h(0) + a^T \bar r_h} > 0

with a = -R_h^{-1}\,\bar r_h as above.
If we have an AR(P) process, then we know r_x(\ell) = \sigma_w^2\, r_h(\ell), and we can equivalently write

    R_x\, a = -\bar r_x \quad\Longleftrightarrow\quad R_h\, a = -\bar r_h

These are the Yule-Walker equations, R a = -\bar r.
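As a quick numerical check of these steps (a Python/NumPy sketch; the AP(2) coefficients and all variable names here are illustrative choices of mine, not from the slides):

```python
import numpy as np

# Hypothetical AP(2) model: H(z) = b0 / (1 + a1 z^-1 + a2 z^-2)
b0 = 1.0
a_true = np.array([-0.9, 0.4])   # stable: poles inside the unit circle

# Impulse response from h(n) + sum_k a_k h(n-k) = b0 delta(n)
N = 2000
h = np.zeros(N)
for n in range(N):
    acc = b0 if n == 0 else 0.0
    for k, ak in enumerate(a_true, start=1):
        if n - k >= 0:
            acc -= ak * h[n - k]
    h[n] = acc

# Deterministic autocorrelation r_h(l) = sum_n h(n) h(n+l), l = 0..P
P = len(a_true)
r = np.array([h[: N - l] @ h[l:] for l in range(P + 1)])

# Lower P equations:  R_h a = -rbar_h  =>  a = -R_h^{-1} rbar_h
R = np.array([[r[abs(i - j)] for j in range(P)] for i in range(P)])
a_hat = -np.linalg.solve(R, r[1:])

# First equation: |b0|^2 = r_h(0) + a^T rbar_h
b0_hat = np.sqrt(r[0] + a_hat @ r[1:])
```

Solving the lower P equations recovers the model coefficients, and the first equation then recovers b_0.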
Partial Autocorrelation

The partial autocorrelation function (PACF), also known as the partial autocorrelation sequence (PACS), is defined as

    \alpha(\ell) = \begin{cases}
    1 & \ell = 0 \\
    a_\ell^{(\ell)} & \ell > 0 \\
    \alpha(-\ell) & \ell < 0
    \end{cases}

where a_\ell^{(\ell)} is the last coefficient of the order-\ell linear predictor obtained by solving

    \begin{bmatrix}
    r(0)      & r(1)      & \cdots & r(\ell-1) \\
    r(1)      & r(0)      & \cdots & r(\ell-2) \\
    \vdots    & \vdots    & \ddots & \vdots    \\
    r(\ell-1) & r(\ell-2) & \cdots & r(0)
    \end{bmatrix}
    \begin{bmatrix} a_1^{(\ell)} \\ a_2^{(\ell)} \\ \vdots \\ a_\ell^{(\ell)} \end{bmatrix}
    =
    \begin{bmatrix} r(1) \\ r(2) \\ \vdots \\ r(\ell) \end{bmatrix}

That is, R a^{(\ell)} = \bar r, so a^{(\ell)} = R^{-1}\bar r, which can be solved efficiently with the Levinson-Durbin algorithm.
Then the PACF can be defined as the correlation between the residuals

    x(n) - \hat x_{1:n-1}(n) = x(n) - P[x(n) \mid x(n-1), \dots, x(1)]

    x(0) - \hat x_{1:n-1}(0) = x(0) - P[x(0) \mid x(n-1), \dots, x(1)]

where

    \hat x_n(n) = \sum_{k=1}^{n-1} c_k\, x(n-k), \qquad
    c_k = \arg\min_{c_k} E\!\left[\left|x(n) - \hat x_n(n)\right|^2\right]

    \hat x_n(0) = \sum_{k=1}^{n-1} d_k\, x(n-k)

The PACF is then

    \alpha(n) = \frac{E\!\left[(x(n) - \hat x_n(n))\,(x(0) - \hat x_n(0))^*\right]}
    {\sqrt{E\!\left[|x(n) - \hat x_n(n)|^2\right]\, E\!\left[|x(0) - \hat x_n(0)|^2\right]}}
Intuitively you might expect |\alpha(\ell)| \le |\rho(\ell)|, but this is not true in general. For example, for an MA(1) process,

    \alpha(\ell) = \frac{-(-b_1)^{\ell}\,(1 - b_1^2)}{1 - b_1^{2(\ell+1)}}
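Concretely, with b_1 = 0.9 (the value used in the MATLAB example below), the MA(1) ACF is zero beyond lag 1 while the PACF is not:

```latex
\rho(1) = \frac{b_1}{1+b_1^2} \approx 0.497, \qquad
\rho(\ell) = 0 \;\text{for}\; \ell \ge 2, \qquad
|\alpha(2)| = \frac{b_1^2\,(1-b_1^2)}{1-b_1^6} \approx 0.33 > |\rho(2)| = 0
```

so the PACF of an MA process decays gradually but never terminates, the dual of the AR case.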
MA(1) example (MATLAB):

L  = 10;                              % number of lags
b1 = 0.9;                             % MA(1) coefficient
sw = 1;                               % white noise variance
ac = sw*[(1+b1^2); b1; zeros(L-1,1)]; % MA(1) autocovariance

l   = 0:L;
acf = ac/ac(1);
h = stem(l,acf);
set(h(1),'MarkerFaceColor','b');
set(h(1),'MarkerSize',4);
ylabel('\rho(l)');
xlabel('Lag (l)');
xlim([0 L]);
ylim([-1 1]);
box off;

[Figure: stem plot of the MA(1) ACF \rho(l) over lags 0-10; only lags 0 and 1 are nonzero.]
The PACF can be computed from the autocovariance with the Levinson-Durbin recursion:

pc = zeros(L+1,1);   % partial correlations alpha(l)
mc = zeros(L+1,1);   % predictor coefficients
pv = zeros(L+1,1);   % prediction error variances

pc(1) = 1;
mc(1) = 1;
pv(1) = ac(1);

pc(2) = ac(2)/ac(1);
mc(2) = pc(2);
pv(2) = ac(1)*(1-pc(2).^2);

for c1 = 3:L+1,
   pc(c1)     = (ac(c1) - sum(mc(2:c1-1).*ac((c1-1):-1:2)))/pv(c1-1);
   mc(2:c1-1) = mc(2:c1-1) - pc(c1)*mc(c1-1:-1:2);
   mc(c1)     = pc(c1);
   pv(c1)     = pv(c1-1)*(1-pc(c1).^2);
end;

[Figure: stem plot of the PACF \alpha(l) for the MA(1) example.]
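The same recursion in a NumPy sketch (function and variable names are mine), checked on an AR(1) autocovariance, for which the PACF should be zero beyond lag 1:

```python
import numpy as np

def levinson_pacf(ac):
    """PACF alpha(0..L) from an autocovariance sequence ac(0..L)
    via the Levinson-Durbin recursion (mirrors pc/mc/pv above)."""
    L = len(ac) - 1
    pc = np.zeros(L + 1)   # partial correlations alpha(l)
    mc = np.zeros(L + 1)   # predictor coefficients
    pc[0] = mc[0] = 1.0
    pv = ac[0]             # prediction error variance
    for l in range(1, L + 1):
        # reflection coefficient for order l
        k = (ac[l] - mc[1:l] @ ac[l - 1:0:-1]) / pv
        mc[1:l] = mc[1:l] - k * mc[l - 1:0:-1]
        mc[l] = pc[l] = k
        pv = pv * (1.0 - k ** 2)
    return pc

# AR(1), x(n) = -a1 x(n-1) + w(n):  ac(l) = sw/(1-a1^2) * (-a1)^l
a1, sw, L = 0.9, 1.0, 10
ac = (sw / (1 - a1 ** 2)) * (-a1) ** np.arange(L + 1)
pacf = levinson_pacf(ac)
```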
l = 0:L;
h = stem(l,pc);
set(h(1),'MarkerFaceColor','b');
set(h(1),'MarkerSize',4);
ylabel('\alpha(l)');
xlabel('Lag (l)');
xlim([0 L]);
ylim([-1 1]);
box off;
AR(1) example (MATLAB):

L  = 10;                 % number of lags
a1 = 0.9;                % AR(1) coefficient
sw = 1;                  % white noise variance
ac = zeros(L+1,1);

ac(1) = sw/(1-a1^2);
for c1=2:L+1,
   ac(c1) = -a1*ac(c1-1);
end;

[Figure: stem plot of the AR(1) ACF \rho(l); it alternates in sign and decays geometrically.]
[Figure: stem plot for the AR(1) example, lags 0-10.]

Autocovariance Estimation
The unbiased estimator of the autocovariance is

    \hat\gamma_u(\ell) = \frac{1}{N-|\ell|} \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat\mu_x\right]\left[x(n) - \hat\mu_x\right]^*, \qquad |\ell| < N

where

    \hat\mu_x = \frac{1}{N} \sum_{n=0}^{N-1} x(n)

- Each pair \{x(n+|\ell|), x(n)\} has the same distribution for all n because the process is assumed WSS and ergodic
- This is a natural estimator that we know converges asymptotically (N \to \infty)
Our book (and most other books) lists a different estimate, the biased estimator:

    \hat\gamma_b(\ell) = \frac{1}{N} \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat\mu_x\right]\left[x(n) - \hat\mu_x\right]^*
    = \frac{N-|\ell|}{N}\,\hat\gamma_u(\ell), \qquad |\ell| < N

- When we use \hat\mu_x, the estimate is asymptotically unbiased
- The bias of this estimator is larger than that of the unbiased estimator:

    E[\hat\gamma_b(\ell)] = \frac{N-|\ell|}{N}\,\gamma(\ell)
Biased is Better?

    \hat\gamma_b(\ell) = \frac{N-|\ell|}{N}\,\hat\gamma_u(\ell)
    = \frac{1}{N} \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat\mu_x\right]\left[x(n) - \hat\mu_x\right]^*

- If \gamma(\ell) is small for large lags, then the bias is also small
- In most cases the biased estimator has smaller MSE, though this has not been proven rigorously
- In general:
  - At small lags, there is little difference between the two estimators
  - At large lags, the larger bias of the biased estimator is favorably traded for reduced variance
- \mathrm{var}\{\hat\gamma_b(\ell)\} = O(1/N), while \mathrm{var}\{\hat\gamma_u(\ell)\} = O(1/(N-|\ell|))
- For the remainder of the class we will use the biased estimator unless otherwise noted: \hat\gamma(\ell) = \hat\gamma_b(\ell)
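The two estimators differ only by the deterministic factor (N-|\ell|)/N, which the following NumPy sketch (function name and sample data are mine) makes explicit:

```python
import numpy as np

def acv_est(x, lag, biased=True):
    """Sample autocovariance at one lag: divide by N (biased)
    or by N - |lag| (unbiased)."""
    x = np.asarray(x, dtype=float)
    N, l = len(x), abs(lag)
    xm = x - x.mean()                  # subtract the sample mean
    s = xm[l:] @ xm[: N - l]           # sum over the N - |lag| products
    return s / N if biased else s / (N - l)

x = np.array([1.0, -2.0, 3.0, 0.5, -1.5, 2.0])   # N = 6
g_b = acv_est(x, 2, biased=True)
g_u = acv_est(x, 2, biased=False)                # g_b = (4/6) * g_u
```

At lag 0 the biased estimate reduces to the usual (1/N) sample variance.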
For a zero-mean process the same estimators apply to the autocorrelation:

    \hat r_b(\ell) = \frac{1}{N} \sum_{n=0}^{N-1-|\ell|} x(n+|\ell|)\,x^*(n)
    \qquad
    \hat r_u(\ell) = \frac{1}{N-|\ell|} \sum_{n=0}^{N-1-|\ell|} x(n+|\ell|)\,x^*(n)

with \hat r_b(\ell) = \frac{N-|\ell|}{N}\,\hat r_u(\ell). The bias is

    E[\hat r_b(\ell)] = \frac{N-|\ell|}{N}\, r(\ell), \qquad E[\hat r_u(\ell)] = r(\ell)

The variance is approximately

    \mathrm{var}\{\hat r(\ell)\} \approx \frac{1}{N} \sum_{m=-\infty}^{\infty} \left[r^2(m) + r(m+\ell)\,r(m-\ell)\right]
Estimated ACF

The natural estimates of the ACF are

    \hat\rho_b(\ell) = \frac{\hat\gamma_b(\ell)}{\hat\gamma_b(0)}, \qquad
    \hat\rho_u(\ell) = \frac{\hat\gamma_u(\ell)}{\hat\gamma_u(0)}
Confidence Intervals

    \mathrm{var}\{\hat\rho(\ell)\} \approx \frac{1}{N} \sum_{m=-\infty}^{\infty} \left[\rho^2(m) + \rho(m+\ell)\,\rho(m-\ell)\right]

- The estimated ACF will generally be less damped and will decay more slowly than \rho(\ell)
- This applies to the estimated autocovariances and autocorrelations as well
Example: Estimated ACF of an AR(1) Process

Consider an AR(1) process driven by w(n) \sim WN(0, \sigma_w^2). Estimate the ACF using the biased and unbiased estimates for N = 100. Do so several times for different values of a_1.

[Figures: for N = 100 and a_1 = 0.9, 0.0, and 0.5: a realization x(n) over samples 0-100, and the corresponding biased and unbiased ACF estimates \hat\rho(l) over lags up to 90.]
L  = 90;
a1 = 0.9;
sw = 1;
ac = zeros(L+1,1);
N  = 100;
cl = 99;                      % Confidence level
np = norminv((1-cl/100)/2);   % Find corresponding lower percentile

ac(1) = sw/(1-a1^2);
for c1=2:L+1,
   ac(c1) = -a1*ac(c1-1);
end;
acf = ac/ac(1);

w = randn(N,1);
a = [1 a1];
x = filter(1,a,w);

[Figure: biased ACF estimate for N = 100, a_1 = 0.9, with white-noise confidence bounds.]
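The norminv line above can be checked with the Python standard library (a sketch; only N and cl come from the MATLAB code):

```python
from statistics import NormalDist

# Under H0 that x(n) is white noise, rho_hat(l) ~ N(0, 1/N) for l != 0,
# so a two-sided confidence bound is |z_{alpha/2}| / sqrt(N).
N, cl = 100, 99
z = NormalDist().inv_cdf((1 - cl / 100) / 2)  # lower percentile, about -2.576
bound = abs(z) / N ** 0.5
```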
Summary

- The ACF and PACF are useful characterizations of WSS random processes
- They can help select an appropriate model:
  - MA: finite ACF
  - AR: finite PACF
- AP/AR models are often preferred characterizations because we can solve/estimate the model parameters by solving a set of linear equations (Yule-Walker)
- Biased estimates of r(\ell), \gamma(\ell), and/or \rho(\ell) are generally preferred to the unbiased estimates:
  - Less variance (always) and lower MSE (sometimes)
  - Positive definite (the estimated PSD is therefore also nonnegative)
- The bias is known, but the variance of the estimates is generally unknown
  - Loosely, confidence intervals for WN are used instead