
Partial and Autocorrelation Functions Overview

J. McNames, Portland State University, ECE 538/638, Autocorrelation, Ver. 1.09

- Definitions
- Properties
- Yule-Walker equations
- Levinson-Durbin recursion
- Biased and unbiased estimators

Autocorrelation Function Defined

Normalized autocorrelation, also known as the autocorrelation function (ACF), is defined for a WSS signal as

    ρx(ℓ) = γx(ℓ)/γx(0) = γx(ℓ)/σx²

where γx(ℓ) is the autocovariance of x(n),

    γx(ℓ) = E[ [x(n + ℓ) − μx][x(n) − μx]* ] = rx(ℓ) − |μx|²

Autocorrelation Function Properties and Examples

    ρx(ℓ) = γx(ℓ)/γx(0) = γx(ℓ)/σx²

The ACF has a number of useful properties:
- Bounded: −1 ≤ ρx(ℓ) ≤ 1
- White noise, x(n) ~ WN(μx, σx²): ρx(ℓ) = δ(ℓ)
- These enable us to assign meaning to estimated values from signals. For example, if ρ̂x(ℓ) ≈ δ(ℓ), we can conclude that the process consists of nearly uncorrelated samples.

Example 1: 1st Order Moving Average

Find the autocorrelation function of a 1st order moving average process, MA(1):

    x(n) = w(n) + b1 w(n − 1)

where w(n) ~ WN(0, σw²).
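Working Example 1 through gives γ(0) = σw²(1 + b1²), γ(±1) = σw² b1, and γ(ℓ) = 0 for |ℓ| > 1, so ρ(1) = b1/(1 + b1²). The slides use MATLAB; a minimal Python sketch of the same result (the function name is mine):

```python
def ma1_acf(b1, max_lag):
    """Theoretical ACF of x(n) = w(n) + b1*w(n-1), w(n) ~ WN(0, sw)."""
    sw = 1.0  # white-noise power; it cancels in the normalization
    gamma = [sw * (1 + b1 ** 2), sw * b1] + [0.0] * (max_lag - 1)
    return [g / gamma[0] for g in gamma]  # rho(l) = gamma(l)/gamma(0)

acf = ma1_acf(0.9, 5)
# rho(0) = 1, rho(1) = b1/(1 + b1^2), and rho(l) = 0 for l >= 2
```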

Example 2: 1st Order Autoregressive

Find the autocorrelation function of a 1st order autoregressive process, AR(1):

    x(n) = −a1 x(n − 1) + w(n)

where w(n) ~ WN(0, σw²). Hint:

    −αⁿ u(−n − 1)  ⟷  1/(1 − α z⁻¹)    for an ROC of |z| < |α|

Autocorrelation Function Properties

    ρx(ℓ) = γx(ℓ)/γx(0) = γx(ℓ)/σx²

- In general, the ACF of an AR(P) process decays as a sum of damped exponentials (infinite extent)
- If the AR(P) coefficients are known, the ACF can be determined by solving a set of linear equations
- The ACF of an MA(Q) process is finite: ρx(ℓ) = 0 for |ℓ| > Q
  - Thus, if the estimated ACF is very small for large lags, an MA(Q) model may be appropriate
- The ACF of an ARMA(P, Q) process is also a sum of damped exponentials (infinite extent)
  - It is difficult to solve for in general
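For the AR(1) process x(n) = −a1 x(n − 1) + w(n), the result is γ(0) = σw²/(1 − a1²) and γ(ℓ) = −a1 γ(ℓ − 1) for ℓ > 0, i.e. ρ(ℓ) = (−a1)^|ℓ| (this is the same recursion the Example 4 MATLAB code uses). A Python sketch of it (function name mine):

```python
def ar1_acf(a1, max_lag, sw=1.0):
    """Theoretical ACF of x(n) = -a1*x(n-1) + w(n), w(n) ~ WN(0, sw), |a1| < 1."""
    gamma = [sw / (1 - a1 ** 2)]         # gamma(0)
    for _ in range(max_lag):
        gamma.append(-a1 * gamma[-1])    # gamma(l) = -a1 * gamma(l-1), l > 0
    return [g / gamma[0] for g in gamma]

acf = ar1_acf(0.9, 4)
# rho(l) = (-a1)^l: an alternating, geometrically decaying sequence
```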

All-Pole Models

    H(z) = b0/A(z) = b0 / (1 + Σ_{k=1}^{P} ak z⁻ᵏ)

- All-pole models are especially important because they can be estimated by solving a set of linear equations
- Partial autocorrelation can also be best understood within the context of all-pole models (my motivation)
- Recall that an AZ(Q) model can be expressed as an AP(∞) model if the AZ(Q) model is minimum phase
  - Since the coefficients at large lags tend to be small, this can often be well approximated by an AP(P) model

AP Equations

Let us consider a causal AP(P) model:

    H(z) + Σ_{k=1}^{P} ak H(z) z⁻ᵏ = b0
    h(n) + Σ_{k=1}^{P} ak h(n − k) = b0 δ(n)

With a0 ≜ 1, multiplying by h*(n − ℓ) gives

    Σ_{k=0}^{P} ak h(n − k) h*(n − ℓ) = b0 h*(n − ℓ) δ(n)

and summing over n,

    Σ_{n=−∞}^{∞} Σ_{k=0}^{P} ak h(n − k) h*(n − ℓ) = Σ_{n=−∞}^{∞} b0 h*(n − ℓ) δ(n)

    Σ_{k=0}^{P} ak rh(ℓ − k) = b0 h*(−ℓ)
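The AP equations above can be checked numerically. For a causal AP(1) model the impulse response and its autocorrelation have simple closed forms, so the ℓ = 0 and ℓ > 0 equations can be evaluated directly (a Python sketch; the example values b0 = 2, a1 = 0.5 are mine):

```python
# For H(z) = b0/(1 + a1 z^-1), the impulse response is h(n) = b0*(-a1)^n u(n),
# so rh(l) = sum_n h(n+l) h(n) = b0^2 (-a1)^|l| / (1 - a1^2) (real case).
b0, a1 = 2.0, 0.5
rh = lambda l: b0 ** 2 * (-a1) ** abs(l) / (1 - a1 ** 2)

# AP equations: sum_{k=0}^{P} a_k rh(l-k) equals |b0|^2 at l = 0 and 0 for l > 0
eq0 = rh(0) + a1 * rh(-1)   # should equal b0^2
eq1 = rh(1) + a1 * rh(0)    # should equal 0
```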

AP Equations Continued

    Σ_{k=0}^{P} ak rh(ℓ − k) = b0 h*(−ℓ)

Since AP(P) is causal, h(ℓ) = 0 for ℓ < 0 and h(0) = b0, and

    Σ_{k=0}^{P} ak rh(−k) = |b0|²       ℓ = 0
    Σ_{k=0}^{P} ak rh(ℓ − k) = 0        ℓ > 0

This has several important consequences. One is that the autocorrelation can be expressed as a recursive relation for ℓ > 0, since a0 = 1:

    rh(ℓ) = −Σ_{k=1}^{P} ak rh(ℓ − k)    ℓ > 0

AP Equations in Matrix Form

We can collect the first P + 1 of these terms in a matrix:

    [ rh(0)   rh(−1)   …  rh(−P)    ] [ 1  ]   [ |b0|² ]
    [ rh(1)   rh(0)    …  rh(−P+1)  ] [ a1 ]   [ 0     ]
    [  ⋮        ⋮      ⋱     ⋮      ] [ ⋮  ] = [ ⋮     ]
    [ rh(P)   rh(P−1)  …  rh(0)     ] [ aP ]   [ 0     ]

The autocorrelation matrix is Hermitian, Toeplitz, and positive definite.
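The recursive relation means that, given {rh(0), …, rh(P)} and the model coefficients, the whole autocorrelation tail follows. A Python sketch of the extension (function name mine):

```python
def extend_autocorr(r, a, num):
    """Extend rh(l) for l > P via rh(l) = -sum_{k=1}^{P} a_k rh(l-k).

    r: [rh(0), ..., rh(P)]; a: [a_1, ..., a_P]. Real-valued sketch."""
    r = list(r)
    for _ in range(num):
        l = len(r)
        r.append(-sum(a[k] * r[l - 1 - k] for k in range(len(a))))
    return r

# AP(1) check: with a = [a1], rh(l) = -a1*rh(l-1) is a decaying geometric series
ext = extend_autocorr([1.0, -0.5], [0.5], 3)
# -> [1.0, -0.5, 0.25, -0.125, 0.0625]
```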

Solving the AP Equations

If we know the autocorrelation, we can solve these equations for a and b0:

    [ rh(0)   rh(−1)   …  rh(−P)    ] [ 1  ]   [ |b0|² ]
    [ rh(1)   rh(0)    …  rh(−P+1)  ] [ a1 ]   [ 0     ]
    [  ⋮        ⋮      ⋱     ⋮      ] [ ⋮  ] = [ ⋮     ]
    [ rh(P)   rh(P−1)  …  rh(0)     ] [ aP ]   [ 0     ]

Solving for a

The bottom P rows give

    [ rh(1) ]   [ rh(0)    rh(−1)   …  rh(−(P−1)) ] [ a1 ]   [ 0 ]
    [ rh(2) ]   [ rh(1)    rh(0)    …  rh(−(P−2)) ] [ a2 ]   [ 0 ]
    [  ⋮    ] + [   ⋮        ⋮      ⋱      ⋮      ] [ ⋮  ] = [ ⋮ ]
    [ rh(P) ]   [ rh(P−1)  rh(P−2)  …  rh(0)      ] [ aP ]   [ 0 ]

that is,

    rh + Rh a = 0        a = −Rh⁻¹ rh

These are called the Yule-Walker equations.
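For P = 2 the Yule-Walker system is small enough to solve by hand with Cramer's rule. A Python sketch (the autocorrelation values are illustrative, not from the slides):

```python
# Solve R a = -r with R = [[r0, r1], [r1, r0]] and r = [r1, r2] (real case).
r0, r1, r2 = 1.0, 0.5, 0.2
det = r0 * r0 - r1 * r1
a1 = -(r1 * r0 - r2 * r1) / det     # Cramer: first column replaced by -r
a2 = -(r0 * r2 - r1 * r1) / det     # Cramer: second column replaced by -r

# Verify the solution: the residual of R a + r should vanish
res1 = r0 * a1 + r1 * a2 + r1
res2 = r1 * a1 + r0 * a2 + r2
```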

Solving for b0

The first row of the matrix equation gives

    [ rh(0)  rh(−1)  …  rh(−P) ] [ 1, a1, …, aP ]ᵀ = |b0|²

    |b0|² = Σ_{k=0}^{P} ak rh(k) = rh(0) + aᵀ rh        b0 = sqrt(rh(0) + aᵀ rh)

- Note that we cannot determine the sign of b0 = h(0) from rh(ℓ)

Yule-Walker Equation Comments

    a = −Rh⁻¹ rh        |b0|² = rh(0) + aᵀ rh

- The matrix inverse exists because, unless h(n) = 0, Rh is positive definite
- Thus, the first P + 1 terms of the autocorrelation completely determine the model parameters
- A similar relation exists for the first P + 1 elements of the autocorrelation sequence in terms of the model parameters, found by solving a set of linear equations (Problem 4.6)
- This is not true for AZ or PZ models
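Continuing the hypothetical P = 2 numbers from above, |b0| follows from the first row once a is known. A self-contained Python sketch (values and names mine; only the magnitude of b0 is recoverable):

```python
import math

# Given r(0..2), solve Yule-Walker by Cramer's rule, then recover |b0|.
r = [1.0, 0.5, 0.2]                     # r(0), r(1), r(2); illustrative values
det = r[0] ** 2 - r[1] ** 2
a = [-(r[1] * r[0] - r[2] * r[1]) / det,   # a1
     -(r[0] * r[2] - r[1] ** 2) / det]     # a2

b0_sq = r[0] + a[0] * r[1] + a[1] * r[2]   # |b0|^2 = r(0) + a^T r
b0 = math.sqrt(b0_sq)                      # sign of b0 is not recoverable from r
```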

Yule-Walker Equation Comments Continued

    a = −Rh⁻¹ rh        b0 = sqrt(rh(0) + aᵀ rh)

    {rh(0), …, rh(P)}  ⟺  {b0, a1, …, aP}

- Thus the two are equivalent, reversible, and unique characterizations of the model
- The rest of the sequence can then be determined by symmetry and the recursive relation given earlier:

    rh(ℓ) = −Σ_{k=1}^{P} ak rh(ℓ − k)    ℓ > 0

AR Processes versus AP Models

- Concisely, we can write the Yule-Walker equations as

    Rh a = −rh

- If we have an AR(P) process, then we know rx(ℓ) = σw² rh(ℓ) and we can equivalently write

    Rx a = −rx

- Thus, the following two problems are equivalent:
  - Find the parameters of an AR process, {a1, …, aP, σw²}, given rx(ℓ)
  - Find the parameters of an AP model, {a1, …, aP, b0}, given rh(ℓ)
- To accommodate both in a common notation, I will write the Yule-Walker equations as simply

    Ra = −r

Solving for a Recursively

We can write the Yule-Walker equations for model order ℓ as

    [ r(0)     r*(1)    …  r*(ℓ−1) ] [ a1^(ℓ) ]      [ r(1) ]
    [ r(1)     r(0)     …  r*(ℓ−2) ] [ a2^(ℓ) ]      [ r(2) ]
    [  ⋮         ⋮      ⋱    ⋮     ] [   ⋮    ] = −  [  ⋮   ]
    [ r(ℓ−1)   r(ℓ−2)   …  r(0)    ] [ aℓ^(ℓ) ]      [ r(ℓ) ]

- We can recursively solve for the model coefficients a^(ℓ) = [a1^(ℓ), a2^(ℓ), …, aℓ^(ℓ)] for increasing model orders ℓ = 1, 2, …
- Levinson-Durbin algorithm

Partial Autocorrelation

The partial autocorrelation function (PACF), also known as the partial autocorrelation sequence (PACS), is defined as

    α(ℓ) = { 1           ℓ = 0
           { −aℓ^(ℓ)     ℓ > 0
           { α*(−ℓ)      ℓ < 0

where aℓ^(ℓ) is the last element of a^(ℓ) and is given by the Yule-Walker equations

    Rℓ a^(ℓ) = −rℓ        a^(ℓ) = −Rℓ⁻¹ rℓ

- It is a dual of the ACF and has a number of useful and complementary properties
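Because Rℓ is Toeplitz, the Levinson-Durbin recursion solves these systems for increasing ℓ in O(ℓ²) operations and produces the PACF values as a by-product. A pure-Python sketch of the same recursion used by the MATLAB code in Example 3 (function name mine), checked against the closed-form MA(1) PACF quoted in that example:

```python
def pacf_from_autocov(ac, max_lag):
    """Durbin recursion: PACF alpha(0..max_lag) from autocovariances ac(0..max_lag)."""
    pacf = [1.0]
    phi = []        # current forward-predictor coefficients
    pv = ac[0]      # current prediction-error variance
    for m in range(1, max_lag + 1):
        num = ac[m] - sum(phi[k] * ac[m - 1 - k] for k in range(m - 1))
        k_m = num / pv                  # reflection coefficient = alpha(m)
        phi = [phi[k] - k_m * phi[m - 2 - k] for k in range(m - 1)] + [k_m]
        pv *= 1.0 - k_m * k_m           # error variance shrinks by (1 - k_m^2)
        pacf.append(k_m)
    return pacf

# MA(1) check: alpha(l) = -(-b1)^l (1 - b1^2) / (1 - b1^(2(l+1)))
b1 = 0.9
ac = [1 + b1 ** 2, b1, 0.0, 0.0]
alpha = pacf_from_autocov(ac, 3)
```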

Partial Autocorrelation: Alternative Definition

Define P[x(n)|x(1), …, x(n−1)] as the minimum mean square error linear predictor of x(n) given {x(1), …, x(n−1)}:

    x̂(n) = P[x(n)|x(n−1), …, x(1)] = Σ_{k=1}^{n−1} ck x(n − k)

where

    ck = argmin E[(x(n) − x̂(n))²]

Similarly, define P[x(0)|x(1), …, x(n−1)] as the minimum mean square error linear predictor of x(0) given {x(1), …, x(n−1)}:

    x̂(0) = P[x(0)|x(n−1), …, x(1)] = Σ_{k=1}^{n−1} dk x(n − k)

Partial Autocorrelation: Alternative Definition & Properties

Then the PACF can be defined as the correlation between the residuals

    x̃n(n) ≜ x(n) − x̂1:n−1(n) = x(n) − P[x(n)|x(n−1), …, x(1)]
    x̃n(0) ≜ x(0) − x̂1:n−1(0) = x(0) − P[x(0)|x(n−1), …, x(1)]

    α(n) ≜ E[(x(n) − x̂n(n))(x(0) − x̂n(0))*] / sqrt( E[|x(n) − x̂n(n)|²] E[|x(0) − x̂n(0)|²] )

- One can think of the PACF as a measure of the correlation of what has not already been explained (the residuals)
- Like the ACF, it depends only on second order properties

Partial Autocorrelation Properties & Intuition

    α(ℓ) = { 1, ℓ = 0;    −aℓ^(ℓ), ℓ > 0;    α*(−ℓ), ℓ < 0 }

- Intuitively you might expect |α(ℓ)| ≤ |ρ(ℓ)|, but this is not true in general
- Like ρ(ℓ), the PACF is bounded: −1 ≤ α(ℓ) ≤ 1
- White noise, x(n) ~ WN(0, σx²): αx(ℓ) = δ(ℓ)
- The PACF of an AR(P) process is finite: αx(ℓ) = 0 for |ℓ| > P
  - Thus, if the estimated PACF is very small for large lags, an AR(P) model may be appropriate
- Surprisingly, the PACF is an infinite sequence for MA(Q) processes and ARMA(P, Q) processes

Example 3: MA(1) ACF and PACF

Plot the ACF and PACF of an MA(1) model with b1 = 0.9. Hint: the true PACF is given by

    α(ℓ) = −(−b1)^ℓ (1 − b1²) / (1 − b1^{2(ℓ+1)})

Example 3: MATLAB Code

L  = 10;                              % Length of autocorrelation calculated
b1 = 0.9;                             % Coefficient
sw = 1;                               % White noise power
ac = sw*[(1+b1^2);b1;zeros(L-1,1)];   % Autocovariance = autocorrelation

l   = 0:L;
acf = ac/ac(1);
h   = stem(l,acf);
set(h(1),'MarkerFaceColor','b');
set(h(1),'MarkerSize',4);
ylabel('\rho(l)');
xlabel('Lag (l)');
xlim([0 L]);
ylim([-1 1]);
box off;

Example 3: MA(1) ACF

[Figure: stem plot of the MA(1) ACF, ρ(l) versus lag l for 0 ≤ l ≤ 10.]

Example 3: Relevant MATLAB Code

pc = zeros(L+1,1);
mc = zeros(L+1,1);
pv = zeros(L+1,1);

pc(1) = 1;
mc(1) = 1;
pv(1) = ac(1);

pc(2) = ac(2)/ac(1);
mc(2) = pc(2);
pv(2) = ac(1)*(1-pc(2).^2);

for c1 = 3:L+1,
   pc(c1)     = (ac(c1) - mc(2:c1-1)'*ac((c1-1):-1:2))/pv(c1-1);
   mc(2:c1-1) = mc(2:c1-1) - pc(c1)*mc(c1-1:-1:2);
   mc(c1)     = pc(c1);
   pv(c1)     = pv(c1-1)*(1-pc(c1).^2);
end;

Example 3: MA(1) PACF

[Figure: stem plot of the MA(1) PACF, α(l) versus lag l for 0 ≤ l ≤ 10.]

Example 3: Relevant MATLAB Code Continued

l = 0:L;
h = stem(l,pc);
set(h(1),'MarkerFaceColor','b');
set(h(1),'MarkerSize',4);
ylabel('\alpha(l)');
xlabel('Lag (l)');
xlim([0 L]);
ylim([-1 1]);
box off;

Example 4: AR(1) ACF and PACF

Plot the ACF and PACF of an AR(1) process with a = [1 0.9].

Example 4: Relevant MATLAB Code

L  = 10;                 % Length of autocorrelation calculated
a1 = 0.9;                % Coefficient
sw = 1;                  % White noise power
ac = zeros(L+1,1);

ac(1) = sw/(1-a1^2);
for c1=2:L+1,
   ac(c1) = -a1*ac(c1-1);
end;

Example 4: AR(1) ACF

[Figure: stem plot of the AR(1) ACF, ρ(l) versus lag l for 0 ≤ l ≤ 10.]

Example 4: AR(1) PACF

[Figure: stem plot of the AR(1) PACF, α(l) versus lag l for 0 ≤ l ≤ 10.]

Autocovariance Estimation

- We've seen that the second-order statistics are a handy, though incomplete, characterization of WSS stochastic processes
- We would like to estimate these properties from realizations
  - Single signal: γx(ℓ), rx(ℓ), ρx(ℓ), Rx(e^jω)
  - Two or more signals: γyx(ℓ), ryx(ℓ), Ryx(e^jω), G²yx(e^jω)
- What are the best estimators?

Autocovariance Estimation Options

In practical applications, we only have a real finite data record {x(n)}, n = 0, …, N − 1. There are two popular estimators of autocovariance worth considering: unbiased and biased.

Unbiased:

    γ̂u(ℓ) ≜ (1/(N − |ℓ|)) Σ_{n=0}^{N−1−|ℓ|} [x(n + |ℓ|) − μ̂x][x(n) − μ̂x]*        |ℓ| < N

and γ̂u(ℓ) = 0 for |ℓ| ≥ N. Here μ̂x is the sample average of the sequence, defined as

    μ̂x ≜ (1/N) Σ_{n=0}^{N−1} x(n)

- Discussed briefly in the book

Unbiased Autocovariance Estimation

    γ̂u(ℓ) ≜ (1/(N − |ℓ|)) Σ_{n=0}^{N−1−|ℓ|} [x(n + |ℓ|) − μ̂x][x(n) − μ̂x]*

- The estimate has even symmetry: γ̂u(ℓ) = γ̂u(−ℓ)
- At longer lags, we have fewer terms to estimate the autocovariance
- We have no way to estimate γ(ℓ) for |ℓ| ≥ N
- We know that each pair {x(n + |ℓ|), x(n)} for all n has the same distribution because the process is assumed WSS and ergodic
- This is a natural estimator that we know converges asymptotically (N → ∞)
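The two estimators differ only in the divisor. A small Python sketch of both (function name mine), which can be checked against a hand computation on a short record:

```python
def autocov(x, lag, biased=True):
    """Sample autocovariance at a nonnegative lag, with divisor N (biased)
    or N - lag (unbiased)."""
    n_samp = len(x)
    mean = sum(x) / n_samp
    acc = sum((x[n + lag] - mean) * (x[n] - mean) for n in range(n_samp - lag))
    return acc / (n_samp if biased else n_samp - lag)

x = [1.0, 2.0, 3.0, 4.0]           # mean 2.5, deviations [-1.5, -0.5, 0.5, 1.5]
gu = autocov(x, 1, biased=False)   # divisor N - |l| = 3
gb = autocov(x, 1, biased=True)    # divisor N = 4
```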

Unbiased Autocovariance Estimation

    γ̂u(ℓ) ≜ (1/(N − |ℓ|)) Σ_{n=0}^{N−1−|ℓ|} [x(n + |ℓ|) − μ̂x][x(n) − μ̂x]*

- If we used the true mean μx instead of μ̂x, γ̂u(ℓ) would be unbiased
- When we use μ̂x, the estimate is asymptotically unbiased
  - The bias is O(1/N)
  - Much smaller than the variance, so it may be ignored

Biased Autocovariance Estimation

    γ̂b(ℓ) ≜ (1/N) Σ_{n=0}^{N−1−|ℓ|} [x(n + |ℓ|) − μ̂x][x(n) − μ̂x]*        |ℓ| < N

- Our book (and most other books) lists a different estimate
- This estimate uses a divisor of N rather than (N − |ℓ|):

    γ̂b(ℓ) = ((N − |ℓ|)/N) γ̂u(ℓ)

- If we ignore the effect of estimating μx, the bias is obvious:

    E[γ̂b(ℓ)] = ((N − |ℓ|)/N) γ(ℓ)

- The bias of this estimator is larger than that of the unbiased estimator
- Some claim that in general, the biased estimator has a smaller MSE
  - The variance must therefore be much smaller

Biased versus Unbiased Estimators

    γ̂b(ℓ) = ((N − |ℓ|)/N) γ̂u(ℓ) = (1/N) Σ_{n=0}^{N−1−|ℓ|} [x(n + |ℓ|) − μ̂x][x(n) − μ̂x]*

- The estimators are often called the sample autocovariance functions
- Most software and books prefer the biased estimate
- Why prefer a biased estimate to an unbiased estimate?
  - Our goal is to estimate the sequence, not just γ(ℓ) for a specific lag ℓ
- The key advantage of γ̂b(ℓ) is that it is positive semi-definite (i.e., nonnegative definite)
- There are many reasons why this property is important
  - We know the true autocovariance has this property
  - Autoregressive models built with positive-definite estimates of γ(ℓ) are stable
  - Most estimators of power spectral density R(e^jω) are nonnegative if they are based on a positive-definite estimate of γ(ℓ)
- γ̂u(ℓ) may be positive definite for a particular sequence, but it is not guaranteed in general

Biased versus Unbiased Estimators


N ||
b () =
u ()
N

N ||
b () =
u ()
N

[x(n + ||)
x ] [x(n)
x ]

n=0

If () is small for large lags, then the bias is also small

J. McNames

Portland State University

Ver. 1.09

38

N 1||

[x(n + ||)
x ] [x(n)
x ]

n=0

In most cases, the biased model has smaller MSE, though it has
not been proven rigorously

The biased estimator has considerably less variance at large lags


(the tail)

Unbiased

Autocorrelation

In general
At small lags, there is little dierence between the two
estimators
At large lags, the larger bias of the biased model is favorably
traded for reduced variance

Although b () is biased,


The bias is small at small lags
For large lags, the bias is towards 0: () 0 as 
This is also a property of the true autocorrelation

Biased

ECE 538/638

Biased is Better?

N 1||

Portland State University

For the remainder of the class will use the biased estimator, unless
otherwise noted () = b ()

var{
b ()} = O(1/N )
var{
u ()} = O(1/(N ||))

ECE 538/638

Autocorrelation

Ver. 1.09

39

J. McNames

Portland State University

ECE 538/638

Autocorrelation

Ver. 1.09

40

Estimated Autocorrelation Covariance

When the mean is zero, the estimators reduce to

    γ̂b(ℓ) = r̂b(ℓ) = (1/N) Σ_{n=0}^{N−1−|ℓ|} x(n + |ℓ|) x(n)
    γ̂u(ℓ) = r̂u(ℓ) = (1/(N − |ℓ|)) Σ_{n=0}^{N−1−|ℓ|} x(n + |ℓ|) x(n)

- As with all estimators, we would like to have confidence intervals
- These are hard to obtain in general; we need more assumptions
  - Stationary up to order four: E[x(n)x(n + k)x(n + ℓ)x(n + m)] = f(k, ℓ, m)
  - Mean is zero, μx = 0, so it does not need to be estimated
- The bias is

    E[r̂b(ℓ)] = ((N − |ℓ|)/N) r(ℓ)        E[r̂u(ℓ)] = r(ℓ)

Estimated Autocorrelation Variance

- The covariance of r̂(ℓ) is complicated and not usable in practice
  - Depends on the fourth joint cumulant of {x(n), x(n + k), x(n + ℓ), x(n + m)}
  - Depends on the true unknown autocorrelation
- If the process is Gaussian, then the fourth joint cumulant is zero

Estimated Autocorrelation Variance Continued

If x(n) is a Gaussian random process,

    var{r̂b(ℓ)} = (1/N) Σ_{m=−(N−ℓ)+1}^{N−ℓ−1} ((N − |m| − ℓ)/N) [r²(m) + r(m + ℓ) r(m − ℓ)]

- The same applies to the unbiased estimate, with a divisor of 1/(N − |ℓ|) instead of 1/N
- This is still problematic because we don't know the true r(ℓ) in most applications
  - If we did, we wouldn't need to estimate it!
- This is often what prevents us from making desired inferences about our estimators:
  - Desired properties of the sampling distribution depend on unknown properties of the random process

Estimated ACF

The natural estimate of the ACF is

    ρ̂b(ℓ) ≜ γ̂b(ℓ)/γ̂(0)        ρ̂u(ℓ) ≜ γ̂u(ℓ)/γ̂(0)

- The same tradeoffs exist between the biased and unbiased estimates
- Also, |ρ̂b(ℓ)| ≤ 1 for all ℓ
  - Not true in general for ρ̂u(ℓ)
  - They are the same at ℓ = 0
- Often called the sample autocorrelation function
- Again, the bias, covariance, and variance of the estimators are complicated and based on unknown properties
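The boundedness of the biased sample ACF follows from the Cauchy-Schwarz inequality applied to the zero-padded record. A Python sketch demonstrating it on an arbitrary short sequence (function name and data are mine):

```python
def sample_acf(x, max_lag):
    """Biased sample ACF: rho_b(l) = gamma_b(l)/gamma_b(0), divisor N."""
    n_samp = len(x)
    mean = sum(x) / n_samp
    gamma = [sum((x[n + l] - mean) * (x[n] - mean)
                 for n in range(n_samp - l)) / n_samp
             for l in range(max_lag + 1)]
    return [g / gamma[0] for g in gamma]

x = [2.0, -1.0, 3.5, 0.5, -2.0, 1.0, 4.0, -0.5]
rho = sample_acf(x, 4)
# rho[0] == 1 and |rho[l]| <= 1 at every lag for the biased estimate
```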

Estimated ACF Variance

Again, if x(n) is a Gaussian process, then

    var{ρ̂b(ℓ)} ≈ (1/N) Σ_{m=−∞}^{∞} [ρ²(m) + ρ(m + ℓ)ρ(m − ℓ) + 2ρ²(ℓ)ρ²(m) − 4ρ(ℓ)ρ(m)ρ(m − ℓ)]

- The fourth cumulant is also absent if x(n) is generated by a linear process with independent inputs
- The sample ACF ρ̂(ℓ) will generally have more correlation than the true ρ(ℓ)
  - It will generally be less damped and decay more slowly than ρ(ℓ)
  - This applies to the estimated autocovariance and autocorrelations as well

Confidence Intervals

Let x(n) be an IID sequence. Then

    ρ(0) = 1        ρ(ℓ) = 0,  |ℓ| > 0
    cov{ρ̂(ℓ), ρ̂(ℓ + m)} ≈ 0,  m ≠ 0
    var{ρ̂b(ℓ)} ≈ 1/N,  |ℓ| > 0
    var{ρ̂u(ℓ)} ≈ N/(N − |ℓ|)²,  |ℓ| > 0

- In general, it is not possible to obtain confidence intervals for the estimated ACF because the variance of the estimator depends on the true ACF
- Instead, it is common practice to plot the confidence intervals of a purely random process
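The white-noise confidence bounds follow directly from var{ρ̂b(ℓ)} ≈ 1/N and the normal approximation. A stdlib-only Python sketch (function name mine; `statistics.NormalDist` plays the role of MATLAB's `norminv`):

```python
from math import sqrt
from statistics import NormalDist

def wn_ci_bound(n_samp, cl=0.95):
    """Half-width of the white-noise confidence interval for the sample ACF,
    using var{rho_b(l)} ~ 1/N for an IID sequence."""
    z = NormalDist().inv_cdf((1 + cl) / 2)   # two-sided critical value
    return z / sqrt(n_samp)

bound = wn_ci_bound(100)   # lags with |rho_hat(l)| inside +/- bound are
                           # consistent with white noise at this level
```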

Confidence Intervals Continued

- If N is large enough, the central limit theorem applies and ρ̂b(ℓ) is approximately normal
- In this case, we can use the normal CDF to plot confidence intervals for an IID sequence
- These are proportional to sqrt(var{ρ̂b(ℓ)})

Partial Autocorrelation Estimation

- There are similar issues surrounding partial autocorrelation
- However, in this case we always use the biased estimate of the autocorrelation to estimate the PACF
  - This is necessary, in this case, to ensure that the AR models are bounded
- Less is known about the statistics of the PACF (mean, variance, and confidence intervals)
- However, for reasons similar to those for the ACF, for a WN process the CLT applies and we can use the same confidence intervals as for the ACF

Example 5: 1st Order Autoregressive

Find the autocorrelation function of a 1st order autoregressive process, AR(1):

    x(n) = −a1 x(n − 1) + w(n)

where w(n) ~ WN(0, σw²). Estimate the ACF using the biased and unbiased estimates for N = 100. Do so several times for different values of a1.

Example 5: AR(1) Signal

[Figure: realization of the AR(1) signal x(n) versus sample n; N = 100, a1 = 0.9.]

Example 5: AR(1) Signals, ACFs, and PACFs

[Figures: for each coefficient (N = 100; a1 = 0.9, 0.0, 0.5, and 0.9): a realization x(n) versus sample n; the estimated ACF ρ̂(l) versus lag l for the biased and the unbiased estimator; and the estimated PACF α̂(l) versus lag l, each with white-noise confidence intervals.]

Example 5: MATLAB Code

L  = 90;                          % Length of autocorrelation calculated
a1 = 0.9;                         % Coefficient
sw = 1;                           % White noise power
ac = zeros(L+1,1);
N  = 100;
cl = 99;                          % Confidence level
np = norminv((1-cl/100)/2);       % Find corresponding lower percentile
ac(1) = sw/(1-a1^2);
for c1=2:L+1,
   ac(c1) = -a1*ac(c1-1);
end;
acf = ac/ac(1);
w = randn(N,1);
a = [1 a1];
x = filter(1,a,w);

Example 5: AR(1) PACF

[Figure: stem plot of the estimated AR(1) PACF, α̂(l) versus lag l; N = 100, a1 = 0.9.]
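The simulation half of Example 5 can be sketched in Python as well: generate the AR(1) realization and compare the biased sample ACF at lag 1 with its true value ρ(1) = −a1 (a stand-in for the `randn`/`filter` calls above; the seed and record length are mine, and a longer record is used so the estimate settles near the true value):

```python
import random

# Simulate x(n) = -a1*x(n-1) + w(n), w(n) ~ N(0, 1), then estimate rho(1).
random.seed(0)
a1, n_samp = 0.9, 2000
x, prev = [], 0.0
for _ in range(n_samp):
    prev = -a1 * prev + random.gauss(0.0, 1.0)
    x.append(prev)

mean = sum(x) / n_samp
g0 = sum((v - mean) ** 2 for v in x) / n_samp                              # gamma_b(0)
g1 = sum((x[n + 1] - mean) * (x[n] - mean) for n in range(n_samp - 1)) / n_samp
rho1 = g1 / g0   # should be close to the true rho(1) = -0.9 for a long record
```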

Summary

- ACF and PACF are useful characterizations of WSS random processes
- They can help select an appropriate model
  - MA: finite ACF
  - AR: finite PACF
- AP/AR are often the preferred characterizations because we can solve/estimate the model parameters by solving a set of linear equations (Yule-Walker)
- Biased estimates of r(ℓ), γ(ℓ), and/or ρ(ℓ) are generally preferred to the unbiased estimates
  - Less variance (always), and lower MSE (sometimes)
  - Positive definite (the PSD estimate is therefore also nonnegative)
- Bias is known, but the variance of the estimates is generally unknown
  - Loosely, confidence intervals for WN are used instead