
Partial Autocorrelation
For a given stochastic process $(X_t)$ one is often interested in the connection between two random variables of the process at different points in time. One way to measure a linear relationship is with the ACF, i.e., the correlation between these two variables. Another way to measure the connection between $X_t$ and $X_{t+k}$ is to filter out of $X_t$ and $X_{t+k}$ the linear influence of the random variables $X_{t+1}, \ldots, X_{t+k-1}$ that lie in between, and then calculate the correlation of the transformed random variables. This is called the partial autocorrelation.

Definition (Partial autocorrelation)

The partial autocorrelation of $k$-th order is defined as

$$\phi_{kk} = \operatorname{Corr}\bigl(X_t - \mathrm{P}(X_t \mid X_{t+1}, \ldots, X_{t+k-1}),\ X_{t+k} - \mathrm{P}(X_{t+k} \mid X_{t+1}, \ldots, X_{t+k-1})\bigr), \qquad (12.17)$$

where $\mathrm{P}(X_t \mid X_{t+1}, \ldots, X_{t+k-1})$ is the best linear projection of $X_t$ on $(X_{t+1}, \ldots, X_{t+k-1})$, i.e.,

$$\mathrm{P}(X_t \mid X_{t+1}, \ldots, X_{t+k-1}) = \gamma^{\top} \Gamma^{-1} (X_{t+1}, \ldots, X_{t+k-1})^{\top},$$

with $\Gamma = \operatorname{Var}\{(X_{t+1}, \ldots, X_{t+k-1})^{\top}\}$ as the covariance matrix of the regressors and $\gamma = \operatorname{Cov}\{X_t, (X_{t+1}, \ldots, X_{t+k-1})^{\top}\}$ as the vector of covariances between $X_t$ and $(X_{t+1}, \ldots, X_{t+k-1})$.
The `best linear projection' is understood in the sense of minimizing the mean squared error.

An equivalent definition is the solution $\phi_{kk}$ of the system of equations

$$R_k \phi_{(k)} = \rho_{(k)},$$

with $R_k = (\rho_{|i-j|})_{i,j=1,\ldots,k}$, $\phi_{(k)} = (\phi_{k1}, \ldots, \phi_{kk})^{\top}$ and $\rho_{(k)} = (\rho_1, \ldots, \rho_k)^{\top}$. These are the Yule-Walker equations for an AR($k$) process. The last coefficient, $\phi_{kk}$, is the partial autocorrelation of order $k$, as defined above. Since only this coefficient is of interest in this context, the system of equations can be solved for $\phi_{kk}$ using Cramer's rule. We get

$$\phi_{kk} = \frac{|R_k^*|}{|R_k|},$$

where $R_k^*$ is equal to the matrix $R_k$ in which the $k$-th column is replaced with $\rho_{(k)}$. Here $|\cdot|$ indicates the determinant. Since this can be applied for various orders $k$, in the end we obtain a partial autocorrelation function (PACF) $\phi_{kk}$, $k = 1, 2, \ldots$. The PACF can be graphically displayed for a given stochastic process, similar to the ACF, as a function of the order $k$. This is called the partial autocorrelogram.
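The ratio of determinants from Cramer's rule can be computed directly. The following is a minimal NumPy sketch; the function name `pacf_cramer` and the example autocorrelation values are my own illustration, not from the text:

```python
import numpy as np

def pacf_cramer(rho, k):
    """Order-k partial autocorrelation phi_kk = |R_k^*| / |R_k|.

    rho is the sequence of autocorrelations [rho_1, rho_2, ...]
    (rho_0 = 1 is implied). R_k is the Toeplitz matrix with entries
    rho_|i-j|; R_k^* is R_k with its k-th column replaced by
    (rho_1, ..., rho_k)'.
    """
    r = np.concatenate(([1.0], np.asarray(rho, dtype=float)))
    R = np.array([[r[abs(i - j)] for j in range(k)] for i in range(k)])
    R_star = R.copy()
    R_star[:, k - 1] = r[1:k + 1]
    return np.linalg.det(R_star) / np.linalg.det(R)

# For k = 2 this reproduces (rho_2 - rho_1^2) / (1 - rho_1^2):
print(pacf_cramer([0.5, 0.3], 2))
```

For $k = 1$ the matrices are scalars and the ratio reduces to $\rho_1$, as expected.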

From the definition of the PACF it immediately follows that there is no difference between the PACF and the ACF of order 1:

$$\phi_{11} = \rho_1.$$

For order 2 we have

$$\phi_{22} = \frac{\begin{vmatrix} 1 & \rho_1 \\ \rho_1 & \rho_2 \end{vmatrix}}{\begin{vmatrix} 1 & \rho_1 \\ \rho_1 & 1 \end{vmatrix}} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2}. \qquad (12.18)$$

Example (AR(1))

The AR(1) process $X_t = \alpha X_{t-1} + \varepsilon_t$ has the ACF $\rho_\tau = \alpha^\tau$. For the PACF we have $\phi_{11} = \rho_1 = \alpha$ and

$$\phi_{22} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2} = \frac{\alpha^2 - \alpha^2}{1 - \alpha^2} = 0,$$

and $\phi_{kk} = 0$ for all $k > 1$. This is plausible since the last coefficient of an AR($k$) model for this process is zero for all $k > 1$. For $k = 2$ we illustrate the equivalence with the definition in (12.17):

From $X_{t+2} = \alpha X_{t+1} + \varepsilon_{t+2}$ we directly obtain $\mathrm{P}(X_{t+2} \mid X_{t+1}) = \alpha X_{t+1}$ with $X_{t+2} - \mathrm{P}(X_{t+2} \mid X_{t+1}) = \varepsilon_{t+2}$. From the `backward regression' $X_t = \alpha X_{t+1} + \eta_t$ with white noise $\eta_t$ it further follows that $\mathrm{P}(X_t \mid X_{t+1}) = \alpha X_{t+1}$ with $X_t - \mathrm{P}(X_t \mid X_{t+1}) = \eta_t$. For $|\alpha| < 1$ the process is covariance-stationary and it holds that $\gamma_0 = \operatorname{Var}(X_t) = \sigma^2 / (1 - \alpha^2)$ and $\rho_1 = \alpha$. We obtain

$$\operatorname{Cov}(\varepsilon_{t+2}, \eta_t) = \operatorname{Cov}(\varepsilon_{t+2}, X_t - \alpha X_{t+1}) = 0$$

and

$$\operatorname{Var}(\eta_t) = \operatorname{Var}(X_t - \alpha X_{t+1}) = \gamma_0 (1 + \alpha^2) - 2\alpha\gamma_1 = \gamma_0 (1 - \alpha^2) = \sigma^2.$$

With this we get for the partial autocorrelation of 2nd order

$$\phi_{22} = \frac{\operatorname{Cov}(\varepsilon_{t+2}, \eta_t)}{\sqrt{\operatorname{Var}(\varepsilon_{t+2}) \operatorname{Var}(\eta_t)}} = 0,$$

which corresponds to the result in (12.18). For the AR(1) process it holds that $\rho_2 = \rho_1^2$ and thus $\phi_{22} = 0$.

It holds in general for AR($p$) processes that $\phi_{kk} = 0$ for all $k > p$. In Figure 11.5 the PACF of an AR(2) process is displayed, using the same parameters as in Figure 11.4.

Figure 11.5: PACF of an AR(2) process for four parameter settings (top left, top right, bottom left, bottom right).
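The cutoff of the AR(1) PACF can be checked numerically by solving the Yule-Walker system order by order. This is a sketch in plain NumPy; the value $\alpha = 0.7$ is an arbitrary illustrative choice:

```python
import numpy as np

# Theoretical ACF of an AR(1) process: rho_tau = alpha ** tau.
# alpha = 0.7 is an arbitrary illustrative value with |alpha| < 1.
alpha = 0.7

pacf = []
for k in range(1, 6):
    # Yule-Walker system R_k phi = (rho_1, ..., rho_k)'
    R = np.array([[alpha ** abs(i - j) for j in range(k)] for i in range(k)])
    rhs = np.array([alpha ** j for j in range(1, k + 1)])
    pacf.append(np.linalg.solve(R, rhs)[-1])  # last coefficient = phi_kk

print(pacf)  # phi_11 = alpha; all higher phi_kk are numerically zero
```

The last coefficient equals $\alpha$ for $k = 1$ and vanishes for every $k \geq 2$, matching the derivation above.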

Example (MA(1))

For an MA(1) process $X_t = \beta \varepsilon_{t-1} + \varepsilon_t$ with $\operatorname{Var}(\varepsilon_t) = \sigma^2$ it holds that $\rho_1 = \beta / (1 + \beta^2)$ and $\rho_\tau = 0$ for all $\tau > 1$. For the partial autocorrelations we obtain $\phi_{11} = \rho_1$ and

$$\phi_{22} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2} = \frac{-\rho_1^2}{1 - \rho_1^2} = -\frac{\beta^2}{1 + \beta^2 + \beta^4}. \qquad (12.19)$$

For an MA(1) process with $\beta \neq 0$ it strictly holds that $\phi_{22} < 0$. If one were to continue the calculation with $k > 2$, one could determine that the partial autocorrelations will not reach zero.

Figure 11.6 shows the PACF of an MA(2) process. In general, for an MA($q$) process the PACF does not cut off abruptly, in contrast to the autoregressive process, for which the PACF is exactly zero beyond order $p$. Compare the PACF to the ACF in Figure 11.2. This is thus a possible criterion for the specification of a linear model.
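The non-vanishing MA(1) PACF can be traced with the Durbin-Levinson recursion, which solves the Yule-Walker system order by order. The recursion itself is standard; the function name and the choice $\beta = 0.5$ are my own illustrative assumptions:

```python
import numpy as np

def pacf_durbin_levinson(rho, kmax):
    """phi_11, ..., phi_{kmax,kmax} from autocorrelations rho = [rho_1, rho_2, ...]
    via the Durbin-Levinson recursion (order-by-order Yule-Walker solution)."""
    rho = np.asarray(rho, dtype=float)
    phi_prev = np.array([rho[0]])   # order-(k-1) coefficients phi_{k-1,1..k-1}
    pacf = [rho[0]]                 # phi_11 = rho_1
    for k in range(2, kmax + 1):
        num = rho[k - 1] - phi_prev @ rho[k - 2::-1]
        den = 1.0 - phi_prev @ rho[:k - 1]
        phi_kk = num / den
        phi_prev = np.concatenate((phi_prev - phi_kk * phi_prev[::-1], [phi_kk]))
        pacf.append(phi_kk)
    return pacf

# MA(1) with beta = 0.5: rho_1 = beta / (1 + beta**2) = 0.4, rho_tau = 0 otherwise.
beta = 0.5
rho = [beta / (1 + beta**2)] + [0.0] * 9
print(pacf_durbin_levinson(rho, 10))  # signs alternate; magnitudes shrink but never hit zero
```

The second value reproduces the closed form (12.19), $-\beta^2 / (1 + \beta^2 + \beta^4)$, and the later coefficients illustrate the gradual decay of the MA(1) PACF.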

Figure 11.6: PACF of an MA(2) process for four parameter settings (top left, top right, bottom left, bottom right).
