For a given stochastic process $(X_t)$ one is often interested in the connection between two random variables of the process at different points in time. One way to measure a linear relationship is with the ACF, i.e., the correlation between these two variables. Another way to measure the connection between $X_t$ and $X_{t+k}$ is to filter out of $X_t$ and $X_{t+k}$ the linear influence of the random variables $X_{t+1}, \ldots, X_{t+k-1}$ that lie in between, and then to calculate the correlation of the transformed random variables. This is called the partial autocorrelation. It can be computed as
$$
\phi_{kk} = \frac{\det P_k^*}{\det P_k} \qquad (12.17)
$$
with
$$
P_k = \begin{pmatrix}
1 & \rho_1 & \cdots & \rho_{k-1} \\
\rho_1 & 1 & \cdots & \rho_{k-2} \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{k-1} & \rho_{k-2} & \cdots & 1
\end{pmatrix},
$$
where $P_k^*$ equals $P_k$ with the last column replaced by $(\rho_1, \ldots, \rho_k)^\top$ and $\det$ indicates the determinant. Since this can be applied for various orders $k$, in the end we obtain the partial autocorrelation function (PACF) $k \mapsto \phi_{kk}$. The PACF can be graphically displayed for a given stochastic process, similar to the ACF, as a function of the order $k$. This is called the partial autocorrelogram.
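The determinant formula (12.17) can be evaluated directly once the autocorrelations are known. The following helper is a sketch, not from the text: it builds the Toeplitz matrix $P_k$ from given values $\rho_0, \rho_1, \ldots$ and computes $\phi_{kk} = \det P_k^* / \det P_k$, here applied to the ACF of an AR(1) process with an illustrative $\alpha = 0.5$.

```python
# Sketch (not from the text): the partial autocorrelation phi_kk via the
# determinant formula (12.17), given autocorrelations rho[0], rho[1], ...
import numpy as np

def pacf_determinant(rho, k):
    """phi_kk = det(P_k^*)/det(P_k), where P_k is the k x k Toeplitz matrix
    of autocorrelations and P_k^* is P_k with its last column replaced by
    (rho_1, ..., rho_k)'. Requires rho[0] = 1 and len(rho) > k."""
    P = np.array([[rho[abs(i - j)] for j in range(k)] for i in range(k)])
    P_star = P.copy()
    P_star[:, -1] = rho[1:k + 1]  # replace last column by (rho_1, ..., rho_k)'
    return np.linalg.det(P_star) / np.linalg.det(P)

# Illustrative AR(1) with alpha = 0.5, whose ACF is rho_tau = alpha^tau
alpha = 0.5
rho = [alpha ** tau for tau in range(10)]
print(pacf_determinant(rho, 1))  # equals rho_1 = alpha
print(pacf_determinant(rho, 2))  # equals 0 for an AR(1) process
```

Any other stationary ACF can be plugged in the same way; only the vector `rho` changes.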
From the definition of the PACF it immediately follows that there is no difference between the PACF and the ACF of order 1:
$$
\phi_{11} = \operatorname{Corr}(X_t, X_{t+1}) = \rho_1. \qquad (12.18)
$$
The AR(1) process $X_t = \alpha X_{t-1} + \varepsilon_t$ has the ACF $\rho_\tau = \alpha^\tau$. For the PACF we have $\phi_{11} = \rho_1 = \alpha$ and
$$
\phi_{22} = \frac{\begin{vmatrix} 1 & \rho_1 \\ \rho_1 & \rho_2 \end{vmatrix}}{\begin{vmatrix} 1 & \rho_1 \\ \rho_1 & 1 \end{vmatrix}}
= \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2} = \frac{\alpha^2 - \alpha^2}{1 - \alpha^2} = 0,
$$
and $\phi_{kk} = 0$ for all $k > 1$. This is plausible since the last coefficient of an AR($k$) model for this process is zero for all $k > 1$. For $k = 2$ we illustrate the equivalence with Definition 11.3: from $X_{t+2} = \alpha X_{t+1} + \varepsilon_{t+2}$ it follows that
$$
X_{t+2} - P(X_{t+2} \mid X_{t+1}) = \varepsilon_{t+2}.
$$
From the 'backward regression' $X_t = \alpha X_{t+1} + \eta_t$ with white noise $\eta_t$ it further follows that
$$
X_t - P(X_t \mid X_{t+1}) = \eta_t,
$$
with $\eta_t$ uncorrelated with $X_{t+1}$. For $|\alpha| < 1$ the process is covariance-stationary and it holds that $\operatorname{Cov}(\varepsilon_{t+2}, X_t) = 0$ and $\operatorname{Cov}(\varepsilon_{t+2}, X_{t+1}) = 0$. We obtain
$$
\operatorname{Cov}(\varepsilon_{t+2}, \eta_t) = \operatorname{Cov}(\varepsilon_{t+2}, X_t - \alpha X_{t+1}) = 0
$$
and
$$
\phi_{22} = \operatorname{Corr}\bigl(X_{t+2} - P(X_{t+2} \mid X_{t+1}),\; X_t - P(X_t \mid X_{t+1})\bigr) = \operatorname{Corr}(\varepsilon_{t+2}, \eta_t) = 0.
$$
It holds in general for AR($p$) processes that $\phi_{kk} = 0$ for all $k > p$. In Figure 11.5 the PACF of an AR(2) process is displayed using the parameters as in Figure 11.4.
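This cutoff can be checked numerically. The sketch below is not from the text: it uses the Durbin–Levinson recursion, an equivalent way to obtain $\phi_{kk}$ from the autocorrelations, on an AR(2) process with illustrative parameters $a_1 = 0.5$, $a_2 = 0.3$, whose ACF follows from the Yule–Walker equations.

```python
# Sketch (assumption: not the text's method): PACF of an AR(2) process via
# the Durbin-Levinson recursion, illustrating phi_kk = 0 for all k > p = 2.

def pacf_durbin_levinson(rho, kmax):
    """Compute phi_11, ..., phi_{kmax,kmax} from autocorrelations rho[0..kmax]."""
    pacf, phi_prev = [], []
    for k in range(1, kmax + 1):
        if k == 1:
            a = rho[1]
            phi_prev = [a]
        else:
            num = rho[k] - sum(phi_prev[j] * rho[k - 1 - j] for j in range(k - 1))
            den = 1.0 - sum(phi_prev[j] * rho[j + 1] for j in range(k - 1))
            a = num / den
            phi_prev = [phi_prev[j] - a * phi_prev[k - 2 - j]
                        for j in range(k - 1)] + [a]
        pacf.append(a)
    return pacf

# Illustrative stationary AR(2): X_t = 0.5 X_{t-1} + 0.3 X_{t-2} + eps_t
a1, a2 = 0.5, 0.3
rho = [1.0, a1 / (1 - a2)]               # Yule-Walker: rho_1 = a1/(1 - a2)
for k in range(2, 8):
    rho.append(a1 * rho[k - 1] + a2 * rho[k - 2])

pacf = pacf_durbin_levinson(rho, 6)
print(pacf)  # phi_11 = rho_1, phi_22 = a2, and phi_kk = 0 for k > 2
```

The recursion gives the same values as the determinant formula (12.17) but avoids computing determinants of growing matrices.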
For the MA(1) process $X_t = \beta\varepsilon_{t-1} + \varepsilon_t$ with $\rho_1 = \beta/(1+\beta^2)$ and $\rho_k = 0$ for $k \ge 2$ we obtain
$$
\phi_{11} = \frac{\beta}{1+\beta^2}
\quad\text{and}\quad
\phi_{22} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2} = \frac{-\beta^2}{1+\beta^2+\beta^4}. \qquad (12.19)
$$
For an MA(1) process it strictly holds that $|\phi_{22}| < |\phi_{11}|$. If one were to continue the calculation with $k = 3, 4, \ldots$, one could determine that the partial autocorrelations never become exactly zero.
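As a numerical check on equation (12.19), the sketch below uses the standard closed-form expression for the MA(1) partial autocorrelations, $\phi_{kk} = -(-\beta)^k(1-\beta^2)/(1-\beta^{2(k+1)})$; this formula is an outside fact not derived in the text (valid for $|\beta| \ne 1$), and $\beta = 0.8$ is an arbitrary illustrative value.

```python
# Sketch (assumption: closed form for the MA(1) PACF, not derived in the text):
# phi_kk = -(-beta)^k (1 - beta^2) / (1 - beta^(2(k+1))), |beta| != 1.
beta = 0.8  # arbitrary illustrative value

def phi_kk(beta, k):
    return -((-beta) ** k) * (1 - beta ** 2) / (1 - beta ** (2 * (k + 1)))

# k = 1 reproduces phi_11 = rho_1 = beta/(1+beta^2), and
# k = 2 reproduces equation (12.19): phi_22 = -beta^2/(1+beta^2+beta^4)
print(phi_kk(beta, 1) - beta / (1 + beta ** 2))                # ~ 0
print(phi_kk(beta, 2) + beta ** 2 / (1 + beta ** 2 + beta ** 4))  # ~ 0
# The magnitudes |phi_kk| decay geometrically but never reach zero:
print([round(phi_kk(beta, k), 4) for k in range(1, 7)])
```

The printed sequence also shows the alternating sign of $\phi_{kk}$ for $\beta > 0$, consistent with the negative $\phi_{22}$ in (12.19).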
Figure 11.6 shows the PACF of an MA(2) process. In general it holds for an MA($q$) process that the PACF does not cut off but only decays gradually towards zero, in contrast to the autoregressive process, whose PACF is exactly zero beyond lag $p$. Compare the PACF to the ACF in Figure 11.2. This is thus a possible criterion for the specification of a linear model.