• Autoregressive (AR)
• Moving Average (MA)
• Autoregressive Moving Average (ARMA)
LO-2.5, P-13.3 to 13.4 (skip 13.4.3 – 13.4.5)
Time Series Models
“Time Series” = “DT Random Signal”
Motivation for Time Series Models
Recall the result we had that related output PSD to input PSD for a
linear, time-invariant system:
[Figure: a WSS input random process ε[n] with PSD Sε(ω) drives an LTI system with impulse response h[n] and frequency response H(ω) = F{h[n]}; the output random process x[n] (the signal being modeled) is WSS with PSD Sx(ω).]
    Sx(ω) = |H(ω)|² Sε(ω)
The output of the LTI system gives a time-domain model for the process:

    x[n] = −∑_{k=1}^p a_k x[n−k] + ∑_{k=0}^q b_k ε[n−k]    (b₀ = 1)
There are three special cases that are considered for these models:
• Autoregressive (AR)
• Moving Average (MA)
• Autoregressive Moving Average (ARMA)
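As a concrete illustration, the general ARMA difference equation above can be simulated directly. Here is a minimal sketch in Python; the function name `simulate_arma` and the example coefficients are illustrative, not from the slides:

```python
import numpy as np

def simulate_arma(a, b, n, sigma=1.0, seed=0):
    """Simulate x[n] = -sum_{k=1}^p a_k x[n-k] + sum_{k=0}^q b_k eps[n-k].

    a = [a1, ..., ap]; b = [b0, b1, ..., bq] with b0 = 1;
    eps is white Gaussian noise with variance sigma^2.
    """
    rng = np.random.default_rng(seed)
    p, q = len(a), len(b) - 1
    eps = sigma * rng.standard_normal(n + q)   # q extra samples of pre-history
    x = np.zeros(n + p)                        # p zeros of pre-history
    for i in range(n):
        ar_part = -sum(a[k] * x[p + i - 1 - k] for k in range(p))
        ma_part = sum(b[k] * eps[q + i - k] for k in range(q + 1))
        x[p + i] = ar_part + ma_part
    return x[p:]

# ARMA(1,1) example: x[n] = 0.8 x[n-1] + eps[n] + 0.4 eps[n-1]  (a1 = -0.8)
x = simulate_arma(a=[-0.8], b=[1.0, 0.4], n=5000)
```

Setting `a=[]` or `b=[1.0]` specializes this to the pure MA or pure AR cases discussed next.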
Autoregressive (AR) PSD Models
If the LTI system’s model is constrained to have only poles, then:
    H(z) = 1/A(z) = 1 / (1 + ∑_{k=1}^p a_k z⁻ᵏ)    ← transfer function has only poles

    x[n] = −∑_{k=1}^p a_k x[n−k] + ε[n]    (b₀ = 1)    ← output depends "regressively" on itself

The order of the model is p: this is called an AR(p) model.

    S_AR(ω) = σ² / |1 + ∑_{k=1}^p a_k e^{−jωk}|²    ← poles give rise to PSD spikes
Moving Average (MA) PSD Models
If the LTI system's model is constrained to have only zeros, then:

    x[n] = ∑_{k=0}^q b_k ε[n−k]    (b₀ = 1)    ← transfer function has only zeros; output is an "average" of values inside a moving window

The order of the model is q: this is called an MA(q) model.

    S_MA(ω) = σ² |1 + ∑_{k=1}^q b_k e^{−jωk}|²    ← zeros give rise to PSD nulls
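Both PSD forms are easy to evaluate numerically. A small sketch (the function names are illustrative), showing the pole-peak and zero-null behavior:

```python
import numpy as np

def ar_psd(a, sigma2, omega):
    """S_AR(w) = sigma2 / |1 + sum_{k=1}^p a_k e^{-jwk}|^2, a = [a1..ap]."""
    k = np.arange(1, len(a) + 1)
    A = 1 + np.exp(-1j * np.outer(omega, k)) @ np.asarray(a)
    return sigma2 / np.abs(A) ** 2

def ma_psd(b, sigma2, omega):
    """S_MA(w) = sigma2 * |1 + sum_{k=1}^q b_k e^{-jwk}|^2, b = [b1..bq]."""
    k = np.arange(1, len(b) + 1)
    B = 1 + np.exp(-1j * np.outer(omega, k)) @ np.asarray(b)
    return sigma2 * np.abs(B) ** 2

w = np.linspace(-np.pi, np.pi, 513)   # odd count so w = 0 is sampled exactly
S_ar = ar_psd([-0.9], 1.0, w)         # pole near z = 0.9: spike at w = 0
S_ma = ma_psd([-1.0], 1.0, w)         # zero at z = 1: null at w = 0
```

The AR spectrum peaks at ω = 0 (value 1/(1 − 0.9)² = 100), while the MA spectrum is driven to zero there.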
ACF Model of a Process
So far we’ve seen relationships between:
• PSD Model
• Time-Domain Model
These models impart a corresponding model to the ACF:
Let the process obey an ARMA(p,q) model
    x[n] = −∑_{k=1}^p a_k x[n−k] + ∑_{k=0}^q b_k ε[n−k]

To get the ACF, multiply both sides of this by x[n−k] and take E{·}:

    E{x[n]x[n−k]} = −∑_{l=1}^p a_l E{x[n−l]x[n−k]} + ∑_{l=0}^q b_l E{ε[n−l]x[n−k]}

    ⇒ r_x[k] = −∑_{l=1}^p a_l r_x[k−l] + ∑_{l=0}^q b_l r_xε[k−l]    ← need the cross-correlation r_xε!
ACF Model of a Process (cont.)
To evaluate this, write x[n] as the output of the filter with input ε[n]:

    r_xε[k] = E{x[n]ε[n+k]}
            = E{ ε[n+k] ∑_{l=−∞}^{∞} h[n−l]ε[l] }
            = ∑_{l=−∞}^{∞} h[n−l] E{ε[n+k]ε[l]}
            = ∑_{l=−∞}^{∞} h[n−l] σ² δ[n+k−l]
            = σ² h[−k]

Since the filter is causal, h[−k] = 0 for k > 0, and therefore r_xε[k] = 0 for k > 0.
ACF Model of a Process (cont.)
Using this result gives the Yule-Walker Equations for ARMA:
    r_x[k] = −∑_{l=1}^p a_l r_x[k−l] + σ² ∑_{l=0}^{q−k} b_{l+k} h[l],    k = 0, 1, …, q    (ARMA)
    r_x[k] = −∑_{l=1}^p a_l r_x[k−l],    k ≥ q + 1
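The ARMA Yule-Walker equations can be checked numerically for an ARMA(1,1) model, whose impulse response has a simple closed form; the coefficient values below are illustrative:

```python
import numpy as np

# ARMA(1,1): x[n] = -a1 x[n-1] + eps[n] + b1 eps[n-1]
a1, b1, sigma2 = -0.6, 0.3, 1.0

# Impulse response: h[0] = 1, h[1] = b1 - a1, h[l] = -a1 h[l-1] for l >= 2
L = 200                        # truncation length (tail decays like |a1|^l)
h = np.empty(L)
h[0] = 1.0
h[1] = b1 - a1
for l in range(2, L):
    h[l] = -a1 * h[l - 1]

# ACF from the impulse response: r_x[k] = sigma2 * sum_l h[l] h[l+k]
r = lambda k: sigma2 * np.dot(h[: L - k], h[k:])

# Yule-Walker check, k = 1 (= q):  r[1] = -a1 r[0] + sigma2 * b1 * h[0]
lhs_q = r(1)
rhs_q = -a1 * r(0) + sigma2 * b1 * h[0]
# Yule-Walker check, k = 2 (>= q+1):  r[2] = -a1 r[1]
lhs_tail = r(2)
rhs_tail = -a1 * r(1)
```

Both sides agree to within the (negligible) truncation error of the impulse-response sum.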
ACF Model for an AR Process
Specializing to the AR case, we set q = 0 and get:
    r_x[k] = −∑_{l=1}^p a_l r_x[k−l] + σ² h[0],    k = 0
    r_x[k] = −∑_{l=1}^p a_l r_x[k−l],    k ≥ 1

Now, by the "initial value theorem" for the z-transform, we see that

    h[0] = lim_{z→∞} H(z) = lim_{z→∞} (1 + ∑_{k=1}^q b_k z⁻ᵏ) / (1 + ∑_{k=1}^p a_k z⁻ᵏ) = 1

Yule-Walker Equations (AR):

    r_x[k] = −∑_{l=1}^p a_l r_x[k−l] + σ²,    k = 0
    r_x[k] = −∑_{l=1}^p a_l r_x[k−l],    k ≥ 1
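For the AR(1) case these equations can be verified against the well-known closed-form ACF r_x[k] = σ² φ^|k| / (1 − φ²) for x[n] = φx[n−1] + ε[n] (so a₁ = −φ). A quick sketch:

```python
import numpy as np

phi, sigma2 = 0.8, 1.0        # x[n] = phi*x[n-1] + eps[n], i.e. a1 = -phi
a1 = -phi

# Closed-form AR(1) ACF
r = lambda k: sigma2 * phi ** abs(k) / (1 - phi ** 2)

# Yule-Walker, k = 0:  r[0] should equal -a1*r[-1] + sigma2
yw_k0 = -a1 * r(-1) + sigma2
# Yule-Walker, k = 1:  r[1] should equal -a1*r[0]
yw_k1 = -a1 * r(0)
```

Both right-hand sides match the closed-form ACF exactly, confirming the AR Yule-Walker recursion.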
ACF Model for an AR Process (cont.)
If we evaluate these AR Yule-Walker equations at k = 0, 1, …, p, we get p+1 simultaneous equations that can be solved for the p+1 model parameters {a_i}, i = 1, …, p, and σ².
ACF Model for an MA Process
Specializing to the MA case, we set p = 0 and get:
    r_x[k] = σ² ∑_{l=0}^{q−k} b_{l+k} h[l],    k = 0, 1, …, q
    r_x[k] = 0,    k ≥ q + 1

But for the MA case the system is an FIR filter, so

    h[k] = b_k,  k = 0, 1, …, q;    h[k] = 0 otherwise

and the ACF reduces to

    r_x[k] = σ² ∑_{l=0}^{q−k} b_l b_{l+k},    k = 0, 1, …, q
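Since h[k] = b_k for the FIR case, the MA ACF is just a finite sum of coefficient products. A small sketch (`ma_acf` is an illustrative name):

```python
import numpy as np

def ma_acf(b, sigma2, kmax):
    """ACF of an MA(q) process, b = [b0=1, b1, ..., bq]:
    r[k] = sigma2 * sum_{l=0}^{q-k} b[l]*b[l+k] for 0 <= k <= q, else 0."""
    b = np.asarray(b, dtype=float)
    q = len(b) - 1
    r = np.zeros(kmax + 1)
    for k in range(min(q, kmax) + 1):
        r[k] = sigma2 * np.dot(b[: q - k + 1], b[k:])
    return r

# MA(1) with b1 = 0.5 and noise variance 2
r = ma_acf([1.0, 0.5], sigma2=2.0, kmax=3)
```

For this MA(1) example, r[0] = 2(1 + 0.25) = 2.5, r[1] = 2(1 · 0.5) = 1.0, and the ACF is exactly zero beyond lag q = 1, as the piecewise formula requires.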
Parametric PSD Estimation (cont.)
Here is the general AR method: Given data {x[n], 0 ≤ n ≤ N-1}
1. Estimate the ACF lags needed to build the p×p AC matrix:

       {x[n], 0 ≤ n ≤ N−1}  ⇒  {r̂[k], 0 ≤ k ≤ p}

2. Solve the AR Yule-Walker equations for the AR model parameters:

       ⎡ r̂x[0]    r̂x[1]    ⋯  r̂x[p−1] ⎤ ⎡ â₁ ⎤     ⎡ r̂x[1] ⎤
       ⎢ r̂x[1]    r̂x[0]    ⋯  r̂x[p−2] ⎥ ⎢ â₂ ⎥ = − ⎢ r̂x[2] ⎥
       ⎢   ⋮         ⋮       ⋱    ⋮     ⎥ ⎢ ⋮  ⎥     ⎢   ⋮   ⎥
       ⎣ r̂x[p−1]  r̂x[p−2]  ⋯  r̂x[0]   ⎦ ⎣ âp ⎦     ⎣ r̂x[p] ⎦

       σ̂² = r̂x[0] + ∑_{l=1}^p â_l r̂x[l]

3. Compute the PSD estimate from the model:

       Ŝ_AR(ω) = σ̂² / |1 + ∑_{k=1}^p â_k e^{−jωk}|²
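The three steps can be sketched end-to-end in Python; the helper name and the AR(1) test signal are illustrative, and a biased ACF estimate is used for step 1:

```python
import numpy as np

def ar_psd_estimate(x, p, n_freq=256):
    """AR(p) PSD estimate: ACF -> Yule-Walker solve -> PSD evaluation."""
    N = len(x)
    # Step 1: biased ACF estimates r_hat[k], 0 <= k <= p
    r = np.array([np.dot(x[: N - k], x[k:]) / N for k in range(p + 1)])
    # Step 2: solve the Toeplitz Yule-Walker system, then the noise variance
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, -r[1:])
    sigma2 = r[0] + np.dot(a, r[1:])
    # Step 3: S_hat(w) = sigma2_hat / |1 + sum a_hat_k e^{-jwk}|^2
    w = np.linspace(0, np.pi, n_freq)
    A = 1 + np.exp(-1j * np.outer(w, np.arange(1, p + 1))) @ a
    return w, sigma2 / np.abs(A) ** 2, a, sigma2

# Test signal: AR(1) with x[n] = 0.9 x[n-1] + eps[n]  (true a1 = -0.9)
rng = np.random.default_rng(1)
eps = rng.standard_normal(20000)
x = np.empty_like(eps)
x[0] = eps[0]
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + eps[n]

w, S_hat, a_hat, sigma2_hat = ar_psd_estimate(x, p=1)
```

With 20000 samples the recovered â₁ and σ̂² are close to the true values, and the estimated PSD peaks at ω = 0 as the pole location predicts.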
Parametric PSD Estimation – AR Case (cont.)
Two common methods (but there are many others):
“Autocorrelation” Method
Estimate the ACF using:

    r̂x[k] = (1/N) ∑_{i=0}^{N−1−k} x[i] x[i+k],    0 ≤ k ≤ p
“Covariance” Method
Estimate using:

    ĉ_jk = (1/(N−p)) ∑_{n=p}^{N−1} x[n−j] x[n−k],    0 ≤ j, k ≤ p

Solve using:

    ⎡ ĉ₁₁  ĉ₁₂  ⋯  ĉ₁p ⎤ ⎡ â₁ ⎤     ⎡ ĉ₁₀ ⎤
    ⎢ ĉ₂₁  ĉ₂₂  ⋯  ĉ₂p ⎥ ⎢ â₂ ⎥ = − ⎢ ĉ₂₀ ⎥
    ⎢  ⋮    ⋮   ⋱   ⋮  ⎥ ⎢ ⋮  ⎥     ⎢  ⋮  ⎥
    ⎣ ĉp₁  ĉp₂  ⋯  ĉpp ⎦ ⎣ âp ⎦     ⎣ ĉp₀ ⎦

    σ̂² = ĉ₀₀ + ∑_{l=1}^p â_l ĉ₀l
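The covariance method can be sketched the same way; the helper name is illustrative and the test signal is again a simulated AR(1) process:

```python
import numpy as np

def ar_covariance_method(x, p):
    """AR(p) fit via the covariance method:
    c[j,k] = (1/(N-p)) * sum_{n=p}^{N-1} x[n-j] x[n-k], 0 <= j,k <= p."""
    N = len(x)
    c = np.empty((p + 1, p + 1))
    for j in range(p + 1):
        for k in range(p + 1):
            c[j, k] = np.dot(x[p - j : N - j], x[p - k : N - k]) / (N - p)
    # Solve the p x p system [c_jk]{a} = -[c_j0], then the noise variance
    a = np.linalg.solve(c[1:, 1:], -c[1:, 0])
    sigma2 = c[0, 0] + np.dot(a, c[0, 1:])
    return a, sigma2

# AR(1) test signal: x[n] = 0.9 x[n-1] + eps[n]  (true a1 = -0.9)
rng = np.random.default_rng(2)
eps = rng.standard_normal(10000)
x = np.empty_like(eps)
x[0] = eps[0]
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + eps[n]

a_hat, sigma2_hat = ar_covariance_method(x, p=1)
```

Unlike the autocorrelation method, no zero-padding beyond the data record is implied here; the sums only touch samples actually observed.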
Least Squares Method & Linear Prediction
Another commonly used method comes at the problem from a slightly different direction.

[Figure: white noise ε[n] drives the all-pole filter 1/(1 + ∑_{k=1}^p a_k z⁻ᵏ) to produce x[n].]

    x[n] = −∑_{k=1}^p a_k x[n−k] + ε[n]
LS Method & Linear Prediction (cont.)
If we re-arrange this output equation we get:
    x[n] − [ −∑_{k=1}^p a_k x[n−k] ] = ε[n]

The bracketed term is a prediction x̂[n] of x[n], and ε[n] is the prediction error. Choose the a_k to minimize the average squared prediction error V = (1/(N−p)) ∑_{n=p}^{N−1} ε²[n]. Setting the derivatives to zero:

    ∂V/∂a_l = (2/(N−p)) ∑_{n=p}^{N−1} (∂ε[n]/∂a_l) ε[n] = (2/(N−p)) ∑_{n=p}^{N−1} x[n−l] ε[n]

Now we use:

    ε[n] = x[n] − [ −∑_{k=1}^p a_k x[n−k] ] = ∑_{k=0}^p a_k x[n−k],    a₀ = 1

to get:

    ∂V/∂a_l = (2/(N−p)) ∑_{n=p}^{N−1} x[n−l] [ ∑_{k=0}^p a_k x[n−k] ]
            = (2/(N−p)) ∑_{k=0}^p a_k ∑_{n=p}^{N−1} x[n−l] x[n−k] = 0,    1 ≤ l ≤ p
20/21
LS Method & Linear Prediction (cont.)
So to solve the LS Linear Prediction problem we need:
    (2/(N−p)) ∑_{k=0}^p a_k ∑_{n=p}^{N−1} x[n−l] x[n−k] = 0,    1 ≤ l ≤ p

Define:
1. Matrix Γ with elements λ_lk
2. Vector λ with elements λ_l0
3. Vector a with elements a₁, …, a_p

where

    λ_lk = (1/(N−p)) ∑_{n=p}^{N−1} x[n−l] x[n−k],    1 ≤ l ≤ p, 0 ≤ k ≤ p
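Putting the pieces together, the sketch below (variable names are illustrative) forms Γ and λ from the data, solves Γa = −λ, and then confirms that the resulting prediction error behaves like the driving white noise:

```python
import numpy as np

# AR(2) test signal: x[n] = 0.75 x[n-1] - 0.5 x[n-2] + eps[n]
rng = np.random.default_rng(3)
eps = rng.standard_normal(10000)
x = np.empty_like(eps)
x[0], x[1] = eps[0], eps[1]
for n in range(2, len(x)):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + eps[n]

p, N = 2, len(x)
# lambda_lk = (1/(N-p)) * sum_{n=p}^{N-1} x[n-l] x[n-k]
lam = np.array([[np.dot(x[p - l : N - l], x[p - k : N - k]) / (N - p)
                 for k in range(p + 1)] for l in range(p + 1)])
# Solve Gamma a = -lambda (Gamma = lam[1:,1:], lambda = lam[1:,0])
a = np.linalg.solve(lam[1:, 1:], -lam[1:, 0])
# Prediction error eps_hat[n] = x[n] + a1 x[n-1] + a2 x[n-2], and its power V
e = x[p:] + a[0] * x[p - 1 : N - 1] + a[1] * x[p - 2 : N - 2]
V = np.mean(e ** 2)
```

Note that λ_lk here is exactly the covariance-method quantity ĉ_lk from the previous section, so LS linear prediction and the covariance method produce the same normal equations; the residual e is nearly uncorrelated, as a good linear predictor's error should be.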