
Digital Signal Processing (et 4235): The Levinson Algorithm

The Levinson algorithm is a fast recursion to solve the Yule-Walker prediction-error equations that model a random process, or more generally to implicitly invert a Toeplitz matrix. It gives rise to a computational structure (lattice filter) with useful properties. It can be used to analyze an autocorrelation sequence, and to generate a process with this autocorrelation.

It is used for parametric spectrum estimation (based on all-pole or AR modeling), and e.g. speech coding in GSM: instead of transmitting speech samples, the filter coefficients are transmitted so that the receiver can reconstruct the speech.

(These slides: real-valued signals. See book for the complex case.)

the levinson algorithm

October 2, 2011

Recap: AR modeling of a random process

Suppose we want to model a random signal x[n] using an AR filter applied to a white noise input e[n]:

    x[n] = -a_1 x[n-1] - a_2 x[n-2] - ... - a_N x[n-N] + e[n]

The input signal e[n] is also known as the innovation. "White" implies that e[n] is uncorrelated to e[m] for m != n.

In the z-domain, the model is written as

    X(z) = (1/A(z)) E(z),    A(z) = 1 + a_1 z^{-1} + ... + a_N z^{-N}

e[n] is not correlated to past samples (x[n-1], x[n-2], etc.), and hence E{e[n] x[n-1]} = 0, E{e[n] x[n-2]} = 0, etc.:

    E{e[n] x[n-k]} = 0  for k >= 1,      E{e[n] x[n]} = E{e[n]^2} = σ²
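The AR model above can be simulated directly. A minimal sketch (the function name and setup are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_ar(a, sigma2, n):
    """Generate n samples of the AR model
    x[n] = -(a_1 x[n-1] + ... + a_N x[n-N]) + e[n],
    driven by white Gaussian noise e[n] with variance sigma2.
    Initial conditions are zero."""
    N = len(a)
    a = np.asarray(a, dtype=float)
    x = np.zeros(n + N)                    # N leading zeros as initial state
    e = rng.normal(scale=np.sqrt(sigma2), size=n)
    for i in range(n):
        # x[i:N+i][::-1] = [x[n-1], ..., x[n-N]]
        x[N + i] = e[i] - a @ x[i:N + i][::-1]
    return x[N:]
```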

Application: parametric spectrum estimation

The power spectral density of x[n] is given by

    S_x(e^{jω}) = σ² / |A(e^{jω})|²

Parametric spectrum estimation is: given the correlation sequence r(k), find S_x(e^{jω}). For this we need to find the parameters a_1, ..., a_N and σ² of the model, i.e. model identification.
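Given the model parameters, the PSD formula above can be evaluated numerically. A minimal sketch (function name illustrative):

```python
import numpy as np

def ar_psd(a, sigma2, n_freq=512):
    """Evaluate S_x(e^{jw}) = sigma2 / |A(e^{jw})|^2 on a grid of
    n_freq frequencies w in [0, pi), with
    A(z) = 1 + a_1 z^{-1} + ... + a_N z^{-N}."""
    w = np.linspace(0, np.pi, n_freq, endpoint=False)
    A = np.ones(n_freq, dtype=complex)
    for i, ai in enumerate(a, start=1):
        A += ai * np.exp(-1j * w * i)      # term a_i z^{-i} at z = e^{jw}
    return sigma2 / np.abs(A) ** 2
```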

Application: prediction filter modeling

We are given a signal x[n] and wish to predict the current sample x[n] as a linear combination of N past samples:

    x̂[n] = -a_1 x[n-1] - a_2 x[n-2] - ... - a_N x[n-N]

To estimate the optimal coefficients, we can minimize the prediction error e[n] = x[n] - x̂[n]: in a deterministic setting, we can minimize Σ_n |e[n]|²; in a stochastic setting, we minimize E{|e[n]|²}.

This leads to the same equations as before (with the same coefficients a_k and error power σ²).
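In the deterministic setting, minimizing Σ_n |e[n]|² is an ordinary least-squares problem. A minimal sketch (function name illustrative):

```python
import numpy as np

def fit_predictor(x, N):
    """Find a_1..a_N minimizing sum_n |e[n]|^2, where
    e[n] = x[n] + a_1 x[n-1] + ... + a_N x[n-N],
    by least squares over n = N, ..., len(x)-1."""
    x = np.asarray(x, dtype=float)
    # Column i-1 holds the samples x[n-i] for n = N..len(x)-1.
    X = np.column_stack([x[N - i:len(x) - i] for i in range(1, N + 1)])
    a, *_ = np.linalg.lstsq(X, -x[N:], rcond=None)
    return a
```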

From prediction filter to Yule-Walker equations

Model for x[n]:

    x[n] + a_1 x[n-1] + ... + a_N x[n-N] = e[n]

In general, a random process x[n] will not precisely satisfy such a model. For any model order N, we try to find the best fitting parameters a_1, ..., a_N: those that minimize the residual noise variance. That is, we consider e[n] := x[n] + a_1 x[n-1] + ... + a_N x[n-N] and minimize the power of the prediction error, E{|e[n]|²}.

If the model holds, we can easily derive the Yule-Walker equations, as follows. Multiply by x[n-k] and take the expectation:

    E{x[n] x[n-k]} + a_1 E{x[n-1] x[n-k]} + ... + a_N E{x[n-N] x[n-k]} = E{e[n] x[n-k]}

For k = 1, ..., N this gives, with r(k) = E{x[n] x[n-k]} and E{e[n] x[n-k]} = 0,

    r(k) + a_1 r(k-1) + ... + a_N r(k-N) = 0,    k = 1, ..., N

Yule-Walker equations

These equations can be written in matrix form (with rows for k = 1, ..., N):

    [ r(0)    r(1)    ...  r(N-1) ] [ a_1 ]       [ r(1) ]
    [ r(1)    r(0)    ...  r(N-2) ] [ a_2 ]       [ r(2) ]
    [  :        :      ..    :    ] [  :  ]  = -  [  :   ]
    [ r(N-1)  r(N-2)  ...  r(0)   ] [ a_N ]       [ r(N) ]

The matrix is constant along diagonals: a Toeplitz matrix. It is also symmetric, since r(-k) = r(k).

For k = 0, we need to compute E{e[n] x[n]}. Because E{e[n] x[n-k]} = 0 for k >= 1 (see before), we have E{e[n] x[n]} = E{e[n]^2} = σ². We obtain an additional equation:

    r(0) + a_1 r(1) + ... + a_N r(N) = σ²

Yule-Walker equations

To solve the Yule-Walker equations is straightforward: the filter coefficients are a = -R^{-1} r, where R is the Toeplitz matrix above and r = [r(1), ..., r(N)]^T. The noise power (prediction error or innovation power) σ² follows from the above equation for k = 0.

A direct solution has a complexity of order N³ multiplications. The Levinson algorithm can reduce this to order N², by exploiting the Toeplitz structure.
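The direct solution can be sketched with a dense solver (names illustrative):

```python
import numpy as np

def yule_walker_direct(r, N):
    """Solve the order-N Yule-Walker equations by a dense O(N^3) solve.

    r : autocorrelation values r(0), ..., r(N).
    Returns the AR coefficients a_1..a_N and the prediction-error
    power sigma^2 from the k = 0 equation."""
    r = np.asarray(r, dtype=float)
    # Symmetric Toeplitz matrix R[i, j] = r(|i - j|), size N x N.
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
    a = -np.linalg.solve(R, r[1:N + 1])        # a = -R^{-1} [r(1)..r(N)]
    sigma2 = r[0] + a @ r[1:N + 1]             # r(0) + a_1 r(1) + ... = sigma^2
    return a, sigma2
```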

Levinson algorithm

We will derive a recursion over the model order m = 0, 1, ..., N. For each m, we have different filter coefficients a_1^(m), ..., a_m^(m) and modeling/prediction error power σ_m².

For order m = 0, the YW equations give σ_0² = r(0) (there is no filter).
For order m = 1, we have r(1) + a_1^(1) r(0) = 0, or a_1^(1) = -r(1)/r(0), and σ_1² = r(0) + a_1^(1) r(1).

For order m, combine the YW equations for k = 1, ..., m and the equation for k = 0 into a single matrix equation of size (m+1) x (m+1):

    [ r(0)  r(1)  ...  r(m) ] [ 1       ]   [ σ_m² ]
    [ r(1)  r(0)  ...   :   ] [ a_1^(m) ]   [ 0    ]
    [  :           ..   :   ] [  :      ] = [  :   ]
    [ r(m)  ...        r(0) ] [ a_m^(m) ]   [ 0    ]

Suppose we know the solution for order m-1: how can we find the solution for order m efficiently?

Levinson algorithm

Take as trial solution for order m the old solution for order m-1, extended by a zero:

    [ r(0)  ...  r(m) ] [ 1              ]   [ σ_{m-1}² ]
    [  :          :   ] [ a_1^(m-1)      ]   [ 0        ]
    [  :          :   ] [  :             ] = [  :       ]
    [  :          :   ] [ a_{m-1}^(m-1)  ]   [ 0        ]
    [ r(m)  ...  r(0) ] [ 0              ]   [ Δ_m      ]

where Δ_m = r(m) + a_1^(m-1) r(m-1) + ... + a_{m-1}^(m-1) r(1).

This is not the correct solution of the YW equations, because in general Δ_m != 0. We need to modify the trial solution such that Δ_m becomes 0; in that case the RHS has the required form and we have a solution to the YW equations of order m.

To modify the trial solution, we apply the following two tricks.

The Levinson algorithm

Trick 1: the Toeplitz structure gives rise to the following equation (for order m): reversing the order of the entries of the solution vector also reverses the RHS:

    [ r(0)  ...  r(m) ] [ 0              ]   [ Δ_m      ]
    [  :          :   ] [ a_{m-1}^(m-1)  ]   [ 0        ]
    [  :          :   ] [  :             ] = [  :       ]
    [  :          :   ] [ a_1^(m-1)      ]   [ 0        ]
    [ r(m)  ...  r(0) ] [ 1              ]   [ σ_{m-1}² ]

Trick 2: the two equations can be combined: the matrix maps any linear combination of the two extended solution vectors to the same linear combination of the two right-hand sides.

Levinson algorithm

For order m, we take as trial solutions the extended solution of order m-1 and its reversal:

    R_m [ 1, a_1^(m-1), ..., a_{m-1}^(m-1), 0 ]^T = [ σ_{m-1}², 0, ..., 0, Δ_m ]^T

    R_m [ 0, a_{m-1}^(m-1), ..., a_1^(m-1), 1 ]^T = [ Δ_m, 0, ..., 0, σ_{m-1}² ]^T

Both trial solutions are not correct solutions of the YW equations of order m because Δ_m != 0 in general. But we can take linear combinations of the two trial solutions in such a way that the last entry of the RHS becomes 0.

The Levinson algorithm

Indeed, for any (m+1) x (m+1) matrix R_m, linearity gives R_m (u + k_m v) = R_m u + k_m R_m v for the two trial vectors u and v above. We will take linear combinations of the following form (the book uses a slightly different notation):

    [ 1            ]   [ 1              ]         [ 0              ]
    [ a_1^(m)      ]   [ a_1^(m-1)      ]         [ a_{m-1}^(m-1)  ]
    [  :           ] = [  :             ] + k_m   [  :             ]
    [ a_{m-1}^(m)  ]   [ a_{m-1}^(m-1)  ]         [ a_1^(m-1)      ]
    [ a_m^(m)      ]   [ 0              ]         [ 1              ]

The corresponding RHS is [ σ_{m-1}² + k_m Δ_m, 0, ..., 0, Δ_m + k_m σ_{m-1}² ]^T. For k_m = -Δ_m / σ_{m-1}², this will bring the RHS into the required form. The LHS is then the solution of the YW equations of order m. (k_m is called a reflection coefficient.)

The Levinson algorithm

Thus, we need to set k_m = -Δ_m / σ_{m-1}². We obtain as solution

    a_i^(m) = a_i^(m-1) + k_m a_{m-i}^(m-1),    i = 1, ..., m-1
    a_m^(m) = k_m
    σ_m² = σ_{m-1}² + k_m Δ_m = σ_{m-1}² (1 - k_m²)

This is the update step in the Levinson recursion. The recursion is initiated by σ_0² = r(0).

With this, we enter the recursion. In the next step, we obtain the equations of order m+1, where Δ_{m+1} = r(m+1) + a_1^(m) r(m) + ... + a_m^(m) r(1). We set k_{m+1} = -Δ_{m+1} / σ_m², and follow the recursion.

The m-th step in the recursion has complexity of order m. Overall, the complexity of solving the N-th order YW equations is of order N²: more efficient than a direct inversion.
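The full recursion can be sketched as follows (a minimal implementation; names are illustrative):

```python
import numpy as np

def levinson(r, N):
    """Levinson recursion for the order-N Yule-Walker equations.

    r : autocorrelation values r(0), ..., r(N).
    Returns (a, sigma2, k): AR coefficients a_1..a_N, prediction-error
    power sigma_N^2, and reflection coefficients k_1..k_N.
    Cost is O(N^2) instead of O(N^3) for a dense solve."""
    r = np.asarray(r, dtype=float)
    a = np.zeros(0)
    sigma2 = r[0]                          # sigma_0^2 = r(0)
    k_list = []
    for m in range(1, N + 1):
        # Delta_m = r(m) + a_1 r(m-1) + ... + a_{m-1} r(1)
        delta = r[m] + a @ r[m - 1:0:-1]
        k = -delta / sigma2                # reflection coefficient k_m
        # [1; a^(m)] = [1; a^(m-1); 0] + k_m [0; reversed a^(m-1); 1]
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([a[::-1], [1.0]])
        sigma2 *= 1.0 - k ** 2             # sigma_m^2 = sigma_{m-1}^2 (1 - k_m^2)
        k_list.append(k)
    return a, sigma2, np.array(k_list)
```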

The Levinson algorithm

To show that this recursion works and does not break down, we need to prove that we can always compute a suitable k_m. This will follow from the main assumption: R_m is strictly positive definite (i.e., the covariance matrix has all its eigenvalues positive) for all m.

In particular, σ_m² > 0 for all m. This follows from the YW equations:

    R_m [ 1, a_1^(m), ..., a_m^(m) ]^T = [ σ_m², 0, ..., 0 ]^T = σ_m² u

with u = [1, 0, ..., 0]^T, so that [1, a_1^(m), ..., a_m^(m)]^T = σ_m² R_m^{-1} u. Since R_m is strictly positive definite (hence also invertible), the inverse R_m^{-1} exists and is also strictly positive definite. This implies u^T R_m^{-1} u > 0 for any m. The first entry of the equation above gives 1 = σ_m² u^T R_m^{-1} u, hence

    σ_m² = 1 / (u^T R_m^{-1} u) > 0,

so that we can compute k_{m+1} = -Δ_{m+1} / σ_m².

The Levinson algorithm

We can show a bit more: |k_m| < 1. The update equation is

    σ_m² = σ_{m-1}² (1 - k_m²),

so that 0 < σ_m² <= σ_{m-1}². From this we also see that the modeling/prediction error always decreases as the order increases.

Special case: if we have a true AR process of order N, then we will find Δ_m = 0, k_m = 0 and σ_m² = σ_N² for m > N. At this point the recursion stops (we can continue to higher orders, but the AR coefficients do not change anymore).

Special case: if k_m = ±1, then σ_m² = 0. The prediction error is zero, x[n] can be exactly predicted from its past, and the process is called deterministic. This can occur only if R_{m+1} is singular (thus the matrix is not strictly positive definite).

The Levinson algorithm

Szegő polynomials

We can define filter functions corresponding to the filter coefficients:

    A_m(z) = 1 + a_1^(m) z^{-1} + ... + a_m^(m) z^{-m},
    B_m(z) = z^{-m} A_m(z^{-1})      (the reversed polynomial)

This allows to write the Levinson recursion compactly as

    [ A_m(z) ]   [ 1     k_m ] [ A_{m-1}(z)        ]
    [ B_m(z) ] = [ k_m   1   ] [ z^{-1} B_{m-1}(z) ]

with initialization (m = 0) as

    A_0(z) = 1,    B_0(z) = 1
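The polynomial recursion translates directly into code on coefficient vectors. A sketch (the function name is illustrative):

```python
import numpy as np

def szego_polys(k):
    """Build the coefficient vector of A_N(z) (in powers of z^{-1})
    from the reflection coefficients k_1..k_N via
    A_m(z) = A_{m-1}(z) + k_m z^{-1} B_{m-1}(z),
    where B_{m-1}(z) = z^{-(m-1)} A_{m-1}(z^{-1}) has the reversed
    coefficients of A_{m-1}(z)."""
    A = np.array([1.0])                        # A_0(z) = 1
    for km in k:
        B = A[::-1]                            # coefficients of B_{m-1}(z)
        A = np.concatenate([A, [0.0]]) + km * np.concatenate([[0.0], B])
    return A
```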

The Levinson algorithm

FIR lattice filter

Thus, the resulting system has impulse response A_N(z) (and also B_N(z), not used). If we use this system with input x[n], we obtain the prediction error sequence e[n]; see Slide 4.

The filter structure is known as a lattice filter; note it is an FIR filter.
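One lattice section implements exactly one step of the polynomial recursion. A minimal sketch (names illustrative):

```python
import numpy as np

def fir_lattice(x, k):
    """FIR lattice analysis filter with reflection coefficients k_1..k_N.

    Returns (f, b): the forward prediction error (top branch, impulse
    response A_N(z)) and the backward error (bottom branch, B_N(z))."""
    f = np.asarray(x, dtype=float).copy()          # order-0 forward error
    b = f.copy()                                   # order-0 backward error
    for km in k:
        b_delayed = np.concatenate([[0.0], b[:-1]])    # z^{-1} B_{m-1}
        # f_m = f_{m-1} + k_m z^{-1} b_{m-1};  b_m = z^{-1} b_{m-1} + k_m f_{m-1}
        f, b = f + km * b_delayed, b_delayed + km * f
    return f, b
```

Feeding it an impulse returns the coefficients of A_N(z) on the top branch.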

The Levinson algorithm

The recursion can be reversed as follows:

    [ A_{m-1}(z)        ]        1       [  1    -k_m ] [ A_m(z) ]
    [ z^{-1} B_{m-1}(z) ] = ----------   [ -k_m    1  ] [ B_m(z) ]
                             1 - k_m²

i.e., the 2 x 2 recursion matrix is inverted at each stage, expressing the lower-order filters in terms of the higher-order ones.

The Levinson algorithm

IIR lattice filter

This allows to compute x[n] from e[n]: the filter impulse response is 1/A_N(z). This is an IIR filter; the filter structure is recursive, i.e., the filter coefficients of 1/A_N(z) are not explicitly computed.

It can be shown that the filter is stable as long as all reflection coefficients are strictly smaller than 1 in magnitude. This is the case if the original R is strictly positive definite.

If we replace e[n] by any white noise sequence (with variance σ_N²), then the resulting output signal is a random process with the same correlation sequence as x[n].
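The synthesis (all-pole) direction can be sketched sample by sample (names illustrative); feeding it the prediction-error sequence reconstructs x[n]:

```python
def iir_lattice(e, k):
    """All-pole IIR lattice synthesis filter (impulse response 1/A_N(z)).

    Given the innovation/error sequence e[n] and reflection coefficients
    k_1..k_N, reconstruct the signal x[n]. b[m] stores the delayed
    backward error b_m[n-1]; the initial state is zero."""
    N = len(k)
    b = [0.0] * (N + 1)
    x = []
    for sample in e:
        f = sample                              # f_N[n] = e[n]
        for m in range(N, 0, -1):
            f = f - k[m - 1] * b[m - 1]         # f_{m-1}[n]
            b[m] = b[m - 1] + k[m - 1] * f      # b_m[n]
        b[0] = f                                # b_0[n] = f_0[n] = x[n]
        x.append(f)
    return x
```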

The Levinson algorithm

This is exploited in the GSM system, where x[n] is the speech signal.

At the transmitter, the speech signal is split in short frames, each of 20 ms. For each frame, the correlation sequence r(k) is estimated. Using the Levinson algorithm, the reflection coefficients k_1, ..., k_N and residual noise power σ_N² are estimated, coded and transmitted to the receiver.

The receiver uses k_1, ..., k_N as coefficients in the reverse (IIR lattice) filter, and generates a white noise sequence in place of e[n]. It computes the corresponding output signal. It is not the same as the original speech signal, but it has the same correlation sequence, which is good enough for the ear (at least for unvoiced speech).
