
Theory

Category: Time data processing Topic: Digital filtering

Basic definitions relating to digital filtering


A linear time-invariant system

Discrete time signals are defined for discrete values of time, i.e. when t = nT. A general way of describing a sequence of discrete pulses of amplitude a(n) as illustrated below is given in equation 5-1.

where u0 is the unit impulse. A discrete-time system is an algorithm for converting one sequence into another as represented below. In this case the input x(n) is related to the output y(n) by the specific system f.

A linear system implies that applying the input ax1+bx2 will result in the output ay1+by2 where a and b are arbitrary constants. A time-invariant system implies that the input sequence x(n-n0) will result in the output y(n-n0) for all n0.


From equation 5-1 the input x(n) to a system can be expressed as

If h(n) is defined as the impulse response of a system which is the response to the sequence u0(n), then by time invariance h(n-m) is the response to u0(n-m). By linearity, the response to sequence x(m)u0(n-m) must be x(m)h(n-m). Thus the response to x(n) is given by

Equation 5-4 is known as the convolution sum and y(n) is known as the convolution of x(n) and h(n), designated by x(n) * h(n). Thus for a linear time invariant (LTI) system a relation exists between the input and output that is completely characterized by the impulse function h(n) of the system.
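A minimal numerical sketch of the convolution sum (assuming numpy is available; the sequences are illustrative, not taken from the text):

    # Sketch of the convolution sum y(n) = sum_m x(m) h(n-m)
    import numpy as np

    h = np.array([0.5, 0.3, 0.2])        # assumed impulse response of an LTI system
    x = np.array([1.0, 0.0, -1.0, 2.0])  # assumed input sequence

    # Direct evaluation of the convolution sum
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for m in range(len(x)):
            if 0 <= n - m < len(h):
                y[n] += x[m] * h[n - m]

    # The same result via the library routine
    assert np.allclose(y, np.convolve(x, h))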

Stability and Causality

The constraints of stability and causality define a more restricted class of linear time-invariant systems which have important practical applications. A stable system is one for which every bounded input results in a bounded output. The necessary and sufficient condition for stability is

A causal system is one for which the output for any n = n0 depends only on the input for n ≤ n0. A linear time-invariant system is causal if and only if the unit sample response is zero for n < 0, in which case it may be referred to as a causal sequence.

Difference equations

Some linear time-invariant systems have input and output sequences that are related by a constant coefficient linear difference equation. Representing such systems in this way can provide a means of making them realizable, and the appropriate difference equation reveals useful information on the characteristics of the system under investigation, such as the natural frequencies and their multiplicity, the order of the system, frequencies for which there is zero transmission, and so on.

The general form of an Mth order linear constant coefficient difference equation is given in equation 5-6.

An example of a first order difference equation is given by

which can be realized as follows.

The delay element shown in the diagram represents a one sample delay. A realization such as this, where separate delays are used for both input and output, is known as Direct form 1. More detailed information on filter realizations can be obtained from the references listed at the end of this chapter.

The z transform

The z transform of a sequence x(n) is given by


where z is a complex variable. The z transform is a useful technique for representing and manipulating sequences. The information contained in the z transform can be displayed in terms of poles and zeros. If the poles of the function X(z) fall within a radius R1, where R1 < 1, then the system is stable. In the z plane, the overall representation of a linear time invariant system is given by

and H(z) can again be expressed in the general form of difference equations

The frequency response of filters

Consider the case when the input to a filter is x(n) = e^(jω0n) (equivalent to a sampled sinusoid of frequency ω0). From equation 5-4

The quantity H(e^(jω)) is the frequency response function of the filter, which gives the transmission of the system for every value of ω. This is in fact the z transform of the impulse response function with z = e^(jω).
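As a small numerical sketch of this relationship (assuming numpy and scipy are available; the impulse response is illustrative, not taken from the text), H(e^(jω)) can be obtained either by direct evaluation of the sum or with a library routine:

    # Frequency response as the z transform of h(n) evaluated on the unit circle
    import numpy as np
    from scipy.signal import freqz

    h = np.array([0.25, 0.5, 0.25])            # assumed FIR impulse response
    w, H = freqz(h, worN=512)                  # H(e^(jw)) at 512 frequencies in [0, pi)

    # Equivalent direct evaluation of sum_n h(n) e^(-jwn)
    H_direct = np.array([np.sum(h * np.exp(-1j * wk * np.arange(len(h)))) for wk in w])
    assert np.allclose(H, H_direct)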

This means that the frequency response of a filter is an important indicator of a system's response to any input sequence that can be represented as a continuous superposition of input sequences x(n).

Relationship between the frequency response and the Fourier transform of a filter

The frequency response of a linear time invariant system can be viewed as the Fourier series representation of H(e^(jω)).

where the impulse response coefficients are also the Fourier series coefficients. Since the above relationships are valid for any sequence that can be summed, the same can apply to x(n) and y(n) and it can be shown that

and so the convolution in the time domain has been converted to multiplication in the frequency domain.

Discrete Fourier Transform

For a periodic sequence of N samples, the Discrete Fourier Transform is given as

and the DFT coefficients are identical to the z transform of that same sequence evaluated at N equally spaced points around the unit circle. The DFT coefficients are therefore a unique representation of a sequence of finite duration. The continuous frequency response can be obtained from the DFT coefficients by artificially increasing the number of points equally spaced around the unit circle. So by augmenting a finite duration sequence with additional equally spaced zero valued samples, the Fourier transform can be calculated with arbitrary resolution.

Finite and Infinite Impulse Response Filters

When an impulse response h(n) is made up of a sequence of finite pulses between the limits N1 < n < N2 (as shown below) and is zero outside these limits, then the system is called a finite impulse response (FIR) filter or system.

Such filters are always stable and can be realized by delaying the impulse response by an appropriate amount. The design of FIR filters is described in section 11.2.2.

A filter (system) whose impulse response extends to either -∞ or +∞ (or both) is termed an infinite impulse response (IIR) filter or system. Design of these filters is discussed in sections 11.2.3 and 11.2.4.

Use of digital filters

Digital filters can be used in a range of applications such as anti-aliasing, smoothing, elimination of noise, compensation (equalization), and modification of fatigue/damage characteristics. They have some important advantages compared to analog filters: high accuracy, consistent behavior and characteristics, few physical constraints, independence of hardware, and signals that can easily be used by different processing algorithms.

FIR and IIR filter design


Filters fall into two distinct categories: the Finite Impulse Response (FIR) filters and the Infinite Impulse Response (IIR) filters. A comparison of the two categories of filters is given below.

Stability: FIR filters are always stable (poles = 0); IIR filters are stable only if |poles| < 1.
Phase: FIR filters can have exactly linear phase (important in applications such as speech processing); IIR filters have nonlinear phase.
Efficiency: low for FIR filters, since the length (number of taps) must be relatively large to produce an adequately sharp cut off; better for IIR filters, since a lower order is required.
Round off error sensitivity: low for FIR filters; high for IIR filters.
Start up transients: of finite duration for FIR filters; of infinite duration for IIR filters.
Adaptive filtering: easy with FIR filters; difficult with IIR filters.
Realization: straightforward for FIR filters (direct form); more critical for IIR filters (direct or cascaded).

There are nine basic designs of filters that are described in this chapter as listed below.

FIR Window (see page 48)
FIR Multi window (see page 51)
FIR Remez (see page 52)
IIR Bessel (see page 55)
IIR Butterworth (see page 55)
IIR Chebyshev (see page 56)
IIR Inverse Chebyshev (see page 57)
IIR Cauer (see page 58)
IIR Inverse design (see page 61)

This section begins with an introduction to the terminology used in filter design. The following subsections deal with the processes and parameters involved in each sort of filter mentioned above.

Filter design terminology

Filter characteristics

The nomenclature used in describing a (low pass) filter is illustrated in Figure 5-1.

Figure 5-1

Filter characteristics

The filter design functions operate with normalized frequencies with a unit frequency equal to the sampling frequency.

Normalized frequency = frequency (Hz) / sampling frequency, and thus lies in the range 0 to 0.5.
Angular frequency on the unit circle = normalized frequency × 2π.
For example, a 100 Hz component in a signal sampled at 1000 Hz has a normalized frequency of 0.1 and an angular frequency of 0.2π on the unit circle.

Linear phase filters

The frequency response of a filter has an amplitude and a phase

For a linear phase, θ(ω) = -αω where -π ≤ ω ≤ π. It can be shown that a necessary condition for this is that the impulse response function is symmetric,

and in this case α = (N-1)/2. This means that for each value of N there is only one value of α for which exactly linear phase will be obtained. Figure 5-2 shows the type of symmetry required when N is odd and even.

Figure 5-2

Symmetrical impulses for odd and even N
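As a small check of this condition (assuming scipy is available; the coefficients are illustrative), a symmetric impulse response gives a constant group delay of (N-1)/2 samples:

    # Sketch illustrating the linear-phase condition h(n) = h(N-1-n)
    import numpy as np
    from scipy.signal import group_delay

    h = np.array([0.1, 0.3, 0.5, 0.3, 0.1])   # symmetric, N = 5 (odd)
    assert np.allclose(h, h[::-1])             # h(n) = h(N-1-n)

    w, gd = group_delay((h, [1.0]))            # group delay in samples
    alpha = (len(h) - 1) / 2                   # expected constant delay, (N-1)/2 = 2
    assert np.allclose(gd, alpha)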

Filter types

Several types of filter are provided (some of which are illustrated below) as well as multipoint filters where the required response can be of an arbitrary shape.

Figure 5-3 Filter types


In addition it is also possible to design a Differentiator filter and a Hilbert transformer. These can both be designed using the Remez exchange algorithm and they are briefly described here.

Differentiator filter

Such a filter takes the derivative of a signal and an ideal differentiator has a desired frequency response of

The unit sample response is

which is an antisymmetric unit sample response. In practice, however, the ideal case is not required and a pass band will be specified as shown here.


Figure 5-4

Characteristics of a differentiator filter

Hilbert transformer

This filter imparts a 90° phase shift to the input. The ideal Hilbert transformer has a desired frequency response of

The unit sample response is

In practice, however, the ideal case is not required and the desired frequency response of a Hilbert transformer can be specified as Hd(ω) = 1 between the limits ωl < ω < ωu, as shown below.

Figure 5-5

Characteristics of a Hilbert transformer
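Both filter types can be sketched with the Remez exchange algorithm as exposed in scipy.signal.remez (an assumption about tooling, not the tool described in this manual; band edges and lengths are illustrative):

    # Sketch of an FIR differentiator and a Hilbert transformer via the Remez
    # exchange algorithm. Band edges are in normalized frequency (0 to 0.5).
    from scipy.signal import remez

    # Differentiator: response proportional to frequency over the pass band
    h_diff = remez(31, [0.0, 0.4], [1.0], type='differentiator')

    # Hilbert transformer: unit gain between lower and upper band edges
    h_hilb = remez(31, [0.05, 0.45], [1.0], type='hilbert')

    # Both impulse responses are antisymmetric, h(n) = -h(N-1-n), as required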

Design of FIR filters

Design of an FIR window filter

The frequency response of a filter can be expanded into the Fourier series.

The coefficients of the Fourier series are identical to the impulse response of the filter. Such a filter is not realizable, however, since it begins at -∞ and is infinitely long. It needs to be both truncated to make it finite and shifted to make it realizable. Direct truncation is possible but leads to the Gibbs phenomenon of overshoot and ripple illustrated below.

Figure 5-6

Gibbs phenomenon due to truncation of the Fourier series

A solution to this is to truncate the Fourier series with a window function. This is a finite weighting sequence which will modify the Fourier coefficients to control the convergence of the series. Then

where w(n) is the window function sequence and h(n) gives the required impulse response. The desirable characteristics of a window function are: a narrow main lobe containing as much energy as possible, and side lobes that decrease in energy rapidly as ω tends to π.
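As a concrete sketch of the window method (assuming scipy is available; length, cutoff and window parameters are illustrative), scipy.signal.firwin windows the ideal impulse response directly, and its window argument selects among the windows described below:

    # Window-method FIR low pass designs with different windows
    from scipy.signal import firwin

    numtaps = 51
    cutoff = 0.2   # relative to the Nyquist frequency here, i.e. 0.1 of the
                   # sampling frequency in the normalized units used above

    h_rect = firwin(numtaps, cutoff, window='boxcar')          # direct truncation
    h_hann = firwin(numtaps, cutoff, window='hann')
    h_hamm = firwin(numtaps, cutoff, window='hamming')
    h_kais = firwin(numtaps, cutoff, window=('kaiser', 8.6))   # beta = 8.6
    h_cheb = firwin(numtaps, cutoff, window=('chebwin', 60))   # 60 dB side lobes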

The windows supported are listed below.

Rectangular

This is equivalent to direct truncation.

Hanning

This type of window trades off transition width for ripple cancellation. In this case

Hamming

This has similar properties to the Hanning window described above. The formula is the same but in this case a = 0.54.

Kaiser

The Kaiser window function is a simplified approximation of a prolate spheroidal wave function, which exhibits the desirable qualities of being a time-limited function whose Fourier transform approximates a band-limited function. It displays minimum energy outside a selected frequency band and is described by the following formula

where I0 is the zeroth order Bessel function and β is a constant representing a trade-off between the height of the side lobe ripple and the width of the main lobe.

Chebyshev

This is another example of an essentially optimum window like the Kaiser window, in the sense that it is a finite duration sequence that has the minimum spectral energy beyond the specified limits. The window function is derived from the Chebyshev polynomial, which is described below. The Chebyshev polynomial of degree r in x, where -1 ≤ x ≤ 1, is denoted by Tr(x).

The window function W(n) is obtained from the inverse DFT of the Chebyshev polynomial evaluated at N equally spaced points around the unit circle.

FIR multi window filter

This allows you to design a filter of arbitrary shape and is suited for narrow band selective filters. It uses the design technique known as frequency sampling.

It will be recalled from equations 5-15 that a filter can be defined by its DFT coefficients and that the DFT coefficients can be regarded as samples of the z transform of the function evaluated at N points around the unit circle.

From these relationships, and since e^(j2πk) = 1, it can be shown that

The desired filter specification can be sampled in frequency at n equidistant points around the unit circle, to give the desired frequency response H(k). The continuous frequency response can be obtained by interpolation of these sampled values around the unit circle. The filter coefficients are obtained after applying an inverse FFT on the interpolated response. The coefficients are tapered smoothly to zero at the ends by multiplying the impulse response by the specified window function.

FIR Remez filter

This uses the Remez exchange algorithm and Chebyshev approximation theory to arrive at filters that optimally fit the desired and the actual frequency responses, in the sense that the error between them is minimized. The Parks-McClellan algorithm employed enables you to design an equi-ripple optimal FIR filter. The desired frequency response is expressed as a gabarit which contains a number of frequency bands. These bands are interpolated onto a dense grid in a similar way to that described for the multi window FIR filter design above. The weighted approximation error between the desired frequency response and the actual response is spread evenly across the passbands and the stopbands, and the maximum error is minimized by linear optimization techniques. The approximation errors in both the pass and stop bands for a low pass filter are illustrated in Figure 5-7.

Figure 5-7

Approximation errors
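As a hedged sketch of such a design (assuming scipy is available; band edges, filter length and weights are illustrative), the Parks-McClellan algorithm is exposed in scipy.signal.remez, where the weight argument plays the role of the weighting function W(ω) discussed below:

    # Equi-ripple low pass design with the Remez exchange algorithm.
    # Band edges are in normalized frequency (0 to 0.5).
    from scipy.signal import remez

    bands = [0.0, 0.10, 0.15, 0.5]     # pass band 0-0.10, stop band 0.15-0.5
    desired = [1.0, 0.0]               # gabarit: unit gain, then zero gain
    weight = [1.0, 10.0]               # weight the stop band error 10x more heavily
    h = remez(101, bands, desired, weight=weight)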

The filter coefficients are obtained after applying an inverse DFT on the optimum frequency response.

Weighting

For each frequency band the approximation errors can be weighted. This is done by specifying a weighting function W(ω). Applying a weighting function of 1 (unity) in all bands implies an even distribution of the errors over the whole frequency band. To reduce the ripple in one particular band it is necessary to change the relative weighting across the bands, in this case ensuring that the band of interest has a relatively high weighting. It is convenient to normalize W(ω) in the stopband to unity and to set it to the ratio of the approximation errors (δ2/δ1) in the passband.

Design of IIR filters using analog prototypes

The steps involved in this design process are described in the following subsections. References for further reading on filters can be found on page 66.

Step 1) Specify the filter characteristics

The required filter characteristics are described in Figure 5-8. These will of course depend on the type of filter required.

Figure 5-8

Filter specification for IIR filters

Step 2) Compute the analog frequencies

A prototype low pass filter will be designed based on the required digital cutoff frequency ωc. First, however, the digital frequency ωd must be converted to an analog one, ωa. This is achieved through a bilinear transformation from the digital (z) plane to the analog (s) plane, where s and z are related by

When z = e^(jωT) (the unit circle) and s = jωa

The analog ω axis is mapped onto one revolution of the unit circle, but in a non-linear fashion. It is necessary to compensate for this nonlinearity (warping) as shown below.

Figure 5-9

Conversion from digital to analog frequencies
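In the usual formulation of the bilinear transform, s = (2/T)(1 - z^-1)/(1 + z^-1), the warping relation is ωa = (2/T) tan(ωd T/2). A small numerical sketch of the pre-warping (assuming numpy; the values are illustrative):

    # Pre-warping a digital cutoff frequency to its analog equivalent
    import numpy as np

    fs = 1000.0                 # sampling frequency in Hz (illustrative)
    T = 1.0 / fs
    f_d = 200.0                 # desired digital cutoff frequency in Hz
    w_d = 2 * np.pi * f_d       # digital angular frequency, about 1256.6 rad/s
    w_a = (2.0 / T) * np.tan(w_d * T / 2.0)
    # w_a is about 1453.1 rad/s, so the analog prototype must be designed
    # at this warped frequency rather than at w_d itself.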

Step 3) Select the suitable analog filter

It is now necessary to select a suitable low pass analog prototype filter that will produce the required characteristics. The selection can be made from the following types of filter.

Bessel filters
Butterworth filters
Chebyshev type I filters
Inverse Chebyshev (type II) filters
Cauer (elliptical) filters

Bessel filters

The goal of the Bessel approximation for filter design is to obtain a flat delay characteristic in the passband. The delay characteristics of the Bessel approximation are far superior to those of the Butterworth and the Chebyshev approximations; however, the flat delay is achieved at the expense of the stopband attenuation, which is even lower than that for the Butterworth. The poor stopband characteristics of the Bessel approximation make it impractical for most filtering applications! Bessel filters have sloping pass and stop bands and a wide transition width, resulting in a cutoff frequency that is not well defined. The transfer function is given by

where Bn(s) is the nth order Bessel polynomial

and d0 is a normalizing constant.

Butterworth filters

These are characterized by the response being maximally flat in the pass band and monotonic in both the pass band and stop band. Maximally flat means that as many derivatives as possible are zero at the origin. The squared magnitude response of a Butterworth filter is

where n is the order of the filter. The transfer function of this filter can be determined by evaluating equation 5-29 at s = jω.

Butterworth filters are all-pole filters, i.e. the zeros of H(s) are all at s = ∞. They have magnitude 1/√2 when ω/ωc = 1, i.e. the magnitude response is down 3 dB at the cutoff frequency.

Figure 5-10

Characteristics of a Butterworth filter
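As a sketch (assuming scipy is available; order and cutoff are illustrative), a digital Butterworth design and a check of the 3 dB point at the cutoff:

    # Digital Butterworth low pass design; butter() performs the prototype
    # design and the bilinear transformation internally and returns b, a.
    import numpy as np
    from scipy.signal import butter, freqz

    b, a = butter(4, 0.2)                  # order 4, cutoff 0.2 x Nyquist
    w, H = freqz(b, a)
    idx = np.argmin(np.abs(w - 0.2 * np.pi))
    print(20 * np.log10(np.abs(H[idx])))   # close to -3 dB at the cutoff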

A means of determining the optimum order is described on page 60.

Chebyshev (type I) filters

These are all pole filters that have equi-ripple pass bands and monotone stop bands. The formula is

where Cn(ω) are the Chebyshev polynomials and ε is the parameter related to the ripple in the pass band, as shown below for n odd and even.

For the same loss requirements, the Chebyshev approximation usually requires a lower order than the Butterworth approximation, but at the expense of an equi-ripple passband. Therefore, the transition width of a Chebyshev filter is narrower than for a Butterworth filter of the same order.

The increased stopband attenuation is achieved by changing the approximation conditions in that band, thus minimizing the maximum deviation from the ideal flat characteristics. The stopband loss keeps increasing at the maximum possible rate of 6*<Order> dB/octave. Chebyshev filters show a non-uniform group delay and substantially non-linear phase. A means of determining the optimum order is described on page 60.

Inverse Chebyshev (type II) filters

These contain poles and zeros and have equi-ripple stop bands with maximally flat pass bands. In this case

where Cn(ω) are the Chebyshev polynomials, ε is the pass band ripple parameter and ωr is the lowest frequency at which the stop band loss attains a specified value. These parameters are illustrated below for n odd and even.

For the same loss requirements, the Inverse Chebyshev approximation usually requires a lower order than the Butterworth approximation, but at the expense of an equi-ripple stopband. The increased passband flatness is achieved by changing the approximation conditions in that band, thus minimizing the maximum deviation from the ideal flat characteristics.

Cauer (elliptical) filters

These filters are optimum in the sense that for a given filter order and ripple specifications, they achieve the fastest transition between the pass and the stop band (i.e. the narrowest transition band). They have equi-ripple stop bands and pass bands.

The transfer function is given by

where Rn(ω,L) is called a Chebyshev rational function and L is a parameter describing the ripple properties of Rn(ω,L). The determination of Rn(ω,L) involves the use of the Jacobi elliptic function. ε is a parameter related to the passband ripple. For a given requirement, this approximation will in general require a lower order than the Butterworth or the Chebyshev ones. The Cauer approximation will thus lead to the least costly filter realization, but at the expense of the worst delay characteristics. In the Chebyshev and Butterworth approximations, the stopband loss keeps increasing at the maximum possible rate of 6*<Order> dB/octave. Therefore these approximations provide increasingly more loss than the flat attenuation that is really needed above the edge of the stopband. This source of inefficiency in both approximations is remedied by the Cauer or elliptic approximation.

Step 4) Transform the prototype low pass filter

At this point we have selected a suitable low pass filter prototype with a normalized cutoff frequency ωc = 1. The next stage is to transform this low pass filter into the type of analog filter required with the desired cutoff frequencies. To achieve this the following transformations are applied.

Step 5) Apply a bilinear transformation

The final stage in this design process is to apply a bilinear transformation to map the (s) plane to the (z) plane to obtain the desired digital filter.
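A hedged sketch of steps 3 to 5 done explicitly (assuming scipy is available; prototype, order and frequencies are illustrative): the analog prototype is designed first and then mapped to the z plane with scipy.signal.bilinear.

    # Analog prototype design followed by an explicit bilinear transformation
    import numpy as np
    from scipy.signal import butter, bilinear

    fs = 1000.0                              # sampling frequency in Hz (illustrative)
    wc = 2 * np.pi * 200.0                   # analog cutoff in rad/s; in practice the
                                             # pre-warped frequency from step 2 is used
    b_s, a_s = butter(4, wc, analog=True)    # analog prototype H(s)
    b_z, a_z = bilinear(b_s, a_s, fs=fs)     # digital filter H(z) coefficients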

The final result is a set of filter coefficients a and b, stored in vectors of length n+1, where n is the order of the filter. A facility, described below, enables you to determine the optimum order of a filter required for a particular design.

Determining the filter order

You can determine the filter order and the cutoff frequency for a given set of design parameters that are shown in Figure 5-11.

Figure 5-11

Specifications required to determine filter order

Ripple passband: this determines the ripple parameter δ1. It is expressed in dB.
Attenuation: when this is defined, the ripple parameter δ2 is determined. It is expressed in dB.
Lower frequency and Upper frequency: these are the two edge frequencies ωp (end of the pass band) and ωs (start of the stop band) of a low pass or high pass filter. Band pass and band stop filters will require a second pair of frequencies to be defined.
Sampling frequency: this is the sampling frequency at which the filter must operate.
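As a sketch of how such specifications translate into a minimum order (assuming scipy is available; the numerical values are illustrative, not taken from the text):

    # Minimum filter order from pass/stop band specifications
    from scipy.signal import buttord, cheb1ord, ellipord, butter

    fs = 1000.0     # sampling frequency, Hz
    wp = 100.0      # end of the pass band, Hz
    ws = 150.0      # start of the stop band, Hz
    rp = 1.0        # pass band ripple, dB (fixes delta1)
    rs = 40.0       # stop band attenuation, dB (fixes delta2)

    n_butter, wn_butter = buttord(wp, ws, rp, rs, fs=fs)
    n_cheby1, wn_cheby1 = cheb1ord(wp, ws, rp, rs, fs=fs)
    n_ellip,  wn_ellip  = ellipord(wp, ws, rp, rs, fs=fs)
    # The Chebyshev and Cauer (elliptic) prototypes usually need a lower order
    b, a = butter(n_butter, wn_butter, fs=fs)   # design at the returned order and cutoff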

The filter can be any one of the types mentioned above and the prototype can be either a Butterworth, Chebyshev type I or type II, or a Cauer filter. This process does not apply to the Bessel filter because, for these filters, the filter order affects the cutoff frequency. The minimum filter order required is determined from a set of functions described below. One function relates the pass band and stop band ripple specifications to a filter design parameter h, where

Another parameter relates the pass band cut off frequency ωp, the transition width v and the low pass filter transition ratio k, where

A final function relates the filter order n, the low pass filter transition ratio k and the filter design parameter h. This relationship depends on the type of prototype analog filter.

where K(.) is the complete elliptic integral of the first kind.

IIR Inverse design filter

The filter inverse design command uses a direct digital design technique rather than the digitization of existing analog filters as described in section 11.2.3. An iterative procedure is used to perform a least squares error fit between the actual frequency response and the specified desired response. The required response is obtained from a specified gabarit that contains the necessary frequency and magnitude break points, which are mapped onto a grid. The outcome is a set of filter coefficients.

Analysis
This section describes the functions that provide information on the characteristics of filters.

Frequency response of filters

The magnitude and phase of the frequency response H(e^(jω)) of the filter defined by the coefficients a and b in equation 5-10 can be computed.

Group delay

The group delay of a set of filters provides a measure of the average delay of a filter as a function of frequency. The frequency response of a filter is given by

The phase delay is defined as

and the group delay is defined as the first derivative of the phase

If the waveform is not to be distorted then the group delay should be constant over the frequency bands being passed by the filter. For a linear phase θ(ω) = -αω, where -π ≤ ω < π, α is both the phase delay and the group delay.
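A small sketch of computing both quantities for an illustrative filter, assuming scipy is available:

    # Phase delay and group delay of an illustrative IIR filter
    import numpy as np
    from scipy.signal import butter, freqz, group_delay

    b, a = butter(4, 0.2)
    w, H = freqz(b, a)
    phase = np.unwrap(np.angle(H))
    phase_delay = -phase[1:] / w[1:]       # -theta(w)/w, w = 0 excluded
    w_gd, gd = group_delay((b, a), w=w)    # -d theta/dw, in samples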

Applying filters
This section describes how filters can be applied to data.

Direct trace filtering

Implementing this method basically filters the data x according to the filter defined by coefficients a and b to produce the filtered data y.

Zero phase filtering

This option also filters the data using the filter defined by the coefficients a and b, but in such a way as to produce no phase distortion. In the case of FIR filters the phase distortion can be made exactly linear, since the output is simply delayed by a fixed number of samples, but with IIR filters the distortion is very non-linear. If the data has been recorded, however, and the whole sequence can be re-played, then this problem can be overcome by using the concept of time reversal. In effect the data is filtered twice, once in the forward direction, then in the reverse direction, which removes all the phase distortion but results in the magnitude effect of the filter being squared. If x(n) = 0 when n < 0, then the z transform of the time reversed sequence is

Time reversal filtering can be realized using the method shown in Figure 5-12.

Figure 5-12

Realization of zero phase filters

In this case it can be seen that

So the equivalent filter for the input data is

i.e. zero phase and squared magnitude. Using this filtering method results in starting and end transients, which in this implementation are minimized by carefully matching the initial conditions.
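A minimal sketch of the two application modes, assuming scipy is available (scipy.signal.filtfilt implements a forward-backward scheme of this kind; the signal and filter are illustrative):

    # Direct filtering versus zero phase (forward-backward) filtering
    import numpy as np
    from scipy.signal import butter, lfilter, filtfilt

    fs = 1000.0
    t = np.arange(0, 1.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz tone + noise

    b, a = butter(4, 30.0, fs=fs)        # low pass at 30 Hz
    y_direct = lfilter(b, a, x)          # direct trace filtering: phase distortion present
    y_zero = filtfilt(b, a, x)           # forward-backward: zero phase, magnitude squared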

References

[1] A. V. Oppenheim and R. W. Schafer, Digital Signal Processing, Prentice Hall, 1975.

[2] L. R. Rabiner and B. Gold, Theory and Application of Digital Signal Processing, Prentice Hall, 1975.

[3] R. E. Crochiere and L. R. Rabiner, Multirate Digital Signal Processing, Prentice Hall, 1983.

[4] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, Macmillan Publishing, 1992.
