
Lecture 1

Introduction

Digital Signal Processing, © 2004 Robi Polikar, Rowan University


Digital Signal Processing
• This week in DSP
  - Getting to know each other
  - Introduction
  - What is DSP?
  - Signals
  - Is this yet another class on testing our endurance on abstract math? (…Yes!)
  - What good is this miserable subject? Why do I care?
    • Real world applications
  - Components of a typical DSP system
  - A practical exercise
  - DSP Spring ’04 at a glance
  - On Friday → DSP prerequisite quiz

What is DSP?
• Digital Signal Processing: the mathematical and algorithmic manipulation of
  discretized and quantized (or naturally digital) signals, in order to extract
  the most relevant and pertinent information that is carried by the signal.

  (Block diagram: Signal to be processed → DSP System → Processed signal)

  - What is a signal?
  - What is a system?
  - What is processing?
Signals
• Signals can be characterized in several ways:
  - Continuous-time vs. discrete-time signals
    • Temperature in NJ – audio signal on a CD-ROM
  - Continuous-valued vs. discrete-valued signals
    • Amount of current drawn by a device – average SAT scores of a school over the years
    • Continuous time and continuous valued: analog signal
    • Continuous time and discrete valued: quantized signal
    • Discrete time and continuous valued: sampled signal
    • Discrete time and discrete valued: digital signal
  - Real-valued vs. complex-valued signals (in-class exercise)
    • Residential electric power use – industrial reactive power use
  - Scalar vs. vector-valued (multichannel) signals
    • Blood pressure signal – 128-channel EEG
  - Deterministic vs. random signals
    • Recorded audio – noise (corrupted signal)
  - One-dimensional vs. two-dimensional vs. multidimensional signals
    • Speech – image
Signals
(Figure: example waveforms – analog, digital, sampled, and quantized versions of the same signal)
Systems
• Not your typical systems: airline systems, security systems, irrigation
  systems, etc. are of no interest to us
• For our purposes, a DSP system is one that can mathematically manipulate
  (e.g., change, record, transmit, play, transform) digital signals
• Furthermore, we are not interested in processing analog signals either,
  even though most signals in nature are analog signals

  (Block diagram: Analog signal → ADC → digital signal → DSP System →
   digital signal → DAC → Processed analog signal)
Processing
• So what is processing…? What kind of processing do we do?
  - This depends on the application
  - Communication – modulation and demodulation
  - Signal security – encryption and decryption
  - Multiplexing and demultiplexing
  - Data compression
  - Signal denoising – filtering for noise reduction
  - Speaker / system identification
  - Audio processing – signal enhancement – equalization
  - Image processing – image denoising, enhancement, watermarking, reconstruction
  - Data analysis and feature extraction
  - Frequency / spectral analysis
  - Signal generation – TOUCH-TONE® dialing
Modulation – Demodulation

  y(t) = A[1 + m·x(t)]·cos(Ω₀t)        (modulated signal)
  z(t) = y(t)·cos(Ω₀t)                 (demodulation: multiply by the carrier)
  x̃(t) = LPF(z(t))                     (lowpass filter recovers the message)
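The three steps above can be checked numerically. This is an illustrative Python/NumPy sketch, not from the slides: the message and carrier frequencies are arbitrary choices, and a moving average stands in for a real lowpass filter.

```python
import numpy as np

# Illustrative values (assumptions, not from the slides)
fs = 10_000                      # sampling rate, Hz
t = np.arange(0.0, 1.0, 1/fs)
f0, fc = 5, 500                  # message and carrier frequencies, Hz
A, m = 1.0, 0.5                  # carrier amplitude and modulation index

x = np.sin(2*np.pi*f0*t)                 # message x(t)
y = A*(1 + m*x)*np.cos(2*np.pi*fc*t)     # y(t) = A[1 + m x(t)] cos(Omega_0 t)
z = y*np.cos(2*np.pi*fc*t)               # z(t) = y(t) cos(Omega_0 t)

# z(t) = (A/2)[1 + m x(t)] + (A/2)[1 + m x(t)] cos(2 Omega_0 t);
# the LPF keeps only the first term.  A moving average over one carrier
# period is a crude stand-in for a real lowpass filter here.
win = fs // fc
lp = np.convolve(z, np.ones(win)/win, mode='same')
x_rec = (2*lp/A - 1)/m                   # invert the remaining term to estimate x(t)

err = np.max(np.abs(x_rec[win:-win] - x[win:-win]))   # small away from the edges
```

The recovered message matches the original closely except near the edges, where the moving-average window is only partially filled.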
Filtering
• By far the most commonly used DSP operation
  - Filtering refers to deliberately changing the frequency content of the signal,
    typically by removing certain frequencies from the signal
  - For denoising applications, the (frequency) filter removes those frequencies in the
    signal that correspond to noise
  - In communications applications, filtering is used to focus on the part of the
    spectrum that is of interest, that is, the part that carries the information
• Typically we have the following types of filters
  - Lowpass (LPF) – removes high frequencies, and retains (passes) low frequencies
  - Highpass (HPF) – removes low frequencies, and retains high frequencies
  - Bandpass (BPF) – retains an interval of frequencies within a band, removes others
  - Bandstop (BSF) – removes an interval of frequencies within a band, retains others
  - Notch filter – removes a specific frequency
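As a quick numerical illustration of lowpass behavior (my own sketch, not from the slides): a short moving average passes a slow sinusoid nearly unchanged while strongly attenuating a fast one. The tone frequencies and filter length are arbitrary choices.

```python
import numpy as np

fs = 1000                         # sampling rate, Hz (arbitrary)
t = np.arange(0.0, 1.0, 1/fs)
low = np.sin(2*np.pi*5*t)         # 5 Hz tone (in the passband)
high = np.sin(2*np.pi*200*t)      # 200 Hz tone (to be removed)

# A 5-point moving average is a crude lowpass (FIR) filter.
h = np.ones(5)/5
out_low = np.convolve(low, h, mode='valid')
out_high = np.convolve(high, h, mode='valid')

gain_low = out_low.max()          # close to 1: low frequency retained
gain_high = out_high.max()        # close to 0: high frequency removed
```

With these particular numbers the 5-sample window spans exactly one period of the 200 Hz tone, so that tone averages out almost completely.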
Filtering
(Figure: spectrum of a signal with components at 50, 80, 110, 150, and 210 Hz,
and the filtered spectrum retaining only the 80 Hz and 150 Hz components)
Touch-Tone Dialing
• Dual-tone multifrequency (DTMF) signals
(Figure: DTMF keypad and example tone spectra; frequency axis labels near
1000 Hz and 1200 Hz)
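A DTMF tone is simply the sum of one row tone and one column tone. A minimal sketch in Python/NumPy (the frequency table is the standard DTMF assignment; the duration and sampling rate are arbitrary choices):

```python
import numpy as np

# Standard DTMF (row, column) frequency pairs in Hz
DTMF = {'1': (697, 1209), '2': (697, 1336), '3': (697, 1477),
        '4': (770, 1209), '5': (770, 1336), '6': (770, 1477),
        '7': (852, 1209), '8': (852, 1336), '9': (852, 1477),
        '*': (941, 1209), '0': (941, 1336), '#': (941, 1477)}

def dtmf_tone(key, dur=0.1, fs=8000):
    """Generate the two-sinusoid tone for one keypress."""
    f_lo, f_hi = DTMF[key]
    t = np.arange(0.0, dur, 1/fs)
    return np.sin(2*np.pi*f_lo*t) + np.sin(2*np.pi*f_hi*t)

tone = dtmf_tone('5')   # 770 Hz + 1336 Hz
```

A receiver identifies the key by locating the two dominant spectral peaks of the tone.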
About DSP…
• Is this another one of those classes that tests our endurance on abstract
  math torture…?
  - YES! Here is an example…

About DSP
• …and here is another one…
So What Good Is This?
• The real-world applications of DSP are innumerable:
  - Signal analysis, noise reduction / removal: biological signals (such as ECG, EEG,
    blood pressure), NDE signals (such as ultrasound, eddy current, magnetic flux),
    oceanographic data, seismic data, financial data (such as stock prices as time
    series), audio signal processing, echo cancellation
  - Communications – analog communications, such as amplitude modulation,
    frequency modulation, quadrature amplitude modulation, phase shift keying, phase
    locked loops; digital and wireless transmission – CDMA (code division multiple
    access) / TDMA (time division multiple access), time division multiplexing,
    frequency division multiplexing, internet protocol
  - Data encryption, watermarking, fingerprint analysis, speech recognition
  - Image processing and reconstruction, MRI, PET, CT scans
  - Signal generation, electronic music synthesis
  - And many, many more…
Components of a DSP System

(Block diagram: Analog signal → Sampler (sample & hold) → sampled signal →
A/D → digital signal → DSP System (digital filter) → digital signal →
D/A → quantized signal → Analog LPF → processed analog signal)

Another Example
(Figure panels: analog signal; output of sample & hold; output of A/D converter
(quantized binary); output of digital processor (binary); output of D/A
converter (analog); output of LPF – analog signal)



Analog-to-Digital-to-Analog…?
• Why not just process the signals in the continuous time domain? Isn’t it just a waste of time,
  money and resources to convert to digital and back to analog…?
• Why DSP? We digitally process the signals in the discrete domain, because it is
  - More flexible, more accurate, easier to mass produce
  - Easier to design
    • System characteristics can easily be changed by programming
    • Any level of accuracy can be obtained by using an appropriate number of bits
  - More deterministic and reproducible – less sensitive to component values, etc.
  - Many things that cannot be done using analog processors can be done digitally
    • Allows multiplexing, time sharing, multichannel processing, adaptive filtering
    • Easy to cascade, no loading effects, signals can be stored indefinitely w/o loss
    • Allows processing of very low frequency signals, which would require impractical
      component values in the analog world
• On the other hand, it can be
  - Slower, and subject to sampling issues
  - More expensive, with increased system complexity and higher power consumption
• Yet, the advantages far outweigh the disadvantages → Today, most continuous time
  signals are in fact processed in discrete time using digital signal processors
A Practical Example
 Audio processing – Matlab demo.
DSP At a Glance
• Introduction, Components of a DSP System, DSP Applications, Concepts of
Frequency and Filtering
• Signals and Systems – Chapter 1
• Commonly used signals in DSP – unit step and impulse, sinusoids, complex
exponentials, classification of signals, periodicity, energy vs. power signals
• Discrete time systems – classification of discrete systems (linearity, causality, time invariance,
memory, stability), characterization of LTI systems – impulse response, convolution, difference
equations, finite and infinite impulse response (FIR/IIR) systems
• Representation of Signals in Frequency Domain – Chapter 2
• Concept of spectrum / frequency
• Frequency representation of continuous time signals - Fourier series and Fourier transform (review)
• Sampling theorem – aliasing, Nyquist criterion, interpretation of spectrum in discrete time domain
• Frequency representation of discrete time signals
• Discrete time Fourier transform (DTFT)
• Discrete Fourier transform (DFT) and Fast Fourier transform (FFT)
• Properties of and relationships between various Fourier transforms
• Concepts of circular shift and convolution, decimation and interpolation of discrete signals.
• The z-transform – Chapter 3
• Definition and properties
• Relation to DTFT/DFT
• Concepts of zeros and poles of a system, region of convergence (ROC) of z-transform
• Inverse z-transform (to be covered in CC Module - Complex Systems)
DSP At a Glance
• Linear Time Invariant (LTI) Systems in Transform Domain – Chapter 3 & 4
• Concept of filtering – revisited, lowpass, bandpass and highpass filters
• The frequency response and transfer function of a system
• Types of transfer functions
• FIR filters, ideal filters, linear phase filters, zero locations of linear phase FIR filters,
• IIR filters, pole and zero locations of IIR filters, all pass filters, comb filters
• Stability issues for IIR filters
• Filter Design and Implementation – Chapter 4
• Digital filter specifications, selection of filter type, estimation of filter order
• FIR filter design using windows
• IIR filter design using bilinear transformation
• Analog filter design – Butterworth, Chebyshev, Elliptic, Bessel filters
• Spectral transformations for designing a filter with new characteristics based on a previously
designed filter
• Filter Structures – Chapter 5
• FIR filter structures – direct and cascade form
• IIR filter structures, Lattice form
• Finite Wordlength Effects – Chapter 6
• Analog to digital and digital to analog conversion
• Number representations – fixed point and floating point numbers
• Quantization of fixed and floating point numbers, coefficient quantization
• Quantization noise analysis, Overflow effects
• Practical Issues and Advanced Topics (time permitting)
Concept Maps
• Concept maps are tools for organizing and representing knowledge¹
• Concept maps include “concepts” which are connected by lines to form “propositions”. The
  lines are labeled to specify the relationship between the concepts.
• They are typically represented in a hierarchical fashion, with more general concepts at the
  top / center, and more specific, less general ones at the bottom or extremities.
• The hierarchy depends on the context in which the knowledge is applied, such as with
  respect to a specific question.
• Cross links may connect concepts that are at different geographical locations of the map.
  Such links represent the multidisciplinary nature of the topic and the creative thinking
  ability of the person preparing the map.
• Creating concept maps is not very easy, and requires some familiarity with the
  technique as well as the context. No concept map is ever final, as it can be continually
  improved. One should be careful, however, against frivolously creating concepts and/or
  links between them (which result in invalid propositions).
• Concept maps provide a very powerful mechanism for presenting the relationships between
  concepts as well as the preparer’s level of understanding of these concepts.

1. J.D. Novak, http://cmap.coginst.ufw.edu/info


Sample Concept Maps (Not Complete)
(Figure: a concept map answering “What is a plant?”, annotated to show
concepts, links, link labels, and a cross link)


For Friday & Next Monday
• Friday – Prerequisite Quiz
  - Graduate student Joe Oagaro will proctor the quiz
  - You may use the full lab period for the quiz
  - Closed notes and closed books; calculators allowed
  - There will be an optional take-home section that in fact includes topics that will be
    learned this semester
• Monday – Jan 26
  - Prepare a concept map of your current DSP knowledge. Yes, your DSP knowledge
    at this time is very limited. This will be compared to a concept map to be
    prepared at the end of the semester to show you your progress this semester.
  - Concept maps should only include concepts you know! Do NOT use topic names
    from tentative contents lists, unless you are very familiar with that topic!
Lecture 2
Discrete
Time Signals



This Week in DSP
• A demo!
• Semester at a glance – DSP’04
• Continuous time signals
• Sinusoids
  - Concepts of amplitude, frequency and phase
  - Generating sinusoids in Matlab (more during lab)
  - Complex exponentials
  - Phasor notation – revisited
  - Euler’s formula & graphical interpretation
  - Other continuous time signals
(Figure: concept map of the DSP course – signals (sinusoids, exponentials,
phasors, impulse, step, rectangular), their characterization (frequency,
power/energy, periodicity, continuous/discrete), sampling and the Nyquist
theorem, LTI systems (linearity, causality, memory, time invariance,
stability), time domain representation (impulse response, difference
equations, regular/circular convolution), frequency domain representation
(CFT, DTFT, DFT, FFT, z-transform, transfer function, frequency response,
spectrum, poles & zeros, ROC, stability), filtering (FIR/IIR; LPF, HPF, BPF,
BSF, APF, notch; ideal vs. practical), filter design (specs, windows,
bilinear transform, linear phase, Butterworth, Chebyshev, elliptic), filter
structures (direct, cascade, lattice), finite wordlength effects (A/D, D/A,
number representation, fixed/floating point, quantization noise, overflow),
and advanced topics (random signal analysis, multirate signal processing,
time-frequency analysis, adaptive signal processing))
Signals & Sinusoids
• Any physical quantity that is represented as a function of an
  independent variable is called a signal.
  - The independent variable can be time, frequency, space, etc.
• Sinusoids play a very important role in signal processing, because
  - They are easy to generate
  - They are easy to work with – their mathematical properties are well known
  - Most importantly: all signals can be represented as a sum of sinusoids
• In continuous time:

  y(t) = A sin(Ωt − θ)

  where A is the amplitude, Ω the angular frequency (radians/sec), and θ the phase (radians).
Sinusoids
• A continuous time domain sinusoid is a periodic signal
• Period: the time after which the signal repeats itself:

  y(t) = y(t + T)        (T: period)

• (Normalized analog) angular frequency Ω: a measure of the rate of change of
  the signal, normalized to the interval [0, 2π] – rad/sec.
  - The analog frequency (F, f – measured in Hertz, 1/sec), the period T
    (measured in seconds), and the angular frequency are related to each other by

    f = Ω/2π   or   Ω = 2πf
    T = 1/f    or   Ω = 2π/T

• Phase: the number of degrees – in radians – the sinusoid is shifted from its origin.
  - If the sinusoid is shifted by tθ seconds, then the phase is

    θ = 2πf·tθ = 2π·tθ/T
Frequency, Period & Phase

(Figure: two sinusoids with period T and time shift tθ)
T = 0.05 seconds → f = 1/0.05 = 20 Hz
Ω = 2π·20 = 125.66 rad/s, θ = 0
tθ = θ/(2πf) = (3π/4)/(2π·20) = 0.75/40 = 0.0187 s
What is the phase component???
Sinusoids
• We can also represent sinusoids using cosine:

  x(t) = A cos(Ω₀t ± θ) = A cos(2πf₀t ± θ)

• Can you identify A, f₀, Ω₀, θ for each plot?
• Answer: A = 1, f₀ = 2 Hz, T = 0.5 s, Ω₀ = 2π·2 = 4π = 12.56 rad/s,
  θ₁ = 0 rad, θ₂ = 2π·0.125/0.5 = π/2 rad = 90°
Sinusoids
• The take-home message is: the sine and cosine are essentially the
  same signals, separated by only a 90° phase angle. Other important
  identities we need to know:
Sinusoids
• In Matlab: A*sin(2*pi*f*t+theta) – t: time base, theta: phase

  t = -1:0.001:1;
  x = sin(2*pi*20*t);  % phase is zero, amplitude is 1, in this example
  plot(t,x); grid; xlabel('time, sec.'); ylabel('amplitude'); title('a 20 Hz sine signal')

  T = 1/f = 1/20 = 0.05 seconds
In Matlab
• What Ts is small enough?
Complex Exponential Signals
• The complex exponential signal is defined as

  x(t) = A e^{j(Ω₀t+θ)} = A cos(Ω₀t + θ) + jA sin(Ω₀t + θ)

  Note that there is a π/2 phase difference between the real and imaginary parts.
Phasor Notation
• Recall that any complex number can be represented in phasor notation:

  z = x + jy  →  z = r·e^{jθ},  where  r = √(x² + y²),  θ = arctan(y/x)

  - Therefore, any complex signal can be represented in phasor notation:

    A e^{j(Ω₀t+θ)} = A e^{jθ} e^{jΩ₀t} = X e^{jΩ₀t} = A e^{jφ(t)}

    where X = A e^{jθ} is the complex amplitude (phasor) and φ(t) = Ω₀t + θ.

  - Then any real signal can be represented as:

    x(t) = A cos(Ω₀t ± θ) = ℜ{A e^{j(Ω₀t±θ)}} = ℜ{X e^{jΩ₀t}} = ℜ{A e^{jφ(t)}}
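These identities are easy to check numerically. A small Python/NumPy sketch (the values of A, Ω₀ and θ are arbitrary examples):

```python
import numpy as np

A, w0, theta = 2.0, 2*np.pi*3, np.pi/4   # arbitrary example values
t = np.linspace(0.0, 1.0, 1000)

X = A*np.exp(1j*theta)        # complex amplitude (the phasor)
z = X*np.exp(1j*w0*t)         # rotating phasor A e^{j(w0 t + theta)}

real_signal = z.real                  # Re{A e^{j(w0 t + theta)}}
cosine = A*np.cos(w0*t + theta)       # A cos(w0 t + theta): the same signal
radius = np.abs(z)                    # constant A: the phasor stays on a circle
```

The real part of the rotating phasor reproduces the cosine exactly, and its magnitude stays fixed at A, which is the circular motion described on the next slide.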
Graphical Interpretation

  A e^{j(Ω₀t+θ)} = A e^{jθ} e^{jΩ₀t} = A e^{jφ(t)},   φ(t) = Ω₀t + θ

(Figure: the complex signal A e^{jφ(t)}, frozen in time at time t, drawn as a
vector of length A at angle φ(t) in the complex plane)

• As t changes, the phase φ(t), that is, the angle of the phasor, changes; the
  phasor rotates at the frequency Ω₀. If the phasor is rotating in the
  counterclockwise direction, Ω₀ is said to be a (+) frequency, otherwise a (−)
  frequency. Since the phasor returns to the same point every 2π, and it takes
  Ω₀T to make one turn,

  Ω₀T = (2πf)T = 2π  ⇒  T = 1/f

  Also note that the phase term in a signal indicates where the signal is
  pointing at t = 0!
Euler’s Formula

  cos θ = (e^{jθ} + e^{−jθ})/2,   sin θ = (e^{jθ} − e^{−jθ})/(2j)   ⇔   e^{jθ} = cos θ + j sin θ

  A cos(Ω₀t + θ) = A[(e^{j(Ω₀t+θ)} + e^{−j(Ω₀t+θ)})/2]
                 = …
                 = ℜ{z(t)}

(Figure, from © Lyons: cos(ωt) as the sum of two counter-rotating phasors,
cos θ = (e^{jθ} + e^{−jθ})/2, shown on the unit circle (A = 1) at
ωt = π/4, π/2, 3π/4)
Continuous Time Signals
• The signal y(t) = A sin(Ωt − θ) is a continuous-time signal
  - It has a value for every time instant on which it is defined
  - This is different from a continuous function, which is a function that can be
    differentiated in the interval on which it is defined. The following
    rectangular signal is not a continuous function (why not?):

    Π(t) ≜ { 1,  −1/2 ≤ t ≤ 1/2
           { 0,  otherwise
Continuous Time Signals
Sinusoid and Exponential

(Figure: y(t) = A sin(Ωt − θ)  and  x(t) = e^{−αt}, α > 0)
Continuous Time Signals
Rectangular Function
• Rectangular function:

  Π(t) ≜ { 1,  −1/2 ≤ t ≤ 1/2        (1/β)Π(t/β) ≜ { 1/β,  −β/2 ≤ t ≤ β/2
         { 0,  otherwise                           { 0,    otherwise
Continuous Time Signals
Unit Impulse
• Consider compressing the rectangular function in such a way that the
  area under the rectangle stays constant, at 1
  - The function you would obtain as the width of the rectangle approaches zero is
    called the unit impulse function – it is a commonly used test signal.

    δ(t) = lim_{β→0} (1/β)Π(t/β)

  - Since the area under the rectangle was kept constant, the impulse
    function has the following properties:

    ∫_{−∞}^{∞} δ(t)dt = 1,    δ(t) = 0 for t ≠ 0
Unit Impulse
• Some important properties of the unit impulse:

  1. f(t)δ(t − t₀) = f(t₀)δ(t − t₀)

  2. ∫_a^b f(t)δ(t − t₀)dt = ∫_a^b f(t₀)δ(t − t₀)dt = f(t₀),  a < t₀ < b   (sifting property)

  3. ∫_a^b f(t)δ⁽ⁿ⁾(t − t₀)dt = (−1)ⁿ f⁽ⁿ⁾(t₀),  a < t₀ < b

  4. δ(at) = (1/|a|)δ(t)

  5. f(t) ∗ δ(t) = f(t),   f(t) ∗ δ(t − t₀) = f(t − t₀)   (convolution property)

  6. f(t) ∗ δ⁽ⁿ⁾(t) = f⁽ⁿ⁾(t)
Recall Convolution

  x(t) ∗ y(t) = ∫_{−∞}^{∞} x(τ)y(t − τ)dτ = ∫_{−∞}^{∞} y(τ)x(t − τ)dτ
Continuous Time Signals
Unit Step Function
• The continuous-time unit-step function is

  u(t) = { 1,  t > 0
         { 0,  t < 0

• Note the following relationship between the impulse and step functions:

  δ(t) = du(t)/dt,      u(t) = ∫_{−∞}^{t} δ(τ)dτ
Lecture 3
Continuous
& Discrete
Time Signals



This Week in DSP
• Basic Continuous Signals
  - Rectangular pulse
  - Unit impulse
  - Unit step
• Convolution Integral
• Discrete Time Signals
  - Sampling
  - Unit impulse sequence
  - Unit step sequence
Continuous Time Signals
Rectangular Function
• Rectangular function:

  Π(t) ≜ { 1,  −1/2 ≤ t ≤ 1/2        (1/β)Π(t/β) ≜ { 1/β,  −β/2 ≤ t ≤ β/2
         { 0,  otherwise                           { 0,    otherwise

• In Matlab: G = rectpuls(t)  or  G = rectpuls(t,w)
Continuous Convolution
• Recall the convolution integral

  z(t) = x(t) ∗ y(t) = ∫_{−∞}^{∞} x(τ)y(t − τ)dτ = ∫_{−∞}^{∞} y(τ)x(t − τ)dτ

• Note that the “t” dependency of z(t) is misleading. For each value of “t” the
  convolution integral must be integrated separately over all values of the dummy
  variable “τ”. So, for each “t”:
  1. Rename the independent variable as τ. You now have x(τ) and y(τ). Flip y(τ) over the
     origin. This is y(−τ).
  2. Shift y(−τ) as far left as possible to a point “t”, where the two signals barely touch.
     This is y(t−τ).
  3. Multiply the two signals and integrate over all values of τ. This is the convolution
     integral for the specific “t” picked above.
  4. Shift / move y(−τ) infinitesimally to the right and obtain a new y(t−τ). Multiply and
     integrate.
  5. Repeat 2–4 until y(t−τ) no longer touches x(t), i.e., it is shifted out of the x(t) zone.
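The flip-shift-multiply-integrate procedure above can be approximated numerically by a Riemann sum; `np.convolve` scaled by the step Δt does exactly that. An illustrative sketch (the two unit pulses are my own example, not from the slides):

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 2.0, dt)
x = np.where(t <= 1.0, 1.0, 0.0)   # unit pulse on [0, 1]
y = np.where(t <= 1.0, 1.0, 0.0)   # same pulse

# z(t) = integral x(tau) y(t - tau) dtau  ~  dt * sum_m x[m] y[n - m]
z = np.convolve(x, y)*dt
tz = np.arange(len(z))*dt          # time axis of the result

# Convolving two unit pulses gives a triangle peaking near 1 at t = 1.
peak = z.max()
```

The overlap of the two pulses grows linearly, peaks when they coincide, then shrinks linearly, which is exactly the sliding-overlap picture in steps 2 to 5.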
Convolution Integral

(Figure: convolution of e^{−0.15(t+5)}[u(t + 5) − u(t − 11)] with [u(t) − u(t − 2)])



Convolution w/ Impulse
• From the sifting property of the impulse function,

  ∫_a^b f(t)δ(t − t₀)dt = ∫_a^b f(t₀)δ(t − t₀)dt = f(t₀)

  - convolution with an impulse simply shifts the function to where the
    impulse(s) appear.
Discrete-Time Signals
• A discrete-time signal, commonly referred to as a sequence, is only
  defined at discrete time instances, where t is defined to take integer
  values only.
• Discrete-time signals may also be written as a sequence of numbers
  inside braces:

  {x[n]} = {…, −0.2, 2.2, 1.1, 0.2, −3.7, 2.9, …}

  - n indicates discrete time, in integer intervals; the bold face denotes time t = 0.
• Discrete time signals are often generated from continuous time signals
  by sampling, which can roughly be interpreted as quantizing the
  independent variable (time):

  {x(n)} = x(nTs) = x(t)|_{t=nTs},   n = …, −2, −1, 0, 1, 2, …

  where Ts is the sampling interval / period.
Sampling
• Think of sampling as a switch that stays closed for an infinitesimally
  small amount of time. It takes samples from the continuous time signal
  every Ts seconds.



Sampling
• The reciprocal of the sampling period is the sampling frequency, fs (samples/s):

  fs = 1/Ts

• We naturally interpret signals in the continuous domain → D/A conversion
• A fundamental question: how close should the samples be to each
  other so that the continuous time signal can be uniquely regenerated from
  the discrete time signal?
  - What should the maximum Ts, or minimum fs, be so that we can observe the
    signal as smooth? How about reconstructing the continuous signal from its samples?
Discrete Signals
• A length-N sequence is often referred to as an N-point sequence
• The length of a finite-length sequence can be increased by zero-padding, i.e., by
  appending it with zeros
• A right-sided sequence x[n] has zero-valued samples for n < N₁. If N₁ ≥ 0, a
  right-sided sequence is called a causal sequence
• A left-sided sequence x[n] has zero-valued samples for n > N₂. If N₂ < 0, a
  left-sided sequence is called an anti-causal sequence

(Figures: sample plots of a right-sided sequence starting at N₁ and a
left-sided sequence ending at N₂)
Lecture 4
Discrete
Time Signals



This Week in DSP
• Discrete Time Signals
  - Sampling
  - Unit impulse sequence
  - Unit step sequence
  - Complex exponential sequence
  - Periodicity
Discrete Time Signals
Unit Impulse (sequence)

  δ[n] = { 1,  n = 0
         { 0,  n ≠ 0

• The sifting property carries into the discrete time domain:

  f[n]δ[n − n₀] = f[n₀]δ[n − n₀] = { f[n₀],  n = n₀
                                   { 0,      n ≠ n₀

  Σ_{n=a}^{b} f[n]δ[n − n₀] = Σ_{n=a}^{b} f[n₀]δ[n − n₀] = f[n₀],   a < n₀, b > n₀
Discrete Time Signals
Unit Impulse (sequence)
• The sifting property has one very important consequence:
  - A sequence can be generated in terms of impulses:

    x[n] = … + x[−1]δ[n + 1] + x[0]δ[n] + x[1]δ[n − 1] + …
         = Σ_{m=−∞}^{∞} x[m]δ[n − m]

  (Figure: a single sample of the sequence shown as x[−5]δ[n + 5])

  We will use this property in the future to define any system in terms of its “impulse response”
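This decomposition is easy to verify numerically; an illustrative Python/NumPy sketch (the test sequence is arbitrary):

```python
import numpy as np

def delta(n):
    """Unit impulse sequence evaluated over an integer array n."""
    return np.where(n == 0, 1.0, 0.0)

n = np.arange(-10, 11)
x = np.random.default_rng(0).standard_normal(len(n))  # arbitrary sequence

# x[n] = sum_m x[m] * delta[n - m]: a sum of scaled, shifted impulses
rebuilt = sum(x[i]*delta(n - m) for i, m in enumerate(n))
```

The rebuilt sequence matches the original sample for sample, since each shifted impulse picks out exactly one sample.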
Discrete Time Signals
Unit Step (sequence)

(Figure: u[n] plotted for n = −4 … 6)

  u[n] = { 1,  n ≥ 0    = Σ_{i=0}^{∞} δ[n − i]
         { 0,  n < 0

  δ[n] = u[n] − u[n − 1]
Discrete Time Signals
Exponential Sequence
• Exponential sequence:

  x[n] = Aαⁿ,  −∞ < n < ∞

  - where A and α are real or complex numbers. If |α| < 1, this is a decaying exponential.
  - What about 0 < α < 1, or −1 < α < 0, or |α| > 1???

• If we write α = e^{(σ₀ + jω₀)} and A = |A|e^{jφ}, where ω₀ is the angular
  (discrete) frequency of the sequence,
  - then we can express the sequence as

    x[n] = |A|e^{jφ}e^{(σ₀ + jω₀)n} = x_re[n] + j·x_im[n]

  - where

    x_re[n] = |A|e^{σ₀n} cos(ω₀n + φ),
    x_im[n] = |A|e^{σ₀n} sin(ω₀n + φ)
Discrete Time Signals
Exponential Sequence
• x_re[n] and x_im[n] of a complex exponential sequence are real
  sinusoidal sequences with constant (σ₀ = 0), growing (σ₀ > 0), or
  decaying (σ₀ < 0) amplitudes for n > 0

(Figure: real and imaginary parts of x[n] = exp((−1/12 + jπ/6)n)·u[n],
plotted for time index n = 0 … 40)
Discrete Time Signals
Exponential Sequence
• A special case of the exponential signal is very commonly used in DSP:
  A = real constant, and α purely imaginary, i.e.,

  x[n] = A e^{jω₀n} = A(cos[ω₀n] + j sin[ω₀n])

• If both A and α are purely real, then we have a real exponential sequence:

  x[n] = Aαⁿ,  −∞ < n < ∞,  A, α ∈ ℜ

(Figure: real exponential sequences for α = 1.2 (growing) and α = 0.9
(decaying), time index n = 0 … 30)
Periodicity
• The sinusoidal sequence A cos(ω₀n + φ) and complex exponential sequence
  B e^{jω₀n} are periodic sequences of period N if ω₀N = 2πr, where N and r
  are positive integers
• The smallest value of N satisfying ω₀N = 2πr is the fundamental period
  of the sequence
• Any sequence that does not satisfy this condition is aperiodic
• To verify the above fact, consider

  x₁[n] = cos(ω₀n + φ),   x₂[n] = cos(ω₀(n + N) + φ)

  x₂[n] = cos(ω₀n + φ)cos(ω₀N) − sin(ω₀n + φ)sin(ω₀N)
        = cos(ω₀n + φ) = x₁[n]   iff   sin(ω₀N) = 0 and cos(ω₀N) = 1

  - These two conditions are met if and only if

    ω₀N = 2πr   or   2π/ω₀ = N/r
Periodicity
• Note that any continuous sinusoidal / exponential signal is periodic;
  however, not all discrete sinusoidal sequences are:
  - A discrete time sequence sin(ω₀n + φ) or e^{jω₀n} is periodic with period N, if and
    only if there exists an integer m such that mT₀ is an integer, where T₀ = 2π/ω₀.
  - In other words, ω₀N = 2πr must be satisfied with two integers N and r, i.e.,
    ω₀/2π = r/N must be a rational number.
  - Are these sequences periodic?

    x[n] = 3 cos(5n + π/2)        y[n] = e^{j7n}
    x[n] = 5 sin(3πn + π/2)       y[n] = e^{j3.7πn}

  - Try in Matlab!!!
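The condition ω₀N = 2πr can also be checked by brute force. An illustrative Python sketch (the search bound and tolerance are my own choices):

```python
import numpy as np

def fundamental_period(w0, n_max=1000):
    """Smallest N with w0*N = 2*pi*r for integers N, r; None if no N <= n_max works."""
    for N in range(1, n_max + 1):
        r = w0*N/(2*np.pi)
        if abs(r - round(r)) < 1e-9:
            return N
    return None

# x[n] = 3 cos(5n + pi/2):     w0 = 5      -> aperiodic (5N is never a multiple of 2*pi)
# y[n] = e^{j7n}:              w0 = 7      -> aperiodic
# x[n] = 5 sin(3*pi*n + pi/2): w0 = 3*pi   -> periodic, N = 2 (r = 3)
# y[n] = e^{j*3.7*pi*n}:       w0 = 3.7*pi -> periodic, N = 20 (r = 37)
```

The sequences with ω₀ a rational multiple of π come out periodic; those with ω₀ = 5 or 7 never hit an integer r, because ω₀/2π is irrational.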
Periodicity
Do you believe in the professor?
• Try this:

  n = 0:1:1000;
  x = sin(2*0.01*n);
  plot(n,x)
Bizarre Properties of Discrete Signals
• Property 1 – Consider x[n] = e^{jω₁n} and y[n] = e^{jω₂n} with 0 < ω₁ < π and
  2πk ≤ ω₂ ≤ 2π(k + 1), where k is any positive integer.
• If ω₂ = ω₁ + 2πk, then x[n] = y[n]
• Thus, x[n] and y[n] are indistinguishable
• What does this mean?
• Two periodic discrete exponential sequences are indistinguishable if
  their angular frequencies are 2πk apart from each other!!!

Try this at home:

  n = -1000:1000;
  x = exp(j*2*pi*0.01*n);
  plot(n, real(x))
  y = exp(j*2*pi*2.01*n);   % note that w_y = w_x + 2*pi*k, with k = 2
  hold
  plot(n, real(y), 'r')
What the Heck is Going on?
• Property 2: The curse of digital angular frequency:
  - The frequency of oscillation of A cos(ω₀n) increases as ω₀ increases from 0 to π,
    and then decreases as ω₀ increases from π to 2π
• Thus, frequencies in the neighborhood of ω = 0 or 2πk are called low
  frequencies, whereas frequencies in the neighborhood of ω = π or π(2k + 1)
  are called high frequencies.
  - Note that the frequencies around ω = 0 and ω = 2π are both low frequencies. In
    fact, ω = 0 and ω = 2π are identical frequencies.
  - Due to these two properties, a frequency in the neighborhood of ω = 2π is
    indistinguishable from a frequency in the neighborhood of ω = 2π ± 2πk.

(Figure: the digital frequency ω₀n wrapped around a circle, marked at 0, π/2,
π, 3π/2, and 2π)
Try This at Home

  n = -50:50;
  x = cos(pi*0.1*n);
  y = cos(pi*0.9*n);
  z = cos(pi*2.1*n);
  subplot(311)
  plot(n,x)
  title('x[n]=cos(0.1\pin)')
  grid
  subplot(312)
  plot(n,y)
  title('y[n]=cos(0.9\pin)')
  grid
  subplot(313)
  plot(n,z)
  grid
  title('z[n]=cos(2.1\pin)')
  xlabel('n')
The Curse of Digital Frequency
• Recall that a discrete time signal can be obtained from a continuous time
  signal through the process of sampling: take a sample every Ts seconds:

  {x(n)} = x(nTs) = x(t)|_{t=nTs},   n = …, −2, −1, 0, 1, 2, …

• When time is discretized, what happens to frequency? Consider the following:

  y(t) = A sin(Ωt − θ)
  ys(nTs) = A sin(ΩnTs − θ)

  ω = ΩTs = 2πΩ/Ωs   ⇒   f = F/Fs

  where Ωs is the sampling frequency in rad/s and Fs the sampling frequency in
  samples/s.

• Note that if Ω = Ωs → ω = 2π. What does this mean?

The Sampling Process
• Remember: if the unit of the sampling period Ts is in seconds,
  - The unit of discrete time n is samples (or just an index without a unit)
  - The unit of normalized digital angular frequency ω is radians/sample
  - The unit of analog angular frequency Ω is radians/second
  - The unit of analog frequency f is Hertz (Hz)
• More about sampling… Coming soon…!
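The mapping ω = ΩTs = 2πF/Fs can be demonstrated directly: two analog frequencies that differ by exactly Fs produce identical sample sequences. An illustrative sketch (the numbers are arbitrary examples):

```python
import numpy as np

Fs = 100                       # sampling frequency, samples/s (arbitrary)
Ts = 1/Fs
n = np.arange(0, 50)

F1 = 10                        # 10 Hz analog tone
F2 = F1 + Fs                   # 110 Hz analog tone, exactly Fs higher

x1 = np.cos(2*np.pi*F1*n*Ts)   # samples of cos(2*pi*F1*t)
x2 = np.cos(2*np.pi*F2*n*Ts)   # samples of cos(2*pi*F2*t)

# Digital frequencies w = 2*pi*F/Fs differ by exactly 2*pi,
# so the two sampled sequences are indistinguishable (aliasing).
w1 = 2*np.pi*F1/Fs             # 0.2*pi rad/sample
w2 = 2*np.pi*F2/Fs             # 2.2*pi rad/sample
```

This is the 2πk indistinguishability of the previous slides seen from the analog side: once sampled, 110 Hz is an alias of 10 Hz at Fs = 100 samples/s.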


Some Basic Operations on Sequences
• Product (modulation) operation:
  - Modulator:  y[n] = x[n] · w[n]
• Creating a finite-length sequence from an infinite-length sequence by
  multiplying the latter with a finite-length sequence called a window sequence
• This process is called windowing
Some Basic Operations
on Sequences

 Addition operation:
x[n] + y[n]
ª Adder
y[n] = x[n] + w[n]
w[n]
 Multiplication operation

ª Multiplier
A
x[n] y[n] y[n] = A ⋅ x[n]
Some Basic Operations
on Sequences

 Time-shifting operation: y[n] = x[n − N ]


where N is an integer
 If N > 0, it is a delaying operation

ª Unit delay x[n] z −1 y[n] y[n] = x[n − 1]


 If N < 0, it is an advance operation

ª Unit advance
x[n] z y[n]
y[n] = x[n + 1]
Some Basic Operations
on Sequences

 Time-reversal (folding) operation:

y[n] = x[− n]
 Branching operation: Used to provide multiple copies of
a sequence

x[n] x[n]

x[n]
Basic Operations
An Example

y[n] = α1x[n] + α 2 x[n − 1] + α 3 x[n − 2] + α 4 x[n − 3]


Discrete Convolution
 The operation by far the most commonly used by DSP professionals,
and most commonly misused, abused and confused by students.

 At the heart of any DSP system:

Discrete-time
x[n] System, h[n]
y[n]
Input sequence Output sequence
y[n] = x[n] ∗ h[n] = ∑_{m=−∞}^{∞} x[m]·h[n−m] = ∑_{m=−∞}^{∞} h[m]·x[n−m]

h[n]: impulse response of the system
Discrete Convolution
 Again, note that the “n” dependency of y[n] is misleading. For each
value of “n” the convolution sum must be computed separately over
all values of the dummy variable “m”. So, for each “n”
1. Rename the independent variable as m. You now have x[m] and h[m]. Flip h[m]
over the origin. This is h[-m]
2. Shift h[-m] as far left as possible to a point “n”, where the two signals barely touch.
This is h[n-m]
3. Multiply the two signals and sum over all values of m. This is the convolution sum
for the specific “n” picked above.
4. Shift / move h[-m] to the right by one sample, and obtain a new h[n-m]. Multiply
and sum over all m.
5. Repeat 2~4 until h[n-m] no longer overlaps with x[m], i.e., shifted out of the x[m]
zone.
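The five steps above can be coded literally. A minimal Python/NumPy sketch (assuming both finite sequences start at n = 0, an arbitrary convention for the illustration; the test values are also arbitrary):

```python
import numpy as np

# The flip-shift-multiply-sum recipe, coded literally for finite
# sequences that both start at n = 0.
def conv_by_steps(x, h):
    N = len(x) + len(h) - 1              # support of the result
    y = np.zeros(N)
    for n in range(N):                   # for each output index n ...
        for m in range(len(x)):          # ... sum over the dummy variable m
            if 0 <= n - m < len(h):      # h[n-m] is zero outside its support
                y[n] += x[m] * h[n - m]
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0])
print(conv_by_steps(x, h))               # matches np.convolve(x, h)
```

For finite sequences the result matches the library routine (np.convolve here, conv in Matlab).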
Convolution Demo

x[n] = 0.55^(n+3)·(u[n+3] − u[n−7])     (dconvdemo.m)     h[n] = u[n] − u[n−10]


In Matlab
 Matlab has the built-in convolution function, conv(.)
 Be careful, however, in setting the time axis

n=-3:6;                     % x[n] is nonzero for -3 <= n <= 6
x=0.55.^(n+3);
h=ones(1,10);               % h[n]=u[n]-u[n-10]: ten ones, for n = 0..9
y=conv(x,h);                % y[n] is supported on n = -3..15 (19 samples)
subplot(311)
stem(n,x)                   % Use stem for discrete sequences
title('Original signal')
subplot(312)
stem(0:9,h)
title('Impulse response / second signal')
subplot(313)
stem(-3:15,y)
title('Convolution result')
Lecture 5

Discrete
Convolution
&
Classification
of Discrete
Sequences

Digital Signal Processing, © 2004 Robi Polikar, Rowan University


Today in DSP
 Discrete Convolution – Demystified
ª In class exercises
 Classification of Discrete Sequences
 Basic Operations on Discrete Sequences
Discrete Convolution
 The operation by far the most commonly used by DSP professionals,
and most commonly misused, abused and confused by students.

 At the heart of any DSP system:

Discrete-time
x[n] System, h[n]
y[n]
Input sequence Output sequence
y[n] = x[n] ∗ h[n] = ∑_{m=−∞}^{∞} x[m]·h[n−m] = ∑_{m=−∞}^{∞} h[m]·x[n−m]

h[n]: impulse response of the system
Discrete Convolution
 Again, note that the “n” dependency of y[n] is misleading. For each
value of “n” the convolution sum must be computed separately over
all values of the dummy variable “m”. So, for each “n”
1. Rename the independent variable as m. You now have x[m] and h[m]. Flip h[m]
over the origin. This is h[-m]
2. Shift h[-m] as far left as possible to a point “n”, where the two signals barely touch.
This is h[n-m]
3. Multiply the two signals and sum over all values of m. This is the convolution sum
for the specific “n” picked above.
4. Shift / move h[-m] to the right by one sample, and obtain a new h[n-m]. Multiply
and sum over all m.
5. Repeat 2~4 until h[n-m] no longer overlaps with x[m], i.e., shifted out of the x[m]
zone.
Convolution Demo

x[n] = 0.55^(n+3)·(u[n+3] − u[n−7])     (dconvdemo.m)     h[n] = u[n] − u[n−10]


In Matlab
 Matlab has the built-in convolution function, conv(.)
 Be careful, however, in setting the time axis

n=-3:6;                     % x[n] is nonzero for -3 <= n <= 6
x=0.55.^(n+3);
h=ones(1,10);               % h[n]=u[n]-u[n-10]: ten ones, for n = 0..9
y=conv(x,h);                % y[n] is supported on n = -3..15 (19 samples)
subplot(311)
stem(n,x)                   % Use stem for discrete sequences
title('Original signal')
subplot(312)
stem(0:9,h)
title('Impulse response / second signal')
subplot(313)
stem(-3:15,y)
title('Convolution result')
In class Exercise
 See Example 1.6 on p.27 of Bose for an additional example.

 Compute the convolution of the following sequences, using the


mathematical definition, as well as using Matlab: (Q.40)

a. x[n] = (1/2)^n u[n−3],   y[n] = 2^n u[3−n]
b. x[n] = y[n] = α^n u[n]
The Vector Method for Convolution
of Finite Sequences

 By example - Q. 41 (a) x[n]=[-1 2 -3 2 -1]; y[n]=[-0.5 1 1.5]

Negative shifts of n Positive shifts of n


In Class Exercise
 Using the vector method, compute the discrete convolution of
ª x[n]=[-5 0.25 -1 0 2 7], y[n]=[1 -1.5 1 2]

Solution: z[n]= [-5 7.75 -6.375 -8.25 1.5 2 -8.5 11 14]
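The same exercise can be checked with NumPy's built-in convolution (np.convolve here plays the role of Matlab's conv):

```python
import numpy as np

# Verify the in-class exercise against the library convolution routine.
x = [-5, 0.25, -1, 0, 2, 7]
y = [1, -1.5, 1, 2]
z = np.convolve(x, y)
print(z)   # matches the solution z[n] given above
```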


Classification of Signals
 We have already seen some:
ª Continuous vs. discrete
ª Periodic vs. aperiodic
 Here are some others:
ª Symmetric vs. antisymmetric vs. nonsymmetric
ª Energy vs. power sequences
ª Bounded vs. unbounded sequences
ª Absolutely summable vs. square summable sequences*
Symmetry
 A discrete sequence is called
ª Symmetric if, x[n] = x[-n]
ª Conjugate-symmetric if x[n]=x*[-n],
• if x[n] is real, then symmetric and conjugate-symmetric are the same, and the signal is also referred
to as an even sequence

Even sequence

ª Conjugate-antisymmetric if x[n]=-x*[-n]
• If x[n] is real, the signal is simply called an antisymmetric or odd sequence

Odd sequence
Symmetry
 Any real sequence can be expressed as a sum of its even part and its odd part:

x[n] = xev [n] + xod [n]


where

xev[n] = ½(x[n] + x[−n])        xod[n] = ½(x[n] − x[−n])

 Any complex sequence can be expressed as a sum of its conjugate-symmetric part


and its conjugate-antisymmetric part:
x[n] = xcs [n] + xca [n]
where

xcs[n] = ½(x[n] + x*[−n])       xca[n] = ½(x[n] − x*[−n])

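The decomposition above is easy to verify numerically. A Python/NumPy sketch, using an arbitrarily chosen causal sequence stored on the symmetric index range n = −4…4, so that x[−n] is simply the reversed array:

```python
import numpy as np

# Even/odd decomposition of a real sequence, using the formulas above.
n = np.arange(-4, 5)
x = np.where(n >= 0, 0.8 ** n, 0.0)   # causal, hence not symmetric

x_rev = x[::-1]                       # x[-n]
x_ev = 0.5 * (x + x_rev)
x_od = 0.5 * (x - x_rev)

print(np.allclose(x_ev + x_od, x))    # True: the two parts rebuild x
print(np.allclose(x_ev, x_ev[::-1]))  # True: even part satisfies x[n] = x[-n]
print(np.allclose(x_od, -x_od[::-1])) # True: odd part satisfies x[n] = -x[-n]
```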

Energy & Power
in Sequences
 Total energy of a sequence x[n] is defined by

εx = ∑_{n=−∞}^{∞} x[n]·x*[n] = ∑_{n=−∞}^{∞} |x[n]|²
 An infinite length sequence with finite sample values may or may not have finite
energy
 A finite length sequence with finite sample values has finite energy
 We define the energy of a sequence x[n] over a finite interval −K ≤ n ≤ K as

εx,K = ∑_{n=−K}^{K} |x[n]|²

 The average power of an aperiodic sequence is defined by

Px = lim_{K→∞} (1/(2K+1)) ∑_{n=−K}^{K} |x[n]|²
Energy & Power
in Sequences

 Then

Px = lim_{K→∞} εx,K / (2K+1)

 The average power of a periodic sequence x̃[n] with a period N is given by

Px = (1/N) ∑_{n=0}^{N−1} |x̃[n]|²

ª The average power of an infinite-length sequence may be finite or infinite

ª The average power of a periodic signal is _____________

ª The average power of a finite sequence is ______________


An Example
 Consider the causal sequence
x[n] = 3(−1)^n for n ≥ 0,   0 for n < 0
 What is the energy and power of this signal?

1  K 1 K 2 9( K + 1)
Px = lim ∑
Px=9lim 1  =∑ xlim
[ n] = 4 .5
K →∞ 2 K + 1  Kn→=∞0  n=− KK →∞ 2 K + 1
2 K +1

Hint: Recall (and make sure to remember throughout the semester) the following:

∑_{n=0}^{∞} a^n = 1/(1−a),   |a| < 1

∑_{n=m}^{N} a^n = (a^m − a^{N+1})/(1−a),   a ≠ 1
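The limit above can be sanity-checked numerically. A Python/NumPy sketch, with the truncation K an arbitrary choice:

```python
import numpy as np

# Numerical check of the example: x[n] = 3(-1)^n for n >= 0, so |x[n]|^2 = 9
# there, and the average over the 2K+1 samples in -K..K is 9(K+1)/(2K+1) -> 4.5.
K = 10000
x_nonzero = 3.0 * (-1.0) ** np.arange(K + 1)   # x[n] for n = 0..K

P = np.sum(np.abs(x_nonzero) ** 2) / (2 * K + 1)
print(P)   # ~4.5002, approaching 4.5; the total energy itself is infinite
```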
Energy & Power Sequences

 A sequence with finite average power is called a power signal. Unless


the power is zero, a power signal has infinite energy
ªA periodic sequence has a finite average power but infinite energy

 A sequence with finite energy is called an energy signal. An energy


signal has zero average power.
ªA finite-length sequence has finite energy but zero average power

Boundedness &
Summability*
 A sequence x[n] is said to be bounded if |x[n]| ≤ Bx < ∞
ª Example: x[n] = cos 0.3πn


 A sequence x[n] is said to be absolutely summable if ∑_{n=−∞}^{∞} |x[n]| < ∞

ª Example:

y[n] = 0.3^n for n ≥ 0,  0 for n < 0:    ∑_{n=0}^{∞} 0.3^n = 1/(1−0.3) = 1.42857 < ∞

 A sequence x[n] is said to be square summable if ∑_{n=−∞}^{∞} |x[n]|² < ∞

ª Example: h[n] = sin(0.4n)/(πn) is square summable, but not absolutely summable
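A numerical look at the partial sums of the two examples above (Python/NumPy; the truncation lengths are arbitrary choices):

```python
import numpy as np

# 0.3^n u[n] is absolutely summable: its sum converges to 1/(1-0.3).
geo_abs = np.sum(0.3 ** np.arange(50))
print(geo_abs)                   # ~1.42857

# h[n] = sin(0.4n)/(pi n): the square sum settles to a finite value while
# the absolute sum keeps creeping up (log-like growth). The n = 0 term is
# omitted to avoid division by zero.
n = np.arange(1, 200001)
h = np.sin(0.4 * n) / (np.pi * n)
print(np.sum(h ** 2))            # small and essentially converged
print(np.sum(np.abs(h)))         # still growing as more terms are added
```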
Some Basic Operations
on Sequences
 Product (modulation) operation:

ª Modulator
x[n] × y[n]
y[n] = x[n] ⋅ w[n]
w[n]

 Creating a finite-length sequence from an infinite-length


sequence by multiplying the latter with a finite-length
sequence called a window sequence
 Process called windowing
Some Basic Operations
on Sequences

 Addition operation:
x[n] + y[n]
ª Adder
y[n] = x[n] + w[n]
w[n]
 Multiplication operation

ª Multiplier
A
x[n] y[n] y[n] = A ⋅ x[n]
Some Basic Operations
on Sequences

 Time-shifting operation: y[n] = x[n − N ]


where N is an integer
 If N > 0, it is a delaying operation

ª Unit delay x[n] z −1 y[n] y[n] = x[n − 1]


 If N < 0, it is an advance operation

ª Unit advance
x[n] z y[n]
y[n] = x[n + 1]
Some Basic Operations
on Sequences

 Time-reversal (folding) operation:

y[n] = x[− n]
 Branching operation: Used to provide multiple copies of
a sequence

x[n] x[n]

x[n]
Basic Operations
An Example

y[n] = α1·x[n] + α2·x[n−1] + α3·x[n−2] + α4·x[n−3]
Next Time…
 Discrete Systems … Read pages 21 - 33
Lecture 6

Operations on
Discrete
Signals

Discrete
Systems

Digital Signal Processing, © 2004 Robi Polikar, Rowan University


Today in DSP
 Basic Operations on Discrete Sequences
 Discrete Systems
 Classification of Discrete Systems
ª Linearity
ª Shift-invariance
ª Causality
ª Memory
ª Stability
Some Basic Operations
on Sequences
 Product (modulation) operation:

ª Modulator
x[n] × y[n]
y[n] = x[n] ⋅ w[n]
w[n]

 Creating a finite-length sequence from an infinite-length


sequence by multiplying the latter with a finite-length
sequence called a window sequence
 Process called windowing
Some Basic Operations
on Sequences

 Addition operation:
x[n] + y[n]
ª Adder
y[n] = x[n] + w[n]
w[n]
 Multiplication operation

ª Multiplier
A
x[n] y[n] y[n] = A ⋅ x[n]
Some Basic Operations
on Sequences

 Time-shifting operation: y[n] = x[n − N ]


where N is an integer
 If N > 0, it is a delaying operation

ª Unit delay x[n] z −1 y[n] y[n] = x[n − 1]


 If N < 0, it is an advance operation

ª Unit advance
x[n] z y[n]
y[n] = x[n + 1]
Some Basic Operations
on Sequences

 Time-reversal (folding) operation:

y[n] = x[− n]
 Branching operation: Used to provide multiple copies of
a sequence

x[n] x[n]

x[n]
Basic Operations
An Example

y[n] = α1·x[n] + α2·x[n−1] + α3·x[n−2] + α4·x[n−3]
Discrete Systems
 A discrete-time system processes a given input sequence x[n] to
generate an output sequence y[n] with more desirable
properties
 In most applications, the discrete-time system is a single-input,
single-output system:

x[n] DISCRETE TIME y[n]


SYSTEM
Input sequence Output sequence
y[n] = ℜ{x[n]}

The operator that acts on the input sequence


Classification of Systems
 Discrete time systems can be classified based on their properties

ª Discrete vs. continuous


ª Linear vs. non-linear
ª Shift-invariant or shift-variant
ª Causal vs. noncausal
ª Memoryless vs. with memory
ª Stable vs. unstable
Linearity
 Let y1[n] be the output due to an input x1[n] and y2[n] be the output due to an input
x2[n]. A system is said to be linear, if the following superposition & homogeneity
properties are satisfied

x[n] = α x1[n] + β x2 [n] y[n] = α y1[n] + β y2 [n]


ℜ{αx1[n] + βx2[n]} = α·ℜ{x1[n]} + β·ℜ{x2[n]}

 This property must hold for any arbitrary constants α and β , and for all possible
inputs x1[n] and x2[n], and can also be generalized to any arbitrary number of inputs
An Example - Accumulator
 A discrete system whose input / output relationship is given as

y[n] = ∑_{l=−∞}^{n} x[l] = y[−1] + ∑_{l=0}^{n} x[l]

(the second form is used if the signal is causal, in which case y[−1] is the initial condition)

is known as an accumulator. The output at any given time, is simply the sum of all
inputs up to that time.

 Is the accumulator linear? (In-class exercise)
 How about the accumulator for the causal case?
 How about the system y[n] = a·x[n] + b?
 As an exercise, try y[n] = x²[n] − x[n−1]·x[n+1]
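The superposition test in the linearity definition can be run numerically. A Python/NumPy sketch (not a proof — it only checks the property on arbitrarily chosen test inputs and constants), comparing the accumulator with the affine system y[n] = a·x[n] + b:

```python
import numpy as np

# Feed the system alpha*x1 + beta*x2 and compare against the
# superposition alpha*S(x1) + beta*S(x2).
rng = np.random.default_rng(0)
x1 = rng.normal(size=20)
x2 = rng.normal(size=20)
alpha, beta = 2.0, -3.0

accumulator = np.cumsum                 # y[n] = sum of x[0..n]
def affine(x):                          # y[n] = a*x[n] + b with a=0.5, b=1
    return 0.5 * x + 1.0

for name, S in [("accumulator", accumulator), ("affine", affine)]:
    lhs = S(alpha * x1 + beta * x2)
    rhs = alpha * S(x1) + beta * S(x2)
    print(name, "linear:", np.allclose(lhs, rhs))
# accumulator passes; the affine system fails (the constant b breaks homogeneity)
```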
Shift - Invariance
 A system is said to be shift-invariant if ℜ{x[n−M]} = y[n−M], for all n and M.

 For sequences and systems where the index n is related to discrete instants
of time, this property is also called the time-invariance property

 Time-invariance property ensures that for a specified input, the output is


independent of the time the input is being applied
An Example - Upsampler
 A system whose input / output characteristics can be written as
 x[ n / L], n = 0, ± L, ± 2 L, .....
xu [n] = 
 0, otherwise

is known as an upsampler.
 This system inserts L−1 zeros between consecutive samples. If the inserted samples are
instead assigned amplitudes interpolated from their neighbors, the system is called an interpolator.

Is the upsampler a time-invariant system? (In-class exercise)

[Figure: stem plot of the upsampled sequence xu[n], n = 0…12]
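The in-class question can be probed numerically. A Python/NumPy sketch of a factor-L upsampler; the shift is implemented circularly with np.roll, an arbitrary simplification for finite arrays:

```python
import numpy as np

# Factor-L upsampler: keep x[n/L] at n = 0, L, 2L, ... and insert zeros.
# Time-invariance test: compare "shift then upsample" against
# "upsample then shift".
def upsample(x, L):
    xu = np.zeros(len(x) * L)
    xu[::L] = x
    return xu

x = np.array([1.0, 2.0, 3.0, 4.0])
L = 3

a = upsample(np.roll(x, 1), L)     # delay the input by one sample first
b = np.roll(upsample(x, L), 1)     # delay the output by one sample instead
print(np.array_equal(a, b))        # False: the upsampler is not time-invariant
```

A one-sample input delay moves the output by L samples, not by one — which is exactly why the two orderings disagree.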
Linear
Time-Invariant Systems
 A system that satisfies both the linearity and the time (shift)
invariance properties is called a linear time (shift) invariant system,
LTI (LSI).

 We will see that these group of systems play a particularly important


role in signal processing.
ª They are easy to characterize and analyze, and hence easy to design
ª Efficient algorithms have been developed over the years for such systems
Causality
 A system is said to be causal if the output at time n0 does not depend on
the inputs that come after n0.

 In other words, in a causal system, the n0th output sample y[n0]


depends only on input samples x[n] for n ≤ n0 and does not depend on
input samples for n>n0.
 Here are some examples: Which systems are causal?

y[n] = α1·x[n] + α2·x[n−1] + α3·x[n−2] + α4·x[n−3]

y[n] = b0·x[n] + b1·x[n−1] + b2·x[n−2] + a1·y[n−1] + a2·y[n−2]

y[n] = y[n−1] + x[n]

y[n] = xu[n] + ½(xu[n−1] + xu[n+1])

y[n] = xu[n] + ⅓(xu[n−1] + xu[n+2]) + ⅔(xu[n−2] + xu[n+1])
An Example - Downsampler
 A system whose input-output characteristic satisfies y[ n] = x[ Mn]
where M is a (+) integer, is called a downsampler or a decimator.
ª Such a system reduces the number of samples by a factor of M by keeping only every
Mth sample and discarding the M−1 samples in between.

 Is this system
ª Linear?
ª Time (shift) invariant?
ª Causal?
(In-class exercise)
Memory
 A system is said to be memoryless if the output depends only on the
current input, but not on any other past (or future) inputs. Otherwise,
the system is said to have memory.

 Which of the following systems have memory?

y[n] = y[n−1] + x[n]

y[n] = x[Mn]

xu[n] = x[n/L] for n = 0, ±L, ±2L, …;  0 otherwise

y[n] = ∑_{l=−∞}^{n} x[l]
Stability
 There are several definitions of stability, which is of utmost
importance in filter design. We will use the definition of stability in
the BIBO sense

 A systems is said to be stable in the bounded input bounded output


sense, if the system produces a finite (bounded) output for any finite
(bounded) input, that is,

ª If y[n] is the response to an input x[n] that satisfies |x[n]| ≤ Bx < ∞, and y[n] satisfies
|y[n]| ≤ By < ∞, then the system is said to be stable in the BIBO sense.

 A system (filter) that is not stable is rarely of any practical use (except in
very specialized applications), and therefore, most filters are designed
to be BIBO stable.
An Example –
Moving Average Filter
 An M-point moving average system (filter) is defined as

y[n] = (1/M) ∑_{k=0}^{M−1} x[n−k]
 What would you use such a system for?
 Is this system stable?
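The stability question has a quick numerical illustration. A Python/NumPy sketch of the M-point moving average (M and the test signal are arbitrary choices): because its coefficients are nonnegative and sum to 1, a bounded input always gives a bounded output with |y[n]| ≤ max|x[n]|:

```python
import numpy as np

# M-point moving average: h[n] = 1/M for 0 <= n < M. The coefficients are
# nonnegative and sum to 1, so the output never exceeds the input's peak —
# a direct illustration of BIBO stability (it also smooths noise).
M = 5
h = np.ones(M) / M

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=200)           # bounded input, |x[n]| <= 1
y = np.convolve(x, h)                          # filter the noisy input

print(np.max(np.abs(y)) <= np.max(np.abs(x)))  # True
```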
Try this at Home

n=0:99;
s=2*(n.*(0.9).^n);          % smooth underlying signal
d=rand(1,100);              % additive noise
x=s+d;
subplot(211)
plot(n,x); grid

y=zeros(1,100);
for i=5:100
  y(i)=0.2*(x(i)+x(i-1)+x(i-2)+x(i-3)+x(i-4));   % 5-point moving average
end

subplot(212)
plot(n,y); grid
Lecture 7

Characterization
of Discrete
Systems
Impulse
Response

Digital Signal Processing, © 2004 Robi Polikar, Rowan University


Today in DSP
 Characterization of Discrete Systems
ª Impulse Response
ª Constant coefficient linear difference equations (CCLDE)
ª FIR systems
ª IIR systems
Characterization of
Discrete Systems
 Discrete systems can be characterized in several ways:
ª Impulse response (time)
ª Linear constant coefficient difference equations (time)
ª Frequency response (frequency)
ª Transfer function (frequency)

x[n] Discrete System y[n]


Input sequence Output sequence
Impulse Response
 The response of a discrete system to a unit impulse sequence δ[n] is
called the impulse response of the system, and it is typically denoted
by h[n]

x[n]=δ[n] Discrete System y[n]=h[n]


h[n] = ℜ{δ[n]};  note the sifting property: x[n] ∗ δ[n] = ∑_{m=−∞}^{∞} x[m]·δ[n−m] = x[n]
Impulse response
 Ex: Consider the following system

y[ n] = α1 x[n] + α 2 x[ n − 1] + α 3 x[n − 2] + α 4 x[n − 3]


ª The impulse response of this system can be obtained by setting the input to the
impulse sequence ⇒

h[n] = α1δ [n] + α 2δ [n − 1] + α 3δ [n − 2] + α 4δ [n − 3]


x[n] ∗ δ[n] = ∑_{m=−∞}^{∞} x[m]·δ[n−m]
            = … + x[−1]δ[n+1] + x[0]δ[n] + x[1]δ[n−1] + …

⇒ {h[n]} = {α1, α2, α3, α4}

Impulse Response
 Recall the discrete time accumulator: n
y[n] = ∑ x[l]
l = −∞
ª What is the impulse response of this system?

n
∑ δ [l] = u[n]
h[n] = h[n]=…
l = −∞
Impulse Response
 So, what is the big deal?

 The impulse response plays a monumental role in characterization of


LTI systems.
ª In fact, if you know the impulse response of a discrete LTI system, then you know
the response of the system to any arbitrary input!
ª You tell me h[n], I will tell you the response to any x[n]

Discrete System
x[n] y[n]=x[n]*h[n]
h[n]
Input sequence Output sequence

y[n] = x[n] ∗ h[n] = ∑_{m=−∞}^{∞} x[m]·h[n−m]
Impulse Response
 To see why, consider the following example
[Figure: stem plots of x[n] (n = 0…4, values −2, 0, 1, −1, 3) and h[n] (n = 0…2)]

 Recall that any sequence can be represented as a sum of impulses:

x[n] = … + x[−1]δ[n+1] + x[0]δ[n] + x[1]δ[n−1] + … = ∑_{m=−∞}^{∞} x[m]δ[n−m]

ª x[n]=-2δ[n]+0δ[n-1]+1δ[n-2]-1δ[n-3]+3δ[n-4]
Impulse Response
for LTI Systems
 For an LTI system, we can add the individual responses of the system to the
individual impulses that make up the input sequence:

x[n] = −2δ[n] + 0δ[n−1] + 1δ[n−2] − 1δ[n−3] + 3δ[n−4]

Each scaled, shifted impulse x[m]δ[n−m] produces the scaled, shifted response x[m]h[n−m];
adding all of these contributions gives

y[n] = −2h[n] + 0h[n−1] + 1h[n−2] − 1h[n−3] + 3h[n−4]
     = x[0]h[n] + x[1]h[n−1] + x[2]h[n−2] + x[3]h[n−3] + x[4]h[n−4]


y[n] = x[n] ∗ h[n] = ∑_{m=−∞}^{∞} x[m]·h[n−m]
Impulse Response
Example
 Continuing with the example
[Figure: stem plots of x[n] and h[n], as on the previous slide]

y[n] = −2h[n] + 0h[n−1] + 1h[n−2] − 1h[n−3] + 3h[n−4]
     = x[0]h[n] + x[1]h[n−1] + x[2]h[n−2] + x[3]h[n−3] + x[4]h[n−4]

[Figure: stem plot of the resulting convolution y[n]]
Properties
 There are several useful properties of the convolution and impulse
response characterization:
ª An LTI system is BIBO stable if its impulse response is absolutely summable:

∑_{n=−∞}^{∞} |h[n]| < ∞

ª An LTI system is causal, if its impulse response is a causal sequence, i.e.,


h[n]=0, n<0
ª If more than one system is connected in series or parallel, the effective system
impulse response can be obtained as follows:
• Cascade: h[n] = h1[n] ∗ h2[n]  (the order of the subsystems does not matter)
• Parallel: h[n] = h1[n] + h2[n]
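Both interconnection rules are easy to confirm numerically. A Python/NumPy sketch with two short FIR impulse responses and a test input (all values arbitrary choices for the illustration):

```python
import numpy as np

# Check the cascade and parallel rules with the library convolution.
h1 = np.array([1.0, 0.5])
h2 = np.array([0.5, -0.25])
x = np.array([1.0, -2.0, 3.0, 0.0, 1.0])

# Cascade: overall h[n] = h1[n] * h2[n], and the order does not matter
cascade_a = np.convolve(np.convolve(x, h1), h2)
cascade_b = np.convolve(x, np.convolve(h2, h1))
print(np.allclose(cascade_a, cascade_b))    # True

# Parallel: overall h[n] = h1[n] + h2[n] (equal lengths here, so we can add)
parallel_a = np.convolve(x, h1) + np.convolve(x, h2)
parallel_b = np.convolve(x, h1 + h2)
print(np.allclose(parallel_a, parallel_b))  # True
```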
Properties
 A cascade connection of two stable systems is always stable
 If a cascade system satisfies the following condition
h1[n] ∗ h2[n] = δ[n]
then h1 and h2 are called inverse systems
 An application of the inverse system concept is in the recovery of a signal x[n]
from its distorted version x’[n] appearing at the output of a transmission
channel
 If the impulse response of the channel is known, then x[n] can be recovered by
designing an inverse system of the channel

x[n] → h1[n] (channel) → x′[n] → h2[n] (inverse system) → x[n]

h1[n] ∗ h2[n] = δ[n]


Exercise

 Consider the discrete-time system where

h1[n] = δ[n] + 0.5δ[n−1]
h2[n] = 0.5δ[n] − 0.25δ[n−1]
h3[n] = 2δ[n]
h4[n] = −2(0.5)^n u[n]

[Figure: block diagram interconnecting h1, h2, h3 and h4 through two adders]

What is the overall system response?
Finite Impulse Response
Systems
 If the impulse response h[n] of a system is of finite length, that system
is referred to as a finite impulse response (FIR) system:

h[n] = 0 for n < N1 and n > N2,   N1 < N2

ª The output of such a system can then be computed as a finite convolution sum

y[n] = ∑_{k=N1}^{N2} h[k]·x[n−k]

ª E.g., h[n] = [1 2 0 3 −1] is an FIR system (filter)

ª FIR systems are also called nonrecursive systems (for reasons that will later
become obvious), where the output can be computed from the current and past
input values only – without requiring the values of previous outputs.
Infinite Impulse Response
Systems
 If the impulse response is of infinite length, then the system is referred
to as an infinite impulse response (IIR) system. These systems cannot
be characterized by the convolution sum due to infinite sum.
ª Instead, they are typically characterized by linear constant coefficient difference
equations, as we will see later.
ª Recall accumulator and note that it can have an alternate – and more compact
representation that makes the current output a function of previous inputs and
outputs
y[n] = ∑_{l=−∞}^{n} x[l]   ⇔   y[n] = y[n−1] + x[n]
ª The impulse response of this system (which is of infinite length), cannot be
represented with a finite convolution sum. Note that, since the current output
depends on the previous outputs, this is also called a recursive system
Constant Coefficient
Linear Difference Equations
 All discrete systems can also be represented using constant
coefficient, linear difference equations of the form
y[n] + a1·y[n−1] + a2·y[n−2] + … + aN·y[n−N] = b0·x[n] + b1·x[n−1] + … + bM·x[n−M]

(left-hand side: outputs y[n];  right-hand side: inputs x[n])

∑_{i=0}^{N} ai·y[n−i] = ∑_{j=0}^{M} bj·x[n−j],   a0 = 1

ª The constant coefficients ai and bj are called the filter coefficients


ª Integers M and N represent the maximum delay in the input and output,
respectively. The larger of the two numbers is known as the order of the filter.
ª Any LTI system can be represented as two finite sum of products!
FIR Systems
 Note that the expression indicate the most general form of an LTI system:
∑_{i=0}^{N} ai·y[n−i] = ∑_{j=0}^{M} bj·x[n−j],   a0 = 1
ª If the current output y[n] does not depend on previous outputs y[n-i], that is if all ai=0
(except a0=1), then we have no recursion – such systems are FIR (non-recursive) systems
y[n] = ∑_{j=0}^{M} bj·x[n−j]

ª Note that the impulse response of an FIR system can easily be obtained from its CCLDE
representation:

y[n] = ∑_{j=0}^{M} bj·x[n−j]  ⇒  h[n] = ∑_{j=0}^{M} bj·δ[n−j] = b0δ[n] + b1δ[n−1] + … + bMδ[n−M]


ª A finite sum of finite-valued terms is always finite; therefore, the impulse response of this
system will be finite, hence, finite impulse response.
ª Finite Impulse Response ⇔ Nonrecursive
FIR Systems
y[n] = ∑_{j=0}^{M} bj·x[n−j] = b0·x[n] + b1·x[n−1] + … + bM·x[n−M]

 Note that this representation looks similar to the definition of convolution. In


fact, y[n]=bn*x[n], that is the system output of an FIR filter is simply the
convolution of input x[n] with the filter coefficients bn
 Since we already know that the output of a system is the convolution of its
input with the system impulse response, it follows that filter coefficients bn
is the impulse response of an FIR filter!
 The CCLDE representation of an FIR system can schematically be
represented using the following diagram, known as the “filter structure”
[Figure: FIR filter structure — a tapped delay line of z−1 elements feeding multipliers bj and an adder chain]

The hardware implementation follows this structure exactly, using delay elements, adders and multipliers.
IIR Systems
 If in the general expression the ai are not zero, then the output depends on
former outputs, and hence this is a recursive system.
 The impulse response of an IIR system cannot be represented as a
closed finite convolution sum precisely due to recursion
 The filter structure of IIR systems – which has a distinct feedback
(recursion) loop, has the following form:

y[n] = b0·x[n] + b1·x[n−1] − a1·y[n−1]
CCLDE
 Note that, assuming the system is causal, y[n] can be pulled out of the
CCLDE equation to obtain:

y[n] = −∑_{i=1}^{N} (ai/a0)·y[n−i] + ∑_{j=0}^{M} (bj/a0)·x[n−j]

 Since the impulse response of an FIR system consists of finitely many
terms, an FIR system is always stable – a significant advantage of FIR systems

 IIR systems are not guaranteed to be stable, since their h[n] consists of
infinitely many terms. Their design requires stability checks!
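The recursion can be run directly to generate an IIR impulse response. A Python sketch for the hypothetical first-order system y[n] = 0.5·y[n−1] + x[n] (not an example from the slides), whose impulse response is h[n] = 0.5^n u[n]:

```python
import numpy as np

# Run the recursion y[n] = 0.5*y[n-1] + x[n] with a unit impulse input to
# generate its (infinite-length, here truncated) impulse response.
N = 20
x = np.zeros(N)
x[0] = 1.0                               # unit impulse
y = np.zeros(N)
for n in range(N):
    prev = y[n - 1] if n > 0 else 0.0    # zero initial condition
    y[n] = 0.5 * prev + x[n]

print(np.allclose(y, 0.5 ** np.arange(N)))  # True: h[n] = 0.5^n u[n]
```

Here |a1| = 0.5 < 1, so h[n] decays and the system happens to be stable; a feedback coefficient of magnitude ≥ 1 would make the same recursion blow up.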
Lecture 8

Representation
of Signals in
Frequency
Domain

Digital Signal Processing, © 2004 Robi Polikar, Rowan University


Today in DSP
 Representation of Signals in Frequency Domain
 A preview
The Frequency Domain
 Time domain operations are often not very informative and/or efficient
in signal processing
 An alternative representation and characterization of signals and
systems can be made in transform domain
ª Much more can be said, much more information can be extracted from a signal in
the transform / frequency domain.
ª Many operations that are complicated in time domain become rather simple
algebraic expressions in transform domain
ª Most signal processing algorithms and operations become more intuitive in
frequency domain, once the basic concepts of the frequency domain are
understood.
Frequency Domain
t=-1:0.001:1;
x=sin(2*pi*50*t);
subplot(211)
plot(t(1001:1200),x(1001:1200))
grid
title('Sin(2\pi50t)')
xlabel('Time, s')
subplot(212)
X=abs(fft(x));
X2=fftshift(X);
f=-499.9:1000/2001:500;
plot(f,X2);
grid
title('Frequency domain representation of Sin(2\pi50t)')
xlabel('Frequency, Hz.')
Frequency Domain
t=-1:0.001:1;
x=sin(2*pi*50*t)+sin(2*pi*75*t);
subplot(211)
plot(t(1001:1200),x(1001:1200))
grid
title('Sin(2\pi50t)+Sin(2\pi75t)')
xlabel('Time, s')
subplot(212)
X=abs(fft(x));
X2=fftshift(X);
f=-499.9:1000/2001:500;
plot(f,X2);
grid
title('Frequency domain representation of Sin(2\pi50t)+Sin(2\pi75t)')
xlabel('Frequency, Hz.')
The Fourier Transform

Spectrum: A compact representation of the frequency content of a signal


that is composed of sinusoids
Fourier Who…?

“An arbitrary function, continuous or with


discontinuities, defined in a finite interval by an
arbitrarily capricious graph can always be
expressed as a sum of sinusoids”
J.B.J. Fourier

Jean B. Joseph Fourier December 21, 1807


(1768-1830)

F[k] = ∫ f(t)·e^{−j2πkt/N} dt          f(t) = (1/2π) ∑_{k=0}^{N−1} F[k]·e^{j2πkt/N}
Jean B. J. Fourier
 He announced his discovery in a prize paper on the theory of heat (1807).
ª The judges: Laplace, Lagrange, Poisson and Legendre
 Three of the judges found it incredible that a sum of sines and cosines could add up
to anything but an infinitely differentiable function, but...
ª Lagrange: lack of mathematical rigor and generality ⇒ publication denied…

• Became famous with his other previous work on math, assigned as chair of Ecole Polytechnique
• Napoleon took him along to conquer Egypt
• Returned after several years
• Barely escaped the guillotine!

ª After 15 years, following several attempts, disappointments and frustrations, he published
his results in Théorie Analytique de la Chaleur (Analytical Theory of Heat) in 1822.
ª In 1829, Dirichlet proved Fourier’s claim under a few non-restrictive conditions.

ª Next 150 years: his ideas were expanded and generalized. 1965: Cooley and Tukey ⇒ the Fast
Fourier Transform ⇒ computational simplicity ⇒ the king of all transforms… Countless
applications in engineering, finance, applied mathematics, etc.
Fourier Transforms
 Fourier Series (FS)
ª Fourier’s original work: A periodic function can be represented as a weighted sum of
sinusoids that are integer multiples of the fundamental frequency Ω0 of the signal. ⇒ These
frequencies are said to be harmonically related, or simply harmonics.
 (Continuous) Fourier Transform (FT)
ª Extension of Fourier series to non-periodic functions: Any continuous aperiodic function
can be represented as an infinite sum (integral) of sinusoids. The sinusoids are no longer
integer multiples of a specific frequency anymore.
 Discrete Time Fourier Transform (DTFT)
ª Extension of FT to discrete sequences. Any discrete function can also be represented as an
infinite sum (integral) of sinusoids.
 Discrete Fourier Transform (DFT)
ª Because DTFT is defined as an infinite sum, the frequency representation is not discrete
(but continuous). An extension to DTFT is DFT, where the frequency variable is also
discretized.
 Fast Fourier Transform (FFT)
ª Mathematically identical to DFT, however a significantly more efficient implementation.
FFT is what signal processing made possible today!
Dirichlet Conditions
(1829)
 Before we dive into Fourier transforms, it is important to understand
for which type of functions, Fourier transform can be calculated.
 Dirichlet put the final period to the discussion on the feasibility of
Fourier transform by proving the necessary conditions for the
existence of Fourier representations of signals
ª The signal must have finite number of discontinuities
ª The signal must have finite number of extremum points within its period
ª The signal must be absolutely integrable within its period:

∫_{t0}^{t0+T} |x(t)| dt < ∞
 How restrictive are these conditions…?
A Demo
 http://homepages.gac.edu/~huber/fourier/
Fourier Series
 Any periodic signal x(t) whose fundamental period is T0 (hence,
fundamental frequency f0=1/T0) can be represented as a weighted sum of
complex exponentials (sines and cosines)
ª That is, a signal however arbitrary and complicated it may be, can be represented as
a sum of simple building blocks

x(t ) = ∑ ck e jΩ0kt
k = −∞

ª Note that each complex exponential that makes up the sum is an integer multiple
of Ω0, the fundamental frequency
ª Hence, the complex exponentials are harmonically related
ª The coefficients ck , aka Fourier (series) coefficients, are possibly complex
• Fourier series (and all other types of Fourier transforms) are complex valued ! That is,
there is a magnitude and phase (angle) term to the Fourier transform!
Fourier Series
 This is the synthesis equation: x(t) is synthesized from its building
blocks, the complex exponentials at integer multiples of Ω0

x(t ) = ∑ ck e jkΩ0t
k = −∞

kΩ0: the kth integer multiple – the kth harmonic of the fundamental frequency Ω0
ck: Fourier coefficients – how much of the kth harmonic exists in the signal
|ck|: magnitude of the kth harmonic ⇒ magnitude spectrum of x(t)
∠ck: phase of the kth harmonic ⇒ phase spectrum of x(t)
Together, |ck| and ∠ck form the spectrum of x(t)

 How to compute the Fourier coefficients, ck?


Fourier Series
 The coefficients ck can be obtained through the analysis equation.

1 t0 +T0 − jkΩ 0t
ck = ∫ x (t ) e dt
T0 t0

 Note that, while x(t) is a sum, ck are obtained through an integral of


complex values.
 More importantly, if x(t) is real, then the coefficients satisfy c−k = ck*,
that is, |c−k| = |ck| ⇒ why?
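The analysis equation can be evaluated numerically. A Python/NumPy sketch approximating the integral with a rectangle sum over one period of x(t) = cos(Ω0t), for which c1 = c−1 = 1/2 and all other ck = 0 (the period and grid size are arbitrary choices):

```python
import numpy as np

# Numerical analysis equation:
# c_k = (1/T0) * integral over one period of x(t)*exp(-j*k*Omega0*t) dt,
# approximated by a rectangle sum on a uniform grid.
T0 = 2.0
Omega0 = 2 * np.pi / T0
t = np.linspace(0.0, T0, 2000, endpoint=False)
dt = t[1] - t[0]
x = np.cos(Omega0 * t)

def c(k):
    return np.sum(x * np.exp(-1j * k * Omega0 * t)) * dt / T0

print(abs(c(1) - 0.5) < 1e-6)   # True: c1 = 1/2
print(abs(c(-1) - 0.5) < 1e-6)  # True: c-1 = 1/2 (real signal: c-k = ck*)
print(abs(c(2)) < 1e-6)         # True: a pure cosine has no other harmonics
```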
Lecture 9

The Fourier
Transform

Digital Signal Processing, © 2004 Robi Polikar, Rowan University


Today in DSP
 Fourier Series (cont.)
 Continuous Time Fourier Transform
Fourier Series
 Any periodic signal x(t) whose fundamental period is T0 (hence,
fundamental frequency f0=1/T0) can be represented as a weighted sum of
complex exponentials (sines and cosines)
ª That is, a signal however arbitrary and complicated it may be, can be represented as
a sum of simple building blocks

x(t ) = ∑ ck e jΩ0kt
k = −∞

ª Note that each complex exponential that makes up the sum is an integer multiple
of Ω0, the fundamental frequency
ª Hence, the complex exponentials are harmonically related
ª The coefficients ck , aka Fourier (series) coefficients, are possibly complex
• Fourier series (and all other types of Fourier transforms) are complex valued ! That is,
there is a magnitude and phase (angle) term to the Fourier transform!
Fourier Series
 This is the synthesis equation: x(t) is synthesized from its building
blocks, the complex exponentials at integer multiples of Ω0

x(t ) = ∑ ck e jkΩ0t
k = −∞

kΩ0: the kth integer multiple – the kth harmonic of the fundamental frequency Ω0
ck: Fourier coefficients – how much of the kth harmonic exists in the signal
|ck|: magnitude of the kth harmonic ⇒ magnitude spectrum of x(t)
∠ck: phase of the kth harmonic ⇒ phase spectrum of x(t)
Together, |ck| and ∠ck form the spectrum of x(t)

 How to compute the Fourier coefficients, ck?


Fourier Series
 The coefficients ck can be obtained through the analysis equation.

ck = (1/T0) ∫_{t0}^{t0+T0} x(t) e^{−jkΩ0t} dt

 Note that, while x(t) is synthesized as a sum, the ck are obtained through an
integral of complex values.
 More importantly, if x(t) is real, then the coefficients satisfy c−k = ck*,
that is, |c−k| = |ck| Î why?
 Interpret the meaning of c0…?
Trigonometric
Fourier Series
 Using Euler's formula, we can represent the complex Fourier series in
two trigonometric forms:

x(t) = a0/2 + Σ_{k=1}^{∞} (ak cos(kΩ0t) + bk sin(kΩ0t))

ak = (2/T0) ∫_{T0} x(t) cos(kΩ0t) dt
bk = (2/T0) ∫_{T0} x(t) sin(kΩ0t) dt

 As you might have already guessed, the trigonometric Fourier
coefficients ak and bk are not independent of the complex Fourier
coefficients:

a0 = 2c0,   ak = ck + c−k,   bk = j(ck − c−k)
ck = (ak − jbk)/2,   c−k = (ak + jbk)/2
Quick Facts About
Fourier Series
 Fourier series are computed for periodic signals (continuous or discrete). A periodic
signal has a discrete set of spectral components.
ª Each spectral component is represented by the ck and c−k Fourier series coefficients, k = 1, 2, …, N.
ª Each k represents one of the spectral components at integer multiples of Ω0, the
fundamental frequency of the signal. These discrete spectral components at Ω0, 2Ω0, …, NΩ0
are called harmonics.
• For example, the signal x(t) = cos 4t + sin 6t has two (four, if you count the negative frequencies)
spectral components. The fundamental frequency is Ω0 = 2, and c−3 = −1/(2j), c3 = 1/(2j), c−2 = c2 = 1/2

[Stem plot of ck vs. k: c±2 = 1/2 at k = ±2, c3 = 1/(2j) and c−3 = −1/(2j) at k = ±3;
k = 0, 1, 2, 3 correspond to Ω = 0, 2, 4, 6 rad/s]
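The stated coefficients for x(t) = cos 4t + sin 6t can be verified numerically. A quick cross-check in Python/NumPy (rather than the MATLAB used elsewhere in these slides), integrating the analysis equation over one period:

```python
import numpy as np

# x(t) = cos(4t) + sin(6t): Omega0 = gcd(4, 6) = 2 rad/s, so T0 = pi
Omega0 = 2.0
T0 = 2 * np.pi / Omega0
N = 8192
t = np.arange(N) * T0 / N
x = np.cos(4 * t) + np.sin(6 * t)

# Rectangle-rule approximation of c_k = (1/T0) * integral x(t) e^{-jk*Omega0*t} dt
c = {k: np.mean(x * np.exp(-1j * k * Omega0 * t)) for k in range(-4, 5)}

# cos(4t) contributes c_{+/-2} = 1/2; sin(6t) contributes c_3 = 1/(2j), c_{-3} = -1/(2j)
for k in (-3, -2, 2, 3):
    print(k, np.round(c[k], 6))
```

All other ck (k = 0, ±1, ±4) come out as zero, confirming that only the second and third harmonics are present.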
Quick Facts About
Fourier Series
 If a signal is even, then all bk=0, and if a signal is odd, then all ak=0
 If x(t) is real, then the Fourier series coefficients are conjugate symmetric, that is, c−k = ck*
ª This is in line with our interpretation of frequency as two phasors rotating at the
same rate but in opposite directions.
ª For real signals, the relationship between the complex and trigonometric
representations of the Fourier series further simplifies to ak = 2ℜ[ck], bk = −2ℑ[ck]
ª We also have a third representation for real signals:
x(t) = C0 + Σ_{k=1}^{∞} Ck cos(kΩ0t − θk)

C0 = a0/2 (DC component),  Ck = √(ak² + bk²) (harmonic amplitudes),
θk = tan⁻¹(bk/ak) (phase angles);  Ck cos(kΩ0t − θk) is the kth harmonic

The Fourier Transform
 Fourier series are defined only on periodic signals. Non-periodic
continuous time signals can also be represented as a sum of weighted
complex exponentials; however
ª The complex exponentials are no longer discrete and integer multiples of a
fundamental frequency.
ª Unlike the FS, where we represented the signal with a finite sum of harmonics, for
non-periodic signals, we need a sum (integration) of continuum of frequencies.
ª The continuous Fourier transform (FT) can be obtained from the FS
representation of a signal by letting the period of the (non-periodic) signal go to
infinity. This results in the transform pair x(t) ⇔ X(Ω):

X(Ω) = ℑ{x(t)} = ∫_{−∞}^{∞} x(t) e^{−jΩt} dt

x(t) = ℑ⁻¹{X(Ω)} = (1/2π) ∫_{−∞}^{∞} X(Ω) e^{jΩt} dΩ
Quick Facts About
Fourier Transform
 The FT X(Ω) is a complex transform:

X(Ω) = |X(Ω)| ∠X(Ω) = |X(Ω)| e^{jφ(Ω)}

where |X(Ω)| is the magnitude spectrum, ∠X(Ω) = φ(Ω) is the (frequency-dependent)
phase spectrum, and together they form the Fourier spectrum.

ª If x(t) is real, then

X(−Ω) = ∫_{−∞}^{∞} x(t) e^{jΩt} dt = X*(Ω),  so |X(−Ω)| = |X(Ω)|, φ(−Ω) = −φ(Ω)

Î the FT is conjugate symmetric! (just like the Fourier series)

ª If x(t) is real and even Î the FT is purely real (!)


Properties of
Fourier Transform

Property Signal Fourier Transform

The table is from Signals and Systems, H.P. Hsu (Schaum's series), which uses ω for
continuous frequencies. Replace all ω with Ω to be consistent with our, and the
textbook's, notation.
Common Fourier
Transform Pairs

x(t) X(ω)
Exercises

t=0.01:0.01:10;
d=0.5:2:9.5;
y=pulstran(t,d,'rectpuls');   % periodic rectangular pulse train
y2=2*(y-0.5);                 % make it bipolar (+/-1); the original code overwrote y2
subplot(211)
plot(t,y2)
grid
Y2=fft(y2);
Y2=fftshift(Y2);
subplot(212)
stem(abs(Y2))
grid
Exercises
x(t) = e^{−3|t|} sin(2t)

fs=100;
t=-2:1/fs:2;
x=exp(-3*abs(t)).*sin(2*t);
subplot(211)
plot(t,x)
grid
X=fft(x);
f=-fs/2:fs/(length(t)-1):fs/2;
subplot(212)
plot(f, abs(fftshift(X)))
grid

What kind of a system is this?


Exercises
 Try the following at home. What is being demonstrated here? What is
the outcome? How do you interpret it?
% Impulse:
x=zeros(1000,1);
x(500)=1;
plot(x)
close
X=abs(fft(x));
plot(fftshift(X))

% Constant:
x=ones(1000,1);
plot(x)
X=abs(fft(x));
plot(fftshift(X))
Lecture 10

Discrete Time
Fourier
Transform
(DTFT)

Digital Signal Processing, © 2004 Robi Polikar, Rowan University


Today in DSP
 The Discrete Time Fourier Transform
ª Definition
ª Key theorems
• DTFT of the output of an LTI system
• The frequency response
• Periodicity of DTFT
• Definition of discrete frequency
• Existence of DTFT
ª DTFT properties
recall
 Any periodic signal x(t) whose fundamental period is T0 can be represented as a
discrete sum of complex exponentials (sines and cosines) that are integer
multiples of Ω0, the fundamental frequency:

x(t) = Σ_{k=−∞}^{∞} ck e^{jΩ0kt},   ck = (1/T0) ∫_{t0}^{t0+T0} x(t) e^{−jkΩ0t} dt

 A non-periodic continuous time signal can also be represented as an (infinite and
continuous) sum of complex exponentials, x(t) ⇔ X(Ω):

X(Ω) = ℑ{x(t)} = ∫_{−∞}^{∞} x(t) e^{−jΩt} dt

x(t) = ℑ⁻¹{X(Ω)} = (1/2π) ∫_{−∞}^{∞} X(Ω) e^{jΩt} dΩ

X(Ω) = |X(Ω)| ∠X(Ω) = |X(Ω)| e^{jφ(Ω)}
Key Facts to Remember
 All FT pairs provide a transformation between time and frequency domains: The
frequency domain representation provides how much of which frequencies exist in
the signal Î More specifically, how much ejΩt exists in the signal for each Ω.
 In general, the frequency representation is complex (unless the signal is real and
even, in which case the transform is purely real).
ª |X(Ω)|: the magnitude spectrum Î the strength of each Ω component
ª ∠X(Ω): the phase spectrum Î the amount of phase delay for each Ω component
 The FS is discrete in the frequency domain, since it is a discrete set of exponentials –
integer multiples of Ω0 – that make up the signal. This is because only a discrete set
of harmonically related frequencies is required to construct a periodic signal.
 The FT is continuous in frequency domain, since exponentials of a continuum of
frequencies are required to reconstruct a non-periodic signal.
 Both transforms are non-periodic in frequency domain.
Discrete –Time
Fourier Transform (DTFT)
 Similar to continuous time signals, discrete time sequences can also be
periodic or non-periodic, resulting in discrete-time Fourier series or
discrete – time Fourier transform, respectively.
 Most signals in engineering applications are non-periodic, so we will
concentrate on DTFT.
 We will represent the discrete frequency as ω, measured in radians (per sample).

x[n] ⇔ X(ω)

X(ω) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}

x[n] = (1/2π) ∫_{−π}^{π} X(ω) e^{jωn} dω

Quick facts:
• Since x[n] is discrete, we can only add the terms, hence the summation.
• The sum of x[n], weighted with continuous exponentials, is continuous Î the DTFT
X(ω) is continuous (non-discrete).
• Since X(ω) is continuous, x[n] is obtained as a continuous integral of X(ω), weighted
by the same complex exponentials.
• x[n] is obtained as an integral of X(ω) over an interval of 2π Î this is our first clue
that the DTFT is periodic with 2π in the frequency domain.
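Both quick facts — X(ω) can be evaluated at any frequency, and it repeats every 2π — can be checked directly by evaluating the DTFT sum. A small Python/NumPy sketch (the sequence values are arbitrary, chosen for illustration):

```python
import numpy as np

# A short example sequence x[n], n = 0..4 (arbitrary illustrative values)
x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
n = np.arange(len(x))

def dtft(w):
    # X(w) = sum_n x[n] e^{-j w n}, evaluated on any frequency grid w
    return np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in w])

w = np.linspace(-np.pi, np.pi, 201)
X = dtft(w)

# X(w) is a continuous function of w, and it is 2*pi-periodic:
print(np.allclose(X, dtft(w + 2 * np.pi)))
```

The periodicity falls straight out of e^{−j(ω+2π)n} = e^{−jωn} for integer n, which is the point of Theorem 3 below.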
Proof
 We now show that x[n] and X(ω) are indeed transform pairs, that is, one can
be obtained from the other:

x[n] = (1/2π) ∫_{−π}^{π} X(ω) e^{jωn} dω,   X(ω) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}

 Lemma: the complex exponentials e^{jωn} are orthogonal over one period (any
interval of length 2π in ω), that is

∫_{−π}^{π} e^{jω(n−l)} dω = { 0, n ≠ l;  2π, n = l } = 2π δ[n−l]

 Substituting the analysis equation into the synthesis equation:

x[n] = (1/2π) ∫_{−π}^{π} ( Σ_{l=−∞}^{∞} x[l] e^{−jωl} ) e^{jωn} dω
     = Σ_{l=−∞}^{∞} x[l] · (1/2π) ∫_{−π}^{π} e^{jω(n−l)} dω = x[n]
Important Theorems
 There are several important theorems related to DTFT.
 Theorem 1:
ª If x[n] is input to an LTI system with an impulse response of h[n], then the DTFT
of the output y[n] = x[n]*h[n] is the product of X(ω) and H(ω):

x[n] Æ h[n] Æ y[n] = x[n]*h[n]
X(ω) Æ H(ω) Æ Y(ω) = X(ω)·H(ω)


Frequency Response
 Theorem 2:
ª If the input to an LTI system with an impulse response of h[n] is a complex
exponential ejω0n , then the output is the SAME complex exponential whose
magnitude and phase are given by |H(ω)| and <H(ω), evaluated at ω = ω0.

ª This theorem constitutes the fundamental cornerstone for the concept of


frequency response. H(ω), the DTFT of h[n], is called the frequency response of
the system.

ª If a sinusoidal sequence with frequency ω0 is applied to a system whose frequency


response is H(ω), then the output can be obtained simply by evaluating H(ω) at
ω = ω0. Since all signals can be written as a superposition of sinusoids, then the
output to an arbitrary input can be obtained as the superposition of H(ω0) for each
component that makes up the input signal!
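The eigenfunction property behind Theorem 2 is easy to see numerically. A Python/NumPy sketch (the 5-tap moving-average filter and ω0 = 0.3π are arbitrary illustrative choices, not from the lecture):

```python
import numpy as np

h = np.ones(5) / 5.0            # a hypothetical 5-tap moving-average FIR filter
w0 = 0.3 * np.pi                # input frequency (arbitrary choice)

# Frequency response evaluated at w0: H(w0) = sum_n h[n] e^{-j w0 n}
H_w0 = np.sum(h * np.exp(-1j * w0 * np.arange(len(h))))

# Drive the system with the complex exponential e^{j w0 n}
n = np.arange(200)
x = np.exp(1j * w0 * n)
y = np.convolve(x, h)[: len(n)]

# Past the start-up transient (first len(h)-1 samples), the output is the
# SAME exponential, scaled in magnitude and phase by H(w0)
steady = slice(len(h) - 1, len(n))
print(np.allclose(y[steady], H_w0 * x[steady]))
```

In other words, the filter never creates new frequencies for a sinusoidal input; it only rescales and phase-shifts the one that is applied, by exactly H(ω0).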
Periodicity of DTFT
 Theorem 3:
ª The DTFT of a discrete sequence is periodic with period 2π, that is
H(ω) = H(ω + 2π)

ª You-will-flunk-if-you-do-not-understand-this-fact theorem: The discrete


frequency 2π rad. of the discrete-time sequence x[n], corresponds to the sampling
frequency Ωs used to sample the original continuous signal x(t) to obtain x[n].
ª Proof: ω=ΩTs ÎFor Ω= Ωs, we have ω=ΩsTs=2πfsTs=2π
Existence of DTFT
 Theorem 4:
ª The DTFT of a sequence exists if the sequence x[n] is absolutely
summable, that is, if

Σ_{n=−∞}^{∞} |x[n]| < ∞

ª This is because an infinite series of the form Σ_{n=−∞}^{∞} x[n] e^{−jωn} converges
(uniformly) if x[n] is absolutely summable.
Properties of DTFT
 We will study the following properties of the DTFT:
ª Linearity Î DTFT is a linear operator
ª Time reversal Î x[−n] ÅÆ X(−ω)
ª Time shift Î x[n−n0] ÅÆ X(ω)e^{−jωn0}
ª Frequency shift Î x[n]e^{jω0n} ÅÆ X(ω−ω0)
ª Convolution in time Î x[n]*y[n] ÅÆ X(ω)·Y(ω)
ª Convolution in frequency Î multiplication in time ÅÆ (periodic) convolution in frequency
ª Differentiation in frequency Î n·x[n] ÅÆ j dX(ω)/dω
ª Parseval’s theorem Î conservation of energy in time and frequency domains
Lecture 11

Discrete Time
Fourier
Transform
(DTFT)

Digital Signal Processing, © 2004 Robi Polikar, Rowan University


Today in DSP
 The Discrete Time Fourier Transform
ª DTFT properties
Properties of DTFT
 We will study the following properties of the DTFT:
ª Linearity Î DTFT is a linear operator
ª Time reversal Î x[−n] ÅÆ X(−ω)
ª Time shift Î x[n−n0] ÅÆ X(ω)e^{−jωn0}
ª Frequency shift Î x[n]e^{jω0n} ÅÆ X(ω−ω0)
ª Convolution in time Î x[n]*y[n] ÅÆ X(ω)·Y(ω)
ª Convolution in frequency Î multiplication in time ÅÆ (periodic) convolution in frequency
ª Differentiation in frequency Î n·x[n] ÅÆ j dX(ω)/dω
ª Parseval’s theorem Î conservation of energy in time and frequency domains

x[n] ⇔ X(ω),   y[n] ⇔ Y(ω)
Linearity &
Differentiation In Frequency
 The DTFT is a linear operator:

a·x[n] + b·y[n] ⇔ a·X(ω) + b·Y(ω)

 Multiplying the time-domain signal by the independent time variable is
equivalent to differentiation in the frequency domain:

n·x[n] ⇔ j dX(ω)/dω

Time Reversal ,
Time & Frequency Shift
 A reversal of the time-domain variable causes a reversal of the
frequency variable:

x[−n] ⇔ X(−ω)

 A shift in the time domain by m samples causes a phase shift of e^{−jωm} in
the frequency domain:

x[n−m] ⇔ X(ω)e^{−jωm}

 A shift in the frequency domain by ω0 corresponds to modulation by e^{jω0n} in
the time domain:

x[n]e^{jω0n} ⇔ X(ω−ω0)
Convolution
 Convolution in time domain is equivalent to multiplication in
frequency domain

x[n]*h[n] ⇔ X(ω)·H(ω)

ª This is one of the fundamental theorems in filtering. It allows us to compute the
filter response in the frequency domain using the frequency response of the filter.

 Multiplication in the time domain is equivalent to (periodic) convolution in the
frequency domain:

x[n]·h[n] ⇔ (1/2π) ∫_{−π}^{π} X(γ) H(ω−γ) dγ
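The convolution theorem can be demonstrated with the FFT, provided both sequences are zero-padded so that circular convolution coincides with linear convolution. A Python/NumPy sketch (the two sequences are arbitrary illustrative values):

```python
import numpy as np

# Two short sequences (arbitrary values, for illustration)
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 0.5])

# Convolution in the time domain
y_time = np.convolve(x, h)                       # length 4 + 3 - 1 = 6

# Multiplication in the frequency domain; zero-pad to length len(x)+len(h)-1
# so the DFT's circular convolution equals the linear convolution
N = len(x) + len(h) - 1
y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

print(np.allclose(y_time, y_freq))
```

The zero-padding step is the practical caveat: without it, the DFT product corresponds to circular, not linear, convolution.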
Parseval’s Theorem
 The energy of the signal, whether computed in the time domain or the
frequency domain, is the same!

Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(ω)|² dω
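The discrete analogue of Parseval's theorem holds exactly for the DFT, where the factor 1/2π becomes 1/N. A quick Python/NumPy check on a random finite-energy sequence:

```python
import numpy as np

# Random finite-energy sequence (seeded for reproducibility)
rng = np.random.default_rng(0)
x = rng.standard_normal(256)

X = np.fft.fft(x)
E_time = np.sum(np.abs(x) ** 2)
E_freq = np.sum(np.abs(X) ** 2) / len(x)   # DFT analogue of (1/2pi) * integral |X(w)|^2 dw

print(np.isclose(E_time, E_freq))
```

The same identity is what lets us read off a signal's energy distribution across frequency directly from its magnitude spectrum.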
Important DTFT Pairs
Impulse Function
 The DTFT of the impulse function is “1” over the entire frequency
band:

ℑ{δ[n]} = 1,  for all ω

(i.e., over the full extent of the discrete frequency band, −π ≤ ω ≤ π)
Important DTFT Pairs
Constant Function
 Note that x[n] = 1 (or any other constant) does not satisfy absolute
summability. However, we can show that the DTFT of the constant
function is an impulse at ω = 0 (this should make sense!!!):

ℑ{1} = 2π Σ_{m=−∞}^{∞} δ(ω − 2πm)
Matlab Approximation
 In class demo!

% Impulse:
x=zeros(1000,1);
x(500)=1;
subplot(211)
plot(x); grid
X=abs(fft(x));
subplot(212)
w=-pi:2*pi/999:pi;
plot(w, fftshift(X)); grid

% Constant:
x=ones(1000,1);
subplot(211)
plot(x); grid
X=abs(fft(x));
subplot(212)
w=linspace(-pi, pi, 1000);
plot(w, fftshift(X)); grid
Important DTFT Pairs
Real Exponential
 This is an important function in signal processing. Why?

x[n] = αⁿ u[n] ⇔ 1/(1 − α e^{−jω}),   |α| < 1
fs=100;
t=0:1/fs:10;
x=(0.5).^t;
X=fftshift(fft(x));
f=linspace(-fs/2, fs/2, length(t));
subplot(311)
plot(t,x); grid
subplot(312)
plot(f,abs(X)); grid
subplot(313)
plot(f, unwrap(angle(X))); grid

In Matlab, periodicity of X(ω) is assumed


Important DTFT Pairs
The sinusoid at ω0
 By far the most often used DTFT pair (it is less complicated than it looks):

x[n] = cos(ω0n) ⇔ π Σ_{m=−∞}^{∞} δ(ω − 2πm − ω0) + π Σ_{m=−∞}^{∞} δ(ω − 2πm + ω0)
Important DTFT Pairs
Rectangular Pulse
 Also very commonly used in DSP, as it provides the FT of an ideal
lowpass filter (we will see this later)

x[n] = rectM[n] = { 1, −M ≤ n ≤ M;  0, otherwise }

X(ω) = Σ_{n=−M}^{M} e^{−jωn} = sin((M + 1/2)ω) / sin(ω/2),   ω ≠ 0
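The closed form (a Dirichlet kernel) can be checked against the direct sum numerically. A Python/NumPy sketch, with M = 4 chosen arbitrarily and the frequency grid picked so it never hits ω = 0 exactly:

```python
import numpy as np

M = 4
n = np.arange(-M, M + 1)                 # rect_M[n] = 1 for -M <= n <= M
w = np.linspace(-3.0, 3.0, 400)          # grid chosen to avoid w = 0 exactly

# Direct sum: X(w) = sum_{n=-M}^{M} e^{-j w n} (real-valued, since the pulse is even)
X_direct = np.array([np.sum(np.exp(-1j * wk * n)) for wk in w]).real

# Closed form from the slide: sin((M + 1/2) w) / sin(w / 2), w != 0
X_closed = np.sin((M + 0.5) * w) / np.sin(w / 2)

print(np.allclose(X_direct, X_closed))
```

Note the transform is purely real, as expected for an even sequence, and peaks at X(0) = 2M + 1 (the limit of the closed form as ω → 0).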
Ideal Lowpass Filter
 The ideal lowpass filter is defined as

H(ω) = { 1, |ω| ≤ ωc;  0, ωc < |ω| ≤ π }

ª Taking its inverse DTFT, we can obtain the corresponding impulse response h[n]:

h[n] = sin(ωc n) / (πn)
Ideal Lowpass Filter
 Note that:
ª The impulse response of an ideal LPF is infinitely long Æ This is an IIR filter. In
fact h[n] is not absolutely summable Æ its DTFT cannot be computed Æ an ideal
h[n] cannot be realized!
ª One possible solution is to truncate h[n], say with a window function, and then
take its DTFT to obtain the frequency response of a realizable FIR filter.
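The truncate-and-window idea above can be sketched in a few lines. A Python/NumPy illustration (the cutoff ωc = 0.4π, half-length M = 30, and the Hamming window are all arbitrary illustrative choices, not values from the lecture):

```python
import numpy as np

wc = 0.4 * np.pi          # assumed cutoff frequency, for illustration
M = 30                    # truncation half-length (filter length 2M+1)
n = np.arange(-M, M + 1)

# Ideal impulse response sin(wc*n)/(pi*n); np.sinc(x) = sin(pi x)/(pi x),
# so (wc/pi)*sinc(wc*n/pi) equals it and handles n = 0 (-> wc/pi) cleanly
h = (wc / np.pi) * np.sinc(wc * n / np.pi)
h *= np.hamming(len(n))   # window the truncation to reduce ripple

# Frequency response of the resulting realizable FIR filter
w = np.linspace(0, np.pi, 512)
H = np.abs(np.array([np.sum(h * np.exp(-1j * wk * n)) for wk in w]))

print(H[0], H[-1])        # near 1 in the passband (w = 0), near 0 at w = pi
```

The windowed filter is no longer ideal: the transition from passband to stopband is gradual, and the stopband gain is small but nonzero. That trade-off is exactly the price of making h[n] finite.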
On Friday
 In class exercises
 Matlab exercises
 No formal lab report
 Exam next FRIDAY!
Lecture 12

Sampling
Theorem

Digital Signal Processing, © 2004 Robi Polikar, Rowan University


Today in DSP
 The Sampling Theorem
ª The need for sampling and the aliasing problem
ª Shannon’s sampling theorem
• Nyquist rate
ª Effect of sampling in frequency domain
ª Sampling explained graphically!
ª Recovering the original signal.
Analog Æ Digital Æ Analog
 Most signals in nature are continuous in time
ª Need a way for “digital processing of continuous-time signals.”
 A three-step procedure
ª Conversion of the continuous-time signal into a discrete-time signal
• Anti-aliasing filter – to prevent potentially detrimental effects of sampling
• Sample & Hold – to allow time to the A/D converter
• Analog to Digital Converter (A/D) – actual conversion in time and amplitude
ª Processing of the discrete-time signal
• Digital Signal Processing – Filter, digital processor
ª Conversion of the processed discrete-time signal back into a cont.-time signal
• Digital to analog converter (D/A) – to obtain the cont. time signal
• Reconstruction / smoothing filter - smooth out the signal from the D/A

Anti- Digital Reconstruction


aliasing S/H A/D processor D/A filter
filter
Aliasing

Alias:
Two names for the
same person, or thing.
Aliasing
 Note that identical discrete-time signals may result from the sampling of more than
one distinct continuous-time function. In fact, there exists an infinite number of
continuous-time signals, which when sampled lead to the same discrete-time signal
[Plot: several distinct continuous-time sinusoids (amplitude −1 to 1, over 0 to 1 s)
all passing through the same set of sample points]
Aliasing
 Here is another example:
ª The following plot shows two sinusoids, x1(t) = cos(0.4πt) and x2(t) = cos(2.4πt).
Note that these two signals are distinct, as the second one clearly has a higher
frequency.
ª However, when sampled at, say, integer values of t, they have the same samples, that
is, x1[n] = x2[n] for all integer n. These two signals are aliases of each other. More
specifically, in DSP jargon, we say that the frequencies ω1 = 0.4π and ω2 = 2.4π
are aliases of each other.
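The identity behind this example is cos((ω0 + 2π)n) = cos(ω0 n) for integer n. A one-screen Python/NumPy check that the two sinusoids really do produce identical samples:

```python
import numpy as np

n = np.arange(50)                      # integer sampling instants

x1 = np.cos(0.4 * np.pi * n)
x2 = np.cos(2.4 * np.pi * n)           # 2.4*pi = 0.4*pi + 2*pi

# Two distinct continuous-time sinusoids, identical samples:
print(np.allclose(x1, x2))
```

No finite set of samples can distinguish the two, which is exactly why extra knowledge (a band limit) is needed to reconstruct uniquely.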
Aliasing
 In general, adding or subtracting integer multiples of 2π to the frequency of a
sinusoid gives aliases of the same signal.
ª The one at the lowest frequency is called the principal alias, whereas those at the
negative frequencies are called folded aliases.
ª In summary, the frequencies at ω0 + 2πk and 2πk - ω0 for any integer k, are aliases
of each other.
ª We can further show that for folded aliases, the algebraic sign of the phase angle is
opposite that of the principal alias

Alias frequencies
Houston…
We’ve got a problem!
 The fact that there exists an infinite number of continuous-time
signals, which when sampled lead to the same discrete-time signal,
poses a considerable dilemma in plotting and interpreting signals in
the frequency domain.
 Q: If the same discrete signal can be obtained from several continuous
time signals, how can we uniquely reconstruct the original continuous
time signal that generated the discrete signal at hand?
 A. Under certain conditions, it is possible to relate a unique
continuous-time signal to a given discrete-time signal.
ª If these conditions hold, then it is possible to recover the original continuous-time
signal from its sampled values
Shannon’s
Sampling Theorem
 The solution to this complicated and perplexing phenomenon comes
from the amazingly simple Shannon’s Sampling Theorem, one of the
cornerstones of the modern communications, signal processing and
control.
ª A continuous time signal x(t), with frequencies no higher than fmax, can be
reconstructed exactly, precisely and uniquely from its samples x[n] = x(nTs), if the
samples are taken at a sampling rate (frequency) of fs = 1/Ts that is greater than
2fmax. This minimum rate, 2fmax, is called the Nyquist rate.
ª In other words, if a continuous time signal is sampled at a rate higher than twice
the highest frequency in the signal, then it can be uniquely reconstructed from its
samples.
ª Aliasing can be avoided if a signal is sampled above the Nyquist rate.
Claude Shannon

Shannon, Claude Elwood (1916-2001), American applied mathematician and


electrical engineer, noted for his development of the theory of
communication now known as information theory. Born in Gaylord, Michigan,
Shannon attended the University of Michigan and in 1940 obtained his
doctorate from the Massachusetts Institute of Technology, where he became
a faculty member in 1956 after working at Bell Telephone Laboratories. In
1948 Shannon published “A Mathematical Theory of Communication,” an
article in which he presented his initial concept for a unifying theory of the
transmitting and processing of information. Information in this context
includes all forms of transmitted messages, including those sent along the
nerve networks of living organisms; information theory is now important in
many fields.

© 2003 Microsoft Encarta.


Effect of Sampling
in the Frequency Domain
 Mathematically, we can show that the spectrum of the discrete
(sampled) signal is simply an Ωs-replicated (2π-replicated in discrete frequency,
since ω = ΩTs) and 1/Ts-normalized version of the spectrum of the original
continuous time signal:

Xs(Ω) = (1/Ts) Σ_{k=−∞}^{∞} Xc(Ω − kΩs)

where Xc is the spectrum of the continuous time signal, Xs the spectrum of the
sampled signal, and Ωs the sampling frequency.
Effect of Sampling
in the Frequency Domain
Sampling Explained …
 Let ga(t) be a continuous-time signal that is sampled uniformly at
t = nTs, generating the sequence g[n] where
g[n] = g a (nTs ), −∞ < n < ∞

 Now, the frequency-domain representation of ga(t) is given by its
continuous-time Fourier transform (CTFT):

Ga(Ω) = ∫_{−∞}^{∞} ga(t) e^{−jΩt} dt

 The frequency-domain representation of g[n] is given by its discrete-
time Fourier transform (DTFT):

G(e^{jω}) = Σ_{n=−∞}^{∞} g[n] e^{−jωn}
Sampling
 To establish the relation between Ga(Ω) and G(ω), we treat the
sampling operation mathematically as a multiplication of ga(t) by a
periodic impulse train p(t):

p(t) = Σ_{n=−∞}^{∞} δ(t − nTs),   gp(t) = ga(t) × p(t)
Sampling
 The multiplication operation yields another impulse train:

gp(t) = ga(t) p(t) = Σ_{n=−∞}^{∞} ga(nTs) δ(t − nTs)

 gp(t) is a continuous-time signal consisting of a train of uniformly spaced impulses,
with the impulse at t = nTs weighted by the sampled value ga(nTs) of ga(t) at that
instant
Sampling
 Now, note that p(t) is a periodic signal, therefore it can be represented with its FS
p(t) = (1/Ts) Σ_{k=−∞}^{∞} e^{j(2π/Ts)kt} = (1/Ts) Σ_{k=−∞}^{∞} e^{jΩskt},   Ωs = 2π/Ts

 The impulse train gp(t) therefore can be expressed as

gp(t) = [ (1/Ts) Σ_{k=−∞}^{∞} e^{jΩskt} ] · ga(t)

 From the frequency-shifting property of the CTFT, the CTFT of e^{jΩskt} ga(t) is
Ga(Ω − kΩs).

 Hence Gp(Ω) is a periodic function of Ω consisting of a sum of shifted and scaled
replicas of Ga(Ω), shifted by integer multiples of Ωs and scaled by 1/Ts:

Gp(Ω) = (1/Ts) Σ_{k=−∞}^{∞} Ga(Ω − kΩs)
Sampling
 The term on the RHS of this equation for k = 0 is the baseband
portion of Gp(Ω), and each of the remaining terms are the frequency-
translated portions of Gp(Ω):

Gp(Ω) = (1/Ts) Σ_{k=−∞}^{∞} Ga(Ω − kΩs)

 The frequency range −Ωs/2 ≤ Ω ≤ Ωs/2 is called the baseband or Nyquist
band.

 OK, so the spectrum of the sampled signal is a replicated (by Ωs)
version of that of the continuous signal. How does this solve the
aliasing dilemma mentioned earlier, and how does it lead to the
sampling theorem?
ª To see the answer, we look at what is happening in the frequency domain
graphically
ª To see the answer, we look at what is happening in the frequency domain
graphically
Sampling Explained…
…Graphically
[Graphical sampling chain: ga(t) is multiplied by the impulse train p(t) to give gp(t);
equivalently, in frequency, Ga(Ω) is convolved with the impulse train's spectrum,
replicating Ga(Ω) at multiples of Ωs. Lowpass filtering gp(t) with h(t) recovers ^ga(t).]
Nyquist Rate
 Note that the key requirement for the Ga(Ω) recovered from Gp(Ω) is
that Gp(Ω) should consist of non-overlapping replicas of Ga(Ω).

 Under what conditions would this be satisfied…?

Ωs − Ωm ≥ Ωm  ⇒  Ωs ≥ 2Ωm

If Ωs ≥ 2Ωm, ga(t) can be recovered exactly from gp(t) by passing it through an ideal
lowpass filter Hr(Ω) with a gain Ts and a cutoff frequency Ωc greater than Ωm and
less than Ωs − Ωm. For simplicity, a half-band ideal filter is typically used in exercises.
Aliasing - revisited
 On the other hand, if Ωs < 2Ωm, due to the overlap of the shifted
replicas of Ga(Ω) in the spectrum of Gp(Ω), the signal cannot be
recovered by filtering.
ª This is simply because the filtering of overlapped sections will cause a distortion by
folding, or aliasing, the areas immediately outside the baseband back into the
baseband.

ª The frequency Ωs /2 is known as the folding frequency.


A Summary
 Given the discrete samples ga(nTs), we can recover ga(t) exactly by
generating the impulse train ∞
g p (t ) = ∑ g a (nTs )δ (t − nTs )
n = −∞
and then passing it through an ideal lowpass filter Hr(Ω) with a gain Ts
and a cutoff frequency Ωc satisfying Ωm ≤ Ωc ≤ Ωs- Ωm.
 The highest frequency Ωm contained in ga(t) is usually called the
Nyquist frequency since it determines the minimum sampling
frequency Ωs=2 Ωm that must be used to fully recover ga(t) from its
sampled version.
ª Sampling above or below the Nyquist rate is called oversampling or
undersampling, respectively. Sampling exactly at this rate is critical sampling. A
pure sinusoid may not be recoverable from critical sampling. Some amount of
oversampling is usually used to allow some tolerance.
ª E.g. in phone conversations, 3.4 kHz is assumed to be the highest frequency in the
speech signal, and hence the signal is sampled at 8 kHz.
ª In digital audio applications, the full range of audio frequencies of 0 ~ 20 kHz is
preserved. Hence, in CD audio, the signal is sampled at 44.1 kHz.
More on Aliasing
 To see the full effect of
X c (Ω) = π [δ (Ω − Ω 0 ) + δ (Ω + Ω 0 )]
aliasing, as well as to get an
insight to the story behind the
word “aliasing” consider the
sinusoid xc(t)=cosΩ0t.
 Note that the filtered signal has its spectral components at ±(Ωs − Ω0),
rather than ±Ω0.
 The reconstructed signal is then cos(Ωs − Ω0)t, not cos Ω0t.
ª This is because frequencies beyond Ωs/2 (shown here with an ideal half-band LPF)
have folded into the baseband area. Hence we get an alias of the original signal.
ª E.g. a sinusoid at 5 kHz, sampled at 8 kHz, would appear as 3 kHz when
reconstructed!
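The folding rule can be captured in a tiny helper. A Python sketch (the function name `apparent_frequency` is my own, introduced for illustration): frequencies are equivalent modulo fs, and anything above fs/2 folds back into the baseband.

```python
def apparent_frequency(f_signal, fs):
    """Principal alias (Hz) observed when a tone at f_signal is sampled at fs.

    Sampled frequencies are equivalent modulo fs; anything landing above
    fs/2 folds back into the baseband 0..fs/2.
    """
    f = f_signal % fs
    return min(f, fs - f)

# The slide's example: a 5 kHz sinusoid sampled at 8 kHz appears at 3 kHz
print(apparent_frequency(5000, 8000))     # -> 3000
# A tone already inside the baseband is unchanged:
print(apparent_frequency(3400, 8000))     # -> 3400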
Recovering the
Original Signal
 We saw in frequency domain that the original signal ga(t) can be
recovered from its sampled version after lowpass filtering.
 One may ask the question: How does lowpass filtering uniquely “fill-
in” the spaces in between the discrete samples?
 To find out, we need to convolve the impulse response of the low pass
filter hr[n] with the sampled sequence, impulse train gp(t)
Recovering the
Original Signal
 The impulse response hr(t) of the lowpass reconstruction filter is
obtained by taking the inverse CTFT of Hr(Ω):

Hr(Ω) = { Ts, |Ω| ≤ Ωc;  0, |Ω| > Ωc }

hr(t) = (1/2π) ∫_{−∞}^{∞} Hr(Ω) e^{jΩt} dΩ = (Ts/2π) ∫_{−Ωc}^{Ωc} e^{jΩt} dΩ
      = sin(Ωc t) / (Ωs t/2),   −∞ < t < ∞

 The input to the LPF is the impulse train gp(t):

gp(t) = Σ_{n=−∞}^{∞} g[n] δ(t − nTs)

 Therefore, the output in the time domain is the convolution

^ga(t) = hr(t) * gp(t) = Σ_{n=−∞}^{∞} g[n] hr(t − nTs)
Recovering the
Original Signal
 Substituting hr(t) = sin(Ωc t)/(Ωs t/2) in the above and assuming for
simplicity Ωc = Ωs/2 = π/Ts (that is, assuming an ideal half-band filter), we
get (do the math at home!!!)

^ga(t) = Σ_{n=−∞}^{∞} g[n] sin[π(t − nTs)/Ts] / [π(t − nTs)/Ts]

i.e., a sum of shifted versions of the impulse response hr(t).

 What does this mean???
ª Now recall that the impulse response of the filter is a sinc function
ª The sampled signal is a series of impulses
ª The convolution of any signal with a series of impulses can be obtained by shifting
the signal to each impulse location and summing up all shifted versions of the
signal
Recovering the
Original Signal
 Graphically:
ª Observe that hr(0) = 1 and hr(nTs) = 0 for all integer n ≠ 0. Since at t = 0 the shifted
sinc is weighted by g[0], we obtain exactly g[0] at t = 0; the contribution of all other
shifted hr(t) at t = 0 is zero.
ª The same can be said for all other sample instants: at any t = nTs, only one of the
shifted hr(t) contributes; all others are zero.
ª Thus the ideal lowpass filter fills in between the samples by interpolating with the
sinc function.
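The sinc-interpolation formula can be exercised numerically. A Python/NumPy sketch (the 10 Hz cosine and 50 Hz sampling rate are illustrative choices well above the Nyquist rate; `np.sinc(x)` computes sin(πx)/(πx), matching the slide's kernel):

```python
import numpy as np

# Band-limited test signal: a 10 Hz cosine, sampled at fs = 50 Hz (> Nyquist rate)
f0, fs = 10.0, 50.0
Ts = 1.0 / fs
n = np.arange(-200, 201)                       # generous sample support
g = np.cos(2 * np.pi * f0 * n * Ts)

# Sinc interpolation: g_hat(t) = sum_n g[n] * sinc((t - n*Ts)/Ts)
t = np.linspace(-0.5, 0.5, 101)                # includes on- and off-grid instants
g_hat = np.array([np.sum(g * np.sinc((tk - n * Ts) / Ts)) for tk in t])

g_true = np.cos(2 * np.pi * f0 * t)
print(np.max(np.abs(g_hat - g_true)))          # small: the sinc fills in between samples
```

At the sample instants the reconstruction is exact (only one sinc term is nonzero there); in between, the truncated sum tracks the original cosine to within a small truncation error.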

Final Words
 Note that the ideal lowpass filter is infinitely long, and therefore is not
realizable. A non-ideal filter will not have all the zero crossings that
ensure perfect reconstruction.
 Furthermore, if a signal is not bandlimited, it cannot be perfectly reconstructed
whatever the sampling frequency is.
 Therefore, an anti-aliasing filter is typically used before sampling
to limit the highest frequency of the signal, so that a suitable sampling
frequency can be used, and so that the signal can be reconstructed even
with a non-ideal lowpass filter!
Lecture 13

Sampling
Theorem

Review &
Exercises

Digital Signal Processing, © 2004 Robi Polikar, Rowan University


Shannon’s
Sampling Theorem
 The solution to this complicated and perplexing phenomenon comes
from the amazingly simple Shannon’s Sampling Theorem, one of the
cornerstones of the modern communications, signal processing and
control.
ª A continuous time signal x(t), with frequencies no higher than fmax, can be
reconstructed exactly, precisely and uniquely from its samples x[n] = x(nTs), if the
samples are taken at a sampling rate (frequency) of fs = 1/Ts that is greater than
2fmax. This minimum rate, 2fmax, is called the Nyquist rate.
ª In other words, if a continuous time signal is sampled at a rate higher than twice
the highest frequency in the signal, then it can be uniquely reconstructed from its
samples.
ª Aliasing can be avoided if a signal is sampled above the Nyquist rate.
Effect of Sampling
in the Frequency Domain
 Mathematically, we can show that the spectrum of the discrete
(sampled) signal is simply an Ωs-replicated (2π-replicated in discrete frequency,
since ω = ΩTs) and 1/Ts-normalized version of the spectrum of the original
continuous time signal:

Xs(Ω) = (1/Ts) Σ_{k=−∞}^{∞} Xc(Ω − kΩs)

where Xc is the spectrum of the continuous time signal, Xs the spectrum of the
sampled signal, and Ωs the sampling frequency.
Effect of Sampling
in the Frequency Domain
Sampling Explained…
…Graphically
[Graphical summary of sampling and reconstruction:]

ga(t) × p(t)  ⇔  Ga(Ω) * P(Ω) / 2π,  where p(t) = Σ_{n=−∞}^{∞} δ(t − nTs) and
P(Ω) = (2π/Ts) Σ_{k=−∞}^{∞} δ(Ω − kΩs)

gp(t) = ga(t) p(t) = Σ_{n=−∞}^{∞} ga(nTs) δ(t − nTs)  ⇔  Gp(Ω) = (1/Ts) Σ_{k=−∞}^{∞} Ga(Ω − kΩs)

Lowpass filtering with the half-band reconstruction filter (Ωc = Ωs/2):

hr(t) = 2 sin(Ωc t)/(Ωs t) = sin(πt/Ts)/(πt/Ts) = sinc(t/Ts)  ⇔  Hr(Ω) = { Ts, |Ω| ≤ Ωc;  0, |Ω| > Ωc }

gives

^ga(t) = Σ_{n=−∞}^{∞} g[n] sin[π(t − nTs)/Ts] / [π(t − nTs)/Ts]
Aliasing
 To see the full effect of
X c (Ω) = π [δ (Ω − Ω 0 ) + δ (Ω + Ω 0 )]
aliasing, as well as to get an
insight to the story behind the
word “aliasing” consider the
sinusoid xc(t)=cosΩ0t.
 Note that the filtered signal has its spectral components at ±(Ωs − Ω0),
rather than ±Ω0.
 The reconstructed signal is then cos(Ωs − Ω0)t, not cos Ω0t.
ª This is because frequencies beyond Ωs/2 (shown here with an ideal half-band LPF)
have folded into the baseband area. Hence we get an alias of the original signal.
ª E.g. a sinusoid at 5 kHz, sampled at 8 kHz, would appear as 3 kHz when
reconstructed!
Recovering the
Original Signal
 Graphically:
ª Observe that hr(0) = 1 and hr(nTs) = 0 for all integer n ≠ 0. Since at t = 0 the shifted
sinc is weighted by g[0], we obtain exactly g[0] at t = 0; the contribution of all other
shifted hr(t) at t = 0 is zero.
ª The same can be said for all other sample instants: at any t = nTs, only one of the
shifted hr(t) contributes; all others are zero.
ª Thus the ideal lowpass filter fills in between the samples by interpolating with the
sinc function.

^ga(t) = gp(t) * hr(t) = Σ_{n=−∞}^{∞} g[n] sin[π(t − nTs)/Ts] / [π(t − nTs)/Ts]
Some Key points to Remember
 The lowpass filter is essentially doing a “sinc” interpolation.
 Sinc function in Matlab computes sin(pi*x)/(pi*x)
ª The sinc function and the rectangular function are Fourier transform pairs. Therefore, the
impulse response of an ideal lowpass filter is a sinc function
About the Sinc Function
t=-3:0.001:3;
xa1=sinc(1*t);      % sinc(t/Ts) with 1/Ts = 1
xa2=sinc(10*t);     % 1/Ts = 10
xa3=sinc(100*t);    % 1/Ts = 100
subplot(311); plot(t,xa1); grid
subplot(312); plot(t,xa2); grid
subplot(313); plot(t,xa3); grid
About the Sinc Function

XA1=abs(fft(xa1));
XA2=abs(fft(xa2));
XA3=abs(fft(xa3));
1/Ts
Fs=1/0.001;
f=linspace(-Fs/2, Fs/2, length(xa1));
subplot(311)
plot(f,fftshift(XA1)); grid
subplot(312)
plot(f,fftshift(XA2)) ; grid
subplot(313)
plot(f,fftshift(XA3)) ; grid
Example
 Consider the analog signal xa(t)=cos(20πt), 0≤t≤1. It is sampled at
Ts=0.01, 0.05, 0.075 and 0.1 second intervals to obtain x[n];
ª For each Ts, plot x[n]
ª Reconstruct the analog signal ya(t) from the samples of x[n] by means of sinc
interpolation (low pass filtering). Use ∆t=0.001. Estimate the frequency in ya(t)
from your plot. Ignore the end effects.

ª Try at home with the sin function. What did you observe?
Solution
%mysample.m
%DSP 0909.351.01 spring 04 Sampling example
clear; close all
t=0:0.001:1; xa=cos(20*pi*t); %xa is the original analog signal
% Part 1 - plotting the signals
Ts=0.01; N1=round(1/Ts); n1=0:N1; %Create the discrete time base for 1 second
x1=cos(20*pi*n1*Ts); %Here is the signal sampled at Ts=0.01 sec.
subplot(411); plot(t,xa, n1*Ts, x1, 'o') %plot xa and x1 ontop of each other.
% Note that the time base for the discrete signal is n1*Ts
axis([0, 1, -1.1, 1.1]); ylabel('x1(n)'), title(' Sampling of xa(t) at Ts=0.01s')

Ts=0.05; N2=round(1/Ts); n2=0:N2; %Create the discrete time base for 1 second
x2=cos(20*pi*n2*Ts); %Here is the signal sampled at Ts=0.05 sec.
subplot(412); plot(t,xa, n2*Ts, x2, 'o') %plot xa and x2 ontop of each other.
% Note that the time base for the discrete signal is n2*Ts
axis([0, 1, -1.1, 1.1]); ylabel('x2(n)'), title(' Sampling of xa(t) at Ts=0.05s')

Ts=0.075; N3=round(1/Ts); n3=0:N3; %Create the discrete time base for 1 second
x3=cos(20*pi*n3*Ts); %Here is the signal sampled at Ts=0.075 sec.
subplot(413); plot(t,xa, n3*Ts, x3, 'o') %plot xa and x3 on top of each other.
% Note that the time base for the discrete signal is n3*Ts
axis([0, 1, -1.1, 1.1]); ylabel('x3(n)'), title(' Sampling of xa(t) at Ts=0.075s')

Ts=0.1; N4=round(1/Ts); n4=0:N4; %Create the discrete time base for 1 second
x4=cos(20*pi*n4*Ts); %Here is the signal sampled at Ts=0.1 sec.
subplot(414); plot(t,xa, n4*Ts, x4, 'o') %plot xa and x4 on top of each other.
% Note that the time base for the discrete signal is n4*Ts
axis([0, 1, -1.1, 1.1]); ylabel('x4(n)'), title(' Sampling of xa(t) at Ts=0.1s')
Solution
%Part 2 - Reconstructing the signal
%
% Sinc interpolation (ideal lowpass filtering):
%   xa_hat(t) = x[nTs] * hr(t)
%             = sum_{n=-inf..inf} x[n] . sin[pi(t-nTs)/Ts] / [pi(t-nTs)/Ts]
%             = sum_{n=-inf..inf} x[n] . sinc(Fs(t-nTs))
figure
Ts=0.01; Fs=1/Ts;
xa1=x1*sinc(Fs*(ones(length(n1),1)*t-(n1*Ts)'*ones(1,length(t))));
subplot(411); plot(t, xa1); axis([0, 1, -1.1, 1.1]);
ylabel('xa(t)'); title(' Reconstruction of xa(t) with Ts=0.01s')

Ts=0.05; Fs=1/Ts;
xa2=x2*sinc(Fs*(ones(length(n2),1)*t-(n2*Ts)'*ones(1,length(t))));
subplot(412); plot(t, xa2); axis([0, 1, -1.1, 1.1]);
ylabel('xa(t)'); title(' Reconstruction of xa(t) with Ts=0.05s')

Ts=0.075; Fs=1/Ts;
xa3=x3*sinc(Fs*(ones(length(n3),1)*t-(n3*Ts)'*ones(1,length(t))));
subplot(413); plot(t, xa3); axis([0, 1, -1.1, 1.1]);
ylabel('xa(t)'); title(' Reconstruction of xa(t) with Ts=0.075s')

Ts=0.1; Fs=1/Ts;
xa4=x4*sinc(Fs*(ones(length(n4),1)*t-(n4*Ts)'*ones(1,length(t))));
subplot(414); plot(t, xa4); %axis([0, 1, -1.1, 1.1]);
ylabel('xa(t)'); title(' Reconstruction of xa(t) with Ts=0.1s')
Solution
figure
Ts=0.001; Fs=1/Ts;
%Note that since in MATLAB the actual signal is
%sampled at 1000 Hz (delta_t=0.001), we need to use that number to
%get the correct MATLAB interpretation.

f=linspace(-Fs/2, Fs/2, length(xa1));


subplot(411); plot(f,abs(fftshift(fft(xa1)))); grid
title('Magnitude Spectrum of xa1'); xlabel('Frequency, Hz')

f=linspace(-Fs/2, Fs/2, length(xa2));


subplot(412); plot(f,abs(fftshift(fft(xa2)))); grid
title('Magnitude Spectrum of xa2'); xlabel('Frequency, Hz')

f=linspace(-Fs/2, Fs/2, length(xa3));


subplot(413); plot(f,abs(fftshift(fft(xa3)))); grid
title('Magnitude Spectrum of xa3'); xlabel('Frequency, Hz')

f=linspace(-Fs/2, Fs/2, length(xa4));


subplot(414); plot(f,abs(fftshift(fft(xa4)))); grid
title('Magnitude Spectrum of xa4'); xlabel('Frequency, Hz')
Solution
Final Words
 Note that the ideal lowpass filter is infinitely long, and therefore is not
realizable. A non-ideal filter will not have all the zero crossings that
ensure perfect reconstruction.
 Furthermore, if a signal is not bandlimited, it cannot be reconstructed,
whatever the sampling frequency is.
 Therefore, an anti-aliasing filter is typically used before the sampling
to limit the highest frequency of the signal, so that a suitable sampling
frequency can be used, and so that the signal can be reconstructed even
with a non-ideal lowpass filter!
Lecture 14

Decimation
&
Interpolation



Today in DSP
 Resampling the sequence
 Decimation and Interpolation
ª Downsampling
ª Upsampling
Changing the
Sampling Rate
 Often necessary to change the sampling rate of a discrete signal
ª Increase sampling rate to obtain higher resolution in discrete time
ª Decrease sampling rate for compression
 Two primary ways
ª Convert back to analog, resample at the desired rate
• Neither practical nor efficient: needs extra hardware, prone to errors, and more costly
ª Resample in the discrete time
• Practical and efficient, little or no additional hardware is required
• However, needs up/down sampler, as well as additional digital filters

 Decimation & Interpolation


ª Reduce sampling rate: Prefiltering + Downsampling Î Decimation
ª Increase sampling rate: Upsampling + Postfiltering Î Interpolation
Downsampling
 Definition: Downsampling a discrete sequence by a factor of M is
achieved by retaining every Mth sample and discarding all the others.
ª The process of downsampling the sequence x[n] is denoted as (↓M)x[n]

ª If x[n]={…, x[-2], x[-1], x[0], x[1], x[2], …}, then the downsampled sequence is
{…, x[-2M], x[-M], x[0], x[M], x[2M], …}

ª Mathematically

x(M)[n] = (↓M)x[n] = x[Mn]

x[n] = 2 7 3 5 2 7 4 9 9 6 5 9 8 6 8 7 3 3 3 5 7 3 8 6 4 7 5 4 7 6 8 10 5 9 2 10 3 3 9 7 1 0 9 2 3 7 3 5 1 10

X(5)[n] = 2 7 5 7 7 7 8 10 1 7
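The same operation can be sketched in plain Python (the slides use MATLAB; `downsample` here is just an illustrative name, not the Signal Processing Toolbox function):

```python
def downsample(x, M):
    """Retain every M-th sample of x, starting with x[0]."""
    return x[::M]

# The example sequence from the slide, downsampled by M = 5
x = [2, 7, 3, 5, 2, 7, 4, 9, 9, 6, 5, 9, 8, 6, 8, 7, 3, 3, 3, 5,
     7, 3, 8, 6, 4, 7, 5, 4, 7, 6, 8, 10, 5, 9, 2, 10, 3, 3, 9, 7,
     1, 0, 9, 2, 3, 7, 3, 5, 1, 10]
print(downsample(x, 5))  # [2, 7, 5, 7, 7, 7, 8, 10, 1, 7]
```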
Downsampling in Matlab

x=round(10*rand(50,1));
subplot(211)
stem(x); grid;
title('Some random sequence')
xlabel('Original sequence index')
x2=x(1:5:50);
x3=downsample(x,5);
subplot(212)
stem(x2); grid
title('Sequence downsampled by 5')
xlabel('New sequence index')
Downsampling in
Frequency Domain
 To see the effect of downsampling in the frequency domain, let us
first look at the special case of downsampling by 2:

X(2)(ω) = (1/2) [ X(ω/2) + X((ω + 2π)/2) ]

 Downsampling by 2 stretches the spectrum along the ω axis by a factor
of 2; equivalently, the frequency axis compresses by a factor of 2
(to see why, think about how sin(20t) and sin(10t) would look).
Downsampling w/o Aliasing
 Looking at both spectra on the same frequency scale reveals that the spectral
components of the downsampled signal are stretched by a factor of 2
(the frequency axis is compressed by a factor of 2)
Downsampling with Aliasing
 Note that we normally require Ωh < Ωs/2. If we are going to
subsample by a factor of 2, then aliasing will occur if Ωh is not less
than Ωs/4.

(Figure: a spectrum with highest frequency Ωh below Ωs/2; after
downsampling by 2, the highest frequency becomes Ωh' = 2Ωh, which
aliases if it exceeds Ωs/2.)
Antialiasing Filter
 If a discrete sequence is downsampled by a factor of M, then the
original highest frequency in the signal, Ωh, must be low enough to
accommodate the M fold reduction in the sampling rate.
ª More specifically, Ωh< Ωs/2M must be satisfied to avoid aliasing
ª Since Ωs=2πfs=2π/Ts Î this condition is equivalent to saying

ΩhTs ≤ π/M  ⇔  ωh ≤ π/M
ª In order to ensure that the highest frequency of the downsampled signal does not
exceed the Nyquist rate, an anti-aliasing decimation prefilter with a cutoff frequency of
π/M is typically employed. The entire procedure is then called decimation.
Decimation



Decimation in matlab
DECIMATE Resample data at a lower rate after lowpass filtering.
Y = DECIMATE(X,R) resamples the sequence in vector X at 1/R times the
original sample rate. The resulting resampled vector Y is R times shorter,
LENGTH(Y) = LENGTH(X)/R.

DECIMATE filters the data with an eighth order Chebyshev Type I lowpass
filter with cutoff frequency .8*(Fs/2)/R, before resampling.

Y = DECIMATE(X,R,N) uses an N'th order Chebyshev filter.


Y = DECIMATE(X,R,'FIR') uses the 30 point FIR filter generated by
FIR1(30,1/R) to filter the data.
Y = DECIMATE(X,R,N,'FIR') uses the N-point FIR filter.

Note: For large R, the Chebyshev filter design might be incorrect


due to numeric precision limitations. In this case, DECIMATE will
use a lower filter order. For better anti-aliasing performance, try
breaking R up into its factors and calling DECIMATE several times.
Downsample vs. decimate

x=round(10*rand(50,1));
x2=downsample(x,5);
x3=decimate(x,5);
subplot(311)
stem(x); grid
subplot(312)
stem(x2); grid
subplot(313)
stem(x3); grid

x2 = 4 9 2 5 6 8 7 7 1 8

x3 = 6.79 6.09 4.93 5.88 5.12 5.92 6.31 5.16 6.27 1.22
Spectrum of the
Downsampled Signal
 For the most general case, the following theorem applies
 Theorem: Let x[n] and X(ω) denote a discrete sequence and its
DTFT, whereas x(M)[n] and X(M)(ω) denote the signal subsampled by
a factor of M and its DTFT, respectively. Then

X(M)(ω) = (1/M) Σ_{k=0}^{M−1} X( ω/M + 2πk/M )

 Take home message: Downsampling by a factor of M stretches the


spectrum by a factor of M. To ensure that aliasing does not occur, a
prefilter with a cutoff frequency of π/M must be used.
Upsampling
 Upsampling is the reverse operation, used to increase the sampling
rate of the signal.
 Upsampling by a factor of M is typically achieved by inserting M−1
zeros between every two consecutive samples
ª The process of upsampling the sequence x[n] is denoted as (↑M)x[n]

ª If x[n]={…, x[-2], x[-1], x[0], x[1], x[2], …}, then the sequence upsampled by 3 is
{…, x[-2], 0, 0, x[-1], 0, 0, x[0], 0, 0, x[1], 0, 0, x[2], …}

ª Mathematically

y[n] = (↑M)x[n] = x^(M)[n] = { x[n/M],  n = 0, ±M, ±2M, …
                             { 0,       otherwise
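A plain-Python sketch of this definition (illustrative `upsample`, not MATLAB's toolbox function; like MATLAB's, it also appends M−1 zeros after the last sample):

```python
def upsample(x, M):
    """Insert M-1 zeros after every sample of x."""
    y = []
    for v in x:
        y.append(v)            # the sample itself, at index n = 0, M, 2M, ...
        y.extend([0] * (M - 1))  # M-1 zeros in between
    return y

print(upsample([1, 2, 3], 3))  # [1, 0, 0, 2, 0, 0, 3, 0, 0]
```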
Upsampling in Matlab

x=[1 2 3 4 5 4 3 2 1];
x2=upsample(x,5);
subplot(211)
stem(x, 'filled'); grid
title('Some sequence x[n]')
subplot(212)
stem(x2, 'filled'); grid
title('x[n] upsampled by 5')

Note that the upsampling operation introduces spurious high frequency
components not originally present in the signal. These components must be
removed by a lowpass postfilter!
Upsampling in
Frequency Domain
 It is easy to see that the original spectrum and the spectrum of the
upsampled signal are related by

X^(M)(ω) = X(ωM)
 This means that the spectral components of the upsampled signal are
compressed by a factor of M (or the frequency axis is stretched by a
factor of M)

 Since we are increasing the sampling rate (the new sampling rate is
MΩs), there is no danger of aliasing; however, spurious frequency
components, called images, appear and need to be removed.

 The upsampling operation followed by a post processing lowpass


filter is called interpolation.
Upsampling in
Frequency Domain

(Figure: the frequency axis of the spectrum stretches by a factor of M;
image frequencies appear between the compressed replicas and are removed
by the postprocessing lowpass filter.)
Interpolation in Matlab

INTERP Resample data at a higher rate using lowpass interpolation.

Y = INTERP(X,R) resamples the sequence in vector X at R times the original sample rate.
The resulting resampled vector Y is R times longer, LENGTH(Y) = R*LENGTH(X).

A symmetric filter, B, allows the original data to pass through unchanged and
interpolates between so that the mean square error between them and their ideal
values is minimized.

Y = INTERP(X,R,L,ALPHA) allows specification of arguments L and ALPHA which


otherwise default to 4 and .5 respectively. 2*L is the number of original sample values
used to perform the interpolation. Ideally L should be less than or equal to 10. The length
of B is 2*L*R+1. The signal is assumed to be band limited with cutoff frequency 0 < ALPHA
<= 1.0.

[Y,B] = INTERP(X,R,L,ALPHA) returns the coefficients of the interpolation filter B.


Interpolation in Matlab

x=[1 2 3 4 5 4 3 2 1];
x2=upsample(x,5);
x3=interp(x,5);
subplot(311)
stem(x,'filled'); grid;
subplot(312)
stem(x2,'filled'); grid;
subplot(313)
stem(x3,'filled'); grid;
Interpolation in Matlab

X=abs(fft(x, 512));
X2=abs(fft(x2, 512));
X3=abs(fft(x3, 512));

w=linspace(-pi, pi, 512);


subplot(311)
plot(w,fftshift(X)); grid
title( 'Spectrum of the original sequence')
subplot(312)
plot(w,fftshift(X2)); grid
title( 'Spectrum of the upsampled sequence')
subplot(313)
plot(w,fftshift(X3)); grid
title( 'Spectrum of the interpolated sequence')
Lecture 16

Discrete Fourier
Transform
& Fast Fourier
Transform



Today in DSP
 Computing convolution using DFT
 Some examples
 The DFT matrix
 The relationship between DTFT vs DFT: Reprise
DFT Examples
 We will compute the indicated N-point DFTs of the following
sequences
ª x[n]=δ[n] (general N)
ª x[n]=u[n]-u[n-N] (general N)
ª x[n]=(0.5)n (N=32)
In Matlab
 In Matlab, the fft() function computes the DFT using a fast algorithm, called the fast
Fourier transform (FFT).
 X = fft(x) returns the discrete Fourier transform (DFT) of vector x, computed with a
fast Fourier transform (FFT) algorithm.
ª If x is a matrix, fft returns the Fourier transform of each column of the matrix. In this case,
the length of X and the length of x are identical.
ª X = fft(x,N) returns the N-point DFT. If the length of x is less than N, x is padded with
trailing zeros to length N. If the length of x is greater than N, the sequence x is truncated.
ª The N points returned by fft corresponds to frequencies in the [0 2π] range, equally spaced
with an interval of 2π/N.
ª Note that the Nth FFT point corresponds to 2π, which in turn corresponds to the sampling
frequency.
ª If x[n] is real, X[k] is symmetric. Using the fftshift() function shifts the center of symmetry
so that the FFT is given in the [-π, π] interval, rather than [0, 2π].
 x=ifft(X,N) returns the N-point inverse discrete Fourier transform
Back to Example
Let’s show that
i. DFT is indeed the sampled version of
DTFT and
ii. DFT and FFT produce identical results

n=0:31; k=0:31;
x=0.9.^n;
w=linspace(0, 2*pi, 512);
K=linspace(0, 2*pi, 32);
X1=1./(1-0.9*exp(-j*w));
X2=(1-(0.9*exp(-j*(2*pi/32)*k)).^32)./ ...
   (1-0.9*exp(-j*(2*pi/32)*k));
X=fft(x);
subplot(311)
plot(w, abs(X1)); grid
subplot(312)
stem(K, abs(X2), 'r', 'filled'); grid
subplot(313)
stem(K, abs(X), 'g', 'filled'); grid
Matrix Computation
of DFT
 DFT has a simple matrix implementation: The DFT samples defined by

X[k] = Σ_{n=0}^{N−1} x[n] W_N^{kn},  0 ≤ k ≤ N−1

can be expressed in a matrix form X = D_N x

where X=[X[0] X[1] … X[N-1]]^T is the vectorized DFT sequence,
x=[x[0] x[1] … x[N-1]]^T is the vectorized time domain sequence, and
D_N is the NxN DFT matrix given by

      | 1   1              1               ...  1                |
      | 1   W_N^1          W_N^2           ...  W_N^(N-1)        |
D_N = | 1   W_N^2          W_N^4           ...  W_N^(2(N-1))     |
      | :   :              :                    :                |
      | 1   W_N^(N-1)      W_N^(2(N-1))    ...  W_N^((N-1)^2)    |
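As a cross-check of the matrix form, a plain-Python sketch (not MATLAB's dftmtx; `dft_matrix` and `matvec` are illustrative names) builds D_N, applies it, and undoes it with the inverse relation D_N^{-1} = (1/N) D_N^* stated below:

```python
import cmath

def dft_matrix(N):
    """N x N DFT matrix with entries W_N^(kn), W_N = exp(-2j*pi/N)."""
    W = cmath.exp(-2j * cmath.pi / N)
    return [[W ** (k * n) for n in range(N)] for k in range(N)]

def matvec(A, x):
    """Plain matrix-vector product."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

N = 8
D = dft_matrix(N)
Dinv = [[z.conjugate() / N for z in row] for row in D]  # (1/N) * conj(D_N)
x = [1, 3, 2, -1, 4, 0, 2, 5]
x_back = matvec(Dinv, matvec(D, x))  # recovers x up to rounding error
```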
Matrix Computation
of IDFT
• Likewise, the IDFT relation given by

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] W_N^{−kn},  0 ≤ n ≤ N−1

can be expressed in matrix form as

x = D_N^{-1} X

where D_N^{-1} is the NxN IDFT matrix

               | 1   1               1                ...  1                 |
               | 1   W_N^(-1)        W_N^(-2)         ...  W_N^(-(N-1))      |
D_N^{-1} = (1/N) | 1   W_N^(-2)        W_N^(-4)         ...  W_N^(-2(N-1))     |
               | :   :               :                     :                 |
               | 1   W_N^(-(N-1))    W_N^(-2(N-1))    ...  W_N^(-(N-1)^2)    |

Note that the DFT and IDFT matrices are related to each other by

D_N^{-1} = (1/N) D_N^*
In Matlab
dftmtx() Discrete Fourier transform matrix.
dftmtx(N) is the N-by-N complex matrix of values around the unit
circle whose inner product with a column vector of length N yields the
discrete Fourier transform of the vector. If X is a column vector of
length N, then dftmtx(N)*X yields the same result as FFT(X);
however, FFT(X) is more efficient.
D = dftmtx(N) returns the N-by-N complex matrix D such that, for a
length-N column vector x,
y = D*x computes the discrete Fourier transform of x.
The inverse discrete Fourier transform matrix is
Ai = conj(dftmtx(n))/n
DTFT ÍÎ DFT
 We now look more closely at the relationship between DTFT and DFT
ª DTFT is a continuous transform. Sampling the DTFT at regularly spaced intervals around
the unit circle gives the DFT.
ª The sampling operation in one domain causes the (inverse) transform in the other domain to
be periodic
ª Just like we can reconstruct a continuous time signal from its samples, DTFT can also be
synthesized from its DFT samples, as long as the number of points N at which DFT is
computed is equal to or larger than the number of samples of the original signal.
• That is, given the N-point DFT X[k] of a length-N sequence x[n], its DTFT X(ω) can be uniquely
determined from X[k]

x[n] ⇔ X(ω) (DTFT)        x̃[n] ⇔ X[k] (DFT)

X[k] = X(ω)|_{ω = 2πk/N}

x̃[n] = Σ_{m=−∞}^{∞} x[n + mN],  0 ≤ n ≤ N−1

Thus x̃[n] is obtained from x[n] by adding an infinite number of shifted
replicas of x[n], with each replica shifted by an integer multiple of N
sampling instants, and observing the sum only for the interval 0 ≤ n ≤ N−1.

X(ω) = (1/N) Σ_{k=0}^{N−1} X[k] · sin[(ωN − 2πk)/2] / sin[(ωN − 2πk)/(2N)] · e^{−j(ω − 2πk/N)(N−1)/2}

If we compute the DFT at M < N points, time-domain aliasing occurs,
and the DTFT cannot be reconstructed from the DFT samples X[k]!
The Fast Fourier Transform
(FFT)
 By far one of the most influential algorithms ever developed in signal
processing, revolutionizing the field
 FFT: Computationally efficient calculation of the frequency spectrum
ª Made many advances in signal processing possible
ª Drastically reduces the number of additions and multiplications necessary to
compute the DFT
 Many competing algorithms
ª Decimation in time FFT
ª Decimation in frequency FFT
 Makes strategic use of two simple complex identities:

W_N^2 = e^{−j2(2π/N)} = e^{−j2π/(N/2)} = W_{N/2}

W_N^{k+N/2} = e^{−j(2π/N)k} · e^{−j(2π/N)(N/2)} = e^{−j(2π/N)k} · e^{−jπ} = −e^{−j(2π/N)k} = −W_N^k
FFT: Decimation in Time
 Assume that the signal is of length N=2p, a power of two. If it is not,
zero-pad the signal with enough number of zeros to ensure power-of-
two length.
 N-point DFT can be computed as two N/2 point DFTs, both of which
can then be computed as two N/4 point DFTs
ª Therefore an N-point DFT can be computed as four N/4 point DFTs.
ª Similarly, an N/4 point DFT can be computed as two N/8 point DFTs Î the
entire N-point DFT can be computed as eight N/8 point DFTs
ª Continuing in this fashion, an N-point DFT can be computed as N/2 2-point
DFTs
• A two point DFT requires no multiplications and just two additions.
• We will see that we will need additional multiplications and additions to combine the 2-
point DFTs, however, overall, the total number of operations will be much fewer then
that would be required by the regular computation of DFT
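The recursive splitting described above can be sketched in plain Python (the slides use MATLAB's built-in fft; this stand-alone recursive version, with the assumed name `fft`, is only meant to make the P[k] + W_N^k S[k] recombination concrete, and assumes len(x) is a power of two):

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    P = fft(x[0::2])   # N/2-point DFT of the even indexed samples
    S = fft(x[1::2])   # N/2-point DFT of the odd indexed samples
    X = [0] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * S[k]  # W_N^k * S[k]
        X[k] = P[k] + t            # X[k]       = P[k] + W_N^k S[k]
        X[k + N // 2] = P[k] - t   # X[k + N/2] = P[k] - W_N^k S[k]
    return X
```

Comparing against the direct N^2 sum confirms both give the same spectrum.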
Computational
Complexity of DFT
 Q: How many multiplications and additions are needed to compute the
DFT?

X[k] = Σ_{n=0}^{N−1} x[n] W_N^{nk},  0 ≤ k ≤ N−1

 Note that for each “k” we need N complex multiplications, and N-1
complex additions
ª Each complex multiplication is four real multiplications and two real additions
ª A complex addition requires two real additions
ª So for N values of “k”, we need N*N complex multiplications and N*(N-1)
complex additions
• This amounts to N2 complex multiplications and N*(N-1)~N2 (for large N) complex
additions
• N2 complex multiplications: 4N2 real multiplications and ~2N2 real additions
• N2 complex additions: 2N2 real additions
ª A grand total of 4N2 real multiplications and 4N2 real additions:
• For, say, a 1000-point signal: 4,000,000 multiplications and 4,000,000 additions: OUCH!
Two-point DFT
 How many operations do we need for a 2-point DFT?

X[k] = Σ_{n=0}^{1} x[n] W_2^{nk},  0 ≤ k ≤ 1

ª X[0] = x[0] + x[1]
ª X[1] = x[0] + x[1]·e^{−jπ} = x[0] − x[1]

A 2-point DFT requires no multiplication, just two additions
(sign change is very inexpensive – reversal of one or more bits –
and hence does not even go into complexity computations)
Decimation in Time
 For the first stage, we first "decimate" the time-domain signal into two halves: even
indexed samples and odd indexed samples.
ª Using the first property of complex exponentials:

X[k] = Σ_{n=0}^{(N/2)−1} x[2n] W_{N/2}^{kn} + W_N^k Σ_{n=0}^{(N/2)−1} x[2n+1] W_{N/2}^{kn}
     = P[k] + W_N^k S[k],  0 ≤ k ≤ N−1

where P[k] is the N/2-point DFT of the even samples and S[k] is the
N/2-point DFT of the odd samples, each defined on k = [0, 1, …, (N/2)−1]

ª However, we need all values of P[k] and S[k] for k = [0 ~ N−1]

ª Since an N/2-point DFT is periodic with period N/2, P[k] and S[k] take identical
values in the [0, (N/2)−1] and [N/2, N−1] intervals
Decimation in Time
Stage 1
 Therefore:

X[k] = P[k] + W_N^k S[k],                                    0 ≤ k ≤ (N/2)−1
X[k + N/2] = P[k] + W_N^{k+N/2} S[k] = P[k] − W_N^k S[k],    0 ≤ k ≤ (N/2)−1

(the first line gives X[k] for k = 0 ~ (N/2)−1, the second for k+N/2 = N/2 ~ N−1)
The Butterfly Process
 The basic building block in this process can be represented as a
butterfly: the outputs P[k] (from the even indexed samples) and S[k]
(from the odd indexed samples) are combined as P[k] ± W_N^k S[k].
ª Two multiplications & two additions
 But we can do even better, just by moving the
multiplier W_N^k before the node
ª Reduced butterfly
ª 1 multiplication and two additions!
 We need N/2 reduced butterflies in stage 1
ª A total of (N/2) multiplications and
2*(N/2) = N additions
Stage 1 FFT

(Figure: x[n] is split into x_even[n] and x_odd[n]; each feeds an
(N/2)-point DFT, producing P[k] and S[k], which are combined by N/2
reduced butterflies with twiddle factors W_N^k to give X[k] and X[k+N/2].)

Each (N/2)-point DFT: (N/2)^2 multiplications + (N/2)^2 additions
The butterflies: (N/2) multiplications & N additions

Grand total: (N/2 + N^2/2) mult. & (N + N^2/2) additions

Regular DFT: ~N^2 mult. and N^2 additions    Whoop-de-doo!!!?
For N > 2, even a single stage of decimation uses fewer calculations
Stage 2…& 3…&4…

 We can continue the same process as long as we have samples to split in half
(p − 1 splitting stages in all, where p = log2 N):
ª Each N/2-point DFT can be computed as two N/4-point DFTs + a reduced butterfly
ª Each N/4-point DFT can be computed as two N/8-point DFTs + a reduced butterfly
ª …
ª …until we are left with 2-point DFTs + a reduced butterfly
8-point FFT – Stages 1, 2 and 3

(Figures: the 8-point DFT is built from two 4-point DFTs, which are in turn
built from four 2-point DFTs; the successive even/odd splits reorder the
samples into bit-reversed order: 0, 4, 2, 6, 1, 5, 3, 7.)

Note that further reductions can be obtained by avoiding the following multiplications:

W_N^{N/2} = −1    W_N^{N/4} = −j    W_N^{3N/4} = j    W_N^0 = 1
So How did we do?

In log2N stages, total # of calculations:


(N/2)*(log2N-1)~(N/2)*log2N multiplications
Nlog2N additions
As opposed to N2 multiplications and N2 additions

For N=1024 Î DFT: 1,048,576 multiplications and additions


Î FFT: 5,120 multiplications and 10,240 additions
Is that good…?
For N=4096 ÎDFT: 16,777,216 multiplications and additions
ÎFFT: 24,576 multiplications and 49,152 additions
Decimation in Frequency
 Just like we started with time-domain samples and decimated them in
half, we can do the same thing with frequency samples:
ª Compute the even indexed spectral samples k=0, 2, 4, …, N−2 and the odd indexed
spectral samples k=1, 3, 5, …, N−1 separately.
ª Then further divide each by half to compute X[k] for 0, 4, 8,… and 2, 6, 10, … as
well as 1, 5, 9, … and 3, 7, 11 …separately
ª Continuing in this manner, we can reduce the entire process to a series of 2-point
DFTs.
8-point DFT with
Decimation in Frequency
Homework
 Questions from Chapter 2
ª 30, 32, 34, 35, 36, 46 (also verify using fft function), 50, 51, 52

ª Due: April 7
Lecture 15

Discrete Fourier
Transform



Today in DSP
 Discrete Fourier Transform
ª Analysis
ª Synthesis
ª Orthogonality of discrete complex exponentials
 Periodicity of DFT
 Circular Shift
 Properties of DFT
 Circular Convolution
ª Linear vs. circular convolution
 Computing convolution using DFT
 Some examples
 The DFT matrix
 The relationship between DTFT vs DFT: Reprise
DFT
 DTFT is an important tool in digital signal processing, as it provides
the spectral content of a discrete time signal.
ª However, the computed spectrum, X(ω), is a continuous function of ω, and
therefore cannot be computed using a computer (the freqz function computes only an
approximation of the DTFT, not the actual DTFT).
ª We need a method to compute the spectral content of a discrete time signal and
have a spectrum – actually a discrete function – so that it can be computed using a
digital computer
 A straightforward solution: Simply sample the frequency variable ω of
the DTFT in frequency domain in the [0 2π] interval.
ª If we want, say N points in the frequency domain, then we divide ω in the [0 2π]
interval into N equal intervals.
ª Then the discrete values of ω are 0, 2π/N, 2·2π/N, 3·2π/N, …, (N−1)·2π/N
DFT Analysis
ÂDefinition - The simplest relation between a length-N
sequence x[n], defined for 0 ≤ n ≤ N-1, and its DTFT X(ω)
is obtained by uniformly sampling X(ω) on the ω-axis 0 ≤ ω
≤ 2π at ωk=2πk/N, 0 ≤ k ≤ N-1

ÂFrom the definition of the DTFT we thus have

X[k] = X(ω)|_{ω = 2πk/N}
     = Σ_{n=0}^{N−1} x[n] e^{−j2πkn/N},  0 ≤ k ≤ N−1        DFT analysis equation
DFT Analysis
 Note the following:
ª k replaces ω as the frequency variable in the discrete frequency domain
ª X[k] is also (usually) a length-N sequence in the frequency domain, just like the
signal x[n] is a length-N sequence in the time domain.
ª X[k] can be made longer than N points (as we will see later)
ª The sequence X[k] is called the discrete Fourier transform (DFT) of the
sequence x[n]
 Using the notation W_N = e^{−j2π/N}, the DFT is usually expressed as:

X[k] = Σ_{n=0}^{N−1} x[n] W_N^{kn},  0 ≤ k ≤ N−1
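The analysis equation translates almost literally into plain Python (an O(N^2) sketch for illustration only; `dft` is an assumed name, not a library routine):

```python
import cmath

def dft(x):
    """Direct evaluation of X[k] = sum_n x[n] * W_N^(kn)."""
    N = len(x)
    W = cmath.exp(-2j * cmath.pi / N)  # W_N
    return [sum(x[n] * W ** (k * n) for n in range(N)) for k in range(N)]

# x[n] = delta[n]: every X[k] equals 1 (the DFT of an impulse is flat)
X = dft([1, 0, 0, 0])
```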
Inverse DFT
(DFT Synthesis)
 The inverse discrete Fourier transform, also known as the synthesis
equation, is given as

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{j2πnk/N}
     = (1/N) Σ_{k=0}^{N−1} X[k] W_N^{−kn},  0 ≤ n ≤ N−1        DFT synthesis equation

 To verify the above expression we multiply both sides of the above
equation by e^{j(2π/N)nl} and sum the result from n = 0 to n = N−1
Orthogonality of
Discrete Exponentials

 During the proof, multiplication by e^{j(2π/N)nl} will result in an expression that
includes the summation of several discrete exponentials, which can be computed using
the result of the following lemma
 Lemma: Orthogonality of discrete exponentials
ª For integers N, n, r and l

Σ_{n=0}^{N−1} e^{j(2π/N)nl} = { N,  l = rN
                              { 0,  otherwise

ª In words, the summation of harmonically related discrete complex exponentials of 2πl/N is N
if l is an integer multiple of N, and zero otherwise.
ª The proof of the first part follows directly from Euler's expansion, the second part follows
from the geometric series expansion:

Σ_{n=0}^{N−1} ( e^{j(2π/N)l} )^n = (1 − e^{j2πl}) / (1 − e^{j(2π/N)l})

Note that the exponential in the numerator evaluates to e^{j2πl}, which is
always "1" for any integer "l". Therefore the numerator is zero, and
hence the whole expression is zero, except for l = rN, when the
denominator is zero as well. This undefined situation (0/0) can easily
be computed straight from the original expression itself.
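The lemma is easy to verify numerically; a small plain-Python check (the function name `exp_sum` is illustrative):

```python
import cmath

def exp_sum(N, l):
    """sum_{n=0}^{N-1} exp(j*2*pi*n*l/N)"""
    return sum(cmath.exp(2j * cmath.pi * n * l / N) for n in range(N))

N = 8
# l a multiple of N: every term is 1, so the sum is N
print(abs(exp_sum(N, 16)))  # 8 (within rounding error)
# any other integer l: the terms cancel around the unit circle
print(abs(exp_sum(N, 3)))   # 0 (within rounding error)
```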
Computing DFT
 Recall the analysis equation:

X[k] = Σ_{n=0}^{N−1} x[n] e^{−j2πkn/N},  0 ≤ k ≤ N−1

ª For any given k, the DFT is computed by multiplying
each x[n] with each of the complex exponentials
e^{−j2πnk/N} and then adding up all these components
ª If, for example, we wish to compute an 8 point
DFT, the complex exponentials are 8 unit vectors
placed at equal distances from each other on the
unit circle

(Figure: the 8 unit vectors e^{−j2πk/8}, k = 0, 1, …, 7, around the unit circle)
X[0]=x[0]ej0+ x[1]ej0+ x[2]ej0+ x[3]ej0+ x[4]ej0+ x[5]ej0+ x[6]ej0+ x[7]ej0
X[1]=x[0]ej0+x[1]e-j(2π/8).1+x[2]e-j(2π/8).2+x[3]e-j(2π/8).3+x[4]e-j(2π/8).4+x[5]e-j(2π/8).5+x[6]e-j(2π/8).6+x[7]e-j(2π/8).7
X[2]=x[0]ej0+x[1]e-j(2π/8).2+x[2]e-j(2π/8).4+x[3]e-j(2π/8).6+x[4]e-j(2π/8).0+x[5]e-j(2π/8).2+x[6]e-j(2π/8).4+x[7]e-j(2π/8).6
X[3]=x[0]ej0+x[1]e-j(2π/8).3+x[2]e-j(2π/8).6+x[3]e-j(2π/8).1+x[4]e-j(2π/8).4+x[5]e-j(2π/8).7+x[6]e-j(2π/8).2+x[7]e-j(2π/8).5
X[4]=x[0]ej0+x[1]e-j(2π/8).4+x[2]e-j(2π/8).0+x[3]e-j(2π/8).4+x[4]e-j(2π/8).0+x[5]e-j(2π/8).4+x[6]e-j(2π/8).0+x[7]e-j(2π/8).4
X[5]=x[0]ej0+x[1]e-j(2π/8).5+x[2]e-j(2π/8).2+x[3]e-j(2π/8).7+x[4]e-j(2π/8).4+x[5]e-j(2π/8).1+x[6]e-j(2π/8).6+x[7]e-j(2π/8).3
X[6]=x[0]ej0+x[1]e-j(2π/8).6+x[2]e-j(2π/8).4+x[3]e-j(2π/8).2+x[4]e-j(2π/8).0+x[5]e-j(2π/8).6+x[6]e-j(2π/8).4+x[7]e-j(2π/8).2
X[7]=x[0]ej0+x[1]e-j(2π/8).7+x[2]e-j(2π/8).6+x[3]e-j(2π/8).5+x[4]e-j(2π/8).4+x[5]e-j(2π/8).3+x[6]e-j(2π/8).2+x[7]e-j(2π/8).1
Periodicity in DFT
 DFT is periodic in both time and frequency domains!!!
ª Even though the original time domain sequence to be transformed is not periodic!
 There are several ways to explain this phenomenon. Mathematically,
we can easily show that both the analysis and synthesis equations are
periodic by N.
 Now, understanding that DFT is periodic in frequency domain is
straightforward: DFT is obtained by sampling DTFT at 2π/N intervals.
Since DTFT was periodic with 2π Æ DFT is periodic by N.
 This can also be seen easily from the complex exponential wheel.
Since there are only N vectors around a unit circle, the transform will
repeat itself every N points.
 But what does it mean that DFT is also periodic in
time domain?
Periodicity in DFT
 Recall how we obtained the frequency spectrum of a sampled signal:
ª The spectrum of the sampled signal was identical to that of the continuous signal, except
with periodic replications of itself every 2π.
ª That is, sampling the signal in time domain, caused the frequency domain to be periodic
with 2π.
 In obtaining DFT, we did the opposite: sample the frequency domain
ª From the duality of the Fourier transform, this corresponds to making the time domain
periodic.
ª However, the original signal was not periodic!
ª This is an artifact of the mathematical manipulations.
 Here is what is happening:
ª When we sample the DTFT in frequency domain, the resulting “discrete spectrum”
is not spectrum of the original discrete signal. Rather, the sampled spectrum is in
fact the spectrum of a time domain signal that consists of periodically replicated
versions of the original discrete signal.
ª Similar to sampling theorem, under rather mild conditions, we can reconstruct the DTFT
X(ω), from its DFT samples X[k]. More on this later!
DFT & Circular Shift
 We compute the N-point DFT of an aperiodic discrete sequence x[n],
with the understanding that the computed DFT is in fact the spectrum
of a periodic sequence x̃[n], which is obtained by periodically
replicating x[n] with a period of N samples.
 This nuance makes it necessary to introduce the concept of circular
shift.
ª A circularly shifted sequence is denoted by x[(n-L)N], where L is the amount of
shift, and N is the length of the previously determined base interval (by default,
unless defined otherwise, this is equal to the length of the sequence).
ª The circularly shifted sequence is defined only in the same interval as the original
sequence!
ª To obtain a circularly shifted sequence, we first linearly shift the sequence by L,
and then rotate the sequence in such a manner that the shifted sequence remains in
the same interval originally defined by N.
Circular Shift

(Figure: an original sequence, its linear shift, and its circular shift.)

If the original sequence is defined in the time interval N1
to N2, then the circular shift can mathematically be
represented as follows:

xc[n] = x[(n − L)]_N
      = x[(n − L) mod N]
      = x[(n − L + rN)],  such that N1 ≤ n − L + rN ≤ N2

The circularly shifted sequence is obtained by finding an
integer r such that n − L + rN remains in the same domain as
the original sequence.
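For the default base interval 0 ≤ n ≤ N−1, the modulo operator gives the circular shift directly; a plain-Python sketch (`circular_shift` is an illustrative name):

```python
def circular_shift(x, L):
    """Return x[(n - L) mod N] for n = 0..N-1."""
    N = len(x)
    return [x[(n - L) % N] for n in range(N)]

# Shifting by 2 rotates the last two samples around to the front
print(circular_shift([0, 1, 2, 3, 4], 2))  # [3, 4, 0, 1, 2]
```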
Properties of the DFT

g[(n − n0)_N]  ⇔  e^{−j(2π/N)k n0} G[k]        (circular time shift)

e^{j(2π/N)k0 n} g[n]  ⇔  G[(k − k0)_N]         (circular frequency shift)

(g[n] * h[n])_N  ⇔  G[k] H[k]                  (circular convolution in time)

g[n] h[n]  ⇔  (1/N) (G[k] * H[k])_N            (circular convolution in frequency)

DFT Symmetry relations
Circular Convolution
 Since the convolution operation involves shifting, we need to redefine the
convolution for circularly shifted sequences.
ª The process is fairly straightforward, one simply needs to ensure that the shifting in
convolution is done in a circular fashion.
ª Example (Example 2.7): Compute the circular convolution of the following sequences:
x1[n]=[-1 2 -3 2]   x2[n]=[-0.5 1 1.5]  (zero padded to [-0.5 1 1.5 0])
• Recall the expression:

y[n] = (x1[n] * x2[n])_4 = Σ_{m=0}^{3} x1[m] x2[(n−m)_4] = Σ_{m=0}^{3} x2[m] x1[(n−m)_4]

x1[-m]=[2 -3 2 -1] Î This sequence has nonzero values for m=-3, -2, -1 and 0. However,
circular shift is only defined on the original interval of [0 3]. Therefore we need to convert
this shift to a circular shift, by rotating x1[-m] enough to land all values in the original domain:
xc[n] = x[(n − L) mod N] = x[(n − L + rN)],  N1 ≤ n − L + rN ≤ N2

x1[(-m)4]=[-1 2 -3 2]  Î y[0]=Σx2[m]·x1[(-m)4] = -1(-0.5)+2(1)-3(1.5)+2(0) = -2
x1[(1-m)4]=[2 -1 2 -3] (shift right) Î y[1]=Σx2[m]·x1[(1-m)4] = 2(-0.5)-1(1)+2(1.5)-3(0) = 1
x1[(2-m)4]=[-3 2 -1 2] Î y[2]=Σx2[m]·x1[(2-m)4] = -3(-0.5)+2(1)-1(1.5)+2(0) = 2
x1[(3-m)4]=[2 -3 2 -1] Î y[3]=Σx2[m]·x1[(3-m)4] = 2(-0.5)-3(1)+2(1.5)-1(0) = -1
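The hand computation above can be reproduced with a short plain-Python sketch of the defining sum (`circ_conv` is an illustrative name; the shorter sequence is zero padded to length N, just as in the example):

```python
def circ_conv(x, h):
    """N-point circular convolution; shorter input is zero-padded to N."""
    N = max(len(x), len(h))
    x = list(x) + [0] * (N - len(x))
    h = list(h) + [0] * (N - len(h))
    # y[n] = sum_m x[m] * h[(n - m) mod N]
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

y = circ_conv([-1, 2, -3, 2], [-0.5, 1, 1.5])
print(y)  # [-2.0, 1.0, 2.0, -1.0], matching the worked example
```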
Circular Convolution
Example
 Compute the circular convolution of x[n]=[1 3 2 -1 4], h[n]=[2 0 1 7 -3]

y[n] = (x[n] * h[n])_5 = Σ_{m=0}^{4} x[m] h[(n−m)_5] = Σ_{m=0}^{4} h[m] x[(n−m)_5]

x[m]=[1 3 2 -1 4]
h[-m]=[-3 7 1 0 2]
h[(-m)5]=[2 -3 7 1 0]  Î y[0]=Σx[m]·h[(-m)5] = 1(2)+3(-3)+2(7)-1(1)+4(0) = 6
h[(1-m)5]=[0 2 -3 7 1] Î y[1]=Σx[m]·h[(1-m)5] = -3
h[(2-m)5]=[1 0 2 -3 7] Î y[2]=Σx[m]·h[(2-m)5] = 36
h[(3-m)5]=[7 1 0 2 -3] Î y[3]=Σx[m]·h[(3-m)5] = -4
h[(4-m)5]=[-3 7 1 0 2] Î y[4]=Σx[m]·h[(4-m)5] = 28

Linear convolution result: [2 6 5 8 28 4 -9 31 -12]


Linear vs. Circular
Convolution
 Note that the results of linear and circular convolution are different.
This is a problem! Why?
 All LTI systems are based on the principle of linear convolution, as
the output of an LTI system is the linear convolution of the system
impulse response and the input to the system, which is equivalent to the
product of the respective DTFTs in the frequency domain.
ª However, if we use DFT instead of DTFT (so that we can compute it using a
computer), then the result appear to be invalid:
• DTFT is based on linear convolution, and DFT is based on circular convolution, and
they are not the same!!!
• For starters, they are not even of equal length: For two sequences of length N and M,
the linear convolution is of length N+M-1, whereas circular convolution of the same two
sequences is of length max(N,M), where the shorter sequence is zero padded to make
it the same length as the longer one.
• Is there any relationship between the linear and circular convolutions? Can one be
obtained from the other? OR can they be made equivalent?
Linear vs. Circular
Convolution
 YES!, rather easily, as a matter of fact!
ª FACT: If we zero pad both sequences x[n] and h[n], so that they are both of
length N+M-1, then linear convolution and circular convolution result in identical
sequences
ª Furthermore: If the respective DFTs of the zero padded sequences are X[k] and
H[k], then the inverse DFT of X[k]·H[k] is equal to the linear convolution of x[n]
and h[n]
ª Note that, normally, the inverse DFT of X[k]·H[k] is the circular convolution of
x[n] and h[n]. If both sequences are zero padded, then the inverse DFT gives the linear
convolution of the two.
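The FACT above can be verified with a brute-force DFT. The sketch below is ours (the helper names are assumptions, not lecture code): it pads the two example sequences to length N+M−1 = 9, multiplies their DFTs, and compares the inverse DFT against a direct linear convolution:

```python
# Zero-padding check: pad x[n] and h[n] to length N+M-1, then the inverse DFT
# of X[k]*H[k] equals their linear convolution. Brute-force O(N^2) DFT below.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def linear_convolution(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for m, hm in enumerate(h):
            y[n + m] += xn * hm
    return y

x = [1, 3, 2, -1, 4]
h = [2, 0, 1, 7, -3]
L = len(x) + len(h) - 1                      # 9 points needed
X = dft(x + [0] * (L - len(x)))
H = dft(h + [0] * (L - len(h)))
y = [round(v.real, 6) for v in idft([a * b for a, b in zip(X, H)])]
print(y)                                     # [2.0, 6.0, 5.0, 8.0, 28.0, 4.0, -9.0, 31.0, -12.0]
print(linear_convolution(x, h) == y)         # True
```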
Example
 Compute circular convolution of x[n]=[1 3 2 -1 4], h[n]=[2 0 1 7 -3],
by appropriately zero padding the two
ª We need 5+5-1=9 samples
Zero pad signals!
• x[n]=[1 3 2 -1 4 0 0 0 0], h[n]=[2 0 1 7 -3 0 0 0 0]
Corresponding convolution formula:
y[n] = (x[n] * h[n])_9 = Σ_{m=0..8} x[m]·h[(n−m)_9] = Σ_{m=0..8} h[m]·x[(n−m)_9]
x[m]=[1 3 2 -1 4 0 0 0 0]
h[-m]=[0 0 0 0 -3 7 1 0 2]
h[(-m)9] =[2 0 0 0 0 -3 7 1 0] Î y[0]=Σx[m]·h[(-m)9]=1(2)+3(0)+2(0)+(-1)(0)+4(0)+0(-3)+0(7)+0(1)+0(0)=2
h[(1-m)9]=[0 2 0 0 0 0 -3 7 1] Î y[1]=Σx[m]·h[(1-m)9]=6
h[(2-m)9]=[1 0 2 0 0 0 0 -3 7] Î y[2]=Σx[m]·h[(2-m)9]=5
h[(3-m)9]=[7 1 0 2 0 0 0 0 -3] Î y[3]=Σx[m]·h[(3-m)9]=8
h[(4-m)9]=[-3 7 1 0 2 0 0 0 0] Î y[4]=Σx[m]·h[(4-m)9]=28
h[(5-m)9]=[0 -3 7 1 0 2 0 0 0] Î y[5]=Σx[m]·h[(5-m)9]=4
h[(6-m)9]=[0 0 -3 7 1 0 2 0 0] Î y[6]=Σx[m]·h[(6-m)9]=-9
h[(7-m)9]=[0 0 0 -3 7 1 0 2 0] Î y[7]=Σx[m]·h[(7-m)9]=31
h[(8-m)9]=[0 0 0 0 -3 7 1 0 2] Î y[8]=Σx[m]·h[(8-m)9]=-12

Linear convolution results: [ 2 6 5 8 28 4 -9 31 -12] SAME!!!


Computing Convolution
Using DFT
 Note that we can compute the convolution using no time domain
operation, using the convolution property of DFT, that is, the inverse
DFT of X[k]·H[k] is equal to the linear convolution of x[n] and h[n].
 Since we can compute the DFT very efficiently using the FFT (more on this
later), it is actually more computationally efficient to compute the
DFTs X[k] and H[k], multiply them, and take the inverse DFT, than to
compute the convolution in the time domain:
(x[n] * h[n])N = IDFT { X [k ] ⋅ H [k ]}
 Note, however, the IDFT gives the circular convolution. To ensure
that one gets linear convolution, both sequences in the time domain
must be zero padded to appropriate length.
Example

x=[1 3 2 -1 4];
h=[2 0 1 7 -3];
x2=[1 3 2 -1 4 0 0 0 0];
h2=[2 0 1 7 -3 0 0 0 0];
C1=conv(x,h);
X=fft(x2); H=fft(h2);
C2=ifft(X.*H);
subplot(411)
stem(x, 'filled'); grid
subplot(412)
stem(h, 'filled'); grid
subplot(413)
stem(C1, 'filled'); grid
subplot(414)
stem(real(C2), 'filled'); grid
DFT Examples
 We will compute the indicated N-point DFTs of the following
sequences
ª x[n]=δ[n] (general N)
ª x[n]=u[n]-u[n-N] (general N)
ª x[n]=(0.5)n (N=32)
In Matlab
 In Matlab, the fft() computes DFT using a fast algorithm, called fast Fourier
transform (FFT).
 X = fft(x) returns the discrete Fourier transform (DFT) of vector X, computed with a
fast Fourier transform (FFT) algorithm.
ª If x is a matrix, fft returns the Fourier transform of each column of the matrix. In this case,
the length of X and the length of x are identical.
ª X = fft(x,N) returns the N-point DFT. If the length of X is less than N, X is padded with
trailing zeros to length N. If the length of X is greater than N, the sequence X is truncated.
ª The N points returned by fft corresponds to frequencies in the [0 2π] range, equally spaced
with an interval of 2π/N.
ª Note that the Nth FFT point corresponds to 2π, which in turn corresponds to the sampling
frequency.
ª If x[n] is real, X[k] is symmetric. Using the fftshift() function shifts the center of symmetry
so that the FFT is given over the [-π, π] interval, rather than [0, 2π].
 x = ifft(X,N) returns the N-point inverse discrete Fourier transform
Back to Example
Let’s show that
i. DFT is indeed the sampled version of
DTFT and
ii. DFT and FFT produce identical results

n=0:31; k=0:31;
x=0.9.^n;
w=linspace(0, 2*pi, 512);
K=linspace(0, 2*pi, 32);
X1=1./(1-0.9*exp(-j*w));
X2=(1-(0.9*exp(-j*(2*pi/32)*k)).^32)./(1-0.9*exp(-j*(2*pi/32)*k));
X=fft(x);
subplot(311)
plot(w, abs(X1)); grid
subplot(312)
stem(K, abs(X2), 'r', 'filled'); grid
subplot(313)
stem(K, abs(X), 'g', 'filled'); grid
Matrix Computation
of DFT
 DFT has a simple matrix implementation: The DFT samples defined by
X[k] = Σ_{n=0..N−1} x[n]·W_N^{kn},  0 ≤ k ≤ N−1,  where W_N = e^{−j2π/N}

can be expressed in a matrix form X = DN x


where X=[X[0] X[1] … X[N-1]]T is the vectorized DFT sequence,
x=[x[0] x[1] … x[N-1]]T is the vectorized time domain sequence, and
DN is the NxN DFT matrix given by
1 1 1 L 1 
1 WN1 WN2 L WN( N −1) 
 2( N −1) 
D N = 1 WN2 WN4 L WN 
M M M O M 
1 W ( N −1) W 2( N −1) ( N −1)2 
L WN
 N N 
Matrix Computation
of IDFT
• Likewise, the IDFT relation given by
x[n] = (1/N)·Σ_{k=0..N−1} X[k]·W_N^{−kn},  0 ≤ n ≤ N−1
can be expressed in matrix form as
x = D−N1X
where D_N^−1 is the N×N IDFT matrix

               | 1   1             1              ...  1               |
               | 1   W_N^−1        W_N^−2         ...  W_N^−(N−1)      |
D_N^−1 = (1/N)·| 1   W_N^−2        W_N^−4         ...  W_N^−2(N−1)     |
               | :   :             :              ...  :               |
               | 1   W_N^−(N−1)    W_N^−2(N−1)    ...  W_N^−(N−1)²     |

Note that the DFT and IDFT matrices are related to each other by  D_N^−1 = (1/N)·D_N*
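The matrix relations above translate directly into code. A small pure-Python sketch (function names are ours) that builds D_N, computes a DFT as a matrix-vector product, and checks the stated inverse relation D_N⁻¹ = (1/N)·D_N*:

```python
# Build the N x N DFT matrix with entries W_N^{kn}, W_N = exp(-j*2*pi/N),
# then verify that multiplying by (1/N)*conj(D_N) inverts the transform.
import cmath

def dft_matrix(N):
    W = cmath.exp(-2j * cmath.pi / N)
    return [[W ** (k * n) for n in range(N)] for k in range(N)]

def matvec(M, v):
    return [sum(row[n] * v[n] for n in range(len(v))) for row in M]

N = 4
D = dft_matrix(N)
x = [1.0, 2.0, 0.0, -1.0]
X = matvec(D, x)                                        # DFT of x via the matrix

Dinv = [[w.conjugate() / N for w in row] for row in D]  # (1/N) * conj(D_N)
err = max(abs(a - b) for a, b in zip(matvec(Dinv, X), x))
print(err < 1e-9)                  # True: the IDFT matrix recovers x
print(abs(X[0] - sum(x)) < 1e-9)   # True: the k = 0 bin is the sum of the samples
```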
In Matlab
dftmtx() Discrete Fourier transform matrix.
D = dftmtx(N) returns the N-by-N complex matrix of values around the unit
circle whose inner product with a column vector of length N yields the
discrete Fourier transform of the vector. If x is a column vector of
length N, then dftmtx(N)*x yields the same result as fft(x);
however, fft(x) is more efficient.
y = D*x computes the discrete Fourier transform of x.
The inverse discrete Fourier transform matrix is
Di = conj(dftmtx(N))/N
DTFT ÍÎDFT
 We now look more closely to the relationship between DTFT and DFT
ª DTFT is a continuous transform. Sampling the DTFT at regularly spaced intervals around
the unit circle gives the DFT.
ª The sampling operation in one domain causes the (inverse) transform in the other domain to
be periodic
ª Just like we can reconstruct a continuous-time signal from its samples, the DTFT can also be
synthesized from its DFT samples, as long as the number of points N at which the DFT is
computed is equal to or larger than the length of the original signal.
• That is, given the N-point DFT X[k] of a length-N sequence x[n], its DTFT X(ω) can be uniquely
determined from X[k]
x[n] ⇔(DTFT) X(ω)        x̃[n] ⇔(DFT) X[k]

X[k] = X(ω)|_{ω = 2πk/N}        x̃[n] = Σ_{m=−∞..∞} x[n+mN],  0 ≤ n ≤ N−1

Thus x̃[n] is obtained from x[n] by adding an infinite number of shifted
replicas of x[n], with each replica shifted by an integer multiple of N
sampling instants, and observing the sum only over the interval 0 ≤ n ≤ N−1.

X(ω) = (1/N)·Σ_{k=0..N−1} X[k] · [ sin((ωN−2πk)/2) / sin((ωN−2πk)/2N) ] · e^{−j(ω−2πk/N)(N−1)/2}

If we compute the DFT at M < N points, time-domain aliasing occurs,
and the DTFT cannot be reconstructed from the DFT samples X[k]!
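The claim that the DFT samples the DTFT at ω = 2πk/N can be checked numerically; the sketch below is ours, using x[n] = 0.9ⁿ over 32 points as in the earlier Matlab example:

```python
# Verify that the N-point DFT of a length-N sequence samples its DTFT at
# omega_k = 2*pi*k/N, using x[n] = 0.9^n, n = 0..31.
import cmath

N = 32
x = [0.9 ** n for n in range(N)]

def dtft(x, w):
    """DTFT of a finite sequence, evaluated at a single frequency w."""
    return sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

# Brute-force N-point DFT
X_dft = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]

# Compare each DFT bin with the DTFT at omega = 2*pi*k/N
max_err = max(abs(X_dft[k] - dtft(x, 2 * cmath.pi * k / N)) for k in range(N))
print(max_err < 1e-9)   # True
```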
Homework
 Questions from Chapter 2
ª 30, 32, 34, 35, 36, 46 (also verify using fft function), 50, 51, 52

ª Due: April 7
Lecture 17

The Z-Transform

Digital Signal Processing, © 2004 Robi Polikar, Rowan University


Today in DSP
 Introducing the Z- transform
ª Why do we need another transform?
ª Z-transform – relation to the DTFT
ª Some examples and observations
ª Region of convergence (ROC)
 Properties of z-transform
 Some common z-transforms
 Rational z-transforms
ª Z-transforms in the system theory
ª Poles and zeros of a system
ª Making physical sense of the z-transform – relation to filtering
Yet Another Transform?
 Think about all the transforms you have seen so far:
ª The Laplace transform
ª The Fourier series
ª The continuous Fourier transform
ª The discrete time Fourier transform
ª The discrete Fourier transform, and FFT
 Why need yet another…?
ª Convergence issues with the Fourier transforms
ª The DTFT of a sequence x[n] exists if and only if x[n] is absolutely
summable, that is, if

Σ_{n=−∞..∞} |x[n]| < ∞

ª The DTFT may not exist for many signals of practical or analytical interest,
whose frequency analysis can therefore not be obtained through the DTFT
The z-Transform
 A generalization of the DTFT leads to the z-transform, which may
exist for many signals for which the DTFT does not.
ª DTFT is in fact a special case of the z-transform
• …just like the Fourier transform is a special case of _____________(?)

 Furthermore, the use of the z-transform allows simple algebraic
expressions to be used, which greatly simplifies frequency-domain
analysis.
 Digital filters are designed, expressed, and applied in terms of the z-transform
 For a given sequence x[n], its z-transform X(z) is defined as

X(z) = Z{x[n]} = Σ_{n=−∞..∞} x[n]·z^−n        x[n] ⇔(Z) X(z)

where z lies in the complex plane, that is, z = a + jb = re^jω
Z ÍÎDTFT
 From the definition of the z-variable

X(z) = Σ_{n=−∞..∞} x[n]·z^−n = Σ_{n=−∞..∞} x[n]·(re^jω)^−n = Σ_{n=−∞..∞} x[n]·r^−n·e^−jωn
ª It follows that the DTFT is indeed a special case of the z-transform, specifically, z-
transform reduces to DTFT for the special case of r=1, that is, |z|=1, provided
that the latter exists.
X(ω) = X(z)|_{z = e^jω}
ª The contour |z|=1 is a circle in the z-plane of unit radius Î the unit circle
ª Hence, the DTFT is really the z-transform evaluated on the unit circle
ª Just like the DTFT, the z-transform too has its own, albeit less restrictive,
convergence requirement: the infinite series Σ_{n=−∞..∞} x[n]·z^−n must converge.

 For a given sequence, the set R of values of z for which its z-


transform converges is called the region of convergence (ROC).
Convergence of
the z-transform
 From our discussion of the DTFT, we know that the infinite series

X(z) = X(re^jω) = Σ_{n=−∞..∞} x[n]·r^−n·e^−jωn

converges if x[n]·r^−n is absolutely summable, that is, if

Σ_{n=−∞..∞} |x[n]·r^−n| < ∞

 The region where this is satisfied defines the ROC, which in general is
an annular region of the z-plane (since z is a complex number,
constant |z| values describe circles in the z-plane):

R⁻ < |z| < R⁺  where  0 ≤ R⁻ < R⁺ ≤ ∞
Examples
 Determine the z-transform and the corresponding ROC of the causal
sequence x[n] = α^n·u[n]

X(z) = Σ_{n=−∞..∞} α^n·u[n]·z^−n = Σ_{n=0..∞} α^n·z^−n

This power (geometric) series converges to

X(z) = 1/(1 − αz^−1),  for |αz^−1| < 1
     = z/(z − α),      for ∞ > |z| > |α|     Î ROC is the annular region |z| > |α|

• Note that this sequence does not have a DTFT if |α| > 1; however, it does have
a z-transform!
• This is a right-sided sequence, which has an ROC that is outside of a
circular area of radius |α|!
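The geometric-series convergence above can be probed numerically. A small sketch (the values below are our own illustrative choices, with α = 0.8): the truncated sum matches the closed form 1/(1 − αz⁻¹) at a point inside the ROC, while the partial sums blow up at a point inside the pole circle:

```python
# x[n] = alpha^n u[n]: the truncated sum of alpha^n z^{-n} approaches
# 1/(1 - alpha/z) whenever |z| > |alpha| (inside the ROC).
alpha = 0.8
z = 1.1 + 0.3j                            # |z| ≈ 1.14 > 0.8: inside the ROC

partial = sum((alpha ** n) * z ** (-n) for n in range(200))
closed_form = 1 / (1 - alpha / z)
print(abs(partial - closed_form) < 1e-8)  # True

# At a point inside the pole circle (|z| < |alpha|) the series diverges
z_bad = 0.5
print(abs(sum((alpha ** n) * z_bad ** (-n) for n in range(200))) > 1e10)  # True
```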
Examples
 The z-transform of the unit step sequence u[n] can be obtained from

X(z) = 1/(1 − αz^−1),  for |αz^−1| < 1

by setting α = 1  Î  U(z) = 1/(1 − z^−1) = z/(z − 1),  for |z^−1| < 1

The ROC is the annular region |z| > 1. Note that this sequence also does
not have a DTFT!
Examples
 Now consider the anti-causal sequence y[n] = −α^n·u[−n−1]

Y(z) = Σ_{n=−∞..−1} −α^n·z^−n = −Σ_{m=1..∞} α^−m·z^m
     = −α^−1·z·Σ_{m=0..∞} α^−m·z^m = −α^−1·z/(1 − α^−1·z)
     = 1/(1 − αz^−1) = z/(z − α),  for |α^−1·z| < 1 ⇒ |z| < |α|

ROC is the region |z| < |α|

• The z-transforms of the two sequences α^n·u[n] and −α^n·u[−n−1] are identical,
even though the two parent sequences are different
• The only way a unique sequence can be associated with a z-transform
is by specifying its ROC
• This is a left-sided sequence, which has an ROC that is inside of a
circular area of radius |α|!
Some Observations
 If the two sequences x[n] and y[n] denote the impulse responses of a
system (digital filter), then their z-transforms represent the transfer
functions of these systems
 Both transfer functions have a pole at z = α, which makes the transfer
function asymptotically approach infinity at this value. Therefore,
z = α is not included in either of the ROCs.
 The circle with radius |α| is called the pole circle. A system may
have many poles, and hence many pole circles.
 For right-sided sequences, the ROC extends outside of the outermost
pole circle, whereas for left-sided sequences, the ROC is inside
of the innermost pole circle.
 For two-sided sequences, the ROC is the intersection of the two
ROC areas corresponding to the left and right sides of the sequence.
Example
 Consider x[n] = 5^n·u[n] − 8^n·u[−n−1]  Î  X(z) = z/(z − 5) + z/(z − 8)

ª Corresponding ROCs are |z| > 5 and |z| < 8
ª Therefore the ROC for this signal is the
annular region 5 < |z| < 8

ª Note that if the signal were x[n] = 8^n·u[n] − 5^n·u[−n−1],
the ROC would be empty! That is, the z-transform
of this sequence does not exist!

ª Now recall that DTFT was z-transform evaluated on the unit circle, that is
for z=ejω. Therefore, DTFT of a sequence exists (that is the series
converges), if and only if the ROC includes the unit circle!
ª The DTFT for the above example clearly does not exist, since the ROC does not
include the unit circle!
ª We must add, though, that the existence of the DTFT is not a guarantee for the
existence of the z-transform: a sequence whose DTFT exists only in a generalized
(not absolutely summable) sense may have an empty ROC.
Commonly Used
z-Transform Pairs

x[n]                  X(z)                                                ROC
δ[n]                  1                                                   all z
u[n]                  1/(1 − z^−1)                                        |z| > 1
α^n·u[n]              1/(1 − αz^−1)                                       |z| > |α|
(r^n·cos ω₀n)·u[n]    (1 − r·cos ω₀·z^−1)/(1 − 2r·cos ω₀·z^−1 + r²z^−2)   |z| > r
(r^n·sin ω₀n)·u[n]    (r·sin ω₀·z^−1)/(1 − 2r·cos ω₀·z^−1 + r²z^−2)       |z| > r
Z-transform Properties
Let x1[n] ÅÆ X1(z), x2[n] ÅÆ X2(z), h[n] ÅÆ H(z) be z-transform pairs, with
individual ROC of Rx1, Rx2, and Rh, respectively. Also assume that any ROC is
of the form rin < |z| < rout. Then the following hold:

Property          Sequence            z-Transform         ROC
Linearity         a·x1[n] + b·x2[n]   a·X1(z) + b·X2(z)   at least Rx1 ∩ Rx2
Time shifting     x[n − n0]           z^−n0·X(z)          Rx (except possibly z=0, z=∞)
Scaling           a^n·x[n]            X(z/a)              |a|·ri < |z| < |a|·ro
Differentiation   n·x[n]              −z·dX(z)/dz         Rx
Time reversal     x[−n]               X(1/z)              1/ro < |z| < 1/ri
Convolution       x1[n] * h[n]        X1(z)·H(z)          at least Rx1 ∩ Rh
Conjugation       x*[n]               X*(z*)              Rx
Rational z-Transforms
 The z-transforms of LTI systems can be expressed as a ratio of two
polynomials in z-1, hence they are rational transforms.
ª Starting with the constant-coefficient linear difference equation representation of
an LTI system:

Σ_{i=0..N} a_i·y[n−i] = Σ_{j=0..M} b_j·x[n−j],  a0 = 1

y[n] + a1·y[n−1] + a2·y[n−2] + … + aN·y[n−N] = b0·x[n] + b1·x[n−1] + … + bM·x[n−M]

Taking the z-transform of both sides:

Y(z) + a1·z^−1·Y(z) + a2·z^−2·Y(z) + … + aN·z^−N·Y(z) = b0·X(z) + b1·z^−1·X(z) + … + bM·z^−M·X(z)

H(z) = Y(z)/X(z) = (b0 + b1·z^−1 + b2·z^−2 + … + bM·z^−M)/(1 + a1·z^−1 + a2·z^−2 + … + aN·z^−N)

MATLAB uses this representation for all digital filters / systems / transfer functions!!!
Rational Z-Transforms
 A rational z-transform can alternately be written in factored form as

H(z) = [b0·Π_{l=1..M}(1 − ζ_l·z^−1)] / [a0·Π_{l=1..N}(1 − p_l·z^−1)] = z^(N−M)·[p0·Π_{l=1..M}(z − ζ_l)] / [d0·Π_{l=1..N}(z − p_l)]
 At a root z=ζℓ of the numerator polynomial H(ζℓ)=0, and as a result,
these values of z are known as the zeros of H(z)
 At a root z=pℓ of the denominator polynomial H(pℓ)Æ ∞, and as a
result, these values of z are known as the poles of H(z)
ª Note that H(z) has M finite zeros and N finite poles
ª If N > M there are additional N-M zeros at z = 0 (the origin in the z-plane)
ª If N < M there are additional M-N poles at z = 0
 Why is this important?
ª As we will see later, a digital filter is designed by placing appropriate
number of zeros at the frequencies (z-values) to be suppressed, and poles at
the frequencies to be amplified!
Some sense of Physical
Interpretation of this Math Crap!
What does this look like???

G(z) = (1 − 2.4z^−1 + 2.88z^−2)/(1 − 0.8z^−1 + 0.64z^−2)

(The mesh plot of 20·log10|G(z)| over the z-plane shows peaks at the poles and dips at the zeros.)

clear
close all
N=256;
rez=linspace(-4, 4, N);
imz=linspace(-4, 4, N);
%create a uniform grid over the z-plane
for n=1:N
    z(n,:)=ones(1,N)*rez(n)+j*ones(1,N).*imz(1:N);
end
%Compute the H function on the z-plane grid
for n=1:N
    for m=1:N
        Hz(n,m)=H_fun(z(n,m));
    end
end
%Logarithmic mesh plot of the H function magnitude
mesh(rez, imz, 20*log10(abs(Hz)))

=====================================
function Hz=H_fun(z);
%Compute the transfer function
Hz=(1-2.4*z^(-1)+2.88*z^(-2))/(1-0.8*z^(-1)+0.64*z^(-2));
In Matlab
 Matlab has simple functions to determine and plot the poles and zeros
of a function in the z-plane:
tf2zpk() Discrete-time transfer function to zero-pole conversion.
[Z,P,K] = TF2ZPK(NUM,DEN) finds the zeros, poles, and gain:

          (z-Z(1))(z-Z(2))...(z-Z(n))
H(z) = K ----------------------------
          (z-P(1))(z-P(2))...(z-P(n))

from a single-input, single-output transfer function in polynomial form:

        NUM(z)
H(z) = --------
        DEN(z)

b=[1 -2.4 2.88]; a=[1 -0.8 0.64];
[z,p,k] = tf2zpk(b,a)

z = 1.2000 + 1.2000i
    1.2000 - 1.2000i
p = 0.4000 + 0.6928i
    0.4000 - 0.6928i
k = 1
In Matlab
zplane Z-plane zero-pole plot.
zplane(Z,P) plots the zeros Z and poles P (in column vectors) with the unit circle for reference.
Each zero is represented with a 'o' and each pole with a 'x' on the plot. Multiple zeros and poles are
indicated by the multiplicity number shown to the upper right of the zero or pole.

ZPLANE(B,A) where B and A are row vectors containing transfer function polynomial coefficients
plots the poles and zeros of B(z)/A(z).

b=[1 -2.4 2.88]; a=[1 -0.8 0.64];


zplane(b,a)
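The same zeros and poles can be found without any toolbox: for second-order numerator and denominator polynomials they are just quadratic roots. A Python sketch of ours (numpy.roots or Matlab's tf2zpk would normally be used instead):

```python
# Zeros and poles of G(z) = (1 - 2.4 z^-1 + 2.88 z^-2)/(1 - 0.8 z^-1 + 0.64 z^-2):
# in positive powers of z these are the roots of z^2 - 2.4z + 2.88 and z^2 - 0.8z + 0.64.
import cmath

def quadratic_roots(a, b, c):
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

zeros = quadratic_roots(1, -2.4, 2.88)
poles = quadratic_roots(1, -0.8, 0.64)
print([complex(round(r.real, 4), round(r.imag, 4)) for r in zeros])  # 1.2 ± 1.2j
print([complex(round(r.real, 4), round(r.imag, 4)) for r in poles])  # 0.4 ± 0.6928j
```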
Some sense of Physical
Interpretation of this Math Crap!
1 − 2.4 z −1 + 2.88 z −2
G( z) =
1 − 0.8 z −1 + 0.64 z − 2

[H w]=freqz([1 -2.4 2.88],[1 -0.8 0.64],256);

figure
plot(w/pi, abs(H))
grid
title('Transfer function')
xlabel('Frequency \omega / \pi')

This system has two zeros at z = 1.2 ± j1.2 and two poles at z = 0.4 ± j0.6928

(In the magnitude response plot, the peak is created by a pole and the dip is created by a zero.)
Poles & ROC
 The ROC of a rational z-transform cannot contain any poles and is
bounded by the poles
ª For a right sided sequence, the ROC is outside of the largest pole
ª For a left sided sequence, the ROC is inside of the smallest pole
ª For a two-sided sequence, some of the poles contribute to terms in the parent
sequence for n<0 and others to terms for n>0. Therefore, the ROC is between two
circular regions: outside of the largest pole coming from the n>0 part and
inside of the smallest pole coming from the n<0 part
Poles and ROC

For a sequence that has two poles, at z = α and z = β, the ROC
is one of three options: |z| < min(|α|,|β|),
min(|α|,|β|) < |z| < max(|α|,|β|), or |z| > max(|α|,|β|)
For Wednesday
 Exercise: Implement the Matlab code on interpretation of the z-transform
 Inverse z-transform and the Residue theorem. Read Chapter 3,
through page 182!
Lecture 18

The Z-Transform
Part II



Today in DSP
 Z-transform – review from Monday
 The poles and zeros of a system, and their importance
 Stability issues for discrete systems
ª Relationship of poles, zeros and the ROC
 Inverse z-transform
ª Cauchy residue theorem
ª Partial fraction expansion
• Simple poles
• Multiple poles
ª In Matlab
The z-Transform
 A generalization of the DTFT leads to the z-transform, which may exist for many
signals for which the DTFT does not.
ª The use of the z-transform allows simple algebraic expressions to be used which greatly
simplifies frequency domain analysis.
ª Digital filters are designed, expressed, and applied in terms of the z-transform
 For a given sequence x[n], its z-transform X(z) is defined as

X(z) = Z{x[n]} = Σ_{n=−∞..∞} x[n]·z^−n        x[n] ⇔(Z) X(z)

where z lies in the complex plane, that is, z = a + jb = re^jω

 The z-transform reduces to the DTFT for the special case of r = 1, that is, |z| = 1,
provided that the latter exists:  X(ω) = X(z)|_{z = e^jω}
ª The contour |z| = 1 is a circle in the z-plane of unit radius Î the unit circle
ª Hence, the DTFT is really the z-transform evaluated on the unit circle
ª For a given sequence, the set R of values of z for which its z-transform converges
is called the region of convergence (ROC):  R⁻ < |z| < R⁺  where 0 ≤ R⁻ < R⁺ ≤ ∞
Some Observations
 If the two sequences x[n] and y[n] denote the impulse responses of a
system (digital filter), then their z-transforms represent the transfer
functions of these systems
 Both transfer functions have a pole at z = α, which makes the transfer
function asymptotically approach infinity at this value. Therefore,
z = α is not included in either of the ROCs.
ª The circle with radius |α| is called the pole circle. A system may have many
poles, and hence many pole circles.
ª For right-sided sequences, the ROC extends outside of the outermost pole circle,
whereas for left-sided sequences, the ROC is inside of the innermost pole circle.
ª For two-sided sequences, the ROC will be the intersection of the two ROC areas
corresponding to the left and right sides of the sequence.
ª Since DTFT is the z-transform evaluated on the unit circle, that is for z=ejω,
DTFT of a sequence exists if and only if the ROC includes the unit circle!
Rational z-Transforms
 The z-transforms of LTI systems can be expressed as a ratio of two
polynomials in z-1, hence they are rational transforms.

Σ_{i=0..N} a_i·y[n−i] = Σ_{j=0..M} b_j·x[n−j],  a0 = 1   ⇔(Z)   H(z) = Y(z)/X(z) = (b0 + b1·z^−1 + b2·z^−2 + … + bM·z^−M)/(1 + a1·z^−1 + a2·z^−2 + … + aN·z^−N)

H(z) = [b0·Π_{l=1..M}(1 − ζ_l·z^−1)] / [a0·Π_{l=1..N}(1 − p_l·z^−1)] = z^(N−M)·[p0·Π_{l=1..M}(z − ζ_l)] / [d0·Π_{l=1..N}(z − p_l)]
ª At a root z=ζℓ of the numerator polynomial H(ζℓ)=0, and as a result, these
values of z are known as the zeros of H(z)
ª At a root z=pℓ of the denominator polynomial H(pℓ)Æ ∞, and as a result, these
values of z are known as the poles of H(z)
ª A digital filter is designed by placing appropriate number of zeros at the
frequencies (z-values) to be suppressed, and poles at the frequencies to be
amplified!
Poles & ROC
 The ROC of a rational z-transform
cannot contain any poles and is
bounded by the poles
ª For a right sided sequence, the ROC is
outside of the largest pole
ª For a left sided sequence, the ROC is
inside of the smallest pole
ª For a two-sided sequence, some of the
poles contribute to terms in the parent
sequence for n<0 and others to terms
for n>0. Therefore, the ROC is
between two circular regions: outside
of the largest pole coming from the
n>0 part and inside of the
smallest pole coming from the n<0
part

For a sequence that has two poles, at z = α and z = β, the ROC
is one of three options: |z| < min(|α|,|β|),
min(|α|,|β|) < |z| < max(|α|,|β|), or |z| > max(|α|,|β|)
Stability & ROC
In terms of Zeros & Poles
 Recall that for a system to be causal, its impulse response must satisfy h[n] = 0, n < 0;
that is, for a causal system the impulse response is right-sided. Based on this, and
our previous observations, we can make the following important conclusions:
ª The ROC of a causal system extends outside of the outermost pole circle
ª The ROC of an anticausal system (whose h[n] is purely left-sided) lies inside of the
innermost pole circle
ª The ROC of a noncausal system (whose h[n] is two-sided) is bounded by two different
pole circles

Damn Important!
ª For a system to be stable, its h[n] must be absolutely summable Î An LTI system is
stable if and only if the ROC of its transfer function H(z) includes the unit circle!
ª A causal system's ROC lies outside of a pole circle. If that system is also stable, its
ROC must include the unit circle Î Then a causal system is stable if and only if all of its
poles are inside the unit circle! Similarly, an anticausal system is stable if and only if
its poles lie outside the unit circle.
• An FIR filter is always stable, why?
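The causal-stability test reduces to checking pole magnitudes. A minimal sketch of ours for second-order denominators, applied to the running example a = [1, −0.8, 0.64], whose poles 0.4 ± j0.6928 have magnitude 0.8:

```python
# Stability check for a causal filter: all poles strictly inside the unit circle.
import cmath

def causal_quadratic_is_stable(a0, a1, a2):
    """True if both roots of a0*z^2 + a1*z + a2 lie inside the unit circle."""
    d = cmath.sqrt(a1 * a1 - 4 * a0 * a2)
    roots = [(-a1 + d) / (2 * a0), (-a1 - d) / (2 * a0)]
    return all(abs(r) < 1 for r in roots)

print(causal_quadratic_is_stable(1, -0.8, 0.64))   # True  (poles at radius 0.8)
print(causal_quadratic_is_stable(1, -2.5, 1.0))    # False (poles at z = 2 and z = 0.5)
```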
Inverse z-Transform
 The inverse z-transform can be obtained as a generalization of the
inverse DTFT:

x[n] = (1/2πj)·∮_C X(z)·z^(n−1) dz

ª Since the variable z is defined on the complex (polar) plane, the integral is not a
Cartesian integral, but rather a contour integral, where the contour C is any
closed circular path that falls inside the ROC of X(z).
ª A complicated procedure, yet there are several different procedures to compute it:
1. Perform a long division for X(z), and invert each (simple) term individually – tedious,
often does not result in a closed form (finds x[n] one by one for each n)
2. Direct evaluation of the contour integral using the Cauchy Residue theorem – tedious
3. Partial fraction expansion – most commonly used procedure.
Cauchy Residue Theorem
 Cauchy’s residue theorem states that the contour integral can be
computed as a sum of the residues that lie inside the contour
x[n] = (1/2πj)·∮_C X(z)·z^(n−1) dz = Σ [ residues of X(z)·z^(n−1) at the poles inside the contour C ]

ª whose special case includes the following theorem:

∮_C z^−l dz = 2πj if l = 1, and 0 otherwise

ª The details of this integration is reserved for a course in complex variables


Partial Fraction Expansion
 Re-express the rational z-transform as a partial fraction expansion of
simpler terms, whose inverse z-transforms are known.
ª Slightly different procedure depending on whether the system has simple poles or
multiple poles
ª A rational H(z) can be expressed as

H(z) = Y(z)/X(z) = P(z)/D(z) = (b0 + b1·z^−1 + b2·z^−2 + … + bM·z^−M)/(1 + a1·z^−1 + a2·z^−2 + … + aN·z^−N) = [Σ_{i=0..M} b_i·z^−i] / [Σ_{i=0..N} a_i·z^−i]

ª If M ≥ N, then H(z) can be re-expressed through long division as

H(z) = Σ_{l=0..M−N} η_l·z^−l + P1(z)/D(z)

Example:
H(z) = (2 + 0.8z^−1 + 0.5z^−2 + 0.3z^−3)/(1 + 0.8z^−1 + 0.2z^−2)  Î  H(z) = −3.5 + 1.5z^−1 + (5.5 + 2.1z^−1)/(1 + 0.8z^−1 + 0.2z^−2)

where the degree of P1(z) is less than N. The rational fraction P1(z)/D(z) is then
called a proper fraction or proper polynomial.
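The long-division example can be verified by multiplying back: quotient times denominator plus remainder must reproduce the original numerator. A short Python check (the helper name is ours):

```python
# Verify: (-3.5 + 1.5 z^-1)(1 + 0.8 z^-1 + 0.2 z^-2) + (5.5 + 2.1 z^-1)
# should reproduce the numerator 2 + 0.8 z^-1 + 0.5 z^-2 + 0.3 z^-3.

def poly_mul(p, q):
    """Multiply polynomials in z^-1 given as coefficient lists."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

quotient = [-3.5, 1.5]
denom = [1, 0.8, 0.2]
remainder = [5.5, 2.1, 0, 0]

prod = poly_mul(quotient, denom)
total = [round(a + b, 10) for a, b in zip(prod, remainder)]
print(total)   # [2.0, 0.8, 0.5, 0.3]
```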
Partial Fraction Expansion
Simple Poles
 If the system only has simple poles, then it can be written in the
following form:

H(z) = (b0 + b1·z^−1 + b2·z^−2 + … + bM·z^−M)/(1 + a1·z^−1 + a2·z^−2 + … + aN·z^−N)
     = (b0 + b1·z^−1 + b2·z^−2 + … + bM·z^−M)/[(z − p1)(z − p2)…(z − pN)]
     = A1/(z − p1) + A2/(z − p2) + … + AN/(z − pN)

 Recall that α^n·u[n] ↔ z/(z−α), |z| > |α|; and α^n·u[−n−1] ↔ −z/(z−α), |z| < |α|

ª If we can put a z in the numerator, then we have a series of terms of the form z/(z − p_i),
whose inverse z-transforms are geometric sequences
Partial Fraction Expansion
Simple Poles
 So, we simply compute the partial fraction expansion of H(z)/z:

H(z)/z = (b0 + b1·z^−1 + b2·z^−2 + … + bM·z^−M)/[z·(z − p1)(z − p2)…(z − pN)]
       = A0/z + A1/(z − p1) + A2/(z − p2) + … + AN/(z − pN)

ª the constants A_i, which are the residues at the poles of H(z)/z, can be computed
as follows:

A_i = (z − p_i)·H(z)/z |_{z = p_i},   i = 0, 1, 2, …, N   (with p0 = 0)
Example
See and solve the examples on pages 173-175.

 Find the inverse z-transform of H(z) given the ROC  i) 0.2 < |z| < 0.6  and  ii) |z| > 0.6

H(z) = (z² + 2z + 1)/(z² + 0.4z − 0.12) = (z² + 2z + 1)/[(z − 0.2)(z + 0.6)]

H(z)/z = (z² + 2z + 1)/[z(z − 0.2)(z + 0.6)] = A0/z + A1/(z − 0.2) + A2/(z + 0.6)

A0 = z·H(z)/z |_{z=0} = 1/(−0.12) = −8.333
A1 = (z − 0.2)·H(z)/z |_{z=0.2} = 1.44/0.16 = 9
A2 = (z + 0.6)·H(z)/z |_{z=−0.6} = 0.16/0.48 = 0.333

i) For 0.2 < |z| < 0.6, the pole at z = 0.2 contributes a causal term and the pole at
z = −0.6 an anticausal term:

x1[n] = −8.333·δ[n] + 9·(0.2)^n·u[n] − 0.333·(−0.6)^n·u[−n−1]

ii) For |z| > 0.6, both poles contribute causal terms:

x2[n] = −8.333·δ[n] + [9·(0.2)^n + 0.333·(−0.6)^n]·u[n]
Partial Fraction Expansion
Multiple Poles
 If the z-domain function contains an m-multiple pole, that is, a term of
the following form is included:

H(z)/z = P(z)/(z − p)^m

this term is expanded as follows:

H(z)/z = A1/(z − p) + A2/(z − p)² + … + A_{m−1}/(z − p)^(m−1) + A_m/(z − p)^m

where each coefficient can be computed by taking consecutive
derivatives and evaluating the function at the pole:

A_{m−i} = (1/i!)·dⁱ/dzⁱ [ (z − p)^m·H(z)/z ] |_{z=p} = (1/i!)·dⁱP(z)/dzⁱ |_{z=p},   i = 0, 1, 2, …, m−1
Exercises
 Solve the following as an exercise – the solutions are given (next
slide) so that you can check your answers:

1. X(z) = z/(2z² − 3z + 1),  (i) ROC: |z| < 1/2, as well as (ii) ROC: |z| > 1

2. X(z) = z/[(z − 1)(z − 2)²],  ROC: |z| > 2

3. X(z) = (2z³ − 5z² + z + 3)/[(z − 1)(z − 2)],  ROC: |z| < 1

4. X(z) = (2 + z^−2 + 3z^−4)/(z² + 4z + 3),  ROC: |z| > 0
Answers
1. (i) x[n] = −u[−n−1] + (0.5)^n·u[−n−1];  (ii) x[n] = u[n] − (0.5)^n·u[n]
2. x[n] = (1 − 2^n + n·2^(n−1))·u[n]
3. x[n] = 2δ[n+1] + 1.5δ[n] + u[−n−1] − 0.5·(2)^n·u[−n−1]
4. x[n] = [(−1)^(n−1) − (−3)^(n−1)]·u[n−1]
   + 0.5·[(−1)^(n−3) − (−3)^(n−3)]·u[n−3]
   + 1.5·[(−1)^(n−5) − (−3)^(n−5)]·u[n−5]
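Answer 2 can be cross-checked by running the causal difference equation implied by X(z) = z/((z−1)(z−2)²) = z⁻²/(1 − 5z⁻¹ + 8z⁻² − 4z⁻³) with a unit impulse input; the helper below is ours:

```python
# Generate the causal sequence for X(z) = z^-2 / (1 - 5 z^-1 + 8 z^-2 - 4 z^-3)
# by recursion, and compare with the closed form (1 - 2^n + n*2^(n-1)) u[n].

def impulse_response(b, a, n_points):
    """Causal impulse response of H(z) = B(z^-1)/A(z^-1), with a[0] = 1."""
    y = []
    for n in range(n_points):
        acc = b[n] if n < len(b) else 0.0
        for k in range(1, len(a)):
            if n - k >= 0:
                acc -= a[k] * y[n - k]
        y.append(acc)
    return y

# (z - 1)(z - 2)^2 = z^3 - 5z^2 + 8z - 4, so b = [0, 0, 1], a = [1, -5, 8, -4]
x = impulse_response([0, 0, 1], [1, -5, 8, -4], 8)
formula = [1 - 2 ** n + n * 2 ** (n - 1) for n in range(8)]
print(x == formula)   # True
```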
In Matlab

[r,p,k] = residuez(num,den) develops the partial-fraction expansion of a
rational z-transform with numerator and denominator coefficients given by
vectors num and den. Vector r contains the residues, vector p contains the poles,
and vector k contains the direct-term constants (the coefficients of the z-terms if
the ratio is made proper).
[num,den] = residuez(r,p,k) converts a z-transform expressed in partial-
fraction expansion form to its rational form.
See the Matlab help files on residuez and residue for more details!
 With the residuez function, you do not need to divide by "z"! The coefficient
normally associated with 1/z appears in the "k" variable. You can also use the
continuous-time equivalent of this function, residue, for which you do need to
divide the original function by "z" before obtaining the coefficients to get the
correct results!
Lecture 19
Concept of
Filtering
&
Linear Phase



Today in DSP
 Frequency response and transfer function, revisited
 Interpretation of zero-pole plot
 Types of transfer functions
ª FIR/IIR
ª Magnitude response Æ LPF, HPF, BPF, BSF
 Ideal filters vs. realizable filters
 Zero phase and linear phase filters
 Phase and group delay
 Linear Phase Concept
 Linear phase filters
Frequency Response
 Recall that the output of an LTI system can be obtained by the
convolution sum:

y[n] = Σ_{k=−∞..∞} h[k]·x[n−k]

ª Or in the frequency domain:

Y(ω) = (Σ_{k=−∞..∞} h[k]·e^−jωk)·X(ω) = H(ω)·X(ω)   Î   H(ω) = Y(ω)/X(ω)

where H(ω) is called the frequency response of the system, which relates the input
and the output of an LTI system in the frequency domain. The frequency response
can also be represented in terms of the CCLDE coefficients:

H(ω) = [Σ_{k=0..M} b_k·e^−jωk] / [Σ_{k=0..N} a_k·e^−jωk]
Transfer Function
 A generalization of the frequency response is the transfer function,
computed in the z-domain:

Y(z) = (Σ_{n=−∞..∞} h[n]·z^−n)·X(z) = H(z)·X(z)   Î   H(z) = Y(z)/X(z)

ª The function H(z), which is the z-transform of the impulse response h[n] of the
LTI system, is called the transfer function or the system function
ª The inverse z-transform of the transfer function H(z) yields the impulse response
h[n]
ª Using the CCLDE coefficients:

H(z) = [Σ_{k=0..M} b_k·z^−k] / [Σ_{k=0..N} a_k·z^−k]                          (CCLDE coefficients)
     = (b0/a0)·[Π_{k=1..M}(1 − ξ_k·z^−1)] / [Π_{k=1..N}(1 − λ_k·z^−1)]        (zero & pole factors)
     = z^(N−M)·(b0/a0)·[Π_{k=1..M}(z − ξ_k)] / [Π_{k=1..N}(z − λ_k)]          (zeros & poles)
Frequency Response ÅÆ
Transfer Function
 If the ROC of the transfer function H(z) includes the unit circle, then
the frequency response H(ω) of the LTI digital filter can be obtained
simply as follows:

H(e^jω) = H(ω) = H(z)|_{z = e^jω}

 For a real-coefficient transfer function H(z), it can be shown that

|H(e^jω)|² = H(e^jω)·H*(e^jω) = H(e^jω)·H(e^−jω) = H(z)·H(z^−1)|_{z = e^jω}
Frequency Response ÅÆ
Transfer Function
 Assuming that the DTFT exists, starting with the factored z-transform,
we can write the frequency response of a typical LTI system as

H(z) = (b0/a0)·z^(N−M)·[Π_{k=1..M}(z − ξ_k)] / [Π_{k=1..N}(z − λ_k)]   Î   H(e^jω) = (b0/a0)·e^jω(N−M)·[Π_{k=1..M}(e^jω − ξ_k)] / [Π_{k=1..N}(e^jω − λ_k)]

from which we can obtain the magnitude and phase responses:

|H(e^jω)| = |b0/a0|·[Π_{k=1..M} |e^jω − ξ_k|] / [Π_{k=1..N} |e^jω − λ_k|]

arg H(e^jω) = arg(b0/a0) + ω(N−M) + Σ_{k=1..M} arg(e^jω − ξ_k) − Σ_{k=1..N} arg(e^jω − λ_k)
Interpretation of the
Frequency Response
 Take a close look at the magnitude and phase responses in terms of zeros and poles:

|H(e^{jω})| = |b0/a0| · Π_{k=1}^{M} |e^{jω} − ξ_k| / Π_{k=1}^{N} |e^{jω} − λ_k|

ª The magnitude response |H(ω)| at a specific value of ω is given by the product of
the magnitudes of all zero vectors divided by the product of the magnitudes of all
pole vectors

arg H(e^{jω}) = arg(b0/a0) + ω(N−M) + Σ_{k=1}^{M} arg(e^{jω} − ξ_k) − Σ_{k=1}^{N} arg(e^{jω} − λ_k)

ª The phase response at a specific value of ω is obtained by adding the phase of the
term b0/a0 and the linear-phase term ω(N−M) to the sum of the angles of the zero
vectors minus the sum of the angles of the pole vectors
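The zero-vector / pole-vector reading of |H| and arg H can be confirmed directly for a single zero-pole pair. A Python sketch with hypothetical locations ξ = −1 and λ = 0.5 (so N = M = 1 and the z^{(N−M)} term drops out):

```python
import cmath

xi, lam = -1.0, 0.5          # hypothetical zero and pole, with b0/a0 = 1
w = 1.0
z = cmath.exp(1j * w)        # point on the unit circle at frequency w

direct = (z - xi) / (z - lam)

mag = abs(z - xi) / abs(z - lam)                  # |zero vector| / |pole vector|
ang = cmath.phase(z - xi) - cmath.phase(z - lam)  # zero angle minus pole angle

assert abs(abs(direct) - mag) < 1e-12
assert abs(cmath.phase(direct) - ang) < 1e-12
```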
So What Does
This All Mean?
 An approximate plot of the magnitude and phase responses of the
transfer function of an LTI digital filter can be developed by
examining the pole and zero locations
 Now, the frequency response has its smallest magnitude at frequencies near the
zeros ξ_k, and its largest magnitude at frequencies near the poles λ_k.
ª If a zero lies on the unit circle, the response is exactly zero at the corresponding
frequency; if a pole lies on the unit circle, the response there is infinitely large
 Therefore:
ª To highly attenuate signal components in a specified frequency range, we
need to place zeros very close to or on the unit circle in this range
ª Likewise, to highly emphasize signal components in a specified frequency
range, we need to place poles very close to or on the unit circle in this range
An Example
 Consider the M-point moving-average FIR filter with an impulse response
h[n] = { 1/M,  0 ≤ n ≤ M−1
       { 0,    otherwise

H(z) = (1/M) Σ_{n=0}^{M−1} z^{−n} = (1/M) · (1 − z^{−M}) / (1 − z^{−1}) = (z^M − 1) / ( M z^{M−1} (z − 1) )
 Observe the following:
ª The transfer function has M zeros on the unit circle at z = e^{j2πk/M}, 0 ≤ k ≤ M−1
ª There are M−1 poles at z = 0 and a single pole at z = 1
ª The pole at z = 1 exactly cancels the zero at z = 1
ª The ROC is the entire z-plane except z = 0

[Figure: pole-zero plot showing the M zeros on the unit circle and the poles at the origin]
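The claimed zero locations of the moving-average filter are easy to confirm: after the pole-zero cancellation at z = 1, H(z) should vanish at the remaining M−1 roots of unity while keeping unity gain at DC. A Python sketch (M = 8 is an arbitrary choice):

```python
import cmath

M = 8  # arbitrary example length for the M-point moving average

def H(z):
    # H(z) = (1/M) * sum_{n=0}^{M-1} z^{-n}
    return sum(z ** (-n) for n in range(M)) / M

assert abs(H(1.0) - 1.0) < 1e-12     # DC gain 1 (the z = 1 zero is cancelled)
for k in range(1, M):                # the other M-1 roots of unity are zeros
    zk = cmath.exp(2j * cmath.pi * k / M)
    assert abs(H(zk)) < 1e-9
```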
Moving Average Filter
%h1=[1 1 1 1 1];
%h2=[1 1 1 1 1 1 1 1 1];
b1=[1 1 1 1 1]; a1=1;
b2=[1 1 1 1 1 1 1 1 1]; a2=1;

[H1 w]=freqz(b1, 1, 512);
[H2 w]=freqz(b2, 1, 512);

[z1,p1,k1] = tf2zpk(b1,a1);
[z2,p2,k2] = tf2zpk(b2,a2);

subplot(221)
plot(w/pi, abs(H1)); grid
title('Transfer Function of 5 point MAF')
subplot(222)
zplane(b1,a1);
title('Pole-zero plot of 5 point MAF')
subplot(223)
plot(w/pi, abs(H2)); grid
title('Transfer Function of 9 point MAF')
subplot(224)
zplane(b2,a2);
title('Pole-zero plot of 9 point MAF')

Observe the effects of the zeros and the poles!
Types of
Transfer Functions
 The time-domain classification of an LTI digital transfer function
sequence is based on the length of its impulse response:
ª Finite impulse response (FIR) transfer function
ª Infinite impulse response (IIR) transfer function
 Many other classifications are also used
ª For digital transfer functions with frequency-selective frequency responses, one
classification is based on the shape of the magnitude function |H(ω)| or the form
of the phase function θ(ω)
 Based on the magnitude spectrum, one of four types of ideal filters is usually
defined:
ª Low pass
ª High pass
ª Band pass
ª Band stop
Ideal Filter
 An ideal filter is a digital filter designed to pass signal components of
certain frequencies without distortion, which therefore has a
frequency response equal to one at these frequencies, and has a
frequency response equal to zero at all other frequencies
 The range of frequencies where the frequency response takes the
value of one is called the passband
 The range of frequencies where the frequency response takes the
value of zero is called the stopband
 The transition frequency from a passband to stopband region is called
the cutoff frequency
 Note that an ideal filter cannot be realized. Why?
Ideal Filters
 The frequency responses of four common ideal filters in the [−π, π] range are:

[Figure: ideal lowpass, highpass, bandpass, and bandstop magnitude responses,
with the passband and stopband regions marked]
Ideal Filters
 Recall that the DTFT of a rectangular pulse is a sinc-like function:

x[n] = rect_M[n] = { 1, −M ≤ n ≤ M      ⇔     Σ_{n=−M}^{M} e^{−jωn} = sin((M + ½)ω) / sin(ω/2),   ω ≠ 0
                   { 0, otherwise

 From the duality theorem, the inverse DTFT of a rectangular pulse is also a sinc
function. Since the ideal (lowpass) filter has a rectangular frequency response, its
impulse response must be a sinc:

H_LP(ω) = { 1, |ω| ≤ ωc
          { 0, ωc < |ω| ≤ π

h_LP[n] = (1/2π) ∫_{−ωc}^{ωc} 1 · e^{jωn} dω = sin(ωc n) / (πn),   −∞ < n < ∞
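The closed form h_LP[n] = sin(ωc n)/(πn) can be cross-checked against a numerical evaluation of the inverse-DTFT integral. A pure-Python sketch (ωc = π/3 is an arbitrary cutoff; a midpoint Riemann sum stands in for the integral):

```python
import cmath
import math

wc = math.pi / 3  # arbitrary example cutoff

def h_ideal(n, steps=20000):
    # midpoint-rule approximation of (1/2pi) * integral_{-wc}^{wc} e^{jwn} dw
    dw = 2 * wc / steps
    s = sum(cmath.exp(1j * (-wc + (k + 0.5) * dw) * n) for k in range(steps))
    return (s * dw / (2 * math.pi)).real

for n in (1, 2, 5):
    assert abs(h_ideal(n) - math.sin(wc * n) / (math.pi * n)) < 1e-6
assert abs(h_ideal(0) - wc / math.pi) < 1e-6   # limiting value at n = 0
```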
Ideal Filters
 We note the following about the impulse response of an ideal filter:
ª h_LP[n] is not absolutely summable
ª The corresponding transfer function is therefore not BIBO stable
ª h_LP[n] is not causal, and is of doubly infinite length
ª The remaining three ideal filters are also characterized by doubly infinite, noncausal
impulse responses and are also not absolutely summable
 Thus, ideal filters with ideal brick-wall frequency responses cannot be realized
with a finite-dimensional LTI filter
Realizable Filters
 In order to develop a stable and
realizable filter transfer function
ª The ideal frequency response
specifications are relaxed by
including a transition band
between the passband and the
stopband
ª This permits the magnitude
response to decay slowly from its
maximum value in the passband
to the zero value in the stopband
ª Moreover, the magnitude
response is allowed to vary by a
small amount both in the
passband and the stopband
ª Typical magnitude response
specifications of a lowpass filter
therefore look like the figure on the right
Types of
Transfer Functions
 So far we have seen transfer functions characterized primarily
according to their
ª Impulse response length (FIR / IIR)
ª Magnitude spectrum characteristics (LPF, HPF, BPF, BSF)
 A third classification of a transfer function is with respect to its phase
characteristics
ª Zero phase
ª Linear phase
ª Generalized linear phase
ª Non-linear phase
 Recall that the phase spectrum tells us _________________________
how much each frequency is delayed

 In many applications, it is necessary that the digital filter designed


does not distort the phase of the input signal components with
frequencies in the passband
Phase and Group Delay
 A frequency selective system (filter) with frequency response
H(ω)=|H(ω)|∟H(ω) = |H(ω)|ejθ(ω) changes the amplitude of all
frequencies in the signal by a factor of |H(ω)|, and adds a phase of
θ(ω) to all frequencies.
ª Note that both the amplitude change and the phase delay are functions of ω
ª The phase θ(ω) is in radians, but can be expressed in terms of time, which is
called the phase delay. The phase delay at a particular frequency ω0 is given as

τ_p(ω0) = −θ(ω0)/ω0       (compare this to the phase delay in continuous time: t_θ = θ/(2πf) = θ/ω)

ª If an input signal consists of many frequency components (which most practical
signals do), then we can also define the group delay, the delay by which the
envelope of the signal shifts. This can also be considered as the average phase delay
– in seconds – of the filter as a function of frequency, given by

τ_g(ωc) = −dθ(ω)/dω |_{ω=ωc}
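For a linear-phase filter the two delay definitions coincide. A Python sketch for the 2-point moving average, where θ(ω) = −ω/2, so both delays should equal M/2 = 0.5 samples; the group delay is approximated by a central difference:

```python
import cmath

h = [0.5, 0.5]  # 2-point moving average: theta(w) = -w/2

def phase(w):
    return cmath.phase(sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h)))

w0, dw = 0.8, 1e-6
tau_p = -phase(w0) / w0                                # phase delay
tau_g = -(phase(w0 + dw) - phase(w0 - dw)) / (2 * dw)  # group delay (numeric)

assert abs(tau_p - 0.5) < 1e-9
assert abs(tau_g - 0.5) < 1e-5
```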
Phase and Group Delay
 Note that both are slopes of the phase function, just defined slightly
differently
Zero Phase Filters

 One way to avoid any phase distortion is to make sure the frequency
response of the filter does not delay any of the spectral components.
Such a transfer function is said to have a zero-phase characteristic.
 A zero-phase transfer function has no phase component; that is, its
spectrum is purely real (no imaginary component) and non-negative
 However, it is NOT possible to design a causal digital filter with a
zero phase. Why?
ª Hint: What do we need in the impulse response to ensure that the frequency
response is real and non-negative?
Zero-Phase Filters
 Now, for non-real-time processing of real-valued input signals of finite length, zero-
phase filtering can be implemented by relaxing the causality requirement
 A zero-phase filtering scheme can be obtained by the following procedure:
ª Process the input data (finite length) with a causal real-coefficient filter H(z).
ª Time reverse the output of this filter and process by the same filter.
ª Time reverse once again the output of the second filter
x[n] → H(z) → v[n],   u[n] → H(z) → w[n]

u[n] = v[−n],   y[n] = w[−n]
V(ω) = H(ω) X(ω),   W(ω) = H(ω) U(ω)
U(ω) = V*(ω),   Y(ω) = W*(ω)
Y(ω) = W*(ω) = H*(ω) U*(ω) = H*(ω) V(ω) = H*(ω) H(ω) X(ω) = |H(ω)|² X(ω)
In Matlab
 The function filtfilt() implements the zero-phase filtering scheme:

filtfilt()
Zero-phase digital filtering

y = filtfilt(b,a,x) performs zero-phase digital filtering by processing the input


data in both the forward and reverse directions. After filtering in the forward
direction, it reverses the filtered sequence and runs it back through the filter.

The resulting sequence has precisely zero-phase distortion and double the
filter order. filtfilt minimizes start-up and ending transients by matching initial
conditions, and works for both real and complex inputs.
Linear Phase
 Note that a zero-phase filter cannot be implemented for real-time
applications. Why? Real time means causal operation: there is no time to reverse and refilter.
 For a causal transfer function with a nonzero phase response, the
phase distortion can be avoided by ensuring that the transfer function
has (preferably) a unity magnitude and a linear-phase characteristic in
the frequency band of interest:

H(ω) = e^{−jαω}    |H(ω)| = 1    ∠H(ω) = θ(ω) = −αω

ª Note that this phase characteristic is linear for all ω in [0, 2π].
ª Recall that the phase delay at any given frequency ω0 is τ_p(ω0) = −θ(ω0)/ω0.
ª If we have linear phase, that is, θ(ω) = −αω, then the total delay at
any frequency ω0 is τ0 = −θ(ω0)/ω0 = αω0/ω0 = α.
ª Note that this is identical to the group delay τ_g(ω) = −dθ(ω)/dω
evaluated at ω0.
c
What is the Big Deal…?
 The deal is huge!
 If the phase spectrum is linear, then the phase delay is independent of
the frequency, and it is the same constant α for all frequencies.
 In other words, all frequencies are delayed by α seconds, or
equivalently, the entire signal is delayed by α seconds.
ª Since the entire signal is delayed by a constant amount, there is no distortion!
 If the filter does not have linear phase, then different frequency
components are delayed by different amounts, causing significant
distortion.
Take Home Message
 If it is desired to pass input signal components in a certain frequency
range undistorted in both magnitude and phase, then the transfer
function should exhibit a unity magnitude response and a linear-phase
response in the band of interest
[Figure: |H_LP(ω)| and ∠H_LP(ω) of an ideal linear-phase lowpass filter]
Generalized Linear Phase
 Now consider the following system, where G(ω) is real (i.e., no
phase)
H (ω ) = e − jαω G (ω )

 From our previous discussion, the term e-jαω simply introduces a phase
delay, that is, normally independent of frequency. Now,
ª If G(ω) is positive, the phase term is θ(ω) = −αω, hence the system has linear phase.
ª If G(ω) < 0 for some ω, then a 180º (π rad) term is added to the phase
spectrum at those frequencies. The phase response is then θ(ω) = −αω + π, and the
phase delay is no longer independent of frequency.
ª We can, however, write H(ω) = e^{−jαω}G(ω) = −[e^{−jαω}|G(ω)|] at those frequencies, and
the function inside the brackets has linear phase → no distortion. The negative sign simply
flips the signal.
ª Therefore, such systems are said to have generalized linear phase
Approximately Linear Phase
 Consider the following transfer functions

 Note that above a certain frequency, say ωc, the magnitude is very close to zero; that
is, most of the signal above this frequency is suppressed. So if the phase response
deviates from linearity above these frequencies, the signal is not distorted much,
since those frequencies are blocked anyway.
Linear Phase Filters
 It is impossible for a causal IIR filter to have exact linear phase; however,
designing FIR filters with precise linear phase is very easy:
 Consider a causal FIR filter of length M+1 (order M)

H(z) = Σ_{n=0}^{M} h[n] z^{−n} = h[0] + h[1]z^{−1} + h[2]z^{−2} + … + h[M]z^{−M}

ª This transfer function has linear phase, if its impulse response h[n] is either
symmetric
h[n] = h[ M − n], 0 ≤ n ≤ M

or anti-symmetric

h[n] = − h[ M − n], 0 ≤ n ≤ M
Linear Phase Filters
 There are four possible scenarios: filter length even or odd, and
impulse response is either symmetric or antisymmetric

FIR I:   even length, symmetric         FIR II:  odd length, symmetric
FIR III: even length, antisymmetric     FIR IV:  odd length, antisymmetric
                                        (note for FIR IV that h[M/2] = 0)
FIR I and FIR II Types
 For symmetric coefficients, we can show that the frequency responses are of the
following form:
 FIR I (M is odd; the sequence is symmetric and of even length):

H(ω) = 2 e^{−jωM/2} Σ_{i=0}^{(M−1)/2} h[i] cos( (M/2 − i) ω )

ª Note that this is of the form H(ω) = e^{−jαω} G(ω), where α = M/2 and G(ω) is the real
quantity (the summation term) → the output is delayed by M/2 samples!

 FIR II (M is even; the sequence is symmetric and of odd length):

H(ω) = e^{−jωM/2} [ h[M/2] + 2 Σ_{i=0}^{M/2−1} h[i] cos( (M/2 − i) ω ) ]

ª Again, this system has linear phase (the quantity inside the brackets is a real
quantity), and the phase delay is M/2 samples.
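The FIR I expression can be checked against the direct DTFT sum. A Python sketch with a hypothetical symmetric, even-length impulse response (M = 7, odd):

```python
import cmath
import math

h = [1, 2, 3, 4, 4, 3, 2, 1]  # hypothetical symmetric, even-length response
M = len(h) - 1                # M = 7 (odd) -> "FIR I"
w = 0.6

direct = sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h))
formula = 2 * cmath.exp(-1j * w * M / 2) * sum(
    h[i] * math.cos((M / 2 - i) * w) for i in range((M + 1) // 2))

assert abs(direct - formula) < 1e-12
# G(w) > 0 at this w, so the phase is exactly -Mw/2: a delay of M/2 = 3.5 samples
assert abs(cmath.phase(direct) + (M / 2) * w) < 1e-9
```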
FIR III and FIR IV Types
 For antisymmetric sequences we have h[n] = −h[M−n], which gives sine terms in the
summation expression:

 FIR III (M is odd; the sequence is antisymmetric and of even length):

H(ω) = 2 e^{j(π/2 − ωM/2)} Σ_{i=0}^{(M−1)/2} h[i] sin( (M/2 − i) ω )

 FIR IV (M is even; the sequence is antisymmetric and of odd length):

H(ω) = 2 e^{j(π/2 − ωM/2)} Σ_{i=0}^{M/2−1} h[i] sin( (M/2 − i) ω )

ª In both cases, the phase response is of the form θ(ω) = −(M/2)ω + π/2, hence
generalized linear phase. Again, in all of these cases the filter output is delayed by
M/2 samples. Also, for all cases, if G(ω) < 0, an additional π term is added to the
phase, which flips the corresponding samples.
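Likewise for the antisymmetric case. A Python sketch of the FIR IV expression with a hypothetical odd-length antisymmetric response (M = 6 even, so h[M/2] = 0):

```python
import cmath
import math

h = [-1, -2, -3, 0, 3, 2, 1]  # hypothetical antisymmetric, odd-length response
M = len(h) - 1                # M = 6 (even) -> "FIR IV"
assert all(h[n] == -h[M - n] for n in range(len(h))) and h[M // 2] == 0

w = 0.6
direct = sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h))
formula = 2 * cmath.exp(1j * (math.pi / 2 - w * M / 2)) * sum(
    h[i] * math.sin((M / 2 - i) * w) for i in range(M // 2))

assert abs(direct - formula) < 1e-12
```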
Lecture 20
Basic Digital
Filter
Structures
Today in DSP
 Review of the pole-zero plot ↔ filter relationship
 Linear Phase Concept
 Linear phase filters – FIR 1, FIR 2, FIR 3, FIR 4 filters
 Zero locations of linear filters
ª Why does it matter?
 Understanding digital filters
ª Simple FIR filters
ª The concept of cutoff-frequency
ª Cascaded FIR filters
Review
 A digital filter can be designed through its pole-zero characteristic:
ª To attenuate certain frequencies place zeros close to or on the unit circle at locations
corresponding to the frequencies to be suppressed.
ª To amplify certain frequencies, place poles close to or on the unit circle at locations
corresponding to frequencies to be amplified.
 Example: the simplest lowpass filter is the moving average filter:  H(z) = (z^M − 1) / ( M z^{M−1} (z − 1) )
ª Observe the relationship between the poles & zeros and the magnitude response.
 Based on magnitude response, we typically have one of four types of filters:
ª Lowpass, highpass, bandpass and bandstop
ª Terminology to know: passband / stopband /transition band, cutoff frequency, ideal vs.
realizable filter. Why are ideal filters not realizable?
 Based on phase response, we typically have one of four types of filters:
ª Zero-phase, linear-phase, generalized linear phase, non-linear phase. Zero-phase filters are
not realizable in real time (causality restriction), FIR filters can be made linear phase.
ª Need to make h[n] symmetric or antisymmetric.
Moving Average Filter
%h1=[1 1 1 1 1];
%h2=[1 1 1 1 1 1 1 1 1];
b1=[1 1 1 1 1]; a1=1;
b2=[1 1 1 1 1 1 1 1 1]; a2=1;

[H1 w]=freqz(b1, 1, 512);
[H2 w]=freqz(b2, 1, 512);

[z1,p1,k1] = tf2zpk(b1,a1);
[z2,p2,k2] = tf2zpk(b2,a2);

subplot(221)
plot(w/pi, abs(H1)); grid
title('Transfer Function of 5 point MAF')
subplot(222)
zplane(b1,a1);
title('Pole-zero plot of 5 point MAF')
subplot(223)
plot(w/pi, abs(H2)); grid
title('Transfer Function of 9 point MAF')
subplot(224)
zplane(b2,a2);
title('Pole-zero plot of 9 point MAF')

Observe the effects of the zeros and the poles!
Ideal Filters
 The frequency responses of four common ideal filters in the [−π, π] range are:

[Figure: ideal lowpass, highpass, bandpass, and bandstop magnitude responses,
with the passband and stopband regions marked]
Realizable Filters
 In order to develop a stable and
realizable filter transfer function
ª The ideal frequency response
specifications are relaxed by
including a transition band
between the passband and the
stopband
ª This permits the magnitude
response to decay slowly from its
maximum value in the passband
to the zero value in the stopband
ª Moreover, the magnitude
response is allowed to vary by a
small amount both in the
passband and the stopband
ª Typical magnitude response
specifications of a lowpass filter
therefore look like the figure on the right
Phase and Group Delay
 A frequency selective system (filter) with frequency response
H(ω)=|H(ω)|∟H(ω) = |H(ω)|ejθ(ω) changes the amplitude of all
frequencies in the signal by a factor of |H(ω)|, and adds a phase of
θ(ω) to all frequencies.
ª Note that both the amplitude change and the phase delay are functions of ω
ª The phase θ(ω) is in radians, but can be expressed in terms of time, which is
called the phase delay. The phase delay at a particular frequency ω0 is given as

τ_p(ω0) = −θ(ω0)/ω0       (compare this to the phase delay in continuous time: t_θ = θ/(2πf) = θ/ω)

ª If an input signal consists of many frequency components (which most practical
signals do), then we can also define the group delay, the delay by which the
envelope of the signal shifts. This can also be considered as the average phase delay
– in seconds – of the filter as a function of frequency, given by

τ_g(ωc) = −dθ(ω)/dω |_{ω=ωc}
Zero Phase Filters
 Phase distortion can be avoided with a system whose transfer function
has a zero-phase characteristic, i.e., no phase (imaginary) component.
 Causal (and hence real-time) zero-phase filters are not realizable;
however, for non-real-time applications, a zero-phase filter can easily be
constructed by relaxing the causality requirement:
ª Process the input data (finite length) with a causal real-coefficient filter H(z).
ª Time-reverse the output of this filter and process it with the same filter.
ª Time-reverse the output of the second filter once again.

x[n] → H(z) → v[n],   u[n] → H(z) → w[n]
u[n] = v[−n],   y[n] = w[−n]

ª The function filtfilt() implements this zero-phase filtering scheme
Linear Phase
 For a causal transfer function with a nonzero phase response, the phase distortion can be
avoided by ensuring that the transfer function has (preferably) a unity magnitude and a
linear-phase characteristic in the frequency band of interest
H(ω) = e^{−jαω}    |H(ω)| = 1    ∠H(ω) = θ(ω) = −αω

ª Note that this phase characteristic is linear for all ω in [0, 2π].
ª Recall that the phase delay at any given frequency ω0 is τ_p(ω0) = −θ(ω0)/ω0.
ª If we have linear phase, that is, θ(ω) = −αω, then the total delay at any frequency ω0 is
τ0 = −θ(ω0)/ω0 = αω0/ω0 = α.
ª If the phase spectrum is linear, then the phase delay is independent of ω0: it is the same
constant α for all ω, so the entire signal is delayed by α seconds → no distortion!
Linear Phase Filters
 It is impossible for a causal IIR filter to have exact linear phase; however,
designing FIR filters with precise linear phase is very easy:
 Consider a causal FIR filter of length M+1 (order M)

H(z) = Σ_{n=0}^{M} h[n] z^{−n} = h[0] + h[1]z^{−1} + h[2]z^{−2} + … + h[M]z^{−M}

ª This transfer function has linear phase, if its impulse response h[n] is either
symmetric
h[n] = h[ M − n], 0 ≤ n ≤ M

or anti-symmetric

h[n] = − h[ M − n], 0 ≤ n ≤ M
Linear Phase Filters
 There are four possible scenarios: filter length even or odd, and
impulse response is either symmetric or antisymmetric

FIR I:   even length, symmetric         FIR II:  odd length, symmetric
FIR III: even length, antisymmetric     FIR IV:  odd length, antisymmetric
                                        (note for FIR IV that h[M/2] = 0)
FIR I and FIR II Types
 For symmetric coefficients, we can show that the frequency responses are of the
following form:
 FIR I (M is odd; the sequence is symmetric and of even length):

H(ω) = 2 e^{−jωM/2} Σ_{i=0}^{(M−1)/2} h[i] cos( (M/2 − i) ω )

ª Note that this is of the form H(ω) = e^{−jαω} G(ω), where α = M/2 and G(ω) is the real
quantity (the summation term) → the output is delayed by M/2 samples!

 FIR II (M is even; the sequence is symmetric and of odd length):

H(ω) = e^{−jωM/2} [ h[M/2] + 2 Σ_{i=0}^{M/2−1} h[i] cos( (M/2 − i) ω ) ]

ª Again, this system has linear phase (the quantity inside the brackets is a real
quantity), and the phase delay is M/2 samples.
FIR III and FIR IV Types
 For antisymmetric sequences we have h[n] = −h[M−n], which gives sine terms in the
summation expression:

 FIR III (M is odd; the sequence is antisymmetric and of even length):

H(ω) = 2 e^{j(π/2 − ωM/2)} Σ_{i=0}^{(M−1)/2} h[i] sin( (M/2 − i) ω )

 FIR IV (M is even; the sequence is antisymmetric and of odd length):

H(ω) = 2 e^{j(π/2 − ωM/2)} Σ_{i=0}^{M/2−1} h[i] sin( (M/2 − i) ω )

ª In both cases, the phase response is of the form θ(ω) = −(M/2)ω + π/2, hence
generalized linear phase. Again, in all of these cases the filter output is delayed by
M/2 samples. Also, for all cases, if G(ω) < 0, an additional π term is added to the
phase, which flips the corresponding samples.
An Example – Matlab Demo

h1=[1 2 3 4 4 3 2 1];     % FIR 1
h2=[1 2 3 4 3 2 1];       % FIR 2
h3=[-1 -2 -3 -4 4 3 2 1]; % FIR 3 (antisymmetric, even length)
h4=[-1 -2 -3 0 3 2 1];    % FIR 4

[H1 w]=freqz(h1, 1, 512); [H2 w]=freqz(h2, 1, 512);
[H3 w]=freqz(h3, 1, 512); [H4 w]=freqz(h4, 1, 512);

% Plot the magnitude and phase responses
% in angular frequency from 0 to pi
subplot(421); plot(w/pi, abs(H1));grid; ylabel('FIR 1')
subplot(422); plot(w/pi, angle(H1));grid;
subplot(423); plot(w/pi, abs(H2));grid; ylabel('FIR 2')
subplot(424); plot(w/pi, angle(H2));grid;
subplot(425); plot(w/pi, abs(H3));grid; ylabel('FIR 3')
subplot(426); plot(w/pi, angle(H3));grid;
subplot(427); plot(w/pi, abs(H4));grid
xlabel('Frequency, \omega/\pi'); ylabel('FIR 4')
subplot(428); plot(w/pi, angle(H4));grid
xlabel('Frequency, \omega/\pi')

%Plot the zero-pole plots
figure
subplot(221)
zplane(h1,1); grid
title('FIR 1')
subplot(222)
zplane(h2,1);grid
title('FIR 2')
subplot(223)
zplane(h3,1);grid
title('FIR 3')
subplot(224)
zplane(h4,1);grid
title('FIR 4')

lin_phase_demo2.m
Example
Zero Locations of
Linear Phase Filters
 For linear phase filters, the impulse response satisfies
h[n] = ± h[M−n]  ⇒  H(z) = ± Σ_{n=0}^{M} h[M−n] z^{−n}  ⇒  H(z) = ± z^{−M} H(z^{−1})

ª We can make the following observations as facts:
1. If z0 is a zero of H(z), so too is 1/z0 = z0^{−1}
2. Real zeros that are not on the unit circle always occur in pairs,
such as (1 − αz^{−1}) and (1 − α^{−1}z^{−1}); this is a direct consequence
of Fact 1
3. If a zero is complex, z0 = αe^{jθ}, then its conjugate αe^{−jθ} is also a
zero. But by the above statements, their symmetric counterparts
α^{−1}e^{jθ} and α^{−1}e^{−jθ} must also be zeros. So complex zeros occur
in quadruplets
4. If M is odd (filter length is even) and the filter is symmetric (that is,
h[n] = h[M−n]), then H(z) must have a zero at z = −1
5. If the filter is antisymmetric, that is, h[n] = −h[M−n], then H(z) must
have a zero at z = +1 for both odd and even M
6. If the filter is antisymmetric, h[n] = −h[M−n], and M is even, then H(z)
must also have a zero at z = −1
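Facts 1, 4, 5, and 6 can be spot-checked with small hand-picked filters. A Python sketch (the specific coefficient sets are hypothetical examples):

```python
def H(h, z):
    # H(z) = sum_n h[n] z^{-n}
    return sum(c * z ** (-n) for n, c in enumerate(h))

# Facts 1/2: symmetric h -> zeros come in reciprocal pairs (here z = 2 and 1/2)
h_sym = [2.0, -5.0, 2.0]
assert abs(H(h_sym, 2.0)) < 1e-12 and abs(H(h_sym, 0.5)) < 1e-12

# Fact 4: symmetric, M = 3 odd -> forced zero at z = -1
h_fir1 = [1.0, 3.0, 3.0, 1.0]
assert abs(H(h_fir1, -1.0)) < 1e-12

# Facts 5/6: antisymmetric, M = 2 even -> zeros at both z = +1 and z = -1
h_anti = [1.0, 0.0, -1.0]
assert abs(H(h_anti, 1.0)) < 1e-12 and abs(H(h_anti, -1.0)) < 1e-12
```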
Linear Phase Filter
Zero Locations
ª Type 1 FIR filter: either an even number of zeros, or no zeros, at z = 1, and an
odd number of zeros at z = −1. A Type 1 FIR filter cannot be used to design a
highpass filter, since it always has a zero at z = −1.
ª Type 2 FIR filter: either an even number of zeros, or no zeros, at z = 1 and at
z = −1. A Type 2 FIR filter has no such restrictions and can be used to design
almost any type of filter.
ª Type 3 FIR filter: an odd number of zeros at z = 1, and either an even number
of zeros or no zeros at z = −1. A Type 3 FIR filter is not appropriate for
designing a lowpass filter due to the presence of a zero at z = 1.
ª Type 4 FIR filter: an odd number of zeros at both z = 1 and z = −1. A Type 4
FIR filter therefore cannot be used to design a lowpass, highpass, or bandstop
filter.
ª The presence of zeros at z = ±1 leads to some limitations on the use of these
linear-phase transfer functions for designing frequency-selective filters.

[Figure: representative pole-zero plots for the four linear-phase FIR types]
Understanding
Digital Filters
 Design of digital filters → coming up in the next two weeks
 First, take a look at a few simple FIR and then IIR filters to
understand their properties
 FIR digital filters have finite length integer or real valued impulse
response coefficients
ª Employed in a number of practical applications, primarily because of their
simplicity, which makes them amenable to inexpensive hardware implementations
ª After all, an FIR filter is simply a finite set of (typically integer) coefficients.
Simple FIR Filters
 Two-point (M = 1 → 1st order) moving average filter: the simplest FIR filter
ª h[n] = ½·[1 1] = [½ ½]  →  h[n] = ½(δ[n] + δ[n−1])  →  H(z) = ½(1 + z^{−1}) = (z + 1)/(2z)
ª Notice that H(z) has a zero at z = −1, and a pole at z = 0.
• Remember: for stable systems (and FIR filters are always stable), the frequency
response can be obtained by substituting z = e^{jω}. z = −1 ⇔ ω = π
• The zero at ω = π (suppressing high-frequency components near π), coupled with the
pole at z = 0, makes this a lowpass filter

H(ω) = H(e^{jω}) = H(z)|_{z=e^{jω}} = (e^{jω} + 1)/(2e^{jω})
ω = 0 ⇒ H(e^{j0}) = 1
ω = π ⇒ H(e^{jπ}) = 0

[Figure: first-order FIR lowpass magnitude response. How would this frequency
response look when plotted as a function of ω?]
Simple FIR Filters
H(e^{jω}) = (e^{jω} + 1)/(2e^{jω}) = e^{−jω/2} cos(ω/2)

This is a monotonically decreasing function of ω, hence a lowpass filter.

System gain at some frequency ω:

G(ω) = 20 log10 |H(e^{jω})|

The frequency ωc at which |H(ωc)| = |H(0)|/√2 is of special interest:
the 3-dB cutoff frequency:

G(ωc) = 20 log10 |H(e^{j0})| − 20 log10(√2) = 0 − 3.0103 ≅ −3.0 dB
Cutoff-Frequency
 For realizable filters, the cutoff frequency is the frequency at which the system gain
reaches 0.707 of its peak value.
 This gain represents the frequency at which the signal power is half of its peak
power. (Why?)
 For a lowpass filter, the gain at the cutoff frequency is 3 dB less than its gain at zero
frequency (or 0.707 of its zero-frequency amplitude, or half its power at zero
frequency).
 For the first-order filter, this occurs at ωc = π/2

H(e^{jω}) = (e^{jω} + 1)/(2e^{jω}) = e^{−jω/2} cos(ω/2)

[Figure: first-order FIR lowpass magnitude response with the 3-dB point at ω = π/2]
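Both the e^{−jω/2} cos(ω/2) identity and the 3-dB point at π/2 are one-liners to verify. A Python sketch:

```python
import cmath
import math

def H(w):
    # first-order FIR lowpass: H(e^{jw}) = (e^{jw} + 1) / (2 e^{jw})
    z = cmath.exp(1j * w)
    return (z + 1) / (2 * z)

w = 0.8
assert abs(H(w) - cmath.exp(-1j * w / 2) * math.cos(w / 2)) < 1e-12

# at wc = pi/2 the magnitude is 1/sqrt(2) of the DC gain |H(0)| = 1
assert abs(abs(H(math.pi / 2)) - 1 / math.sqrt(2)) < 1e-12
gain_db = 20 * math.log10(abs(H(math.pi / 2)))
assert abs(gain_db + 3.0103) < 1e-3
```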
Cascaded FIR Filters
 Now consider a second-order MAF: a cascade of two first-order sections, h ∗ h.
For M cascaded first-order sections, the 3-dB cutoff frequency is

ωc = 2 cos^{−1}( 2^{−1/(2M)} )

As the number of sections M increases, the filter becomes sharper, but the
passband also shrinks!

Ex: If we want a LPF with a cutoff frequency of 0.12π, what order filter do we
need? What linear frequency does this correspond to?

%Cascade MAF (two sections)
subplot(121)
w=linspace(0, pi, 512);
H2=exp(-j*w/2).*cos(w/2).*exp(-j*w/2).*cos(w/2);
plot(w/pi, abs(H2)); grid
xlabel('Frequency, \omega/\pi')
%(three sections)
subplot(122)
H3=exp(-j*w/2).*cos(w/2).*exp(-j*w/2).*cos(w/2).*exp(-j*w/2).*cos(w/2);
plot(w/pi, abs(H3)); grid
xlabel('Frequency, \omega/\pi')
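The cutoff formula for M cascaded sections follows from setting |H(ω)|^M = cos^M(ω/2) equal to 1/√2, and it can be confirmed numerically. A Python sketch:

```python
import math

# cascade of M first-order sections: |H(w)| = cos(w/2) ** M
for M in (1, 2, 3, 5):
    wc = 2 * math.acos(2 ** (-1 / (2 * M)))   # claimed 3-dB cutoff
    assert abs(math.cos(wc / 2) ** M - 1 / math.sqrt(2)) < 1e-12

# M = 1 recovers the single-section cutoff of pi/2
assert abs(2 * math.acos(2 ** (-1 / 2)) - math.pi / 2) < 1e-12
```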
How About
High Pass Filters?
 A high pass filter can easily be obtained from a lowpass filter by
____________________________________________
Multiplying h[n] by (−1)^n, or replacing z with −z in H(z)

 Hence  H1(z) = ½(1 − z^{−1})  ⇒  H1(e^{jω}) = ½(1 − e^{−jω}) = j e^{−jω/2} sin(ω/2)

ª Notice that H1(z) has a zero at z = 1, and a pole at z = 0.
• Therefore, the frequency response has a zero at ω = 0, corresponding to z = 1
• The zero at ω = 0 (suppressing low-frequency components near 0) makes this a
highpass filter

ω = 0 ⇒ H(e^{j0}) = 0
ω = π ⇒ |H(e^{jπ})| = 1

[Figure: first-order FIR highpass magnitude response. How would this frequency
response look when plotted as a function of ω?]
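The (−1)^n modulation rule can be confirmed on the moving-average filter: the gains at DC and at ω = π swap places. A Python sketch (M = 5 is an arbitrary choice):

```python
import cmath

M = 5
h_lp = [1 / M] * M                                    # M-point moving average
h_hp = [((-1) ** n) * c for n, c in enumerate(h_lp)]  # modulate by (-1)^n

def Hmag(h, w):
    return abs(sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h)))

assert abs(Hmag(h_lp, 0.0) - 1.0) < 1e-12       # lowpass: unity at DC...
assert Hmag(h_lp, cmath.pi) < 0.21              # ...small at w = pi
assert abs(Hmag(h_hp, cmath.pi) - 1.0) < 1e-12  # highpass: unity at pi...
assert Hmag(h_hp, 0.0) < 0.21                   # ...small at DC
```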
Cascading HPF
(cascade: h ∗ h ∗ … ∗ h)

subplot(121)
w=linspace(0, pi, 512);
H2=j*exp(-j*(w/2)).*sin(w/2)*j.*exp(-j*(w/2)).*sin(w/2);
plot(w/pi, abs(H2)); grid
xlabel('Frequency, \omega/\pi')
%(three sections)
subplot(122)
H3=j*exp(-j*(w/2)).*sin(w/2)*j.*exp(-j*(w/2)).*sin(w/2)*j.*exp(-j*(w/2)).*sin(w/2);
plot(w/pi, abs(H3)); grid
xlabel('Frequency, \omega/\pi')
Lecture 21
Basic Digital
Filter
Structures
(Part II)
Today in DSP
 Understanding digital filters
ª Simple FIR filters
ª Cascaded FIR filters
ª Simple IIR filters
• Lowpass
• Highpass
• Bandpass
• Bandstop
• High order
• Comb filters
ª Minimum & Maximum phase filters
ª Allpass filters
Simple FIR Filters
 Two-point (M = 1 → 1st order) moving average filter: the simplest FIR filter
ª h[n] = ½·[1 1] = [½ ½]  →  h[n] = ½(δ[n] + δ[n−1])  →  H(z) = ½(1 + z^{−1}) = (z + 1)/(2z)
ª Notice that H(z) has a zero at z = −1, and a pole at z = 0.
• Remember: for stable systems (and FIR filters are always stable), the frequency
response can be obtained by substituting z = e^{jω}. z = −1 ⇔ ω = π
• The zero at ω = π (suppressing high-frequency components near π), coupled with the
pole at z = 0, makes this a lowpass filter

H(ω) = H(e^{jω}) = H(z)|_{z=e^{jω}} = (e^{jω} + 1)/(2e^{jω})
ω = 0 ⇒ H(e^{j0}) = 1
ω = π ⇒ H(e^{jπ}) = 0

[Figure: first-order FIR lowpass magnitude response. How would this frequency
response look when plotted as a function of ω?]
Simple FIR Filters
H(e^{jω}) = (e^{jω} + 1)/(2e^{jω}) = e^{−jω/2} cos(ω/2)

This is a monotonically decreasing function of ω, hence a lowpass filter.

System gain at some frequency ω:

G(ω) = 20 log10 |H(e^{jω})|

The frequency ωc at which |H(ωc)| = |H(0)|/√2 is of special interest:
the 3-dB cutoff frequency:

G(ωc) = 20 log10 |H(e^{j0})| − 20 log10(√2) = 0 − 3.0103 ≅ −3.0 dB
Compiled in part from DSP, 2/e
S. K. Mitra, Copyright © 2001
Cutoff-Frequency
 For realizable filters, the cutoff frequency is the frequency at which the system gain
reaches 0.707 of its peak value.
 This gain represents the frequency at which the signal power is half of its peak
power. (Why?)
 For a lowpass filter, the gain at the cutoff frequency is 3 dB less than its gain at zero
frequency (or 0.707 of its zero-frequency amplitude, or half its power at zero
frequency).
 For the first-order filter, this occurs at ωc = π/2

H(e^{jω}) = (e^{jω} + 1)/(2e^{jω}) = e^{−jω/2} cos(ω/2)

[Figure: first-order FIR lowpass magnitude response with the 3-dB point at ω = π/2]
Cascaded FIR Filters
 Now consider a second-order MAF: a cascade of two first-order sections, h ∗ h.
For M cascaded first-order sections, the 3-dB cutoff frequency is

ωc = 2 cos^{−1}( 2^{−1/(2M)} )

As the number of sections M increases, the filter becomes sharper, but the
passband also shrinks!

Ex: If we want a LPF with a cutoff frequency of 0.12π, what order filter do we
need? What linear frequency does this correspond to?

%Cascade MAF (two sections)
subplot(121)
w=linspace(0, pi, 512);
H2=exp(-j*w/2).*cos(w/2).*exp(-j*w/2).*cos(w/2);
plot(w/pi, abs(H2)); grid
xlabel('Frequency, \omega/\pi')
%(three sections)
subplot(122)
H3=exp(-j*w/2).*cos(w/2).*exp(-j*w/2).*cos(w/2).*exp(-j*w/2).*cos(w/2);
plot(w/pi, abs(H3)); grid
xlabel('Frequency, \omega/\pi')
How About
High Pass Filters?
 A high pass filter can easily be obtained from a lowpass filter by
____________________________________________
Multiplying h[n] by (−1)^n, or replacing z with −z in H(z)

 Hence  H1(z) = ½(1 − z^{−1})  ⇒  H1(e^{jω}) = ½(1 − e^{−jω}) = j e^{−jω/2} sin(ω/2)

ª Notice that H1(z) has a zero at z = 1, and a pole at z = 0.
• Therefore, the frequency response has a zero at ω = 0, corresponding to z = 1
• The zero at ω = 0 (suppressing low-frequency components near 0) makes this a
highpass filter

ω = 0 ⇒ H(e^{j0}) = 0
ω = π ⇒ |H(e^{jπ})| = 1

[Figure: first-order FIR highpass magnitude response. How would this frequency
response look when plotted as a function of ω?]
Cascading HPF
h → h → … → h

%Cascade HPF (two and three sections)
w=linspace(0, pi, 512);
subplot(121)
H2=j*exp(-j*(w/2)).*sin(w/2)*j.*exp(-j*(w/2)).*sin(w/2);
plot(w/pi, abs(H2)); grid
xlabel('Frequency, \omega/\pi')
subplot(122)
H3=j*exp(-j*(w/2)).*sin(w/2)*j.*exp(-j*(w/2)).*sin(w/2)*j.*exp(-j*(w/2)).*sin(w/2);
plot(w/pi, abs(H3)); grid
xlabel('Frequency, \omega/\pi')
HPF
 Alternately, a higher-order highpass filter of the form

H1(z) = (1/M) ∑_{n=0}^{M−1} (−1)^n z^(−n)

is obtained by replacing z with −z in the transfer function of a moving average filter
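The highpass behavior of this sign-alternated moving average can be confirmed with a short sketch (Python rather than the course's Matlab; the variable names are mine):

```python
import numpy as np

M = 4  # number of taps
h = np.array([(-1) ** n / M for n in range(M)])  # h[n] = (-1)^n / M

# DTFT on a frequency grid: H(w) = sum_n h[n] e^{-jwn}
w = np.array([0.0, np.pi])
H = np.array([np.sum(h * np.exp(-1j * wk * np.arange(M))) for wk in w])

assert abs(H[0]) < 1e-12            # zero gain at w = 0: lows suppressed
assert abs(abs(H[1]) - 1) < 1e-12   # unity gain at w = pi: highs passed
```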
IIR LPF Filters
 A first-order causal lowpass IIR digital filter has a transfer function given by

H_LP(z) = ((1 − α)/2) · (1 + z⁻¹)/(1 − α z⁻¹) = ((1 − α)/2) · (z + 1)/(z − α)

where |α| < 1 for stability
ª The above transfer function has a zero at z = −1, i.e., at ω = π, which is in the stopband
ª H_LP(z) has a real pole at z = α
ª As ω increases from 0 to π, the magnitude of the zero vector decreases from a value of 2 to 0, whereas, for a positive value of α, the magnitude of the pole vector increases from a value of 1 − α to 1 + α
ª The maximum value of the magnitude function is 1 at ω = 0, and the minimum value is 0 at ω = π:

|H_LP(e^j0)| = 1,  |H_LP(e^jπ)| = 0
IIR LPF Filters

G(ω) = 20 log10 |H(e^jω)|

[Plots: magnitude |H(e^jω)| vs. ω/π (linear axis) and gain G(ω) in dB vs. ω/π (log axis), for α = 0.8, 0.7, 0.5]

3-dB cutoff frequency:

cos ωc = 2α/(1 + α²)  ⇔  α = (1 − sin ωc)/cos ωc

To find the 3-dB cutoff frequency, solve for ωc in |H(ωc)|² = ½; conversely, the second form gives the α corresponding to a given 3-dB cutoff frequency ωc.
IIR HPF Filters
 A first-order causal highpass IIR digital filter has a transfer function given by

H_HP(z) = ((1 + α)/2) · (1 − z⁻¹)/(1 − α z⁻¹) = ((1 + α)/2) · (z − 1)/(z − α)

where |α| < 1 for stability
ª Note that one can obtain a HPF simply by replacing z with −z in the LPF. The above transfer function is slightly different; however, it provides a better HPF.
 This transfer function has a zero at z = 1 ⇒ ω = 0, which is in the stopband

[Plots: magnitude |H(e^jω)| and gain in dB vs. ω/π, for α = 0.8, 0.7, 0.5]

For a given 3-dB cutoff frequency, the corresponding α is:

α = (1 − sin ωc)/cos ωc
Example
 We can easily design a first-order IIR HPF with, say, a cutoff frequency of 0.8π: sin(0.8π) = 0.587785, cos(0.8π) = −0.80902 ⇒ α = −0.5095245

H_HP(z) = ((1 + α)/2) · (1 − z⁻¹)/(1 − α z⁻¹) = 0.245238 · (1 − z⁻¹)/(1 + 0.5095245 z⁻¹)

 Higher-order filters can also be easily designed simply by cascading several first-order filters. Note that for each cascaded filter, the overall impulse response is the convolution of the individual impulse responses, h'[n] = h[n]*h[n]*…*h[n], and the overall frequency response is H'(ω) = H(ω)·H(ω)···H(ω)
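The worked design can be verified on the unit circle (a Python sketch; `H_hp` is my own helper name):

```python
import numpy as np

def H_hp(w, a):
    # first-order IIR highpass: H(z) = (1+a)/2 * (1-z^-1)/(1-a*z^-1)
    z1 = np.exp(-1j * w)
    return (1 + a) / 2 * (1 - z1) / (1 - a * z1)

alpha = -0.5095245  # from the worked design at wc = 0.8*pi
assert abs(H_hp(0.0, alpha)) < 1e-12                          # null at DC
assert abs(abs(H_hp(np.pi, alpha)) - 1) < 1e-12               # unity gain at pi
assert abs(abs(H_hp(0.8 * np.pi, alpha)) ** 2 - 0.5) < 1e-5   # ~half power at cutoff
```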
Bandpass IIR Filters
 A general 2nd-order bandpass IIR filter transfer function is

H_BP(z) = ((1 − α)/2) · (1 − z⁻²)/(1 − β(1 + α)z⁻¹ + α z⁻²),  |α| < 1 and |β| < 1 for stability

ª This function has a zero both at z = −1 and z = 1, that is, both at ω = 0 and ω = π
ª It attains its maximum value at ω = ω0, called the center frequency, given by ω0 = cos⁻¹(β)
ª This filter has two cutoff frequencies, ωc1 and ωc2, where |H_BP(ω)|² = ½. These frequencies are also called 3-dB cutoff frequencies.
ª The difference between the two cutoff frequencies is called the 3-dB bandwidth, given by

Bw = ωc2 − ωc1 = cos⁻¹( 2α/(1 + α²) ),  ωc2 > ωc1
Bandpass IIR Filters

[Plots: |H_BP(e^jω)| vs. ω/π — left: β = 0.34 with α = 0.8, 0.5, 0.2; right: α = 0.6 with β = 0.8, 0.5, 0.2]

For example, to design a 2nd-order BPF with a center frequency of 0.4π and a 3-dB bandwidth of 0.1π ⇒ β = cos(ω0) = cos(0.4π) = 0.30901, and 2α/(1 + α²) = cos(Bw) ⇒ α1 = 1.37638 and α2 = 0.72654. Note that only the second one provides a stable transfer function:

H_BP(z) = ((1 − α)/2) · (1 − z⁻²)/(1 − β(1 + α)z⁻¹ + α z⁻²)

H'_BP(z) = −0.18819 · (1 − z⁻²)/(1 − 0.7343424 z⁻¹ + 1.37638 z⁻²)   (α1, unstable)

H''_BP(z) = 0.13673 · (1 − z⁻²)/(1 − 0.533531 z⁻¹ + 0.72654253 z⁻²)   (α2, stable)
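The stable design can be checked numerically (a Python sketch; `H_bp` is my own helper name):

```python
import numpy as np

def H_bp(w, a, b):
    # 2nd-order IIR bandpass: (1-a)/2 * (1-z^-2)/(1 - b(1+a)z^-1 + a*z^-2)
    z1 = np.exp(-1j * w)
    return (1 - a) / 2 * (1 - z1 ** 2) / (1 - b * (1 + a) * z1 + a * z1 ** 2)

alpha = 0.72654253           # the stable root from the worked design
beta = np.cos(0.4 * np.pi)   # center frequency 0.4*pi

assert abs(H_bp(0.0, alpha, beta)) < 1e-12        # zero at w = 0
assert abs(H_bp(np.pi, alpha, beta)) < 1e-12      # zero at w = pi
assert abs(abs(H_bp(0.4 * np.pi, alpha, beta)) - 1) < 1e-9  # unit peak at center

Bw = np.arccos(2 * alpha / (1 + alpha ** 2))      # 3-dB bandwidth formula
assert abs(Bw - 0.1 * np.pi) < 1e-5
```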
Bandstop IIR Filters
 A general 2nd-order bandstop IIR filter transfer function is

H_BS(z) = ((1 + α)/2) · (1 − 2β z⁻¹ + z⁻²)/(1 − β(1 + α)z⁻¹ + α z⁻²),  |α| < 1 and |β| < 1 for stability

ª This function achieves its maximum value of unity at z = ±1, i.e., at ω = 0 and ω = π
ª It has a zero at ω = ω0, called the notch frequency, given by ω0 = cos⁻¹(β)
ª Therefore, this filter is also called the notch filter.
ª Similar to the bandpass case, there are two values, ωc1 and ωc2, called the 3-dB cutoff frequencies, where the squared magnitude of the frequency response reaches ½, i.e., |H_BS(ω)|² = ½.
ª The difference between these two frequencies is again called the 3-dB bandwidth, given by

Bw = ωc2 − ωc1 = cos⁻¹( 2α/(1 + α²) ),  ωc2 > ωc1
Bandstop IIR Filters

[Plots: |H_BS(e^jω)| vs. ω/π — left: β fixed with α = 0.8, 0.5, 0.2; right: α fixed with β = 0.8, 0.5, 0.2]
Higher order IIR Filters
 Again, note that any of these filters can be designed to be of higher order, which typically provides sharper and narrower transition bands
 Simply cascade a basic filter structure as many times as necessary to achieve the higher order filter.
 For example, to cascade K first-order LPFs:

H_LP(z) = ((1 − α)/2) · (1 + z⁻¹)/(1 − α z⁻¹)    G_LP(z) = [ ((1 − α)/2) · (1 + z⁻¹)/(1 − α z⁻¹) ]^K

ª It can be shown that

α = ( 1 + (1 − C) cos ωc − sin ωc √(2C − C²) ) / (1 − C + cos ωc),  C = 2^((K−1)/K)
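The adjusted-α formula can be sanity-checked numerically (a Python sketch with my own function names; for K = 1 it must reduce to the single-section formula):

```python
import numpy as np

def alpha_for_cascade(wc, K):
    # adjusted alpha so that K cascaded first-order LPF sections are 3 dB down at wc
    C = 2 ** ((K - 1) / K)
    return ((1 + (1 - C) * np.cos(wc) - np.sin(wc) * np.sqrt(2 * C - C ** 2))
            / (1 - C + np.cos(wc)))

def H_lp(w, a):
    z1 = np.exp(-1j * w)
    return (1 - a) / 2 * (1 + z1) / (1 - a * z1)

wc, K = 0.25 * np.pi, 3
a = alpha_for_cascade(wc, K)
assert -1 < a < 1  # stable
# K = 1 reduces to the single-section formula alpha = (1 - sin wc)/cos wc
assert abs(alpha_for_cascade(wc, 1) - (1 - np.sin(wc)) / np.cos(wc)) < 1e-12
# the cascade G = H^K is exactly half-power at wc
assert abs(abs(H_lp(wc, a)) ** (2 * K) - 0.5) < 1e-9
```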
Comb Filters
 Note that LPF, HPF, BPF, and BSF all have a single passband and/or a single stopband. Many applications require several such frequency regions, which can be provided by a comb filter.
 A comb filter typically has a frequency response that is a periodic function of ω, with a period of 2π/L, L indicating the number of passbands/stopbands.
 If H(z) is a filter with a single passband and/or a single stopband, a comb filter can easily be generated from it by replacing each delay in its realization with L delays, resulting in a structure with a transfer function given by Hcomb(z) = H(z^L)
ª If |H(ω)| exhibits a peak at ωp, then |Hcomb(ω)| will exhibit L peaks at (ωp + 2πk)/L, 0 ≤ k ≤ L−1, in the frequency range 0 ≤ ω < 2π
ª Likewise, if |H(ω)| has a notch at ω0, then |Hcomb(ω)| will have L notches at (ω0 + 2πk)/L, 0 ≤ k ≤ L−1, in the frequency range 0 ≤ ω < 2π
ª A comb filter can be generated from either an FIR or an IIR prototype filter
Comb Filters
 Starting from a lowpass transfer function:

H(z) = ½(1 + z⁻¹)  ⇒  Hcomb(z) = H(z^L) = ½(1 + z⁻ᴸ)

[Plot: comb filter from lowpass prototype, |Hcomb(e^jω)| vs. ω/π for 0 ≤ ω < 2π, L = 5]

The L notches are at ω = (2k+1)π/L and the L peaks are at ω = 2πk/L.
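The peak and notch locations follow directly from |H(e^jLω)| and can be checked in a few lines (a Python sketch; `Hcomb` is my own name):

```python
import numpy as np

L = 5  # replace each delay with L delays in the prototype H(z) = (1 + z^-1)/2

def Hcomb(w):
    return 0.5 * abs(1 + np.exp(-1j * L * w))  # |H(z^L)| on the unit circle

for k in range(L):
    assert abs(Hcomb(2 * np.pi * k / L) - 1) < 1e-12   # L peaks at 2*pi*k/L
    assert Hcomb((2 * k + 1) * np.pi / L) < 1e-12      # L notches at (2k+1)*pi/L
```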
Comb Filters
 Starting from a highpass transfer function:

H(z) = ½(1 − z⁻¹)  ⇒  Hcomb(z) = H(z^L) = ½(1 − z⁻ᴸ)

[Plot: comb filter from highpass prototype, |Hcomb(e^jω)| vs. ω/π for 0 ≤ ω < 2π, L = 5]

The L peaks are at ω = (2k+1)π/L and the L notches are at ω = 2πk/L.
Minimum & Maximum
Phase Filters
 Without further details, we provide the following properties and
definitions:
ª It can be shown that a causal stable transfer function with all zeros outside the
unit circle has an excess phase compared to a causal transfer function with
identical magnitude but having all zeros inside the unit circle
ª A causal stable transfer function with all zeros inside the unit circle is called a
minimum-phase transfer function
ª A causal stable transfer function with all zeros outside the unit circle is called a
maximum-phase transfer function.
ª Typically, we are interested in minimum phase transfer functions.
All Pass Filters
 A filter that passes all frequencies equally is called an allpass filter.
ª Any nonminimum-phase transfer function can be expressed as the product of a
minimum-phase transfer function and a stable allpass transfer function, where an
allpass transfer function has a unit magnitude for all frequencies.
ª Allpass transfer functions are typically IIR, and their magnitude responses satisfy
1. |H(ω)|² = 1 for all ω
2. For every zero at z = αe^jθ, there must be a pole at z = α⁻¹e^−jθ, or vice versa, so that the zeros and poles cancel each other out in magnitude.

Recall that all poles of a causal stable transfer function must lie inside the unit circle. It then follows that all of the zeros of a causal, stable, allpass filter must lie outside of the unit circle, in a mirror-image symmetry formation.
Lecture 22
FIR Filter Design
Today in DSP
 Filter specifications
ª Choosing the filter type
 Basic design approaches
ª FIR / IIR
 FIR filter design
ª Windowing based FIR filter design
• LPF, HPF, BSF, BPF
ª Gibbs Phenomenon
ª Trade-offs in windows based FIR filter design
ª Commonly used windows
• Rectangular, Hamming, Hanning, Bartlett, Blackman, Kaiser
ª FIR Windows in Matlab
Digital Filters
 So far we have seen several basic filter architectures
ª Impulse response based: FIR & IIR
ª Magnitude response based: Lowpass, highpass, bandpass,
bandstop (notch), allpass, comb
ª Phase response based: Zero-phase, linear phase, generalized linear phase, non-
linear
 FIR examples:
ª 1st order moving average filter – lowpass
ª 1st order highpass by replacing “z” with “-z” in the MAF
ª High order FIR by cascading
 IIR examples:
ª 1st order lowpass, highpass, 2nd order bandpass, bandstop, notch, Lth order comb
ª Allpass, minimum phase, maximum phase
 The transfer functions provided for these are fixed, not allowing much
flexibility in the design of these filters
 How to design a filter given its passband and stopband characteristics?
Filter Design
 Objective: Obtain a realizable transfer function H(z) approximating a
desired frequency response.
 Digital filter design is the process of deriving this transfer function
ª Typically magnitude (and sometimes phase) response of the desired filter is specified.
ª Recall that there are four basic types of ideal digital filter, based on magnitude response
[Sketches: ideal magnitude responses HLP(e^jω), HHP(e^jω), HBS(e^jω), and HBP(e^jω), each equal to 1 in the passband(s) and 0 elsewhere over −π ≤ ω ≤ π, with edges at ±ωc or ±ωc1, ±ωc2]

Since these filters cannot be realized, we need to relax (i.e., smooth) the sharp filter characteristics in the passband and stopband by providing acceptable tolerances.
Filter Specifications
• |H(e^jω)| ≈ 1, with an error ±δp in the passband, i.e.,
  1 − δp ≤ |H(ω)| ≤ 1 + δp,  |ω| ≤ ωp
• |H(e^jω)| ≈ 0, with an error ±δs in the stopband, i.e.,
  |H(ω)| ≤ δs,  ωs ≤ |ω| ≤ π

[Figure: tolerance scheme for |H(ω)|]

ωp - passband edge frequency
ωs - stopband edge frequency
δp - peak ripple value in the passband
δs - peak ripple value in the stopband

We will assume that we are dealing with filters with real coefficients; hence the frequency response is periodic with 2π and symmetric around 0 and π.
Filter Specifications
 Filter specifications are often given in decibels, in terms of loss of gain:

G(ω) = −20 log10 |H(e^jω)|

with peak passband and minimum stopband ripple

αp = −20 log10(1 − δp),  αs = −20 log10(δs)

 Magnitude specs can also be given in a normalized form, where the maximum passband value is normalized to “1” (0 dB). Then we have the minimum passband magnitude, 1/√(1 + ε²), and the maximum stopband magnitude, 1/A.
Remember!
 The following must be taken into consideration in making the filter selection:
ª H(z) satisfying the frequency response specifications must be causal and stable (poles inside the unit circle, ROC includes the unit circle, h[n] right-sided)
ª If the filter is FIR, then H(z) is a polynomial in z⁻¹ with real coefficients:

H(z) = ∑_{n=0}^{N} b[n] z^(−n) = ∑_{n=0}^{N} h[n] z^(−n)

• If linear phase is desired, the filter coefficients h[n] (also the impulse response) must satisfy the symmetry constraint h[n] = ±h[M−n]
• For computational efficiency, the minimum filter order M that satisfies the design criteria must be used.
ª If the filter is IIR, then H(z) is a real rational function of z⁻¹:

H(z) = (b0 + b1 z⁻¹ + b2 z⁻² + … + bM z⁻ᴹ) / (a0 + a1 z⁻¹ + a2 z⁻² + … + aN z⁻ᴺ)

• Stability must be ensured!
• The minimum (M, N) that satisfies the design criteria must be used.
FIR or IIR???

 Several advantages to both FIR and IIR type filters.


 Advantages of FIR filters (disadvantages of IIR filters):
ª Can be designed with exact linear phase,
ª Filter structure always stable with quantized coefficients
 Disadvantages of FIR filters (advantage of IIR filters)
ª Order of an FIR filter is usually much higher than the order of an equivalent IIR
filter meeting the same specifications Î higher computational complexity
Basic Design Approaches
 IIR filter design:
1. Convert the digital filter specifications into an analog prototype lowpass filter
specifications
2. Determine the analog lowpass filter transfer function |H(Ω)|
3. Transform |H(Ω)| into the desired digital transfer function H(z)
• Analog approximation techniques are highly advanced
• They usually yield closed-form solutions
• Extensive tables are available for analog filter design
• Many applications require digital simulation of analog systems

 FIR filter design is based on a direct approximation of the specified


magnitude response, with the often added requirement that the phase
be linear
ª The design of an FIR filter of order M may be accomplished by finding either the length-(M+1) impulse response samples of h[n] or the (M+1) samples of its frequency response H(ω)
FIR Filter Design
 Let's start with the ideal lowpass filter
ª We know that there are two problems with this filter: it is infinitely long, and it is non-causal

H_LP(ω) = 1 for |ω| ≤ ωc, 0 for ωc < |ω| ≤ π

h_LP[n] = (1/2π) ∫_{−ωc}^{ωc} 1·e^(jωn) dω = sin(ωc n)/(πn),  −∞ < n < ∞

[Plot: hLP[n], the sinc-shaped ideal impulse response]

How can we overcome these two problems?

FIR Filter Design

[Figure: the ideal impulse response truncated and shifted, shown over sample indices 0–200]
FIR Filter Design
 This is the basic, straightforward approach to FIR filter design:
ª Start with an ideal filter that meets the design criteria, say a filter H(ω)
ª Take the inverse DTFT of this H (ω) to obtain h[n].
• This h[n] will be doubly infinite in length, and non-causal
ª Truncate using a window, say a rectangle, so that M+1 coefficients of h[n] are
retained, and all the others are discarded.
• We now have a finite length (order M) filter, ht[n], however, it is still non-causal
ª Shift the truncated h[n] to the right (i.e., delay) by M/2 samples, so that the first
sample now occurs at n=0.
• The resulting impulse response, ht[n − M/2], is a causal, stable FIR filter, which has an almost identical magnitude response and a phase factor of e^(−jωM/2) compared to the original filter, due to the delay introduced.
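The steps above can be sketched end-to-end (Python/NumPy instead of the course's Matlab; `fir_lowpass` is my own name, and `np.sinc` absorbs the n = M/2 special case since np.sinc(x) = sin(πx)/(πx)):

```python
import numpy as np

def fir_lowpass(M, wc):
    # windowed-sinc lowpass of (even) order M: truncate the ideal impulse
    # response to M+1 taps (rectangular window) and delay by M/2 to be causal
    m = np.arange(M + 1) - M / 2
    return (wc / np.pi) * np.sinc(wc * m / np.pi)  # = sin(wc*m)/(pi*m), wc/pi at m = 0

h = fir_lowpass(40, 0.4 * np.pi)
H = np.abs(np.fft.fft(h, 1024))       # magnitude response on a dense grid
assert abs(H[0] - 1) < 0.1            # ~unity gain at DC (up to Gibbs ripple)
assert H[512] < 0.05                  # small gain at w = pi (bin 512 of 1024)
```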
FIR (Lowpass)
Filter Design

Unrealizable:

h_LP[n] = sin(ωc n)/(πn),  −∞ < n < ∞

[Sketch: ideal HLP(e^jω), equal to 1 for |ω| ≤ ωc]

Realizable:

h_LP[n] = sin(ωc(n − M/2)) / (π(n − M/2)),  0 ≤ n ≤ M, n ≠ M/2
h_LP[n] = ωc/π,  n = M/2
FIR Highpass Design

[Sketches: ideal HLP(e^jω) and HHP(e^jω)]

H_HP(ω) = 1 − H_LP(ω)  ⇔  h_HP[n] = δ[n] − h_LP[n]

For the delayed (causal) version, the impulse moves to n = M/2:

h_HP[n] = −sin(ωc(n − M/2)) / (π(n − M/2)),  0 ≤ n ≤ M, n ≠ M/2
h_HP[n] = 1 − ωc/π,  n = M/2
FIR BPF/BSF Design

[Sketches: HBP(e^jω) obtained as HLP(e^jω) with ωc = ωc2 minus HLP(e^jω) with ωc = ωc1]

H_BP(ω) = H_LP(ω)|ωc=ωc2 − H_LP(ω)|ωc=ωc1  ⇔  h_BP[n] = h_LP[n]|ωc=ωc2 − h_LP[n]|ωc=ωc1

h_BP[n] = sin(ωc2(n − M/2))/(π(n − M/2)) − sin(ωc1(n − M/2))/(π(n − M/2)),  0 ≤ n ≤ M, n ≠ M/2
h_BP[n] = ωc2/π − ωc1/π,  n = M/2

Similarly,

H_BS(ω) = 1 − H_BP(ω)  ⇔  h_BS[n] = δ[n] − h_BP[n]
However…
What happened…?
Gibbs Phenomenon
 Truncating the impulse response of an ideal filter to obtain a realizable filter creates oscillatory behavior in the frequency domain.
ª The Gibbs Phenomenon
 We observe the following:
ª As M↑, the number of ripples ↑; however, the ripple widths ↓
ª The height of the largest ripples remains constant, regardless of the filter length
ª As M↑, the height of all other ripples ↓
ª The main lobe gets narrower as M↑; that is, the drop-off becomes sharper
ª Similar oscillatory behavior can be seen in all types of truncated filters

[Plot: magnitude responses of truncated lowpass filters for N = 20 and N = 60, showing the main lobe and the side lobes]
Gibbs Phenomenon
 Why is this happening?
 The Gibbs phenomenon is simply an artifact of the windowing operation.
ª Multiplying the ideal filter's impulse response with a rectangular window function is equivalent to convolving the underlying frequency response with a sinc:

h[n] = hI[n] · w[n]  ⇔  H(ω) = HI(ω) * W(ω)

ª However, we want H(ω) to be as close as possible to HI(ω), which is only possible if W(ω) = δ(ω) ⇔ w[n] = 1, an infinite window.
• We have conflicting requirements: on one hand, we want a narrow window, so that we have a short filter; on the other hand, we want the truncated filter to match the ideal filter's frequency response as closely as possible, which in turn requires an infinitely long window!
ª This convolution results in the oscillations, which are particularly dominant at the edges.
Gibbs Phenomenon

[Figure: Hd — ideal filter frequency response; Ψ — rectangular window frequency response; Ht — truncated filter's frequency response]
Gibbs Phenomenon
Demo

% FIR Lowpass filter
M=input('Enter the order of filter:');
wc=input('Enter the cutoff frequency in terms of radians:');

n=0:M;
h_LP=sin(wc*(n-M/2))./(pi*(n-M/2));
h_LP(ceil(M/2+1))=wc/pi;

subplot(211)
stem(n, h_LP)
axis([0 M min(h_LP) max(h_LP)])
title(['Impulse response of the ', num2str(M), 'th order filter']);
subplot(212)
H_LP=fft(h_LP, 1024);
w=linspace(-pi, pi, 1024);
plot(w/pi, abs(fftshift(H_LP)))
title(['Frequency response of the windowed ', num2str(M), 'th order filter']);
grid
axis([-1 1 0 max(abs(H_LP))])
Effect of Filter Length
FIR Filter Design
Using Windows
 Here’s what we want:
ª Quick drop off Æ Narrow transition band
• Narrow main lobe
• Increased stopband attenuation Conflicting requirements
ª Reduce the height of the side-lobe which causes the ripples
ª Reduce the Gibbs phenomenon (ringing effects, all ripples)
ª Minimize the order of the filter.
 The Gibbs phenomenon can be reduced (but not eliminated) by using a smoother window that gently tapers off to zero, rather than the brick-wall behavior of the rectangular window.
ª Several window functions are available, which usually trade-off main-lobe width
and stopband attenuation.
• Rectangular window has the narrowest main-lobe width, but poor side-lobe
attenuation.
• Tapered window causes the height of the sidelobes to diminish, with a corresponding
increase in the main lobe width resulting in a wider transition at the cutoff frequency.
Commonly Used Windows

[Figure: time-domain shapes and magnitude spectra of common windows. Going from the rectangular window toward smoother windows, the main lobe widens from narrow to widest, while the side-lobe attenuation improves from poor to excellent.]

Comparing Windows
Fixed Window Functions
 All windows shown so far are fixed window functions
ª The magnitude spectrum of each window is characterized by a main lobe centered at ω = 0, followed by a series of sidelobes with decreasing amplitudes
ª Parameters predicting the performance of a window in filter design are:
• Main lobe width (∆ML) and/or transition bandwidth (∆ω = ωs − ωp)
• Relative sidelobe level (Asl) / sidelobe attenuation (αs)
ª For a given window, both parameters are completely determined once the filter order M is set.

[Figure: desired response Hdes(ω) and truncated response Htr(ω), annotated with the transition bandwidth ∆ω, main-lobe width ∆ML, relative sidelobe level Asl (dB), and stopband attenuation αs (dB)]
Fixed Window Functions
 How to design:
ª Set ωc = (ωp + ωs)/2
ª Choose the window type based on the specified sidelobe attenuation (Asl) or minimum stopband attenuation (αs)
ª Choose M according to the transition bandwidth (∆ω) and/or mainlobe width (∆ML). Note that this is the only parameter that can be adjusted for fixed window functions. Once a window type and M are selected, so are Asl, αs, and ∆ML
• Ripple amplitudes cannot be custom designed.
ª Adjustable windows have a parameter that can be varied to trade off between main-lobe width and side-lobe attenuation.
Kaiser Window
 The most popular adjustable window:

w[n] = I0( β √(1 − ((n − M/2)/(M/2))²) ) / I0(β),  0 ≤ n ≤ M

where β is an adjustable parameter to trade off between the main lobe width and sidelobe attenuation, and I0(x) is the modified zeroth-order Bessel function of the first kind:

I0(x) = 1 + ∑_{k=1}^{∞} [ (x/2)^k / k! ]²

In practice, this infinite series can be computed with a finite number of terms for a desired accuracy. In general, 20 terms is adequate:

I0(x) ≅ 1 + ∑_{k=1}^{20} [ (x/2)^k / k! ]²
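The 20-term series is easy to check against NumPy's built-in `np.i0` (the names below are mine; tolerances are loose because `np.i0` is itself a polynomial approximation):

```python
import numpy as np

def i0_series(x, terms=20):
    # I0(x) = 1 + sum_k ((x/2)^k / k!)^2, truncated at 20 terms as on the slide
    total, term = 1.0, 1.0
    for k in range(1, terms + 1):
        term *= x / 2 / k          # term = (x/2)^k / k!
        total += term ** 2
    return total

for x in (0.5, 2.0, 6.0):
    assert abs(i0_series(x) - np.i0(x)) / np.i0(x) < 1e-5

# build the Kaiser window from the series and compare with np.kaiser
M, beta = 24, 3.3953
n = np.arange(M + 1)
w = np.array([i0_series(beta * np.sqrt(1 - ((k - M / 2) / (M / 2)) ** 2))
              for k in n]) / i0_series(beta)
assert np.allclose(w, np.kaiser(M + 1, beta), atol=1e-5)
```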
FIR Design Using
Kaiser Window
 Given the following:
ª ωp - passband edge frequency and ωs - stopband edge frequency
ª δp - peak ripple value in the passband and δs - peak ripple value in the stopband
 Calculate:
1. Minimum stopband attenuation in dB: αs = −20 log10(δs), or −20 log10( min{δs, δp} )
2. Normalized transition bandwidth: ∆ω = ωs − ωp
3. Window parameter:

β = 0.1102(αs − 8.7),  αs > 50 dB
β = 0.5842(αs − 21)^0.4 + 0.07886(αs − 21),  21 ≤ αs ≤ 50 dB
β = 0,  αs ≤ 21 dB

4. Filter length, M+1:

M + 1 = (αs − 7.95)/(2.285∆ω) + 1,  αs > 21      *Different in your book. This one is correct!
M + 1 = 5.79/∆ω,  αs < 21

5. Determine the corresponding Kaiser window: w[n] = I0( β √(1 − ((n − M/2)/(M/2))²) ) / I0(β),  0 ≤ n ≤ M
6. Obtain the filter by multiplying the ideal filter hI[n] with w[n]
Example
 Design an FIR filter with the following characteristics:
ª ωp = 0.3π, ωs = 0.5π, δs = δp = 0.01 ⇒ αs = 40 dB, ∆ω = 0.2π

β = 0.5842(19)^0.4 + 0.07886 × 19 = 3.3953

M + 1 = 32.05/(2.285 × 0.2π) + 1 ≈ 23.3 ⇒ choose M = 24, so that M/2 = 12 is an integer

w[n] = kaiser(M+1, β)  (from Matlab)

With ωc = (ωp + ωs)/2 = 0.4π:

h_LP[n] = sin(0.4π(n − 12))/(π(n − 12)),  0 ≤ n ≤ 24, n ≠ 12
h_LP[n] = 0.4,  n = 12

h_t[n] = h_LP[n] · w[n],  0 ≤ n ≤ 24
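The whole example can be reproduced numerically (Python/NumPy instead of Matlab's kaiser/fir1; the 0.03 tolerances are my own loose bounds around the ±0.01 ripple spec):

```python
import numpy as np

beta = 0.5842 * 19 ** 0.4 + 0.07886 * 19  # ~3.3953, as computed above
M = 24                                     # order (so M/2 = 12 is an integer)
wc = 0.4 * np.pi                           # (wp + ws)/2

m = np.arange(M + 1) - M / 2
h_ideal = (wc / np.pi) * np.sinc(wc * m / np.pi)  # delayed ideal LPF taps
h = h_ideal * np.kaiser(M + 1, beta)              # apply the Kaiser window

H = np.abs(np.fft.fft(h, 4096))
w = np.arange(4096) * 2 * np.pi / 4096
stop = H[(w >= 0.5 * np.pi) & (w <= np.pi)]   # stopband samples
passb = H[w <= 0.3 * np.pi]                   # passband samples
assert stop.max() < 0.03                  # ~40 dB stopband attenuation achieved
assert np.max(np.abs(passb - 1)) < 0.03   # passband ripple within spec
```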
Complete Cycle for FIR Filter
Design using Windows
 Depending on your specs, determine what kind of window you would
like to use.
ª For all window types, except Kaiser, once you choose the window, the only other
parameter to choose is filter length M.
• For Kaiser window, determine M and beta, based on the specs using the given
expressions.

 Compute the window coefficients w[n] for the chosen window.


 Compute filter coefficients (taps)
ª Determine the ideal impulse response hI[n] from the given equations for the type
of magnitude response you need (lowpass, highpass, etc.)
ª Multiply window and ideal filter coefficients to obtain the realizable filter
coefficients (also called taps or weights): h[n]=hI[n].w[n]
 Convolve your signal with the new filter coefficients y[n]=x[n]*h[n].
In Matlab
 The following functions create N-point windows for the
corresponding functions:
ª rectwin(N)
ª bartlett(N)
ª hamming(N)
ª kaiser(N)
ª hanning(N)
ª blackman(N)
 Try this: h=hamming(40); [H w]=freqz(h,1, 1024); plot(w, abs(H))
ª Compare for various windows. Also plot gain in dB
 The function wintool (window design and analysis tool) provides a GUI to custom design several window functions.
 The function fdatool (filter design and analysis tool) provides a GUI to custom design several types of filters from the given specs.
 The function sptool (signal processing tool) provides a GUI to custom design filters, view them, and apply them to custom created signals. It also provides a GUI for spectral analysis.
In Matlab: fir1
fir1 FIR filter design using the window method.
b = fir1(N,Wn) designs an N'th order lowpass FIR digital filter and returns the filter coefficients in length N+1
vector B. The cut-off frequency Wn must be between 0 < Wn < 1.0, with 1.0 corresponding to half the sample
rate. The filter B is real and has linear phase. The normalized gain of the filter at Wn is -6 dB.

b = fir1(N,Wn,'high') designs an N'th order highpass filter. You can also use B = fir1(N,Wn,'low') to design a
lowpass filter.

If Wn is a two-element vector, Wn = [W1 W2], FIR1 returns an order N bandpass filter with passband W1 < W <
W2. You can also specify b = fir1(N,Wn,'bandpass'). If Wn = [W1 W2], b = fir1(N,Wn,'stop') will design a
bandstop filter.
If Wn is a multi-element vector, Wn = [W1 W2 W3 W4 W5 ... WN], FIR1 returns an order N multiband filter with
bands 0 < W < W1, W1 < W < W2, ..., WN < W < 1.
b = fir1(N,Wn,'DC-1') makes the first band a passband.
b = fir1(N,Wn,'DC-0') makes the first band a stopband.
b = fir1(N,Wn,WIN) designs an N-th order FIR filter using the N+1 length vector WIN to window the impulse
response. If empty or omitted, FIR1 uses a Hamming window of length N+1. For a complete list of available
windows, see the help for the WINDOW function. If using a Kaiser window, use the following:
b = fir1(N,Wn,kaiser(N+1,beta))
In Matlab
 Here is the complete procedure:
ª Obtain the specs: Cutoff frequency, passband and stopband edge frequencies,
allowed maximum ripple, filter order.
• Note that you do not need filter order for Kaiser (to be determined), and you do not
need the edge frequencies and ripple amount for the others.
ª Design the filter using the fir1() command. Default window is Hamming. For
all window types –except Kaiser – provide filter order N and normalized cutoff
frequency (between 0 and 1)
• For Kaiser window, determine the beta and M manually from the given equations
before providing them to the fir1 command with kaiser as filter type.
• You can also use kaiserord() to estimate the filter order from the given specifications
• This gives you the “b” coefficients, or in other words, the impulse response h[n]
ª Use this filter with the filter() command as y=filter(b,a,x), where b are the
coefficients obtained from fir1(), a=1 since this is an FIR filter, x is the signal
to be filtered, and y is the filtered signal.
 OR, use sptool for the entire design cycle.
FIR Filter Design Through
Frequency Sampling
 An alternate FIR filter design based on using the inverse DFT of a
custom filter
Unlike the windowing technique, this method can design a filter with an
arbitrary frequency response – not just lowpass, highpass, etc.
ª The desired filter response is specified at N equally spaced points in the [0, 2π] interval. These constitute the magnitude spectra:

Hdes(ωk)|ωk=2πk/N = Hdes(2πk/N),  k = 0, 1, 2, …, N−1

ª A phase term is added to ensure linear phase, θ(ωk) = −ωk(N−1)/2, which yields the DFT coefficients of the desired filter, Hdes[k]:

Hdes[k] = Hdes(2πk/N) · e^(−j(2πk/N)·(N−1)/2),  k = 0, 1, 2, …, N−1

ª The inverse DFT of the filter is taken to obtain the impulse response of the desired filter, h[n].
Frequency Sampling Design

[Figure: desired magnitude response samples on the [0, 2π] grid and the interpolated response of the designed filter]
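The frequency-sampling procedure above can be sketched with an inverse FFT (Python; N and the crude lowpass spec are my own choices):

```python
import numpy as np

N = 33                                  # number of frequency samples = filter length
k = np.arange(N)
Hmag = np.where(np.minimum(k, N - k) <= 6, 1.0, 0.0)  # lowpass spec on the DFT grid
Hdes = Hmag * np.exp(-1j * 2 * np.pi * k * (N - 1) / (2 * N))  # linear-phase term
h = np.real(np.fft.ifft(Hdes))          # impulse response via inverse DFT

# by construction, the DFT of h hits the specified samples exactly...
assert np.allclose(np.abs(np.fft.fft(h)), Hmag, atol=1e-9)
# ...and the taps are symmetric, i.e., the filter has linear phase
assert np.allclose(h, h[::-1], atol=1e-9)
```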
In Matlab: fir2

fir2 FIR arbitrary shape filter design using the frequency sampling method.

B = fir2(N,F,A,NPT,window) designs an Nth order FIR digital filter with the frequency response specified
by vectors F (frequencies) and A (amplitudes), and returns the filter coefficients in length N+1 vector B. The
desired frequency response is interpolated on an evenly spaced grid of length NPT points (512, by default).
The filter coefficients are then obtained by applying inverse DFT and multiplying by the window (default,
Hamming). Vectors F and A specify the frequency and magnitude breakpoints for the filter such that plot(F,A)
would show a plot of the desired frequency response. The frequencies in F must be between 0.0 < F < 1.0, with
1.0 corresponding to half the sample rate. They must be in increasing order and start with 0.0 and end with 1.0.

The filter B is real, and has linear phase, i.e., symmetric coefficients obeying B(k) = B(N+2-k), k = 1,2,...,N+1.

By default FIR2 windows the impulse response with a Hamming window. Other available windows, including
Boxcar, Hann, Bartlett, Blackman, Kaiser and Chebwin can be specified with an optional trailing argument.
For example, B = fir2(N,F,A,bartlett(N+1)) uses a Bartlett window.

For filters with a gain other than zero at Fs/2, e.g., highpass and bandstop filters, N must be even. Otherwise,
N will be incremented by one. In this case the window length should be specified as N+2.
Example

freq=[0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1];
amp=[0 0.5 1 0.8 0.6 0 0.5 0.5 1 0 0];

b=fir2(100, freq, amp, 1024);

subplot(211)
[H w]=freqz(b, 1, 1024);
plot(w/pi, abs(H)); hold on
plot(freq, amp, 'r*')
grid
xlabel('Frequency, \omega/\pi')
title('Magnitude response of the filter and the corresponding points')
subplot(212)
plot(w/pi, unwrap(angle(H)));
xlabel('Frequency, \omega/\pi')
title('Phase response of the designed filter')
grid

You will need to determine the filter order by trial and error. You may need higher
orders if your specified points require a sharp transition
Other FIR Design Methods
 While the window and frequency sampling methods are simple and powerful, they do not allow precise control of the critical frequencies, nor do they provide equiripple behavior in the passband and stopbands
 Several computer aided design techniques exist that allow optimal
control of all bands and ripples
 The Parks-McClellan algorithm (Remez exchange algorithm)
ª In Matlab, read the help file for the remez() function.
On Friday
 Bring your own .wav file along with an earphone…!
Lecture 23
FIR Filter Design Review
&
IIR Filter Design
Today in DSP
 FIR Filter Design Review
ª Filter specification
ª FIR design using a windowing function & Gibbs phenomenon
ª FIR filter design using Kaiser window
ª Matlab implementation using fir1()
ª FIR filter design using frequency sampling, Matlab implementation using fir2()
ª A demo
 IIR Filter Design
ª Bilinear transformation for analog ÅÆ digital domains
ª Analog filter design using Butterworth approximation
ª Matlab implementation
Filter Specifications
• |H(e^jω)| ≈ 1, with an error ±δp in the passband, i.e., 1 − δp ≤ |H(ω)| ≤ 1 + δp, |ω| ≤ ωp
• |H(e^jω)| ≈ 0, with an error ±δs in the stopband, i.e., |H(ω)| ≤ δs, ωs ≤ |ω| ≤ π

ωp - passband edge frequency
ωs - stopband edge frequency
δp - peak ripple value in the passband
δs - peak ripple value in the stopband

ÂFilter specifications are often given in decibels, in terms of loss of gain, with peak passband and minimum stopband ripple:

G(ω) = −20 log10 |H(e^jω)|,  αp = −20 log10(1 − δp),  αs = −20 log10(δs)
FIR Filter Design

 Using windows:
ª Start with an ideal filter that meets the design criteria, say a filter H(ω)
ª Take the inverse DTFT of this H (ω) to obtain h[n].
• This h[n] will be doubly infinite in length, and non-causal
ª Truncate using a window, say a rectangle, so that M+1 coefficients of h[n] are
retained, and all the others are discarded.
• We now have a finite length (order M) filter, ht[n], however, it is still non-causal
ª Shift the truncated h[n] to the right (i.e., delay) by M/2 samples, so that the first
sample now occurs at n=0.
• The resulting impulse response, ht[n − M/2], is a causal, stable FIR filter, which has an almost identical magnitude response and a phase factor of e^(−jωM/2) compared to the original filter, due to the delay introduced.
Other FIR Filter Design
(Figure: ideal lowpass magnitude response HLP(e^jω): 1 for |ω| ≤ ωc, 0 for ωc < |ω| ≤ π)

Ideal lowpass filter:

  hLP[n] = sin(ωc·n)/(πn),  −∞ < n < ∞  (= ωc/π at n = 0)

Truncated and shifted (causal, order M) lowpass filter:

  hLP[n] = sin(ωc(n − M/2))/(π(n − M/2)),  0 ≤ n ≤ M, n ≠ M/2;   hLP[M/2] = ωc/π

HHP(ω) = 1 − HLP(ω)  ⇔  hHP[n] = δ[n] − hLP[n]
HBS(ω) = 1 − HBP(ω)  ⇔  hBS[n] = δ[n] − hBP[n]

Highpass:

  hHP[n] = −sin(ωc(n − M/2))/(π(n − M/2)),  n ≠ M/2;   hHP[M/2] = 1 − ωc/π

Bandpass (band edges ωc1 < ωc2):

  hBP[n] = sin(ωc2(n − M/2))/(π(n − M/2)) − sin(ωc1(n − M/2))/(π(n − M/2)),  n ≠ M/2
  hBP[M/2] = ωc2/π − ωc1/π

Bandstop:

  hBS[n] = −sin(ωc2(n − M/2))/(π(n − M/2)) + sin(ωc1(n − M/2))/(π(n − M/2)),  n ≠ M/2
  hBS[M/2] = 1 − ωc2/π + ωc1/π
However…

Gibbs Phenomenon
 Why is this happening?
 Multiplying the ideal filter’s impulse response with a rectangular window function is
equivalent to convolving the underlying frequency response with a sinc
  h[n] = hI[n]·w[n]  ⇔  H(ω) = HI(ω) ∗ W(ω)

ª We have conflicting requirements: on one hand, we want a narrow window, so that we have
a short filter; on the other hand, we want the truncated filter to match the ideal
filter’s frequency response as closely as possible, which in turn requires an infinitely long window!
ª This convolution results in the oscillations, particularly dominant at the edges.
 Gibbs phenomenon can be reduced (but not eliminated) by using a smoother
window that gently tapers off to zero, rather than the brick-wall behavior of the
rectangular window.
ª Several window functions are available, which usually trade-off main-lobe width and
stopband attenuation.
• Rectangular window has the narrowest main-lobe width, but poor side-lobe attenuation.
• Tapered window causes the height of the sidelobes to diminish, with a corresponding increase in
the main lobe width resulting in a wider transition at the cutoff frequency.
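The size of the overshoot can be checked numerically. The sketch below (Python/NumPy rather than Matlab, purely as an illustration) truncates the ideal lowpass impulse response with a rectangular window at two different lengths; the peak passband gain stays near 1.09 in both cases, i.e., making the filter longer squeezes the ripples toward the band edge but does not shrink them.

```python
import numpy as np

def truncated_lp(M, wc):
    """Ideal lowpass impulse response, truncated to M+1 taps and shifted by M/2
    (rectangular window): h[n] = sin(wc*(n - M/2)) / (pi*(n - M/2))."""
    n = np.arange(M + 1)
    return (wc / np.pi) * np.sinc((wc / np.pi) * (n - M / 2))

def peak_gain(h, npts=4096):
    """Peak of |H(w)| on a dense frequency grid via a zero-padded FFT."""
    return np.abs(np.fft.fft(h, npts)).max()

p50 = peak_gain(truncated_lp(50, 0.4 * np.pi))
p200 = peak_gain(truncated_lp(200, 0.4 * np.pi))
# both peaks sit near 1.09: the Gibbs overshoot does not go away with length
```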
Commonly Used Windows
Fixed windows: once the filter length is chosen, the main lobe and
side lobe properties are determined!

The Kaiser window, in contrast, has an additional parameter, beta, to
adjust the stopband attenuation.
Commonly Used Windows
(Figure: magnitude spectra of the common windows. Moving from the rectangular window
toward the Blackman window, the main lobe widens while the side-lobe attenuation improves:)
• Rectangular – narrowest main lobe, poorest side-lobe attenuation
• Bartlett – wider main lobe, poor side-lobe attenuation
• Hann – wide main lobe, good side-lobe attenuation
• Hamming – wide main lobe, very good side-lobe attenuation
• Blackman – widest main lobe, excellent side-lobe attenuation
Kaiser Window
 The most popular adjustable window:

  w[n] = I0(β·√(1 − ((n − M/2)/(M/2))²)) / I0(β),  0 ≤ n ≤ M

where β is an adjustable parameter to trade off between the main lobe
width and sidelobe attenuation, and I0(x) is the modified zeroth-order
Bessel function of the first kind:

  I0(x) = 1 + Σk=1..∞ [ (x/2)^k / k! ]²

In practice, this infinite series can be computed with a finite number of terms to a
desired accuracy. In general, 20 terms is adequate:

  I0(x) ≅ 1 + Σk=1..20 [ (x/2)^k / k! ]²
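The truncated series is straightforward to evaluate; a small Python sketch (the name i0_series is made up), checked against tabulated values of I0:

```python
import math

def i0_series(x, terms=20):
    """Modified zeroth-order Bessel function of the first kind, I0(x),
    via the truncated power series from the slide."""
    return 1.0 + sum(((x / 2.0) ** k / math.factorial(k)) ** 2
                     for k in range(1, terms + 1))

# 20 terms already agree with the reference value I0(1) = 1.26606588...
```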
FIR Design Using
Kaiser Window
 Given the following:
ª ωp - passband edge frequency and ωs - stopband edge frequency
ª δp - peak ripple value in the passband and δs - peak ripple value in the stopband
 Calculate:
1. Minimum stopband attenuation in dB: αs = −20 log10(δs), or −20 log10(min{δs, δp})
2. Normalized transition bandwidth: Δω = ωs − ωp
3. Window parameter:
     β = 0.1102(αs − 8.7),                          αs > 50 dB *
     β = 0.5842(αs − 21)^0.4 + 0.07886(αs − 21),    21 ≤ αs ≤ 50 dB
     β = 0,                                         αs < 21 dB
4. Filter length, M+1:
     M + 1 = (αs − 7.95)/(2.285Δω) + 1,  αs > 21    *Different in your book.
     M + 1 = 5.79/Δω,                    αs < 21     This one is correct!
5. Determine the corresponding Kaiser window:
     w[n] = I0(β·√(1 − ((n − M/2)/(M/2))²)) / I0(β),  0 ≤ n ≤ M
6. Obtain the filter by multiplying the ideal filter hI[n] with w[n]
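The design equations translate directly into code. A Python sketch (kaiser_params is a made-up helper name), evaluated for 60 dB of stopband attenuation and a transition band of 0.2π rad/sample; the ceil in step 4 rounds the order up so the specs are met:

```python
import math

def kaiser_params(delta_s, dw):
    """Kaiser beta and filter order M from the stopband ripple delta_s and
    the normalized transition width dw = ws - wp (rad/sample)."""
    a = -20.0 * math.log10(delta_s)                 # attenuation alpha_s in dB
    if a > 50:
        beta = 0.1102 * (a - 8.7)
    elif a >= 21:
        beta = 0.5842 * (a - 21) ** 0.4 + 0.07886 * (a - 21)
    else:
        beta = 0.0
    if a > 21:
        M = math.ceil((a - 7.95) / (2.285 * dw))    # filter length is M+1
    else:
        M = math.ceil(5.79 / dw)
    return beta, M

beta, M = kaiser_params(1e-3, 0.2 * math.pi)        # 60 dB, transition 0.2*pi
```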
Complete Cycle for FIR Filter
Design using Windows
 Depending on your specs, determine what kind of window you would
like to use.
ª For all window types, except Kaiser, once you choose the window, the only other
parameter to choose is filter length M.
• For Kaiser window, determine M and beta, based on the specs, using the given
expressions.

 Compute the window coefficients w[n] for the chosen window.


 Compute filter coefficients (taps)
ª Determine the ideal impulse response hI[n] from the given equations for the type
of magnitude response you need (lowpass, highpass, etc.)
ª Multiply window and ideal filter coefficients to obtain the realizable filter
coefficients (also called taps or weights): h[n]=hI[n].w[n]
 Convolve your signal with the new filter coefficients y[n]=x[n]*h[n].
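The cycle above can be sketched in a few lines; here in Python/NumPy as an illustration (order M = 50, cutoff 0.4π, and the Hamming window are arbitrary example choices):

```python
import numpy as np

M, wc = 50, 0.4 * np.pi                    # filter order and cutoff (rad/sample)
n = np.arange(M + 1)
# step 1: shifted ideal lowpass impulse response hI[n]
h_ideal = (wc / np.pi) * np.sinc((wc / np.pi) * (n - M / 2))
# step 2-3: multiply by the window to get the realizable taps h[n]
h = h_ideal * np.hamming(M + 1)
# the taps come out symmetric (linear phase), with DC gain close to 1
```

The last step is then y = np.convolve(x, h), the counterpart of filter(h, 1, x) in Matlab; fir1 performs these same steps and, by default, also normalizes the passband gain.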
In Matlab
 The following functions create N-point windows for the
corresponding functions:
ª rectwin(N) ªbartlett(N)
ª hamming(N) ªkaiser(N, beta)
ª hanning(N) ªblackman(N)
fir1 FIR filter design using the window method.
b = fir1(N,Wn) designs an N'th order lowpass FIR digital filter and returns the filter coefficients in
length N+1 vector B. The cut-off frequency Wn must be between 0 < Wn < 1.0, with 1.0
corresponding to half the sample rate. b = fir1(N,Wn,'high') designs an N'th order highpass filter. You
can also use B = fir1(N,Wn,'low') to design a lowpass filter.

If Wn is a two-element vector, Wn = [W1 W2], FIR1 returns an order N bandpass filter with passband
W1 < W < W2. You can also specify b = fir1(N,Wn,'bandpass'). If Wn = [W1 W2], b =
fir1(N,Wn,'stop') will design a bandstop filter. b = fir1(N,Wn,WIN) designs an N-th order FIR filter
using the N+1 length vector WIN to window the impulse response. If empty or omitted, FIR1 uses a
Hamming window of length N+1.
Use this filter with the filter() command as y=filter(b,a,x), where b are the coefficients
obtained from fir1(), a=1 since this is an FIR filter, x is the signal to be filtered, and y is the
filtered signal.
FIR Filter Design Through
Frequency Sampling
 An alternate FIR filter design based on using the inverse DFT of a
custom filter
 Unlike the window technique, this method can design a filter with an
arbitrary frequency response – not just lowpass, highpass, etc.
ª The desired filter response is specified at N equally spaced points in the [0 2π]
interval. These constitute the magnitude spectra:
 2πk 
H des (ωk ) ω = 2πk N = H des   , k = 0,1,2, L , N − 1
k  N 
ª A phase term added to ensure linear phase: θ(ω)=-jω(N-1)/2 to obtain the DFT
coefficients of the desired filter Hdes[k]
2πk  N −1 
 2πk  − j N  2 
H des [k ] = H des  e , k = 0,1,2, L , N − 1
 N 

ª Inverse DFT of the filter is taken to obtain the impulse response of the desired
filter, h[n].
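For an odd filter length the recipe above reduces to a few lines; a Python/NumPy sketch (the crude lowpass magnitude samples are just an illustrative spec):

```python
import numpy as np

N = 33                                   # odd length -> integer delay (N-1)/2
k = np.arange(N)
# desired magnitude samples on [0, 2*pi); symmetric (A[k] = A[N-k]) so h is real
A = np.where((k <= 5) | (k >= N - 5), 1.0, 0.0)
# attach the linear-phase term e^{-j(2*pi*k/N)(N-1)/2}
Hdes = A * np.exp(-1j * (2 * np.pi * k / N) * (N - 1) / 2)
h = np.fft.ifft(Hdes).real               # impulse response of the desired filter
# by construction, the DFT of h interpolates the specified samples exactly
```

At the N sampled frequencies the response matches the spec exactly; between the samples it is interpolated, which is why sharp transitions may need higher orders.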
In Matlab: fir2

fir2 FIR arbitrary shape filter design using the frequency sampling method.

B = fir2(N,F,A,NPT,window) designs an Nth order FIR digital filter with the frequency response specified
by vectors F (frequencies) and A (amplitudes), and returns the filter coefficients in length N+1 vector B. The
desired frequency response is interpolated on an evenly spaced grid of length NPT points (512, by default).
The filter coefficients are then obtained by applying inverse DFT and multiplying by the window (default,
Hamming). Vectors F and A specify the frequency and magnitude breakpoints for the filter such that plot(F,A)
would show a plot of the desired frequency response. The frequencies in F must be between 0.0 < F < 1.0, with
1.0 corresponding to half the sample rate. They must be in increasing order and start with 0.0 and end with 1.0.

The filter B is real, and has linear phase, i.e., symmetric coefficients obeying B(k) = B(N+2-k), k = 1,2,...,N+1.

By default FIR2 windows the impulse response with a Hamming window. Other available windows, including
Boxcar, Hann, Bartlett, Blackman, Kaiser and Chebwin can be specified with an optional trailing argument.
For example, B = fir2(N,F,A,bartlett(N+1)) uses a Bartlett window.

For filters with a gain other than zero at Fs/2, e.g., highpass and bandstop filters, N must be even. Otherwise,
N will be incremented by one. In this case the window length should be specified as N+2.
Example
freq=[0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1]
amp=[0 0.5 1 0.8 0.6 0 0.5 0.5 1 0 0]

b=fir2(100, freq, amp, 1024);

subplot(211)
[H w]=freqz(b, 1, 1024);
plot(w/pi, abs(H)); hold on
plot(freq, amp, 'r*')
grid
xlabel('Frequency, \omega/\pi')
title(' Magnitude response of the filter and the
corresponding points')
subplot(212)
plot(w/pi, unwrap(angle(H)));
xlabel('Frequency, \omega/\pi')
title(' Phase response of the designed filter')
grid

You will need to determine the filter order by trial and error. You may need higher
orders if your specified points require a sharp transition
Other FIR Design Methods
 While the window and frequency sampling methods are simple and powerful,
they do not allow precise control of the critical frequencies, nor do
they provide equiripple behavior in the passband and stopbands
 Several computer aided design techniques exist that allow optimal
control of all bands and ripples
 The Parks-McClellan algorithm (Remez exchange algorithm)
ª In Matlab, read the help file for the remez() function.
FIR Filter Design Example

Matlab Demo
audiofilter2.m
IIR Filter Design
 Major disadvantage of the FIR filter: Long filter lengths
ª Order of an FIR filter is usually much higher than the order of an equivalent IIR
filter meeting the same specifications Î higher computational complexity
ª IIR filters are therefore preferred for many practical applications.
ª Two potential concerns of IIR filters must be addressed however:
• linear phase and stability

 IIR filter design:


1. Convert the digital filter specifications into an analog prototype lowpass filter
specifications
2. Determine the analog lowpass filter transfer function |Ha(Ω)| and the corresponding Ha(s)
3. Transform Ha(s) into the desired digital transfer function H(z)
• Analog approximation techniques are highly advanced
• They usually yield closed-form solutions
• Extensive tables are available for analog filter design
• Many applications require digital simulation of analog systems
Bilinear Transformation
 In order to design an analog filter, we must first convert the digital filter
characteristics to equivalent analog filter characteristics:
 We need a mapping between the s-plane, where analog filters are defined, and the z-
plane, where the digital filters are designed
 The bilinear and inverse bilinear transformations:

  s = (2/Ts)·(1 − z⁻¹)/(1 + z⁻¹)        z = (1 + (Ts/2)s)/(1 − (Ts/2)s)

 Each maps a point in one plane into a unique point in the other plane.
 The analog to digital filter transformation is then:

  H(z) = Ha(s)|s = (2/Ts)·(1 − z⁻¹)/(1 + z⁻¹)
Analog ÅÆ Digital
Frequency
 The parameter Ts often does not play a role in the design, and therefore Ts=2 is
chosen for convenience. Then we have

  s = (1 − z⁻¹)/(1 + z⁻¹)        z = (1 + s)/(1 − s)

 Since in the s-plane s = σ + jΩ Î

  z = [(1 + σ) + jΩ] / [(1 − σ) − jΩ]  ⇒  |z|² = [(1 + σ)² + Ω²] / [(1 − σ)² + Ω²]

  σ = 0 → s = jΩ → |z| = 1
  σ < 0 → |z| < 1
  σ > 0 → |z| > 1
Analog ÅÆ Digital
Frequency

 The left half of the s-plane corresponds to inside the unit circle in the z-plane
 The jΩ axis corresponds to the unit circle
 The stability requirement of the analog filters carry to digital filters:
ª Analog: The poles of the filter frequency response must be on the left half plane
ª Digital: The poles of the filter frequency response must be inside the unit circle, i.e., the
ROC must include the unit circle.
Effect of Bilinear
Transformation
 Since the frequency response is defined on the unit circle, z = e^jω ÍÎ s = jΩ:

  s = (2/Ts)·(1 − z⁻¹)/(1 + z⁻¹)

  jΩ = (2/Ts)·(1 − e^−jω)/(1 + e^−jω)

  Ω = (2/Ts)·tan(ω/2)  ⇔  ω = 2·tan⁻¹(ΩTs/2)

This mapping is (highly) nonlinear
Î Frequency warping
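The warping relation is what "prewarping" implements: pick the analog edge frequencies so that the bilinear transform lands them exactly on the desired digital edges. A quick Python check with Ts = 2 and the edge frequencies used in the design example later in these notes:

```python
import math

def prewarp(w, Ts=2.0):
    """Analog edge Omega corresponding to digital edge w under the bilinear map."""
    return (2.0 / Ts) * math.tan(w / 2.0)

Wp = prewarp(0.4 * math.pi)   # about 0.7265 rad/s
Ws = prewarp(0.6 * math.pi)   # about 1.3764 rad/s
```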
Bilinear
Transformation
 Steps in the design of a digital filter -
ª Prewarp ωp, ωs to find their analog equivalents Ωp, Ωs
ª Design the analog filter Ha(s)
ª Design the digital filter H(z) by applying bilinear transformation to Ha(s)
 Note the following, however:
ª Transformation can be used to preserve the edge frequencies, however, the shape
of the magnitude response will not necessarily be the same in analog and digital
cases, due to nonlinear mapping
ª Transformation does not preserve phase response of analog filter
ª The distortion caused by the nonlinearity is at the high frequencies. Therefore, this
may not be a problem for lowpass filters. It may be too severe for certain high pass
filters.
Analog Filter Design
 We know
ª how to prewarp the frequencies to convert analog specs into digital specs.
ª how to transform an analog filter into a digital filter through the bilinear
transformation
ª But, how do we design the analog filter?
 Several approaches and prototypes:
ª Butterworth filter – maximally flat
ª Chebychev (type I and type II) filters – Equiripple in passband or stopband
ª Elliptic filter – Sharper transition band but nonlinear phase and nonequiripple
ª Bessel filter – Linear phase in passband, at the cost of wide transition band
 All of these filters are defined for a lowpass characteristic
ª Spectral transformations are then used to convert the lowpass design into highpass,
bandpass or bandstop.
The Butterworth
Approximation
 The magnitude-squared response of an Nth order analog lowpass Butterworth filter:

  |H(Ω)|² = 1/(1 + (Ω/Ωc)^2N)  ⇔  |H(Ω)| = 1/√(1 + (Ω/Ωc)^2N)

ª Ωc is the 3-dB cutoff frequency (20log|H(Ωc)| = −3), N is the filter order
ª The most interesting property of this function is that its first 2N−1 derivatives
are zero at Ω = 0 Î the function is as flat as possible, without being a constant.
ª The Butterworth LPF is therefore said to have a maximally-flat magnitude at Ω = 0.
(Figure: |H(Ω)| for increasing N; the passband stays above 1−δp up to Ωp and the
stopband falls below δs beyond Ωs, with a sharper transition as N increases.)
Butterworth Filter
Frequency Response
 The necessary order to meet passband and stopband specs can be obtained as follows. Requiring

  |H(Ωp)|² = (1 − δp)²,   |H(Ωs)|² = δs²

gives

  (Ωp/Ωc)^2N = 1/(1 − δp)² − 1,   (Ωs/Ωc)^2N = 1/δs² − 1

so that

  N = (1/2) · [ log(1/(1 − δp)² − 1) − log(1/δs² − 1) ] / log(Ωp/Ωs)

 The 3-dB cut-off frequency is then:

  Ωc = Ωp / [1/(1 − δp)² − 1]^(1/2N)
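Plugging the worked example's specs (δp = 0.2057, δs = 0.0032, Ωp = 0.7265, Ωs = 1.3764) into the order formula gives N ≈ 9.4, so a 10th order filter is required. A Python sketch of the computation (butterworth_order is a made-up helper name):

```python
import math

def butterworth_order(dp, ds, Wp, Ws):
    """Raw (non-integer) Butterworth order from the ripple/edge specs."""
    A = 1.0 / (1.0 - dp) ** 2 - 1.0      # passband constraint
    B = 1.0 / ds ** 2 - 1.0              # stopband constraint
    return 0.5 * (math.log10(A) - math.log10(B)) / math.log10(Wp / Ws)

N_raw = butterworth_order(0.2057, 0.0032, 0.7265, 1.3764)
N = math.ceil(N_raw)                     # round up to the next integer order
```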
Butterworth filter
System Response
 We now know the frequency response of the Butterworth filter, and
how to obtain the filter order N and the cutoff frequency Ωc from the
passband and stopband edge frequencies
 In order to design the filter, we actually need the transfer function
H(s) of the filter:

  H(s)·H(−s)|s=jΩ = |H(Ω)|² = 1 / (1 + (s/(jΩc))^2N)

ª This transfer function has 2N roots (poles), located at equal distances from each
other around a circle of radius Ωc:

  sk = Ωc · e^(jπ(2k+1)/(2N)) · e^(jπ/2),  k = 0, 1, …, 2N−1
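The pole formula can be sanity-checked against SciPy's normalized Butterworth prototype: of the 2N roots of H(s)H(−s), keeping only those in the left half plane reproduces the poles that scipy.signal.buttap returns (shown here for N = 4 and Ωc = 1, as an illustration):

```python
import numpy as np
from scipy import signal

N, Wc = 4, 1.0
k = np.arange(2 * N)
# all 2N roots of H(s)H(-s), equally spaced on a circle of radius Wc
sk = Wc * np.exp(1j * np.pi * (2 * k + 1) / (2 * N)) * np.exp(1j * np.pi / 2)
stable = sk[sk.real < 0]                 # H(s) keeps the left-half-plane poles
_, p_ref, _ = signal.buttap(N)           # scipy's normalized (Wc = 1) prototype
```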
Designing a Digital LPF
Using Butterworth Appr.
1. Prewarp ωp, ωs to find their analog equivalents Ωp, Ωs:

     Ω = (2/Ts)·tan(ω/2)

2. Design the analog filter
   a) From δp, δs, Ωp and Ωs obtain the order of the filter N:

     N = (1/2) · [ log(1/(1 − δp)² − 1) − log(1/δs² − 1) ] / log(Ωp/Ωs)

   b) Use N, δp, and Ωp to calculate the 3dB cutoff frequency Ωc:

     Ωc = Ωp / [1/(1 − δp)² − 1]^(1/2N)

   c) Determine the corresponding H(s) and its poles
3. Apply bilinear transformation to obtain H(z):

     z = (1 + (Ts/2)s) / (1 − (Ts/2)s)
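The three steps above, written out in Python/SciPy as a parallel to the Matlab example that follows (buttord, butter and bilinear all have direct scipy.signal counterparts):

```python
import numpy as np
from scipy import signal

# specs: wp = 0.4*pi, ws = 0.6*pi, 2 dB passband loss, 50 dB stopband attenuation
Wp = np.tan(0.4 * np.pi / 2)             # step 1: prewarp (Ts = 2)
Ws = np.tan(0.6 * np.pi / 2)
N, Wn = signal.buttord(Wp, Ws, 2, 50, analog=True)   # step 2a-b: order and Wc
b, a = signal.butter(N, Wn, analog=True)             # step 2c: analog H(s)
bd, ad = signal.bilinear(b, a, fs=0.5)               # step 3: H(z); fs = 1/Ts
w, H = signal.freqz(bd, ad, worN=512)                # digital frequency response
```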
In Matlab
 Yes, you guessed it right, Matlab has several functions:
buttord Butterworth filter order selection.
[N, Wn] = buttord(Wp, Ws, Rp, Rs) returns the order N of the lowest order digital
Butterworth filter that loses no more than Rp dB in the passband and has at least Rs dB of
attenuation in the stopband. Wp and Ws are the passband and stopband edge frequencies,
normalized from 0 to 1 (where 1 corresponds to pi radians/sample). buttord()also
returns Wn, the Butterworth natural frequency (or, the "3 dB frequency") to use with
butter()to achieve the specs.

[N, Wn] = buttord(Wp, Ws, Rp, Rs, 's') does the computation for an analog filter, in
which case Wp and Ws are in radians/second. When Rp is chosen as 3 dB, the Wn in
butter() is equal to Wp in buttord().
In Matlab
butter() Butterworth digital and analog filter design.

[B,A] = butter(N,Wn) designs an Nth order lowpass digital Butterworth filter and returns the filter
coefficients in length N+1 vectors B (numerator) and A (denominator). The coefficients are listed in
descending powers of z. The cutoff frequency Wn must be 0.0 < Wn < 1.0, with 1.0 corresponding
to half the sample rate. If Wn is a two-element vector, Wn = [W1 W2], butter() returns an order
2N bandpass filter with passband W1 < W < W2. [B,A] = butter(N,Wn,'high') designs a
highpass filter.

[B,A] = butter(N,Wn,'stop') is a bandstop filter if Wn = [W1 W2].

When used with three left-hand arguments, as in [Z,P,K] = butter(...), the zeros and poles are
returned in length N column vectors Z and P, and the gain in scalar K.

butter(N,Wn,'s'), butter(N,Wn,'high','s') and butter (N,Wn,'stop','s') design analog


Butterworth filters. In this case, Wn is in [rad/s] and it can be greater than 1.0.
In Matlab

bilinear() Bilinear transformation with optional frequency prewarping.


[Zd,Pd,Kd] = bilinear(Z,P,K,Fs) converts the s-domain transfer function specified by Z, P, and K to a z-
transform discrete equivalent obtained from the bilinear transformation:
H(z) = H(s)|s = 2*Fs*(z-1)/(z+1)
where column vectors Z and P specify the zeros and poles, scalar K specifies the gain, and Fs is the sampling
frequency in Hz.

[NUMd,DENd] = bilinear (NUM,DEN,Fs), where NUM and DEN are row vectors containing
numerator and denominator transfer function coefficients, NUM(s)/DEN(s), in descending powers of
s, transforms to z-transform coefficients NUMd(z)/DENd(z).

Each form of bilinear() accepts an optional additional input argument that specifies prewarping. For
example, [Zd,Pd,Kd] = bilinear(Z,P,K,Fs,Fp) applies prewarping before the bilinear transformation so
that the frequency responses before and after mapping match exactly at frequency point Fp (match point Fp is
specified in Hz).
Example
 Design an IIR lowpass filter using the Butterworth approximation that
meets the following specs: fp = 2kHz, fs = 3kHz, αp<2dB, αs>50dB, at a
sampling rate of 10kHz.
ª ωp = 0.4π, ωs = 0.6π, δp = 0.2057, δs = 0.0032 Î Ωp = 0.7265, Ωs = 1.3764

[N Wn]=buttord(0.7264, 1.3764, 2, 50, 's')


[b a]=butter(N, Wn, 's');
[H W]=freqs(b,a);
plot(W, abs(H));
grid
xlabel('Frequency, rad/s')
title('Analog Filter H(\Omega)')
Example

[N Wn]=buttord(0.7264, 1.3764, 2, 50, 's')


[b a]=butter(N, Wn, 's');
[H W]=freqs(b,a);
subplot(311); plot(W, abs(H)); grid
xlabel('Frequency, rad/s')
title('Analog Filter H(\Omega)')

%If we were to design this completely in digital domain

[Nd, Wnd]=buttord(0.4, 0.6, 2, 50);


[bd ad]=butter(Nd, Wnd);
[HD WD]=freqz(bd,ad);
subplot(312); plot(WD/pi, abs(HD)); grid
xlabel('Frequency, rad/\pi')
title('Digital Filter H(\omega)')

% To convert analog into digital


[bdd add]=bilinear(b, a, 0.5); %Note that we took Fs=0.5
[HDD WDD]=freqz(bdd,add);
subplot(313); plot(WDD/pi, abs(HDD)); grid
xlabel('Frequency, rad/\pi')
title('Digital Filter Through Bilinear Trans. H(\omega)')
On Wednesday
 Other lowpass approximations
ª Chebychev filters
ª Elliptic filters
ª Bessel filters
 Spectral transformations
ª Analog to analog spectral transformations
• Lowpass to highpass
• Lowpass to bandpass
• Lowpass to bandstop
• Lowpass to lowpass
ª Digital to digital spectral transformations
• Lowpass to highpass
• Lowpass to bandpass
• Lowpass to bandstop
• Lowpass to lowpass
Lecture 24
IIR Filter
Design

Today in DSP
 IIR filter design
 Bilinear transformation for analog ÅÆ digital domains
 Analog IIR filter design
ª Butterworth approximation and Matlab implementation
ª Chebyshev approximation and Matlab implementation
ª Elliptic filters
ª Bessel filters
 Spectral transformations
ª Analog – to – analog transformations
ª Digital – to – digital transformations
 Direct IIR filter design
IIR Filter Design
 Major disadvantage of the FIR filter: Long filter lengths
ª IIR filters yield much shorter filters for the same specs Î computationally efficient.
ª Two potential concerns of IIR filters must be addressed, however: linear phase and stability
 IIR filter design:
1. Convert the digital filter specifications into an analog prototype lowpass filter specifications
2. Determine the analog lowpass filter transfer function |Ha(Ω)| and corresponding Ha(s)
ª Butterworth / Chebyshev / Elliptic / Bessel
3. Transform Ha(s) into the desired digital transfer function H(z)
• Bilinear and inverse bilinear transformations for mapping s ÍÎz-planes
  s = (2/Ts)·(1 − z⁻¹)/(1 + z⁻¹)        z = (1 + (Ts/2)s)/(1 − (Ts/2)s)

  H(z) = Ha(s)|s = (2/Ts)·(1 − z⁻¹)/(1 + z⁻¹)
Analog ÅÆ Digital
Frequency
 The parameter Ts often does not play a role in the design, and therefore Ts=2 is
chosen for convenience. Then, we have
  z = (1 + s)/(1 − s)        s = (1 − z⁻¹)/(1 + z⁻¹)

 The left half of the s-plane corresponds to inside the unit circle in the z-plane
 The jΩ axis corresponds to the unit circle
 The stability requirement of the analog filters carry to digital filters:
ª Analog: The poles of the filter frequency response must be on the left half plane
ª Digital: The poles of the filter frequency response must be inside the unit circle, i.e., the
ROC must include the unit circle.
Effect of Bilinear
Transformation
 Since the frequency response is defined on the unit circle, z = e^jω ÍÎ s = jΩ:

  s = (2/Ts)·(1 − z⁻¹)/(1 + z⁻¹)

  jΩ = (2/Ts)·(1 − e^−jω)/(1 + e^−jω)

  Ω = (2/Ts)·tan(ω/2)  ⇔  ω = 2·tan⁻¹(ΩTs/2)

This mapping is (highly) nonlinear
Î Frequency warping
Bilinear
Transformation
 Steps in the design of a digital filter -
ª Prewarp ωp, ωs to find their analog equivalents Ωp, Ωs
ª Design the analog filter Ha(s)
ª Design the digital filter H(z) by applying bilinear transformation to Ha(s)
 How to design the analog filter?
ª Butterworth filter – maximally flat
ª Chebychev (type I and type II) filters – Equiripple in passband or stopband
ª Elliptic filter – Sharper transition band but nonlinear phase and nonequiripple
ª Bessel filter – Linear phase in passband, at the cost of wide transition band
 All of these filters are defined for lowpass characteristic
ª Spectral transformations are then used to convert the lowpass design into highpass,
bandpass or bandstop.
 …and then there is direct design that sidesteps all of the above steps…
The Butterworth
Approximation
 The magnitude-squared response of an Nth order analog lowpass Butterworth filter:

  |H(Ω)|² = 1/(1 + (Ω/Ωc)^2N)  ⇔  |H(Ω)| = 1/√(1 + (Ω/Ωc)^2N)

ª Ωc is the 3-dB cutoff frequency (20log|H(Ωc)| = −3), N is the filter order
ª The most interesting property of this function is that its first 2N−1 derivatives
are zero at Ω = 0 Î the function is as flat as possible, without being a constant.
ª The Butterworth LPF is therefore said to have a maximally-flat magnitude at Ω = 0.
(Figure: |H(Ω)| for increasing N; the passband stays above 1−δp up to Ωp and the
stopband falls below δs beyond Ωs, with a sharper transition as N increases.)
Designing a Digital LPF
Using Butterworth Appr.
1. Prewarp ωp, ωs to find their analog equivalents Ωp, Ωs:

     Ω = (2/Ts)·tan(ω/2)

2. Design the analog filter
   a) From δp, δs, Ωp and Ωs obtain the order of the filter N:

     N = (1/2) · [ log(1/(1 − δp)² − 1) − log(1/δs² − 1) ] / log(Ωp/Ωs)

   b) Use N, δp, and Ωp to calculate the 3dB cutoff frequency Ωc:

     Ωc = Ωp / [1/(1 − δp)² − 1]^(1/2N)

   c) Determine the corresponding H(s) and its poles.
      If the filter order is large, manually computing H(s) is typically difficult,
      and it is usually done using filter design software (such as Matlab):

     H(s) = K / [(s − s1)(s − s2)···(s − sN)]   (product over the left-half-plane poles)

3. Apply bilinear transformation to obtain H(z):

     z = (1 + (Ts/2)s) / (1 − (Ts/2)s)
In Matlab
 Yes, you guessed it right, Matlab has several functions:
buttord Butterworth filter order selection.

[N, Wn] = buttord(Wp, Ws, Rp, Rs) returns the order N of the lowest order digital
Butterworth filter that loses no more than Rp dB in the passband and has at least Rs dB of
attenuation in the stopband. Wp and Ws are the passband and stopband edge frequencies,
normalized from 0 to 1 (where 1 corresponds to pi radians/sample).

buttord()also returns Wn, the Butterworth natural frequency (or, the "3 dB frequency")
to use with butter()to achieve the desired specs.

[N, Wn] = buttord(Wp, Ws, Rp, Rs, 's') does the computation for an analog filter, in
which case Wp and Ws are in radians/second. Note that for analog filter design Wn is not
restricted to the [0 1] range. When Rp is chosen as 3 dB, the Wn in butter() is equal to
Wp in buttord().
In Matlab
butter() Butterworth digital and analog filter design.

[B,A] = butter(N,Wn) designs an Nth order lowpass digital Butterworth filter and returns the filter
coefficients in length N+1 vectors B (numerator) and A (denominator). The coefficients are listed in
descending powers of z. The cutoff frequency Wn must be 0.0 < Wn < 1.0, with 1.0 corresponding
to half the sample rate. If Wn is a two-element vector, Wn = [W1 W2], butter() returns an order
2N bandpass filter with passband W1 < W < W2.

[B,A] = butter(N,Wn,'high') designs a highpass filter.


[B,A] = butter(N,Wn,'stop') is a bandstop filter if Wn = [W1 W2].

When used with three left-hand arguments, as in [Z,P,K] = butter(...), the zeros and poles are
returned in length N column vectors Z and P, and the gain in scalar K.

butter(N,Wn,'s'), butter(N,Wn,'high','s') and butter (N,Wn,'stop','s') design analog


Butterworth filters. In this case, Wn is in [rad/s] and it can be greater than 1.0.
In Matlab

bilinear() Bilinear transformation with optional frequency prewarping.


[Zd,Pd,Kd] = bilinear(Z,P,K,Fs) converts the s-domain transfer function specified by Z, P, and K to a z-
transform discrete equivalent obtained from the bilinear transformation:
H(z) = H(s)|s = 2*Fs*(z-1)/(z+1)
where column vectors Z and P specify the zeros and poles, scalar K specifies the gain, and Fs is the sampling
frequency in Hz.

[NUMd,DENd] = bilinear (NUM,DEN,Fs), where NUM and DEN are row vectors containing
numerator and denominator transfer function coefficients, NUM(s)/DEN(s), in descending powers of
s, transforms to z-transform coefficients NUMd(z)/DENd(z).

Each form of bilinear() accepts an optional additional input argument that specifies prewarping. For
example, [Zd,Pd,Kd] = bilinear(Z,P,K,Fs,Fp) applies prewarping before the bilinear transformation so
that the frequency responses before and after mapping match exactly at frequency point Fp (match point Fp is
specified in Hz).
Example
 Design an analog IIR lowpass filter using the Butterworth approximation that meets
the following specs: fp = 2kHz, fs = 3kHz, αp<2dB, αs>50dB, at a sampling rate of 10kHz.
ª ωp = 0.4π rad, ωs = 0.6π rad, δp = 0.2057, δs = 0.0032 Î Ωp = 0.7265 rad/s, Ωs = 1.3764 rad/s

[N Wn]=buttord(0.7264, 1.3764, 2, 50, 's')


[b a]=butter(N, Wn, 's');
[H W]=freqs(b,a); %Note freqs() for analog
plot(W, abs(H));
grid
xlabel('Frequency, rad/s')
title('Analog Filter H(\Omega)')
Example

[N Wn]=buttord(0.7264, 1.3764, 2, 50, 's')


[b a]=butter(N, Wn, 's');
[H W]=freqs(b,a); % analog frequency response
subplot(311); plot(W, abs(H)); grid
xlabel('Frequency, rad/s')
title('Analog Filter H(\Omega)')

%If we were to design this completely in digital domain

[Nd, Wnd]=buttord(0.4, 0.6, 2, 50);


[bd ad]=butter(Nd, Wnd);
[HD WD]=freqz(bd,ad); % digital frequency response
subplot(312); plot(WD/pi, abs(HD)); grid
xlabel('Frequency, rad/\pi')
title('Digital Filter H(\omega)')

% To convert analog into digital


[bdd add]=bilinear(b, a, 0.5); %Note that we took Fs=0.5
[HDD WDD]=freqz(bdd,add);
subplot(313); plot(WDD/pi, abs(HDD)); grid
xlabel('Frequency, rad/\pi')
title('Digital Filter Through Bilinear Trans. H(\omega)')
Chebyshev Filter
 The (almost) flat passband and stopband characteristics of the Butterworth filter come at
the cost of a wide transition band.
 Two types of Chebyshev filters:
ª Type I has equiripple in the passband, and monotonic behavior in the stopband
ª Type II has equiripple in the stopband, and monotonic behavior in the passband

TYPE I:  |H(Ω)|² = 1 / (1 + ε²·CN²(Ω/Ωc))
(Figure: magnitude response of a Chebyshev filter, N odd)

where CN(Ω) is the Chebyshev polynomial of order N:

  CN(Ω) = cos(N·cos⁻¹Ω),   |Ω| ≤ 1
  CN(Ω) = cosh(N·cosh⁻¹Ω), |Ω| > 1

ε is a user defined parameter that controls the ripple amount.
How to Design a
Chebyshev Filter
 Designing a Chebyshev filter requires that the appropriate filter order and cutoff
frequency be determined so that the filter will satisfy the specs.
ª Given the four parameters: passband and stopband edge frequencies (Ωp, Ωs), and passband
and stopband ripples (δp, δs), determine the filter order:

  ε = √(1/(1 − δp)² − 1)

  N = cosh⁻¹( (1/ε)·√(1/δs² − 1) ) / cosh⁻¹(Ωs/Ωp)

ª No closed-form formula exists for computing Ωc. Therefore, for Type I, take Ωc = Ωp
ª Compute the Chebyshev polynomial (Messy…!)
ª Determine the poles of the filter Î Obtain the analog filter (Really messy, in fact, downright ugly!)
ª Apply bilinear transformation to obtain the digital filter (Relatively easy, but by this time
you probably lost the will to live.)

Matlab to the rescue!
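The order formula itself is harmless, though. Reusing the specs from the Butterworth example (δp = 0.2057, δs = 0.0032, Ωp = 0.7265, Ωs = 1.3764), a Python sketch (cheby1_order is a made-up name) gives N ≈ 5.35, i.e. a 6th order Chebyshev filter where the Butterworth design needed 10th order:

```python
import math

def cheby1_order(dp, ds, Wp, Ws):
    """Raw (non-integer) Chebyshev type-I order from the ripple/edge specs."""
    eps = math.sqrt(1.0 / (1.0 - dp) ** 2 - 1.0)
    return math.acosh(math.sqrt(1.0 / ds ** 2 - 1.0) / eps) / math.acosh(Ws / Wp)

N_raw = cheby1_order(0.2057, 0.0032, 0.7265, 1.3764)
N = math.ceil(N_raw)                     # round up to the next integer order
```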
In Matlab
cheby1() Chebyshev Type I digital and analog filter design.
cheby2() Chebyshev Type II digital and analog filter design.

[B,A] = cheby1(N,R,Wn) designs an Nth order type I lowpass digital Chebyshev filter with R decibels of peak-
to-peak ripple in the passband. cheby1 returns the filter coefficients in length N+1 vectors B (numerator) and
A (denominator). The cutoff frequency Wn must be 0.0 < Wn < 1.0, with 1.0 corresponding to half the sample
rate. Use R=0.5 as a starting point, if you are unsure about choosing R.

[B,A] = cheby2(N,R,Wn) designs an Nth order type II lowpass digital Chebyshev filter with the stopband
ripple R decibels down and stopband edge frequency Wn. cheby2() returns the filter coefficients in length
N+1 vectors B (numerator) and A (denominator). The cutoff frequency Wn must be 0.0 < Wn < 1.0, with 1.0
corresponding to half the sample rate. Use R = 20 as a starting point, if you are unsure about choosing R.

If Wn is a two-element vector, Wn = [W1 W2], cheby1 returns an order 2N bandpass filter with passband
W1 < W < W2. [B,A] = cheby1 (N,R,Wn,'high') designs a highpass filter.
[B,A] = cheby1(N,R,Wn,'stop') is a bandstop filter if Wn = [W1 W2].

When used with three left-hand arguments, as in [Z,P,K] = cheby1(…), the zeros and poles are returned in
length N column vectors Z and P, and the gain in scalar K.

cheby1(N,R,Wn,'s'), cheby1(N,R,Wn,'high','s') cheby1,'stop','s') design analog Chebyshev Type I filters. In


this case, Wn is in [rad/s] and it can be greater than 1.0
Effect of Filter Order
Let’s create 3 Chebyshev filters of different orders with a cutoff frequency of 5 rad/s

w=linspace(0, 10, 1024); %Create analog frequency base


[b1 a1]=cheby1(3, 0.5, 5, 's');
[b2 a2]=cheby1(5, 0.5, 5, 's');
[b3 a3]=cheby1(10, 0.5, 5, 's');

%Obtain analog frequency responses at “w”


H1=freqs(b1, a1, w);
H2=freqs(b2, a2, w);
H3=freqs(b3, a3, w);

%Normalize magnitude to plot together


H1=abs(H1)/ max(abs(H1));
H2=abs(H2)/ max(abs(H2));
H3=abs(H3)/ max(abs(H3));

plot(w, abs(H1));
grid; hold on
plot(w, abs(H2), 'r')
plot(w, abs(H3), 'g')
legend('N=3', 'N=5', 'N=10')
title('Magnitude Spectra of Chebyshev Filters');
xlabel('Analog angular frequency, \Omega (rad/s)')
In Matlab

cheb1ord() Chebyshev Type I filter order selection.


cheb2ord() Chebyshev Type II filter order selection.

[N, Wn] = cheb1ord(Wp, Ws, Rp, Rs) returns the order N of the lowest order digital Chebyshev Type I filter
that loses no more than Rp dB in the passband and has at least Rs dB of attenuation in the stopband. Wp and Ws
are the passband and stopband edge frequencies, normalized from 0 to 1 (where 1 corresponds to π rad/sample.).
For example,
Lowpass: Wp = .1, Ws = .2 Highpass: Wp = .2, Ws = .1 (note the reversal of freq.)
Bandpass: Wp = [.2 .7], Ws = [.1 .8] Bandstop: Wp = [.1 .8], Ws = [.2 .7]
cheb1ord() also returns Wn, the Chebyshev natural frequency to use with cheby1() to achieve the
specifications.

[N, Wn] = cheb1ord(Wp, Ws, Rp, Rs, 's') does the computation for an analog filter, in which case Wp and
Ws are in radians/second.

If designing an analog filter, use bilinear() to convert the analog filter coeff. to digital filter coeff.
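A SciPy sketch of the same cheb1ord()/cheby1() workflow; the specs below (Wp=0.1, Ws=0.2, Rp=1 dB, Rs=40 dB) are illustrative, not from the lecture:

```python
# cheb1ord()/cheby1() in SciPy follow the Matlab conventions: frequencies
# normalized to Nyquist, minimum order N and natural frequency Wn returned.
import numpy as np
from scipy.signal import cheb1ord, cheby1, freqz

N, Wn = cheb1ord(0.1, 0.2, 1, 40)   # lowpass: Wp=0.1, Ws=0.2, Rp=1, Rs=40
b, a = cheby1(N, 1, Wn)             # digital design (no 's' flag)

w, H = freqz(b, a, worN=1024)
passband = np.abs(H[w <= 0.1 * np.pi])
print(N, passband.min())            # ripple floor near 10^(-1/20) ~ 0.891
```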
chebyshev_demo2.m Demo

%Given wp, ws, rp, rs; prewarp with Ts=2


Wp=tan(wp/2);
Ws=tan(ws/2);

[N Wn]=cheb1ord(Wp, Ws, rp, rs, 's');


[b a]=cheby1(N, rp, Wn, 's');
[H w]=freqs(b, a);
subplot(311); plot(w, abs(H)); grid

%Design this completely in digital domain

[Nd, Wnd]=cheb1ord(wp/pi, ws/pi, rp, rs);


[bd ad]=cheby1(Nd, rp, Wnd);
[HD WD]=freqz(bd,ad);
subplot(312); plot(WD/pi, abs(HD)); grid

% To convert analog into digital


[bdd add]=bilinear(b, a, 0.5); %Note Fs=0.5
[HDD WDD]=freqz(bdd,add);
subplot(313); plot(WDD/pi, abs(HDD)); grid
Elliptic Filters
(Cauer Filter)
 Provides a much sharper transition band, at the expense of equiripple behavior in
both bands and nonlinear phase in the passband.
|H(Ω)|² = 1 / (1 + ε²·UN²(Ω/Ωc))

ª Where ε again controls the ripple amount and UN is some cryptic function
(Jacobian elliptic function of order N)

U(Ω) = ∫θ=0..Ω dθ / √(1 − N·sin²θ)   (if you must insist…
see if you can pull N out of this!)

The functions ellipord() and ellip() determine the elliptic filter parameters, and design
the elliptic filter, respectively. The syntax and usage are similar to those of Chebyshev.
[N, Wn] = ellipord(Wp, Ws, Rp, Rs, 's');
[b, a] = ellip(N, Rp, Rs, Wn, 's');
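As a sketch, the SciPy counterparts behave the same way; the analog specs below (passband edge 1 rad/s, stopband edge 1.5 rad/s, Rp=1 dB, Rs=40 dB) are made up for illustration:

```python
import numpy as np
from scipy.signal import ellipord, ellip, freqs

# Minimum-order analog elliptic lowpass meeting the illustrative specs.
N, Wn = ellipord(1.0, 1.5, 1, 40, analog=True)
b, a = ellip(N, 1, 40, Wn, analog=True)

w, H = freqs(b, a, worN=np.linspace(1.5, 5, 200))
stop = np.abs(H)                 # stopband samples only (w >= 1.5 rad/s)
print(N, stop.max())             # should not exceed 10^(-40/20) = 0.01
```

Note how small the order is: the equiripple behavior in both bands buys the elliptic filter the sharpest transition of all the classic designs.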
Bessel Filter
 Yet another cryptically designed filter, used only in the most desperate
situations.
ª Advantage: Has (approximately) linear phase in the passband, though there is no
guarantee that the linearity will be preserved when the bilinear transform is applied.
ª Disadvantage: Horrendously wide transition band (no free lunch!)

In Matlab
[B,A] = besself(N,Wn) designs an Nth order lowpass analog Bessel filter and
returns the filter coefficients in length N+1 vectors B and A. The cut-off
frequency Wn must be greater than 0.
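A quick SciPy sketch of the linear-phase claim (the order 4 and 1 rad/s cutoff are arbitrary; norm='delay' normalizes the nominal group delay to 1/Wn = 1 s):

```python
import numpy as np
from scipy.signal import bessel, freqs

# Analog Bessel lowpass; group delay = -d(phase)/dw should be nearly
# constant (i.e., phase nearly linear) throughout the passband.
b, a = bessel(4, 1.0, analog=True, norm='delay')
w, H = freqs(b, a, worN=np.linspace(0.01, 1.0, 400))

gd = -np.gradient(np.unwrap(np.angle(H)), w)
print(gd.min(), gd.max())        # both close to the nominal 1 s delay
```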
Spectral
Transformations
 But what about other types of filters?
 Spectral transformations: Process for converting lowpass filters into highpass,
bandpass or bandstop filters
ª In fact, spectral transformations can be used to convert a LPF into another LPF with a
different cutoff frequency.
ª The transformation can be done in either analog or discrete domain.
 Here is how to “design a non-lowpass filter by designing a lowpass filter”
ª Prewarp the digital filter edge frequencies into analog frequencies
ª Transform the filter specs into LPF specs using spectral transformation
ª Design analog LPF
ª Obtain the corresponding desired digital filter by either
1. Convert the analog LPF to digital LPF using bilinear transformation and use the digital – to – digital
spectral transformation to obtain the non-LPF characteristic
2. Convert the analog LPF to the desired analog non-LPF using the analog – to – analog spectral
transformation, followed by the bilinear transformation to obtain the digital equivalent
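The recipe above (following path 2: analog LP → analog non-LP → bilinear) can be sketched with SciPy; the order 4, 1 dB ripple and 0.7π passband edge are illustrative choices, not a worked spec:

```python
import numpy as np
from scipy.signal import cheby1, lp2hp, bilinear, freqz

wp = 0.7 * np.pi                    # desired digital HPF passband edge
Wp = np.tan(wp / 2)                 # step 1: prewarp (with Ts = 2)

b_lp, a_lp = cheby1(4, 1, 1, analog=True)  # steps 2-3: analog LPF, edge 1 rad/s
b_hp, a_hp = lp2hp(b_lp, a_lp, Wp)         # analog LP -> analog HP
bd, ad = bilinear(b_hp, a_hp, fs=0.5)      # bilinear with Fs = 1/Ts = 0.5

w, H = freqz(bd, ad, worN=1024)
mag = np.abs(H)
print(mag[-1])   # near omega = pi: inside the equiripple passband (~0.89 to 1)
```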
Analog – to – Analog
Transformations

 Notation:
ª Ωp : passband edge frequency of the prototype analog LPF, typically normalized to 1 rad/s
ª ΩpLP : passband edge frequency of the desired analog LPF
ª ΩpHP : passband edge frequency of the desired analog HPF
ª ΩLBP / ΩUBP : lower / upper passband edge frequencies of the desired analog BPF
ª ΩLBS / ΩUBS : lower / upper passband edge frequencies of the desired analog BSF

 The transformations (replace s in the prototype with the indicated function of ŝ):
ª LP→LP: s = (Ωp / ΩpLP) · ŝ
ª LP→HP: s = Ωp · ΩpHP / ŝ
ª LP→BP: s = Ωp · (ŝ² + ΩLBP·ΩUBP) / (ŝ · (ΩUBP − ΩLBP))
ª LP→BS: s = Ωp · ŝ · (ΩUBS − ΩLBS) / (ŝ² + ΩLBS·ΩUBS)
Digital – to – Digital
Transformations
 Digital – to – digital
transformations are obtained by
replacing every z-1 in H(z) with
a mapping function F(z-1)
 In digital – to – digital
transformations, the following
conditions must be satisfied:
1. The mapping must transform
the LP prototype into the
desired type of filter (duh!)
2. The inside of the unit circle must
be mapped to itself (so that
stability is ensured – part 1)
3. The unit circle must also be
mapped to itself (so that
stability is ensured – part 2).
 The standard transformations,
which are allpass functions of z-1,
satisfy all of these requirements
Needless to say…
 For any application of practical complexity, these transformations cannot be done by hand.
Digital filter design software, such as Matlab, is typically used for the transformations
lp2lp() Lowpass to lowpass analog filter transformation.
[NUMT,DENT] = lp2lp(NUM,DEN,Wo) transforms the lowpass filter prototype
NUM(s)/DEN(s) with unity cutoff frequency of 1 rad/sec to a lowpass filter with cutoff frequency
Wo (rad/sec).

lp2hp() Lowpass to highpass analog filter transformation.


[NUMT,DENT] = lp2hp(NUM,DEN,Wo) transforms the lowpass filter prototype
NUM(s)/DEN(s) with unity cutoff frequency to a highpass filter with cutoff frequency Wo.

lp2bp() Lowpass to bandpass analog filter transformation.


[NUMT,DENT] = lp2bp(NUM,DEN,Wo,Bw) transforms the lowpass filter prototype
NUM(s)/DEN(s) with unity cutoff frequency to a bandpass filter with center frequency Wo and
bandwidth Bw.
lp2bs() Lowpass to bandstop analog filter transformation.
[NUMT,DENT] = lp2bs(NUM,DEN,Wo,Bw) transforms the lowpass filter prototype
NUM(s)/DEN(s) with unity cutoff frequency to a bandstop filter with center frequency Wo and
bandwidth Bw.
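SciPy carries the same four functions with the same signatures; as a sketch (the prototype order and the Wo=10, Bw=2 values are arbitrary illustrations):

```python
import numpy as np
from scipy.signal import butter, lp2bp, freqs

b, a = butter(3, 1, analog=True)     # unity-cutoff lowpass prototype
bt, at = lp2bp(b, a, wo=10, bw=2)    # -> bandpass centered at 10 rad/s

w, H = freqs(bt, at, worN=np.array([8.0, 10.0, 12.0]))
mag = np.abs(H)
print(mag)   # ~1 at the 10 rad/s center, falling off at 8 and 12 rad/s
```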
Example

 Design a type I Chebyshev digital IIR highpass filter with the following
specifications: Fp=700 Hz, Fst=500 Hz, αp=1dB, αs=32dB, FT=2kHz
ª First compute the normalized angular digital frequencies:
ωs = 2πFst/FT = 2π×500/2000 = 0.5π     ωp = 2πFp/FT = 2π×700/2000 = 0.7π
ª Prewarp these frequencies:
ΩpHP = tan(ωp/2) = 1.9626105     ΩsHP = tan(ωs/2) = 1.0
ª For the prototype lowpass filter pick Ωp = 1 rad/s
ª Use the spectral transformation to convert these into lowpass equivalents:
s = Ωp·ΩpHP/ŝ  ⇒  Ω = −Ωp·ΩpHP/Ω̂  (with s = jΩ)
Ωs = Ωp·ΩpHP/ΩsHP = 1×1.9626105/1 = 1.9626105
ª Then the analog prototype LPF specs are: Ωp=1, Ωs=1.9626105, αp=1dB, αs=32dB
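These numbers are easy to verify (a quick Python check, assuming Ts = 2 so that Ω = tan(ω/2)):

```python
import numpy as np

wp, ws = 0.7 * np.pi, 0.5 * np.pi        # digital edge frequencies
Wp_hp = np.tan(wp / 2)                   # = 1.9626105...
Ws_hp = np.tan(ws / 2)                   # = 1.0
Omega_s = 1.0 * Wp_hp / Ws_hp            # prototype LPF stopband edge
print(Wp_hp, Ws_hp, Omega_s)
```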
spect_trans_demo.m
Example Cont.
%Omega_p=tan(wp/2); %convert to analog frequencies
%Omega_s=tan(ws/2);
%Wp=1; Ws=Wp*Omega_p/Omega_s;

[N, Wn] = cheb1ord(1, 1.9626105, 1, 32, 's');


[B, A] = cheby1(N, 1, Wn, 's'); %passband ripple rp = 1 dB

WC=linspace(0, 5, 200); subplot(311)


[HC]=freqs(B,A, WC);
plot(WC, abs(HC)); grid
title('Freq. response of the analog LPF H(\Omega)');
xlabel('Analog freq., \Omega, rad/s')

[BT, AT] = lp2hp(B, A, 1.9626105);


[HT]=freqs(BT,AT, WC);
subplot(312); plot(WC, abs(HT)); grid
title('Freq. response of the transformed analog...')
xlabel('Analog freq., \Omega, rad/s')

w=linspace(0, pi, 512); %512 is default size for freqz


[num, den] = bilinear(BT, AT, 0.5);
subplot(313); [H w]=freqz(num, den);
plot(w/pi, abs(H)); grid
title('Freq. response of the digital HPF H^{HP}(\omega/\pi)');
xlabel('Digital freq., \omega, rad')
Direct Design
 The described techniques are the classic analog and digital filter
design techniques that have been traditionally used
 Several numeric and recursive (iterative) optimization techniques are
now available that allow us to design IIR filters directly by
approximating the desired frequency response with an appropriate
transfer function
 A definite advantage of such optimization techniques is that they are
no longer restricted to standard filter types, such as the lowpass,
highpass, bandpass or bandstop
ª Arbitrary filter shapes can be easily designed.
ª One example is the yulewalk() function in Matlab, which is the IIR
counterpart of the remez() and fir2() functions used for FIR filter design.
Direct IIR Design
yulewalk() Recursive filter design using a least-squares method.

[B,A] = yulewalk(N,F,M) finds the Nth order recursive filter coefficients B and A such
that the filter:

B(z)/A(z) = [b(1) + b(2)z^-1 + ... + b(n)z^-(n-1)] / [1 + a(1)z^-1 + ... + a(n)z^-(n-1)]

matches the magnitude frequency response given by vectors F and M. Vectors F and M
specify the frequency and magnitude breakpoints for the filter such that plot(F,M) would
show a plot of the desired frequency response. The frequencies in F must be between 0.0
and 1.0, with 1.0 corresponding to half the sample rate. They must be in increasing order and
start with 0.0 and end with 1.0.

The filter order N usually needs to be determined by trial and error.
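SciPy has no yulewalk(), but its fir2() counterpart, firwin2(), uses the same F/M breakpoint convention (F increasing from 0 to 1 = Nyquist, such that plot(F, M) shows the target response); the breakpoints below are an arbitrary bandpass target chosen for illustration:

```python
import numpy as np
from scipy.signal import firwin2, freqz

F = [0.0, 0.3, 0.4, 0.6, 0.7, 1.0]   # breakpoint frequencies, 1.0 = Nyquist
M = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]   # target magnitudes; plot(F, M) = spec

h = firwin2(101, F, M)               # 100th-order FIR fit to the breakpoints
w, H = freqz(h, worN=1024)
mid = np.abs(H[np.argmin(np.abs(w - 0.5 * np.pi))])
print(mid)   # close to the target gain of 1 in the middle of the passband
```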


yulewalk_demo.m Yulewalk vs. fir2
IIR vs. FIR
freq=[0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1];
amp=[0 0.5 1 0.8 0.6 0 0.5 0.5 1 0 0];
[b1 a1]=yulewalk(25, freq, amp);
b2=fir2(100, freq, amp, 1024);
subplot(221)
[H1 w]=freqz(b1, a1, 1024);
plot(w/pi, abs(H1)); hold on
plot(freq, amp, 'r*'); grid
xlabel('Frequency, \omega/\pi')
title(' Yulewalk: Magnitude response ')
subplot(222)
plot(w/pi, unwrap(angle(H1))); grid
xlabel('Frequency, \omega/\pi')
title(' Phase response of the designed filter')

subplot(223)
[H2 w]=freqz(b2, 1, 1024);
plot(w/pi, abs(H2)); hold on
plot(freq, amp, 'r*'); grid
xlabel('Frequency, \omega/\pi')
title(' Fir2: Magnitude response')
subplot(224)
plot(w/pi, unwrap(angle(H2))); grid
xlabel('Frequency, \omega/\pi')
title(' Phase response of the designed filter')

Compare filter orders and phase responses!!!
IIR vs. FIR
On Friday
 Demos of FIR and IIR filter designs on real signals
ª Play around with all the matlab functions described in this lecture to familiarize
yourself with these functions.
