Introduction
• What is a signal?
• What is a system?
• What is processing?
Signals
Signals
 Signals can be characterized in several ways:
ª Continuous time signals vs. discrete time signals
• Temperature in NJ – audio signal on a CD-ROM
ª Continuous valued signals vs. discrete signals
• Amount of current drawn by a device – average SAT scores of a school over years
– Continuous time and continuous valued : Analog signal
– Continuous time and discrete valued: Quantized signal
– Discrete time and continuous valued: Sampled signal
– Discrete time and discrete values: Digital signal
ª Real valued signals vs. complex valued signals (in-class exercise)
• Residential electric power use – reactive power drawn by an industrial plant
ª Scalar signals vs. vector valued (multichannel) signals
• Blood pressure signal – 128 channel EEG
ª Deterministic vs. random signal:
• Recorded audio – noise (corrupted signal)
ª One-dimensional vs. two dimensional vs. multidimensional signals
• Speech - image
Signals
[Figure: example waveforms of the four signal classes – analog, sampled, quantized, and digital]
Systems
 Not your typical systems: an airline system, security system, irrigation
system, etc. are of no interest to us
ª Example of a system output (amplitude modulation): y(t) = A[1 + m·x(t)]·cos(Ω0t)
Filtering
 By far the most commonly used DSP operation
ª Filtering refers to deliberately changing the frequency content of the signal,
typically, by removing certain frequencies from the signals
ª For denoising applications, the (frequency) filter removes those frequencies in the
signal that correspond to noise
ª In communications applications, filtering is used to focus on the part of the
spectrum that is of interest, that is, the part that carries the information.
 Typically we have the following types of filters
ª Lowpass (LPF) – removes high frequencies, and retains (passes) low frequencies
ª Highpass (HPF) – removes low frequencies, and retains high frequencies
ª Bandpass (BPF) – retains an interval of frequencies within a band, removes others
ª Bandstop (BSF) – removes an interval of frequencies within a band, retains others
ª Notch filter – removes a specific frequency
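To make the lowpass idea concrete, here is a small Python/NumPy sketch (the course's own examples use MATLAB; the signal frequencies and the 11-point moving average below are illustrative choices, not from the slides). A moving average is a crude lowpass filter: it passes a 5 Hz component nearly unchanged while strongly attenuating a 200 Hz component.

```python
import numpy as np

fs = 1000                                   # sampling rate (Hz), illustrative
n = np.arange(fs)                           # one second of samples
low = np.sin(2 * np.pi * 5 * n / fs)        # 5 Hz component (to be retained)
high = np.sin(2 * np.pi * 200 * n / fs)     # 200 Hz component (to be removed)
x = low + high

# An 11-point moving average acts as a crude lowpass filter
M = 11
h = np.ones(M) / M
y = np.convolve(x, h, mode="same")

# After filtering, the output is close to the 5 Hz component alone
print(np.sqrt(np.mean((y - low) ** 2)))     # small residual
```

A moving average is far from an ideal LPF (its stopband attenuation is poor), but it illustrates the "retain low, remove high" behavior with no toolbox dependencies.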
Filtering
[Figure: spectra before and after filtering; frequency components labeled 50, 80, 110, 150, and 210 Hz]
Touch-Tone Dialing
 Dual-tone multifrequency (DTMF) signals
[Figure: spectrum of a DTMF signal; axis labels at 1000 Hz and 1200 Hz]
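The two-tone structure of DTMF can be sketched in Python/NumPy (the 770 Hz / 1336 Hz pair for the digit "5" comes from the standard DTMF table; the sampling rate and duration are illustrative choices):

```python
import numpy as np

fs = 8000                                  # illustrative sampling rate (Hz)
t = np.arange(4000) / fs                   # 0.5 s of samples
# Digit '5' in the standard DTMF table: row tone 770 Hz + column tone 1336 Hz
tone = np.sin(2 * np.pi * 770 * t) + np.sin(2 * np.pi * 1336 * t)

# A receiver can identify the pressed key from the two strongest spectral peaks
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / fs)
peaks = np.sort(freqs[np.argsort(spectrum)[-2:]])
print(peaks)                               # ~770 Hz and ~1336 Hz
```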
About DSP…
 Is this another one of those classes that tests our endurance on abstract
math torture…?
ª YES! Here is an example…
About DSP
 …and here is another one…
So What Good Is This?
 The real-world applications of DSP are innumerable
ª Signal analysis, noise reduction /removal : biological signals - such as ECG, EEG,
blood pressure - NDE signals, such as ultrasound, eddy current, magnetic flux,
oceanographic data, seismic data, financial data - such as stock prices as a time
series data - , audio signal processing, echo cancellation
ª Communications – analog communications, such as amplitude modulation,
frequency modulation, quadrature amplitude modulation, phase shift keying, phase
locked loops, digital and wireless transmission – CDMA (code division multiple
access) / TDMA (time division multiple access), time division multiplexing,
frequency division multiplexing, internet protocol
ª Data encryption, watermarking, fingerprint analysis, speech recognition
ª Image processing and reconstruction, MRI, PET, CT scans
ª Signal generation, electronic music synthesis
ª And many many many more….
Components of a
DSP System
[Block diagram: Analog signal → Sampler → Sampled signal → HOLD → Discrete signal → A/D]
y(t) = A sin(Ωt − θ)
where A is the amplitude, Ω is the angular frequency (radians/sec), and θ is the phase (radians)
Sinusoids
 A continuous time domain sinusoid is a periodic signal
y(t) = y(t + T), where T is the period
tθ = θ/(2πf) = (3π/4)/(2π·20) = 0.75/40 ≈ 0.0187 s
What is the phase component?
Sinusoids
 We can also represent sinusoids using cosine
x(t ) = A cos(Ω 0t ± θ ) = A cos(2πf 0t ± θ )
T=1/f=1/20=0.05 seconds
In Matlab
 What Ts is small enough?
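One way to explore the "what Ts is small enough?" question is to sample the 20 Hz sinusoid above at different rates. This Python/NumPy sketch (the sampling rates are illustrative; the course's own demos use MATLAB) generates the sequences one would plot:

```python
import numpy as np

f0 = 20.0                                # the 20 Hz sinusoid from the slide (T = 0.05 s)

def sample(fs, dur=0.1):
    """Return samples of cos(2*pi*f0*t) taken every Ts = 1/fs seconds."""
    n = np.arange(int(dur * fs))
    return np.cos(2 * np.pi * f0 * n / fs)

coarse = sample(fs=50)     # 2.5 samples/period: above Nyquist, but looks jagged
fine = sample(fs=1000)     # 50 samples/period: looks smooth when plotted
print(len(coarse), len(fine))
```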
Complex Exponential
Signals
 The complex exponential signal is defined as
x(t) = Ae^{j(Ω0t + θ)} = A cos(Ω0t + θ) + jA sin(Ω0t + θ)
Note that there is a π/2 phase difference between the real and imaginary parts.
Phasor Notation
 Recall that any complex number can be represented with a phasor notation:
ª z = x + jy ⇒ z = re^{jθ}, where r = √(x² + y²) and θ = arctan(y/x)
Ae^{j(Ω0t + θ)} = Ae^{jθ} e^{jΩ0t} = Xe^{jΩ0t} = Ae^{jφ(t)}
where X = Ae^{jθ} is the complex amplitude (phasor) and φ(t) = Ω0t + θ
Then any real signal can be represented as:
x(t) = A cos(Ω0t ± θ) = ℜ{Ae^{j(Ω0t + θ)}} = ℜ{Xe^{jΩ0t}} = ℜ{Ae^{jφ(t)}}
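The phasor identities above are easy to verify numerically; a Python/NumPy check (the values of A, Ω0, and θ are arbitrary examples):

```python
import numpy as np

A, omega0, theta = 1.5, 2 * np.pi * 3, np.pi / 4    # arbitrary example values
t = np.linspace(0.0, 1.0, 500)

X = A * np.exp(1j * theta)             # complex amplitude (phasor)
phasor = X * np.exp(1j * omega0 * t)   # X e^{j*Omega0*t} = A e^{j*phi(t)}
direct = A * np.cos(omega0 * t + theta)

# The real part of the rotating phasor is exactly the real sinusoid
print(np.max(np.abs(phasor.real - direct)))
```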
Graphical
Interpretation
Ae^{j(Ω0t + θ)} = Ae^{jθ} e^{jΩ0t} = Ae^{jφ(t)}, φ(t) = Ω0t + θ
[Figure: the complex signal Ae^{jφ(t)}, frozen in time at time t, drawn in the complex plane as a vector of length A at angle φ(t) from the real axis; from © Lyons]
cos θ = (e^{jθ} + e^{−jθ}) / 2
[Figure: the unit circle (A = 1) with the phasor at ωt = π/4, π/2, and 3π/4, and cos(ωt) as its projection onto the real axis; from © Lyons]
Continuous Time Signals
 The signal y(t) = A sin(Ωt − θ) is a continuous-time signal
ª It has a value for every time instant on which it is defined
ª This is different than a continuous function, which is a function with no jumps (discontinuities) on the interval on which it is defined. The following rectangular signal is not a continuous function (why not?):
Π(t) ≜ 1 for −1/2 ≤ t ≤ 1/2; 0 otherwise
Continuous Time Signals
Unit Step Function
 The continuous-time unit-step function is
u(t) = 1 for t > 0; 0 for t < 0
and the rectangular pulse is
Π(t) ≜ 1 for −1/2 ≤ t ≤ 1/2; 0 otherwise
Continuous Time Signals
Rectangular Pulse
G=rectpuls(t) % unit-amplitude, unit-width rectangular pulse centered at t=0
G=rectpuls(t,w) % same, with width w
Continuous Time Signals
Unit Impulse
 Consider compressing the rectangular function in such a way that the
area under the rectangle stays constant, as 1
ª The function you would obtain as the width of the rectangle approaches zero, is
called the unit impulse function – it is a commonly used test signal.
δ(t) = lim_{β→0} (1/β) Π(t/β)
1. f(t)δ(t − t0) = f(t0)δ(t − t0)
2. ∫_a^b f(t)δ(t − t0)dt = ∫_a^b f(t0)δ(t − t0)dt = f(t0), a < t0 < b (sifting property)
3. ∫_a^b f(t)δ^(n)(t − t0)dt = (−1)^n f^(n)(t0), a < t0 < b
4. δ(at) = (1/|a|) δ(t)
5. f(t) ∗ δ(t) = f(t), f(t) ∗ δ(t − t0) = f(t − t0) (convolution property)
6. f(t) ∗ δ^(n)(t) = f^(n)(t)
Continuous Convolution
 Recall the convolution integral
z(t) = x(t) ∗ y(t) = ∫_{−∞}^{∞} x(τ)y(t − τ)dτ = ∫_{−∞}^{∞} y(τ)x(t − τ)dτ
 Note that the “t” dependency of z(t) is misleading. For each value of “t” the
convolution integral must be integrated separately over all values of the dummy
variable “τ”. So, for each “t”
1. Rename the independent variable as τ. You now have x(τ) and y(τ). Flip y(τ) over the origin.
This is y(-τ)
2. Shift y(-τ) as far left as possible to a point “t”, where the two signals barely touch. This is
y(t-τ)
3. Multiply the two signals and integrate over all values of τ. This is the convolution integral
for the specific “t” picked above.
4. Shift / move y(-τ) infinitesimally to the right and obtain a new y(t-τ). Multiply and integrate
5. Repeat 2~4 until y(t-τ) no longer touches x(t), i.e., shifted out of the x(t) zone.
Convolution Integral
ª Convolution with an impulse simply shifts the function to where the impulse(s) appear.
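The five-step recipe above can be checked numerically by approximating the convolution integral with a Riemann sum. This Python/NumPy sketch (the grid spacing dt is an illustrative choice) convolves the rectangular pulse Π(t) with itself and recovers the expected unit-height, width-2 triangle:

```python
import numpy as np

dt = 0.001                                        # grid spacing (illustrative)
t = np.arange(-2.0, 2.0, dt)
rect = ((t >= -0.5) & (t <= 0.5)).astype(float)   # the rectangular pulse Pi(t)

# Riemann-sum approximation of the convolution integral: scale the
# discrete convolution by dt so it approximates the integral over tau
tri = np.convolve(rect, rect, mode="same") * dt

print(tri.max())                                  # ~1.0: Pi * Pi is a width-2 triangle
```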
Lecture 4
Discrete
Time Signals
 The reciprocal of the sampling period Ts is the sampling frequency: fs = 1/Ts (samples/s)
 We naturally interpret signals in the continuous domain ⇒ D/A conversion
 A fundamental question: how close should the samples be to each other so that the continuous time signal can be uniquely regenerated from the discrete time signal?
ª What is the maximum Ts (minimum fs) so that we observe the signal as smooth? How about reconstructing the continuous signal from its samples?
Discrete Signals
 A length-N sequence is often referred to as an N-point sequence
 The length of a finite-length sequence can be increased by zero-padding, i.e., by
appending it with zeros
 A right-sided sequence x[n] has zero-valued samples for n < N1. If N1 ≥ 0, a right-sided sequence is called a causal sequence
 A left-sided sequence x[n] has zero-valued samples for n > N2. If N2 ≤ 0, a left-sided sequence is called an anti-causal sequence
Discrete Time Signals
Unit Impulse (sequence)
δ[n] = 1 for n = 0; 0 for n ≠ 0
f[n]δ[n − n0] = f[n0]δ[n − n0] = f[n0] for n = n0; 0 for n ≠ n0
∑_{n=a}^{b} f[n]δ[n − n0] = ∑_{n=a}^{b} f[n0]δ[n − n0] = f[n0], a < n0, b > n0
Discrete Time Signals
Unit Impulse (sequence)
 The sifting property has one very important consequence:
ª A sequence can be generated in terms of impulses
x[n] = … + x[−1]δ[n + 1] + x[0]δ[n] + x[1]δ[n − 1] + … = ∑_{m=−∞}^{∞} x[m]δ[n − m]
We will use this property in the future to define any system in terms of its “impulse response”
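The decomposition of a sequence into weighted, shifted impulses can be verified directly; a Python/NumPy sketch with an arbitrary example sequence:

```python
import numpy as np

x = np.array([3.0, -1.0, 4.0, 1.0, -5.0])    # arbitrary sequence, n = 0..4
n = np.arange(len(x))

def delta(k):
    """Unit impulse sequence evaluated at integer index array k."""
    return np.where(k == 0, 1.0, 0.0)

# Rebuild x as a weighted sum of shifted impulses: sum_m x[m] * delta[n - m]
rebuilt = sum(x[m] * delta(n - m) for m in range(len(x)))
print(rebuilt)                               # identical to x
```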
Discrete Time Signals
Unit Step (sequence)
u[n] = 1 for n ≥ 0; 0 for n < 0, or equivalently u[n] = ∑_{i=0}^{∞} δ[n − i]
[Figure: stem plot of u[n] for n = −4…6]
 If we write α = e^{(σ0 + jω0)} and A = |A|e^{jφ}, where ω0 is the angular (discrete) frequency,
ª then we can express the sequence as
x[n] = |A| e^{jφ} e^{(σ0 + jω0)n} = xre[n] + j xim[n],
ª where
xre[n] = |A| e^{σ0n} cos(ω0n + φ),
xim[n] = |A| e^{σ0n} sin(ω0n + φ)
Discrete Time Signals
Exponential Sequence
 xre[n] and xim[n] of a complex exponential sequence are real sinusoidal sequences with constant (σ0 = 0), growing (σ0 > 0), or decaying (σ0 < 0) amplitudes for n > 0
[Figure: real and imaginary parts of x[n] = exp((−1/12 + jπ/6)n)·u[n], a decaying sinusoidal sequence, for time index n = 0…40]
Discrete Time Signals
Exponential Sequence
 A special case of the exponential signal is very commonly used in DSP: A = real constant, and α is purely imaginary, i.e.,
x[n] = Ae^{jω0n} = A(cos[ω0n] + j sin[ω0n])
[Figure: real and imaginary parts of the complex exponential sequence versus time index n]
Periodicity
 Sinusoidal sequence A cos(ω0n + φ) and complex exponential sequence Be^{jω0n} are periodic sequences of period N if ω0N = 2πr, where N and r are positive integers
 Smallest value of N satisfying ωo N = 2πr is the fundamental
period of the sequence
 Any sequence that does not satisfy this condition is aperiodic
 To verify the above fact, consider
x1[n] = cos(ω0n + φ), x2[n] = cos(ω0(n + N) + φ)
x2[n] = cos(ω0n + φ)cos(ω0N) − sin(ω0n + φ)sin(ω0N) = cos(ω0n + φ) = x1[n] iff sin(ω0N) = 0 and cos(ω0N) = 1
ª These two conditions are met if and only if
ω0N = 2πr, or 2π/ω0 = N/r
Periodicity
 Note that any continuous sinusoidal /exponential signal is periodic,
however, not all discrete sinusoidal sequences are:
ª A discrete time sequence sin(ω0n + φ) or e^{jω0n} is periodic with period N if and only if there exists an integer m such that mT0 is an integer, where T0 = 2π/ω0.
ª In other words, ω0N = 2πr must be satisfied with two integers N and r, i.e., ω0/(2π) = r/N must be a rational number.
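The ω0N = 2πr test can be automated; a Python sketch (the search bound and numerical tolerance are implementation choices, not from the slides):

```python
import numpy as np

def fundamental_period(omega0, max_N=1000):
    """Smallest integer N with omega0*N = 2*pi*r for some integer r >= 1, else None."""
    for N in range(1, max_N + 1):
        r = omega0 * N / (2 * np.pi)
        if abs(r - round(r)) < 1e-9 and round(r) >= 1:
            return N
    return None

print(fundamental_period(0.1 * np.pi))   # 20: cos(0.1*pi*n) repeats every 20 samples
print(fundamental_period(0.9 * np.pi))   # also 20 (with r = 9)
print(fundamental_period(1.0))           # None: cos(n) is aperiodic
```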
Try This at Home
n=-50:50;
x=cos(pi*0.1*n);
y=cos(pi*0.9*n);
z=cos(pi*2.1*n);
subplot(311)
plot(n,x)
title('x[n]=cos(0.1\pin)')
grid
subplot(312)
plot(n,y)
title('y[n]=cos(0.9\pin)')
grid
subplot(313)
plot(n,z)
grid
title('z[n]=cos(2.1\pin)')
xlabel('n')
The Curse of
Digital Frequency
 Recall that a discrete time signal can be obtained from a continuous time signal through the process of sampling: take a sample every Ts seconds
{x[n]} = x(nTs) = x(t)|_{t=nTs}, n = …, −2, −1, 0, 1, 2, …
y(t) = A sin(Ωt − θ) ⇒ ys[n] = y(nTs) = A sin(ΩnTs − θ)
ω = ΩTs = 2πΩ/Ωs ⇒ f = F/Fs
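The 2π-periodicity of digital frequency — the reason z[n] in the "Try This at Home" exercise above looks exactly like x[n] — can be confirmed numerically in Python/NumPy:

```python
import numpy as np

n = np.arange(-50, 51)
x = np.cos(0.1 * np.pi * n)
z = np.cos(2.1 * np.pi * n)      # digital frequency 2.1*pi = 0.1*pi + 2*pi

# cos((omega0 + 2*pi) * n) = cos(omega0 * n) for integer n:
# the two sequences are numerically identical
print(np.max(np.abs(x - z)))
```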
Some Basic Operations
on Sequences
ª Modulator: y[n] = x[n] · w[n] (multiplication of two sequences)
ª Adder: y[n] = x[n] + w[n] (addition of two sequences)
ª Multiplier: y[n] = A · x[n] (multiplication by a constant A)
ª Unit advance (the block labeled z): y[n] = x[n + 1]
ª Time reversal: y[n] = x[−n]
ª Branching: used to provide multiple copies of a sequence x[n]
Basic Operations
An Example
Lecture 5
Discrete
Convolution
&
Classification
of Discrete
Sequences
x[n] (input sequence) → [Discrete-time System, h[n]] → y[n] (output sequence)
y[n] = x[n] ∗ h[n] = ∑_{m=−∞}^{∞} x[m]·h[n−m] = ∑_{m=−∞}^{∞} h[m]·x[n−m]
h[n]: impulse response of the system
Discrete Convolution
 Again, note that the “n” dependency of y[n] is misleading. For each
value of “n” the convolution sum must be computed separately over
all values of the dummy variable “m”. So, for each “n”
1. Rename the independent variable as m. You now have x[m] and h[m]. Flip h[m]
over the origin. This is h[-m]
2. Shift h[-m] as far left as possible to a point “n”, where the two signals barely touch.
This is h[n-m]
3. Multiply the two signals and sum over all values of m. This is the convolution sum
for the specific “n” picked above.
4. Shift / move h[-m] to the right by one sample, and obtain a new h[n-m]. Multiply
and sum over all m.
5. Repeat 2~4 until h[n-m] no longer overlaps with x[m], i.e., shifted out of the x[m]
zone.
Convolution Demo
n=-3:7;
x=0.55.^(n+3);
h=[1 1 1 1 1 1 1 1 1 1 1];
y=conv(x,h); % length(y) = length(x)+length(h)-1 = 21
subplot(311)
stem(x)
title('Original signal')
subplot(312)
stem(h) % Use stem for discrete sequences
title('Impulse response / second signal')
subplot(313)
stem(y)
title('Convolution result')
In class Exercise
 See Example 1.6 on p.27 of Bose for an additional example.
Symmetry
ª Conjugate-symmetric if x[n] = x*[−n]
• If x[n] is real, the signal is called simply a symmetric or even sequence
ª Conjugate-antisymmetric if x[n] = −x*[−n]
• If x[n] is real, the signal is called simply an antisymmetric or odd sequence
 Any real sequence can be expressed as a sum of its even part and its odd part:
xev[n] = ½(x[n] + x[−n]),  xod[n] = ½(x[n] − x[−n])
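The even/odd decomposition can be verified in a few lines of Python/NumPy (the sequence values are arbitrary; a symmetric index range n = −3…3 is used so that x[−n] is simply a reversal of the array):

```python
import numpy as np

# x[n] on the symmetric support n = -3..3, so x[-n] is just a reversal
x = np.array([2.0, -1.0, 0.5, 3.0, 4.0, 0.0, 1.0])
x_rev = x[::-1]                  # x[-n]

x_ev = 0.5 * (x + x_rev)         # even part
x_od = 0.5 * (x - x_rev)         # odd part

# Even part is symmetric, odd part antisymmetric, and they sum back to x
print(np.allclose(x_ev, x_ev[::-1]), np.allclose(x_od, -x_od[::-1]), np.allclose(x_ev + x_od, x))
```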
 The total energy of x[n] over −K ≤ n ≤ K is
εx,K = ∑_{n=−K}^{K} |x[n]|²
 Then the average power is
Px = lim_{K→∞} (1/(2K+1)) εx,K = lim_{K→∞} (1/(2K+1)) ∑_{n=−K}^{K} |x[n]|²
 Example: for x[n] = 3u[n] (so |x[n]|² = 9 for n ≥ 0),
Px = lim_{K→∞} (1/(2K+1)) ∑_{n=0}^{K} 9 = lim_{K→∞} 9(K+1)/(2K+1) = 4.5
Hint: Recall (and make sure to remember throughout the semester) the following:
∑_{n=0}^{∞} aⁿ = 1/(1−a), |a| < 1
∑_{n=m}^{N} aⁿ = (aᵐ − a^{N+1})/(1−a), a ≠ 1
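Both geometric-series identities are easy to sanity-check numerically in Python (a = 0.3 and the summation limits are arbitrary illustrative choices):

```python
# a = 0.3 and the summation limits below are arbitrary illustrative choices
a = 0.3

# Infinite sum: sum_{n=0}^inf a^n = 1/(1-a) for |a| < 1
approx = sum(a**n for n in range(200))      # 200 terms is effectively "infinite" here
print(approx, 1 / (1 - a))                  # both ~1.42857

# Finite sum: sum_{n=m}^{N} a^n = (a^m - a^(N+1)) / (1-a)
m, N = 2, 10
lhs = sum(a**n for n in range(m, N + 1))
rhs = (a**m - a**(N + 1)) / (1 - a)
print(abs(lhs - rhs))                       # essentially zero
```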
Energy & Power Sequences
Boundedness &
Summability*
 A sequence x[n] is said to be bounded if |x[n]| ≤ Bx < ∞
ª Example: x[n] = cos(0.3πn)
 A sequence x[n] is said to be absolutely summable if ∑_{n=−∞}^{∞} |x[n]| < ∞
ª Example: y[n] = 0.3ⁿ for n ≥ 0; 0 for n < 0:
∑_{n=0}^{∞} 0.3ⁿ = 1/(1 − 0.3) = 1.42857 < ∞
Operations on
Discrete
Signals
Discrete
Systems
 This property must hold for any arbitrary constants α and β , and for all possible
inputs x1[n] and x2[n], and can also be generalized to any arbitrary number of inputs
An Example - Accumulator
 A discrete system whose input / output relationship is given as
y[n] = ∑_{l=−∞}^{n} x[l] = y[−1] + ∑_{l=0}^{n} x[l]
(the second form is used if the signal is causal, in which case y[−1] is the initial condition)
is known as an accumulator. The output at any given time, is simply the sum of all
inputs up to that time.
 For sequences and systems where the index n is related to discrete instants
of time, this property is also called the time-invariance property
defined by xu[n] = x[n/L] for n = 0, ±L, ±2L, … and 0 otherwise, is known as an upsampler.
 This system inserts L − 1 zeros between consecutive samples. If the inserted values are instead computed from the neighboring samples, the system is called an interpolator.
[Figure: output y[n] of an upsampler for n = 0…12, with zeros between the original samples]
Linear
Time-Invariant Systems
 A system that satisfies both the linearity and the time (shift)
invariance properties is called a linear time (shift) invariant system,
LTI (LSI).
An Example - Downsampler
 A system whose input-output characteristic satisfies y[ n] = x[ Mn]
where M is a (+) integer, is called a downsampler or a decimator.
ª Such a system reduces the number of samples by a factor of M, keeping only every Mth sample (i.e., removing M − 1 out of every M samples).
 Is this system
ª Linear? In class
exercise
ª Time (shift) invariant?
ª Causal?
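Whether the downsampler is linear and time-invariant can be explored numerically; a Python/NumPy sketch (the factor M = 2 and the random test signal are illustrative — this experiments with the question rather than proving the answer in general):

```python
import numpy as np

M = 2
def downsample(x):
    return x[::M]                       # y[n] = x[Mn]

rng = np.random.default_rng(0)          # fixed seed for a repeatable experiment
x = rng.standard_normal(16)

# Linearity: scaling the input scales the output
print(np.allclose(downsample(3 * x), 3 * downsample(x)))                 # True

# Time invariance: a one-sample input delay should produce a one-sample
# output delay, but the downsampler keeps a *different* set of samples
# (np.roll is used as a simple stand-in for a shift on this finite buffer)
x_shifted = np.roll(x, 1)
print(np.array_equal(downsample(x_shifted), np.roll(downsample(x), 1)))  # False
```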
Memory
 A system is said to be memoryless if the output depends only on the
current input, but not on any other past (or future) inputs. Otherwise,
the system is said to have memory.
y[n] = x[Mn] (downsampler)
xu[n] = x[n/L] for n = 0, ±L, ±2L, …; 0 otherwise (upsampler)
y[n] = ∑_{l=−∞}^{n} x[l] (accumulator)
Stability
 There are several definitions of stability, which is of utmost
importance in filter design. We will use the definition of stability in
the BIBO sense
ª If y[n] is the response to an input x[n] that satisfies |x[n]| ≤ Bx < ∞, and y[n] satisfies |y[n]| ≤ By < ∞, then the system is said to be stable in the BIBO sense.
 A system (filter) that is not stable is rarely of any practical use (except for very specialized applications), and therefore most filters are designed to be BIBO stable.
An Example –
Moving Average Filter
 An M – point moving average system (filter) is defined as
y[n] = (1/M) ∑_{k=0}^{M−1} x[n − k]
 What would you use such a system for?
 Is this system stable?
Try this at Home
n=0:99;
s=2*(n.*(0.9).^n); % smooth underlying signal
d=rand(1,100); % additive noise
x=s+d;
subplot(211)
plot(x); grid
y=zeros(1,100);
for i=5:100
  y(i)=0.2*(x(i)+x(i-1)+x(i-2)+x(i-3)+x(i-4)); % 5-point moving average
end
subplot(212)
plot(n,y); grid
Lecture 7
Characterization
of Discrete
Systems
Impulse
Response
 The impulse response h[n] is the output of the system when the input is the unit impulse δ[n]: x[n] = δ[n] ⇒ y[n] = h[n]
 Ex: Consider a system of the form y[n] = α1x[n] + α2x[n−1] + α3x[n−2] + α4x[n−3]. Setting x[n] = δ[n] gives
⇒ {h[n]} = {α1, α2, α3, α4}, starting at n = 0
Impulse Response
 Recall the discrete time accumulator: y[n] = ∑_{l=−∞}^{n} x[l]
ª What is the impulse response of this system?
h[n] = ∑_{l=−∞}^{n} δ[l] = u[n]
Impulse Response
 So, what is the big deal?
x[n] (input sequence) → [Discrete System, h[n]] → y[n] = x[n] ∗ h[n] (output sequence)
y[n] = x[n] ∗ h[n] = ∑_{m=−∞}^{∞} x[m]·h[n−m]
Impulse Response
 To see why, consider the following example
[Figure: stem plots of the example sequences x[n] (n = 0…4) and h[n] (n = 0…2)]
ª x[n] = −2δ[n] + 0δ[n−1] + 1δ[n−2] − 1δ[n−3] + 3δ[n−4]
Impulse Response
for LTI Systems
 For an LTI system, we can add the individual responses of the system to the individual impulses that make up the input sequence
x[n] = −2δ[n] + 0δ[n−1] + 1δ[n−2] − 1δ[n−3] + 3δ[n−4]
Each scaled, shifted impulse x[m]δ[n−m] produces the scaled, shifted response x[m]h[n−m]; adding these responses,
y[n] = −2h[n] + 0h[n−1] + 1h[n−2] − 1h[n−3] + 3h[n−4]
     = x[0]h[n] + x[1]h[n−1] + x[2]h[n−2] + x[3]h[n−3] + x[4]h[n−4]
y[n] = x[n] ∗ h[n] = ∑_{m=−∞}^{∞} x[m]·h[n−m]
Impulse Response
Example
 Continuing with the example
x[n]
3 h[n]
2
1 1
0 3 3
1 2 4 n 0 1 2 n
–1 –1
–2
y[n]
5
1 1 1
0 1 7
–2 –1 2 3 4 5 6 8 9 n
–2
–3
–4
Properties
 There are several useful properties of the convolution and impulse
response characterization:
ª An LTI system is BIBO stable if its impulse response is absolutely summable:
∑_{n=−∞}^{∞} |h[n]| < ∞
• Parallel connection: the overall impulse response is h[n] = h1[n] + h2[n]
Properties
 A cascade connection of two stable systems is always stable
 If a cascade system satisfies the following condition
h1[n] ∗ h2[n] = δ[n]
then h1 and h2 are called inverse systems
 An application of the inverse system concept is in the recovery of a signal x[n]
from its distorted version x’[n] appearing at the output of a transmission
channel
 If the impulse response of the channel is known, then x[n] can be recovered by
designing an inverse system of the channel
ª FIR systems are also called nonrecursive systems (for reasons that will later
become obvious), where the output can be computed from the current and past
input values only – without requiring the values of previous outputs.
Infinite Impulse Response
Systems
 If the impulse response is of infinite length, then the system is referred
to as an infinite impulse response (IIR) system. These systems cannot
be characterized by the convolution sum due to infinite sum.
ª Instead, they are typically characterized by linear constant coefficient difference
equations, as we will see later.
ª Recall accumulator and note that it can have an alternate – and more compact
representation that makes the current output a function of previous inputs and
outputs
y[n] = ∑_{l=−∞}^{n} x[l]  ⇔  y[n] = y[n−1] + x[n]
ª The impulse response of this system (which is of infinite length), cannot be
represented with a finite convolution sum. Note that, since the current output
depends on the previous outputs, this is also called a recursive system
Constant Coefficient
Linear Difference Equations
 All discrete systems can also be represented using constant
coefficient, linear difference equations of the form
y[n] + a1y[n−1] + a2y[n−2] + … + aN y[n−N] = b0x[n] + b1x[n−1] + … + bM x[n−M]
∑_{i=0}^{N} ai y[n−i] = ∑_{j=0}^{M} bj x[n−j], a0 = 1 (constant coefficients)
ª Note that the impulse response of an FIR system can easily be obtained from its CCLDE representation:
y[n] = ∑_{j=0}^{M} bj x[n−j] ⇒ h[n] = ∑_{j=0}^{M} bj δ[n−j]
The hardware implementation follows this structure exactly, using delay elements, adders and multipliers.
IIR Systems
 If in the general expression the ai are not zero, then the output depends on former outputs, and hence this is a recursive system.
 The impulse response of an IIR system cannot be represented as a
closed finite convolution sum precisely due to recursion
 The filter structure of IIR systems – which has a distinct feedback
(recursion) loop, has the following form:
y[n] = b0x[n] + b1x[n−1] − a1y[n−1]
CCLDE
 Note that, assuming that system is causal, y[n] be pulled out of the
CCLDE equation to obtain:
y[n] = −∑_{i=1}^{N} (ai/a0) y[n−i] + ∑_{j=0}^{M} (bj/a0) x[n−j]
 IIR systems are not guaranteed to be stable, since their h[n] consists of an infinite number of terms. Their design requires stability checks!
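A minimal recursive (IIR) example, sketched in Python/NumPy (the first-order system y[n] = x[n] + a·y[n−1] and the value a = 0.5 are illustrative, not from the slides): feeding it an impulse reveals the infinite-length response aⁿ, which is absolutely summable — hence the system is stable — only when |a| < 1.

```python
import numpy as np

a = 0.5                        # recursion coefficient; |a| < 1 keeps the system stable
N = 20
x = np.zeros(N)
x[0] = 1.0                     # unit impulse input

# Recursive (IIR) difference equation: y[n] = x[n] + a*y[n-1]
y = np.zeros(N)
for n in range(N):
    y[n] = x[n] + (a * y[n - 1] if n > 0 else 0.0)

# The impulse response is a^n: infinite length, absolutely summable for |a| < 1
print(np.allclose(y, a ** np.arange(N)))    # True
```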
Lecture 8
Representation
of Signals in
Frequency
Domain
Jean B. J. Fourier
 He announced his discovery in a prize paper on the theory of heat (1807).
ª The judges: Laplace, Lagrange, Poisson and Legendre
 Three of the judges found it incredible that a sum of sines and cosines could add up to anything but an infinitely differentiable function, but...
ª Lagrange: lack of mathematical rigor and generality ⇒ denied publication….
• Fourier had already become famous for his earlier mathematical work, and was appointed chair at the Ecole Polytechnique
• Napoleon took him along to conquer Egypt
• Returned after several years
• Barely escaped the guillotine!
ª After 15 years, following several attempts and disappointments and frustration, he published
his results in Theorie Analytique de la Chaleur in 1822 (Analytical Theory of Heat).
ª In 1829, Dirichlet proved Fourier’s claim with very few and non-restricting conditions.
ª Next 150 years: his ideas expanded and generalized. 1965: Cooley and Tukey ⇒ Fast Fourier Transform ⇒ computational simplicity ⇒ king of all transforms… Countless applications in engineering, finance, applied mathematics, etc.
Fourier Transforms
 Fourier Series (FS)
ª Fourier’s original work: a periodic function can be represented as a weighted sum of sinusoids that are integer multiples of the fundamental frequency Ω0 of the signal. ⇒ These frequencies are said to be harmonically related, or simply harmonics.
 (Continuous) Fourier Transform (FT)
ª Extension of Fourier series to non-periodic functions: Any continuous aperiodic function
can be represented as an infinite sum (integral) of sinusoids. The sinusoids are no longer
integer multiples of a specific frequency anymore.
 Discrete Time Fourier Transform (DTFT)
ª Extension of FT to discrete sequences. Any discrete function can also be represented as an
infinite sum (integral) of sinusoids.
 Discrete Fourier Transform (DFT)
ª Because DTFT is defined as an infinite sum, the frequency representation is not discrete
(but continuous). An extension to DTFT is DFT, where the frequency variable is also
discretized.
 Fast Fourier Transform (FFT)
ª Mathematically identical to DFT, however a significantly more efficient implementation.
FFT is what has made modern signal processing possible!
Dirichlet Conditions
(1829)
 Before we dive into Fourier transforms, it is important to understand
for which type of functions, Fourier transform can be calculated.
 Dirichlet put an end to the discussion on the feasibility of the Fourier transform by proving the conditions for the existence of Fourier representations of signals:
ª The signal must have a finite number of discontinuities
ª The signal must have a finite number of extremum points within its period
ª The signal must be absolutely integrable within its period:
∫_{t0}^{t0+T} |x(t)| dt < ∞
 How restrictive are these conditions…?
A Demo
 http://homepages.gac.edu/~huber/fourier/
Fourier Series
 Any periodic signal x(t) whose fundamental period is T0 (hence, fundamental frequency f0 = 1/T0) can be represented as a sum of complex exponentials (sines and cosines)
ª That is, a signal, however arbitrary and complicated it may be, can be represented as a sum of simple building blocks
x(t) = ∑_{k=−∞}^{∞} ck e^{jΩ0kt}
ª Note that each complex exponential that makes up the sum is an integer multiple
of Ω0, the fundamental frequency
ª Hence, the complex exponentials are harmonically related
ª The coefficients ck , aka Fourier (series) coefficients, are possibly complex
• Fourier series (and all other types of Fourier transforms) are complex valued ! That is,
there is a magnitude and phase (angle) term to the Fourier transform!
Fourier Series
 This is the synthesis equation: x(t) is synthesized from its building
blocks, the complex exponentials at integer multiples of Ω0
x(t) = ∑_{k=−∞}^{∞} ck e^{jkΩ0t}
and the coefficients are computed with the analysis equation:
ck = (1/T0) ∫_{t0}^{t0+T0} x(t) e^{−jkΩ0t} dt
The Fourier
Transform
[Figure: stem plot of the Fourier series coefficients ck (values ±1/2 and ±1/2j) versus k = −3…3; k = 0 ⇒ Ω = 0 rad/s, k = 1 ⇒ Ω = 2 rad/s, k = 2 ⇒ Ω = 4 rad/s, k = 3 ⇒ Ω = 6 rad/s]
Quick Facts About
Fourier Series
 If a signal is even, then all bk=0, and if a signal is odd, then all ak=0
 If x(t) is real, then the Fourier series coefficients are conjugate-symmetric, that is, c−k = ck*
ª This is inline with our interpretation of frequency as two phasors rotating at the
same rate but opposite directions.
ª For real signals, the relationship between complex and trigonometric
representation of Fourier series further simplifies to ak = 2ℜ[ck ] bk = −2 Im[ck ]
ª We also have a third representation for real signals:
x(t) = C0 + ∑_{k=1}^{∞} Ck cos(kΩ0t − θk)   (the kth term is the kth harmonic)
C0 = a0/2, Ck = √(ak² + bk²), θk = tan⁻¹(bk/ak)   (θk: phase angles)
Exercises
t=0.01:0.01:10;
d=0.5:2:9.5;
y=pulstran(t,d,'rectpuls');
y2=y-0.5; % remove the DC offset
y2=2*y2; % scale to +/-1
subplot(211)
plot(t,y2)
grid
Y2=fft(y2);
Y2=fftshift(Y2);
subplot(212)
stem(abs(Y2))
grid
Exercises
x(t) = e^{−3|t|} sin(2t)
fs=100;
t=-2:1/fs:2;
x=exp(-3*abs(t)).*sin(2*t);
subplot(211)
plot(t,x)
grid
X=fft(x);
f=-fs/2:fs/(length(t)-1):fs/2;
subplot(212)
plot(f, abs(fftshift(X)))
grid
Discrete Time
Fourier
Transform
(DTFT)
X(Ω) = ℑ{x(t)} = ∫_{−∞}^{∞} x(t) e^{−jΩt} dt = |X(Ω)| e^{jφ(Ω)} = |X(Ω)| ∠X(Ω)
x(t) = ℑ⁻¹{X(Ω)} = (1/2π) ∫_{−∞}^{∞} X(Ω) e^{jΩt} dΩ
Key Facts to Remember
 All FT pairs provide a transformation between time and frequency domains: The
frequency domain representation provides how much of which frequencies exist in
the signal Î More specifically, how much ejΩt exists in the signal for each Ω.
 In general, the frequency representation is complex (it is purely real if the signal is real and even).
ª |X(Ω)|: the magnitude spectrum ⇒ the strength of each Ω component
ª ∠X(Ω): the phase spectrum ⇒ the amount of phase delay for each Ω component
 The FS is discrete in the frequency domain, since it is a discrete set of exponentials – integer multiples of Ω0 – that make up the signal: only a countable set of frequencies is required to construct a periodic signal.
 The FT is continuous in frequency domain, since exponentials of a continuum of
frequencies are required to reconstruct a non-periodic signal.
 Both transforms are non-periodic in frequency domain.
Discrete –Time
Fourier Transform (DTFT)
 Similar to continuous time signals, discrete time sequences can also be
periodic or non-periodic, resulting in discrete-time Fourier series or
discrete – time Fourier transform, respectively.
 Most signals in engineering applications are non-periodic, so we will
concentrate on DTFT.
 We will represent the discrete frequency as ω, measured in radians.
x[n] \;\overset{\mathcal{F}}{\Longleftrightarrow}\; X(\omega)

X(\omega) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n}
\qquad
x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega)\, e^{j\omega n}\, d\omega

Quick facts:
• Since x[n] is discrete, we can only add its samples, hence the analysis equation is a summation.
• The sum of x[n], weighted with continuous exponentials, is continuous ⇒ the DTFT X(ω) is continuous (non-discrete).
• Since X(ω) is continuous, x[n] is obtained as a continuous integral of X(ω), weighted by the same complex exponentials.
• x[n] is obtained as an integral of X(ω) over an interval of 2π ⇒ this is our first clue that the DTFT is periodic with 2π in the frequency domain.
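The analysis sum above is easy to evaluate numerically by truncating it; a minimal NumPy sketch (the test signal x[n] = 0.9ⁿu[n] and the dense frequency grid are illustrative choices, not from the slides):

```python
import numpy as np

# Truncated approximation of X(w) = sum_n x[n] e^{-jwn}
n = np.arange(64)
x = 0.9 ** n                            # decaying exponential x[n] = 0.9^n u[n]
w = np.linspace(-np.pi, np.pi, 501)

# Outer product builds e^{-jwn} for every (w, n) pair; matrix-vector sum over n
X = np.exp(-1j * np.outer(w, n)) @ x

# Closed form of the same DTFT for comparison: 1 / (1 - 0.9 e^{-jw})
X_closed = 1.0 / (1.0 - 0.9 * np.exp(-1j * w))
print(np.max(np.abs(X - X_closed)))     # small: only the truncated tail is missing
```

The residual error is the geometric tail beyond n = 63, so it shrinks as more samples are kept.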
Proof
 We now show that x[n] and X(ω) are indeed FT pairs, that is one can
be obtained from the other:
x[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi} X(\omega)\, e^{j\omega n}\, d\omega
\qquad
X(\omega) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n}

Substituting the analysis equation into the synthesis equation:

x[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi} \sum_{l=-\infty}^{\infty} x[l]\, e^{-j\omega l}\, e^{j\omega n}\, d\omega
= \sum_{l=-\infty}^{\infty} x[l]\, \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{j\omega (n-l)}\, d\omega
= \sum_{l=-\infty}^{\infty} x[l]\, \delta[n-l] = x[n]

since the inner integral equals 1 for l = n and 0 for every other integer l.
Important Theorems
 There are several important theorems related to DTFT.
 Theorem 1:
ª If x[n] is input to an LTI system with an impulse response of h[n], then the DTFT
of the output y[n] is the product of X(ω) and H(ω): Y(ω) = X(ω)H(ω)

x[n] \overset{\mathcal{F}}{\Longleftrightarrow} X(\omega)
\qquad
h[n] \overset{\mathcal{F}}{\Longleftrightarrow} H(\omega)
\qquad
y[n] \overset{\mathcal{F}}{\Longleftrightarrow} Y(\omega)
Linearity &
Differentiation In Frequency
 The DTFT is a linear operator:

a\,x[n] + b\,y[n] \;\overset{\mathcal{F}}{\Longleftrightarrow}\; a\,X(\omega) + b\,Y(\omega)

 Multiplying the time domain signal with the independent time variable
is equivalent to differentiation in the frequency domain:

n\,x[n] \;\overset{\mathcal{F}}{\Longleftrightarrow}\; j\,\frac{dX(\omega)}{d\omega}
Time Reversal ,
Time & Frequency Shift
 A reversal of the time domain variable causes a reversal of the
frequency variable:

x[-n] \;\overset{\mathcal{F}}{\Longleftrightarrow}\; X(-\omega)

 A shift in time corresponds to a linear phase term in frequency, and a
modulation in time corresponds to a shift in frequency:

x[n-n_0] \;\overset{\mathcal{F}}{\Longleftrightarrow}\; e^{-j\omega n_0}\, X(\omega)
\qquad
x[n]\, e^{j\omega_0 n} \;\overset{\mathcal{F}}{\Longleftrightarrow}\; X(\omega - \omega_0)
Convolution
 Convolution in the time domain is equivalent to multiplication in the
frequency domain:

x[n] * h[n] \;\overset{\mathcal{F}}{\Longleftrightarrow}\; X(\omega) \cdot H(\omega)

ª This is one of the fundamental theorems in filtering. It allows us to compute the
filter response in the frequency domain using the frequency response of the filter.
 Conversely, multiplication in time corresponds to (periodic) convolution in frequency:

x[n] \cdot h[n] \;\overset{\mathcal{F}}{\Longleftrightarrow}\; \frac{1}{2\pi}\int_{-\pi}^{\pi} X(\gamma)\, H(\omega-\gamma)\, d\gamma
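The convolution theorem can be checked numerically with the DFT, provided the transforms are zero-padded to the full linear-convolution length; a short NumPy sketch (the two test sequences are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 0.5])

# Linear convolution computed directly in the time domain
y_time = np.convolve(x, h)

# Same result via multiplication of zero-padded DFTs
N = len(x) + len(h) - 1
y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real
print(np.allclose(y_time, y_freq))   # True
```

Without the padding to length len(x)+len(h)-1 the product of DFTs would give the circular, not the linear, convolution.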
Parseval’s Theorem
 The energy of the signal, whether computed in the time domain or the
frequency domain, is the same!

\underbrace{\sum_{n=-\infty}^{\infty} |x[n]|^2}_{\text{energy in time}} = \underbrace{\frac{1}{2\pi}\int_{-\pi}^{\pi} |X(\omega)|^2\, d\omega}_{\text{energy in frequency}}
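The DFT obeys the discrete analogue of this identity, sum|x[n]|² = (1/N) sum|X[k]|², which is easy to verify; a quick NumPy check (the random test signal is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

# Energy computed in the time domain
E_time = np.sum(np.abs(x) ** 2)

# Energy from the DFT: the 1/N factor is the discrete counterpart of 1/(2*pi)
X = np.fft.fft(x)
E_freq = np.sum(np.abs(X) ** 2) / len(x)
print(np.allclose(E_time, E_freq))   # True
```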
Important DTFT Pairs
Impulse Function
 The DTFT of the impulse function is “1” over the entire frequency
band.
ℑ{δ [n]} = 1
%Impulse: flat spectrum over the entire band
x=zeros(1000,1);
x(500)=1;
subplot(211)
plot(x); grid
X=abs(fft(x));
subplot(212)
w=linspace(-pi, pi, 1000);
plot(w, fftshift(X)); grid

%Constant (all-ones) sequence: spectrum concentrated at w=0
x=ones(1000,1);
subplot(211)
plot(x); grid
X=abs(fft(x));
subplot(212)
w=linspace(-pi, pi, 1000);
plot(w, fftshift(X)); grid
Important DTFT Pairs
Real Exponential
 This is an important function in signal processing. Why?
x[n] = \alpha^n u[n] \;\overset{\mathcal{F}}{\Longleftrightarrow}\; \frac{1}{1-\alpha e^{-j\omega}}, \quad |\alpha| < 1
t=0:0.01:10;
x=(0.5).^t;
X=fftshift(fft(x));
subplot(311)
plot(t,x); grid
subplot(312)
f=-50:100/1000:50; %frequency axis in Hz (fs=100 Hz, 1001 points)
plot(f,abs(X)); grid
subplot(313)
plot(f, unwrap(angle(X))); grid
x[n] = \mathrm{rect}_M[n] = \begin{cases} 1, & -M \le n \le M \\ 0, & \text{otherwise} \end{cases}

\sum_{n=-M}^{M} e^{-j\omega n} = \frac{\sin\!\big((M+\tfrac{1}{2})\omega\big)}{\sin(\omega/2)}, \quad \omega \ne 0
Ideal Lowpass Filter
 The ideal lowpass filter is defined as

H(\omega) = \begin{cases} 1, & |\omega| \le \omega_c \\ 0, & \omega_c < |\omega| \le \pi \end{cases}

ª Taking its inverse DTFT, we can obtain the corresponding impulse response h[n]:

h[n] = \frac{\sin(\omega_c n)}{\pi n}
Ideal Lowpass Filter
 Note that:
ª The impulse response of an ideal LPF is infinitely long ⇒ this is an IIR filter. In
fact h[n] is not absolutely summable ⇒ its DTFT sum does not converge absolutely ⇒ an ideal
LPF cannot be realized!
ª One possible solution is to truncate h[n], say with a window function, and then
take its DTFT to obtain the frequency response of a realizable FIR filter.
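The truncate-and-window idea can be sketched in a few lines of NumPy (the cutoff 0.3π, the half-length M = 50, and the Hamming window are illustrative choices, not values from the slides):

```python
import numpy as np

# Truncate the ideal sinc impulse response and apply a Hamming window
wc = 0.3 * np.pi          # illustrative cutoff frequency
M = 50                    # filter spans n = -M..M (length 101)
n = np.arange(-M, M + 1)
h_ideal = np.where(n == 0, wc / np.pi,
                   np.sin(wc * n) / (np.pi * np.where(n == 0, 1, n)))
h = h_ideal * np.hamming(len(n))   # realizable FIR approximation of the ideal LPF

# Check the resulting frequency response at a passband and a stopband frequency
H = np.fft.fft(h, 1024)
w = np.fft.fftfreq(1024) * 2 * np.pi
print(abs(H[np.argmin(abs(w - 0.1 * np.pi))]))  # near 1 (passband)
print(abs(H[np.argmin(abs(w - 0.8 * np.pi))]))  # near 0 (stopband)
```

The windowed filter only approximates the ideal response: the transition band widens and the stopband attenuation is finite, as the slide warns.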
On Friday
 In class exercises
 Matlab exercises
 No formal lab report
 Exam next FRIDAY!
Lecture 12
Sampling
Theorem
Alias:
Two names for the
same person, or thing.
Aliasing
 Note that identical discrete-time signals may result from the sampling of more than
one distinct continuous-time function. In fact, there exists an infinite number of
continuous-time signals which, when sampled, lead to the same discrete-time signal.

[Plot: several distinct continuous-time sinusoids (amplitude vs. time, 0 to 1 s) passing through the same set of samples]
Aliasing
 Here is another example:
ª The following plot shows two sinusoids: x1[n]=cos(0.4πn) and
x2[n]=cos(2.4πn). These two signals are distinct, as the second one clearly
has a higher frequency.
ª However, when sampled at, say, integer values of n, they have the same samples, that
is, x1[n]=x2[n] for all integer n. These two signals are aliases of each other. More
specifically, in DSP jargon, we say that the frequencies ω1=0.4π and ω2=2.4π
are aliases of each other.
Aliasing
 In general, adding or subtracting multiples of 2π to the frequency of a sinusoid
gives aliases of the same signal.
ª The one at the lowest frequency is called the principal alias, whereas those arising from the
negative frequencies are called folded aliases.
ª In summary, the frequencies ω0 + 2πk and 2πk − ω0, for any integer k, are aliases
of each other.
ª We can further show that for folded aliases, the algebraic sign of the phase angle is
opposite that of the principal alias.
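The alias relations above are easy to confirm numerically; a quick NumPy check using the ω = 0.4π example from the previous slide (the folded alias 1.6π = 2π − 0.4π is added for illustration):

```python
import numpy as np

n = np.arange(20)                 # integer sample indices
x1 = np.cos(0.4 * np.pi * n)      # principal alias, w = 0.4*pi
x2 = np.cos(2.4 * np.pi * n)      # w = 2.4*pi = 0.4*pi + 2*pi
x3 = np.cos(1.6 * np.pi * n)      # folded alias: 2*pi - 0.4*pi
print(np.allclose(x1, x2), np.allclose(x1, x3))  # True True
```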
Alias frequencies
Houston…
We’ve got a problem!
 The fact that there exists an infinite number of continuous-time
signals, which when sampled lead to the same discrete-time signal,
poses a considerable dilemma in plotting and interpreting signals in
the frequency domain.
 Q: If the same discrete signal can be obtained from several continuous
time signals, how can we uniquely reconstruct the original continuous
time signal that generated the discrete signal at hand?
 A. Under certain conditions, it is possible to relate a unique
continuous-time signal to a given discrete-time signal.
ª If these conditions hold, then it is possible to recover the original continuous-time
signal from its sampled values
Shannon’s
Sampling Theorem
 The solution to this complicated and perplexing phenomenon comes
from the amazingly simple Shannon's Sampling Theorem, one of the
cornerstones of modern communications, signal processing, and
control.
ª A continuous time signal x(t), with frequencies no higher than fmax, can be
reconstructed exactly, precisely and uniquely from its samples x[n] = x(nTs), if the
samples are taken at a sampling rate (frequency) fs=1/Ts that is greater than
2fmax. The rate 2fmax is called the Nyquist frequency (rate).
ª In other words, if a continuous time signal is sampled at a rate at least twice
as high as the highest frequency in the signal, then it can be uniquely
reconstructed from its samples.
ª Aliasing can be avoided if a signal is sampled at or above the Nyquist rate.
Claude Shannon
X_s(\Omega) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} X_c(\Omega - k\Omega_s), \qquad \omega = \Omega T_s
Effect of Sampling
in the Frequency Domain
Sampling Explained …
 Let ga(t) be a continuous-time signal that is sampled uniformly at
t = nTs, generating the sequence g[n], where

g[n] = g_a(nT_s), \quad -\infty < n < \infty

with DTFT

G(e^{j\omega}) = \sum_{n=-\infty}^{\infty} g[n]\, e^{-j\omega n}
Sampling
 To establish the relation between Ga(Ω) and G(ω), we treat the
sampling operation mathematically as a multiplication of ga(t) by a
periodic impulse train p(t):

p(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT_s)
Sampling
 The multiplication operation yields another impulse train:

g_p(t) = g_a(t)\, p(t) = \sum_{n=-\infty}^{\infty} g_a(nT_s)\, \delta(t - nT_s)

 gp(t) is a continuous-time signal consisting of a train of uniformly spaced impulses,
with the impulse at t = nTs weighted by the sampled value ga(nTs) of ga(t) at that
instant.
Sampling
 Now, note that p(t) is a periodic signal; therefore it can be represented with its FS:

p(t) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} e^{j(2\pi/T_s)kt} = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} e^{j\Omega_s kt}, \qquad \Omega_s = 2\pi/T_s

g_p(t) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} e^{j\Omega_s kt} \cdot g_a(t)

 From the frequency-shifting property of the CTFT, the CTFT of e^{j\Omega_s kt}\, g_a(t) is
G_a(\Omega - k\Omega_s).
[Block diagram: ga(t) multiplied by p(t) gives gp(t); lowpass filtering gp(t) with h(t) recovers ĝa(t)]
Nyquist Rate
 Note that the key requirement for recovering Ga(Ω) from Gp(Ω) is
that Gp(Ω) should consist of non-overlapping replicas of Ga(Ω):

\Omega_s - \Omega_m \ge \Omega_m \;\Rightarrow\; \Omega_s \ge 2\Omega_m

If Ωs ≥ 2Ωm, ga(t) can be recovered exactly from gp(t) by passing it through an ideal
lowpass filter Hr(Ω) with gain Ts and a cutoff frequency Ωc greater than Ωm and
less than Ωs − Ωm. For simplicity, a half-band ideal filter (Ωc = Ωs/2) is typically used in exercises.
Aliasing - revisited
 On the other hand, if Ωs < 2Ωm, due to the overlap of the shifted
replicas of Ga(Ω) in the spectrum of Gp(Ω), the signal cannot be
recovered by filtering.
ª This is simply because the filtering of overlapped sections will cause a distortion by
folding, or aliasing, the areas immediately outside the baseband back into the
baseband.
h_r(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} H_r(\Omega)\, e^{j\Omega t}\, d\Omega = \frac{T_s}{2\pi}\int_{-\Omega_c}^{\Omega_c} e^{j\Omega t}\, d\Omega = \frac{2\sin(\Omega_c t)}{\Omega_s t}, \quad -\infty < t < \infty
Final Words
 Note that the ideal lowpass filter is infinitely long, and therefore not
realizable. A non-ideal filter will not have all the zero crossings that
ensure perfect reconstruction.
 Furthermore, if a signal is not bandlimited, it cannot be reconstructed,
whatever the sampling frequency is.
 Therefore, an anti-aliasing filter is typically used before sampling
to limit the highest frequency of the signal, so that a suitable sampling
frequency can be used, and so that the signal can be reconstructed even
with a non-ideal lowpass filter!
Lecture 13
Sampling
Theorem
Review &
Exercises
X_s(\Omega) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} X_c(\Omega - k\Omega_s), \qquad \omega = \Omega T_s
Effect of Sampling
in the Frequency Domain
Sampling Explained…
…Graphically
ª Multiply ga(t) by the impulse train p(t):

p(t) = \sum_{n=-\infty}^{\infty}\delta(t-nT_s) \;\overset{\mathcal{F}}{\Longleftrightarrow}\; P(\Omega) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty}\delta(\Omega - k\Omega_s)

ª The product is the sampled impulse train, whose spectrum is a periodic replication of Ga(Ω):

g_p(t) = g_a(t)\,p(t) = \sum_{n=-\infty}^{\infty} g_a(nT_s)\,\delta(t-nT_s) \;\overset{\mathcal{F}}{\Longleftrightarrow}\; G_p(\Omega) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} G_a(\Omega - k\Omega_s)

ª Lowpass filter gp(t) with the half-band ideal reconstruction filter (Ωc = Ωs/2):

H_r(\Omega) = \begin{cases} T_s, & |\Omega| \le \Omega_c \\ 0, & |\Omega| > \Omega_c \end{cases}
\qquad
h_r(t) = \frac{2\sin(\Omega_c t)}{\Omega_s t} = \frac{\sin(\pi t/T_s)}{\pi t/T_s} = \mathrm{sinc}(t/T_s)

ª The reconstructed signal is the sinc interpolation of the samples:

\hat{g}_a(t) = \sum_{n=-\infty}^{\infty} g[n]\, \frac{\sin[\pi(t-nT_s)/T_s]}{\pi(t-nT_s)/T_s}
Aliasing
 To see the full effect of aliasing, as well as to get an insight into the story
behind the word “aliasing”, consider the sinusoid xc(t) = cos(Ω0t), whose spectrum is

X_c(\Omega) = \pi[\delta(\Omega - \Omega_0) + \delta(\Omega + \Omega_0)]

 If Ω0 > Ωs/2, the ideal half-band LPF passes spectral components at
±(Ωs − Ω0), rather than ±Ω0.
 The reconstructed signal is then cos((Ωs − Ω0)t), not cos(Ω0t).
ª This is because frequencies beyond Ωs/2 have folded into the baseband area.
Hence we get an alias of the original signal.
ª E.g. a sinusoid at 5kHz, sampled at 8kHz, would appear as 3kHz when reconstructed!
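The folding arithmetic behind that 5 kHz → 3 kHz example can be captured in a tiny function; `alias_frequency` below is a hypothetical helper, not something from the slides:

```python
def alias_frequency(f, fs):
    """Apparent frequency (hypothetical helper) of a sinusoid at f Hz sampled
    at fs Hz, after reconstruction with an ideal half-band lowpass filter."""
    f_mod = f % fs                 # remove whole multiples of fs
    return min(f_mod, fs - f_mod)  # fold into the 0..fs/2 baseband

print(alias_frequency(5000, 8000))  # 3000 (Hz), as in the example above
print(alias_frequency(3000, 8000))  # 3000: already below fs/2, unchanged
```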
Recovering the
Original Signal
 Graphically:
ª Observe that hr(0)=1 and hr(nTs)=0 for every integer n≠0. Since at t=0 the shifted hr(t) is weighted by g[0],
we obtain exactly g[0] at t=0; the contribution to g[0] from all other shifted hr(t) is zero.
ª The same can be said for all other sample points of g[n]: for any n, only one of the shifted
hr(t) contributes at time nTs; all others are zero there.
ª Thus the ideal lowpass filter fills in between the samples by interpolating with the sinc
function:

\hat{g}_a(t) = g_p(t) * h_r(t) = \sum_{n=-\infty}^{\infty} g[n]\, \frac{\sin[\pi(t-nT_s)/T_s]}{\pi(t-nT_s)/T_s}
Some Key points to Remember
 The lowpass filter is essentially doing a “sinc” interpolation.
 The sinc() function in Matlab computes sin(pi*x)/(pi*x)
ª The sinc function and the rectangular function are Fourier transform pairs. Therefore, the
impulse response of an ideal lowpass filter is a sinc function
About the Sinc Function
t=-3:0.001:3;
xa1=sinc(1*t);   %sinc at three different time scalings
xa2=sinc(10*t);
xa3=sinc(100*t);
subplot(311); plot(t,xa1); grid
subplot(312); plot(t,xa2); grid
subplot(313); plot(t,xa3); grid
About the Sinc Function
XA1=abs(fft(xa1));
XA2=abs(fft(xa2));
XA3=abs(fft(xa3));
Fs=1/0.001;
f=linspace(-Fs/2, Fs/2, length(xa1));
subplot(311)
plot(f,fftshift(XA1)); grid
subplot(312)
plot(f,fftshift(XA2)); grid
subplot(313)
plot(f,fftshift(XA3)); grid
Example
 Consider the analog signal xa(t)=cos(20πt), 0≤t≤1. It is sampled at
Ts=0.01, 0.05, 0.075 and 0.1 second intervals to obtain x[n];
ª For each Ts, plot x[n]
ª Reconstruct the analog signal ya(t) from the samples of x[n] by means of sinc
interpolation (low pass filtering). Use ∆t=0.001. Estimate the frequency in ya(t)
from your plot. Ignore the end effects.
ª Try at home with the sin function. What did you observe?
Solution
%mysample.m
%DSP 0909.351.01 spring 04 Sampling example
clear; close all
t=0:0.001:1; xa=cos(20*pi*t); %xa is the original analog signal
% Part 1 - plotting the signals
Ts=0.01; N1=round(1/Ts); n1=0:N1; %Create the discrete time base for 1 second
x1=cos(20*pi*n1*Ts); %Here is the signal sampled at Ts=0.01 sec.
subplot(411); plot(t,xa, n1*Ts, x1, 'o') %plot xa and x1 on top of each other.
% Note that the time base for the discrete signal is n1*Ts
axis([0, 1, -1.1, 1.1]); ylabel('x1(n)'), title(' Sampling of xa(t) at Ts=0.01s')
Ts=0.05; N2=round(1/Ts); n2=0:N2; %Create the discrete time base for 1 second
x2=cos(20*pi*n2*Ts); %Here is the signal sampled at Ts=0.05 sec.
subplot(412); plot(t,xa, n2*Ts, x2, 'o') %plot xa and x2 on top of each other.
% Note that the time base for the discrete signal is n2*Ts
axis([0, 1, -1.1, 1.1]); ylabel('x2(n)'), title(' Sampling of xa(t) at Ts=0.05s')
Ts=0.075; N3=round(1/Ts); n3=0:N3; %Create the discrete time base for 1 second
x3=cos(20*pi*n3*Ts); %Here is the signal sampled at Ts=0.075 sec.
subplot(413); plot(t,xa, n3*Ts, x3, 'o') %plot xa and x3 on top of each other.
% Note that the time base for the discrete signal is n3*Ts
axis([0, 1, -1.1, 1.1]); ylabel('x3(n)'), title(' Sampling of xa(t) at Ts=0.075s')
Ts=0.1; N4=round(1/Ts); n4=0:N4; %Create the discrete time base for 1 second
x4=cos(20*pi*n4*Ts); %Here is the signal sampled at Ts=0.1 sec.
subplot(414); plot(t,xa, n4*Ts, x4, 'o') %plot xa and x4 on top of each other.
% Note that the time base for the discrete signal is n4*Ts
axis([0, 1, -1.1, 1.1]); ylabel('x4(n)'), title(' Sampling of xa(t) at Ts=0.1s')
Solution
%Part 2 - Reconstructing the signal
figure
Ts=0.01; Fs=1/Ts;
xa1=x1*sinc(Fs*(ones(length(n1),1)*t-(n1*Ts)'*ones(1,length(t))));
subplot(411); plot(t, xa1); axis([0, 1, -1.1, 1.1]);
ylabel('xa(t)'); title(' Reconstruction of xa(t) with Ts=0.01s')
Ts=0.05; Fs=1/Ts;
xa2=x2*sinc(Fs*(ones(length(n2),1)*t-(n2*Ts)'*ones(1,length(t))));
subplot(412); plot(t, xa2); axis([0, 1, -1.1, 1.1]);
ylabel('xa(t)'); title(' Reconstruction of xa(t) with Ts=0.05s')
Ts=0.075; Fs=1/Ts;
xa3=x3*sinc(Fs*(ones(length(n3),1)*t-(n3*Ts)'*ones(1,length(t))));
subplot(413); plot(t, xa3); axis([0, 1, -1.1, 1.1]);
ylabel('xa(t)'); title(' Reconstruction of xa(t) with Ts=0.075s')
Ts=0.1; Fs=1/Ts;
xa4=x4*sinc(Fs*(ones(length(n4),1)*t-(n4*Ts)'*ones(1,length(t))));
subplot(414); plot(t, xa4); %axis([0, 1, -1.1, 1.1]);
ylabel('xa(t)'); title(' Reconstruction of xa(t) with Ts=0.1s')

The interpolation formula being implemented:

\hat{x}_a(t) = x(nT_s) * h_r(t) = \sum_{n=-\infty}^{\infty} x[n]\, \frac{\sin[\pi(t-nT_s)/T_s]}{\pi(t-nT_s)/T_s} = \sum_{n=-\infty}^{\infty} x[n]\, \mathrm{sinc}\big(F_s(t-nT_s)\big)
Solution
figure
Ts=0.001; Fs=1/Ts;
%Note that since in MATLAB the actual signal is
%sampled at 1000 Hz (delta_t=0.001), we need to use that number to
%get the correct Matlab interpretation.
Decimation
&
Interpolation
ª If x[n]={…, x[-2], x[-1], x[0], x[1], x[2], …}, then the sequence downsampled by M is
{…, x[-2M], x[-M], x[0], x[M], x[2M], …}
ª Mathematically

x_{(M)}[n] \;\overset{\Delta}{=}\; (\downarrow M)\,x[n] = x[Mn]

x[n] = 2 7 3 5 2 7 4 9 9 6 5 9 8 6 8 7 3 3 3 5 7 3 8 6 4 7 5 4 7 6 8 10 5 9 2 10 3 3 9 7 1 0 9 2 3 7 3 5 1 10
x(5)[n] = 2 7 5 7 7 7 8 10 1 7
Downsampling in Matlab
x=round(10*rand(50,1));
subplot(211)
stem(x); grid;
title('Some random sequence')
xlabel('Original sequence index')
x2=x(1:5:50);       %manual downsampling by 5
x3=downsample(x,5); %same result with the toolbox function
subplot(212)
stem(x2); grid
title('Sequence downsampled by 5')
xlabel('New sequence index')
Downsampling in
Frequency Domain
 To see the effect of downsampling in the frequency domain, let us first look at the
special case of downsampling by 2:

X_{(2)}(\omega) = \frac{1}{2}\left[ X\!\left(\frac{\omega}{2}\right) + X\!\left(\frac{\omega+2\pi}{2}\right) \right]

ª The frequency axis compresses (equivalently, the spectrum stretches): content at the
original highest frequency Ωh appears at Ωh' = 2Ωh, while the spectral replicas remain
centered at multiples of Ωs.
Antialiasing Filter
 If a discrete sequence is downsampled by a factor of M, then the
original highest frequency in the signal, Ωh, must be low enough to
accommodate the M-fold reduction in the sampling rate.
ª More specifically, Ωh < Ωs/(2M) must be satisfied to avoid aliasing.
ª Since Ωs = 2πfs = 2π/Ts ⇒ this condition is equivalent to saying

\Omega_h T_s \le \pi/M \;\Longleftrightarrow\; \omega_h \le \pi/M

ª In order to ensure that the highest frequency of the downsampled signal does not
exceed the Nyquist rate, an anti-aliasing decimation prefilter with a cutoff frequency of
π/M is typically employed. The entire procedure is then called decimation.
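The filter-then-downsample procedure can be sketched with a windowed-sinc prefilter in NumPy (a crude illustration; MATLAB's decimate uses a Chebyshev IIR filter instead, and the 30 Hz test tone, 1000 Hz rate, and 101-tap length are all arbitrary choices):

```python
import numpy as np

def decimate_sketch(x, M, taps=101):
    """Sketch of decimation: windowed-sinc lowpass at pi/M, then keep every Mth sample."""
    n = np.arange(taps) - (taps - 1) // 2
    h = np.where(n == 0, 1.0 / M,
                 np.sin(np.pi * n / M) / (np.pi * np.where(n == 0, 1, n)))
    h *= np.hamming(taps)                # anti-aliasing prefilter, cutoff pi/M
    y = np.convolve(x, h, mode='same')   # filter at the original rate
    return y[::M]                        # then downsample by M

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 30 * t)           # 30 Hz tone, well below the new Nyquist rate
y = decimate_sketch(x, 5)                # new rate 200 Hz; the 30 Hz tone survives
print(len(y))                            # 200
```

With 200 samples at 200 Hz over one second, the DFT bins are 1 Hz apart, so the dominant bin of y sits at index 30.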
Decimation
DECIMATE filters the data with an eighth order Chebyshev Type I lowpass
filter with cutoff frequency .8*(Fs/2)/R, before resampling.
x=round(10*rand(50,1));
x2=downsample(x,5);
x3=decimate(x,5);
subplot(311)
stem(x); grid
subplot(312)
stem(x2); grid
subplot(313)
stem(x3); grid
x2 = 4 9 2 5 6 8 7 7 1 8
x3 = 6.79 6.09 4.93 5.88 5.12 5.92 6.31 5.16 6.27 1.22
Spectrum of the
Downsampled Signal
 For the most general case, the following theorem applies.
 Theorem: Let x[n] and X(ω) denote a discrete sequence and its
DTFT, whereas x(M)[n] and X(M)(ω) denote the signal subsampled by
a factor of M and its DTFT, respectively. Then

X_{(M)}(\omega) = \frac{1}{M}\sum_{k=0}^{M-1} X\!\left(\frac{\omega + 2\pi k}{M}\right)
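The theorem has a direct DFT counterpart that is easy to verify numerically: the (N/M)-point DFT of x[Mn] is the average of M shifted copies of the N-point DFT of x[n]. A quick NumPy check (the random length-12 signal and M = 3 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 12, 3
x = rng.standard_normal(N)
X = np.fft.fft(x)

# DFT of the downsampled sequence, computed directly
Xd_direct = np.fft.fft(x[::M])

# Theorem (DFT form): average of M copies of X shifted by multiples of N/M
Xd_theory = np.array([np.mean(X[k + np.arange(M) * (N // M)])
                      for k in range(N // M)])
print(np.allclose(Xd_direct, Xd_theory))   # True
```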
ª If x[n]={…, x[-2], x[-1], x[0], x[1], x[2], …}, then a sequence upsampled by 3 is
{…, x[-2], 0, 0, x[-1], 0, 0, x[0], 0, 0, x[1], 0, 0, x[2], …}
ª Mathematically

y[n] \;\overset{\Delta}{=}\; (\uparrow M)\,x[n] = \begin{cases} x[n/M], & n = 0, \pm M, \pm 2M, \ldots \\ 0, & \text{otherwise} \end{cases}
Upsampling in Matlab
x=[1 2 3 4 5 4 3 2 1];
x2=upsample(x,5);
subplot(211)
stem(x, 'filled'); grid
title('Some sequence x[n]')
subplot(212)
stem(x2, 'filled'); grid
title('x[n] upsampled by 5')
Y(\omega) = X(\omega M)

 This means that the spectral components of the upsampled signal are
compressed by a factor of M (or the frequency axis is stretched by a
factor of M).
 Since we are increasing the sampling rate (the new sampling rate is
MΩs), there is no danger of aliasing; however, there are spurious
spectral components, called images, that need to be removed with a
postprocessing (anti-imaging) lowpass filter.
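The spectrum relation Y(ω) = X(ωM) has a clean DFT form: the DFT of the zero-stuffed sequence is the original DFT repeated M times, which is exactly where the images come from. A short NumPy check (the test sequence and M = 3 are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0])
M = 3
y = np.zeros(len(x) * M)
y[::M] = x                       # insert M-1 zeros between samples

# Spectrum of the upsampled signal: the original spectrum tiled M times
Y = np.fft.fft(y)
X = np.fft.fft(x)
print(np.allclose(Y, np.tile(X, M)))  # True: the M-1 extra copies are the images
```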
Interpolation in Matlab
Y = INTERP(X,R) resamples the sequence in vector X at R times the original sample rate.
The resulting resampled vector Y is R times longer, LENGTH(Y) = R*LENGTH(X).
A symmetric filter, B, allows the original data to pass through unchanged and
interpolates between so that the mean square error between them and their ideal
values is minimized.
x=[1 2 3 4 5 4 3 2 1];
x2=upsample(x,5);
x3=interp(x,5);
subplot(311)
stem(x,'filled'); grid;
subplot(312)
stem(x2,'filled'); grid;
subplot(313)
stem(x3,'filled'); grid;
Interpolation in Matlab
X=abs(fft(x, 512));
X2=abs(fft(x2, 512));
X3=abs(fft(x3, 512));
Discrete Fourier
Transform
& Fast Fourier
Transform
n=0:31; k=0:31;
x=0.9.^n;
w=linspace(0, 2*pi, 512);
K=linspace(0, 2*pi, 32);
X1=1./(1-0.9*exp(-j*w));                %DTFT (closed form)
X2=(1-(0.9*exp(-j*(2*pi/32)*k)).^32)./ ...
   (1-0.9*exp(-j*(2*pi/32)*k));         %DTFT sampled at w=2*pi*k/32
X=fft(x);                               %32-point DFT
subplot(311)
plot(w, abs(X1)); grid
subplot(312)
stem(K, abs(X2), 'r', 'filled'); grid
subplot(313)
stem(K, abs(X), 'g', 'filled'); grid
Matrix Computation
of DFT
 DFT has a simple matrix implementation: the DFT samples are defined by

X[k] = \sum_{n=0}^{N-1} x[n]\, W_N^{kn}, \quad 0 \le k \le N-1

Note that the DFT and IDFT matrices are related to each other by

D_N^{-1} = \frac{1}{N} D_N^{*}
In Matlab
dftmtx() Discrete Fourier transform matrix.
dftmtx(N) is the N-by-N complex matrix of values around the unit-
circle whose inner product with a column vector of length N yields the
discrete Fourier transform of the vector. If X is a column vector of
length N, then dftmtx(N)*X yields the same result as FFT(X);
however, FFT(X) is more efficient.
D = dftmtx(N) returns the N-by-N complex matrix D that, when
multiplied into a length N column vector x as y = D*x, computes the
discrete Fourier transform of x.
The inverse discrete Fourier transform matrix is
Ai = conj(dftmtx(n))/n
DTFT ⇔ DFT
 We now look more closely at the relationship between DTFT and DFT.
ª DTFT is a continuous transform. Sampling the DTFT at regularly spaced intervals around
the unit circle gives the DFT.
ª The sampling operation in one domain causes the (inverse) transform in the other domain to
be periodic.
ª Just like we can reconstruct a continuous time signal from its samples, the DTFT can also be
synthesized from its DFT samples, as long as the number of points N at which the DFT is
computed is equal to or larger than the number of samples of the original signal.
• That is, given the N-point DFT X[k] of a length-N sequence x[n], its DTFT X(ω) can be uniquely
determined from X[k].

x[n] \overset{\mathrm{DTFT}}{\Longleftrightarrow} X(\omega)
\qquad
\tilde{x}[n] \overset{\mathrm{DFT}}{\Longleftrightarrow} X[k]

X[k] = X(\omega)\big|_{\omega = 2\pi k/N}
\qquad
\tilde{x}[n] = \sum_{m=-\infty}^{\infty} x[n+mN], \quad 0 \le n \le N-1

Thus x̃[n] is obtained from x[n] by adding an infinite number of shifted replicas of x[n],
each replica shifted by an integer multiple of N sampling instants, and observing the
sum only over the interval 0 ≤ n ≤ N−1.

X(\omega) = \frac{1}{N}\sum_{k=0}^{N-1} X[k]\, \frac{\sin\!\left(\frac{\omega N - 2\pi k}{2}\right)}{\sin\!\left(\frac{\omega N - 2\pi k}{2N}\right)}\, e^{-j\left(\omega - \frac{2\pi k}{N}\right)\frac{N-1}{2}}

If we compute the DFT at M < N points, time-domain aliasing occurs, and the DTFT
cannot be reconstructed from the DFT samples X[k]!
The Fast Fourier Transform
(FFT)
 By far one of the most influential algorithms ever developed in signal
processing, revolutionizing the field
 FFT: computationally efficient calculation of the frequency spectrum
ª Made many advances in signal processing possible
ª Drastically reduces the number of additions and multiplications necessary to
compute the DFT
 Many competing algorithms
ª Decimation in time FFT
ª Decimation in frequency FFT
 Makes strategic use of two simple identities of W_N = e^{-j2\pi/N}:

W_N^2 = e^{-j\frac{2\pi}{N}\cdot 2} = e^{-j\frac{2\pi}{N/2}} = W_{N/2}
\qquad
W_N^{k+\frac{N}{2}} = e^{-j\frac{2\pi}{N}k}\cdot e^{-j\frac{2\pi}{N}\cdot\frac{N}{2}} = e^{-j\frac{2\pi}{N}k}\cdot e^{-j\pi} = -W_N^k
FFT: Decimation in Time
 Assume that the signal is of length N=2p, a power of two. If it is not,
zero-pad the signal with enough number of zeros to ensure power-of-
two length.
 N-point DFT can be computed as two N/2 point DFTs, both of which
can then be computed as two N/4 point DFTs
ª Therefore an N-point DFT can be computed as four N/4 point DFTs.
ª Similarly, an N/4 point DFT can be computed as two N/8 point DFTs ⇒ the
entire N-point DFT can be computed as eight N/8 point DFTs
ª Continuing in this fashion, an N-point DFT can be computed as N/2 2-point
DFTs
• A two point DFT requires no multiplications and just two additions.
• We will see that we will need additional multiplications and additions to combine the 2-
point DFTs; however, overall, the total number of operations will be much fewer than
that would be required by the regular computation of DFT
Computational
Complexity of DFT
 Q: How many multiplications and additions are needed to compute
DFT?

X[k] = \sum_{n=0}^{N-1} x[n]\, W_N^{nk}, \quad 0 \le k \le N-1
 Note that for each “k” we need N complex multiplications, and N-1
complex additions
ª Each complex multiplication is four real multiplications and two real additions
ª A complex addition requires two real additions
ª So for N values of “k”, we need N*N complex multiplications and N*(N-1)
complex additions
• This amounts to N2 complex multiplications and N*(N-1)~N2 (for large N) complex
additions
• N2 complex multiplications: 4N2 real multiplications and ~2N2 real additions
• N2 complex additions: 2N2 real additions
ª A grand total of 4N² real multiplications and 4N² real additions:
• For, say, a 1000-point signal: 4,000,000 multiplications and 4,000,000 additions: OUCH!
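The operation counts above, and the radix-2 FFT counts derived in the following slides, can be tabulated in a few lines (a sketch using the per-stage counts from these slides; constant factors vary between FFT implementations):

```python
import math

def direct_dft_real_ops(N):
    # ~4*N^2 real multiplications and ~4*N^2 real additions, as counted above
    return 4 * N**2, 4 * N**2

def fft_real_ops(N):
    p = int(math.log2(N))                # number of stages for N = 2^p
    # (N/2) complex multiplies per stage: 4 real mults + 2 real adds each;
    # plus N complex additions per stage: 2 real adds each
    mults = 4 * (N // 2) * p
    adds = 2 * (N // 2) * p + 2 * N * p
    return mults, adds

print(direct_dft_real_ops(1024))   # (4194304, 4194304)
print(fft_real_ops(1024))          # (20480, 30720)
```

For N = 1024 the direct DFT needs millions of real operations, the FFT only tens of thousands: the promised N² → N log₂N reduction.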
Two-point DFT
 How many operations do we need for a 2-point DFT?

X[k] = \sum_{n=0}^{1} x[n]\, W_2^{nk}, \quad 0 \le k \le 1

ª X[0] = x[0] + x[1]
ª X[1] = x[0] + x[1]\,e^{-j\pi} = x[0] - x[1]

Decimation in time splits the DFT into its even- and odd-indexed halves:

X[k] = \sum_{n=0}^{(N/2)-1} x[2n]\, W_{N/2}^{kn} + W_N^k \sum_{n=0}^{(N/2)-1} x[2n+1]\, W_{N/2}^{kn} = P[k] + W_N^k\, S[k], \quad 0 \le k \le N-1

where P[k] and S[k] are evaluated for k = 0 … (N/2)−1 and, being (N/2)-periodic, repeat for k = (N/2) … N−1.
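The even/odd split above is easy to verify against a full FFT; a short NumPy check (the random length-8 signal is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
x = rng.standard_normal(N)

P = np.fft.fft(x[0::2])          # N/2-point DFT of the even-indexed samples
S = np.fft.fft(x[1::2])          # N/2-point DFT of the odd-indexed samples
k = np.arange(N)
W = np.exp(-2j * np.pi * k / N)  # twiddle factors W_N^k

# P[k] and S[k] are (N/2)-periodic, so index them modulo N/2
X_split = P[k % (N // 2)] + W * S[k % (N // 2)]
print(np.allclose(X_split, np.fft.fft(x)))   # True
```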
The Butterfly Process
 The basic building block in this process can be represented as a butterfly:
P[k] comes from the even-indexed samples, S[k] from the odd-indexed samples.
ª A direct butterfly costs two multiplications & two additions.
 But we can do even better, just by moving the multiplier W_N^k before the
node for S[k]:
ª The reduced butterfly costs 1 multiplication and two additions!
 We need N/2 reduced butterflies in stage 1
ª A total of N/2 multiplications and 2·(N/2) = N additions
Stage 1 FFT
 We can continue the same process as long as we have samples to split in half:
ª Each N/2-point DFT ((N/2)² multiplications + (N/2)² additions if computed directly)
can be computed as two N/4-point DFTs + a reduced butterfly stage
ª Each N/4-point DFT can be computed as two N/8-point DFTs + a reduced butterfly stage
ª …
ª … down to 2-point DFTs + a reduced butterfly stage
ª Continuing in this fashion there are p−1 such splittings, where p = log2 N
8-point FFT – Stage 1
8-Point FFT – Stage 2
[Diagram: the 8-point FFT built from two 4-point DFTs; the input samples are
reordered into even/odd (bit-reversed) order: x[0], x[2], x[4], x[6], x[1], x[3], x[5], x[7].]
Note that further reductions can be obtained by avoiding the trivial multiplications

W_N^{N/2} = -1, \quad W_N^{N/4} = -j, \quad W_N^{3N/4} = j, \quad W_N^0 = 1
Digital Signal Processing, © 2004 Robi Polikar, Rowan University
So How did we do?
ª Due: April 7
Lecture 15
Discrete Fourier
Transform
X[k] = \sum_{n=0}^{N-1} x[n]\, W_N^{kn}, \quad 0 \le k \le N-1
Inverse DFT
(DFT Synthesis)
 The inverse discrete Fourier transform, also known as the synthesis
equation, is given as

x[n] = \frac{1}{N}\sum_{k=0}^{N-1} X[k]\, e^{j\frac{2\pi}{N}nk} = \frac{1}{N}\sum_{k=0}^{N-1} X[k]\, W_N^{-kn}, \quad 0 \le n \le N-1
DFT properties (for length-N sequences g[n] ↔ G[k] and h[n] ↔ H[k]):

ª Circular time shift: g[(n-n_0)_N] \Longleftrightarrow e^{-j\frac{2\pi}{N}kn_0}\, G[k]
ª Circular frequency shift: e^{j\frac{2\pi}{N}k_0 n}\, g[n] \Longleftrightarrow G[(k-k_0)_N]
ª Circular convolution: (g[n] \circledast h[n])_N \Longleftrightarrow G[k]\, H[k]
ª Multiplication: g[n]\, h[n] \Longleftrightarrow \frac{1}{N}\,(G[k] \circledast H[k])_N \; (circular convolution in frequency)
DFT Symmetry relations
Circular Convolution
 Since the convolution operation involves shifting, we need to redefine the
convolution for circularly shifted sequences.
ª The process is fairly straightforward; one simply needs to ensure that the shifting in the
convolution is done in a circular fashion.
ª Example (Example 2.7): compute the circular convolution of the following sequences:
x1[n]=[-1 2 -3 2], x2[n]=[-0.5 1 1.5] (zero-padded to length 4)
• Recall the expression:

y[n] = (x_1[n] \circledast x_2[n])_4 = \sum_{m=0}^{3} x_1[m]\, x_2[(n-m)_4] = \sum_{m=0}^{3} x_2[m]\, x_1[(n-m)_4]

x1[-m]=[2 -3 2 -1] ⇒ this sequence has nonzero values for m=-3, -2, -1 and 0. However,
circular shift is only defined on the original interval [0, 3]. Therefore we convert this
shift to a circular shift, by rotating x1[-m] enough to land all values in the original domain:

x_c[n] = x[(n-L) \bmod N]

x1[(-m)4]=[-1 2 -3 2] ⇒ y[0]=Σx2[m]·x1[(-m)4]=(-1)(-0.5)+2(1)+(-3)(1.5)+2(0)=-2
x1[(1-m)4]=[2 -1 2 -3] (circular shift right) ⇒ y[1]=Σx2[m]·x1[(1-m)4]=2(-0.5)+(-1)(1)+2(1.5)+(-3)(0)=1
x1[(2-m)4]=[-3 2 -1 2] ⇒ y[2]=Σx2[m]·x1[(2-m)4]=(-3)(-0.5)+2(1)+(-1)(1.5)+2(0)=2
x1[(3-m)4]=[2 -3 2 -1] ⇒ y[3]=Σx2[m]·x1[(3-m)4]=2(-0.5)+(-3)(1)+2(1.5)+(-1)(0)=-1
Circular Convolution
Example
 Compute the circular convolution of x[n]=[1 3 2 -1 4] and h[n]=[2 0 1 7 -3]:

y[n] = (x[n] \circledast h[n])_5 = \sum_{m=0}^{4} x[m]\, h[(n-m)_5] = \sum_{m=0}^{4} h[m]\, x[(n-m)_5]

x[m]=[1 3 2 -1 4]
h[-m]=[-3 7 1 0 2]
h[(-m)5]=[2 -3 7 1 0] ⇒ y[0]=Σx[m]·h[(-m)5]=1(2)+3(-3)+2(7)+(-1)(1)+4(0)=6
h[(1-m)5]=[0 2 -3 7 1] ⇒ y[1]=Σx[m]·h[(1-m)5]=-3
h[(2-m)5]=[1 0 2 -3 7] ⇒ y[2]=Σx[m]·h[(2-m)5]=36
h[(3-m)5]=[7 1 0 2 -3] ⇒ y[3]=Σx[m]·h[(3-m)5]=-4
h[(4-m)5]=[-3 7 1 0 2] ⇒ y[4]=Σx[m]·h[(4-m)5]=28
x=[1 3 2 -1 4];
h=[2 0 1 7 -3];
x2=[1 3 2 -1 4 0 0 0 0]; %zero-padded to length 5+5-1=9, so circular = linear convolution
h2=[2 0 1 7 -3 0 0 0 0];
C1=conv(x,h);
X=fft(x2); H=fft(h2);
C2=ifft(X.*H);
subplot(411)
stem(x, 'filled'); grid
subplot(412)
stem(h, 'filled'); grid
subplot(413)
stem(C1, 'filled'); grid
subplot(414)
stem(real(C2), 'filled'); grid
DFT Examples
 We will compute the indicated N-point DFTs of the following
sequences
ª x[n]=δ[n] (general N)
ª x[n]=u[n]-u[n-N] (general N)
ª x[n]=(0.5)^n (N=32)
In Matlab
 In Matlab, the fft() computes DFT using a fast algorithm, called fast Fourier
transform (FFT).
 X = fft(x) returns the discrete Fourier transform (DFT) of vector X, computed with a
fast Fourier transform (FFT) algorithm.
ª If x is a matrix, fft returns the Fourier transform of each column of the matrix. In this case,
the lengths of X and x are identical.
ª X = fft(x,N) returns the N-point DFT. If the length of x is less than N, x is padded with
trailing zeros to length N. If the length of x is greater than N, the sequence x is truncated.
ª The N points returned by fft correspond to frequencies in the [0, 2π] range, equally spaced
with an interval of 2π/N.
ª Note that the Nth FFT point corresponds to 2π, which in turn corresponds to the sampling
frequency.
ª If x[n] is real, X[k] is symmetric. Using the fftshift() function shifts the center of symmetry
so that the FFT is given on the [-π, π] interval, rather than [0, 2π].
 x=ifft(X,N) returns the N-point inverse discrete Fourier transform
Back to Example
Let’s show that
i. DFT is indeed the sampled version of
DTFT and
ii. DFT and FFT produce identical results
n=0:31; k=0:31;
x=0.9.^n;
w=linspace(0, 2*pi, 512);
K=linspace(0, 2*pi, 32);
X1=1./(1-0.9*exp(-j*w));
X2=(1-(0.9*exp(-j*(2*pi/32)*k)).^32)./ ...
   (1-0.9*exp(-j*(2*pi/32)*k));
X=fft(x);
subplot(311)
plot(w, abs(X1)); grid
subplot(312)
stem(K, abs(X2), 'r', 'filled'); grid
subplot(313)
stem(K, abs(X), 'g', 'filled'); grid
Homework
 Questions from Chapter 2
ª 30, 32, 34, 35, 36, 46 (also verify using fft function), 50, 51, 52
ª Due: April 7
Lecture 17
The Z-Transform
ª The DTFT may not exist for many signals of practical or analytical interest, whose
frequency analysis therefore cannot be obtained through the DTFT
The z-Transform
 A generalization of the DTFT leads to the z-transform, which may
exist for many signals for which the DTFT does not.
ª DTFT is in fact a special case of the z-transform
• …just like the Fourier transform is a special case of _____________(?)
 The area where this is satisfied defines the ROC, which in general is
an annular region of the z-plane (since “z” is a complex number,
constant z-values describe a circle in the z-plane)
R^- < |z| < R^+, \quad \text{where } 0 \le R^- < R^+ \le \infty
Examples
 Determine the z-transform and the corresponding ROC of the causal
sequence x[n]=αnu[n]
X(z) = \sum_{n=-\infty}^{\infty} \alpha^n u[n]\, z^{-n} = \sum_{n=0}^{\infty} \alpha^n z^{-n}

This power (geometric) series converges to

X(z) = \frac{1}{1-\alpha z^{-1}}, \quad \text{for } |\alpha z^{-1}| < 1
= \frac{z}{z-\alpha}, \quad \text{for } \infty > |z| > |\alpha|

⇒ the ROC is the annular region |z| > |α|.
• Note that this sequence does not have a DTFT if |α| > 1; however, it does have
a z-transform!
• This is a right-sided sequence; right-sided sequences have an ROC that is the
outside of a circular region (here |z| > |α|).
Examples
 The z-transform of the unit step sequence u[n] can be obtained from

X(z) = \frac{1}{1-\alpha z^{-1}}, \quad \text{for } |\alpha z^{-1}| < 1

by setting α = 1:

U(z) = \frac{1}{1-z^{-1}} = \frac{z}{z-1}, \quad \text{for } |z^{-1}| < 1

The ROC is the annular region |z| > 1. Note that this sequence also does
not have a DTFT!
Examples
 Now consider the anti-causal sequence y[n] = −αⁿu[−n−1]
Y(z) = Σ_{n=−∞}^{−1} (−αⁿ) z^{−n} = −Σ_{m=1}^{∞} α^{−m} z^{m}
     = −α^{−1}z Σ_{m=0}^{∞} (α^{−1}z)^{m} = −α^{−1}z / (1 − α^{−1}z)
     = 1/(1 − αz^{−1}) = z/(z − α),  for |α^{−1}z| < 1 ⇒ |z| < |α|
ª Now recall that the DTFT is the z-transform evaluated on the unit circle, that is,
for z = e^{jω}. Therefore, the DTFT of a sequence exists (that is, the series
converges) if and only if the ROC includes the unit circle!
ª The DTFT for the above example clearly does not exist, since the ROC does not
include the unit circle!
ª We must add, though, that the existence of the DTFT does not guarantee the
existence of the z-transform.
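The ROC condition can also be seen numerically: partial sums of Σ αⁿz⁻ⁿ settle to 1/(1 − αz⁻¹) only when |z| > |α|. A small sketch (α = 0.8 and the two evaluation points are arbitrary illustration values):

```python
import numpy as np

def partial_sum(alpha, z, K):
    """Partial sum sum_{n=0}^{K} alpha^n z^(-n) of the causal z-transform series."""
    n = np.arange(K + 1)
    return np.sum((alpha / z) ** n)

alpha = 0.8
inside = partial_sum(alpha, 1.0, 200)    # |z| = 1 > |alpha|: series converges
closed_form = 1 / (1 - alpha / 1.0)      # 1/(1 - alpha*z^-1) at z = 1
diverged = partial_sum(alpha, 0.5, 50)   # |z| = 0.5 < |alpha|: terms grow as 1.6^n
```

Inside the ROC the partial sum matches the closed form; outside it the terms blow up, which is exactly the |z| > |α| condition above.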
Commonly Used
z-Transform Pairs
[table of z-transform pairs for common sequences such as u[n] – not recovered]
Z-transform Properties
Let x1[n] ÅÆ X1(z), x2[n] ÅÆ X2(z), h[n] ÅÆ H(z) be z-transform pairs, with
individual ROCs of Rx1, Rx2, and Rh, respectively. Also assume that any ROC is
of the form rin < |z| < rout. Then the following hold:
Linearity: c1·x1[n] + c2·x2[n] ÅÆ c1·X1(z) + c2·X2(z),  ROC ⊇ Rx1 ∩ Rx2
Time shifting: x1[n−k] ÅÆ z^{−k}X1(z),  ROC: Rx1
Exponential scaling: aⁿx1[n] ÅÆ X1(z/a),  ROC: |a|·rin < |z| < |a|·rout
Time reversal: x1[−n] ÅÆ X1(1/z),  ROC: 1/rout < |z| < 1/rin
Convolution: x1[n] ∗ h[n] ÅÆ X1(z)H(z),  ROC ⊇ Rx1 ∩ Rh
Rational z-Transforms
 The z-transforms of LTI systems can be expressed as a ratio of two
polynomials in z^{−1}; hence they are rational transforms.
ª Starting with the constant coefficient linear difference equation representation of
an LTI system:
Σ_{i=0}^{N} a_i y[n−i] = Σ_{j=0}^{M} b_j x[n−j],  a0 = 1
and taking the z-transform of both sides:
Y(z) + a1 z^{−1}Y(z) + a2 z^{−2}Y(z) + … + aN z^{−N}Y(z) = b0 X(z) + b1 z^{−1}X(z) + … + bM z^{−M}X(z)
H(z) = Y(z)/X(z) = (b0 + b1 z^{−1} + b2 z^{−2} + … + bM z^{−M}) / (1 + a1 z^{−1} + a2 z^{−2} + … + aN z^{−N})
MATLAB uses this representation for all digital filters / systems / transfer functions!!!
Rational Z-Transforms
 A rational z-transform can alternately be written in factored form as
H(z) = b0 ∏_{l=1}^{M}(1 − ζ_l z^{−1}) / [a0 ∏_{l=1}^{N}(1 − p_l z^{−1})]
     = z^{(N−M)} · p0 ∏_{l=1}^{M}(z − ζ_l) / [d0 ∏_{l=1}^{N}(z − p_l)]
 At a root z=ζℓ of the numerator polynomial H(ζℓ)=0, and as a result,
these values of z are known as the zeros of H(z)
 At a root z=pℓ of the denominator polynomial, H(pℓ) → ∞, and as a
result, these values of z are known as the poles of H(z)
ª Note that H(z) has M finite zeros and N finite poles
ª If N > M there are additional N-M zeros at z = 0 (the origin in the z-plane)
ª If N < M there are additional M-N poles at z = 0
 Why is this important?
ª As we will see later, a digital filter is designed by placing appropriate
number of zeros at the frequencies (z-values) to be suppressed, and poles at
the frequencies to be amplified!
Some sense of Physical
Interpretation of this Math Crap!
What does this look like???
G(z) = (1 − 2.4z^{−1} + 2.88z^{−2}) / (1 − 0.8z^{−1} + 0.64z^{−2})
The mesh plot of 20·log10|G(z)| over the z-plane shows sharp peaks at the poles and deep dips at the zeros:
clear
close all
N=256;
rez=linspace(-4, 4, N);
imz=linspace(-4, 4, N);
%create a uniform grid over the z-plane
for n=1:N
    z(n,:)=ones(1,N)*rez(n)+j*ones(1,N).*imz(1:N);
end
%Compute the H function on the z-plane
for n=1:N
    for m=1:N
        Hz(n,m)=H_fun(z(n,m));
    end
end
%Logarithmic mesh plot of the H function
mesh(rez, imz, 20*log10(abs(Hz)))
=====================================
function Hz=H_fun(z);
%Compute the transfer function
Hz=(1-2.4*z^(-1)+2.88*z^(-2))/(1-0.8*z^(-1)+0.64*z^(-2));
In Matlab
 Matlab has simple functions to determine and plot the poles and zeros
of a function in the z-plane:
tf2zpk() Discrete-time transfer function to zero-pole conversion.
[Z,P,K] = TF2ZPK(NUM,DEN) finds the zeros, poles, and gain:
              (z-Z(1))(z-Z(2))...(z-Z(n))
H(z) = K ----------------------------------
              (z-P(1))(z-P(2))...(z-P(n))
from H(z) = NUM(z)/DEN(z).
b=[1 -2.4 2.88]; a=[1 -0.8 0.64];
[z,p,k] = tf2zpk(b,a)
z = 1.2000 + 1.2000i
    1.2000 - 1.2000i
p = 0.4000 + 0.6928i
    0.4000 - 0.6928i
k = 1
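The same conversion exists in SciPy under the same name; a sketch using the b and a vectors from the slide:

```python
import numpy as np
from scipy.signal import tf2zpk

# G(z) = (1 - 2.4 z^-1 + 2.88 z^-2) / (1 - 0.8 z^-1 + 0.64 z^-2)
b = [1, -2.4, 2.88]
a = [1, -0.8, 0.64]
z, p, k = tf2zpk(b, a)
# Zeros at 1.2 +/- 1.2j, poles at 0.4 +/- 0.6928j, gain 1
```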
In Matlab
zplane Z-plane zero-pole plot.
zplane(Z,P) plots the zeros Z and poles P (in column vectors) with the unit circle for reference.
Each zero is represented with a 'o' and each pole with a 'x' on the plot. Multiple zeros and poles are
indicated by the multiplicity number shown to the upper right of the zero or pole.
ZPLANE(B,A) where B and A are row vectors containing transfer function polynomial coefficients
plots the poles and zeros of B(z)/A(z).
Poles & ROC
 The ROC of a rational z-transform cannot contain any poles and is
bounded by the poles
ª For a right sided sequence, the ROC is outside of the largest pole
ª For a left sided sequence, the ROC is inside of the smallest pole
ª For a two sided sequence, some of the poles contribute to terms in the parent
sequence for n<0 and others to terms for n>0. Therefore, the ROC is between two
circular boundaries: outside of the largest pole coming from the n>0 terms and
inside of the smallest pole coming from the n<0 terms
Poles and ROC
The Z-Transform
Part II
Σ_{i=0}^{N} a_i y[n−i] = Σ_{j=0}^{M} b_j x[n−j],  a0 = 1
H(z) = Y(z)/X(z) = (b0 + b1 z^{−1} + b2 z^{−2} + … + bM z^{−M}) / (1 + a1 z^{−1} + a2 z^{−2} + … + aN z^{−N})
H(z) = b0 ∏_{l=1}^{M}(1 − ζ_l z^{−1}) / [a0 ∏_{l=1}^{N}(1 − p_l z^{−1})] = z^{(N−M)} · p0 ∏_{l=1}^{M}(z − ζ_l) / [d0 ∏_{l=1}^{N}(z − p_l)]
ª At a root z=ζℓ of the numerator polynomial H(ζℓ)=0, and as a result, these
values of z are known as the zeros of H(z)
ª At a root z=pℓ of the denominator polynomial, H(pℓ) → ∞, and as a result, these
values of z are known as the poles of H(z)
ª A digital filter is designed by placing appropriate number of zeros at the
frequencies (z-values) to be suppressed, and poles at the frequencies to be
amplified!
Poles & ROC
 The ROC of a rational z-transform
cannot contain any poles and is
bounded by the poles
ª For a right sided sequence, the ROC is
outside of the largest pole
ª For a left sided sequence, the ROC is
inside of the smallest pole
ª For a two sided sequence, some of the
poles contribute to terms in the parent
sequence for n<0 and others to terms
for n>0. Therefore, the ROC is
between two circular boundaries: outside
of the largest pole coming from the
n>0 terms and inside of the
smallest pole coming from the n<0
terms
Damn
ª The ROC of a noncausal system (whose h[n] is two-sided) is bounded by two different
pole circles
Important!
ª For a system to be stable, its h[n] must be absolutely summable Î An LTI system is
stable if and only if the ROC of its transfer function H(z) includes the unit circle!
ª A causal system's ROC lies outside of a pole circle. If that system is also stable, its
ROC must include the unit circle Î A causal system is stable if and only if all its
poles are inside the unit circle! Similarly, an anticausal system is stable if and only if
its poles lie outside the unit circle.
• An FIR filter is always stable, why?
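The pole criterion is easy to check numerically: a causal filter is stable when every root of its denominator polynomial lies inside the unit circle. A sketch (the helper name `is_stable` is mine, not a library function):

```python
import numpy as np

def is_stable(a):
    """Causal LTI stability test: all poles (roots of the denominator
    polynomial) must lie strictly inside the unit circle."""
    poles = np.roots(a)
    return bool(np.all(np.abs(poles) < 1))

stable = is_stable([1, -0.8, 0.64])      # poles 0.4 +/- 0.6928j, |p| = 0.8
unstable = is_stable([1, -2.4, 2.88])    # poles 1.2 +/- 1.2j, |p| ~ 1.7
# An FIR filter (denominator a = [1]) has no nonzero poles, hence is always stable
fir = is_stable([1])
```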
Inverse z-Transform
 The inverse z-transform can be obtained as a generalization of the
inverse DTFT:
x[n] = (1/2πj) ∮_C X(z) · z^{n−1} dz
ª Since the variable z is defined on the complex (polar) plane, the integral is not a
Cartesian integral, but rather a contour integral, where the contour C is any
circular path that falls into the ROC of X(z).
ª A complicated procedure, yet there are several different procedures to compute it:
1. Perform a long division for X(z), and invert each (simple) term individually – tedious,
often does not result in a closed form (finds x[n] one by one for each n)
2. Direct evaluation of the contour integral using the Cauchy residue theorem – tedious
3. Partial fraction expansion – the most commonly used procedure.
Cauchy Residue Theorem
 Cauchy's residue theorem states that the contour integral can be
computed as a sum of the residues that lie inside the contour
x[n] = (1/2πj) ∮_C X(z) · z^{n−1} dz = Σ [residues of X(z)·z^{n−1} at the poles inside the contour C]
ª a special case of which is the following result:
∮_C z^{−l} dz = 2πj for l = 1, and 0 otherwise
 Long division: starting from
H(z) = Y(z)/X(z) = P(z)/D(z) = (b0 + b1 z^{−1} + b2 z^{−2} + … + bM z^{−M}) / (1 + a1 z^{−1} + a2 z^{−2} + … + aN z^{−N}) = Σ_{i=0}^{M} b_i z^{−i} / Σ_{i=0}^{N} a_i z^{−i}
ª If M ≥ N then H(z) can be re-expressed through long division as
H(z) = Σ_{l=0}^{M−N} η_l z^{−l} + P1(z)/D(z)
where the degree of P1(z) is less than N. The rational fraction P1(z)/D(z) is then
called a proper fraction or proper polynomial. For example:
H(z) = (2 + 0.8z^{−1} + 0.5z^{−2} + 0.3z^{−3}) / (1 + 0.8z^{−1} + 0.2z^{−2})  Î  H(z) = −3.5 + 1.5z^{−1} + (5.5 + 2.1z^{−1}) / (1 + 0.8z^{−1} + 0.2z^{−2})
Partial Fraction Expansion
Simple Poles
 If the system only has simple poles, then it can be written in the
following form
H(z) = (b0 + b1 z^{−1} + b2 z^{−2} + … + bM z^{−M}) / (1 + a1 z^{−1} + a2 z^{−2} + … + aN z^{−N})
Factoring the denominator into its roots p1, …, pN gives the expansion
H(z) = A1/(z − p1) + A2/(z − p2) + … + AN/(z − pN)
ª the constants A_i, which are the residues at the poles of H(z)/z, can be computed
as follows:
A_i = (z − p_i) · H(z)/z |_{z=p_i},  i = 0, 1, 2, … N
Example
(See and solve the examples on pages 173-175)
H(z) = (z² + 2z + 1) / (z² + 0.4z − 0.12) = (z² + 2z + 1) / [(z − 0.2)(z + 0.6)]
H(z)/z = (z² + 2z + 1) / [z(z − 0.2)(z + 0.6)] = A0/z + A1/(z − 0.2) + A2/(z + 0.6)
A0 = z · H(z)/z |_{z=0} = 1/(−0.12) = −8.333
A1 = (z − 0.2) · H(z)/z |_{z=0.2} = 9
A2 = (z + 0.6) · H(z)/z |_{z=−0.6} = 1/3 = 0.333
For the causal choice, ROC |z| > 0.6:
x1[n] = −8.333·δ[n] + 9·(0.2)ⁿu[n] + 0.333·(−0.6)ⁿu[n]
For the anticausal choice, ROC |z| < 0.2:
x2[n] = −8.333·δ[n] − 9·(0.2)ⁿu[−n−1] − 0.333·(−0.6)ⁿu[−n−1]
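The residues in this example can be cross-checked with SciPy's `residuez`, which expands b(z)/a(z) (coefficients in powers of z⁻¹) into Σ rᵢ/(1 − pᵢz⁻¹) plus a direct term; for the H(z) above this gives residues 9 and 1/3 at the poles 0.2 and −0.6, with direct term −8.333:

```python
import numpy as np
from scipy.signal import residuez

# H(z) = (1 + 2 z^-1 + z^-2) / (1 + 0.4 z^-1 - 0.12 z^-2),
# i.e. (z^2 + 2z + 1)/(z^2 + 0.4z - 0.12) written in powers of z^-1
b = [1, 2, 1]
a = [1, 0.4, -0.12]
r, p, k = residuez(b, a)
# H(z) = k[0] + r[i]/(1 - p[i] z^-1) summed over the poles
res = dict(zip(np.round(p, 6), r))       # residue attached to each pole
```

Note the 1/(1 − pz⁻¹) convention differs from the A_i/(z − p_i) form on the slide by a factor of z, which is exactly the H(z)/z trick used above.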
Partial Fraction Expansion
Multiple Poles
 If the z-domain function contains an m-multiple pole, that is, a term of
the following form is included:
H(z)/z = P(z)/(z − p)ᵐ
this term is expanded as follows:
H(z)/z = A1/(z − p) + A2/(z − p)² + … + A_{m−1}/(z − p)^{m−1} + A_m/(z − p)ᵐ
where each coefficient can be computed by taking consecutive
derivatives and evaluating the function at the pole:
A_{m−i} = (1/i!) · dⁱ/dzⁱ [(z − p)ᵐ · H(z)/z] |_{z=p},  i = 0, 1, 2, …, m−1
        = (1/i!) · dⁱP(z)/dzⁱ |_{z=p},  i = 0, 1, 2, …, m−1
Exercises
 Solve the following as an exercise – the solutions are given (next
slide) so that you can check your answers:
1. X(z) = z / (2z² − 3z + 1),  (i) ROC: |z| < 1/2, as well as (ii) ROC: |z| > 1
2. X(z) = z / [(z − 1)(z − 2)²],  ROC: |z| > 2
3. X(z) = (2z³ − 5z² + z + 3) / [(z − 1)(z − 2)],  ROC: |z| < 1
4. X(z) = (2 + z^{−2} + 3z^{−4}) / (z² + 4z + 3),  ROC: |z| > 0
Answers
1. (i) x[n] = −u[−n−1] + (0.5)ⁿu[−n−1];  (ii) x[n] = u[n] − (0.5)ⁿu[n]
2. x[n] = (1 − 2ⁿ + n·2^{n−1})u[n]
3. x[n] = 2δ[n+1] + 1.5δ[n] + u[−n−1] − (0.5)(2)ⁿu[−n−1]
4. x[n] = [(−1)^{n−1} − (−3)^{n−1}]u[n−1]
        + 0.5[(−1)^{n−3} − (−3)^{n−3}]u[n−3]
        + 1.5[(−1)^{n−5} − (−3)^{n−5}]u[n−5]
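Causal answers such as 1(ii) can be sanity-checked by generating the impulse response of X(z) with `lfilter`, since the impulse response of a system with transfer function X(z) is its inverse transform. Writing X(z) = z/(2z² − 3z + 1) = z⁻¹/(2 − 3z⁻¹ + z⁻²):

```python
import numpy as np
from scipy.signal import lfilter

# X(z) = z^-1 / (2 - 3 z^-1 + z^-2), causal inverse (ROC |z| > 1)
b = [0, 1]
a = [2, -3, 1]
imp = np.zeros(10)
imp[0] = 1.0
x = lfilter(b, a, imp)                   # first 10 samples of the inverse transform
n = np.arange(10)
expected = 1.0 - 0.5 ** n                # u[n] - (0.5)^n u[n], answer 1(ii)
```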
Frequency Response
ª Or, in the frequency domain:
Y(ω) = H(ω)X(ω),  H(ω) = Y(ω)/X(ω),  where H(ω) = Σ_{k=−∞}^{∞} h[k] e^{−jωk}
where H(ω) is called the frequency response of the system, which relates the input
and the output of an LTI system in the frequency domain. The frequency response
can also be represented in terms of the CCLDE coefficients:
H(ω) = Σ_{k=0}^{M} b_k e^{−jωk} / Σ_{k=0}^{N} a_k e^{−jωk}
Transfer Function
 A generalization of the frequency response is the transfer function,
computed in the z-domain
H(z) = Σ_{n=−∞}^{∞} h[n] z^{−n},  Y(z) = H(z)X(z),  H(z) = Y(z)/X(z)
ª The function H(z), which is the z-transform of the impulse response h[n] of the
LTI system, is called the transfer function or the system function
ª The inverse z-transform of the transfer function H(z) yields the impulse response
h[n]
ª Using the CCLDE coefficients:
H(z) = Σ_{k=0}^{M} b_k z^{−k} / Σ_{k=0}^{N} a_k z^{−k}
     = b0 ∏_{k=1}^{M}(1 − ξ_k z^{−1}) / [a0 ∏_{k=1}^{N}(1 − λ_k z^{−1})]
     = z^{(N−M)} · (b0/a0) · ∏_{k=1}^{M}(z − ξ_k) / ∏_{k=1}^{N}(z − λ_k)
(the three equivalent forms: CCLDE coefficients ÅÆ zero & pole factors in z^{−1} ÅÆ zeros & poles in z)
Frequency Response ÅÆ
Transfer Function
 If the ROC of the transfer function H(z) includes the unit circle, then
the frequency response H(ω) of the LTI digital filter can be obtained
simply as follows:
H(e^{jω}) = H(ω) = H(z)|_{z=e^{jω}}
 For a real coefficient transfer function H(z) it can be shown that
|H(e^{jω})|² = H(e^{jω})H*(e^{jω}) = H(e^{jω})H(e^{−jω}) = H(z)H(z^{−1})|_{z=e^{jω}}
Frequency Response ÅÆ
Transfer Function
 Assuming that the DTFT exists, starting with the factored z-transform,
we can write the frequency response of a typical LTI system as
H(z) = (b0/a0) · z^{(N−M)} · ∏_{k=1}^{M}(z − ξ_k) / ∏_{k=1}^{N}(z − λ_k)
H(e^{jω}) = (b0/a0) · e^{jω(N−M)} · ∏_{k=1}^{M}(e^{jω} − ξ_k) / ∏_{k=1}^{N}(e^{jω} − λ_k)
arg H(e^{jω}) = arg(b0/a0) + ω(N−M) + Σ_{k=1}^{M} arg(e^{jω} − ξ_k) − Σ_{k=1}^{N} arg(e^{jω} − λ_k)
Interpretation of the
Frequency Response
 Take a close look at the magnitude and phase responses in terms of zeros and poles:
|H(e^{jω})| = |b0/a0| · ∏_{k=1}^{M}|e^{jω} − ξ_k| / ∏_{k=1}^{N}|e^{jω} − λ_k|
arg H(e^{jω}) = arg(b0/a0) + ω(N−M) + Σ_{k=1}^{M} arg(e^{jω} − ξ_k) − Σ_{k=1}^{N} arg(e^{jω} − λ_k)
ª The magnitude response at a specific value of ω is the product of the zero-vector lengths
|e^{jω} − ξ_k| divided by the product of the pole-vector lengths |e^{jω} − λ_k|, scaled by |b0/a0|
ª The phase response at a specific value of ω is obtained by adding the phase of the
term b0/a0 and the linear-phase term ω(N−M) to the sum of the angles of the zero
vectors minus the angles of the pole vectors
So What Does
This All Mean?
 An approximate plot of the magnitude and phase responses of the
transfer function of an LTI digital filter can be developed by
examining the pole and zero locations
 Now, the frequency response has its smallest magnitude at frequencies where e^{jω} comes
closest to a zero ζ, and its largest magnitude where e^{jω} comes closest to a pole λ.
ª In fact, at a pole on the unit circle the response is infinitely large, and at a zero on the
unit circle the response is exactly zero
 Therefore:
ª To highly attenuate signal components in a specified frequency range, we
need to place zeros very close to or on the unit circle in this range
ª Likewise, to highly emphasize signal components in a specified frequency
range, we need to place poles very close to or on the unit circle in this range
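This placement rule can be illustrated with a filter of my own construction (not from the slides): a zero pair on the unit circle at ω = π/2 and a pole pair at radius 0.9, angle π/4.

```python
import numpy as np
from scipy.signal import freqz

# Zeros at z = +/- j (on the unit circle at omega = pi/2): b(z) = 1 + z^-2
b = [1, 0, 1]
# Poles at 0.9 e^{+/- j pi/4}: a(z) = 1 - 2*0.9*cos(pi/4) z^-1 + 0.81 z^-2
a = [1, -2 * 0.9 * np.cos(np.pi / 4), 0.81]
w = np.array([np.pi / 2, np.pi / 4, np.pi])
_, H = freqz(b, a, worN=w)
# |H| vanishes at the zero frequency pi/2 and peaks near the pole angle pi/4
```

The response is exactly zero at ω = π/2 (zero on the unit circle) and much larger near ω = π/4 than elsewhere (pole close to the unit circle).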
An Example
 Consider the M-point moving-average FIR filter with an impulse response
h[n] = 1/M for 0 ≤ n ≤ M−1, and 0 otherwise
H(z) = (1/M) Σ_{n=0}^{M−1} z^{−n} = (1/M) · (1 − z^{−M})/(1 − z^{−1}) = (z^M − 1) / [M z^{M−1}(z − 1)]
ª The pole at z = 1 is cancelled by the zero of z^M − 1 there; the remaining zeros lie on
the unit circle at the M-th roots of unity, and the remaining poles sit at the origin
w = linspace(0, pi, 512);
b1 = ones(1,5)/5;  a1 = 1;   % 5-point moving average
b2 = ones(1,9)/9;  a2 = 1;   % 9-point moving average
H1 = freqz(b1, a1, w);
H2 = freqz(b2, a2, w);
[z1,p1,k1] = tf2zpk(b1,a1);
[z2,p2,k2] = tf2zpk(b2,a2);
subplot(221)
plot(w/pi, abs(H1)); grid
title('Transfer Function of 5 point MAF')
subplot(222)
zplane(b1,a1);
title('Pole-zero plot of 5 point MAF')
subplot(223)
plot(w/pi, abs(H2)); grid
title('Transfer Function of 9 point MAF')
subplot(224)
zplane(b2,a2);
title('Pole-zero plot of 9 point MAF')
Observe the effects of the zeros and the poles!!!
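The MATLAB experiment above can be mirrored in Python; a sketch checking that the 5-point moving average has unit gain at DC and nulls at the unit-circle zeros 2πk/M:

```python
import numpy as np
from scipy.signal import freqz

M = 5
b = np.ones(M) / M                       # 5-point moving-average FIR filter
w = np.array([0.0, 2 * np.pi / M, 4 * np.pi / M])
_, H = freqz(b, 1, worN=w)
# H(0) = 1 (passes DC); H vanishes at the zeros 2*pi*k/M on the unit circle
```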
Types of
Transfer Functions
 The time-domain classification of an LTI digital transfer function
sequence is based on the length of its impulse response:
ª Finite impulse response (FIR) transfer function
ª Infinite impulse response (IIR) transfer function
 Many other classifications are also used
ª For digital transfer functions with frequency-selective frequency responses, one
classification is based on the shape of the magnitude function |H(ω)| or the form
of the phase function θ(ω)
 Based on the magnitude spectrum, four types of ideal filters are
usually defined
ª Low pass
ª High pass
ª Band pass
ª Band stop
Ideal Filter
 An ideal filter is a digital filter designed to pass signal components of
certain frequencies without distortion, which therefore has a
frequency response equal to one at these frequencies, and has a
frequency response equal to zero at all other frequencies
 The range of frequencies where the frequency response takes the
value of one is called the passband
 The range of frequencies where the frequency response takes the
value of zero is called the stopband
 The transition frequency from a passband to stopband region is called
the cutoff frequency
 Note that an ideal filter cannot be realized. Why?
Ideal Filters
 The frequency responses of four common ideal filters in the [−π, π]
range are shown: lowpass, highpass, bandpass, and bandstop, each with its
passband and stopband regions
Ideal Filters
 Recall that the DTFT of a rectangular pulse is a sinc function
 One way to avoid any phase distortion is to make sure the frequency
response of the filter does not delay any of the spectral components.
Such a transfer function is said to have a zero – phase characteristic.
 A zero – phase transfer function has no phase component, that is, the
spectrum is purely real (no imaginary component) and non-negative
 However, it is NOT possible to design a causal digital filter with a
zero phase. Why?
ª Hint: What do we need in the impulse response to ensure that the frequency
response is real and non-negative?
Zero-Phase Filters
 Now, for non-real-time processing of real-valued input signals of finite length, zero-
phase filtering can be implemented by relaxing the causality requirement
 A zero-phase filtering scheme can be obtained by the following procedure:
ª Process the input data (finite length) with a causal real-coefficient filter H(z).
ª Time reverse the output of this filter and process by the same filter.
ª Time reverse once again the output of the second filter
u[n] = v[−n],  y[n] = w[−n]
V(ω) = H(ω)X(ω),  W(ω) = H(ω)U(ω)
U(ω) = V*(ω),  Y(ω) = W*(ω)
Y(ω) = W*(ω) = H*(ω)U*(ω) = H*(ω)V(ω) = H*(ω)H(ω)X(ω) = |H(ω)|²X(ω)
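The three-step scheme can be sketched directly with `lfilter` (filter, reverse, filter, reverse). The filter h and the centered impulse input are illustrative choices; the composite impulse response is the autocorrelation of h, symmetric about the impulse location, i.e., zero phase:

```python
import numpy as np
from scipy.signal import lfilter

h = np.array([0.5, 0.3, 0.2])            # causal real-coefficient filter
x = np.zeros(64)
x[32] = 1.0                              # impulse, with room on both sides

v = lfilter(h, 1, x)                     # forward pass
u = v[::-1]                              # time reverse
w = lfilter(h, 1, u)                     # filter again with the same filter
y = w[::-1]                              # reverse back

# y = x convolved with the autocorrelation of h -> symmetric about n = 32
acorr = np.correlate(h, h, mode='full')
```

SciPy's `filtfilt` implements the same idea with additional edge-transient handling.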
In Matlab
 The function filtfilt() implements the zero-phase filtering scheme
filtfilt()
Zero-phase digital filtering
The resulting sequence has precisely zero-phase distortion and double the
filter order. filtfilt minimizes start-up and ending transients by matching initial
conditions, and works for both real and complex inputs.
Linear Phase
 Note that a zero-phase filter cannot be implemented for real-time
applications. Why? Real time means causal operation: there is no time to reverse and refilter.
ª Consider instead a filter with phase θ(ω) = −αω; this phase characteristic is linear for all ω in [0, 2π].
ª Recall that the phase delay at any given frequency ω0 is τ_p(ω0) = −θ(ω0)/ω0
ª If we have linear phase, that is, θ(ω) = −αω, then the total delay at
any frequency ω0 is τ0 = −θ(ω0)/ω0 = αω0/ω0 = α
ª Note that this is identical to the group delay τ_g(ω) = −dθ(ω)/dω evaluated at ω0
What is the Big Deal…?
 The deal is huge!
 If the phase spectrum is linear, then the phase delay is independent of
the frequency, and it is the same constant α for all frequencies.
 In other words, all frequencies are delayed by α seconds, or
equivalently, the entire signal is delayed by α seconds.
ª Since the entire signal is delayed by a constant amount, there is no distortion!
 If the filter does not have linear phase, then different frequency
components are delayed by different amounts, causing significant
distortion.
Take Home Message
 If it is desired to pass input signal components in a certain frequency
range undistorted in both magnitude and phase, then the transfer
function should exhibit a unity magnitude response and a linear-phase
response in the band of interest
Generalized Linear Phase
 Now consider the following system, where G(ω) is real (i.e., no
phase)
H(ω) = e^{−jαω} G(ω)
 From our previous discussion, the term e^{−jαω} simply introduces a phase
delay that is independent of frequency. Now,
ª If G(ω) is positive, the phase term is θ(ω) = −αω; hence the system has linear phase.
ª If G(ω) < 0 for any ω, then a 180º (π rad) term is added to the phase spectrum there.
The phase response becomes θ(ω) = −αω + π, and the phase delay is no
longer independent of frequency
ª We can, however, write H(ω) = −[e^{−jαω}|G(ω)|], and the function inside the brackets
has linear phase Æ no distortion. The negative sign simply flips the signal.
ª Therefore, such systems are said to have generalized linear phase
Approximately Linear Phase
 Consider the following transfer functions
 Note that above a certain frequency, say ωc, the magnitude is very close to zero; that
is, most of the signal above this frequency is suppressed. So, if the phase response
deviates from linearity above these frequencies, the signal is not distorted much,
since those frequencies are blocked anyway.
Linear Phase Filters
 It is typically impossible to design a linear phase IIR filter, however,
designing FIR filters with precise linear phase is very easy:
 Consider a causal FIR filter of length M+1 (order M)
ª This transfer function has linear phase, if its impulse response h[n] is either
symmetric
h[n] = h[ M − n], 0 ≤ n ≤ M
or anti-symmetric
h[n] = − h[ M − n], 0 ≤ n ≤ M
Linear Phase Filters
 There are four possible scenarios: filter length even or odd, and
impulse response either symmetric or antisymmetric
 FIR I (M is odd, sequence is symmetric and of even length)
H(ω) = 2e^{−jω(M/2)} Σ_{i=0}^{(M−1)/2} h[i] cos((M/2 − i)ω)
ª Note that this is of the form H(ω) = e^{−jαω}G(ω), where α = M/2 and G(ω) is the real quantity (the
summation term) Î Output is delayed by M/2 samples!
 FIR II (M is even, sequence is symmetric and of odd length)
H(ω) = e^{−jω(M/2)} [h[M/2] + 2 Σ_{i=0}^{M/2−1} h[i] cos((M/2 − i)ω)]
ª Again, this system has linear phase (the quantity inside the brackets is real) and
the phase delay is M/2 samples.
FIR III and FIR IV Types
 For antisymmetric sequences, we have h[n] = −h[M−n], which gives sine terms in
the summation expression:
 FIR III (M is odd, the sequence is antisymmetric and of even length)
H(ω) = 2e^{−j(ω(M/2) − π/2)} Σ_{i=0}^{(M−1)/2} h[i] sin((M/2 − i)ω)
ª In both cases, the phase response is of the form θ(ω) = −(M/2)ω + π/2, hence generalized
linear phase. Again, in all of these cases, the filter output is delayed by M/2 samples. Also,
for all cases, if G(ω) < 0, an additional π term is added to the phase, which causes the samples
to be flipped.
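The symmetry conditions can be verified numerically: for a symmetric h, H(ω)e^{jωM/2} is purely real, and for an antisymmetric h it is purely imaginary. A sketch with illustrative length-5 (M = 4) impulse responses of my own choosing:

```python
import numpy as np
from scipy.signal import freqz

w = np.linspace(0, np.pi, 256, endpoint=False)

h_sym = np.array([1.0, 2.0, 3.0, 2.0, 1.0])     # h[n] = h[M-n], M = 4
_, Hs = freqz(h_sym, 1, worN=w)
# e^{j w M/2} H(w) should equal the real amplitude G(w)
resid_sym = np.imag(Hs * np.exp(2j * w))

h_anti = np.array([1.0, 2.0, 0.0, -2.0, -1.0])  # h[n] = -h[M-n]
_, Ha = freqz(h_anti, 1, worN=w)
# for the antisymmetric case e^{j w M/2} H(w) is purely imaginary
resid_anti = np.real(Ha * np.exp(2j * w))
```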
Lecture 20
Basic Digital
Filter
Structures
Realizable Filters
 In order to develop a stable and
realizable filter transfer function
ª The ideal frequency response
specifications are relaxed by
including a transition band
between the passband and the
stopband
ª This permits the magnitude
response to decay slowly from its
maximum value in the passband
to the zero value in the stopband
ª Moreover, the magnitude
response is allowed to vary by a
small amount both in the
passband and the stopband
ª Typical magnitude response
specifications of a lowpass filter
therefore look as shown Î
Phase and Group Delay
 A frequency selective system (filter) with frequency response
H(ω) = |H(ω)|∠H(ω) = |H(ω)|e^{jθ(ω)} changes the amplitude of all
frequencies in the signal by a factor of |H(ω)|, and adds a phase of
θ(ω) to all frequencies.
ª Note that both the amplitude change and the phase delay are functions of ω
ª The phase θ(ω) is in terms of radians, but can be expressed in terms of time, which
is called the phase delay. The phase delay at a particular frequency ω0 is given as
τ_p(ω0) = −θ(ω0)/ω0
(compare this to the phase delay in continuous time: t_θ = θ/(2πf) = θ/ω)
ª If an input signal consists of many frequency components (which most practical
signals do), then we can also define the group delay, the delay by which the
envelope of the signal shifts. This can also be considered as the average phase delay
– in seconds – of the filter as a function of frequency, given by
τ_g(ω) = −dθ(ω)/dω
Zero Phase Filters
 Phase distortion can be avoided with a system whose transfer function
has a zero-phase characteristic, which has no phase (imaginary)
component.
 Causal (and hence real-time) zero-phase filters are not realizable;
however, for non-real-time applications, a zero-phase filter can be easily
constructed by relaxing the causality requirement:
ª Process the input data (finite length) with a causal real-coefficient filter H(z).
ª Time reverse the output of this filter and process by the same filter.
ª Time reverse once again the output of the second filter
u[n] = v[−n],  y[n] = w[−n]
 Now consider a pure delay H(ω) = e^{−jαω}, with |H(ω)| = 1 and ∠H(ω) = θ(ω) = −αω
ª Note that this phase characteristic is linear for all ω in [0, 2π].
ª Recall that the phase delay at any given frequency ω0 is τ_p(ω0) = −θ(ω0)/ω0
ª If we have linear phase, that is, θ(ω) = −αω, then the total delay at any frequency ω0 is
τ0 = −θ(ω0)/ω0 = αω0/ω0 = α
ª If the phase spectrum is linear, then the phase delay is independent of ω0, and it is the same constant α
for all ω: the entire signal is delayed by α seconds Î no distortion!
An Example – Matlab Demo
lin_phase_demo2.m
Example
[figure: magnitude response plot vs ω/π]
Simple FIR Filters
 First-order FIR lowpass filter:
H(e^{jω}) = (e^{jω} + 1)/(2e^{jω}) = e^{−jω/2} cos(ω/2)
The magnitude is a monotonically decreasing function of ω;
hence this is a lowpass filter. The gain in dB is
G(ω) = 20 log10 |H(e^{jω})|
 The frequency ωc at which |H(ωc)| = (1/√2)|H(0)| is of special interest:
the 3-dB cutoff frequency
G(ωc) = 20 log10 |H(e^{j0})| − 20 log10(√2) = 0 − 3.0103 ≅ −3.0 dB
Cutoff-Frequency
 For realizable filters, the cutoff frequency is the frequency at which the system gain
drops to 0.707 (= 1/√2) of its peak value.
 This is the frequency at which the signal power is half of its peak
power! (why?)
 For a lowpass filter, the gain at the cutoff frequency is 3 dB less than its gain at zero
frequency (i.e., 0.707 of its zero-frequency amplitude, or half of its zero-frequency power).
 For the first-order filter above, this occurs at ωc = π/2
First-order FIR lowpass filter
H(e^{jω}) = (e^{jω} + 1)/(2e^{jω}) = e^{−jω/2} cos(ω/2)
[figure: magnitude response, decreasing from 1 at ω = 0 to 0 at ω = π]
Cascaded FIR Filters
 Now consider cascading first-order lowpass sections: h ∗ h (a second-order filter).
For a cascade of M such sections, the 3-dB cutoff frequency becomes
ωc = 2 cos^{−1}(2^{−1/(2M)})
As the number of cascaded sections M increases, the filter becomes sharper,
but the passband also shrinks!
First-order FIR highpass filter
 Hence H1(z) = (1/2)(1 − z^{−1}) Î H1(e^{jω}) = (1/2)(1 − e^{−jω}) = je^{−jω/2} sin(ω/2)
ª Notice that H1(z) has a zero at z=1, and a pole at z=0.
• Therefore, the frequency response has a zero at ω=0,
corresponding to z=1
• The zero at ω=0 (suppressing low frequency components)
makes this a highpass filter
ω = 0 Î H(e^{j0}) = 0
ω = π Î |H(e^{jπ})| = 1
How would this frequency response look when plotted as a function of ω?
Cascading HPF
h ∗ h ∗ … ∗ h
[figure: magnitude response of cascaded highpass sections vs ω/π]
Compiled in part from DSP, 2/e
S. K. Mitra, Copyright © 2001
Cascading HPF
h ∗ h ∗ … ∗ h
 The M-point highpass counterpart
H1(z) = (1/M) Σ_{n=0}^{M−1} (−1)ⁿ z^{−n}
is obtained by replacing z with −z in the transfer function of a moving
average filter
IIR LPF Filters
 A first-order causal lowpass IIR digital filter has a transfer function
given by
H_LP(z) = ((1−α)/2) · (1 + z^{−1})/(1 − αz^{−1}) = ((1−α)/2) · (z + 1)/(z − α)
ª The above transfer function has a zero at z=−1, i.e., at ω = π, which is in the
stopband
ª H_LP(z) has a real pole at z = α
ª As ω increases from 0 to π, the magnitude of the zero vector decreases from a
value of 2 to 0, whereas, for a positive value of α, the magnitude of the pole vector
increases from a value of 1−α to 1+α
ª The maximum value of the magnitude function is 1 at ω = 0, and the minimum
value is 0 at ω = π:
|H_LP(e^{j0})| = 1,  |H_LP(e^{jπ})| = 0
IIR LPF Filters
[figures: |H(ω)| and gain G(ω) = 20 log10|H(e^{jω})| for α = 0.8, 0.7, 0.5]
 3-dB cutoff frequency:
cos ωc = 2α/(1 + α²)  Î  to find the 3-dB cutoff frequency, solve for ωc in |H(ωc)|² = ½
α = (1 − sin ωc)/cos ωc  Î  find the α corresponding to a given 3-dB cutoff frequency ωc
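The design formula can be verified numerically: pick ωc, compute α = (1 − sin ωc)/cos ωc, and evaluate |H_LP(e^{jωc})|², which should be exactly ½. The choice ωc = 0.4π is an arbitrary illustration:

```python
import numpy as np

wc = 0.4 * np.pi                          # desired 3-dB cutoff (illustrative)
alpha = (1 - np.sin(wc)) / np.cos(wc)     # design formula from the slide

z = np.exp(1j * wc)                       # evaluate on the unit circle
H = ((1 - alpha) / 2) * (1 + 1 / z) / (1 - alpha / z)
power = abs(H) ** 2                       # half-power point at the cutoff
```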
IIR HPF Filters
 A first-order causal highpass IIR digital filter has a transfer function given by
H_HP(z) = ((1+α)/2) · (1 − z^{−1})/(1 − αz^{−1}) = ((1+α)/2) · (z − 1)/(z − α)
where |α| < 1 for stability
ª Note that one can obtain a HPF simply by replacing z with −z in the LPF. The above
transfer function is slightly different; however, it provides a better HPF.
 This transfer function has a zero at z = 1 Î ω = 0, which is in the stopband
[figures: |H(ω)| and gain in dB for α = 0.8, 0.7, 0.5]
For a given 3-dB cutoff frequency, the corresponding α is:
α = (1 − sin ωc)/cos ωc
Example
 W can easily design a first order IIR HPF, with say a cutoff frequency
of 0.8π: Sin(0.8 π)=0.587785, Cos(0.8 π)=-0.80902Î α=-0.5095245
−1
1 + α 1 − z −1 1 − z
H HP ( z ) = = 0.245238
−1
2 1 − α z −1 1 + 0. 5095245 z
Bandpass IIR Filters
 A general 2nd order bandpass IIR filter transfer function is
H_BP(z) = ((1 − α)/2) · (1 − z⁻²)/(1 − β(1 + α) z⁻¹ + α z⁻²), |α| < 1 and |β| < 1 for stability
ª This function has a zero both at z = −1 and z = 1, that is, both at ω = 0 and ω = π
ª It attains its maximum value at ω = ω0, called the center frequency, given by
ω0 = cos⁻¹(β)
ª This filter has two cutoff frequencies, ωc1 and ωc2, where |H_BP(ω)|² = ½. These frequencies
are also called 3-dB cutoff frequencies.
ª The difference between the two cutoff frequencies is called the 3-dB bandwidth, given by
Bw = ωc2 − ωc1 = cos⁻¹(2α/(1 + α²)), ωc2 > ωc1
[Figure: magnitude responses of the 2nd order bandpass filter versus ω/π for two parameter settings.]
For example, to design a 2nd order BPF with a center frequency of 0.4π and a 3-dB
bandwidth of 0.1π ⇒ β = cos(ω0) = cos(0.4π) = 0.30901, and 2α/(1 + α²) = cos(Bw) ⇒
α1 = 1.37638 and α2 = 0.72654. Substituting each into
H_BP(z) = ((1 − α)/2) · (1 − z⁻²)/(1 − β(1 + α) z⁻¹ + α z⁻²),
note that only the second one provides a stable transfer function:
H′_BP(z) = −0.18819 · (1 − z⁻²)/(1 − 0.7343424 z⁻¹ + 1.37638 z⁻²)
H″_BP(z) = 0.13673 · (1 − z⁻²)/(1 − 0.533531 z⁻¹ + 0.72654253 z⁻²)
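The stable design above can be verified numerically; here is a hedged Python sketch (stdlib only; `H_bp` is my name). It solves cos(Bw) = 2α/(1 + α²) for the stable root and confirms unity gain at the center frequency and zeros at ω = 0 and ω = π:

```python
import cmath
import math

w0, bw = 0.4 * math.pi, 0.1 * math.pi
beta = math.cos(w0)                          # center-frequency parameter
# the root of cos(Bw) = 2a/(1+a^2) with |a| < 1 is a = (1 - sin Bw)/cos Bw
alpha = (1 - math.sin(bw)) / math.cos(bw)    # -> approx 0.72654

def H_bp(w):
    zinv = cmath.exp(-1j * w)
    num = (1 - alpha) / 2 * (1 - zinv ** 2)
    den = 1 - beta * (1 + alpha) * zinv + alpha * zinv ** 2
    return num / den

print(abs(H_bp(w0)))                        # unity at the center frequency
print(abs(H_bp(0.0)), abs(H_bp(math.pi)))   # zeros at w = 0 and w = pi
```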
Bandstop IIR Filters
 A general 2nd order bandstop IIR filter transfer function is
H_BS(z) = ((1 + α)/2) · (1 − 2β z⁻¹ + z⁻²)/(1 − β(1 + α) z⁻¹ + α z⁻²), |α| < 1 and |β| < 1 for stability
ª This function achieves its maximum value of unity at z = ±1, i.e., at ω = 0 and ω = π
ª It has a zero at ω = ω0, called the notch frequency, given by
ω0 = cos⁻¹(β)
ª Therefore, this filter is also called a notch filter.
ª Similar to the bandpass case, there are two values, ωc1 and ωc2, called the 3-dB cutoff frequencies,
where the squared magnitude reaches ½, i.e., |H_BS(ω)|² = ½.
ª The difference between these two frequencies is again called the 3-dB bandwidth, given by
Bw = ωc2 − ωc1 = cos⁻¹(2α/(1 + α²)), ωc2 > ωc1
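The notch behavior follows directly from the transfer function; here is a short Python sketch (stdlib only; the parameter values ω0 = 0.4π and α = 0.7 are my illustrative choices, not from the slides). It confirms the zero at the notch frequency and unity gain at ω = 0 and ω = π:

```python
import cmath
import math

w0, alpha = 0.4 * math.pi, 0.7   # example notch frequency and alpha (assumed)
beta = math.cos(w0)

def H_bs(w):
    zinv = cmath.exp(-1j * w)
    num = (1 + alpha) / 2 * (1 - 2 * beta * zinv + zinv ** 2)
    den = 1 - beta * (1 + alpha) * zinv + alpha * zinv ** 2
    return num / den

print(abs(H_bs(w0)))                        # zero at the notch frequency
print(abs(H_bs(0.0)), abs(H_bs(math.pi)))   # unity at w = 0 and w = pi
```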
Bandstop IIR Filters
[Figure: magnitude responses of the 2nd order bandstop filter versus ω/π, for α = 0.2, 0.5, 0.8 (left) and β = 0.2, 0.5, 0.8 (right).]
Lowpass Filter with Adjustable Gain
H_LP(z) = ((1 − α)/2) · (1 + z⁻¹)/(1 − α z⁻¹),  G_LP(z) = K · ((1 − α)/2) · (1 + z⁻¹)/(1 − α z⁻¹)
ª It can be shown that
α = (1 + (1 − C) cos ωc − sin ωc √(2C − C²)) / (1 − C + cos ωc), where C = 2^{(K−1)/K}
[Figure: ideal lowpass, highpass, and bandpass magnitude responses, each equal to 1 in the passband and 0 elsewhere, over −π ≤ ω ≤ π.]
Since these ideal filters cannot be realized, we need to relax (i.e., smooth) the sharp filter
characteristics in the passband and stopband by providing acceptable tolerances
Filter Specifications
• |H(e^{jω})| ≈ 1, with an error of ±δp in the passband, i.e.,
1 − δp ≤ |H(ω)| ≤ 1 + δp, |ω| ≤ ωp
• |H(ω)| ≤ δs in the stopband, i.e., ωs ≤ |ω| ≤ π
ωp - passband edge frequency
ωs - stopband edge frequency
δp - peak ripple value in the passband
δs - peak ripple value in the stopband
We will assume that we are dealing with filters with real coefficients, hence
the frequency response is periodic with 2π, and symmetric around 0 and π.
Filter Specifications
 Filter specifications are often given in decibels, in terms of loss or gain:
G(ω) = −20 log10 |H(e^{jω})|
with peak passband ripple and minimum stopband ripple
αp = −20 log10(1 − δp)
αs = −20 log10(δs)
[Figure: the same tolerance scheme drawn with the alternative normalization 1/√(1 + ε²) at the passband edge and 1/A in the stopband.]
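The dB conversions above are easy to mechanize; the following Python sketch (stdlib only; function names are mine) converts between peak ripple values and their dB equivalents in both directions:

```python
import math

def peak_ripples_to_db(delta_p, delta_s):
    """alpha_p = -20 log10(1 - delta_p), alpha_s = -20 log10(delta_s)."""
    return -20 * math.log10(1 - delta_p), -20 * math.log10(delta_s)

def db_to_peak_ripples(alpha_p, alpha_s):
    """Inverse of the above."""
    return 1 - 10 ** (-alpha_p / 20), 10 ** (-alpha_s / 20)

# e.g. a 2 dB passband / 50 dB stopband spec
dp, ds = db_to_peak_ripples(2.0, 50.0)
print(dp, ds)   # approx 0.2057 and 0.00316
```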
Remember!
 The following must be taken into consideration in making filter
selection
ª H(z) satisfying the frequency response specifications must be causal and stable
(poles inside the unit circle, ROC includes the unit circle, h[n] right-sided)
ª If the filter is FIR, then H(z) is a polynomial in z-1 with real coefficients
H(z) = Σ_{n=0}^{N} b[n] z⁻ⁿ = Σ_{n=0}^{N} h[n] z⁻ⁿ
• If linear phase is desired, the filter coefficients h[n] (also the impulse response) must
satisfy symmetry constraints: h[n] = ±h[M-n]
• For computational efficiency, the minimum filter order M that satisfies design criteria
must be used.
ª If the filter is IIR, then H(z) is a real rational function of z-1
H(z) = (b0 + b1 z⁻¹ + b2 z⁻² + ⋯ + bM z⁻M) / (a0 + a1 z⁻¹ + a2 z⁻² + ⋯ + aN z⁻N)
hLP[n]
[Figure: ideal lowpass response, 1 for |ω| ≤ ωc, 0 for ωc < |ω| ≤ π.]
Realizable!
hLP[n] = sin(ωc(n − M/2)) / (π(n − M/2)), 0 ≤ n ≤ M, n ≠ M/2;  hLP[M/2] = ωc/π
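To see that this truncated, shifted sinc really behaves as a lowpass filter, here is a small Python sketch (stdlib only; M = 50 and ωc = 0.4π are my illustrative choices). It builds hLP[n] and evaluates its DTFT at ω = 0 and ω = π:

```python
import cmath
import math

M, wc = 50, 0.4 * math.pi    # example order and cutoff (assumed)

def h_lp(n):
    if n == M / 2:
        return wc / math.pi              # limiting value at n = M/2
    return math.sin(wc * (n - M / 2)) / (math.pi * (n - M / 2))

h = [h_lp(n) for n in range(M + 1)]

def H(w):                                # DTFT of the length-(M+1) filter
    return sum(h[n] * cmath.exp(-1j * w * n) for n in range(M + 1))

print(abs(H(0.0)))        # close to 1 (passband)
print(abs(H(math.pi)))    # close to 0 (stopband)
```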
FIR Highpass Design
[Figure: the ideal HPF is the complement of the ideal LPF, H_HP(e^{jω}) = 1 − H_LP(e^{jω}); the ideal BPF is the difference of two ideal LPFs with cutoffs ωc = ωc2 and ωc = ωc1.]
hBP[n] = sin(ωc2(n − M/2))/(π(n − M/2)) − sin(ωc1(n − M/2))/(π(n − M/2)), 0 ≤ n ≤ M, n ≠ M/2;
hBP[M/2] = ωc2/π − ωc1/π
Similarly for the other filter types.
What happened…?
[Figure: magnitude response of the truncated filter, showing the main lobe and the side lobes.]
ª The height of the largest ripples remains constant, regardless of the filter length; however, the ripple widths ↓ as M↑
ª As M↑, the height of all other ripples ↓
ª The main lobe gets narrower as M↑, that is, the drop-off becomes sharper
ª Similar oscillatory behavior can be seen in all types of truncated filters
Gibbs Phenomenon
 Why is this happening?
 The Gibbs phenomenon is simply an artifact of the windowing
operation.
ª Multiplying the ideal filter's impulse response with a rectangular window function
is equivalent to convolving the underlying frequency response with a sinc function
wc = 0.4*pi; M = 20;   % example values; assumed, not given on the slide
n = 0:M;
h_LP = sin(wc*(n-M/2))./(pi*(n-M/2));
h_LP(ceil(M/2+1)) = wc/pi;   % replace the 0/0 sample at n = M/2
subplot(211)
stem(n, h_LP)
axis([0 M min(h_LP) max(h_LP)])
title(['Impulse response of the ', num2str(M), 'th order filter']);
subplot(212)
H_LP = fft(h_LP, 1024);
w = linspace(-pi, pi, 1024);
plot(w/pi, abs(fftshift(H_LP)))
title(['Frequency response of the windowed ', num2str(M), 'th order filter']);
grid
axis([-1 1 0 max(abs(H_LP))])
Effect of Filter Length
FIR Filter Design
Using Windows
 Here’s what we want:
ª Quick drop-off ⇒ narrow transition band
• Narrow main lobe
• Increased stopband attenuation (conflicting requirements)
ª Reduce the height of the side lobes, which cause the ripples
ª Reduce the Gibbs phenomenon (ringing effects, all ripples)
ª Minimize the order of the filter.
 The Gibbs phenomenon can be reduced (but not eliminated) by using a
smoother window that gently tapers off to zero, rather than the brick-wall
behavior of the rectangular window.
ª Several window functions are available, which usually trade off main-lobe width
and stopband attenuation.
• The rectangular window has the narrowest main-lobe width, but poor side-lobe
attenuation.
• A tapered window causes the height of the side lobes to diminish, with a corresponding
increase in the main-lobe width, resulting in a wider transition at the cutoff frequency.
Commonly Used Windows
[Figure: comparison of the common windows. Moving from the rectangular window toward heavily tapered windows, the main lobe grows from narrow to widest, while the side-lobe attenuation improves from poor to excellent.]
Comparing Windows
Fixed Window Functions
 All windows shown so far are fixed window functions
ª The magnitude spectrum of each window is characterized by a main lobe centered at
ω = 0 followed by a series of side lobes with decreasing amplitudes
ª Parameters predicting the performance of a window in filter design are:
• Main-lobe width (∆ML) and/or transition bandwidth (∆ω = ωs − ωp)
• Relative side-lobe level (Asl) / side-lobe attenuation (αs)
ª For a given window, both parameters are completely determined once the filter
order M is set.
[Figure: Htr(ω) and Hdes(ω) annotated with ∆ML, ∆ω, Asl (dB), and αs (dB).]
Fixed Window Functions
 How to design:
ª Set ωc = (ωp + ωs)/2
ª Choose the window type based on the specified side-lobe attenuation (Asl) or minimum
stopband attenuation (αs)
ª Choose M according to the transition bandwidth (∆ω) and/or main-lobe width
(∆ML). Note that this is the only parameter that can be adjusted for fixed window
functions. Once a window type and M are selected, so are Asl, αs, and ∆ML
• Ripple amplitudes cannot be custom designed.
ª Adjustable windows have a parameter that can be varied to trade off between
main-lobe width and side-lobe attenuation.
Kaiser Window
 The most popular adjustable window
w[n] = I0(β √(1 − ((n − M/2)/(M/2))²)) / I0(β), 0 ≤ n ≤ M
In practice, this infinite series can be computed with a finite number of terms to a
desired accuracy. In general, 20 terms is adequate:
I0(x) ≅ 1 + Σ_{k=1}^{20} [(x/2)^k / k!]²
FIR Design Using
Kaiser Window
 Given the following:
ª ωp - passband edge frequency and ωs - stopband edge frequency
ª δp - peak ripple value in the passband and δs - peak ripple value in the stopband
 Calculate:
1. Minimum ripple in dB: αs = −20 log10(δs), or −20 log10(min{δs, δp})
2. Normalized transition bandwidth: ∆ω = ωs − ωp
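The slide stops after step 2; the remaining steps use Kaiser's published empirical formulas for the shape parameter β and the filter order (these formulas are not on the slide, so treat the sketch below as a supplement). The I0 helper implements the 20-term series given above. A = 50 dB and ∆ω = 0.2π are illustrative values:

```python
import math

def kaiser_beta(atten_db):
    """Kaiser's empirical formula for the window shape parameter."""
    if atten_db > 50:
        return 0.1102 * (atten_db - 8.7)
    if atten_db >= 21:
        return 0.5842 * (atten_db - 21) ** 0.4 + 0.07886 * (atten_db - 21)
    return 0.0

def kaiser_order(atten_db, delta_w):
    """Kaiser's filter-order estimate: M ~ (A - 8) / (2.285 * delta_w)."""
    return math.ceil((atten_db - 8) / (2.285 * delta_w))

def i0(x, terms=20):
    """Zeroth-order modified Bessel function via the slide's 20-term series."""
    return 1 + sum(((x / 2) ** k / math.factorial(k)) ** 2
                   for k in range(1, terms + 1))

A, dw = 50.0, 0.2 * math.pi      # example specs (assumed)
beta, M = kaiser_beta(A), kaiser_order(A, dw)
print(beta, M)
```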
b = fir1(N,Wn,'high') designs an N'th order highpass filter. You can also use B = fir1(N,Wn,'low') to design a
lowpass filter.
If Wn is a two-element vector, Wn = [W1 W2], FIR1 returns an order N bandpass filter with passband W1 < W <
W2. You can also specify b = fir1(N,Wn,'bandpass'). If Wn = [W1 W2], b = fir1(N,Wn,'stop') will design a
bandstop filter.
If Wn is a multi-element vector, Wn = [W1 W2 W3 W4 W5 ... WN], FIR1 returns an order N multiband filter with
bands 0 < W < W1, W1 < W < W2, ..., WN < W < 1.
b = fir1(N,Wn,'DC-1') makes the first band a passband.
b = fir1(N,Wn,'DC-0') makes the first band a stopband.
b = fir1(N,Wn,WIN) designs an N-th order FIR filter using the N+1 length vector WIN to window the impulse
response. If empty or omitted, FIR1 uses a Hamming window of length N+1. For a complete list of available
windows, see the help for the WINDOW function. If using a Kaiser window, use the following:
b = fir1(N,Wn,kaiser(N+1,beta))
In Matlab
 Here is the complete procedure:
ª Obtain the specs: Cutoff frequency, passband and stopband edge frequencies,
allowed maximum ripple, filter order.
• Note that you do not need filter order for Kaiser (to be determined), and you do not
need the edge frequencies and ripple amount for the others.
ª Design the filter using the fir1() command. Default window is Hamming. For
all window types –except Kaiser – provide filter order N and normalized cutoff
frequency (between 0 and 1)
• For Kaiser window, determine the beta and M manually from the given equations
before providing them to the fir1 command with kaiser as filter type.
• You can also use kaiserord() to estimate the filter order from the given specifications
• This gives you the “b” coefficients, or in other words, the impulse response h[n]
ª Use this filter with the filter() command as y=filter(b,a,x), where b are the
coefficients obtained from fir1(), a=1 since this is an FIR filter, x is the signal
to be filtered, and y is the filtered signal.
 OR, use sptool for the entire design cycle.
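For readers without MATLAB, the same design-then-filter workflow can be sketched in plain Python (stdlib only; this hand-rolls the Hamming windowed-sinc that fir1 uses by default, and a direct-form FIR filter standing in for filter(b,1,x) — all names are mine):

```python
import cmath
import math

def fir_lowpass(M, wc):
    """Order-M lowpass: ideal sinc times a Hamming window."""
    h = []
    for n in range(M + 1):
        ideal = wc / math.pi if n == M / 2 else \
            math.sin(wc * (n - M / 2)) / (math.pi * (n - M / 2))
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / M)
        h.append(ideal * window)
    return h

def fir_filter(b, x):
    """y = filter(b, 1, x): direct-form FIR convolution."""
    return [sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
            for n in range(len(x))]

b = fir_lowpass(50, 0.4 * math.pi)
H = lambda w: abs(sum(bk * cmath.exp(-1j * w * k) for k, bk in enumerate(b)))
print(H(0.0), H(math.pi))   # near-unity DC gain, heavily attenuated at pi
```

The Hamming window buys roughly 53 dB of stopband attenuation at the cost of a wider transition than the rectangular window.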
FIR Filter Design Through
Frequency Sampling
 An alternate FIR filter design based on using the inverse DFT of a
custom filter
 Unlike the window technique, this technique can design a filter with an
arbitrary frequency response, not just lowpass, highpass, etc.
ª The desired filter response is specified at N equally spaced points in the [0, 2π]
interval. These constitute the magnitude samples:
Hdes(ωk)|_{ωk = 2πk/N} = Hdes(2πk/N), k = 0, 1, 2, …, N − 1
ª A phase term is added to ensure linear phase, θ(ω) = −ω(N − 1)/2, to obtain the DFT
coefficients of the desired filter Hdes[k]:
Hdes[k] = Hdes(2πk/N) e^{−j(2πk/N)(N−1)/2}, k = 0, 1, 2, …, N − 1
ª Inverse DFT of the filter is taken to obtain the impulse response of the desired
filter, h[n].
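The three steps above can be sketched end to end in Python (stdlib only; the N = 15 sample grid and the lowpass magnitude pattern are my illustrative choices). Note the magnitudes are chosen symmetric, A[k] = A[N−k], so the inverse DFT comes out real:

```python
import cmath
import math

N = 15                              # number of frequency samples (odd)
M = N - 1                           # filter order
# desired magnitudes at w_k = 2*pi*k/N (a lowpass; symmetric A[k] = A[N-k])
A = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1]

# linear-phase DFT samples: H[k] = A[k] * exp(-j*(2*pi*k/N)*(N-1)/2)
Hk = [A[k] * cmath.exp(-1j * (2 * math.pi * k / N) * (N - 1) / 2)
      for k in range(N)]

# impulse response via the inverse DFT
h = [sum(Hk[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]
h = [v.real for v in h]             # imaginary parts vanish by symmetry

print(sum(h))                       # H(0) = A[0] = 1
```

The resulting h[n] is symmetric, h[n] = h[M − n], so the filter has exactly linear phase.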
Frequency Sampling Design
[Figure: frequency samples placed on the desired magnitude response.]
In Matlab: fir2
fir2 FIR arbitrary shape filter design using the frequency sampling method.
B = fir2(N,F,A,NPT,window) designs an Nth order FIR digital filter with the frequency response specified
by vectors F (frequencies) and A (amplitudes), and returns the filter coefficients in length N+1 vector B. The
desired frequency response is interpolated on an evenly spaced grid of length NPT points (512, by default).
The filter coefficients are then obtained by applying inverse DFT and multiplying by the window (default,
Hamming). Vectors F and A specify the frequency and magnitude breakpoints for the filter such that plot(F,A)
would show a plot of the desired frequency response. The frequencies in F must be between 0.0 < F < 1.0, with
1.0 corresponding to half the sample rate. They must be in increasing order and start with 0.0 and end with 1.0.
The filter B is real, and has linear phase, i.e., symmetric coefficients obeying B(k) = B(N+2-k), k = 1,2,...,N+1.
By default FIR2 windows the impulse response with a Hamming window. Other available windows, including
Boxcar, Hann, Bartlett, Blackman, Kaiser and Chebwin can be specified with an optional trailing argument.
For example, B = fir2(N,F,A,bartlett(N+1)) uses a Bartlett window.
For filters with a gain other than zero at Fs/2, e.g., highpass and bandstop filters, N must be even. Otherwise,
N will be incremented by one. In this case the window length should be specified as N+2.
Example
freq=[0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1];
amp=[0 0.5 1 0.8 0.6 0 0.5 0.5 1 0 0];
N=100;                 % example order (assumed; chosen by trial and error)
b=fir2(N, freq, amp);  % design the filter from the breakpoints
subplot(211)
[H w]=freqz(b, 1, 1024);
plot(w/pi, abs(H)); hold on
plot(freq, amp, 'r*')
grid
xlabel('Frequency, \omega/\pi')
title('Magnitude response of the filter and the corresponding points')
subplot(212)
plot(w/pi, unwrap(angle(H)));
xlabel('Frequency, \omega/\pi')
title('Phase response of the designed filter')
grid
You will need to determine the filter order by trial and error. You may need higher
orders if your specified points require a sharp transition.
Other FIR Design Methods
 While the window and frequency sampling methods are simple and powerful,
they do not allow precise control of the critical frequencies, nor do
they provide equiripple behavior in the passband and stopband
 Several computer aided design techniques exist that allow optimal
control of all bands and ripples
 The Parks-McClellan algorithm (Remez exchange algorithm)
ª In Matlab, read the help file for the remez() function.
On Friday
 Bring your own .wav file along with an earphone…!
Lecture 23
FIR Filter
Design Review
&
IIR Filter
Design
1 − δp ≤ |H(ω)| ≤ 1 + δp, |ω| ≤ ωp;  |H(ω)| ≤ δs, ωs ≤ |ω| ≤ π
ωp - passband edge frequency
ωs - stopband edge frequency
δp - peak ripple value in the passband
δs - peak ripple value in the stopband
 Using windows:
ª Start with an ideal filter that meets the design criteria, say a filter H(ω)
ª Take the inverse DTFT of this H(ω) to obtain h[n].
• This h[n] will be doubly infinite in length, and non-causal
ª Truncate using a window, say a rectangle, so that M+1 coefficients of h[n] are
retained, and all the others are discarded.
• We now have a finite length (order M) filter, ht[n]; however, it is still non-causal
ª Shift the truncated h[n] to the right (i.e., delay) by M/2 samples, so that the first
sample now occurs at n = 0.
• The resulting impulse response, ht[n − M/2], is a causal, stable FIR filter, which has an
almost identical magnitude response and a phase factor of e^{−jωM/2} compared to the
original filter, due to the delay introduced.
Other FIR Filter Design
[Figure: ideal lowpass response H_LP(e^{jω}), equal to 1 for |ω| ≤ ωc.]
hLP[n] = sin(ωc n)/(π n), −∞ < n < ∞
After truncation and shifting:
hLP[n] = sin(ωc(n − M/2))/(π(n − M/2)), 0 ≤ n ≤ M, n ≠ M/2;  hLP[M/2] = ωc/π
H_HP(ω) = 1 − H_LP(ω)  ⇔  hHP[n] = δ[n] − hLP[n]
H_BS(ω) = 1 − H_BP(ω)  ⇔  hBS[n] = δ[n] − hBP[n]
hHP[n] = −sin(ωc(n − M/2))/(π(n − M/2)), 0 ≤ n ≤ M, n ≠ M/2;  hHP[M/2] = 1 − ωc/π
hBP[n] = sin(ωc2(n − M/2))/(π(n − M/2)) − sin(ωc1(n − M/2))/(π(n − M/2)), 0 ≤ n ≤ M, n ≠ M/2;
hBP[M/2] = ωc2/π − ωc1/π
hBS[n] = −sin(ωc2(n − M/2))/(π(n − M/2)) + sin(ωc1(n − M/2))/(π(n − M/2)), 0 ≤ n ≤ M, n ≠ M/2;
hBS[M/2] = 1 − ωc2/π + ωc1/π
However…
Fixed windows: once the filter length is chosen, the main-lobe and side-lobe
properties are determined!
The Kaiser window has an additional parameter, beta, to adjust the stopband
attenuation.
FIR Filter Design Example
Matlab Demo
audiofilter2.m
IIR Filter Design
 Major disadvantage of the FIR filter: Long filter lengths
ª The order of an FIR filter is usually much higher than the order of an equivalent IIR
filter meeting the same specifications ⇒ higher computational complexity
ª IIR filters are therefore preferred for many practical applications.
ª Two potential concerns of IIR filters must be addressed, however: linear phase
and stability
Maps a point in one plane into a unique point in the other plane.
 The analog-to-digital filter transformation is then
H(z) = Ha(s)|_{s = (2/Ts)·(1 − z⁻¹)/(1 + z⁻¹)}
Analog ↔ Digital
Frequency
 The parameter Ts often does not play a role in the design, and therefore Ts = 2 is
chosen for convenience. Then we have
z = (1 + s)/(1 − s)  ⇔  s = (1 − z⁻¹)/(1 + z⁻¹)
 Since in the s-plane s = σ + jΩ ⇒
z = ((1 + σ) + jΩ)/((1 − σ) − jΩ)  ⇒  |z|² = ((1 + σ)² + Ω²)/((1 − σ)² + Ω²)
σ = 0 → s = jΩ → |z| = 1
σ < 0 → |z| < 1
σ > 0 → |z| > 1
Analog ↔ Digital
Frequency
 The left half of the s-plane corresponds to inside the unit circle in the z-plane
 The jΩ axis corresponds to the unit circle
 The stability requirement of analog filters carries over to digital filters:
ª Analog: The poles of the filter frequency response must be on the left half plane
ª Digital: The poles of the filter frequency response must be inside the unit circle, i.e., the
ROC must include the unit circle.
Effect of Bilinear
Transformation
 Since the frequency response is defined on the unit circle, z = e^{jω} ↔ s = jΩ:
s = (2/Ts)·(1 − z⁻¹)/(1 + z⁻¹)
jΩ = (2/Ts)·(1 − e^{−jω})/(1 + e^{−jω})
Ω = (2/Ts) tan(ω/2)  ⇔  ω = 2 tan⁻¹(ΩTs/2)
This mapping is (highly) nonlinear ⇒ frequency warping
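The warping relation can be verified directly from the mapping; the Python sketch below (stdlib only; the sample frequencies are illustrative) evaluates s = (2/Ts)(1 − z⁻¹)/(1 + z⁻¹) on the unit circle and confirms it is purely imaginary with Ω = (2/Ts) tan(ω/2):

```python
import cmath
import math

Ts = 2.0   # the convenient choice used in the notes

def s_of_z(z):
    """Bilinear mapping s = (2/Ts) * (1 - z^-1) / (1 + z^-1)."""
    return (2 / Ts) * (1 - 1 / z) / (1 + 1 / z)

for w in (0.1, 0.4 * math.pi, 0.6 * math.pi):
    s = s_of_z(cmath.exp(1j * w))
    # s = j*Omega with Omega = (2/Ts) * tan(w/2)
    print(s.real, s.imag, (2 / Ts) * math.tan(w / 2))
```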
Bilinear
Transformation
 Steps in the design of a digital filter:
ª Prewarp ωp, ωs to find their analog equivalents Ωp, Ωs
ª Design the analog filter Ha(s)
ª Design the digital filter H(z) by applying bilinear transformation to Ha(s)
 Note the following, however:
ª Transformation can be used to preserve the edge frequencies, however, the shape
of the magnitude response will not necessarily be the same in analog and digital
cases, due to nonlinear mapping
ª Transformation does not preserve phase response of analog filter
ª The distortion caused by the nonlinearity is at the high frequencies. Therefore, this
may not be a problem for lowpass filters, but it may be too severe for certain
highpass filters.
Analog Filter Design
 We know
ª how to prewarp the frequencies to convert analog specs into digital specs.
ª how to transform an analog filter into a digital filter through the bilinear
transformation
ª But, how do we design the analog filter?
 Several approaches and prototypes:
ª Butterworth filter – maximally flat
ª Chebyshev (type I and type II) filters – equiripple in the passband or stopband
ª Elliptic filter – sharper transition band, but nonlinear phase and non-equiripple
ª Bessel filter – linear phase in the passband, at the cost of a wide transition band
 All of these filters are defined with a lowpass characteristic
ª Spectral transformations are then used to convert the lowpass design to highpass,
bandpass, or bandstop.
The Butterworth
Approximation
 The magnitude-squared response of an Nth order analog lowpass Butterworth filter:
|H(Ω)| = 1/√(1 + (Ω/Ωc)^{2N})  ⇔  |H(Ω)|² = 1/(1 + (Ω/Ωc)^{2N})
ª Ωc is the 3-dB cutoff frequency (20 log|H(Ωc)| = −3), N is the filter order
ª The most interesting property of this function is that the first 2N − 1 derivatives of
|H(Ω)|² are zero at Ω = 0 ⇒ the function is as flat as possible without being a constant.
ª The Butterworth LPF is therefore said to have a maximally flat magnitude at Ω = 0.
[Figure: Butterworth magnitude responses sharpen with increasing N, with passband level 1 − δp at Ωp and stopband level δs at Ωs.]
Butterworth Filter
Frequency Response
 The necessary order to meet the passband and stopband specs can be obtained by imposing
|H(Ωp)|² = (1 − δp)², |H(Ωs)|² = δs²
which gives
(Ωp/Ωc)^{2N} = 1/(1 − δp)² − 1, (Ωs/Ωc)^{2N} = 1/δs² − 1
so that
N = (1/2) · [log(1/(1 − δp)² − 1) − log(1/δs² − 1)] / log(Ωp/Ωs)
 The 3-dB cutoff frequency is then
Ωc = Ωp / (1/(1 − δp)² − 1)^{1/(2N)}
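The order and cutoff formulas are easy to mechanize; the Python sketch below (stdlib only; the spec values are illustrative, corresponding to a 2 dB / 50 dB design with prewarped edges 0.7265 and 1.3764) computes N and Ωc:

```python
import math

dp, ds = 1 - 10 ** (-2 / 20), 10 ** (-50 / 20)   # example ripples (2 dB / 50 dB)
Wp, Ws = 0.7265, 1.3764                          # example prewarped edges

ratio = (1 / (1 - dp) ** 2 - 1) / (1 / ds ** 2 - 1)
N = math.ceil(0.5 * math.log10(ratio) / math.log10(Wp / Ws))
Wc = Wp / (1 / (1 - dp) ** 2 - 1) ** (1 / (2 * N))
print(N, Wc)   # the 3-dB cutoff lands between the two edge frequencies
```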
Butterworth filter
System Response
 We now know the frequency response of the Butterworth filter, and
how to obtain the filter order N, and the cutoff-frequency Ωc from the
passband and stopband edge frequencies
 In order to design the filter, we actually need the transfer function,
H(s) of the filter.
H(s)H(−s)|_{s = jΩ} = |H(Ω)|² = 1/(1 + (s/(jΩc))^{2N})
ª This transfer function has 2N roots (poles), located at equal distances from each
other around a circle of radius Ωc:
sk = Ωc e^{jπ(2k+1)/(2N)} e^{jπ/2}, k = 0, 1, …, 2N − 1
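The pole pattern can be checked numerically; the Python sketch below (stdlib only; N = 4 and Ωc = 1 are illustrative) generates the 2N poles from the formula above, confirms they all sit on the circle of radius Ωc, and picks out the N left-half-plane poles that form the stable H(s):

```python
import cmath
import math

N, Wc = 4, 1.0   # example order and 3-dB cutoff (assumed)

# 2N poles of H(s)H(-s): s_k = Wc * exp(j*pi*(2k+1)/(2N)) * exp(j*pi/2)
poles = [Wc * cmath.exp(1j * math.pi * (2 * k + 1) / (2 * N))
         * cmath.exp(1j * math.pi / 2) for k in range(2 * N)]

print([round(abs(p), 6) for p in poles])   # all on the radius-Wc circle
lhp = [p for p in poles if p.real < 0]     # the stable H(s) keeps these
print(len(lhp))                            # exactly N of them
```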
Designing a Digital LPF
Using Butterworth Appr.
1. Prewarp ωp, ωs to find their analog equivalents Ωp, Ωs: Ω = (2/Ts) tan(ω/2)
2. Design the analog filter
a) From δp, δs, Ωp and Ωs, obtain the order of the filter N:
N = (1/2) · [log(1/(1 − δp)² − 1) − log(1/δs² − 1)] / log(Ωp/Ωs)
b) Use N, δp, and Ωp to calculate the 3-dB cutoff frequency: Ωc = Ωp / (1/(1 − δp)² − 1)^{1/(2N)}
c) Determine the corresponding H(s) and its poles
3. Apply the bilinear transformation to obtain H(z): z = (1 + (Ts/2)s)/(1 − (Ts/2)s)
In Matlab
 Yes, you guessed it right, Matlab has several functions:
buttord Butterworth filter order selection.
[N, Wn] = buttord(Wp, Ws, Rp, Rs) returns the order N of the lowest order digital
Butterworth filter that loses no more than Rp dB in the passband and has at least Rs dB of
attenuation in the stopband. Wp and Ws are the passband and stopband edge frequencies,
normalized from 0 to 1 (where 1 corresponds to pi radians/sample). buttord() also
returns Wn, the Butterworth natural frequency (or the "3 dB frequency") to use with
butter() to achieve the specs.
[N, Wn] = buttord(Wp, Ws, Rp, Rs, 's') does the computation for an analog filter, in
which case Wp and Ws are in radians/second; for analog design, Wn is not restricted to
the [0 1] range. When Rp is chosen as 3 dB, the Wn in butter() is equal to Wp in buttord().
In Matlab
butter() Butterworth digital and analog filter design.
[B,A] = butter(N,Wn) designs an Nth order lowpass digital Butterworth filter and returns the filter
coefficients in length N+1 vectors B (numerator) and A (denominator). The coefficients are listed in
descending powers of z. The cutoff frequency Wn must be 0.0 < Wn < 1.0, with 1.0 corresponding
to half the sample rate. If Wn is a two-element vector, Wn = [W1 W2], butter() returns an order
2N bandpass filter with passband W1 < W < W2. [B,A] = butter(N,Wn,'high') designs a
highpass filter.
When used with three left-hand arguments, as in [Z,P,K] = butter(...), the zeros and poles are
returned in length N column vectors Z and P, and the gain in scalar K.
[NUMd,DENd] = bilinear(NUM,DEN,Fs), where NUM and DEN are row vectors containing
numerator and denominator transfer function coefficients, NUM(s)/DEN(s), in descending powers of
s, transforms to z-transform coefficients NUMd(z)/DENd(z).
Each form of bilinear() accepts an optional additional input argument that specifies prewarping. For
example, [Zd,Pd,Kd] = bilinear(Z,P,K,Fs,Fp) applies prewarping before the bilinear transformation so
that the frequency responses before and after mapping match exactly at frequency point Fp (match point Fp is
specified in Hz).
Example
 Design an IIR lowpass filter using the Butterworth approximation that
meets the following specs: fp = 2 kHz, fs = 3 kHz, αp < 2 dB, αs > 50 dB,
sampling frequency FT = 10 kHz.
ª ωp = 0.4π, ωs = 0.6π, δp = 0.2057, δs = 0.0032 ⇒ Ωp = 0.7265, Ωs = 1.3764
Recall: s = (2/Ts)·(1 − z⁻¹)/(1 + z⁻¹), z = (1 + (Ts/2)s)/(1 − (Ts/2)s), and
H(z) = Ha(s)|_{s = (2/Ts)(1 − z⁻¹)/(1 + z⁻¹)}
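The numbers in this example can be reproduced step by step; the Python sketch below (stdlib only; Ts = 2 as in the notes) prewarps the edges, converts the dB specs to ripples, and evaluates the Butterworth order and 3-dB cutoff:

```python
import math

alpha_p, alpha_s = 2.0, 50.0            # dB specs from the example
wp, ws = 0.4 * math.pi, 0.6 * math.pi
Ts = 2.0

# step 1: prewarp the digital edge frequencies
Wp, Ws = (2 / Ts) * math.tan(wp / 2), (2 / Ts) * math.tan(ws / 2)

# step 2a: peak ripples and the Butterworth order formula
dp, ds = 1 - 10 ** (-alpha_p / 20), 10 ** (-alpha_s / 20)
ratio = (1 / (1 - dp) ** 2 - 1) / (1 / ds ** 2 - 1)
N = math.ceil(0.5 * math.log10(ratio) / math.log10(Wp / Ws))

# step 2b: 3-dB cutoff from the passband spec
Wc = Wp / (1 / (1 - dp) ** 2 - 1) ** (1 / (2 * N))

print(round(Wp, 4), round(Ws, 4), N)    # 0.7265 1.3764 10
```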
 …and then there is direct design that sidesteps all of the above steps…
Designing a Digital LPF
Using Butterworth Appr.
If the filter order is large, manually computing H(s) is typically difficult, and is
usually done using filter design software (such as Matlab):
H(s) = K / ((s − s1)(s − s2)⋯(s − sN)), built from the N left-half-plane poles
In Matlab
butter() Butterworth digital and analog filter design.
[B,A] = butter(N,Wn) designs an Nth order lowpass digital Butterworth filter and returns the filter
coefficients in length N+1 vectors B (numerator) and A (denominator). The coefficients are listed in
descending powers of z. The cutoff frequency Wn must be 0.0 < Wn < 1.0, with 1.0 corresponding
to half the sample rate. If Wn is a two-element vector, Wn = [W1 W2], butter() returns an order
2N bandpass filter with passband W1 < W < W2.
When used with three left-hand arguments, as in [Z,P,K] = butter(...), the zeros and poles are
returned in length N column vectors Z and P, and the gain in scalar K.
[NUMd,DENd] = bilinear (NUM,DEN,Fs), where NUM and DEN are row vectors containing
numerator and denominator transfer function coefficients, NUM(s)/DEN(s), in descending powers of
s, transforms to z-transform coefficients NUMd(z)/DENd(z).
Each form of bilinear() accepts an optional additional input argument that specifies prewarping. For
example, [Zd,Pd,Kd] = bilinear(Z,P,K,Fs,Fp) applies prewarping before the bilinear transformation so
that the frequency responses before and after mapping match exactly at frequency point Fp (match point Fp is
specified in Hz).
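The substitution that bilinear() performs is easiest to see on a first-order section. A hedged Python sketch of the mapping s = 2Fs(1 − z⁻¹)/(1 + z⁻¹) applied to NUM(s)/DEN(s) coefficients (first-order only, no prewarping; a simplified model, not Matlab's implementation):

```python
def bilinear_first_order(num, den, Fs):
    """Bilinear transform of a first-order analog section.
    num, den: [c1, c0], the coefficients of c1*s + c0 (descending powers of s).
    Returns ([B0, B1], [A0, A1]) in powers of z^-1, normalized so A0 = 1.
    Substitution used: s = 2*Fs*(1 - z^-1)/(1 + z^-1) (no prewarping)."""
    k = 2.0 * Fs
    b1, b0 = num
    a1, a0 = den
    B = [b1 * k + b0, b0 - b1 * k]
    A = [a1 * k + a0, a0 - a1 * k]
    return [c / A[0] for c in B], [c / A[0] for c in A]

# RC lowpass H(s) = 1/(s + 1), sampled at Fs = 1 Hz:
B, A = bilinear_first_order([0, 1], [1, 1], 1.0)
# B = [1/3, 1/3], A = [1, -1/3]; DC gain (B[0]+B[1])/(A[0]+A[1]) = 1, matching H(0) = 1
print(B, A)
```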
Example
 Design a digital IIR lowpass filter using the Butterworth approximation that meets
the following specs: Fp = 2 kHz, Fst = 3 kHz, αp < 2 dB, αs > 50 dB, FT = 10 kHz.
ª ωp = 0.4π rad, ωs = 0.6π rad, δp = 0.2057, δs = 0.0032 ⇒ Ωp = 0.7265 rad/s, Ωs = 1.3764 rad/s
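The numbers in this example can be reproduced directly. A short Python check of the normalization and prewarping steps (the sampling period T is absorbed into the tan(·/2) convention used in these slides):

```python
import math

Fp, Fst, FT = 2000.0, 3000.0, 10000.0   # passband edge, stopband edge, sampling rate (Hz)
wp = 2 * math.pi * Fp / FT              # = 0.4*pi rad/sample
ws = 2 * math.pi * Fst / FT             # = 0.6*pi rad/sample
Omega_p = math.tan(wp / 2)              # prewarped analog edge frequencies
Omega_s = math.tan(ws / 2)
print(round(Omega_p, 4), round(Omega_s, 4))  # → 0.7265 1.3764
```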
Chebyshev Filter
TYPE I:  |H(Ω)|² = 1 / [1 + ε²·CN²(Ω/Ωc)]
[Figure: magnitude response of the Chebyshev filter, N odd]
where CN(Ω) is the Chebyshev polynomial of order N, and the ripple factor ε and required order N are:
ε = √(1/(1 − δp)² − 1)
N = cosh⁻¹((1/ε)·√(1/δs² − 1)) / cosh⁻¹(Ωs/Ωp)
ª No closed-form formula exists for computing Ωc. Therefore, for Type I, take Ωc = Ωp
ª Compute the Chebyshev polynomial ⇒ Messy…!
ª Determine the poles of the filter ⇒ Obtain the analog filter ⇒ Really messy, in fact, downright ugly!
ª Apply the bilinear transformation to obtain the digital filter ⇒ Relatively easy, but by this time
you have probably lost the will to live.
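The ε and N formulas above are at least easy to evaluate numerically. A Python sketch, applied to the earlier Butterworth lowpass specs (αp = 2 dB, αs = 50 dB, prewarped edges 0.7265 and 1.3764 rad/s); note the resulting order is well below the Butterworth order of 10 for the same specs:

```python
import math

def cheby1_eps_order(alpha_p, alpha_s, Wp, Ws):
    """Ripple factor and minimum order for a Type I Chebyshev filter, using the
    slide's formulas with 1 - delta_p = 10^(-alpha_p/20), delta_s = 10^(-alpha_s/20)."""
    eps = math.sqrt(10 ** (alpha_p / 10) - 1.0)   # = sqrt(1/(1 - dp)^2 - 1)
    d = math.sqrt(10 ** (alpha_s / 10) - 1.0)     # = sqrt(1/ds^2 - 1)
    N = math.ceil(math.acosh(d / eps) / math.acosh(Ws / Wp))
    return eps, N

eps, N = cheby1_eps_order(2, 50, 0.7265, 1.3764)
print(round(eps, 4), N)  # → 0.7648 6
```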
[B,A] = cheby1(N,R,Wn) designs an Nth order type I lowpass digital Chebyshev filter with R decibels of peak-
to-peak ripple in the passband. cheby1 returns the filter coefficients in length N+1 vectors B (numerator) and
A (denominator). The cutoff frequency Wn must be 0.0 < Wn < 1.0, with 1.0 corresponding to half the sample
rate. Use R=0.5 as a starting point, if you are unsure about choosing R.
[B,A] = cheby2(N,R,Wn) designs an Nth order type II lowpass digital Chebyshev filter with the stopband
ripple R decibels down and stopband edge frequency Wn. cheby2() returns the filter coefficients in length
N+1 vectors B (numerator) and A (denominator). The cutoff frequency Wn must be 0.0 < Wn < 1.0, with 1.0
corresponding to half the sample rate. Use R = 20 as a starting point, if you are unsure about choosing R.
If Wn is a two-element vector, Wn = [W1 W2], cheby1 returns an order 2N bandpass filter with passband
W1 < W < W2. [B,A] = cheby1(N,R,Wn,'high') designs a highpass filter.
[B,A] = cheby1(N,R,Wn,'stop') is a bandstop filter if Wn = [W1 W2].
When used with three left-hand arguments, as in [Z,P,K] = cheby1(…), the zeros and poles are returned in
length N column vectors Z and P, and the gain in scalar K.
% Assuming H1, H2, H3 hold the frequency responses of analog Chebyshev-I
% filters of order 3, 5, and 10, computed earlier, e.g. (1 dB ripple, Wc = 1 rad/s):
% [b, a] = cheby1(3, 1, 1, 's');  [H1, w] = freqs(b, a);
plot(w, abs(H1));
grid; hold on
plot(w, abs(H2), 'r')
plot(w, abs(H3), 'g')
legend('N=3', 'N=5', 'N=10')
title('Magnitude Spectra of Chebyshev Filters');
xlabel('Analog angular frequency, \Omega (rad/s)')
In Matlab
[N, Wn] = cheb1ord(Wp, Ws, Rp, Rs) returns the order N of the lowest order digital Chebyshev Type I filter
that loses no more than Rp dB in the passband and has at least Rs dB of attenuation in the stopband. Wp and Ws
are the passband and stopband edge frequencies, normalized from 0 to 1 (where 1 corresponds to π rad/sample).
For example,
Lowpass: Wp = .1, Ws = .2 Highpass: Wp = .2, Ws = .1 (note the reversal of freq.)
Bandpass: Wp = [.2 .7], Ws = [.1 .8] Bandstop: Wp = [.1 .8], Ws = [.2 .7]
cheb1ord() also returns Wn, the Chebyshev natural frequency to use with cheby1() to achieve the
specifications.
[N, Wn] = cheb1ord(Wp, Ws, Rp, Rs, 's') does the computation for an analog filter, in which case Wp and
Ws are in radians/second.
If designing an analog filter, use bilinear() to convert the analog filter coeff. to digital filter coeff.
chebyshev_demo2.m Demo
ª Where ε again controls the ripple amount and UN is some cryptic function
(Jacobian elliptic function of order N):
UN(Ω) = ∫₀^Ω dθ / √(1 − N·sin²θ)
(if you must insist… see if you can pull N out of this!)
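Just to take a little of the mystery out of that integral, here is a crude numerical evaluation in Python (simple midpoint rule; only valid while the integrand stays real, i.e. N·sin²θ < 1):

```python
import math

def U(Omega, N, steps=10000):
    """Midpoint-rule evaluation of the integral on the slide:
    U_N(Omega) = integral from 0 to Omega of dtheta / sqrt(1 - N*sin^2(theta)).
    Only valid while N*sin^2(theta) < 1 over the integration range."""
    h = Omega / steps
    return sum(h / math.sqrt(1.0 - N * math.sin((k + 0.5) * h) ** 2)
               for k in range(steps))

# With N = 0 the integrand is 1, so U reduces to Omega itself:
print(round(U(1.0, 0), 6))  # → 1.0
```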
The functions ellipord() and ellip() determine the elliptic filter parameters, and design
the elliptic filter, respectively. The syntax and usage are similar to those of Chebyshev.
[N, Wn] = ellipord(Wp, Ws, Rp, Rs, 's');
[b, a] = ellip(N, Rp, Rs, Wn, 's');
Bessel Filter
 Yet another cryptically designed filter, used only in the most desperate
situations.
ª Advantage: Has (approximately) linear phase in the passband, though there is no
guarantee that the linearity will be preserved when bilinear transform is applied.
ª Disadvantage: Horrendously wide transition band (no free lunch!)
In Matlab
[B,A] = besself(N,Wn) designs an Nth order lowpass analog Bessel filter and
returns the filter coefficients in length N+1 vectors B and A. The cut-off
frequency Wn must be greater than 0.
Spectral
Transformations
 But what about other types of filters?
 Spectral transformations: Process for converting lowpass filters into highpass,
bandpass or bandstop filters
ª In fact, spectral transformations can be used to convert a LPF into another LPF with a
different cutoff frequency.
ª The transformation can be done in either analog or discrete domain.
 Here is how to “design a non-lowpass filter by designing a lowpass filter”
ª Prewarp the digital filter edge frequencies into analog frequencies
ª Transform the filter specs into LPF specs using spectral transformation
ª Design analog LPF
ª Obtain the corresponding desired digital filter by either
1. Convert the analog LPF to digital LPF using bilinear transformation and use the digital – to – digital
spectral transformation to obtain the non-LPF characteristic
2. Convert the analog LPF to the desired analog non-LPF using the analog – to – analog spectral
transformation, followed by the bilinear transformation to obtain the digital equivalent
Analog-to-Analog Transformations
ª Ωp: passband edge frequency of the prototype analog LPF, typically normalized to 1 rad/s
ª LP→LP: ΩpLP is the passband edge frequency of the desired analog LPF
ª LP→HP: ΩpHP is the passband edge frequency of the desired analog HPF
ª LP→BP: ΩLBP and ΩUBP are the lower and upper passband edge frequencies of the desired analog BPF
ª LP→BS: ΩLBS and ΩUBS are the lower and upper passband edge frequencies of the desired analog BSF
Digital-to-Digital Transformations
 Digital-to-digital transformations are obtained by replacing every z^-1 in H(z)
with a mapping function F(z^-1)
 In digital-to-digital transformations, the following conditions must be satisfied:
1. The mapping must transform the LP prototype into the desired type of filter (duh!)
2. The inside of the unit circle must be mapped to itself (so that stability is
ensured – part 1)
3. The unit circle must also be mapped to itself (so that stability is
ensured – part 2)
 The transformations given on the right satisfy all of these requirements
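To illustrate conditions 2 and 3, here is a Python check of the standard digital LP→LP mapping F(z⁻¹) = (z⁻¹ − α)/(1 − α·z⁻¹), a first-order allpass (the value of α below is a hypothetical example; any real |α| < 1 works):

```python
import cmath

def lp_to_lp(zinv, alpha):
    """Digital LP->LP spectral transformation: z^-1 is replaced by
    F(z^-1) = (z^-1 - alpha)/(1 - alpha*z^-1), a first-order allpass."""
    return (zinv - alpha) / (1 - alpha * zinv)

alpha = 0.4                                 # hypothetical example value
z1 = cmath.exp(1.2j)                        # a point on the unit circle
print(abs(lp_to_lp(z1, alpha)))             # ≈ 1.0  (condition 3: circle -> circle)
print(abs(lp_to_lp(0.5 * z1, alpha)) < 1)   # True  (condition 2: inside -> inside)
```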
Needless to say…
 For any application of practical complexity, these transformations cannot be done by hand.
Digital filter design software, such as Matlab, is typically used for the transformations
lp2lp() Lowpass to lowpass analog filter transformation.
[NUMT,DENT] = lp2lp(NUM,DEN,Wo) transforms the lowpass filter prototype
NUM(s)/DEN(s) with a cutoff frequency of 1 rad/s to a lowpass filter with cutoff frequency
Wo (rad/s).
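The substitution behind lp2lp() is s → s/Wo, which scales the coefficient of s^k by Wo⁻ᵏ. A minimal Python sketch of that scaling (a simplified model of what lp2lp() computes, not Matlab's actual implementation):

```python
def lp2lp_coeffs(num, den, Wo):
    """Simplified model of the lp2lp substitution s -> s/Wo.
    Coefficients are in descending powers of s; the coefficient of s^k
    is divided by Wo^k."""
    def scale(c):
        n = len(c) - 1
        return [ci / Wo ** (n - i) for i, ci in enumerate(c)]
    return scale(num), scale(den)

# Move the cutoff of H(s) = 1/(s + 1) from 1 rad/s to 100 rad/s:
num, den = lp2lp_coeffs([1.0], [1.0, 1.0], 100.0)
print(num, den)  # → [1.0] [0.01, 1.0], i.e. H(s) = 1/(s/100 + 1)
```

At the new cutoff s = j100, the magnitude is 1/√2, confirming the 3 dB point moved.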
 Design a Type I Chebyshev digital IIR highpass filter with the following
specifications: Fp = 700 Hz, Fst = 500 Hz, αp = 1 dB, αs = 32 dB, FT = 2 kHz
ª First compute the normalized angular digital frequencies:
ωs = 2πFst/FT = 2π×500/2000 = 0.5π     ωp = 2πFp/FT = 2π×700/2000 = 0.7π
ª Prewarp to analog frequencies:
ΩpHP = tan(ωp/2) = 1.9626105     ΩsHP = tan(ωs/2) = 1.0
ª For the prototype lowpass filter, pick Ωp = 1 rad/s
ª Use the LP→HP spectral transformation to convert these into lowpass equivalents:
s = Ωp·ΩpHP/ŝ  ⇒  Ω = −Ωp·ΩpHP/Ω̂   (s = jΩ)
Ωs = Ωp·ΩpHP/ΩsHP = (1 × 1.9626105)/1 = 1.9626105
ª Then the analog prototype LPF specs are: Ωp = 1, Ωs = 1.9626105, αp = 1 dB, αs = 32 dB
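The arithmetic of this example can be verified in a few lines; a Python sketch mirroring the steps above:

```python
import math

FT = 2000.0                      # sampling rate (Hz)
wp = 2 * math.pi * 700 / FT      # = 0.7*pi rad/sample (HPF passband edge)
ws = 2 * math.pi * 500 / FT      # = 0.5*pi rad/sample (HPF stopband edge)
Op_hp = math.tan(wp / 2)         # prewarped analog HPF edge frequencies
Os_hp = math.tan(ws / 2)
Wp = 1.0                         # prototype LPF passband edge (rad/s)
Ws = Wp * Op_hp / Os_hp          # LP->HP mapping gives the prototype stopband edge
print(round(Op_hp, 4), round(Os_hp, 4), round(Ws, 4))  # → 1.9626 1.0 1.9626
```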
spect_trans_demo.m
Example Cont.
%Omega_p=tan(wp/2); %convert to analog frequencies
%Omega_s=tan(ws/2);
%Wp=1; Ws=Wp*Omega_p/Omega_s;
[B,A] = yulewalk(N,F,M) finds the Nth order recursive filter coefficients B and A such
that the filter matches the magnitude frequency response given by vectors F and M.
Vectors F and M specify the frequency and magnitude breakpoints for the filter, such that
plot(F,M) would show a plot of the desired frequency response. The frequencies in F must
be between 0.0 and 1.0, with 1.0 corresponding to half the sample rate. They must be in
increasing order and start with 0.0 and end with 1.0.
subplot(223)
[H2 w]=freqz(b2, 1, 1024);
plot(w/pi, abs(H2)); hold on
plot(freq, amp, 'r*'); grid
xlabel('Frequency, \omega/\pi')
title(' Fir2: Magnitude response')
subplot(224)
plot(w/pi, unwrap(angle(H2))); grid
xlabel('Frequency, \omega/\pi')
title('Phase response of the designed filter')
Compare filter orders and phase responses !!!
IIR vs. FIR
On Friday
 Demos of FIR and IIR filter designs on real signals
ª Play around with all the Matlab functions described in this lecture to familiarize
yourself with them.