Signals and
Systems
Lecture Notes
2014
PMcL
Contents
LECTURE 1A – SIGNALS
  OVERVIEW
  SIGNAL OPERATIONS
  CONTINUOUS AND DISCRETE SIGNALS
  PERIODIC AND APERIODIC SIGNALS
  DETERMINISTIC AND RANDOM SIGNALS
  ENERGY AND POWER SIGNALS
  COMMON CONTINUOUS-TIME SIGNALS
    THE CONTINUOUS-TIME STEP FUNCTION
    THE RECTANGLE FUNCTION
    THE STRAIGHT LINE
    THE SINC FUNCTION
    THE IMPULSE FUNCTION
  SINUSOIDS
  WHY SINUSOIDS?
    REPRESENTATION OF SINUSOIDS
    RESOLUTION OF AN ARBITRARY SINUSOID INTO ORTHOGONAL FUNCTIONS
    REPRESENTATION OF A SINUSOID BY A COMPLEX NUMBER
    FORMALISATION OF THE RELATIONSHIP BETWEEN PHASOR AND SINUSOID
    EULER’S COMPLEX EXPONENTIAL RELATIONSHIPS FOR COSINE AND SINE
    A NEW DEFINITION OF THE PHASOR
    GRAPHICAL ILLUSTRATION OF THE RELATIONSHIP BETWEEN THE TWO TYPES OF PHASOR AND THEIR CORRESPONDING SINUSOID
    NEGATIVE FREQUENCY
  COMMON DISCRETE-TIME SIGNALS
    THE DISCRETE-TIME STEP FUNCTION
    THE UNIT-PULSE FUNCTION
  SUMMARY
  QUIZ
  EXERCISES
  LEONHARD EULER (1707-1783)

LECTURE 6B – REVISION

LECTURE 7B – DISCRETIZATION
  OVERVIEW
  SIGNAL DISCRETIZATION
  SIGNAL RECONSTRUCTION
    HOLD OPERATION
  SYSTEM DISCRETIZATION
  FREQUENCY RESPONSE
  RESPONSE MATCHING
  SUMMARY
  EXERCISES

LECTURE 9A – STATE-VARIABLES
  OVERVIEW
  STATE REPRESENTATION
    STATES
    OUTPUT
    MULTIPLE INPUT-MULTIPLE OUTPUT SYSTEMS
  SOLUTION OF THE STATE EQUATIONS
  TRANSITION MATRIX
  TRANSFER FUNCTION
  IMPULSE RESPONSE
  LINEAR STATE-VARIABLE FEEDBACK
  SUMMARY
  EXERCISES

LECTURE 9B – STATE-VARIABLES 2
  OVERVIEW
  NORMAL FORM
  SIMILARITY TRANSFORM
  SOLUTION OF THE STATE EQUATIONS FOR THE ZIR
  POLES AND REPEATED EIGENVALUES
    POLES
    REPEATED EIGENVALUES
  DISCRETE-TIME STATE-VARIABLES
  DISCRETE-TIME RESPONSE
  DISCRETE-TIME TRANSFER FUNCTION
  SUMMARY
  EXERCISES

ANSWERS
Lecture 1A – Signals
Overview. Signal operations. Periodic & aperiodic signals. Deterministic &
random signals. Energy & power signals. Common signals. Sinusoids.
Overview
Electrical engineers should never forget “the big picture”.
Every day we take for granted the power that comes out of the wall, or the light
that instantly illuminates the darkness at the flick of a switch. We take for
granted the fact that electrical machines are at the heart of every manufacturing
industry. There has been no bigger benefit to humankind than the supply of
electricity to residential, commercial and industrial sites. Behind this “magic”
is a large infrastructure of generators, transmission lines, transformers,
protection relays, motors and motor drives.
Why we study systems
We also take for granted the automation of once hazardous or laborious tasks, we take for granted the ability of electronics to control something as complicated as a jet aircraft, and we seem not to marvel at the idea of your car’s engine having just the right amount of fuel injected into the cylinder with just the right amount of air, with a spark igniting the mixture at just the right time to produce the maximum power and the least amount of noxious gases as you tell it to accelerate up a hill when the engine is cold!
We forget that we are now living in an age where we can communicate with anyone (and almost anything), anywhere, at any time. We have a point-to-point telephone system, mobile phones, the Internet, radio and TV. We have never lived in an age so “information rich”.
One thing that engineers need to do well is to break down a seemingly complex
system into smaller, easier parts. We therefore need a way of describing these
systems mathematically, and a way of describing the inputs and outputs of
these systems - signals.
Signal Operations
There really aren’t many things that you can do to signals. Take a simple FM
modulator:
Example of signal operations
[Figure 1A.1 – FM stereo modulator: the left and right channels L and R are pre-emphasised to L' and R', then combined into the sum L+R and the difference L−R; the difference is multiplied by the carrier cos(2πfct) from a local oscillator, and the composite signal L+R+(L−R)cos(2πfct) is sent to the FM modulator and antenna]
Some of the signals in this system come from “natural” sources, such as music or speech; some come from “artificial” sources, such as a sinusoidal oscillator.

Linear system operations
We can multiply two signals together, we can add two signals together, and we can amplify, attenuate and filter. We normally treat all the operations as linear, although in practice some nonlinearity always arises.
[Figure 1A.2 – examples of a continuous-time signal g(t) and a discrete-time signal g[n]]
Periodic function defined
g(t) = g(t + T0),   T0 > 0    (1A.1)

The smallest value of T0 that satisfies Eq. (1A.1) is called the period. The fundamental frequency of the signal is defined as:

Fundamental frequency defined
f0 = 1/T0    (1A.2)
[Figure 1A. – a periodic signal g(t), with one period T0 marked on the time axis]
g[n] = g[n + T0],   T0 > 0    (1A.3)
Aperiodic signals defined
An aperiodic signal is one for which Eq. (1A.1) or Eq. (1A.3) does not hold. Any finite duration signal is aperiodic.
Example
Find the period and fundamental frequency (if they exist) of the following
signal:
[Figure 1A.5 – a periodic signal g(t), with T0 = 1 s marked on the time axis]

From the graph, the period is T0 = 1 s, and so the fundamental frequency is f0 = 1/T0 = 1 Hz.
If we add two periodic signals, then the result may or may not be periodic. The
result will only be periodic if an integral number of periods of the first signal
coincides with an integral number of periods of the second signal:
T0 = qT1 = pT2, where p/q is a rational number    (1A.5)
We know that a sinusoid is a periodic signal, and we shall soon see that any
signal composed of a sum of sinusoids with frequencies that are integral
multiples of a fundamental frequency is also periodic.
[Figure 1A.6 – signals g1(t) to g4(t), illustrating sums of periodic signals that are, and are not, periodic]
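Eq. (1A.5) lends itself to a quick numerical check. The sketch below (Python; the function name `common_period` is ours, not from these notes) finds the common period of two periodic signals whose periods have a rational ratio:

```python
from fractions import Fraction

def common_period(T1, T2):
    # The sum g1 + g2 is periodic only if T1/T2 is rational.  Writing
    # T1/T2 = p/q in lowest terms, the common period is T0 = q*T1 = p*T2
    # (Eq. 1A.5).  Fraction(float) is exact, so the ratio is exact here.
    ratio = Fraction(T1) / Fraction(T2)
    return ratio.denominator * T1

T0 = common_period(1.5, 1.0)   # T1/T2 = 3/2, so T0 = 2 * 1.5 = 3.0
```

For irrational ratios (say T1 = 1 and T2 = π) no finite common period exists, and the sum is aperiodic.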
Random signals are information bearing signals
Random signals are most often the information bearing signals we are used to: voice signals, television signals, digital data (computer files), etc. Electrical “noise” is also a random signal.
The instantaneous power dissipated in a resistor R by a voltage v(t), or by a current i(t), is:

p(t) = v²(t)/R    (1A.6)

or:

p(t) = R i²(t)    (1A.7)

With the resistance normalised to R = 1 Ω, and g(t) standing for either the voltage or the current, both reduce to:

p(t) = g²(t)    (1A.8)
The dissipated energy, or the total energy of the signal, is then given by:

E = ∫[−∞, ∞] g²(t) dt    (1A.9)

E is finite for an energy signal
A signal is classified as an energy signal if and only if the total energy of the signal is finite:

0 < E < ∞    (1A.10)

The average power of a signal
The average power of a signal is defined as:

P = lim (T→∞) (1/T) ∫[−T/2, T/2] g²(t) dt    (1A.11)
If the signal is periodic with period T0, we then have the special case:

and the average power of a periodic signal
P = (1/T0) ∫[−T0/2, T0/2] g²(t) dt    (1A.12)
Example

The easiest way to find the average power is by performing the integration in Eq. (1A.12) graphically. For an arbitrary sinusoid of amplitude A, we can graph the integrand g²(t) = A²cos²(2πt/T0) as:
[Figure 1A.7 – the integrand g²(t), oscillating between 0 and A² about the average value A²/2, with equal areas above and below that average]
Note that in drawing the graph we don’t really need to know the identity cos²θ = (1 + cos 2θ)/2 – all we need to know is that if we start off with a waveform that oscillates symmetrically about an average value of A²/2, then its integral over one period is an area equal to the average value times the span, i.e. A²T0/2. (This is the Mean Value Theorem for Integrals). So the average power is this area divided by the period:

P = A²/2    (1A.13)
This is a surprising result! The power of any sinusoid, no matter what its
frequency or phase, is dependent only on its amplitude.
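The same conclusion can be reached by carrying out the integration of Eq. (1A.12) numerically. A minimal Python sketch (the function name is ours):

```python
import math

def average_power(g, T0, n=10000):
    # Eq. (1A.12): average of g(t)^2 over one period, by the midpoint rule.
    dt = T0 / n
    return sum(g((k + 0.5) * dt) ** 2 for k in range(n)) * dt / T0

A, T0, phase = 3.0, 2.0, 0.7
P = average_power(lambda t: A * math.cos(2 * math.pi * t / T0 + phase), T0)
```

Here P comes out as A²/2 = 4.5; changing T0 or the phase leaves it unchanged, since only the amplitude matters.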
A signal is classified as a power signal if and only if the average power of the signal is finite and non-zero:

P is finite and non-zero for a power signal
0 < P < ∞    (1A.14)
We observe that if E, the energy of g(t), is finite, then its power P is zero, and if P is finite and non-zero, then E is infinite. It is obvious that a signal may be classified as one or the other, but not both. On the other hand, there are some signals, for example:
g(t) = e^(−at)    (1A.15)
that cannot be classified as either energy or power signals, because both E and
P are infinite.
It is interesting to note that periodic signals and most random signals are power
signals, and signals that are both deterministic and aperiodic are energy signals.
The continuous-time step function defined

       0,    t < 0
u(t) = 1/2,  t = 0    (1A.16)
       1,    t > 0

Graphically:

and graphed
[Figure 1A.8 – the step function u(t), of height 1]
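The three-part definition of Eq. (1A.16), including the half-value convention at t = 0, translates directly into code (a Python sketch; the function name is ours):

```python
def u(t):
    # Eq. (1A.16): 0 for t < 0, 1/2 at t = 0, 1 for t > 0.
    if t < 0:
        return 0.0
    return 0.5 if t == 0 else 1.0
```

A delayed argument, u(t − t0), then switches at t0 instead of the origin, as in Eq. (1A.17).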
            0,    t < t0
u(t − t0) = 1/2,  t = t0    (1A.17)
            1,    t > t0

Graphically, we have:

[Figure 1A.9 – the step function u(t − t0), switching at t = t0]
We see that the argument t − t0 simply shifts the origin of the original function. A further generalisation scales the argument:

                0,    (t − t0)/T < 0
u((t − t0)/T) = 1/2,  (t − t0)/T = 0    (1A.18)
                1,    (t − t0)/T > 0

In this case it is not meaningful to talk about the width of the step function, and the only purpose of the constant T is to allow the function to be reflected about the line t = t0, as shown below:
[Figure 1A.10 – top: u((t − t0)/T) for t0 and T positive; bottom: u((t − 2)/(−1)), a step reflected about the line t = 2]
Use Eq. (1A.18) to verify the bottom step function in Figure 1A.10.
The utility of the step function is that it can be used as a “switch” to turn another function on or off at some point. For example, the product u(t − 1)cos(2πt) is as shown below:

[Figure 1A.11 – the product u(t − 1)cos(2πt): a sinusoid switched on at t = 1]
One of the most useful functions in signal analysis is the rectangle function,
defined as:
The rectangle function defined

          0,    |t| > 1/2
rect(t) = 1/2,  |t| = 1/2    (1A.19)
          1,    |t| < 1/2

[Figure 1A.12 – the rectangle function rect(t): a pulse of height 1 between t = −1/2 and t = 1/2]
                   0,    |(t − t0)/T| > 1/2
rect((t − t0)/T) = 1/2,  |(t − t0)/T| = 1/2    (1A.20)
                   1,    |(t − t0)/T| < 1/2
[Figure 1A.13 – top: rect((t − t0)/T) for t0 positive, a pulse of height 1, width |T| and area |T| centred on t0; bottom: rect((t + 2)/(±3)), a pulse of width 3 and area 3 centred on t = −2]
Notice from the symmetry of the function that the sign of T has no effect on the
function’s orientation. However, the magnitude of T still acts as a scaling
factor.
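The rectangle function and its shifted, scaled form behave as follows in code (a Python sketch of Eqs. (1A.19) and (1A.20); the function names are ours):

```python
def rect(x):
    # Eq. (1A.19): 1 inside |x| < 1/2, 1/2 on the edges, 0 outside.
    ax = abs(x)
    return 1.0 if ax < 0.5 else (0.5 if ax == 0.5 else 0.0)

def pulse(t, t0, T):
    # Eq. (1A.20): a pulse of height 1 and width |T| centred on t0.
    return rect((t - t0) / T)
```

Evaluating pulse(t, 2, 4) and pulse(t, 2, −4) at any t gives the same value, confirming that the sign of T has no effect on the function’s orientation.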
[Figure 1A.14 – the straight line g(t) = t, with slope 1]
Now shift the straight line along the t-axis in the standard fashion, to make g(t) = t − t0:

[Figure 1A.15 – the shifted line g(t) = t − t0, with slope 1 and t-axis intercept t0]
To change the slope of the line, simply apply the usual scaling factor, to make:

g(t) = (t − t0)/T    (1A.21)

This is the equation of a straight line, with slope 1/T and t-axis intercept t0:

and graphed
[Figure 1A.16 – top: g(t) = (t − t0)/T, a line with slope 1/T crossing the t-axis at t0; bottom: g(t) = (t − 1)/(−3), a line with slope −1/3 crossing the t-axis at t = 1]
We can now use our knowledge of the straight line and rectangle function to
completely specify piece-wise linear signals.
Example

[Figure 1A.17 – a periodic sawtooth waveform g(t), with period 4]

The first period can be described as a straight line multiplied by a rectangle function:

g0(t) = (t/4) rect((t − 2)/4)    (1A.22)

Graphically, it is:

[Figure 1A.18 – the line t/4, the pulse rect((t − 2)/4), and their product g0(t): one period of the sawtooth]
The next period is just g0(t) shifted right (delayed in time) by 4. The next period is therefore described by:

g1(t) = g0(t − 4) = ((t − 4)/4) rect((t − 4 − 2)/4)    (1A.23)
It is now easy to see the pattern. In general we have:

gn(t) = g0(t − 4n) = ((t − 4n)/4) rect((t − 4n − 2)/4)    (1A.24)

where gn(t) is the nth period and n is any integer. Now all we have to do is add up all the periods to get the complete mathematical expression for the sawtooth:

g(t) = Σ (n = −∞ to ∞) ((t − 4n)/4) rect((t − 4n − 2)/4)    (1A.25)
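Eq. (1A.25) can be evaluated directly by truncating the sum to a finite range of n (a Python sketch; away from the jumps, only one term of the sum is non-zero for any given t):

```python
def rect(x):
    # Eq. (1A.19), with the half-value convention at the edges.
    return 1.0 if abs(x) < 0.5 else (0.5 if abs(x) == 0.5 else 0.0)

def sawtooth(t, n_range=range(-100, 101)):
    # Eq. (1A.25): the sum of shifted copies of one period
    # g0(t) = (t/4) rect((t - 2)/4), each delayed by 4n.
    return sum(((t - 4 * n) / 4) * rect((t - 4 * n - 2) / 4) for n in n_range)
```

For example, sawtooth(1.0) and sawtooth(5.0) both evaluate to 0.25, confirming the period of 4.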
Example

Consider the waveform:

g(t) = (t − 1)rect(t − 1.5) + rect(t − 2.5) + (−0.5t + 2.5)rect(0.5t − 2)    (1A.26)

which can be rewritten in the standard form:

g(t) = (t − 1)rect(t − 1.5) + rect(t − 2.5) − ((t − 5)/2) rect((t − 4)/2)    (1A.27)

From this, we can compose the waveform out of the three specified parts:

[Figure 1A.19 – the three parts (t − 1)rect(t − 1.5), rect(t − 2.5) and −((t − 5)/2)rect((t − 4)/2), and their sum g(t): a trapezoidal pulse that rises over 1 ≤ t ≤ 2, is flat over 2 ≤ t ≤ 3, and falls over 3 ≤ t ≤ 5]
The sinc function will show up quite often in studies of linear systems,
particularly in signal spectra, and it is interesting to note that there is a close
relationship between this function and the rectangle function. Its definition is:
The sinc function defined
sinc(t) = sin(πt) / (πt)    (1A.28)
[Figure 1A.20 – the sinc function sinc(t): a peak of 1 at t = 0, zeros at every non-zero integer, and decaying oscillatory lobes]
Features of the sinc function
The inclusion of π in the formula for the sinc function gives it certain “nice” properties. For example, the zeros of sinc(t) occur at non-zero integral values, its width between the first two zeros is two, and its area (including positive and negative lobes) is just one. Notice also that it appears that the sinc function is undefined for t = 0. In one sense it is, but in another sense we can define a function’s value at a singularity by approaching it from either side and averaging the limits.
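These features are easy to verify in code. A Python sketch of Eq. (1A.28), with the value at the t = 0 singularity filled in by its limit:

```python
import math

def sinc(t):
    # Eq. (1A.28): sinc(t) = sin(pi t) / (pi t), with sinc(0) = 1 taken
    # as the limiting value at the singularity.
    if t == 0:
        return 1.0
    return math.sin(math.pi * t) / (math.pi * t)
```

Evaluating sinc at the integers gives zero everywhere except sinc(0) = 1, the peak value.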
This is not unusual – we did it explicitly in the definition of the step function,
where there is obviously a singularity at the “step”. We overcame this by
calculating the limits of the function approaching zero from the positive and
negative sides. The limit is 0 (approaching from the negative side) and 1
(approaching from the positive side), and the average of these two is 1/2. We
then made explicit the use of this value for a zero argument.
The shifted, scaled version is:

sinc((t − t0)/T) = sin(π(t − t0)/T) / (π(t − t0)/T)    (1A.30)
Its zeros occur at t = t0 + nT (n a non-zero integer), its height is 1, its width between the first two zeros is 2|T| and its area is |T|:
[Figure 1A.21 – top: the general sinc((t − t0)/T); bottom: sinc((t − 5)/2), centred on t = 5 with first zeros at t = 3 and t = 7]
An informal definition of the impulse function
The impulse function is often described as having an infinite height and zero width such that its area is equal to unity, but such a description is not particularly satisfying.
A more formal definition of the impulse function
δ(t) = lim (T→0) (1/T) rect(t/T)    (1A.31)

As T gets smaller and smaller, the members of the sequence become taller and narrower, but their area remains constant, as shown below:
[Figure 1A.22 – the pulses (1/2)rect(t/2), rect(t) and 2rect(2t): each has unit area, and they become taller and narrower as T decreases]
[Figure 1A.23 – an RC lowpass circuit: input vi, output vo taken across the capacitor C]
As the duration of a rectangular input pulse gets smaller and smaller, the output of a linear system approaches the “impulse response”
If T is much larger than RC, as in the top diagram of Figure 1A.24, the capacitor will be almost completely charged to the voltage 1/T before the pulse ends, at which time it will begin to discharge back to zero.

If we shorten the pulse so that T is comparable to RC, the capacitor will not have a chance to become fully charged before the pulse ends. Thus, the output voltage behaves as in the middle diagram of Figure 1A.24, and it can be seen that there is a considerable difference between this output and the preceding one.

If we now make T still shorter, as in the bottom diagram of Figure 1A.24, we note very little change in the shape of the output. In fact, as we continue to make T shorter and shorter, the only noticeable change is in the time it takes the output to reach a maximum, and this time is just equal to T.

If this interval is too short to be resolved by our measuring device, the input is effectively behaving as a delta function, and decreasing its duration further will have no observable effect on the output, which now closely resembles the impulse response of the circuit.
Graphical derivation of the impulse response of an RC circuit by decreasing a pulse’s width while maintaining its area
[Figure 1A.24 – three unit-area input pulses vi(t) of height 1/T, with T >> RC (top), T comparable to RC (middle) and T << RC (bottom), and the corresponding outputs vo(t): each output rises as (1/T)(1 − e^(−t/RC)) while the pulse is on, then decays back toward zero; as T shrinks, the output approaches the impulse response of the circuit]
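The convergence just described can be checked numerically using the closed-form charge/discharge expressions for this circuit (a Python sketch; the function names are ours):

```python
import math

def pulse_response(t, T, RC=1.0):
    # Output of the RC circuit for the unit-area pulse of height 1/T:
    # charges toward 1/T while the pulse is on, then discharges.
    if t < 0:
        return 0.0
    if t <= T:
        return (1.0 / T) * (1.0 - math.exp(-t / RC))
    peak = (1.0 / T) * (1.0 - math.exp(-T / RC))
    return peak * math.exp(-(t - T) / RC)

def impulse_response(t, RC=1.0):
    # h(t) = (1/RC) e^(-t/RC) for t >= 0
    return math.exp(-t / RC) / RC if t >= 0 else 0.0

# As T shrinks, the pulse response converges to the impulse response:
errs = [abs(pulse_response(2.0, T) - impulse_response(2.0))
        for T in (1.0, 0.1, 0.01)]
```

The list of errors shrinks steadily as T decreases, exactly as the three diagrams of Figure 1A.24 suggest.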
With the previous discussion in mind, we shall use the following properties as the definition of the delta function: given the real constant t0 and the arbitrary complex-valued function f(t), which is continuous at the point t = t0, then:

behaviour upon integration
∫[t1, t2] f(t) δ(t − t0) dt = f(t0),   t1 < t0 < t2    (1A.32b)

“Sifting” property of the impulse function defined
Eq. (1A.32b) is often called the sifting property of the delta function, because it “sifts” out a single value of the function f(t).
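The sifting property can be tested numerically by replacing the delta with the narrow unit-area pulse of Eq. (1A.31) and integrating (a Python sketch; the function name is ours):

```python
import math

def sift(f, t0, w=1e-4, n=2001):
    # Replace delta(t - t0) by the unit-area pulse (1/w) rect((t - t0)/w)
    # and integrate f(t) against it by the midpoint rule; as w -> 0 the
    # result approaches f(t0), which is Eq. (1A.32b).
    dt = w / n
    total = 0.0
    for k in range(n):
        t = t0 - w / 2 + (k + 0.5) * dt
        total += f(t) * (1.0 / w) * dt
    return total

val = sift(math.cos, 0.5)   # close to cos(0.5)
```

Narrowing w further drives the result ever closer to f(t0), which is the sifting property in action.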
Graphically, we represent the delta function as a spike located at the point t = t0, but observe that the height of the spike corresponds to the area of the delta function. Such a representation is shown below:
Graphical representation of an impulse function
[Figure 1A.25 – top: an impulse δ(t − t0) of area 1 located at t0 > 0; bottom: the impulses 2δ(t + 2), of area 2, and δ(t − 1), of area 1]
Sinusoids

Sinusoids can be described in the frequency-domain
Here we start deviating from the previous descriptions of time-domain signals. All the signals described so far are aperiodic. With periodic signals (such as sinusoids), we can start to introduce the concept of “frequency content”, i.e. a shift from describing signals in the time-domain to one that describes them in the frequency-domain. This “new” way of describing signals will be fully exploited later when we look at Fourier series and Fourier transforms.
Why Sinusoids?
The sinusoid with which we are so familiar today appeared first as the solution
to differential equations that 17th century mathematicians had derived to
describe the behaviour of various vibrating systems. They were no doubt
surprised that the functions that had been used for centuries in trigonometry
appeared in this new context. Mathematicians who made important contributions included Huygens (1629-1695), Bernoulli (1710-1790), Taylor (1685-1731), Riccati (1676-1754), Euler (1707-1783), D’Alembert (1717-1783) and Lagrange (1736-1813).
Perhaps the greatest contribution was that of Euler (who invented the symbols π, e and i = √−1, which we call j). He first identified the fact that is highly significant for us:

The special relationship enjoyed by sinusoids and linear systems
For systems described by linear differential equations, a sinusoidal input yields a sinusoidal output.    (1A.33)

The output sinusoid has the same frequency as the input; it is, however, altered in amplitude and phase.
We are so familiar with this fact that we sometimes overlook its significance.
Only sinusoids have this property with respect to linear systems. For example,
applying a square wave input does not produce a square wave output.
It was Euler who recognised that an arbitrary sinusoid could be resolved into a pair of complex exponentials:

A sinusoid can be resolved into a pair of complex exponentials
A cos(ωt + φ) = (A/2)e^(jφ)e^(jωt) + (A/2)e^(−jφ)e^(−jωt)    (1A.34)

We know this already from circuit analysis, and the whole topic of system design concerns the manipulation of H.
The second major reason for the importance of the sinusoid can be attributed to Fourier, who in 1807 recognised that:

Periodic signals are made up of sinusoids – Fourier Series
Any periodic function can be represented as the weighted sum of a family of sinusoids.    (1A.35)
The way now lay open for the analysis of the behaviour of a linear system for
any periodic input by determining the response to the individual sinusoidal (or
exponential) components and adding them up (superposition).
This technique is called frequency analysis. Its first application was probably
that of Newton passing white light through a prism and discovering that red
light was unchanged after passage through a second prism.
Representation of Sinusoids

Consider a general sinusoid of amplitude A and period T:

x(t) = A cos(2πt/T)    (1A.36)

[Figure 1A.26 – the sinusoid x(t) = A cos(2πt/T)]

Delaying the sinusoid by t0 shifts it to the right:

[Figure 1A.27 – the same sinusoid delayed by t0]
Mathematically, we have:

x(t) = A cos(2π(t − t0)/T)
     = A cos(2πt/T − 2πt0/T)    (1A.37)

where φ = −2πt0/T is the phase of the sinusoid. Note the negative sign in the definition of phase, which means a delayed sinusoid has negative phase.
[Figure 1A.28 – the sinusoid delayed by t0 = T/4, i.e. by a quarter of a period]
For the particular delay t0 = T/4 we have φ = −π/2, and:

x(t) = A cos(2πt/T − π/2)
     = A sin(2πt/T)    (1A.38)

In general:

x(t) = A sin(ωt) = A cos(ωt − π/2)    (1A.39)

We shall use the cosine form. If a sinusoid is expressed in the sine form, then we need to subtract 90° from the phase angle to get the cosine form.
Phase refers to the angle of a cosinusoid at t = 0
When the phase of a sinusoid is referred to, it is the phase angle in the cosine form (in these lecture notes).
Example

[Figure 1A.29 – the sinusoid x(t) = 2cos(3t + 15°)]
A quick definition of orthogonal
The cos(ωt) and −sin(ωt) terms are said to be orthogonal. For our purposes, orthogonal may be understood as the property that two vectors or functions mutually have when one cannot be expressed as a sum containing the other. We will look at basis functions and orthogonality more formally later on.
Two real numbers completely specify a sinusoid of a given frequency, which we store as a complex number called a phasor
Suppose the convention is adopted that: the real part of a complex number is the cos(ωt) amplitude, and the imaginary part is the −sin(ωt) amplitude, of the resolution described by Eq. (1A.41). Suppose we call the complex number created in this way the phasor associated with the sinusoid.
Time-domain and frequency-domain symbols
x(t) – time-domain expression
X̄ – phasor associated with x(t)

The reason for the bar over X̄ will become apparent shortly.
Example

With the previous example, we had x(t) = 2cos(3t + 15°). Therefore the phasor associated with it, using our new convention, is X̄ = 1.93 + j0.52 = 2∠15°.
A phasor can be “read off” a time-domain expression
We can see that the magnitude of the complex number is the amplitude of the sinusoid, and the angle of the complex number is the phase of the sinusoid. In general, the correspondence between a sinusoid and its phasor is:

The phasor that corresponds to an arbitrary sinusoid
x(t) = A cos(ωt + φ)  ↔  X̄ = Ae^(jφ)    (1A.43)
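The sinusoid-to-phasor correspondence of Eq. (1A.43) in code (a Python sketch using the built-in cmath module; the function names are ours):

```python
import cmath, math

def phasor(A, phi_deg):
    # Eq. (1A.43): x(t) = A cos(wt + phi)  <->  X = A e^(j phi)
    return A * cmath.exp(1j * math.radians(phi_deg))

def x_of_t(X, w, t):
    # Back to the time domain: x(t) = Re{X e^(jwt)}
    return (X * cmath.exp(1j * w * t)).real

X = phasor(2.0, 15.0)   # the phasor of 2cos(3t + 15 deg): about 1.93 + j0.52
```

abs(X) recovers the amplitude 2, and cmath.phase(X) the phase of 15°, directly from the complex number.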
We can see that the sinusoid A cos(ωt + φ), represented by the phasor X̄ = Ae^(jφ), is equal to Re{X̄e^(jωt)}. Therefore:
Graphical interpretation of rotating phasor / time-domain relationship
[Figure 1A.30 – the complex plane, with the rotating phasor X̄e^(jωt) = Ae^(jφ)e^(jωt), of length A; its projection onto the real axis traces out x(t)]
Euler’s expansion:

e^(jθ) = cos θ + j sin θ

[Figure 1A.31 – e^(jθ) in the complex plane, with real part cos θ and imaginary part j sin θ]
and:

e^(−jθ) = cos θ − j sin θ

[Figure 1A.32 – e^(−jθ) in the complex plane, with real part cos θ and imaginary part −j sin θ]
Adding the two relationships gives:

cos θ = (e^(jθ) + e^(−jθ)) / 2

[Figure 1A.33 – cos θ formed as half the sum of e^(jθ) and e^(−jθ)]

and:

sin θ = (e^(jθ) − e^(−jθ)) / j2

[Figure 1A.34 – j sin θ formed as the sum of e^(jθ) and −e^(−jθ)]
A better phasor / time-domain relationship
A cos(ωt + φ) = (A/2)e^(jφ)e^(jωt) + (A/2)e^(−jφ)e^(−jωt)
             = X̄e^(jωt) + X̄*e^(−jωt)    (1A.52)

where now X̄ = (A/2)e^(jφ).
The two terms in the summation represent two counter-rotating phasors, with angular velocities +ω and −ω, in the complex plane, as shown below:

Graphical interpretation of counter-rotating phasors / time-domain relationship
[Figure 1A.35 – the counter-rotating phasors X̄e^(jωt), of length A/2, and X̄*e^(−jωt) in the complex plane; their sum x(t) always lies on the real axis]
[Figure 1A.36]

A sinusoid can be generated by adding up a forward rotating complex number and its backward rotating complex conjugate
[Figure 1A.37 – complex plane and time-domain views: the phasor X̄ and its conjugate X̄* rotate in opposite directions, and their sum traces out the real sinusoid]
Negative Frequency

Negative frequency just means the phasor is going clockwise
The phasor X̄*e^(−jωt) rotates with a speed ω, but in the clockwise direction. Therefore, we consider this counter-rotating phasor to have a negative frequency. The concept of negative frequency will be very useful when we start to manipulate signals in the frequency-domain.
The link between maths and graphs should be well understood
You should become very familiar with all of these signal types, and you should feel comfortable representing a sinusoid as a complex exponential. Being able to manipulate signals mathematically, while at the same time imagining what is happening to them graphically, is the key to readily understanding signals and systems.
Common Discrete-Time Signals

Example

[Figure 1A.38 – a discrete-time signal g[n], with samples taking values between −1 and 1]
The discrete-time step function defined

        0,  n < 0
u[n] =  1,  n ≥ 0    (1A.55)

Graphically:

and graphed
[Figure 1A.39 – the discrete-time step function u[n]]
Note that this is not a sampled version of the continuous-time step function, which has a discontinuity at t = 0. We define the discrete-time step to have a value of 1 at n = 0 (instead of having a value of 1/2 if it were obtained by sampling the continuous-time signal).
Example

[A discrete-time signal g[n], with samples taking values up to 5]
The unit-pulse defined

        0,  n ≠ 0
δ[n] =  1,  n = 0    (1A.56)

Graphically:

and graphed
[Figure 1A.40 – the unit-pulse function δ[n]]
Example

[A discrete-time signal g[n], with both positive and negative samples over −4 ≤ n ≤ 6]
With no obvious formula, we can express the signal as the sum of a series of delayed and weighted unit-pulses. Working from left to right, we get:

g[n] = δ[n + 2] + 2δ[n + 1] + 3δ[n] − 2δ[n − 2] − δ[n − 3] + 4δ[n − 4]
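The same decomposition in code (a Python sketch; the sample values below follow our reading of the figure and are illustrative only):

```python
def delta(n):
    # Eq. (1A.56): the unit pulse is 1 at n = 0 and zero elsewhere.
    return 1 if n == 0 else 0

# Sample values read off the sketch (illustrative):
weights = {-2: 1, -1: 2, 0: 3, 2: -2, 3: -1, 4: 4}

def g(n):
    # The signal as a sum of delayed, weighted unit pulses.
    return sum(a * delta(n - k) for k, a in weights.items())
```

Evaluating g at each n simply picks out the corresponding weight, and g is zero wherever no pulse is placed.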
Summary
Signals can be characterized with many attributes – continuous or discrete,
periodic or aperiodic, energy or power, deterministic or random. Each
characterization tells us something useful about the signal.
Sinusoids are special signals. They are the only real signals that retain their
shape when passing through a linear time-invariant system. They are often
represented as the sum of two complex conjugate, counter-rotating phasors.
Quiz

Encircle the correct answer, cross out the wrong answers. [one or none correct]

1.
The signal cos(10t + 3) + sin(11t + 2) has:

3.
The sinusoid 5 sin(314t + 80°) can be represented by the phasor X̄:

4.
[A graph of a signal over −12 ≤ t ≤ 12, with values between −4 and 4]
The energy of the signal shown is:

Answers: 1. b  2. x  3. c  4. x  5. c
Exercises

1.
For each of the periodic signals shown, determine their time-domain expressions, their periods and average power:

(a) [graph of g(t), height 1, shown over −2 ≤ t ≤ 2]
(b) [graph of g(t), height 2, shown over −2 ≤ t ≤ 4]
(c) [graph of g(t), built from segments of e^(−t/10), shown over −20 ≤ t ≤ 40]
(d) [graph of g(t) = 2cos(200πt), shown over −3 ≤ t ≤ 5]
2.
For the signals shown, determine their time-domain expressions and calculate their energy:

(a) [graph of g(t), with values between −5 and 10, shown over −2 ≤ t ≤ 2]
(b) [graph of g(t), a cosine lobe of height 1 over −π/2 ≤ t ≤ π/2]
(c) [graph of g(t), two identical cycles of height 2 over 0 ≤ t ≤ 4]
(d) [graph of g(t), height 3, shown over 0 ≤ t ≤ 5]
3.
Using the sifting property of the impulse function, evaluate the following integrals:

(i) ∫[−∞, ∞] f(t) δ(t + 2) dt
(ii) ∫[−∞, ∞] δ(t − 3) e^(−t) dt
(iii) ∫[−∞, ∞] (1 − t) δ(t − 4) dt
(iv) ∫[−∞, ∞] δ(t − 2) sin t dt
(v) ∫[−∞, ∞] e^(jωt) δ(t − 2) dt
(vi) ∫[−∞, ∞] f(t) δ(t − t0) dt
4.
Show that:

δ((t − t0)/T) = |T| δ(t − t0)

Hint: Show that:

∫[−∞, ∞] f(t) δ((t − t0)/T) dt = |T| f(t0)

This is a case where common sense does not prevail. At first glance it might appear that δ((t − t0)/T) = δ(t − t0), because “you obviously can’t scale a function of zero width”.
5.
Complete the following table:

(e) X̄ = 27 + j11;  x(t) = ?
(i) X̄ = 3∠30°, ω = 1;  x(2) = ?
Leonhard Euler (1707-1783)

The work of Euler built upon that of Newton and made mathematics the tool of analysis. Astronomy, the geometry of surfaces, optics, electricity and magnetism, artillery and ballistics, and hydrostatics are only some of Euler’s fields. He put Newton’s laws, calculus, trigonometry, and algebra into a recognizably modern form.
Among the symbols that Euler initiated is the sigma (Σ) for summation.
Euler considered the motion of a point mass both in a vacuum and in a resisting
medium. He analysed the motion of a point mass under a central force and also
considered the motion of a point mass on a surface. In this latter topic he had to
solve various problems of differential geometry and geodesics.
He set up the main formulas for the topic of fluid mechanics, the continuity
equation, the Laplace velocity potential equation, and the Euler equations for
the motion of an inviscid incompressible fluid.
Euler did not stop working in old age, despite his eyesight failing. He
eventually went blind and employed his sons to help him write down long
equations which he was able to keep in memory. Euler died of a stroke after a
day spent: giving a mathematics lesson to one of his grandchildren; doing some
calculations on the motion of balloons; and discussing the calculation of the
orbit of the planet Uranus, recently discovered by William Herschel.
His last words, while playing with one of his grandchildren, were: “I die.”
Lecture 1B – Systems
Differential equations. System modelling. Discrete-time signals and systems.
Difference equations. Discrete-time block diagrams. Discretization in time of
differential equations. Convolution in LTI discrete-time systems. Convolution
in LTI continuous-time systems. Graphical description of convolution.
Properties of convolution. Numerical convolution.
Linear differential equation
y^(N)(t) + Σ (i = 0 to N−1) a_i y^(i)(t) = Σ (i = 0 to M) b_i x^(i)(t)    (1B.1)

where:

y^(N)(t) = d^N y(t) / dt^N    (1B.2)
Initial Conditions

y(0⁻), y^(1)(0⁻), …, y^(N−1)(0⁻)    (1B.3)

We take 0⁻ as the time for initial conditions to take into account the possibility of an impulse being applied at t = 0, which will change the output instantaneously.
For the first order case we can express the solution to Eq. (1B.1) in a useful
(and familiar) form. A first order system is given by:
dy t
ay t bxt
First-order linear
differential equation (1B.4)
dt
dy t
e at ay t e at bxt (1B.5)
dt
Thus:
d at
dt
e y t e at bxt (1B.6)
e at y t y 0 e a bx d , t 0
t
0 (1B.7)
0
(1B.8)
differential equation
Use this to solve the simple revision problem for the case of the unit step.
The two parts of the response given in Eq. (1B.8) have the obvious names zero-
input response (ZIR) and zero-state response (ZSR). It will be shown later that
the ZSR is given by a convolution between the system’s impulse response and
the input signal.
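As a numerical check on Eq. (1B.8), the sketch below evaluates the zero-state response to a unit step by brute-force integration and compares it with the closed form $y(t) = (b/a)(1 - e^{-at})$. The parameter values $a = 2$, $b = 3$ are arbitrary illustrative choices, not from the notes.

```python
import math

def zsr_step(a, b, t, n=100000):
    """Zero-state response y(t) = integral from 0 to t of e^{-a(t-tau)} b dtau
    for a unit-step input, evaluated by trapezoidal integration."""
    if t <= 0:
        return 0.0
    dt = t / n
    total = 0.0
    for k in range(n + 1):
        tau = k * dt
        w = 0.5 if k in (0, n) else 1.0   # trapezoid end-point weights
        total += w * math.exp(-a * (t - tau)) * b
    return total * dt

a, b, t = 2.0, 3.0, 1.5
approx = zsr_step(a, b, t)
exact = (b / a) * (1 - math.exp(-a * t))   # closed form for a step input
print(approx, exact)
```

The two values agree to several decimal places, as the convolution form of the ZSR predicts.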
System Modelling
In modelling a system, we are nearly always after the input/output relationship,
which is a differential equation in the case of continuous-time systems. If we’re
clever, we can break a system down into a connection of simple components,
each having a relationship between cause and effect.
Electrical Circuits
The three basic linear, time-invariant relationships for the resistor, capacitor
and inductor are respectively:
$$v(t) = Ri(t) \qquad (1B.9a)$$

$$i(t) = C\frac{dv(t)}{dt} \qquad (1B.9b)$$

$$v(t) = L\frac{di(t)}{dt} \qquad (1B.9c)$$

These are the cause / effect relationships for electrical systems.
Mechanical Systems
The cause / effect relationships for mechanical translational systems are:

$$F(t) = M\frac{d^2x(t)}{dt^2} \qquad (1B.10a)$$

$$F(t) = k_d\frac{dx(t)}{dt} \qquad (1B.10b)$$

$$F(t) = k_s x(t) \qquad (1B.10c)$$
For rotational motion, the relationships for the inertia torque, damping torque
and spring torque are:
$$T(t) = I\frac{d^2\theta(t)}{dt^2} \qquad (1B.11a)$$

$$T(t) = k_d\frac{d\theta(t)}{dt} \qquad (1B.11b)$$

$$T(t) = k_s\theta(t) \qquad (1B.11c)$$

These are the cause / effect relationships for mechanical rotational systems.
Discrete-time Systems
A discrete-time signal is one that takes on values only at discrete instants of
time. Discrete-time signals arise naturally in studies of economic systems –
amortization (paying off a loan), models of the national income (monthly,
quarterly or yearly), models of the inventory cycle in a factory, etc. They arise
in science, e.g. in studies of population, chemical reactions, and the deflection of a weighted beam. They arise all the time in electrical engineering because of digital control, e.g. radar tracking systems, the processing of electrocardiograms, and digital communication (CD, mobile phone, Internet). Their importance is probably now reaching that of continuous-time systems in terms of analysis and design – specifically because today signals are processed digitally, and hence they are a special case of discrete-time signals.
A linear time-invariant (LTI) difference equation:

$$y[n] + \sum_{i=1}^{N} a_i y[n-i] = \sum_{i=0}^{M} b_i x[n-i] \qquad (1B.12)$$
Solution by Recursion
There is a MATLAB® function available for download from the Signals and
Systems web site called recur that solves the above equation.
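A minimal Python sketch of the same idea as `recur` (the function name, argument layout and first-order example here are illustrative assumptions, not the web site's actual interface) solves Eq. (1B.12) sample by sample:

```python
def recur(a, b, x, y_init):
    """Solve y[n] + sum_i a[i-1]*y[n-i] = sum_i b[i]*x[n-i] by recursion.
    a: [a1..aN], b: [b0..bM], x: input samples, y_init: [y[-N]..y[-1]]."""
    N, M = len(a), len(b) - 1
    y = list(y_init)                      # holds initial conditions, then outputs
    for n in range(len(x)):
        acc = 0.0
        for i in range(1, N + 1):
            acc -= a[i - 1] * y[len(y) - i]     # past outputs y[n-i]
        for i in range(M + 1):
            if n - i >= 0:
                acc += b[i] * x[n - i]          # present and past inputs
        y.append(acc)
    return y[N:]                           # drop the initial conditions

# First-order example: y[n] - 0.5 y[n-1] = x[n] driven by a unit pulse
print(recur([-0.5], [1.0], [1, 0, 0, 0], [0.0]))  # → [1.0, 0.5, 0.25, 0.125]
```

The unit-pulse input reproduces the geometric unit-pulse response expected of a first-order system.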
Complete Solution
First-Order Case
[Figure 1B.1 – a discrete-time gain element: input $x[n]$, gain $A$, output $y[n] = Ax[n]$]

[Figure 1B.2 – a discrete-time unit-delay element: input $x[n]$, delay $D$, output $y[n] = x[n-1]$]
Example
Using these two elements and an adder, we can construct a representation of
the discrete-time system given by $y[n] = ay[n-1] + bx[n]$. The system is shown below:

[Figure – block diagram: $x[n]$ passes through gain $b$ into an adder; the adder output $y[n]$ is delayed by $D$ and scaled by $a$, and $ay[n-1]$ is fed back to the adder]
First-Order Case
Approximating a derivative with a difference:

$$\left.\frac{dy(t)}{dt}\right|_{t=nT} \approx \frac{y(nT+T) - y(nT)}{T} \qquad (1B.16)$$
Show that $y[n]$ gives approximate values of the solution $y(t)$ at the times $t = nT$, with arbitrary initial condition $y[-1]$, for the special case of zero input.
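A quick sketch of the zero-input case: applying Eq. (1B.16) to $dy/dt + ay = 0$ gives the recursion $y[n+1] = (1 - aT)\,y[n]$, which should track $e^{-at}$ for small $T$. The values below ($a = 1$, $T = 0.01$) are illustrative choices.

```python
import math

def discretize_zero_input(a, T, y0, steps):
    """Euler discretization of dy/dt + a*y = 0 via Eq. (1B.16):
    (y[n+1] - y[n])/T + a*y[n] = 0  =>  y[n+1] = (1 - a*T)*y[n]."""
    y = [y0]
    for _ in range(steps):
        y.append((1 - a * T) * y[-1])
    return y

a, T, y0 = 1.0, 0.01, 1.0
y = discretize_zero_input(a, T, y0, 100)       # reaches t = 1 s
exact = math.exp(-a * 1.0)                     # y(1) for dy/dt + y = 0, y(0) = 1
print(y[-1], exact)                            # close for small T
```

Halving $T$ (and doubling the number of steps) roughly halves the error, as expected of a first-order approximation.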
Second-order Case
$$\frac{d^2y(t)}{dt^2} + a_1\frac{dy(t)}{dt} + a_0 y(t) = b_1\frac{dx(t)}{dt} + b_0 x(t) \qquad (1B.19)$$

This can be discretized in the same way to yield a second-order difference equation.
First-Order System
The complete response:

$$y[n] = a^{n+1}y[-1] + \sum_{i=0}^{n} a^{n-i}\,bx[i] \qquad (1B.22)$$

The zero-state response (ZSR) – a convolution summation:

$$y[n] = \sum_{i=0}^{n} a^{n-i}\,bx[i] \qquad (1B.23)$$

In contrast to Eq. (1B.21), we can see that Eq. (1B.23) depends exclusively on present and past values of the input signal. One advantage of this is that we may directly observe how each past input affects the present output signal. For example, an input $x[i]$ contributes an amount $a^{n-i}bx[i]$ to the output at time $n$.
For the first-order system of Eq. (1B.21), if we let $x[n] = \delta[n]$, then the output of the system to a unit-pulse input can be expressed using Eq. (1B.23) as:

$$y[n] = \sum_{i=0}^{n} a^{n-i}\,b\,\delta[i] \qquad (1B.24)$$

$$y[n] = a^{n}b \qquad (1B.25)$$

The first-order discrete-time system's unit-pulse response is therefore:

$$h[n] = a^{n}b\,u[n] \qquad (1B.26)$$
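The closed form $h[n] = a^n b\,u[n]$ can be confirmed by simply running the first-order recursion with a unit-pulse input (the values $a = 0.5$, $b = 2$ are arbitrary):

```python
def unit_pulse_response(a, b, n_max):
    """Run y[n] = a*y[n-1] + b*x[n] with x = unit pulse and zero
    initial conditions; return y[0..n_max]."""
    y, y_prev = [], 0.0
    for n in range(n_max + 1):
        x = 1.0 if n == 0 else 0.0
        y_prev = a * y_prev + b * x
        y.append(y_prev)
    return y

a, b = 0.5, 2.0
h = unit_pulse_response(a, b, 5)
closed = [a**n * b for n in range(6)]      # h[n] = a^n b
print(h)        # → [2.0, 1.0, 0.5, 0.25, 0.125, 0.0625]
print(closed)
```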
General System
Any input can be written as a weighted sum of shifted unit pulses:

$$x[n] = \sum_{i=0}^{\infty} x[i]\,\delta[n-i] \qquad (1B.27)$$

and since the system is LTI, the response $y_i[n]$ to $x[i]\,\delta[n-i]$ is given by:

$$y_i[n] = x[i]\,h[n-i] \qquad (1B.28)$$

The response to the sum Eq. (1B.27) must be equal to the sum of the $y_i[n]$. This gives the convolution summation defined for a discrete-time system:

$$y[n] = \sum_{i=0}^{\infty} x[i]\,h[n-i] \qquad (1B.29)$$
[Figure 1B.3 – graphical notation for a discrete-time system using the unit-pulse response: input $x[n]$, block $h[n]$, output $y[n]$]
It should be pointed out that the convolution representation is not very efficient
in terms of a digital implementation of the output of a system (needs lots more
memory and calculating time) compared with the difference equation.
$$y[n] = \sum_{i=0}^{n} h[i]\,x[n-i] \qquad (1B.31)$$
Figure 1B.4
In other words, the output at time n is equal to a linear combination of past and
present values of the input signal, x. The system can be considered to have a
memory because at any particular time, the output is still responding to an
input at a previous time.
Figure 1B.5
System Memory
System memory depends on the unit-pulse response…

[Figure 1B.6 – two unit-pulse responses: $h_1[n]$, significant only over the first five or six samples, and $h_2[n]$, still significant at 20 samples and beyond]
System 1 depends strongly on inputs applied five or six iterations ago and less
so on inputs applied more than six iterations ago. The output of system 2
depends strongly on inputs 20 or more iterations ago. System 1 is said to have a
shorter memory than system 2.
BIBO stability is defined as follows:

A system is stable if its output signal remains bounded in response to any bounded input signal. (1B.33)

If a bounded input (BI) produces a bounded output (BO), then the system is termed BIBO stable. For an LTI discrete-time system this implies that the unit-pulse response is absolutely summable:

$$\sum_{n=0}^{\infty}\left|h[n]\right| < \infty$$
This is something not readily apparent from the difference equation. A more
thorough treatment of system stability will be given later.
What can you say about the stability of the system described by Eq. (1B.21)?
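For the first-order system $y[n] = ay[n-1] + bx[n]$, the unit-pulse response is $a^n b$, so stability hinges on $|a| < 1$. A quick numerical illustration (the gains 0.5 and 1.5 are arbitrary choices):

```python
def response_to_step(a, b, n_max):
    """y[n] = a*y[n-1] + b*x[n] driven by a (bounded) unit-step input."""
    y, prev = [], 0.0
    for n in range(n_max + 1):
        prev = a * prev + b * 1.0
        y.append(prev)
    return y

stable = response_to_step(0.5, 1.0, 50)    # |a| < 1: settles to b/(1-a) = 2
unstable = response_to_step(1.5, 1.0, 50)  # |a| > 1: grows without bound
print(stable[-1], unstable[-1])
```

The bounded step drives one system to a bounded steady state and the other to an astronomically large value, consistent with the absolute-summability condition.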
Recall that we can consider the impulse as the limit of a rectangle function:
Start with a rectangle input:

$$x_r(t) = \frac{1}{T}\,\mathrm{rect}\!\left(\frac{t}{T}\right) \qquad (1B.35)$$

As the input approaches an impulse function:

$$\lim_{T \to 0} x_r(t) = \delta(t) \qquad (1B.37)$$

then the output approaches the impulse response:

$$\lim_{T \to 0} y_r(t) = h(t) \qquad (1B.38)$$
[Figure 1B.7 – a staircase approximation $\tilde{x}(t,T)$ to an arbitrary waveform $x(t)$, made of rectangles of width $T$ centred at $t = iT$, shown over $-4T \le t \le 6T$]
As the rectangles get smaller and smaller, the staircase eventually approaches the original waveform, and we have:

$$x(t) = \lim_{T \to 0} \tilde{x}(t,T) \qquad (1B.39)$$

where:

$$\tilde{x}(t,T) = \sum_{i=-\infty}^{\infty} x(iT)\,\mathrm{rect}\!\left(\frac{t-iT}{T}\right) \qquad (1B.40)$$

Since we already know the output due to a single rectangle, the output due to the staircase is:

$$\tilde{y}(t,T) = \sum_{i=-\infty}^{\infty} x(iT)\,T\,y_r(t-iT) \qquad (1B.42)$$
because superposition holds for linear systems. In the limit $T \to 0$, the system response to $x(t)$ is given by the convolution integral for continuous-time systems:

$$y(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau \qquad (1B.44)$$

If the input starts at time $t = 0$:

$$y(t) = \int_{0^-}^{\infty} x(\tau)\,h(t-\tau)\,d\tau \qquad (1B.45)$$
If the system is causal, then $h(t-\tau) = 0$ for negative arguments, i.e. when $\tau > t$. The upper limit in the integration can then be changed so that:

$$y(t) = \int_{0^-}^{t} x(\tau)\,h(t-\tau)\,d\tau \qquad (1B.46)$$

Once again, it can be shown that convolution is commutative, which means that it is also true to write (compare with Eq. (1B.31)):

$$y(t) = \int_{0^-}^{t} h(\tau)\,x(t-\tau)\,d\tau \qquad (1B.47)$$
[Figure 1B.8 – graphical notation for a continuous-time system using the impulse response: input $x(t)$, block $h(t)$, output $y(t) = h(t) * x(t)$]
It should be pointed out, once again, that the convolution relationship is only valid when there is no initial energy stored in the system, i.e. the initial conditions are zero. The output response using convolution is just the ZSR.
[Figure 1B.9 – a graphical convolution example: impulse response $h(t) = e^{-t}u(t)$ and unit-step input $x(t) = u(t)$]
Using graphical convolution, the output $y(t)$ can be obtained. First, the input signal is flipped in time about the origin to form $x(-\tau)$. Then, as the time "parameter" $t$ advances, the flipped input $x(t-\tau)$ slides to the right past $h(\tau)$, and at each instant the output is the area under the product $h(\tau)\,x(t-\tau)$.

[Figure 1B.10 – "snapshot" at $t = 0$: the flipped input $x(0-\tau)$ has no overlap with $h(\tau) = e^{-\tau}$, so the area under the product curve is $y(0) = 0$]
Letting time "roll on" a bit further, we take a snapshot of the situation when $t = 1$. This is shown below:

[Figure 1B.11 – "snapshot" at $t = 1$: $h(\tau) = e^{-\tau}$, the flipped input $x(1-\tau)$, and their product $h(\tau)\,x(1-\tau)$; the area under the product curve is $y(1)$]

$$y(1) = \int_{0}^{1} e^{-\tau}\,d\tau = \left[-e^{-\tau}\right]_{0}^{1} = 1 - e^{-1} \approx 0.63 \qquad (1B.49)$$
Similarly, a snapshot at $t = 2$:

[Figure 1B.12 – "snapshot" at $t = 2$: $h(\tau)$, the flipped input $x(2-\tau)$, and their product; the area under the product curve is $y(2)$]

$$y(2) = \int_{0}^{2} e^{-\tau}\,d\tau = \left[-e^{-\tau}\right]_{0}^{2} = 1 - e^{-2} \approx 0.86 \qquad (1B.50)$$
[Figure 1B.13 – the complete output $y(t) = 1 - e^{-t}$, $t \ge 0$, passing through $y(0) = 0$, $y(1) = 0.63$ and $y(2) = 0.86$]
In this simple case, it is easy to verify the graphical solution using Eq. (1B.47). The output value at any time $t$ is given by:

$$y(t) = \int_{0}^{t} e^{-\tau}\,d\tau = \left[-e^{-\tau}\right]_{0}^{t} = 1 - e^{-t} \qquad (1B.51)$$
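The snapshot values above are easy to check numerically. This sketch approximates Eq. (1B.47) with a Riemann sum for $h(t) = e^{-t}u(t)$ and a unit-step input:

```python
import math

def conv_step(t, dtau=1e-4):
    """Approximate y(t) = integral 0..t of h(tau)*x(t - tau) dtau,
    with h(tau) = e^{-tau} and x = unit step (so the integrand is e^{-tau})."""
    n = int(t / dtau)
    return sum(math.exp(-k * dtau) for k in range(n)) * dtau

print(round(conv_step(1.0), 2))  # → 0.63
print(round(conv_step(2.0), 2))  # → 0.86
```

Both values match the areas read off the graphical snapshots.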
Properties of Convolution

In the following list of continuous-time properties, the notation $x(t) \rightarrow y(t)$ should be read as "the input $x(t)$ produces the output $y(t)$". Similar properties also hold for discrete-time convolution.

$$ax(t) \rightarrow ay(t) \qquad (1B.52a)$$

$$x_1(t) + x_2(t) \rightarrow y_1(t) + y_2(t) \qquad (1B.52b)$$

$$a_1x_1(t) + a_2x_2(t) \rightarrow a_1y_1(t) + a_2y_2(t) \qquad \text{(linearity)} \qquad (1B.52c)$$

$$x(t-t_0) \rightarrow y(t-t_0) \qquad \text{(time-invariance)} \qquad (1B.52d)$$
Numerical Convolution
We have already looked at how to discretize a continuous-time system by discretizing the system's input / output differential equation. The following procedure provides another method for discretizing a continuous-time system. It should be noted that the two different methods produce two different discrete-time representations.
Breaking the convolution integral into intervals of width $T$:

$$y(nT) = \sum_{i=0}^{n} \int_{iT}^{iT+T} h(\tau)\,x(nT-\tau)\,d\tau \qquad (1B.55)$$
If $T$ is small enough, $h(\tau)$ and $x(nT-\tau)$ can be taken to be constant over each interval:
[Figure 1B.14 – $h(\tau)$ approximated by the constant value $h(iT)$ over the interval $iT \le \tau < iT + T$]

$$h(\tau) \approx h(iT), \qquad x(nT-\tau) \approx x(nT-iT)$$

$$y(nT) \approx \sum_{i=0}^{n} \int_{iT}^{iT+T} h(iT)\,x(nT-iT)\,d\tau \qquad (1B.57)$$
Since the integrand is constant with respect to $\tau$, it can be moved outside the integral, which is easily evaluated. We thus approximate the integral with a summation:

$$y(nT) \approx \sum_{i=0}^{n} h(iT)\,x(nT-iT)\,T \qquad (1B.58)$$
Writing in the notation for discrete-time signals, we have the following input /
output relationship:
$$y[n] = \sum_{i=0}^{n} h[i]\,x[n-i]\,T, \qquad n = 0, 1, 2, \ldots \qquad (1B.59)$$

This is the convolution approximation for causal systems with inputs applied at $t = 0$.
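Eq. (1B.59) is straightforward to implement. As a sanity check, the sketch below applies it to the earlier example $h(t) = e^{-t}u(t)$ with a unit-step input, where the exact answer is $1 - e^{-t}$ (the step size $T = 0.01$ is an arbitrary choice):

```python
import math

def numerical_convolution(h, x, T, n_max):
    """Eq. (1B.59): y[n] = T * sum_{i=0}^{n} h(iT) x((n-i)T)."""
    return [T * sum(h(i * T) * x((n - i) * T) for i in range(n + 1))
            for n in range(n_max + 1)]

T = 0.01
y = numerical_convolution(lambda t: math.exp(-t),   # h(t) = e^{-t} u(t)
                          lambda t: 1.0,            # x(t) = u(t)
                          T, 200)
print(y[100], 1 - math.exp(-1.0))   # y(1) vs exact
print(y[200], 1 - math.exp(-2.0))   # y(2) vs exact
```

Reducing $T$ reduces the discrepancy, in line with the rectangle approximation behind Eq. (1B.58).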
Applying an impulse to a system creates the impulse response:

[Figure 1B.15 – an impulse $\delta(t)$ applied to a system $h(t)$ produces the output $y(t) = h(t) * \delta(t) = h(t)$]
The output, by definition, is the impulse response, $h(t)$. We can also arrive at this result algebraically by performing the convolution integral, and noting that it is really a sifting integral:

$$\delta(t) * h(t) = \int_{-\infty}^{\infty} \delta(\tau)\,h(t-\tau)\,d\tau = h(t) \qquad (1B.60)$$
If we now apply a delayed impulse to the system, and since the system is time-
invariant, we should get out a delayed impulse response:
Applying a delayed impulse to a system creates a delayed impulse response:

[Figure 1B.16 – a delayed impulse $\delta(t-t_0)$ applied to a system $h(t)$ produces the output $y(t) = h(t) * \delta(t-t_0) = h(t-t_0)$]
Again, using the definition of the convolution integral and the sifting property
of the impulse, we can arrive at the result algebraically:
$$\delta(t-t_0) * h(t) = \int_{-\infty}^{\infty} \delta(\tau-t_0)\,h(t-\tau)\,d\tau = h(t-t_0) \qquad (1B.61)$$

In general, convolving a function with an impulse shifts the original function to the impulse's location:

$$f(x) * \delta(x-x_0) = f(x-x_0) \qquad (1B.62)$$

[Figure 1B.17 – convolving $f(x)$ with $\delta(x-x_0)$ produces $f(x-x_0)$]
Summary
Systems are predominantly described by differential or difference equations
– they are the equations of dynamics, and tell us how outputs and various
states of the system change with time for a given input.
Discrete-time signals occur naturally and frequently – they are signals that
exist only at discrete points in time. Discrete-time systems are commonly
implemented using microprocessors.
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB®, Prentice-Hall International, Inc., 1997.
Exercises
1.
The following continuous-time functions are to be uniformly sampled. Plot the discrete signals which result if the sampling period $T$ is (i) $T = 0.1$ s, (ii) $T = 0.3$ s, (iii) $T = 0.5$ s, (iv) $T = 1$ s. How does the sampling time affect the accuracy of the resulting signal?
2.
Plot the sequences given by:
3.
From your solution in Question 2, find $y[n] = y_1[n] + y_2[n]$. Show graphically that the resulting sequence is equivalent to the sum of the following delayed unit-step sequences:
4.
Find yn y1 n y 2 n when:
5.
The following series of numbers is known as the Fibonacci sequence:
(a) Find a difference equation which describes this number sequence $y[n]$ for $n \ge 2$, when $y[0] = 0$ and $y[1] = 1$.
(b) By evaluating the first few terms show that the following formula also
describes the numbers in the Fibonacci sequence:
$$y[n] = \frac{1}{\sqrt{5}}\left[\left(0.5 + \sqrt{1.25}\right)^{n} - \left(0.5 - \sqrt{1.25}\right)^{n}\right]$$
(c) Using your answer in (a) find y20 and y25 . Check your results using
the equation in (b). Which approach is easier?
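Both approaches in Question 5 are easy to compare programmatically; this sketch runs the difference equation and the closed-form formula side by side:

```python
import math

def fib_recurrence(n):
    """Difference equation y[n] = y[n-1] + y[n-2], with y[0] = 0, y[1] = 1."""
    y = [0, 1]
    for _ in range(2, n + 1):
        y.append(y[-1] + y[-2])
    return y[n]

def fib_closed(n):
    """Closed-form (Binet-style) formula from part (b)."""
    r = math.sqrt(1.25)
    return ((0.5 + r) ** n - (0.5 - r) ** n) / math.sqrt(5)

print(fib_recurrence(20), round(fib_closed(20)))  # → 6765 6765
```

The recurrence needs every intermediate term, while the formula evaluates any term directly (at the cost of floating-point rounding for large $n$).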
6.
Construct block diagrams for the following difference equations:
7.
(i) Construct a difference equation from the following block diagram:
[Block diagram for Question 7 – input $x[n]$ and output $y[n]$, with gain blocks 3 and $-2$ and unit delays $D$]
(ii) From your solution calculate $y[n]$ for $n$ = 0, 1, 2 and 3 given $y[-2] = 2$,
8.
(a) Find the unit-pulse response of the linear systems given by the following
equations:
(b) Determine the first five terms of the response of the equation in (ii) to the
input:
$$x[n] = \begin{cases} 0, & n = 2, 3, 4, \ldots \\ 1, & n = 1 \\ (-1)^{n}, & n = 0, -1, -2, \ldots \end{cases}$$
using (i) the basic difference equation, (ii) graphical convolution and
(iii) the convolution summation. (Note yn 0 for n 2 ).
9.
For the single input-single output continuous- and discrete-time systems
characterized by the following equations, determine which coefficients must be
zero for the systems to be
(a) linear
2
d3y d2y
a1 3 a2 2 a3 a4 y a5 sin t a6 y a7 x
dy
(i)
dt dt dt
10.
To demonstrate that nonlinear systems do not obey the principle of
superposition, determine the first five terms of the response of the system:
to the input:
If y1 n denotes this response, show that the response of the system to the input
xn 2 x1 n is not 2 y1 n .
11.
A system has the unit-pulse response:
Find the response of this system when the input is the sequence:
n n 1 n 2 n 3
12.
For x1 n and x2 n as shown below find
[Figure – $x_1[n]$: samples of amplitude 2 over $n = 0$ to 3; $x_2[n]$: samples over $n = 0$ to 8]
13.
Use MATLAB® and discretization to produce approximate solutions to the
revision problem.
14.
Use MATLAB® to graph the output voltage of the following RLC circuit:
[Circuit – series $R$ and $L$ driven by $v_i(t)$, with $C$ across the output $v_o(t)$]
Compare with the exact solution: vo t 0.5 3 t e t cost , t 0 . How do
you decide what value of T to use?
15.
A feedback control system is used to control a room’s temperature with respect
to a preset value. A simple model for this system is represented by the block
diagram shown below:
[Block diagram – a feedback loop: $x(t)$ enters an adder, passes through gain $K$ and an integrator to produce $y(t)$, which is fed back to the adder]
In the model, the signal xt represents the commanded temperature change
from the preset value, y t represents the produced temperature change, and t
is measured in minutes. Find:
c) the temperature change produced by the system when the gain $K$ is 0.5 and a step change of 0.75 is commanded at $t = 4$ min.
16.
Use MATLAB® and the numerical convolution method to solve Q14.
17.
Sketch the convolution of the two functions shown below.
[Figure – $x(t)$: a pulse of amplitude 1 over $-1 \le t \le 4$; $y(t)$: a pulse of amplitude 2 over $0 \le t \le 0.5$]
18.
Quickly changing inputs to an aircraft rudder control are smoothed using a
digital processor. That is, the control signal is converted to a discrete-time
signal by an A/D converter, the discrete-time signal is smoothed with a
discrete-time filter, and the smoothed discrete-time signal is converted to a
continuous-time, smoothed, control signal by a D/A converter. The smoothing
filter has the unit-pulse response:
$$h(nT) = \left[(0.5)^{n} - (0.25)^{n}\right]u(nT), \qquad T = 0.25\ \text{s}$$
Find the zero-state response of the discrete-time filter when the input signal
samples are:
Plot the input, unit-pulse response, and output for $0.75 \le t \le 1.5$ s.
19.
A wave staff measures ocean wave height in meters as a function of time. The
height signal is sampled at a rate of 5 samples per second. These samples form
the discrete-time signal:
function n0=drn(n)
N=size(n,2);
rand('seed',0);
n0(1)=rand-0.5;
for i=2:N
  n0(i)=0.2*n0(i-1)+(rand-0.5);
end
The received signal plus noise, xnT , is processed with a low-pass filter to
reduce the noise.
$$h(nT) = \left[0.182(0.76)^{n} + 0.144(0.87)^{n}\cos(0.41n) + 0.194(0.87)^{n}\sin(0.41n)\right]u(nT)$$
Plot the sampled height signal $s(nT)$, the filter input signal $x(nT)$, the unit-pulse response of the filter $h(nT)$, and the filter output signal $y(nT)$, for $0 \le t \le 6$ s.
In 1857 Kirchhoff extended the work done by the German physicist Georg
Simon Ohm, by describing charge flow in three dimensions. He also analysed
circuits using topology. In further studies, he offered a general theory of how
electricity is conducted. He based his calculations on experimental results
which determine a constant for the speed of the propagation of electric charge.
Kirchhoff noted that this constant is approximately the speed of light – but the
greater implications of this fact escaped him. It remained for James Clerk
Maxwell to propose that light belongs to the electromagnetic spectrum.
Kirchhoff’s most significant work, from 1859 to 1862, involved his close
collaboration with Bunsen. Bunsen was in his laboratory, analysing various
salts that impart specific colours to a flame when burned. Bunsen was using
coloured glasses to view the flame. When Kirchhoff visited the laboratory, he
suggested that a better analysis might be achieved by passing the light from the
flame through a prism. The value of spectroscopy became immediately clear.
Each element and compound showed a spectrum as unique as any fingerprint,
which could be viewed, measured, recorded and compared.
Spectral analysis, Kirchhoff and Bunsen wrote not long afterward, promises
“the chemical exploration of a domain which up till now has been completely
closed." They not only analysed the known elements, they discovered new elements.

"[Kirchhoff is] a perfect example of the true German investigator. To search after truth in its purest shape and to give utterance with almost an abstract self-forgetfulness, was the religion and purpose of his life." – Robert von Helmholtz, 1890.

Kirchhoff's work on spectrum analysis led on to a study of the composition of light from the Sun. He was the first to explain the dark lines (Fraunhofer lines) in the Sun's spectrum as caused by absorption of particular wavelengths as the light passes through a gas. Kirchhoff wrote "It is plausible that spectroscopy is also applicable to the solar atmosphere and the brighter fixed stars." We can now analyse the collective light of a hundred billion stars in a remote galaxy billions of light-years away – we can tell its composition, its age, and even how fast the galaxy is receding from us – simply by looking at its spectrum!
Orthogonality
The idea of breaking a complex phenomenon down into easily comprehended
components is quite fundamental to human understanding. Instead of trying to
commit the totality of something to memory, and then, in turn having to think
about it in its totality, we identify characteristics, perhaps associating a scale The concept of
breaking the
with each characteristic. Our memory of a person might be confined to the complex down into
the simple
characteristics gender, age, height, skin colour, hair colour, weight and how
they rate on a small number of personality attribute scales such as optimist-
pessimist, extrovert-introvert, aggressive-submissive etc.
We only need to know how to travel east and how to travel north and we can
go from any point on earth to any other point. An artist (or a television tube)
needs only three primary colours to make any colour.
Orthogonality in Mathematics
Vectors and functions are similarly often best represented, memorised and
manipulated in terms of a set of magnitudes of independent components. Recall
that any vector A in 3 dimensional space can be expressed in terms of any
three vectors a , b and c which do not lie in the same plane as:
Specifying a 3D vector in terms of 3 components:

$$\mathbf{A} = A_1\mathbf{a} + A_2\mathbf{b} + A_3\mathbf{c} \qquad (2A.1)$$

[Figure 2A.1 – the vector $\mathbf{A}$ resolved into components $A_1$, $A_2$, $A_3$ along the basis vectors $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$]
The vectors $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ are said to be linearly independent, for no one of them can be expressed as a linear combination of the other two. For example, it is impossible to write $\mathbf{c} = \alpha\mathbf{a} + \beta\mathbf{b}$ no matter what choice is made for $\alpha$ and $\beta$ (because a linear combination of $\mathbf{a}$ and $\mathbf{b}$ must stay in the plane of $\mathbf{a}$ and $\mathbf{b}$). Such a set of linearly independent vectors is said to form a basis set for three-dimensional vector space. They span three-dimensional space in the sense that any vector $\mathbf{A}$ can be expressed as a linear combination of them.
If the basis vectors are mutually orthogonal, their dot products satisfy:

$$\mathbf{a}\cdot\mathbf{a} = a^2, \quad \mathbf{b}\cdot\mathbf{b} = b^2, \quad \mathbf{c}\cdot\mathbf{c} = c^2, \quad \mathbf{a}\cdot\mathbf{b} = \mathbf{a}\cdot\mathbf{c} = \mathbf{b}\cdot\mathbf{c} = 0 \qquad (2A.2)$$

Hence, if $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ are orthogonal, when we project the vector $\mathbf{A}$ onto each of the basis vectors, we find the components of the vector:

$$\mathbf{A}\cdot\mathbf{a} = A_1\mathbf{a}\cdot\mathbf{a} + A_2\mathbf{b}\cdot\mathbf{a} + A_3\mathbf{c}\cdot\mathbf{a} = A_1 a^2 \qquad (2A.3)$$
We can see that projection of $\mathbf{A}$ onto a particular basis vector results in a scalar quantity that is proportional to the "amount of $\mathbf{A}$ in that direction". For example, when projecting $\mathbf{A}$ onto $\mathbf{a}$, we get the quantity $A_1 a^2$, where $A_1$ is the component of $\mathbf{A}$ along $\mathbf{a}$. The vector can therefore be written in terms of its orthogonal components as:

$$\mathbf{A} = \frac{\mathbf{A}\cdot\mathbf{a}}{a^2}\,\mathbf{a} + \frac{\mathbf{A}\cdot\mathbf{b}}{b^2}\,\mathbf{b} + \frac{\mathbf{A}\cdot\mathbf{c}}{c^2}\,\mathbf{c} \qquad (2A.4)$$

or, for the familiar case of mutually orthogonal unit vectors:

$$\mathbf{A} = \left(\mathbf{A}\cdot\hat{\mathbf{x}}\right)\hat{\mathbf{x}} + \left(\mathbf{A}\cdot\hat{\mathbf{y}}\right)\hat{\mathbf{y}} + \left(\mathbf{A}\cdot\hat{\mathbf{z}}\right)\hat{\mathbf{z}} \qquad (2A.5)$$
If $a^2$, $b^2$, $c^2$ etc. are 1, the set of vectors are not just orthogonal, they are orthonormal.
The definition of the dot product for vectors in space can be extended to any
general vector space. Consider two n-dimensional vectors:
[Figure 2A.2 – two n-dimensional vectors $\mathbf{u}$ and $\mathbf{v}$ separated by an angle $\theta$]

$$\langle \mathbf{u}, \mathbf{v} \rangle = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = u_1v_1 + u_2v_2 + u_3v_3 = |\mathbf{u}||\mathbf{v}|\cos\theta \qquad (2A.7)$$

If $\theta = 90°$ the two vectors are orthogonal and $\langle \mathbf{u}, \mathbf{v} \rangle = 0$. They are linearly independent; that is, one vector cannot be written in a way that contains a component of the other.
The inner product for functions is defined as:

$$\langle x, y \rangle = \int_{a}^{b} x(t)\,y(t)\,dt \qquad (2A.8)$$
This definition ensures that the inner product of functions behaves in exactly
the same way as the inner product of vectors. Just like vectors, we say that
“two functions are orthogonal” if their inner product is zero:
$$\langle x, y \rangle = 0 \qquad (2A.9)$$
Example
Let $x(t) = \sin(2\pi t)$ and $y(t) = \sin(4\pi t)$ over the interval $-\tfrac{1}{2} \le t \le \tfrac{1}{2}$:

[Figure – one period of $x(t) = \sin(2\pi t)$ and two periods of $y(t) = \sin(4\pi t)$, both over $-\tfrac{1}{2} \le t \le \tfrac{1}{2}$]

Then:

$$\langle x, y \rangle = \int_{-1/2}^{1/2} \sin(2\pi t)\sin(4\pi t)\,dt = 0$$

Thus, the functions $x(t)$ and $y(t)$ are orthogonal over the interval $-\tfrac{1}{2} \le t \le \tfrac{1}{2}$.
Consider a finite power signal $x(t)$. The average power of $x(t)$ is:

$$P_x = \frac{1}{T}\int_{0}^{T} x^2(t)\,dt = \frac{1}{T}\langle x, x \rangle \qquad (2A.10)$$

Now consider two finite power signals, $x(t)$ and $y(t)$. The average value of the product of the two signals observed over a particular interval, $T$, is given by the following expression:

$$\frac{1}{T}\int_{0}^{T} x(t)\,y(t)\,dt = \frac{1}{T}\langle x, y \rangle \qquad (2A.11)$$
This average can also be interpreted as a measure of the correlation between $x(t)$ and $y(t)$. If the two signals are orthogonal, then over the interval $T$ this correlation is zero. In that case, when the signals are added together, the total power (i.e. the mean square value) is:

$$P_{x+y} = \frac{1}{T}\langle x+y, x+y \rangle = \frac{1}{T}\int_{0}^{T}\left[x(t) + y(t)\right]^2 dt$$

$$= \frac{1}{T}\int_{0}^{T}\left[x^2(t) + 2x(t)y(t) + y^2(t)\right] dt$$

$$= \frac{1}{T}\langle x, x \rangle + \frac{2}{T}\langle x, y \rangle + \frac{1}{T}\langle y, y \rangle$$

$$= P_x + 0 + P_y \qquad (2A.12)$$

The power of a signal made up of orthogonal components is the sum of the component signal powers. This means that the total power in the combined signal can be obtained by adding the power of the individual orthogonal signals.
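This additivity is easy to check numerically for the orthogonal pair $\sin(2\pi t)$ and $\sin(4\pi t)$ used earlier (the Riemann-sum grid below is an arbitrary implementation choice):

```python
import math

N = 100000
dt = 1.0 / N   # integrate over one period, T = 1

def avg_power(f):
    """P = (1/T) * integral over 0..T of f(t)^2 dt, by Riemann sum."""
    return sum(f(k * dt) ** 2 for k in range(N)) * dt

x = lambda t: math.sin(2 * math.pi * t)
y = lambda t: math.sin(4 * math.pi * t)
s = lambda t: x(t) + y(t)

print(avg_power(x), avg_power(y), avg_power(s))  # ≈ 0.5, 0.5, 1.0
```

The cross term integrates to zero, so the power of the sum is the sum of the powers.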
Consider two finite energy signals in the form of pulses in a digital system. Two pulses, $p_1(t)$ and $p_2(t)$, are orthogonal over a time interval, $T$, if:

$$\langle p_1, p_2 \rangle = \int_{0}^{T} p_1(t)\,p_2(t)\,dt = 0 \qquad (2A.13)$$

Similar to the orthogonal finite power signals discussed above, the total energy of a pulse produced by adding together two orthogonal pulses can be obtained by summing the individual energies of the separate pulses.
The energy of a signal made up of orthogonal components is the sum of the component signal energies:

$$E_{x+y} = \langle x+y, x+y \rangle = \int_{0}^{T}\left[x(t) + y(t)\right]^2 dt$$

$$= \int_{0}^{T}\left[x^2(t) + 2x(t)y(t) + y^2(t)\right] dt$$

$$= \langle x, x \rangle + 2\langle x, y \rangle + \langle y, y \rangle$$

$$= E_x + 0 + E_y \qquad (2A.14)$$
For example, Figure 2A.3, illustrates two orthogonal pulses because they
occupy two completely separate portions of the time interval 0 to T. Therefore,
their product is zero over the time period of interest which means that they are
orthogonal.
[Figure 2A.3 – $p_1(t)$: a unit pulse over $0 \le t < T/2$; $p_2(t)$: a unit pulse over $T/2 \le t < T$]
A function $g(t)$ can be expressed as a sum of orthogonal basis functions:

$$g(t) = \sum_{n=0}^{\infty} A_n\phi_n(t) \qquad (2A.15)$$

where the functions $\phi_n(t)$ form an "orthogonal basis set", just like the vectors $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ etc. in the geometric vector case. The equivalent of the dot product (or the light filter) for obtaining a projection in this case is the inner product given by:

$$\langle g(t), \phi_n(t) \rangle = \int_{-T_0/2}^{T_0/2} g(t)\,\phi_n(t)\,dt \qquad (2A.16)$$

and, analogous to $\mathbf{a}\cdot\mathbf{b} = 0$, the "projection" of a basis function onto a different orthogonal basis function gives zero:

$$\int_{-T_0/2}^{T_0/2} \phi_n(t)\,\phi_m(t)\,dt = \begin{cases} 0, & n \ne m \\ c_n^2, & n = m \end{cases} \qquad (2A.18)$$
When $c_n^2 = 1$, the basis set of functions $\phi_n(t)$ (all $n$) are said to be orthonormal.
[Figure 2A.4 – an example of an orthogonal basis set, the Walsh functions $\phi_1(t)$ to $\phi_8(t)$: square-shaped functions of amplitude $\pm A$ over the interval $0 \le t \le T_0$, with progressively more zero crossings]
We can confirm that the Walsh functions are orthogonal with a few simple
integrations, best performed graphically. For example with n 2 and m 3 :
[Figure 2A.5 – the product $\phi_2(t)\,\phi_3(t)$ spends equal time at $+A^2$ and $-A^2$, so that $\int_{0}^{T_0}\phi_2(t)\,\phi_3(t)\,dt = 0$]
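The same pairwise-zero products can be demonstrated in code. The sketch below builds discrete Walsh-Hadamard functions from Sylvester's Hadamard construction (an assumption about ordering – it is not necessarily the indexing used in the figure) and checks orthogonality of every pair:

```python
def hadamard(n):
    """Sylvester construction: the rows of H are discrete, ±1-valued
    Walsh-Hadamard functions of length n (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

H = hadamard(8)
dot = lambda u, v: sum(a * b for a, b in zip(u, v))

# Distinct rows are orthogonal (product sums to 0); a row with itself gives n.
print(dot(H[1], H[2]), dot(H[3], H[3]))  # → 0 8
```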
The orthogonal functions chosen for the Fourier series are the sinusoids:

$$\phi_n(t) = \cos(2\pi n f_0 t), \quad \sin(2\pi n f_0 t) \qquad (2A.19)$$

They were chosen for the two reasons outlined in Lecture 1A – for linear systems a sinusoidal input yields a sinusoidal output; and they have a compact notation using complex numbers. The constant $c_n^2$ in this case is either $T_0/2$ or $T_0$, as can be seen from the following relations:

$$\int_{-T_0/2}^{T_0/2}\cos(2\pi m f_0 t)\cos(2\pi n f_0 t)\,dt = \begin{cases} 0, & m \ne n \\ T_0/2, & m = n \ne 0 \\ T_0, & m = n = 0 \end{cases} \qquad (2A.20a)$$

$$\int_{-T_0/2}^{T_0/2}\sin(2\pi m f_0 t)\sin(2\pi n f_0 t)\,dt = \begin{cases} 0, & m \ne n \\ T_0/2, & m = n \ne 0 \\ 0, & m = n = 0 \end{cases} \qquad (2A.20b)$$
If we choose the orthogonal basis set as in Eq. (2A.19), and the representation of a function as given by Eq. (2A.15), then any periodic function $g(t)$ with period $T_0$ can be expressed as a sum of orthogonal components. This is the trigonometric Fourier series:

$$g(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(2\pi n f_0 t) + b_n\sin(2\pi n f_0 t)\right] \qquad (2A.21)$$

where:

$$a_0 = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,dt \qquad (2A.22a)$$

$$a_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\cos(2\pi n f_0 t)\,dt \qquad (2A.22b)$$

$$b_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\sin(2\pi n f_0 t)\,dt \qquad (2A.22c)$$

Compare these equations with Eq. (2A.3). These equations tell us how to "filter out" one particular component of the Fourier series. Note that frequency 0 is DC, and the coefficient $a_0$ represents the average, or DC part of the periodic signal $g(t)$.
Example

Find the Fourier series coefficients of $g(t) = 4 + 3\cos(2\pi f_0 t) + 2\sin(2\pi(3f_0)t)$ by comparison with:

$$g(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(2\pi n f_0 t) + b_n\sin(2\pi n f_0 t)\right]$$

Matching terms gives $a_0 = 4$, $a_1 = 3$, $b_3 = 2$, with all other coefficients zero.
Example
Find the Fourier series for the rectangular pulse train $g(t)$ shown below:

[Figure 2A.6 – a rectangular pulse train of amplitude $A$, pulse width $\tau$ and period $T_0$, centred on $t = 0$]
Here the period is $T_0$ and $f_0 = 1/T_0$. Using Eqs. (2A.22), we have for the Fourier series coefficients:

$$a_0 = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,dt = \frac{1}{T_0}\int_{-\tau/2}^{\tau/2} A\,dt = A\tau f_0 \qquad (2A.23a)$$

$$a_n = \frac{2}{T_0}\int_{-\tau/2}^{\tau/2} A\cos(2\pi n f_0 t)\,dt = 2A\tau f_0\,\mathrm{sinc}(n\tau f_0) \qquad (2A.23b)$$

$$b_n = \frac{2}{T_0}\int_{-\tau/2}^{\tau/2} A\sin(2\pi n f_0 t)\,dt = 0 \qquad (2A.23c)$$

so that the Fourier series for the rectangular pulse train is:

$$\sum_{n=-\infty}^{\infty} A\,\mathrm{rect}\!\left(\frac{t-nT_0}{\tau}\right) = A\tau f_0 + 2A\tau f_0\sum_{n=1}^{\infty}\mathrm{sinc}(n\tau f_0)\cos(2\pi n f_0 t) \qquad (2A.24)$$
The trigonometric Fourier series coefficients can be tabled (here for the case $\tau f_0 = 0.2$, i.e. $T_0 = 5\tau$):

n      a_n          b_n
0      0.2A         0
1      0.3742A      0
2      0.3027A      0
3      0.2018A      0
4      0.0935A      0
5      0            0
6      -0.0624A     0
7      -0.0865A     0
etc.
[Figure 2A.7 – the coefficients $a_n$ plotted against frequency $nf_0$: a sampled sinc envelope with value $0.2A$ at DC and nulls at multiples of $5f_0$, shown out to $12f_0$]
Observe that the graph of the Fourier series coefficients is discrete – the values
of a n (and bn ) are associated with frequencies that are only multiples of the
fundamental, nf 0 . This graphical representation of the Fourier series
coefficients lets us see at a glance what the “dominant” frequencies are, and
how rapidly the amplitudes reduce in magnitude at higher harmonics.
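The tabled values are quick to reproduce from Eq. (2A.23): for $\tau f_0 = 0.2$, $a_n = 0.4A\,\mathrm{sinc}(0.2n)$ (with $a_0 = 0.2A$). The sketch below evaluates the coefficients as multiples of $A$:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

tau_f0 = 0.2   # pulse width / period = 1/5
a = [(1 if n == 0 else 2) * tau_f0 * sinc(n * tau_f0) for n in range(8)]
print([round(v, 4) for v in a])
# coefficients of A: 0.2, 0.3742, 0.3027, 0.2018, 0.0935, 0.0, -0.0624, -0.0865
```

Note the null at $n = 5$, the first zero of the sinc envelope.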
The compact trigonometric Fourier series is defined as:

$$g(t) = \sum_{n=0}^{\infty} A_n\cos(2\pi n f_0 t + \phi_n) \qquad (2A.25)$$

Expanding the cosine and comparing with Eq. (2A.21) gives:

$$a_n = A_n\cos\phi_n, \qquad b_n = -A_n\sin\phi_n \qquad (2A.26)$$

and therefore the associated constants are:

$$A_n = \sqrt{a_n^2 + b_n^2} \qquad (2A.27a)$$

$$\phi_n = \tan^{-1}\!\left(\frac{-b_n}{a_n}\right) \qquad (2A.27b)$$
We can define harmonic phasors from the trigonometric Fourier series coefficients:

$$G_n = a_n - jb_n \qquad (2A.29)$$

The negative sign in $G_n = a_n - jb_n$ comes from the fact that in the phasor representation of a sinusoid the real part of $G_n$ is the amplitude of the cos component, and the imaginary part of $G_n$ is the amplitude of the -sin component.
$$G_0 = a_0 = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,dt, \qquad G_n = a_n - jb_n \qquad (2A.30)$$

The harmonic phasors can be obtained directly:

$$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,e^{-j2\pi n f_0 t}\,dt, \qquad n = 0 \qquad (2A.31a)$$

$$G_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,e^{-j2\pi n f_0 t}\,dt, \qquad n \ne 0 \qquad (2A.31b)$$
The expression for the compact Fourier series, as in Eq. (2A.25), can now be written as a sum of harmonic phasors projected onto the real axis:

$$g(t) = \sum_{n=0}^{\infty}\mathrm{Re}\left\{G_n e^{j2\pi n f_0 t}\right\} \qquad (2A.32)$$
Example
Find the compact Fourier series coefficients for the rectangular pulse train $g(t)$ shown below:

[Figure – a rectangular pulse train of amplitude $A$, pulse width $\tau$ and period $T_0$, with the pulse occupying $0 \le t \le \tau$]
Again, the period is $T_0$ and $f_0 = 1/T_0$. Using Eqs. (2A.31), we have for the average value:

$$G_0 = \frac{1}{T_0}\int_{0}^{\tau} A\,dt = A\tau f_0 \qquad (2A.33)$$
which can be seen (and checked) from direct inspection of the waveform.
The compact Fourier series coefficients (harmonic phasors) are given by:

$$G_n = \frac{2}{T_0}\int_{0}^{\tau} A\,e^{-j2\pi n f_0 t}\,dt = 2Af_0\left[\frac{e^{-j2\pi n f_0 t}}{-j2\pi n f_0}\right]_{0}^{\tau} = \frac{A}{j\pi n}\left(1 - e^{-j2\pi n f_0 \tau}\right) \qquad (2A.34)$$

The next step is not obvious. We separate out the term $e^{-j\pi n f_0 \tau}$:

$$G_n = \frac{A}{j\pi n}\,e^{-j\pi n f_0 \tau}\left(e^{j\pi n f_0 \tau} - e^{-j\pi n f_0 \tau}\right) \qquad (2A.35)$$

Using Euler's identity:

$$\sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{j2} \qquad (2A.36)$$

and the definition of the sinc function:

$$\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x} \qquad (2A.38)$$

we get:

$$G_n = 2A\tau f_0\,\mathrm{sinc}(n\tau f_0)\,e^{-j\pi n f_0 \tau} = A_n e^{j\phi_n}$$

Therefore:

$$A_n = 2A\tau f_0\,\mathrm{sinc}(n\tau f_0), \qquad \phi_n = -\pi n \tau f_0 \qquad (2A.40)$$
The Spectrum
Using the compact trigonometric Fourier series, we can formulate an easy-to-
interpret graphical representation of a periodic waveform’s constituent
sinusoids. The graph below shows a periodic waveform, made up of the first 6
terms of the expansion of a square wave as a Fourier series, versus time:
[Figure 2A.8 – a partial sum of the square wave's Fourier series, plotted versus time]

The graph also shows the constituent sinusoids superimposed on the original waveform. Let's now imagine that we can graph each of the constituent sinusoids on its own time-axis, and extend these into the z-direction:

[Figure 2A.9 – the constituent sinusoids spread out along a third (frequency) axis]

[Figure 2A.10 – the same picture viewed end-on, along the time axis]

We now have a graph that shows the amplitudes of the constituent sinusoids versus frequency:

[Figure 2A.11 – amplitude vs. frequency, with lines at the odd harmonics $f_0$, $3f_0$, $5f_0$, $7f_0$, $9f_0$ and $11f_0$]

This is what is called a spectrum – a graph of phasor values vs. frequency. However, since $G_n$ is complex, we have to resort to two plots. Therefore, for a given periodic signal, we can make a plot of the magnitude spectrum $A_n$ vs. $f$, and the phase spectrum $\phi_n$ vs. $f$.

[Figure 2A.12 – a time-domain waveform together with its magnitude spectrum and phase spectrum]
Example

The compact Fourier series representation for a rectangular pulse train was found in the previous example. A graph of the magnitude and phase spectra appear below for the case $T_0 = 5\tau$:

[Figure – magnitude spectrum $A_n$ vs. $f$: a sampled sinc envelope with peak $0.4A$ and DC value $0.2A$, with nulls at multiples of $5f_0$; phase spectrum $\phi_n$ vs. $f$: a straight line descending past $-\pi$ towards $-2\pi$]

Note how we can have "phase" even though the amplitude is zero (at $5f_0$, $10f_0$, etc.). This is a consequence of $\phi_n = -\pi n \tau f_0$ graphing linearly past the $-2\pi$ point – in fact we could choose any range that spans $2\pi$, since phase is periodic with period $2\pi$.
$$\cos\theta = \frac{e^{j\theta} + e^{-j\theta}}{2} \qquad (2A.42)$$

$$\sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{j2} \qquad (2A.43)$$
Substitution of Eqs. (2A.42) and (2A.43) into the trigonometric Fourier series,
Eq. (2A.21), gives:
$$g(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(2\pi n f_0 t) + b_n\sin(2\pi n f_0 t)\right]$$

$$= a_0 + \sum_{n=1}^{\infty}\left[\frac{a_n}{2}e^{j2\pi n f_0 t} + \frac{a_n}{2}e^{-j2\pi n f_0 t} - j\frac{b_n}{2}e^{j2\pi n f_0 t} + j\frac{b_n}{2}e^{-j2\pi n f_0 t}\right]$$

$$= a_0 + \sum_{n=1}^{\infty}\left[\left(\frac{a_n - jb_n}{2}\right)e^{j2\pi n f_0 t} + \left(\frac{a_n + jb_n}{2}\right)e^{-j2\pi n f_0 t}\right] \qquad (2A.44)$$

This is the complex exponential Fourier series:

$$g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j2\pi n f_0 t} \qquad (2A.45)$$
where the relationship between the complex exponential and trigonometric Fourier series coefficients is:

$$G_0 = a_0 = A_0 \qquad (2A.46a)$$

$$G_n = \frac{a_n - jb_n}{2} = \frac{A_n}{2}e^{j\phi_n}, \qquad n \ge 1 \qquad (2A.46b)$$

$$G_{-n} = \frac{a_n + jb_n}{2} = \frac{A_n}{2}e^{-j\phi_n} \qquad (2A.46c)$$
From Eqs. (2A.31a) and (2A.46a), we can see that an alternative way of writing the Fourier series coefficients, in one neat formula instead of three, is:

$$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,e^{-j2\pi n f_0 t}\,dt \qquad (2A.47)$$
Thus, the trigonometric and complex exponential Fourier series are not two
different series but represent two different ways of writing the same series. The
coefficients of one series can be obtained from those of the other.
Each harmonic can thus be viewed as the sum of a forward rotating phasor $G_n$ and a backward rotating phasor $G_n^*$. Eqs. (2A.32) and (2A.31a) then become the complex exponential Fourier series and its coefficients:

$$g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j2\pi n f_0 t} \qquad (2A.48a)$$

$$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,e^{-j2\pi n f_0 t}\,dt \qquad (2A.48b)$$

where the coefficients have the symmetry:

$$G_{-n} = G_n^* \qquad (2A.49)$$

Thus, if:

$$G_n = |G_n|e^{j\theta_n} \qquad (2A.50)$$

then:

$$G_{-n} = |G_n|e^{-j\theta_n} \qquad (2A.51)$$
Example
Find the complex exponential Fourier series for the rectangular pulse train $g(t)$ shown below:

[Figure 2A.13 – a rectangular pulse train of amplitude $A$, pulse width $\tau$ and period $T_0$, centred on $t = 0$]
The period is $T_0$ and $f_0 = 1/T_0$. Using Eq. (2A.48b), we have for the complex exponential Fourier series coefficients:

$$G_n = \frac{1}{T_0}\int_{-\tau/2}^{\tau/2} A\,e^{-j2\pi n f_0 t}\,dt = \frac{A}{-j2\pi n}\left(e^{-j\pi n f_0 \tau} - e^{j\pi n f_0 \tau}\right) = \frac{A\sin(\pi n f_0 \tau)}{\pi n} = A\tau f_0\,\mathrm{sinc}(n\tau f_0) \qquad (2A.52)$$

In this case $G_n$ turns out to be a real number (the phase of all the constituent sinusoids is 0° or 180°).
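Eq. (2A.52) can be checked by evaluating the defining integral numerically. The sketch below uses the midpoint rule on Eq. (2A.48b) for the centred pulse with $A = 1$ and $\tau f_0 = 0.2$ (values chosen to match the earlier example):

```python
import cmath
import math

def G(n, A=1.0, tau=0.2, T0=1.0, steps=20000):
    """Numerically evaluate Eq. (2A.48b) for the centred pulse train:
    G_n = (1/T0) * integral over -tau/2..tau/2 of A e^{-j2πn f0 t} dt."""
    f0 = 1.0 / T0
    dt = tau / steps
    s = 0j
    for k in range(steps):
        t = -tau / 2 + (k + 0.5) * dt     # midpoint of each slice
        s += A * cmath.exp(-2j * math.pi * n * f0 * t) * dt
    return s / T0

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

n = 3
predicted = 0.2 * sinc(n * 0.2)   # A*tau*f0 * sinc(n*tau*f0)
print(G(n).real, predicted)       # real parts agree; imaginary part ≈ 0
```

The vanishing imaginary part reflects the even symmetry of the centred pulse.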
The double-sided magnitude spectrum of a rectangular pulse train:

[Figure 2A.14 – $G_n$ vs. frequency: a sampled sinc envelope, symmetric about DC where $G_0 = 0.2A$, with nulls at multiples of $\pm 5f_0$]
[Figure 2A.15]

Thus, the Fourier series can be represented by spectral lines at all harmonics of $f_0 = 1/T_0$, where each line varies according to the complex quantity $G_n$. In the limit, as the pulses become narrower and taller, we obtain a uniform train of unit impulses:

[Figure 2A.16 – a uniform train of unit impulses, spaced $T_0$ apart]
The result of Eq. (2A.52) is still valid if we take the appropriate limit, giving the impulse train and its Fourier series:

$$g(t) = \sum_{n=-\infty}^{\infty}\delta(t - nT_0) = f_0\sum_{n=-\infty}^{\infty}e^{j2\pi n f_0 t} \qquad (2A.54)$$

so that all the coefficients are equal:

$$G_n = f_0$$

[Figure 2A.17 – the spectrum of a uniform impulse train: lines of equal height $f_0$ at every harmonic $nf_0$]
Example

Suppose we want to find the Fourier series coefficients of $g(t) = \cos(2\pi t)$. Note the period is 1 s.

Step 1

Choose the sampling rate. Here four samples per period is enough: $f_s = 4$ Hz.

Step 2

Take the samples over one period: $\mathbf{g} = \begin{bmatrix} 1 & 0 & -1 & 0 \end{bmatrix}$

Step 3

Find $\mathbf{G} = \mathrm{fft}(\mathbf{g}) = \begin{bmatrix} 0 & 2 & 0 & 2 \end{bmatrix}$. Dividing by $N = 4$ gives the Fourier series coefficients $G_1 = G_{-1} = \tfrac{1}{2}$, as expected for a unit-amplitude cosine.
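The same computation can be reproduced with a plain DFT (a direct stdlib sketch rather than an FFT, but numerically identical for four points):

```python
import cmath

def dft(g):
    """Plain DFT: G[k] = sum_n g[n] e^{-j2πkn/N}."""
    N = len(g)
    return [sum(g[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

g = [1, 0, -1, 0]                      # cos(2πt) sampled at fs = 4 Hz
G = [v / len(g) for v in dft(g)]       # divide by N to get Fourier coefficients
print([round(abs(v), 6) for v in G])   # → [0.0, 0.5, 0.0, 0.5]
```

Bin 1 is $G_1$ and bin 3 (equivalently bin $-1$) is $G_{-1}$, each of magnitude $\tfrac{1}{2}$.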
Example
Find the Fourier series coefficients of a 50% duty cycle square wave.
Step 1
In this case the spectrum is so we can never choose f s high enough. There
is always some error. Suppose we choose 8 points of one cycle.
Step 2
g 1 1 0.5 0 0 0 0.5
1
Step 3
You should read Appendix A – The Fast Fourier Transform, and look at the
example MATLAB® code in the “FFT - Quick Reference Guide” for more
complicated and useful examples of setting up and using the FFT.
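The two step-by-step examples can be reproduced in a few lines. This is a sketch in Python/NumPy rather than the MATLAB of the notes; `np.fft.fft` follows the same convention as MATLAB's `fft`, so the Fourier series coefficients are obtained by dividing by the number of samples N:

```python
import numpy as np

# Example 1: g(t) = cos(2 pi t), period 1 s, four samples per period.
g = np.array([1.0, 0.0, -1.0, 0.0])
G = np.fft.fft(g)                 # [0, 2, 0, 2]
Gn = G / len(g)                   # Fourier series coefficients
# Gn[1] = G_1 = 1/2 and Gn[-1] = G_{-1} = 1/2, as expected for
# cos(2 pi t) = (1/2) e^{j 2 pi t} + (1/2) e^{-j 2 pi t}.

# Example 2: 50% duty cycle square wave, eight samples per period,
# with the transition samples taken at the midpoint value 0.5.
g2 = np.array([1.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.5, 1.0])
Gn2 = np.fft.fft(g2) / len(g2)
# Compare with the exact coefficients (1/2) sinc(n/2): DC and the even
# harmonics agree exactly, while the odd harmonics show aliasing error.
for n in range(4):
    print(n, round(Gn2[n].real, 4), round(0.5*np.sinc(n/2), 4))
```

Because the square wave's spectrum never dies out, the odd-harmonic coefficients carry an aliasing error no matter how many samples are used – exactly the point made in Step 1 above.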
Even Symmetry

An even function is one which possesses mirror symmetry about the vertical axis. Mathematically, an even function is such that g(-t) = g(t). An even function can be expressed as a sum of cosine waves only (cosine waves are even functions), and therefore all the G_n are real. To see why, consider the integrand in the formula for b_n:

b_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\sin(2\pi n f_0 t)\,dt

The integrand is even × odd = odd. The limits on the integral span equal portions of negative and positive time, and so we get b_n = 0 after performing the integration. The integrand in the formula for a_n is even × even = even, and contributes equally whether t is positive or negative, so a_n is in general non-zero.
Example

Find the compact Fourier series coefficients for the cosine-shaped pulse train g(t) shown below:

[Figure 2A.18 – a train of cosine-shaped pulses of amplitude 2 and period 2, over -1 ≤ t ≤ 5]

Over one period:

g(t) = \begin{cases} 0, & -1 \le t < -1/2 \\ 2\cos(3\pi t), & -1/2 \le t \le 1/2 \\ 0, & 1/2 < t \le 1 \end{cases}          (2A.57)

and g(t) = g(t+2), so that T_0 = 2 and f_0 = 1/2. Then:

G_n = \frac{1}{2}\int_{-1}^{1} g(t)\, e^{-jn\pi t}\,dt          (2A.58)
    = \frac{1}{2}\int_{-1/2}^{1/2} 2\cos(3\pi t)\, e^{-jn\pi t}\,dt
    = \int_{-1/2}^{1/2} \frac{e^{j3\pi t} + e^{-j3\pi t}}{2}\, e^{-jn\pi t}\,dt

Performing the integration:

G_n = \frac{1}{2}\left[\frac{e^{j(3-n)\pi t}}{j(3-n)\pi} - \frac{e^{-j(3+n)\pi t}}{j(3+n)\pi}\right]_{-1/2}^{1/2}          (2A.59)
    = \frac{1}{2}\frac{\sin[(3-n)\pi/2]}{(3-n)\pi/2} + \frac{1}{2}\frac{\sin[(3+n)\pi/2]}{(3+n)\pi/2}          (2A.60)

so that:

G_n = \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3+n}{2}\right)

Now since G_0 = a_0 and G_n = (a_n - jb_n)/2 for n ≥ 1, we can see that the imaginary part is zero (as we expect for an even function) and that:

G_0 = a_0 = \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3}{2}\right) + \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3}{2}\right), \qquad
G_n = \frac{a_n}{2} = \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3+n}{2}\right), \quad n \ge 1          (2A.61)

and therefore:

a_0 = \mathrm{sinc}\!\left(\frac{3}{2}\right) = -\frac{2}{3\pi}, \qquad
a_n = \mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \mathrm{sinc}\!\left(\frac{3+n}{2}\right), \quad n \ge 1          (2A.62)

Thus, the Fourier series expansion of the wave contains only cosine terms:

g(t) = -\frac{2}{3\pi} + \sum_{n=1}^{\infty}\left[\mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \mathrm{sinc}\!\left(\frac{3+n}{2}\right)\right]\cos(n\pi t)          (2A.63)
and therefore G_{-n} = G_n^* = a_n/2 = G_n for n ≥ 1. That is, for even functions, the G_n are real, and the spectrum is even:

[Figure – an even symmetric waveform has an even and real spectrum: real lines G_n at harmonics -5f_0 to 5f_0]
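As a check, the sketch below (Python/NumPy; the grid size is an arbitrary choice) evaluates Eq. (2A.58) by midpoint-rule integration and compares with the closed-form sinc expression; the coefficients come out real, as expected for an even function:

```python
import numpy as np

# Cosine-shaped pulse train: g(t) = 2 cos(3 pi t) for |t| <= 1/2,
# zero for 1/2 < |t| <= 1, period T0 = 2.
T0 = 2.0
f0 = 1.0 / T0
N = 200000
dt = T0 / N
t = -T0/2 + (np.arange(N) + 0.5) * dt     # symmetric midpoint grid
g = np.where(np.abs(t) <= 0.5, 2*np.cos(3*np.pi*t), 0.0)

def Gn_numeric(n):
    return np.sum(g * np.exp(-2j*np.pi*n*f0*t)) * dt / T0

def Gn_exact(n):
    # Gn = (1/2) sinc((3-n)/2) + (1/2) sinc((3+n)/2)
    return 0.5*np.sinc((3 - n)/2) + 0.5*np.sinc((3 + n)/2)

for n in range(5):
    print(n, round(Gn_exact(n), 4), round(Gn_numeric(n).real, 4))
```

Note that G_0 = -2/(3π) is negative, and G_3 = 1/2 picks out the cosine "carrier" inside the pulse.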
Odd Symmetry
An odd function is one which possesses rotational symmetry about the origin. Mathematically, an odd function is such that g(-t) = -g(t). An odd function can be expressed as a sum of sine waves only (sine waves are odd functions), and therefore all the G_n are imaginary. To see why, we consider the formula for a_n:

a_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\cos(2\pi n f_0 t)\,dt          (2A.64)

The integrand is odd × even = odd. The limits on the integral span equal portions of negative and positive time, and so we get a_n = 0 after performing the integration. The integrand in the formula for b_n is odd × odd = even, and contributes equally whether t is positive or negative, so b_n is in general non-zero.
Example

Find the complex exponential Fourier series coefficients for the sawtooth waveform g(t) shown below:

[Figure 2A.19 – a sawtooth rising from -A to A over each period, with g(t) = (2A/T_0)t for -T_0/2 < t < T_0/2]

The period is T_0 and f_0 = 1/T_0. Using Eq. (2A.48b), we have for the complex exponential Fourier series coefficients:

G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} \frac{2A}{T_0}\, t\, e^{-j2\pi n f_0 t}\,dt          (2A.66)

Integrating by parts with:

u = t, \qquad dv = e^{-j2\pi n f_0 t}\,dt
du = dt, \qquad v = \frac{e^{-j2\pi n f_0 t}}{-j2\pi n f_0}          (2A.67)

gives:

G_n = \frac{2A}{T_0^2}\left\{\left[\frac{t\, e^{-j2\pi n f_0 t}}{-j2\pi n f_0}\right]_{-T_0/2}^{T_0/2} + \frac{1}{j2\pi n f_0}\int_{-T_0/2}^{T_0/2} e^{-j2\pi n f_0 t}\,dt\right\}

Putting \sin(n\pi) = 0 and \cos(n\pi) = (-1)^n for all integer values of n, we obtain:

G_n = \frac{jA}{n\pi}(-1)^n, \quad n \ne 0          (2A.69)

Now since G_0 = a_0 and G_n = (a_n - jb_n)/2 for n ≥ 1, we can see that the real part is zero (as we expect for an odd function) and that:

G_n = -j\frac{b_n}{2} = \frac{jA}{n\pi}(-1)^n, \quad n \ge 1          (2A.70)

and therefore:

b_n = \frac{2A}{n\pi}(-1)^{n+1}, \quad n \ge 1          (2A.71)

Thus, the Fourier series expansion of the sawtooth wave contains only sine terms:

g(t) = \frac{2A}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin(2\pi n f_0 t)          (2A.72)
therefore G_{-n} = G_n^* = j b_n/2 = -G_n. That is, for odd functions, the complex exponential Fourier series coefficients are purely imaginary, and the spectrum is odd. A graph of the spectrum in this case looks like (the values are imaginary):

[Figure – an odd symmetric waveform has an odd and imaginary spectrum: imaginary lines at harmonics -4f_0 to 4f_0]
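The series of Eq. (2A.72) can be checked by summing partial sums. The sketch below (Python/NumPy; A, T_0, the evaluation grid and the number of terms are arbitrary choices) compares a 300-term partial sum with the line g(t) = (2A/T_0)t away from the discontinuity:

```python
import numpy as np

# Partial sums of Eq. (2A.72):
#   g(t) = (2A/pi) * sum_{n>=1} ((-1)^{n+1}/n) sin(2 pi n f0 t)
# should approach the sawtooth g(t) = (2A/T0) t on -T0/2 < t < T0/2.
A, T0 = 1.0, 2.0
f0 = 1.0 / T0

def partial_sum(t, N):
    s = np.zeros_like(t)
    for n in range(1, N + 1):
        s += (2*A/np.pi) * ((-1)**(n + 1)/n) * np.sin(2*np.pi*n*f0*t)
    return s

t = np.linspace(-0.5, 0.5, 11)   # stay away from the jump at t = +/- T0/2
exact = (2*A/T0) * t
print(np.max(np.abs(partial_sum(t, 300) - exact)))
```

Convergence is slow (the coefficients fall off only as 1/n, a consequence of the jump discontinuity), and near the jump the partial sums overshoot – the Gibbs phenomenon.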
Half-Wave Symmetry

A waveform possesses half-wave symmetry if the second half of each period is the negative of the first half, i.e. g(t) = -g(t - T_0/2):

[Figure 2A.20 – a half-wave symmetric waveform]

Looking at the formula for the complex exponential Fourier series coefficients:

G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\, e^{-j2\pi n f_0 t}\,dt          (2A.73)
we can split the integral into two halves:

G_n = \frac{1}{T_0}\int_{-T_0/2}^{0} g(t)\, e^{-j2\pi n f_0 t}\,dt + \frac{1}{T_0}\int_{0}^{T_0/2} g(t)\, e^{-j2\pi n f_0 t}\,dt = I_1 + I_2          (2A.74)

Using the half-wave symmetry g(t) = -g(t - T_0/2) in the second integral:

I_2 = \frac{1}{T_0}\int_{0}^{T_0/2} g(t)\, e^{-j2\pi n f_0 t}\,dt = -\frac{1}{T_0}\int_{0}^{T_0/2} g(t - T_0/2)\, e^{-j2\pi n f_0 t}\,dt          (2A.75)

Letting \lambda = t - T_0/2, we have:

I_2 = -\frac{1}{T_0}\int_{-T_0/2}^{0} g(\lambda)\, e^{-j2\pi n f_0 (\lambda + T_0/2)}\,d\lambda = -\frac{e^{-jn\pi}}{T_0}\int_{-T_0/2}^{0} g(\lambda)\, e^{-j2\pi n f_0 \lambda}\,d\lambda          (2A.76)

Now since the value of a definite integral is independent of the variable used in the integration, and noting that e^{-jn\pi} = (-1)^n, we can see that:

I_2 = -(-1)^n I_1          (2A.77)

Therefore:

G_n = I_1 + I_2 = \left[1 - (-1)^n\right] I_1 = \begin{cases} 2I_1, & n \text{ odd} \\ 0, & n \text{ even} \end{cases}          (2A.78)

Thus, a half-wave symmetric function will have only odd harmonics – all even harmonics are zero.
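A quick numerical check (Python/NumPy; the bipolar square wave is a convenient test signal, an assumption not taken from the notes) confirms that only the odd harmonics survive:

```python
import numpy as np

# A bipolar square wave, +1 for the first half period and -1 for the second,
# satisfies g(t) = -g(t - T0/2), so by Eq. (2A.78) its even harmonics vanish.
T0 = 1.0
f0 = 1.0 / T0
N = 200000
dt = T0 / N
t = (np.arange(N) + 0.5) * dt            # midpoint samples over one period
g = np.where(t < T0/2, 1.0, -1.0)

def Gn(n):
    # Midpoint-rule evaluation of Eq. (2A.73)
    return np.sum(g * np.exp(-2j*np.pi*n*f0*t)) * dt / T0

mags = [abs(Gn(n)) for n in range(7)]
print(np.round(mags, 4))    # n = 0, 2, 4, 6 come out zero
```

The surviving odd harmonics match the familiar bipolar square-wave magnitudes |G_n| = 2/(nπ) for odd n.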
Example

A square pulse train (50% duty cycle square wave) is a special case of the rectangular pulse train, for which τ = T_0/2. For this case g(t) is:

[Figure 2A.21 – a rectangular pulse train with τ = T_0/2]

Substituting τ = T_0/2 into Eq. (2A.52):

G_n = A f_0 \tau\,\mathrm{sinc}(n f_0 \tau) = \frac{A}{2}\,\mathrm{sinc}\!\left(\frac{n}{2}\right)          (2A.80)

Recalling that \mathrm{sinc}(x) = 0 for all non-zero integer values of x, we can see that G_n will be zero for all even harmonics (except G_0). We expect this, since apart from a DC offset of A/2 the waveform is half-wave symmetric:

[Figure 2A.22 – the spectrum |G_n| with envelope peak A/2, zero at every even harmonic]

In this case the spacing of the discrete spectral frequencies, nf_0, is such that every second spectral line falls exactly on a zero of the sinc envelope.
Power

One of the advantages of representing a signal in terms of a set of orthogonal components is that it is very easy to calculate its average power. Because the components are orthogonal, the total average power is just the sum of the average powers of the orthogonal components.

For example, if the double-sided spectrum is being used, since the magnitude of each component phasor is |G_n|, the total power is:

P = \sum_{n=-\infty}^{\infty} |G_n|^2, \qquad |G_n|^2 = G_n G_n^*          (2A.81)

Note that the DC component only appears once in the sum (n = 0). Its power contribution is G_0^2, which is correct.
Example

How much of the power of a 50% duty cycle rectangular pulse train is contained in the first three harmonics?

[Figure 2A.23 – a pulse train of amplitude A, width T_0/2, period T_0]

The total power, found directly in the time domain, is:

P = \frac{1}{T_0}\int_{-T_0/4}^{T_0/4} A^2\,dt = \frac{A^2}{2}          (2A.82)

From before, the Fourier series coefficients are:

G_n = \frac{A}{2}\,\mathrm{sinc}\!\left(\frac{n}{2}\right)          (2A.83)

and we can draw the double-sided spectrum:

[Figure 2A.24 – the double-sided spectrum |G_n|, with DC value A/2]

The DC power is:

P_0 = G_0^2 = \frac{A^2}{4}          (2A.84)

which is 50% of the total power. The DC plus fundamental power is:

P_0 + P_1 = \sum_{n=-1}^{1} |G_n|^2 = G_0^2 + 2|G_1|^2 = \frac{A^2}{4} + \frac{2A^2}{\pi^2} \approx 0.4526\,A^2          (2A.85)

which is 90.5% of the total power. Adding the second and third harmonics:

P_0 + P_1 + P_2 + P_3 = \sum_{n=-3}^{3} |G_n|^2 = \frac{A^2}{4} + \frac{2A^2}{\pi^2} + 0 + \frac{2A^2}{9\pi^2} \approx 0.4752\,A^2          (2A.86)

which is 95% of the total power. Thus, the spectrum makes obvious a characteristic of a periodic signal that is not obvious in the time domain. In this case, it was surprising to learn that 95% of the power in a square wave is contained in the frequency components up to the 3rd harmonic. This is important – we may wish to lowpass filter this signal for some reason, but retain most of its power. We are now in a position to give the cutoff frequency of a lowpass filter to retain any amount of power that we desire.
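The percentages above can be verified with a few lines. This sketch (Python/NumPy; the value of A is arbitrary) sums the double-sided power series of Eq. (2A.81):

```python
import numpy as np

# Power bookkeeping for the 50% duty cycle pulse train, Eqs. (2A.84)-(2A.86):
# Gn = (A/2) sinc(n/2), total power P = A^2 / 2.
A = 1.0
P = A**2 / 2

def Gn(n):
    return (A/2) * np.sinc(n/2)

def power_up_to(N):
    # Cumulative power in DC plus harmonics up to n = N (double-sided sum).
    return Gn(0)**2 + 2*sum(Gn(n)**2 for n in range(1, N + 1))

for N in (0, 1, 3):
    print(N, round(power_up_to(N)/P * 100, 1), "%")
```

Running this reproduces the 50%, 90.5% and 95% figures of the worked example.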
Filters

Filters are devices that shape the input signal’s spectrum to produce a new output spectrum. They shape the input spectrum by changing the amplitude and phase of each component sinusoid. This frequency-domain view of filters has been with us implicitly – we specify the filter in terms of a transfer function H(s). When evaluated at s = jω the transfer function is a complex number. The magnitude of this complex number, |H(jω)|, multiplies the corresponding magnitude of the component phasor of the input signal. The phase of the complex number, ∠H(jω), adds to the phase of the component phasor. Thus, for excitation at DC the response is Y_0 = H(0)X_0; for excitation with a sinusoid at the fundamental frequency the response is Y_1 = H(f_0)X_1; and in general, for excitation with a sinusoid at the nth harmonic, the response is:

Y_n = H(nf_0)\,X_n

[Figure 2A.25 – a filter changes each component sinusoid: the input spectrum X_n passes through H(f) to give the output spectrum Y_n = H(f)X_n]
Recall from Lecture 1A that it is the sinusoid that possesses the special
property with a linear system of “a sinusoid in gives a sinusoid out”. We now
have a view that “a sum of sinusoids in gives a sum of sinusoids out”, or more
simply: “a spectrum in gives a spectrum out”. The input spectrum is changed
by the frequency response of the system to give the output spectrum.
Example

Let’s see what happens to a square wave when it is “passed through” a 3rd order Butterworth filter. The filter is defined in terms of its magnitude and phase response:

|H(j\omega)| = \frac{1}{\sqrt{1 + \left(\omega/\omega_0\right)^6}}          (2A.87a)

\angle H(j\omega) = -\tan^{-1}\left[\frac{2(\omega/\omega_0) - (\omega/\omega_0)^3}{1 - 2(\omega/\omega_0)^2}\right]          (2A.87b)

Here, ω_0 is the “cutoff” frequency of the filter. It does not represent the angular frequency of the fundamental component of the input signal – filters know nothing about the signals to be applied to them. Since the filter is linear, superposition applies. For each component phasor of the input signal, we multiply by |H(jω)|e^{j∠H(jω)}. We then reconstruct the signal by adding up the modified phasors.

[Figure 2A.26 – the 3rd order Butterworth magnitude response overlaid on the input magnitude spectrum |G_n| (peak A/2), and the resulting output magnitude spectrum]

[Figure 2A.27 – the output waveform over two periods T_0]
This looks like a shifted sinusoid (DC + sine wave) with “a touch of 3rd harmonic distortion”. With practice, you are able to recognise the components of waveforms and hence relate them to their magnitude spectrum as in Figure 2A.26. How do we know it is 3rd harmonic distortion? Without the DC component the waveform exhibits half-wave symmetry, so we know the 2nd harmonic (an even harmonic) is zero.
If we extend the cutoff frequency to ten times the fundamental frequency of the input square wave, the filter has less effect:

[Figure 2A.28 – the output waveform with the cutoff at ten times the fundamental: nearly square, with slightly rounded transitions]

From this example it should be apparent that sharp transitions in the time-domain are caused by high frequencies – high frequencies are needed to make sharp transitions.
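The example can be sketched numerically. The code below (Python/NumPy; the amplitude, fundamental and cutoff values, and the factored form of H(s), are assumptions consistent with Eq. (2A.87)) multiplies each input phasor by the filter's frequency response and rebuilds the output waveform:

```python
import numpy as np

# Pass the Fourier series of a 50% duty cycle square wave (Gn = (A/2) sinc(n/2))
# through a 3rd-order Butterworth lowpass and reconstruct the output.
A, f0 = 1.0, 1.0
w0 = 2*np.pi*f0                      # cutoff placed at the fundamental

def H(w):
    # 3rd-order Butterworth lowpass; |H| = 1/sqrt(1 + (w/w0)^6)
    s = 1j*w/w0
    return 1.0 / ((1 + s)*(1 + s + s**2))

def Gn(n):
    return (A/2) * np.sinc(n/2)      # input square-wave coefficients

t = np.arange(1000) / 500.0          # two periods of the fundamental
y = np.zeros(len(t), dtype=complex)
for n in range(-50, 51):
    # multiply each phasor by H(j n w0), then sum the series
    y += H(2*np.pi*n*f0) * Gn(n) * np.exp(2j*np.pi*n*f0*t)
y = y.real                           # conjugate symmetry makes the sum real
print(round(y.min(), 3), round(y.max(), 3))
```

Plotting y against t gives the "DC plus sine wave with a touch of 3rd harmonic" shape of Figure 2A.27; moving w0 out to ten times the fundamental gives the nearly-square output of Figure 2A.28.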
Fourier Series (three equivalent notations):

g(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(2\pi n f_0 t) + b_n\sin(2\pi n f_0 t)\right]
     = \sum_{n=0}^{\infty} A_n\cos(2\pi n f_0 t + \phi_n)
     = \sum_{n=-\infty}^{\infty} G_n\, e^{j2\pi n f_0 t}

Fourier Series Coefficients:

a_0 = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,dt
a_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\cos(2\pi n f_0 t)\,dt, \quad n \ne 0
b_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\sin(2\pi n f_0 t)\,dt, \quad n \ne 0
G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\, e^{-j2\pi n f_0 t}\,dt

Spectrum of a single sinusoid A\cos(2\pi f_0 t - \theta):

single-sided – one line of magnitude A and phase -\theta at f = f_0;
double-sided – lines of magnitude A/2 at f = \pm f_0, with phase -\theta at +f_0 and +\theta at -f_0.
Summary
All periodic waveforms are made up of a sum of sinusoids – a Fourier
series. There are three equivalent notations for the Fourier series.
The coefficients of the basis functions in the Fourier series are called
“Fourier series coefficients”. For the complex exponential Fourier series,
they are complex numbers, and are just the phasor representation of the
sinusoid at that particular harmonic frequency.
References
Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York, 1994.
Quiz

Encircle the correct answer, cross out the wrong answers. [one or none correct]

1. The signal 3cos(2πt) + 4sin(4πt) has:

2. The Fourier series of the periodic signal g(t) whose spectrum has lines at -3, -2, -1, 0, 1, 2, 3 Hz will have no:

Answers: 1. c  2. c  3. a  4. c  5. a
Exercises
1.
A signal g(t) = 2cos(100πt) + sin(100πt).
(c) Plot the real and imaginary components of the double-sided spectrum.

2.
[A double-sided spectrum is given: magnitude |G| = 2 at ω = ±100 rad s⁻¹; phase ∠G = -45° at ω = 100 rad s⁻¹ and +45° at ω = -100 rad s⁻¹.]

3.
What is g(t) for the signal whose spectrum is sketched below?
[Spectrum sketch: Re{G_n} with lines of height 2, and Im{G_n}, as sketched.]

4.
Sketch photographs of the counter rotating phasors associated with the spectrum below at t = 0, t = 1 and t = 2.
[Spectrum: magnitude 1 at ω = ±1 rad s⁻¹; phase -30° at ω = 1 rad s⁻¹ and +30° at ω = -1 rad s⁻¹.]

5.
For each of the periodic signals shown, find the complex exponential Fourier series coefficients. Plot the magnitude and phase spectra in MATLAB®, for -25 ≤ n ≤ 25.
(a) [a pulse train g(t) of amplitude 1; time axis marked -2π, 0, 2π]
(b) [g(t) = e^{-t/10}, amplitude 1, repeated with period 1; time axis marked -2 to 4]
(c) [g(t) = cos(8πt) gated to -π/2 ≤ t ≤ π/2, amplitudes ±1; time axis marked -2π to 2π]
(d) [a pulse train g(t) of amplitude 1; time axis marked -3, -1, 1, 3, 5]
(e) [a waveform of period T_0, as sketched]

6.
A voltage waveform v_i(t) with a period of 0.08 s is defined by: v_i = 60 V, 0 < t < 0.01 s; v_i = 0 V, 0.01 < t < 0.08 s. The voltage v_i is applied as the source in the circuit shown below. What average power is delivered to the load?
[Circuit: source v_i feeding a load that includes a 5 Ω resistance and a 16 mH inductance; the response |V_o/V_i| is sketched for f = 0 to 80 Hz.]

7.
A periodic signal g(t) is transmitted through a system with transfer function H(ω). For three different values of T_0 (T_0 = 2π/3, π/3, and π/6) find the double-sided magnitude spectrum of the output signal. Calculate the power of the output signal as a percentage of the power of the input signal g(t).
[g(t): a pulse train of amplitude 1 and period T_0; H(ω): an ideal lowpass response, flat between ω = -12 and 12 rad s⁻¹.]

8.
Estimate the bandwidth B of the periodic signal g(t) shown below if the power of all components of g(t) within the band B is to be at least 99.9 percent of the total power of g(t).
[g(t): a periodic waveform of period T_0 swinging between A and -A.]
Fourier is famous for his study of the flow of heat in metallic plates and rods.
The theory that he developed now has applications in industry and in the study
of the temperature of the Earth’s interior. He is also famous for the discovery
that many functions could be expressed as infinite sums of sine and cosine
terms, now called a trigonometric series, or Fourier series.
Fourier first showed talent in literature, but by the age of thirteen, mathematics
became his real interest. By fourteen, he had completed a study of six volumes
of a course on mathematics. Fourier studied for the priesthood but did not end
up taking his vows. Instead he became a teacher of mathematics. In 1793 he
became involved in politics and joined the local Revolutionary Committee. As
he wrote:-
This extract is from a letter found among Fourier’s papers, and unfortunately lacks the name of the addressee, but was probably intended for Lagrange:

…it was in attempting to verify a third theorem that I employed the procedure which consists of multiplying by cos x dx the two sides of the equation

x = a_0 + a_1\cos x + a_2\cos 2x + \ldots

and integrating between x = 0 and x = π. I am sorry not to have known the name of the mathematician who first made use of this method because I would have cited him. Regarding the researches of d’Alembert and Euler could one not add that if they knew this expansion they made but a very imperfect use of it. They were both persuaded that an arbitrary…function could never be resolved in a series of this kind, and it does not seem that anyone had developed a constant in cosines of multiple arcs [i.e. found a_1, a_2, …, with 1 = a_1\cos x + a_2\cos 2x + \ldots for -\pi/2 < x < \pi/2], the first problem which I had to solve in the theory of heat.
f(x) \sim \sum_r a_r e^{irx}, but Fourier’s work extended this idea in two totally new ways. One was the “Fourier integral” (the formula for the Fourier series coefficients) and the other marked the birth of Sturm-Liouville theory (Sturm and Liouville were nineteenth century mathematicians who found solutions to such equations).
Napoleon was defeated in 1815 and Fourier returned to Paris. Fourier was
elected to the Académie des Sciences in 1817 and became Secretary in 1822.
Shortly after, the Academy published his prize winning essay Théorie
analytique de la chaleur (Analytical Theory of Heat). In this he obtains for the
first time the equation of heat conduction, which is a partial differential
equation in three dimensions. As an application he considered the temperature
of the ground at a certain depth due to the sun’s heating. The solution consists
of a yearly component and a daily component. Both effects die off
exponentially with depth but the high frequency daily effect dies off much
more rapidly than the low frequency yearly effect. There is also a phase lag for
the daily and yearly effects so that at certain depths the temperature will be
completely out of step with the surface temperature.
All these predictions are confirmed by measurements which show that annual
variations in temperature are imperceptible at quite small depths (this accounts
for the permafrost, i.e. permanently frozen subsoil, at high latitudes) and that
daily variations are imperceptible at depths measured in tenths of metres. A
reasonable value of soil thermal conductivity leads to a prediction that annual
temperature changes will lag by six months at about 2–3 metres depth. Again
this is confirmed by observation and, as Fourier remarked, gives a good depth
for the construction of cellars.
References
Körner, T.W.: Fourier Analysis, Cambridge University Press, 1988.
Consider a single aperiodic pulse g(t):

[Figure 2B.1 – a single pulse g(t)]

From it, construct a periodic waveform g_p(t) by repeating the pulse every T_0 seconds:

[Figure 2B.2 – the pulse repeated with period T_0]

In the limit, if we let T_0 become infinite, the pulses in the periodic signal repeat after an infinite interval, and:

\lim_{T_0 \to \infty} g_p(t) = g(t)          (2B.1)

The Fourier series of the periodic waveform is:

g_p(t) = \sum_{n=-\infty}^{\infty} G_n\, e^{j2\pi n f_0 t}          (2B.2a)

G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g_p(t)\, e^{-j2\pi n f_0 t}\,dt          (2B.2b)

which, for a rectangular pulse train of width τ, gives:

G_n = A f_0 \tau\,\mathrm{sinc}(n f_0 \tau)          (2B.3)

[Figure 2B.3 – the spectrum |G_n|: a sinc envelope of peak 0.2A sampled at harmonics of f_0]

As the period increases, the spectral lines get closer and closer, but smaller and smaller:

[Figure 2B.4 – the spectrum after increasing T_0: line spacing reduced, envelope peak reduced to 0.04A]
The envelope retains the same shape (it was never a function of T0 anyway),
but the amplitude gets smaller and smaller with increasing T0 . The spectral
lines get closer and closer with increasing T_0. In the limit, it is impossible to distinguish individual lines – the spectrum becomes a continuous function. To handle the limiting process, define:

G_n T_0 = \int_{-T_0/2}^{T_0/2} g_p(t)\, e^{-j2\pi n f_0 t}\,dt          (2B.4)

and note that the discrete frequency nf_0 becomes a continuous frequency:

n f_0 \to f \quad \text{as} \quad T_0 \to \infty          (2B.5)
With the limiting process as defined in Eqs. (2B.1) and (2B.5), Eq. (2B.4)
becomes:
G_n T_0 \to \int_{-\infty}^{\infty} g(t)\, e^{-j2\pi f t}\,dt          (2B.6)

The right-hand side of this expression is a function of f (and not of t), and we represent it by:

G(f) = \int_{-\infty}^{\infty} g(t)\, e^{-j2\pi f t}\,dt          (2B.7)

This defines the Fourier transform. Therefore, G(f) = \lim_{T_0 \to \infty} G_n T_0 has the dimensions amplitude per unit frequency – it is a spectral density. Also:
f_0 \to df \quad \text{as} \quad T_0 \to \infty          (2B.8)
That is, the fundamental frequency becomes infinitesimally small as the period
is made larger and larger. This agrees with our reasoning in obtaining
Eq. (2B.5), since there we had an infinite number, n, of infinitesimally small
discrete frequencies, f 0 df , to give the finite continuous frequency f. In order
to apply the limiting process to Eq. (2B.2a), we multiply the summation by
T_0 f_0 = 1:
g_p(t) = \sum_{n=-\infty}^{\infty} (G_n T_0)\, e^{j2\pi n f_0 t}\, f_0          (2B.9)

and use Eq. (2B.8) and the new quantity G(f) = G_n T_0. In the limit the sum becomes the integral that defines the inverse Fourier transform:

g(t) = \int_{-\infty}^{\infty} G(f)\, e^{j2\pi f t}\,df          (2B.10)
Eqs. (2B.7) and (2B.10) are collectively known as a Fourier transform pair.
The function G f is the Fourier transform of g t , and g t is the inverse
Fourier transform of G f .
Showing the transform and inverse transform side-by-side:

g(t) = \int_{-\infty}^{\infty} G(f)\, e^{j2\pi f t}\,df          (2B.11a)

G(f) = \int_{-\infty}^{\infty} g(t)\, e^{-j2\pi f t}\,dt          (2B.11b)

The notation for a Fourier transform pair is:

g(t) \Leftrightarrow G(f)          (2B.12)
Continuous Spectra

The concept of a continuous spectrum is sometimes bewildering because we generally picture the spectrum as existing at discrete frequencies and with finite amplitudes. The continuous spectrum concept can be appreciated by considering a simple analogy: a loaded beam.

[Figure 2B.5 – (a) a beam loaded with n discrete weights G_1 … G_n at points x_1 … x_n; (b) a beam loaded continuously with density G(x)]

The beam in (a) is loaded at n discrete points, and the total weight W on the beam is given by the sum of the individual finite weights:

W = \sum_{i=1}^{n} G_i          (2B.13)

For the continuously loaded beam in (b), the total weight is the integral of infinitesimal weights:

W = \int_{x_1}^{x_n} G(x)\,dx          (2B.14)

An exactly analogous situation exists in the case of a signal and its frequency spectrum. A periodic signal can be represented by a sum of discrete exponentials with finite amplitudes (harmonic phasors):

g(t) = \sum_{n=-\infty}^{\infty} G_n\, e^{j2\pi n f_0 t}          (2B.15)

while an aperiodic signal is built from infinitesimal sinusoids – if we have an infinite number of them:

g(t) = \int_{-\infty}^{\infty} G(f)\, e^{j2\pi f t}\,df          (2B.16)

An electrical analogy could also be useful: just replace the discrete loaded beam with an array of filamentary conductors, and the continuously loaded beam with a current sheet. The analysis is the same.
The Fourier transform G(f) exists if g(t) satisfies the Dirichlet conditions:

1. g(t) has a finite number of maxima, minima and discontinuities in any finite interval.          (2B.17a)

2. g(t) is “absolutely integrable”, i.e. \int_{-\infty}^{\infty} |g(t)|\,dt < \infty          (2B.17b)

Note that these are sufficient conditions and not necessary conditions. Use of the Fourier transform for the analysis of many useful signals would be impossible if these were necessary conditions.

Any signal with finite energy:

E = \int_{-\infty}^{\infty} |g(t)|^2\,dt < \infty          (2B.18)

has a Fourier transform. Many signals of interest to electrical engineers are not energy signals and are therefore not absolutely integrable. These include the step function and all periodic functions. It can be shown that signals which have infinite energy but which contain a finite amount of power and meet Dirichlet condition 1 do have valid Fourier transforms.

For practical purposes, for the signals or functions we may wish to analyse as engineers, we can use the rule of thumb that if we can draw a picture of the waveform g(t), then it has a Fourier transform G(f).
Example

Find the Fourier transform of the rectangular pulse g(t) = A\,\mathrm{rect}(t/T):

[Figure 2B.6 – a rectangular pulse of amplitude A extending from -T/2 to T/2]

Direct evaluation gives:

G(f) = \int_{-\infty}^{\infty} A\,\mathrm{rect}\!\left(\frac{t}{T}\right) e^{-j2\pi f t}\,dt
     = A\int_{-T/2}^{T/2} e^{-j2\pi f t}\,dt
     = \frac{A}{j2\pi f}\left(e^{j\pi f T} - e^{-j\pi f T}\right)
     = \frac{A\sin(\pi f T)}{\pi f}          (2B.19)
     = AT\,\mathrm{sinc}(fT)

Thus, we have the rect function and its transform:

A\,\mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow AT\,\mathrm{sinc}(fT)          (2B.20)

[Figure 2B.7 – the pulse g(t) and its transform G(f), a sinc of peak AT with zeros at multiples of 1/T]
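The transform pair can be verified numerically. This sketch (Python/NumPy; A, T and the grid size are arbitrary choices) evaluates the Fourier transform integral of the rectangular pulse directly and compares with AT sinc(fT):

```python
import numpy as np

# Numerical check of Eq. (2B.20): A rect(t/T)  <->  A T sinc(f T).
A, T = 2.0, 0.5                  # arbitrary example values

def G(f, N=200000):
    """Midpoint-rule evaluation of the transform integral over the pulse."""
    dt = T / N
    t = -T/2 + (np.arange(N) + 0.5) * dt
    return np.sum(A * np.exp(-2j*np.pi*f*t)) * dt

for f in (0.0, 0.5, 1.0, 3.0):
    print(f, round(G(f).real, 4), round(A*T*np.sinc(f*T), 4))
```

Because the pulse is even, the imaginary part of G(f) vanishes, and the zeros of the spectrum fall at multiples of 1/T.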
There are a few interesting observations we can make in general by considering this result – time and frequency characteristics reflect an inverse relationship. One is that a time-limited function has frequency components approaching infinity. Another is that compression in time (by making T smaller) will result in an expansion of the frequency spectrum (a “wider” sinc function in this case). Another is that the Fourier transform is a linear operation – multiplication by A in the time domain results in the spectrum being multiplied by A.

Letting A = 1 and T = 1 in Eq. (2B.20) results in what we shall call a “standard” transform – one of the most common transform pairs, worth committing to memory:

\mathrm{rect}(t) \Leftrightarrow \mathrm{sinc}(f)          (2B.21)

We may now state our observations more formally. The linearity property of the Fourier transform pair is:

a\,g(t) \Leftrightarrow a\,G(f)          (2B.22)

and the time scaling property is:

g\!\left(\frac{t}{T}\right) \Leftrightarrow T\,G(fT)          (2B.23)

We could (and should in future) derive our Fourier transforms by starting with a standard Fourier transform and then applying appropriate Fourier transform pair properties. For example, we could arrive at Eq. (2B.20) without integration by the following:

\mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow T\,\mathrm{sinc}(fT) \quad \text{(scaling property)}

A\,\mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow AT\,\mathrm{sinc}(fT) \quad \text{(linearity)}          (2B.24)

We now only have to derive enough standard transforms and discover a few Fourier transform properties to be able to handle almost any signal and system of interest.
Consider the inverse transform:

g(t) = \int_{-\infty}^{\infty} G(f)\, e^{j2\pi f t}\,df          (2B.25)

Replacing t with -f and renaming the integration variable to x:

g(-f) = \int_{-\infty}^{\infty} G(x)\, e^{-j2\pi x f}\,dx          (2B.26)

and then renaming x to t:

g(-f) = \int_{-\infty}^{\infty} G(t)\, e^{-j2\pi t f}\,dt          (2B.27)

Notice that the right-hand side is precisely the definition of the Fourier transform of the function G(t). Thus, there is an almost symmetrical relationship between the transform and its inverse. This is summed up in the duality property of the Fourier transform:

G(t) \Leftrightarrow g(-f)          (2B.28)

Example

Find the inverse transform of the ideal lowpass (rect-shaped) spectrum shown below:

[Figure 2B.8 – G(f) = rect(f/2B): equal to 1 for |f| < B, zero elsewhere]

From the standard transform rect(t) ⇔ sinc(f) and duality, sinc(t) ⇔ rect(-f). Since the rect function is symmetric, this can also be written sinc(t) ⇔ rect(f), and scaling then gives 2B sinc(2Bt) ⇔ rect(f/2B). Graphically:

[Figure 2B.9 – the Fourier transform of a sinc function: G(t) = 2B sinc(2Bt), with peak 2B and zeros at multiples of 1/2B, transforms to g(-f) = rect(f/2B)]
Example

Find the Fourier transform of the impulse function δ(t). Using the sifting property of the impulse:

G(f) = \int_{-\infty}^{\infty} \delta(t)\, e^{-j2\pi f t}\,dt = 1          (2B.30)

giving the Fourier transform of an impulse function:

\delta(t) \Leftrightarrow 1          (2B.31)

It is interesting to note that we could have obtained the same result using Eq. (2B.20). Let the height be such that the area AT is always equal to 1; then we have from Eq. (2B.20):

\frac{1}{T}\,\mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow \mathrm{sinc}(fT)          (2B.32)

Now let T → 0 so that the rectangle function turns into the impulse function δ(t). Noting that sinc(0) is unity, we arrive at Eq. (2B.31). Graphically, we have:

[Figure 2B.10 – a unit impulse g(t) = δ(t) and its flat spectrum G(f) = 1]

This result says that an impulse function is composed of all frequencies from DC to infinity, and all contribute equally.
Example

We wish to find the Fourier transform of the constant 1 (if it exists), which is a DC signal. Choosing the path of direct evaluation of the integral, we get:

G(f) = \int_{-\infty}^{\infty} 1\cdot e^{-j2\pi f t}\,dt          (2B.33)

which does not converge. Instead, take the limit of Eq. (2B.20) with A = 1 and T → ∞: the time function tends to the constant 1, while the spectrum T sinc(fT) becomes taller and narrower with unit area – it tends to an impulse. This gives the Fourier transform of a constant:

1 \Leftrightarrow \delta(f)          (2B.34)

Alternatively, apply duality to the pair \delta(t) \Leftrightarrow 1:

1 \Leftrightarrow \delta(-f)          (2B.35)

and, since the impulse function is even:

1 \Leftrightarrow \delta(f)          (2B.36)

which is the same as Eq. (2B.34). Again, two different methods converged on the same result, and one method (direct integration) seemed impossible! It is therefore advantageous to become familiar with the properties of the Fourier transform.
Time Shifting

A time shift to the right of t_0 seconds (i.e. a time delay) can be represented by g(t - t_0). Its Fourier transform is:

g(t - t_0) \Leftrightarrow \int_{-\infty}^{\infty} g(t - t_0)\, e^{-j2\pi f t}\,dt = G(f)\, e^{-j2\pi f t_0}          (2B.37)

Thus, a delay in the time domain leaves the magnitude spectrum unchanged and adds a linear phase -2πft_0 to the phase spectrum:

[Figure 2B.11 – a rectangular pulse occupying 0 to T (a delay of T/2): the magnitude spectrum AT|sinc(fT)| is unchanged, while the phase is linear in f with slope -πT]
Frequency Shifting

Similarly to the last section, a spectrum G(f) can be shifted to the right, G(f - f_0), or to the left, G(f + f_0). This property will be used to derive several standard transforms.

The inverse Fourier transform of a spectrum shifted to the right is:

\int_{-\infty}^{\infty} G(f - f_0)\, e^{j2\pi f t}\,df = \int_{-\infty}^{\infty} G(x)\, e^{j2\pi (x + f_0) t}\,dx = g(t)\, e^{j2\pi f_0 t}          (2B.39)

so the frequency shift property is:

g(t)\, e^{j2\pi f_0 t} \Leftrightarrow G(f - f_0)          (2B.40)

Thus, a shift of f_0 in the frequency-domain corresponds to multiplication by e^{j2\pi f_0 t} in the time-domain. The sign of the exponent in the shifting factor is opposite to that with time shifting – this can also be explained in terms of the duality property.

For example, consider “amplitude modulation”, making use of the Euler expansion for a cosine. Multiplication by a sinusoid in the time-domain shifts the original spectrum up and down by the carrier frequency:

g(t)\cos(2\pi f_c t) = g(t)\left[\tfrac{1}{2} e^{j2\pi f_c t} + \tfrac{1}{2} e^{-j2\pi f_c t}\right] \Leftrightarrow \tfrac{1}{2}G(f - f_c) + \tfrac{1}{2}G(f + f_c)          (2B.41)

Applying the frequency shift property to the pair 1 \Leftrightarrow \delta(f) gives:

e^{j2\pi f_0 t} \Leftrightarrow \delta(f - f_0)          (2B.42a)
e^{-j2\pi f_0 t} \Leftrightarrow \delta(f + f_0)          (2B.42b)

Therefore, by substituting into Euler’s relationship for the cosine and sine:

\cos(2\pi f_0 t) = \tfrac{1}{2} e^{j2\pi f_0 t} + \tfrac{1}{2} e^{-j2\pi f_0 t} \Leftrightarrow \tfrac{1}{2}\delta(f - f_0) + \tfrac{1}{2}\delta(f + f_0)          (2B.43a)

\sin(2\pi f_0 t) = \tfrac{1}{2j} e^{j2\pi f_0 t} - \tfrac{1}{2j} e^{-j2\pi f_0 t} \Leftrightarrow -\tfrac{j}{2}\delta(f - f_0) + \tfrac{j}{2}\delta(f + f_0)          (2B.43b)
[Figure 2B.12 – cos(2πf_0t) and its spectrum: real impulses of weight 1/2 at ±f_0; sin(2πf_0t) and its spectrum: imaginary impulses of weight -j/2 at +f_0 and +j/2 at -f_0]
A periodic signal has the Fourier series representation:

g(t) = \sum_{n=-\infty}^{\infty} G_n\, e^{j2\pi n f_0 t}          (2B.45)

Since, from Eq. (2B.42a):

e^{j2\pi n f_0 t} \Leftrightarrow \delta(f - n f_0)          (2B.46)

each term transforms as:

G_n\, e^{j2\pi n f_0 t} \Leftrightarrow G_n\,\delta(f - n f_0)          (2B.47)

and therefore:

\sum_{n=-\infty}^{\infty} G_n\, e^{j2\pi n f_0 t} \Leftrightarrow \sum_{n=-\infty}^{\infty} G_n\,\delta(f - n f_0)          (2B.48)

Therefore, in words: the Fourier transform of a periodic signal is a train of impulses at the harmonic frequencies, with the impulse at f = nf_0 weighted by the Fourier series coefficient G_n.

Example

Find the Fourier transform of the rectangular pulse train g(t) shown below:

[Figure 2B.13 – a rectangular pulse train of amplitude A, width τ, period T_0]

We have already found the Fourier series coefficients for this waveform:

G_n = A f_0 \tau\,\mathrm{sinc}(n f_0 \tau)          (2B.50)

For the case of T_0 = 5τ, the spectrum is then a weighted train of impulses, with spacing equal to the fundamental of the waveform, f_0:

[Figure 2B.14 – the double-sided spectrum G(f) of the rectangular pulse train: a train of impulses with weights G_n, under a sinc envelope of peak 0.2A]

This should make intuitive sense – the spectrum is now defined as a “spectral density”, or a graph of infinitesimally small phasors spaced infinitesimally close together. If we have an impulse in the spectrum, then that must mean a finite phasor at a specific frequency, i.e. a sinusoid (we recognise that a single sinusoid has a pair of impulses for its spectrum – see Figure 2B.12). Thus, pairs of impulses in the spectrum correspond to a sinusoid in the time-domain.
For the uniform train of unit impulses shown below:

[Figure 2B.15 – a uniform train of unit impulses, spacing T_0]

we found previously that the Fourier series coefficients are:

G_n = f_0          (2B.51)

Using Eq. (2B.48), we therefore have the following Fourier transform pair – the Fourier transform of a uniform train of impulses is a uniform train of impulses:

\sum_{k=-\infty}^{\infty} \delta(t - kT_0) \Leftrightarrow f_0 \sum_{n=-\infty}^{\infty} \delta(f - n f_0)          (2B.52)

Graphically:

[Figure 2B.16 – the impulse train g(t), unit weights at spacing T_0, and its spectrum G(f), weights f_0 at spacing f_0]
Standard transforms:

\delta(t) \Leftrightarrow 1          (F.2)

e^{-t}u(t) \Leftrightarrow \frac{1}{1 + j2\pi f}          (F.3)

A\cos(2\pi f_0 t) \Leftrightarrow \frac{A}{2}\,\delta(f - f_0) + \frac{A}{2}\,\delta(f + f_0)          (F.4)

\sum_{k=-\infty}^{\infty}\delta(t - kT_0) \Leftrightarrow f_0\sum_{n=-\infty}^{\infty}\delta(f - n f_0)          (F.5)

Standard properties, assuming g(t) \Leftrightarrow G(f):

g\!\left(\frac{t}{T}\right) \Leftrightarrow T\,G(fT)          (F.7) Scaling

g(t)\, e^{j2\pi f_0 t} \Leftrightarrow G(f - f_0)          (F.9) Frequency-shifting

\frac{d}{dt}g(t) \Leftrightarrow j2\pi f\,G(f)          (F.11) Time-differentiation

\int_{-\infty}^{t} g(\lambda)\,d\lambda \Leftrightarrow \frac{G(f)}{j2\pi f} + \frac{G(0)}{2}\,\delta(f)          (F.12) Time-integration

g_1(t)\,g_2(t) \Leftrightarrow G_1(f) * G_2(f)          (F.13) Multiplication

\int_{-\infty}^{\infty} g(t)\,dt = G(0)          (F.15) Area

\int_{-\infty}^{\infty} G(f)\,df = g(0)          (F.16)
Summary
Aperiodic waveforms do not have a Fourier series – they have a Fourier
transform. Periodic waveforms also have a Fourier transform if we allow
for the existence of impulses in the transform.
References
Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York, 1994.
Exercises
1.
Write an expression for the time domain representation of the voltage signal with double-sided spectrum given below:
[X(f): lines of magnitude 2, with phase +30° at f = -1000 Hz and -30° at f = 1000 Hz.]

2.
Use the integration and time shift rules to express the Fourier transform of the pulse below as a sum of exponentials.
[x(t): a pulse of amplitude 2 with breakpoints at t = -1, 0, 2 and 4.]

3.
Find the Fourier transforms of the following functions:
(a) [a piecewise waveform with levels 10 and -5 and breakpoints at t = -2, -1, 1, 2]
(b) [g(t) = cos t for -π/2 ≤ t ≤ π/2, zero elsewhere]
(c) [two identical cycles of g(t) = t², over 0 ≤ t ≤ 4]
(d) [a waveform of amplitude 3 with breakpoints at t = 0, 1, 2, 3, 4, 5]

4.
Show that:

e^{-a|t|} \Leftrightarrow \frac{2a}{a^2 + (2\pi f)^2}

5.
The Fourier transform of a pulse p(t) is P(f). Find the Fourier transform of g(t) in terms of P(f).
[g(t): a pulse of amplitude 2 near t = 1; p(t): a pulse of amplitude 1 near t = -1.5.]

6.
Show that:

g(t - t_0) + g(t + t_0) \Leftrightarrow 2G(f)\cos(2\pi f t_0)

7.
Use G_1(f) and G_2(f) to evaluate g_1(t) * g_2(t) for the signals shown below:
[g_1(t) = a e^{-at} for t ≥ 0; g_2(t): a signal of amplitude 1.]

8.
Find G_1(f) * G_2(f) for the signals shown below:
[G_1(f): amplitude A for -f_0 ≤ f ≤ f_0; G_2(f): amplitude k for -f_0 ≤ f ≤ f_0.]

9.
Determine signals g(t) whose Fourier transforms are shown below:
(a) [|G(f)| = A for -f_0 ≤ f ≤ f_0; ∠G(f) = -2πft_0, a linear phase]
(b) [|G(f)| = A for -f_0 ≤ f ≤ f_0; ∠G(f) = +π/2 for f < 0 and -π/2 for f > 0]

10.
Using the time-shifting property, determine the Fourier transform of the signal shown below:
[an odd pulse pair of amplitudes ±A over -T ≤ t ≤ T, as sketched]

11.
Using the time-differentiation property, determine the Fourier transform of the signal below:
[a waveform swinging between -A and A over -2 ≤ t ≤ 2, as sketched]
William Thomson was probably the first true electrical engineer. His
engineering was firmly founded on a solid bedrock of mathematics. He
invented, experimented, advanced the state-of-the-art, was entrepreneurial, was
a businessman, had a multi-disciplinary approach to problems, held office in
the professional body of his day (the Royal Society), published papers, gave
lectures to lay people, strived for an understanding of basic physical principles
and exploited that knowledge for the benefit of mankind.
William Thomson was born in Belfast, Ireland. His father was a professor of
engineering. When Thomson was 8 years old his father was appointed to the
chair of mathematics at the University of Glasgow. By age 10, William
Thomson was attending Glasgow University. He studied astronomy, chemistry
and natural philosophy (physics, heat, electricity and magnetism). Prizes in
Greek, logic (philosophy), mathematics, astronomy and physics marked his
progress. In 1840 he read Fourier’s The Analytical Theory of Heat and wrote:
…I had become filled with the utmost admiration for the splendour and
poetry of Fourier… I took Fourier out of the University Library; and in a
fortnight I had mastered it - gone right through it.
After graduating, he moved to Paris on the advice of his father and because of
his interest in the French approach to mathematical physics. Thomson began
trying to bring together the ideas of Faraday, Coulomb and Poisson on
electrical theory. He began to try and unify the ideas of “action-at-a-distance”,
the properties of the “ether” and ideas behind an “electrical fluid”. He also
became aware of Carnot’s view of heat.
Such demands for vast amounts of time run counter to the laws of
thermodynamics. Every day the sun radiates immense amounts of energy. By
the law of conservation of energy there must be some source of this energy.
Thomson, as one of the founders of thermodynamics, was fascinated by this
problem. Chemical processes (such as the burning of coal) are totally
insufficient as a source of energy and Thomson was forced to conclude that
gravitational potential energy was turned into heat as the sun contracted. On
this assumption his calculations showed that the Sun (and therefore the Earth)
was around 100 million years old.
With respect to the lapse of time not having been sufficient since our planet
was consolidated for the assumed amount of organic change, and this
objection, as argued by [Thomson], is probably one of the gravest yet
advanced, I can only say, firstly that we do not know at what rate species
change as measured by years, and secondly, that many philosophers are not
yet willing to admit that we know enough of the constitution of the universe
and of the interior of our globe to speculate with safety on its past duration.
…this seems to be one of the many cases in which the admitted accuracy of mathematical processes is allowed to throw a wholly inadmissible appearance of authority over the results obtained by them. Mathematics may be compared to a mill of exquisite workmanship, which grinds you stuff of any degree of fineness; but nevertheless, what you get out depends on what you put in; and as the grandest mill in the world will not extract wheat-flour from peascods, so pages of formulae will not get a definite result out of loose data.

(A variant of the adage: “garbage in equals garbage out”.)
However, Thomson’s estimates were the best available and for the next thirty
years geology took its time from physics, and biology took its time from
geology. But Thomson and his followers began to adjust his first estimate
down until at the end of the nineteenth century the best physical estimates of
the age of the Earth and Sun were about 20 million years whilst the minimum
the geologists could allow was closer to Thomson’s original 100 million years.
Then in 1904 Rutherford announced that the radioactive decay of radium was
accompanied by the release of immense amounts of energy and speculated that
this could replace the heat lost from the surface of the Earth.
A problem for the geologists was now replaced by a problem for the physicists.
The answer was provided by a theory which was just beginning to be gossiped
about. Einstein’s theory of relativity extended the principle of conservation of
energy by taking matter as a form of energy. It is the conversion of matter to
heat which maintains the Earth’s internal temperature and supplies the energy
radiated by the sun. The ratios of lead isotopes in the Earth compared to
meteorites now leads geologists to give the Earth an age of about 4.55 billion
years.
Attempts were made to provide underwater links between the various separate
systems. The first cable between Britain and France was laid in 1850. The
operators found the greatest difficulty in transmitting even a few words. After
12 hours a trawler accidentally caught and cut the cable. A second, more
heavily armoured cable was laid and it was a complete success. The short lines
worked, but the operators found that signals could not be transmitted along
submarine cables as fast as along land lines without becoming confused.
In spite of the record of the longer lines, the American Cyrus W. Field
proposed a telegraph line linking Europe and America. Oceanographic surveys
showed that the bottom of the Atlantic was suitable for cable laying. The
connection of existing land telegraph lines had produced a telegraph line of the
length of the proposed cable through which signals had been passed extremely
rapidly. The British government offered a subsidy and money was rapidly
raised.
Faraday had predicted signal retardation but he and others like Morse had in
mind a model of a submarine cable as a hosepipe which took longer to fill with
water (signal) as it got longer. The remedy was thus to use a thin wire (so that
less electricity was needed to charge it) and high voltages to push the signal
through. Faraday’s opinion was shared by the electrical adviser to the project,
Dr Whitehouse (a medical doctor).
In undersea cables of the type proposed, capacitive effects dominate and the
current is approximately governed by the diffusion (i.e. heat) equation. This
equation predicts that electric pulses will last for a time that is proportional to
the length of the cable squared. If two or more signals are transmitted within
this time, the signals will be jumbled at the receiver. In going from submarine
cables of 50 km length to cables of length 2400 km, retardation effects are
2500 times worse. Also, increasing the voltage makes this jumbling (called
intersymbol interference) worse. Finally, the diffusion equation shows that the
wire should have as large a diameter as possible (small resistance).
Realising that the success of the enterprise would depend on a fast, sensitive
detector, Thomson set about to invent one. The problem with an ordinary
galvanometer is the high inertia of the needle. Thomson came up with the
mirror galvanometer in which the pointer is replaced by a beam of light.
In a first attempt in 1857 the cable snapped after 540 km had been laid. In
1858, Europe and America were finally linked by cable. On 16 August it
carried a 99-word message of greeting from Queen Victoria to President
Buchanan. But that 99-word message took 16½ hours to get through. In vain,
Whitehouse tried to get his receiver to work. Only Thomson’s galvanometer
was sensitive enough to interpret the minute and blurred messages coming
through. Whitehouse ordered that a series of huge two thousand volt induction
coils be used to try to push the message through faster – after four weeks of
this treatment the cable failed completely.
In 1859 eighteen thousand kilometres of undersea cable had been laid in other
parts of the world, and only five thousand kilometres were operating. In 1861
civil war broke out in the United States. By 1864 Field had raised enough
capital for a second attempt. The cable was designed in accordance with
Thomson’s theories. Strict quality control was exercised: the copper was so
pure that for the next 50 years ‘telegraphist’s copper’ was the purest available.
Once again the British Government supported the project – the importance of
quick communication in controlling an empire was evident to everybody. The
new cable was mechanically much stronger but also heavier. Only one ship
was large enough to handle it and that was Brunel’s Great Eastern. She was
five times larger than any other existing ship.
This time there was a competitor. The Western Union Company had decided to
build a cable along the overland route across America, Alaska, the Bering
Straits, Siberia and Russia to reach Europe the long way round. The
commercial success of the cable would therefore depend on the rate at which
messages could be transmitted. Thomson had promised the company a rate of
8 or even 12 words a minute. Half a million pounds was being staked on the
correctness of the solution of a partial differential equation.
In 1865 the Great Eastern laid cable for nine days, but after 2000 km the cable
parted. After two weeks of unsuccessfully trying to recover the cable, the
expedition left a buoy to mark the spot and sailed for home. Since
communication had been perfect up until the final break, Thomson was
confident that the cable would do all that was required. The company decided
to build and lay a new cable and then go back and complete the old one.
Cable laying for the third attempt started on 12 July 1866 and the cable was
landed on the morning of the 27th. On the 28th the cable was open for business
and earned £1000. Western Union ordered all work on their project to be
stopped at a loss of $3 000 000.
For his work on the transatlantic cable Thomson was created Baron Kelvin of
Largs in 1892. The Kelvin is the river which runs through the grounds of
Glasgow University and Largs is the town on the Scottish coast where
Thomson built his house.
Other Achievements
Thomson worked on several problems associated with navigation – sounding
machines, lighthouse lights, compasses and the prediction of tides. Tides are
primarily due to the gravitational effects of the Moon, Sun and Earth on the
oceans, but their theoretical investigation, even in the simplest case of a
single ocean covering a rigid Earth to a uniform depth, is very hard. Even
today, the study of only slightly more realistic models is only possible by
numerical computer modelling. Thomson recognised that the forces affecting
the tides change periodically. He then approximated the height of the tide by
a trigonometric polynomial – a Fourier series with a finite number of terms.
The coefficients of the polynomial required calculation of the Fourier series
coefficients by numerical integration – a task that “…required not less than
twenty hours of calculation by skilled arithmeticians.” To reduce this labour
Thomson designed and built a machine which would trace out the predicted
height of the tides for a year in a few minutes, given the Fourier series
coefficients.

[Margin note: There are many other factors influencing local tides – such as
channel width – which produce phenomena akin to resonance in the tides. One
example of this is the Bay of Fundy, between Nova Scotia and New Brunswick,
where the tide can be as high as 21 m. In contrast, the Mediterranean Sea is
almost tideless because it is a broad body of water with a narrow entrance.]

Thomson also built another machine, called the harmonic analyser, to perform
the task “which seemed to the Astronomer Royal so complicated and difficult
that no machine could master it” of computing the Fourier series coefficients
from the record of past heights. This was the first major victory in the
struggle “to substitute brass for brain” in calculation.

[Margin note: Michelson (of Michelson-Morley fame) was to build a better
machine that used up to 80 Fourier series coefficients. The production of
‘blips’ at discontinuities by this machine was explained by Gibbs in two
letters to Nature. These ‘blips’ are now referred to as the “Gibbs
phenomenon”.]
Thomson worked in collaboration with Tait to produce the now famous text
Treatise on Natural Philosophy which they began working on in the early
1860s. Many volumes were intended but only two were ever written which
cover kinematics and dynamics. These became standard texts for many
generations of scientists.
During the first half of Thomson's career he seemed incapable of being wrong
while during the second half of his career he seemed incapable of being right.
This seems too extreme a view, but Thomson's refusal to accept atoms, his
opposition to Darwin's theories, his incorrect speculations as to the age of the
Earth and the Sun, and his opposition to Rutherford's ideas of radioactivity,
certainly put him on the losing side of many arguments later in his career.
William Thomson, Lord Kelvin, died in 1907 at the age of 83. He was buried
in Westminster Abbey in London where he lies today, adjacent to Isaac
Newton.
References
Burchfield, J.D.: Lord Kelvin and The Age of the Earth, Macmillan, 1975.
Körner, T.W.: Fourier Analysis, Cambridge University Press, 1988.
Encyclopedia Britannica, 2004.
Morrison, N.: Introduction to Fourier Analysis, John Wiley & Sons, Inc., 1994.
Thompson, S.P.: The Life of Lord Kelvin, London, 1976.
Kelvin & Tait: Treatise on Natural Philosophy, Appendix B.
Introduction

Since we can now represent signals in terms of a Fourier series (for periodic
signals) or a Fourier transform (for aperiodic signals), we seek a way to
describe a system in terms of frequency. That is, we seek a model of a linear,
time-invariant system governed by continuous-time differential equations that
expresses its behaviour with respect to frequency, rather than time. The
concept of a signal’s spectrum and a system’s frequency response will be seen
to be of fundamental importance in the frequency-domain characterisation of a
system.

[Margin note: We have a description of signals in the frequency domain – we
need one for systems.]
[Margin note: A sinusoid is just a sum of two complex conjugate
counter-rotating phasors.]

A sinusoid x(t) = A \cos(\omega_0 t + \phi) can be written as:

    x(t) = \frac{A}{2} e^{j\phi} e^{j\omega_0 t} + \frac{A}{2} e^{-j\phi} e^{-j\omega_0 t}
         = X e^{j\omega_0 t} + X^* e^{-j\omega_0 t}                        (3A.3)

where X = (A/2) e^{j\phi} is the phasor representing x(t). Inserting this into
Eq. (3A.1) gives:

    y(t) = \int_{-\infty}^{\infty} h(\lambda) \left[ X e^{j\omega_0 (t-\lambda)} + X^* e^{-j\omega_0 (t-\lambda)} \right] d\lambda
         = \left[ \int_{-\infty}^{\infty} h(\lambda) e^{-j\omega_0 \lambda} d\lambda \right] X e^{j\omega_0 t}
         + \left[ \int_{-\infty}^{\infty} h(\lambda) e^{j\omega_0 \lambda} d\lambda \right] X^* e^{-j\omega_0 t}   (3A.4)

This rather unwieldy expression can be simplified. First of all, if we take the
Fourier transform of the impulse response, we get:

    H(\omega) = \int_{-\infty}^{\infty} h(t) e^{-j\omega t} dt             (3A.5)

[Margin note: The Fourier transform of the impulse response appears in our
analysis…]

so that the response to the phasor component X e^{j\omega_0 t} is:

    H(\omega_0) X e^{j\omega_0 t}                                          (3A.6)

For a real impulse response h(t), the frequency response has the conjugate
symmetry:

    H(-\omega) = H^*(\omega)                                               (3A.7)

The phasor representing the output sinusoid is Y = H(\omega_0) X, and its
conjugate is:

    Y^* = H^*(\omega_0) X^* = H(-\omega_0) X^*                             (3A.9)

Adding the two counter-rotating phasor components of the output gives:

    y(t) = A \left| H(\omega_0) \right| \cos\left( \omega_0 t + \phi + \angle H(\omega_0) \right)   (3A.11)

Hence the response resulting from the sinusoidal input x(t) = A \cos(\omega_0 t + \phi)
is also a sinusoid with the same frequency \omega_0, but with the amplitude
scaled by the factor |H(\omega_0)| and the phase shifted by the amount
\angle H(\omega_0).

[Margin note: Only the amplitude and phase of the input sinusoid change –
according to the Fourier transform of the impulse response.]
There are two ways to get H(\omega). We can find the system impulse response
h(t) and take the Fourier transform, or we can find it directly from the
differential equation describing the system.

[Margin note: Two ways to find the frequency response.]
Example
For the simple RC circuit below, find the forced response to an arbitrary
sinusoid (this is also termed the sinusoidal steady-state response).
[Margin note: Finding the frequency response of a simple system.]

[Figure 3A.1: RC circuit – input v_i, series resistor R, shunt capacitor C,
with the output v_o taken across the capacitor]

Figure 3A.1

The circuit is described by the differential equation:

    \frac{dv_o(t)}{dt} + \frac{1}{RC} v_o(t) = \frac{1}{RC} v_i(t)         (3A.13)
which is obtained by KVL. Since the input is a sinusoid, which is really just a
sum of conjugate complex exponentials, we know from Eq. (3A.6) that if the
input is V_i = A e^{j\omega_0 t} then the output is V_o = H(\omega_0) A e^{j\omega_0 t}.
Note that V_i and V_o are complex numbers, and if the factor e^{j\omega_0 t}
were suppressed they would be phasors. Substituting the input and output into
Eq. (3A.13) gives:

    \frac{d}{dt}\left[ H(\omega_0) A e^{j\omega_0 t} \right] + \frac{1}{RC} H(\omega_0) A e^{j\omega_0 t} = \frac{1}{RC} A e^{j\omega_0 t}   (3A.14)

and thus:

    j\omega_0 H(\omega_0) A e^{j\omega_0 t} + \frac{1}{RC} H(\omega_0) A e^{j\omega_0 t} = \frac{1}{RC} A e^{j\omega_0 t}   (3A.15)

Cancelling the common factor A e^{j\omega_0 t}:

    \left( j\omega_0 + \frac{1}{RC} \right) H(\omega_0) = \frac{1}{RC}      (3A.16)

and therefore:

    H(\omega_0) = \frac{1/RC}{j\omega_0 + 1/RC}                             (3A.17)

Since the frequency \omega_0 was arbitrary, the frequency response at any
frequency \omega is:

    H(\omega) = \frac{1/RC}{j\omega + 1/RC}                                 (3A.18)

[Margin note: Frequency response of a lowpass RC circuit.]
This is the frequency response for the simple RC circuit. As a check, we know
that the impulse response is:

    h(t) = \frac{1}{RC} e^{-t/RC} u(t)                                      (3A.19)

Using your standard transforms, show that the frequency response is the
Fourier transform of the impulse response.
The magnitude of the frequency response is:

    \left| H(\omega) \right| = \frac{1/RC}{\sqrt{\omega^2 + (1/RC)^2}}      (3A.20)

[Margin note: Magnitude response of a lowpass RC circuit.]

[Figure 3A.2: |H(\omega)| falls from 1 at \omega = 0 through 1/\sqrt{2} at
\omega_0 and tends to 0 as \omega \to 2\omega_0 and beyond; the phase
\angle H(\omega) falls from 0° through -45° at \omega_0 and tends to -90°]

Figure 3A.2
[Margin note: Filter terminology defined.]

Since the magnitude response decreases as the frequency increases, the
circuit is an example of a lowpass filter. The frequency \omega_0 = 1/RC is
termed the cutoff frequency. The bandwidth of the filter is also equal to
\omega_0.
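The cutoff behaviour of Eq. (3A.18) is easy to verify numerically. The sketch below (Python with NumPy; MATLAB, as used in the exercises, would serve equally well) evaluates H(\omega) at the cutoff for the component values R = 1 kΩ and C = 1 nF used in the later example of Figure 3A.5. The helper function name is ours, not part of any library:

```python
import numpy as np

def rc_frequency_response(w, R=1e3, C=1e-9):
    """Eq. (3A.18): H(w) = (1/RC) / (jw + 1/RC) for the RC lowpass."""
    w0 = 1.0 / (R * C)                 # cutoff frequency in rad/s
    return w0 / (1j * w + w0)

w0 = 1.0 / (1e3 * 1e-9)                # 1e6 rad/s for R = 1 kOhm, C = 1 nF
H = rc_frequency_response(w0)

print(abs(H))                          # 1/sqrt(2): the half-power point
print(np.degrees(np.angle(H)))         # -45 degrees at the cutoff
```

At \omega = \omega_0 the gain is 1/\sqrt{2} (the -3 dB point) and the phase is -45°, matching Figure 3A.2.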
Periodic Inputs

For periodic inputs, we can express the input signal by a complex exponential
Fourier series:

    x(t) = \sum_{n=-\infty}^{\infty} X_n e^{j n \omega_0 t}                 (3A.22)

[Margin note: …which is just a Fourier series for a periodic signal.]
It follows from the previous section that the output response resulting from
the complex exponential input X_n e^{j n \omega_0 t} is equal to
H(n\omega_0) X_n e^{j n \omega_0 t}. By linearity, the total response is:

    y(t) = \sum_{n=-\infty}^{\infty} H(n\omega_0) X_n e^{j n \omega_0 t}    (3A.23)

Since the right-hand side is a complex exponential Fourier series, the output
y(t) must be periodic, with fundamental frequency equal to that of the input,
i.e. the output has the same period as the input.
It can be seen that the only thing we need to determine is the new Fourier
series coefficients, given by:

    Y_n = H(n\omega_0) X_n                                                  (3A.24)

[Margin note: The frequency response simply multiplies the input Fourier
series coefficients to produce the output Fourier series coefficients.]

The output magnitude spectrum is therefore:

    \left| Y_n \right| = \left| H(n\omega_0) \right| \left| X_n \right|     (3A.25)

and the output phase spectrum is:

    \angle Y_n = \angle H(n\omega_0) + \angle X_n                           (3A.26)

[Margin note: Don’t forget – the frequency response is just a frequency
dependent complex number.]

These relationships describe how the system “processes” the various complex
exponential components comprising the periodic input signal. In particular,
Eq. (3A.25) determines if the system will pass or attenuate a given component
of the input. Eq. (3A.26) determines the phase shift the system will give to a
particular component of the input.
Aperiodic Inputs

[Margin note: If we can do finite sinusoids, we can do infinitesimal
sinusoids too!]

Taking the Fourier transform of both sides of the time domain input/output
relationship of an LTI system:

    y(t) = h(t) * x(t)                                                      (3A.27)

[Margin note: Start with the convolution integral again, and transform to the
frequency domain.]

we get:

    Y(f) = \int_{-\infty}^{\infty} \left[ h(t) * x(t) \right] e^{-j 2\pi f t} dt   (3A.28)

Writing the convolution out explicitly and interchanging the order of
integration:

    Y(f) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(\lambda) x(t - \lambda) \, d\lambda \, e^{-j 2\pi f t} dt   (3A.29)

         = \int_{-\infty}^{\infty} h(\lambda) \left[ \int_{-\infty}^{\infty} x(t - \lambda) e^{-j 2\pi f t} dt \right] d\lambda   (3A.30)

         = \int_{-\infty}^{\infty} h(\lambda) e^{-j 2\pi f \lambda} X(f) \, d\lambda   (3A.31)

which is:
    Y(f) = H(f) X(f)                                                        (3A.33)

[Margin note: Convolution in the time-domain is multiplication in the
frequency-domain.]
Eq. (3A.33) is the frequency-domain version of the equation given by
Eq. (3A.27). It says that the spectrum of the output signal is equal to the
product of the frequency response and the spectrum of the input signal.

[Margin note: The output spectrum is obtained by multiplying the input
spectrum by the frequency response.]
Note that the frequency-domain description applies to all inputs that can be
Fourier transformed, including sinusoids if we allow impulses in the spectrum.
Periodic inputs are then a special case of Eq. (3A.33).
By similar arguments together with the duality property of the Fourier
transform, it can be shown that convolution in the frequency-domain is
equivalent to multiplication in the time-domain.

[Margin note: Convolution in the frequency-domain is multiplication in the
time-domain.]
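Eq. (3A.33) has a direct discrete analogue that is easy to check numerically: the DFT of a circular convolution equals the product of the DFTs. A small sketch in Python/NumPy, with arbitrary test vectors:

```python
import numpy as np

# Discrete check of Eq. (3A.33): the DFT of a (circular) convolution equals
# the product of the DFTs.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(64)

# Circular convolution computed directly in the time domain
y_time = np.array([sum(h[k] * x[(n - k) % 64] for k in range(64))
                   for n in range(64)])

# Same result via the frequency domain: Y = H X
y_freq = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

print(np.allclose(y_time, y_freq))  # the two methods agree
```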
Ideal Filters

[Margin note: A first look at frequency-domain descriptions – filters.]

Now that we have a feel for the frequency-domain description and behaviour
of a system, we will briefly examine a very important application of
electronic circuits – that of frequency selection, or filtering. Here we will
examine ideal filters – the topic of real filter design is rather involved.

Ideal filters pass sinusoids within a given frequency range, and reject
(completely attenuate) all other sinusoids. An example of an ideal lowpass
filter is shown below:
[Figure 3A.3: ideal lowpass filter – |H| = 1 over the passband from 0 to the
cutoff \omega_0, and 0 over the stopband beyond \omega_0]

Figure 3A.3
[Margin note: Filter types.]

Other basic types of filters are highpass, bandpass and bandstop. All have
similar definitions as given in Figure 3A.3. Frequencies that are passed are
said to be in the passband, while those that are rejected lie in the
stopband. The point where passband and stopband meet is called \omega_0, the
cutoff frequency.
An ideal lowpass filter with gain K and bandwidth B therefore has the
magnitude response:

    \left| H(f) \right| = K \, \mathrm{rect}\!\left( \frac{f}{2B} \right)   (3A.36)
Most filter specifications deal with the magnitude response. In systems where
the filter is designed to pass a particular “wave shape”, phase response is
extremely important. For example, in a digital system we may be sending 1’s
and 0’s using a specially shaped pulse that has “nice” properties, e.g. its
value is zero at the centre of all other pulses. At the receiver it is passed
through a lowpass filter to remove high frequency noise. The filter
introduces a delay of D seconds, but the output of the filter is as close as
possible to the desired pulse shape.

[Margin note: A particular phase response is crucial for retaining a signal’s
“shape”.]
[Margin note: A filter introducing delay, but retaining signal “shape”.]

[Figure 3A.4: a pulse v_i(t) occupying 0 to 1 passes through a lowpass filter
and emerges as v_o(t), delayed by D but with its shape retained]

Figure 3A.4
To the pulse, the filter just looks like a delay. We can see that
distortionless transmission through a filter is characterised by a constant
delay of the input signal:

    v_o(t) = K v_i(t - D)                                                   (3A.37)

[Margin note: Distortionless transmission defined.]

In Eq. (3A.37) we have also included the fact that all frequencies in the
passband of the filter can have their amplitudes multiplied by a constant
without affecting the wave shape. Also note that Eq. (3A.37) applies only to
frequencies in the passband of a filter – we do not care about any distortion
in a stopband.
For a single sinusoidal input:

    v_i = A \cos(\omega t)                                                  (3A.38)

the distortionless output is:

    v_o = K A \cos\left( \omega (t - D) \right)
        = K A \cos(\omega t - \omega D)                                     (3A.39)

The input and output signals differ only by the gain K and the phase angle,
which is:

    \phi = -\omega D                                                        (3A.40)

[Margin note: Distortionless transmission requires a linear phase.]

That is, the phase response must be a straight line with negative slope that
passes through the origin.
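The claim of Eqs. (3A.37)–(3A.40) – that flat magnitude plus linear phase is nothing but a delay – can be sketched numerically. Here a hypothetical smooth test pulse is passed through H(f) = e^{-j2\pi f D} with D chosen as exactly 5 samples; the output is the input circularly shifted, shape intact:

```python
import numpy as np

# A filter with flat magnitude and linear phase H(f) = exp(-j 2 pi f D)
# should delay any input without changing its shape (Eq. 3A.37 with K = 1).
N, fs = 128, 128.0            # samples and sample rate (example choices)
D = 5 / fs                    # a delay of exactly 5 samples
t = np.arange(N) / fs
x = np.exp(-((t - 0.3) ** 2) / 0.002)    # a smooth test pulse

f = np.fft.fftfreq(N, 1 / fs)
H = np.exp(-1j * 2 * np.pi * f * D)      # linear phase, |H| = 1
y = np.fft.ifft(np.fft.fft(x) * H).real

print(np.allclose(y, np.roll(x, 5)))     # shape kept, just shifted
```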
In general, the requirement for the phase response in the passband to achieve
distortionless transmission through the filter is:

    D = -\frac{d\phi}{d\omega}                                              (3A.41)

[Margin note: Group delay defined.]

The delay D in this case is referred to as the group delay. (This means the
group of sinusoids that make up the wave shape have a delay of D). An ideal
lowpass filter with linear phase therefore has the frequency response:

    H(f) = K \, \mathrm{rect}\!\left( \frac{f}{2B} \right) e^{-j 2\pi f D}  (3A.42)
Example

[Margin note: Model of a short piece of co-axial cable, or twisted pair.]

[Figure 3A.5: RC lowpass circuit with R = 1 kΩ and C = 1 nF, so that
\omega_0 = 1/RC]

Figure 3A.5

We would like to know the range of frequencies over which transmission
through this circuit is practically distortionless. We would also like to
know the group delay caused by this filter. The magnitude and phase responses
are:

    \left| H(\omega) \right| = \frac{1}{\sqrt{1 + (\omega/\omega_0)^2}}     (3A.43a)

    \angle H(\omega) = -\tan^{-1}\!\left( \frac{\omega}{\omega_0} \right)   (3A.43b)

[Figure 3A.6: |H(\omega)| versus \omega/\omega_0, falling from 1; and
\angle H(\omega), falling from 0 through -\pi/4 at \omega_0 toward -\pi/2,
compared with the ideal linear phase of slope -1/\omega_0]

Figure 3A.6
For the magnitude response to be within 1% of its DC value, we require:

    \frac{1}{\sqrt{1 + (\omega/\omega_0)^2}} \geq 0.99                      (3A.44)

which gives:

    \omega \leq 0.1425 \, \omega_0

The group delay is:

    D = -\frac{d\phi}{d\omega} = \frac{1}{\omega_0} \cdot \frac{1}{1 + (\omega/\omega_0)^2}   (3A.45)

For the group delay to be within 1% of its low-frequency value, we require:

    \frac{1}{1 + (\omega/\omega_0)^2} \geq 0.99                             (3A.46)

which gives:

    \omega \leq 0.1005 \, \omega_0

We can see from Eqs. (3A.44) and (3A.46) that we must have \omega \leq 0.1\omega_0
for practically distortionless transmission. The delay for \omega \leq 0.1\omega_0,
according to Eq. (3A.45), is approximately given by:

    D \approx \frac{1}{\omega_0}                                            (3A.47)

[Margin note: Approximate group delay for a first-order lowpass circuit.]

For the values shown in Figure 3A.5, the group delay is approximately 1 μs. In
practice, variations in the magnitude transfer function up to the half-power
frequency are considered tolerable (this is the bandwidth, BW, of the filter).
Over this range of frequencies, the phase deviates from the ideal linear
characteristic by at most |\pi/4 - 1| \approx 0.2146 radians (see Figure 3A.6).
Frequencies well below \omega_0 are transmitted practically without
distortion, but frequencies in the vicinity of \omega_0 and above suffer both
magnitude and phase distortion.
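A numerical check of Eqs. (3A.45)–(3A.47), using the component values of Figure 3A.5 (R = 1 kΩ, C = 1 nF), is sketched below. The group delay is computed as a numerical derivative of the phase response:

```python
import numpy as np

# Group delay D = -d(phase)/d(omega) for the RC lowpass, w0 = 1/RC = 1e6 rad/s
R, C = 1e3, 1e-9
w0 = 1.0 / (R * C)

w = np.linspace(0, 0.1 * w0, 2001)          # passband frequencies
phase = -np.arctan(w / w0)                  # Eq. (3A.43b)
D = -np.gradient(phase, w)                  # numerical -dphi/dw

print(D[0])                                  # about 1e-6 s = 1 us
print(np.max(np.abs(D - 1 / w0)) / (1 / w0)) # under 1.1% variation here
```

The low-frequency group delay is RC = 1 μs, and over 0 ≤ ω ≤ 0.1ω₀ it varies by only about 1%, in agreement with Eq. (3A.46).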
The ideal filter is unrealizable. To show this, take the inverse Fourier
transform of the ideal filter’s frequency response, Eq. (3A.42):

    h(t) = 2BK \, \mathrm{sinc}\!\left( 2B(t - D) \right)                   (3A.48)

[Margin note: An ideal filter is unrealizable because it is non-causal.]

It should be clear that the impulse response is not zero for t < 0, and the
filter is therefore not causal (how can there be a response to an impulse at
t = 0 before it is applied?). One way to design a real filter is simply to
multiply Eq. (3A.48) by u(t), the unit step.
[Figure 3A.7: a signal x(t) passes through a filter H(f) to give y(t) – e.g.
for the RC circuit, H(f) = 1 / (j 2\pi f RC + 1)]

Figure 3A.7

The output spectrum is:

    Y(f) = H(f) X(f)                                                        (3A.49)
For a periodic input, the spectrum is a weighted train of impulses:

    X(f) = \sum_{n=-\infty}^{\infty} X_n \, \delta(f - n f_0)               (3A.50)

and therefore:

    Y(f) = \sum_{n=-\infty}^{\infty} H(n f_0) X_n \, \delta(f - n f_0)      (3A.51)

[Margin note: Spectrum of a filtered periodic signal.]
Example

A periodic signal has the Fourier series components given below:

    n    Amplitude    Phase
    0    1
    1    2            -30°
    2    3            -90°

What is the Fourier series of the output if the signal is passed through a
filter with transfer function H(f) = j 4\pi f? The period is 2 seconds.

There are 3 components in the input signal with frequencies 0, 0.5 Hz and
1 Hz. The complex gain of the filter at each frequency is:

    n    H(n f_0)          Magnitude    Phase
    0    0                 0
    1    j 4\pi (0.5)      2\pi         90°
    2    j 4\pi (1)        4\pi         90°

Multiplying magnitudes and adding phases (Eqs. (3A.25) and (3A.26)) gives the
output Fourier series:

    n    Amplitude    Phase
    0    0
    1    4\pi         60°
    2    12\pi        0°
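The working above can be reproduced with Eq. (3A.24). The sketch below takes the filter to be H(f) = j4\pi f, as in the example, and multiplies each input coefficient by the filter's complex gain:

```python
import numpy as np

# Output Fourier series coefficients via Y_n = H(n f0) X_n (Eq. 3A.24),
# for input components (amplitude, phase) at n = 0, 1, 2 with f0 = 0.5 Hz
# and filter H(f) = j*4*pi*f.
f0 = 0.5
amps = np.array([1.0, 2.0, 3.0])
phases_deg = np.array([0.0, -30.0, -90.0])

X = amps * np.exp(1j * np.radians(phases_deg))
H = 1j * 4 * np.pi * f0 * np.arange(3)      # H(n f0) for n = 0, 1, 2
Y = H * X

print(np.abs(Y))                  # amplitudes 0, 4*pi, 12*pi
print(np.degrees(np.angle(Y)))    # phases 60 deg and 0 deg for n = 1, 2
```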
Example

Suppose the same signal was sent through a filter with transfer function as
sketched:

[Figure 3A.8: sketched transfer function – |H(f)| equal to 1 for |f| ≤ 0.5 Hz,
falling to 0.5 at |f| = 1 Hz; \angle H(f) linear in f, passing through -90°
at 0.5 Hz and -180° at 1 Hz]

Figure 3A.8

Reading the magnitude and phase of H(f) from the sketch at 0, 0.5 Hz and
1 Hz, and applying Eqs. (3A.25) and (3A.26), gives the output Fourier series:

    n    Amplitude    Phase
    0    1
    1    2            -120°
    2    1.5          -270°
Sampling

Sampling is one of the most important operations we can perform on a signal.
Samples can be quantized and then operated upon digitally (digital signal
processing). Once processed, the samples are turned back into a
continuous-time waveform (e.g. CD, mobile phone!). Here we demonstrate how,
if certain parameters are right, a sampled signal can be reconstructed from
its samples almost perfectly.

[Margin note: Sampling is one of the most important things we can do to a
continuous-time signal – because we can then process it digitally.]
[Margin note: An ideal sampler multiplies a signal by a uniform train of
impulses.]

[Figure 3A.9: an ideal sampler forms the product g_s(t) = g(t) p(t), where
p(t) is a uniform train of impulses]

Figure 3A.9

Ideal sampling is therefore described by:

    g_s(t) = g(t) \sum_{k=-\infty}^{\infty} \delta(t - k T_s)               (3A.52)
[Margin note: An ideal sampler produces a train of impulses – each impulse is
weighted by the original signal.]

[Figure 3A.10: the signal g(t), the impulse train p(t) with spacing T_s, and
the sampled signal g_s(t) – a train of impulses whose weights follow g(t)]

Figure 3A.10
That is, for ideal sampling, the original signal forms an envelope for the train
of impulses, and we have simply generated a weighted train of impulses, where
each weight takes on the signal value at that particular instant of time.
In the frequency domain, the spectrum of the sampled signal is the original
spectrum convolved with an impulse train:

    G_s(f) = G(f) * f_s \sum_{n=-\infty}^{\infty} \delta(f - n f_s)
           = f_s \sum_{n=-\infty}^{\infty} G(f - n f_s)                     (3A.53)

[Figure 3A.11: the spectrum P(f) of the sampling impulse train – impulses of
weight f_s at multiples of f_s]

Figure 3A.11
That is, the sampled spectrum consists of replicas of the original,
periodically repeated along the frequency axis. Spacing between repeats is
equal to the sampling frequency, f_s (the inverse of the sampling interval).

We can also consider the dual operation – sampling a spectrum in the
frequency domain:

[Margin note: An ideally sampled spectrum is a train of impulses – each
impulse is weighted by the original spectrum.]

[Figure 3A.12: an ideally sampled spectrum S(f) – a train of impulses whose
weights follow the envelope G(f)]

Figure 3A.12
Taking the inverse Fourier transform of the ideally sampled spectrum shows
that the corresponding time-domain signal is periodic:

    s(t) = F^{-1}\{S(f)\} = g(t) * T_0 \sum_{k=-\infty}^{\infty} \delta(t - k T_0)   (3A.54)

[Figure 3A.13: the periodic time-domain signal s(t), repeating with period
T_0]

Figure 3A.13
Reconstruction

If a sampled signal g_s(t) is applied to an ideal lowpass filter of bandwidth
B, the only component of the spectrum G_s(f) that is passed is just the
original spectrum G(f).

[Margin note: We recover the original spectrum by lowpass filtering.]

[Figure 3A.14: the sampled spectrum G_s(f) passes through an ideal lowpass
filter to give G(f)]

Figure 3A.14

[Figure 3A.15: the reconstruction filter – H(f) = 1/f_s over the passband
-B < f < B]

Figure 3A.15
Hence the time-domain output of the filter is equal to g(t), which shows that
the original signal can be completely and exactly reconstructed from the
sampled waveform g_s(t).

[Margin note: A weighted train of impulses turns back into the original
signal after lowpass filtering… an operation that is not so easy to see in
the time-domain!]

[Figure 3A.16: the impulse train g_s(t) passes through the lowpass filter and
emerges as the smooth original signal g(t)]

Figure 3A.16
There are some limitations to perfect reconstruction though. One is that
time-limited signals are not bandlimited (e.g. a rect time-domain waveform
has a spectrum which is a sinc function, which has infinite frequency
content). Any time-limited signal therefore cannot be perfectly
reconstructed, since there is no sample rate high enough to ensure repeats of
the original spectrum do not overlap. However, many signals are essentially
bandlimited, which means spectral components higher than, say B, do not make
a significant contribution to either the shape or energy of the signal.

[Margin note: We can’t sample and reconstruct perfectly, but we can get
close!]
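Reconstruction can be sketched numerically. Under assumed example values (signal content below 2 Hz, a 10 Sa/s rate comfortably above the Nyquist rate), the samples are interpolated with sinc pulses, the time-domain footprint of the ideal lowpass reconstruction filter:

```python
import numpy as np

# Sinc-interpolation reconstruction of a bandlimited signal sampled above
# the Nyquist rate. The finite sample range is long enough that truncation
# of the sinc sum is negligible over the plotted interval.
fs = 10.0                                  # sample rate (Sa/s)
Ts = 1 / fs
g = lambda t: np.cos(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 0.7 * t)

k = np.arange(-500, 501)                   # sample indices
samples = g(k * Ts)

t = np.linspace(-1, 1, 101)                # fine grid for reconstruction
g_r = np.array([np.sum(samples * np.sinc((tt - k * Ts) / Ts)) for tt in t])

print(np.max(np.abs(g_r - g(t))))          # small reconstruction error
```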
Aliasing

We saw that sampling in one domain implies periodicity in the other. If the
function being made periodic has an extent that is smaller than the period,
there will be no resulting overlap, and hence it will be possible to recover
the continuous (unsampled) function by windowing out just one period from the
domain displaying periodicity.

[Margin note: We have to ensure no spectral overlap when sampling.]

For a lowpass signal of bandwidth B, this requires a sample rate of at least
twice the bandwidth:

    f_s \geq 2B                                                             (3A.55)
To illustrate aliasing, consider the case where we have failed to select the
sample rate higher than twice the bandwidth, B, of a lowpass signal. The
sampled spectrum is shown below, where the repeats of the original spectrum
now overlap:

[Figure 3A.17: the sampled spectrum X_s(f) for f_s < 2B – adjacent repeats
overlap, and high-frequency components are “folded over” about the fold-over
frequency f_s / 2]

Figure 3A.17
Lowpass filtering such a spectrum cannot undo the overlap:

[Figure 3A.18: the “reconstructed” spectrum X_r(f), corrupted by fold-over,
compared with the original X(f)]

Figure 3A.18
[Margin note: How to avoid aliasing.]

To summarise – we can avoid aliasing by either:

1. Selecting a sample rate higher than twice the bandwidth of the signal
(equivalent to saying that the fold-over frequency is greater than the
bandwidth of the signal); or

2. By bandlimiting (using a filter) the signal so that its bandwidth is less
than half the sample rate.
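Aliasing is easy to demonstrate with two tones (the frequencies below are example choices): a 7 kHz sinusoid sampled at 10 kSa/s violates f_s > 2B and produces exactly the same samples as a 3 kHz sinusoid, its fold-over about f_s/2 = 5 kHz:

```python
import numpy as np

# An under-sampled 7 kHz tone is indistinguishable from its 3 kHz alias.
fs = 10_000.0
n = np.arange(1000)
t = n / fs

x_high = np.cos(2 * np.pi * 7_000 * t)   # 7 kHz tone, fs < 2 * 7 kHz
x_alias = np.cos(2 * np.pi * 3_000 * t)  # its alias at 10 - 7 = 3 kHz

print(np.allclose(x_high, x_alias))      # the sample sequences are identical
```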
A practical system therefore places an anti-alias filter (AAF) before the
sampler:

[Figure 3A.19: the input spectrum V_i(f), with content beyond f_s/2, passes
through an anti-alias filter H(f) – an ideal lowpass with cutoff f_s/2 – to
give V_o(f), bandlimited to f_s/2]

Figure 3A.19

If the system is designed correctly, then the high frequency components of
the original signal that are rejected by the AAF are “not important” in that
they carry little energy and/or information (e.g. noise). Sampling then takes
place at a rate of f_s Sa/s. The reconstruction filter has the same bandwidth
as the AAF, as shown below:
[Figure 3A.20: the message g(t) and its bandlimited spectrum G(f), extending
from -B to B]

Figure 3A.20

[Figure 3A.21: sampling and reconstruction, in time and frequency – the
impulse train s(t) with spacing T_s and its spectrum S(f), impulses of weight
f_s; the sampled signal g_s(t) and its spectrum G_s(f), a periodic repeat of
G(f) scaled by f_s (peak A f_s); the reconstruction filter impulse response
h(t), with H(f) of gain 1/f_s and cutoff f_s/2; and the reconstructed signal
g_r(t) with spectrum G_r(f) = G(f) (peak A)]

Figure 3A.21
Consider next how the Fourier series coefficients of a periodic signal are
related to the Fourier transform of one of its periods. Start with a single
period of the waveform:

[Figure 3A.22: a single pulse g_1(t), of unit height, occupying 0 to T_1]

Figure 3A.22

[Margin note: Convolved with a uniform impulse train…]

[Figure 3A.23: a uniform train of unit-weight impulses spaced T_0 apart]

Convolving one period with a uniform impulse train produces the periodic
waveform:

    g_p(t) = g_1(t) * \sum_{k=-\infty}^{\infty} \delta(t - k T_0)           (3A.56)

[Figure 3A.24: the periodic waveform g_p(t) – the pulse g_1(t) repeated every
T_0]

Figure 3A.24

Taking the Fourier transform of both sides, and using the fact that
convolution in time is multiplication in frequency:

    F\{g_p(t)\} = F\{g_1(t)\} \cdot F\left\{ \sum_{k} \delta(t - k T_0) \right\}
                = G_1(f) \cdot f_0 \sum_{n=-\infty}^{\infty} \delta(f - n f_0)   (3A.57)

The weight of the impulse at f = n f_0 is therefore f_0 G_1(n f_0), which
gives the Fourier series coefficients:

    G_n = f_0 \, G_1(n f_0)                                                 (3A.58)

[Margin note: Fourier series coefficients from the Fourier transform of one
period.]

[Figure 3A.25: the transform G_1(f) of one period – a sinc-like function with
nulls at multiples of f_1 = 1/T_1; the impulse train in the frequency domain
with spacing f_0; and the resulting spectrum G_p(f) – impulses of weight
f_0 G_1(n f_0), with peak weight f_0 T_1]

Figure 3A.25
According to Eq. (3A.58), the Fourier series coefficients are just the weights of
the impulses in the spectrum of the periodic function. To get the nth Fourier
series coefficient, use the weight of the impulse located at nf 0 .
This is in perfect agreement with the concept of a continuous spectrum. Each Remember that
pairs of impulses in
frequency has an infinitesimal amplitude sinusoid associated with it. If an a spectrum
represent a sinusoid
impulse exists at a certain frequency, then there is a finite amplitude sinusoid at in the time-domain
that frequency.
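Eq. (3A.58) can be checked against a direct numerical evaluation of the Fourier series integral. The sketch below assumes a unit-height pulse of width T₁ = 0.25 s repeated with period T₀ = 1 s (example values, not from the notes):

```python
import numpy as np

# Check G_n = f0 * G1(n f0) (Eq. 3A.58) for a unit-height pulse of width T1
# repeated with period T0, against the direct Fourier series integral.
T1, T0 = 0.25, 1.0
f0 = 1 / T0

def G1(f):
    # Fourier transform of one period: a pulse from 0 to T1 has
    # G1(f) = T1 sinc(f T1) exp(-j pi f T1)   (np.sinc is sin(pi x)/(pi x))
    return T1 * np.sinc(f * T1) * np.exp(-1j * np.pi * f * T1)

t = np.linspace(0, T0, 200_000, endpoint=False)
gp = (t < T1).astype(float)               # one period of the pulse train

for n in range(5):
    # Direct coefficient: Gn = (1/T0) * integral of gp * exp(-j 2 pi n f0 t)
    Gn_direct = np.mean(gp * np.exp(-1j * 2 * np.pi * n * f0 * t))
    Gn_formula = f0 * G1(n * f0)
    assert abs(Gn_direct - Gn_formula) < 1e-3
print("Eq. (3A.58) agrees with the direct integral for n = 0..4")
```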
[Margin note: Windowing defined.]

Windowing in the time domain is equivalent to multiplying the original signal
g(t) by a function which is non-zero over the window interval and zero
elsewhere. So the spectrum of the windowed signal is the original spectrum
convolved with the Fourier transform of the window.
Example

Find the Fourier transform of sin(2\pi t) when it is viewed through a
rectangular window from 0 to 1 second:

[Margin note: A rectangular window applied to a sinusoid.]

[Figure 3A.26: one period of sin(2\pi t), seen through a unit-height window
from 0 to 1 second]

Figure 3A.26

The spectrum of the un-windowed sinusoid is a pair of impulses at ±1 Hz; the
spectrum of the 1 second window is a sinc with nulls spaced 1 Hz apart.
Convolving the two gives the spectrum of the windowed sinusoid:

[Figure 3A.27: |G(f)| for the 1 second window – sinc-shaped lobes of peak
value 0.5 centred on ±1 Hz]

Figure 3A.27

If the window is made longer, each lobe becomes narrower and taller:

[Figure 3A.28: |G(f)| for a longer window – narrower lobes of peak value 2,
centred on ±1 Hz]

Figure 3A.28

Obviously, the longer the window, the more accurate the spectrum becomes.
Example

Find the Fourier transform of sinc(t) when it is viewed through a window from
-2 to 2 seconds:

[Margin note: Viewing a sinc function through a rectangular window.]

[Figure 3A.29: the waveform g_w(t) – a sinc function, visible only over the
window from -2 to 2 seconds]

Figure 3A.29

We have:

    g_w(t) = \mathrm{sinc}(t) \, \mathrm{rect}\!\left( \frac{t}{4} \right)  (3A.62)

and the spectrum is the convolution of rect(f) (the transform of the sinc)
with 4 sinc(4f) (the transform of the window):

[Margin note: Ripples in the spectrum caused by a rectangular window.]

[Figure 3A.30: the ideal rect(f) spectrum, the window’s sinc spectrum, and
their convolution – a rect with rounded corners and ripples]

Figure 3A.30
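The ripples can be reproduced numerically. The sketch below approximates the Fourier transform of the windowed sinc with a dense FFT (the step size and the in-band region are example choices):

```python
import numpy as np

# Spectrum of sinc(t) truncated to |t| <= 2 s, approximated by a dense FFT.
# The untruncated sinc would give a clean rect(f); the window leaves ripples
# about the ideal in-band value of 1.
dt = 0.01
t = np.arange(-2, 2, dt)
g_w = np.sinc(t)                           # the windowed sinc of Eq. (3A.62)

G = np.fft.fftshift(np.fft.fft(g_w)) * dt  # approximate Fourier transform
f = np.fft.fftshift(np.fft.fftfreq(len(t), dt))

in_band = np.abs(f) < 0.3                  # well inside the passband |f| < 0.5
ripple = np.max(np.abs(np.abs(G[in_band]) - 1))
print(ripple)                              # non-zero ripple due to the window
```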
Two physical operations we can do on signals are multiplication (with another
signal) and filtering (with a filter with a defined transfer function).

[Margin note: Multiplication in the time-domain is convolution in the
frequency-domain.]

Multiplication in the time domain corresponds to convolution in the frequency
domain:

[Figure 3A.31: a multiplier forms x(t) y(t) from the inputs x(t) and y(t)]

Figure 3A.31

Filtering corresponds to convolution in the time domain:

[Figure 3A.32: a filter specified by H(f) or h(t) forms y(t) = h(t) * x(t)
from the input x(t)]

Figure 3A.32
Summary
Sinusoidal inputs to linear time-invariant systems yield sinusoidal outputs.
The output sinusoid is related to the input sinusoid by a complex-valued
function known as the frequency response, H f .
The output of an LTI system due to any input signal is obtained most easily
by considering the spectrum: Y f H f X f . This expresses the
important property: convolution in the time-domain is equivalent to
multiplication in the frequency-domain.
Filters are devices best thought about in the frequency-domain. They are
frequency selective devices, changing both the magnitude and phase of
frequency components of an input signal to produce an output signal.
References
Haykin, S.: Communication Systems, John-Wiley & Sons, Inc., New York,
1994.
Quiz

Encircle the correct answer, cross out the wrong answers. [one or none correct]

1.
The convolution of x(t) and y(t) is given by:

(a) \int_{-\infty}^{\infty} x(\lambda) y(t - \lambda) d\lambda
(b) \int_{-\infty}^{\infty} x(\lambda) y(t + \lambda) d\lambda
(c) \int_{-\infty}^{\infty} X(\lambda) Y(f - \lambda) d\lambda

2.
[Figure: a pulse v_i passes through an ideal filter to give a smoothed,
delayed pulse v_o] The Fourier transform of the impulse response of the
filter resembles:

(a) [sketch versus f]   (b) [sketch versus t]   (c) [sketch versus f]

3.
The Fourier transform of one period of a periodic waveform is G(f). The
Fourier series coefficients, G_n, are given by: [options not recoverable]

4.
[Figure: two pulses x(t) and y(t)] The peak value of the convolution is:
[options not recoverable]

5.
(a) g(at) \leftrightarrow \frac{1}{a} G\!\left(\frac{f}{a}\right)
(b) g(at) \leftrightarrow G\!\left(\frac{f}{a}\right)
(c) a g(t) \leftrightarrow a G\!\left(\frac{f}{a}\right)

Answers: 1. a   2. c   3. c   4. x   5. x
Exercises
1.
Calculate the magnitude and phase of the 4 kHz component in the spectrum of
the periodic pulse train shown below. The pulse repetition rate is 1 kHz.
[Waveform x(t): a periodic pulse train with period 1 ms, taking the values 10
and -5; time axis marked at 0.1, 0.4, 0.5, 0.6, 0.9 and 1 ms]
2.
By relating the triangular pulse shown below to the convolution of a pair of
identical rectangular pulses, deduce the Fourier transform of the triangular
pulse:
[Waveform x(t): a triangular pulse of peak value A, centred at t = 0]
3.
The pulse x(t) = 2B \, \mathrm{sinc}(2Bt) \, \mathrm{rect}(Bt/8) has ripples
in the amplitude spectrum. Why?
4.
A signal is to be sampled with an ideal sampler operating at 8000 samples per
second. Assuming an ideal low pass anti-aliasing filter, how can the sampled
signal be reconstituted in its original form, and under what conditions?
5.
A train of impulses in one domain implies what in the other?
6.
The following table gives information about the Fourier series of a periodic
waveform, g t , which has a period of 50 ms.
Table 1

    Harmonic #    Amplitude    Phase (°)
    0             1
    1             3            -30
    2             1            -30
    3             1            -60
    4             0.5          -90

(a) Give the frequencies of the fundamental and the 2nd harmonic. What is
the signal power, assuming that g(t) is measured across 50 Ω?
(b) Express the third harmonic as a pair of counter-rotating phasors. What is
the value of the third harmonic at t = 20 ms?

(c) The periodic waveform is passed through a filter with transfer function
H(f) as shown below.

[Sketch: |H(f)|, with the levels 1 and 0.5 marked, and \angle H(f)]

Draw up a table in the same form as Table 1 of the Fourier series of the
output waveform. Is there a DC component in the output of the filter?
7.
A signal, bandlimited to 1 kHz, is sampled by multiplying it with a
rectangular pulse train with repetition rate 4 kHz and pulse width 50 μs. Can
the original signal be recovered without distortion, and if so, how?
8.
A signal is to be analysed to identify the relative amplitudes of components
which are known to exist at 9 kHz and 9.25 kHz. To do the analysis a digital
storage oscilloscope takes a record of length 2 ms and then computes the
Fourier series. The 18th harmonic thus computed can be non-zero even when
no 9 kHz component is present in the input signal. Explain.
9.
Use MATLAB® to determine the output of a simple RC lowpass filter subjected
to a square wave input given by:

    x(t) = \sum_{n=-\infty}^{\infty} \mathrm{rect}(t - 2n)
Introduction

The modern world cannot exist without communication systems. A model of a
typical communication system is:

[Margin note: A communication system.]

[Figure 3B.1: block diagram – a source produces a message which an input
transducer converts to an electrical input signal; the transmitter sends the
transmitted signal over a channel, where distortion and noise are added; the
receiver processes the received signal, and an output transducer converts
the output signal for delivery to the destination]

Figure 3B.1
The source originates the message, such as a human voice, a television
picture, or data. If the message is not electrical in nature, it must be
converted by an input transducer into an electrical waveform referred to as
the baseband signal or the message signal.
The receiver processes the signal from the channel by undoing the signal
modifications made at the transmitter and by the channel.
The receiver output is fed to the output transducer, which converts the
electrical signal to its original form.
A channel acts partly as a filter – it attenuates the signal and distorts the
waveform. Attenuation of the signal increases with channel length. The
waveform is distorted because of different amounts of attenuation and phase
shift suffered by different frequency components of the signal. This type of
distortion, called linear distortion, can be partly corrected at the receiver by an
equalizer with gain and phase characteristics complementary to those of the
channel.
The channel may also cause nonlinear distortion through attenuation that
varies with the signal amplitude. Such distortion can also be partly corrected
by a complementary equalizer at the receiver.
The signal is not only distorted by the channel, but it is also contaminated
along the path by undesirable signals lumped under the broad term noise –
random and unpredictable signals arising from external and internal causes.
External noise includes interference from signals transmitted on nearby
channels, human-made noise generated by electrical equipment (such as motor
drives, combustion engine ignition radiation, fluorescent lighting) and natural
noise (electrical storms, solar and intergalactic radiation). Internal noise results
from thermal motion of electrons in conductors, diffusion or recombination of
charged carriers in electronic devices, etc. Noise is one of the basic factors that
sets a limit on the rate of communication.
Modulation
Baseband signals produced by various information sources are not always
suitable for direct transmission over a given channel. These signals are
modified to facilitate transmission. This conversion process is known as
modulation. In this process, the baseband signal is used to modify some
parameter of a high-frequency carrier signal.
The figure below shows a baseband signal m(t) and the corresponding AM
and FM waveforms:
[The carrier; the amplitude-modulated wave; and the frequency-modulated wave (amplitude v against time).]
Figure 3B.2
At the receiver, the modulated signal must pass through a reverse process
called demodulation in order to retrieve the baseband signal.
Ease of Radiation
In DSB-SC transmission, the signal x(t) and the carrier cos(2πfc t) are simply multiplied together:

Double side-band – suppressed carrier (DSB-SC) modulation
[Figure 3B.3 – x(t) is applied to a multiplier together with a local oscillator cos(2πfc t) to give the modulated signal y(t) = x(t)cos(2πfc t).]
Figure 3B.3

The local oscillator in Figure 3B.3 is a device that produces the sinusoidal signal cos(2πfc t). The multiplier is implemented with a non-linear device. The spectrum of the modulated signal is:

Y(f) = ½[X(f − fc) + X(f + fc)]   (3B.1)
The spectrum of the modulated signal is a replica of the signal spectrum but
“shifted up” in frequency. If the signal has a bandwidth equal to B then the
modulated signal spectrum has an upper sideband from f c to f c B and a
-2 -1 0 1 2 t modulated signal -2 -1 1 2 t
0
Local
Oscillator cos(2 f ct )
-2 -1 1 2 t
0
Figure 3B.4
Figure 3B.5
Modulation lets us share the spectrum, and achieves practical propagation
The higher frequency range of the modulated signal makes it possible to achieve good propagation in transmission through a cable or the atmosphere. It also allows the “spectrum” to be shared by independent users – e.g. radio, TV, mobile phone etc.
Demodulation
The reconstruction of x(t) from x(t)cos(2πfc t) is called demodulation. There are many ways to demodulate a signal; here we will consider one common method called synchronous or coherent demodulation.
Coherent demodulation of a DSB-SC signal
[Figure 3B.6 – the received signal x(t)cos(2πfc t) is multiplied by a local oscillator cos(2πfc t) to give x(t)cos²(2πfc t), which is then lowpass filtered to recover x(t).]
Figure 3B.6

The first stage of the demodulation process involves applying the modulated waveform x(t)cos(2πfc t) to a multiplier. The other signal applied to the multiplier is a local oscillator which is assumed to be synchronized with the carrier signal cos(2πfc t), i.e. there is no phase shift between the carrier and the local oscillator. (Coherent demodulation requires a carrier at the receiver that is synchronized with the transmitter.) In the frequency domain, multiplication by the local oscillator convolves the received spectrum with impulses at ±fc:

½[X(f − fc) + X(f + fc)] ∗ ½[δ(f − fc) + δ(f + fc)] = ½X(f) + ¼[X(f − 2fc) + X(f + 2fc)]   (3B.3)

x(t) can be “extracted” from the output of the multiplier by lowpass filtering with a cutoff frequency of B Hz. Therefore, it is easy to see that lowpass filtering, with a gain of 2, will produce x(t). An example of demodulation in the time-domain is given below:
Demodulation in the time-domain
[Figure 3B.7 – the waveforms x(t)cos(2πfc t), the synchronized local oscillator cos(2πfc t), the multiplier output x(t)cos²(2πfc t), and the message x(t) recovered by the lowpass filter.]
Figure 3B.7
Demodulation in the frequency-domain
[Figure 3B.8 – the received spectrum Y(f) = ½[X(f − fc) + X(f + fc)]; after multiplication by the local oscillator the spectrum is ½X(f) + ¼[X(f − 2fc) + X(f + 2fc)]; a lowpass filter H(f) with gain 2 passes only the baseband portion, leaving X(f).]
Figure 3B.8
The DSB-SC modulation and demodulation process in both the time-domain and frequency-domain
[Figure 3B.9 – Modulation: the message g(t), of amplitude C, has spectrum G(f) of height A bandlimited to B; the local oscillator l(t), of period Tc, has spectrum L(f) consisting of impulses of weight ½ at ±fc; the modulated signal gm(t) has spectrum Gm(f) of height A/2 occupying fc − B to fc + B (and its image at negative frequencies). Demodulation: multiplying by l(t) again gives gi(t), whose spectrum has a baseband portion of height A/2 and portions of height A/4 centred on ±2fc; a lowpass filter h(t) with gain 2 recovers gd(t) = g(t), with spectrum Gd(f) of height A.]
Figure 3B.9
The generation of an AM signal is illustrated below:

AM modulation
[Figure 3B.10 – the message g(t) is added to a constant bias A, and the sum is multiplied by a local oscillator cos(2πfc t).]
Figure 3B.10

The local oscillator in Figure 3B.10 is a device that produces the sinusoidal signal cos(2πfc t). The multiplier is implemented with a non-linear device.
The spectrum of the modulated signal should show a replica of the signal spectrum but “shifted up” in frequency. Ideally, there should also be a pair of impulses representing the carrier sinusoid. If G(f), the spectrum of g(t), is bandlimited to B Hz, then the modulated signal spectrum has an upper sideband from fc to fc + B and a lower sideband from fc − B to fc.
Envelope Detection
There are several ways to demodulate the AM signal. One way is coherent (or synchronous) demodulation. If the magnitude of the signal g(t) never exceeds the bias A, then it is possible to demodulate the AM signal using a very simple and practical technique called envelope detection.

[Figure 3B.11 – an envelope detector: the AM signal gAM(t) drives a diode–RC network (R in parallel with C, ω0 = 1/RC) whose output follows the envelope A + g(t).]
Figure 3B.11

[Figure 3B.12 – envelope detection viewed as a precision rectifier followed by a lowpass filter: gAM(t) → gr(t) → gd(t).]
Figure 3B.12
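The diode–RC detector of Figure 3B.11 can be sketched numerically. The following Python simulation is an illustrative sketch only (the sample rate, carrier and message frequencies, bias and RC value are all assumed): an ideal diode charges the capacitor instantly to each carrier peak, and between peaks the capacitor discharges through R, so the output rides along the envelope A + g(t):

```python
import math

fs = 200_000          # sample rate (Hz), assumed
fc = 5_000            # carrier frequency (Hz), assumed
fm = 10               # message frequency (Hz), assumed
A = 2.0               # DC bias; |g(t)| < A so the envelope is A + g(t)
RC = 5e-3             # detector time constant, chosen so 1/fc << RC << 1/fm

dt = 1.0 / fs
n = int(0.2 * fs)     # simulate 0.2 s
decay = math.exp(-dt / RC)

v = 0.0               # capacitor voltage
worst = 0.0
for i in range(n):
    t = i * dt
    g = math.cos(2 * math.pi * fm * t)            # message g(t)
    x = (A + g) * math.cos(2 * math.pi * fc * t)  # AM wave (A + g(t)) cos(2*pi*fc*t)
    # ideal diode: charge instantly to the input if it is higher,
    # otherwise decay exponentially through R
    v = max(x, v * decay)
    if t > 5.0 / fc:  # ignore the initial charging transient
        worst = max(worst, abs(v - (A + g)))

print(f"worst tracking error = {worst:.3f}")
```

The worst-case tracking error is a small ripple compared with the envelope itself; making RC too large would cause the detector to fail to follow the falling envelope, and making it too small would increase the ripple.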
In so far as the transmission of information is concerned, only one sideband is necessary, and if the carrier and the other sideband are suppressed at the transmitter, no information is lost. When only one sideband is transmitted, the modulation system is referred to as a single-sideband (SSB) system.
SSB modulation
[Figure 3B.13 – g(t) is multiplied by cos(2πfc t); g(t) is also passed through a −90° phase-shifter and multiplied by sin(2πfc t); the two products are added to give the modulated signal gSSB(t).]
Figure 3B.13
The local oscillators in Figure 3B.13 are devices that produce sinusoidal signals. One oscillator produces cos(2πfc t). The other has its phase shifted by −90° to give sin(2πfc t). The multiplier is implemented with a non-linear device and the adder is a simple op-amp circuit (for input signals with a bandwidth less than 1 MHz). The “−90° phase-shifter” is a device that shifts the phase of all positive frequencies by −90° and all negative frequencies by +90°. It is more commonly referred to as a Hilbert transformer. Note that the local oscillator sin(2πfc t) can be generated from cos(2πfc t) by passing it through a Hilbert transformer.
A quadrature amplitude modulation scheme is illustrated below:

QAM modulation
[Figure 3B.14 – g1(t) is multiplied by cos(2πfc t) and g2(t) by sin(2πfc t); the two products are added to form the modulated signal.]
Figure 3B.14
The local oscillators in Figure 3B.14 are devices that produce sinusoidal signals. One oscillator produces cos(2πfc t). The other has its phase shifted by −90° to give sin(2πfc t). The multiplier is implemented with a non-linear device and the adder is a simple op-amp circuit (for input signals with a bandwidth less than 1 MHz).
[Figure 3B.15 – the original message spectra G1(f) and G2(f), and the QAM spectrum GQAM(f): around ±fc, G1 appears in the real (I) part of the spectrum and G2 in the imaginary (Q) part.]
Figure 3B.15
Normally, we represent a spectrum by its magnitude and phase, and not its real
and imaginary parts, but in this case it is easier to picture the spectrum in
“rectangular coordinates” rather than “polar coordinates”.
The appearance of the modulated signal in the time domain is that of a sinusoid
with a time-varying amplitude and phase, but since the amplitude of the
quadrature components (cos and sin) of the carrier vary in proportion to the
message signals, this modulation technique is called quadrature amplitude
modulation, or QAM for short.
Coherent Demodulation
There are several ways to demodulate the QAM signal - we will consider a
simple analog method called coherent (or synchronous) demodulation.
Coherent QAM demodulation
[Figure 3B.16 – gQAM(t) is applied to two multipliers, one driven by cos(2πfc t) and one by sin(2πfc t); lowpass filtering gQAM(t)cos(2πfc t) recovers g1(t), and lowpass filtering gQAM(t)sin(2πfc t) recovers g2(t).]
Figure 3B.16
The first stage of the demodulation process involves applying the modulated waveform gQAM(t) to two separate multipliers. The other signals applied to each multiplier are local oscillators (in quadrature again) which are assumed to be synchronized with the modulator, i.e. the frequency and phase of the demodulator’s local oscillators are exactly the same as the frequency and phase of the modulator’s local oscillators. The signals g1(t) and g2(t) can be “extracted” from the output of each multiplier by lowpass filtering.
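The reason two messages can share the same band is the orthogonality of cos and sin over a carrier period: multiplying gQAM(t) by one oscillator and lowpass filtering rejects the quadrature component entirely. A minimal Python check (illustrative; the carrier frequency and message values are assumed, and the messages are treated as constant over one carrier period):

```python
import math

fc = 1000.0                      # carrier frequency (Hz), assumed
g1, g2 = 0.7, -0.4               # message values over one carrier period, assumed

def qam(t):
    th = 2 * math.pi * fc * t
    return g1 * math.cos(th) + g2 * math.sin(th)

def extract(osc, n=2000):
    """Multiply the QAM signal by a synchronized oscillator and average
    over one carrier period; the quadrature component averages to zero."""
    T = 1.0 / fc
    acc = 0.0
    for k in range(n):
        t = (k + 0.5) * T / n
        acc += qam(t) * osc(2 * math.pi * fc * t)
    return 2.0 * acc / n         # lowpass with gain 2

r1 = extract(math.cos)           # recovers g1
r2 = extract(math.sin)           # recovers g2
print(f"g1 -> {r1:.4f}, g2 -> {r2:.4f}")
```

The cos branch returns g1 and the sin branch returns g2, because the cross terms cos·sin average to zero over a full period.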
Summary
Modulation shifts a baseband spectrum to a higher frequency range. It is
achieved in many ways – the simplest being the multiplication of the signal
by a carrier sinusoid.
References
Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York, 1994.
Exercises
1.
Sketch the Fourier transform of the waveform g(t) = [1 + cos(2πt)] sin(20πt).
2.
A scheme used in stereophonic FM broadcasting is shown below:

[Diagram – a stereo coder with inputs L and R, internal points A and B, output C, and a local oscillator cos(2πfc t).]

The input to the left channel (L) is a 1 kHz sinusoid, the input to the right channel (R) is a 2 kHz sinusoid. Draw the spectrum of the signal at points A, B and C if fc = 38 kHz.
3.
Draw a block diagram of a scheme that could be used to recover the left (L) and right (R) signals of the system shown in Question 2 if it uses the signal at C as the input.
The need for the Laplace transform
1. there are some important signals, such as the unbounded ramp function r(t) = t·u(t), which do not have a Fourier transform. Also, unstable systems,

[Figure – the ramp t·u(t), the decaying weighting e^{−σt}, and their product t·e^{−σt}·u(t), which decays to zero.]

In other words, the factor e^{−σt} is used to try and force the overall function to zero for large values of t in an attempt to achieve convergence of the Fourier transform (note that this is not always possible).

F{x(t)e^{−σt}} = ∫ x(t)e^{−σt}e^{−jωt} dt = ∫ x(t)e^{−(σ+jω)t} dt   (4A.1)

X(σ + jω) = ∫ x(t)e^{−(σ+jω)t} dt   (4A.2)

Letting s = σ + jω, we arrive at:

The two-sided Laplace transform
X(s) = ∫_{−∞}^{∞} x(t)e^{−st} dt   (4A.3)

By making the integral only valid for time t ≥ 0, we can incorporate initial conditions into the s-domain description of signals and systems. That is, we assume that signals are only applied at t = 0 and impulse responses are causal and start at t = 0. The lower limit in the Laplace transform can then be replaced with zero. The result is termed the one-sided Laplace transform, which we will refer to simply as “the Laplace transform”:

The one-sided Laplace transform
X(s) = ∫_{0⁻}^{∞} x(t)e^{−st} dt   (4A.4)
[Figure 4A.2 – the complex s-plane: s = σ + jω, with σ along the real axis and jω along the imaginary axis.]
Figure 4A.2
x(t)e^{−σt} = (1/2π) ∫ X(σ + jω)e^{jωt} dω   (4A.5)

x(t) = (1/2π) ∫ X(σ + jω)e^{(σ+jω)t} dω   (4A.6)

The inverse Laplace transform
x(t) = (1/2πj) ∫_{c−j∞}^{c+j∞} X(s)e^{st} ds   (4A.7)

The region of convergence (ROC) for the Laplace transform X(s) is the set of values of s (the region in the complex plane) for which the integral in Eq. (4A.4) converges.
Example
To find the Laplace transform of x(t) = e^{−at}u(t), we evaluate:

X(s) = ∫_{0⁻}^{∞} e^{−at}u(t)e^{−st} dt   (4A.9)

X(s) = ∫_{0}^{∞} e^{−at}e^{−st} dt = ∫_{0}^{∞} e^{−(s+a)t} dt = [−1/(s + a) · e^{−(s+a)t}]_{0}^{∞}   (4A.10)

Now, for any complex number z:

lim_{t→∞} e^{−zt} = 0 if Re{z} > 0, and diverges if Re{z} < 0   (4A.12)

so that:

lim_{t→∞} e^{−(s+a)t} = 0 if Re{s + a} > 0, and diverges if Re{s + a} < 0   (4A.13)

Therefore:

X(s) = 1/(s + a),  Re{s + a} > 0   (4A.14)

or:

e^{−at}u(t) ↔ 1/(s + a),  Re{s} > −a   (4A.15)

[Figure 4A.3 – the signal e^{−at}u(t) and its region of convergence: the shaded part of the s-plane to the right of σ = −a, containing the line of integration from c − j∞ to c + j∞.]
Figure 4A.3

This fact means that the integral defining X(s) in Eq. (4A.10) exists only for the values of s in the shaded region in Figure 4A.3. For other values of s, the integral does not converge.
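The convergence condition can be seen numerically. The Python sketch below (illustrative only; a = 1 and the sample values of s are assumed) evaluates the defining integral for x(t) = e^{−at}u(t) at real values of s inside the ROC and compares with 1/(s + a):

```python
import math

a = 1.0                                   # x(t) = exp(-a t) u(t), assumed

def laplace_integral(sigma, T=40.0, n=200_000):
    """Midpoint-rule evaluation of the integral of x(t) e^{-s t} dt from
    0 to T for real s = sigma; converges only if sigma > -a."""
    dt = T / n
    acc = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        acc += math.exp(-(a + sigma) * t) * dt
    return acc

errs = []
for sigma in (0.0, 1.0, 3.0):             # all inside the ROC Re{s} > -a
    errs.append(abs(laplace_integral(sigma) - 1.0 / (sigma + a)))
print("max error vs 1/(s+a):", max(errs))
```

Inside the ROC the truncated integral matches 1/(s + a) closely; for σ < −a the integrand grows with t and the integral diverges as T is increased, which is exactly the failure of convergence described above.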
The ROC is not needed if we deal with causal signals only
If all the signals we deal with are restricted to t > 0, then the inverse transform is unique, and we do not need to specify the ROC.
The Fourier transform from the Laplace transform
X(ω) = X(s)|_{s=jω}   (4A.16)

That is, the Fourier transform X(ω) is just the Laplace transform X(s) with s = jω.
Example
For the signal e^{−3t}u(t), the Laplace transform is X(s) = 1/(s + 3), and so:

X(ω) = 1/(s + 3)|_{s=jω} = 1/(3 + jω) = 1/(3 + j2πf)   (4A.17)

A quick check from our knowledge of the Fourier transform shows this to be correct (because the Laplace transform includes the jω-axis in its region of convergence).
[Figure 4A.4 – a plot of |X(s)| over the s-plane for X(s) = 1/(s + 3), defined only in the region of convergence σ > −3; the surface rises towards infinity at the pole s = −3.]
Figure 4A.4
There are several things we can notice about the plot above. First, note that the surface has been defined only in the ROC, i.e. for Re{s} > −3. Secondly, the surface approaches an infinite value at the point s = −3. Such a point is termed a pole, in obvious reference to the surface being analogous to a tent (a zero is a point where the surface has a value of zero).
A pole-zero plot is a shorthand way of representing a Laplace transform
[Figure 4A.5 – the pole-zero plot of X(s) = 1/(s + 3): a single pole (×) at s = −3 in the s-plane.]
Figure 4A.5
A pole-zero plot locates all the critical points in the s-plane that completely
specify the function X s (to within an arbitrary constant), and it is a useful
analytic and design tool.
Thirdly, one cut of the surface has been fortuitously placed along the imaginary axis. If we graph the height of the surface along this cut against ω, we get a picture of the magnitude of the Fourier transform vs. ω:

The Fourier transform is obtained from the jω-axis of a plot of the Laplace transform over the entire s-plane
[Figure 4A.6 – the magnitude of X(ω) = 1/(3 + jω) versus ω: a peak of 1/3 at ω = 0, falling away on either side.]
Figure 4A.6
With these ideas in mind, it should be apparent that a function that has a
Laplace transform with a ROC in the right-half plane does not have a Fourier
transform (because the Laplace transform surface will never intersect the
j -axis).
Example
For an impulse, x(t) = δ(t):

X(s) = ∫_{0⁻}^{∞} δ(t)e^{−st} dt = 1   (4A.18)

The Laplace transform of an impulse
δ(t) ↔ 1   (4A.19)

Thus, the Laplace transform of an impulse is 1, just like the Fourier transform.
Example
For the unit step, Eq. (4A.15) with a = 0 gives:

u(t) ↔ 1/s   (4A.20)

This is a frequently used transform in the study of electrical circuits and control systems.
Example
Using Euler’s identity:

cos(ω₀t)u(t) = ½(e^{jω₀t} + e^{−jω₀t})u(t)   (4A.21)

and applying Eq. (4A.15) to each term:

½(e^{jω₀t} + e^{−jω₀t})u(t) ↔ ½[1/(s − jω₀) + 1/(s + jω₀)] = s/(s² + ω₀²)   (4A.22)

so that:

cos(ω₀t)u(t) ↔ s/(s² + ω₀²)   (4A.23)
Differentiation Property

L{dx/dt} = ∫_{0⁻}^{∞} (dx/dt) e^{−st} dt   (4A.24)

Integrating by parts, we obtain:

L{dx/dt} = [x(t)e^{−st}]_{0⁻}^{∞} + s ∫_{0⁻}^{∞} x(t)e^{−st} dt   (4A.25)

For the Laplace integral to converge [i.e. for X(s) to exist], it is necessary that x(t)e^{−st} → 0 as t → ∞ for the values of s in the ROC. Therefore:

The Laplace transform differentiation property
d/dt x(t) ↔ sX(s) − x(0⁻)   (4A.26)
Standard transforms:

u(t) ↔ 1/s   (L.1)
δ(t) ↔ 1   (L.2)
e^{−t}u(t) ↔ 1/(s + 1)   (L.3)
cos(ωt)u(t) ↔ s/(s² + ω²)   (L.4)
sin(ωt)u(t) ↔ ω/(s² + ω²)   (L.5)

Standard properties:

x(t/T) ↔ T·X(sT)   (L.7) Scaling
e^{−at}x(t) ↔ X(s + a)   (L.9) Multiplication by exponential
t^N x(t) ↔ (−1)^N d^N X(s)/ds^N   (L.10) Multiplication by t
d/dt x(t) ↔ sX(s) − x(0⁻)   (L.11) Differentiation
∫_{0⁻}^{t} x(τ) dτ ↔ X(s)/s   (L.12) Integration
x₁(t) ∗ x₂(t) ↔ X₁(s)X₂(s)   (L.13) Convolution
Consider a rational function:

X(s) = (b_M s^M + b_{M−1}s^{M−1} + … + b₁s + b₀)/(a_N s^N + a_{N−1}s^{N−1} + … + a₁s + a₀)   (4A.27)

which can be written in factored form as:

X(s) = b_M(s − z₁)…(s − z_M)/[a_N(s − p₁)…(s − p_N)]   (4A.28)

where the p_i are called the poles of X(s). If the poles are all distinct, then the partial fraction expansion of Eq. (4A.28) is:

Rational Laplace transforms written in terms of poles
X(s) = c₁/(s − p₁) + c₂/(s − p₂) + … + c_N/(s − p_N)   (4A.29)

Residues defined
c_i = (s − p_i)X(s)|_{s=p_i}   (4A.30)

Note that the form of the time-domain expression is determined only by the poles of X(s)!
Example
Find the inverse Laplace transform of:

Y(s) = 2(s + 1)/[(s + 2)(s + 4)]   (4A.32)

The partial fraction expansion is:

2(s + 1)/[(s + 2)(s + 4)] = c₁/(s + 2) + c₂/(s + 4)   (4A.33)

c₁ = (s + 2)Y(s)|_{s=−2} = 2(s + 1)/(s + 4)|_{s=−2}   (4A.34)

c₁ = 2(−2 + 1)/(−2 + 4) = −1   (4A.35)

c₂ = 2(−4 + 1)/(−4 + 2) = 3   (4A.36)

Y(s) = −1/(s + 2) + 3/(s + 4)   (4A.37)

The inverse Laplace transform can now be easily evaluated using standard transform (L.3) and property (L.7):

y(t) = −e^{−2t} + 3e^{−4t},  t ≥ 0   (4A.38)

Note that the continual writing of u(t) after each function has been replaced by the more notationally convenient condition of t ≥ 0 on the total solution.
Now suppose the pole p₁ of X(s) is repeated r times. Then the partial fraction expansion of Eq. (4A.28) is:

X(s) = c₁/(s − p₁) + c₂/(s − p₁)² + … + c_r/(s − p₁)^r + c_{r+1}/(s − p_{r+1}) + … + c_N/(s − p_N)   (4A.39)

The residues are given by Eq. (4A.30) for the distinct poles r + 1 ≤ i ≤ N, and for the repeated pole by differentiation:

c_{r−k} = (1/k!) dᵏ/dsᵏ [(s − p₁)^r X(s)]|_{s=p₁},  k = 0, 1, …, r − 1
Example
Find the inverse Laplace transform of:

Y(s) = (s + 4)/[(s + 1)(s + 2)²]   (4A.44)

The partial fraction expansion is:

(s + 4)/[(s + 1)(s + 2)²] = c₁/(s + 1) + c₂/(s + 2) + c₃/(s + 2)²   (4A.45)

The residue of the distinct pole and the residue of the highest power of the repeated pole are found as before:

c₁ = (s + 4)/(s + 2)²|_{s=−1} = (−1 + 4)/(−1 + 2)² = 3   (4A.46)

c₃ = (s + 4)/(s + 1)|_{s=−2} = (−2 + 4)/(−2 + 1) = −2   (4A.47)

To find c₂, multiply both sides of Eq. (4A.45) by (s + 2)²:

(s + 4)/(s + 1) = c₁(s + 2)²/(s + 1) + c₂(s + 2) + c₃   (4A.48)

and differentiate with respect to s:

d/ds[(s + 4)/(s + 1)] = d/ds[c₁(s + 2)²/(s + 1)] + c₂   (4A.49)

Using the quotient rule:

d/dx(u/v) = (v du/dx − u dv/dx)/v²   (4A.50)

d/ds[(s + 4)/(s + 1)] = [(s + 1) − (s + 4)]/(s + 1)² = −3/(s + 1)²   (4A.51)

Evaluating Eq. (4A.49) at s = −2 (where the c₁ term vanishes):

c₂ = d/ds[(s + 4)/(s + 1)]|_{s=−2}   (4A.52)

c₂ = −3/(−2 + 1)² = −3   (4A.53)

Therefore:

Y(s) = 3/(s + 1) − 3/(s + 2) − 2/(s + 2)²   (4A.54)

and y(t) = 3e^{−t} − 3e^{−2t} − 2te^{−2t}, t ≥ 0.
MATLAB® can be used to obtain the poles and residues for a given rational function X(s).
Example
Given:

X(s) = (s² − 2s + 1)/(s³ + 3s² + 4s + 2)

calculate x(t).

num = [1 -2 1];
den = [1 3 4 2];
[r,p] = residue(num,den);
Thus, we see that any high order rational function X s can be decomposed
into a combination of first-order factors.
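The same computation can be sketched in plain Python (an illustrative alternative to MATLAB’s residue, assuming distinct poles and a monic denominator): the cover-up formula gives c_i = N(p_i)/∏_{j≠i}(p_i − p_j). As a test case, Y(s) = 2(s + 1)/[(s + 2)(s + 4)] has residues −1 at s = −2 and 3 at s = −4:

```python
def polyval(coeffs, x):
    """Evaluate a polynomial given its coefficients, highest power first."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

def residues(num, poles):
    """Residues of num(s) / prod(s - p_i) for DISTINCT poles p_i, using
    the cover-up formula c_i = num(p_i) / prod_{j != i} (p_i - p_j)."""
    out = []
    for i, p in enumerate(poles):
        denom = 1
        for j, q in enumerate(poles):
            if j != i:
                denom *= p - q
        out.append(polyval(num, p) / denom)
    return out

# Y(s) = (2s + 2) / ((s + 2)(s + 4))
c = residues([2, 2], [-2, -4])
print(c)   # [-1.0, 3.0]
```

Complex poles work unchanged, since Python arithmetic handles complex numbers natively; repeated poles would need the differentiation formula instead.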
Why is the Laplace transform so useful? Solving a differential equation directly in the time-domain is difficult; instead, we transform to the s-domain, solve an easy algebraic equation, and transform back:

[Figure 4A.7 – differential equation → (difficult) → time-domain solution; or differential equation → LT → algebraic equation → (easy) → s-domain solution → ILT → time-domain solution.]
Figure 4A.7
Example
Solve:

(D² + 5D + 6)y(t) = (D + 1)x(t)   (4A.56)

for the initial conditions y(0⁻) = 2 and ẏ(0⁻) = 1 and the input x(t) = e^{−4t}u(t).

Written out, the equation is:

d²y/dt² + 5 dy/dt + 6y = dx/dt + x   (4A.57)

Using the differentiation property:

L{dy/dt} = sY(s) − y(0⁻) = sY(s) − 2
L{d²y/dt²} = s²Y(s) − sy(0⁻) − ẏ(0⁻) = s²Y(s) − 2s − 1   (4A.58)

For the input:

X(s) = 1/(s + 4)

and

L{dx/dt} = sX(s) − x(0⁻) = s/(s + 4) − 0 = s/(s + 4)   (4A.59)

Taking the Laplace transform of Eq. (4A.57):

[s²Y(s) − 2s − 1] + 5[sY(s) − 2] + 6Y(s) = s/(s + 4) + 1/(s + 4)   (4A.60)

Collecting all the terms of Y(s) and the remaining terms separately on the left-hand side, we get:

(s² + 5s + 6)Y(s) − (2s + 11) = (s + 1)/(s + 4)   (4A.61)

so that:

(s² + 5s + 6)Y(s) = (2s + 11) [initial condition terms] + (s + 1)/(s + 4) [input term]   (4A.62)

Therefore:

Y(s) = (2s + 11)/(s² + 5s + 6) [zero-input component] + (s + 1)/[(s + 4)(s² + 5s + 6)] [zero-state component]

     = [7/(s + 2) − 5/(s + 3)] + [−(1/2)/(s + 2) + 2/(s + 3) − (3/2)/(s + 4)]   (4A.63)

and:

y(t) = [7e^{−2t} − 5e^{−3t}] [zero-input response] + [−(1/2)e^{−2t} + 2e^{−3t} − (3/2)e^{−4t}] [zero-state response]

     = (13/2)e^{−2t} − 3e^{−3t} − (3/2)e^{−4t},  t ≥ 0   (4A.64)
The Laplace transform method gives the total response, which includes
zero-input and zero-state components. The initial condition terms give rise to
the zero-input response. The zero-state response terms are exclusively due to
the input.
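A solution obtained this way is easy to verify numerically. The Python sketch below (illustrative) checks that y(t) = (13/2)e^{−2t} − 3e^{−3t} − (3/2)e^{−4t} satisfies y″ + 5y′ + 6y = x′ + x for t > 0 (where x(t) = e^{−4t}) and meets the initial condition y(0) = 2, using central finite differences:

```python
import math

def y(t):   # total response found by the Laplace transform method
    return 6.5 * math.exp(-2*t) - 3 * math.exp(-3*t) - 1.5 * math.exp(-4*t)

def x(t):   # input e^{-4t} u(t), evaluated for t > 0
    return math.exp(-4*t)

def deriv(f, t, h=1e-5):        # central-difference first derivative
    return (f(t + h) - f(t - h)) / (2 * h)

def deriv2(f, t, h=1e-4):       # central-difference second derivative
    return (f(t + h) - 2*f(t) + f(t - h)) / (h * h)

# check the differential equation y'' + 5y' + 6y = x' + x at a few t > 0
worst = 0.0
for t in (0.5, 1.0, 2.0):
    lhs = deriv2(y, t) + 5 * deriv(y, t) + 6 * y(t)
    rhs = deriv(x, t) + x(t)    # = -3 e^{-4t} for t > 0
    worst = max(worst, abs(lhs - rhs))

print("y(0) =", y(0.0))
print("max ODE residual:", worst)
```

Note that y′(0⁺) = 2 rather than ẏ(0⁻) = 1: the dx/dt term contains an impulse at t = 0 (since x jumps from 0 to 1), which instantaneously steps the derivative of the response.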
Systems described by differential equations
d^N y/dt^N + Σ_{i=0}^{N−1} a_i dⁱy/dtⁱ = Σ_{i=0}^{M} b_i dⁱx/dtⁱ   (4A.65)

If we take the Laplace transform of this equation using (L.11), assuming zero initial conditions, we get:

Differential equations transform into rational functions of s
Y(s) = [(b_M s^M + b_{M−1}s^{M−1} + … + b₁s + b₀)/(s^N + a_{N−1}s^{N−1} + … + a₁s + a₀)] X(s)   (4A.66)

Now define:

Transfer function defined
Y(s) = H(s)X(s)   (4A.67)
Y(s) = H(s)X(s)   (4A.69)

The numerator and denominator polynomials of H(s) can be factored:

Transfer function factored to get zeros and poles
H(s) = b_M(s − z₁)(s − z₂)…(s − z_M)/[(s − p₁)(s − p₂)…(s − p_N)]   (4A.71)

where the z’s are the zeros of the system and the p’s are the poles of the system. This shows us that apart from a constant factor b_M, the poles and zeros determine the transfer function completely. They are often displayed on a pole-zero diagram, which is a plot in the s-domain showing the location of all the poles (marked by ×) and all the zeros (marked by ○).
You should be familiar with direct construction of the transfer function for
electric circuits from previous subjects.
Block Diagrams
Block diagrams are transfer functions
Systems are often represented as interconnections of s-domain “blocks”, with each block containing a transfer function.
Example
For the simple RC circuit below:

[Figure 4A.8 – a series resistance driven by vi(t), with the output vo(t) taken across a capacitor C.]
Figure 4A.8

we can perform KVL around the loop to get the differential equation:

vo(t) + RC dvo(t)/dt = vi(t)   (4A.72)

Taking the Laplace transform (zero initial conditions):

Vo(s) + sRC·Vo(s) = Vi(s)   (4A.73)

so that:

Vo(s)/Vi(s) = 1/(1 + sRC) = 1/(1 + sT)   (4A.74)

[Figure 4A.9 – the circuit drawn as a single block with transfer function 1/(1 + sT).]
Figure 4A.9

Note that there’s no hint of what makes up the inside of the block, except for the input and output signals. It could be a simple RC circuit, a complex passive circuit, or even an active circuit (with op-amps). The important thing the block does is hide all this detail.
Notation

A block represents multiplication with a transfer function
[Figure 4A.10 – a block: input Vi(s), transfer function G(s), output Vo(s) = G(s)Vi(s).]
Figure 4A.10

Addition and subtraction of signals in a block diagram
[Figure 4A.11 – summing junctions: Z = X + Y and Z = X − Y.]
Figure 4A.11
Cascading Blocks
Blocks can be connected in cascade.
Cascading blocks implies multiplying the transfer functions
[Figure 4A.12 – cascade: X → G₁(s) → Y = G₁X → G₂(s) → Z = G₁G₂X.]
Figure 4A.12
Care must be taken when cascading blocks. Consider what happens when we try to create a second-order circuit by cascading two first-order circuits:

A circuit which IS NOT the cascade of two first-order circuits
[Figure 4A.13 – two R–C sections connected directly: Vi drives the first R–C, whose output node directly drives the second R–C to give Vo.]
Figure 4A.13

Show that the transfer function for the above circuit is:

Vo/Vi = (1/RC)² / [s² + 3(1/RC)s + (1/RC)²]

Now consider the same two sections isolated from one another by a buffer:

[Figure 4A.14 – the two R–C sections with a buffer between them.]
Figure 4A.14

Vo/Vi = [(1/RC)/(s + 1/RC)] · [(1/RC)/(s + 1/RC)]

      = (1/RC)² / [s² + 2(1/RC)s + (1/RC)²]   (4A.76)

We can only cascade circuits if they are buffered
They are different! In the first case, the second network loads the first (i.e. they interact). We can only cascade circuits if the “outputs” of the circuits present a low impedance to the next stage, so that each successive circuit does not “load” the previous circuit. Op-amp circuits of both the inverting and non-inverting type are ideal for cascading.
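The loading effect can be seen by evaluating the two transfer functions along the jω-axis. A small Python sketch (illustrative; RC = 1 is assumed, so the two denominators reduce to s² + 3s + 1 and s² + 2s + 1):

```python
# Compare the unbuffered R-C/R-C cascade with the buffered one (RC = 1 assumed).
# Unbuffered: H1(s) = 1 / (s^2 + 3s + 1)   (the second section loads the first)
# Buffered:   H2(s) = 1 / (s^2 + 2s + 1) = 1 / (s + 1)^2

def H_unbuffered(s):
    return 1 / (s * s + 3 * s + 1)

def H_buffered(s):
    return 1 / (s * s + 2 * s + 1)

for w in (0.5, 1.0, 2.0):
    s = complex(0, w)              # s = j*omega
    m1, m2 = abs(H_unbuffered(s)), abs(H_buffered(s))
    print(f"w = {w}: |H_unbuffered| = {m1:.4f}, |H_buffered| = {m2:.4f}")
```

At ω = 1/RC the unbuffered cascade gives a magnitude of 1/3 instead of the buffered 1/2: loading lowers and flattens the response, even though both circuits use identical components.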
The standard feedback connection is shown below:

[Figure 4A.15 – R(s) enters a summing junction; the error E(s) drives G(s) to produce C(s); C(s) is fed back through H(s) as B(s) to the subtracting input of the summing junction.]
Figure 4A.15

B(s) = output multiplied by H(s)
E(s) = actuating error signal
R(s) − C(s) = system error
C(s)/R(s) = closed-loop transfer function
G(s)H(s) = loop gain

To find the transfer function, we solve the following two equations which are self-evident from the block diagram:

C(s) = G(s)E(s)
E(s) = R(s) − H(s)C(s)   (4A.77)

Substituting the second into the first:

C(s) = G(s)R(s) − G(s)H(s)C(s)
C(s)[1 + G(s)H(s)] = G(s)R(s)   (4A.78)

and therefore:

Transfer function for the standard feedback connection
C(s)/R(s) = G(s)/[1 + G(s)H(s)]   (4A.79)

Finding the error signal’s transfer function for the standard feedback connection
E(s)/R(s) = 1/[1 + G(s)H(s)]   (4A.80)

and:

B(s)/R(s) = G(s)H(s)/[1 + G(s)H(s)]   (4A.81)

Notice how all the above expressions have the same denominator.

Characteristic equations for positive and negative feedback
[Figure 4A.16 – the characteristic equation: 1 + GH for negative feedback, 1 − GH for positive feedback.]
Figure 4A.16
Block diagram transformations:

– Combining blocks in cascade: X₁ → G₁ → G₂ → X₃ becomes X₁ → G₁G₂ → X₃.
– Moving a summing point behind a block: adding X₂ before G becomes adding GX₂ after G (the moved branch gains a block G).
– Moving a summing point ahead of a block: adding X₂ after G becomes adding X₂/G before G (the moved branch gains a block 1/G).
– Moving a pick-off point behind a block: a signal picked off before G can instead be picked off after G and passed through 1/G.
– Moving a pick-off point ahead of a block: a signal picked off after G can instead be picked off before G and passed through G.
– Eliminating a feedback loop: a loop with forward path G and feedback path H becomes a single block G/(1 + GH).
Example
Given the single loop with forward path G and feedback path H:

C = G(R − HC)
C(1 + GH) = GR

so that:

C/R = G/(1 + GH)
Example
Given the system below:

[Diagram – forward path G₁ and G₂ in cascade, followed by parallel paths G₃ and G₄; an inner feedback path H₁ around G₁G₂; an outer feedback path H₂ around the whole system; input θi, output θo.]

we reduce it in stages:

G₁ and G₂ in cascade → G₁G₂
G₃ and G₄ in parallel → G₃ + G₄
the inner loop (positive feedback through H₁) → G₁G₂/(1 − G₁G₂H₁)

Therefore we get:

θo/θi = [G₁G₂(G₃ + G₄)/(1 − G₁G₂H₁)] / [1 + G₁G₂(G₃ + G₄)H₂/(1 − G₁G₂H₁)]

      = G₁G₂(G₃ + G₄) / [1 − G₁G₂H₁ + G₁G₂(G₃ + G₄)H₂]
Now consider a system with input θi and a disturbance d injected between two cascaded blocks G₁ and G₂ in a unity-feedback loop. Considering θi only (set d = 0), the forward path is G₁G₂ and:

θo1/θi = G₁G₂/(1 + G₁G₂)

Considering d only (set θi = 0), we have:

θo2/d = G₂/(1 + G₁G₂)

By superposition, the total output is:

θo = θo1 + θo2 = [G₁G₂/(1 + G₁G₂)]θi + [G₂/(1 + G₁G₂)]d
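A superposition result like this can be checked by solving the loop equation directly at a single frequency. A Python sketch (illustrative; the gains, reference and disturbance values are arbitrary):

```python
# Loop equation: theta_o = G2 * (G1 * (theta_i - theta_o) + d)
# Closed form:   theta_o = (G1*G2*theta_i + G2*d) / (1 + G1*G2)

def solve_loop(G1, G2, theta_i, d):
    """Solve the loop equation exactly for theta_o (scalar gains at one s)."""
    return (G1 * G2 * theta_i + G2 * d) / (1 + G1 * G2)

G1, G2 = 4.0, 2.5          # block gains at some fixed s, assumed
theta_i, d = 1.0, 0.2      # reference input and disturbance, assumed

theta_o = solve_loop(G1, G2, theta_i, d)

# verify that the closed form satisfies the original loop equation
residual = theta_o - G2 * (G1 * (theta_i - theta_o) + d)
print("theta_o =", theta_o, " residual =", residual)
```

The residual is zero (to rounding), confirming that the closed-form expression and the loop equation agree; note also that a large loop gain G₁G₂ suppresses the disturbance term relative to the reference term.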
Summary
The Laplace transform is a generalization of the Fourier transform. To find
the Laplace transform of a function, we start from a known Laplace
transform pair, and apply any number of Laplace transform pair properties
to arrive at the solution.
The impulse response and the transfer function form a Laplace transform
pair: h(t) ↔ H(s).
References
Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York, 1994.
Exercises
1.
Obtain transfer functions for the following networks:
[Circuit diagrams – a) a network with R and C; b) a network with R₁, C₁, R₂ and C₂; c) a network with L, C and R; d) a network with R₁, L₁, C₁ and R₂.]
2.
Obtain the Laplace transforms for the following integro-differential equations:
a) L di(t)/dt + Ri(t) + (1/C)∫₀ᵗ i(τ) dτ = e(t)

b) M d²x(t)/dt² + B dx(t)/dt + Kx(t) = 3t

c) J d²θ(t)/dt² + B dθ(t)/dt + Kθ(t) = 10 sin t
3.
Find the f(t) which corresponds to each F(s) below:

a) F(s) = (s² + 5)/(s³ + 2s² + 4s)

b) F(s) = s/(s⁴ + 5s² + 4)
c) F(s) = (3s + 1)/[5(s³ + s²)]
4.
Use the final value theorem to determine the final value for each f(t) in 3 a), b) and c) above.
5.
Find an expression for the transfer function of the following network (assume
the op-amp is ideal):
[Circuit – an op-amp network with input elements R₁ and C₁ and feedback elements R₂ and C₂; input Vi, output Vo.]
6.
In the circuit below, the switch is in a closed position for a long time before t = 0, when it is opened instantaneously. Find the response y(t) for t ≥ 0.

[Circuit – a 10 V source with a switch that opens at t = 0, a 1 H inductor, a ⅕ F capacitor, and resistors; the indicated response is y(t).]
7.
Given:

[Figure – the standard feedback connection: R(s) → summing junction → E(s) → G(s) → C(s), with feedback B(s) = H(s)C(s).]

determine:

(i) C(s)/R(s)   (ii) E(s)/R(s)   (iii) B(s)/R(s)
8.
Using block diagram reduction techniques, find the transfer functions of the
following systems:
a)
[Diagram – blocks G₁(s), G₂(s) and G₃(s) with feedback paths H₂(s) and H₃(s); input R, output Y.]

b)
[Diagram – a diagram with gains a, b and c between X₁ and X₅, output Y.]
9.
Use block diagram reduction techniques to find the transfer functions of the
following systems:
a)
[Diagram – blocks G₁, G₂, G₃ and G₄ in cascade with feedback paths H₁, H₂ and H₃; input R, output C.]

b)
[Diagram – blocks A through F interconnected between X and Y, with B and D in the forward paths.]
Laplace was the son of a farmer of moderate means, and while at the local
military school, his uncle (a priest) recognised his exceptional mathematical
talent. At sixteen, he began to study at the University of Caen. Two years later
he travelled to Paris, where he gained the attention of the great mathematician
and philosopher Jean Le Rond d’Alembert by sending him a paper on the
principles of mechanics. His genius was immediately recognised, and Laplace
became a professor of mathematics.
In 1784 Laplace was appointed as examiner at the Royal Artillery Corps, and
in this role in 1785, he examined and passed the 16 year old Napoleon
Bonaparte.
If man were restricted to collecting facts the sciences were only a sterile
nomenclature and he would never have known the great laws of nature. It is
in comparing the phenomena with each other, in seeking to grasp their
relationships, that he is led to discover these laws...
The first volume of the Mécanique Céleste is divided into two books, the first
on general laws of equilibrium and motion of solids and also fluids, while the
second book is on the law of universal gravitation and the motions of the
centres of gravity of the bodies in the solar system. The main mathematical
approach was the setting up of differential equations and solving them to
describe the resulting motions. The second volume deals with mechanics
applied to a study of the planets. In it Laplace included a study of the shape of
the Earth which included a discussion of data obtained from several different
expeditions, and Laplace applied his theory of errors to the results.
After the publication of the fourth volume of the Mécanique Céleste, Laplace
continued to apply his ideas of physics to other problems such as capillary
action (1806-07), double refraction (1809), the velocity of sound (1816), the
theory of heat, in particular the shape and rotation of the cooling Earth
(1817-1820), and elastic fluids (1821).
Many original documents concerning his life have been lost, and gaps in his
biography have been filled by myth. Some papers were lost in a fire that
destroyed the chateau of a descendant, and others went up in flames when
Allied forces bombarded Caen during WWII.
Overview
The transfer function tells us lots of things
Transfer functions are obtained by Laplace transforming a system’s input/output differential equation, or by analysing a system directly in the s-domain. From the transfer function, we can derive many important properties of a system.
Stability
To look at stability, let’s examine the rational system transfer function that ordinarily arises from linear differential equations. Expanding H(s) in partial fractions, each term contributes to the impulse response via a standard pair:

The impulse response is always some sort of exponential
c/(s − p) ↔ ce^{pt}   (4B.2a)

[A cosθ (s + α) − Aω sinθ]/[(s + α)² + ω²] ↔ Ae^{−αt}cos(ωt + θ)   (4B.2b)

If H(s) has repeated poles, then h(t) will contain terms of the form ctⁱe^{pt}. For the impulse response to decay to zero, every pole must satisfy:

Conditions on the poles for a stable system
Re{p_i} < 0   (4B.3a)

Stability defined
A system is stable if all the poles of the transfer function lie in the open left-half s-plane   (4B.3b)
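The stability condition is mechanical to test once the poles are known. A Python sketch (illustrative; the two quadratic denominators are assumed examples):

```python
import cmath

def poles_of_quadratic(a, b, c):
    """Roots of a*s^2 + b*s + c = 0."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

def is_stable(poles):
    """Stable iff every pole lies in the open left-half s-plane."""
    return all(p.real < 0 for p in poles)

# s^2 + 2s + 5: poles -1 +/- 2j  -> stable
# s^2 - 2s + 5: poles +1 +/- 2j  -> unstable
print(is_stable(poles_of_quadratic(1, 2, 5)))    # True
print(is_stable(poles_of_quadratic(1, -2, 5)))   # False
```

Note that a pole exactly on the jω-axis fails the strict inequality, correctly classifying marginally stable systems (such as an ideal oscillator) as not stable in this sense.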
Unit-Step Response
Consider a system with rational transfer function H(s) = B(s)/A(s). If an input x(t) is applied for t ≥ 0 with no initial energy in the system, then the transform of the resulting output response is:

Y(s) = [B(s)/A(s)]X(s)   (4B.4)

For a unit-step input, X(s) = 1/s, so:

Transform of step-response for any system
Y(s) = B(s)/[A(s)s]   (4B.5)

Expanding in partial fractions:

Y(s) = H(0)/s + E(s)/A(s)   (4B.6)

where E(s) is a polynomial in s, and the residue of the pole at the origin is H(0). If the system is stable, so that all the roots of A(s) = 0 lie in the open left-half plane, then the term ytr(t) – the inverse transform of E(s)/A(s) – converges to zero as t → ∞, in which case ytr(t) is the transient part of the response.

So, if the system is stable, the step response contains a transient that decays to zero and it contains a constant with value H(0). The constant H(0) is the steady-state value of the step response.
First-Order Systems
A first-order system with pole p and unity DC gain has transfer function:

H(s) = −p/(s − p)   (4B.9)

The unit-step response is:

Step-response of a first-order system
y(t) = 1 − e^{pt},  t ≥ 0   (4B.10)

which has been written in the form of Eq. (4B.8). If the system is stable, then p lies in the open left-half plane, and the second term decays to zero. The rate at which the transient decays to zero depends on how far over to the left the pole is. Since the total response is equal to the constant “1” plus the transient response, the rate at which the step response converges to the steady-state value is equal to the rate at which the transient decays to zero. This may be an important design consideration.

Time constant defined
An important quantity that characterizes the rate of decay of the transient is the time constant T. It is defined as T = −1/p, assuming p < 0. You are probably familiar with the concept of a time constant for electric circuits (e.g. T = RC for a simple RC circuit), but it is a concept applicable to all first-order systems. The smaller the time constant, the faster the rate of decay of the transient.
Second-Order Systems

Standard form of a second-order lowpass transfer function
H(s) = ωn²/(s² + 2ζωn s + ωn²)   (4B.11)

If we write the denominator in terms of its roots:

H(s) = ωn²/[(s − p₁)(s − p₂)]   (4B.13)

For the overdamped case (ζ > 1), the poles p₁ and p₂ are real and distinct:

H(s) = ωn²/[(s − p₁)(s − p₂)]   (4B.15)

and the transform of the unit-step response is:

Y(s) = ωn²/[(s − p₁)(s − p₂)s]   (4B.16)

Rewriting as partial fractions and taking the inverse Laplace transform, we get the unit-step response:

Step response of a second-order overdamped system
y(t) = 1 + c₁e^{p₁t} + c₂e^{p₂t},  t ≥ 0   (4B.17)

Therefore, the transient part of the response is given by the sum of two exponentials, and the steady-state part of the step response of a second-order overdamped system is:

yss(t) = H(0) = ωn²/(p₁p₂) = 1   (4B.19)

It often turns out that the transient response is dominated by one pole (the one closer to the origin – why?), so that the step response looks like that of a first-order system.

For the critically damped case (ζ = 1), the poles are real and repeated at s = −ωn:

H(s) = ωn²/(s + ωn)²   (4B.20)

Expanding H(s)/s via partial fractions and taking the inverse transform yields the step response:

Unit-step response of a second-order critically damped system
y(t) = 1 − (1 + ωn t)e^{−ωn t}   (4B.21)

The steady-state part of the unit-step response of a second-order critically damped system is:

yss(t) = H(0) = 1   (4B.23)

as before.
For the underdamped case (0 < ζ < 1), the poles are complex conjugates:

Underdamped pole locations are complex conjugates
p₁,₂ = −ζωn ± jωd,  where ωd = ωn√(1 − ζ²)   (4B.25)

H(s) = ωn²/[(s + ζωn)² + ωd²]   (4B.26)

and the transform of the unit-step response Y(s) = H(s)/s can be expanded as:

Y(s) = 1/s − [(s + ζωn) + ζωn]/[(s + ζωn)² + ωd²]   (4B.27)

Thus, from the transform Eq. (4B.2b), the unit-step response is:

Step response of a second-order underdamped system
y(t) = 1 − (ωn/ωd)e^{−ζωn t} sin(ωd t + cos⁻¹ζ)   (4B.28)

Second-order pole locations
[Figure 4B.1 – second-order pole locations in the s-plane: for ζ = 0 the poles are at ±jωn on the imaginary axis; for 0 < ζ < 1 they lie at −ζωn ± jωd on a circle of radius ωn (ζ = 0.707 at 45°); for ζ = 1 they meet at s = −ωn on the real axis; for ζ > 1 (e.g. ζ = 5) they split along the real axis.]
Figure 4B.1
How the damping ratio, ζ, varies the pole location
We can see, for fixed ωn, that varying ζ from 0 to 1 causes the poles to move from the imaginary axis along a circular arc with radius ωn until they meet at the point s = −ωn. If ζ = 0 then the poles lie on the imaginary axis and the transient never dies out – we have a marginally stable system. As ζ is increased, the response becomes less oscillatory and more and more damped, until ζ = 1. Now the poles are real and repeated, and there is no sinusoid in the response. As ζ is increased further, the poles move apart on the real axis, with one moving to the left, and one moving toward the origin. The response becomes more and more damped due to the right-hand pole getting closer and closer to the origin.
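Eq. (4B.28) can be evaluated directly to see the characteristic underdamped behaviour. A Python sketch (illustrative; ωn = 1 rad/s and ζ = 0.25 are assumed): the response starts at zero, overshoots, and rings at the damped frequency ωd while settling to 1:

```python
import math

wn, zeta = 1.0, 0.25                     # natural frequency and damping ratio, assumed
wd = wn * math.sqrt(1 - zeta**2)         # damped frequency

def y(t):
    """Unit-step response of wn^2 / (s^2 + 2*zeta*wn*s + wn^2), 0 < zeta < 1."""
    return 1 - (wn / wd) * math.exp(-zeta * wn * t) * math.sin(wd * t + math.acos(zeta))

print(f"y(0)  = {y(0.0):.4f}")                    # starts at 0
peak = max(y(0.01 * k) for k in range(4000))      # scan 0 <= t < 40 s
print(f"peak  = {peak:.4f}")                      # overshoots above 1
print(f"y(40) = {y(40.0):.4f}")                   # settles to 1
```

Note that y(0) = 0 follows algebraically because sin(cos⁻¹ζ) = ωd/ωn, and the overshoot grows as ζ is reduced towards 0, consistent with the pole picture in Figure 4B.1.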
Sinusoidal Response
Again consider a system with rational transfer function H(s) = B(s)/A(s). To determine the system response to the sinusoidal input x(t) = C cos(ω₀t), we first find the Laplace transform of the input:

X(s) = Cs/(s² + ω₀²) = Cs/[(s − jω₀)(s + jω₀)]   (4B.29)

Then:

Y(s) = [B(s)/A(s)] · Cs/[(s − jω₀)(s + jω₀)]   (4B.30)

Expanding in partial fractions:

Y(s) = (C/2)H(jω₀)/(s − jω₀) + (C/2)H*(jω₀)/(s + jω₀) + E(s)/A(s)   (4B.31)

The sinusoidal response of a system
y(t) = (C/2)[H(jω₀)e^{jω₀t} + H*(jω₀)e^{−jω₀t}] + ytr(t)   (4B.32)

When the system is stable, the ytr(t) term decays to zero and we are left with the steady-state response:

Steady-state sinusoidal response of a system
yss(t) = C|H(jω₀)| cos(ω₀t + ∠H(jω₀)),  t ≥ 0   (4B.34)

This result is exactly the same expression as that found when performing Fourier analysis, except there the expression was for all time, and hence there was no transient! This means that the frequency response function H(ω) can be obtained directly from the transfer function: H(ω) = H(s)|_{s=jω}.
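Eq. (4B.34) reduces the steady-state computation to a single complex evaluation at s = jω₀. A Python sketch (illustrative; the system H(s) = 1/(s + 3) and the input values are assumed):

```python
import cmath, math

def H(s):                        # example stable system, assumed
    return 1 / (s + 3)

C, w0 = 2.0, 4.0                 # input x(t) = C cos(w0 t), assumed

Hjw = H(complex(0, w0))          # H(j w0)
amp = C * abs(Hjw)               # steady-state amplitude C |H(j w0)|
phase = cmath.phase(Hjw)         # steady-state phase shift (radians)

def y_ss(t):
    return amp * math.cos(w0 * t + phase)

print(f"|H(j{w0})| = {abs(Hjw):.4f}, phase = {phase:.4f} rad")
```

Here |H(j4)| = 1/5 and the phase is −tan⁻¹(4/3), so the steady-state output is an attenuated, phase-lagged copy of the input cosine, exactly as Eq. (4B.34) predicts.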
Arbitrary Response
Suppose we apply an arbitrary input x(t) that has rational Laplace transform X(s) = C(s)/D(s) where the degree of C(s) is less than the degree of D(s). If this input is applied to a system with transfer function H(s) = B(s)/A(s), the transform of the response is:

Y(s) = [B(s)/A(s)] · [C(s)/D(s)]   (4B.36)

which can be expanded as:

Y(s) = F(s)/D(s) + E(s)/A(s)   (4B.37)

The first term has the same poles as the input and gives rise to the steady-state response; the second gives rise to the transient response, the inverse transform of E(s)/A(s).
Summary
A system is stable if all the poles of the transfer function lie in the open left-half
s-plane.
The frequency response of a system can be obtained from the transfer function by
setting s j .
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB®, Prentice-Hall, 1997.
Exercises
1.
Determine whether the following circuit is stable for any element values, and
for any bounded inputs:
[Circuit – a network containing R and L.]
2.
Suppose that a system has the following transfer function:
H(s) = 8/(s + 4)
a) Compute the system response to the following inputs. Identify the steady-
state solution and the transient solution.
(i) x(t) = u(t)
(ii) x(t) = t·u(t)
3.
Consider three systems which have the following transfer functions:
(i) H(s) = 32/(s² + 4s + 16)

(ii) H(s) = 32/(s² + 8s + 16)

(iii) H(s) = 32/(s² + 10s + 16)
4.
For the circuit shown, compute the steady-state response yss(t) resulting from the inputs given below, assuming that there is no initial energy at time t = 0.

[Circuit – a network containing a 10 μF capacitor.]

a) x(t) = u(t)
c) x(t) = cos(5t + π/6)u(t)
The mid-Victorian age was a time when the divide between the rich and the
poor was immense (and almost insurmountable), a time of unimaginable
disease and lack of sanitation, a time of steam engines belching forth a steady
rain of coal dust, a time of horses clattering along cobblestoned streets, a time
when social services were the fantasy of utopian dreamers. It was into this
smelly, noisy, unhealthy and class-conscious world that Oliver Heaviside was
born the son of a poor man on 18 May, 1850.
In 1874 Heaviside resigned from his job as telegraph operator and went back to
live with his parents. He was to live off his parents, and other relatives, for the
rest of his life. He dedicated his life to writing technical papers on telegraphy
and electrical theory – much of his work forms the basis of modern circuit
theory and field theory.
In 1876 he published a paper entitled On the extra current which made it clear
that Heaviside (a 26-year-old unemployed nobody) was a brilliant talent. He
had extended the mathematical understanding of telegraphy far beyond
Now all has been blended into one theory, the main equations of which can be written on a page of a pocket notebook. That we have got so far is due in the first place to Maxwell, and next to him to Heaviside and Hertz. – H.A. Lorentz

When Maxwell died in 1879 he left his electromagnetic theory as twenty equations in twenty variables! It was Heaviside (and independently, Hertz) who recast the equations in modern form, using a symmetrical vector calculus notation (also championed by Josiah Willard Gibbs (1839-1903)). From these equations, he was able to solve an enormous number of problems involving field theory, as well as contributing to the ideas behind field theory, such as energy being carried by fields, and not electric charges.
Rigorous mathematics is narrow, physical mathematics bold and broad. – Oliver Heaviside

A major portion of Heaviside's work was devoted to "operational calculus".1 This caused a controversy with the mathematicians of the day because although it seemed to solve physical problems, its mathematical rigor was not at all clear. His knowledge of the physics of problems guided him correctly in many instances to the development of suitable mathematical processes. In 1887 Heaviside introduced the concept of a resistance operator, which in modern terms would be called impedance, and Heaviside introduced the symbol Z for it. He let p be equal to time-differentiation, and thus the resistance operator for an inductor would be written as pL. He would then treat p just like an algebraic quantity.
1
The Ukrainian Mikhail Egorovich Vashchenko-Zakharchenko published The Symbolic
Calculus and its Application to the Integration of Linear Differential Equations in 1862.
Heaviside independently invented (and applied) his own version of the operational calculus.
The practice of eliminating the physics by reducing a problem to a purely mathematical exercise should be avoided as much as possible. The physics should be carried on right through, to give life and reality to the problem, and to obtain the great assistance which the physics gives to the mathematics. – Oliver Heaviside, Collected Works, Vol II, p.4

Heaviside also played a role in the debate raging at the end of the 19th century about the age of the Earth, with obvious implications for Darwin's theory of evolution. In 1862 Thomson wrote his famous paper On the secular cooling of the Earth, in which he imagined the Earth to be a uniformly heated ball of molten rock, modelled as a semi-infinite mass. Based on experimentally derived thermal conductivity of rock, sand and sandstone, he then allowed the globe to cool according to the physical law of thermodynamics embedded in Fourier's famous partial differential equation for heat flow. The resulting age of the Earth (100 million years) fell short of that needed by Darwin's theory, and also went against geologic and palaeontologic evidence. John Perry (a professor of mechanical engineering) redid Thomson's analysis using discontinuous diffusivity, and arrived at approximate results that could (based on the conductivity and specific heat of marble and quartz) put the age of the Earth into the billions of years. But Heaviside, using his operational calculus, was able to solve the diffusion equation for a finite spherical Earth. We now know that such a simple model is based on faulty premises – radioactive decay within the Earth maintains the thermal gradient without a continual cooling of the planet. But the power of Heaviside's methods to solve remarkably complex problems became readily apparent.
Heaviside spent much of his life being bitter at those who didn’t recognise his
genius – he had disdain for those that could not accept his mathematics without
formal proof, and he felt betrayed and cheated by the scientific community
who often ignored his results or used them later without recognising his prior
work. It was with much bitterness that he eventually retired and lived out the
rest of his life in Torquay on a government pension. He withdrew from public
and private life, and was taunted by “insolently rude imbeciles”. Objects were
thrown at his windows and doors and numerous practical tricks were played on
him.
Heaviside should be remembered for his vectors, his field theory analyses, his brilliant discovery of the distortionless circuit, his pioneering applied mathematics, and for his wit and humor. – P.J. Nahin

Today, the historical obscurity of Heaviside's work is evident in the fact that his vector analysis and vector formulation of Maxwell's theory have become "basic knowledge". His operational calculus was made obsolete with the 1937 publication of a book by the German mathematician Gustav Doetsch – it showed how, with the Laplace transform, Heaviside's operators could be replaced with a mathematically rigorous and systematic method.
The last five years of Heaviside's life, with both hearing and sight failing, were years of great privation and misery. He died on 3rd February, 1925.
References
Nahin, P.: Oliver Heaviside: Sage in Solitude, IEEE Press, 1988.
Overview
An examination of a system’s frequency response is useful in several respects.
It can help us determine things such as the DC gain and bandwidth, how well a
system meets the stability criterion, and whether the system is robust to
disturbance inputs.
Despite all this, remember that the time- and frequency-domain are
inextricably related – we can’t alter the characteristics of one without affecting
the other. This will be demonstrated for a second-order system later.
H(s) = K/(s + p)  (5A.2)
The sinusoidal steady state corresponds to s = jω. Therefore, Eq. (5A.2) is, for the sinusoidal steady state:

H(ω) = K/(jω + p)  (5A.3)

In terms of magnitude and phase:

H(ω) = |H(ω)| e^(j∠H(ω))  (5A.4a)

H(ω) = |H(ω)| ∠H(ω)  (5A.4b)

If the logarithm (base 10) of the magnitude is multiplied by 20, then we have the gain of the transfer function in decibels (dB):

|H(ω)|_dB = 20 log|H(ω)| dB  (5A.5)
The magnitude of the transfer function in dB

For the first-order lowpass RC circuit, with ω₀ = 1/RC:

H(ω) = 1/(1 + jω/ω₀)  (5A.6)

|H(ω)| = 1/√(1 + (ω/ω₀)²)  (5A.7)

∠H(ω) = −tan⁻¹(ω/ω₀)  (5A.8)
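Eqs. (5A.7) and (5A.8) are easy to check numerically; the corner frequency ω₀ = 1000 rad/s below is an assumed value:

```python
import math

w0 = 1000.0  # assumed corner frequency, rad/s

def mag(w):
    # |H| = 1 / sqrt(1 + (w/w0)^2), Eq. (5A.7)
    return 1 / math.sqrt(1 + (w / w0) ** 2)

def phase_deg(w):
    # angle of H = -atan(w/w0), Eq. (5A.8), in degrees
    return -math.degrees(math.atan(w / w0))

print(mag(0))         # 1.0 at DC
print(mag(w0))        # 0.707... at the corner (the -3 dB point)
print(phase_deg(w0))  # -45 degrees at the corner
```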
Magnitude Responses
A magnitude response is the magnitude of the transfer function for a sinusoidal steady-state input, plotted against the frequency of the input. Magnitude responses can be classified according to their particular properties. To look at these properties, we will use linear magnitude versus linear frequency plots. For the simple first-order RC circuit that you are so familiar with, the magnitude function given by Eq. (5A.7) has three frequencies of special interest corresponding to these values of |H(ω)|:

|H(0)| = 1
|H(ω₀)| = 1/√2 ≈ 0.707
|H(∞)| = 0  (5A.9)
A simple lowpass filter: the first-order RC circuit (series R, shunt C) and its magnitude response, which falls from |H| = 1 at DC through 1/√2 at ω₀.

Figure 5A.1
An idealisation of the response in Figure 5A.1, known as a brick wall, and the ideal filter that produces it are shown below: the magnitude is 1 in the passband (from 0 to the cutoff ω₀) and 0 in the stopband.

Figure 5A.2
For the ideal filter, the output voltage remains fixed in amplitude until a critical frequency is reached, called the cutoff frequency, ω₀. At that frequency, and for all higher frequencies, the output is zero. The range of frequencies with output is called the passband; the range with no output is called the stopband. The obvious classification of the filter is a lowpass filter.
If the positions of the resistor and capacitor in the circuit of Figure 5A.1 are
interchanged, then the resulting circuit is:
(C in series with the source, R across the output)
Figure 5A.3
H(s) = s/(s + 1/RC)  (5A.10)

H(ω) = (jω/ω₀)/(1 + jω/ω₀)  (5A.11)

|H(0)| = 0
|H(ω₀)| = 1/√2 ≈ 0.707
|H(∞)| = 1  (5A.12)
A simple highpass filter: the CR circuit (series C, shunt R) and its magnitude response, which rises from 0 at DC through 1/√2 at ω₀ towards 1 at high frequencies.

Figure 5A.4
This filter is classified as a highpass filter. The ideal brick wall highpass filter
is shown below:
An ideal highpass filter: the magnitude is 0 in the stopband below the cutoff ω₀, and 1 in the passband above it.

Figure 5A.5
Phase Responses
Like magnitude responses, phase responses are only meaningful when we look at sinusoidal steady-state signals. A transfer function for a sinusoidal input is:

H(ω) = Y(ω)/X(ω) = (|Y|∠θ)/(|X|∠0°)  (5A.13)

For a first-order system with a single zero and pole:

H(ω) = K(jω + z)/(jω + p)  (5A.14)

We use the sign of this phase angle to classify systems. Those giving positive phase are known as lead systems, those giving negative phase as lag systems.

For the lowpass RC circuit of Figure 5A.1, the phase is:

φ = −tan⁻¹(ω/ω₀)  (5A.16)
Lagging phase response for a simple lowpass filter: the phase falls from 0° at DC through −45° at ω₀ towards −90° at high frequencies.

Figure 5A.6
For the circuit in Figure 5A.3, show that the phase is given by:

φ = 90° − tan⁻¹(ω/ω₀)  (5A.17)
The phase response has the same shape as Figure 5A.6 but is shifted upward
by 90 :
Leading phase response for a simple highpass filter: the phase falls from 90° at DC through 45° at ω₀ towards 0° at high frequencies.

Figure 5A.7
The angle is positive for all ω, and so the circuit is a lead circuit.
The standard lowpass second-order transfer function is:

H(s) = ωₙ²/(s² + 2ζωₙs + ωₙ²)  (5A.18)

With s = jω:

H(ω) = ωₙ²/(ωₙ² − ω² + j2ζωₙω)  (5A.19)

The magnitude response of a lowpass second-order transfer function is therefore:

|H(jω)| = ωₙ²/√((ωₙ² − ω²)² + (2ζωₙω)²)  (5A.20)
Typical magnitude and phase responses of a lowpass second-order transfer function: the magnitude peaks near ω_r = ωₙ√(1 − 2ζ²), reaching about 1/(2ζ), then falls at −40 dB/decade; the phase goes from 0° through −90° at ωₙ towards a −180° asymptote for all ζ.

Figure 5A.8
|H(0)| = 1,  |H(ωₙ)| = 1/(2ζ),  |H(∞)| = 0  (5A.22)

and for large ω, the magnitude decreases at a rate of −40 dB per decade, which is sometimes described as two-pole rolloff.
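A numerical sketch of Eq. (5A.20) and the two-pole rolloff, with assumed values ωₙ = 1 rad/s and ζ = 0.2:

```python
import math

wn, zeta = 1.0, 0.2   # assumed natural frequency and damping ratio

def mag(w):
    # Eq. (5A.20): |H| = wn^2 / sqrt((wn^2 - w^2)^2 + (2*zeta*wn*w)^2)
    return wn**2 / math.sqrt((wn**2 - w**2) ** 2 + (2 * zeta * wn * w) ** 2)

print(mag(0))    # 1 at DC
print(mag(wn))   # 1/(2*zeta) = 2.5 at the natural frequency

# two-pole rolloff: going one further decade out drops the gain ~40 dB
drop_dB = 20 * math.log10(mag(100 * wn)) - 20 * math.log10(mag(10 * wn))
print(drop_dB)   # close to -40
```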
When the denominator of the lowpass second-order transfer function is factored,

s² + 2ζωₙs + ωₙ² = (s − p)(s − p*)  (5A.24)
the poles are located on a circle of radius ωₙ and at an angle with respect to the negative real axis of θ = cos⁻¹ζ. These complex conjugate pole locations are shown below:
(s-plane: pole p at angle θ from the negative real axis and its conjugate p* below, both a distance ωₙ from the origin)
Figure 5A.9
In terms of the poles shown in Figure 5A.9, the transfer function is:

H(s) = ωₙ²/((s − p)(s − p*))  (5A.25)
Lowpass second-order transfer function using pole factors

With s = jω, the pole factors can be written in polar form:

jω − p = m₁∠φ₁  and  jω − p* = m₂∠φ₂  (5A.26)
Polar representation of the pole factors

and the magnitude function is:

|H(ω)| = ωₙ²/(m₁m₂)  (5A.27)

and:

∠H(ω) = −(φ₁ + φ₂)  (5A.28)
Determining the magnitude and phase response from the s-plane: for each frequency jω on the imaginary axis, the vectors m₁∠φ₁ and m₂∠φ₂ are drawn from the poles p and p* to jω, and the resulting |H(ω)| and ∠H(ω) are plotted (|H| peaks at about 1/(2ζ) near ωₙ, and the phase passes through −90° near ωₙ on its way to −180°).

Figure 5A.10

From this construction we can see that the short length of m₁ near the frequency ωₙ is the reason why the magnitude function reaches a peak near ωₙ. These plots are useful in visualising the frequency response of the circuit.
Bode Plots
Bode* plots are plots of the magnitude function |H|_dB = 20 log|H| and the phase function ∠H(ω), drawn against frequency on a logarithmic scale. We normally don't deal with equations when drawing Bode plots – we rely on our knowledge of the asymptotic approximations for the handful of factors that go to make up a transfer function.
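The quality of the asymptotic approximation can be sketched for a single real-pole factor (the corner frequency below is an assumed value); the worst-case magnitude error, at the corner itself, is about 3 dB:

```python
import math

w0 = 1.0  # assumed corner frequency of the factor 1/(s/w0 + 1)

def exact_dB(w):
    # exact magnitude of 1/(jw/w0 + 1) in dB
    return -10 * math.log10(1 + (w / w0) ** 2)

def asymptote_dB(w):
    # Bode asymptote: 0 dB below the corner, -20 dB/decade above it
    return 0.0 if w <= w0 else -20 * math.log10(w / w0)

# the approximation is worst at the corner frequency
err = asymptote_dB(w0) - exact_dB(w0)
print(err)  # 10*log10(2), about 3.01 dB
```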
*
Dr. Hendrik Bode grew up in Urbana, Illinois, USA, where his name is pronounced boh-dee.
Purists insist on the original Dutch boh-dah. No one uses bohd.
For each transfer function factor we use a straight-line asymptote for the magnitude and a linear approximation for the phase, drawn over the range 0.01ωₙ to 100ωₙ. The standard factors are:

1. the constant gain, K;

2. a pole at the origin, (s/ωₙ)⁻¹;

3. a real pole, (s/ωₙ + 1)⁻¹;

4. a complex pole pair, ((s/ωₙ)² + 2ζ(s/ωₙ) + 1)⁻¹.

Zero factors use the same asymptotes reflected about the 0 dB and 0° axes.
Example

We wish to design a band enhancement filter: the gain is 0 dB below ω = 10² rad/s and above ω = 10⁵ rad/s, and 20 dB between ω = 10³ rad/s and ω = 10⁴ rad/s (frequency on a log scale).
A band enhancement filter

Figure 5A.11

The Bode plot can be decomposed into four first-order factors with corner frequencies at 10², 10³, 10⁴ and 10⁵ rad/s, marked 1 to 4 below.
Decomposing a Bode plot into first-order factors

Figure 5A.12
Those marked 1 and 4 represent zero factors, while those marked 2 and 3 are pole factors. The pole-zero plot corresponding to these factors is shown below, with zeros at −10² and −10⁵ rad/s and poles at −10³ and −10⁴ rad/s on the negative real axis:

Figure 5A.13

H(ω) = (1 + jω/10²)(1 + jω/10⁵) / ((1 + jω/10³)(1 + jω/10⁴))  (5A.29)

H(s) = (1 + s/10²)(1 + s/10⁵) / ((1 + s/10³)(1 + s/10⁴))  (5A.30)
The transfer function corresponding to the Bode plot
A realisation of the specifications: a cascade of first-order op-amp sections with normalized resistances 10⁻², 10⁻³, 10⁻⁵ and 10⁻⁴ Ω and capacitances of 1 F.

Figure 5A.14
Magnitude scaling is required to get realistic element values:

C_new = C_old/k_m  and  R_new = k_m R_old  (5A.32)

Since the capacitors are to have the value 10 nF, this means k_m = 10⁸. The element values that result are shown below and the design is complete:
A realistic implementation of the specifications: the resistors become 1 MΩ, 100 kΩ, 1 kΩ and 10 kΩ, and all capacitors become 10 nF.

Figure 5A.15
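The scaling arithmetic can be sketched as follows (the normalized values R = 10⁻² Ω and C = 1 F are taken from the example); the key point is that the RC product, and hence every corner frequency, is unchanged:

```python
km = 1e8                   # chosen so that C_new = 10 nF
R_old, C_old = 1e-2, 1.0   # normalized element values from the example

R_new = km * R_old         # Eq. (5A.32): becomes 1 megohm
C_new = C_old / km         # Eq. (5A.32): becomes 10 nF

print(R_new)               # 1000000.0
print(C_new)               # 1e-08

# magnitude scaling leaves the RC product (corner frequency) unchanged
print(abs(R_old * C_old - R_new * C_new) < 1e-12)  # True
```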
In this simple example, the response only required placement of the poles and
zeros on the real axis. However, complex pole-pair placement is not unusual in
design problems.
Digital Filters
Digital filtering involves sampling, quantising and coding of the input analog signal (using an analog-to-digital converter, or ADC for short). Once we have converted voltages to mere numbers, we are free to do any processing on them that we desire. Usually, the signal's spectrum is found using a fast Fourier transform, or FFT. The spectrum can then be modified by scaling the amplitudes and adjusting the phase of each sinusoid. An inverse FFT can then be performed, and the processed numbers are converted back into analog form (using a digital-to-analog converter, or DAC). In modern digital signal processors, an operation corresponding to a fast convolution is also sometimes employed – that is, the signal is convolved in the time-domain in real-time.
The components of a digital filter – note that analog filters are still needed on either side of the digital hardware:

vi → anti-alias filter → ADC → digital signal processor → DAC → reconstruction filter → vo
Figure 5A.16
The digital signal processor can be custom built digital circuitry, or it can be a general purpose computer. There are many advantages of digitally processing analog signals:
1. A digital filter may be just a small part of a larger system, so it makes sense
to implement it in software rather than hardware.
9. Filter responses can be made which always have linear phase (constant
delay), regardless of the magnitude response.
Some disadvantages are:
1. Processor speed limits the frequency range over which digital filters can be
used (although this limit is continuously being pushed back with ever faster
processors).
2. Analog filters (and signal conditioning) are still necessary to convert the
analog signal to digital form and back again.
Summary
A frequency response consists of two parts – a magnitude response and a
phase response. It tells us the change in the magnitude and phase of a
sinusoid at any frequency, in the steady-state.
Bode plots are magnitude (dB) and phase responses drawn on a semi-log
scale, enabling the easy analysis or design of high-order systems.
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB®, Prentice-Hall, 1997.
Exercises
1.
With respect to a reference frequency f 0 20 Hz , find the frequency which is
2.
Express the following magnitude ratios in dB: (a) 1, (b) 40, (c) 0.5
3.
Draw the approximate Bode plots (both magnitude and phase) for the transfer
functions shown. Use MATLAB® to draw the exact Bode plots and compare.
Note that the magnitude plots for the transfer functions (c) and (d); (e) and (f)
are the same. Why?
4.
Prove that if G(s) has a single pole at s = −1/τ, the asymptotes of the log magnitude response versus log frequency intersect at ω = 1/τ. Prove this not only analytically but also graphically using MATLAB®.
5.
Make use of the property that the logarithm converts multiplication and
division into addition and subtraction, respectively, to draw the Bode plot for:
G(s) = (100s + 1)/(s(0.01s + 1))
Use asymptotes for the magnitude response and a linear approximation for the
phase response.
6.
Draw the exact Bode plot using MATLAB® (magnitude and phase) for
G(s) = 100s²/(s² + 2ζωₙs + ωₙ²)

for the given values of ζ and ωₙ. Compare this plot with the approximate Bode plot (which ignores the value of ζ).
7.
Given
206.66s 1
(a) G s
s 0.1s 1 94.6s 1
2
(b) G s
4 s 1 .5 s 2
s 2 10 s
Draw the approximate Bode plots and from these graphs find |G| and ∠G at (i) ω = 0.1 rad s⁻¹, (ii) ω = 1 rad s⁻¹, (iii) ω = 10 rad s⁻¹, (iv) ω = 100 rad s⁻¹.
8.
The experimental responses of two systems are given below. Plot the Bode
diagrams and identify the transfer functions.
(a)
ω (rad s⁻¹)   |G₁| (dB)   ∠G₁ (°)
0.1           40          -92
0.2           34          -95
0.5           25          -100
1             20          -108
2             14          -126
3             10          -138
5             2           -160
10            -9          -190
20            -23         -220
30            -32         -235
40            -40         -243
50            -46         -248
100           -64         -258

(b)
ω (rad s⁻¹)   |G₂| (dB)   ∠G₂ (°)
0.01          -26         87
0.02          -20         84
0.04          -14         79
0.07          -10         70
0.1           -7          61
0.2           -3          46
0.4           -1          29
0.7           -0.3        20
1             0           17
2             0           17
4             0           25
7             -2          36
10            -3          46
20            -7          64
40            -12         76
100           -20         84
500           -34         89
1000          -40         89
9.
Given G(s) = K/(s(s + 5))
(a) Plot the closed loop frequency response of this system using unity feedback
when K 1 . What is the –3 dB bandwidth of the system?
(b) Plot the closed loop frequency response when K is increased to K 100 .
What is the effect on the frequency response?
10.
The following measurements were taken for an open-loop system:
(i) ω = 1, |G| = 6 dB, ∠G = 25°
(ii) ω = 2, |G| = 18 dB, ∠G = 127°
feedback arrangement.
11.
An amplifier has the following frequency response. Find the transfer function.
(Bode plots of the amplifier: |H(ω)| in dB, spanning −40 to +40 dB, and arg H(ω) in degrees, spanning −100° to +100°, both plotted against ω from 10⁰ to 10⁵ rad/s on a log scale.)
Overview
Control systems employing feedback usually operate to bring the output of the
system being controlled “in line” with the reference input. For example, a maze
rover may receive a command to go forward 4 units – how does it respond?
Can we control the dynamic behaviour of the rover, and if we can, what are the
limits of the control? Obviously we cannot get a rover to move infinitely fast,
so it will never follow a step input exactly. It must undergo a transient – just
like an electric circuit with storage elements. However, with feedback, we may
be able to change the transient response to suit particular requirements, like
time taken to get to a certain position within a small tolerance, not
“overshooting” the mark and hitting walls, etc.
Steady-State Error
One of the main objectives of control is for the system output to “follow” the
system input. The difference between the input and output, in the steady-state,
is termed the steady-state error:
(R(s) → E(s) → G(s) → C(s), with unity feedback)
Figure 5B.1
The type of the control system, or simply system type, is the number of poles that G(s) has at s = 0. For example:
System type defined

G(s) = 10(1 + 3s)/(s(s² + 2s + 2))   type 1

G(s) = 4/(s³(s + 2))   type 3  (5B.2)
When the input to the control system in Figure 5B.1 is a step function with magnitude R, then R(s) = R/s and the steady-state error depends on the step-error constant (only defined for a step-input):

K_P = lim (s→0) G(s)  (5B.4)

e_ss = R/(1 + K_P)  (5B.5)

We see that for the steady-state error to be zero, we require K_P = ∞. This will only occur if there is at least one pole of G(s) at the origin. We can summarise the errors of a unity-feedback system to a step-input as:

type 0 system:            e_ss = R/(1 + K_P) = constant
type 1 or higher system:  e_ss = 0  (5B.6)
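A numerical sketch of Eqs. (5B.4)–(5B.6); the two loop transfer functions below are assumed examples, and the limit is approximated by evaluating G(s) at a very small s:

```python
# approximate K_P = lim_{s->0} G(s) by evaluating G at a very small s
def step_error_constant(G, s=1e-9):
    return G(s)

G_type0 = lambda s: 4 / (s + 2)        # assumed type-0 example: no pole at s = 0
G_type1 = lambda s: 4 / (s * (s + 2))  # assumed type-1 example: one pole at s = 0

R = 1.0  # step magnitude

Kp0 = step_error_constant(G_type0)
ess0 = R / (1 + Kp0)       # type 0: finite error R/(1 + K_P)
print(ess0)                # about 1/3

Kp1 = step_error_constant(G_type1)
ess1 = R / (1 + Kp1)       # type 1: K_P -> infinity, so e_ss -> 0
print(ess1)
```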
Transient Response
Consider a maze rover (MR) described by the differential equation:

dv(t)/dt + (k_f/M) v(t) = (1/M) x(t)  (5B.7)
Maze rover force/velocity differential equation

where v(t) is the velocity, x(t) is the driving force, M is the mass and k_f is the friction coefficient.

Figure 5B.2

For position output, the MR transfer function is the velocity transfer function in cascade with an integrator:

X(s) → (1/M)/(s + k_f/M) → V(s) → 1/s → C(s)

Figure 5B.3

X(s) → (1/M)/(s(s + k_f/M)) → C(s)

Figure 5B.4
The whole point of modelling the rover is so that we can control it. Suppose we
wish to build a maze rover position control system. We will choose for
simplicity a unity-feedback system, and place a “controller” in the feed-
forward path in front of the MR’s input. Such a control strategy is termed
series compensation.
Simple MR position control scheme:

(R(s) → E(s) → controller G_c(s) → X(s) → plant (1/M)/(s(s + k_f/M)) → C(s), with unity feedback)
Figure 5B.5
C(s)/R(s) = (K_P/M)/(s² + (k_f/M)s + K_P/M)  (5B.9)

which can be manipulated into the standard form for a second-order transfer function:

C(s)/R(s) = ωₙ²/(s² + 2ζωₙs + ωₙ²)  (5B.10)
Second-order control system

Although this simple controller can only vary ωₙ² = K_P/M, with 2ζωₙ = k_f/M fixed (why?), we can still see what sort of performance this controller has in the time-domain as K_P is varied. In fact, the goal of the controller design is to choose a suitable K_P.
Controllers change the transfer function – and therefore the time-domain response
For a unit-step function input, R(s) = 1/s, and the output response of the system is obtained by taking the inverse Laplace transform of the output transform:
C(s) = ωₙ²/(s(s² + 2ζωₙs + ωₙ²))  (5B.11)

We have seen previously that the result for an underdamped system is:

c(t) = 1 − (e^(−ζωₙt)/√(1−ζ²)) sin(ωₙ√(1−ζ²) t + cos⁻¹ζ)  (5B.12)
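Eq. (5B.12) can be checked numerically; ζ = 0.5 and ωₙ = 2 rad/s are assumed values:

```python
import math

zeta, wn = 0.5, 2.0                  # assumed damping ratio and natural frequency
wd = wn * math.sqrt(1 - zeta ** 2)   # damped natural frequency

def c(t):
    # Eq. (5B.12): unit-step response of the underdamped second-order system
    return 1 - (math.exp(-zeta * wn * t) / math.sqrt(1 - zeta ** 2)) \
               * math.sin(wd * t + math.acos(zeta))

print(c(0))      # 0 -- the response starts from rest
print(c(50))     # 1 -- it settles to the steady-state value

# the first peak occurs at t = pi/wd; the overshoot above 1 is
# e^(-zeta*pi/sqrt(1-zeta^2)), about 0.163 for zeta = 0.5
peak = c(math.pi / wd)
print(peak - 1)
```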
Step response definitions: the delay time t_d is the time to reach 50% of the final value, the rise time t_r the time to go from 10% to 90%, the peak time t_p the time of the first overshoot, and the settling time t_s the time after which the response stays within a band (here ±5%) about the steady-state value c_ss.

Figure 5B.6

|c(t) − c_ss|/c_ss ≤ 5%  for t ≥ t_s
p₁, p₂ = −ζωₙ ± jωₙ√(1−ζ²) = −σ ± jω_d  (5B.14)

ζ = σ/ωₙ  (so that σ = ζωₙ)  (5B.18)
Damping ratio defined

(s-plane: poles at −σ ± jω_d, each a distance ωₙ from the origin)

Figure 5B.7
Unit-step responses c(t) plotted against ωₙt for damping ratios ζ = 0.5, 1.0, 1.5 and 2.0.

Figure 5B.8
Settling Time
The settling time, t s , is the time required for the output to come within and
stay within a given band about the actual steady-state value. This band is
usually expressed as a percentage p of the steady-state value. To derive an
estimate for the settling time, we need to examine the step-response more
closely.
The standard, second-order lowpass transfer function of Eq. (5B.10) has the s-plane plot shown below if 0 < ζ < 1:

(Pole locations showing the definition of the angle θ: poles at −ζωₙ ± jωₙ√(1−ζ²), each a distance ωₙ from the origin, at angle θ from the negative real axis)

Figure 5B.9

cos θ = ζωₙ/ωₙ = ζ  (5B.19)

In terms of θ, the step-response of Eq. (5B.12) is:

c(t) = 1 − (e^(−ζωₙt)/√(1−ζ²)) sin(ωₙ√(1−ζ²) t + θ)  (5B.20)
The curves 1 ± e^(−ζωₙt)/√(1−ζ²) are the envelope curves of the transient response for a unit-step input. The response curve c(t) always remains within a pair of the envelope curves, as shown below:

(Pair of envelope curves for the unit-step response of a lowpass second-order underdamped system; the envelopes decay with time constant T = 1/(ζωₙ))

Figure 5B.10
To determine the settling time, we need to find the time it takes for the response to fall within and stay within a certain band about the steady-state value. This time depends on ζ and ωₙ in a non-linear fashion, because of the oscillatory nature of the response:

(The curves 1 ± e^(−ζωₙt) about the steady-state value)

Figure 5B.11

The dashed curves in Figure 5B.11 represent the loci of maxima and minima of the step-response. The maxima and minima are found by differentiating the time response, Eq. (5B.20), and equating to zero:

dc(t)/dt = ζωₙ (e^(−ζωₙt)/√(1−ζ²)) sin(ωₙ√(1−ζ²)t + θ) − (e^(−ζωₙt)/√(1−ζ²)) ωₙ√(1−ζ²) cos(ωₙ√(1−ζ²)t + θ) = 0  (5B.21)

so that:

tan(ωₙ√(1−ζ²)t + θ) = √(1−ζ²)/ζ  (5B.22)

ωₙ√(1−ζ²)t + θ = tan⁻¹(√(1−ζ²)/ζ) + nπ  (5B.23)

From Figure 5B.9, the angle θ satisfies:

θ = tan⁻¹(√(1−ζ²)/ζ)  (5B.24)
Substituting Eq. (5B.24) into Eq. (5B.23), and solving for t, we obtain:

t = nπ/(ωₙ√(1−ζ²)),  n = 0, 1, 2, …  (5B.25)

Eq. (5B.25) gives the time at which the maxima and minima of the step-response occur. Since c(t) in Eq. (5B.20) is only defined for t ≥ 0, Eq. (5B.25) only gives valid results for t ≥ 0.
Substituting Eq. (5B.25) into Eq. (5B.20), the values of the response at the maxima and minima are:

c(nπ/ω_d) = 1 − (e^(−ζnπ/√(1−ζ²))/√(1−ζ²)) sin(nπ + θ)  (5B.26)

Since sin(nπ + θ) = −sin θ for n odd and +sin θ for n even:

c₁(nπ/ω_d) = 1 + (1/√(1−ζ²)) e^(−ζnπ/√(1−ζ²)) sin θ,  n odd  (5B.29a)

c₂(nπ/ω_d) = 1 − (1/√(1−ζ²)) e^(−ζnπ/√(1−ζ²)) sin θ,  n even  (5B.29b)

and since:

sin θ = √(1−ζ²)  (5B.30)

these become:

c₁(nπ/ω_d) = 1 + e^(−ζnπ/√(1−ζ²)),  n odd  (5B.31a)

c₂(nπ/ω_d) = 1 − e^(−ζnπ/√(1−ζ²)),  n even  (5B.31b)
Eq. (5B.31a) and Eq. (5B.31b) are, respectively, the relative maximum and
minimum values of the step-response, with the times for the maxima and
minima given by Eq. (5B.25). But these values will be exactly the same as
those given by the following exponential curves:
c₁(t) = 1 + e^(−ζωₙt)  (5B.32a)

c₂(t) = 1 − e^(−ζωₙt)  (5B.32b)
Exponential curves passing through the maxima and minima

evaluated at the times:

t = nπ/(ωₙ√(1−ζ²)),  n = 0, 1, 2, …  (5B.33)
Since the exponential curves, Eqs (5B.32), pass through the maxima and
minima of the step-response, they can be used to approximate the extreme
bounds of the step-response (note that the response actually goes slightly
outside the exponential curves, especially after the first peak – the exponential
curves are only an estimate of the bounds).
Graph of an underdamped step-response showing the exponential curves 1 ± e^(−ζωₙt) bounding the maxima and minima about the steady-state value, and the settling time t_s:

Figure 5B.12
The exponential terms in Eqs. (5B.32) represent the deviation from the steady-state value. Since the exponential response is monotonic, it is sufficient to calculate the time when the magnitude of the exponential is equal to the required error ε. This time is the settling time, t_s:

e^(−ζωₙt_s) = ε  (5B.34)

Taking the natural logarithm of both sides and solving for t_s gives:

t_s = −ln ε/(ζωₙ)  (5B.35)

where ε = p/100.
Peak Time

The peak time, t_p, at which the response has a maximum overshoot, is given by the first odd solution of Eq. (5B.25):

t_p = π/(ωₙ√(1−ζ²)) = π/ω_d  (5B.36)
Peak time for a second-order system

Percent Overshoot

The magnitudes of the overshoots can be determined using Eq. (5B.31a). The maximum value is obtained by letting n = 1. Therefore, the maximum overshoot is:

maximum overshoot = e^(−ζπ/√(1−ζ²))  (5B.38)

and the percent overshoot is:

P.O. = 100 e^(−ζπ/√(1−ζ²)) %  (5B.39)
Percent overshoot for a second-order system
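The two design formulas can be sketched numerically (ζ = 0.5 and ωₙ = 2 rad/s are assumed values; the settling time uses Eq. (5B.35) with p = 5, i.e. a ±5% band):

```python
import math

def percent_overshoot(zeta):
    # P.O. = 100 * e^(-zeta*pi/sqrt(1 - zeta^2)), Eq. (5B.39)
    return 100 * math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))

def settling_time(zeta, wn, p=5):
    # t_s = -ln(eps)/(zeta*wn) with eps = p/100, Eq. (5B.35)
    return -math.log(p / 100) / (zeta * wn)

print(percent_overshoot(0.5))     # about 16.3 percent
print(settling_time(0.5, 2.0))    # about 3.0 seconds
```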
Summary
The time-domain response of a system is important in control systems. In
“set point” control systems, the time-domain response is the step response
of the system. We usually employ feedback in a system so that the output
tracks the input with some acceptable steady-state error.
Settling time: t_s = −ln ε/(ζωₙ)

Peak time: t_p = π/(ωₙ√(1−ζ²)) = π/ω_d
References
Kuo, B.: Automatic Control Systems 7th ed., Prentice-Hall, 1995.
Exercises
1.
2.
Determine a second-order, all-pole transfer function which will meet the
following specifications for a step input:
(b) 5% overshoot
3.
Given:
C(s) = ωₙ²/(s(s² + 2ζωₙs + ωₙ²))
4.
The experimental zero-state response to a unit-step input of a second-order all-
pole system is shown below:
(Plot of c(t) versus t: the first overshoot has height a above the final value 1.0 at t = 0.4, and the next extremum differs from 1.0 by b at t = 0.8.)
(a) Derive an expression (in terms of a and b) for the damping ratio.
(b) Determine values for the natural frequency and damping ratio, given a = 0.4 and b = 0.08.
5.
Given:
(i) G₁(s) = 10/(s + 10)

(ii) G₂(s) = 1/(s + 1)

(iii) G₃(s) = 10/((s + 1)(s + 10))
6.
Find an approximate first-order model of the transfer function:

G(s) = 4/(s² + 3s + 2)

Hint: A suitable criterion to use that is common in control theory is the integral of the square of the error, ISE, which is defined as:

I = ∫₀ᵀ e²(t) dt
The upper limit T is a finite time chosen somewhat arbitrarily so that the
integral approaches a steady-state value.
7.
Automatically controlled machine-tools form an important aspect of control
system application. The major trend has been towards the use of automatic
numerically controlled machine tools using direct digital inputs. Many
CAD/CAM tools produce numeric output for the direct control of these tools,
eliminating the tedium of repetitive operations required of human operators,
and the possibility of human error. The figure below illustrates the block
diagram of an automatic numerically controlled machine-tool position control
system, using a computer to supply the reference signal.
(Block diagram: the computer reference R(s) drives a servo amplifier K_a = 9 in cascade with a servo motor 1/(s(s + 1)) to produce C(s), with unity position feedback.)
(b) What is the percent overshoot and time to peak resulting from the
application of a unit-step input?
(c) What is the steady-state error resulting from the application of a unit-step
input?
(d) What is the steady-state error resulting from the application of a unit-ramp
r t tu t input?
8.
Find the steady-state errors to (a) a unit-step, and (b) a unit-ramp input, for the
following feedback system:
(Block diagram: R(s) drives the forward block 1/(s + 1) to produce C(s), with a feedback block 2/(3s).)
Note that the error is defined as the difference between the actual input r(t) and the actual output c(t).
9.
Given
G(s) = 0.1/(s + 0.1)

(Closed loop: R(s) → K → G(s) → C(s), with unity feedback)
Sketch the time responses of the closed-loop system and find the system time constants when (i) K = 0.1, (ii) K = 1 and (iii) K = 10.
Overview
We apply feedback in control systems for a variety of reasons. The primary
purpose of feedback is to more accurately control the output - we wish to
reduce the difference between a reference input and the actual output. When
the input signal is a step, this is called set-point control.
Recall that the basic feedback system was described by the block diagram:
General feedback control system:

(R(s) → E(s) → G(s) → C(s), with B(s) = H(s)C(s) subtracted from R(s) to form E(s))

Figure 6A.1

C(s)/R(s) = G(s)/(1 + G(s)H(s))  (6A.1)
General feedback control system transfer function
The only way we can improve “system performance” – whatever that may be –
is by choosing a suitable H s or G s . Some of the criteria for choosing
Transient Response
One of the most important characteristics of control systems is their transient
response. We might desire a “speedy” response, or a response without an
“overshoot” which may be physically impossible or cause damage to the
system being controlled (e.g. Maze rover hitting a wall!).
Figure 6A.2
controller plant
Figure 6A.3
Example
We have already seen that the MR can be described by the following block
diagram (remember it’s just a differential equation!):
MR transfer function for velocity output:

X(s) → (1/M)/(s + k_f/M) → V(s)
Figure 6A.4
expression for V(s). Since the MR's transfer function does not contain a K/s term, then the only way it can appear in the expression for V(s) is if X(s) = K/s. Then:

V(s) = G(s)X(s)
     = (1/M)/(s + k_f/M) · K/s
     = (K/k_f)/s − (K/k_f)/(s + k_f/M)  (6A.2)

Taking the inverse Laplace transform:

v(t) = (K/k_f)(1 − e^(−(k_f/M)t)),  t ≥ 0  (6A.3)
If K is set to v₀k_f, then:

v(t) = v₀(1 − e^(−(k_f/M)t)),  t ≥ 0  (6A.4)
Step response of MR velocity
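A numerical sketch of the first-order step response in Eq. (6A.4); the rover mass M, friction coefficient k_f and target velocity v₀ below are assumed values:

```python
import math

M, kf, v0 = 2.0, 0.5, 1.0   # assumed rover mass, friction and target velocity
tau = M / kf                # time constant of Eq. (6A.4)

def v(t):
    # Eq. (6A.4): v(t) = v0 * (1 - e^(-(kf/M)*t))
    return v0 * (1 - math.exp(-(kf / M) * t))

print(v(tau))      # one time constant: ~63.2% of v0
print(v(5 * tau))  # five time constants: essentially settled (~99.3%)
```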
Simple open-loop MR controller:

(R(s) → gain k_f → X(s) → plant (1/M)/(s + k_f/M) → V(s))

Figure 6A.5

This is open-loop control: the control signal depends only on the reference signal and not on the output. This type of control is deficient in several aspects. Note that the reference input has to be converted to the MR input through a gain stage equal to k_f, which must be known. Also, by examining Eq. (6A.4), we can see that we have no control over how fast the velocity converges to v₀.
Open-loop control has many disadvantages
Closed-Loop Control
To better control the output, we’ll implement a closed-loop control system. For
simplicity, we’ll use a unity-feedback system, with our controller placed in the
feed-forward path:
Closed-loop MR controller:

(R(s) → E(s) → controller G_c(s) → X(s) → plant (1/M)/(s + k_f/M) → V(s), with unity feedback)
Figure 6A.6
The simplest controller is G_c(s) = K_P; this is called proportional control since the control signal x(t) is directly proportional to the error signal e(t).
Proportional controller (P controller):

(R(s) → E(s) → K_P → X(s) → plant (1/M)/(s + k_f/M) → V(s), with unity feedback)
Figure 6A.7
With G_c(s) = K_P and R(s) = v₀/s, the closed-loop response V(s) = [G_c(s)G(s)/(1 + G_c(s)G(s))] R(s) is:

V(s) = (K_P/M)/(s + (k_f + K_P)/M) · v₀/s
     = (K_P v₀/(k_f + K_P))/s − (K_P v₀/(k_f + K_P))/(s + (k_f + K_P)/M)  (6A.5)

v(t) = (K_P v₀/(k_f + K_P))(1 − e^(−((k_f + K_P)/M)t)),  t ≥ 0  (6A.6)
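The steady-state error implied by Eq. (6A.6) can be sketched numerically for assumed plant values; the error v₀ − K_P v₀/(k_f + K_P) shrinks as K_P grows:

```python
M, kf, v0 = 2.0, 0.5, 1.0   # assumed plant values and reference velocity

def v_ss(Kp):
    # steady-state value of Eq. (6A.6)
    return Kp * v0 / (kf + Kp)

for Kp in (1.0, 10.0, 100.0):
    # steady-state error for each proportional gain
    print(Kp, v0 - v_ss(Kp))
```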
With a proportional controller, the steady-state velocity is K_P v₀/(k_f + K_P), which is not quite v₀, but we can make this error as small as desired by choosing a suitably large value for K_P.
One of the deficiencies of the simple P controller in controlling the maze rover
was that it did not have a zero steady-state error. This was due to the fact that
the overall feedforward system was Type 0, instead of Type 1. We can easily
make the overall feedforward system Type 1 by changing the controller so that
it has a pole at the origin:
Integral controller (I controller):

(R(s) → E(s) → K_I/s → X(s) → plant (1/M)/(s + k_f/M) → V(s), with unity feedback)
Figure 6A.8
T(s) = (K_I/(sM)) / (s + k_f/M + K_I/(sM))
     = (K_I/M) / (s² + (k_f/M)s + K_I/M)  (6A.7)
We can see straight away that the transfer function is 1 at DC (set s = 0 in the transfer function). This means that the output will follow the input in the steady-state (zero steady-state error). By comparing this second-order transfer function with the standard form:
T(s) = ωₙ²/(s² + 2ζωₙs + ωₙ²)  (6A.8)
we can see that the controller is only able to adjust the natural frequency, or the distance of the poles from the origin, ωₙ. This may be good enough, but we
Proportional plus Integral controller (PI controller):

(R(s) → E(s) → K_P + K_I/s → X(s) → plant (1/M)/(s + k_f/M) → V(s), with unity feedback)
Figure 6A.9
The controller in this case causes the plant to respond to both the error and the
integral of the error. With this type of control, the overall transfer function of
the closed-loop system is:
T(s) = \frac{(K_P + K_I/s)\,(1/M)/(s + k_f/M)}{1 + (K_P + K_I/s)\,(1/M)/(s + k_f/M)}
     = \frac{(K_P/M)s + K_I/M}{s^2 + ((k_f + K_P)/M)s + K_I/M}    (6A.9)
Again, we can see straight away that the transfer function is 1 at DC (set s = 0
in the transfer function), and again that the output will follow the input in the
steady-state (zero steady-state error). We can now control the damping ratio ζ,
as well as the natural frequency ωn, independently of each other. But we also
have a zero in the numerator. Intuitively we can conclude that the response will
be similar to the response with integral control, but will also contain a term
which is the derivative of this response (we see multiplication by s – the zero –
as a derivative). We can analyse the response by rewriting Eq. (6A.9) as:
T(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}
     + \frac{K_P}{K_I}\cdot\frac{\omega_n^2\, s}{s^2 + 2\zeta\omega_n s + \omega_n^2}    (6A.10)

so that the total step response c(t) is the step response cI(t) of the I controller plus a scaled derivative of it:

c(t) = c_I(t) + \frac{K_P}{K_I}\,\frac{dc_I(t)}{dt}    (6A.11)
The figure below shows that the addition of the zero at s = -K_I/K_P reduces
the rise time and increases the maximum overshoot, compared to the I
controller step response:
[Figure 6A.10 — PI controller response to a unit-step input: plot of cI(t) rising to 1.00, the derivative term (KP/KI)dcI(t)/dt decaying to 0, and their sum c(t) = cI(t) + (KP/KI)dcI(t)/dt versus t.]
One of the best known controllers used in practice is the PID controller, where
the letters stand for proportional, integral, and derivative. The addition of a
derivative to the PI controller means that PID control contains anticipatory
control. That is, by knowing the slope of the error, the controller can anticipate
the direction of the error and use it to better control the process. The PID
controller transfer function is:
G_c(s) = K_P + K_D s + \frac{K_I}{s}    (6A.12)
There are established procedures for designing control systems with PID
controllers, in both the time and frequency-domains.
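As a sketch of how Eq. (6A.12) is typically realised in software, here is a minimal discrete-time PID loop. The gains, sample period and the first-order plant dy/dt = −y + x are illustrative assumptions, not values from the text; the integral uses a rectangular rule and the derivative a backward difference:

```python
def pid_step_response(KP, KI, KD, Ts=0.01, steps=2000, r=1.0):
    """Discrete realisation of Gc = KP + KI/s + KD*s (Eq. 6A.12)
    driving an assumed first-order plant dy/dt = -y + x (Euler)."""
    y = 0.0          # plant output
    integ = 0.0      # running integral of the error
    prev_e = r - y   # previous error, for the derivative term
    for _ in range(steps):
        e = r - y
        integ += e * Ts                  # rectangular-rule integral
        deriv = (e - prev_e) / Ts        # backward-difference derivative
        x = KP * e + KI * integ + KD * deriv   # control signal, Eq. (6A.12)
        prev_e = e
        y += Ts * (-y + x)               # Euler step of the plant
    return y

print(round(pid_step_response(2.0, 1.0, 0.0), 3))
```

Because of the integral term, the output settles to the reference with zero steady-state error — the "anticipatory" derivative term only shapes the transient.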
Disturbance Rejection
A major problem with open-loop control is that the output c(t) of the plant will
be "perturbed" by a disturbance input d(t). Since the control signal r(t) does
not depend on the plant output c(t) in open-loop control, the control signal
cannot compensate for the disturbance d(t). Closed-loop control, by contrast,
can compensate to some degree for disturbance inputs: feedback minimizes the
effect of disturbance inputs.
Example
[Figure 6A.11 — block diagram: the Maze Rover system with disturbance inputs D1(s), D2(s) and D3(s), injected at the summing junction, at the plant input, and at the output respectively; controller Gc(s), plant (1/M)/(s + kf/M).]

[Figure 6A.12 — block diagram: the same loop redrawn for analysis by superposition.]
By superposition, the output is:

V(s) = \frac{G_c(s)G(s)}{1 + G_c(s)G(s)}\left[R(s) + D_1(s)\right]
     + \frac{G(s)}{1 + G_c(s)G(s)}\,D_2(s)
     + \frac{1}{1 + G_c(s)G(s)}\,D_3(s)    (6A.13)

Each disturbance term is divided by 1 + Gc(s)G(s), so a large loop gain suppresses the effect of the disturbances on the output.
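The disturbance-rejection claim in Eq. (6A.13) can be checked numerically at DC. A minimal sketch — the plant and controller gains are illustrative numbers; the second call mimics an integral controller, whose gain Gc(0) → ∞:

```python
def closed_loop_gains(Gc, G):
    """DC gains multiplying R, D2 and D3 in Eq. (6A.13),
    evaluated for scalar values of Gc(s) and G(s) at s = 0."""
    L = Gc * G                              # loop gain Gc*G
    return L / (1 + L), G / (1 + L), 1 / (1 + L)

# proportional controller at DC: disturbances only attenuated
print(closed_loop_gains(10.0, 2.0))

# near-integral controller at DC (huge Gc(0)): disturbances rejected
gr, g2, g3 = closed_loop_gains(1e9, 2.0)
print(round(gr, 6), g2, g3)
```

With integral action the reference gain goes to 1 and the disturbance gains go to 0 — the closed loop both tracks and rejects at DC.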
Sensitivity
Sensitivity is a measure of how the characteristics of a system depend on the
variations of some component (or parameter) of the system: it defines how one
element of a system affects a characteristic of the whole system. The effect of
the parameter change can be expressed quantitatively in terms of a sensitivity
function.
System Sensitivity

The sensitivity of a transfer function T to a parameter α is defined as the ratio of the fractional change in T to the fractional change in α:

S_\alpha^T = \frac{\partial T / T}{\partial \alpha / \alpha} = \frac{\alpha}{T}\,\frac{\partial T}{\partial \alpha}    (6A.14)

For small changes about the nominal values T0 and α0:

S_\alpha^T \approx \frac{\Delta T / T_0}{\Delta \alpha / \alpha_0}    (6A.15)
Example
[Figure 6A.13 — block diagrams: an open-loop system R(s) → G(s) → C(s), and a closed-loop system with forward gain G(s), feedback H(s) and feedback signal B(s).]

For the closed-loop system, the transfer function is:

T(s) = \frac{G(s)}{1 + G(s)H(s)}    (6A.16)
For the open-loop system, T(s) = G(s), so:

\frac{\partial T}{\partial G} = 1    (6A.17)

and the system sensitivity to G for open-loop systems is:

S_G^T = \frac{G}{T}\,\frac{\partial T}{\partial G} = 1    (6A.18)
For the closed-loop system:

T(s) = \frac{G(s)}{1 + G(s)H(s)}

\frac{\partial T}{\partial G} = \frac{(1 + GH) - GH}{(1 + GH)^2} = \frac{1}{(1 + GH)^2}    (6A.19)

Then the system sensitivity to G for closed-loop systems is:

S_G^T = \frac{G}{T}\,\frac{\partial T}{\partial G}
      = \frac{G(1 + GH)}{G}\cdot\frac{1}{(1 + GH)^2} = \frac{1}{1 + GH}    (6A.20)
Show that S_H^T ≈ -1 for this example when the loop gain GH is large. This
result means that accurate and stable feedback components must be used in
order to receive the full benefits of feedback.
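Eq. (6A.20) is easy to verify by perturbing G slightly and measuring the fractional change in T. A small numerical sketch — the gain values G = 100, H = 1 are illustrative:

```python
def sensitivity_to_G(G, H, dG=1e-3):
    """Numerical estimate of S_G^T = (dT/T)/(dG/G) for T = G/(1+GH),
    checked against the analytical result 1/(1+GH) of Eq. (6A.20)."""
    T = G / (1 + G * H)
    T2 = (G + dG) / (1 + (G + dG) * H)
    return ((T2 - T) / T) / (dG / G)

G, H = 100.0, 1.0
est = sensitivity_to_G(G, H)
print(round(est, 4), round(1 / (1 + G * H), 4))   # the two should agree
```

A 1% change in the forward gain G therefore produces only a ~0.01% change in T here, whereas for the open-loop system (S = 1) it would produce a full 1% change.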
Summary
Various types of controllers, such as P, PI, PID are used to compensate the
transfer function of the plant in unity-feedback systems.
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB®, Prentice-Hall, 1997.
Exercises
1.
Show that the following block diagrams are equivalent:
[Block diagram 1: unity loop — forward gain G(s), feedback H(s), input R(s), output C(s). Block diagram 2: a single block G'(s) from R(s) to C(s).]

where G'(s) = \frac{G(s)}{1 + G(s)H(s)}
2.
Assume that an operational amplifier has an infinite input impedance, zero
output impedance and a very large gain K.
[Circuit: amplifier with very large gain K, input V1, output V2.]
3.
For the system shown:
[Block diagram: R(s) → gain K1 = 10 → G(s) = 100/(s(s+1)) → C(s), with feedback gain K2 = 10.]
4.
It is important to ensure passenger comfort on ships by stabilizing the ship’s
oscillations due to waves. Most ship stabilization systems use fins or hydrofoils
projecting into the water in order to generate a stabilization torque on the ship.
A simple diagram of a ship stabilization system is shown below:
[Block diagram of the ship stabilization system: reference R(s) → fin actuator Ka → fin torque Tf(s); disturbance torque Td(s) added; ship dynamics G(s) → roll θ(s); roll sensor K1 in the feedback path.]

Without stabilization, the oscillations continue for several cycles and the rolling
amplitude can reach 18° for the expected amplitude of waves in a normal sea.
Determine and compare the open-loop and closed-loop system for:
5.
The system shown uses a unity feedback loop and a PI compensator to control
the plant.
[Block diagram: R(s) → summing junction → PI compensator KP + KI/s → plant G(s) → C(s), unity feedback, with disturbance D(s) added at the plant input.]

Find the steady-state error, e(∞), for the following conditions [note that the
error is always the difference between the reference input r(t) and the plant
output c(t)]:

(a) G(s) = \frac{K_1}{1 + sT_1},\ K_I = 0 \qquad (b) G(s) = \frac{K_1}{1 + sT_1},\ K_I \neq 0

(c) G(s) = \frac{K_1}{s(1 + sT_1)},\ K_I = 0 \qquad (d) G(s) = \frac{K_1}{s(1 + sT_1)},\ K_I \neq 0
when:
(i) d t 0 , r t unit-step
(ii) d t 0 , r t unit-ramp
(iii) d t unit-step, r t 0
How does the addition of the integral term in the compensator affect the
steady-state errors of the controlled system?
Lecture 7A – The z-Transform

Overview
Digital control of continuous-time systems has become common thanks to the
ever-increasing performance/price ratio of digital signal processors
(microcontrollers, DSPs, gate arrays etc). Once we convert an analog signal to
a sequence of numbers, we are free to do anything we like with them.
Complicated control structures can be implemented easily in a computer —
even things which are impossible using analog components (such as median
filtering). We can even create systems which "learn" or adapt to changing
systems, something rather difficult to do with analog circuitry.
The side-effect of all these wonderful benefits is the fact that we have to learn a
new (yet analogous) body of theory to handle what are essentially discrete-time
systems. It will be seen that many of the techniques we use in the continuous-
time domain can be applied to the discrete-time case, so in some cases we will
design a system using continuous-time techniques and then simply discretize it.
To handle signals in the discrete-time domain, we’ll need something akin to the
Laplace transform in the continuous-time domain. That thing is the z-
transform.
The z-Transform
We’ll start with a discrete-time signal which is obtained by ideally and
uniformly sampling a continuous-time signal:
We'll treat a discrete-time signal as the weights of an ideally sampled continuous-time signal:

x_s(t) = x(t)\sum_{n=-\infty}^{\infty}\delta(t - nT_s)    (7A.1)

Since the product is only non-zero at the sample instants, we replace x(t) with its value at each sample instant:

x_s(t) = \sum_{n=-\infty}^{\infty} x(nT_s)\,\delta(t - nT_s)    (7A.2)

Also, if we establish the time reference so that x(t) = 0, t < 0, then we can say:

x_s(t) = \sum_{n=0}^{\infty} x(nT_s)\,\delta(t - nT_s)    (7A.3)
This is the discrete-time signal in the time-domain. The value of each sample is
represented by the area of an impulse, as shown below:
[Figure 7A.1 — a discrete-time signal made by ideally sampling a continuous-time signal: plots of x(t), the sample instants at multiples of Ts, and the resulting impulse train xs(t).]
Taking the Laplace transform of the sampled signal:

X_s(s) = \int_0^{\infty}\left[\sum_{n=0}^{\infty} x(nT_s)\,\delta(t - nT_s)\right]e^{-st}\,dt    (7A.4)

Since summation and integration are linear, we'll change the order to give:

X_s(s) = \sum_{n=0}^{\infty} x(nT_s)\int_0^{\infty}\delta(t - nT_s)\,e^{-st}\,dt    (7A.5)

Using the sifting property of the delta function, this integrates to:

X_s(s) = \sum_{n=0}^{\infty} x(nT_s)\,e^{-snT_s}    (7A.6)

If we now define the complex variable z = e^{sT_s}, this becomes:

X(z) = \sum_{n=0}^{\infty} x(nT_s)\,z^{-n}    (7A.8)

Since x(nT_s) is just a sequence of sample values, x[n], we normally write the z-transform as:

X(z) = \sum_{n=0}^{\infty} x[n]\,z^{-n}    (7A.9)
Thus, if we know the sample values x[n] of a signal, it is a relatively easy step
to write down the z-transform of the sampled signal.
Note that Eq. (7A.9) is the one-sided z-transform. We use this for the same
reasons that we used the one-sided Laplace transform.
The mapping z = e^{sT_s}, with s = σ + jω, gives:

|z| = e^{\sigma T_s}    (7A.12a)

\angle z = \omega T_s    (7A.12b)

[Figure 7A.2 — the mapping z = e^{sTs}: a point σ + jω in the s-plane maps to a point at radius e^{σTs} and angle ωTs in the z-plane; the jω-axis maps onto the unit circle.]

The jω-axis in the s-domain maps onto the unit-circle in the z-domain with a
phase angle ∠z = ωTs. In other words, distance along the jω-axis in the s-domain maps
linearly onto angle around the unit-circle in the z-domain.

[Figure 7A.3 — strips of the s-plane mapping onto the z-plane.]
Aliasing
The sampling frequency is:

\omega_s = 2\pi f_s = \frac{2\pi}{T_s}    (7A.13)

When the frequency is equal to the foldover frequency (half the sample rate), s = jωs/2 and:

z = e^{sT_s} = e^{j(\omega_s/2)T_s} = e^{j\pi} = -1    (7A.14)
[Figure 7A.4 — s-plane: frequencies above the foldover frequency ωs/2.]

[Figure 7A.5 — z-plane: those frequencies map onto the same points as frequencies below the foldover frequency.]
Thus, absolute frequencies greater than the foldover frequency in the s-plane
are mapped on to the same point as frequencies less than the foldover
frequency in the z-plane. That is, they assume the alias of a lower frequency.
The energies of frequencies higher than the foldover frequency add to the
energy of frequencies less than the foldover frequency and this is referred to as
frequency folding.
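The many-to-one nature of the mapping z = e^{sTs} is easy to demonstrate numerically: two s-plane frequencies separated by the sampling frequency ωs land on exactly the same point in the z-plane. A short sketch (Ts and w1 are illustrative values):

```python
import cmath
import math

Ts = 0.1
ws = 2 * math.pi / Ts          # sampling frequency, Eq. (7A.13)

w1 = 10.0                      # some frequency in rad/s
w2 = w1 + ws                   # its alias, one full ws higher
z1 = cmath.exp(1j * w1 * Ts)   # z = e^{jwTs} for each frequency
z2 = cmath.exp(1j * w2 * Ts)
print(abs(z1 - z2))            # the two frequencies map to the same z
```

The printed distance is zero (up to floating-point rounding): in the z-domain, w1 and w1 + ωs are indistinguishable.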
Finding z-Transforms
We will use the same strategy for finding z-transforms of a signal as we did for
the other transforms – start with a known standard transform and successively
apply transform properties. We first need a few standard transforms.
Example
The z-transform of a unit-pulse δ[n] is:

X(z) = \sum_{n=0}^{\infty}\delta[n]\,z^{-n} = 1    (7A.16)
Example
The z-transform of the geometric progression x[n] = a^n u[n] is:

X(z) = \sum_{n=0}^{\infty} a^n u[n]\,z^{-n}    (7A.17)

= \sum_{n=0}^{\infty}\left(\frac{a}{z}\right)^n
= 1 + \frac{a}{z} + \left(\frac{a}{z}\right)^2 + \left(\frac{a}{z}\right)^3 + \cdots    (7A.18)
To convert this geometric progression into closed-form, let the sum of the
first k terms of a general geometric progression be written as S_k:

S_k = \sum_{n=0}^{k-1} x^n = 1 + x + x^2 + \cdots + x^{k-1}    (7A.19)

Multiplying by x:

x\,S_k = x + x^2 + x^3 + \cdots + x^k    (7A.20)

Subtracting:

S_k(1 - x) = \left(1 + x + x^2 + \cdots + x^{k-1}\right) - \left(x + x^2 + x^3 + \cdots + x^k\right)    (7A.21)

This is a telescoping sum, where we can see that on the right-hand side only the
first and last terms remain after performing the subtraction. Hence:

S_k = \sum_{n=0}^{k-1} x^n = \frac{1 - x^k}{1 - x}    (7A.22)

Letting k → ∞ gives the closed-form expression for an infinite geometric progression:

\sum_{n=0}^{\infty} x^n = 1 + x + x^2 + \cdots = \frac{1}{1 - x}, \quad |x| < 1    (7A.23)
Applying this to Eq. (7A.18) with x = a/z:

X(z) = \frac{1}{1 - \dfrac{a}{z}}, \quad \left|\frac{a}{z}\right| < 1    (7A.24)

This can be rewritten as the z-transform of a geometric progression:

X(z) = \frac{z}{z - a}, \quad |z| > |a|    (7A.25)

[Figure 7A.6 — pole-zero plot for a^n u[n]: a zero at the origin, a pole at z = a, and region of convergence |z| > |a|.]

As was the case with the Laplace transform, if we restrict the z-transform to
causal signals, then we do not need to worry about the ROC.
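The closed form Eq. (7A.25) can be checked by summing the defining series Eq. (7A.9) directly at a test point inside the region of convergence |z| > |a|. A minimal sketch — the values a = 0.8 and z = 2 are arbitrary:

```python
def zt_partial_sum(samples, z):
    """Evaluate X(z) = sum_n x[n] z^(-n) over the given samples (Eq. 7A.9)."""
    return sum(x * z ** (-n) for n, x in enumerate(samples))

a, z = 0.8, 2.0
approx = zt_partial_sum([a ** n for n in range(200)], z)  # 200 terms of a^n u[n]
exact = z / (z - a)                                       # Eq. (7A.25)
print(round(approx, 10), round(exact, 10))
```

Since |a/z| = 0.4 here, the partial sum converges geometrically and 200 terms already agree with z/(z − a) to machine precision.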
Example
Setting a = 1 gives the z-transform of a unit-step:

u[n] \;\leftrightarrow\; \frac{z}{z - 1}    (7A.26)
Example
To find the z-transform of cos(Ωn)u[n], write:

\cos(\Omega n) = \frac{e^{j\Omega n} + e^{-j\Omega n}}{2}    (7A.27)

Each complex exponential is a geometric progression, so by Eq. (7A.25):

e^{j\Omega n}u[n] \;\leftrightarrow\; \frac{z}{z - e^{j\Omega}}    (7A.28)

Therefore:

\cos(\Omega n)\,u[n] \;\leftrightarrow\; \frac{1}{2}\left[\frac{z}{z - e^{j\Omega}} + \frac{z}{z - e^{-j\Omega}}\right]    (7A.29)

= \frac{z(z - \cos\Omega)}{z^2 - 2z\cos\Omega + 1}    (7A.30)
A similar derivation can be used to find the z-transform of sin n un .
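Eq. (7A.30) can be spot-checked the same way, by summing cos(Ωn)z^{-n} directly at a point with |z| > 1. The test values Ω = 1 and z = 2 are arbitrary:

```python
import math

def cos_zt(omega, z, terms=200):
    """Partial sum of Z{cos(omega*n) u[n]} at a real z with |z| > 1."""
    return sum(math.cos(omega * n) * z ** (-n) for n in range(terms))

omega, z = 1.0, 2.0
# closed form from Eq. (7A.30)
closed = (z * z - z * math.cos(omega)) / (z * z - 2 * z * math.cos(omega) + 1)
print(round(cos_zt(omega, z), 10), round(closed, 10))
```

The two values agree to machine precision, confirming the partial-fraction combination carried out above.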
One of the most important properties of the z-transform is the right shift
property. It enables us to directly transform a difference equation into an
algebraic equation in the complex variable z.
The z-transform of a function shifted to the right by one unit is given by:

\mathcal{Z}\{x[n-1]\} = \sum_{n=0}^{\infty} x[n-1]\,z^{-n}    (7A.31)

Substituting r = n − 1:

\mathcal{Z}\{x[n-1]\} = \sum_{r=-1}^{\infty} x[r]\,z^{-(r+1)}
 = x[-1] + z^{-1}\sum_{r=0}^{\infty} x[r]\,z^{-r}
 = z^{-1}X(z) + x[-1]    (7A.32)

Thus, a right shift by one sample multiplies the z-transform by z^{-1} and adds the initial-condition term x[−1].
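The right-shift property can be verified on a concrete sequence. A small sketch — the sequence, the test point z, and the pre-zero sample x[−1] are all arbitrary choices:

```python
def zt(seq, z):
    """X(z) = sum_n seq[n] z^(-n) for a finite causal sequence."""
    return sum(v * z ** (-n) for n, v in enumerate(seq))

x = [3.0, 1.0, 4.0, 1.0, 5.0]   # x[0..4], zero afterwards
x_minus_1 = 2.0                  # assumed initial-condition sample x[-1]
z = 2.0

# Z{x[n-1]} computed directly: the shifted sequence starts with x[-1]
lhs = zt([x_minus_1] + x, z)
# right-shift property, Eq. (7A.32): z^{-1} X(z) + x[-1]
rhs = zt(x, z) / z + x_minus_1
print(lhs, rhs)
```

Both sides agree exactly, which is what lets us turn a difference equation with initial conditions into an algebraic equation in z.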
Standard z-Transforms
u[n] \;\leftrightarrow\; \frac{z}{z - 1}    (Z.1)

\delta[n] \;\leftrightarrow\; 1    (Z.2)

a^n u[n] \;\leftrightarrow\; \frac{z}{z - a}    (Z.3)

a^n\cos(\Omega n)\,u[n] \;\leftrightarrow\; \frac{z^2 - a\cos\Omega\,z}{z^2 - 2a\cos\Omega\,z + a^2}    (Z.4)
z-Transform Properties
Assuming x[n] ↔ X(z):

Multiplication by a^n:  a^n x[n] \;\leftrightarrow\; X\!\left(\frac{z}{a}\right)    (Z.7)

Right shifting:  x[n-q] \;\leftrightarrow\; z^{-q}X(z) + \sum_{k=0}^{q-1} x[k-q]\,z^{-k}    (Z.8)

Multiplication by n:  n\,x[n] \;\leftrightarrow\; -z\frac{d}{dz}X(z)    (Z.10)

Left shifting:  x[n+q] \;\leftrightarrow\; z^{q}X(z) - \sum_{k=0}^{q-1} x[k]\,z^{q-k}    (Z.11)

Summation:  \sum_{k=0}^{n} x[k] \;\leftrightarrow\; \frac{z}{z - 1}X(z)    (Z.12)
The inverse z-transform is defined by a contour integral — but it is too hard to apply directly:

x[n] = \frac{1}{2\pi j}\oint_C X(z)\,z^{n-1}\,dz    (7A.34)

You won't need to evaluate this integral to determine the inverse z-transform,
just like we hardly ever use the definition of the inverse Laplace transform. We
manipulate X(z) into a form where we can simply identify sums of standard
transforms that may have had a few properties applied to them.
Given:

F(z) = \frac{b_0 z^n + b_1 z^{n-1} + \cdots + b_n}{(z - p_1)(z - p_2)\cdots(z - p_n)}    (7A.35)

we expand F(z)/z into partial fractions — then find the inverse z-transform:

\frac{F(z)}{z} = \frac{k_0}{z} + \frac{k_1}{z - p_1} + \frac{k_2}{z - p_2} + \cdots + \frac{k_n}{z - p_n}    (7A.36)
Example
F(z) = \frac{z^3 + 2z^2 + z + 1}{(z - 2)^2(z + 3)}

Therefore:

\frac{F(z)}{z} = \frac{z^3 + 2z^2 + z + 1}{z(z - 2)^2(z + 3)}
 = \frac{k_0}{z} + \frac{k_1}{z - 2} + \frac{k_2}{(z - 2)^2} + \frac{k_3}{z + 3}

The coefficients are:

k_0 = \left.\frac{z^3 + 2z^2 + z + 1}{(z - 2)^2(z + 3)}\right|_{z=0} = \frac{1}{12}

k_1 = \left.\frac{d}{dz}\left[\frac{z^3 + 2z^2 + z + 1}{z(z + 3)}\right]\right|_{z=2}
 = \left.\frac{z(z + 3)(3z^2 + 4z + 1) - (z^3 + 2z^2 + z + 1)(2z + 3)}{[z(z + 3)]^2}\right|_{z=2}
 = \frac{77}{100}

k_2 = \left.\frac{z^3 + 2z^2 + z + 1}{z(z + 3)}\right|_{z=2} = \frac{19}{10}

k_3 = \left.\frac{z^3 + 2z^2 + z + 1}{z(z - 2)^2}\right|_{z=-3} = \frac{11}{75}

Therefore:

F(z) = \frac{1}{12} + \frac{77}{100}\,\frac{z}{z - 2} + \frac{19}{10}\,\frac{z}{(z - 2)^2} + \frac{11}{75}\,\frac{z}{z + 3}

f[n] = \frac{1}{12}\delta[n] + \frac{77}{100}(2)^n + \frac{19}{10}\cdot\frac{n}{2}(2)^n + \frac{11}{75}(-3)^n, \quad n \geq 0

Note: For k_2 use n\,a^n \;\leftrightarrow\; \frac{az}{(z - a)^2} (standard transform Z.3 and
property Z.10). Then with linearity (property Z.6) we have: \frac{n}{a}\,a^n \;\leftrightarrow\; \frac{z}{(z - a)^2}.
[Figure 7A.7 — solution paths: solving a difference equation directly for the time-domain solution is difficult; instead, take the z-transform (ZT) into the z-domain, solve the resulting algebraic equation (easy!), then take the inverse z-transform (IZT) back to the time-domain solution.]
Example

Solve the difference equation:

y[n] - 5y[n-1] + 6y[n-2] = 3x[n-1] + 5x[n-2]    (7A.37)

with initial conditions y[−1] = 11/6, y[−2] = 37/36 and input x[n] = (0.5)^n u[n].

Now:

y[n]u[n] \;\leftrightarrow\; Y(z)

x[n] = (0.5)^n u[n] \;\leftrightarrow\; \frac{z}{z - 0.5}

x[n-1]u[n] \;\leftrightarrow\; z^{-1}X(z) + x[-1] = \frac{1}{z - 0.5} \quad (x[-1] = 0)

x[n-2]u[n] \;\leftrightarrow\; z^{-2}X(z) + z^{-1}x[-1] + x[-2] = \frac{1}{z(z - 0.5)}    (7A.39)

Taking the z-transform of Eq. (7A.37) and substituting the foregoing results,
we obtain:

Y(z) - 5\left[z^{-1}Y(z) + \frac{11}{6}\right] + 6\left[z^{-2}Y(z) + \frac{11}{6}z^{-1} + \frac{37}{36}\right]
 = \frac{3}{z - 0.5} + \frac{5}{z(z - 0.5)}    (7A.40)

or:

\left(1 - 5z^{-1} + 6z^{-2}\right)Y(z) = 3 - 11z^{-1} + \frac{3z + 5}{z(z - 0.5)}    (7A.41)

so that:

Y(z) = \underbrace{\frac{z(3z - 11)}{z^2 - 5z + 6}}_{\text{zero-input component}}
     + \underbrace{\frac{z(3z + 5)}{(z - 0.5)(z^2 - 5z + 6)}}_{\text{zero-state component}}    (7A.42)

and:

\frac{Y(z)}{z} = \frac{3z - 11}{(z - 2)(z - 3)} + \frac{3z + 5}{(z - 0.5)(z - 2)(z - 3)}
 = \left[\frac{5}{z - 2} - \frac{2}{z - 3}\right]
 + \left[\frac{26/15}{z - 0.5} - \frac{22/3}{z - 2} + \frac{28/5}{z - 3}\right]    (7A.43)

Therefore:

Y(z) = \left[\frac{5z}{z - 2} - \frac{2z}{z - 3}\right]
     + \left[\frac{26}{15}\frac{z}{z - 0.5} - \frac{22}{3}\frac{z}{z - 2} + \frac{28}{5}\frac{z}{z - 3}\right]    (7A.44)

and:

y[n] = \frac{26}{15}(0.5)^n - \frac{7}{3}(2)^n + \frac{18}{5}(3)^n, \quad n \geq 0    (7A.45)
As can be seen, the z-transform method gives the total response, which
includes zero-input and zero-state components. The initial condition terms give
rise to the zero-input response. The zero-state response terms are exclusively
due to the input.
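The closed-form total response can be checked by simply iterating the difference equation y[n] − 5y[n−1] + 6y[n−2] = 3x[n−1] + 5x[n−2] from its initial conditions:

```python
def simulate(n_max):
    """Iterate y[n] = 5y[n-1] - 6y[n-2] + 3x[n-1] + 5x[n-2]
    with y[-1] = 11/6, y[-2] = 37/36 and x[n] = (0.5)^n u[n]."""
    x = lambda n: 0.5 ** n if n >= 0 else 0.0
    y = {-2: 37 / 36, -1: 11 / 6}
    for n in range(n_max + 1):
        y[n] = 5 * y[n - 1] - 6 * y[n - 2] + 3 * x(n - 1) + 5 * x(n - 2)
    return [y[n] for n in range(n_max + 1)]

# closed-form total response, Eq. (7A.45)
closed = [26 / 15 * 0.5 ** n - 7 / 3 * 2 ** n + 18 / 5 * 3 ** n
          for n in range(6)]
print(simulate(5))
print(closed)
```

Both give y[n] = 3, 7, … for n = 0, 1, …, confirming that the z-transform solution captures the zero-input and zero-state components together.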
Consider the general first-order difference equation:

y[n] = a\,y[n-1] + b\,x[n]    (7A.46)

Taking the z-transform of both sides and using the right-shift property gives:

Y(z) = a\left[z^{-1}Y(z) + y[-1]\right] + bX(z)    (7A.47)

so that:

Y(z) = \frac{a\,y[-1]}{1 - az^{-1}} + \frac{b}{1 - az^{-1}}X(z)    (7A.48)

which can be written:

Y(z) = \frac{a\,y[-1]\,z}{z - a} + \frac{bz}{z - a}X(z)    (7A.49)

The first part of the response results from the initial conditions, the second part
results from the input. If the initial conditions are zero:

Y(z) = \frac{bz}{z - a}X(z)    (7A.50)

We define the discrete-time transfer function:

H(z) = \frac{bz}{z - a}    (7A.51)

so that:

Y(z) = H(z)X(z)    (7A.52)
In general, a discrete-time LTI system is described by the difference equation:

y[n] + \sum_{i=1}^{N} a_i\,y[n-i] = \sum_{i=0}^{M} b_i\,x[n-i]    (7A.53)

and if the system has zero initial conditions, then taking the z-transform of both
sides results in:

Y(z) = \frac{b_0 z^N + b_1 z^{N-1} + \cdots + b_M z^{N-M}}{z^N + a_1 z^{N-1} + \cdots + a_{N-1}z + a_N}\,X(z)    (7A.54)

so the transfer function, derived directly from the difference equation, is:

H(z) = \frac{b_0 z^N + b_1 z^{N-1} + \cdots + b_M z^{N-M}}{z^N + a_1 z^{N-1} + \cdots + a_{N-1}z + a_N}    (7A.55)
The output is the convolution of the unit-pulse response with the input:

y[n] = h[n] * x[n] = \sum_{i=0}^{n} h[i]\,x[n-i], \quad n \geq 0    (7A.56)

h[n] \;\leftrightarrow\; H(z)    (7A.57)

That is, the unit-pulse response and the transfer function form a z-transform
pair.
Stability
Under the mapping z = e^{sTs}, the left-half s-plane maps inside the unit-circle.
The right-half s-plane maps outside the unit circle.
Recall that functions of the Laplace variable s having poles with negative real
parts decay to zero as t → ∞. In a similar manner, transfer functions of z
having poles with magnitudes less than one decay to zero in the time-domain
as n → ∞. Therefore, for a stable system, we must have:

|p_i| < 1    (7A.58a)
A discrete-time unit-
delay element
x [n ] y [n ] = x [n -1]
D
Figure 7A.8
By taking the z-transform of the input and output, we can see that we should
represent a delay in the z-domain by:
Delay element in
block diagram form
in the z-domain
X ( z) Y ( z ) = z -1 X ( z)
z -1
Figure 7A.9
Example
[Figure 7A.10 — block diagram: a delay element D with output y[n−1].]

[Figure 7A.11 — z-domain block diagram of a numeric integrator: input X(z) scaled by T, combined with a z⁻¹-delayed feedback of Y(z), producing the output Y(z).]
[Figure 7A.12 — the numeric integrator reduced to a single transfer-function block.]
You should confirm that this transfer function obtained using block diagram
reduction methods is the same as that found by taking the z-transform of
Eq. (7A.59).
Summary
The z-transform is the discrete-time counterpart of the Laplace transform.
There is a mapping from the s-domain to the z-domain given by z = e^{sTs}.
The mapping is not unique, and for frequencies above the foldover
frequency, aliasing occurs.
The unit-pulse response and the transfer function form a z-transform pair:
hn H z .
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB®, Prentice-Hall, 1997.
Exercises
1.
Construct z-domain block diagrams for the following difference equations:
2.
(i) Construct a difference equation from the following block diagram:
[z-domain block diagram with input X(z), output Y(z), feed-forward gain 3, delays z⁻¹ and z⁻², and gain −2 in a feedback path.]
(ii) From your solution calculate y[n] for n = 0, 1, 2 and 3 given y[−2] = 2,
3.
Using z-transforms:
(a) Find the unit-pulse response for the system given by:
Hint: xn can be written as the sum of a unit step and two unit-pulses, or as
the subtraction of two unit-steps.
4.
Determine the weighting sequence for the system shown below in terms of the
individual weighting sequences h1 n and h2 n .
h1[n]
x [n ] y [n ]
h2[n]
5.
For the feedback configuration shown below, determine the first three terms of
the weighting sequence of the overall system by applying a unit-pulse input
and calculating the resultant response. Express your results as a function of the
weighting sequence elements h1 n and h2 n .
x [n ] y [n ]
h1[n]
h2[n]
6.
Using the definition of the z-transform, find X(z) for the sequence:

x[n] = \begin{cases} 2, & n = 0, 2, 4, \ldots \\ 0, & \text{all other } n \end{cases}

by noting that x[n] = x_1[n] + x_2[n] where x_1[n] is the unit-step sequence and
x_2[n] is the unit-alternating sequence. Verify your result by directly
(c) Use the linearity property of the z-transform and the z-transform of e anT
(from tables) to find Z cosnT . Check the result using a table of
z-transforms.
7.
Poles and zeros are defined for the z-transform in exactly the same manner as
for the Laplace transform. For each of the z-transforms given below, find the
poles and zeros and plot the locations in the z-plane. Which of these systems
are stable and unstable?
(a) H(z) = \frac{1 + 2z^{-1}}{3 + 4z^{-1} + z^{-2}}

(b) H(z) = \frac{1}{1 + \frac{3}{4}z^{-1} + \frac{1}{8}z^{-4}}

(c) H(z) = \frac{5 + 2z^{-2}}{1 + 6z^{-1} + 3z^{-2}}
Note: for a discrete-time system to be stable all the poles must lie inside the
unit-circle.
8.
Given:
9.
Use the direct division method to find the first four terms of the data sequence
xn , given:
X(z) = \frac{\frac{1}{4}z^2 + \frac{1}{4}z^3}{(z - 1)(4z - 1)(2z - 1)}
10.
Use the partial fraction expansion method to find a general expression for xn
in Question 9. Confirm that the first four terms are the same as those obtained
by direct division.
11.
Determine the inverse z-transforms of:
(a) X(z) = \frac{z^2}{(z - 1)(z - a)} \qquad (b) X(z) = 3 + 2z^{-1} + 6z^{-4}

(c) X(z) = \frac{(1 - e^{-aT})z}{(z - 1)(z - e^{-aT})} \qquad (d) X(z) = \frac{z(z + 1)}{(z - 1)(z - 2)(z - \frac{1}{4})}

(e) X(z) = \frac{4z}{(z - 2)(z - 1)^3} \qquad (f) X(z) = \frac{z(z + 1)}{(z - 1)^2}
12.
Given:
(b) Find H z
(i) by directly taking the z-transform of the difference equation
(ii) from your answer in (a)
(c) From your solution in (b) find the unit-step response of the system. Check
your solution using a convolution method on the original difference
equation.
13.
Given y[n] = x[n] + y[n-1] + y[n-2], find the unit-step response of this system
using the transfer function method, assuming zero initial conditions.
14.
Use z-transform techniques to find a closed-form expression for the Fibonacci
sequence:
Hint: To use transfer function techniques, the initial conditions must be zero.
Construct an input so that the ZSR of the system described by the difference
equation gives the above response.
The golden ratio φ satisfies φ − 1 = 1/φ. That is:

\varphi = 1 + \frac{1}{\varphi}

Find the two values of φ that satisfy the golden ratio. Are they familiar values?
15.
Given F(s) = \frac{1}{(s + a)^2}, find F(z).
16.
Given:
X 1 z z 6
X 2 z X 3 z D z
z 1 z 1
X 3 z X 1 z kX 2 z
X 4 z X 2 z X 3 z
10 3
z2 z A
Are there any differences between the operations which apply to discrete-time
and continuous-time transfer functions?
17.
Using the initial and final value theorems, find f 0 and f of the
following functions:
(a) F(z) = \frac{1}{z - 0.3} \qquad (b) F(z) = 1 + 5z^{-3} + 2z^{-2}

(c) F(z) = \frac{z^2}{(z - \frac{1}{4})(z - a)} \qquad (d) F(z) = \frac{\frac{1}{4}z^2 + \frac{1}{4}z^3}{(z - 1)(4z - 1)(2z - 1)}
18.
Perform the convolution yn yn when
using
Compare answers.
19.
Determine the inverse z-transform of:

F(z) = \frac{z(z + 1)}{(z - 1)\left(z^2 + z + \frac{1}{4}\right)}

HINT: This has a repeated root. Use techniques analogous to those for the
Laplace transform when multiple roots are present.
Lecture 7B – Discretization
Signal discretization. Signal reconstruction. System discretization. Frequency
response. Response matching.
Overview
Digital signal processing is now the preferred method of signal processing.
Communication schemes and controllers implemented digitally have inherent
advantages over their analog counterparts: reduced cost, repeatability, stability
with aging, flexibility (one H/W design, different S/W), in-system
programming, adaptability (ability to track changes in the environment), and
this will no doubt continue into the future. However, much of how we think,
analyse and design systems still uses analog concepts, and ultimately most
embedded systems eventually interface to a continuous-time world.
It’s therefore important that we now take all that we know about continuous-
time systems and “transfer” or “map” it into the discrete-time domain.
Signal Discretization
We have already seen how to discretize a signal. An ideal sampler produces a
weighted train of impulses:
[Figure 7B.1 — sampling a continuous-time signal produces a discrete-time signal (a train of impulses): g(t) multiplied by the impulse train p(t) gives gs(t) = g(t)p(t).]
In a computer, values can only be stored as discrete values, not only at discrete
times. Thus, in a digital system, the output of a sampler is quantized so that we
have a digital representation of the signal: digital signal processing also
quantizes the discrete-time signal. The effects of quantization will be ignored
for now — be aware that they exist, and are a source of errors for digital
signal processors.
Signal Reconstruction
The reconstruction of a signal from ideal samples was accomplished by an
ideal filter:

[Figure 7B.2 — lowpass filtering a discrete-time signal (train of impulses) gs(t) produces a continuous-time signal g(t).]

We then showed that so long as the Nyquist criterion was met, we could
reconstruct the signal perfectly from its samples (if the lowpass filter is ideal).
To ensure the Nyquist criterion is met, we normally place an anti-alias filter
before the sampler.
Hold Operation
The output from a digital signal processor is obviously digital – we need a way
to convert a discrete-time signal back into a continuous-time signal. One way
is by using a lowpass filter on the sampled signal, as above. But digital systems
have to first convert their digital data into analog data, and this is accomplished
with a DAC.
[Figure 7B.3 and Figure 7B.4 — the hold operation: the DAC holds each sample value for a sample period, producing a staircase approximation of the signal.]
System Discretization
Suppose we wish to discretize a continuous-time LTI system. We would expect
the input/output values of the discrete-time system to be samples of the
continuous-time system's input and output:

[Figure 7B.5 — discretizing a system should produce the same (sampled) signals: x(t) → h(t) → y(t) corresponds to x[n] = x(nTs) → h[n] → y[n] = y(nTs).]
We must find a discrete-time transfer function Hd(z) such that the relationship
Eq. (7B.3) holds true. One way is to match the inputs and outputs in the
frequency-domain. You would expect that since z = e^{sTs}, an exact match of
the two systems is given by:

H_d(z) = H(s)\Big|_{s = \frac{1}{T_s}\ln z}    (7B.4)
Recall the Taylor series expansion of a function f(x) about the point a:

f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots    (7B.5)

Applying this to the natural logarithm gives:

\ln(1 + y) = y - \frac{y^2}{2} + \frac{y^3}{3} - \frac{y^4}{4} + \cdots    (7B.6)
Now, from inspection, we can also have:

\ln(1 - y) = -y - \frac{y^2}{2} - \frac{y^3}{3} - \frac{y^4}{4} - \cdots    (7B.7)
Subtracting Eq. (7B.7) from Eq. (7B.6), and dividing by 2, gives us:

\frac{1}{2}\ln\left(\frac{1 + y}{1 - y}\right) = y + \frac{y^3}{3} + \frac{y^5}{5} + \frac{y^7}{7} + \cdots    (7B.8)
Now let:

z = \frac{1 + y}{1 - y}    (7B.9)

or, rearranging:

y = \frac{z - 1}{z + 1}    (7B.10)
Substituting into Eq. (7B.8):

\ln z = 2\left[\frac{z - 1}{z + 1} + \frac{1}{3}\left(\frac{z - 1}{z + 1}\right)^3 + \frac{1}{5}\left(\frac{z - 1}{z + 1}\right)^5 + \cdots\right]    (7B.11)

Now if z ≈ 1, then we can truncate higher than first-order terms to get the
approximation:

\ln z \approx 2\,\frac{z - 1}{z + 1}    (7B.12)
We can now use this as an approximate value for s. This is called the bilinear
transformation (since it has 2 (bi) linear terms) — an approximate mapping
between s and z:

s = \frac{1}{T_s}\ln z \approx \frac{2}{T_s}\,\frac{z - 1}{z + 1}    (7B.13)
The open LHP in the s-domain maps onto the open unit disk in the z-
domain (thus the bilinear transformation preserves the stability condition).
The j -axis maps onto the unit circle in the z-domain. This will be used
later when we look at frequency response of discrete-time systems.
Discretizing a system using the bilinear transformation is then simply:

H_d(z) = H\!\left(\frac{2}{T_s}\,\frac{z - 1}{z + 1}\right)    (7B.14)
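As a concrete sketch of Eq. (7B.14), here is the bilinear transformation applied numerically to a first-order lag H(s) = a/(s + a). The pole a and sample period Ts are illustrative values:

```python
def bilinear_eval(H, z, Ts):
    """Evaluate Hd(z) = H((2/Ts)(z-1)/(z+1)), Eq. (7B.14)."""
    s = (2.0 / Ts) * (z - 1.0) / (z + 1.0)
    return H(s)

a, Ts = 2.0, 0.1
H = lambda s: a / (s + a)          # first-order lag with pole at s = -a

# z = 1 corresponds to s = 0, so the DC gain is preserved exactly
print(bilinear_eval(H, 1.0, Ts), H(0.0))

# the stable s-plane pole s = -a maps inside the unit circle
zp = (1 + (-a) * Ts / 2) / (1 - (-a) * Ts / 2)
print(zp, abs(zp) < 1)
```

This illustrates the two properties listed above: z = 1 maps to s = 0 (DC gain preserved), and the open LHP pole lands inside the unit disk (stability preserved).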
Frequency Response
To determine the discrete-time frequency response, we evaluate the transfer
function at points on the unit circle:

z = e^{j\Omega}    (7B.16)

[Figure 7B.6 — the z-plane unit circle: the point z = e^{jΩ} at angle Ω, with the circle passing through z = ±1 and z = ±j.]
But this point is also given by the angle Ω + 2πn, so the mapping from the z-plane
to the s-domain is not unique. In fact any point on the unit circle maps to
the s-domain frequency ω = (Ω + 2πn)/Ts, or:

\omega = \frac{\Omega}{T_s} + n\,\frac{2\pi}{T_s} = \frac{\Omega}{T_s} + n\,\omega_s    (7B.17)

The mapping from the z-domain to the s-domain is not unique. Conversely,
the mapping from the s-domain to the z-domain is not unique. This means
that frequencies spaced at ωs map to the same frequency in the z-domain.
Using the bilinear transformation instead, with z = e^{jΩ}:

H_d(\Omega) = H\!\left(\frac{2}{T_s}\,\frac{e^{j\Omega} - 1}{e^{j\Omega} + 1}\right)    (7B.18)

The equivalent continuous-time frequency is therefore:

j\omega = \frac{2}{T_s}\,\frac{e^{j\Omega} - 1}{e^{j\Omega} + 1}    (7B.19)
Using Euler's identity, show that this can be manipulated into the form
(mapping continuous-time frequency to discrete-time frequency using the
bilinear transformation):

\omega = \frac{2}{T_s}\tan\left(\frac{\Omega}{2}\right)    (7B.20)

or, inverting:

\Omega = 2\tan^{-1}\left(\frac{\omega T_s}{2}\right)    (7B.21)
We therefore evaluate the continuous-time system at the warped frequency to
get a similar frequency response:

H_d(\Omega) = H\!\left(j\,\frac{2}{T_s}\tan\frac{\Omega}{2}\right)    (7B.22)

Note that we may be able to select the sample period Ts so that all our "critical"
frequencies ωc will be mapped by:

\Omega_c = 2\tan^{-1}\left(\frac{\omega_c T_s}{2}\right) \approx \omega_c T_s    (7B.23)

i.e. for small ωcTs the warped mapping approximates the exact mapping of
Eq. (7B.15).
Response Matching
In control systems, it is usual to consider a mapping from continuous-time to
discrete-time in terms of the time-domain response instead of the frequency
response. For set-point control, this time-domain matching is best performed
as step-response invariance synthesis, although other mappings can be made
(like impulse invariance), where the output of the discrete-time system is
obtained by sampling the step-response of the continuous-time system.
If the input is a unit step, then:

X(z) = \frac{z}{z - 1}    (7B.25)

and step response matching gives the discrete-time transfer function:

H_d(z) = \frac{z - 1}{z}\,Y(z)    (7B.26)

where Y(z) is the z-transform of the sampled continuous-time step response.
Example
The Maze Rover velocity controller is:

G_c(s) = 500 + \frac{5}{s} = \frac{500s + 5}{s}    (7B.27)

[Figure 7B.7 — block diagram: R(s) → controller (500s + 5)/s → X(s) → plant 0.001/(s + 0.01) → V(s), unity feedback.]

Show that the block diagram can be reduced to the transfer function:

[Figure 7B.8 — block diagram: R(s) → 0.5/(s + 0.5) → V(s).]
For a unit-step input, the output is:

Y(s) = \frac{0.5}{s + 0.5}\cdot\frac{1}{s} = \frac{1}{s} - \frac{1}{s + 0.5}    (7B.28)

and taking the inverse Laplace transform gives the continuous-time step response:

y(t) = \left(1 - e^{-0.5t}\right)u(t)    (7B.29)

The desired discrete-time step response is then:

y[n] = \left(1 - e^{-0.5nT_s}\right)u[n]    (7B.30)
which has the z-transform:

Y(z) = \frac{z}{z - 1} - \frac{z}{z - e^{-0.5T_s}}    (7B.31)

Hence, using Eq. (7B.26) yields the following equivalent discrete-time transfer
function for the corresponding discrete-time system:

H_d(z) = \frac{z - 1}{z}\,Y(z) = 1 - \frac{z - 1}{z - e^{-0.5T_s}}
 = \frac{1 - e^{-0.5T_s}}{z - e^{-0.5T_s}} = \frac{bz^{-1}}{1 - az^{-1}}    (7B.32)

with a = e^{-0.5T_s} and b = 1 - e^{-0.5T_s}.
The corresponding difference equation is y[n] = a y[n−1] + b x[n−1]. You
should confirm that this difference equation gives an equivalent step-response
at the sample instants using MATLAB® for various values of Ts.
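An equivalent check can be done by hand in any language; here is a Python sketch that iterates the difference equation y[n] = a y[n−1] + b x[n−1] implied by Eq. (7B.32) and compares against samples of the continuous-time step response of Eq. (7B.29):

```python
import math

def matched_step_response(Ts, n_max):
    """Step response of Hd(z) = b z^-1 / (1 - a z^-1), Eq. (7B.32),
    with a = exp(-0.5*Ts), b = 1 - a: y[n] = a*y[n-1] + b*x[n-1]."""
    a = math.exp(-0.5 * Ts)
    b = 1.0 - a
    y, out = 0.0, []
    for n in range(n_max + 1):
        out.append(y)               # y[n]
        y = a * y + b * 1.0         # x[n] = 1 for n >= 0 (unit step)
    return out

Ts = 0.2
discrete = matched_step_response(Ts, 10)
sampled = [1 - math.exp(-0.5 * n * Ts) for n in range(11)]
print(max(abs(d - s) for d, s in zip(discrete, sampled)))
```

The maximum difference is at the level of floating-point rounding: the discrete system reproduces the continuous step response exactly at every sample instant, for any Ts — which is precisely what step-response invariance promises.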
Summary
We can discretize a continuous-time signal by sampling. If we meet the
Nyquist criterion for sampling, then all the information will be contained in
the resulting discrete-time signal. We can then process the signal digitally.
A system (or signal) may be discretized using the bilinear transform. This
maps the LHP in the s-domain into the open unit disk in the z-domain. It is
an approximation only, and introduces warping when examining the
frequency response.
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB®, Prentice-Hall, 1997.
Exercises
1.
A Maze Rover phase-lead compensator has the transfer function:
H(s) = \frac{20(s + 1)}{s + 4}
2.
Repeat Exercise 1, but this time use the bilinear transform.
3.
A continuous-time system has a transfer function:

G(s) = \frac{s + 2}{s^2 + 3s + 2}

and it is
4.
Compare the step responses of each answer in Exercises 1 and 2 using
MATLAB®.
Overview
The design of a control system centres around two important specifications –
the steady-state performance and the transient performance. The steady-state
performance is relatively simple. Given a reference input, we want the output
to be exactly, or nearly, equal to the input. The way to achieve the desired
steady-state error, for any type of input, is by employing a closed-loop
configuration that gives the desired system type. The more complex
specification of transient behaviour is tackled by an examination of the
system’s poles. We will examine transient performance as specified for an all-
pole second-order system (the transient performance criteria still exist for
higher-order systems, but the formulae shown here apply only to all-pole
second-order systems).
The transient specification for a control system will usually consist of times
taken to reach certain values, and allowable deviations from the final steady-
state value. For example, we might specify percent overshoot, peak time, and
5% settling time for a control system. Our task is to find suitable pole locations
for the system.
The percent overshoot for a second-order all pole step-response is given by:
P.O. = e^{-\pi\zeta/\sqrt{1 - \zeta^2}}    (8A.1)

(expressed as a fraction of the final value). Since ζ = cos θ, where θ is the angle
with respect to the negative real axis, we can shade the region of the s-plane
where the P.O. specification is satisfied:
[Figure 8A.1 — PO specification region in the s-plane: the shaded wedge bounded by rays at θ = cos⁻¹ζ from the negative real axis is where the PO specification is satisfied.]
Peak Time

The peak time of the step response is:

t_p = \frac{\pi}{\omega_d}    (8A.2)

Since a specification will specify a maximum peak time, we can find the
minimum ω_d = π/t_p required to meet the specification.
Again, we define the region in the s-plane where this specification is satisfied:
[Figure 8A.2 — peak time specification region in the s-plane: the shaded region with |ω| ≥ ω_d = π/t_p (above jω_d and below −jω_d) is where the t_p specification is satisfied.]
Settling Time

For settling to within a fraction δ of the final value, the settling time is:

t_s = \frac{-\ln\delta}{\sigma}    (8A.3)

Since a specification will specify a maximum settling time, we can find the
minimum σ = -\ln\delta/t_s required to meet the specification.
[Figure 8A.3 — settling time specification region in the s-plane: the shaded region to the left of σ = −ln δ/t_s is where the t_s specification is satisfied.]
Combined Specifications
Since we have now specified simple regions in the s-plane that satisfy each
specification, all we have to do is combine all the regions to meet every
specification.
[Figure 8A.4 — combined specifications region in the s-plane: the intersection of the PO, peak time and settling time regions is where all specs are satisfied.]

[Figure 8A.5 — desired closed-loop pole locations, marked with squares: poles chosen within the combined region that meet the specs and keep the system linear.]
[Figure 8A.6 — a dominant second-order all-pole system: a complex pole pair dominating the response, with any other poles much further into the left-half plane.]

[Figure 8A.7 — a dominant second-order all-pole system made by moving a pole using a minor feedback loop.]

[Figure 8A.8 — a pole made negligible by a close zero.]
Example

Suppose we require P.O. ≤ 10%, peak time t_p ≤ 1.57 s, and 2% settling time
t_s ≤ 9 s. We evaluate the regions in the order they are given. For the PO spec,
we have:

0.1 = e^{-\pi\zeta/\sqrt{1 - \zeta^2}} \quad\Rightarrow\quad \zeta = 0.591

so that:

\theta = \cos^{-1}(0.591) \approx 53°

For the peak time spec, we have:

\omega_d = \frac{\pi}{t_p} = \frac{\pi}{1.57} \approx 2\ \text{rad s}^{-1}

For the settling time spec, we get:

\sigma = \frac{-\ln(0.02)}{9} \approx 0.43\ \text{nepers s}^{-1}

[s-plane plot: the combined region bounded by the 53° rays, ω_d = ±j2 and σ = −0.43, with chosen pole locations at approximately −1.6 ± j2.1.]
Under the mapping z = e^{sT_s} = e^{\sigma T_s}e^{j\omega T_s}:

|z| = e^{\sigma T_s}, \quad \angle z = \omega T_s    (8A.4)

[Figure 8A.9 — the mapping from s-plane to z-plane: radius e^{σTs}, angle ωTs.]

so that:

\sigma < 0 \;\Rightarrow\; |z| < 1, \quad
\sigma = 0 \;\Rightarrow\; |z| = 1, \quad
\sigma > 0 \;\Rightarrow\; |z| > 1    (8A.5)
We will now translate the performance specification criteria areas from the s-
plane to the z-plane.
Percent Overshoot

In the s-plane, the PO specification boundary is the line
s = -\omega/\tan(\cos^{-1}\zeta) + j\omega. In the z-plane this line must be:

z = e^{-\omega T_s/\tan(\cos^{-1}\zeta)}\ \angle\ \omega T_s    (8A.6)

which is a logarithmic spiral.

[Figure 8A.10 — lines of constant damping ratio (ζ = 0 along the jω-axis, ζ = 0.5, ζ = 1 along the negative real axis) in the s-plane, and the corresponding logarithmic spirals inside the unit-circle in the z-plane.]

The region in the z-plane that corresponds to the region in the s-plane where
the PO specification is satisfied is shown below:

[Figure 8A.11 — PO specification region in the z-plane: the region enclosed by the ζ = 0.5 logarithmic spiral inside the unit-circle.]
Peak Time

Under the mapping:

z = e^{sT_s} = e^{\sigma T_s}e^{j\omega_d T_s}, \quad |z| = e^{\sigma T_s},\ \angle z = \omega_d T_s    (8A.7)

For a given ω_d, this locus is a straight line between the origin and the unit-circle,
at angle ω_dT_s (taking the stable half-plane so that σ ≤ 0):

[Figure 8A.12 — horizontal lines of constant ω (jω₁, jω₂) in the s-plane map to radial lines at angles ω₁Ts, ω₂Ts in the z-plane.]

The region in the z-plane that corresponds to the region in the s-plane where
the peak time specification is satisfied is shown below:

[Figure 8A.13 — peak time specification region in the z-plane: the part of the unit disk at angles beyond ±ω_dTs.]
Settling Time

For a given σ, this locus is a circle centred at the origin with a radius e^{-\sigma T_s}:

[Figure 8A.14 — vertical lines σ = −σ₁, σ = −σ₂ in the s-plane map to circles of radius e^{−σ₁Ts}, e^{−σ₂Ts} in the z-plane.]

The region in the z-plane that corresponds to the region in the s-plane where
the settling time specification is satisfied is shown below:

[Figure 8A.15 — settling time specification region in the z-plane: the disk of radius e^{−σTs}.]
Combined Specifications
A typical specification will mean we have to combine the PO, peak time and
We choose z-plane
poles close to the settling time regions in the z-plane. For the s-plane we chose pole locations
unit-circle so as not
to exceed the linear close to the origin. Since z eTs , and is a small negative number near the
range of systems
origin, then we need to maximize z . We therefore choose pole locations in the
z-plane which satisfy all the criteria, and we choose them as far away from the
origin as possible:
(the combined specifications region in the z-plane, with desired pole locations chosen as close to the unit-circle as possible)
Figure 8A.16
Summary
The percent overshoot, peak time and settling time specifications for an all-
pole second-order system can easily be found in the s-plane. Desired pole
locations can then be chosen that satisfy all the specifications, and are
usually chosen close to the origin so that the system remains in its linear
region.
The percent overshoot, peak time and settling time specifications for an all-
pole second-order system can be found in the z-plane. The desired pole
locations are chosen as close to the unit-circle as possible so the system
stays within its linear bounds.
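The mapping z = e^(sTs) that underlies all of these regions can be checked numerically. A minimal sketch in Python (the sample period Ts = 0.1 and the boundary values are illustrative, not taken from the notes):

```python
import cmath, math

Ts = 0.1  # sample period (illustrative value)

def s_to_z(s):
    """Map an s-plane point to the z-plane via z = e^(s*Ts)."""
    return cmath.exp(s * Ts)

# Constant damping-ratio line (PO spec): s = sigma + j*sigma*tan(acos(zeta)).
# As sigma becomes more negative, |z| shrinks -> a logarithmic spiral.
zeta = 0.5
theta = math.acos(zeta)
spiral = [s_to_z(sigma * (1 + 1j * math.tan(theta))) for sigma in (-1, -5, -10)]

# Constant-sigma line (settling-time spec): every point maps onto a circle
# of radius e^(sigma*Ts), independent of omega.
sigma = -2.0
radii = [abs(s_to_z(complex(sigma, w))) for w in (0.0, 5.0, 10.0)]
```

The settling-time circle and the shrinking spiral magnitude are exactly the two facts used to draw Figures 8A.10 and 8A.14.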
References
Kuo, B: Automatic Control Systems, Prentice-Hall, 1995.
Overview
The roots of a system’s characteristic equation (the poles), determine the
mathematical form of the system’s transient response to any input. They are
very important for system designers in two respects. If we can control the pole
locations then we can position them to meet time-domain specifications such as
P.O., settling time, etc. The pole locations also determine whether the system is
stable – we must have all poles in the open LHP for stability. A “root locus” is
a graphical way of showing how the roots of a characteristic equation in the
complex (s or z) plane vary as some parameter is varied. It is an extremely
valuable aid in the analysis and design of control systems, and was developed
by W. R. Evans in his paper “Graphical Analysis of Control Systems,” Trans. AIEE, vol. 67, pt. II, pp. 547-551, 1948.
Root Locus
As an example of the root locus technique, we will consider a simple unity-
feedback control system:
(unity-feedback system: controller Gc(s) and plant Gp(s) in cascade in the forward path from R(s) to C(s))

Figure 8B.1

The closed-loop transfer function is:

    C(s)/R(s) = Gc(s)Gp(s) / [1 + Gc(s)Gp(s)]    (8B.1)

and the characteristic equation is:

    1 + Gc(s)Gp(s) = 0    (8B.2)
Now suppose that we can “separate out” the parameter of interest, K, in the
characteristic equation – it may be the gain of an amplifier, or a sampling rate,
or some other parameter that we have control over. It could be part of the
controller, or part of the plant. Then the characteristic equation can be written:
    1 + KP(s) = 0    (8B.3)

This is the characteristic equation of a unity-feedback system, where P(s) does not depend on K. The graph of the roots of this equation, as the parameter K is varied, gives the root locus. In general:

    KP(s) = K (s − z1)(s − z2)···(s − zm) / [(s − p1)(s − p2)···(s − pn)]    (8B.4)
where zi are the m open-loop zeros and pi are the n open-loop poles of the
system. Also, rearranging Eq. (8B.3) gives us:
    KP(s) = −1    (8B.5)

Taking magnitudes of both sides leads to the magnitude criterion for a root locus:

    |P(s)| = 1/K    (8B.6a)

Similarly, taking angles of both sides of Eq. (8B.5) gives the angle criterion for a root locus:

    ∠P(s) = 180°(2k + 1),  k = 0, ±1, ±2, …    (8B.6b)
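Both criteria are easy to check numerically for a candidate pole location. A sketch in Python, using the maze-rover plant 0.5/(s(s + 2)) that appears in the example which follows:

```python
import cmath, math

# Open-loop poles and zeros of P(s) = 0.5 / (s(s + 2))
poles = [0.0, -2.0]
zeros = []

def P(s):
    num = 0.5
    for z in zeros:
        num *= (s - z)
    den = 1.0
    for p in poles:
        den *= (s - p)
    return num / den

s_test = -1 + 1j    # candidate closed-loop pole location

# Angle criterion: s_test lies on the locus iff the angle of P is an odd
# multiple of 180 degrees.
on_locus = abs(abs(cmath.phase(P(s_test))) - math.pi) < 1e-9

# Magnitude criterion: the gain needed to put a closed-loop pole there.
K = 1 / abs(P(s_test))
```

For this test point the angle works out to 180°, so s = −1 + j is on the locus, and the magnitude criterion gives the required gain K = 4 (which agrees with the table of roots in the example below).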
To construct a root locus we can just apply the angle criterion. To find a
particular point on the root locus, we need to know the magnitude of K.
Example

(unity-feedback system: controller Kp driving the maze rover 0.5/[s(s + 2)])

We are trying to see the effect that the controller parameter Kp has on the closed-loop poles. Here:

    Gp(s) = 0.5 / [s(s + 2)]  and  Gc(s) = Kp

so that:

    K = Kp  and  P(s) = 0.5 / [s(s + 2)]
For such a simple system, it is easier to derive the root locus algebraically
rather than use the angle criterion. The characteristic equation of the system is:
    1 + KP(s) = 1 + Kp 0.5/[s(s + 2)] = 0

or just:

    s² + 2s + 0.5Kp = 0

which has the roots:

    s1,2 = −1 ± √(1 − 0.5Kp)
We will now evaluate the roots for various values of the parameter K p :
    Kp = 0:    s1,2 = 0, −2
    Kp = 1:    s1,2 = −1 + 1/√2, −1 − 1/√2
    Kp = 2:    s1,2 = −1, −1
    Kp = 3:    s1,2 = −1 + j/√2, −1 − j/√2
    Kp = 4:    s1,2 = −1 + j, −1 − j
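The table of roots can be generated directly from the quadratic formula; a short sketch:

```python
import cmath

def closed_loop_poles(Kp):
    """Roots of s^2 + 2s + 0.5*Kp = 0 via the quadratic formula."""
    disc = cmath.sqrt(1 - 0.5 * Kp)
    return (-1 + disc, -1 - disc)

# Sweep Kp and watch the poles move along the root locus: real poles for
# Kp < 2, a double pole at -1 for Kp = 2, and a complex-conjugate pair on
# the vertical line Re(s) = -1 for Kp > 2.
locus = {Kp: closed_loop_poles(Kp) for Kp in (0, 1, 2, 3, 4)}
```

Sweeping the gain and plotting the resulting roots is exactly what a root-locus plotting tool does internally for this simple case.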
(root locus: two branches start at s = 0 and s = −2 for Kp = 0, meet at the double pole s = −1 for Kp = 2, then depart vertically, reaching s = −1 ± j at Kp = 4)
What may not have been obvious before is now readily revealed: the system is unconditionally stable (for positive Kp) since the poles always lie in the LHP; and the Kp parameter can be used to position the poles for an overdamped, critically damped or underdamped response.
The root locus starts (i.e. K = 0) at the poles of P(s). This can be seen by substituting Eq. (8B.4) into Eq. (8B.3) and rearranging:

    (s − p1)(s − p2)···(s − pn) + K(s − z1)(s − z2)···(s − zm) = 0    (8B.7)

This shows us that the roots of Eq. (8B.3), when K = 0, are just the open-loop poles of P(s), which are also the poles of Gc(s)Gp(s).
Any portion of the real axis forms part of the root locus for K ≥ 0 if the total number of real poles and zeros to the right of an exploratory point along the real axis is an odd integer. For K < 0, the number is zero or even.
5. Asymptote Angles

The n − m branches of the root locus going to infinity have asymptotes given by:

    θA = [(2r + 1)/(n − m)] 180°,  r = 0, 1, …, n − m − 1    (8B.8)

This can be shown by considering the root locus as it is mapped far away from the group of open-loop poles and zeros. In this area, all the poles and zeros contribute about the same angular component. Since the total angular component must add up to 180° or some odd multiple, Eq. (8B.8) follows.

The asymptotes all intercept at one point on the real axis given by:

    σA = (Σ poles − Σ zeros)/(n − m)    (8B.9)

The value of σA is the centroid of the open-loop pole and zero configuration.
A breakaway point is where a section of the root locus branches from the real
axis and enters the complex region of the s-plane in order to approach zeros
which are finite or are located at infinity. Similarly, there are branches of the
root locus which must break-in onto the real axis in order to terminate on
zeros.
(breakaway and break-in points of a root locus: a breakaway point between two adjacent real poles, and a break-in point where complex branches rejoin the real axis to approach a zero)

Figure 8B.2
The breakaway (and break-in) points correspond to points in the s-plane where
multiple real roots of the characteristic equation occur. A simple method for
finding the breakaway points is available. Taking a lead from Eq. (8B.7), we
can write the characteristic equation 1 + KP(s) = 0 as:

    f(s) = B(s) + KA(s) = 0    (8B.10)

where A(s) and B(s) do not contain K. Suppose that f(s) has multiple roots of order r. Then f(s) may be written as:

    f(s) = (s − p1)^r (s − p2)···(s − pk)    (8B.11)

At such a multiple root the derivative also vanishes:

    df(s)/ds |s=p1 = 0    (8B.12)

Differentiating Eq. (8B.10):

    df(s)/ds = B′(s) + KA′(s) = 0    (8B.13)

The particular value of K that will yield multiple roots of the characteristic equation is obtained from Eq. (8B.13) as:

    K = −B′(s)/A′(s)    (8B.14)

Substituting this value of K back into f(s):

    f(s) = B(s) − [B′(s)/A′(s)] A(s) = 0    (8B.15)

or:

    B(s)A′(s) − B′(s)A(s) = 0    (8B.16)

From Eq. (8B.10) we also have:

    K = −B(s)/A(s)    (8B.17)

and differentiating with respect to s:

    dK/ds = −[B′(s)A(s) − B(s)A′(s)] / A²(s)    (8B.18)

Comparing Eq. (8B.16) with Eq. (8B.18), the breakaway and break-in points are found from the equation:

    dK/ds = 0    (8B.19)
It should be noted that not all points that satisfy Eq. (8B.19) correspond to actual breakaway and break-in points for K ≥ 0 (those that do not, satisfy K < 0 instead). Also, valid solutions must lie on the real axis.
The intersection of the root locus and the imaginary axis can be found by
solving the characteristic equation whilst restricting solution points to s = jω.
This is useful to analytically determine the value of K that causes the system to
become unstable (or stable if the roots are entering from the right-half plane).
Zeros tend to “attract” the locus, while poles tend to “repel” it.
Use a computer to plot the root locus! The other rules provide intuition in
shaping the root locus, and are also used to derive analytical quantities for the
gain K, such as accurately evaluating stability.
Example

(unity-feedback system with open-loop transfer function K(s + 6) / [s(s + 1)(s + 4)²])

This system has four poles (two being a double pole) and one zero, all on the negative real axis. In addition, it has three zeros at infinity. The root locus of this system, illustrated below, can be drawn on the basis of the rules presented.

(root locus: four branches; breakaway from the real axis at s = −0.4525, break-in at s = −7.034 to the left of the zero at −6, a double pole at s = −4, asymptotes at ±60° and 180°, and an imaginary-axis crossing at ω = 1.7 for K = 10.25)
Rule 1 There are four separate loci since the characteristic equation, 1 + G(s) = 0, is a fourth-order equation.

Rule 2 The root locus starts (K = 0) from the poles located at 0, −1, and a double pole located at −4. One pole terminates (K = ∞) at the zero located at −6 and three branches terminate at zeros which are located at infinity.
Rule 5 The loci approach infinity as K becomes large at angles given by:

    θ1 = [1/(4 − 1)] 180° = 60°
    θ2 = [3/(4 − 1)] 180° = 180°
    θ3 = [5/(4 − 1)] 180° = 300°
Rule 6 The intersection of the asymptotic lines and the real axis occurs at:

    σA = (−9 + 6)/(4 − 1) = −1
Rule 7 The point of breakaway (or break-in) from the real axis is determined as follows. From the relation:

    1 + G(s)H(s) = 1 + K(s + 6) / [s(s + 1)(s + 4)²] = 0

we have:

    K = −s(s + 1)(s + 4)² / (s + 6)

    dK/ds = −[(s + 6) d/ds{s(s + 1)(s + 4)²} − s(s + 1)(s + 4)²] / (s + 6)² = 0

Therefore, solving the resulting polynomial, the breakaway point is at s = −0.4525 and the break-in point is at s = −7.034 (s = −4, the double open-loop pole, is also a solution, and the remaining real solution does not lie on the K ≥ 0 locus).
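Rule 7 can be checked numerically without expanding any polynomials. A pure-Python sketch: evaluate dK/ds by a central difference and bisect on the two real-axis intervals where the locus lives:

```python
def K(s):
    """Gain along the real axis: K(s) = -s(s+1)(s+4)^2/(s+6)."""
    return -s * (s + 1) * (s + 4) ** 2 / (s + 6)

def dK(s, h=1e-6):
    """Central-difference approximation to dK/ds."""
    return (K(s + h) - K(s - h)) / (2 * h)

def bisect(f, lo, hi, tol=1e-9):
    """Find a zero of f in [lo, hi], assuming a sign change."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

breakaway = bisect(dK, -0.9, -0.1)   # between the poles at 0 and -1
break_in  = bisect(dK, -8.0, -6.5)   # to the left of the zero at -6
```

The two roots agree with the analytical values −0.4525 and −7.034 marked on the root locus plot.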
Rule 8 The intersection of the root locus and the imaginary axis can be determined by solving the characteristic equation when s = jω. The characteristic equation becomes:

    s(s + 1)(s + 4)² + K(s + 6) = 0
    s⁴ + 9s³ + 24s² + (16 + K)s + 6K = 0
    ω⁴ − j9ω³ − 24ω² + j(16 + K)ω + 6K = 0

Setting the imaginary part to zero gives:

    ω² = (16 + K)/9

and setting the real part to zero gives:

    ω⁴ − 24ω² + 6K = 0

Substituting the first relation into the second:

    [(16 + K)/9]² − 24(16 + K)/9 + 6K = 0

Solving this quadratic, we finally get K = 10.2483. Substituting this into the preceding equation, we obtain:

    ω = 1.7 rads⁻¹
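A quick numerical confirmation of Rule 8, sketched in Python: with K = 10.2483, the characteristic polynomial should have a root essentially on the imaginary axis at ω ≈ 1.7 rad/s.

```python
import math

K = 10.2483
omega = math.sqrt((16 + K) / 9)   # from the imaginary-part equation

def char_poly(s):
    """Characteristic polynomial s^4 + 9s^3 + 24s^2 + (16+K)s + 6K."""
    return s**4 + 9*s**3 + 24*s**2 + (16 + K)*s + 6*K

residual = abs(char_poly(1j * omega))   # should be essentially zero
```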
(computer-generated root locus for G(s) = K(s + 6)/[s(s + 1)(s + 4)²], plotted over the real axis from −15 to 10 and the imaginary axis from −8 to 8)
Thus, the computer solution matches the analytical solution, although the
analytical solution provides more accuracy for points such as the crossover
point and breakaway / break-in points; and it also provides insight into the
system’s behaviour.
MATLAB®’s RLTool
We can use MATLAB® to graph our root loci. From the command window,
just type “rltool” to start the root locus user interface.
Example

The previous maze rover positioning scheme has a zero introduced in the controller:

(unity-feedback system: controller Kp with a real zero, driving the maze rover 0.5/[s(s + 2)])
We then place our Gc transfer function into the “C” position and press OK.
MATLAB® will draw the root locus and choose appropriate graphing
quantities. The closed-loop poles are shown by red squares, and can be dragged
by the mouse. The status line gives the closed-loop pole locations, and the gain
K p that was needed to put them there can be observed at the top of the user
interface.
(root locus with the controller zero: the branches are pulled to the left; closed-loop pole positions are marked for Kp = 0, Kp = 1 and Kp = 15)
We can see how the rules help us to “bend and shape” the root locus for our
purposes. For example, we may want to increase the damping (move the poles
to the left) which we have now done using a zero on the real axis. We couldn’t
do this for the case of a straight gain in the previous example:
(root locus of the previous straight-gain example, for comparison: the complex branches are confined to the vertical line through s = −1)
We will now see what happens if we place a pole in the controller instead of a
zero:
(unity-feedback system: controller Kp with a real pole, driving the maze rover 0.5/[s(s + 2)])

(root locus with the controller pole: the branches are pushed towards the right-half plane; closed-loop pole positions are marked for Kp = 4)
We see that the pole “repels” the root locus. Also, unfortunately in this case, the root locus heads off into the RHP. If the parameter Kp is increased too far, the closed-loop system will become unstable.

The root locus technique applies to discrete-time systems also. The characteristic equation of a discrete-time unity-feedback system is:

    1 + Gc(z)Gp(z) = 1 + KP(z) = 0    (8B.20)
Example

(discrete-time unity-feedback system: controller Kp driving a discretised plant G(z))

The continuous-time plant is:

    G(s) = 1 / [s(s + 1)]

Discretising with sample period Ts using the bilinear (Tustin) transform gives:

    G(z) = Ts²(z + 1)² / [(4 + 2Ts)z² − 8z + (4 − 2Ts)]

The closed-loop transfer function is then:

    T(z) = KpTs²(z + 1)² / [(4 + 2Ts + KpTs²)z² + (2KpTs² − 8)z + (4 − 2Ts + KpTs²)]

(z-plane root loci for the two sample periods considered below)
The corresponding step responses for the two extreme cases, Ts = 1 and Ts = 0.1, are shown below:

(step responses versus nTs: the Ts = 0.1 response rises smoothly to 1; the Ts = 1 response is more oscillatory)
We can see that the sample time affects the control system’s ability to respond
to changes in the output. If the sample period is small relative to the time
constants in the system, then the output will be a good approximation to the
continuous-time case. If the sample period is much larger, then we inhibit the
ability of the feedback to correct for errors at the output - causing oscillations,
increased overshoot, and sometimes even instability.
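The closed-loop step response can be simulated directly from the difference equation implied by T(z). A sketch in Python (Kp = 1 is an illustrative choice here, not a value fixed by the notes):

```python
def step_response(Ts, Kp=1.0, n_steps=200):
    """Simulate the closed-loop T(z) derived above for a unit-step input."""
    # Numerator and denominator coefficients of T(z) in z^2, z^1, z^0:
    num = [Kp * Ts**2, 2 * Kp * Ts**2, Kp * Ts**2]
    den = [4 + 2*Ts + Kp*Ts**2, 2*Kp*Ts**2 - 8, 4 - 2*Ts + Kp*Ts**2]
    y, r = [], [1.0] * n_steps           # unit-step reference
    for n in range(n_steps):
        acc = sum(num[k] * r[n - k] for k in range(3) if n - k >= 0)
        acc -= sum(den[k] * y[n - k] for k in (1, 2) if n - k >= 0)
        y.append(acc / den[0])
    return y

y_fast = step_response(Ts=0.1)
y_slow = step_response(Ts=1.0)
```

Both responses settle to 1 (the closed loop has unity DC gain for any Ts), but the larger sample period gives the poorer, more oscillatory transient described above.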
Summary
The root locus for a unity-feedback system is a graph of the closed-loop
pole locations as a system parameter, such as controller gain, is varied from
0 to infinity.
There are various rules for drawing the root locus that help us to
analytically derive various values, such as gains that cause instability. A
computer is normally used to graph the root locus, but understanding the
root locus rules provides insight into the design of compensators in
feedback systems.
The root locus can tell us why and when a system becomes unstable.
We can bend and shape a root locus by the addition of poles and zeros so
that it passes through a desired location of the complex plane.
References
Kuo, B: Automatic Control Systems, Prentice-Hall, 1995.
Exercises
1.
For the system shown:
(unity-feedback system with open-loop transfer function K(s + 4) / [s(s + 1)(s + 2)])
(b) Find the range of K for stability and the frequency of oscillation if unstable.
(c) Find the value of K for which the closed-loop system will have a 5%
overshoot for a step input.
(d) Estimate the 1% settling time for the closed-loop pole locations found in
part (c).
2.
The plant shown is open-loop unstable due to the right-half plane pole.
(open-loop plant: U(s) → 1/[(s + 3)(s − 8)] → C(s))
(a) Show by a plot of the root locus that the plant cannot be stabilized for any K, 0 ≤ K < ∞, if unity feedback is placed around it as shown.
(unity-feedback system: K / [(s + 3)(s − 8)])
(b) An attempt is made to stabilize the plant using the feedback compensator
shown:
(feedback system: forward path K / [(s + 3)(s − 8)], feedback compensator (s + 1)/(s + 2))
3.
For the system shown the required value of a is to be determined using the
root-locus technique.
(unity-feedback system: forward path (s + a)/(s + 3) · 1/[s(s + 1)])

(a) Sketch the root locus of C(s)/R(s) as the parameter a is varied.
(b) From the root-locus plot, find the value of a which gives both minimum
overshoot and settling time when r(t) is a step function.
(c) Find the maximum value of a which just gives instability and determine the
frequency of oscillation for this value of a.
4.
The block diagram of a DC motor position control system is shown below.

(desired position R(s) → amplifier and motor 10/[s(s + 4)] → gear train 0.3 → actual position C(s), with tachometer feedback Ks around the motor)

(a) Sketch the root locus of C(s)/R(s) as K is varied. Use two plots: one for negative feedback and one for positive feedback. Find all important geometrical properties of the locus.
(b) Find the largest magnitude of K which just gives instability, and determine
the frequency of oscillation of the system for this value of K.
(c) Find the steady-state error (as a function of K) when r(t) is a step function.

(d) From the root locus plots, find the value of K which will give 10% overshoot when r(t) is a step function, and determine the 10-90% rise time for this value of K.
Note: The closed-loop system has two poles (as found from the root locus) and
no zeros. Verify this yourself using block diagram reduction.
    ∇ × E = −∂B/∂t
    ∇ · B = 0
    ∇ × B = μ(J + ε ∂E/∂t)
    ∇ · E = ρ/ε
From these he was able to predict that there should exist electromagnetic waves
which could be transmitted through free space at the speed of light. The
revolution in human affairs wrought by these equations and their experimental
verification by Heinrich Hertz in 1888 is well known: wireless
communication, control and measurement - so spectacularly demonstrated by
television and radio transmissions across the globe, to the moon, and even to
the edge of the solar system!
James Maxwell was born in Edinburgh, Scotland. His mother died when he
was 8, but his childhood was something of a model for a future scientist. He
was endowed with an exceptional memory, and had a fascination with
mechanical toys which he retained all his life. At 14 he presented a paper to the
Royal Society of Edinburgh on ovals. At 16 he attended the University of
Edinburgh where the library still holds records of the books he borrowed while
still an undergraduate – they include works by Cauchy on differential
¹ It was Oliver Heaviside who, in 1884-1885, cast the long list of equations that Maxwell had given into the compact and symmetrical set of four vector equations shown here and now universally known as “Maxwell’s equations”. It was in this new form (“Maxwell redressed,” as Heaviside called it) that the theory eventually passed into general circulation in the 1890s.
We can scarcely avoid the conclusion that light consists in the transverse
undulations of the same medium which is the cause of electric and magnetic
phenomena.
Maxwell’s famous account, “A Dynamical Theory of the Electromagnetic Field”, was read before a largely perplexed Royal Society in 1864. Here he brought forth, for the first time, the equations which comprise the basic laws of electromagnetism.

Maxwell also continued work he had begun at Aberdeen, on the kinetic theory of gases (he had first considered the problem while studying the rings of Saturn). In 1866 he formulated, independently of Ludwig Boltzmann, the kinetic theory of gases, which showed that temperature and heat involved only molecular motion.

All the mathematical sciences are founded on relations between physical laws and laws of numbers, so that the aim of exact science is to reduce the problems of nature to the determination of quantities by operations with numbers. – James Clerk Maxwell
Maxwell was spending an evening with Sir William Grove who was then
engaged in experiments on vacuum tube discharges. He used an induction
coil for this purpose, and found that if he put a capacitor in series with the
primary coil he could get much larger sparks. He could not see why. Grove
knew that Maxwell was a splendid mathematician, and that he also had
mastered the science of electricity, especially the theoretical art of it, and so
he thought he would ask this young man [Maxwell was 37] for an
explanation. Maxwell, who had not had very much experience in
experimental electricity at that time, was at a loss. But he spent that night in
working over his problem, and the next morning he wrote a letter to Sir
William Grove explaining the whole theory of the capacitor in series
connection with a coil. It is wonderful what a genius can do in one night!
Maxwell’s letter, which began with the sentence, “Since our conversation
yesterday on your experiment on magneto-electric induction, I have considered
it mathematically, and now send you the result,” was dated March 27, 1868.
Preliminary to the mathematical treatment, Maxwell gave in this letter an
unusually clear exposition of the analogy existing between certain electrical
and mechanical effects. In the postscript, or appendix, he gave the
mathematical theory of the experiment. Using different, but equivalent,
symbols, he derived and solved the now familiar expression for the current i in
such a circuit:
    L di/dt + Ri + (1/C)∫i dt = V sin ωt

The solution for the current amplitude of the resulting sinusoid, in the steady-state, is:

    I = V / √[R² + (ωL − 1/ωC)²]

from which Maxwell pointed out that the current would be a maximum when:

    ωL = 1/ωC
When creating his standard for electrical resistance, Maxwell wanted to design
a governor to keep a coil spinning at a constant rate. He made the system stable
by using the idea of negative feedback. It was known for some time that the
governor was essentially a centrifugal pendulum, which sometimes exhibited
“hunting” about a set point – that is, the governor would oscillate about an
equilibrium position until limited in amplitude by the throttle valve or the
travel allowed to the bobs. This problem was solved by Airy in 1840 by fitting
a damping disc to the governor. It was then possible to minimize speed fluctuations by adjusting the “controller gain”. But as the gain was increased, the governors would burst into oscillation again. In 1868, Maxwell published his paper “On Governors”, the first theoretical treatment of the stability of such feedback systems.
The Cavendish laboratory was opened in 1874, and Maxwell spent the next 5
years editing Henry Cavendish’s papers.
References
Blanchard, J.: The History of Electrical Resonance, Bell System Technical
Journal, Vol. 20 (4), p. 415, 1941.
Lecture 9A – State-Variables
State representation. Solution of the state equations. Transition matrix.
Transfer function. Impulse response. Linear state-variable feedback.
Overview
The frequency-domain has dominated our analysis and design of signals and
systems – up until now. Frequency-domain techniques are powerful tools, but
they do have limitations. High-order systems are hard to analyse and design.
Initial conditions are hard to incorporate into the analysis process (remember –
the transfer function only gives the zero-state response). A time-domain
approach, called the state-space approach, overcomes these deficiencies and
also offers additional features of analysis and design that we have not yet
considered.
State Representation
Consider the following simple electrical system:

(series RLC circuit: source vs drives R and L in series with capacitor C; loop current i, capacitor voltage vC)
Figure 9A.1
The state variables characterize the future behaviour of a system once the inputs to the system are specified, together with a knowledge of the initial states.
States
For the system in Figure 9A.1, we can choose i and vC as the state variables.
Therefore, let:
    q1 = i    (9A.1a)
    q2 = vC    (9A.1b)

Applying KVL around the loop:

    vs = Ri + L di/dt + vC    (9A.2)

which can be rearranged as:

    di/dt = −(R/L)i − (1/L)vC + (1/L)vs    (9A.3)

In terms of our state variables, given in Eqs. (9A.1), we can rewrite this as:

    dq1/dt = −(R/L)q1 − (1/L)q2 + (1/L)vs    (9A.4)

Finally, we write Eq. (9A.4) in the standard nomenclature for state variable analysis – we use q̇ for dq/dt and also let the input, vs, be represented by the symbol x:

    q̇1 = −(R/L)q1 − (1/L)q2 + (1/L)x    (9A.5)

The capacitor provides the second state equation. From:

    i = C dvC/dt    (9A.6)

we have:

    q1 = C q̇2    (9A.7)

or:

    q̇2 = (1/C)q1    (9A.8)

Writing Eqs. (9A.5) and (9A.8) in matrix form:

    [q̇1]   [−R/L  −1/L] [q1]   [1/L]
    [q̇2] = [ 1/C    0 ] [q2] + [ 0 ] x    (9A.9)
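Eq. (9A.9) can be checked with a small numerical sketch: the eigenvalues of A should be the roots of the familiar series-RLC characteristic polynomial s² + (R/L)s + 1/(LC). The component values below are illustrative, not from the notes.

```python
import cmath

R, L, C = 2.0, 1.0, 0.25   # illustrative component values

A = [[-R / L, -1 / L],
     [1 / C, 0.0]]
b = [1 / L, 0.0]

# Eigenvalues of the 2x2 matrix A from its trace and determinant:
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
eigs = ((tr + disc) / 2, (tr - disc) / 2)

# Roots of s^2 + (R/L)s + 1/(L*C), the classical RLC characteristic equation:
disc2 = cmath.sqrt((R / L) ** 2 - 4 / (L * C))
roots = ((-R / L + disc2) / 2, (-R / L - disc2) / 2)
```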
We will reserve small boldface letters for column vectors, such as q and b, and capital boldface letters for matrices, such as A. Scalar variables such as x are written in italics, as usual.
In matrix notation, the state equation is:

    q̇ = Aq + bx    (9A.10)

Eq. (9A.10) is very important – it tells us how the states of the system q change in time due to the input x.
Output
The system output can usually be expressed as a linear combination of all the
state variables.
For example, if for the RLC circuit of Figure 9A.1 the output y is vC then:

    y = vC = q2    (9A.11)

or, in matrix form:

    y = [0 1] [q1; q2]    (9A.12)

In general, the output equation is written:

    y = cᵀq    (9A.13)

or, if the output also depends directly on the input:

    y = cᵀq + dx    (9A.14)
We can solve the state equations in the s-domain. Taking the Laplace
Transform of Eq. (9A.10) gives:
    sQ(s) − q(0) = AQ(s) + bX(s)    (9A.16)

Notice how the initial conditions are automatically included by the Laplace transform of the derivative. The solution will be the complete response, not just the ZSR. Solving for Q(s):

    Q(s) = (sI − A)⁻¹q(0) + (sI − A)⁻¹bX(s)    (9A.17)

We define the resolvent matrix:

    Φ(s) = (sI − A)⁻¹    (9A.18)

so that:

    Q(s) = Φ(s)q(0) + Φ(s)bX(s)    (9A.19)

and the output is:

    Y(s) = cᵀΦ(s)q(0) + cᵀΦ(s)bX(s)    (9A.20)
Before we do that, we also define the ILT of the resolvent matrix, called the transition matrix:

    φ(t) = L⁻¹{Φ(s)}    (9A.21)

The transition matrix and the resolvent matrix form a Laplace transform pair. Taking the inverse Laplace transform of Eq. (9A.19) gives the complete solution of the state equation:

    q(t) = φ(t)q(0) + ∫₀ᵗ φ(t − τ)bx(τ) dτ    (9A.22)
            (ZIR)           (ZSR)

Notice how multiplication in the s-domain turned into convolution in the time-domain. The transition matrix is a generalisation of impulse response, but it applies to states – not the output!

We can get the output response of the system after solving for the states by direct substitution into Eq. (9A.14).
We can get the output response of the system after solving for the states by
direct substitution into Eq. (9A.14).
Transition Matrix
The transition matrix possesses two interesting properties that help it to be
calculated by a digital computer:
    φ(0) = I    (9A.23a)
    φ(t) = e^(At)    (9A.23b)

The first property is obvious by substituting t = 0 into Eq. (9A.22). The second relationship arises by observing that the solution to the state equation for the case of zero input, q̇ = Aq, is q = q(0)e^(At). For zero input, Eq. (9A.22) gives q(t) = φ(t)q(0), so that we must have φ(t) = e^(At). The matrix e^(At) is defined by the series:

    e^(At) = I + At/1! + (At)²/2! + (At)³/3! + ⋯    (9A.24)
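The series (9A.24) can be used directly to approximate e^(At). A minimal pure-Python sketch for 2×2 matrices (production software uses better-conditioned algorithms such as scaling-and-squaring):

```python
import math

def mat_mul(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, t, terms=30):
    """Truncated series I + At + (At)^2/2! + ... for a 2x2 matrix."""
    At = [[A[i][j] * t for j in range(2)] for i in range(2)]
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    term = [[1.0, 0.0], [0.0, 1.0]]     # current term (At)^k / k!
    for k in range(1, terms):
        term = mat_mul(term, At)
        term = [[term[i][j] / k for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

# Check against the analytic transition matrix for A = [[0, 1], [-1, -2]],
# whose resolvent has the double pole (s + 1)^2 (see the example below):
A = [[0.0, 1.0], [-1.0, -2.0]]
t = 0.5
phi = mat_exp(A, t)
analytic = [[math.exp(-t) * (1 + t), t * math.exp(-t)],
            [-t * math.exp(-t), math.exp(-t) * (1 - t)]]
```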
Example
Consider the system described by:

    d²y/dt² + 2 dy/dt + y = r + dr/dt    (9A.25)

with:

    r = sin t,  y(0) = 1,  ẏ(0) = 0    (9A.26)

Let:

    q1 = y,  q2 = ẏ,  x = r + ṙ    (9A.27)

then:

    q̇1 = q2
    q̇2 = −q1 − 2q2 + x    (9A.28)

or just:

    q̇ = Aq + bx    (9A.29)

where:

    A = [0 1; −1 −2],  b = [0; 1],  q = [q1; q2],  q̇ = [q̇1; q̇2]    (9A.30)

First form sI − A:

    sI − A = [s 0; 0 s] − [0 1; −1 −2] = [s −1; 1 s+2]    (9A.31)

Using the matrix inverse formula:

    B⁻¹ = adj B / |B|    (9A.32)

the resolvent matrix is:

    Φ(s) = (sI − A)⁻¹ = 1/(s+1)² [s+2  1; −1  s]

         = [ (s+2)/(s+1)²   1/(s+1)²
             −1/(s+1)²      s/(s+1)² ]    (9A.33)

The transition matrix is the inverse Laplace transform of the resolvent matrix:

    φ(t) = [ e⁻ᵗ(t+1)   te⁻ᵗ
             −te⁻ᵗ      e⁻ᵗ(1−t) ]    (9A.34)

The ZIR is then:

    q_ZIR(t) = φ(t)q(0)

    [q1]       [e⁻ᵗ(t+1)   te⁻ᵗ    ] [1]   [e⁻ᵗ(t+1)]
    [q2]ZIR =  [−te⁻ᵗ      e⁻ᵗ(1−t)] [0] = [−te⁻ᵗ   ]    (9A.35)
For the ZSR, the input transform is:

    X(s) = L{sin t + cos t} = 1/(s²+1) + s/(s²+1) = (s+1)/(s²+1)    (9A.37)

so that:

    Q_ZSR(s) = Φ(s)bX(s)

             = 1/(s+1)² [s+2  1; −1  s] [0; 1] (s+1)/(s²+1)

             = [ 1/[(s+1)(s²+1)]
                 s/[(s+1)(s²+1)] ]    (9A.38)

Use partial fractions to get:

    [q1]       [ ½(e⁻ᵗ − cos t + sin t) ]
    [q2]ZSR =  [ ½(−e⁻ᵗ + cos t + sin t) ]    (9A.39)
The total response is the sum of the ZIR and the ZSR:

    q1(t) = e⁻ᵗ(t+1) + ½(e⁻ᵗ − cos t + sin t)
    q2(t) = −te⁻ᵗ + ½(−e⁻ᵗ + cos t + sin t)    (9A.40)

This is just the solution for the states. To get the output, we use:

    y = cᵀq + dx    (9A.41)

with cᵀ = [1 0] and d = 0, so that y = q1. You should confirm this solution by solving the differential equation directly using your previous mathematical knowledge, e.g. the method of undetermined coefficients.
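Another way to confirm the solution is brute-force numerical integration of the state equation. A sketch using the simple Euler scheme:

```python
import math

# Integrate qdot = Aq + bx with x = sin(t) + cos(t), q(0) = [1, 0].
A = [[0.0, 1.0], [-1.0, -2.0]]
b = [0.0, 1.0]
q = [1.0, 0.0]
dt, t_end = 1e-4, 2.0

t = 0.0
while t < t_end - 1e-12:
    x = math.sin(t) + math.cos(t)
    qdot = [A[0][0]*q[0] + A[0][1]*q[1] + b[0]*x,
            A[1][0]*q[0] + A[1][1]*q[1] + b[1]*x]
    q = [q[0] + dt*qdot[0], q[1] + dt*qdot[1]]
    t += dt

# Analytic total response (ZIR + ZSR) for the first state at time t:
q1_exact = math.exp(-t)*(t + 1) + 0.5*(math.exp(-t) - math.cos(t) + math.sin(t))
```

The Euler result agrees with the closed-form answer to within the O(dt) error of the scheme.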
Transfer Function
The transfer function of a single input-single output (SISO) system can be
obtained easily from the state variable equations. Since a transfer function only
gives the ZSR (all initial conditions are zero), then Eq. (9A.19) becomes:

    Q(s) = Φ(s)bX(s)    (9A.43)

The output in the s-domain, using the Laplace transform of Eq. (9A.13) and Eq. (9A.43), is just:

    Y(s) = cᵀQ(s) = cᵀΦ(s)bX(s)    (9A.44)

The transfer function from a state-variable description of a system is therefore:

    H(s) = cᵀΦ(s)b    (9A.45)

Impulse Response

The impulse response is just the inverse Laplace transform of the transfer function:

    h(t) = cᵀφ(t)b = cᵀe^(At)b    (9A.46)
Example
Continuing the analysis of the system used in the previous example, we can
find the transfer function using Eq. (9A.45):
    H(s) = [1 0] · 1/(s+1)² [s+2  1; −1  s] [0; 1]

         = [1 0] · 1/(s+1)² [1; s]

         = 1/(s+1)²    (9A.47)

As a check, the system's differential equation in terms of the input x is:

    d²y/dt² + 2 dy/dt + y = x

Taking the Laplace transform:

    (s² + 2s + 1)Y(s) = X(s)    (9A.48)

so that:

    H(s) = Y(s)/X(s) = 1/(s² + 2s + 1) = 1/(s+1)²    (9A.49)
Why would we use the state-variable approach to obtain the transfer function?
For a simple system, we probably wouldn’t, but for multiple-input multiple-
output systems, it is much easier using the state-variable approach.
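The identity H(s) = cᵀ(sI − A)⁻¹b can also be verified pointwise, without any symbolic algebra. A sketch for the example system:

```python
def H_state_space(s):
    """Evaluate c^T (sI - A)^(-1) b for A = [[0,1],[-1,-2]], b = [0,1], c = [1,0]."""
    # sI - A:
    m = [[s, -1.0], [1.0, s + 2.0]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    # (sI - A)^(-1) b, using the 2x2 inverse formula:
    phi_b = [-m[0][1] / det, m[0][0] / det]
    # c^T = [1, 0] picks the first entry:
    return phi_b[0]

# Compare against the closed-form 1/(s+1)^2 at a few test points:
for s in (0.5, 1.0, 3.0, 10.0):
    assert abs(H_state_space(s) - 1.0 / (s + 1.0) ** 2) < 1e-12
```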
The structure of linear state-variable feedback is shown below:

(block diagram of linear state-variable feedback: the reference r(t) enters the controller gain k0, whose output x(t) drives the system q̇ = Aq + bx, y = cᵀq; the state vector q is fed back through the gain vector kᵀ and subtracted from r)
Figure 9A.2
The system has been characterised in terms of states – you should confirm that
the above diagram of the “system to be controlled” is equivalent to the matrix
formulation of Eqs. (9A.10) and (9A.13).
We have placed a controller in front of the system, and we desire the output y
to follow the set point, or reference input, r. The design of the controller
involves the determination of the controller variables k 0 and k to achieve a
desired response from the system (The desired response could be a time-
domain specification, such as rise time, or a frequency specification, such as
bandwidth).
The controller just multiplies each of the states qi by a gain ki, subtracts the sum of these from the input r, and multiplies the result by a gain k0. The input to the open-loop system is thus modified by the feedback:

    x = k0(r − kᵀq)    (9A.50)

Substituting into the state equation q̇ = Aq + bx gives:

    q̇ = A_k q + bk0 r    (9A.51)

where state-variable feedback modifies the A matrix:

    A_k = A − k0 bkᵀ    (9A.52)

For the ZSR, the closed-loop transfer function is given by Eq. (9A.45) with the above substitutions:

    H(s) = k0 cᵀΦ_k(s)b    (9A.53)

where the modified resolvent matrix, when linear state-variable feedback is applied, is:

    Φ_k(s) = (sI − A_k)⁻¹    (9A.54)

We choose the controller variables k0 and kᵀ = [k1 k2 … kn] to create the transfer function obtained from the design criteria (easy for an n = 2 second-order system).
Example

Consider the plant composed of two cascaded sub-systems:

(block diagram: X(s) → 1/(s + 70) → Q1(s) → 1/s → Q2(s) = Y(s))

Suppose we wish the closed-loop system to have the second-order response:

    C(s)/R(s) = ωn² / (s² + 2ζωn s + ωn²)
              = 2500 / (s² + 70.71s + 2500)    (9A.56)

From the block diagram:

    Q1 = X/(s + 70)
    Q2 = Q1/s
    Y = Q2    (9A.57)

Rearranging, we get:

    sQ1 = −70Q1 + X
    sQ2 = Q1
    Y = Q2    (9A.58)

In the time-domain, in matrix form:

    q̇ = [−70 0; 1 0]q + [1; 0]x
    y = [0 1]q    (9A.59)

or just:

    q̇ = Aq + bx
    y = cᵀq    (9A.60)
(block diagram of the controller and process: R(s) enters the gain k0; the control effort X(s) drives 1/(s + 70) and then 1/s; the states Q1(s) and Q2(s) are fed back through the gains k1 and k2)

We see that the controller accepts a linear combination of the states, and compares this with the reference input. It then provides gain and applies the resulting signal as the “control effort”, X(s), to the process. The control effort is:

    x = k0(r − k1q1 − k2q2)    (9A.61)

or in matrix notation:

    x = k0(r − kᵀq)    (9A.62)

Substituting into the state equation:

    q̇ = Aq + b k0(r − kᵀq)
      = (A − k0 bkᵀ)q + bk0 r
      = A_k q + bk0 r    (9A.63)

where:

    A_k = A − k0 bkᵀ    (9A.64)

If we let:

    Φ_k(s) = (sI − A_k)⁻¹    (9A.65)

then the closed-loop transfer function is:

    H(s) = k0 cᵀΦ_k(s)b    (9A.66)
For this example:

    A_k = A − k0 bkᵀ
        = [−70 0; 1 0] − k0 [1; 0][k1 k2]
        = [−70 − k0k1   −k0k2
            1            0   ]    (9A.67)

Then:

    Φ_k(s) = (sI − A_k)⁻¹

           = [s + 70 + k0k1   k0k2
              −1              s   ]⁻¹

           = 1/[s² + (70 + k0k1)s + k0k2] [s   −k0k2
                                           1   s + 70 + k0k1]    (9A.68)

and:

    H(s) = k0 cᵀΦ_k(s)b

         = k0 [0 1] · 1/[s² + (70 + k0k1)s + k0k2] [s   −k0k2        ] [1]
                                                   [1   s + 70 + k0k1] [0]

         = k0 / [s² + (70 + k0k1)s + k0k2]    (9A.69)

The values of k0, k1 and k2 can be found by matching Eqs. (9A.56) and (9A.69). The results are:

    k0 = 2500
    k0k2 = 2500
    70 + k0k1 = 70.71    (9A.70)

so that:

    k0 = 2500
    k1 = 2.843 × 10⁻⁴
    k2 = 1    (9A.71)
This completes the controller design. The final step would be to draw the root
locus and examine the relative stability, and the sensitivity of slight gain
variations. For this simple system, the final step is not necessary.
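The design can be sanity-checked numerically: with the gains of Eq. (9A.71), the closed-loop matrix A_k should have the characteristic polynomial s² + 70.71s + 2500 of Eq. (9A.56). A sketch:

```python
import math

k0, k1, k2 = 2500.0, 2.843e-4, 1.0

A_k = [[-70.0 - k0 * k1, -k0 * k2],
       [1.0, 0.0]]

# Characteristic polynomial of a 2x2 matrix: s^2 - tr(A_k) s + det(A_k)
tr = A_k[0][0] + A_k[1][1]
det = A_k[0][0] * A_k[1][1] - A_k[0][1] * A_k[1][0]
coeffs = (1.0, -tr, det)          # expect (1, 70.71, 2500)

# Equivalent second-order parameters:
wn = math.sqrt(det)               # natural frequency, expect 50
zeta = -tr / (2 * wn)             # damping ratio, expect 0.7071
```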
Summary
State-variables describe “internal states” of a system rather than just the
input-output relationship.
Linear state-variable feedback involves the design of gains for each of the
states, plus the input.
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB®, Prentice-Hall, 1997.
Exercises
1.
Using capacitor voltages and inductor currents write a state-variable
representation for the following circuit:
(circuit: source vs, with elements R1, L1, C1, R2 and R3)
2.
Consider a linear system with input u and output y. Three experiments are
performed on this system using the inputs x1(t), x2(t) and x3(t) for t ≥ 0. In each case, the initial state at t = 0, x0, is the same. The corresponding observed outputs are y1(t), y2(t) and y3(t). Which of the following three predictions are true?

(a) If x3 = x1 + x2, then y3 = y1 + y2.

(b) If x3 = ½(x1 + x2), then y3 = ½(y1 + y2).

(c) If x3 = x1 − x2, then y3 = y1 − y2.
3.
Write dynamical equation descriptions for the block diagrams shown below
with the chosen state variables.
(a)
1 Q1 1 Q3
3 s +2 s +2
X Y
2
1 Q2
s
(b)
1 Q1 1 Q2 1 Q3
X Y
s +2 s +1 s +2
4.
Find the transfer function of the systems in Q3:
(ii) directly from your answers in Q3 (use the resolvent matrix etc)
5.
Given:
1 1 1
q q x
4 1 2
and:
3 0 t 0
q0 xt
0 1 t 0
Find qt .
6.
Write a simple MATLAB® script to evaluate the time response of the system described in state equation form in Q5 using the approximate relationship:

    q̇(t) ≈ [q(t + T) − q(t)] / T

Compare these plots to the values given by the exact solution to Q5 (obtained by finding the inverse Laplace transforms of the answer given).
Lecture 9B – State-Variables 2
Normal form. Similarity transform. Solution of the state equations for the ZIR.
Poles and repeated eigenvalues. Discrete-time state-variables. Discrete-time
response. Discrete-time transfer function.
Overview
State-variable analysis is useful for high-order systems and multiple-input
multiple-output systems. It can also be used to find transfer functions between
any output and any input of a system. It also gives us the complete response.
The only drawback to all this analytical power is that solving the state-variable
equations for high-order systems is difficult to do symbolically. Any computer
solution also has to be thought about in terms of processing time and memory
storage requirements.
Normal Form
Solving matrix equations is hard... unless we have a “trivial” system:

    [ż1]   [λ1  0  ⋯  0 ] [z1]
    [ż2] = [ 0  λ2 ⋯  0 ] [z2]
    [⋮ ]   [⋮         ⋮ ] [⋮ ]
    [żn]   [ 0  0  ⋯  λn] [zn]    (9B.1)

or, in matrix form:

    ż = Λz    (9B.2)

where Λ is diagonal. To get our state equations into this form, start with the unforced (zero-input) system:

    q̇ = Aq    (9B.3)

and assume a solution of the form:

    q = q0 e^(λt)    (9B.4)

Substituting into Eq. (9B.3) gives:

    Aq = λq    (9B.5)

Therefore:

    (A − λI)q = 0    (9B.6)

For a non-trivial solution to exist, we require:

    |A − λI| = 0    (9B.7)

This is called the characteristic equation of the system. The eigenvalues are the values of λ which satisfy |A − λI| = 0. Once we have all the λ’s, each column vector qi which satisfies the original equation Eq. (9B.5) is called a column eigenvector:

    |A − λI| = 0  →  λi an eigenvalue
    Aqi = λi qi   →  qi an eigenvector    (9B.8)
Example

Find the eigenvalues and eigenvectors of:

    A = [ 2  3  2
         10  3  4
          3  6  1 ]    (9B.9)

The characteristic equation is:

    |A − λI| = | 2−λ   3    2
                10    3−λ   4
                 3    6    1−λ | = 0    (9B.10)

Expanding the determinant gives a cubic in λ, whose roots are the eigenvalues:

    λ1 = −2
    λ2 = −3
    λ3 = 11    (9B.13)

Take λ1 = −2:

    (A − λ1 I)q = [ 4  3  2
                   10  5  4
                    3  6  3 ] [q1; q2; q3] = [0; 0; 0]    (9B.14)

Solve to get:

    q1 = [1; 2; −5]  for λ1 = −2    (9B.15)

Similarly:

    q2 = [0; 2; −3]  for λ2 = −3   and   q3 = [2; 4; 3]  for λ3 = 11    (9B.16)
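Each eigenpair can be checked by direct multiplication, Aqi = λi qi. A sketch in pure Python (the arithmetic is exact, since all entries are integers):

```python
A = [[2, 3, 2],
     [10, 3, 4],
     [3, 6, 1]]

pairs = [(-2, [1, 2, -5]),
         (-3, [0, 2, -3]),
         (11, [2, 4, 3])]

def mat_vec(M, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

for lam, q in pairs:
    # A q should equal lambda * q exactly:
    assert mat_vec(A, q) == [lam * qi for qi in q]
```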
Similarity Transform
The eigenvalues and eigenvectors that arise from Eq. (9B.5) are put to good
use by transforming q̇ = Aq into ż = Λz. First, form the square n × n matrix whose columns are the column eigenvectors – the similarity transform:

    U = [q1 q2 ⋯ qn]    (9B.17)

Since each column satisfies:

    Aqi = λi qi    (9B.18)

we have, taking all the columns together:

    AU = UΛ    (9B.19)

where Λ is the diagonal matrix of eigenvalues:

    Λ = diag(λ1, λ2, …, λn)    (9B.20)
Example

For the previous example, we can check that AU = UΛ:

    [ 2  3  2] [ 1  0  2]   [ 1  0  2] [−2  0  0]
    [10  3  4] [ 2  2  4] = [ 2  2  4] [ 0 −3  0]
    [ 3  6  1] [−5 −3  3]   [−5 −3  3] [ 0  0 11]    (9B.21)

There is also a set of row eigenvectors satisfying:

    VA = ΛV    (9B.22)

where the rows of V are the row eigenvectors:

    V = [qᵀ(1); qᵀ(2); ⋯; qᵀ(n)]    (9B.23)

If the row eigenvectors are normalised against the column eigenvectors, then:

    VU = I    (9B.24)

which implies the relationship between the two similarity transforms:

    V = U⁻¹    (9B.25)

Pre-multiplying Eq. (9B.19) by V then diagonalises A:

    AU = UΛ
    VAU = VUΛ = Λ    (9B.26)
Now consider the unforced system:

    q̇ = Aq    (9B.28)

Given A and q(0), we would like to determine the ZIR, q_ZIR(t). Change variables by letting q = Uz, so that U ż = AUz. Now pre-multiply by U⁻¹:

    ż = U⁻¹AUz = Λz    (9B.30)

Written out, these are the uncoupled equations:

    ż1 = λ1 z1
    ż2 = λ2 z2
    ⋮
    żn = λn zn    (9B.33)

Each is a simple first-order differential equation:

    dz1/dt = λ1 z1    (9B.34)

with solutions:

    z1 = z1(0)e^(λ1 t)
    z2 = z2(0)e^(λ2 t)
    ⋮
    zn = zn(0)e^(λn t)    (9B.36)

Transforming back to the original states:

    q = Uz = Ue^(Λt) z(0) = Ue^(Λt) U⁻¹ q(0)    (9B.39)

and since we know the ZIR is q_ZIR(t) = φ(t)q(0), then the transition matrix, written in terms of eigenvalues and eigenvectors, is:

    φ(t) = Ue^(Λt) U⁻¹    (9B.40)

This is a quick way to find the transition matrix φ(t) for high-order systems. The ZIR, in terms of eigenvalues, eigenvectors and initial conditions, is then:

    q_ZIR(t) = Ue^(Λt) U⁻¹ q(0)    (9B.41)
Example

Given:

    q̇(t) = [0 1; −2 −3]q(t) + [0; 2]x(t)
    y(t) = [3 1]q(t)
    q(0) = [2; 3]    (9B.42)

find the transition matrix. The characteristic equation is:

    |A − λI| = | −λ    1
                −2   −3−λ | = 0

    λ² + 3λ + 2 = 0
    (λ + 1)(λ + 2) = 0    (9B.43)

so the eigenvalues are λ1 = −1 and λ2 = −2. The column eigenvectors are:

    U1 = [1; −1],  U2 = [1; −2],  so  U = [1 1; −1 −2]    (9B.44)

The row eigenvectors are:

    V1 = [2 1],  V2 = [−1 −1],  so  V = U⁻¹ = [2 1; −1 −1]    (9B.45)

Forming Λ is easy:

    Λ = [−1 0; 0 −2]    (9B.46)

The transition matrix is then:

    φ(t) = Ue^(Λt) U⁻¹

         = [1 1; −1 −2] [e⁻ᵗ 0; 0 e⁻²ᵗ] [2 1; −1 −1]

         = [ 2e⁻ᵗ − e⁻²ᵗ     e⁻ᵗ − e⁻²ᵗ
            −2e⁻ᵗ + 2e⁻²ᵗ   −e⁻ᵗ + 2e⁻²ᵗ ]    (9B.47)
We have seen before that the transfer function of a system using state-variables is given by:

H(s) = cᵀΦ(s)b
     = cᵀ(sI − A)⁻¹b
     = cᵀ adj(sI − A) b / |sI − A|  (9B.50)

where we have used the formula for the inverse, B⁻¹ = adj B / |B|. We can see that the poles of the system are given directly by the characteristic equation |sI − A| = 0. Thus, the poles of a system and the eigenvalues of A are the same:

poles = eigenvalues of A  (9B.51)
Repeated Eigenvalues

Discrete-time State-Variables

The concepts of state, state vectors and state-variables can be extended to discrete-time systems.

Example

Consider the difference equation:

y[n] = y[n−1] − y[n−2] + x[n]

We select:

q₁[n+1] = y[n]  (9B.56)

so that:

q₁[n+1] = y[n−1] − y[n−2] + x[n] = q₁[n] − q₂[n] + x[n]

Also:

q₂[n+1] = y[n−1]  (9B.59)

so that:

q₂[n+1] = q₁[n]  (9B.60)

The equations are now in state variable form, and we can write:

q[n+1] = [ 1  −1 ] q[n] + [1] x[n]
         [ 1   0 ]        [0]

y[n] = [1  −1]q[n] + x[n]  (9B.62)
Example

Given a general transfer function:

H(z) = (b₀ + b₁z⁻¹ + … + b_N z⁻ᴺ) / (1 + a₁z⁻¹ + … + a_N z⁻ᴺ)  (9B.63)

define:

P(z) = X(z) / (1 + a₁z⁻¹ + … + a_N z⁻ᴺ)  (9B.64)

so that Y(z) = (b₀ + b₁z⁻¹ + … + b_N z⁻ᴺ)P(z). Now define the states:

Q₁(z) = z⁻¹P(z)
Q₂(z) = z⁻²P(z)
⋮
Q_N(z) = z⁻ᴺP(z)  (9B.66)

The state equations are then built up as follows. From the first equation in Eq. (9B.66):

Q₁(z) = z⁻¹P(z)
zQ₁(z) = P(z)
       = −a₁z⁻¹P(z) − a₂z⁻²P(z) − … − a_N z⁻ᴺP(z) + X(z)
       = −a₁Q₁(z) − a₂Q₂(z) − … − a_N Q_N(z) + X(z)  (9B.67)

which in the time domain is:

q₁[n+1] = −a₁q₁[n] − a₂q₂[n] − … − a_N q_N[n] + x[n]  (9B.68)

From the second equation in Eq. (9B.66):

Q₂(z) = z⁻²P(z) = z⁻¹(z⁻¹P(z)) = z⁻¹Q₁(z)
zQ₂(z) = Q₁(z)  (9B.69)

so that:

q₂[n+1] = q₁[n]  (9B.70)

Similarly:

q_N[n+1] = q_{N−1}[n]  (9B.71)

We now have all the state equations. Returning to Eq. (9B.64), we have:

Y(z) = b₀P(z) + b₁z⁻¹P(z) + … + b_N z⁻ᴺP(z)  (9B.72)

Eliminating the q₁[n+1] term using Eq. (9B.68) and grouping like terms gives:

q[n+1] = [ −a₁  −a₂  …  −a_{N−1}  −a_N ]        [1]
         [  1    0   …     0       0   ]        [0]
         [  0    1   …     0       0   ] q[n] + [0] x[n]
         [  ⋮    ⋮          ⋮       ⋮   ]        [⋮]
         [  0    0   …     1       0   ]        [0]

y[n] = [c₁  c₂  …  c_N] q[n] + b₀x[n]  (9B.75)

where cᵢ = bᵢ − aᵢb₀,  i = 1, 2, …, N.
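The construction of Eq. (9B.75) can be sketched in code. The Python fragment below (illustrative only – the coefficients are made up, and the notes themselves use MATLAB®) builds the state matrices from the transfer-function coefficients and checks that the state model reproduces the direct difference equation:

```python
def tf_to_state(b, a):
    """Canonical state model from H(z) = (b0+...+bN z^-N)/(1+a1 z^-1+...+aN z^-N).

    b has N+1 entries (b0..bN), a has N entries (a1..aN).
    Returns (A, B, c, d) with c_i = b_i - a_i*b0, as in Eq. (9B.75).
    """
    N = len(a)
    A = [[-a[j] if i == 0 else (1.0 if j == i - 1 else 0.0) for j in range(N)]
         for i in range(N)]
    B = [1.0] + [0.0] * (N - 1)
    c = [b[i + 1] - a[i] * b[0] for i in range(N)]
    return A, B, c, b[0]

def simulate(A, B, c, d, x):
    """Iterate q[n+1] = A q[n] + B x[n], y[n] = c q[n] + d x[n] from rest."""
    N = len(B)
    q = [0.0] * N
    y = []
    for xn in x:
        y.append(sum(ci * qi for ci, qi in zip(c, q)) + d * xn)
        q = [sum(A[i][j] * q[j] for j in range(N)) + B[i] * xn for i in range(N)]
    return y

def simulate_diffeq(b, a, x):
    """Direct difference equation: y[n] = -sum a_i y[n-i] + sum b_i x[n-i]."""
    y = []
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(len(b)) if n - i >= 0)
        acc -= sum(a[i - 1] * y[n - i] for i in range(1, len(a) + 1) if n - i >= 0)
        y.append(acc)
    return y

# Hypothetical coefficients, just to exercise the construction:
b, a = [1.0, 0.5, 0.25], [-0.9, 0.2]
x = [1.0] + [0.0] * 9            # unit pulse
A, B, c, d = tf_to_state(b, a)
print(simulate(A, B, c, d, x))
```

Both routes give the same unit-pulse response, which is exactly what the derivation above guarantees.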
Discrete-time Response

Once we have the equations in state-variable form we can then obtain the discrete-time response. We now define the discrete-time transition matrix:

φ[n] = Aⁿ

so that the solution of the state equations is:

y[n] = cᵀφ[n]q(0) + Σᵢ₌₀ⁿ⁻¹ cᵀφ[n−i−1]b x[i] + d x[n]  (9B.80)

This is the expected form of the output response. For the states, it can be seen that the ZIR and ZSR for a discrete-time state-variable system are:

q_ZIR[n] = φ[n]q(0)

q_ZSR[n] = Σᵢ₌₀ⁿ⁻¹ φ[n−i−1]b x[i]  (9B.81)

In the z-domain, taking the z-transform of q[n+1] = Aq[n] + bx[n] gives zQ(z) − zq(0) = AQ(z) + bX(z). Therefore:

Q(z) = (zI − A)⁻¹zq(0) + (zI − A)⁻¹bX(z)  (9B.83)

Similarly:

Y(z) = zcᵀ(zI − A)⁻¹q(0) + [cᵀ(zI − A)⁻¹b + d]X(z)  (9B.84)

For the transfer function, we put all initial conditions q(0) = 0. Therefore, the discrete-time transfer function in terms of state-variable quantities is:

H(z) = cᵀ(zI − A)⁻¹b + d  (9B.85)

To get the unit-pulse response, we revert to Eq. (9B.80), set the initial conditions to zero and apply a unit pulse:

h[0] = d
h[n] = cᵀφ[n−1]b,  n = 1, 2, …  (9B.86)

or, the unit-pulse response in terms of state-variable quantities:

h[0] = d
h[n] = cᵀAⁿ⁻¹b,  n = 1, 2, …  (9B.87)
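The recursion q[n+1] = Aq[n] + bx[n], y[n] = cᵀq[n] is trivially iterated on a computer. As a sketch, the Python fragment below steps the system of the following worked example (state matrices A = [[0, 1], [−1/6, 5/6]], b = [0, 1]ᵀ, c = [−1, 5], as read off its block diagram) with exact rational arithmetic, and compares against the closed-form step response derived there:

```python
from fractions import Fraction as F

# State matrices assumed from the worked example's block diagram:
A = [[F(0), F(1)], [F(-1, 6), F(5, 6)]]
b = [F(0), F(1)]
c = [F(-1), F(5)]

def step_response(q0, n_max):
    """Iterate the state equations directly for a unit-step input x[n] = u[n]."""
    q = list(q0)
    ys = []
    for n in range(n_max + 1):
        ys.append(c[0]*q[0] + c[1]*q[1])          # y[n] = c^T q[n]
        x = F(1)                                   # unit step input
        q = [A[0][0]*q[0] + A[0][1]*q[1] + b[0]*x,
             A[1][0]*q[0] + A[1][1]*q[1] + b[1]*x]
    return ys

y = step_response([F(2), F(3)], 10)
# Closed form from the example: y[n] = 12 - 2(1/3)^n + 3(1/2)^n
for n, yn in enumerate(y):
    assert yn == 12 - 2*F(1, 3)**n + 3*F(1, 2)**n
print(y[:3])
```

Exact rational arithmetic makes the comparison with the closed form an equality test rather than a floating-point tolerance check.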
Example

[Block diagram: input x[n] feeds the summing junction producing q₂[n+1]; two delay elements D produce q₂[n] and q₁[n]; feedback gains 5/6 and −1/6; the output y[n] is formed with gains 5 and −1.]

We would like to find the output y[n] if the input is x[n] = u[n] and the initial conditions are q₁(0) = 2 and q₂(0) = 3.

From the block diagram, the state equations are:

q[n+1] = [   0     1  ] q[n] + [0] x[n]  (9B.88)
         [ −1/6   5/6 ]        [1]

and:

y[n] = [−1  5] q[n]  (9B.89)
To find φ[n] = Aⁿ, diagonalise A. If A = UΛU⁻¹, then:

A² = AA = UΛU⁻¹UΛU⁻¹ = UΛ²U⁻¹
⋮
Aⁿ = UΛⁿU⁻¹  (9B.90)

The characteristic equation is:

|λI − A| = |  λ      −1    |
           | 1/6   λ − 5/6 | = λ² − (5/6)λ + 1/6 = (λ − 1/3)(λ − 1/2) = 0  (9B.91)

so:

Λ = [ 1/3    0  ]  (9B.92)
    [  0   1/2  ]

For λ₁ = 1/3, solving (A − λ₁I)x = 0 gives u₁ = [3, 1]ᵀ, and for λ₂ = 1/2:

[ −1/2    1  ] [x₁]   [0]                  [2]
[ −1/6   1/3 ] [x₂] = [0],  so choose u₂ = [1]  (9B.94)

Therefore:

U = [ 3  2 ]  (9B.95)
    [ 1  1 ]

U⁻¹ = adj U / |U| = [  1  −2 ]  (9B.96)
                    [ −1   3 ]

and:

φ[n] = UΛⁿU⁻¹
     = [ 3  2 ] [ (1/3)ⁿ     0    ] [  1  −2 ]
       [ 1  1 ] [    0    (1/2)ⁿ  ] [ −1   3 ]
     = [ 3(1/3)ⁿ − 2(1/2)ⁿ    −6(1/3)ⁿ + 6(1/2)ⁿ ]
       [  (1/3)ⁿ − (1/2)ⁿ     −2(1/3)ⁿ + 3(1/2)ⁿ ]
The ZSR of the output is:

y_ZSR[n] = Σᵢ₌₀ⁿ⁻¹ cᵀφ[n−i−1]b x[i]  (9B.99)

We have:

cᵀφ[n−i−1]b = [−1  5] [ 3(1/3)ⁿ⁻ⁱ⁻¹ − 2(1/2)ⁿ⁻ⁱ⁻¹    −6(1/3)ⁿ⁻ⁱ⁻¹ + 6(1/2)ⁿ⁻ⁱ⁻¹ ] [0]
                      [  (1/3)ⁿ⁻ⁱ⁻¹ − (1/2)ⁿ⁻ⁱ⁻¹     −2(1/3)ⁿ⁻ⁱ⁻¹ + 3(1/2)ⁿ⁻ⁱ⁻¹ ] [1]
            = 6(1/3)ⁿ⁻ⁱ⁻¹ − 6(1/2)ⁿ⁻ⁱ⁻¹ − 10(1/3)ⁿ⁻ⁱ⁻¹ + 15(1/2)ⁿ⁻ⁱ⁻¹
            = −4(1/3)ⁿ⁻ⁱ⁻¹ + 9(1/2)ⁿ⁻ⁱ⁻¹  (9B.100)

With x[i] = u[i] = 1:

y_ZSR[n] = Σᵢ₌₀ⁿ⁻¹ [−4(1/3)ⁿ⁻¹⁻ⁱ + 9(1/2)ⁿ⁻¹⁻ⁱ]
         = −4(1/3)ⁿ⁻¹ Σᵢ₌₀ⁿ⁻¹ 3ⁱ + 9(1/2)ⁿ⁻¹ Σᵢ₌₀ⁿ⁻¹ 2ⁱ
         = −4(1/3)ⁿ⁻¹ (3ⁿ − 1)/2 + 9(1/2)ⁿ⁻¹ (2ⁿ − 1)
         = 6(1/3)ⁿ − 18(1/2)ⁿ + 12  (9B.101)

The ZIR of the output is:

y_ZIR[n] = cᵀφ[n]q(0) = −8(1/3)ⁿ + 21(1/2)ⁿ

so the total response is:

y[n] = 12 − 2(1/3)ⁿ + 3(1/2)ⁿ,  n ≥ 0  (9B.102)
As a check, we can find y[n] in the z-domain. From Eq. (9B.84), with X(z) = z/(z − 1) for the unit-step input:

Y(z) = zcᵀ(zI − A)⁻¹q(0) + [cᵀ(zI − A)⁻¹b + d]X(z)

where:

(zI − A)⁻¹ = 1/(z² − (5/6)z + 1/6) [ z − 5/6    1 ]
                                   [  −1/6     z ]

Evaluating the two terms and expanding in partial fractions gives:

Y(z) = [ −8z/(z − 1/3) + 21z/(z − 1/2) ] + [ 6z/(z − 1/3) − 18z/(z − 1/2) + 12z/(z − 1) ]
     = 12z/(z − 1) − 2z/(z − 1/3) + 3z/(z − 1/2)  (9B.103)

Therefore:

y[n] = 12 − 2(1/3)ⁿ + 3(1/2)ⁿ,  n ≥ 0

Obviously, the two solutions obtained using different techniques are in perfect agreement.
Summary
The similarity transform uses knowledge of a system's eigenvalues and eigenvectors to reduce a high-order coupled system to a simple diagonal form.
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB®, Prentice-Hall, 1997.
Exercises
1.
Find Λ , U and V for the system described by:
q̇ = [  0    1    0 ]     [  0    1 ]
    [  0    0    1 ] q + [  2   −7 ] x
    [ −6  −11   −6 ]     [ 20   −5 ]

y = [0  0  1] q
Note: U and V should be found directly (i.e. do not find V by taking the inverse of U). You can then verify your solution by checking that VU = I.
2.
Consider the system:
q̇₁ = −q₁ + x
q̇₂ = q₁ − 2q₂ + x

with:

q(0) = [0]
       [1]
(a) Find the system eigenvalues and find the zero input response of the system.
(b) If the system is given a unit-step input with the same initial conditions, find
q t . (Use the resolvent matrix to obtain the time solution). What do you
notice about the output?
3.
A system employing state-feedback is described by the following equations:
q̇ = [  0   1   0 ]     [0]
    [  0   0   1 ] q + [0] x(t)
    [ −5  −7  −3 ]     [1]

y = [2  4  3] q

where the state-feedback law is:

x(t) = r(t) − [k₁  k₂  k₃] q
(iii) Obtain values for k₁, k₂ and k₃ to place the closed-loop system poles at the desired locations.
(iv) Find the steady-state value of the output due to a unit-step input.
4.
For the difference equation:
(a) Show that H(z) = (3z³ − (5/4)z² − (5/8)z) / (z³ − (1/4)z² − (1/4)z + 1/16)
(d) Draw a block diagram of the state-variable description found in (b), and use
block-diagram reduction to find H z .
5.
Find y5 and y10 for the answer to Q4b given:
q(0) = [1, 1, 0]ᵀ   and   x[n] = { 0,  n < 0
                                 { 1,  n ≥ 0
(a) by calculating qn and yn for n 0, 1, , 10 directly from the state
equations.
(b) by using the fundamental matrix and finding y5 and y10 directly.
6.
A discrete-time system with:
H(z) = k(z − b) / [z(z − a)(z − c)]
Overview

Digital signal processing is becoming prevalent throughout engineering. We have digital audio equipment (CDs, MP3s), digital video (MPEG2 and MPEG4, DVD, digital TV), and digital phones (fixed and mobile). An increasing number (billions!) of embedded systems exist that rely on a digital computer (normally a microcontroller). They take input signals from the real analog world, convert them to digital signals, process them digitally, and produce outputs that are again suitable for the real analog world. (Think of the computers controlling any modern form of transport – car, plane, boat – or those that control nearly all industrial processes.) Our motivation is to extend our existing frequency-domain analytical techniques to the "digital world".

Our reason for hope that this can be accomplished is the fact that a signal's samples can convey the complete information about a signal (if Nyquist's criterion is met). We should be able to turn those troublesome continuous-time integrals into simple summations – a task easily carried out by a digital computer.
[Figure F.1 – a time-limited signal x(t) and its spectrum, which extends beyond ±2B.]

Since the signal is strictly time-limited (it only exists for a finite amount of time), its spectrum must be infinite in extent. We therefore cannot choose a sample rate high enough to satisfy Nyquist's criterion (and therefore prevent aliasing). However, in practice we normally find that the spectral content of signals drops off at high frequencies, so that the signal is essentially band-limited to B:

[Figure F.2 – a time-limited signal x(t), amplitude A, which is also essentially band-limited: its spectrum X(f), peak C, is confined to −B ≤ f ≤ B.]
We will now assume that Nyquist's criterion is met if we sample the time-domain signal at a sample rate f_s = 2B. The ideally sampled waveform is:

x_s(t) = x(t) Σ_k δ(t − kT_s)
       = Σ_k x[k] δ(t − kT_s)  (F.1)

and the corresponding spectrum is:

X_s(f) = X(f) * f_s Σ_n δ(f − nf_s)
       = f_s Σ_n X(f − nf_s)  (F.2)
The sampled waveform and its spectrum are shown graphically below:

[Figure F.3 – the sampled signal x_s(t) (M samples with spacing T_s, impulse weights x(kT_s) = x[k]) and its spectrum X_s(f), which is periodic, has peak f_sC, and extends to B = f_s/2.]

We are free to take as many samples as we like, so long as the M samples of spacing T_s encompass the whole signal in the time-domain. For a one-off waveform, we can also sample past the extent of the signal – a process known as zero padding.
Taking the Fourier transform of the sampled waveform:

X_s(f) = ∫ x_s(t) e^(−j2πft) dt
       = ∫ Σ_k x[k] δ(t − kT_s) e^(−j2πft) dt  (F.3)

Using the sifting property of the impulse function, this simplifies into the definition of the discrete-time Fourier transform (DTFT):

X_s(f) = Σ_k x[k] e^(−j2πfkT_s)  (F.4)
As shown in Figure F.3, the DTFT is periodic with period f_s = 2B, i.e. the spectrum repeats every f_s. To compute the spectrum digitally, we sample it at N points over one period. The frequency spacing of a DFT is therefore:

f₀ = f_s / N  (F.5)

and the time-domain sample spacing of a DFT corresponds to:

T₀ = NT_s  (F.6)

Ideally sampling the DTFT spectrum gives:

X_ss(f) = X_s(f) Σ_n δ(f − nf₀)
        = Σ_n X_s(nf₀) δ(f − nf₀)  (F.7)
n
Ideal samples of a
DTFT spectrum
X ss ( f )
fs C
- f s /2 0 f s /2 f
sample spacing is f0
N samples over one period
Figure F.4
1
xss t xs t t kT0
f0 k
T x t kT
k
0 s 0 (F.8)
[Figure F.5 – periodic extension of the time-domain signal due to ideally sampling the DTFT spectrum: x_ss(t) repeats every T₀, with M samples in each repeat with spacing T_s.]

If we choose the time-window equal to the extent of the M samples:

T₀ = MT_s  (F.9)

then, since T₀ = NT_s:

NT_s = MT_s   or   N = M  (F.10)
If we do this, then the reconstructed sampled waveform has its repeats next to each other:

[Figure F.6 – x_ss(t) with adjacent repeats every T₀, M samples in each repeat with spacing T_s.]
Returning now to the spectrum in Figure F.4, we need to find the sampled spectrum impulse weights covering just one period, say from 0 ≤ f < f_s. These are:

X_s(nf₀) = Σ_{k=0}^{N−1} x[k] e^(−j2πnf₀kT_s)  (F.12)

Notice how the infinite summation has turned into a finite summation over M = N samples of the waveform, since we know the waveform is zero for all the other values of k.

Now using Eq. (F.5), we have f₀T_s = 1/N. We then have the definition of the discrete Fourier transform (DFT):

X[n] = Σ_{k=0}^{N−1} x[k] e^(−j2πnk/N)  (F.13)

The DFT takes an input vector x[k] (time-spacing unknown – just a function of k) and produces an output vector X[n] (frequency-spacing unknown – just a function of n). It is up to us to interpret the input and output of the DFT.
All we need to consider here is the creation and interpretation of FFT results,
and not the algorithms behind them.
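Although we do not need the algorithms behind the FFT, a direct implementation of the DFT summation of Eq. (F.13) is worth seeing once. A minimal Python sketch (O(N²) and purely illustrative – the FFT computes the identical quantity far more efficiently):

```python
import cmath

def dft(x):
    """Direct evaluation of the DFT: X[n] = sum_k x[k] e^(-j*2*pi*n*k/N)."""
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * n * k / N) for k in range(N))
            for n in range(N)]

# A single complex exponential occupying bin 3 transforms to a weight of N at n = 3
# and (to rounding error) zero everywhere else.
N = 8
x = [cmath.exp(2j * cmath.pi * 3 * k / N) for k in range(N)]
X = dft(x)
print([round(abs(Xn), 6) for Xn in X])
```

This also makes the interpretation question concrete: the list X carries no units – the frequency spacing f₀ and any scaling are supplied by us.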
Creating FFTs
Knowing the background behind the DFT, we can now choose various
parameters in the creation of the FFT to suit our purposes. For example, there
may be a requirement for the FFT results to have a certain frequency
resolution, or we may be restricted to a certain number of samples in the time-
domain and we wish to know the frequency range and spacing of the FFT
output.
The parameters are related by:

f₀ = 1/T₀,   f_s = Nf₀,   N = T₀f_s  (F.14)
Example
An analog signal is viewed on a DSO with a window size of 1 ms. The DSO
takes 1024 samples. What is the frequency resolution of the spectrum, and
what is the folding frequency (half the sample rate)?
The frequency resolution is f₀ = 1/T₀ = 1 kHz. The sample rate is f_s = Nf₀ = 1024 × 1000 = 1.024 MHz, so the folding frequency is f_s/2 = 512 kHz.
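The same bookkeeping, using the relationships of Eq. (F.14), in a short Python sketch:

```python
# FFT parameter bookkeeping for the DSO example above
# (window T0 = 1 ms, N = 1024 samples).
T0 = 1e-3          # time window (s)
N = 1024           # number of samples
f0 = 1 / T0        # frequency resolution (Hz)
fs = N * f0        # sample rate (Hz)
folding = fs / 2   # folding (Nyquist) frequency (Hz)
print(f0, fs, folding)
```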
Example

The following MATLAB® code computes the FFT of a signal g (assumed already defined as a vector of N samples) taken at a sample rate of 1 MHz:

% Sample rate
fs=1e6;
Ts=1/fs;
% Number of samples
N=1000;
% Time window
T0=N*Ts;
f0=1/T0;
% Time and frequency vectors
t=0:Ts:T0-Ts;
f=-fs/2:f0:fs/2-f0;
% FFT of g, with DC in the centre
G=fftshift(fft(g));
The frequency resolution of the FFT in this case will be f 0 1 kHz and the
output will range from –500 kHz to 499 kHz. Note carefully how the time and
frequency vectors were specified so that the last value does not coincide with
the first value of the second periodic extension or spectrum repeat.
Interpreting FFTs

The output of the FFT can be interpreted in four ways, depending on how we interpret the time-domain values that we feed into it.

Case 1: If the FFT input, x[n], is the weights of the impulses of an ideally sampled time-limited "one-off" waveform, then we know the FT is a periodic repeat of the original unsampled waveform's spectrum. The DFT gives the value of the FT at frequencies nf₀ for the first spectral repeat. This interpretation comes directly from Eq. (F.12).

[Figure F.7 – the FFT output X[n] interpreted (by a human) as samples of the FT X_s(f) over −f_s/2 ≤ f < f_s/2.]

Case 2: Consider the case where the FFT input, x[n], is the weights of the impulses of an ideally sampled periodic waveform over one period T₀. According to Eq. (F.8), the DFT gives the FT impulse weights for the first spectral repeat, if the time-domain waveform were scaled by T₀. To get the FT, we therefore have to scale the DFT by f₀ and recognise that the spectrum is periodic.

[Figure F.8 – the FFT output X[n], scaled by f₀, interpreted as the weights of the FT impulses of the periodic waveform over −f_s/2 ≤ f < f_s/2.]
Case 3: If we ideally sample the "one-off" waveform at intervals T_s, we get a signal corresponding to Case 1. The sampling process creates periodic repeats of the original spectrum, scaled by f_s. To undo the scaling and spectral repeats caused by sampling, we should multiply the Case 1 spectrum by T_s and filter out all periodic repeats except for the first. The DFT gives the value of the FT of the sampled waveform at frequencies nf₀ for the first spectral repeat. This interpretation comes directly from Eq. (F.12).

[Figure F.9 – interpretation of the FFT of a one-off waveform: the real signal x(t) is ideally sampled to give x_s(t) and the discrete signal x[n], which is input to the computer; scaling the FFT output by T_s and ideally reconstructing recovers samples of the true FT X(f).]
Case 4: To create the FFT input for this case, we must ideally sample the continuous periodic signal at intervals T_s to give x[n]. The sampling process creates periodic repeats of the original spectrum, scaled by f_s. We are now essentially equivalent to Case 2. To undo the scaling and spectral repeats caused by sampling, we should multiply the Case 2 spectrum by T_s and filter out all periodic repeats except for the first. According to Eq. (F.8), the DFT gives the FT impulse weights for the first spectral repeat. All we have to do is scale the DFT output by f₀T_s = 1/N to obtain the true FT impulse weights.

[Figure F.10 – interpretation of the FFT of a periodic waveform: the real periodic signal x(t) is ideally sampled to give x_s(t) and the discrete signal x[n], which is input to the computer; scaling the FFT output by 1/N recovers the weights of the true FT impulses.]
Overview

The phase-locked loop (PLL) is an important building block for many electronic systems. PLLs are used in frequency synthesisers, demodulators, clock multipliers and many other communications and electronic applications.

For DSB-SC modulation, a constant phase error will cause attenuation of the output signal. Unfortunately, the phase error may vary randomly with time. A frequency error will cause a "beating" effect (the output of the demodulator is the original message multiplied by a low-frequency sinusoid). This is a serious type of distortion.

For SSB-SC modulation, a phase error in the local carrier gives rise to phase distortion in the demodulator output. Phase distortion is generally not a problem with voice signals because the human ear is somewhat insensitive to phase distortion – it changes the quality of the speech but the speech still remains intelligible. In video signals and data transmission, phase distortion is usually intolerable. A frequency error causes the output to have a component which is slightly shifted in frequency. For voice signals, a frequency shift of 20 Hz is tolerable.
Carrier Regeneration

The human ear can tolerate a drift between the carriers of up to about 30 Hz. Quartz crystals can be cut for the same frequency at the transmitter and receiver, and are very stable. However, at high frequencies (> 1 MHz), even quartz-crystal performance may not be adequate. In such a case, a carrier, or pilot, is transmitted at a reduced level (usually about −20 dB) along with the sidebands, to enable the receiver to generate a local oscillator in frequency and phase synchronism with the transmitter.

[Figure P.1 – the baseband spectrum (−B to B) and the transmitted spectrum: sidebands from f_c to f_c + B (and the mirror at negative frequencies) with a pilot tone at ±f_c.]
In demodulation applications, the PLL is primarily used to track the phase and frequency of the carrier of an incoming signal – that is, to regenerate a local carrier. It is therefore a useful device for synchronous demodulation of AM signals with a suppressed carrier or with a little carrier (the pilot). In the presence of strong noise, the PLL is more effective than conventional techniques.

[Figure P.2 – PLL block diagram: the input sinusoid v_i(t) (pilot tone) and the output sinusoid v_o(t) (local carrier) feed a phase detector; the phase difference passes through a loop filter to give v_in(t), which drives the VCO producing v_o(t).]
It can be seen that the PLL is a feedback system. In a typical feedback system, the signal fed back tends to follow the input signal. If the signal fed back is not equal to the input signal, the difference (the error) will change the signal fed back until it is close to the input signal. A PLL operates on a similar principle, except that the quantity fed back and compared is not the amplitude, but the phase of a sinusoid. The voltage-controlled oscillator (VCO) adjusts its frequency until its phase angle comes close to the angle of the incoming signal. At this point, the frequency and phase of the two signals are in synchronism (except for a difference of 90°, as will be seen later).

[Figure P.3 – the VCO: an input voltage v_in(t) produces an output sinusoid v_o(t).]

[Figure P.4 – the VCO characteristic: output frequency f_i versus input voltage v_in, a straight line of slope k_v passing through f_o at v_in = 0.]
The horizontal axis is the applied input voltage, and the vertical axis is the
frequency of the output sinusoid. The amplitude of the output sinusoid is fixed.
We now seek a model of the VCO that treats the output as the phase of the sinusoid rather than the sinusoid itself. The frequency f_o is the nominal frequency of the VCO (the frequency of the output for no applied input). From Figure P.4:

f_i(t) = f_o + k_v v_in(t)  (P.1)
To relate this to the phase of the sinusoid, we need to generalise our definition of phase. In general, the frequency of a sinusoid cos θ(t) = cos(2πf₁t) is proportional to the rate of change of the phase angle:

dθ/dt = 2πf₁

f₁ = (1/2π) dθ/dt  (P.2)

We are used to having a constant f₁ for a sinusoid's frequency. The instantaneous phase of a sinusoid is given by:

θ(t) = ∫₀ᵗ 2πf₁ dλ  (P.3)
     = 2πf₁t  (P.4)

For the VCO, substituting Eq. (P.1) for the instantaneous frequency:

θ(t) = ∫₀ᵗ 2π[f_o + k_v v_in(λ)] dλ = 2πf_o t + 2πk_v ∫₀ᵗ v_in(λ) dλ  (P.5)

The VCO output can therefore be written as:

v_o(t) = A_o cos(ω_o t + θ_o(t))  (P.6)

where:

θ_o(t) = 2πk_v ∫₀ᵗ v_in(λ) dλ  (P.7)

This equation expresses the relationship between the input to the VCO and the phase of the resulting output sinusoid.
Example

Suppose a DC voltage of V_DC volts is applied to the VCO input. Then the output phase is:

θ_o(t) = 2πk_v ∫₀ᵗ V_DC dλ = 2πk_v V_DC t

The resulting sinusoidal output of the VCO can then be written as:

v_o(t) = A_o cos(2πf_o t + 2πk_v V_DC t) = A_o cos(2π(f_o + k_v V_DC)t)

In other words, a constant DC voltage applied to the input of the VCO will produce a sinusoid of fixed frequency, f₁ = f_o + k_v V_DC.

When used in a PLL, the VCO input should eventually be a constant voltage (the PLL has locked onto the phase, but the VCO needs a constant input voltage to output the tracked frequency).
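Eq. (P.7) can be checked numerically: integrating a constant VCO input voltage yields a linear phase ramp, i.e. a constant frequency offset of k_v·V_DC. A small Python sketch, with illustrative values for k_v and V_DC (they are made up, not from the notes):

```python
import math

kv = 50.0      # VCO gain, Hz per volt (illustrative value)
VDC = 0.2      # constant input voltage, V (illustrative value)
dt = 1e-5      # integration step (s)

# theta_o(t) = 2*pi*kv * integral of v_in (Eq. P.7), by rectangular summation
theta = 0.0
t = 0.0
for _ in range(100000):        # integrate up to t = 1 s
    theta += 2 * math.pi * kv * VDC * dt
    t += dt

# The accumulated phase corresponds to a frequency offset of kv*VDC:
f_offset = theta / (2 * math.pi * t)
print(f_offset)
```

With these values the recovered offset is k_v·V_DC = 10 Hz, matching f₁ = f_o + k_v·V_DC above.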
Phase Detector

The phase detector is a device that produces the phase difference between two input sinusoids:

[Figure P.5 – the phase detector: two input sinusoids v_i(t) and v_o(t) produce an output proportional to their phase difference.]

One simple implementation of a phase detector is a four-quadrant multiplier:

[Figure P.6 – a four-quadrant multiplier forming x(t) = v_i(t)v_o(t).]
To see why a multiplier can be used, let the incoming signal of the PLL, with constant frequency and phase, be:

v_i = A_i sin(ω_i t + φ_i)  (P.8)

If we let:

θ_i(t) = (ω_i − ω_o)t + φ_i  (P.9)

then the incoming signal can be written in terms of the VCO's nominal frequency:

v_i = A_i sin(ω_o t + θ_i(t))  (P.10)

Note that the incoming signal is in phase quadrature with the VCO output (i.e. one is a sine, the other a cosine). This comes about due to the way the multiplier works as a phase comparator, as will be shown shortly. Thus, the PLL will lock onto the incoming signal but it will have a 90° phase difference.

The multiplier output is:

x = v_i v_o
  = A_i sin(ω_o t + θ_i) A_o cos(ω_o t + θ_o)
  = (A_i A_o / 2)[sin(θ_i − θ_o) + sin(2ω_o t + θ_i + θ_o)]  (P.11)

If we now look forward in the PLL block diagram, we can see that this signal passes through the "loop filter". If we assume that the loop filter is a lowpass filter that adequately suppresses the high frequency term of the above equation (not necessarily true in all cases!), then the phase detector output can be written as:

x = (A_i A_o / 2) sin(θ_i − θ_o) = K₀ sin(θ_i − θ_o)  (P.12)

Thus, the multiplier (together with the lowpass loop filter) produces an output voltage that is proportional to the sine of the phase difference between the two input sinusoids.
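The suppression of the double-frequency term in Eq. (P.11) is easy to verify numerically: averaging the multiplier output over one carrier period leaves only the K₀ sin(θ_i − θ_o) term. A Python sketch with A_i = A_o = 1 (so K₀ = 1/2):

```python
import math

def phase_detector_dc(theta_i, theta_o, n=10000):
    """Average of sin(wt + theta_i)*cos(wt + theta_o) over one carrier period.

    The 2*w term of Eq. (P.11) averages to zero over a full period,
    leaving (1/2)*sin(theta_i - theta_o), i.e. K0*sin(theta_e) with Ai = Ao = 1.
    """
    total = 0.0
    for k in range(n):
        wt = 2 * math.pi * k / n
        total += math.sin(wt + theta_i) * math.cos(wt + theta_o)
    return total / n

print(phase_detector_dc(0.3, 0.1))   # close to 0.5*sin(0.2)
```

Note that the output depends on the sine of the error, so the detector is only approximately linear for small phase differences – the assumption used to linearise the PLL model below.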
PLL Model

A model of the PLL, in terms of phase rather than voltages, is shown below:

[Figure P.7 – PLL phase model: θ_i(t) and θ_o(t) enter a subtractor whose output θ_e(t) passes through the phase detector nonlinearity K₀ sin(·); the result x(t) passes through the loop filter h(t) to give v_in(t), which drives the VCO model 2πk_v ∫₀ᵗ(·)dλ to produce θ_o(t).]

If the PLL is close to "lock", i.e. its frequency and phase are close to that of the incoming signal, then we can linearise the model above by making the approximation sin θ_e ≈ θ_e. With a linear model, we can convert to the s-domain:

[Figure P.8 – linearised s-domain PLL model: Θ_i(s) and Θ_o(s) enter a subtractor; the error Θ_e(s) passes through the phase detector gain K₀, the loop filter H(s), and the VCO 2πk_v/s to produce Θ_o(s).]

Note that the integrator in the VCO in the time-domain becomes 1/s in the block diagram thanks to the integration property of the Laplace transform.
[Figure P.9 – block-diagram reduction of Figure P.8.]

Θ_o(s)/Θ_i(s) = 2πk_v K₀ H(s) / (s + 2πk_v K₀ H(s))

This is the closed-loop transfer function relating the VCO output phase and the incoming signal's phase.
Loop Filter

Suppose the incoming signal has both a frequency offset and a phase offset from the VCO's nominal values, so that:

θ_i(t) = (ω_i − ω_o)t + φ_i  (P.13)

then we want the steady-state phase error to vanish:

lim_{t→∞} θ_e(t) = 0  (P.14)

The analysis is best performed in the s-domain. The Laplace transform of the input signal is:

Θ_i(s) = (ω_i − ω_o)/s² + φ_i/s  (P.15)

The error is:

Θ_e(s) = Θ_i(s) − Θ_o(s)
       = [1 − T(s)] Θ_i(s)
       = s/(s + 2πk_v K₀ H(s)) · Θ_i(s)
       = s/(s + 2πk_v K₀ H(s)) · [(ω_i − ω_o)/s² + φ_i/s]  (P.16)

Applying the final value theorem:

lim_{t→∞} θ_e(t) = lim_{s→0} sΘ_e(s)
                 = lim_{s→0} [ (ω_i − ω_o)/(s + 2πk_v K₀ H(s)) + φ_i s/(s + 2πk_v K₀ H(s)) ]
                 = (ω_i − ω_o) / (2πk_v K₀ lim_{s→0} H(s))  (P.17)

For this to be zero, we require:

lim_{s→0} H(s) = ∞  (P.18)

One simple filter that meets this requirement is the "PI" (proportional-plus-integral) filter:

H(s) = a + b/s  (P.19)

Obviously, the constants a and b must be chosen to meet other system requirements, such as overshoot and bandwidth.
Definitions
Symbol Description
T₀            time-window
f₀ = 1/T₀     discrete frequency spacing
f_s           sample rate
T_s = 1/f_s   sample period
N             number of samples
Creating FFTs

Given                   Choose                  Then
T₀ – time-window        f_s – sample rate       N = T₀f_s
                        N – number of samples   f_s = Nf₀
N – number of samples   f_s – sample rate       T₀ = NT_s
                        T₀ – time-window        f_s = Nf₀
f_s – sample rate       N – number of samples   T₀ = NT_s
                        T₀ – time-window        N = T₀f_s
MATLAB® Code
Code Description
% Sample rate This code starts off with a given sample rate, and
fs=1e6;
Ts=1/fs; chooses to restrict the number of samples to 1024 for
computational speed.
% Number of samples
N=1024;
4. Periodic continuous waveform (period T₀): the FFT input x[n] is the values of the waveform over one period at intervals T_s; multiply X[n] by 1/N, so that X[n]/N gives the weights of the impulses at frequencies nf₀ in the true FT.
Graphing
Code Description
figure(1); Creates a new graph, titled “Figure 1”.
plot(t,y); Graphs y vs. t on the current figure.
subplot(211); Creates a 2 × 1 grid of graphs in the current figure and makes the top one the current axes.
title('Complex poles'); Puts a title 'Complex poles' on the current figure.
xlabel('Time (s)'); Puts the label 'Time (s)' on the x-axis.
ylabel('y(t)'); Puts the label 'y(t)' on the y-axis.
semilogx(w,H,'k:'); Makes a graph with a logarithmic x-axis, and uses a
black dotted line ('k:').
plot(200,-2.38,'kx'); Plots a point (200,-2.38) on the current figure,
using a black cross ('kx').
axis([1 1e5 -40 40]); Sets the range of the x-axis from 1 to 1e5, and the
range of the y-axis from –40 to 40. Note all this
information is stored in the vector [1 1e5 -40 40].
grid on; Displays a grid on the current graph.
hold on; Next time you plot, it will appear on the current graph
instead of a new graph.
Definitions
Symbol Description
aij Element of a matrix. i is the row, j is the column.
A = [a₁₁ a₁₂ a₁₃; a₂₁ a₂₂ a₂₃; a₃₁ a₃₂ a₃₃]   A is the representation of the matrix with elements aᵢⱼ: A = [aᵢⱼ].

x = [x₁; x₂; x₃]   x is a column vector with elements xᵢ.
0 = [0 0 0; 0 0 0; 0 0 0]        Null matrix, every element is zero.

I = [1 0 0; 0 1 0; 0 0 1]        Identity matrix, diagonal elements are one.

λI = [λ 0 0; 0 λ 0; 0 0 λ]       Scalar matrix.

Λ = [λ₁ 0 0; 0 λ₂ 0; 0 0 λ₃]     Diagonal matrix, aᵢⱼ = 0 for i ≠ j.
Multiplication
Multiplication Description
Z = kY    Multiplication by a scalar: zᵢⱼ = k·yᵢⱼ

z = Ax    Multiplication by a vector: zᵢ = Σ_{k=1}^{n} aᵢₖxₖ

Z = AB    Matrix multiplication: zᵢⱼ = Σ_{k=1}^{n} aᵢₖbₖⱼ
A⁻¹ = adj A / |A|    Reciprocal of A: A⁻¹A = AA⁻¹ = I.
                     Only exists if A is square and non-singular.
                     The formula is only used for 3 × 3 matrices or smaller.
Linear Equations
Terminology Description
a₁₁x₁ + a₁₂x₂ + a₁₃x₃ = b₁
a₂₁x₁ + a₂₂x₂ + a₂₃x₃ = b₂    Set of linear equations written explicitly.
a₃₁x₁ + a₃₂x₂ + a₃₃x₃ = b₃

[a₁₁ a₁₂ a₁₃; a₂₁ a₂₂ a₂₃; a₃₁ a₃₂ a₃₃][x₁; x₂; x₃] = [b₁; b₂; b₃]    Set of linear equations written using matrix elements.
Ax b Set of linear equations written using matrix notation.
x A 1b Solution to set of linear equations.
Eigenvalues
Equations Description
Ax = λx         λ are the eigenvalues, x are the column eigenvectors.

|A − λI| = 0    Finding eigenvalues.
Answers
1A.1
(a) g(t) = Σ_k sin(πt) rect(t − 1/2 − 2k), T₀ = 2, P = 1/4
(b) g(t) expressed as a sum of four rect pulse trains, T₀ = 3, P = 16/9
(c) g(t) = Σ_k (−1)ᵏ e^(−(t−10k)/10) rect((t − 5 − 10k)/10), T₀ = 20, P = (1 − e⁻²)/2
(d) g(t) = Σ_k 2cos(200πt) rect((t − 4k)/2), T₀ = 4, P = 1
1A.2
(a) g(t) expressed as a sum of four rect pulses, E = 8⅓
(d) g(t) expressed as a difference of rect pulses, E = 19/5
1A.3
1A.4
Let λ = (t − t₀)/T. Then:

∫ f(t) δ((t − t₀)/T) dt = ∫ f(Tλ + t₀) δ(λ) T dλ   if T > 0

or:

∫ f(t) δ((t − t₀)/T) dt = −∫ f(Tλ + t₀) δ(λ) T dλ   if T < 0

so that, for all T:

∫ f(t) δ((t − t₀)/T) dt = |T| ∫ f(Tλ + t₀) δ(λ) dλ = |T| f(t₀)
1A.5
(a) X = 27∠15° (b) X = 5∠10° (c) X = 100∠−45° (d) 8√3, −8
(e) x(t) = 29.15 cos(ωt − 22.17°) (f) x(t) = 100 cos(ωt − 60°) (g) X = 55.96∠−59.64°
1B.1
[Plots of x(t) versus time (0–3 s) for parts a) and b), cases (i)–(iv).]
1B.2
a) b)
[Stem plots of y₁[n] and y₂[n] for n = −2, …, 5.]
1B.3
[Stem plot of a[n] for n = −2, …, 5.]
1B.4
y[n] = { 1,  n = 1, 3, 5, …
       { 0,  all other n
1B.5
a) y[n] = y[n − 1] + y[n − 2]
1B.6
[Block diagrams for (i) and (ii), built from delay elements D, adders and gains.]
1B.7
(i) y[n] = y[n − 1] − 2y[n − 2] + 3x[n − 1]
1B.8
(a)
(b) y[−1] = 1, y[0] = 0.75, y[1] = 1.375, y[2] = 1.0625, y[3] = 1.21875
1B.9
(i) (a) a1 a 4 0 (b) a5 0
1B.10
y₁[0] = 1, y₁[1] = 3, y₁[2] = 7, y₁[3] = 15, y₁[4] = 31, y₁[5] = 63
1B.11
yn 2 n n 2 2 n 4 n 6
1B.12
[Stem plots of the responses for parts (i)–(iii).]
1B.14
The describing differential equation is:

d²v_o/dt² + (R/L)(dv_o/dt) + (1/LC)v_o = (1/LC)v_i

[Plot of the response y(t) over 0–20 s.]

Use T = 0.1 so that the waveform appears smooth (any smaller value has a minimal effect on the solution).
1B.15
(a) dy/dt + Ky = Kx
(b) h(t) = Ke^(−Kt)u(t)
(c) y(t) = 0.75(1 − e^(−0.5(t−4)))u(t − 4)
(d), (e) [Plots of y(t) = 0.75(1 − e^(−0.5(t−4))): the continuous-time solution rising from t = 4 min towards 0.75, and the corresponding numerical solution over 0–20 s.]
1B.16
1B.17
[Sketch on the t-axis from −1 to 5.]
1B.18
The output control signal is smoothed, since it is not changing as rapidly as the
input control signal. It is also delayed in time.
[Plots of x(t), h(t) and y(t) for −1 ≤ t ≤ 1.5 s.]
1B.19
[Stem plots of x[nT], si[nT], h[nT] and y[nT] over 0–6 s.]
Note that the filter effectively removes the added noise; however, it also
introduces a time delay of between three and four samples.
2A.1
[Magnitude and phase spectra for parts (a)–(c).]
2A.2
G = 4∠45°
2A.3
g(t) = 4cos(200πt)
2A.4
1 G
2A.5
(a) Gₙ = (j/4)[sinc((n + 1)/2) − sinc((n − 1)/2)]
(c) Gₙ = (1 − e⁻¹(−1)ⁿ) / (2(0.1 + jnπ))
(d) Gₙ = (1/4)[sinc((n − 8)/2) + sinc((n + 8)/2)]
(e) Gₙ = [sinc(n) − 1]/(j2πn); the special case of n = 0 can be evaluated by applying l'Hôpital's rule or from first principles: G₀ = 1/2.
[Magnitude and phase spectra of each, plotted for −25 ≤ n ≤ 25.]
2A.6
P 4.294 W
2A.7
[Line spectra |Gₙ| for the three pulse trains (peak value 0.5; other line values include 0.2387, 0.1592 and −0.02653).]
(a) P = 73% (b) P = 60% (c) P = 50%
2A.8
n
5 T0 Note: Gn Asinc 2 Asincn
2
2B.1
x(t) = 4cos(2000πt + 30°), P = 8
2B.2
2B.3
5 10 cos3f 5 cos4f
(a)
2f 2
(b) sincf 12 sincf 12
2 2
(c)
1 jf e jf
sinc f cos2f e j 3f
2f 2
2B.4
Hint: e^(−a|t|) = e^(−at)u(t) + e^(at)u(−t)
2B.5
3P 1.5 f
2B.6
This follows directly from the time shift property.
2B.7
G₁(f) = a/(a + j2πf),  G₂(f) = 1/(j2πf) + (1/2)δ(f),  g₁(t)*g₂(t) = (1 − e^(−at))u(t)
2B.8
[Sketch of G₁(f)*G₂(f): impulses of weight Ak at f = ±2f₀ and weight 2Ak at f = 0.]
2B.9
(a) 2 Af 0 sinc2 f 0 t t 0
2B.10
j 2 A sin 2 fT
f
2B.11
Asinc4f cos4f
jf
3A.1
0.2339 0
3A.2
X(f) = A sinc²(f)
3A.3
B = 4
3A.4
By passing the signal through a lowpass filter with 4 kHz cutoff - provided the
original signal contained no spectral components above 4 kHz.
3A.5
Periodicity.
3A.6
(a) 20 Hz, 40 Hz, P = 0.1325 W
(b) g₃(t) = 0.5e^(−jπ/3)e^(j120πt) + 0.5e^(jπ/3)e^(−j120πt), 0.9781
(c)
Harmonic #   Amplitude   Phase (°)
0            1           –
1            3           −66
2            1           −102
3            0.5         −168
4            0.25        −234
Yes.
3A.7
Yes. The flat topped sampling pulses would simply reduce the amplitudes of
the “repeats” of the baseband spectrum as one moved along the frequency axis.
For ideal sampling, with impulses, all “repeats” have the same amplitude. Note
that after sampling the pulses have tops which follow the original waveform.
3A.8
Truncating the 9.25 kHz sinusoid has the effect of convolving the impulse in
the original transform with the transform of the window (a sinc function for a
rectangular window). This introduces “leakage” which will give a spurious
9 kHz component.
3B.1
[Sketch of G(f): imaginary impulse pairs of weight ±j/2, with components at f = ±9, ±10 and ±11 Hz.]
3B.2
[Sketches of G(f): the baseband spectrum (amplitude 0.5, −2 to 2 kHz) at A; the spectrum translated to ±36–40 kHz (amplitudes 0.25–0.5) at B; and the spectrum at C containing both the baseband and the ±36–40 kHz components.]
3B.3
[Block diagram: the received signal splits into two paths; each is lowpass filtered to give L and R respectively, with a cos(2πf_c t) local oscillator used in one path.]
4A.1
a) H(s) = (1/RC)/(s + 1/RC)

b) H(s) = (s + 1/R₁C₁)(s + 1/R₂C₂) / [s² + (1/R₁C₁ + 1/R₂C₂ + 1/R₂C₁)s + 1/R₁R₂C₁C₂]

c) H(s) = (1/2)(s + R/L)/(s + R/2L)

d) H(s) = (1/R₂C₁)(s + R₁/L₁) / [s² + (1/R₂C₁ + R₁/L₁)s + (R₁ + R₂)/(R₂L₁C₁)]
4A.2
a) I(s) = [Li(0) + E(s)] / (sL + R + 1/sC)

b) X(s) = [M(sx(0) + dx(0)/dt) + Bx(0) + 3/s²] / (Ms² + Bs + K)

c) Θ(s) = [J(sθ(0) + dθ(0)/dt) + Bθ(0) + 10/s²] / (Js² + Bs + K)
4A.3
9
f t
5 7 t
a) e cos 3t tan 1
4 2 3
f t cost cos2t
1
b)
3
1 2 15
f t
15
c) t 8t e 2t 7t
40 2 2
4A.4
a) f(∞) = 5/4 b), c) The final value theorem does not apply. Why?
4A.5
T(s) = (1/R₁C₂)s / [(s + 1/R₁C₁)(s + 1/R₂C₂)]
4A.6
y(t) = 5e^(−t)cos(2t + 26.6°)u(t)
4A.7
a) (i) G/(1 + GH) (ii) 1/(1 + GH) (iii) GH/(1 + GH)
4A.8
a) Y/R = (G₁ + G₁G₃H₃ + G₁G₂H₂ + G₁G₂G₃H₂H₃ + G₁G₂ + G₁G₂G₃H₃) / (1 + G₂H₂ + G₁G₃ + G₁G₂G₃H₂ + G₁G₂G₃ + G₃H₃ + G₂G₃H₂H₃)

b) X₅ = [ab/(1 − bd − ac)]X₁ + [ac/(1 − bd − ac)]Y
4A.9
a) C/R = G₁G₂G₃G₄ / (1 + G₃G₄H₁ + G₂G₃H₂ + G₁G₂G₃G₄H₃)

b) Y/X = (AE + CE + CD + AD) / (1 − AB − EF + ABEF)
4B.1
Yes, by examining the pole locations for R, L, C > 0.
4B.2
(i) y(t) = 2(1 − e^(−4t))u(t)

(ii) y(t) = (2t − 1/2 + (1/2)e^(−4t))u(t)

(iii) y(t) = [(8/√20)cos(2t − tan⁻¹(1/2)) − (8/5)e^(−4t)]u(t)

(iv) y(t) = [(40/√116)cos(10t − tan⁻¹(5/2)) − (40/29)e^(−4t)]u(t)
4B.3
(i) underdamped: y(t) = 2[1 − (2/√3)e^(−2t)sin(2√3t + cos⁻¹(0.5))]u(t),  h(t) = (16/√3)e^(−2t)sin(2√3t)

(ii) critically damped: y(t) = 2[1 − (1 + 4t)e^(−4t)]u(t),  h(t) = 32te^(−4t)

(iii) overdamped: y(t) = [2 − (8/3)e^(−2t) + (2/3)e^(−8t)]u(t),  h(t) = (16/3)(e^(−2t) − e^(−8t))
4B.4
(a) y_ss(t) = (1/17)u(t) (b) y_ss(t) = (10/√170)cos(t − tan⁻¹(13))u(t)
5A.1
a) 2 kHz b) 2.5 Hz
5A.2
a) 0 dB b) 32 dB c) –6 dB
5A.3
[Bode magnitude and phase plots for parts a)–f).]
5A.5
[Bode plot: magnitude response |H(ω)| in dB and phase response arg H(ω) in degrees versus ω (rad/s)]
5A.6
[Bode plot: magnitude response |H(ω)| in dB and phase response arg H(ω) in degrees versus ω (rad/s)]
5A.7
a) [Bode plot: magnitude response in dB and phase response in degrees versus ω (rad/s)]
(i) +28 dB, −135° (ii) +5 dB, −105° (iii) −15 dB, −180° (iv) −55 dB, −270°
b) [Bode plot: magnitude response in dB and phase response in degrees versus ω (rad/s)]
(i) +35 dB, +180° (ii) −5 dB, +100° (iii) −8 dB, +55° (iv) −28 dB, +90°
5A.8
a) G1(s) = 450 / [s(s + 2)(5s + 20)]
[Bode plot: magnitude response in dB and phase response in degrees versus ω (rad/s)]
b) G2(s) = 10s / [(s + 0.2)(s + 10)]
[Bode plot: magnitude response in dB and phase response in degrees versus ω (rad/s)]
5A.9
a) [Bode plot: magnitude response in dB and phase response in degrees versus ω (rad/s)]
b) [Bode plot: magnitude response in dB and phase response in degrees versus ω (rad/s)]
5A.10
(i) –3.3 dB, 8° (ii) –17.4 dB, 121°
5A.11
[Bode plot: magnitude response |H(ω)| in dB and phase response arg H(ω) in degrees versus ω (rad/s)]
5B.1
5B.2
G(s) = 196 / (s² + 19.6s + 196)
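Reading the transfer function as G(s) = 196/(s² + 19.6s + 196) and matching it to the standard form ωn²/(s² + 2ζωn s + ωn²) gives ωn = 14 rad/s and ζ = 0.7:

```python
import math

# Coefficients of s^2 + 19.6s + 196 in standard second-order form.
wn = math.sqrt(196.0)        # natural frequency: wn^2 = 196
zeta = 19.6 / (2.0 * wn)     # damping ratio: 2*zeta*wn = 19.6

assert wn == 14.0
assert abs(zeta - 0.7) < 1e-12
```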
5B.3
c(t) = 1 − [e^(−ζωn t)/√(1 − ζ²)] sin(ωn√(1 − ζ²) t + cos⁻¹ζ)
5B.4
a) ζ = ln(a/b) / √[4π² + ln²(a/b)] b) ζ = 0.247, ωn = 16.21 rad/s
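The formula in a) can be sanity-checked with a round trip: a system with damping ratio ζ has successive same-side peak ratio ln(a/b) = 2πζ/√(1 − ζ²), and the formula should recover ζ from that ratio:

```python
import math

def zeta_from_peaks(a, b):
    """Damping ratio from two successive same-side peak amplitudes a > b."""
    L = math.log(a / b)
    return L / math.sqrt(4.0 * math.pi ** 2 + L ** 2)

zeta = 0.247                 # the value quoted in b)
ratio = math.exp(2.0 * math.pi * zeta / math.sqrt(1.0 - zeta ** 2))
assert abs(zeta_from_peaks(ratio, 1.0) - zeta) < 1e-12
```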
5B.5
(a) [Pole-zero and step-response sketches:
(i) pole at s = −10; the step response reaches 63.2% of its final value at t = 0.1
(ii) pole at s = −1; the step response reaches 63.2% of its final value at t = 1
(iii) poles at s = −10 and s = −1; the step response differs from both first-order cases]
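The 63.2% markers in the sketches come from the first-order step response v(t) = V(1 − e^(−t/τ)), which reaches 1 − 1/e of its final value after one time constant:

```python
import math

# Fraction of the final value reached at t = tau for a first-order response.
frac = 1.0 - math.exp(-1.0)
assert abs(frac - 0.632) < 5e-4    # about 63.2%
```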
5B.6
G1(s) = 1.299 / (s + 0.6495)
[Step responses of the first-order approximation and the second-order system versus time (s)]
5B.7
a) ωn = 3, ζ = 1/6 b) 58.8% c) 0 d) 1/9
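The 58.8% in b) follows from the standard peak-overshoot formula for an underdamped second-order step response, reading a)'s damping ratio as ζ = 1/6:

```python
import math

def percent_overshoot(zeta):
    """Peak overshoot (%) of an underdamped second-order step response."""
    return 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta ** 2))

assert abs(percent_overshoot(1.0 / 6.0) - 58.8) < 0.05
```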
5B.8
a) 0 b) 3
5B.9
a) T = 10 sec b) (i) T = 9.09 sec (ii) T = 5 sec (iii) T = 0.91 sec
6A.3
a) S_K1^T = 1 b) S_K2^T = 1000/(s² + s + 1000) c) S_G^T = (s² + s)/(s² + s + 1000)
6A.4
a)
open-loop: S_Ka^T = 1, S_K1^T = 0
closed-loop: S_Ka^T = 1/[1 + K1KaG(s)] ≈ 1/(1 + K1Ka) for ω ≪ ωn
closed-loop: S_K1^T = −K1KaG(s)/[1 + K1KaG(s)] ≈ −1 for K1Ka ≫ 1
b)
open-loop: C(s)/Td(s) = G(s) ≈ 1 for ω ≪ ωn
closed-loop: C(s)/Td(s) = G(s)/[1 + K1KaG(s)] ≈ 1/(1 + K1Ka) for ω ≪ ωn
6A.5
a) (i) 1/(1 + KP·K1) (ii) K1/(1 + KP·K1) (iii) 1/(1 + KP·K1) (iv) K1/(1 + KP·K1)
b) (i) 0 (ii) 1/(KI·K1) (iii) 0 (iv) 0
c) (i) 0 (ii) 1/(KP·K1) (iii) 1/KP (iv) 1/KP
The integral term in the compensator reduces the "order" of the error,
i.e. infinite values become finite, and finite values become zero.
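The order-reduction remark can be demonstrated with the final value theorem, e_ss = lim(s→0) s·E(s). The plant G(s) = K1/(s + 1) and the gain values below are illustrative choices, not the problem's actual system:

```python
# Steady-state error to a unit step, E(s) = (1/s)/(1 + C(s)G(s)),
# evaluated near s = 0 to approximate the final value theorem limit.
K1, KP, KI = 2.0, 3.0, 5.0

def ess(controller, s=1e-9):
    G = K1 / (s + 1.0)                      # illustrative first-order plant
    return s * (1.0 / s) / (1.0 + controller(s) * G)

e_P = ess(lambda s: KP)                     # P control: finite error
e_PI = ess(lambda s: KP + KI / s)           # PI control: error driven to zero

assert abs(e_P - 1.0 / (1.0 + KP * K1)) < 1e-6
assert e_PI < 1e-6
```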
7A.1
(i), (ii) [Block-diagram realizations using unit-delay elements, adders and gain blocks (delays z⁻¹, z⁻², z⁻⁴ and a gain of 3 appear)]
7A.2
(iii) y[n] = y[n − 1] + 2y[n − 2] + 3x[n − 1]
7A.3
or y[n] = 3(1/3)^n u[n] + (3/2)(1/3)^(n−2) u[n − 2]
(the responses above are equivalent – to see this, graph them or rearrange terms)
7A.4
h[n] = h1[n] * h2[n]
7A.5
h[0] = h1[0]h2[0]
h[1] = h1[0]h2[1] + h1[1]h2[0]
h[2] = h1[0]h2[2] + h1[1]h2[1] + h1[2]h2[0]
7A.6
(a)
(i) F(z) = z/(z − 1)² (ii) F(z) = az/(z − a)
(iii) F(z) = z/(z − a)² (iv) F(z) = az/(z − a)³
(b) X(z) = 2z²/(z² − 1)
7A.7
(a) zeros at z = 0, −2; poles at z = 1/3, −1; stable
7A.8
H(z) = (3z⁻¹ + 2z⁻⁴) / (1 − z⁻¹ − z⁻²)
7A.9
x[n] = 0, 14, 10.5, 9.125, ...
7A.10
7A.11
(a) x[n] = (1 − a^(n+1)) / (1 − a) for n = 0, 1, 2, ...
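The closed form is the geometric partial sum 1 + a + ⋯ + aⁿ, e.g. the step response of the accumulator y[n] = a·y[n − 1] + x[n]. A quick check:

```python
def closed_form(a, n):
    """Partial geometric sum 1 + a + ... + a^n in closed form."""
    return (1.0 - a ** (n + 1)) / (1.0 - a)

a = 0.5
direct = sum(a ** k for k in range(5))      # n = 4, summed term by term
assert abs(direct - closed_form(a, 4)) < 1e-12
```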
7A.12
(a) h[n] = (1/4)(1/2)^n − (1/8)(1/4)^n = 2^(−(n+2)) − 2^(−(2n+3))
(b) H(z) = (z²/8) / [(z − 1/2)(z − 1/4)]
(c) y[n] = 1/3 + (1/24)(1/4)^n − (1/4)(1/2)^n
(d) y[n] = n/3 − 4/9 + (1/2)(1/2)^n − (1/18)(1/4)^n
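Reading (b)'s transfer function as H(z) = (z²/8)/((z − 1/2)(z − 1/4)) = (z²/8)/(z² − 0.75z + 0.125), the implied difference equation can be iterated and compared against the partial-fraction closed form for the impulse response:

```python
def impulse_response(N):
    """Iterate y[n] = 0.75*y[n-1] - 0.125*y[n-2] + 0.125*x[n] for a unit impulse."""
    y = []
    for n in range(N):
        x = 1.0 if n == 0 else 0.0
        yn = 0.125 * x
        if n >= 1:
            yn += 0.75 * y[n - 1]
        if n >= 2:
            yn -= 0.125 * y[n - 2]
        y.append(yn)
    return y

# Closed form from partial fractions: h[n] = (1/4)(1/2)^n - (1/8)(1/4)^n
for n, hn in enumerate(impulse_response(12)):
    assert abs(hn - (0.25 * 0.5 ** n - 0.125 * 0.25 ** n)) < 1e-12
```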
7A.13
y[n] = −1 + (1/√5)(0.5 + √1.25)^(n+3) − (1/√5)(0.5 − √1.25)^(n+3)
for n = 0, 1, 2, ...
7A.14
y[n] = (1/√5)[(0.5 + √1.25)^n − (0.5 − √1.25)^n]
7A.15
F(z) = zTe^(−T) / (z − e^(−T))²
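This is the z-transform of the samples nT·e^(−nT); it can be verified by comparing a long partial sum of the defining series with the closed form at a test point inside the region of convergence:

```python
import math

T, z = 1.0, 2.0   # sample period and a test point with |z| > e^(-T)
series = sum(n * T * math.exp(-n * T) * z ** (-n) for n in range(200))
closed = z * T * math.exp(-T) / (z - math.exp(-T)) ** 2
assert abs(series - closed) < 1e-12
```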
7A.16
(a) X3(z) = [k(z − 1) / ((k + 1)z − (6k − 1))] X1(z) + [k(z − 1) / ((k + 1)z − (6k − 1))] D(z)
7A.17
(a) f[0] = 0, f[∞] = 0 (b) f[0] = 1, f[∞] = 0
(c) f[0] = 1/4, f[∞] = 1/(1 − a) (d) f[0] = 0, f[∞] = 8
7A.18
(i) (n + 1)u[n]
(ii) δ[n] + 2δ[n − 1] + 3δ[n − 2] + 2δ[n − 3] + δ[n − 4]
7A.19
7B.1
y[n] = 0.7741y[n − 1] + 20x[n] − 18.87x[n − 1]
7B.2
y[n] = 0.7730y[n − 1] + 17.73x[n] − 17.73x[n − 1]
7B.3
Hd(z) = (0.1670z + 0.3795) / [(z − 0.3679)(z − 0.1353)]
8B.1
(a) [Root locus plot]
(c) K = 0.206
(d) ts = 10.7 s
8B.2
(a) One pole is always in the right-half plane:
[Root locus plots for K > 0 and K < 0]
The system is always unstable because the pole pair moves into the right-half plane (s = ±j3.5) at a lower gain (K = 37.5) than the gain at which the right-half-plane pole enters the left-half plane (K = 48). The principle is sound, however: a different choice of pole-zero locations for the feedback compensator is required to produce a stable feedback system.
For K < 0, the root in the right-half plane stays on the real axis and moves to the right, so negative values of K are not worth pursuing.
For the given open-loop pole-zero pattern, there are two different feasible locus
paths, one of which includes a nearly circular segment from the left-half
s-plane to the right-half s-plane. The relative magnitude of the poles and zeros
determines which of the two paths occurs.
[Root locus plots illustrating the two feasible locus paths]
8B.3
(a) [Root locus plot]
(b) a = 32/27
8B.4
(a) [Root-locus sketches in the s-plane for K > 0 and K < 0, with open-loop poles at s = −3 and s = −1 and imaginary-axis crossings at s = ±j√3]
(b) K = 2/5, ωn = √3 rad/s
(c) Any value of 0 < K < 2/5 gives zero steady-state error. [The system is type 1.
Therefore any value of K will give zero steady-state error provided the
system is stable.]
9A.1
q = [vC, iL1]^T,
A = [ −1/(C1(R2 + R3))    −R2/(C1(R2 + R3))
      R2/(L1(R2 + R3))    −(R1 + R2R3/(R2 + R3))/L1 ],
b = [ 0, 1/L1 ]^T
9A.2
(a) false (b) false (c) false [all true for x(0) = 0]
9A.3
(a) q̇ = [ −2 0 0 ; 0 −3 −2 ; 1 0 −1 ]q + [ 1 ; 1 ; 0 ]x, y = [2 2 1]q + 0·x
(b) q̇ = [ −2 −2 −2 ; 1 0 0 ; 0 1 0 ]q + [ 1 ; 0 ; 0 ]x, y = [0 0 1]q + 0·x
9A.4
(a) H(s) = 8 / [(s + 1)²(s + 3)] (b) H(s) = 1 / [s(s + 2)(s³ + 5s² + 9s + 7)]
9A.5
q(t) = L⁻¹ [ (3s + 1)/(s² + 3) + 2/(s(s² + 3))
             (2s + 6)/(s(s² + 3)) ]
     = [ 2/3 + (7/3)cos(√3 t) + (1/√3)sin(√3 t)
         2 − 2cos(√3 t) + (2/√3)sin(√3 t) ] u(t)
9B.1
λ1 = −1, λ2 = −2, λ3 = −3
9B.2
(a) λ1 = −1, λ2 = −2 (b) q(t) = [ 1 − e^(−t) ; e^(−t) ]
9B.3
(ii) −1, −1 ± j2 (iii) k1 = 75, k2 = 49, k3 = 10 (iv) y_ss(t) = 1/40
(v) State feedback can place the poles of any system arbitrarily (if the system is
controllable).
9B.4
q[n + 1] = [ 0 0 −1/16 ; 1 0 −1/4 ; 0 1 −1/4 ]q[n] + [ 3/16 ; 1/8 ; 1/2 ]x[n]
y[n] = [0 0 1]q[n] + 3x[n]
9B.6
q[n + 1] = [ a 1 0 ; −k 0 k ; b − c 0 c ]q[n] + [ 0 ; k ; b + c ]x[n]
y[n] = [1 0 0]q[n]