
SIGNALS AND SYSTEMS

Aydın Akan

Department of Electrical and Electronics Engineering


University of Istanbul
Avcilar, Istanbul 34850 TURKEY
E-mail: akan@istanbul.edu.tr


October 2001, Istanbul


TABLE OF CONTENTS

1. CONTINUOUS AND DISCRETE TIME SIGNALS AND SYSTEMS
1.1. INTRODUCTION TO SIGNALS
1.2. BASIC TRANSFORMATIONS ON SIGNALS
1.3. BASIC CONTINUOUS-TIME SIGNALS
1.4. BASIC DISCRETE-TIME SIGNALS
1.5. PERIODICITY OF DISCRETE-TIME SINUSOIDS
1.6. INTRODUCTION TO SYSTEMS
1.7. PROPERTIES OF SYSTEMS

2. LINEAR TIME-INVARIANT SYSTEMS
2.1. CONTINUOUS-TIME LTI SYSTEMS
2.2. DISCRETE-TIME LTI SYSTEMS
2.3. PROPERTIES OF LTI SYSTEMS
2.4. REPRESENTATION OF LTI SYSTEMS BY DIFFERENTIAL EQUATIONS
2.5. REPRESENTATION OF LTI SYSTEMS BY BLOCK DIAGRAMS

3. CONTINUOUS-TIME FOURIER ANALYSIS
3.1. RESPONSE OF LTI SYSTEMS TO COMPLEX SINUSOIDS
3.2. CONTINUOUS-TIME FOURIER SERIES
3.3. CONTINUOUS-TIME FOURIER TRANSFORM FOR APERIODIC SIGNALS
3.4. PROPERTIES OF THE CONTINUOUS-TIME FOURIER TRANSFORM
3.5. SAMPLING OF CONTINUOUS TIME SIGNALS

4. FILTERING
4.1. IDEAL FILTERS
4.2. NON-IDEAL FREQUENCY SELECTIVE FILTERS

1. CONTINUOUS AND DISCRETE TIME SIGNALS AND SYSTEMS


1.1. INTRODUCTION TO SIGNALS
A signal is the outcome of a physical system. Signals are represented
mathematically as functions of one or more independent variables. A speech
signal is represented by acoustic pressure as a function of time; a picture is
represented as a brightness function of two spatial variables (x and y).
There are two basic types of signals:
Continuous-time Signals
Discrete-time Signals
In the case of continuous-time signals, the independent variable is continuous; these signals are defined for a continuum of values of the independent
variable. For discrete-time signals, on the other hand, the independent variable takes only a discrete set of values; hence discrete-time signals are
defined only at discrete time instants.
A speech signal as a function of time (pressure or electrical voltage as a
function of time) [see Fig. 1], atmospheric pressure as a function of altitude,
and electrocardiogram ECG (or other biological) signals as a function of time
are some examples of continuous-time signals.
The amount of precipitation in kg per month, the height of a child measured and recorded every year are examples of discrete-time signals. Figures
2 and 3 show examples of such signals.
t : continuous-time variable
n : discrete-time variable

x(t) : continuous-time signal
x(n) : discrete-time signal

x(n) is defined only for integer values of n. For some discrete-time signals, the independent variable is inherently discrete (e.g., precipitation per
month, height vs. year).
On the other hand, some discrete-time signals may represent samples of
a continuous-time signal. For example, to process a continuous-time signal
on a digital computer, we use samples of the signal taken at discrete time instants.
This is called sampling of a continuous-time signal.

Figure 1: Example of a continuous-time signal: a 20 msec. speech segment.

x(n) = x(nT),   n = ..., -1, 0, 1, ...

where T is called the sampling period. Finally, based on the nature of the
time and amplitude variables, we have the following signal classification:

Amplitude \ Time  | Continuous | Discrete
Continuous        | ANALOG     | DISCRETE-TIME
Discrete          | QUANTIZED  | DIGITAL
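The sampling relation x(n) = x(nT) can be sketched in code. This is an illustration, not part of the original notes; the signal x(t) = cos(2πF₀t) and the values of F0 and T below are chosen only for the example.

```python
import math

F0 = 50.0          # analog frequency in Hz (example value)
T = 1.0 / 1000.0   # sampling period in seconds, i.e. Fs = 1000 Hz (example value)

def x_continuous(t):
    """The underlying continuous-time signal x(t) = cos(2*pi*F0*t)."""
    return math.cos(2 * math.pi * F0 * t)

def x_discrete(n):
    """The discrete-time signal x(n) = x(nT), defined only for integer n."""
    return x_continuous(n * T)

# The sample at n = 5 is exactly the continuous signal evaluated at t = 5T.
assert x_discrete(5) == x_continuous(5 * T)
```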

1.2. BASIC TRANSFORMATIONS ON SIGNALS


a) Reversing in Time:
Given a continuous-time signal x(t) (or a discrete-time signal x(n)), its
time-reversed version x_r(t) (or x_r(n)) is obtained by

x_r(t) = x(-t)
x_r(n) = x(-n)

Figure 2: Example of a discrete-time signal: the amount of precipitation P (kg) per month, Jan-Dec.

Figure 3: Another example of a discrete-time signal: the height of a child H (cm), measured and recorded every year.
Figures 4 and 5 show a continuous-time signal x(t), and its time reversed
version, xr (t). A similar example can be given for a discrete-time signal.
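A minimal sketch of time reversal for a discrete-time signal. The dict-based representation of x(n) below is a choice made for this illustration, not something from the notes.

```python
# Store a finite-support discrete-time signal as a dict mapping integer n to x(n);
# samples not listed are taken to be zero.
x = {-2: 1.0, -1: 2.0, 0: 3.0, 1: 4.0}

def reverse(sig):
    """Return the time-reversed signal x_r with x_r(n) = x(-n)."""
    return {-n: v for n, v in sig.items()}

xr = reverse(x)
assert xr[2] == x[-2] and xr[-1] == x[1]   # samples swap around n = 0
```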
b) Scaling in Time:
A continuous-time signal x(t) (or a discrete-time signal x(n)) can be
scaled in time such that x_s(t) (or x_s(n)) is defined as

x_s(t) = x(αt),   α > 0
x_s(n) = x(dn),   d > 1 integer

Figure 4: A continuous-time signal x(t).

Figure 5: x_r(t) = x(-t), the time-reversed version of x(t).


Here, the real number α > 0 is called the scaling factor. When α < 1, x(αt) is an expanded version of x(t) in time. On the other
hand, if α > 1, then x(αt) is a shrunken (contracted) version of x(t). For
discrete-time signals the scaling factor d must be an integer value because
the signal is only defined for integer values of n, and (dn) must also be an
integer.

α < 1 : x(αt) is an expansion
α = 1 : x(αt) = x(t)
α > 1 : x(αt) is a contraction

In Figs. 6, 7, and 8, we give a continuous-time signal x(t) and its scaled
versions. Fig. 7 shows x(2t), the contracted signal, and Fig. 8 shows x(0.5t),
the expanded signal.
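The two scaling rules above can be sketched as higher-order functions. This example is not from the notes; the helper names and the sample signals are chosen for illustration.

```python
def scale_continuous(x, a):
    """Return the function t -> x(a*t); a < 1 expands, a > 1 contracts."""
    return lambda t: x(a * t)

def scale_discrete(x, d):
    """Return the function n -> x(d*n) for integer d > 1: only every d-th
    sample of x is kept, which is why d must be an integer."""
    return lambda n: x(d * n)

x = lambda t: t * t                       # example continuous-time signal
y = scale_continuous(x, 2.0)              # y(t) = x(2t): contraction
z = scale_discrete(lambda n: n + 10, 3)   # z(n) = x(3n): downsampling by 3

assert y(1.0) == x(2.0)
assert z(2) == 16    # z(2) = x(6) = 6 + 10
```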
c) Shifting in Time:

Figure 6: A continuous-time signal x(t).

Figure 7: Scaled signal x(2t); contraction.


Given a continuous-time signal x(t) (or a discrete-time signal x(n)), its
time-shifted or translated version x_t(t) (or x_t(n)) is obtained as

x_t(t) = x(t - t_0)
x_t(n) = x(n - n_0)

where t_0 and n_0 indicate the shift amount. When this shift amount is
positive, the shift in time is called a delay, and when it is negative it is
called an advance. We have a continuous-time signal x(t) in Fig. 9 and its
time-delayed version x_t(t) = x(t - t_0) in Fig. 10, with shift amount t_0 > 0.

t_0 > 0 : shift to the right (delaying the signal)
t_0 < 0 : shift to the left (advancing the signal)

Figure 8: Scaled signal x(0.5t); expansion.

Figure 9: A given continuous-time signal x(t).

d) Scaling by a Constant:
Given a signal x(t) or x(n), we can scale its magnitude by multiplying
the signal by a constant c, such that

x_c(t) = c x(t)
x_c(n) = c x(n)

e) Even and Odd Decomposition:
Any real-valued signal can be decomposed into two parts: one even
symmetric and one odd symmetric component. That is,

x(t) = x_e(t) + x_o(t)

where x_e(t) indicates the even symmetric part and x_o(t) indicates the odd
symmetric part of the signal. A signal is called even symmetric if it is
identical to its reflection about zero (its time-reversed version):
Figure 10: Time-delayed signal x_t(t) = x(t - t_0).

x_e(t) = x_e(-t),   x_e(n) = x_e(-n)

A signal is referred to as an odd symmetric signal if

x_o(t) = -x_o(-t),   x_o(n) = -x_o(-n)

For an odd symmetric signal, the value at the origin must be zero: x_o(0) = 0.


To obtain the even and odd components of a given signal, write

x(t) = x_e(t) + x_o(t)        (1)

x(-t) = x_e(-t) + x_o(-t) = x_e(t) - x_o(t)        (2)

Adding both sides of equations (1) and (2), we get

x(t) + x(-t) = 2 x_e(t)

Hence, we obtain

x_e(t) = [x(t) + x(-t)] / 2.

Subtracting (2) from (1), we have

x(t) - x(-t) = 2 x_o(t)

Then, we get

x_o(t) = [x(t) - x(-t)] / 2.
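The two formulas above translate directly into code. A small sketch, using the discrete-time unit step u(n) as the example signal (a choice made here, not taken from the notes):

```python
def even_part(x):
    """x_e(n) = (x(n) + x(-n)) / 2."""
    return lambda n: (x(n) + x(-n)) / 2.0

def odd_part(x):
    """x_o(n) = (x(n) - x(-n)) / 2."""
    return lambda n: (x(n) - x(-n)) / 2.0

x = lambda n: float(n >= 0)        # discrete-time unit step u(n)
xe, xo = even_part(x), odd_part(x)

for n in range(-5, 6):
    assert xe(n) == xe(-n)           # even symmetry
    assert xo(n) == -xo(-n)          # odd symmetry
    assert xe(n) + xo(n) == x(n)     # the decomposition reconstructs x
assert xo(0) == 0.0                  # the odd part vanishes at the origin
```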

f) Periodicity of Signals:
A signal is periodic with period T if there exists a positive real number
T for which

x(t) = x(t + mT)   for any integer m.

The smallest possible value of T is called the fundamental period T_0, and
x(t) = x(t + T_0). A constant signal is periodic with any choice of T (there
is no smallest possible value).
For discrete-time signals,

x(n) = x(n + kN)   for any integer k.

A signal that is periodic with N is also periodic with integer multiples
of N, i.e., x(n) = x(n + 2N) = x(n + 3N) = .... A signal that is not periodic
is called aperiodic.

1.3. BASIC CONTINUOUS-TIME SIGNALS


a) Unit-Step Function:
The unit-step function is defined as

u(t) = 1,  t > 0
u(t) = 0,  t < 0

Figure 11 shows the unit-step function u(t). As we see, u(t) is not defined at
t = 0, where it has a discontinuity.
b) Unit-Impulse Function:
The unit-impulse or Dirac delta function is defined as the derivative of
the unit-step function:

Figure 11: The unit-step function u(t).

δ(t) = (d/dt) u(t)

Conversely, u(t) can be obtained from δ(t) by a running integration:

u(t) = ∫_{-∞}^{t} δ(τ) dτ

As we mentioned above, u(t) has a discontinuity at t = 0, so we cannot
take its derivative there. Let us therefore define a non-ideal step function u_Δ(t) (see Fig.
12), from which we can take the derivative:

δ_Δ(t) = (d/dt) u_Δ(t)

The function δ_Δ(t) (shown in Fig. 13) can be viewed as a non-ideal impulse function.
In fact, as Δ → 0, δ_Δ(t) gets narrower and higher but maintains its unit
area. Finally,

lim_{Δ→0} δ_Δ(t) = δ(t)

Figure 14 shows the ideal unit-impulse function as this limiting case.

Figure 12: A non-ideal unit-step function u_Δ(t).

Figure 13: Non-ideal impulse function δ_Δ(t), of width Δ and height 1/Δ.

The value of δ(t) at t = 0 is actually infinite; it is the area under the
impulse that equals unity. By convention, however, we draw the unit-impulse
function with unit amplitude, representing its unit weight (area). A scaled
impulse integrates to a scaled step:

∫_{-∞}^{t} k δ(τ) dτ = k u(t)

A very important property of the impulse function is that we can represent
any point of a signal by a scaled impulse, i.e.,

x(t) δ(t) = x(0) δ(t)

That is, if we multiply a signal x(t) with a unit-impulse function, we get an
impulse function at t = 0 scaled by the amplitude of the signal at
t = 0, i.e., x(0). Similarly, if we multiply our signal x(t) with a shifted unit-impulse
function δ(t - t_0), we get another impulse function at t_0 scaled by
the amplitude of the signal at t_0, i.e., x(t_0):

x(t) δ(t - t_0) = x(t_0) δ(t - t_0).

Figure 14: The ideal unit-impulse function δ(t), with unit weight.


Later on, we will see how any signal can be represented as a weighted combination of shifted impulses.
c) Complex Exponentials/Sinusoids:
A continuous-time complex exponential signal is given in general by

x(t) = A e^{at}

where the amplitude A and the exponent a are, in general, complex. If A and a
are real, we have a real-valued exponential function: when a > 0, a
growing exponential (see Fig. 15), and when a < 0, a decaying
exponential (see Fig. 16).

Figure 15: An increasing exponential signal A e^{at}, a > 0.

If the exponent a is purely imaginary, i.e., a = jω_0, we have a
periodic signal that is called a complex sinusoid:

x(t) = e^{jω_0 t}
Figure 16: A decreasing exponential signal A e^{at}, a < 0.


where ω_0 is the radial (angular) frequency of the sinusoid. To see that
x(t) is periodic, write

e^{jω_0 (t+T)} = e^{jω_0 t} e^{jω_0 T}

To satisfy the above equation, we need

e^{jω_0 T} = 1.

In fact, for ω_0 ≠ 0 there is a smallest possible T, called the
fundamental period T_0 of x(t), such that ω_0 T_0 = 2π, or

T_0 = 2π / |ω_0|.

Then

e^{jω_0 T_0} = e^{j2π} = 1

proving that x(t) is a periodic signal with period T_0, or frequency ω_0.
d) Real Sinusoids:
A general real-valued sinusoidal signal is given by

x(t) = A cos(ω_0 t + φ)        (3)

where t is the time in seconds, ω_0 = 2πF_0 is the fundamental radial frequency in radians/second, φ is the initial phase in radians, and F_0 is the frequency in Hertz. Fig. 17 shows part of a (periodic) real-valued sinusoidal
signal with radial frequency ω_0, i.e., fundamental period T_0 = 2π/ω_0.
Complex sinusoids and real sinusoids are related by the Euler identities:

Figure 17: A continuous-time real-valued sinusoid A cos(ω_0 t + φ), with fundamental period T_0 and value A cos φ at t = 0.

e^{jω_0 t} = cos(ω_0 t) + j sin(ω_0 t)
e^{-jω_0 t} = cos(ω_0 t) - j sin(ω_0 t)

cos(ω_0 t) = (e^{jω_0 t} + e^{-jω_0 t}) / 2
sin(ω_0 t) = (e^{jω_0 t} - e^{-jω_0 t}) / (2j)

Moreover, we have that

A cos(ω_0 t + φ) = (A/2) e^{jω_0 t} e^{jφ} + (A/2) e^{-jω_0 t} e^{-jφ}

A cos(ω_0 t + φ) = Re{A e^{j(ω_0 t + φ)}}

A sin(ω_0 t + φ) = Im{A e^{j(ω_0 t + φ)}}
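The Euler identities above can be spot-checked numerically. A small sketch; the particular values of ω_0, t, A, and φ are arbitrary choices for the check:

```python
import cmath
import math

w0, t, A, phi = 3.0, 0.7, 2.0, 0.4   # arbitrary test values

# Euler identity: e^{j w0 t} = cos(w0 t) + j sin(w0 t)
lhs = cmath.exp(1j * w0 * t)
assert abs(lhs - (math.cos(w0 * t) + 1j * math.sin(w0 * t))) < 1e-12

# cos and sin recovered from complex exponentials
cos_from_exp = (cmath.exp(1j * w0 * t) + cmath.exp(-1j * w0 * t)) / 2
sin_from_exp = (cmath.exp(1j * w0 * t) - cmath.exp(-1j * w0 * t)) / 2j
assert abs(cos_from_exp - math.cos(w0 * t)) < 1e-12
assert abs(sin_from_exp - math.sin(w0 * t)) < 1e-12

# A cos(w0 t + phi) and A sin(w0 t + phi) as real and imaginary parts
z = A * cmath.exp(1j * (w0 * t + phi))
assert abs(z.real - A * math.cos(w0 * t + phi)) < 1e-12
assert abs(z.imag - A * math.sin(w0 * t + phi)) < 1e-12
```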

1.4. BASIC DISCRETE-TIME SIGNALS


a) The Unit-Step Function:
The discrete-time unit-step function u(n) is defined as 1 for n ≥ 0
and 0 for negative n, i.e.,

u(n) = 1,  n ≥ 0
u(n) = 0,  n < 0

Fig. 18 shows the discrete-time unit-step function u(n). Notice that for the discrete-time unit-step function there is no ambiguity at n = 0; namely, u(0) = 1.
Figure 18: Discrete-time unit-step function u(n).

b) Unit-Impulse (Sample) Function:
The discrete-time unit-impulse or unit-sample function δ(n), given in Fig.
19, is defined as 1 for n = 0 and 0 elsewhere, i.e.,

δ(n) = 1,  n = 0
δ(n) = 0,  n ≠ 0

Figure 19: Discrete-time unit-impulse function δ(n).


As in the continuous-time case, the discrete-time unit-step and unit-impulse
functions are related to each other. The unit-impulse function
can be obtained from the unit-step function by a first-order difference, i.e.,

δ(n) = u(n) - u(n - 1)

Similarly, the unit-step function can be represented as a combination of shifted
unit-impulse functions, i.e.,

u(n) = Σ_{k=0}^{∞} δ(n - k)

Furthermore, we can obtain the values of the unit-step function for any n as a
running sum of a single unit-impulse function, i.e.,

u(n) = Σ_{m=-∞}^{n} δ(m)

This idea is illustrated in Figs. 20-(a) and 20-(b) for n < 0 and n ≥ 0,
respectively.

Figure 20: Representation of the unit step as a running sum of the unit impulse: (a) for n < 0 the sum gives u(n) = 0, (b) for n ≥ 0 it gives u(n) = 1.
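The running-sum and first-order-difference relations above can be sketched directly. The truncation of the sum at -50 below stands in for -∞ and is an implementation choice for this illustration only:

```python
def delta(n):
    """Discrete-time unit impulse: 1 at n = 0, else 0."""
    return 1 if n == 0 else 0

def u(n):
    """Unit step as the running sum of delta(m) for m <= n.
    The lower limit -50 stands in for -infinity; only m = 0 contributes."""
    return sum(delta(m) for m in range(-50, n + 1))

assert u(-3) == 0 and u(0) == 1 and u(4) == 1
# First-order difference of the step recovers the impulse:
assert all(u(n) - u(n - 1) == delta(n) for n in range(-10, 11))
```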


c) Complex Exponentials/Sinusoids:
In discrete time, we define complex exponentials as

x(n) = A e^{an} = A α^n,   α = e^a

In general, A and α are complex values. When A and α are real, we have
the following situations:

|α| > 1 : increasing exponential
|α| < 1 : decreasing exponential
α > 0 : all samples positive
α < 0 : alternating signs

In Fig. 21, we see an increasing exponential with all positive values, α > 1,
and in Fig. 22, we have a decreasing exponential with all positive values,
0 < α < 1.

Figure 21: An increasing exponential A α^n with positive values, α > 1.


Fig. 23 shows an increasing exponential with positive and negative
values, α < -1, and Fig. 24 shows a decreasing exponential with positive
and negative values, -1 < α < 0.
If the exponent a is purely imaginary, i.e., a = jω_0, we have
a discrete-time complex sinusoid:

x(n) = e^{jω_0 n}


Figure 22: A decreasing exponential A α^n with positive values, 0 < α < 1.

Figure 23: An increasing exponential A α^n with positive and negative values, α < -1.
where ω_0 is the discrete angular frequency of the sinusoid. We will
discuss the periodicity of discrete-time (complex and real-valued) sinusoids
in detail below.
d) Real Sinusoids:
A discrete-time real-valued sinusoidal signal is given by

x(n) = A cos(ω_0 n + φ)        (4)

where n is the integer sample index, ω_0 = 2πf_0 is the fundamental radial
(or angular) frequency in radians, φ is the initial phase in radians, and f_0 is the
normalized discrete frequency, with no units.

Figure 24: A decreasing exponential A α^n with positive and negative values, -1 < α < 0.
Complex- and real-valued sinusoids are related to each other in the same
way as their continuous-time counterparts:

A e^{j(ω_0 n + φ)} = A cos(ω_0 n + φ) + jA sin(ω_0 n + φ)
A e^{-j(ω_0 n + φ)} = A cos(ω_0 n + φ) - jA sin(ω_0 n + φ)

A cos(ω_0 n + φ) = (A/2) e^{j(ω_0 n + φ)} + (A/2) e^{-j(ω_0 n + φ)}
A sin(ω_0 n + φ) = (A/2j) e^{j(ω_0 n + φ)} - (A/2j) e^{-j(ω_0 n + φ)}

1.5. PERIODICITY OF DISCRETE-TIME SINUSOIDS


Here we discuss two periodicity issues of discrete-time sinusoids.
1. Periodicity in the discrete frequency:
As opposed to continuous-time sinusoids, the frequency of a discrete-time
sinusoid does not keep increasing as we make ω_0 larger. Consider a complex
sinusoid, e^{jω_0 n}. Now let us add 2π to its frequency, i.e.,

e^{j(ω_0 + 2π)n} = e^{j2πn} e^{jω_0 n} = e^{jω_0 n}

since e^{j2πn} = 1 for integer n. We see that after increasing the frequency of this signal by an
amount of 2π, the signal remains the same, i.e., frequency ω_0 + 2π is equivalent to ω_0. Hence we can say that the frequency of discrete-time sinusoids is periodic with period 2π.
Therefore, we only consider the frequency range 0 ≤ ω ≤ 2π or, equivalently,
-π ≤ ω ≤ π. The signals are slow at frequencies around 0 or 2π and fast
around ±π.

2. Periodicity of the sinusoid in time:

Now we consider the time periodicity of the sinusoid. For the signal to
be periodic, we need it to repeat itself after N samples, i.e.,

e^{jω_0 (n+N)} = e^{jω_0 n} e^{jω_0 N}

For the above to be equal to the original signal e^{jω_0 n}, we require that

e^{jω_0 N} = 1.

That is only possible if

ω_0 N = 2πk

i.e., an integer multiple of 2π (k integer). Hence, we have that

N/k = 2π/ω_0.

It is also clear that the period of a discrete-time signal must be an integer
value. Therefore, for a discrete-time sinusoid to be periodic, the ratio 2π/ω_0
must be a rational value. The smallest possible integer N_0 that satisfies this
condition is called the fundamental period of the sinusoid.
Example 1: Let us determine whether or not the signal

x(n) = e^{j2.3πn}

is periodic, and if so, find its fundamental period. As we see, the radial frequency
of the signal is ω = 2.3π rad. Taking this value mod 2π, we get the
fundamental frequency ω_0 = 0.3π rad. Now, the ratio is

N/k = 2π/ω_0 = 2π/(0.3π) = 20/3

Certainly the value 20/3 = 6.6666... samples cannot be the period of a
discrete-time signal. However, it is a rational number, so x(n) is periodic: we can find an integer k such that k(20/3)
is an integer. For k = 3, the fundamental period N_0 = 20 samples is obtained.
Example 2: In this example we consider a real sinusoid,

x(n) = cos(0.5n).

It looks like the frequency of this signal is ω_0 = 0.5 rad. Calculating the
ratio,

N/k = 2π/ω_0 = 2π/0.5 = 4π.

4π is not an integer value, so it cannot be the period of this signal. Moreover, it is not even a rational number: it is not possible to find an integer
k such that 4πk is an integer. Hence the signal x(n) = cos(0.5n) is not
periodic.

1.6. INTRODUCTION TO SYSTEMS


A system performs a transformation on the input signal and generates
an output:

y(t) = T[x(t)]   or   y(n) = T[x(n)]

In Fig. 25, we show both continuous-time and discrete-time systems. The
input signal is denoted by x(t) (or by x(n) for discrete-time systems) and
the output by y(t) (or y(n)).
Figure 25: Continuous- and discrete-time systems.


Systems can be used in groups to form larger systems, i.e., two systems can be connected one after the other (serial or cascade) or in parallel.
Figs. 26 and 27 show the cascade and parallel operation of two systems S_1 and S_2.

Figure 26: Serial or cascade connection of two systems.

1.7. PROPERTIES OF SYSTEMS


1. Systems with memory:


Figure 27: Parallel connection of two systems.


Any system whose output depends only on the present input, such as

y(t) = k x(t)

for a constant k, is called a memoryless system. For example, a resistor,
v(t) = R i(t), is memoryless; y(t) = x(t), called the identity system, is
also memoryless. Any other system must have memory to keep past values
of input and output to generate the present output. For instance,

y(n) = [x(n) + x(n - 1) + x(n - 2)] / 3

is a system with memory, as it stores and uses two previous input samples.
Likewise, a capacitor with input current x(t) and output voltage y(t),

y(t) = (1/C) ∫_{-∞}^{t} x(τ) dτ

is also a system with memory.
2. Invertible Systems:
A system is invertible if, by observing the output, we can determine its
input. The transformation must be one-to-one for us to get the system's inverse.
Given the system equation

y(t) = 2x(t)

the inverse system can be determined easily as x(t) = (1/2) y(t).
Example: Consider an accumulator system

y(n) = Σ_{k=-∞}^{n} x(k).

Let us obtain its inverse transformation T^{-1}:

y(n) = ... + x(n - 2) + x(n - 1) + x(n)        (5)
y(n - 1) = ... + x(n - 2) + x(n - 1)        (6)

Subtracting (6) from (5), we get the inverse transformation as

x(n) = y(n) - y(n - 1).

Hence the inverse of the accumulator system is a first-order difference system
(just as the inverse of an integrator is a differentiator).
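The accumulator and its first-order-difference inverse can be sketched on a finite sample list. An illustration, not from the notes; the list below holds x(n) for n = 0, 1, 2, ... with x(n) = 0 for n < 0:

```python
from itertools import accumulate

x = [3, -1, 4, 1, 5]                 # example input samples x(0), x(1), ...
y = list(accumulate(x))              # accumulator output: y(n) = sum_{k<=n} x(k)

# Inverse system: first-order difference x(n) = y(n) - y(n-1), with y(-1) = 0
x_rec = [y[0]] + [y[n] - y[n - 1] for n in range(1, len(y))]
assert x_rec == x                    # the inverse recovers the input exactly
```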
3. Superposition (Linearity):
Given a system with input/output relation y(t) = T[x(t)], we have two
conditions for the system to be linear.
1) Homogeneity: If the system is applied a scaled version of the input,
that is, a x(t), and the output is

T[a x(t)] = a T[x(t)] = a y(t)

then the system is called homogeneous.
2) Additivity: Given a system with input/output pairs y_1(t) = T[x_1(t)]
and y_2(t) = T[x_2(t)], if the system is applied the sum of the two input
signals, x_1(t) + x_2(t), and the output is

T[x_1(t) + x_2(t)] = T[x_1(t)] + T[x_2(t)] = y_1(t) + y_2(t)

then the system is called additive. Combining conditions (1) and (2), we
have the superposition or linearity property of systems. In general, if a
system is given a combination of inputs

Σ_k a_k x_k(t) = a_1 x_1(t) + a_2 x_2(t) + ...,   with T[x_k(t)] = y_k(t),

and

T[ Σ_k a_k x_k(t) ] = Σ_k a_k T[x_k(t)] = Σ_k a_k y_k(t)

then this system is called linear.


4. Time Invariance:

For a system with input/output relation y(t) = T[x(t)], if the system is
applied a time-shifted version of the input, i.e., x(t - t_0), and the response
of the system is

T[x(t - t_0)] = y(t - t_0)

then the system is called time-invariant. This means that delaying the
input causes just the same amount of delay at the output. The system
treats the input signal in the same way at all times, i.e., the system does not change or
evolve with time. For instance, the systems

y(t) = sin[x(t)]   and   y(n) = x(n) + 2x(n - 1)

are time-invariant (TI). However, the system

y(n) = n x(n - 1)

is not time-invariant (it is time-varying), due to the explicit time dependence
in the system equation.
5. Causality:
Causality is a property of real-life systems. It means that the system
does not require any input or output value from future times. A system
is causal if the output is a function of only the present and previous values of
the input. For example, the systems

y(t) = x(t - 1)   and   y(n) = x(n) - 0.8x(n - 1)

are causal. However, the systems

y(n) = 3x(n) - x(n + 1)   and   y(t) = x(t + 1)

are not causal (noncausal).
Causal systems can be implemented physically and operated in real time.
Non-causal systems can be realized to work only off-line, using
previously stored information about past and future samples.
6. Stability:
Stability is a property that all physical systems are required to have. It
means that given a non-diverging (bounded) input, the output generated by the system must also be finite, or bounded. This concept
of stability is called Bounded-Input Bounded-Output (BIBO) stability: any
bounded input to a stable system does not cause the output to diverge.
For example, the systems

y(n) = x(n) + 0.8y(n - 1)   and   y(n) = x(n) - 0.8x(n - 1)

are stable, i.e., any bounded input will generate a bounded output
signal. However, the system

y(n) = x(n) + 2y(n - 1)

is unstable because of the gain factor 2 in the feedback from y(n - 1)
to y(n), which causes the output to blow up as n grows.
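The stable and unstable feedback systems above can be simulated for the bounded input x(n) = u(n). A sketch; the helper name and sample count are choices made for this illustration:

```python
def simulate(a, num_samples):
    """Simulate y(n) = x(n) + a*y(n-1) for x(n) = 1 (unit-step input),
    starting from rest, y(-1) = 0."""
    y, prev = [], 0.0
    for _ in range(num_samples):
        prev = 1.0 + a * prev
        y.append(prev)
    return y

stable = simulate(0.8, 100)    # y(n) = x(n) + 0.8 y(n-1)
unstable = simulate(2.0, 100)  # y(n) = x(n) + 2 y(n-1)

assert max(abs(v) for v in stable) < 5.0   # converges toward 1/(1-0.8) = 5
assert abs(unstable[-1]) > 1e6             # diverges as n grows
```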


2. LINEAR TIME-INVARIANT SYSTEMS

An important class of systems is Linear Time-Invariant (LTI) systems.
This class of systems satisfies the linearity and time-invariance
properties. Many physical processes can be modelled by LTI systems, and they
are easy to analyze. Recall that linearity implies that an infinite combination
of signals

x(t) = Σ_{k=1}^{∞} a_k x_k(t) = a_1 x_1(t) + a_2 x_2(t) + ...

generates the output

y(t) = a_1 y_1(t) + a_2 y_2(t) + ... = Σ_{k=1}^{∞} a_k y_k(t)

where T[x_k(t)] = y_k(t).


Before we go into detail about LTI systems, we present the idea of signal
representation in terms of shifted impulses. Any single point of a signal can
be represented by a shifted and weighted impulse function. That is, if we
multiply a signal x(t) with a unit-impulse function δ(t), we get

x(t)δ(t) = x(0)δ(t)

which is a weighted impulse with x(0) at t = 0. As shown in Fig. 28, the
signal component at t = 0 can be represented by an impulse at t = 0 (δ(t))
with an amplitude equal to the value of the signal at t = 0 (x(0)).
Now, considering every single point of the signal and combining them,
we can represent the whole signal by a combination of shifted and properly
weighted impulse functions, i.e.,

x(t) = ∫_{-∞}^{∞} x(τ) δ(t - τ) dτ        (7)

Figure 28: Representation of a continuous-time signal by impulses: x(t)δ(t) = x(0)δ(t).


This is called the shifting property of impulse functions. An analogous
representation can be obtained for a discrete-time signal x(n), as given in
Fig. 29. For example, the value of x(n) at n = 1, or x(1), can be represented
by a multiplication of the signal with a unit impulse shifted to n = 1:

x(n)δ(n - 1) = x(1)δ(n - 1).

Repeating this for all values of n, we get

x(n) = ... + x(-1)δ(n + 1) + x(0)δ(n) + x(1)δ(n - 1) + ...
     = Σ_{k=-∞}^{∞} x(k)δ(n - k)        (8)

Figure 29: Representation of a discrete-time signal by impulses: x(n)δ(n - 1) = x(1)δ(n - 1).


Example 1: Consider the four-sample discrete-time signal

x(n) = (1/2)δ(n + 1) + δ(n) - (1/2)δ(n - 1) + (1/4)δ(n - 2)

given in Fig. 30. This signal can be represented as a sum of four shifted
impulses that are scaled by the amplitudes of the samples, i.e.,

x(n) = x(-1)δ(n + 1) + x(0)δ(n) + x(1)δ(n - 1) + x(2)δ(n - 2)

We show all four of these components of the signal in Fig. 30.
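The impulse representation of Example 1 can be verified in code, building x(n) as a sum of shifted, weighted unit impulses. A sketch (the dict of samples mirrors the four values above):

```python
def delta(n):
    """Discrete-time unit impulse."""
    return 1.0 if n == 0 else 0.0

samples = {-1: 0.5, 0: 1.0, 1: -0.5, 2: 0.25}   # x(-1), x(0), x(1), x(2)

def x(n):
    """x(n) = sum_k x(k) delta(n - k) over the four nonzero samples."""
    return sum(xk * delta(n - k) for k, xk in samples.items())

assert x(-1) == 0.5 and x(0) == 1.0 and x(1) == -0.5 and x(2) == 0.25
assert x(5) == 0.0 and x(-3) == 0.0   # zero outside the four samples
```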

2.1. CONTINUOUS-TIME LTI SYSTEMS


In this section we discuss how the output of an LTI system is related to its
input. In the previous chapter, we expressed systems using a transformation
from the input to the output. In the time domain, the input/output relation of continuous-time systems is usually represented by differential equations.
Another relation we consider in this section is called the Convolution
Integral. In the following, we develop this very useful tool for the analysis
of continuous-time LTI systems.
According to the shifting property of impulses, any signal can be represented as a weighted combination of shifted impulses:

x(t) = ∫_{-∞}^{∞} x(τ) δ(t - τ) dτ        (9)

Let h(t) be the response of the system given in Fig. 31 to the input δ(t)
(unit-impulse function), i.e.,

T[δ(t)] = h(t)

Since the system is known to be time-invariant, the response of the system
to a shifted impulse should be

T[δ(t - τ)] = h(t - τ)        (Time-Invariance)

By the homogeneity condition of linearity, the response of this system to a
shifted impulse weighted by a scalar, x(τ)δ(t - τ), is

T[x(τ)δ(t - τ)] = x(τ) T[δ(t - τ)] = x(τ) h(t - τ)        (Homogeneity)
Figure 30: A discrete-time signal and its impulse representation.

Figure 31: Impulse response of a continuous-time system: δ(t) → h(t) and δ(t - τ) → h(t - τ).


Finally, by the additivity property of linear systems, the output of the
system to the combination of the x(τ)δ(t - τ) (that is, to x(t)) should be the superposition of the outputs to the individual inputs x(τ)δ(t - τ):

∫_{-∞}^{∞} x(τ) δ(t - τ) dτ = x(t)        (10)

∫_{-∞}^{∞} x(τ) h(t - τ) dτ = y(t)        (11)   (Additivity)

where y(t) is the output of this system to the input signal x(t). This relation
is called the Convolution Integral, and it is denoted by *:

y(t) = x(t) * h(t) = ∫_{-∞}^{∞} x(τ) h(t - τ) dτ        (12)

The impulse response h(t) is the signature of an LTI system. As such,
an LTI system is fully characterized by its impulse response. For any given
input signal, we can analytically calculate the corresponding output. Moreover, all properties of the system are encoded in its impulse
response.
Properties of the convolution integral:
1. Commutative: x(t) * h(t) = h(t) * x(t). The convolution operation is
commutative, therefore we have

y(t) = x(t) * h(t) = ∫_{-∞}^{∞} x(τ) h(t - τ) dτ
     = h(t) * x(t) = ∫_{-∞}^{∞} x(t - τ) h(τ) dτ

which can be proven by a simple change of variables.

2. Associative:

x(t) * [h_1(t) * h_2(t)] = [x(t) * h_1(t)] * h_2(t)

This property applies to the cascade connection of two LTI systems.

3. Distributive:

x(t) * [h_1(t) + h_2(t)] = [x(t) * h_1(t)] + [x(t) * h_2(t)]

which applies to the parallel connection of LTI systems.

Example 2:
The input signal to an LTI system is x(t) = e^{-at} u(t), a > 0, and the impulse
response of the system is h(t) = u(t), as shown in Fig. 32. Let us calculate
the response of the system to x(t) by using convolution:

y(t) = ∫_{-∞}^{∞} x(τ) h(t - τ) dτ

Figure 32: The input signal and impulse response of an LTI system.

We will solve this problem step by step, so we need x(τ), h(τ), h(-τ),
and h(t - τ), and then the product x(τ)h(t - τ) for every t and τ. Now, let us draw these
signals:

1. x(τ) = e^{-aτ} u(τ)
2. h(τ) = u(τ), h(-τ) = u(-τ)
3. h(t - τ) = u(t - τ)
4. The product is

x(τ)h(t - τ) = 0 for t < 0;   x(τ)h(t - τ) = e^{-aτ} u(t - τ) for t ≥ 0

5. Then y(t) = 0 for t < 0. For t ≥ 0, we only need to integrate the product
obtained in (4):

y(t) = ∫_{-∞}^{∞} x(τ)h(t - τ) dτ        (13)
     = ∫_{0}^{∞} e^{-aτ} u(t - τ) dτ        (14)
     = ∫_{0}^{t} e^{-aτ} dτ        (15)
     = -(1/a) e^{-aτ} |_{τ=0}^{t}        (16)
     = (1 - e^{-at}) / a        (17)

Therefore, the total response of the system to x(t) is given by

y(t) = [(1 - e^{-at}) / a] u(t),

and shown in Fig. 34.
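Example 2 can be spot-checked numerically by approximating the convolution integral with a Riemann sum. A sketch; the value of a and the step size dt are choices for this check only:

```python
import math

a, dt = 2.0, 1e-4   # decay rate and integration step (example values)

def y_numeric(t):
    """Left Riemann-sum approximation of the convolution integral:
    integral of e^{-a*tau} over 0 <= tau <= t (x(tau)h(t-tau) is nonzero
    only on that interval for t >= 0)."""
    steps = int(t / dt)
    return sum(math.exp(-a * (k * dt)) * dt for k in range(steps))

def y_closed(t):
    """Closed form from the derivation: y(t) = (1 - e^{-at})/a for t >= 0."""
    return (1.0 - math.exp(-a * t)) / a if t >= 0 else 0.0

for t in (0.5, 1.0, 2.0):
    assert abs(y_numeric(t) - y_closed(t)) < 1e-3
```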

2.2. DISCRETE-TIME LTI SYSTEMS


In this section we repeat the above derivations for discrete-time systems
to get their input/output relation. Discrete-time LTI systems are in general
represented in the time domain by constant-coefficient difference equations.
We present here another relation between the input and the output, which
is called the Convolution Sum, for the analysis of discrete-time LTI
systems.

By the shifting property of the discrete-time unit impulse, any signal can be
represented as a weighted combination of shifted impulses:

x(n) = Σ_{k=-∞}^{∞} x(k)δ(n - k)

The output can be calculated as a linear combination of the responses to the
input components x(k)δ(n - k), which are shifted and weighted impulses. Let
h(n) denote the response to δ(n); then the response to δ(n - k) is h(n - k),
due to the time-invariance property of the system:

T[δ(n - k)] = h(n - k)        (Time-Invariance)

Also, the response of this system to a shifted and weighted impulse, x(k)δ(n - k), is

T[x(k)δ(n - k)] = x(k) T[δ(n - k)] = x(k) h(n - k)        (Homogeneity)

Then the output of the system to the sum of the x(k)δ(n - k) over all k (that is, to x(n))
should be the sum of the outputs to the individual inputs x(k)δ(n - k), by the
additivity property of linear systems:

Σ_{k=-∞}^{∞} x(k) δ(n - k) = x(n)        (18)

Σ_{k=-∞}^{∞} x(k) h(n - k) = y(n)        (19)   (Additivity)

where y(n) is the response of this system to the input signal x(n). This
relation is called the Convolution Sum for discrete-time systems, and it
is again denoted by *:

y(n) = x(n) * h(n) = Σ_{k=-∞}^{∞} x(k) h(n - k)        (20)

h(n) is called the impulse response of the discrete-time LTI system. As
before, a discrete-time LTI system is fully characterized by its impulse response.
Properties of the convolution sum:

The discrete-time convolution has the following properties:

1. Commutative: x(n) * h(n) = h(n) * x(n), i.e., we can interchange the
roles of the input signal and the impulse response:

y(n) = x(n) * h(n) = Σ_{k=-∞}^{∞} x(k) h(n - k)
     = h(n) * x(n) = Σ_{k=-∞}^{∞} x(n - k) h(k)

2. Associative:

x(n) * [h_1(n) * h_2(n)] = [x(n) * h_1(n)] * h_2(n)

This property is illustrated by the serial (cascade) connection of two
LTI systems shown in Fig. 35.

3. Distributive:

x(n) * [h_1(n) + h_2(n)] = [x(n) * h_1(n)] + [x(n) * h_2(n)]

The distributive property can be applied to the parallel connection of
two LTI systems shown in Fig. 36.

Example 3:
We are given a discrete-time system with input

x(n) = α^n u(n),   0 < α < 1

and the impulse response

h(n) = u(n)

given in Fig. 37.
To solve this problem step by step, we need x(k), h(k), h(-k), h(n - k),
and the product x(k)h(n - k). Now, let us give and draw all these signals:
1. x(k) = α^k u(k)
2. h(k) = u(k), h(-k) = u(-k)

3. h(n - k) = u(n - k)
4. The product is

x(k)h(n - k) = 0 for n < 0;   x(k)h(n - k) = α^k u(n - k) for n ≥ 0

5. Then we see that y(n) = 0 for n < 0, and for n ≥ 0 we need to perform
the sum of x(k)h(n - k) over k:

y(n) = Σ_{k=-∞}^{∞} x(k)h(n - k)        (21)
     = Σ_{k=0}^{∞} α^k u(n - k)        (22)
     = Σ_{k=0}^{n} α^k        (23)
     = (1 - α^{n+1}) / (1 - α),   n ≥ 0        (24)

Therefore, the total output of this system to x(n) is given by

y(n) = [(1 - α^{n+1}) / (1 - α)] u(n)

and shown in Fig. 39.
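The closed form in Example 3 can be checked against the convolution sum computed directly. A sketch; the value α = 0.5 is an arbitrary choice in the allowed range 0 < α < 1:

```python
alpha = 0.5   # example value, 0 < alpha < 1

def y_direct(n):
    """Convolution sum of x(k) = alpha^k u(k) with h(n) = u(n):
    y(n) = sum_{k=0}^{n} alpha^k for n >= 0, and 0 for n < 0."""
    if n < 0:
        return 0.0
    return sum(alpha ** k for k in range(n + 1))

def y_closed(n):
    """Closed form (24): (1 - alpha^(n+1)) / (1 - alpha) for n >= 0."""
    return (1 - alpha ** (n + 1)) / (1 - alpha) if n >= 0 else 0.0

for n in range(-3, 20):
    assert abs(y_direct(n) - y_closed(n)) < 1e-12
```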


Remark: Infinite and finite geometric series expansion identities for any
0 < || < 1 are given by

X
k=0
N
1
X

k =

k =

k=0

1
1

1 N
1

2.3. PROPERTIES OF LTI SYSTEMS

An LTI system is fully characterized by its impulse response through
the convolution integral or sum. Hence, the properties of the system can be
investigated through its impulse response:

y(t) = ∫_{-∞}^{∞} x(τ)h(t - τ) dτ = ∫_{-∞}^{∞} x(t - τ)h(τ) dτ = x(t) * h(t)

and

y(n) = Σ_{k=-∞}^{∞} x(k)h(n - k) = Σ_{k=-∞}^{∞} h(k)x(n - k) = x(n) * h(n)

1. LTI Systems with/without memory:

A system is memoryless if the output at any time depends only on the
input at that same time. For these systems the impulse response must be

h(t) = 0,   t ≠ 0.

For instance, the scaling system y(t) = Kx(t) has impulse response h(t) = Kδ(t)
(and for K = 1 it is called an identity system), which is a system without
memory. The same is true for the discrete-time system y(n) = Kx(n), with h(n) = Kδ(n).
If a system has an impulse response h(t) or h(n) that is not zero for t ≠ 0 (or n ≠ 0),
then that system has memory. For example,

y(n) = x(n - 1) + x(n + 1)

is a system with memory.
Recall that a memoryless system with K = 1 is the identity system:

y(t) = x(t) * δ(t) = ∫_{-∞}^{∞} x(τ)δ(t - τ) dτ = x(t)

y(n) = x(n) * δ(n) = Σ_{k=-∞}^{∞} x(k)δ(n - k) = x(n)

The above equations are true because of the shifting property of the impulse
function.
2. Invertibility of LTI systems:
An LTI system is invertible if, given the impulse response h(t) of the system,
we can obtain an inverse system h'(t) such that

h(t) * h'(t) = \delta(t)

which means that the cascade of h(t) and h'(t) is equivalent to an identity
system, as given in Fig. 40:

x(t) * h(t) = y(t)
y(t) * h'(t) = x(t) * h(t) * h'(t) = x(t)

so we need h(t) * h'(t) = \delta(t).

Example 4:
The delay system is defined as y(t) = x(t - t_0), with h(t) = \delta(t - t_0).
If t_0 > 0, the input signal is shifted to the right: delay.
If t_0 < 0, the input signal is shifted to the left: advance.

x(t - t_0) = x(t) * \delta(t - t_0)

Remark: The convolution of a signal with an impulse is the same signal.
The convolution of a signal with a shifted impulse shifts the signal to the
position of the impulse.
The inverse of the above system is

h'(t) = \delta(t + t_0)
y(t) * h'(t) = x(t - t_0) * \delta(t + t_0) = x(t)
h(t) * h'(t) = \delta(t - t_0) * \delta(t + t_0) = \delta(t)

Then the inverse of the delay system with h(t) = \delta(t - t_0) is the advance
system with h'(t) = \delta(t + t_0), and

x(t) = y(t) * \delta(t + t_0) = y(t + t_0)

Example 5:
The system with impulse response h(n) = u(n) is called a summer or
accumulator:

y(n) = \sum_{k=-\infty}^{\infty} x(k) u(n-k) = \sum_{k=-\infty}^{n} x(k)

This system is invertible, with

h'(n) = \delta(n) - \delta(n-1)
x(n) = y(n) - y(n-1)
h(n) * h'(n) = u(n) * [\delta(n) - \delta(n-1)]
             = u(n) - u(n-1) = \delta(n)

3. Causality of LTI systems:

In general, the output of a causal system depends only on the present
and past values of the input. For LTI systems in particular, causality
requires that

h(n) = 0,   n < 0

so that

y(n) = \sum_{k=-\infty}^{n} x(k) h(n-k) = \sum_{k=0}^{\infty} h(k) x(n-k)

For example, h(n) = u(n) and its inverse h'(n) = \delta(n) - \delta(n-1) are
causal. However, h(t) = \delta(t - t_0) for t_0 < 0 is a non-causal system
(an advance).

4. Stability of LTI systems:

A system is stable if every bounded input produces a bounded output.
For LTI systems, suppose

|x(n)| < B   for all n.

The output is bounded as

|y(n)| = \left| \sum_{k=-\infty}^{\infty} x(n-k) h(k) \right|
|y(n)| \le \sum_{k=-\infty}^{\infty} |x(n-k)| |h(k)|
|y(n)| \le B \sum_{k=-\infty}^{\infty} |h(k)|   for all n

which is to say that if the impulse response is absolutely summable,

\sum_{k=-\infty}^{\infty} |h(k)| < \infty

then y(n) is bounded and the system is stable. The condition for continuous-
time LTI systems is

\int_{-\infty}^{\infty} |h(t)| dt < \infty

Delay systems h(n) = \delta(n - n_0) or h(t) = \delta(t - t_0) are stable:

\sum_{n=-\infty}^{\infty} |h(n)| = \sum_{n=-\infty}^{\infty} |\delta(n - n_0)| = 1 < \infty

\int_{-\infty}^{\infty} |h(t)| dt = \int_{-\infty}^{\infty} |\delta(t - t_0)| dt = 1 < \infty

However, accumulator or integrator systems (h(n) = u(n), h(t) = u(t)) are
unstable:

\sum_{n=-\infty}^{\infty} |u(n)| = \infty,
\qquad
\int_{-\infty}^{\infty} |u(t)| dt = \infty

For the integrator system the output is

y(t) = \int_{-\infty}^{\infty} x(\tau) u(t-\tau) d\tau = \int_{-\infty}^{t} x(\tau) d\tau
5. Unit Step Response of LTI systems:

The response of an LTI system to a unit-step function is called the Unit
Step Response and is denoted by s(t) or s(n). For x(t) = u(t),

s(t) = \int_{-\infty}^{\infty} x(t-\tau) h(\tau) d\tau
     = \int_{-\infty}^{\infty} u(t-\tau) h(\tau) d\tau
     = \int_{-\infty}^{t} h(\tau) d\tau

which is the area under h(t) up to time t. Hence the inverse relation is

h(t) = \frac{d}{dt} s(t) = s'(t)

that is, the impulse response is the first derivative of the step response.
Similarly, for discrete-time LTI systems,

x(n) = u(n)                                                   (25)
s(n) = u(n) * h(n)                                            (26)
     = \sum_{k=-\infty}^{\infty} u(n-k) h(k)                  (27)
     = \sum_{k=-\infty}^{n} h(k)                              (28)

which means that the step response is the running sum of the values of h(k)
up to time n. Then the inverse is a first-order difference of the step
response:

h(n) = s(n) - s(n-1).
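The relation h(n) = s(n) - s(n-1) can be checked numerically; the geometric impulse response used below is an illustrative choice, not from the text.

```python
import numpy as np

# Check h(n) = s(n) - s(n-1) for an illustrative impulse response
# h(n) = a^n u(n) with a = 0.5.
a = 0.5
N = 50
n = np.arange(N)
h = a ** n                # impulse response
s = np.cumsum(h)          # step response: running sum of h(k), k <= n

# First difference s(n) - s(n-1), with s(-1) = 0.
h_rec = np.diff(np.concatenate(([0.0], s)))

print(np.allclose(h_rec, h))  # True
```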

2.4. REPRESENTATION OF LTI SYSTEMS BY DIFFERENTIAL EQUATIONS

a) Continuous-Time LTI systems:
Linear, constant-coefficient differential equations are usually employed
to represent or model continuous-time LTI systems. For example, the input
and output of a continuous-time system may be related by the simple
differential equation

\frac{d}{dt} y(t) + 2 y(t) = x(t)

To find the output as a function of the input, we need to solve this
implicit equation and obtain the explicit representation

y(t) = y_p(t) + y_h(t)

Here y_h(t) is the homogeneous solution and is determined by the auxiliary
conditions; it does not depend on the input. For example, y(0) = y_0. For
such a system to be linear, the output must be zero when the input is zero;
therefore y_h(t) must be zero, which requires that the auxiliary condition be

y(0) = y_0 = 0.

If y_0 \ne 0 the system is not linear. Generally, a system can be represented
by a linear part (that has zero auxiliary conditions and y_h(t) = 0) plus the
response to nonzero auxiliary conditions, as illustrated by Fig. 41. y_p(t)
is called the particular solution, due to the input.
Causality of LTI systems described by differential equations depends on
the auxiliary conditions as well. The initial rest condition specifies that
if the input x(t) = 0 for t \le t_0, then y(t) is also zero for t \le t_0.
However, choosing a fixed point for the auxiliary condition, such as
y(0) = 0, leads to non-causal systems.
For a causal system, initial rest is chosen so that y(t) = 0 when x(t) = 0,
but initial rest does not specify the auxiliary condition at a fixed point in
time. As a summary, if we make the initial rest assumption, and if x(t) = 0
for t \le t_0, we need to solve for y(t) for t > t_0 using the condition
y(t_0) = 0, which is called the initial condition. Initial rest also implies
time-invariance:

x(t) \to y(t), y(t_0) = 0  \implies  x(t-T) \to y(t-T), y(t_0 + T) = 0
(initial condition).

A general N-th order linear constant-coefficient differential equation is
given by

\sum_{k=0}^{N} a_k \frac{d^k}{dt^k} y(t) = \sum_{k=0}^{M} b_k \frac{d^k}{dt^k} x(t)    (29)

If N = 0, explicitly,

y(t) = \frac{1}{a_0} \sum_{k=0}^{M} b_k \frac{d^k}{dt^k} x(t)

b) Discrete-Time LTI systems: Linear constant-coefficient difference
equations are used to represent discrete-time LTI systems:

\sum_{k=0}^{N} a_k y(n-k) = \sum_{k=0}^{M} b_k x(n-k)            (30)

The system is incrementally linear, and it is linear if the auxiliary
conditions are zero:

y(n) = \frac{1}{a_0} \left\{ \sum_{k=0}^{M} b_k x(n-k) - \sum_{k=1}^{N} a_k y(n-k) \right\}

with auxiliary conditions y(-N), y(-N+1), ..., y(-1).

The above is a recursive system, using feedback from the output to the
input. Such systems have impulse responses with infinite length (or time
support), and they are called Infinite Impulse Response (IIR) systems.
However, for N = 0,

y(n) = \frac{1}{a_0} \sum_{k=0}^{M} b_k x(n-k)

is a non-recursive system without feedback. The output depends only on
input values, not on previous output values. This type of system has a
finite-length impulse response and is called a Finite Impulse Response (FIR)
system. The impulse response samples of an FIR system are basically the
coefficients of the difference equation, i.e.,

h(n) = \begin{cases} b_n / a_0, & 0 \le n \le M \\ 0, & \text{otherwise} \end{cases}

For example, h'(n) = \delta(n) - \delta(n-1) has finitely many nonzero points
(length 2) and it is an FIR system. However, h(n) = (1/2)^n u(n) is the
impulse response (of infinite length) of an IIR system.
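The IIR case can be seen by running a difference equation directly; the recursive system below, y(n) = x(n) + 0.5 y(n-1), is an illustrative choice whose impulse response is the infinite-length sequence (1/2)^n u(n).

```python
import numpy as np

# Impulse response of the recursive (IIR) system y(n) = x(n) + 0.5*y(n-1),
# computed by running the difference equation with x(n) = delta(n).
N = 30
x = np.zeros(N)
x[0] = 1.0                    # delta(n)
y = np.zeros(N)
for n in range(N):
    y[n] = x[n] + (0.5 * y[n - 1] if n > 0 else 0.0)

# Every sample matches (1/2)^n: the response never dies out exactly.
print(np.allclose(y, 0.5 ** np.arange(N)))  # True
```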

2.5. REPRESENTATION OF LTI SYSTEMS BY BLOCK DIAGRAMS

The basic operations needed in LTI systems are

Addition of two signals
Multiplication by a constant
Delay

which are shown by the block symbols given in Fig. 42.
Example 6: Given a second order discrete-time system y(n) = b_0 x(n) +
b_1 x(n-1) - a_1 y(n-1), we obtain its block diagram as follows (see Fig.
43): with w(n) = b_0 x(n) + b_1 x(n-1) and y(n) = w(n) - a_1 y(n-1), this is
called the Direct I implementation of the system (top figure). We can change
the order of the subsystems and obtain

x(n) - a_1 s(n-1) = s(n)
b_0 s(n) + b_1 s(n-1) = y(n)

which is called the Direct II implementation of the same system (see bottom
figure). In general, for

y(n) = \frac{1}{a_0} \left\{ \sum_{k=0}^{N} b_k x(n-k) - \sum_{k=1}^{N} a_k y(n-k) \right\}

we define

w(n) = \sum_{k=0}^{N} b_k x(n-k)

and

y(n) = \frac{1}{a_0} \left\{ w(n) + \sum_{k=1}^{N} (-a_k) y(n-k) \right\}

for the Direct I implementation, and

s(n) = \frac{1}{a_0} \left\{ -\sum_{k=1}^{N} a_k s(n-k) + x(n) \right\}

and

y(n) = \sum_{k=0}^{N} b_k s(n-k)

are used for the Direct II implementation.
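A minimal sketch of the two realizations for the system of Example 6 (assuming a_0 = 1; the coefficient values are illustrative), confirming that Direct I and Direct II produce the same output:

```python
# Direct I: the difference equation y(n) = b0*x(n) + b1*x(n-1) - a1*y(n-1).
def direct1(x, b0, b1, a1):
    y, x_prev, y_prev = [], 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * x_prev - a1 * y_prev
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

# Direct II: one shared delay line s(n) = x(n) - a1*s(n-1),
# then y(n) = b0*s(n) + b1*s(n-1).
def direct2(x, b0, b1, a1):
    y, s_prev = [], 0.0
    for xn in x:
        s = xn - a1 * s_prev
        y.append(b0 * s + b1 * s_prev)
        s_prev = s
    return y

x = [1.0, 0.0, 0.0, 0.0, 0.0]          # impulse input
y1 = direct1(x, 0.5, 0.25, -0.9)
y2 = direct2(x, 0.5, 0.25, -0.9)
print(all(abs(u - v) < 1e-12 for u, v in zip(y1, y2)))  # True
```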



3. CONTINUOUS TIME FOURIER ANALYSIS

Representation of signals in terms of shifted and weighted impulses yields
the impulse response representation of LTI systems. Another important
representation of LTI systems is by the continuous and discrete-time Fourier
transforms and series. With this very useful representation, a signal is
expressed as a combination of complex exponentials.

3.1. RESPONSE OF LTI SYSTEMS TO COMPLEX SINUSOIDS

Consider a continuous-time LTI system with impulse response h(t), and
assume a complex exponential x(t) = e^{st} is applied to this system. Then
the output is

y(t) = \int_{-\infty}^{\infty} h(\tau) e^{s(t-\tau)} d\tau
     = e^{st} \int_{-\infty}^{\infty} h(\tau) e^{-s\tau} d\tau
     = e^{st} H(s)

We conclude that the complex exponential e^{st} is an eigenfunction of LTI
systems, where the transfer function H(s) (the Laplace transform of h(t)) is
the corresponding eigenvalue. Hence

x(t) = \sum_k a_k e^{s_k t}  \implies  y(t) = \sum_k a_k H(s_k) e^{s_k t}

Since complex exponentials are eigenfunctions of LTI systems, representing
input signals in terms of weighted exponentials makes it easy to obtain the
output and analyze LTI systems.

3.2. CONTINUOUS-TIME FOURIER SERIES

Given a continuous-time periodic signal with period T_0,

x(t) = x(t + T_0)   for all t,    \omega_0 = \frac{2\pi}{T_0},

our aim is to find a series representation for this signal in terms of
complex sinusoids:

\phi_k(t) = e^{jk\omega_0 t},    k = 0, \pm 1, \pm 2, ...

Let us use these harmonically related complex exponentials \{\phi_k(t)\} to
represent the periodic signal x(t). The \phi_k(t) are called the basis
functions, and they all have fundamental frequency k\omega_0, an integer
multiple of \omega_0 (the fundamental frequency of the signal). Thus they are
periodic with period T_0/k, which means they are also periodic with T_0. The
Fourier basis \{\phi_k(t) = e^{jk\omega_0 t}\} forms an orthogonal basis for
the space of square-integrable functions L^2(R). Therefore, any signal with
finite energy over a single period can be expressed by the following Fourier
Series expansion:

x(t) = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}            (31)

For k = 0 we have c_0, which represents the average value or DC term of
the signal. For k = \pm 1 we have c_1 and c_{-1}, the terms with the
fundamental frequency of the original signal, called the first harmonics;
c_1 e^{j\omega_0 t} has period T_0. For |k| \ge 2 we have the other
harmonics; the k = \pm N terms are referred to as the N-th harmonic
components.
The representation of a periodic signal in this form is referred to as the
Fourier series representation. If x(t) is real, x^*(t) = x(t), then

x^*(t) = x(t) = \sum_{k=-\infty}^{\infty} c_k^* e^{-jk\omega_0 t}       (32)
              = \sum_{k=-\infty}^{\infty} c_{-k}^* e^{jk\omega_0 t}     (33)

where in (33) we replaced k with -k. Two representations of the same signal
must be identical:

c_k = c_{-k}^*    or    c_{-k} = c_k^*                                  (34)

proving that the Fourier series coefficients c_k are conjugate symmetric.
In order to determine the Fourier Series coefficients for a given periodic
signal x(t), let us follow this derivation. The Fourier Series (FS)
representation of the signal is

x(t) = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}

Multiply both sides by e^{-jn\omega_0 t} and integrate over a period:

\int_0^{T_0} x(t) e^{-jn\omega_0 t} dt
  = \int_0^{T_0} \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t} e^{-jn\omega_0 t} dt
  = \sum_{k=-\infty}^{\infty} c_k \int_0^{T_0} e^{j(k-n)\omega_0 t} dt      (35)

Here, we use the orthogonality of the basis functions, i.e.,

\int_0^{T_0} e^{j(k-n)\omega_0 t} dt
  = \int_0^{T_0} \cos[(k-n)\omega_0 t] dt + j \int_0^{T_0} \sin[(k-n)\omega_0 t] dt
  = \begin{cases} T_0, & k = n \\ 0, & k \ne n \end{cases}
  = T_0 \delta(n-k)

This tells us that \phi_k(t) and \phi_n(t) are orthogonal. Therefore, using
this in equation (35), we get

\int_0^{T_0} x(t) e^{-jn\omega_0 t} dt = T_0 c_n

which gives us the FS expansion coefficients:

c_k = \frac{1}{T_0} \int_{T_0} x(t) e^{-jk\omega_0 t} dt,
\qquad
x(t) = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}

The integration on t above is over a single period, so it could be from 0 to
T_0 as well as over any other time interval of duration T_0. The \{c_k\} are
called Fourier Series coefficients or spectral coefficients, with c_0 the DC
component:

c_0 = \frac{1}{T_0} \int_{T_0} x(t) dt

that is, the average value of x(t). In general, |c_k|^2 is plotted versus k
to give the spectral information of the signal. The value of |c_1|^2 is the
energy of the fundamental harmonic at frequency \omega_0, the value of
|c_2|^2 is the energy of the second harmonic at frequency 2\omega_0, etc.
Now, let's explore the relation between the above complex FS expansion
and the real or trigonometric FS representation. Recall that c_{-k} = c_k^*,
and

x(t) = c_0 + \sum_{k=1}^{\infty} \left[ c_k e^{jk\omega_0 t} + c_{-k} e^{-jk\omega_0 t} \right]
     = c_0 + \sum_{k=1}^{\infty} \left[ c_k e^{jk\omega_0 t} + c_k^* e^{-jk\omega_0 t} \right]
     = c_0 + \sum_{k=1}^{\infty} 2 \mathrm{Re}\{ c_k e^{jk\omega_0 t} \}

Let c_k = a_k + j b_k; then

x(t) = c_0 + 2 \sum_{k=1}^{\infty} \mathrm{Re}\{ (a_k + j b_k) [\cos(k\omega_0 t) + j \sin(k\omega_0 t)] \}
     = c_0 + 2 \sum_{k=1}^{\infty} [ a_k \cos(k\omega_0 t) - b_k \sin(k\omega_0 t) ]

This is called the real or trigonometric Fourier series representation.

Remark: LTI System Interpretation.

Suppose x(t) is periodic with period T_0 sec and has the FS
representation

x(t) = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}

where \omega_0 = 2\pi/T_0 rad/sec, and it is applied to an LTI system with
impulse response h(t), as shown in Fig. 44. The output of the system is

y(t) = \int_{-\infty}^{\infty} h(\tau) x(t-\tau) d\tau
     = \int_{-\infty}^{\infty} h(\tau) \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 (t-\tau)} d\tau
     = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t} \int_{-\infty}^{\infty} h(\tau) e^{-jk\omega_0 \tau} d\tau
     = \sum_{k=-\infty}^{\infty} c_k H(k\omega_0) e^{jk\omega_0 t}
     = \sum_{k=-\infty}^{\infty} d_k e^{jk\omega_0 t}

which is another FS representation, with FS coefficients
\{d_k = c_k H(k\omega_0)\}. Here we define the integral

H(\omega) = \int_{-\infty}^{\infty} h(\tau) e^{-j\omega\tau} d\tau

as the transfer function or frequency response of the above LTI system;
H(k\omega_0) is the response of the system to a sinusoid of frequency
k\omega_0.
Example 1: Obtain the FS representation of \sin \omega_0 t. Without going
into the calculation, we can use the following Euler identity,

\sin \omega_0 t = \frac{e^{j\omega_0 t} - e^{-j\omega_0 t}}{2j},
\qquad
x(t) = c_1 e^{j\omega_0 t} + c_{-1} e^{-j\omega_0 t}

From this we see the FS coefficients:

c_1 = \frac{1}{2j},   c_{-1} = -\frac{1}{2j},   c_k = 0 for k \ne \pm 1

Example 2:
Consider the periodic signal shown in Fig. 45, with fundamental period
T_0 sec and fundamental angular frequency \omega_0 = 2\pi/T_0 rad/sec:

x(t) = \begin{cases} 1, & |t| < T_1 \\ 0, & T_1 < |t| < T_0/2 \end{cases}

The DC coefficient is

c_0 = \frac{1}{T_0} \int_{T_0} x(t) dt = \frac{1}{T_0} \int_{-T_1}^{T_1} 1 \, dt = \frac{2 T_1}{T_0}

and for k \ne 0 we get the FS coefficients

c_k = \frac{1}{T_0} \int_{-T_1}^{T_1} e^{-jk\omega_0 t} dt
    = \frac{-1}{jk\omega_0 T_0} \left[ e^{-jk\omega_0 t} \right]_{-T_1}^{T_1}
    = \frac{2}{k\omega_0 T_0} \cdot \frac{e^{jk\omega_0 T_1} - e^{-jk\omega_0 T_1}}{2j}
    = \frac{2}{k\omega_0 T_0} \sin(k\omega_0 T_1) = \frac{\sin(k\omega_0 T_1)}{k\pi}

and show them in Fig. 46.
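The closed form for the pulse-train coefficients can be checked by numerical integration; the values of T_0 and T_1 below are illustrative choices.

```python
import numpy as np

# Numerical check of Example 2: the FS coefficients of the periodic pulse
# train should be c_0 = 2*T1/T0 and c_k = sin(k*w0*T1)/(k*pi) for k != 0.
T0, T1 = 4.0, 1.0                 # illustrative period and pulse half-width
w0 = 2 * np.pi / T0
t = np.linspace(-T0 / 2, T0 / 2, 200001)
dt = t[1] - t[0]
x = (np.abs(t) < T1).astype(float)   # one period of the pulse train

for k in range(4):
    # c_k = (1/T0) * integral of x(t) exp(-j k w0 t) over one period
    ck = np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T0
    ref = 2 * T1 / T0 if k == 0 else np.sin(k * w0 * T1) / (k * np.pi)
    assert abs(ck - ref) < 1e-4
print("FS coefficients match the closed form")
```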


Any periodic signal can be approximated by a finite number of Fourier
Series coefficients:

x_N(t) = \sum_{k=-N}^{N} c_k e^{jk\omega_0 t},
\qquad
c_k = \frac{1}{T_0} \int_{T_0} x(t) e^{-jk\omega_0 t} dt

As N \to \infty, the approximation x_N(t) \to x(t).

Remark: Dirichlet Conditions
If the signal x(t) is square integrable over one period,

\int_{T_0} |x(t)|^2 dt < \infty

(it has finite energy over one period), then the coefficients \{c_k\} are
finite and x(t) has a Fourier Series representation. More generally, for a
periodic signal x(t) to have a Fourier Series representation, it must
satisfy the Dirichlet conditions:

1. Over one period, x(t) must be absolutely integrable:

\int_{T_0} |x(t)| dt < \infty  \implies  |c_k| < \infty

2. During a single period, there are no more than a finite number of
maxima and minima.

3. In any finite interval of time, there are only a finite number of
discontinuities, each of which is finite.

The periodic signal in Fig. 47 has discontinuities, but only 2 in one
period, so it satisfies the Dirichlet conditions. It has a Fourier Series
representation and can be approximated by a truncated Fourier Series
expansion. However, it is very hard to approximate the edges of the signal,
and the error or imperfections at the edges of the square wave are named
after Gibbs.

3.3. CONTINUOUS-TIME FOURIER TRANSFORM FOR APERIODIC SIGNALS

The Fourier Series expansion discussed in the previous section is very
useful for the spectral analysis of continuous-time periodic signals. We
also need a similar tool for aperiodic signals, which is the Fourier
Transform (FT).
Given an aperiodic signal x(t), we can extend it to obtain a periodic
version \tilde{x}(t), as shown in Fig. 48, which can be expressed by a FS
representation:

\tilde{x}(t) = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t},
\qquad
c_k = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} \tilde{x}(t) e^{-jk\omega_0 t} dt

Over the central period \tilde{x}(t) = x(t), so

c_k = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x(t) e^{-jk\omega_0 t} dt

If we define

X(\omega) = T_0 c_k = \int_{-\infty}^{\infty} x(t) e^{-j\omega t} dt,
\qquad \omega = k\omega_0,

then we can write

\tilde{x}(t) = \sum_{k=-\infty}^{\infty} \frac{1}{T_0} X(k\omega_0) e^{jk\omega_0 t}
             = \frac{\omega_0}{2\pi} \sum_{k=-\infty}^{\infty} X(k\omega_0) e^{jk\omega_0 t}

As T_0 goes to \infty, \tilde{x}(t) \to x(t) and \omega_0 \to d\omega, and
therefore

x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega) e^{j\omega t} d\omega    (the inverse Fourier Transform)

X(\omega) = \int_{-\infty}^{\infty} x(t) e^{-j\omega t} dt                       (the Fourier Transform)

Example 3:
We calculate the Fourier Transform of the two-sided exponential signal
given in Fig. 49:

x(t) = e^{-a|t|},   a > 0

The Fourier transform is

X(\omega) = \int_{-\infty}^{\infty} e^{-a|t|} e^{-j\omega t} dt
          = \int_{-\infty}^{0} e^{(a - j\omega)t} dt + \int_{0}^{\infty} e^{-(a + j\omega)t} dt
          = \frac{1}{a - j\omega} + \frac{1}{a + j\omega} = \frac{2a}{a^2 + \omega^2}

The Fourier spectrum of x(t) is shown in Fig. 50.
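The closed-form spectrum can be checked by evaluating the FT integral numerically; the value of a and the test frequencies below are illustrative choices.

```python
import numpy as np

# Numerical check of Example 3: the FT of x(t) = exp(-a*|t|) should equal
# 2a / (a^2 + w^2).
a = 1.0
t = np.linspace(-50.0, 50.0, 400001)   # wide enough that the tails vanish
dt = t[1] - t[0]
x = np.exp(-a * np.abs(t))

for w in [0.0, 0.5, 2.0]:
    X = np.sum(x * np.exp(-1j * w * t)) * dt   # direct FT integral
    assert abs(X - 2 * a / (a ** 2 + w ** 2)) < 1e-4
print("spectrum matches 2a/(a^2 + w^2)")
```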


Example 4: Delta function, x(t) = \delta(t). The Fourier Transform is

X(\omega) = \int_{-\infty}^{\infty} \delta(t) e^{-j\omega t} dt = 1

Example 5: Pulse (or rectangular gate) signal:

x(t) = \begin{cases} 1, & |t| < T_1 \\ 0, & |t| > T_1 \end{cases}

The Fourier Transform of this pulse is

X(\omega) = \int_{-T_1}^{T_1} e^{-j\omega t} dt
          = 2 \frac{\sin(\omega T_1)}{\omega} = 2 T_1 \mathrm{sinc}(\omega T_1)

where the sinc function is defined as \mathrm{sinc}(x) = \sin(x)/x. The FT of
x(t) is given in Fig. ??.

Example 6: Signal with a pulse (or rectangular gate) spectrum, shown in Fig.
52:

X(\omega) = \begin{cases} 1, & |\omega| < W \\ 0, & |\omega| > W \end{cases}

The inverse Fourier Transform of X(\omega) is

x(t) = \frac{1}{2\pi} \int_{-W}^{W} e^{j\omega t} d\omega
     = \frac{\sin(W t)}{\pi t} = \frac{W}{\pi} \mathrm{sinc}(W t)

The signal x(t) is a sinc function in this case. Examples 5 and 6 point to
the duality property of the FT.
Remark: Fourier Transform of Periodic Signals
We can obtain the Fourier transform of a periodic signal via its Fourier
series representation. We explain this using the following examples.

Example 7: Unit impulse spectrum,

X(\omega) = 2\pi \delta(\omega)

The inverse Fourier Transform of X(\omega) is

x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} 2\pi \delta(\omega) e^{j\omega t} d\omega = e^{j0t} = 1

Now consider the FT that is an impulse shifted to frequency \omega_0, shown
in Fig. ??:

X(\omega) = 2\pi \delta(\omega - \omega_0)

The signal that corresponds to X(\omega) is

x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} 2\pi \delta(\omega - \omega_0) e^{j\omega t} d\omega = e^{j\omega_0 t}

that is, a complex sinusoid at frequency \omega_0, which is clearly a
periodic signal.

Example 8: Combination of complex sinusoids, i.e., Fourier Series.
The periodic signal e^{j\omega_0 t} has the FT 2\pi \delta(\omega - \omega_0).
So we can express any periodic signal as a combination of the
e^{jk\omega_0 t} (that is, its FS expansion) and then take the FT, which will
be the corresponding combination of the 2\pi \delta(\omega - k\omega_0).
Given a periodic signal

x(t) = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}

the FT is obtained as

X(\omega) = \sum_{k=-\infty}^{\infty} 2\pi c_k \delta(\omega - k\omega_0)

which is again impulses in frequency (separated by \omega_0) and weighted by
the c_k (see Fig. 54).
Example 9: Sine function.

x(t) = \sin \omega_0 t = \frac{1}{2j} \left[ e^{j\omega_0 t} - e^{-j\omega_0 t} \right]

with c_k = 0 for k \ne \pm 1; then the FT of the sine function is

X(\omega) = \frac{1}{2j} \left[ 2\pi \delta(\omega - \omega_0) - 2\pi \delta(\omega + \omega_0) \right]

We show this Fourier transform in Fig. ??.

Example 10: Cosine function.

x(t) = \cos \omega_0 t = \frac{1}{2} \left[ e^{j\omega_0 t} + e^{-j\omega_0 t} \right]

with c_k = 0 for k \ne \pm 1; then the FT of the cosine is

X(\omega) = \frac{1}{2} \left[ 2\pi \delta(\omega - \omega_0) + 2\pi \delta(\omega + \omega_0) \right]

which is given in Fig. 56.

Example 11: A periodic impulse train.
Consider the periodic impulse train of period T sec shown in Fig. ??:

x(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT)

The FS coefficients of this signal are

c_k = \frac{1}{T} \int_{-T/2}^{T/2} \delta(t) e^{-jk\omega_0 t} dt = \frac{1}{T}

and using the result of Example 8, we get the FT of the impulse train

X(\omega) = \sum_{k=-\infty}^{\infty} 2\pi c_k \delta(\omega - k\omega_0)
          = \frac{2\pi}{T} \sum_{k=-\infty}^{\infty} \delta\left(\omega - k\frac{2\pi}{T}\right)

which is another impulse train in frequency (of period 2\pi/T), illustrated
in Fig. 58.

3.4. PROPERTIES OF THE CONTINUOUS-TIME FOURIER TRANSFORM

1. Linearity: The FT is a linear operation. If x_1(t) \leftrightarrow X_1(\omega)
and x_2(t) \leftrightarrow X_2(\omega), then

a_1 x_1(t) + a_2 x_2(t) \leftrightarrow a_1 X_1(\omega) + a_2 X_2(\omega)

2. Symmetry: If x(t) is real then its FT is conjugate symmetric,
X(-\omega) = X^*(\omega). For example,

x(t) = e^{-at} u(t) \leftrightarrow X(\omega) = \frac{1}{a + j\omega}

and

X(-\omega) = \frac{1}{a - j\omega} = X^*(\omega)

If the signal x(t) is real, and we write

X(\omega) = \mathrm{Re}\{X(\omega)\} + j \mathrm{Im}\{X(\omega)\} = |X(\omega)| e^{j\angle X(\omega)}

then the symmetry property implies:

The real part of the FT, \mathrm{Re}\{X(\omega)\}, is even symmetric.
The imaginary part of the FT, \mathrm{Im}\{X(\omega)\}, is odd symmetric.
The magnitude of the FT, |X(\omega)|, is even symmetric.
The phase of the FT, \angle X(\omega), is odd symmetric.

Furthermore:
If x(t) is real and even, then X(\omega) is also real and even;
X(\omega) = X(-\omega).
If x(t) is real and odd, then X(\omega) is purely imaginary and odd;
X(\omega) = -X(-\omega).

The even and odd symmetric parts of the signal,

x(t) = x_e(t) + x_o(t),

correspond to

X(\omega) = \mathrm{Re}\{X(\omega)\} + j \mathrm{Im}\{X(\omega)\}

with x_e(t) \leftrightarrow \mathrm{Re}\{X(\omega)\} and
x_o(t) \leftrightarrow j \mathrm{Im}\{X(\omega)\}.

3. Time Shifting: If x(t) has the FT X(\omega), then the time-shifted
version of x(t) has

x(t - t_0) \leftrightarrow e^{-j\omega t_0} X(\omega) = |X(\omega)| e^{j(\angle X(\omega) - \omega t_0)}

that is, shifting the signal in time just adds a linear phase to its FT; the
magnitude remains the same.
4. Differentiation and Integration: From the inverse transform,

\frac{d}{dt} x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} j\omega X(\omega) e^{j\omega t} d\omega

so the derivative of x(t) has the FT

\frac{d}{dt} x(t) \leftrightarrow j\omega X(\omega)

and the integral of x(t) has the FT

\int_{-\infty}^{t} x(\tau) d\tau \leftrightarrow \frac{1}{j\omega} X(\omega) + \pi X(0) \delta(\omega)

5. Time and Frequency Scaling: If a signal x(t) has the FT X(\omega), then
its time-scaled version has

x(at) \leftrightarrow \int_{-\infty}^{\infty} x(at) e^{-j\omega t} dt
     = \frac{1}{|a|} \int_{-\infty}^{\infty} x(\tau) e^{-j(\omega/a)\tau} d\tau
     = \frac{1}{|a|} X\left(\frac{\omega}{a}\right)

6. Duality: In general,

g(t) \leftrightarrow f(\omega)  \implies  f(t) \leftrightarrow 2\pi g(-\omega)

The dual of differentiation in time (property 4) is

-jt\, x(t) \leftrightarrow \frac{d}{d\omega} X(\omega)

the dual of time shifting (property 3) is modulation, i.e.,

e^{j\omega_0 t} x(t) \leftrightarrow X(\omega - \omega_0)

and the dual of integration in time (property 4) is

-\frac{1}{jt} x(t) + \pi x(0) \delta(t) \leftrightarrow \int_{-\infty}^{\omega} X(\nu) d\nu

Rectangular pulse signals and sinc signals are duals of each other. Let
x_1(t) be a pulse:

x_1(t) = \begin{cases} 1, & |t| < T_1 \\ 0, & |t| > T_1 \end{cases}

Then its FT is a sinc function:

X_1(\omega) = \frac{2 \sin(\omega T_1)}{\omega} = 2 T_1 \mathrm{sinc}(\omega T_1)

Further, let x_2(t) be a sinc signal:

x_2(t) = \frac{\sin(W t)}{\pi t} = \frac{W}{\pi} \mathrm{sinc}(W t)

Then its FT is a pulse in frequency:

X_2(\omega) = \begin{cases} 1, & |\omega| < W \\ 0, & |\omega| > W \end{cases}

which is a consequence of the duality property of the FT.


7. Parseval's Relation: This property indicates that the FT preserves the
energy of the signal, i.e., the energy of the signal calculated in the time
domain and in the frequency domain are identical. If
x(t) \leftrightarrow X(\omega), then

\int_{-\infty}^{\infty} |x(t)|^2 dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |X(\omega)|^2 d\omega = E_x

A similar relation is valid for periodic signals:

\frac{1}{T_0} \int_{T_0} |x(t)|^2 dt = \sum_{k=-\infty}^{\infty} |c_k|^2

which is the average power of the signal over one period.
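Parseval's relation has a discrete analogue that can be checked with the FFT; the factor 1/N below comes from NumPy's unnormalized DFT convention, and the random test signal is an illustrative choice.

```python
import numpy as np

# Discrete analogue of Parseval's relation:
# sum |x(n)|^2  =  (1/N) * sum |X(k)|^2, with X(k) the (unnormalized) DFT.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
X = np.fft.fft(x)

E_time = np.sum(np.abs(x) ** 2)
E_freq = np.sum(np.abs(X) ** 2) / len(x)

print(np.isclose(E_time, E_freq))  # True
```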


8. Convolution Property: This property is actually very useful for the
frequency-domain analysis of LTI systems. We consider an LTI system with
impulse response h(t), and assume an input signal x(t) is applied to this
system. The output is

y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau) h(t-\tau) d\tau

Then the FT of the output is

Y(\omega) = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} x(\tau) h(t-\tau) d\tau \right] e^{-j\omega t} dt
          = \int_{-\infty}^{\infty} x(\tau) \left[ \int_{-\infty}^{\infty} h(t-\tau) e^{-j\omega t} dt \right] d\tau
          = H(\omega) \int_{-\infty}^{\infty} x(\tau) e^{-j\omega \tau} d\tau
          = H(\omega) X(\omega)

where H(\omega) is the FT of the impulse response, called the Frequency
Response of the LTI system:

y(t) = h(t) * x(t) \leftrightarrow Y(\omega) = H(\omega) X(\omega)
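A discrete analogue of the convolution property can be checked with the DFT, where multiplication of DFTs corresponds to circular (not linear) convolution; the random signals below are illustrative.

```python
import numpy as np

# Circular convolution in time corresponds to multiplication of DFTs.
rng = np.random.default_rng(1)
N = 64
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# y(n) = sum_k x(k) h((n-k) mod N), computed directly...
y_direct = np.array([sum(x[k] * h[(n - k) % N] for k in range(N))
                     for n in range(N)])
# ...and via the DFT product Y(k) = X(k) H(k).
y_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

print(np.allclose(y_direct, y_fft))  # True
```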

Example 12: The delay system.

h(t) = \delta(t - t_0)  \implies  H(\omega) = e^{-j\omega t_0}

Y(\omega) = H(\omega) X(\omega) = e^{-j\omega t_0} X(\omega)
\implies  y(t) = x(t - t_0)

Example 13: Differentiator.

y(t) = \frac{d}{dt} x(t),
\qquad
Y(\omega) = j\omega X(\omega) = H(\omega) X(\omega)
\implies  H(\omega) = j\omega

Example 14:
Given an LTI system with h(t) = e^{-at} u(t), a > 0, and input signal
x(t) = e^{-bt} u(t), b > 0, find the output using frequency-domain methods.
Here X(\omega) = 1/(b + j\omega) and H(\omega) = 1/(a + j\omega), and then
the output is

Y(\omega) = \frac{1}{(b + j\omega)(a + j\omega)}
          = \frac{1}{b - a} \left[ \frac{1}{a + j\omega} - \frac{1}{b + j\omega} \right]

y(t) = \frac{1}{b - a} \left[ e^{-at} u(t) - e^{-bt} u(t) \right]
Example 15:
Another LTI system, with h(t) = e^{-t} u(t), is given the input

x(t) = \sum_{k=-3}^{+3} c_k e^{jk2\pi t}

with c_0 = 1, c_1 = c_{-1} = 1/4, c_2 = c_{-2} = 1/2, c_3 = c_{-3} = 1/3.
Then

H(\omega) = \frac{1}{1 + j\omega}

X(\omega) = \sum_{k=-3}^{+3} 2\pi c_k \delta(\omega - 2\pi k)

Y(\omega) = X(\omega) H(\omega) = \sum_{k=-3}^{+3} 2\pi c_k H(2\pi k) \delta(\omega - 2\pi k)
          = \sum_{k=-3}^{+3} \frac{2\pi c_k}{1 + j2\pi k} \delta(\omega - 2\pi k)

y(t) = \sum_{k=-3}^{+3} \frac{c_k}{1 + j2\pi k} e^{jk2\pi t}

9. Modulation Property:
Assume that two signals are multiplied, such that

r(t) = s(t) p(t)  \leftrightarrow  R(\omega) = \frac{1}{2\pi} [S(\omega) * P(\omega)]

3.5. SAMPLING OF CONTINUOUS TIME SIGNALS

A continuous-time signal can be sampled or discretized by multiplying it
with an impulse train p(t):

p(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT)

r(t) = s(t) p(t) = \sum_{k=-\infty}^{\infty} s(t) \delta(t - kT)
     = \sum_{k=-\infty}^{\infty} s(kT) \delta(t - kT)

The Fourier Transform of the impulse train is

P(\omega) = \frac{2\pi}{T} \sum_{k=-\infty}^{\infty} \delta\left(\omega - k\frac{2\pi}{T}\right)

Then the FT of the sampled signal is

R(\omega) = \frac{1}{2\pi} [S(\omega) * P(\omega)]
          = \frac{1}{T} \sum_{k=-\infty}^{\infty} S(\omega) * \delta\left(\omega - k\frac{2\pi}{T}\right)
          = \frac{1}{T} \sum_{k=-\infty}^{\infty} S\left(\omega - k\frac{2\pi}{T}\right)

The sampling of a continuous-time signal is illustrated in the time and
frequency domains in Fig. ??.
For non-aliased sampling, i.e., for the shifted copies of S(\omega) not to
overlap in R(\omega), we need

\frac{2\pi}{T} - \omega_1 \ge \omega_1
\quad \Longleftrightarrow \quad
\frac{2\pi}{T} \ge 2\omega_1

where \omega_1 is the highest frequency in S(\omega). We call
2\pi/T = \omega_s the sampling frequency; the condition

\omega_s \ge 2\omega_1

is called the Nyquist sampling criterion. The original signal can be
recovered from r(t) by using a low-pass filter, provided that the Nyquist
criterion is satisfied. We show in Fig. 60 the recovery of the original
signal as a combination of sinc functions.
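The Nyquist criterion can be illustrated numerically: sampling a 7 Hz cosine at f_s = 10 Hz (below the Nyquist rate of 14 Hz) yields exactly the samples of a 3 Hz cosine, the aliased frequency f_s - 7. The frequencies here are illustrative choices.

```python
import numpy as np

# Aliasing demo: a 7 Hz cosine sampled at 10 Hz folds back to 3 Hz,
# since cos(2*pi*0.7*n) = cos(2*pi*n - 2*pi*0.3*n) = cos(2*pi*0.3*n).
fs = 10.0
n = np.arange(40)
t = n / fs

print(np.allclose(np.cos(2 * np.pi * 7 * t),
                  np.cos(2 * np.pi * 3 * t)))  # True
```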
The frequency response of LTI systems is generally presented on a
logarithmic (dB) scale. These plots are called Bode diagrams. With the
frequency response

H(\omega) = \frac{Y(\omega)}{X(\omega)}

we have

|Y(\omega)| = |H(\omega)| |X(\omega)|
\angle Y(\omega) = \angle H(\omega) + \angle X(\omega)

|H(\omega)| : Magnitude response of the system.
|Y(\omega)| : Magnitude spectrum of the output.
|X(\omega)| : Magnitude spectrum of the input.
\angle H(\omega) : Phase response of the system.
\angle Y(\omega) : Phase spectrum of the output.
\angle X(\omega) : Phase spectrum of the input.

The magnitude response in dB is defined as
|H(\omega)|_{dB} = 20 \log_{10} |H(\omega)|, which is then graphed as a
function of \omega, as shown in Fig. ??.

4. FILTERING
Frequency-selective filters are devices that allow some frequency
components of the input signal to appear at the output, but eliminate other
frequencies.

4.1. IDEAL FILTERS

1. Ideal Low-Pass Filter:
The magnitude response of an ideal low-pass filter is

|H_{lp}(\omega)| = \begin{cases} 1, & |\omega| < \omega_C \\ 0, & |\omega| > \omega_C \end{cases}

In Fig. 62 we give the magnitude and phase responses of an ideal low-pass
filter.
The impulse response of a zero-phase filter would be symmetric around
zero, but it is non-causal. So a realizable (causal) filter will have an
impulse response that is symmetric around some point t > 0. In Fig. ?? we
give the impulse response of a zero-phase (but non-causal) filter, and its
time-shifted (non-zero-phase) version.

2. Ideal High-Pass Filter:
An ideal high-pass filter is the complement of the ideal low-pass filter.
The magnitude response is given by

|H_{hp}(\omega)| = \begin{cases} 1, & |\omega| > \omega_C \\ 0, & |\omega| < \omega_C \end{cases}

and shown in Fig. 64.


3. Band-Pass Filter:
A band-pass filter is the combination of a low-pass and a high-pass filter.
An ideal band-pass filter magnitude response, given in Fig. ??, is

|H_{bp}(\omega)| = \begin{cases} 1, & \omega_1 < |\omega| < \omega_2 \\ 0, & \text{otherwise} \end{cases}

4.2. NON-IDEAL FREQUENCY SELECTIVE FILTERS

Ideal filters are not possible to implement in practice, due to their sharp
transitions and exactly constant responses. However, we can approximate the
ideal filters using polynomial approximations, which yield non-ideal
filters:

Butterworth
Chebyshev I and II
Elliptic

Figure 33: Calculation steps of a convolution integral using graphics.

Figure 34: The response of the system to input x(t).

Figure 35: Cascade connection of two discrete-time LTI systems.

Figure 36: Parallel connection of two discrete-time LTI systems.

Figure 37: The input signal and impulse response of a discrete-time LTI system.

Figure 38: Calculation steps of a convolution sum.

Figure 39: The response of the system to input x(n).

Figure 40: An LTI system and its inverse cascaded.

Figure 41: An incrementally linear system model.

Figure 42: Block diagram symbols used to represent LTI systems.

Figure 43: Block diagram of a second order discrete-time system.

Figure 44: Periodic input to an LTI system.

Figure 45: A periodic pulse train.

Figure 46: Fourier series coefficients of the periodic pulse train.

Figure 47: Periodic pulse train with 2 discontinuities in a period.

Figure 48: An aperiodic signal and its periodic extension.

Figure 49: A two-sided exponential signal.

Figure 50: Fourier transform of two-sided exponential.
