
SIGNALS & SYSTEMS

UNIT I
Contents:

1.1 Introduction about signals

1.2 Classification of signals

1.3 Basic signals

1.4 Operations performed on signals

1.5 Properties of signals

1.6 Systems

1.7 Properties of systems

1.8 Linear Time Invariant Systems

1.1 INTRODUCTION ABOUT SIGNALS


We first define what signals and systems are.

Signals are functions of one or more variables.

Systems respond to an input signal by producing an output signal.

What is a Signal?

Anything which carries information is a signal, e.g. the human voice,
the chirping of birds, smoke signals, gestures (sign language), or the
fragrance of flowers.

Many of our body functions are regulated by chemical signals, blind
people use the sense of touch, and bees communicate by their dancing
pattern.

Examples of signals include:

1. A voltage signal: voltage across two points varying as a function of
time.
2. A force pattern: force varying as a function of 2-dimensional space.
3. A photograph: color and intensity as a function of 2-dimensional
space.
4. A video signal: color and intensity as a function of 2-dimensional
space and time.

Examples of modern high-speed signals are the voltage changes in a
telephone wire, the electromagnetic field emanating from a transmitting
antenna, and the variation of light intensity in an optical fiber.

Thus we see that there is an almost endless variety of signals, and a
large number of ways in which signals are carried from one place to
another.

We are all immersed in a sea of signals. All of us, from the smallest
living unit (a cell) to the most complex living organism (humans),
receive signals all the time and continue to process them. The survival
of any living organism depends upon its ability to process these
signals appropriately.

Signals: The Mathematical Way

A signal is a real (or complex) valued function of one or more real
variable(s). When the function depends on a single variable, the signal
is said to be one-dimensional, and when the function depends on two or
more variables, the signal is said to be multidimensional.

Examples of one-dimensional signals: a speech signal, daily maximum
temperature, annual rainfall at a place.
An example of a two-dimensional signal: an image, with the vertical and
horizontal coordinates representing the two dimensions.
Four dimensions: our physical world is four-dimensional (three spatial
and one temporal).

Examples of systems include:

1. An oscilloscope: takes in a voltage signal, outputs a 2-dimensional
image characteristic of the voltage signal.
2. A computer monitor: inputs voltage pulses from the CPU and outputs a
time-varying display.
3. An accelerating mass: force as a function of time may be looked at
as the input signal, and velocity as a function of time as the output
signal.
4. A capacitance: the terminal voltage signal may be looked at as the
input, and the current signal as the output.

1.2 Classification of signals

We use the term signal to mean a real or complex valued function of
real variable(s), and denote the signal by x(t). The variable t is
called the independent variable and the value x(t) the dependent
variable.

When t takes values in a countable set, the signal is called a
discrete-time signal. For example:
t ∈ {0, T, 2T, 3T, 4T, ...}
t ∈ {..., -1, 0, 1, ...}
t ∈ {1/2, 3/2, 5/2, 7/2, ...}
For convenience of presentation we use the notation x[n] to denote a
discrete-time signal. When both the dependent and independent variables
take values in countable sets (the two sets can be quite different),
the signal is called a digital signal.

When both the dependent and independent variables take values in
continuous sets, the signal is called an analog signal.

Continuous-time and discrete-time signals :

A signal was defined as merely a mapping from one set to another. In
certain cases, the independent variable is continuous, i.e. the
elements of the domain set have a continuity associated with them. This
means the mapping is defined over a continuum of values of the
independent variable. Such signals are called continuous-time signals.
A force pattern (force as a function of 2-dimensional space) or a
speech signal would be an example of a continuous-time signal.

On the other hand, certain signals are defined for only discrete values
of the independent variable; i.e. the elements of the domain are not
continuous. Such signals are called discrete-time signals .

Ex: India's population count, done every 10 years is an example of a


discrete-time signal. In fact, the image files on your computer are also
discrete-time signals; the information is stored pixel-wise, and not over
a continuous stretch of 2 spatial co-ordinates. We typically index a
discrete time variable by integers.
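The relation between the two classes can be illustrated by sampling: picking values of a continuous-time signal at a countable set of instants yields a discrete-time signal indexed by integers. Below is a minimal Python sketch; the signal cos(2πt) and the sampling interval T = 0.1 are our own illustrative choices, not examples from the text.

```python
import math

# Sample the continuous-time signal x(t) = cos(2*pi*t) at t = n*T.
# The sample instants n*T form a countable set, so x[n] = x(n*T)
# is a discrete-time signal indexed by the integer n.
T = 0.1
x = [math.cos(2 * math.pi * n * T) for n in range(11)]  # n = 0..10

print(x[0])   # sample at t = 0 -> 1.0
print(x[5])   # sample at t = 0.5 (half a period of the cosine)
```

Note that between any two sample instants the underlying continuous-time signal still exists; the discrete-time signal simply does not record it.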

Discrete-time signals

Discrete variables are those in which there exists a neighborhood
around each value in which no other value is present.

Why should we bother about discrete variables?

Discrete variables come up intrinsically in several applications. Take,
for example, the cost of gold in the market every day. The dependent
variable (cost) is a function of discrete time (incremented once every
day). Another example is the marks scored by the students in a class.
Here the dependent variable (marks) is a function of the discrete
variable roll number. While it is perfectly fine to talk about the
marks of roll no. 02007005, it makes no sense to talk of the marks of
roll no. 02007011.67 - this system is inherently discrete.

Need the discrete variable be uniform?

No. Though we imagine natural numbers or integers when we think of
discrete signals, the points need not be equally spaced. For example,
if the markets remained closed on Sundays, we would not record a price
for gold on that day - so the spacing between the values on this axis
changes.

A discrete variable is one which can ultimately be indexed by integers.

First, the simplest and most intuitive discrete set is the integer axis
itself.

Examples of discrete variables

Now that we seem to have an intuitive understanding of what a


discrete variable is, let us take some examples of discrete variables:

Notation:
When we write x(t) it has two meanings: one is the value of x at time
t, and the other is the set of pairs (x(t), t) over all allowable
values of t. By signal we mean the second interpretation.

Notation for continuous-time signals:
{x(t)} denotes the continuous-time signal. Here {x(t)} is short
notation for {x(t), t ∈ I}, where I is the set in which t takes its
values.

Notation for discrete-time signals:
Similarly, for a discrete-time signal we will use the notation {x[n]},
where {x[n]} is short for {x[n], n ∈ I}.

Henceforth, we shall represent the independent variable for
continuous-time signals by t (enclosed in parentheses (.)), and for
discrete-time signals by n (enclosed in brackets [.]).

{x(t)} (or {x[n]}) refers to the whole waveform, while x(t) (or x[n])
refers to a particular value.

Conclusion:
* Thus a signal may also be defined as a mapping from one set (domain)
to another (co-domain).
* Continuous-time signal means the mapping is defined over a continuum
of values of the independent variable.
* A discrete variable is one which can ultimately be indexed by
integers.
* We will enclose discrete variables in brackets [.] as opposed to
parentheses (.) for continuous variables.
Review Questions:
1. What is a signal? Give an example.
2. Give examples of 1-dimensional and 2-dimensional signals.
3. Mention the basic types of signals.
4. Compare a discrete-time signal with a digital signal.

1.3 Basic Signals in detail

Elementary Signals

Continuous time unit step and unit impulse functions

The Continuous-Time Unit Step Function: The definition is analogous to
its discrete-time counterpart, i.e.

u(t) = 0, t < 0
     = 1, t ≥ 0

The unit step function is discontinuous at the origin.

The Continuous-Time Unit Impulse Function: The unit impulse function,
also known as the Dirac delta function, was first defined by Dirac as

δ(t) = 0 for t ≠ 0, with ∫ δ(t) dt = 1 (the integral taken over all t)
There are several elementary signals that occur prominently in the
study of digital signals and digital signal processing.

(a) UNIT SAMPLE SEQUENCE:

Defined by

δ[n] = 1, n = 0
     = 0, n ≠ 0

Graphically this is as shown below.

The unit sample sequence is also known as the impulse sequence.

(b) UNIT STEP SEQUENCE:

Defined by

u[n] = 1, n ≥ 0
     = 0, n < 0

Graphically this is as shown below.

Unit step in terms of unit impulse function

Having studied the basic signal operations, namely time shifting, time
scaling and time inversion, it is easy to see that

δ[n] = u[n] - u[n-1]

Similarly, shifting by k gives

δ[n-k] = u[n-k] - u[n-k-1]

Summing over k ≥ 0 we get

u[n] = Σ (k = 0 to ∞) δ[n-k]

Looking directly at the unit step function we observe that it can be
constructed as a sum of shifted unit impulse functions.

The unit step can also be expressed as a running sum of the unit
impulse function:

u[n] = Σ (k = -∞ to n) δ[k]

We see that the running sum is 0 for n < 0 and equal to 1 for n ≥ 0,
thus defining the unit step function u[n].
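The relations between the step and the impulse are easy to verify numerically. A minimal Python sketch (the helper names u and delta are ours):

```python
def u(n):
    """Unit step sequence: 1 for n >= 0, else 0."""
    return 1 if n >= 0 else 0

def delta(n):
    """Unit sample (impulse) sequence: 1 at n = 0, else 0."""
    return 1 if n == 0 else 0

# First difference of the step gives the impulse: delta[n] = u[n] - u[n-1]
assert all(delta(n) == u(n) - u(n - 1) for n in range(-5, 6))

# Running sum of the impulse gives the step: u[n] = sum over k <= n of delta[k]
assert all(u(n) == sum(delta(k) for k in range(-10, n + 1)) for n in range(-5, 6))

print("step/impulse relations verified")
```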

Sifting property

Consider the product x[n] δ[n]. The delta function is nonzero only at
the origin, so it follows that this signal is the same as x[0] δ[n].

More generally,

x[n] δ[n-k] = x[k] δ[n-k]

It is important to understand the above expression. It means the
product of a given signal x[n] with the shifted unit impulse function
is equal to the time-shifted unit impulse function multiplied by x[k].
Thus the signal is 0 at times not equal to k, and at time k the
amplitude is x[k]. So we see that the unit impulse sequence can be used
to obtain the value of the signal at any time k. This is called the
sampling property of the unit impulse function. This property will be
used in the discussion of LTI systems. For example, for any fixed k the
product x[n] δ[n-k] picks out the single value x[k].

Likewise, the product x[n] u[n], i.e. the product of the signal x[n]
with u[n], truncates the signal for n < 0, since u[n] = 0 for n < 0.

Similarly, the product x[n] u[n-1] truncates the signal for n < 1.
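Both the sifting property and the truncation behaviour can be checked directly. A short Python sketch (the test signal x and the shift k = 2 are our own illustrative choices):

```python
def delta(n):
    return 1 if n == 0 else 0

def u(n):
    return 1 if n >= 0 else 0

x = {n: n * n + 1 for n in range(-4, 5)}   # an arbitrary test signal

# Sifting: x[n]*delta[n-k] is zero except at n = k, where it equals x[k]
k = 2
prod = {n: x[n] * delta(n - k) for n in x}
assert prod[k] == x[k]
assert all(v == 0 for n, v in prod.items() if n != k)

# Truncation: x[n]*u[n] zeroes the signal for n < 0 and keeps it for n >= 0
trunc = {n: x[n] * u(n) for n in x}
assert all(trunc[n] == 0 for n in x if n < 0)
assert all(trunc[n] == x[n] for n in x if n >= 0)

print("sifting and truncation verified")
```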

(c) EXPONENTIAL SEQUENCE:
The complex exponential signal or sequence {x[n]} is defined by

x[n] = C α^n

where C and α are, in general, complex numbers. Note that by writing
α = e^β, we can write the exponential sequence as

x[n] = C e^(βn)

Real exponential signals:
If C and α are real, we can have one of the several types of behavior
illustrated below.

For |α| > 1 the magnitude of the signal grows exponentially, while for
|α| < 1 it is a decaying exponential. For α > 0 all terms of {x[n]}
have the same sign, while for α < 0 the sign of the terms in {x[n]}
alternates.
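These four behaviours of the real exponential C·α^n can be verified in a few lines of Python (the particular values of C and α are our own illustrative choices):

```python
# Behaviour of the real exponential sequence x[n] = C * alpha**n
C = 1.0

def seq(alpha, N=6):
    return [C * alpha ** n for n in range(N)]

growing  = seq(2.0)    # |alpha| > 1: magnitude grows exponentially
decaying = seq(0.5)    # |alpha| < 1: magnitude decays exponentially
alt      = seq(-0.5)   # alpha < 0: sign alternates from term to term

assert all(abs(b) > abs(a) for a, b in zip(growing, growing[1:]))
assert all(abs(b) < abs(a) for a, b in zip(decaying, decaying[1:]))
assert all(a * b < 0 for a, b in zip(alt, alt[1:]))

print(growing)  # -> [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```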
(d) SINUSOIDAL SIGNAL:
The sinusoidal signal {x[n]} is defined by

x[n] = A cos(ω₀n + φ)

Euler's relation allows us to relate complex exponentials and sinusoids
as

e^(jω₀n) = cos(ω₀n) + j sin(ω₀n)

and

A cos(ω₀n + φ) = (A/2) e^(jφ) e^(jω₀n) + (A/2) e^(-jφ) e^(-jω₀n)

The general discrete-time complex exponential can be written in terms
of real exponential and sinusoidal signals. Specifically, if we write C
and α in polar form, C = |C| e^(jθ) and α = |α| e^(jω₀), then

x[n] = C α^n = |C| |α|^n cos(ω₀n + θ) + j |C| |α|^n sin(ω₀n + θ)

Thus for |α| = 1, the real and imaginary parts of a complex exponential
sequence are sinusoidal. For |α| < 1, they correspond to a sinusoidal
sequence multiplied by a decaying exponential, and for |α| > 1, they
correspond to a sinusoidal sequence multiplied by a growing
exponential.

Review Questions:
1. List some of the elementary signals.
2. Define unit impulse function.
3. Define unit step function.

1.4 OPERATIONS PERFORMED ON SIGNALS
i) Sequence Addition
ii) Scalar Multiplication
iii) Sequence Multiplication
iv) Shifting
v) Reflection

i) Sequence addition:
Let {x[n]} and {y[n]} be two sequences. Sequence addition is defined as
term-by-term addition. Let {z[n]} be the resulting sequence:
{z[n]} = {x[n]} + {y[n]}
where each term z[n] = x[n] + y[n].
We will use the following notation:
{x[n]} + {y[n]} = {x[n] + y[n]}

ii) Scalar multiplication:

Let a be a scalar. We will take a to be real if we consider only
real-valued signals, and take it to be a complex number if we are
considering complex-valued sequences. Unless otherwise stated we will
consider complex-valued sequences. Let the resulting sequence be
denoted by {w[n]}:
{w[n]} = a {x[n]}
is defined by w[n] = a x[n], i.e. each term is multiplied by a. In our
notation,
a {x[n]} = {a x[n]}
Note: if we take the set of all sequences and define these two
operations as addition and scalar multiplication, they satisfy all the
properties of a linear vector space.

iii) Sequence multiplication:

Let {x[n]} and {y[n]} be two sequences, and {z[n]} the resulting
sequence:
{z[n]} = {x[n]} {y[n]}
where z[n] = x[n] y[n].
The notation used for this will be {x[n]} {y[n]} = {x[n] y[n]}.

Now we consider some operations based on the independent variable n.

iv) Shifting:
This is also known as translation. Let us shift a sequence {x[n]} by n₀
units, and let the resulting sequence be {y[n]}. The terms are defined
by y[n] = x[n - n₀]. We will use the short notation {x[n - n₀]} to
denote a shift by n₀.

The figures below show some examples of shifting: {x[n]}, its shifted
version {x[n-2]}, and {x[n+1]}.

A positive value of n₀ means a shift towards the right; a negative
value of n₀ means a shift towards the left.

v) Reflection:
Let {x[n]} be the original sequence, and {y[n]} the reflected sequence;
then y[n] is defined by
y[n] = x[-n]
We will denote this by {x[-n]}.
When we have complex-valued signals, sometimes we reflect and take the
complex conjugate, i.e. y[n] is defined by y[n] = x*[-n], where *
denotes complex conjugation. This sequence will be denoted by {x*[-n]}.

We will learn about more complex operations later on. Some of these
operations commute, i.e. if we apply two operations we can interchange
their order, and some do not. For example, scalar multiplication and
reflection commute: scaling and then reflecting gives the same sequence
as reflecting and then scaling, so v[n] = z[n] for all n. Shifting and
reflection do not commute: shifting {x[n]} right by one to get {y[n]} =
{x[n-1]} and then reflecting gives {z[n]} = {y[-n]} = {x[-n-1]},
whereas reflecting first to get {w[n]} = {x[-n]} and then shifting
gives {u[n]} = {w[n-1]} = {x[1-n]}, which is a different sequence.

We can combine many of these operations in one step; for example,
{y[n]} may be defined as y[n] = 2x[3-n].
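The commutation behaviour of these operations can be checked concretely. Below is a Python sketch (the test signal and the finite index range are our own illustrative choices; values outside the stored range are treated as zero):

```python
x = {n: n for n in range(-3, 4)}   # simple test signal x[n] = n

def get(sig, n):
    """Read a sequence value, treating indices outside the range as 0."""
    return sig.get(n, 0)

# Shift right by 1 then reflect: z[n] = x[-n - 1]
z = {n: get(x, -n - 1) for n in range(-3, 4)}
# Reflect then shift right by 1: v[n] = x[-(n - 1)] = x[1 - n]
v = {n: get(x, 1 - n) for n in range(-3, 4)}
assert z != v   # shifting and reflection do NOT commute

# Scalar multiplication and reflection DO commute:
a = 2
scaled = {m: a * x[m] for m in x}
p = {n: a * get(x, -n) for n in range(-3, 4)}      # reflect, then scale
q = {n: get(scaled, -n) for n in range(-3, 4)}     # scale, then reflect
assert p == q

print("commutation checks passed")
```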

Review Questions:
1. What are the mathematical operations that can be performed on
signals?
2. What do you mean by shifting of signals?
3. Which of the operations obey the commutative property?

1.5 Properties of signals

i) Energy of a signal
ii) Power of a signal
iii) Periodicity of signals
iv) Even and Odd signals
v) Periodicity property of sinusoidal signals

i) Energy of a Signal:
The total energy of a signal {x[n]} is defined by

Ex = Σ (n = -∞ to ∞) |x[n]|²

A signal is referred to as an energy signal if and only if the total
energy Ex is finite.

ii) Power of a signal:

If {x[n]} is a signal whose energy is not finite, we define the power
of the signal as

Px = lim (M → ∞) 1/(2M+1) Σ (n = -M to M) |x[n]|²

A signal is referred to as a power signal if the power Px satisfies the
condition

0 < Px < ∞

An energy signal has zero power, and a power signal has infinite
energy. There are signals which are neither energy signals nor power
signals. For example, {x[n]} defined by x[n] = n has neither finite
power nor finite energy.
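These definitions are easy to explore numerically: a finite-duration pulse has finite energy and zero power, while x[n] = n has a power estimate that keeps growing with the window size. A Python sketch (the pulse signal and window sizes are our own illustrative choices):

```python
# Energy: E = sum of |x[n]|^2.  Power: average of |x[n]|^2 over a
# window n = -M..M, which approximates the limit for large M.
def energy(sig):
    return sum(abs(v) ** 2 for v in sig.values())

def power(x_fn, M):
    return sum(abs(x_fn(n)) ** 2 for n in range(-M, M + 1)) / (2 * M + 1)

# A finite-duration pulse is an energy signal: finite energy, zero power.
pulse = {n: 1.0 for n in range(4)}   # x[n] = 1 for n = 0..3, else 0
assert energy(pulse) == 4.0
assert power(lambda n: pulse.get(n, 0.0), 10_000) < 1e-3

# x[n] = n: the power estimate diverges, so it is neither an energy
# signal nor a power signal.
assert power(lambda n: n, 100) < power(lambda n: n, 1000)

print("energy/power checks passed")
```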

iii) Periodic Signals:

An important class of signals that we encounter frequently is the class
of periodic signals. We say that a signal {x[n]} is periodic with
period N, where N is a positive integer, if the signal is unchanged by
a time shift of N, i.e.
{x[n]} = {x[n + N]}
or x[n] = x[n + N] for all n.

Since {x[n]} is the same as {x[n+N]}, the latter is also periodic, so
we get
{x[n]} = {x[n+N]} = {x[n+N+N]} = {x[n+2N]}

Generalizing this we get {x[n]} = {x[n+kN]}, where k is a positive
integer. From this we see that {x[n]} is also periodic with period 2N,
3N, ... The fundamental period N₀ is the smallest positive value of N
for which the signal is periodic.

The signal illustrated below is periodic with fundamental period N₀ = 4.

Except for the all-zero signal, all periodic signals have infinite
energy. They may have finite power. Let {x[n]} be periodic with period
N; then the power Px is given by

Px = lim (M → ∞) 1/(2M+1) Σ (n = -M to M) |x[n]|²

Since the signal is periodic, the sum over one period is the same
wherever the period starts. The window from -M to M contains
approximately 2M/N complete periods (the integer part of this), and the
factor 2M/(2M+1) tends to one as M goes to infinity, so we get

Px = (1/N) Σ (n = 0 to N-1) |x[n]|²
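The limiting argument above can be checked numerically: for a periodic signal, the average of |x[n]|² over a large window approaches the average over a single period. A Python sketch (the particular period [0, 1, 1, 0] and the window size are our own illustrative choices):

```python
# For a signal of period N, the power equals the average of |x[n]|^2
# over one period: P = (1/N) * sum over n = 0..N-1 of |x[n]|^2.
period = [0, 1, 1, 0]        # one period of an N = 4 signal
N = len(period)

def x(n):
    """Periodic extension of the stored period (works for negative n too)."""
    return period[n % N]

one_period_avg = sum(v ** 2 for v in period) / N

M = 10_000                   # large window approximates the limit
window_avg = sum(x(n) ** 2 for n in range(-M, M + 1)) / (2 * M + 1)

assert abs(window_avg - one_period_avg) < 1e-3
print(one_period_avg)        # -> 0.5
```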

iv) Even and odd signals:

A real-valued signal {x[n]} is referred to as an even signal if it is
identical to its time-reversed counterpart, i.e. if
{x[n]} = {x[-n]}
A real signal is referred to as an odd signal if
{x[n]} = {-x[-n]}
An odd signal has value 0 at n = 0, since x[0] = -x[-0] = -x[0].

Given any real-valued signal {x[n]}, we can write it as the sum of an
even signal and an odd signal. Consider the signals
Ev({x[n]}) = {xe[n]} = {1/2 (x[n] + x[-n])}
and Od({x[n]}) = {xo[n]} = {1/2 (x[n] - x[-n])}

We can see easily that

{x[n]} = {xe[n]} + {xo[n]}

The signal {xe[n]} is called the even part of {x[n]}. We can verify
very easily that {xe[n]} is an even signal. Similarly, {xo[n]} is
called the odd part of {x[n]} and is an odd signal. When we have
complex-valued signals we use a slightly different terminology. A
complex-valued signal {x[n]} is referred to as a conjugate symmetric
signal if
{x[n]} = {x*[-n]}
where x* refers to the complex conjugate of x. Here we do both
reflection and complex conjugation. If {x[n]} is real-valued, this is
the same as an even signal.
A complex signal {x[n]} is referred to as a conjugate antisymmetric
signal if
{x[n]} = {-x*[-n]}
We can express any complex-valued signal as a sum of a conjugate
symmetric and a conjugate antisymmetric signal. We use notation similar
to the above:
Ev({x[n]}) = {xe[n]} = {1/2 (x[n] + x*[-n])}
and Od({x[n]}) = {xo[n]} = {1/2 (x[n] - x*[-n])}
Then {x[n]} = {xe[n]} + {xo[n]}.
We can see easily that {xe[n]} is a conjugate symmetric signal and
{xo[n]} is a conjugate antisymmetric signal. These definitions reduce
to even and odd signals in case the signal takes only real values.
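The even/odd decomposition can be verified for any concrete signal. A Python sketch for the real-valued case (the test signal n³ + n² is our own illustrative choice; its even part works out to n² and its odd part to n³):

```python
# Even/odd decomposition: xe[n] = (x[n] + x[-n])/2, xo[n] = (x[n] - x[-n])/2
x = {n: n ** 3 + n ** 2 for n in range(-4, 5)}   # arbitrary real test signal

xe = {n: (x[n] + x[-n]) / 2 for n in x}
xo = {n: (x[n] - x[-n]) / 2 for n in x}

assert all(xe[n] == xe[-n] for n in x)           # xe is even
assert all(xo[n] == -xo[-n] for n in x)          # xo is odd
assert all(xe[n] + xo[n] == x[n] for n in x)     # they sum back to x
assert xo[0] == 0                                # odd part vanishes at n = 0

print("even/odd decomposition verified")
```

For complex-valued signals, replacing `x[-n]` by `x[-n].conjugate()` in the two formulas gives the conjugate symmetric and conjugate antisymmetric parts instead.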

Review Questions:
1. List out the properties of signals.
2. Define periodicity of a signal.
3. What do you understand by conjugate symmetric and
conjugate antisymmetric signal ?
1.6 Systems:
What is a system?

A signal was defined as a mapping from a set of the independent


variable (domain) to the set of the dependent variable (co-domain). A
system is also a mapping, but across signals, or across
mappings . That is, the domain set and the co-domain set for a
system are both sets of signals, and corresponding to each signal in
the domain set, there exists a unique signal in the co-domain set.

In signals and systems terminology, we say: corresponding to every
possible input signal, a system "produces" an output signal.

In that sense, realize that a system, as a mapping, is one step
hierarchically higher than a signal. While the correspondence for a
signal is from one element of one set to a unique element of another,
the correspondence for a system is from one whole mapping in a set of
mappings to a unique mapping in another set of mappings!

Examples of systems

Examples of systems are all around us. The speakers that go with your
computer can be looked at as systems whose input is voltage pulses from
the CPU and whose output is music (an audio signal). A spring may be
looked at as a system with the input, say, the longitudinal force on it
as a function of time, and the output signal being its elongation as a
function of time. The independent variable for the input and output
signals of a system need not even be the same.

In fact, it is even possible for the input signal to be continuous-time
and the output signal to be discrete-time, or vice versa. For example,
our speech is a continuous-time signal, while a digital recording of it
is a discrete-time signal! The system that converts one into the other
is an example of this class of systems.

As these examples may have made evident, we look at many physical


objects/devices as systems, by identifying some variation
associated with them as the input signal and some other
variation associated with them as the output signal (the
relationship between these, that essentially defines the system
depends on the laws or rules that govern the system) . Thus a
capacitance with voltage (as a function of time) considered as the
input signal and current considered as the output signal is not the
same system as a capacitance with, say charge considered as the
input signal and voltage considered as the output signal. Why?

The mappings that define the system are different in these two cases.

System description

The system description specifies the transformation of the input signal


to the output signal. In certain cases, a system has a closed form
description. E.g. the continuous-time system with description y(t) =
x(t) + x(t-1); where x(t) is the input signal and y(t) is the output
signal. Not all systems have such a closed form description. Just as
certain "pathological" functions can only be specified by tabulating the
value of the dependent variable against all values of the independent
variable; some systems can only be described by tabulating the output
signal against all possible input signals.

Explicit and Implicit Description

When a closed form system description is provided, it may either be


classified as an explicit description or implicit one.

For an explicit description, it is possible to express the output at a
point purely in terms of the input signal. Hence, when the input is
known, it is easy to find the output of the system. In the case of an
explicit description, it is clear to see the relationship between the
input and the output, e.g.

y(t) = {x(t)}² + x(t-5)

In case the system has an implicit description, it is harder to see the
input-output relationship. An example of an implicit description is

y(t) - y(t-1) x(t) = 1

So when the input is provided, we are not directly able to calculate
the output at that instant (since the output at 't-1' also needs to be
known). Although in this case too there are methods to obtain the
output based solely on the input, or to convert this implicit
description into an explicit one, the description by itself is in
implicit form.
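The practical difference shows up when computing outputs. Below is a Python sketch using discrete-time analogues of the two descriptions above; the unit-step test input and the initial-rest condition y[-1] = 0 are our own assumptions, not given in the text:

```python
# Explicit description: the output is computed directly from the input.
def explicit(x, n):
    return x(n) ** 2 + x(n - 5)      # discrete analogue of y = x^2 + x(t-5)

# Implicit description y[n] - y[n-1]*x[n] = 1: the output at n needs
# y[n-1], so we unroll the recursion from an assumed initial rest
# condition y[-1] = 0 (this initial condition is our own assumption).
def implicit(x, n):
    y = 0                             # y[-1] = 0 (initial rest)
    for k in range(n + 1):
        y = 1 + y * x(k)              # y[k] = 1 + y[k-1]*x[k]
    return y

x = lambda n: 1 if n >= 0 else 0      # unit step input
print(explicit(x, 10))                # 1**2 + 1 -> 2
print(implicit(x, 3))                 # y = 1, 2, 3, 4 -> 4
```

Note how the explicit system needs only two input values per output, while the implicit one had to be unrolled from an initial condition.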

The mapping involved in systems

We shall next discuss the idea of mapping in a system in a


little more depth.

A signal maps an element in one set to an element in another. A system,
on the other hand, maps a whole signal in one set to a signal in
another. That is why a system is called a mapping over mappings.
Therefore, the value of the output signal at any instant of time
(remember "time" is merely symbolic) in general depends on the whole
input signal. Thus, even if the independent variable for the input and
output signals is the same (say time t), do not assume that the value
of the output signal at, say, t = 5 depends only on the value of the
input signal at t = 5.

For example, consider the system with description

y(t) = ∫ (from -∞ to t) x(τ) dτ

The output at, say, t = 5 depends on the values of the input signal for
all t ≤ 5.

Henceforth, we shall call systems with both input and output signals
being continuous-time continuous-time systems, and those with both
input and output signals being discrete-time discrete-time systems.
Those that do not fall into either of these classes (i.e. input
discrete-time and output continuous-time, or vice versa) we shall call
hybrid systems.

Recap

A system is a mapping across signals, in other words a mapping across
mappings.
In signals and systems terminology, we say that corresponding to every
possible input signal, a system "produces" an output signal.
For an explicit description, it is possible to express the output at a
point purely in terms of the input signal.
In case the system has an implicit description, when the input is
provided we may not be able to calculate the output directly; some
further mathematical work may be needed.

Review Questions:
1. What is a system? Give example.
2. Explain implicit and explicit description of a system.

1.7 Properties of systems:


i) Memory
ii) Linearity
iii) Shift-invariance
iv) Stability
v) Causality

i) Memory:

Memory is a property relevant only to systems whose input and output


signals have the same independent variable. A system is said to be
memoryless if its output for each value of the independent
variable is dependent only on the input signal at that value of
independent variable. For example the system with description :
y(t) = 5x(t) ( y(t) is the output signal corresponding to input signal
x(t) ) is memoryless. In the physical world a resistor can be considered
to be a memoryless system (with voltage considered to be the input
signal, current the output signal).

By definition, a system that does not have this property is said to have
memory.

How can we identify if a system has memory?

For a memoryless system, changing the input at any instant can
change the output only at that instant. If, in some case, a change in
input signal at some instant changes the output at some other instant,
we can be sure that the system has memory.

Examples:

Assume y[n] and y(t) are respectively the outputs corresponding to
input signals x[n] and x(t).

1. The identity system y(t) = x(t) is of course memoryless.

2. The system with description y[n] = x[n-5] has memory: the output at
any "instant" depends on the input 5 "instants" earlier.

3. The system with description

y[n] = Σ (k = -∞ to n) x[k]

also has memory: the output at any instant depends on all past and
present inputs.
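The "change the input at one instant, watch where the output changes" test for memory can be carried out directly. A Python sketch (the impulse/zero input pair and the finite lower limit of the accumulator are our own illustrative choices):

```python
# Memoryless: output at n depends only on the input at n.
def memoryless(x, n):
    return 5 * x(n)                   # y[n] = 5 x[n]

# With memory: output at n depends on an earlier input value.
def delayed(x, n):
    return x(n - 5)                   # y[n] = x[n-5]

def accumulator(x, n):                # running sum, summed here from k = 0
    return sum(x(k) for k in range(n + 1))

x1 = lambda n: 1 if n == 0 else 0     # impulse input
x2 = lambda n: 0                      # differs from x1 only at n = 0

# Changing the input at n = 0 changes the memoryless output only at n = 0 ...
assert memoryless(x1, 3) == memoryless(x2, 3)
assert memoryless(x1, 0) != memoryless(x2, 0)
# ... but changes the delayed and accumulated outputs at later instants too,
# which proves those systems have memory.
assert delayed(x1, 5) != delayed(x2, 5)
assert accumulator(x1, 3) != accumulator(x2, 3)

print("memory checks passed")
```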

ii) Linearity:

Now we come to one of the most important and revealing properties


systems may have - Linearity. Basically, the principle of linearity is
equivalent to the principle of superposition, i.e. a system can be said to
be linear if, for any two input signals, their linear combination yields as
output the same linear combination of the corresponding output
signals.

Definition:

A system is linear if, for any two input signals x1(t) and x2(t) with
corresponding outputs y1(t) and y2(t), and any constants a and b, the
input a·x1(t) + b·x2(t) produces the output a·y1(t) + b·y2(t).

(It is not necessary for the input and output signals to have the same
independent variable for linearity to make sense. The definition for
systems with input and/or output signals being discrete-time is
similar.)

Example of linearity

A capacitor, an inductor, a resistor or any combination of these are all
linear systems, if we consider the voltage applied across them as an
input signal, and the current through them as an output signal. This is
because these simple passive circuit components follow the principle
of superposition within their ranges of operation.

Additivity and Homogeneity:

Linearity can be thought of as consisting of two properties:

• Additivity

A system is said to be additive if, for any two input signals x1(t) and
x2(t),

x1(t) + x2(t) → y1(t) + y2(t)

i.e. the output corresponding to the sum of any two inputs is the sum
of the two outputs.

• Homogeneity (Scaling)

A system is said to be homogeneous if, for any input signal x(t) and
any constant a,

a x(t) → a y(t)

i.e. scaling any input signal scales the output signal by the same
factor.

To say a system is linear is equivalent to saying the system obeys both


additivity and homogeneity.

a) We shall first prove that homogeneity and additivity imply
linearity.

Let x1(t) → y1(t) and x2(t) → y2(t). By homogeneity, a·x1(t) → a·y1(t)
and b·x2(t) → b·y2(t). By additivity, a·x1(t) + b·x2(t) → a·y1(t) +
b·y2(t), which is exactly the linearity condition.

b) To prove linearity implies homogeneity and additivity:

This is easy; put both constants equal to 1 in the definition to get
additivity, and one of them equal to 0 to get homogeneity.

Additivity and homogeneity are independent properties.

We can prove this by finding examples of systems which are additive but
not homogeneous, and vice versa.

Again, y(t) is the response of the system to the input x(t).

Example of a system which is additive but not homogeneous:

y(t) = Re{x(t)}

(It is homogeneous for real constants but not complex ones - consider
scaling the input by the constant j.)

Example of a system which is homogeneous but not additive; a standard
example is:

y(t) = x(t)² / x(t-1)
Examples of Linearity:

Assume y[n] and y(t) are respectively the outputs corresponding to
input signals x[n] and x(t).

1) The system with description y(t) = t · x(t) is linear.

Consider any two input signals x1(t) and x2(t), with corresponding
outputs y1(t) and y2(t), and let a and b be arbitrary constants. The
output corresponding to a·x1(t) + b·x2(t) is

t · (a·x1(t) + b·x2(t)) = a · (t·x1(t)) + b · (t·x2(t))

which is the same linear combination of y1(t) and y2(t). Hence proved.

2) The system with description is not linear. See for yourself that the
system is neither additive nor homogeneous.

Show for yourself that systems with the following descriptions are
linear:
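A superposition check like the one in example 1 can also be run numerically: feed a linear combination of two inputs through the system and compare with the same combination of the individual outputs. A Python sketch (the systems y[n] = n·x[n] and y[n] = x[n]², and the test inputs, are our own illustrative choices):

```python
# Numerically check superposition for y[n] = n*x[n] (linear)
# versus y[n] = x[n]**2 (nonlinear).
def respond(system, x, ns):
    return [system(x, n) for n in ns]

linear_sys    = lambda x, n: n * x(n)
nonlinear_sys = lambda x, n: x(n) ** 2

x1 = lambda n: n + 1
x2 = lambda n: 2 * n - 3
a, b = 2, 5
combo = lambda n: a * x1(n) + b * x2(n)
ns = range(-4, 5)

for sys, should_hold in [(linear_sys, True), (nonlinear_sys, False)]:
    lhs = respond(sys, combo, ns)                          # response to a*x1 + b*x2
    rhs = [a * u + b * v
           for u, v in zip(respond(sys, x1, ns), respond(sys, x2, ns))]
    assert (lhs == rhs) == should_hold                     # superposition holds iff linear

print("linearity checks passed")
```

This is only a spot check on a few inputs, not a proof; a proof has to argue for all inputs and constants, as in example 1 above.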

iii) Shift Invariance

This is another important property applicable to systems with the same


independent variable for the input and output signal. We shall first
define the property for continuous time systems and the definition for
discrete time systems will follow naturally.

Definition:

Say, for a system, the input signal x(t) gives rise to an output signal
y(t). If the input signal x(t - t0) gives rise to the output y(t - t0),
for every t0 and every possible input signal, we say the system is
shift invariant. That is, for every permissible x(t) and every t0,

x(t - t0) → y(t - t0)

In other words, for a shift-invariant system, shifting the input signal
shifts the output signal by the same offset.

Note this is not to be expected from every system. x(t) and x(t - t0)
are different (related by a shift, but different) input signals, and a
system, which simply maps one set of signals to another, need not at
all map x(t) and x(t - t0) to output signals that are also related by a
shift of t0.

A system that does not satisfy this property is said to be shift
variant.

Examples of Shift Invariance:

Assume y[n] and y(t) are respectively the outputs corresponding to
input signals x[n] and x(t).
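The defining test is to compare the response to a shifted input with the shifted response to the original input. A Python sketch for two discrete-time systems (the systems y[n] = x[n]² and y[n] = n·x[n], the test input, and the shift n0 = 3 are our own illustrative choices):

```python
# Shift invariance: input x[n - n0] must produce output y[n - n0].
square = lambda x, n: x(n) ** 2    # y[n] = x[n]^2  -> shift invariant
ramp   = lambda x, n: n * x(n)     # y[n] = n*x[n]  -> shift VARIANT
                                   # (the explicit n in the rule breaks it)
x = lambda n: n + 1
n0 = 3
shifted_x = lambda n: x(n - n0)
ns = range(-5, 6)

for sys, invariant in [(square, True), (ramp, False)]:
    out_of_shifted = [sys(shifted_x, n) for n in ns]   # response to x[n - n0]
    shifted_out    = [sys(x, n - n0) for n in ns]      # y[n - n0]
    assert (out_of_shifted == shifted_out) == invariant

print("shift-invariance checks passed")
```

The pattern to notice: a rule that mentions n explicitly (like n·x[n]) treats different instants differently, so it cannot be shift invariant.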

iv) Stability

Let us learn about one more important system property known as


stability. Most of us are familiar with the word stability, which
intuitively means resistance to change or displacement. Broadly
speaking, a stable system is one in which small inputs lead to
predictable responses that do not diverge, i.e. are bounded. To get a
qualitative idea, let us consider the following physical example.

Example

Consider an ideal mechanical spring (elongation proportional to


tension). If we consider tension in the spring as a function of time as
the input signal and elongation as a function of time to be the output
signal, it would appear intuitively that the system is stable. A small
tension leads only to a finite elongation.

There are various ideas/notions of stability, not all of which are
equivalent. We shall now introduce the notion of BIBO stability, i.e.
BOUNDED INPUT-BOUNDED OUTPUT STABILITY.

Statement:

A system is BIBO stable if every bounded input produces a bounded
output: if |x(t)| ≤ Mx < ∞ for all t, then there exists an My < ∞ such
that |y(t)| ≤ My for all t.

Note: This should be true for all bounded inputs x(t).

Examples

Consider systems with the following descriptions. y(t) is the output
signal corresponding to the input signal x(t).

BIBO stable system: In a BIBO stable system, every bounded input is
assured to give a bounded output. An unbounded input can give either a
bounded or an unbounded output, i.e. nothing can be said for sure.

BIBO unstable system: In a BIBO unstable system, there exists at least
one bounded input for which the output is unbounded. Again, nothing can
be said about the system's response to an unbounded input.
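The contrast can be demonstrated numerically with one stable and one unstable discrete-time system fed the same bounded input. The systems below (a simple gain and a running accumulator) and the constant input are our own illustrative choices; only a formal argument, not a finite simulation, proves unboundedness, but the growth trend is clearly visible:

```python
# BIBO stability: a bounded input must give a bounded output.
def stable(x, n):                    # y[n] = x[n]/2 : output bound = half the input bound
    return x(n) / 2

def accumulator(x, n):               # y[n] = sum over k = 0..n of x[k] : BIBO unstable
    return sum(x(k) for k in range(n + 1))

bounded = lambda n: 1.0              # |x[n]| <= 1 for all n

outs_stable = [abs(stable(bounded, n)) for n in range(200)]
outs_acc    = [abs(accumulator(bounded, n)) for n in range(200)]

assert max(outs_stable) <= 0.5       # stays bounded for this bounded input
assert outs_acc[-1] > outs_acc[0]    # the accumulator output keeps growing

print("BIBO demonstration done")
```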

v) Causality

Causality refers to cause and effect relationship (the effect follows the
cause). In a causal system, the value of the output signal at any
instant depends only on "past" and "present" values of the input signal
(i.e. only on values of the input signal at "instants" less than or equal
to that "instant"). Such a system is often referred to as being non-
anticipative, as the system output does not anticipate future values
of the input (remember again the reference to time is merely
symbolic). As you might have realized, causality as a property is
relevant only for systems whose input and output signals have the
same independent variable. Further, this independent variable
must be ordered (it makes no sense to talk of "past" and "future"
when the independent variable is not ordered).

What this means mathematically is that if two inputs to a causal
(continuous-time) system are identical up to some time t0, the
corresponding outputs must also be equal up to that same time. (We
define the property for continuous-time systems; the definition for
discrete-time systems is then obvious.)

Definition

Let x1(t) and x2(t) be two input signals to a system and y1(t) and
y2(t) be their respective outputs.

The system is said to be causal if and only if:

x1(t) = x2(t) for all t ≤ t0 implies y1(t) = y2(t) for all t ≤ t0

This of course is only another way of stating what we said before: for
any t0, y(t0) depends only on the values of x(t) for t ≤ t0.

As an example of the behavior of causal systems, consider the figure
below: the two input signals in the figure are identical up to the
point t = t0, and the system being causal, their corresponding outputs
are also identical up to the point t = t0.

Examples of Causal systems

Assume y[n] and y(t) are respectively the outputs corresponding to


input signals x[n] and x(t)

1. The system with description y[n] = x[n-1] + x[n] is clearly causal, as the output "at" n depends only on values of the input "at instants" less than or equal to n (in this case, n and n-1).

2. Similarly, the continuous-time system with description y(t) = ∫ x(τ) dτ, with the integral running from -∞ to t, is causal, as the value of the output at any time t0 depends only on values of the input at t0 and before.

3. But the system with description y[n] = x[n+1] is not causal, as the output at n depends on the input one instant later.
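These examples can be checked numerically. The sketch below (an illustration of my own, which assumes the input is zero outside its recorded range) feeds two inputs that agree up to n0 = 5 into the causal system y[n] = x[n-1] + x[n] and the non-causal system y[n] = x[n+1], and compares the outputs up to n0.

```python
# Sketch (assumed boundary convention: x[n] = 0 outside the recorded range):
# outputs of a causal system must agree wherever the inputs agree so far.

def causal_sys(x):
    # y[n] = x[n-1] + x[n]
    return [(x[n - 1] if n > 0 else 0) + x[n] for n in range(len(x))]

def noncausal_sys(x):
    # y[n] = x[n+1]
    return [x[n + 1] if n + 1 < len(x) else 0 for n in range(len(x))]

n0 = 5
x1 = [1, 2, 3, 4, 5, 6, 0, 0, 0, 0]
x2 = [1, 2, 3, 4, 5, 6, 9, 9, 9, 9]   # agrees with x1 only for n <= n0

print(causal_sys(x1)[:n0 + 1] == causal_sys(x2)[:n0 + 1])        # True
print(noncausal_sys(x1)[:n0 + 1] == noncausal_sys(x2)[:n0 + 1])  # False: y[5] = x[6] differs
```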

Deductions from System Properties

Now that we have defined a few system properties, let us see how
powerful inferences can be drawn about systems having one or more
of these properties.

Theorem

Statement: If a system is additive or homogeneous, then x(t) = 0 (for all t) implies y(t) = 0 (for all t).

Proof:

Let H denote the system, so that y = H(x). If H is homogeneous, then for any input x, H(0) = H(0 · x) = 0 · H(x) = 0, so the response to the zero input is the zero signal. If instead H is additive, then H(0) = H(0 + 0) = H(0) + H(0), which forces H(0) = 0.

This completes the proof.

Theorem:

Statement: If a causal system is either additive or homogeneous, then y(t) cannot be non-zero before x(t) is non-zero.

Proof:

Say x(t) = 0 for all t less than or equal to t0.

We have to show that the system response y(t) = 0 for all t less than
or equal to t0.

Since the system is either additive or homogeneous the response to
the zero input signal is the zero output signal. The zero input signal
and x(t) are identical for all t less than or equal to t0.

Hence, from causality, their output signals are identical for all t less
than or equal to t0.
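The theorem can be observed numerically for a concrete causal system. The sketch below (my own example: convolution with an assumed causal impulse response h) shows the output remaining zero until the instant the input first becomes non-zero.

```python
# Sketch (assumed example, not from the text): a causal linear system,
# realized as convolution with a causal impulse response h, produces
# no output before the input turns on.

def convolve_causal(x, h):
    """y[n] = sum_k h[k] x[n-k], with h[k] = 0 for k < 0 (causal system)."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

h = [1.0, 0.5, 0.25]               # assumed causal impulse response
x = [0, 0, 0, 0, 2.0, 1.0, 0, 0]   # input is zero until n = 4

y = convolve_causal(x, h)
print(all(v == 0 for v in y[:4]))  # True: output stays zero before n = 4
```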

Review Questions:

1. Mention the properties of system.

2. What is meant by additivity and homogeneity?

3. Define BIBO stability.

4. What is a shift invariant system?

5. What is causality?

1.8 Linear Time Invariant System:

As the name suggests, the two basic properties of an LTI system are:

1) Linearity

A linear system (continuous or discrete time) is a system that possesses the property of SUPERPOSITION. The principle of superposition states that the response to a weighted sum of two or more inputs is the same weighted sum of the responses to each individual input. Mathematically, if yk[n] is the response to xk[n], then the response to the input Σk ak xk[n] is

y[n] = Σk ak yk[n] = a1y1[n] + a2y2[n] + ......

Superposition combines in itself the properties of ADDITIVITY and HOMOGENEITY. This is a powerful property and allows us to evaluate the response to an arbitrary input, if that input can be expressed as a sum of functions whose responses are known.
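Superposition can be verified numerically. In the sketch below (the two example systems are my own choices, not from the text), the linear system y[n] = 2x[n] - x[n-1] satisfies superposition for an arbitrary weighted combination of inputs, while the squaring system y[n] = x[n]^2 does not.

```python
# Sketch (assumed example systems): checking the superposition principle.
# A linear system passes the check; a nonlinear one fails it.

def linear_sys(x):
    # y[n] = 2 x[n] - x[n-1], with x[-1] taken as 0
    return [2 * x[n] - (x[n - 1] if n > 0 else 0) for n in range(len(x))]

def square_sys(x):
    # y[n] = x[n]^2 -- homogeneous of degree 2, hence not linear
    return [v ** 2 for v in x]

a1, a2 = 3.0, -2.0
x1 = [1.0, 0.0, 2.0, -1.0]
x2 = [0.5, 1.0, 0.0, 4.0]
combo = [a1 * u + a2 * v for u, v in zip(x1, x2)]   # a1 x1[n] + a2 x2[n]

def weighted_sum(y1, y2):
    return [a1 * u + a2 * v for u, v in zip(y1, y2)]

print(linear_sys(combo) == weighted_sum(linear_sys(x1), linear_sys(x2)))  # True
print(square_sys(combo) == weighted_sum(square_sys(x1), square_sys(x2)))  # False
```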

2) Time Invariance

Time invariance allows us to find the response to an input which is delayed or advanced in time but otherwise identical in shape to an input whose response is known: if the input is shifted by t0, the output is simply shifted by the same t0.

Given the response of a system to a particular input, these two
properties enable us to find the response to all its delays or advances
and their linear combination.
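Time invariance can likewise be checked numerically: shifting the input and then applying the system must give the same result as applying the system and then shifting the output. The sketch below (my own example, using the first-difference system y[n] = x[n] - x[n-1] and zero-padding on the left) performs exactly this comparison.

```python
# Sketch (assumed example system and zero-padding convention):
# "shift then apply" must equal "apply then shift" for a time-invariant system.

def diff_sys(x):
    # y[n] = x[n] - x[n-1], with x[-1] taken as 0
    return [x[n] - (x[n - 1] if n > 0 else 0) for n in range(len(x))]

def shift(x, k):
    """Delay x by k samples, padding with zeros on the left."""
    return [0] * k + x[:len(x) - k]

x = [0, 1, 4, 9, 16, 25, 0, 0]
k = 2
print(diff_sys(shift(x, k)) == shift(diff_sys(x), k))   # True: time invariant
```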

Review Questions:
1. Define continuous time signal.
2. Define discrete time signal.
3. Find whether the signal given by x(n)=5 cos(6π n) is periodic.
4. Determine the power and RMS value of the signal x(t) = e^(jωt) cos(ω0t).
5. State Parseval’s theorem for discrete time signal.
6. Check whether the system classified by y(t) = ex(t) is time invariant or
not.
7. Explain the concept of time scaling and time shifting.
8. State any two properties of discrete time systems.
9. Verify whether x(t) = Ae^(-αt) u(t), α > 0, is an energy signal or not.
10. What is an energy signal? Check whether or not the unit step signal is an energy signal.
11. What do you mean by an even signal and an odd signal?
12. Draw the signal x(n) = u(n) – u(n – 3).
13. For a signal x(t) = e^(-t) u(t), draw its reversed and scale-changed versions x(-t) and x(2t).
14. Define energy signal and power signal.
15. What is the relation between δ(t) and u(t)?
16. Define random signal.
17. Define deterministic signal.
18. Define unit step signal.
19. Define unit sample signal.
20. Define periodic and aperiodic signal.
21.When the discrete time signal is said to be even?
22.When the continuous time signal is said to be odd?
