
DIGITAL COMMUNICATIONS

CHAPTER 1: SIGNALS AND SPECTRA

INFORMATION REPRESENTATION
A communication system converts information into electrical, electromagnetic, or optical signals appropriate for the transmission medium.
- Analog systems convert the analog message into signals that can propagate through the channel.
- Analog signals are converted to bits by sampling and quantizing (A/D conversion).
- Digital systems convert bits (digits, symbols) into signals.
- Computers naturally generate information as characters/bits.

DIGITAL AND ANALOG


Analog Message: Characterized by data whose values vary over a continuous range.
- Examples: environmental quantities such as temperature and atmospheric pressure.

DIGITAL AND ANALOG


Digital Message: Constructed from a finite number of symbols.
- Printed language: 26 letters, 10 numerals, the space, punctuation marks, etc.
- Human speech: made up of a finite vocabulary.
- Binary and M-ary messages.

DIGITAL COMMUNICATION SYSTEM


Important features of a DCS:
- The transmitter sends a waveform from a finite set of possible waveforms during a limited time.
- The channel distorts and attenuates the transmitted signal and adds noise to it.
- The receiver decides which waveform was transmitted from the noisy received signal.
- The probability of bit error (Pb) is an important measure of system performance.

WHY DIGITAL COMMUNICATION?


Easy to regenerate the distorted signal
- Regenerative repeaters along the transmission path can detect a digital signal and retransmit a new, clean (noise-free) signal.
- These repeaters prevent the accumulation of noise along the path.
- This is not possible with analog communication systems.

WHY DIGITAL COMMUNICATION?


Immunity to distortion and interference
- More immune to channel noise and distortion.
- Two-state signal representation and operation facilitate signal regeneration.
Hardware flexibility
- Digital hardware implementation is flexible and permits the use of microprocessors, digital switching, VLSI, etc.
Low cost
- The use of LSI and VLSI in the design of components and systems has resulted in lower cost.
- Different types of digital signals (data, telephone, television) can be treated identically: a bit is a bit.
- Digital signals can be combined through multiplexing.
- Packet switching can be used.
- Encryption and privacy techniques are easier to implement.

Why digital?
Digital techniques need only distinguish between discrete symbols, which allows regeneration rather than amplification. Good processing techniques are available for digital signals, such as:
- Data compression (source coding)
- Error correction (channel coding)
- Equalization
- Security

Easy to mix signals and data using digital techniques

WHY DIGITAL COMMUNICATION?


Disadvantages
- Requires reliable synchronization.
- Signal-processing intensive.
- Non-graceful degradation.

General structure of a communication system

Information source -> Transmitter -> (transmitted signal) -> Channel (noise added) -> (received signal) -> Receiver -> (received information) -> User

Transmitter
- Formatter, source encoder, channel encoder, modulator

Receiver
- Formatter, source decoder, channel decoder, demodulator


What do we need to know before taking off toward designing a DCS?


- Classification of signals
- Random processes
- Autocorrelation
- Power and energy spectral densities
- Noise in communication systems
- Signal transmission through linear systems
- Bandwidth of signals

Classification Of Signals

- Deterministic and random signals
- Periodic and non-periodic signals
- Analog and discrete signals
- Energy and power signals
  - An energy signal has finite energy but zero average power.
  - A power signal has finite average power but infinite energy.

Deterministic and Random Signals


Deterministic signal: No uncertainty with respect to the signal value at any time.
- Deterministic signals or waveforms are modeled by explicit mathematical expressions, such as x(t) = 5 cos(10t).

Random signal: Some degree of uncertainty in signal values before they actually occur.
- For a random waveform it is not possible to write such an explicit expression.
- A random waveform (random process) may exhibit certain regularities that can be described in terms of probabilities and statistical averages.
- Examples: thermal noise in electronic circuits due to the random movement of electrons; reflection of radio waves from different layers of the ionosphere.

Periodic and Non-periodic Signals


A signal x(t) is called periodic in time if there exists a constant T0 > 0 such that

x(t) = x(t + T0)   for −∞ < t < ∞

where t denotes time and T0 is the period of the periodic signal.

Figures: a periodic signal and a non-periodic signal.

Analog and Discrete Signals


An analog signal x(t) is a continuous function of time; that is, x(t) is uniquely defined for all t. A discrete signal x(kT) is one that exists only at discrete times; it is characterized by a sequence of numbers defined for each time, kT, where k is an integer and T is a fixed time interval

Figures: a discrete signal and an analog signal.

Energy and Power Signals


The performance of a communication system depends on the received signal energy: higher energy signals are detected more reliably (with fewer errors) than are lower energy signals. An electrical signal can be represented as a voltage v(t) or a current i(t) with instantaneous power p(t) across a resistor defined by

p(t) = v²(t)/R   or   p(t) = i²(t)R

Energy and Power Signals


In communication systems, power is often normalized by assuming R to be 1 ohm. The normalization convention allows us to express the instantaneous power as

p(t) = x²(t)   (1.4)

where x(t) is either a voltage or a current signal. The energy dissipated during the time interval (−T/2, T/2) by a real signal with instantaneous power expressed by Equation (1.4) can then be written as

Ex(T) = ∫[−T/2, T/2] x²(t) dt

The average power dissipated by the signal during the interval is

Px(T) = (1/T) ∫[−T/2, T/2] x²(t) dt

Energy and Power Signals


We classify x(t) as an energy signal if, and only if, it has nonzero but finite energy (0 < Ex < ∞) for all time, where

Ex = lim(T→∞) ∫[−T/2, T/2] x²(t) dt = ∫[−∞, ∞] x²(t) dt

- An energy signal has finite energy but zero average power.
- Signals that are both deterministic and non-periodic are termed energy signals.

Energy and Power Signals


Power is the rate at which energy is delivered. We classify x(t) as a power signal if, and only if, it has nonzero but finite power (0 < Px < ∞) for all time, where

Px = lim(T→∞) (1/T) ∫[−T/2, T/2] x²(t) dt

- A power signal has finite average power but infinite energy.
- Signals that are random or periodic are termed power signals.
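The energy and power classifications above can be checked numerically. A minimal sketch, approximating Ex and Px by Riemann sums for a decaying pulse (an energy signal) and a unit-amplitude sinusoid (a power signal); the sample spacing and signal choices are illustrative, not from the text.

```python
import math

# Riemann-sum sketch of signal energy and average power, assuming the
# R = 1 ohm normalization from the text. Signals are illustrative.

dt = 1e-3  # sample spacing in seconds

def energy(samples, dt):
    """Approximate Ex = integral of x^2(t) dt by a Riemann sum."""
    return sum(v * v for v in samples) * dt

def avg_power(samples, dt):
    """Approximate Px = (1/T) * integral of x^2(t) dt over the record."""
    return energy(samples, dt) / (len(samples) * dt)

# Energy signal: decaying pulse e^(-t), t >= 0 (finite energy)
pulse = [math.exp(-k * dt) for k in range(20000)]
# Power signal: unit-amplitude 5 Hz sinusoid (finite average power)
tone = [math.cos(2 * math.pi * 5 * k * dt) for k in range(20000)]

print(energy(pulse, dt))    # close to 1/2, the exact integral of e^(-2t)
print(avg_power(tone, dt))  # close to 1/2, i.e. A^2/2 with A = 1
```

Lengthening the observation window would drive the pulse's average power toward zero and the sinusoid's total energy toward infinity, matching the two definitions.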

The Unit Impulse Function


The impulse function is an abstraction: an infinitely large amplitude pulse, with zero pulse width and unity weight (area under the pulse), concentrated at the point where its argument is zero. The unit impulse is characterized by the following relationships:

∫[−∞, ∞] δ(t) dt = 1
δ(t) = 0 for t ≠ 0
δ(t) is unbounded at t = 0
∫[−∞, ∞] x(t) δ(t − t0) dt = x(t0)   (sifting or sampling property)
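A discrete analogue of the sifting property can be demonstrated in a few lines: summing x[k]·δ[k − k0] with a Kronecker delta picks out the single sample x[k0], just as integrating x(t)δ(t − t0) yields x(t0). The sequence values and k0 below are arbitrary illustrative choices.

```python
# Discrete analogue of the sifting property: sum over k of
# x[k] * delta[k - k0] returns the single sample x[k0].

def kron_delta(n):
    """Kronecker delta: 1 at n == 0, else 0."""
    return 1.0 if n == 0 else 0.0

x = [3.0, 1.0, 4.0, 1.0, 5.0]
k0 = 2
sifted = sum(x[k] * kron_delta(k - k0) for k in range(len(x)))
print(sifted)   # -> 4.0, which is x[2]
```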

Spectral Density
The spectral density of a signal characterizes the distribution of the signal's energy or power in the frequency domain. This concept is particularly important when considering filtering in communication systems, where the signal and noise at the filter output must be evaluated. Either the energy spectral density (ESD) or the power spectral density (PSD) is used in the evaluation.

Energy Spectral Density (ESD)


Energy spectral density describes the signal energy per unit bandwidth, measured in joules/hertz. It is represented as ψx(f), the squared magnitude spectrum:

ψx(f) = |X(f)|²   (1.14)

According to Parseval's theorem, the energy of x(t) is

Ex = ∫[−∞, ∞] x²(t) dt = ∫[−∞, ∞] |X(f)|² df   (1.13)

Therefore:

Ex = ∫[−∞, ∞] ψx(f) df   (1.15)

The energy spectral density is symmetrical in frequency about the origin, so the total energy of the signal x(t) can be expressed as

Ex = 2 ∫[0, ∞] ψx(f) df   (1.16)
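Parseval's theorem (1.13) can be verified numerically with a DFT. For an N-point DFT the discrete statement is Σ|x[n]|² = (1/N)Σ|X[m]|²; the signal values below are arbitrary, and the O(N²) DFT is written out directly for clarity.

```python
import cmath

# Numerical check of Parseval's theorem: energy computed in the time
# domain equals energy computed from the spectrum. For an N-point DFT
# the discrete form is sum|x[n]|^2 = (1/N) * sum|X[m]|^2.

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / N)
                for n in range(N)) for m in range(N)]

x = [1.0, 2.0, 0.5, -1.0, 0.0, 3.0, -2.0, 1.5]
X = dft(x)

time_energy = sum(v * v for v in x)
freq_energy = sum(abs(V) ** 2 for V in X) / len(x)
print(time_energy, freq_energy)   # the two agree to rounding error
```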

Power Spectral Density (PSD)


The power spectral density (PSD) function Gx(f) of the periodic signal x(t) is a real, even, and nonnegative function of frequency that gives the distribution of the power of x(t) in the frequency domain. For a periodic signal it is a series of weighted impulses:

Gx(f) = Σ(n = −∞ to ∞) |Cn|² δ(f − nf0)   (1.18)

whereas the average power of a periodic signal x(t) is

Px = (1/T0) ∫[−T0/2, T0/2] x²(t) dt = Σ(n = −∞ to ∞) |Cn|²   (1.17)

Using the PSD, the average normalized power of a real-valued signal is

Px = ∫[−∞, ∞] Gx(f) df = 2 ∫[0, ∞] Gx(f) df   (1.19)
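Equation (1.17) can be checked for a concrete periodic signal. A sketch, assuming x(t) = 2 cos 2πf0t sampled over exactly one period, so that C1 = C−1 = 1 and the average power should come out as Σ|Cn|² = 2:

```python
import math, cmath

# Check of Eq. (1.17) for x(t) = 2 cos(2*pi*f0*t): its Fourier-series
# coefficients are C1 = C-1 = 1, so the average power is sum|Cn|^2 = 2.

N = 256
samples = [2 * math.cos(2 * math.pi * k / N) for k in range(N)]

def coeff(n):
    """Fourier-series coefficient Cn from one period of samples."""
    return sum(samples[k] * cmath.exp(-2j * math.pi * n * k / N)
               for k in range(N)) / N

px_time = sum(v * v for v in samples) / N      # (1/T0) * integral of x^2
px_lines = sum(abs(coeff(n)) ** 2 for n in range(-5, 6))
print(px_time, px_lines)   # both -> 2.0
```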

Autocorrelation
Autocorrelation of an Energy Signal
Correlation is a matching process; autocorrelation refers to the matching of a signal with a delayed version of itself. The autocorrelation function of a real-valued energy signal x(t) is defined as

Rx(τ) = ∫[−∞, ∞] x(t) x(t + τ) dt   for −∞ < τ < ∞   (1.21)

The autocorrelation function Rx(τ) provides a measure of how closely the signal matches a copy of itself as the copy is shifted τ units in time. Rx(τ) is not a function of time; it is only a function of the time difference τ between the waveform and its shifted copy.

Autocorrelation of an Energy Signal


The autocorrelation function of a real-valued energy signal has the following properties:

- Rx(τ) = Rx(−τ): symmetrical in τ about zero
- Rx(τ) ≤ Rx(0) for all τ: the maximum value occurs at the origin
- Rx(τ) ↔ ψx(f): autocorrelation and ESD form a Fourier transform pair, as designated by the double-headed arrow
- Rx(0) = ∫[−∞, ∞] x²(t) dt: the value at the origin is equal to the energy of the signal
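The four properties above can be verified on a sampled energy signal. A minimal sketch, assuming a two-sided exponential pulse and a Riemann-sum approximation of Rx(τ):

```python
import math

# Riemann-sum autocorrelation of a sampled energy signal, checking the
# listed properties: even symmetry, maximum at the origin, and
# Rx(0) equal to the signal energy. The pulse shape is illustrative.

dt = 0.01
x = [math.exp(-abs(k) * dt) for k in range(-500, 501)]  # two-sided pulse

def autocorr(x, lag):
    """Rx(lag) ~ sum of x[i] * x[i + lag] * dt, zero outside the record."""
    n = len(x)
    return sum(x[i] * x[i + lag]
               for i in range(max(0, -lag), min(n, n - lag))) * dt

energy = sum(v * v for v in x) * dt
print(autocorr(x, 0) == energy)              # True: Rx(0) is the energy
print(autocorr(x, 50) == autocorr(x, -50))   # True: Rx is even in the lag
print(autocorr(x, 0) >= autocorr(x, 50))     # True: maximum at the origin
```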

Autocorrelation of a Periodic (Power) Signal


The autocorrelation function of a real-valued power signal x(t) is defined as

Rx(τ) = lim(T→∞) (1/T) ∫[−T/2, T/2] x(t) x(t + τ) dt   for −∞ < τ < ∞

When the power signal x(t) is periodic with period T0, the autocorrelation function can be expressed as

Rx(τ) = (1/T0) ∫[−T0/2, T0/2] x(t) x(t + τ) dt

Autocorrelation of power signals


The autocorrelation function of a real-valued periodic signal has properties similar to those of an energy signal:
- Rx(τ) = Rx(−τ): symmetrical in τ about zero
- Rx(τ) ≤ Rx(0) for all τ: the maximum value occurs at the origin
- Rx(τ) ↔ Gx(f): autocorrelation and PSD form a Fourier transform pair
- Rx(0) = average power of the signal: the value at the origin is equal to the average power

Random Signal
Random Variables
- All useful signals are random; i.e., the receiver does not know a priori which waveform will be sent by the transmitter.
- Let a random variable X(A) represent the functional relationship between a random event A and a real number.
- The distribution function FX(x) of the random variable X is given by

FX(x) = P(X ≤ x)

- Another useful function relating to the random variable X is the probability density function (pdf), denoted

pX(x) = dFX(x)/dx

Ensemble (Statistical) Averages


- The first moment of the probability distribution of a random variable X is called the mean value mX, or expected value, of X.
- The second moment of a probability distribution is the mean-square value of X.
- Central moments are the moments of the difference between X and mX; the second central moment is the variance of X.
- The variance is equal to the difference between the mean-square value and the square of the mean.

Random process
A random process is a collection of time functions, or signals, corresponding to the various outcomes of a random experiment. For each outcome there exists a deterministic function, called a sample function or a realization.

A random process X(A, t) can be viewed as a function of two variables: an event A and time t. With t fixed, X(A, t) is a random variable (a real number); with A fixed, it is a sample function or realization (a deterministic function of time).

Statistical Averages of a Random Process


A random process whose distribution functions are continuous can be described statistically with a probability density function (pdf). A partial description consisting of the mean and the autocorrelation function is often adequate for the needs of communication systems.

Mean of the random process X(t):

E{X(tk)} = ∫[−∞, ∞] x pX(x) dx = mX(tk)   (1.30)

Autocorrelation function of the random process X(t):

RX(t1, t2) = E{X(t1) X(t2)} = ∫∫ x1 x2 pX(x1, x2) dx1 dx2   (1.31)

Stationarity
- A random process X(t) is said to be stationary in the strict sense if none of its statistics is affected by a shift in the time origin.
- A random process is cyclostationary if its mean and autocorrelation function are periodic in time.
- A random process is said to be wide-sense stationary (WSS) if two of its statistics, the mean and the autocorrelation function, do not vary with a shift in the time origin:

E{X(t)} = mX = a constant   (1.32)
RX(t1, t2) = RX(t1 − t2)   (1.33)

Time Averaging and Ergodicity


When a random process belongs to a special class, known as ergodic processes, its time averages equal its ensemble averages, and the statistical properties of the process can be determined by time averaging over a single sample function. A random process is ergodic in the mean if

mX = lim(T→∞) (1/T) ∫[−T/2, T/2] x(t) dt   (1.35a)

It is ergodic in the autocorrelation function if

RX(τ) = lim(T→∞) (1/T) ∫[−T/2, T/2] x(t) x(t + τ) dt   (1.35b)
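Time averaging over one long realization, per (1.35a) and (1.35b) with τ = 0, can be sketched as follows. The process used here (a random-phase sinusoid plus Gaussian noise) is an illustrative assumption; its theoretical mean is 0 and Rx(0) = 1/2 + σ² = 0.75.

```python
import math, random

# Time-average sketch of an assumed-ergodic WSS process: a sinusoid
# with a random phase plus zero-mean Gaussian noise (sigma = 0.5).
# One long realization stands in for the ensemble.

random.seed(1)
N = 200_000
phase = random.uniform(0, 2 * math.pi)
x = [math.cos(0.05 * n + phase) + random.gauss(0.0, 0.5) for n in range(N)]

mean_est = sum(x) / N                # time-average estimate of the mean
r0_est = sum(v * v for v in x) / N   # time-average estimate of Rx(0)
print(mean_est, r0_est)              # near 0 and near 0.75
```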

Noise in Communication Systems


The term noise refers to unwanted electrical signals that are always present in electrical systems, e.g. spark-plug ignition noise, switching transients, and other radiating electromagnetic signals from the atmosphere, the sun, and other galactic sources. Thermal noise can be described as a zero-mean Gaussian random process: a random function n(t) whose amplitude at any arbitrary time t is statistically characterized by the Gaussian probability density function

p(n) = (1/(σ√(2π))) exp[−(1/2)(n/σ)²]

The normalized or standardized Gaussian density function of a zero-mean process is obtained by assuming unit variance (σ = 1).
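A quick numerical sanity check of the Gaussian density: the pdf above should integrate to 1, and simulated zero-mean, unit-variance noise samples should show matching sample statistics. The sample count and integration limits below are illustrative choices.

```python
import math, random

# Sanity checks on the zero-mean Gaussian pdf
# p(n) = (1 / (sigma * sqrt(2*pi))) * exp(-n^2 / (2*sigma^2)).

def gauss_pdf(n, sigma=1.0):
    return math.exp(-n * n / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Numerical integration of the standardized (sigma = 1) density on [-6, 6]
area = sum(gauss_pdf(-6 + k * 0.01) for k in range(1200)) * 0.01
print(area)   # close to 1.0

random.seed(7)
noise = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(sum(noise) / len(noise))                 # sample mean, near 0
print(sum(v * v for v in noise) / len(noise))  # sample variance, near 1
```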

White Noise
The primary spectral characteristic of thermal noise is that its power spectral density is the same for all frequencies of interest in most communication systems:

Gn(f) = N0/2   (watts/hertz)

The autocorrelation function of white noise is the inverse Fourier transform of the PSD:

Rn(τ) = (N0/2) δ(τ)

The average power Pn of white noise is infinite, since its bandwidth is infinite.

The effect on the detection process of a channel with additive white Gaussian noise (AWGN) is that the noise affects each transmitted symbol independently; such a channel is called a memoryless channel. The term "additive" means that the noise is simply superimposed on, or added to, the signal.

Signal transmission through linear systems

- A system can be characterized equally well in the time domain or the frequency domain.
- The system is assumed to be linear and time-invariant.
- It is also assumed to have no stored energy at the time the input is applied.

Input x(t) -> [Linear system] -> Output y(t)

Impulse Response
The linear time-invariant system or network is characterized in the time domain by its impulse response h(t), the response to a unit impulse δ(t) at the input.

The response of the network to an arbitrary input signal x(t) is found by the convolution of x(t) with h(t):

y(t) = x(t) * h(t) = ∫[−∞, ∞] x(τ) h(t − τ) dτ

The system is assumed to be causal, which means that there can be no output prior to the time t = 0 when the input is applied, so the convolution integral can be expressed as

y(t) = ∫[0, ∞] x(t − τ) h(τ) dτ
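The convolution sum is the discrete-time counterpart of the integral above: y[n] = Σk h[k] x[n − k]. A minimal sketch with an assumed 3-tap moving-average impulse response and a step input:

```python
# Discrete-time counterpart of the convolution integral:
# y[n] = sum over k of h[k] * x[n - k]. Impulse response and input
# are illustrative choices.

def convolve(x, h):
    """Full convolution of finite sequences (length len(x)+len(h)-1)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

h = [1 / 3, 1 / 3, 1 / 3]   # impulse response: 3-tap moving average
step = [1.0] * 6            # input: a short unit step
y = convolve(step, h)
print(y)   # ramps up over len(h) samples, then settles at 1.0
```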

Frequency Transfer Function


The frequency-domain output signal Y(f) is obtained by taking the Fourier transform of the convolution, Y(f) = X(f) H(f). The frequency transfer function, or frequency response, is defined as

H(f) = Y(f)/X(f)

and the phase response is defined as

θ(f) = tan⁻¹ [Im{H(f)} / Re{H(f)}]
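H(f) and θ(f) can be evaluated numerically from an impulse response. The 3-tap h below is an assumed example; its DTFT is computed at a few normalized frequencies (cycles/sample):

```python
import math, cmath

# Numerical evaluation of the frequency transfer function: the DTFT of
# an assumed impulse response h, with magnitude |H(f)| and phase
# theta(f) = atan2(Im H, Re H).

h = [0.25, 0.5, 0.25]   # illustrative 3-tap low-pass impulse response

def H(f):
    """DTFT of h at normalized frequency f (cycles/sample)."""
    return sum(hk * cmath.exp(-2j * math.pi * f * k) for k, hk in enumerate(h))

for f in (0.0, 0.25, 0.5):
    Hf = H(f)
    mag = abs(Hf)
    theta = math.atan2(Hf.imag, Hf.real)   # phase response in radians
    print(f, round(mag, 4), round(theta, 4))
# |H| falls from 1 at f = 0 to 0 at f = 0.5: a low-pass response
```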

Random Processes and Linear Systems


If a random process forms the input to a time-invariant linear system, the output will also be a random process. The input power spectral density GX(f) and the output power spectral density GY(f) are related as

GY(f) = GX(f) |H(f)|²

Ideal distortionless transmission


- The overall system response must have a constant magnitude response.
- The phase shift must be linear with frequency.
- All of the signal's frequency components must arrive with identical time delay in order to add up correctly.
- The time delay t0 is related to the phase shift θ(f) and the radian frequency ω = 2πf by

t0 = −θ(f)/(2πf)

Another characteristic often used to measure the delay distortion of a signal is the envelope delay, or group delay:

τ(f) = −(1/(2π)) dθ(f)/df

Ideal Filters
The ideal low-pass filter transfer function with bandwidth Wf = fu hertz can be written as

H(f) = |H(f)| e^(−jθ(f))

where

|H(f)| = 1 for |f| < fu and |H(f)| = 0 for |f| ≥ fu,   with θ(f) = 2πf t0

Ideal Filters
The impulse response of the ideal low-pass filter is

h(t) = 2fu sinc 2fu(t − t0),   where sinc x = (sin πx)/(πx)

The response is nonzero for t < 0, so the ideal filter is noncausal and therefore not realizable.
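The sinc-shaped impulse response can be evaluated directly. A sketch taking t0 = 0 for simplicity and an illustrative 1 kHz cutoff; note the nonzero response before t = 0, which is what makes the ideal filter unrealizable:

```python
import math

# Ideal low-pass impulse response h(t) = 2*fu*sinc(2*fu*t), with t0 = 0
# and sinc(x) = sin(pi*x)/(pi*x). The cutoff fu is illustrative.

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def h_ideal(t, fu):
    return 2 * fu * sinc(2 * fu * t)

fu = 1000.0
print(h_ideal(0.0, fu))            # peak value 2*fu = 2000.0
print(h_ideal(1 / (2 * fu), fu))   # first zero crossing, essentially 0
print(h_ideal(-1e-4, fu))          # clearly nonzero before t = 0
```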

Realizable Filters
The simplest example of a realizable low-pass filter is made up of a resistance R and a capacitance C, with transfer function

H(f) = 1/(1 + j2πfRC)

Realizable Filters
The phase response of this realizable (RC) filter is

θ(f) = −tan⁻¹(2πfRC)
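The RC filter's magnitude and phase responses follow directly from H(f) = 1/(1 + j2πfRC). The component values below are illustrative, chosen so the corner frequency f0 = 1/(2πRC) is near 1 kHz; at f0 the magnitude is 1/√2 (the half-power point) and the phase is −45 degrees.

```python
import math, cmath

# First-order RC low-pass response H(f) = 1 / (1 + j*2*pi*f*R*C).
# Component values are illustrative, not from the text.

R = 1_000.0      # ohms (assumed)
C = 159.15e-9    # farads (assumed)

def H(f):
    return 1 / (1 + 2j * math.pi * f * R * C)

f0 = 1 / (2 * math.pi * R * C)
print(round(f0, 1))                                # corner frequency in Hz
print(round(abs(H(f0)), 4))                        # -> 0.7071
print(round(math.degrees(cmath.phase(H(f0))), 1))  # -> -45.0
```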

Bandwidth Of Digital Data


Baseband versus bandpass
- An easy way to translate the spectrum of a low-pass or baseband signal x(t) to a higher frequency is to multiply, or heterodyne, the baseband signal with a carrier wave cos 2πfc t.
- The resulting waveform, xc(t), is called a double-sideband (DSB) modulated signal and is expressed as

xc(t) = x(t) cos 2πfc t

- From the frequency-shifting theorem,

Xc(f) = (1/2) [X(f − fc) + X(f + fc)]

- Generally the carrier-wave frequency is much higher than the bandwidth of the baseband signal, fc >> fm, and therefore W_DSB = 2fm.

Bandwidth
- Theorems of communication and information theory are based on the assumption of strictly band-limited channels.
- The mathematical description of a real signal does not permit the signal to be both strictly duration-limited and strictly band-limited.
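The frequency-shifting theorem can be verified with a DFT: heterodyning a baseband tone at fm with a carrier at fc produces spectral lines at fc − fm and fc + fm, i.e. a DSB bandwidth of 2fm. All frequencies below are illustrative and chosen to fall on integer DFT bins so there is no spectral leakage.

```python
import math, cmath

# DFT check of the frequency-shifting theorem: a tone at fm multiplied
# by cos(2*pi*fc*t) yields lines at fc - fm and fc + fm.

N = 256
fs = 256.0            # sample rate (Hz), so bin spacing is fs/N = 1 Hz
fm, fc = 8.0, 64.0    # baseband and carrier frequencies (Hz)

x = [math.cos(2 * math.pi * fm * n / fs) for n in range(N)]
xc = [v * math.cos(2 * math.pi * fc * n / fs) for n, v in enumerate(x)]

def dft_mag(sig, m):
    """Magnitude of DFT bin m."""
    return abs(sum(s * cmath.exp(-2j * math.pi * m * n / N)
                   for n, s in enumerate(sig)))

peaks = [m for m in range(N // 2) if dft_mag(xc, m) > 1.0]
print(peaks)   # -> [56, 72], i.e. fc - fm and fc + fm
```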

All bandwidth criteria have in common the attempt to specify a measure of the width, W, of a nonnegative real-valued spectral density defined for all frequencies |f| < ∞.

The single-sided power spectral density for a single heterodyned pulse xc(t) of duration T takes the analytical form

Gx(f) = T [sin π(f − fc)T / (π(f − fc)T)]²

Different Bandwidth Criteria


- Half-power bandwidth
- Null-to-null bandwidth
- Fractional power containment bandwidth
- Bounded power spectral density
- Absolute bandwidth
