eg Sey Bete Digital Signal Processing S Salivahanan Professor and Head Electronics and Communication Engineering Department Mepco Schlenk Engineering College Sivakasi A Vallavaraj Senior Lecturer Department of Electronics Engineering Caledonian College of Engineering Sultanate of Oman C Gnanapriya Infosys Technologies Limited Bangalore Tata McGraw-Hill Publishing Company Limited NEW DELHI McGraw-Hill Offices New Delhi New York St Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico City Milan Montreal San Juan Singapore Sydney Tokyo Toronto Information contained in this work has been obtained by Tata McGraw-Hill, from sources believed to be reliable. However, neither Tata McGraw-Hill nor its authors guarantee the accuracy or completeness of any information published herein, and neither Tata McGraw-Hill nor its authors shall be responsible for any errors, omissions, or damages arising out of use of this information. This work is published with the understanding that Tata McGraw-Hill and its authors are supplying information but are not attempting to render engineering or other professional services. If such services are required, the assistance of an appropriate professional should be sought. INA Tata McGraw-Hill © 2000, Tata McGraw-Hill Publishing Company Limited 21" reprint 2007 DZLCRRXYRCYZR No part of this publication may be reproduced in any form or by any ‘means without the prior written permission of the publishers This edition can be exported from India by the publishers, Tata McGraw-Hill Publishing Company Limited ISBN 0-07-463996-X Published by Tata McGraw-Hill Publishing Company Limited, 7 West Patel Nagar, New Delhi 110 008, typeset at Anvi Composers, A1/33 Pashchim Vihar, New Delhi 110 063 and printed at A P Offset Pvt. Ltd., Naveen Shahdara, Delhi 110 032 sca ne EE Contents Foreword v Preface vii 1, Classification of Signals and Systems i 1.1 Introduction 1 1.2 Classification of Signals 3 1.3 Singularity Functions 9 1.4 Amplitude and Phase Spectra 15 1.5 Classification of Systems 17 1.6 Simple Manipulations of Discrete-time Signals 21 1.7. Representations of Systems 23 1.8 Analog-to-Digital Conversion of Signals 28 Review Questions 37 2. Fourier Analysis of Periodic and Aperiodic Continuous-Time Signals and Systems 40 2.1 Introduction 40 2.2 _Trigonometric Fourier Series 41 2.3 Complex or Exponential form of Fourier Series 52 2.4 Parseval’s Identity for Fourier Series 58 2.5 Power Spectrum ofa Periodic Function 59 2.6 Fourier Transform 62 2.7 Properties of Fourier Transform 64 2.8 Fourier Transform of Some Important Signals 75 2.9 Fourier Transform of Power and Energy Signals 103 Review Questions 119 3. Applications of Laplace Transform to System Analysis 127 3.1 Introduction _127 3.2 Definition 128 3.3 Region of Convergence (ROC) 128 3.4 Laplace Transforms of Some Important Functions 129 3.6 Convolution Integral 138 3.7 Table of Laplace Transforms 142 3.8 Partial Fraction Expansions 144 x Contents 3.9 Network Transfer Function _146 3.10 _s-plane Poles and Zeros 147 3.11 Laplace Transform of Periodic Functions 154 3.12 Application of Laplace Transformation in Analysing Networks 157 Review Questions 183 4, z-Transforms 9. SSC‘; 4.1 Introduction _193 4.3 Properties of z-transform 203 44 Evaluation of the Inverse z-transform 213 Review Questions 228 5. 
Linear Time Invariant Systems 236 5.1 Introduction 236 5.2 Properties of a DSP System 238 5.3 Difference Equation and its Relationship with System Function, Impulse Response and Frequency Response 256 5.4 Frequency Response 260 Review Questions 272 6._Di Fad hens f 279 6.1 Introduction 279 6.2 Discrete Convolution 279 6.3 Discrete-Time Fourier Transform (DTFT) 305 6.4 Fast Fourier Transform (FFT) 319 6.5 Computing an Inverse DFT by Doing a Direct DFT 344 6.6 Composite-radix FFT 352 6.7 Fast (Sectioned) Convolution 368 6.8 Correlation 373 Review Questions 376 7. Finite Impulse Response (FIR) Filters 380 7.1 Introduction 380 7.2 Magnitude Response and Phase Response of Digital Filters 381 7.3 Frequency Response of Linear Phase FIR Filters 384 7.4 Design Techniques for FIR Filters 385 7.5 Design of Optimal Linear Phase FIR Filters 409 Review Questions 414 8. Infinite Impulse Response (IIR) Filters 417 8.1 Introduction 417 8.2 IIR Filter Design by Approximation of Derivatives 418 Contents xi 8.3 8.4 8.5 8.6 8.7 8.8 8.9 IIR Filter Design by Impulse Invariant Method 423 TIR Filter Design by the Bilinear Transformation 427 Butterworth Filters 432 Chebyshev Filters 439 Inverse Chebyshev Filters 444 Elliptic Filters 445 Frequency Transformation 446 Review Questions 450 9. Realisation of Digital Linear Systems 453 10. il. 9.1 9.2 9.3 9.4 Introduction 453 Basic Realisation Block Diagram and the Signal-flow Graph 453 Basic Structures for IIR Systems 455 Basic Structures for FIR Systems 482 Review Questions 489 Effects of Finite Word Length in Digital Filters 496 10.1 10.2 10.3 10.4 10.5 10.6 10.7 10.8 10.9 Introduction 496 Rounding and Truncation Errors 496 Quantisation Effects in Analog-to-Digital Conversion of Signals 499 Output Noise Power from a Digital System 502 Coefficient Quantisation Effects in Direct Form Realisation of IIR filters 505 Coefficient Quantisation in Direct Form Realisation of FIR Filters 508 Limit Cycle Oscillations 510 Product Quantisation 513 Sealing 518 10.10 Quantisation Errors in the Computation of DFT 519 Review Questions 521 Multirate Digital Signal Processing 523 11.1 11.2 11.3 11.4 11.5 11.6 11.7 11.8 11.9 Introduction 523 Sampling 524 Sampling Rate Conversion 525 Signal Flow Graphs 535 Filter Structures 539 Polyphase Decomposition 541 Digital Filter Design 551 Multistage Decimators and Interpolators 555 Digital Filter Banks 565 11.10 Two-channel Quadrature Mirror Filter Bank 572 11.11 Multilevel Filter Banks 578 Review Questions 581 xii Contents 12. 13. 14, 15. Spectral Estimation 584 12.1 Introduction 584 12.2 Energy Density Spectrum 584 12.3 Estimation of the Autocorrelation and Power Spectrum of Random Signals 586 12.4 DFT in Spectral Estimation 591 12.5 Power Spectrum Estimation: Non-Parametric Methods 593 12.6 Power Spectrum Estimation: Parametric methods 606 Review Questions 628 Adaptive Filters 631 13.1 Introduction 631 13.2 . 
Examples of Adaptive filtering 631 13.3 The Minimum Mean Square Error Criterion 643 13.4 The Widrow LMS Algorithm 645 13.5 Recursive Least Square Algorithm 647 13.6 The Forward—Backward Lattice Method 650 13.7 Gradient Adaptive Lattice Method 654 Review Questions 655 Applications of Digital Signal Processing 658 14.1 Introduction 658 14.2 Voice Processing 658 14.3 Applications to Radar 671 14.4 Applications to Image Processing 673 14.5 Introduction to Wavelets 675 Review Questions 686 MATLAB Programs 688 15.1 Introduction 688 15.2 Representation of Basic Signals 688 15.3 Discrete Convolution 691 15.4 Discrete Correlation 693 15.5 Stability Test 695 15.6 Sampling Theorem 696 15.7 Fast Fourier Transform 699 15.8 Butterworth Analog Filters 700 15.9 Chebyshev Type-1 Analog Filters 706 15.10 Chebyshev Type-2 Analog Filters 712 15.11 Butterworth Digital IIR Filters 718 15.12 Chebyshev Type-1 Digital Filters 724 15.13 Chebyshev Type-2 Digital Filters 729 15.14 FIR Filter Design Using Window Techniques 735 15.15 Upsampling a Sinusoidal Signal 750 15.16 Down Sampling a Sinusoidal Sequence 750 Contents xiii 15.17 Decimator 751 15.18 Estimation of Power Spectral Density (PSD) 751 15.19 PSD Estimator 752 15.20 Periodogram Estimation 753 15.21 State-space Representation 753 15.22 Partial Fraction Decomposition 753 15.23 Inverse z-transform 754 15.24 Group Delay 754 15.25 Overlap-add Method 755 15.26 IIR Filter Design-impulse Invariant Method 756 15.27 IIR Filter Design-bilinear Transformation 756 15.28 Direct Realisation of IIR Digital Filters 756 15.29 Parallel Realisation of IIR Digital Filters 757 15.30 Cascade Realisation of Digital IIR Filters 757 15.31 Decimation by Polyphase Decomposition 758 15.32 Multiband FIR Filter Design 758 15.33 Analysis Filter Bank 759 15.34 Synthesis Filter Bank 759 15.35 Levinson-Durbin Algorithm 759 15.36 Wiener Equation’s Solution 760 15.37 Short-time Spectral Analysis 760 15.38 Cancellation of Echo produced on the Telephone—Base Band Channel 761 15.39 Cancellation of Echo Produced on the Telephone—Pass Band Channel 763 Review Questions 765 Appendix A 773 Appendix B 774 Appendix C 782 Index 802 Chapter 1 Classification of Signals and Systems 1.1 INTRODUCTION Signals play a major role in our life. In general, a signal can be a function of time, distance, position, temperature, pressure, etc., and it represents some variable of interest associated with a system. For example, in an electrical system the associated signals are electric current and voltage. In a mechanical system, the associated signals may be force, speed, torque, etc. In addition to these, some examples of signals that we encounter in our daily life are speech, music, picture and video signals. A signal can be represented in a number of ways. Most of the signals that we come across are generated naturally. However, there are some signals that are generated synthetically. In general, a signal carries information, and the objective of signal processing is to extract this information. Signal processing is a method of extracting information from the signal which in turn depends on the type of signal and the nature of information it carries. Thus signal processing is concerned with representing signals in mathematical terms and extracting the information by carrying out algorithmic operations on the signal. Mathematically, a signal can be represented in terms of basic functions in the domain of the original independent variable or it can be represented in terms of basic functions in a transformed domain. 
Similarly, the information contained in the signal can also be extracted either in the original domain or in the transformed domain. A system may be defined as an integrated unit composed of diverse, interacting structures to perform a desired task. The task may vary such as filtering of noise in a communication receiver, detection of range of a target in a radar system, or monitoring steam pressure in a boiler. The function of a system is to process a given input sequence to generate an output sequence. 2 Digital Signal Processing It is said that digital signal processing techniques origin in the seventeenth century when finite difference methods, numerical integration methods, and numerical interpolation methods were developed to solve physical problems involving continuous variables and functions. There has been a tremendous growth since then and today digital signal processing techniques are applied in almost every field. The main reasons for such wide applications are due to the numerous advantages of digital signal processing techniques. Some of these advantages are discussed subsequently. Digital circuits do not depend on precise values of digital signals for their operation. Digital circuits are less sensitive to changes in component values. They are also less sensitive to variations in temperature, ageing and other external parameters. In a digital processor, the signals and system coefficients are represented as binary words. This enables one to choose any accuracy by increasing or decreasing the number of bits in the binary word. Digital processing of a signal facilitates the sharing of a single processor among a number of signals by time-sharing. This reduces the processing cost per signal. Digital implementation of a system allows easy adjustment of the processor characteristics during processing. Adjustments in the processor characteristics can be easily done by periodically changing the coefficients of the algorithm representing the processor characteristics. Such adjustments are often needed in adaptive filters. Digital processing of signals also has a major advantage which is not possible with the analog techniques. With digital filters, linear phase characteristics can be achieved. Also multirate processing is possible only in the digital domain. Digital circuits can be connected in cascade without any loading problems, whereas this cannot be easily done with analog circuits. Storage of digital data is very easy. Signals can be stored on various storage media such as magnetic tapes, disks and optical disks without any loss. On the other hand, stored analog signals deteriorate rapidly as time progresses and cannot be recovered in their original form. For processing very low frequency signals like seismic signals, analog circuits require inductors and capacitors of a very large size whereas, digital processing is more suited for such applications. Though the advantages are many, there are some drawbacks associated with processing a signal in the digital domain. Digital processing needs ‘pre’ and ‘post’ processing devices like analog-to-digital and digital-to-analog converters and associated reconstruction filters. This increases the complexity of the digital system. Also, digital techniques suffer from frequency limitations. For reconstructing a signal from its sample, the sampling frequency must be atleast twice the highest frequency component present in that signal. 
The available frequency range of operation of a digital signal processor is primarily Classification of Signals and Systems 3 determined by the sample-and-hold circuit and the analog-to-digital converter, and as a result is limited by the technology available at that time. The highest sampling frequency is presently around 1GHz reported by K.Poulton, etal., in 1987. However, such high sampling frequencies are not used since the resolution of the A/D converter decreases with an increase in the speed of the converter. But the advantages of digital processing techniques outweigh the disadvantages in many applications. Also, the cost of DSP hardware is decreasing continuously. Consequently, the applications of digital signal processing are increasing rapidly. 1.2 CLASSIFICATION OF SIGNALS Signals can be classified based on their nature and characteristics in the time domain. They are broadly classified as (i) continuous-time signals and (ii) discrete-time signals. A continuous-time signal is a mathemati- cally continuous function and the function is defined continuously in the time domain. On the other hand, a discrete-time signal is specified only at certain time instants. The amplitude of the discrete-time signal between two time instants is just not defined. Figure 1.1 shows typical continuous-time and discrete-time signals. A x(t) >»; ° (a) Continuous-time signal A xr (n) + —! cis aT ~4T -2T 0 aT (b) Discrete-time signal Fig. 1.1 Continuous-Time and Discrete-Time Signals image not available image not available image not available image not available image not available Classification of Signals and Systems 9 1.2.4 Energy and Power Signals Signals can also be classified as those having finite energy or finite average power. However, there are some signals which can neither be classified ‘as energy signals nor power signals. Consider a voltage source v(t), across a unit resistance R, conducting a current i(t). The instantaneous power dissipated by the resistor is v ro p(t) = v(t) i(t) = =?OR Since R= 1 ohm, we have p(t) = v%Xt) = Ht) (1.12) The total energy and the average power are defined as the limits T E= lim Ji%)de, joules (1.13) To ip and 1 r Ps Hin gp JPW ae, watts (1.14) The total energy and the average power normalised to unit resistance of any arbitrary signal x(t) can be defined as : . E= jim flx@P de, Joules (1.15) wen and T ued 2 Pe fim op Jil dt, watts (1.16) The energy signal is one which has finite energy and zero average power, i.e. x(t) is an energy signal if 0 < E < », and P = 0. The power signal is one which has finite average power and infinite energy, i.e. 0< P<, and £ = ~. If the signal does not satisfy any of these two conditions, then it is neither an energy nor a power signal. 1.3 SINGULARITY FUNCTIONS Singularity functions are an important classification of non-periodic signals. They can be used to represent more complicated signals. The unit-impulse function, sometimes referred to as delta function, is the basic singularity function and all other singularity functions can be derived by repeated integration or differentiation of the delta function. The other commonly used singularity functions are the unit-step and unit-ramp functions. image not available image not available image not available Classification of Signals and Systems 13 Proof AL Gott) B(t— to] = (0) BUE~ fp) + 200) BUt ~t) = x(t) 5(t tg) + % (to) B(t = to), ty < by < ty Integrating, we get & 4 ty . ty J Gps bso) de = J [XW BE ~ tod] de + J Lilt) 5 - to) de ‘ i 4 V7 [x() 8(¢ to)? 
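The detailed treatment of the individual singularity functions falls on pages not reproduced here. As a rough illustrative sketch of the relationship stated above, that the step and ramp follow from repeated integration of the unit impulse, the short Python/NumPy listing below builds discrete-time counterparts of these functions; the index range and variable names are illustrative choices and are not taken from the book.

import numpy as np

n = np.arange(-5, 6)                          # discrete time index
delta = (n == 0).astype(float)                # unit impulse (unit sample)
step = (n >= 0).astype(float)                 # unit step
ramp = np.where(n >= 0, n, 0).astype(float)   # unit ramp

# Discrete analogue of integration: a running sum of the impulse
# reproduces the unit step.
assert np.array_equal(np.cumsum(delta), step)

# A running sum of the step gives the ramp advanced by one sample,
# i.e. r[n + 1], mirroring the continuous-time relation r(t) = integral of u.
print(np.cumsum(step))    # 0 0 0 0 0 1 2 3 4 5 6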
= J x10) 8 ty) de + 2) 4 LHS = 0. ty Therefore, [ x(t) 5(¢-t))dt + %(t) = 0 4 8 ie. J x(t) 5(¢ - to) de = - (ty) i, Similarly, t i x(t) 5(t — to) dt = ¥(t) 4 ty Hence, J x(t) 8" (¢- ty) dé = (-1)" x"(tp) 4 1.3.6 Representation of Signals In the signal given by x(at + 6), i.e., x(a(t + b/a)), a is a scaling factor and b/a is a pure shift version in the time domain. If b/a is positive, then the signal x(t) is shifted to left. If b/a is negative, then the signal x(t) is shifted to right. Ifa is positive, then the signal x(t) will have positive slope. If a is negative, then the signal x(t) will have negative slope. Ifa is less than 0, then the signal x(t) is reflected or reversed through the origin. If |a| < 1, x(t) is expanded, and if |a| > 1, x(t) is compressed. Sketch the following signals (a) x(t) = T1(2t + 8) (c) x(t) = cos(20 mt- 5x) and = (d) x(t) = r (— 0.5t + 2) Solution (a) M(2t + 8) = 1(2¢ + 3/2)) Here the signal shown in Fig. E1.2(a) is shifted to left, with centre at -3/2. Since a = 2, i.e. {a| > 1, the signal is compressed. The signal width becomes 1/2 with unity amplitude. image not available image not available image not available Classification of Signals and Systems 17 x(t) = 8 sin (20n¢ - 2). Bcos (20n¢ - x. 2) = 8 cos (zone - 22) The single-sided amplitude and phase spectra are shown in Fig. E.1.4a. The signal has an amplitude of 8 units at f= 10 Hz anda phase angle of 2 radians at f = 10 Hz. To plot the double-sided spectrum, the signal is converted into the form as in Eq.1.28. Therefore, 20n¢ - 22 = j(20ne - 25 xt) = 4D) ge) The double-sided amplitude and phase spectra are shown in Fig. E.1.4b. The signal has two components at f = 10 Hz and f= -10 Hz. The amplitude of these components are 4 units each and the phase of these components are -2t and = radians, respectively. 1.5 CLASSIFICATION OF SYSTEMS As with signals, systems are also broadly classified into continuous-time and discrete-time systems. In a continuous-time system, the associated signals are also continuous, i.e. the input and output of the system are both continuous-time signals. On the other hand, a discrete-time system handles discrete-time signals. Here, both the input and output signals are discrete-time signals. Both continuous and discrete-time systems are further classified into the following types. (i) Static and dynamic systems (ii) Linear and non-linear systems (iii) Time-variant and time-invariant systems (iv) Causal and non-causal systems, and (v) Stable and unstable systems. 1.5.1 Static and Dynamic Systems The output of a static system at any specific time depends on the input at that particular time. It does not depend:on past or future values of the input. Hence, a static system can be considered as a system with no memory or energy storage elements. A simple resistive network is an example of a static system. The input/output relation of such systems does not involve integrals or derivatives. The output of a dynamic system, on the other hand at any specified time depends on the inputs at that specific time and at other times. Such systems have memory or energy storage elements. The equation characterising a dynamic system will always be a differential equation image not available image not available image not available Classification of Signals and Systems 21 (iii) If a pole lies on the imaginary axis, it must be a single-order one, ie. no repeated poles must lie on the imaginary axis. The systems not satisfying the above conditions are unstable. 
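The detailed linearity test belongs to pages not included above, but since Section 1.5 lists linear and non-linear systems among the classifications, the following hedged Python sketch shows one way to probe the superposition property numerically on a few toy discrete-time systems. The function names, tolerance and trial count are my own choices, not the book's.

import numpy as np

rng = np.random.default_rng(0)

def is_linear(system, trials=100, n=32, tol=1e-9):
    # Numerically test superposition: T{a*x1 + b*x2} == a*T{x1} + b*T{x2}
    for _ in range(trials):
        x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
        a, b = rng.standard_normal(2)
        lhs = system(a * x1 + b * x2)
        rhs = a * system(x1) + b * system(x2)
        if not np.allclose(lhs, rhs, atol=tol):
            return False
    return True

scale = lambda x: 2.0 * x       # static, linear system
square = lambda x: x ** 2       # static, non-linear system
accumulate = np.cumsum          # dynamic (has memory), but still linear

print(is_linear(scale), is_linear(square), is_linear(accumulate))
# expected: True False True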
1.6 SIMPLE MANIPULATIONS OF DISCRETE-TIME SIGNALS When a signal is processed, the signal undergoes many manipulations involving the independent variable and the dependent variable. Some of these manipulations include (i) shifting the signal in the time domain, (ii) folding the signal and (iii) scaling in the time-domain. A brief introduction of these manipulations here, will help the reader in the following chapters. 1.6.1 Transformation of the Independent Variable Shifting In the case of discrete-time signals, the independent variable is the time, n. A signal x(n) may be shifted in time, i.e. the signal can be either advanced in the time axis or delayed in the time axis. The shifted signal is represented by x(n — k), where & is an integer. If ‘k’ is positive, the signal is delayed by & units of time and if k is negative, the time shift results in an advance of signal by & units of time. However, advancing the signal in the time axis is not possible always. If the signal is available in a magnetic disk or other storage units, then the signal can be delayed or advanced as one wishes. But in real time, advancing a signal is not possible since such an operation involves samples that have not been generated. As a result, in real-time signal processing applications, the operation of advancing the time base of the signal is physically unrealizable. Folding This operation is done by replacing the independent variable n by —n. This results in folding of the signal about the origin, i.e. n = 0. Folding is also known as the reflection of the signal about the time origin n = 0. Folding of a signal is done while convoluting the signal with another. Time scaling This involves replacing the independent variable n by kn, where k is an integer. This process is also called as down sampling. If x(n) is the discrete-time signal obtained by sampling the analog signal, x(t), then x(n) = x(nT), where T is the sampling period. If time-scaling is done, then the time-scaled signal, y[n] =x(kn) =x(knT). This implies that the sampling rate is changed from 1/T to V/kT. This decreases the sampling rate by a factor of k. Down-sampling operations are discussed in detail in Chapter 11 of this book. The folding and time scaling operations are shown in Fig. 1.7(a) and (b). image not available image not available image not available Classification of Signals and Systems 25 twice the input delayed twice, x(n — 2). Let the input sequence be x(n) = {0,1, 1, 2, 0, 0, 0, ...}. The output sequence for the system as described by Eq. 1.32 is y(n) = (0, 1, 4, 7, 8, 4, 0, 0, ...J. The block diagram representation of the system described by Eq.1.32 is shown in Fig. 1.9. x(n) y(n) ylo] =x [n] +3x{n- 1] +2x[n- 2] Fig. 1.9 Discrete-Time System Corresponding to Eq. 1.32 In digital signal processing applications, our prime concern is of linear, time-invariant discrete-time systems. Such systems are modelled using linear difference equations with constant coefficients. The block diagram representation of these systems contain only unit delays, constant multipliers and adders. A continuous-time system is modelled by a linear differential equation. An ordinary linear differential equation with constant coefficients characterises linear, constant parameter systems. For example, an nth order system is represented by a n= pO 4 ay 1 PP 4 4 a, BOs y= atv (1.33) The general solution of the above equation consists of two components, namely, the homogeneous solution and the particular solution. 
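As a small illustration of the shifting, folding and time-scaling operations of Section 1.6 (this is not code from the book, whose program listings appear in MATLAB in Chapter 15), the Python sketch below applies each manipulation to a short causal sequence; the sequence values and the delay amount are arbitrary.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0])   # x[n] for n = 0, 1, 2, ...

def delay(x, k):
    # x[n - k] for k >= 0: shift right, zero-filling the first k samples
    return np.concatenate((np.zeros(k), x[:len(x) - k]))

folded = x[::-1]        # reverse the stored samples; with the index grid
                        # also negated this is the fold x[-n]
downsampled = x[::2]    # x[2n]: keep every second sample (time scaling, k = 2)

print(delay(x, 2))      # [0. 0. 1. 2. 3. 4.]
print(folded)           # [0. 0. 4. 3. 2. 1.]
print(downsampled)      # [1. 3. 0.]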
The homogeneous solution is the sourcefree, natural solution of the system, whereas the particular solution is the component due to the source x(t). a, 1.7.2 Impulse Response of a System The impulse response of a system is another method for modelling a system. The impulse response of a linear, time-invariant system is the response of the system when the input signal is an unit-impulse function. The system is assumed to be initially relaxed, i.e. the system has zero initial conditions. The impulse response of a system is represented by the notation A(t) (continuous-time) or h(n) (discrete- image not available image not available image not available Classification of Signals and Systems 29 discussed in some detail and this enables one to understand the relationship between the digital signals and discrete-time signals. Figure 1.11 shows the block diagram of an analog-to-digital converter. The sampler extracts the sample values of the input signal at the sampling instants. The output of the sampler is the discrete-time signal with continuous amplitude. This signal is applied to a quantiser which converts this continuous amplitude into a finite number of sample values. Each sample value can be represented by a digital word of finite word length. The final stage of analog-to-digital conversion is encoding. The encoder assigns a digital word to each quantised sample. Sampling, quantizing and encoding are discussed in the following sections. f + ‘Sampler f Quantiser T | Encoder rt Continuous-time Discrete-time Discrete-time Digital output continuous-amplitude Continuous-amplitude — discrete-amplitude signal input signal signal signal Fig. 1.11 Analog-to-Digital Converter 1.8.1 Sampling of Continuous-time Signals Sampling is a process by which a continuous-time signal is converted into a discrete-time signal. This can be accomplished by representing the continuous-time signal x(t), at a discrete number of points. These discrete number of points are determined by the sampling period, T, i.e. the samples of x(t) can be obtained at discrete points ¢ = nT, where n is an integer. The process of sampling is illustrated in Fig.1.12. The sampling unit can be thought of as a switch, where, to one of its inputs the continuous-time signal is applied. The signal is available at the output only during the instants the switch is closed. Thus, the signal at the output end is not a continuous function of time but only discrete samples. In order to extract samples of x(t), the switch closes briefly every T seconds. Thus, the output signal has the same amplitude as x(t) when the switch is closed and a value of zero when the switch is open. The switch can be any high speed switching device. The continuous-time signal x(t) must be sampled in such a way that the original signal can be reconstructed from these samples. Otherwise, the sampling process is useless. Let us obtain the condition necessary to faithfully reconstruct the original signal from the samples of that signal. The condition can be easily obtained if the signals are analysed in the frequency domain. Let the sampled signal be represented by x,(t). Then, x, (t) = x(t) g(t) (1.40) where g(t) is the sampling function. The sampling function is a continuous train of pulses with a period of T seconds between the pulses, and it models the action of the sampling switch. The sampling function is shown in Fig. 1.12(c) and (d). 
The frequency spectrum of the sampled image not available image not available image not available image not available image not available image not available image not available Classification of Signals and Systems 37 Quantisation level crore 15 11 \ u he Guantsation 13 1101 } | 12 1100 | "1 1011 10 1010 - 9 1001 8 1000 7 O11 6 0110 | 5 0101 4 0100 3 0011 2 0010 \ 1 0001 0 0000 oO T aT 3T 4T Fig. 1.17 Quantizing and Encoding ied 13 14 15 Ze REVIEW QUESTIONS What are the major classifications of signals? With suitable examples distinguish a deterministic signal from a random signal. What are periodic signals? Give examples. Describe the procedure used to determine whether the sum of two periodic signals is periodic or not. Determine which of the following signals are periodic and determine the fundamental period. (a) x(t) = 10 sin 25 nt (b) xo(t) = 10 sin VB at image not available image not available image not available Fourier Analysis of Periodic and Aperiodic Continuous-Time Signals and Systems 41 Examples of periodic processes are the vibration of a tuning fork, oscillations of a pendulum, conduction of heat, alternating current passing through a circuit, propagation of sound in a medium, ete. Fourier series may be used to represent either functions of time or functions of space co-ordinates. In a similar manner, functions of two and three variables may be represented as double and triple Fourier series respectively. Periodic waveforms may be expressed in the form of Fourier series. Non-periodic waveforms may be expressed by Fourier transforms. 2.2 TRIGONOMETRIC FOURIER SERIES A periodic function f(t) can be expressed in the form of trigonometric series as Fi) = Fp + a; c08 wyt + ay 08 2idy t + a COS Bit +... +b, SiN Wp t+ bo sin 2Wpt + bg sin 3Myt + .. (2.1) where @y = 2nf = 3 fis the frequency and a’s and b's are the coefficients. The Fourier series exists only when the function f(t) satisfies the following three conditions called Dirichlet’s conditions. (i) f(¢) is well defined and single-valued, except possibly at a finite number of points, i.e. f (é) has a finite average value over the period T. (ii) f(t) must posses only a finite number of discontinuities in the period T. (iii) f(¢) must have a finite number of positive and negative maxima in the period T. Equation 2.1 may be expressed by the Fourier series f= Lay + Ya, cosnoyt+ Yb, sinnwyt (2.2) n=l n=l where a,, and 6,, are the coefficients to be evaluated. Tanne Eq. 2.2 for a full period, we get T/2 TI2 “Treat = = $a fat+ [ Ya, cos net +, sin n gt) dt Tn -T2 -T/gn=1 Integration of cosine or sine function for a complete period is zero. TI2 Therefore, fre dt == 4 aT -T2 rI2 Jrwae (2.3) -TI2 2 Hence, = gy image not available image not available image not available Fourier Analysis of Periodic and Aperiodic Continuous-Time Signals and Systems 45 Ke) le m2 |-74 |o 14 72 -A Fig. E2.1 Solution Since the given waveform is symmetrical about the horizontal axis, the average area is zero and hence the d.c. term a= 0. In addition, f(t) = f(-t) and so only cosine terms are present, ice., b, TI2 Now, a,== 2 | Fle) cos nagt at -TI2 -A, from -T/2——sin Wot - — sin 2W9t - —— sin 3Wot -... 25 ov 2a or 3h ° image not available image not available image not available Fourier Analysis of Periodic and Aperiodic Continuous-Time Signals and Systems 61 Note: > n=-w —_3__ 4+(nn)? Solution The complex exponential Fourier transform representa- tion of a signal f(t) is = 0.669 ft)= > ce’! 
where wo = 3% nae The given signal f(t) over the interval (0, 7) is > 3 jnxt é) = —— e"™ A * x 4+(nn)? (a) Comparing the above two equations, we get 3 = ———,; and on ae nme irt =einnt Hence, a =a ie T=2 (b) When n = 3, the component of f(t) will be - 3 aise 8 4+(3n) 4+(3n)? Similarly, when n = — 3, the component will be =-=—3__,-sant__ 3 4+(-3n) 4+(3n)? C3 [cos 3n¢+ j sin 3n¢] C3 [cos 3xt - j sin 3x] Therefore, cz + ¢_3 = cos 3nt —_s_ 4+(3n)* Hence, when one of the components of f(¢) is A cos 3 nt , the value of Ais «—o_ 4+(3n)? (c) Total (maximum) power P,= >, |3—,|~ 0.669 neta 14+ (ne)? The power in f(t) is P= col? + 2[leil? +1col? + les!” +1eal?] 3 3_f 3 3 f 3. f =/=|] +2) os] ans! Hes i 4 4 + (nm) 4 + (210) 4+ (30) 4 + (41) image not available image not available image not available Fourier Analysis of Periodic and Aperiodic Continuous-Time Signals and Systems _65 Operation fo) Fijo) ; Time-integration frac 7a" jo) + FO) § (w) Frequency-integration fw J FG) do Cip Time convolution Alt) * fit) = F,(jo) F,( jo) JA@he-vde Frequency convolution (Multiplication) Alt)-flt) = IF jo) * F,( jo)! Frequency shifting (Modulation) FO) er F(jo- joy) Symmetry FU jt) 2nf (—a) Real-time function fit) F( jo) = F'(- jo) Re[F( jw)| = RelF(-jo)] IF jo)] = -Im[Fjo)) |FGo)| = |Fjo)| Of( ja) = - Of( jo) , = t 2 an: t io) [2 Parseval’s theorem = E= J If@? ae Bas Jigen do Duality If (t) & g(jo), then g(t) © 2nf (- jo) 2.7.1 Linearity The Fourier transform is a linear operation. Therefore, if A) & Fy (jo) fa (t) = Fo (jo) then, af, (t) + bf (t) = aF; (j @) + bF yj @) where a and b are arbitrary constants. 2.7.2 Symmetry If ft) @ F(jo) then, FV jt) = 2nf(-) Proof Since fit)= + f F(jo)e/* do an, anf(-t)= f F(jo')e Ft do’ where the dummy variable « is replaced by w’. image not available image not available image not available Fourier Analysis of Periodic and Aperiodic Continuous-Time Signals and Systems 69 R x(t) = ertAC - AW | 1 x(t) ce vt) | | | | bs 13 + t ° Fig. E2.9 « ) = Fle URC] = 1 = (ROP Therefore, X( j w) = Flte““""] = a, ro) a+ joROe Ro’? We know that Y(j @) = X( jo) H(j @) (Roy? 1 (RC)? Hence, ¥( jo) = —2O_,_1_____(R0y"_ ence, jo) = TT iaRO® UtjaRO) G+ joROP af_ (cy afd 1 Therefore, y(t) = ¢ ~! | —~+=~—_ | = ¢-! | —.—___+___ erefore, y(t) =F aa Fe RC (is) RC 2 9 tRC fe we 1 fete Hence, = y(t) = Re 3 u(t) The input signal x(t) = system whose transfer function is h(t) output signal y(t) when b 4a and 6 =a. Solution The Fourier transform of x(¢) and h(t) are u(t), a> 0 is applied to the u(t), b > 0. Determine the X( jo) = —— and (a+ jo) . 
1 Hi joyeptos Jo) (6+ jo) Therefore, ¥( jw) = X(j @) H(j w) = x (a+ jo)(6+ jo) Expanding the function Y(j @) in partial fractions, we get A + B (a+jo (6+ jo) Y¥( jw) = image not available image not available a You have elther reached a page that is unavailable for viewing or reached your viewing lil far this book image not available image not available image not available [Oo + ag — (9 -)Q] > [Po — 0) g + PO +) Q] x y Ns (wy) 76 Digital Signal Processing (Lf) q unwop Kouenbasy (yj umwop away, sjouBis 2uDqoduy) awios Jo wuofsuos2 Janey ZZ aIqey image not available image not available image not available 80 Digital Signal Processing (piueg) egind uejssnep (,3 ~) dxo AC “oT > Os yen esinduy, L wo orsu , 42 4 0 4-42- Grek Safes TTT TTT | * Y v yo 2 to) tw angus 7 | x x 1 8 o| FH } (8m — 0) 2 — (Om +m) 2 (sq) m8 a5 “FL "i | por 4 y | Yay OD) g emuop kouonbosy (J uyowop auny image not available image not available a You have elther reached a page that is unavailable for viewing or reached your viewing lil far this book image not available image not available image not available Fourier Analysis of Periodic and Aperiodic Continuous-Time Signals and Systems 87 Alternate Method F(jo)= { f(te/* de 2 =f We de= rof o 210 enim 4 =10(— =j 19 (s820—Jsin 2o-1) o = 20 sin 20 + j(cos 20- D) @ Therefore, |F(jw)| = 20 ‘sin? 2@ + (cos 2@ — 1)? cos 20 — +) j@) = ®(j@) = tan"! 2%8(w — wo), we get F [cos @p t] = 7 [5(@ — Wy) + 8( + Wy] eit eo | 2j = jm [3(@ + 9) — 5(@ - Wo)] These transform pairs are shown in Figs 2.16 (a) and (b) . Using the frequency convolution theorem we have Similarly, # [sin @ ¢] = a F UF (t) cos Wo t] = SELF io) # x {G(- 09) + 8(w + @)}] = F(jo) #2 [5+ oy) +5(w- @)] = F IF (+ joy) +FYo-a)} image not available 98 Digital Signal Processing F( jo) = F [sgn (t))] = J sgn (t) e/® de oO - f Cnet aes f (nei ae bo a . 0 = [=22"] +| “JO Jo 1,121.2 =—t—=- JO Jo JO Therefore, ¥ {sgn (¢)] = 2 Jo Hence, we have the transform pair sgn (t) <> 2 d The amplitude and phase spectra of the signum function are shown in Figs 2.18 (a) and (b) respectively. A \FCio)| (ia) (a) (b) Fig. 2.18 (a) Amplitude and (b) Phase Spectra of the Signum Function Determine the Fourier transform of f(t) sgnit). Solution f(t) =e~*""! sgn () _f-e, for t<0 e@, for t>0 Therefore, F(f(e) = f fit) eZ ae Q “ = J et evfot des f ent e4 at -o 0 Fourier Analysis of Periodic and Aperiodic Continuous-Time Signals and Systems 99 0 “ e. f elt jon de +f erat sot ap “n 0 8 (a+ jot ]* -| Se IL. [Serge +a] a+jo -[a+ 1 |- 0. =-F (jo), it is also an odd The amplitude of F (jw) is |F (j o)| = el z a a 2.8.10 Pulse Train Signal or Periodic Gate Function From Eq. 2.10, the Fourier series representation for this pulse train signal shown in Fig. 2.19 is fo= Foe! n=-= Kt) bed-+j PLU LLL: Fig. 2.19 Pulse Train Signal -2T To find the spectrum, we calculate the Fourier series coefficients as 17? ats t) e JPMeF ae ra Df e 100 Digital Signal Processing where Wo = a and n is the number of the harmonic. 1% | cham f Ae lttae T in 2A [einmodl2 ~ 1) T 2j __2A sin (4804) n@pT 2 _ 4a)" (ass) zr 2 The sinc function oscillates with period 2n and decays with increasing x. It has zeros at nn, n = +1, +2, ... and is an even function of x. Then the spectrum of the periodic gate function is Che Ad gine (2202) T 2 2n n( 2) = ae sinc] 22 = Ad sine (224) P r Thus we can represent f(t) as _ FP Ad. (ntd) jn%e fo= D Af sine ("E4)e T Ff) = QnAd Ss .. nnd zs ¥ sine ("Z4) 5 o- nap) ne-w From the above equation, we find that the values c, are real. 
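To connect the closed-form coefficients c_n = (Ad/T) sinc(n w0 d/2) of the periodic gate function with a direct computation, the hedged Python sketch below (not from the text) approximates the coefficients of a centred pulse train by a Riemann sum over one period and compares them with the sinc formula; the values chosen for A, d and T are arbitrary.

import numpy as np

A, d, T = 1.0, 0.2, 1.0            # pulse amplitude, width and period (assumed values)
w0 = 2 * np.pi / T

def c(n, samples=20000):
    # Fourier coefficient c_n of the centred pulse train by a Riemann sum
    t = np.linspace(-T / 2, T / 2, samples, endpoint=False)
    f = np.where(np.abs(t) <= d / 2, A, 0.0)
    return np.mean(f * np.exp(-1j * n * w0 * t))

for n in range(6):
    closed_form = (A * d / T) * np.sinc(n * w0 * d / 2 / np.pi)  # sin(x)/x form
    print(n, np.round(c(n).real, 4), np.round(closed_form, 4))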
Ea What percentage of the total power is contained within the first zero crossing of the spectrum envelope for f(¢) as given in Fig. E2.20(a). image not available image not available Fourier Analysis of Periodic and Aperiodic Continuous-Time Signals and Systems 103 Substituting this result in the above Fourier series representation, we obtain At) =3 Dene" Therefore, F(jo) = Fif (il = fz aad — = 2m ¥, 8(@- nwo), where wy = nen= Hence, Yat - kT) & Fao = NG) hon= a Each unit impulse of the unit-impulse train in the time domain has a transform of an impulse train in the frequency domain. The locations of the impulses are at n W) = n2n/T where n = 0, + 1, + 2,... The area for each impulse in the frequency domain is Wp. 2.9 FOURIER TRANSFORM OF POWER AND ENERGY SIGNALS 2.9.1 Power Signal The average power of a signal x(t) over a single period (¢;, t; + T) is given by 1 pat? weg), Porat where x(t) is a complex periodic signal. A signal f(¢) is called a power signal if the average power is expressed by P, LT aye P,= Lt = = Lt on |p x(t) T/2 Pye z fix@pP dé, for a continuous periodic signal with period 7; -Tr i N is equal to a finite value, i.e. equal to the average power over a single period. Ifx(¢) is bounded, P., is finite. Hence, every bounded and periodic signal is a power signal. But it is true that a power signal is not necessarily a bounded and periodic signal. If the signal x(t) contains finite signal power, i.e. 0 < P, <= , then x(t) is called a power signal. N-1 Ylx@)/, for a digital signal with x(n) = 0 for n < 0; n=0 or Py = 104 Digital Signal Processing 2.9.2 Energy Signal A signal x(t) is called an energy signal if its total energy over the interval (—~, co) is finite, that is x ¥ E,= He fio? dt< E.= ||x()Pdt<« If the signal x(t) contains finite signal energy, i.e. 00 ~ 10, for t<0 Solution Since the value of the given function is zero between —T to 0, the limits must be applied between 0 to T instead of -T to T. [ Using L’ Hospital’s rule ] - E,= Lt fixe)? at T= “p w 7 -4t |? we fae T Lt fe de T= " - & — I° & pa 4 1 ing & — & gS + » Si " mle 1 T a)? dt ral yf ~ * qi = Lt = Toye 2! =8 -8T _ 4) Lt #[5= = Lt (ee -3) tow 2T|-8 |p to. —-16T Here, as the signal energy E, is finite and the signal power P, = 0, the signal is an energy signal. Ea Determine the signal energy and signal power for the following complex-valued signal and determine whether it is an energy signal or a power signal. x(t) = Ael?™at 106 Digital Signal Processing Solution Here, the signal period is T = ae Since x(¢) is a periodic o signal, it cannot be an energy signal. Therefore, the power signal is evaluated as er 1 m7 J |x(e)[? de ‘A u+(2) 2 Py=a f |Ae**{ de 4 P. w(t) =a fArde 4 aye =a[A%], «=A? Since the signal has finite power, it is a power signal and E, Determine the magnitude and phase spectrum of the een ae in Fig. B2.24(a). A | ee =T jo T | | -a}—I | Fig. E2.24(a) Solution Here f(t)=A, for-T0 =0, forw=0 For the limits -T < w<0, F (j @) = For the limits 0<@h d x(t} er at ko at The Laplace transform gives the total solution to the differential equation and corresponding initial and final value problems. Laplace transform is an important and powerful tool in system analysis and design. This transform is widely used for describing continuous circuits and systems, including automatic control systems and also for analysing signal flow through causal linear time invariant systems with non-zero initial conditions. 
The z-transform, to be discussed in the next chapter, is suitable for dealing with discrete signals and systems. We can conclude that the Laplace transform and the z-transform are complementary to the Fourier transform. 128 Digital Signal Processing 3.2. DEFINITION For periodic or non-periodic time functions f(t), which is zero for t < 0 and defined for ¢ > 0, the Laplace transform of f(t), denoted as £{f (¢)}, is defined by LIf(t)) = Fis) = f Fe" at a Putting s = o + ja, we have Fis) = f fte"tae 3.) 0 The condition for the Laplace transform to exist is Jir@e*|ae <0 for some finite o. Laplace transform thus converts the time domain function f(t) to the frequency domain function F(s). This transform defined to the positive-time functions is called single-sided or unilateral, because it does not depend on the history of x(t) prior to ¢ = 0. In the double-sided or bilateral Laplace transform, the lower limit of integration is t = — 0, where x(t) covers over all time. Due to the convergence factor e~°', the ramp, parabolic functions, etc. are Laplace transformable. In transient problems, the Laplace transform is preferred to the Fourier transform, as the Laplace transform directly takes into account the initial conditions at t = 0, due to the lower limit of integration. Inverse Laplace Transform The inverse Laplace transform is used to convert frequency domain function F(s) to the time domain function f(t), as defined by oy +jo ft)= oF) =f Fis)e"ds (3.2) 2nj eyeia Here, the path of integration is a straight line parallel to the jw-axis, such that all the poles of F(s) lie to the left of the line. In practice, it is not necessary to carry out this complicated integration. With the help of partial fractions and existing tables of transform pairs, we can obtain the solution of different equations. 3.3 REGION OF CONVERGENCE (ROC) For the existence of the Laplace transform, the integral F(s) = fro e~“dt must converge. This limits the variable s = 0+ jatoa 0 Applications of Laplace Transform to System Analysis 129 part of the complex plane called the Region of Convergence (ROC), For example, F(s) = £ {e*} + Here, the Laplace transform is defined only for R,(s) > 3. The region R,(s) > 3, i.e. o> 3 is called the Region of Convergence, which is shown in Fig. 3.1. ROC is required in computing the Laplace transform and inverse Laplace transform. If the ROC is not specified, the inverse Laplace transform is not unique. Region of convergence Fig. 3.1 Region of Convergence for F(s) = —! (s-3) In the one-sided Laplace transform, all time functions are assumed to be positive and there is a one-to-one correspondence between the Laplace transform and its inverse. Hence, no ambiguity will arise, even if the ROC is not specified in the one-sided Laplace transform. But, in the two-sided Laplace transform, the specification of ROC is essential. 3.4 LAPLACE TRANSFORMS OF SOME IMPORTANT FUNCTIONS 1. Unit Step Function ft)=1,0 +a jot 4 g-Jeot Similarly, £ le“ cos wyt} = fera(emee™ eee, (s+a)? +03 Hence, Lie cos apt) = —2+* (3.10) (s+a)? +03 7. Damped Hyperbolic Sine and Cosine Functions ey L{e™ sinh @p t} = e{ £ 1 ~(a~ mg )t -(a + 0g)t = Mater 00)— sier'@*] | ae ee a 2Lst+a-M sta+@ a (s +a)? - a 132. Digital Signal Processing ®o nat gi = Hence, L{e™ sinh wot = Gra)? of (3.11) Similarly, £{e“ cosh apt} = —~+*— (3.12) (s +a)? - a 8. t” Function Lit") = je edt = fe" {) d 0 “é Similarly, £ (0) = 2=1, gn-2) s By taking Laplace transformations of ¢"~*, ¢"~*, ... 
and substituting in the above equation, we get oft") = = —— SS L(t") sos n! oy_ ato 1 nt . sett = —, L(t?) = — x == 57, when n is a positive integer. s s" ss n! Therefore, L(t") = yet (3.13) Substituting n = 1, we have £{t} = 1/s” d the Laplace transform of the following fun (a) f() = 08+ 3t—6t+4 (L) f(é) = cos* 3t (c) f(t) = sin at cos bt (d) f(t) =t sin at ¢ (e) f= a © f(t) = 8(2—3¢ +2) Solution (a) Lif (t)) = cle? + 3t?— Gt + 4] 3! 2! ! =—47+35-65+- sis? 8 nSs8 044 s s Applications of Laplace Transform to System Analysis 133 (b) Alt) = cos* 3t We know that cos 3A = 4 cos°A — 3 cos A cos 9t + 3 cos =e Therefore, L{cos* 3t} = <[ 7 ai sy 3s 4|s?+81 s°+9 | 1 s s == 8-3 {tye aoe (c) £{sin at cos bt} = <[2tsin (a+b)t+sin (a-b)e}] Hote ote 2Ls°+(a+b)? 5? +(a-b)* (@) cit sin at) =-4 cfsin ad] ds -a)_@ ds|s? +a? = nab ys? +a?)7) 1 =-o-— 2s] =—24a8 (s? +a”)? ef | w@ fo=[45¢] | | Here, c(l-e}=4-_1_ s (s-) l-e'| _7f1 1 - ~ e{ }-i2 tlie toes log (s - DIF 134 Digital Signal Processing [me 2)T we(1- B22 (f) The given impulse function is f(t) = 5(¢?- 3t + 2) = l(t -1)¢ - 2)] = &(t - 1) ult - 1) + BG - 2) ut - 2) = &(t-1) + &(t- 2) Therefore, F(s) = £[8(t - 1) + £ [&(t — 2)) =et+e* [Rue Determine the Laplace transform of the rectangular pulse shown in Fig. E3.2. . A(t) Solution —_F(s) = cit) = | fe det 0 1 = fle “de © | ase | ss |__| _+i Is | 0! T Fig. E3.2 =eteeT-1 =s =2f-e°7) 8 Alternate method The given pulse is represented in terms of the step function as f(t) = ult) - ut -T) Taking Laplace transform, we obtain = 1f-e-? Fis) = <[1 eT} Find the Laplace transform of a single sawtooth pulse shown in Fig. E3.3. Solution The function for the given wave form is f(t)=t forO1 Applications of Laplace Transform to System Analysis 135 f(t) LUPO) = J flo) eat 0 t I 0 1 Fig. E3.3 m of the triangular pulse shown in Fig. E3.4. 4 Kt) 0} 72 T ' Fig. E3.4 Solution For the given triangular waveform, fi)=2t, foroses 72 =2- Fe for T2stsT TI r= cinen= | (Be)e"ats [ (2-2ee"a Tie ale ic “dt 42 Jj et aad [een dt Heal” [=] -2f Si) i be Ar e 136 Digital Signal Processing | 2 [z ensT/2 etl 7 2 +0 TL2 -s s 8 + ae" _ et/2] te Fig. E3.5 Solution For the given wave form, 1, forOn By definition, we have LIF) = ff) eat 0 = JAsint ede oO =A fsinte“dt oma -—A (s? +1) es™ +1 (s?+)) le (-ssint - cos tif 3.5 INITIAL AND FINAL VALUE THEOREMS 3.5.1 Initial Value Theorem If the function f(t) and its derivative f ‘(t) are Laplace transformable, then 138 Digital Signal Processing Lt f(t)= Lt sF(s) tor sae Proof We know that Lif (DO) =s[c fe) — FO) By taking the limit s > © on both sides tt fPOl= lt, [sF(s) — F(0)) ht j Poe" dt= wit [sF(s) - f (0)] ee ee, the integration of LHS becomes zero ie. j (Lt [£% edt = 0 "us sPto -f(0)=0 Therefore, Lt sFis) =f(0)= Ut fo 3.5.2 Final Value Theorem If f(t) and f (t) are Laplace transformable, then Lt f(t) = Lt sF(s) (3.15) treo a0 Proof We know that Lf (t)} = sF(s) — f (0) Taking the limit s — 0 on both sides, we get Lt Lift} = Lt [sF(s) — f (0)] 830 30 (i st ap = lo "i lt J PO e“ dt= Lt, sFs)— f(0 Therefore, J rode = Lt [sFis) - ((0)] 4 so [f@Ip = Lt f()- Lt f= Lt sF (s)-f(0) tom 130 30 Since f (0) is not a function ofs, it gets cancelled from both sides of the above equation. 
Therefore, Lt f= “Lt sFs) toe so 3.6 CONVOLUTION INTEGRAL If X(s) and H(s) are the Laplace transforms of x(t) and A(t), then the product of X(s)H(s) = Y(s), where Y(s) is the Laplace transform of y(t) given by Applications of Laplace Transform to System Analysis 139 y(t) = x(@sh(t) = J x(r)h(t - 2) dr (3.16) 0 ¥(s) = X(s) His) (3.17) ' Proof: Let y(¢) = J x(t) A(t- Ddt 0 Ys) = LL yO] = f e“* y(t) dt 0 ! e* | x(t) h(t 1) de dt 0 owns om t =f J etx) Ae - var ae 0 Changing the order of integration, the above equation becomes Liye) = ff et x(t) h(e— 0) de dz ot = Jao i e*n(t - oat 0 + Putting a = t- t, then ¢ = a + t and da = dt, we get Y(s) = cyto = f x) J eee ntana ° 0 = fume dt [jevemcoee] 0 0 = X(s)-H(s) Therefore, Y¥(s) = X(s)-H(s) The convolution of the signals in the time domain is equal to the multiplication of their individual Laplace transforms in the frequency domain. t y(t) = J x(t) h(t - 1) dt defines the convolution of functions x(t) and A(t) 0 and is expressed symbolically as y(t) = x(t) * A(t) This theorem is very useful in frequency domain analysis. image not available image not available image not available Applications of Laplace Transform to System Analysis 143 [_ S.No. fw) Fis) J 13. * eos(ingt) ult) ~ oh coakengh) (sta)? +05 14. sin h wg 16. cos h wot O 16. sinh @ot —}—, ° = +a? —o8 17. “cosh ant ite. eosn a Gta? -ob zi ssin @ + Wo cos @ 18. sin (@ot + @ etal $ cos 8— Wy sin® 19. cos (at + 8) — oo sx +05 Table 3.2 Properties of Laplace transform S.No. Property Time domain Frequency domain 1. Linearity af, (tbh) a F, (s) +b Fs) a and b are constants 2. Scalar multiplication Af (t) RF (s) 3. Scale change flat)a20 Ar(2) a a 4, Time delay f(t-a),a>0 F(s)e® 5. s-shift e* F(t) F(s+a) 6. Multiplication by ¢” ¢t"f(t),n=1,2,.. 1)" cee 3 7. Time differentiation /'(t) s F(s) - f (0) ro s? F\s)- sf (0)-f’ (0) re) s" F(s)-s"~* 0) -s"? £'0)-...- fF" "(0 f@—u)"? F(s) 8. Time integration Soe —Fu)du 9 (a-)! a” 9, Frequency Crero F*(s) differentiation -tfo F?(s)= ro iS F(t) F”(s) (Contd.) 144 Digital Signal Processing ~~ iO") Lio") S.No. Property Time domain Frequency domain 10. Frequency integration £0 JF(s)ds 11. Convolution AO*hO= t JA@AG-dd FLWF Oo 12. Final value f)=lim fs ——idim s F(s) tow 390 13. Initial value f= lim f® lim s Fis) t30" see 14, Time periodicity M=ftenD 2 R() 21,2, oe “e - where Fy(s) =| f(tye"** de 0 Table 3.3 Representation of Laplace transform circuit elements Time domain e-domain voltage in s-domain] i R Ks) > R ws we Ris) 1 i> é 19) oe , —k— 1) 2 Ve Pe Css 3 Ki) + Ze Ot) Ue Cs os s Ks) ~ Ls sabe O LsI(s)- Li(0*) Lio) M3) > bs LsI(s) + Li(0*) 3.8 PARTIAL FRACTION EXPANSIONS For the given function F(s) = N(s) Dis)” the inverse Laplace transform can be determined by expanding it into partial fractions. The degree of the Applications of Laplace Transform to System Analysis 145 numerator polynomial N(s) must be lower than that of denominator polynomial D(s). If the degree of the numerator is greater or equal to the degree of the denominator, the numerator N(s) is divided by the denominator D(s) so that the remainder can be expanded more easily into partial fractions. Case 1: For simple and real roots N(s) (po) — py(s— py) where po, Pp; and pz are real roots and the degree of N(s) < 3. 
Expanding F(s) into partial fractions, we have F()= —40_, Ar, Aa 8-Po S-Py S~P2 The constant Ay can be evaluated by multiplying F(s) with (s — po) and substituting s = pg, as given below Ao = (8 -Po) F(S)|s=p9 Similarly, the other constants can be evaluated with the help of a general solution given by Aj=(s—p;) F(s)|« fT © 4 ‘System t Xs) Function ¥(s) 5 H(s) Fig. 3.2 Transfer Function of a System The transfer function of a system H(s) is the Laplace transform of the impulse response A(t). The transfer function H(s) is strictly analogous to Applications of Laplace Transform to System Analysis 147 the frequency response used in Fourier analysis. There is a considerable similarity between the transfer functions of Laplace transform and Fourier transform. The main advantage of transformed functions is that the time-domain convolution is replaced by frequency-domain multiplication. Hence, ¥(s) = X(s) H(s) (3.19) If the Laplace transform Y (s) of the output signal is determined, then its equivalent time determined function y(t) can be determined by taking the inverse Laplace transform. 3.9.1 Step and Impulse Responses We know that the Laplace transform of a unit impulse 8(t) is unity, i.e. (8()} = 1. If the unit impulse is given as the system excitation, i.e. X(s) = 1, then the output response will be Y(s) =X (s) H(s) = H(s) Thus, it is shown that the impulse response Ah (¢) and the transfer function of the system H(s) constitute a transform pair, i.e. Lth(t))=H(s) £7 {H(s)} = h@) This implies that the step and impulse responses can be directly obtained from the system function. The step response is the integral of the impulse response. Hence, the integral property of the Laplace transform is used to obtain the step response a(t) as ate) = ct {2} s The unit ramp response ¥(t) is obtained from the equation given by y(t)= £71 {#2} 3.10 s-PLANE POLES AND ZEROS The transfer function of a linear time-invariant system may be expressed as 1 Y(s) _ @n 8" +@y-18"" X(8) by 8 + by 18" 8 + The system function is a rational function of s and is expressed as N(s) _,, (8-21) (8 ~ 29) (8 ~ 25) Dis) — (s~ py) (8 - P2) (8 ps) H(s)= H(s)= (3.21) image not available image not available image not available Applications of Laplace Transform to System Analysis 151 Substituting the values of A; and Ay , we get 3 7 6 (s+2) (s+4) Taking inverse Laplace transform, we get i(t) = -3e -* + 6e-* Ea Plot the poles and zeros for (s+ 1)(s +3) F(s) = 4 - —- . (s+ 2)(s +4) Solution The poles and zeros are plotted in Fig. E3.11(a). For evaluating f(t), the degree of the numerator polynomial must be one degree less than the degree of the denominator polynomial. I(s)=- and hence obtain f(t). Ajo Fig. E3.11 (a) Dividing numerator polynomial by denominator polynomial, we get 4 eDG+8) {1 2s+5 | (s+ 2)(s+4) (s + 2) (s+ 4) =4-0[ 8228 — (s+ 2)(s+4) =i=8' Ay 4 Aa (s+2) (s+4) To find the coefficients A, and A, F(s)= The poles and zero for the given function Lowey, are plotted (s + 2)(s +4) in Fig. E3.11(b) from which the coefficients A, and A, can be calculated. image not available Applications of Laplace Transform to System Analysis 153 Fig. E3.12(a) Evaluation of A, h(i) = Ay e Gide +A, e Atibe = 7.074 e~' elt + 7.0707 J*4 et edt = 107e [eine o ensaeniare] = 14.14e~ cos (n/4 + It (b) At = 2, the phasors from poles and zero are drawn to the testing point A at j2 as shown in Fig. E3.12 (b) Fig. 
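As a numerical cross-check of Example E3.11 (this listing is not part of the book), the Python sketch below uses SciPy's residue routine to expand F(s) = 4(s + 1)(s + 3)/((s + 2)(s + 4)) into a direct term plus partial fractions; the residues quoted in the comments follow from the cover-up rule rather than from the partly garbled printed solution.

import numpy as np
from scipy.signal import residue

# F(s) = 4(s + 1)(s + 3) / ((s + 2)(s + 4)), the function of Example E3.11
num = 4 * np.poly([-1, -3])        # 4s^2 + 16s + 12
den = np.poly([-2, -4])            # s^2 + 6s + 8

r, p, k = residue(num, den)
print("residues:", r)              # expected -2 and -6
print("poles   :", p)              # expected -2 and -4
print("direct  :", k)              # expected [4.]  ->  a 4*delta(t) term

# f(t) = k*delta(t) + sum_i r_i * exp(p_i * t) for t >= 0
#      = 4*delta(t) - 2*exp(-2t) - 6*exp(-4t)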
E3.12 (b) Magnitude M(j2) = 10x = 420 =4.47 2 v2 x10 a You have elther reached a page that is unavailable for viewing or reached your viewing lil far this book a You have elther reached a page that is unavailable for viewing or reached your viewing lil far this book 156 Digital Signal Processing 0) A O t T 2T Fig. E3.14 T =—_1 A yo-st - 7? [idee a] T =5 = A fte at eo 0 y)> SD ew 1 @ le f y me = ° 1 ‘ ole ee os ib “—s a i. =s Hoy os 1 ete a 3 — 3 =— 4 Ts*(1-e eames 3.16, Find the Laplace transform of the full wave rectified output as shown in Fig. E3.15. 0) T2 T 372 t Fig. £3.15 Solution The function for the given waveform is f(t)=Asin@pt for 0 seen OB -) @ BNE Ans : (a) (b) -2 () 1 (d) undefined 3.25 Find f(0%) if Fo) = 2342, 2 Ans : 2 \3.16 Find the initial and final values of the function 178° +78" +8+6 F(s) = ————_——_—_s>—_ (= Fy get 4559 + 4a? 420 Ans :0, 3. 3.17 Explain the following terms in relation to Laplace transform (a) Linearity (6) Scaling (c) Time-shift (d) Frequency differentiation and [3.18 State and explain any two properties of Laplace transform. 3.19 Explain the methods of determining the inverse Laplace transform. 3.20 Discuss the concept of transfer function and its applications. 3.21 Determine the inverse Laplace transform of 257 +38+3 P(e) = 22 +3843 = Canis+3y | | | (e) Time convolution | 3.22 Obtain the inverse Laplace transform of a (s? +2)" 186 Digital Signal Processing : ae Ans: =t sint 2 3.23 Obtain the inverse Laplace transform of the function B -3T t-e +B ( ) Ans: f(t) = 2 cos (be -) sin( BP) 3.24 Find the inverse Laplace transform of F(s)= (s) 7 2 5 s+5 @ Pie) aad ads 7 sets (ii) F2(s) = —>—> oe s*(s? +2) Ans: (i) f(t) = ute [2 cos J 31 Sts a sin V3 a1] 23 _3,(,_1)__3 | (9 = (i) fio = 2 f(t aye tlt 3 u(t 3 3.25 The transfer function of a network is given by 3s (s+ 2)(s” +s +2) Plot the pole-zero diagram and hence obtain h(t). ene) 3 yjze j2 2 '3.26 Draw the poles and zero for V(s) = H(s) = Ans: h(t) = j3e* { (s + 1) (s+ 3) (s+2)(s+4) v(t) by making use of the pole-zero diagram. Confirm the result analytically. and evaluate 1-2 3-4 Ans: u(t) = 6(t) - Se" — = ns: u(t) = dt) 3° 2° Sa lis \3.27 The transform current is given by I(s) GiD(?d Draw the pole-zero diagram and hence determine i(t). 3.28 Explain the significance of pole-zero diagram in circuit analysis. How will you determine the time domain response from the pole- zero plot? oe aoe s?+48+3 3.29 The current flowing in a network is given by I(s) = Tuas Draw the pole-zero diagram and hence obtain i(t). a You have elther reached a page that is unavailable for viewing or reached your viewing lil far this book 188 Digital Signal Processing AA >t oO a < T+a 2T 2T+a Fig. Q3.34 anh j | ay 0 v2 T 372 t Fig. Q3.35 2 1 A@o _ 2a Ane: FQ) = ATE) Fa of UE M0* [3.36 Determine the Laplace transform of the periodic sawtooth waveform, as shown in Fig. Q3.36. at) -A Fig. Q3.36 2AlT sT | : = =4!5 coth SF Ans: F(s) Ts [Zo x5 Applicaions of Laplace Transform to System Analysis 189 3.37 In the network of Fig. Q3.37, determine the current in the inductor Lz after the switch is closed at t = 0. Assume that the voltage source v(t) is applied at t = --. 2H = 22 aa (yetve v id ~~ Rs2a (or uss Fig. Q3.37 Ans : ig(t) = (4- tem - 26") ue 3.38 Derive from first principles the Laplace transform of a unit-step function. Hence or otherwise determine the Laplace transform of @ unit ramp function and a unit impulse function. 3.39 What do you understand by the impulse response of a network? Explain its significance in circuit analysis? 
13.40 If impulse response of a network is e~*', what will be its step response? 13.41 The unit step of a network is (1- e~*'). Determine the impulse response h(t) of the network. 3.42 The unit step response of a linear system r(t) = (2e"* — ult). Find (a) the impulse response and (b) the response due to an input x(t) shown in Fig. Q3.42. x(t) 2 J, 0 4 Fig. Q3.42 |3.43 Determine the Laplace transform of v(t) =e u(t) -e ©" u (t-1) If this voltage is applied to a network whose impedance is s? +4843 Z(s) = od s(s? +6s+8) ., then find the current I(s) and also i(t). 190 Digital Signal Processing 3.44 A sinusoidal voltage 25sint is applied at the instant t = 0 to an RL circuit with R = 5Qand L = 1H. Determine i(t) by using Laplace transform method. 3.45 In the circuit shown in Fig. Q3.45, the steady state condition exists with the switch in position 1. The switch is moved to position 2 at t = 0. Calculate the current through the coil at the switching instant and current for all values t > 0. 252 4, § VN + 100V = 252 3.46 In the circuit of Fig. Q 3.46, the switch S is closed and steady-state conditions have been reached. At t = 0, the switch S is opened. Obtain the expression for the current through the inductor. Fig. Q3.46 Ans : 5cos1000t 3.47 In the circuit of Fig. Q3.47, the switch S is closed at t = 0 after the switch is kept open for a long time. Determine the voltage across the capacitor. 182 WWW i) =10A 129 -4F 5 Xtzo Fig. Q3.47 Applicaions of Laplace Transform to System Analysis 191 Ans: 1+ 4e~1! 3.48 In the circuit shown in Fig. Q3.48, the initial current through L is 2A and initial voltage across C is 1V and the input excitation is x(t) = cos2t. Obtain the resultant current i(t) and hence v(t) across C. 3/22 12H cos 2t CQ) Ou emi | ssn Fig. Q3.48 Ans: i(t)=- Be +5e tS 2 cos2t+d sin 2t oft) = 8 8 et - Sem +3 sinat- a cos 2t 3.49 In the circuit shown in Fig. Q3.47, the initial current is i,(0) = 5A, initial voltage is v, (0) = 10V and x(t) = 10u(t). Find the voltage v(t) across the capacitor for t > 0. Ans : 0, (t) = 20- 10e-' - 5e* \3.50 Determine the resultant current i(t) when the pulse shown in Fig. Q3.50(a) is applied to the RL circuit shown in Fig. Q3.50(b). 12 A x(t) rT L 1 L 2H te Cun 1 (a) Fig. Q3.50 Ans : i(t) = (1-e*!?) u(t) - (1-e 1?) utt- 1) 3.51 An exponential current 2e~*' is applied at time t = 0 to a parallel RC circuit comprising resistor R = Za and capacitor C = IF. Using Laplace transformation, obtain complete particular solution for voltage v(t) across the’ network. Assume zero charge across the capacitor before the application of current. Ans : u(t) = 2e-*! - 2e-* 192. Digital Signal Processing (3.52 In the parallel RLC circuit, Ig = 5 amp, L = 0.2 H, C = 2F, and R = 0.5. Switch S is opened at time t = 0. Obtain the complete particular solution for the voltage v(t) across the parallel network, Assume zero current through inductor L and zero voltage across capacitor C before switching. Ans: v(t) = Sew sin 4 .53 A rectangular waveform shown in Fig. Q3.53 (a) is applied to an RLC circuit of Fig. Q3.53(b). Obtain the voltage v(t) across the capacitor C. A x(0) Fig. Q3.53 Chapter 4 2-Transforms 4.1 INTRODUCTION The Laplace transform plays a very important role in the analysis of analog signals or systems and in solving linear constant coefficient differential equations. It transforms the differential equations into the complex s-plane where algebraic operations and inverse transform can be performed to obtain the solution. 
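The transform–solve–invert routine just described is easy to sketch in a few lines of SymPy. The network used in the comments below (a first-order system with RC = 1, a unit-step drive and zero initial condition) is an illustrative assumption, not an example from the text; the point is only to show the differential equation becoming an algebraic one in s, being expanded into partial fractions and then inverted back to the time domain.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

# Illustrative first-order network: dv/dt + v = u(t), v(0) = 0  (RC = 1, unit-step input).
# Transforming term by term gives (s*V(s) - v(0)) + V(s) = 1/s, so
V = 1 / (s * (s + 1))

V_pf = sp.apart(V, s)                            # partial fractions: 1/s - 1/(s + 1) (term order may differ)
v_t = sp.inverse_laplace_transform(V_pf, s, t)   # invert each simple term back to the time domain
print(V_pf)
print(sp.simplify(v_t))                          # 1 - exp(-t), i.e. v(t) = (1 - e^-t) u(t)
```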
Like the Laplace transform, the z-transform provides the solution for linear constant coefficient difference equations, relating the input and output digital signals in the time domain. It gives a method for the analysis of discrete time systems in the frequency domain. An analog filter can be described by a frequency domain transfer function of the general form K(s - z,)(s — 2g) (8 - 23) -- (s- py)(s — poXs - pg) wheres is the Laplace variable and K is a constant. The poles p, Po, 3... and zeros 2, 2g, Z3 ... can be plotted in the complex s-plane. The transfer function H(z) of a digital filter may be described as H(s) = K(z - 2,)(2- 29) (z — 25) -- (z- pz — py Mz - pg)- Here the variable z is not the same as the variable s. For example, the frequency response of a digital filter is determined by substituting z = e/®; but the equivalent substitution in the analog case is s = jw, where w is the angular frequency in radians per second. Another essential difference is that the frequency response of an analog filter is not a periodic function. The transfer function H(s) is converted into a transfer function H(z), so that the frequency response of the digital filter over the range 0 < w < x approximates that of the analog filter over the range O 1. In other words, the ROC for X(z) is the area outside the unit circle in the z-plane. The ROC of a rational z-transform is bounded by the location of its poles. For example, the z-transform of the unit step response u(n) is X@) = = 7 which has a zero at z = 0 and a pole at z = 1 and the ROC z- is|z| > 1 and extending all the way to -, as shown in Fig. 4.2. tm(2) Pole at z=1 Zero at z=0 i WES Fig. 4.2 Pole-Zero Plot and ROC of the Unit-Step Response u(n) Important Properties of the ROC for the z-transform (i) X(z) converges uniformly if and only if the ROC of the z-transform X(z) of the sequence includes the unit circle. The ROC of X(z) consists of a ring in the z-plane centered about the origin. That is, the ROC of thez-transform ofx(n) has values ofz for which x(n) r™ is absolutely summable. Ylx@) r-" | <0 nea (ii) The ROC does not contain any poles. (iii) When x(n) is of finite duration, then the ROC is the entire z-plane, except possibly z = 0 and /or z = ~. (iv) Ifx(n) is a right-sided sequence, the ROC will not include infinity. (v) If x(n) is a left-sided sequence, the ROC will not include z = 0. However, if x(n) = 0 for all n > 0, the ROC will include z = 0. 198 Digital Signal Processing (vi) If x(n) is two-sided, and if the circle |z| = rp is in the ROC, then the ROC will consist of a ring in the z-plane that includes the circle |z| =ro. That is, the ROC includes the intersection of the ROC’s of the components. (vii) IfX(z) is rational, then the ROC extends to infinity, i.e. the ROC is bounded by poles. (viii) If x(n) is causal, then the ROC includes z = -. (ix) If x(n) is anti-causal, then the ROC includes z = 0. To determine the ROC for the series expressed by the Eq. 4.2, which is called a two-sided signal z-transform, this equation can be written as « <1 - Yer" = Yxn)z"+ YL x(n) 2" a =0 ne» n=-0 Y xen) 27" + FY x(n) 2" n=l n=0 The first series, a non-causal sequence, converges for |z| r,, resulting in an annular region of convergence. Then the Eq. 4.2 converges for r, < |z| < rg. provided r;< ra. The causal, anti-causal and two-sided signals with their corresponding ROCs are shown in Table 4.2. Some important commonly used z-transform pairs are given in Table 4.3. 
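Before turning to those tables, property (i) above — that the ROC consists of the values of z for which x(n) r^{-n} is absolutely summable — can be seen numerically for the unit-step example. The short sketch below is not from the text; the test radii and truncation length are arbitrary choices. It sums the series for u(n) at radii on either side of the pole at z = 1.

```python
import numpy as np

# Informal check of the u(n) example: X(z) = z/(z - 1) has a pole at z = 1,
# so sum |u(n) r^-n| should be finite only for radii r > 1 (ROC |z| > 1).
n = np.arange(200.0)
for r in (1.25, 1.05, 0.95):
    partial = np.cumsum(r**(-n))          # partial sums of the series for u(n)
    print(f"r = {r}:  S_100 = {partial[100]:.2f},  S_199 = {partial[-1]:.2f}")
# r = 1.25 and r = 1.05 settle towards 1/(1 - 1/r) (5 and 21), so these circles lie in the ROC;
# r = 0.95 keeps growing with n, so |z| = 0.95 lies outside the ROC, as property (i) predicts.
```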
Table 4.2 The Causal, anti-causal and two-sided signals and their ROCs Signals ROCs (a) Finite duration signals Anti-causal z-Transforms 199 0 (b)Infinite duration signals Causal 200 Digital Signal Processing Sequence representation A signal or sequence at time origin (n = 0) is indicated by the symbol T. If the sequence is not indicated by 1, then it is understood that the first (leftmost) point in this sequence is at the time origin. Table 4.3 Some important z-transform pairs S. Signal Sequence Laplace z-transform ROC No. __xit) x(n) _ transform X(s) X(z) 1 (Be) 3(n) 1 All z-plane 2 &¢-k) Sink) ~ a |z]>0,k>0 Jz] <2, k<0 3. ult) u(n) 7 jz[>1 8 1 4. -u(-n-1) 2 jz] <1 8 5. tu) nu(n) J. ee 3° (i-27)? (@-1)? 6. a"u(n) an lzl>lal | 1 7. -a"u(-n-1 lel < lel 8. "u(n) St z2|>|a nat uln a lz1> lal , az 9. —na"u(-n-1) Gao lzl < la] von 1 1 4 10. e* e Gio lew lz] > le~*] 2 2 1 (+27) _az+D uw n? u(n) = Teh Gp (el>t 2. tet one te es ea] (s+a)? (l-e“z")? (z-e*) 7 z oO zsin oy 13. sin@t sinan wat Fines lz|>1 2(z — C08 Wp) 14. t z= SS 1 COS Dot cos aon 8 +05 2? 22 cos @y +1 lel > . ‘ Mo zsinh > 15. sinh @t sinh ap z[>1 eran mot Eater 8? +05 2? 22 cosh@ +1 lel 2z-Transforms 201 ‘S. Signal Sequence Laplace z-transform ROC No. _ x(t) x(n)__ transform X(s) X@) 8 2(z - cosh Wo) 16. cos h Wot cosh Mn : “ 8? - 05 27-22 cosh Wy +1 jz}>1 17. ew em ‘ 7 & ze“sin @ sin @t sin yn lz] > Je*] Gietao! Poo” osu, Fe (sta) +@ 2° - 2ze™* cos Oy +e pat an ¢ sta 2(z — e~* cos Wy) a C08 Mt cos@yn —=*_ __ FR EEO! Jz] > Je] (sta)? +03 27 - 22e~* cos oy +e ne zasin @ a" sin Wyn = SCsi I > el 29, 2 2° —2zac0s Wy +a 8 2(2- aos wy) a” cos tay n =e et > lal z? —2zac0s @ +a Eazy Determine the z-transform of the following finite luration signals Oy 4, 8% FO, ¥ (a) xin) = { } ? 2, 4, 7, 1,2 (ain) = {? Ne 191, \ (c) x(n) = (1, 2, 5, 4, 0, 1) (d) x(n) = { 0, 0, 1, 2, 5, 4,0, 1) (e) x(n) = &(n) (f) x(n) = &n —k) (g) x(n) = &(n + k) Solution (a) x(n) = { 3, 1, 2,5, 7,0, 1 re} Taking z-transform, we get Xz) = 829 + 2° + 22454 Tet 42°. ROC: Entire z-plane except z = 0 and z = ~. (b) x(n) = {* 4, 5, 7, 0, 1, *| t Taking z-transform, we get X(z) = 227442454721 4754224, ROC: Entire z-plane except z = 0 and z = « (c) x(n) = (1, 2,5, 4,0, 1) Taking z-transform, we get X(z) = 14 221+ 52? 4 423425 ROC: Entire z-plane except z = 0. 202 Digital Signal Processing (d) x(n) = (0, 0, 1, 2, 5, 4, 0, 1) Taking z-transform, we get Xz) = 27 +2254 5244425427. ROC: Entire z-plane except z = 0. (e) x(n) = &(n), hence X(z) = 1, ROC: Entire z-plane. (f) x(n) = &(n —k), k > 0, hence X(z) = z*, ROC: Entire z-plane except z=0 (g) x(n) = &(n +k), k > 0, hence X(z) = z*, ROC: Entire z-plane except Z=0, [EEREEEEIEZE) Determine the z-transform including the region of convergence of ", n2z0d0 0, n |a|. Values of z for which X(z) = 0 are called zeros of X(z), and values ofz for which X(z) > ~ are called poles of X(z). Here the poles are at z = a and zeros at z = 0. The region of convergence is shown in Fig. E 4.3. Zero at origin z= 0 Pola at z= a Rez Cy Izl= lal iio Boundary tor ROC Fig. E4.3. ROC for the z-transform of x(n) = a". z-Transforms 203 4.3, PROPERTIES OF z-TRANSFORM A number of useful theorems for z-transforms are presented and discussed in this section, These are summarised in Table 4.4. Table 4.4 Properties of z-Transform roperty or z-transform operation . 
Transformation:            x(n)  ↔  X(z) = Σ_{n=0}^{∞} x(n) z^{-n}
Inverse transformation:    x(n) = (1/2πj) ∮_C X(z) z^{n-1} dz
Linearity:                 a_1 x_1(n) + a_2 x_2(n)  ↔  a_1 X_1(z) + a_2 X_2(z)
Time reversal:             x(-n)  ↔  X(z^{-1})
Time shifting:             (i) x(n - k)  ↔  z^{-k} X(z)    (ii) x(n + k)  ↔  z^{k} X(z)
Convolution:               x_1(n) * x_2(n)  ↔  X_1(z) X_2(z)
Correlation:               r_{x1x2}(l) = Σ_n x_1(n) x_2(n - l)  ↔  R_{x1x2}(z) = X_1(z) X_2(z^{-1})
Scaling:                   a^n x(n)  ↔  X(a^{-1} z)
Differentiation in z:      n x(n)  ↔  -z dX(z)/dz
Time differentiation:      x(n) - x(n - 1)  ↔  (1 - z^{-1}) X(z)
Time integration:          Σ_{k=0}^{n} x(k)  ↔  [z/(z - 1)] X(z)
Initial value theorem:     x(0) = lim_{z→∞} X(z)
Final value theorem:       lim_{n→∞} x(n) = lim_{z→1} (z - 1) X(z)

4.3.1 Linearity

If x_1(n) ↔ X_1(z) and x_2(n) ↔ X_2(z), then

    x(n) = a_1 x_1(n) + a_2 x_2(n)  ↔  X(z) = a_1 X_1(z) + a_2 X_2(z)        (4.5)

where a_1 and a_2 are arbitrary constants. It implies that the z-transform of a linear combination of signals is the same linear combination of their z-transforms.

Example 4.4  Determine the z-transform of x(n) = δ(n + 1) + 3δ(n) + 6δ(n - 3) - δ(n - 4).

Solution  From the linearity property, we have

    X(z) = Z{δ(n + 1)} + 3 Z{δ(n)} + 6 Z{δ(n - 3)} - Z{δ(n - 4)}

Using the z-transform pairs, we obtain

    X(z) = z + 3 + 6 z^{-3} - z^{-4}

Therefore, x(n) = {1, 3, 0, 0, 6, -1}, with the time origin at the second sample (the value 3). The ROC is the entire z-plane except z = 0 and z = ∞. The same result can be obtained by using the definition of the transform.

Example 4.5  Find the z-transform of x(n) = cos ω_0 n for n ≥ 0.

Solution

    x(n) = cos ω_0 n = (1/2)(e^{jω_0 n} + e^{-jω_0 n})

Using the transform Z{a^n} = 1/(1 - a z^{-1}), for n ≥ 0,

    Z{(e^{jω_0})^n} = 1/(1 - e^{jω_0} z^{-1}),    |z| > 1

Similarly, for n ≥ 0,

    Z{(e^{-jω_0})^n} = 1/(1 - e^{-jω_0} z^{-1}),    |z| > 1

Therefore,

    X(z) = Z{cos ω_0 n} = (1/2)[1/(1 - e^{jω_0} z^{-1}) + 1/(1 - e^{-jω_0} z^{-1})]
         = (1 - z^{-1} cos ω_0)/(1 - 2 z^{-1} cos ω_0 + z^{-2})
         = (z^2 - z cos ω_0)/(z^2 - 2z cos ω_0 + 1),    |z| > 1

Similarly, we can find Z{sin ω_0 n} using the property of linearity, i.e.

    Z{sin ω_0 n} = Z{(1/2j)(e^{jω_0 n} - e^{-jω_0 n})}
                 = (z^{-1} sin ω_0)/(1 - 2 z^{-1} cos ω_0 + z^{-2}) = (z sin ω_0)/(z^2 - 2z cos ω_0 + 1),    |z| > 1

4.3.2 Time Reversal

If x(n) ↔ X(z), ROC: r_1 < |z| < r_2, then x(-n) ↔ X(z^{-1}), ROC: 1/r_2 < |z| < 1/r_1.
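These results are straightforward to sanity-check numerically. The NumPy sketch below is not part of the text; the test frequency, the test point z and the truncation length are arbitrary choices. It evaluates the defining series directly and compares it with the closed form of Example 4.5, and then checks the time-reversal property on the finite-length sequence of Example 4.4 (re-indexed to start at n = 0 for convenience).

```python
import numpy as np

# 1. Example 4.5:  Z{cos(w0 n)} = (z^2 - z cos w0) / (z^2 - 2 z cos w0 + 1),  ROC |z| > 1
w0 = 0.7                              # arbitrary test frequency, radians per sample
z = 1.5 * np.exp(1j * 0.4)            # any test point with |z| > 1 lies inside the ROC
n = np.arange(400.0)                  # truncation; the tail is negligible for |z| = 1.5

series = np.sum(np.cos(w0 * n) * z**(-n))
closed = (z**2 - z * np.cos(w0)) / (z**2 - 2 * z * np.cos(w0) + 1)
print(abs(series - closed))           # floating-point small: the transform pair holds at this z

# 2. Time reversal for a finite-length sequence:  Z{x(-n)} = X(1/z)
x = np.array([1.0, 3.0, 0.0, 0.0, 6.0, -1.0])   # Example 4.4, taken here as starting at n = 0
m = np.arange(len(x), dtype=float)              # n = 0 ... 5

def X(zz):                            # X(z) of the finite sequence by direct summation
    return np.sum(x * zz**(-m))

n_rev = -m[::-1]                      # support of x(-n): n = -5 ... 0
x_rev = x[::-1]                       # x(-n) listed over that support
lhs = np.sum(x_rev * z**(-n_rev))     # Z{x(-n)} summed directly
print(abs(lhs - X(1.0 / z)))          # floating-point small: matches X(1/z), as the property states
```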
