
Laplace Transform

Elisa Franco November 9, 2007

Introduction
Consider a differential equation with constant coefficients such as:

$$ a_n x^{(n)} + a_{n-1} x^{(n-1)} + \dots + a_0 x = b_m u^{(m)} + b_{m-1} u^{(m-1)} + \dots + b_0 u \qquad (1) $$

Suppose the above equation is a model for a dynamical system of interest: the solution x(t) is then the state of the system, while u(t) is its input. Furthermore, assume that n > m: this is equivalent to assuming that our system is causal (the state at time t only depends on past states and inputs). We can easily find the solution to a differential equation of this type using the Laplace transform.

Definition

Under suitable assumptions, the Laplace Transform (LT) is a bijective (and therefore invertible) mapping that links functions of time f(t) with functions of a complex argument F(s), $s = \sigma + j\omega \in \mathbb{C}$. A key feature of this transformation is that it maps differential equations to algebraic equations: in particular, the convolution operation corresponds to multiplication, as we will see in Section 3.

Figure 1: Laplace Transform mapping ($\mathcal{L}$ takes functions of time and differential equations to functions of the complex variable s and algebraic equations; $\mathcal{L}^{-1}$ maps back).

The Laplace Transform of a function f(t), piecewise continuous and bounded, such that f(t) = 0 for t < 0, is defined [1] as:

$$ F(s) = \mathcal{L}[f(t)] \triangleq \int_{0^-}^{+\infty} f(t)\, e^{-st}\, dt \qquad (2) $$

The definition holds for those values of s such that the integral converges. It can be proved that if the integral converges for $s_0$, then it converges for all s such that $\mathrm{Re}\{s\} > \mathrm{Re}\{s_0\} = \sigma_0$. The value $\sigma_0$ delimits the region of convergence [2], or ROC, of the transform.

Figure 2: ROC in the complex plane.
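As a quick check of the definition and of the ROC, the transform of a decaying exponential can be computed symbolically. This is a minimal sketch using Python's sympy, which is an assumption of this write-up and not part of the original notes:

```python
import sympy as sp

t, s = sp.symbols('t s')
a = sp.symbols('a', positive=True)

# L[e^{-a t} 1(t)]: sympy returns (F(s), sigma_0, extra condition)
F, sigma0, cond = sp.laplace_transform(sp.exp(-a*t), t, s)
print(F)       # 1/(a + s)
print(sigma0)  # -a  -> ROC is Re{s} > -a
```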

Properties of the Laplace Transform


Linearity. The integral operation is linear, therefore the LT inherits this property:
$$ \mathcal{L}[c_1 f_1(t) + c_2 f_2(t)] = c_1 \mathcal{L}[f_1(t)] + c_2 \mathcal{L}[f_2(t)] $$

Transformation of the integral. If the functions f(t) and $g(t) = \int_0^t f(\tau)\, d\tau$ have LTs with the same ROC:
$$ \mathcal{L}[g(t)] = \frac{1}{s}\, \mathcal{L}[f(t)] $$

Transformation of the derivative. If f(t) and $\dot f(t)$ have LTs with the same ROC:
$$ \mathcal{L}[\dot f(t)] = s\, \mathcal{L}[f(t)] - f(0^-) $$
This can be extended to the derivative of order n:
$$ \mathcal{L}[f^{(n)}(t)] = s^n\, \mathcal{L}[f(t)] - \sum_{k=0}^{n-1} s^{n-k-1} f^{(k)}(0^-) $$

[1] This definition allows us to consider functions that present an impulse at the origin. The bilateral Laplace Transform runs the integral from $-\infty$ to $+\infty$.
[2] More precisely: $\sigma_0 \triangleq \inf\left\{ \mathrm{Re}\{s\},\ s \in \mathbb{C} \ :\ \int_{0^-}^{+\infty} f(t)\, e^{-st}\, dt = K < \infty \right\}$.
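The derivative rule can be verified on a concrete signal. The snippet below is again a sympy sketch (sympy assumed, as above), using $f(t) = \sin(\omega t)$, for which $f(0^-) = 0$:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
w = sp.symbols('omega', positive=True)

f = sp.sin(w*t)                       # example signal, f(0-) = 0
F = sp.laplace_transform(f, t, s, noconds=True)

# L[f'(t)] should equal s*F(s) - f(0-)
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
rhs = s*F - f.subs(t, 0)
print(sp.simplify(lhs - rhs))         # 0
```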

Frequency shift. If f(t) admits a LT and $k \in \mathbb{C}$, the function $e^{kt} f(t)$ has ROC delimited by $\sigma_0 + \mathrm{Re}\{k\}$ and:
$$ \mathcal{L}[e^{kt} f(t)] = F(s - k) $$
In fact:
$$ \int_{0^-}^{+\infty} e^{kt} f(t)\, e^{-st}\, dt = \int_{0^-}^{+\infty} f(t)\, e^{-(s-k)t}\, dt $$

Time shift. If f(t) admits a LT, the introduction of a time shift $t_0 > 0$ still yields a LT with the same ROC, and:
$$ \mathcal{L}[f(t - t_0)\, 1(t - t_0)] = e^{-s t_0} F(s) $$
This can be shown with the change of variable $\tau = t - t_0$:
$$ \int_{0^-}^{+\infty} f(t - t_0)\, 1(t - t_0)\, e^{-st}\, dt = \int_{0^-}^{+\infty} f(\tau)\, e^{-s\tau}\, e^{-s t_0}\, d\tau = e^{-s t_0} F(s) $$

Multiplication by $t^n$. Assume n = 1, and develop the derivative of F(s):
$$ \frac{d}{ds} F(s) = \frac{d}{ds} \int_{0^-}^{+\infty} f(t)\, e^{-st}\, dt = \int_{0^-}^{+\infty} f(t)\, \frac{d}{ds} e^{-st}\, dt = -\int_{0^-}^{+\infty} t\, f(t)\, e^{-st}\, dt = -\mathcal{L}[t f(t)] $$
This yields the general formula:
$$ \mathcal{L}[t^n f(t)] = (-1)^n \frac{d^n F(s)}{ds^n} $$

Time scaling.
$$ \mathcal{L}[f(at)] = \frac{1}{a}\, F\!\left(\frac{s}{a}\right), \qquad a \in \mathbb{R}^+ $$
Again this can be proved with the change of variable $\tau = at$:
$$ \int_{0^-}^{+\infty} f(at)\, e^{-st}\, dt = \int_{0^-}^{+\infty} f(\tau)\, e^{-\frac{s}{a}\tau}\, d\!\left(\frac{\tau}{a}\right) = \frac{1}{a}\, F\!\left(\frac{s}{a}\right) $$
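The multiplication-by-$t^n$ rule lends itself to the same kind of symbolic check (sympy assumed, example signal chosen by me):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
n = 2
f = sp.exp(-3*t)                      # example signal

F = sp.laplace_transform(f, t, s, noconds=True)
lhs = sp.laplace_transform(t**n * f, t, s, noconds=True)
rhs = (-1)**n * sp.diff(F, s, n)      # (-1)^n d^n F / ds^n
print(sp.simplify(lhs - rhs))         # 0
```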

Convolution. This is probably the most important property of the LT. Recall the definition of the convolution integral: given f(t), g(t), with f(t) = g(t) = 0 for t < 0,
$$ h(t) \triangleq f(t) * g(t) = \int_0^t f(\tau)\, g(t - \tau)\, d\tau $$
It can be proved that:
$$ \mathcal{L}[f(t) * g(t)] = F(s)\, G(s) $$
In fact:
$$ \mathcal{L}[f(t) * g(t)] = \int_{0^-}^{+\infty} \left[ \int_0^t f(\tau)\, g(t - \tau)\, d\tau \right] e^{-st}\, dt = \iint_D f(\tau)\, g(t - \tau)\, e^{-st}\, d\tau\, dt $$
where $D = \{(t, \tau) \in \mathbb{R}^2 \,|\, \tau \leq t\}$ is the domain of integration. This means that:
$$ \mathcal{L}[f(t) * g(t)] = \int_0^{+\infty} \left\{ \int_\tau^{+\infty} f(\tau)\, g(t - \tau)\, e^{-st}\, dt \right\} d\tau = \int_0^{+\infty} \int_0^{+\infty} e^{-s(\tau + \theta)} f(\tau)\, g(\theta)\, d\theta\, d\tau = F(s) \cdot G(s) $$
(with the change of variable $\theta = t - \tau$ in the inner integral). Note: for $\tau > t$, $g(t - \tau) \equiv 0$, which is why the domain of integration is D. This is a fundamental property, see Figure 3.
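To illustrate the convolution property, one can compute a convolution integral explicitly and compare its transform with the product of the individual transforms; a sympy sketch (signals chosen by me):

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

f = sp.exp(-t)
g = sp.exp(-2*t)

# h(t) = (f * g)(t) = int_0^t f(tau) g(t - tau) dtau
h = sp.integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))

H = sp.laplace_transform(h, t, s, noconds=True)
FG = (sp.laplace_transform(f, t, s, noconds=True)
      * sp.laplace_transform(g, t, s, noconds=True))
print(sp.simplify(H - FG))            # 0
```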
Figure 3: LT of a convolution integral (a system with impulse response h(t): the output $y(t) = \int_0^t h(t - \tau)\, u(\tau)\, d\tau$ corresponds, in the complex domain, to $Y(s) = H(s)\, U(s)$).

LT of a periodic function. Consider a periodic function f(t) of period T, like the one represented in Figure 4, and let h(t) be the function that coincides with f(t) on the first period and vanishes elsewhere:
$$ h(t) = \begin{cases} f(t) & \text{for } t \in [0^-, T] \\ 0 & \text{otherwise} \end{cases} $$

Figure 4: Periodic function (the pattern repeats at T, 2T, 3T, ...).

The function f(t) can then be described as $f(t) = \sum_{k=0}^{+\infty} h(t - kT)$. By the linearity and time shift properties:
$$ F(s) = \mathcal{L}[f(t)] = \sum_{k=0}^{+\infty} H(s)\, e^{-skT} = H(s) \sum_{k=0}^{+\infty} e^{-skT} = \frac{H(s)}{1 - e^{-sT}} $$
since $\sum_{k=0}^{+\infty} e^{-skT} = \frac{1}{1 - e^{-sT}}$ is a geometric series.
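A small numerical illustration in plain Python (not from the notes): at a fixed s with Re{s} > 0, a truncated sum of shifted transforms approaches H(s)/(1 - e^{-sT}). Here h(t) is taken to be a unit pulse of width T, so the periodic extension is the unit step and the closed form reduces to 1/s:

```python
import cmath

T = 2.0
s = 0.8 + 1.3j                         # any point with Re{s} > 0

def H(s):
    # LT of a unit pulse of width T: h(t) = 1 for 0 <= t < T
    return (1 - cmath.exp(-s*T)) / s

closed_form = H(s) / (1 - cmath.exp(-s*T))          # equals 1/s here
truncated = sum(H(s) * cmath.exp(-s*k*T) for k in range(200))
print(abs(closed_form - truncated))                 # ~ 1e-16 (tiny)
```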

List of transformations
LT of the step function:
$$ \mathcal{L}[1(t)] = \int_{0^-}^{+\infty} 1(t)\, e^{-st}\, dt = -\frac{1}{s}\left[ e^{-st} \right]_{0}^{+\infty} = \frac{1}{s} \qquad (\sigma_0 = 0) $$

LT of the Dirac impulse $\delta(t)$:
$$ \mathcal{L}[\delta(t)] = \int_{0^-}^{+\infty} \delta(t)\, e^{-st}\, dt = \int_{0^-}^{0^+} \delta(t)\, e^{-st}\, dt = 1 \qquad (\sigma_0 = -\infty) $$

Note: the expression of the LT of $\delta(t)$ allows us to easily find the LT of a series of impulses (Figure 5):

Figure 5: Series of impulses (unit impulses at t = 0, T, 2T, 3T, ...).

$$ \mathcal{L}\left[ \sum_{k=0}^{+\infty} \delta(t - kT) \right] = \sum_{k=0}^{+\infty} e^{-skT} \cdot 1 = \frac{1}{1 - e^{-sT}} $$

This result is based on the transformation of a periodic function. Recall the convolution integral property: multiplication in the complex domain corresponds to convolution in the time domain. Then any periodic function f(t), having period T, can be seen as the convolution of a basis function defined between 0 and T with a series of impulses having the same period.

LT of the exponential function:
$$ \mathcal{L}[e^{kt} 1(t)] = \int_{0^-}^{+\infty} e^{kt} e^{-st}\, dt = \int_{0^-}^{+\infty} e^{(k-s)t}\, dt = \frac{1}{k-s}\left[ e^{(k-s)t} \right]_{0}^{+\infty} = \frac{1}{s-k} \qquad (\sigma_0 = \mathrm{Re}\{k\}) $$

LT of $t^n$. This can be shown using the step function LT and the multiplication by $t^n$ property:
$$ \mathcal{L}[t^n 1(t)] = (-1)^n \frac{d^n}{ds^n}\left( \frac{1}{s} \right) = (-1)^n \left\{ (-1)^n \frac{n!}{s^{n+1}} \right\} = \frac{n!}{s^{n+1}} \qquad (\sigma_0 = 0) $$

LT of $t^n e^{kt}$:
$$ \mathcal{L}[t^n e^{kt} 1(t)] = \frac{n!}{(s-k)^{n+1}} $$

LT of $\sin(\omega t)$. Recall the Euler formulas $e^{j\alpha} = \cos\alpha + j\sin\alpha$, $e^{-j\alpha} = \cos\alpha - j\sin\alpha$:
$$ \mathcal{L}\left[ \frac{e^{j\omega t} - e^{-j\omega t}}{2j}\, 1(t) \right] = \frac{1}{2j}\left( \frac{1}{s - j\omega} - \frac{1}{s + j\omega} \right) = \frac{1}{2j} \cdot \frac{2j\omega}{s^2 + \omega^2} = \frac{\omega}{s^2 + \omega^2} \qquad (\sigma_0 = 0) $$

LT of $\cos(\omega t)$:
$$ \mathcal{L}\left[ \frac{e^{j\omega t} + e^{-j\omega t}}{2}\, 1(t) \right] = \frac{1}{2}\left( \frac{1}{s - j\omega} + \frac{1}{s + j\omega} \right) = \frac{1}{2} \cdot \frac{2s}{s^2 + \omega^2} = \frac{s}{s^2 + \omega^2} \qquad (\sigma_0 = 0) $$
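These entries can be cross-checked mechanically; the sketch below (sympy assumed) recomputes a few of them, writing the exponential as $e^{-kt}$ so that the pole sits at $s = -k$:

```python
import sympy as sp

t, s = sp.symbols('t s')
w, k = sp.symbols('omega k', positive=True)
n = 3

for f in (sp.Heaviside(t), t**n, t**n * sp.exp(-k*t),
          sp.sin(w*t), sp.cos(w*t)):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '->', sp.simplify(F))
# 1/s,  n!/s^(n+1),  n!/(s+k)^(n+1),  w/(s^2+w^2),  s/(s^2+w^2)
```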

Examples

Example 5.1. Find the Laplace Transform of the function represented in Figure 6.

Figure 6: Example 1.

We can see this function f(t) as the sum of three elementary parts:
- a step at t = 0;
- a ramp of slope $-\frac{1}{T}$ starting at t = 0;
- a ramp of slope $+\frac{1}{T}$ starting at t = T;
$$ f(t) = 1(t) - \frac{1}{T}\, t\, 1(t) + \frac{1}{T}\, (t - T)\, 1(t - T) $$
The linearity property of the LT allows us to immediately find:
$$ \mathcal{L}[f(t)] = \frac{1}{s} - \frac{1}{T} \cdot \frac{1}{s^2} + \frac{1}{T} \cdot \frac{1}{s^2}\, e^{-Ts} = \frac{1}{s} - \frac{1}{T s^2}\left[ 1 - e^{-Ts} \right] $$
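The result can be confirmed symbolically by writing the pulse with Heaviside functions; a sympy sketch (whether the shifted term is handled automatically depends on the sympy version):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
T = sp.symbols('T', positive=True)

f = (sp.Heaviside(t) - t/T*sp.Heaviside(t)
     + (t - T)/T*sp.Heaviside(t - T))
F = sp.laplace_transform(f, t, s, noconds=True)
expected = 1/s - (1 - sp.exp(-T*s))/(T*s**2)
print(sp.simplify(F - expected))      # 0
```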

Example 5.2. Find the Laplace Transform of the function given in Figure 7.

Figure 7: Example 2.

The function of Figure 7 is just a delayed version of the function $f_1(t)$ shown in Figure 8.

Figure 8: Example 2, function shifted to the origin.

Denote by $\omega$ the angular frequency of the sinusoid and by $\varphi$ its initial phase in Figure 8. Then $f_1(t)$ can be interpreted as the superposition of:
- a sinusoid anticipated of $\varphi/\omega$, and zero for t < 0;
- a sinusoid delayed of $(\pi - \varphi)/\omega$, and zero before that instant;
$$ f_1(t) = \sin(\omega t + \varphi)\, 1(t) + \sin\!\left( \omega\!\left( t - \frac{\pi - \varphi}{\omega} \right) \right) 1\!\left( t - \frac{\pi - \varphi}{\omega} \right) $$
All half-periods cancel except the first, since for $t \geq \frac{\pi - \varphi}{\omega}$ the second sinusoid equals $-\sin(\omega t + \varphi)$. We can now use the angle addition formula
$$ \sin(\omega t + \varphi) = \sin(\omega t)\cos\varphi + \cos(\omega t)\sin\varphi $$
and, using the time shift property for the second term, we find:
$$ \mathcal{L}[f_1(t)] = \frac{\omega\cos\varphi + s\sin\varphi}{s^2 + \omega^2} + \frac{\omega}{s^2 + \omega^2}\, e^{-s\frac{\pi - \varphi}{\omega}} $$
Finally we can find the LT of f(t), which is $f_1(t)$ with a delay of $\varphi/\omega$:
$$ \mathcal{L}[f(t)] = \mathcal{L}[f_1(t)]\, e^{-s\frac{\varphi}{\omega}} = \left[ \frac{\omega\cos\varphi + s\sin\varphi}{s^2 + \omega^2} + \frac{\omega}{s^2 + \omega^2}\, e^{-s\frac{\pi - \varphi}{\omega}} \right] e^{-s\frac{\varphi}{\omega}} $$

Example 5.3 (Solution of differential equations). Let's go back to our initial problem: we want to use LTs to solve differential equations. We will just apply the derivative property up to the n-th order. Consider this differential equation:
$$ \frac{d^2 y}{dt^2} + 3\,\frac{dy}{dt} + 2\, y(t) = (1 + 3t)\, u(t), \qquad y(0^-) = 1, \qquad \left.\frac{dy}{dt}\right|_{t=0^-} = 0 $$
If u(t) = 1(t), let's apply the LT derivative property:
$$ s^2 Y(s) - s\, y(0^-) - \dot y(0^-) + 3\left[ s\, Y(s) - y(0^-) \right] + 2\, Y(s) = \frac{1}{s} + \frac{3}{s^2} $$
$$ s^2 Y(s) - s + 3\left[ s\, Y(s) - 1 \right] + 2\, Y(s) = \frac{1}{s} + \frac{3}{s^2} $$
$$ (s^2 + 3s + 2)\, Y(s) - s - 3 = \frac{1}{s} + \frac{3}{s^2} $$
This is an algebraic equation in s that we can easily solve for Y(s):
$$ Y(s) = \frac{s + 3}{s^2 + 3s + 2} + \frac{1}{s^2 + 3s + 2}\left( \frac{1}{s} + \frac{3}{s^2} \right) = Y_l(s) + Y_f(s) $$
The first term $Y_l(s)$ is the free response, the second term $Y_f(s)$ is the forced response. Now we need to go back to the time domain, since we are looking for y(t)!
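The algebraic step can be reproduced with sympy (a sketch under the same assumption as the earlier snippets, not the notes' own code): treat Y(s) as an unknown and solve the transformed equation.

```python
import sympy as sp

s = sp.symbols('s')
Y = sp.symbols('Y')                   # unknown transform Y(s)

y0, dy0 = 1, 0                        # initial conditions y(0-), y'(0-)
U = 1/s + 3/s**2                      # L[(1 + 3t) 1(t)]

eq = sp.Eq(s**2*Y - s*y0 - dy0 + 3*(s*Y - y0) + 2*Y, U)
Ysol = sp.solve(eq, Y)[0]
print(sp.apart(Ysol, s))              # partial fraction form of Y(s)
```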

Inverse Laplace Transformation

Under certain assumptions, one can uniquely define the Inverse Laplace Transform (ILT) of $F(s) = \mathcal{L}[f(t)]$ as $f(t) = \mathcal{L}^{-1}[F(s)]$; the operator $\mathcal{L}^{-1}$ is linear. The general formula is:
$$ f(t) = \frac{1}{2\pi j} \int_{\sigma - j\infty}^{\sigma + j\infty} F(s)\, e^{st}\, ds \qquad (3) $$
where $\sigma > \sigma_0$ lies inside the ROC and the integration is along the vertical line $\mathrm{Re}\{s\} = \sigma$.

Integrals of complex variables


Let's recall some useful definitions from complex analysis. A function f(s) which is $C^\infty$ (or analytic) on the entire complex plane except for some isolated singularity a (i.e. the function is not continuous at a) can be expanded in a Laurent series around a:
$$ f(s) = \sum_{n=-\infty}^{+\infty} a_n (s - a)^n $$
The discontinuity point (also called singularity) a can be classified as:
- Removable: there are no negative powers in the Laurent series, therefore $\lim_{s\to a} f(s) = a_0$. Thus we can define $f(a) = a_0$.
- Pole of order m: there is a finite number m of negative powers in the Laurent series. Therefore a is a removable singularity for the function $f_1(s) = (s-a)^m f(s)$, and $\lim_{s\to a} f(s) = \infty$.
- Essential singularity: there are infinitely many negative powers in the Laurent series.

If f(s) is analytic on an area of the complex plane delimited by a closed path $\gamma$, except for a finite number r of isolated singularities $a_k$ in the interior of $\gamma$, Cauchy's theorem tells us that:
$$ \oint_\gamma f(s)\, ds = 2\pi j \sum_{k=1}^{r} \mathrm{Res}(f, a_k) $$
where $\mathrm{Res}(f, a_k)$ is called the residue of f(s) at $a_k$. It can be proved that the value of $\mathrm{Res}(f, a_k)$ is the coefficient $a_{k,-1}$ of the Laurent series expansion of f(s) around $a_k$:
$$ \mathrm{Res}(f, a_k) = a_{k,-1} $$
There exist easy formulas to compute the residues, and we will see them later.

A Laplace Transform F(s) is analytic in its domain of convergence, therefore all its singularities are to the left of the ROC line $\mathrm{Re}\{s\} = \sigma_0$. Consider:
$$ \frac{1}{2\pi j} \oint_C F(s)\, e^{st}\, ds \qquad (4) $$
where C is called the Bromwich integral path and is the solid bold line shown in Figure 9.

Figure 9: Inverse transformation integral path.

If $\Gamma$ is the semi-circular part of C, having radius R, we can write our ILT integral introduced in equation (3) as:
$$ f(t) = \frac{1}{2\pi j} \int_{\sigma - j\infty}^{\sigma + j\infty} F(s)\, e^{st}\, ds = \lim_{R\to\infty} \frac{1}{2\pi j} \oint_C F(s)\, e^{st}\, ds - \lim_{R\to\infty} \frac{1}{2\pi j} \int_\Gamma F(s)\, e^{st}\, ds $$
If the term $\lim_{R\to\infty} \int_\Gamma F(s)\, e^{st}\, ds$ were negligible, we could easily compute (3) using the residue formulas! A sufficient condition yielding $\lim_{R\to\infty} \int_\Gamma F(s)\, e^{st}\, ds = 0$ is that F(s) is a rational function of polynomials in s, $F(s) = \frac{P(s)}{Q(s)}$, where $\deg(Q) > \deg(P)$ (F(s) strictly proper, or causal!). So we can apply Cauchy's theorem; the factor $2\pi j$ cancels out. Summarizing, if F(s) is analytic except for a finite number of singularities and strictly proper, then we can find the ILT as:
$$ f(t) = \sum_{k=1}^{r} \mathrm{Res}\!\left( F(s)\, e^{st},\, a_k \right) $$
Note that as $R \to \infty$, C includes all the singular points of F(s).
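sympy's residue function (assumed here, as in the earlier sketches) reproduces this recipe for a strictly proper F(s):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

F = 1/((s + 1)*(s + 2))               # strictly proper, poles at -1, -2
poles = [-1, -2]

f = sum(sp.residue(F*sp.exp(s*t), s, p) for p in poles)
print(sp.simplify(f))                 # exp(-t) - exp(-2*t)

# cross-check with sympy's own inverse transform
print(sp.inverse_laplace_transform(F, s, t))
```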

6.1 Inverse Laplace Transform of polynomial rational functions

This is the most common type of transfer function, and we can mechanically find the ILT. Consider:
$$ F(s) = \frac{N(s)}{D(s)}, \qquad \deg(D) = n, \quad \deg(N) = m, \quad m < n \qquad (5) $$
This can be rewritten as:
$$ F(s) = \frac{N(s)}{D(s)} = K\, \frac{(s - z_1)^{m_1} (s - z_2)^{m_2} \cdots (s - z_r)^{m_r}}{(s - p_1)^{n_1} (s - p_2)^{n_2} \cdots (s - p_q)^{n_q}} $$
with $m_1 + m_2 + \dots + m_r = m$ and $n_1 + n_2 + \dots + n_q = n$.
- $z_1, z_2, \dots, z_r \in \mathbb{C}$ are the distinct zeros of F(s), each having multiplicity $m_j$;
- $p_1, p_2, \dots, p_q \in \mathbb{C}$ are the distinct poles of F(s), each having multiplicity $n_i$.
We can always decompose the rational function (5) into simple fractions:
$$ F(s) = \frac{C_{1,1}}{(s - p_1)} + \dots + \frac{C_{1,n_1}}{(s - p_1)^{n_1}} + \frac{C_{2,1}}{(s - p_2)} + \dots + \frac{C_{2,n_2}}{(s - p_2)^{n_2}} + \dots + \frac{C_{q,1}}{(s - p_q)} + \dots + \frac{C_{q,n_q}}{(s - p_q)^{n_q}} \qquad (6) $$
By the linearity property, which evidently holds also for the ILT:
$$ \mathcal{L}^{-1}[F(s)] = \sum_{i=1}^{q} \sum_{j=1}^{n_i} \mathcal{L}^{-1}\!\left[ \frac{C_{i,j}}{(s - p_i)^j} \right] $$
The ILT of each simple fraction has the form:
$$ \mathcal{L}^{-1}\!\left[ \frac{C_{i,j}}{(s - p_i)^j} \right] = \frac{C_{i,j}}{(j-1)!}\, t^{j-1}\, e^{p_i t}\, 1(t) $$
In general, $C_{i,j}, p_i \in \mathbb{C}$. In summary:
$$ \mathcal{L}^{-1}[F(s)] = \sum_{i=1}^{q} \sum_{j=1}^{n_i} \frac{C_{i,j}}{(j-1)!}\, t^{j-1}\, e^{p_i t}\, 1(t) $$
Now: how do we calculate the coefficients $C_{i,j}$?

Method 1: determine $C_{i,j}$ using the identity principle of polynomials. In practice, take (6), multiply everything out and equate the coefficients of each power of s in the numerator to those of N(s).

Method 2: compute the residues of F(s): we are dealing with a strictly proper polynomial rational function, with isolated singularities, namely the poles. It can be shown that $C_{i,j}$ is the coefficient $a_{-j}$ of the Laurent series of F(s) around the i-th pole (note: of F(s) and not of $F(s)e^{st}$). Residue formula:
$$ C_{i,j} = \frac{1}{(n_i - j)!}\, \lim_{s \to p_i} \frac{d^{(n_i - j)}}{ds^{(n_i - j)}} \left[ F(s)\, (s - p_i)^{n_i} \right] \qquad (7) $$
For simple poles (multiplicity one), this simplifies to:
$$ C_i = \frac{N(p_i)}{D^*(p_i)} \qquad (8) $$
where $D^*(s)$ is D(s) without the factor $(s - p_i)$, evaluated at $p_i$.

Note: now we have decomposed our F(s) into parts that we know how to inverse transform.
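Formula (7) is easy to mechanize. Below is a small helper written with sympy; the function name partial_fraction_coeff is mine, not from the notes:

```python
import sympy as sp

s = sp.symbols('s')

def partial_fraction_coeff(F, pole, mult, j):
    """C_{i,j} from formula (7): F strictly proper, pole of multiplicity mult."""
    expr = sp.diff(F*(s - pole)**mult, s, mult - j)
    return sp.limit(expr, s, pole) / sp.factorial(mult - j)

F = (s + 3) / (s**2*(s + 1)*(s + 2))
print(partial_fraction_coeff(F, 0, 2, 1))   # -7/4
print(partial_fraction_coeff(F, 0, 2, 2))   #  3/2
print(sp.apart(F))                          # cross-check
```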


Proof of formula (7). Consider i = 1:
$$ F(s) = \sum_{j=1}^{n_1} \frac{C_{1,j}}{(s - p_1)^j} + K(s) $$
where K(s) represents the remaining terms of the expansion. Multiplying by $(s - p_1)^{n_1}$ and taking the limit $s \to p_1$:
$$ \lim_{s \to p_1} \left[ \sum_{j=1}^{n_1} \frac{C_{1,j}}{(s - p_1)^j}\, (s - p_1)^{n_1} + (s - p_1)^{n_1} K(s) \right] = C_{1,n_1} $$
since $p_1$ is not a pole of K(s) and the second term goes to zero. Consider then:
$$ \frac{d}{ds}\left[ (s - p_1)^{n_1} F(s) \right] = (n_1 - 1)\, (s - p_1)^{n_1 - 2}\, C_{1,1} + \dots + C_{1,n_1 - 1} + \frac{d}{ds}\left[ (s - p_1)^{n_1} K(s) \right] $$
Taking again the limit (the terms coming from K(s) again vanish) one obtains $C_{1,n_1 - 1}$, and so on for the remaining coefficients.

Examples. Let's go back to example 5.3, expanding in partial fractions:
$$ Y(s) = Y_l + Y_f = \frac{s + 3}{s^2 + 3s + 2} + \frac{1}{s^2 + 3s + 2}\left( \frac{1}{s} + \frac{3}{s^2} \right) $$

Free response $Y_l$. Poles: $p_1 = -1$, $p_2 = -2$. Apply method 1:
$$ Y_l = \frac{C_1}{(s + 1)} + \frac{C_2}{(s + 2)} = \frac{C_1 (s + 2) + C_2 (s + 1)}{(s^2 + 3s + 2)} = \frac{s (C_1 + C_2) + 2 C_1 + C_2}{s^2 + 3s + 2} = \frac{s + 3}{s^2 + 3s + 2} $$
Equating the numerators:
$$ C_1 + C_2 = 1, \qquad 2 C_1 + C_2 = 3 $$
We get $C_1 = 2$, $C_2 = -1$. Now inverse transform all the parts of $Y_l$:
$$ Y_l = \frac{2}{(s + 1)} - \frac{1}{(s + 2)} \quad \Longrightarrow \quad \mathcal{L}^{-1}[Y_l] = y_l(t) = 2\, e^{-t} - e^{-2t} $$

Forced response $Y_f$. Poles: $p_1 = -1$, $p_2 = -2$ are simple poles, while $p_3 = 0$ has multiplicity 2. Partial fractions:
$$ Y_f = \frac{s + 3}{s^2 (s + 1)(s + 2)} = \frac{C_{1,1}}{(s + 1)} + \frac{C_{2,1}}{(s + 2)} + \frac{C_{3,1}}{s} + \frac{C_{3,2}}{s^2} $$
It is more convenient to use method 2 here.

Apply the residue formulas:
$$ C_{1,1} = \left. \frac{(s + 3)}{s^2 (s + 2)} \right|_{s = -1} = 2 $$
$$ C_{2,1} = \left. \frac{(s + 3)}{s^2 (s + 1)} \right|_{s = -2} = -\frac{1}{4} $$
$$ C_{3,1} = \left. \frac{d}{ds}\, \frac{(s + 3)}{(s + 1)(s + 2)} \right|_{s = 0} = -\frac{7}{4} $$
$$ C_{3,2} = \left. \frac{(s + 3)}{(s + 1)(s + 2)} \right|_{s = 0} = \frac{3}{2} $$
Therefore we can inverse transform as:
$$ y_f(t) = \frac{3}{2}\, t - \frac{7}{4} + 2\, e^{-t} - \frac{1}{4}\, e^{-2t} $$
The overall system response is the sum of $y_l(t)$ and $y_f(t)$.
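A one-line cross-check of the whole example (again a sympy sketch): the inverse transform of Y(s) should reproduce $y_l(t) + y_f(t)$.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

Y = (s + 3)/(s**2 + 3*s + 2) + (1/s + 3/s**2)/(s**2 + 3*s + 2)
y = sp.inverse_laplace_transform(Y, s, t)

expected = (2*sp.exp(-t) - sp.exp(-2*t)                       # y_l(t)
            + sp.Rational(3, 2)*t - sp.Rational(7, 4)
            + 2*sp.exp(-t) - sp.exp(-2*t)/4)                  # y_f(t)
print(sp.simplify(y - expected*sp.Heaviside(t)))              # 0
```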

The mass spring system revisited


Let's look again at the mass spring system in Figure 10.

Figure 10: Spring mass system.

We know how to obtain the differential equation that describes it:
$$ M\, \frac{d^2 (l - l_0)}{dt^2} + k\, (l - l_0) + b\, \frac{d (l - l_0)}{dt} = u(t) $$
Defining $y = (l - l_0)$ one gets:
$$ M\, \frac{d^2 y}{dt^2} + k\, y + b\, \frac{dy}{dt} = u(t) $$
Let's use the Laplace Transform:
$$ M\left[ s^2 Y(s) - s\, y(0^-) - \dot y(0^-) \right] + k\, Y(s) + b\left[ s\, Y(s) - y(0^-) \right] = U(s) $$
Solve for Y(s):
$$ Y(s)\left( M s^2 + b s + k \right) = s M\, y(0^-) + M\, \dot y(0^-) + b\, y(0^-) + U(s) $$
$$ Y(s) = \frac{(s M + b)\, y(0^-) + M\, \dot y(0^-)}{M s^2 + b s + k} + \frac{U(s)}{M s^2 + b s + k} $$
Now we have the free response and the forced response. Note that
$$ \frac{1}{M s^2 + b s + k} $$
is the LT of the impulse response of the system (we lose the information about the initial conditions).

Now let's compare this with the state space description of the system, using variables $x_1$ (position) and $x_2$ (velocity):
$$ \begin{bmatrix} \dot x_1 \\ \dot x_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{M} & -\frac{b}{M} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{M} \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} $$

Now apply the LT to the matrix dynamical system:
$$ \dot x = A\, x + B\, u, \qquad y = C\, x $$
$$ \mathcal{L}[\dot x] = \mathcal{L}[A\, x + B\, u] \;\Longrightarrow\; s\, X(s) - x(0^-) = A\, X(s) + B\, U(s) \;\Longrightarrow\; [sI - A]\, X(s) = x(0^-) + B\, U(s) $$
$$ X(s) = [sI - A]^{-1} x(0^-) + [sI - A]^{-1} B\, U(s) $$
$$ Y(s) = C\, X(s) = C\, [sI - A]^{-1} x(0^-) + C\, [sI - A]^{-1} B\, U(s) \;\Longrightarrow\; y(t) = \mathcal{L}^{-1}[Y(s)] $$
Recall the inversion rule for a $2 \times 2$ matrix:
$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} $$
This yields:
$$ [sI - A] = \begin{bmatrix} s & -1 \\ \frac{k}{M} & s + \frac{b}{M} \end{bmatrix}, \qquad [sI - A]^{-1} = \frac{M}{M s^2 + b s + k} \begin{bmatrix} s + \frac{b}{M} & 1 \\ -\frac{k}{M} & s \end{bmatrix} $$
Finally, multiplying by B and C we get Y(s):
$$ Y(s) = C [sI - A]^{-1} x(0^-) + C [sI - A]^{-1} B\, U(s) $$
Free response:
$$ Y_l(s) = \begin{bmatrix} 1 & 0 \end{bmatrix} \frac{M}{M s^2 + b s + k} \begin{bmatrix} s + \frac{b}{M} & 1 \\ -\frac{k}{M} & s \end{bmatrix} \begin{bmatrix} x_1(0^-) \\ x_2(0^-) \end{bmatrix} = \frac{(s M + b)\, x_1(0^-) + M\, x_2(0^-)}{M s^2 + b s + k} $$
Forced response:
$$ Y_f(s) = \begin{bmatrix} 1 & 0 \end{bmatrix} \frac{M}{M s^2 + b s + k} \begin{bmatrix} s + \frac{b}{M} & 1 \\ -\frac{k}{M} & s \end{bmatrix} \begin{bmatrix} 0 \\ \frac{1}{M} \end{bmatrix} U(s) = \frac{U(s)}{M s^2 + b s + k} $$

These are identical to what we found by applying the LT directly to the differential equation. Note: we have also found another way to compute $e^{At}$:
$$ e^{At} = \mathcal{L}^{-1}\left[ (sI - A)^{-1} \right] $$
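This identity is easy to verify symbolically; the sketch below (sympy assumed) compares the element-wise ILT of $(sI - A)^{-1}$ with the matrix exponential, for sample parameter values that are my own choice:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# spring-mass A matrix with sample values M = 1, b = 3, k = 2 (assumed here)
A = sp.Matrix([[0, 1], [-2, -3]])

Phi = (s*sp.eye(2) - A).inv()                       # (sI - A)^{-1}
eAt_ilt = Phi.applyfunc(lambda e: sp.inverse_laplace_transform(e, s, t))
eAt_direct = (A*t).exp()                            # matrix exponential
print(sp.simplify(eAt_ilt - eAt_direct))            # zero matrix
```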

Now let's look at the step response of the system, with zero initial conditions:
$$ Y(s) = \frac{1}{s\,(M s^2 + b s + k)} = \frac{C_1}{s} + \dots $$
The poles can be either real or complex:
$$ p_{1,2} = \frac{-b \pm \sqrt{b^2 - 4 k M}}{2M} $$
1. Real and distinct poles: $b^2 > 4 k M$. If b = 1, M = 1, k = 5/36 we get:
$$ y(t) = \left[ \frac{36}{5} - 9\, e^{-\frac{1}{6} t} + \frac{9}{5}\, e^{-\frac{5}{6} t} \right] 1(t) $$
2. Coincident poles: $b^2 = 4 k M$. If b = 1, M = 1, k = 1/4 we get:
$$ y(t) = \left[ 4 - 4\, e^{-\frac{t}{2}} - 2 t\, e^{-\frac{t}{2}} \right] 1(t) $$
3. Most interesting case, complex conjugate poles: $b^2 < 4 k M$. For instance:
$$ Y(s) = \frac{1}{s\,(s^2 + s + 1)} $$
has $p_1 = -\frac{1}{2} - j\frac{\sqrt{3}}{2}$, $p_2 = -\frac{1}{2} + j\frac{\sqrt{3}}{2}$.
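Cases 1 and 2 can be verified directly (sympy sketch, same assumptions as before):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# case 1: b = 1, M = 1, k = 5/36  -> real distinct poles at -1/6 and -5/6
y1 = sp.inverse_laplace_transform(1/(s*(s**2 + s + sp.Rational(5, 36))), s, t)
print(sp.simplify(y1 - (sp.Rational(36, 5) - 9*sp.exp(-t/6)
                        + sp.Rational(9, 5)*sp.exp(-5*t/6))))     # 0

# case 2: b = 1, M = 1, k = 1/4  -> double pole at -1/2
y2 = sp.inverse_laplace_transform(1/(s*(s**2 + s + sp.Rational(1, 4))), s, t)
print(sp.simplify(y2 - (4 - 4*sp.exp(-t/2) - 2*t*sp.exp(-t/2))))  # 0
```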

Comparing with the classical notation:
$$ Y(s) = \frac{1}{s \left( 1 + \frac{2\zeta}{\omega_n}\, s + \frac{s^2}{\omega_n^2} \right)} $$
the poles are placed as in Figure 11.

Figure 11: Complex conjugate poles ($\arccos\zeta = 60^\circ$ in this example).

If the damping $\zeta$ is low, the poles move towards the imaginary axis; for $\zeta = 0$ we will have permanent oscillations. Finding the ILT:
$$ Y(s) = \frac{C_1}{s} + \frac{C_2}{s + \left( \frac{1}{2} + j\frac{\sqrt{3}}{2} \right)} + \frac{C_3}{s + \left( \frac{1}{2} - j\frac{\sqrt{3}}{2} \right)} $$

Let's expand with method 1:
$$ Y(s) = \frac{C}{s} + \frac{A s + B}{s^2 + s + 1} = \frac{C s^2 + C s + C + A s^2 + B s}{s\,(s^2 + s + 1)} = \frac{1}{s\,(s^2 + s + 1)} $$
$$ \Longrightarrow \quad C = 1, \quad A = -1, \quad B = -1 $$
The response is:
$$ Y(s) = \frac{1}{s} - \frac{s + 1}{(s^2 + s + 1)} = \frac{1}{s} - \frac{s + 1}{\left( s + \frac{1}{2} \right)^2 + \left( \frac{\sqrt{3}}{2} \right)^2} = \frac{1}{s} - \frac{s + \frac{1}{2}}{\left( s + \frac{1}{2} \right)^2 + \left( \frac{\sqrt{3}}{2} \right)^2} - \frac{1}{\sqrt{3}}\, \frac{\frac{\sqrt{3}}{2}}{\left( s + \frac{1}{2} \right)^2 + \left( \frac{\sqrt{3}}{2} \right)^2} $$
Taking the ILT:
$$ y(t) = 1(t) - e^{-\frac{t}{2}} \cos\!\left( \frac{\sqrt{3}}{2} t \right) - \frac{\sqrt{3}}{3}\, e^{-\frac{t}{2}} \sin\!\left( \frac{\sqrt{3}}{2} t \right) $$
$$ y(t) = 1(t) - e^{-\frac{t}{2}} \left[ \cos\!\left( \frac{\sqrt{3}}{2} t \right) + \frac{\sqrt{3}}{3} \sin\!\left( \frac{\sqrt{3}}{2} t \right) \right] $$
Since the coefficient of the sine is $\frac{\sqrt{3}}{3} = \tan\frac{\pi}{6}$:
$$ y(t) = 1(t) - e^{-\frac{t}{2}} \left[ \cos\!\left( \frac{\sqrt{3}}{2} t \right) + \frac{\sin\frac{\pi}{6}}{\cos\frac{\pi}{6}} \sin\!\left( \frac{\sqrt{3}}{2} t \right) \right] $$
$$ y(t) = 1(t) - \frac{1}{\cos\frac{\pi}{6}}\, e^{-\frac{t}{2}} \cos\!\left( \frac{\sqrt{3}}{2} t - \frac{\pi}{6} \right) $$
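As a final check (sympy sketch), the step response with complex poles can be obtained directly and compared with the closed form above:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

Y = 1/(s*(s**2 + s + 1))
y = sp.inverse_laplace_transform(Y, s, t)

closed = 1 - sp.exp(-t/2)*(sp.cos(sp.sqrt(3)*t/2)
                           + sp.sin(sp.sqrt(3)*t/2)/sp.sqrt(3))
print(sp.simplify(y - closed))        # 0
```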

