
Lecture Notes on Control Systems / D. Ghose / 2012

9.5 The Transfer Function

Consider the n-th order linear, time-invariant dynamical system,

$$a_0 y + a_1 \frac{dy}{dt} + a_2 \frac{d^2 y}{dt^2} + \cdots + a_n \frac{d^n y}{dt^n} = b_0 u + b_1 \frac{du}{dt} + b_2 \frac{d^2 u}{dt^2} + \cdots + b_m \frac{d^m u}{dt^m}$$
with zero initial conditions on all derivatives. Taking the Laplace transform on both
sides, we get,

$$a_0 Y(s) + a_1 sY(s) + a_2 s^2 Y(s) + \cdots + a_n s^n Y(s) = b_0 U(s) + b_1 sU(s) + b_2 s^2 U(s) + \cdots + b_m s^m U(s)$$

From which,


$$\frac{Y(s)}{U(s)} = \frac{b_0 + b_1 s + b_2 s^2 + \cdots + b_m s^m}{a_0 + a_1 s + a_2 s^2 + \cdots + a_n s^n} = \frac{\sum_{k=0}^{m} b_k s^k}{\sum_{j=0}^{n} a_j s^j}$$

$$\Rightarrow \ G(s) = \frac{N(s)}{D(s)}$$
where G(s) is called the transfer function. It is defined from the differential equation under zero initial conditions.
G(s) is said to be proper if m ≤ n.
G(s) is said to be strictly proper if m < n.
Note: We will understand these terms in a better way a little later. However, in most
cases we will be dealing with strictly proper systems.
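Since properness depends only on the polynomial degrees m and n, it is easy to check mechanically. Below is a minimal sketch (not from the notes; the helper name `properness` and the lowest-power-first coefficient convention are my own) that classifies a transfer function from its coefficient lists:

```python
# Hypothetical helper: classify G(s) = N(s)/D(s) by comparing degrees.
# Coefficients are listed lowest power first: [b0, b1, ..., bm].
def properness(num_coeffs, den_coeffs):
    m = len(num_coeffs) - 1  # degree of the numerator N(s)
    n = len(den_coeffs) - 1  # degree of the denominator D(s)
    if m < n:
        return "strictly proper"
    if m == n:
        return "proper"      # proper (m <= n) but not strictly proper
    return "improper"

# G(s) = s / (s^2 + 3s + 2): m = 1 < n = 2
print(properness([0, 1], [2, 3, 1]))  # strictly proper
```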
Let us do some more algebraic manipulations. Let,

$$K = \frac{b_m}{a_n}, \qquad \bar{b}_k = \frac{b_k}{b_m} \ \text{ for } k = 0, \ldots, m-1, \qquad \bar{a}_j = \frac{a_j}{a_n} \ \text{ for } j = 0, \ldots, n-1$$
Then,
$$G(s) = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_1 s + b_0}{a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0}$$

 
$$= \frac{b_m \left( s^m + \frac{b_{m-1}}{b_m} s^{m-1} + \cdots + \frac{b_1}{b_m} s + \frac{b_0}{b_m} \right)}{a_n \left( s^n + \frac{a_{n-1}}{a_n} s^{n-1} + \cdots + \frac{a_1}{a_n} s + \frac{a_0}{a_n} \right)}$$

$$G(s) = \frac{b_m \left( s^m + \bar{b}_{m-1} s^{m-1} + \cdots + \bar{b}_1 s + \bar{b}_0 \right)}{a_n \left( s^n + \bar{a}_{n-1} s^{n-1} + \cdots + \bar{a}_1 s + \bar{a}_0 \right)} = K \, \frac{s^m + \bar{b}_{m-1} s^{m-1} + \cdots + \bar{b}_1 s + \bar{b}_0}{s^n + \bar{a}_{n-1} s^{n-1} + \cdots + \bar{a}_1 s + \bar{a}_0}$$

$$G(s) = K \, \frac{(s - z_1)(s - z_2) \cdots (s - z_m)}{(s - p_1)(s - p_2) \cdots (s - p_n)}$$

where m ≤ n.

1. The numerator roots z1 , · · · , zm are called system zeros.

2. The denominator roots p1 , · · · , pn are called the system poles.

Figure 9.9: A sketch of the poles and zeros along the real line

3. The denominator polynomial is called the characteristic polynomial.

4. The transfer function is the Laplace transform of the impulse response function (see footnote below).
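Item 4 can be illustrated numerically. A sketch (my own, not from the notes): for G(s) = 1/(s + 1), i.e. the system dy/dt + y = u, the impulse response should be g(t) = e^(-t). Approximating the impulse δ(t) by a tall, narrow pulse and integrating with the Euler method:

```python
import math

# For dy/dt + y = u, the transfer function is G(s) = 1/(s+1),
# so the impulse response should be g(t) = exp(-t).
dt = 1e-4
steps = int(2.0 / dt)          # simulate t in [0, 2]
y = 0.0
trajectory = []
for k in range(steps):
    u = 1.0 / dt if k == 0 else 0.0   # unit impulse approximation
    y += dt * (-y + u)                # Euler step of dy/dt = -y + u
    trajectory.append(y)

# Compare with exp(-t) at t = 1.0
t = 1.0
approx = trajectory[int(t / dt) - 1]
print(abs(approx - math.exp(-t)) < 1e-2)
```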

An Example. Why m > n (i.e., a system which is not proper) is a bad idea.

Newton’s law says f = mv̇.


Let us identify the input and the output.

Figure 9.10: Force applied on a mass

Figure 9.11: Case 1

Case 1: Let us say that the input is v and the output is f. Also, let v be a step input.
Looks like a bad idea!!!
Case 2: Let the input be f and the output be v, and let f be the unit force.

Figure 9.12: Case 2

Looks OK.
Now, look at the transfer function.
Footnote: To see this, put u(t) = δ(t); then U(s) = 1. So Y(s) = G(s) and y(t) = L⁻¹[G(s)].

$$f(t) = m\dot{v} \ \Rightarrow \ F(s) = msV(s) \quad \text{(assuming zero initial condition)}$$

For Case 1: F(s) = msV(s) ⇒ F(s)/V(s) = ms, which is an improper transfer function.
Note that a delayed step input, V(s) = e^(-st₀)/s, gives F(s) = m e^(-st₀). Thus a delayed step in velocity demands a delayed impulse in force, which is physically unreasonable.

For Case 2: V(s) = (1/ms)F(s) ⇒ V(s)/F(s) = 1/ms, which is strictly proper.
Here a delayed step input, F(s) = e^(-st₀)/s, gives V(s) = e^(-st₀)/(ms²) = (1/s)·(e^(-st₀)/(ms)). This may be interpreted as the integral of a delayed step function, giving rise to a delayed ramp function.

9.6 Initial and Final Value Theorems

Initial Value Theorem


Since
$$\mathcal{L}\left[\frac{df}{dt}\right] = sF(s) - f(0)$$
and since $s \to \infty \Rightarrow e^{-st} \to 0$, we have,
$$\lim_{s \to \infty} \mathcal{L}\left[\frac{df}{dt}\right] = \lim_{s \to \infty} \int_0^\infty \frac{df}{dt}\, e^{-st}\, dt = 0$$
So,
$$\lim_{s \to \infty} \left[ sF(s) - f(0) \right] = 0$$
Therefore,
$$f(0) = \lim_{s \to \infty} sF(s)$$
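A quick numerical illustration (my own, with an assumed example transform, not from the notes): for f(t) = e^(-t) cos t, the transform is F(s) = (s + 1)/((s + 1)² + 1) and f(0) = 1, and sF(s) indeed approaches 1 as s grows:

```python
# Initial value theorem check for f(t) = exp(-t)*cos(t),
# whose Laplace transform is F(s) = (s+1)/((s+1)^2 + 1).
def F(s):
    return (s + 1) / ((s + 1) ** 2 + 1)

for s in (1e2, 1e4, 1e6):
    print(s * F(s))   # approaches f(0) = 1 as s -> infinity
```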

Final Value Theorem


$$\lim_{s \to 0} \mathcal{L}\left[\frac{df}{dt}\right] = \lim_{s \to 0} \int_0^\infty \frac{df}{dt}\, e^{-st}\, dt = \int_0^\infty \frac{df}{dt}\, dt = f(\infty) - f(0)$$
Therefore,
$$\lim_{s \to 0} \left[ sF(s) - f(0) \right] = f(\infty) - f(0)$$
and hence,
$$\lim_{s \to 0} sF(s) = f(\infty) = \lim_{t \to \infty} f(t)$$

However, the final value theorem is meaningful only if the following conditions are met.

1. The Laplace transforms of f(t) and df/dt exist.
2. lim_{t→∞} f(t) exists.
3. All poles of F(s) are in the left half plane, except for one which may be at the origin.
4. There are no poles on the imaginary axis.
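As a sketch (my own example, chosen to satisfy the conditions above, not from the notes): F(s) = 1/(s(s + 1)) has one pole at the origin and one at s = -1, with f(t) = 1 - e^(-t) and f(∞) = 1, and both limits agree:

```python
import math

# Final value theorem check for F(s) = 1/(s(s+1)), i.e. f(t) = 1 - exp(-t).
def sF(s):
    return s * (1 / (s * (s + 1)))   # = 1/(s+1)

print(sF(1e-6))                      # s -> 0 limit, approximately 1
print(1 - math.exp(-20.0))           # f(t) at large t, approximately 1
```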

9.7 Partial Fraction Expansion

The reason we introduced the Laplace transform is to devise an easy way to find the
system response.

Figure 9.13: Input-output representation

One way to do this would be by using the convolution integral,


$$y(t) = \int_0^t g(t - \tau)\, u(\tau)\, d\tau$$

But if the Laplace transforms are known for g and u, then


Y (s) = G(s)U (s)
and then y(t) is obtained by finding the inverse Laplace transform.
To achieve this, we need to break up Y (s) into pieces for which the inverse Laplace
transforms are available, and then use the Laplace transform tables to find the inverse
Laplace transform of the complete Y (s).
Let,

$$G(s) = \frac{N(s)}{D(s)} = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_1 s + b_0}{a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0} = K \, \frac{\prod_{k=1}^{m} (s - z_k)}{\prod_{j=1}^{n} (s - p_j)}$$

Note that G(s) is not necessarily the transfer function of the system. It could be the
output Y (s) or any other function in the s-domain having a numerator polynomial and
a denominator polynomial of appropriate order.

Case 1: Distinct Poles: p_i ≠ p_j for i ≠ j


Then, the partial fraction expansion of G(s) is


$$G(s) = \sum_{i=1}^{n} \frac{k_i}{s - p_i}$$

where, ki are called the residues. How do we find ki ?


To find ki : Multiply G(s) by (s − pi ) and let s → pi .

$$k_i = \lim_{s \to p_i} (s - p_i) G(s) = \lim_{s \to p_i} \left[ (s - p_i) \sum_{j=1}^{n} \frac{k_j}{s - p_j} \right]$$

Example. Let
$$G(s) = \frac{s}{s^2 + 3s + 2}$$
It is easy to find the poles of G(s),
$$G(s) = \frac{s}{(s + 1)(s + 2)}$$

Since the poles are distinct (p1 = −1, p2 = −2), we may expand G(s) into partial
fractions as,
$$G(s) = \frac{k_1}{s + 1} + \frac{k_2}{s + 2}$$
To find the residues,

$$k_1 = \lim_{s \to -1} (s + 1)G(s) = \lim_{s \to -1} \frac{s}{s + 2} = \frac{-1}{-1 + 2} = -1$$

$$k_2 = \lim_{s \to -2} (s + 2)G(s) = \lim_{s \to -2} \frac{s}{s + 1} = \frac{-2}{-2 + 1} = 2$$

So,
$$G(s) = \frac{-1}{s + 1} + \frac{2}{s + 2}$$

Verify that the above is indeed the same as the original G(s).
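The suggested verification is easy to script. A minimal sketch (my own, not part of the notes), comparing the expansion with the original G(s) at a few points away from the poles:

```python
# Check the expansion G(s) = -1/(s+1) + 2/(s+2) against the original
# G(s) = s/((s+1)(s+2)) at test points away from the poles s = -1, -2.
def G(s):
    return s / ((s + 1) * (s + 2))

def G_pfe(s):
    return -1 / (s + 1) + 2 / (s + 2)

for s in (0.5, 3.0, -0.5, 1 + 2j):
    assert abs(G(s) - G_pfe(s)) < 1e-12
print("expansion matches")
```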

Case 2: A pole of multiple order (a repeated pole).

$$G(s) = \frac{N(s)}{(s - p_1)(s - p_2) \cdots (s - p_{i-1})(s - p_i)^l (s - p_{i+1}) \cdots (s - p_n)}$$

In the above, the pole pi has order l > 1. All other poles have order 1. Then the
partial fraction expansion is given by,

$$G(s) = \underbrace{\frac{k_1}{s - p_1} + \cdots + \frac{k_{i-1}}{s - p_{i-1}} + \frac{k_{i+1}}{s - p_{i+1}} + \cdots + \frac{k_n}{s - p_n}}_{\text{simple poles}} + \underbrace{\frac{A_1}{s - p_i} + \frac{A_2}{(s - p_i)^2} + \cdots + \frac{A_l}{(s - p_i)^l}}_{\text{repeated pole}}$$
where,
$$k_j = (s - p_j) G(s) \big|_{s = p_j} \quad \text{(for simple poles)}$$

and for the repeated pole,




$$A_l = \left[ (s - p_i)^l G(s) \right]_{s = p_i}$$
$$A_{l-1} = \left[ \frac{d}{ds} \left( (s - p_i)^l G(s) \right) \right]_{s = p_i}$$
$$A_{l-2} = \left[ \frac{1}{2!} \frac{d^2}{ds^2} \left( (s - p_i)^l G(s) \right) \right]_{s = p_i}$$
$$\vdots$$
$$A_2 = \left[ \frac{1}{(l-2)!} \frac{d^{l-2}}{ds^{l-2}} \left( (s - p_i)^l G(s) \right) \right]_{s = p_i}$$
$$A_1 = \left[ \frac{1}{(l-1)!} \frac{d^{l-1}}{ds^{l-1}} \left( (s - p_i)^l G(s) \right) \right]_{s = p_i}$$

Example. Consider a system which has a response given by the differential equa-
tion,
ẍ + 2ẋ + x = 0, x(0) = a, ẋ(0) = b
Taking Laplace transform on both sides,
$$s^2 X(s) - sx(0) - \dot{x}(0) + 2(sX(s) - x(0)) + X(s) = 0$$
$$s^2 X(s) - sa - b + 2(sX(s) - a) + X(s) = 0$$
From which,
$$X(s) = \frac{as + (2a + b)}{s^2 + 2s + 1} = \frac{as + (2a + b)}{(s + 1)^2}$$
The partial fraction expansion is then,
$$X(s) = \frac{A_1}{s + 1} + \frac{A_2}{(s + 1)^2}$$

where,


$$A_2 = \left[ (s + 1)^2 X(s) \right]_{s = -1} = \left[ as + (2a + b) \right]_{s = -1} = -a + 2a + b = a + b$$
$$A_1 = \left[ \frac{d}{ds} \left( (s + 1)^2 X(s) \right) \right]_{s = -1} = a$$

So,
$$X(s) = \frac{a}{s + 1} + \frac{a + b}{(s + 1)^2}$$

Verify that the above is indeed the same as the original X(s).
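Again the verification can be scripted (a sketch of my own, with arbitrary sample values for the initial conditions a and b):

```python
# Check X(s) = a/(s+1) + (a+b)/(s+1)^2 against the original
# X(s) = (a*s + 2a + b)/(s+1)^2 for sample initial conditions.
a, b = 2.0, -1.0

def X(s):
    return (a * s + 2 * a + b) / (s + 1) ** 2

def X_pfe(s):
    return a / (s + 1) + (a + b) / (s + 1) ** 2

for s in (0.0, 1.0, -3.0, 2 + 1j):
    assert abs(X(s) - X_pfe(s)) < 1e-12
print("expansion matches")
```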

Case 3: Complex poles

G(s) = TRANSFER FUNCTION HAVING DISTINCT POLES


+ TRANSFER FUNCTION HAVING REPEATED POLES
+ TRANSFER FUNCTION HAVING COMPLEX POLES

The terms corresponding to a pair of complex roots are expressed as,

$$\frac{\bar{k}_1 s + \bar{k}_2}{(s + a)^2 + b^2}$$

When b > 0,

$$(s + a)^2 + b^2 = (s + a)^2 - (jb)^2 = (s + a + jb)(s + a - jb)$$

Since the complex roots are distinct, one can use the method for distinct roots given earlier to obtain the residues. In that case, however, the residues will also be complex. With further manipulation we can recover real coefficients. Finally, we can use the following inverse Laplace transforms,
 
$$\mathcal{L}^{-1} \left[ \frac{b}{(s + a)^2 + b^2} \right] = e^{-at} \sin bt$$
$$\mathcal{L}^{-1} \left[ \frac{s + a}{(s + a)^2 + b^2} \right] = e^{-at} \cos bt$$

Example.

$$G(s) = \frac{s + 3}{s^3 + 3s^2 + 6s + 4} = \frac{s + 3}{s^3 + s^2 + 2s^2 + 2s + 4s + 4} = \frac{s + 3}{s^2(s + 1) + 2s(s + 1) + 4(s + 1)}$$
$$= \frac{s + 3}{(s^2 + 2s + 4)(s + 1)} = \frac{s + 3}{(s + 1)\left[(s + 1)^2 + (\sqrt{3})^2\right]}$$
So, the poles are,
$$p_1 = -1, \quad p_2 = -1 - j\sqrt{3}, \quad p_3 = -1 + j\sqrt{3}$$

Since all the poles are distinct, by partial fraction expansion,


$$G(s) = \frac{k_1}{s + 1} + \frac{k_2}{s + 1 + j\sqrt{3}} + \frac{k_3}{s + 1 - j\sqrt{3}}$$

and, the residues are computed as,

$$k_1 = (s + 1)G(s)\big|_{s = -1} = \left. \frac{s + 3}{(s + 1)^2 + 3} \right|_{s = -1} = \frac{-1 + 3}{3} = \frac{2}{3}$$

$$k_2 = \left. (s + 1 + j\sqrt{3})G(s) \right|_{s = -1 - j\sqrt{3}} = \left. \frac{s + 3}{(s + 1)(s + 1 - j\sqrt{3})} \right|_{s = -1 - j\sqrt{3}} = \frac{-1 - j\sqrt{3} + 3}{(-j\sqrt{3})(-j2\sqrt{3})} = \frac{2 - j\sqrt{3}}{-6} = -\frac{1}{3} + j\frac{\sqrt{3}}{6}$$
Similarly,
$$k_3 = \left. (s + 1 - j\sqrt{3})G(s) \right|_{s = -1 + j\sqrt{3}} = \left. \frac{s + 3}{(s + 1)(s + 1 + j\sqrt{3})} \right|_{s = -1 + j\sqrt{3}} = \frac{-1 + j\sqrt{3} + 3}{(j\sqrt{3})(j2\sqrt{3})} = \frac{2 + j\sqrt{3}}{-6} = -\frac{1}{3} - j\frac{\sqrt{3}}{6}$$
Substituting these values,
$$G(s) = \frac{2/3}{s + 1} + \frac{-(1/3) + j(\sqrt{3}/6)}{s + 1 + j\sqrt{3}} + \frac{-(1/3) - j(\sqrt{3}/6)}{s + 1 - j\sqrt{3}}$$
$$= \frac{2/3}{s + 1} - \frac{2}{3} \cdot \frac{s + 1}{(s + 1)^2 + 3} + \frac{1}{\sqrt{3}} \cdot \frac{\sqrt{3}}{(s + 1)^2 + 3}$$
Applying the inverse transform, we get
$$g(t) = \frac{2}{3} e^{-t} - \frac{2}{3} e^{-t} \cos \sqrt{3}t + \frac{1}{\sqrt{3}} e^{-t} \sin \sqrt{3}t$$
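As a check (my own sketch, not part of the notes), the real-coefficient decomposition above can be compared with the original G(s) at sample points away from the poles:

```python
import math

# Compare G(s) = (s+3)/((s+1)[(s+1)^2 + 3]) with its decomposition
# (2/3)/(s+1) - (2/3)(s+1)/((s+1)^2+3) + (1/sqrt(3))*sqrt(3)/((s+1)^2+3).
def G(s):
    return (s + 3) / ((s + 1) * ((s + 1) ** 2 + 3))

def G_pfe(s):
    return (2 / 3) / (s + 1) \
        - (2 / 3) * (s + 1) / ((s + 1) ** 2 + 3) \
        + (1 / math.sqrt(3)) * math.sqrt(3) / ((s + 1) ** 2 + 3)

for s in (0.0, 1.0, 2.5, 1 + 1j):
    assert abs(G(s) - G_pfe(s)) < 1e-12
print("decomposition matches")
```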
An alternative way to solve the same problem is by equating the coefficients. This
avoids the complication of using imaginary numbers. Let,
$$G(s) = \frac{s + 3}{(s + 1)\left[(s + 1)^2 + 3\right]} = \frac{k_1}{s + 1} + \frac{k_2 s + k_3}{(s + 1)^2 + 3}$$
So,
$$\frac{s + 3}{(s + 1)\left[(s + 1)^2 + 3\right]} = \frac{k_1\left[(s + 1)^2 + 3\right] + (s + 1)(k_2 s + k_3)}{(s + 1)\left[(s + 1)^2 + 3\right]}$$
Since the denominators are the same, the numerators must also be the same. Thus,
$$s + 3 = (k_1 + k_2)s^2 + (2k_1 + k_2 + k_3)s + 4k_1 + k_3$$
Comparing the coefficients of the powers of s,
k1 + k2 = 0
2k1 + k2 + k3 = 1
4k1 + k3 = 3
Solving,
$$k_1 = \frac{2}{3}, \qquad k_2 = -\frac{2}{3}, \qquad k_3 = \frac{1}{3}$$

The rest follows in the same way as before.
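The three coefficient equations are small enough to solve by simple substitution; a sketch (the elimination order is my own choice):

```python
# Solve k1 + k2 = 0, 2k1 + k2 + k3 = 1, 4k1 + k3 = 3 by substitution.
# Eliminate k2 = -k1 and k3 = 3 - 4k1, so the middle equation becomes
# 2k1 - k1 + 3 - 4k1 = 1, i.e. -3k1 = -2.
k1 = 2 / 3
k2 = -k1
k3 = 3 - 4 * k1
print(k1, k2, k3)   # 2/3, -2/3, 1/3
```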


Question. How would you handle repeated complex roots?
