
Introduction to

Feedback Control Systems


Randal K. Douglas
D. Lewis Mingori
Jason L. Speyer
Department of Mechanical and Aerospace Engineering
University of California, Los Angeles
Copyright © 2000
Contents
Chapter 1 Feedback and Control Systems 1
1.1 The Structure of a Control Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Open-Loop Control System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Closed-Loop Control System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 General Input-Output Relationships and Differential Equations . . . . . . . . . . . . . 5
1.5 State-Space Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Chapter 2 Dynamic System Modeling 11
2.1 Simplifying LTI Differential Equations: The Laplace Transform . . . . . . . . . . . . 11
2.1.1 Time differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.2 Time integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 A Time-Domain Input-Output Description: The Convolution Integral . . . . . . . . . . 17
2.3 An s-Domain Input-Output Description: Transfer Functions . . . . . . . . . . . . . . 20
2.3.1 Transfer Function for Systems Described by LTI Differential Equations . . . . 22
2.3.2 Initial and Final Value Theorems . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4 Inverse Laplace Transform Using Partial Fraction Expansion . . . . . . . . . . . . . . 26
2.4.1 The poles are distinct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.4.2 A pole is of multiple order . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4.3 Complex Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.5 Summary: Relating Time-Domain and s-Domain Representations . . . . . . . . . . . 31
Chapter 3 Dynamic System Response 35
3.1 Time Response of a Few Simple Mechanical Systems . . . . . . . . . . . . . . . . . . 35
3.1.1 Mass-Spring System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1.2 Mass-Damper System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.1.3 Mass-Spring-Damper System . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2 System Time Response Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.3 Characteristics of First-Order Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3.1 First-Order System Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3.2 S-Plane Description of First-Order Systems . . . . . . . . . . . . . . . . . . . 39
3.4 Characteristics of Second-Order Systems . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4.1 Second-Order System Parameters . . . . . . . . . . . . . . . . . . . . . . . . 40
Both poles are real and distinct . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Both poles are complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Both poles are real and equal . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.4.2 S-Plane Description of Second-Order Systems . . . . . . . . . . . . . . . . . 43
3.4.3 S-plane Interpretation Of Second-Order Dynamics . . . . . . . . . . . . . . . 44
3.5 Performance of First-Order Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.5.1 System Performance Definitions . . . . . . . . . . . . . . . . . . . . . 46
3.5.2 Relation to System Characteristics . . . . . . . . . . . . . . . . . . . . . . . . 46
Rise-time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Settling-time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6 Performance of Second-Order Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.6.1 System Performance Definitions . . . . . . . . . . . . . . . . . . . . . 48
3.6.2 Relation to System Characteristics . . . . . . . . . . . . . . . . . . . . . . . . 49
Settling-time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Peak-time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Maximum overshoot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Rise-time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.6.3 Applications to Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.7 Higher-Order Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.7.1 An Extra Pole At The Origin . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.7.2 An Extra Non-zero, Stable Pole . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.7.3 An Extra Zero At The Origin . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.7.4 A Zero on the Real Axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Chapter 4 Block Diagram Algebra 63
Chapter 5 Stability 67
5.1 S-Plane Interpretation of Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.2 Stability And The Hurwitz Determinants . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.2.1 Positive Coefficients Are Necessary for Stability . . . . . . . . . . . . . . 70
5.2.2 Positive Hurwitz Determinants Are Both Necessary And Sufficient for Stability 71
Chapter 6 Feedback Control 73
6.1 Motivation: A DC Motor Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.2 Objectives of Feedback Control Systems . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.3 Feedback Control and Transient Response . . . . . . . . . . . . . . . . . . . . . . . . 79
6.3.1 P-Control in First-Order Systems . . . . . . . . . . . . . . . . . . . . . . . . 79
6.3.2 P-Control in Second-Order Systems . . . . . . . . . . . . . . . . . . . . . . . 80
6.4 Feedback Control and Steady-State Response . . . . . . . . . . . . . . . . . . . . . . 82
6.4.1 High Control Gain K . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.4.2 Command Shaping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.4.3 Integral Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.4.4 Steady-State Response to a Ramp Input . . . . . . . . . . . . . . . . . . . . . 87
A Ramp Input Applied to a P-Control Feedback System . . . . . . . . . . . . 87
A Ramp Input Applied to an Integral Control Feedback System . . . . . . . . 88
A Ramp Input Applied to a Double Integral Control Feedback System . . . . . 89
6.5 System Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.6 Disturbances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.6.1 Transfer functions from the disturbances to the output . . . . . . . . . . . . . 94
6.6.2 A tradeoff between disturbance attenuation and tracking performance . . . . . 95
6.7 Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.7.1 Feedback Improves Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Chapter 7 Root Locus 99
7.1 Root Locus as a Design Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
7.2 Root Locus Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.2.1 Evans Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.2.2 The Angle Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Complex Numbers Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Angle Criterion Development . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.3 Root Locus Sketching Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.4 Proofs and Examples of Root Locus Properties . . . . . . . . . . . . . . . . . . . . . 118
Chapter 8 Root Locus Compensator Design 129
8.1 Proportional-Derivative Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
8.2 Lead Compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
8.3 Proportional-Integral Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.4 Lag Compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
8.5 Compensator Design By Root Locus . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.5.1 Lead Compensator Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.5.2 Lag Compensator Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.5.3 Lead-Lag Compensator Design . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.6 Aircraft Pitch-Rate Autopilot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
8.6.1 System Component Models . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
8.6.2 Design Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.6.3 Design Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.6.4 System Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
8.6.5 Closed-Loop Transfer Functions . . . . . . . . . . . . . . . . . . . . . . . . . 146
8.6.6 Design 1: Proportional Control . . . . . . . . . . . . . . . . . . . . . . . . . . 146
8.6.7 Design 2: Integral Compensation . . . . . . . . . . . . . . . . . . . . . . . . 148
8.6.8 Design 3: Lag Compensation . . . . . . . . . . . . . . . . . . . . . . . . . . 148
8.6.9 Design 4: Fast Lag Compensation . . . . . . . . . . . . . . . . . . . . . . . . 149
Chapter 9 Frequency Response 153
9.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
9.2 System Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
9.3 Frequency Response Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
9.4 Bode Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
9.4.1 Pure Differentiators and Integrators . . . . . . . . . . . . . . . . . . . . . . . 159
9.4.2 First-Order Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
9.4.3 Second-Order Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
9.4.4 Relate Bode Plot to Time and s-Domain System Characteristics . . . . . . . . 163
Bandwidth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
DC Gain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Velocity Constant from Frequency Response . . . . . . . . . . . . . . . . . . 165
9.5 Nyquist Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
9.5.1 Complex Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
9.5.2 Applications to Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
9.5.3 A Refinement in the Nyquist Path . . . . . . . . . . . . . . . . . . . . . . 171
9.5.4 Counting Encirclements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
9.5.5 Relative Stability: Gain and Phase Margin . . . . . . . . . . . . . . . . . . . . 176
Application to Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Gain and Phase Margins From Bode Plots . . . . . . . . . . . . . . . . . . . . 177
Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
9.5.6 Phase Margin Gives Closed-Loop Damping and Overshoot . . . . . . . . . . . 178
9.6 Frequency Domain Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
9.6.1 Satisfy the Steady-State Error Requirement . . . . . . . . . . . . . . . . . . . 180
9.6.2 Check Other Design Requirements . . . . . . . . . . . . . . . . . . . . . . . . 180
9.6.3 Design a Lead Compensator . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Find the Maximum Phase Lead. . . . . . . . . . . . . . . . . . . . . . . . . . 183
Find the Phase Span as a Function of Maximum Phase Lead . . . . . . . . . . 183
9.6.4 Design a Lead Compensator . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
9.6.5 Check The Compensator Design . . . . . . . . . . . . . . . . . . . . . . . . . 185
References 187
Index 189
CHAPTER 1
Feedback and Control Systems
A system is a collection of components that interact with one another and the operating environment.
Control is the process of causing a system variable to conform to some desired value.
Example (Plants). water, sun, fertilizer, tlc → healthy plants.
Example (College). hard work, talent, support, luck → GPA.
Example (Economic System). monetary and tax policy → wealth distribution, economic growth.
1.1 The Structure of a Control Problem
There are five essential elements of a conventional control problem as shown in the following table.
Problem Element        Example                                  Example
1) The plant           Aircraft                                 National economy
2) Control inputs      Control surface deflections or           Monetary policy, Defense budget,
                       inputs to actuators                      Tax law
3) Disturbance inputs  Wind gusts                               Natural disasters, corruption
4) Outputs             Translational and rotational             GNP, Interest rate, Unemployment
                       position and velocities                  rate, Housing starts
5) Control objective   Speed of achieving commanded input,      Zero unemployment, Zero inflation,
                       Handling qualities, Stability            Positive growth

Table 1.1: Elements of a control problem.
A control system design objective is to meet system specifications in the presence of large input
disturbances and plant variations. Generally controller design goals are characterized by
Speed
Accuracy
Stability
Robustness
Depending on the nature of the control problem elements, the problem is classified by several
parameters. This course will concentrate on systems defined by the bold-face parameters.
Open-loop Closed-loop
Dynamic Static
Linear Nonlinear
Time-variant Time-invariant
Continuous Discrete time
Stochastic Deterministic
Table 1.2: Classification of control system characteristics.
1.2 Open-Loop Control System
The characterizing feature of an open-loop control system is that the output is not used to make the
plant conform to the desired output. The general form of an open-loop control system is illustrated in
Figure 1.1 as a block diagram. The block diagram identifies the major system components as blocks
in the figure, omits details and without equations shows the major directions of information and energy
flow from one component to another. Usually the blocks represent physical devices that can be described
mathematically.
[Figure 1.1: An open-loop control system block diagram. A reference input enters the controller, which produces the control input to the plant (a dynamic system); a disturbance input also acts on the plant, and the plant output is judged against a performance criterion.]
Example (Automobile Cruise Control).
The performance criterion might be |V - V_0|, |V - V_0|^2, etc.
[Figure 1.2: An open-loop automobile speed control system. The commanded speed V_0 enters the controller, which issues a throttle command to the automobile (engine/transmission, suspension, tires, chassis dynamics); disturbance inputs (wind, hills, road) also act on the vehicle, and the output is the actual speed V.]
This system has no way to correct the error between the commanded velocity and the actual velocity,
V - V_0. Obviously it can't be expected to work well.
1.3 Closed-Loop Control System
The characterizing feature of a closed-loop system is that the controlled system variable is measured,
compared to a desired value and the difference used to influence the value of the controlled variable.
The process of measuring the value of some variable and using that information to change its value is
called feedback. The general form of a closed-loop control system is illustrated in Figure 1.3.
[Figure 1.3: A closed-loop control system block diagram. The reference input is compared with the measured output to form an error; the controller converts the error into a control input to the plant/actuators (a dynamic system), disturbance inputs act on both the plant and the measurement instruments, and the output is fed back through the measurement instruments and judged against a performance criterion.]
Example (Automobile Cruise Control). The input is the throttle pedal angle. The vehicle output is
the speed as found by integrating the acceleration
V = (1/m) ∫ F dt    since    F = m (dV/dt)
The net force, F, applied to the vehicle center of mass is found, for example, from the drive wheel
torque, road friction and wind drag. The mathematical description of an automobile including the
throttle, engine, transmission, suspension, tires, chassis and other components is obviously very
complicated. Most likely, mathematical models for many of the vehicle components would be derived
empirically.
[Figure 1.4: A closed-loop cruise control system. The commanded speed V_0 is compared with the measured speed from the speedometer/accelerometer to form the error V_0 - V; the controller converts the error into a throttle input to the automobile (throttle actuator, engine/transmission, suspension, ...), disturbance inputs (wind, hills, road) act on the vehicle, and the output is the actual speed V, evaluated against a performance criterion.]
1.4 General Input-Output Relationships and Differential Equations
An analytical study of a control system requires mathematical models for each of the components
and their interaction. The mathematical models express a relationship between the component input
variables and output variables. In real applications a model is always an approximation of a physical
system. The input r(t) and output y(t) can be vectors; here they are scalars.
[Figure 1.5: A system component input-output representation. The input r(t) enters an operating component (plant, actuator, sensor, etc.) which produces the output y(t).]
Sometimes, a simple gain is sufficient. For example, in a cruise control system a speedometer could
be regarded as a component that provides a voltage proportional to the vehicle speed. The mathematical
model is the simple linear relationship:
voltage = K × speed
More often, a component is represented as a dynamic system. In a dynamic system, the output at a time
t depends not only on the input at time t but on the input for all time before t. Such models are usually
described by differential or by difference equations. The component models emphasized here are linear
differential equations. A linear system is often a good approximation when the operating region of the
component is sufficiently restricted.
Example (Pendulum).
I θ̈ = torque = -mgL sin θ,    where    I = mL^2
For small θ, sin θ ≈ θ is a good approximation.
The approximate linear dynamics are
θ̈ = -(g/L) θ
[Figure 1.6: A pendulum of length L with angle θ from the vertical.]
Definition (Linear function). A function f(x) is a linear function of the independent variable x if and
only if it satisfies two properties:
Additivity: f(x_1 + x_2) = f(x_1) + f(x_2) for all x_1 and x_2 in the domain of f(x).
Homogeneity: f(αx) = αf(x) for all x in the domain of f(x) and all scalars α.
The systems considered here are those that may be modeled by a linear time-invariant ordinary
differential equation,
Σ_{j=0}^{n} a_j (d^j y(t)/dt^j) = Σ_{k=0}^{m} b_k (d^k r(t)/dt^k),    where n > m
and where the a_j, j = 0, ..., n and the b_k, k = 0, ..., m are constants.
Example (A Rocket Sled). Here are free body diagrams of the vehicle and payload.
[Figure 1.7: Schematic of a rocket sled with a payload. The rocket sled (mass m_1) receives the thrust as the input r(t); the payload (mass m_2) rides on the sled, friction forces act on both masses, and the output y(t) is the velocity or position of the payload mass.]
[Figure 1.8: Free body diagrams of a rocket sled and payload. The vehicle mass m_1 at position y_1 carries the thrust T, the inertia force m_1 ÿ_1, the friction forces b_1 ẏ_1 and b_2(ẏ_1 - ẏ_2), and the spring force K(y_1 - y_2). The payload mass m_2 at position y_2 carries the inertia force m_2 ÿ_2, the friction force b_2(ẏ_2 - ẏ_1), and the spring force K(y_1 - y_2).]
The equations of motion for the two masses are:
K(y_1 - y_2) = m_2 ÿ_2 + b_2 (ẏ_2 - ẏ_1)
T = m_1 ÿ_1 + b_1 ẏ_1 - b_2 (ẏ_2 - ẏ_1) + K(y_1 - y_2)
Regard the input of the system as the thrust T and the output as the position of the payload y_2. After a lot
of rearranging, a fourth-order linear time-invariant differential equation relates the input to the output.
d^4y_2/dt^4 + [ (m_1 b_2 + m_2(b_1 + b_2)) / (m_1 m_2) ] d^3y_2/dt^3 + [ ((m_1 + m_2)K + b_1 b_2) / (m_1 m_2) ] d^2y_2/dt^2 + [ b_1 K / (m_1 m_2) ] dy_2/dt = [ b_2 / (m_1 m_2) ] dT/dt + [ K / (m_1 m_2) ] T
or
d^4y_2/dt^4 + a_3 d^3y_2/dt^3 + a_2 d^2y_2/dt^2 + a_1 dy_2/dt = b̃_1 dT/dt + b̃_0 T
The previous example illustrates an important point which is how system component blocks are used to
build more complex systems. For example:
[Figure 1.9: Components combine to form complex systems. A command drives a hydraulic system whose actuator position in turn drives the aircraft dynamics, producing the pitch rate.]
1.5 State-Space Representations
With a system modeled as an ordinary differential equation, an equivalent state-space representation has
become common. A state-space model replaces a single second- or higher-order differential equation
Σ_{j=0}^{n} a_j (d^j y(t)/dt^j) = Σ_{k=0}^{m} b_k (d^k r(t)/dt^k),    where n > m
with an equivalent set of coupled first-order equations
ẋ = f(x, u)
y = h(x, u)
where x is a vector with dimension given by the order of the governing differential equation. In the case
of a linear system model, the state-space form is
ẋ = Ax + Bu
y = Cx + Du
Variable x is the system state; u is the system input and y is the output. All variables can be vectors
but in these notes, only x will be a vector; u and y will be scalars. This is known as a Single-Input,
Single-Output (SISO) system.
Example (A simple mass, spring and damper system).
If y(t) is the position of the mass and f(t) is an applied force, the governing differential equation is
m ÿ + c ẏ + k y = f    (1.1)
[Figure 1.10: A simple mass, spring and damper. The mass m is driven by the force f(t) and restrained by a spring k and a damper c.]
The second-order differential equation (1.1) is equivalent to a pair of first-order equations. Let
x_1 = y,    x_2 = ẏ
and put
x = [ x_1 ]
    [ x_2 ]
Then,
ẋ_1 = x_2
ẋ_2 = -(k/m) x_1 - (c/m) x_2 + (1/m) f
This is written in a compact matrix form (with input u = f) as
[ ẋ_1 ]   [   0       1  ] [ x_1 ]   [  0  ]
[ ẋ_2 ] = [ -k/m   -c/m  ] [ x_2 ] + [ 1/m ] u

y = [ 1  0 ] [ x_1 ]
             [ x_2 ]
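A state-space model in this form can be simulated directly with standard numerical tools. The following sketch shows one way to do it in Python with SciPy; the numerical values of m, c and k are illustrative assumptions, not values from the text.

```python
# Sketch: simulate the mass-spring-damper state-space model with SciPy.
# The numerical values of m, c and k are illustrative assumptions, not from the text.
import numpy as np
from scipy import signal

m, c, k = 1.0, 0.5, 2.0               # assumed mass, damping and spring constants

A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])      # from x1 = y, x2 = ydot
B = np.array([[0.0], [1.0 / m]])      # input u = f, the applied force
C = np.array([[1.0, 0.0]])            # output is the position y = x1
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)
t, y = signal.step(sys)               # unit-step force response

print(f"final value = {y[-1]:.3f}  (expected 1/k = {1.0 / k:.3f})")
```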
CHAPTER 2
Dynamic System Modeling
Systems considered in this class are those that can be modeled by linear time-invariant (LTI) differential
equations. The Laplace transform reduces the manipulation of these equations to simple algebra.
2.1 Simplifying LTI Differential Equations: The Laplace Transform
This section provides a quick review of the Laplace transform and a few of its properties. First, the
Laplace transform definition is given followed by a discussion of the types of functions for which the
transform exists. Next, derivations of the Laplace transform of commonly used functions are provided.
Definition (Laplace transform). The Laplace transform is defined for the function f(t) as
L{f(t)} ≜ F(s) = ∫_0^∞ f(t) e^{-st} dt,    Re(s) ≥ 0    (2.1)
where s is a complex number and Re(s) is the real part of s.
Note that s is regarded as a constant in the integration. The Laplace transform of f(t) exists if, for
some real number σ, the following integral converges
∫_0^∞ |f(t)| e^{-σt} dt < ∞    (2.2)
If it is true that for some real constants α, M and T, |f(t)| < M e^{αt} for T ≤ t < ∞, then the
integral (2.2) converges for α < σ < ∞. In general, the Laplace transform is defined for f(t) such that
1. f(t) is piecewise continuous.
2. f(t) e^{-σt} → 0 as t → ∞ for some real σ.
The Laplace transform is only defined for Re(s) ≥ σ. By a process called analytic continuation, the
domain of F(s) is extended to include all of the complex plane so that F(s) is analytic everywhere
except at the singularities (poles).
Example (Dirac Delta Function). The Dirac delta function or impulse function has infinite amplitude
and unit area.
δ(t) = { 0,  t ≠ 0
       { ∞,  t = 0        such that    ∫_{-∞}^{∞} δ(t) dt = 1
L{δ(t)} = ∫_0^∞ δ(t) e^{-st} dt = e^{-st} |_{t=0} = 1
[Figure 2.1: A Dirac delta function.]
Example (Unit Step Function).
u(t) = { 1,  t ≥ 0
       { 0,  t < 0
L{u(t)} = ∫_0^∞ u(t) e^{-st} dt = ∫_0^∞ e^{-st} dt = [ -e^{-st}/s ]_{t=0}^{t=∞} = 1/s
[Figure 2.2: A unit step function.]
Example (Exponential Function).
u(t) = e^{-at}
L{u(t)} = ∫_0^∞ e^{-at} e^{-st} dt = [ -(1/(s + a)) e^{-(s+a)t} ]_0^∞ = 1/(s + a),    Re(s) + a ≥ 0
Example (Sine and Cosine Functions). Laplace transforms for the sine and cosine are built from the
Laplace transform for the exponential. Let
u(t) = sin ωt,    v(t) = cos ωt
First, note that
sin ωt = Im(e^{iωt}),    cos ωt = Re(e^{iωt})
Then,
L{e^{iωt}} = ∫_0^∞ e^{iωt} e^{-st} dt
           = ∫_0^∞ e^{(iω-s)t} dt
           = [ (1/(iω - s)) e^{(iω-s)t} ]_0^∞
           = s/(s^2 + ω^2) + i ω/(s^2 + ω^2)
So that,
L[u(t)] = ω/(s^2 + ω^2),    L[v(t)] = s/(s^2 + ω^2)
Example (Time Delay).
u(t) = f(t - t_0)
L{u(t)} = L{f(t)} e^{-s t_0}
For example,
L[δ(t - t_0)] = L[δ(t)] e^{-s t_0} = e^{-s t_0}
L[u(t - t_0)] = L[u(t)] e^{-s t_0} = e^{-s t_0}/s
Example (Unit Pulse Function). The Laplace transform of a pulse is found by summing a step with a
delayed step.
u(t) - u(t - t_0) = { 0,  t < 0
                    { 1,  0 ≤ t < t_0
                    { 0,  t ≥ t_0
L{u(t) - u(t - t_0)} = ∫_0^∞ [u(t) - u(t - t_0)] e^{-st} dt
                     = ∫_0^{t_0} e^{-st} dt = [ -e^{-st}/s ]_{t=0}^{t=t_0} = (1 - e^{-s t_0})/s
[Figure 2.3: A unit pulse function of height 1 and width t_0.]
Example (Delayed Delta Function). The Dirac delta function seen at the top of the section in
a heuristic setting is revisited. The delta function is formed as a pulse with width Δ that
is taken in a limit to zero. It has already been shown how a pulse combines a pair of
step functions with a delay. Consider a pulse of width Δ and height 1/Δ, applied at time t_i.
Notice that the pulse has a unit area for all Δ.
δ_Δ(t - t_i) = (1/Δ)[ u(t - t_i) - u(t - t_i - Δ) ]
             = { 0,    t < t_i
               { 1/Δ,  t_i ≤ t < t_i + Δ
               { 0,    t_i + Δ ≤ t
[Figure 2.4: A pulse of width Δ and height 1/Δ applied at time t_i.]
The Laplace transform of the pulse is taken as follows.
L{δ_Δ(t - t_i)} = L{ (1/Δ)[ u(t - t_i) - u(t - t_i - Δ) ] }
                = (1/Δ)[ (1/s) e^{-s t_i} - (1/s) e^{-s(t_i + Δ)} ]
                = (1/(Δ s)) e^{-s t_i} (1 - e^{-sΔ})
Now find the limit of the pulse Laplace transform as Δ goes to zero.
lim_{Δ→0} L[δ_Δ(t - t_i)] = (1/s) e^{-s t_i} lim_{Δ→0} (1/Δ)[ 1 - (1 - sΔ + (1/2)(sΔ)^2 - ...) ]
                          = (1/s) e^{-s t_i} lim_{Δ→0} [ s - (1/2) s^2 Δ + ... ]
                          = e^{-s t_i}
So, the Laplace transform of a pulse at time t_i, taken in the limit, is the Laplace transform of a Dirac
delta function δ(t) that has been delayed to time t_i.
With a bit more work the sifting property can be demonstrated. With f(t) as any piecewise
continuous function, δ(t - t_i) has the property that
∫_{-∞}^{∞} f(t) δ(t - t_i) dt = f(t_i)
To get the transfer function for systems modeled by LTI ordinary differential equations, the Laplace
transform of the derivative and integral are needed.
2.1.1 Time differentiation
Differentiation becomes multiplication by s.
Theorem 2.1. Given the condition that f(t) e^{-st} → 0 as t → ∞, the Laplace transform of df/dt is
L{df/dt} = s F(s) - f(0^-)
where f(0^-) is the initial value of f(t).
Proof. By definition of the Laplace transform,
L{df/dt} = ∫_{0^-}^∞ (df/dt) e^{-st} dt
Integrate by parts to get
f(t) e^{-st} |_0^∞ = ∫_0^∞ (df/dt) e^{-st} dt + ∫_0^∞ f(t) e^{-st} (-s) dt = -f(0)
The first integral is L[df/dt] and the second is -sF(s), so L{df/dt} = sF(s) - f(0).
The Laplace transforms of higher-order derivatives are found in the same way and are given by
L{d^n f/dt^n} = s^n F(s) - s^{n-1} f(0^-) - s^{n-2} f^{(1)}(0^-) - ... - f^{(n-1)}(0^-)
Example.
dx/dt + (1/τ) x(t) = b r(t),    x(0) = x_0
Let L{x(t)} = X(s) and L{r(t)} = R(s). Then,
L{dx/dt} + (1/τ) L{x} = b L{r}
s X(s) - x_0 + (1/τ) X(s) = b R(s)
and
X(s) = x_0/(s + 1/τ) + b R(s)/(s + 1/τ)
Finding x(t) is very easy. We know that
L{e^{-at}} = 1/(s + a)
So x(t) with Laplace transform given by X(s) is
x(t) = x_0 e^{-t/τ} + b ∫_0^t e^{-(t-λ)/τ} r(λ) dλ
2.1.2 Time integration
Integration becomes division by s.
Theorem 2.2.
L{ ∫_0^t f(λ) dλ } = (1/s) F(s)
Proof. Find L{ ∫_0^t f(λ) dλ } using the Laplace transform of the derivative. Let g(t) = ∫_0^t f(λ) dλ. Then
dg/dt = f(t) and g(0) = 0. So,
L{f(t)} = L{dg/dt} = s G(s) - g(0) = s G(s) = F(s)
and
G(s) = L{ ∫_0^t f(λ) dλ } = (1/s) F(s)
In general, for zero initial conditions,
L{ ∫_0^{t_1} ... ∫_0^{t_{n-1}} ∫_0^{t_n} f(λ) dλ dt_n ... dt_2 } = (1/s^n) F(s)
2.2 A Time-Domain Input-Output Description: The Convolution Integral
This section builds a system input-output description in the time-domain. Given an input as a function of
time and a system impulse response, the convolution integral is developed as an expression that provides
the system response. First, a system input is approximated as a sum of pulses then in the limit as a linear
combination of an infinite series of impulses. Next the system output is found using a property of linear
systems called the principle of superposition. The principle of superposition states that if two inputs are
added together and applied to a system, the output is the same as if the responses to each of the inputs
were found separately and added together. The principle is applied to show that the system output at
a given time is found by summing the impulse responses over all previous times. When applied to
constant systems, this summation is the convolution integral.
Consider a piecewise continuous function r(t), where r(t) = 0 for t < 0. Approximate r(t) as a
series of pulse functions as in Figure 2.5.
[Figure 2.5: Function approximation as a series of pulses. The function r(t) is approximated by pulses of height r(t_i) and width Δ at the times t_i.]
Each pulse is given by
r_Δ(t_i) = r(t_i) δ_Δ(t - t_i) Δ
So, the function r(t) is approximated by a summation over the pulses as
r(t) ≈ Σ_{i=1}^{∞} r(t_i) δ_Δ(t - t_i) Δ
In the limit as Δ goes to zero, the summation becomes an integration by definition and r(t) is given by
the following integral identified above as the sifting property
r(t) = lim_{Δ→0} Σ_{i=1}^{∞} r(t_i) δ_Δ(t - t_i) Δ = ∫_0^∞ r(λ) δ(t - λ) dλ
Now regard r(t) as the input of a system and let G(·) be a functional that specifies uniquely the
system output in terms of the input:
y = G(r)
Assume the system has the properties that it is linear, time-invariant, causal and relaxed, defined as
follows.
Definition (Linear). A system G is linear if it satisfies the properties of additivity and homogeneity.
Suppose r_1(t) and r_2(t) are system inputs and α_1 and α_2 are scalars. A system G is linear if
G(α_1 r_1 + α_2 r_2) = α_1 G(r_1) + α_2 G(r_2)
A linear system is also said to satisfy the principle of superposition.
Definition (Time-invariant). A system is time-invariant means that the input-output relationship can
be shifted in time, that is, if r(t) → y(t), then r(t - λ) → y(t - λ).
Definition (Causal). A system is causal means that future inputs don't affect the current output.
Definition (Relaxed). A system is relaxed means that the system output y(t) for t ≥ 0 is solely and
uniquely excited by the input r(t) applied at t ≥ 0.
Approximate r(t) by the pulse functions r_Δ(t_i) = r(t_i) δ_Δ(t - t_i) Δ. Then the system output y(t) is
approximated by
y(t) ≈ G[ Σ_{i=1}^{∞} r_Δ(t_i) ] = G[ Σ_{i=1}^{∞} r(t_i) δ_Δ(t - t_i) Δ ] = Σ_{i=1}^{∞} G[δ_Δ(t - t_i)] r(t_i) Δ
Now take the limit as Δ → 0
y(t) = ∫_{-∞}^{∞} g(t - λ) r(λ) dλ,    t > λ    (2.3)
Here, g(t - λ) = G[δ(t - λ)] is called the impulse response and is the output at time t due to an impulse
δ(t - λ) applied at λ.
The limits on the integral (2.3) are changed by applying the assumptions that the system is relaxed
and causal. The system is relaxed means that the output y(t) for t ≥ 0 is only affected by inputs r(t)
for t ≥ 0. Thus the lower integration limit becomes 0.
y(t) = ∫_0^∞ g(t - λ) r(λ) dλ
The system is causal means the impulse response g(t) is zero for t < 0 or equivalently, that g(t - λ) = 0
for λ > t. This means the upper limit of the integral (2.3) becomes t. Finally, the output y(t) due to an
input r(λ) applied from λ = 0 to λ = t is given by the convolution integral:
y(t) = ∫_0^t g(t - λ) r(λ) dλ
Example. A simple damped mechanical system is illustrated in Figure 2.6.
[Figure 2.6: A simple damped mechanical system. A mass m moves with velocity v under an applied force f_1 and a friction force bv; the free body diagram balances f_1, bv and the inertia force m v̇.]
The differential equation of motion as given by Newton's law is
m v̇ = f_1 - b v
or
v̇ + (b/m) v = (1/m) f_1
Solve this the conventional way by multiplying by an integrating factor e^{-(b/m)(t-λ)} to get
(d/dλ)[ v(λ) e^{-(b/m)(t-λ)} ] = (1/m) f_1(λ) e^{-(b/m)(t-λ)}
Then integrate from λ = 0^- to λ = t so that
v(λ) e^{-(b/m)(t-λ)} |_0^t = ∫_0^t (1/m) e^{-(b/m)(t-λ)} f_1(λ) dλ
and
v(t) = v(0) e^{-(b/m)t} + ∫_0^t (1/m) e^{-(b/m)(t-λ)} f_1(λ) dλ = v(0) e^{-(b/m)t} + ∫_0^t f_2(t - λ) f_1(λ) dλ
The form of the solution shows that the integrating factor f_2(t - λ) = (1/m) e^{-(b/m)(t-λ)} can be interpreted
as the system impulse response. Notice that the system input f_1(t) can be the output of another system
component as illustrated in Figure 2.7.
[Figure 2.7: Simplify analysis of complex systems by decomposition into smaller components. A hydraulic system produces the force f_1(t), which is the input to the mass system that produces the velocity v(t).]
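The convolution picture can also be checked numerically. The sketch below, with assumed values for m and b and an assumed input force, discretizes the convolution integral with numpy.convolve and compares the result against scipy.signal.lsim applied to the equivalent transfer function 1/(ms + b).

```python
# Sketch: numerical convolution of the impulse response g(t) = (1/m) e^{-(b/m) t}
# with an input force, compared against scipy.signal.lsim. The values of m, b and
# the input force are illustrative assumptions.
import numpy as np
from scipy import signal

m, b = 1.0, 0.5
dt = 0.001
t = np.arange(0.0, 10.0, dt)

g = (1.0 / m) * np.exp(-(b / m) * t)       # impulse response of v' + (b/m) v = f1/m
f1 = np.sin(t)                             # an assumed input force

# Discretized convolution integral: v(t) = sum of g(t - lambda) f1(lambda) dlambda
v_conv = np.convolve(g, f1)[: len(t)] * dt

# Same response from the transfer function V(s)/F1(s) = 1/(m s + b)
sys = signal.TransferFunction([1.0], [m, b])
_, v_lsim, _ = signal.lsim(sys, U=f1, T=t)

print("max difference:", np.max(np.abs(v_conv - v_lsim)))   # should be small
```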
2.3 An s-Domain Input-Output Description: Transfer Functions
Finding a system response using the convolution integral can be very complicated. Fortunately, there
is an easier way using the Laplace transform. The following theorem shows that convolution in the
time-domain is the same as multiplication in the s-domain.
Theorem 2.3. Given the linear system output by convolution
y(t) = g(t) ∗ r(t) = ∫_0^t g(t - λ) r(λ) dλ
define the Laplace transforms
G(s) = L{g(t)},    R(s) = L{r(t)},    Y(s) = L{y(t)},
then the input-output relationship is given by a simple multiplication.
Y(s) = G(s) R(s)
Proof. Let
y(t) = g(t) ∗ r(t) = ∫_0^t g(λ) r(t - λ) dλ
Now find the Laplace transform of y(t).
L{y(t)} = L{ ∫_0^t g(λ) r(t - λ) dλ }
        = L{ ∫_0^∞ g(λ) r(t - λ) dλ }    because r(t - λ) = 0 for λ > t
        = ∫_0^∞ [ ∫_0^∞ g(λ) r(t - λ) dλ ] e^{-st} dt    by definition of the Laplace transform
        = ∫_0^∞ ∫_0^∞ g(λ) r(t - λ) e^{-st} dλ dt
Change the order of integration.
L{y(t)} = ∫_0^∞ ∫_0^∞ g(λ) r(t - λ) e^{-st} dt dλ
        = ∫_0^∞ ∫_0^∞ g(λ) r(t - λ) e^{-sλ} e^{-s(t-λ)} dt dλ
        = ∫_0^∞ g(λ) e^{-sλ} [ ∫_0^∞ r(t - λ) e^{-s(t-λ)} dt ] dλ
r(t - λ) = 0 for t < λ because inputs at future times can't affect the current output. So,
L{y(t)} = ∫_0^∞ g(λ) e^{-sλ} [ ∫_λ^∞ r(t - λ) e^{-s(t-λ)} dt ] dλ
Now do a change of variables. Let η = t - λ. Then dt = dη and t ∈ [λ, ∞) → η ∈ [0, ∞)
L{y(t)} = ∫_0^∞ g(λ) e^{-sλ} [ ∫_0^∞ r(η) e^{-sη} dη ] dλ = ∫_0^∞ g(λ) e^{-sλ} dλ ∫_0^∞ r(η) e^{-sη} dη
Y(s) = G(s) R(s)
The only missing detail is to show that
∫_0^t g(λ) r(t - λ) dλ = ∫_0^t g(t - λ) r(λ) dλ
Do this for homework.
Theorem 2.3 says that the system response given by the convolution integral in the time-domain
y(t) = ∫_0^t g(t - λ) r(λ) dλ
is simplified by multiplication of Laplace transforms in the s-domain
Y(s) = G(s) R(s)
[Figure 2.8: System descriptions in the time domain and the s-domain are equivalent.]
The system transfer function G(s) is the Laplace transform of the impulse response function g(t). It
expresses in the s-domain the relationship between the system input R(s) and the system output Y(s).
For example, if the system input is a Dirac delta function r(t) = δ(t) then R(s) = 1 and the system
output is Y(s) = G(s).
2.3.1 Transfer Function for Systems Described by LTI Differential Equations
The Laplace transform is a tool that provides easy evaluation of system responses by transforming linear
time-invariant differential equations into simpler algebraic equations and by including initial conditions
systematically.
Consider the n-th order linear, time-invariant dynamic system
Σ_{j=0}^{n} a_j (d^j y/dt^j) = Σ_{k=0}^{m} b_k (d^k u/dt^k)
with zero initial conditions on all derivatives. Take the Laplace transform of both sides. Then,
( Σ_{j=0}^{n} a_j s^j ) Y(s) = ( Σ_{k=0}^{m} b_k s^k ) U(s)
The relationship between the input and the output can be expressed as the ratio of two polynomials.
Y(s)/U(s) = ( Σ_{k=0}^{m} b_k s^k ) / ( Σ_{j=0}^{n} a_j s^j ) = G(s) = N(s)/D(s)
Note that the transfer function G(s) is defined from the differential equation where the initial conditions
are zero. Let K = b_m/a_n and b̃_k = b_k/b_m for k = 0, ..., m and ã_j = a_j/a_n for j = 1, ..., n. Then
G(s) = (b_m s^m + b_{m-1} s^{m-1} + ... + b_0) / (a_n s^n + a_{n-1} s^{n-1} + ... + a_0)
     = K (s^m + b̃_{m-1} s^{m-1} + ... + b̃_0) / (s^n + ã_{n-1} s^{n-1} + ... + ã_0)
     = K Π_{k=1}^{m} (s - z_k) / Π_{j=1}^{n} (s - p_j)
The numerator roots z_k are called system zeros.
The denominator roots p_j are called system poles.
The denominator polynomial is called the characteristic equation.
G(s) is said to be proper if m = n.
G(s) is said to be strictly proper if m < n.
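These definitions map directly onto standard software. As a sketch, SciPy can factor a transfer function into its zeros and poles; the numerical coefficients below are an assumed example, not taken from the text.

```python
# Sketch: zeros, poles and gain of an assumed example transfer function
#   G(s) = (2 s + 4) / (s^3 + 4 s^2 + 6 s + 4)
from scipy import signal

num = [2.0, 4.0]                 # b_m s^m + ... + b_0
den = [1.0, 4.0, 6.0, 4.0]       # a_n s^n + ... + a_0

G = signal.TransferFunction(num, den)
print("zeros:", G.zeros)         # roots of N(s)
print("poles:", G.poles)         # roots of D(s)
print("strictly proper:", len(num) - 1 < len(den) - 1)   # m < n
```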
Example (Why m > n is Not a Good Idea). Consider the simple mass model of Figure 2.9.
[Figure 2.9: A simple mass m moving with velocity v(t) under an applied force f(t).]
By Newton's law
f(t) = m v̇(t)
F(s) = m s V(s)
There are two variables, f and v. What is the input and what is the output? First, consider the input as
a unit step in speed v(t) and the output as the applied force f(t). The transfer function is
F(s)/V(s) = ms    (an improper transfer function)
Given that the input is a step applied at time t_0, the output is
F(s) = m s V(s) = m s (e^{-s t_0}/s) = m e^{-s t_0}
The output is an impulse at time t_0. This is not how most physical systems are observed to behave.
[Figure 2.10: Input-output relationship for an improper system. A unit step in v(t) at time t_0 produces an impulse in f(t) at t_0.]
Now consider the input as a unit step in the applied force f(t) and the output as the speed v(t). The
transfer function is
V(s)/F(s) = 1/(ms)    (a strictly proper transfer function)
Given that the input is a step applied at time t_0, the output is
V(s) = (1/(ms)) F(s) = (1/(ms)) (e^{-s t_0}/s) = e^{-s t_0}/(m s^2)
The output is a ramp that begins at time t_0.
[Figure 2.11: Input-output relationship for a strictly proper system. A unit step in f(t) at time t_0 produces a ramp in v(t) starting at t_0.]
It's not necessarily bad to have derivatives of the input. Just be sure there are enough derivatives of the
output so that system behavior is reasonable.
2.3.2 Initial and Final Value Theorems
Output values of interest include the steady-state value or y() and the initial value or y(0). They can
be determined from the time-domain response y(t) and from the s-domain output function Y (s) using
the following theorems.
Theorem 2.4 (Initial value theorem). Given a function f(t) with Laplace transform F(s), the value
of f(t) for t = 0 is
f(0) = lim_{s→∞} s F(s)
Proof. As s goes to infinity, e^{-st} goes to zero, hence the Laplace transform L{df/dt} = ∫_0^∞ (df/dt) e^{-st} dt also
goes to zero. Since L{df/dt} = s F(s) - f(0^-) the result follows.
Theorem 2.5 (Final value theorem). Given a function f(t) with Laplace transform F(s), the value of
f(t) as t → ∞ is
f(∞) = lim_{s→0} s F(s)
The final value theorem holds only if the following conditions are met:
The Laplace transforms of f(t) and df/dt exist.
lim_{t→∞} f(t) exists.
All poles of F(s) are in the LHP except for one which may be at the origin.
No poles are on the imaginary axis.
Proof. As s → 0,
L{df/dt} = ∫_0^∞ (df/dt) e^{-st} dt → ∫_0^∞ (df/dt) dt = f(∞) - f(0)
Then
lim_{s→0} L{df/dt} = lim_{s→0} [ s F(s) - f(0) ] = f(∞) - f(0)
and
f(∞) = lim_{s→0} s F(s)
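Both theorems are easy to sanity-check symbolically. A minimal sketch, using an assumed F(s) whose time function is known, compares lim s→∞ sF(s) and lim s→0 sF(s) with f(0) and f(∞).

```python
# Sketch: check the initial and final value theorems on an assumed example,
#   f(t) = 1 - e^{-2t}  <-->  F(s) = 2 / (s (s + 2))
import sympy as sp

s = sp.symbols('s', positive=True)
F = 2 / (s * (s + 2))

initial = sp.limit(s * F, s, sp.oo)   # should equal f(0) = 0
final = sp.limit(s * F, s, 0)         # should equal f(infinity) = 1
print(initial, final)                 # prints 0 1
```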
2.4 Inverse Laplace Transform Using Partial Fraction Expansion
The reason for introducing the Laplace transform is to have an easy way to find the system response.
If u(t), y(t) and g(t) are a system input, output and impulse response function, they are related by the
convolution integral
y(t) = ∫_0^t g(t - λ) u(λ) dλ
Given the Laplace transforms U(s) and G(s), Y(s) is found by simple multiplication
Y(s) = G(s) U(s)
Now, to find y(t) from Y(s), an inverse Laplace transform is needed
y(t) = L^{-1}{Y(s)} = (1/(2πi)) ∫_{c-i∞}^{c+i∞} Y(s) e^{st} ds
where c is a real constant greater than the real parts of all the singularities of Y(s) and where y(t) = 0
for t < 0.
The inverse Laplace transform is hard to evaluate, so for complicated Y(s) it is helpful to have a way
to break up Y(s) into pieces and then use tables to find the inverse Laplace transform of each piece.
This procedure is called partial fraction expansion. Let
G(s) = N(s)/D(s) = (b_m s^m + b_{m-1} s^{m-1} + ... + b_0) / (a_n s^n + a_{n-1} s^{n-1} + ... + a_0) = K Π_{k=1}^{m} (s - z_k) / Π_{j=1}^{n} (s - p_j)
The following sections illustrate partial fraction expansion for three cases: distinct poles, repeated poles
and complex poles.
2.4.1 The poles are distinct
Suppose the poles p_i are distinct, p_i ≠ p_j for i ≠ j. The partial fraction expansion of G(s) is
G(s) = Σ_{i=1}^{n} K_i/(s - p_i),
and the K_i are called residues. Find K_i by multiplying G(s) by (s - p_i) and letting s → p_i.
K_i = lim_{s→p_i} (s - p_i) G(s) = lim_{s→p_i} [ (s - p_i) Σ_{j=1}^{n} K_j/(s - p_j) ]
Example (Two Real Poles). Let
G(s) = s/(s^2 + 3s + 2),    R(s) = 1
Factor the denominator
Y(s) = s/((s + 2)(s + 1))
Then,
Y(s) = K_1/(s + 2) + K_2/(s + 1)
Find K_1
K_1 = lim_{s→-2} (s + 2) G(s) = lim_{s→-2} (s + 2) s/((s + 2)(s + 1)) = lim_{s→-2} s/(s + 1) = 2
Find K_2
K_2 = lim_{s→-1} (s + 1) G(s) = lim_{s→-1} (s + 1) s/((s + 2)(s + 1)) = lim_{s→-1} s/(s + 2) = -1
So
Y(s) = 2/(s + 2) - 1/(s + 1)
y(t) = 2 e^{-2t} - e^{-t}
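The same expansion can be reproduced symbolically. The sketch below uses SymPy to compute the partial fractions and the inverse Laplace transform for this example.

```python
# Sketch: reproduce the two-real-pole example with SymPy.
import sympy as sp

s, t = sp.symbols('s t', positive=True)
Y = s / (s**2 + 3*s + 2)

print(sp.apart(Y, s))                   # partial fractions: 2/(s + 2) - 1/(s + 1)
y = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y))                   # should reduce to 2*exp(-2*t) - exp(-t)
```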
2.4.2 A pole is of multiple order
Suppose the transfer function has one pole with multiplicity l
G(s) = N(s) / ( (s - p_1)(s - p_2) ... (s - p_i)^l ... (s - p_n) )
The partial fraction expansion is
G(s) = K_1/(s - p_1) + K_2/(s - p_2) + ... + K_n/(s - p_n)    (simple, distinct poles)
     + A_1/(s - p_i) + A_2/(s - p_i)^2 + ... + A_l/(s - p_i)^l    (l terms of repeated roots)
where
A_l = (s - p_i)^l G(s) |_{s=p_i}
A_{l-1} = [ (d/ds) (s - p_i)^l G(s) ]_{s=p_i}
A_{l-2} = (1/2!) [ (d^2/ds^2) (s - p_i)^l G(s) ]_{s=p_i}
...
A_1 = (1/(l-1)!) [ (d^{l-1}/ds^{l-1}) (s - p_i)^l G(s) ]_{s=p_i}
Example (One Repeated Root). Let ẍ + 2ẋ + x = 0; x(0) = x_0, ẋ(0) = ẋ_0. Then
s^2 X(s) - s x_0 - ẋ_0 + 2(s X(s) - x_0) + X(s) = 0
and,
X(s) = (s x_0 + 2x_0 + ẋ_0)/(s^2 + 2s + 1) = (s x_0 + 2x_0 + ẋ_0)/(s + 1)^2
The partial fraction expansion is
X(s) = A_1/(s + 1) + A_2/(s + 1)^2
A_2 = (s + 1)^2 X(s) |_{s=-1} = (s x_0 + 2x_0 + ẋ_0) |_{s=-1} = x_0 + ẋ_0
A_1 = [ (d/ds) (s + 1)^2 X(s) ]_{s=-1} = [ (d/ds) (s x_0 + 2x_0 + ẋ_0) ]_{s=-1} = x_0
and,
X(s) = x_0/(s + 1) + (x_0 + ẋ_0)/(s + 1)^2
The inverse Laplace transform which gives x(t) is now easy to find
x(t) = L^{-1}{X(s)} = x_0 e^{-t} + (x_0 + ẋ_0) t e^{-t}
Remark. Verification of an inverse Laplace transform is easy; finding it directly is much more difficult.
L{t e^{-at}} = ∫_0^∞ t e^{-(s+a)t} dt
             = [ -(1/(s + a)) t e^{-(s+a)t} ]_0^∞ + (1/(s + a)) ∫_0^∞ e^{-(s+a)t} dt
             = [ -(1/(s + a)) t e^{-(s+a)t} - (1/(s + a)^2) e^{-(s+a)t} ]_0^∞
             = 1/(s + a)^2
2.4.3 Complex Roots
Suppose the transfer function has complex conjugate poles,
G(s) = (b_m s^m + b_{m-1} s^{m-1} + ... + b_0) / (a_n s^n + a_{n-1} s^{n-1} + ... + a_0)
     = [ K_1/(s - p_1) + ... + K_i/(s - p_i) ]    (distinct roots)
     + [ A_1/(s - p_j) + ... + A_l/(s - p_j)^l ]    (repeated roots)
     + [ (K̃_1 s + K̃_2) / ((s + a)^2 + b^2) ]    (complex roots)
Complex pairs are distinct so the residues could be found as for distinct roots, but then the residues
would also be complex. With
G(s) = G_1(s) / ((s + a)^2 + b^2)
then
G(s) = K_1/(s + (a + ib)) + K_2/(s + (a - ib)) + ...
When the algebra is worked out, K_1 and K_2 are complex conjugates, that is,
G(s) = K_1/(s + (a + ib)) + K_2/(s + (a - ib)) + ... = (c + id)/(s + (a + ib)) + (c - id)/(s + (a - ib)) + ...
Real numbers are easier to work with, so let G(s) take the form
G(s) = (c + id)/(s + (a + ib)) + (c - id)/(s + (a - ib)) + ...
     = [ (c + id)(s + a - ib) + (c - id)(s + a + ib) ] / ((s + a)^2 + b^2) + ...
     = [ 2cs + 2(ca + db) ] / ((s + a)^2 + b^2) + ...
     = (K̃_1 s + K̃_2) / ((s + a)^2 + b^2) + ...
Now, K̃_1 and K̃_2 are real and are given by
K̃_1 = 2c,    K̃_2 = 2(ac + bd)
Invert this transform using
b/((s + a)^2 + b^2) ↔ e^{-at} sin bt
(s + a)/((s + a)^2 + b^2) ↔ e^{-at} cos bt
Example (Complex Roots). Let
G(s) = (s + 3)/(s^3 + 3s^2 + 6s + 4) = (s + 3)/((s + 1)(s^2 + 2s + 4)) = (s + 3)/((s + 1)[(s + 1)^2 + 3])
So the poles are
p_1 = -1,    p_{2,3} = -1 ± i√3
By partial fraction expansion:
G(s) = K_1/(s + 1) + K_2/(s + 1 + i√3) + K_3/(s + 1 - i√3) = K_1/(s + 1) + (K̃_2 s + K̃_3)/((s + 1)^2 + 3)
Find K_1, K_2 and K_3.
K_1 = (s + 1) G(s) |_{s=-1} = (s + 3)/(s^2 + 2s + 4) |_{s=-1} = 2/3
K_2 = (s + 1 + i√3) G(s) |_{s=-1-i√3} = (s + 3)/((s + 1)(s + 1 - i√3)) |_{s=-1-i√3}
    = (2 - i√3)/((-i√3)(-i2√3)) = (2 - i√3)/(-6) = -1/3 + i√3/6
K_3 = -1/3 - i√3/6
So
G(s) = (2/3)/(s + 1) + (-1/3 + i√3/6)/(s + 1 + i√3) + (-1/3 - i√3/6)/(s + 1 - i√3)
Now get rid of the complex numbers:
K̃_2 = 2c,    K̃_3 = 2(ac + bd)
where
K_2 = c + id = -1/3 + i√3/6    and    (s + a)^2 + b^2 = (s + 1)^2 + 3
so that
c = -1/3,    d = √3/6,    a = 1,    b = √3
So
K̃_2 = -2/3,    K̃_3 = 2(-1/3 + 1/2) = 1/3
and
G(s) = (2/3)/(s + 1) + (-(2/3)s + 1/3)/((s + 1)^2 + 3)
     = (2/3)/(s + 1) - (2/3)(s + 1)/((s + 1)^2 + 3) + 1/((s + 1)^2 + 3)
Finally, get the inverse Laplace transform:
g(t) = (2/3) e^{-t} - (2/3) e^{-t} cos √3 t + (1/√3) e^{-t} sin √3 t
Could also find K̃_2 and K̃_3 directly by equating coefficients.
G(s) = (s + 3)/(s^3 + 3s^2 + 6s + 4) = (s + 3)/((s + 1)[(s + 1)^2 + 3]) = K_1/(s + 1) + (K̃_2 s + K̃_3)/((s + 1)^2 + 3)
Then
s + 3 = K_1[(s + 1)^2 + 3] + (K̃_2 s + K̃_3)(s + 1)
      = (K_1 + K̃_2) s^2 + (2K_1 + K̃_2 + K̃_3) s + (4K_1 + K̃_3)
Equating the coefficients of s^2, s and 1 gives 0, 1 and 3 respectively. So
K̃_2 = -K_1 = -2/3
K̃_3 = 3 - 4K_1 = 1/3
And as before,
G(s) = (2/3)/(s + 1) + (-(2/3)s + 1/3)/((s + 1)^2 + 3)
     = (2/3)/(s + 1) - (2/3)(s + 1)/((s + 1)^2 + 3) + 1/((s + 1)^2 + 3)
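Numerical tools compute the same residues. As a sketch, scipy.signal.residue applied to this example should return the pole at -1 with residue 2/3 and the complex conjugate residues -1/3 ± i√3/6.

```python
# Sketch: residues and poles of G(s) = (s + 3)/(s^3 + 3 s^2 + 6 s + 4),
# computed numerically; compare with K1 = 2/3 and the complex pair -1/3 +/- i*sqrt(3)/6.
from scipy import signal

num = [1.0, 3.0]
den = [1.0, 3.0, 6.0, 4.0]

r, p, k = signal.residue(num, den)   # residues, poles, direct term
for ri, pi in zip(r, p):
    print(f"pole {complex(pi):.3f}   residue {complex(ri):.3f}")
```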
2.5 Summary: Relating Time-Domain and s-Domain Representations
Notice the relationship between the time response and the pole location for the following functions.
f(t)              F(s)                  Poles
1                 1/s                   p_1 = 0
t                 1/s^2                 p_{1,2} = 0
e^{at}            1/(s - a)             p_1 = a
e^{-at}           1/(s + a)             p_1 = -a
sin bt            b/(s^2 + b^2)         p_{1,2} = ±ib
cos bt            s/(s^2 + b^2)         p_{1,2} = ±ib
e^{-at} sin bt    b/((s + a)^2 + b^2)   p_{1,2} = -a ± ib

Table 2.1: Time-domain and s-domain representation of common functions. (The original table also sketches each pole pattern in the s-plane and the corresponding time response.)
Summary:
Poles are real → no oscillation.
Pole is positive → exponential increase, unstable.
Pole is negative → exponential decrease, stable.
Poles are imaginary → oscillation with no damping.
Poles are complex → damped oscillation.
[Figure 2.12: A time-response map of the s-plane. Responses for poles in the left half plane decay (stable) and in the right half plane grow (unstable); poles on the real axis give no oscillation and poles on the imaginary axis give oscillation with no damping.]
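This map can be reproduced numerically by computing the impulse response implied by each pole pattern. A short sketch follows; the specific pole locations are illustrative assumptions.

```python
# Sketch: impulse responses for several pole patterns of Table 2.1 / Figure 2.12.
# The specific pole values are illustrative assumptions.
import numpy as np
from scipy import signal

cases = {
    "real negative pole (decay)":         ([1.0], [1.0, 1.0]),        # 1/(s + 1)
    "real positive pole (growth)":        ([1.0], [1.0, -1.0]),       # 1/(s - 1)
    "imaginary poles (no damping)":       ([2.0], [1.0, 0.0, 4.0]),   # 2/(s^2 + 4)
    "complex poles (damped oscillation)": ([2.0], [1.0, 1.0, 4.0]),   # 2/(s^2 + s + 4)
}

t = np.linspace(0.0, 5.0, 500)
for name, (num, den) in cases.items():
    _, y = signal.impulse(signal.TransferFunction(num, den), T=t)
    print(f"{name:38s} |y(5)| = {abs(y[-1]):.3f}")
```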
CHAPTER 3
Dynamic System Response
The objective of this section is to develop ways to sketch a system time response given a transfer
function.
3.1 Time Response of a Few Simple Mechanical Systems
A simple mechanical system of masses connected by springs and dampers exhibits oscillatory, damped
and damped-oscillatory motion. In the context of controller design it is sometimes helpful to think of
this motion as the result of energy exchange and dissipation. Oscillation is a result of energy transfer
between two energy storage elements. The mass stores kinetic energy and the spring stores potential
energy. Damping is a result of energy dissipation.
3.1.1 Mass-Spring System
Governing Differential Equation
m ẍ + k x = f
Transfer Function
X(s) = ( 1/(m s^2 + k) ) F(s) = ( 1/√(km) ) ( √(k/m) / (s^2 + k/m) ) F(s)
[Figure 3.1: Time response of a simple mass-spring system. The step response oscillates about f_0/k and starts with zero slope; the impulse response oscillates about zero and starts with a non-zero slope; the poles lie on the imaginary axis.]
3.1.2 Mass-Damper System
Governing Differential Equation
m ẍ + c ẋ = f
Transfer Function
X(s) = ( 1/(m s^2 + c s) ) F(s) = ( 1/c ) ( 1/s ) ( (c/m) / (s + c/m) ) F(s)
[Figure 3.2: Time response of a simple mass-damper system. The step response approaches a constant-velocity ramp with v_∞ = f_0/c; the impulse response rises to x_∞ = f_0/c; the poles are at the origin and on the negative real axis.]
3.1.3 Mass-Spring-Damper System
Governing Differential Equation
m ẍ + c ẋ + k x = f
Transfer Function
X(s) = ( 1/(m s^2 + c s + k) ) F(s)
[Figure 3.3: Time response of a simple mass-spring-damper system. The step response is a damped oscillation settling to x_∞ = f_0/k; the impulse response is a damped oscillation about zero; the poles are a complex conjugate pair in the left half plane.]
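The three responses are easy to compare numerically. The sketch below computes the force-step response of each transfer function above with SciPy; the numerical values of m, c and k are illustrative assumptions.

```python
# Sketch: step responses of the mass-spring, mass-damper and mass-spring-damper
# models. The numerical values of m, c and k are illustrative assumptions.
import numpy as np
from scipy import signal

m, c, k = 1.0, 1.0, 4.0
t = np.linspace(0.0, 10.0, 1000)

systems = {
    "mass-spring        X/F = 1/(m s^2 + k)":       ([1.0], [m, 0.0, k]),
    "mass-damper        X/F = 1/(m s^2 + c s)":     ([1.0], [m, c, 0.0]),
    "mass-spring-damper X/F = 1/(m s^2 + c s + k)": ([1.0], [m, c, k]),
}

for name, (num, den) in systems.items():
    _, x = signal.step(signal.TransferFunction(num, den), T=t)
    print(f"{name}:  x(10) = {x[-1]:7.3f}")
```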
3.2 System Time Response Characteristics
Time response characteristics fall into three categories:
transient response: soon after a command.
steady-state response: long after a command.
stability.
In practice there is often a tradeoff between transient response and steady-state response.
Given a system Y(s) = G(s)R(s), find the behavior of y(t) for various inputs r(t). The input r(t)
is not known in advance so consider the system response for certain standard inputs.
1) unit step       r(t) = u(t)           R(s) = 1/s
2) unit ramp       r(t) = t u(t)         R(s) = 1/s^2
3) unit impulse    r(t) = δ(t)           R(s) = 1
4) sinusoid        r(t) = u(t) sin ωt    R(s) = ω/(s^2 + ω^2)
Any linear, constant-coefficient system can be decomposed into a cascade of first and second-order
systems as was shown in the partial fraction expansion development. So, begin looking at system time
response by looking at the time response of first and second-order systems.
[Figure 3.4: Decompose complex systems into first and second-order systems. The single block G(s) from R(s) to Y(s) is realized as a cascade of G_1(s), G_2(s) and G_3(s) with intermediate outputs Y_1(s) and Y_2(s).]
3.3 Characteristics of First-Order Systems
Consider the first-order system
a_1 ẏ + a_0 y = b_0 r
(a_1 s + a_0) Y(s) = b_0 R(s)
The transfer function is
G(s) = Y(s)/R(s) = b_0/(a_1 s + a_0) = (b_0/a_1)/(s + a_0/a_1)
The poles of G(s) are the roots of the denominator polynomial: p = -a_0/a_1.
3.3.1 First-Order System Parameters
Definition (Time constant τ). τ = a_1/a_0
Definition (Open-loop or DC gain K). K = b_0/a_0
Then
G(s) = (b_0/a_0)(a_0/a_1)/(s + a_0/a_1) = K (1/τ)/(s + 1/τ)
3.3.2 S-Plane Description of First-Order Systems
The first-order system pole and time constant are related by p = -1/τ. Suppose a first-order system is
given an initial value, y(0) = y_0; then the time constant is the time required for y(t) to reach 37% of
its initial value, y(τ) = 0.37 y_0, as illustrated in Figure 3.5. To see this, solve for y(t) then set t = τ.
The DC gain comes from the observation that for a unit step input, y(t) goes to K as t goes to
infinity. To see this, solve for Y(s) using a partial fraction expansion.
Y(s) = G(s)R(s) = K (1/τ)/(s + 1/τ) (1/s) = K/s - K/(s + 1/τ)
Then
y(t) = K u(t) - K e^{-t/τ}
and goes to K as t goes to infinity. The same result is achieved using the final value theorem.
y(∞) = lim_{s→0} s Y(s) = lim_{s→0} s K (1/τ)/(s(s + 1/τ)) = K
[Figure 3.5: Impulse response of a first-order system. The response starts at K/τ and decays to 0.37 K/τ at t = τ; the pole is at s = -1/τ on the negative real axis.]
3.4 Characteristics of Second-Order Systems
Consider the second-order system
a_2 ÿ + a_1 ẏ + a_0 y = b_0 r
(a_2 s^2 + a_1 s + a_0) Y(s) = b_0 R(s)
The transfer function is
G(s) = Y(s)/R(s) = b_0/(a_2 s^2 + a_1 s + a_0) = (b_0/a_2)/(s^2 + (a_1/a_2) s + a_0/a_2)
The poles of G(s) are the roots of the denominator polynomial
p_{1,2} = -a_1/(2a_2) ± √( (a_1/(2a_2))^2 - a_0/a_2 )    (3.1)
3.4.1 Second-Order System Parameters
Depending on the value of the term inside the square root of (3.1), the second-order system poles fall in
one of three categories: both poles are real and distinct, both poles are complex, or the poles are real and
equal. The impulse response for each case is distinctively different.
Both poles are real and distinct
The poles are real and distinct when
(a_1/(2a_2))^2 > a_0/a_2
Define
G(s) = Y(s)/R(s) = (b_0/a_2)/((s - p_1)(s - p_2)) = K_1/(s - p_1) + K_2/(s - p_2)
The impulse response is the sum of two exponential decays
g(t) = K_1 e^{p_1 t} + K_2 e^{p_2 t}
The response is non-oscillatory. Such a system is said to be overdamped.
Both poles are complex
The poles form a complex conjugate pair when
(a_1/(2a_2))^2 < a_0/a_2
For convenience, define two quantities
Definition (Time constant τ). 1/τ = a_1/(2a_2)
Definition (Damped frequency ω_d). ω_d = √( a_0/a_2 - (a_1/(2a_2))^2 )
Then the poles are
p_{1,2} = -1/τ ± i ω_d
and the transfer function is
G(s) = (b_0/a_2)/((s + 1/τ)^2 + ω_d^2) = (b_0/(a_2 ω_d)) ω_d/((s + 1/τ)^2 + ω_d^2)
The impulse response is a decaying sinusoid
g(t) = (b_0/(a_2 ω_d)) e^{-t/τ} sin ω_d t
Since the response is oscillatory, such a system is said to be underdamped.
Both poles are real and equal
This situation occurs when
(a_1/(2a_2))^2 = a_0/a_2    (3.2)
The poles are
p_1 = p_2 = -a_1/(2a_2) = -1/τ
Note that ω_d = 0. The transfer function is
G(s) = (b_0/a_2)/(s + 1/τ)^2
The impulse response is the borderline case between an oscillatory and non-oscillatory response.
g(t) = (b_0/a_2) t e^{-t/τ}
Such a system is said to be critically damped. The value of a_1 needed for critical damping, a_{1c},
satisfies (3.2) so that
a_{1c} = 2√(a_0 a_2)
Definition (Damping ratio ζ). ζ = a_1/a_{1c} = a_1/(2√(a_0 a_2))
Definition (Natural frequency ω_n). ω_n = √(a_0/a_2)
Given the above definition of a system damping ratio, it is apparent that for ζ < 1, the system is
underdamped with an oscillatory impulse response and has a pair of complex poles. For ζ > 1, the
system is overdamped with a non-oscillatory impulse response and a pair of real and distinct poles.
Furthermore, when the system has no damping, ζ = 0, the damped frequency is the same as the natural
frequency, ω_d = ω_n.
A second-order system can be described completely by two parameters and a DC gain, yet four
parameters have been presented: τ, ω_d, ζ and ω_n. Two expressions are needed to relate the four
parameters. Describe τ and ω_d in terms of ζ and ω_n as follows.
1/τ = a_1/(2a_2) = [ a_1/(2√(a_0 a_2)) ] [ √(a_0 a_2)/a_2 ] = [ a_1/(2√(a_0 a_2)) ] √(a_0/a_2) = ζ ω_n
ω_d = √( a_0/a_2 - (a_1/(2a_2))^2 ) = √(a_0/a_2) √( 1 - (a_2/a_0)(a_1/(2a_2))^2 ) = √(a_0/a_2) √( 1 - (a_1/(2√(a_0 a_2)))^2 ) = ω_n √(1 - ζ^2)
Now describe the system in terms of ζ and ω_n.
a_2 ÿ + a_1 ẏ + a_0 y = b_0 r
ÿ + (a_1/a_2) ẏ + (a_0/a_2) y = (b_0/a_0)(a_0/a_2) r
ÿ + 2ζω_n ẏ + ω_n^2 y = K ω_n^2 r
The transfer function is
G(s) = Y(s)/R(s) = b_0/(a_2 s^2 + a_1 s + a_0) = K ω_n^2/(s^2 + 2ζω_n s + ω_n^2)
As for the first-order system, K = b_0/a_0 is the open-loop gain or the DC gain. This is because, for a unit
step input, the output approaches K.
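These relations are convenient to collect in a small helper. The sketch below computes K, ω_n, ζ, τ, ω_d and the poles from the coefficients a_2, a_1, a_0 and b_0; the values used in the demonstration call are an assumed example.

```python
# Sketch: second-order parameters from the coefficients of
#   a2*y'' + a1*y' + a0*y = b0*r.   The demo values are an assumed example.
import numpy as np

def second_order_parameters(a2, a1, a0, b0):
    K = b0 / a0                                          # DC gain
    wn = np.sqrt(a0 / a2)                                # natural frequency
    zeta = a1 / (2.0 * np.sqrt(a0 * a2))                 # damping ratio
    tau = 1.0 / (zeta * wn) if zeta > 0 else np.inf      # time constant
    wd = wn * np.sqrt(1.0 - zeta**2) if zeta < 1 else 0.0  # damped frequency
    poles = np.roots([a2, a1, a0])
    return K, wn, zeta, tau, wd, poles

K, wn, zeta, tau, wd, poles = second_order_parameters(a2=1.0, a1=1.0, a0=4.0, b0=4.0)
print(f"K={K}, wn={wn}, zeta={zeta}, tau={tau:.2f}, wd={wd:.3f}")
print("poles:", poles)   # should equal -zeta*wn +/- i*wd
```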
3.4.2 S-Plane Description of Second-Order Systems
The poles of a second-order system are
p_{1,2} = -a_1/(2a_2) ± √( (a_1/(2a_2))^2 - a_0/a_2 ) = -ζω_n ± ω_n √(ζ^2 - 1)
For an overdamped system ζ > 1, so ζ^2 - 1 > 0 and the poles are both real. Since 1/τ = ζω_n and
ω_d = ω_n √(ζ^2 - 1), the system poles are also given by
p_{1,2} = -1/τ ± ω_d
Further, as the system damping goes to infinity, the poles go to -∞ and 0. This follows from
p_{1,2} = -ζω_n ± ω_n √(ζ^2 - 1) → -ζω_n ± ζω_n = {-2ζω_n, 0}    as ζ → ∞
[Figure 3.6: Migration of poles as the damping becomes large. The two real poles move apart along the negative real axis, one toward -2ζω_n and the other toward the origin.]
For an underdamped system ζ < 1, so ζ^2 - 1 < 0 and the poles form a complex pair.
p_{1,2} = -ζω_n ± i ω_n √(1 - ζ^2) = -1/τ ± i ω_d
As the system damping goes to zero, the poles approach the imaginary axis. This is because
p_{1,2} = -ζω_n ± i ω_n √(1 - ζ^2) → ±i ω_n    as ζ → 0
[Figure 3.7: Migration of poles as the damping goes to zero. The complex pair lies at distance ω_n from the origin with real part -ζω_n and imaginary part ±ω_d; as ζ → 0 the poles move onto the imaginary axis at ±iω_n.]
3.4.3 S-plane Interpretation Of Second-Order Dynamics
Figure 3.8 summarizes the relations between the system pole locations in the s-plane and the system
characteristics ω_d, τ, ω_n and ζ.
[Figure 3.8: S-plane loci for constant system characteristics: constant ω_d loci, constant τ loci, constant ω_n loci and constant ζ loci.]
In the next sections, the system characteristics τ, ω_d, ω_n and ζ are related directly to the time response.
The goal is to know what the time response looks like just by looking at the transfer function.
3.5 Performance of First-Order Systems
Consider the first-order system
a_1 ẏ(t) + a_0 y(t) = b_0 r(t)
where the response is given by
Y(s) = ( b_0/(a_1 s + a_0) ) R(s) = K (1/τ)/(s + 1/τ) R(s)
where 1/τ = a_0/a_1 and K = b_0/a_0. Apply a step input R(s) = 1/s and find the response by a partial fraction
expansion,
Y(s) = K (1/τ)/(s(s + 1/τ)) = K [ 1/s - 1/(s + 1/τ) ]
The time response is given by the inverse Laplace transform
y(t) = K (1 - e^{-t/τ}),    ẏ(0) = K/τ
Figure 3.9 illustrates the relationship between the location of a first-order system pole and the step
response.
[Figure 3.9: First-order system pole location and step performance. The step response is the sum of the forced response K and the natural response -K e^{-t/τ}; it reaches 0.63K at t = τ and approaches K, and the pole lies at s = -1/τ.]
For first-order systems, the time constant τ influences two features of the time response:
t = τ is the time when y(t) reaches 63% of the final value.
K/τ is the initial slope of y(t), that is, ẏ(0) = K (1/τ).
The final value theorem can be used as long as τ > 0 to show that y(∞) = K.
y(∞) = lim_{s→0} s Y(s) = lim_{s→0} s K (1/τ)/(s(s + 1/τ)) = K
3.5.1 System Performance Definitions

Definition (Rise-time $T_r$). For a first-order system with a step input, the rise-time is the time required for the system output $y(t)$ to go from $\frac{1}{10}$ to $\frac{9}{10}$ of the final $y(\infty)$.

Definition (Settling-time $T_s$). For a first-order system with a step input, the settling-time is the time required for the system output $y(t)$ to stay within either 2% or 5% of the final $y(\infty)$.
3.5.2 Relation to System Characteristics
The rise-time $T_r$ and settling-time $T_s$ describe the transient response. How are they related to the system characteristic, the time constant $\tau$?

Rise-time

From $y(t) = K(1 - e^{-t/\tau})$, solve for $T_r = \tau(\ln 0.9 - \ln 0.1) = \tau\ln 9 \approx 2.2\tau$.

Settling-time

Using the 2% settling-time definition, at $t = T_s$,
$$y(T_s) = K\left(1 - e^{-T_s/\tau}\right) = 0.98K$$
Solve for $T_s$ to get
$$T_s = \tau\ln 50 \approx 4\tau$$
Alternatively, using the 5% settling-time definition, at $t = T_s$,
$$y(T_s) = K\left(1 - e^{-T_s/\tau}\right) = 0.95K$$
and the settling-time is given by
$$T_s = \tau\ln 20 \approx 3\tau$$
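A short sketch of these first-order step-response formulas (my own illustration; the helper name and the sample time constant are made up):

```python
import math

def first_order_step_metrics(tau):
    """Rise-time and settling-times of y(t) = K*(1 - exp(-t/tau))."""
    Tr = tau * math.log(9.0)      # 10%-90% rise-time, about 2.2*tau
    Ts2 = tau * math.log(50.0)    # 2% settling-time, about 3.9*tau
    Ts5 = tau * math.log(20.0)    # 5% settling-time, about 3.0*tau
    return Tr, Ts2, Ts5

print(first_order_step_metrics(0.5))  # tau = 0.5 s
```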
3.6 Performance of Second-Order Systems
Consider the second-order underdamped system
$$a_2\ddot{y}(t) + a_1\dot{y}(t) + a_0 y(t) = b_0 r(t)$$
where the response is
$$Y(s) = \frac{b_0}{a_2 s^2 + a_1 s + a_0}R(s) = K\frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}R(s)$$
and where
$$\omega_n^2 = \frac{a_0}{a_2}, \qquad \zeta = \frac{a_1}{2\sqrt{a_0 a_2}}, \qquad K = \frac{b_0}{a_0}$$
Apply a step input $R(s) = \frac{1}{s}$ and find the response by a partial fraction expansion:
$$Y(s) = K\frac{\omega_n^2}{s(s^2 + 2\zeta\omega_n s + \omega_n^2)} = \frac{K_1}{s} + \frac{K_2 s + K_3}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$
Now find the residues $K_1$, $K_2$ and $K_3$. The easiest way is to multiply and equate like terms in the numerator.
$$Y(s) = K\frac{\omega_n^2}{s(s^2 + 2\zeta\omega_n s + \omega_n^2)}
= \frac{K_1(s^2 + 2\zeta\omega_n s + \omega_n^2) + (K_2 s + K_3)s}{s(s^2 + 2\zeta\omega_n s + \omega_n^2)}
= \frac{(K_1 + K_2)s^2 + (2\zeta\omega_n K_1 + K_3)s + K_1\omega_n^2}{s(s^2 + 2\zeta\omega_n s + \omega_n^2)}$$
Equate like powers of $s$ to get
$$K_1 + K_2 = 0, \qquad 2\zeta\omega_n K_1 + K_3 = 0, \qquad K_1\omega_n^2 = K\omega_n^2$$
$$\Longrightarrow\quad K_1 = K, \qquad K_2 = -K, \qquad K_3 = -2\zeta\omega_n K$$
Then
$$Y(s) = K\left[\frac{1}{s} - \frac{s + 2\zeta\omega_n}{s^2 + 2\zeta\omega_n s + \omega_n^2}\right]
= K\left[\frac{1}{s} - \frac{s + 2\zeta\omega_n}{(s + \zeta\omega_n)^2 + \omega_n^2(1 - \zeta^2)}\right]$$
$$= K\left[\frac{1}{s} - \frac{s + \zeta\omega_n}{(s + \zeta\omega_n)^2 + \omega_n^2(1 - \zeta^2)} - \frac{\zeta\omega_n}{(s + \zeta\omega_n)^2 + \omega_n^2(1 - \zeta^2)}\right]$$
$$= K\left[\frac{1}{s} - \frac{s + \zeta\omega_n}{(s + \zeta\omega_n)^2 + \omega_n^2(1 - \zeta^2)} - \frac{\zeta}{\sqrt{1 - \zeta^2}}\,\frac{\omega_n\sqrt{1 - \zeta^2}}{(s + \zeta\omega_n)^2 + \omega_n^2(1 - \zeta^2)}\right]$$
And the time response is
$$y(t) = K\left[1 - e^{-\zeta\omega_n t}\cos\!\left(\omega_n\sqrt{1-\zeta^2}\,t\right) - \frac{\zeta}{\sqrt{1-\zeta^2}}\,e^{-\zeta\omega_n t}\sin\!\left(\omega_n\sqrt{1-\zeta^2}\,t\right)\right]$$
$$= K\left[1 - \frac{1}{\sqrt{1-\zeta^2}}\,e^{-\zeta\omega_n t}\left(\sqrt{1-\zeta^2}\cos\!\left(\omega_n\sqrt{1-\zeta^2}\,t\right) + \zeta\sin\!\left(\omega_n\sqrt{1-\zeta^2}\,t\right)\right)\right]$$
This can be simplified further using trigonometric identities to combine the $\sin(\cdot)$ and $\cos(\cdot)$. Since the system is assumed to be underdamped, $0 \le \zeta \le 1$, let $\sin\theta = \zeta$ and $\cos\theta = \sqrt{1-\zeta^2}$ so that
$$y(t) = K\left[1 - \frac{1}{\sqrt{1-\zeta^2}}\,e^{-\zeta\omega_n t}\left(\cos\theta\cos\!\left(\omega_n\sqrt{1-\zeta^2}\,t\right) + \sin\theta\sin\!\left(\omega_n\sqrt{1-\zeta^2}\,t\right)\right)\right]$$
$$= K\left[1 - \frac{1}{\sqrt{1-\zeta^2}}\,e^{-\zeta\omega_n t}\cos\!\left(\omega_n\sqrt{1-\zeta^2}\,t - \theta\right)\right]
= K\left[1 - \frac{1}{\sqrt{1-\zeta^2}}\,e^{-\zeta\omega_n t}\cos\!\left(\omega_d t - \theta\right)\right]$$
Finally, normalize $y(t)$ so that $\tilde{y}(t) = \frac{1}{K}y(t)$ and $\tilde{y}(t)$ goes to 1 as $t$ goes to $\infty$. Figure 3.10 illustrates the relationship between the location of second-order underdamped system poles and the step response.
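The closed form above is easy to evaluate directly. The sketch below is a minimal illustration (not from the text; the parameter values are arbitrary) of the underdamped step response.

```python
import math

def second_order_step(K, zeta, wn, t):
    """Unit-step response of K*wn^2 / (s^2 + 2*zeta*wn*s + wn^2), 0 < zeta < 1."""
    wd = wn * math.sqrt(1.0 - zeta**2)      # damped frequency
    theta = math.asin(zeta)                 # phase angle, sin(theta) = zeta
    env = math.exp(-zeta * wn * t) / math.sqrt(1.0 - zeta**2)
    return K * (1.0 - env * math.cos(wd * t - theta))

# Sample the response of a system with K = 1, zeta = 0.3, wn = 2 rad/s
for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    print(t, round(second_order_step(1.0, 0.3, 2.0, t), 4))
```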
3.6.1 System Performance Denitions
Definition (Rise-time $T_{r1}$). Same as for first-order systems. With a step input, the rise-time is the time required for the system output $y(t)$ to go from $\frac{1}{10}$ to $\frac{9}{10}$ of the final $y(\infty)$.
Figure 3.10: Second-order system poles and step performance.
Definition (Rise-time $T_{r2}$). For a second-order underdamped system with a step input, an alternate definition of rise-time is the time required for the system output $y(t)$ to first reach the steady-state output value $y(\infty)$. See Figure 3.10.

Definition (Settling-time $T_s$). Same as for first-order systems. With a step input, the settling-time is the time required for the system output $y(t)$ to stay within either 2% or 5% of the final $y(\infty)$.

Definition (Maximum overshoot $M_p$). For a second-order system with a step input, the maximum overshoot is the difference between the maximum value of the output, $\max_t y(t)$, and the steady-state output value $y(\infty)$. See Figure 3.10. The maximum overshoot usually is expressed as a percent.

Definition (Peak-time $T_p$). The peak-time is the time of maximum overshoot.

3.6.2 Relation to System Characteristics

The quantities $T_{r1}$, $T_{r2}$, $T_s$, $M_p$, and $T_p$ describe the transient response. How are they related to the system characteristics $\omega_n$ and $\zeta$?
Settling-time

The envelope of $\tilde{y}(t)$ is governed by
$$\tilde{y}(t) = 1 - \frac{1}{\sqrt{1-\zeta^2}}e^{-\zeta\omega_n t}$$
At $T_s$,
$$\tilde{y}(T_s) = 1 - \frac{1}{\sqrt{1-\zeta^2}}e^{-\zeta\omega_n T_s} = 0.98$$
Solve for $T_s$:
$$T_s = \frac{-\ln\!\left(0.02\sqrt{1-\zeta^2}\right)}{\zeta\omega_n}$$
For $0 \le \zeta \le 0.9$, $3.9 \le -\ln\!\left(0.02\sqrt{1-\zeta^2}\right) \le 4.7$. For the purpose of setting a definition, put
$$T_s = \frac{4}{\zeta\omega_n}$$
Rule. $T_s$ depends only on $\zeta\omega_n$. To decrease $T_s$, increase $\zeta\omega_n$.
Peak-time

At the peak $t = T_p$, $\tilde{y}(T_p) = 1 + M_p$ and $\dot{\tilde{y}}(T_p) = 0$. Solve for $T_p$ by looking for zero slope.
$$\tilde{y}(t) = 1 - \frac{1}{\sqrt{1-\zeta^2}}e^{-\zeta\omega_n t}\cos(\omega_d t - \theta)
= 1 - e^{-\zeta\omega_n t}\left[\cos\omega_d t + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d t\right]$$
Get the derivative of $\tilde{y}(t)$
$$\dot{\tilde{y}}(t) = \zeta\omega_n e^{-\zeta\omega_n t}\left[\cos\omega_d t + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d t\right] + e^{-\zeta\omega_n t}\left(\omega_d\sin\omega_d t - \zeta\omega_n\cos\omega_d t\right)$$
$$= \left[\frac{\zeta^2\omega_n}{\sqrt{1-\zeta^2}} + \omega_n\sqrt{1-\zeta^2}\right]e^{-\zeta\omega_n t}\sin\omega_d t
= \frac{\omega_n}{\sqrt{1-\zeta^2}}e^{-\zeta\omega_n t}\sin\omega_d t$$
Now find where the slope is zero
$$\dot{\tilde{y}}(t) = 0 \quad\text{for}\quad t = \frac{n\pi}{\omega_d}, \qquad n = 0, 1, 2, \ldots$$
The peak-time, $T_p$, occurs at the first occurrence of $\dot{\tilde{y}}(t) = 0$ after $t = 0$. So,
$$T_p = \frac{\pi}{\omega_d} = \frac{\pi}{\omega_n\sqrt{1-\zeta^2}}$$
Rule. $T_p$ depends only on $\omega_d$. To decrease $T_p$, increase $\omega_d$.
Maximum overshoot

Evaluate $\tilde{y}(t)$ at $t = T_p = \frac{\pi}{\omega_d}$
$$\tilde{y}(T_p) = 1 - e^{-\zeta\omega_n T_p}\left[\cos\omega_d T_p + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d T_p\right]
= 1 - e^{-\zeta\pi/\sqrt{1-\zeta^2}}\left[\cos\pi + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\pi\right]
= 1 + e^{-\zeta\pi/\sqrt{1-\zeta^2}} = 1 + M_p$$
Then
$$M_p = e^{-\zeta\pi/\sqrt{1-\zeta^2}}$$
For $0 \le \zeta \le 0.6$, $M_p$ can be approximated as
$$M_p \approx 1 - \frac{\zeta}{0.6}$$
The approximate linear relation between overshoot and damping is illustrated in Figure 3.11.

Rule. $M_p$ depends on $\zeta$ only. To decrease $M_p$, increase $\zeta$.
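Collecting the settling-time, peak-time and overshoot formulas, together with the 0%-100% rise-time expression derived in the rise-time subsection below, a small sketch such as the following can be used to evaluate them; the function name and the sample numbers are illustrative only.

```python
import math

def second_order_metrics(zeta, wn):
    """Approximate step-response metrics for an underdamped second-order system."""
    wd = wn * math.sqrt(1.0 - zeta**2)
    Ts = 4.0 / (zeta * wn)                                       # 2% settling-time rule
    Tp = math.pi / wd                                            # peak time
    Mp = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))    # overshoot fraction
    Tr = (math.pi / 2.0 + math.asin(zeta)) / wd                  # 0%-100% rise-time
    return Ts, Tp, Mp, Tr

print(second_order_metrics(0.5, 3.0))
```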
Figure 3.11: Overshoot decreases approximately linearly with increased damping.
Rise-time
The 10%-90% rise-time has no analytic relationship to $\zeta$ and $\omega_n$. But for underdamped systems, it makes sense to introduce an alternate 0%-100% rise-time definition. Set $\tilde{y}(t) = 1$ at $t = T_{r2}$.
$$\tilde{y}(T_r) = 1 - e^{-\zeta\omega_n T_r}\left[\cos\omega_d T_r + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d T_r\right] = 1$$
$$\cos\omega_d T_r + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d T_r = 0$$
$$\tan\omega_d T_r = -\frac{\sqrt{1-\zeta^2}}{\zeta}
\qquad\Longrightarrow\qquad
T_r = \frac{1}{\omega_d}\tan^{-1}\!\left(-\frac{\sqrt{1-\zeta^2}}{\zeta}\right)$$
Let $\cos\theta = \sqrt{1-\zeta^2}$ and $\sin\theta = \zeta$. Then $\tan\theta = \frac{\zeta}{\sqrt{1-\zeta^2}}$ and
$$\tan\!\left(\theta + \frac{\pi}{2}\right) = -\frac{\sqrt{1-\zeta^2}}{\zeta}
\qquad\Longrightarrow\qquad
\tan^{-1}\!\left(-\frac{\sqrt{1-\zeta^2}}{\zeta}\right) = \frac{\pi}{2} + \theta = \frac{\pi}{2} + \sin^{-1}\zeta \approx \frac{\pi}{2} + \zeta$$
The rise-time is
$$T_r = \frac{1}{\omega_d}\tan^{-1}\!\left(-\frac{\sqrt{1-\zeta^2}}{\zeta}\right)
= \frac{\frac{\pi}{2} + \sin^{-1}\zeta}{\omega_n\sqrt{1-\zeta^2}}
\approx \frac{\pi + 2\zeta}{2\omega_n\sqrt{1-\zeta^2}}
\approx \frac{\pi}{2\omega_n}$$
Rule. To decrease $T_r$, increase $\omega_n$.

Rise-time can also be made smaller by decreasing $\zeta$, but this causes larger overshoot. See Figure 3.12 for a comparison of the two approaches to decreasing rise-time.
Figure 3.12: Influence of natural frequency and damping ratio on rise-time.
3.6.3 Applications to Design
System design rules inferred from a study of second-order system performance are summarized in Table 3.1. The s-plane interpretation of second-order dynamics of Figure 3.8 is now viewed in Figure 3.13 as s-plane lines of constant system performance. Given the specifications for $T_s$, $M_p$, and $T_r$, design the system so that the poles are in the intersection of the three good regions illustrated in Figure 3.14.
Performance objective | Design approach
Fast response (decrease $T_r$) | Make $\omega_n$ large.
Small overshoot (decrease $M_p$) | Don't make $\zeta$ too small ($0 < \zeta < 0.8$).
Short settling-time (decrease $T_s$) | Make $\zeta\omega_n$ large.

Table 3.1: Meet time-domain specifications by adjusting s-plane characteristics.
Figure 3.13: S-plane loci for constant system performance. Constant $\zeta$ loci (overshoot): $M_p = e^{-\zeta\pi/\sqrt{1-\zeta^2}}$, approximately $\zeta \ge 0.6(1 - M_p)$. Constant $\omega_d$ loci (peak time): $T_p = \frac{\pi}{\omega_d}$. Constant $\zeta\omega_n$ loci (settling time): $T_s = \frac{4}{\zeta\omega_n}$. Constant $\omega_n$ loci (rise time): $\omega_n \ge \frac{\pi + 2\zeta}{2T_r\sqrt{1-\zeta^2}}$.
Figure 3.14: Performance specifications and the s-plane (overshoot constrains $\zeta$, settling time constrains $\zeta\omega_n$, rise time constrains $\omega_n$).
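A hedged sketch of how such specifications translate into s-plane constraints on the dominant poles is shown below; it simply inverts the $T_s$, $M_p$ and $T_r$ formulas of this chapter, and the spec values in the example call are made up.

```python
import math

def design_region(Ts_max, Mp_max, Tr_max):
    """Translate time-domain specs into s-plane constraints on the dominant poles.

    Returns the minimum zeta, minimum zeta*wn, and minimum wn implied by the
    settling-time, overshoot, and rise-time formulas of this chapter.
    """
    zeta_min = -math.log(Mp_max) / math.sqrt(math.pi**2 + math.log(Mp_max)**2)
    sigma_min = 4.0 / Ts_max                 # required zeta*wn (real-part magnitude)
    wn_min = (math.pi + 2.0 * zeta_min) / (2.0 * Tr_max * math.sqrt(1.0 - zeta_min**2))
    return zeta_min, sigma_min, wn_min

# Specs: Ts <= 2 s, Mp <= 10%, Tr <= 0.5 s
print(design_region(2.0, 0.10, 0.5))
```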
3.7 Higher-Order Systems
All previous analysis applies only to first and second-order systems. What happens when more poles and zeros are added?

3.7.1 An Extra Pole At The Origin

Let
$$G(s) = \frac{1}{(s+\sigma)^2 + \omega^2}
\qquad\text{and}\qquad
\tilde{G}(s) = \frac{1}{s\left[(s+\sigma)^2 + \omega^2\right]}$$
Recall that
$$\mathcal{L}\!\left\{\int g(t)\,dt\right\} = \frac{1}{s}\mathcal{L}\{g(t)\} = \frac{1}{s}G(s)$$
so the response of $\tilde{G}(s)$ is just the integral of the response of $G(s)$.

Figure 3.15: Step response for a three-pole system, one at the origin.
3.7.2 An Extra Non-zero, Stable Pole
Let
$$G(s) = \frac{1}{(s+\sigma)^2 + \omega^2}
\qquad\text{and}\qquad
\tilde{G}(s) = \frac{1}{(s+\alpha)\left[(s+\sigma)^2 + \omega^2\right]}$$
First, consider the step response of $G(s)$ as in the previous section.
$$Y(s) = G(s)\frac{1}{s} = \frac{1}{s\left[(s+\sigma)^2 + \omega^2\right]}
= \frac{K_1}{s} + \frac{K_2 s + K_3}{(s+\sigma)^2 + \omega^2}
= \frac{1}{\sigma^2 + \omega^2}\left[\frac{1}{s} - \frac{s + 2\sigma}{(s+\sigma)^2 + \omega^2}\right]$$
The response is the sum of a step and a second-order system impulse response.
Figure 3.16: Step response by partial fraction expansion.
Now consider the step response of $\tilde{G}(s)$.
$$Y(s) = \tilde{G}(s)\frac{1}{s} = \frac{1}{s(s+\alpha)\left[(s+\sigma)^2 + \omega^2\right]}
= \frac{K_1}{s} + \frac{K_2}{s+\alpha} + \frac{K_3 s + K_4}{(s+\sigma)^2 + \omega^2}$$
Figure 3.17: Step response for a three-pole stable system.
For large $\alpha$, the effect of the third pole on the transient response is negligible. How large does $\alpha$ have to be? The rule of thumb is that the third pole should be 5 times faster than the dominant poles. The text [5] shows that $K_2$ goes to zero as $\alpha$ goes to infinity so the contribution to $y(t)$ by $s + \alpha$ is small as $\alpha$ moves far from $\sigma$.

Figure 3.18: Fast poles have small effect on the transient response.
3.7.3 An Extra Zero At The Origin
Recall that
$$\mathcal{L}\!\left\{\frac{dg}{dt}\right\} = s\,\mathcal{L}\{g(t)\} = sG(s)$$
so the response of $sG(s)$ is just the derivative of the response of $G(s)$.
Example.

Two poles: $G(s) = \dfrac{\omega}{s^2 + \omega^2}$, with poles at $\pm i\omega$ and impulse response $g(t) = \sin\omega t$.

Two poles with a zero: $\tilde{G}(s) = sG(s) = \dfrac{\omega s}{s^2 + \omega^2}$, with impulse response $\tilde{g}(t) = \dfrac{d}{dt}\sin\omega t = \omega\cos\omega t$.

Example.

Two poles: $G(s) = \dfrac{1}{(s+a)^2}$, with a double pole at $-a$ and impulse response $g(t) = t e^{-at}$.

Two poles with a zero: $\tilde{G}(s) = \dfrac{s}{(s+a)^2}$, with impulse response $\tilde{g}(t) = \dfrac{d}{dt}\left(t e^{-at}\right) = (1 - at)e^{-at}$.
3.7.4 A Zero on the Real Axis
This situation is difficult to characterize in a general way. Consider four examples.

Example (Pole-Zero Cancellation). Consider a system with two real poles, one fast and the other slow. For example, let
$$G(s) = \frac{20}{(s+1)(s+20)}$$
The step response, as illustrated in Figure 3.19, is
$$Y(s) = \frac{20}{s(s+1)(s+20)} = \frac{K_1}{s} + \frac{K_2}{s+1} + \frac{K_3}{s+20}
\approx \frac{1}{s} - \frac{1.05}{s+1} + \frac{0.05}{s+20}$$
$$y(t) = 1 - 1.05e^{-t} + 0.05e^{-20t}$$

Figure 3.19: Step response with one fast and one slow pole.

So the step response is dominated by $e^{-t}$, the slow pole. Now, add a zero at $z = 1.1$
$$G(s) = \frac{20(s + 1.1)}{1.1(s+1)(s+20)}$$
The step response, as illustrated in Figure 3.20, is
$$Y(s) = \frac{20(s + 1.1)}{1.1\,s(s+1)(s+20)} = \frac{K_1}{s} + \frac{K_2}{s+1} + \frac{K_3}{s+20}
\approx \frac{1}{s} - \frac{0.095}{s+1} - \frac{0.905}{s+20}$$
$$y(t) = 1 - 0.095e^{-t} - 0.905e^{-20t}$$
So the step response is dominated by $e^{-20t}$, the fast pole.
Motivated by the previous example, the following design rule is stated.
Figure 3.20: Step response with one fast and one slow pole and one zero.
Rule. The residue of a pole tends to be small if there is a zero nearby. While zeros tend to cancel
poles, beware of pole-zero cancellation in the right half plane where even a small imperfection in the
cancellation produces an unstable system.
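The residue shift behind this rule can be checked numerically. The sketch below uses scipy.signal.residue (my choice of tool, not something the text uses) to reproduce the two partial fraction expansions of the pole-zero cancellation example.

```python
from scipy.signal import residue

# Step response without the zero: Y(s) = 20 / (s (s+1)(s+20))
den = [1.0, 21.0, 20.0, 0.0]             # s^3 + 21 s^2 + 20 s
r1, p1, _ = residue([20.0], den)
print(p1, r1)                            # residues ~ 1, -1.05, 0.05

# Step response with the zero: Y(s) = 20 (s+1.1) / (1.1 s (s+1)(s+20))
r2, p2, _ = residue([20.0 / 1.1, 22.0 / 1.1], den)
print(p2, r2)                            # residue of the slow pole shrinks to ~ -0.095
```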
Example (Minimum Phase Zero). Consider a second-order stable system with complex poles, for example, $G(s) = \dfrac{1}{(s+a)^2 + b^2}$, and put a zero at $z = \dfrac{1}{\tau}$ so that
$$\tilde{G}(s) = \frac{\tau s + 1}{(s+a)^2 + b^2} = \tau sG(s) + G(s)$$
The step response for $\tilde{G}(s)$ is
$$Y(s) = \tilde{G}(s)\frac{1}{s} = \tau G(s) + G(s)\frac{1}{s}
= \tau\,[\text{impulse response of } G(s)] + [\text{step response of } G(s)]$$
The zero increases the overshoot and decreases the rise-time. This is a similar effect to decreasing the damping or rotating the poles towards the imaginary axis. The general nature of the response is determined by the poles (e.g. exponential decay, oscillation) but the shape, that is, the contribution of each pole (the size of the residues), is influenced by the zeros.
Example (Non-minimum Phase Zero). As in the last example, let $G(s) = \dfrac{1}{(s+a)^2 + b^2}$. Now, add a non-minimum phase zero $z = -\dfrac{1}{\tau}$ so that
$$\tilde{G}(s) = \frac{-\tau s + 1}{(s+a)^2 + b^2} = -\tau sG(s) + G(s)$$

Figure 3.21: A minimum phase zero tends to decrease the apparent damping.

The step response for $\tilde{G}(s)$ is
$$Y(s) = \tilde{G}(s)\frac{1}{s} = -\tau G(s) + G(s)\frac{1}{s}
= -\tau\,[\text{impulse response of } G(s)] + [\text{step response of } G(s)]$$
Figure 3.22: A non-minimum phase zero causes command reversal.
The non-minimum phase zero causes the initial response to go in the direction opposite the command
to create a command reversal.
Example (Aircraft Normal-Acceleration Control). Aircraft normal acceleration, $A_n$, is generated by lift mostly in the wings and some in the tail surfaces. Command $A_n$ by rotating the horizontal tail surface.

1. Rotate the tail surface.
2. Generate negative lift and a pitch-up moment.
3. Rotate the nose up.
4. Increase angle-of-attack.
5. Increase lift.
6. Increase normal acceleration, $A_n$.

Note that before the aircraft nose pitches up and angle-of-attack increases, the total lift decreases as the rotated elevator pushes the tail down. The net result is that normal acceleration first goes negative and then positive. This behavior is characteristic of a system with a non-minimum phase zero. It's also an argument for using canards, control surfaces in front of the wing.
CHAPTER 4
Block Diagram Algebra
Block diagram manipulations provide an easy way to combine components of a complicated system to find a system response. Block diagram algebra is introduced as three fundamental configurations: cascade, parallel and feedback.
Cascade configuration: Let $r_1(t)$ drive a system $g_1(t)$ to produce an output $y(t)$.
$$y(t) = \int_0^t g_1(t - \tau)r_1(\tau)\,d\tau
\qquad\Longleftrightarrow\qquad
Y(s) = G_1(s)R_1(s)$$
The input $R_1(s)$ could be the output of some other system, for example, $R_1(s) = G_2(s)R(s)$. Then
$$Y(s) = G_1(s)R_1(s) = G_1(s)G_2(s)R(s)$$
so that the effective transfer function from $R(s)$ to $Y(s)$ is just the product,
$$\frac{Y(s)}{R(s)} = G_1(s)G_2(s) = G(s)$$
This situation is represented by the block diagram illustrated in Figure 4.1.
Figure 4.1: Systems connected in a cascade configuration.
Parallel configuration: Suppose $r(t)$ drives two systems $g_1(t)$ and $g_2(t)$
$$y_1(t) = \int_0^t g_1(t - \tau)r(\tau)\,d\tau, \qquad
y_2(t) = \int_0^t g_2(t - \tau)r(\tau)\,d\tau$$
and an output $y(t)$ is formed as the difference: $y(t) = y_1(t) - y_2(t)$. Then
$$y(t) = \int_0^t g_1(t - \tau)r(\tau)\,d\tau - \int_0^t g_2(t - \tau)r(\tau)\,d\tau
= \int_0^t \left[g_1(t - \tau) - g_2(t - \tau)\right]r(\tau)\,d\tau$$
The effective transfer function from $R(s)$ to $Y(s)$ is just the difference,
$$\frac{Y(s)}{R(s)} = G(s) = G_1(s) - G_2(s)$$
This situation is represented by the block diagram of Figure 4.2.
Figure 4.2: Systems connected in a parallel configuration.
Feedback configuration: Consider the configuration illustrated in Figure 4.3.

Figure 4.3: Systems connected in a feedback configuration.
Solve for the effective transfer function from $R(s)$ to $Y(s)$ as
$$Y(s) = G_1(s)E(s) \qquad\text{and}\qquad E(s) = R(s) - G_2(s)Y(s)$$
Then
$$Y(s) = G_1(s)\left[R(s) - G_2(s)Y(s)\right] = G_1(s)R(s) - G_1(s)G_2(s)Y(s)$$
$$\left(1 + G_1(s)G_2(s)\right)Y(s) = G_1(s)R(s)$$
$$Y(s) = \frac{G_1(s)}{1 + G_1(s)G_2(s)}R(s)
\qquad\Longrightarrow\qquad
G(s) = \frac{G_1(s)}{1 + G_1(s)G_2(s)}$$
More complicated block diagrams can be simplified by applying each of the above three cases: blocks in cascade, parallel and feedback configurations, to subsystems within the block as shown in the following example.
Example (Block Diagram Reduction).
Figure 4.4: A block diagram reduction combining cascade, parallel and feedback elements.
In the last block, the composite system $G'$ is given by
$$G' = \frac{G_1 G_2 (G_3 + G_4)}{1 + G_1 G_2 H_1}$$
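For a numerical check of a reduction like this, the three composition rules can be coded directly on (numerator, denominator) coefficient pairs. The sketch below is my own illustration using numpy polynomial arithmetic; the helper names are not from the text.

```python
import numpy as np

# Transfer functions as (numerator, denominator) coefficient lists, highest power first.
def series(g1, g2):
    """Cascade: G = G1*G2."""
    return np.polymul(g1[0], g2[0]), np.polymul(g1[1], g2[1])

def parallel(g1, g2, sign=+1):
    """Parallel: G = G1 + sign*G2."""
    num = np.polyadd(np.polymul(g1[0], g2[1]), sign * np.polymul(g2[0], g1[1]))
    return num, np.polymul(g1[1], g2[1])

def feedback(g1, g2):
    """Negative feedback: G = G1 / (1 + G1*G2)."""
    num = np.polymul(g1[0], g2[1])
    den = np.polyadd(np.polymul(g1[1], g2[1]), np.polymul(g1[0], g2[0]))
    return num, den

# Example: unity feedback around G1(s) = 1/(s+1) gives 1/(s+2)
print(feedback(([1.0], [1.0, 1.0]), ([1.0], [1.0])))
```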
CHAPTER 5
Stability
There are several ways to define system stability; generally, a system is said to be stable if the response is always appropriate given the stimulus. But response, stimulus and appropriate can each have several meanings. A response could be from an output or from some internal variable. It is quite possible for a system output to appear well-behaved while an internal variable grows without bound. A stimulus could be an initial condition or a signal that is applied for all time. Appropriate could mean that signals remain small or that signals go to zero when the stimulus is removed. A definition of stability that is often applied to linear time-invariant (LTI) systems is asymptotic internal stability. This means that the output and all internal variables never become unbounded and all go to zero as time goes to infinity when all signals other than initial condition are removed. Asymptotic internal stability is defined in terms of the system natural response.

Definition (Natural response). The natural response of a linear time-invariant system is the impulse response.
The system natural response is the time function corresponding to the system transfer function and gives
the response due to non-zero initial conditions.
Definition (Stable). A linear time-invariant system is stable if the natural response approaches zero as time goes to infinity.

Definition (Unstable). A linear time-invariant system is unstable if the natural response grows without bound as time goes to infinity.

Definition (Marginally stable). A linear time-invariant system is marginally stable if the natural response neither grows nor approaches zero as time goes to infinity.
Example. Consider a first-order system
$$\dot{x}(t) = ax(t) + br(t)
\qquad\Longrightarrow\qquad
x(t) = x_0 e^{at} + \int_0^t e^{a(t-\tau)}b\,r(\tau)\,d\tau$$
Let $r(t) = 0$ so that the response is just the natural response
$$x(t) = x_0 e^{at}$$
The system is stable if $a < 0$ since for any $x_0$,
$$x(t) = x_0 e^{at} \to 0 \quad\text{as } t\to\infty$$
The system is unstable if $a > 0$ since for any $x_0 \ne 0$,
$$x(t) = x_0 e^{at} \to \infty \quad\text{as } t\to\infty$$
Here is an example of a marginally stable system.
$$\ddot{x}(t) + \omega^2 x(t) = br(t)$$
The natural response has the form
$$x_n(t) = A\sin\omega t + B\cos\omega t$$
5.1 S-Plane Interpretation of Stability
The poles and zeros of a transfer function determine a system's natural response so a time history may be identified with pole locations in the s-plane. For example, the terms of a partial fraction expansion identify the classes of signals contained in the impulse response. For a first-order pole,
$$G(s) = \frac{1}{s + a} \qquad (5.1)$$
the impulse response is an exponential function
$$g(t) = e^{-at}$$
If $a > 0$, the pole of $G(s)$ is at $s = -a < 0$, the impulse response $g(t)$ goes to zero as time increases and the system is stable. If $a < 0$, the pole of $G(s)$ is at $s = -a > 0$, the impulse response $g(t)$ grows with time and the system is unstable. Table 5.1 and Figure 5.1 summarize the regions of the s-plane identified with stable, unstable and marginally stable systems.
Pole location | Response | Stability
Left half plane | Exponential decay or damped sinusoid | Stable
Right half plane | Exponential increase or sinusoid with exponentially increasing magnitude | Unstable
Imaginary axis with multiplicity > 1 | Response of the form $t\sin(\omega t + \phi)$ | Unstable
Imaginary axis with multiplicity 1 | Sinusoidal response | Marginally stable

Table 5.1: S-plane interpretation of stability.
5.2 Stability And The Hurwitz Determinants
The Routh-Hurwitz stability test is most useful for testing the stability of large-order systems but it is a
complicated test and will not be covered in this class. A simpler test using the Hurwitz determinants is
easy to apply to small systems and is explained below. Let
$$G(s) = \frac{N(s)}{D(s)} = \frac{s^m + b_{m-1}s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0}$$
Figure 5.1: S-plane interpretation of stability.
The transfer function G(s) is stable if and only if all the roots of the characteristic equation D(s) = 0
have negative real parts.
You could just solve for the roots of $D(s)$ and check for stability directly, but this does not work well when some of the coefficients $a_k$ are a function of some parameter, for example, a controller gain. In this case a more complicated check is needed.
5.2.1 Positive Coefficients Are Necessary for Stability

It is easy to show that if $G(s)$ is stable, the coefficients $a_{n-1}, \ldots, a_1, a_0$ are all positive. You can convince yourself of this by factoring $D(s)$ as
$$D(s) = s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0 \qquad (5.2)$$
$$= \prod_k (s + \alpha_k)\,\prod_j \left[(s + \sigma_j + i\omega_j)(s + \sigma_j - i\omega_j)\right]
= \prod_k (s + \alpha_k)\,\prod_j \left[s^2 + 2\sigma_j s + (\sigma_j^2 + \omega_j^2)\right] \qquad (5.3)$$
The $-\alpha_k$ are the real roots of $D(s)$ and the $-(\sigma_j \pm i\omega_j)$ are the complex roots of $D(s)$. If all the roots have negative real parts, then the $\alpha_k > 0$ and the $\sigma_j > 0$. Then, since all the coefficients in the product (5.3) are positive, all the coefficients $a_{n-1}, \ldots, a_1, a_0$ in (5.2) must also be positive.

Therefore, if any $a_k \le 0$ where $0 \le k \le n-1$, then $G(s)$ is not stable. However, even if all the $a_{n-1}, \ldots, a_1, a_0$ are positive, it is still possible that $G(s)$ is not stable.
Example. The characteristic equation
$$D(s) = s^3 + s^2 + 11s + 51 = (s + 3)(s^2 - 2s + 17)$$
has positive coefficients but has two unstable roots
$$s_{2,3} = 1 \pm 4i$$
Therefore, even when all coefficients are positive, another check is needed.
5.2.2 Positive Hurwitz Determinants Are Both Necessary And Sufficient for Stability

The Hurwitz test is a method for checking $G(s)$ for stability.

1. Let $G(s) = \dfrac{N(s)}{D(s)}$ where $D(s) = a_0 s^n + a_1 s^{n-1} + a_2 s^{n-2} + \cdots + a_{n-1}s + a_n$. Note that the notation for the coefficients is changed.

2. Form the Hurwitz determinants where any undefined coefficients are replaced by 0.
$$D_1 = a_1, \qquad
D_2 = \det\begin{bmatrix} a_1 & a_3 \\ a_0 & a_2 \end{bmatrix}, \qquad
D_3 = \det\begin{bmatrix} a_1 & a_3 & a_5 \\ a_0 & a_2 & a_4 \\ 0 & a_1 & a_3 \end{bmatrix}$$
and
$$D_n = \det\begin{bmatrix}
a_1 & a_3 & a_5 & a_7 & \cdots & 0 \\
a_0 & a_2 & a_4 & a_6 & \cdots & 0 \\
0 & a_1 & a_3 & a_5 & \cdots & 0 \\
0 & a_0 & a_2 & a_4 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & & & a_n
\end{bmatrix}$$

3. The roots of $D(s)$ all have negative real parts and $G(s)$ is stable if and only if the Hurwitz determinants $D_1, D_2, \ldots, D_n$ are positive.
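A direct implementation of this recipe is straightforward. The sketch below (my own; not from the text) builds the Hurwitz matrix and returns its leading principal minors, using the polynomial from the example below as a check.

```python
import numpy as np

def hurwitz_determinants(coeffs):
    """Leading principal minors of the Hurwitz matrix.

    coeffs = [a0, a1, ..., an] for D(s) = a0*s^n + a1*s^(n-1) + ... + an.
    Stability (with a0 > 0) requires every returned determinant to be positive.
    """
    a = list(coeffs)
    n = len(a) - 1                                   # polynomial order
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * (j + 1) - (i + 1)                # entry H[i, j] = a_k
            if 0 <= k <= n:
                H[i, j] = a[k]
    return [np.linalg.det(H[:m, :m]) for m in range(1, n + 1)]

# D(s) = s^3 + s^2 + 11 s + 51 from the example: D2 and D3 come out negative
print(hurwitz_determinants([1.0, 1.0, 11.0, 51.0]))
```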
Example. Consider the characteristic equation D(s) given in the last example
$$D(s) = s^3 + s^2 + 11s + 51 = a_0 s^3 + a_1 s^2 + a_2 s + a_3$$
The Hurwitz determinants are
$$D_1 = a_1 = 1$$
$$D_2 = \det\begin{bmatrix} a_1 & a_3 \\ a_0 & a_2 \end{bmatrix}
= \det\begin{bmatrix} 1 & 51 \\ 1 & 11 \end{bmatrix} = -40 < 0$$
$$D_3 = \det\begin{bmatrix} a_1 & a_3 & 0 \\ a_0 & a_2 & 0 \\ 0 & a_1 & a_3 \end{bmatrix}
= \det\begin{bmatrix} 1 & 51 & 0 \\ 1 & 11 & 0 \\ 0 & 1 & 51 \end{bmatrix}
= 1\,(11\cdot 51) - 1\,(51\cdot 51) = -40\cdot 51 < 0$$
The two determinants, $D_2$ and $D_3$, are negative so $D(s)$ has unstable roots.
As you might suspect, the Hurwitz determinants are not independent of one another. For example, for a fourth-order system, you don't need to compute $D_4$ to know its sign. This is because
$$D_4 = \det\begin{bmatrix}
a_1 & a_3 & 0 & 0 \\
a_0 & a_2 & a_4 & 0 \\
0 & a_1 & a_3 & 0 \\
0 & a_0 & a_2 & a_4
\end{bmatrix}
= a_4\det\begin{bmatrix}
a_1 & a_3 & 0 \\
a_0 & a_2 & a_4 \\
0 & a_1 & a_3
\end{bmatrix}
= a_4 D_3$$
If you know $a_4 > 0$, and it has to be for $G(s)$ to be stable, and you know $D_3 > 0$, then $D_4 > 0$.
It turns out that because of this dependence, you only need to check about half of the Hurwitz determinants. Either check all the odd numbered determinants or check all the even numbered determinants as suggested by the following Liénard-Chipart criterion.

Theorem 5.1 (Liénard-Chipart criterion). A polynomial with real positive coefficients has roots with negative real parts if and only if either all the even Hurwitz determinants are positive or all the odd Hurwitz determinants are positive.
CHAPTER 6
Feedback Control
Feedback control allows a system dynamic response to be modified without changing any system components. Feedback control is best introduced by an example so the first section develops a model of a simple DC motor.

6.1 Motivation: A DC Motor Model
A simple permanent magnet DC motor configuration is illustrated in Figure 6.1. A permanent magnet produces a constant and uniform magnetic field in which a conducting loop is free to rotate. Development of a model involves applying the Lorentz force law and Faraday's induction law to describe the electro-mechanical coupling, Newton's second law to describe the mechanical model and Kirchhoff's voltage law to describe the electrical model.

The Lorentz force law shows that current passing through the loop interacts with the magnetic field to produce a force on the horizontal part of the loop. If $\beta$ is the magnetic field strength and $l$ is the length of the loop, the force, which acts along a line perpendicular to both the direction of the current and the magnetic field, is
$$F(t) = \beta l\, i(t)$$
Figure 6.1: A simple model of a DC motor (physical system, electrical model, and mechanical model).
Given that $r$ and $\theta(t)$ are the radius and angular position of the loop, the generated torque is
$$T(t) = 2\beta r\cos\theta(t)\, l\, i(t)$$
The torque produced by $N$ loops is given by a summation as $T(t) = 2\beta r l\, i(t)\sum_{k=1}^{N}\cos\theta_k(t)$. If $N$ is large, the torque is simply proportional to the current
$$T(t) = K_t i(t) \qquad (6.1)$$
$$K_t = \beta r l N \qquad \text{Torque constant}$$
Faraday's induction law shows that as the loop, a conductor, moves through the magnetic field, an electric potential is generated across the two ends of the loop. This is the back emf and is given by the time rate of change in the magnetic flux $\phi(t)$ through the loop
$$\epsilon(t) = \frac{d\phi(t)}{dt} = \frac{d}{dt}\left(2\beta r l\sin\theta(t)\right) = 2\beta r l\cos\theta(t)\,\omega(t)$$
where $\omega(t) = \dot{\theta}(t)$. The back emf produced by $N$ loops is given by a summation as
$$\epsilon(t) = 2\beta r l\,\omega(t)\sum_{k=1}^{N}\cos\theta_k(t)$$
If $N$ is large, the back emf is proportional to the rotational speed of the loop
$$\epsilon(t) = K_e\omega(t) \qquad (6.2)$$
$$K_e = \beta r l N \qquad \text{Back emf constant}$$
The mechanical model is described by Newton's second law which relates the net torque to the inertia of the loop and load. If $T(t)$ is the torque generated by the motor, $\omega(t)$ is the rotational speed, $J$ is the motor and load inertia and $D$ is a friction coefficient then,
$$T(t) = J\frac{d\omega(t)}{dt} + D\omega(t) \qquad (6.3)$$
The electrical model is described by Kirchhoff's voltage law which states that the sum of the voltages around a closed circuit is zero, $\sum V = 0$. If $V(t)$ is the voltage across the two ends of the motor loop, $i(t)$ is the current passing through the loop, $\epsilon(t)$ is a back emf and $L$ and $R$ are the inductance and resistance of the loop then,
$$V(t) = L\frac{di(t)}{dt} + \epsilon(t) + Ri(t) \qquad (6.4)$$
An examination of equations (6.1) through (6.4) reveals that there are four equations in five variables: $T(t)$, $i(t)$, $\omega(t)$, $\epsilon(t)$ and $V(t)$. With one mathematical degree of freedom, these equations are solved as a system with one input and one output. The input is an applied voltage $V(t)$ and the output is the motor speed $\omega(t)$. The Laplace transforms of (6.1) through (6.4) are
$$T(s) = K_t i(s), \qquad \epsilon(s) = K_e\omega(s), \qquad T(s) = Js\,\omega(s) + D\,\omega(s), \qquad V(s) = Ls\,i(s) + \epsilon(s) + Ri(s)$$
A transfer function from the voltage to the speed is found first by solving for $V(s)$ as
$$V(s) = (Ls + R)i(s) + K_e\omega(s)
= (Ls + R)\frac{1}{K_t}T(s) + K_e\omega(s)
= (Ls + R)\frac{1}{K_t}(Js + D)\omega(s) + K_e\omega(s)
= \frac{1}{K_t}\left[(Ls + R)(Js + D) + K_t K_e\right]\omega(s)$$
Now form the transfer function and arrange the terms so that the DC gain, natural frequency and damping ratio are easily identified.
$$G(s) = \frac{\omega(s)}{V(s)} = \frac{K_t}{(Ls + R)(Js + D) + K_t K_e}
= \frac{K_t}{(LJ)s^2 + (LD + RJ)s + (RD + K_t K_e)}
= \frac{K_t}{RD + K_t K_e}\cdot\frac{\frac{RD + K_t K_e}{LJ}}{s^2 + \frac{LD + RJ}{LJ}s + \frac{RD + K_t K_e}{LJ}}$$
If the friction is small, $D \approx 0$, then
$$G(s) = \frac{1}{K_e}\cdot\frac{\frac{K_t K_e}{LJ}}{s^2 + \frac{R}{L}s + \frac{K_t K_e}{LJ}}
= K\frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$
where the DC gain $K$, natural frequency $\omega_n$ and damping ratio $\zeta$ are
$$K = \frac{1}{K_e}, \qquad
\omega_n^2 = \frac{K_e K_t}{LJ}, \qquad
\zeta = \frac{1}{2\omega_n}\frac{R}{L} = \frac{1}{2}\sqrt{\frac{LJ}{K_e K_t}}\,\frac{R}{L} = \frac{R}{2}\sqrt{\frac{J}{K_e K_t L}}$$
If the inductance is small, $L \approx 0$, which is usually reasonable, the transfer function becomes first-order
$$G(s) = \frac{\frac{1}{K_e}}{\frac{RJ}{K_e K_t}s + 1} = K\frac{1}{\tau_m s + 1}$$
where $\tau_m$ is the mechanical time constant and $K$ is the DC gain
$$K = \frac{1}{K_e}, \qquad \tau_m = \frac{RJ}{K_e K_t}$$
The motor speed responds as a first-order system to a step change in applied voltage.
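As a numeric companion (my own sketch; the motor constants in the example call are made up, not from the text), the second-order parameters and the mechanical time constant can be computed directly from the motor data.

```python
import math

def dc_motor_params(R, L, J, Ke, Kt, D=0.0):
    """Second-order parameters of the voltage-to-speed transfer function,
    plus the first-order mechanical time constant used when L is negligible."""
    K = Kt / (R * D + Kt * Ke)                    # DC gain (equals 1/Ke when D = 0)
    wn = math.sqrt((R * D + Kt * Ke) / (L * J))
    zeta = (L * D + R * J) / (2.0 * L * J * wn)
    tau_m = R * J / (Ke * Kt)                     # mechanical time constant (L ~ 0)
    return K, wn, zeta, tau_m

# Illustrative (made-up) motor constants
print(dc_motor_params(R=1.0, L=0.001, J=0.01, Ke=0.05, Kt=0.05))
```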
Figure 6.2: The motor speed responds exponentially to applied voltage.
With $\omega(s) = G(s)V(s)$ and $V(s) = \frac{1}{s}$, partial fraction expansion is used to solve for $\omega(t)$ as
$$\omega(s) = \frac{1}{K_e}\cdot\frac{1/\tau_m}{s\left(s + \frac{1}{\tau_m}\right)}
= \frac{1}{K_e}\left[\frac{1}{s} - \frac{1}{s + \frac{1}{\tau_m}}\right]
\qquad\Longrightarrow\qquad
\omega(t) = \frac{1}{K_e}\left(1 - e^{-t/\tau_m}\right)$$
While a smaller time constant yields a faster response, for a DC motor, all the parameters that determine the time constant are fixed.
$$\tau_m = \frac{RJ}{K_e K_t}$$
where
$R$: coil resistance.
$J$: rotor and load inertia.
$K_e$: back emf constant.
$K_t$: torque constant.
It would seem that the only way to improve the time constant would be to change the motor design or
to reduce the load. But there is another way. Note what happens when a step input is doubled in size as
in Figure 6.3.
Figure 6.3: Response with a scaled input.

The system has the same sluggish response as $\omega(t)$, slowly rising to $\omega(\infty) = \frac{2}{K_e}$. But note how $\omega(t)$ reaches $\frac{1}{K_e}$, the original objective, much faster. To speed up the response, try the following. Initially make the input larger than a unit step. Then, as the output $\omega(t)$ grows and approaches the desired output, decrease the input.
Instead of driving $G(s)$ with a unit step, $V(t) = u(t)$, form an error, $e(t) \triangleq r(t) - y(t)$, and drive $G(s)$ with the error multiplied by a large number (a gain). Figure 6.4 is a block diagram of the proposed arrangement where $R = \omega_c$ is the commanded rotor speed, $e = \omega_c - \omega$ is the error in the commanded rotor speed, $u = V$ is the motor input voltage and $Y = \omega$ is the rotor speed.

Figure 6.4: A feedback control configuration.
The effective transfer function from $R(s)$ to $Y(s)$ is the closed-loop transfer function $G_{CL}$ and is found algebraically from $Y = GKe$ and $e = R - Y$
$$G_{CL}(s) = \frac{Y(s)}{R(s)} = \frac{G(s)K(s)}{1 + G(s)K(s)}$$
For the DC motor, the closed-loop transfer function is
$$G_{CL} = \frac{K\left(\frac{1}{K_e}\frac{1}{\tau_m s + 1}\right)}{1 + K\left(\frac{1}{K_e}\frac{1}{\tau_m s + 1}\right)}
= \frac{\frac{K}{K_e}}{1 + \frac{K}{K_e}}\cdot\frac{1}{\frac{\tau_m}{1 + \frac{K}{K_e}}s + 1}$$
The closed-loop system is also a first-order system but with a faster time constant
$$\frac{\tau_m}{1 + \frac{K}{K_e}} < \tau_m$$
6.2 Objectives of Feedback Control Systems
Generally, the objectives of a feedback controller are to improve system performance in the sense of
1. Speed as measured by the rise-time
2. Accuracy as measured by the settling-time and steady-state error
3. Stability or overshoot
4. Robustness as measured by sensitivity to disturbances, sensor errors and modelling errors.
6.3 Feedback Control and Transient Response
Often, the feedback controller is just a simple gain, an amplifier. This configuration is called proportional control or P-control. This simple controller can be used to improve the response speed but, in second-order systems, it also increases the overshoot.

6.3.1 P-Control in First-Order Systems

Consider the first-order, open-loop system
$$G(s) = \frac{1}{\tau s + 1}$$
The closed-loop system, illustrated in Figure 6.5, has the following transfer function
Figure 6.5: A closed-loop configuration.

$$G_{CL}(s) = \frac{G(s)K}{1 + G(s)K}
= \frac{\frac{K}{\tau s + 1}}{1 + \frac{K}{\tau s + 1}}
= \frac{K}{K + 1}\cdot\frac{1}{\left(\frac{\tau}{K + 1}\right)s + 1}
= \bar{K}\frac{1}{\bar{\tau}s + 1}$$
where the closed-loop DC gain $\bar{K}$ and time constant $\bar{\tau}$ are
$$\bar{K} = \frac{K}{K + 1}, \qquad \bar{\tau} = \frac{\tau}{K + 1}$$
With a positive controller gain $K > 0$, the closed-loop time constant is faster than that of the open-loop system, $\bar{\tau} < \tau$. Table 6.1 summarizes the effect of increasing the feedback controller gain for a first-order system.
Performance measure | Relation to system characteristic | Effect of increased K
Rise-time | $T_r = 2.2\bar{\tau}$ | Faster ($T_r$ decreases)
Settling-time | $T_s = 4\bar{\tau}$ | Faster ($T_s$ decreases)

Table 6.1: Effect of increasing controller gain on first-order system transient performance.
Table 6.1: Effect of increasing controller gain on rst-order system transient performance.
6.3.2 P-Control in Second-Order Systems
While the simplied DC motor transfer function from applied voltage to rotor speed is a rst-order
system, the transfer function to the rotor position is second-order. Here is an example of a DC motor
rotor position control system.
Example. The feedback control problem is to apply voltage to the DC motor to cause the rotor to seek a commanded position. Here is a block diagram.

Figure 6.6: Block diagram of a DC motor position command system.

Since the rotor position is given by integrating the rotor speed, the open-loop transfer function from the applied voltage to the rotor position is second-order.
$$G_{OL}(s) = \frac{\theta(s)}{v(s)} = K\frac{1}{s(\tau_m s + 1)}$$
The open-loop poles are $p_1 = 0$, $p_2 = -\frac{1}{\tau_m}$ which produce the impulse and step responses illustrated in Figure 6.7. Note that the open-loop response is not oscillatory.

Figure 6.7: Open-loop poles and transient response of a DC motor.
The closed-loop transfer function from the commanded to the measured rotor position is found as
$$G_{CL}(s) = \frac{\theta(s)}{\theta_c(s)} = \frac{G_{OL}}{1 + G_{OL}}
= \frac{K}{s(\tau_m s + 1) + K}
= \frac{\frac{K}{\tau_m}}{s^2 + \frac{1}{\tau_m}s + \frac{K}{\tau_m}}$$
The closed-loop poles are
$$p_{1,2} = \frac{1}{2\tau_m}\left(-1 \pm \sqrt{1 - 4K\tau_m}\right)$$
The system is still second-order but, depending on $K$, the response might oscillate. The poles are complex when
$$1 - 4K\tau_m < 0 \qquad\text{or}\qquad K > \frac{1}{4\tau_m}$$
Consider the second-order, open-loop system
$$G(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$
The closed-loop system has the following transfer function
$$G_{CL}(s) = \frac{GK}{1 + GK}
= \frac{K\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2 + K\omega_n^2}
= \frac{K}{K + 1}\cdot\frac{(K + 1)\omega_n^2}{s^2 + 2\zeta\omega_n s + (K + 1)\omega_n^2}
= \bar{K}\frac{\bar{\omega}_n^2}{s^2 + 2\bar{\zeta}\bar{\omega}_n s + \bar{\omega}_n^2}$$
The closed-loop DC gain $\bar{K}$, natural frequency $\bar{\omega}_n$ and damping ratio $\bar{\zeta}$ are
$$\bar{K} = \frac{K}{K + 1}, \qquad
\bar{\omega}_n^2 = (K + 1)\omega_n^2, \qquad
\bar{\zeta} = \frac{2\zeta\omega_n}{2\bar{\omega}_n} = \frac{\zeta}{\sqrt{K + 1}}$$
With a positive controller gain $K > 0$, the closed-loop natural frequency is faster than that of the open-loop system, $\bar{\omega}_n > \omega_n$, the closed-loop damping is lower, $\bar{\zeta} < \zeta$, and the time constant is the same for the open-loop and closed-loop systems, $\bar{\zeta}\bar{\omega}_n = \zeta\omega_n$. Table 6.2 summarizes the effect of increasing the feedback controller gain for a second-order system.
Performance measure | Relation to system characteristic | Effect of increased K
Rise-time | $T_r \approx \dfrac{\pi}{2\bar{\omega}_n}$ | Faster ($T_r$ decreases)
Overshoot | $M_p \approx 1 - \dfrac{\bar{\zeta}}{0.6}$ | More ($M_p$ increases)
Settling-time | $T_s = \dfrac{4}{\bar{\zeta}\bar{\omega}_n}$ | Same ($T_s$ constant)

Table 6.2: Effect of increasing controller gain on second-order system transient performance.
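These closed-loop relations are easy to tabulate for a range of gains. The sketch below is illustrative only; the open-loop $\zeta$ and $\omega_n$ values are made up.

```python
import math

def p_control_closed_loop(K, zeta, wn):
    """Closed-loop DC gain, natural frequency, and damping with P-control gain K
    around G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)."""
    K_cl = K / (K + 1.0)
    wn_cl = wn * math.sqrt(K + 1.0)
    zeta_cl = zeta / math.sqrt(K + 1.0)
    return K_cl, wn_cl, zeta_cl

# zeta*wn (and hence the settling-time) stays fixed as K grows
for K in [1.0, 5.0, 20.0]:
    K_cl, wn_cl, zeta_cl = p_control_closed_loop(K, 0.7, 2.0)
    print(K, round(K_cl, 3), round(wn_cl, 3), round(zeta_cl, 3), round(zeta_cl * wn_cl, 3))
```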
6.4 Feedback Control and Steady-State Response
While P-control often works well to improve the transient response, in some applications it can induce a large steady-state error. To see this, consider again the first-order open-loop system.
$$G(s) = \frac{1}{\tau s + 1}$$
$$Y(s) = \frac{G(s)K(s)}{1 + G(s)K(s)}R(s) = \bar{K}\frac{1}{\bar{\tau}s + 1}R(s)$$
where the closed-loop DC gain and time constant are
$$\bar{K} = \frac{K}{K + 1}, \qquad \bar{\tau} = \frac{\tau}{K + 1}$$
Now apply a step command, $R(s) = \frac{1}{s}$. This means the output $y(t)$ should go to $y(t) \to 1$ for large enough $t$.
$$Y(s) = \bar{K}\frac{1}{\bar{\tau}s + 1}\cdot\frac{1}{s}
\qquad\Longrightarrow\qquad
y(t) = \bar{K}\left(1 - e^{-t/\bar{\tau}}\right)
\qquad\Longrightarrow\qquad
y(\infty) = \bar{K} = \frac{K}{K + 1} < 1$$
The conclusion is that with P-control, the system responds faster but never reaches the commanded output. An explanation is as follows. In a closed-loop configuration, the system $G(s)$ is driven by $u = Ke$. If the error ever went to zero, that is, $y = r$, then $u = 0$, $y = 0$, and $e = r \ne 0$. So, the error, $e$, has to be something non-zero which, when multiplied by $K$, gives a control $u = Ke$ large enough to keep $y$ close to $r$.
The next three sections discuss proposals for fixing the P-control steady-state error problem. The first two might seem reasonable at first but a more careful look shows their flaws. Integral control is the preferred method for achieving zero steady-state error.
6.4.1 High Control Gain K
When the control gain $K$ is large
$$\bar{K} = \frac{K}{K + 1} \approx 1$$
The steady-state error goes to zero since $R(s) = \frac{1}{s} \Rightarrow y(\infty) = \bar{K} \approx 1$. Also, by Table 6.2, the system response is very fast since a high gain implies a fast natural frequency $\bar{\omega}_n$ and a fast rise-time $T_r$. But, look at the system input
$$u = Ke = K(r - y)$$
At the initial time $t = 0$, the error $e(0) = 1$ since $r(0) = 1$ and $y(0) = 0$. The system $G(s)$, at the initial time, is driven by a step of magnitude $K$. In any practical system, large inputs are not reasonable.

Remark. Large $K$ is not feasible.
6.4.2 Command Shaping

If the steady-state output is off by $\frac{K}{K+1}$, try scaling the command by $\frac{K+1}{K}$.
$$Y(s) = \bar{K}\frac{1}{\bar{\tau}s + 1}\cdot\frac{K + 1}{K}R(s) = \frac{1}{\bar{\tau}s + 1}R(s)$$
Again, steady-state error goes to zero since
$$R(s) = \frac{1}{s} \qquad\Longrightarrow\qquad y(\infty) = 1$$
But, this works only if the open-loop gain $K$ is well known. Suppose $K$ is off by $\delta$ so that the open-loop gain is $K' = K + \delta$. Then the closed-loop DC gain is
$$\bar{K}' = \frac{K + \delta}{K + 1 + \delta}\cdot\frac{K + 1}{K} \approx 1 + \frac{\delta}{K(K + 1)} \ne 1$$
Remark. Steady-state error is zero only if $K$ is known exactly.
6.4.3 Integral Control
For the output $y(t)$ to be non-zero in steady-state, the system $G(s)$ has to be driven by a control $u(t)$ that is non-zero. Since it is desired that the command tracking error go to zero, instead of setting the control proportional to the error, $u(t) = Ke(t)$, try setting the control proportional to the integral of the error as in
$$u(t) = K\int_0^t e(\tau)\,d\tau$$
In this way, the control may approach a non-zero constant as the tracking error goes to zero. An integral control feedback configuration is illustrated in Figure 6.8.

Figure 6.8: An integral control feedback configuration.

The following two examples examine the effect of integral control on the transient and steady-state response of first-order and second-order systems when a step input is applied.
Example. Consider the first-order open-loop system
$$Y(s) = G(s)R(s) = \frac{1}{\tau s + 1}R(s)$$
The closed-loop system is given by
$$Y(s) = \frac{G(s)K(s)}{1 + G(s)K(s)}R(s)
= \frac{\frac{1}{\tau s + 1}\frac{K}{s}}{1 + \frac{1}{\tau s + 1}\frac{K}{s}}R(s)
= \frac{\frac{K}{\tau}}{s^2 + \frac{1}{\tau}s + \frac{K}{\tau}}R(s)$$
Note that $2\zeta\omega_n = \frac{1}{\tau}$ does not depend on $K$, so the settling-time $T_s = \frac{4}{\zeta\omega_n}$ does not depend on $K$.

To analyse the transient response, put the closed-loop system in the second-order system form
$$G_{CL}(s) = \frac{Y(s)}{R(s)} = \frac{\frac{K}{\tau}}{s^2 + \frac{1}{\tau}s + \frac{K}{\tau}} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$
where the closed-loop natural frequency $\omega_n$ and damping ratio $\zeta$ are $\omega_n = \sqrt{\frac{K}{\tau}}$ and $\zeta = \frac{1}{2\sqrt{K\tau}}$. Then, depending on the size of the feedback gain $K$, the system is either underdamped or overdamped.
$$K > \frac{1}{4\tau} \;\Rightarrow\; \zeta < 1 \quad\text{Underdamped}
\qquad\qquad
K < \frac{1}{4\tau} \;\Rightarrow\; \zeta > 1 \quad\text{Overdamped}$$
The system response might oscillate if the gain is high but the system is stable for all $K$.

To analyse the steady-state response, apply a step command, $R(s) = \frac{1}{s}$. Then the output is given by
$$Y(s) = \frac{\frac{K}{\tau}}{s\left(s^2 + \frac{1}{\tau}s + \frac{K}{\tau}\right)}$$
The final value theorem applies because the system is stable for all $K$ as long as the open-loop system is stable. The theorem shows that there is no steady-state error
$$y(\infty) = \lim_{s\to 0} sY(s) = \lim_{s\to 0}\frac{\frac{K}{\tau}}{s^2 + \frac{1}{\tau}s + \frac{K}{\tau}} = 1$$
The conclusion is that for first-order systems, integral control provides zero steady-state error with respect to a step command but also introduces oscillation for large control gain $K$.
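A small sketch (mine, with an arbitrary open-loop time constant) showing how the closed-loop $\omega_n$ and $\zeta$ trade off as the integral gain grows:

```python
import math

def integral_control_first_order(K, tau):
    """Closed-loop (wn, zeta) when K/s wraps G(s) = 1/(tau*s + 1)."""
    wn = math.sqrt(K / tau)
    zeta = 1.0 / (2.0 * math.sqrt(K * tau))
    return wn, zeta

# Raising K speeds the response but lowers the damping; zeta*wn = 1/(2*tau) is fixed.
for K in [0.1, 0.25, 1.0, 4.0]:
    wn, zeta = integral_control_first_order(K, tau=1.0)
    print(K, round(wn, 3), round(zeta, 3))
```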
Example. Consider the second-order open-loop system
$$Y(s) = G(s)R(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}R(s)$$
The closed-loop system is given by
$$Y(s) = \frac{G(s)K(s)}{1 + G(s)K(s)}R(s)
= \frac{\frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}\frac{K}{s}}{1 + \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}\frac{K}{s}}R(s)
= \frac{\omega_n^2 K}{s^3 + 2\zeta\omega_n s^2 + \omega_n^2 s + \omega_n^2 K}R(s)$$
Note that this is a third-order system and has three real poles or one real and two complex poles.

To check the stability of the closed-loop system, form the Hurwitz determinants. The characteristic polynomial is
$$s^3 + 2\zeta\omega_n s^2 + \omega_n^2 s + \omega_n^2 K \;\equiv\; a_0 s^3 + a_1 s^2 + a_2 s + a_3$$
and the three Hurwitz determinants are
$$D_1 = a_1 = 2\zeta\omega_n$$
$$D_2 = \det\begin{bmatrix} a_1 & a_3 \\ a_0 & a_2 \end{bmatrix}
= \det\begin{bmatrix} 2\zeta\omega_n & \omega_n^2 K \\ 1 & \omega_n^2 \end{bmatrix}
= 2\zeta\omega_n^3 - K\omega_n^2 = \omega_n^2\left(2\zeta\omega_n - K\right)$$
$$D_3 = \det\begin{bmatrix} a_1 & a_3 & 0 \\ a_0 & a_2 & 0 \\ 0 & a_1 & a_3 \end{bmatrix}
= \det\begin{bmatrix} 2\zeta\omega_n & \omega_n^2 K & 0 \\ 1 & \omega_n^2 & 0 \\ 0 & 2\zeta\omega_n & \omega_n^2 K \end{bmatrix}
= K\omega_n^2\det\begin{bmatrix} 2\zeta\omega_n & \omega_n^2 K \\ 1 & \omega_n^2 \end{bmatrix}
= K\omega_n^4\left(2\zeta\omega_n - K\right)$$
The conditions for the determinants to be positive are
$$D_1 = 2\zeta\omega_n > 0 \quad\text{always}$$
$$D_2 = \omega_n^2\left(2\zeta\omega_n - K\right) > 0 \quad\Longrightarrow\quad K < 2\zeta\omega_n$$
$$D_3 = K\omega_n^4\left(2\zeta\omega_n - K\right) > 0 \quad\Longrightarrow\quad 0 < K < 2\zeta\omega_n$$
To analyze the steady-state response, apply a step command, $R(s) = \frac{1}{s}$. The final value theorem shows that there is no steady-state error, assuming that the theorem holds, that is, that the system is stable.
$$y(\infty) = \lim_{s\to 0} sY(s) = \lim_{s\to 0}\frac{\omega_n^2 K}{s^3 + 2\zeta\omega_n s^2 + \omega_n^2 s + \omega_n^2 K} = 1$$
The conclusion is that for second-order systems, integral control provides zero steady-state error with respect to a step command but only for a sufficiently small control gain $K$. Higher gains produce an unstable system.
6.4.4 Steady-State Response to a Ramp Input
Applying a ramp input shows how, generally, integral control improves the steady-state error but degrades stability. Consider the second-order open-loop system
$$Y(s) = G(s)R(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}R(s)$$
From the closed-loop system illustrated in Figure 6.9, the closed-loop transfer function from the ramp input to the command tracking error is
$$e(s) = R(s) - Y(s) = R(s) - \frac{G(s)K(s)}{1 + G(s)K(s)}R(s) = \frac{1}{1 + G(s)K(s)}\cdot\frac{1}{s^2}$$

Figure 6.9: A unity feedback configuration.
A Ramp Input Applied to a P-Control Feedback System
With proportional control, the feedback is a constant, $K(s) = K$. The command tracking error is easier to find when written as
$$e(s) = R(s) - Y(s) = \left[1 - \frac{G(s)K}{1 + G(s)K}\right]R(s)$$
Then
$$e(s) = \left[1 - \frac{K}{K + 1}\cdot\frac{(K + 1)\omega_n^2}{s^2 + 2\zeta\omega_n s + (K + 1)\omega_n^2}\right]\frac{1}{s^2}$$
Now find $e(t)$ as $t\to\infty$. Assuming the final value theorem holds,
$$e(\infty) = \lim_{s\to 0} s\,e(s)
= \lim_{s\to 0}\left[1 - \frac{K}{K + 1}\cdot\frac{(K + 1)\omega_n^2}{s^2 + 2\zeta\omega_n s + (K + 1)\omega_n^2}\right]\frac{1}{s}
= \lim_{s\to 0}\left[1 - \frac{K}{K + 1}\right]\frac{1}{s} = \infty$$
With P-control the steady-state error is unbounded as is illustrated in Figure 6.10.
Figure 6.10: Ramp input response of a second-order system with P-control (the output ramp has slope $\frac{K}{K+1} < 1$ while the command $r(t) = t$ has slope 1, so the error $e(t)$ grows).
A Ramp Input Applied to an Integral Control Feedback System
With integral control, the feedback is $K(s) = \frac{K}{s}$. Find the command tracking error as
$$e(s) = R(s) - Y(s) = \left[\frac{1}{1 + G(s)\frac{K}{s}}\right]R(s)
= \frac{1}{1 + \frac{K\omega_n^2}{s(s^2 + 2\zeta\omega_n s + \omega_n^2)}}\cdot\frac{1}{s^2}
= \frac{s^2 + 2\zeta\omega_n s + \omega_n^2}{s^3 + 2\zeta\omega_n s^2 + \omega_n^2 s + K\omega_n^2}\cdot\frac{1}{s}$$
Find $e(t)$ as $t\to\infty$. Again, assuming the final value theorem holds,
$$e(\infty) = \lim_{s\to 0} s\,e(s)
= \lim_{s\to 0} s\left[\frac{s^2 + 2\zeta\omega_n s + \omega_n^2}{s^3 + 2\zeta\omega_n s^2 + \omega_n^2 s + K\omega_n^2}\right]\frac{1}{s}
= \frac{1}{K}$$
With integral control the steady-state error goes to a constant as is illustrated in Figure 6.11.
Figure 6.11: Ramp input response of a second-order system with integral control (the output ramp has slope 1 and the error $e(t)$ approaches $\frac{1}{K}$).
A Ramp Input Applied to a Double Integral Control Feedback System
One integrator gets rid of the steady-state error for step commands so, try two integrators to get rid of steady-state error for ramp commands. The feedback controller is $K(s) = \frac{K}{s^2}$ and the command tracking error is
$$e(s) = R(s) - Y(s) = \left[\frac{1}{1 + G(s)\frac{K}{s^2}}\right]R(s)
= \frac{1}{1 + \frac{K\omega_n^2}{s^2(s^2 + 2\zeta\omega_n s + \omega_n^2)}}\cdot\frac{1}{s^2}
= \frac{s^2 + 2\zeta\omega_n s + \omega_n^2}{s^2(s^2 + 2\zeta\omega_n s + \omega_n^2) + K\omega_n^2}$$
Find $e(t)$ as $t\to\infty$. Again, assuming the final value theorem holds,
$$e(\infty) = \lim_{s\to 0} s\,e(s)
= \lim_{s\to 0} s\left[\frac{s^2 + 2\zeta\omega_n s + \omega_n^2}{s^2(s^2 + 2\zeta\omega_n s + \omega_n^2) + K\omega_n^2}\right] = 0$$
With double integral control, the steady-state error goes to zero. But this result is found assuming the final value theorem holds. So, check the closed-loop system and see if or when it is stable.
Form the Hurwitz determinants using the closed-loop characteristic equation.
$$s^4 + 2\zeta\omega_n s^3 + \omega_n^2 s^2 + K\omega_n^2 \;\equiv\; a_0 s^4 + a_1 s^3 + a_2 s^2 + a_3 s + a_4$$
$$D_1 = a_1 = 2\zeta\omega_n$$
$$D_2 = \det\begin{bmatrix} a_1 & a_3 \\ a_0 & a_2 \end{bmatrix}
= \det\begin{bmatrix} 2\zeta\omega_n & 0 \\ 1 & \omega_n^2 \end{bmatrix} = 2\zeta\omega_n^3$$
$$D_3 = \det\begin{bmatrix} a_1 & a_3 & 0 \\ a_0 & a_2 & a_4 \\ 0 & a_1 & a_3 \end{bmatrix}
= \det\begin{bmatrix} 2\zeta\omega_n & 0 & 0 \\ 1 & \omega_n^2 & K\omega_n^2 \\ 0 & 2\zeta\omega_n & 0 \end{bmatrix}
= -(2\zeta\omega_n)(2\zeta\omega_n)(K\omega_n^2) = -4K\zeta^2\omega_n^4$$
$$D_4 = \det\begin{bmatrix} a_1 & a_3 & 0 & 0 \\ a_0 & a_2 & a_4 & 0 \\ 0 & a_1 & a_3 & 0 \\ 0 & a_0 & a_2 & a_4 \end{bmatrix}
= a_4 D_3 = (K\omega_n^2)(-4K\zeta^2\omega_n^4) = -4K^2\zeta^2\omega_n^6$$
For stability, all the Hurwitz determinants have to be positive.
$$D_1 = 2\zeta\omega_n > 0$$
$$D_2 = 2\zeta\omega_n^3 > 0$$
$$D_3 = -4K\zeta^2\omega_n^4 > 0$$
$$D_4 = -4K^2\zeta^2\omega_n^6 > 0 \quad\text{impossible}$$
The fourth Hurwitz determinant is not positive for any $K$ so the double-integrator configuration is not stable for any $K$.

This is also a good example to illustrate the Liénard-Chipart criterion. First, all coefficients in the characteristic polynomial have to be positive. This is a necessary condition.
$$a_i > 0 \quad\Longrightarrow\quad a_4 = K\omega_n^2 > 0 \quad\Longrightarrow\quad K > 0$$
Now look at the odd numbered Hurwitz determinants.
$$D_1 = 2\zeta\omega_n > 0$$
$$D_3 = -4K\zeta^2\omega_n^4 > 0 \quad\Longrightarrow\quad K < 0$$
Obviously, $K$ cannot be both positive and negative so again the double-integrator configuration is not stable for any $K$.

Conclusion: Adding integrators improves the steady-state response but degrades stability.
6.5 System Type
The study of integral feedback control applied to first and second-order systems showed that the number of integrators in the open-loop system strongly affects the closed-loop steady-state performance. Let
$$G(s)K(s) = \frac{K\prod_{j=1}^{m}(s + z_j)}{s^l\prod_{k=1}^{n}(s + p_k)}$$
Systems are typed according to the value of $l$.

Type 0: $l = 0$, no integrator
Type 1: $l = 1$, one integrator
Type 2: $l = 2$, two integrators

Systems with higher type tend to have better steady-state performance but worse stability.

There is a connection between the system type and the steady-state error for various inputs. First, define the static error constants.
Position constant: $K_p = \lim_{s\to 0} G(s)K(s)$
Velocity constant: $K_v = \lim_{s\to 0} sG(s)K(s)$
Acceleration constant: $K_a = \lim_{s\to 0} s^2 G(s)K(s)$

Consider the unity feedback closed-loop configuration of Figure 6.12. The closed-loop transfer function from the command to the error is
$$e(s) = R(s) - Y(s) = \frac{1}{1 + G(s)K(s)}R(s)$$

Figure 6.12: A unity feedback system configuration.
Now find the steady-state error in terms of $K_p$, $K_v$ and $K_a$. Assume the final value theorem holds, then
$$e(\infty) = \lim_{s\to 0} s\,e(s) = \lim_{s\to 0}\frac{sR(s)}{1 + G(s)K(s)}$$
Unit Step: $R(s) = \frac{1}{s}$.
$$e(\infty) = \lim_{s\to 0}\frac{s\cdot\frac{1}{s}}{1 + G(s)K(s)} = \frac{1}{1 + K_p}$$
Unit Ramp: $R(s) = \frac{1}{s^2}$.
$$e(\infty) = \lim_{s\to 0}\frac{s\cdot\frac{1}{s^2}}{1 + G(s)K(s)} = \lim_{s\to 0}\frac{1}{s + sG(s)K(s)} = \frac{1}{K_v}$$
Unit Parabola: $R(s) = \frac{1}{s^3}$.
$$e(\infty) = \lim_{s\to 0}\frac{s\cdot\frac{1}{s^3}}{1 + G(s)K(s)} = \lim_{s\to 0}\frac{1}{s^2 + s^2 G(s)K(s)} = \frac{1}{K_a}$$
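The static error constants can be read off numerically from the open-loop polynomials. The following sketch is my own illustration (the type-1 example loop in the last line is made up); it simply counts free integrators to take the limits.

```python
import numpy as np

def static_error_constants(num, den):
    """Kp, Kv, Ka for an open loop L(s) = G(s)K(s) = num(s)/den(s).

    num, den are coefficient lists (highest power first); den may end in
    trailing zeros, one per free integrator.
    """
    def limit(extra_zeros):
        # lim_{s->0} s^extra_zeros * num(s)/den(s), taken by cancelling factors of s
        n = np.trim_zeros(num, 'b')
        d = np.trim_zeros(den, 'b')
        n_zeros = len(num) - len(n)        # zeros of num(s) at s = 0
        d_zeros = len(den) - len(d)        # free integrators in den(s)
        power = extra_zeros + n_zeros - d_zeros
        if power > 0:
            return 0.0
        if power < 0:
            return float("inf")
        return n[-1] / d[-1]
    return limit(0), limit(1), limit(2)

# Type-1 example: L(s) = 10(s+2) / (s(s+5))  ->  Kp = inf, Kv = 4, Ka = 0
print(static_error_constants([10.0, 20.0], [1.0, 5.0, 0.0]))
```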
Example (A Targeting System). Suppose G(s)K(s) is type zero and models a targeting system.
G(s)K(s) has the form
G(s)K(s) =

m
j=1
(s +z
j
)

n
k=1
(s +p
k
)
where p
k
= 0 k
Then
K
p
= lim
s0
G(s)K(s) = K
z
1
z
m
p
1
p
n
<
The steady-state tracking error for a unit step is nonzero but bounded. But the tracking error for ramp
and parabolic inputs is unbounded because
K
v
= lim
s0
sG(s)K(s) = 0
K
a
= lim
s0
s
2
G(s)K(s) = 0
The system will track the target with nonzero but known error as long as the target holds still. If the
target moves away at a steady speed or worse, accelerates, the system will lose the target.
The system type and static error constants are dened for a unity feedback conguration as shown
in Figure 6.13.
Figure 6.13: Two unity feedback configurations.

Sometimes it is more convenient to express the feedback in a non-unity feedback form as shown in Figure 6.14.
Figure 6.14: A non-unity feedback configuration.

System type and static error constants are still defined but the system has to be rearranged into a unity feedback form. From Figure 6.14, the transfer function from $R(s)$ to $Y(s)$ is
$$Y = G(R - HY) \quad\Longrightarrow\quad GR = (1 + GH)Y \quad\Longrightarrow\quad Y = (1 + GH)^{-1}GR$$
From Figure 6.13, the transfer function from $R(s)$ to $Y(s)$ is
$$Y = \tilde{G}(R - Y) \quad\Longrightarrow\quad Y = (1 + \tilde{G})^{-1}\tilde{G}R = \tilde{G}(1 + \tilde{G})^{-1}R$$
Solve for $\tilde{G}$ by setting $(1 + GH)^{-1}G = \tilde{G}(1 + \tilde{G})^{-1}$. Then
$$\tilde{G} = (1 + GH - G)^{-1}G$$
Now find the system type by looking at $\tilde{G}$.
$$K_p = \lim_{s\to 0}\tilde{G}(s), \qquad K_v = \lim_{s\to 0} s\tilde{G}(s), \qquad K_a = \lim_{s\to 0} s^2\tilde{G}(s)$$
6.6 Disturbances
System disturbances are system inputs other than the command signal. Generally, they are unknown
functions of time, not directly measurable and cannot be removed from the system. In an airplane pitch
rate command system for example, disturbances would include wind gusts and pitch rate sensor noise.
Both the wind gust and the sensor noise inevitably stimulate the system output, the airplane pitch rate, and neither signal can be measured.

To include the effect of disturbances in closed-loop system performance analyses, first find transfer functions from each input signal to the output. Next, analyse the effect of the control system gains on the size of each disturbance component in the output signal.
6.6.1 Transfer functions from the disturbances to the output
System disturbances can appear at the system input, the system output or at the sensor. Figure 6.15 illustrates a general feedback configuration with additive disturbances and sensor noise. The notation used in the figure is

$G$: plant, $K$: compensator, $H$: feedback, $D_i$: input disturbance, $D_o$: output disturbance, $N$: sensor noise.

Figure 6.15: A system with disturbances and sensor noise.
Find the closed-loop transfer functions from each of the inputs $R$, $D_i$, $D_o$ and $N$ to $Y$. From Figure 6.15, the following relations between signals are found
$$Y = D_o + Y' = D_o + G(D_i + u), \qquad u = Ke, \qquad e = R - R' = R - H(N + Y)$$
Solve for $Y$;
$$Y = D_o + G(D_i + u) = D_o + G(D_i + Ke) = D_o + G\left[D_i + K(R - R')\right] = D_o + G\left\{D_i + K\left[R - H(N + Y)\right]\right\}$$
Then
$$(1 + GKH)Y = D_o + GD_i + GKR - GKHN$$
and
$$Y = \frac{1}{1 + GKH}D_o + \frac{G}{1 + GKH}D_i + \frac{GK}{1 + GKH}R - \frac{GKH}{1 + GKH}N \qquad (6.5)$$
Transfer functions from each of the inputs to the output $Y$ are found from (6.5).
$$G_{D_o} = \frac{Y(s)}{D_o(s)} = \frac{1}{1 + GKH}, \qquad
G_{D_i} = \frac{Y(s)}{D_i(s)} = \frac{G}{1 + GKH}, \qquad
G_R = \frac{Y(s)}{R(s)} = \frac{GK}{1 + GKH}, \qquad
G_N = \frac{Y(s)}{N(s)} = -\frac{GKH}{1 + GKH}$$
6.6.2 A tradeoff between disturbance attenuation and tracking performance
A design goal is to choose K(s) and H(s) to meet two objectives.
1. The command response described by
$$Y_R = \frac{GK}{1 + GKH}R$$
should exhibit fast rise-time, low overshoot, and small steady-state error.

2. The disturbance responses should all be small
$$Y_{D_o} = \frac{1}{1 + GKH}D_o, \qquad
Y_{D_i} = \frac{G}{1 + GKH}D_i, \qquad
Y_N = -\frac{GKH}{1 + GKH}N$$

The four transfer functions cannot all be independent since there are only two parameters $K(s)$ and $H(s)$. For example
$$G_{D_o} - G_N = \frac{1}{1 + GKH} + \frac{GKH}{1 + GKH} = \frac{1 + GKH}{1 + GKH} = 1$$
This means if the effect of sensor noise $N$ on the output is small, then the effect of output disturbances $D_o$ has to be large.
Example (Load Torque on a DC Motor). The governing differential equations for a DC motor are
$$V(t) = L\frac{di}{dt} + \epsilon(t) + Ri(t), \qquad
T(t) = J\frac{d\omega}{dt} + D\omega(t) + T_L(t)$$
where $T(t) = K_t i(t)$, $\epsilon(t) = K_e\omega(t)$, and $T_L(t)$ is the load torque.

Now, find the transfer function from the applied voltage $V$ and load torque $T_L$ to the rotor speed $\omega$.
$$V = sLI + \epsilon + RI = (R + sL)I + \epsilon = (R + sL)I + K_e\omega$$
$$T = sJ\omega + D\omega + T_L = (D + sJ)\omega + T_L = K_t I$$
$$V(s) = (R + sL)(D + sJ)\frac{1}{K_t}\omega(s) + (R + sL)\frac{1}{K_t}T_L(s) + K_e\omega(s)$$
$$\omega(s) = \frac{K_t V(s) - (R + sL)T_L(s)}{(R + sL)(D + sJ) + K_e K_t}
= \frac{\frac{K_t}{(R + sL)(D + sJ)}}{1 + K_e\frac{K_t}{(R + sL)(D + sJ)}}V(s)
- \frac{\frac{1}{D + sJ}}{1 + K_e\frac{K_t}{(R + sL)(D + sJ)}}T_L(s)$$
An equivalent block diagram is given in Figure 6.16.
Figure 6.16: A block diagram for a DC motor showing load torque.
Simplify by assuming that $L \approx 0$ and $D \approx 0$. $L \approx 0$ means that the inductance is negligible so that the mechanical dynamics dominate the electrical dynamics. $D \approx 0$ means that friction is negligible. Now the block diagram is given in Figure 6.17.

Figure 6.17: A simplified DC motor with zero inductance and friction.
Rearrange to have the load torque enter the feedback loop at the same point as the command as in Figure 6.18, illustrating a block diagram reduction.

Figure 6.18: Load torque enters the feedback loop at the same point as the command.

Finally, load torque can be modeled as an input disturbance $D_i(s)$ where $D_i(s) = -\frac{R}{K_t}T_L(s)$ and $\tau = \frac{RJ}{K_e K_t}$ as in Figure 6.19.
Figure 6.19: Load torque is modelled as an input disturbance.

Figure 6.20: A motor speed command configuration with load torque disturbance attenuation.
A closed-loop motor speed command configuration is illustrated in Figure 6.20. A control design objective is to find $K(s)$ to achieve both disturbance attenuation and command tracking
$$\frac{\omega(s)}{D_i(s)} \approx 0 \qquad\text{and}\qquad \frac{\omega(s)}{\omega_c(s)} \approx 1$$
6.7 Sensitivity
6.7.1 Feedback Improves Sensitivity
CHAPTER 7
Root Locus
The root locus is a graphical technique that gives a quick look at the migration of the closed-loop poles
as some parameter, such as a controller gain, is varied.
7.1 Root Locus as a Design Tool
The common use of the root locus is as a tool to help choose a controller gain. Suppose a system $G(s)$ is put in a standard unity feedback configuration with a simple proportional gain controller.

Figure 7.1: Unity feedback configuration.

The closed-loop poles are given as the roots of
$$1 + KG(s) = 0$$
The general objectives of any system design are to achieve:
- Speed
- Accuracy
- Stability

These goals can be translated to
- Choose $K$ such that the roots of $1 + KG(s) = 0$ are stable, that is, in the left half plane.
- Given the design requirements for the rise-time, settling-time and maximum overshoot, choose $K$ such that the roots of $1 + KG(s) = 0$ lie in the intersection of the associated s-plane regions as shown in Figure 7.2.

Figure 7.2: Performance specifications and the s-plane. The regions are
$$\zeta^2 > \frac{\ln^2 M_p}{\pi^2 + \ln^2 M_p} \;\;\text{(overshoot)}, \qquad
\zeta\omega_n > \frac{4}{T_s} \;\;\text{(settling time)}, \qquad
\omega_n > \frac{\pi}{2T_r} \;\;\text{(rise time)}$$
Example (Aircraft Stabilization and Pitch Rate Command System). Most aircraft configurations are designed for good open-loop handling qualities. This means that without any feedback control, the airplane is stable and responds to the pilot in a predictable and comfortable way. Certain high-performance aircraft configurations are designed with other considerations and achieve the required handling qualities only with closed-loop feedback control. In this example, a dynamic model for an unstable airplane is developed and a feedback control law is designed to stabilize the airplane and track pitch-rate commands given by the pilot.
Elementary flight mechanics: The aerodynamic center of an aircraft is the point where moments generated by the aerodynamics are independent of angle-of-attack. Figure 7.3 shows an aircraft designed with the aerodynamic center in front of the center of gravity. Suppose the aircraft angle-of-attack increases slightly because of a wind gust. The lift will increase, which will cause the nose to pitch up, which will further increase the angle-of-attack and the lift. Eventually, the aircraft will depart. This is a statically unstable configuration. Static stability means that after a disturbance, forces and moments are produced that tend to reduce the effect of the disturbance.

Figure 7.3: A statically unstable aircraft configuration.

A stable configuration has the aerodynamic center behind the center of gravity. When angle-of-attack and lift increase due to a wind gust, the nose will tend to pitch down causing lift to decrease and return to what it was before the disturbance.
Remark. Remember CG before AC as commanding general before air cadet.
Aircraft dynamics model: Usually, the longitudinal dynamics of an aircraft are reasonably
approximated by linearizing them about a nominal operating condition. Define the linearized quantities
$$\alpha \triangleq \text{angle of attack}, \qquad q \triangleq \text{pitch rate}, \qquad \delta_e \triangleq \text{elevator deflection}$$
The short-period linear dynamics are
$$\dot\alpha = Z_\alpha \alpha + Z_q q + Z_{\delta_e}\delta_e \tag{7.1a}$$
$$\dot q = M_\alpha \alpha + M_q q + M_{\delta_e}\delta_e \tag{7.1b}$$
Next, find the transfer function from $\delta_e$ to $q$ by taking the Laplace transform of the dynamics (7.1)
$$s\alpha(s) = Z_\alpha \alpha(s) + Z_q q(s) + Z_{\delta_e}\delta_e(s) \tag{7.2a}$$
$$sq(s) = M_\alpha \alpha(s) + M_q q(s) + M_{\delta_e}\delta_e(s) \tag{7.2b}$$
Now solve the two equations (7.2) for $q(s)$ in terms of $\delta_e(s)$:
$$q(s) = \frac{M_{\delta_e}s + (Z_{\delta_e}M_\alpha - Z_\alpha M_{\delta_e})}{s^2 - (Z_\alpha + M_q)s + (Z_\alpha M_q - Z_q M_\alpha)}\,\delta_e(s)$$
Here are the stability derivatives for a flight condition of Mach 0.9 and altitude 20,000 ft.
$$Z_\alpha = -1.6187\ \text{sec}^{-1} \qquad Z_q = 0.997\ \text{sec}^{-1} \qquad M_\alpha = 2.9618\ \text{sec}^{-1}$$
$$M_q = -0.7704\ \text{sec}^{-1} \qquad Z_{\delta_e} = -0.16625\ \text{sec}^{-1} \qquad M_{\delta_e} = -22.5382\ \text{sec}^{-1}$$
For the given flight condition, the transfer function from $\delta_e$ to $q$ is
$$q(s) = \frac{-22.5382\,s + [(-0.16625)(2.9618) - (-1.6187)(-22.5382)]}{s^2 - (-1.6187 - 0.7704)s + [(-1.6187)(-0.7704) - (0.997)(2.9618)]}\,\delta_e(s) = -22.5382\,\frac{(s + 1.64)}{(s + 2.965)(s - 0.575)}\,\delta_e(s)$$
The poles and zero are $p_1 = -2.965$, $p_2 = 0.575$, and $z = -1.64$. Since the second pole is positive, the
aircraft is unstable. Small deviations in the elevator deflection will cause the pitch rate to grow without
bound.
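As a quick numerical cross-check of the algebra above, the sketch below (Python/NumPy, an assumed tool) rebuilds the numerator and denominator from the quoted stability derivatives, with the signs as recovered here, and verifies the pole and zero locations.

import numpy as np

# Short-period transfer function q(s)/delta_e(s) built from the stability
# derivatives quoted above (signs as recovered in the text).
Za, Zq, Ma, Mq = -1.6187, 0.997, 2.9618, -0.7704
Zde, Mde = -0.16625, -22.5382

num = np.array([Mde, Zde * Ma - Za * Mde])              # Mde*s + (Zde*Ma - Za*Mde)
den = np.array([1.0, -(Za + Mq), Za * Mq - Zq * Ma])    # s^2 - (Za+Mq)s + (Za*Mq - Zq*Ma)

print("zero :", np.roots(num))    # expect about -1.64
print("poles:", np.roots(den))    # expect about -2.965 and +0.575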
A stability augmentation and pitch rate command system: A functional block diagram is shown in
Figure 7.4
Figure 7.4: Aircraft stability augmentation and pitch-rate command system functional block diagram.
The system component models are
$$\text{Amplifier:}\ \frac{u(s)}{e(s)} = K \qquad \text{Servo:}\ \frac{\delta_e(s)}{u(s)} = 1\ \frac{\text{rad}}{\text{volt}} \qquad \text{Rate Gyro:}\ \frac{q_m(s)}{q(s)} = 1\ \frac{\text{volt}}{\text{rad/sec}}$$
A block diagram of the proportional control system is shown in Figure 7.5.
Figure 7.5: Aircraft stability augmentation and pitch-rate command system block diagram.
The root locus is a plot of the closed-loop poles as K varies from zero to infinity. You could solve for the
poles exactly by finding the roots of the characteristic equation as a function of K. But the value of
the root locus is that it is a graphical technique that lets you draw the plot right away, without
solving for anything.
Figure 7.6: Root locus of an aircraft stability augmentation and pitch-rate command system.
The root locus, Figure 7.6, shows that for large enough K, the airplane is stable.
Example (DC Motor Position Control System). Consider a proportional gain control design for a
DC motor position command system as shown in Figure 7.7.
Figure 7.7: A DC motor position command system.
The closed-loop transfer function from the commanded motor position to the measured position is
$$\frac{\Theta(s)}{\Theta_c(s)} = \frac{G(s)K}{1 + G(s)K} = \frac{K}{\tau s^2 + s + K}$$
The closed-loop poles are
$$p_{1,2} = \frac{1}{2\tau}\left(-1 \pm \sqrt{1 - 4K\tau}\right)$$
There are two possibilities. If the gain K is low, $0 \le K \le \frac{1}{4\tau}$, the closed-loop poles are real:
$$K = 0 \;\Rightarrow\; p_{1,2} = \left\{-\tfrac{1}{\tau},\, 0\right\} \qquad\qquad K = \tfrac{1}{4\tau} \;\Rightarrow\; p_{1,2} = \left\{-\tfrac{1}{2\tau},\, -\tfrac{1}{2\tau}\right\}$$
If the gain is high, $K > \frac{1}{4\tau}$, the closed-loop poles are complex:
$$K \to \infty \;\Rightarrow\; \begin{cases} p_1 \to -\tfrac{1}{2\tau} + i\infty \\ p_2 \to -\tfrac{1}{2\tau} - i\infty \end{cases}$$
The root locus, Figure 7.8, shows the migration of the closed-loop poles as K varies from zero to infinity.
Given a maximum overshoot requirement, one design approach would be to pick the proportional control
gain K so that the closed-loop poles lie within the damping ratio design cone.
Figure 7.8: Root locus for a DC motor position command system.
The root locus clearly shows the characteristics of proportional feedback control in this example. As K
varies from zero to infinity:
- Closed-loop behavior changes from overdamped to critically damped to underdamped.
- The closed loop is always stable.
- The settling-time is constant for large K.
- Choose K to keep the locus inside the $\zeta_{\min}$ cone.
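These observations are easy to confirm numerically. The following sketch (Python/NumPy, assumed tooling; the time constant τ = 1 is an arbitrary illustrative value, not from the text) solves τs² + s + K = 0 for several gains.

import numpy as np

# Closed-loop poles of the DC motor position loop: tau*s^2 + s + K = 0.
tau = 1.0   # assumed motor time constant for illustration

for K in [0.0, 0.1, 0.25, 1.0, 10.0]:
    p = np.roots([tau, 1.0, K])
    print(f"K = {K:5.2f}  poles = {p}")
    # K <= 1/(4*tau): both poles real (overdamped).  K > 1/(4*tau): complex
    # conjugate pair with real part fixed at -1/(2*tau), so the settling time
    # stops improving no matter how large K becomes.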
In the last example, a root locus shows that rise-time can only be improved to a point before
overshoot becomes large. In the next example, root locus shows how proportional-derivative or PD
control can provide both fast rise-time and low overshoot.
Example (DC Motor Position Control System With PD Control).
Consider a proportional-derivative control design for a DC motor position command system as shown
in Figure 7.9.
Figure 7.9: A DC motor position command system with PD control.
The closed-loop transfer function from the commanded motor position to the measured position is
$$\frac{\Theta(s)}{\Theta_c(s)} = \frac{G(s)K(s)}{1 + G(s)K(s)} = \frac{K_P + K_D s}{\tau s^2 + (1 + K_D)s + K_P}$$
Notice that $K_D$ is like a damping coefficient for a mass-spring-damper system. Since $K_D$ is a design
parameter, expect the PD controller to achieve better damping than P-control. The closed-loop poles
are:
$$p_{1,2} = \frac{1}{2\tau}\left[-(1 + K_D) \pm \sqrt{(1 + K_D)^2 - 4K_P\tau}\right]$$
Notice that when $K_D = 0$, the closed-loop poles are the same as for the simple P-controller case.
Figure 7.10 illustrates three steps in a design approach to finding the proportional and derivative
controller gains $K_P$ and $K_D$.
Figure 7.10: PD controller design with root locus (open-loop poles; root locus for $K_P$ with $K_D = 0$; root locus for $K_D$ with $K_P$ held fixed).
1. Put $K_P = K_D = 0$, so the poles are just $p_1 = 0$ and $p_2 = -\frac{1}{\tau}$.
2. Keep $K_D = 0$, but increase $K_P$ to some gain $K_P > \frac{1}{4\tau}$ as in the second figure. Now the
closed-loop poles are far from the origin, so the rise-time is good but the damping is poor and the
settling-time has not improved.
3. Hold $K_P$ fixed to the value just found and vary $K_D$ from zero to infinity as in the third figure. As
$K_D$ increases, the damping and the settling-time both are improved. The rise-time is not affected
much because the poles stay a fixed distance from the origin.
The root loci of Figure 7.10 are drawn using graphical techniques quickly and with very little effort.
But this example is small enough that the root locus also can be verified analytically. It has already been
shown that the closed-loop poles are given by
$$p_{1,2} = \frac{1}{2\tau}\left[-(1 + K_D) \pm \sqrt{(1 + K_D)^2 - 4K_P\tau}\right]$$
The poles are either both real or form a complex conjugate pair. Suppose $0 \le (1 + K_D)^2 \le 4K_P\tau$ so
that the poles are complex. Then it can be shown that the magnitude of the poles does not depend on
$K_D$:
$$\left|p_{1,2}\right|^2 = \left|\frac{1}{2\tau}\left[-(1 + K_D) \pm i\sqrt{4K_P\tau - (1 + K_D)^2}\right]\right|^2 = \frac{1}{4\tau^2}\left[(1 + K_D)^2 + \left(4K_P\tau - (1 + K_D)^2\right)\right] = \frac{K_P}{\tau}$$
Since the magnitude of the complex poles does not depend on $K_D$, as $K_D$ varies the poles follow an
arc of constant radius from the origin.
Now suppose $4K_P\tau < (1 + K_D)^2$ so that both poles are real. Then it can be shown that as $K_D$
becomes large, one pole approaches zero and the other approaches $-\infty$:
$$K_D \to \infty:\quad p_{1,2} = \frac{1}{2\tau}\left[-(1 + K_D) \pm \sqrt{(1 + K_D)^2 - 4K_P\tau}\right] \;\longrightarrow\; \begin{cases} p_1 \approx -\dfrac{1 + K_D}{\tau} \\[4pt] p_2 \approx 0 \end{cases}$$
The root locus provided the same information as the preceding analysis, and provided it much more quickly
and with much less effort.
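The constant-radius property is also easy to confirm numerically; the sketch below (Python/NumPy, with illustrative values τ = 1 and K_P = 4 that are not from the text) prints |p| for several derivative gains.

import numpy as np

# For complex closed-loop poles of tau*s^2 + (1 + KD)*s + KP = 0 the magnitude
# should be sqrt(KP/tau) regardless of KD.
tau, KP = 1.0, 4.0          # illustrative values

for KD in [0.0, 0.5, 1.0, 2.0, 5.0]:
    poles = np.roots([tau, 1.0 + KD, KP])
    if (1.0 + KD)**2 < 4.0 * KP * tau:          # complex-conjugate pair
        print(f"KD = {KD:3.1f}  |p| = {abs(poles[0]):.4f}  (expect {np.sqrt(KP/tau):.4f})")
    else:                                        # both poles real
        print(f"KD = {KD:3.1f}  real poles = {poles}")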
A root locus for first and second-order systems is easy to find because the roots of the characteristic
equation are found using a simple formula. Solving for the closed-loop poles of a higher-order system
is harder to do. The utility of the simple graphical technique of the root locus procedure now should be
clear.
Example (Root Locus for a Higher-Order System). Consider a third-order system G(s) with
proportional feedback control
$$G(s) = \frac{1}{s[(s + 4)^2 + 16]}$$
The closed-loop transfer function is
$$G_{CL}(s) = \frac{G(s)K}{1 + G(s)K} = \frac{K}{s[(s + 4)^2 + 16] + K} = \frac{K}{s^3 + 8s^2 + 32s + K}$$
The characteristic equation is
$$D(s) = s^3 + 8s^2 + 32s + K = a_0 s^3 + a_1 s^2 + a_2 s + a_3$$
For stability, all the coefficients have to be positive, so immediately K > 0. Now check the Hurwitz
determinants. D(s) is third-order so there are three Hurwitz determinants. All three have to be positive
for D(s) to have stable roots but, by the Liénard-Chipart criterion, only the even or only the odd
numbered determinants need to be checked.
$$D_2 = \begin{vmatrix} a_1 & a_3 \\ a_0 & a_2 \end{vmatrix} = \begin{vmatrix} 8 & K \\ 1 & 32 \end{vmatrix} = 256 - K > 0 \qquad\Longrightarrow\qquad K < 256$$
The Hurwitz determinants show that the closed-loop system is stable as long as the gain K is not too
large, more precisely, if and only if 0 < K < 256.
The same result is found graphically and with much less work using the root locus as shown in
Figure 7.11. Note that the root locus does not reveal precisely at which value of K the system becomes
unstable.
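A quick numerical scan (Python/NumPy, an assumed tool) confirms the Hurwitz boundary by checking the roots of s³ + 8s² + 32s + K directly:

import numpy as np

# Scan the gain and find where a closed-loop pole first crosses into the
# right half plane; the Hurwitz test above predicts the boundary K = 256.
for K in [1.0, 100.0, 255.0, 257.0, 400.0]:
    poles = np.roots([1.0, 8.0, 32.0, K])
    stable = np.all(poles.real < 0)
    print(f"K = {K:6.1f}  stable = {stable}  max Re(p) = {poles.real.max():+.3f}")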
Figure 7.11: Root locus for a third-order system.
7.2 Root Locus Background
The root locus is a graphical technique for plotting the closed-loop poles of a transfer function as some
parameter varies from zero to infinity. The technique is applied to systems expressed in a special form,
Evans form, which is described in the first part of this section. The angle criterion is developed in
the second part. The angle criterion expresses a relationship between the phase of open-loop poles and
zeros of a transfer function and the closed-loop poles.
7.2.1 Evans Form
The rules for sketching the root locus are developed for systems expressed in Evans form
$$G_{CL}(s) = \frac{\bar G(s)}{1 + KG(s)}$$
where K is the parameter to be varied. The root locus is a plot of the roots of
$$1 + KG(s) = 0$$
as K varies from zero to infinity.
Example (P-Control System). Consider a system G(s) with proportional feedback control.
$$G_{CL}(s) = \frac{KG(s)}{1 + KG(s)}$$
This configuration is already in Evans form.
Example (PD-Control System). Consider the simplified DC motor model G(s) with proportional-derivative feedback control K(s).
$$\text{DC motor: } G(s) = \frac{1}{s(\tau s + 1)} \qquad\qquad \text{PD controller: } K(s) = K_P + K_D s$$
The closed-loop system $G_{CL}(s)$ is given by
$$G_{CL}(s) = \frac{K_P + K_D s}{\tau s^2 + (1 + K_D)s + K_P}$$
In the example of the previous section, a value for $K_P$ was chosen and the closed-loop poles were
plotted as $K_D$ varied. This was done after rearranging $G_{CL}(s)$ as
$$G_{CL}(s) = \frac{K_P + K_D s}{\tau s^2 + (K_D + 1)s + K_P} = \frac{\dfrac{K_P + K_D s}{\tau s^2 + s + K_P}}{1 + K_D\left(\dfrac{s}{\tau s^2 + s + K_P}\right)}$$
The root locus is for the roots of
$$0 = 1 + K_D\left(\frac{s}{\tau s^2 + s + K_P}\right)$$
7.2.2 The Angle Criterion
Finding the root locus by explicitly solving for the closed-loop poles is not practical for any but the very
simplest systems. However, finding just the phase of the closed-loop poles is not too hard, and that is
what the angle criterion does. The angle criterion is the essential property of the root locus from which
most other properties are derived. Application of the angle criterion leads to a set of simple rules, a
procedure for sketching the root locus.
Complex Numbers Review
Let $z = x + iy$ be a complex number, and define R and $\theta$ by
$$R^2 = x^2 + y^2, \qquad \tan\theta = \frac{y}{x}$$
Since $e^{i\theta} = \cos\theta + i\sin\theta$, z has a polar representation
$$z = x + iy = R(\cos\theta + i\sin\theta) = Re^{i\theta}.$$
Let $(s - z_k)$ be a vector in the complex plane from $z_k$ to s, as shown in Figure 7.12. A vector $(s - z_k)$ in the
complex plane has a polar representation as follows and as shown in Figure 7.13:
$$s - z_k = R_{z_k}e^{i\theta_{z_k}}$$
Figure 7.12: A vector in the complex plane from $z_k$ to s.
Figure 7.13: A vector in the complex plane has a polar representation.
Angle Criterion Development
Consider a closed-loop transfer function:
$$G_{CL}(s) = \frac{G(s)K(s)}{1 + G(s)K(s)}$$
The denominator can be written in a factored form
$$1 + G(s)K(s) = 1 + K\,\frac{(s - z_1)(s - z_2)\cdots(s - z_m)}{(s - p_1)(s - p_2)\cdots(s - p_n)}$$
Consider K as a parameter that varies from zero to infinity. The root locus is a plot of the closed-loop
poles, the values of s where the denominator goes to zero:
$$0 = 1 + K\,\frac{(s - z_1)(s - z_2)\cdots(s - z_m)}{(s - p_1)(s - p_2)\cdots(s - p_n)} \qquad \text{for } 0 \le K < \infty$$
Put the $(s - z_j)$ and $(s - p_k)$ in polar form
$$-\frac{1}{K} = \frac{(s - z_1)(s - z_2)\cdots(s - z_m)}{(s - p_1)(s - p_2)\cdots(s - p_n)} = \frac{\left(R_{z_1}e^{i\theta_{z_1}}\right)\left(R_{z_2}e^{i\theta_{z_2}}\right)\cdots\left(R_{z_m}e^{i\theta_{z_m}}\right)}{\left(R_{p_1}e^{i\theta_{p_1}}\right)\left(R_{p_2}e^{i\theta_{p_2}}\right)\cdots\left(R_{p_n}e^{i\theta_{p_n}}\right)} = Re^{i\theta}$$
where
$$R = \frac{R_{z_1}\cdots R_{z_m}}{R_{p_1}\cdots R_{p_n}}, \qquad \theta = (\theta_{z_1} + \cdots + \theta_{z_m}) - (\theta_{p_1} + \cdots + \theta_{p_n})$$
While the gain K normally is a real number, it has a representation in the complex plane so that
$$-\frac{1}{K} = \frac{1}{K}\,e^{i(\pi + l\,2\pi)} \qquad \text{where } l = 0, \pm 1, \pm 2, \ldots$$
It follows that a point s is on the root locus if and only if the following two requirements are met, the
magnitude criterion and the angle criterion:
$$R = \frac{R_{z_1}\cdots R_{z_m}}{R_{p_1}\cdots R_{p_n}} = \frac{1}{K}$$
$$\theta = (\theta_{z_1} + \cdots + \theta_{z_m}) - (\theta_{p_1} + \cdots + \theta_{p_n}) = \pi + l\,2\pi \quad \text{for some integer } l$$
Definition (Angle criterion). A point s is on the root locus if and only if the sum of the angles it makes
with the zeros minus the sum of the angles it makes with the open-loop poles is 180°.
Figure 7.14: Applying the angle criterion.
When applying the angle criterion as in Figure 7.14 be careful to
1. Draw the vector $(s - z_k)$ from $z_k$ to s.
2. Measure the angle $\theta_{z_k}$ counterclockwise from a line extending to the right.
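The criterion is simple to apply numerically as well. The sketch below (Python/NumPy, assumed tooling) measures the angles exactly as in the two steps above; the test points and the plant G(s) = 1/(s(s + 2)) are hypothetical illustrations, not examples from the text.

import numpy as np

def angle_criterion_deg(s, zeros, poles):
    """Sum of angles from the zeros to s minus sum of angles from the poles to s,
    in degrees; s is on the root locus when this equals 180 (mod 360)."""
    ang = sum(np.angle(s - z, deg=True) for z in zeros)
    ang -= sum(np.angle(s - p, deg=True) for p in poles)
    return ang % 360.0

# Illustrative check: for G(s) = 1/(s(s+2)) the point s = -1 + 1j lies on the locus.
zeros, poles = [], [0.0, -2.0]
print(angle_criterion_deg(-1.0 + 1.0j, zeros, poles))   # 180.0 -> on the locus
print(angle_criterion_deg(-0.5 + 1.0j, zeros, poles))   # not 180 -> off the locus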
Example (Root Locus on the Real Axis). Apply the angle criterion to a point s on the real axis, step
three of the root locus sketching procedure. As shown in Figure 7.15, angles made by complex poles
and zeros always add up to zero because these poles and zeros always come in conjugate pairs.
Figure 7.15: Complex poles and zeros contribute zero degrees to the angle criterion.
As shown in Figure 7.16, real poles and zeros to the left of s make an angle of zero degrees with s and
those to the right of s make an angle of 180 degrees.
Figure 7.16: Real poles and zeros contribute zero or 180 degrees to the angle criterion.
The only poles and zeros that contribute to the angle criterion are those on the real axis and to the right of s.
Suppose there are p zeros and q poles on the right side of s. Then
$$(p \cdot 180^\circ) - (q \cdot 180^\circ) = 180^\circ + l\,360^\circ \quad \text{for some } l = 0, \pm 1, \pm 2, \ldots$$
$$(p - q)\,180^\circ = 180^\circ + l\,360^\circ \qquad l = 0, \pm 1, \pm 2, \ldots$$
$$(p - q)\,180^\circ = l\,180^\circ \qquad l = \pm 1, \pm 3, \pm 5, \ldots$$
so $(p - q)$ is odd, $(p - q + 2q)$ is odd, and therefore $(p + q)$ is odd.
Rule. A root locus segment on the real axis lies to the left of an odd number of poles and zeros.
Example (Aircraft Pitch-Rate Command System). As found earlier, the transfer function for an
aircraft elevator deflection to pitch-rate is
$$G(s) = \frac{q(s)}{\delta_e(s)} = \frac{-22.5382\,(s + 1.64)}{(s + 2.965)(s - 0.575)}$$
The open-loop dynamics have one zero and two poles:
$$z = -1.64, \qquad p_1 = -2.965, \qquad p_2 = 0.575$$
Figure 7.17: An aircraft pitch-rate command system.
A block diagram of a proportional control pitch-rate command system is shown in Figure 7.17. The root
locus with respect to the control gain K is easy to draw. No computations are needed.
Figure 7.18: Aircraft pitch-rate command system root locus.
Example (DC Motor With a PD Controller). Design a DC motor position control system using a PD
controller. A block diagram is shown in Figure 7.19.
Figure 7.19: A DC motor with position command compensation.
First choose the zero location, then vary the controller gain to place the closed-loop poles.
$$G(s) = \frac{\Theta(s)}{V(s)} = \frac{1}{s(\tau s + 1)}, \qquad K(s) = K(s - z_0)$$
The closed-loop transfer function is
$$G_{CL}(s) = \frac{G(s)K(s)}{1 + G(s)K(s)} \qquad \text{where} \qquad G(s)K(s) = \frac{K(s - z_0)}{s(\tau s + 1)}$$
The zeros and open-loop poles are all on the real axis:
$$z = z_0, \qquad p_1 = 0, \qquad p_2 = -\frac{1}{\tau}$$
Figure 7.20 on the left side shows that the root locus segments on the real axis are easy to draw using
the rule found above. The complete root locus using all the sketching rules is shown on the right.
Figure 7.20: The real axis segments and the complete root locus of a DC motor control system.
7.3 Root Locus Sketching Procedure
Depending on the author, there are 5, 6, 8 or 11 rules for sketching the root locus. The rules are much
the same. The difference is in the way they are compiled and counted. Unfortunately, the text [5] does
not provide a clear procedure. Here are the rules arranged to form a procedure for drawing a root locus.
1. Put the system in Evans form. Rearrange, if necessary, the closed-loop transfer function to take
the following form where K is the parameter to be varied.
$$G_{CL}(s) = \frac{\bar G(s)}{1 + KG(s)}$$
2. Find the starting and ending points.
(a) Draw the open-loop poles and zeros of G(s). Indicate the n poles on the complex plane by x
and the m zeros by o.
(b) Identify the number of loci, one for each pole. (Nise [5], rule #1)
(c) Identify where the root locus begins and ends. (Nise [5], rule #4)
i. As K → 0, the loci approach the open-loop poles, the x's.
ii. As K → ∞, m loci approach the zeros, the o's. The rest go to infinity.
3. Find the real axis segments. On the real axis, draw the root locus to the left of odd numbers of
open-loop poles and zeros.
4. Find the asymptotes. The n − m poles that don't approach zeros go to infinity along asymptotes.
(a) The asymptote angles are
$$\phi_\ell = \frac{180^\circ + \ell\,360^\circ}{n - m} \qquad \text{where } \ell = 0, 1, \ldots, n - m - 1.$$
(b) The asymptote centroid is
$$\alpha = \frac{\sum_{i=1}^{n} p_i - \sum_{i=1}^{m} z_i}{n - m}$$
where $p_i$ and $z_i$ are the open-loop poles and zeros of G(s).
5. Find the departure and arrival angles.
(a) The locus departure angle $\phi_{d_i}$ from pole $p_i$ satisfies
$$\sum_{k=1}^{m}\theta_{z_k} - \sum_{\substack{j=1 \\ j\ne i}}^{n}\theta_{p_j} - \phi_{d_i} = 180^\circ + l\,360^\circ$$
where $\theta_{z_k}$ is the angle made by a vector drawn from zero $z_k$ to pole $p_i$ and $\theta_{p_j}$ is the angle made
by a vector drawn from pole $p_j$ to pole $p_i$.
Figure 7.21: Root locus departure angle.
(b) The locus arrival angle $\phi_{a_i}$ at zero $z_i$ satisfies
$$\sum_{\substack{k=1 \\ k\ne i}}^{m}\theta_{z_k} - \sum_{j=1}^{n}\theta_{p_j} + \phi_{a_i} = 180^\circ + l\,360^\circ$$
where $\theta_{z_k}$ is the angle made by a vector drawn from zero $z_k$ to zero $z_i$ and $\theta_{p_j}$ is the angle made
by a vector drawn from pole $p_j$ to zero $z_i$.
Figure 7.22: Root locus arrival angle.
6. Find the real axis breakaway points. Real axis locus segments that meet always break away at
90°. Generally, two approaching loci meet at a relative angle of 180°, then break away,
changing direction by 90°. Breakaway locations are not usually found using hand calculations.
Finding a breakaway point can be as difficult as solving for the root locus directly.
7. Complete the sketch. Remember that the root locus is symmetric about the real axis. Use all the
information from steps 1 through 6:
- Starting and ending points: as K → 0 and K → ∞.
- Real axis segments.
- Asymptote angles and centroid.
- Departure and arrival angles.
- Guess real axis breakaway points: the angle is 90° when two poles meet.
- Symmetric about the real axis.
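Steps 4(a) and 4(b) of the procedure are mechanical enough to automate. A minimal sketch (Python/NumPy, assumed tooling; the helper name asymptotes is ours, not from the text):

import numpy as np

def asymptotes(poles, zeros):
    """Asymptote angles (deg) and centroid for a root locus with the given
    open-loop poles and zeros (rule 4 of the sketching procedure)."""
    n, m = len(poles), len(zeros)
    angles = [(180.0 + 360.0 * l) / (n - m) for l in range(n - m)]
    centroid = (np.sum(poles) - np.sum(zeros)).real / (n - m)
    return angles, centroid

# Example: the DC motor loop 1/(s(tau*s+1)) with tau = 1 has poles {0, -1}.
print(asymptotes([0.0, -1.0], []))          # ([90.0, 270.0], -0.5)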
7.4 Proofs and Examples of Root Locus Properties
First, show that as K approaches zero, the root loci approach the open-loop poles and as K approaches
infinity, m of the root loci approach the system zeros and the rest go to infinity.
Proof. $1 + KG(s) = 0$ implies that $|KG(s)| = 1$ for all s on the root locus. Now, if $|KG(s)| = 1$ and K
approaches zero, then G(s) has to go to infinity. Then, if $s_0$ is on the root locus as K → 0, $s_0$ is a
pole of G(s). Similarly, if $|KG(s)| = 1$ and K approaches infinity, then G(s) has to go to zero. Write
G(s) as a ratio of polynomials
$$G(s) = \frac{N(s)}{D(s)} = \frac{s^m + \cdots + b_1 s + b_0}{s^n + \cdots + a_1 s + a_0}, \qquad m \le n$$
In order for G(s) → 0, either $N(s) = s^m + \cdots + b_1 s + b_0 = 0$, so that s is a zero of G(s), or
$D(s) = s^n + \cdots + a_1 s + a_0 \to \infty$, which means s is going to infinity.
Show that the root locus asymptote angles are
$$\phi_\ell = \frac{180^\circ + \ell\,360^\circ}{n - m} \qquad \text{where } \ell = 0, 1, \ldots, n - m - 1.$$
Proof. As s goes to infinity, KG(s) may be approximated as
$$KG(s) = K\,\frac{s^m + \cdots + b_1 s + b_0}{s^n + \cdots + a_1 s + a_0} \approx K\,\frac{s^m}{s^n} = \frac{K}{s^{n-m}}$$
Now, $1 + KG(s) = 0$ for all s on the root locus, so
$$\frac{K}{s^{n-m}} = KG(s) = -1 \qquad \text{and} \qquad s = (-K)^{\frac{1}{n-m}}$$
For K > 0, a representation for $-K$ in the complex plane is $-K = Ke^{i(\pi + 2\pi\ell)}$, where $\ell = 0, \pm 1, \pm 2, \ldots$,
so
$$s = \left[Ke^{i(\pi + 2\pi\ell)}\right]^{\frac{1}{n-m}} = K^{\frac{1}{n-m}}\,e^{i\left(\frac{\pi + 2\pi\ell}{n-m}\right)}, \qquad \ell = 0, 1, 2, \ldots, n - m - 1.$$
So as K goes to infinity, n − m poles go to infinity along asymptotes with angles given by
$$\phi_\ell = \frac{180^\circ + \ell\,360^\circ}{n - m}, \qquad \ell = 0, 1, \ldots, n - m - 1.$$
Example (Root Locus Asymptote Angles). Consider a fourth-order system with one zero.
$$KG(s) = K\,\frac{s + a}{(s^2 + 2\zeta\omega_n s + \omega_n^2)(s + b)^2}, \qquad n = 4,\ m = 1$$
For this system, n − m = 3, so as K goes to infinity the closed-loop poles follow three asymptotes given by angles
$$\phi_\ell = \frac{180^\circ + \ell\,360^\circ}{n - m}, \quad \ell = 0, 1, 2 \qquad\Longrightarrow\qquad \phi_0 = \frac{\pi}{3},\ \phi_1 = \pi,\ \phi_2 = \frac{5\pi}{3}$$
Figure 7.23: Asymptote calculation detail.
Given only the asymptote angles, the root locus could be sketched in either of two ways, as illustrated in
Figure 7.24.
Figure 7.24: The root locus may be sketched in either of two ways.
For small K the root loci are quite different, but as K approaches infinity they are identical.
Given that $p_i$ and $z_i$ are the open-loop poles and zeros of G(s), show that the root locus asymptote
centroid is
$$\alpha = \frac{\sum_{i=1}^{n} p_i - \sum_{i=1}^{m} z_i}{n - m}$$
Proof. Write KG(s) in a factored form and for simplicity assume all poles and zeros are real.
$$KG(s) = K\,\frac{s^m + b_{m-1}s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0} = K\,\frac{(s - z_1)\cdots(s - z_m)}{(s - p_1)\cdots(s - p_n)}$$
Note that
$$b_{m-1} = -\sum_{k=1}^{m} z_k \qquad \text{and} \qquad a_{n-1} = -\sum_{j=1}^{n} p_j$$
Now divide the numerator into the denominator using long division:
$$\frac{s^n + a_{n-1}s^{n-1} + \cdots + a_0}{s^m + b_{m-1}s^{m-1} + \cdots + b_0} = s^{n-m} + (a_{n-1} - b_{m-1})s^{n-m-1} + \cdots$$
Then
$$KG(s) = K\,\frac{1}{s^{n-m} + (a_{n-1} - b_{m-1})s^{n-m-1} + \cdots} = -1$$
and
$$0 = s^{n-m} + (a_{n-1} - b_{m-1})s^{n-m-1} + \cdots + K$$
Suppose the roots of this polynomial are given by $s_i = \beta_i$, $i = 1, \ldots, n - m$. Then
$$0 = (s - \beta_1)(s - \beta_2)\cdots(s - \beta_{n-m}) = s^{n-m} - \left(\sum_{i=1}^{n-m}\beta_i\right)s^{n-m-1} + \cdots + K$$
so
$$a_{n-1} - b_{m-1} = -\sum_{i=1}^{n-m}\beta_i$$
Now $-(a_{n-1} - b_{m-1}) = \sum_{i=1}^{n-m}\beta_i$ is the sum of the poles going to infinity, while
$$-a_{n-1} = \sum_{j=1}^{n} p_j \quad \text{is the sum of the open-loop poles} \qquad \text{and} \qquad -b_{m-1} = \sum_{k=1}^{m} z_k \quad \text{is the sum of the zeros.}$$
Finally, the centroid is given by
$$\alpha = \frac{\sum_{i=1}^{n-m}\beta_i}{n - m} = \frac{\sum_{j=1}^{n}p_j - \sum_{k=1}^{m}z_k}{n - m} = \frac{\sum(\text{open-loop poles}) - \sum(\text{zeros})}{(\text{no. of poles}) - (\text{no. of zeros})}$$
Example (Root Locus Asymptotes). Consider again the DC motor position command system with
proportional control. The transfer function from the applied voltage to the motor position is
$$G(s) = \frac{\Theta(s)}{V(s)} = \frac{1}{s(\tau s + 1)}$$
and the closed-loop transfer function is
$$G_{CL}(s) = \frac{KG(s)}{1 + KG(s)} = \frac{K}{\tau s^2 + s + K}$$
The root locus is already known to be given as in Figure 7.25.
Find the asymptote angles. The system has two poles and no zeros, so n − m = 2 and the root locus
has two asymptotes. The angles are
$$\phi_\ell = \frac{180^\circ + \ell\,360^\circ}{n - m}, \quad \ell = 0, 1 \qquad\Longrightarrow\qquad \phi_0 = 90^\circ \ \text{and}\ \phi_1 = 270^\circ$$
The asymptote centroid is given by
$$\alpha = \frac{\sum_{j=1}^{n}p_j - \sum_{k=1}^{m}z_k}{n - m} = \frac{\left[0 + \left(-\frac{1}{\tau}\right)\right] - [0]}{2} = -\frac{1}{2\tau}$$
Figure 7.25: Root locus for a DC motor position command system.
Two asymptotes emanate from $\alpha = -\frac{1}{2\tau}$ at angles 90° and 270°.
Example (Root Locus Asymptotes). Consider the following second-order system G(s) with an
integral controller K(s).
$$G(s) = \frac{1}{(s + 4)^2 + 16} \qquad K(s) = \frac{K}{s}$$
The open-loop poles are
$$p_{1,2} = -4 \pm i4, \qquad p_3 = 0.$$
Find the asymptote angles. The system has three poles and no zeros, so n − m = 3 and the root locus
has three asymptotes. The angles are
$$\phi_\ell = \frac{180^\circ + \ell\,360^\circ}{n - m}, \quad \ell = 0, 1, 2 \qquad\Longrightarrow\qquad \phi_0 = 60^\circ,\ \phi_1 = 180^\circ,\ \phi_2 = 300^\circ.$$
The asymptote centroid is given by
$$\alpha = \frac{\sum_{j=1}^{n}p_j - \sum_{k=1}^{m}z_k}{n - m} = \frac{[0 + (-4 + i4) + (-4 - i4)] - [0]}{3} = -\frac{8}{3}$$
Figure 7.26: Root locus asymptotes show unstable closed-loop poles at high gain.
Show that the root locus departure angle from pole $p_i$ satisfies
$$\sum_{k=1}^{m}\theta_{z_k} - \sum_{\substack{j=1 \\ j\ne i}}^{n}\theta_{p_j} - \phi_{d_i} = 180^\circ + l\,360^\circ$$
A proof of the arrival angle relation follows the same idea.
Proof. The departure angle comes directly from an application of the angle criterion. Take any point
$s_i$ very close to pole $p_i$. The angle made by the vector from $p_i$ to $s_i$ is $\theta_{p_i}$. The angle made by a vector
from any other pole or zero to $s_i$ is very nearly the same as the angle made by a vector from that pole or
zero to $p_i$. Now apply the angle criterion
$$\sum_{k=1}^{m}\theta_{z_k} - \sum_{\substack{j=1 \\ j\ne i}}^{n}\theta_{p_j} - \theta_{p_i} = 180^\circ + l\,360^\circ$$
Solve for $\phi_{d_i} = \theta_{p_i}$ to find the $s_i$ that lies on the root locus.
Example (Root Locus Departure Angle). Consider the following second-order system G(s) with an
integral controller K(s) as in the last example
$$G(s) = \frac{1}{(s + 4)^2 + 16} \qquad K(s) = \frac{K}{s}$$
The open-loop poles, shown in Figure 7.27, are
$$p_{1,2} = -4 \pm i4, \qquad p_3 = 0.$$
Find the departure angle for pole $p_2$:
$$180^\circ = \sum_{k=1}^{m}\theta_{z_k} - \sum_{\substack{j=1 \\ j\ne i}}^{n}\theta_{p_j} - \phi_{p_2} = -(135^\circ + 90^\circ + \phi_{p_2}) \qquad\Longrightarrow\qquad \phi_{p_2} = 315^\circ = -45^\circ.$$
Figure 7.27: Root locus departure angle calculation.
The root locus including the departure angle detail is shown in Figure 7.28.
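A numerical check of this departure angle (Python/NumPy, assumed tooling): perturb the gain slightly away from zero and watch which way the pole that starts near −4 + i4 moves; its conjugate mirrors the motion about the real axis.

import numpy as np

# G(s)K(s) = K / (s[(s+4)^2 + 16]) has characteristic polynomial s^3 + 8s^2 + 32s + K.
p0 = -4.0 + 4.0j
for K in [1e-3, 1e-2]:
    poles = np.roots([1.0, 8.0, 32.0, K])
    p = poles[np.argmin(abs(poles - p0))]        # the pole that started at -4 + 4i
    print(f"K = {K:.0e}  departure direction = {np.angle(p - p0, deg=True):+.1f} deg")
# Expect about -45 deg (equivalently 315 deg), matching the angle-criterion result.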
Figure 7.28: Root locus with departure angle detail.
Example (Root Locus Departure and Arrival Angles). Consider the following second-order system
G(s) with an integral controller K(s)
$$G(s) = \frac{(s - 4)^2 + 16}{(s + 4)^2 + 16} \qquad K(s) = \frac{K}{s}$$
The open-loop poles and system zeros are
$$p_{1,2} = -4 \pm i4, \qquad p_3 = 0, \qquad z_{1,2} = 4 \pm i4.$$
Figure 7.29: Root locus illustrating departure and arrival angle detail.
Find the departure angle:
$$180^\circ = \sum_{k=1}^{m}\theta_{z_k} - \sum_{\substack{j=1 \\ j\ne i}}^{n}\theta_{p_j} - \phi_d = (180^\circ + 135^\circ) - (135^\circ + 90^\circ + \phi_d) \qquad\Longrightarrow\qquad \phi_d = -90^\circ.$$
Find the arrival angle:
$$180^\circ = \sum_{\substack{k=1 \\ k\ne i}}^{m}\theta_{z_k} - \sum_{j=1}^{n}\theta_{p_j} + \phi_a = (\phi_a + 90^\circ) - (0^\circ + 45^\circ + 45^\circ) \qquad\Longrightarrow\qquad \phi_a = 180^\circ.$$
Example (Root Locus Breakaway Points). Let
$$G(s) = \frac{1}{s(\tau s + 1)}, \qquad K(s) = K, \qquad G_{CL}(s) = \frac{K}{\tau s^2 + s + K}$$
Figure 7.30: Breakaway point calculation detail.
Example (Root Locus Sketch). Consider a fourth-order system G(s) in a unity feedback
configuration with an amplifier gain K as in Figure 7.31.
$$G(s) = \frac{s + 3}{s(s - 2)[(s + 1)^2 + 4]}$$
The root locus sketching procedure is applied as follows.
Figure 7.31: Unity feedback configuration.
Evans form: The closed-loop system is
$$G_{CL}(s) = \frac{KG(s)}{1 + KG(s)}$$
where K is the parameter to be varied. This system is already in the correct form.
Starting and Ending Points: The open-loop poles and zeros are $p_1 = 0$, $p_2 = 2$, $p_{3,4} = -1 \pm i2$,
and $z = -3$. There are four poles (n) and one zero (m), so three (n − m) poles go to infinity and
one pole goes to the zero.
Real Axis Segments: The root locus lies to the left of an odd number of poles and zeros on the real
axis.
Figure 7.32: Root locus with real axis segment drawn.
Draw the Asymptotes: First, find the asymptote angles
$$\phi_\ell = \frac{180^\circ + \ell\,360^\circ}{n - m}, \quad \ell = 0, 1, 2 \qquad\Longrightarrow\qquad \phi_0 = 60^\circ,\ \phi_1 = 180^\circ,\ \phi_2 = 300^\circ.$$
Now find the asymptote centroid $\alpha$:
$$\alpha = \frac{\sum p_i - \sum z_i}{n - m} = \frac{[2 + 0 + (-1 + i2) + (-1 - i2)] - (-3)}{3} = 1$$
Figure 7.33: Root locus asymptotes.
Find Angles of Departure: The angle criterion requires $\sum\theta_z - \sum\theta_p = 180^\circ$, and the given data are
$$\theta_{p_1} = \tan^{-1}\left(\tfrac{2}{-1}\right) = 117^\circ \qquad \theta_{p_2} = \tan^{-1}\left(\tfrac{2}{-3}\right) = 146^\circ \qquad \theta_{p_3} = 90^\circ \qquad \theta_z = 45^\circ$$
Figure 7.34: Root locus departure angle detail.
The departure angle is found from
$$[45^\circ] - [\phi_d + 117^\circ + 146^\circ + 90^\circ] = 180^\circ$$
as $\phi_d = -488^\circ = 232^\circ$.
Find Breakaway Points On The Real Axis: For this, guess. The only loci that meet on the real
axis emanate from $p_1 = 0$ and $p_2 = 2$. The breakaway angle is always 90°. The breakaway
point is somewhere between 0 and 2.
Complete The Sketch:
Figure 7.35: The completed root locus sketch.
The root locus shows that the closed-loop system is unstable for all K. Compensation other than a
simple amplifier gain is needed to draw the closed-loop roots into the left-half plane and stabilize the
system.
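The conclusion can be confirmed by brute force. The sketch below (Python/NumPy, assumed tooling) scans the gain and reports the largest pole real part; because the s³ coefficient of the characteristic polynomial is zero, the pole sum is zero and at least one pole always sits in the right half plane.

import numpy as np

# G(s) = (s+3) / (s(s-2)[(s+1)^2+4]): closed-loop characteristic polynomial is
# s^4 + s^2 + (K - 10)s + 3K.  Scan K and record the largest pole real part.
den = np.array([1.0, 0.0, 1.0, -10.0, 0.0])     # s(s-2)(s^2+2s+5)
num = np.array([1.0, 3.0])                       # s + 3

for K in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    charpoly = den + K * np.pad(num, (len(den) - len(num), 0))
    worst = np.roots(charpoly).real.max()
    print(f"K = {K:7.1f}  max Re(pole) = {worst:+.3f}")   # stays positive for all K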
CHAPTER 8
Root Locus Compensator Design
The closed-loop poles of a system in a unity feedback configuration given by
$$G_{CL}(s) = \frac{\bar K G(s)K(s)}{1 + \bar K G(s)K(s)}$$
as sketched in a root locus are determined by G(s) and K(s). With proportional control, the closed-loop
poles are determined entirely by the system G(s). If there is no point on the root locus that meets the
design specifications for speed, accuracy and stability, then dynamics can be added to K(s) to shape the
root locus. This is called compensation.
8.1 Proportional-Derivative Control
Proportional-derivative (PD) control shapes the closed-loop transient response of a system by adding a
zero to the compensator dynamics
$$K(s) = K_P + K_D s = K(1 + Ts)$$
By carefully placing the PD controller zero, the root locus can be shaped to improve damping, rise time
and stability without using excessively large control gains.
Example. The problem is to command the position of a simple mass by applying a force. Two
approaches are considered: proportional control and proportional-derivative control.
The time-differential equation of motion is
$$m\ddot x = f$$
Given that m = 1, the transfer function from the applied force to the mass displacement is
$$X(s) = \frac{1}{s^2}F(s)$$
Figure 8.1: A simple mass with an applied force.
For a first approach, command the position of the mass by generating a force F(s) using simple
proportional control as shown in Figure 8.2.
Figure 8.2: Mass position controlled with proportional feedback.
The root locus for $G(s)K(s) = \frac{K}{s^2}$ shows that the closed-loop poles are imaginary for all K. This means the mass position
command to response transfer function has no damping for any feedback gain K. To achieve a desired,
well-damped response, dynamics are added to the compensator to shift the root locus to the left.
The proportional controller may be thought of as acting like a spring, since the mass restoring force is
proportional to a distance, the error in the commanded position. The two components of the closed-loop
system are the mass and the controller.
Figure 8.3: Root locus is along the imaginary axis.
The mass stores kinetic energy while the proportional controller stores potential energy. Like a spring-mass
system, oscillation occurs as the two elements exchange energy. Since there are no energy
dissipation elements in the closed-loop system, the impulse response shows no damping. This is seen in
the closed-loop transfer function, a second-order system with a damping coefficient of zero:
$$G_{CL}(s) = \frac{KG(s)}{1 + KG(s)} = \frac{K}{s^2 + K} \sim \frac{1}{ms^2 + cs + k} \quad\text{with } c = 0.$$
Now generate the force F(s) using proportional-derivative control as shown in Figure 8.4:
$$K(s) = K(Ts + 1) \quad\text{and let } T = 1$$
Figure 8.4: Mass position controlled with proportional-derivative feedback.
The loop transfer function G(s)K(s) is
$$G(s)K(s) = K\,\frac{s + 1}{s^2}$$
and the closed-loop system is
$$G_{CL}(s) = \frac{G(s)K(s)}{1 + G(s)K(s)} = \frac{K(s + 1)}{s^2 + Ks + K}$$
The root locus for $G(s)K(s) = K\frac{s+1}{s^2}$ shows that the closed-loop poles are complex for low gains K
and both real and stable for higher gains.
Figure 8.5: Root locus shows that the PD controller adds damping.
An interpretation is that the proportional-derivative controller acts like a spring and damper, since
the mass restoring force is proportional to both a distance and a speed. The derivative control component
of the closed loop, acting like a damper, provides the system with an energy dissipation element. For a
proportional-derivative controller given by $K(s) = K_P + K_D s$, the closed-loop system is
$$G_{CL}(s) = \frac{G(s)K(s)}{1 + G(s)K(s)} = \frac{K_D s + K_P}{s^2 + K_D s + K_P}$$
The derivative control gain $K_D$ acts like a damping coefficient.
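The contrast between P and PD control of the mass is easy to see in simulation. The sketch below uses SciPy's signal module (an assumed tool; the gain K = 4 is an arbitrary illustrative value) to step both closed loops.

import numpy as np
from scipy import signal

# Step responses of the mass position loop: P control K/(s^2 + K) versus
# PD control K(s+1)/(s^2 + Ks + K), both with the illustrative gain K = 4.
K = 4.0
t = np.linspace(0.0, 10.0, 1000)

_, y_p  = signal.step(([K], [1.0, 0.0, K]), T=t)     # undamped oscillation
_, y_pd = signal.step(([K, K], [1.0, K, K]), T=t)    # damped response

print("P-control  final swing :", y_p[-200:].min(), "to", y_p[-200:].max())
print("PD-control final value :", y_pd[-1])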
Example (DC Motor Position Control). Consider again a DC motor position control system. The
transfer function from the applied voltage to the motor position is
$$G(s) = \frac{\Theta(s)}{V(s)} = \frac{1}{s(\tau s + 1)}$$
The closed-loop system illustrated in Figure 8.6 has either proportional gain feedback K(s) = K or
proportional-derivative feedback K(s) = K(Ts + 1).
Figure 8.6: DC motor position control system.
With proportional gain feedback, the loop transfer function is
$$G(s)K(s) = K\,\frac{1}{s(\tau s + 1)}$$
The root locus, shown on the left in Figure 8.7, shows that damping approaches zero as the control gain
increases.
Figure 8.7: Root loci comparison of proportional and proportional-derivative controllers.
With proportional-derivative feedback, the loop transfer function is
$$G(s)K(s) = K\,\frac{Ts + 1}{s(\tau s + 1)}$$
The root locus, shown on the right in Figure 8.7, shows improved damping as the system becomes
critically damped then overdamped as the control gain increases.
Proportional-derivative control responds to the change in command error so, in some sense, it knows
what is coming next. Sometimes a proportional-derivative controller is described as an anticipatory
controller. Proportional-derivative control is used in two ways:
- For the same K (DC gain), proportional-derivative control provides faster rise-time and peak-time
with smaller overshoot.
- Increase K to achieve less steady-state error without affecting the transient response.
Figure 8.8: Transient response comparison of proportional and proportional-derivative controllers.
8.2 Lead Compensation
Proportional-derivative control works well to improve the transient response, but pure derivatives tend
to amplify noise. Introduce a small but high-frequency disturbance and a PD controller will generate
a large control signal. Lead compensation helps to reduce this effect by balancing the PD control zero
with an added pole.
A lead compensator has the form
$$K(s) = \frac{s + z}{s + p} \qquad\text{with } 0 \le z < p.$$
Lead compensation shifts the root locus to the left. To see how this works, consider the angle criterion.
$$\sum\phi_z - \sum\phi_p = 180^\circ. \tag{8.1}$$
Figure 8.9: A lead compensator pole-zero configuration.
On the left side of Figure 8.10 is a root locus for a simple loop transfer function with two real poles. The
root locus shows that as the gain is increased, the damping decreases. On the right side of Figure 8.10,
a desired closed-loop pole is shown as being shifted to the left of the root locus. With the closed-loop
pole placed here, the rise-time and steady-state error are the same as if the pole were on the root locus,
but there is less overshoot.
Figure 8.10: Shift the root locus to the left to improve damping.
An application of the angle criterion (8.1) shows that with an extra pole and zero pair placed far to
the left and on the real axis, the root locus may be altered to pass through a desired closed-loop pole
location.
Figure 8.11: Shifting the root locus requires adding pole-zero pairs to the compensator.
To put the point on the root locus, put in a pole and zero pair that contribute 10°. If z and p are chosen
properly, they will contribute just enough to put the shifted point on the root locus. Shifting the root
locus to the left allows for the same damping ratio $\zeta$, increased controller gain K for improved
steady-state response and increased $\omega_n$ for faster rise-time.
Figure 8.12: The shifted root locus.
Example (DC Motor Position Control). Consider again a DC motor position control system. The
transfer function from the applied voltage to the motor position with a time constant $\tau = \frac{1}{2}$ is
$$G(s) = \frac{\Theta(s)}{V(s)} = \frac{1}{s(\tau s + 1)} = \frac{2}{s(s + 2)}$$
The design specifications are stated in terms of a required maximum overshoot, maximum rise-time and
maximum steady-state command tracking error:
$$M_p \le 0.2 \qquad T_r \le 0.5\ \text{sec} \qquad \text{steady-state error} \le 0.1\ (10\%).$$
Proportional control: For a first approach, command the position of the motor by generating a voltage
V(s) using a proportional controller.
Meet the overshoot specification: $M_p \le 0.2$.
$$M_p = e^{-\pi\zeta/\sqrt{1 - \zeta^2}} \le 0.2 \qquad\Longrightarrow\qquad \zeta \ge 0.46$$
On the root locus
$$0 = 1 + KG(s) = 1 + K\left(\frac{1}{R_{p_1}R_{p_2}}\right)e^{-i(\phi_{p_1} + \phi_{p_2})}$$
Thus, by the magnitude criterion
$$\frac{K}{R_{p_1}R_{p_2}} = 1 \tag{8.2}$$
Find $R_{p_1}$ and $R_{p_2}$ by simple trigonometry:
$$R_{p_1}\sin 27^\circ = 1 \quad\text{and}\quad R_{p_2}\sin 27^\circ = 1 \qquad\Longrightarrow\qquad R_{p_1} = R_{p_2} = 2.2$$
Figure 8.13: Use a root locus to meet an overshoot specification.
By (8.2),
$$K \le K_{\max} = (2.2)^2 = 4.84$$
Meet the rise-time specification: $T_r \le 0.5$ sec. With
$$\zeta \ge 0.46 \quad\text{and}\quad \zeta\omega_n = 1 \qquad\Longrightarrow\qquad \omega_n \le 2.2 \qquad\Longrightarrow\qquad T_r \approx \frac{\pi}{2\omega_n} \ge 0.72\ \text{sec}$$
So the overshoot and rise-time specifications can't both be met.
Meet the steady-state error specification: The steady-state error is required to be $\le 0.1$ for a ramp
input.
$$\Theta_e(s) = \frac{1}{1 + KG(s)}\,\Theta_c(s) \qquad\text{where}\qquad G(s) = \frac{2}{s(s + 2)} \quad\text{and}\quad \Theta_c(s) = \frac{1}{s^2}$$
Apply the final value theorem
$$\theta_e(\infty) = \lim_{s\to 0} s\,\Theta_e(s) = \lim_{s\to 0} s\,\frac{1}{1 + K\frac{2}{s(s+2)}}\,\frac{1}{s^2} = \frac{1}{K}$$
Then, given the steady-state error specification, $\theta_e(\infty) \le 0.1 \Rightarrow K \ge 10$.
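The ramp-error bookkeeping used here generalizes to any type-1 loop: e(∞) = 1/K_v with K_v = lim_{s→0} sL(s). A minimal sketch (Python/NumPy, assumed tooling; ramp_error is our own helper name, and the loop 2K/(s(s+2)) is the one above):

import numpy as np

def ramp_error(K, num, den):
    """Ramp steady-state error 1/Kv for a type-1 loop L(s) = K*num(s)/den(s),
    where den has a single root at s = 0 (den = s * den_rest)."""
    den_rest = den[:-1]                       # divide out the factor of s
    Kv = K * np.polyval(num, 0.0) / np.polyval(den_rest, 0.0)
    return 1.0 / Kv

print(ramp_error(10.0, [2.0], [1.0, 2.0, 0.0]))   # 1/K for this loop -> 0.1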
In summary, the design specifications cannot be met with proportional control since
$$M_p \le 0.2 \;\Rightarrow\; K \le 4.84 \qquad\qquad \theta_e(\infty) \le 0.1 \;\Rightarrow\; K \ge 10$$
Lead compensation: With lead compensation, the same damping can be achieved with a higher
gain. This way, both steady-state and overshoot requirements are met. For simplicity, put the lead
compensator zero on top of the system pole. Let
$$K(s) = K\,\frac{s + 2}{s + 5} \qquad\Longrightarrow\qquad K(s)G(s) = K\,\frac{s + 2}{s + 5}\cdot\frac{2}{s(s + 2)} = K\,\frac{2}{s(s + 5)}$$
Meet the overshoot specification: $M_p \le 0.2 \Rightarrow \zeta \ge 0.46$.
On the root locus, by the magnitude criterion,
$$\left|\frac{K R_z}{R_{p_1}R_{p_2}R_{p_3}}\right| = \left|\frac{K}{R_{p_1}R_{p_3}}\right| = 1$$
Find $R_{p_1}$ and $R_{p_3}$:
$$R_{p_1}\sin 27^\circ = 2.5 \quad\text{and}\quad R_{p_3}\sin 27^\circ = 2.5 \qquad\Longrightarrow\qquad R_{p_1} = R_{p_3} = 5.48$$
Then
$$\frac{K_{\max}}{(5.48)^2} = 1 \qquad\Longrightarrow\qquad K \le K_{\max} = 30.$$
Figure 8.14: Use lead compensation to shift the root locus to the left and improve the damping.
Meet the rise-time specification: $T_r \le 0.5$ sec. With
$$\zeta \ge 0.46 \quad\text{and}\quad \zeta\omega_n = 2.5 \qquad\Longrightarrow\qquad \omega_n \le 5.5 \qquad\Longrightarrow\qquad T_r \approx \frac{\pi}{2\omega_n} \ge 0.29\ \text{sec}$$
With a lead compensator, the overshoot and rise-time specifications are consistent.
Meet the steady-state error specification: The steady-state error is required to be $\le 0.1$ for a ramp
input.
$$\Theta_e(s) = \frac{1}{1 + K(s)G(s)}\,\Theta_c(s) \qquad\text{where now}\qquad K(s)G(s) = K\,\frac{2}{s(s + 5)} \quad\text{and}\quad \Theta_c(s) = \frac{1}{s^2}$$
Apply the final value theorem
$$\theta_e(\infty) = \lim_{s\to 0} s\,\Theta_e(s) = \lim_{s\to 0} s\,\frac{1}{1 + K\frac{2}{s(s+5)}}\,\frac{1}{s^2} = \frac{5}{2K}$$
Then, given the steady-state error specification, $\theta_e(\infty) \le 0.1 \Rightarrow K \ge 25$.
In summary, choose $K(s) = K\frac{s+2}{s+5}$ with $25 \le K \le 30$ to meet the overshoot, rise-time and
steady-state error specifications:
$$M_p \le 0.2 \;\Rightarrow\; K \le 30 \qquad\qquad \theta_e(\infty) \le 0.1 \;\Rightarrow\; K \ge 25$$
8.3 Proportional-Integral Control
Suppose lead compensation is used to fix the transient response but that additional steady-state design
specifications remain to be satisfied. A pole at the origin, an integrator, should help to meet the steady-state
requirements, but it will also change the root locus and the transient response. A proportional-integral
(PI) controller adds a zero close to the origin so the root locus and the transient response are not
affected by the integrator.
$$K(s) = K_P + \frac{K_I}{s} = \frac{K_P s + K_I}{s} = K\left(\frac{s + z}{s}\right)$$
Angles contributed by the compensator pole at the origin and the zero close to the origin nearly cancel:
$$\phi_z - \phi_p \approx 0$$
Proportional-integral control improves the steady-state error without changing the transient response
($M_p$, $T_r$, $T_p$).
Figure 8.15: The root locus is not changed by the PI controller.
8.4 Lag Compensation
Lag compensation approximates proportional-integral control by placing the pole close to but not at the
origin. Lag compensators are easier to implement than proportional-integral controllers as they require
no active circuits. A lag compensator has the form
$$K(s) = \frac{s + z}{s + p} \qquad\text{with } z > p > 0$$
Lag compensation improves the steady-state error but does not change the system type.
Figure 8.16: A lag compensator pole-zero configuration.
The compensator pole and zero pair shift the root locus to the right, but only slightly. To see how lag
compensation works, consider a step input to a system in unity feedback form. The transfer function
from the command to the error is
$$E(s) = \frac{1}{1 + K(s)G(s)}\,R(s)$$
where the controller and step input are
$$K(s) = \frac{s + z}{s + p} \qquad\text{and}\qquad R(s) = \frac{1}{s}.$$
Figure 8.17: A unity feedback configuration.
Apply the final value theorem
$$e(\infty) = \lim_{s\to 0} sE(s) = \lim_{s\to 0} s\,\frac{1}{1 + K(s)G(s)}\,\frac{1}{s} = \frac{1}{1 + \frac{z}{p}G(0)}$$
So make the ratio $\frac{z}{p}$ large to make the steady-state error small.
Example (A Typical Lag Compensator).
$$K(s) = \frac{s + 0.1}{s + 0.01}$$
The system position constant is
$$K_P = K(0)G(0) = 10\,G(0)$$
The lag improves $K_P$ by a factor of 10 and the steady-state error by about the same factor.
8.5 Compensator Design By Root Locus
The text [5] uses second-order system design specifications and then goes into painful detail to carefully
place poles and zeros with three decimal places of accuracy. Experience shows that second-order
approximations are not good often enough to make this labor worthwhile. Instead use
- Rules of Thumb
- Trial and Error
- Experience
The following sections present suggested approaches to lead, lag and lead-lag compensator design.
8.5.1 Lead Compensator Design
Use proportional-derivative (PD) or lead compensation to improve transient response.
1. Choose the lead compensator zero.
- Use the lead zero to cancel the largest real pole of the system unless this is at the origin.
- Iterate by moving the lead zero to the right or left of the closest real pole.
- Use the design criteria $M_p$, $T_r$, $T_s$ to decide where to put the lead zero.
2. Choose the lead compensator pole. Do this by setting the lead pole three to twenty times the
value of the lead zero. This forms a compromise between noise suppression and compensator
effectiveness.
- If the pole is too far to the left, the lead zero looks like a pure differentiator, noise
amplification is high and the actuator over-heats from stimulation by noise energy.
- If the pole is too far to the right, the lead pole cancels the effect of the zero, the root locus
remains close to the uncompensated shape and the lead compensator is ineffective.
8.5.2 Lag Compensator Design
Use proportional-integral (PI) or lag compensation to improve steady-state error.
1. Find the ratio $\frac{z}{p}$, the lag zero over the lag pole, from the required improvement in the system
position, velocity or acceleration constant.
2. Choose the lag pole location. The ratio $\frac{z}{p}$ then determines the lag zero location. This forms a
compromise between the rate at which the steady-state error decays and the effect of the lag
on the rest of the root locus.
- If the pole is too far to the left, the steady-state error decays quickly but the transient response
design specifications may no longer be met.
- If the pole is too far to the right, the steady-state error specifications are met but only after a
long delay.
8.5.3 Lead-Lag Compensator Design
Put PI and PD, or lead and lag, compensation together to get PID or lead-lag compensation.
$$\text{PID:}\quad K(s) = K_P + K_D s + \frac{K_I}{s} \qquad\qquad \text{Lead-lag:}\quad K(s) = K\left(\frac{s + z_d}{s + p_d}\right)\left(\frac{s + z_g}{s + p_g}\right)$$
with $p_d > z_d$ and $z_g > p_g$.
1. Simulate to find the needed improvements in transient response ($M_p$, $T_r$, $T_s$).
2. Design the PD compensator $K(s) = K_D(s + z)$ or the lead compensator $K(s) = K_D\frac{s + z_d}{s + p_d}$.
3. Simulate:
(a) Redesign the PD or lead compensator if needed.
(b) Find the needed improvement in steady-state response.
4. With the PD or lead compensator in place, design a PI compensator $K(s) = K_P + \frac{K_I}{s}$ or a lag
compensator $K(s) = K_g\frac{s + z_g}{s + p_g}$.
5. Simulate. Redesign the PD or lead and PI or lag compensators if needed.
6. Form the PID or lead-lag compensator.
$$\text{PID:}\quad K(s) = K_D(s + z)\left(K_P + \frac{K_I}{s}\right) = \bar K_P + \bar K_D s + \frac{\bar K_I}{s}$$
$$\text{Lead-Lag:}\quad K(s) = K_D\,\frac{s + z_d}{s + p_d}\;K_g\,\frac{s + z_g}{s + p_g} = \bar K\,\frac{(s + z_d)(s + z_g)}{(s + p_d)(s + p_g)}.$$
8.6 Aircraft Pitch-Rate Autopilot
An aircraft pitch-rate command system is to be developed for a Lear Jet in steady level cruise at 40,000
feet altitude and Mach 0.7. Figure 8.18 illustrates a functional block diagram.
Figure 8.18: Aircraft pitch-rate command system functional block diagram.
- q: aircraft pitch-rate
- $\delta_e$: aircraft elevator deflection
- $q_c$: commanded pitch-rate
- v: voltage applied to elevator servo
- $q_m$: measured pitch-rate
- $w_g$: wind gust
8.6.1 System Component Models
The aircraft dynamics are approximated by the short-period longitudinal dynamics
$$\dot\alpha = -\frac{2}{\tau}\alpha + q$$
$$\dot q = -\omega_n^2\left[\alpha - M_\delta(\delta_e + w_g)\right]$$
where
- $\alpha(t)$: angle of attack
- $\delta_e(t)$: elevator deflection
- q(t): pitch-rate
- $w_g$: wind gust
The constants $\tau$, $\omega_n$ and $M_\delta$, which depend on the aircraft configuration and flight condition, are
$$\frac{1}{\tau} = 0.78\ \text{sec}^{-1}, \qquad \omega_n = 3.39\ \frac{\text{rad}}{\text{sec}}, \qquad M_\delta = 1.51$$
The pitch-rate gyro and elevator servo dynamics are very fast and can be neglected:
$$\text{rate gyro:}\quad \frac{q_m(s)}{q(s)} = 1\ \frac{\text{volt}}{\text{rad/sec}} \qquad\qquad \text{elevator servo:}\quad \frac{\delta_e(s)}{v(s)} = 1\ \frac{\text{rad}}{\text{volt}}$$
8.6.2 Design Goals
The pitch-rate control system design specifications are as follows.
- low overshoot: overshoot ≤ 5%, i.e., damping ratio $\zeta \ge 0.7$
- fast rise-time: $T_r \le 0.5$ sec
- fast settling-time: $T_s \le 2.0$ sec
- steady-state error: < 5% for a step pitch-rate command and < 5% for a step wind gust disturbance
8.6.3 Design Approach
1. Prepare a block diagram and find the transfer functions from the pitch-rate command to the pitch-rate
and from the wind gust disturbance to the pitch-rate.
2. Try to meet the design goals using a simple proportional controller.
(a) Prepare a root locus plot and use a second-order system approximation to show that the overshoot,
rise-time and settling-time specifications can be met. Find the proportional control gain.
(b) Using the proportional control gain found in (2a), show that the steady-state errors for both a
step pitch-rate command and a step wind gust disturbance are much greater than the required
5%.
(c) Simulate the closed-loop system to verify the conclusions from (2a) and (2b).
3. Design a lag compensator to meet the steady-state error requirements. Simulate and verify that
the steady-state error decays quickly enough to meet the settling-time requirement. Also verify
that the rise-time and overshoot requirements are met.
8.6.4 System Block Diagram
Take the Laplace transform of the system model differential equations
$$s\alpha(s) = -\frac{2}{\tau}\alpha(s) + q(s)$$
$$sq(s) = -\omega_n^2\left[\alpha(s) - M_\delta\big(\delta_e(s) + w_g(s)\big)\right]$$
and solve for q(s)
$$q(s) = G(s)\delta_e(s) + G(s)w_g(s)$$
where
$$G(s) = \frac{\omega_n^2 M_\delta\left(s + \frac{2}{\tau}\right)}{s^2 + \frac{2}{\tau}s + \omega_n^2}$$
and where $\frac{1}{\tau} = 0.78\ \text{sec}^{-1}$, $\omega_n = 3.39\ \frac{\text{rad}}{\text{sec}}$, $M_\delta = 1.51$.
Now, plug in the given numbers
$$q(s) = \frac{17.3\,(s + 1.56)}{s^2 + 1.56s + 11.5}\left[\delta_e(s) + w_g(s)\right]$$
The open-loop poles and zeros are
$$p_{1,2} = -0.78 \pm i3.29 \qquad\text{and}\qquad z = -1.56$$
Notice that for this model, the open-loop transfer functions from the elevator deflection to the pitch-rate
and from the wind gust to the pitch-rate are the same.
Now that mathematical models for each system component are formed, a block diagram is
constructed from the functional block diagram, Figure 8.18, and the system component models. The
block diagram is illustrated in Figure 8.19.
Figure 8.19: Aircraft pitch-rate command system block diagram.
8.6.5 Closed-Loop Transfer Functions
The closed-loop transfer functions from the pitch-rate command and the wind gust disturbance to the
pitch-rate are
$$\frac{q(s)}{q_c(s)} = \frac{G(s)K(s)}{1 + G(s)K(s)} \qquad\text{and}\qquad \frac{q(s)}{w_g(s)} = \frac{G(s)}{1 + G(s)K(s)}$$
Later it is convenient to have the pitch-rate error, which is
$$E(s) \triangleq q_c(s) - q(s)$$
so that
$$E(s) = q_c(s) - \left[\frac{KG(s)}{1 + KG(s)}\,q_c(s) + \frac{G(s)}{1 + KG(s)}\,w_g(s)\right] = \frac{1}{1 + KG(s)}\,q_c(s) - \frac{G(s)}{1 + KG(s)}\,w_g(s)$$
8.6.6 Design 1: Proportional Control
A proportional controller is the easiest to design and build and should be tried first. The root locus,
Figure 8.20, shows that a damping ratio $\zeta = 0.7$ can be achieved. Since this is a second-order system, it
is easy to solve for the value of the gain K that yields the closed-loop damping ratio $\zeta = 0.7$:
$$K = 0.26 \qquad\Longrightarrow\qquad \zeta = 0.7 \ \text{and}\ \omega_n = 4.3$$
Assuming that the system behaves as a second-order system, the overshoot and settling-time should be
within the specifications. This will have to be verified with a simulation. But before that, check the
steady-state error.
Figure 8.20: Root locus for a proportional controller.
It was found above that the pitch-rate error is given by
$$E(s) = \frac{1}{1 + KG(s)}\,q_c(s) - \frac{G(s)}{1 + KG(s)}\,w_g(s)$$
First, apply a step pitch-rate command
$$q_c(s) = \frac{1}{s} \qquad\text{and}\qquad w_g(s) = 0.$$
The final value theorem shows that
$$E(\infty) = \lim_{s\to 0} sE(s) = \lim_{s\to 0} s\,\frac{1}{1 + KG(s)}\,\frac{1}{s} = \frac{1}{1 + KG(0)} = 0.62$$
since
$$G(s) = \frac{\omega_n^2 M_\delta\left(s + \frac{2}{\tau}\right)}{s^2 + \frac{2}{\tau}s + \omega_n^2} \qquad\Longrightarrow\qquad G(0) = \frac{2}{\tau}M_\delta = 2.36$$
Now, apply a step wind gust disturbance
$$q_c(s) = 0 \qquad\text{and}\qquad w_g(s) = \frac{1}{s}$$
By the final value theorem, the steady-state error is
$$E(\infty) = \lim_{s\to 0} sE(s) = \lim_{s\to 0} s\,\frac{-G(s)}{1 + KG(s)}\,\frac{1}{s} = \frac{-G(0)}{1 + KG(0)} = -1.46$$
The steady-state error is huge and well beyond the 5% design specification for both the step pitch-rate
command and the step wind gust disturbance. Figure 8.21 shows the time response from a simulation.
Introduction to Feedback Control Systems Rev. May 4, 2010
Page 148 Root Locus Compensator Design
Figure 8.21: Step responses with proportional controller (step pitch-rate command and step wind gust disturbance).
8.6.7 Design 2: Integral Compensation
Using an integral controller
$$K(s) = \frac{K}{s}$$
the steady-state error for both the step pitch-rate command and the step wind gust disturbance goes to
zero. However, since the root locus shows that the damping is arbitrarily small, lag compensation will
have to be used.
8.6.8 Design 3: Lag Compensation
One approach to lag compensator design is to use proportional control to get the desired transient
response, then put in the lag compensation to fix the steady-state error. A proportional controller was
found in the first design attempt as K = 0.26. The steady-state error specification requires that
$$E(\infty) = \frac{1}{1 + K(0)G(0)} \le 0.05 \qquad\Longrightarrow\qquad K(0)G(0) \ge 19$$
Using only proportional control, the DC gain is
$$KG(0) = 0.62$$
so the lag compensator has to improve the DC gain by a factor of 31 or better. As a first guess, choose
the lag compensator pole and zero as
$$K(s) = \frac{s + 0.31}{s + 0.01}$$
The pole and zero are both close enough to the origin and to each other so that the root locus is not
affected much. Simulation results, which are not shown here, show that the steady-state error meets the
requirement but, because the system response has so far to go, the slow lag pole causes an unacceptably
long settling-time.
To improve the rate at which the lag compensator takes out the steady-state error and improve the
settling-time, move the lag compensator pole and zero to the left. Their ratio is still 31 and the lag zero
coincides with the aircraft zero:
$$K(s) = K\,\frac{s + 1.55}{s + 0.05}$$
Now the lag pole is fast enough to slightly affect the root locus, so the gain K that yields the closed-loop
damping ratio $\zeta = 0.7$ has to be recomputed. This computation is not done by hand.
$$K = 0.36 \qquad\Longrightarrow\qquad \zeta = 0.7$$
The root locus is shown in Figure 8.22. Compare this root locus with the one found using only
proportional control, Figure 8.20. With K = 0.36, the DC gain is larger than required to meet the
pitch-rate command steady-state error specification. The step response shown in Figure 8.23 indicates
the steady-state error is about 5% but the decay still is much too slow.
8.6.9 Design 4: Fast Lag Compensation
In this design, the lag zero is placed to the left of the aircraft zero. The ratio of the lag zero to the lag
pole is still 31 but now, in the root locus, the lag pole migrates towards the aircraft zero. The complex
aircraft poles migrate towards the lag zero and infinity.
$$K(s) = K\,\frac{s + 4.65}{s + 0.15}$$
Figure 8.22: Root locus for K(s) = K(s + 1.55)/(s + 0.05).
Figure 8.23: Step responses for K(s) = K(s + 1.55)/(s + 0.05) (step pitch-rate command and step wind gust disturbance).
The root locus, as shown in Figure 8.24, is quite different from the uncompensated root locus of
Figure 8.20. So again, the gain K that yields the closed-loop damping ratio $\zeta = 0.7$ is found by
computer:
$$K = 0.63 \qquad\Longrightarrow\qquad \zeta = 0.7$$
With this gain K, the DC gain is
$$\text{DC gain} = K\,\frac{s + 4.65}{s + 0.15}\,G(s)\bigg|_{s=0} = 46.0$$
The computed steady-state error for a step pitch-rate command is about 2.1%. The computed
steady-state error for a step wind gust disturbance is about 5.0%.
Figure 8.24: Root locus for K(s) = K(s + 4.65)/(s + 0.15).
The step response shown in Figure 8.25 shows that the steady-state error is 5% or less and the
settling-time is well under 2 sec. The rise-time is less than 0.5 sec and the overshoot is about 5%. The
transient response and steady-state error both meet all design specifications, so no further compensation
is needed. Bode magnitude and phase plots shown in Figure 8.26 for the fast lag compensator design
show that the gain margin is infinite and the phase margin is greater than 90°.
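A simulation sketch of the final design (Python with SciPy's signal module, an assumed tool) should reproduce the step-command numbers quoted above approximately; the plant and compensator coefficients are the ones derived in this section.

import numpy as np
from scipy import signal

# Closed-loop pitch-rate step response for the fast lag design:
# G(s) = 17.3(s+1.56)/(s^2+1.56s+11.5),  K(s) = 0.63(s+4.65)/(s+0.15).
Gn, Gd = np.array([17.3, 17.3 * 1.56]), np.array([1.0, 1.56, 11.5])
Kn, Kd = 0.63 * np.array([1.0, 4.65]), np.array([1.0, 0.15])

Ln, Ld = np.polymul(Gn, Kn), np.polymul(Gd, Kd)          # loop transfer function GK
cl_num, cl_den = Ln, np.polyadd(Ld, Ln)                  # GK / (1 + GK)

t = np.linspace(0.0, 4.0, 2000)
_, q = signal.step((cl_num, cl_den), T=t)
print("steady-state value ~", q[-1])                     # about 0.98 (roughly 2% error)
print("overshoot ~", 100.0 * (q.max() - q[-1]) / q[-1], "%")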
Figure 8.25: Step responses for K(s) = K(s + 4.65)/(s + 0.15) (step pitch-rate command and step wind gust disturbance).
Figure 8.26: Bode plots for K(s) = K(s + 4.65)/(s + 0.15).
CHAPTER 9
Frequency Response
An important result is that for a stable linear system, the response to a sinusoidal input is also a sinusoid
but with a different amplitude and phase. System frequency response analysis concerns the relationship
between the output amplitude and phase and the system transfer function.
9.1 Background
The system frequency response can be found experimentally by a frequency sweep, illustrated in
Figure 9.1, or by measuring the system impulse response as in Figure 9.2, where $G(\omega) = \mathcal{F}[g(t)]$ is
the Fourier transform.
Figure 9.1: System response by frequency sweep.
Figure 9.2: System response by impulse response.
A frequency sweep relies on the important observation that for a linear system driven by a sine wave,
the output is also a sine wave with the same frequency as the input but with a different amplitude and
phase. With this observation, frequency response is related to a transfer function G(s) as follows.
Theorem 9.1. Given Y(s)/R(s) = G(s), a stable transfer function, let the input be a sine r(t) = A sin ωt. In
steady state, the output is

y_ss(t) = B sin(ωt + φ)

where the gain and phase are

B/A = |G(iω)|,   φ = ∠G(iω)
Proof. Since r(t) = A sin ωt so that R(s) = Aω/(s^2 + ω^2), the output is given by

Y(s) = G(s)R(s) = [(s - z_1)···(s - z_m) / ((s - p_1)···(s - p_n))] · Aω/(s^2 + ω^2)

where z_k are the m zeros of G and p_j are the n poles of G, and where again, G(s) could be a closed-loop or an open-loop transfer function. Find the partial fraction
expansion

Y(s) = K_1/(s - p_1) + ··· + K_n/(s - p_n) + (B_1 s + B_2)/(s^2 + ω^2)

Then

y(t) = K_1 e^{p_1 t} + ··· + K_n e^{p_n t}   (transient response)   +   B sin(ωt + φ)   (forced response)

Assume G(s) is stable so that

K_j e^{p_j t} → 0 as t → ∞

and in steady state

Y_ss(s) = (B_1 s + B_2)/(s^2 + ω^2)     (9.1)
Now, find B_1 and B_2 to find y_ss(t).

(B_1 s + B_2)|_{s=iω} = (s^2 + ω^2)Y(s)|_{s=iω} = (s^2 + ω^2)G(s) · Aω/(s^2 + ω^2)|_{s=iω} = G(s)Aω|_{s=iω}

So

iωB_1 + B_2 = AωG(iω)

Since G(iω) is complex,

G(iω) = |G(iω)| e^{iφ(ω)} = |G(iω)| cos φ(ω) + i|G(iω)| sin φ(ω)

Multiply by Aω to get a relation for B_1 and B_2:

AωG(iω) = Aω|G(iω)| cos φ(ω) + iAω|G(iω)| sin φ(ω) = B_2 + iωB_1

So write B_1 and B_2 as

B_1 = A|G(iω)| sin φ(ω)   and   B_2 = Aω|G(iω)| cos φ(ω)

By (9.1) it follows that

y_ss(t) = B_1 cos ωt + (B_2/ω) sin ωt = A|G(iω)| (sin φ cos ωt + cos φ sin ωt) = A|G(iω)| sin(ωt + φ(ω))
Theorem 9.1 states that the frequency response of a stable transfer function G(s) is found by
evaluating G(s) along the imaginary axis. For Y(s) = G(s)R(s) with the input r(t) = A sin ωt,

y(t) = B sin(ωt + φ)

with

B/A = |G(iω)|,   φ = ∠G(iω)

Figure 9.3: System frequency response evaluated as gain and phase with frequency. (The gain |G(iω)| and phase ∠G(iω) plotted against frequency ω.)
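As a quick numerical check of Theorem 9.1 (an addition, not from the text), the sketch below drives an assumed stable plant, G(s) = 5/(s^2 + 2s + 5), with a sinusoid using scipy and compares the simulated steady-state amplitude with |G(iω)|; the plant and the frequency ω = 3 rad/sec are arbitrary choices.

```python
# Minimal sketch: verify Theorem 9.1 on an assumed example plant
# G(s) = 5/(s^2 + 2s + 5) driven by r(t) = A sin(w t).
import numpy as np
from scipy import signal

G = signal.TransferFunction([5.0], [1.0, 2.0, 5.0])   # hypothetical stable G(s)
A, w = 1.0, 3.0

t = np.linspace(0.0, 40.0, 40001)
r = A * np.sin(w * t)
_, y, _ = signal.lsim(G, U=r, T=t)                    # time response to the sine input

# Predicted gain and phase from G(i w)
Giw = np.polyval(G.num, 1j * w) / np.polyval(G.den, 1j * w)
print("predicted  B/A =", round(abs(Giw), 4), " phase =", round(np.degrees(np.angle(Giw)), 1), "deg")

# Measured amplitude after the transient has decayed
tail = t > 30.0
print("simulated  B/A =", round(y[tail].max() / A, 4))
```

The two printed amplitudes agree once the transient terms K_j e^{p_j t} have died out.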
9.2 System Modeling
Figure 9.4: System models in the time, Laplace and frequency domains. (The impulse response g(t), the transfer function G(s) = (s - z_1)···(s - z_m)/((s - p_1)···(s - p_n)), and the frequency response gain |G(iω)| and phase ∠G(iω).)
Three ways to describe a system model have been discussed. A time-domain model is the system
impulse response, g(t). The Laplace transform of the impulse response, G(s), provides an s-domain
model, the transfer function.
G(s) = L[g(t)] = ∫_0^∞ g(t) e^{-st} dt

Evaluate G(s) along the imaginary axis:

G(iω) = ∫_0^∞ g(t) e^{-iωt} dt = ∫_{-∞}^{∞} g(t) e^{-iωt} dt   since g(t) = 0 for t < 0

which is just F[g(t)], the Fourier transform. System characteristics described by each kind of model are
related as in Table 9.1.
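A small numerical illustration of this relation (an addition, not from the text): for the assumed first-order example g(t) = e^{-t}, whose transfer function is G(s) = 1/(s + 1), the Fourier integral of the impulse response matches G(s) evaluated at s = iω.

```python
# Minimal sketch: G(i w) equals the Fourier transform of the impulse response.
# Assumed example: g(t) = exp(-t), so G(s) = 1/(s + 1).
import numpy as np

w = 2.0
t, dt = np.linspace(0.0, 60.0, 600001, retstep=True)
g = np.exp(-t)                                       # impulse response

G_fourier = np.sum(g * np.exp(-1j * w * t)) * dt     # crude numerical integral of g(t) e^{-iwt}
G_direct = 1.0 / (1j * w + 1.0)

print("Fourier integral :", np.round(G_fourier, 4))  # approximately 0.2 - 0.4j
print("G(i w) directly  :", np.round(G_direct, 4))
```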
9.3 Frequency Response Plots
Plots of the frequency response of a system come in several forms. However, each is a plot of the
open-loop frequency response and each gives information about closed-loop characteristics. Consider
the block diagram of Figure 9.5 and the open-loop transfer function

G(s) = F(s)/E(s) = H(s)P(s)K(s)

Three ways to plot the frequency response G(iω) are

1. Bode Plots: |G(iω)| vs. ω and ∠G(iω) vs. ω.
2. Nyquist Plots: Im[G(iω)] vs. Re[G(iω)].
3. Nichols Plots: |G(iω)| vs. ∠G(iω).

Each gives the same information but the interpretation is different.

                 time-domain             s-domain                frequency domain
  speed          rise-time, T_r          ω_n                     bandwidth
                 settling-time, T_s      ζω_n
  accuracy       steady-state error      final value theorem     DC gain
                 overshoot, M_p                                  phase margin
  stability                              Re(p_i) < 0             gain margin > 0
                                                                 phase margin > 0

Table 9.1: System characteristics in the time, Laplace and frequency domains.
Figure 9.5: A closed-loop feedback system. (Command R and fed-back signal F form the error E; the compensator K(s) produces the control U for the plant P(s), whose output is Y; the sensor H(s) closes the loop.)
9.4 Bode Plots
Consider the open-loop transfer function G(s)K(s) with the decomposition

G(s)K(s) = K̄ (s - z_1)···(s - z_m) / [(s - p_1)···(s - p_n)]
which is sometimes called the root locus form. Rewrite as

GK(iω) = [K̄ z_1···z_m / (p_1···p_n)] · (1 + iω/z_1)···(1 + iω/z_m) / [(1 + iω/p_1)···(1 + iω/p_n)]
       = K · r_{z1} e^{iθ_{z1}} ··· r_{zm} e^{iθ_{zm}} / (r_{p1} e^{iθ_{p1}} ··· r_{pn} e^{iθ_{pn}})

where K = K̄ z_1···z_m / (p_1···p_n), which is sometimes called the Bode form. The loop transfer function gain and phase follow as

|GK(iω)| = K r_{z1}···r_{zm} / (r_{p1}···r_{pn})     (9.2)

∠GK(iω) = (θ_{z1} + ··· + θ_{zm}) - (θ_{p1} + ··· + θ_{pn})     (9.3)
To make plotting easier, usually the gain |GK(iω)| is plotted in decibels, where

dB(·) ≜ 20 log_10(·)

By taking the logarithm, the loop transfer gain found as a product in (9.2) becomes a sum of gains of
lower-order terms. Once it is known how to plot the gain of first and second-order terms, the gain for
higher-order systems is found easily.

20 log|GK(iω)| = 20 [ log K + Σ_{k=1}^{m} log|1 + iω/z_k| - Σ_{j=1}^{n} log|1 + iω/p_j| ]

∠GK(iω) = Σ_{k=1}^{m} ∠(1 + iω/z_k) - Σ_{j=1}^{n} ∠(1 + iω/p_j)
Here are a few numbers to remember:

  1   ↔  0 dB      10  ↔  20 dB      2   ↔  6 dB      √2   ↔  3 dB
                    0.1 ↔ -20 dB      1/2 ↔ -6 dB      1/√2 ↔ -3 dB
9.4.1 Pure Differentiators and Integrators
Differentiator: G(s) = s, so G(iω) = iω, |G(iω)| = ω, ∠G(iω) = 90°.

Integrator: G(s) = 1/s, so G(iω) = 1/(iω) = -i/ω, |G(iω)| = 1/ω, ∠G(iω) = -90°.

(Bode sketches: the differentiator gain rises at +20 dB/decade through 0 dB at ω = 1 with constant phase +90°; the integrator gain falls at -20 dB/decade through 0 dB at ω = 1 with constant phase -90°.)

Remark. For the differentiator at high frequency, |G(iω)| becomes large. Thus differentiation is
considered a very noisy operation. For the integrator at low frequency, |G(iω)| becomes large. Thus
integration is said to induce low frequency drift. This means a small bias can integrate into a large
error over time.

Remark. For an input r(t) = sin ωt, the differentiator output is y(t) = ω sin(ωt + 90°), a 90° phase
lead. For the same input, the integrator output is y(t) = (1/ω) sin(ωt - 90°), a 90° phase lag.
9.4.2 First-Order Components
First-Order Zero: G(s) = 1 + s/ω_c, so G(iω) = 1 + iω/ω_c.

  ω/ω_c ≪ 1:  |G(iω)| ≈ 1, ∠G(iω) ≈ 0°
  ω/ω_c ≫ 1:  G(iω) ≈ iω/ω_c, so |G(iω)| ≈ 20 log ω - 20 log ω_c (in dB) and ∠G(iω) ≈ 90°
  ω/ω_c = 1:  G(iω) = 1 + i, |G(iω)| = √2 = 3 dB, ∠G(iω) = 45°

First-Order Pole: G(s) = 1/(1 + s/ω_c), so

  G(iω) = 1/(1 + iω/ω_c) = 1/(1 + (ω/ω_c)^2) - i (ω/ω_c)/(1 + (ω/ω_c)^2)

  ω/ω_c ≪ 1:  |G(iω)| ≈ 1, ∠G(iω) ≈ 0°
  ω/ω_c ≫ 1:  G(iω) ≈ 1/(iω/ω_c) = -iω_c/ω, so |G(iω)| ≈ 20 log ω_c - 20 log ω (in dB) and ∠G(iω) ≈ -90°
  ω/ω_c = 1:  |G(iω)| = 1/√2 = -3 dB, ∠G(iω) = -45°

(Bode sketches: the zero's gain breaks upward at ω_c with slope +20 dB/decade and its phase rises at roughly +45°/decade, passing through +45° at ω_c, where the exact gain is 3 dB above the asymptote. The pole's gain breaks downward at ω_c with slope -20 dB/decade and its phase falls at roughly -45°/decade, passing through -45° at ω_c, where the exact gain is 3 dB below the asymptote.)

Remark. ω_c is called the breakpoint or the break frequency.
9.4.3 Second-Order Components
Second-Order Zero: G(s) = s^2/ω_n^2 + 2ζ s/ω_n + 1, so G(iω) = 1 - (ω/ω_n)^2 + 2iζ ω/ω_n.

  ω/ω_n ≪ 1:  |G(iω)| ≈ 1, ∠G(iω) ≈ 0°
  ω/ω_n ≫ 1:  G(iω) ≈ -(ω/ω_n)^2, so |G(iω)| ≈ 40 log ω - 40 log ω_n (in dB) and ∠G(iω) ≈ 180°
  ω/ω_n = 1:  G(iω) = i2ζ, |G(iω)| = 2ζ ↔ 20 log 2ζ, ∠G(iω) = 90°

Second-Order Pole: G(s) = 1/(s^2/ω_n^2 + 2ζ s/ω_n + 1), so G(iω) = 1/[1 - (ω/ω_n)^2 + 2iζ ω/ω_n].

  ω/ω_n ≪ 1:  |G(iω)| ≈ 1, ∠G(iω) ≈ 0°
  ω/ω_n ≫ 1:  G(iω) ≈ -1/(ω/ω_n)^2, so |G(iω)| ≈ 40 log ω_n - 40 log ω (in dB) and ∠G(iω) ≈ -180°
  ω/ω_n = 1:  G(iω) = 1/(i2ζ), |G(iω)| = 1/(2ζ) ↔ -20 log 2ζ, ∠G(iω) = -90°

(Bode sketches: the second-order zero's gain breaks upward at ω_n with slope +40 dB/decade and its phase rises at roughly +90°/decade; the pole's gain breaks downward at ω_n with slope -40 dB/decade and its phase falls at roughly -90°/decade. Near ω_n the exact curves depend strongly on ζ; curves for ζ = 1, ζ < 0.5 and ζ = 0 are shown.)

How big is the second-order gain correction? For the second-order pole at ω = ω_n:

  ζ = 1:    |G(iω_n)| = 1/2 ↔ -6 dB
  ζ = 0.7:  |G(iω_n)| = 1/√2 ↔ -3 dB
  ζ = 0.5:  |G(iω_n)| = 1 ↔ 0 dB
  ζ = 0.05: |G(iω_n)| = 10 ↔ 20 dB
Example. Consider the second-order system with a zero

G(s) = 5(s + 1)/(s(s + 2)) = (5/2)(1 + s)/(s(1 + s/2)),   G(iω) = (5/2)(1 + iω)/(iω(1 + iω/2))

Rewrite G(s) as a product of four first-order systems:

G(iω) = G_1(iω)G_2(iω)G_3(iω)G_4(iω)
|G(iω)| = |G_1||G_2||G_3||G_4|
∠G(iω) = ∠G_1 + ∠G_2 + ∠G_3 + ∠G_4

The gain and phase of each component are illustrated in the following figures.

G_1 = 5/2 ↔ 8 dB  (constant gain, zero phase)
G_2 = 1 + iω  (gain breaks upward at ω = 1 with slope +20 dB/decade; phase rises at +45°/decade)
G_3 = 1/(iω)  (gain falls at -20 dB/decade through 0 dB at ω = 1; phase is -90°)
G_4 = 1/(1 + iω/2)  (gain breaks downward at ω = 2 with slope -20 dB/decade; phase falls at -45°/decade)

Now get the composite system gain and phase by adding each component. Zeros add 20 dB per decade
to the gain and 90° to the phase. Poles add -20 dB per decade to the gain and -90° to the phase.

Figure 9.6: The completed Bode gain and phase plot. (The composite gain falls at -20 dB/decade at low frequency, passing 28 dB at ω = 0.1, flattens at 8 dB between the breakpoints at ω = 1 and ω = 2, then falls at -20 dB/decade again; the phase starts at -90°, rises toward -45° between the breakpoints, and returns toward -90° at high frequency.)
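The straight-line construction above can be checked by evaluating the example transfer function directly on the imaginary axis; the short sketch below (an addition, not from the text) prints the exact gain and phase at a few frequencies for comparison with the asymptotes.

```python
# Minimal sketch: exact gain and phase of G(s) = 5(s + 1)/(s(s + 2)),
# for comparison with the asymptotic Bode construction above.
import numpy as np

for w in (0.1, 1.0, 2.0, 10.0):
    s = 1j * w
    G = 5 * (s + 1) / (s * (s + 2))
    print(f"w = {w:5.2f}  |G| = {20*np.log10(abs(G)):6.1f} dB   "
          f"phase = {np.degrees(np.angle(G)):7.1f} deg")
```

At ω = 0.1 the exact gain is about 28 dB, matching the low-frequency asymptote, and near the breakpoints the exact curve sits a few dB away from the straight-line approximation, as expected.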
9.4.4 Relate Bode Plot to Time and s-Domain System Characteristics
The Bode gain plot provides the system bandwidth and DC gain. The bandwidth gives a direct indication
of the rise-time whereas the DC gain is related to the steady-state tracking error. For type 1 systems, the
DC gain is infinite and the Bode gain plot provides a velocity constant. The velocity constant gives the
steady-state tracking error for a ramp command.

Bandwidth

Definition (Bandwidth). The bandwidth of G(s) is the frequency ω_B where the amplitude |G(iω_B)| = -3 dB.
For a first-order system, the bandwidth is given by the time constant τ. To see this, put |G(iω_B)| =
-3 dB and solve for ω_B:

G(iω) = 1/(iωτ + 1) = (1 - iωτ)/(1 + (ωτ)^2)

|G(iω)| = 1/√(1 + (ωτ)^2) = -3 dB

Then

ω_B = 1/τ

The rise-time is given by the bandwidth as

T_r = 2.2 τ = 2.2 (1/ω_B)
For a second-order system the bandwidth is given by the natural frequency ω_n. Again, put
|G(iω_B)| = -3 dB and solve for ω_B. For simplicity, let ζ = 0.7.

G(iω) = 1/[(iω/ω_n)^2 + 2ζ(iω/ω_n) + 1]

|G(iω_n)| = |1/(i2ζ)| = 1/√2 = -3 dB

so ω_B ≈ ω_n. For a second-order system with ζ = 0.7, the rise-time is

T_r = (π/2 + ζ)/(ω_n √(1 - ζ^2)) ≈ 3.2 (1/ω_B)

With lower damping, the bandwidth is slightly higher and the rise-time slightly faster. Verify this by
sketching the Bode gain plot for a second-order system with ζ = 0.7 and compare with one for ζ = 0.1.
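The bandwidth and rise-time rules of thumb quoted above can be checked numerically; the sketch below (an addition, not from the text) uses scipy for an assumed standard second-order system with ζ = 0.7 and ω_n = 2 rad/sec.

```python
# Minimal sketch: -3 dB bandwidth and step rise-time of an assumed standard
# second-order system G(s) = wn^2/(s^2 + 2 z wn s + wn^2), checking Tr ~ 3.2/wB.
import numpy as np
from scipy import signal

wn, z = 2.0, 0.7
G = signal.TransferFunction([wn**2], [1.0, 2*z*wn, wn**2])

w = np.logspace(-2, 2, 4000)
_, mag_db, _ = signal.bode(G, w)
wB = w[np.argmax(mag_db < -3.0)]          # first frequency where the gain falls below -3 dB

t, y = signal.step(G, N=4000)
Tr = t[np.argmax(y >= 1.0)]               # first time the response reaches its final value

print(f"bandwidth  wB = {wB:.2f} rad/sec,  rise-time Tr = {Tr:.2f} sec")
print(f"Tr * wB = {Tr * wB:.2f}   (rule of thumb above: about 3.2)")
```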
DC Gain
Definition (DC Gain). The DC gain is the frequency response magnitude taken in the limit as the
frequency goes to zero: DC gain ≜ |G(iω)|_{ω=0}.
For a unity feedback system with a step input

e(∞) = lim_{s→0} sE(s) = lim_{s→0} 1/(1 + G(s)) = lim_{ω→0} 1/(1 + G(iω)) = 1/(1 + K)
where K = |G(iω)|_{ω=0} = K_P is the DC gain and is the same as the position constant. The steady-state
error is inversely related to the DC gain. Notice that as the gain K goes to infinity, the steady-state error
e(∞) goes to zero. Thus, to improve the steady-state error, increase the low-frequency gain. This is just
what a lag compensator does.
Example. Consider the lag compensator

K(s) = (s + 1/10)/(s + 1/100) = 10 (1 + s/0.1)/(1 + s/0.01)

Figure 9.7: Bode gain and phase plot for a lag compensator. (Gain and phase versus frequency from 0.001 to 10 rad/sec.)

The Bode plot in Figure 9.7 shows that the gain is large only for low frequencies, frequencies below
0.01, the lag pole location. The gain is 1 at higher frequencies.
Velocity Constant from Frequency Response
For type 1 systems, the DC gain is infinite and the steady-state error for a step input is zero. The
steady-state error due to a ramp input is nonzero and is given by the velocity constant.

K_V = lim_{s→0} sG(s) = lim_{s→0} s K (1 + s/z_1)···(1 + s/z_m) / [s(1 + s/p_1)···(1 + s/p_n)] = K

For small ω

|G(iω)| = |K (1 + iω/z_1)···(1 + iω/z_m) / (iω (1 + iω/p_1)···(1 + iω/p_n))| ≈ |K/(iω)| = K/ω

Now find K from the Bode plot by extending the low frequency approximation to the frequency ω_0 where it crosses 0 dB:

|G(iω_0)| ≈ K/ω_0 = 1 ↔ 0 dB
Then K_V = ω_0.
Example. Let G(s) be in a unity feedback configuration where G(s) = 2/(s(s + 1)). Then for a ramp input

K_V = lim_{s→0} sG(s) = 2,   e(∞) = 1/2

Figure 9.8: Velocity constant from the Bode gain plot. (The low frequency asymptote of the Bode gain plot crosses 0 dB at ω_0 = K_V = 2.)

The same information comes from the Bode gain plot as shown in Figure 9.8.
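The sketch below (an addition, not from the text) reads K_V off the low-frequency gain numerically for the example G(s) = 2/(s(s + 1)) and confirms it against the limit definition.

```python
# Minimal sketch: velocity constant from the low-frequency gain |G(iw)| ~ Kv/w
# for the example G(s) = 2/(s(s + 1)).
import numpy as np

for w in (1e-4, 1e-3, 1e-2):
    G = 2.0 / (1j * w * (1j * w + 1.0))
    print(f"w = {w:.0e}   |G(iw)|*w = {abs(G) * w:.4f}")   # tends to Kv = 2

print("steady-state ramp error 1/Kv =", 1.0 / 2.0)
```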
9.5 Nyquist Plots
The Bode plot relates the frequency response to the time-domain and s-domain characteristics by
connecting the bandwidth to the rise-time and natural frequency and by connecting the DC gain to
the steady-state error and the final value theorem. These relations are expressed on lines one and three
of Table 9.1. Nyquist analysis completes the picture by connecting frequency response to stability. It
also connects the frequency response with overshoot and damping. A Nyquist plot of the frequency
response G(iω) is a plot of Im[G(iω)] vs. Re[G(iω)].
9.5.1 Complex Mappings
Regard G(s) : C → C as a mapping from one complex plane into another, that is, s is complex and
w = G(s) is complex.
Figure 9.9: A map from a closed curve in the complex plane to a closed curve in the complex plane. (A point s = σ + iω on a closed s-plane curve maps to the point w = G(s) on a closed w-plane curve.)
A closed curve Γ in the s-plane will map into another closed curve in the w-plane. Now consider what
happens when Γ encloses zeros or poles. Here are four cases.
1. Let G(s) = 1/(s + 1) and suppose the contour Γ does not encircle the point -1.

Figure 9.10: Contour does not encircle a pole, so the image contour does not encircle the origin.

Let s + 1 = R e^{iθ}. Then w = G(s) = (1/R) e^{-iθ}. As a point s goes around the contour Γ, the change
in the angle θ is zero because Γ does not encircle the point -1. Thus, the corresponding change
in the angle in the w-plane is also zero, which means the image contour does not encircle the origin.
2. Let G(s) = 1/(s + 1) and suppose Γ encircles the point -1.

Figure 9.11: Contour encircles a pole, so the image contour encircles the origin.

The contour encloses a pole, so the mapped contour encircles the origin once counterclockwise
(CCW).
3. Let G(s) = (s - 2)/(s + 1) and let Γ encircle the zero at +2 but not the pole at -1.

Figure 9.12: Contour encircles a zero, so the image contour encircles the origin clockwise.

This time, w = (|s - 2|/|s + 1|) e^{i(θ - φ)}, where θ is the angle of (s - 2) and φ is the angle of (s + 1). The contour encloses the zero but not the pole, so as a point s
goes around Γ, the change in the angle θ is 360° while the change in φ is zero. Thus, a point w
going around the image contour encircles the origin once clockwise (CW).

4. Consider G(s) with several poles and zeros. Draw a contour Γ in the s-plane, swept CW. The net
change in the angle of w = G(s), measured in the clockwise sense, is

360° × (# zeros enclosed - # poles enclosed)
The idea being described is known in complex variable theory as the argument principle, stated here
using the same notation as Nise [5].

Theorem 9.2 (Argument principle). Let Γ be a simple closed contour, described as positive in the
clockwise direction, and let G(s) be a function with no poles or zeros on Γ. Then

N = P - Z     (9.4)

where

Z: number of zeros of G(s) interior to Γ.
P: number of poles of G(s) interior to Γ.
N: number of CCW encirclements of the origin in the G(s)-plane.

Note that N > 0 means CCW and N < 0 means CW encirclements. Again, this is how Nise [5] defines
N. In other references, (9.4) is often written as N = Z - P with the encirclement count N defined as
positive for CW encirclements.
9.5.2 Applications to Stability
Choose a contour Γ that encloses the entire right half of the s-plane, the RHP.

Figure 9.13: The Nyquist contour encircles the entire right half complex plane. (The contour runs up the imaginary axis and closes with a semicircle of radius R = ∞ in the right half plane; its image is a closed curve in the w = G(s) plane.)

Then Γ encloses the RHP zeros and unstable poles of G(s), and N = P - Z is the number of CCW
encirclements of the origin by the image of Γ.
Figure 9.14: Closed-loop feedback configuration. (A gain K and plant G(s) in the forward path with unity feedback; R(s) is the command, e(s) the error, u(s) the control and Y(s) the output.)
Now consider the closed loop configuration of Figure 9.14. Let G_OL and G_CL be the open-loop and
the closed-loop transfer functions:

G_OL(s) = KG(s)

G_CL(s) = KG(s)/(1 + KG(s))
and let

T(s) = 1 + KG(s) = 1 + G_OL(s)

For closed-loop stability, T(s) must have no zeros in the RHP. Now consider T(s) : C → C as a mapping
from the s-plane to the w = T(s)-plane. Given that N = P - Z where

N: number of times T(s) encircles the origin.
P: number of unstable poles of T(s).
Z: number of RHP zeros of T(s).

the following relations between the open and closed-loop transfer functions

1. T(s) = 1 + G_OL(s)
2. G_OL(s) = N(s)/D(s)  ⟹  T(s) = (D(s) + N(s))/D(s)
3. G_CL(s) = G(s)K(s)/T(s)

provide an alternative interpretation of N, P and Z:

N: number of times G_OL(s) = KG(s) encircles -1.
P: number of unstable poles of G_OL(s).
Z: number of unstable poles of G_CL(s).

and a way to find the number of unstable closed-loop poles from the number of unstable open-loop
poles:

[number of unstable poles of G_CL(s)]
= [number of unstable poles of G_OL(s)] - [number of times G_OL(s) encircles -1]
Example. Suppose the open-loop system G_OL(s) = KG(s) is stable. Then P = 0 and Z = -N. If
G_OL(s) encircles -1, then Z ≠ 0 and the closed-loop system is unstable.
Example.

G(s) = K · 1/(s(s + 1)^2)

As K increases, the Nyquist curve approaches -1. When K is such that the curve encircles -1, the closed-loop system
is unstable. The root locus also shows this.
Figure 9.15: The Nyquist plot and root locus both show closed-loop instability at high gain. (The Nyquist plot expands toward -1 as K increases; on the root locus, two branches move toward the imaginary axis.)
On the root locus, as K increases from zero, at some point two loci cross the imaginary axis and the
closed-loop system is unstable. The root locus does not show how big K has to be before this happens.
9.5.3 A Refinement in the Nyquist Path

It is often the case that G_OL(s) = HGK(s) has poles on the imaginary axis, as in the last example
where there was a pole at the origin. The value of G_OL(s) is not defined at a pole, so modify the
s-plane contour to go around poles on the imaginary axis. This is called the Nyquist path and is
illustrated in Figure 9.16.

Figure 9.16: The Nyquist path. (A small semicircular detour takes the contour around the pole at the origin; the rest of the contour is the imaginary axis closed by a semicircle of radius R = ∞ in the RHP.)

9.5.4 Counting Encirclements

For complicated Nyquist plots, counting the number of -1 encirclements may seem confusing. Here is
how to do it.

1. Draw a line from -1 in any direction to ∞.
2. Count the number of crossings: +1 for CCW and -1 for CW.
For example, on the left of Figure 9.17, the -1 point encirclement count is zero. On the right, the
encirclement count is two.

Figure 9.17: Counting encirclements. (Two example w-plane plots; a ray drawn from -1 is crossed a net zero times on the left and a net two times on the right.)
Example. An easy way to get part of the Nyquist plot is from the Bode plots. Consider the following
second-order system.

G(s) = K · 1/(s(s + 1))

From the Bode gain and phase plots build the following table.

  ω    |G|                        ∠G
  0    ∞                          -90°
  ∞    0                          -180°
  1    3 dB below the asymptote   -135°

Use the table to help draw the Nyquist plot as in Figure 9.18.
Figure 9.18: Draw the Nyquist plot using the Bode gain and phase plots. (The Bode gain falls at -20 dB/decade, then -40 dB/decade past the break at ω = 1; the phase falls from -90° toward -180°, passing -135° at ω = 1 and reaching -180° only at ω = ∞. The resulting Nyquist plot never encircles -1, and since P = 0 the closed-loop system is stable for all K. The part of the plot at infinity requires a look at the Nyquist path.)

But what does the Nyquist plot do at ∞? Figure 9.19 shows two possibilities. On the left, the Nyquist plot never
encircles -1 for any K, so the closed-loop system is stable for all K. On the right, the Nyquist plot
encircles -1 for all K, so the closed-loop system is unstable for all K.

Figure 9.19: Two Nyquist plot possibilities.
To see where the plot goes at ∞, look at the Nyquist path near the origin. Let s = ε e^{iθ} where
-π/2 ≤ θ ≤ π/2 and ε → 0. Then, near the origin

G(s) = 1/(s(s + 1)) ≈ 1/(ε e^{iθ}) = (1/ε) e^{-iθ}
Figure 9.20: Detail of the Nyquist path around a pole. (The small detour s = ε e^{iθ} runs from point a below the origin to point b above it.)

Figure 9.21: The completed Nyquist plot. (As the path goes from a to b, the plot sweeps an arc of radius 1/ε from +90° to -90°, closing the curve at infinity without encircling -1.)
So, as the Nyquist path goes from point a to b, the plot goes as shown in Figure 9.21 and the closed-loop
system is stable for all K.
Example. Consider a fourth-order system with no zeros, two poles at the origin and two stable poles.

G(s) = ω_n^2/(s^2(s^2 + 2ζω_n s + ω_n^2))

A quick root locus sketch shows that this system is unstable for any positive closed-loop gain. To show
this result using the Nyquist plot, first let ω_n = 1 and ζ = √2/4 ≈ 0.36. Again, build a table from the
Bode gain and phase plots.
  ω         |G|          ∠G
  0         ∞            -180°
  ∞         0            -360°
  ω_n = 1   √2 (3 dB)    -270°
Now use the table to help draw the Nyquist plot as in Figure 9.22.
Figure 9.22: Use the Bode gain and phase plots to help draw the Nyquist plot. (The Bode gain falls at -40 dB/decade, then -80 dB/decade past ω_n, with a 3 dB correction at ω_n; the phase falls from -180° toward -360°. The Nyquist plot leaves from infinity as ω → 0+ and approaches the origin at -360° as ω → ∞.)
In this example, G(s) has two poles at the origin so the Nyquist path makes a detour. This detour
determines what happens to the Nyquist plot at infinity. See Figure 9.23.
Near the origin

G(s) = 1/(s^2(s^2 + 2ζω_n s + ω_n^2)) ≈ 1/(ε e^{iθ})^2 = (1/ε^2) e^{-i2θ}

So, as the Nyquist path goes from a to b, the plot goes as shown in Figure 9.24.

From the Nyquist plot, there are two CW encirclements of -1 for all K, so N = -2 for all K. Since
there are no unstable open-loop poles, P = 0, and by the Nyquist criterion

N = P - Z

Z = 2. There are two unstable closed-loop poles for all K.
Figure 9.23: Nyquist plot detail near the origin. (The detour s = ε e^{iθ} around the double pole at the origin runs from point a to point b.)

Figure 9.24: The Nyquist plot shows the system is unstable for all positive K. (As the path goes from a to b, the plot sweeps an arc of radius 1/ε^2 through an angle of 2θ = 360°, producing two clockwise encirclements of -1.)
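The Nyquist conclusion for this example can be double-checked by locating the closed-loop poles directly; the sketch below (an addition, not from the text) forms the characteristic polynomial s^2(s^2 + 2ζω_n s + ω_n^2) + Kω_n^2 and counts its right half plane roots for several gains.

```python
# Minimal sketch: closed-loop poles of the fourth-order example for several K.
import numpy as np

wn, z = 1.0, np.sqrt(2.0) / 4.0
for K in (0.1, 1.0, 10.0, 100.0):
    # s^4 + 2 z wn s^3 + wn^2 s^2 + 0 s + K wn^2
    char_poly = [1.0, 2*z*wn, wn**2, 0.0, K * wn**2]
    poles = np.roots(char_poly)
    print(f"K = {K:6.1f}   RHP closed-loop poles: {int(np.sum(poles.real > 0))}")   # always 2
```

A Routh test on the same polynomial reaches the identical conclusion: two right half plane roots for every K > 0.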
9.5.5 Relative Stability: Gain and Phase Margin
The Nyquist plot seems like a lot of work. First find P, the number of unstable poles of G_OL(s), plot a
complicated curve, count N, then get Z from N = P - Z. Maybe it would be easier to check stability
directly by finding the poles of G_CL(s). The value of the Nyquist plot is that it shows how close the
system is to instability. Suppose

G(s) = K · 1/(s(s + 1)^2)

As K increases, the curve approaches -1. When K is such that the curve encircles -1, the closed-loop
system is unstable. Note that the Nyquist plot expands linearly with K.

Figure 9.25: The Nyquist plot provides an indication of relative stability. (The plot expands toward -1 as K increases.)
Definition (Gain margin). The gain margin is the factor by which the gain should be increased for the
closed-loop system to be neutrally stable:

GM = |G(iω_180)|^{-1}

where ω_180 is such that ∠G(iω_180) = -180°. The gain margin is usually expressed in dB.

Definition (Phase margin). The phase margin is the amount by which the phase of G(iω) exceeds -180°
when |G(iω)| = 1:

PM = ∠G(iω_c) + 180°

where ω_c is such that |G(iω_c)| = 1.

Figure 9.26: Gain and phase margin calculation detail. (On the Nyquist plot, 1/GM is the distance from the origin to the crossing of the negative real axis, and PM is the angle between the negative real axis and the crossing of the unit circle.)

Definition (Crossover frequency). The frequency ω_c where |G(iω_c)| = 1 is the crossover frequency.
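The two margin definitions translate directly into a numerical search over frequency; the sketch below (an addition, not from the text) applies them to the earlier example G(s) = 1/(s(s + 1)^2) with K = 1 assumed.

```python
# Minimal sketch: gain and phase margin from the definitions, for G(s) = 1/(s(s+1)^2).
import numpy as np

w = np.logspace(-2, 2, 200000)
G = 1.0 / (1j * w * (1j * w + 1.0)**2)

mag = np.abs(G)
phase = np.degrees(np.unwrap(np.angle(G)))    # falls continuously from -90 toward -270 deg

i180 = np.argmax(phase <= -180.0)             # phase crossover w_180
icross = np.argmax(mag <= 1.0)                # gain crossover w_c

print(f"w180 = {w[i180]:.2f} rad/sec   GM = {-20*np.log10(mag[i180]):.1f} dB")   # ~1 rad/sec, ~6 dB
print(f"wc   = {w[icross]:.2f} rad/sec  PM = {180.0 + phase[icross]:.1f} deg")   # ~0.68 rad/sec, ~21 deg
```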
Application to Design
Assume a stable open-loop system. Then

GM > 0 dB  ⟹  the closed-loop system is stable.
GM > 6 dB is good  ⟹  the gain can double without going unstable.
PM > 30° is good.

For unstable open-loop systems, PM and GM can give confusing results. Sometimes it is better just to
look at the Nyquist, root locus or Bode plot directly.
Gain and Phase Margins From Bode Plots
While the gain and phase margin definitions are motivated from the Nyquist plot, the Bode gain and
phase plots provide the same information as long as the open-loop system is stable. The idea is illustrated
in Figure 9.27.
Figure 9.27: Gain and phase margin from the Bode plots. (On the Bode plots, GM is the distance of the gain below 0 dB at the frequency where the phase is -180°, and PM is the distance of the phase above -180° at the frequency where the gain crosses 0 dB; the same quantities are marked on the Nyquist plot.)
Interpretation
Gain margin and phase margin indicate the degree of uncertainty that can be tolerated in the open-loop
system before the closed-loop system becomes unstable.
9.5.6 Phase Margin Gives Closed-Loop Damping and Overshoot
Phase margin is related to the closed-loop damping ratio and, therefore, the overshoot. The relation
derived below is for second-order systems. Let

G(s) = K/(s(τs + 1)) = (K/τ)/(s(s + 1/τ)) = ω_n^2/(s(s + 2ζω_n))

Then

G_CL(s) = G/(1 + G) = K/(τs^2 + s + K) = (K/τ)/(s^2 + (1/τ)s + K/τ) = ω_n^2/(s^2 + 2ζω_n s + ω_n^2)

and

G(iω) = ω_n^2/(iω(iω + 2ζω_n)) = ω_n^2/(-ω^2 + i2ζω_n ω)

|G(iω)| = ω_n^2/√(ω^4 + 4ζ^2 ω_n^2 ω^2)
Find ω_c, the crossover frequency.

|G(iω_c)| = 1  ⟹  ω_n^4 = ω_c^4 + 4ζ^2 ω_n^2 ω_c^2

ω_c^2 = (1/2)[-4ζ^2 ω_n^2 + √(16ζ^4 ω_n^4 + 4ω_n^4)]

ω_c = ω_n [√(1 + 4ζ^4) - 2ζ^2]^{1/2}

Relate the phase margin and ω_c:

PM = tan^{-1}[Im(G)/Re(G)]|_{ω=ω_c}

But

G(iω) = ω_n^2/(-ω^2 + i2ζω_n ω) = ω_n^2(-ω^2 - i2ζω_n ω)/(ω^4 + 4ζ^2 ω_n^2 ω^2)

which lies in the third quadrant, so its phase is -180° plus the angle above, and

PM = tan^{-1}(2ζω_n ω_c/ω_c^2) = tan^{-1}(2ζω_n/ω_c) = tan^{-1}{2ζ[√(1 + 4ζ^4) - 2ζ^2]^{-1/2}}

Now plot phase margin vs. ζ as in Figure 9.28.
Phase margin is related to both closed-loop relative stability and overshoot:

Small PM  ⟹  small ζ and large overshoot.
Large PM  ⟹  large ζ and small overshoot.

Figure 9.28: Dependence of phase margin on damping ratio. (Phase margin, in degrees, plotted against damping ratio ζ from 0 to 1.0; the curve rises monotonically with ζ.)
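The exact relation PM(ζ) derived above is easy to tabulate; the sketch below (an addition, not from the text) also compares it with the common PM ≈ 100ζ rule of thumb.

```python
# Minimal sketch: second-order phase margin versus damping ratio.
import numpy as np

for z in (0.1, 0.3, 0.5, 0.7, 0.9):
    pm = np.degrees(np.arctan(2*z / np.sqrt(np.sqrt(1 + 4*z**4) - 2*z**2)))
    print(f"zeta = {z:.1f}   PM = {pm:5.1f} deg   (100*zeta = {100*z:4.0f})")
```

For ζ = 0.7 this gives a phase margin of about 65°, and the 100ζ approximation tracks the exact curve reasonably well for ζ below about 0.7.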
9.6 Frequency Domain Design
This section introduces frequency domain design procedures by example. Consider a unity feedback
configuration with

G(s) = 4/(s(s + 2))

Figure 9.29: Unity feedback configuration. (Compensator K(s) in series with G(s); the output Y(s) is fed back and subtracted from the command R(s).)

Find a compensator K(s) that meets the following design specifications.

K_V ≥ 20
PM ≥ 45°
GM ≥ 10 dB
9.6.1 Satisfy the Steady-State Error Requirement
Assume pure gain compensation and find K to satisfy K_V ≥ 20.

K_V = lim_{s→0} sKG(s) = lim_{s→0} sK · 4/(s(s + 2)) = 2K ≥ 20  ⟹  K ≥ 10
9.6.2 Check Other Design Requirements
Using K from above, construct a Bode plot to check the gain and phase margins. The system has a
single integrator so the low frequency approximation is from the velocity constant. The system has two
poles so the high frequency roll-off is -40 dB per decade with a break frequency at 2 rad/sec. The Bode and
Nyquist plot construction is shown in Figure 9.30.

K(s)G(s) = 20/(s(s/2 + 1))

K_V = 20 ↔ 26 dB
Figure 9.30: Construct the Nyquist plot from the Bode gain and phase plots. (The low frequency asymptote set by K_V = 20 passes through 26 dB at ω = 1 rad/sec with slope -20 dB/decade; past the break at 2 rad/sec the gain falls at -40 dB/decade and crosses 0 dB at ω_c = 6.2 rad/sec. The phase falls from -90° toward -180°, giving PM = 18°. The Nyquist plot never encircles -1, but the phase margin can be arbitrarily bad.)

The gain margin looks good because the phase never reaches -180°, so

GM = ∞ > 10 dB
Now check the phase at the crossover frequency ω = ω_c.

KG(iω) = 20/(iω((1/2)iω + 1)) = 20/(-(1/2)ω^2 + iω) = 20(-(1/2)ω^2 - iω)/[((1/2)ω^2)^2 + ω^2]

∠KG(iω)|_{ω=ω_c} = -90° - tan^{-1}((1/2)ω_c^2/ω_c) = -90° - tan^{-1}(6.2/2) = -90° - 72° = -162°

Figure 9.31: Crossover phase calculation detail.

The phase margin is 18°. This is 27° short of the required phase margin.
9.6.3 Design a Lead Compensator
Find a lead compensator that contributes the needed phase lead at the crossover frequency. Consider a lead
compensator of the following form.

K(s) = K (s + z)/(s + αz)

A Bode plot is shown in Figure 9.32. The compensator design is done in two steps. First, show that the
frequency where the desired phase lead is to be applied is given by ω = √α z. Next, solve for the phase span α
as a function of the desired phase lead.

Figure 9.32: Bode plot for a lead compensator. (The gain rises at +20 dB/decade between the zero at z and the pole at αz, from K/α up to K; the phase is a positive bump between z and αz.)
Find the Maximum Phase Lead

First, find the compensator phase as a function of frequency, φ(ω).

φ = ∠K(iω) = ∠[(iω + z)/(iω + αz)] = ∠[(ω^2 + αz^2 + iωz(α - 1))/(ω^2 + α^2 z^2)] = tan^{-1}[ω(α - 1)z/(ω^2 + αz^2)]

Now find the peak phase by maximizing φ with respect to ω. Let

u = ω(α - 1)z/(ω^2 + αz^2)

Then

dφ/dω = (d/dω) tan^{-1} u = [1/(1 + u^2)] du/dω

where

du/dω = [(ω^2 + αz^2)(α - 1)z - (α - 1)zω(2ω)]/(ω^2 + αz^2)^2

To find the maximum, put dφ/dω = 0 because the slope is 0 at a maximum or a minimum. Then

0 = (ω^2 + αz^2)(α - 1)z - (α - 1)zω(2ω)  ⟹  0 = (ω^2 + αz^2) - 2ω^2  ⟹  ω^2 = αz^2

and the maximum φ(ω) occurs at

ω = √α z

Finally, find the maximum phase φ_max:

φ_max = tan^{-1}[ω(α - 1)z/(ω^2 + αz^2)]|_{ω^2=αz^2} = tan^{-1}[(α - 1)√α z^2/(2αz^2)] = tan^{-1}[(α - 1)/(2√α)]
Find the Phase Span as a Function of Maximum Phase Lead

Find α as a function of φ_max. It has already been found that

tan φ_max = sin φ_max / cos φ_max = (α - 1)/(2√α)
Now notice that

(α - 1)^2 + (2√α)^2 = α^2 - 2α + 1 + 4α = (α + 1)^2

So

tan φ_max = (α - 1)/(2√α) = [(α - 1)/(α + 1)] / [2√α/(α + 1)]

implies that

sin φ_max = (α - 1)/(α + 1)

The phase span is

α = (1 + sin φ_max)/(1 - sin φ_max)

Figure 9.33: Phase span calculation detail. (The maximum phase lead φ_max occurs at ω = √α z, midway on the log-frequency axis between the zero at z and the pole at αz; the gain rises by 20 log α between the two break frequencies.)
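The two design relations just derived, α = (1 + sin φ_max)/(1 - sin φ_max) and ω_max = √α z, can be verified numerically; the sketch below (an addition, not from the text) uses the 27° of lead needed here and the zero location z = 3.8 rad/sec computed in the design step that follows.

```python
# Minimal sketch: lead-compensator phase span and the frequency of maximum lead.
import numpy as np

phi_max = np.radians(27.0)
alpha = (1 + np.sin(phi_max)) / (1 - np.sin(phi_max))
print(f"alpha = {alpha:.2f}")                                 # about 2.7

z = 3.8                                                       # zero location from the design below
w = np.logspace(-1, 3, 200000)
lead_phase = np.degrees(np.angle((1j*w + z) / (1j*w + alpha*z)))
k = np.argmax(lead_phase)
print(f"peak lead {lead_phase[k]:.1f} deg at w = {w[k]:.2f} rad/sec "
      f"(sqrt(alpha)*z = {np.sqrt(alpha)*z:.2f})")
```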
9.6.4 Design a Lead Compensator
The system needs 27° of phase lead at the crossover frequency.

φ_max = 27°   and   α = (1 + sin φ_max)/(1 - sin φ_max) = 2.7

The crossover frequency is ω_c = 6.2 so

log z = log ω_c - (1/2) log α  ⟹  z = ω_c/√α = 6.2/√2.7 = 3.8 rad/sec

log p = log ω_c + (1/2) log α  ⟹  p = √α ω_c = √2.7 · 6.2 = 10.2 rad/sec

Finally, set the DC gain of the lead compensator to 1 so that the system static error constants remain
unchanged:

DC gain = lim_{s→0} K_lead (s + z)/(s + αz) = K_lead/α  ⟹  K_lead = α = 2.7
9.6.5 Check The Compensator Design
The lead compensator is

K(s) = K K_lead (s + z)/(s + αz) = 10 · 2.7 · (s + 3.8)/(s + 10.2) = 27(s + 3.8)/(s + 10.2)

and

G(s)K(s) = 27 (s + 3.8)/(s + 10.2) · 4/(s(s + 2)) = 108(s + 3.8)/(s(s + 2)(s + 10.2))

Figure 9.34: Lead compensator design improves the phase margin. (Bode gain and phase and the Nyquist plot for the compensated and uncompensated loops. The lead zero at 3.8 lifts the gain and phase near crossover; the crossover frequency moves out from 6.2 rad/sec and the compensated phase margin is PM = 38°.)

Note that the lead compensator has shifted the crossover frequency, so the actual phase margin is less
than the desired value:

PM < 18° + 27° = 45°

Increase φ_max by a few degrees, 5° is about right, and try again.
References
[1] C. T. Chen. Linear System Theory and Design. Holt, Rinehart and Winston, Inc., New York, 1984.
[2] Ruel V. Churchill, James W. Brown, and Roger F. Verhey. Complex Variables and Applications.
McGraw-Hill, New York, 1976.
[3] Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini. Feedback Control of Dynamic
Systems. Addison-Wesley, 3rd edition, 1994.
[4] Denny K. Miu. Mechatronics: Electromechanics and Contromechanics. Springer-Verlag, 1993.
[5] Norman S. Nise. Control Systems Engineering. John Wiley and Sons, 3rd edition, 1995.
Index

additivity, 6, 18
analytic continuation, 12
angle criterion, 109, 110, 112
argument principle, 168
bandwidth, 163
block diagram, 3, 63–66, 145
  cascade, 63
  feedback, 65
  parallel, 64
  reduction, 66, 97
Bode plot, 157–166
  differentiator, 159
  first-order system, 160
  integrator, 159
  second-order system, 161
break frequency, 160, 180
breakpoint, 160
characteristic equation, 23, 70
command reversal, 60
compensation, 129
  lag, 140–142, 148, 149, 165
  lead, 133–139, 141, 182
  lead-lag, 142
control, 1
convolution integral, 17, 19
crossover frequency, 177, 182
damped frequency, 41
damping, 166
damping ratio, 41
DC gain, 38, 42, 163, 164
disturbance, 2, 94
Evans form, 109–110, 116, 126
feedback, 4
Fourier transform, 153, 156
frequency response, 153, 155
  design, 180
  gain, 154
  phase, 154
frequency sweep, 153
functions
  Dirac delta, 12, 14
  impulse, 12
  pulse, 13
  step, 12
gain margin, 177, 181
  from Bode plot, 177
  interpretation, 178
homework, 21
homogeneity, 6, 18
Hurwitz determinants, 69, 71, 86, 89
Hurwitz test, 71
integral control, 84, 87–89, 148
integrating factor, 19
Laplace transform, 11
  and convolution, 20
  common functions, 12, 31
  definition, 11
  differentiation, 15
  final value theorem, 25, 166
  initial value theorem, 25
  integration, 16
  inverse, 26
  properties, 11
Liénard-Chipart criterion, 72, 108
linear, 6
magnitude criterion, 112
natural frequency, 42, 164
natural response, 67
Nichols plot, 157
Nyquist path, 171
Nyquist plot, 157, 166–179
  and stability, 169–171
open-loop gain, 38, 42
overshoot, 53, 166
  maximum, 49, 51
partial fraction expansion, 26–31, 154
peak-time, 49, 50
phase margin, 177, 182
  and damping, 178
  from Bode plot, 177
  interpretation, 178
phase span, 182, 183
PID control, 142
plant, 2
pole, 23
proportional control, 79, 103, 104, 108, 110, 114, 129, 146
proportional-derivative control, 105, 110, 129–133, 141
  noise amplification, 133
proportional-integral control, 139, 142
rise-time, 46, 48, 52, 163, 164
robustness, 2, 79
root locus, 99–128
  arrival angle, 117, 122
  asymptote, 116, 118–122, 126
  breakaway point, 117, 125, 127
  centroid, 116, 119, 121, 122, 126
  departure angle, 117, 122–125, 127
  design, 129–151
  sketching procedure, 116
Routh-Hurwitz test, 69
settling-time, 46, 49, 50
sifting property, 15
stability, 2, 37, 68
  asymptotic internal, 67
  relative, 176
static error constant, 91, 93
steady-state error, 82, 92, 163, 165
  with command shaping, 84
  with high gain, 83, 85
  with integral control, 84, 88
  with ramp input, 87, 88
steady-state response, 37
superposition, 17, 18
system, 1
  causal, 18
  characteristics, 38–44
  closed-loop, 2, 4
  dynamic, 2, 6
  higher-order, 55–61
  linear, 2, 6, 11, 18
  open-loop, 2, 3
  overdamped, 40, 43
  relaxed, 18
  time-invariant, 2, 6, 11, 18
  type, 91, 93
  underdamped, 41, 43, 52
time constant, 38, 40, 46, 164
tracking error, 92, 163
transfer function, 22, 156
  improper, 23
  proper, 23
  strictly proper, 23
transient response, 37
trial and error, 141
zero, 23
  effect on transient response, 56–61
  minimum phase, 59
  non-minimum phase, 59
  pole cancellation, 58