
ISBN: 0-8247-0661-7

This book is printed on acid-free paper.

Headquarters
Marcel Dekker, Inc.
270 Madison Avenue, New York, NY 10016
tel: 212-696-9000; fax: 212-685-4540

Eastern Hemisphere Distribution


Marcel Dekker AG
Hutgasse 4, Postfach 812, CH-4001 Basel, Switzerland
tel: 41-61-261-8482; fax: 41-61-261-8896

World Wide Web


http://www.dekker.com

The publisher offers discounts on this book when ordered in bulk quantities. For more
information, write to Special Sales/Professional Marketing at the headquarters address
above.

Copyright © 2002 by Marcel Dekker, Inc. All Rights Reserved.

Neither this book nor any part may be reproduced or transmitted in any form or by any
means, electronic or mechanical, including photocopying, microfilming, and recording, or
by any information storage and retrieval system, without permission in writing from the
publisher.

Current printing (last digit):


10 9 8 7 6 5 4 3 2 1

PRINTED IN THE UNITED STATES OF AMERICA


Preface

This book introduces the theory, design, and implementation of control systems. Its
primary goal is to teach students and engineers how to effectively design and
implement control systems for a variety of dynamic systems.
The book is geared mainly toward senior engineering students who have had
courses in differential equations and dynamic systems modeling and an introduction
to matrix methods. The ideal background would also include some programming
courses, experience with complex variables and frequency domain techniques, and
knowledge of circuits. For students new to or out of practice with these concepts and
techniques, sufficient introductory material is presented to facilitate an
understanding of how they work. In addition, because of the book's thorough treatment
of the practical aspects of programming and implementing various controllers, many
engineers (from various backgrounds) should find it a valuable resource for designing
and implementing control systems while in the workplace.
The material herein has been chosen for, developed for, and tested on senior-level
undergraduate students, graduate students, and practicing engineers attending
continuing-education seminars. Since many engineers take only one or, at most, two
courses in control systems as undergraduate students, an effort has been made to
summarize the field's most important ideas, skills, tools, and methods, many of
which could be covered in one semester or two quarters. Accordingly, chapters on
digital controllers are included and can be used in undergraduate courses, providing
students with the skills to effectively design and interact with microprocessor-based
systems. These systems see widespread use in industry, and related skills are becoming
increasingly important for all engineers. Students who hope to pursue graduate
studies should find that the text provides sufficient background theory to allow an
easy transition into graduate-level courses.
Throughout the text, an effort is made to use various computational tools and
explain how they relate to the design of dynamic control systems. Many of the
problems presented are designed around the various computer methods required
to solve them. For example, various numerical integration algorithms are presented
in the discussion of first-order state equations. For many problems, Matlab* is used
to show a comparative computer-based solution, often after the solution has been
developed manually. Appendix C provides a listing of common Matlab commands.

*Matlab is a registered trademark of The MathWorks, Inc.

CHAPTER SUMMARIES
Each chapter begins with a list of its major concepts and goals. This list should prove
useful in highlighting key material for both the student and the instructor. An
introduction briefly describes the topics, giving the reader a big-picture view of
the chapter. After these concepts are elaborated in the body of the chapter, a
problem section provides an opportunity for reinforcing or testing important concepts.
Chapter 1 is an introduction to automatic control systems. The historical highlights
of the field are summarized, mainly through a look at the pioneers who have
had the greatest impact on control system theory. The beginning of the chapter also
details the advancement of modern control theories (which resulted mainly from the
development of feasible microprocessors) and leads directly into the next two
sections, which compare analog with digital and classical with modern systems. Chapter
1 concludes with an examination of various applications of control systems; these
have been chosen to show the diverse products that use controllers and the various
controllers used to make a design become a reality.
Chapter 2 summarizes modeling techniques, relating common techniques to the
representations used in designing control systems. The approach used here is
different from that of most textbooks, in that the elements common to all dynamic
systems are emphasized throughout the chapter. The general progression of this
summary is from differential equations to block diagrams to state-space equations,
which prefigures the order used in the discussion of techniques and tools for
designing control systems. Also included here is a subsection illustrating the
limitations and proper use of linearization. Newtonian, energy, and power flow modeling
methods are then summarized as means for obtaining the dynamic models of various
systems. Inductive, capacitive, and resistive elements are used for all the systems,
with emphasis placed on overcoming the commonly held idea that modeling an electrical
circuit is vastly different from modeling a mechanical system. Finally, the chapter
presents bond graphs as an alternative method offering many advantages for modeling
large systems that encompass several smaller systems. Aside from its conceptual
importance here, a bond graph is also an excellent tool for understanding how many
higher-level computer modeling programs are developed and thus for avoiding
fundamental modeling mistakes when using these programs.
Chapter 3 develops some of the concepts of Chapter 2 by presenting the techniques
and tools required to analyze the resulting models. These tools include
differential equations in the time domain; step responses for first- and second-order
systems; Laplace transforms, which are used to enter the s-domain and construct
block diagrams; and the basic block diagram blocks, which are used for developing
Bode plots. The chapter concludes by comparing state-space methods with the
analysis methods used earlier in the text.
Chapter 4 introduces the reader to closed-loop feedback control systems and
develops the common criteria by which they can be evaluated. Topics include open-
loop vs. closed-loop characteristics, effects of disturbance inputs, steady-state errors,
transient response characteristics, and stability analysis techniques. The goal of the
chapter is to introduce the tools and terms commonly used when designing and
evaluating a controller's performance.
Chapter 5 examines the common methods used to design analog control systems.
In each section, root locus and frequency domain techniques are used to design
the controllers being studied. Basic controller types, such as proportional-integral-
derivative (PID), phase-lag, and phase-lead, are described in terms of characteristics,
guidelines, and applications. Included is a description of on-site tuning methods
for PID controllers. Pole placement techniques (including gain matrices) are then
introduced as an approach to designing state-space controllers.
Chapter 6 completes the development of the analog control section by describing
common components and how they are used in constructing real control systems.
Basic op-amp circuits, transducers, actuators, and amplifiers are described, with
examples (including linear and rotary types) for each category. The focus here is
not only on solving text problems but also on ensuring that the controller in
question can be successfully implemented.
Chapter 7 brings the reader into the domain of digital control systems. A
cataloging of the various examples of digital controllers serves to demonstrate
their prevalence and the growing importance of digital control theory. Common
configurations and components of the controllers noted in these examples are then
summarized. Next, the common design methods for analog and digital controllers
are compared. If a student with a background in analog controls begins the text here,
it should help to bridge the gap between the two types of controllers. The chapter
concludes by examining the effects of sampling and introducing the z-transform as a
tool for designing digital control systems.
Chapter 8 is similar to Chapter 4, but it applies performance characteristics to
digital control systems. Open- and closed-loop characteristics, disturbance effects,
steady-state errors, and stability are again examined, but this time taking into
account sample time and discrete signal effects.
Chapter 9, like Chapter 5, focuses on PID, phase-lag, and phase-lead controllers;
in addition, it presents direct design methods applicable to digital controllers.
Controller design methods include developing the appropriate difference equations
needed to enter into the implementation stage. Also included is a discussion of the
effects of sample time on system stability.
Chapter 10 concludes the digital section by presenting the common components
used in implementing digital controllers. Computers, microcontrollers, and
programmable logic controllers (PLCs) are presented as alternatives. Methods for
programming each type are also discussed, and a connection is drawn between the
algorithms developed in the previous chapter and various hardware and software
packages. Digital transducers, actuators, and amplifiers are examined relative to
their role in implementing the controllers designed in the previous chapter. The
chapter concludes with a discussion of pulse-width modulation, its advantages and
disadvantages, and its common applications.
Chapter 11 is an introduction to advanced control strategies. It includes a short
section illustrating the main characteristics and uses of various controllers, including
feedforward, multivariable, adaptive, and nonlinear types. For each controller,
sufficient description is provided to convey the basic concepts and motivate further
study; some are described in greater detail, enabling the reader to implement them
as advanced controllers.

Chapter 12 is most applicable for students or practicing engineers interested in
fluid power and electrohydraulics. It applies many of the general techniques
developed in the book (modeling, simulation, controller design, etc.) to fluid power
systems. Several case studies illustrate the variety of applications that use
electrohydraulics.

ACKNOWLEDGMENTS
It is perilous to begin listing the individuals and organizations that have influenced
this work, since I will undoubtedly overlook many valuable contributors. Several,
however, cannot go without mention. From the start, my parents taught me the
values of God, family, friendship, honesty, and hard work, which have remained
with me to this day. I am indebted to my mother, especially for her unending
devotion to family, and to my father, for living out the old adage "An honest
day's work for an honest day's pay" while demonstrating an uncanny ability to
keep machines running well past their useful life.
Numerous teachers, from elementary to graduate levels, have had a part in instilling
in me the joy of teaching and helping others (if only I had recognized it at the
time!). Coaches Brian Diemer and Al Hoekstra taught me the values of setting goals,
hard work, friendship, and teamwork. Professors Beachley, Fronczak, and Lorenz,
along with my fellow grad students at the University of Wisconsin–Madison, were
instrumental in a variety of ways, both academic and personal. The faculty and staff
at the Milwaukee School of Engineering have been a joy to work with and have spent
many hours helping and encouraging me. In particular, Professors Brauer, Ficken,
Labus, and Tran, and the staff at the Fluid Power Institute have been important to
me on a personal level. The staff at Marcel Dekker, Inc., has been very helpful in
leading me through the authorial process for the first time.
Finally, saving the most deserving until the end, I would like to express my
gratitude to my wife, Kim, and my children, Rebekah, Matthew, and Rachel. Being
married to an engineer (especially one writing a book) is not the easiest task, and
Kim provides the balance, perspective, and strength necessary to make our house a
home. I am thankful for the pure joy I feel when I open the door after a long
day and the children yell out, "Daddy, you're home!" Thank you, Lord.

John H. Lumkes, Jr.


Contents

Preface iii

1. Introduction 1
2. Modeling Dynamic Systems 17
3. Analysis Methods for Dynamic Systems 75
4. Analog Control System Performance 141
5. Analog Control System Design 199
6. Analog Control System Components 279
7. Digital Control Systems 311
8. Digital Control System Performance 343
9. Digital Control System Design 365
10. Digital Control System Components 399
11. Advanced Design Techniques and Controllers 433
12. Applied Control Methods for Fluid Power Systems 495

Appendix A: Useful Mathematical Formulas 567


Appendix B: Laplace Transform Table 569
Appendix C: General Matlab Commands 571
Bibliography 575
Answers to Selected Problems 579
Index 585

1
Introduction

1.1 OBJECTIVES
• Provide motivation for developing skills as a controls engineer.
• Develop an appreciation of the previous work and history of automatic controls.
• Introduce terminology associated with the design of control systems.
• Introduce common controller configurations and components.
• Present several examples of controllers available for common applications.

1.2 INTRODUCTION
Automatic control systems are implemented virtually everywhere, from work to
play, from homes to vehicles, from serious applications to frivolous applications.
Engineers having the necessary skills to design and implement automatic controllers
will create new and enhanced products, changing the way people live. Controllers are
finding their way into every aspect of our lives. From toasting our bread and driving
to work to riding the train and traveling to the moon, control theory has been
applied in an effort to improve the quality of life. Control engineers may properly
be termed system engineers, since it is a system that must be controlled. These
systems may be a hard disk read head on your computer, your CD player's laser
position, your vehicle (many systems), a factory production process, inventory
control, or even the economy. Good engineers, therefore, must understand the modeling
of systems. Modeling might include aeronautical, chemical, mechanical,
environmental, civil, electrical, business, societal, biological, and political systems, or
possibly a combination of these. It is an exciting field filled with many opportunities. For
maximum effectiveness, control engineers should understand the similarities (laws of
physics, etc.) inherent in all physical systems. This text seeks to provide a cohesive
approach to modeling many different dynamic systems.
Almost all control systems share a common configuration of basic components.
A closed-loop single input–single output (SISO) system, as shown in Figure
1, is an example of the basic components commonly required when designing control
systems. This may be modified to include items like disturbance inputs, external
inputs (i.e., wind, load, supply pressure), and intermediate physical system variables.

Figure 1 Basic control system layout.

The concept of a control system is quite simple: to make the output of the system
equal to the input (command) to the system. In many products we find servo- as a
prefix describing a particular system (servomechanism, servomotor, servovalve, etc.).
The prefix servo- is derived from the Latin word servus, meaning slave or servant. The
output in this case is a slave and follows the input.
The command may be electrical or mechanical. For electrical signals, op-amps
or microprocessors are commonly used to determine the error (or perform as the
summing junction in terms of the block diagram). In a mechanical system, a lever
might be used to determine the error input to the controller. As we will see, the
controller itself may take many forms. Although electronics are becoming the
primary components, physical components can also be used to develop proportional-
integral-derivative (PID) controllers. An example is sometimes seen in pneumatic
systems, where bellows can be used for proportional and integral actions and flow
valves for derivative actions.
The advantages of electronics, however, are numerous. Electronic controllers
are cheaper, more flexible, and capable of discontinuous and adaptive algorithms.
Today's microprocessors are capable of running multiple controllers and have
algorithms that can be updated simply by reprogramming the chip. The amplifier and
actuator are critical components to select properly; they are prone to saturation and
failure if not sized correctly. The physical system may be a mathematical model
during the design phase, but ultimately the actuator must be capable of producing
some input into the physical device that has a direct effect on the desired output.
Other than for simple devices with many inherent assumptions, the physical system can
seldom be represented by one simple block or equation. Finally, a sensor must be
available that is capable of measuring the desired output. It is difficult to control a
variable that cannot be measured, and indirect control through observers adds
complexity and limits performance. Sensor development is in many ways the primary
technology enabling advanced controllers. Additionally, the sensor must be capable of
enduring the environment in which it is placed.
To translate this into something we are all familiar with, let's modify the
general block diagram to represent a cruise control system found on almost all
automobiles. This is shown in Figure 2. When you decide to activate your cruise
control, you accelerate the vehicle to its desired operating speed and press the set
switch, which in turn signals to the controller that the current voltage is the level at
which you wish to operate. The controller begins determining the error by comparing
the set-point voltage with the feedback voltage from the speed transducer.

Figure 2 Automobile cruise control system example.

The speed transducer might consist of a magnetic pickup on a transmission gear,
whose signal is then conditioned to a voltage proportional to its frequency.
Assuming for now a simple proportional controller, the error, in volts, is multiplied
by the controller gain, resulting in a new voltage level. This signal is then amplified
to the point where it is capable of moving the throttle position, usually with the help
of the engine manifold vacuum. The engine throttle is then opened or closed, depending
on whether the error is positive or negative, which changes the torque output from the
engine. The change in torque results in an acceleration or deceleration of the vehicle,
hopefully to the desired speed. As the vehicle speed approaches the desired speed, the
error decreases, which decreases the actuator signal, and the car gradually
approaches the set point. As we will see, there is a lot more to this simplified
explanation, but the basics should provide the proper perspective until a fuller
explanation is reached.
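The proportional action just described can be sketched in a few lines of code. The sketch below uses Python rather than the text's Matlab, and the first-order vehicle model with linear drag and every numerical value in it are illustrative assumptions, not data from the text; it also previews the steady-state error of proportional-only control taken up in Chapter 4:

```python
# Proportional-only cruise control sketch (all values illustrative).
# Vehicle model: m*dv/dt = F - b*v  (mass with linear drag)
m, b = 1200.0, 50.0      # mass [kg], drag coefficient [N*s/m] -- assumed
Kp = 600.0               # proportional gain [N per m/s of error] -- assumed
v_set, v = 30.0, 25.0    # set-point and initial speed [m/s]
dt = 0.05                # Euler integration step [s]

for _ in range(12000):               # simulate 600 s of driving
    error = v_set - v                # summing junction
    F = Kp * error                   # proportional controller + actuator force
    v += dt * (F - b * v) / m        # vehicle dynamics (Euler step)

# With P-only control the speed settles below the set point:
# the steady state satisfies Kp*(v_set - v) = b*v, so v = v_set*Kp/(Kp + b).
steady = v_set * Kp / (Kp + b)
print(round(v, 3), round(steady, 3))   # -> 27.692 27.692
```

Note that the speed settles at the fraction Kp/(Kp + b) of the set point rather than reaching it exactly; raising the gain shrinks this error, and adding integral action (the I in PID) removes it.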
One more addition is noted here: the disturbance input. As we all
know, when the vehicle encounters an incline, more throttle input, and hence
more torque, is required to maintain the current vehicle speed. In the case of cruise
control systems, disturbance torque on the engine is commonly seen from hills and
wind gusts. By including this in our model, we can design a control system that is
capable of handling these inputs.
This serves as the backdrop for the remaining sections. The goal is to examine
each significant block presented above, beginning with models for each block,
followed by physical components representing each block, and concluding with a
summary of how to combine them to design, simulate, and build a functional
controller. Some examples presented are based on the field of electrohydraulics, an
area still lagging in image but whose capabilities are finally being fully realized
through the application of modern modeling, simulation, and control theory
specifically designed for fluid power applications.

1.3 BRIEF HISTORY OF AUTOMATIC CONTROLS
The history of automatic controls is rich, long, and diverse, as illustrated in works by
Mayr (1970) and Fuller (1976). Early work centered on developing intuitive solutions
to problems encountered at that time. Beginning with the Greeks, float regulators
were used as early as 300 B.C. to track time. The float regulators allowed
accurate timekeeping by maintaining a constant liquid level in a water tank, thus
providing a constant flow through an outlet (fixed orifice). This constant flow was
accumulated in a second tank as a measure of time. These water clocks were used
until mechanical clocks arrived in the fourteenth century. Although commonly
classified as control systems, designs during this period were intuitively based, and
mathematical/analytical techniques had yet to be applied to solving more complex
problems.
Two things happened late in the eighteenth century that would turn out to be
of critical significance when combined in the next century. First, in 1788 James Watt
(1736–1819) designed the centrifugal fly ball governor for the speed control of a
steam engine. The relatively simple but very effective device used centrifugal forces
to move rotating masses outward, thereby causing the steam valve to close, resulting
in a constant engine speed. Although earlier speed and pressure regulators were
developed [windmills in Britain, 1745; flow of grain in mills, sixteenth century;
temperature control of furnaces by Cornelis J. Drebbel (1572–1634) of Holland,
seventeenth century; and pressure regulators for steam engines, 1707], Watt's
governor was externally visible, and it became well known throughout Europe,
especially in the engineering discipline. Earlier steam engines were regulated by hand and
were difficult to use in the developing industries, and the start of the Industrial
Revolution is commonly attributed to Watt's fly ball governor. Second, in and near the
eighteenth century, the mathematical tools required for analyzing control systems were
developed. Building on the earlier development of differential equations by Isaac
Newton (1642–1727) and Gottfried Leibniz (1646–1716) in the late seventeenth and
early eighteenth centuries, Joseph Lagrange (1736–1813) began to use differential
equations to model and analyze dynamic systems during the time that Watt
developed his fly ball governor. Lagrange's work was further developed by Sir William
Hamilton (1805–1865) in the nineteenth century.
The significant combination of these two events came in the nineteenth century,
when George Airy (1801–1892), professor at Cambridge and Royal Astronomer at
Greenwich Observatory, built a speed control unit for a telescope to compensate for
the rotation of the earth. Airy documented the possibility of unstable motion when
using feedback in his paper "On the Regulator of the Clock-work for Effecting Uniform
Movement of Equatorials" (1840). After Airy, James Maxwell (1831–1879)
systematically analyzed the stability of a governor resembling Watt's governor. He
published a mathematical treatment, "On Governors," in the Proceedings of the Royal
Society (1868), in which he linearized the differential equations of motion, found the
characteristic equation, and demonstrated that the system is stable if the roots of the
characteristic equation have a negative real component (see Sec. 3.4.3.1). This is
commonly regarded as the founding work in the field of control theory.
From here the mathematical theory of feedback was developed by names still
associated with the field today. Once Maxwell described the characteristic equation,
Edward Routh (1831–1907) developed a numerical technique for determining system
stability using the characteristic equation. Interestingly, Routh and Maxwell
overlapped at Cambridge, both beginning at Peterhouse; shortly after Routh's
arrival, Maxwell was advised to transfer to Trinity because Routh was his
equal in mathematics. Routh was Senior Wrangler (highest academic marks),
whereas Maxwell was Second Wrangler (second highest academic marks). At
approximately the same time in Germany, and unaware of Routh's work, Adolf
Hurwitz (1859–1919), upon a request from Aurel Stodola (1859–1952), also solved
and published the method by which system stability could be determined without
solving the differential equations. Today this method is commonly called the Routh–
Hurwitz stability criterion (see Sec. 4.4.1). Finally, Aleksandr Lyapunov (1857–1918)
presented Lyapunov's methods in 1899 as a means for determining the stability of
ordinary differential equations. Relative to the control of dynamic systems, nonlinear
systems in particular, the importance of his work on differential equations, potential
theory, stability of systems, and probability theory is only now being realized.
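For low-order characteristic equations, the Routh–Hurwitz conditions reduce to simple tests on the coefficients. The sketch below (in Python, an illustration not drawn from the text) hard-codes the closed-form conditions for second- and third-order polynomials rather than building the full Routh array:

```python
def routh_stable(coeffs):
    """Stability test for a polynomial of degree 2 or 3 with real coefficients.

    Returns True exactly when every root of the characteristic equation has a
    negative real part, using the closed-form Routh-Hurwitz conditions.
    """
    c = [x / coeffs[0] for x in coeffs]        # normalize the leading coefficient
    if len(c) == 3:                            # s^2 + a1*s + a0
        _, a1, a0 = c
        return a1 > 0 and a0 > 0
    if len(c) == 4:                            # s^3 + a2*s^2 + a1*s + a0
        _, a2, a1, a0 = c
        return a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a0
    raise ValueError("sketch handles degrees 2 and 3 only")

print(routh_stable([1, 3, 2]))     # (s+1)(s+2): roots -1, -2      -> True
print(routh_stable([1, 1, 2, 8]))  # a2*a1 = 2 < a0 = 8, unstable  -> False
```

The second example factors as (s+2)(s^2 - s + 4), whose complex pair has a positive real part, so the coefficient test correctly reports instability without ever solving for the roots.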
With the foundation formed, the twentieth century has seen the most explosive
growth in the application and further development of feedback control systems.
Three factors helped to fuel this growth: the development of the telephone, World
War II, and microprocessors. Around 1922, the Russian Nicholas Minorsky
(1885–1970) analyzed and developed three-mode controllers for automatic ship steering
systems. From his work the foundation for the common PID controller was laid.
Near the same time, and driven largely by the development of the telephone, Harold
Black (1898–1983) invented the electronic feedback amplifier and demonstrated the
usefulness of negative feedback in amplifying the voice signal as required for traveling
the long distances over wire. Along with Harold Hazen's (1901–1980) paper on
the theory of servomechanisms, this period marked a major increase in the interest
and study of automatic control theory. Black's work was further built on by two
pioneers of the field, Hendrik Bode (1905–1982) and Harry Nyquist (1889–1976). In
1932, working at Bell Laboratories, Nyquist developed his stability criterion based
on the polar plot of a complex function. Shortly thereafter, in 1938, Bode used
magnitude and phase frequency response plots and introduced the idea of gain
and phase stability margins (see Sec. 4.4.3). The impact of their work is evident in
the commonplace use of Nyquist and Bode plots when designing and analyzing
automatic control systems in the frequency domain.
The first large-scale application of control theory was during World War II, in
which feedback amplifier theory and PID control actions were combined to deal with
the new complexity of aircraft and radar systems. Although much of the work did
not surface until after the war, great advances were made in the control of industrial
processes and complex machines (airplanes, radar systems, artillery guidance). Soon
after the war, W. Evans (1920–1999) published his paper "Graphical Analysis of
Control Systems" (1948), which presented the techniques and rules for graphically
tracing the migrations of the roots of the characteristic equation. The root locus
method remains an important tool in control system design (see Sec. 4.4.2). At this
point in history, the root locus and frequency response techniques were incorporated
in general engineering curricula, textbooks were written, and the general class of
techniques came to be known as classical control theory.
While classical control theory was maturing, work accomplished in the late
1800s (time domain differential equation techniques) was being revisited with the
coming of the computer age. Lyapunov's work, combined with the capabilities of the
computer, led to his contribution becoming more fully realized. The incentive arose from
the need to effectively control nonlinear multiple input–multiple output (MIMO)
systems. While classical techniques are very effective for linear time-invariant
(LTI) SISO systems, the complexity increases rapidly when the attempt is made to
apply these techniques to nonlinear, time-variant, and/or MIMO systems. The
computationally intensive but simple-to-program steps used in the time domain are
well adapted to these complex systems when coupled with microprocessors.
Work on using digital computers as automatic controllers began in the 1950s,
when the aerospace company TRW developed a MIMO digital control system.
Although the cost of the computer at that time was still prohibitive, many companies
and research organizations realized the future potential and followed the work
closely. Whereas an analog system's costs continued to increase as controller size
increased, a digital computer could handle multiple arrangements of inputs and
outputs, and for large systems the initial cost could be justified. By the early 1960s,
multiple digital controllers were operating in a variety of applications and industries.
The 1960s also saw the introduction of many new theories, collectively referred to as
modern control theory. In the span of several years, Rudolf Kalman, along with
colleagues, published several papers detailing the application of Lyapunov's work
to the control of nonlinear systems, optimal control, and optimal filtering (Kalman
discrete filter and Kalman continuous filter). Classical techniques were also revisited,
and extensions were developed to allow digital controller design.
This new field has seen explosive growth since the 1960s and the era of solid-
state devices. The 1970s saw microcomputers come of age, along with the
microprocessor, and in 1983 the PC, or personal computer, was introduced. It is safe to say
that things have not been the same since. Although these devices have existed for just
a relatively short time, we now take for granted powerful computers, analog-to-digital
and digital-to-analog converters, programmable logic controllers (PLCs), and
microcontrollers. Today's applications are remarkably diverse for such a short history.
Process control, aircraft systems, space flight, automobiles, off-road equipment, home
utensils, portable devices, and so on will never be viewed in the same way since the
microprocessor and digital control theory were introduced. In spite of the recent
advances, it is safe to say that control theory can still be considered, in many respects,
to be in its infancy.
Hopefully we have gained some appreciation of the history of the development
of control theory. This book presents both classical and modern theories and
attempts to develop and teach them in a way that does justice to those who have so
ambitiously laid the foundation.

1.4 ANALOG VERSUS DIGITAL CONTROL SYSTEMS


Although many controllers today are implemented in the digital domain due to the
advent of low cost microprocessors and computers, understanding the basic theory
in the analog domain is required to understand the concepts presented when exam-
ining digital controllers. In addition, the world we live and operate in is one of
analog processes and systems. A hydraulic cylinder does not operate at 10 discrete
pressures but may pass through an innite resolution of pressures (i.e., continuous)
during operation. Interfacing a computer with a continuous system then involves
several problems since the computer is a digital device. That is, the computer is
limited to a nite number of positions, or discrete values, with which to represent
the real physical system. Additional problems arise from the fact that computers are
limited by how often they can measure the variable in question.
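The finite-resolution limitation can be sketched in a few lines of code; the 8-bit converter and 0-10 V range below are illustrative assumptions, not tied to any particular hardware:

```python
# Sketch of ideal ADC quantization: an n-bit converter can only report
# one of 2**n discrete levels, so some information is always lost.
def quantize(v, v_min=0.0, v_max=10.0, bits=8):
    """Return the discrete level an ideal ADC would report for voltage v."""
    levels = 2 ** bits
    step = (v_max - v_min) / (levels - 1)   # volts per count
    count = round((v - v_min) / step)       # nearest integer code
    return v_min + count * step             # reconstructed (quantized) voltage

reading = quantize(3.14159)
error = abs(reading - 3.14159)   # bounded by half of one step
```

With 8 bits the step is about 39 mV, so the reported value can differ from the true signal by up to roughly 20 mV; raising the resolution to 16 bits shrinks the step to about 0.15 mV.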
Virtually all processes/systems we desire to control are continuous systems. As
such, they naturally lend themselves to analog controllers. Vehicle speed, tempera-
ture, pressure, engine speed, angles, altitude, and height level are all continuous
signals. When we think of controlling the temperature in our house, for example,
Introduction 7

we talk about setting the thermostat (controller) at a specific temperature with the
idea that the temperature inside the house will match the thermostat setting, even
though this is not the case since our furnace is either on or off. Since temperature is a
continuous (analog) signal, this is the intuitive approach. Even with digital control-
lers (or in the case of older thermostats a nonlinear electromechanical device) we
generally discuss the performance in terms of continuous signals and/or measure-
ments. True analog controllers have a continuous signal input and continuous signal
output. Any desired output level, at least in theory, is achievable. Many mechanical
devices (Watt's steam engine governor, for example) and electrical devices (operational
amplifier feedback devices) fall into this category. Analog controllers are found in
many applications, and for LTI and SISO systems they have many advantages. They
are simple reliable devices and, in the case of purely mechanical feedback systems, do
not require additional support components (i.e., regulated voltage supply, etc.). The
common PID controller is easily constructed using analog devices (operational
amplifiers), and many control problems are satisfactorily solved using these control-
lers. The majority of controllers in use today are of the PID type (both analog and
digital). Perhaps as important to us, if we desire to pursue a career in control
systems, is the fact that a solid grounding in analog control theory lets someone
intuitively grasp many of the advanced nonlinear and/or digital control schemes.
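To make the PID idea mentioned above concrete, here is a minimal discrete-time sketch of the control law u = Kp·e + Ki·∫e dt + Kd·de/dt, closed around a simple first-order plant; the gains, time step, and plant are illustrative choices, not a recommended design:

```python
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    """One PID update; `state` carries the integral and previous error."""
    integral, prev_error = state
    integral += error * dt                    # accumulate the I term
    derivative = (error - prev_error) / dt    # finite-difference D term
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Drive a first-order plant x' = -x + u toward the setpoint 1.0
setpoint, x, state = 1.0, 0.0, (0.0, 0.0)
for _ in range(5000):                 # 50 s of simulated time
    u, state = pid_step(setpoint - x, state)
    x += (-x + u) * 0.01              # Euler integration of the plant
```

After the loop, x has settled close to the setpoint; the integral term is what removes the steady-state error a proportional-only controller would leave behind.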
A digital controller, on the other hand, generally involves processing an analog
signal to enable the computer to effectively use it (digitization), and when the com-
puter is finished, it must in many cases convert the signal from its digital form back
to its native analog form. Each time this conversion takes place, another source of
error, another delay, and a loss of information occurs. As the speed of processors
and the resolution of converters increase, these issues are minimized. Some tech-
niques lend themselves quite readily to digital controllers as one or more of their
signals are digital from the beginning. For example, any device or actuator capable
of only on or o operation can simply use a single output port whose state is either
high or low (although amplifiers, protection devices, etc. are usually required). As
computers continue to get more powerful while prices still decline, digital controllers
will continue to make major inroads into all areas of life.
Digital controllers, while having some inherent disadvantages as mentioned
above, have many advantages. It is easy to perform complex nonlinear control
algorithms, a digital signal does not drift, advanced control techniques (fuzzy
logic, neural nets) can be implemented, economical systems are capable of many
inputs and outputs, friendly user interfaces are easily implemented, they have data
logging and remote troubleshooting capabilities, and since the program can be
changed, it is possible to update or enhance a controller without making any physical
adjustments. As this book endeavors to teach, building a good foundation of math-
ematical modeling and intuitive classical tools will enable control system designers to
move confidently to later sections and apply themselves to digital and advanced
control system design.

1.5 MODERN VERSUS CLASSICAL CONTROL THEORY


The first portion of this book primarily discusses classical control theory applied to
LTI SISO systems. Classical controls are generally discussed using Laplace operators
in the complex frequency domain, and root locus and frequency domain plots are

used in analyzing different control strategies. System representations like state space
are presented alongside transfer functions in this text to develop the skills leading
to modern control theory techniques. State space, while useful for LTI SISO systems,
lends itself more readily to topics included in modern control theory. Modern con-
trol theory is a time-based approach more applicable to linear and nonlinear,
MIMO, time-invariant, or time-varying systems.
When we look at classical control theory and recognize its roots in feedback
amplier design for telephones, it comes as no surprise to nd the method based in
the frequency domain using complex variables. The names of those playing a pivotal
role have remained, and terms like Bode plots, Nichols plots, Nyquist plots, and
Laplace transforms are common. Classical control techniques have several advan-
tages. They are much more intuitive to understand and even allow for many of the
important calculations to be done graphically by hand. Once the basic terminology
and concepts are mastered, the jump to designing effective, robust, and achievable
designs is quite easy. Dealing primarily with transfer functions as the method to
describe physical system behavior, both open-loop and closed-loop systems are easily
analyzed. Systems are easily connected using block diagrams, and only the input/
output relationships of each system are important. It is also relatively easy to take
experimental data and accurately model the data using a transfer function. Once
transfer functions are developed, all of the tools like frequency plots and root locus
plots are straightforward and intuitive. The price at which this occurs is reected in
the accompanying limitations. With some exceptions, classical techniques are suited
best for LTI SISO systems. It rapidly becomes more of a trial-and-error process and
less intuitive when nonlinear, time varying, or MIMO systems are considered. Even
so, techniques have been developed that allow these systems to be analyzed. Based
on its strengths and weaknesses, it remains an effective, and quite common, means of
introducing and developing the concept of automatic controller design.
Modern control theory has developed quickly with the advent of the micro-
processor. Whereas classical techniques can graphically be done by hand, modern
techniques require the processing capabilities of a computer for optimal results. As
systems become more complex, the advantages of modern control theory become
more evident. Being based in the time domain and, when linearized, in matrix form,
implementing modern control theories is as easy for MIMO as it is for SISO
systems. In terms of matrix algebra the operations are the same. As is true in other
matrix operations, programming effort remains almost the same even as system size
increases. The opposite effect is evident when doing matrix operations by hand.
Additional benets are the adaptability to nonlinear systems using Lyapunov
theories and in determining the optimal control of the system.
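The point about programming effort can be illustrated with a short sketch: one generic routine integrates the state equations x' = Ax + Bu for a state vector of any size, so a two-state plant and a much larger system run through identical code (the plant below, x'' + 2x' + 5x = u, and all numeric values are illustrative):

```python
def simulate(A, B, u, x, dt, steps):
    """Euler-integrate the state equations x' = A x + B u.
    The code is identical for any state dimension n = len(x)."""
    n = len(x)
    for _ in range(steps):
        dx = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * u
              for i in range(n)]
        x = [x[i] + dx[i] * dt for i in range(n)]
    return x

# The plant x'' + 2x' + 5x = u written with states x1 = x, x2 = x'
A = [[0.0, 1.0],
     [-5.0, -2.0]]
B = [0.0, 1.0]
x = simulate(A, B, u=1.0, x=[0.0, 0.0], dt=0.001, steps=20000)
```

For a unit step input the position state settles near the static gain 1/5 = 0.2; a larger model would reuse the same routine with nothing changed but the size of the matrices.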
Why not start then with modern control theory? First, the intuitive feel evident
in classical techniques is diminished in modern techniques. Instead of input/output
relationships, sets of matrices or first-order differential equations are used to describe
the system. Although the skills can be taught, the understanding and ramifications of
different designs are less evident. It becomes more math and less design. Also,
although it is simple in theory to extend it to larger systems, in actual systems,
complete with modeling errors, noise, and disturbances, the performance may be
much less than expected. Classical techniques are inherently more robust. In using
matrices (preferred for computer programming) the system must generally be line-
arized and we end up back with the same problem inherent in classical techniques.

Finally, the simplest designs often require the knowledge of each state (parameters
describing current system behavior, i.e., position and velocity of a mass), which for
larger systems is quite often not feasible, either because the necessary sensors do not
exist or the cost is prohibitive. In this case, observers are often required, the complex-
ity increases, and the modeling accuracy once again is very important.
The approach in this book, and most others, is to develop the classical tech-
niques and then move into modern techniques and digital controls. Modern control
theories and digital computers are a natural match, each requiring the other for
maximum eectiveness. To maintain a broad base of skills, both classical and mod-
ern control theories are extended into the digital controller realm: classical tech-
niques because that is where many of the current digital controllers have migrated
from and modern techniques because that is where many are beginning to come
from.

1.6 COMMON CONTROL SYSTEM APPLICATIONS


Regardless of the scope with which we define a controller, it is safe to say that their
use is pervasive and the growth forecast would include the term exponential in
the description. This section serves to list some of the common and not so common
controller applications while admitting it is not even remotely a comprehensive list.
By the end, it should be clear what the outlook and demand are for controllers and
engineers who understand and can apply the theory. One startling example can be
illustrated by examining the automobile. In 1968 the first microprocessor was used
on a car. What is interesting is that it was not a Ferrari, Porsche, or the like; it was
the Volkswagen 1600. This microprocessor regulated the air/fuel mixture in an
electronic fuel injection system, boosting performance and fuel efficiency. Where
have we gone since then? The 1998 Lincoln Continental has 40 microprocessors,
allowing it to process more than 40 million instructions per second. Current vehicles
offer stability control systems that cut the power or apply brakes to correct situations
where the driver begins to lose control. Global positioning satellites are already
interfaced with vehicles for directions and as emergency locators. The movement
evident in automotive engineering is also rapidly affecting virtually all other areas of
life. Even in off-road construction equipment the same movement is evident. In an
industry where the image is far from one of clean, precise equipment, current machines
may have over 15 million lines of code controlling their systems. Many have implemented
dual processors to handle the processing.
In Table 1, the idea presented is that control systems affect almost every aspect
of our daily life. As we will see, common components are evident in every example
listed. While the modeling skills tend to be specific to a certain application or tech-
nology, understanding the basic controller theory is universal to all.

1.6.1 Brief Introduction to Purchasing Control Systems


The options when purchasing control systems are abundant. Different applications
will likely require dierent solutions. For common applications there are probably


Phillips W. On the road to efficient transportation, engineers are in the driver's seat. ASME News,
December 1998.

Table 1 Common Controller Applications

Common large-scale applications


Motion control: hydraulic, pneumatic, electrical
Vehicle systems: engine systems, cruise control, ABS, climate control, etc.
Electronic amps: telephones, stereos, cell phones, audio and RF, etc.
Robotics: welding, assembly, dangerous tasks, machining, painting, etc.
Military: flight systems, target acquisition, transmissions, antennas, radar
Aerospace: control surfaces, ILS, autopilot, cabin pressurization
Computers: disk drives, printers, CD drives, scanners, etc.
Agriculture: GPS interface for planting, watering, etc., tractors, combines, etc.
Industrial processes: manufacturing, production, repair, etc.
Common home applications
Refrigerator, stereo, washing machine, clothes dryer, bread machines, furnace and air-
conditioning thermostat, oven, and water heater
Common human applications
Driving your car, filling a bucket with water, welding, etc.

several off-the-shelf controllers available. Off-the-shelf controllers usually contain


the required hardware and software for the application, as opposed to the designer
choosing each component separately. It is possible that many design requirements
may be met by choosing an appropriate off-the-shelf controller. These controllers
may be specific to one application or general to many. This section discusses some of
the primary advantages and disadvantages when using these controllers. When
volumes increase, if the controller is embedded in an application, or when unique
systems are being developed, it becomes more likely that a specific design is required
and the designer must choose each component individually. The choice may be
driven by cost or performance. Of course, when actually choosing a controller, we
find the whole spectrum represented and must decide where along the scale is best.
Regardless of the controller used, the basic theory presented in this book will help
users understand, tune, and troubleshoot controllers. Basic architectures you might
expect to find include PLCs, microprocessor-based controllers, and op-amp comparators.
General and specic controllers, using various architectures, are abundant and vir-
tually any combination is possible. What follows here is a brief sampling of what you
might find while searching for a controller. It is not intended to promote or highlight
one specific product but rather represents a range of the types available.
1.6.1.1 General Microprocessor-Based Controllers
General controllers are applicable to many systems (Figure 3). They offer similar
flexibility to using a computer or microprocessor and in general act as programmable
microprocessors with flexible programming support, multiple analog and digital
input/output (I/O) configurations, onboard memory, and serial ports. Micro-
processor-based controllers packaged with the supporting hardware and software
are often called microcontrollers. With the advancements in miniaturization and
surface mount technologies, the term is descriptive of modern controllers.
A typical (micro)controller may have several types of analog and digital input
channels, may be able to supply up to 5 A of current through relays (on/off control

Figure 3 General programmable controller (Courtesy of Sylva Control Systems Inc.).

only, no proportionality), and provide voltage (0-5 Vdc) or current (4-20 mA)
commands to any compatible actuator. As such, when designing a control system
using a system like this, you must still provide the amplifiers and actuators (i.e.,
electronic valve driver card if an electrohydraulic valve is used). Advantages are as
follows: dedicated computer and data acquisition system not required, faster and
more consistent speeds due to a dedicated microprocessor, optically isolated inputs
for protection, supplied programming interfaces, and flexibility. Many controllers
are packaged with software and cables, allowing easy programming. Disadvantages
are cost, discrete signals that require more knowledge when troubleshooting, and
complexity when compared to other systems. With the progress being made with
microprocessors and supporting electronic components, this type of controller will
continue to expand its range.
For embedded high-volume controllers, it becomes likely that similar systems
(microprocessor, signal conditioning, interface devices, and memory) will be
designed into the main system board and integrated with other functions. In this
case, more extensive engineering is required to properly design, build, and use such
controllers. An example of such applications is the automobile, where vehicle elec-
tronic control modules are designed to perform many functions specic to one (or
several) vehicle production line. The volume is large enough and the application
specific enough to justify the extra development cost. It is still common for these
applications to be developed by third-party providers (original equipment manufac-
turers).
1.6.1.2 Application-Specific Controller Examples
For common applications it may be possible to have a choice of several different off-
the-shelf controllers. These may require as little as installation and tuning before
being operational. For small numbers, proof-of-concept testing, or when time is

short, application-specific controllers have several advantages. Primarily, the devel-
opment work for the application is already done (by others). Actuators, signal con-
ditioners, fault indicators, safety devices, and packaging concerns have already been
addressed. Perhaps the largest disadvantage, besides cost, is the loss of flexibility.
Unless our application closely mirrors the intended operation, and sometimes even
then, it may become more trouble to adapt and/or modify the controller to get
satisfactory performance. In other words, each application, even within the same
type (i.e., engine speed control), is a little different, and it is impossible for the
original designer to know all the applications in advance.
For examples of this controller type, let us look at two internal combustion
(IC) engine speed controllers. First, to illustrate how specific a controller might be,
let us examine the automatic engine controller shown in Figure 4. It does not include
the actual actuators and consists only of the electronics.
Looking at its abilities will illustrate how specific it is to one task: controlling
engines. This controller is capable of starting an engine and monitoring (and
shutting down if required) oil pressure, temperature, and engine speed. It also
can be used to preheat with glow plugs (diesels), close an air gate during overspeed
conditions, and provide choke during cold starts. Remember that a sensor is
required for an action on any variable to occur. The module may accept engine
speed signals based on a magnetic pickup or an alternator output. The functions
included with the controller are common ones and save the time required to
develop similar ones. The packaging is such that it can be mounted in a variety
of places. The main disadvantages are limited flexibility (if additional functions or
packaging options are required), cost, and the requirement that all sensors must be
compatible with the unit.
Second, let's consider an example of a specific off-the-shelf controller that
includes an actuator(s). The control system shown in Figure 5 includes the speed
pickups (magnetic pickup), electronic controller, and actuator (proportional sole-
noid) as required for a typical installation.
Using this type of controller is as simple as providing a compatible voltage
signal proportional to the engine speed, connecting the linear proportional solenoid

Figure 4 Specific application controller example (Courtesy of DynaGen Technologies Inc.).



Figure 5 Specific application controller example (Courtesy of Synchro-Start Products Inc.).

to the throttle plate linkage, and tuning the controller. The controller may be pur-
chased as a PI or a PID. Which one is desired? As we will soon see, depending on
conditions, either one might be appropriate. Understanding the design and opera-
tion of control systems is important even in choosing and installing black box
systems. The advantages and disadvantages are essentially the same as the rst
example in this section.
1.6.1.3 Programmable Logic Controllers (Modern)
PLCs may take many forms and no longer represent only a sequential controller
programmed using ladder logic. Modules may be purchased which might include
advanced microprocessor-based controllers such as adaptive algorithms, standard
PID controllers, or simple relays. Historically, PLCs handled large numbers of
digital I/Os to control a variety of processes, largely through relays, leading to
using the word logic in their description. Today, PLCs are usually distinguished
from other controllers through the following characteristics: rugged construction
to withstand vibrations and extreme temperatures, inclusion of most interfacing
components, and an easy programming method. Being modular, a PLC might
include several digital I/Os driving relays along with modules with an embedded
microcontroller, complete with analog I/O, current drivers, and pulse width mod-
ulation (PWM) or stepper motor drivers. An example micro-PLC, capable of more
than just digital I/O, is shown in Figure 6.
This type of PLC can be used in a variety of ways. It comes complete with
stepper motor drivers, PWM outputs, counter inputs, analog outputs, analog inputs,
256 internal relays, a serial port for programming, and the capability of accepting both
ladder logic and BASIC programs. Modern PLCs may be configured for large
numbers of inputs, outputs, signal conditioners, programming languages, and
speeds. They remain very common throughout a wide variety of applications.
1.6.1.4 Specific Product Example (Electrohydraulics)
Even more specific than a particular application are controllers designed for a
specific product. A common example of this is in the field of electrohydraulics.
Most electrically driven valves have separate options about the type of controllers
available. When the valve is purchased a choice must be made about the type of
controller, if any, desired. While it is not difficult to design and build an after-

Figure 6 Microprogrammable logic controller (Courtesy of Triangle Research International Pte. Ltd.).

market controller, it is fairly difficult to get the performance and features com-
monly found on controllers designed for particular products. The disadvantages
are quite obvious in that the manufacturer determines all the types of mounting
styles, packaging styles, features, and speeds. In general, however, these disad-
vantages are minimized since each industry attempts to follow standards for
mounts, fittings, and power supply voltages, etc., and there is a good chance
that the support components are readily available. A distinct advantage is the
integration and range of features commonly found in these controllers. The elec-
trohydraulic example below illustrates this more fully. In addition, the manufac-
turer generally has an advantage in that the internal product specications are
fully known.
The example valve driver/controller shown in Figure 7 is designed to interface
with several electrohydraulic valves. As is common in controllers designed for spe-
cific products, the interfacing hardware, signal conditioners, amplifiers, and control-
ler algorithms are all integrally mounted on a single board. In addition, many

Figure 7 Electrohydraulic valve driver/controller.



features are added which are specific only to the product it was designed for. In the
example shown, deadband compensation, ramp functions, linear variable displace-
ment transducer [LVDT] valve position feedback, dual solenoid PWM drivers, and
testing functions are all included on the single board. The driver card also includes
external and internal command signal interfaces, gain potentiometers for the on-
board PID controller, troubleshooting indicators (e.g., light emitting diodes [LEDs]),
and access to many extra internal signals.
Depending on the valve chosen, a specific card with the proper signals must be
chosen. The system is very specific to one class of valves, as illustrated by the
dual solenoid drivers and LVDT signal conditioning for determining valve spool
position.

PROBLEMS
1.1 Label and describe the blocks and lines for a general controller block diagram
model, as given in Figure 8.
1.2 Describe the importance of sensors relative to the process of designing a con-
trol system.
1.3 Describe a common problem that may occur with amplifiers and actuators
when improperly selected. For the problem described, list a possible cause followed
by a possible solution.
1.4 For an automobile cruise control system, list the possible disturbances that the
control system may encounter while driving.
1.5 Choose one prominent person who played an important role in the history of
automatic controls. Find two additional sources and write a brief paragraph describ-
ing the most interesting results of your research.
1.6 Finish the phrases using either analog or digital.
a. Most physical signals are _____________ .
b. Earliest computers were _____________ .
c. For rejecting electrical noise, the preferred signal type is _____________ .
d. Signals exhibiting finite intermediate values are _____________ .
1.7 List two advantages and two disadvantages of classical control design
techniques.
1.8 List two advantages and two disadvantages of modern control design
techniques.
1.9 In several sentences, describe the significance of the microprocessor relative to
control systems presently in use.

Figure 8 Problem: general controller block diagram



1.10 Briefly describe several differences between microprocessors and microcontrollers.
1.11 What is a disadvantage of choosing an off-the-shelf controller?
1.12 Modern PLCs, while similar to microcontrollers, have additional characteris-
tics. List some of the common distinctions.
1.13 Describe several advantages commonly associated with using controllers
designed for a specific product.
2
Modeling Dynamic Systems

2.1 OBJECTIVES
 Present the common mathematical methods of representing physical
systems.
 Develop the skills to use Newtonian physics to model common physical
systems.
 Understand the use of energy concepts in developing physical system
models.
 Introduce bond graphs as a capable tool for modeling complex dynamic
systems.

2.2 INTRODUCTION
Although some advanced controller methods attempt to overcome limited models,
there is no question that a good model is extremely beneficial when designing control
systems. The following methods are presented as different, but related, methods for
developing plant/component/system models. Accurate models are beneficial in simu-
lating and designing control systems, analyzing the effects of disturbances and para-
meter changes, and incorporating algorithms such as feed-forward control loops.
Adaptive controllers can be much more effective when the important model para-
meters are known.
As a minimum, the following sections should illustrate the commonality
between various engineering systems. Although units and constants may vary, elec-
trical, mechanical, thermal, liquid, hydraulic, and pneumatic systems all require the
same approach with respect to modeling. Certain components may be nonlinear in
one system and linear in another, but the equation formulation is identical. As a
result of this phenomenon, control system theory is very useful and capable of
controlling a wide variety of physical systems.
Most systems may be modeled by applying the following laws:
 Conservation of mass;
 Conservation of energy;
 Conservation of charge;
 Newtons laws of motion.
17

Additional laws describing specic characteristics of some components may be


necessary but usually may be explained by one of the above laws. Of particular
interest to controls engineers is modeling a system composed of several domains,
since many controllers must be designed to control such combinations.
Although each topic presented could (and maybe should) constitute a complete
college course, an attempt is made to present the basics of modeling and analysis of
dynamic systems relative to control system design. Many of the tasks discussed can
now easily be solved using standard desktop/laptop computers. The goal of this
chapter is to present both the basic theory along with appropriate computer solution
methods. One without the other severely limits the effectiveness of the control engi-
neer.

2.3 AN INTRODUCTION TO MODEL REPRESENTATION


2.3.1 Differential Equations
Differential equations describe the dynamic performance of physical systems. Three
common and equivalent notations are given below. They are commonly inter-
changed, depending on the preference and software being used.

x′ = ẋ = dx/dt   and   x″ = ẍ = d²x/dt²
When a prime to the right of or a dot over a variable is given, time is assumed to
be the independent variable. The number of primes or dots represents the order of
the derivative. Differential equations are generally obtained from physical laws describ-
ing the system process and may be classified according to several categories, as
illustrated in Table 1.
Another consideration depends on the number of unknowns that are involved.
If only a single function is to be found, then one equation is sucient. If there are

Table 1 Classifications of Differential Equations

Order The highest derivative that appears in the equation


Ordinary (ODE) The function depends only on one independent variable (common
independent variable in physical systems is time)
Partial Contains differentials with respect to two or more variables
(common in electromagnetic and heat conduction systems)
Linear Constant coefficients and no derivatives raised to higher powers
Nonlinear Functions as coefcients or derivatives raised to higher powers
Homogeneous No forcing function (sum of derivatives equals zero)
Nonhomogeneous Differential equation with a nonzero forcing function
Complementary The homogeneous portion of a nonhomogeneous differential equation
Auxiliary equation The polynomial formed by replacing all derivatives with variables
raised to the power of their respective derivatives
Complementary Solution to the complementary equation
solution
Particular solution Solution to the nonhomogeneous differential equation
Steady-state value Determined by setting all derivatives in equation to zero

two or more unknown functions, then a system of equations is required. As will be


seen later, a single mass-spring-damper system results in a second-order ordinary
differential equation. Although most real systems are nonlinear, the equations are
often linearized. This greatly simplifies the controller design process. If the controller
remains around its targeted operating point, the linear models do quite well.
An example of a second-order, nonhomogeneous, linear, ordinary differential
equation is as follows:
mẍ + bẋ + kx = F   or   mx″ + bx′ + kx = F
An example of a second-order, homogeneous, nonlinear, ordinary differential
equation is as follows:
ÿ + (g/l) sin y = 0
The first equation is a common mass-spring-damper system and will be devel-
oped in a later example; the second equation is for a common pendulum system. It is
still ordinary, although nonlinear, since time is still the only independent variable.
Both examples are functions of only one variable (unknown) and thus are described
by a single equation.
Finally, determining the steady-state output value for time-based differential
equations is accomplished by setting all the derivatives in the equation to zero. Since
the derivative, by definition for time-based equations, is the rate of change of the
dependent variable with respect to time, the steady-state value occurs when they are
all equal to zero. Thus, for the mass-spring-damper system shown above, setting x″
and x′ to zero results in a steady-state displacement of x = F/k, as expected.

2.3.2 Block Diagrams


Block diagrams have become the standard representation for control systems. While
block diagrams are excellent for understanding signal flow in control systems, they
are lacking when it comes to representing physical systems. Most block diagram
models are developed using the methods in this chapter (obtaining differential equa-
tions) and then converting to the Laplace domain for use as a transfer function in a
block diagram. The problems are that block diagrams become unwieldy for moder-
ately sized systems, focus more on the computational structure and not the physical
structure of a system, and only relate one physical variable per connection. In most
physical systems there is a cause-and-effect relationship between variables
sharing the same physical space, for example, voltage and current in an electrical
conductor and pressure and flow in a hydraulic conductor. Many programs today,
however, allow block diagrams to represent the controller and different, higher level,
modeling techniques to be used for the physical system. For example, several bond
graph simulation programs and many commercial systems simulation programs
allow combination models. This section briey describes block diagrams and com-
mon properties and tools useful while using them. The analysis section will further
explain how the blocks represent physical systems.
2.3.2.1 Basic Block and Summing Junction Operations
In block diagrams, signals representing system variables flow along lines connecting
blocks, which perform operations on the signals. Therefore, each block is simply a
20 Chapter 2

ratio of the output to the input, or what is called a transfer function. Each line
representing a signal is unidirectional and designated by an arrow. Since each line
represents a variable in the system, usually a physical variable with associated units,
the blocks must contain the appropriate units relative to the input and output. For
example, a pressure input to a block representing an area upon which the pressure
acts requires the value of the block representing that area to be expressed in units
where the block output is expressed as a force (force = pressure × area). This
relationship where the block is the ratio of the output variable to the input variable
is shown in Figure 1.
Also shown in Figure 1 is a basic summing junction, or comparator. A sum-
ming junction either adds or subtracts the input variables to determine the value of
the output variable. As is true in basic addition and subtraction operations, the units
of the variable must be the same for all inputs and output of a summing junction. In
other words, it does not make sense to add voltages and currents or pressures and
forces. Any number of inputs may be used to determine the single output. Each input
should also be designated as an addition or subtraction using + or − symbols
near each line or inside the summing junction itself.
These two items allow us to construct and analyze almost every control system
that we are likely to encounter. The operations illustrated in the remaining sections
use the block and summing representations to graphically manipulate algebraic
equations. Each section will give the corresponding algebraic equations, although
in practice this is seldom done once we become familiar with the graphical operations.
2.3.2.2 Blocks in Series
A common simplification in block diagrams is combining blocks in series and repre-
senting them as a single block, as shown in Figure 2. Any number of blocks in series
may be combined as long as no branches are contained between any of the pairs of
blocks. A later section illustrates how to move branches and allow for the blocks to
be combined. Remembering that each individual block must correctly relate the units
of the input variable to the output variable, the new block formed by multiplying the
individual blocks will then relate the initial input to the final output. The simplified
block's units are obtained by multiplying the units from each individual block.
2.3.2.3 Blocks in Parallel
It is also common to find block diagrams where several blocks are constructed in
parallel. Many controllers are first formed in this way to illustrate the effect of each

Figure 1 Transfer function and summing junction operations.


Modeling Dynamic Systems 21

Figure 2 Blocks in series.

part of the controller. For example, the control actions for a proportional, integral,
derivative (PID) controller can be shown using three parallel blocks where the paths
represent the proportional, integral, and derivative control effects. A simple system
using two blocks is shown in Figure 3. When combining two or more blocks in parallel,
the signs associated with each input variable must be accounted for. It is possible to
have several blocks subtracting and several blocks adding when forming the new
simplified block.
2.3.2.4 Feedback Loops in Block Diagrams
Perhaps the most common block diagram operation when analyzing closed-loop
control systems is the reduction of blocks in loops. Whereas the previous steps
resulted in new blocks formed by addition, subtraction, multiplication, and division,
when feedback loops are closed the structure of the system is changed. Later on
we will see how the denominator changes when the loop is closed, allowing us to
modify the dynamics of the system. The basic steps used to simplify loops in
block diagrams are illustrated in Figure 4. Thus, we see that a new denominator is
formed that is a function of both the forward (left-to-right signals) path blocks and
the feedback (right-to-left signals) path blocks. When we design a controller we insert
a block that we can change, or tune, and thereby modify the behavior of our
physical system.
Even with multiple blocks in the forward and feedback loops, we can use the
general loop-closing rule to simplify the block diagram. The exception to the rule is

Figure 3 Blocks in parallel.



Figure 4 Loop operations in block diagrams.

when the loop contains branches and/or summing junctions within the loop itself. The
goal is then to first rearrange the blocks into equivalent forms that enable the loop
to be closed. Several helpful operations are given in the remaining sections.
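These reduction rules can be sketched in Python by representing each transfer function as a (numerator, denominator) pair of polynomial coefficient lists, highest power of s first. The helper names below are illustrative, not from the text:

```python
# Transfer functions as (num, den) coefficient lists, highest power first.

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    """Add two polynomials, padding the shorter with leading zeros."""
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x + y for x, y in zip(a, b)]

def series(g1, g2):
    # Blocks in series: G1*G2
    return poly_mul(g1[0], g2[0]), poly_mul(g1[1], g2[1])

def parallel(g1, g2):
    # Blocks in parallel: G1 + G2 over a common denominator
    num = poly_add(poly_mul(g1[0], g2[1]), poly_mul(g2[0], g1[1]))
    return num, poly_mul(g1[1], g2[1])

def feedback(g, h):
    # Closing a negative feedback loop: G / (1 + G*H)
    num = poly_mul(g[0], h[1])
    den = poly_add(poly_mul(g[1], h[1]), poly_mul(g[0], h[0]))
    return num, den

# G(s) = 1/(s+1) in a unity-feedback loop: the denominator changes
g = ([1.0], [1.0, 1.0])
h = ([1.0], [1.0])
print(feedback(g, h))   # -> ([1.0], [1.0, 2.0]), i.e., 1/(s+2)
```

Note how closing the loop moved the pole from s = −1 to s = −2, the "new denominator" effect described above.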
2.3.2.5 Moving a Summing Junction in a Block Diagram
When loops contain summing junctions that prevent the loop from being simplified,
it is often possible to move the summing junction outside of the loop, as shown in
Figure 5. In general, summing junctions can be moved either forward or backward in
the block diagram, depending on the desired result. By writing the algebraic equations
for the block diagrams shown in Figure 5, we can verify that the two block diagrams
are indeed equivalent. In fact, whenever in doubt, it is helpful to write the equations
as a means of checking the final result.
2.3.2.6 Moving a Pickoff Point in a Block Diagram
Finally, it may be useful to move pickoff points (branch points) to different locations
when attempting to simplify various block diagrams. This operation is shown in
Figure 6. Once again the included algebraic equations confirm that the block dia-
grams are equivalent, and in fact the algebraic operation and graphical operation are
mirror images of each other.
Using the operations described above and realizing that the block diagram is a
picture of an equation will allow us to reduce any block diagram with the basic

Figure 5 Moving summing junctions in block diagrams.



Figure 6 Moving pickoff points in block diagrams.

steps. By now the block diagram introduced at the beginning of the chapter should be
clearer. In Section 2.6 we progress to developing the actual contents of each block,
which will allow us to design and simulate control systems.

2.3.3 State Space Equations


State space equations are closely related to differential equations, and it is generally
easy to switch between the two notations. In addition to the inputs and outputs, we
now also define states. The states are the minimum set of system variables required to
define the system at any time t ≥ t0, where each state is known at t0 and all inputs
are known for t ≥ t0. From a mathematical standpoint any minimum set of state
variables meeting these requirements will suffice. They need not be physically mea-
surable or present. From a practical point of view, it is beneficial in terms of imple-
mentation, design, and comprehension to choose states representing physical
variables or a combination thereof. Perhaps the simplest way to illustrate this is
through an example. Let us examine the motion of a vehicle body modeled as a
two-dimensional beam connected via springs and dampers to the ground, as shown
in Figure 7. There are many possible state variable combinations available for the
vehicle suspension.

a. xr(t) and xf(t) along with their first derivatives (velocities);
b. y(t) and θ(t) along with their first derivatives;
c. xr(t) and θ(t) along with their first derivatives . . . and so forth.

Figure 7 State variable choices for two-dimensional vehicle suspension.



In all these cases (and in those not listed), once two variables (along with their
first derivatives) are chosen, the remaining positions, velocities, and accelerations can
be defined relative to the chosen states. The examples listed above all include mea-
surable variables, although this is not a requirement, as stated above. There may be
advantages to choosing certain sets of states, as it is often possible to decouple inputs
and outputs, thus making control system design easier. This will be discussed more in
later sections. Since more states are generally available (i.e., meeting the requirements
described above) than are needed, a state space system of equations is not unique. All
representations, however, will result in the same system response.
The goal is to develop n first-order differential equations, where n is the order of
the total system. The second-order differential equation describing the common mass-
spring-damper system would become two first-order differential equations represent-
ing the position and velocity of the mass. What about the acceleration? Having only the
position and velocity as states will suffice, since the acceleration of the mass can be
determined from the current position and velocity of the mass (along with any imposed
inputs on the system). The position allows us to determine the net spring force and the
velocity the net friction force. In general, each state variable must be
independent. Since the acceleration can be found as a function of the position and
velocity, it is dependent and does not meet the general criteria for a state variable.
The general form for a linear system of state equations uses vector and
matrix notation to define the constants of the states and inputs. The standard nota-
tion looks like the following:

dx/dt = A x + B u

and

y = C x + D u
x is the vector containing the state variables and u is the vector containing the inputs
to the system. A is an n × n matrix containing the constants (for linear systems) of the
state equations and B is a matrix containing the constants multiplying the inputs.
The matrix C determines the desired outputs of the system based on the state vari-
ables. D is usually zero unless there are inputs directly connected to the outputs, as
found in some feedforward control algorithms. The general size relationships
between the vectors and matrices are listed below.
Define

n as the number of states defined for the system (the system order);
m as the number of outputs of the system;
r as the number of inputs acting on the system.

Then

x is the state vector and has dimensions n × 1 (as does dx/dt).
u is the input vector and has dimensions r × 1.
y is the output vector and has dimensions m × 1.
A is the system matrix and has dimensions n × n.
B is the input matrix and has dimensions n × r.
C is the output matrix and has dimensions m × n.
D is the feedforward matrix and has dimensions m × r.

If the system is nonlinear, it must be left as a system of first-order differential
equations. Although state space at first glance seems confusing, it is a powerful way
to represent higher order systems, since the algebraic routines used to analyze each
equation do not change and the operations remain the same. Linear algebra theorems
are applicable when the matrix form is used, regardless of the system order. Since it is
computationally efficient and particularly well suited to difference equations, the state
space form is very common in the design of advanced controllers.
Now let us look at the two examples of differential equations given in the
previous section and develop the equivalent state equations. Later on we will see
more complex models represented with state equations.

EXAMPLE 2.1
Mass-spring-damper system:

m x'' + b x' + k x = F

Since this is a second-order system, we expect to need two states, result-
ing in a 2 × 2 system matrix, A. First, we need to define our state variables. This is
easily accomplished by choosing acceleration and velocity as the first state deriva-
tives, since they are both integrally related to position. Therefore, let x1, the first
state variable, equal x, the position, and x2, the second state variable, equal the
velocity, dx/dt. Using x as the dependent variable in the differential equation is a
little misleading, since x1 in this case is equal to x but is not required to be so. Now
we can set up the following identities and equations:

x1 = x (position)
x1' = x2 (velocity)
u = F (input)
x2' = x'' = −(b/m) x2 − (k/m) x1 + (1/m) u (acceleration)

We have met the goal of having two first-order differential equations as func-
tions of the state variables and constants. Since the system is linear we can also
represent it in matrix form:

[x1']   [   0      1   ] [x1]   [  0  ]
[x2'] = [ −k/m   −b/m  ] [x2] + [ 1/m ] u

If the position of the mass is the desired output, then the C and D matrices are

y = [1 0] [x1; x2] = C x + D u (with D = 0)
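As a sketch of how these two state equations can be simulated, a simple forward-Euler integration in Python (the parameter values are illustrative, not from the text):

```python
# Mass-spring-damper state equations (Example 2.1) integrated with
# forward Euler. Illustrative parameter values.
m, b, k = 1.0, 4.0, 16.0

def simulate(F, t_end=10.0, dt=1e-3):
    """March x1' = x2, x2' = -(k/m)x1 - (b/m)x2 + F/m forward in time."""
    x1, x2 = 0.0, 0.0          # position, velocity
    for _ in range(int(t_end / dt)):
        dx1 = x2
        dx2 = -(k / m) * x1 - (b / m) * x2 + F / m
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1

print(simulate(F=8.0))   # settles near the steady state F/k = 0.5
```

The long-time result matches the steady-state value x = F/k obtained earlier by setting the derivatives to zero.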

EXAMPLE 2.2
Applying the same procedure to the pendulum equation:

θ'' + (g/l) sin θ = 0

x1 = θ (angular position)
u = 0 (input)
x1' = x2 (angular velocity)
x2' = θ'' = −(g/l) sin(x1) (angular acceleration)

With nonlinear equations this is the final form. If the matrix form is desired, the
equations must first be linearized as shown in the following section. Whether or not
they are written in matrix form, sets of first-order differential equations (regardless of
the number) are easily integrated numerically (i.e., Runge-Kutta) to simulate the
dynamic response of the systems. This is further explored in Chapter 3.
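As noted above, sets of first-order equations integrate easily with a Runge-Kutta scheme. A minimal fourth-order Runge-Kutta (RK4) sketch for the pendulum states, taking g/l = 1 for simplicity (function names are illustrative):

```python
import math

# RK4 integration of the nonlinear pendulum states, with g/l = 1.
def f(x):
    theta, omega = x
    return (omega, -math.sin(theta))   # x1' = x2, x2' = -(g/l) sin(x1)

def rk4_step(x, dt):
    k1 = f(x)
    k2 = f(tuple(xi + 0.5 * dt * ki for xi, ki in zip(x, k1)))
    k3 = f(tuple(xi + 0.5 * dt * ki for xi, ki in zip(x, k2)))
    k4 = f(tuple(xi + dt * ki for xi, ki in zip(x, k3)))
    return tuple(xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

x, dt = (0.1, 0.0), 0.001
energy0 = 1.0 - math.cos(x[0])             # initial energy (omega = 0)
for _ in range(20000):                     # integrate for 20 s
    x = rk4_step(x, dt)
energy = 1.0 - math.cos(x[0]) + 0.5 * x[1] ** 2
print(abs(energy - energy0))               # essentially zero
```

The undamped pendulum conserves energy, so the near-zero energy drift is a quick sanity check on the integrator.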

2.3.4 Linearization Techniques


Since most classical control system design techniques are based on linear systems,
understanding how to linearize a system, and the resulting limitations, is impor-
tant. We will first examine the linearization process and then look at specific cases
for functions of one or two variables in an effort to further explain and understand
the strengths and limitations. The section continues by linearizing the pendu-
lum model, completing the state space matrix form, and concludes by linearizing
a hydraulic valve with respect to pressure and flow. Linearizing hydraulic valves
is a common procedure when designing and simulating electrohydraulic control
systems.
The general equation used when linearizing a system is given as follows:

ŷ = f(x1o, x2o, ..., xno)
    + ∂f(x)/∂x1 |(x1o, x2o, ..., xno) (x1 − x1o)
    + ∂f(x)/∂x2 |(x1o, x2o, ..., xno) (x2 − x2o)
    + ...
    + ∂f(x)/∂xn |(x1o, x2o, ..., xno) (xn − xno)

The linearized output, ŷ, is found by first determining the desired operating
point, evaluating the system output at that point, and determining the linear varia-
tions around that point for each variable affecting the system. It is important to
choose an operating point around which the system is actually expected to operate,
since the linearized system model, and thus the resulting design, can vary greatly
depending on the point chosen. Once the operating point is chosen, the operating
offset, f(x1o, x2o, ..., xno), is calculated. For some design procedures only the varia-
tions around the operating point are examined, since they affect the system stability
and dynamic response. This will be explained further in the next example. Next, each
slope, or the linearized effect of each individual variable, is found by taking the
partial derivative with respect to each variable. Each partial derivative is evaluated at
the operating point and becomes a numerical constant multiplied by the deviation
away from the operating point. When all constants are collected we are left with one
constant followed by a constant multiplying each individual variable, thus resulting
in a linear equation of n variables. The actual procedure is therefore quite simple,
and most computer simulation packages include linearization routines to perform
this step for the end user. It is still up to the user, however, to linearize about the
correct point and to recognize the valid and usable range of the resulting linearized
equations.
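A sketch of what such a linearization routine does internally, using central finite differences to estimate each slope (the function and helper names here are illustrative):

```python
# Numerical linearization by central finite differences: evaluate the
# offset at the operating point, then perturb each variable in turn.
def linearize(f, x0, eps=1e-6):
    """Return (offset, slopes) so that
    y_hat(x) = offset + sum(slopes[i] * (x[i] - x0[i]))."""
    offset = f(x0)
    slopes = []
    for i in range(len(x0)):
        up = list(x0); up[i] += eps
        dn = list(x0); dn[i] -= eps
        slopes.append((f(up) - f(dn)) / (2 * eps))
    return offset, slopes

# Demo on an arbitrary function y = x1^2 + 3 x2 about (2, 1)
off, grad = linearize(lambda v: v[0] ** 2 + 3.0 * v[1], [2.0, 1.0])
print(off, [round(g, 3) for g in grad])   # -> 7.0 [4.0, 3.0]
```

The exact partials here are 2·x1 = 4 and 3, so the finite-difference slopes recover the analytical linearization.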
Now let us examine the idea behind the linearization process a little more
thoroughly, beginning with a function y of one variable x. If we were to plot
the function, we might end up with some curve as given in Figure 8. When the
derivative (partial derivative for functions of several variables) is taken of the
curve, the result is another function. We evaluate the derivative of the function at
the operating point to give us the slope of the line representing our original nonlinear
function. Depending on the shape of the original curve, the usable linear range will
vary. At some point the original nonlinear curve is quite different from the linear
estimated value, as shown in the plot.

It is clear in Figure 8 that only a very small range of the actual function can be
approximated with a linear function (line). In fact, for the example in Figure 8, the
slope is either positive or negative depending on the operating point chosen. Many times
the actual functions are not as problematic and may be modeled linearly over a much
larger range. In all cases, it pays to understand where the linearized model is valid.
Let us now linearize the inverted pendulum state equation and determine a
usable range for the linear function.

EXAMPLE 2.3
Since sin(θ) is our nonlinear function and θ = 0 the operating point (pendulum is
vertical), a plot around this point will illustrate the linearization concept.
Performing the math first, we begin with the nonlinear state equation derived
earlier:

x2' = θ'' = f(x1) = −(g/l) sin(x1)

Figure 8 Linearization techniques applied to a function of one variable.



Vertical offset f(0):

θ''|0 = f(0) = −(g/l) sin(0) = 0

Partial derivative:

∂f(x1)/∂x1 = −(g/l) cos(x1)

Slope at x1 = 0 (where cos(x1) → 1):

∂f(0)/∂x1 = −g/l

Resulting linear state equation:

x2' = θ'' = −(g/l) x1 = −(g/l) θ

Since the state equation is now linear, the system can be written using matrices
and vectors, resulting in the following state space linear matrices:

[x1']   [  0    1 ] [x1]   [0]
[x2'] = [ −g/l  0 ] [x2] + [0] u

y = [1 0] [x1; x2] = C x + D u
Common design techniques presented in future sections may now be used to
design and simulate the system. As a caution flag, many designs appear to work
well when simulated but fail when actually constructed due to misuse of the linear-
ization region. Developing good models with the proper assumptions is key to
designing well-performing, robust control systems. To conclude this example, let
us examine the valid region for the linear inverted pendulum model. To do so,
let us plot the linear and actual functions (for simplicity, let g/l = 1). These plots
are given in Figure 9.

From the graph it is clear that between approximately ±0.8 radians (about ±45
degrees) the linear model is very close to the actual one. The greater the deviation beyond
these boundaries, the greater the modeling error. The linearization of the sine
function, as demonstrated above, is also commonly referred to as the small angle
approximation.
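A quick numerical check of how the small angle approximation degrades as the angle grows (illustrative Python):

```python
import math

# Relative error of the small-angle approximation sin(theta) ~ theta.
for theta in (0.1, 0.4, 0.8):
    err = abs(theta - math.sin(theta)) / math.sin(theta)
    print(f"{theta:.1f} rad: {100 * err:.2f}% error")
```

The error stays well under 1% near zero but reaches roughly 11-12% by 0.8 rad, consistent with the usable region read off the plot.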

EXAMPLE 2.4
It is also possible to visualize the linearization process for functions of two variables
by using surface plots. For this example the following function is examined:

y(x1, x2) = 500 x1 − 6 x1² + 500 x2 − x2³

Although seldom done in practice, it is helpful for the purpose of understand-
ing what the linearization process does to plot the function in both the original
nonlinear and linearized forms. This was already done for the pendulum example
with a function dependent on only one variable. In this case, using a function
dependent on two variables, a surface is created when plotting the function. This

Figure 9 Actual versus linearized pendulum model.

is shown in Figure 10, where the function is plotted for 0 < x1 < 25 and 0 < x2 < 25
around an operating point chosen to be at (x1, x2) = (10, 10).

To linearize the function, the offset value and the two partial derivatives must
be calculated and evaluated at the operating point of (x1, x2) = (10, 10).

Offset value:

y(x1, x2) = y(10, 10) = 5000 − 600 + 5000 − 1000 = 8400

Slope in x1 direction:

Figure 10 Example: linearizing a function of two variables.



∂y/∂x1 |(10,10) = 500 − 12 x1 |(10,10) = 380

Slope in x2 direction:

∂y/∂x2 |(10,10) = 500 − 3 x2² |(10,10) = 200

Linearized equation:

ŷ = 8400 + 380 (x1 − 10) + 200 (x2 − 10)
The linearized equation is now the equation of a plane in three-dimensional
space, namely the plane formed by the intersection of the two lines drawn in Figure 10.
The error between the original nonlinear equation and the linearized equation is the
difference between this plane and the actual surface plot, as
illustrated in Figure 11. Although visualization becomes more difficult, the lineariza-
tion procedure remains straightforward for functions of more than two variables.
Each partial derivative simply relates the output to the change in the variable with
respect to which the partial is taken.
It is common to include only the variations about the operating point when
linearizing system models, since this facilitates easy incorporation into block diagrams
and other common system representations. In this case the constant values are
dropped and the system is examined for the amount of variation away from the
operating point. When the system is then simulated, the input and output
values are not absolute but only relative distances (or whatever the units may be)
from the operating point. This simplifies the equation studied in the previous exam-
ple to the following:

ŷ = 380 x1 + 200 x2 (about the point (10, 10))

Figure 11 Example: linearization of two variables, planar surface representation.



When this is done, the system characteristics (i.e., stability, transient response, etc.)
are not changed and the simulation only varies by the offset. Of course, if component
saturation models or similar characteristics are included, then the saturation values
must also reflect the change to variations around the operating point.
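The offset and slopes in this example can be checked numerically with central differences; note that ∂y/∂x2 = 500 − 3 x2², which evaluates to 200 at the operating point (10, 10):

```python
# Numerical check of the two-variable linearization about (10, 10).
def y(x1, x2):
    return 500 * x1 - 6 * x1 ** 2 + 500 * x2 - x2 ** 3

x1o, x2o, eps = 10.0, 10.0, 1e-6
offset = y(x1o, x2o)
s1 = (y(x1o + eps, x2o) - y(x1o - eps, x2o)) / (2 * eps)
s2 = (y(x1o, x2o + eps) - y(x1o, x2o - eps)) / (2 * eps)
print(offset, round(s1), round(s2))   # -> 8400.0 380 200
```

The finite-difference slopes agree with the analytical partials evaluated at the operating point.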

EXAMPLE 2.5
To conclude this section, let us linearize the common hydraulic valve orifice equation
to illustrate the procedure as more commonly implemented in practice. For a more
detailed analysis of the hydraulic valve, please see Section 12.3, where the model is
developed and used in greater detail. The general valve orifice equation may be
defined as follows:

Q(x, PL) = KV x √(PS − PL − PT)

where Q is the flow through the valve (volume/time); KV is the valve coefficient; x is
the fraction of valve opening (−1 < x < 1); PS is the supply pressure; PL is the
pressure dropped across the load; and PT is the return line (tank) pressure.

For this example let us define the operating point at x = 0.5 and PL = 500 psi.
The output Q is in gallons per minute (gpm) and the constants are defined as

KV = 0.5 gpm/√psi,
PS = 1500 psi, and
PT = 50 psi

The offset value is

Q(x, PL) = Q(0.5, 500) = (0.5)(0.5) √(1500 − 500 − 50) = 7.7 gpm

The partial with respect to x is as follows:

Kx = ∂Q/∂x |(x0, PL0) = KV √(PS − PL − PT) = 0.5 √950 = 15.4 gpm (per unit valve opening)

The partial with respect to PL is as follows:

KP = −∂Q/∂PL |(x0, PL0) = KV x / (2 √(PS − PL − PT)) = (0.5)(0.5) / (2 √950) = 0.004 gpm/psi

resulting in the linearized equation

Q̂ = 7.7 + 15.4 (x − 0.5) − 0.004 (PL − 500) gpm

If only variations around the operating point are considered, the equation
becomes

QL = Kx x − KP PL

or

QL = 15.4 x − 0.004 PL

As illustrated in the example, the linearized equation must be consistent with phy-
sical units, and the user must be informed of the units required when
implementing it.
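The offset and both partials can be reproduced numerically; a sketch in Python (variable names are illustrative):

```python
import math

# Orifice equation Q = Kv * x * sqrt(Ps - PL - Pt), linearized about
# the operating point (x0, PL0) = (0.5, 500 psi) from Example 2.5.
Kv, Ps, Pt = 0.5, 1500.0, 50.0
x0, PL0 = 0.5, 500.0

def Q(x, PL):
    return Kv * x * math.sqrt(Ps - PL - Pt)

Q0 = Q(x0, PL0)                                  # offset, ~7.7 gpm
Kx = Kv * math.sqrt(Ps - PL0 - Pt)               # dQ/dx, ~15.4 gpm
Kp = Kv * x0 / (2 * math.sqrt(Ps - PL0 - Pt))    # -dQ/dPL, ~0.004 gpm/psi

def Q_lin(x, PL):
    """Linearized flow about the operating point."""
    return Q0 + Kx * (x - x0) - Kp * (PL - PL0)

print(round(Q0, 1), round(Kx, 1), round(Kp, 4))   # -> 7.7 15.4 0.0041
```

Near the operating point the linear estimate tracks the nonlinear equation closely; for instance, at (x, PL) = (0.6, 550) the exact flow is 9.0 gpm and the linear model is within about 0.05 gpm of it.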

2.4 NEWTONIAN PHYSICS MODELING METHODS


Newton's laws (physics) are generally taught in most introductory college courses in
the area of dynamics. Along with energy methods, almost all systems can be modeled
using these techniques. The resulting equations may range from simple and linear to
nonlinear and highly complex. Regardless of the system modeled, the result is a
differential equation (or equations) capable of predicting the physical system response.
The complexity of the equation reflects the assumptions made and the limitations on
obtaining information. In most cases, proper assumptions allow the model to be reduced
to linear ordinary differential equations. These equations become increasingly
complex as nonlinearities and multiple systems are modeled.

The cornerstone equation is Newton's law, or force = mass × acceleration. As
we will see, even in electrical systems, where voltages are analogous to forces (elec-
tromotive forces), the sum of the forces, or voltages, is zero. Let us first examine the
contents of Table 2 and see how the basic laws of physics enable us to model virtually
any system. Using the notation of inductance, capacitance, and resistance, we see the
commonalities among several different physical systems. All systems can be discussed in
terms of these components, the energy stored, and the power through them, using English
or metric systems of units. The advantage of such an approach, as seen later with bond
graphs, is the recognition that different dynamic systems follow the same laws
of physics. This is an important concept as we move to beginning the design of
automatic control systems.

2.4.1 Mechanical-Translational System Example


Let us begin with a basic mechanical-translational system to apply the laws defined
in Table 2. To do so, let us develop a differential equation that describes system
motion for the mass-spring-damper system in Figure 12. Once the differential equa-
tion is developed, many options are available for simulating the system response.
These techniques are examined in following sections.

In this case, with one mass, the task is to simply sum all the forces acting on the
mass and set them equal to the block's mass multiplied by the acceleration. Writing the
force equations for systems with multiple masses is easily accomplished using the
lumped mass modeling approach: simply sum the forces on each mass and set them
equal to that mass multiplied by its acceleration, all with respect to the one mass in
question. The more difficult part is reducing the system of equations down to a single
input-output relationship. Even with only two masses, this can become quite tedious.

Therefore, let us begin by summing all the forces acting on mass m. Remember
to be consistent with signs. What seems to help some learn this is to imagine the
block moving in the positive direction y(t) and to see what each force would be. Imagine
that you are pushing (displacing) the mass, and as you push, determine the forces
opposing your movement. This results in the following differential equation:

ΣF = −Fk − Fb + F − mg = −k y − b y' + F − mg = m y''
Table 2 Physical System Relationships

Mechanical-translational (effort: force F, N or lbf; flow: velocity v, m/s or in/s):
  Inductance: mass m (kg; slugs); F = m dv/dt; momentum m v = ∫F dt; E = ½ m v²
  Capacitance: spring k (N/m; lb/in); F = k x, with x = ∫v dt; E = ½ k x²
  Resistance: damper b (N/(m/s); lb/(in/s)); F = b v; P = b v²
  Power: P = F v

Mechanical-rotational (effort: torque T, N·m or lbf·in; flow: angular velocity ω, rad/s):
  Inductance: inertia J (N·m·s²; lb·in·s²); T = J dω/dt; momentum J ω = ∫T dt; E = ½ J ω²
  Capacitance: spring k (N·m/rad; lb·in/rad); T = k θ, with θ = ∫ω dt; E = ½ k θ²
  Resistance: damper b (N·m·s; lb·in·s); T = b ω; P = b ω²
  Power: P = T ω

Electrical (effort: voltage V; flow: current i, A):
  Inductance: L (henries, H); V = L di/dt; flux linkage λ = ∫V dt; E = ½ L i²
  Capacitance: C (farads, F); V = (1/C) ∫i dt, with charge q = ∫i dt; E = ½ C V²
  Resistance: R (ohms, Ω); V = R i; P = V²/R
  Power: P = V i

Hydraulic/pneumatic (effort: pressure p, Pa or psi; flow: flow rate Q, m³/s or in³/s):
  Inductance: fluid inertia I (N·s²/m⁵; lbf·s²/in⁵); p = I dQ/dt; momentum I Q = ∫p dt; E = ½ I Q²
  Capacitance: C (m³/Pa; in³/psi, linear); p = (1/C) ∫Q dt, with volume q = ∫Q dt; E = ½ C p²
  Resistance: orifice ((m³/s)/(N/m²)^1/2; (in³/s)/(psi)^1/2); Q = Kv √ΔP
  Power: P = p Q

Thermal (effort: temperature T, K or °R; flow: heat flow rate q, W or Btu/s):
  Inductance: not applicable
  Capacitance: C (J/K; lb·in/°R); T = (1/C) ∫q dt; heat energy E = C T
  Resistance: Rf (K/W; °R/(Btu/s)); q = (1/Rf) T; P = (1/Rf) T (momentum not used)

Figure 12 Mass-spring-damper model.

If we examine the motion about the equilibrium point (where k x = mg), then the
constant force mg can be dropped, since the spring force due to the equilibrium
deflection always balances it. Separating the inputs on the right and the outputs on
the left then results in

m y'' + b y' + k y = F

or, expressed another way,

m d²y/dt² + b dy/dt + k y = F
This should be review for anyone who has had a class in differential equations, vibra-
tions, systems modeling, or controls. Continuing on, we finish the mechanical-trans-
lational section with an example incorporating two masses interconnected, as shown
in Figure 13.
mb is the mass of the vehicle body.
mt is the mass of the tire.
ks is the spring constant of the suspension.
b is the damping from the shock absorber.

Figure 13 Simple vehicle tire/suspension model.



kt is the spring constant of the tire.
r(t) is the road profile input to the suspension system.
Since we have two masses and two springs (four energy storage devices), we
expect the model to be a fourth-order differential equation. The two (second-order)
differential equations are obtained by repeating the process in the first example and
summing the forces on each individual mass in the suspension. In this case we have
interaction between the masses, since they are connected through a spring and a
damper (ks and b). Summing the forces on the vehicle body mass (mb) results in

ΣF = −Fks − Fb = mb y''
−ks (y − x) − b (y' − x') = mb y''

And summing the forces on the tire mass (mt),

ΣF = −Fks − Fkt − Fb = mt x''
−ks (x − y) − kt (x − r) − b (x' − y') = mt x''

The equations can be rearranged to have the system outputs on the left and the
inputs on the right:

mb y'' + b y' + ks y = b x' + ks x
mt x'' + b x' + (ks + kt) x = kt r(t) + b y' + ks y
Since y is the motion of the car body and r(t) is the input to the system, it would be
desirable to have y expressed directly as a function of r(t). In the next chapter we use
Laplace transforms to achieve this. It is easy to express the system using state space
matrices, since the differential equations are already linear. If we define the state
variables as the position and velocity of each mass,

x1 = y
x2 = y' = x1'
x3 = x
x4 = x' = x3'

Then the system can be represented as

x1' = x2
x2' = −(b/mb) x2 − (ks/mb) x1 + (b/mb) x4 + (ks/mb) x3
x3' = x4
x4' = −(b/mt) x4 − ((ks + kt)/mt) x3 + (b/mt) x2 + (ks/mt) x1 + (kt/mt) r(t)

[x1']   [    0          1           0           0    ] [x1]   [   0   ]
[x2'] = [ −ks/mb     −b/mb       ks/mb        b/mb   ] [x2] + [   0   ]
[x3']   [    0          0           0           1    ] [x3]   [   0   ] r
[x4']   [  ks/mt      b/mt    −(ks+kt)/mt    −b/mt   ] [x4]   [ kt/mt ]
y = [1 0 0 0] [x1; x2; x3; x4]

It is important to remember that in each example there are physical units
associated with each variable and that consistent sets of units must be used. In
later sections we use the differential equations developed here to model, design,
and simulate closed-loop control systems.
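The four state equations above can be coded directly and sanity-checked at equilibrium; the parameter values below are illustrative, not from the text. For a constant road offset r, both masses should settle at the road height, i.e., the state (r, 0, r, 0) should zero all four derivatives:

```python
# Tire/suspension state equations from Sec. 2.4.1.
# Illustrative parameter values (SI units).
mb, mt, ks, kt, b = 300.0, 40.0, 2.0e4, 1.8e5, 1.5e3

def derivs(x1, x2, x3, x4, r):
    """Right-hand side of the four first-order state equations."""
    return (
        x2,
        -(b / mb) * x2 - (ks / mb) * x1 + (b / mb) * x4 + (ks / mb) * x3,
        x4,
        -(b / mt) * x4 - ((ks + kt) / mt) * x3
        + (b / mt) * x2 + (ks / mt) * x1 + (kt / mt) * r,
    )

r = 0.05                             # 5 cm static road offset
print(derivs(r, 0.0, r, 0.0, r))     # each component is (numerically) zero
```

The check works because at static equilibrium the suspension spring carries no deflection (x1 = x3) and the tire spring deflection (x3 − r) is also zero.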

2.4.2 Mechanical-Rotational System Example


Mechanical-rotational systems are closely related to translational systems, and many
devices relate one to the other (e.g., rack and pinion steering). A simple mechanical-
rotational system is shown in Figure 14, where a rotary inertia is connected via a
torsional spring to a fixed object and is subject to torsional damping and an input
torque.

To derive the differential equation, simply sum all the torques (the effort variable)
acting on the rotary inertia and set them equal to the inertia multiplied by the
angular acceleration:

ΣT = −K θ − b θ' + T = J θ''

and rearranging

J d²θ/dt² + b dθ/dt + K θ = T

When compared with the second-order differential equation developed for the
simple mass-spring-damper model, the similarities are clear. Each model has inertia,
damping, and stiffness terms where the effects on the system are the same (natural
frequency and damping ratio) and only the units are different. One common case
with the rotational system is when the spring is removed and only damping is present
on the system. The second-order differential equation can then be reduced to a first-
order differential equation with the output of the system being rotary velocity instead
of rotary position. This model is common for systems where velocity, rather than
position, is the controlled variable.
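For the damping-only case just described, the reduced first-order model J dω/dt + bω = T has the familiar exponential step response. A short sketch, with assumed illustrative values for J, b, and T:

```python
import math

# Assumed illustrative values: inertia J, damping b, constant input torque T
J, b, T = 0.02, 0.1, 1.0     # kg m^2, N m s/rad, N m

# With the spring removed, J*dw/dt + b*w = T is first order in the velocity w.
# Closed-form step response: w(t) = (T/b)*(1 - exp(-t/tau)), with tau = J/b.
tau = J / b                   # time constant (0.2 s here)

def w(t):
    return (T / b) * (1.0 - math.exp(-t / tau))

# At t = tau the response reaches about 63.2% of the final velocity T/b
print(round(w(tau) / (T / b), 3))   # -> 0.632
```

The time constant J/b plays the same role for velocity control that the natural frequency and damping ratio play for the second-order position model.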

Figure 14 Mechanical-rotational model.



2.4.3 Electrical System Example


Now let us look at a basic electrical system and see how the modeling of different
systems is related. Beginning with the simple electrical RLC circuit, a common
example shown in Figure 15, let us derive the differential equation that models the
system's dynamic behavior.
Using Table 2 again, let us sum the voltage drops around the entire circuit.
This is commonly referred to as Kirchhoff's voltage law, which together with Kirchhoff's
current law allows the majority of electrical circuits to be modeled and analyzed.
Remember, Vi adds to the voltage total, while R, L, and C contribute voltage drops when
traversing the loop clockwise.
General: \( V_{in} - V_R - V_L - V_C = 0 \)

Using Table 2: \( V_{in} - Ri - L\,di/dt - (1/C)\int i\,dt = 0 \)

Finally, recognizing that \( i = dq/dt \) (current is the flow of charge) and substituting
this in gives the recognizable form

\[ L\frac{d^2q}{dt^2} + R\frac{dq}{dt} + \frac{1}{C}q = V_{in} \]

This is a linear second-order ordinary differential equation and can thus be
classified and simulated by using natural frequency and damping ratio parameters.
Commonly, this equation is modified to have the output be the voltage across the
capacitor, and not the charge q, since the capacitor voltage is easily measured to
verify the model. Using the following identities between q and VC, the capacitor
voltage, the transformation is straightforward.

\[ V_C = \frac{1}{C}\int i\,dt = \frac{q}{C} \]

and

\[ LC\frac{d^2V_C}{dt^2} + RC\frac{dV_C}{dt} + V_C = V_{in} \]

Constructing the circuit and measuring the response easily verifies the resulting
equation. As already noted for other domains, the number of energy storage elements
corresponds to the order of the system. The exception to this is when two or
more energy storage elements are acting together and can thus be combined and
represented as a single component. An example of this is electrical inductors in series;
the mechanical analogy of this would be two masses rigidly connected. Inductive and
capacitive elements both act as energy storage devices in Table 2.

Figure 15 RLC circuit model.



In addition, it should be obvious when comparing the RLC differential equation
with the mass-spring-damper differential equation that the two equations are
identical in function and vary only in notation and physical units. Drawing the
analogy further, we see that an inductor is equivalent to a mass, a resistor to a
damper, and a capacitor to the inverse of a spring. Note also that these three
equivalents each appear in the same column of Table 2. Also, two energy storage
elements, the mass and the spring, resulted in a second-order system. It should be
becoming clearer as we progress that the skills needed to model one type of system
are identical to those required for another. Using the generalized effort and flow
terminology helps us recognize that all systems are subject to the same physical laws
and behave accordingly. This certainly does not imply that a component that is linear
in one domain is linear in another, but that the relationships between effort
and flow variables are consistent. This will become clearer as we move into
hydraulic system models in the next section.
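The analogy can be checked numerically: computing the natural frequency and damping ratio from analogous parameter sets gives identical results. A small sketch, in which all parameter values are illustrative assumptions:

```python
import math

def second_order_params(inertia, damping, stiffness):
    """Natural frequency and damping ratio of: inertia*x'' + damping*x' + stiffness*x = input."""
    wn = math.sqrt(stiffness / inertia)
    zeta = damping / (2.0 * math.sqrt(stiffness * inertia))
    return wn, zeta

# Mechanical: m*y'' + b*y' + k*y = F  (assumed: m = 2 kg, b = 4 N s/m, k = 32 N/m)
wn_m, z_m = second_order_params(2.0, 4.0, 32.0)

# Electrical analog: L*q'' + R*q' + (1/C)*q = Vin, with L ~ m, R ~ b, 1/C ~ k
# (assumed: L = 2 H, R = 4 ohm, C = 1/32 F so that 1/C = 32)
wn_e, z_e = second_order_params(2.0, 4.0, 1.0 / 0.03125)

print(wn_m, z_m)                      # -> 4.0 0.25
print((wn_e, z_e) == (wn_m, z_m))     # -> True
```

The same function serves both domains because only the inertia-damping-stiffness roles matter, which is exactly the point of the effort/flow terminology.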

2.4.4 Basic Hydraulic Positioning Example


Hydraulic systems are commonly found where large forces are required. They can be
mounted in any orientation, have flexible power transmission lines, are very reliable,
and have high power densities where the power is applied. More examples of hydraulic
systems are given in later chapters. Here, we use Newtonian physics to
develop a differential equation for a basic hydraulic positioning device, as shown
in Figure 16. Later this system is examined after feedback has been added; for now
the output simply follows the input, and whenever the command is nonzero the
output is moving (until the end of stroke is reached).
In modeling the system shown in Figure 16, two components must now
be accounted for: the valve and the piston. The piston is quite simple and can be
addressed by summing the forces on the mass m. Compared with the viscous friction
forces acting on the piston, the fluid pressure acting on the piston area is the dominant
force. Hydraulic valves, already linearized in Section 2.3.4, relate three primary
variables: pressure, flow, and spool position. At this point we assume a linearized
form of the orifice equation describing the valve, where the equation is linearized
Figure 16 Hydraulic positioning example.



about the desired operating point and relates flow to load pressure and valve
position. Let us begin by laying out the basic equations.
Valve flow:
\[ Q = \frac{dQ}{dx}x - \frac{dQ}{dP}P = K_x x - K_P P \]
Piston flow:
\[ Q = A\,\frac{dy}{dt} \]
Force balance:
\[ \sum F = m\frac{d^2y}{dt^2} = PA - b\,\frac{dy}{dt} \]
Q is the flow into the cylinder, P is the cylinder pressure, dQ/dx is the slope of the
valve flow metering curve at the operating point, dQ/dP is the slope of the valve
pressure-flow (PQ) curve at the operating point, A is the area of the piston (minus
the rod area), and b is the damping from the mass friction and cylinder seal friction.
Solving the valve flow equation for P:
\[ P = -\frac{Q}{K_P} + \frac{K_x}{K_P}x \]
Substitute into the force balance:
\[ m\frac{d^2y}{dt^2} = \left(-\frac{Q}{K_P} + \frac{K_x}{K_P}x\right)A - b\,\frac{dy}{dt} \]
Eliminating Q using the piston flow:
\[ m\frac{d^2y}{dt^2} = \left(-\frac{A}{K_P}\frac{dy}{dt} + \frac{K_x}{K_P}x\right)A - b\,\frac{dy}{dt} \]
Finally, combining inputs and outputs results in
\[ m\frac{d^2y}{dt^2} + \left(b + \frac{A^2}{K_P}\right)\frac{dy}{dt} = \frac{AK_x}{K_P}x \]
This equation is worthy of several observations, many of which will become
clear as we progress. First, notice that y in its nonderivative form does not appear.
When Laplace transforms are discussed we will see why we call this a type 1 system with
one integrator. The net effect at this point is to understand that a positive input x will
continue to produce movement in y regardless of whether or not x continues to
increase. That is, y, the output, integrates the position of x, the input. Also, many
assumptions were made in developing this simple model. Beginning with the valve, it
is linearized about some operating point. For small ranges of operation this is a
common procedure. Additionally, the mass of the valve spool was ignored, and the
input x is assumed to position the spool directly. In most cases involving control
systems, a solenoid would provide a force on the valve spool that would cause the
spool to accelerate and move. A separate force balance is then required on the spool,
and the equations become more complex. Finally, the fluid in the system was
assumed to be massless and incompressible. At certain input frequencies,
these become important items to consider. Without making these assumptions, the
result would have been a sixth-order nonlinear differential equation, a task
probably not enjoyable to most people. What does it take to accurately model such
systems? In a later section, bond graphs are used to develop higher level models.
In conclusion, we have reviewed the basic idea of using Newtons laws to
develop models of dynamic systems. Although only simple models were presented,
the procedure illustrated is the basic building block for developing complex models.
Most errors seem to be made in properly applying basic laws of physics when
complex models are developed. Soon the dierential equations developed above
will be examined further and implemented in control system design.

2.4.5 Thermal System Example


To demonstrate the application of Table 2 to thermal system modeling, we examine a
simplified home water heater. Thermal systems are somewhat limited when confined
to modeling them with ordinary differential equations, and higher order partial
differential equations are required for more complete models. Most thermal systems
exhibit interaction between resistance and capacitance properties and do not lend
themselves to lumped parameter modeling. There are three basic ways heat flows are
modeled: conduction, convection, and radiation. In most cases involving control
systems, the radiation heat transfer may be ignored since the temperature differentials
are not large enough for it to be significant. This simplifies our models, since
conduction and convection can be linearly modeled as having the heat flow proportional
to the temperature difference. Since most temperature control systems are
designed to maintain constant temperature (process control), the linear models
work quite well. Thus, the basic models for the resistance, R, and capacitance, C,
properties in thermal systems may be expressed as in Table 2 (effort variable is
temperature; flow variable is heat flow rate):

\[ \text{Heat flow} = \frac{\text{Temperature}}{R} \qquad \text{and} \qquad \text{Heat stored} = C \times \text{Temperature} \]

If we assume that the water temperature inside the water heater is uniform,
then the necessary equilibrium condition is that the heat added minus the heat
removed equals the heat stored. This is very similar to the liquid level systems
examined in the next section. The water heater system can be simplified as shown
in Figure 17, where cold water flows into the tank, a heater is embedded in the tank,
and hot water exits (hopefully).

Figure 17 Thermal system (water heater) model.



θ_i = temperature of water entering the tank
θ_t = temperature of water in and leaving the tank
θ_a = temperature of air surrounding the tank
q_i = heat flow into the system from water entering
q_o = heat flow leaving the system from water exiting
q_h = heat flow into the system from the heater
q_a = heat flow leaving the system into the atmosphere (through insulation)
C = thermal capacitance of water in the tank
R = thermal resistance of insulation
S = specific heat of water
dm/dt = mass flow rate in and out of tank
The governing equilibrium equation is

heat flow in - heat flow out = heat flow stored

\[ q_i + q_h - q_a - q_o = C\,\frac{d\theta_t}{dt} \]

Now define the heat flows:

\[ q_i = (dm/dt)\,S\,\theta_i \]
q_h = heat flow from heater (system input)
\[ q_a = (\theta_t - \theta_a)/R \text{, heat lost through insulation with resistance } R \]
\[ q_o = (dm/dt)\,S\,\theta_t \]

Substitute into the equilibrium equation:

\[ C\,\frac{d\theta_t}{dt} = -\frac{\theta_t - \theta_a}{R} - \frac{dm}{dt}S\,\theta_t + \frac{dm}{dt}S\,\theta_i + q_h \]

And simplify:

\[ C\,\frac{d\theta_t}{dt} = -\frac{\theta_t - \theta_a}{R} - \frac{dm}{dt}S(\theta_t - \theta_i) + q_h \]

Or in terms of θ_t:

\[ C\frac{d\theta_t}{dt} + \left(\frac{1}{R} + \dot{m}S\right)\theta_t = q_h + \frac{1}{R}\theta_a + \dot{m}S\,\theta_i \]
Remember that this model assumes uniform water temperature in the tank, no
heat storage in the tank itself or the insulation surrounding it, linear models for
conduction and convection, and no radiation heat losses. As in all models, attention
must be given to ensure that consistent sets of units are used.
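The water heater equation can be integrated numerically to find the steady operating temperature. The sketch below uses assumed illustrative parameter values and checks the simulated temperature against the steady-state value obtained by setting dθt/dt = 0:

```python
# Assumed illustrative water-heater parameters, SI units
C = 4.18e5        # thermal capacitance of tank water, J/K (about 100 kg of water)
R = 0.05          # insulation thermal resistance, K/W
S = 4180.0        # specific heat of water, J/(kg K)
mdot = 0.01       # mass flow in and out, kg/s
theta_a, theta_i = 20.0, 15.0   # ambient and inlet temperatures, deg C
qh = 3000.0       # heater power, W

# Steady state of C*dT/dt + (1/R + mdot*S)*T = qh + theta_a/R + mdot*S*theta_i
a = 1.0 / R + mdot * S
T_ss = (qh + theta_a / R + mdot * S * theta_i) / a

# Euler integration starting from the inlet temperature converges to T_ss
T, dt = theta_i, 1.0
for _ in range(int(20 * C / a)):    # roughly 20 time constants
    T += dt * (qh + theta_a / R + mdot * S * theta_i - a * T) / C
print(round(T, 1) == round(T_ss, 1))
```

The time constant C/a (here on the order of two hours) is why tank water heaters respond so slowly, and why the linear model is adequate for regulation about a set point.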

2.4.6 Liquid Level System Example


Liquid level systems can be considered special cases of the hydraulic/pneumatic
row in Table 2. Several assumptions are made that simplify the models. The first
common assumption concerns the type of flow. Remembering that flow can be
described as laminar or turbulent (not forgetting the transitional region), flow in
liquid level systems is generally assumed to be laminar. Whereas turbulent flow
pressure drops vary with the square of the flow rate (effort = R x flow^2), laminar
flow pressure drops are proportional to flow and inherently more linear (effort =
R x flow). In most liquid level systems the fluid velocity is relatively slow and the
assumption is valid. The second assumption commonly made is to ignore the effects
of the fluid inertia and capacitance, since fluid velocities and pressures are generally
low. Finally, instead of dealing with pressure as our effort variable, elevation head is
used as the effort variable. These are related through the weight density, γ, where:
\[ \text{Pressure} = (\text{weight density}) \times (\text{elevation head}) = \gamma h \]

This leads to the governing equilibrium equation (law of continuity):

flow in - flow out = rate of change in stored volume

or

\[ q_{in} - q_{out} = C\,\frac{dh}{dt} \]

If h is the height of liquid in the tank, then C simply represents the cross-sectional area
of the tank (which may or may not be constant as the level changes). Of course, in
reviewing previous examples and examining Table 2, this is simply an alternative
form of the electrical capacitor equation (i = C dV/dt) or the thermal capacitor
equation from the previous section. To illustrate the concepts in an example, let
us consider the system of two tanks represented in Figure 18 and develop the
differential equations for the system.
q_i = liquid flow rate into the system
q_o = liquid flow rate leaving the system
q_b = liquid flow rate from tank 1 into tank 2
h_1 = liquid level (height) in tank 1
h_2 = liquid level (height) in tank 2
C_1 = capacitance of tank 1 (cross-sectional area)
C_2 = capacitance of tank 2 (cross-sectional area)
R_1 = resistance to flow between tank 1 and tank 2 (valve)
R_2 = resistance to flow between tank 2 and discharge port (valve)
Governing equation for tank 1:
\[ q_i - q_b = C_1\,\frac{dh_1}{dt} \]
Governing equation for tank 2:
\[ q_b - q_o = C_2\,\frac{dh_2}{dt} \]
Relationships between variables:
\[ q_b = \frac{h_1 - h_2}{R_1} \qquad q_o = \frac{h_2}{R_2} \]

Figure 18 Liquid level system.



And simplifying:

\[ R_1 C_1\,\frac{dh_1}{dt} + h_1 = R_1 q_i + h_2 \qquad \text{and} \qquad R_2 C_2\,\frac{dh_2}{dt} + \left(1 + \frac{R_2}{R_1}\right)h_2 = \frac{R_2}{R_1}\,h_1 \]

In the two resulting equations, each term has units of length (pressure head)
and the equations are coupled. The input to the system is q_i and the desired output
(controlled variable) is h_2. In the next chapter we use Laplace transforms to find the
relationship between h_2 and q_i.
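The coupled tank equations are easily simulated. At steady state all flows equal q_i, so h_2 = R_2 q_i and h_1 = h_2 + R_1 q_i; the sketch below, with assumed illustrative parameters, verifies this:

```python
# Two-tank liquid level system: forward-Euler solution of the coupled equations
#   C1*dh1/dt = qi - (h1 - h2)/R1
#   C2*dh2/dt = (h1 - h2)/R1 - h2/R2
# Parameter values below are illustrative assumptions.
C1, C2 = 2.0, 1.5        # tank cross-sectional areas, m^2
R1, R2 = 100.0, 150.0    # valve resistances, head per unit flow (s/m^2)
qi = 0.01                # constant inflow, m^3/s

h1, h2, dt = 0.0, 0.0, 0.1
for _ in range(500_000):            # long enough to reach steady state
    qb = (h1 - h2) / R1
    qo = h2 / R2
    h1 += dt * (qi - qb) / C1
    h2 += dt * (qb - qo) / C2

# At steady state every flow equals qi, so h2 = R2*qi and h1 = h2 + R1*qi
print(round(h1, 3), round(h2, 3))
```

The steady levels depend only on the resistances and the inflow; the capacitances (tank areas) set how quickly the levels get there.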

2.4.7 Composite System: Electrical and Mechanical


To conclude this section, let us develop a model of a common DC motor with an
inertial load. This model is given in Figure 19. For this system we need to write two
equations, one electrical and one mechanical. The electrical equation comes from the
sum of the voltage drops around the loop and the mechanical equation from the
torques acting on the inertial load.
Voltage loop equation:
\[ L\frac{di}{dt} = V_{in} - R\,i - K_{emf}\,\omega \]
Newton's law:
\[ J\frac{d\omega}{dt} = T_{em} - T_L = K_T\,i - T_L \]
where R is the armature resistance, L is the armature inductance, K_emf is the back
emf constant, K_T is the torque constant, V_in is the applied voltage, i is the armature
current, T_em is the electromagnetic torque, T_L is the load torque, J is the motor and
load inertia, and ω is the rotor rotational velocity.
Since the variables are coupled, it is much easier to simply write the state
equations in linear matrix form:

\[
\begin{bmatrix} \dot{i} \\ \dot{\omega} \end{bmatrix} =
\begin{bmatrix} -\dfrac{R}{L} & -\dfrac{K_{emf}}{L} \\[2mm] \dfrac{K_T}{J} & 0 \end{bmatrix}
\begin{bmatrix} i \\ \omega \end{bmatrix} +
\begin{bmatrix} \dfrac{1}{L} & 0 \\[2mm] 0 & -\dfrac{1}{J} \end{bmatrix}
\begin{bmatrix} V_{in} \\ T_L \end{bmatrix}
\]
So in both examples, it is quite simple to apply the basics to arrive at system
models that later will be examined and ultimately controlled.
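A short simulation of the motor state equations illustrates the expected behavior: with no load torque, the speed approaches the no-load value V_in/K_emf and the steady armature current approaches T_L/K_T = 0. Parameter values below are illustrative assumptions:

```python
# Euler simulation of the DC motor state equations; assumed illustrative values (SI units)
R, L = 1.0, 0.5e-3         # armature resistance (ohm) and inductance (H)
Kemf, Kt = 0.05, 0.05      # back-emf and torque constants
J = 1e-4                   # motor plus load inertia, kg m^2
Vin, TL = 12.0, 0.0        # step voltage input, no load torque

i, w = 0.0, 0.0            # armature current (A), rotor speed (rad/s)
dt = 1e-5
for _ in range(100_000):   # 1 second of simulated time
    di = (Vin - R * i - Kemf * w) / L
    dw = (Kt * i - TL) / J
    i += dt * di
    w += dt * dw

# No-load steady state: speed -> Vin/Kemf, current -> TL/Kt = 0
print(round(w), round(i, 2))
```

Note the two very different time scales (the electrical pole is much faster than the mechanical one), which is typical of this composite electromechanical system.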

Figure 19 Permanent magnet DC motor model.



Several sections from now we will see why recognizing how different systems
relate is important. If you feel comfortable modeling one system and simulating its
response, then the ability to model many other systems is already present. This is an
important skill for a controls engineer, since most systems include multiple
subsystems in different physical domains.

2.5 ENERGY METHODS APPLIED TO MODELING


One final method utilizing Newtonian physics is Lagrange's equations. This method
is very powerful when the lumped mass modeling approach becomes burdened with
algebraic reductions. The theory is based on Hamilton's principle and can be simply
summarized as follows: a dynamic system's energy remains constant when all the work
done by external forces is accounted for. The power dissipated by components with
friction must also be accounted for. For a conservative system without external
forces, the equation is very simple:
 
\[ \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = 0 \]

L is the Lagrangian, L = T - V, where T is the kinetic energy and V is the
potential energy. q_i is a generalized coordinate (which would be y in the mass-spring-
damper example above). As we see in the example, the kinetic energy relates to dq_i/dt
and the potential energy to q_i. Since most systems we wish to control involve
components with losses along with external forces, a more general equation must be
used:
 
\[ \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} + \frac{\partial P}{\partial \dot{q}_i} = Q_i \]

P is the added power function, describing the dissipation of energy by the system,
and Q_i represents the generalized external forces acting on the system. Common energy
expressions for mechanical and electrical systems are given in Table 3.

EXAMPLE 2.6
This method is illustrated by revisiting the mass-spring-damper example we studied
earlier in Figure 12. First, let us write the expressions describing the kinetic and
potential energy for the system by referencing Table 3.

Table 3 Energy Expressions for Electrical and Mechanical Elements

Energy type              Mechanical                              Electrical
Kinetic energy, T        Mass: T = (1/2) m (dx/dt)^2             Inductor: T = (1/2) L (dq/dt)^2
Potential energy, V      Spring: V = (1/2) k x^2                 Capacitor: V = (1/2) C V^2 = (1/(2C)) q^2
                         Gravity: V = m g h
Dissipative energy, P    Damper: P = (1/2) b (dx/dt)^2           Resistor: P = (1/2) R (dq/dt)^2 = (1/2) R i^2

\( T = \frac{1}{2}mv^2 = \frac{1}{2}m(dx/dt)^2 \) and \( V = \frac{1}{2}kx^2 \), where \( q_i = x \), \( i = 1 \).

Thus,
\[ L = \tfrac{1}{2}mv^2 - \tfrac{1}{2}kx^2 \]

Also, for the power dissipated and generalized forces,
\[ P = \tfrac{1}{2}b(dx/dt)^2 = \tfrac{1}{2}bv^2 \qquad \text{and} \qquad Q_i = F \]

Now, insert these expressions into Lagrange's equation and simplify.

\[ \frac{d}{dt}\left(\frac{\partial(\frac{1}{2}m\dot{x}^2 - \frac{1}{2}kx^2)}{\partial \dot{x}}\right) - \frac{\partial(\frac{1}{2}m\dot{x}^2 - \frac{1}{2}kx^2)}{\partial x} + \frac{\partial(\frac{1}{2}b\dot{x}^2)}{\partial \dot{x}} = F \]

\[ \frac{d}{dt}(m\dot{x}) + kx + b\dot{x} = F \]

Finally,
\[ m\ddot{x} + b\dot{x} + kx = F \]

Remembering the notation where \( dx/dt = \dot{x} \) and \( d^2x/dt^2 = \ddot{x} \), we see that the
identical differential equation was arrived at, independently of the Newtonian
derivation, for the mass-spring-damper system.
Using energy methods simply gives the controls engineer another tool to use in
developing system models. For systems with many lumped objects, it becomes an
easy way to quickly develop the desired differential equations.
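The energy bookkeeping underlying Lagrange's method can be checked numerically for the mass-spring-damper: the change in stored energy T + V must equal the work input by F minus the energy dissipated in the damper. A sketch with assumed illustrative values:

```python
# Numerical check of the energy balance for m*x'' + b*x' + k*x = F:
# change in (T + V) = external work input - energy dissipated in the damper.
# All parameter values are illustrative assumptions.
m, b, k, F = 2.0, 4.0, 32.0, 10.0
dt = 1e-6
x, v = 0.0, 0.0

def energy(x, v):
    return 0.5 * m * v**2 + 0.5 * k * x**2   # kinetic + potential, T + V

E0, work, dissipated = energy(x, v), 0.0, 0.0
for _ in range(1_000_000):                   # simulate 1 second
    a = (F - b * v - k * x) / m
    v += a * dt
    x += v * dt
    work += F * v * dt                       # external work input, integral of F*v
    dissipated += b * v * v * dt             # damper loss, integral of b*v^2
# Stored energy change should match work in minus energy dissipated
print(abs((energy(x, v) - E0) - (work - dissipated)) < 1e-2)
```

This is Hamilton's principle in discrete form: once the external work and the dissipation term are accounted for, nothing else appears in the balance.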

2.6 POWER FLOW MODELING METHODS: BOND GRAPHS


2.6.1 Bond Graph Basics
Developing the differential equations for complex dynamic systems can quickly
become an exercise in penmanship when attempting to keep track of all the variables
interacting between the different components. Although applying the basic
Newtonian laws of physics to each subsystem is quite simple, it becomes very difficult
to reduce, simplify, and combine the individual equations. This is especially true
when multiple domains are represented. In these systems, modeling the power flow
through a system is an attractive method with several advantages. We normally
think of power entering a system and being delivered to some load. In most systems,
diagramming the flow of power through the system is fairly intuitive. What each
subsystem does is transport, transform, and/or dissipate some of the power until it
reaches the load. In generalized variables, then, the power flow can be represented
throughout the entire system without the confusing conflict of notations abundant
between different domains. One method particularly attractive for teaching (and
using in practice) the modeling of dynamic systems using the power method is
bond graphs. The goal in this section is not to teach everything required for modeling
all dynamic systems using bond graphs, but rather to present the idea of modeling
power flow through a system, a unified and structured approach to modeling,
and incentive for further study regarding bond graphs. If we can get to the point of
understanding the concepts of using bond graphs we will better understand and use
the methods that we already use and are familiar with. That being said, there are
many advantages to learning bond graphs themselves, as we will see in this section.
Rosenberg and Karnopp [1], who extended earlier work by Ezekiel and Paynter
[2], helped define the power bond graph technique in 1983. Bond graphs can easily
integrate many different types of systems into one coherent model. Many of the
higher level modeling and simulation programs, such as Boeing's EASY5, reflect
the principles found in bond graphs.
Bond graph models are formed on the basis of power interchanges between
components. This method allows a formal approach to the modeling of dynamic
systems, including assignment of causality and direct formulation of state equations.
Bond graphs rely on a small set of basic elements that interact through power bonds.
These bonds carry causality information and connect to ports. The basic elements
model ideal reversible (C, I) and irreversible (R) processes, ideal connections (0, 1),
transformations (TF, GY), and ideal sources (Se, Sf).
The benefits of bond graphs include the following: modeling power flow reduces
the complexity of diagrams; a few easy steps result in state space equations or block
diagrams; computer solutions are simple; and causality on each bond is easily
determined. Every bond has two variables associated with it, an effort and a flow. In a
mechanical-translational system, these are simply the force and velocity. This constitutes
a power flow, since force times velocity equals power. Thus, Table 2 was given
as a precursor to bond graphs by including the effort and flow variable for each
system. For simple systems it is just as easy to use the methods in the previous
section but, as will be seen, for larger systems bond graphs are extremely powerful
and provide a structured modeling approach for dynamic systems.
This section seeks to introduce bond graphs as a usable tool; to completely
illustrate their capabilities would take much longer. In bond graphs, only four variables
are needed: effort and flow and the time integrals of each. Thus, for the
mechanical system, force, velocity, momentum, and position are all the variables
needed. The bonds connect ports and junctions together, and the junctions take two
forms: 0 junctions and 1 junctions. The 0 junction represents a
common effort point in the system, while the 1 junction represents a common velocity
point in the system. Table 4 illustrates the relationships between the general bond
graph elements.
Using the notation in Table 4 and modifying Table 2 will complete the bond
graph library and allow us to model almost all systems quickly and efficiently. Table
5 illustrates these results for common systems.
Finally, a few words about bond graph notation. The basic bond is drawn as a line
with a half arrow, which points in the direction of power flow when the product e x f is
positive. The direction of the arrow is arbitrarily chosen to aid in writing the equations.
If the power flow turns out to be negative in the solution, then the direction of flow is
opposite the arrow direction. The line perpendicular to the arrow (the causal stroke)
designates the causality of the bond. This helps to organize the component constitutive
laws into sets of differential equations. Physically, the causality determines the cause and
effect relationships for the bonds. Some bonds are restricted, while others are chosen
arbitrarily. Table 6 lists the causal assignments. To determine system causality:
1. Assign the necessary bond causalities. These result from the effort or flow
inputs acting on the system. By definition, an effort source imposes an effort on the
node and is thus restricted. This is like saying that a force input acting on a mass

Table 4 Basic Bond Graph Elements

Element         Symbol       Parameters                        Equations
Bond            half arrow   e_i = effort on ith bond          Power = e x f
                             f_i = flow on ith bond
0 junction      0            NA                                e_i's are equal; sum of f_i's = 0
1 junction      1            NA                                sum of e_i's = 0; f_i's are equal
Resistor        R            resistance, R                     e = R f
Capacitor       C            capacitance, C; e_0 initial value e = (1/C) ∫ f dt + e_0; f = C de/dt
Inductor        I            inductance, I; f_0 initial value  e = I df/dt; f = (1/I) ∫ e dt + f_0
Effort source   Se           amplitude, E                      e = E
Flow source     Sf           amplitude, F                      f = F
Transformer     TF           ratio, n                          e_in = n e_out; f_out = n f_in
Gyrator         GY           ratio, r                          e_out = r f_in; e_in = r f_out
cannot define the velocity explicitly; it causes an acceleration, which then gives the
mass a velocity. The opposite is true for flow inputs.
2. Extend these wherever possible using the restrictive causalities listed in the
table. Restrictive causalities generally are applied at 0, 1, TF, and GY nodes. For
example, since a 1 junction represents common flows, only one connection can define
(cause) the flow, and thus only one causal stroke points toward the 1 junction. Not
meeting this requirement would be like having two electronic current sources
connected in series.
3. If any bonds remain without causal marks, apply integral causality to
I and C elements. Integral causality is preferred but not always possible and is
described in more detail following this section.
4. All remaining R elements can be arbitrarily chosen. For example, one typical
resistor element is a hydraulic valve. We can impose a pressure drop across the valve
and measure the resulting flow; or we can impose a flow through the valve and
measure the resulting pressure drop. An effort causes a flow, or a flow causes an
effort, and both are valid situations.
The causality assignments may also be reasoned out apart from the table.
For example, the effort, or force, must be the cause acting on an inductive
(mass) element and the flow, or velocity, the effect; hence the proper integral
causality. The opposite analogy is the velocity being the cause acting on a mass
and the force being the effect. A step change in the cause (velocity) would then
require an infinite (impulse) force, which physically is impossible. With the
Table 5 Bond Graph Relationships Between Physical Systems

System       Effort e(t)           Flow f(t)                    Momentum p = ∫ e dt            Displacement q = ∫ f dt   Power P(t) = e(t) f(t)   Energy
General      e(t)                  f(t)                         p = ∫ e dt                     q = ∫ f dt                e(t) f(t)                E(p) = ∫ f dp (kinetic); E(q) = ∫ e dq (potential)
Translation  F, force (N)          V, velocity (m/sec)          P, momentum (N sec)            x, distance (m)           F(t) V(t), W             ∫ V dp, J; ∫ F dx, J
Rotation     T, torque (N m)       ω, ang. vel. (rad/sec)       H, ang. momentum (N m sec)     θ, angle (rad)            T(t) ω(t), W             ∫ ω dH, J; ∫ T dθ, J
Electrical   e, voltage (V)        i, current (A)               λ, flux linkage (Wb)           q, charge (C)             e(t) i(t), W             ∫ i dλ, J (magnetic); ∫ e dq, J (electric)
Hydraulic    P, pressure (Pa)      Q, flow rate (m^3/sec)       P_p, integral of pressure (Pa sec)  V, volume (m^3)      P(t) Q(t), W             ∫ Q dP_p, J; ∫ P dV, J
Thermal      T, temperature (K)    S, entropy flow rate (J/K sec)  not needed                  S, entropy (J/K)          T(t) S(t), W             ∫ T dS, J

Table 6 Bond Graph Causality Assignments

desired integral causality, a step change in the force results in an acceleration, the
integral of which is the velocity.
If we end up with derivative causality on I or C elements, additional algebraic
work is required to obtain the state equations for the system. When integral causality
is maintained, the formulation of the state equations is straightforward and simple.
It is possible to have a model with derivative causality, but then great care is required
to ensure that the velocity cause is bounded, thus limiting the required force.
Many times it is possible to modify the bond graph to give integral causality
without changing the accuracy of the model. This might be as simple as moving a
capacitive element to the other side of a resistive element, and so forth. Using the
analogy of a hydraulic hose, the capacitance and resistance of the hose, and the
capacitance and inertia of the oil, are distributed throughout the length of the hose.
If a flow source is the input to the section of hose, one of the capacitive elements must
be located next to it, or else the inertial element sees the flow source as a velocity input,
similar to the mechanical analogy above, and creates derivative causality problems.
In reality this is correct, since the whole length of hose, and even the fitting attaching
it to the flow source, has compliance. Every model is imperfect, and in cases like this
rational thinking beforehand saves many hours of irrational thinking later on in the
problem.
If we are constrained to work with models containing derivative causality,
several approaches may be attempted. Sometimes an iterative solution is achievable
and the implicit equations that result from derivative causality may still be
solved. This will certainly consume more computer resources, since for each time
step many intermediate iterations may have to be performed. Another option is
to consider Lagrange's equations as presented earlier. This may also produce an
algebraically solvable problem. The general recommendation, as mentioned above,
is to modify the original model to achieve integral causality and explicit state
equations.
Once causality is assigned, all that remains is writing the differential equations.
This is straightforward and easily lends itself to computers. There are many
computer programs that allow you to draw the bond graph and have the computer
generate the equations and simulate the system. The equations, being in state
space form, are easily used as model blocks in Matlab. In fact, for many advanced
control strategies, where state space is the representation of choice, bond graphs are
especially attractive because the set of equations that result are in state space form
directly from the model. Several useful constitutive relationships for developing the
state space equations are given in Table 7.

2.6.2 Mechanical-Translational Bond Graph Example


To illustrate the basics, let us return to the basic mass-spring-damper system already
studied using Newton's laws and energy methods. For the general system, effort and
flow variables are associated with each bond. For the mechanical system these
become force and velocity. To begin, locate common velocity points and assign
these to a common 1 junction. If there are any common force points, assign them
to a 0 junction. In the mass-spring-damper system in Figure 20, all the components
connected to the mass have the same velocity and there are no common force points.
The bond graph simply becomes the mass, spring, damper, and input force all
connected to the 1 junction, since they all share the same velocity. The resulting
bond graph is given in Figure 20.
The bond graph notation clearly shows that all the component velocities are
equal. To assign causality, begin with Se (a necessary causality assignment). This
stroke does not put constraints on the others, so assign I to have integral causality. This
can then be extended to the other bonds, since the 1 junction is only allowed one flow
as a cause (I) with the rest as effects (R, C, Se). All causal strokes are assigned and
integral causality throughout the model is achieved. Now the equations can be
Table 7 Equation Formulation with Bond Graphs

                          Integral causality                         Derivative causality
Element       Linear                   Nonlinear                Linear           Nonlinear
0 junction    e_1 = e_2 = e_3;         NA                       NA               NA
              f_1 + f_2 + f_3 = 0
1 junction    e_1 + e_2 + e_3 = 0;     NA                       NA               NA
              f_1 = f_2 = f_3
R             e = R f; f = (1/R) e     e = e(f); f = f(e)       NA               NA
C             e = q/C = (∫ f dt)/C     e = e(q) = e(∫ f dt)     f = C de/dt      f = dq(e)/dt
I             f = p/I = (∫ e dt)/I     f = f(p) = f(∫ e dt)     e = I df/dt      e = df(p)/dt

Additional equation helps:
States are always written as the momentum or displacement variables; the first
derivative of each equals a function of the other states and inputs:
dp/dt = e(p's, q's, inputs)    dq/dt = f(p's, q's, inputs)
Useful identities: dp_i/dt = e_i, dq_i/dt = f_i, e_i = q_i/C, f_i = p_i/I

Figure 20 Bond graph: mass-spring-damper model.

written using the tables relating to bond graphs. Begin with the two state variables,
p_1 and q_2, to write the equations:

\[ dp_1/dt = e_1 = e_3 - e_2 - e_4 \]
\[ dq_2/dt = f_2 = f_3 = f_4 = p_1/I \]

Writing specific equations for e_3, e_2, and e_4 allows us to finish the equations:

\[ e_3 = S_e = \text{input force} \]
\[ e_2 = q_2/C \]
\[ e_4 = R f_4 = (R/I)\,p_1 \]

Substituting into the state equations for the final form results in

\[ dp_1/dt = S_e - q_2/C - (R/I)\,p_1 \]
\[ dq_2/dt = p_1/I \]

Remembering that S_e = F, C = 1/k, I = m, and R = b for mechanical systems
allows us to write the state equations in a notation similar to that used for the earlier
equations:

\[ dp_1/dt = F - k\,q_2 - (b/m)\,p_1 \]
\[ dq_2/dt = p_1/m \]

Although at first glance they might seem confusing, the equations are identical to those
developed before. Notice what the following terms actually represent:

\[ dp_1/dt = d(mv)/dt = ma = \textstyle\sum F \]
\[ k\,q_2 = k\,x = \text{spring force} \]
\[ (b/m)\,p_1 = (b/m)(mv) = bv = \text{force through damper} \]
\[ dq_2/dt = p_1/m = mv/m = v = \text{velocity} \]
Since the example state equations developed above are linear, they could easily
be transformed into matrix form. For a simple mass-spring-damper system, the work
involved might seem more difficult than using Newton's laws. As system complexity
increases, the real power of bond graphs becomes clear.
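Since the bond graph states are p = mv and q = x, integrating the bond graph equations side by side with the Newton form should give matching trajectories. A sketch with assumed illustrative parameters:

```python
# Bond-graph states for the mass-spring-damper: p = momentum, q = displacement
#   dp/dt = F - k*q - (b/m)*p
#   dq/dt = p/m
# Compare against the Newton form x'' = (F - b*x' - k*x)/m.
# Parameter values are illustrative assumptions.
m, b, k, F = 2.0, 4.0, 32.0, 10.0
dt, n = 1e-4, 100_000            # 10 seconds of simulated time

p, q = 0.0, 0.0                  # bond-graph states (momentum, displacement)
x, v = 0.0, 0.0                  # Newton states (position, velocity)
for _ in range(n):
    p += dt * (F - k * q - (b / m) * p)
    q += dt * (p / m)
    a = (F - b * v - k * x) / m
    v += dt * a
    x += dt * v

# Same trajectories, and both settle at the static deflection F/k
print(abs(q - x) < 1e-6, abs(p / m - v) < 1e-6)
```

The agreement is exact up to floating-point noise because p/m and v are the same physical quantity; the bond graph has simply chosen momentum and displacement as its state variables.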

2.6.3 Mechanical-Rotational System Bond Graph Example


Mechanical systems with rotation are very similar to translational systems, and in
many cases the two are connected together. Whereas in translational systems the effort
variable is force and the flow variable is velocity, in rotational systems the effort variable
is torque and the flow variable is angular velocity. The product of torque and
angular velocity gives us the power flow through each bond. In developing the bond
graph we follow similar procedures and first locate the common effort and flow
junctions. Using the rotational system shown in Figure 21, we see that the only
common effort junction involves the two shafts connected by torsional spring K1.
Next, locating the common flow junctions, we see that the rotary inertias represent
common flows for the connected respective C and R elements. Finishing the graph, we
see that we have a gear train that we can represent as a transformer (TF) and one
input, the torque applied to rotary inertia J1. The input torque is modeled as an
effort source, Se.
Connecting the components and using Table 6 allows us to assign causality,
which results in the bond graph shown in Figure 22. The causality assignments in
this example are quite simple, and once the required assignment is made (the effort
source, Se), the remaining assignments propagate themselves through the model.
Assigning Se as required and I1 as integral defines the causality on bond 3.
Assigning C1 as integral then defines the causality on bonds 5 and 6. Finally, assigning
integral causality on I2 and C2 defines the causality for the remaining bond 7.
All that remains is to develop the differential equations for the system. To
illustrate the procedure we begin by writing the general bond graph equations and
finish by substituting in the known constants (i.e., J for I elements, b for R, K for C
elements, and input T for Se). The four resulting state equations have the form
dp2/dt = f(p2, q4, p8, q9)      dq4/dt = f(p2, q4, p8, q9)
dp8/dt = f(p2, q4, p8, q9)      dq9/dt = f(p2, q4, p8, q9)
Basic equations:
dp2/dt = e2 = e1 - e3
dq4/dt = f4 = f3 - f5
dp8/dt = e8 = e6 - e7 - e9
dq9/dt = f9 = f8

Figure 21 Mechanical-rotational system using bond graphs.



Figure 22 Bond graph: mechanical-rotational system example.

Constitutive relationships:
e1 = Se
e3 = e4 = q4/C1
f3 = f2 = p2/I1
f5 = (d2/d1) f6 = (d2/d1) f8 = (d2/d1)(p8/I2)
e6 = (d1/d2) e5 = (d1/d2) e4 = (d1/d2)(q4/C1)
e7 = R f7 = R f8 = R (p8/I2)
e9 = q9/C2
f8 = p8/I2

Combining for the general state equations:


dp2/dt = Se - q4/C1
dq4/dt = p2/I1 - (d2/d1)(p8/I2)
dp8/dt = (d1/d2)(q4/C1) - R (p8/I2) - q9/C2
dq9/dt = p8/I2

Finally, in matrix form with J, K, and b substitutions:

[dp2/dt]   [   0         -K1           0         0  ] [p2]   [1]
[dq4/dt] = [ 1/J1         0      -(d2/d1)/J2     0  ] [q4] + [0] Se
[dp8/dt]   [   0     (d1/d2)K1      -b/J2       -K2 ] [p8]   [0]
[dq9/dt]   [   0          0          1/J2        0  ] [q9]   [0]
The procedure to take the system model, develop the bond graph, and write the
state equations is, as shown, relatively straightforward and provides a unified
approach to modeling dynamic systems. In addition, the state variables represent
physically measurable variables (at least in theory, assuming a sensor is available),
and using the generalized effort and flow notation helps eliminate the overlapping
symbols commonly found in multidisciplinary models.
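With the equations in matrix form, evaluating the state derivatives is a plain matrix-vector product, dx/dt = Ax + B·Se. A minimal Python sketch; the numerical values chosen for J1, J2, K1, K2, b, d1, and d2 are hypothetical, not from the text:

```python
def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Hypothetical parameters (illustration only).
J1, J2 = 0.5, 0.2        # rotary inertias
K1, K2 = 100.0, 50.0     # torsional stiffnesses
b = 0.8                  # rotary damping
d1, d2 = 0.04, 0.08      # gear diameters

# State vector x = [p2, q4, p8, q9]; input u = Se (applied torque).
A = [[0.0,            -K1,            0.0,     0.0],
     [1.0 / J1,        0.0,  -(d2 / d1) / J2,  0.0],
     [0.0,  (d1 / d2) * K1,        -b / J2,   -K2],
     [0.0,             0.0,       1.0 / J2,    0.0]]
B = [1.0, 0.0, 0.0, 0.0]

def xdot(x, Se):
    """Evaluate dx/dt = A x + B * Se."""
    Ax = matvec(A, x)
    return [axi + bi * Se for axi, bi in zip(Ax, B)]

# From rest, an applied torque of 2 only changes the flywheel momentum p2.
rates = xdot([0.0, 0.0, 0.0, 0.0], Se=2.0)
```

Any fixed-step integrator can then march these derivatives forward in time, exactly as in the mass-spring-damper sketch.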

2.6.4 Electrical System Bond Graph Example


In this section the same bond graph modeling techniques described for mechanical
systems are applied to simple electrical systems. To illustrate the similarities we
develop a bond graph for the electrical circuit in Figure 23, assign causality, and
write the state space matrices for the circuit.
As in the mechanical systems, we begin by locating common effort nodes and
common flow paths in the system. This is especially straightforward for electrical
systems since any components in series have the same flow (current) and any components
in parallel have the same effort (voltage). For our circuit, then, we have a 1
junction for Vi and R, a 0 junction for C1, and a 1 junction for L and C2. Our input
to the system is modeled as an effort source (Vi). Connecting the elements together
and assigning causality results in the bond graph in Figure 24.
The causality assignments begin with Se, the only required assignment. The
preferred integral causality relationships (I and C elements) are made next, and as
before, this assigns the remaining bonds and R elements. Once the causal assignments
are made, the equation formulation proceeds, and the three resulting state
equations will have the form
dq4/dt = f(q4, p6, q7)      dp6/dt = f(q4, p6, q7)      dq7/dt = f(q4, p6, q7)
Basic equations:
dq4/dt = f4 = f3 - f5 = f2 - f6
dp6/dt = e6 = e5 - e7 = e4 - e7
dq7/dt = f7 = f6
Constitutive relationships:
f2 = Se/R
f6 = p6/L
e4 = q4/C1
e7 = q7/C2
Combining for the general state equations:
dq4/dt = Se/R - p6/L
dp6/dt = q4/C1 - q7/C2
dq7/dt = p6/L

Figure 23 Electrical circuit model using bond graphs.



Figure 24 Bond graph: electrical circuit model.

Finally, in matrix form:


[dq4/dt]   [  0     -1/L     0   ] [q4]   [1/R]
[dp6/dt] = [1/C1     0     -1/C2 ] [p6] + [ 0 ] Se
[dq7/dt]   [  0      1/L     0   ] [q7]   [ 0 ]
Finally, to finish the example, let us assume that the desired output is the
voltage across the capacitor C2, designated as VC2. This voltage is simply the effort
on bond 7 in Figure 24, which can be expressed as a function of the third state
variable, e7 = q7/C2. The corresponding C matrix is then written as

                              [q4]
VC2 = e7 = y = [0   0   1/C2] [p6]
                              [q7]

In summary, for the mechanical and electrical systems shown thus far, the basic
procedure is the same: locate the common effort and flow points, draw the bond
graph, assign causality, and write the state equations describing the behavior of the
system.
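For the electrical example, the output equation y = Cx is just a dot product of the single C-matrix row with the state vector. A small sketch; the capacitance value and the sample state numbers below are hypothetical:

```python
# Hypothetical component value (not from the text).
C2 = 1e-6                      # farads
Crow = [0.0, 0.0, 1.0 / C2]    # output row: y = VC2 = q7/C2

def output(x):
    """Evaluate y = C x for the state vector x = [q4, p6, q7]."""
    return sum(c * xi for c, xi in zip(Crow, x))

# A charge of 5e-6 C on C2 reads as 5 V across the capacitor.
v = output([1e-6, 0.002, 5e-6])
```

The same pattern extends to multiple outputs: each desired output contributes one row to the C matrix.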

2.6.5 Thermal System Bond Graph Example


Thermal systems are modeled using bond graphs but, unlike the previous systems,
have characteristics that make them unique. First, there is no equivalent I element.
We would expect this, since when we think of heat flow we envision it beginning as
soon as there exists a temperature differential (or effort). The flow of heat does not
undergo acceleration, and no mass is associated with the flow of heat. Second,
engineering practice has established standard variables in thermal systems that do
not meet the general requirements of effort and flow. The standard effort and flow
variables do exist (temperature and entropy flow rate) but are not always used in
practice. Instead, pseudo bond graphs are commonly used, and the same techniques
still apply. The difference is that the product of the variables is not power flow. The
effort variable is still temperature, but the flow variable becomes heat flow rate,
which already has units of power. The two components shown in Figure 25, R
and C, are used to model thermal systems and are still subject to the same restrictions
discussed in Section 2.4.5.

Figure 25 Ideal elements in thermal bond graphs.
To illustrate a bond graph model of a thermal system, we once again use the
water heater model in Figure 17. This allows us to see the similarities and compare
the results with the equations already derived in Section 2.4.5. To begin drawing the
bond graph, we again find the common effort (temperature) and flow (heat flow rate)
points in the physical system. The common temperature points include the tank itself,
the water flow leaving the tank, and the temperature of the surrounding atmosphere.
Since the air temperature surrounding the tank is assumed to remain constant,
we can model it as an effort source. The temperature difference between the
tank water temperature and the surrounding air will cause heat flow through the
insulation, modeled as an R element. Finally, examining the connections to the 0 junction
representing the temperature of the tank, we have three flow sources, and their
difference causes a temperature change in the water. The energy stored in the tank water
is modeled as a capacitance, C. The three heat flow sources are the water
in, the water out, and the heater element embedded in the tank. Putting it all together results
in the bond graph shown in Figure 26.
When we write the equations for the bond graph, we see immediately that they
are the same as those developed earlier in Section 2.4.5.
On the 0 junction:
f4 = f5 + f6 - f7 - f3 = qi + qh - qo - qa (using earlier notation)
On the 1 junction:
e3 = e1 + e2, or e3 - e1 = e2

Figure 26 Bond graph: thermal system model.



where f3 = qa = e2/R = (e3 - e1)/R. Thus, when the symbols used in Section 2.4.5
are substituted in, the governing equations are the same. The same assumptions
made earlier (no heat storage in the insulation and a uniform tank temperature)
still apply for the bond graph model.
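The tank energy balance just described can be integrated directly. A minimal sketch with forward Euler, assuming the balance takes the lumped form C dT/dt = q_net - (T - Ta)/R, where q_net lumps the three heat flow sources; all parameter values below are assumed for illustration, not taken from the text:

```python
def tank_temperature(C, R, Ta, q_net, T0, dt=1.0, t_end=50000.0):
    """Euler-integrate C*dT/dt = q_net - (T - Ta)/R for the tank water.

    q_net lumps the heat flow sources (water in + heater - water out);
    (T - Ta)/R is the loss through the insulation to ambient Ta.
    """
    T = T0
    for _ in range(int(t_end / dt)):
        dT = (q_net - (T - Ta) / R) / C
        T += dT * dt
    return T

# Hypothetical values: C = 4000 J/K, R = 0.5 K/W, ambient 20 C,
# 100 W net heat input. Steady state: Ta + R*q_net = 70 C.
T_final = tank_temperature(C=4000.0, R=0.5, Ta=20.0, q_net=100.0, T0=20.0)
```

Because there is no thermal I element, the response is a simple first-order rise with time constant RC toward the steady-state temperature.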

2.6.6 Liquid Level System Bond Graph Example


Liquid level systems are modeled using essentially the same hydraulic concepts but with
several changes and simplifications. As discussed in Section 2.4.6, the fluid inertia is
ignored and only the capacitive and resistive elements are used. Speaking in terms of
relative magnitudes, the simplification is a valid one, and we do not often see oscillatory
(pressure) waves in liquid level control systems. It is a simple matter to include
them, as the next case study shows, if we are concerned with the inertial effect. Also,
and in similar fashion, we ignore the capacitive effect of the fluid itself (compressibility)
and assume that the volumetric flows relate exactly to the rate of change in
liquid volume in the tank. Since the pressures due to elevation heads are many times
smaller than typical pressures in hydraulic systems, the assumption is once again a
valid one. Finally, as a matter of notation only, in liquid level systems the effort
variable is assumed to be the elevation head and not the pressure. The two are related
to each other through the weight density, a constant, and thus only the units are
different, but elevation head still acts on the system as an effort variable. To demonstrate
the procedure, we develop a bond graph for the liquid level system given in
Figure 27, similar to the liquid level system examined earlier in Section 2.4.6.
To construct the bond graph, we once again locate the common effort and flow
points in the system. Each tank represents a common effort, and the common flow
occurs through R1. There is one input, a flow source, and one flow output, which is
dependent on h2 and R2. Using common effort 0 junctions and common flow 1
junctions, we can construct the bond graph as shown in Figure 28.
The causality assignments result in integral (desired) relationships and are
arrived at as follows: the flow source is a required causal stroke and, once made,
causes C1 and bond 3 to be assigned as well. R1 is indifferent, so we choose integral
causality on C2, which then defines the causal relationships for R1 and R2. The
equations can be derived following standard procedures. The known form of the
state equations is as follows:

dq2/dt = f(q2, q6, Sf)      dq6/dt = f(q2, q6, Sf)

Figure 27 Liquid level system using bond graphs.



Figure 28 Bond graphs: liquid level system.

Basic equations:
dq2/dt = f2 = f1 - f3 = f1 - f4
dq6/dt = f6 = f5 - f7
Constitutive relationships:
f1 = Sf = qi
f4 = (1/R1) e4 = (1/R1)(e3 - e5) = (1/R1)(e2 - e6)
f5 = f4 (see above equation)
f7 = (1/R2) e7 = (1/R2) e6 = (1/R2)(q6/C2)
Combining for the general state equations:
dq2/dt = qi - (q2/C1 - q6/C2)/R1
dq6/dt = (q2/C1 - q6/C2)/R1 - q6/(R2 C2)
Finally, in matrix form with notation substitutions:

[dq2/dt]   [ -1/(R1 C1)          1/(R1 C2)       ] [q2]   [1]
[dq6/dt] = [  1/(R1 C1)   -(1/C2)(1/R2 + 1/R1)   ] [q6] + [0] qi
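The two tank equations can be stepped forward in time directly. A minimal sketch with forward Euler; the values chosen for C1, C2, R1, R2, and the inflow qi are assumed for illustration only:

```python
def simulate_tanks(C1, C2, R1, R2, qi, dt=0.5, t_end=5000.0):
    """Euler-integrate the two-tank equations; q2, q6 are stored volumes."""
    q2 = q6 = 0.0
    for _ in range(int(t_end / dt)):
        f4 = (q2 / C1 - q6 / C2) / R1     # flow between the tanks
        dq2 = qi - f4
        dq6 = f4 - (q6 / C2) / R2         # outflow through R2
        q2 += dq2 * dt
        q6 += dq6 * dt
    return q2 / C1, q6 / C2               # elevation heads h1, h2

# Hypothetical values (not from the text). At steady state the outflow
# h2/R2 must equal qi, so h2 -> qi*R2 and h1 -> qi*(R1 + R2).
h1, h2 = simulate_tanks(C1=2.0, C2=1.0, R1=10.0, R2=20.0, qi=0.05)
```

Dividing the stored volumes by the capacitances converts the states back to the physically measurable heads h1 and h2.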
Remember that a model is just what the name implies, only a model, and
different models may all be derived from the same physical system. Different assumptions,
notation, and style will result in different models. In examining the
variety of systems using bond graphs, hopefully we were able to see the parallels
between different systems and the advantages of a structured approach to modeling.
More often than not, modeling complex multidisciplinary systems is just a repetitive
application of the fundamentals. This concept is evident in the case study involving a
hydraulic, mechanical, and hydropneumatic coupled system.

2.6.7 Case Study: Simulation and Validation of Hydraulic P/M, Accumulators, and Flywheel Using Bond Graphs
To better illustrate the capabilities of bond graphs when multiple types of systems
must be modeled, differential equations are derived for a mechanical rotational
system coupled to a hydraulic system via an axial-piston bent-axis pump/motor
(P/M). A summarized model is shown in Figure 29. In this case study, experimental
results obtained from the laboratory are compared with the simulated results of the
bond graph model. This model was developed as part of research determining the
feasibility of using valves to switch a motor to a pump to go from acceleration to
braking while driving [3]. If a pump/motor capable of overcenter operation is used,
the valves are not required. However, certain high-efficiency pump/motors are not
capable of overcenter operation, and this served as the motivation for the simulation.

Figure 29 Physical model of P/M and valves for vehicle transmission.
The first step in developing the model is reduction of the physical model to
include the necessary elements. The complete physical system as tested in the
laboratory is shown in Figure 30. Since efficiencies are not important during
the switching times (milliseconds), the pump/motor can be modeled as an ideal
transformer (shaft speed proportional to flow and pressure proportional to torque).
The hydraulic system is modeled as a single hydraulic line with resistance
and capacitance plus the inertia and capacitance of the oil. The valves then switch
this line from low to high pressure during the simulation. Flywheel, pump/motor,
and accumulators are all included in the model. The reduced model showing the
system states is given in Figure 31. In the figure, there are three states in this
section of hydraulic line: oil volume stored in line capacitance, oil momentum,
and oil volume stored in fluid capacitance. There are three states in energy
storage devices: flywheel momentum, oil volume stored in the low-pressure reservoir,
and oil volume stored in the high-pressure accumulator. This model leads directly to
a bond graph and illustrates the flexibility of using bond graphs. Figure 32 gives
the bond graph with causality strokes. As in the previous examples, we first
locate the common flow and effort points and assign appropriate 0 or 1 junctions
to those locations.

Figure 30 Complete physical system: valves and P/M test.
The bond graph parallels the physical model, with the flywheel connected to
the pump using a transformer node (TF). This converts the flywheel speed to a flow
rate. Nodes 0a, 1b, and 0c model the inertia of the oil, the resistance of the hose, and
the capacitance of the hose and oil. Common flow nodes 1d and 1e state that all the flow
passing through a valve must be the same as that transferred to the accumulator
connected to that valve. It is possible to connect block diagram elements to bond
graph elements, as shown in the switching commands to the valves.
In this bond graph model there are no required causality assignments
(effort or flow sources), and the system response is a function of the initial conditions
and valve switching commands. To assign the causal strokes, we start with the
integral relationships first and then assign the arbitrary causal strokes. This
model achieves integral causality, although other model forms may result in
derivative causality. For example, combining the line and oil capacitance into an
equivalent compliance in the system and updating the bond graph results in
derivative causality, even though the model is still correct and simpler. So as
stated earlier, some derivative assignments can be handled by slightly changing
the model.
The bond graph is now ready for writing the state equations, and although
more algebraic work is required for each step, the procedure to develop the equations
is identical to that used in the mass-spring-damper example. The resulting
states will be p1, q3, p6, q8, q13, and q14 (all the energy storage elements).

Figure 31 Reduced valve and P/M model with states.



Figure 32 Bond graph model of valves and P/M.

Basic equations:
dp1/dt = e1
dq3/dt = f3 = f2 - f4
dp6/dt = e6 = e4 - e5 - e7
dq8/dt = f8 = f7 - f9 - f10
dq13/dt = f13 = f11
dq14/dt = f14 = f12
Constitutive relationships:
e1 = Dpm e2 = Dpm e3 = Dpm (q3/Chose)
f3 = f2 - f4 = Dpm f1 - f6 = (Dpm/Iflw) p1 - (1/Ioil) p6
e6 = e4 - e5 - e7 = e3 - Rhose f5 - e8 = q3/Chose - (Rhose/Ioil) p6 - q8/Coil
f8 = f7 - f9 - f10 = f6 - f13 - f14 (see below)
f6 = p6/Ioil
f13 = Cd16 (q8/Coil - e13)^0.5
f14 = Cd20 (q8/Coil - e14)^0.5
Accumulator models (e13 and e14):
To find the pressure in the accumulators (e13 and e14), we must first choose an
appropriate model for the gas-charged bladder. Most hydropneumatic accumulators
are charged with nitrogen and can be reasonably modeled using the ideal gas law.
For the accumulators in this case study, foam was inserted into the bladder, with the
result that the pressure-volume relationship during charging becomes isothermal and
efficiencies are greatly increased [4]. This allows us to finish the accumulator (Chigh
and Clow) models for the state equations as follows:

General isothermal gas law: P1 V1 = P2 V2

Let P1 and V1 be the initial charge pressure and gas volume: eh and qh for the
high-pressure accumulator, el and ql for the low-pressure accumulator. Then the
efforts follow from P2 = P1 V1/V2:
e13 = eh qh/(qh - q13)
e14 = el ql/(ql - q14)

This now gives us the accumulator pressures as functions of the state variables q13
and q14 and allows us to finish writing the state equations for the system. The
state equation for q8 references f13 and f14, which we now have, since they are simply the first
derivatives of states q13 and q14, solved for in this section.
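The isothermal relation P1 V1 = P2 V2 is easy to check numerically. A small sketch; the precharge pressure and initial gas volume below are hypothetical values, not the laboratory data:

```python
def accumulator_pressure(e_charge, q_gas0, q_oil):
    """Isothermal gas-charged accumulator: P1*V1 = P2*V2.

    e_charge : precharge pressure
    q_gas0   : initial gas volume
    q_oil    : oil volume pushed in (reduces the gas volume)
    """
    return e_charge * q_gas0 / (q_gas0 - q_oil)

# With no oil in, the pressure equals the precharge; half-filling the
# bladder doubles the pressure under the isothermal law.
p0 = accumulator_pressure(10.0e6, 0.004, 0.0)
p_half = accumulator_pressure(10.0e6, 0.004, 0.002)
```

This is exactly the form used for e13 and e14 above, with (e_charge, q_gas0) equal to (eh, qh) or (el, ql).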
Combining for the general state equations:
dp1/dt = (Dpm/Chose) q3
dq3/dt = Dpm (p1/Iflw) - p6/Ioil
dp6/dt = q3/Chose - (Rhose/Ioil) p6 - q8/Coil
dq8/dt = p6/Ioil - s1 Cd16 [q8/Coil - eh qh/(qh - q13)]^0.5 - s2 Cd20 [q8/Coil - el ql/(ql - q14)]^0.5
dq13/dt = Cd16 [q8/Coil - eh qh/(qh - q13)]^0.5
dq14/dt = Cd20 [q8/Coil - el ql/(ql - q14)]^0.5

As presented later in the state space analysis section (Sec. 3.6.3), once the state
equations are developed, programs like Matlab, MathCad, and Mathematica may
be used to simulate and predict the system response. We can also write a numerical
integration routine to accomplish the same thing using virtually any programming
language. A second item to mention is the nonlinearity of the state equations.
Modeling the flow through the valves as proportional to the square root of the
pressure drop introduces nonlinearities into the state equations. The nonlinearities
prevent us from writing the state equations using matrices unless we first linearize the
state equations. Finally, s1 and s2 represent changing inputs. When the system
response is determined, the inputs to the equations must be provided. Due to the

nonlinearities, it is necessary to numerically integrate the state equations to obtain a
response. A closed-form solution is virtually impossible for most real nonlinear
systems.
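As noted, virtually any language will do for the numerical integration. A sketch of a generic fixed-step fourth-order Runge-Kutta (RK4) routine, exercised here on a single, hypothetical square-root valve-discharge equation rather than the full six-state model; all numbers are illustrative, not the laboratory values:

```python
import math

def rk4_step(f, t, x, dt):
    """One fourth-order Runge-Kutta step for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt * k1 / 2)
    k3 = f(t + dt / 2, x + dt * k2 / 2)
    k4 = f(t + dt, x + dt * k3)
    return x + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Reduced, hypothetical example: one capacitance discharging through a
# square-root valve, dq/dt = -Cd * sqrt(q/C). Units are arbitrary.
Cd, C = 2.0e-6, 1.5e-12

def qdot(t, q):
    # max() guards the square root as the stored volume approaches zero
    return -Cd * math.sqrt(max(q / C, 0.0))

q, t, dt = 1.0e-4, 0.0, 1.0e-5
for _ in range(1000):
    q = rk4_step(qdot, t, q, dt)
    t += dt
```

The same `rk4_step` works unchanged on a vector of six states if `x` and `f` are replaced with array-valued versions; RK4 is a common compromise between accuracy and step cost for smooth nonlinear models like this one.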
To actually compare the simulation of the model with the experimental results,
the values for the model parameters were calculated and then adjusted to better
reflect the actual system. The model parameter values used during the simulation
are listed below and are based on data from the manufacturer and measurements
(dimensions) taken in the laboratory. As in all engineering problems, care must be
taken to perform the simulation with consistent and correct units for all physical
variables.
System values:
ρ = 900 kg/m³ = 8.4 × 10⁻⁵ lbf·sec²/in⁴ (oil density)
l = 4 m = 157.5 in. (hydraulic line length)
d = 0.0254 m = 1 in. (hydraulic line diameter)
βline = 6.95 × 10⁸ Pa = 100,800 psi (bulk modulus of hydraulic line)
βoil = 1.4 × 10⁹ Pa = 203,100 psi (bulk modulus of oil)
A = πd²/4 = 5.07 × 10⁻⁴ m² = 0.785 in² (cross-sectional area of hydraulic line)
Vo = A·l = 0.002 m³ = 123.7 in³ (volume of oil contained in line)

Mass (fluid inertia):
I = ρl/A = 1.408 × 10⁷ kg/m⁴ (59.2 lbf·sec²/in⁵)

Capacitance (fluid compliance):
Coil = Vo/βoil = 1.45 × 10⁻¹² m³/Pa (0.000625 in³/psi)

Capacitance (hose and fitting compliance):
Cline = Vo/βline = 2.915 × 10⁻¹² m³/Pa (0.00122 in³/psi)

Resistance (hose and fittings):
Rline = 1.4 × 10⁸ Pa·sec/m³ (0.333 psi·sec/in³)

Poppet valve coefficients (from manufacturer data), where f = Cd √e:
Cd16 = 8.75 × 10⁻⁶ m³/(sec·√Pa) = 44.3 in³/(sec·√psi)
Cd20 = 2.10 × 10⁻⁵ m³/(sec·√Pa) = 106.4 in³/(sec·√psi)

These models were verified in the lab, with good correlation between the predicted
and actual responses, as illustrated in Figures 33 and 34. Valve delay functions
simulating the opening and closing characteristics determined in the lab were implemented
to determine optimal delays. Using these values and integrating the state
equations yielded responses with natural frequencies of 219 rad/sec and damping
ratios of 0.06. The correlation with the experimental switches was very high; the
exception was that the actual system was slightly softer (in terms of bulk modulus)
than was calculated using the manufacturer's data.
When the results are compared directly, as in Figure 35, the accuracy of the
simulation is evident. It certainly is possible to tune the estimated system parameters
used above to achieve better matches, since the same dynamics are present in
the simulated and experimental results.
The model for this example was used to design a delay circuit and a strategy for
performing switches while minimizing energy losses and pressure spikes. The analogy can
be extended to the poppet valve controller examined in the case study (see Sec. 12.7).
The simulated and experimental results agree quite well, as shown, and the simulation
method does not require any special software.
Hopefully this illustrates the versatility of bond graphs in developing system
models utilizing several physical domains. Another case study example is examined
later and compared to an equivalent model in block diagram representation. Bond
graphs are especially applicable to fluid power systems, where the flow and effort
variables, flow rate and pressure, are well understood. The many transformers,
like cylinders and pump/motors, are easily represented with bond graphs.
As a result, many fluid power systems have been modeled using bond graphs,
ranging from controllers [5] to components [6] to systems.

Figure 33 Summary of simulation results (valve switches).



Figure 34 Summary of experimental results.

Figure 35 Comparison of simulated and experimental results.

PROBLEMS
2.1 Describe why good modeling skills are important for engineers designing,
building, and implementing control systems.
2.2 The most basic mathematical form for representing the physics (steady-state
and dynamic characteristics) of different engineering systems is a
_____________ ______________.
2.3 Ordinary differential equations depend only on one independent variable, most
commonly ______ in physical system models.
2.4 Find the equivalent transfer function for the system given in Figure 36.
2.5 Find the equivalent transfer function for the system given in Figure 37.
2.6 Find the equivalent transfer function for the system given in Figure 38.
2.7 Find the equivalent transfer function for the system given in Figure 39.
2.8 Find the equivalent transfer function for the system given in Figure 40.
2.9 Find the equivalent transfer function for the system given in Figure 41.

Figure 36 Problem: block diagram reduction.

Figure 37 Problem: block diagram reduction.

Figure 38 Problem: block diagram reduction.

Figure 39 Problem: block diagram reduction.



Figure 40 Problem: block diagram reduction.

Figure 41 Problem: block diagram reduction.

2.10 Find the equivalent transfer function for the system given in Figure 42.
2.11 Find the equivalent transfer function for the system given in Figure 43.
2.12 Find the equivalent transfer function for the system given in Figure 44.
2.13 Given the following differential equation, find the appropriate A, B, C, and D
matrices resulting from state space representation: d³y/dt³ + 5 d²y/dt² + 32y = 5 du/dt + u.
2.14 Given two second-order differential equations, determine the appropriate A, B,
C, and D matrices resulting from state space representation (z is the desired output, u
and v are inputs): a d²y/dt² + b dy/dt + y = 20u; and c d²z/dt² + dz/dt + z = v.

Figure 42 Problem: block diagram reduction.



Figure 43 Problem: block diagram reduction.

Figure 44 Problem: block diagram reduction.

2.15 Linearize the following function, plot the original and linearized functions, and
determine where an appropriate valid region is (label it on the graph).
Y(x) = 5x^2 - x^3 - x sin(x)   about operating point x = 2
2.16 Linearize the following function:
Z(x, y) = 3x^2 + xy^2 - y   about operating point (x, y) = (2, 3)
2.17 Linearize the equation y = f(x1, x2):
y(x1, x2) = 4x1x2 - 5x1^2 + 4x2 sin(x2)   around operating point (x1, x2) = (1, 0)
2.18 Given the physical system model in Figure 45, develop the appropriate
differential equation describing the motion.
2.19 Given the physical system model in Figure 46, develop the appropriate
differential equation describing the motion.
2.20 Given the physical system model in Figure 47, develop the appropriate
differential equation describing the motion (r is the input, y is the output).
2.21 Write the differential equations for the physical system in Figure 48.
2.22 Given the physical system model in Figure 49, write the differential equations
describing the motion of each mass.

Figure 45 Problem: model of physical system (mechanical/translational).

Figure 46 Problem: model of physical system (mechanical/translational).

Figure 47 Problem: model of physical system (mechanical/translational).

2.23 Given the physical system model in Figure 50:
a. Write the differential equations of motion.
b. Develop the state space matrices for the system where y3 is the desired
output.
2.24 Write the differential equations for the physical system model in Figure 51.
2.25 Using the physical system model given in Figure 52, develop the state space
matrices where the input is T and the desired output is y.
2.26 Write the differential equation for the electrical circuit shown in Figure 53. Vi is
the input and Vc, the voltage across the capacitor, is the output.
2.27 Using the system given in Figure 54, develop the differential equations describing
the motion of the mass, y(t), as a function of the input, r(t). PL is the load pressure,
and a and b are linkage segment lengths. Assume a linearized valve equation.
2.28 Write the differential equation for the basic oven model given in Figure 55.
Assume that the insulation does not store heat and that the material in the oven
always has a uniform temperature, y.

Figure 48 Problem: model of physical systemmechanical/translational.

2.29 Determine the equations describing the system given in Figure 56. Formulate
as time derivatives of h1 and h2.

Figure 49 Problem: model of physical system (mechanical/translational).

Figure 50 Problem: model of physical system (mechanical/translational).



Figure 51 Problem: model of physical system (mechanical/rotational).

Figure 52 Problem: model of physical system (mechanical).

Figure 53 Problem: model of physical system (electrical).

2.30 Determine the equations describing the system given in Figure 57. Formulate
as time derivatives of h1 and h2.
2.31 Derive the differential equations describing the motion of the solenoid and
mass plunger system given in Figure 58. Assume a simple solenoid force of FS = KS i.
2.32 Develop the bond graph and state space matrices for the system in Problem 2.18.
2.33 Develop the bond graph and state space matrices for the system in Problem 2.19.
2.34 Develop the bond graph and state space matrices for the system in Problem 2.20.

Figure 54 Problem: model of physical system (hydraulic/mechanical).

Figure 55 Problem: model of physical system (thermal).

Figure 56 Problem: model of physical system (liquid level).

Figure 57 Problem: model of physical system (liquid level).



Figure 58 Problem: model of physical system (electrical/mechanical).

2.35 Develop the bond graph and state space matrices for the system in Problem 2.21.
2.36 Develop the bond graph and state space matrices for the system in Problem 2.22.
2.37 Develop the bond graph and state space matrices for the system in Problem 2.23.
2.38 Develop the bond graph and state space matrices for the system in Problem 2.24.
2.39 Develop the bond graph and state space matrices for the system in Problem 2.25.
2.40 Develop the bond graph and state space matrices for the system in Problem 2.26.
2.41 Develop the bond graph and state space matrices for the system in Problem 2.27.
2.42 Develop the bond graph and state space matrices for the system in Problem 2.31.

REFERENCES
1. Rosenburg RC, Karnopp DC. Introduction to Physical System Dynamics. New York:
McGraw-Hill, 1983.
2. Ezekial FD, Paynter H. Computer representation of engineering systems involving fluid
transients. Trans ASME, Vol. 79, 1957.
3. Lumkes J, Hartzell T. Investigation of the Dynamics of Switching from Pump to Motor
Using External Valving. ASME Publication, no. H01025, IMECE, 1995.
4. Pourmovahead A, Baum S. Experimental evaluation of hydraulic accumulator efficiency
with and without elastomeric foam. J Propuls Power 4:185-192, 1988.
5. Barnard B, Dransfield P. Predicting response of a proposed hydraulic control system using
bond graphs. Dynamic Syst, Measure Control, March 1977.
6. Chong-Jer L, Brown F. Nonlinear dynamics of an electrohydraulic flapper nozzle valve.
Dynamic Syst, Measure Control, June 1990.
3
Analysis Methods for Dynamic Systems

3.1 OBJECTIVES
• Introduce different methods for analyzing models of dynamic systems.
• Examine system responses in the time domain.
• Present Laplace transforms as a tool for designing control systems.
• Present frequency domain tools for designing control systems.
• Introduce state space representation.
• Demonstrate the equivalencies between the different representations.

3.2 INTRODUCTION
In this chapter four methods are presented for simulating the response of dynamic
systems from sets of differential equations. Time domain methods are seldom used
apart from first- and second-order differential equations and are usually the first
methods presented when beginning the theory of differential equations. The s-domain
(or Laplace domain, also related to the frequency domain) is common in controls
engineering since many tables, computer programs, and block diagram tools are
available. Frequency domain techniques are powerful methods and useful for all
engineers interested in control systems and modeling. Finally, state space techniques
have become much more common since the advent of the digital computer.
They lend themselves quite well to computer simulations, large systems, and
advanced control algorithms.
It is important to become comfortable in using each representation. We will see
that using the different representations is like speaking different languages. The same
message may be conveyed while speaking a different language, and each representation
can be translated (converted) into another. However, some techniques lend themselves
more to one particular representation, other techniques to another. Once we see that
they really give us the same information and we become comfortable using each one,
we expand our set of usable design tools.


3.3 TIME DOMAIN
The time domain includes both differential equations in their basic form and the
solution of the equations in the form of a time response. Ultimately, most analysis
methods target the solution in terms of time, since we operate the system and evaluate
its performance in the time domain. We live in and describe events according to time,
so it seems very natural to us. This section is limited to differential equations that are
directly solved as an output response with respect to time without using the tools
presented in the following sections. Since most control systems are analyzed in the
s-domain, more examples are presented there, and this section simply connects the
material learned in introductory courses on differential equations to the s-domain
methods used in the remainder of this text.

3.3.1 Differential Equations of Motion

Section 2.3.1 has already presented an overview of differential equations and their
common classifications as presented in Table 1 of Chapter 2. We now present
solutions to common differential equations found in the design of controllers. In
general, without teaching a separate class in differential equations, we are limited
to first- and second-order, linear, ordinary differential equations (ODEs). Beyond
this, the solution methods presented later are much faster and a more efficient
use of our time. The general solution is found by assuming a form of the time
solution, in this case an exponential, e^(rt), and substituting the solution into the
differential equation. From here we can apply initial conditions and solve for the
unknown constants in the solution. Examples for common first- and second-order
systems are given below.

3.3.1.1 Solutions to General First-Order Ordinary Differential Equations
The auxiliary equation was defined in Section 2.3.1 and is used here to develop a
general solution to first-order linear ODEs. Each order of differentiation produces a
corresponding order within the auxiliary equation. Thus, for a first-order ODE we get
a first-order auxiliary equation, which will then contain only one root, or solution,
when set equal to zero. This root of the auxiliary equation determines the dynamic
response of the system modeled by the differential equation. The example below
illustrates the method of using the auxiliary equation to derive the time response
solution.

EXAMPLE 3.1
A general first-order ODE:

y' + a y = 0

Substitute in y = e^(rt):

r e^(rt) + a e^(rt) = 0
(r + a) e^(rt) = 0

Gives the auxiliary equation

r + a = 0
Analysis Methods for Dynamic Systems 77

Solution:

y = A e^(rt) = A e^(-at)

If the initial condition is y(0) = y0, then

y(0) = y0 = A
y = y0 e^(-at)
For the general case,

y' + p(t) y = g(t)

The solution then becomes

y = (1/μ(t)) [ ∫ μ(t) g(t) dt + C ],    where μ(t) = e^(∫ p(t) dt)

If p(t) and g(t) are constants a and b and y(0) = y0,

y = (b/a)(1 - e^(-at)) + y0 e^(-at)
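The constant-coefficient result is easy to sanity-check numerically. The Python sketch below (illustrative only; the values a = 2, b = 4, and y0 = 1 are arbitrary, and Python is used here instead of the text's tools) compares the closed-form solution of y' + ay = b with a brute-force forward-Euler integration:

```python
import math

def first_order_response(a, b, y0, t):
    """Closed-form solution of y' + a*y = b with y(0) = y0."""
    return (b / a) * (1.0 - math.exp(-a * t)) + y0 * math.exp(-a * t)

def euler_response(a, b, y0, t, steps=50000):
    """Forward-Euler integration of the same ODE, for comparison."""
    dt = t / steps
    y = y0
    for _ in range(steps):
        y += dt * (b - a * y)   # y' = b - a*y
    return y

# a = 2, b = 4, y0 = 1: the response decays toward the steady state b/a = 2
analytic = first_order_response(2.0, 4.0, 1.0, 3.0)
numeric = euler_response(2.0, 4.0, 1.0, 3.0)
print(analytic, numeric)  # the two values agree closely
```

The two methods agree to within the Euler step error, confirming the closed-form expression.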

There are other methods to handle special cases of first-order differential equations,
such as separating the variables and performing two integrals, or special cases
with exact derivatives. While certainly interesting, these methods are seldom used in
designing controllers. For more information, any introductory textbook on differ-
ential equations will cover these topics.
We conclude this section by examining the solution methods for homogeneous,
second-order, linear ODEs. The approach is quite simple using the auxiliary equation,
as shown for general first-order differential equations.
3.3.1.2 Solutions to General Second-Order Ordinary Differential Equations
A general second-order ODE:

y'' + k1 y' + k2 y = 0

Auxiliary equation:

r² + k1 r + k2 = 0

There are three cases that depend on the roots of the auxiliary equation. We
can use the quadratic equation to find the roots since second-order ODEs will result
in second-order polynomials for the auxiliary equations. Our general form of the
auxiliary equation and the corresponding quadratic equation can then be expressed
as

a r² + b r + c = 0    and    r1, r2 = -b/(2a) ± √(b² - 4ac)/(2a)
Using the quadratic equation leads to the three possible combinations of roots:

Case 1: Real and different roots, a and b:

y = A1 e^(at) + A2 e^(bt)

Case 1 occurs when the term (b² - 4ac) is greater than zero. Both roots will be real
and may be either positive or negative. Positive roots will exhibit exponential growth
and negative roots will exhibit exponential decay (stable). Applying the initial posi-
tion and velocity conditions to the solution solves for the constants A1 and A2.
Case 2: Real and repeated roots at r:

y = A1 e^(rt) + A2 t e^(rt)

Case 2 occurs when the term (b² - 4ac) is equal to zero. Both roots will now have the
same sign. As before, positive roots will result in unstable responses, negative roots
in stable responses, and initial conditions are still used to solve for the constants A1
and A2.
Case 3: Two complex conjugate roots σ ± jω:

y = A1 e^(σt) cos(ωd t) + A2 e^(σt) sin(ωd t)

Case 3 occurs when the term (b² - 4ac) is less than zero. Roots are always complex
conjugates, and each root will have the same real component (σ = -b/2a), which
determines the stability of the response. The sinusoidal terms, arising from the
complex portions of the roots, only range between ±1 and simply oscillate within
the bounds set by the exponential term e^(σt) at a damped natural frequency equal to ωd.
Mathematically, the sinusoidal terms come from the application of Euler's theorem
when the roots of the auxiliary equation are expressed as r1,2 = σ ± jωd. Then:

e^(rt) = e^((σ ± jωd)t) = e^(σt) e^(±jωd t)

Euler's theorem:

e^(jω) = cos ω + j sin ω
e^(-jω) = cos ω - j sin ω

In all three cases, it is necessary to know the initial conditions to solve for A1
and A2. The following example problems examine the three types of cases outlined
above. If desired, the sum of sine and cosine terms can be written as either a sine or
cosine term and an associated phase angle. The alternative notation is expressed as

y = B e^(σt) sin(ωd t + φ)
B = √(A1² + A2²)
φ = tan⁻¹(A1/A2)

In general, it is convenient to use the original form when applying initial
conditions to solve for A1 and A2. Plotting the time response is easily accomplished
using either form.

EXAMPLE 3.2
Differential equation and initial conditions for case 1, two real roots:

y'' + 5 y' + 6 y = 0    and    y(0) = 0, y'(0) = 1

Auxiliary equation:

r² + 5r + 6 = (r + 2)(r + 3) = 0
r = -2, -3

Solution process using initial conditions:

y = A1 e^(-2t) + A2 e^(-3t)
y' = -2 A1 e^(-2t) - 3 A2 e^(-3t)

At t = 0 these evaluate to

A1 + A2 = 0    and    -2 A1 - 3 A2 = 1

which gives A1 = 1, A2 = -1. The final solution is

y = e^(-2t) - e^(-3t)

It is easy to see, then, that for auxiliary equations resulting in two real roots, the
general response can be described as the sum of two first-order responses.

EXAMPLE 3.3
Differential equation and initial conditions for case 2, two repeated real roots:

y'' + 8 y' + 16 y = 0    and    y(0) = 1, y'(0) = 1

Auxiliary equation:

r² + 8r + 16 = (r + 4)² = 0
r = -4, -4

Solution process using initial conditions:

y = A1 e^(-4t) + A2 t e^(-4t)
y' = -4 A1 e^(-4t) - 4 A2 t e^(-4t) + A2 e^(-4t)

At t = 0 these evaluate to

A1 = 1    and    -4 A1 + A2 = 1

which gives A1 = 1, A2 = 5. The final solution is

y = e^(-4t) + 5 t e^(-4t)

EXAMPLE 3.4
Differential equation and initial conditions for case 3, complex conjugate roots:

y'' + 2 y' + 10 y = 0    and    y(0) = 1, y'(0) = 1

Auxiliary equation:

r² + 2r + 10 = 0
r = -1 ± 3j

Solution process using initial conditions:

y = A1 e^(-t) cos(3t) + A2 e^(-t) sin(3t)
y' = -A1 e^(-t) cos(3t) - 3 A1 e^(-t) sin(3t) - A2 e^(-t) sin(3t) + 3 A2 e^(-t) cos(3t)

At t = 0 these evaluate to

A1 = 1    and    -A1 + 3 A2 = 1

which gives A1 = 1, A2 = 2/3. The final solution is

y = e^(-t) cos(3t) + (2/3) e^(-t) sin(3t)
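A quick numerical check in Python (illustrative, not part of the text) substitutes each example solution back into its differential equation using central finite differences; the residual y'' + k1 y' + k2 y should be essentially zero at any time. Note that the Example 3.4 coefficient A2 = 2/3 follows from applying y'(0) = -A1 + 3A2 = 1.

```python
import math

def ode_residual(y, k1, k2, t, h=1e-4):
    """Evaluate y'' + k1*y' + k2*y at time t using central differences."""
    ypp = (y(t + h) - 2.0 * y(t) + y(t - h)) / h**2
    yp = (y(t + h) - y(t - h)) / (2.0 * h)
    return ypp + k1 * yp + k2 * y(t)

# Example 3.2 (real roots), 3.3 (repeated roots), 3.4 (complex roots)
y_case1 = lambda t: math.exp(-2 * t) - math.exp(-3 * t)
y_case2 = lambda t: math.exp(-4 * t) + 5 * t * math.exp(-4 * t)
y_case3 = lambda t: math.exp(-t) * (math.cos(3 * t) + (2.0 / 3.0) * math.sin(3 * t))

residuals = [abs(ode_residual(y_case1, 5, 6, 0.7)),
             abs(ode_residual(y_case2, 8, 16, 0.7)),
             abs(ode_residual(y_case3, 2, 10, 0.7))]
print(residuals)  # all residuals near zero: each solution satisfies its ODE
```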
Finally, when plotting these responses as a function of time, the arguments of the
sinusoidal terms are expected to have units of radians to be correctly scaled.
Some calculators and computer programs have a default setting where the
arguments of the sinusoidal terms are assumed to be in degrees. Remember that
the responses derived here are all for homogeneous differential equations (no forcing
function). The following sections show us techniques to use for deriving the time
responses for nonhomogeneous differential equations.

3.3.2 Step Input Response Characteristics

In this section we define parameters associated with first- and second-order step
responses. While in the previous section the solutions developed are for homo-
geneous ODEs, here we consider nonhomogeneous differential equations responding
to step inputs. Later the parameters we define here will be used for simple system
identification, that is, using experimental data (the time response of the system) and
developing a system model from it. Along with a Bode plot, step response plots are
very common for developing these models and for comparing the performance of
different systems. From a simple step response it is straightforward to develop an
approximate first- or second-order analytical model approximating the actual
physical system.

3.3.2.1 First-Order Step Response Characteristics
First-order systems, by definition, will not overshoot the step command at any point
in time and can be characterized by one parameter, the time constant τ. If we take
the first-order differential equation from earlier and let c(t) be the system output, the
input a unit step occurring at t = 0, an initial condition of c(0) = 0, and a time
constant equal to τ, then we can write the nonhomogeneous differential equation as
below.

General first-order differential equation:

τ (dc(t)/dt) + c(t) = unit step input = 1(t)

Using the solution methods from the previous section results in

Output = c(t) = 1 - e^(-t/τ)

Since we used a unit step input (magnitude = 1) and an initial condition equal
to zero, we call the solution a normalized first-order system step response. Plotting
the response as a function of the independent variable, time, results in the curve
shown in Figure 1.
So we see that the final magnitude exponentially approaches unity as time
approaches infinity. Being familiar with the normalized curve is useful in many
respects. By imposing a step input on our physical system and recording the
response, we can compare it to the normalized step response. If it exponentially
grows or decays to a stable value, then we can easily extend the data into a simplified
model. Examining the plot further allows us to draw several additional conclusions.
Even if our input or measured response does not reach unity, a linear first-order
system will always reach a certain percentage of the final value as a function of the
time constant. As shown on the plot in Figure 1, we know that the following values
will be reached at each repeated time constant interval:

One time constant, t = 1τ: output = 63.2% of final value
Two time constants, t = 2τ: output = 86.5% of final value
Three time constants, t = 3τ: output = 95.0% of final value
Four time constants, t = 4τ: output = 98.2% of final value

Any intermediate value can also be found by calculating the magnitude of the
response at a given time using the analytical equation.
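These repeated-time-constant percentages follow directly from evaluating 1 - e^(-n) for n time constants, as the short Python check below shows (illustrative only):

```python
import math

def percent_of_final(n_time_constants):
    """First-order step response: percent of final value after n time constants."""
    return 100.0 * (1.0 - math.exp(-n_time_constants))

# Reproduce the table of values above
for n in range(1, 5):
    print(n, round(percent_of_final(n), 1))  # 63.2, 86.5, 95.0, 98.2
```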

Figure 1 Normalized step response (first-order system).



EXAMPLE 3.5
Now let's consider the simple RC circuit in Figure 2 and see how this might be used
in practice. When we sum the voltage drops around the loop (Kirchhoff's second
law), it leads to a first-order linear differential equation.
Sum the voltages around the loop:

Vin - VR - VC = 0

Constitutive relationships:

VR = R I
I = C (dVC/dt)

Combining:

RC (dVC/dt) + VC = Vin

Taking the differential equation developed for the RC circuit in Figure 2 and
comparing it with the generalized equation, once we let τ = RC we have the same
equation. For a simple RC circuit, then, the time constant is simply the value of the
resistance multiplied by the value of the capacitance. For the RC circuit shown, if R
= 1 kΩ and C = 1 mF, then the time constant, τ, is 1 second. If the initial capacitor
voltage was zero (the initial condition) and a switch was closed suddenly connecting
10 volts to the circuit (step input with a magnitude of 10), we would expect then at

1 second to have 6.3 volts across the capacitor (the output variable),
2 seconds to have 8.7 volts,
3 seconds to have 9.5 volts, and so forth.

By the time 5 seconds is reached, we should see 9.93 volts. So we see that if
we know the time constant of a linear first-order system, we can predict the response
for any step input of a known magnitude. Chances are that you are already familiar
with time constants if you have chosen transducers for measuring system variables.
Knowing the time constant of the transducer will allow you to choose one fast
enough for your system.
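The RC example values can be reproduced with a few lines of Python (a sketch for illustration; the component values match the example above):

```python
import math

def rc_step_voltage(v_step, r, c, t, v0=0.0):
    """Capacitor voltage for a step input applied at t = 0 (first-order response)."""
    tau = r * c
    return v_step + (v0 - v_step) * math.exp(-t / tau)

# R = 1 kOhm, C = 1 mF -> tau = 1 s; 10 V step from a 0 V initial condition
for t in [1.0, 2.0, 3.0, 5.0]:
    print(t, round(rc_step_voltage(10.0, 1000.0, 0.001, t), 2))
# 6.32 V at 1 s, 8.65 V at 2 s, 9.5 V at 3 s, 9.93 V at 5 s
```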
3.3.2.2 Second-Order Step Response Characteristics
Whereas in a first-order system the important parameter is the time constant, in
second-order systems there are two important parameters, the natural frequency,
ωn, and the damping ratio, ζ. As before, depending on which of the three cases we
have, we will see different types of responses.

Figure 2 Example: first-order system (RC circuit).



For systems with two real roots, we see that the response can be broken down
into the sum of two individual first-order responses. We call this case overdamped.
There will never be any overshoot, and the length of time it takes the output to reach
the steady-state value depends on the slowest time constant in the system. The faster
time constant will have already reached its final value, and its transient effects will
disappear before the effects from the slower time constant. In overdamped second-
order systems, the total response may sometimes be approximated as a single first-
order response when the difference between the two time constants is large. That is,
the slow time constant dominates the system response.
When second-order systems have auxiliary equations producing real and
repeated roots, we have a unique case where the system is critically damped.
Although numerically possible, in practice maintaining critical damping may be
difficult. Any small errors in the model, nonlinearities in the real system, or changes
in system parameters will cause deviation away from the point where the system is
critically damped. This occurs since critical damping is only one point along the
continuum, not a range over which it may occur.
Finally, and not related to a combination of first-order responses, is the case where
the auxiliary equation produces complex conjugate roots. The system now overshoots
the steady-state value and is underdamped. When we speak of second-order systems,
the underdamped case is often assumed. Much of the work in controls deals with
designing and tuning systems with dominant underdamped second-order roots.
For these reasons the remaining material in this section is primarily focused on
underdamped second-order systems. Many of the techniques are also applied to
overdamped systems even though they can just as easily be analyzed as two first-
order systems. To begin with, let us recall the form of the complex conjugate roots of
the auxiliary equation as presented earlier:
a r² + b r + c = 0    and    r1,2 = σ ± jω

In terms of natural frequency and damping ratio, we can write the roots as

r² + 2 ζ ωn r + ωn² = 0    and    r1,2 = -ζ ωn ± jωd

where ωd is the damped natural frequency defined as

ωd = ωn √(1 - ζ²)

The negative sign in front of ζ ωn comes from the negative sign in front of b in the
quadratic equation. As long as the coefficients of the second-order differential equa-
tion are positive, this sign will be negative and the system will exponentially decay
(stable response). Using this notation for our complex conjugate roots, we can write
the generalized time response as

c(t) = 1 - (e^(-ζ ωn t) / √(1 - ζ²)) sin( ωn √(1 - ζ²) t + tan⁻¹(√(1 - ζ²) / ζ) )

To see how the natural frequency and damping ratio affect the step response, it
may be helpful to view the equation above in a much simpler form to illustrate the
effects of the real and imaginary portions of the roots. Combining all constants into
common terms allows us to write the time response as follows:

c(t) = 1 - (e^(-ζ ωn t) / K1) sin(ωd t + φ)

Since the sine term only varies between ±1, the magnitude, or bounds on the
plot, is determined by the term e^(-ζ ωn t). Recognizing that the coefficient on the
exponential, ζ ωn, is actually the real portion of our complex conjugate roots
from the auxiliary equation, σ, we can say that the real portion of the roots deter-
mines the rate at which the system decays. This is similar to our definition of a time
constant and functions in the same manner. Coming back to the sinusoidal term, we
see that it describes the oscillations between the bounds set by the real portion of our
roots and it oscillates at the damped natural frequency ωd. Thus, the imaginary
portion of our roots determines the damped oscillation frequency for the system.
Figure 3 shows this relationship between the real and imaginary portions of our roots.
These concepts are fundamental to the root locus design techniques developed in the
next chapter.
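The bounding behavior described above can be checked numerically. In the Python sketch below (illustrative values ωn = 2 rad/sec and ζ = 0.3, not from the text), the normalized step response always stays inside the exponential envelope 1 ± e^(-ζωn t)/√(1 - ζ²):

```python
import math

def underdamped_step(t, wn, zeta):
    """Normalized second-order step response (0 < zeta < 1)."""
    wd = wn * math.sqrt(1.0 - zeta**2)
    phi = math.atan2(math.sqrt(1.0 - zeta**2), zeta)
    return 1.0 - (math.exp(-zeta * wn * t) / math.sqrt(1.0 - zeta**2)) * math.sin(wd * t + phi)

wn, zeta = 2.0, 0.3
for t in [0.25 * k for k in range(1, 40)]:
    c = underdamped_step(t, wn, zeta)
    envelope = math.exp(-zeta * wn * t) / math.sqrt(1.0 - zeta**2)
    # response never leaves the decaying exponential bounds
    assert abs(c - 1.0) <= envelope + 1e-12
print(underdamped_step(0.0, wn, zeta))  # starts at zero, as expected
```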
In general, when plotting a normalized system, instead of a single curve we now
get a family of curves, each curve representing a different damping ratio. When a
system has a damping ratio greater than 1, it is overdamped and behaves like two
first-order systems in series. The normalized curves for second-order systems are
given in Figure 4. All curves shown are normalized where the initial conditions
are assumed to be zero and the steady-state value reached by every curve is 1.
As was done with the first-order plot using the output percentage versus the
number of time constants, it is useful to define parameters measured from a second-
order plot that allow us to specify performance parameters for our controllers.
Knowing how the system responds allows us to predict the output based on chosen
values of the natural frequency and damping ratio or to determine the natural
frequency and damping ratio from an experimental plot. Useful parameters include
rise time, peak time, settling time, peak magnitude or percent overshoot, and delay
time. Figure 5 gives the common parameters and their respective locations on a
typical plot. Knowing only two of these parameters will allow us to reverse-engineer
a black box system model from an experimental plot of our system. Since there are
two unknowns, ωn and ζ, we need two equations to solve for them.

Figure 3 Effects of real and imaginary portions of roots (second-order systems, ωn in rad/sec).

Figure 4 Normalized step responses (second-order systems).

• Rise time, tr: tr ≈ π/(2 ωn), approximately 1/4 cycle.
If the system is underdamped, generally measured as the time to go from 0
to 100% of the final steady-state value, or the first point at which it crosses
the steady-state level. If the system is overdamped, it is usually measured as
the time to go from 10% to 90% of the final value.

• Peak time, tp: tp = π/(ωn √(1 - ζ²)) = π/ωd
Time for the response to reach the first peak (underdamped only).

• Settling time, ts: ts = 4/(ζ ωn) = 4τ
Time for the response to reach and stay within either a 2% or 5% error
band. The settling time is related to the largest time constant in the system.
Use four time constants for 2% and three time constants for 5%. This
equation comes from the bounds shown in Figure 3, where 1/τ equals ζ ωn.

Figure 5 Second-order systems: step response parameters.



Remember that at four time constants the system has reached 98% of its
final value.

• Percent overshoot (%OS): %OS = 100 e^(-π ζ / √(1 - ζ²))
%OS = [(peak value - steady-state value)/(steady-state value - initial value)]
× 100. This parameter is only a function of the damping ratio (the only
parameter listed here that is a function of only one variable).

• Delay time, td: the time required for the response to reach 1/2 the final value for
the first time.
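The formulas above are easy to package for quick calculations. The Python sketch below (illustrative; the example values ωn = 2 rad/sec and ζ = 0.5 are arbitrary) computes peak time, 2% settling time, and percent overshoot:

```python
import math

def step_response_parameters(wn, zeta):
    """Peak time, 2% settling time, and percent overshoot for an underdamped system."""
    wd = wn * math.sqrt(1.0 - zeta**2)       # damped natural frequency
    tp = math.pi / wd                        # peak time
    ts = 4.0 / (zeta * wn)                   # four time constants -> ~2% band
    pos = 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))
    return tp, ts, pos

tp, ts, pos = step_response_parameters(wn=2.0, zeta=0.5)
print(round(tp, 3), round(ts, 3), round(pos, 1))  # 1.814 4.0 16.3
```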

By measuring two of the above parameters it is possible to estimate the natural
frequency and damping ratio for the system if given an experimental plot of a
system. The model is determined by writing two equations with two unknowns
(natural frequency and damping ratio) and solving. As we see in the next section,
once these items are known, a transfer function approximation can be developed.
Step response plots provide quick and easy methods for modeling systems with
dominant roots approximating first- or second-order systems. If we need models
for higher order systems, it is helpful to enter the frequency domain.
The easiest method for determining a second-order transfer function from an
experimental step response plot is to begin by calculating the percent overshoot.
Since the %OS depends only on the damping ratio, it eliminates the need to simul-
taneously solve two equations and two unknowns. Although the equation is difficult
to solve by hand, a plot as shown in Figure 6, where the relationship in the equation
is plotted, can be used to quickly arrive at the system damping ratio. Once the
damping ratio is found, only one additional measurement from the plot is required
(e.g., settling time) and then the natural frequency can also be directly calculated.
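The %OS relation can also be inverted in closed form rather than read from a plot. The Python sketch below (not from the text; the overshoot and settling-time values are the illustrative ones used earlier) recovers ζ from a measured overshoot and then ωn from a measured 2% settling time:

```python
import math

def damping_from_overshoot(percent_os):
    """Invert %OS = 100*exp(-pi*zeta/sqrt(1 - zeta^2)) for the damping ratio."""
    ln_os = math.log(percent_os / 100.0)
    return -ln_os / math.sqrt(math.pi**2 + ln_os**2)

def natural_freq_from_settling(zeta, ts):
    """From the 2% settling-time relation ts = 4/(zeta*wn)."""
    return 4.0 / (zeta * ts)

# Measured 16.3% overshoot and 4 s settling time -> zeta ~ 0.5, wn ~ 2 rad/sec
zeta = damping_from_overshoot(16.3)
wn = natural_freq_from_settling(zeta, 4.0)
print(round(zeta, 3), round(wn, 2))
```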
Let us conclude this section by examining the common mass-spring-damper
system in the context of the material explained here. To do so we use the differential
equation developed for the mass-spring-damper system earlier and relate the con-
stants m, b, and k to the natural frequency and damping ratio.

Figure 6 Percent overshoot from a step input as a function of damping ratio for a second-
order system.

EXAMPLE 3.6
The differential equation for the mass-spring-damper system, as developed in
Chapter 2, is given below:

m x'' + b x' + k x = F

Divide the equation by m:

x'' + (b/m) x' + (k/m) x = F/m

And compare coefficients with the equation written in terms of the natural
frequency and damping ratio:

x'' + 2 ζ ωn x' + ωn² x = F/m

Thus, for the m-b-k system:

ωn² = k/m    and    2 ζ ωn = b/m

By noting where m, b, and k appear in the first two representations, the natural
frequency and damping ratio are easily calculated for all linear, second-order ODEs.
This allows us to define a single time response equation with respect to the natural
frequency and damping ratio using the generalized response developed above.
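The coefficient matching in Example 3.6 reduces to two lines of code. In the Python sketch below (the values m = 2 kg, b = 4 N·s/m, and k = 50 N/m are arbitrary illustration values, not from the text):

```python
import math

def mbk_to_wn_zeta(m, b, k):
    """Natural frequency and damping ratio from m*x'' + b*x' + k*x = F."""
    wn = math.sqrt(k / m)          # from wn^2 = k/m
    zeta = b / (2.0 * m * wn)      # from 2*zeta*wn = b/m
    return wn, zeta

wn, zeta = mbk_to_wn_zeta(m=2.0, b=4.0, k=50.0)
print(wn, zeta)  # wn = 5 rad/sec, zeta = 0.2 (underdamped)
```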
Once we write other general second-order differential equations in this form
and calculate the system natural frequency and damping ratio, we can easily
plot the system's response to a step input. It is important to remember that the
generalized response, plot parameters, and methods are developed with respect to
step inputs. If we desire the system's response to other inputs, the
methods described in the following section are more useful.

3.4 s-DOMAIN OR LAPLACE

The s-domain is entered using Laplace transforms. These transforms relate time-
based functions to functions of the complex variable s. This section introduces the
procedures commonly used when working in the s-domain.

3.4.1 Laplace Transforms

Using Laplace transform methods allows us to convert differential equations and
various input functions into simple algebraic functions. Both transient and steady-
state components can be determined simultaneously. In addition, virtually all linear
time-invariant (LTI) control system design is done using the s-domain. Block dia-
grams are a prime example. The Laplace transform is very powerful when graphical
methods in the s-plane (root locus plots) are used to quickly determine system
responses. Although several methods are given in this section for using Laplace trans-
forms to solve for the time response of a differential equation, we do well to realize
that in designing and implementing control systems, we seldom take the inverse
Laplace transform to arrive back in the time domain. There are two primary reasons
for this: Virtually all the design tools use the s-domain (software included), and in

most cases we know what type of response we will have in the time domain simply by
looking at our system in the s-domain. The goal of this section, then, is to show
enough examples for us to make the connection between what equivalent systems
look like in the s-domain and in the time domain.
Using Laplace transforms requires a quick review of complex variables. For
the transform, s = σ + jω, where σ is the real part of the complex variable and ω is
the imaginary component. This notation was introduced earlier when discussing the
complex conjugate roots from the auxiliary equation. Fortunately, although helpful,
algebraic knowledge of complex variables is seldom required when using the s-
domain. Using the method as a tool for understanding and designing control systems,
we primarily use the Laplace transform of f(t) and the inverse Laplace transform of
F(s) through the use of tables. Making the following definitions will help us use the
transforms.

f(t) = a function of time where f(t) = 0 for t < 0
s = σ + jω, the complex variable
L = Laplace operator symbol
F(s) = Laplace transform of f(t)

Then the Laplace transform of f(t) is

L[f(t)] = F(s) = ∫₀^∞ f(t) e^(-st) dt

And the inverse Laplace transform of F(s) is

L⁻¹[F(s)] = f(t) = (1/(2πj)) ∫_(c-j∞)^(c+j∞) F(s) e^(st) ds

A benefit of using this method as a tool is that we seldom (if ever) need
to do the actual integration, since tables have been developed that include almost
all the transforms we will ever need when designing control systems. Looking at the
equations above gives us an appreciation of the time that is saved when using the
tables. A table of Laplace transform pairs has been included in
Appendix B, and some common transforms that are used often are highlighted
here in Table 1. Additional tables are available from many different sources.
The outline for using Laplace transforms to find solutions to differential equa-
tions is quite simple.

I. Write the differential equation.
II. Perform the Laplace transform.
III. Solve for the desired output variable.
IV. Perform the inverse Laplace transform for the time solution to the ori-
ginal differential equation.

To better illustrate the solution steps, let us take a general ordinary differential
equation, include some initial conditions, and solve for the time solution.

Table 1 Common Laplace Transforms

Identities
Constants: L[A f(t)] = A F(s)
Addition: L[f1(t) + f2(t)] = F1(s) + F2(s)
First derivative: L[df(t)/dt] = s F(s) - f(0)
Second derivative: L[d²f(t)/dt²] = s² F(s) - s f(0) - df(0)/dt
General derivatives: L[dⁿf(t)/dtⁿ] = sⁿ F(s) - Σ (k = 1 to n) s^(n-k) f^(k-1)(0)
Integration: L[∫ f(t) dt] = F(s)/s

Common Inputs: f(t), Time Domain → F(s), Laplace Domain
Unit impulse: δ(t) → 1
Unit step: 1(t), t > 0 → 1/s
Unit ramp: t → 1/s²

Common Transform Pairs: f(t), Time Domain → F(s), Laplace Domain
First-order impulse response: e^(-at) → 1/(s + a)
First-order step response: (1/a)(1 - e^(-at)) → 1/(s(s + a))
Second-order impulse response: (ωn/√(1 - ζ²)) e^(-ζ ωn t) sin(ωd t) → ωn²/(s² + 2 ζ ωn s + ωn²), where ωd = ωn √(1 - ζ²)
Second-order step response: 1 - (e^(-ζ ωn t)/√(1 - ζ²)) sin(ωd t + φ) → ωn²/(s(s² + 2 ζ ωn s + ωn²)), where ωd = ωn √(1 - ζ²) and φ = tan⁻¹(√(1 - ζ²)/ζ)

I. Write the differential equation.
For this part we assume we developed the following differential equation
and initial conditions from applying the concepts in Chapter 2 to a
physical system.

d²x/dt² + 6 (dx/dt) + 5x = 0;    x(0) = 0, ẋ(0) = 2

II. Perform the Laplace transform.

s² X(s) - s x(0) - ẋ(0) + 6[s X(s) - x(0)] + 5 X(s) = 0

III. Solve for the desired output variable.

(s² + 6s + 5) X(s) = 2
X(s) = 2/(s² + 6s + 5) = 2/((s + 1)(s + 5))

IV. Perform the inverse Laplace transform.
From the tables we see a similar transformation that will meet our needs:

L⁻¹[(b - a)/((s + a)(s + b))] = e^(-at) - e^(-bt)

Rearrange X(s) to match the table entry found in Appendix B
(constants carry through):

2/((s + 1)(s + 5)) = (1/2) · 4/((s + 1)(s + 5));    a = 1, b = 5

Then the time response can be expressed as

x(t) = (1/2)(e^(-t) - e^(-5t))
So we see that, at least in some cases, the solution of an ODE using Laplace
transforms is quite straightforward and easy to apply. What happens more often
than not, or so it seems, is that the right match is not found in the table and we must
manipulate the Laplace solution before we can use an identity from the table.
Sometimes it is necessary to expand the function in the s-domain using partial
fraction expansion to obtain forms found in the lookup tables of transform pairs.
Many computer programs are also available to assist in Laplace transforms
and inverse Laplace transforms. In most cases the program must also have symbolic
math capabilities.
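The Laplace-derived solution x(t) = (1/2)(e^(-t) - e^(-5t)) from the worked example can be verified numerically, confirming both the initial conditions and the differential equation itself (a Python sketch for illustration):

```python
import math

def x(t):
    """Time solution obtained via the Laplace transform in the example above."""
    return 0.5 * (math.exp(-t) - math.exp(-5.0 * t))

def residual(t, h=1e-4):
    """x'' + 6x' + 5x evaluated with central differences; should be ~0."""
    xpp = (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2
    xp = (x(t + h) - x(t - h)) / (2.0 * h)
    return xpp + 6.0 * xp + 5.0 * x(t)

xp0 = (x(1e-6) - x(-1e-6)) / 2e-6   # numerical initial velocity
print(x(0.0), round(xp0, 6), abs(residual(0.5)) < 1e-4)  # 0.0 2.0 True
```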

3.4.2 Laplace Transforms: Partial Fraction Expansion

There are two primary classes of problems where we might use partial fraction
expansion when performing inverse Laplace transforms. If we do not have any
repeated real or complex conjugate roots, the expansion is straightforward. When
our system in the s-domain contains repeated first-order roots or repeated complex
conjugate roots, the algebra gets more tedious, as we must take derivatives of both
sides during the expansion. Simple examples of these cases are illustrated in this
section. If more details are desired (usually not required for designing control
systems), most texts on differential equations will contain sections on the theory
behind each case.

3.4.2.1 Partial Fraction Expansion: No Repeated Roots
To demonstrate the first case, let's add a nonzero initial position condition to the
example above and examine what happens. Modified system:

d²x/dt² + 6 (dx/dt) + 5x = 0;    x(0) = 2, ẋ(0) = 2
s² X(s) - s x(0) - ẋ(0) + 6[s X(s) - x(0)] + 5 X(s) = 0
(s² + 6s + 5) X(s) = 2s + 14
X(s) = (2s + 14)/(s² + 6s + 5) = (2s + 14)/((s + 1)(s + 5))
With the addition of the s term in the numerator, we no longer find the solution
in the table. Using partial fraction expansion will result in simpler forms and allow
the use of the Laplace transform pairs found in Appendix B. It is possible for most
cases to find a table containing our form of the solution (in dedicated books contain-
ing transform pairs), but including all of the possible forms makes for a confusing
and long table. Also, remember that these techniques are learned more for the
connection they allow us to make between the s-domain and the time domain
than because partial fraction expansion is a common task when designing control
systems (in general it is not).
For the partial fraction expansion, then, let the solution X(s) equal a sum of
simpler terms with unknown coefficients:

(2s + 14)/((s + 1)(s + 5)) = K1/(s + 1) + K2/(s + 5)

To find the coefficients, we multiply both sides by the factor in the
denominator and let the value of s equal the root of that factor. Repeating this for
each term allows us to find each coefficient Ki. The process is given below for finding
K1 and K2.
To solve for K1, multiply through by (s + 1):

(2s + 14)/(s + 5) = K1 + K2 (s + 1)/(s + 5)

Now let s → -1 (the K2 term vanishes) and we can find K1:

K1 = [(2s + 14)/(s + 5)] at s = -1 = 12/4 = 3

Repeat the process to find K2, multiplying through by (s + 5):

(2s + 14)/(s + 1) = K1 (s + 5)/(s + 1) + K2

Now let s → -5 (the K1 term vanishes):

K2 = [(2s + 14)/(s + 1)] at s = -5 = 4/(-4) = -1

The result of our partial fraction expansion becomes

X(s) = (2s + 14)/((s + 1)(s + 5)) = 3/(s + 1) - 1/(s + 5)

Now the inverse Laplace transform is straightforward using the table and
results in the time response of

x(t) = 3 e^(-t) - e^(-5t)
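The "multiply through and evaluate at the root" procedure (often called the cover-up method) is mechanical enough to code directly. The Python sketch below (illustrative only) reproduces the coefficients for this example:

```python
def coverup_residue(num, poles, at):
    """Partial fraction coefficient of num(s)/prod(s - p) at the pole 'at',
    found by covering up that factor and evaluating at s = 'at'."""
    val = num(at)
    for p in poles:
        if p != at:
            val /= (at - p)
    return val

num = lambda s: 2.0 * s + 14.0          # numerator of X(s)
poles = [-1.0, -5.0]                    # roots of s^2 + 6s + 5
K1 = coverup_residue(num, poles, -1.0)  # coefficient of 1/(s + 1)
K2 = coverup_residue(num, poles, -5.0)  # coefficient of 1/(s + 5)
print(K1, K2)  # 3.0 -1.0
```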
An alternative method, preferred by some, is to expand out both sides and
equate the coefficients of s to solve for the coefficients. In some cases this leads to
simultaneously solving sets of equations, although this is generally an easy task. To
quickly illustrate this method, let us begin with the same equation:

(2s + 14)/((s + 1)(s + 5)) = K1/(s + 1) + K2/(s + 5)

Now when we cross-multiply to remove the terms in the denominator, we can collect
coefficients of the different powers of s to generate our equations:

2s + 14 = K1 (s + 5) + K2 (s + 1)
(K1 + K2 - 2) s + (5 K1 + K2 - 14) = 0

Our two equations now become (with the two unknowns, K1 and K2):

K1 + K2 = 2    and    5 K1 + K2 = 14

Subtracting the top from the bottom results in

4 K1 = 12    or    K1 = 3

Substituting K1 back into either equation, we get K2 = -1, exactly the same as
before. Once we have found K1 and K2, the procedure to take the inverse Laplace
transform is identical and results in the same time solution to the original differential
equation. The method to use largely depends on which one we are most com-
fortable with.
Finally, it is quite simple using a computer package like Matlab to do the
partial fraction expansion. Taking our original transfer function from above, we
can use the residue command to get the partial fractions. The solution using
Matlab is as follows.
Transfer function:

X(s) = (2s + 14)/(s² + 6s + 5) = (2s + 14)/((s + 1)(s + 5))

Matlab command:

>> [R, P, K] = residue([2 14], [1 6 5])

and the output:

R =
    -1
     3
P =
    -5
    -1
K =
    []

The results are interpreted where R contains the coefficients of the numerators and P
the poles (s + p) of the denominator. K, if necessary, contains the direct terms.
Writing R and P as before means we have the -1 divided by (s + 5) and the 3 divided
by (s + 1); this is exactly the result we derived earlier.

X(s) = (2s + 14)/((s + 1)(s + 5)) = 3/(s + 1) - 1/(s + 5)

The same command can be used for the cases presented in the following
sections.
3.4.2.2 Partial Fraction Expansion: Repeated Roots
To look at the second case we determine the response of a dierential equation in
response to an input (nonhomogeneous) and assuming the initial conditions are zero.
We will take the simple rst-order system as found when modeling the RC electrical
circuit and subject the system to a unit ramp input. In general terms our system can
be described by the following dierential equation:
dV
0:2 V Ramp input
dt
Take the Laplace transform and solve for the output when the input is a unit ramp
(initial conditions are zero):

(0.2s + 1)V(s) = 1/s^2     (unit ramp input)
The output becomes:

V(s) = 1/(s^2(0.2s + 1)) = 5/(s^2(s + 5))
With repeated roots, the partial fraction expansion terms must include all
lower powers of the repeated terms. In this case then, the coefficients and terms
are written as

5/(s^2(s + 5)) = K1/(s + 5) + K2/s^2 + K3/s
To solve for K1 we can multiply through by (s + 5) and set s = -5:

K1 = [5/s^2 - K2(s + 5)/s^2 - K3(s + 5)/s] evaluated at s = -5

K1 = 5/(-5)^2 = 5/25 = 1/5
To solve for K2 we multiply through by s^2 and set s = 0:

K2 = [5/(s + 5) - K1 s^2/(s + 5) - K3 s] evaluated at s = 0

K2 = 5/5 = 1
With the lower power of the repeated root, we now have a problem if we
continue with the same procedure. If we multiply both sides by s and let s = 0,
the K2 term becomes infinite (division by zero) because an s term is left in the
denominator. To solve for K3, then, it becomes necessary to take the derivative of
both sides with respect to s and then let s = 0. This allows us to solve for K3.
Cross-multiply by s^2(s + 5) to simplify the derivative:

5 = K1 s^2 + K2(s + 5) + K3 s(s + 5)
Take the derivative with respect to s:

d/ds  →  0 = 2K1 s + K2 + K3 s + K3(s + 5)
Now we can set s = 0 and solve for K3, the remaining coefficient:

0 = K2 + 5K3,   with K2 = 1:

K3 = -1/5
Using the coefficients allows us to write the response as the sum of three easier
transforms:

V(s) = (1/5) · 1/(s + 5) + 1/s^2 - (1/5) · 1/s
And finally, we take the inverse Laplace transform of each to obtain the time
response:

V(t) = (1/5)e^(-5t) + t - 1/5
As shown in the previous example, we can write and solve simultaneous equations
instead of using the method shown above. For this example it means getting
three equations to solve for the three coefficients. If we multiply through by the
denominator of the left-hand side (as we did before we took the derivative), we
get the partial fraction expansion expressed as

5 = K1 s^2 + K2(s + 5) + K3 s(s + 5)
Now collect the coefficients of s to obtain the three equations:

(K1 + K3)s^2 + (K2 + 5K3)s + (5K2 - 5) = 0
The three equations (and three unknowns) are

K1 + K3 = 0
K2 + 5K3 = 0
5K2 - 5 = 0, or K2 = 1
Once again we get the same values for the coefficients, and the inverse Laplace
transforms result in the same time response. For larger systems it is easy to write the
equations in matrix form to solve for the coefficients, as illustrated below:
[ 1  0  1 ] [ K1 ]   [ 0 ]          [ K1 ]   [ 1  0  1 ]^-1 [ 0 ]   [  1/5 ]
[ 0  1  5 ] [ K2 ] = [ 0 ]   and    [ K2 ] = [ 0  1  5 ]    [ 0 ] = [  1   ]
[ 0  5  0 ] [ K3 ]   [ 5 ]          [ K3 ]   [ 0  5  0 ]    [ 5 ]   [ -1/5 ]

When written in matrix form there are many software packages and calculators
available for inverting the matrix and solving for the coefficients.
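As a sketch of that matrix approach, here is a short Python routine (Matlab's backslash operator or inv would do the same job; the helper name gauss_solve is mine):

```python
# Solve [[1,0,1],[0,1,5],[0,5,0]] [K1,K2,K3]^T = [0,0,5]^T for the
# repeated-root partial fraction coefficients derived in the text.

def gauss_solve(A, b):
    """Plain Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):            # back-substitution
        partial = sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - partial) / A[r][r]
    return x

K1, K2, K3 = gauss_solve([[1, 0, 1], [0, 1, 5], [0, 5, 0]], [0, 0, 5])
print(K1, K2, K3)  # -> 0.2 1.0 -0.2, i.e. K1 = 1/5, K2 = 1, K3 = -1/5
```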
3.4.2.3 Partial Fraction Expansion: Complex Conjugate Roots
To conclude the examples illustrating the use of partial fraction expansion when
solving differential equations, we look at the case where we have complex conjugate
roots. Any system of at least second order may produce complex conjugate roots. It
is common in both RLC electrical circuits and m-b-k mechanical systems to have
complex roots when solving the differential equation for the time response. For the
example here we will work from the s-domain expression given below. The output below is a
common form of a second-order system (RLC, m-b-k, etc.) responding to a unit step
input.
Y(s) = 1/(s(s^2 + s + 1))
The system has three roots:

s = 0   and   s = -1/2 ± j√3/2
For second-order terms in the denominator that factor into complex conjugate
roots, we can write the partial fraction expansion where the numerator of the term
containing the complex roots has two coefficients, one being multiplied by s, as shown
below:

Y(s) = 1/(s(s^2 + s + 1)) = K1/s + (K2 s + K3)/(s^2 + s + 1)
To solve for the coefficients in this example we will multiply through by s(s^2 + s + 1)
and group the coefficients to form the three equations:

1 = K1(s^2 + s + 1) + (K2 s + K3)s

(K1 + K2)s^2 + (K1 + K3)s + (K1 - 1) = 0
Now it is easy to solve for the coefficients:

K1 - 1 = 0  →  K1 = 1
K1 + K2 = 0  →  K2 = -1
K1 + K3 = 0  →  K3 = -1
The response can now be written as

Y(s) = 1/s - (s + 1)/(s^2 + s + 1)
When we look at the transform table we find that we are close but not quite
there yet. We need one more step to put it in a form where we can use the transforms
in the table. Knowing the real and imaginary portions of our roots, we can write the
second-order denominator as

(s + |Real|)^2 + |Imag|^2 = (s + 1/2)^2 + (√3/2)^2 = s^2 + s + 1 = (s + a)^2 + b^2
Now we have two identities from the table that we can use:

L^-1[ b/((s + a)^2 + b^2) ] = e^(-at) sin(bt)    and    L^-1[ (s + a)/((s + a)^2 + b^2) ] = e^(-at) cos(bt)
With one last step we have the form we need to perform the inverse Laplace
transform. Take the second-order term and break it into two terms in the form of the two
Laplace transform identities given above:

Y(s) = 1/(s(s^2 + s + 1)) = 1/s - (s + 1/2)/((s + 1/2)^2 + (√3/2)^2) - (1/√3) · (√3/2)/((s + 1/2)^2 + (√3/2)^2)
Finally, the time response:

y(t) = 1 - e^(-t/2) cos(√3 t/2) - (1/√3) e^(-t/2) sin(√3 t/2)
There are several things to be learned from this example. First, it is a method
that provides a way of obtaining the time response of systems containing complex
conjugate roots. The method becomes more important when different inputs are
combined and the standard step input is not the only one present. This leads to
the second point to be made. The example used here falls into a very common class,
one already examined at some length: the step response of a second-order system. If
the goal of this section were simply to obtain the time response (without teaching a
method applicable to more general systems), all we would have to calculate is the
natural frequency and the damping ratio of the system and we would know the time
response. Again, this is true in this case because the input is a step function and we
can compare this response to a standard response and determine the generalized
parameters.
Y(s) = 1/(s(s^2 + s + 1))

is the same as

Y(s) = ωn^2/(s(s^2 + 2ζωn s + ωn^2))
where

ωn = 1 rad/sec,   ζ = 1/2,   and   ωd = ωn √(1 - ζ^2) = √3/2
With the natural frequency and damping ratio known, the response of a second-
order system to a unit step input is (from Table 1)

y(t) = 1 - (e^(-ζωn t)/√(1 - ζ^2)) sin(ωd t + φ),   where ωd = ωn √(1 - ζ^2) and φ = tan^-1(√(1 - ζ^2)/ζ)

This is the same response obtained using partial fraction expansion where the sine
and cosine terms have been combined into a sine term with a phase angle.
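That equivalence can be verified numerically. A Python sketch (function names mine), using ωn = 1 rad/sec and ζ = 1/2 from above:

```python
import math

# Compare the Table 1 form (sine plus phase angle) with the
# partial-fraction form (cosine + sine) for wn = 1 rad/sec, zeta = 1/2.
wn, zeta = 1.0, 0.5
wd = wn * math.sqrt(1 - zeta**2)                # damped natural frequency
phi = math.atan2(math.sqrt(1 - zeta**2), zeta)  # phase angle (60 degrees)

def y_table(t):
    return 1 - math.exp(-zeta * wn * t) / math.sqrt(1 - zeta**2) * math.sin(wd * t + phi)

def y_pfe(t):
    return 1 - math.exp(-t / 2) * (math.cos(wd * t) + math.sin(wd * t) / math.sqrt(3))

diff = max(abs(y_table(t) - y_pfe(t)) for t in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0))
print(diff)  # ~0: the two expressions describe the same response
```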
One of the more important connections we must make at this point is that we
actually knew this was the response from the very time we started the example, once
we calculated the roots of the second-order denominator. The real portion of the
roots equals -ζωn and the imaginary portion of the roots is the damped natural
frequency ωd. As we see in Section 3.4.3, this forms the foundation of using the
s-plane to determine a system's response in the time domain.
EXAMPLE 3.7
To conclude this section on Laplace transforms, let us once again use the mass-
spring-damper equation and now solve it using Laplace transforms. Remember
that the original differential equation, developed using several methods, is

f(t) = m x″ + b x′ + k x

Then taking the Laplace transform with zero initial conditions:

F(s) = m s^2 X(s) + b s X(s) + k X(s) = Input

X(s) can easily be solved for since all the derivative terms have been removed
during the transform. Solving for X(s) results in

Output = X(s) = [1/(m s^2 + b s + k)] · Input
If the input is a unit step, Unit Step = 1/s. Then the output is given by

X(s) = [1/(m s^2 + b s + k)] · (1/s)
If we divide top and bottom by m we see a familiar result:

X(s) = (1/m)/(s^2 + (b/m)s + (k/m)) · (1/s)

Aside from a scaling factor of 1/k, this is one of the entries in the table, where
ωn^2 = k/m and 2ζωn = b/m. If the system is overdamped we have two real roots from
the second-order polynomial in the denominator and the system can be solved as the
sum of two first-order systems. When we have critical damping we have two repeated
real roots, and again the solution was already discussed in Section 3.3.1.2. Finally, if
the system is underdamped we get complex conjugate roots and the system exhibits
overshoot and oscillation. Whenever first- or second-order systems are examined
with respect to step inputs, we can use the generalized responses developed in
Section 3.3.2. If different input functions are used, then the partial fraction expansion
tools still allow us to develop the time response of the system.
3.4.3 Transfer Functions and Block Diagrams
Although the previous section took the inverse Laplace transform to obtain the time
response for the original differential equation, this step is often not required. Transfer
functions and block diagrams can be developed by taking the Laplace transform
of the differential equation representing the physical system and using the result
directly. Using the algebraic function in the s-domain to represent physical systems
is very common, and many computer programs can directly simulate the system
response from this notation. This section will also begin to introduce computer
solution methods now that we are familiar with the analytical background and
how to represent physical systems in the s-domain.
The most common format used when designing control systems is block dia-
grams using interconnected transfer functions. In our brief introduction to block
diagrams, we learned simple reduction techniques and how a block diagram
simply represents some equation pictorially. Section 2.3.2 presented some of the
basic properties and reduction steps. The goal in this section is to learn what the
actual blocks represent and how to develop them.
Block diagrams are lines representing variables that connect blocks containing
transfer functions. A transfer function is simply a relationship between the output
variable and input variable represented in the s-domain.
Transfer function = (Laplace transform of output)/(Laplace transform of input)

General notation is

Transfer function = G(s) = C(s)/R(s)

where Rs is input and Cs is output. We use Laplace transforms to convert dier-


ential equations into transfer functions representing the output to input relation-
ships. Since transfer functions only relate the output to the input, we do not include
initial conditions when using Laplace transforms.
Several common examples are given below to illustrate the procedure of con-
verting dierential equations to transfer functions. First, let us develop the transfer
function for the mass-spring-damper system whose dierential equation has already
been derived.

EXAMPLE 3.8
Taking the Laplace transform of this ODE leads to a transfer function and block as shown:

m s^2 X(s) + b s X(s) + k X(s) = Input = R(s)

X(s)/R(s) = 1/(m s^2 + b s + k)
With a uniform set of units, R(s) is a force input, C(s) is a position output, and the
coefficients m, b, and k must each be consistent with R(s) and C(s). Each s is
associated with units of 1/sec.
EXAMPLE 3.9
Another example for which we have already derived the differential equation is a first-
order RC circuit. Taking the differential equation and following the same procedure:

RC dc/dt + c = r(t)

(RC s + 1)C(s) = R(s)

C(s)/R(s) = 1/(RC s + 1)

Now the units of both R(s) and C(s) are volts, where their relationship to each other
is defined by the transfer function in the block. Once we know the input R(s) we can
develop the output C(s). If we pictorially represent the input as a unit step change in
voltage, then the expected output voltage is a first-order step response, as shown in
Figure 7.
So now we are getting to the point where, as alluded to in the previous section,
we are able to look at the form of our transfer function and quickly and accurately
predict the type of response that we will have for a variety of inputs.
3.4.3.1 Characteristics of Transfer Functions
Several notes at this point about the Laplace transform and corresponding transfer
function will help us understand future sections when designing control systems. The
denominator of the transfer function G(s) is usually a polynomial in s where
the highest power of s relates to the order of the system. Hence, the mass-spring-
damper system is a second-order system and has a characteristic equation (CE) of
CE = m s^2 + b s + k. The roots of the CE directly relate to the type of response the
system exhibits. Looking in the Laplace transform tables clarifies this more. For
example, the first-order system transfer function can be written as a/(s + a). This
corresponds to the time response e^(-at). The root s = -a of the characteristic
equation relates directly to the rate of rise or decay of the system and is thus related
to the system time constant, τ, where τ = 1/a. The same relationship between roots
and the system response is found in Table 1 for second-order systems like the mass-
spring-damper system. If the roots of a second-order CE are both negative and real,
the system behaves like two first-order systems in series. If the roots have imaginary
components, they are complex conjugates according to the quadratic equation and
the system is underdamped and will experience some oscillation. If the real portion
of the roots is ever positive, the system is unstable since the time response now
includes a factor e^(+at), thus experiencing exponential growth (until something breaks).
These relationships were formed while presenting partial fraction expansions and
form the foundation for the root locus technique presented later.
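These root-location rules are easy to express in code. Here is a hedged Python sketch (the classify helper and its labels are mine, paraphrasing the text's categories):

```python
# Classify a second-order system from the roots of its characteristic
# equation CE = m*s^2 + b*s + k, using the quadratic formula.
def classify(m, b, k):
    disc = b * b - 4 * m * k       # discriminant of the quadratic formula
    real = -b / (2 * m)            # real part common to both roots
    if real > 0:
        return "unstable"           # positive real part: exponential growth
    if disc > 0:
        return "overdamped"         # two negative real roots
    if disc == 0:
        return "critically damped"  # repeated real root
    return "underdamped"            # complex conjugate pair: oscillation

print(classify(1, 10, 1), classify(1, 2, 1), classify(1, 1, 1), classify(1, -1, 1))
# -> overdamped critically damped underdamped unstable
```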
Figure 7 Example: RC circuit transfer function response.
The roots of the characteristic equation are often plotted in the s-plane. The
s-plane is simply an x-y plotting space where the axes represent the real and ima-
ginary components of the roots of the characteristic equation. This is shown in
Figure 8. The parameters used to describe first- and second-order systems are all
graphically present in the s-plane. The time constant for a first-order system (and the
decay rate for a second-order system) relates to the position on the real axis. The
imaginary axis represents the damped natural frequency, the radius (distance) to the
complex pole is the natural frequency, and the cosine of the angle between the
negative real axis and the radial line drawn to the complex pole is the damping
ratio. Thus, the s-plane is a quick method of visually representing the response of
our dynamic system.
Since anything with a positive real exponent will exhibit exponential growth,
the unstable region includes the area to the right of the imaginary axis, commonly
referred to as the right-hand plane, or RHP. In the same way, if all poles are to the left
of the imaginary axis, the system is globally stable since all poles will include a term
that decays exponentially and that is multiplied by the total response. (Thus, when
the decaying term approaches zero, so does the total response.) This side is com-
monly termed the left-hand plane, or LHP. The further to the left the poles are in the
plane, the faster they will decay to a steady-state value, a property well worth
knowing when designing controllers. Figure 9 illustrates the types of response
depending on pole locations in the s-plane.
There are two more useful theorems for analyzing control systems represented
with block diagrams: the initial value theorem and the final value theorem. These theo-
rems relate the s-domain transfer function to the time domain without having to first
take the inverse Laplace transform.

Initial value theorem (IVT):

f(0) = lim (s→∞) s F(s)

Final value theorem (FVT):

lim (t→∞) f(t) = lim (s→0) s F(s)
In particular, the FVT is extremely useful and frequently used to determine
steady-state errors for various controllers. Simply stated, the final output value as
Figure 8 Plane notation and root location.
Figure 9 Response type based on s-plane pole locations.
time continues is equal to the output in the s-domain times s, as s in the limit
approaches zero. In almost every case you can determine the steady-state output of a
system by multiplying the transfer function (TF) times s and the input (in terms of s)
and setting s to zero. The resulting value is the steady-state value that the system will
reach in the time domain. For step inputs this becomes very easy since the s in the
theorem cancels with the 1/s representing the step input. Thus, for a unit step input
the final value of the transfer function is simply the value of the TF as s → 0.
With the tools described up to this point we can now build the block diagram,
determine the content of each block, close the loop (as our controller ultimately will),
and reduce the block diagram to a single block to easily determine the closed-loop
dynamic and steady-state performance.
To work through the application of the FVT, let's solve for the steady-state
output using the two examples discussed in previous sections, the RC circuit and the
m-b-k mechanical system.
EXAMPLE 3.10
We will take the transfer function and block diagram for the RC circuit but now with
a step input of 10 V in magnitude. This is the equivalent of closing a switch at t = 0 and
measuring the voltage across the capacitor. The transfer function is the same as before,
C(s)/R(s) = 1/(RC s + 1). The Laplace representation of the step input:

Step input with a magnitude of 10  →  R(s) = 10 (1/s) = 10/s

Apply the final value theorem to the output, C(s):

C_steady state = lim (t→∞) c(t) = lim (s→0) s C(s) = lim (s→0) s · [1/(RC s + 1)] · (10/s) = 10
Although there are no surprises here, the concept is clear and the FVT is easy to use and apply
when working with block diagrams. We finish our discussion of the FVT by applying
it to the mass-spring-damper system developed earlier.

EXAMPLE 3.11
Taking the Laplace transform of the m-b-k system differential equation resulted in the transfer
function 1/(m s^2 + b s + k). The Laplace representation of the step input:

Step input (force) with a magnitude of F  →  R(s) = F (1/s) = F/s

Apply the final value theorem to the output, C(s):

C_steady state = lim (t→∞) c(t) = lim (s→0) s C(s) = lim (s→0) s · [1/(m s^2 + b s + k)] · (F/s) = F/k
This simply tells us what we could have ascertained from the model: that after
all the transients decay away, the final displacement of the mass is the steady-state
force divided by the spring constant, where the force has magnitude F. This agrees with
the steady-state value determined from the differential equations in previous sec-
tions.
While these are simple examples chosen to illustrate the procedure, the method remains
extremely fast even when the block diagrams get large and more complex. The
FVT is frequently used in determining the steady-state errors for closed-loop con-
trollers.
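Both FVT calculations can be mimicked numerically by evaluating s·C(s) at a very small s instead of taking the limit symbolically. A Python sketch; the parameter values for R, C, m, b, k, and F below are made up for illustration, not taken from the text:

```python
# Final value theorem: lim (t -> inf) f(t) = lim (s -> 0) s*F(s).
# Approximate the limit by evaluating s*C(s) at a tiny value of s.
def fvt(s_times_C, s=1e-9):
    return s_times_C(s)

R, C = 1.0e3, 1.0e-6   # hypothetical RC circuit values
rc_final = fvt(lambda s: s * (1 / (R * C * s + 1)) * (10 / s))      # 10 V step

m, b, k, F = 2.0, 3.0, 5.0, 20.0   # hypothetical mass-spring-damper values
mbk_final = fvt(lambda s: s * (1 / (m * s**2 + b * s + k)) * (F / s))

print(rc_final, mbk_final)  # ~10 (volts) and ~4 (= F/k)
```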
3.4.3.2 Common Transfer Functions Found in Block Diagrams
Finally, let us examine several common block diagram transfer functions. Several
blocks are often found in block diagrams representing control systems, some of
which tend to confuse beginners. Each block described here may be repeated
throughout the block diagram, each time representing a different component and
with different physical units. The goal here is not to list all the possible applications
of each block but instead to recognize the basic common forms found in all
different kinds of systems (electrical, mechanical, etc.). For example, if we under-
stand a first-order lag term, we will understand its input/output relationship whether
it represents an RC circuit, shaft speed with rotary inertia, or a phase-lag electronic
controller. The other note to make here is that all systems can be reduced into
combinations of these blocks. If we have a fifth-order polynomial (characteristic
equation) in the denominator of our transfer function, we have several combinations
possible when it is factored: five real roots corresponding to five first-order terms;
three real roots and one complex conjugate pair corresponding to three first-order
terms and one second-order oscillatory term; or one real root and two complex
conjugate pairs corresponding to one first-order term and two oscillatory terms.
So no matter how complex our system becomes, it is easily described as a combina-
tion of the transfer function building blocks described below.

Gain factor, K:
The gain, K, is a basic block and may represent many different functions in a
controller. This block may multiply R(s) by K without changing the variable type
(e.g., a proportional controller multiplying the error signal) or represent an amplifier
in the system that associates different units with the input and output variables. An
example is a hydraulic valve converting an electrical input (volts or amps) into a
corresponding physical output (pressure or flow). The valve coefficients resulting
from linearizing the directional control valve are used in this manner. Therefore,
when using this block be sure to recognize what the units are supposed to be and
what units the gain was actually determined with. There are no dynamics associated
with the gain block; the output is always, and without any lead or lag, a multiple K
of the input.
Integral:

This block represents a time integral of the input variable, where c(t) = ∫ r(t) dt. Two
common uses include integrating the error signal to achieve the integral term in a
proportional-integral-derivative (PID) controller and integrating a physical system
variable such as velocity into a position. In terms of units, then, it multiplies the
input variable by seconds. If the input is an angular acceleration, rad/sec^2, the output is
an angular velocity, rad/sec. Since most physical systems are integrators (remember the
physical system relationships from Chapter 2), this is a common block.
One special comment is appropriate here. The integral block is not to be
confused with step inputs even though both are represented by 1/s. The block con-
tains a transfer function that is simply the ratio of the output to the input. Thus it is
possible to have an integral block with a step input, in which case the output would
be represented by

C(s) = G(s) R(s) = (1/s)(1/s) = 1/s^2 = ramp output (from tables)

This concept is sometimes confusing when initially learning block diagrams and s-domain
transforms since one 1/s term is the system model and the other 1/s term is the input
to the system.
Derivative:

This block represents a derivative function where the output is the derivative of the
input. A common use is in the derivative term of a PID controller block. Use of
the block requires caution since it easily amplifies noise and tends to saturate
outputs. It should be noted that the integral and derivative blocks
are inverses of each other and, if connected in series in a block diagram, would
cancel. The units associated with the derivative block are 1/sec, the inverse of the
integral block.
First-order system (lag):

This block is commonly used in building block diagrams representing physical sys-
tems. It might represent an RC circuit as already seen, a thermal system, a liquid level
system, or a rotary speed inertial system. The input-output relationship for a first-
order system in the time domain has already been discussed in Section 3.3.2. Based
on the time constant, τ, we should feel comfortable characterizing the output from
this system. In the next section, when we examine frequency domain techniques, we
will see that the output generally lags the input (except at very low frequencies), and
hence this transfer function is often called a first-order lag.
First-order system (lead):

This block is found in several controllers and some systems. It is similar to the first-
order lag except that now the output leads the input. Most physical systems do not
have this characteristic, as real systems usually exhibit lag, as found in the previous
block. The similarities and differences will become clearer when these blocks are
examined in the frequency domain.
Second-order system:

In addition to true second-order systems like a mass-spring-damper configuration
or an RLC circuit, a second-order system block is commonly used to approximate
higher order systems. As later sections show, a system that has dominant complex
conjugate poles can be accurately modeled by a second-order system. If a system is
expressed in this form, we generally assume that it is underdamped and thus exhibits
overshoot and oscillation. If it is overdamped we can just as easily treat it as two
first-order systems. As with the first-order model, it is possible to see this form
appearing in the numerator of the transfer function. It is unlikely to get this term
from modeling the physics of a system; it more frequently appears as part of a
controller. The common PID controller introduces a second-order term into the
numerator of our system.
EXAMPLE 3.12
To summarize many of the concepts presented thus far, let us take a model of a
physical system, develop the differential equation describing the physics of the sys-
tem, convert it to a transfer function, and plot the time response when the system is
subjected to a step input. The system we will examine is a simple rotary group with
inertia J and damping B; a torque, T, as shown in Figure 10, acts on the system.
To derive the differential equation, we sum the torques acting on the system
and set them equal to the inertia multiplied by the angular acceleration:

ΣT = J dω/dt = T - Bω
Now we can take the Laplace transform (ignoring initial conditions) and solve for
the output, ω:

J dω/dt + Bω = T  →  (Js + B)ω = T  →  ω = [1/(Js + B)] T

Solve for the transfer function and write it in generalized terms:

! s 1 1 1 1 1

T s Js B B J=Bs 1 B ts 1

This results in a rst-order system time constant, t J=B, and a scaling factor of
1=B, allowing us to quickly write the time response as

1 1
 1 B

ot 1  e t 1  e Jt
B B

Finally, we can plot the generalized response, including the scaling factor, as
shown in Figure 11. So without needing to perform an inverse Laplace transform, we
have analyzed the rotary system, developed the transfer function, and plotted the
time response to a step input. Since when dealing with linear systems separate
responses can be added, even complex systems are easily analyzed with the skills
shown thus far. Complex systems always factor into a series of simple systems (as
outlined above) whose individual responses are added to form the total response.
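A short Python sketch of this first-order step response (the J and B values below are made up for illustration):

```python
import math

# Step response of the rotary system: omega(s)/T(s) = (1/B)/(tau*s + 1),
# with tau = J/B.  Hypothetical inertia and damping values:
J, B = 0.5, 2.0
tau = J / B

def omega(t):
    """Unit-torque step response: (1/B)(1 - e^(-t/tau))."""
    return (1 / B) * (1 - math.exp(-t / tau))

# After one time constant the response reaches ~63.2% of its final
# value; after many time constants it settles at 1/B.
print(omega(tau) / (1 / B), omega(10 * tau))
```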
Figure 10 Example: analysis of rotary system.
Figure 11 Example: time response of rst-order rotary system.

3.5 FREQUENCY DOMAIN
Analysis methods in the frequency domain complement the methods already
studied and allow us to model, design, and predict control system behavior. Once we
have obtained transfer functions in the s-domain, it is straightforward to develop a
frequency response curve, or Bode plot as it is commonly called. Another advan-
tage of Bode plots occurs when the process is reversed and a transfer function is
developed from the plot. Whereas it is difficult to develop a model greater than
second order from a step response, higher order models are quite simple using
Bode plots.
The relationship of Bode plots to transfer functions is evident as we describe
the information contained in Bode plots. A transfer function, as defined earlier, is
simply the ratio of a system's output to the system's input. When we construct a
Bode plot, we input a known signal and measure the resulting output after the
transients have decayed. The steady-state magnitude of the output, relative to the
input magnitude, and the phase angle between the output and input are plotted as
functions of the frequency of a constant amplitude sinusoidal input. As we found
earlier, if we have a transfer function and we know the input into the system repre-
sented by the transfer function, then we can solve for the system output. Applying a
sinusoidal input to the system and recording the magnitude and phase relationship of
the response is just a specific case of the same procedure. To more fully illustrate the
concept of working in the frequency domain using Bode plots, let us work through
how a Bode plot is constructed.
The input to the system, in general form, is given by

x(t) = X sin(ωt)

with the resulting steady-state output as

y(t) = Y sin(ωt + φ)
If we wait until all the transients have decayed, then the output of the system
will exhibit the same frequency as the input but differ in magnitude and phase angle.
As the input frequency (and thus the output frequency) is changed, the relationships
between the input/output magnitude and phase angle also change. We can show this
more clearly by remembering that a transfer function G(s) is the ratio of the output
to the input. For now let us use G(s) to describe the output/input relationship
between Y(s) and X(s):

Y(s) = G(s) X(s)
Then X and Y are related in magnitude by |G(s)| and in phase by

φ = ∠G(s) = tan^-1[Imag/Real]
Since s in the Laplace domain is a complex variable, the magnitude and phase
relationship can be shown more clearly using phasors, as in Figure 12.
Since the multiplication of a phasor by the imaginary number j corresponds to a coun-
terclockwise rotation of 90 degrees, we see that j · j = -1, the identity we are already
familiar with. The first imaginary j is a vertical line of magnitude 1; rotating it 90
degrees by multiplication with the other j means it still has a magnitude of 1 but
now lies on the negative real axis, hence equal to -1. When we construct Bode plots we
let s = jω since the entire real term, σ, present in the complex variable s = σ + jω, has decayed
(i.e., steady state). Inserting jω into the transfer functions then allows us to construct
Bode plots of magnitude and phase as ω is increased.
Fortunately, the common factors representing real physical models are quite
simple, and it is seldom necessary to worry about phasors, as we see in the next
section. The key points to remember from this discussion are that the real portions of
the s terms decay out (they are the coefficients on the decaying exponential terms) and
thus each s term, represented now as the imaginary term s = jω, introduces 90 degrees
of phase between the input and the output. With this simple understanding we are
ready to relate the common transfer functions examined earlier to the equivalent
Bode plots in the frequency domain.
3.5.1 Bode Plots
3.5.1.1 Common Bode Plot Factors
Bode plots can be constructed from tests performed on the physical system or from
block diagrams with transfer functions. Bode plots typically consist of two plots,
magnitude (decibels, dB) and phase angle (degrees), plotted versus the log of the
input frequency (rad/sec or Hz). A sample plot describing the common layout is
given in Figure 13. Since the magnitude in the upper plot is given in decibels, the magni-
tude plot is a log-versus-log scale as far as the original data are concerned. It helps to
remember this since the linear y-axis scale (when plotted in dB) may be misleading as
to the actual output/input magnitude ratio. Also, an output-to-input ratio of unity,
when converted to dB, will be plotted as zero on the y axis. This makes it easy to
describe the relative input and output magnitudes since a positive dB value means that

Figure 12 Phasor notation.
Figure 13 Typical layout of Bode plot.
the output signal is greater in magnitude than the input, and a negative dB value implies
that the output signal is of a lesser magnitude than the input signal.
The phase plot, commonly the lower trace, uses a linear y axis to plot the actual
angle versus the log of the frequency. The magnitude and phase plots share the same
frequency axis since the data are generated (analytically or experimentally) together. A
positive phase angle means that the output signal leads the input signal, and vice
versa for a negative phase angle, commonly termed lag. This will become clearer as
example plots are generated.
To begin the process of constructing and using Bode plots, we start with our
existing block diagram and transfer function knowledge and extend it into the fre-
quency domain. The advantages will be evident once we understand the process. As
we have seen thus far, most physical systems can be factored into subsystems, or
factors. Common blocks (gain, integral, first order, and second order) were pre-
sented in the previous section when discussing block diagrams. These same transfer
functions (blocks) are the building blocks for constructing Bode plots. Now for the
advantage: the blocks multiply when connected in series (as when factored and in
block diagrams), but they are plotted on a logarithmic scale when using a Bode plot.
Multiplication becomes addition when using logarithms!

log(G1 · G2 · G3) = log G1 + log G2 + log G3
Constructing a Bode plot is as simple as constructing an individual plot for
each factor (block) and adding the plots together. Thus, the entire block diagram
Bode plot can be constructed by adding the plots of the individual blocks (loops
must first be closed). One note is in order: this requires that we are working with
linear or linearized systems, as have most techniques presented thus far. The process
can also be reversed to determine the original factors used to construct the Bode plot.
This provides a powerful system identification tool to those who understand Bode
plots. Let us move ahead and define the common factors, progressing in the same
order as in Section 3.4.3.2.
Analysis Methods for Dynamic Systems 109

Gain factor, K:

A transfer function representing a gain block produces a Bode plot where the mag-
nitude represents the gain, K, and the phase angle is always zero, as shown using the
equations for magnitude and phase angle given below.

Mag(dB) = 20 log K = -20 log(1/K)

Phase: φ = tan^-1(Im/Re) = tan^-1(0) = 0 degrees

The phase angle is always zero for a gain factor K since no imaginary term is
present and the ratio of the imaginary to the real component is always zero. Figure
14 gives the Bode plot representing the gain factor K.
Since individual effects add, varying the gain K in a system only affects the
vertical position of the magnitude plot for the total system response. A different
value for K does not change anything on the phase angle plot, as the phase angle
contribution is always zero. The example plot given in Figure 14 illustrates this
graphically. When K represents the proportional gain in a controller, we can define
stability parameters that make it easy to find what proportional gain will make the
system go unstable, given a Bode plot for the system. Using phasor notation, the gain
is represented by a line on the horizontal positive x axis (zero phase angle) with a
length (magnitude) K.
Integral:

The integral block produces a line on the magnitude plot having a constant slope of
-20 dB/decade, along with a line on the phase angle plot at a constant -90 degrees.
Remember that s is replaced by jω in the transfer function, and as ω is increased the
magnitude will decrease. The slope tells us that the line falls 20 dB (y scale) for
every decade on the x axis (log of frequency). A decade is between any multiple of 10

Figure 14 Bode plot factors: gain K.



(0.1 to 1, 1 to 10, 5 to 50, etc.). The slope of -20 comes from the equation used to
calculate the dB magnitude of the output/input ratio.

Mag(dB) = 20 log|1/(jω)| = -20 log|jω| = -20 log ω

Phase: φ = tan^-1(Im/Re) = -tan^-1(ω/0) = -90 degrees
The integrator Bode plot is shown in Figure 15. The line will cross the 0 dB line
at ω = 1 rad/sec, since there the magnitude of 1/(jω) is 1 and the log of 1 is 0. Remember
that this is the amount added to the total response for each integrating factor in our
system. The phase angle contribution was explained earlier using phasors, where we
saw that each s (or jω in our case) contributes 90 degrees of phase. That is why two
imaginary numbers multiplied by each other equal -1 (a phasor of magnitude one
along the negative real axis). When the imaginary number is in the denominator, the
angle contribution becomes negative instead of positive (j·j = -1 is still true; it just
rotates clockwise 90 degrees for each s instead of counterclockwise).
Understanding this concept makes the remaining terms easy to describe.
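A short numerical sketch (my own check in Python/NumPy, not from the text) confirms both properties of the integrator factor 1/(jω): the magnitude falls 20 dB per decade and the phase is a constant -90 degrees.

```python
import numpy as np

# Evaluate the integrator factor 1/(jw) at frequencies one decade apart.
w = np.array([0.1, 1.0, 10.0])
G = 1 / (1j * w)

mag_db = 20 * np.log10(np.abs(G))      # 20, 0, -20: a -20 dB/decade slope
phase_deg = np.degrees(np.angle(G))    # -90 degrees at every frequency
print(mag_db, phase_deg)
```

Note that the 0 dB crossing falls at ω = 1 rad/sec, exactly as described above.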

Derivative:

A derivative block, s, will have a positive slope of +20 dB/decade and a constant +90
degrees phase angle. Since jω is now in the numerator, increasing the frequency
increases the magnitude of the factor. As the derivative factor Bode plot in Figure
16 shows, the magnitude plot still crosses 0 dB at ω = 1 rad/sec because the magni-
tude of the factor is still equal to unity at that frequency. The magnitude and phase
angle equations given for the integrator block are the same for the derivative block,
the only exception being that the negative signs become positive (log a = -log(1/a)).
The same explanation as before also holds true regarding the phase angle,
except that now the imaginary j is in the numerator and contributes a positive 90
degrees. In fact, a factor in the numerator is just the horizontal
mirror image of that same factor in the denominator. For all remaining factors this
property is true; each magnitude and phase plot developed for a factor in the

Figure 15 Bode plot factors: integrator.



Figure 16 Bode plot factors: derivative.

numerator, when flipped horizontally with respect to the zero value (dB or degrees),
becomes the same plot as when the factor appears in the denominator. Thus,
when the same factor, appearing once in the numerator and once in the denominator,
is added, the net result is a magnitude line at 0 dB and a phase angle line at 0
degrees. This relationship is also evident in the s-domain using transfer functions:
multiplying an integrator block (1/s) times a derivative block (s) produces a value of
unity, hence a value of 0 dB and 0 degrees. (Adding factors in the frequency domain
is the same as multiplying factors in the s-domain.)
First-order system (lag):

Remembering once again that s is replaced by jω helps us to understand the plots for
this factor. At low frequencies the magnitude of the jω term is very small
compared to the 1, and the overall factor is close to unity. This produces a low
frequency asymptote at 0 dB and a phase angle of zero degrees. As the frequency
increases, the τs term in the denominator begins to dominate, and the factor begins to
look like an integrator with a slope of -20 dB/decade and a phase angle of -90 degrees.
Plotting this on the logarithmic scale produces relatively straight line asymptotic
segments, as shown in Figure 17. Therefore we commonly define low and high
frequency asymptotes, used as straight line Bode plot approximations.
The break point occurs at ω = 1/τ since the contributions from the two terms in
the denominator are equal there. The real magnitude curve is actually 3 dB down at
this point, and at points ω = 0.5/τ and ω = 2/τ (an octave of separation on each
side) the actual curve is 1 dB beneath the asymptotes. To calculate the exact values,
we can use the following magnitude and phase equations, similar to before:
Mag(dB) = 20 log|1/(jωτ + 1)| = -20 log sqrt((ωτ)^2 + 1)

Phase: φ = -tan^-1(ωτ)
Since the phase angle is negative and grows as frequency increases, we call this a
lag system. This means that as the input frequency is increased, the output lags
(follows) the input by an increasing number of degrees (time). The phase angle is

Figure 17 Bode plot factors: first-order lag.


approximated by a line beginning at zero degrees one decade before the breakpoint
frequency 1/τ and ending at -90 degrees one decade after the breakpoint fre-
quency. Both the linear asymptotic line and the actual curve cross the breakpoint
frequency, ω = 1/τ, at -45 degrees (φ = -tan^-1(1) = -45 degrees).
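These exact values are easy to confirm numerically. The sketch below (assuming τ = 1 sec, so the break sits at 1 rad/sec) evaluates the first-order lag at the break frequency and one octave below it:

```python
import numpy as np

# First-order lag 1/(j*w*tau + 1) with an assumed time constant of 1 sec.
tau = 1.0

def lag(w):
    G = 1 / (1j * w * tau + 1)
    return 20 * np.log10(abs(G)), np.degrees(np.angle(G))

mag_break, phase_break = lag(1 / tau)   # about -3 dB and exactly -45 degrees
mag_octave, _ = lag(0.5 / tau)          # about -1 dB below the 0 dB asymptote
print(mag_break, phase_break, mag_octave)
```

The -3.01 dB and -45 degree results at the break, and the roughly 1 dB error one octave away, match the asymptote corrections quoted above.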
First-order system (lead):

When the first-order factor is in the numerator, it adds positive phase angle and the
output leads the input. The magnitude and angle plots are the mirror images of the
first-order lag system, as the equations also reveal:

Mag(dB) = 20 log|jωτ + 1| = 20 log sqrt((ωτ)^2 + 1)

Phase: φ = tan^-1(ωτ)

The magnitude plot still has a low frequency asymptote at 0 dB but now
increases at +20 dB/decade when the input frequency is beyond the break frequency.
The phase angle begins at 0 and ends at +90 degrees, and the output now leads the
input at higher frequencies. These characteristics are shown on the Bode plot for the
first-order lead system in Figure 18.
rst-order lead system in Figure 18.
The same errors (with opposite sign) are found between the low and high fre-
quency asymptotes as discussed for the first-order lag system, and the same explana-
tions are valid. The first-order lag and lead Bode plots are frequently used elements
when designing control systems, and knowing how the phase angle adds or subtracts
allows us to easily design phase lead or lag and PD or PI controllers using the
frequency domain.
Second-order system:

C(s)/R(s) = ωn^2/(s^2 + 2ζωn s + ωn^2)  or  1/((1/ωn^2)s^2 + (2ζ/ωn)s + 1)
Finally, we have the second-order system. As in the step response curves for second-
order systems, we again have multiple curves to reflect the two necessary parameters,

Figure 18 Bode plot factors: first-order lead.

natural frequency and damping ratio. Each line in Figure 19 represents a different
damping ratio. Several analogies can be made from our experience with the previous
factors, only now there are three terms in the denominator, as the magnitude and
phase angle equations show.
Mag(dB) = 20 log|1/((jω/ωn)^2 + (2ζ/ωn)(jω) + 1)|
        = -20 log sqrt((1 - ω^2/ωn^2)^2 + (2ζω/ωn)^2)

Phase: φ = -tan^-1[(2ζω/ωn)/(1 - ω^2/ωn^2)]

At low frequencies, both s (or jω) terms are near zero, the factor is near unity
(ωn^2/ωn^2), and it can be approximated by a horizontal asymptote. At high frequencies
the 1/s^2 behavior dominates and we now have twice the slope, -40 dB/decade, for the
high frequency asymptote. The phase angle similarly begins at zero, but now has j·j
in the high frequency term and ends at -180 degrees, or twice that of a first-order
system. Thus for each 1/(jω) in the highest order term in the denominator, another
90 degrees of lag is added. Therefore a true first-order system can never be more
than 90 degrees out of phase and a second-order system never more than 180
degrees.
Figure 19 also allows us to determine the natural frequency and damping ratio
by inspection. This is developed further when we discuss how to take a Bode plot
and derive the approximate transfer function from it. The intersection of the
low and high frequency asymptotes occurs near the natural frequency (as does the
peak), and if any peak in the magnitude plot exists, the system damping ratio is less
than 0.707. At the natural frequency (i.e., the breakpoint) the phase angle is always
-90 degrees, regardless of the damping ratio. If the system is overdamped, it factors
into two first-order systems and can be plotted as those two systems. Thus the second-
order Bode plots only show the family of curves where ζ ≤ 1.
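These observations can be checked numerically. The sketch below (an illustrative check assuming ωn = 1 rad/sec; the damping values are my own choices) shows the resonant peak appearing only for small ζ and the -90 degree phase at the natural frequency:

```python
import numpy as np

# Second-order factor 1/((jw/wn)^2 + 2*zeta*(jw/wn) + 1), with wn = 1 rad/sec.
wn = 1.0
w = np.logspace(-2, 2, 2000)

def mag_db(zeta):
    G = 1 / ((1j * w / wn) ** 2 + 2 * zeta * (1j * w / wn) + 1)
    return 20 * np.log10(np.abs(G))

peak_light = mag_db(0.2).max()   # clear resonant peak (zeta < 0.707)
peak_heavy = mag_db(0.8).max()   # no peak above the low-frequency level

# Phase at w = wn is -90 degrees regardless of the damping ratio.
G_wn = 1 / (-1 + 2 * 0.2 * 1j + 1)   # (jw/wn)^2 = -1 at w = wn
phase_wn = np.degrees(np.angle(G_wn))
print(peak_light, peak_heavy, phase_wn)
```

For ζ = 0.2 the peak rises about 8 dB above the low-frequency asymptote, while for ζ = 0.8 the magnitude never exceeds 0 dB.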
Although not explicitly shown here, with the second-order term appearing in
the numerator we get the same low frequency asymptote, a slope of +40 dB/decade
on the high frequency asymptote, and a phase angle starting at zero and ending at

Figure 19 Bode plot factors: second-order system.

+180 degrees. This is exactly what we expect after seeing how the other Bode plot
factors appear in the denominator and the numerator.
3.5.1.2 Constructing Bode Plots from Transfer Functions
When we reach the point of designing controllers in the frequency domain, we must
often plot the different factors together to construct Bode plots representing the
combined controller and physical system. In this section we look at some guidelines
to make the procedure easier. The most fundamental approach is to develop a Bode
plot for every factor in our controller and physical system and add them all together
when we are finished. In general this is the recommended procedure. There are also
guidelines that can usually speed up the process and, at a minimum,
provide useful checks when we are finished. Several guidelines are listed below:
• To find the low-frequency asymptote, use the FVT to determine the steady-
state gain of the whole transfer function and convert it to dB. The FVT,
when applied to the whole system, gives us the equivalent steady-state gain
between the system output and input. If the gain is greater than one, the
output level will exceed the input level. This gain may be comprised of
several different gains, some electronic and some inherent in
the physical system. Converting the gain into decibels should give us the
value of the low frequency asymptote found by adding the individual Bode
plots.
• If we have one or more integrators in our system, the gain approaches
infinity using the FVT. Each integrator in the system adds -20 dB/decade
of slope to the low frequency asymptote. Therefore if we have a low fre-
quency asymptote with a slope of -40 dB/decade, it means we should have
two integrators in our system (1/s^2).
• Recognize that each power of s in the numerator ultimately adds +90 degrees
of phase and a high frequency asymptote contribution of +20 dB/decade,
and the opposite for each power of s in the denominator. For example, a
third-order numerator and a fourth-order denominator appear as a first-
order system at high frequencies, with a final high frequency asymptote of
-20 dB/decade and a phase angle of -90 degrees. Therefore,
1. The high frequency asymptote will be -20 dB/decade times (n - m), where
n is the order of the denominator and m is the order of the numerator.
2. The high frequency final phase angle achieved will be -90 degrees times
(n - m).
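A quick numerical sketch (the polynomial coefficients are made up purely for illustration) confirms rules 1 and 2 for a third-order-over-fourth-order transfer function, where n - m = 1:

```python
import numpy as np

# Hypothetical transfer function: 3rd-order numerator, 4th-order denominator.
num = [1.0, 2.0, 2.0, 1.0]
den = [1.0, 3.0, 4.0, 3.0, 1.0]

def G(w):
    s = 1j * w
    return np.polyval(num, s) / np.polyval(den, s)

w1, w2 = 1e4, 1e5   # one decade, well above all break frequencies
drop_db = 20 * np.log10(abs(G(w2)) / abs(G(w1)))   # about -20 dB/decade
final_phase = np.degrees(np.angle(G(w2)))          # about -90 degrees
print(drop_db, final_phase)
```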
Even without constructing the individual Bode plots, much of the overall system
can be understood in the frequency domain by applying these simple guidelines. As
discussed later, the process can also be reversed and a Bode plot used to derive
the transfer function for the system.

EXAMPLE 3.13
Let us now take a transfer function and develop the approximate Bode plot to
illustrate the principles learned in this section. We use the transfer function below,
which can be broken down into four terms: a gain K, an integrator, a first-order lead
term, and a first-order lag term. The Bode plot will be constructed using the approx-
imate straight line asymptotes for each term.

G(s) = 10(s + 1) / (s(0.1s + 1))
To begin, let us develop a simple table showing the straight line magni-
tude and angle approximations for each term (Table 2). The gain factor is plotted
as a line of constant magnitude at 20 dB (= 20 log 10), and its phase angle
contribution is zero for all frequencies. The integrator has a constant slope of
-20 dB/decade and crosses 0 dB at 1 rad/sec, as shown in Table 2. Its angle
contribution is always -90 degrees. The first-order lead term in the numerator
has its break frequency at 1 rad/sec (τ = 1 sec; the break is at 1/τ). It is a horizontal
line at 0 dB before 1 rad/sec and has a slope of +20 dB/decade after the break
frequency. Its phase angle begins at 0 degrees one decade before the break and
ends at +90 degrees one decade after the break frequency. Finally, the first-order

Table 2 Example: Contribution of Individual Bode Plot Factors

Magnitude data (all in dB)

ω, rad/s   Gain K, 20 log(10)   Integrator, 1/s   1st Lead, s+1   1st Lag, 1/(0.1s+1)   Total
0.1        20                   +20               0               0                      40
1          20                   0                 0               0                      20
10         20                   -20               +20             0                      20
100        20                   -40               +40             -20                     0

Phase angle data (all in degrees)

ω, rad/s   Gain K   Integrator   1st Lead   1st Lag   Total
0.1        0        -90          0          0         -90
1          0        -90          +45        0         -45
10         0        -90          +90        -45       -45
100        0        -90          +90        -90       -90

lag term has a time constant τ = 0.1 sec and thus a break frequency at 10 rad/sec.
After its break frequency, however, it has a slope of -20 dB/decade. Its angle
contribution is also negative, varying from 0 to -90 degrees one decade before and
after the break frequency of 10 rad/sec.
Once the individual magnitude and phase angle contributions are calculated,
they can simply be added together to form the final magnitude and phase angle plot
for the system. The Total column for the magnitude and phase angle data, shown in
Table 2, thus defines the final Bode plot values for the whole system. Graphically,
each individual term and the final Bode plot are plotted in Figure 20.
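The same totals can be checked against the exact frequency response. The sketch below (my own Python/NumPy check) evaluates G(s) = 10(s + 1)/(s(0.1s + 1)) at the table frequencies; the exact values land near, but not exactly on, the straight-line totals, since the table uses asymptotic approximations:

```python
import numpy as np

# Exact frequency response of the example system at the Table 2 frequencies.
w = np.array([0.1, 1.0, 10.0, 100.0])
s = 1j * w
G = 10 * (s + 1) / (s * (0.1 * s + 1))

mag_db = 20 * np.log10(np.abs(G))      # approx 40.0, 23.0, 17.0, 0.0 dB
phase_deg = np.degrees(np.angle(G))    # approx -84.9, -50.7, -50.7, -84.9 deg
print(mag_db, phase_deg)               # asymptotic totals were 40, 20, 20, 0 dB
                                       # and -90, -45, -45, -90 degrees
```

The roughly 3 dB and 5 degree discrepancies near the break frequencies are exactly the asymptote errors discussed earlier for first-order factors.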
Checking our final plot using the guidelines above, we see that our low fre-
quency asymptote has a slope of -20 dB/decade, implying that we have one inte-
grator in our system. The high frequency asymptote also has a slope of -20 dB/
decade, meaning that the order of the denominator is one greater than the order of
the numerator. Since the phase angle does increase over some range of frequencies on
the final plot, we also know that we have at least one s term in the numerator adding
positive phase angle.
In this case, knowing the overall system transfer function that we started with,
we see that all the quick checks support what is actually the case. Our system has one
integrator, a term in the numerator, and a denominator one order greater than the
numerator (an order of two in the denominator versus an order of one in the
numerator). When we reverse engineer a system transfer function from an
existing Bode plot in later sections, we will see that these guidelines form the starting
point for the procedure.
To conclude this section: all transfer functions can be factored into the
terms described and each term plotted separately on the Bode plot. The final plot
is then simply the sum of all the individual plots, both magnitude and phase angle.

Figure 20 Example: constructing final Bode plot from separate terms.

3.5.1.3 Bode Plot Parameters


Bode plots are frequently discussed using terms like bandwidth, gain margin, phase
margin, break frequency, and steady-state gain. Let us quickly define each term here;
each is explained in greater detail in the text that follows.

Bandwidth frequency (definition varies, but here are the most common)
• Frequency at which the magnitude is -3 dB relative to the low frequency
asymptote magnitude.
• Frequency at which the magnitude is -3 dB relative to the maximum dB
magnitude reached. In the case of second-order factors where ζ ≤ 0.35 and
the peak exceeds 3 dB, the bandwidth is often considered the range of
frequencies corresponding to the -3 dB level before and after the peak
magnitude.
• Frequency at which the phase angle passes through -45 degrees.
• Frequency at which the phase angle passes through -90 degrees.
Gain margin (open loop)
• The margin (in dB) by which the magnitude plot is below the 0 dB line when
the phase angle is -180 degrees (below the 0 dB line is positive margin for a
stable system).
Phase margin (open loop)
• The margin (in degrees) by which the phase angle is above -180 degrees at
the crossover frequency (where the magnitude plot crosses the 0 dB line).
Break frequency
• The frequency at which a first-order system is attenuated by -3 dB or a
second-order system by -6 dB. Corresponds to the intersection of the low
and high frequency asymptotes.
Steady-state gain
• The value of the low frequency asymptote as the frequency is extended to
zero. Hence an integrator, with its constant -20 dB/decade slope, has infinite
gain as ω goes to zero.
The different methods of measuring bandwidth are shown in Figure 21. The
interesting thing to notice is that the different methods clearly result in different values.
To understand why this happens, we need to first recognize that bandwidth is a
subjective measurement. The term bandwidth, by definition, is simply a range of
frequencies within a band. Couple this definition with the fact that when we think of
the bandwidth frequency, we associate it with a frequency value beyond which the
system no longer performs as expected, and we see where the confusion enters.
Consider the most common bandwidth definition: 3 dB below the low
frequency asymptote dB level. We reach a level of -3 dB when the output level is
0.707 of the input level. This is significant because it corresponds to the level where
the input power is one half of the maximum input power. That is, the input is only
delivering 1/2 of the power at the bandwidth frequency that it is at lower frequencies.
The concern in all this is as follows: different manufacturers may use different
standards, and it becomes difficult to honestly compare two systems. Does using the
-3 dB criterion mean that a system is useless after the input frequency exceeds the
bandwidth frequency? Not at all; as we said, it is a criterion that seems logical based
on the half-power condition. The other criteria are no better or worse (although
some systems may benefit from one criterion more than others, even when the other
systems are compared using the same one).
The conclusion to make is this: when making decisions based on published
bandwidth frequencies, it is wise to ask what criterion was used, to make a fair
comparison.
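As a numerical illustration of the most common criterion (using an assumed first-order system, not one from the text), the -3 dB bandwidth can be located by scanning the magnitude response:

```python
import numpy as np

# Assumed first-order lag with tau = 0.5 sec; its -3 dB bandwidth should
# land at the break frequency, 1/tau = 2 rad/sec.
tau = 0.5
w = np.logspace(-2, 3, 100000)
mag_db = 20 * np.log10(np.abs(1 / (1j * w * tau + 1)))

low_freq_db = mag_db[0]                        # low-frequency asymptote level
bw = w[np.argmax(mag_db <= low_freq_db - 3)]   # first point 3 dB below it
print(bw)                                      # close to 2 rad/sec
```

Swapping in a different criterion (a phase threshold, or 3 dB below the peak) would return a different number for the same system, which is the comparison hazard described above.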

3.5.2 Nyquist Plots (Polar Plots)


This section defines the layout of Nyquist plots (or polar plots, as they are sometimes
called) and how they relate to the Bode plots examined previously. As we will see, a
Nyquist plot can be constructed from a Bode plot without any additional information;
in fact, whatever analysis or design technique can be done on a Bode plot has a similar
counterpart applicable to a Nyquist plot. The advantage of using a Nyquist plot is
that both the magnitude and phase angle relationships are shown on one plot (versus

Figure 21 Bode plot bandwidth definitions.

separate plots for a Bode plot). The disadvantage is that when plotting a Nyquist plot
using data from a Bode plot, not all data are used, and the procedure is difficult to
reverse unless enough frequencies are labeled on the Nyquist plot during the plotting
procedure. Since the data on a Nyquist plot do not explicitly show frequency, the
contribution of each individual factor is not nearly as clear on a Nyquist plot as it
is on a Bode plot.
For these reasons, and since Bode plots are more common, this section only
demonstrates the relationship between Nyquist and Bode plots. The majority of our
design work in the frequency domain in this text will continue to be done using
Bode plots.
Perhaps the simplest way to illustrate how a Nyquist plot relates to a Bode plot
is to begin with a Bode plot and construct the equivalent Nyquist plot. Before we
begin, let us quickly define the setup of the axes on the Nyquist plot. The basis for a

Nyquist plot has already been established in Figure 12 when discussing phasors. As
we recall, a phasor has both a magnitude and phase angle plotted on xy axes. The x
axis represents the real portion of the phasor and the y axis the imaginary portion. In
terms of our phasor plot, a magnitude of one and a phase angle of zero is the vector
from the origin, lying on the positive real axis, with a length of one.
To construct a Nyquist plot, we simply start at our lowest frequency plotted on
the Bode plot, record the magnitude and phase angle, convert the magnitude from dB
to linear, and plot the point at the tip of the vector with that magnitude and phase
angle. As we sweep through the frequencies from low to high, we continue to plot
these points until the curve is dened. The end result is our Nyquist, or polar, plot.

EXAMPLE 3.14
To step through this procedure, let us use the Bode plot used to define bandwidth, as
shown in Figure 21, which plots an underdamped second-order system. To begin,
we record some magnitude and phase angles, as given in Table 3, at a sampling
of frequencies from low to high.
Once we have the data recorded and the magnitude linearly represents the
output/input magnitude ratio, we can proceed to develop the Nyquist plot given
in Figure 22. The first point, plotted from data measured at a frequency of 0.1 rad/sec,
has a magnitude ratio of 1.01 and a phase angle of -2.3 degrees. This is essentially a
line of length 1 along the positive real axis, as shown on the plot. At a frequency of
0.8 rad/sec, the curve passes through a point a distance of 2.08 from the origin and at
an angle of -41.6 degrees.
The remaining points are plotted in the same way. The magnitude defines the
distance from the origin and the phase angle defines the orientation. Since our phase
angles are negative (as is common), our plot progresses clockwise as we increase the
frequency. By the time we approach the higher frequencies, the magnitude is near
zero and the phase angle approaches -180 degrees. We are approaching the origin
from the left as ω → ∞.
If we understand the procedure, we should see that it is also possible to take a
Nyquist plot and generate a Bode plot, as long as we are given enough frequency
points along the curve. In general, though, we lose information by going from a Bode
plot to a Nyquist plot. Many computer programs, when given the original magnitude
and phase angle data, are capable of generating the equivalent Bode and Nyquist
plots.

Table 3 Example: Magnitude and Phase Values Recorded from Bode Plot

Frequency   Magnitude   Magnitude        Phase Angle
(rad/sec)   (dB)        (linear scale)   (degrees)
0.10          0.08      1.01               -2.31
0.80          6.35      2.08              -41.63
0.90          7.81      2.46              -62.18
1.00          7.96      2.50              -90.00
1.20          3.73      1.54             -132.51
10.00       -39.92      0.01             -177.69
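The conversion used in this example is only a change of coordinates. The sketch below (Python/NumPy, using a few of the Table 3 rows) turns (dB, degrees) pairs into the complex-plane points of the kind plotted in Figure 22:

```python
import numpy as np

# (dB, degrees) pairs taken from Table 3.
mag_db = np.array([0.08, 6.35, 7.96, -39.92])
phase_deg = np.array([-2.31, -41.63, -90.00, -177.69])

mag = 10 ** (mag_db / 20)                          # undo the dB conversion
points = mag * np.exp(1j * np.radians(phase_deg))  # polar -> complex plane
print(np.round(mag, 2))      # 1.01, 2.08, 2.5, 0.01: the linear-scale column
print(np.round(points, 2))
```

The third point lands at roughly -2.5j (magnitude 2.5 at -90 degrees), straight down the negative imaginary axis, matching the clockwise progression described above.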

Figure 22 Example: Nyquist plot construction from Bode plot data.

3.5.3 Constructing Bode Plots from Experimental Data


Because of the desirable information contained in Bode plots, they are often con-
structed when testing different products. The resulting plots provide valuable infor-
mation when designing robust control systems. The types of products for which
Bode plots have been constructed vary widely, from electrical to mechanical to
electrohydraulic to combinations of these. Items such as hydraulic directional con-
trol valves are quite complex, and a complete analytical model is time consuming to
construct and confirm. In cases like this, Bode plots provide a much simpler solution.
This section examines some of the advantages, disadvantages, and guidelines for
developing and using Bode plots in product design, testing, and evaluation.
Speaking generally, Bode plots have several distinct advantages and disadvantages
as compared with other methods.

Advantages
• Easier on equipment than step responses
  - Step responses tend to saturate actuators and components
• More information available from the test
  - Allows higher order system models to be constructed
Disadvantages
• More difficult experiment
  - Takes more time to construct a Bode plot than a step response
• More difficult analysis
  - The design engineer needs to understand the resulting Bode plot

As mentioned in the previous section, Bode plots graph the relationship
between the input and output magnitude and phase angle as a function of input fre-
quency. What follows here is a brief description of how to accomplish this in practice.
The input signal is a sinusoidal waveform of fixed amplitude whose frequency
is varied. At various frequencies the plots are captured and analyzed for amplitude

ratios and phase angles. Thus each point used to construct a Bode plot requires a
new setting in the test fixture (generally just the input frequency). It is important to
wait for all transients to decay after changing the frequency. Once the transients have
decayed, the result will be a graph similar to the one given in Figure 23. This plot will
yield two data points, a magnitude value and a phase angle value (at one fre-
quency), used to construct the Bode plot. The plot is typical of most physical
systems, since the output lags the input (higher order in the denominator) and the
output amplitude is less than the input amplitude.

EXAMPLE 3.15
The data points required for the development of the Bode plot are found as follows
from the plot in Figure 23:

Test frequency: ω = 2π rad/2 sec = π rad/sec (plotted on the horizontal log scale)

dB Magnitude: M(dB) = 20 log(|Y|/|X|) = 20 log(0.5/1.0) = -6.0 dB
(peak-to-peak values may also be used for Y/X)

Phase angle: φ = -(360 degrees/2 sec)(0.25 sec lag) = -45 degrees

These points (-6 dB on the magnitude plot and -45 degrees on the phase plot,
both at a frequency of ω = π rad/sec) would then be plotted on the Bode plot and the
frequency changed to another value. The process is repeated until enough points are
available to generate smooth curves. Remember that we plot the data on a loga-
rithmic scale, so for the most efficient use of our time we should space our frequencies
accordingly.
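The arithmetic in this example is simple enough to script. The sketch below (values taken from the Figure 23 description) turns the measured waveform parameters into one Bode plot point:

```python
import numpy as np

# Measured waveform parameters from the example: a 2-sec-period input of
# amplitude 1.0, an output of amplitude 0.5 lagging the input by 0.25 sec.
period = 2.0
in_amp, out_amp = 1.0, 0.5
time_lag = 0.25

w = 2 * np.pi / period                    # test frequency: pi rad/sec
mag_db = 20 * np.log10(out_amp / in_amp)  # about -6.0 dB
phase_deg = -(360.0 / period) * time_lag  # -45 degrees
print(w, mag_db, phase_deg)
```

Wrapping this in a loop over recorded waveforms, one per test frequency, would generate the full set of points for the experimental Bode plot.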
Now that we have our Bode plot completed, it can be used to develop a transfer
function representing the system that was tested. Whereas models resulting from step
response curves are limited to first- or second-order systems, Bode plots may be
used to develop higher order models. This is a large advantage of Bode plots when
Figure 23 Input and output waveforms: Bode plots.



compared with step response plots when the goal is developing system models. The
following steps may be used as guidelines for determining open loop system models
from Bode plots.
Developing the transfer function from the Bode plot proceeds as follows:
Step 1: Approximate all the asymptotes on the Bode plot using straight lines.
Of special interest are the low and high frequency asymptotes.
Step 2: If the low frequency asymptote is horizontal, it is a type 0 system (no
integrators) and the static steady-state gain is the magnitude of the low
frequency asymptote. If the low frequency asymptote has a slope of -20
dB/decade, then it is a type 1 system and there is one integrator in the system.
If we remember how each factor contributes, it is fairly easy to recog-
nize the pattern and the effect that each factor adds to the total.
Step 3: If the high frequency asymptote has a slope of -20 dB/decade, the order
of the denominator is one greater than the order of the numerator. If the
slope is -40 dB/decade, the difference is two orders between the denominator
and numerator. To estimate the order of the numerator, examine the phase
angle plot. If there is a factor in the numerator, the slope at some point
should be positive. In addition, the magnitude plot should also exhibit a
positive slope. If enough distance (in frequency) separates the factors, each
order in the numerator will have a +20 dB/decade portion showing in the
magnitude plot and +90 degrees of phase added to the total phase angle. This
is seldom the case, since factors overlap and judgment calls must be made
based on experience and knowledge of the system. Drawing all the clear
asymptotic (straight line) segments usually helps to fill in the missing gaps.
Step 4: With the powers of the numerator and denominator now determined,
see if any peaks occur in the magnitude plot. If so, one factor is a second-
order system whose natural frequency and damping ratio can be approxi-
mated. The magnitude of the peak, relative to the lower frequency asymptote
preceding it, determines the damping ratio as

Mp = 1/(2ζ sqrt(1 - ζ^2))    (expressed in dB as 20 log Mp)

Mp is the distance in decibels that the peak value is above the horizontal
asymptote preceding the peak. As the damping ratio goes to zero, the peak
magnitude goes to infinity and the system becomes unstable. To calculate the
damping ratio, the peak magnitude is used in the same way that the percent
overshoot was used for a step response in the time domain. The graph
showing Mp (in dB) versus the damping ratio is given in Figure 24. The
peak occurs close to the damped natural frequency, ωd, which is shifted from
the natural frequency by the damping ratio: ωd = ωn sqrt(1 - ζ^2). The
natural frequency can easily be found independent of the damping ratio by
extending the low and high frequency asymptotes of the second-order system
in question. The intersection of the two asymptotes occurs at the natural
frequency of the system.
Step 5: Fill in the remaining first-order factors by locating each break in the
asymptotes. Each break corresponds to the time constant for that factor.

Figure 24 Peak magnitude (dB) versus damping ratio (second-order system).

Look for shifts of 20 dB/decade to determine where the first-order factors
are in your system. If the asymptote changes from -20 dB/decade to -40 dB/decade
(without a peak), then there likely is a first-order break frequency located at
the intersection of the two asymptotes defining the shift in slopes.
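Step 4 can be automated with a small numerical inversion. The helper below is my own sketch (not from the text): it searches the peak-magnitude relation Mp = 1/(2ζ sqrt(1 - ζ^2)) for the damping ratio matching a measured peak height in dB:

```python
import numpy as np

def zeta_from_peak_db(mp_db):
    """Estimate the damping ratio from a resonant peak height in dB (sketch)."""
    mp = 10 ** (mp_db / 20)                    # peak as a linear ratio
    zetas = np.linspace(1e-4, 0.7071, 200000)  # peaks only exist in this range
    mp_curve = 1 / (2 * zetas * np.sqrt(1 - zetas ** 2))
    return zetas[np.argmin(np.abs(mp_curve - mp))]

zeta_est = zeta_from_peak_db(8.14)   # a peak about 8.14 dB above the asymptote
print(zeta_est)                      # close to 0.2
```

A dense grid search is used instead of a closed-form inverse purely for clarity; the relation is monotone over the valid range, so the nearest grid point is a reliable estimate.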

Using Bode plots to approximate systems is a powerful method and allows a
much better understanding than simple step response plots. If, when designing a
control system, we are able to obtain Bode plots for the critical subsystems and
components, we can develop models and simulate the system with much greater
accuracy.
3.5.3.1 Nonminimum Phase Systems
A final note should be made on minimum and nonminimum phase systems, as it might
arise during the above process that the magnitude and phase angle values do not
agree (i.e., a first-order system not having both a high frequency asymptote of −20
dB/decade and a final phase angle of −90 degrees). A minimum phase system will
have the high frequency asymptote slopes correctly correspond with the final phase
angle. For example, a second-order system will have a high frequency asymptote of
−40 dB/decade to correspond with a final phase angle of −180 degrees. If the phase
lag is greater than the slope of the high frequency asymptote implies, we
have a nonminimum phase system. This occurs when we have a delay in our system.
Delays add additional phase angle (lag) without affecting the magnitude plot. This is
important to know since delays significantly degrade the performance of control
systems.
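The delay claim is easy to check numerically: a pure delay of T seconds has the transfer function e^(−sT), whose magnitude is exactly 1 at every frequency while it contributes −ωT radians of phase lag. A minimal sketch (the delay and frequency values are arbitrary):

```python
import cmath
import math

def delay_response(T, w):
    """Frequency response of a pure delay, e^(-jwT)."""
    return cmath.exp(-1j * w * T)

T = 0.1    # 100 ms delay (arbitrary example value)
w = 10.0   # rad/s
H = delay_response(T, w)
print(abs(H))                        # 1.0: the magnitude plot is unchanged
print(math.degrees(cmath.phase(H)))  # about -57.3 degrees of added lag
```

Because the lag grows linearly with frequency while the magnitude stays at 0 dB, the measured phase eventually exceeds what the asymptote slope predicts, which is exactly the nonminimum phase signature described above.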
3.5.3.2 Signal Amplitudes and Ranges
In the case that we proceed to develop our own Bode plots in the laboratory, a few
final comments on signal amplitudes and appropriate ranges are in order. Ideally, the
expected operating range is known before the experimental Bode plots are devel-
oped. In this case the signal amplitude should be centered with the peak-to-peak
Analysis Methods for Dynamic Systems 125

amplitudes remaining within the expected operating range. In some components with
large amounts of deadband, like proportional directional control valves, the test
should take place in the active region of the valve. The data are relatively mean-
ingless when performed in the deadband of this type of component. For some
components the frequency response curves change little throughout the full operat-
ing region, while others may change significantly (linear versus nonlinear systems). A
general rule of thumb is to center the input signal offset in the middle of the com-
ponent's active range and vary the amplitude ±25% of the active range. Chapter 12
discusses electrohydraulic valves and deadband in much more detail.

3.6 STATE SPACE

State space analysis methods are fairly simple, and the same procedure may be used
for large, nonlinear, multiple input multiple output (MIMO) systems. This is one of
the primary advantages in using state space notation when representing physical
systems. In general, state space equations represent a higher order differential equa-
tion with a system of first-order differential equations. When using linear algebra
identities to analyze state space systems, the equations must first be linearized as
shown earlier. Both nonlinear and linear systems are easily analyzed using numerical
methods of integration. Programs like Matlab have multiple routines built in for
numerical integration. This section presents methods of handling both linear and
nonlinear state space systems.

3.6.1 Linear Matrix Methods and Transfer Functions

When the state equations are linear and time invariant, it is possible to analyze the
system using Laplace transforms. The basic procedure is to take the Laplace trans-
form of the state space matrices using linear algebra identities. The result leads to a
transfer function capable of being solved using the inverse Laplace transforms
already examined or root locus techniques presented in later sections. Let us now
step through the process for obtaining a transfer function from a state space matrix
representation.
The original state space matrices in general form are given as

dx/dt = Ax + Bu

and

y = Cx + Du

Now take the Laplace transform of each equation:

s X(s) − x(0) = A X(s) + B U(s)

and

Y(s) = C X(s) + D U(s)

Solve the first equation for X(s), assuming zero initial conditions, x(0) = 0:

s X(s) = A X(s) + B U(s)
(sI − A) X(s) = B U(s)

If we premultiply each side with (sI − A)^−1, we end up with

X(s) = (sI − A)^−1 B U(s)

This is the output of our states, so substitute into the output equation for Y:

Y(s) = C (sI − A)^−1 B U(s) + D U(s)   or   Y(s) = [C (sI − A)^−1 B + D] U(s)

The transfer function is simply the output divided by the input, Y(s)/U(s), so

G(s) = C (sI − A)^−1 B + D
It is relatively straightforward to get a transfer function from our state space
matrices, the most difficult part being matrix inversion. For small systems it is
possible to invert the matrix by hand, where the inverse of a matrix is given by

M^−1 = Adjoint(M)/|M|,   where |M| is the determinant of M

More information on matrices is given in Appendix A.

EXAMPLE 3.16
To illustrate this procedure with an example, let us use our mass-spring-damper state
space model already developed and convert it to a transfer function. Recalling the
mass-spring-damper system and the state matrices already developed earlier:

[ẋ1]   [  0      1  ] [x1]   [  0  ]
[ẋ2] = [ −k/m  −b/m ] [x2] + [ 1/m ] u    and    y = [1  0] [x1; x2] + [0] u

Then:

G(s) = C (sI − A)^−1 B + D

G(s) = [1  0] { [s  0; 0  s] − [0  1; −k/m  −b/m] }^−1 [0; 1/m] + 0

G(s) = [1  0] [s  −1; k/m  s + b/m]^−1 [0; 1/m]

To invert the matrix, we take the adjoint matrix divided by the determinant:

[s  −1; k/m  s + b/m]^−1 = [s + b/m  1; −k/m  s] / (s² + (b/m)s + k/m)

Simplifying:

G(s) = [1  0] [s + b/m  1; −k/m  s] [0; 1/m] / (s² + (b/m)s + k/m)

G(s) = (1/m) / (s² + (b/m)s + k/m)

And finally, we get the transfer function G(s):

G(s) = 1 / (m s² + b s + k)
It should not be a surprise to find that this is exactly what was developed earlier as
the transfer function (from the differential equations) for the mass-spring-damper
system. The formal approach is seldom needed in practice; as long as you understand
the process and what it represents, many computer programs have been developed to
handle these chores for you. Many calculators produced now will perform these
tasks.
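The conversion can also be spot-checked numerically by evaluating C(sI − A)^−1 B + D at a trial value of s and comparing it against 1/(ms² + bs + k). A sketch using NumPy; the parameter values and the trial point are arbitrary:

```python
import numpy as np

m, b, k = 2.0, 3.0, 4.0                      # arbitrary example parameters
A = np.array([[0.0, 1.0], [-k/m, -b/m]])
B = np.array([[0.0], [1.0/m]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

s = 1.0 + 2.0j                               # any point that is not an eigenvalue of A
# G(s) = C (sI - A)^-1 B + D, evaluated at the trial point
G_ss = (C @ np.linalg.inv(s*np.eye(2) - A) @ B + D)[0, 0]
G_tf = 1.0 / (m*s**2 + b*s + k)
print(abs(G_ss - G_tf))                      # essentially zero
```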

3.6.2 Eigenvalues
One important result from the state space to transfer function conversion is realizing
that the characteristic equation remains the same and recognizing where it came
from during the transformation. Remember that when we took the determinant of
the (sI − A) matrix, it resulted in a polynomial in the s-domain. Looking at it closer,
we see that we actually developed the characteristic equation for the mass-spring-
damper system. This determinant is sometimes called the characteristic polynomial,
whose roots are defined as the eigenvalues of the system. Thus, eigenvalues are
identical to the poles in the s-plane arising from the roots of the characteristic equation.
We can treat the eigenvalues the same as our system poles and plot them in the
s-plane, examine the response characteristics (time constant, natural frequency, and
damping ratio), and predict system behavior.
Eigenvectors are sometimes calculated by substituting the eigenvalues, λ, back
into the matrix equation (λI − A)x = 0 and solving for the relationships between the
states that satisfy the matrix equation. This is more common in the field of vibrations,
where we discuss modes of vibration corresponding to eigenvectors.
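This equivalence is easy to confirm: the eigenvalues of the mass-spring-damper A matrix can be compared with the roots of ms² + bs + k = 0. A sketch where the values m = 1, b = 2, k = 5 are arbitrary and give poles at −1 ± 2j:

```python
import numpy as np

m, b, k = 1.0, 2.0, 5.0                          # arbitrary example values
A = np.array([[0.0, 1.0], [-k/m, -b/m]])

eigs = np.sort_complex(np.linalg.eigvals(A))     # eigenvalues of A
poles = np.sort_complex(np.roots([m, b, k]))     # roots of ms^2 + bs + k
print(eigs)    # [-1.-2.j  -1.+2.j]
print(poles)   # same values: the eigenvalues are the system poles
```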

3.6.3 Computational Methods

A real advantage of state space notation is in the ease of simulating large nonlinear
systems. Even linearizing the equations and converting them to a transfer function
gets tedious for higher order systems. Since the general notation is a list of first-order
differential equations (as functions of the states themselves), at any point in time we
have the slope (derivative) of each state with which to project the value in the future.
The computer routines are almost identical whether the system is second-order (two
state equations) or tenth order. The general notation has already been given as

dX(t)/dt = f(X(t), U(t), t)

X(t0) = X0,  initial values

This notation allows for multiple inputs and outputs, time varying functions, and
nonlinearities. It also works fine for LTI, single input single output systems. Two
basic methods are common for obtaining the time solution to the differential equa-
tions: one-step methods and multistep methods. The most basic and familiar one-
step method is Euler's. If constant time intervals, h, are used, the system is approxi-
mated by

x(t + h) = x(t) + h · (dx/dt)|t
The next value is simply the current value added to the time step multiplied by
the slope of the function at the current time. This method is fast and simple to
program but requires very small time steps to consistently obtain accurate results.
The net result is a simulation that requires more time to run than more efficient
routines like Runge-Kutta. The Runge-Kutta routine has been one of the mainstays
for numerical integration. It retains the important feature of only requiring one prior
value of x(t) to advance the solution by time h. The basic routine allows for higher
order approximations when estimating the slope. The common fourth-order approx-
imation estimates four slopes using the equations below and weights the average to
obtain the solution. While each step requires more processing than Euler's, the steps
can be much larger, thus saving on overall computational time. For the interval from
tk to tk+1:

Slope 1:  s1 = dx/dt evaluated at (xk, tk)
Slope 2:  s2 = dx/dt evaluated at (xk + s1·h/2, tk + h/2)
Slope 3:  s3 = dx/dt evaluated at (xk + s2·h/2, tk + h/2)
Slope 4:  s4 = dx/dt evaluated at (xk + s3·h, tk + h)

and finally, to calculate the value(s) at the next time step:

xk+1 = x(tk + h) = xk + (h/6)(s1 + 2s2 + 2s3 + s4)
The fourth-order model presented here has a truncation error of order h⁴ and
requires four evaluations of the derivative (state equations) per step. For many
problems, this represents a reasonable trade-off between accuracy and computing
efficiency.
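A minimal implementation of the two routines just described, tested on dx/dt = −x (exact solution x(t) = e^(−t)); the step size and test system are arbitrary choices for illustration:

```python
import math

def euler_step(f, x, t, h):
    """One Euler step: x(t+h) = x(t) + h*f(x, t)."""
    return x + h * f(x, t)

def rk4_step(f, x, t, h):
    """One fourth-order Runge-Kutta step using four slope estimates."""
    s1 = f(x, t)
    s2 = f(x + s1*h/2, t + h/2)
    s3 = f(x + s2*h/2, t + h/2)
    s4 = f(x + s3*h, t + h)
    return x + (h/6) * (s1 + 2*s2 + 2*s3 + s4)

f = lambda x, t: -x        # test system: dx/dt = -x, x(0) = 1
h, steps = 0.1, 10         # integrate from t = 0 to t = 1
x_e = x_r = 1.0
for i in range(steps):
    x_e = euler_step(f, x_e, i*h, h)
    x_r = rk4_step(f, x_r, i*h, h)

exact = math.exp(-1.0)
print(abs(x_e - exact))    # Euler error: roughly 2e-2 at this step size
print(abs(x_r - exact))    # RK4 error: several orders of magnitude smaller
```

With the same step size, the RK4 result is dramatically more accurate, which is why the larger per-step cost usually pays for itself.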
Although not as commonly written by individual end users, most simulation
software incorporates advanced multistep and predictor-corrector methods.
Multistep methods require the prior values to be saved and used in the next step.
Explicit methods only use data points up to the current tk, while implicit methods
require tk+1 or further ahead. These advanced routines can estimate the error of the
current prediction, and if it is larger than a user-defined value, the routine backs up a
step, decreases the step size, and tries again. When the system is changing very
slowly, it also may increase the step size to save computational time. These routines
are generally invisible to the user, and programs like Matlab have methods of pre-
senting the output at fixed time steps even though the step size changed during the
numerical integration.

Named after German mathematicians C. Runge (1856–1927) and W. Kutta (1867–1944).
In conclusion, it is quite simple to compute a time solution to equations in state
space format even though they may be nonlinear and with multiple inputs and out-
puts. There are books containing the numerical recipes (code segments) for many
different numerical integration algorithms (in many different programming lan-
guages) if the desire is to do the programming for incorporation into a custom
program.

3.6.4 State Space Block Diagram Representation

If any system in state space representation is linearized, it can be represented in block
diagram notation for simulation by a variety of programs. Figure 25 illustrates the
block diagram representation of a general state space system without feedback. In
future systems the feedback loop will be added and analyzed. The wide paths denote
multiple lines of information, i.e., a column vector is passed for each point in time.
A linear single input single output state space system may be reduced further,
resulting in a more typical block diagram. For example, look again at the state space
mass-spring-damper system.

[ẋ1]   [  0      1  ] [x1]   [  0  ]
[ẋ2] = [ −k/m  −b/m ] [x2] + [ 1/m ] u
This can be represented in block diagram form as shown in Figure 26. If we
take the Laplace transform of the integrals in the block diagram and simplify, the
result becomes the same transfer function developed multiple times (and using mul-
tiple methods) previously. By now we should be more comfortable representing
models of physical systems using a variety of formats. Each format has different
strengths and weaknesses, but for the most part the information is interchangeable.

Figure 25 General state space block diagram.

Figure 26 Mass-spring-damper block diagram.

3.6.5 Transfer Function to State Space Conversion

Just as we have seen that it is possible to take a set of state space matrices and form
a transfer function, it is also possible to take a transfer function and convert it to
equivalent state space matrices. By definition, a transfer function is linear and repre-
sents a single input single output system. This simplifies the process, especially when
the numerator of the transfer function is constant (no s terms). The following exam-
ple illustrates the ease of constructing the state space matrices from a transfer func-
tion. Remember that converting a state space system to a transfer function results in
a unique transfer function, but the process in reverse may produce many correct but
different representations in state space. That is, equivalent but different state space
matrices, when converted to transfer functions, result in the same transfer
function. The opposite is not true. Different methods of converting a transfer func-
tion into state space matrices may result in different matrices. One thing does remain
true, however: if we calculate the eigenvalues for any of the A matrices, they will be
identical. In the example that follows, we first develop the matrices manually and
then use Matlab. Even though the resulting matrices differ, we will verify that they
indeed contain the same information.

EXAMPLE 3.17
Convert the following transfer function to state space representation:

C(s)/R(s) = 24 / (s³ + 6s² + 11s + 6)

The process is to first cross multiply:

C(s)(s³ + 6s² + 11s + 6) = 24 R(s)

s³C(s) + 6s²C(s) + 11s C(s) + 6 C(s) = 24 R(s)

Take the inverse Laplace to get the original differential equation (minus the initial
conditions):

c''' + 6c'' + 11c' + 6c = 24r

Choose the state variables (chosen here as successive derivatives; third order, three
states):

x1 = c
x2 = c'
x3 = c''

Then the state equations are:

dx1/dt = x2
dx2/dt = x3
dx3/dt = −6x1 − 11x2 − 6x3 + 24r

Finally, writing them in matrix form:

[ẋ1]   [  0    1    0 ] [x1]   [  0 ]
[ẋ2] = [  0    0    1 ] [x2] + [  0 ] r        y = [1  0  0] [x1; x2; x3] + [0] u
[ẋ3]   [ −6  −11   −6 ] [x3]   [ 24 ]
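The pattern above generalizes to any transfer function with a constant numerator: choosing successive derivatives as the states always yields a "companion" A matrix with ones on the superdiagonal and the negated denominator coefficients across the bottom row. A sketch of that construction (the function name `companion_ss` is my own):

```python
import numpy as np

def companion_ss(b0, den):
    """State space matrices for G(s) = b0 / (den[0]*s^n + ... + den[n]),
    using successive derivatives of the output as the states."""
    den = np.asarray(den, dtype=float)
    a = den[1:] / den[0]              # normalized coefficients a_{n-1} ... a_0
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)        # ones on the superdiagonal
    A[-1, :] = -a[::-1]               # bottom row: -a_0, -a_1, ..., -a_{n-1}
    B = np.zeros((n, 1)); B[-1, 0] = b0 / den[0]
    C = np.zeros((1, n)); C[0, 0] = 1.0
    D = np.zeros((1, 1))
    return A, B, C, D

A, B, C, D = companion_ss(24, [1, 6, 11, 6])
print(A[-1])                                  # [ -6. -11.  -6.]
print(np.sort(np.linalg.eigvals(A).real))     # [-3. -2. -1.]
```

The eigenvalues of the constructed A are the roots of s³ + 6s² + 11s + 6 = (s + 1)(s + 2)(s + 3), matching the poles of the original transfer function.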
To conclude this example, let us now work the same problem using Matlab and
compare the results, learning some Matlab commands as we progress. We first define
the numerator and denominator (num and den in the following program), where the
vectors contain the coefficients of the polynomials in decreasing powers of s. Thus to
define the polynomial s³ + 6s² + 11s + 6 in Matlab we define a vector as [1 6 11 6],
which are the coefficients of [s³ s² s¹ s⁰]:
Matlab commands:
num = 24;                  % Define numerator of C(s)/R(s) = G(s).
den = [1 6 11 6];          % Define denominator of G(s).
sys1 = tf(num,den)         % Make and display LTI TF.
[A,B,C,D] = tf2ss(num,den) % Convert TF to SS using num, den.
lti_ss = ss(sys1)          % Convert LTI to state space.
roots(den)                 % Check roots of characteristic equation.
eig(A)                     % Check eigenvalues of A.
eig(lti_ss)                % Check eigenvalues of lti_ss variable.
After executing the commands, we find that the resulting state space system
matrices are slightly different. Checking the roots of the denominator (the character-
istic equation of the original transfer function) and the eigenvalues of the two result-
ing A matrices gives the results summarized, as they would appear on the screen, in
Table 4. Even with different matrices the eigenvalues are the same and equal to the
original poles of the system. Matlab uses the LTI notation for commands used with
LTI systems. The transfer function command, tf, is used to convert the numerator
and denominator into an LTI variable. For very large systems, and systems with
zeros in the numerator of the transfer function, using tools like Matlab can save the
designer much time.

Table 4 Example: Matlab Results from TF to SS Conversion

Original transfer function: C(s)/R(s) = 24/(s³ + 6s² + 11s + 6), Poles (roots of CE) = −3, −2, −1

Matrices from using the tf2ss command:
A = [−6  −11  −6;  1  0  0;  0  1  0],  B = [1; 0; 0],  C = [0  0  24],  D = [0]
Eigenvalues = −3, −2, −1

Matrices from using the ss command:
A = [−6  −2.75  −0.375;  4  0  0;  0  4  0],  B = [1; 0; 0],  C = [0  0  1.5],  D = [0]
Eigenvalues = −3, −2, −1
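The equivalence claimed in Table 4 can be verified outside Matlab as well; checking both A matrices from the table with NumPy yields the same eigenvalues:

```python
import numpy as np

A1 = np.array([[-6.0, -11.0, -6.0],       # A matrix from tf2ss (Table 4)
               [ 1.0,   0.0,  0.0],
               [ 0.0,   1.0,  0.0]])
A2 = np.array([[-6.0, -2.75, -0.375],     # A matrix from ss (Table 4)
               [ 4.0,  0.0,   0.0],
               [ 0.0,  4.0,   0.0]])

e1 = np.sort(np.linalg.eigvals(A1).real)
e2 = np.sort(np.linalg.eigvals(A2).real)
print(e1)   # [-3. -2. -1.]
print(e2)   # [-3. -2. -1.]: different realizations, identical poles
```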

3.1 PROBLEMS
3.1 Given the following differential equation, which represents the model of a
physical system, determine the time constant of the system, the equation for the
time response of the system when subjected to a unit step input, and the correspond-
ing plot of the system response resulting from the unit step input.

80 dx/dt + 4x = f(t)
3.2 Given the second-order time response to a step input in Figure 27, measure and
calculate the percent overshoot, settling time, and rise time.
3.3 Using the differential equation given, determine the transfer function where
G(s) = Y(s)/U(s).

d³y/dt³ + 3 d²y/dt² + 4 dy/dt + 9y = 10u
3.4 Given the following differential equation, find the transfer function where y is
the output and u is the input:

d³y/dt³ + 5 d²y/dt² + 3 dy/dt + 2y = 5 du/dt + u
Figure 27 Problem: step response of second-order system.

3.5 Using the differential equation of motion given,
a. Determine the transfer function for the system.
b. Take the inverse Laplace transform to solve for the time response if the input is
a unit impulse.

2 d²y/dt² + 4 dy/dt + 6y = 8 du/dt + 10u
3.6 Write the time response for the following transfer function when the input is a
unit ramp:

G(s) = 2/(s + 2)
3.7 Given the following transfer function, solve for the system time response to a
unit step input:

Y(s)/U(s) = (5s + 1)/(s² + 7s + 10)
3.8 Given the following transfer function, solve for the system time response to a
step input with a magnitude of 2.

Y(s)/U(s) = 2/(s² + 3s + 2)
3.9 Find the time solution to the transfer function given. Use partial fraction
expansion techniques.

Y(s)/U(s) = (5s + 1)/(s³ + 5s² + 3s + 2)
3.10 Given the following transfer function, solve for the system time response to a
step input with a magnitude of 5.

Y(s)/U(s) = 2(s + 1)/(s² + 7s + 12)
3.11 Given the s-plane plot below in Figure 28, assume that the poles are at the
marked locations and sketch the response to a unit step input for the system
described by the poles. Assume a steady-state closed loop transfer function gain of 1.
3.12 Using Figure 29, a first-order system model responding to a unit step input,
develop the appropriate transfer function model. Note the axes scales.

Figure 28 Problem: pole locations in the s-plane.



Figure 29 Problem: step response of first-order system.

3.13 Given Figure 30, the system response to a unit step input, approximate the
transfer function based on a second-order system model.
3.14 Given the following closed loop transfer function, plot the pole locations in the
s-plane and briefly describe the type of response (dynamic characteristics, final
steady-state value) when the input is a unit step.

G(s) = 18/(s² + 4s + 36)

Figure 30 Problem: step response of second-order system.



Figure 31 Problem: system block diagram.

3.15 For the block diagram shown in Figure 31, determine the following:
a. The closed loop transfer function
b. The characteristic equation
c. The location of the roots in the s-plane
d. The time responses of the system to unit step and unit ramp inputs
3.16 Given the physical system model in Figure 32, develop the appropriate differ-
ential equation describing the motion (see Problem 2.18). Develop the transfer func-
tion for the system where xo is the output and xi is the input.
3.17 Given the physical system model in Figure 33, develop the appropriate differ-
ential equation describing the motion (see Problem 2.19). Develop the transfer func-
tion for the system where xo is the output and xi is the input.
3.18 Given the physical system model in Figure 34, develop the appropriate differ-
ential equation describing the motion (see Problem 2.20). Develop the transfer func-
tion for the system where r is the input and y is the output.
3.19 Given the physical system model in Figure 35, develop the appropriate differ-
ential equation describing the motion (see Problem 2.21). Develop the transfer func-
tion for the system where F is the input and y is the output.
3.20 Given the physical system model in Figure 36, develop the appropriate differ-
ential equation describing the motion (see Problem 2.26). Develop the transfer func-
tion for the system where Vi is the input and Vc, the voltage across the capacitor, is
the output.
3.21 Using the physical system model in Figure 37, develop the differential equa-
tions describing the motion of the mass, y(t), as a function of the input, r(t). PL is the
load pressure, and a and b are linkage segment lengths (see Problem 2.27). Develop the
transfer function for the system where r is the input and y is the output.

Figure 32 Problem: physical system model (mechanical/translational).



Figure 33 Problem: physical system model (mechanical/translational).

Figure 34 Problem: physical system model (mechanical/translational).

Figure 35 Problem: physical system model (mechanical/translational).

Figure 36 Problem: physical system model (electrical).



Figure 37 Problem: physical system model (hydraulic/mechanical).

3.22 Determine the differential equations describing the system in Figure 38 (see
Problem 2.29). Formulate as time derivatives of h1 and h2. Develop the transfer
function for the system where qi is the input and h2 is the output.
3.23 Determine the differential equations describing the system given in Figure 39
(see Problem 2.30). Formulate as time derivatives of h1 and h2. Develop the transfer
function for the system where qi is the input and h2 is the output.
3.24 For the transfer function given, develop the Bode plot for both magnitude (dB)
and phase as a function of frequency.

GH(s) = 10(s + 1)/(s(0.1s + 1))

3.25 For the transfer function given, develop the Bode plot for both magnitude (dB)
and phase as a function of frequency.

Y(s)/U(s) = 2/(s² + 3s + 2)

3.26 For the Bode plot shown in Figure 40, estimate the following:
a. What is the approximate order of the system?
b. Damping ratio: underdamped or overdamped?
c. Natural frequency (units)?
d. Approximate bandwidth (units)?

Figure 38 Problem: physical system model (liquid level).



Figure 39 Problem: physical system model (liquid level).

3.27 From the transfer function, sketch the basic Bode plot and measure the follow-
ing parameters:
a. Gain margin
b. Phase margin
c. Bandwidth using the 3 dB criterion
d. Steady-state gain of the system

G(s) = 5(s + 4)/(s(s + 1)(s + 2)(5s + 1))
3.28 Develop a Nyquist plot from the Bode plot given in Figure 40.
3.29 Given the following state space matrices, determine the equivalent transfer
function. Is the system stable? Show why or why not.

[ẋ1]   [ 2   5 ] [x1]   [ 1 ]
[ẋ2] = [ 3  11 ] [x2] + [ 0 ] u    and    y = [1  0] [x1; x2]

Figure 40 Problem: Bode plot.



3.30 Given the following state space system matrix, find the eigenvalues and
describe the system response:

A = [  0    1 ]
    [ −1   −1 ]
3.31 Given the following transfer function, write the equivalent system in state space
representation.

C(s)/R(s) = (2s² + 8s + 6)/(s³ + 8s² + 16s + 6)
4
Analog Control System Performance

4.1 OBJECTIVES
• Define feedback system performance characteristics.
• Develop steady-state and transient analysis tools.
• Define feedback system stability.
• Develop tools in the s, time, and frequency domains for determining system
stability.

4.2 INTRODUCTION
Although a relatively new field, available controller strategies have grown to the
point where it is hard to define the basic configurations. The advent of the low
cost microcontroller has revolutionized what is possible in control algorithms. This
section defines some basic properties relevant to all control systems and serves as a
backdrop for measuring and predicting performance in later sections. Control sys-
tems are generally evaluated with respect to three basic areas: disturbance rejection,
steady-state errors, and transient response. Open loop and closed loop systems are
both subjected to the same basic criteria. As the complexity increases, additional
characteristics become important. For example, in advanced control algorithms
using a plant model for command feedforward, the sensitivity of the controller to
modeling errors and plant changes is critical. In this case it is appropriate to evaluate
different algorithms based on parameter sensitivity, in addition to the basic ideas
presented in this chapter.
A second major concern in controller design is stability. Three basic methods,
the Routh-Hurwitz criterion, root locus plots, and frequency response plots, are devel-
oped in this chapter as tools for evaluating the stability of different controllers.
Stability is also closely related to transient response performance, as the examples
and techniques illustrate.


4.3 FEEDBACK SYSTEM CHARACTERISTICS


4.3.1 Open Loop Versus Closed Loop
Most engineers are familiar with the idea of open loop and closed loop controllers.
By denition, if the output is not being measured, whether directly with transducers
(or linkages) or indirectly with estimators, it is an open loop control system and
incapable of automatically adjusting if the output wanders. When a transducer is
added to measure the output and compare it with the desired input, we have now
closed the loop and have constructed a closed loop controller. An example of an
open loop controller in use by most of us is the washing machine. We insert the
clothes and proceed to adjust several inputs based on load size, cleanliness, colors,
etc. After completing the programming, the start button is pushed and the cycle
runs until completed. Internal loops may be closed to control the motor speed, water
level, etc., but the primary input-output relationship is open loop. Unless something
goes wrong (load imbalance), the machine performs the same tasks in the same order
and for the same length of time regardless of whether the load is clean or still dirty.
What about our common household clothes dryer: Is it an open or closed loop
configuration? The answer depends. Most basic models incorporate a timer that
simply defines the amount of time for the dryer to be on, hence open loop.
However, many models today also incorporate a humidity sensor that can be set
to shut off the dryer when the humidity decreases to a set level; now the dryer is
incorporating closed loop controls with the addition of a sensor and error detector.
This simple feedback system is illustrated in Figure 1.
The dryer example can be contrasted with the following example, a cruise
control system, in several ways. What does the dryer controller do with the trans-
ducer information? Simply open or close a contact switch; there is no proportionality
to the error signal that is received. In this sense it is a very simple closed loop control
system where the error detector could be a simple positive feedback operational
amplifier (OpAmp). Now compare this to a typical automobile cruise control system,
already discussed in Chapter 1. The signal output of the actuator is proportional to
its input. As the error increases, so does the signal to the actuator and the corre-
sponding throttle input to the engine. Most control systems benefit when the com-
mand and actuation may be varied continuously. An advanced controller algorithm
would have little effect on the clothes dryer since the heater is designed to operate
between two states, on and off. Many early programmable logic controllers (PLCs)
closed the loop using simple logic switches and on/off actuation relays. This type of
controller is still common in many industrial applications. Some of the advantages
and disadvantages of each type are listed in Table 1.

Figure 1 Basic closed loop clothes dryer.



Table 1 Characteristics of Open Loop and Closed Loop Controllers

Open loop controllers Closed loop controllers

Cheap (i.e., timers vs. transducers, etc.) Requires additional components ($$)
Unable to respond to external inputs Reduces effects of disturbances
Generally stable in all conditions Can go unstable under certain conditions
No control over steady-state errors Can eliminate steady-state errors

Although in principle open loop controllers are cheaper to design and build
than closed loop controllers, this is not always the case. As microcontrollers, trans-
ducers, and amplifiers become more economical, often a break-even point exists
beyond which the open loop controller is no longer the cheaper alternative. For
example, some things can now be done electronically, therefore removing the most
expensive hardware in the system and simulating it in software.

4.3.2 Disturbance Inputs


Disturbances, unfortunately, are common inputs to all practical (i.e., in use)
controllers. Disturbance inputs might include electrical noise, external (environ-
mental) conditions, and different loading conditions. Referring again to the cruise
control system, we can see several potential disturbance inputs. If the cruise
control was set on level ground, no wind, and constant surface properties, the
vehicle should retain the same speed with or without the feedback system as long
as nothing changes. However, in addition to the external disturbances, the vehicle
cruise control must still deal with many disturbances arising from the vehicle
itself. The spark plugs, alternator, distributor, electric clutches, electric motors
(fan and windshield wipers), etc., all produce electrical noise that might interfere
with the electrical feedback signal or command signal. Unless the controller closes
the loop, it does not know how to respond to changes occurring after the com-
mand setting is made.
An example of an open loop cruise control system can be found on some
motorcycles. The cruise control is a simple clamp on the handlebar throttle
that allows the driver to accelerate to the desired speed and set the clamp; if
nothing changes, the motorcycle continues at the preset speed. Obviously, the
problem arises when the driver encounters a hill and/or different wind conditions:
the motorcycle speed will change and the driver will have to readjust the throttle
clamp. The changing landscape (hills), wind forces, and electrical noises are what
we call disturbance inputs.
All real controllers have to deal with disturbance inputs. In the case of the
modern automobile cruise control, the controller automatically increases the throttle
if a hill or stronger head wind is encountered. This is one of the primary advantages
of closed loop controllers. In fact, if feedforward algorithms are correctly used for
following the input signal, then adequately handling the disturbance inputs becomes
the primary objective of the feedback loop.
The next section provides means of designing for minimal influences from
disturbances or, as commonly termed, disturbance input rejection.

4.3.3 Steady-State Errors

As we have just seen, one of the primary advantages of feedback is to control/limit
the amount of error between the desired and actual variables. Ideally, the error
should be zero at all times, but this can never be achieved in actual practice. This section
develops approaches for determining the steady-state error arising from both com-
mand and disturbance inputs. It is possible to include noise inputs also if desired, but
their effect on the magnitude of the steady-state error is usually small. To begin with,
let us examine a general block diagram with both command and disturbance inputs,
given in Figure 2.
In Section 3.4.3 we developed the skills to find the relationship between the
input and output in a block diagram. How do we deal with the fact that now we have
two inputs (R and D) in the block diagram and one output? Since we are working
with linear systems here, the principle of linear superposition applies and we can
solve for two transfer functions, C(s)/R(s) and C(s)/D(s). The transfer functions can
be found by setting one input to zero and reducing the block diagram with respect to
the remaining input. What each transfer function gives is the individual effect of each
input, command and disturbance, on the output. The total response is simply the two
separate responses added together. Determining the two transfer functions allows us
to talk about steady-state and transient characteristics arising from either the com-
mand input or the disturbance input. The principle of linear superposition does not
apply to nonlinear systems, which makes the analysis more difficult if the system is not
linear.
To show this procedure, begin with the block diagram in Figure 2 and first set
the disturbance input to zero as shown in Figure 3. This effectively removes the
summing junction containing D(s), and now we can close the loop to find the
complete transfer function using the same methods from earlier. This results in the
following closed loop transfer function:
Figure 2 General block diagram inputs.

Table 2 Control System Notation in Block Diagrams

Signals               Transfer functions
R(s)  Command         Gc(s)  Controller
C(s)  Output          G1(s)  Amplifier
D(s)  Disturbance     G2(s)  Physical System
                      H(s)   Transducer
Analog Control System Performance 145

Figure 3 General block diagram with D(s) = 0.

C(s)/R(s) = Gc G1 G2 / (1 + Gc G1 G2 H)
Since this transfer function represents the output over the input, the goal is to have
C/R equal to 1. If we could make this always be the case, then the error would
always be zero; that is, C(s) would always equal R(s). If it cannot be made to
always be 1, then how can we optimize it? By making the gain (product) Gc G1 G2
as large as possible, we make the overall value get closer to 1. As this gain
approaches infinity, the ratio C/R approaches 1, or perfect tracking. Although
this looks good from the point of view of reducing steady-state errors, we will see
that adding criteria on stability and transient effects will limit the possible gain in
our system. This is one of the fundamental aspects of most design work: the
balancing of several variables to optimize the overall design.
Now let us follow the same procedure but set the command to zero to find the
effects of disturbance inputs on our system. Setting R(s) = 0 results in the block
diagram shown in Figure 4 and the following closed loop transfer function:

C(s)/D(s) = G2 / (1 + Gc G1 G2 H)
For this transfer function we would like C/D to equal zero, in which case the
disturbance input would have no effect on the output. Obviously, this will not be the
case unless the system transfer function G2 equals zero, which cannot happen if we want
to control the system: if G2 equals zero, the controller also has no effect on the
output. If we want to minimize the effects of disturbances, we can make
G2 as small as possible relative to Gc and G1. Increasing the gains Gc and G1 while
leaving G2 unchanged does make the overall gain tend toward zero, as desired. Increasing
H also helps here but hurts command following performance.
To optimize both, we should try to make Gc and G1 as large as possible.
Although this sounds easy to do even with a simple proportional controller, K,
for Gc , we will see that a trade-off exists. As the gain of Gc is increased, the errors

Figure 4 General block diagram with R(s) = 0.



decrease but the stability is eroded. Hence, good controller design is a trade-off
between errors and stability. Over the years many alternative controllers have
been developed to optimize this relationship between steady-state and dynamic per-
formance. The discussion here assumes primarily a proportional type controller.
This section thus far has presented general techniques for reducing errors without
being specific to steady-state errors. If we actually achieved C/R equal to 1, then
in theory (with feasible inputs) the output would always exactly equal the input and
steady-state errors would be nonexistent. If only this could always be the case. Real-world
components are never completely linear with unlimited output and bandwidth,
which ensures that all decent controls engineers remain in demand.
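The trade-off can be made concrete with a few lines of arithmetic. The DC gains below (G1 = 2, G2 = 1.5, H = 1) are invented for illustration, not taken from the text; raising the proportional gain Kc drives C/R toward 1 and C/D toward 0 at the same time, stability permitting:

```python
def dc_gains(Kc, G1=2.0, G2=1.5, H=1.0):
    """DC (s = 0) values of the two closed loop ratios:
    C/R = Gc*G1*G2 / (1 + Gc*G1*G2*H)  -> want close to 1
    C/D = G2       / (1 + Gc*G1*G2*H)  -> want close to 0
    for a proportional controller Gc = Kc (illustrative numbers)."""
    L = Kc * G1 * G2                  # loop gain ahead of the transducer
    tracking = L / (1.0 + L * H)
    rejection = G2 / (1.0 + L * H)
    return tracking, rejection

print(dc_gains(1.0))    # -> (0.75, 0.375)
print(dc_gains(100.0))  # tracking near 1, rejection near 0
```

Stability and transient criteria, discussed later, are what keep Kc from being raised without limit.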
Now let us turn our attention specifically to steady-state errors. The previous
discussion is a natural lead-in, since the beginning
procedure is the same. Once the overall system transfer function is found, the steady-
state error can be determined. Remember though, it is possible to have zero steady-
state error and still have large transient errors. From the block diagram we can
determine our overall system transfer function and then apply the final value theorem
(FVT) to solve for the steady-state error. The only wrinkle occurs when an input
different from a unit impulse, step, or ramp is used. If a unit step is used on a type 0
system, the steady value is simply the FVT result from the system transfer function.
The steady-state error is then 1 − c_ss. This is best illustrated in the following
example.

EXAMPLE 4.1
Using the block diagram in Figure 5, determine the steady-state error due to:

1. A unit step input for the command.
2. A step input with a magnitude of 4 on the disturbance input.
Steady-State Tracking Error
To determine the steady-state error due to a unit step input, R(s), we set D(s) = 0 and
close the loop. This results in the following transfer function:

C(s)/R(s) = 25(s + 1) / (s² + 6s + 30)

With R(s) = 1/s, solve for C(s) and apply the FVT:

Figure 5 Example: steady-state errors and FVT.



c_ss = lim(t→∞) c(t) = lim(s→0) s·C(s) = lim(s→0) s · [25(s + 1)/(s² + 6s + 30)] · (1/s) = 25/30

The steady-state error is the final input value minus the final output value:

e_ss = r_ss − c_ss = 1 − 25/30 = 1/6
So even after all the transients decay, the final output of the system never reaches the
value of the command.
Steady-State Disturbance Error
To solve for the error when there is a disturbance acting on the system, we set R(s) = 0
and solve for C(s)/D(s). This results in the following transfer function:

C(s)/D(s) = 5(s + 1) / (s² + 6s + 30)
With D(s) = 4/s, solve for C(s) and apply the FVT:

c_ss = lim(t→∞) c(t) = lim(s→0) s·C(s) = lim(s→0) s · [5(s + 1)/(s² + 6s + 30)] · (4/s) = 20/30

In this case, the steady-state error is simply the negative of the final output value,
since the input value (desired output) is set to zero:

e_ss = r_ss − c_ss = 0 − 20/30 = −2/3
After all the transients decay from the step disturbance input, the final output of the
system reaches 0.667, even though the command never changed. Ideally, c_ss in this
case would be zero.
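Because a step input's 1/s cancels the s introduced by the FVT, both steady values above reduce to evaluating the closed loop transfer function at s = 0 and scaling by the step size. A quick sketch of that check with exact arithmetic (the coefficient lists are the two transfer functions of this example):

```python
from fractions import Fraction

def dc(num, den):
    """G(0) for a transfer function given as coefficient lists,
    highest power of s first."""
    return Fraction(num[-1], den[-1])

# Step of size A: c_ss = A * G(0), since the input's 1/s cancels the
# s from the final value theorem.
c_track = 1 * dc([25, 25], [1, 6, 30])   # unit command step: 25/30
e_track = 1 - c_track                    # 1 - 25/30 = 1/6
c_dist = 4 * dc([5, 5], [1, 6, 30])      # disturbance step of 4: 20/30
e_dist = 0 - c_dist                      # 0 - 20/30 = -2/3
```

Exact rationals avoid any floating-point doubt about results like 1/6 and −2/3.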
Several interesting points can be made from this example. First, when we
closed the loop relative to Rs and then to Ds, we found that the denominator
of the transfer function remained the same for both cases. Remembering that the
information regarding the stability and dynamic characteristics of the system is
contained in the characteristic equation, this is exactly what we would expect to find.
We still have the same physical system; we are just inserting signals into two different
points. If we had a second-order underdamped response from one input, we would
expect the same from the other. That is to say, we have not modified the physics of
our system by closing the loop at two different points. What caused the difference in
responses was the change in the numerator, which as we saw earlier when developing
transfer functions is precisely where the coefficients arising from taking the Laplace
transform of the input function appear in the transfer function.
The second interesting point found in this example is that the error never goes
all the way to zero, even as time goes to infinity. In fact, as we will see for controllers
with proportional gain only, this will almost always be the case. It can be explained
as follows: In order for the physical system in the example to have a non-zero output
(as requested by the command), it needs a non-zero input. If the input to the physical
system is zero, so will be the output. Since the output of the controller in this
example provides the input to the physical system, it must be non-zero also. With
a simple proportional gain for our controller, we can never have a non-zero output
with a zero input; thus, there must always be some error remaining to maintain a

signal into the physical system. As the proportional gain is increased, the required
input (error) for the same output sent to the physical system is reduced. So as we
found, increasing the proportional gain decreases the steady-state error because it
can provide more output with a smaller error input. As the next section shows, we
can add integrators to our system to eliminate the steady-state error since an inte-
grator can have a non-zero output even when the input is zero.
It is often easy to determine what the steady-state errors will be by classifying
the system with respect to the number of integrators it contains. Remember that an
integrator block is 1/s, and thus the number of 1/s terms we can factor out of the
transfer function is the number of integrators the system has. We saw that the
transfer function for the hydraulic cylinder had one integrator and thus was classified
as a type 1 system. A type 0 system has no pure integrators, a type 2 system
has two integrators (1/s² factors out), and so forth.
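With the denominator written as a coefficient list, counting integrators is just counting trailing zeros; a small sketch (the function name and list convention are my own):

```python
def system_type(den_coeffs):
    """Count the pure integrators (factors of 1/s) in a transfer function:
    the number of trailing zero coefficients in the denominator polynomial,
    written highest power of s first."""
    count = 0
    for c in reversed(den_coeffs):
        if c == 0:
            count += 1
        else:
            break
    return count

# s(ms + K1) with m = 1, K1 = 3  ->  s^2 + 3s + 0: one integrator, type 1
print(system_type([1, 3, 0]))   # -> 1
```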

EXAMPLE 4.2
To illustrate how to determine the system type number, let us work an example using
the hydraulic servo system that we modeled in Chapter 2. The differential equation
governing the system motion is

m (d²y/dt²) + (b + A/K_P)(dy/dt) = (A K_x / K_P) x

First, take the Laplace transform:

(m s² + K1 s) Y(s) = K2 X(s),   K1 = b + A/K_P,   K2 = A K_x / K_P

Write the transfer function:

Y(s)/X(s) = K2 / [s(m s + K1)] = (1/s) · K2 / (m s + K1)
Since a 1/s term can be factored out of the transfer function, it is classified as a
type 1 system. In this example the integrator is included as part of the physical
system model. Integrators can also be added electronically (or mechanically) as part
of the control system; the I term in a common proportional, integral, derivative
(PID) controller represents such an added integrator.
To illustrate the general case, close the loop on the type 1 unity feedback system
shown in Figure 6. Closing the loop produces

C(s) = [G(s)/s] / [1 + G(s)/s] · R(s) = [G(s) / (s + G(s))] · R(s)

Figure 6 General type 1 unity feedback system.



Figure 7 Block diagram with gain K and system type.

Applying the FVT to the system for a unit step input means that the input (1/s)
cancels with the s from the FVT. Therefore, if we let s go to zero in the above
transfer function, we end up with G(s)/G(s) = 1. No matter what form the system
model G(s) takes, the output is always one. Since the command is also 1, the error is
zero. As we will see with PID controllers, the integral term in the controller forces the
steady-state error to be zero for step input functions.
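To see this in the time domain, here is a rough Euler-integration sketch of a unity feedback loop around G(s)/s, with an assumed first-order G(s) = K/(s + 1); the gain, time step, and run length are illustrative choices, not values from the text:

```python
def simulate_type1_step(K=4.0, dt=1e-3, T=10.0):
    """Euler simulation of unity feedback around G(s)/s with an assumed
    G(s) = K/(s + 1).  The free integrator lets the plant hold a nonzero
    output even as the error signal decays to zero."""
    x1 = 0.0          # state of the first-order lag K/(s + 1)
    c = 0.0           # integrator output = system output
    r = 1.0           # unit step command
    for _ in range(int(T / dt)):
        e = r - c
        x1 += dt * (-x1 + K * e)   # lag dynamics driven by the error
        c += dt * x1               # free integrator
    return r - c                   # remaining error after transients decay
```

The returned error is essentially zero once the transients die out: the integrator supplies the steady drive the plant needs, so no residual error is required to sustain the output.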
If we reference the block diagram in Figure 7, we can generalize the steady-
state error results for different inputs using Table 3. This makes the process of
evaluating the steady-state performance of the system quite easy. Knowing the sys-
tem type number, overall system gain, and the type of input allows us to immediately
calculate the amount of steady-state error in the system.
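Since the table is a simple pattern in the type number n and the input, it can be encoded directly; a small sketch (the function name and argument convention are my own):

```python
def table3_error(system_type, input_power, K):
    """Steady-state error pattern of Table 3.
    input_power: 1 for a step (1/s), 2 for a ramp (1/s^2),
    3 for an acceleration input (1/s^3).  K is the open loop
    steady-state gain."""
    n = system_type
    q = input_power - 1          # integrators the input demands beyond a step
    if n < q:
        return float("inf")      # too few integrators: error grows without bound
    if n > q:
        return 0.0               # an extra integrator drives the error to zero
    return 1.0 / (1.0 + K) if n == 0 else 1.0 / K
```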
To demonstrate how the table is developed, let us look at two examples and
check the values given for the type 0 and type 1 systems.

EXAMPLE 4.3
To begin, we will use the type 0 system shown in Figure 8 and calculate the steady-state
errors for the system. We will first apply the information given in Table 3 and
then verify it by closing the loop and calculating the steady-state error. To apply the
information in the table, we need to find the steady-state gain K for the system. That
is, if we put an input value of 1 into the first block and let all of the transients (s terms)
decay, what would the output be? For this system we have three blocks, and the
overall gain is given by multiplying the three:

K = 8 × 3 × (2/4) = 12

From the table, for a unit step input the error is equal to 1/(1 + K), or 1/13,
and for unit ramp and acceleration inputs the error is equal to infinity. If we wish
to verify the table, simply close the loop and apply the FVT. Closing the loop results
in the following transfer function:

C(s)/R(s) = 48 / (s² + 5s + 52)

Table 3 Steady-State Errors as Functions of System Type Number

System type number n                0                  1                2

Step input R(s) = 1/s               Error = 1/(1 + K)  0                0
Ramp input R(s) = 1/s²              Error = ∞          Error = 1/K      0
Acceleration input R(s) = 1/s³      Error = ∞          Error = ∞        Error = 1/K

Figure 8 Example: steady-state errors for type 0 system.

For a unit step input R(s) = 1/s:

c_ss = lim(t→∞) c(t) = lim(s→0) s·C(s) = lim(s→0) s · [48/(s² + 5s + 52)] · (1/s) = 48/52

e_ss = r_ss − c_ss = 1 − 48/52 = 4/52 = 1/13
For a unit ramp input R(s) = 1/s²:

c_ss = lim(t→∞) c(t) = lim(s→0) s·C(s) = lim(s→0) s · [48/(s² + 5s + 52)] · (1/s²) = ∞

So we see that the application of the table results in the correct error, which is also
verified by closing the loop and applying the FVT. In the case where we do not have
a system type number (or the table), it is always possible, and still quite fast, to just
close the loop and apply the FVT as demonstrated here.

EXAMPLE 4.4
To finish our discussion of steady-state errors, let us look at an example type 1
system and calculate the errors resulting from various inputs, first by using the
system type number and Table 3, and then by closing the loop and applying
the FVT. The block diagram representing our system is given in Figure 9. To
solve for the steady-state errors using Table 3, we first must calculate the steady-state
gain of the system. Looking at each block, the overall gain is calculated as

K = 2 × 3 × (2/4) = 3

Since this is a type 1 system, with one integrator factored out of the third block, we
would expect the following steady-state errors from the different inputs:
for a unit step input, the error is equal to 0; for a unit ramp input, the error is equal
to 1/K, or 0.333; and for a unit acceleration input, the error is equal to infinity. To

Figure 9 Example: block diagram of type 1 system.



verify these errors, let's close the loop and apply the FVT for the different inputs.
The closed loop transfer function becomes

C(s)/R(s) = 12 / (s³ + 5s² + 4s + 12)
For a unit step input R(s) = 1/s:

c_ss = lim(t→∞) c(t) = lim(s→0) s·C(s) = lim(s→0) s · [12/(s³ + 5s² + 4s + 12)] · (1/s) = 12/12 = 1

e_ss = r_ss − c_ss = 1 − 12/12 = 0
For a unit ramp input R(s) = 1/s², we take a slightly different approach since the
steady-state output of C(s), using the FVT, goes to infinity:

c_ss = lim(t→∞) c(t) = lim(s→0) s·C(s) = lim(s→0) s · [12/(s³ + 5s² + 4s + 12)] · (1/s²) = ∞
This is not a surprise since the ramp input also goes to infinity. What we are interested
in is the steady-state difference between the input and output as they both go to
infinity. The easiest way to handle this is to write the transfer function for
the error in the system and then apply the FVT. The error can be expressed as the
output of the summing junction, where

E(s) = R(s) − C(s)   or   C(s) = R(s) − E(s)

Then if our open loop forward path transfer function is defined by
C(s) = G_OL(s) E(s), we can solve for E(s)/R(s):

G_OL(s) E(s) = R(s) − E(s)

or

E(s)/R(s) = 1 / (1 + G_OL(s)) = s(s + 1)(s + 4) / [s(s + 1)(s + 4) + 12]
To find the error as a function of our input, we apply the FVT to
E(s) = R(s) / (1 + G_OL(s)); compared to before, an extra s term appears in
the numerator:

e_ss = lim(t→∞) e(t) = lim(s→0) s·E(s) = lim(s→0) s · [s(s + 1)(s + 4) / (s(s + 1)(s + 4) + 12)] · (1/s²) = 4/12 = 1/3
The error, 0.333, is the same as that calculated earlier using the table. If
we used the same application of the FVT for the acceleration input, the input function
would add one more s term in the denominator (1/s³), and as s approaches zero
in the limit, the error approaches infinity.
So we see that the application of the table results in the correct determination
of steady-state error for all three inputs. Once again, closing the loop and applying
the FVT verified each table entry.
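The same ramp result can also be reached through the velocity error constant, K_v = lim(s→0) s·G_OL(s), since e_ss = 1/K_v for a ramp into a type 1 system. A sketch of that check with exact arithmetic:

```python
from fractions import Fraction

# G_OL(s) = 12 / [s(s + 1)(s + 4)]; multiplying by s cancels the integrator,
# so Kv = lim(s->0) s*G_OL(s) = 12 / ((0 + 1)(0 + 4)) = 3.
Kv = Fraction(12, (0 + 1) * (0 + 4))
e_ramp = 1 / Kv   # 1/3, matching Table 3 with K = 3
```

Note that K_v here equals the steady-state gain K = 3 computed from the blocks, as it should for a type 1 system.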
If controllers you are designing do not fit a standard mold, either reduce the
block diagram and calculate the steady-state errors or perform a computer simula-
tion long enough to let all transients decay. Using the FVT is generally the easiest

method for quickly determining the steady-state performance of any controller block
diagram model.
4.3.3.1 System Type Number from Bode Plots
A similar analysis, using the system type number to determine the steady-state error
as a function of several inputs, can be done using Bode plots. Recall that when we
developed Bode plots from transfer functions, any integrator in the system added a
low-frequency asymptote with a slope of −20 dB/decade (dec) and a constant phase
angle of −90 degrees. From this information it is a straightforward process to
determine the system type number and facilitate the use of Table 3. For example, if
we see that the low-frequency asymptote on our Bode plot has a slope of −40 dB/dec
and an initial phase angle of −180 degrees, then we know that we have two
integrators in our system and therefore a type 2 system. Now it is a simple matter of
using the table as was demonstrated after obtaining the type number from the
transfer function.
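This check can be automated by sampling the magnitude at two low frequencies and rounding the slope to a multiple of −20 dB/dec; a sketch (the sample frequencies and example transfer functions are arbitrary choices):

```python
import math

def low_freq_type(G, w1=1e-4, w2=1e-3):
    """Estimate the system type number from the low-frequency Bode slope:
    each integrator contributes -20 dB/decade (and -90 degrees of phase).
    G is a callable evaluating the transfer function at a complex s."""
    db1 = 20.0 * math.log10(abs(G(1j * w1)))
    db2 = 20.0 * math.log10(abs(G(1j * w2)))
    slope = (db2 - db1) / (math.log10(w2) - math.log10(w1))
    return round(-slope / 20.0)

print(low_freq_type(lambda s: 10 / (s * (s + 2))))   # one integrator -> 1
```

The sample frequencies must sit well below the lowest break frequency of the system, or the slope estimate picks up non-integrator dynamics.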

4.3.4 Transient Response Characteristics


The transient response is one of the primary design criteria (and often one of the
limits on the design) when choosing the type of controller and in tuning the controller.
Improper design decisions may lead to severe overshoot and oscillations. In
Section 3.3.2 the transient responses to step inputs for first- and second-order systems
were analyzed for open loop systems. The same procedure applies to closed
loop control systems. In fact, as shown in the next section, for linear systems the
total response is simply the sum of the first- and second-order responses. For
example, a linear third-order system can be factored into three first-order systems or
one first-order and one second-order system. This will become evident using root
locus plots.
The performance specifications developed earlier are also important since they
are often used to classify the relative stability level of the system. The important
parameters to remember are time constant, damping ratio, and natural frequency, as
these also define the closed loop response characteristics. Information from Bode
plots can also be used to define these parameters. The common transient response
measurements are given in Table 4. These are the parameters commonly given to the
designer of the control system as goals for the controller to meet. More often than
not, the specifications are in the time domain as most people more easily relate to
these measurements. Since most design methods take place in the s-domain, it is the
responsibility of the designer to understand the relationships shown in the table to
arrive at the desired pole locations in the s-plane. These relationships are commonly
used to take the time domain specifications and eliminate portions of the s-plane
where the specifications are not met. For example, if a required settling time is one of
the criteria, then any area of the s-plane that does not fall to the left of the
corresponding magnitude of the real component is invalid; all poles whose real
component has magnitude greater than or equal to this value (i.e., poles far enough
to the left) are valid. By combining the different criteria, the desired pole locations
become increasingly well defined.
The big difference with the transient characteristics of closed loop control
systems versus open loop systems is the idea that implementing different controller
gains can change them. This is one of the great benefits to implementing closed loop

Table 4 Parameters Commonly Used for Evaluating Control System Performance

Specifications in the s-domain                        Specifications in the time domain

First-order systems
Time constant or pole location on the real axis       Settling time or rise time

Second-order systems
Ratio of the imaginary to the real components         Damping ratio
  of complex roots
Radial distance from origin in s-plane                Natural frequency
Damping ratio (defined above)                         Percent overshoot
Natural frequency and damping ratio                   Rise time
Damped natural frequency or imaginary component       Peak time
  of complex roots
Natural frequency and damping ratio or real           Settling time
  component of complex roots

controllers, that is, the ability to make the system behave in a variety of ways simply
by changing an electrical potentiometer representing controller gain. A given system
might be underdamped, critically damped, overdamped, or even unstable based on
the chosen value of one gain. Therefore, our controller design becomes the means by
which we move the system poles to the locations in the s-plane specified by applying
the criteria defined in Table 4. The techniques presented in the next section will allow
us to design controllers to meet certain transient response characteristics, even
though the physical system contains actual values of damping and natural frequency
different from the desired values. Both root locus techniques and Bode plot methods
are effective in designing closed loop control systems.
Another benefit of closed loop controllers is that since we can electronically (or
mechanically) control response characteristics, we can add damping to the system
without increasing the actual power dissipation and energy losses. By adding a
velocity sensor and feedback loop to a typical position control system, it is easy to
control the damping, which is beneficial since less heat is generated by the physical
system.

4.4 FEEDBACK SYSTEM STABILITY


The skills to develop models and analyze them are required when we start discussing
the stability and design of controllers. The term stability has taken on many meanings
in discussions of control systems. For the sake of consistency, a quick definition
will be given.
Global stability refers to a stable system under all circumstances. In other
words, the response of the system will always converge to a finite steady-state
value. Marginal stability refers to the point between being stable or unstable. An
oscillatory motion will continue indefinitely in the absence of new inputs, neither
decaying nor growing in size. This occurs when the roots of the characteristic equa-
tion (denominator of the transfer function) are purely imaginary. Unstable systems,

once set in motion, will continue to grow in amplitude either until enough components
saturate or something breaks. Finally, relative stability is a term you hear a lot
when discussing control system performance. It means different things to different
people. We should think of it as a measure of how close to being unstable we
are willing to be. As we approach the unstable region, the overshoot and oscillatory
motions increase until at some point they become unacceptable. Hence a computer-controlled
machining process might be designed to always be overdamped, since no
overshoot is allowed while machining. This limits the response time and is not the
answer for everyone. A cruise control system, for example, might be allowed 5%
overshoot (several mph at highway speeds) to reach the desired speed faster. So we
see that different situations call for different definitions of relative stability, which is
often defined by the allowable performance specifications imposed upon the system.
The root locus and Bode plot methods are powerful tools since they allow us to
quickly estimate things like overshoot, settling time, and steady-state errors while we
are designing our system. These are the topics of discussion in the next several
sections.

4.4.1 Routh-Hurwitz Stability Criterion


The Routh-Hurwitz stability criterion is used to quickly determine whether or not a
system is stable when methods to find the roots of a polynomial are not readily
available. As seen earlier, when the roots of the characteristic equation are known
and plotted in the s-plane, the stability of the system is also known. While most
handheld calculators can calculate the roots of the characteristic equation, few can
determine symbolically where the roots become unstable as a function of a variable
(the gain K or even a system model parameter) as the Routh-Hurwitz method can.
In general, however,
computer programs do allow us to calculate the gains where the systems go unstable
and have in many respects tended to minimize the use of Routh-Hurwitz techniques.
The steps for using the Routh-Hurwitz method are given below followed by an
example problem to which the method is applied.

Step 1: Write the characteristic equation as a polynomial in s using the following
notation:

a0 s^n + a1 s^(n−1) + a2 s^(n−2) + … + a_(n−1) s + a_n = 0

Step 2: Examine the coefficients a_i: if any one is negative or missing, the system is
unstable, and at least one root is already in the right half of the s-plane.

Step 3: Arrange the coefficients in rows beginning with s^n and ending with s^0. The
columns taper to a single value for the s^0 row. Arrange the table as follows:

s^n       a0   a2   a4   a6   (all original even-numbered coefficients)
s^(n−1)   a1   a3   a5        (all original odd-numbered coefficients)
s^(n−2)   b1   b2   b3
s^(n−3)   c1   c2
  :        :
s^0       f1

The b_i's and all subsequent rows are calculated from the values of the previous two
rows using the patterns below:

b1 = (a1 a2 − a0 a3)/a1    b2 = (a1 a4 − a0 a5)/a1    b3 = (a1 a6 − a0 a7)/a1

c1 = (b1 a3 − a1 b2)/b1    c2 = (b1 a5 − a1 b3)/b1    c3 = (b1 a7 − a1 b4)/b1

d1 = (c1 b2 − b1 c2)/c1    d2 = (c1 b3 − b1 c3)/c1
This pattern is extended until the nth row is reached and all coefficients are
determined. The last two rows will have only one column (one coefficient) each, the
third-from-last row two coefficients, and so on. If an element turns out to be zero, a
variable representing a number close to zero, i.e., ε, can be used until the process is
completed. This indicates the presence of a pair of imaginary roots, and some part of
the system is marginally stable.
Step 4: To determine stability, examine the first column of coefficients. Any sign
change (+ to − or − to +) indicates the occurrence of an unstable root.
The number of sign changes corresponds to the number of unstable roots in
the characteristic equation.
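The tabulation above is mechanical enough to code directly. The sketch below builds the array for real coefficients, substitutes a small ε for a zero pivot as described, and counts first-column sign changes:

```python
def routh_unstable_roots(coeffs):
    """Build the Routh array for a polynomial (coefficients highest power
    first) and return the number of sign changes in the first column,
    i.e., the number of right-half-plane roots."""
    eps = 1e-9                                   # stand-in for a zero pivot
    n = len(coeffs)
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    while len(rows[1]) < len(rows[0]):           # pad the odd row with zeros
        rows[1].append(0.0)
    for _ in range(n - 2):                       # one new row per power of s
        prev2, prev1 = rows[-2], rows[-1]
        pivot = prev1[0] if abs(prev1[0]) > eps else eps
        new = []
        for j in range(len(prev1) - 1):
            a = prev2[j + 1] if j + 1 < len(prev2) else 0.0
            b = prev1[j + 1]
            new.append((pivot * a - prev2[0] * b) / pivot)
        new.append(0.0)
        rows.append(new)
    first_col = [row[0] for row in rows[:n]]
    return sum(1 for a, b in zip(first_col, first_col[1:]) if a * b < 0)
```

As a check, s³ + 4s² + 3s + 1 yields zero sign changes (stable), while s³ + 4s² + 3s + 20 yields two, consistent with a K < 12 stability limit for that family of polynomials.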

EXAMPLE 4.5
To illustrate the process using the block diagram in Figure 10, close the loop to find
the overall system transfer function and use the Routh-Hurwitz method to determine
stability. When we close the loop we get C/R = KG/(1 + KG), which leads to the
characteristic equation

1 + K/[s(s + 1)(s + 3)] = 0

or

s³ + 4s² + 3s + K = 0
Then the Routh-Hurwitz table becomes

s^3   1         3    0   (even-numbered coefficients)
s^2   4         K        (odd-numbered coefficients)
s^1   3 − K/4
s^0   K
The system will become unstable when a sign change occurs. When K/4
becomes greater than 3, the sign changes; thus K < 12 is the allowable range
of gain before the system becomes unstable. Also, if K becomes less than zero the
sign changes, although this is not a normal gain in a typical controller (it
becomes positive feedback). So we could say the allowable range of K where the
system is stable is

Figure 10 Example: block diagram of system.



0 < K < 12
Several quick comments are in order regarding this example. Based on what we have
already learned, we could see that the open loop poles from the block diagram are all
stable (all fall in the left-hand plane, LHP). This example then illustrates how
closing a control feedback loop can cause a system to go unstable (hence the range of K
for stability). Also, this is a type 1 system with one integrator; thus it will have zero
steady-state error for a step input and error equal to 1/K for ramp inputs.
What the Routh-Hurwitz criterion does not give us insight into is the type of
response at various gains and how the system approaches the unstable region as
the gain K is increased. It is this ability (among others) that has made the root locus
techniques presented in the next section so popular. Most courses, textbooks, and
computer simulation packages include this technique among their repertoire of
design tools for control systems.

4.4.2 Root Locus Methods


Root locus methods are commonly taught in most engineering control systems
courses and see widespread use. If we are able to develop a model, whether ordinary
differential equations (ODEs), transfer functions, or state space matrices, then root
locus techniques may be used. If the model is nonlinear, it must first be linearized
around some operating point (which raises the question: what is the valid range of
the root locus plots?). In the larger picture, root locus techniques provide a quick
method of knowing the system response and how it will change with respect to
adding a proportional controller. With advanced controllers and nonproportional
gains, the method must be slightly modified if the handbook type of approach is to be
used. Of course, if a computer is programmed to calculate all the roots while plotting,
then any variable may be varied to form the plot. The use of the computer in
developing root locus plots has made the method a powerful tool for multiple gain
loci, parameter sensitivity, and disturbance sensitivity studies.
The basic idea of root locus plots relates to the fact that the roots of the
characteristic equation largely determine the transient response characteristics of
the control system, as seen in the section on Laplace transforms. Since the roots
of the characteristic equation are primarily responsible for the response character-
istics, it would be helpful to know how the roots change as parameters in the system
change. This is precisely what the root locus plot does show: the migration of the
roots as the gain of the system changes. The s-plane, shown in Figure 8 in Chapter
3, is used to plot the real and imaginary components of the roots as they migrate.
The pole locations in the s-plane correspond to either a time constant (if on the real
axis) or a damping ratio and natural frequency (complex conjugate pairs) and thus
can be used to estimate the available range of system performance possible. These
are exactly the parameters that we related to the performance criterion in the time
domain (Table 4).
Since we are concerned with the location of system poles (roots of the char-
acteristic equation), we must determine how the denominator of the transfer func-
tion changes when various coefficients are changed. Although changing any
coefficient in the denominator causes the roots to move in the s-plane, classical
root locus techniques are developed for the case where only the gain K in the system
is varied. If we wish to examine the effects of additional parameters (i.e., mass in the

system, derivative controller gain, etc.), then we must rearrange the transfer function
to make this variable the gain K, or multiplier on the system. If we cannot, then we
are unable to use the rules developed here for easy plotting of the root loci. The concepts
are still the same, and the pole migrations are still plotted, but another method must be
available to solve for the poles every time the parameter is changed. With the variety of
software packages available today, this is not that difficult a problem.
To develop the guidelines for constructing root locus plots, we need to find the
roots of the characteristic equation. If we close the loop of a typical block diagram
with a feedback path, we get the following overall system transfer function and, of
interest, the characteristic equation:
C(s)/R(s) = Gc G1 G2 / (1 + Gc G1 G2 H)
If we let Gc be a proportional gain K and let G1(s) and G2(s) be combined and
represented by one system transfer function G(s), then the characteristic equation
can be represented by

CE = 1 + KGH
The roots of the characteristic equation are determined by setting it equal to zero,
where

1 + K G(s) H(s) = 0

or

K G(s) H(s) = −1
These equations provide the foundation for root locus plotting techniques. Since the
product G(s)H(s) contains both a numerator and a denominator represented by
polynomials in s, it can be written in terms of the roots of the numerator, called
zeros, and the roots of the denominator, called poles, as follows:

K (s − z1)(s − z2)⋯(s − zm) / [(s − p1)(s − p2)(s − p3)⋯(s − pn)] = −1
Thus using this notation we have m zeros and n poles (from the subscript notation).
Since this equation equals −1 and contains complex variables, we can write it as
two conditions that must always be met in order for the product G(s)H(s) to equal
−1. These two conditions are called the angle condition and the magnitude condition.

Angle condition: ∠(s − z1) + ∠(s − z2) + ⋯ + ∠(s − zm) − [∠(s − p1) + ∠(s − p2) + ⋯ + ∠(s − pn)]
= odd multiple of ±180 degrees (from the negative sign)

Magnitude condition: K · |s − z1| |s − z2| ⋯ |s − zm| / (|s − p1| |s − p2| |s − p3| ⋯ |s − pn|) = 1
The angle condition is only responsible for the shape of the plot and the
magnitude condition for the location along the plot line. Therefore, the whole
158 Chapter 4

root locus plot can be drawn using the angle condition. The only time we use the
magnitude condition is to locate a position along the plot. For any physical system,
n ≥ m, and this simplifies the rules used to construct root locus plots. The basic rules
for developing root locus plots are given below in Table 5. For consistency, poles are
plotted using x's and zeros are plotted using o's. This simplifies the labeling process.
The rules are based on the angle and magnitude conditions, as will be explained
through the use of several examples.

Step 1: Provides the groundwork for developing the root locus plot. We develop the
open loop transfer function by examining our block diagram and connecting the
transfer functions around the complete loop. The resulting open loop transfer func-
tion needs to be factored to find the roots of the denominator (poles) and numerator
(zeros). For systems larger than second-order, there are many calculators and com-
puters capable of finding the roots for us.

Table 5 Guidelines for Constructing Root Locus Plots

1  From the open loop transfer function, G(s)H(s), factor the numerator and
   denominator to locate the zeros and poles in the system.
2  Locate the n poles on the s-plane using x's. Each loci path begins at a pole; hence
   the number of paths is equal to the number of poles, n.
3  Locate the m zeros on the s-plane using o's. Each loci path will end at a zero, if
   available; the extra paths are asymptotes and head toward infinity. The number of
   asymptotes therefore equals n - m.
4  To meet the angle condition, the asymptotes will have these angles from the positive
   real axis:
   If one asymptote, the angle = 180 degrees
   Two asymptotes, angles = 90 and 270 degrees
   Three asymptotes, angles = ±60 degrees and 180 degrees
   Four asymptotes, angles = ±45 and ±135 degrees.
5  All asymptotes intersect the real axis at the same point. The point, σ, is found by
   σ = [(sum of the poles) - (sum of the zeros)] / (number of asymptotes)
6  The loci paths include all portions of the real axis that are to the left of an odd
   number of poles and zeros (complex conjugates cancel each other).
7  When two loci approach a common point on the real axis, they split away from or
   join the axis at an angle of 90 degrees. The break-away/break-in points are found
   by solving the characteristic equation for K, taking the derivative w.r.t. s, and
   setting dK/ds = 0. The roots of dK/ds = 0 occurring on valid sections of the real
   axis are the break points.
8  Departure angles from complex poles or arrival angles to complex zeros can be
   found by applying the angle condition to a test point in the vicinity of the root.
9  The point at which the loci cross the imaginary axis and thus go unstable can be
   found using the Routh-Hurwitz stability criterion or by setting s = jω and solving
   for K (can be a lot of math).
10 The system gain K can be found by picking the pole locations on the loci path that
   correspond to the desired transient response and applying the magnitude condition
   to solve for K. When K = 0, the poles start at the open loop poles; as K → ∞, the
   poles approach available zeros or asymptotes.
Analog Control System Performance 159

Step 2: Now that we have our poles (and zeros) from step 1, draw our s-plane axes
and plot the pole locations using x's as the symbols. When gain K = 0 in our system,
these pole locations are the beginning of each loci path. If two poles are identical
(i.e., repeated roots), then two paths will begin at their location. Each pole is the
beginning point for one root loci path.
Step 3: This is the same procedure as followed in step 2, except that now we locate each
zero in the s-plane using o's as symbols. Each zero location is an attractor of root loci
paths, and as K → ∞, every zero location will have a loci path approach its location
in the s-plane. The remaining steps help us determine how the root loci paths travel
from the poles to the zeros (or asymptotes if there are more poles than zeros).
Step 4: It is easy to see from steps 2 and 3 that if we have more poles than zeros, then
some root loci paths do not have a zero to travel to. In this case (which is actually the
most common case), we will have some paths leaving the s-plane as the gain K is
increased. Fortunately, because of the angle condition, these paths are defined based
on the number of asymptotes that we have in our plot. The angles that the asymptotes
make with the positive real axis are given in Table 6.
Step 5: All asymptotes intersect the real axis at a common point. This intersection
point can be found from the location of our poles and zeros. Once we know the
location, coupled with the angles calculated in step 4, we can plot the asymptote lines
on the s-plane. Remember that the root loci paths approach the asymptotes as K
approaches infinity; they do not necessarily travel directly to the intersection point
and lie on the lines themselves. The intersection point, σ, is found by summing the
poles and zeros and dividing by the number of asymptotes, n - m:

σ = [Σ(i=1 to n) pi - Σ(i=1 to m) zi] / (n - m)
It should be noted that only the real portions of complex conjugate roots need to be
included in the summations since the complex portions are always opposite of each
other and cancel when the pair of complex conjugate roots are summed.
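The intersection-point rule is one line of arithmetic, and the cancellation of the complex parts can be seen directly. A sketch (assuming a made-up set with poles at 0, -2, -1 ± 1j and a zero at -3):

```python
# Asymptote intersection: sigma = (sum of poles - sum of zeros)/(n - m).
poles = [0, -2, complex(-1, 1), complex(-1, -1)]
zeros = [-3]

# The imaginary parts of the conjugate pair cancel in the sum, so only
# the real part of the result is meaningful.
sigma = (sum(poles) - sum(zeros)).real / (len(poles) - len(zeros))
print(sigma)
```

For this set the result is -1/3: the poles sum to -4, subtracting the zero at -3 leaves -1, divided by n - m = 3 asymptotes.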
Step 6: Now we are finally ready to start plotting the root locus paths in the s-plane.
We begin by including all sections of the real axis that fall to the left of an odd
number of poles and zeros. Each one of these sections meets the angle condition and

Table 6 Asymptote Angles for Root Locus Plots

Number of asymptotes, n - m        Angles with the positive real axis

1                    θ = 180 degrees
2                    θ = 90 and 270 degrees
3                    θ = ±60 degrees and 180 degrees
4                    θ = ±45 and ±135 degrees
General number, k    θk = 180 degrees (2k + 1)/(n - m),  k = 0, 1, 2, ...

is a valid segment of the root locus paths. This rule is easy to apply. Locate the
rightmost pole or zero and, working our way to the left, mark every other section of real
axis that falls between a pole or zero, beginning with the first segment, since it is to
the left of an odd number (1).
Step 7: If a valid section of the real axis falls between two poles, then the paths must
necessarily break away from the real axis and travel to either zeros or asymptotes. If a
valid section of the real axis falls between two zeros, then the paths must join the axis
between these two points and travel to the zeros as K approaches infinity. The points
where the root locus paths leave or join the real axis are termed break-away and
break-in points. Both paths, whether leaving or joining, do so at the same point, and
at an angle of 90 degrees.
Since any path not on the real axis involves imaginary portions, which always
occur in complex conjugate pairs, root locus plots are symmetrical around the real
axis. If we look at the asymptote angles derived in step 4, we see that they are always
mirrored with respect to the real axis.
To solve for the break-away and/or break-in points, we solve the characteristic
equation for the gain K, take the derivative with respect to s and set it to zero, and
solve for the roots of the resulting equation. Valid points will fall on those sections
of the real axis containing the root locus paths.

dK/ds = 0;  solve for the roots
By using this technique we are finding the rate of change of K with respect to the rate
of change of s. Break-away points occur at local maximums and break-in points at
local minimums (found when we set the derivative to zero and solve for the roots).
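Numerically, this step amounts to differentiating the polynomial K(s) and keeping the real roots that land on valid locus segments. A sketch (assuming a characteristic equation s³ + 4s² + 3s + K = 0, so that K(s) = -(s³ + 4s² + 3s)):

```python
import numpy as np

# K as a polynomial in s; its stationary points are the break candidates.
K_poly = np.poly1d([-1.0, -4.0, -3.0, 0.0])   # K(s) = -s^3 - 4s^2 - 3s
break_candidates = np.roots(K_poly.deriv())    # roots of dK/ds = 0

print(break_candidates)   # approximately -2.22 and -0.45
```

Only the candidate lying on a valid section of the real axis (here about -0.45, between poles at 0 and -1) is an actual break-away point; the other root falls on an invalid segment and is discarded.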
Step 8: To determine the direction in which the paths will leave complex conjugate
poles, or the direction in which the paths will arrive at complex conjugate zeros, we can
apply the angle condition to the pole or zero in question. We will use the fact that the
angle condition is satisfied whenever the angles add up to an odd multiple of 180
degrees. That is, if we rotate a vector lying on the positive real axis either +180 or
-180 degrees, it now lies on the negative real axis, giving us the -1 relationship
[KG(s)H(s) = -1]. It is also true if we rotate the vector 540 or 900 degrees, 1 1/2
times around or 2 1/2 times around, respectively.
Any test point, s, that falls on the root locus path will have to meet the angle
condition. If we place the test point very close to the pole or zero in question, then
the angle of the test point relative to that pole or zero which meets the angle
condition is the departure or arrival angle. Graphically this may be shown as in
Figure 11. In the s-plane, we have poles at -2 ± 2j and -1 and a zero at -3. If the
pole in question (regarding the angle of departure from it) is at -2 + 2j, then the sum
of all the angles from all other poles and zeros, and from the test point near the pole,
must be an odd multiple of 180 degrees. This can be expressed as
Σφi (zeros) - Σφi (poles) = 180 degrees (2k + 1),  k = 0, 1, 2, ...

Since zeros are in the numerator, they contribute positively to the summation, and
poles, since they are in the denominator, contribute negatively. The way Figure

Figure 11 Angle of arrival and departure calculations.

11 is labeled, φ1 is the angle of departure relative to the real axis. To show how these
would sum, let us calculate φ1:

-φ1 - φ2 - φ3 + φ4 = 180 degrees (2k + 1)
-φ1 - 90 degrees - (180 degrees - tan⁻¹ 2) + tan⁻¹ 2 = 180 degrees (2k + 1)
-φ1 - 143 degrees = -180 degrees

Finishing the procedure, the angle of departure is φ1 = 37 degrees.


In many cases, when developing the root locus plots, it is not necessary to go
through these calculations because the approximate angles of departure or arrival
are obvious. The exact path is not that critical; the general shape and where the path
goes concern us more in designing control systems. When the departure or arrival
paths are not that clear, as when pairs of poles and asymptotes are relatively close to
each other and it is unclear which pole approaches which asymptote, it is worth
performing the calculations.
Step 9: If the root locus paths cross over into the right-hand plane and the system
becomes unstable, we often want to know the point at which the system becomes
unstable and the associated gain at that point. We have two options for solving for
the roots at this condition: Routh-Hurwitz, and substituting in s = jω for the poles
and solving for K. The Routh-Hurwitz technique is described in Section 4.4.1 and is
applied to the characteristic equation of the closed loop system. Alternatively, since
at the point where the system becomes unstable we know that the real components of
the poles are zero, we can substitute s = jω into the characteristic equation and solve
for K and ω by equating the real and imaginary coefficients to zero (two equations,
two unknowns). For large systems, the amount of analytical work required can
become significant, and the Routh-Hurwitz method will usually allow for an easier
solution.
Step 10: Assuming that our root locus paths, now drawn on our plot from steps 1-8,
go through a point at which we want our controller to operate, we want to find the
system gain K at that point. Steps 1-8 are based on the angle condition, and now to
find the required gain at any point along the root locus, we need to apply the
magnitude condition. If our actual root locus paths cross over into the right-hand
plane and are sufficiently close to the asymptotes (or to a point where we know the
value), then we can also use the methods from this step to find the gain K where
the system goes unstable, as step 9 did. Remember that our magnitude condition was
stated earlier as

K (|s - z1| |s - z2| ... |s - zm|) / (|s - p1| |s - p2| |s - p3| ... |s - pn|) = 1
Graphically, the magnitude condition means that if we calculate the distances from
each pole and zero to the desired point on the root locus path, then K is chosen such
that the total quantity of the factor becomes 1. For example, the distance js  z1 j, as
shown in the magnitude condition, is the distance between the zero at z1 and the
desired location (pole) along our path.
Analytically, we may also solve for K by setting the s in each term equal to the
desired pole location and multiplying each term out. The magnitude can then be
calculated by taking the square root of the sum of the real terms squared and the
imaginary terms squared.
For both methods, if there are no zeros in our system the numerator is equal to
1. This allows us to cross multiply, and K is found using the simplified equation
below:

K = |s - p1| |s - p2| |s - p3| ... |s - pn|
We can apply the magnitude condition any time we wish to know the gain required
for any location on our root locus plots. This is the system gain; to calculate the
controller gain we need to take into account any additional gain that can be carried
out in front of G(s)H(s) when the transfer function is factored.
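A small helper makes the magnitude condition reusable for any pole/zero set (a sketch; the function name and the numbers, a desired pole at s = -2 + 2j with open loop poles at -1 and -3, are illustrative):

```python
# Magnitude condition: K = prod|s - p| / prod|s - z| at a point s on the locus.
def gain_at(s, poles, zeros=()):
    K = 1.0
    for p in poles:
        K *= abs(s - p)        # distance from each pole to the desired point
    for z in zeros:
        K /= abs(s - z)        # distance from each zero to the desired point
    return K

K = gain_at(complex(-2, 2), poles=[-1, -3])
print(K)   # approximately 5
```

Any gain already factored out in front of G(s)H(s) must still be divided out of this result to obtain the controller gain itself.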
To illustrate the effectiveness of root locus plots, we will work several different
examples in the remainder of this section. The examples will begin with a simple
system and progress to slightly more complex systems as we learn the basics of
developing root locus plots using the steps above.

EXAMPLE 4.6
Develop the root locus plot for the simple first-order system represented by the open
loop transfer function below:

G(s)H(s) = 2/(s + 3)
Step 1: The transfer function is already factored; there are no zeros and one pole.
The pole is at s = -3.
Step 2: Locate the pole on the s-plane (using an x since it is a pole) as in Figure 12:
Step 3: There are no zeros.
Step 4: Since we have one pole, no zeros, then n - m = 1 - 0 = 1, and we have one
asymptote. For one asymptote, the angle with the positive real axis is 180 degrees
and the asymptote is the negative real axis, all the way out to -∞.
Step 5: There is no intersection point with only one asymptote. It is on the real axis.
Step 6: With only one pole, all sections of the real axis to the left of the pole are valid.
For our first-order system this coincides with the asymptote.

Figure 12 Example: locating poles and zeros in the s-plane (first-order system).

Step 7: There are no break-away or break-in points for a first-order system.


Step 8: The angle at which the locus path leaves the one pole in our system is 180
degrees, since there is only one angle to sum and the sum must always equal an odd
multiple of 180 degrees. This is also a good time to verify why the axis to the right of
the pole does not meet the angle condition. If we were to place our test point
anywhere to the right of the pole, then the angle from the pole to the test point is 0
degrees (or 360 degrees, 720 degrees, but never an odd multiple of 180 degrees). We
can verify the valid sections of real axis for any system by applying this procedure
using the angle condition.
Step 9: The path never crosses the imaginary axis and the system cannot become
unstable, even when the loop is closed.
Step 10: Let us say that we want a system time constant (τ) of 0.25 seconds. Since the
position on the negative real axis equals -1/τ, we want the point where the locus path
is at s = -1/0.25 = -4. Since we do not have any zeros in this system, the numerator is
equal to one and we find K to be

K = |s - p1| = |-4 - p1| = |-4 + 3| = 1

This is the overall loop gain required, not just the gain contributed by the controller.
For the system gain we need to account for the 2 in the numerator of G(s)H(s). Then
our controller gain, Kp, is found as

Kp × 2 = 1

or

Kp = 1/2 = 0.5
Now our final root locus plot, and the pole location when Kp = 0.5, is given in
Figure 13. For this example we will connect this plot with what we already know
about block diagram reduction and characteristic equations. If we represent
the transfer function used in this example as shown in the block diagram in Figure
14, we can close the loop and analytically vary K to develop the same plot. The
closed loop transfer function becomes:

C(s)/R(s) = 2K/(s + 3 + 2K)

Figure 13 Example: root locus plot for first-order system.

When K equals 1/2, we end up with the closed loop transfer function as:

C(s)/R(s) = 1/(s + 4)

This is exactly the same controller gain solved for using the root locus plot and
applying the magnitude condition when the desired pole was at s = -4. For first- or
second-order systems the analytical solution is quite simple and can be used in place
of or to verify the root locus plot. For example, if we take the closed loop transfer
function above with K still a variable in the denominator, it is easy to see that when
K = 0 the pole is at -3, our starting point, and as K is increased the pole moves
farther and farther to the left. As K approaches infinity, so does the pole location.
Thus both our beginning point (the open loop poles) and our asymptotes are verified
as we increase the gain K. Finally, since the roots of a second-order system are also
easily solved for as K varies (quadratic equation), this same method can be used (as will
be shown in the next example). Beyond second-order systems, the root locus
techniques are much easier.
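The analytical check just described fits in one line of code, since C(s)/R(s) = 2K/(s + 3 + 2K) puts the closed loop pole at s = -(3 + 2K) (a sketch):

```python
# Closed loop pole location as a function of the gain K (first-order example).
def pole(K):
    return -(3 + 2 * K)

print(pole(0))     # -3: the open loop pole, where the locus starts
print(pole(0.5))   # -4: the design point, giving a time constant of 0.25 s
```

Sweeping K from 0 upward reproduces the whole locus: the pole simply slides left along the real axis.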

EXAMPLE 4.7
Develop the root locus plot for the second-order system represented by the block
diagram in Figure 15.

Step 1: The transfer function is already factored; there are no zeros and two poles.
The poles are at s = -1 and s = -3.

Step 2: Locate the poles on the s-plane (using x's) as shown in Figure 16.

Step 3: There are no zeros.

Figure 14 Example: block diagram for first-order root locus example.



Figure 15 Example: block diagram for second-order root locus plot.

Step 4: Since we have two poles, no zeros, then n - m = 2 - 0 = 2, and we have two
asymptotes. For two asymptotes, the angles relative to the positive real axis are 90
degrees and 270 degrees.
Step 5: The intersection point is found by taking the sum of the poles minus the sum
of the zeros, all divided by the number of asymptotes. For this example,
σ = [(-3 - 1) - 0]/2 = -2
Step 6: With two poles, the section on the real axis between the two poles is the only
valid portion on the axis. For our example this is between -1 and -3. This section
also includes the intersection point of the asymptotes.
Step 7: There is one break-away point since the root locus paths begin on the real
axis and end along the asymptotes. To find the break-away point, solve the
characteristic equation for K and take the derivative with respect to s. The characteristic
equation is found by closing the loop and results in the following polynomial:

s² + 4s + 3 + 4K = 0  or  K = -(1/4)(s² + 4s + 3)
Taking the derivative with respect to s:

dK/ds = -(1/4)(2s + 4) = 0
s = -2

The break-away point for the second-order system coincides with the intersection
point for the asymptotes.
Step 8: The angles at which the locus paths leave the poles in our system are 180
degrees (pole at -1) and 0 degrees (pole at -3). This also coincides with the valid

Figure 16 Example: locating poles and zeros in the s-plane (second-order system).

Figure 17 Example: root locus plot for second-order system.

section of the real axis as determined earlier. These directions can also be ascertained
from the earlier steps since the only valid section of real axis is between the two poles
and the break-away point also falls in this section. We can now plot our final root
locus plot as shown in Figure 17.

Step 9: The asymptotes never cross the imaginary axis and the system cannot become
unstable, even when the loop is closed.

Step 10: For this example let us suppose that the design goals are to minimize the rise
time while keeping the percent overshoot less than 5%. The overshoot specification
means that we need a damping ratio of approximately 0.7 or greater. We will choose
0.707 since this corresponds to a radial line making an angle of 45 degrees with the
negative real axis. Adding the line of constant damping ratio to the root locus plot
defines our desired pole locations where it crosses the root locus path. Our poles
should be placed at s = -2 ± 2j as shown in Figure 18.
Now we need to find the gain K required for placing the poles at this position.
Remember that the poles begin at -1 and -3 when K = 0; they both travel toward
-2 as K is increased, one breaks up, one breaks down, and they follow the asymptotes
as K continues to increase. To solve for K, we apply the magnitude condition.
Since we do not have any zeros in this system, the numerator is equal to one and we
find K to be

K = |s - p1||s - p2| = √(1² + 2²) √(1² + 2²) = 5

Figure 18 Example: using root locus plot to locate desired response (second order).

We already have a gain of 4 in the numerator of G(s)H(s), so our control gain
contributes the rest. This gives us a required controller gain, Kp = 5/4.
As with the first-order example, we will connect this plot with what we already
know about block diagram reduction and characteristic equations. Let us close the
loop and analytically vary K to verify the root locus plot. When we close
the loop, we get the characteristic equation given below.

s² + 4s + 3 + 4K = 0

If we solve for the roots using the quadratic equation, we can leave K as a variable
and check the various locus paths by varying K and plotting the resulting roots to
the equation.

s1,2 = -4/2 ± (1/2)√(4² - 4(3 + 4K))
Let us check various points along our root locus paths by using several values of K.

K = 0:    s1,2 = -2 ± 1 = -1 and -3 (as we expected, our open loop poles)
K = 1/4:  s1,2 = -2 and -2 (the value of K at the break-away point)
K = 5/4:  s1,2 = -2 ± 2j (our value of K to place us at our desired poles)

So as in the previous example, we are able to analytically solve for the roots as a
function of K and verify our plot developed using the rules from this section. In fact,
it is quite easy to see from the quadratic equation that our poles start at our open
loop poles when K = 0, progress to the break-away point when the square root term
becomes zero, and then progress along the asymptotes as K approaches infinity.
Once we leave the real axis, the real term always remains at -2, and increasing K
only increases the imaginary component, exactly as the root locus plot illustrated.
From here on, the remaining examples will use only the root locus techniques, since
beyond second-order no easy closed form solution exists for determining the
roots of the characteristic equation. (Note: one does exist for third-order polynomials,
but it is a multistep process.)
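The quadratic-formula check above is easy to script (a sketch; `cl_roots` is a hypothetical helper name, not from the text):

```python
import cmath

# Roots of the closed loop characteristic equation s^2 + 4s + 3 + 4K = 0.
def cl_roots(K):
    disc = cmath.sqrt(16 - 4 * (3 + 4 * K))
    return (-4 + disc) / 2, (-4 - disc) / 2

print(cl_roots(0))      # -1 and -3: the open loop poles
print(cl_roots(0.25))   # -2 (double root): the break-away point
print(cl_roots(1.25))   # -2 +/- 2j: the desired closed loop poles
```

Using `cmath.sqrt` keeps the same formula valid on both the real-axis portion of the locus (positive discriminant) and the complex portion (negative discriminant).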

EXAMPLE 4.8
Develop the root locus plot for the block diagram in Figure 19. This model was
already used in Example 4.5 for the Routh-Hurwitz method. Our conditions then
arise from:

K/[s(s + 1)(s + 3)] = -1

Figure 19 Example: block diagram for root locus plot (third order).

Figure 20 Example: locating poles and zeros in the s-plane (third-order system).

Step 1: The transfer function is already factored; there are no zeros and three poles.
The poles are at s = 0, s = -1, and s = -3.

Step 2: Locate the poles on the s-plane (using x's) as shown in Figure 20.

Step 3: There are no zeros.

Step 4: Since we have three poles, no zeros, then n - m = 3 - 0 = 3, and we have three
asymptotes. For three asymptotes, the angles relative to the positive real axis are ±60
degrees and 180 degrees.

Step 5: The intersection point is found by taking the sum of the poles minus the sum
of the zeros, all divided by the number of asymptotes. This example is given in Figure
21.

σ = [(0 - 3 - 1) - 0]/3 = -4/3

Step 6: With three poles, the root locus sections on the real axis lie between the two
poles at 0 and -1, and to the left of the pole at -3. In this example the asymptote
intersection point does not fall in one of the valid regions.

Step 7: There is one break-away point since the root locus paths begin on the real
axis between 0 and -1 and end along the asymptotes. To find the break-away point,
solve the characteristic equation for K and take the derivative with respect to s. The

Figure 21 Example: location of asymptotes for third-order system.



characteristic equation is found by closing the loop and results in the following
polynomial:

s³ + 4s² + 3s + K = 0  or  K = -s³ - 4s² - 3s

Taking the derivative with respect to s:

dK/ds = -3s² - 8s - 3 = 0
s = -0.45, s = -2.22

Only one break-away point coincides with the valid section of real axis; the root
locus paths will leave the real axis at -0.45 and start approaching the asymptotes.

Step 8: The angles at which the locus paths leave the poles in our system are clear by
looking at the valid sections of real axis and knowing that the paths leave the poles
along these sections. We can now plot our final root locus plot as shown in Figure 22.

Step 9: Knowing that the root loci paths follow the asymptotes as K increases means
that any time we have three or more asymptotes, the system is capable of becoming
unstable, since at least two of the asymptotes head toward the right-hand plane
(RHP). To find where the asymptotes cross the imaginary axis for this example, we
can close the loop and apply the Routh-Hurwitz stability criterion; this was done for
this same system in Example 4.5. By visually examining the root locus plot, we could
also apply the magnitude condition using our desired pole locations on the
imaginary axis to solve for the gain K where the system becomes unstable.

Step 10: From here we have several options. If we want to tune the system to have
the fastest possible response, we could choose the gain K where the two paths
between 0 and -1 just begin to leave the real axis. These types of techniques will
be covered in more detail in the next chapter when we discuss designing control
systems.
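For the fastest-response option mentioned here, the gain at the break-away point can be evaluated directly (a sketch, reusing the K(s) = -(s³ + 4s² + 3s) relationship from step 7):

```python
import numpy as np

# Gain at the break-away point: the largest K for which all three
# closed loop poles are still real (no oscillation yet).
K_poly = np.poly1d([-1.0, -4.0, -3.0, 0.0])              # K(s) = -s^3 - 4s^2 - 3s
s_break = max(r.real for r in np.roots(K_poly.deriv()))  # valid break point, about -0.45
K_break = K_poly(s_break)                                # about 0.63

closed_loop_poles = np.roots([1.0, 4.0, 3.0, K_break])
print(s_break, K_break, closed_loop_poles)
```

At this gain two of the closed loop poles coincide at the break point; any further increase in K sends them off the real axis and introduces oscillation.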

EXAMPLE 4.9
Develop the root locus plot for the block diagram in Figure 23.

Figure 22 Example: root locus plot for third-order system.



Figure 23 Example: block diagram for root locus plot (fourth-order system).

Step 1: There is one zero and four poles. When the polynomial in the denominator is
factored, we find the pole locations to be at s = 0, s = -2, and s = -1 ± 1j. The next
two steps are to place the poles and zero in the s-plane.
Step 2: Locate the poles on the s-plane (using x's), shown in Figure 24.
Step 3: There is one zero at s = -3, shown in Figure 24.
Step 4: Since we have four poles, one zero, then n - m = 4 - 1 = 3, and we have
three asymptotes. For three asymptotes, the angles relative to the positive real axis
are ±60 degrees and 180 degrees.
Step 5: The intersection point is found by taking the sum of the poles minus the sum
of the zeros, all divided by the number of asymptotes. This example is shown in
Figure 25.
σ = [(0 - 2 - 1 - 1) - (-3)]/3 = -1/3
Step 6: With four poles and a zero, the root locus sections on the real axis lie between
the two poles on the axis at 0 and -2, and to the left of the zero at -3. In this example
the asymptote intersection point does fall in one of the valid regions.
Step 7: There is one break-away point since the root locus paths begin on the real
axis between 0 and -2 and end along the asymptotes. For this example there is also a
break-in point since the zero lies on the real axis and must have one path approach it
as K goes to infinity. To the left of the zero is also part of the root locus plot (an
asymptote), and two paths must come together at this break-in point.
To find the points, solve the characteristic equation for K and take the deri-
vative with respect to s. The characteristic equation is found by closing the loop and
results in the following polynomial:

Figure 24 Example: locating poles and zeros in the s-plane (fourth-order system).

Figure 25 Example: location of asymptotes for fourth-order system with one zero.

s⁴ + 4s³ + 6s² + 4s + Ks + 3K = 0

K = -(s⁴ + 4s³ + 6s² + 4s)/(s + 3)

Taking the derivative with respect to s (intermediate math steps required),

dK/ds = -(3s⁴ + 20s³ + 42s² + 36s + 12)/(s + 3)² = 0

s = -3.65, -1.54, -0.74 ± 0.41j


Two of the four roots are valid and coincide with the expected locations along the
real axis. Ignoring the extra pair of complex conjugate roots, we have the break-away
point occurring at s = -1.54 and the break-in point at s = -3.65.
Step 8: The angles at which the loci paths leave the poles in our system are clear
except for the complex conjugate pair at -1 ± 1j. To find the angle at which the
branches leave these poles, we will place a test point very near to s = -1 + 1j. By
summing all the angles relative to the pole near this point, we can solve for the angle
that the test point must make relative to the nearby pole to satisfy the angle
condition. Let us begin by summing all the angles on the s-plane as shown in Figure
26. Remember that poles contribute negatively and zeros contribute positively. The
sum of all angles must be an odd multiple of 180 degrees:

Figure 26 Example: calculating angle of departure on root locus plot.



-φ1 - φ2 - φ3 - φ4 + φ5 = -φ1 - 90 degrees - 135 degrees - 45 degrees + tan⁻¹(1/2)

-φ1 - 243.43 degrees = -180 degrees

φ1 = -63.4 degrees
Therefore, the angle at which the path leaves the pole is -63.4 degrees relative to
the real axis. We can use this information to see that the paths leaving the complex
poles head directly toward the ±60 degree asymptotes. This leaves the break-away
point on the real axis to wrap back around and rejoin the axis at s = -3.65. After
joining, one path progresses to the zero at -3 and the other path follows the
asymptote to infinity. We can now plot our final root locus plot as shown in
Figure 27.
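The same departure-angle bookkeeping can be automated; the sketch below reproduces the sum for the pole at -1 + 1j (zeros contribute with a plus sign, the remaining poles with a minus sign):

```python
import cmath
import math

pole = complex(-1, 1)                       # pole whose departure angle we want
other_poles = [0, -2, complex(-1, -1)]
zeros = [-3]

# Sum the angle contributions from every other root to the pole in question.
total = sum(math.degrees(cmath.phase(pole - z)) for z in zeros) \
      - sum(math.degrees(cmath.phase(pole - p)) for p in other_poles)

phi1 = total + 180                          # from -phi1 + total = -180 degrees
print(phi1)   # approximately -63.4 degrees
```

The -90, -135, -45, and +26.57 degree contributions summed by hand above fall out of the `cmath.phase` calls automatically.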
Step 9: The system is capable of becoming unstable since at least two of the
asymptotes head toward the RHP (any time we have three or more asymptotes). To find
where the asymptotes cross the imaginary axis for this example, we use the
characteristic equation developed for step 7 and apply the Routh-Hurwitz stability
criterion. Usually, adequate resolution can be achieved by approximating where the
paths cross the axis and applying the magnitude condition using our desired pole
locations on the imaginary axis to solve for the gain K where the system becomes
unstable. Since the Routh-Hurwitz method has already been demonstrated several
times, let us assume that our paths cross the imaginary axis at s = ±1j (from the
plot) and use the magnitude condition where |KG(s)H(s)| = 1.
K |s - z1| / (|s - p1| |s - p2| |s - p3| |s - p4|) = 1

K √(3² + 1²) / [√(0² + 1²) √(2² + 1²) √(1² + 0²) √(1² + 2²)] = K √10 / (√5 √5) = 1

K ≈ 1.58
Since there is no gain that can be factored out of G(s)H(s), this represents the
approximate gain to which the controller can be set before the system goes unstable.
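The magnitude-condition arithmetic at the crossing point is compact enough to verify in a few lines (a sketch using the same pole/zero set and the approximate crossing s = 1j):

```python
# K = product of pole distances / zero distance at the test point s = 1j.
s = 1j
poles = [0, -2, complex(-1, 1), complex(-1, -1)]
zero = -3

K = 1.0
for p in poles:
    K *= abs(s - p)
K /= abs(s - zero)
print(K)   # approximately 1.58
```

The distances 1, √5, 1, and √5 to the poles and √10 to the zero give K = 5/√10 ≈ 1.58, matching the hand calculation.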

Figure 27 Example: root locus plot for fourth-order system with one zero.

As a note regarding this procedure, we can include the open loop transfer function
gain in the magnitude equation during the process, in which case the K we solve for
will always be the desired proportional controller gain.

Step 10: From here we have several options. If we want to tune the system to have
the fastest possible response, we could choose the gain K where all poles are as far
left as possible. If one pole is close to the origin, the total response will still be slower.
No matter what gain we choose, this system will experience overshoot and oscillation
in response to a step input. At certain gains all four poles will be oscillatory,
although the leftmost pair will decay more quickly than the pair
approaching the imaginary axis. As we progress to the next chapter, it is these
types of decisions regarding the design of our controller that we wish to study and
develop guidelines for.

EXAMPLE 4.10
The remaining example for this section will present the Matlab code required to
solve Examples 4.6-4.9. Matlab is used to generate root locus plots equivalent to
those developed manually in each example. The plots for each earlier example are
given in Figure 28.

For Example 4.6, G(s)H(s) = 2/(s + 3)

Figure 28 Example: root locus plots using Matlab.



num6=2; %Defines the numerator
den6=[1 3]; %Defines the denominator
sys6=tf(num6,den6) %Converts num and den to transfer function (LTI variable)
rlocus(sys6) %Draws the root locus plot

For Example 4.7, G(s)H(s) = 4/(s² + 4s + 3)
num7=4; %Defines the numerator
den7=[1 4 3]; %Defines the denominator
sys7=tf(num7,den7) %Converts num and den to transfer function (LTI variable)
rlocus(sys7) %Draws the root locus plot
For Example 4.8, G(s)H(s) = 1/[s(s² + 4s + 3)]
num8=1; %Denes the numerator
den8=[1 4 3 0]; %Denes the denominator
sys8=tf(num8,den8) %Converts transfer function (LTI variable)
rlocus(sys8) %Draws the root locus plot
For Example 4.9, G(s)H(s) = (s + 3)/(s(s³ + 4s² + 6s + 4))

num9=[1 3]; %Defines the numerator
den9=[1 4 6 4 0]; %Defines the denominator
sys9=tf(num9,den9) %Converts num and den to transfer function (LTI variable)
rlocus(sys9) %Draws the root locus plot
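For readers without Matlab, the angle and magnitude conditions behind these plots can be checked numerically. The sketch below (plain Python, not from the text) tests whether a trial point lies on the root locus of Example 4.7 and, if so, what gain K places a closed loop pole there; the trial point −2 + j1 is my own choice on the vertical branch between the poles.

```python
import cmath
import math

# Open loop poles and zeros from Example 4.7: G(s)H(s) = 4/((s + 1)(s + 3))
poles = [-1.0, -3.0]
zeros = []           # no finite zeros
gain = 4.0           # gain already present in G(s)H(s)

def on_locus(s, tol_deg=0.5):
    """Angle condition: zero angles minus pole angles equals an odd multiple of 180 deg."""
    ang = sum(cmath.phase(s - z) for z in zeros) - sum(cmath.phase(s - p) for p in poles)
    deg = math.degrees(ang)
    offset = (deg - 180.0) % 360.0        # distance past the nearest 180-degree branch
    return min(offset, 360.0 - offset) < tol_deg

def locus_gain(s):
    """Magnitude condition: K = (product of pole distances)/(gain * product of zero distances)."""
    num = gain
    for z in zeros:
        num *= abs(s - z)
    den = 1.0
    for p in poles:
        den *= abs(s - p)
    return den / num

s = complex(-2.0, 1.0)   # trial point on the vertical branch between the poles
print(on_locus(s), locus_gain(s))
```

Here the pole vectors have lengths √2 each, so K = (√2 · √2)/4 = 0.5 at that point.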

4.4.3 Frequency Response Methods


This section examines stability using information plotted in the frequency domain. There are many parallels with the s-plane methods of the previous section. We take two approaches in this section: relating the information from the s-domain to the frequency domain, and developing and presenting various tools that work directly in the frequency domain. In general, Bode, Nyquist, and Nichols plots are all used when discussing stability in the frequency domain. Since the information is the same and just presented in different forms, most of this discussion centers on the common Bode plot techniques and relates these to the other plots. Nyquist plots, sometimes called polar plots, can be drawn directly from the data used to construct the Bode plots but have the advantage of making it easy to determine stability with a quick glance. Nyquist plots also allow closed loop performance estimates through the use of M and N circles. Nichols plots simply plot the loop gain in decibels versus the phase angle in degrees and, when plotted on specially marked grids, allow the actual closed loop performance parameters to be found. Each method has certain advantages and disadvantages. For each type of plot, however, two important parameters are measured that relate to system stability: gain margin and phase margin.
4.4.3.1 Gain Margin and Phase Margin in the Frequency Domain
For a Bode plot the gain and phase margins were defined previously and are generally measured in dB and degrees, respectively. The gain margin (GM) is found by finding the frequency at which the phase angle is −180 degrees and measuring the distance that the magnitude line is below 0 dB. The measurement of phase and gain margin is shown in Figure 29.
If the magnitude plot is not below the 0 dB line when the system is at −180 degrees, the system is unstable. It can be thought of like this: if the system is 180 degrees out of phase, the previous cycle adds to the new one. This is similar to pushing a child on a swing when each push is timed to add to the existing motion. Thus, if the magnitude is above 0 dB at this point, each cycle adds to the previous one and the oscillations grow, making the system unstable. If the magnitude ratio is less than 1, then even though each push is still 180 degrees out of phase, the output amplitude does not grow larger. The phase margin complements the gain margin but starts with a magnitude condition and checks the corresponding phase angle. When the magnitude plot crosses the 0 dB line, the phase margin is calculated as the distance the phase angle is above −180 degrees. At the point of marginal stability, these two points are one and the same (for most systems; see later in this section).

4.4.3.2 Relation of Poles and Zeros in the s-Plane to Bode Plots


Before we progress too quickly, let us relate this concept of stability to what we learned while using the s-plane. If we know the open loop poles and zeros and locate them in the s-plane, it is easy to see how the magnitude and phase relationships in the frequency domain relate to those locations. Knowing from earlier that for a Bode plot we let s = jω, increase ω, and measure/record the magnitude and phase relationships, we can now duplicate the same process in the s-plane. Given three poles and a zero in the s-plane as Figure 30 shows, let us see how the equivalent Bode plot would be developed by using the angle and magnitude skills from the previous section.
When calculating the magnitude and phase for various values of ω along the axis, we simply multiply the lengths of the vectors from each zero to the test point jω and divide by the product of the lengths of the vectors from each pole to the test point. For Figure 30 the magnitude value is

Figure 29 Bode plot gain and phase margin.



Figure 30 The s-plane to Bode plot relationships.

Magnitude = |d| / (|a| |b| |c|)
and the phase angle is
Phase = φd − φa − φb − φc
Just from this simple example, we can see the relationships between the two plots:
• For any positive ω, the pole at zero contributes a constant −90 degrees of phase. Since a pole at zero is simply an integrator, this confirms how an integrator adds phase in the frequency domain.
• For each pole or zero not at the origin, the angle of contribution starts at zero and progresses to 90 degrees (negative for poles, positive for zeros). As we saw in root locus plots, poles add negatively and zeros add positively. This is exactly the relationship attributed to first-order terms in the numerator and denominator when developing Bode plots.
• As ω increases, so does the length of each vector connecting the poles and zero to it, and the overall magnitude decreases. The more poles we have, the quicker the denominator grows and the larger the slope of the high frequency asymptote. Once again, this confirms our experience with Bode plots.
• Finally, since we need to reach −180 degrees of phase angle before the system can become unstable, we need at least three more poles than zeros. With the difference being only two, the maximum phase angle only approaches −180 degrees as ω approaches infinity. This confirms what we found when we looked at root locus plots, where for any system with n − m ≥ 3 the asymptotes cross into the right-hand plane. Similarly, the only way for a Bode plot to show an unstable system is if this difference is also three or greater.
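The vector computation above is mechanical enough to script. The sketch below (Python, not from the text; the pole-zero set is a hypothetical stand-in for Figure 30, one zero and three poles including one at the origin) evaluates a Bode point directly from the pole and zero vectors and cross-checks it against evaluating the transfer function.

```python
import cmath
import math

# Hypothetical pole-zero set standing in for Figure 30: one zero and three poles
zeros = [-3.0]
poles = [0.0, -1.0, -2.0]

def bode_point(w):
    """Magnitude |d|/(|a||b||c|) and phase (phi_d - phi_a - phi_b - phi_c) at s = jw."""
    s = complex(0.0, w)
    mag, phase = 1.0, 0.0
    for z in zeros:            # zero vectors multiply the magnitude and add phase
        mag *= abs(s - z)
        phase += cmath.phase(s - z)
    for p in poles:            # pole vectors divide the magnitude and subtract phase
        mag /= abs(s - p)
        phase -= cmath.phase(s - p)
    return mag, math.degrees(phase)

mag, ph = bode_point(1.0)

# Cross-check by evaluating G(jw) = (s + 3)/(s (s + 1)(s + 2)) directly
s = complex(0.0, 1.0)
direct = (s + 3.0) / (s * (s + 1.0) * (s + 2.0))
print(mag, ph)
```

For this particular set, ω = 1 happens to be the 0 dB crossover: the zero vector has length √10 and the pole vectors multiply to 1 · √2 · √5 = √10, so the magnitude is exactly one.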
To complete the picture, we must remember that a root locus plot is the result of closing the loop and varying the gain K in the system; most Bode plots are found using open loop input/output relationships, and these analogies from the s-plane to the Bode plot were determined from the open loop poles. The question then becomes: how can closed loop system performance be determined from open loop Bode plots?
In most cases we can use the gain margin and phase margin defined above to predict how the system will respond when the loop is closed. Gain and phase margins are easy to measure, but the open loop system itself must be stable. As we will see, this method does require some caution because some systems are not correctly diagnosed when using the gain margin to determine system stability. It is also possible, and sometimes desirable, to use Nyquist and Nichols plots to determine the closed loop system characteristics in the frequency domain. Of course, we can also just close the loop and construct another Bode plot to examine the closed loop response characteristics. With much of the design work being done on computers, many manual methods are finding less use.
4.4.3.3 Stability in the Frequency Domain
In this section we further examine the concept of system stability using the gain margin and phase margin measurements as defined above. Recalling Figure 29, where the margins are defined, we should recognize that if we were to increase the gain K in the system, the magnitude plot would shift vertically while the phase angle plot would not change at all. Since the gain margin is the distance below 0 dB when the system is −180 degrees out of phase, increasing the gain K by the amount of the gain margin will bring us to the point of marginal stability (0 dB gain margin). For systems with no more than two orders of difference between the denominator and numerator, the phase angle never reaches −180 degrees and the gain margin cannot be measured. Of course, recalling this case in root locus plots, there were two asymptotes and the system never became unstable as the gain went to infinity. For systems where the phase angle does cross −180 degrees, we are able to increase K to where the system becomes unstable. For a Bode plot using a gain K equal to the gain margin, this marginal stability condition is shown in Figure 31.
Since multiplying factors add linearly on Bode plots, the system becomes marginally stable when the existing system gain is multiplied by another gain of 1.3. In this example, both the gain margin and phase margin approach zero at the same time and in the same place. When this happens, both the phase and gain margin are good indicators of system stability. One problem that may occur is shown in Figure 32, where the phase angle is the only indicator accurately telling us that the system is unstable. The gain margin, in error, predicts that the system is stable.

Figure 31 Eects of gain K on Bode plot stability margins.



Figure 32 Dierences between gain and phase margin with increasing phase.

Therefore, we see that although the gain margin indicates a stable system, the phase margin demonstrates that in fact the system is unstable. Even though the gain margin is often described as the increase in gain possible before the system becomes unstable, the phase margin is a much more reliable indicator of system stability. For most systems the two measures of stability correlate well, as can be confirmed by examining the Bode plot. If we recall the section on nonminimum phase systems, we saw how delays in the system change the phase angle but not the magnitude lines on Bode plots. Another way to consider the phase margin, then, is as a measure of how tolerant the stability is to delays in the system.
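That interpretation can be made quantitative: a pure transport delay T contributes phase −ωT radians without changing magnitude, so the loop tolerates roughly T_max = PM/ωc of added delay before going unstable. A sketch (Python, not from the text; the margin values are assumed for illustration, and happen to match those Matlab reports in Example 4.12):

```python
import math

# Assumed illustrative margins: PM = 53.4 deg at a gain crossover of 0.8915 rad/sec
pm_deg = 53.4
wc = 0.8915          # rad/sec

# A delay T adds phase -wc*T at the crossover; instability when that consumes the margin
t_max = math.radians(pm_deg) / wc
print(t_max)         # maximum tolerable delay, seconds (about one second here)
```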
Nyquist plots contain the same information but plotted differently. In fact, the same gain and phase measurements are made. Since the Nyquist plot combines the magnitude and phase relationships, it becomes very easy to see whether or not a system is stable. As long as the system is open loop stable (no poles in the RHP when K = 0), the Nyquist stability theorem is easy to apply and use. If there are zeros or poles in the RHP, the mathematical proofs become much more tedious and subject to many constraints. What follows here is a very cursory and conceptual introduction to get us to the point where we at least understand what a Nyquist plot tells us regarding the stability of the closed loop system using the open loop frequency data. To begin with, let us revisit the s-plane as shown in Figure 33.
What we need to picture are the angles that the various poles and zeros go through as we move a test point s around the contour enclosing the RHP. If we let the contour begin at s = 0, progress up the imaginary axis to jω = j∞, follow the semicircle (with radius = ∞) around to the negative imaginary axis, and return up to s = 0, then we have mathematically included the entire RHP. If a pole or zero is in the RHP, the angle it makes with the point s moving along the contour line will make a complete circle of 360 degrees. Any poles or zeros not in the right-hand plane contribute a net angle rotation of zero. Finally, let our mapping be our characteristic equation,

Figure 33 Closed contour of RHP in the s-plane.

F(s) = 1 + G(s)H(s)

Plot G(s)H(s), the open loop transfer function, on the Nyquist plot, and the point of interest relative to the roots of our system occurs at the point −1:

G(s)H(s) = −1

The important result is this: when we developed our Nyquist plot earlier, we let ω begin at zero and increase until it approached infinity (taking the data from the Bode plot in Sec. 3.5.2). Thus, we have just completed one half of the contour path. The path of ω from negative infinity is just the reverse, or mirror, image of our existing Nyquist plot. Now, if we look at the point −1 on the Nyquist plot and count the number of times the plot circles that point, we can draw conclusions about the stability of our closed loop system. The concept of using the mapping of the RHP and checking for the number of encirclements about the −1 point is derived from the theorem known as Cauchy's principle of the argument. If there are no poles or zeros in the RHP, the −1 point will never have a circle around it (including the mirror image of the common Nyquist plot). There are several potential problems. One problem occurs because the angle contributions for poles and zeros are opposite, and if one pole and one zero are in the RHP, their angles will cancel each other during the mapping. The difference is in the direction, as the angle from a pole in the RHP circles the −1 point in the counterclockwise direction and that from a zero in the clockwise direction. A second problem is more mathematical in nature: if there are any poles or zeros on the imaginary axis, the theorem reaches points of singularity at these locations. The normal procedure is to make a small deviation around these points. The Cauchy criterion can now be stated: the number of times that G(s)H(s) encircles the −1 point is equal to the number of zeros minus the number of poles of 1 + G(s)H(s) inside the contour (picked to be the entire RHP). Encirclements are counted positive when they are in the same direction as the contour path. This allows us to write the Nyquist stability criterion as follows:
A system is stable if Z = 0, where

Z = N + P

where Z is the number of roots of the characteristic equation 1 + G(s)H(s) in the RHP, N is the number of clockwise encirclements of the point −1, and P is the number of open loop poles of G(s)H(s) in the RHP.

Adding the mirror image to the Nyquist plot developed earlier, shown in Figure 34, allows us to now apply the theorem and check for system stability. A quick inspection reveals that the closed loop system will be stable, since the plot never encircles the −1 point. Since it never circles the point CW or CCW, there are neither poles nor zeros in the RHP. Remember that in general the top half of the curve is not shown and that including it may help you visualize the number of times the path encircles −1.
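The encirclement count itself can be sketched numerically: walk the Nyquist contour, accumulate the change in angle of 1 + G(s)H(s), and round the total to a whole number of turns. The Python below (not from the text; contour radius, indentation size, and point counts are my own choices) does this for the Example 4.11 loop transfer function 8/(s(s + 2)(s + 4)), indenting into the RHP around the pole at the origin.

```python
import cmath
import math

def GH(s, k=8.0):
    """Open loop transfer function of Example 4.11, with adjustable loop gain k."""
    return k / (s * (s + 2.0) * (s + 4.0))

def nyquist_contour(radius=1e4, eps=1e-3, n=20000):
    """Clockwise contour enclosing the RHP, indented into the RHP around the origin pole."""
    pts = []
    for i in range(n):  # small semicircle around the origin, -90 -> +90 degrees
        pts.append(eps * cmath.exp(1j * (-math.pi / 2 + math.pi * i / n)))
    for i in range(n):  # up the positive imaginary axis, log-spaced
        pts.append(1j * eps * (radius / eps) ** (i / n))
    for i in range(n):  # large semicircle, +90 -> -90 degrees
        pts.append(radius * cmath.exp(1j * (math.pi / 2 - math.pi * i / n)))
    for i in range(n):  # back along the negative imaginary axis toward the origin
        pts.append(-1j * radius * (eps / radius) ** (i / n))
    pts.append(pts[0])  # close the contour
    return pts

def cw_encirclements_of_minus_one(k):
    """N in Z = N + P: clockwise encirclements of -1 by GH along the contour."""
    pts = nyquist_contour()
    total = 0.0
    prev = 1.0 + GH(pts[0], k)
    for s in pts[1:]:
        cur = 1.0 + GH(s, k)
        total += cmath.phase(cur / prev)    # small incremental angle, never wraps
        prev = cur
    return -round(total / (2.0 * math.pi))  # clockwise turns counted positive

print(cw_encirclements_of_minus_one(8.0))   # 0: closed loop stable at the original gain
print(cw_encirclements_of_minus_one(60.0))  # 2: two closed loop poles in the RHP
```

With no open loop poles in the RHP (P = 0), zero encirclements means Z = 0 and the closed loop is stable; raising the gain past the margin (e.g., k = 60) produces two clockwise encirclements and two unstable closed loop poles.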
To conclude this section, let us connect what we have learned about root locus plots, Bode plots, and Nyquist plots to understand how the stability issues are related. With the Bode plot we already defined and discussed the use of gain margin and phase margin as measures of system stability. Moving to the Nyquist plot allows the same measurements and comments to apply. If we consider the process of taking a Bode plot and constructing a Nyquist plot, the gain and phase margin locations are easily reasoned out. The radius of the Nyquist plot is the magnitude (no longer in dB) and the angle from the origin is the phase shift. The gain margin then falls on the negative real axis, since the phase angle where the plot crosses it is −180 degrees. The magnitude at this point cannot be greater than one if the system is stable, so the distance that the plot is inside the −1 point is the gain margin. This also confirms the Nyquist stability theorem just developed, since if the plot crosses to the right of −1 (positive gain margin) the path never encircles the −1 point and the theorem also confirms that the system is stable. Where the theorem sees extended use is when multiple loops are found and the gain margin is less clear.
The phase margin on the Nyquist plot occurs where the distance of the plot from the origin is equal to one. This corresponds to the crossover frequency (0 dB) on the Bode plot. The amount by which the angle from the origin falls short of −180 degrees is the phase margin. These measurements are shown for a portion of a Nyquist plot in Figure 35. Since the phase equals −180 degrees on the negative real axis, the phase margin, φ, is the angle between the negative real axis and the line from the origin to the point where the plot crosses the circle defined as having a radius of one. Remember that on the Bode plot this corresponds to the

Figure 34 Nyquist stability theorem example plot.



Figure 35 Gain and phase margins on Nyquist plots.

frequency where the magnitude plot passes through 0 dB (the crossover frequency, ωc).
The gain margin is represented linearly (not in dB) and can be found from the ratio between lengths a and b, where

K2/K1 = (a + b)/b = 1/b

GM (dB) = 20 log(K2/K1) = 20 log K2 − 20 log K1

Since the axes are now linear, the increase in gain is simply the ratio of the lengths between the point at −1 and where the line crosses the real axis. In other words, if a gain of K1 gets the line to cross as shown in the figure (a distance b from the origin), then K2 is the gain required (allowed) to put us a distance |a + b| = 1 from the origin before the system goes unstable. Since this data is plotted linearly, the ratio of the gains is equal to the ratio of the lengths. For example, if b = 1/2 and the current gain K1 on the system is 5, then K2 can be twice K1, or equal to 10, before the system becomes unstable and the line moves to the left of −1. To report the gain margin with units of decibels, we can take the log of the ratio, or the difference between the logs of the two gains, and multiply by 20.
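The arithmetic in that example can be sketched directly:

```python
import math

b = 0.5          # distance from the origin to the negative-real-axis crossing
k1 = 5.0         # current loop gain

k2 = k1 / b                      # gain that pushes the crossing out to -1
gm_ratio = k2 / k1               # linear gain margin, (a + b)/b = 1/b
gm_db = 20.0 * math.log10(gm_ratio)

print(k2, gm_ratio, gm_db)       # 10.0, 2.0, and about 6.02 dB
```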
To summarize issues regarding stability in the frequency domain, it is better to
rely on phase margin than gain margin as a measure of stability. In most systems
they will both provide equivalent measures and converge to the same point on both
the Bode and Nyquist plots when the system becomes marginally stable. Under some
conditions this is not true and the gain margin may indicate a stable system when in
fact the system is unstable. Gain margin is often thought of as the amount of possible
increase in gain before the system becomes unstable. This is easy to visualize on Bode
plots since only the vertical position of the magnitude plot is changed. Phase margin
is commonly related to the amount of time delay possible in the system before it
becomes unstable. Time delays change the phase and not the magnitude of the
system, and the system is classified as a nonminimum phase system. Finally, both
Bode plots and Nyquist plots contain the same information but in different layouts.
The concepts of stability margins apply equally to both. In addition, Nyquist plots
can be extended even further using the Nyquist stability theorem to determine if

Figure 36 Example: comparison of stability criterion.

there are any poles or zeros in the right-hand plane. The next example seeks to
review the measures of stability used in the different system representations and
show that they all convey similar information, each with different strengths and
weaknesses.

EXAMPLE 4.11
For the system represented by the block diagram in Figure 36:
a. Develop the root locus, Bode, and Nyquist plots.
b. Determine the gain K where the system becomes unstable using
1. The root locus plot.
2. The gain margin from the Bode plot.
3. The gain ratio from the Nyquist plot.
c. Draw each plot again using the new gain.
Part A: To develop the root locus plot, we follow the guidelines presented in the previous section. The system has three poles (0, −2, and −4) and no zeros. Therefore it will have three asymptotes with angles of ±60 and 180 degrees. The asymptotes intersect the real axis at s = −2 and the break-away point is calculated to be at s = −0.845. This matches well with the valid sections of real axis, which include the segment between the poles at 0 and −2 and the region to the left of the pole at −4. This allows the root locus plot to be drawn as shown in Figure 37. Since the system being examined has three orders of difference between the denominator and numerator, it will go unstable as K becomes sufficiently large.
Plotting the different open loop factors found in the transfer function develops the equivalent Bode plot. We have an integrator and two first-order factors, one with

Figure 37 Example: root locus plot for stability comparison.



τ = 0.5 seconds and one with τ = 0.25 seconds. This means that we have a low frequency asymptote of −20 dB/dec, a break to −40 dB/dec at 2 rad/sec, and a break to the high frequency asymptote of −60 dB/dec at 4 rad/sec. The phase angle begins at −90 degrees and ends at −270 degrees. The resulting Bode plot with the gain and phase margins labeled is shown in Figure 38.
Finally, we can develop the Nyquist plot from the data contained in the Bode plot just developed. At very low frequencies the denominator approaches zero and the steady-state gain goes to infinity. The initial angle on the Nyquist plot begins at −90 degrees with a final angle of −270 degrees. The distance from the origin is equal to 1 at the crossover frequency (M = 0 dB), greater than 1 at lower frequencies, and less than 1 at higher frequencies. The magnitude goes to zero as the frequency approaches infinity. This can be represented as the Nyquist plot given in Figure 39.

Part B: With the three plots now completed, let us turn our attention to determining from each plot where the system goes unstable. For the root locus plot the preferred method is to apply the Routh-Hurwitz criterion and solve for the gain K where the system crosses over into the RHP, thus becoming unstable. The characteristic closed loop equation is

CE = s³ + 6s² + 8s + 8K

The Routh-Hurwitz array becomes

s³    1              8
s²    6              8K
s¹    (48 − 8K)/6    0
s⁰    8K

Figure 38 Example: Bode plot for stability comparison.



Figure 39 Example: Nyquist plot for stability comparison.

When K is greater than or equal to 6, the third term in the first column becomes zero or negative and the system is no longer stable.
From the Bode plot, where the gain margin has been graphically determined as 15 dB, we see that if the magnitude plot is raised vertically by 15 dB the system becomes unstable. For this system both the gain margin and phase margin go to zero at the same point. We can find what increase in gain is allowable by solving for the gain resulting in 15 dB of increase (gains multiply linearly but add on the Bode plot due to the dB scale):

20 log K = 15 dB

K = 10^(15/20) = 5.6

Since the Bode plot uses approximate straight-line asymptotes, the gain K varies slightly from the gain solved for with the root locus plot and the Routh-Hurwitz criterion.
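The exact value can be checked by evaluating G(jω) at the phase crossover. For 8/(s(s + 2)(s + 4)) the phase is −90 − atan(ω/2) − atan(ω/4) degrees, which reaches −180 where (ω/2)(ω/4) = 1, i.e., ω = √8. A short sketch (Python, not from the text):

```python
import math

w = math.sqrt(8.0)                   # phase crossover: atan(w/2) + atan(w/4) = 90 deg
mag = 8.0 / (w * math.hypot(w, 2.0) * math.hypot(w, 4.0))
gm_db = -20.0 * math.log10(mag)      # exact gain margin in dB
k_limit = 1.0 / mag                  # allowable gain multiplier before instability

print(gm_db, k_limit)                # about 15.563 dB and 6.0
```

The magnitude there is exactly 1/6, so the exact allowable gain multiplier is 6, matching the Routh-Hurwitz result; the 15 dB read from the asymptotic plot is only an approximation of 20 log 6 ≈ 15.563 dB.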
The allowable gain can also be determined from the Nyquist plot by measuring the ratio of one over the distance between the origin and the point where the plot crosses the negative real axis. Using the gain approximated from the Bode plot implies that the crossing should be about 1/5 of the way between 0 and −1 on the negative real axis.

Part C: To complete this example, let us redraw the plots after the total gain in the system is multiplied by 6. Only the Bode plot and the Nyquist plot need to be updated, since the gain is already varied when creating the root locus plot, which therefore already contains the condition where the system goes unstable. In other words, we move along the root locus paths by changing the gain K, while the Bode and Nyquist plots will be different for each unique point along the path. In this part of the example, our goal is to plot the Bode and Nyquist plots corresponding to the point where the root locus plot crosses into the RHP. The root locus plot, included for comparison, is shown in Figure 40 with the Bode and Nyquist plots at the marginal stability condition, K = 6.
To conclude this section, let us work the same example except that we will
answer the questions using Matlab to confirm and plot our results.

Figure 40 Example: Comparison of marginal stability plot conditions: (a) root locus plot;
(b) Nyquist plot; (c) Bode plot.

EXAMPLE 4.12
For the system represented by the block diagram in Figure 41, use Matlab to solve
for the following:

a. Develop the root locus, Bode, and Nyquist plots.


b. Determine the gain K where the system becomes unstable using
1. The root locus plot.
2. The gain margin from the Bode plot.
3. The gain ratio from the Nyquist plot.
c. Draw each plot again using the new gain.

Part A: To generate the plots using Matlab, we can define the system once and use the command sequence shown to generate each plot. Each command used has many more options associated with it. To see the various input/output options, type

>> help command

and Matlab will show the comments associated with each command.

%Example of Root Locus, Bode, and Nyquist
%Stability Criterion

%Define system
num=8;
den=[1 6 8 0];
sys=tf(num,den)

rlocus(sys) %Develop the root locus plot

%Find gain at marginal stability
rlocfind(sys) %Place cursor at location, returns K

figure; %Opens a new plot window

bode(sys) %Develop the Bode plot
%Measure the stability margins
margin(sys) %Places margins on the plot

figure; %Opens a new plot window

nyquist(sys) %Develop the Nyquist plot

The rlocus command returns the root locus plot shown in Figure 42.

Figure 41 Example: comparison of stability criterion using Matlab.



Figure 42 Example: Matlab root locus plot.

Using the rlocfind command brings up the current root locus plot and allows us to place the cursor on any point of interest along the locus and find the associated gain K at that point. Placing it where the paths cross into the RHP returns K = 6, verifying our analytical solution; it also returns the pole locations that were clicked on.
The bode command generates the Bode plot for our system and, when followed by margin, will calculate and label the gain and phase margins, as shown in Figure 43. Here we see that the gain margin equals 15.563 dB, close to our approximation of 15 dB. The phase margin is 53.4 degrees at a frequency of 0.8915 rad/sec. If we calculate the gain K required to shift the magnitude plot up by 15.563 dB, we get

20 log K = 15.563 dB

K = 10^(15.563/20) ≈ 6

This gain of K = 6 agrees with the root locus plot from earlier.
The nyquist command is used to generate our final plot, as shown in Figure 44. To illustrate the condition of stability around the point −1, the axes have to be set to zoom in on the area of interest. Remember that the plot begins with an infinite magnitude at −90 degrees.
Applying the Nyquist stability criterion confirms that the system is stable. There is no encirclement of the point at (−1, j0). The gain margin is also verified, as the plot crosses the negative real axis approximately 1/6 of the way between 0 and

Figure 43 Matlab Bode plot with stability margins (GM = 15.563 dB [at 2.8284 rad/sec], PM = 53.411 deg [at 0.8915 rad/sec]).

Figure 44 Example: Matlab Nyquist plot.



−1 on the negative real axis. This means that we can increase the gain in our system six times before the plot moves to the left of the point at −1.
Finally, if we increase the numerator of our system from 8 to 48 (increasing the gain by a multiple of K = 6), then we can use Matlab to redraw the Bode (see Figure 45) and Nyquist plots. When generating the Nyquist plot in Figure 46, we can show one close-up section to verify stability and one overview plot giving the general shape. With the new gain in the system we see that on the Bode plot the gain margin and the phase margin both went to zero and the system is marginally stable. On the Nyquist plot we see that the path goes directly through (−1, j0), also confirming that our system is marginally stable. Any further increase in gain and the plot will encircle the point at −1, telling us that we have an unstable system.
By now we have a better understanding of how different representations can be used to determine system stability. Hopefully the comparisons have convinced us that the same information is simply conveyed in different representations. With
the same physical system (as in the examples) we certainly would expect each method
to find the same stability conditions. Different methods have different strengths and
weaknesses. Many times the choice of which one to use is determined by the infor-
mation available about the system and what form it is in. Different computer
packages also have different capabilities. As long as we understand how they relate,
we should be able to design using any one of the methods presented.

Figure 45 Example: Matlab Bode plot at marginal stability (GM = 0 dB, PM = 0 [unstable closed loop]).

Figure 46 Example: Matlab Nyquist plot at marginal stability.

4.4.3.4 Closed Loop Responses from Open Loop Data in the Frequency
Domain
As we discussed earlier, most frequency domain methods are done using the open
loop transfer function for the system. Ultimately, it is the goal that we close the loop
to modify and enhance the performance of the system. Since we have the information
already, albeit representing the open loop characteristics, we would like to directly
use this information and infer what the expected results are when we close the loop.
Of course we can always close the loop and redraw the plots for the closed loop
transfer function, but that duplicates some of the work already completed. This is
one advantage of the root locus plot in that the closed loop response is determined
from the open loop system transfer function and the complete range of possible
responses is quickly understood.
If we have a unity feedback system with H(s) = 1, then we can see the relationship between the open loop and closed loop system response by using the Nyquist diagram, as illustrated in Figure 47. If we have an open loop system represented by G(s) and unity feedback, then the closed loop system is given as

C(s)/R(s) = G(s)/(1 + G(s))

Figure 47 Open loop versus closed loop response with Nyquist plot.

The denominator, 1 + G(s), can also be found on the Nyquist plot as the distance from the point (−1, j0) to a point on the plot. Now we know both the numerator and denominator on the Nyquist plot, and if we calculate various values around the plot, we can construct our closed loop frequency response. In the same way our closed loop phase angle can be found as
φCL = φOL − β
where β is the angle of the vector from the −1 point to the operating point. Instead of having to perform these calculations for each point, it is common to use a Nichols chart, where circles of constant closed loop magnitude and phase angle are plotted on the graph paper. After we plot our open loop data (as done for a Nyquist plot), we mark each point where our plot crosses the constant magnitude and phase lines for the closed loop. All that remains is to simply record each intersecting point and construct the closed loop response.
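The point-by-point construction is easy to script. The sketch below (Python, not from the text; the open loop transfer function is assumed for illustration, borrowed from Example 4.11) converts open loop values G(jω) of a unity-feedback loop into closed loop magnitude and phase, exactly the information the M and N circles encode.

```python
import cmath
import math

def G(w):
    """Assumed open loop transfer function for illustration: 8/(s (s + 2)(s + 4)) at s = jw."""
    s = complex(0.0, w)
    return 8.0 / (s * (s + 2.0) * (s + 4.0))

def closed_loop_point(w):
    """Unity-feedback closed loop response T(jw) = G/(1 + G)."""
    g = G(w)
    t = g / (1.0 + g)
    return abs(t), math.degrees(cmath.phase(t))

# Sweep a few frequencies to build the closed loop frequency response point by point
for w in (0.1, 0.5, 1.0, 2.0, 5.0):
    m, ph = closed_loop_point(w)
    print(w, m, ph)
```

As expected for this type-1 loop, the closed loop magnitude approaches 1 at low frequencies and rolls off toward zero at high frequencies.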
Perhaps the most common parameters specified for open loop frequency plots (Bode and Nyquist) are the gain and phase margins, as defined and used throughout this section as measures of stability. If we have dominant second-order poles, then we can also use the gain and phase margins as indicators of closed loop transient responses in the time domain. As we will see, the phase margin directly relates to the closed loop system damping ratio for second-order systems given by the form

G(s) = ωn² / (s(s + 2ζωn))

When we close the loop, we get the common form of our second-order transfer function:

C(s)/R(s) = ωn² / (s² + 2ζωn s + ωn²)

The process of relating our closed loop response to the phase margin is as follows. If we solve for the frequency where |G(s)| is equal to 1 by letting s = jω, then we have located the point where the phase angle is to be measured. We then substitute this frequency, where the magnitude is one, into the phase relationship and solve for the phase margin as a function of the damping ratio. The result is

Phase margin: φ = tan⁻¹ [ 2ζ / ( √(1 + 4ζ⁴) − 2ζ² )^(1/2) ]

It is much more convenient for this relationship to be plotted, as in Figure 48.
The second relationship derived from the analysis summarized above relates the gain crossover frequency to the natural frequency of the system. It follows from knowing that at the gain crossover frequency the magnitude of the system is equal to one, and relating this to the natural frequency in the equations above. The ratio of the crossover frequency to the natural frequency is

ωc/ωn = ( √(1 + 4ζ⁴) − 2ζ² )^(1/2)

As before, it is useful to plot this relationship, as shown in Figure 49.
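Both curves are easy to regenerate. The sketch below (Python, not from the text) evaluates the two relationships; for ζ = 0.5 it reproduces the familiar values PM ≈ 51.8 degrees and ωc/ωn ≈ 0.786, consistent with the rule of thumb PM ≈ 100ζ.

```python
import math

def crossover_ratio(zeta):
    """wc/wn = sqrt(sqrt(1 + 4 zeta^4) - 2 zeta^2) for the standard second-order loop."""
    return math.sqrt(math.sqrt(1.0 + 4.0 * zeta ** 4) - 2.0 * zeta ** 2)

def phase_margin_deg(zeta):
    """PM = atan(2 zeta / sqrt(sqrt(1 + 4 zeta^4) - 2 zeta^2)), in degrees."""
    return math.degrees(math.atan(2.0 * zeta / crossover_ratio(zeta)))

# Tabulate a few points of the Figure 48 and Figure 49 curves
for z in (0.1, 0.3, 0.5, 0.7, 1.0):
    print(z, round(phase_margin_deg(z), 1), round(crossover_ratio(z), 3))
```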
These two plots allow us to plot the open loop Bode plot for second-order
systems and, from the phase margin and gain margin, determine the closed loop time
response in terms of the system's natural frequency and damping ratio. These para-
meters have been discussed and applied frequently in previous sections.

192 Chapter 4

Figure 48 Relationship between OL phase margin and CL damping ratio (second-order systems).
A word of caution is in order. Remember that these figures are for systems that
are well represented as second order (dominant second-order poles). For general
systems of higher orders and systems including zeros, our better alternatives are to
close the loop and redo our analysis, simulate the system on a computer, or develop a
transfer function and use a technique like root locus. If these conditions are not met,
the approximations become less certain and a more thorough analysis is warranted.

Figure 49 Relationship between OL crossover and CL natural frequencies (second-order systems).
Analog Control System Performance 193

4.5 PROBLEMS
4.1 What are the three primary (fundamental) characteristics that are used to
evaluate the performance of control systems?
4.2 What external factors determine, in part, whether or not we should consider
using an open loop or closed loop controller?
4.3 Given the physical system model in Figure 50, answer the following questions
(see also Example 6.1).
a. Construct the block diagram representing the hydraulic control system.
b. Can the system ever go unstable?
c. To decrease the steady-state error, you should increase the magnitude of
which of the following variables? [ M, B, Kv, K, Ap, a, b ]
4.4 List a possible form that a disturbance might take in each of the following
components of a control system (e.g., electrical noise, parameter fluctuation, etc.).
a. Amplifier
b. Actuator
c. Physical system
d. Sensor/transducer
4.5 A system transfer function is given below. What is the final steady-state value
of the system output in response to a step input with a magnitude of 10?
    G(s) = (5s² + 1) / (23s³ + 11s + 10)
4.6 Using the system in Figure 51, what is the initial system output at the time a
unit step input is applied, the steady-state output value, and steady-state error?
4.7 A controller is added to a machine modeled as shown in the block diagram in
Figure 52.
a. Determine the transfer function between C and R. What is the steady-state
error from a unit step input at R, as a function of K?
b. Determine the transfer function between C and D. What is the steady-state
error from a unit step input at D, as a function of K?
4.8 The transfer function for a unity feedback [H(s) = 1] control system is given
below. Determine
a. The open loop transfer function.
b. The steady-state error from a unit step input.

Figure 50 Problem: hydraulic control system.



Figure 51 Problem: block diagram of system.

c. The steady-state error from a unit ramp input.


d. The value of K required to make the steady-state error from part c equal to
zero.
    C(s)/R(s) = K(s + b) / (s² + as + b)
4.9 Given the block diagram in Figure 53, use the system type number to deter-
mine the steady-state error resulting from a
a. Unit step input at R(s).
b. Unit ramp input at R(s).
c. Unit step input at D(s).
d. Comment on the results.
4.10 Reduce the block diagram in Figure 54 to a single block. Using the final transfer
function and the FVT, determine the steady-state output of the system to a unit step
input function.
4.11 Given the block diagram model of the physical system in Figure 55 and using a
unit step input for questions pertaining to inputs R and D, answer the following
questions.
a. What is the natural frequency of the system?
b. What is the damping ratio of the system?
c. What is the percent overshoot?
d. What is the settling time (2%)?
e. What is the steady-state error from a unit step input for command R?
f. What is the steady-state error from a unit step input for disturbance D?
4.12 Using the Routh-Hurwitz criterion, determine the range of values for K that
results in the following system being stable.
    G(s) = K / (s³ + 18s² + 77s + K)
4.13 The characteristic equation is given for a system model as follows:
    CE = s³ + 2s² + 4s + K

Figure 52 Problem: block diagram of system.



Figure 53 Problem: block diagram with disturbance.

Figure 54 Problem: block diagram of system.

a. Develop the Routh array for the polynomial, leaving K as a variable in the
array and determine the range of K for which the system is stable.
b. At the maximum value of K, what are the values of the poles on the ima-
ginary axis and what is the type of response?
4.14 Given the block diagram in Figure 56, use root locus techniques to answer the
following questions.
a. Sketch the root locus plot.
b. Use the magnitude condition and your root locus plot to determine the
required gain K for a damping ratio of 0.866. Show your work.
c. Letting K = 2 for this question, what is the steady-state error due to a unit
step input at R?
4.15 Develop the root locus plot and required parameters for the following open
loop transfer function.
    GH = (s + 4) / [s(s + 3)(s² + 2s + 4)]
4.16 Develop the root locus and required parameters for the following system.

Figure 55 Problem: block diagram with disturbance.



Figure 56 Problem: block diagram of system.

    1 + KG(s)H(s) = 1 + K / (s⁴ + 12s³ + 64s² + 128s)

                  = 1 + K / [s(s + 4)(s + 4 + j4)(s + 4 − j4)] = 0
a. List the results that are obtained from each root locus guideline.
b. Give a brief sentence describing why the dominant poles assumption is or is
not valid for this system.
4.17 Develop the root locus plot and required parameters for the following open
loop transfer function.
    GH(s) = (s + 2) / [(s + 3)²(s² + 8s + 12)]
4.18 Develop the root locus plot and required parameters for the following open
loop transfer function.
    GH(s) = (s² + 6s + 8) / [(s + 7)²(s + 4)(s + 3)(s² + 2s + 10)]
After the plot is completed, describe the range of system behavior as K is increased.
4.19 Develop the root locus plot and required parameters for the following open
loop transfer function. Use only those calculations that are required for obtaining
the approximate loci paths.
    GH = [(s + 3)(s + 4)(s + 1.5 + 0.5j)(s + 1.5 − 0.5j)] / [(s + 2)(s + 1)(s + 0.5)(s − 1)]
After the plot is completed, describe the range of system behavior as K is increased.
4.20 Given the following system, draw the asymptotic Bode plot (open loop) and
answer the following questions. Clearly show the final resulting plot.
    GH(s) = 1 / (s² + 2s + 1)
a. What is the phase margin φm?
b. What is the gain margin in decibels?
c. Is the system stable?
d. Sketch the Nyquist plot.
4.21 Using the Bode plot given in Figure 57, answer the following questions.
a. What is the open loop transfer function?
b. What is the phase margin?
c. Sketch the Nyquist plot.

Figure 57 Problem: Bode plot of system.

Figure 58 Problem: Bode plot for system.

d. Can the system ever be made to go unstable if the gain on a proportional
controller is increased?
4.22 Given the following open loop Bode plot in Figure 58, develop the closed loop
second-order approximation. Show all intermediate steps. The final result should be
a second-order transfer function in the s-domain. Sketch the closed loop magnitude
(dB) and phase angle frequency response.
5
Analog Control System Design

5.1 OBJECTIVES
• Provide an overview of analog control system design.
• Design and evaluate PID controllers.
• Develop root locus methods for the design of analog control systems.
• Develop frequency response methods for the design of analog control systems.
• Design and evaluate phase lag and phase lead controllers.
• Design proportional feedback controllers using state space matrices.

5.2 INTRODUCTION
Analog controllers may take many forms, as this chapter shows. However, the
analysis and design procedures, once the transfer functions are obtained, are nearly
identical. A proportional controller might utilize a transducer, operational amplier,
and amplier/actuator and yet perform the same control action as a system utilizing
a set of mechanical feedback linkages. The movement has certainly been to fully
electronic controllers since they have several advantages over their mechanical coun-
terparts. Transducers are relatively cheap, computing power continues to experience
exponential growth, electrical controllers consume very little power, and the cost of
upgrading controller algorithms or changing parameter gains is only the cost of
design time that all updated systems would require regardless. Once the move is
made to digital, a new algorithm is installed by simply downloading it to the corre-
sponding microcontroller.
The algorithms presented here are the mainstay of many control projects today
and are capable of solving most of the problems encountered. Advanced control
algorithms certainly have many advantages, but basic controllers continue to
constitute the majority in most applications.


5.3 GENERAL OVERVIEW OF CONTROLLER (COMPENSATOR) DESIGN
It is quite common as we work in the area of control system design to see the terms
controller and compensator. For the most part, the words are meant to describe the
same thing, that is, a way to make the system output behave in a desirable way. If
any differences do exist, it could be argued that the term controller includes a larger
portion of our system. Components such as the required linkages, transducers, gain
amplifiers, etc., could all be included in the term controller. The term compensator,
on the other hand, is often applied to the portion, or subsystem, of the control
system that compensates (or modifies the behavior of) the system; thus we may
hear terms such as PID compensators and phase-lag compensators. Some reference
sources make this distinction and some do not. It poses no problem as long as we are
aware of both terms and how they might be used.
The actual compensator itself may be placed in the forward path or in the
feedback path as shown in Figures 1 and 2. When the compensator is placed in the
forward path, it is often called series compensation, as it is in series with the physical
system. Similarly, when the compensator is in the feedback path, it is often called
parallel compensation (in parallel with the physical system). In many cases the loca-
tion of the compensator, Gc , is determined by the constraints imposed on the design
by the physical system. Practical issues during implementation may make one design
option more attractive than the other. These issues might include available sensors,
system power levels, and existing signals available from the system. As shown in the
following sections, having noisy signals may lead us to implement a combination of
the two forms.
Series compensation is more commonly found in systems with electronic con-
trollers where the feedback and command signals are represented electrically. This
allows the compensator to be placed in the system where the lowest power levels are
found. This is increasingly important as we move to digital control systems where

Figure 1 Compensator placed in the forward path.

Figure 2 Compensator placed in the feedback path.



components (i.e., microprocessors) are not as capable of handling large power levels.
Parallel compensation, since it occurs in the feedback path, can sometimes
require fewer components and amplifiers since the available power levels are often
larger. For example, as we see in the next chapter dealing with implementation of
control systems, mechanical controllers can operate without any electronics and
directly utilize the physical input from the operator to control the system. The
way these systems are implemented places the compensator (mechanical linkages
in some cases) in the feedback path.
Remember that although classified as two distinct configurations, more com-
plex systems with nested feedback loops may contain elements of both. The impor-
tant thing to note is that regardless of the layout of the system, the design tools and
procedures (i.e., closing the loop, placing system poles, etc.) remain the same. The
exception is that in some systems, where the gain is not found in the characteristic
equation as a direct multiplier [1 + KG(s)H(s)], as required for using root locus
design techniques, some extra manipulation may be required to use these tools.
Finally, in keeping with the layout of this text, both s-plane and frequency
methods are discussed together when discussing the design of various controllers. To
be effective designers, we should be comfortable with both and able to see the relation-
ships that exist between the different representations. Ultimately, our compensator
should effectively modify our physical system response to give us the desired
response in the time domain, as in the time domain we see, hear, and work with
the system. Whether it is designed in using root locus or Bode plots, our design
criteria almost always relate back to a time response. As a last comment, it
would be wrong, in learning the material in this chapter (and text), to leave with the
impression that we always need to compensate our system to achieve our desired
ing a good physical system with the desired characteristics inherent in the physical
system itself. Although many poorly designed physical systems are improved
through the use of feedback controls, even better performance can be achieved
when the physical system is properly designed. In other words, a poor design of
an airplane may result in an airplane that is physically unstable. Even though we can
probably stabilize the aircraft through the use of feedback controls (and time,
money, weight, etc.), it does not hide the fact that the original design is quite flawed
and should be the first improvement made. It is generally assumed when discussing
control systems that the physical system is constrained to be the way it is and our
task is to make it behave in a more desirable manner.

5.4 PID CONTROLLERS


PID controllers are, without question, the most popular designs being used today,
and for good reason. They do many things well, cover the basic transient properties
we wish to control, and are familiar to many people. Virtually all off-the-shelf
controllers have the option of PID. PID controllers are named for their proportional-
integral-derivative control paths. One path multiplies the error with a gain (output
proportional to input), one path integrates or accumulates the error, and the deri-
vative path produces output relative to the rate of change of error. Although new
advanced controllers are continually being developed, for systems with well-defined

Figure 3 Basic PID block diagram.

physics, with little change in parameters during operation, and that are fairly linear
over the operating range, the PID algorithm handles the job as capably as most
others. The dierent modes of PID (proportional, integral, derivative) give us
options to modify all the steady-state and transient characteristics examined in
Chapter 4.
The basic block diagram representing a PID controller is shown in Figure 3.
The output of the summing junction represents the total controller output that
provides the input to the physical system. Summing the three control actions results
in the following transfer function and time equivalent equations:

    U(s)/E(s) = Kp + Ki/s + Kd s        u(t) = Kp e(t) + Ki ∫ e(t) dt + Kd de(t)/dt

Using the PID transfer function as expressed above, it is common to
combine the three blocks shown in Figure 3 into a single block as shown in Figure 4.
The easiest way to illustrate the equations is to examine the controller output when a
known error is the input. If a ramp change in error is the input to a PID controller,
then each control path will contribute the control action shown in Figure 5.
The total controller output is simply the three components
added together. The proportional term simply multiplies the error by a fixed con-
stant and only scales the error, as shown. It is the most common beginning point
when implementing a controller and usually the first gain adjusted when setting up
the controller. The integral term will always be increasing if the error is positive and
decreasing if the error is negative. Its defining characteristic is that it allows the
controller to have a non-zero output even when the error input is zero. The deriva-
tive gain only has an output when the error signal is changing. Each term is exam-
ined in more detail in the next section.
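A small numeric sketch of these three contributions, using hypothetical gains and the ramp error e(t) = t, evaluates each path analytically:

```python
# Response of each PID path to a ramp error e(t) = t (a sketch with
# hypothetical gains Kp = 2, Ki = 0.5, Kd = 0.1).
Kp, Ki, Kd = 2.0, 0.5, 0.1

def pid_terms(t):
    e = t                     # ramp error
    p = Kp * e                # proportional: a scaled copy of the ramp
    i = Ki * 0.5 * t**2       # integral of t is t^2/2: keeps growing
    d = Kd * 1.0              # derivative of t is 1: a constant offset
    return p, i, d

for t in (0.0, 1.0, 2.0):
    p, i, d = pid_terms(t)
    print(f"t = {t}:  P = {p}, I = {i}, D = {d}, total = {p + i + d}")
```

The printed values trace the same shapes Figure 5 shows: a scaled ramp from the proportional path, a growing parabola from the integral path, and a constant from the derivative path.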

Figure 4 Single block PID representation.



Figure 5 PID Controller output as function of a ramp input error.

5.4.1 Characteristics of PID Controllers


The proportional control action is generally assumed to be the beginning when
closing a feedback loop. While the integral gain could function alone (and some-
times does), the derivative gain needs to be supported by the proportional gain.
Proportional controllers have worked well in many applications and generally
allow the controller to be tuned over a wide range. Its primary disadvantages
are the inability to change the shape of the root locus plot (i.e., to arbitrarily
choose some natural frequency and damping ratio combination) and the
occurrence of steady-state errors for all type 0 systems. Varying the proportional
gain will only move the system's poles along the root loci path defined by the
original system poles and zeros. (The same observation is true in the frequency
domain; the shape of a Bode plot is not changed with a proportional gain, only the
vertical location.) In addition, since some error is necessary to have a non-zero
signal to the actuator, some steady-state error is always present unless the physical
system contains an integrator that allows the system to maintain the current setting
with a zero signal to the actuator. As the proportional gain is increased, the
steady-state errors decrease but the oscillations increase. Thus, the designers job
is to balance these two trade-os so common in closed loop controllersstability
versus accuracy.
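The accuracy half of that trade-off can be sketched with the final value theorem. Assuming a hypothetical type 0 plant G(s) = 1/(s + 1) under unity feedback, the steady-state step error is e_ss = 1/(1 + Kp·G(0)):

```python
# Steady-state error of a unity feedback proportional controller on a
# hypothetical type 0 plant G(s) = 1/(s + 1), unit step command.
# By the final value theorem, e_ss = 1 / (1 + Kp * G(0)).
def plant_dc_gain():
    return 1.0 / (0.0 + 1.0)   # G(0) for G(s) = 1/(s + 1)

def steady_state_error(Kp):
    return 1.0 / (1.0 + Kp * plant_dc_gain())

for Kp in (1, 10, 100):
    print(f"Kp = {Kp:4d}  e_ss = {steady_state_error(Kp):.4f}")
```

The error shrinks as Kp grows but never reaches zero, which is the accuracy side of the trade-off; the stability side does not show up for this first-order plant but does for higher-order ones.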
The integral gain, when used in conjunction with a proportional gain, can be
used to eliminate the steady-state errors. This can be intuitively explained by realiz-
ing that as long as an error is present in the system the integral gain is collecting
the error and the correction signal to the actuator continues to grow until the error is
reduced to zero. If an integrator is added to a type 1 system, then the error from a
ramp input can be driven to zero, and so forth (it becomes a type 2 system). In these
situations, however, there are two poles at the origin of the s plane and stability may
be compromised. If the integral gain is used alone, it is hard to achieve decent tran-
sient responses with timely reduction of the steady-state errors. With the integral
gain large enough to respond to the transients, a problem arises in that the integrator
accumulates too much error, overshoots the command, and repeats the process. This
effect is called integral windup, and many controllers place limits on the error accu-
mulation levels in the integrator. Integral resets are sometimes implemented to reset

the error in the integral term to zero. Many times it is possible to determine whether
the oscillations are from integral windup or excessive proportional gain by noticing
the frequency of oscillation. The proportional gain, when too large, causes the
system to oscillate near its natural frequency, while the integral windup frequency
is commonly much lower and less aggressive. The general tuning procedure is to
use the proportional gain to handle the large transient errors and the integral gain to
eliminate the steady-state errors with only minor effects on the stability.
Implementing the integral portion of a controller is common and generally proves
to be quite effective.
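One common windup guard is sketched below: a PI update whose accumulated error is clamped to a bound. The gains and limit are hypothetical illustration values, not a prescription:

```python
# Sketch of a PI update with integrator clamping, one common guard
# against integral windup (gains and limit are hypothetical).
I_LIMIT = 10.0          # bound on the accumulated integral state

def make_pi(Kp, Ki, dt):
    state = {"integral": 0.0}
    def step(error):
        state["integral"] += error * dt
        # clamp the accumulated error so the integral term cannot wind up
        state["integral"] = max(-I_LIMIT, min(I_LIMIT, state["integral"]))
        return Kp * error + Ki * state["integral"]
    return step

pi = make_pi(Kp=2.0, Ki=1.0, dt=0.01)
# A long-lasting large error would normally wind the integrator far up;
# with the clamp, the integral contribution saturates at Ki * I_LIMIT.
for _ in range(20_000):
    u = pi(5.0)
print(u)   # prints 20.0 once the integral state has clamped at 10.0
```

When the error finally reverses sign, the clamped integrator unwinds quickly instead of forcing the long low-frequency overshoot cycle described above.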
Derivative gains, of the three discussed, are capable of simultaneously helping
and hurting the most. A derivative control action can never be used alone since it
only has an output when the system is changing. Hence, a derivative controller has
no information about the absolute error in the system; you could be a mile from your
desired position and as long as you do not move, the derivative output is zero and
you remain a mile from where you want to be. Therefore, it must always be used in
conjunction with proportional controllers. The benefit is that a derivative gain, since
it adds a zero to the system, can be used to attract errant loci paths and thus
contribute to the stability of the system. It anticipates large errors and attempts to
correct them before they develop, whereas proportional and inte-
gral gains are reactive and only respond after an error has developed. In many cases
it acts as effective damping in the system, simulating damping effects without the
energy losses associated with adding a damper.
This is the advantage; the disadvantage is that in practice it tends to saturate
actuators and amplify noisy signals. If the system experiences a step input, then by
definition the output of the derivative controller is infinite and will saturate all the
controllers currently available. Second, since the output is the derivative of the error,
or the slope of the error signal, the derivative output can have severely large swings
between positive and negative values and cause the system to experience chatter. The
net benefit is removed and the derivative term, although stabilizing the overall sys-
tem, injects enough amplified signal noise into the entire system that it chatters.
The effect is shown in Figure 6 and is why a low-pass filter is commonly used with
derivative controllers.
Even though the trend of the error signal always has a positive slope, the
derivative of the actual error signal has large positive and negative signal swings.
The low-pass filter should be chosen to allow the shape of the overall signal to

Figure 6 Noise amplication with derivative term in controller.



remain the same while the higher frequency noise is filtered and removed. This allows
the actual derivative output to more closely approach the desired derivative.
Several variations of the PID controller are used to overcome the problems
with the derivative term. Even if we could remove all the noise from the signal, we
would still saturate the amplifier/actuator any time a step input occurs on the com-
mand. Since the physical system does not immediately respond, the error input to the
controller also becomes a step function. The derivative term, in response to the step
input, attempts to inject an impulse into the system. When we switch to different set
points, this resulting impulse into the system is sometimes termed set-point-kick. If
our actual feedback signal is noise free, then we can counteract the step input
saturation problem using approximate derivative schemes to modify the derivative
term, Kd s, to become

    Kd s ≈ Kd · s / [(1/N)s + 1]

where the value of N can be adjusted to control the effects. A common value is
N = 10. This makes a step input cause not an infinite output but rather a decay-
ing pulse, as shown in Figure 7, where the step response of the modified approximate
derivative term is plotted for several values of N. When Kd = 1, as shown in Figure
7, the effects of N are easily seen since the peak value is simply equal to the value of

Figure 7 Step responses of approximate derivative function for several values of N.



N for that plot. Notice though that the time scales are also shifted, and as N
increases the response decays more quickly (the time constant is 1/N). As the
value of N approaches infinity, the approximate derivative output approaches
that of a true derivative. The output of a true derivative would have an infinite
magnitude for an infinitesimal amount of time.
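Since the modified term's step response is Kd·N·e^(−Nt), the trade-off is easy to tabulate (a sketch with Kd = 1):

```python
import math

def approx_derivative_step(t, N, Kd=1.0):
    # Step response of Kd * s / ((1/N) s + 1): a pulse of height Kd*N
    # that decays with time constant 1/N, instead of a true impulse.
    return Kd * N * math.exp(-N * t)

for N in (2, 5, 10):
    tau = 1.0 / N
    print(f"N = {N:2d}  peak = {approx_derivative_step(0.0, N):5.1f}  "
          f"value at t = 4/N: {approx_derivative_step(4 * tau, N):.3f}")
```

Larger N gives a taller, narrower pulse, so the output approaches a true impulse as N grows while remaining bounded for any finite N.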
Other methods can be used to deal with both set-point-kick (step inputs) and
noise in the signals. Some of these methods, summarized here, require a
change to the structure of our system and additional components. One
alternative to deal with the problem of dierentiating a step input is to move the
derivative term to the feedback path. Since the output of the physical system will
never respond as abruptly as a step, the derivative term is less likely to saturate the
components in the system. In other words, the output of the system will not have a
slope equal to infinity as a true step function does. This modification, termed PI-D,
and the resulting block diagram are shown in Figure 8.
Now what we see is that the error is fed forward so that it is still multiplied
directly by Kp, is integrated by Ki/s, and the derivative term only adds the effects from
the rate of change of the physical system output, not of the error signal. To even
further reduce abrupt changes in the signal that the controller sends to the system,
we might choose to also include the proportional term in the feedback loop as shown in
Figure 9. The I-PD is similar to the PI-D except that now only the integral term
directly responds to the change in error. Even if a step input is introduced into the
system, the integral of the step is a ramp and is relatively easy for the system to respond
to without saturating. The proportional term is fed directly through from the feed-
back along with the derivative of the feedback.
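A discrete-time sketch of the derivative-on-measurement idea behind PI-D (gains are hypothetical): the D term differentiates the feedback y rather than the error r − y, so a step change in the command r produces no derivative kick:

```python
# PI-D sketch: derivative acts on the measurement, not the error,
# so a set-point step produces no derivative spike (hypothetical gains).
def make_pid_on_measurement(Kp, Ki, Kd, dt):
    s = {"i": 0.0, "y_prev": None}
    def step(r, y):
        e = r - y
        s["i"] += e * dt
        dy = 0.0 if s["y_prev"] is None else (y - s["y_prev"]) / dt
        s["y_prev"] = y
        # minus sign: the derivative term opposes motion of the output
        return Kp * e + Ki * s["i"] - Kd * dy
    return step

ctrl = make_pid_on_measurement(Kp=1.0, Ki=0.0, Kd=0.5, dt=0.01)
u0 = ctrl(r=0.0, y=0.0)   # at rest: u = 0
u1 = ctrl(r=1.0, y=0.0)   # set-point step: no derivative kick, u = Kp*e
print(u0, u1)
```

Had the derivative acted on the error instead, the step in r would have produced a (1 − 0)/dt = 100 spike in the derivative path on the second call.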
The one problem we still have with these alternatives is noise in the feedback
signal itself. If we have a noisy signal, the derivative term will still amplify the
noise and inject it back into the system. An alternative that reduces both step input
and noise-related derivative problems is to use a velocity sensor. When velocity
sensors are used (assuming position is controlled), neither the error nor the feedback
signal itself is differentiated, and the velocity signal acts as the derivative of
the position error, as shown in Figure 10. When the closed loop transfer function is
derived, the derivative gain, Kd , adds to the system damping and allows us to
stabilize the system. Since the feedback signal is from the velocity sensor and
not obtained by differentiating the position signal, the problems with noise ampli-
fication are minimized. There are many variations to this model depending on
components, access to signals, and system congurations. The remaining sections
help us design these controllers.
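Why the velocity feedback gain Kd acts as damping can be seen in a quick simulation. The sketch below assumes a hypothetical double-integrator plant (mẍ = u with m = 1) under u = Kp(r − x) − Kd·v and compares the step overshoot with and without velocity feedback:

```python
# Semi-implicit Euler simulation of position control of m*x'' = u with
# u = Kp*(r - x) - Kd*v: the velocity feedback supplies the damping.
# Plant and gains are hypothetical illustration values.
def simulate(Kp, Kd, r=1.0, dt=1e-3, T=30.0):
    x = v = 0.0
    peak = 0.0
    for _ in range(int(T / dt)):
        u = Kp * (r - x) - Kd * v
        v += u * dt            # m = 1, so acceleration = u
        x += v * dt
        peak = max(peak, x)
    return peak - r            # overshoot above the command

print("overshoot, Kd = 0  :", simulate(Kp=1.0, Kd=0.0))
print("overshoot, Kd = 1.4:", simulate(Kp=1.0, Kd=1.4))
```

With Kd = 0 the plant has no damping at all and overshoots by roughly the full command; Kd = 1.4 places the closed loop at ζ = 0.7, and the overshoot drops to a few percent without adding a physical damper.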

Figure 8 Block diagram with PI-D controller.



Figure 9 Block diagram with I-PD controller.

5.4.2 Root Locus Design of PID Controllers


Root locus techniques are one of the most common methods used to design and tune
control systems. Chapters 1 through 4 developed the tools used for designing con-
trollers, and we are now able to start combining the different skills. We should be
able to effectively model our system, evaluate the open loop response using root
locus or Bode plots, choose a desired response according to one or more perfor-
mance specifications, and design the controller. Understanding root locus plots
allows us to design our controllers for specific requirements and to immediately
see the effects of different controller architectures.
As illustrated in the s-plane plots, the damping ratio, natural frequency, and
damped natural frequency are all easily mapped into the s-plane. Lines of constant
damping ratio are radials directed outward from the origin, where the cosine of the
angle between the negative real axis and the line equals the damping ratio. The
imaginary component of the poles corresponds to the damped natural frequency
and the radius from the origin to the poles equals the natural frequency. Thus, if
the natural frequency and damping ratio are specified for the dominant closed loop
poles, the desired pole locations are also known. The question then becomes, how
do we get the poles to be at that location? Root locus plots are a valuable tech-
nique for doing this. Since root locus techniques vary the proportional gain, the
technique needs some modifications for tuning the derivative and integral gains in
a PID controller.
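The mapping from a (ζ, ωn) specification to desired dominant pole locations, s = −ζωn ± jωn√(1 − ζ²), can be captured in a small helper:

```python
import math

def dominant_poles(zeta, wn):
    # Desired closed loop pole pair for damping ratio zeta and natural
    # frequency wn:  s = -zeta*wn +/- j*wn*sqrt(1 - zeta^2)
    sigma = -zeta * wn                      # real part
    wd = wn * math.sqrt(1.0 - zeta**2)      # damped natural frequency
    return complex(sigma, wd), complex(sigma, -wd)

p1, p2 = dominant_poles(zeta=0.5, wn=2.0)
print(p1, p2)
# Geometric checks from the text: radius from the origin equals wn, and
# the cosine of the angle from the negative real axis equals zeta.
assert abs(abs(p1) - 2.0) < 1e-12
assert abs(-p1.real / abs(p1) - 0.5) < 1e-12
```

Once the desired pole pair is known, the remaining design task is exactly the one described next: shaping the loci so they pass through those points.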
The most effective way to quickly design controllers is to place the poles
and/or zeros based on our knowledge of how the root loci paths will respond. For
example, with a PID controller we can place two zeros with the pole being assigned
to the origin. Selectively placing the zeros will determine the shape of the loci paths,
and we can vary the proportional gain to move us along the loci. If we have two
problem poles, we can generally attract them to a better location by
tuning the PID gains to place the zeros in such a way that those two poles end at
our zeros. Knowing that the root loci paths are attracted to zeros enables us to shape

Figure 10 PD controller with velocity sensor feedback.



our plot to achieve our desired response. Also, if we recognize that we add two zeros
and only one pole, then our compensated system will have one less asymptote than
our open loop system. This will change our asymptote angles and the corresponding
loci paths. It is this ability to quickly and visually (graphically) see the effects of
adding the compensator that makes the root locus technique so powerful, not only
for PID compensators but also for most others.
Another method for tuning multiple gains is through the use of contour plots. If
we develop a root locus plot varying Kp for various values of integral and/or deri-
vative gains, we can map out the combination that suits our needs the best. This will
be seen in an example at the end of this section. This allows us to partially overcome
one limitation of the graphical rules for plotting the root loci where the rules require
that it is the gain on the system that is varied to develop the paths. Here we still vary
the gain, but we do so multiple times, changing either the integral or derivative gain
between the plots. When we are done, we get families of curves showing the effects of
each gain. We pick the curve that best approaches our desired locations for the poles
of the system.
Finally, at times it is possible to close the loop and analytically determine the
gains that will place the poles at their desired locations. By comparing the coefficients
of the characteristic equation arising from closing the loop with gain variables and the
coefficients of the desired polynomial, we can determine the necessary gains.
these methods are limited to the accuracy of the model being used, and at times the
best approach is to use the knowledge in a general sense as giving guidelines for
tuning through the typical trial and error approaches. Methods are presented in
later chapters for cases where this approach results in more unknowns than equa-
tions.
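The coefficient-matching idea can be sketched for a hypothetical plant G(s) = 1/[s(s + 2)] with a PD controller Kp + Kd·s in the forward path and unity feedback; the closed loop characteristic equation s² + (2 + Kd)s + Kp is matched term by term against the desired s² + 2ζωn s + ωn²:

```python
import math

# Coefficient matching for a hypothetical plant G(s) = 1/(s(s + 2)) with
# a PD controller (Kp + Kd*s) in the forward path, unity feedback.
# Characteristic equation: s^2 + (2 + Kd) s + Kp = 0
# Desired polynomial:      s^2 + 2*zeta*wn s + wn^2 = 0
def match_gains(zeta, wn):
    Kp = wn**2                 # match the constant terms
    Kd = 2 * zeta * wn - 2.0   # match the s coefficients
    return Kp, Kd

zeta, wn = 0.7, 5.0
Kp, Kd = match_gains(zeta, wn)
print(f"Kp = {Kp}, Kd = {Kd}")

# Verify: roots of s^2 + (2 + Kd)s + Kp land on the desired pole pair
a, b = 2 + Kd, Kp
root = complex(-a / 2, math.sqrt(4 * b - a * a) / 2)
assert abs(root - complex(-zeta * wn, wn * math.sqrt(1 - zeta**2))) < 1e-9
```

Here two specifications (ζ, ωn) and two free gains give a unique solution; with more specifications than gains, this approach yields more unknowns than equations, which is the case deferred to later chapters.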
With the computing power now available to most users, methods exist to easily
plot the root locus plots as functions of any changing parameter in our system,
whether it be an electronic gain or the physical size or mass of a component in
the system. These methods are presented more fully in Chapter 11. In this section,
however, we limit our discussion to the design of PID controllers using root locus
techniques.
To illustrate the principles of designing PID controllers using root locus tech-
niques, the remainder of this section consists of examples that are worked for various
designs and design goals.

EXAMPLE 5.1
A machine tool is not allowed any overshoot or steady-state errors to a step input.
The system is represented by the open loop transfer function given below.

a. Develop the closed loop block diagram using a proportional controller.


b. Draw the root locus plot.
c. Find the controller gain K where the system has the minimum settling time
and no overshoot.

    G(s) = K(s + 6) / [s(s + 4)(s + 5)]

Part A. The block diagram is given in Figure 11, where H(s) = 1.



Figure 11 Example: block diagram for proportional controller.

Part B. Develop the root locus plot. Summarizing the rules, we have three poles
at 0, -4, and -5 and one zero at -6. Therefore, we have two asymptotes with angles
of +/-90 degrees. The asymptote intersection point will occur at s = -1.5. The valid
sections of real axis are between 0 and -4 and between -5 and -6. The break-away
point occurs at -1.85. Now we are ready to draw the root locus plot in Figure 12.
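The break-away point quoted above can be cross-checked numerically. Along the real axis the gain is K(s) = -s(s + 4)(s + 5)/(s + 6), and setting dK/ds = 0 reduces to the cubic 2s^3 + 27s^2 + 108s + 120 = 0. The short script below (a sketch in Python rather than Matlab, purely for verification) bisects this cubic on the valid segment between the poles at 0 and -4:

```python
# Break-away point for G(s) = K(s + 6) / (s(s + 4)(s + 5)).
# On the real axis K(s) = -s(s + 4)(s + 5)/(s + 6); setting
# dK/ds = 0 and clearing the (s + 6)^2 denominator leaves
# 2s^3 + 27s^2 + 108s + 120 = 0.

def dK_ds_numerator(s):
    return 2*s**3 + 27*s**2 + 108*s + 120

lo, hi = -4.0, 0.0              # valid real-axis segment
for _ in range(60):             # plain bisection
    mid = (lo + hi) / 2
    if dK_ds_numerator(lo) * dK_ds_numerator(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(round((lo + hi) / 2, 2))  # -> -1.85
```

The other two roots of the cubic fall on segments of the real axis that are not part of the locus, so -1.85 is the only break point.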
Part C. Solve for the gain K where we have the minimum settling time without any
overshoot. The minimum settling time will occur when we have placed all roots as far
to the left as possible. Remember that since all individual responses add for linear
systems, the slowest response will also determine the settling time for the system. To
avoid any overshoot, we must keep all poles on the real axis. The point that meets
both conditions is the break-away point at -1.85. To find our controller gain for this
point, we apply the magnitude condition.
 
K|s - z1| / (|s - p1||s - p2||s - p3|) = K|-1.85 + 6| / (|-1.85||-1.85 + 4||-1.85 + 5|) = K(4.15/12.53) = 1

K ≈ 3

Now we have achieved our design goals for the system using a proportional
controller. The steady-state error from a step input is zero because it is a type 1
system, and with a gain of 3 on the proportional controller all the system poles are
negative and real, so the system should never experience overshoot (it does become
possible if errors exist in the model). The system settling time can be found by
knowing that in four time constants (of the slowest pole) the response is within 2% of
the final value. The time constant is the inverse of 1.85, and the settling time is then
calculated as approximately 2.2 seconds.
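The magnitude condition and the settling-time estimate reduce to a few lines of arithmetic, sketched here in Python for convenience:

```python
# Magnitude condition at the break-away point s = -1.85 for
# G(s) = K(s + 6) / (s(s + 4)(s + 5)):
#   K * |s + 6| / (|s| |s + 4| |s + 5|) = 1
s = -1.85
K = abs(s) * abs(s + 4) * abs(s + 5) / abs(s + 6)
print(round(K, 1))       # -> 3.0

# 2% settling time: four time constants of the slowest pole
ts = 4 * (1 / 1.85)
print(round(ts, 1))      # -> 2.2
```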

Figure 12 Example: root locus plot for P controller design.



If we wish to have our system settle more quickly, we realize that we will not be
able to achieve this using only a proportional controller. With a proportional
controller we can only choose points along the root locus plot; we cannot change the
shape of the plot. Later examples will illustrate how we can move the poles to more
desirable locations.

EXAMPLE 5.2
Using the system transfer function below, determine
a. The block diagram for a unity feedback control system;
b. The steady-state error from a step input as a function of Kp using a
proportional controller;
c. The root locus plot using a PI controller;
d. Descriptions of the response characteristics available with the PI
controller.
System transfer function:
G(s) = 4 / [(s + 1)(s + 5)]
Part A. The block diagram for a unity feedback control system is given in Figure 13.
Part B. To find the error from a step input using only the proportional gain, we
set Ki to zero and find the closed loop transfer function:

C(s)/R(s) = 4Kp / (s^2 + 6s + 5 + 4Kp)

We apply the final value theorem (FVT) and let R(s) = 1/s to find the error as
e(t) = r(t) - c(t):

css = 4Kp / (5 + 4Kp)   and   ess = 1 - css = 5 / (5 + 4Kp)

So, for example, if Kp = 5, our steady-state error to a step input would be 0.2.
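The final value theorem result is easy to tabulate for several gains; this short check (a Python sketch, not part of the original text) confirms the Kp = 5 case:

```python
# For G(s) = 4 / ((s + 1)(s + 5)) with proportional control,
# C/R = 4Kp / (s^2 + 6s + 5 + 4Kp); the FVT at s = 0 gives
# css = 4Kp / (5 + 4Kp) and ess = 1 - css = 5 / (5 + 4Kp).
def step_error(Kp):
    return 5 / (5 + 4 * Kp)

print(step_error(5))               # -> 0.2
print(round(step_error(20), 3))    # -> 0.059
```

Note that the error shrinks with increasing Kp but never reaches zero, which is the motivation for the integral term considered next.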
Part C. To demonstrate the effects of going from P to PI, let us draw both plots
on the same s-plane. With proportional control only, the root locus plot is very
simple: it falls on the real axis between -1 and -5 with a break-away and asymptote
intersection point at -3. There are two asymptotes at +/-90 degrees and the system
never becomes unstable.
Moving to the PI controller, we see that we add a pole at zero, but we also add a
zero, and so we still have two asymptotes at +/-90 degrees. We can place the zero
by how we choose our gains Kp and Ki. To illustrate the effects of different zero

Figure 13 Example: block diagram for PI controller.



locations, we will draw two plots corresponding to two different choices. Since our
overall goal is to eliminate our steady-state error and, if possible, increase the
dynamic response, we will compare both results with the initial P controller. For both
cases we will keep the math simple by choosing zero locations that cancel out a pole.
This is not necessary and in some cases not desired (as when trying to cancel a pole in
the right-hand plane [RHP]).
Case 1: Let Ki/Kp = 1; then the zero is at -1 and cancels the pole at -1. This leaves us
with poles at 0 and -5. The valid section of real axis is between these poles. The
asymptote intersection point and the break-away point are then both at -2.5.
Case 2: Let Ki/Kp = 5; then the zero is at -5 and cancels the pole at -5. This leaves us
with poles at 0 and -1. The valid section of real axis is between these poles. The
asymptote intersection point and the break-away point are both at -0.5.
To compare the effects, let us now plot all three cases on the s-plane as shown in
Figure 14.
Part D. To summarize this example we will comment on both the steady-state
and transient response characteristics for each controller. With the proportional
control we had a type 0 system, and when we closed the loop it verified that the error
is proportional to the inverse of the gain. With the asymptotes at -3,
it had a time constant of 1/3 second and therefore a settling time of 4/3 seconds.
When we added an integral gain, the system became a type 1 system and the
steady-state error due to a step input became zero. This is true regardless of where
we place the zero using the ratio Ki/Kp. The transient responses, however, varied.
When we place the zero at -5, our asymptotes intersect at -1/2 and we have a slow
system with a settling time of 8 seconds (2% criterion). Moving the zero in to -1
places our asymptote at -2.5, and our settling time decreases back down to 1.6
seconds, much closer to the original proportional controller. In both cases, however,
adding the integral gain tended to destabilize the system, as we would expect when a
pole is placed at the origin. So we see that the integral gain does in fact drive our
steady-state error to zero for a step input but also hurts the transient response to
different degrees, depending on where we place the zero.
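The settling-time comparison above follows directly from ts ≈ 4/|σ|, where σ is the real part of the dominant poles; a quick tabulation (Python sketch):

```python
# 2% settling times for the three controllers compared above:
# P (poles driven toward the asymptote at -3), PI with the zero
# at -5 (asymptote at -0.5), and PI with the zero at -1
# (asymptote at -2.5).
for label, sigma in [("P", 3.0),
                     ("PI, zero at -5", 0.5),
                     ("PI, zero at -1", 2.5)]:
    print(label, round(4 / sigma, 2))
```

which reproduces the 4/3, 8, and 1.6 second figures quoted above.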
Finally, we must mention the effects of using a zero to cancel a pole. Although in
theory this is easy to do, as just shown, in practice it is nearly impossible. It relies on
having accurate models, linear systems, and no parameter changes. If we are just a
little off (say a zero at -1.1 or -0.9 instead of -1), our root locus plot is completely
different, as shown in Figure 15. Even though the basic shape is the same, we now have a

Figure 14 Example: root locus plot for P and PI controllers.



Figure 15 Example: root locus plot for PI without pole-zero cancellation.

much slower pole that never gets to the left of the zero. If the zero is slightly to the left
of the pole at -1, we have much the same effect but with another break-away (and
break-in) point around the pole at -1. There still remains a pole much closer to the
origin. It is for these reasons that it is generally unwise to try to cancel an
unstable pole with a zero; it is better to use the zero to draw the pole back into the
left-hand plane (LHP). If we do design for pole-zero cancellation, we should always
check what the ramifications are if the zero does not exactly cancel the pole.

EXAMPLE 5.3
Using the block diagram below where the open loop system is unstable, design the
appropriate PD controller that stabilizes the system and provides less than 5%
overshoot in the system.
The block diagram for the PD unity feedback control system is given in Figure
16. With only a proportional controller there is no way to stabilize the open loop
system. We have an open loop pole in the RHP, and the asymptote intersection point
and break-away point also fall in the RHP using a proportional controller. This
means that not only will one open loop pole be unstable, but as the gain is increased
the other pole also becomes unstable.
We will now add derivative compensation to the controller, which allows us to
use a zero to pull the system back into the LHP. Once we add a zero, we decrease the
original number of asymptotes (2) by one and now have only one, falling on the
negative real axis. If the zero is to the left of -1, the loci will break away from the
real axis between -1 and 5 and break in to the axis somewhere to the left of the zero,
thus giving us the response that we desire. For this example let us place the zero at
-5 and draw the root locus plot. Solving the characteristic equation for the gain K
and taking the derivative with respect to s allows us to calculate the break-away and

Figure 16 Example: block diagram for PD controller.



Figure 17 Example: root locus plot for PD controller design.

break-in points as 1.325 and -11.325, respectively. This leads to the root locus plot in
Figure 17.
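The break-away and break-in points can be verified in closed form. Assuming the plant 5/((s + 1)(s - 5)) shown in the Matlab version of this example, the real-axis gain is K(s) = -(s + 1)(s - 5)/(5(s + 5)), and dK/ds = 0 reduces to the quadratic s^2 + 10s - 15 = 0 (the plant gain of 5 drops out). A Python sketch:

```python
import math

# Break points for the PD-compensated loop
# K (s + 5) * 5 / ((s + 1)(s - 5)): on the real axis
# dK/ds = 0 reduces to s^2 + 10s - 15 = 0.
a, b, c = 1.0, 10.0, -15.0
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
print([round(r, 3) for r in roots])   # -> [-11.325, 1.325]
```

The positive root is the break-away point between the poles at -1 and 5; the negative root is the break-in point to the left of the zero.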
Since our only performance specification calls for less than 5% overshoot, we
can pick the break-in point for our desired pole locations. This gives us the fastest
settling without any overshoot. Any more or less gain moves at least one pole further
to the right. To solve for the gain required, knowing that Kp/Kd = 5, we apply the
magnitude condition for our desired pole location at s = -11.325. Since
Kp + Kd·s = (Kp/5)(s + 5), the factor of 1/5 cancels the system gain of 5 in the
magnitude condition, making the K that we solve for equal to our desired proportional gain.
 
K|s - z1| / (|s - p1||s - p2|) = K|-11.325 + 5| / (|-11.325 + 1||-11.325 - 5|) = K(6.325/168.556) = 1

Kp ≈ 26.7

and

Kd ≈ 5.3

So we have achieved the effect of stabilizing the system without any overshoot
by adding the derivative portion of the compensator. It is important to remember the
practical issues associated with implementing the derivative controller due to the
noise amplification problems. A good controller design on paper may actually
harm the system when implemented if the issues described earlier in this section
are not considered and addressed.

EXAMPLE 5.4
Using the system represented by the block diagram in Figure 18, find the required
gains with a PID controller to give the system a natural frequency of 10 rad/sec and
a damping ratio of 0.7. First, let us see how the system currently responds with only a
proportional controller and what has to be changed. With a proportional controller
there are two asymptotes and the intersection point is at -4. The system gain can be
varied to produce anywhere from an overdamped to an underdamped response, but the
root locus path does not go through the desired points.
The desired pole locations can be found directly from the performance
specifications, since the natural frequency is the radius and the damping ratio is the cosine of

Figure 18 Example: block diagram for PID controller.

the angle (a line with an angle of 45 degrees from the negative real axis). Thus, the
desired locations are calculated as

Real component: -ζωn = -(0.7)(10) = -7
Imaginary component: ωd = (10^2 - 7^2)^0.5 ≈ 7, so s = -7 +/- 7j
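The desired pole locations follow from the standard second-order relations σ = ζωn and ωd = ωn·sqrt(1 - ζ^2); a two-line check (Python sketch):

```python
import math

# Desired closed-loop pole locations for wn = 10 rad/s, zeta = 0.7:
# s = -zeta*wn +/- j*wn*sqrt(1 - zeta^2)
wn, zeta = 10.0, 0.7
sigma = zeta * wn                     # real-part magnitude
wd = wn * math.sqrt(1 - zeta ** 2)    # damped natural frequency
print(round(sigma, 2), round(wd, 2))  # -> 7.0 7.14
```

The exact damped frequency is 7.14 rad/sec; the text rounds it to 7 to keep the algebra tidy, which corresponds to ζ ≈ 0.707 rather than exactly 0.7.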

Without the I and D gains, the loci paths follow the asymptote at -4. To
illustrate an alternative method, we will first solve for the gains analytically and
verify them using the root locus plot. To solve for the gains we first close the loop
and derive the characteristic equation

CE: s^3 + (8 + 10Kd)s^2 + (15 + 10Kp)s + 10Ki = 0

To equate the coefficients we still need to place the third pole to write the desired
characteristic equation. Let us make it slightly faster than the complex poles and
place it at -10. Our desired characteristic equation becomes

CE: (s + 7 - 7j)(s + 7 + 7j)(s + 10) = s^3 + 24s^2 + 238s + 980 = 0


All we have to do to solve for the gains that place us at these locations is to equate the
powers of s and determine what gains make them equal. This is particularly easy here
since each gain appears in only one coefficient. Our gains are calculated as

8 + 10Kd = 24 → Kd = 1.6
15 + 10Kp = 238 → Kp = 22.3
10Ki = 980 → Ki = 98
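The coefficient matching can be verified by expanding the desired polynomial and inverting the three relations; a short check (Python sketch):

```python
# Desired polynomial: (s + 7 - 7j)(s + 7 + 7j)(s + 10)
#   = (s^2 + 14s + 98)(s + 10) = s^3 + 24s^2 + 238s + 980.
# Matching against s^3 + (8 + 10Kd)s^2 + (15 + 10Kp)s + 10Ki:
a2, a1, a0 = 24, 238, 980
Kd = (a2 - 8) / 10
Kp = (a1 - 15) / 10
Ki = a0 / 10
print(Kd, Kp, Ki)   # -> 1.6 22.3 98.0
```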
This allows us to update the system block diagram as given in Figure 19.
Finally, to confirm our controller settings, let us develop the root locus plot for
the system using the gains calculated. The value of Kp is varied during the course of
developing the plot, but if we include it here our gain at the desired pole locations
should be K = 1. To develop the plot, we have three poles at 0, -3, and -5 and two
zeros from the controller at -7 +/- 3.5j. This gives us one asymptote along the real
axis and valid sections of real axis between 0 and -3 and to the left of -5. Therefore,
our loci must break away from the axis between 0 and -3 and travel to the two

Figure 19 Example: block diagram solution for PID controller.



Figure 20 Example: root locus plot for PID controller design.

zeros, while the third path just leaves the pole at -5 and follows the asymptote to
the left. The resulting plot is drawn in Figure 20.
After developing the root locus plot, we see that finding the desired
characteristic equation and using the gains to make the coefficients match results in the poles
traveling through the desired locations. This method will not work for all systems,
depending on the number of equations and unknowns. An alternative method using
contour lines is presented in a later example.
A good approach to take when we have multiple gains (or parameters that
change) is simply placing the poles and zeros that we have control over in locations that
we know cause the desired changes. We know how the valid sections of real axis,
asymptotes, poles ending at zeros, etc., will all affect our plot, and we can use this
knowledge when we design our controller.
Finally, since we added an integral compensator, the system went from
type 0 to type 1 and we will not have steady-state errors when the input function is a
step (or any function with a constant final value).

EXAMPLE 5.5
To illustrate the use of computer tools, we now work the previous examples using Matlab
to generate the root locus plots and solve for the desired gains, taking the system from
Example 5.1 as shown here in Figure 21.
To solve for the proportional gain where the system does not experience
overshoot and has the fastest possible settling time, we will define the system in Matlab,
generate the root locus plot, and use rlocfind to locate the gain. The commands in
m-file format are as follows:

Figure 21 Example: block diagram for P controller (Matlab).



%Matlab commands to generate Root Locus Plot and find desired gain K
num=[1 6];          %Define numerator s+6
den=[1 9 20 0];     %Define denominator from poles
sys=tf(num,den)     %Make LTI System variable
rlocus(sys)         %Generate Root Locus Plot
                    %Place crosshairs at critical damping point
Kp=rlocfind(sys)    %Use crosshairs to find gain

Executing these commands produces the root locus plot in Figure 22 and allows
us to place the crosshairs at our desired pole location and click. After doing so we
will see, back in the workspace area, the gain K that moves us to that location.
After clicking on the break-away point we find that the gain must be K = 3, or
exactly what we found earlier when solving this problem manually. From this point
we can easily modify the numerator and multiply it by 3, make a new transfer
function (linear time invariant [LTI] variable), and simulate the step or impulse
response of the system. In addition, using rltool we can interactively add poles
and zeros in both the forward and feedback loops and observe their effects in real
time. We can drag the poles or zeros around the s-plane and watch the root locus

Figure 22 Example: Matlab root locus plot for P controller.



plots as we do so. As was mentioned earlier, most computer packages designed for
control systems have similar abilities.

EXAMPLE 5.6
In this example we again take a system from earlier and use Matlab to design a PI
controller to remove the steady-state error (from a step input) and then choose the
fastest possible settling time. The system block diagram with the controller added is
shown in Figure 23.
In Matlab we define three transfer functions: the plant, the controller with
Ki/Kp = 1, and the controller with Ki/Kp = 5.

%Matlab commands to generate Root Locus Plot and find desired gains Kp and Ki
sysp=tf([4],[1 6 5]);     %Define the plant transfer function
z1=1; z2=5;               %Define ratios of Ki/Kp = 1 and 5
syspi1=tf([1 z1],[1 0]);  %Controller transfer function with Ki/Kp=1
syspi5=tf([1 z2],[1 0]);  %Controller transfer function with Ki/Kp=5
subplot(1,2,1);
rlocus(syspi1*sysp)       %Generate Root Locus Plot
subplot(1,2,2);
rlocus(syspi5*sysp)       %Generate Root Locus Plot
Executing these commands produces one plot window with two subplots
(via the subplot command), shown in Figure 24. We see that the results correspond
well with the results from the earlier example.
When Ki/Kp = 1 the system settles much faster since the poles are further to
the left. This relies on canceling a pole with a zero, and if we are slightly off, large
or small, the results become as shown in Figure 25. Here we see that even if our zeros
are only slightly away from the pole that was intended to be canceled, the root locus
plot changes and a very different response is obtained. In general it is not considered
good practice to rely on mathematically canceling poles with zeros. Remember that
our system is only a linear approximation to begin with, let alone additional errors
that may be introduced which will cause the poles to shift.
To finish this example, let us close the loop using the original system with only
a proportional controller and Kp = 2, followed by our PI controller with Kp = 2
and Ki = 2. We will use Matlab to generate a step response for both systems and
compare the steady-state errors. The responses are given in Figure 26.
As expected from previous discussions, the steady-state error went to zero
when we added the PI controller and the system became type 1. Also, as noted
from the root locus plots, adding a pole at the origin, as the integrator does, hurts

Figure 23 Example: block diagram for PI controller (Matlab).



Figure 24 Example: Matlab root locus plots for PI controllers.

Figure 25 Example: Matlab root locus plots with PI controllers and errors.

Figure 26 Example: Matlab step response plots for P and PI controllers.

our transient response: the settling time increases. If overshoot is allowable, we
could also make the P controller more attractive by increasing the gain and accepting
slightly more overshoot.

EXAMPLE 5.7
Using the block diagram in Figure 27, use Matlab to design a PD controller that
stabilizes the system and limits the overshoot to less than 5%.
In this system we see that the plant transfer function is unstable due to the pole
in the RHP. To solve this using Matlab, we will generate the root locus plot using a
simple proportional controller (basic root locus plot) and using a PD controller to
stabilize the system. The commands used are as follows:

%Matlab commands to generate Root Locus Plot
% and find desired gains Kp and Kd
clear;
sysp=tf([5],[1 -4 -5]);  %Define the plant transfer function
syspd=tf([1 5],[1]);     %PD controller TF with zero at -5

Figure 27 Example: block diagram for PD controller design (Matlab).



subplot(1,2,1);
rlocus(sysp)             %Generate Root Locus Plot w/ P controller
subplot(1,2,2);
rlocus(syspd*sysp)       %Generate Root Locus Plot w/ PD controller
k=rlocfind(syspd*sysp);
sys1=tf(5,[1 -4 -5]);    %Compare the step responses using P and PD
sys2=tf(5*5*[1 5],[1 -4 -5]);
sys1cl=feedback(sys1,1)  %Close the loop
sys2cl=feedback(sys2,1)
figure;                  %Create a new figure window
step(sys1cl); hold;      %Create step response for P controller
step(sys2cl);            %Create step response for PD controller

When we plot the step responses it is easy to see that the system controlled
only with a proportional gain quickly goes unstable, while the PD controller
stabilizes the system and minimizes the overshoot. The step responses are given
in Figure 29.
An interesting point should be made: the proportional gain chosen
for the PD controller was intended to place the poles at the repeated-roots (break-in)
point, and yet the step response clearly shows an overshoot of the final value. What
we must remember is that the system is no longer a true second-order system

Figure 28 Example: Matlab root locus plots for P and PD controllers.



but includes a first-order term in the numerator (the zero from the controller). This
alters the response, as shown in Figure 29, to where the system now does experience
slight overshoot.
The effect of adding the derivative compensator clearly demonstrates the added
stability it provides. The disadvantage, as discussed earlier, is that the system
becomes much more susceptible to noise amplification because of the derivative
portion.

EXAMPLE 5.8
Recalling that we solved for the required PID gains in Example 5.4, we now will use
Matlab to verify the root locus plot and also check the step response of the com-
pensated system. We tuned the system using analytical methods to have a damping
ratio equal to 0.7 and a natural frequency equal to 10 rad/sec. Using the gains from
earlier allows us to use the system block diagram as given in Figure 30 and simulate
it using Matlab.
To achieve the damping ratio and natural frequency, we know that the
poles should go through the points s = -7 +/- 7j. Using the Matlab commands
given below, we can generate the root locus plot and the step response for the
compensated system. The sgrid command allows us to place lines of constant
damping ratio (0.7) and natural frequency (10 rad/sec) on the s-plane. This
command, in conjunction with rlocfind, allows us to find the gain at the desired
location.

Figure 29 Example: Matlab step responses for P and PD controllers.



Figure 30 Example: block diagram solution for PID controller (Matlab).

%Matlab commands to generate Root Locus Plot
clear;
sysp=tf([10],[1 8 15]);          %Define the plant transfer function
syspid=tf([1.6 22.3 98],[1 0]);  %PID controller TF using the gains found earlier
rlocus(syspid*sysp)              %Generate Root Locus Plot w/ PID controller
sgrid(0.707,10)                  %Place lines of constant zeta and wn
k=rlocfind(syspid*sysp);
syscl=feedback(syspid*sysp,1)
figure;                          %Create a new figure window
step(syscl);                     %Create step response for compensated system

The resulting root locus plot is shown in Figure 31.

Figure 31 Example: Matlab root locus plot for PID controller.



The gains solved for earlier provide the correct compensation, and the root
locus paths go directly through the intersection of the lines representing our desired
damping ratio and natural frequency. Once again, we can verify the results by using
Matlab to generate a step response of the compensated system, as given in Figure 32.
As we see, the compensated system behaves as desired and the response quickly
settles to the desired value. In the next example we see how to generate contours
to see the results of varying not only the proportional gain (assumed to be varied in
generating the plot) but also additional gains. The result is a
family of plots called contours.

EXAMPLE 5.9
In this example we wish to design a P and PD controller for the system block
diagram shown in Figure 33 with a unity feedback loop and compare the results.
Let us now use Matlab to tune the system to have a damping ratio of 0.707 and
to solve for the gain that makes the system go unstable. The Matlab commands used
to generate the root locus plot in Figure 34 and to solve for the gains are given
below.
%Program commands to generate Root Locus Plot
% and find various gains, K
num=1;                             %Define numerator
den=conv([1 0],conv([1 1],[1 3])); %Define denominator from poles
sys1=tf(num,den)                   %Make LTI System variable

Figure 32 Example: Matlab step response for PID compensated system.



Figure 33 Example: block diagram for P and PD contours (Matlab).

rlocus(sys1)       %Generate Root Locus Plot
                   %Place crosshairs at marginal stability point
Km=rlocfind(sys1)  %Use crosshairs to find gain
sgrid(0.707,0.6);  %Place lines of constant damping...
                   %...ratio (0.707) and wn (0.6 rad/sec)
                   %Place crosshairs at intersection point
Kt=rlocfind(sys1)  %Find gain for desired tuning

When we examine this plot, we see that we can achieve any damping ratio, but
the natural frequency is then defined once the damping ratio is chosen. Using the
rlocfind command we can find the gain where the system has a damping ratio equal
to 0.707 and the gain at which the system goes unstable.
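The crosshair click in rlocfind is only as accurate as the mouse position; the exact marginal gain can be cross-checked with a Routh argument (Python sketch):

```python
import math

# Marginal stability for GH(s) = K / (s(s + 1)(s + 3)):
# closed-loop characteristic polynomial s^3 + 4s^2 + 3s + K.
# For s^3 + a2 s^2 + a1 s + a0 the s^1 Routh row vanishes when
# a0 = a2 * a1, so the marginal gain is exact:
a2, a1 = 4, 3
K_marginal = a2 * a1
print(K_marginal)               # -> 12

# At that gain the locus crosses the imaginary axis at
# +/- j*sqrt(a1):
print(round(math.sqrt(a1), 2))  # -> 1.73
```

A click that returns a value near 11 or 12 is therefore consistent with the true crossing at Kp = 12.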
The system becomes marginally stable at Kp ≈ 12 and has our desired damping
ratio when Kp = 1. The corresponding natural frequency is equal to 0.6 rad/sec. If
we want to change the shape of the root locus paths, we need to have something

Figure 34 Example: Matlab root locus plots for initial P controller.



Figure 35 Example: block diagram for PD contours on root locus plot (Matlab).

more than a proportional compensator. If we switch to a PD controller as shown in
Figure 35 and use Matlab to generate root contour plots, we can demonstrate the
effects of changing the derivative gain and how the actual shape of the root locus
paths changes.
Our new open loop transfer function that must be entered into Matlab is

GH(s) = (Kp + Kd·s) / [s(s + 1)(s + 3)] = Kp[1 + (Kd/Kp)s] / [s(s + 1)(s + 3)]

Remember that Matlab varies a general gain in front of the transfer function, which
here corresponds to Kp, and thus GH must be written as above. The contours are then
plotted as functions of Kd/Kp. To generate the plot, we use the following commands:
%Program commands to generate Root Locus Contours
% and find various gains, Kp and Kd
den=conv([1 0],conv([1 1],[1 3])); %Define denominator from poles
Kd=0;                              %First value of Kd
num=[Kd 1];                        %Define numerator
sys2=tf(num,den)                   %Make LTI System variable
rlocus(sys2)                       %Generate Root Locus Plot
sgrid(0.707,2);                    %Place lines of constant damping
                                   %ratio (0.707) and wn (2 rad/s)
hold;
Kd=0.3; num=[Kd 1]; sys2=tf(num,den);
rlocus(sys2)
Kd=1; num=[Kd 1]; sys2=tf(num,den);
rlocus(sys2)
Kd=10; num=[Kd 1]; sys2=tf(num,den);
rlocus(sys2)
hold;                              %Release plot

The root locus plot developed in Matlab is given in Figure 36, where we see
that using the hold command allows us to choose different controller gains and plot
several loci on the same s-plane. Here we see the advantage of derivative control
and how we can use it to move the root locus to more desirable regions. Even using
the damping ratio criterion from the P controller, we are able to achieve faster
responses by moving all the poles further to the left. If we are having problems
with stability and can control or filter the noise in the system, then it will be
beneficial to add derivative control.

Figure 36 Example: Matlab root contour plots for PD controller.

In working through this section and its examples, we should see how the tools
developed in the first three chapters all become key components in designing stable
control systems. The usefulness of our controller design is directly related to the
accuracy of our model; if our model is poor, so likely will be our result.
Understanding where we want to place the poles and zeros is also of critical
importance. Since many computer packages will develop root locus plots, we need to know
how to interpret the results and make correct design decisions.

5.4.3 Frequency Response Design of PID Controllers


Designing in the frequency domain is similar to designing in the s-domain using root
locus plots. Whereas in the s-domain we used our knowledge of poles and zeros to
move the loci paths to more desirable locations, in the frequency domain we use our
knowledge of the magnitude and phase contributions of each controller and how
they add to the total frequency response curve, which allows us to shape the curve to
our specifications. This stems from our previous discussion of Bode plots and how
blocks in series in a block diagram add in the frequency domain. Since a controller
placed in the forward path of our system block diagram is in series with the physical
system models, the effect of adding the controller in the frequency domain is
accomplished by adding its magnitude and phase relationships to the existing Bode plot of
the physical system. Our goal becomes to define how each controller term
adds to the magnitude and phase of the existing system. The amount of magnitude
and phase that we wish to add has already been defined in terms of gain margin and

phase margin. To begin our discussion of designing PID controllers using frequency
domain techniques, let us first define the Bode plots of the individual controller
terms.
The proportional gain, as in root locus, does not allow us to change the shape
of the Bode plot or the phase angle; it allows us only to vary the height of
the magnitude plot. Thus, we can use the proportional gain to adjust the gain margin
and phase margin, but only as is possible by raising or lowering the magnitude plot.
Remember that when we adjust the gain margin and phase margin for our open loop
system, we are indirectly changing the closed loop response. The phase margin
relates to the closed loop damping ratio, and the crossover frequency relates to the
closed loop natural frequency. It is easy to determine the proportional controller gain K
that will cause the system to go unstable by measuring the gain margin. If K is
increased by the amount of the gain margin, the system becomes marginally stable.
Assuming the gain margin, GM, is measured in decibels, then K is found as

K = 10^(GM_dB/20)
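Converting between a decibel gain margin and a multiplicative gain is a one-liner worth keeping handy (Python sketch; the function name is ours, not Matlab's):

```python
# Gain that uses up a gain margin expressed in dB:
#   K = 10^(GM_dB / 20)
def gain_from_margin(gm_db):
    return 10 ** (gm_db / 20)

print(gain_from_margin(20))            # -> 10.0
print(round(gain_from_margin(6), 2))   # -> 2.0
```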

The integral gain has the same effects in the frequency domain as in the s-domain,
where we saw it eliminate the steady-state error (in many cases, not all) and also tend
to destabilize the system (it moves the loci paths closer to the origin). In the frequency
domain it adds a constant slope of -20 dB/decade (dec) to the low frequency
asymptote, which gives our system infinite steady-state gain as the frequency
approaches zero. The side effect concerns stability: the integral
term also adds a constant -90 degrees of phase angle to the system. This tends to
destabilize the system by decreasing the phase margin. Some of the
phase margin might be reclaimed if, after adding the integral gain, we can use a lower
proportional gain, since the steady-state performance is determined more by the
integral gain than the proportional gain. In other words, if we had a large
proportional gain for the purpose of decreasing the steady-state error (prior to adding
the integral gain), while allowing more overshoot because of the high proportional
gain, then after adding the integral gain to help our steady-state performance, we
may be able to reduce the proportional gain to help stabilize (reduce overshoot in) our
overall system. This effect is easily seen in the frequency domain, where we achieve
high steady-state gains in our system by adding the low frequency slope of -20 dB/dec and
at the same time lower the overall system magnitude plot by reducing the
proportional gain K.
Finally, the same comparisons between the s-domain and frequency domain
hold true when discussing the derivative gain. In the s-domain we used the derivative
gain to attract root loci paths to more desirable locations by placing the zero from
the controller where we wanted it on the s-plane. In the frequency domain we
stabilize the system by adding phase angle to our system. A pure derivative term
adds +90 degrees of phase angle, thus increasing our phase margin
(and the corresponding damping ratio).
The advantage of Bode plots is that, with knowledge of the controller
shapes, we can pick the correct controller to shape our overall plot to
the desired performance levels. To design our controller, we determine the amount of
magnitude and phase lacking in the open loop system and pick our controller (type
and gains) to add the desired magnitude and phase components to the open loop

Figure 37 Bode plot contributions from PI controllers.

plot. This is quite simple using the PI, PD, and PID Bode plots shown in Figures 37,
38, and 39, respectively. As is clear in the frequency domain contributions shown in
the figures, the PID plots combine the features of PI and PD. Any time that we design a
controller, we must remember that we still need the instrumentation and components
capable of implementing the controller.
In summary, to design a P, PI, PD, or PID controller in the frequency domain,
simply draw the open loop Bode plot of our system and find out what needs to be
added to achieve our performance goals. Use the proportional gain to adjust the
height of the magnitude curve, the integral gain to give us infinite gain at steady-state
(as ω approaches 0, the −20 dB/dec slope drives the gain to infinity), and the
derivative gain to add positive phase angle. Each controller factor can be added to
the existing plot using the same procedure as when developing the original plot.
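This summary can be turned into a quick numerical check. The Python sketch below (the plant L(s) = 10/[s(s + 2)] is a made-up example, not a system from this chapter) locates the gain crossover on a frequency grid and reads off the phase margin:

```python
import numpy as np

def phase_margin(L, w_lo=-2, w_hi=2, n=200001):
    """Locate the gain crossover (|L(jw)| = 1) on a log-spaced grid
    and return (crossover frequency, phase margin in degrees)."""
    w = np.logspace(w_lo, w_hi, n)
    Ljw = L(1j * w)
    i = np.argmin(np.abs(np.abs(Ljw) - 1.0))    # grid point nearest 0 dB
    pm = 180.0 + np.degrees(np.angle(Ljw[i]))   # PM = 180 deg + phase at crossover
    return w[i], pm

# Hypothetical open loop (not from the text): L(s) = 10 / [s(s + 2)]
wc, pm = phase_margin(lambda s: 10.0 / (s * (s + 2.0)))
print(wc, pm)   # roughly 2.86 rad/sec and a 35 degree phase margin
```

The same function can be pointed at any compensated loop transfer function to check a candidate controller before drawing the full Bode plot.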

EXAMPLE 5.10
Using the block diagram in Figure 40, design a proportional controller in the fre-
quency domain which provides a system damping ratio of approximately 0.45.
To find the equivalent open loop phase margin that will give us approximately
ζ = 0.45 when the loop is closed, we will use the approximation that PM = 100ζ.
This results in us wanting to determine the gain K required to give us an open loop
phase margin of 45 degrees. To accomplish this we will draw the open loop uncom-
pensated Bode plot, determine the frequency where the phase angle is equal to −135

Figure 38 Bode plot contributions from PD controllers.


Analog Control System Design 229

Figure 39 Bode plot contributions from PID controllers.

degrees (45 degrees above −180 degrees), and measure the corresponding magnitude
in dB. If we add the gain K required to make the magnitude plot cross the 0-dB line
at this frequency, then the phase margin becomes our desired 45 degrees (remember
that the phase plot is not affected by K). Our open loop Bode plot is given in Figure
41.
Examining Figure 41, we see that at 10 rad/sec the phase angle is equal to
−135 degrees and the corresponding magnitude is equal to −20 dB. Therefore, to
make the phase margin equal to 45 degrees we need to raise the magnitude plot
20 dB and make the crossover frequency equal to 10 rad/sec. We can calculate our
required gain as

K = 10^(GM_dB/20) = 10^(20/20)

K = 10
When we add the proportional gain of 10 to the open loop uncompensated Bode
plot, the result is the compensated Bode plot in Figure 42.
Now we see that we have achieved the desired result, a phase margin equal to
45 degrees. This gives us an approximate damping ratio of 0.45 with the closed loop
system. As before with root locus plots, the proportional controller does not allow us
to change the shape, only the location. We must incorporate additional terms if we
wish to modify the shape of the Bode plot. Also, we will still experience steady-state
error from a step input. With the proportional gain we only have a type 0 system.
Since our low frequency asymptote is at 20 dB (compensated system), corresponding
to our gain of 10, we will have a steady-state error from a unit step input equal to
1/(K + 1), or 1/11.

Figure 40 Example: block diagram of system with P controller.



Figure 41 Example: open loop Bode plot (uncompensated).

Figure 42 Example: open loop Bode plot (compensated with P gain).



For this example, let us close the loop manually and nd the characteristic
equation to verify the results obtained in the frequency domain. When the loop is
closed, it results in the following characteristic equation:

CE = 0.1 s² + 1.1 s + 1 + K

If we let K = 10, we can solve for the roots of the characteristic equation as

s₁,₂ = [−1.1 ± √(1.1² − 4(0.1)(11))] / [2(0.1)] = −5.5 ± 9j

The damping ratio is calculated as

ζ = cos θ = cos[tan⁻¹(9/5.5)] = 0.5
So we see that even with the straight line approximation and the open loop to
closed loop approximation, the design methods in the frequency domain quickly
gave us a value close to our desired damping ratio. The advantages of working
in the frequency domain are not so evident in this example, where we knew the system
model, but become much greater when we are given Bode plots for our system and are
able to quickly design our controller directly from the existing plots. This bypasses
several intermediate steps and is also quite easy to do without complex analysis tools.
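The manual root calculation above is easy to repeat numerically; a Python sketch (numpy's polynomial root finder stands in for the quadratic formula):

```python
import numpy as np

# Closed loop characteristic equation from above: 0.1 s^2 + 1.1 s + (1 + K) = 0
K = 10.0
poles = np.roots([0.1, 1.1, 1.0 + K])

# Damping ratio of a complex pole pair: zeta = -Re(s)/|s|
s1 = poles[0]
zeta = -s1.real / abs(s1)
print(poles, zeta)   # poles near -5.5 +/- 8.93j, zeta near 0.52
```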

EXAMPLE 5.11
In this example we will further enhance the steady-state performance of the system
from the previous example (Figure 40) by adding a PI controller while maintaining
the same goal of a closed loop damping ratio equal to 0.45. The contributions of a PI
controller in the frequency domain were given in Figure 37 as

TF_PI = Kp + Ki/s = Ki [(Kp/Ki)s + 1] / s

The contributions can be added as three distinct factors: a gain Ki, a first-order lead
with a break frequency of Ki/Kp, and an integrator. If we choose Ki = 10 and Kp =
10, then we will add a low frequency asymptote of −20 dB/dec with no change in the
magnitude curve after 1 rad/sec. The phase angle is decreased an additional 90
degrees at 0.1 rad/sec, 45 degrees at 1 rad/sec (the break frequency of the numera-
tor), and is unchanged after 10 rad/sec. Since we have raised the overall magnitude by a
factor of 10 (Ki ) and have not altered the phase angle at 10 rad/sec, the resulting
phase margin is identical to the previous example and equal to 45 degrees at a
crossover frequency of 10 rad/sec. If we show the additional factors on a Bode plot
and add them to the open loop uncompensated system, the result is the compensated
Bode plot given in Figure 43.
So by proceeding from a simple proportional compensator to a proportional
+ integral compensator, we have achieved the damping ratio of approximately 0.45
while eliminating our steady-state error from step inputs. There are different ways
this can be designed while still achieving the design goals since we have two gains and
one goal. If the crossover frequency was also specied, it would likely require dif-
ferent gains to optimize the design relative to meeting both goals.
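The claim that the PI factor barely disturbs the 10 rad/sec point can be checked by evaluating Gc(jω) = Kp + Ki/(jω) there; a Python sketch:

```python
import numpy as np

Kp, Ki = 10.0, 10.0
w = 10.0                          # rad/sec, the crossover from Example 5.10
Gc = Kp + Ki / (1j * w)           # PI controller evaluated at s = j*10

mag = abs(Gc)                     # ~10.05, essentially the bare gain of 10
phase = np.degrees(np.angle(Gc))  # ~ -5.7 degrees, close to zero added lag
print(mag, phase)
```

The straight-line approximation predicts no change at all at 10 rad/sec; the exact evaluation shows only about 0.04 dB and −5.7 degrees, small enough that the 45 degree phase margin estimate still holds.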

Figure 43 Example: Bode plot of PI compensated system.

EXAMPLE 5.12
The final example demonstrating design techniques in the frequency domain will be
to stabilize an open loop marginally stable system using a PD controller. The open
loop system is G(s) = 1/s². A stabilizing controller is required (i.e., PD) since the
system is open loop marginally stable. The performance specifications for the open
loop compensated system are
• Phase margin of approximately 45 degrees.
• Crossover frequency of approximately 1 rad/sec.
Therefore, using our open loop to closed loop approximations, we would
expect the closed loop controlled system to have a damping ratio of 0.45 and a
natural frequency of 1.2 rad/sec (see Figure 49 of Chapter 4; at ζ = 0.45,
ωc/ωn ≈ 0.82). We start by drawing the open loop Bode plot as shown in Figure 44.
We have constant phase of −180 degrees, a crossover frequency of 1 rad/sec,
and a constant slope of −40 dB/dec. The system is marginally stable and needs a
controller to stabilize it. The PD controller, to meet the specifications, must keep the
crossover frequency at its current location while adding 45 degrees of phase to the
system. The PD controller is a first-order system in the numerator given as

Gc(s) = Kp [1 + (Kd/Kp) s]

Therefore, it will add 45 degrees at the break point, Kp/Kd. Since we want a phase
margin equal to 45 degrees, making Kp/Kd = 1 meets the phase requirements. Now

Figure 44 Example: open loop Bode plot for PD controller design.

we must adjust the magnitude to maintain the crossover frequency location at 1


rad/sec. When we plot the straight line approximations, we find that the break is
already at the desired crossover frequency, so Kp = 1 will place us close to the
desired magnitude. To fine-tune, remember that the actual magnitude will be
+3 dB at the break and therefore the proportional gain can be adjusted to lower
the plot 3 dB at the break point:

Kp = log⁻¹(−3/20) = 10^(−3/20) ≈ 0.7
The final open loop, controller, and total Bode plots are shown in Figure 45.
Thus we see that the PD controller stabilized the system as desired and results in a
phase margin equal to 45 degrees. As we discussed earlier in this chapter, implement-
ing a derivative compensator makes the system very susceptible to noise, and this must
also be addressed when designing in the frequency domain. Only the design method
is different; the resulting algorithm and controller implementation constraints
remain the same.
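As a check on this design, the compensated open loop Gc(s)G(s) = Kp[1 + (Kd/Kp)s]/s² can be evaluated at the intended crossover; a Python sketch using the gains found above:

```python
import numpy as np

Kp, Kd = 0.7, 0.7          # from the design above: Kp/Kd = 1, Kp lowered 3 dB
w = 1.0                    # intended crossover frequency, rad/sec
s = 1j * w
L = Kp * (1.0 + (Kd / Kp) * s) / s**2   # PD controller times G(s) = 1/s^2

mag = abs(L)                            # ~0.99, i.e., roughly 0 dB at 1 rad/sec
pm = 180.0 + np.degrees(np.angle(L))    # phase margin, degrees
print(mag, pm)   # about 0.99 and 45 degrees
```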

5.4.4 On-Site Tuning Methods for PID Controllers


If we have a system that is very complex and we only wish to purchase a PID (or
variation of) controller and tune it, then several methods exist as long as we have a
dominant real closed loop pole or pair of dominant complex conjugate closed loop
poles. In other words, if we have a dominant real pole, the system response can be
well defined by a simple first-order response, and if we have dominant complex
conjugate roots (overshoot and oscillation) the system can be well defined by a
second-order system. The most common method used when these conditions are

Figure 45 Example: Bode plot with PD compensation.

present, and the one covered here, is the Ziegler-Nichols method. The guidelines are
mathematically derived to result in a 25% overshoot to a step input and a one-fourth
wave decay rate, meaning each successive peak will be one-fourth the magnitude of
the previous one. This is a good balance between quickly reaching the desired com-
mand and settling down. In some cases this may be slightly too aggressive and
slightly less proportional gain could be used. Two variations exist, the first based
on an open loop step response curve and the second on obtaining sustained
oscillations in the proportional control mode. The step response method works well
for type 0 systems with one dominant real pole (i.e., no overshoot). The ultimate
cycle method is based on oscillations and must therefore have a set of complex
conjugate dominant roots for the system to oscillate. To facilitate the procedure,
the following notation is introduced for PID controllers:
 
Gc = Kp [1 + 1/(Ti s) + Td s]

Instead of an integral gain and derivative gain, an integral time and derivative time
are used. Since we can switch between the two notations quite easily, it is simply a
matter of personal preference. Ti and Td tend to be more common when using the
Ziegler-Nichols method since when measuring signal output on an oscilloscope they
correspond directly and simplify the tuning process.
In the first case, the goal is to obtain an open loop system response to a unit
step input. This should look something like Figure 46. Simply measure the delay time
and rise time as shown and use Table 1 to calculate the controller gains depending on
which controller you have chosen.
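The Table 1 lookups are simple to script; a Python sketch (the measurements T = 10 and D = 1 are hypothetical values standing in for numbers read off an S-curve like Figure 46):

```python
def zn_step(T, D, kind="PID"):
    """Ziegler-Nichols step response (S-curve) settings per Table 1.
    T is the measured rise time, D the measured delay time.
    Returns (Kp, Ti, Td); Ti = None stands for the table's infinity."""
    table = {
        "P":   (T / D,       None,    0.0),
        "PI":  (0.9 * T / D, D / 0.3, 0.0),
        "PID": (1.2 * T / D, 2 * D,   0.5 * D),
    }
    return table[kind]

# Hypothetical S-curve measurements
Kp, Ti, Td = zn_step(T=10.0, D=1.0, kind="PID")
print(Kp, Ti, Td)   # 12.0, 2.0, 0.5
```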

Figure 46 Ziegler-Nichols step response S-curve measurements.

If we examine the PID settings a little closer and substitute the tuning para-
meters into Gc in place of Kp, Ti, and Td, it results in a controller transfer function
defined as

Gc = 0.6T (s + 1/D)² / s

Here we see that this tuning method places the two zeros at −1/D on the real axis
along with the pole that is always placed at the origin (the integrator).
The second method, defining an ultimate cycle, Tu, is useful when the critical
gain can be found and oscillations sustained. To find the critical gain, use only the
proportional control action (turn off I and D) and increase the proportional gain
until the system begins to oscillate. Record the current gain and capture a segment of
time on an oscilloscope for analysis. The measurement of Tu can be made as shown
in Figure 47 to allow the gains to be calculated according to Table 2.
Once again, if we examine the PID settings more closely and substitute the
tuning parameters into Gc in place of Kp, Ti, and Td, the controller transfer function
becomes

Gc = 0.075 Ku Tu (s + 4/Tu)² / s

Similar to before, we see that this tuning method places the two zeros on the real axis
at −4/Tu along with the pole that is always placed at the origin.
It is also possible to use the Ziegler-Nichols method analytically by simulating
either the step response or determining the ultimate gain at marginal stability.
Although this might serve as a good starting point, more options can be explored
using the root locus and Bode plot techniques from the previous section. For exam-

Table 1 Ziegler-Nichols Tuning Parameters: Step Response S Curve

Controller type    Kp         Ti        Td
P                  T/D        ∞         0
PI                 0.9 T/D    D/0.3     0
PID                1.2 T/D    2D        0.5 D

Figure 47 Ziegler-Nichols ultimate cycle: oscillation period.

ple, sometimes it is advantageous to place the two zeros with imaginary components
to better control the loci beginning at the dominant closed loop complex conjugate
poles. Nonetheless, this is a common method used frequently on the job and it
provides at least a good starting point, if not a decent solution.
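The ultimate-cycle settings, and the double zero at −4/Tu noted above, can both be verified numerically; a Python sketch (Ku = 2 and Tu = 0.5 are hypothetical measurements):

```python
import numpy as np

def zn_ultimate(Ku, Tu, kind="PID"):
    """Ziegler-Nichols ultimate cycle settings per Table 2.
    Ku is the critical gain, Tu the measured oscillation period.
    Returns (Kp, Ti, Td); Ti = None stands for the table's infinity."""
    table = {
        "P":   (0.5 * Ku,  None,     0.0),
        "PI":  (0.45 * Ku, Tu / 1.2, 0.0),
        "PID": (0.6 * Ku,  0.5 * Tu, 0.125 * Tu),
    }
    return table[kind]

# Hypothetical measurements at marginal stability
Ku, Tu = 2.0, 0.5
Kp, Ti, Td = zn_ultimate(Ku, Tu)

# Gc = Kp[1 + 1/(Ti s) + Td s] has numerator Kp Td s^2 + Kp s + Kp/Ti;
# its roots should form a double zero at -4/Tu
zeros = np.roots([Kp * Td, Kp, Kp / Ti])
print(zeros)   # both roots near -4/Tu = -8
```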

5.5 PHASE-LAG AND PHASE-LEAD CONTROLLERS


Phase-lag and phase-lead controllers have many similarities with PID controllers.
However, they have some advantages over PID with respect to noise filtering and
real world implementation. Instead of true integrals (a pole at the origin) or true
derivatives (step inputs become impulses), they approximate these functions and
provide similar performance gains with some implementation advantages. They
can be designed using root locus and frequency plots with the same procedures
shown for PID controllers. This section highlights the similarities and then illustrates
the design procedures using both root locus and frequency domain techniques. The
transfer function for each type of controller is as follows:

 
Phase lag: Gc = K (T2 s + 1)/(T1 s + 1)

Phase lead: Gc = K (T1 s + 1)/(T2 s + 1)

Lag-lead: (Phase lag) × (Phase lead)


As the terms imply, phase-lag controllers add negative phase angle, phase-lead con-
trollers add positive phase angle, and lag-lead controllers add both (over different
frequency ranges) to combine the features of both. With the notation above it means that

Table 2 Ziegler-Nichols Tuning Parameters: Ultimate Cycle

Controller type    Kp         Ti         Td
P                  0.5 Ku     ∞          0
PI                 0.45 Ku    Tu/1.2     0
PID                0.6 Ku     0.5 Tu     0.125 Tu

T1 > T2. As demonstrated in the following sections, the lag and lead portions may be
designed separately and combined after both are completed.

5.5.1 Similarities and Extensions of Lag-Lead and PID Controllers


Perhaps the clearest way to compare phase-lag-lead and PID controller variations is
with respect to where they allow us to place the poles and zeros contributed by the
controller. Figure 48 illustrates where each controller type is similar and different.
The phase-lag pole-zero locations approach the pole-zero locations of PI controllers
as the pole is moved closer to the origin. Phase-lag is used similarly to PI to reduce
the steady-state errors in the system by increasing the steady-state gain in the system.
The difference is that the gain does not go to infinity as the frequency (or s)
approaches zero, as it does in a PI controller. The benefit of phase-lag is that the
pole is not placed directly at the origin and therefore tends to have a lesser negative
impact on the stability of the system. Similarly, the phase angle contribution from a
phase-lag controller is negative only over a portion of frequencies, as opposed to an
integrator adding a constant −90 degrees over all frequencies. These effects are more
clearly seen in the frequency domain.
The phase-lead pole-zero locations approach the pole-zero locations of PD as
the pole is moved to the left. At some point the pole becomes negligible since it
decays much faster and its effect is not noticeable. Phase-lead and PD are both used
to increase system stability by adding positive phase angle to the system. Phase lead
adds positive phase over only a portion of frequencies (progressing from 0 to +90
degrees and back to 0 degrees) while PD increases from 0 to +90 degrees and then
remains there.
Finally, as the left pole of the lag-lead controller is moved to the left and the
right pole toward the origin, it begins to approximate a PID controller. The same obser-
vations made with phase-lag and phase-lead individually apply here regarding phase
angle and stability. In fact, when designing a combination lag-lead compensator, it is
common to design the lag portion and lead portion independently and combine them
when finished. The lag portion is designed to meet the steady-state performance
criterion and the lead portion to meet the transient response and stability perfor-

Figure 48 Phase lag/lead and PID s-plane comparisons/similarities.



mance criterion. In general, lag-lead and PID controllers are interchangeable, with
minor differences, advantages, and disadvantages.

5.5.2 Root Locus Design of Phase Lag/Lead Controllers


Let us first examine the general recommendations for tuning phase-lag controllers.
Since the overall goal of the phase-lag controller is to reduce the steady-state error, it
needs to have as large a static gain as possible without changing the existing root
loci and making the system less stable. The usual approach to designing a
phase-lag controller is to place the pole and zero near the origin and close to each
other. The reasoning is this: if the added pole and zero are close to each other, the
root locus plot is only slightly altered from its uncompensated form; placing the
pole and zero close to the origin allows us to have a fairly large steady-state gain. For
example, if the pole is at s = −0.01 and the zero at s = −0.1, we increase the gain in
our system by a factor of 10 (0.1/0.01) while the pole and zero are still very close
together. The steps below are guidelines to accomplish this.
5.5.2.1 Outline for Designing Phase-Lag Controllers in the s-Domain
Step 1: Draw the uncompensated root loci and calculate the static gain for the
open loop uncompensated system. Since we know that the steady-state error from a
step input to a type 0 system is 1/(1 + K), and from a ramp input to a type 1 system is
1/K, etc., we can calculate the total K required. Knowing the two gains for the
controller and system, multiply them and calculate the controller gain required.
Step 2: Place the pole and zero sufficiently close to the origin where a large gain
is possible without adding much lag (instability) into the system. For example, if the
controller requires a gain of 10, place the pole at −0.01 and the zero at −0.1 to
obtain this gain. This will not appreciably change the existing root locus plot since
the pole and zero essentially cancel each other with respect to phase angle.
Step 3: Verify the controller design by drawing the new root locus plot.

EXAMPLE 5.13
Using the block diagram of the system represented in Figure 49, find the proportional
gain K that results in a closed loop damping ratio of 0.707. Design a phase-lag
controller that maintains the same system damping ratio and results in a steady-state
error less than 5%. Predict the steady-state errors that result from a unit step input
for both cases.

Step 1: We begin by drawing the root locus plot for the uncompensated system
and calculating the gain K required for our system damping ratio of 0.707. This

Figure 49 Example: block diagram of physical system.



damping ratio corresponds to the point where the loci paths cross the line extending
radially from the origin at an angle of 45 degrees from the negative real axis. The
root locus plot for this system is straightforward and consists of the section of real
axis between −3 and −5 with the break-away point at −4. The asymptotes and the
paths both leave the real axis at angles of ±90 degrees as shown in Figure 50.
So for our desired damping ratio we want to place our poles at s = −4 ± 4j. To
find the necessary proportional gain K we can apply the magnitude condition at one
of our desired poles. Our gain is calculated as
15K / (|s − p1| |s − p2|) = 15K / [√(1² + 4²) · √(1² + 4²)] = 15K/17 = 1

K ≈ 1.13
Our total loop gain is 15K, or 17, and since we have a type 0 system our error from
a step input is 15/(15 + 15K), or 15/32. This is almost 50% and results in very poor
steady-state performance. To meet our steady-state error requirement, we need the
total static gain greater than or equal to 19; this results in an error less than 1/20, or
5%. Since the proportional gain already provides a gain of 1.13 as determined by our
transient response requirements, we need another gain factor equal to 20/1.13 (or
17.7). To exceed our requirement and demonstrate the effectiveness of phase-lag
compensation, we will set the gain contribution from our phase-lag factor equal to
20.
Step 2: Now that we know what gain is required for the phase-lag term, we can
proceed to place our pole and zero. The pole must be closer to the origin than the
zero to increase our gain. This results in a slight negative phase angle introduced into
the system since the angle from the controller pole is always slightly greater than the
angle from the zero (as the zero is always further to the left). We minimize this effect
by keeping the pole and zero close. For this example let us place the pole at −0.02
and the zero at −0.4; this gives us our additional gain of 20 without significantly
changing our original root locus plot. Thus we can describe our phase-lag controller
transfer function as

Gc(Phase lag) = (s + 0.4)/(s + 0.02) = 20 (2.5s + 1)/(50s + 1)
Step 3: To verify our design, we can add the new pole and zero to the root locus
plot in Figure 50 and redraw it as given in Figure 51. So we see that the root locus

Figure 50 Example: root locus plot for uncompensated system.



Figure 51 Example: root locus plot for phase-lag compensated system.

plot does not significantly change because the pole and zero added by the controller
almost cancel each other. If we wish to calculate the amount of phase angle that
was added to our system, we can approximate it by calculating the angle that the new
pole and zero each make with our desired operating point, s = −4 + 4j.

Angle from pole: φp = 180 − tan⁻¹(4/3.98) = 134.86 degrees

Angle from zero: φz = 180 − tan⁻¹(4/3.6) = 131.99 degrees

Net phase angle added to the system (lag) = φz − φp = −2.87 degrees

Since the net angle added with the phase-lag controller is very small, the original
angle condition and corresponding root locus plot are still valid.
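Both the small net angle and the steady-state error target can be confirmed numerically; a Python sketch using this example's values:

```python
import numpy as np

s = -4 + 4j                   # desired closed loop pole location
zero, pole = -0.4, -0.02      # phase-lag controller placement

# Net angle the controller adds at s (zero contributes +, pole contributes -)
net = np.degrees(np.angle(s - zero)) - np.degrees(np.angle(s - pole))
print(net)                    # about -2.87 degrees of added lag

# Steady-state step error of the compensated type 0 system:
# static gain = Gc(0) * K * G(0) = (0.4/0.02) * 1.13 * 15/(3*5)
static_gain = (0.4 / 0.02) * 1.13 * 15.0 / (3.0 * 5.0)
ess = 1.0 / (1.0 + static_gain)
print(ess)                    # about 0.042, meeting the 5% requirement
```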
Now at this point it is interesting to recall the warning about pole-zero can-
cellation with controllers, as this is nearly the case here. If we examine the earlier
warning (and the effects demonstrated in Example 5.6), we do find a key difference
here. Whereas before we used the zero from the controller to cancel a pole in the
system, now both the pole and zero are part of the controller. This is significant since
we know that the pole of the physical system is an approximation to begin with and
that it will vary at different operating points for any system that is not exactly linear.
In the phase-lag case we can finely tune both the pole and the zero and be relatively
sure that they will not vary during system operation. This is generally true for most
electronic circuits that are properly designed. It is true that if our pole varies even
slightly, say from −0.01 to −0.2, our gain from the phase-lag term goes from being
equal to 10 to being equal to one-half and the system performance is degraded, not
enhanced. If we cannot verify the stability and accuracy of our compensator, then we
should be concerned about the performance of actually implementing this controller
(and others similar to it).
The second point is that to implement the phase-lag compensator we are add-
ing a slow pole into the system since even at high gains of K, it only moves as far
left as −0.4, where the zero is located. It is a balancing act to choose a pole-zero
combination close enough to the origin that high gain can be achieved without
significantly altering the original root locus plot and yet far enough to the left that
the settling time is satisfactory. The effect of adding the slow pole is seen in the
following Matlab solution to this same problem.

EXAMPLE 5.14
Verify the system and controller designed in Example 5.13 using Matlab. Use Matlab
to generate the uncompensated and compensated root locus plots, determine the
gain K for a damping ratio of 0.707, and compare the unit step responses of the
uncompensated and compensated system to verify that the steady-state error require-
ment is met when the phase-lag controller is added.
From Example 5.13 recall that the open loop system transfer function and the
resulting phase-lag controller were given as

G(s) = 15/[(s + 3)(s + 5)]   (Open loop system)

Gc(Phase lag) = (s + 0.4)/(s + 0.02) = 20 (2.5s + 1)/(50s + 1)
To verify this controller using Matlab, we can define the original system transfer
function and the phase-lag transfer function and generate both the root locus and step
response plots, where each plot includes both the uncompensated and compensated
system. The Matlab commands are given below.
%Program commands to generate Root Locus and Step Response Plots
%for Phase-lag example
clear;
K=1.13;
numc=[1 0.4];             %Place zero at -0.4
denc=[1 0.02];            %Place pole at -0.02
num=K*15;                 %Open loop system numerator
den=conv([1 3],[1 5]);    %System denominator
sysc=tf(numc,denc);       %Controller transfer function
sys1=tf(num,den)          %System transfer function
sysall=sysc*sys1          %Overall system in series
rlocus(sys1);             %Generate original Root Locus plot
sgrid(0.707,0)            %Draw line of constant damping ratio on plot
k=rlocfind(sys1);         %Verify gain at zeta=0.707
hold;
rlocus(sysall);           %Add compensated root loci to plot
hold;
figure;                   %Open a new figure window
step(feedback(sys1,1));   %Generate step response of CL uncompensated system
hold;                     %Hold the plot
step(feedback(sysall,1)); %Generate step response of CL compensated system
hold;                     %Release the plot (toggles)
When these commands are executed, we can verify the design from the previous
example. The first comparison is the uncompensated and compensated root locus
plot shown in Figure 52.
It is clear in Figure 52 that the only effect from adding the phase-lag compen-
sator is that the asymptotic root loci paths are slightly curved as we move further

Figure 52 Example: Matlab root locus plot for proportional and phase-lag systems.

from the real axis. When implementing and observing this system, the effects would
not be noticeable relative to the root locus path being discussed. What we must
remember is that we now have a pole and zero near the origin which, as we see for
this example, become the dominant response. To compare the transient responses,
the feedback loop was closed for the proportional and phase-lag compensated sys-
tems and Matlab was used to generate unit step responses for the systems. The two
responses are given in Figure 53. From the two plots our earlier analysis in
Example 5.13 is verified. The uncompensated (relative to phase-lag compensation)

Figure 53 Example: Matlab step responses for proportional and phase-lag compensated
systems.

system, when tuned for a damping ratio equal to 0.707, behaves as predicted with a
very slight overshoot and with a very large steady-state error.
The steady-state error, predicted to be just less than 50% in Example 5.13, is
found to be just less than 50% in the Matlab step response. When the phase-lag
compensator is added, the steady-state error is reduced to less than 5%, as desired,
and we also see the effects of adding the slower pole near the origin as it dominates
our overall response. It would be appropriate, since we are allowed 5% overshoot, to
increase the gain in the compensated system, as shown in Figure 54, and improve the
response during the transient phase while still meeting our requirement of 5%
overshoot.
By increasing the proportional gain by a factor of 4 the total response, as the
sum of the second-order original and compensator, has a shorter settling time, less
overshoot, and better steady-state error performance.
Phase-lag controllers are easily implemented with common electrical compo-
nents (Chapter 6) and provide an alternative to the PI controller when reducing
steady-state errors in systems.
5.5.2.2 Outline for Designing Phase-Lead Controllers in the s-Domain
For a phase-lead controller the steps are slightly different since the goal is to move
the existing root loci paths to more stable or desirable locations. As opposed to the
phase-lag goals, we now want to modify the root locus paths and move them to a
more desirable location. In fact, the starting point of phase-lead design is to
determine the points we want the loci paths to pass through. The steps to
help us design typical phase-lead controllers are listed below.
Step 1: Calculate the dominant complex conjugate pole locations from the
desired damping ratio and natural frequency for the controlled system. These values
might be chosen to meet performance specications like peak overshoot and settling
time constraints. Once peak overshoot and settling time are chosen, we can convert

Figure 54 Example: Matlab step response of phase-lag compensated system (additional gain
of 4).

them to the equivalent damping ratio and natural frequency and finally into the
desired pole locations.
Step 2: Draw the uncompensated system poles and zeros and calculate the total
angle between the open loop poles and zeros and the desired poles. Remember that
the angle condition requires that the sum of the angles be an odd multiple of 180
degrees. The poles contribute negative phase and zeros contribute positive phase.
Use these properties of the angle condition to calculate the angle that the phase-lead
controller must contribute. These calculations are performed as demonstrated when
calculating the angle of departures/arrivals using the root locus plotting guidelines.
Step 3: Using the calculated phase angle required by the controller, proceed to
place the zero of the controller closer to the origin than the pole so that the net angle
contributed is positive, assuming phase angle must be added to the system to stabi-
lize it. Figure 55 illustrates how the phase-lead controller adds the phase angle.
   
β = tan⁻¹[2/(3 − 2)] = 63.4 degrees and θ = 90 + tan⁻¹[2/(2 − 1)] = 153.4 degrees

Thus this configuration of the phase-lead controller will add a net of
153.4 − 63.4 = 90 degrees to the system.
Step 4: Draw the new root locus including the phase-lead controller to verify
the design. If our calculations are correct, the new root locus paths should go directly
through our desired design points. Finish by applying the magnitude condition to
find the proportional gain K required for that position along the root locus paths.
Phase-lead controllers are commonly added to help stabilize the system, and
therefore the desired eect requires adding phase to the system. Since we are adding
a pole and a zero to the system, the net phase change is positive as long as our
controller zero is closer to the origin than our controller pole.

EXAMPLE 5.15
Design a controller to meet the following performance requirements for the system
shown in Figure 56. Note that this system is open loop marginally stable and needs a
controller to make it stable and usable.
• A damping ratio of 0.5
• A natural frequency of 2 rad/sec

Figure 55 Calculating phase-lead angle contributions.



Figure 56 Example: system block diagram for phase-lead controller design.

Since this system is open loop marginally stable we need to modify the root loci and move
them further to the left. Thus a phase-lead controller is the appropriate choice.
Step 1: The desired poles must be located on a line directed outward from the
origin at an angle of 60 degrees (relative to the negative real axis) to achieve our
desired damping ratio of ζ = 0.5. The radius must be 2 to achieve our natural
frequency of ωn = 2 rad/sec. Taking the tangent of 60 degrees means that our ima-
ginary component must be 1.73 times the real component, while the radius criterion
means that the square root of the sum of the squares of the real and imaginary
components must equal 2 (Pythagorean theorem). An alternative, and simpler, method
is to realize that the real component is just the cosine of the angle (the damping ratio
itself) multiplied by the radius and the imaginary component is simply the sine of
the angle multiplied by the radius. The points that meet these requirements are

s₁,₂ = −1 ± 1.73j
Step 2: The total angle from all open loop poles and zeros must be an odd
multiple of 180 degrees to meet the angle condition in the s-plane. For our system
in this example we have two poles at the origin, each contributing −120 degrees, and
one pole at −10 contributing −10.9 degrees. These add to −250.9 degrees, and if
s = −1 + 1.73j is to be a valid point along our root locus plot we need to add an
additional +70.9 degrees of phase angle to be back at −180 degrees and meet our
angle condition. These calculations are shown graphically in Figure 57. Angle from
OL poles: −120 − 120 − tan⁻¹(1.73/9) = −250.9 degrees; angle required by controller:
−180 − (−250.9) = +70.9 degrees.
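The angle bookkeeping above is easy to check numerically. The short Python sketch below (ours; the chapter itself works in Matlab) sums the angles from the open loop poles of GH(s) = 1/(s²(0.1s + 1)) at the trial point:

```python
import cmath
import math

s = -1 + 1.73j  # desired closed-loop pole location

# GH(s) = 1/(s^2(0.1s + 1)) has two poles at the origin and one at -10.
poles = [0, 0, -10]

# Each pole contributes the negative of the angle of the vector from it to s.
angle_from_poles = -sum(math.degrees(cmath.phase(s - p)) for p in poles)
print(round(angle_from_poles, 1))  # about -250.9 degrees

# The controller must supply enough positive phase to land on -180 degrees.
controller_angle = -180 - angle_from_poles
print(round(controller_angle, 1))  # about +70.9 degrees
```

The 70.9-degree deficiency printed here is exactly the controller contribution computed graphically in Figure 57.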
Step 3: To add the +70.9 degrees of phase we need to add the zero and pole
such that the zero is closer to the origin than the pole and where the angle from the
zero (which contributes positively) is 70.9 degrees greater than the angle introduced
by the pole (which contributes negatively). A solid first iteration would be to place
the zero at s = −1, where it contributes exactly +90 degrees of phase angle into the
system. Since we now know that the controller pole must contribute −19.1 degrees, we
can place the pole of the phase-lead controller as shown in Figure 58. Placing a zero
at −1 adds tan⁻¹(1.73/0) = +90 degrees. Then the pole must contribute −19.1 degrees,
and tan⁻¹(1.73/d) = 19.1 degrees, or d = 5. So the pole must be placed at p = −6. Our
final phase-lead controller becomes

Gc = (Phase lead) = 6 (s + 1)/(s + 6) = (s + 1)/((1/6)s + 1)

Figure 57 Example: calculation of required angle contribution from controller.

246 Chapter 5

Figure 58 Example: resulting phase-lead controller.
Step 4: To verify our design we add the zero and pole from the phase-lead
controller to the s-plane and develop the modified root locus plot. We still must
include our two poles at the origin and the pole at −10 from the open loop system
transfer function.
For the root locus plot we have four poles and one zero, thus three asymptotes
at 180 degrees and ±60 degrees. The asymptote intersection point is at s = −5, and
the valid sections of real axis fall between −1 and −6 and to the left of −10 (also the
asymptote). This allows us to approximate the root locus plot as shown in Figure 59.
Without the phase-lead controller, the two poles sitting at the origin immediately
head into the RHP when the loop is closed and the gain is increased. Once the
controller is added, enough phase angle is contributed that it pulls the loci paths
into the LHP before ultimately following the asymptotes back into the RHP. As
designed, the phase-lead causes the paths to pass through our desired design points
of s1,2 = −1 ± 1.73j. As many previous examples have shown, we can apply the
magnitude condition to solve for the required gain K at these points.

Figure 59 Example: root locus plot for phase-lead compensated system.
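The text leaves the value of K to the reader. As a rough check (our own computation in Python, not a number stated in the book), the magnitude condition can be evaluated directly at the design point:

```python
# Magnitude condition for Example 5.15: |K * Gc(s) * GH(s)| = 1 at the desired
# pole, with Gc(s) = 6(s + 1)/(s + 6) and GH(s) = 1/(s^2(0.1s + 1)).
s = -1 + 1.73j

Gc = 6 * (s + 1) / (s + 6)
GH = 1 / (s**2 * (0.1 * s + 1))

K = 1 / abs(Gc * GH)
print(round(K, 2))  # roughly 1.87 (our calculation; the book leaves K unstated)
```

Any gain near this value places the dominant closed loop poles at the design points; larger gains push the poles along the asymptotes toward the RHP, as the root locus plot shows.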
EXAMPLE 5.16
Verify the phase-lead compensator designed in Example 5.15 using Matlab. Recall
that our loop transfer function, as given in Figure 56, is

GH(s) = 1/(s²(0.1s + 1))

The corresponding phase-lead compensator that was designed in Example 5.15 is

Gc = (Phase lead) = 6 (s + 1)/(s + 6) = (s + 1)/((1/6)s + 1)

The phase-lead compensator pole and zero were chosen to make our system root
locus paths proceed through the points s1,2 = −1 ± 1.73j, corresponding to ζ = 0.5
and ωn = 2 rad/sec.
To verify this solution using Matlab, we define the system numerator and
denominator and the phase-lead compensator and proceed to develop the root
locus and step response plots for both the uncompensated and compensated models.
The commands listed below are used to perform these tasks.
%Program commands to generate Root Locus Plot
% for Phase-lead exercise
clear;
numc=6*[1 1];        %Place zero at -1
denc=[1 6];          %Place pole at -6
nump=1;              %Forward loop system numerator
denp=[1 0 0];        %Forward loop system denominator
numf=1;              %Feedback loop system numerator
denf=[0.1 1];        %Feedback loop system denominator
sysc=tf(numc,denc);  %Controller transfer function
sysp=tf(nump,denp);  %System transfer function in forward loop
sysf=tf(numf,denf);  %System transfer function in feedback loop
sysl=sysp*sysf;      %Loop transfer function
sysall=sysc*sysl     %Overall compensated system in series
rlocus(sysl);        %Generate original Root Locus Plot
hold;
rlocus(sysall);      %Add new root loci to plot
sgrid(0.5,2);        %Place lines of constant damping
hold;                %ratio (0.5) and wn (2 rad/s)
tsys=[0:0.05:30];
figure;              %Open a new figure window
step(feedback(sysp,sysf),tsys);      %Generate step response of CL uncompensated system
hold;                                %Hold the plot
step(feedback(sysc*sysp,sysf),tsys); %Generate step response of CL compensated system
hold;                                %Release the plot (toggles)
When the commands are executed, the first result is the root locus plot for the
compensated and uncompensated system, given in Figure 60.
As desired, the unstable open loop poles are attracted to our desired locations
when the phase-lead compensator is added to the system. The system still
proceeds to go unstable at higher gains. The complex poles dominate the response
of this system since the third pole is much further to the left (much faster). The
location of the zero, however, will affect the plot, and our response is still not a true
second-order response. When the uncompensated and compensated systems are both
subjected to a unit step input, the results of the root locus plot become very clear as
the uncompensated system goes unstable. The two responses are given in Figure 61.
Using Matlab, we see that the phase-lead controller did indeed bring the new
loci through our desired design point and resulted in a stable system. The step
response of our compensated system is well behaved and quickly decays to the
desired steady-state value. For any controller to behave as simulated, we must
remember that it assumes linear amplifiers and actuators capable of moving the
system as predicted. The more aggressive the response we design for, the more
powerful (and thus more costly) the actuators required when implementing the
design. There are realistic constraints to how fast we actually want to design our
system to be.
5.5.2.3 Outline for Designing Lag-Lead Controllers in the s-Domain
When both transient and steady-state performances need to be improved, we may
combine the two previous compensators into what is commonly called lag-lead. The
lag-lead controller is simply the phase-lead and phase-lag compensators connected in
series:

Lag-lead: (Phase lag)(Phase lead) = K (T2 s + 1)/(T1 s + 1) · (T3 s + 1)/(T4 s + 1)

Figure 60 Example: Matlab root locus plot for phase-lead compensated system.

Figure 61 Example: Matlab step responses of phase-lead comparison.
In general, designing a lag-lead controller is simply a sequential operation applying
the previous two methods, since their effects on the system are largely uncoupled. The
lead portion should modify the shape of the paths, K is used to locate the poles along
the paths, and the lag portion is used to increase the system gain, thereby reducing
the steady-state error. The lag portion should not change the shape of the existing
loci paths. The steps outlined here will more clearly illustrate this concept.
Step 1: Begin by designing the phase-lead portion first, with the goal of meeting
the transient response specifications. Calculate the dominant complex conjugate pole
locations from the desired damping ratio and natural frequency for the controlled
system and follow the steps for designing a phase-lead compensator (see Sec. 5.5.2.2).
In summary, this involves calculating the phase angle that must be added to the
system to make the root loci go through the desired pole locations. We achieve this
by placing our controller pole and zero at specific locations. Finally, calculate the
required gain K to place the poles at their desired locations. This step is needed
before we can assess how much additional gain is required to achieve our steady-
state performance specification.
Step 2: Once the phase-lead portion is designed and the locus paths go through
the desired points and we know the required proportional gain at that point, we can
proceed to design the phase-lag portion. The goal of the phase-lag is to increase the
gain in our system without changing the root locus plot. To do so we can determine
the system type number and, together with our type of input, calculate the required
total gain in the system to meet the steady-state error specification and find out what
gain is specifically required by the lag portion. Remember that the system and
proportional gain also act in series with the phase-lag compensator gain. Now design
the phase-lag portion by placing the pole and zero near the origin as described in
Section 5.5.2.1.

Figure 62 Example: block diagram of physical system for lag-lead controller.
Step 3: Draw the new root locus including the complete lag-lead controller to
verify the design. If our calculations are correct, the new root locus paths should go
directly through our desired design points and our steady-state error requirements
should be satisfied. Simulating or measuring the response of our system to the
desired input easily determines the steady-state error.
EXAMPLE 5.17
Using the system represented by the block diagram in Figure 62, design a lag-lead
controller to achieve a closed loop system damping ratio equal to 0.5 and a natural
frequency equal to 5 rad/sec. The system also needs to have a steady-state error of
less than 1% while following a ramp input. To design this controller, we will use the
lead portion to place the poles and the lag portion to meet our steady-state error
requirements.
Step 1: Using the damping ratio and natural frequency requirements, we know
that our poles should be at a distance 5 from the origin on a line that makes an angle
of 60 degrees with the negative real axis. This results in desired poles of

s1,2 = −2.5 ± 4.3j
The total angle from all open loop poles and zeros must be an odd multiple of 180
degrees to meet the angle condition in the s-plane. For our system in this example we
have a pole at the origin contributing −120 degrees and one pole at −1 contributing
−109.1 degrees. These add to −229.1 degrees, and if s = −2.5 + 4.3j is to be a valid
point along our root locus plot, we need to add an additional +49.1 degrees of phase
angle to be back at −180 degrees and meet our angle condition. This can be achieved
by placing our zero at −2.5 and our pole at −7.5. These calculations are shown
graphically in Figure 63. Placing a zero at −2.5 adds tan⁻¹(4.33/0) = +90 degrees.
Then the pole must contribute −40.9 degrees, and tan⁻¹(4.33/d) = 40.9 degrees, or
d = 5. So the pole must be placed at p = −7.5.

Figure 63 Example: resulting phase-lead portion of the controller.
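Both sides of this angle calculation can be verified numerically; the Python sketch below (ours, in place of the chapter's graphical construction) computes the plant's phase deficiency at the design point and the contribution of the chosen lead zero and pole:

```python
import cmath
import math

def deg(z):
    return math.degrees(cmath.phase(z))

s = -2.5 + 4.33j  # desired closed-loop pole

# Phase deficiency of the plant G(s) = 5/(s(s + 1)) at the desired pole:
deficiency = (deg(s - 0) + deg(s - (-1))) - 180
print(round(deficiency, 1))  # about 49.1 degrees must be added

# Net contribution of the chosen lead zero (-2.5) and pole (-7.5):
added = deg(s + 2.5) - deg(s + 7.5)
print(round(added, 1))       # about 49.1 degrees, matching the requirement
```

Because the zero sits directly below the design point, it contributes exactly +90 degrees, and the pole placement follows from the remaining −40.9 degrees.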
The last task in this step is to calculate the proportional gain required to move
us to this location on our plot. To do so we apply the magnitude condition (from the
open loop poles at 0, −1, and −7.5 and the zero at −2.5) using our desired pole
location:

K · (5 · |s − z1|)/(|s − p1| · |s − p2| · |s − p3|) = 1

K · (5 · 4.33)/(√(2.5² + 4.33²) · √(1.5² + 4.33²) · √(5² + 4.33²)) = K · 21.65/(√25 · √21 · √43.75) = 1

K = 7
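The same arithmetic can be checked in a single line of Python (our own sketch of the hand calculation above):

```python
# Magnitude condition for Example 5.17 in pole-zero form:
# K = |s||s + 1||s + 7.5| / (5 |s + 2.5|), evaluated at the desired pole.
s = -2.5 + 4.33j

K = abs(s) * abs(s + 1) * abs(s + 7.5) / (5 * abs(s + 2.5))
print(round(K, 1))  # 7.0, matching the hand calculation
```

Note that |s| = 5 falls out immediately, since the design point was placed at a radius of ωn = 5 from the origin.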
Our final phase-lead controller becomes

Phase lead = 7 (0.4s + 1)/(0.13s + 1)
Step 2: Now we can design the phase-lag portion to meet our steady-state error
requirement. In this example we have a type 1 system following a ramp input, so the
steady-state error will be proportional to 1/K, where K is the total gain in the
system. Recall that we already have 7 from the phase-lead compensator and 5
from the system, or 35 total. Our requirement is still not met since we want less
than 1% error, or a total gain of 100. This indicates that our phase-lag portion must
introduce an additional gain of 100/35, or approximately three times the current
gain. We will choose to add four times the gain by placing our pole at −0.02 and our
zero at −0.08. This results in the phase-lag compensator of

Phase lag = (s + 0.08)/(s + 0.02)
Step 3: Combining the lead and lag terms gives us the overall controller that
needs to be added to the system:

Lag-lead = 7 · (s + 0.08)/(s + 0.02) · (0.4s + 1)/(0.13s + 1)

To verify the design we need to draw the root locus plot and check that our roots do
pass through the desired points. In addition, it is helpful to check the time response
of the system to a ramp input and verify the steady-state errors. The verification of
this example is performed in the next example using Matlab.
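Before simulating, the ramp-error arithmetic can be checked in closed form. This Python sketch (ours) follows the gain bookkeeping used above for the type 1 loop:

```python
# Velocity constant for the type-1 loop, following the bookkeeping above:
# Kv = lim s->0 of s * Gc(s) * G(s). The lead term contributes 7, the plant 5,
# and the lag zero/pole pair (0.08/0.02) contributes a factor of 4.
Kv = 7 * 5 * (0.08 / 0.02)   # about 140
ess = 1 / Kv                 # steady-state error to a unit ramp
print(round(Kv), round(ess, 4))
```

The resulting error of roughly 0.7% comfortably meets the 1% specification, which is why the lag portion was sized at four times the gain rather than the minimum three.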
EXAMPLE 5.18
Use Matlab to verify the root locus plot and time response of the controller and
system designed in Example 5.17. Recall that the goals of the system include a
damping ratio equal to 0.5, a natural frequency of 5 rad/sec, and less than 1%
steady-state error to a ramp input.
To verify the design we will define the uncompensated and compensated system in
Matlab, generate a root locus plot for the compensated system, and generate time
response (from a ramp input) plots of the uncompensated and compensated systems.
The commands used are included here.
%Program commands to generate Root Locus Plot
% for lag-lead exercise
clear;
numc=7*conv([1 0.08],[0.4 1]); %Place zeros at -0.08 and -2.5
denc=conv([1 0.02],[0.13 1]);  %Place poles at -0.02 and -7.5
nump=5;                        %Forward loop system numerator
denp=[1 1 0];                  %Forward loop system denominator
sysc=tf(numc,denc);            %Controller transfer function
sysp=tf(nump,denp);            %System transfer function in forward loop
sysall=sysc*sysp               %Overall compensated system in series
rlocus(sysp);                  %Generate original Root Locus Plot
hold;
rlocus(sysall);                %Add new root loci to plot
sgrid(0.5,5);                  %Place lines of constant damping
hold;                          %ratio (0.5) and wn (5 rad/s)
tsys=[0:0.05:6];
figure;                        %Open a new figure window
lsim(feedback(sysp,1),tsys,tsys);      %Generate ramp response of CL uncompensated system
hold;                                  %Hold the plot
lsim(feedback(sysc*sysp,1),tsys,tsys); %Generate ramp response of CL compensated system
lsim(tf(1,1),tsys,tsys);               %Generate ramp input signal on plot
hold;                                  %Release the plot (toggles)
These commands define the compensated and uncompensated system from Example
5.17 and proceed to draw the root locus plot given in Figure 64. We see that the lag-
lead controller does indeed move our locus paths to go through our desired design
points of s1,2 = −2.5 ± 4.3j, giving us our desired damping ratio of 0.5 and natural
frequency of 5 rad/sec. The uncompensated root locus plot follows the asymptotes at
−0.5 and is not close to meeting our requirements.
To verify the steady-state error criterion we will use Matlab (commands
already given above) to generate the ramp response of the uncompensated and
compensated system. The results are given in Figure 65. Except for a very short
initial time, the compensated system follows the desired ramp input nearly exactly.
The uncompensated physical system response has a much larger settling time and
steady-state error.
This example, in conclusion, illustrates the use of phase-lead and phase-lag
compensation to modify the transient and steady-state behavior of our systems.
As discussed in the introduction to this section, these controllers share many
attributes with the common PID controller, where the integral gain is used to control
steady-state errors and the derivative gain to modify transient system behavior.
Looking at the parallel design methods as they occur in the frequency domain will
conclude this section on phase-lag and phase-lead controllers.

Figure 64 Example: Matlab root locus plot of lag-lead compensated system.

Figure 65 Example: Matlab ramp responses of lag-lead compensated and uncompensated
systems.
5.5.3 Frequency Response Design of Phase Lag/Lead Controllers

Bode plots provide a quick method of using the manufacturer's data to design a
controller, since frequency response plots are often available from the manufacturer
for many controller components. Remember that to construct the open loop Bode
plot for the system, we simply take each component in series in the block diagram
and add their respective frequency response curves together. Therefore, if we know
what the open loop system is lacking in magnitude and/or phase, we simply find the
correct controller curve that, when added to the open loop system, results in the
final desired response of our system. These same concepts were already discussed in
Section 5.4.3 when we designed several variations of PID controllers in the frequency
domain using Bode plots. In fact, there are fewer design method differences between
PID and phase-lag/lead in the frequency domain than in the s-domain using root
locus plots. For example, as Figure 66 illustrates, the phase-lag and PI frequency
plots are very similar and differ only at low frequencies.
The corresponding magnitude and angle contributions for the phase-lag and PI
controllers are as follows:

Phase lag: Gc(s) = K (T2 s + 1)/(T1 s + 1)   φ(ω) = tan⁻¹(T2 ω) − tan⁻¹(T1 ω)   a = 20 log(T1/T2)

PI: Gc(s) = Kp + Ki/s   φ(ω) = −90 degrees + tan⁻¹(ω/(Ki/Kp))
In terms of magnitude, the phase-lag compensator has a finite gain at low
frequencies, while the integrator in the PI compensator has infinite gain as ω → 0.
This difference is seen in the s-plane as the phase-lag controller does not place a pole
directly at the origin as the integrator does. Relative to phase angle, both compen-
sators end without any angle contribution at high frequencies. The phase-lag term
begins at zero and only adds negative phase angle over a narrow range of frequencies,
while the PI term begins at −90 degrees even at very low frequencies. With respect to
stability, this gives a slight edge to the phase-lag method since negative phase angle is
added over a narrower range.
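The finite-versus-infinite low-frequency gain distinction is easy to see numerically. In the Python sketch below (ours; the parameter values K = 1, T1 = 10, T2 = 1, Kp = Ki = 1 are illustrative choices, not taken from the text), the lag gain levels off while the PI gain keeps growing:

```python
# Low-frequency gain of a phase lag K(T2 s + 1)/(T1 s + 1) versus a PI
# compensator Kp + Ki/s, evaluated along s = jw. Parameter values are
# illustrative only.
def lag_gain(w):
    return abs((1j * w * 1 + 1) / (1j * w * 10 + 1))   # K=1, T2=1, T1=10

def pi_gain(w):
    return abs(1 + 1 / (1j * w))                       # Kp=1, Ki=1

for w in (1.0, 0.1, 0.001):
    print(w, round(lag_gain(w), 3), round(pi_gain(w), 1))
# The lag gain approaches its finite DC value, while the PI gain grows
# without bound as w -> 0 because of the pole at the origin.
```

This is the frequency-domain picture of the s-plane statement above: the lag pole near (but not at) the origin bounds the DC gain, while the integrator's pole at the origin does not.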
The net effect is that the methods presented earlier with respect to designing PI
controllers in the frequency domain also apply directly to designing phase-lag
controllers. This is also true for the other variations (phase-lead and lag-lead). If we
Figure 66 Bode plot comparisons of phase-lag and PI controllers.

compare the phase-lead and PD controllers, given in Figure 67, we see the same
parallels.
The corresponding magnitude and angle contributions for the phase-lead and
PD are as follows:
Phase lead: Gc(s) = K (T1 s + 1)/(T2 s + 1)   φ(ω) = tan⁻¹(T1 ω) − tan⁻¹(T2 ω)   a = 20 log(T1/T2)

PD: Gc(s) = Kp (1 + (Kd/Kp)s)   φ(ω) = tan⁻¹(ω/(Kp/Kd))
Whereas the phase-lag and PI controllers differed only at low frequencies, here
we see that the phase-lead and PD controllers only vary at higher frequencies. A
phase-lead controller does not have infinite gain at high frequencies, as compared to
the PD, and only contributes positive phase angle over a range of frequencies. The PD
controller continues to contribute 90 degrees of phase angle at all frequencies greater
than Kp/Kd. It is likely that the phase-lead controller will handle noisy situations
better than pure derivatives (i.e., PD) since it does not have as large a gain at very
high frequencies.
To obtain the Bode plot for the lag-lead controller, since we are working in the
frequency domain and the lag and lead terms multiply, we simply add the two curves
together as shown in Figure 68. As before, the lag-lead compensation is compared
with its equivalent, the common PID. Since the lag-lead compensation is the sum-
mation of the separate phase-lag and phase-lead, the same comments apply. The lag-
lead controller has limited gains at both high and low frequencies, whereas the PID
has infinite gain at those frequencies. Similarly, the lag-lead controller only contri-
butes to the phase angle plot at distinct frequency ranges, while the PID begins by
adding −90 degrees and ends at +90 degrees.
EXAMPLE 5.19
Using the system represented by the block diagram in Figure 69, design a controller
that leaves the existing dynamic response alone but that achieves a steady-state error
from a step input of less than 2%. Without any controller (Gc = 1), we can close the
loop and determine the current damping ratio and natural frequency.

Figure 67 Bode plot comparisons of phase-lead and PD controllers.

Figure 68 Bode plot comparison of lag-lead and PID controllers.
C(s)/R(s) = 75/(s³ + 9s² + 25s + 100)

The system poles (roots of the denominator) are

s1,2 = −0.78 ± 3.58j,  s3 = −7.5

Thus, without compensation the system currently has a damping ratio equal to 0.21
and a natural frequency of 3.7 rad/sec.
We can calculate the unit step input steady-state error from either the block
diagram or the closed loop transfer function. Using the block diagram, we see that we
have a type 0 system with a gain of 75/25, or 3. Since the steady-state error equals
1/(K + 1), we will have a steady-state error from a step input of one fourth, or 25%.
This agrees with what we find by closing the loop and applying the final value
theorem to see that the steady-state output becomes 75/100, or an error of 25%.
To achieve a steady-state error less than 2%, we need to have a total gain of 49
in our system (ess = 1/(K + 1)). Since we have a gain of 3, we need to add approxi-
mately another gain of 17 (giving a total of 51) to meet our error requirement. To do
this we will add a phase-lag controller with a pole at −0.02 and the zero at −0.34,
giving us an additional gain of 17 in our system. To begin with, let us examine our
open loop uncompensated Bode plot and our compensator Bode plot to see how this
is achieved. The uncompensated system plot is given in Figure 70 and the phase-lag
contribution is given in Figure 71.
Figure 69 Example: block diagram of physical system for phase-lag compensation.

Figure 70 Example: Bode plot of OL uncompensated system (GM = 8.5194 dB [at 5 rad/
sec], PM = 39.721 deg. [at 3.011 rad/sec]).

The phase-lag compensator is

Phase lag = (s + 0.34)/(s + 0.02)

This gives us our additional gain of 17, or 24.6 dB (20 log 17), as shown in Figure 71.
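The error budget above can be confirmed with a few lines of arithmetic; this Python sketch (ours) applies the type 0 step-error formula before and after the lag gain is added:

```python
# Steady-state step error for the type-0 system of Example 5.19:
# ess = 1/(1 + K), where K is the DC loop gain.
K_plant = 75 / (5 * 5)            # DC gain of 75/((s+5)(s^2+4s+5)) is 75/25 = 3
K_lag = 0.34 / 0.02               # DC gain added by (s+0.34)/(s+0.02), i.e. 17

ess_before = 1 / (1 + K_plant)          # 0.25, the 25% error noted above
ess_after = 1 / (1 + K_plant * K_lag)   # about 0.019, under the 2% target
print(round(ess_before, 3), round(ess_after, 3))
```

The compensated error of roughly 1.9% is just inside the 2% specification, which is why the zero was placed at a zero/pole ratio of 17 rather than the bare minimum of 16.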
We see in Figure 70 that the uncompensated open loop system has a gain
margin equal to 8.5 dB and a phase margin equal to 39.7 degrees (at a crossover
frequency of 3 rad/sec). Since we do not wish to significantly change the transient
response of the system, the margins should remain approximately the same even
after we add the phase-lag compensator.

Figure 71 Example: Bode plot of phase-lag compensator.
In Figure 71, the Bode plot for the phase-lag compensator, we see that at low
frequencies approximately 25 dB of gain is added to the system, thereby decreasing
the steady-state error. The phase-lag term does not change our original system at
higher frequencies, as desired. Over a band of frequencies the phase-lag compensator
does tend to destabilize the system, since it contributes additional negative phase
angle. If this occurs at low enough frequencies, it will not significantly change our
original gain margin and phase margin.
When we apply the phase-lag controller to the system and generate the new
Bode plot, given in Figure 72, we can again calculate the margins to verify our
design. Recalling that our uncompensated system has a gain margin equal to 8.5
dB and a phase margin equal to 39.7 degrees (at a crossover frequency of 3 rad/sec),
the compensated system now has a gain margin equal to 7.5 dB and a phase margin
equal to 33.4 degrees (at a crossover frequency still at 3 rad/sec). So although they
are slightly different (tending slightly more toward marginal stability), they did not
change significantly and should not noticeably affect the transients. If we overlay all
three Bode plots as shown in Figure 73, it is easier to see where the phase-lag term
affects our system and where it does not.
When the effects from the phase-lag compensator are examined alongside the
original system, we see how it only modifies the original system at the lower fre-
quencies and not at higher frequencies. The phase-lag term raises the magnitude plot
enough at the lower frequencies to meet our steady-state error requirements and
adds some negative phase angle, but only over a range of lower frequencies. It is also
clear in Figure 73 how the uncompensated open loop (OL) system and the phase-lag
compensator add together and result in the final Bode plot.
Perhaps the clearest way of evaluating our design is by simulating the uncom-
pensated and compensated system as shown in Figure 74. The step responses clearly
show the reduction in the steady-state error that results from the addition of the
phase-lag controller. The uncompensated system reaches a steady-state value of 0.75,

Figure 72 Example: Bode plot of phase-lag compensated OL system (GM = 7.47 dB [at
4.741 rad/sec], PM = 33.399 deg. [at 3.0426 rad/sec]).
Figure 73 Example: Bode plot comparison of OL system, phase-lag, and compensated
system.

as predicted earlier in this example using the FVT. When the phase-lag controller is
added, the response approaches the desired value of 1. The percent overshoot, rise
time, peak time, and settling time were not significantly changed after the compen-
sator was added (this was one of the original goals).
The Matlab commands used to calculate the margins and verify the system
design are included here for reference.

Figure 74 Example: step responses of original and phase-lag compensated systems.

%Program commands to generate Bode plots
% for phase-lag exercise
clear;
numc=[1 0.34];             %Place zero at -0.34
denc=[1 0.02];             %Place pole at -0.02
nump=75;                   %Forward loop system numerator
denp=conv([1 5],[1 4 5]);  %Forward loop system denominator
sysc=tf(numc,denc);        %Controller transfer function
sysp=tf(nump,denp);        %System transfer function in forward loop
sysall=sysc*sysp;          %Overall compensated system in series
syscl=feedback(sysp,1)     %Uncompensated closed loop TF
sysclc=feedback(sysall,1)  %Compensated closed loop TF
margin(sysp)               %Generate Bode plot with PM and GM for plant
figure;
bode(sysc,{0.001,100});    %Generate Bode plot for phase-lag
figure;
margin(sysall);            %Generate Bode plot with PM and GM for final OL system
figure;
bode(sysp,sysc,sysall)
tsys=[0:0.05:10];
figure;                    %Open a new figure window
step(syscl,tsys);          %Generate step response of CL uncompensated system
hold;                      %Hold the plot
step(sysclc,tsys);         %Generate step response of CL compensated system
hold;                      %Release the plot (toggles)
EXAMPLE 5.20
Given the system represented by the block diagram in Figure 75 (see also Example
5.19), use Bode plots to design a controller that modifies the dynamic response to
achieve a damping ratio of 0.5 and a natural frequency of 8 rad/sec. Without any
controller (Gc = 1), we can close the loop and determine the current damping ratio
and natural frequency.

C(s)/R(s) = 75/(s³ + 9s² + 25s + 100)

The system poles (roots of the denominator) are

s1,2 = −0.78 ± 3.58j,  s3 = −7.5

Thus, without compensation the system currently has a damping ratio equal to
0.21 and a natural frequency of 3.7 rad/sec, where the goals of the controller (ζ = 0.5
and ωn = 8 rad/sec) are to approximately double the damping ratio and natural

Figure 75 Example: block diagram of physical system for phase-lead compensation.

frequency of the uncompensated system. These changes will reduce the overshoot
and settling time of the system in response to a step input.
To begin with, we first need to relate our closed loop performance requirements to
our open loop Bode plots. Once we know what our desired Bode plot should be, we
can find where the uncompensated OL system is lacking and design the phase-lead
controller to add the required magnitude and phase angle to make up the differences.
Since we want our compensated system to have a damping ratio equal to 0.5, we
know that we need a phase margin approximately equal to 50 degrees (see Figure 48,
Chap. 4), which needs to occur at our crossover frequency. We find our crossover
frequency from the damping ratio and natural frequency requirements. Using Figure
49 from Chapter 4, we find that ωc/ωn = 0.78 at a damping ratio of 0.5. Knowing that
we want a closed loop natural frequency, ωn, equal to 8 rad/sec means that the open
loop crossover frequency should equal approximately 6.3 rad/sec. Now we can define
our requirements in terms of our compensated OL Bode plot measurements:

Phase margin = 50 degrees at a crossover frequency of 6.3 rad/sec

To calculate what is lacking in the uncompensated system, we must draw the
uncompensated system open loop Bode plot and measure the magnitude and phase
angle at our desired crossover frequency. Recognizing that this system is the same
as in Example 5.19, we can refer ourselves to Figure 70, where the Bode plot is
already completed. We see that at our desired crossover frequency (6.3 rad/sec)
the phase angle for the uncompensated system is equal to −200 degrees and the
magnitude is equal to −15 dB. Thus, to achieve our closed loop compensated perfor-
mance requirements we need to raise the magnitude plot +15 dB and add +70
degrees of phase angle at our crossover frequency of 6.3 rad/sec. This will make
our final system have a phase margin of 50 degrees at our crossover frequency
(since we are currently at −200 degrees, we need to add 70 degrees to be 50 degrees
above −180 degrees at ωc).
To design our phase-lead controller, let us make T1 = 0.8 and T2 = 0.02 and
generate the Bode plot.

Phase lead = (0.8s + 1)/(0.02s + 1)

Our first break occurs at 1/T1, or 1.25 rad/sec, and the controller adds positive phase
angle and magnitude, resulting in Figure 76.
Examining the Bode plot we see that our goal of adding approximately 15 dB
of magnitude and 70 degrees of phase angle at our desired crossover frequency of
6.3 rad/sec is achieved. When this response is added to the uncompensated OL
system's response, the result should be a phase margin of 50 degrees at ωc = 6.3
rad/sec. To verify that this is indeed the case, let us generate the Bode plot for the
compensated system and measure the resulting phase margin. This plot is given in
Figure 77, where we see that when the two responses are added we achieve a phase
margin of 52 degrees at a crossover frequency of 6.7 rad/sec, slightly exceeding our
performance requirements.
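The margin reported by Matlab can be reproduced by evaluating the compensated open loop directly; in this Python sketch (ours), the crossover frequency of 6.738 rad/sec is taken from the Matlab `margin` output quoted in Figure 77:

```python
import cmath
import math

def loop(w):
    # Compensated open loop: (0.8s + 1)/(0.02s + 1) * 75/((s + 5)(s^2 + 4s + 5))
    s = 1j * w
    lead = (0.8 * s + 1) / (0.02 * s + 1)
    plant = 75 / ((s + 5) * (s ** 2 + 4 * s + 5))
    return lead * plant

wc = 6.738  # crossover frequency reported by Matlab's margin() in Figure 77
mag = abs(loop(wc))
pm = 180 + math.degrees(cmath.phase(loop(wc)))
print(round(mag, 3), round(pm, 1))  # magnitude near 1, phase margin near 52 deg
```

The unit magnitude confirms that 6.738 rad/sec is indeed the gain crossover, and the phase margin of about 52 degrees matches the design target of 50 degrees.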
It is easier to see how the phase-lead compensator adds to the original system
in Figure 78, where all the terms are plotted individually. We see that at low and high
frequencies there is little change to the original system, and that the phase-lead
compensator adds the desired phase angle and magnitude at the range of frequencies
determined by T1 and T2.

Figure 76 Example: Bode plot of phase-lead compensator (phase lead: T1 = 0.8, T2 = 0.02).
Finally, to verify the design in the time domain, let us examine the step
responses of the uncompensated and compensated systems, given in Figure 79. As
we expected, and hoped for, the compensated system exhibits less overshoot and
shorter response times (rise, peak, and settling times) than the uncompensated sys-
tem in response to unit step inputs. On a related note, however, we see that the
steady-state performance is not improved, as it was with the phase-lag controller. To
address this issue, we look at one final example where the phase-lag and phase-lead
terms are combined as a lag-lead controller. Also, it is worth remembering that there
is a price to pay for the increased performance. To implement the phase-lead con-
troller, we may need more expensive physical components (amplifiers and actuators)
capable of generating the response designed for. For example, if we were to plot the
power requirements for each response, we would find that the compensated system
demands a much higher peak power to achieve the faster response. The goal is not
usually to design the fastest controller but one that balances the economic and
engineering constraints.
The Matlab commands used to generate the plots and measure the margins are
given below.

Figure 77 Example: Bode plot of phase-lead compensated system (GM = 17.482 dB [at
20.053 rad/sec], PM = 52.101 deg. [at 6.738 rad/sec]).

Figure 78 Example: Bode plot of OL, phase-lead, and compensated factors.

Figure 79 Example: closed loop step responses of original and phase-lead compensated
systems.
%Program commands to generate Bode plots
% for phase-lead exercise
clear;
numc=[0.8 1];              %Place zero at -1.25
denc=[0.02 1];             %Place pole at -50
nump=75;                   %Forward loop system numerator
denp=conv([1 5],[1 4 5]);  %Forward loop system denominator
sysc=tf(numc,denc);        %Controller transfer function
sysp=tf(nump,denp);        %System transfer function in forward loop
sysall=sysc*sysp;          %Overall compensated system in series
syscl=feedback(sysp,1)     %Uncompensated closed loop TF
sysclc=feedback(sysall,1)  %Compensated closed loop TF
margin(sysp)               %Generate Bode plot with PM and GM for plant
figure;
bode(sysc,{0.001,100});    %Generate Bode plot for phase-lead
figure;
margin(sysall);            %Bode plot with PM and GM for final OL system
figure;
bode(sysp,sysc,sysall)
tsys=[0:0.05:10];
figure;                    %Open a new figure window
step(syscl,tsys);          %Generate step response of CL uncompensated system
hold;                      %Hold the plot
step(sysclc,tsys);         %Generate step response of CL compensated system
hold;                      %Release the plot (toggles)

EXAMPLE 5.21
Use Matlab to combine the phase-lag and phase-lead controllers from Examples 5.19
and 5.20, respectively, and verify that both the steady-state and transient require-
ments are met when the controllers are implemented together as a lag-lead. Generate
both the Bode plot (with stability margins) and step responses for the lag-lead
compensated OL system.
Recall that we were able to meet our steady-state error goal of less than a
2% error resulting from a step input using the phase-lag controller and our
transient response goals of a closed loop damping ratio equal to 0.5 and a
natural frequency equal to 8 rad/sec using the phase-lead controller. Combining
them should enable us to meet both requirements simultaneously. The final lag-lead controller becomes

Lag-lead = [(s + 0.34)/(s + 0.02)] · [(0.8s + 1)/(0.02s + 1)]

To verify the response we will use Matlab to generate the compensated OL Bode plot
and the resulting step response plot. The commands used to generate these plots are
as follows:

%Program commands to generate Bode plots
% for lag-lead exercise
clear;
numclag=[1 0.34];   %Phase-lag, Place zero at -0.34
denclag=[1 0.02];   %Phase-lag, Place pole at -0.02
numclead=[0.8 1];   %Phase-lead, Place zero at -1.25
denclead=[0.02 1];  %Phase-lead, Place pole at -50
nump=75;            %Forward loop system numerator
denp=conv([1 5],[1 4 5]);  %Forward loop system denominator
sysclag=tf(numclag,denclag);    %Phase-lag controller TF
sysclead=tf(numclead,denclead); %Phase-lead controller TF
sysp=tf(nump,denp);             %System transfer function in forward loop
sysall=sysclag*sysclead*sysp;   %Overall compensated system in series
syscl=feedback(sysp,1)          %Uncompensated closed loop TF
sysclc=feedback(sysall,1)       %Compensated closed loop TF
margin(sysall);     %PM and GM Bode plot for final system
figure;
bode(sysp,sysclag,sysclead,sysall)  %Individual components plotted
tsys=[0:0.05:10];
figure;             %Open a new figure window
step(syscl,tsys);   %Step response of CL uncompensated system
hold;               %Hold the plot
step(sysclc,tsys);  %Step response of CL compensated system
hold;               %Release the plot (toggles)
Since the open loop uncompensated Bode plot is already given in Figure 70, let us
proceed to plot the lag-lead compensated plot and verify that our phase margin and
crossover frequency have remained the same as designed for in Example 5.20. The
final Bode plot is given in Figure 80, where we see that we now have a phase margin
equal to 49 degrees and a crossover frequency equal to 6.7 rad/sec, only slightly
lower than in the previous example (due to adding the lag term) but still very close to
our design goals.
To see how the individual phase-lag and phase-lead terms add to the original
OL system, we can plot each term separately as shown in Figure 81. For the most part, the phase-lag term modifies the system only at lower frequencies and the phase-lead term only at higher frequencies; when they are added together, both the steady-state and transient requirements are still met.
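This frequency separation is easy to check numerically. As a quick sketch (in Python rather than Matlab, using only the compensator values from this example), we can evaluate the magnitude of each term at a low and a high frequency:

```python
# Magnitude of each compensator term from this example, evaluated at
# a low and a high frequency (w in rad/sec, s = jw).

def lag_mag(w):
    s = 1j * w
    return abs((s + 0.34) / (s + 0.02))         # phase-lag term

def lead_mag(w):
    s = 1j * w
    return abs((0.8 * s + 1) / (0.02 * s + 1))  # phase-lead term

# Low frequency: the lag term boosts the gain (improving ess),
# while the lead term is essentially unity.
print(lag_mag(0.01), lead_mag(0.01))

# High frequency: the lag term has settled near unity gain,
# while the lead term supplies the extra gain and phase lead.
print(lag_mag(100.0), lead_mag(100.0))
```

At 0.01 rad/sec the lag term approaches its low-frequency gain of 0.34/0.02 = 17 while the lead term is near unity; at 100 rad/sec the roles reverse, which is exactly the separation seen in Figure 81.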
Finally, in Figure 82 we can verify that the requirements are met in the time
domain by examining the step response plots for the uncompensated and compen-
sated systems. Looking at the responses we see that the transient response of the
closed loop compensated system has less overshoot and shorter rise, peak, and
settling times than the closed loop uncompensated system, and that the steady-state error is much improved after adding the lag-lead controller. So, as in root locus (s-plane) design methods, we can design the lag and lead portions

Figure 80 Example: Bode plot of lag-lead compensated system (GM = 17.073 dB [at 19.602 rad/sec], PM = 49.337 deg [at 6.7435 rad/sec]).

independently in the frequency domain to achieve both satisfactory steady-state and


transient performance.
In conclusion, phase-lag and phase-lead controllers provide alternative design
options with performance very similar to PID controllers. As also mentioned, there
are sometimes practical advantages to phase-lag and phase-lead controllers during
implementation with regard to noise amplification and components. It is easy to see, from both the s-plane pole and zero contributions and the frequency domain magnitude and phase angle contributions, that many similarities exist between PID and lag-lead compensation techniques.

Figure 81 Example: Bode plot of individual terms (lag, lead, original, and nal systems).

Figure 82 Example: step responses of original and lag-lead compensated systems.

5.6 POLE PLACEMENT CONTROLLER DESIGN IN STATE SPACE


The methods presented thus far in this chapter are limited (regarding design and
simulation) when dealing with any system that is not linear, time invariant, and
single input-single output. Optimal and adaptive controllers, mostly time varying
and nonlinear, must be analyzed using other techniques, many of which use state
space representations. Nonlinear and time varying systems can be designed using
conventional techniques, largely through trial and error, but the optimal result is
seldom achieved. Complex systems are generally designed using some performance
index that indicates how well a controller is performing. This index largely deter-
mines the system behavior, since it is the yardstick used to measure the perfor-
mance. Two common design approaches are used when designing control systems
with the state space techniques. Pole placement techniques are common introductory
techniques and are presented in this section to introduce us to state space controller
design. An alternative is using a quadratic optimal regulator system that seeks to
minimize an error function. The error function must be defined and might consist of several different error functions depending on the application. Practical limitations are also placed on the controller by placing a constraint on the control vector that serves to provide limits on the control signal corresponding to actuator saturation(s). The resulting system seeks a compromise between minimizing the error
squared and minimizing the control energy. The matrices, Q and R, are used as
weighting functions in the performance index, commonly called the quadratic per-
formance index. Finding solutions to equations with more unknowns than equations
is the primary reason for the use of quadratic optimal regulators. These techniques
are discussed in Chapter 11.
Pole placement techniques may also be used to stabilize state space controllers.
Although the idea is quite simple, it assumes that we have access to all the states, that
is, that they can be measured. This is seldom the case, and observers must be used to
estimate the unknown states, as shown in Section 11.5.2. The advantage is that for

controllable systems, all poles can be placed wherever we want them (assuming the
physics are possible). Controllability is determined by finding the rank of the controllability matrix:

rank [B | AB | ... | A^(n-1) B]

If the rank is less than the system order (or size of A), the system is not controllable. Essentially, the system is controllable when all of the column vectors are linearly independent and each output is affected by at least one of the inputs. If one or more of the outputs are not affected by an input, then no matter what controller we design, we cannot guarantee that all the states will be well behaved, or controlled. Now we
can review the original state space matrices:
dx/dt = A x + B u

where we have an arbitrary input u. If we close the loop and introduce feedback into
our system, our additional input into the system becomes
u = -K x

Although the gain vector K can be determined by using a transformation and


linear algebra (easily done in Matlab), a simpler approach for systems of third order and less is to form the modified system matrix, solve for the eigenvalues as a function of K, and choose gains to place each pole at a predetermined location. The gain vector K can be found by equating the coefficients of our characteristic equation (determinant) from our system matrix with the coefficients from our desired characteristic equation, formed from our desired pole locations.
To illustrate how the feedback loop and gain vector affect the system matrix, let us substitute the feedback, u = -K x, back into the original state space matrix equation. Then

dx/dt = A x - B K x = (A - B K) x

A modified system matrix, Ac, is formed which includes the control law. Since our
controller gains now appear in the system matrix, the eigenvalues (poles) of Ac can
be matched to the desired poles by adjusting the gains in the K vector. This method is
shown in the following example and assumes that all states are available for feedback.

EXAMPLE 5.22
Using the differential equation describing a unit mass under acceleration, determine the state space model and design a state feedback controller utilizing a gain matrix to place the poles at s = -1 ± 1j. Note that the open loop system is marginally stable and the controller, if designed properly, will stabilize it and result in a system damping ratio equal to 0.707 and a natural frequency equal to 1.4 rad/sec. The differential equation for a mass under acceleration is given below, where c is the position of the mass m and r is the force input on the system:

d^2c/dt^2 = (1/m) r

First we must develop the state matrices. To do so, let the first state x1 be the position and the second state x2 the velocity. Then x1 = c and the following matrices are determined:

x1' = x2 = c'

x2' = c'' = (1/m) r

[x1']   [0  1][x1]   [ 0 ]
[x2'] = [0  0][x2] + [1/m] r
To determine if it is controllable, we take the rank of the controllability matrix M. The first column is the B vector and the second column is the vector resulting from A·B. Since the resulting M matrix is nonsingular and of rank 2, the system is controllable.

M = [B | AB] = [ 0    1/m ]
               [1/m    0  ],   rank = 2, controllable
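The same check is easy to reproduce in a few lines of code. A minimal sketch (Python with plain lists rather than Matlab; nothing beyond the matrices of this example is assumed):

```python
m = 1.0  # unit mass from the example

# State matrices for the mass under acceleration (a double integrator)
A = [[0.0, 1.0],
     [0.0, 0.0]]
B = [0.0, 1.0 / m]

# A*B for the 2x2 A and 2x1 B
AB = [A[0][0] * B[0] + A[0][1] * B[1],
      A[1][0] * B[0] + A[1][1] * B[1]]

# Controllability matrix M = [B | AB]
M = [[B[0], AB[0]],
     [B[1], AB[1]]]

# For a 2x2 matrix, full rank is equivalent to a nonzero determinant
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(M)     # [[0.0, 1.0], [1.0, 0.0]]
print(detM)  # -1.0 (nonzero, so rank 2: the system is controllable)
```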
Using the controller form developed above:
dx/dt = A x - B K x = (A - B K) x
The control law matrix, B K, is
   
B K = [ 0 ][k1  k2] = [  0      0  ]
      [1/m]           [k1/m  k2/m]
The new system matrix (i.e., poles and zeros) is

|sI - A + B K| = |[s 0; 0 s] - [0 1; 0 0] + [0 0; k1/m k2/m]| = |s, -1; k1/m, s + k2/m|
The characteristic equation becomes a function of the gains:
s(s + k2/m) + k1/m = s^2 + (k2/m)s + k1/m
To solve for the gains that are required to place our closed loop poles at s = -1 ± 1j, we can multiply the two pole factors to get the desired characteristic equation and compare it with the gain-dependent characteristic equation. Our desired characteristic equation is

(s + 1 + 1j)(s + 1 - 1j) = s^2 + 2s + 2
To place the poles using k1 and k2 we simply compare coefficients. Thus by inspection, from the s^0 term k1/m = 2, and from the s^1 term k2/m = 2. Therefore,

k1 = 2m   and   k2 = 2m

For example, if we have a unit mass (m = 1), then the desired gain matrix becomes

K = [2  2]
As a reminder, although for controllable systems we can place the poles wherever we
wish, it is always dependent on having those states available as feedback, either as a

measured variable or through the use of estimators. In this example it means having
both position and velocity signals available to the controller.
For higher order systems it is advantageous to use the properties of linear
algebra to solve for the gain matrix. Ackermann's formula allows many computer-based math programs to solve for the gain matrix even for large systems. Deferring the proofs to other texts in the references, we can define the gain matrix, K, as

K = [0 0 ... 0 1] M^(-1) Ad
where K is the resulting gain matrix; M is the controllability matrix that, if the system
is controllable, is not singular and has an inverse that exists; and Ad is the matrix
containing the information about our desired poles. It is formed as shown below.
Desired characteristic equation:
s^n + a1 s^(n-1) + ... + a(n-1) s + an = 0
And using the original A matrix:

Ad = A^n + a1 A^(n-1) + ... + a(n-1) A + an I
Many computer packages with control system tools have Ackermann's formula available, and thus we would only have to supply the desired poles and the system matrices A and B to have the gain matrix calculated for us. More examples using
state space techniques are presented in Chapter 11.
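For this second-order example, Ackermann's formula can be carried out directly. A minimal sketch (Python with hand-rolled 2x2 matrix arithmetic rather than a control toolbox; the desired polynomial s^2 + 2s + 2 comes from Example 5.22):

```python
# Ackermann's formula for the unit-mass example:
#   K = [0 1] * inv(M) * phi(A),  with phi(s) = s^2 + 2 s + 2
# (the desired characteristic polynomial for poles at -1 +/- 1j).

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

m = 1.0
A = [[0.0, 1.0], [0.0, 0.0]]
B = [0.0, 1.0 / m]

# Controllability matrix M = [B | AB] and its 2x2 inverse
M = [[B[0], A[0][0] * B[0] + A[0][1] * B[1]],
     [B[1], A[1][0] * B[0] + A[1][1] * B[1]]]
d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / d, -M[0][1] / d],
        [-M[1][0] / d, M[0][0] / d]]

# phi(A) = A^2 + 2 A + 2 I
A2 = matmul(A, A)
phiA = [[A2[i][j] + 2.0 * A[i][j] + (2.0 if i == j else 0.0)
         for j in range(2)] for i in range(2)]

# Premultiplying by [0 1] just selects the last row of Minv * phi(A)
K = matmul(Minv, phiA)[1]
print(K)  # [2.0, 2.0], matching the gains found by coefficient matching
```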

EXAMPLE 5.23
Use Matlab to verify the state feedback controller designed in Example 5.22. Recall that the physical system was open loop marginally stable, described by a force input to a mass without damping or stiffness terms. The resulting state space matrices for the system are

[x1']   [0  1][x1]   [ 0 ]
[x2'] = [0  0][x2] + [1/m] r

To design the controller we will first define the state matrices in Matlab along with the desired pole locations. Then controllability and the resulting gain matrix can be solved for using the Matlab commands shown below.
%Pole placement controller design for state space systems
m=1;            %Define the mass in the system
A=[0 1;0 0];    %System matrix, A
B=[0;1/m];      %Input matrix, B
C=ctrb(A,B)     %Check the controllability of the system
rank(C)         %Check the rank of the controllability matrix
det(C)          %Determinant must be nonzero for controllability
                %Rank = system order for controllable systems
P=[-1+j,-1-j];  %Vector of desired pole locations
K=place(A,B,P)  %Calculate the gain matrix using the place command

When the commands are executed Matlab returns the A and B system matrices, the
controllability matrix, C, and the rank and determinant of C. Finally, the desired
pole locations are dened and the place command is used to determine the required
gain matrix K.
Output from Matlab gives the controllability matrix as
C =
     0     1
     1     0
with the rank of C equal to 2 and the determinant of C equal to -1. Either method may be used to check for controllability, since the rank of a matrix is the size of the largest square submatrix having a nonzero determinant. Since the rank of C equals 2, which is the size of C, the determinant of the complete matrix is nonzero, as given by the det command, showing that C is indeed nonsingular.
Finally, after defining our desired pole locations as -1 ± 1j, the place command returns:

K =
    2.0000    2.0000
This corresponds exactly with the gain matrix solved for manually in the previous
example. Since we are working with matrices, the Matlab code given in this example
is easily applied to larger systems. The only terms that must be changed are the A
and B system matrices and the vector P containing our desired pole locations.
To summarize this section on pole placement techniques with state space
matrices, we should recognize that the same eect of placing the poles for this system
could be achieved by using a PD controller algorithm. If we were to add velocity
feedback, as would be required for the state space design, the same design would
result. The reason this section is included is to introduce the topic as it relates to
similar design methods and systems (LTI, single input-single output) already presented in this chapter. State space techniques become valuable when dealing with larger, nonlinear, and multiple input-multiple output systems.
Chapter 11 introduces several state space design methods for applications such as
these.

5.7 PROBLEMS
5.1 Briefly describe the typical goals for each term in the common PID controller.
What is each term expected to achieve in terms of system performance?
5.2 Describe integral windup and briefly describe a possible solution.
5.3 Briefly describe an advantage and disadvantage of using derivative gains.
5.4 What is the reason for using an approximate derivative?
5.5 List three alternative configurations of PID algorithms and describe why they
are sometimes used.
5.6 What is the assumption made when it is said that the system has dominant
complex conjugate poles?
5.7 To design a system with a damping ratio equal to 0.6 and a natural frequency
equal to 7 rad/sec, where should the dominant poles be located in the s-plane?

5.8 To design a system that reaches 98% of its final value within 4 seconds, what
condition on the s-plane must be met?
5.9 A simple feedback control system is given in Figure 83. As a designer, you have
control over K and p. Select the gain K and pole location p that will give the fastest
possible response while keeping the percentage overshoot less than 5%. Also, the
desired settling time, Ts , should be less than 4 seconds.
5.10 For the system given in the block diagram in Figure 84, determine the K1 and K2 gains necessary for a system damping ratio ζ = 0.7 and a natural frequency of 4 rad/sec.
5.11 The current system exhibits excessive overshoot. To reduce the overshoot in
response to a step input, we could add velocity feedback, as shown in the block
diagram in Figure 85. Determine a value for K that limits the percent overshoot to
10%.
5.12 Velocity feedback is added to the controller to add effective damping to the system, as
shown in the block diagram in Figure 86. Determine a value for K that limits the
percent overshoot to 5%.
5.13 Using the plant model transfer function given, design a unity feedback control
system using a proportional controller.
a. Develop the root locus plot for the system.
b. Determine (from the root locus plot and using the appropriate root locus
conditions) the gain K required for a damping ratio ζ = 0.2.

Figure 83 Problem: system block diagram with unity feedback.

Figure 84 Problem: system block diagram with gain feedback.

Figure 85 Problem: system block diagram with velocity feedback.



Figure 86 Problem: system block diagram with velocity feedback.

G(s) = 5 / (s^2 + 7s + 10)
5.14 Using the plant model transfer function, design a unity feedback control system using first a proportional controller (K = 2) and then a PI controller (K = 2, Ti = 1). Draw the block diagrams for both systems and determine the steady-state error for both systems when subjected to step inputs with a magnitude of 2:

G(s) = 5 / (s + 5)
5.15 Use the system block diagram given in Figure 87 to answer the following
questions.
a. If Gc = K, what is the steady-state error due to a unit step input?
b. If Gc = K(1 + 1/(Ti s)), what is the steady-state error due to a unit step input?
c. Using the PI controller in part b, will the system ever go unstable for any
gains K>0 and Ti > 0? Use root locus techniques to justify your answer.
5.16 Given the block diagram model of a physical system in Figure 88:
a. Describe the open loop system response characteristics in a brief sentence
(no feedback or Gc ).

Figure 87 Problem: block diagram of controller and system model.

Figure 88 Problem: block diagram of controller and system model.



b. Add a PD controller, Gc = K(1 + Td s), and find K and Td such that ωn = 3 and ζ = 0.8.
c. Will the actual system exhibit the response predicted by ωn and ζ? Why or why not? Use root locus techniques to defend your answer.
5.17 A block diagram, given in Figure 89, includes a physical system (plant) transfer
function that is unstable. Design the simplest possible controller, Gc, which will make the feedback system stable and meet the following requirements:
a. Zero steady-state errors from step (constant) inputs
b. System settling time, Ts, of 4 seconds
c. System damping ratio, ζ, of 0.5
(Begin with P, then I, then PI, then PID, until you find the simplest one that will meet the requirements. Document why each one will or will not meet the requirements.)
5.18 Using the block diagram in Figure 90, design the simplest controller which,
using some possible combination of proportional, integral, and/or derivative gains,
meets the listed performance requirements. System requirements: ζ ≥ 0.7, Tsettling ≤ 1 sec, ess(step) ≤ 0.40.
5.19 Given the open loop step response in Figure 91, determine the PID controller
gains using Ziegler-Nichols methods.
5.20 Given the open loop step response in Figure 92, determine the PID controller
gains using Ziegler-Nichols methods.
5.21 Given the system in Figure 93, draw the asymptotic Bode plot (open loop) and
determine the gain K such that the phase margin is 45 degrees.
5.22 Given the OL system transfer function, draw the asymptotic Bode plot (open loop) for K = 1 and answer the following questions. Clearly show the final resulting plot.
a. When K = 1, what is the phase margin φm?
b. When K = 1, what is the gain margin?
c. What value of K will make the system go unstable?

GH(s) = 10K / ((10s + 1)(s + 1)(0.1s + 1))

Figure 89 Problem: block diagram containing unstable plant model.

Figure 90 Problem: block diagram with controller and plant model.



Figure 91 Problem: open loop step response.

Figure 92 Problem: open loop step response.

Figure 93 Problem: block diagram of controller and system model.


276 Chapter 5

5.23 With the system in Figure 94 and using the frequency domain, design a PI
controller for the following system that exhibits the desired performance character-
istics. Calculate the steady-state error from a ramp input using your controller gains.
System requirements: φm = 52 degrees, ωc = 10 rad/sec.

5.24 Use the open loop transfer function and frequency domain techniques to
design a PD controller where the phase margin is equal to 40 degrees at a crossover
frequency of 10 rad/sec.
G(s) = 24 / (s + 4)^2
5.25 Using root locus techniques, design a phase-lead controller so that the system
in Figure 95 exhibits the desired performance characteristics. System requirements:
ζ ≥ 0.35, Tsettling ≤ 4 sec.
5.26 Given the system block diagram in Figure 96, design a controller (phase-lag/
lead) to achieve a closed loop damping ratio equal to 0.5 and a natural frequency
equal to 2 rad/sec. Use root locus techniques.
5.27 Using the system shown in the block diagram in Figure 97, design a phase-lag
compensator that does not significantly change the existing pole locations while
causing the steady-state error from a ramp input to be less than or equal to 2%.
5.28 With a third-order plant model and unity feedback control loop as in Figure
98,
a. Design a compensator to leave the existing root locus paths in similar loca-
tions while increasing the steady-state gain in the system by a factor of 25.

Figure 94 Problem: block diagram of controller and system model.

Figure 95 Problem: block diagram of controller and system model.

Figure 96 Problem: block diagram of controller, system, and transducer.



Figure 97 Problem: block diagram of controller and system.

Figure 98 Problem: block diagram of controller and system.

b. Where are the closed loop pole locations before and after adding the compensator?
c. Verify the root locus and step response plots (compensated and uncompen-
sated) using Matlab.
5.29 Using the system shown in the block diagram in Figure 99, design a compen-
sator that does the following:
a. Places the closed loop poles at -2 ± 3.5j. Define both the required gain and compensator pole and/or zero locations.
b. Results in a steady-state error from a ramp input that is less than or equal to
1.5%.
c. Verify your design (root locus and ramp response) using Matlab.
5.30 Given the open loop system transfer function, design a phase-lag controller to
increase the steady-state gain in the system by a factor of 10 while not significantly decreasing the stability of the system. Include
a. The block diagram of the system with unity feedback.
b. The open loop uncompensated Bode plot, gain margin, and phase margin.
c. The transfer function of the phase-lag compensator.
d. The compensated open loop Bode plot, gain margin, and phase margin.

Figure 99 Problem: block diagram of controller and system.

Figure 100 Problem: block diagram of controller and system.



G(s) = 20 / ((s + 1)(s + 2)(s + 3))

5.31 Given the open loop system transfer function, design a phase-lead controller to
increase the system phase margin to at least 50 degrees and the gain margin to at
least 10 dB. Include
a. The block diagram of the system with unity feedback.
b. The uncompensated open loop Bode plot, gain margin, and phase margin.
c. The transfer function of the phase-lead compensator.
d. The compensated open loop Bode plot, gain margin, and phase margin.

G(s) = 1 / (s^2 (s + 5))

5.32 Using the system shown in the block diagram in Figure 100, design a compen-
sator that does the following
a. Results in a phase margin of at least 50 degrees and a crossover frequency of
at least 8 rad/sec.
b. Results in a steady-state error from a step input that is less than or equal to
2%.
c. Verify your design (Bode plot and step response) using Matlab.
5.33 Given the differential equation describing a mass and spring system, determine the state space model and design a state feedback controller utilizing a gain matrix to place the poles at s = -1 ± 1j.

d^2y/dt^2 + 2 dy/dt = r

where y is the output (position) and r is the input (force).


5.34 Given the differential equation describing the model of a physical system, determine the state space model and design a state feedback controller utilizing a gain matrix to:
a. Have a damping ratio of 0.8.
b. Have a settling time less than 1 second.
c. Place the third pole on the real axis at s = -5.

d^3y/dt^3 + 4 d^2y/dt^2 + 3 dy/dt + 2y = r

where y is the output and r is the input (force).


6
Analog Control System Components

6.1 OBJECTIVES
 Introduce the common components used when constructing analog control
systems.
 Learn the characteristics of common control system components.
 Develop the knowledge required to implement the controllers designed in
previous chapters.

6.2 INTRODUCTION
Until now little mention has been made about the actual process and limitations in
constructing closed loop control systems. A paper design is just as it states: no
physical results. This chapter introduces the basic components that we are likely
to need when we move from design to implementation and use. The fundamental
categories, shown in Figure 1, may be summarized as error detectors, control action
devices, ampliers, actuators, and transducers.
The goal of this chapter is to introduce some common components in each
category and how they are typically used when constructing control systems. The
ampliers and actuators tend to be somewhat specic to the type of system being
controlled. There are physical limitations associated with each type, and if the wrong
one is chosen, the system will not perform well no matter how our controller
attempts to control system behavior. Amplifiers, as the name implies, tend to simply increase the available power level in the system. The actuators are then designed to use the output of amplifiers to effect some change in the physical system. If our actuator does not cause the output of the physical system to change (in some predictable manner), the control system will fail.
The control action devices provide the desired features discussed in previous
chapters. How do we actually implement the proportional-integral-derivative (PID),
or phase-lag, or phase-lead controller that works so well in our modeled system?
Two basic categories include electrical devices and mechanical devices. Electrical
devices will be limited to analog in this chapter and later expanded to include the
rapidly growing digital microprocessor-based controllers. For the most part the
operational amplifier is the analog control device of choice. It is supported by a


Figure 1 Typical layout of system components.

multiple array of filters, saturation limits, safety switches, etc. in the typical controller. In fact, you may have to search the circuit board just to find the chips performing the basic control action; the remaining components add the features, safety, and flexibility required for satisfactory performance. The controller topologies presented in previous chapters can all be implemented quite easily using operational amplifiers.
Mechanical controllers utilize the feedback of the actual physical variable to
close the loop. Example variables include position (feedback linkages), speed (cen-
trifugal governor), and pressure. In these controllers a transducer is generally not
required, and they may operate independently of any electrical power. We are
obviously constrained by physics as to what mechanical controller feedback systems
are possible. There are many mechanical controllers still in use and providing reliable
and economical performance.
As the move is made to electronic controllers, the importance of transducers,
actuators, and amplifiers is increased. While actuators are still required in mechanical feedback systems (i.e., hydraulic valve), transducers and amplifiers generally
include supporting electrical components. To have an electrical component repre-
senting the summing junction in the block diagram, we must be able to provide an
electrical command signal and feedback signal (proportional to the actual controlled
variable). The output of such controllers is very low in power (generally current
limited) and depends on linear amplifiers capable of causing a physical change in
the actuator and ultimately in the physical output of the system. Sometimes posing
an even greater problem is the transducer. The lack of suitable transducers has in
many cases limited the design of the perfect controller. For a system variable to be
controlled, it must be capable of being represented by an appropriate electrical signal
(i.e., a transducer). The goal of this chapter is to provide information on the basic
components found in the four categories (controller, transducer, actuator, and
amplifier).

6.3 ANALOG CONTROLLER COMPONENTS


6.3.1 Implementation Using Basic Analog Circuits
PID, phase-lag, and phase-lead controllers can all be constructed with circuits utilizing operational amplifiers or, as they are more commonly called, OpAmps. Although looking at a typical PID controller on a printed circuit board would lead us to believe we

could not construct a controller ourselves, most of the components are the additional filters, amplifiers, and safety functions. The simple circuits presented here still perform quite well in some conditions. Manufactured controller cards have multiple features, range switches for gains, robust filtering, and often include the final stage amplification and thus appear much more complex than what is required for the basic control actions themselves. The additional features are usually designed for the specific product and in many cases make it desirable over building our own. Even
then, in many large control systems (i.e., an assembly line) this controller might only
be a subset of the overall system and we would still be responsible for the overall
system performance.
The circuits in Table 1 illustrate the basic circuits used in the common
controller topologies examined and designed in Chapter 5. Each controller
utilizes the basics, i.e., inverting and noninverting amplification, summing, difference,
integrating, and differentiating circuits, to construct the proper transfer function
for each controller. Additional information on OpAmps is given in Section
6.6.1. These circuits can be found in most electrical handbooks along with the
calculations for each circuit. Using capacitors in the OpAmp feedback loop integrates
the error, and using capacitors in parallel with the error signal differentiates the
error. Picking different combinations of resistors chooses the pole and zero locations.
Potentiometers are commonly used to enable on-line tuning. Remember that final
drivers (i.e., power transistors or pulse-width-modulation [PWM] circuits) are
required when interfacing the circuit output with the physical actuator.
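As a rough sketch of how component values map to controller parameters, the PI entry in Table 1 can be evaluated numerically. The resistor and capacitor values below are illustrative assumptions, not values from the text, and ideal components are assumed.

```python
def pi_controller_params(R1, R2, R3, R4, C2):
    """Map OpAmp component values to PI controller parameters.

    Assumes the PI transfer function form from Table 1:
    G(s) = (R4/R3)(R2/R1)(R2*C2*s + 1)/(R2*C2*s),
    i.e., proportional gain Kp with a zero at -1/(R2*C2) and a pole at the origin.
    """
    Kp = (R4 / R3) * (R2 / R1)   # high-frequency (proportional) gain
    zero = 1.0 / (R2 * C2)       # zero location in rad/s
    Ki = Kp * zero               # equivalent integral gain
    return Kp, zero, Ki

# Illustrative values: R1 = R3 = R4 = 10 kOhm, R2 = 100 kOhm, C2 = 1 uF
Kp, zero, Ki = pi_controller_params(10e3, 100e3, 10e3, 10e3, 1e-6)
# Kp = 10, zero near 10 rad/s, Ki near 100
```

Swapping resistor values (a range switch, or a potentiometer in place of R2) moves the gain and zero exactly as the table's expressions predict.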
Many issues must be considered when using the circuits given in Table 1,
discussed here in terms of internal and external aspects. Internally, that is, between
the error input and controller output connections, several improvements are commonly
made when implementing the circuits. In terms of controller design, there are
realistic constraints internal to the circuit as to where poles and zeros may feasibly be
placed. Since no resistor or capacitor is an ideal component, we have limited
values and combinations that work in practice. For example, to place a zero and/or
pole very close to the origin, as is common in phase-lag designs, we would need to find
resistors and capacitors with very large values, a challenge for any designer.
Second, to avoid integral windup (accumulating too much error and having to
overshoot the desired location to dissipate it), we might consider adding diodes
to the circuit to clamp the output at acceptable levels. Even if it is beneficial to
accumulate more error using the integral term, we will always have limited output
available from each component before it saturates. It is also common to
include an integral reset switch that discharges the capacitor under certain
conditions.
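The clamping idea can be illustrated in discrete form. The sketch below is a hypothetical software analogue of the diode clamp and integral reset described above, not a circuit from the text: the integrator simply stops accumulating while the output is saturated.

```python
def pi_with_antiwindup(error, integral, Kp, Ki, dt, u_min, u_max):
    """One update of a PI control law with a simple anti-windup clamp."""
    candidate = integral + error * dt       # tentative integrator update
    u = Kp * error + Ki * candidate         # unsaturated controller output
    if u > u_max or u < u_min:
        # Output is saturated: clamp it and discard the integrator update,
        # mimicking the diode clamp / integral reset in the analog circuit.
        u = max(min(u, u_max), u_min)
        candidate = integral
    return u, candidate

# Small error: no clamping, the integrator accumulates normally.
u1, i1 = pi_with_antiwindup(1.0, 0.0, Kp=1.0, Ki=1.0, dt=0.1, u_min=-2.0, u_max=2.0)
# Large error: output clamps at u_max and the integrator holds its value.
u2, i2 = pi_with_antiwindup(10.0, 0.0, Kp=1.0, Ki=1.0, dt=0.1, u_min=-2.0, u_max=2.0)
```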
Finally, internal problems arise when building pure derivatives using OpAmps
because of the resulting noise and saturation problems. As shown in Figure 2, it is
common to add another resistor in series with the capacitor which, when the equations
are developed, results in adding a first-order lag term to the denominator. The new
controller transfer function becomes

Controller output / Error = R2·C·s / (R1·C·s + 1)
The modified transfer function should be familiar, as conceptually it was already
presented and discussed in Section 5.4.1 as the approximate derivative transfer function.

Table 1 Operational Amplifier Controller Circuits

Function           G(s) = Eo(s) (signal out) / Ein(s) (error in)

Summing junction   Error = r (volts) − c (volts)
P                  (R4/R3)(R2/R1)
PI                 (R4/R3)(R2/R1)·(R2C2s + 1)/(R2C2s)
PD                 (R4/R3)(R2/R1)(R1C1s + 1)
PID                (R4/R3)(R2/R1)·(R1C1s + 1)(R2C2s + 1)/(R2C2s)
Lead or lag        (R4/R3)(R2/R1)·(R1C2s + 1)/(R2C2s + 1)
Lag-lead           (R4/R3)(R2/R1)·[(R1 + R5)C1s + 1]/[(R2 + R6)C2s + 1]·(R6C2s + 1)/(R5C1s + 1)

(The OpAmp circuit schematic for each function appears in the original table.)

Figure 2 Modified derivative function using OpAmps.

Figure 7 of Chapter 5 presented the output of the approximate derivative term
in response to a step input. To add this function to the PD controller from Table 1,
we can modify the circuit and insert the extra resistor as shown in Figure 3.
Now when we develop the modified transfer function for the controller, we can
examine the overall effect of adding the resistor.

PD_APPR = (R4/R3) · (R2/(R1 + R5)) · (R1Cs + 1) / ([R1R5/(R1 + R5)]Cs + 1)

We still place our zero from the numerator, as before, but we have also added a pole
in the denominator, as accomplished in the approximate derivative transfer function.
The interesting result comes from comparing the modified PD with a phase-lead
controller. Although how we adjust them is slightly different, we find that both
algorithms place a zero and a pole and are functionally the same controller. This
agrees with earlier observations made about the benefits of using phase-lead over
derivative compensation because of better noise attenuation at high frequencies.
Both the modified PD and phase-lead terms would have similar shapes to their
respective Bode plots.
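This similarity can be checked numerically. The sketch below (with made-up gain, zero, and pole values) evaluates the magnitude of a pure derivative term and of the one-zero, one-pole form shared by the modified PD and phase-lead controllers: the pure derivative grows without bound with frequency, while the zero/pole form levels off.

```python
def mag_pure_derivative(Kd, w):
    """|Kd * s| evaluated at s = jw: grows linearly with frequency."""
    return abs(Kd * 1j * w)

def mag_zero_over_pole(K, z, p, w):
    """|K * (s/z + 1) / (s/p + 1)| at s = jw, the form shared by the
    modified PD and phase-lead controllers (one zero, one pole)."""
    s = 1j * w
    return abs(K * (s / z + 1) / (s / p + 1))

# Illustrative values: unity gain, zero at 10 rad/s, pole at 100 rad/s.
K, z, p = 1.0, 10.0, 100.0
low = mag_zero_over_pole(K, z, p, 0.01)    # roughly K at low frequency
high = mag_zero_over_pole(K, z, p, 1e6)    # levels off near K*p/z = 10
unbounded = mag_pure_derivative(1.0, 1e6)  # still growing with frequency
```

The flattening at high frequency is exactly the noise-attenuation advantage cited above.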
Moving on to several external aspects, it is important to realize that even if we
now get our internal circuits operating correctly in the lab, there are still external
issues to consider before implementation. Noise, load requirements, physical constraints,
extra feature requirements, and signal compatibility should all be considered
regarding their influence on the controller. Noise may consist of actual signal noise
from the system transducers, connections, etc., but may also be electromagnetic
noise affecting unshielded portions of the circuit, and so forth. Noisy signals may
seriously hinder the application of lead or derivative compensators. One approach is
to filter the input signals and shield the actual components from electromagnetic
noise. Good construction techniques (connections, shielding, etc.) should be followed
at all times.

Figure 3 Modified PD OpAmp controller with approximate derivative.
Load requirements must be compatible with the output from the OpAmp
devices in our circuits. It may be necessary to add an intermediate driver chip
(amplifier) or similar component before connecting to our primary amplifier. For the
most part, treat the output of the controller as a signal only, with no power delivery
expectations.
Physical constraints include mounting styles, machine vibration, heat sources,
and moisture. Each application is different as to what the critical constraints are.
Extra features also need to be designed into the existing controller. For example,
with electrohydraulic proportional valves, it is common to add a deadband eliminator.
It is usually the combination of extra features, safety devices, and drivers that
is more complex and takes up more space than the original compensator designed
for the system.
Finally, consider signal compatibility when designing and building controllers
using analog (and in some ways even more so with digital) components. For the best
performance (signal-to-noise ratio, for example), choose transducers, potentiometers,
wire gauges, etc., that are designed for the need at hand. Some of these components
are examined in more detail later in this chapter. Of particular concern is that the
output ranges of the transducers and the input ranges of the amplifiers are compatible
with the OpAmps being used to construct the compensator. Both current and
voltage requirements should be considered.
This is only a brief introduction to the construction and implementation of
analog controllers. At a minimum we should see that there are ways to implement
the designs resulting from our work in Chapter 5. Beyond that, we can hopefully
develop the ability to build and implement some of our designs, bringing us to the
next level of satisfaction: moving from a simulation to a physical realization.

6.3.2 Implementing Basic Mechanical Controllers


There are still complete controllers that do not use any electrical signals, utilizing
all-mechanical devices to close the loop. These controllers have the advantage of not
requiring any external electrical power, transducers, or control circuits, and are
therefore more resistant to problems in noisy electrical environments. The interesting
thing is that most of the OpAmp circuits from the previous section can be duplicated
in hydraulic and pneumatic circuits by using valves, springs, and accumulators in
place of resistors and capacitors. In fact, as examined with regard to modeling in
Chapter 2, the analogies between electrical and mechanical components can also be
applied to designing and implementing mechanical control systems. For example,
using different linkage arrangements can serve as gain adjustments in a proportional
controller. Basic mechanical controllers are still very common and found in
everyday items such as toasters, thermostats, and engine speed governors
on lawnmowers.
In general, other than with simple (proportional or on/off) controllers, these
mechanical controllers will often cost as much or more, be less flexible when
upgrading, be more difficult to tune, and consume more energy when compared with
electronic controllers. Whereas the resistor in the OpAmp circuits passes current in
the range of microamps, the valves or dampers inserted into the mechanical control
circuit will have an associated energy drop and thus generate some additional heat in
the circuit. If the mechanical control elements are small, this might be an insignificant
amount of the total energy controlled by the system, but the advantages and
disadvantages must always be considered.
The good news is that whether the system is electrical or mechanical, the same
effects are present from each gain (P, I, and D). The concepts regarding design and
tuning are the same; only the implementation and actual adjustments tend to differ.
For this reason, and since most new controllers are now electronic, only a
brief introduction is presented here on how mechanical control systems can be
implemented.

EXAMPLE 6.1
Design a mechanical feedback system to control the position of a hydraulic cylinder.
Develop the block diagram, including an equivalent proportional controller, and the
necessary transfer functions using the model given in Figure 4. Make the following
assumptions to simplify the problem and keep it linear:
• The mass of the piston and cylinder rod is negligible.
• There is a constant supply pressure, Ps.
• Flow through the valve is proportional to valve movement, x.
• The coefficient Kv accounts for the pressure drop across both orifices (flow
  paths in and out of the valve).
• Flow equals the area of the piston times the piston velocity.
• The fluid is incompressible.
• Notation: r = input, y = output.

First, write the equation representing the input command to the valve, x, as a
function of the command input, r, and the system feedback, z. This should look
familiar as our summing junction (with scaling factors) whose output is the error
between the desired and actual positions.

x = (a/(a + b))·r − (b/(a + b))·z

Figure 4 Example: hydraulic proportional controller.



Now, sum the forces on the mass and develop the transfer function between y and z:

ΣF = M(d²z/dt²) = (y − z)K + (dy/dt − dz/dt)B

Take the Laplace transform of the equation:

M·s²·Z(s) = K·Y(s) − K·Z(s) + B·s·Y(s) − B·s·Z(s)

And then write as the transfer function between Z(s) and Y(s):

Z(s)/Y(s) = (Bs + K)/(Ms² + Bs + K)
Finally, relate the piston movement to the linearized valve spool movement, where
the flow rate through the valve is assumed to be proportional to the valve position.
This simplification does ignore the pressure-flow relationship that exists in the valve
(see Sec. 12.4). The law of continuity (assuming no leakage in the system) relates the
valve flow to the cylinder velocity.

Q = A(dy/dt) = Kv·x
dy/dt = (Kv/A)·x

Take the Laplace transform and develop the transfer function between Y(s) and X(s):

Y(s)/X(s) = (Kv/A)(1/s)
Now the block diagram can be constructed as shown in Figure 5. Recognize that if
we desired to have Z(s) as our output, the block diagram could be rearranged to
make this the case, and Y(s) would be an intermediate variable in the forward path.
To change the gain in such a system, we must physically adjust pivot points,
valve opening sizes, piston areas, etc., to tune the system. In this particular example
the linkage lengths allow us to adjust the proportional gain in the system.
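As a numerical check on the block diagram, the closed-loop frequency response can be evaluated directly. All parameter values below are illustrative assumptions; the structure follows the example, with forward path (Kv/A)(1/s), feedback (Bs + K)/(Ms² + Bs + K), and the lever gains from the summing relation above. Because the forward path contains an integrator and the feedback path has unity DC gain, the closed-loop DC gain approaches a/b.

```python
def closed_loop(w, a, b, Kv, A, M, B, K):
    """Closed-loop frequency response Y(jw)/R(jw) for the hydraulic servo sketch."""
    s = 1j * w
    G = (Kv / A) / s                          # valve/cylinder: Y/X = (Kv/A)(1/s)
    H = (B * s + K) / (M * s**2 + B * s + K)  # spring/damper feedback: Z/Y
    kr = a / (a + b)                          # lever gain on the command r
    kz = b / (a + b)                          # lever gain on the feedback z
    return kr * G / (1 + kz * G * H)

# Illustrative parameters; at low frequency the gain approaches a/b = 2.
resp = closed_loop(1e-4, a=2.0, b=1.0, Kv=1.0, A=1.0, M=1.0, B=1.0, K=1.0)
```

Evaluating near s = 0 confirms the lever ratio alone sets the steady-state gain, which is the "proportional controller" behavior claimed in the example.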
At this point the design tools presented in previous chapters can be used to
choose the desired gains that lead to the proper linkages, springs, and dampers.
Although these systems in general have a more limited tuning range, they are
impervious to electrical noise and interference, making them very attractive in some
industrial settings. They also do not depend on electrical power and provide
additional mobility and reliability, especially in hazardous or harsh environments.
Among the disadvantages, we see that to change the type of controller, we must
actually change physical components in our system. Also, as mentioned earlier,
whereas with electrical controllers we can add effective damping without increasing

Figure 5 Example linear mechanical hydraulic proportional controller.



the actual energy losses, in mechanical systems (hydraulic or pneumatic systems also
being considered mechanical) we actually increase the energy dissipation in the system
to increase the system damping.

6.4 TRANSDUCERS
Sensors are key elements in designing a successful control system and, in many
cases, the limiting component. If a sensor is either unavailable or too expensive,
control of the desired variable becomes very difficult. Sensors, by definition,
produce an output signal relative to some physical phenomenon. The term is derived
from the Latin word sensus, as used to describe our senses or how we receive
information from our physical surroundings. Transducer, a term commonly used
interchangeably with sensor, is generally defined to cover a wider range of activities.
A transducer is used to convert a physical signal into a corresponding signal of
different form, usually a form readily used by analog controllers. The Latin
word transducere simply means to transfer, or convert, something from one form
to another. Thus a sensor is also a transducer, but not vice versa. Some transducers
might just convert from one signal type to another, never sensing a physical
phenomenon. We will assume the transducers described here include a sensor to
obtain the original output change from a physical phenomenon change. Only
transducers dealing with analog signals are presented here (see Sec. 10.7 for a similar
discussion on digital transducers).

6.4.1 Important Characteristics of Transducers


When we choose a transducer, we should know certain important characteristics
about it before we purchase it for our system. In most cases this information
is available (or can be requested) from the manufacturer of the component.
Also, items such as the range are commonly defined when ordering the actual part,
and when designing the system there may be several ranges to choose from. Several
important characteristics of transducers are summarized in Table 2.
The important point in choosing transducers with certain characteristics is to
match them to our system. A response time that is too slow will cause stability
problems in our system, and a response time that is faster than required may be
more expensive. In general, cost is related to both the volume produced and the
performance level of the transducer, and is often not a linear function of
performance. It may be possible to use a better transducer for a lower cost if we can
locate a common type used in many other applications.
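The response-time matching described above can be expressed as a simple check. The 5 to 10 times margin used here is a common rule of thumb for neglecting sensor dynamics in the loop design, not a figure from the text.

```python
def transducer_fast_enough(system_bw_hz, transducer_bw_hz, margin=5.0):
    """Return True if the transducer bandwidth exceeds the control-loop
    bandwidth by at least the given margin (a common rule of thumb, so the
    sensor's own dynamics can be neglected in the design)."""
    return transducer_bw_hz >= margin * system_bw_hz

ok = transducer_fast_enough(system_bw_hz=10.0, transducer_bw_hz=100.0)
marginal = transducer_fast_enough(system_bw_hz=10.0, transducer_bw_hz=20.0)
```

A transducer that fails this check contributes phase lag inside the loop and can cause the stability problems mentioned above.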

6.4.2 Transducers for Pressure and Flow


6.4.2.1 Common Pressure Transducers
Several varieties of transducers are used to measure pressure. Three common
methods of converting a pressure to an electrical signal are as follows:

• Strain gages
• Piezoelectric materials
• Capacitive devices

Table 2 Important Characteristics of Transducers

Characteristic  Brief description

Range       The input range describes the acceptable values on the input, i.e.,
            0–1000 psi, 0–10 cm, and so forth. The output range determines the
            type and level of signal output. If your data acquisition system only
            handles 0–5 V, then a transducer whose output is ±10 V would be
            more difficult to interface. Current output signals are also becoming
            more popular and are discussed in later sections. Many
            transducers and controller cards have user-selectable ranges.
Error       These ratings are commonly broken into several categories. Sensitivity,
            hysteresis, linearity, and repeatability are all components of error that
            will degrade your accuracy. High precision means high repeatability
            but not necessarily high accuracy.
Stability   The amount of signal drift as a function of time. The drift may be
            related to the transducer warming up and thus diminish once the
            temperature is stable.
Dynamics    These should be specified as in earlier sections using terms like response
            time, time constant, rise time, and/or settling time. They are important
            if we are trying to control a relatively fast system where the transducer
            might not be fast enough to measure our variable of interest.

They all do credible jobs and are readily available. Strain gage types measure the
strain (deflection) caused by the pressure acting on a plate in the transducer.
Piezoelectric devices use the pressure to deform the piezoelectric material, producing
a small electrical signal. Finally, capacitive devices measure the capacitance change
as the pressure forces two plates closer together. With each type, there are generally
three pressure ratings. The normal range, where output is proportional to the input, is
where the transducer should be used. Two failure ratings are also relevant. The first
failure point is where the measurement device is internally damaged (diaphragm is
deformed, etc.) and the transducer is no longer useful. The final failure point, and the
most severe, is the burst pressure rating. It is dangerous to exceed this rating.
Pressure transducers are common, and thus all types come in a variety of
voltage and current outputs. Common voltage ranges include 0–10, ±10, 0–1, and
0–5 V. The most common current output range is 4–20 mA and is discussed in
Section 6.6 relative to the noise rejection advantages of using current signals.
Many transducers now have the signal conditioning electronics mounted inside the
transducer for a compact unit that is easy to use and install. An example of this type
is shown in Figure 6. Signal conditioning is required for most transducers (not just
pressure transducers) since the sensor output (i.e., from a strain gage) is very small and
must be amplified. The sooner this occurs, the better our signal-to-noise ratio is for the
remainder of the system.
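As a small illustration of how a 4–20 mA signal is used (the scaling convention here is generic, not taken from the text), the live zero of 4 mA lets a reading near 0 mA be flagged as a broken wire rather than interpreted as a zero measurement:

```python
def current_loop_to_units(i_mA, span_min, span_max):
    """Convert a 4-20 mA transducer signal to engineering units.

    4 mA maps to span_min and 20 mA to span_max; a reading well below
    4 mA indicates an open circuit rather than a low measured value.
    """
    if i_mA < 3.5:  # hypothetical open-circuit threshold, an assumption here
        raise ValueError("current below live zero: probable broken wire")
    return span_min + (i_mA - 4.0) / 16.0 * (span_max - span_min)

# Midscale reading on a 0-1000 psi transducer.
pressure = current_loop_to_units(12.0, 0.0, 1000.0)   # 500.0 psi
```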
Finally, response times of most pressure transducers are very fast relative to the
types of systems installed on cylinders/motors with large inertia. Response time
may be a concern when attempting to measure higher order dynamics (fluid
dynamics, etc.) in the system. Also, since the accuracy of most transducers is
dependent on the transducer range, it is sometimes necessary to use differential pressure
transducers. These transducers can measure small differences between two pressures
even though both pressures are very large. For example, it is hard to measure small
changes in a very large pressure using a transducer designed to output a linear signal
from low pressures all the way up to high pressures; the available output resolution
will be spread over a much larger range.

Figure 6 Typical strain gage pressure transducer construction.
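A quick arithmetic sketch (with made-up numbers) shows why: if accuracy is quoted as a fixed percentage of full scale, a wide-range transducer cannot resolve a small pressure difference that a dedicated differential transducer handles easily.

```python
def worst_case_error(full_scale, accuracy_pct):
    """Measurement uncertainty when accuracy is quoted as % of full scale."""
    return full_scale * accuracy_pct / 100.0

# Measuring a small difference with two 0-3000 psi transducers (0.1% FS each):
# the two errors can add, giving up to 6 psi of uncertainty.
err_absolute = 2 * worst_case_error(3000.0, 0.1)
# ...versus one 0-10 psi differential transducer at the same 0.1% FS rating.
err_differential = worst_case_error(10.0, 0.1)
```

With these assumed ratings, the differential transducer is several hundred times more precise for the same quoted percentage accuracy.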
6.4.2.2 Common Flow Meters
Flow meters have been the larger problem of the two, and accuracy and response
time are more often questionable. Flow is more difficult to measure for several
reasons. Turbulent and laminar flow regimes, a logarithmic dependence of viscosity
on temperature, and superimposed pressure effects all make the measurement more
difficult. Most precision flow meters are of the turbine type, where the fluid passes
through a turbine whose velocity is measured. An example is shown in Figure 7.
Once the turbulent regime is well established, many meters are fairly linear and
capable of ±0.1% accuracy. To take advantage of the transition regions, higher
order curve fits must be used, sometimes a different curve fit for each region of
operation. In addition, care must be taken when using a meter in reverse, since the
calibration factors are commonly quite different. As higher accuracies are required,
temperature and pressure corrections may also be required. For smaller flows and
high-precision measurements, some positive displacement meters have been designed
for use in several specialty applications (medical, etc.).
Other flow meters include ultrasonic, laser, and electromagnetic devices; strain
gage devices; and orifice pressure differentials. Ultrasonic flow meters pass high
frequency sound waves through the fluid and measure the transit time; they do
require additional circuitry to process the signals. Laser Doppler devices may be used
to measure flow in transparent channels by measuring the scatter of the laser beam
using Doppler techniques. Electromagnetic devices place a magnet on two sides of
the channel and measure the voltage on the perpendicular sides. The voltage is
proportional to the rate at which the field lines are cut and thus to the velocity of the
fluid. Strain gage devices are used to measure the deflection of a ram inserted into the
flow path to determine flow rate. Their main advantage is potentially better response
times relative to the other methods.

Figure 7 Typical axial turbine flow meter.
Finally, simply measuring the pressure on each side of a known orifice allows a
flow to be measured, as shown in Figure 8. This approach does tend to be quite
nonlinear outside of the calibrated ranges but is commonly used to sense flow in
mechanical feedback components such as flow control valves. It creates a design
compromise between resolution and allowable pressure drops.
There are many variations that have been developed for different applications.
Using Bernoulli's equation allows us to solve for the pressure drop as a function of
the flow, since we know the flow into the meter equals the flow out of the meter. In
general, the flow will be proportional to the square root of the pressure drop.
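The square-root relationship can be written out using the standard orifice equation Q = Cd·A·sqrt(2·Δp/ρ). The discharge coefficient, orifice area, and fluid density below are illustrative assumptions, not values from the text.

```python
import math

def orifice_flow(delta_p, Cd=0.6, area=1e-4, rho=850.0):
    """Volumetric flow (m^3/s) through an orifice from the measured pressure
    drop delta_p (Pa), using Q = Cd * A * sqrt(2 * delta_p / rho).
    Cd, area, and rho are assumed example values (hydraulic oil)."""
    return Cd * area * math.sqrt(2.0 * delta_p / rho)

# The nonlinearity in action: quadrupling the pressure drop only doubles
# the indicated flow, so resolution is poor at the low end of the range.
q1 = orifice_flow(1.0e5)
q2 = orifice_flow(4.0e5)
```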

6.4.3 Linear Transducers


Linear transducers are most commonly used to measure position, velocity, and, to
some extent, acceleration. They are very common and can be found in many different
varieties, shapes, and sizes. Prices and accuracy demonstrate the same wide range. It
is probable that the most commonly controlled output variable is position.
6.4.3.1 Position Transducers/Sensors
Linear position transducers come in all shapes and sizes, and what follows here is
only a brief introduction to them. The goal is simply to present some of the
basic attributes of the common types and give guidelines for choosing position
transducers. The decision primarily becomes a function of the role the transducer must
play in the system. Questions that should be addressed include: What is the required
length of travel? What is the required resolution? What is the size (packaging)
requirement? What is the required output signal? How will it interface with the
physical system? What is the available monetary budget? The list given here presents
some of the commonly available options.

Figure 8 Measurement of flow using an orifice.



Limit switches: The most basic measurement of position. Useful to sequence
events, calibrate open-loop position, and provide safety limits. There is no
proportionality; the switch is either open or closed. Very common when
adding additional safety features to controllers or to begin a new series of
events, since when they are mechanically closed their power ratings allow
them to be used to directly actuate other devices in the system.
Potentiometer: Very common and relatively cheap, especially for shorter
lengths. The basic operation is that of a voltage divider where the wiper
arm is adjustable. The output voltage range is thus equal to the input excitation
voltage, which may be varied within certain limits depending on power
dissipation. The output is proportional to the input as the wiper is
moved, normally over wire-wound or conductive film resistors. The main
problem is wear if the system oscillates around one point. Accuracies of
better than 0.1% are possible, depending on the construction.
Linear variable differential transformer (LVDT): An inductive device with two
secondary windings and one primary, the LVDT requires a sinusoidal voltage
for excitation. The input frequencies usually range between 1000 and
5000 Hz. The two secondary windings are on opposite sides of the primary
winding, which is excited by the input sinusoid, as shown in Figure 9. A
small ferrite core is moved between the coils and the magnetic flux between
them changes. As the core is moved toward a secondary coil, the induced
voltage is increased, whereas the opposite secondary winding experiences a
decrease in voltage. An LVDT requires external circuitry to provide the
correct input signal and a usable output signal. Although the cost is much
greater relative to simple potentiometers, resolutions are as fine as 1 μm
and no contact is required between the core and coils; thus, for high cyclic
rates the LVDT provides many benefits. Along with the cost is the added
burden of external circuitry, thus limiting its use in original equipment
manufacturer (OEM) and other high-volume applications.
Figure 9 Typical arrangement of LVDT.

Magnetostrictive technology: These sensors utilize magnetostrictive properties
to measure position. By passing a magnet over the magnetostrictive material,
a reflection wave is returned when a pulse is sent down the waveguide.
Timing the pulse round trip allows position to be calculated. Although
requiring additional electronics to process the signal, an advantage, especially
in hydraulics, is that the magnet does not contact the material and may
be placed inside a hydraulic cylinder (magnet on piston, sensing shaft inside
the cylinder rod), reducing the chance of external damage. An additional benefit
is that velocity may also be calculated, and both position and velocity signals
are then available simultaneously. These sensors can be used to measure up
to 72 inches of travel with excellent resolution. Precision on the order of
±0.0002 inches is possible with only 0.02% nonlinearity. Possible signal outputs
include analog voltage or current and digital start/stop or PWM
signals. Update times are usually around 1 msec. An example of a linear
position measuring transducer using magnetostrictive materials is shown in
Figure 10.
Additional transducers: There are many specialty sensors used to measure position,
but most are limited in range, cost, and/or durability for control systems.
Laser interferometers have extremely good resolution and response
times; they also require a reflective surface and external power supplies.
Many capacitive sensors are also available. Some sense position by sliding
the inner plate in reference to the outer plates or by varying the distance between
the plates, thus changing the capacitance. Good resolutions are possible but at
the expense of small movement limits, external circuitry, and high sensitivity
to plate geometry and dirty environments. Hall effect transducers effectively
measure the length of a magnet, and the output is proportional as movement
occurs between the N and S poles. Finally, strain gages in a sense also
measure displacement, albeit very small displacements.
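Two of the simpler relationships above can be written out directly. Both functions are illustrative sketches: the potentiometer is treated as an ideal voltage divider, and the magnetostrictive wave speed is an assumed typical value, not a figure from the text.

```python
def pot_position(v_out, v_excitation, travel):
    """Ideal linear potentiometer: wiper voltage is proportional to position
    along the element (voltage-divider behavior)."""
    return travel * v_out / v_excitation

def magnetostrictive_position(round_trip_s, wave_speed_m_s=2800.0):
    """Time-of-flight position from the pulse round trip described above:
    one-way distance is half the round-trip time multiplied by the wave
    speed (2800 m/s is an assumed typical waveguide value)."""
    return 0.5 * round_trip_s * wave_speed_m_s

x_pot = pot_position(v_out=2.5, v_excitation=5.0, travel=0.10)   # 0.05 m
x_mag = magnetostrictive_position(round_trip_s=200e-6)
```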

In summary, Figure 11 illustrates the accuracy, cost, and measurement range
for common linear position transducers. There are many transducers available for
measuring linear position, and the preceding discussion only provides an introduction
and an overview. Each type of environment, application, and field of use will
likely have additional options developed specifically for it. (Note: Digital
transducers used to measure linear movement are discussed in Sec. 10.7.)
Figure 10 Typical magnetostrictive linear position measurement transducer.

6.4.3.2 Velocity Transducers/Sensors
The position transducers listed above are capable of being modified for use as
velocity sensors but require an additional differentiating circuit to be added. This
can be accomplished using a single OpAmp, capacitor, and resistor but will likely
require additional components to combat noise problems. The simplest analog linear
velocity sensor is made by moving magnets past coils to generate a voltage
signal. Displacement ranges tend to be quite small, on the order of ±50 mm.
The magnetostrictive technology described above also develops velocity signals
through on-board circuitry.

Figure 11 Comparison of common linear position transducers. (From Anderson, T.
Selecting position transducers. Circuit Cellar INK; May 1998.)

6.4.4 Rotary Transducers


Rotary transducers share many characteristics with the linear examples from the
preceding section, and many of the same terms apply. There are, however, additional
rotary transducers, many of which are digital and are thus discussed in
Chapter 10. Rotary transducers may be designed to measure position and/or velocity,
as the examples show. As before, this is only a brief introduction, and there are
many more types available.
6.4.4.1 Rotary Position Transducers/Sensors
Rotary potentiometers: Similar in design and function to linear potentiometers,
these sensors have limited motion (up to 10 turns is common), are cheap and
simple, and are readily available. The same resolutions and features apply to
rotary and linear potentiometers.
Rotary resolvers: Inductive angle transducers that output a sinusoid varied in
phase and amplitude when excited with a sinusoidal input. The coupling
between the different windings changes as the device is rotated, thus changing
the output signal. The signal will repeat every revolution, so a counter is
necessary to track absolute position. The output is nonlinear, and either
phase or amplitude modulation may be used to process the signals.
Resolutions of ±10 min of arc are possible.

Many of the available rotary position sensors have digital output signals.
Optical angle encoders, Hall effect sensors, and photodiodes are examples. With
additional circuitry it is possible to convert some to compatible analog signals.
6.4.4.2 Rotary Velocity Transducers/Sensors
Magnetic pickup: Magnetic pickups are common, cheap, and easy to install.
Any ferrous material that passes by the magnet will produce a voltage in the
magnet's coil. Although the output is a sinusoidal wave varying in frequency
and amplitude, it is easily converted to an analog voltage proportional to
speed using an integrated circuit. The benefit of this signal is that the
frequency is directly proportional to shaft speed and fairly immune to noise (at
normal levels). There are several frequency-to-voltage converters containing
charge pumps and conditioning circuits integrated directly into single-chip
packages. If a direct readout is desired, any frequency meter can be used
without additional circuitry (unless protective circuits are desired). The
disadvantage is at low speed, where the signal gets too small to accurately
measure. Through the appropriate signal conditioning (Schmitt triggers,
etc.), a magnetic pickup may be used to provide a digital signal. These topics
are covered in greater detail in Chapter 10. Also, optical encoders and other
digital devices may be used in conjunction with the frequency-to-voltage
converter chip, but with the same limitations as with the magnetic pickup.
D.C. tachometer/generator: Another common component used to measure
rotary velocity is the DC tachometer. It is simply a direct-current generator
whose output voltage is proportional to the shaft speed. An advantage is that
it does not require any additional circuitry or external power to operate; a
simple voltage meter can be calibrated to rotary speed, and little or no signal
conditioning is required. A disadvantage, however, is that it does require a
contact surface, for example, a contact wheel or drive belt, to operate, which
will add some additional friction to the system when installed.
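The frequency-to-speed proportionality for a magnetic pickup reading a toothed wheel can be written out directly; the tooth count and pulse frequency are assumed example values.

```python
def shaft_speed_rpm(pulse_freq_hz, teeth):
    """Shaft speed from pickup pulse frequency: one pulse per passing tooth,
    so rev/s = f / teeth and rpm = 60 * f / teeth."""
    return 60.0 * pulse_freq_hz / teeth

# Example: a 20-tooth gear producing 600 pulses per second.
rpm = shaft_speed_rpm(pulse_freq_hz=600.0, teeth=20)   # 1800.0 rpm
```

This linear relationship is why a frequency-to-voltage converter chip alone yields an analog speed signal.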

6.4.5 Other Common Transducers


Accelerometers: Acceleration is easily measured using accelerometers, although
additional circuitry is required to amplify the small signals. Accelerometers
are usually rated in allowable g's of acceleration/deceleration. Picking one
of the appropriate size is important, as a relatively large one will change
the test itself with its added mass. Generally of the strain gage or piezoelectric
variety, they measure the deflection of a known (small) mass undergoing
the acceleration. They must be rigidly mounted to the test specimen. It is
possible to integrate the signal to obtain velocity, and even position,
although errors will accumulate over time. A typical piezoelectric accelerometer
is shown in Figure 12.
Since piezoelectric materials generate an electrical charge when deformed,
they are well suited for use in accelerometers. Due to their extremely high
bandwidth, they are finding their way into many new applications.
Hall effect transducers: Hall effect transducers are commonly used as proximity
switches, for liquid level measurement and deflection sensing, and in place of mag-
netic pickups for better low-speed performance and signal clarity. Some flow
Analog Control System Components 295

Figure 12 Typical piezoelectric accelerometer construction.

measurement devices use the Hall effect to sense a turbine blade passing
by. Hall effect devices have several advantages and disadvantages when
compared to magnetic pickups. Whereas magnetic pickups have a signal
that becomes very small at lower speeds (passing the magnet), Hall effect
devices do not need a minimum speed to generate a signal; the presence of a
magnetic field causes the output (voltage) to change. This allows them to be
used as proximity sensors and displacement transducers (quite nonlinear), in
addition to speed sensors. The disadvantages are that they require an external
power source, a magnet on the moving piece, and signal conditioning.
Strain gages: Already mentioned with regard to their use embedded in other trans-
ducers, these are very common devices used to measure strain, which is then
calibrated to acceleration, force, pressure, etc. The resistance of the strain
gage changes by small amounts when the material is stretched or com-
pressed. Thus, the output voltage is very small and an amplifier (bridge
circuit) is required for normal use.
Temperature: Several common temperature transducers are bimetallic strips
(toaster ovens), resistance-temperature-detectors (RTDs), thermistors, and
thermocouples. The bimetallic strip simply bends when heated due to dis-
similar material expansion rates and can be used as safety devices or tem-
perature-dependent switches. RTDs use the fact that most metals will have
an increase in resistance when temperature is increased. They are stable
devices but require signal amplication for normal use. Thermistors have
a resistance that decreases nonlinearly with temperature but are very rugged,
small, and quick to respond to temperature changes. They exhibit larger
resistance changes but at the expense of being quite nonlinear.
Thermocouples are very common and can be chosen according to letter
codes. They produce a small voltage between two different metals based
on the temperature difference.

6.5 ACTUATORS
Actuators are critical to system performance and must be carefully chosen to avoid
saturation while maintaining response time and limiting cost. Many specific actua-
tors are available in each field, and this section only serves to provide a quick over-
view of the common actuators used in a variety of systems. To emphasize an
underlying theme of this entire text, we must remember that no matter what our
controller output tells our system to do, unless we are physically capable of moving
the system as commanded, all is for naught. The performance limits (physics) of the
system are not going to be changed as a result of adding a controller. For this reason
296 Chapter 6

the importance of choosing the correct amplifiers and actuators cannot be over-
stated.
It should also be noted that most actuators relate the generalized effort and
flow variables, defined in Chapter 2, to the corresponding input and output. For
example, cylinder force is proportional to pressure (the output and input efforts) and
the cylinder velocity is proportional to the volumetric flow rate (the output and input
flows). The same relationship is true for hydraulic motors. An exception occurs
in solenoids and electric motors, where the force is proportional to the current (out-
put effort relates to input flow).

6.5.1 Linear Actuators


Linear actuators can take many forms and are found almost everywhere. Hydraulic,
pneumatic, electrical, and many mechanical forms that take one motion and convert
it to another (gear trains, cams, levers, etc.) can be found in a wide range of control
system applications. The choice of a linear actuator depends largely on the power
requirements (force and velocity) of the system to be controlled. When power
requirements and stroke lengths are relatively small, electrical solenoids are the
most commonly used devices, whereas cylinders (hydraulic or pneumatic) are
more commonly found in high power applications. Many times the solenoid is
used to actuate the valve that in turn controls the cylinder motion. Multiple stages
of amplication/actuation are required in many systems to go from very low power
signals to the high forces and velocities required as the end result.
Some linear actuators are the result of using a rotary primary actuator and a
secondary mechanical device. Cams, rack and pinion systems, and four bar linkages
are all examples that can be found in many applications. For example, in our typical
automobile, camshafts convert rotary motion to linear (to open and close the engine
exhaust and intake valves), rack and pinion systems convert rotary steering wheel
inputs into linear tie rod travel, and a four bar linkage is used to allow the windshield
wipers to travel back and forth. The rotary portion of these systems is covered in the
next section.
6.5.1.1 Hydraulic/Pneumatic Cylinders
The two primary fluid power actuators, excluding valves, are hydraulic cylinders and
hydraulic motors (discussed in the next section). Many devices are required to imple-
ment them, including the primary energy input device (electric motor or gas/diesel
engine); a hydraulic pump to provide the pressure and flow required by the actua-
tors; all hoses, tubing, and connections; and, finally, safety devices such as relief
valves. Directional control valves, as used in most systems, act more as an amplifier
(or control element) than an actuator in a hydraulic control system. The advantage
of hydraulics, once the supporting components are in place, is that relatively small
actuators can transmit large amounts of power. The result is a relatively stiff system
capable of providing its own lubrication and heat removal. Chapter 12 presents
additional information on designing and modeling electrohydraulic control systems.
Linear actuators, or cylinders, are generally classified as single or double
ended; include ratings for maximum pressure, stroke length, and side loads; and
are sized according to desired forces and velocities. Single-ended cylinders exhibit
different extension and retraction rates and forces due to unequal areas. The force

generated is simply equal to pressure multiplied by area. Although the basic equa-
tions for force and flow rates with respect to cylinders are very common, they are
presented here for review. A basic cylinder can be described as in Figure 13. The bore
and rod diameters are often specified, allowing calculation of the respective areas. It
is helpful to define several items:
Dia_bore = diameter of the cylinder bore
Dia_rod = diameter of the cylinder rod
A_BE = area of the bore (cap) end = (pi/4) * Dia_bore^2
A_RE = area of the rod end = (pi/4) * (Dia_bore^2 - Dia_rod^2)
P_BE = pressure acting on the bore end
P_RE = pressure acting on the rod end
The flow and force equations are desired in the final modeling since the correspond-
ing valve characteristics are expressed in the same terms. Flow rates, Q, in and out
of the cylinder are given by the following equations:

Q_BE = v * A_BE + C_BE * dP_BE/dt

Q_RE = v * A_RE + C_RE * dP_RE/dt
If compressibility, C, is ignored or only steady-state characteristics are examined, the
capacitance terms are zero and the flow rate is simply the area times the velocity, v,
for each side of the cylinder. It is important to note that the flows are not equal with
single-ended cylinders, as shown above. For many cases where the compressibility
flows are negligible, the flows are simply related through the ratio of their
respective cross-sectional areas. In pneumatic systems the compressibility cannot be
ignored and constitutes a significant portion of the total flow. If compressibility is
ignored, the ratio is easily found by setting the two velocity terms equal, as they
share the same piston and rod movement.
Q_BE = (A_BE / A_RE) * Q_RE
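The area and flow relationships are easy to check numerically. The Python sketch below (the bore and rod dimensions and the velocity are hypothetical, not from the text) computes the two areas and the resulting flow ratio for a single-ended cylinder:

```python
import math

def cylinder_areas(dia_bore, dia_rod):
    """Bore-end and rod-end areas of a single-ended cylinder."""
    a_be = math.pi / 4.0 * dia_bore ** 2
    a_re = math.pi / 4.0 * (dia_bore ** 2 - dia_rod ** 2)
    return a_be, a_re

a_be, a_re = cylinder_areas(2.0, 1.0)    # 2 in bore, 1 in rod

# Neglecting compressibility, both flows share the same piston velocity v
v = 0.5                                  # in/s
q_be = v * a_be                          # bore-end flow
q_re = v * a_re                          # rod-end flow
ratio = q_be / q_re                      # equals A_BE / A_RE
```

For this geometry the bore end takes 4/3 as much flow as the rod end returns, which is exactly the area ratio.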
The cylinder force also plays an important role in system performance. In steady-
state operation, only the kinematics of the linkage will change the required force
since the acceleration on the cylinder is assumed to be zero. In many systems the
acceleration phase is quite short as compared to the total length of travel. The

Figure 13 Typical hydraulic cylinder nomenclature.



steady-state assumption becomes less valid as performance requirements increase,
due to the increase in the dynamic acceleration requirements. The basic cylinder
force equation can be given as follows:

P_BE * A_BE - P_RE * A_RE - F_L = m * dv/dt = m * d^2x/dt^2

As before, in steady-state operation the dynamic component can be set to zero,
and the sum of the forces equals zero. In general, the above flow and force
equations adequately describe cylinder performance even though leakage flows
and friction forces are ignored. The leakage flows are usually quite small, and, com-
pared to the overall cylinder force, the friction force is also negligible. The exception
is at startup, where there can be fairly large stiction forces in pumps,
valves, and cylinders.
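The steady-state force balance is easy to evaluate numerically. In the sketch below (the pressures and dimensions are hypothetical, not from the text), the acceleration term is set to zero and the load the cylinder can support is computed directly:

```python
import math

# Hypothetical single-ended cylinder: 2 in bore, 1 in rod
dia_bore, dia_rod = 2.0, 1.0
a_be = math.pi / 4.0 * dia_bore ** 2                   # bore-end area, in^2
a_re = math.pi / 4.0 * (dia_bore ** 2 - dia_rod ** 2)  # rod-end area, in^2

p_be = 1000.0    # psi acting on the bore end
p_re = 100.0     # psi acting on the rod end

# P_BE*A_BE - P_RE*A_RE - F_L = m*dv/dt = 0 in steady state
f_load = p_be * a_be - p_re * a_re   # load (lbf) the cylinder can hold
```

Note that the rod-end pressure always works against extension, so even a modest back-pressure reduces the usable force.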
6.5.1.2 Electrical Solenoids
The electrical solenoid is probably the most common linear actuator used. Solenoids
are used to control throttle position in automobile cruise control systems, automatic
door locks, gates on industrial lines, etc. Smaller movements may be amplified
through mechanical linkages but at the expense of force capabilities, since the
power output remains the same. Some controller packages include the solenoid, as
illustrated in Chapter 1.
In operation, solenoid force is proportional to current. When current is passed
through the solenoid, a magnetic field is produced that exerts a force on an iron
plunger, causing linear movement. Proportionality is obtained by adding a spring in
series with the plunger such that the movement is proportional to the current
applied, not taking into account the load effects. This leads to a design compromise
where we want a stiff spring for good linearity but a soft spring for lower power and
size requirements. A typical solenoid is shown in Figure 14. To use lighter springs
and still achieve good linearity, we sometimes close the loop on solenoid position
and achieve improved results through the use of a nested inner feedback loop. This
method is used with several hydraulic proportional valves.
Piezoelectric materials: Another method finding acceptance is the use of piezo-
electric materials. We mentioned previously that they produce an electrical signal
when deformed. The reverse is also true: when a voltage is applied, the

Figure 14 Typical solenoid construction.



material will deform. Very small motions limit their usefulness but they may operate
at frequencies in the MHz range.

6.5.2 Rotary Actuators


There is much more diversity in rotary actuators, and as mentioned in the preceding
section, some rotary actuators are combined with cams and/or pulleys to act as linear
actuators. The most common actuators include hydraulic/pneumatic motors, AC
and DC electric motors, servomotors, and stepper motors (covered more in
Chap. 10).
6.5.2.1 Hydraulic/Pneumatic Motors
Hydraulic motors share many characteristics with hydraulic pumps and in some
cases may operate as both. Common types of hydraulic motors include axial piston
(bent-axis and swash-plate), vane, and gear (internal and external) motors. Other
than the gear motors, the units listed can be given variable displacement
capabilities, which, if used, allow even more control of the system. The basic
equations as typically used in designing control systems are straightforward. Using
the theoretical motor displacement, D_M (regardless of the displacement
type), to calculate the output flow rate and necessary input torque produces the
following equations:

Theoretical flow rate: Q_Ideal = D_M * N

Theoretical torque: T_Ideal = D_M * P

A constant is generally necessary, depending on the units being used. For example, if
Q is in GPM, D_M in in^3/rev, N in rpm, T in ft-lb, and P in psi, then

Q_Ideal = D_M * N / 231        T_Ideal = D_M * P / (24 * pi)
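The unit constants are a frequent source of error, so a small helper may be useful. The sketch below assumes D_M is given in in^3/rev (the standard convention for these constants; the motor displacement, speed, and pressure values are hypothetical):

```python
import math

def ideal_flow_gpm(d_m, n_rpm):
    """Theoretical flow in GPM; 231 in^3 = 1 gallon."""
    return d_m * n_rpm / 231.0

def ideal_torque_ftlb(d_m, p_psi):
    """Theoretical torque in ft-lb; 24*pi combines 2*pi rad/rev and 12 in/ft."""
    return d_m * p_psi / (24.0 * math.pi)

q = ideal_flow_gpm(2.0, 1800.0)      # hypothetical 2 in^3/rev motor at 1800 rpm
t = ideal_torque_ftlb(2.0, 3000.0)   # the same motor at 3000 psi
```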

Hydraulic power is another useful quantity in describing motor characteristics. Using
W for power to avoid confusion with pressure leads to the following equations. Since
the ideal motor is without losses,

W_In = W_Out

The hydraulic input power is then W_In = P * Q. The power out is mechanical:
W_Out = T * N. Unfortunately, losses do occur, and it is convenient to model the
resulting efficiencies in two basic forms, volumetric and mechanical. While still
remaining a simple model, the following efficiencies are defined:

Mechanical efficiency: η_tm = Torque_Actual / Torque_Theoretical

Volumetric efficiency: η_vm = Flow_Theoretical / Flow_Actual

Overall efficiency: η_oa = mechanical efficiency * volumetric efficiency



Summarizing, the ideal hydraulic motor acts as if the output speed is propor-
tional to flow and torque proportional to pressure. In reality, more flow is required
and less torque obtained than the equations predict. This can be simply mod-
eled using mechanical and volumetric efficiencies. In an ideal motor, the power in
equals the power out. In an actual system with losses, the product of the mechanical
and volumetric efficiencies provides the ratio of power out to power in for the motor,
since the following is true:

Overall efficiency: η_oa = η_tm * η_vm = [T / (D_M * P)] * [D_M * N / Q] = (T * N) / (P * Q) = W_Out / W_In
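These definitions can be captured in a few lines (the torque and flow numbers below are hypothetical):

```python
def motor_efficiencies(t_actual, t_theoretical, q_theoretical, q_actual):
    """Mechanical, volumetric, and overall efficiency of a hydraulic motor."""
    eta_tm = t_actual / t_theoretical    # less torque is obtained than predicted
    eta_vm = q_theoretical / q_actual    # more flow is required than predicted
    return eta_tm, eta_vm, eta_tm * eta_vm

# Ideal torque 80 ft-lb but 72 measured; ideal flow 15 GPM but 16 consumed
eta_tm, eta_vm, eta_oa = motor_efficiencies(72.0, 80.0, 15.0, 16.0)
```

The overall value, here about 0.84, is also the ratio of output to input power, W_Out/W_In.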

Many systems use hydraulic motors as actuators. Conveyor belt drives and
hydrostatic transmissions are examples found in a variety of applications. Several
advantages are good controllability, reliability, the heat removal and lubrication inher-
ent in the fluid, and the ability to stall the system without damage.
6.5.2.2 DC Motors
DC motors are constructed using both permanent magnets and electromagnets,
which are further classified as series, combination, or shunt wound. In the typical
DC motor, coils of wire are mounted on the armature, which rotates because of the
magnetic field (whether from a permanent magnet or electromagnet). To achieve
continuous rotation and to minimize torque peaks, multiple poles are used and a
commutator reverses the current in sequential coils as the motor rotates. The down-
side of such an arrangement is that we have sliding contacts prone to fail over time,
and the brushes must be replaced at regular intervals. A typical DC motor is shown
in Figure 15. The field poles may be generated using either permanent magnets or
electromagnets. Permanent magnet motors do not require a separate voltage source
for the field voltage, resulting in higher efficiencies and less heat generation.
Motors that use separate windings to generate the magnetic field (electromag-
nets) provide more constant field excitation levels, allowing smoother control of
motor speed. In both cases the torque is generally proportional to current input
(and the magnetic flux, which is commonly assumed to be relatively constant over
the desired operating range). The back electromotive force (emf, or voltage drop) is
proportional to shaft speed.

Torque: T = K_t * I
Voltage: V - R * I = K_v * ω

Figure 15 Typical DC motor construction (w/brushes).



T is the output torque, I is the input current, V is the voltage drop across the motor,
R is the resistance in the windings, and ω is the output shaft speed. The constants K_t
and K_v are commonly referred to as the torque and voltage constants, respectively.
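Combining the two equations gives the familiar steady-state speed-torque behavior. The sketch below (all constants hypothetical) solves for shaft speed at a given load torque:

```python
def dc_motor_speed(v_applied, torque_load, k_t, k_v, r):
    """Steady-state speed from T = K_t*I and V - R*I = K_v*w."""
    i = torque_load / k_t             # current required by the load torque
    return (v_applied - r * i) / k_v  # remaining voltage balances the back emf

# 24 V applied, 0.5 N-m load, K_t = 0.1 N-m/A, K_v = 0.1 V-s/rad, R = 1 ohm
w = dc_motor_speed(24.0, 0.5, k_t=0.1, k_v=0.1, r=1.0)   # shaft speed, rad/s
```

Increasing the load torque raises the current and lowers the speed, which is the drooping torque-speed line seen in Figure 16.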
When using electromagnets to generate the fields, we have several options for
how to wire the armature and field, commonly termed shunt or series wound electric
motors. To compromise between the properties of both types, we may also use
combinations of shunt and series windings, giving performance curves as shown in
Figure 16.
Shunt motors, which have the armature and field coils connected in parallel,
are more widely used because of their lower no-load speeds and
good speed regulation regardless of load. The primary disadvantage is the lower
startup torque as compared to series wound motors. Series wound motors, although
they have a higher starting torque due to having the armature and field coils in series,
will decrease in speed as the load is increased, although this may be helpful in some
applications. Combination wound motors, which have some pairs of armature and field
coils in series and some in parallel, try to achieve both good startup torque and decent
speed regulation.
The speed of DC motors can be controlled by changing the armature current
(more common) or the field current. Armature current control provides good speed
regulation but requires that the full input power to the armature be controlled. One
method of interfacing with digital components is using pulse-width modulation to
control the current (see Sec. 10.9).

Brushless DC motors: To avoid the problem of sliding contacts and having
brushes that wear out, DC motors were developed without brushes (there-
fore called brushless motors). Although the initial expense is generally
greater, such motors are more reliable and require less maintenance. The
primary difference in construction is that the permanent magnets are on
the rotor, which thus requires no external electrical connections. The outside
stationary portion, or stator, consists of coils that are energized
sequentially to cause the rotor (with its permanent magnets) to spin on its
axis. The current must still be switched in the stator coils, which is generally
accomplished with solid-state switches or transistors.
Servomotors: Servomotors are variations of DC motors optimized for high
torque and low speeds, usually by reducing their diameter and increasing
the length. They are sometimes converted to linear motion through the use
of rack and pinion gears, etc.

Figure 16 DC motor torque-speed characteristics for different winding connections.



Stepper motors: Stepper motors are covered in more detail in the digital section
(see Sec. 10.8) since special digital signals and circuit components are
required to use them. They are readily available and can be operated open
loop if the forces are small enough to always allow a step to occur. The
output is discrete with the resolution depending on the type chosen.
6.5.2.3 AC Motors
AC motors have a significant advantage over DC motors since the AC current
provides the required field reversal for rotation. This allows them to be cheaper,
more reliable, and maintenance free. The primary disadvantage is that it fixes the
operating speed unless additional (expensive) circuitry is added. Generally classified
as one of two major types, single phase or multiphase, they are further classi-
fied within each category as either induction or synchronous types. Induction AC
motors have windings on the rotor but no external connections. When a magnetic
field is produced in the stator, it induces current in the rotor. Synchronous AC
motors use permanent magnets in the rotor, and the rotor follows the magnetic
field produced in the stator.
Single-phase induction motors do not require external connections to the rotor,
and the AC current is used to automatically reverse the current in the stator wind-
ings. Because it is single phase, it is not always self-starting, and a variety of methods
are used to initially begin the rotation. Once started, the motor rotates at a velocity
determined by the frequency of the AC signal. There is, however, always some slip
present, and the motor actually spins at speeds 1-3% less than the synchronous
speed.
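The synchronous speed itself follows the standard relation n_sync = 120*f/p; this formula is not derived in the text and is quoted here only as an assumption for illustration:

```python
def synchronous_speed_rpm(freq_hz, poles):
    """Standard synchronous-speed relation: 120 * f / p."""
    return 120.0 * freq_hz / poles

def induction_speed_rpm(freq_hz, poles, slip):
    """Actual rotor speed, reduced by the per-unit slip."""
    return synchronous_speed_rpm(freq_hz, poles) * (1.0 - slip)

n_sync = synchronous_speed_rpm(60.0, 4)        # 4-pole motor on 60 Hz mains
n_actual = induction_speed_rpm(60.0, 4, 0.02)  # with a hypothetical 2% slip
```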
A three-phase induction motor is similar to the single-phase type except that
the stator has three windings, each 120 degrees apart. The motor now becomes self-
starting since there is never a part of the rotation where the net torque becomes zero
(as in the single phase motors). Another advantage is that the torque becomes much
smoother, similar in concept to adding more cylinders in an internal combustion (IC)
engine. A primary problem of induction motors is that they require a vector drive to
operate as servomotors and additional cooling and calibration are required for
satisfactory performance.
Synchronous motors have a very controlled speed but are not self-starting and
need special circuits to start them. Multiple-phase motors are usually chosen over
single phase when the power requirements are high.
To achieve good speed regulation in AC motors, we must now add special
electronics since the motor speed is related to the frequency of the AC signal.
Whereas we control the speed in DC motors by adjusting the voltage (current), in
AC motors we now must adjust the frequency of the input. A common method of
achieving this is to convert the AC power input to DC and then use a DC-to-AC
converter to output a variable frequency AC signal. Very good speed regulation is
achieved with this method (sometimes by closing the loop on speed), and although
prices continue to fall, it remains more expensive.

6.6 AMPLIFIERS
Two main types of amplifiers are discussed in this section: signal amplifiers and
power amplifiers. Signal amplifiers, such as an OpAmp, are designed to amplify

the signal (i.e., voltage) level but not the power. Power amplifiers, on the other hand,
may or may not increase the signal level but are expected to significantly increase the
power level. Thus in many control systems there are both signal amplifiers and power
amplifiers, sometimes connected in series to accomplish the task of controlling the
system. Each type encounters unique problems, with signal amplifiers generally being
susceptible to noise and power amplifiers to heat generation (and thus required
dissipation). This section introduces several common methods found in many
typical control systems.

6.6.1 Signal Amplifiers


Whenever amplification of a voltage signal is desired, the component of choice is
almost always the versatile OpAmp, or operational amplifier. Modern solid-state
OpAmps are cheap, efficient, capable of gains of 100,000 or larger with band-
widths in the MHz range, have a wide range of input/output signal level options, and
have input impedances in the MΩ range. A typical OpAmp will require five connections,
as shown in Figure 17.
There are many power supplies designed to power OpAmps. Specialty
OpAmps have been developed with different input/output voltage ranges, larger
power capabilities, single-side operation (where V- is not available), and with narrow
rails. The term rail is commonly used to describe the maximum output of an OpAmp
given the excitation voltage range. For example, if a ±15 V power supply is used and
an OpAmp has a 1 V rail, the maximum output possible is ±14 V.
There are two primary uses of OpAmps when implementing control systems.
One use has already been discussed earlier in the chapter (see Sec. 6.3.1), where
OpAmps are used to construct PID and phase-lag/lead controllers. A second com-
mon use, discussed here, is signal conditioning. Most sensors produce very small
output signals that must be significantly amplified before they can be used through-
out the control system. Even if the signal is ultimately converted to a digital repre-
sentation, it must be amplified.
Two basic OpAmp circuits are discussed here. Many different circuits have
been developed using OpAmps, but the basic functions are well represented by the
two given here, and many of the advanced circuits are adaptations of these. The
controller circuits from Section 6.3.1 also use these circuits as common building
blocks. The most common building block is the inverting amplifier, shown in
Figure 18.

Figure 17 Typical operational amplifier connections.



Figure 18 Inverting OpAmp circuit.

Assuming a high input impedance and no current flow into the OpAmp leads
to a gain for the circuit of

Inverting gain: V_out / V_in = -R_2 / R_1
Remember that the output voltage is limited and only valid input ranges will exhibit
the desired gain. If a noninverting amplifier is required, we can use the circuit given
in Figure 19.
The gain of the noninverting OpAmp circuit is derived as

Noninverting gain: V_out / V_in = (R_1 + R_2) / R_1
As mentioned, many additional functions are derived from these basic circuits. An
example list is given here:
- Integrating amplifier: replace R_2 with a capacitor
- Differentiating amplifier: replace R_1 with a capacitor
- Summing amplifier: replace R_1 with a separate resistor in parallel for each
input
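The two gain formulas, together with the rail limit discussed earlier, can be checked numerically (the resistor values below are hypothetical):

```python
def inverting_gain(r1, r2):
    """Vout/Vin of the inverting circuit: -R2/R1."""
    return -r2 / r1

def noninverting_gain(r1, r2):
    """Vout/Vin of the noninverting circuit: (R1 + R2)/R1."""
    return (r1 + r2) / r1

def opamp_output(gain, v_in, v_rail=14.0):
    """Ideal output, clamped at the rails (e.g., +/-14 V on +/-15 V supplies)."""
    return max(-v_rail, min(v_rail, gain * v_in))

g_inv = inverting_gain(10e3, 100e3)      # -10 with 10 kOhm / 100 kOhm
g_non = noninverting_gain(10e3, 100e3)   # +11 with the same resistors

v_ok = opamp_output(g_inv, 1.0)          # within the rails
v_sat = opamp_output(g_inv, 2.0)         # clamped at the negative rail
```

Only inputs small enough to keep the output inside the rails see the ideal gain; larger inputs saturate.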
In addition to the summing junction (error detector) from Table 1, one other
circuit should be mentioned relative to constructing an analog controller: the com-
parator. A comparator simply has its two inputs each connected to a signal and no
feedback or resistors in the circuit. Such an arrangement has the property of satur-
ating high if the positive input is greater than the negative input, or saturating low if
reversed. This allows the OpAmp to be used as a simple on-off controller, similar in

Figure 19 Noninverting OpAmp circuit.



concept to a furnace thermostat. An example of such a controller is presented in the
case study found in Section 12.7.
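The on-off behavior of the comparator can be mimicked in a few lines (the saturation level and temperatures below are hypothetical):

```python
def comparator(v_plus, v_minus, v_sat=14.0):
    """Saturates high when the + input exceeds the - input, low otherwise."""
    return v_sat if v_plus > v_minus else -v_sat

# Thermostat-style control: output high (heater on) while below the setpoint
setpoint = 20.0
temperatures = [18.0, 19.5, 20.5, 21.0]
outputs = [comparator(setpoint, t) for t in temperatures]
```

A practical thermostat would also add hysteresis so the output does not chatter near the setpoint.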
Finally, it is quite common to incorporate filters, protection devices, and com-
pensation devices in the amplification stage. Filters may be active or passive, with
active filters found on integrated circuits or designed using components like
OpAmps. Passive filters will have some attenuation of the signal even at frequencies
where attenuation is not desired. Common protection devices include fuses and
diodes. Fuses may be connected in series with sensitive components, whereas diodes
(such as Zener diodes) may be connected in parallel to protect from excessive vol-
tage. Optoisolators are commonly used for digital signals since they completely
eliminate the electrical connection between the input and output terminals.

6.6.2 Power Amplifiers


Most power amplifiers fall into the electrical category and are found to some degree
in almost every system. Most systems initially convert electrical energy into another
form, the exception being mobile equipment, which begins with chemical energy
(fuel) being converted into heat energy and ultimately mechanical energy from an
engine. Only electrical systems are discussed here, as engines are beyond the scope of
this text, and even they convert some of their output into electrical energy to drive
solenoids and other control system actuators.
An example where the primary power amplification is electrical is the common
hydraulic motion control system. The primary power is initially electric and con-
verted via an electric motor into mechanical power. A hydraulic pump is simply a
transformer, not a power amplifier, and converts the mechanical power into hydrau-
lic power. In fact, beyond the power amplification stage, each conversion process
results in less power due to losses inherent in every system. The second location of
power amplification found in this motion control example is taking the output of the
controller, whether microprocessor or OpAmp based, and causing a change in the
system output. In most cases the power levels are amplified electrically to levels that
allow the controller to change the size of the control valve orifice (linear solenoid
activation). In both areas, then, the power amplification takes place in the electrical
domain.
Electrical power amplifiers can be divided into two basic categories, discrete
and continuous. Discrete amplifiers are much easier to obtain and install. The exam-
ple is the common electromechanical or solid-state relay. Relays are capable of
taking a small input signal and providing large amounts of power to the system. The
disadvantage is the discrete output, resulting in the actuator being either on or off. If
the system is a heating furnace, this type of signal works out well and the problem is
nearly solved. If we want a linearly varying signal, however, the task becomes more
difficult. Some methods use discrete outputs switched fast enough to approximate an
analog signal to the system (i.e., switching power supplies and PWM techniques), as
covered in Section 10.9.
To achieve a continuously variable power level output, we generally use tran-
sistors. Transistors have revolutionized our expectations of electronics in terms of
size and performance since replacing their predecessor, the vacuum tube amplifier.
The current terminology of transistors is traced to the original vacuum tube termi-
nology. Transistors have many advantages in that they are resistant to shock and

vibration, fairly ecient, small and light, and economical. Their primary disadvan-
tage is found in their sensitivity to temperature. This is the primary reason for using
switching techniques like PWM since it signicantly minimizes the heat generation in
transistors. When a transistor is used as linear amplier it must be designed to
dissipate much greater levels of internal heat generation. This is primarily because
it is asked to drop much larger voltage and current levels internally as compared to
when operated as a solid-state switching device. The design of practical linear ampli-
ers is beyond the scope of this text, and many references more fully address this
topic. The design of solid-state switches is given in Section 10.9.

6.6.3 Signal-to-Noise Ratio


In most systems the effects of noise should be considered during the design stage. It is
much easier to design properly at the beginning than to apply one fix after another
during the testing and production stages. Noise can occur from a variety of sources,
and some are more preventable than others. The majority of this section deals with
electrical noise issues stemming from the components and the surrounding environ-
ment.
6.6.3.1 Location of Amplifiers
In most applications we prefer to amplify the signal to usable levels as quickly as
possible. The advantage is that we can minimize the effects of external noise by
transmitting a signal with a larger magnitude (assuming all else remains the same).
Noise with an average amplitude of 2 mV added to a 7 V signal is relatively negli-
gible; that same noise, when added to a signal of only several millivolts, becomes very
problematic. Thus, the fewer lines that are run with very small signal levels, espe-
cially in the presence of external electrical noise, the better our signal-to-noise
ratio will be. Remember that the controller acts on measured error, and if noise contributes
to the signal, the controller output will also reflect (and usually amplify) the noise,
feeding it back into our system. This is especially true when implementing derivative
compensators in our controller.
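The benefit of early amplification is easy to quantify using the amplitudes above (an amplitude-based signal-to-noise definition is assumed here for illustration):

```python
import math

def snr_db(signal_amplitude, noise_amplitude):
    """Amplitude-based signal-to-noise ratio in decibels."""
    return 20.0 * math.log10(signal_amplitude / noise_amplitude)

noise = 0.002                       # 2 mV of induced noise
snr_amplified = snr_db(7.0, noise)  # noise couples in after amplification to 7 V
snr_raw = snr_db(0.005, noise)      # noise couples into a raw 5 mV sensor signal
```

The same 2 mV of noise costs over 60 dB of signal quality when it couples in before amplification rather than after.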
Since the goal of the amplifier is to amplify only the desired output and not
noise, we should not only locate it near the sensor output, but we should also care-
fully shield the amplification circuit components. Using shielded wires and compo-
nent boxes can make a significant difference in the quality of our signal.
There are situations where we can compensate the signal to improve our signal-
to-noise ratio, allowing us to have longer runs with low signal levels and to aid in
removing unwanted physical effects (e.g., the temperature of the system). Common
systems that include temperature compensation are thermocouples and strain gages.
Since the properties of these sensors vary significantly, we typically compensate them
using a modified Wheatstone bridge amplifier. In the case of the strain gage, we
simply place a dummy gage alongside the active gage and compare the change
between the two. Since the dummy gage experiences the same temperature effects,
the difference between the two readings should be due to the actual strain
experienced. In a similar fashion, we can remove the effects of temperature-induced
resistance changes in the signal wires by placing a third lead between the sensor and
amplifier, allowing us to amplify only the change in signal output, not output caused
by changing temperatures. With compensation, then, we can run longer wire lengths

and still maintain good signal to noise ratios. In general, the devices that benet from
compensation techniques will already have it included when we purchase it for our
use in control systems.
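To see numerically why the dummy gage cancels temperature effects, consider a rough half-bridge sketch. The bridge model, gage resistance, and excitation voltage below are illustrative assumptions, not values from the text:

```python
def half_bridge_output(v_ex, r_active, r_dummy):
    """Midpoint voltage of a half Wheatstone bridge (other two arms equal),
    measured relative to the reference divider at v_ex / 2."""
    return v_ex * (r_active / (r_active + r_dummy) - 0.5)

R0 = 350.0        # unstrained gage resistance (ohms)
dR_strain = 0.7   # resistance change due to strain (active gage only)
dR_temp = 2.0     # resistance change due to temperature (seen by both gages)

# Dummy gage replaced by a fixed resistor: temperature reads as false strain.
uncompensated = half_bridge_output(10.0, R0 + dR_strain + dR_temp, R0)

# Dummy gage present: both arms shift with temperature, so it nearly cancels.
compensated = half_bridge_output(10.0, R0 + dR_strain + dR_temp, R0 + dR_temp)
strain_only = half_bridge_output(10.0, R0 + dR_strain, R0)
```

Because the temperature term appears in both arms of the divider, it cancels to first order, leaving essentially the strain-induced change.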
6.6.3.2 Filtering
Filtering is commonly added at various locations in a control system to remove unwanted frequency components of our signal. Filters may be designed into the amplifier or added by us as we design the system. Three common types of filters, shown in Figure 20, are low pass, high pass, and band pass. Low-pass filters are designed to allow only low frequency components of the signal to pass through; any higher frequency components are attenuated. High-pass filters are designed to allow only high frequency components of the signal through, and band-pass filters only allow a specified range of frequencies through.
When designing a filter we can apply the terminology learned with Bode plots to design and describe its performance. From our Bode plot discussions we recall that a first-order denominator attenuates high frequencies at a rate of −20 dB/decade. In filter terminology it is common to refer to the number of poles that the filter has. Thus, if we have a four-pole filter, it will attenuate at a rate of −80 dB/decade. Even with higher pole filters we do not achieve instantaneous attenuation of signals. It is interesting to note that the filters illustrated in Figure 20 look similar to several of the compensators designed earlier.
Designing basic filters using Bode plots is quite simple. For example, if we connect a resistor in series and a capacitor in parallel with our signal, we have just added a single-pole passive filter to the system. The analysis is identical to the techniques learned earlier, where we found a time constant of RC and a transfer function with one pole in the denominator. The Bode plot then has a low frequency horizontal asymptote, a break frequency at 1/(RC), and a high frequency attenuation slope of −20 dB/decade. Comparing this to Figure 20, we see that it is a simple low-pass filter. To achieve sharper cut-off rates we would add more poles to the filter. A band-pass filter can be designed following the same procedures except that we add a first-order term (zero) in the numerator followed by two first-order terms in the denominator defining the cut-off frequencies.
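This single-pole behavior can be checked numerically. A short sketch, with component values chosen arbitrarily for illustration:

```python
import math

def rc_lowpass_gain_db(f_hz, r_ohms, c_farads):
    """Magnitude of the single-pole filter H(jw) = 1 / (1 + jwRC), in dB."""
    w = 2 * math.pi * f_hz
    return 20 * math.log10(1 / math.sqrt(1 + (w * r_ohms * c_farads) ** 2))

R, C = 1000.0, 1e-6                      # illustrative values: 1 kOhm, 1 uF
f_break = 1 / (2 * math.pi * R * C)      # break frequency, about 159 Hz

gain_at_break = rc_lowpass_gain_db(f_break, R, C)      # about -3 dB
slope = (rc_lowpass_gain_db(100 * f_break, R, C)
         - rc_lowpass_gain_db(10 * f_break, R, C))     # about -20 dB per decade
```

The gain is about −3 dB at the break frequency and falls roughly 20 dB for each further decade, matching the Bode asymptotes described above.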
Along with the performance descriptions above, we further distinguish filters as active or passive. The simple RC filter is a passive filter since it requires no external power and draws its power from the signal. This has the disadvantage of sometimes changing the actual signal, especially as we implement passive filters with more poles. To overcome this and add filters with high input impedances (thus drawing no current from the signal), we use active filters. Active filters require a separate power source but allow for greater performance. OpAmps are commonly used and provide the high input impedance desired by filters. Also available are IC chips with active filters designed on the integrated circuits.

Figure 20 Descriptions of common filters.
6.6.3.3 Advantages of Current-Driven Signals
Most transducers and many controllers now have options allowing us to use current signals instead of voltage signals. This section quickly discusses some of the advantages of using current signals and how to interface them with standard components expecting voltage inputs.
The primary advantage is easily illustrated using the effort and flow modeling analogies from Chapter 2. Voltage is our electrical effort variable and current is our electrical flow variable. Using the analogy of our garden hose, we know that if we have a fixed flow rate entering at one end, then the same flow will exit at the other end, regardless of what pressure drops occur along the length of the hose (assuming no leakage or compressibility). Thus, our flow is not affected by imposed disturbances (effort noises) acting on the system. In the same way, even if external noise is added to our electrical current signal as voltage spikes, the current signal remains constant, even though its voltage level picks up the noise. The advantage becomes even more pronounced as we require longer wire runs through electrically noisy locations. Although it is possible to also induce currents in our signal wires (a magnet moving by a coil of wire), it is much more likely that the noise is seen as a voltage change. Thus, our primary concern in using current signals is that our transducer (or whatever is driving our current signal) is capable of producing a constant, well-regulated current signal in the presence of changing load impedances.
Even if our signal target requires voltage (e.g., an existing A/D converter chip), we can still take advantage of the noise immunity of current signals by transferring our signal as a current and converting it to a voltage at the voltage input itself. This is easily accomplished by dropping the current over a high precision resistor placed across the voltage input terminals, as shown in Figure 21.
Only two wires are needed to implement the transducer, and if desired a common ground can be used. Most transducers will give the allowable resistance (impedance) range over which they are able to regulate the current output. Recognize that with a current signal we no longer will get negative voltage signals and, in fact, do not reach zero voltage. The voltage measurement range is found by taking the lowest and highest current outputs (usually 4–20 mA) and multiplying them by the resistance value.
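A quick sketch of this conversion; the 250 Ω sense resistor is an assumed, commonly used value, not one specified in the text:

```python
def loop_voltage_range(r_ohms, i_min_a=0.004, i_max_a=0.020):
    """Voltage range across a sense resistor carrying a 4-20 mA loop signal."""
    return i_min_a * r_ohms, i_max_a * r_ohms

# A 250-ohm precision resistor maps the 4-20 mA range to 1-5 V:
v_lo, v_hi = loop_voltage_range(250.0)
```

Note that the lower end of the range is 1 V, not 0 V, consistent with the observation above that a current signal never reaches zero voltage.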
Figure 21 Converting a current signal to a voltage signal.
6.7 PROBLEMS
6.1 Briefly describe the role of a typical amplifier in a control system.
6.2 Briefly describe the role of a typical actuator in a control system.
6.3 An actuator must be able to . . . (finish the statement).
6.4 What is the advantage of using an approximate derivative compensator?
6.5 List several possible sources of electrical noise affecting the control system signals.
6.6 Describe an advantage and a disadvantage of mechanical feedback control systems.
6.7 What is the importance of the transducer in a typical control system?
6.8 List two desirable characteristics of transducers and briefly describe each one.
6.9 What are three important pressure ratings for pressure transducers?
6.10 List three types of transducers that may use a strain gage as the sensor.
6.11 Liquid flow meters are analogous to ______ meters in electrical systems.
6.12 What are two types of noncontact linear position transducers?
6.13 Why is a velocity transducer preferable to manipulating the position feedback signal to obtain a velocity signal?
6.14 List one advantage and one disadvantage of the common magnetic pickup relative to measuring angular velocity.
6.15 Hydraulic cylinders might be the linear actuator of choice when what characteristics are needed in an actuator?
6.16 Locate an electrical solenoid in a product that you currently use and describe its function in the system.
6.17 Name two methods of controlling the speed of a DC motor.
6.18 Brushless DC motors have what advantages over conventional DC motors?
6.19 All AC motors are self-starting. True or False?
6.20 What are the advantages and disadvantages of AC motors as compared with DC motors?
6.21 What are two major types of amplifiers?
6.22 Why is high input impedance desirable for an amplifier?
6.23 Name the common electrical component used in electrical power linear amplifiers.
6.24 Why is the signal-to-noise ratio an important consideration during the design of a control system?
6.25 Passive filters require a separate power source. True or False?
6.26 Under what conditions will current signals perform much better than voltage signals?
6.27 Construct a speed control system for the system in Figure 22. The system to be controlled is a conveyor belt carrying boxes from the filling station to the taping station. It must run at a constant speed regardless of the number and weight of boxes placed on it. Build the model in block diagram form, where each block represents a simple physical component. Details of each block are not required, just what component you are using (e.g., a block that requires an actuator might use a DC motor or a solenoid) and where. In addition, attach consistent units to each line connecting the blocks. Note: Label each block and line clearly. Include all required components; for example, certain components require power supplies, such as some transducers. Number each block and define: category (transducer, actuator, or amplifier), type (LVDT, OpAmp, etc.), inputs, outputs, and additional support components (power supplies, converters, etc.).

Figure 22 Problem: conveyor belt speed control.

Figure 23 Single-axis motion control system.
6.28 Design a closed-loop position control system for the system in Figure 23. The system to be controlled is a single-axis welding robot. A high force position actuator is required to move the heavy robot arm. Build the model in block diagram form, where each block represents a simple physical component. Attach consistent units to each line. Note: Label each block and line clearly. Include all required components; for example, certain components require power supplies, such as some transducers. Number each block and define: category (transducer, actuator, or amplifier), type (LVDT, OpAmp, etc.), inputs, outputs, and additional support components (power supplies, converters, etc.).
7
Digital Control Systems

7.1 OBJECTIVES
• Introduce the common configurations of digital control systems.
• Compare analog and digital controllers.
• Review digital control theory and its relationship to continuous systems.
• Examine the effects and models of sampling.
• Develop the skills to design digital controllers.
7.2 INTRODUCTION
It seems that every several years the advances in computer processing power make past gains seem minor. As a result of this cheap processing power available to engineers designing control systems, advanced controller algorithms have grown tremendously. The space shuttle, military jets, and general airline transport planes, along with our common automobile, have benefited from these advancements. Modern controllers make possible things that were once thought impossible; the modern military fighter jet, for example, would be impossible to fly without the help of the onboard electronic control system.
In this chapter we begin to develop the skills necessary for designing and implementing advanced controllers. Since virtually all controllers at this level are implemented using digital microprocessors, we spend some time developing the models, typical configurations, and tools for analysis. When we compare analog and digital controllers, we notice two big differences: digital devices have limited knowledge of the system (data only at each sample time) and limited resolution when measuring changes in analog signals. There are many advantages, however, that tend to tip the scales in favor of digital controllers. An infinite number of designs, advanced (adaptive and learning) controller algorithms, better noise rejection with some digital signals, communication between controllers, and cheaper controllers are now all feasible options. To simulate and design digital controllers, we introduce a new transform, the z transform, allowing us to include the effects of digitizing and sampling our signals.
7.2.1 Examples and Motivation
Digital computers allow us to design complex systems with reduced cost, more flexibility, and better noise immunity when compared to analog controllers. Adaptive, nonlinear, multivariable, and other advanced controllers can be implemented using common programmable microcontrollers. The ability to program many of these microcontrollers using common high-level languages makes them accessible to all of us who do not wish to become experts in machine code and assembly language. This section seeks to lay the groundwork for analysis of digital controllers in such a way that we can extend what we learned about continuous systems and now apply it to our digital systems. This allows us to do stability analysis and performance estimates and to calculate required sample times before we build each system.
Our quality of life in almost every area of activity is influenced by microprocessors and digital controllers. Our modern automobile is a complex marriage of mechanical and electrical systems. Factories are seeing better quality control, increased production, and more flexibility in the assembly process, leading to increased customer satisfaction. Home appliances are smarter than ever, and the security of our country is more dependent now on electronics than at any other time in our history. Early warning detection systems; weapon guidance systems; computer-based design tools; and land, air, and sea vehicles all rely heavily on electronics. It is safe to say that skills in designing digital control systems will be a valuable asset in the years to come.
7.2.2 Common Components and Configurations
In Chapter 1 we looked at the various major components required for actually building and implementing control systems. Now, let us quickly look at the additional digital components and the common ways these components are connected together. As we would expect, the overall configuration (controller → amplifier → actuator → system → transducer feedback) is very close to the analog system presented in Figure 1 of Chapter 1 during the introduction to the text. A general digital control system configuration is shown in Figure 1, illustrating how the digital components might interface with the physical system.

Figure 1 General digital control system configuration.

In examining the differences between the analog and digital control system components, we see that the computer replaces the error detector and controller and that new interfaces are required to allow analog signals to be understood and acted upon by the microprocessor. One advantage is that many inputs and outputs can be handled by the computer and used to control several processes. As Figures 2 and 3 illustrate, computer-based controllers may be configured as centralized or distributed control configurations. Many times combinations of these two are used for control of large, complex systems.
In centralized schemes, the digital computer handles all of the inputs, processes all errors, and generates all of the outputs depending on those errors. This has some advantages: only one computer is needed, and because it monitors all signals, it is able to recognize and adapt to coupling between systems. Thus if one system changes, it might change its control algorithm for another system that exhibits coupling with the first. Also, simply reprogramming one computer may change the dynamic characteristics of several systems. The disadvantages include dependence on one processor, limited performance with large systems (since the processor is being used to operate many controllers), and more difficult component controller upgrades.
The distributed controller falls on the opposite end of the spectrum, where every subsystem has its own controller. Advantages are that it is easy to upgrade one specific controller, easier to include redundant systems, and lower performance processors may be used. It may or may not cost more, depending on each system. Since simple, possibly analog, controllers can be used for some of the individual subsystems, both analog and digital controllers can coexist and it is sometimes possible to save money. The primary computer is generally responsible for determining optimum operating points for each subsystem and sending the appropriate command signals to each controller. Depending on the stability of the individual controllers, the primary computer may or may not record/use the feedback from individual systems.
For many complex systems the best alternative becomes a combination of centralized and distributed controllers. If a subsystem has a well-developed and cost-effective solution, it is often better to offload that task from the primary controller to free it for others. If a complex or adaptive routine is required, such as dealing with coupling between systems, then the central computer might best serve those systems. In this way our systems can be optimized from both cost and performance perspectives. The acronym commonly used to describe these systems, SCADA, stands for Supervisory Control and Data Acquisition. A PC (or programmable logic controller with a processor) in this case provides supervisory control over multiple distributed controllers, communicating in either half- or full-duplex mode. Half duplex means the supervisory controller initiates all requests and changes and the distributed components respond but do not initiate contact. The advantage of these systems is that the link may be through wires, radio waves (even satellite), the Internet, etc.

Figure 2 Centralized control with a digital computer.

Figure 3 Distributed control with a digital computer.
As we see in the next section, adding these capabilities and the input and output interfaces changes our model, and new techniques must be used. Fortunately, the new techniques can be understood in much the same way as the analog techniques, but with the addition of another variable, the sample time.
7.3 COMPARISON OF ANALOG AND DIGITAL CONTROLLERS

7.3.1 Characteristics and Limitations
Analog controllers are continuous processes with infinite resolution: when an error of any magnitude occurs, the controller can respond with an appropriate control action. Analog controllers, presented in the previous chapters, generally incorporate analog circuits for electronic controllers (OpAmps) or mechanical components for physical controllers. Digital controllers, in contrast, use microprocessors to perform the control action. Microprocessors require digital inputs and outputs to operate and thus require additional components to implement. Component costs have steadily decreased as technology improves, and digital controllers are becoming more prevalent in almost all applications.
Since most physical signals are analog (i.e., pressure, temperature, etc.), they first must be converted to digital signals before the controller can use them. This involves a process called digitization, which introduces additional problems into the design, as future sections will show. This same process must then be reversed at some stage to generate the appropriate physical control action. This conversion might occur right at the output of the microprocessor or not until the system responds to the discrete physical action (e.g., a stepper motor). Table 1 lists many of the advantages and disadvantages of digital controllers.
At this point it should be clear why the movement toward digital controllers is so strong. Advanced controller algorithms like adaptive, neural net, fuzzy logic, and genetic algorithms have all become possible with the microprocessor. Today's systems are commonly combinations of centralized and distributed controllers working in harmony. Clearly the skills to analyze and design such systems are invaluable. A brief history review will illustrate the growth of digital controllers.
In the 1960s minicomputers became available and some of the first digital controllers were developed. Processing times were on the order of 2 μsec for addition and 7 μsec for multiplication. Costs were still prohibitive, and only specialized applications could justify the cost and benefit from digital controllers. In the 1970s
Table 1 Advantages and Disadvantages of Digital Microprocessor-Based Controllers

Advantages:
• Controller algorithms can be changed in software.
• Control strategy can be changed in real time depending on situations encountered.
• Multiple processes can be controlled using one microprocessor.
• Infinite algorithms are possible.
• Once the signal is converted, noise and drift problems are minimized.
• It is easy to add more functions, safety functions, digital readouts, etc.

Disadvantages:
• Requires more components.
• Digitizing analog signals results in limited resolution.
• Adding more functions might limit performance (sample time increases).
• Requires better design skills.
• Digital computers cannot integrate signals; integrals must be converted to products and sums.
• Components are inherently susceptible to damage in harsh environments.
the modern microcomputer became available. The early 1970s saw prices up to $100,000 for complete systems. By 1980 the price had fallen to $500, with quantity prices as low as $50 for small processors. The 1990s saw prices fall to only a few dollars per microprocessor. Complete systems are affordable to companies of all sizes and have allowed the use of digital controllers to become the standard. Virtually all automobile, aviation, home appliance, and heavy equipment controllers are microprocessor based and digital. Programmable logic controllers (PLCs), introduced in the 1970s, have become commonplace, and prices continue to fall. More PC-based applications are also found as their prices have also decreased significantly.
A danger arises when we simply take existing analog control systems and implement them digitally without understanding the differences. Not only are we more likely to have problems and be unsatisfied with the results, but we also miss out on the new opportunities that are available once we switch to digital. This chapter, and the several following it, attempts to connect what we have learned about analog systems with what we can expect when moving toward digital implementations.
7.3.2 Overview of Design Methods
Early digital controllers were designed from existing continuous system design techniques. As digital controllers have become common, more and more controllers are designed directly in the digital domain to take advantage of the additional features available. Since all physical systems operate in the continuous time domain, the skills developed in the first section are imperative to designing high performance digital controllers. It is dangerous to assume that everything wrong with the system can be fixed by adding the latest and greatest digital controller. In fact, it is often the lack of understanding of the physics of our real system that causes the most trouble. Proper design flows from a proper understanding of the physics involved. The image that comes to mind is trying to design a cruise control system for a truck using a lawnmower engine. Although we might laugh at this analogy, the point is made that our first priority is designing a capable physical system that incorporates the proper components to achieve our goals. The design methods presented in this text are all based on this initial assumption.
Now, in addition to proper physical system design, we need to account for the digital components. The problem arises in modeling the interface between the digital and analog systems and its dependence on sample time. As we will see, we might design a wonderful controller based on one sample time, then have someone else also claim processor time, resulting in more processor tasks per sample and longer sampling periods. Consequently, we now have stability problems based on the longer sample time even though our controller itself never changed. An example is automobile microprocessors, where new features, not initially planned on, are continually added until the microprocessor can no longer achieve the performance the initial designer was counting on.
This leads to two basic approaches for designing digital controllers: we can convert (or design in the continuous domain and then convert) continuous-based controllers into approximate digital controllers, or we can begin immediately in the discrete domain and design our controller using tools developed for designing digital controllers. Both methods have several strengths and weaknesses, as discussed in the next section. Chapter 9 will present the actual methods and examples of each type.
7.3.2.1 Designing from Continuous System Methods
One common approach to designing digital controllers is to design the controller in the continuous domain as taught in the previous chapters and, once the controller is designed, use one of several transformations to convert it to a digital controller. For example, a common proportional-integral-derivative (PID) controller can be designed in the continuous domain and approximated using finite differences for the integration and derivative functions. Additionally, the bilinear transformation (Tustin's method) or the impulse invariant method may be used to convert from the s-domain to the z-domain. The z-domain is introduced later as a digital alternative to the s-domain that allows us to use the skills we have already developed. Thus if you are familiar with controller design using classical techniques, with a little work you can design controllers in the digital domain. Finally, a simple technique of matching the poles and zeros of the continuous controller with equivalent poles and zeros in the z-domain may be used, aptly called the pole-zero matching method.
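The finite-difference approximation just described can be sketched directly in code, using a running sum for the integral term and a backward difference for the derivative term; the gains and sample period below are arbitrary illustrations:

```python
class DiscretePID:
    """PID control law discretized with finite differences: a running sum
    approximates the integral, a backward difference the derivative."""

    def __init__(self, kp, ki, kd, t_sample):
        self.kp, self.ki, self.kd, self.t = kp, ki, kd, t_sample
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.t                  # rectangular integration
        derivative = (error - self.prev_error) / self.t  # finite difference
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# With only integral gain, a constant error of 1 accumulates ki*T per step:
pid = DiscretePID(kp=0.0, ki=2.0, kd=0.0, t_sample=0.1)
outputs = [pid.update(1.0) for _ in range(3)]   # approximately 0.2, 0.4, 0.6
```

Note how the sample period T appears explicitly in both approximations, which is why a controller tuned for one sample time can misbehave at another.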
There are several advantages to beginning with (or using) the continuous sys-
tem. It is in fact how our physical system responds; there are many tools, textbooks,
and examples to choose from; and most people feel more comfortable working with
real components. As the next section shows, there are also several disadvantages
with this method.
7.3.2.2 Direct Design of Digital Controllers
One primary advantage of designing controllers directly in the digital domain is that it allows us to take advantage of features unavailable in the continuous domain. Since digital controllers are not subject to physical constraints (with respect to developing the controller output, as compared to using OpAmps, linkages, etc.), new methods are available with unique performance results. Direct design allows us to actually choose a desired response (it still must be a feasible one that the physics of the system are capable of) and design a controller for that response. There are fewer limitations since the controller is not physically constructed with real components. It will become clear as we progress, however, that there still are some limitations, most being unique to digital controllers.
Also building upon our continuous system design skills are root locus techniques that have been developed for digital systems. The concepts are the same except that we now work in the z-domain, an offshoot of the s-domain. Root locus techniques in the z-domain may be used to directly design for certain response characteristics, just as learned for continuous systems. A primary difference is that now the sample time also plays a role in the type of response that we achieve. Also, dead-beat design can be used to make the closed loop transfer function equal to an arbitrarily chosen value, since any algorithm is theoretically possible. Dead-beat designs settle to zero error after a specified number of sample times. Of course, as mentioned previously, physical limitations are still placed on the actuators and system components, and one of the costs of aggressive performance specifications is high power requirements and more costly components. Finally, Bode plot methods may be used via the w transform.
7.4 ANALYSIS METHODS FOR DIGITAL SYSTEMS

The previous section illustrates several different controller configurations utilizing microprocessors. All the systems share one common trait that differs from their analog counterparts: they must sample the data and are unable to track it in between these samples. It is impossible for the controller to know exactly what is happening between samples; it only knows the response of the system at each sample time.
When implementing a digital controller, the normal procedure is to scan the inputs, process the data according to some control law, and update the outputs to the new values. It is obvious that the loop time, or sample time, will play a large part in determining system performance. On one hand, if the sample time is extremely fast relative to the system, the controller begins to approximate a continuous controller, since the system is unable to change much at all in the time between samples. If, however, the sample times are long compared to the system response, the digital controller will be unable to control the response, because each correction occurs only after the system has already passed the targeted operating point, and the system thus becomes unstable.
An even more interesting case is that it is now possible for the digital computer to think it is correctly controlling the system while, unbeknownst to the computer, the system is actually oscillating in between samples. This section will examine the effects that sampling has on the measurement and response of physical systems.
7.4.1 Sampling Characteristics and Effects
As we saw, sampling is a common trait of all digital computers. When we sample a continuous signal we end up with a sequential list of numbers whose values represent the value of the analog signal at each individual sample time. The sample rate is measured in samples per second (Hz), and hence the sample period T equals the inverse of the sample frequency. If the sample rate is constant (common for most digital controllers), then the list of values will be equally spaced in time. Also, if we assume that the computer is infinitely fast for each sample, then each value represents one distinct moment in time. We can think of it as being like a switch that is momentarily closed each time a computer clock sends a pulse. This idea is illustrated in Figure 4.
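The sampling process described above can be sketched in a few lines; the signal and rates here are arbitrary illustrations:

```python
import math

def sample(signal, f_sample_hz, n_samples):
    """Sample a continuous-time signal at a constant rate, one value at each t_k = k*T."""
    t_period = 1.0 / f_sample_hz     # the sample period T is the inverse of the rate
    return [signal(k * t_period) for k in range(n_samples)]

# A 1 Hz sine sampled at 8 Hz: eight equally spaced values over one cycle,
# with nothing known about the signal between them.
samples = sample(lambda t: math.sin(2 * math.pi * t), f_sample_hz=8.0, n_samples=8)
```

The resulting list is exactly the "sequential list of numbers" described above: the controller sees only these values, never the signal in between.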
If the sample times are not constant, the vertical lines are no longer spaced evenly and modeling the sampling process becomes very difficult. In addition, it is possible in Figure 4 that the analog signal had gone below zero and returned to its normal amplitude in between samples; our reconstructed signal is unable to follow this. Remember in the switch analogy that the switch is only momentarily closed and has no knowledge of the signal between samples. Therefore, our reconstructed signal might look very nice but may be completely wrong. This commonly occurs with oscillating signals, where the sampling process creates additional frequencies, called aliasing, as shown in Figure 5.
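The aliasing effect is easy to demonstrate numerically. In this sketch (frequencies chosen for illustration), a signal above half the sample rate produces exactly the same sample list as a lower-frequency signal:

```python
import math

f_sample = 100.0                  # sample rate (Hz); Nyquist frequency = 50 Hz
t_period = 1.0 / f_sample

def sampled_cosine(f_hz, n_samples):
    return [math.cos(2 * math.pi * f_hz * k * t_period) for k in range(n_samples)]

# 70 Hz lies above the 50 Hz Nyquist frequency, so its samples are
# indistinguishable from those of a 30 Hz signal (70 = 100 - 30):
hi = sampled_cosine(70.0, 20)
lo = sampled_cosine(30.0, 20)
mismatch = max(abs(a - b) for a, b in zip(hi, lo))   # effectively zero
```

From the samples alone there is no way to tell which signal was present, which is why aliasing cannot be removed after sampling.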
Several methods are used to minimize the effects of aliasing. To avoid aliasing up front, we can apply the Nyquist frequency criterion. The Nyquist frequency is defined as one half of the sample frequency and represents the maximum frequency that can be sampled before additional lower frequencies are created. Only those frequencies greater than one half of our sample frequency create additional lower frequency (artificial) components.
That being said, however, higher frequencies, called side bands, are always created as a result of the sampling process. To reduce this problem, it is common to install antialiasing filters on the input to remove any frequencies greater than one half of the sample frequency, because once the signal is sampled it is impossible to separate the aliasing effects from the real data. A problem often arises in that even though the highest frequency in our system might be 15 Hz, there might be noise signals at 60 Hz. Thus if our sample rate is less than 120 Hz we will experience aliasing effects from the noise components, even though our primary frequency is much lower.
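For this 60 Hz example, the artificial frequency created by undersampling can be computed with the standard folding relationship; the 100 Hz sample rate below is an assumed illustration:

```python
def aliased_frequency(f_signal_hz, f_sample_hz):
    """Apparent frequency after sampling: fold the input frequency about
    the nearest multiple of the sample rate."""
    return abs(f_signal_hz - round(f_signal_hz / f_sample_hz) * f_sample_hz)

# Sampling at 100 Hz (below the 120 Hz needed for 60 Hz noise):
noise_alias = aliased_frequency(60.0, 100.0)    # 40.0 Hz artifact appears
signal_alias = aliased_frequency(15.0, 100.0)   # 15.0 Hz signal is unchanged
```

The 60 Hz noise folds down to a 40 Hz component mixed with the data, while the 15 Hz signal of interest, being below the Nyquist frequency, is reproduced at its true frequency.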
The beat frequency, defined as the difference between one half the sample
frequency and the highest frequency, might be very small (i.e., 0.1 Hz), which
leads to aliasing effects that look like DC drift unless longer periods of time are
plotted to reveal the very slow superimposed wave from the aliasing. This effect is seen
in movies where an airplane propeller or tire spokes seem to rotate much slower
than the actual object, or even in the reverse direction. Since the
movie frames are updated on a regular basis (i.e., sample time) as the object rotates
at different speeds, the effects of aliasing are easily seen.

Figure 4 Sampling and reconstructing an analog signal.


Digital Control Systems 319

Figure 5 Aliasing problems with different sample rates.

The best solution is a good low-pass filter with a cut-off frequency above the
highest frequency in the signal and below the Nyquist frequency. Many options are
available: passive filters ranging from simple RC circuits (see Sec. 6.6.3.2) to multi-
pole Butterworth or Chebyshev filters. Passive filters inject the least amount of
added noise into the signal but will always attenuate the signal to some extent.
Active filters, a good all-around solution, can have sharper cut-offs and gains
other than one. Several options are available off the shelf, including switched
capacitor filters or linear active-filter chips. For best effect, place the filters as close to the
AD converter input as possible and use good shielding and wiring practices from the
beginning of the design.
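For the simple RC circuit mentioned above, the cut-off frequency follows directly from the component values. A small sketch (component values here are hypothetical, chosen only to land between the 15 Hz signal and 60 Hz noise of the earlier example):

```python
import math

def rc_cutoff_hz(R_ohms, C_farads):
    """-3 dB cut-off frequency of a passive first-order RC low-pass filter:
    f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * R_ohms * C_farads)

# Hypothetical values: 10 kOhm and 0.33 uF give a cut-off near 48 Hz,
# above a 15 Hz signal band but below 60 Hz noise.
print(round(rc_cutoff_hz(10e3, 0.33e-6), 1))
```

A single RC pole rolls off at only 20 dB/decade, which is why the multipole filters mentioned above are often preferred when the noise sits close to the signal band.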

7.4.2 Difference Equations


There are two basic approaches to simulating sampled input and output signals. We
can recognize that physical system signals are represented by differential equations
and use differential approximations, or we can try to model the computer's sampling
process and define delay operators using a new transform for digital systems (similar
to the Laplace transform for continuous systems). In this section we develop approx-
imate solutions to differential equations, first by approximating the actual derivative
terms and second by numerically integrating the equation to find the solution. In
both cases we see that the result is a set of difference equations that are easy to use
within digital algorithms. Building on this basic understanding, the following section
then uses a model of the actual computer sampling process to determine what the
sampled response should be. This model of the computer leads us into the z-domain
and provides another set of tools for designing and simulating digital control
systems.

7.4.2.1 Difference Equations from Numerically Approximating Differentials


First, let us quickly explore the idea of numerically differentiating a signal to approx-
imate a differential equation, which in this case represents our physical system.
Let us begin with our basic first-order differential equation:

    τ dx/dt + x = u(t)

If this is our equation to be sampled, we can approximate the differential using
Euler's method, where

    dx/dt = lim(Δt→0) Δx/Δt

Now we can use the current and previously sampled value to approximate the
differential and base it on discrete values:

    ẋ(k) ≈ [x(k) - x(k - 1)] / T

If we assume constant sample times where T = t(k) - t(k-1), then t(k) = kT and k is an
integer representing the number of samples. Also, when using this notation, x(k) is
the current value of x at t(k) and x(k - 1) is the value of x at t(k-1), or the previously
sampled value. Now we can take the difference approximation and insert it in place
of the actual differential:

    τ [x(k) - x(k - 1)] / T + x(k) = u(k)

If we solve for x(k), the current value, in terms of x(k - 1), the previous value, and
u(k), the input, we obtain the following difference equation:

    (τ/T + 1) x(k) = (τ/T) x(k - 1) + u(k)

Solve for x(k):

    x(k) = [(τ/T) x(k - 1) + u(k)] / (τ/T + 1)

Rearrange, and finally

    x(k) = [τ/(τ + T)] x(k - 1) + [T/(τ + T)] u(k)

So now we have a difference equation representing a general first-order equation
with time constant τ.
We can follow the same procedure and develop a similar difference equation
for a second-order differential by writing the difference between the current and
previous first-order approximations and again dividing by the time.
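The backward-difference recursion above is easy to iterate on a computer. A minimal Python sketch (added here for illustration; the book's own listings use Matlab) steps x(k) = [τ/(τ+T)] x(k-1) + [T/(τ+T)] u(k) for a unit step input:

```python
import math

def simulate_first_order(tau, T, n_steps):
    """Backward-difference approximation of tau*dx/dt + x = u(t):
    x(k) = tau/(tau+T)*x(k-1) + T/(tau+T)*u(k), unit step input."""
    x = 0.0
    history = []
    for k in range(1, n_steps + 1):
        u = 1.0                                  # unit step input
        x = tau/(tau + T) * x + T/(tau + T) * u  # difference equation
        history.append(x)
    return history

tau = 1.0
print(simulate_first_order(tau, tau/4, 4)[-1])   # ~0.590 at t = tau
print(1.0 - math.exp(-1.0))                      # theoretical value 0.632
```

With T = τ/4 the recursion reaches about 0.590 at one time constant versus the theoretical 0.632, the same gap shown in Table 2.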

EXAMPLE 7.1
Use difference equation approximations to solve for the sampled step response of a
first-order system having a time constant τ. Calculate the result using both a sample
time equal to 1/2 of the system time constant and a sample time equal to 1/4 of the
system time constant. Compare the approximate results (found at each sample time)
to the theoretical results.
First, let us use the difference equation from earlier and substitute in our
sample times. This leads to the following two difference equations.

    For T = τ/2:  x(k) = (2/3) x(k - 1) + (1/3) u(k)
    For T = τ/4:  x(k) = (4/5) x(k - 1) + (1/5) u(k)
    Theoretical:  x(t) = 1 - e^(-t/τ)

We know from earlier discussions that at one time constant, and in response to a unit
step input, we will have reached a value of 0.632 (63.2% of final value). To reach the
time equal to one time constant, we need two samples for case one (T = τ/2) and
four samples for case two (T = τ/4).
Finally, we can calculate the approximate values at each sample time and
compare them with the theoretical values as shown in Table 2. As we would expect,
with shorter sample times we more accurately approximate the theoretical response
of our system. The same analogy is found in numerical integration routines. As we
will see, there are more accurate approximations available that give us better results
at the same sample frequency.

It should now be clear how we use difference equations to approximate differ-
ential equations. As our sample time decreases, we see from Table 2 that the accu-
racy increases, typical of numerical routines. In fact, as T approaches zero the
equation becomes a true differential and the values merge.
7.4.2.2 Difference Equations from Numerical Integration
We can also solve differential equations by numerical integration. This leads to
several more difference equation approximations that can be used to represent the
response of our system. To begin with, let us use the first-order differential equation
from the preceding section:

    τ dx/dt + x = u(t)

Table 2 Calculation of Difference Equations (Numerical Approximation of Differential)
Values Based on Sample Times

T = τ/2: x(k) = (2/3)x(k-1) + (1/3)u(k)          T = τ/4: x(k) = (4/5)x(k-1) + (1/5)u(k)

Sample No.   Actual value     Difference eq.          Actual value      Difference eq.
1            x(τ/2) = 0.393   x(1) = 1/3 = 0.333      x(τ/4)  = 0.221   x(1) = 1/5 = 0.200
2            x(τ)   = 0.632   x(2) = 5/9 = 0.556      x(τ/2)  = 0.393   x(2) = 9/25 = 0.360
3                                                     x(3τ/4) = 0.527   x(3) = 61/125 = 0.488
4                                                     x(τ)    = 0.632   x(4) = 0.590

Instead of numerically approximating the differential, let us now integrate both sides
to solve for the output x:

    dx/dt = -(1/τ) x + (1/τ) u(t)

Take the integral of both sides from t(k-1) = (k - 1)T to t(k) = kT:

    ∫ (dx/dt) dt = x(k) - x(k - 1) = -(1/τ) ∫ x dt + (1/τ) ∫ u dt

Now use the trapezoidal rule to approximate each integral using a difference equa-
tion:

    x(k) - x(k - 1) = -(T/τ) [x(k) + x(k - 1)]/2 + (T/τ) [u(k) + u(k - 1)]/2

Finally, collect terms and simplify to express the solution as a difference equation:

    x(k) = [(2τ - T)/(2τ + T)] x(k - 1) + [T/(2τ + T)] [u(k) + u(k - 1)]

Once again we have an approximate solution to the original first-order differ-
ential equation. The next example problem will compare this method with the results
from the previous section. With these simple difference equations approximating
integrals and derivatives, we can now develop simple digital control algorithms. In
Section 9.3 we will see how these simple approximations can be used to implement
digital versions of our common PID controller algorithm. This is one of the primary
motivations for this discussion.
The trapezoidal rule, as shown here, is sometimes called the bilinear transform
or Tustin's rule; it generally results in better accuracy with the same step size but
requires more computational time each step. The next section outlines a method
using z transforms, similar to Laplace transforms, to model the digital computer
and provide us with another powerful tool to develop and program controller algo-
rithms on digital computers.
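The trapezoidal recursion can be iterated just like the backward-difference one. A Python sketch (an added illustration; the book works these examples by hand and in Matlab):

```python
import math

def simulate_trapezoidal(tau, T, n_steps):
    """Trapezoidal (Tustin) approximation of tau*dx/dt + x = u(t):
    x(k) = (2*tau-T)/(2*tau+T)*x(k-1) + T/(2*tau+T)*(u(k)+u(k-1))."""
    x = 0.0
    u_prev = 1.0          # unit step applied at k = 0, so u(0) = 1
    history = []
    for k in range(1, n_steps + 1):
        u = 1.0
        x = (2*tau - T)/(2*tau + T) * x + T/(2*tau + T) * (u + u_prev)
        u_prev = u
        history.append(x)
    return history

tau = 1.0
print(simulate_trapezoidal(tau, tau/4, 4)[-1])   # ~0.634 at t = tau
print(1.0 - math.exp(-1.0))                      # theoretical value 0.632
```

At one time constant the trapezoidal result is within about 0.002 of the theoretical value, matching the comparison drawn in Example 7.2 and Table 3.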

EXAMPLE 7.2
Use numerical integration approximations to solve for the sampled step response of
a first-order system having a time constant τ. Calculate the result using both a
sample time equal to 1/2 of the system time constant and a sample time equal to 1/4
of the system time constant. Compare the approximate results (found at each sample
time) to the theoretical results.
First, let us use the difference equation from earlier and substitute in our
sample times. This leads to the following two difference equations.

    For T = τ/2:  x(k) = (3/5) x(k - 1) + (1/5) [u(k) + u(k - 1)]
    For T = τ/4:  x(k) = (7/9) x(k - 1) + (1/9) [u(k) + u(k - 1)]
    Theoretical:  x(t) = 1 - e^(-t/τ)

We know from earlier discussions that at one time constant, and in response to a unit
step input, we will have reached a value of 0.632 (63.2% of final value). To reach the
time equal to one time constant, we need two samples for case one (T = τ/2) and
four samples for case two (T = τ/4).
Finally, we can calculate the approximate values at each sample time and
compare them with the theoretical values as shown in Table 3. When we compare
these results with those in Table 2 there is a surprising difference between the two
methods. Although numerical integration with the trapezoidal approximation
requires additional computations, the results are much closer to the correct values,
even at the lower sampling frequencies. At T = τ/4 we are within 0.002 of the
correct answer at one system time constant.

While using the methods discussed here to simulate the response of physical
systems is useful in and of itself, the primary benefit will be seen once we start
designing and implementing digital control algorithms. Digital computers are very
capable when it comes to working with difference equations, and by expressing our
desired control strategy as a difference equation we can easily implement controllers
using microprocessors. With the basic concepts introduced in this section we are
already able to approximate the derivative and integral actions of the common PID
controller; the proportional term is even simpler. While these methods result in
difference equation approximations, we are still lacking the design tools analogous
to those we learned in the s-domain. The next section introduces one of these com-
mon tools, the z transform, and concludes our discussion of techniques used to
obtain digital algorithms by modeling the computer sampling effects and introducing
the new transform.

7.4.3 z Transforms
The most common tool used to design and simulate digital systems is the z transform.
We will see that although z transforms have many advantages, they are similar to
Laplace transforms in that they only represent linear systems. Nonlinear systems must
be modeled using difference equations. Remember from the beginning discussion that
the computer instantaneously samples at each clock pulse and thus the continuous
signal is converted into a series of thin pulses, each with an amplitude equal to the
amplitude of the analog signal at the time the pulse was taken. For analog inputs
this type of model works well since the computer uses each discrete data point as

Table 3 Calculation of Difference Equations (Numerical Integration) Values Based on
Sample Times

                     T = τ/2                              T = τ/4
Sample No.   Actual value     Difference eq.       Actual value      Difference eq.
1            x(τ/2) = 0.393   x(1) = 2/5 = 0.400   x(τ/4)  = 0.221   x(1) = 2/9 = 0.222
2            x(τ)   = 0.632   x(2) = 16/25 = 0.640 x(τ/2)  = 0.393   x(2) = 32/81 = 0.395
3                                                  x(3τ/4) = 0.527   x(3) = 386/729 = 0.529
4                                                  x(τ)    = 0.632   x(4) = 0.634

represented by that one instant in time. In terms of analog outputs, however, when
this signal is sent from the DA converter (analog output), it is fairly useless to the
physical world as a series of infinitely thin pulses. Before the physical system has time
to respond, the pulse is already gone. To remedy this, a hold is applied that
maintains the current pulse amplitude value on the output channel until the next
sample is sent. This is seen in Figure 6, where the computer can now approximate a
continuous analog signal by a continuous series of discrete output levels, as opposed
to just pulses.
If we assume that the time to actually acquire the sample (i.e., latch and
unlatch the switch) is very small, we can approximate the pulse train of values
using the impulse function δ. At the time of the kth sample, the impulse function
is infinitely high and thin with an area under the curve of 1. Although this obviously
is not what the computer actually does, the method does approximate the outcome
and, as we will see, allows us to model the sample and hold process. Using the
impulse function allows us to write the sampled pulse train as a summation with
each pulse occurring at the kth sample:

    f*(t) = Σ (k = 0 to ∞) f(kT) δ(t - kT)

where δ(t - kT) is 1 when t = kT and 0 whenever t ≠ kT.
The benefit of using the impulse function is seen when we take the Laplace
transform. Since the Laplace transform of an impulse function, δ, is 1, and the
Laplace transform of a delay of length T is e^(-Ts), we can convert the sampled
pulse equation into the s-domain, where

    F*(s) = Σ (k = 0 to ∞) f(kT) e^(-kTs)

This simply represents the original sequence of pulses in the s-domain.
Now, let us define a new variable, z, where z = e^(Ts). This simply maps the s-
domain into the z-domain, and z becomes a shift operator where each z^(-1) is one step
before the last, allowing us to model our sequence of pulses. This is much more
convenient than writing each delay in the time domain.

Figure 6 Analog signal being sampled and held.



Finally, if the signal is an output of our digital device, we must include in our
model the fact that we want the signal to remain on the output port until the next
commanded signal level is received. We can model this effect as the sum of two step
inputs, one occurring one sample later (and opposite in sign) to cancel out the first
step. This is called a hold. If we recognize that with the hold applied each sampled
value will remain until the next, we can model each pulse with the hold applied as a
single pulse, with the total output being the sequence of pulses with width T as
shown in Figure 6. To model the zero-order sample and hold, we can model the
sequence of pulses as one step input followed by another equal and negative step
input applied one sample time later, as shown in Figure 7.
So we see that a zero-order hold (ZOH) in the s-domain can be used to model
the holding effects of the digital-to-analog converter and that z^(-1) = e^(-Ts) will allow
us to map our models from the s-domain into the z-domain (sampled domain) and
vice versa. The important concept to remember is that when we take a continuous
system model and develop its discrete (sampled) equivalent format, we must also add
a ZOH to model the effect of the components used to send the sampled data. The
ZOH may be used in several forms using the identity z^(-1) = e^(-Ts):

    ZOH = [1 - e^(-Ts)]/s = (1 - z^(-1)) (1/s) = [(z - 1)/z] (1/s)

It is common to include the 1/s as part of the Laplace to z transform and include the
additional (1 - z^(-1)) separately. Since the z transform has been derived from the
Laplace transform, many of the same analysis procedures apply. For example, we
can again develop transfer functions; talk about poles, zeros, and frequency
responses; and analyze stability. However, we must remember that z itself contains
information about the sampling rate of our system since it is dependent on the
sample period, T.
Since z acts as a shift operator, we can directly relate it to the concept of
difference equations discussed earlier. A transfer function in the z-domain is easily
converted into a difference equation using the equivalences:

Figure 7 Sample and hold modeled as the sum of two steps.



    If: C(z)/R(z) = z^(-1)
    Then: C(z) = z^(-1) R(z), or c(k) = r(k - 1)
    Or: z C(z) = R(z), or c(k + 1) = r(k)
    Conclusion: C(z) = z^(-n) R(z), or c(k) = r(k - n)

As with Laplace transforms, tables have been developed that allow us to transform
from time to z or s interchangeably. The inverse property also is true and presents us
with yet another method of analyzing systems. To demonstrate the concept, let us
use the table of z transforms in Appendix B to develop a difference equation for a
first-order system and compare its sampled output with that obtained by the numer-
ical approximations from the two previous sections.

First-order system:

    τ dx/dt + x = u(t)

Take the Laplace transform and develop the transfer function:

    X(s)/U(s) = 1/(τs + 1)

Now we can use the table in Appendix B for the z transform, but remember that we
must first add our ZOH model to the continuous system transfer function since we
want the sampled output. The ZOH and the first-order transfer function become (the
1/s term is grouped with the continuous system transfer function):

    X(z)/U(z) = [(z - 1)/z] Z{ (1/s) [1/(τs + 1)] }

Let a = 1/τ to match the tables:

    X(z)/U(z) = [(z - 1)/z] Z{ a/[s(s + a)] }

Take the z transform:

    X(z)/U(z) = [(z - 1)/z] · z[1 - e^(-aT)] / [(z - 1)(z - e^(-aT))]

After simplification, the result becomes a transfer function in the z-domain where the
actual coefficients (zero and pole values) are a function of 1/τ, or a, and our sample
time, T:

    X(z)/U(z) = [1 - e^(-aT)] / [z - e^(-aT)]

As we will see in subsequent chapters, our pole locations are still used to
evaluate the stability and transient response of our system. The primary difference
now, as compared with continuous systems, is that the pole locations also change as
a function of our sample time, not just when physical parameters in our system
undergo change.

Finally, let us convert the discrete transfer function to a difference equation
using the identity z^(-1) x(k) = x(k - 1). To begin, we can multiply the top and bottom
by z^(-1):

    X(z)/U(z) = [1 - e^(-aT)] z^(-1) / [1 - e^(-aT) z^(-1)]

Now cross-multiply:

    X(z) [1 - e^(-aT) z^(-1)] = U(z) [1 - e^(-aT)] z^(-1)

    X(z) - e^(-aT) X(z) z^(-1) = [1 - e^(-aT)] U(z) z^(-1)

Now we can use z^(-1) as a shift operator and write it as a difference equation:

    x(k) = e^(-aT) x(k - 1) + [1 - e^(-aT)] u(k - 1)

As expected, the coefficients of the difference equation are dependent on the sample
time.
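Because this difference equation comes from the ZOH model rather than a derivative or integral approximation, it reproduces the continuous response exactly at the sample instants. A Python sketch (an added illustration; the book's tools are Matlab):

```python
import math

def zoh_step_response(tau, T, n_steps):
    """ZOH-equivalent difference equation of 1/(tau*s + 1):
    x(k) = e^(-T/tau)*x(k-1) + (1 - e^(-T/tau))*u(k-1), unit step input."""
    e = math.exp(-T / tau)
    x = 0.0
    history = []
    for k in range(1, n_steps + 1):
        u_prev = 1.0                  # unit step: u(k-1) = 1 for k >= 1
        x = e * x + (1.0 - e) * u_prev
        history.append(x)
    return history

# Matches the theoretical 1 - e^(-t/tau) exactly at every sample instant:
print(zoh_step_response(1.0, 0.25, 4)[-1])   # value at t = tau
print(1.0 - math.exp(-1.0))                  # theoretical value, identical
```

Both printed values are 0.63212 to five places, which is the behavior tabulated in Table 4.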

EXAMPLE 7.3
Use z transform approximations to solve for the sampled step response of a first-
order system having a time constant τ. Calculate the result using both a sample time
equal to 1/2 of the system time constant and a sample time equal to 1/4 of the system
time constant. Compare the approximate results (found at each sample time) to the
theoretical results.
First, let us use the difference equation from earlier and substitute in our
sample times. This leads to the following two difference equations.

    For T = τ/2:  e^(-aT) = 0.60653,  x(k) = 0.60653 x(k - 1) + 0.39347 u(k - 1)
    For T = τ/4:  e^(-aT) = 0.77880,  x(k) = 0.77880 x(k - 1) + 0.22120 u(k - 1)
    Theoretical:  x(t) = 1 - e^(-t/τ)

We know from earlier discussions that at one time constant, and in response to a unit
step input, we will have reached a value of 0.632 (63.2% of final value). To reach the
time equal to one time constant we need two samples for case one (T = τ/2) and
four samples for case two (T = τ/4).
Finally, we can calculate the approximate values at each sample time and
compare them with the theoretical values as shown in Table 4. When we compare
these results with those in Tables 2 and 3 we see the advantage of using z transforms
to model the computer hardware. Even at low sample rates the results match the
analytical solution exactly. In fact, when we examine the difference equations, we see
that the e^(-aT) used in the difference equations corresponds to the e^(-t/τ) in the
continuous time domain response equation.

EXAMPLE 7.4
Use Matlab to solve for the sampled step response of a first-order system having a
time constant τ. Calculate the result using both a sample time equal to 1/2 of the

Table 4 Calculation of Difference Equations (z Transform) Values Based on Sample
Times

                     T = τ/2                            T = τ/4
Sample No.   Actual value       Difference eq.   Actual value      Difference eq.
1            x(τ/2) = 0.39347   x(1) = 0.39347   x(τ/4)  = 0.221   x(1) = 0.22120
2            x(τ)   = 0.63212   x(2) = 0.63212   x(τ/2)  = 0.393   x(2) = 0.39347
3                                                x(3τ/4) = 0.527   x(3) = 0.52763
4                                                x(τ)    = 0.632   x(4) = 0.63212

system time constant and a sample time equal to 1/4 of the system time constant.
Plot the approximate results (found at each sample time) against the theoretical
results.
Matlab can also be used to quickly generate z-domain transfer functions. Using
the following commands in Matlab will generate our first-order transfer function,
define the sample times, and convert the continuous system to a discrete system
using the ZOH model. There are several methods available in Matlab for approximat-
ing a continuous system as a discrete sampled system.

%Program to convert first-order system
%to z-domain transfer function
Tau=1;      %System time constant
T2=Tau/2;   %Sample time equal to 1/2 the time constant
T4=Tau/4;   %Sample time equal to 1/4 the time constant
sysc=tf(1/Tau,[1 1/Tau])    %Make LTI TF in s
sysz2=c2d(sysc,T2,'zoh')    %Convert to discrete TF using zoh and sample time
sysz4=c2d(sysc,T4,'zoh')    %Convert to discrete TF using zoh and sample time
disp('Press any key to generate step response plot')
pause;
step(sysc,sysz2,8)
figure;
step(sysc,sysz4,8)

This results in the following output to the screen:

Continuous system transfer function:

    1 / (s + 1)

Discrete transfer function when T = 0.5 s:

    0.3935 / (z - 0.6065)

Discrete transfer function when T = 0.25 s:

    0.2212 / (z - 0.7788)

Using the step command allows us to compare the continuous system response and
the discrete sampled response for each sample time, as shown in Figures 8 and 9.
From the step response plots we can easily see that although both sample times
are accurate at the sample instants, the shorter sample time leads to a much
better approximation of the continuous system when reconstructed. As we will see in
subsequent chapters, there are many additional tools in Matlab that can be used to
design and simulate discrete systems.
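The coefficients Matlab reports can be cross-checked by hand from the ZOH result derived earlier, X(z)/U(z) = (1 - e^(-aT))/(z - e^(-aT)). A short Python sketch (added here as a cross-check, not part of the text):

```python
import math

def first_order_zoh(tau, T):
    """ZOH discrete equivalent of 1/(tau*s + 1).
    Returns (b, a) for X(z)/U(z) = b/(z - a), where a = e^(-T/tau), b = 1 - a."""
    a = math.exp(-T / tau)
    return (1.0 - a, a)

b2, a2 = first_order_zoh(1.0, 0.5)
print(round(b2, 4), round(a2, 4))   # 0.3935 0.6065, matching the T = 0.5 s output
b4, a4 = first_order_zoh(1.0, 0.25)
print(round(b4, 4), round(a4, 4))   # 0.2212 0.7788, matching the T = 0.25 s output
```

The agreement confirms that c2d with the 'zoh' option implements exactly the sample-and-hold model developed in this section.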

EXAMPLE 7.5
For the sampled system, C(z), derive the sampled output using
1. Difference equations resulting from a transfer function representation;
2. Difference equations resulting from the output function in the z-domain;
3. Matlab.
We begin with the discrete transfer function of the system, defined as

    G(z) = C(z)/R(z) = 1/(z^2 - 0.5z)

Recognizing that a step input in discrete form is given as

    Unit step = z/(z - 1)    (1/s in the s-domain)

we can also write C(z) as a transfer function subjected to a step input, R(z):

    C(z) = [1/(z^2 - 0.5z)] · [z/(z - 1)] = [1/(z^2 - 0.5z)] R(z)

Figure 8 Sampled step response using Matlab with T = 0.5 sec.

Figure 9 Sampled step response using Matlab with T = 0.25 sec.

Simplifying, we have the following sampled output of a system represented as a
discrete function in the z-domain. In this case the information about the input acting
on the system will be included in the resulting difference equation:

    C(z) = z / [(z - 1)(z^2 - 0.5z)]

The two representations, a transfer function subjected to a step input or a sampled
output, can be simulated using difference equations but with slight differences in how
the input sequence occurs. In the remainder of this example, the sampled response is
derived using both notations.
First, let us assume we are given the transfer function and asked to calculate
the response of the system to a unit step input. To derive the difference equations, we
have two options. First, we may cross-multiply and represent the output c(k) as a
function of previous values c(k - i) and of a general input, r(k). Second, we may
substitute in the discrete representation of a step input and form the difference
equation only as a function of c(k) and the delta function. This, however, becomes
the same as the sampled output C(z) that is examined in part 2.
Solution 1: Using the Discrete Transfer Function and General r(k)
To develop the general difference equation, multiply numerator and denominator by
z^(-2), cross-multiply, and leave the input in the difference equation:

    C(z)/R(z) = 1/(z^2 - 0.5z)

    C(z)/R(z) = [1/(z^2 - 0.5z)] · [z^(-2)/z^(-2)] = z^(-2) / [1 - 0.5 z^(-1)]

    C(z) [1 - 0.5 z^(-1)] = z^(-2) R(z)



    C(z) - 0.5 z^(-1) C(z) = z^(-2) R(z)
    C(z) = 0.5 z^(-1) C(z) + z^(-2) R(z)

Now we can write the difference equation as

    c(k) = 0.5 c(k - 1) + r(k - 2)

Assuming initial conditions equal to zero allows us to calculate the sampled output
(first seven samples) as

    k = 0:  c(0) = 0 + 0 = 0
    k = 1:  c(1) = 0 + 0 = 0
    k = 2:  c(2) = 0 + 1 = 1    (step input, R(z))
    k = 3:  c(3) = 0.5(1) + 1 = 1.5
    k = 4:  c(4) = 0.5(1.5) + 1 = 1.75
    k = 5:  c(5) = 0.5(1.75) + 1 = 1.875
    k = 6:  c(6) = 0.5(1.875) + 1 = 1.9375
    k = ∞:  c(∞) = 2.0

Notice that in this solution, once the step occurs in the difference equation, r(k - 2)
always retains the value of the step input, in this case a unit step equal to 1. This
differs from the second method, shown next.
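The hand iteration above is a natural fit for a short loop. A Python sketch of solution 1 (an added illustration; the example itself uses hand calculation and Matlab):

```python
def simulate_solution1(n_samples):
    """Iterate c(k) = 0.5*c(k-1) + r(k-2) for a unit step r(k) = 1, k >= 0."""
    c = {-2: 0.0, -1: 0.0}                    # zero initial conditions
    r = lambda k: 1.0 if k >= 0 else 0.0      # unit step input
    for k in range(n_samples):
        c[k] = 0.5 * c[k - 1] + r(k - 2)
    return [c[k] for k in range(n_samples)]

print(simulate_solution1(7))
# [0.0, 0.0, 1.0, 1.5, 1.75, 1.875, 1.9375]
```

The printed sequence reproduces the table above, approaching the final value of 2.0.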
Solution 2: Using the Sampled Output That Includes the Input Effects in
the z-Domain Representation
The procedure used to develop the general difference equation remains the same:
we multiply the numerator and denominator by z^(-2), cross-multiply, and write the
difference equation. Now the output sequence includes the effects of the step
input:

    C(z) = z / [(z - 1)(z^2 - 0.5z)]

Expand the denominator terms:

    C(z) = z / (z^3 - 1.5z^2 + 0.5z)

We can simplify the output since a z cancels in the numerator and denominator:

    C(z) = 1 / (z^2 - 1.5z + 0.5)

Multiplying the numerator and denominator by z^(-2):

    C(z) = z^(-2) / [1 - 1.5 z^(-1) + 0.5 z^(-2)]

Now we can cross-multiply and develop the difference equation, recognizing that the
general input, r(k), does not appear:

    C(z) [1 - 1.5 z^(-1) + 0.5 z^(-2)] = 1 · z^(-2)

    C(z) - 1.5 z^(-1) C(z) + 0.5 z^(-2) C(z) = 1 · z^(-2)
    C(z) = 1.5 z^(-1) C(z) - 0.5 z^(-2) C(z) + 1 · z^(-2)

Now we can write the difference equation as

    c(k) = 1.5 c(k - 1) - 0.5 c(k - 2) + 1 · δ(k - 2)

Of particular interest is the necessary use of the delta function in this difference
equation. When we convert the term 1 · z^(-2) into the sampled (time) output, the inverse
z transform of 1 is simply a delta function, or unit impulse, delayed by two sample
times (the z^(-2)). It therefore does not have an effect on the solution except for the
single sample instant k = 2. This is different from solution 1, where the inverse
z transform of R(z) is simply the value of r(t) delayed two sample periods (as used in
the solution, r(k - 2)). Remember that in this solution the step input, R(z), is inher-
ent in the difference equation, as evidenced by the additional c(k - 2) term and
different coefficient values when compared with the difference equation in solution 1.
Finally, to calculate the sampled outputs we assume initial conditions equal to
zero and calculate the sampled output (first seven samples) as

    k = 0:  c(0) = 0 - 0 + 0 = 0
    k = 1:  c(1) = 0 - 0 + 0 = 0
    k = 2:  c(2) = 0 - 0 + 1 = 1    (delta function, only when k = 2)
    k = 3:  c(3) = 1.5(1) - 0 + 0 = 1.5
    k = 4:  c(4) = 1.5(1.5) - 0.5(1) + 0 = 1.75
    k = 5:  c(5) = 1.5(1.75) - 0.5(1.5) + 0 = 1.875
    k = 6:  c(6) = 1.5(1.875) - 0.5(1.75) + 0 = 1.9375
    k = ∞:  c(∞) = 2.0

These are the same values calculated using the transfer function representation in
part 1.
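Solution 2 can be iterated the same way, with the delta function firing only at k = 2. A Python sketch (added as a numerical confirmation that the two difference equations agree):

```python
def simulate_solution2(n_samples):
    """Iterate c(k) = 1.5*c(k-1) - 0.5*c(k-2) + delta(k-2),
    where delta(k-2) is 1 only at the single instant k = 2."""
    c = {-2: 0.0, -1: 0.0}                    # zero initial conditions
    for k in range(n_samples):
        delta = 1.0 if k == 2 else 0.0        # unit impulse delayed two samples
        c[k] = 1.5 * c[k - 1] - 0.5 * c[k - 2] + delta
    return [c[k] for k in range(n_samples)]

print(simulate_solution2(7))
# [0.0, 0.0, 1.0, 1.5, 1.75, 1.875, 1.9375], identical to solution 1
```

Even though the coefficients differ from solution 1, the step input folded into C(z) produces exactly the same output sequence.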
Solution 3: Using Matlab to Simulate the Sampled Output
Many computer packages also enable us to quickly simulate the response of sampled
systems. We can define the discrete transfer function in Matlab as

    >> sysz = tf(1,[1 -0.5 0],1)

This results in the discrete transfer function, sysz:

    sysz = C(z)/R(z) = 1/(z^2 - 0.5z)

To solve for the first set of sampled output values:

    >> [Y,T] = step(sysz)

    k = T    Y
    0        0
    1        0
    2        1.0000
    3        1.5000
    4        1.7500
    5        1.8750
    6        1.9375
    7        1.9688
    8        1.9844
    9        1.9922
    10       1.9961
    11       1.9980
    12       1.9990
And finally, to generate the discrete step response plot given in Figure 10:

    >> step(sysz)

Thus, in conclusion, all methods produce identical sampled values. The first
method allows us to input any function, whereas the second method contains the
input effects in the z-domain output function, C(z). Matlab also provides an easy
method for simulating discrete systems. In general, when we design controllers we will
use the representation (transfer function) examined in the first method, since the input
to the controller, the error, is constantly changing and is best represented as a general
input function.
In concluding this section, we have seen that z transforms are able to model
the sample and hold effects of a digital computer and produce results nearly identical
to the theoretical ones. Regardless of the method used, once we have derived the differ-
ence equations for a system, it is very easy to simulate the response on any digital
computer. Also, now that we are able to represent systems using discrete transfer
functions in the z-domain, we can apply our knowledge of stability, poles and zeros,
and root locus plots to design systems implemented digitally. Since we know how the
s-domain maps into the z-domain, we can easily define the desired pole/zero loca-
tions in the z-domain, with the key difference that we can also vary the system
response (pole locations) by changing the sample time.

Figure 10 Sampled step response using Matlab.
One method for moving from the s-domain into the sampled z-domain is to
simply map the poles and zeros from the continuous system transfer function into
the equivalent poles and zeros in the z-domain using the mapping z = e^(sT). This
method is examined further in Chapter 9, where it is presented as a method for con-
verting analog controller transfer functions into discrete representations, allowing
them to be implemented on a microprocessor.
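The mapping z = e^(sT) is a one-line computation. A Python sketch (added for illustration) maps the pole of the first-order system used throughout this chapter:

```python
import cmath

def map_pole(s_pole, T):
    """Map a continuous s-plane pole (possibly complex) to the z-plane
    via z = e^(sT)."""
    return cmath.exp(s_pole * T)

# The continuous pole s = -1/tau (tau = 1) with T = 0.25 maps to z = 0.7788,
# the same pole found in the ZOH transfer function derived earlier:
print(map_pole(-1.0, 0.25).real)
```

Note that stable left-half-plane poles (negative real part) map inside the unit circle, which previews the z-domain stability criterion used in later chapters.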

7.4.4 Discrete State Space Representations

In the same way that we can represent linear differential equations using transfer
functions and state space matrices in the continuous domain, we can also represent
them in the discrete domain. In the previous section we saw how transfer func-
tions use the z-domain to approximate continuous systems. Discrete state space
matrices are time based, the same as with continuous systems, and now represent
the actual difference equations obtained (using several different methods) in the
previous section. The general state space matrix representation is similar to before
and is given below:

    x(k + 1) = K x(k) + L r(k)
    c(k) = M x(k) + N r(k)

where x is the vector of states that are sampled, r is the vector of inputs, and c is the
vector (or scalar) of desired outputs. K, L, M, and N are the matrices containing the
coefficients of the difference equations describing the system.
Many of the same linear algebra properties still apply, only now the matrices
contain the coefficients of the difference equations. Instead of the first differential of
a state variable being written as a function of all states, the next sampled value is
written as a linear function of all previously sampled values. The order of the system,
or size of the square matrix K, depends on the highest power of z in the transfer
function. There are many equivalent state space representations, and different forms
may be used depending on the intended use. One advantage of state space is that we
can use transformations to go from one form to another. For example, if we diag-
onalize the system matrix, the values on the diagonal are the system eigenvalues.
There are several ways to get the discrete system matrices, although it generally
involves one of the previous methods used to write the system response as a differ-
ence equation (or set of difference equations). If we already have a discrete transfer
function in the z-domain, we can write the difference equations and develop the
matrices as illustrated in the next example.

EXAMPLE 7.6
Convert the discrete transfer function of a physical system into the equivalent set of
discrete state space matrices.

    G(z) = C(z)/R(z) = z/(z^2 - z + 2)

First, convert the discrete transfer function to a difference equation:

    G(z) = C(z)/R(z) = z^(-1) / [1 - z^(-1) + 2 z^(-2)]
    c(k) = c(k - 1) - 2 c(k - 2) + r(k - 1)
    c(k + 1) = c(k) - 2 c(k - 1) + r(k)

Since c(k + 1) depends on two previous values, we will need two discrete states so
that each state equation is in the form c(k + 1) = f(k). Therefore, let us define our
states as

    x1(k) = c(k)
    x2(k) = c(k - 1)

Now substitute in and write the initial difference equation as two equations where
each state at sample k + 1 is only a function of states and inputs at sample k:

    x1(k + 1) = c(k + 1) = x1(k) - 2 x2(k) + r(k)
    x2(k + 1) = c(k) = x1(k)

Now we can easily express the difference equations in matrix form:

    x(k + 1) = [x1(k + 1); x2(k + 1)] = [1 -2; 1 0] [x1(k); x2(k)] + [1; 0] r(k)

And if our output is simply c(k):

    c(k) = [1 0] [x1(k); x2(k)] + [0] r(k)

Once we have linear difference equations it becomes straightforward to represent
them using matrices. Many of the linear algebra analysis methods remain the same.
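Stepping the discrete state equations is a direct loop over the matrix recursion. A Python sketch (added for illustration, using the K, L, M, N of Example 7.6 and a unit step input):

```python
def step_response_ss(K, L, M, N, n_samples):
    """Iterate x(k+1) = K x(k) + L r(k), c(k) = M x(k) + N r(k)
    for a 2-state system with a unit step input r(k) = 1."""
    x = [0.0, 0.0]                              # zero initial conditions
    outputs = []
    for k in range(n_samples):
        r = 1.0
        c = M[0]*x[0] + M[1]*x[1] + N*r         # output equation
        outputs.append(c)
        x = [K[0][0]*x[0] + K[0][1]*x[1] + L[0]*r,   # state update
             K[1][0]*x[0] + K[1][1]*x[1] + L[1]*r]
    return outputs

K = [[1.0, -2.0], [1.0, 0.0]]
L = [1.0, 0.0]
M = [1.0, 0.0]
N = 0.0
print(step_response_ss(K, L, M, N, 5))   # matches c(k) = c(k-1) - 2c(k-2) + r(k-1)
```

The output sequence grows and oscillates, consistent with the poles of z/(z^2 - z + 2) lying outside the unit circle; the example is about representation, not stability.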

To analyze the discrete state space matrices, the required operations are similar
to those used to find the eigenvalues of the continuous system state space matrices.
Now, when we examine the left and right sides of the state equations, we see that
they are related through z^(-1) instead of through s, as was the case with continuous
representations. Using this identity, the linear algebra operations remain the same
and we can solve for X(z) as

    x(k + 1) = K x(k) + L r(k)
    X(z) = K X(z) z^(-1) + L R(z) z^(-1)
    X(z) - K X(z) z^(-1) = L R(z) z^(-1)
    [I - K z^(-1)] X(z) = L R(z) z^(-1)

Now we can premultiply both sides by the inverse of [I - K z^(-1)] and solve for X(z):

    X(z) = [I - K z^(-1)]^(-1) L R(z) z^(-1)

Finally, we can substitute X(z) into the output equation and solve for C(z):

    C(z) = M [I - K z^(-1)]^(-1) L R(z) z^(-1) + N R(z)

or

    C(z) = { M [I - K z^(-1)]^(-1) L z^(-1) + N } R(z)

As with continuous systems, we now have the methods to convert from discrete
state space matrices into a discrete transfer function representation. It still involves
taking the inverse of the system matrix and results in the poles and zeros of our
system.

EXAMPLE 7.7
Derive the discrete transfer function for the system represented by the discrete state
space matrices.
" # " #" # " #
x1 k 1 1 2 x1 k 1
xk 1 rk
x2 k 1 1 0 x2 k 0
" #
  x1 k  
ck 1 0 0 rk
x2 k

The relationship between discrete state space matrices and discrete transfer functions
has already been defined as

C(z) = [M(I - Kz^-1)^-1 Lz^-1 + N] R(z)

Substitute the K, L, M, and N matrices:

C(z) = ( [1  0] ([1  0; 0  1] - [z^-1  -2z^-1; z^-1  0])^-1 [1; 0] z^-1 + 0 ) R(z)
Digital Control Systems 337

Combine and take the inverse of the inner matrix by using the adjoint and deter-
minant:

C(z) = ( [1  0] [1 - z^-1  2z^-1; -z^-1  1]^-1 [1; 0] z^-1 ) R(z)

C(z) = ( [1  0] [1  -2z^-1; z^-1  1 - z^-1] [1; 0] z^-1 / (1 - z^-1 + 2z^-2) ) R(z)

And finally, we can perform the final matrix multiplications, resulting in

C(z) = ( z^-1 / (1 - z^-1 + 2z^-2) ) R(z)
Since we started this example using the discrete state space matrices from
Example 7.6, we can easily verify our solution. Recall that the original transfer
function from Example 7.6 was

G(z) = C(z)/R(z) = z/(z^2 - z + 2)

We see that we get the same result if we just multiply the top and bottom of our
transfer function by z^2 to put it in the same form. Thus the methods developed in
earlier chapters for continuous system state space matrices are very similar to the
methods we use for discrete state space matrices, as shown here.

The process of deriving the discrete state space matrices becomes more difficult
when the input spans several delays (i.e., r(k - 1) and r(k - 2)) and relies on first
getting difference equations or z transforms. To be more general, we would like
either to take existing differential equations (which may be nonlinear) or, if linear,
to convert directly from the A, B, C, and D matrices already developed. The next two
methods address these cases.

If we begin with the original differential equations describing the system, we
can simply write them as a set of first-order differential equations and approximate
the difference equations using either the backward, forward, or bilinear approximation
difference algorithms. Represent each first-order differential equation by its difference
equation, solve each one as a function of x(k + 1) = f(x(k), x(k - 1), ..., r(k),
r(k - 1), ...), and then, if linear, represent the result in matrix form. Examples of three
different difference equation approximations are given in Table 5. The procedure is
similar to those presented in Section 7.4.2 and is just repeated for each state equation
that we have. An advantage of using this method is that nonlinear state equations are
very easy to work with; the only difference is that we cannot write the resulting non-
linear difference equations in matrix form and use linear algebra techniques to
analyze them (as was done in Example 7.7). Of the three alternatives given in
Table 5, the bilinear transformation provides the best approximation but requires
slightly more work to perform the transformation.
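The three approximations can be compared on a concrete case. The sketch below (plain Python, my own example, not from the text) discretizes the stable first-order system dx/dt = (r - x)/tau with each rule from Table 5 and measures the worst-case deviation of a simulated step response from the exact discrete solution; the bilinear rule should come out best, as the text states.

```python
import math

tau, T, r = 1.0, 0.1, 1.0   # time constant, sample period, unit step
a = T / tau

def simulate(step, n=50):
    """Iterate a one-step update rule x(k+1) = step(x(k))."""
    x, out = 0.0, []
    for _ in range(n):
        x = step(x)
        out.append(x)
    return out

# Each rule solved for x(k+1) from dx/dt = (r - x)/tau:
forward  = simulate(lambda x: x + a*(r - x))
backward = simulate(lambda x: (x + a*r) / (1.0 + a))
bilinear = simulate(lambda x: ((1.0 - a/2)*x + a*r) / (1.0 + a/2))
exact    = [1.0 - math.exp(-(k + 1)*T/tau) for k in range(50)]

# Maximum error over the step response for each rule:
err = {name: max(abs(y - ye) for y, ye in zip(seq, exact))
       for name, seq in [("forward", forward), ("backward", backward),
                         ("bilinear", bilinear)]}
print(err)   # the bilinear error is far smaller than the other two
```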
Finally, although only introduced here, it is possible to approximate the trans-
formation itself, z = e^(sT), through a series expansion, allowing even better approx-
imations. Since computers can include many terms in the series, this provides good

Table 5  State Space Continuous to Discrete Transformations

Alternative transformations from continuous to discrete first-order ODEs

Method                    Difference equation                z-Domain
Backward rectangular      dx/dt ≈ (x(k) - x(k - 1))/T        s = (z - 1)/(Tz)
Forward rectangular       dx/dt ≈ (x(k + 1) - x(k))/T        s = (z - 1)/T
Bilinear transformation   Approximates z = e^(sT)            s = (2/T)(z - 1)/(z + 1)

results when implemented in programs like Matlab. The assumption used with this
method is that the inputs themselves remain constant during the sample period.
While obviously not the case, unless sample times are large, it does provide a
good approximation. This allows our discrete system matrix, K, as defined
previously, to represent the outputs delayed one sample period relative to
our original system matrix, A, for the continuous system. Then we can include as
many of the series expansion terms as we wish:

K(kT) = e^(AT) = I + AT + (AT)^2/2! + (AT)^3/3! + ...

where K is the discrete equivalent of our original system matrix A and T is the sample
period.
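A sketch of the truncated series (plain Python, my own illustration) shows the computation; the test matrix A = [0 1; 0 0] is nilpotent, so the series terminates exactly and recovers K = [1 T; 0 1], the system matrix that appears in Problem 7.18.

```python
def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, T, terms=10):
    """K = I + AT + (AT)^2/2! + ... truncated after `terms` terms."""
    AT = [[A[i][j]*T for j in range(2)] for i in range(2)]
    K = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    P = [[1.0, 0.0], [0.0, 1.0]]   # running power (AT)^n
    fact = 1.0
    for n in range(1, terms):
        P = mat_mul(P, AT)
        fact *= n
        K = [[K[i][j] + P[i][j]/fact for j in range(2)] for i in range(2)]
    return K

# Double integrator: A is nilpotent, so e^{AT} = I + AT exactly.
A = [[0.0, 1.0], [0.0, 0.0]]
T = 0.5
print(expm_series(A, T))   # [[1.0, 0.5], [0.0, 1.0]]
```

For a general (non-nilpotent) A the series is only approximate, and tools like Matlab or `scipy.linalg.expm` sum enough terms (or use better algorithms) to make the truncation error negligible.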
As with the continuous system matrices, A, B, C, and D, we can use the discrete
matrices derived in this section to check controllability, design observers and
estimators, etc. When we begin looking at discrete (sampled) MIMO systems later, this
will be the representation of choice.

7.5 SAMPLE TIME GUIDELINES


It is important to know what sample times should be used when designing digital
control systems. We must at minimum meet certain sampling rates and thus choose a
processor capable of meeting these specifications. Several guidelines are given here
for determining what sample time is required for different systems. In general, faster
is always better if the cost, bits of resolution, service, etc., are all the same. The only
disadvantage of faster sampling rates is amplifying noise when digital differentiation
is used. Since T is in the denominator and becomes very small, a small amount of
noise in the measured signals in the numerator causes very large errors. This can be
dealt with in different ways, so we still prefer the faster sample time.
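The amplification effect is easy to demonstrate numerically (my own sketch, not from the text): differentiating a unit-slope ramp corrupted by alternating measurement noise of magnitude eps with a backward difference produces a derivative error of 2*eps/T, which grows without bound as T shrinks.

```python
def diff_error(T, eps=0.001, n=100):
    """Worst-case error of the backward-difference derivative of a
    unit-slope ramp corrupted by alternating +/-eps noise."""
    x = [k*T + (eps if k % 2 == 0 else -eps) for k in range(n)]
    derivs = [(x[k] - x[k - 1]) / T for k in range(1, n)]
    return max(abs(d - 1.0) for d in derivs)   # true slope is 1

# Same noise level, faster sampling -> much larger derivative error.
print(diff_error(T=0.1))    # about 2*eps/T = 0.02
print(diff_error(T=0.001))  # about 2.0 -- the estimate is useless
```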
For first-order systems classified by a time constant, we would like to sample at
least 4-10 times per time constant. Since a time constant has units of time (seconds),
it is easy to determine what the sampling rate should be. For example, if our system
is primarily first order with a time constant of 0.2 sec, then we should have a
minimum sample rate of 20 Hz and preferably a sample rate greater than 50 Hz.

Second-order systems use the rise time as the period over which 4-10
samples are desired. Remember that these are minimums and, if possible, aim
for more frequent samples. In cases where we have a set of dominant closed
loop poles and thus a dominant natural frequency, we will find that sampling
at less than 10 times the natural frequency will no longer allow equivalence between
the continuous and sampled responses, and they diverge. In these cases direct
design of digital controllers is recommended. If we can sample at frequencies
greater than 20 times the natural frequency, we find that the digital controller
closely approximates the continuous equivalent. Since the system's natural fre-
quency is close to the bandwidth as measured on frequency response plots, the
same multipliers may be used with system bandwidth measurements. In most cases
where the sampling frequency is greater than 40 times that of the bandwidth or
natural frequency of our physical system, we can directly approximate our con-
tinuous system controller with good results.
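These guidelines can be captured in a small helper (hypothetical function names, my own sketch of the 4-10 samples-per-time-constant rule and the 10x/20x natural-frequency multipliers described above):

```python
def sample_rates_first_order(tau):
    """Return (minimum, preferred) sample rates in Hz for a first-order
    system with time constant tau, using 4 and 10 samples per tau."""
    return 4.0 / tau, 10.0 / tau

def sample_rates_natural_freq(fn_hz):
    """Return (minimum, preferred) sample rates in Hz for a dominant
    natural frequency fn_hz, using the 10x and 20x multipliers."""
    return 10.0 * fn_hz, 20.0 * fn_hz

# Text example: tau = 0.2 sec -> at least 20 Hz, preferably above 50 Hz.
print(sample_rates_first_order(0.2))   # (20.0, 50.0)
```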
One additional advantage should also be mentioned in regard to sampling
frequency: better disturbance rejection is found with shorter sampling times.
Physically this can be understood as limiting the amount of time a disturbance
input can act on the system before the controller detects it and takes appropriate
action.
Finally, the real challenge for the designer is determining what the significant
frequencies in our system are. The guidelines above are easily followed but all based
on the assumption that we know the properties of our physical system. Even if we
sample fast enough to exceed the recommendations relative to our primary
dynamics, it does not necessarily follow that we are sampling fast enough to control
all of our significant dynamics. A significant dynamic characteristic might be much
faster than the dominant system frequency, and yet if it contributes in such a way as
to significantly affect the final response of our system we may have problems.

7.6 PROBLEMS
7.1 List three advantages of a digital controller.
7.2 What are the primary components that must be added to implement a digital con-
troller?
7.3 List two advantages of using centralized controller configurations.
7.4 List two advantages of using distributed controller configurations.
7.5 List two primary distinctions of digital controllers (relative to analog control-
lers) that must be accounted for during the design process.
7.6 Describe one advantage and one disadvantage of using analog controllers as
the basis for the design of digital controllers.
7.7 If our signal contains a frequency component greater than the Nyquist fre-
quency, what is created in our sampled signal?
7.8 To minimize the effects of aliasing, it is common to use what component in our
design?
7.9 What guideline should we use regarding sample rate if we wish to convert an
existing analog controller into an equivalent digital representation and experience
good results?
7.10 A sampled output, C(z), is given in the z-domain. Use difference equations to
calculate the first five values sampled.

C(z) = 1/(z + 0.1)
7.11 A sampled output, C(z), is given in the z-domain. Use difference equations to
calculate the first 10 values sampled.

C(z) = 0.632z/((z - 1)(z^2 - 0.736z + 0.368))

7.12 Use the z transform to derive the difference equation approximation for the
function x(t) = te^(-at). Treat it as a free response (no forcing function) and leave the
coefficients of the difference equation in terms of a and T.
7.13 Use the continuous system transfer function and apply a ZOH, convert into the
z-domain, derive the difference equation, and calculate the first five values
(T = 0.5 sec) in response to a unit step input. Use partial fraction expansion if
necessary.

G(s) = (s + 3)/(s(s + 1)(s + 2))
7.14 Use the differential equation describing the motion of a mass-spring-damper
system and
a. Derive the continuous system transfer function.
b. Apply a ZOH and derive the discrete system transfer function.
c. Using T = 1 sec, write the difference equations from the discrete transfer
function, and solve for the first eight values when the input is a unit step.

d^2y/dt^2 + 5 dy/dt + 6y = r(t)
7.15 Develop the first five sampled values for a first-order system described as
having a system time constant equal to 2 sec. Assuming a sample time of 0.8 sec,
use the differentiation approximation, numerical integration, and z transforms to
develop a difference equation for each method. Use a table to calculate the first five
sampled values for each difference equation and compare the results. The outputs are
in response to a unit step input.
7.16 Set up and use a spreadsheet to solve problem 7.15.
7.17 Convert the discrete transfer function into the equivalent discrete state space
matrices.

G(z) = z/(z^2 - 2z + 1)
7.18 Use the discrete state space matrices and solve for the equivalent discrete
transfer function.

x(k + 1) = [1  T; 0  1]x(k) + [T^2/2; T]r(k)

y(k) = [1  0]x(k)

7.19 Using the difference equation describing the response of a physical system,
develop the equivalent discrete transfer function in the z-domain.

y(k) = 0.5y(k - 1) + 0.3r(k - 1)

7.20 Using the difference equation describing the response of a physical system,
develop the equivalent discrete transfer function in the z-domain.

y(k) = 0.5y(k - 1) - 0.3y(k - 2) + 0.2r(k)
8
Digital Control System Performance

8.1 OBJECTIVES
- To relate analog control system performance to digital control system
  performance.
- To demonstrate the effects and locations of digital components.
- To examine the effects of disturbances and command inputs on steady-state
  errors.
- To develop and define system stability in the digital domain.

8.2 INTRODUCTION
This chapter parallels Chapter 4 in defining the performance parameters for control
systems. The difference is that the parameters are examined in this chapter with
respect to digital control systems, not analog, as done earlier. By using the z trans-
form developed in the previous chapter, many of the same techniques can still be
applied. Block diagram operations are identical once the effects of sampling the
system are included, and the concept of stability on the z-plane has many parallels
to the concept of stability on the s-plane. The measurements of system performance,
since they still deal with the output of the physical system in response to either a
command or disturbance input, remain the same, and we have new definitions for the
final value theorem and initial value theorem for use with transfer functions in the z-
domain. An underlying theme is evident throughout the chapter; in addition to the
parameters that affected steady-state and transient performance in analog systems,
we now have the additional effects of quantization (finite resolution) and sampled
inputs and outputs that also affect the performance.

8.3 FEEDBACK SYSTEM CHARACTERISTICS


As with the analog systems, steady-state errors and transient response characteristics
are the primary tools to quantify control system performance. Although the defini-
tions remain the same, we have additional characteristics inherent in our digital
devices that must now be accounted for during the design and operation phases.


8.3.1 Open Loop Versus Closed Loop


In both open loop and closed loop systems with digital controllers, we must modify
the model to include the zero-order hold (ZOH) effects. Instead of receiving smooth
analog signals, the system receives a series of small steps from the digitization pro-
cess. The magnitude of each step is never less than the discrete levels determined by
the number of bits used in the conversion process. It may obviously be much larger, since
step inputs and other commands can cause the output to jump many discrete levels in
one sample period. An open loop and closed loop system, including a digital con-
troller and ZOH, are shown in Figure 1. For both the open and closed loop systems
the number and location of the AD and DA (analog to digital and digital to analog)
converters may vary. Their purpose is to allow the analog signals of the physical
system to interact with the digital signals in the microprocessor. If we generate the
command to the system internal to the microprocessor, then the first AD converter
for either system is not required. Likewise, if the output of the system is actuated by
a digital signal (i.e., stepper motor or PWM) or if the sensor output is digital (i.e.,
encoder), then the output from the computer or the feedback path does not require a
DA converter. In general, when we add a digital controller to an existing analog
system we will require the sampling devices as shown in Figure 1.
The properties, advantages, and disadvantages of open loop versus closed loop
controllers are the same as with the equivalent continuous system models. The
differences are the quantization and sampling effects.
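The quantization effect is easy to sketch numerically. The toy converter below (my own illustration, assuming an ideal converter with a hypothetical 0-10 V range) shows that analog changes smaller than one least-significant bit simply vanish after conversion.

```python
def quantize(value, bits, vmin=0.0, vmax=10.0):
    """Quantize an analog value to the nearest of 2**bits discrete levels."""
    levels = 2 ** bits
    q = (vmax - vmin) / (levels - 1)    # one LSB step
    code = round((value - vmin) / q)    # nearest digital code
    return vmin + code * q

# An 8-bit converter over 0-10 V has a step of about 39 mV; a 10 mV
# analog change disappears after conversion.
print(quantize(5.000, 8))
print(quantize(5.010, 8) == quantize(5.000, 8))   # True: below one LSB
```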
To analyze the systems for either transient or steady-state performance, we
follow the same procedures learned earlier and just substitute the appropriate ZOH
model developed in the previous chapter. With the open loop system, the result is the
final transfer function in the z-domain relating the input to the output. For the
closed loop system we must first close the loop and derive the new transfer function
for the overall system, now including the effects of the ZOH. Figure 2 illustrates the
process of including the samplers and ZOH to obtain transfer functions in the z-
domain. In place of the actual AD and DA converters, we place the samplers and
ZOH models developed in the preceding chapter. This allows us to close the loop and

Figure 1 General open and closed loop digital controller diagrams.



Figure 2 Block diagram representations of digital controller components.

develop the discrete transfer function that includes the effects of the sample time. To
simplify the procedure, we can take each sampler (on the command and feedback
paths) and, since the samples occur at the same time, move them past the summing
junction and represent them by a single sampler. Physically, this results in the same
sampled error because we get the same error whether we sample each signal sepa-
rately and then calculate the error or whether we sample the error signal itself.
Using the single sampler and ZOH now allows us to substitute the ZOH model
into the block diagram (s-domain) and, along with the physical system model, con-
vert from the s-domain to the z-domain as shown in Figure 3. As we see in the block
diagram, and remembering our model of the ZOH, the effects of the sampler are
included in the ZOH since it is dependent on the sample time, T. The result is a single
closed loop transfer function, but in the z-domain and including the effects of our
digital components. Now we can apply similar analyses to determine steady-state
error and transient response.
Remember from the previous chapter that the equations modeled by the dis-
crete block diagrams are now difference equations as opposed to continuous func-
tions (differential equations). Figure 4 gives several common discrete block diagram
components and the equations that they represent. They perform the same role as in
our analog systems, only they rely on discrete sets of data, represented as difference
equations.
The reduction of block diagrams in the z-domain is very similar to the reduc-
tion of block diagrams in the s-domain. The primary difference is locating and
modeling the ZOHs in the system. One item must be mentioned since it is not
obvious based on our knowledge of continuous systems. Figure 5 illustrates the
problem when we attempt to take two continuous systems, separately convert
each into sampled systems, and then multiply to obtain the total sampled input-
output relationship. As is clear in the figure, Ga ≠ Gb because an additional sampler
is assumed in Gb even though the output of G1 and the input of G2 are continuous.

Figure 3 Block diagram reduction of digital controller and physical system.

In other words, the input of G2 is not sampled since it is based directly on the
continuous output signal of G1.
In general terms, then,

Z{G1(s)G2(s)} ≠ Z{G1(s)} · Z{G2(s)}

When we model discrete and continuous systems, we must be aware of where the
samplers are in the system and treat them accordingly. If a sampler exists on the input
and output, the z transform applies to all blocks between the two samplers.
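A concrete check of this inequality (my own example, not from the text): with G1(s) = G2(s) = 1/s, the standard transform pairs give Z{G1(s)G2(s)} = Z{1/s^2} = Tz/(z - 1)^2, while Z{G1(s)} · Z{G2(s)} = z^2/(z - 1)^2. Evaluating both at a test point makes the difference obvious.

```python
# Z{1/s} = z/(z-1) and Z{1/s^2} = T z/(z-1)^2 (standard z-transform pairs).
T = 1.0

def z_of_product(z):    # Z{G1(s)G2(s)} with G1 = G2 = 1/s
    return T * z / (z - 1.0)**2

def product_of_z(z):    # Z{G1(s)} * Z{G2(s)}
    return (z / (z - 1.0))**2

z = 2.0
print(z_of_product(z), product_of_z(z))   # 2.0 versus 4.0 -- not equal
```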
We can show how this works by reducing the block diagram given in Figure 6.
Relative to the forward path, only G(s) is between two samplers and needs to have the
z transform applied accordingly. When we consider the complete loop, then G(s) and

Figure 4 Discrete block diagram components and difference equations.



Figure 5 Multiplying transfer functions in z differences.

H(s) are between two samplers and the transform should take this into account.
When we apply the ZOH to the forward path and total loop as described, we get
the following transfer function in the z-domain:

C(z)/R(z) = D(z)(1 - z^-1)Z{G(s)/s} / (1 + D(z)(1 - z^-1)Z{G(s)H(s)/s})

Once we have the closed loop transfer function we have many options. If we
want the time response of the system, we can write the difference equations from the
discrete transfer function and calculate the output values at each sample time as done
in the previous chapter. Also, as demonstrated in the remainder of this chapter, we
can use the final value theorem (FVT) (in terms of z) to find the steady-state error or
develop a root locus plot to aid in the design of the controller or to predict the
transient response characteristics.
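As a sketch of the first option (my own worked case, not from the text): for the plant G(s) = 1/s under a ZOH with unity feedback H(s) = 1 and controller D(z) = 1, the formula above reduces to C(z)/R(z) = T/(z - 1 + T), whose difference equation c(k + 1) = (1 - T)c(k) + Tr(k) can be iterated directly.

```python
# Closed-loop step response of G(s) = 1/s under a ZOH with unity feedback:
# C(z)/R(z) = T/(z - 1 + T)  ->  c(k+1) = (1 - T) c(k) + T r(k)
def closed_loop_step(T, n):
    c, out = 0.0, []
    for _ in range(n):
        out.append(c)
        c = (1.0 - T) * c + T * 1.0   # unit step input r(k) = 1
    return out

resp = closed_loop_step(T=0.5, n=20)
print(resp[:4])   # 0.0, 0.5, 0.75, 0.875 -- converging toward 1
print(resp[-1])
```

The single closed loop pole sits at z = 1 - T, so this loop is stable only for 0 < T < 2: a first taste of how the sample time itself moves poles in the z-plane.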

8.3.2 Disturbance Inputs


One problem unique to digital systems occurs when we attempt to close the loop
relative to a disturbance input, since the input acts directly on the continuous portion
of the system. This section examines block diagram reduction techniques when dis-
turbance inputs are added to our model. To begin, let us add a disturbance input to
our system model as shown in Figure 7. We can begin to simplify the block diagram
by setting R(s) = 0 (as in linear analog systems) and rearranging the block diagram
to that shown in Figure 8.
Now, let us write the equations from the block diagram and try to relate the
system output to the disturbance input. First, write the equation for the output C(s),
accounting for the summing junction and blocks:

C(s) = G2(s)[D(s) - Gc(z)ZOH·G1(s)C*(s)]

Figure 6 Sampled block diagram with sensor in feedback.



Figure 7 Digital controller with disturbance input.

At this point we still need to differentiate between the sampled output, C*(s), and the
continuous output, C(s). If we move the sampler before the feedback path and look
at the sampled output, C(z), then we can collect terms and solve for the output, C(z),
as

C(z) = Z{G2(s)D(s)} / (1 + Gc(z)(1 - z^-1)Z{G1(s)G2(s)/s})

The interesting difference, as compared to analog systems, is that once we add the
disturbance we can no longer reduce the block diagram to a single discrete transfer
function. We cannot transform G2(s) and D(s) independently of each other since
there is not a sampler between them. This limits us in trying to solve for C(z) since
a portion of the equation remains dependent on D(s). If we define the
disturbance input in the s-domain and multiply it with G2(s) before we take the z
transform, we can solve for the sampled system response to a disturbance. This is a
general problem whenever we have an analog input acting directly on some portion
of our system without including a sampler.

8.3.3 Steady-State Errors


Using the techniques presented in earlier chapters on analog systems, it may or may
not be possible to reduce the system to a single transfer function representing the
sampled inputs and outputs. If we can reduce the system to a single transfer function,
we can use our knowledge of the relationship between s and z (z = e^(sT)) to apply a
modified form of the FVT and initial value theorem (IVT). Remembering that z =
e^(sT) and s -> 0 for the continuous system FVT, it is easy to see that now z -> 1 for the
discrete system. Using the same transform between s and z, then for the continuous
system IVT, s -> infinity and so also will z -> infinity for the discrete IVT. This leads to the
equivalent IVT and FVT for discrete systems, given as follows:

Figure 8 Simplied sampled block diagram with disturbance.



FVT (z-domain):  y(k -> infinity) = yss = lim[z -> 1] ((z - 1)/z) Y(z)
IVT (z-domain):  y(0) = y0 = lim[|z| -> infinity] Y(z)

Now the same procedures learned earlier can be used to determine the steady-state
error from different controllers.

EXAMPLE 8.1
Using the continuous system transfer function, find the initial and final values using
the discrete forms of the IVT and FVT. Assume a unit step input and a sample time
equal to 0.1 sec.

G(s) = 6/(s^2 + 4s + 8)
The first thing we must do is convert from the continuous domain to the discrete,
sampled, domain. Write G(s) in the form

G(s) = 6/(s^2 + 4s + 8) = (6/8) · 8/((s + 2)^2 + 4) = (6/8) · (a^2 + b^2)/((s + a)^2 + b^2)

And apply the ZOH:

G(z) = (6/8) · ((z - 1)/z) · Z{(1/s) · 8/((s + 2)^2 + 4)}

Now we can use the tables in Appendix B, where a = 2 and b = 2, and the transform
for the portion inside the brackets is

z(Az + B) / ((z - 1)(z^2 - 2ze^(-aT)cos(bT) + e^(-2aT)))

where

A = 1 - e^(-aT)cos(bT) - (a/b)e^(-aT)sin(bT)
B = e^(-2aT) + (a/b)e^(-aT)sin(bT) - e^(-aT)cos(bT)

Recognizing that z/(z - 1) cancels with the portion of the ZOH outside of the trans-
form, including the 6/8 factor, and substituting in for a, b, and T results in

G(z) = (0.026z + 0.023)/(z^2 - 1.605z + 0.670)
This is now the discrete transfer function approximation of the continuous system
transfer function. To find the initial and final values, we can apply the discrete forms
of the IVT and FVT. For both cases we need to add the step input since we have only
derived the discrete transfer function, G(z), not the system output Y(z). In discrete
form the unit step input is simply

Discrete unit step = Z{1/s} = z/(z - 1)

To get the initial value, multiply G(z) by the step input and let z approach infinity:

y(0) = y0 = lim[|z| -> infinity] Y(z) = lim[|z| -> infinity] (0.026z^2 + 0.023z)/((z - 1)(z^2 - 1.605z + 0.670)) = 0

With the discrete FVT the step input and the term included with the theorem
cancel, as they did with the continuous form of the theorem. For a unit step input
only, we can thus simply let z approach unity in the discrete transfer function to
solve for the final value of the system:

y(k -> infinity) = yss = lim[z -> 1] (0.026z + 0.023)/(z^2 - 1.605z + 0.670) = 0.754

We know from the original transfer function in the s-domain that the final value does
approach 6/8, or 0.75. In the discrete form we introduce some round-off errors,
although minor in terms of what we are trying to accomplish in control system
design.
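The hand calculation can be verified in a few lines of plain Python using the table formulas with a = 2, b = 2, and T = 0.1 (my own check, not part of the text):

```python
import math

a, b, T = 2.0, 2.0, 0.1
eaT = math.exp(-a*T)

# Numerator coefficients from the transform table, scaled by 6/8:
A = 1 - eaT*math.cos(b*T) - (a/b)*eaT*math.sin(b*T)
B = math.exp(-2*a*T) + (a/b)*eaT*math.sin(b*T) - eaT*math.cos(b*T)
num = [6/8*A, 6/8*B]                                   # ~[0.0262, 0.0229]

# Denominator z^2 - 2 e^{-aT} cos(bT) z + e^{-2aT}:
den = [1.0, -2*eaT*math.cos(b*T), math.exp(-2*a*T)]    # ~[1, -1.605, 0.670]

# Discrete FVT for a unit step: let z -> 1 in G(z).
yss = sum(num) / sum(den)
print(num, den, yss)
```

With unrounded coefficients the limit comes out to 6/8 = 0.75 essentially exactly (the ZOH discretization preserves the step steady state); the 0.754 above reflects carrying only two significant figures in the coefficients.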

EXAMPLE 8.2
Using the continuous system transfer function, find the discrete initial and final
values. Assume a unit step input and a sample time equal to 0.1 sec. Use
Matlab to perform the conversion and plot the resulting step response to find the
discrete initial and final values.

G(s) = 6/(s^2 + 4s + 8)

Matlab allows us to define the continuous system transfer function, designate the
sample time and desired model of the sampling device, and it then develops the
equivalent discrete transfer function. The commands used in this example are given as
% Program to verify IVT and FVT
% using z-domain transfer function

sysc = tf(6,[1 4 8])        % Make LTI TF in s

sysz = c2d(sysc,0.1,'zoh')  % Convert to discrete TF using ZOH and sample time

disp('Press any key to generate step response plot')

pause;
step(sysc,sysz)             % Plot the step response of the continuous and sampled systems

When these commands are executed, Matlab returns the following discrete transfer
function:

G(z) = (0.0262z + 0.02292)/(z^2 - 1.605z + 0.6703)

which is identical to the one developed manually in the previous example. The resulting
step responses of the continuous and discrete systems are given in Figure 9.
In conclusion, with discrete systems the FVT and IVT still apply and can be
used to determine the final and initial values of a system. The procedure learned with
analog systems is used also for digital systems, with two exceptions. First, when we
close the loop we must be careful where we apply the samplers and ZOH effects. The
z transform is only applied between two samplers. Second, when we have inputs
acting directly on the continuous portion of our physical system (i.e., disturbances),

Figure 9 Example: Matlab step responses of continuous and discrete equivalent systems.

we cannot close the loop and solve for the closed-form transfer function without
knowing what the disturbance input is, since it is included in the z transform and is not
a sampled input.

8.4 FEEDBACK SYSTEM STABILITY


Until now, we have used difference equations to simulate the response of discrete
(sampled) systems. While this is an easy method for finding the response of sampled
systems, it is more limited when used during the actual design of the control system.
We would prefer to design digital systems using root locus tools as performed with
analog systems. What is presented in this section is the background material that
allows us to use root locus design tools, similar to the analog methods, to design
digital systems (using the z transform and discrete transfer functions).

As several examples have shown, it is easy to represent difference equations as
transfer functions in the z-domain and vice versa; any transfer function in z can easily
be converted to a difference equation by recognizing that z^-1 is a delay of one sample
period. Since the transfer function contains the same information, we can use it
directly to calculate the sampled response without explicitly writing the equivalent
difference equations. There are several other methods that allow us to do this. The first is
long division, where the numerator is divided by the denominator and the response at
each sample time is calculated. This does not require recursive solutions but is very
computationally intensive, especially for larger systems. Second, it is possible to use
the z transform tables (Appendix B) to invert the transfer function back into the time
domain. Again, this becomes very labor intensive for all but the simplest of systems.
Finally, we can calculate the poles and zeros of the transfer function, as done in the s-
domain, and estimate the response as the sum of individual linear first- and second-
order responses. This is also the method used with root locus to design different
controllers to meet specific performance requirements. The difference equation

method is still used often when verifying the final design since it is easy to obtain and is
easily programmed into computers or calculators to obtain responses. Spreadsheets
can easily be configured to solve for and plot sampled responses.
For the times when we want to know how the response is affected by different
parameters changing, and we would rather not be required to calculate the difference
equation for each case, we use the root locus plots. Fortunately, as the root locus
plots allowed us to predict system performance and design controllers in the con-
tinuous realm, the same techniques apply for discrete systems. Since the z-domain is
simply a mapping from the s-domain where z = e^(sT), we can apply the same rules but
to the different boundaries determined by the mapping between s and z. In other
words, when we close the loop the same magnitude and angle conditions are still
required to be met when the root locus plot is developed. This leads to us using the
identical rules as presented for analog systems represented in the s-domain.

To begin our discussion of stability using root locus techniques, let us see where
our original stable region in the s-plane occurs when we apply the transform to get
into the corresponding z-plane. The method is quite simple; since we know what
conditions are required in the s-plane for stability, apply those values of s into the
transform in the z-domain and see what shape and area the new stability region is
defined by. Our original pole locations in continuous systems were expressed as
having (possibly) a real and imaginary component, where

s = σ ± jω

Then substituting s into z = e^(sT) results in

z = e^((σ + jω)T) = e^(σT) · e^(jωT)

Knowing that the system is stable whenever s has a negative real part, σ < 0,
and is marginally stable when σ = 0, we can determine the corresponding stability
region in the z-plane. If σ = 0, regardless of jωT (oscillating component), then
|z| = e^0 = 1, defining the boundary of the equivalent stability region in the z-domain.
All points that have a constant magnitude of one relative to the origin are simply
those defining a unit circle centered on the origin. When σ < 0, e^(σT) is always less
than 1 and approaches 1 as σ approaches zero from the left (negative side).
Therefore it is the area inside the unit circle that defines a stable system
and the circle itself defines the marginally stable border. We can determine additional
properties by holding all parameters constant except for one, varying the one in
question, and mapping the resulting contour lines. When this is done, the z-plane
stability regions and response characteristics can be found with respect to lines of
constant damping ratio and natural frequency. The contours of constant natural
frequency and damping ratio are shown in Figure 10.
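The mapping is easy to compute directly. The sketch below (plain Python, my own illustration) maps the poles of Example 8.1's G(s) = 6/(s^2 + 4s + 8) into the z-plane for T = 0.1 sec and confirms they land inside the unit circle.

```python
import cmath

def s_to_z(s, T):
    """Map an s-plane pole to the z-plane via z = e^{sT}."""
    return cmath.exp(s * T)

T = 0.1
poles_s = [complex(-2, 2), complex(-2, -2)]   # poles of 6/(s^2 + 4s + 8)
poles_z = [s_to_z(p, T) for p in poles_s]

# Negative real part in s -> magnitude below 1 in z (stable).
for p in poles_z:
    print(p, abs(p))   # |z| = e^{-0.2}, about 0.819 < 1
```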
Any system inside the unit circle will be stable, and the unit circle itself repre-
sents where the damping ratio approaches zero (marginal stability). What is inter-
esting in the z-plane is the added effect of sample time. By changing the sample time
we actually make the poles move on the z-plane. In fact, as the sample period
becomes too long, the system generally migrates outside of the unit circle, thus
becoming unstable. Knowing the natural frequency and damping ratio contour
lines is not as helpful in the z-plane, since their shape excludes the option of an
easy graphical analysis unless special grid paper is used. However, most programs
like Matlab can overlay the locus plot with the grid and thus enable the same

Figure 10 Contours of ωn and ζ in the z-plane.

controller design techniques learned about with continuous system design methods.
Several observations can be made about z-plane locations:

- The stability boundary is the unit circle, |z| = 1.
- In general, damping decreases from 1 on the positive real axis to 0 on the
  unit circle the farther out radially we go.
- The location z = 1 corresponds to s = 0 in the s-plane.
- Horizontal lines in the s-plane (constant ωd) map into radial lines in the
  z-plane.
- Vertical lines in the s-plane (constant decay exponent σ, or 1/τ) map into
  circles within the unit circle in the z-plane.
As done with analog systems in Figure 9 of Chapter 3, we can show how responses
differ, depending on pole locations in the z-plane, as demonstrated in Figure 11.

Figure 11 Transient responses and z-plane locations.


354 Chapter 8

Finally, since we can relate transient response characteristics to pole location in
the z-plane, we are ready to design and simulate digital controllers using the methods
presented for the s-plane. The rules for developing the loci paths are identical
whether in the s-plane or z-plane, so the skills required for designing digital
controllers using root locus plots are identical to those we learned earlier when
designing continuous systems. For review, the summaries of the rules, initially
defined in Section 4.4.2, are repeated here in Table 1.
This chapter concludes with several examples to illustrate the use of root locus
techniques and z-domain transfer functions for determining the dynamic response of
sampled (discrete) systems.

Table 1 Guidelines for Constructing Root Locus Plots

1 From the open loop transfer function, G(z)H(z), factor the numerator and
denominator to locate the zeros and poles in the system.
2 Locate the n poles on the z-plane using ×'s. Each loci path begins at a pole; hence
the number of paths is equal to the number of poles, n.
3 Locate the m zeros on the z-plane using o's. Each loci path will end at a zero, if
available; the extra paths are asymptotes and head toward infinity. The number
of asymptotes therefore equals n − m.
4 To meet the angle condition, the asymptotes will have these angles from the positive
real axis:
If one asymptote, the angle is 180 degrees
Two asymptotes, angles of 90 and 270 degrees
Three asymptotes, angles of ±60 and 180 degrees
Four asymptotes, angles of ±45 and ±135 degrees
5 The asymptotes intersect the real axis at the same point. The point, σ, is found by
σ = [(sum of the poles) − (sum of the zeros)] / (number of asymptotes)
6 The loci paths include all portions of the real axis that are to the left of an odd
number of poles and zeros (complex conjugates cancel each other).
7 When two loci approach a common point on the real axis, they split away from or
join the axis at an angle of ±90 degrees. The break-away/break-in points are
found by solving the characteristic equation for K, taking the derivative w.r.t. z,
and setting dK/dz = 0. The roots of dK/dz = 0 occurring on valid sections of the
real axis are the break points.
8 Departure angles from complex poles or arrival angles to complex zeros can be
found by applying the angle condition to a test point in the vicinity of the root.
9 Locating the point(s) where the root loci path(s) cross the unit circle and applying
the magnitude condition finds the point(s) at which the system becomes unstable.
10 The system gain K can be found by picking the pole locations on the loci path that
correspond to the desired transient response and applying the magnitude
condition to solve for K. When K = 0 the poles start at the open loop poles; as
K → ∞ the poles approach available zeros or asymptotes.
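Rules 4 and 5 of Table 1 are easy to mechanize. A sketch (Python here, rather than the Matlab used elsewhere in the chapter; the pole and zero values below are arbitrary illustrations) that computes the asymptote count, angles, and real-axis intersection σ:

```python
def asymptote_data(poles, zeros):
    """Root locus asymptote count, angles (degrees), and real-axis
    intersection sigma, per rules 4 and 5 of Table 1."""
    n, m = len(poles), len(zeros)
    count = n - m
    # Angle condition: angles are (2q + 1) * 180 / (n - m), q = 0 .. n-m-1
    angles = [(2 * q + 1) * 180.0 / count for q in range(count)]
    sigma = (sum(poles) - sum(zeros)) / count
    return count, angles, sigma

# Arbitrary example: three poles, one zero -> two asymptotes
print(asymptote_data([0.0, -1.0, -4.0], [-2.0]))  # (2, [90.0, 270.0], -1.5)
```

For two asymptotes this reproduces the 90 and 270 degree angles listed in rule 4.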

EXAMPLE 8.3
Convert the continuous system transfer function into the discrete equivalent using
the ZOH approximation. Determine the poles and zero when:
a. Sample time, T = 0.1 s
b. Sample time, T = 10 s
Comment on the system stability between the two cases.

G(s) = 4 / [s(s + 4)]
To convert from the continuous into the discrete domain we need to apply the ZOH
and take the z transform:

G(z) = [(z − 1)/z] Z{4 / [s²(s + 4)]}

Letting a = 4 and using the z transform from the table:

G(z) = [(z − 1)/z] Z{a / [s²(s + a)]}
     = [(z − 1)/z] · z[(aT − 1 + e^(−aT))z + (1 − e^(−aT) − aTe^(−aT))] / [a(z − 1)²(z − e^(−aT))]

And finally, we can simplify the terms:

G(z) = [(aT − 1 + e^(−aT))z + (1 − e^(−aT) − aTe^(−aT))] / [a(z − 1)(z − e^(−aT))]

To find the first discrete transfer function, let a = 4 and T = 0.1:

G(z) = (0.0176z + 0.0154) / (z² − 1.6703z + 0.6703)

Poles: 1, 0.6703; zero: −0.8753. For the second discrete transfer function, let a = 4
and T = 10:

G(z) = (9.75z + 0.25) / (z² − z)

Poles: 1, 0; zero: −0.0256.
It is interesting to note how the poles change as a function of our sample time
even though our physical system model has not changed. Additionally, in both cases,
the pole at the origin of the s-plane (integrator) maps into the z = 1 point of similar
marginal stability in the z-domain. The next example will demonstrate the construc-
tion of a root locus plot in the z-domain.
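The closed-form result above is easy to check numerically. A sketch (Python here rather than the chapter's Matlab) that evaluates the simplified G(z) expression for both sample times:

```python
import math

def zoh_equivalent(a, T):
    """Coefficients of the ZOH equivalent of G(s) = a/(s(s + a)):
    G(z) = (b1 z + b0) / ((z - 1)(z - e^(-aT))), from the closed-form result."""
    e = math.exp(-a * T)
    b1 = (a * T - 1 + e) / a
    b0 = (1 - e - a * T * e) / a
    return (b1, b0), (1.0, -(1.0 + e), e)  # numerator, expanded denominator

for T in (0.1, 10.0):
    (b1, b0), den = zoh_equivalent(4.0, T)
    print(f"T = {T}: G(z) = ({b1:.4f} z + {b0:.4f}) / "
          f"(z^2 + ({den[1]:.4f}) z + {den[2]:.4f}), zero at {-b0 / b1:.4f}")
```

Both cases agree with the hand calculation, including the zero migrating from −0.8753 to −0.0256 as the sample time grows.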

EXAMPLE 8.4
Develop the root locus plot for the system given in Figure 12, when the sample time
is T = 0.5 sec. Describe the range of responses that will occur and compare them
with the results obtained when the system is implemented using an analog controller
instead of a digital controller.

Figure 12 Example: physical system model with ZOH and sampler.

To develop the root locus plot, we first need to derive the discrete transfer
function for the system. To do so we apply the ZOH model to the continuous system
and take the z transform of the system. With the ZOH the system becomes

G(z) = [(z − 1)/z] Z{4 / [s(s + 2)²]}

Letting a = 2 and using the z transform from the table:

G(z) = [(z − 1)/z] Z{a² / [s(s + a)²]}
     = [(z − 1)/z] · z[(1 − e^(−aT) − aTe^(−aT))z + (e^(−2aT) − e^(−aT) + aTe^(−aT))] / [(z − 1)(z − e^(−aT))²]

And finally, we can simplify the terms:

G(z) = [(1 − e^(−aT) − aTe^(−aT))z + (e^(−2aT) − e^(−aT) + aTe^(−aT))] / (z − e^(−aT))²

To find the discrete transfer function, let a = 2 and T = 0.5:

G(z) = (0.264z + 0.135) / (z² − 0.736z + 0.135)

Poles: 0.368, 0.368; zero: −0.512.
Using the open loop discrete transfer function, let us now apply the same root
locus plotting guidelines and draw the root locus plot in the z-domain.
Step 1: The transfer function is already factored; there is one zero and two
poles. The poles are repeated at z = 0.368 and the zero is located at
z = −0.512.
Steps 2 and 3: Locate the poles and zero on the z-plane (using ×'s and o's) as
shown in Figure 13.
Step 4: Since we have two poles and one zero, n − m = 2 − 1 = 1, and we
have one asymptote. For one asymptote, the angle relative to the positive
real axis is 180 degrees.
Step 5: The negative real axis is the asymptote and there is no intersection
point.
Step 6: The section on the real axis to the left of the zero is the only valid
portion on the real axis. For our example this is everything to the left of
−0.512.
Step 7: There is one break-away point which coincides with the two poles since
they must immediately leave the real axis as we begin to increase the gain K.

Figure 13 Example: locating poles and zeros in the z-plane.

There is also one break-in point that can be found by solving the characteristic
equation for K and taking the derivative with respect to z. The characteristic
equation is found by closing the loop and results in the following polynomial:

1 + K (0.264z + 0.135) / (z² − 0.736z + 0.135) = 0

K = −(z² − 0.736z + 0.135) / (0.264z + 0.135)

Taking the derivative with respect to z:

dK/dz = −(0.264z² + 0.270z − 0.135) / (0.264z + 0.135)²

To solve for the break-in point we set the numerator to zero and find the
roots:

z = −1.39 and 0.368

One root lies on the valid break-in section of the real axis while, as expected,
the second root coincides with the location of the two real poles and the
corresponding break-away point. Thus the break-in point is at z ≈ −1.4.
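The break-point calculation can be automated with the quotient rule. A sketch (Python with numpy, in place of the chapter's Matlab), assuming G(z) = (0.264z + 0.135)/(z² − 0.736z + 0.135):

```python
import numpy as np

num = np.array([0.264, 0.135])        # numerator of G(z)
den = np.array([1.0, -0.736, 0.135])  # denominator of G(z)

# From 1 + K G(z) = 0, K = -den(z)/num(z); dK/dz = 0 where
# den'(z) num(z) - den(z) num'(z) = 0 (the quotient-rule numerator).
break_poly = np.polysub(np.polymul(np.polyder(den), num),
                        np.polymul(den, np.polyder(num)))
print(np.round(np.roots(break_poly), 3))  # break-away/break-in candidates
```

The two roots reproduce the break-away point at the repeated poles and the break-in point near −1.4.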
Step 8: The angles of departure are ±90 degrees when the loci paths leave the
real axis (at the two repeated poles). We can now plot our final root locus
plot as shown in Figure 14.
Step 9: The system becomes unstable when the root loci paths leave the unit
circle. Although one pole does return at higher gains, there is always one
pole (on asymptote) that remains unstable. If we wished to determine at
what gain the system crosses the unit circle, we can either use the magnitude
condition and apply it to the intersection point of the root loci paths and the
unit circle or we could close the loop and solve for the gain K that causes the
magnitude to become greater than 1 (square root of the sum of the real and
imaginary components squared).

Figure 14 Example: root locus plot for discrete second-order system.

Step 10: A similar procedure as that described in step 9 can be used to solve for
the gain that results in the desired response characteristics. As with analog
systems in the s-domain we can use the performance specifications to
determine desired pole locations in the z-plane. The difficulty, however, is
that now the lines of constant damping ratio and natural frequency are
nonlinear and to use a graphical method we must overlay our root locus plot
with a grid showing these lines (see Figure 10). As the next example
demonstrates, this task is much easier to accomplish using a software design
tool such as Matlab.
To conclude this example, let us plot the root locus plot in the s-domain
assuming that we have an analog controller as in earlier chapters. Then we can
compare the differences regarding stability between continuous (analog) and discrete
(sampled) systems. Referring back to our original system described by the block
diagram in Figure 12 and only including our original continuous system (ignoring
the ZOH and sampler), we see that we have a second-order system with two repeated
poles at s = −2 and no zeros. The root locus plot is straightforward with two
asymptotes that intersect the axis at s = −2, no valid sections of real axis, and the
poles immediately leaving the real axis and traveling along the asymptotes. The
continuous system root locus plot is shown in Figure 15.
Comparing Figure 14 and Figure 15 allows us to see an important distinction
between analog and digital controllers. Whereas the analog only system never goes
completely unstable (crosses into the RHP), the sampled (digital) system now leaves
the unit circle in the z-plane and will become unstable as gain is increased. Adding
the digital ZOH and sampler tends to decrease the stability in our system (it always
adds a lag, as noted earlier) and stable systems in the continuous domain may
become unstable when their inputs and outputs are sampled digitally.
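This stability difference can be seen numerically. A sketch (Python with numpy; the closed-loop characteristic polynomial below follows from G(z) = (0.264z + 0.135)/(z² − 0.736z + 0.135) with a proportional gain K) that sweeps K until a pole leaves the unit circle:

```python
import numpy as np

def max_pole_mag(K):
    # 1 + K G(z) = 0  ->  z^2 + (-0.736 + 0.264 K) z + (0.135 + 0.135 K) = 0
    return max(abs(np.roots([1.0, -0.736 + 0.264 * K, 0.135 + 0.135 * K])))

K = 0.0
while max_pole_mag(K) < 1.0 and K < 20.0:
    K += 0.01
print(f"sampled system marginally stable near K = {K:.2f}")
```

The sweep stops near K ≈ 6.4. The continuous loop, by contrast, has characteristic equation (s + 2)² + 4K = 0, whose roots s = −2 ± j2√K keep a real part of −2 for every K, matching Figure 15.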

EXAMPLE 8.5
Using Matlab, develop the root locus plot for the system given in Figure 16 when the
sample time is T = 0.5 sec. Tune the system to have a damping ratio equal to 0.7 and
plot the response of the closed loop feedback system when the input is a unit step.

Figure 15 Example: equivalent continuous system root locus plot for comparison (second
order).

As demonstrated with earlier analog examples, tools such as Matlab provide
many features to help us design control systems. This example uses Matlab to apply
the ZOH and convert the system into an equivalent discrete transfer function, draw
the root locus plot, overlay the plot with lines of constant damping ratio and natural
frequency, and solve for the gain K resulting in a damping ratio ζ = 0.7. This setting
is verified using Matlab to generate the sampled output of the system responding to a
unit step input. The commands used to perform these tasks are listed here.
%Program to draw root locus plot
%using z-domain transfer function
sysc=tf(4,[1 4 4]) %Make LTI TF in s
sysz=c2d(sysc,0.5,'zoh') %Convert to discrete TF using ZOH and sample time
rlocus(sysz); %Draw the root locus plot
zgrid; %Add lines of constant damping ratio and natural frequency
K=rlocfind(sysz) %Solve for K where zeta = 0.7
syscl=feedback(K*sysc,1); %Close the loops for each system
syszl=feedback(K*sysz,1);
figure;
step(syscl,syszl) %Plot the CLTF step responses of the continuous and sampled systems

Executing these commands gives us our discrete transfer function as

G(z) = (0.264z + 0.135) / (z² − 0.736z + 0.135)

Figure 16 Example: physical system model with ZOH and sampler.



Figure 17 Example: Matlab discrete root locus plot and grid overlay.

This agrees with our result from the previous example. The corresponding root locus
plot as generated by Matlab is given in Figure 17.
Placing the crosshairs where the root locus plot crosses the line of damping
ratio equal to 0.7 returns our gain K = 0.45. Now we can close the loop with K and
use Matlab to generate the step responses for the continuous and sampled models.
This comparison is given in Figure 18.

Figure 18 Example: Matlab comparison of continuous and sampled step responses.

From the discrete step response plot we see that we reach a peak value of 0.32
and a steady-state value of 0.31, corresponding to a 4% overshoot. This agrees very
well with the expected percent overshoot from a system with a damping ratio equal
to 0.7.
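These values can be reproduced with a short difference-equation simulation of the closed loop. A sketch (Python instead of Matlab), assuming the unity-feedback closed-loop pulse transfer function KG(z)/[1 + KG(z)] with K = 0.45 and G(z) = (0.264z + 0.135)/(z² − 0.736z + 0.135):

```python
K = 0.45
b0, b1 = K * 0.264, K * 0.135     # closed-loop numerator coefficients
a1, a2 = -0.736 + b0, 0.135 + b1  # closed-loop denominator coefficients

# y(k) = -a1 y(k-1) - a2 y(k-2) + b0 r(k-1) + b1 r(k-2), unit step at k = 0
y = []
for k in range(60):
    ym1 = y[k - 1] if k >= 1 else 0.0
    ym2 = y[k - 2] if k >= 2 else 0.0
    rm1 = 1.0 if k >= 1 else 0.0
    rm2 = 1.0 if k >= 2 else 0.0
    y.append(-a1 * ym1 - a2 * ym2 + b0 * rm1 + b1 * rm2)

print(round(max(y), 2), round(y[-1], 2))  # peak 0.32, steady state 0.31
```

Since G(1) = 1, the steady-state value is simply K/(1 + K) = 0.45/1.45 ≈ 0.31, confirming the FVT result.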
In conclusion, we see that the stability of systems that are sampled and include
a ZOH is reduced when compared with the continuous equivalent. Not only does the
gain affect our stability (as with analog systems), now the sample time changes the
shape of our root locus plot because it changes the locations of the system poles and
zeros (as easily seen in the different z transforms). Although the guidelines used to
develop root locus plots remain the same, the process of determining the desired pole
locations on the loci paths becomes more difficult on the z-plane due to the nonlinear
lines of constant damping ratio and natural frequency. As we progress, then, we tend
to use computers in increasing roles during the design process. It is important to
remember the fundamental concepts even when using a computer since most design
decisions can be made and most errors can be caught (even though the computer will
likely think everything is solved) based on what we know the overall shapes should
be. This frees us to use the computer to help us with the laborious details and
calculations.

8.5 PROBLEMS
8.1 When we close the loop on a sampled system, the z transform applies to all
blocks located where?
8.2 To obtain the closed loop transfer function of digitally controlled systems, the
input and output must be _____________.
8.3 Discrete transfer functions may be linear or nonlinear. (T or F)
8.4 Difference equations may be linear or nonlinear. (T or F)
8.5 Given the discrete transfer function, use the difference equation method to
determine the output of the system. Let r(t) be a step input occurring at the first
sample period and calculate the first five sampled system response values. Using the
FVT in z, what is the final steady-state value?

C(z)/R(z) = z / (z² + 0.1z − 0.2)
8.6 Given the discrete output in the z-domain, use the difference equation method
to determine the output of the system. Calculate the first five sampled system
response values. Using the FVT in z, what is the final steady-state value?

Y(z) = (z³ + z² + 1) / (z³ − 1.3z² + z)
8.7 Given the discrete output in the z-domain, find the initial and final values using
the IVT and FVT, respectively.

Y(z) = z(z + 1) / [(z − 1)(z² + z + 1)]
8.8 Given the continuous system transfer function:
a. Using the FVT in s, what is the steady-state output value in response to a
unit step input for the continuous system?
b. Applying a sampler and ZOH with a sample time of 1 s, derive the
equivalent discrete transfer function.
c. Write the corresponding difference equations and calculate the first five
sampled outputs of the system. Let r(t) be a step input occurring at the
first sample period.
d. Using the FVT in z, what is the final steady-state value in response to a unit
step input? How does this answer compare with that obtained in part (a) for
the continuous system?

G(s) = 1 / [s(s + 1)]

8.9 Given the block diagram in Figure 19,
a. Determine the first five values of the sampled output for the closed loop
system. The input is a unit impulse and the sample time is T = 0.1 s.
b. Use the discrete FVT to determine the steady-state error if the input is a unit
step. Use the same sample time, T = 0.1 s.

Figure 19 Problem: physical system block diagram.

8.10 Given the block diagram in Figure 20, develop the discrete transfer function
and solve for the range of sample times for which the system is stable (see Problem
8.9).

Figure 20 Problem: physical system block diagram.

8.11 Using the system transfer function given, develop the sampled system transfer
function and solve for the range of sample times for which the system is stable.
G(s) = 1 / [s(s + 1)]

8.12 For the continuous system transfer function given, derive the sampled system
transfer function, the poles and zeros of the sampled system, and briefly describe the
type of response. If required, use partial fraction expansion.

G(s) = 10(s + 1) / [s(s + 4)]

8.13 Given the following discrete open loop transfer function:
a. Sketch the root locus plot.
b. Does the system go unstable?
c. Approximately what is the range of damping ratios available?

G(z) = (z² + z) / (z² + 0.1z − 0.2)

8.14 Sketch the root locus plot for the system in Figure 21 when the sample time is
0.35 s.

Figure 21 Problem: physical system block diagram.

8.15 Sketch the root locus plot for the system in Figure 22 when the sample time is
0.5 s.

Figure 22 Problem: physical system block diagram.

8.16 Use the computer to solve Problem 8.14:
a. Sketch the plot when T = 0.35 sec and if the system goes unstable solve for
the gain K at marginal stability.
b. Sketch the plot when T = 1.5 sec and if the system goes unstable solve for
the gain K at marginal stability.
8.17 Use the computer to solve Problem 8.15. Find the gain K resulting in an
approximate damping ratio equal to 0.5.
8.18 Use the computer to develop the root locus plot for the system in Figure 23.
The sample time, T, is 0.1 sec. Find K where the damping ratio is equal to 0.9.

Figure 23 Problem: physical system block diagram.

8.19 For the system transfer function given, use the computer to
a. Convert the system to a discrete transfer function using the ZOH model and
a sample time of 0.5 sec.
b. Draw the root locus plot.
c. Solve for the gain K required for a closed loop damping ratio equal to 0.7.
d. Close the loop and generate the sampled output in response to a unit step
input.
G(s) = (s + 4) / [s(s + 1)(s + 6)]

8.20 For the system transfer function given, use the computer to
a. Convert the system to a discrete transfer function using the ZOH model and
a sample time of 0.25 sec.
b. Draw the root locus plot.
c. Solve for the gain K required for a closed loop damping ratio equal to 0.5.
d. Close the loop and generate the sampled output in response to a unit step
input.
G(s) = (s + 2)(s + 4) / [(s + 3)(s + 6)(s² + 3s + 6)]
9
Digital Control System Design

9.1 OBJECTIVES
• Develop digital algorithms for analog controllers already examined.
• Develop tools to convert from continuous to discrete algorithms.
• Discuss tuning methods for digital controllers.
• Develop methods to design digital controllers directly in the discrete
domain.

9.2 INTRODUCTION
If we enter the z-domain, the design of digital control algorithms is almost identical
to the design of continuous systems, the obvious difference being the sampling and
its effect on stability. Since digital control algorithms are implemented using
microprocessors, the common representation is difference equations. Although these
require some previous values to be stored, they are simple algorithms to program
and use. In fact, any controller that can be represented as a transfer function in the
z-domain is quite easy to implement as difference equations, the only requirement
being that it does not require future knowledge of our system. When we get to
nonlinear and/or other advanced controllers that are unable to be represented as
transfer functions in z, the design process becomes more difficult. Nonlinear
difference equations, however, are still quite simple to implement in most
microprocessors once the design is completed.

9.3 PROPORTIONAL-INTEGRAL-DERIVATIVE (PID) CONTROLLERS


Even in the digital realm of controllers, PID algorithms are extremely popular and
continue to serve many applications well. Since we saw earlier that using different
approximations will result in different difference equations, we find many different
representations of PID controllers using difference equations. PID algorithms are
popular in large part because of their previous use in analog systems and many
people are familiar with them. As this chapter demonstrates, however, many
additional options become available when using digital controllers, and if classical
PID algorithms are not capable of achieving the proper response, we can directly
design digital controller algorithms using the skills from the previous two chapters.

9.3.1 Digital Algorithms


Although the goal is to obtain a difference equation (or equations) representing our
control law, the method by which it is obtained varies. We may simply use difference
approximations for the different controller terms, or we may use z transform
techniques and convert a controller from the s-domain into the z-domain, thereby
enabling us to write the difference equations. As we saw earlier, both methods work
but different results are obtained. Even within these two approaches are several
additional options. For example, when using difference approximations we can use
forward or backward approximations, and when converting from the s-domain into
the z-domain we have the zero-order hold (ZOH), bilinear transform, or first-order
hold transformations. The important conclusion to be made is that we only
approximate analog systems when we convert them to digital representations.
9.3.1.1 Approximating PID Terms Using Difference Equations
The easiest way to understand the development of a control algorithm is to write
difference equations for each of the different control actions. In our PID controller
examined here, this means approximating a summation (integral) and differential
(derivative) using difference equations; the proportional term remains the same. The
derivative term can be approximated using forward, backward, or central difference
techniques by calculating the difference between the appropriate error samples and
dividing by the sample period. These approximations are shown below where e(k) is
the sampled output of the summing junction (our error at each sample instant):

Approximating a derivative:

Backward difference: [e(k) − e(k − 1)] / T
Forward difference: [e(k + 1) − e(k)] / T
Central difference: [e(k + 1) − e(k − 1)] / (2T)

Although the central difference method provides the best results since it averages
over two sample periods, a future error value is needed and so from a programming
perspective it is not that useful. The same is true for the forward difference
approximation. In general, then, the backward difference is commonly used to
approximate the derivative term in our PID algorithm.
The same concepts can be used for approximating the integral where the area
under the error curve between samples can be given as three alternative difference
equations:

Approximating an integral:

Backward rectangular rule: T e(k − 1)
Forward rectangular rule: T e(k)
Trapezoidal rule: T [e(k) + e(k − 1)] / 2

The trapezoidal rule gives the best approximation since it integrates the area
determined by the width, T, and the average error, not just the current or previous
value. To operate as an integral gain, it must continually sum the error and thus
must include a memory term as shown in the following trapezoidal approximation.
Otherwise, the error is only that which accumulated during the current sample
period.
Finally, realizing that the proportional term is just u(k) = Kp e(k), we can
proceed to write the difference equations for PI and PID algorithms, recognizing that
different approximations will result in slightly different forms. Two forms are given,
termed the position and velocity (or incremental) algorithms. The position algorithm
results in the actual controller output (i.e., command to valve spool position, etc.),
while the velocity algorithm represents the amount to be added to the previous
controller output term. This is seen in that the error is only used to calculate the
amount of change to the output, which is then simply added to the previous output,
u(k − 1). Velocity algorithms have several advantages: the output will maintain its
position in the case of computer failure, and it is not as likely to saturate
actuators upon startup. This is an easy way to implement bumpless transfer for
cases where the controller is switched between manual and automatic control. In a
normal position algorithm the controller will continually integrate the error such
that when the system is returned back to automatic control, a large bump occurs.
Bumpless transfer can be implemented in position algorithms by initializing the
controller values with the current system values before switching back to automatic
control. The velocity (incremental) command can also be used to interface with
stepper motors by rounding the desired change in controller output to represent a
number of steps required by the stepper motor. In this role the stepper motor acts as
the u(k − 1) term since it holds its current position until the next signal is given.
Position PI algorithm using trapezoidal rule:

u(k) = Kp e(k) + s(k)
s(k) = s(k − 1) + Ki (T/2) [e(k) + e(k − 1)]

When implementing the position algorithm, the integral term, s(k), must be
calculated separately so that it is available for the next sample time as s(k − 1).
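A sketch of how the position PI algorithm might be coded (Python for illustration; the gains, sample time, and error sequence are arbitrary values, not from the text):

```python
Kp, Ki, T = 2.0, 0.5, 0.1  # arbitrary gains and sample period

s_prev = 0.0  # integral memory term, s(k-1)
e_prev = 0.0  # previous error, e(k-1)
for e in (1.0, 0.8, 0.5, 0.2, 0.0):  # a made-up error sequence
    s = s_prev + Ki * (T / 2.0) * (e + e_prev)  # trapezoidal integral term s(k)
    u = Kp * e + s                              # controller output u(k)
    s_prev, e_prev = s, e                       # store for the next sample
    print(round(u, 4))
```

Note that s(k) is computed and stored separately each pass, exactly as the text requires, so it is available as s(k − 1) on the next sample.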
To derive the velocity form of the algorithm we take u(k), decrement k by 1 in
each term, resulting in an expression for u(k − 1), and subtract the two resulting
equations. After simplifying the result of the subtraction we can write the velocity
algorithm.

Velocity PI algorithm using trapezoidal rule:

u(k) − u(k − 1) = Kp [e(k) − e(k − 1)] + Ki (T/2) [e(k) + e(k − 1)]

Or we may collect and group terms according to the error term and express the
velocity PI algorithm in a way that is more amenable to programming:

u(k) − u(k − 1) = [Kp + Ki (T/2)] e(k) + [Ki (T/2) − Kp] e(k − 1)
Finally, the complete PID algorithm can be derived using a trapezoidal
approximation for the integral and a backward rectangular approximation for the
derivative.

u(k) = Kp e(k) + s(k) + (Kd/T) [e(k) − e(k − 1)]
s(k) = s(k − 1) + Ki (T/2) [e(k) + e(k − 1)]
We can follow the same procedure as with the PI and derive the velocity
(incremental) representation as

u(k) = u(k − 1) + Kp [e(k) − e(k − 1)] + Ki (T/2) [e(k) + e(k − 1)]
       + (Kd/T) [e(k) − 2e(k − 1) + e(k − 2)]
T
Sometimes the integrated error is only included in the term u(k − 1) and Ki only acts
on the current error level, in which case the algorithm simply becomes (common in
PLC modules):

u(k) = u(k − 1) + Kp [e(k) − e(k − 1)] + Ki (T/2) e(k)
       + (Kd/T) [e(k) − 2e(k − 1) + e(k − 2)]
Finally, if so desired, we can express the PID algorithm using integral and derivative
times, Ti and Td, as used in earlier analog PID representations and Ziegler-Nichols
tuning methods:

u(k) = u(k − 1) + Kp { [e(k) − e(k − 1)] + [T/(2Ti)] [e(k) + e(k − 1)]
       + (Td/T) [e(k) − 2e(k − 1) + e(k − 2)] }

Several modifications to the algorithms are possible which help them to be more
suitable under difficult conditions. First, it is quite easy to prevent integral windup by
limiting the value of s(k) to maximum positive and negative values using simple if
statements. Also, modifying the derivative approximations can help with noisy
signals. The derivative approximation can be further improved by averaging the rate
of change of error over the previous four (or whatever number is desired) samples to
further smooth out noise problems. The disadvantage is that it does require
additional storage values and introduces additional lag into the system. A similar
effect is accomplished by adding digital filters to the input signals.
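The integral windup limit mentioned above amounts to two if statements around the s(k) update; a minimal sketch (Python; the limit and gain values are arbitrary illustrations):

```python
S_MAX = 5.0  # arbitrary saturation limit on the integral term

def integral_update(s_prev, e, e_prev, Ki=1.0, T=0.1):
    """Trapezoidal integral term s(k) with a simple anti-windup clamp."""
    s = s_prev + Ki * (T / 2.0) * (e + e_prev)
    if s > S_MAX:        # the "simple if statements" limiting s(k)
        s = S_MAX
    elif s < -S_MAX:
        s = -S_MAX
    return s

s = 0.0
for _ in range(1000):  # persistent error: s would otherwise grow without bound
    s = integral_update(s, 1.0, 1.0)
print(s)  # clamped at 5.0
```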
Finally, it is now easy to implement I-PD (see Sec. 5.4.1) since the physical
system output is already sampled and can be used in place of the error difference
each sample period. It requires that we use the integral gain since it is the only term
that directly acts on the error. The new algorithm becomes

u(k) = u(k − 1) + Kp [c(k − 1) − c(k)] + Ki T [r(k) − c(k)]
       − (Kd/T) [c(k) − 2c(k − 1) + c(k − 2)]

Remember that many of the signs are reversed because e(k) = r(k) − c(k) and now
only c(k), the actual physical feedback of the system, is used for the proportional and
derivative terms. This helps with set-point-kick problems.
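A sketch of the I-PD difference equation in code (Python; the gains and signal values are arbitrary illustrations):

```python
Kp, Ki, Kd, T = 1.0, 2.0, 0.05, 0.1  # arbitrary gains and sample period

def ipd_step(u_prev, r, c, c1, c2):
    """u(k) from u(k-1), setpoint r(k), and outputs c(k), c(k-1), c(k-2).
    Only the integral term sees the error; P and D act on the output c alone."""
    return (u_prev
            + Kp * (c1 - c)
            + Ki * T * (r - c)
            - (Kd / T) * (c - 2.0 * c1 + c2))

# A setpoint step while the output is still at rest: no proportional or
# derivative kick, only the integral term moves the controller output.
u = ipd_step(0.0, r=1.0, c=0.0, c1=0.0, c2=0.0)
print(u)  # Ki * T * 1 = 0.2
```

This illustrates why I-PD avoids set-point kick: a sudden change in r(k) enters the output only through the (small) integral term.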
Concluding this section on PID difference equations, we see that there are
many options since the difference equation is only an approximation to begin
with. Having the controller implemented as a difference equation does make it
easy for us to use many of our analog concepts (e.g., I-PD) and approximate
derivatives. Many manufacturers of digital controllers (PLCs, etc.) have slight
proprietary modifications designed to enhance the performance in those applications
that the components are designed for.
9.3.1.2 Conversion from s-Domain PID Controllers
Direct conversion from the s-domain into the z-domain is very easy (using a program
like Matlab) and quickly results in a set of difference equations to approximate any
controller represented in the s-domain. This is advantageous when significant design
effort and experience has already been gained with the analog equivalent. The
disadvantages are the lack of understanding and thus the corresponding loss of
knowing what modifications (in the digital domain) can be used to build in certain
attributes. It does allow us to account for the sample time (at least in a limited sense)
since the conversion methods use both the continuous system information and the
sample time.
Using our design experience and knowledge of PID controllers in the
continuous domain to develop the digital approximations works very well if the
sample rate is high enough. It is feasible under these conditions to design the control
system using conventional continuous system techniques and simply convert the
resulting controller to the z-domain to obtain the difference equations. This allows us
to use all our s-domain root locus and frequency techniques to design the actual
controller. As discussed earlier, if the sampling rate is greater than 20 times the
system bandwidth or natural frequency, the resulting digital controller will closely
approximate the continuous controller and the method works well; otherwise, it
becomes beneficial to use the direct design methods discussed below, allowing us to
account for the sample and hold effects, while not being constrained to P, I, and D
corrective actions.
The simplest conversion is to simply use the transformation z = e^(sT) and map
the pole and zero locations from the continuous (s) domain into the discrete (z)
domain. This is commonly called pole-zero matching and will be demonstrated
when developing digital approximations of phase-lag and phase-lead controllers.
The transformation is also the starting point for the bilinear, or Tustin's,
approximation. If we solve the transformation for s, we get

s = (1/T) ln z

The bilinear transformation is the result when we perform a series expansion on s
and discard all of the higher order terms. The term that is retained then becomes our
first-order approximation of the transform. It is applied as follows:

s = (2/T) (z − 1)/(z + 1)

It can be shown that it is very similar to the trapezoidal approximation used in the
preceding sections. The bilinear transformation process can get tedious when devel-
oping the difference equations by hand but is easily done using computer programs
like Matlab. The concept, however, is simple. To convert from the s-domain, we
simply substitute the transform in for each s that appears in our controller and
simplify the result until we obtain our controller in the z-domain.

EXAMPLE 9.1
Convert the PI controller, represented as designed in the s-domain, into the
z-domain using the bilinear transform. Write the corresponding difference equation
for the discrete PI approximation.

Gc(s) = 100 + 10/s

To find the equivalent discrete representation substitute the bilinear transform in for
each s term:

Gc(z) = D(z) = 100 + 10 / [(2/T)(z − 1)/(z + 1)]
      = [(200 + 10T)z − (200 − 10T)] / [2(z − 1)]

Consistent with our earlier results where sampled system transfer functions are
dependent on sample time, we see the same effect with our equivalent discrete
controller. To derive the difference equation let's assume T = 0.1 s, which results in a
discrete controller transfer function of

U(z)/E(z) = (201z − 199) / (2z − 2)

Multiplying the top and bottom by z⁻¹ allows us to write our difference equation as

u(k) = u(k − 1) + (201/2) e(k) − (199/2) e(k − 1)

So we see that the bilinear transform can be used to develop approximate controller
algorithms represented as difference equations.
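The substitution is mechanical enough to script. A sketch (Python; the chapter itself uses Matlab for the same job) that reproduces the coefficients found above for a PI controller Gc(s) = Kp + Ki/s:

```python
def tustin_pi(Kp, Ki, T):
    """Bilinear (Tustin) discretization of Gc(s) = Kp + Ki/s using
    s -> (2/T)(z - 1)/(z + 1). Returns (b1, b0) with
    U(z)/E(z) = (b1 z + b0)/(z - 1), i.e. u(k) = u(k-1) + b1 e(k) + b0 e(k-1)."""
    b1 = Kp + Ki * T / 2.0
    b0 = Ki * T / 2.0 - Kp
    return b1, b0

print(tustin_pi(100.0, 10.0, 0.1))  # (100.5, -99.5)
```

With Kp = 100, Ki = 10, and T = 0.1 s this gives the same 100.5 and −99.5 coefficients as the hand derivation.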

EXAMPLE 9.2
Convert the PI controller, represented as designed in the s-domain, into the z-domain using Matlab and the bilinear transform. Write the corresponding difference equation for the discrete PI approximation if the sample time is 0.1 s.

Gc(s) = 100 + 10/s = (100s + 10)/s
In Matlab we can define the continuous system transfer function and convert to the z-domain using the bilinear transform by executing the following commands:
>>num=[100 10];
>>den=[1 0];
>>sysc=tf(num,den)
>>sysz=c2d(sysc,0.1,'tustin')
Digital Control System Design 371
This results in the following transfer function, identical to the one developed manually in the preceding example.

U(z)/E(z) = (100.5z - 99.5)/(z - 1)
Using a transformation manually quickly becomes an exercise in algebra for all but simple systems. Using tools like Matlab allows us to apply the same techniques to much larger systems and still get difference equations for implementing our controller. This allows us to take any analog design, not just the PID designs primarily discussed here, and convert it into an equivalent discrete design which may be implemented digitally.
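The same substitution can also be automated for any first-order controller. The sketch below (plain Python with exact rational arithmetic; the function name and signature are mine, not a book or Matlab API) applies s = (2/T)(z - 1)/(z + 1) to (b1 s + b0)/(a1 s + a0) and reproduces the transfer function above.

```python
from fractions import Fraction

def tustin_first_order(b1, b0, a1, a0, T):
    """Apply the bilinear substitution s = (2/T)(z-1)/(z+1) to
    Gc(s) = (b1*s + b0)/(a1*s + a0) and return ((B1, B0), (A1, A0))
    for Gc(z) = (B1*z + B0)/(A1*z + A0), normalized so A1 = 1."""
    k = Fraction(2) / Fraction(T)          # the factor 2/T
    B1, B0 = k * b1 + b0, b0 - k * b1      # from b1*k*(z-1) + b0*(z+1)
    A1, A0 = k * a1 + a0, a0 - k * a1      # from a1*k*(z-1) + a0*(z+1)
    return (B1 / A1, B0 / A1), (Fraction(1), A0 / A1)

# PI controller Gc(s) = (100s + 10)/s with T = 0.1 s (pass T as a Fraction
# so the arithmetic stays exact):
num, den = tustin_first_order(100, 10, 1, 0, Fraction(1, 10))
# num -> (100.5, -99.5), den -> (1, -1): Gc(z) = (100.5z - 99.5)/(z - 1)
```

This matches the result Matlab's c2d with the 'tustin' option returned above.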
9.3.2 Tuning Methods
In the digital domain we still have similar procedures regarding the tuning of our controller. This section relates the tuning of analog controllers, as learned earlier, to the tuning of digital controllers. The primary difference is that we now have additional variables that complicate matters slightly when compared to continuous system tuning. The first difference is the sample time. If the sampling intervals are short compared to the system response, as discussed in the preceding sections, then continuous system tuning methods like Ziegler-Nichols work well. If the sample time is slower, however, the Ziegler-Nichols methods serve only to provide a rough estimate and the values become very sensitive to the sample time. For all tuning methods with digital controllers there will be a dependence on the sample time, and if it is changed another round of tuning is likely required to achieve optimum performance. To use the Ziegler-Nichols method with discrete algorithms, we simply follow the same procedure presented for analog controllers, only we must operate the controller at the sampling rate that it will be operating at when finished. Then we turn the integral and derivative gains off, increase the proportional gain until the system oscillates (or use the step response method), and record the gain. Since the controller is implemented digitally, it is a straightforward procedure to apply the values in Table 1 or Table 2 of Chapter 5 and tune the controller. If we know that this method is to be used, we can express our difference equations using the integral time and derivative time notation as presented in the preceding section.
Additional problems occur in the hardware and software. When converting from analog to digital we have finite resolution that may become a problem with lower-end converters. Also, in software, we may be limited by word lengths, integer arithmetic, etc., which all determine how we set up our controller and tune it. We may have to scale the inputs and outputs to take better advantage of our processor. Hardware and software related issues are discussed more fully in Chapter 10.
When designing and tuning discrete algorithms, if we incorporate integral windup protection we should set the upper and lower limits to closely approximate the saturation points of our physical system. This allows maximum performance from our system without integral windup problems. This is where understanding the physics of our system is very useful.
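A minimal sketch of this idea (illustrative Python; the gains and limits here are placeholders, not values from the text): the integral state is clamped to the assumed actuator saturation limits so it cannot wind up during sustained saturation.

```python
def pi_antiwindup(errors, kp=2.0, ki=1.0, T=0.1, u_min=-1.0, u_max=1.0):
    """Discrete PI controller with integral windup protection: the
    integral state is clamped to the assumed actuator limits (the gains
    and limits are illustrative placeholders)."""
    integral = 0.0
    outputs = []
    for e in errors:
        integral += ki * T * e
        integral = max(u_min, min(u_max, integral))  # clamp: no windup
        u = kp * e + integral
        u = max(u_min, min(u_max, u))                # actuator saturation
        outputs.append(u)
    return outputs

# A long stretch of large error cannot wind the integrator past the limit,
# so the output recovers promptly when the error changes sign.
out = pi_antiwindup([1.0] * 100 + [-1.0] * 5)
```

Without the clamp, 100 samples of unit error would accumulate an integral of 10 that the loop would then have to "unwind" before responding to the sign change.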
Finally, since microprocessors can change the parameters during operation, it is possible to incorporate auto-tuning methods based on the system response. For example, the proportional gain can continually be adjusted to limit the system overshoot. If the system begins to overshoot more, the gain is decreased. Sometimes it is beneficial to base changes on a physical parameter (i.e., adaptive control). These methods are discussed more fully in Chapter 11.
9.4 PHASE-LAG AND PHASE-LEAD CONTROLLERS
Phase-lag and phase-lead controllers, as with PID, can be designed in the analog domain using common techniques and converted to equivalent discrete representations. The bilinear transform, demonstrated with PID controllers, can be used again when dealing with phase-lag and phase-lead designs. Another method that lends itself particularly well to phase-lag and phase-lead is the pole-zero matching technique. Since phase-lag/lead controllers are designed to place one pole and one zero on the real axis, it is very simple to use the transform z = e^(sT) to map the s-plane pole and zero locations directly into the z-domain. The caution here is that we must also make sure that the steady-state gain of each representation remains the same. This is easy to accomplish by applying the continuous and discrete representations of the final value theorem (FVT) to each controller. First find the steady-state gain of the analog controller and set the discrete controller gain to result in the same magnitude. This will be demonstrated in the example problems.
EXAMPLE 9.3
Given an analog phase-lead controller, use the pole-zero matching technique to design the equivalent discrete controller. Express the final controller as a difference equation that could be easily implemented in a digital microprocessor. Use a sample time T = 0.5 s.

Gc(s) = 10 (s + 1)/(s + 5)
To find the equivalent difference equations we will use the identity z = e^(sT) and solve for the equivalent z-plane locations. This will allow us to express the controller as a transfer function in the z-domain and then to write the difference equations from the transfer function.
Beginning with our original analog controller, we note that the zero location is at s = -1 and the pole location is at s = -5. If our sample time for this example is 1/2 second, we can use the identity to find each discrete pole and zero location.

Zero location in z: z = e^(sT) = e^(-1*0.5) = 0.61
Pole location in z: z = e^(sT) = e^(-5*0.5) = 0.08
Now we can write the new controller transfer function as

Gc(z) = K (z - 0.61)/(z - 0.08)
We still must equate the steady-state gains using the two representations of the FVT.
This allows us to determine K for our discrete controller.
Gc(z)|z=1 = Gc(s)|s=0

K (z - 0.61)/(z - 0.08)|z=1 = 10 (s + 1)/(s + 5)|s=0
Solve for K:

K (1 - 0.61)/(1 - 0.08) = 0.424K = 10 (1/5) = 2, so K = 4.72
Finally, the equivalent phase-lead controller in z is

Gc(z) = U(z)/E(z) = 4.72 (z - 0.61)/(z - 0.08)
Cross-multiplying the controller transfer function and using z^-1 as our delay shift operator enables us to derive our difference equation.

U(z)(z - 0.08) = E(z) 4.72(z - 0.61)
U(z)(1 - 0.08z^-1) = E(z) 4.72(1 - 0.61z^-1)
Our final difference equation that represents the discrete approximation of the original phase-lead controller is given as

u(k) = 0.08u(k - 1) + 4.72e(k) - 2.88e(k - 1)
Remember that this particular difference equation is developed with the assumption that we will have a sample time equal to 1/2 s. As is true with all digital controllers derived in this fashion, when we change the sample time we must also update our discrete algorithm. Of course, as the sample time becomes too long the design fails altogether.
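The pole-zero matching recipe above can be captured in a few lines (a Python sketch; the function name is mine, not from the book). Using exact exponentials gives K of about 4.67; the book's 4.72 comes from rounding the matched zero and pole to 0.61 and 0.08 before solving for K.

```python
import math

def pole_zero_match(K_s, zero_s, pole_s, T):
    """Map Gc(s) = K_s*(s - zero_s)/(s - pole_s) into the z-domain by
    pole-zero matching (z = exp(s*T)), choosing the gain so the DC
    gains agree: Gc(z) at z = 1 equals Gc(s) at s = 0."""
    zd = math.exp(zero_s * T)              # matched zero location
    pd = math.exp(pole_s * T)              # matched pole location
    dc_gain = K_s * (-zero_s) / (-pole_s)  # Gc(s) evaluated at s = 0
    K_z = dc_gain * (1 - pd) / (1 - zd)    # force equal steady-state gains
    return K_z, zd, pd

# Example 9.3: Gc(s) = 10(s + 1)/(s + 5) with T = 0.5 s
K_z, zd, pd = pole_zero_match(10.0, -1.0, -5.0, 0.5)
```

The same function applies unchanged to any single-pole, single-zero lag or lead design; only the steady-state gain calculation would need adjusting for other controller structures.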
EXAMPLE 9.4
For the system given in Figure 1, use Matlab to design a phase-lead controller using continuous system techniques. Use pole-zero matching to convert the controller into the z-domain and verify that the system is stable and usable. Generate a step response using Matlab.
• A damping ratio of 0.5
• Sample time, T = 0.01 s
Since this problem is open loop marginally stable, we need to modify the root loci and move them further to the left. Thus a phase-lead controller is the appropriate choice. We start by adding 36 degrees (a conservative choice) by placing the zero at s = -3 and the pole at s = -22. Using Matlab allows us to choose the point where
Figure 1 Example: system block diagram for phase-lead digital controller design using pole-zero matching.
the loci paths cross the radial line representing a damping ratio equal to 1/2 and find the gain K at that intersection point. The Matlab commands used to define the phase-lead controller and plant transfer function are
clear;
T=0.01;
numc=1*[1 3]; %Place compensator zero at -3
denc=[1 22]; %Place compensator pole at -22
nump=1; %Forward loop system numerator
denp=[1 0 0]; %Forward loop system denominator
sysc=tf(numc,denc); %Controller transfer function
sysp=tf(nump,denp); %System transfer function in forward loop
sysall=sysc*sysp %Overall compensated system in series
Now, execute the following commands and use Matlab to generate the root locus plot and solve for K, giving us the root locus plot in Figure 2.
rlocus(sysp); %Generate original Root Locus Plot
hold;
rlocus(sysall); %Add new root loci to plot
sgrid(0.5,2); %place lines of constant damping
Kc=rlocfind(sysall)
As we see, this attracts our loci paths into the stable region and crosses the radial line (zeta = 1/2) when K = 75. Therefore our continuous system phase-lead controller can be defined as

Gc,Phase-Lead(s) = 75 (s + 3)/(s + 22)
To implement the controller digitally, we can use pole-zero matching to find the equivalent controller in the z-domain. Beginning with the analog controller, we note that the zero location is at s = -3 and the pole location is at s = -22. Using our

Figure 2 Example: Matlab root locus plot with continuous system phase-lead compensa-
tion.
Digital Control System Design 375

sample time of 0.01 s and the transformation identity allows us to find each discrete pole and zero location.

Zero location in z: z = e^(sT) = e^(-3*0.01) = 0.97
Pole location in z: z = e^(sT) = e^(-22*0.01) = 0.80
Now we can write the new controller transfer function as

Gc(z) = K (z - 0.97)/(z - 0.80)
We still must equate the steady-state gains using the two representations of the final value theorem. This allows us to determine K for our discrete controller.

Gc(z)|z=1 = Gc(s)|s=0

K (z - 0.97)/(z - 0.80)|z=1 = 75 (s + 3)/(s + 22)|s=0
Solve for K:

K (1 - 0.97)/(1 - 0.80) = 0.15K = 75 (3/22) = 10.2, so K is approximately 70
Finally, the equivalent phase-lead controller in z is

Gc(z) = U(z)/E(z) = 70 (z - 0.97)/(z - 0.80)
To simulate the system using Matlab we need to define the continuous system transfer function, convert it to the z-domain, add our phase-lead controller, and generate the step response. The commands that can be used to achieve this are listed here.
numzc=70*[1 -0.970]; %Place zero at 0.97
denzc=[1 -0.8]; %Place pole at 0.8
syszc=tf(numzc,denzc,T); %Controller transfer function
sysp=tf(nump,denp); %System transfer function in forward loop
To verify our design, let us use Matlab again and now generate the discrete root locus plot to see how we have pulled our system loci paths into the unit circle (stability region). We can use the commands given to convert our continuous system into a discrete system with a sample time equal to 0.01 s and with a ZOH applied to the input of the physical system. The result is the root locus plot given in Figure 3.
%Verify the discrete root locus plot
figure;
rlocus(syszc*c2d(sysp,T,'zoh')); zgrid;
Kz=rlocfind(syszc*c2d(sysp,T,'zoh'))
Since our discrete root locus plot does illustrate that our phase-lead controller stabilizes the system, we expect that our step response will also exhibit the desired
Figure 3 Matlab root locus plot of equivalent phase-lead compensation (pole-zero matching).
response. Executing the following Matlab commands closes the loop and generates
the corresponding step response plot, given in Figure 4.
%Close the loop and convert to z between samplers
cltfz=(syszc*c2d(sysp,T,'zoh'))/(1+syszc*c2d(sysp,T,'zoh'))
figure;
step(cltfz,5);
Our percent overshoot with the digital implementation of our continuous system
design is approximately 35%, larger than the expected value of 15%. Further tuning
Figure 4 Example: Matlab step response of phase-lead digital controller (pole-zero matching).
could reduce this; such tuning is also easy to do in the z-domain, as the next section demonstrates.
It should be noted that the small sample time, with both the pole and the zero close to unity, leads to a controller that is sensitive to changes in parameters. If the pole or zero location (or sample time) were to change, the controller could become unstable. Fortunately, the digital controller allows us to consistently execute the desired commands, and the coefficients do not change during operation.
So as we see in the example, it is quite simple to design a controller in the s-domain and convert it to the z-domain using pole-zero matching techniques. The same comments regarding sample time that were applied to converting other controllers from continuous to digital still apply here.
9.5 DIRECT DESIGN OF DIGITAL CONTROLLERS
Many times we are unable to have sampling rates greater than 20 times the bandwidth, at which point the emulated designs based on continuous system controllers begin to diverge from desired performance specifications. As the sampling rate falls below 10 times the bandwidth, the controller will likely tend toward instability or actually go unstable. Looking at Figure 5, it is clear that any digital control introduces lag into the system, moving the system toward instability. The amount of lag increases as the sampling period increases.
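A common way to quantify this lag (a standard approximation, not derived in the text) is to model the sample-and-hold as a pure delay of half the sample period, so the phase lost at a frequency omega is roughly omega*T/2 radians:

```python
import math

def zoh_phase_lag_deg(omega, T):
    """Approximate phase lag (degrees) at frequency omega (rad/s) when the
    sample-and-hold is modeled as a pure delay of T/2 seconds."""
    return math.degrees(omega * T / 2.0)

# Lag at an assumed 10 rad/s crossover for two sample rates:
lag_fast = zoh_phase_lag_deg(10.0, 0.01)   # 100 Hz sampling: ~2.9 degrees
lag_slow = zoh_phase_lag_deg(10.0, 0.1)    # 10 Hz sampling: ~28.6 degrees
```

A tenfold increase in sample period costs ten times the phase margin at the same crossover frequency, which is why slow sampling pushes an emulated design toward instability.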
Since we know that lag is introduced when we move to discrete controllers, we would like to design the system accounting for the additional lag, especially as sample times become longer. This leads us to direct design methods in the z-domain. Two primary methods are presented here: root locus design in the z-plane and deadbeat response design. In particular, the deadbeat design is able to take advantage of the fact that realizable algorithms, not physical components, are the only limits. As a result, we are able to implement (design) control actions that are impossible to duplicate in the analog world.
9.5.1 Direct Root Locus Design
We will first develop techniques to directly design controllers in the z-domain using root locus plots. This method is familiar to us through the analogies it shares with its s-plane counterpart. In fact, as we learned when defining stability regions in the z-plane, the rules for drawing the root locus plots are identical; only the stability, frequency, and damping ratio locations have changed. Thus, once we know the type of response we wish to have, we follow the same procedures used in the s-plane, but
Figure 5 Sampled signal lag of actual signals.
with several special items relating to the z-plane. First, review the types of responses relative to the location of system poles in the z-plane as given in Figures 10 and 11 of Chapter 8. In general we will place poles in the right-hand plane (analogous to the s-domain) and inside the unit circle. If sample times get long relative to the system bandwidth, we will be forced to use the left-hand plane, an area without an s-domain counterpart. The closer we get to the origin, the faster our response will settle. When the poles are exactly on the origin, we have a special case called a deadbeat response, a method presented in the next section.
The discrete root locus design process, when compared with analog root locus methods, is nearly identical except for two items. First, let us examine the similarities with the s-domain. When we designed our analog controller we used our knowledge of poles and zeros, and how they affected the shape of the loci paths, to choose and design the optimal controller. We follow the same procedure again and use our discrete controller poles and zeros to attract the root loci inside the unit circle. The z-domain, however, also accounts for the sample time, and if the sample time is changed we need to redraw the plot.
The two points where we diverge from the continuous system methods are related. Since we no longer have to physically build the controller algorithm (i.e., with OpAmps, resistors, capacitors, etc.), we can place the poles and zeros wherever we wish. This allows us additional flexibility during the design process. The second point, related to the first, is that even though we do not build the algorithm, we still must be able to program it into a set of instructions that the microprocessor understands; this is our constraint on direct design methods. A controller that can be programmed and implemented is often said to be realizable. The net effect is that we do have more flexibility when designing digital controllers, even when subject to being realizable. The process of designing a control system in the z-domain and checking whether or not it is realizable can best be demonstrated through several examples.
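The realizability check itself reduces to a causality test on the controller transfer function: the numerator order in z must not exceed the denominator order, or computing u(k) would require future errors. A small sketch (a hypothetical helper, not from the book):

```python
def is_realizable(num, den):
    """True if U(z)/E(z) = num(z)/den(z) (coefficients in descending
    powers of z) is causal: numerator order <= denominator order.
    Otherwise u(k) would depend on future errors such as e(k+1)."""
    def order(coeffs):
        coeffs = list(coeffs)
        while len(coeffs) > 1 and coeffs[0] == 0:   # drop leading zeros
            coeffs.pop(0)
        return len(coeffs) - 1
    return order(num) <= order(den)

# K(z - 0.5) with no pole needs e(k+1): not realizable.
# Adding a pole, e.g. (z + 0.25), makes it programmable.
bad = is_realizable([1.0, -0.5], [1.0])          # False
good = is_realizable([1.0, -0.9], [1.0, 0.25])   # True
```

Both situations appear in the example that follows: the first controller attempted there fails this test and is repaired by adding a pole.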
EXAMPLE 9.5
Consider designing a controller for the second-order marginally stable system:

G(s) = 1/s^2
The desired specifications are to have less than 17% overshoot and a settling time of less than 15 sec. Using our second-order specifications, this relates to a damping ratio of 0.5 and a natural frequency of 0.5 rad/sec. The first task is to convert the system to a discrete transfer function. This is accomplished by taking the z transform of the physical system with a ZOH.

G(z) = [(z - 1)/z] Z{1/s^3} = [(z - 1)/z] [T^2 z(z + 1)] / [2(z - 1)^3]
Simplifying:

G(z) = (T^2/2) (z + 1)/(z - 1)^2
To illustrate direct design methods, let us choose a long sample time of 0.5 sec, or 2 Hz.
After substituting in the sample time we get the following discrete transfer function:
G(z) = 0.125 (z + 1)/(z - 1)^2
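This discrete model can be sanity-checked numerically (a Python sketch, not from the book): for a unit step input, the recursion implied by G(z) should reproduce the continuous double-integrator response c(t) = t^2/2 exactly at the sample instants, since ZOH discretization is exact for piecewise-constant inputs.

```python
def double_integrator_step(T, n):
    """Step response of G(z) = (T^2/2)(z + 1)/(z - 1)^2, i.e. the
    recursion c(k) = 2c(k-1) - c(k-2) + (T^2/2)(u(k-1) + u(k-2)),
    with a unit step input applied at k = 0 and zero initial conditions."""
    u = lambda k: 1.0 if k >= 0 else 0.0
    c_hist = [0.0, 0.0]            # c(k-2), c(k-1)
    out = [0.0]                    # c(0)
    for k in range(1, n):
        ck = 2 * c_hist[1] - c_hist[0] + (T ** 2 / 2) * (u(k - 1) + u(k - 2))
        c_hist = [c_hist[1], ck]
        out.append(ck)
    return out

# With T = 0.5 the samples equal c(t) = t^2/2 at t = kT exactly.
resp = double_integrator_step(0.5, 10)
```

The same recursion serves as the plant model when simulating candidate digital controllers against this system.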
Using the rules presented to develop root locus plots in Table 1 of Chapter 8, the open loop uncompensated locus paths can be plotted as shown in Figure 6. Remember that the rules remain the same, and thus we have two poles at z = 1 (marginally stable) and a zero at z = -1. This means we have one asymptote (the negative real axis) and the only valid section of real axis lies to the left of the zero at z = -1. The root locus plot is, as we would expect, unstable, since even the continuous system is marginally stable. In fact, and as shown earlier, when we sample our system we are no longer marginally stable but actually become unstable.
Using our knowledge of root locus we know that more than a proportional controller is needed, since the shape has to be changed to pull the loci into the stable regions. In the same way that adding a zero adds stability in the s-plane, we can use the same idea to attract the loci in the z-plane. Let us first try placing a zero at z = 1/2 to simulate a PD controller.
After placing the controller zero at z = 1/2 and constructing the new plot, we get the compensated root locus plot shown in Figure 7. By adding the zero, we have two zeros and two poles and thus no asymptotes. Additionally, we have constrained the only valid region on the real axis to fall within the stable unit circle region. At wn = 3pi/10T, approximately 1.88 rad/sec, the damping is approximately 0.7 and all the conditions have been met. To determine the gain, we can use the magnitude condition as
shown in earlier sections or use Matlab. At this point let us represent the gain as K and verify that the controller is realizable (i.e., can be programmed as a difference algorithm). This is easily determined by developing the difference equations from the controller transfer function:

Gc(z) = K(z - 1/2)
Figure 6 Example: open loop root locus plot in z-plane.
Figure 7 Example: z root locus modified by adding zero.

This gives the following difference equation:

u(k - 1) = K e(k) - (K/2) e(k - 1)
Now we see that we have a problem implementing this particular controller: if we shift one delay ahead to get u(k), the desired controller output, we would also need to know e(k + 1), the future error. This is a common problem occurring when the denominator is of lesser order than the numerator (in terms of z). To remedy this, let us go back and add another pole to the controller to increase the order of the denominator by 1. This allows us to keep one of our valid real-axis root loci sections in the stable unit circle. After we add an additional pole at z = -0.25 and move the zero to z = 0.9, we can redraw the root locus plot as shown in Figure 8.
Figure 8 Example: discrete root locus of system compensated with a pole and zero.

Although adding the additional pole creates the situation where the system
does become unstable at high gains (the loci paths leave the unit circle and an
asymptote is now at 180 degrees), the compensator does attract all three paths
to the desired region if the proper gain is chosen. The original zero from the rst
controller attempt was moved to 0.9 to pull the paths closer to the real axis. Now we
can use Matlab to nd the gain that results in the desired response and controller
transfer function. The commands listed here dene the original system, convert it
into the z-domain using a ZOH, and generate the compensated and uncompensated
root locus plots. Finally, Matlab is used to close the loop and generate the closed
loop step response, verifying our design.
%Program commands to design digital controller in z-domain
clear;
T=0.5;
nump=1; %Forward loop system numerator
denp=[1 0 0]; %Forward loop system denominator
sysp=tf(nump,denp); %System transfer function in forward loop
sysz=c2d(sysp,T,'zoh')
rlocus(sysz); %Draw discrete root loci
zgrid; %place z-grid on plot
numzc=[1 -0.9]; %Place zero at 0.9
denzc=[1 0.25]; %Place pole at -0.25
syszc1=tf(numzc,denzc,T);
hold;
rlocus(syszc1*sysz); %Draw compensated discrete root loci
K=rlocfind(syszc1*sysz)
%Close the loop and convert to z between samplers
cltfz=feedback(3.4*syszc1*sysz,1)
%Generate the closed loop step response
figure;
step(cltfz,10);
The Matlab plot showing the uncompensated and compensated root locus plots is
given in Figure 9.
Using the rlocfind command returns a gain of 3.4 when we place our three closed loop poles all near the real axis (damping near 1). This results in the final compensator transfer function expressed as

Gc(z) = U(z)/E(z) = 3.4(z - 0.9)/(z + 0.25)
To implement our controller in a microprocessor we can cross-multiply and express our transfer function as a difference equation:

u(k) = -0.25u(k - 1) + 3.4e(k) - 3.06e(k - 1)
This is realizable and easily implemented in a digital computer, as opposed to the previous design that only placed one zero in the z-domain. For this difference equation the controller output, u(k), is only dependent on the current error, e(k), and the previous controller output and error input. It is also possible to reduce our memory requirements by designing a similar controller that adds the system pole directly on
Figure 9 Example: Matlab discrete root locus plots of compensated and uncompensated systems.
the origin, instead of at z = -0.25. This has the desired effect of still making our controller realizable, and when we cross-multiply, the z added in the denominator only acts as a shift operator on u(k) and does not create a need for storing u(k - 1). For example, our transfer function would become

Gc(z) = U(z)/E(z) = K(z - 0.9)/z
And the new difference equation becomes

u(k) = K e(k) - 0.9K e(k - 1)
With this formulation only one storage variable, e(k - 1), is needed. This does have the effect of modifying our root locus to that shown in Figure 10. When we constrain the pole to be located at the origin, it does limit our design options, but as shown we are still able to stabilize the system while requiring one less storage variable.
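The plant recursion and the controller difference equation can also be simulated together to confirm the design (a plain-Python sketch of the unity-feedback loop; the loop bookkeeping is mine, not from the book):

```python
def closed_loop_step(n=200):
    """Unit-step response of the loop: plant G(z) = 0.125(z + 1)/(z - 1)^2
    driven by the controller u(k) = -0.25u(k-1) + 3.4e(k) - 3.06e(k-1)
    in a unity-feedback configuration."""
    c_hist = [0.0, 0.0]    # plant outputs c(k-2), c(k-1)
    u_hist = [0.0, 0.0]    # controller outputs u(k-2), u(k-1)
    e_prev = 0.0
    out = []
    for _ in range(n):
        # Plant: c(k) = 2c(k-1) - c(k-2) + 0.125(u(k-1) + u(k-2))
        ck = 2 * c_hist[1] - c_hist[0] + 0.125 * (u_hist[1] + u_hist[0])
        ek = 1.0 - ck                                  # unity feedback error
        uk = -0.25 * u_hist[1] + 3.4 * ek - 3.06 * e_prev
        c_hist = [c_hist[1], ck]
        u_hist = [u_hist[1], uk]
        e_prev = ek
        out.append(ck)
    return out

resp = closed_loop_step()
```

The simulated output overshoots and then settles at the command with zero steady-state error, consistent with the step response of Figure 11.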
To verify that we achieve our desired response with the sample rate of 2 Hz, we can use Matlab to generate the step response of the closed loop system. The step response for the system compensated with a zero at z = 0.9 and a pole at z = -0.25 is given in Figure 11. As this example demonstrates, we can directly design a digital controller in the z-domain using root locus techniques similar to those developed for the s-domain; only the mapping is different, and the rules for plotting are the same. The digital controller stabilized the system, as shown in Figure 11, even though we added a pole to the controller to make the algorithm realizable. Since the controller is developed directly in the discrete domain and accounts for the additional lag created by the sampling period, we can expect better agreement when implemented and tested than when a continuous system controller is converted into the discrete domain.
Figure 10 Example: discrete root locus of system compensated with a pole at the origin and a zero.
EXAMPLE 9.6
For the system given in Figure 12, use Matlab to design a discrete controller using discrete root locus techniques in the z-domain. Verify that the system meets the requirements and generate a step response using Matlab.
• A damping ratio, zeta = 0.7
• Steady-state error, ess = 0
• Settling time, ts = 2 sec
• Sample time, T = 0.05 s
Figure 11 Example: Matlab discrete step response plot of compensated system.
Figure 12 Example: system block diagram for discrete root locus controller design.
In the continuous domain the physical system is open loop stable with two poles located at -2 +/- 3.5j, giving a natural frequency equal to 4 rad/sec and a damping ratio equal to 1/2. To meet the system design goals, we will need to make the system type 1 (add an integrator), meet the system damping ratio requirement, and have a system natural frequency greater than approximately 3 rad/sec.
To begin the design process, we will use Matlab to add a ZOH and convert the physical system into an equivalent sampled system. The uncompensated root locus plot is drawn, with grid lines of constant damping ratio and natural frequency shown on the plot.
%Program commands to design digital controller in z-domain
clear;
T=0.05;
nump=8; %Forward loop system numerator
denp=[1 4 16]; %Forward loop system denominator
sysp=tf(nump,denp); %System transfer function in forward loop
sysz=c2d(sysp,T,'zoh')
rlocus(sysz); %Draw discrete root loci
zgrid; %place z-grid on plot

To design the compensator we will place a pole at z 1, the equivalent of placing a


pole at the origin in the s-domain (z esT ). Since this tends to make the system go
unstable very quickly, we will also place a zero at z 0:3 to help attract the loci path
inside the unit circle and to help add positive phase angle back into the system. This
is somewhat analogous to a PI controller where both a zero and pole is added from
the controller. Now we can use Matlab to verify the root locus plot and choose a
gain that allows us to meet our design specications. An interactive design tool is
also included in later versions of Matlab. It is invoked using the command rltool.
The Matlab commands used to dene the compensator and generate the compen-
sated root locus plot are given as
numzc=[1 -0.3]; %Place zero at 0.3
denzc=[1 -1]; %Place pole at 1
syszc1=tf(numzc,denzc,T);
hold;
rlocus(syszc1*sysz); %Draw compensated discrete root loci
K=rlocfind(syszc1*sysz)
Using these commands (in combination with the previous segment of commands) generates the plot given in Figure 13 and results in a controller gain equal to 0.2.
After adding the compensator we actually add additional instability (lag) into the system, as shown by the outer root locus plot. Choosing root locations near z = 0.9 allows us to retain our original dynamics (which were close to meeting the dynamic requirements) while significantly improving our steady-state error performance.

Figure 13 Example: Matlab discrete root locus plots of compensated and uncompensated systems.

As with a PI controller in the analog realm, this controller, by placing a pole at z = 1, tends to decrease the stability of the system. Finally, we will use the Matlab commands listed to generate a sampled step response of the compensated and uncompensated systems, shown in Figure 14.
%Close the loop and convert to z between samplers
cltf_c=feedback(0.2*syszc1*sysz,1)
cltf_uc=feedback(sysz,1)
%Generate step responses of both systems
figure;
step(cltf_c,cltf_uc,4);
Figure 14 Example: Matlab discrete step response plot of compensated system.
From Figure 14 we see that the controller significantly improves our steady-state error while coming close to meeting our settling time of 2 sec. To complete the solution, let us develop the difference equation for our discrete controller. Since we place one pole and one zero and have equal powers of z in the numerator and denominator, it should be realizable. The controller transfer function is

Gc(z) = U(z)/E(z) = 0.2(z - 0.3)/(z - 1)
And the new difference equation becomes

u(k) = u(k - 1) + 0.2e(k) - 0.06e(k - 1)

Our controller as designed is realizable and can easily be programmed as a sampled input-output algorithm. Recognize that had we started with an analog PI controller we would have achieved a similar difference equation, at least in form.
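Implemented as code (an illustrative Python sketch), the controller shows its integral action directly: for a sustained error, the pole at z = 1 makes the output ramp by 0.2 - 0.06 = 0.14 per sample, which is what drives the steady-state error to zero.

```python
def pi_like_controller(errors):
    """u(k) = u(k-1) + 0.2 e(k) - 0.06 e(k-1), the difference equation
    for Gc(z) = 0.2(z - 0.3)/(z - 1)."""
    u_prev = e_prev = 0.0
    out = []
    for e in errors:
        u = u_prev + 0.2 * e - 0.06 * e_prev
        out.append(u)
        u_prev, e_prev = u, e
    return out

# Sustained unit error: output ramps by 0.14 per sample after the first.
u_seq = pi_like_controller([1.0, 1.0, 1.0])  # approximately [0.2, 0.34, 0.48]
```

Only one past output and one past error need to be stored, consistent with the realizability discussion above.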
In conclusion, we see how the root locus techniques developed in earlier analog design chapters allow us to design digital controllers represented in the z-domain. The concept of using the time domain performance requirements (i.e., peak overshoot and settling time) can also be applied in the z-domain, where we pick controller pole and zero locations to achieve desired damping ratios and natural frequencies. The primary difference we noticed is that, in addition to the analog root locus observations of how different gains affect the root locus paths, changing the sample period modifies the pole and zero locations and thus changes the shape of the discrete root locus plot. Picking the desired pole locations in the z-domain also becomes more difficult since the lines of constant damping ratio and natural frequency are nonlinear.
9.5.2 Direct Response Design
The direct response design method (deadbeat in special cases) demonstrates additional differences between analog and digital controllers. Analog controllers are limited by physical constraints (i.e., the number of OpAmps feasibly used in a design), while digital controllers can approximate functions that would be impossible to duplicate with analog components. Before defining direct response design methods, however, a word of caution is again in order. It does not matter if the controller is analog, digital, or yet to be invented: if our physical system is not capable of producing the desired response, we have already failed. This theme has been repeated several times in preceding chapters and yet it bears repeating. A good control system must begin with a properly designed physical system, that is, with amplifiers, actuators, and plant components that are designed for the performance requirements that we have set. Another situation might arise when designing the controller first and then realizing, when specifying components, that our aggressive design has quadrupled the cost of what is actually required. The bottom line is that we should design the physical system and controller to meet our real requirements. With this stated, let us move on to direct response design methods.
The idea behind the direct response method is to pick a transfer function with
some desired response, set the controller and system transfer function equal to it, and
solve for the corresponding controller. This is easily shown using the following
notation. Let:
D(z) be the controller transfer function
T(z) be the desired transfer function (response)
G(z) be the discrete system transfer function (ZOH, continuous system, and sample effects)
C(z) be the sampled system output
R(z) be the sampled system command input
Then when we close the loop for a unity feedback system, we obtain the closed loop transfer function, which can be set equal to the desired transfer function, T(z):

C(z)/R(z) = D(z)G(z) / [1 + D(z)G(z)] = T(z)
This method is highly dependent on the accuracy of the models being used, since the controller is based directly on our system model, G(z). Modeling accuracy is pervasive throughout all levels of control system design, as all of the design tools demonstrated thus far (analog and digital) rely on the accuracy of the models used during the design process.
The simplest response to pick, a special case described as deadbeat control, makes the response equal to the command one sample time later. This is achieved by defining T(z) = z^-1. When we cross-multiply we see that the system output C(z) becomes the same as the system input, only one sample period later:

C(z) = R(z)z^-1   or   c(k) = r(k - 1)
There are obvious limitations, and deadbeat design deserves some additional com-
ments, provide in Table 1.
Further development of the guidelines presented in Table 1 is as follows:
1. There are physical limitations imposed on the desired response. For example, we could design a deadbeat controller to position a mass such that the mass should always be at its commanded position one sample period later. For this to work, the physical actuators must always be able to move the mass the required distance in the desired time. This may be limited by cost (extremely large actuators, $$$) or simply by command sequences that are not feasible (i.e., large step inputs). A simple modification can sometimes alleviate the problem of actuators being unable to follow the controller commands and thus saturating. By changing T(z) to z^-2 or z^-3,

Table 1 Guidelines for Designing Digital Controllers for a Deadbeat Response

1. The physical system must be able to achieve the desired effect within the span of one sample for true deadbeat response.
2. The method relies on pole-zero cancellation and is thus very dependent on the accuracy of the model used.
3. Good response characteristics do not guarantee good disturbance rejection characteristics.
4. The algorithm that results must be programmable (realizable). For general use it must not require knowledge of future variables.
5. Approximate deadbeat response can be achieved by allowing the response to reach the command over multiple samples; the intermediate values can also be defined.

we can design for a deadbeat response to occur in a set number of sample periods, thus providing more time for the system to respond. So for a deadbeat controller to be feasible, it must be physically able to follow the desired command profile. This is best accomplished by designing the physical components to satisfy the desired profile or by limiting the input sequences to feasible trajectories for the existing physical system.
2. The direct design method depends on pole-zero cancellation and is thus highly dependent on model accuracy, especially if the original system poles were unstable. For example, if the deadbeat controller cancelled a pole at z = 2 in the z-plane by placing a zero there, and the physical system changed or was improperly modeled, the unstable pole is no longer cancelled and will cause problems.
3. The direct design controller is targeted at producing the desired response
and no guarantee is made with respect to disturbance rejection properties. These
should be simulated to verify satisfactory rejection of disturbances. It is possible to
achieve good characteristics when responding to a command input and yet have very
poor rejection of disturbances.
4. The controller resulting from direct response design methods must be computationally realizable, that is, able to be implemented using a digital computer. It is obvious from the example in the previous section that the lowest power of z^-1 in the denominator must be less than or equal to the lowest power of z^-1 in the numerator. Thus, the desired response T(z) must be chosen such that its lowest power of z^-1 is equal to or greater than the lowest power of z^-1 in G(z). It is recommended to add powers of (1 - z^-1) to the numerator if it is of lower order than the denominator; add the number of powers required to make the orders equal.
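This causality check can be sketched in a few lines of code (an illustration in Python rather than the book's Matlab; the helper name is ours). Writing D(z) in ascending powers of z^-1, the controller is realizable when the denominator's first nonzero coefficient appears at a delay no greater than the numerator's, so the difference equation never needs a future input:

```python
def is_realizable(num, den):
    """Causality check for D(z) = num/den in ascending powers of z^-1.

    num[i] and den[i] are the coefficients of z^-i.  The difference
    equation solves for u(k) from the first nonzero denominator term,
    so that term's delay must not exceed the numerator's first delay.
    """
    def first_delay(coeffs):
        return next(i for i, c in enumerate(coeffs) if c != 0)
    return first_delay(den) <= first_delay(num)

# Deadbeat controller of Example 9.7:
# D(z) = (8z^-1 - 16z^-2 + 8z^-3)/(1 + z^-1 - z^-2 - z^-3)
print(is_realizable([0, 8, -16, 8], [1, 1, -1, -1]))  # True: realizable
# Denominator delayed more than numerator -> would need a future input
print(is_realizable([1], [0, 1]))                     # False: not realizable
```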
5. Finally, it is common to develop large overshoots and/or oscillations between samples with deadbeat controllers, since they require large control actions (high gains) to achieve the response, and exact cancellation of poles and zeros seldom occurs. Modifying the control algorithm to accept slower responses will alleviate these tendencies. One option is to use something like a Kalman controller. A Kalman controller chooses the output to be a series of steps toward the correct solution, each step defining the desired output level at the corresponding sample. If we wanted to reach a final value of 1, then we would define the intermediate values such that each successive increase in output is the next coefficient and where all the coefficients, when added, are equal to 1. Thus, if we wanted zero error in five samples (four sample periods) from a unit step input, instead of during one sample period, we might use the output series:

Y_desired(z) = 0 + 0.4z^-1 + 0.3z^-2 + 0.2z^-3 + 0.1z^-4

Assuming that our system components are capable of the desired response and that our model is accurate, we would have the following output value at each sample:

Sample 1: y(1) = 0
Sample 2: y(2) = 0.4
Sample 3: y(3) = 0.7 (add the previous to the new, 0.4 + 0.3)
Sample 4: y(4) = 0.9 (add the previous to the new, 0.7 + 0.2)
Sample 5: y(5) = 1.0 (add the previous to the new, 0.9 + 0.1)

As the example demonstrates we can use this same method to pick the shape of our
response for any system, subject of course to the physics of our system. This concept
is further illustrated in a later example problem.
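The accumulation above is just a running sum of the series coefficients, which can be sketched in a few lines (a Python illustration rather than the book's Matlab; the coefficient series is the one from the example):

```python
from itertools import accumulate

# Increment coefficients of Y_desired(z) = 0 + 0.4z^-1 + 0.3z^-2 + 0.2z^-3 + 0.1z^-4
increments = [0.0, 0.4, 0.3, 0.2, 0.1]

# Each sample's output is the running sum of the increments so far
outputs = [round(v, 4) for v in accumulate(increments)]
print(outputs)  # [0.0, 0.4, 0.7, 0.9, 1.0]
```

Because the coefficients sum to 1, the staircase lands exactly on the unit step command at the fifth sample.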
In broader terms than described for deadbeat response, the advantage of direct response design is that we can choose any feasible response type for our system. For example, we can choose a controller that will cause our system to respond as if it were a first-order system with time constant τ, as long as our system is physically capable of responding in such a way and we have accurate models of our system. We can choose virtually any response that meets the following three criteria. One, the response must be feasible. Two, the resulting controller algorithm must be realizable. And three, an accurate model of the system must be available. Several examples further illustrate the concept of direct design methods for digital controllers.

EXAMPLE 9.7
Design a deadbeat controller whose goal is to achieve the desired command in two sample periods. The sample frequency is 2 Hz and the physical system is given as

G(s) = 1/s^2

To design the system we first need to convert the continuous system into an equivalent discrete representation. The system transfer function after including the ZOH is

G(z) = [(z - 1)/z] Z{1/s^3} = [(z - 1)/z] [T^2 z(z + 1)]/[2(z - 1)^3] = [T^2 (z + 1)]/[2(z - 1)^2]
Substituting in the sampling rate of 2 Hz (T = 0.5):

G(z) = 0.125 (z + 1)/(z - 1)^2
Now we can close the loop and derive the closed loop transfer function, which can then be set equal to our desired response of T(z) = z^-2. This is the same as expressing our input and output as c(k) = r(k - 2), or saying that our output should equal the input after two sample periods. D(z), our controller, becomes our only unknown in the expression

T(z) = C(z)/R(z) = [D(z)(1/8)(z + 1)/(z - 1)^2] / [1 + D(z)(1/8)(z + 1)/(z - 1)^2] = z^-2
This expression can now be solved for D(z):

D(z) = U(z)/E(z) = (8z^2 - 16z + 8)/(z^3 + z^2 - z - 1)

D(z) = U(z)/E(z) = (8z^-1 - 16z^-2 + 8z^-3)/(1 + z^-1 - z^-2 - z^-3)

D(z) can easily be converted to difference equations for implementation. It is computationally realizable since the lowest power of z^-1 occurs in the denominator.

u(k) = -u(k - 1) + u(k - 2) + u(k - 3) + 8e(k - 1) - 16e(k - 2) + 8e(k - 3)

Recognize that we need six storage variables to implement this control algorithm and that it is highly dependent on the accuracy of our model, since it relies on using the controller to cancel the poles of the physical system.
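As a quick numerical check of the deadbeat behavior (a Python sketch rather than the book's Matlab, assuming the plant model is exact), the plant G(z) = 0.125(z + 1)/(z - 1)^2 can be written as the difference equation y(k) = 2y(k-1) - y(k-2) + 0.125[u(k-1) + u(k-2)] and simulated in a loop with the controller above; the output reaches the unit step command at the second sample and stays there:

```python
N = 10
r = [1.0] * N   # unit step command
y = [0.0] * N   # plant output at the samples
u = [0.0] * N   # controller output
e = [0.0] * N   # error samples

def past(seq, k):
    """seq[k] for k >= 0, else 0 (system initially at rest)."""
    return seq[k] if k >= 0 else 0.0

for k in range(N):
    # Plant G(z) = 0.125(z+1)/(z-1)^2 as a difference equation
    y[k] = 2*past(y, k-1) - past(y, k-2) + 0.125*(past(u, k-1) + past(u, k-2))
    e[k] = r[k] - y[k]
    # Deadbeat controller difference equation from the example
    u[k] = (-past(u, k-1) + past(u, k-2) + past(u, k-3)
            + 8*past(e, k-1) - 16*past(e, k-2) + 8*past(e, k-3))

print([round(v, 6) for v in y])  # 0 at samples 0-1, then 1.0 from sample 2 on
```

The controller output u(k) keeps ringing between positive and negative values even after the output settles, which is the inter-sample oscillation tendency noted in guideline 5.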

EXAMPLE 9.8
Use Matlab to verify the deadbeat controller designed in Example 9.7. When the controller and control loop are added, we can represent the system with the block diagram in Figure 15. In Matlab we will define the physical system, apply the ZOH, and convert it into the z-domain, where it can be combined with the controller D(z) and the loop closed. Both a step and ramp response can be calculated and plotted.
%Program commands to direct design digital controller in z-domain
clear;
T=0.5;
nump=1;                %Forward loop system numerator
denp=[1 0 0];          %Forward loop system denominator
sysp=tf(nump,denp);    %System transfer function in forward loop
sysz=c2d(sysp,T,'zoh')
numzc=[8 -16 8];       %Numerator of controller
denzc=[1 1 -1 -1];     %Denominator of controller
syszc1=tf(numzc,denzc,T);
%Close the loop and convert to z between samplers
cltfz=feedback(syszc1*sysz,1)
%Verify the discrete step and ramp response plots
step(cltfz,4);
figure;
lsim(cltfz,[0:0.5:4]);

After these commands are executed, we get the step and ramp responses shown in Figures 16 and 17, respectively. It is very important to remember that these plots only tell us that the system is at the desired location at the actual sample times, two samples after the command has been given. The system may be oscillating in between the sample periods. Additionally, it is necessary to calculate the power requirements for this system to achieve the responses in Figures 16 and 17. The power requirements increase as we make our desired response faster. It is primarily the amplifiers and actuators acting on the physical system that become cost prohibitive under unrealistic performance requirements.

EXAMPLE 9.9
Use Matlab to design a controller for the system given in Figure 18. The response should approximate a first-order system and take four sample periods to reach the command. The sample frequency for the controller is 5 Hz, T = 0.2 sec. Since we

Figure 15 Example: system block diagram for discrete direct response controller design.

Figure 16 Example: Matlab discrete step response plot of compensated system.

Figure 17 Example: Matlab discrete ramp response of compensated system.

Figure 18 Example: system block diagram for discrete direct response controller design.

want the system to approximate a first-order system response, we can define our desired response to be based on the standard first-order unit step response values:

C(z) = (0 + 0.63z^-1 + 0.235z^-2 + 0.085z^-3 + 0.05z^-4) R(z)

Assuming that our system components are capable of the desired response and that our model is accurate, this should result in the following output value at each sample:
Sample 1: c(1) = 0
Sample 2: c(2) = 0.63
Sample 3: c(3) = 0.865 (add the previous to the new, 0.63 + 0.235)
Sample 4: c(4) = 0.95 (add the previous to the new, 0.865 + 0.085)
Sample 5: c(5) = 1.0 (add the previous to the new, 0.95 + 0.05)
We should recognize this as the normalized first-order system response to a unit step input, with a system time constant equal to one sample period. To derive the transfer function representation we can solve for C/R and multiply the numerator and denominator by z^4:

T(z) = C(z)/R(z) = (0.63z^3 + 0.235z^2 + 0.085z + 0.05)/z^4
To facilitate the use of the computer we can solve directly for our controller D(z) in terms of our system transfer function G(z) and our desired response transfer function T(z). The desired response transfer function is set equal to the closed loop transfer function, as defined earlier:

T(z) = C(z)/R(z) = D(z)G(z)/[1 + D(z)G(z)]

For this example with unity feedback we can now solve directly for D(z):

D(z) = T(z)/[G(z)(1 - T(z))]
This allows us to use Matlab to convert our analog system into its discrete equivalent, define our desired response T(z), and subsequently solve for our controller D(z). We can verify our design by closing the loop and plotting the unit step input response of the system. The Matlab commands used are given as

%Program commands to design digital controller in z-domain
clear;
T=0.2;
nump=8;                %Forward loop system numerator
denp=[1 4 16];         %Forward loop system denominator
sysp=tf(nump,denp);    %System transfer function in forward loop
sysz=c2d(sysp,T,'zoh')
sysTz=tf([0 0.63 0.235 0.085 0.05],[1 0 0 0 0],T)
Dz=sysTz/((1-sysTz)*sysz)
sysclz=feedback(Dz*sysz,1);
step(sysclz,2)

After defining the sample time and continuous system transfer function, we use Matlab to calculate the discrete equivalent, with the ZOH, which is returned as

sysz = G(z) = (0.1185z + 0.09037)/(z^2 - 1.032z + 0.4493)

The controller transfer function, D(z), is then calculated and expressed as

D(z) = U(z)/E(z)
     = (0.63z^5 - 0.4149z^4 + 0.1257z^3 + 0.06791z^2 - 0.01338z + 0.02247) / (0.1185z^5 + 0.0157z^4 - 0.08478z^3 - 0.03131z^2 - 0.01361z - 0.004518)
This is realizable since no future knowledge of our system is required, and we can express our controller as a difference equation that can be implemented digitally. Finally, the loop can be closed, now including our controller D(z), and the unit step response of the system plotted as shown in Figure 19.
It is easy to see that we have reached our desired output values at the corresponding sample times and that the response of the system approximates the response of a first-order system with a time constant of one sample period. Remember that since this is a simulation, the model used to develop the controller and the model used when simulating the response are identical, and therefore the results behave exactly as designed. In reality it is difficult to accurately model the complete system, especially with the linear models we are constrained to in the z-domain, and our results need to be evaluated in the presence of modeling errors and disturbances on the system.
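The controller coefficients can also be cross-checked without Matlab (a sketch, not part of the text), since D(z) = T/(G(1 - T)) reduces to N_T(z)(z^2 - 1.032z + 0.4493) over (0.1185z + 0.09037)(z^4 - N_T(z)), where N_T(z) = 0.63z^3 + 0.235z^2 + 0.085z + 0.05 is the numerator of T(z). Two polynomial multiplications reproduce the coefficients:

```python
def polymul(a, b):
    """Multiply two polynomials given as coefficient lists, highest power first."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

NT     = [0.63, 0.235, 0.085, 0.05]         # numerator of T(z)
g_den  = [1, -1.032, 0.4493]                # denominator of G(z)
g_num  = [0.1185, 0.09037]                  # numerator of G(z)
one_mT = [1, -0.63, -0.235, -0.085, -0.05]  # z^4 - NT(z), from 1 - T(z)

num_D = polymul(NT, g_den)                  # numerator of D(z)
den_D = polymul(g_num, one_mT)              # denominator of D(z)
print([round(c, 4) for c in num_D])         # ~ [0.63, -0.4152, 0.1255, 0.0679, -0.0134, 0.0225]
print([round(c, 4) for c in den_D])         # ~ [0.1185, 0.0157, -0.0848, -0.0313, -0.0136, -0.0045]
```

The small differences in the later decimal places come from using the rounded G(z) coefficients above instead of Matlab's full-precision values.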
As this chapter demonstrates, many of the same design techniques (conversion and root locus) that were developed for analog systems can be used for digital systems. The primary difference is the addition of the sample effects and how they modify the response and stability characteristics of our system. If the sample frequency is fast relative to our physical system, we can even use controllers designed in the s-domain with good results. As our sample time becomes an issue, however,

Figure 19 Example: Matlab discrete step response plot of compensated system.



designing directly in the z-domain is advantageous, since we account for the sample time at the onset of the design process. Finally, and for this there is no analog counterpart, we can choose the response that we wish to have and solve for the controller that gives us that response. Although a straightforward process analytically, this method relies on our understanding the physics of our system to the extent that we can choose realistic goals and design specifications. Unrealistic specifications may lead to unsatisfactory performance and/or high implementation costs.
Finally, with all controllers designed here we should also verify our disturbance rejection properties. A good response due to a command input does not guarantee good rejection of a disturbance input. Since the disturbance usually acts directly on the physical system, it is difficult to model analytically unless we predefine the disturbance input sequence to enable us to perform the z transform. Tools like Simulink, the graphical block diagram interface of Matlab, become useful at this point, since we can include continuous, discrete, and nonlinear blocks in one block diagram. This is demonstrated in Chapter 12 when the nonlinearities of different electrohydraulic components are included in our models.

9.6 PROBLEMS
9.1 What should the sample time be when converting existing analog controllers
into discrete equivalents and expecting to achieve similar performance?
9.2 Ziegler-Nichols tuning methods are applicable to discrete PID controllers. (T or F)
9.3 What is the goal of bumpless transfer?
9.4 What are the advantages of expressing our difference equations as velocity algorithms?
9.5 Write the velocity algorithm form of a difference equation for the PI-D controller. Let u be the controller output and e the controller input (error).
9.6 For the following phase lag controller in s, approximate the same controller in z using the pole-zero matching method. Assume a sample time of 1/2 sec.

Gc(s) = (0.25s + 1)/(s + 1)

9.7 Use Tustin's approximation to find the equivalent discrete controller when given the following continuous system controller transfer function (leave T as a variable in the discrete transfer function):

Gc(s) = (s + 1)/(0.1s + 1)

9.8 For the system block diagram in Figure 20,
a. Design a phase-lead controller in the s-domain (see Example 5.15) to achieve a closed loop damping ratio equal to 0.5 and a natural frequency equal to 2 rad/sec. Use root locus techniques in the s-domain.
b. Convert the phase-lead controller into a discrete equivalent using pole-zero matching with a sample time, T = 0.05 sec. Draw the block diagram and verify the unit step response of the discrete system using Matlab.

Figure 20 Problem: block diagram of controller, system, and transducer.

9.9 For the phase-lead controller in Figure 21 and using a sample time, T = 0.5 sec,
a. Find the equivalent discrete controller using pole-zero matching.
b. Find the equivalent discrete controller using Tustin's approximation.
c. Use Matlab to plot the step response of each discrete controller and comment on the similarities and differences.

Figure 21 Example: block diagram of physical system for phase-lead controller.

9.10 For the system in Figure 22, tune the phase-lag digital controller directly in the z-domain using root locus techniques. Draw and label the root locus plot and solve for the gain K that results in repeated real roots (the point where the loci leave the real axis). The sample time is T = 2 sec.

Figure 22 Problem: block diagram of discrete controller and system.

9.11 For the system in Figure 23, develop the z-domain root locus when the sample time is T = 0.5 sec.
a. Draw and label the root locus plot.
b. Describe the range of response characteristics that can be achieved by varying the proportional gain.

Figure 23 Problem: block diagram of discrete controller and system.

9.12 For the plant model transfer function, design a unity feedback control system using a PI controller (K = 2, Ti = 1). Draw the block diagrams for both the continuous system (see Problem 5.14) and the equivalent discrete implementation. Use Tustin's approximation to derive the equivalent discrete controller transfer function. Determine the steady-state error for both systems when subjected to step inputs with a magnitude of 2; use the FVT in the s- and z-domains.

G(s) = 5/(s + 5)
9.13 For the system shown in the block diagram in Figure 24, use Matlab to design a discrete controller with a sample time of T = 0.5 sec that has
a. Less than 5% overshoot due to a step input.
b. Zero steady-state error due to a step input.
The solution should contain the Matlab root locus plots, the step response of the system, and the difference equation that is required to implement the controller.

Figure 24 Problem: block diagram of discrete controller and system.

9.14 With the third-order plant model and unity feedback control loop in Figure 25, and using a sample time T = 0.8 sec, use Matlab to
a. Design a discrete controller that exhibits no overshoot and the fastest possible response when subjected to a unit step input.
b. Write the difference equation for the controller. Be sure that it is realizable.
c. Verify the root locus and step response plots (compensated and uncompensated, i.e., D(z) = 1) using Matlab.

Figure 25 Problem: block diagram of discrete controller and system.

9.15 For the system shown in the block diagram in Figure 26, design a discrete compensator that
a. Places the closed loop poles at approximately z = 0.25 ± 0.25j using a sample time of T = 0.04 sec. Design the simplest discrete controller possible and derive both the required gain and compensator pole and/or zero locations.
b. Leaving the controller values the same, change the sample time to T = 1 sec and again plot the discrete root locus. Comment on the differences.

Figure 26 Problem: block diagram of discrete controller and system.



c. Verify both designs with a unit step input response plot using Matlab. Comment on how well this correlates with parts a and b.
9.16 For the system shown in the block diagram in Figure 27, design a discrete compensator that achieves a deadbeat response in one sample. The sample period is T = 0.2 sec.

Figure 27 Problem: block diagram of discrete controller and system.

9.17 For the system shown in the block diagram in Figure 27, design a discrete compensator that achieves a deadbeat response over two sample periods. The sample period is T = 0.1 sec.
9.18 For the system shown in the block diagram in Figure 28, design a discrete compensator that achieves an approximate ramp response to the desired value over four sample periods. The sample period is T = 0.4 sec.

Figure 28 Problem: block diagram of discrete controller and system.

9.19 For the system shown in the block diagram in Figure 28, design a discrete compensator that achieves deadbeat response over one sample period. The sample period is T = 0.6 sec.
a. Use Matlab to solve for the controller transfer function.
b. Verify the unit step response of the closed loop feedback system.
c. Build the block diagram in Simulink, connect a scope to the output of the controller block, and comment on whether there are any inter-sample oscillations.
9.20 For the system shown in the block diagram in Figure 29, design a discrete compensator that achieves deadbeat response over three sample periods. On the second sample the system should be 65% of the way to the desired value and 100% of the desired value on the third sample. The sample period is T = 1 sec.
a. Use Matlab to solve for the controller transfer function.
b. Verify the unit step response of the closed loop feedback system.
c. Build the block diagram in Simulink, connect a scope to the output of the controller block, and comment on whether there are any inter-sample oscillations.

Figure 29 Problem: block diagram of discrete controller and system.


10
Digital Control System Components

10.1 OBJECTIVES
• Examine the different interfaces between analog and digital signals.
• Learn the common methods of implementing digital controllers.
• Examine strengths and weaknesses of different implementation methods.
• Present basic programming concepts for microprocessors and PLCs.
• Identify the components available when digital signals are used.

10.2 INTRODUCTION
With the rate at which computers are developing, it is with a measure of caution that this chapter is included. The goal is not to predict the future of digital controllers, benchmark past progress, or provide a comprehensive guide to the available hardware and software. Rather, the goal is to provide an overview of the types of digital controllers in use, how they typically are used, and how the designs from the previous chapter are commonly implemented and made useful. The basic components have somewhat stabilized for the present, with the noticeable developments occurring in terms of speed, processing power, and communication standards. It is safe to say that the influence of digital controllers will continue to increase.
In this chapter, the three broad categories of digital controllers are presented as computer based, microcontrollers, and programmable logic controllers (PLCs). In general, each type ultimately relies on the common microprocessor. The strengths and weaknesses of each type are examined from a hardware and software perspective. The common digital transducers and actuators used in implementing digital controllers are also presented. Finally, the problem of interfacing low-level digital signals to real actuators is examined. An example of this is the common method using pulse width modulation (PWM).

10.3 COMPUTERS
Computers have become commonplace when implementing digital controllers. In many situations they are the cheapest and most flexible option, and yet computers have struggled in industrial settings. While reliability is the primary reason, it is not the reliability of the hardware half as much as it is that of the software. Current operating systems are just not robust enough to run continuously in a control environment and provide the level of safety required. To be fair, they are designed to run all tasks satisfactorily, not to run one task reliably. In industrial environments, PLCs are still dominant and are discussed in more detail in a subsequent section.
With the continuing increase in performance in the midst of decreasing prices, it is envisioned that more and more PC-based applications will be developed. The PC has many advantages once the reliability issues are resolved. It is extremely easy to upgrade and is flexible in adding capabilities as time progresses. As compared to microcontrollers and PLCs, there are many straightforward programming packages, some with a purely graphical interface (more computer overhead). Microcontrollers run best when programmed with assembly language, and PLCs generally have an application where the program is written using ladder logic and downloaded to the chip. Computers, on the other hand, allow many different computer languages, compilers, programs, etc., to interface with the outside data input/output (IO) ports. Since the PC processor is performing all the control tasks in real time, slider bars, pop-up windows, etc., can be used to tune the controller on the fly and immediately see the effects. In addition, with large hard drives, cheap memory upgrades, etc., the PC can collect data on the fly and store it for long periods of time. Hundreds of different control algorithms could be stored for testing or even switched between during operation. Microcontrollers and PLCs have limitations on the number of instructions, byte lengths, and words, and often use integer arithmetic, requiring good programming skills. It should be clear that at this point in time, the PC-based controller is a great choice for developing processes, research, and testing but is not as suited for continuous industrial control or for high-volume applications, as in OEM (original equipment manufacturer) controllers. What we are seeing, and will continue to see, is the line between the two systems becoming more blurred. New bus architectures and interface cards are allowing computer processors to act as programmable controllers in industrial settings. Now we will examine some of the required components that allow us to use our PC as control headquarters. Figure 1 illustrates the common components used in PC-based control.
It is obvious at this point that for our computer to control something, it must be capable of inputs and outputs that can interface with physical components. This is really the heart of the matter. The computer (any processor) is designed to work with low-level logic signals using essentially zero current. Physical components, on the other hand, operate best when supplied with ample power. The goal then becomes developing interfaces between low-level processor signals and the high power levels required to effect changes in the physical system. Various computer interfaces can be purchased and used, each with different strengths and weaknesses. An additional problem is the quantization of analog signals describing physical phenomena (i.e., the temperature outside is not fixed at 30 and 50 degrees but may vary infinitely between those two points and beyond). Converters, examined below, are used to convert between these two signal types, but with limited resolution. Finally, both isolation (computer chips are not fond of high voltages and currents) and power amplification devices are required to actually get the computer to control a physical system. These last items are the same concerns all microcontrollers and PLCs share, and they are overcome using similar methods and components.

Figure 1 Basic PC-based control system architecture.

To begin with, we will look at the common components used to interface computers with physical systems for the purpose of closed loop control and data acquisition.

10.3.1 Computer Interfaces


This section breaks the common computer components into three sections: computer hardware, interface hardware, and software. By the end we should have a good idea of what components are required for our applications.
10.3.1.1 Computer Hardware
The main choices discussed here are desktop, laptop, and dedicated digital signal processing (DSP) systems. Today's laptops are capable of the processing speed, memory, and interface ports required for closed loop controllers. Their largest problems are cost and throughput. For the same processing power, memory, and monitor size, laptops cost considerably more than their desktop counterparts. However, if the controller must be mobile (i.e., testing antilock braking systems at the proving grounds), then laptops are worth the cost and a viable choice. The largest problems, assuming the cost difference is worth the portability, are interface limitations and the resulting additional costs. Aside from going to a dedicated DSP, there are still limited options for interfacing data acquisition cards with laptops for maximum performance. While serial port devices are cheap, they are severely limited in maximum data transfer rates, often limiting the sampling frequency to around 120 Hz maximum for one channel. In addition, few exist with analog output capabilities. That being said, for slow processes, controlling items at home, and learning, they are small, light, cheap, and fun.
The next level up is to use the parallel port. For a slightly higher cost, units exist with digital IO, analog in, and analog out capabilities. The data transfer rate is also higher but still limited by parallel port transfer speeds. To take full advantage of the laptop's processing power, Personal Computer Memory Card International Association (PCMCIA) or Universal Serial Bus (USB) ports must be used. Much greater data throughputs are possible, as shown in Table 1.
The PCMCIA standards have various levels, and not all are as high as 100 megabytes/sec. In addition, the newer cards are not compatible with the older slots, and these cards are sometimes difficult to set up. USB devices are now common and are capable of plug-and-play operation, supplying enough power to sometimes run connected devices, daisy-chaining up to 127 devices, and compatible with all computers incorporating such a port (i.e., IBM compatible, Apple, Unix, etc.).
Desktop systems (within the PC category) are usually the best value if portability and cutting-edge performance (DSP) are not required. Boards using the ISA bus (original Industry Standard Architecture) are common, as seen by the many companies producing such boards. Prices range from several hundred dollars to over several thousand dollars, depending on the features required. Peripheral component interconnect (PCI), the latest mainstream bus architecture and faster and friendlier than ISA, is also becoming popular with interface card manufacturers. PCI is equivalent to having DMA (direct memory access, which allows the board to seize control of the PC's data bus and transfer data directly into memory) with an ISA card. Both bus systems provide more than enough data throughput to keep up with the converters on the data acquisition card. It is also possible to install multiple boards in one computer to add additional capabilities. Systems have been constructed with hundreds of inputs and outputs.
If extremely high speeds (MHz sampling rates) along with many channels are required, then dedicated digital signal processors are required. They generally consist of their own dedicated high-speed processor (or multiple ones) and conversion chips, and they receive only supervisory commands from the host PC. Costs are generally much greater than typical PC systems installed with interface boards and hence they are limited to special applications.
10.3.1.2 Data Acquisition Boards
In this section we define the characteristics common to the hardware data acquisition boards used in the various systems listed in the previous section. Data acquisition boards are very common and are used extensively in interfacing the PC with analog and digital signals. There are many specialty boards with different inputs and outputs

Table 1 Comparative Throughput Rates of Various Computer Interfaces

Interface (port) type     Transfer rate (megabytes per second)
Serial                    0.01
Parallel                  0.115
USB                       1.5
SCSI-1 and 2              5-10
(Wide) Ultra SCSI         20-40
IEEE 1394 FireWire        12.5-50
PCMCIA                    Up to 100
Digital Control System Components 403

that, although similar, are not discussed here. Table 2 lists the common features to
compare when deciding on which board to use.
The common bus architectures have already been examined in Table 1. In
general, speed and resolution are proportional to cost (when other features are
similar). Speed is generally listed in samples/second; if the board is multiplexed,
this rate must be divided by the number of channels being sampled to obtain the
per-channel sample rate. We will hold off on resolution and software and discuss
them in more detail in subsequent paragraphs.
When dealing with any inputs or outputs, the voltage (or current) ranges must
be compatible with the system you are trying to control, namely the transducer and
actuator voltage levels. Many PC-based data acquisition boards are either software
or hardware selectable and are therefore flexible in choosing a signal range that will
work. However, many boards select one range that then applies to all channels.
The discussion on resolution will illustrate some potential problems with this.
Common ranges are 0–1 V, 0–5 V, 0–10 V, ±5 V, ±10 V, and 4–20 mA current
signals.
The number of inputs and outputs varies, with a larger number of digital IO
ports generally available. Common boards include up to 16 single-ended analog
inputs and thus 8 double-ended inputs. With double-ended inputs a separate ground
is required for each channel, since the board references only the differential voltage
between two channels. This has many advantages when operating in noisy environ-
ments but usually requires two input channels for one signal. Single-ended inputs
have one common ground, and each AD (analog-to-digital) converter channel
references this ground.
Some boards will include counter channels that can be configured to count
pulses or read a pulse train as a frequency. In addition, various output capabilities
besides analog voltages can be found: 4–20-mA outputs are becoming more com-
mon, and PWM outputs and stepper motor drivers are also available. PWM and
stepper motor outputs are generally cheaper, since digital ports are used and a
DA (digital-to-analog) converter circuit is not required. Finally, extras to consider
are current ratings, warranties, linearity, and protection ratings. For most controller
applications linearity is not a major issue, being much better than that of the compo-
nents connected to the board. Current ratings, in general, are small; assume that we
will always have to provide some amplification before a signal can be used. The
over-voltage protection ratings are important if you do not include isolation circuits
and expect to operate in a noisy environment. Input and output impedance figures
are also available from many board manufacturers.

Table 2 Basic Features of Data Acquisition Boards

Basic features            No. of inputs and outputs    Extra considerations
Cost                      No. of digital outputs       Over-voltage protection
Platform/bus              No. of analog inputs         DMA capability
Speed (kHz–MHz)           No. of analog outputs        Accuracy, linearity
Resolution                No. of digital inputs        Terminals/accessories
Software drivers          Counters/pulse frequency     Current capabilities
Voltage ranges (in/out)   Extra channels (PWM)         Warranty
404 Chapter 10

Finally, we conclude this section by defining resolution. The most common
component when analog signals are required, and a common chip made by many
integrated circuit manufacturers, is the AD or DA converter. AD represents the
analog-to-digital conversion process and DA the digital-to-analog. Most
commercial designs use successive approximation or flash converters. Flash conver-
ters are faster because the comparators act in parallel rather than in series as in the
successive approximation technique. All conversions take time, generally 1–100
μsec, and if channels are multiplexed to save money (i.e., one channel is converted,
the next is switched to the same converter, etc.), each additional channel lengthens
the total conversion time required to acquire all the data.
The AD/DA chips define the best resolution possible under ideal conditions
and generally range from 8- to 16-bit converters. Realized accuracies must include
the operations on the signal before and after conversion and depend in part on the
accuracies of the input resistors on the operational amplifier. Without going into the
actual conversion details, let us see how the resolution affects us. Remember that we
are representing an analog (i.e., continuous) signal in digital form. This means that
there are a limited number of values with which the analog signal may be represented.
The number of levels into which the analog signal can be resolved is determined by
the resolution of the AD converter. The common resolutions and resulting numbers
of possible values are shown in Table 3. The quantization step of the actual signal
can be found with the following equation:

Vquantization = (Vmax - Vmin) / (2^n - 1)

The significance of the quantization error can be shown through a simple example.

EXAMPLE 10.1
Determine the quantization voltage (possible error) when a voltage signal is
converted into an 8-bit digital signal and the range of the AD converter is (a) 0–10 V
and (b) ±10 V.
Using our system with an 8-bit converter for a range of 0–10 V, our signal is
represented by 256 individual levels (00000000 through 11111111 in binary). We can
calculate the quantization voltage as
calculate the quantization voltage as

Vquantization = (Vmax - Vmin) / (2^n - 1) = (10 V - 0 V) / (2^8 - 1) = 39 mV

Table 3 Common AD and DA Converter Resolutions

Bits    Representation    No. of discrete values possible
8       2^8               256
10      2^10              1024
12      2^12              4096
16      2^16              65,536

Thus the smallest increment is 39 mV. If we configure the computer to accept ±10 V
signals, we now have twice the voltage range to be represented by the same number
of discrete values.
Vquantization = (Vmax - Vmin) / (2^n - 1) = (10 V - (-10 V)) / (2^8 - 1) = 78 mV
Now our voltage level must increase or decrease by almost 0.1 V before we see the
binary representation change.
The practical side is this: when we design systems it is best to choose all signals
to use the full range of the AD converter to maximize resolution, unless, of course,
each channel on the board can be configured separately. If some sensors already
have ±10 V outputs, then choosing one with a 0 to 1 V output severely limits the
resolution, since we are only using 1/20th of a range that is already limited to a
finite number of discrete values. Obviously, as resolution is increased this becomes
less of a concern, but nonetheless good design practices should be followed.
10.3.1.3 Software
Finally, one of the major considerations when using data acquisition boards is soft-
ware support. If an investment has already been made in one specific software
package, then the choices are probably narrowed down. Most vendors supply
proprietary software packages, and little has been done to establish a standard pro-
tocol. Many nontraditional software programs, for an extra fee, are beginning to
offer drivers allowing data acquisition boards to interface with them. Matlab is
one such example; its toolboxes allow us to run hardware-in-the-loop controllers.
There are two main branches to consider when choosing software for imple-
menting digital controllers on PCs. Graphical-based programs are easy to use and
program, but the graphical overhead limits the throughput when operating control-
lers in real time. If the hardware and software support DMA (direct memory access),
then batches of data can be acquired at very high sample rates and post-processed
before being displayed or saved. This technique, while great for capturing transients,
has little benefit when operating a PC-based controller in real time, where the signal is
acquired once, processed, and the controller output returned to the outside system. If
learning to program is something we do not wish to do, then graphical-based pro-
grams are the primary option and work fine for slower processes incorporating less
advanced controllers. For many slower systems and for recording data these soft-
ware packages work well.
To take full advantage of the PC as our controller, we must be able to program
in a language using a compiler. There are hybrids where the initial design is done using
a graphical-based interface and the software is then able to compile an executable
program from the graphical model. Much faster sample rates and access to unlimited
controller algorithms should be all the incentive needed to learn programming.
Programming is greatly simplified when using predefined commands (drivers)
supplied by the manufacturer of the board. Most boards come with a basic set of
drivers and instructions for common programming languages like C, Pascal, Basic,
and Fortran.
A brief overview of two possible programming methods is given here. Since
processors typically perform operations much faster than attached input/output
devices, synchronization must occur between the different devices. Because the proces-
sor normally waits to execute the next instruction, something must tell the processor
that the device is ready to accept data or that it may perform the next execution. The
two common options are to have the program control the delays or to use interrupts
to control the delays.
Program control is the simplest and can easily be explained by an illustration:
insert a loop in your program. While the loop is running, every time it gets back
to the command to read from or write to the data acquisition card, it transfers the
data and continues to the next line of code. In one respect this ensures that the program
is always operating at the maximum sampling frequency, but it also means you do
not have direct control over the sampling frequency. For example, suppose a con-
dition in one of the loops causes a branch in the program to occur, such as
updating the screen only every 50 loops. Then obviously the sample time for that pass
will be longer. Also, if the program tries to write to the hard drive while it is being used,
longer delays can be expected. Programming in a Windows environment exacerbates
this since multitasking is expected to occur.
A second method of controlling the delays is interrupt-driven program-
ming. An interrupt is exactly what the name suggests: a method of halting the current
operation of the processor to perform your priority task. The advantage of using
interrupts is that very consistent sample times are achievable. We base the interrupt
on an external (independent) clock signal and tell the program how often to
interrupt the process to collect or write data. The catch is this: while the technique
sounds great, a problem arises if the computer has not finished processing the
command before the next interrupt occurs. In that event the memory
location storing the controller output might not be updated, and the old value is sent
to the system again; we may wish to monitor this and alert the user if our program
misses a set number of samples, especially when multiple devices may
send interrupt signals and further tie up the processor. All in all, interrupt program-
ming, when done correctly, is more efficient because it controls the amount of time
spent waiting for other processes, most of which (i.e., moving a mouse or sound
effects) can wait until the processor actually does have time, without limiting the
performance of the controller routine.
Examples of both methods can be found, and much of the discussion above
also applies to microcontrollers and PLCs, as we will now see.

10.4 MICROCONTROLLERS
Microcontrollers are now used in a surprising number of products: microwaves,
automobiles (several per vehicle), TVs, VCRs, remote controls, stereo systems, laser
printers, cameras, etc. Many of the terms defined for PC-based control systems also
apply when talking about microcontrollers and PLCs.
First, what is a microcontroller, and why not call it a microprocessor? A
microcontroller, by definition, contains a microprocessor but also includes the
memory and various IO arrangements on a single chip. A microprocessor may be
just the CPU (central processing unit) or may include other components.
Microcomputers are microprocessors that include other components but not neces-
sarily all the components required to function as a microcontroller. For the most
part, when we discuss microcontrollers the two terms tend to be used interchange-
ably. Certainly, when discussing microcontrollers, we have in mind the complete

electrical package (processor, memory, and IO) capable of controlling our system, an
example of which is shown in Figure 2.
Microcontrollers have much in common with our desktop PCs. Both have
a CPU that executes programs, memory to store variables, and IO devices. While the
desktop PC is a general-purpose computer designed to run thousands of programs, a
microcontroller is designed to run one type of program well. Microcontrollers are
generally embedded inside another device (i.e., an automobile) and are sometimes
called embedded controllers. They generally store the program in read-only memory
(ROM) and include some random access memory (RAM) for storing temporary data
while processing it. The ROM contents are retained when power is off, while the
RAM contents are lost. Microcontrollers generally incorporate a special type of
ROM called erasable-programmable ROM (EPROM) or electrically erasable-
programmable ROM (EEPROM). EPROM can be erased using ultraviolet light
passing through a transparent window on the chip, shown in Figure 3. EEPROM
can be erased without ultraviolet light, using techniques similar to those used
to program it; there is usually a limited number of write cycles with most
EEPROM memory chips.
Microcontroller power consumption can be less than 50 mW, making
battery-powered operation possible. LCDs are often used with microcontrollers to
provide a means for output, but at the expense of battery life.
Microcontrollers range from simple 8-bit microprocessors containing 1000
bytes of ROM, 20 bytes of RAM, and 8 IO pins, and costing only pennies (in quantity),
to microprocessors with 64-bit buses and large memory capacities. Today even home
hobbyists can purchase microcontrollers (programmable interface controllers, etc.)
that can be programmed using a simplified version of BASIC and a home com-
puter. A BASIC Stamp is a microcontroller customized to understand the BASIC
programming language. Popular microcontrollers we might encounter include
Motorola's 68HC11, Intel's 8096, and National Semiconductor's HPC16040. The
Motorola, for example, comes in several versions, with the MC68HC811E2 contain-
ing an 8-bit processor, 30 I/O pins, and on-chip RAM and EEPROM. Since there are
so many variations within each manufacturer's families of models, and since the tech-
nology is changing so fast, little space is given here to the details of specific
models. If we understand some of the terminology, as presented here, then we should
be able to gather information from the manufacturer and choose the correct micro-
controller for our system. Now let us examine and define some useful terms to help
us when we are designing control systems using embedded microcontrollers.

Figure 2 Example of typical microcontroller with EEPROM.



Figure 3 Example of typical microcontroller with EPROM (ultraviolet erasable).

To interface the microprocessor with the memory and IO ports, buses
are used to send words between the devices. Words are groups of bits whose length
depends on the width of the data path and thus affects the amount of information sent
each time. An 8-bit microcontroller sends eight lines of data that can represent 256
values. Common word lengths are 4, 8, 16, and 32 bits. Four-bit microcontrollers are
still used in simple applications, with 8-bit microcontrollers being the most com-
mon. The other factor affecting microcontroller performance is the clock speed.
This is probably the most familiar performance specification through our exposure
to PCs and the emphasis on processor clock speeds. It is important to know both the
bus width (amount of information sent each clock cycle) and the processor clock
speed, since both directly influence the overall performance.
Finally, we discuss the programming considerations. Microcontrollers ulti-
mately need an appropriate instruction set to perform a specific action. Instruction
sets depend on the microprocessor being used, and thus specific commands
must be learned for different microprocessors. Microprocessors work in binary code,
and the instructions must be given to the processor in binary format.
Fortunately, shorthand codes are used to represent the binary 0s and 1s. A common
shorthand code is assembly language. Since computer programs (assemblers)
are available to convert the assembly code into binary, the binary code is
not such a large obstacle to designing with microcontrollers.
A third level of programming, and the most useful to the control system
designer, is the use of high-level computer languages that compile algorithms
into assembly and machine code. Common high-level languages include BASIC, C,
FORTRAN, and PASCAL. There is enough similarity between languages that once
you have programmed in one, you will understand much of another. Since only the
syntax changes and not the program flow chart, learning the language is one of the
easier aspects of developing a good control algorithm. Most engineers, once the flow
chart (logic) of the program is developed, can learn the language-specific commands
required to implement the controller.
The one disadvantage of programming in high-level languages is speed. Even
when converted to assembly language, they tend to result in larger programs that
take longer to run than programs originally written in assembly. This gap is narrow-
ing, and some compilers convert to assembly code quite efficiently.

10.5 PROGRAMMABLE LOGIC CONTROLLERS


Programmable logic controllers are found in virtually every production facility and
are used to control manufacturing equipment, HVAC systems, temperatures, animal
feeders, and conveyor lines, to name but a few applications. Originally designed to
replace the sequential relay circuits and timers used for machine control, they have
evolved to the point where they are microcontrollers and logic operators all packaged
into one. Most PLCs are programmed using ladder logic, a direct result of replacing
physical relays with simulated relays and timers. The ladder logic diagram ends up
closely resembling the circuit an electrician would have to build to complete the same
tasks using physical components.
PLCs were introduced in the 1960s to replace physical relays and timers, which
had to be rewired each time a design change or upgrade was required. The use of
PLCs allowed designers to quickly change a ladder diagram without having to rewire
the system. Once microprocessors became cheaper and more powerful, they too were
incorporated into the PLC. In the 1970s PLCs began to communicate with each
other and, together with analog signal capabilities, began to resemble the products of
today. What has really changed since then is decreasing size combined with
processors much more powerful than their predecessors. Modern PLCs accept high-
level programming languages, a variety of input and output signal types, and output
displays, all the while maintaining their reputation for being robust and stable con-
trollers capable of operating in extreme environments. The differences between PLCs
and microcontrollers continue to diminish, and there exists a gray area shared by
both products. In general, though, PLCs have more signal conditioning and protec-
tion features on board, accept ladder logic programming (along with other languages),
and are designed for multiple purposes instead of the one dedicated purpose common
with microcontrollers. Most PLCs continue to include large numbers of relays. In a
way, the microcontroller has become a subcomponent of many PLCs. As mentioned
earlier, personal computers have replaced PLCs in some areas and, if reliability
improves, may replace them in more.
Given that current PLCs use microcontrollers and AD/DA converters, very
little needs to be said about the way they operate. Combine the features of computers
with data acquisition boards and programmable microcontrollers, design the result
to be rugged and reliable, and you have today's PLC. For dedicated high-volume pro-
ducts, the microcontroller holds many advantages. For flexibility in programming,
numbers and types of inputs and outputs, and performance per unit cost, computer-
based systems hold many advantages. PLCs effectively bridge this gap by combining
some features of both. They range from simple, small, self-contained units to mod-
ular systems capable of handling large numbers of inputs and outputs, linking
together with other units, and being controlled remotely via telephone lines or the
Internet.
A more recent category, now found in most manufacturers' product lines, is the
micro-PLC, a small self-contained PLC. Let us now examine some features that
might be found on a micro-PLC; an example is given in Figure 4, and Table 4 lists
some of the features that may be included.
For many industrial, concept development, and testing-related projects, a small
PLC as described here may be all that we need. It also illustrates the number of
features that can be included on a small board the size of a common music compact
disc.

Figure 4 Example of microprogrammable logic controller. (Courtesy of Triangle Research
International Pte. Ltd.)

What may or may not be included are extra inputs and outputs, software
drivers, displays and user interfaces, etc. Therefore, when choosing a PLC (or a
computer-based system or microcontroller), it is a good idea to compare features
based on what is included and what the prices are for additional features. Factory
support policies should also be considered, although a company's reputation for
providing support is probably more important.

10.5.1 Ladder Logic Basics


Ladder logic diagrams are the most common method used to program PLCs. The
programs run sequentially (see scan time below) and first scan the inputs, then scan the
Table 4 Features of a Modern Small Self-Contained PLC
(Example: Triangle Research T100MD-1616)

Host PC RS232 (serial) programming port    16 digital outputs (24 V @ 1 A each)
Built-in LCD header                        16 digital inputs (24 V NPN)
LCD display                                1 analog current output (4–20 mA)
24 VDC power in                            4 analog inputs (0–1 V x2 and 0–5 V x2)
RS485 network two-wire interface           Two PWM outputs
Programming software and simulator         One stepper motor controller
Ladder logic and BASIC compatible          Counter, encoder, and frequency inputs

program lines with the new inputs and perform the desired operations, with the cycle
completed by scanning (writing) the outputs. This is similar to the program
control of processor delay times described earlier. Where today's PLCs confuse the
matter is in combining traditional ladder logic with text-based programming based
on interrupts. As we will see, with some PLCs a user routine can simply be inserted
on a ladder rung and used with an interrupt generator (pulse timer). This section
provides a brief overview to get us started with ladder logic programming. Most
programs can be written using a basic set of commands.
The primary components used in constructing ladder diagrams are rails, rungs,
branches, input contacts, output devices, timers, and counters. Most programs can
be constructed using these simple components. Although each manufacturer has
dierent nomenclature for text-based programming (i.e., mnemonic for an input
contact), the resulting ladder diagram is fairly standard and easy to understand.
Some programs allow the program to be graphically developed directly in the ladder
framework. In addition, many PLCs now allow special functions to be written using
high level programming languages, like BASIC. Let us look at the primary compo-
nents.
The rails are vertical lines representing the power lines with the rungs being the
horizontal lines connecting the two power lines (in and out). Thus, when the proper
inputs are seen, the input contact closes and energizes the output by connecting the
two vertical rails across the load. The rungs contain the inputs, branches (if any), and
outputs (coils or functions). Most programs use the basic commands and symbols,
listed in Table 5.
Each contact switch, normally open or closed, may represent a physical input
or a condition from elsewhere in the program. In addition to a true or false condition,
each contact switch may also represent a separate function, including timers (inter-
rupts), counters, flags, and sequencers. Different manufacturers usually include addi-
tional functions that can be assigned to contacts. What follows is a brief overview of
common instructions applied to contacts.
The most common function of a contact switch is to scan a physical input and
show its result.

Table 5 Basic Ladder Logic Components

Thus, when a normally open switch assigned to digital input channel 1
scans a high signal, or input, the contact switch closes, signifying that the event has
occurred. It might be someone pushing a start button or the completion of another
event signified by the closing of a physical limit switch. This is the most common use
of contact switches. The normally closed switch will not open until an event has
occurred. Contact switches may be external, representing a physical input, or inter-
nal, representing an event in other parts of the ladder diagram. Herein lies a
primary advantage of PLCs: all the internal elements can be changed without phy-
sically rewiring the circuits. The internal relays can be configured to act as Boolean
logic operators without physical wiring. The same idea holds true for outputs, or
relays. The term relay, or coil, is derived from the fact that early PLCs energized a
coil to activate the relay that then supplied power to the desired physical component/
actuator. In modern, electronic PLCs, the coils may also be internal or external.
Special bit instructions may also be used to open or close switches. Timers
generally delay the on time by a set amount. Delay-on timers can then be
arranged in the ladder diagram to act as delay-off timers. Counters may be used to
keep track of occurrences and switch contact positions after a set number has been
reached. Counters can generally be configured as up or down counters, and many
include methods for resetting them when other events occur. Some PLCs also include
functions to shift bit registers, sequencers, and data output commands.
Sequencers can be used to program fixed sequences, such as a sequence of motions
to load a conveyor belt, or to drive devices like stepper motors.
Let us quickly design the ladder diagram for a latching switch that will
turn an output on when a momentary switch is pressed and turn it off when another
momentary switch is pressed. To begin the ladder circuit, we need to define two
inputs, one output, and a switch dependent on the output relay. One input is nor-
mally open and the other normally closed. The relay may or may not be a physical
relay and may be used only internally. In this case we will choose a marker relay
(one that does not physically exist outside of the PLC) and use it to mark an event. We
are now ready to construct the ladder diagram given in Figure 5.

Figure 5 Example ladder logic circuit of latch.



This circuit is commonly used to start and stop a program without requiring
a constant input signal. When the START button is pushed (a digital input port
receives a signal), the switch closes, and since STOP is normally closed, the relay
RUN is energized. When the RUN relay is energized, it also closes the switch
monitoring its state, and thus the circuit remains active even after the START switch
is released. To stop the circuit, press the STOP button (another digital input port
signal), which momentarily opens the STOP switch. This deactivates the relay, which
in turn causes the RUN switch to open. Now when STOP is released (it closes again),
the circuit is still deactivated. In normal operation the remaining rungs would each
contain a function to be executed on each pass. The scan time then refers to the time
required to complete all the rungs of the program and return to the beginning
rung again.
In PLCs allowing subroutines written in a high-level programming lan-
guage, it becomes straightforward to implement our difference equations, which
themselves were derived from the controllers designed in the preceding chapter.
More general concepts relating to implementing different algorithms are discussed
in the next section, but we can mention several items relating to implementing
algorithms within the scope of ladder logic programming methods.
Within the ladder logic framework we are generally provided with two basic
options for implementing subroutines containing our controller algorithms. If the
PLC's set of instructions allows us to insert a timer (interrupt) component, then we
can simply insert the timer on a rung followed by the subroutine (function) contain-
ing our algorithm. Every time the timer generates an interrupt, the subroutine is
executed. What takes place in the subroutine is discussed more in the next section.
This method allows us to operate our controller at a fixed sample frequency regard-
less of where the PLC currently is in executing other rungs of the ladder logic
program. Most PLCs attempt to service all interrupts before progressing to the
next ladder rung. However, if we ask too much (i.e., several interrupt-driven timers),
the controller will not operate at our desired frequency and will miss samples.
A second basic method is to leave the function in the normal progression of
ladder rungs, where the controller algorithm is executed once per pass through all of
the rungs. This is similar to the program control method discussed earlier. In this
case we are not guaranteed a fixed sample period, and the sample period may change
significantly depending on what inputs are received, causing the ladder logic
program to run more or fewer commands each pass.

10.5.2 Network Buses and Protocols


In increasing numbers, multiple PLCs (and other types of controllers) are being
networked to enable communication between devices. Centralized and distributed
controllers, explained in Section 7.2, both require communication between compo-
nents and, in the case of the distributed system, with other controllers. In general
we can discuss the communication in terms of hardware and software or, as it is com-
monly put, in terms of what network bus is used and what protocol is used. In
many cases the two are developed together, and one term describes both aspects.
Lower-level buses are commonly used to connect controllers to smart com-
ponents such as sensors, amplifiers, and actuators. Higher-level buses are designed to
handle the interactions between controller systems. As speed and capabilities

increase, the line separating the two becomes blurred. Fieldbuses such as CANopen,
Profibus, and DeviceNet are examples of lower-level buses, while Ethernet and
FireWire are more characteristic of higher-level buses.
The largest problem is understanding what standards, if any, are enforced for
the different buses. This has led to both open and proprietary architectures.
Common open communication networks include Ethernet, ControlNet, and
DeviceNet. Using one of these usually means that we can buy some components
from company A and other components from company B and expect them to
work together. Independent organizations have been formed to maintain these
standards.
The flip side includes proprietary architectures. Many manufacturers, in addi-
tion to supporting some open standards, also have specialized standards not avail-
able to the public and designed to work only with their products (and those of partner
companies). This has the advantages of being able to optimize the company's products
and of better supporting the product. The obvious disadvantage is that we can no
longer interface with devices from other companies.
There are many alternative bus architectures that are beyond the scope of this text, many of which are specific to certain applications. STD, VME, Multibus, and PC-104 are examples. One becoming more common is PC-104, an embedded-PC standard used in embedded control applications with a variety of modules available. As with the common PC, these network devices and protocols are constantly changing and being upgraded.
Finally, the third and often overlooked element is the user interface. In addition to the network bus and protocol, the user interface should be considered in terms of the learning curve, flexibility, and capabilities.

10.6 ALGORITHM IMPLEMENTATION ISSUES


Regardless of the device used, PC, microcontroller, or PLC, there are several practical issues that should be considered. In general, the controller is implemented using the difference equation notation developed in previous chapters. As controllers become more advanced, different sets of equations may be used depending on a number of different factors (input values, etc.), and logic statements, lookup tables, and safety features are commonly added in addition to the basic difference equation. The advantage of difference equations is that we can design our system in the z-domain using techniques similar to classical analog techniques and then easily implement the controller in a microprocessor. Since the coefficients of the difference equations are all defined within the microprocessor, we can also modify our system behavior by changing the values while the program is running (this leads us into the realm of adaptive controllers, introduced in the next chapter).
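A controller of this type can be sketched in a few lines of software. The following Python sketch implements a velocity-form PID as a difference equation; the gains and sample period are illustrative values, not taken from the text, and the coefficients are ordinary variables that could be changed while the loop runs.

```python
# Sketch of a PID controller implemented as a difference equation
# (velocity form), with coefficients that can be changed at run time.
# The gains and sample period below are illustrative values only.

class DifferencePID:
    def __init__(self, kp, ki, kd, T):
        self.set_gains(kp, ki, kd, T)
        self.e1 = 0.0  # e(k-1)
        self.e2 = 0.0  # e(k-2)
        self.u1 = 0.0  # u(k-1)

    def set_gains(self, kp, ki, kd, T):
        # Difference-equation coefficients derived from the PID gains;
        # because they are plain variables, they may be modified while
        # the program is running (the door to adaptive control).
        self.a0 = kp + ki * T + kd / T
        self.a1 = -kp - 2.0 * kd / T
        self.a2 = kd / T

    def update(self, e):
        # u(k) = u(k-1) + a0*e(k) + a1*e(k-1) + a2*e(k-2)
        u = self.u1 + self.a0 * e + self.a1 * self.e1 + self.a2 * self.e2
        self.e2, self.e1, self.u1 = self.e1, e, u
        return u

pid = DifferencePID(kp=2.0, ki=1.0, kd=0.1, T=0.01)
out = pid.update(1.0)  # one sample with an error of 1.0
```

Calling `set_gains` again between samples retunes the controller on the fly, which is exactly the behavior that leads toward adaptive control.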
The two basic programming structures, program control and interrupt control (presented in earlier sections), can both be used with difference equations, logic statements, adaptive routines, etc., with few differences. Early PLCs were examples of program control, while modern PLCs, discussed in the previous section, can be combinations of the two programming methods. One controller algorithm might be program controlled while another might be interrupt driven.
When a program is used to control the processor delay times, we commonly define a scan time. Scan time arises from the idea that while the computer program (or, more figuratively, a ladder logic diagram) runs, it scans from line to line,
Digital Control System Components 415

sequentially completing the tasks. Thus, in normal operation without GOTO statements in the code, it must make one complete cycle through the code before it performs that line again. The normal procedure in closed loop controllers is to scan all the inputs, process the data, and send the results to the outputs. Scan time is important when reading digital signals since it is possible to miss an event if the on time of the event is shorter than the controller scan time. An example of this is a counter channel where scan times are too long. A pulse may come and go before the port is scanned and thus the pulse is not counted. While analog controllers continuously read, correct, and send out signals, a digital controller only accesses the input ports at distinct moments in time. Fortunately, most scan times are short relative to the pulse inputs being received, and many times the largest concern is keeping scan times short to enhance the stability of the controlled system.
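The missed-pulse problem can be illustrated with a small simulation. This Python sketch samples a pulse at a fixed scan period; all of the times (in milliseconds) are made-up illustrative values.

```python
# Illustrative sketch: a digital input pulse shorter than the scan period
# can fall entirely between two reads and never be seen. Times are in ms;
# all numbers here are made up for illustration.

def samples_seen(pulse_start, pulse_len, scan_period, total_time):
    """Count how many scans observe the pulse as high."""
    seen = 0
    t = 0.0
    while t < total_time:
        if pulse_start <= t < pulse_start + pulse_len:
            seen += 1
        t += scan_period
    return seen

# A 2 ms pulse with a 10 ms scan time can be missed entirely.
missed = samples_seen(pulse_start=3.0, pulse_len=2.0,
                      scan_period=10.0, total_time=100.0)
# The same pulse with a 1 ms scan time is caught on consecutive scans.
caught = samples_seen(pulse_start=3.0, pulse_len=2.0,
                      scan_period=1.0, total_time=100.0)
```

The shorter scan period sees the pulse; the longer one steps right over it, which is why a pulse must be at least as long as the scan time (or routed to hardware counters) to be reliably detected.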
If the scan time must be decreased, whether to increase stability or catch momentary inputs, we have several options: optimize our code to run faster, simplify the requirements of our controller, or upgrade to a more powerful processor. If missing digital input pulses is the only problem, we can simply make the pulse longer by modifying its source. To optimize the code, we need to examine what is required and what is extra. For example, most controllers do not require the use of double precision variables that only serve to occupy more memory and take longer to operate on. Also, good programmers, especially those programming microcontrollers in assembly code, are artists at creating small, efficient segments of code. We have progressively moved away from this approach as electronic component prices have fallen. To simplify our controller, we simply need to differentiate between what is required and what constitutes bells and whistles.
Adding more functions relying on interrupts will also result in longer scan times. Each time an interrupt occurs, it takes clock cycles potentially destined to complete the next sequential command. Some PLCs rely on preset scan times and we must make sure the scan time is long enough to perform all the tasks each loop. The general algorithm, then, might take the form shown in Table 6. To use program control we simply skip step 2 and repeat steps 3 through 8. With the interrupt routine, though, we are able to place higher priority on the controller portion of our overall program since this segment of code is executed every time an interrupt is received, even if other portions of the program must be temporarily halted. Also,

Table 6 General Program Flow for Implementing Control Algorithms

Step Action

1 Initialize variables and hardware
2 Wait for interrupt driven signal
3 Read analog inputs
4 Calculate the error
5 Execute the control algorithm
6 Write controller outputs to analog (or digital) channels
7 Update the history variables
8 Repeat steps 2-8 until stop signal is received

whereas step 5 is the actual difference equation, step 7 is required for many algorithms and consists of saving the current error and controller values for use the next time the algorithm is called (i.e., u(k-1), e(k-1), e(k-2), . . ., depending on the algorithm).
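The flow in Table 6 can be sketched in a few lines of Python. This is the program-control variant (step 2, waiting for an interrupt, is skipped); `read_input`, `write_output`, and `stop_requested` are hypothetical placeholders for whatever the real hardware provides.

```python
# Sketch of the Table 6 flow (program-control variant, step 2 skipped).
# read_input, write_output, and stop_requested are hypothetical
# placeholders for the actual hardware interface.

def control_loop(algorithm, setpoint, read_input, write_output, stop_requested):
    e1 = u1 = 0.0                    # step 1: initialize history variables
    while not stop_requested():      # step 8: repeat until a stop signal
        y = read_input()             # step 3: read analog inputs
        e = setpoint - y             # step 4: calculate the error
        u = algorithm(e, e1, u1)     # step 5: execute the control algorithm
        write_output(u)              # step 6: write controller outputs
        e1, u1 = e, u                # step 7: update the history variables

# Example with a first-order difference equation u(k) = u(k-1) + 2 e(k) - e(k-1)
readings = iter([0.0, 0.5, 0.8, 1.0])
outputs = []
control_loop(lambda e, e1, u1: u1 + 2.0 * e - e1,
             setpoint=1.0,
             read_input=lambda: next(readings),
             write_output=outputs.append,
             stop_requested=lambda: len(outputs) >= 4)
```

Swapping in a different `algorithm` function is the software analog of downloading a new controller to the same hardware.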
The control algorithms in particular, developed as difference equations, will take an input that was read in by previous program steps, perform some combination of mathematical functions on the input, possibly in relationship to previous values or with respect to multiple inputs, and then proceed to send the value to the appropriate output. In this way, we can change from a PID to a PI to an adaptive to a fuzzy logic controller using the same physical hardware simply by downloading a new algorithm (steps 5 and 7) to the programmable microprocessor. PCs, embedded microcontrollers, and PLCs (with programming capabilities) are capable of this type of operation.
Finally, remember that Table 6 is a general outline and many more lines of code are required to deal with all the practical issues that are bound to arise (e.g., emergency shutdown routines, data logging, maximums and minimums to prevent problems like integral windup, etc.). These extras are often fairly specific to the system being controlled and are based on an expert's knowledge of the system limits and characteristics.

10.7 DIGITAL TRANSDUCERS


We have already discussed several types of analog transducers in Chapter 6, and this section now summarizes some of the additional transducers available for use with digital controllers. While many can be made to work with analog controllers, there is usually an analog alternative that already exists. In addition, all the transducers listed in Section 6.4 will work with digital controllers using an A/D converter chip. Some analog transducers also can be configured to send digital outputs using circuitry included with the transducer. The assumption here is that these transducers are able to interface directly with digital I/O ports on the microprocessor without the use of an A/D converter. Even with this assumption, several signal conditioning devices may be required to protect the microprocessor from incompatible inputs such as large voltages. It is usually simpler to convert digital signals to different ranges since we only need to convert between two distinct levels, not a continuous range of values. Noisy digital signals may be cleaned up using components such as Schmitt triggers.
In addition, several new comments about resolution and accuracy apply when using digital outputs compared with the analog output transducers listed earlier. Since the output is now digital, the same comments applied to data acquisition boards apply here. There will only be a finite number of positions with which to represent the transducer output signal. This will be seen in some of the transducers listed below. Instead of digitizing the signal using the A/D converter, the transducer effectively performs this conversion (physical phenomena are continuous events) at various resolutions, depending on the component and range of operation. The advantage of this, however, is that since the digital signal is transmitted from the transducer to the controller, the signal-to-noise ratio is much better, being almost immune to common levels of electrical noise.

10.7.1 Optical Sensors


Digital encoders are commonly used to measure linear and rotary position. Most encoders are circular devices in the shape of a disk with digital patterns engraved in the disk. The simplest ones are incremental optical angle encoders, where a single light source is on one side of the disk and a photodetector is lined up with the source on the other side of the disk. As the disk rotates, whether from direct shaft rotation or from corresponding linear motion (i.e., rack and pinion), slots in the disk continually interrupt the light source and provide a series of pulses to the computer. Thus, the resolution is simply the distance (and corresponding angle) between successive slots. This will only measure the incremental position by counting the pulses that have occurred. In the same way, velocity can be found by measuring the frequency of the pulse train. The obvious downside is that unless the starting position is known and two light sources are used, only position relative to the initial position is known. The two light sources are required so that direction can be determined and the computer will know whether to count up or count down.
Incremental optical encoders, although limited in resolution, are noise free since only the number of pulses matters, not the absolute magnitude or cleanliness. A typical incremental encoder example is shown in Figure 6. While incremental encoders work fine for velocity measurements, the actual position is often desired and absolute encoders must be used. As the shaft rotates, a different pattern is generated depending on position and direction. As Figure 7 shows, the number of rings determines the number of resolution bits for simple designs. Thus, as shown, a 3-bit encoder can recognize eight discrete positions, each spanning a range of 45 degrees. The input sequence that the digital ports would see if connected to the encoder is also given in Figure 7.
Although the resolution in this example is not very good, encoders are available with 16-bit resolutions. Even 12-bit encoders result in less than 0.1 degree of resolution (360 degrees/4096 levels). Thus, it is possible to have good resolution and noise immunity using optical encoders. The code sequence listed is called Gray code, named after Frank Gray of Bell Laboratories, in which only one bit changes at a time. Hence, if one window is misread, errors are less likely than when straight binary sequencing is used and several digits change at one time.
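The standard binary-reflected Gray code and its one-bit-change property can be verified with a short sketch:

```python
# Binary-reflected Gray code: convert between binary position numbers and
# the Gray-coded pattern read from an encoder disk.

def to_gray(n):
    return n ^ (n >> 1)

def from_gray(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# For a 3-bit encoder: successive sectors differ in exactly one bit.
codes = [to_gray(i) for i in range(8)]
one_bit_changes = all(bin(a ^ b).count("1") == 1
                      for a, b in zip(codes, codes[1:]))
```

Because adjacent codes differ in a single bit, a misread boundary produces at most an off-by-one-sector error rather than a wild jump.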

Figure 6 Typical incremental encoder.



Figure 7 Example 3-bit absolute encoder output (Gray code).

Finally, this whole idea can be applied in one or two linear dimensions where a
grid is set up with light sources (commonly LEDs) and the X-Y position for an
object can be determined.
The encoders used for velocity are identical to those described above, but now the frequency of the pulses is desired. Some boards directly accept frequency inputs while others must be programmed to count a specific number of pulses and divide by the elapsed time. There is a trade-off between response and accuracy since measuring the frequency using the time between a single pair of pulses is very quick but prone to very large errors. Averaging more pulses decreases the error but increases the response time and corresponding lag. When digital signals are transmitted instead of analog signals, we also have the possibility of using fiber optics instead of traditional wire.
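The response/accuracy trade-off can be seen numerically. In this Python sketch the timestamps are illustrative values for a nominal 100 Hz pulse train whose last edge arrives late (jitter); using one interval responds instantly but is badly wrong, while averaging over all six intervals stays close to the true frequency.

```python
# Sketch of the speed/accuracy trade-off in measuring frequency from an
# encoder pulse train. Timestamps (seconds) are illustrative values for a
# nominal 100 Hz train whose last edge is jittered.

def frequency_from_pulses(timestamps, n):
    """Estimate frequency (Hz) from the last n intervals (n+1 timestamps)."""
    elapsed = timestamps[-1] - timestamps[-1 - n]
    return n / elapsed

t = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.062]
fast = frequency_from_pulses(t, 1)   # one interval: quick but noisy
avg = frequency_from_pulses(t, 6)    # six intervals: accurate but laggy
```

The single-interval estimate lands near 83 Hz; the six-interval average stays near 97 Hz, at the cost of a measurement lag of six pulse periods.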

10.7.2 Additional Sensors


There are other ways to get digital pulses representing position or velocity. Mechanical microswitches can be switched on and off to represent position or be used to calculate velocities. These switches also can determine the number of pieces made by having the signal go to a digital port configured as a counter. Other sensors might generate a voltage change but not of the proper magnitude or sharpness required by a digital I/O port. Variable reluctance proximity sensors, Hall-effect proximity sensors, and even magnetic pickups near rotating gear teeth in velocity applications can all be used by squaring up the signal using Schmitt triggers. Schmitt triggers, cheap and packaged up to six on one IC, will take a noisy oscillating signal and convert it to a square pulse train, just what the digital ports like to see. In fact, simple PWM circuits can be constructed by sliding a sine wave up and down relative to the switching level of a Schmitt trigger. Since Schmitt triggers have hysteresis built into the chip, the chance of getting additional pulses from noisy signals is greatly reduced. This effect is shown in Figure 8.
This same type of device can be used to make many other signals compatible with our digital I/O ports. For example, many axial turbine flow meters use magnetic pickups, Hall-effect sensors, or variable reluctance sensors. Obtaining a pulse train allows us to use our digital input ports as counter/frequency inputs and directly read the output from such meters. Almost any transducer that outputs an oscillating analog signal can be modified using devices like Schmitt triggers.

Figure 8 Schmitt trigger used to obtain square wave pulse train from oscillating signal.

10.8 DIGITAL ACTUATORS


The primary actuator capable of directly receiving digital outputs is the stepper
motor. To drive it directly from the microprocessor still requires relays (solid-state
switches) since the current requirements are much larger than what a microprocessor
is capable of. Many stepper motor driver circuits are available which contain the
required logic (step sequences) and current drivers, allowing the microprocessor to
simply output the direction in the form of a high or low signal and the number of
steps to move in the form of a digital pulse train. In this case only two digital outputs
are required to control the actuator.
Being discrete, the same criterion applies where resolution (number of steps) is of primary concern. The same advantages also apply, and we have excellent noise immunity. The stepper motor has become a strong competitor to the DC motor in terms of cost and performance. DC motors are capable of higher speeds and torque but are harder to interface with digital computers and cannot be run open loop. Since the stepper motor is a digital device, stability is never a problem, its brushless design results in less wear, it is easy to interface with a digital computer, and it can be run open loop in many situations by recording the commands that are sent to it to monitor its position.
There are two primary types of stepper motors, the permanent magnet and variable reluctance configurations. The permanent magnet configuration, as the name implies, has a rotor containing a permanent magnet and a stator with a number of poles. Then, as shown in Figure 9, the poles on the stator can be switched

Figure 9 Basic permanent magnet stepper motor.



and the rotor magnet will always try to align itself with the new magnetic poles.
Permanent magnet stepper motors are usually limited to around 500 oz-in of torque
while variable reluctance motors may go up to 2000 oz-in of torque. Permanent
magnet motors are generally smaller and as a result also capable of higher speeds.
Speeds are measured in steps per second and some permanent magnet motors range
up to 30,000 steps per second. Resolutions are measured in steps per revolution, with common values being 12, 24, 72, 144, 180, and 200, corresponding to resolutions from 30 degrees/step down to 1.8 degrees/step. Special circuitry allows some motors to half-step (hold a middle position between two poles) or microstep, leading to 10,000 steps per revolution or more. The trade-off is between speed and resolution since for any given configuration the steps per second remains fairly constant.
Variable reluctance stepper motors have a steel rotor that seeks the position of minimum reluctance. Figure 10 illustrates a simple variable reluctance
stepper motor. As mentioned, variable reluctance motors are generally larger in size
and slower than permanent magnet types but have the advantage in torque rating
over their counterparts.
To operate a stepper motor using open loop control (no position feedback), we must compare our required actuation forces with the stepper motor capabilities. These specifications are presented in Table 7.
The holding torque is essentially zero when power is lost in variable reluctance
stepper motors. Since permanent magnet motors will stay aligned with the path of
least reluctance, there is always a holding torque even without power, called detent
torque, although it is much less than the holding torque with power on. Most stepper
motors will slightly overshoot each step since they are designed for maximum
response times. Variable reluctance stepper motors generally have lower rotor inertia
(no magnets) and thus may have a slightly faster dynamic response than comparably
sized permanent magnet stepper motors. The pull-in and pull-out parameters and
slew range are shown graphically in Figure 11. Typical of most electric motors, as the
required torque increases, the available speed decreases, with the opposite also being
true. Also, the pull-in torque is greater than the pull-out torque.
An example is given in Figure 12 of how a stepper motor can be used to
provide open loop control of engine throttle position. In this gure a stepper
motor is connected to the engine throttle linkage using Kevlar braided line. In the
laboratory setting this provided an easy way for the computer to control the position

Figure 10 Basic variable reluctance stepper motor.



Table 7 Important Parameters When Choosing Stepper Motors

Item Description

Holding torque The maximum torque that can be applied to a motor without causing rotation. It is measured with the power on.
Pull-in rate Maximum stepping rate at which a loaded motor can start without losing synchronization.
Pull-out rate Maximum stepping rate from which a loaded motor can stop without losing synchronization.
Pull-in torque Maximum torque a motor can be loaded to before losing synchronization while starting at a designated stepping rate.
Pull-out torque Maximum torque a motor can be loaded to before losing synchronization while stopping from a designated stepping rate.
Slew range Range of rates between the pull-in and pull-out rates where the motor runs fine but cannot start or stop without losing steps.

of the IC engine throttle without requiring the use of feedback. As long as the
required torques and desired stepping rates are always within the synchronized
range, the computer can keep track of throttle position by recording how many
pulses (and the direction) are sent to the motor. In addition to the specialized application in this example, stepper motors are found in a variety of consumer products, including many computer printers, machines, and stereo components.
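The open-loop bookkeeping described above can be sketched directly: the computer records every step command it sends, so position is known without a feedback sensor as long as the motor never loses synchronization. The 200 step/rev motor below is a hypothetical example, not the motor used in the throttle experiment.

```python
# Sketch of open-loop position tracking: record every step command sent,
# so position is known without a sensor, provided the motor stays within
# its synchronized range. The 200 step/rev motor is hypothetical.

class OpenLoopStepper:
    def __init__(self, steps_per_rev=200):
        self.deg_per_step = 360.0 / steps_per_rev
        self.count = 0                 # net steps commanded so far

    def command(self, steps, direction):
        # direction is +1 or -1; real hardware would also pulse the driver
        self.count += direction * steps

    @property
    def position_deg(self):
        return self.count * self.deg_per_step

m = OpenLoopStepper()
m.command(50, +1)        # open the throttle 50 steps
m.command(10, -1)        # back off 10 steps
angle = m.position_deg   # 40 net steps at 1.8 deg/step = 72 degrees
```

If the load ever exceeds the pull-out torque and steps are lost, this count silently drifts from reality, which is exactly why the torque and rate limits of Table 7 must be respected.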

10.9 INTERFACING DIGITAL SIGNALS TO REAL ACTUATORS


Whenever actuators, analog or digital, are used with digital signals, we have to provide additional components to interface the digital signals to the higher power levels required by actuators. Although a variety of components are used to increase the power levels, the most common one is the transistor. Mechanical relays, while great for switching large amounts of power and inductive (i.e., coil) loads, are not very fast and have limited cycles before failure occurs. Trying to modulate an electrical signal using a mechanical relay would quickly wear it out. Solid-state relays (transistors), on the other hand, have lower power ratings but can be switched very

Figure 11 Common stepper motor characteristics.



Figure 12 Stepper motor control of engine throttle position.

rapidly with almost infinite life if properly cooled. As we will see, this provides the basis for PWM. However, since inductive loads tend to generate a large back voltage when switched off, we must include protection diodes when using solid-state (transistor) relays with inductive loads. If the controller is especially sensitive to electrical noise and spikes, we can also add optical isolators for further protection. Optical isolators couple an LED with a photodetector so that the switch itself is completely isolated physically from the high current signals. These are economical devices, easy to implement, and common components in a variety of applications.

10.9.1 Review of Transistor Basics


Since most of our interfacing is done with transistors, let us quickly review the basics. The terminology used with transistors stems from the component that they replaced, the vacuum tube amplifier. The base, collector, and emitter were physical components inside the tube. It is safe to say that the transistor has affected all areas of our lives. As time progresses we tend to forget the large size that stereo amplifiers, radios, computers, TVs, transmitters, etc., all were due to the size of the vacuum tubes used. We take it for granted that virtually every electronic gadget can be made to operate from batteries and carried in a small bag or pocket. Transistors have had the same effect on control systems since they provide an economical and efficient method for connecting microprocessors with the actual actuators.
Two basic types of transistors are commonly used: the bipolar transistor, commonly referred to as simply a transistor, and the field effect transistor, commonly referred to as a FET. Also seeing increasing use is the insulated-gate bipolar transistor, or IGBT. The primary role of a transistor is to act as an amplifier. It may be used as a linear amplifier (stereo amp) when driven with small input currents or as a solid-state relay when driven with larger currents. The advantages and disadvantages of each will become clear as we progress through this section. A special configuration obtained when high gain bipolar transistors are connected together is the silicon-controlled rectifier, or SCR. With the proper gains these devices are able to latch and maintain load current even when the input signal is removed.

Without going into all the inner construction details and electrical phenomena describing how they work internally, transistors (and diodes) are made from silicon materials that either want to give up electrons (n-type) or receive electrons (p-type). Diodes consist of just two slices (p and n) and act as one-way current switches or precision voltage regulators. When we connect three slices together, we get the common bipolar transistor, acting as a switch (or amplifier) that can be controlled with a much smaller current. This allows us to take signals from components like microprocessors and operational amplifiers and amplify them to usable levels of power. The two basic bipolar junction transistors, npn and pnp, are shown in Figure 13.
The basic operation is described as follows: a small current injected into the base is able to control a much larger current flowing between the collector and emitter. The current amplification possible is the beta factor, defined in Figure 13. Normal beta factors are around 100 for a single transistor. If we need higher amplification ratios, we can use Darlington transistors. Darlington transistors are two transistors packaged together in series such that the beta ratios multiply and gains greater than 100,000 are possible.
The transistor is capable of operating in two modes, switching (saturation) and amplification. Linear amplification is much more difficult, and generally it is best to use components designed as linear amplifiers for your actuator. Heat, the primary destroyer of solid-state electronics, is a much larger problem with linear amplifiers. Switching is much easier, and as long as we keep the transistor saturated while operating we should have fewer problems. To explain this fundamental concept further, let us examine the cutoff, active, and saturation regions for a common emitter transistor circuit. Figure 14 shows the characteristic curves for these regions. The manufacturers of such devices typically provide these curves.
The basic definitions are as follows:
• IC is the current through the load (and thus also the current through the transistor between the collector and emitter).
• IB is the current supplied to the base from the driver and is used to control the power delivered to the load.

Figure 13 Type npn and pnp transistors.



Figure 14 Common emitter transistor circuit characteristics.

• VCE is the voltage drop across the transistor as measured between the collector and emitter.
• VBE is the voltage difference between the base and emitter.
• VCE(sat) is the voltage drop between the collector and emitter when the transistor is operating in the saturated region.

If the transistor is not in saturation and is actively regulating the current (linear amplification), then VCE may or may not be close to zero. Since electrical power equals V × I, the power (W) dissipated by the transistor is VCE × IC. Therefore during linear amplification in the active region, neither VCE nor IC is very close to zero and heat buildup becomes a large problem. However, if we supply enough base current, IB, and ensure that the transistor is saturated, then for most transistors the voltage drop across the transistor, VCE, is less than 1 V and most of the power is drawn across the load. This reduces the problem of heat buildup, the main source of failure in transistors, and leads to the arguments in favor of methods like PWM. The design task then is determining what the base current needs to be without supplying so much that we build up heat from the input source. Example 10.2 demonstrates the process of choosing the required base resistance value that will supply enough current to keep the transistor operating in the saturated region. As the manufacturers' curves show, the values we choose for our calculations are very dependent on temperature.
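The heat argument can be made concrete with rough numbers. The voltages and current below are illustrative, not from the text: the same 10 A load dissipates orders of magnitude more heat in the transistor when it regulates linearly than when it is saturated.

```python
# Rough comparison (illustrative numbers) of the power dissipated in the
# transistor itself: active (linear) region versus saturation, for the
# same load current.

def transistor_power(v_ce, i_c):
    return v_ce * i_c   # watts of heat in the transistor

i_c = 10.0                              # amps through the load
active = transistor_power(12.0, i_c)    # mid-range VCE: 120 W of heat
saturated = transistor_power(0.7, i_c)  # VCE(sat): about 7 W
```

This ratio is the whole case for switching (PWM) over linear amplification when efficiency and cooling matter.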
During operation, npn transistors are turned on by applying a high voltage (in digital levels) to the base, while pnp transistors are turned on by applying a low voltage (ground level) to the base. The connections between the load and transistor are commonly termed common emitter, common collector, or common base. In the common emitter connection the load is connected between the positive power supply and the collector while the emitter is connected to the common (or ground) signal. With the common collector connection the load is connected to the emitter. In addition, npn transistors are cheaper to manufacture, and thus the most common solid-state switches are configured with common emitter connections using npn transistors.
Power gain is the greatest for the common emitter connection and thus it is the
type most commonly seen. Figure 15 illustrates the common emitter connection and
how the switching occurs when a base current is supplied to the npn transistor.

Figure 15 Using an npn transistor to switch a load on and off (common emitter connection).

When Vin is increased enough to saturate the transistor, VC is pulled near ground and the load is activated. In this type of circuit the negative terminal of the load floats high to the supply voltage and there is no voltage drop across the load when no base current is supplied to the transistor. Even though the transistor now sees a larger voltage drop, there is no associated current and the power dissipated by the transistor is near zero.

EXAMPLE 10.2
Referencing the common emitter circuit in Figure 15 and the typical characteristic curves given in Figure 14, determine the proper resistor value between Vin and the base to ensure that the transistor remains saturated. Assume that the transistor is sized to handle the load requirements and that the following values apply for the circuit and transistor (the transistor values are obtained from the manufacturer's data sheets):

VBE = 0.7 V @ 25°C
VCE(sat) = 0.7 V @ 25°C
V = 24 V
Rload = 2.3 Ω
β = 1000
Vin = 5 V

We want to keep the transistor operating in the saturated region to minimize heat buildup problems. First, we can calculate the required current drawn through the load and passing through the transistor when switched on as

IC = (V − VCE(sat))/Rload = (24 − 0.7)/2.3 = 10.1 A

Using the beta factor allows us to calculate the base current required for keeping the transistor in the saturated operating region:

IB = IC/β = 10.1/1000 = 10.1 mA

Finally, we can calculate the maximum resistor value that still provides the proper base current to the transistor:

RB = (Vin − VBE)/IB = (5 V − 0.7 V)/0.0101 A = 425 Ω

To ensure a small safety margin (making sure the transistor remains saturated), we could choose a resistor slightly less than 425 Ω, resulting in a slightly larger base current. We do want to be careful not to overdrive the base, as this unnecessarily puts extra strain on the driver circuit and builds up additional heat in the resistor.
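The arithmetic of Example 10.2 can be checked with a short script using the same circuit values:

```python
# The arithmetic of Example 10.2: find the largest base resistor that
# still keeps the transistor saturated, using the example's values.

V_supply = 24.0     # supply voltage, V
V_ce_sat = 0.7      # VCE(sat) from the data sheet at 25 C, V
V_be = 0.7          # base-emitter drop, V
R_load = 2.3        # load resistance, ohms
beta = 1000.0       # current gain
V_in = 5.0          # digital output level driving the base, V

I_c = (V_supply - V_ce_sat) / R_load   # collector (load) current when on
I_b = I_c / beta                       # base current needed for saturation
R_b = (V_in - V_be) / I_b              # maximum base resistor, ohms
```

The script gives IC of about 10.1 A, IB of about 10.1 mA, and RB of about 425 Ω, matching the hand calculation; in practice a slightly smaller standard resistor value provides the safety margin.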
Finally, to complete our discussion on transistors, let us examine field effect and insulated gate bipolar transistors, the FET and IGBT. A common one used in many controller applications is the MOSFET, or metal oxide semiconductor field effect transistor. They are very similar to the silicon layer bipolar junction types, but instead we vary the voltage (not the current) at the control terminal to control an electric field. The electric field causes a conduction region to form in much the same way that a base current does with bipolar types. The advantage is this: the controlling terminal, commonly called the gate, presents a very high impedance (on the order of 10^14 Ω) and thus we do not have to worry about proper biasing as we did regarding the base junction of our bipolar junction transistors. Figure 16 illustrates this difference. Since the gate impedance is so high in the MOSFET device, the gate current is essentially zero and only the gate voltage controls the power amplification. This allows us to directly interface a digital output with a field effect transistor to control actuators requiring more power. The only thing that prevents us from directly driving a MOSFET with a microprocessor (TTL) output is the voltage level. MOSFET devices require 10 V to ensure saturation and thus a voltage multiplier circuit (or step-up transformer) may be required. Since the input impedance is very large, they can be driven with much larger voltages without damage. To operate them as a linear amplifier, we would need to vary the gate voltage to control the power delivered to the load.
IGBT devices combine many of the properties of bipolar and field effect transistors. Their construction is more complex and uses a MOSFET, an npn transistor, and a junction FET to drive the load, thus exhibiting a combination of characteristics. The advantage is that we get the high input impedance of a MOSFET and the lower saturation voltage of a bipolar transistor. Table 8 compares the characteristics of the different types of switching devices with the same power ratings.

Figure 16 MOSFET and BJT switching comparison.


Digital Control System Components 427

Table 8 Bipolar, MOSFET, and IGBT Characteristics

Characteristic                 Bipolar           MOSFET    IGBT
Drive signal                   Current           Voltage   Voltage
Drive power (relative)         Medium to large   Small     Small
Comparison (equal ratings)
  Current rating (A)           20                20        20
  Voltage rating (V)           500               500       600
  Resistance (Ω @25°C)         0.18              0.20      0.24
  Resistance (Ω @150°C)        0.24              0.6       0.23
  Rise times (nsec)            70                20        40
  Fall times (nsec)            200               40        200

Source: Takesuye J, Deuty S. Introduction to Insulated Gate Bipolar Transistors. Application Note AN1541,
Motorola Inc.; and Clemente S, Dubashni A, Pelly B. IGBT Characteristics. Application Note AN983A,
International Rectifier.

From Table 8 we see the advantages and disadvantages of the different devices
commonly used as power amplifiers and high-speed solid-state relays. MOSFET
devices are generally more sensitive to temperature but are easy to interface and
very fast. IGBT devices are less sensitive to temperature and still easy to interface (no
current draw) but have longer switching times. Bipolar transistors, on the other
hand, are less susceptible to static electricity and are capable of handling larger
load currents. All the transistors are still susceptible to overvoltage and should be
protected when switching inductive loads. Inductive loads, when turned off, produce
a large voltage spike. As shown in Figure 16, a diode (commonly called a flyback
diode) is placed across the load to protect the transistor from the large voltages that
may occur when inductive loads are turned off. Since coils of wire, as found in
virtually all motors and solenoids, are inductors, the flyback diode is a common
addition to transistor driver circuits.
By now it should be clear how we can use transistors as switches and why we
would like to always keep them in the saturated region while operating.
This is one reason why PWM has become so popular: when transistors operate in
their saturated regions, the power dissipation, and thus the heat buildup within the
transistor, is greatly reduced.
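The dissipation argument can be made concrete with a rough calculation. The supply voltage, load current, and saturation voltage below are illustrative assumptions, sketched here in Python:

```python
# Sketch: transistor power dissipation, saturated switch vs. linear operation.
# Illustrative numbers only.

def p_switch(v_ce_sat, i_load):
    # Saturated: the transistor drops only V_CE(sat) at the load current.
    return v_ce_sat * i_load

def p_linear(v_supply, v_load, i_load):
    # Linear region: the transistor drops the remaining supply voltage.
    return (v_supply - v_load) * i_load

print(p_switch(0.2, 1.0))        # 0.2 W when saturated (V_CE(sat) = 0.2 V, 1 A)
print(p_linear(12.0, 6.0, 1.0))  # 6.0 W delivering half of a 12 V supply linearly
```

A factor of 30 in transistor heating for the same delivered current is typical of why switched (PWM) stages run so much cooler than linear ones.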

10.9.2 PWM
PWM is a popular method used with solid-state switches (transistors) to approxi-
mate a linear power amplifier without the high cost and size. Transistors are very
easy to implement in circuits when they act as switches and operate in the saturated
range, as the preceding section demonstrates. An example of this is the cost of
switched power supplies versus linear power supplies with similar ratings. Of course,
there are trade-offs, and if cost, size, power requirements, and design time were not
considered, we would always choose a linear amplifier. However, since cost tends to
carry overwhelming weight in the decision process, for many applications PWM
methods using simple digital signals and transistors make more sense. First, let us
quickly define what PWM is and the basis on which it works.

PWM can be defined with three terms: amplitude, frequency, and duty cycle.
The amplitude is simply the voltage range between the high and low signal levels; for
example, 0–5 V, where 0 V represents the magnitude when the pulse is low and 5 V
represents the magnitude when the pulse is high. Common voltage levels range
between 5 and 24 V. The frequency is the base cycling rate of the pulse train but
does not represent the amount of time a pulse is on. We normally think of square
wave pulse trains as having equal high and low voltage signals; this is where PWM is
different. Although the pulses occur at the set frequency, fixed by the PWM gen-
erator, the amount of time each pulse is on is varied, hence the idea of duty cycles.
Since the pulse train operates at a fixed frequency, we can define a period as

    Period:  T = 1/f = 1/frequency (Hz)

The duty cycle is then the amount of time, t, during which the pulse train is at the high
voltage level, as a percentage of the total period, ranging between 0 and 100%:

    Duty cycle = (t/T) x 100%
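These definitions translate directly into code. The book's listings use Matlab; the Python sketch below uses an assumed 20 kHz frequency and 30% duty cycle for illustration:

```python
# Sketch of the PWM definitions: period, duty cycle, and the average
# voltage an integrating actuator effectively sees.

def period(f_hz):
    return 1.0 / f_hz                     # T = 1/f

def duty_cycle(t_on, T):
    return 100.0 * t_on / T               # percent of the period spent high

def average_voltage(v_high, duty_pct, v_low=0.0):
    return v_low + (v_high - v_low) * duty_pct / 100.0

print(round(period(20000.0) * 1e6, 1))           # 50.0 microsecond period at 20 kHz
print(round(duty_cycle(37.5e-6, period(20000.0)), 1))  # 75.0 percent duty cycle
print(average_voltage(5.0, 30.0))                # 1.5 V average from 5 V at 30% duty
```

The last line reproduces the 5 V, 30% duty example used later in this section.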

At a 75% duty cycle, the pulse is on 75% of the time and off only 25% of the time.
This concept and the terms used are shown in Figure 17. Notice that the period never
changes, only the percentage of time during each period that the signal is turned on.
The idea behind using PWM is to choose the proper frequency such that our
actuator acts as an integrator and averages the area under the pulses. Obviously, if
our actuator bandwidth is very high and near the PWM frequency, we will have
many problems, since the actuator tries to replicate the pulse train and introduces
large transients into our system. But, for example, if the pulse frequency is high
enough, with a 5-V amplitude and 30% duty cycle, we expect the actuator current
level to be as if it is receiving 1.5 V. Remember our basic inductor relationship,
V = L di/dt, or solving for the current, i = (1/L) ∫ V dt; thus inductors will take the

Figure 17 General characteristics of PWM signals.



switched voltage and integrate it, resulting in an average current. The current the device
actually uses is illustrated in Figure 18.
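This averaging behavior is easy to demonstrate numerically. The sketch below integrates L di/dt = v − iR for an assumed R–L load under a 5 V, 30% duty pulse train; all component values are illustrative assumptions (the text's own listings are Matlab, this is Python):

```python
# Sketch: an R-L load integrating a PWM voltage into a nearly constant current.
# Forward-Euler integration of L*di/dt = v - i*R with illustrative values.

R, L = 2.0, 0.002                  # ohms, henries
V, duty, f = 5.0, 0.30, 20000.0    # 5 V pulses, 30% duty, 20 kHz
T = 1.0 / f
dt = T / 200.0

i = 0.0
samples = []
for step in range(200 * 200):      # 200 PWM periods, well past L/R = 1 ms
    t = step * dt
    v = V if (t % T) < duty * T else 0.0
    i += dt * (v - i * R) / L      # L di/dt = v - i R
    samples.append(i)

i_avg = sum(samples[-200:]) / 200.0    # mean over the final PWM period
print(round(i_avg, 2))                 # ~0.75 A, i.e., duty * V / R
```

The steady-state average lands at duty × V/R, with only a small ripple between pulses when the PWM period is short compared to the L/R time constant.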
The goal is to cycle the pulses quickly enough that the current does not rise or
fall significantly between pulses. Most devices that include PWM outputs will have
several selectable PWM frequencies. In general, it is best to use the highest frequency
possible. Some exceptions are when using devices with high stiction (overcoming
static friction) where some dithering is desired. In this case it is possible that too high a
frequency will result in a decrease in performance, since it allows the device to
stick. A better guideline during the design process is to use the PWM signal to
average the current and to add a separately controlled dither signal at the appro-
priate amplitude and frequency. This allows us to decouple the frequencies and
amplitudes of the PWM and dither signals and optimize both effects, instead of
compromising both.
When we progress to building these circuits, bipolar (or Darlington),
MOSFET, and IGBT types may all be used. The bipolar types are current driven
and the field effect types are voltage driven. Thus, with the bipolar types we need to
size the base resistor to ensure saturation (see Example 10.2). As shown in Figure 19,
if we cannot drive the device with enough current, then we can stage them, similar in
concept to using a single Darlington transistor. In addition, there are many IC chips
available that are specifically designed for driving the different types of transistors.
With the MOSFET devices we only need to ensure that the voltage on the gate
is large enough to cause saturation. This is usually 10 V, and therefore even though
the current requirements are essentially zero, the voltage may be greater than what a
microprocessor outputs. Sometimes a simple pull-up resistor will allow us to inter-
face MOSFETs directly with microprocessor outputs. A simple PWM circuit using a
MOSFET is given in Figure 20.
Since the resistance of a MOSFET device increases with the temperature at the
junction, it is considered stable. If our load current is too large and the MOSFET
warms up, the resistance also increases, which tends to decrease the current through
the load and resistor. If the opposite were true (as is possible in some other types),
then the transistor would tend to run away: as the resistance falls, it continues to draw
more current and build up additional heat, self-propagating the problem. Where this
Figure 18 Current averaging of PWM signals by inductive loads.



Figure 19 Typical actuators driven by PWM bipolar transistor circuits.

characteristic of MOSFET devices works to our advantage is when we need more
current capability and thus connect several MOSFET devices in parallel, as shown
in Figure 21.
If one of the MOSFET amplifiers begins to draw more current than the others,
it will increase more in temperature. This leads to an increase in resistance relative to
the other transistors and therefore less current. In this way each MOSFET is self-
regulating and stable when connected in parallel.
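A toy thermal model shows this self-balancing numerically. Two MOSFETs with mismatched nominal on-resistance split a fixed total current, and each device's resistance rises with its own heating. The temperature coefficient and thermal resistance below are illustrative assumptions only, sketched in Python:

```python
# Sketch: current sharing between parallel MOSFETs with a positive
# temperature coefficient of on-resistance. Illustrative coefficients.

def share_current(i_total, r0, alpha=0.007, theta=20.0, iters=60):
    """r0: list of nominal on-resistances (ohms at ambient);
    alpha: resistance temperature coefficient (1/degC);
    theta: junction-to-ambient thermal resistance (degC/W)."""
    r = list(r0)
    for _ in range(iters):                          # relax to equilibrium
        g = [1.0 / ri for ri in r]
        i = [i_total * gi / sum(g) for gi in g]     # split by conductance
        p = [ii * ii * ri for ii, ri in zip(i, r)]  # per-device dissipation
        r = [r0i * (1.0 + alpha * theta * pi) for r0i, pi in zip(r0, p)]
    return i

# Cold (no heating) split of 6 A between 0.18 and 0.22 ohm devices:
cold = [6.0 * (1.0 / r) / (1.0 / 0.18 + 1.0 / 0.22) for r in (0.18, 0.22)]
hot = share_current(6.0, [0.18, 0.22])
print(round(cold[0] - cold[1], 2))   # 0.6 A apart with no heating
print(round(hot[0] - hot[1], 2))     # noticeably closer once self-heating acts
```

The lower-resistance device still carries more current, but its extra heating raises its resistance and pushes the split back toward even, which is the self-regulation described above.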
So, we see that digital PWM outputs can be interfaced effectively with many
actuators without the cost and complexity of linear amplifiers. There are also addi-
tional extensions of PWM that allow us to meet additional system needs. With the
addition of a filter we can make the PWM signal act as an analog output (much like a
D/A converter) and even use it as a waveform generator by changing the duty cycle
into the filter with a prescribed pattern (i.e., a sinusoidal wave). For this to work, the
filter needs to integrate the area under the pulse in much the same way an actual
actuator does. A low-pass filter will accomplish this task for us if we limit the
analog output signal frequencies to approximately 1/4 of the PWM frequency.
As an example, if we wish to use our PWM output to generate a sinusoidal command
signal of 1 kHz, then we need a PWM frequency of at least 4 kHz. While simple RC
filters (remember from our earlier work that the corner frequency is simply 1/(time
constant) on Bode plots) with a corner frequency 1/4 of the PWM frequency will
work, we can achieve much cleaner signals by using active filters (i.e., op-amps)
because of their flatter response and sharper cutoff.
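The corner-frequency guideline is a one-line calculation. The Python sketch below sizes the RC product for the 1 kHz / 4 kHz example above and checks the attenuation a first-order filter gives at the PWM frequency (the specific component values are assumptions for illustration):

```python
# Sketch: first-order RC low-pass sized at 1/4 of the PWM frequency.
import math

def rc_for_corner(f_corner_hz):
    # corner frequency = 1/(2*pi*R*C)  ->  R*C = 1/(2*pi*f_corner)
    return 1.0 / (2.0 * math.pi * f_corner_hz)

def gain_at(f_hz, rc):
    # |1 / (1 + j*2*pi*f*R*C)| for a first-order low-pass
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f_hz * rc) ** 2)

f_pwm = 4000.0                        # 4 kHz PWM for a 1 kHz command signal
rc = rc_for_corner(f_pwm / 4.0)       # corner at 1 kHz
print(round(rc * 1e6, 1))             # ~159.2 microseconds (R*C product)
print(round(gain_at(f_pwm, rc), 3))   # carrier attenuated to ~0.243
print(round(gain_at(1000.0, rc), 3))  # the 1 kHz signal itself drops to ~0.707
```

The 3 dB loss at the signal frequency in the last line is exactly why active filters, with their flatter passband and sharper cutoff, give cleaner results.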

Figure 20 Typical actuators driven by PWM MOSFET circuits.



Figure 21 Increasing the power capabilities using MOSFET devices in parallel.

A similar alternative to PWM is PFM, or pulse frequency modulation. The
analogies are similar except that with PFM the pulse width is constant and the
frequency at which the pulses occur is varied. In general PWM is more common, but there
are still applications using PFM, an example being found in most RC servomotors.
In conclusion, PWM is very common, largely due to the ease with which digital
signals can be amplified and used to control analog actuators. There are many styles
and sizes of transistors, each optimized to a certain task, and virtually every man-
ufacturer supplies design notes and application notes about interfacing power tran-
sistors and PWM control. Because of their widespread use, many supporting
components are readily available, from suppression diodes and driver circuits to
the software algorithms generating the signal.

10.10 PROBLEMS
10.1 List three advantages associated with using a PC as a digital controller.
10.2 List three characteristics associated with PLCs.
10.3 What is the advantage of using differential inputs on computer data acquisi-
tion boards (A/D converters)?
10.4 Why is it beneficial to have multiple user-selectable input ranges on computer
data acquisition boards?
10.5 What is the minimum change in signal level that can be detected using a 12-bit
A/D converter set to read 0–10 V, full scale?
10.6 What is the minimum change in signal level that can be detected using a 16-bit
A/D converter set to read 0–5 V, full scale?
10.7 Describe an advantage and disadvantage of using program-controlled sample
times in digital controllers.
10.8 Describe an advantage and disadvantage of using interrupt-controlled sample
times in digital controllers.
10.9 Describe the primary ways in which a microcontroller differs from a micro-
processor.
10.10 List the two primary factors that affect how fast a microcontroller will
operate.
10.11 PLCs are commonly programmed using what type of diagrams?

10.12 What term describes the elements used to carry the power and ground signals
in ladder logic diagrams?
10.13 Controllers are commonly implemented in microprocessors using algorithms
represented in the form of what type of equation?
10.14 What devices may be used to clean up noisy signals before being read by a
digital input port?
10.15 What term describes the type of optical encoder capable of knowing the
current position without needing prior knowledge?
10.16 A 16-bit rotary optical encoder will have a resolution of how many degrees?
10.17 Describe two advantages of using stepper motors over DC motors.
10.18 What type of stepper motors, permanent magnet or variable reluctance, are
generally capable of the largest torque outputs?
10.19 What type of stepper motors, permanent magnet or variable reluctance, are
generally capable of the largest shaft speed for equivalent resolutions?
10.20 With stepper motors the maximum shaft speed is a function of what two
characteristics?
10.21 List two advantages to using transistors as switching devices as opposed to
linear amplifiers.
10.22 When using a transistor as a solid-state switch, it is important to operate it in
what range when it is turned on?
10.23 What are two advantages to using field effect transistors as switching devices
as compared with bipolar transistors?
10.24 What is the importance of placing a flyback diode across an inductive load
that is driven by a solid-state transistor?
10.25 Describe the three primary values used to describe a PWM signal.
10.26 The PWM period must be fast enough that the actuator responds in what way
to the signal?
10.27 What type of transistors are easily used in parallel to increase the total current
rating of the system?
10.28 Describe a similar but alternative method to PWM.
10.29 Locate the device characteristics and part numbers and sketch a circuit using
an npn bipolar transistor to PWM control a 10-A inductive load.
10.30 Locate the device characteristics and part numbers and sketch a circuit using a
MOSFET to PWM control a 20-A solenoid coil.
11
Advanced Design Techniques and Controllers

11.1 OBJECTIVES
 Develop the terminology and characteristics of several advanced control-
lers.
 Learn the strengths and weaknesses of each controller.
 Develop design procedures to help choose, design, and implement a variety
of advanced controller algorithms.
 Learn some applications where advanced controllers are successful.

11.2 INTRODUCTION
In this chapter we examine the framework around advanced controller design. Some
controllers examined here are becoming more common, and it may no longer be
correct to label them advanced, although in reference to classic, continuous, lin-
ear, time-invariant controllers the term is appropriate. The goal here is to introduce
several different controllers, their options, relative strengths and weaknesses, and
current applications. It is hoped that this chapter whets the appetite for additional
learning. The field is very exciting, if not overwhelming, when we realize
how fast it is growing and changing. Here are some introductory com-
ments regarding advanced controller design.
First, all methods presented thus far in this text relate to the design, modeling,
simulation, and implementation of feedback controllers, that is, controllers that
operate on the error between the desired command and actual feedback. In all
cases, before and including this chapter, the skills for acquiring/developing accurate
physical system models (and a good understanding of the system) are invaluable.
Although in some cases the model is only indirectly related to the actual controller
design, the knowledge of the system (and hence the model) will always help in
developing the best controller possible. In general, for feedback controllers, regard-
less of algorithms or implementations, the goal is to design and tune them to capably
handle all the system unknowns (always present), which cause the errors in our
system. These errors arise primarily from disturbances and incorrect models.

One problem is that feedback controllers are reactive and must wait for an
error to develop before appropriate action can be taken; thus, they have built-in
limitations. Examples of previous controllers in this category include using integral
gains to help eliminate steady-state errors and state space feedback controllers
using full state command and feedback vectors. In addition to being reactive, we
seldom have access to all states and then must use observers to estimate unmeasured
states. As this chapter demonstrates, we can use the measured states to force the
observer to converge to the measured variables. Instead of waiting for errors to
develop, as in the above examples, we can, whenever possible, use feedforward
techniques to provide disturbance input rejection and minimal tracking error. In
some cases these are essentially free techniques and can provide greatly enhanced
performance. Feedforward design techniques are presented in this chapter.
Additional topics include examining multivariable controllers, where we attempt
to decouple the inputs such that each input has a strong correlation with one output.
This helps in controller design and performance.
Finally, in addition to considering the appropriate feedback and feedforward
routines, we can assess the value of using adaptive controller methods to vary the
feedback and feedforward gains for zero tracking error. Model reference adaptive
controllers, system identification algorithms, and neural nets are all examples in this
class and are presented in following sections. Some of these fall into the nonlinear
category, which brings additional concerns of stability and performance.
While in preceding chapters we developed the groundwork on which millions
of controllers have been designed and are now operating successfully, this chapter
illustrates that the previous chapters are analogous to studying the tip of the iceberg that
is visible above the surface, and that once we have finished that material, a whole new
world, much larger than the first, awaits us as we dig deeper into the design of
control systems. With the rapid pace at which such systems are developing, we can easily
spend a lifetime uncovering and solving new problems.

11.3 PARAMETER SENSITIVITY ANALYSIS


In most controllers, whether or not the final performance is adequate is related to the
quality of the model used during the design phase. The goal of performing parameter
sensitivity analysis is to gain some understanding of how the controller will behave in
the presence of errors. In particular, we wish to be able to predict how sensitive the
performance is to errors in any of the parameters used in designing the con-
troller.
In the era of computers, this analysis has become much easier. Analytical
techniques do exist, based on methods originally suggested by Bode, but they quickly
become very tedious for larger systems. The concept, however, is quite simple and
can be partially explained using our knowledge of root locus paths. Until now we
have generally varied the proportional gain K to develop the plot, which enables us
to use the convenient graphical rules. When we had several controller gains (i.e.,
proportional-derivative [PD]), as in Chapter 5 (see Example 5.9), root contour plots
were used to draw root loci paths for different combinations of gains. This same
technique can be used to develop multiple loci paths for different system parameters.
For example, we can develop the first root locus plot for our expected values (mass,
damping, stiffness, etc.) and then repeat the process, drawing additional loci paths for

different values of mass, damping, etc. In this way we can see the effect that adding
additional mass might have on the stability of our system. An example is this: for a
typical cruise control system we expect the vehicle to have an average mass, including
several passengers. This is our default configuration for which the controller is
designed. A good question to ask, then, is what happens when the vehicle is loaded
with passengers and luggage? How will the control system now behave? Using root
contour lines we can plot the default loci paths, vary the mass, and plot additional
paths to see how the stability changes, allowing us to verify satisfactory performance
at gross vehicle weights.
A second way, made possible with the computer, is, instead of plotting multi-
ple lines, to use the computer to vary the parameter under investigation instead of
the gain K, solve for the system poles at each parameter change, and sequentially
plot the pole migrations on the root locus plot to see how the poles move when the
parameter is varied. This does require that the computer solve for the roots of the
characteristic equation multiple times. It is sometimes possible to move the para-
meter of interest to where it behaves as a gain, allowing the standard rules to be used.
Finally, in both methods, we still have not answered the whole question of
parameter sensitivity since we have not examined the rate of change. It is possible
from each method to extract this information. With the contour plots, assuming that
we varied the second parameter by equal amounts (1, 2, 3, ...), we can look at the
spacing between the lines to see the rate of change. If successive loci paths are close
together, the parameter does not cause a large rate of change over the existing loci
paths. However, when they are spaced far apart, it signifies that an equal variation in
the parameter caused a much larger change in the placement of the loci paths and the
system is sensitive to changes in that parameter. In a similar fashion with the second
method, if we plot the individual points used to generate the loci, the distance
between the points indicates the sensitivity to that parameter. When the points are
far apart, it signifies that an equal change in that parameter caused the loci to move
much further along the loci paths. This is the basis for several of the analytical methods
where we calculate the rate of change of root locations as a function of the rate of
change of parameter variations.

EXAMPLE 11.1
Given the closed loop transfer function, use Matlab to vary the parameters m and b from
1/2 to twice their nominal values, and determine how sensitive the system is to
variations in those parameters. The nominal values are

    m = 4, b = 12, and k = 8

    C(s)/R(s) = k / (m s^2 + b s + k)

The nominal poles are placed at s = -1 and s = -2, two first-order responses. What
we are interested in is how the poles move from these locations when m and b are
varied. To generate these plots in Matlab, we can define the parameters and the
transfer function, vary the parameters, and after each variation, recalculate the pole
locations and plot them. The example code is as follows:

%Parameter Sensitivity Analysis

%Define the nominal values
m=4; b=12; k=8; Points=41;

%Vary m and b
mv=linspace(m/2,m*2,Points);
bv=linspace(b/2,b*2,Points);

%Generate the poles at each m and b
for i=1:Points
  mp(:,i)=pole(tf(k,[mv(i) b k]));
  bp(:,i)=pole(tf(k,[m bv(i) k]));
end;

%Plot the real vs. imag components for the m variations
plot(real(mp(1,:)),imag(mp(1,:))); hold;
plot(real(mp(2,:)),imag(mp(2,:)))

%Plot the real vs. imag components for the b variations
hold; figure;
plot(real(bp(1,:)),imag(bp(1,:))); hold;
plot(real(bp(2,:)),imag(bp(2,:)))

First, the mass m is varied, and each time the new roots of the characteristic equation
are solved for and plotted. The resulting root loci paths are given in Figure 1. In
similar fashion, the damping can be varied and new system poles calculated, as shown
in Figure 2.
What is quite evident is that in both cases the parameter variations will cause
significant changes in the response of our system. Remembering that the nominal
closed loop pole locations are at s = -1 and -2, due to the mass and damping
variations they either spread out along the negative real axis or become complex
conjugates and proceed toward the imaginary axis and marginal stability. As the
mass and damping parameters move further away from their nominal values, the
rate of change, or sensitivity, also decreases, as noted by the spacing between succes-
sive iterations.

Figure 1 Example: root loci resulting from variations in mass (Matlab).



Figure 2 Example: root loci resulting from variations in damping (Matlab).

Using the concepts described here, most simulation programs will allow us to
change model parameters and reevaluate the new response of the system. Even if we
have large block diagrams or higher level object models, we can still vary different
parameters and observe how the simulated response is affected. Since most applica-
tions are expected to experience variations of the system parameters used in the model,
the procedure is important if we desire to check for global stability even in the
presence of changes in our system. In other words, even if our original root locus
plot predicts global stability for all gains, it is possible with variations in our para-
meters to still become unstable. Parameter sensitivity analysis is one tool that is
available for evaluating these tendencies.
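The same kind of parameter sweep can also be sketched outside Matlab. The Python/NumPy version below (assuming NumPy is available) re-solves the characteristic equation m s^2 + b s + k = 0 as m varies over the range used in Example 11.1:

```python
# Sketch: NumPy version of the mass sweep from Example 11.1.
import numpy as np

m, b, k = 4.0, 12.0, 8.0
sweep = np.linspace(m / 2.0, m * 2.0, 41)        # vary m from m/2 to 2m
poles = np.array([np.roots([mv, b, k]) for mv in sweep])

nominal = np.sort(np.roots([m, b, k]))           # poles at the nominal m
print(np.round(nominal, 6))                      # nominal poles s = -2 and s = -1
print(bool((poles.real < 0).all()))              # True: stable over the whole sweep
```

Plotting `poles.real` against `poles.imag` reproduces the loci of Figure 1; the spacing between successive points indicates the sensitivity, exactly as discussed above.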

11.4 FEEDFORWARD COMPENSATION


Feedforward compensation is fairly common and is generally used in two ways: the
first to reduce the effects of measurable disturbances, and the second to provide
superior tracking characteristics by applying it to the input command. There are
variations on how these are implemented, and we examine several types in the two
sections below. It is important to remember that the success of these methods gen-
erally depends on the accuracy of the model used during development.
The actual idea behind feedforward control is very simple: instead of waiting
for an error to develop as a feedback controller must do, feed the proper input, which
causes the system error to be zero, forward to the amplifier/actuator, thus taking the
proactive approach (and appropriately called feedforward). Whereas the feedback
controllers presented up until this point are reactive, feedforward devices are proac-
tive and take corrective action before the error has a chance to occur. This is why
feedforward controllers can be used to improve tracking accuracy. The extent to
which they improve accuracy depends largely on our modeling skills. The better the
model, the more improvement (assuming it can be implemented). Except in simple
cases, feedforward algorithms are much more practical to implement using micro-
processors.

There are several physical reasons why it may not be possible to implement
feedforward control. When discussing disturbance input rejection, we need to have
some measurement reflecting the current disturbance amplitude (the direct one is
best, but not always necessary). Some disturbances are then a lot easier to remove
than others. With command feedforward, we need (in general) to know the com-
mand in advance; in other words, we need to have access to future values. For
many processes that follow a known command signal, this is no problem since we
know what each consecutive command value will be. An example is a welding robot
used on an assembly line where the same trajectory is repeated continually. Now let
us examine each method in a little more detail.

11.4.1 Disturbance Input Rejection


Disturbance input rejection seeks to measure the disturbance and apply the opposite
input (force, voltage, torque, etc.) required to negate the real disturbance input.
Obviously, the extent to which this is accomplished is very dependent on the ability
to measure the disturbance and on having an actuator capable of negating it. It is
possible, although less effective, to sometimes arrive at the disturbance indirectly
through a different variable being measured. This usually has some dynamic loss
associated with the dynamics of the system between the measured variable and the
estimated disturbance. For those cases where the disturbance is easily measured, for
example, the slope of the hill acting on a vehicle with cruise control, the procedure is
quite straightforward. Consider the general block diagram in Figure 3. This is the
block diagram for a general controller and system where the system is acted on by a
disturbance input. For a cruise control system the variable between GC and G1 would
be the throttle position command (i.e., volts), G1 contains the throttle actuator and
engine model (torque output), and G2 contains the vehicle model (torque in, speed
out). In this case the disturbance may be caused by hills and/or wind gusts, and G3
would relate how the wind or hills affect the torque. Earlier we closed the loop to
obtain the relationship between the output and disturbance and found the following
closed loop transfer function:

    C/D = G2 G3 / (1 + GC G1 G2)

The effects from the disturbance are minimal when GC and G1 are large com-
pared to G2, but the effect of the disturbance is always present, especially if G3 is
large. Now, assuming we can measure the disturbance input, let us examine the
system in Figure 4. The first step again is to find the transfer function between the
Figure 3 General system block diagram with disturbance input.



Figure 4 General system block diagram with disturbance rejection.

disturbance and system output. This can be done using our block diagram reduction
techniques covered earlier.

    C = G2 G3 D + G1 G2 GD D - GC G1 G2 C
    C (1 + GC G1 G2) = (G2 G3 + G1 G2 GD) D

and

    C/D = G2 (G3 + G1 GD) / (1 + GC G1 G2)

We can make several observations. The denominator stays the same, and thus feed-
forward disturbance input rejection does not affect our system dynamics (i.e., pole
locations from the characteristic equation). Hence, for example, we cannot use it to
make our system exhibit different damping ratios. The upside of this is that we have
another tool to reduce the effects from disturbances without increasing the propor-
tional and integral gains and causing more overshoot and oscillation.
Examining the numerator, we see the opportunity to make the numerator
equal to zero. If this is possible, then in theory at least the disturbance has absolutely
no effect on our system output since C/D = 0. To solve for GD, set the numerator
equal to zero, resulting in a desired transfer function for GD:

    G3 + G1 GD = 0
    GD = -G3/G1
This defines the GD required to eliminate the effects of disturbances. In terms of
our cruise control example, GD is found by measuring the wind or grade, using G3 to
convert it to an estimated torque disturbance, and dividing by G1 to go from esti-
mated torque to estimated volts of command to the throttle actuator. The negative
sign implies that if the disturbance is a negative torque (i.e., downhill), the throttle
command must be decreased. It should be noted that the numerator would also be
zero if G2 equaled zero. However, this implies that no input to the physical system
will cause a change in the system output, and the system itself could never be controlled.
There are several reasons why we cannot completely eliminate the disturbance
effects using controllers on real systems. First, G1 and G3 are models representing
physical components (engine and disturbance-to-torque relationships), and thus our
rejection of disturbances is dependent (once again) on model quality. Especially in
the case of a typical internal combustion engine, using linear models will limit the
effectiveness of rejecting disturbances since the actual engine is inherently very non-
linear. Second, since we are measuring the disturbance, there will be both measure-
ment errors and noise, along with the problem that our measurement does not solely
explain the change in system output. Finally, we have physical limitations. Some
disturbances will simply be too large for our controller to handle. For example, at
some grade the vehicle can no longer maintain the commanded speed due to power
limitations. In this case any controller implementation (feedforward, feedback, adap-
tive, etc.) will have the same problem in responding to the disturbance.
In conclusion, disturbance rejection is commonly used to reduce tracking
error without relying completely on the feedback controller. The system
dynamics are not changed, and the effectiveness of the feedforward controller is
limited by modeling capabilities. In discrete systems we are limited to difference
equations based only on current and past disturbance measurements.
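These limits are easy to see in simulation. The following plain-Python sketch (a hypothetical first-order discrete plant with illustrative numbers, not taken from the text) compares feedback alone, feedback plus ideal disturbance feedforward, and feedforward built on a 20% measurement/model error:

```python
# Disturbance feedforward sketch: discrete first-order plant
#   y(k+1) = a*y(k) + b*(u(k) + d(k))
# with proportional feedback and an optional feedforward term that
# subtracts the measured disturbance.  All numbers are illustrative.

def simulate(steps=300, a=0.9, b=0.1, kp=2.0, d=0.5,
             ff_gain=0.0, meas_scale=1.0):
    """Return the steady tracking error r - y for setpoint r = 1."""
    r, y = 1.0, 0.0
    for _ in range(steps):
        d_hat = meas_scale * d              # measured disturbance (may be off)
        u = kp * (r - y) + ff_gain * d_hat  # feedback + feedforward command
        y = a * y + b * (u + d)             # plant sees the true disturbance
    return r - y

e_baseline  = simulate(d=0.0)                         # no disturbance at all
e_feedback  = simulate(ff_gain=0.0)                   # feedback only
e_perfect   = simulate(ff_gain=-1.0)                  # ideal feedforward
e_imperfect = simulate(ff_gain=-1.0, meas_scale=0.8)  # 20% model error
```

With an exact model the feedforward loop reproduces the disturbance-free error exactly; with the 20% error, roughly 20% of the disturbance effect leaks through.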

EXAMPLE 11.2
Find the transfer function GD that decouples the disturbance input from the effects
on the output of the system given in Figure 5. Assume that the disturbance is
measurable. When we close the loop for C/D and set it equal to zero, as shown in
this section, we get the requirement that

G3 + G1 GD = 0

GD = -G3/G1
For our system in Figure 5:

G1 = 5/[(s + 1)(s + 5)]   and   G3 = 2

Thus, to make the numerator always equal to zero (no effect then from the disturbance input), we set GD equal to

GD = -2 / {5/[(s + 1)(s + 5)]} = -(2/5)(s + 1)(s + 5)
If we were able to implement GD, measure D(s), and assuming our models were
accurate, we would be able to cancel out the effects of disturbances. Even though in

Figure 5 Example: system block diagram for disturbance input decoupling.


Advanced Design Techniques and Controllers 441

practice it is difficult to completely cancel out all disturbances due to the assumptions
made while solving for GD, we can generally still enhance the performance of our
system. In this example, where we have a second-order numerator and zero-order
denominator, making GD difficult to implement (it requires future values in a difference
equation), we can treat GD as a gain block that cancels out the steady-state gains of
G3/G1, or GD = -2. This is feasible to implement and, in general, if the amplifier/
actuator G1 is relatively fast, still gives good performance, ignoring only the short-term
dynamics of the amplifier/actuator.

11.4.2 Command Feedforward and Tracking


There are many parallels between feedforward disturbance rejection and command
feedforward. Whereas the goal in the previous section was to eliminate the effect
of measurable disturbance inputs, the goal here is to eliminate tracking errors due
to command changes. Using the same analogy of cruise control, the idea is that if we
know we have to accelerate the vehicle, why wait for an error to develop before a
signal is sent to the actuator? This type of system is shown in Figure 6. As before, we
want to close the loop to evaluate the changes resulting from adding command
feedforward. If GF is zero, as in the original system in Figure 3, then we can develop
the following closed loop transfer function between the system output and command
input, assuming the disturbance is zero:
C/R = GC G1 G2 / (1 + GC G1 G2)
For perfect tracking we need C/R = 1, such that the output always equals the input.
This transfer function will not approach unity unless the loop gain approaches
infinity. Since higher loop gain tends to make the system less stable, we usually
compromise between error and stability. If we assume now that the feedforward
block, GF, is active in the model, we can again close the loop and now get the
following transfer function:
C/R = (GF G1 G2 + GC G1 G2) / (1 + GC G1 G2)
Since the goal is to make C/R = 1, we want to make the numerator equal to the
denominator, or

GF = 1/(G1 G2)

Figure 6 General system block diagram with command feedforward.



With this transfer function, the inverse of the physical plant, the numerator becomes
equal to the denominator for all inputs. Physically, we are solving the system such
that our inputs and outputs are reversed: we use our desired output to calculate the
inputs required to produce that physical output. As before, the extent to which
feedforward accomplishes improved tracking depends on the accuracy of the models. If
our models are reasonably accurate, then large improvements in tracking performance
are realized.
Also similar are the physical limitations of our system. A step input for the
command will never be realized at the output since an infinite effort (impulse) is
required to follow the command. It is therefore the job of the designer, and good
practice to begin with, to give only feasible command trajectories and avoid
unnecessarily saturating the components in the system. In cases where we do not
know and are unable to control the command input, we have limited effectiveness in
using command feedforward techniques.
If we do know the command sequence in advance, then we can also use the
alternative algorithm given in Figure 7. This configuration allows us to precompute
all the input values and thus enables us to implement methods like lookup tables for
faster processing. The system inputs now include both the reference (original)
command and the feedforward command fed through the plant inverse, and the
controller should only be required to handle modeling errors and disturbances.
If we close the loop and develop the transfer function, we get
C/R = (GF′ GC G1 G2 + GC G1 G2) / (1 + GC G1 G2)
Using GF′ we can still make C/R = 1. When we compare this result with the transfer
function from the previous configuration, it leads to the following equivalence:

GF′ = 1/(GC G1 G2) = (1/GC) GF
If we wished to precompute all the modified inputs, we would simply take the desired
command and multiply it by (1 + GF′) as shown:

R_with feedforward = [1 + 1/(GC G1 G2)] R_original
This has many advantages since the entire command signal can be precomputed and
no additional overhead is required in real time. With many industrial robots, where
the same task is repeated over and over and the model remains relatively constant,

Figure 7 Alternative command feedforward configuration.



this technique can significantly improve tracking performance at no additional
operational cost, only upfront design cost.
Putting everything together, disturbance rejection and command feedforward
result in the system controller illustrated in Figure 8. This controller, limited by
the accuracy of the physical system models, measurable disturbances, and known
trajectories, will exhibit large improvements over basic feedback controllers.
A benefit is that neither controller affects the stability of the original closed
loop feedback system, which may be designed and tuned using the basic methods
defined earlier (the denominator of the system is never changed). It is likely when
implementing feedforward controllers that the original controller is less critical,
and all that may be needed is a simple proportional controller to handle the
remaining errors and dictate the type of system response to these errors. Finally,
to illustrate the advantages and disadvantages of command feedforward, let us
examine the following example.

EXAMPLE 11.3
Given the second-order system and discrete controller in the block diagram of Figure
9, design and simulate a command feedforward controller. Use a sinusoidal input
(it must be a feasible trajectory) and compare results with and without modeling errors
present. We have a second-order physical system with a damping ratio of 1/2 and a
natural frequency of 5 rad/sec, where

G(s) = ωn² / (s² + 2ζωn s + ωn²) = 25 / (s² + 5s + 25)
We will use the Matlab commands given at the end of this example to convert it to
the z-domain with a zero-order hold (ZOH) and a sample time of 0.05 sec (20 Hz). This
results in the discrete transfer function for our system:

G(z) = (0.02865z + 0.02636) / (z² - 1.724z + 0.7788)
For this example, a simple zero-pole discrete controller, similar to the one developed
in Section 9.5.1, is used as the feedback controller:

GC(z) = (3z - 2) / z

Figure 8 Disturbance input rejection and command feedforward controllers.



Figure 9 Example: command feedforward system block diagram.

After converting the system model and ZOH into the sampled domain, we can close
the loop to find the overall discrete closed loop transfer function:

C(z)/R(z) = (0.08596z² + 0.02177z - 0.05272) / (z³ - 1.638z² + 0.8006z - 0.05272)
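The closed-loop coefficients can be checked with ordinary polynomial arithmetic, since the closed loop transfer function is GC·G/(1 + GC·G). A small plain-Python sketch (coefficient lists are highest power first):

```python
# Check the closed loop transfer function by polynomial algebra:
# CLTF = Gc*G / (1 + Gc*G).  Coefficient lists are highest power first.

def polymul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def polyadd(p, q):
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + list(p)   # pad with leading zeros
    q = [0.0] * (n - len(q)) + list(q)
    return [a + b for a, b in zip(p, q)]

g_num, g_den = [0.02865, 0.02636], [1.0, -1.724, 0.7788]   # G(z)
gc_num, gc_den = [3.0, -2.0], [1.0, 0.0]                   # Gc(z) = (3z-2)/z

ol_num = polymul(gc_num, g_num)     # loop gain numerator
ol_den = polymul(gc_den, g_den)     # loop gain denominator
cl_num = ol_num                     # closed loop numerator
cl_den = polyadd(ol_den, ol_num)    # closed loop denominator: den + num
```

The resulting denominator reproduces z³ - 1.638z² + 0.8006z - 0.05272 to the rounding shown.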

For a baseline performance plot without using command feedforward, we can
simulate the system using Matlab, resulting in the plot in Figure 10.
It is clear that our controller is not very good with regard to tracking accuracy,
and while we certainly could design the controller for better performance, let us
examine the effects of adding the feedforward controller, GF′, to the system.
Remember from earlier in this section that

C/R = (GF′ GC G1 G2 + GC G1 G2) / (1 + GC G1 G2)   and   GF′ = 1/(GC G1 G2)

Since we only have one second-order plant, the transfer functions G1 and G2 are
combined and represented as one transfer function, GSys. Now we can form the
feedforward transfer function using our discrete controller and system model:

Figure 10 Example: system response using Matlab (without command feedforward).



GF′(z) = (z³ - 1.724z² + 0.7788z) / (0.08595z² + 0.02177z - 0.05272)
When we compare the original closed loop transfer function with the transfer
function containing the feedforward block, we realize that our new transfer function can
be written as (factoring out GC and GSys)

C/R = GC GSys (1 + GF′) / (1 + GC GSys)

Compared with the original closed loop transfer function, the only difference is
the factor (1 + GF′), and therefore

R_with feedforward = (1 + GF′) R_original = [1 + 1/(GC G1 G2)] R_original
Thus, to get the modified closed loop transfer function (containing the effects of GF′)
we simply multiply our original closed loop transfer function by (1 + GF′), resulting
in

C(z)/R(z) = (0.08596z⁵ - 0.119z⁴ - 0.01956z³ + 0.09924z² - 0.04335z + 0.002779) / (0.08596z⁵ - 0.119z⁴ - 0.01956z³ + 0.09924z² - 0.04335z + 0.002779)
If we look closely we see that the numerator and denominator are identical and
are therefore guaranteed (in simulations at least) to produce Figure 11, where the
command and response appear as one line, since the C/R ratio is always unity. So if
we can develop accurate models, then in theory, with feasible trajectories, we can
drive the tracking error to zero.
A good question to ask is what happens when our models are not accurate. Let
us change the system damping ratio in our estimated model to 1, twice that of the
original system, and see what happens. First, we will develop another discrete model of the

Figure 11 Example: system response using Matlab (with command feedforward and no
errors in the model).

second-order continuous system, but this one will contain the modeling error. The
new continuous system model (with errors) is

G_errors(s) = ωn² / (s² + 2ζωn s + ωn²) = 25 / (s² + 10s + 25)
Using Matlab as before, we convert it to the discrete equivalent with a ZOH and then
form the feedforward transfer function from this erroneous model:

GF′_errors(z) = (z³ - 1.558z² + 0.6065z) / (0.0795z² + 0.01429z - 0.04486)
Now when we create the new overall system transfer function, we will use this GF′
based on the erroneous model, leaving our original system parameters unchanged.
This results in the new closed loop transfer function, including the feedforward
controller:

C(z)/R(z) (with model errors) = (0.08596z⁵ - 0.1053z⁴ - 0.03153z³ + 0.08758z² - 0.3371z + 0.002365) / (0.0795z⁵ - 0.1195z⁴ - 0.004625z³ + 0.08072z² - 0.3667z + 0.002365)

In contrast to the transfer function where our model matched the system, the
overall system transfer function is no longer unity and we will have some tracking
errors. To show this, we can simulate the system again and compare the system
output with the desired input. This results in the plot given in Figure 12, where
additional tracking errors are evident.
Whereas the first system, without any feedforward loop in place, was attenuated
and lagged the input, in this case, using an imperfect feedforward transfer
function, we see larger than desired amplitudes due to the modeling error.
The lag between the output and command, however, is removed. Thus we see that
this system is fairly sensitive to changes in system damping, and care must be taken to
use the correct values. This example also serves as motivation for adaptive controllers,
which we examine in later sections. For example, if the friction was not constant and

Figure 12 Example: system response using Matlab (with command feedforward and errors
in the model).

changed as the oil temperature in a damper changed, the command feedforward
controller might work well at some operating points but very poorly at others.
Unless the system can adapt, we must either develop temperature-dependent models
or choose other alternatives. Finally, for any feedforward controller, disturbance or
command, it is wise to perform at least a basic parameter sensitivity analysis to judge
potential future problems.
To wrap up this example, let us quickly look at the difference equations for GF′,
given earlier as

GF′(z) = (z³ - 1.724z² + 0.7788z) / (0.08595z² + 0.02177z - 0.05272)
Multiplying top and bottom by z⁻³:

GF′(z) = (1 - 1.724z⁻¹ + 0.7788z⁻²) / (0.08595z⁻¹ + 0.02177z⁻² - 0.05272z⁻³)
Cross-multiplying and writing the difference equation, we get

0.08595r′(k-1) + 0.02177r′(k-2) - 0.05272r′(k-3) = r(k) - 1.724r(k-1) + 0.7788r(k-2)
Since we are interested in r′(k), the modified input sequence, we need to shift each
sample by 1 so that we can write r′(k) as a function of previously modified values
and the original input values. This leads to the difference equation, expressed as

0.08595r′(k) + 0.02177r′(k-1) - 0.05272r′(k-2) = r(k+1) - 1.724r(k) + 0.7788r(k-1)
Thus, as we suspected, we must know (for this case) the command one step in
advance, r(k+1), to implement this particular command feedforward controller.
For many applications this is not a problem, and if good models can be developed,
command feedforward will give significant improvements in tracking. If we do know
the input sequence one sample in advance, it is also likely that we know the
entire sequence of input values. If this is the case, then not only can we implement the
difference equation as written, we can precompute the entire modified input sequence
and store it in a table.
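The table-precomputation idea can be sketched as a plain-Python function that applies the shifted difference equation to a known command sequence (coefficients from the example above, zero initial conditions assumed); because r(k+1) appears on the right side, the last command sample produces no r′ value:

```python
# Precompute the modified command r'(k) from a known command r(k) using
# the shifted difference equation above (zero initial conditions assumed).

def feedforward_sequence(r):
    """Return r'(0..len(r)-2); r(k+1) is needed, so the last sample stops us."""
    rp = []
    for k in range(len(r) - 1):
        rkm1 = r[k - 1] if k >= 1 else 0.0      # r(k-1)
        rpm1 = rp[k - 1] if k >= 1 else 0.0     # r'(k-1)
        rpm2 = rp[k - 2] if k >= 2 else 0.0     # r'(k-2)
        rp.append((r[k + 1] - 1.724 * r[k] + 0.7788 * rkm1
                   - 0.02177 * rpm1 + 0.05272 * rpm2) / 0.08595)
    return rp
```

Each stored value can then be fed out of a lookup table in real time with no additional overhead.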
Finally, what follows here are the Matlab commands used to develop the
transfer functions and plot the responses used throughout this example.
%Program to implement and
%test command feedforward
%wn = 5, z = 0.5
%Ts = 0.05 = 20 Hz

%Define continuous TF, wn=5, z=0.5
sys1=tf(25,[1 5 25])
pause;
%Define discrete controller
sysc=tf([3 -2],[1 0],0.05)
pause;
%Convert physical model to discrete TF
sys1z=c2d(sys1,0.05,'zoh')
pause;
%Close the loop for the CLTF
syscl=feedback(sys1z*sysc,1)
pause;
%Define sinusoidal input and simulate response
[u,T]=gensig('sin',1,10,0.05);
[y1,T1]=lsim(syscl,u,T);
plot(T1,y1,T,u);axis([0 10 -2 2]);
pause;
%Define the feedforward TF
sysff=(1/(sys1z*sysc))
%Implement into system TF
sysfinal=((1+sysff)*syscl)
pause;
%Simulate and create new plot
[y2,T2]=lsim(sysfinal,u,T);
figure;
plot(T2,y2,T,u);axis([0 10 -2 2]);
%Define the 2nd feedforward TF
%(double the damping ratio)
sys2c=tf(25,[1 10 25]);
sys2=c2d(sys2c,0.05,'zoh');
sysff2=(1/(sys2*sysc))
%Implement new Gf into system TF
sysfinal2=((1+sysff2)*syscl)
pause;
%Simulate and plot
[y3,T3]=lsim(sysfinal2,u,T);
figure;plot(T3,y3,T,u);axis([0 10 -2 2]);

In conclusion, it is clear that there are many advantages to implementing
feedforward controllers in our designs. As part of a recurrent theme, we find again that
good models are necessary for well-performing controllers. Whereas it is quite easy
to follow rules and tune a black box proportional-integral (PI) controller, we need
additional skills to correctly design and implement more advanced controllers and
take advantage of their benefits.

11.5 MULTIVARIABLE CONTROLLERS


Several methods have been developed for designing, tuning, and implementing mul-
tiple input, multiple output (MIMO) controllers, also referred to as multivariable
controllers. In this section we introduce two methods, both extensions of methods
presented earlier. First, we can model MIMO systems using combinations of single
input; single output (SISO) transfer functions where some of the transfer functions
represent the coupling terms. This method does allow us to understand the system in
terms of familiar block diagrams and thus is presented rst.
The second common procedure is to represent MIMO systems using state
equations and, if linear, system matrices. An advantage is the ease with which systems
with many inputs and outputs can be represented. In addition, using linear algebra,
the techniques (and hence computer algorithms) remain the same regardless of system
size. Whereas earlier we used Ackermann's formula as a closed form solution
when solving for the gain matrix required to place our system poles at preset
(desired) locations, we no longer have deterministic solutions and must make
decisions about how to deal with the extra degrees of freedom. This is where the term
optimal control comes from: some rule, performance index, or cost function is
defined, and the controller is optimized to minimize or maximize this function. The
function may measure controller action, accuracy, power used, etc. Common methods
developed to minimize these functions include the linear quadratic regulator
(LQR), variations of least squares, and Kalman filters. Most optimal controllers
are implemented using state variable feedback, similar to the SISO examples.
Since, as we mentioned above, we cannot place all of our poles (more gains than
poles), the computational effort is greatly increased from that of earlier SISO
systems. Fortunately, many design programs have the methods listed above already
programmed, and it is not necessary to do this design work manually.
The question then becomes which method to use. If the number of inputs and
outputs is relatively small, say three or fewer, it is quite easy to design controllers
using coupled transfer functions. It is possible to go larger, but the matrix sizes grow
along with it. Transfer functions are generally more familiar to most people and the
relationships easily defined. On the other hand, the larger the system grows, the
easier it is to work in state space. In fact, the techniques to design each optimal
controller are virtually identical (procedurally) regardless of system size. State
variable feedback methods work very well when good models are used. Since so much is
dependent on the model (observers, states, interaction, etc.), poor models rapidly
lead to very poor controllers. Thus, care needs to be taken in developing
system models. Applications where the models are well defined have seen excellent
results when optimal controllers are used with state variable feedback; caution is
required, as the opposite end of the spectrum is also evident.

11.5.1 Transfer Function Multivariable Control Systems


In this section we will look at two input-two output (TITO) systems. Transfer
function methods are much easier if the number of inputs equals the number of
outputs, since we can work with square matrices and then decouple each input and
output. Decoupling is not completely possible with unequal numbers of inputs and
outputs, since each input cannot be directly related to a single output. If there is only
one output and multiple inputs, the tools from previous chapters can be applied in
the same way that both command and disturbance inputs act on a system and were
dealt with by examining the effects of each separately and then adding the results.
The general block diagram representing our two input-two output system is shown in
Figure 13. We can now define U, Y, and G for the block diagram:

U = system inputs; the number of rows equals the number of inputs.
Y = system outputs; the number of rows equals the number of outputs.
G = transfer function matrix; each element, g, represents a SISO transfer
function. The number of rows equals the number of outputs and the number
of columns equals the number of inputs. Thus, if the number of inputs does
not equal the number of outputs, G is not a square matrix.
Defining the matrices for the system in Figure 13:

[y1]   [g11  g12] [u1]
[y2] = [g21  g22] [u2]

Figure 13 General TITO block diagram model.

There are no arguments for the matrices since both continuous (s) and discrete (z)
transfer functions can be represented by this configuration. The first step is to
determine the amount of cross-coupling; the second is to try to decouple the
inputs and outputs. To determine the coupling, the common experimental system
identification techniques applied to step inputs and frequency responses can be used.
The difference we see here is that for each input we measure two outputs (or more or
fewer for general MIMO systems). For example, for the system in Figure 13, if we put
a step input on u1, we will get two response curves, one for each output. The plot of
y1 can be used to find g11 and the plot of y2 to determine g21. An example is given in
Figure 14.
For the example system given, where one output exhibits overshoot and the
other decays exponentially, the two curves would likely be fit to the following first-
and second-order system transfer functions, both functions of input u1:

g11 = ωn² / (s² + 2ζωn s + ωn²)   and   g21 = a / (s + a)
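To illustrate the "one input, two recorded outputs" idea, the sketch below generates the two step responses from assumed models (an underdamped second-order g11 and a first-order g21, with made-up parameters) by simple Euler integration; g11 overshoots while g21 approaches its final value exponentially:

```python
# Step-response sketch: one step on u1, two recorded outputs.  Models and
# parameters are assumed for illustration; Euler integration, dt = 0.001.

def step_g11(wn=2.0, zeta=0.3, T=10.0, dt=0.001):
    """Second-order g11: y'' + 2*zeta*wn*y' + wn^2*y = wn^2, unit step input."""
    y, yd, out = 0.0, 0.0, []
    for _ in range(int(T / dt)):
        ydd = wn * wn * (1.0 - y) - 2.0 * zeta * wn * yd
        yd += ydd * dt
        y += yd * dt
        out.append(y)
    return out

def step_g21(a=1.0, T=10.0, dt=0.001):
    """First-order g21: y' + a*y = a, unit step input."""
    y, out = 0.0, []
    for _ in range(int(T / dt)):
        y += a * (1.0 - y) * dt
        out.append(y)
    return out

y11, y21 = step_g11(), step_g21()   # two curves from one step on u1
```

Fitting ωn, ζ, and a to such recorded curves gives the g11 and g21 entries directly.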
A similar plot could be recorded to determine the remaining two transfer function
elements of G. If Bode plots were developed, higher order system models could then
be estimated from the resulting plots. Whether the transfer functions have been

Figure 14 Experimental evaluation of transfer function matrix.



determined analytically or experimentally, we ideally want to decouple them such
that each input only affects one output. If we can design such a system, it becomes
like two SISO systems, and all our techniques from earlier can be used for
controller design, tuning, etc. In matrix form this would now be represented as

[y1]   [g′11    0 ] [u1]
[y2] = [ 0   g′22 ] [u2]
This system behaves like two separate systems, as shown in Figure 15. Since we know
the format of the desired transfer function matrix, let us examine the steps required
to transform our initial coupled system from Figure 13 into the decoupled system in
Figure 15. Begin by defining a possible controller transfer function matrix, GC, the
same size as the system transfer function matrix, configured as in Figure 16. Using
the TITO system as an example and matrix notation, which allows us to easily extend
the results to larger systems, we write the following transfer function matrices and
relationships.
System output:

Y = G U

System input:

U = GC E = GC (R - Y)

Combine:

Y = G GC (R - Y)

Simplify:

(I + G GC) Y = G GC R

Finally, the closed loop transfer function (CLTF):

Y = (I + G GC)⁻¹ G GC R
The closed loop transfer function should look very familiar from its SISO
counterpart. Now all we have to do is compare our transfer function matrix from

Figure 15 TITO system when decoupled.



Figure 16 General TITO controller block diagram.

above with the desired transfer function matrix when the system is decoupled and
solve for the controller that results (i.e., the only unknown matrix of transfer
functions in the equation). Let GD be our desired transfer function matrix (generally
diagonal if we wish to make the system uncoupled), and set the closed loop transfer
function matrix equal to it.

Setting them equal:

GD = (I + G GC)⁻¹ G GC

Solving for GC:

GC = G⁻¹ GD (I - GD)⁻¹

Now GC contains our desired controller, which when implemented should produce
the response characteristics defined in the diagonal terms of the desired transfer
function matrix, GD.
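A quick numerical sanity check of GC = G⁻¹GD(I - GD)⁻¹ can be done at a single frequency with hand-rolled 2×2 complex matrix operations. The coupled plant below is hypothetical; the check is that the closed loop (I + G·GC)⁻¹G·GC reproduces the diagonal GD (the algebra works out exactly when GD is diagonal):

```python
# Decoupling check at a single frequency s = 1j.  2x2 complex matrix
# helpers are hand-rolled; the coupled plant G and the diagonal target Gd
# are hypothetical examples.

def m_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def m_add(A, B, sign=1):
    return [[A[i][j] + sign * B[i][j] for j in range(2)] for i in range(2)]

def m_inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

s = 1j
I = [[1, 0], [0, 1]]
G = [[1 / (s + 1), 0.5 / (s + 2)],       # coupled plant evaluated at s = 1j
     [0.3 / (s + 1), 2 / (s + 3)]]
Gd = [[1 / (s + 1), 0],                  # desired decoupled response
      [0, 1 / (s + 2)]]

Gc = m_mul(m_inv(G), m_mul(Gd, m_inv(m_add(I, Gd, -1))))  # G^-1 Gd (I - Gd)^-1
GGc = m_mul(G, Gc)
CL = m_mul(m_inv(m_add(I, GGc)), GGc)    # (I + G Gc)^-1 G Gc
```

Repeating the check over a grid of frequencies (or doing the algebra symbolically) verifies the full decoupling.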
This is but one possible method that can be used to design controllers using
transfer functions in multivariable systems. In continuous systems it would be highly
unlikely that the resulting controller could be easily implemented using physical
analog components. With digital controllers, however, we are generally not limited
unless the controller ends up not being realizable (not programmable, due to the
necessity of unknown or unavailable samples) or the physics of the system prevent it
from operating the way it should. For this method, the stability of our system is
determined by the type of response we chose for the terms on the diagonal of GD. In all
cases, feasible trajectories should also be used, avoiding unnecessary saturation of
components.
In cases where the system becomes too complicated to decouple completely,
we can often achieve good results by making it mostly decoupled, minimizing
the off-diagonal terms and maximizing the diagonal terms. This type of
approach is our only option if the number of inputs and outputs are not equal. The
Rosenbrock approach using inverse Nyquist arrays is based on this idea of making
our diagonal terms dominant. Other methods seeking to maximize the diagonal

terms relative to the off-diagonal terms are the Perron-Frobenius (P-F) method,
based on eigenvalues and eigenvectors, and the characteristic locus method. The
advantage is very important for continuous systems since the controller can often be
an array of gains selectively chosen to achieve this trait, and thus capable of being
implemented. Even for digital controllers, the work to implement is greatly reduced.
These systems generally take the form shown in Figure 17, where G is the physical
system, K is the gain matrix, and P is an optional postsystem compensator acting on
the system outputs. To check stability for such controllers, we must close each
individual loop (diagonal and coupling terms) and verify that unstable poles are not
present. There are additional stability theorems for these controller design techniques,
but they require linear algebra results that are not covered here.

11.5.2 State Space Multivariable Control Systems


Since we have been developing and using state space techniques alongside transfer
functions, differential equations, and difference equations, the groundwork has
already been put in place for multivariable control system design. In fact, the
techniques used earlier to design a state space control system and tune the gain matrix
are the same as what we wish to do now. There is one caveat, however: instead of
finding a unique gain matrix, as is the case for SISO systems, for MIMO systems we
get a set of equations with more unknowns than equations. This leads to an infinite
number of solutions, and additional methods are required to reduce the number of
unknowns to where the gains can be found. While the advantage of state space is
that the same techniques are applied whether a third- or twenty-third-order system is
designed, designing stable, robust controllers is a process that takes trial and error
and experience. State space controllers are very sensitive to modeling errors, and
good fundamentals in modeling are required.
There are two primary methods used to design controllers in state space: pole
placement and optimal control. Each method has numerous branches. In pole
placement, the overall goal is to place all of the poles in such a way as to produce our
desired response characteristics. For multivariable systems this means that with the
additional gains we can try to further shape or control the direction of the poles
placed with the other gains. This may be an iterative or intuitive process. If we know
that coupling does not exist between one input and one of the outputs, we can set
that gain to zero, thus getting closer to a solution. If optimal control is used, the
gains are chosen such that they minimize or maximize some chosen control law;
different control laws will result in different gains. What is hard to accomplish with
both controllers is good disturbance rejection.
The one assumption made thus far is that we have access to all of the states.
This is seldom the case (except in single output systems), and we must rely on

Figure 17 Gain matrix to make diagonal terms dominant in system.



observers, or estimators as they are commonly called. Since all the states are seldom
available, or are too costly to measure, the goal of an observer is to predict, or
estimate, the missing states. Just as we determined earlier whether a system was
controllable, we can also determine whether a state space system is observable.
Controllability depends on the A and B matrices, such that an input is capable of
producing an output and thus controlling the system. In the same way, observers are
dependent on the A and C matrices, since the system states must correlate with the
system output to be observable. A system is observable, then, if the rank of the
observability matrix is equal to the system order. The observability matrix is defined as

MC = [Cᵀ | AᵀCᵀ | (Aᵀ)²Cᵀ | ... | (Aᵀ)ⁿ⁻¹Cᵀ]

Thankfully, many computer programs have been written to perform this
check. The important concept is that each state must be related to a change in at
least one output in order for the system to be fully observable.
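The rank test is easy to sketch in plain Python for small systems, building the observability matrix block by block (here as [C; CA; ...; CAⁿ⁻¹], the transpose of the form above, which has the same rank) and measuring rank by Gaussian elimination:

```python
# Observability sketch: build Mo = [C; CA; ...; CA^(n-1)] and test
# rank(Mo) == n with a small Gaussian-elimination rank routine.

def mat_mul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def rank(M, tol=1e-9):
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > tol), None)
        if piv is None:
            continue                       # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(len(M)):
            if i != r:
                M[i] = [v - M[i][c] * w for v, w in zip(M[i], M[r])]
        r += 1
    return r

def observable(A, C):
    n = len(A)
    Mo, block = [], [row[:] for row in C]
    for _ in range(n):
        Mo.extend(block)                   # append C, then CA, CA^2, ...
        block = mat_mul(block, A)
    return rank(Mo) == n
```

For example, measuring only position of a mass-spring system still gives full observability, while measuring one state of two fully decoupled modes does not.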
There are many different implementations for observers. The simplest
implementation is an open loop estimator in parallel with the physical system, as given in
Figure 18. The problems with the open loop observer are quite obvious: if the initial
conditions are wrong and/or modeling errors or disturbances are present, the
states never converge to the proper values. Implementing a separate feedback loop on
the observer and forcing its estimated output to converge with the actual output remedies
this shortcoming. This improvement is shown in Figure 19.
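A minimal sketch of the closed loop observer idea for a discrete two-state system, with illustrative A, B, C, and observer gain L (none taken from the text): starting from wrong initial conditions, the estimate converges to the true state because the estimation error obeys e(k+1) = (A - LC)e(k):

```python
# Closed loop (Luenberger) observer sketch for a discrete two-state system.
# The observer runs the same model as the plant plus a correction term
# L*(y - yh) driving the estimate toward the measurement.  Values are
# illustrative; L is chosen so that A - L*C has stable eigenvalues.

A = [[1.0, 0.1], [0.0, 0.9]]
B = [0.0, 0.1]
C = [1.0, 0.0]                  # only the first state is measured

L = [0.8, 0.5]                  # observer gain

x = [1.0, -0.5]                 # true state
xh = [0.0, 0.0]                 # estimate, deliberately wrong at k = 0
for k in range(100):
    u = 1.0                     # known input
    y = C[0] * x[0] + C[1] * x[1]       # plant measurement
    yh = C[0] * xh[0] + C[1] * xh[1]    # predicted measurement
    x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
         A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
    xh = [A[0][0] * xh[0] + A[0][1] * xh[1] + B[0] * u + L[0] * (y - yh),
          A[1][0] * xh[0] + A[1][1] * xh[1] + B[1] * u + L[1] * (y - yh)]

est_error = max(abs(x[0] - xh[0]), abs(x[1] - xh[1]))
```

Setting L = [0, 0] recovers the open loop estimator, which never removes the initial-condition error.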
By adding the closed loop feedback, we now force the observer to converge to
the proper state estimates. Since there are no physical components (i.e., the observer
is implemented digitally), we can make the convergence times for the observer much
faster than for the physical system. Additionally, the feedforward serves to remove
lags since the states do not wait for an error to occur. If there are no measured states,
this is called a full state observer. When we are able to measure some states, we
generally use a reduced state observer. The advantages of full state observers are
better noise filtering, lower cost than actual transducers, and easier design techniques.
Disadvantages include doubling the order of the system, possible inaccuracies
when actual measurements are possible, and much higher computational demands.
Since some states are usually available, we can implement a reduced order state
observer. This observer takes advantage of the actual signals being available, reduces

Figure 18 General open loop state observer.



Figure 19 Observer with closed loop feedback and feedforward.

the order of the system relative to using a full state observer, simplifies the
algorithm, and reduces the computational load. The actual model of a reduced
order state observer, shown in Figure 20, is very similar to the full state observer
but combines actual and estimated states. The reduced order state observer only
estimates the states in the z vector (not affiliated with the z-transform) and combines
these with the measured states using the partitioned matrix to produce the output
vector, Y, used for the feedback loop:

[C]⁻¹
[T]

where C is the original state space output matrix and T is a matrix (number of rows
equal to the number of states minus the number of measured outputs; number of
columns equal to the number of states) which, when partitioned with C, produces a
square matrix whose rank equals the system order. There are usually more unknowns
than equations, and choices are required when selecting T.

Figure 20 Reduced order state observer.



Convergence time (observer pole locations), regardless of observer type, should
be chosen to be approximately 5 to 10 times faster than the system. The goal is to be
fast enough to always converge quickly relative to the physical system but slow
enough to provide the desirable filtering qualities; too fast, and unnecessary noise
is added (conveyed) to the estimated states. It is common to assign damping ratios
near 0.707 since this provides near optimal rise times and less than 5% overshoot. If
the controller is implemented digitally, direct methods like deadbeat controllers may
also be used.
Once the observer is designed and we have access to all states, there are many
options available regarding controller algorithms. Several are mentioned here, but
the reader is referred to additional references listed in the bibliography for further
studies. Some controllers resemble the familiar SISO controllers developed earlier.
For example, Figure 21 illustrates the implementation of a gain matrix and integral
gain matrix to provide for elimination of steady-state errors. We see that for this
configuration the states are operated on by the gain matrix and the errors between
the desired and actual output(s) by the integral gain matrix and integral function.
The integral gain can help remove effects from constant disturbance inputs that
might otherwise cause a steady-state error.
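A minimal sketch of this structure for a scalar plant (all numbers below are illustrative, not from the text) shows the integral gain removing the steady-state error caused by a constant input disturbance:

```python
import numpy as np

# Scalar sketch of Figure 21's structure: state feedback gain K plus
# integral gain Ki acting on the accumulated output error.
a, b = 0.9, 0.1          # plant: x(k+1) = a*x(k) + b*(u(k) + d)
K, Ki = 2.0, 0.5         # state-feedback and integral gains (illustrative)
r, d = 1.0, 0.5          # setpoint and constant input disturbance

x, q = 0.0, 0.0          # plant state and error integrator
for _ in range(200):
    u = -K * x + Ki * q              # gain matrix + integral action
    x = a * x + b * (u + d)          # plant update with disturbance
    q = q + (r - x)                  # accumulate the output error

# The integral term drives the steady-state error to zero despite d.
print(abs(x - r))        # -> approximately 0
```

Without the integral term, the constant disturbance d would leave a nonzero offset between x and r.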
Another class falls under the term optimal controllers. Briefly defined above,
these controllers seek to optimize a performance index function as a method of
developing enough equations to solve for the unknown gains. Remember that a
MIMO state feedback controller will result in more unknown gains than equations
when pole placement techniques are used. One of the most common optimal control
laws is the LQR. The LQR seeks to minimize a cost function containing accuracy
and controller activity measures. The balance is then between performance and
controller requirements. Weighting matrices are used to determine the relative
importance of each one. Other cost functions may be based on Lyapunov equations,
Riccati equations, least squares estimations, or Kalman filters, along with many
others. Since there is an infinite number of solutions, each method will result in
slightly different results. If noise is a primary factor in your system, there may be
enough advantages to use a Kalman filter. A Kalman filter is based on a stochastic
(random) noise model and is often able to extract the states even from noisy signals.
The trade-off is more intense computational and programming considerations, and
hence we have seen only limited use of Kalman filters (GPS processors, for example).
Since the Kalman filter is normally implemented as a true time-varying controller, a
recursive solution must be implemented.

Figure 21 Multivariable integral control with state feedback.


Advanced Design Techniques and Controllers 457

EXAMPLE 11.4
Describe the basic Matlab commands and tools that are available for designing
optimal control algorithms. Matlab contains many functions already developed
for the purpose of optimal controller design. LQRs are designed using dlqr, and
Kalman filters (compensators) are designed using kalman. The function dlqr performs
linear-quadratic regulator design for discrete-time systems. This means that
the controller gain K is in the feedback where
u(k) = -K x(k)
Then for the closed loop controlled system:
x(k + 1) = A x(k) + B u(k)
To solve for the feedback matrix K we also need to define a cost function, J:
J = Σ [ x^T Q x + u^T R u + 2 x^T N u ]
J is minimized by Matlab where the syntax of dlqr is
[K, S, E] = dlqr(A, B, Q, R, N)
K is the gain matrix, S is the solution of the Riccati equation, and E contains the
closed-loop eigenvalues, that is, the eigenvalues of (A - BK).
The function kalman develops a Kalman filter for the model described as
x(k + 1) = A x(k) + B u(k) + G w(k)
y(k) = C x(k) + D u(k) + H w(k) + v(k)
where w is the process noise and v is the measurement noise. Q, R, and N are the
white noise covariances as follows:
E[w w^T] = Q,   E[v v^T] = R,   E[w v^T] = N
The syntax of kalman is
[Kfilter, L, P, M, Z] = kalman(sys, Q, R, N)
This only serves to demonstrate that many controllers can be designed using
programs such as Matlab. Certainly, the brief introduction given here is meant to
point us forward to new horizons as control system design engineers. Many
references in the Bibliography contain additional material.
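As a sketch of what dlqr computes internally, the discrete Riccati recursion can be iterated to a fixed point in NumPy (the system matrices below are illustrative, not from the text):

```python
import numpy as np

# Sketch of a discrete LQR: iterate the Riccati recursion to a fixed
# point, then form the feedback gain for u(k) = -K x(k).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])            # illustrative discrete double integrator
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)                         # state-accuracy weight
R = np.array([[1.0]])                 # control-activity weight

P = Q.copy()
for _ in range(500):                  # value iteration on the cost-to-go
    P = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
        R + B.T @ P @ B, B.T @ P @ A) + Q

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain

# E in dlqr's output corresponds to the eigenvalues of (A - B K),
# which must lie inside the unit circle for a stable closed loop.
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(np.abs(eigs) < 1.0))    # -> True
```

The converged P plays the role of dlqr's Riccati solution S; a production design would of course call dlqr (or a dedicated Riccati solver) rather than iterate by hand.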
Concluding our discussion on multivariable controllers, we see that all our
earlier techniques, transfer functions, root locus, Bode plots, and state space system
designs can be extended to multivariable input-output systems. Even if all the states
are unavailable for measurement, we can implement full or reduced order observers
to estimate the unknown states. The larger problem arises because, with MIMO
systems, a deterministic solution is generally unavailable and we get more unknown
solving this problem in an optimal manner. Even without all the details, it is easy to
simulate many of the optimal controllers using programs like Matlab, which has
most of the optimal controllers already programmed in as design tools. In all these
controllers, it is important to develop good models. In the adaptive controller section
we will introduce some techniques that allow us to update the model in real time.

11.6 SYSTEM IDENTIFICATION TECHNIQUES


System identification is used in several ways. First, it is a common approach to
developing the component models for use in designing a control system and simulating
its response. Second, it is a technique that can be done recursively and thus
provide updated information to the controller in real time. When system identification
routines are linked to the parameters in the controller, we call the controller
adaptive, since it can adapt to changes in the environment in which it acts or to
changes internal to the system components themselves (wear, leakage, etc.). There
are two basic types of models, nonparametric and parametric. Nonparametric models
are not examined here but include system models like Bode plots and step response
plots. The information required to design the control system is contained in Bode
plots, and yet the parameters must still be extracted from the plot to develop the
model for use in a simulation program. We have already examined this procedure in
earlier chapters. When the parameters themselves are the goal, as in coefficients of
difference equations, we are developing a parametric model. Most adaptive algorithms
require parametric models. If the system contains delays and we wish to
develop a continuous system model, it is common to use a Padé approximation.
Digital systems are much easier in this respect due to the delay operator.
Recalling earlier material, we have already learned several system identification
techniques: the step response and the frequency response. The step response is limited,
for the most part, to first- and second-order systems, while the Bode plot requires
numerous data points and does not contain the desired parameters explicitly. In
addition, both methods require a break in the control routine to perform the test.
We are unable to identify the system in real time while the system is operating.
Additional methods, introduced in this section, do give us the capability to identify
the system in real time.
The least squares method is a common approach, capable of batch or recursive
solutions and capable of fitting the input-output data to a large variety of system
orders and configurations. We generally need to know something about our system
to avoid a trial and error approach when determining the structure of our model.
Start with the lowest order that we think will provide a good fit. If it provides
acceptable results, then we save ourselves computational time, and if it does
not, then we can increase the order of our model. Some of the more advanced
programs fit the data to a variety of model configurations and automatically select
the best one. This is helpful if we have no idea of our physical model structure.
The first area of concern when performing system identification regards the
input sequences. The saying garbage in, garbage out applies to system identification;
if we choose the wrong signals we will get the wrong model. If the input is simply
a constant value, then the output will likely also be a constant value and our model
will simply contain information about the steady-state gain of our system. The
important dynamics of our system may be completely missed. If possible, we can
use input sequences that are similar to the control outputs from the controller (thus
inputs to the physical system) and can be confident that even if the model is not
physically correct, it does model the process as it is being used, since the model is
based on what the controller is doing. This is similar to how many adaptive controllers
are configured, where around the lower frequencies at which the controller is
attempting to operate the system, a higher-order process might be adequately realized
using a simple first- or second-order model, since such a model captures the dynamics of
importance.
In general, though, for accurate models our input-output data must contain
sufficient information to determine the best model parameters. Thus, inputs should
contain various frequency components within and around the system's bandwidth
for maximum information. Some of the components should allow the system to
nearly settle to equilibrium. If a particular frequency or amplitude is not part of
our input-output sequence, the model will not accurately reflect those conditions.
There are several ways to verify the quality of the model. The first and obvious
one is to use the finished model with a set of measured inputs and compare the model
(predicted) output with the actual measured output. This has the advantage of being
very intuitive and easy to judge the quality from. We can also examine the loss
function, which is the sum of errors squared, since it represents the amount of
variation not explained by the model.
Finally, most model structures determine the coefficients of the difference equations
for a particular model. This is a natural extension of the input-output sampling
process used to collect the data and also represents a common structure in how they
are implemented back in adaptive controllers and other algorithms. Newer system
identification tools like fuzzy logic and neural nets are the exception and use connection
properties and function shapes to adapt the model. Difference equations are
easy to use since we can easily convert analog models into discrete equivalents, write
the difference equations, and use least squares techniques, as the next section shows,
to determine the coefficients of the equations.

11.6.1 Least Squares


Least squares methods are common in almost all branches of engineering.
They may be implemented recursively in addition to batch processing
and thus allow variations of adaptive controllers to implement the routine in real
time. Several variations of least squares routines have evolved over the years. Many
can now be implemented using popular decomposition techniques to make the
method more reliable and computationally more efficient. This section introduces
the batch and recursive methods of least squares.
11.6.1.1 Batch Processes Using Least Squares
Batch processing models are easy to implement and thus commonly used to estimate
the parameters of the desired model structure. Unlike step and frequency response
plots, the data can be taken while the controller is on-line and processed later.
The process is quite simple and can be used to find solutions to most problems
with more equations than unknowns (overdetermined). There are many engineering
problems, such as linear regression and curve fitting, which use least squares
methods.
Of particular interest in this section is the use of least squares in determining
the system model parameters. This is a good case of having more equations
than unknowns, where we measure many input and output data points and
desire to find the model parameters that minimize the error between the actual data
and simulated data. Since the structure of the model is determined beforehand in
almost all cases, we still must rely on our understanding of the system and choose

the correct model. For example, if we choose a first-order model containing one
parameter, the time constant, then we are limited to always minimizing the errors
within these limits. Our model will never predict an overshoot and oscillation,
even if our system exhibits that behavior. The goal then is to choose a model that
includes the significant dynamics (number of zeros and poles). If we have three
dominant poles, then a third-order model should produce an accurate model and
the system identification routines will converge to a solution.
The general procedure is to model our system using difference equations, since
system identification routines are implemented in microprocessors and the discrete
input-output data is easy to work with in matrix form. Our beginning point is to
define the structure of the difference equation in the form

c(k) = Σ (i=1 to d) θ_a(i) c(k - i) + Σ (i=0 to n) θ_b(i) r(k - i)

This is simply a general representation of our difference equations from earlier
chapters, and it allows us to define the coefficients, the θ's, of the terms on the right-hand
side (inputs and previous values) of our difference equations. The terms begin
at c(k - 1) and r(k) as they did earlier. Depending on the size and order of our
physical system model, c and r may vary in the number of terms that are required.
The advantage of least squares is that even if we have three unknowns and one
thousand equations (sets of input-output data points), we can use all equations
and solve for the coefficients of our difference equations (the unknowns) while minimizing
the sum of the errors squared.
This technique is founded upon least squares linear algebra identities where the
coefficients, inputs, and outputs are all written using matrices. The most common
case is when we have equal numbers of equations and unknowns and we write our
matrix equations as

Φθ = y

where θ is the vector of desired coefficients, y is the vector of known outputs, and
Φ is the matrix of known input points. We are familiar with this solution, where we
premultiply both sides by the inverse of Φ to find the solution:

Φ^(-1) Φ θ = Φ^(-1) y

And finally, our solution for our unknowns:

θ = Φ^(-1) y

This method is straightforward since having equal numbers of equations and
unknowns leads to a square matrix which can be inverted. A simple example will
review this process.

EXAMPLE 11.5
Solve for the two unknown coefficients, a and b, given the two linear equations:
7 = 3a + 2b
2 = 6a - 2b
The left-side numbers are known outputs and the right-side numbers are known
inputs. In terms of a difference equation, the first equation would be represented as
c(k) = a c(k - 1) + b r(k - 1)
7 = a(3) + b(2)
Therefore, if we record the inputs to the system and the resulting outputs, we can
easily fit our data to different difference equations to find the best fit (postprocessing
the data). To solve for our unknown coefficients, we will write the equations in
matrix form as follows:

Φθ = y

[3   2] [a]   [7]
[6  -2] [b] = [2]

To solve:

Φ^(-1) Φ θ = Φ^(-1) y
θ = Φ^(-1) y

[a]   [3   2]^(-1) [7]
[b] = [6  -2]      [2]

The solution is
a = 1
b = 2
This provides the groundwork but is limited since we are fitting our model
coefficients based on only two input-output data points.
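Example 11.5 can be checked numerically (in NumPy rather than Matlab); with a square, nonsingular Φ the solution is exact:

```python
import numpy as np

# Example 11.5: equal numbers of equations and unknowns, so the
# coefficient matrix is square and the solve is exact.
Phi = np.array([[3.0, 2.0],
                [6.0, -2.0]])
y = np.array([7.0, 2.0])

# Solve Phi @ theta = y (preferred over forming Phi^-1 explicitly).
theta = np.linalg.solve(Phi, y)
print(theta)                      # -> [1. 2.]
```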
To generalize the procedure, we need to develop a method that allows us to
utilize many input-output data sets and minimizes the error to give us the best
possible model. Fortunately, there are many other applications desiring the same
solution, and linear algebra methods have been developed that are easily applied to
our problem. Many mathematical texts demonstrate that a matrix Φ containing our
data, although not square when there are more equations than
unknowns, will minimize the total sum of the squares of the error at each data
point between the observed known value and the calculated value if the new matrix
Φ^T Φ is nonsingular and the inverse exists. Taking the transpose and multiplying by
the original matrix results in a square matrix and then allows the inverse to be used
to solve for the solution to θ:

Φ^T Φ θ = Φ^T y
θ = (Φ^T Φ)^(-1) Φ^T y

where θ is the vector of desired coefficients, y is the vector of known outputs, and Φ
is the matrix of known input points. The solution takes advantage of the linear
algebra properties and can be used to solve many different problems where there
are more equations than unknowns. Another benefit is that Φ^T Φ, a matrix

transpose and multiplication, is simply a series of multiplications and additions and
results in the matrix to be inverted being of size equal to the number of coefficients.
Computationally, this allows the routine to be implemented recursively since
extremely large matrix inversions are not required.
Also of interest is the error that would result from implementing our model
using the same input-output data. It is straightforward to compute the errors as

y(k) = actual kth data value
y_est(k) = kth data value using model estimates
e(k) = y(k) - y_est(k), the error between the kth actual and predicted values

The least squares method seeks the solution where the sum of the squared errors is
minimized. The total summation of errors squared is defined as

Sum of Errors Squared = Σ (i=1 to N) e(i)^2

We now have the tools required to extend the least squares method to a general
batch of input-output data used to fit a particular data model.

EXAMPLE 11.6
Solve for the two unknown coefficients, a and b, using three equations.
Now we add an additional equation to the problem solved in Example
11.5. In this case we no longer get exact solutions, since the number of equations is
greater than the number of unknowns. This case is more typical of what we get in
system identification. The three equations are

3a + 2b = 7
6a - 2b = 2
9a + 4b = 18

Writing them in matrix form:

Φθ = y

[3   2]        [ 7]
[6  -2] [a]  = [ 2]
[9   4] [b]    [18]

Now our Φ matrix is no longer square and we cannot simply take the inverse. To
solve this system we must now use the equation defined above:

Φ^T Φ θ = Φ^T y

Substitute in our matrices:
[3   6  9] [3   2]       [3   6  9] [ 7]
[2  -2  4] [6  -2] [a] = [2  -2  4] [ 2]
           [9   4] [b]              [18]

[126  30] [a]   [195]
[ 30  24] [b] = [ 82]

This now resembles our initial case with equal numbers of equations and unknowns.
Notice that the number of unknowns determines the size of our inverse matrix, not
the number of data points that we record. Now the solution can be found where

θ = (Φ^T Φ)^(-1) Φ^T y

[a]   [126  30]^(-1) [195]
[b] = [ 30  24]      [ 82]

[a]   [1.0452]
[b] = [2.1102]
From the results in Example 11.5 we know that the first two equations resulted in
a = 1 and b = 2. Adding the third equation, which does not exactly agree with the first
two, slightly changes the value of our coefficients. As more equations are added, the
values continue to change as the least squares method seeks to minimize the
squared errors. However, now we have a procedure to fit many input-output data
pairs to the coefficients of our difference equations.
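Example 11.6 can be reproduced in NumPy; forming the normal equations by hand and calling np.linalg.lstsq give the same answer:

```python
import numpy as np

# Example 11.6: three equations, two unknowns.  The normal equations
# (Phi^T Phi) theta = Phi^T y give the least squares solution;
# np.linalg.lstsq does the same job more robustly.
Phi = np.array([[3.0, 2.0],
                [6.0, -2.0],
                [9.0, 4.0]])
y = np.array([7.0, 2.0, 18.0])

theta = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)       # normal equations
theta_lstsq = np.linalg.lstsq(Phi, y, rcond=None)[0]  # same answer

print(np.round(theta, 4))   # -> [1.0452 2.1102]
```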

EXAMPLE 11.7
Use the least squares method to find the coefficients representing the best second-order
polynomial curve fit. The data are given as

u - inputs    y - outputs

0             2.22
1             3.34
2             4.90
3             6.90
4             9.34
5             12.22
6             15.54

The general equation to be modeled is given by

y(k) = b0 + b1 u(k) + b2 u(k)^2 + ... + bn u(k)^n + e(k)

where y(k) is the actual kth data value, n is the order of fit selected, and bi is the ith
coefficient to be determined in the model. Note that the desired coefficients are linear
even though the u input values are nonlinear. Since we want to fit the data to a
second-order polynomial, our general model reduces to
y(k) = b0 + b1 u(k) + b2 u(k)^2

Therefore we have seven data pairs and three unknowns, b0, b1, and b2. Using the least
squares matrix method represented with matrices gives us

[1  0   0]        [ 2.22]
[1  1   1]        [ 3.34]
[1  2   4] [b0]   [ 4.90]
[1  3   9] [b1] = [ 6.90]
[1  4  16] [b2]   [ 9.34]
[1  5  25]        [12.22]
[1  6  36]        [15.54]

For this second-order fit, Φ(k) = [1  u(k)  u(k)^2], which involves only the known inputs
for this case. For each row, then, the entries are 1, u(k), and u(k)^2. Once we have Φ and y
we use the equation

θ = (Φ^T Φ)^(-1) Φ^T y

The solution θ gives us our coefficients b0, b1, and b2 as

    [2.22]
θ = [0.90]
    [0.22]

The final equation, expressed in more common notation, which best describes the
input-output behavior of our system is

y = 2.22 + 0.90x + 0.22x^2
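The same fit can be reproduced in NumPy using the table data; the rows of Φ are [1, u(k), u(k)^2]:

```python
import numpy as np

# Example 11.7: second-order polynomial fit via least squares.
u = np.arange(7.0)                                  # inputs 0..6
y = np.array([2.22, 3.34, 4.90, 6.90, 9.34, 12.22, 15.54])

# Build Phi with rows [1, u(k), u(k)^2] and solve the least squares
# problem; lstsq handles the normal equations internally.
Phi = np.column_stack((np.ones_like(u), u, u**2))
theta = np.linalg.lstsq(Phi, y, rcond=None)[0]      # [b0, b1, b2]
# theta comes out as approximately [2.22, 0.90, 0.22]
```

For this data set the fit is exact: every tabulated point lies on y = 2.22 + 0.90x + 0.22x^2, so the residual is essentially zero.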

Now we are ready to develop the system identification routines as commonly
implemented in adaptive controllers. As mentioned previously, most models are of
the difference equation format and can be developed from continuous system models
using either z transforms or numerical approximations. The basic first-order system
with constant numerator can be written as

C(z)/R(z) = b/(z - a)
c(k) = a c(k - 1) + b r(k - 1)

Or, rearranging,

c(k + 1) = a c(k) + b r(k)

Thus, in the kth row, Φ(k) = [c(k)  r(k)], which involves both known inputs and
outputs; such a model is called autoregressive. For the output vector, the kth element is c(k + 1).
This can be expressed in matrix form as
[c(1)      r(1)    ]         [c(2)]
[c(2)      r(2)    ] [a]     [c(3)]
[  ...      ...    ] [b]  =  [ ...]
[c(N - 1)  r(N - 1)]         [c(N)]
The same procedure can be followed for a second-order denominator, first-order
numerator difference equation given as

c(k + 2) = a1 c(k + 1) + a2 c(k) + b1 r(k + 1) + b2 r(k)

and

[c(2)      c(1)      r(2)      r(1)    ] [a1]     [c(3)]
[c(3)      c(2)      r(3)      r(2)    ] [a2]     [c(4)]
[  ...      ...       ...       ...    ] [b1]  =  [ ...]
[c(N - 1)  c(N - 2)  r(N - 1)  r(N - 2)] [b2]     [c(N)]
Although using least squares methods requires the first several outputs to be discarded
(in terms of output data), this seldom poses a problem due to the amount of
data collected using computers and data acquisition boards. Many programs, including
Matlab, contain the matrix operations required to solve for the coefficients. Also,
when we look at how the different columns containing the output data, c(k), are repeated
and only shifted by multiples of the sample time, we can save storage in how we
construct and store the matrices.

EXAMPLE 11.8
Using the input-output data given, determine the coefficients of the difference equation
derived from a discrete transfer function with a constant numerator and first-order
denominator. The recorded input and output data are

k    Input data, r(k)    Measured output data, c(k)

1 0 0
2 0.5 0
3 1 0.1967
4 1 0.5128
5 1 0.7045
6 0.4 0.8208
7 0.2 0.6552
8 0.1 0.4761
9 0 0.3281
10 0 0.1990
11 0 0.1207

The model to which the data will be fit is given as

C(z)/R(z) = b/(z - a)

The transfer function can be converted into an equivalent difference equation:

c(k) = a c(k - 1) + b r(k - 1)

Or, rearranging,

c(k + 1) = a c(k) + b r(k)

Finally, this can be expressed in matrix form as

[c(1)      r(1)    ]         [c(2)]
[c(2)      r(2)    ] [a]     [c(3)]
[  ...      ...    ] [b]  =  [ ...]
[c(N - 1)  r(N - 1)]         [c(N)]

Inserting the input and output values:

Φθ = y

[0       0  ]       [0     ]
[0       0.5]       [0.1967]
[0.1967  1  ]       [0.5128]
[0.5128  1  ]       [0.7045]
[0.7045  1  ] [a]   [0.8208]
[0.8208  0.4] [b] = [0.6552]
[0.6552  0.2]       [0.4761]
[0.4761  0.1]       [0.3281]
[0.3281  0  ]       [0.1990]
[0.1990  0  ]       [0.1207]
The solution is defined as

θ = (Φ^T Φ)^(-1) Φ^T y

The solution θ gives us our coefficients a and b as

θ = [0.6065]
    [0.3935]

Our difference equation that best minimizes the sum of the squared errors is

c(k + 1) = 0.6065 c(k) + 0.3935 r(k)

And our model is

C(z)/R(z) = 0.3935/(z - 0.6065)

Knowing that our sample time is T = 0.1 sec allows us to take the inverse z transform
into the s-domain and find the equivalent continuous system transfer function
as

C(s)/R(s) = 1/(0.2s + 1)

This model was in fact used to generate the example data and is returned, or verified,
by the least squares system identification routine. Also, in this example only a 2 x 2
matrix is inverted since we are only solving for two unknowns, even though we have
10 equations.
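The identification in Example 11.8 can be reproduced in NumPy by stacking rows [c(k), r(k)] against targets c(k+1):

```python
import numpy as np

# Example 11.8: fit c(k+1) = a*c(k) + b*r(k) to the recorded data.
r = np.array([0, 0.5, 1, 1, 1, 0.4, 0.2, 0.1, 0, 0, 0])
c = np.array([0, 0, 0.1967, 0.5128, 0.7045, 0.8208,
              0.6552, 0.4761, 0.3281, 0.1990, 0.1207])

Phi = np.column_stack((c[:-1], r[:-1]))   # 10 equations, 2 unknowns
y = c[1:]
a, b = np.linalg.lstsq(Phi, y, rcond=None)[0]
# a comes out near 0.6065 and b near 0.3935, matching the text
```

Only the 2 x 2 matrix Φ^T Φ is ever inverted (internally), regardless of how many data rows are stacked.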
To conclude this section we discuss one modification that allows us to weight
the input-output data to emphasize different portions of our data; the method is
appropriately called weighted least squares. The solution calls for us to define an
additional matrix W called the weighting matrix. W is a diagonal matrix whose terms, w(i),
on the diagonal are used to weight the data. With the weighting matrix incorporated,
the new solution becomes

Φ^T W Φ θ = Φ^T W y
θ = (Φ^T W Φ)^(-1) Φ^T W y

where θ is the vector of desired coefficients, y is the vector of known outputs, Φ is
the matrix of known input points, and W is the weighting matrix (diagonal matrix).
If we make W equal to the identity matrix, I, we are weighting all elements
equally and the equation reduces to the standard least squares solution developed
previously. One common implementation using the weighting matrix is to have every
diagonal element slightly greater than the last, w(1) < w(2) < ... < w(N). This has the effect
of weighting the solution in favor of later (more recent) data points and
deemphasizing the older data points. One common method for choosing the values
is to use the equation

w(i) = λ^(N - i)

This weights the more recent data points over the past ones and produces a filtering
effect operating on the square of the error that can reduce the effects of noise in our
input-output data.
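A short NumPy sketch of the weighted solution (the function name and λ value are illustrative):

```python
import numpy as np

# Weighted least squares: exponentially increasing weights
# w(i) = lam**(N - i) favor the most recent data (lam just under 1).
def weighted_ls(Phi, y, lam=0.98):
    N = len(y)
    W = np.diag(lam ** (N - np.arange(1, N + 1)))   # diagonal weighting matrix
    return np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y)

Phi = np.array([[3.0, 2.0], [6.0, -2.0], [9.0, 4.0]])
y = np.array([7.0, 2.0, 18.0])

# With lam = 1 (W = I) this reduces to ordinary least squares.
assert np.allclose(weighted_ls(Phi, y, lam=1.0),
                   np.linalg.lstsq(Phi, y, rcond=None)[0])

theta = weighted_ls(Phi, y)   # slightly favors the last (third) equation
```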
11.6.1.2 Recursive Algorithms Using Least Squares
While the least squares system identification routines from the previous section are
useful in and of themselves, they do require a matrix inversion and generally require
more processing time than is feasible in between samples. Thus, the
techniques described are batch processing techniques where we gather the data and,
after it is collected, proceed to process it. If we wish to perform system identification
on-line while the process is running, we can implement a version of the least squares
routine using recursive algorithms. This has the advantage of not requiring a matrix
inversion and only calculates the change in our system parameters as a result of the
last sample taken.
The same basic procedure is used when implementing least squares system
identification routines recursively. It becomes a little more difficult to program due
to the added choices that must be made. Upon system startup, the input and output

matrices must be built progressively as the system runs. Once a large
enough set of data is recorded, it is possible to insert the newest point while simultaneously
dropping the oldest point, so the overall size remains constant. The solution has been
developed using the matrix inversion lemma, which requires that only one scalar for
each parameter needs to be inverted each sample period. Called the recursive least
squares (RLS) algorithm, it only calculates the change to the estimated parameters
each loop and adds the change to the previous estimate. Computationally, it has
many advantages since a matrix inversion procedure is not required. It converts the
problem to a form where the inverse is simply the inverse of a single scalar value.
To develop the equations, let us first define our data vector, φ, and, as before,
our parameter vector, θ. These are both column vectors given as

φ^T(k) = [c(k - 1), c(k - 2), ..., c(k - na), r(k - 1), r(k - 2), ..., r(k - nb)]
θ^T = [a1, a2, ..., a_na, b1, b2, ..., b_nb]

where y(k) = φ^T(k) θ (for any sample time, k, knowing past values); na is the
number of past output values used in the difference equation, and nb is the number of
past input values used in the difference equation. Recall that our Φ matrix in the
preceding section contained the same data (formed from multiple input-output data
points) and can be formed from the φ vectors as

    [φ^T(1)]
Φ = [φ^T(2)]
    [  ... ]
    [φ^T(N)]

The number of columns in Φ is equal to the number of parameters, na + nb, and the
number of rows is equal to the number of data points used, N.
The goal in recursive least squares parameter identification is to calculate only
the change that occurs in each estimated parameter whenever another data sample is
received. First, let's examine the term Φ^T Φ and see how additional data affects it.
Define

P(k) = (Φ^T Φ)^(-1) = [ Σ (i=1 to k) φ(i) φ^T(i) ]^(-1)

Then

P(k) = [ Σ (i=1 to k-1) φ(i) φ^T(i) + φ(k) φ^T(k) ]^(-1)

Writing P as this summation now allows us to calculate the change in P each time a
new sample is recorded, since

P^(-1)(k) = P^(-1)(k - 1) + φ(k) φ^T(k)

Remember that the solution for our system parameters is

θ = (Φ^T Φ)^(-1) Φ^T y

This, in combination with our definition of P, gives us

θ(k) = [ Σ (i=1 to k) φ(i) φ^T(i) ]^(-1) [ Σ (i=1 to k) φ(i) y(i) ]

θ(k) = P(k) [ Σ (i=1 to k-1) φ(i) y(i) + φ(k) y(k) ]

θ(k - 1) = P(k - 1) [ Σ (i=1 to k-1) φ(i) y(i) ]

Now that we have current and previous values of our parameter vector, θ, we
can find the difference that occurs from each new sample, and using the matrix
inversion lemma to remove the necessity of performing the matrix inversion each
step allows us to develop the final formulation. Two steps are required: we first
calculate the new P matrix each step and then use it to find the new change in the
parameters. The equations below also include the weighting effects that allow us to
favor recent values over past values. The factor λ is sometimes termed the
forgetting factor since it has the effect of forgetting older values and favoring
the recent ones.

P(k) = (1/λ) [ P(k - 1) - P(k - 1) φ(k) φ^T(k) P(k - 1) / (λ + φ^T(k) P(k - 1) φ(k)) ]

θ(k) = θ(k - 1) + P(k) φ(k) [ y(k) - φ^T(k) θ(k - 1) ]
The general procedure to implement recursive least squares methods is to choose initial P
and θ, sample the input and output data, calculate the updated P, and finally apply
the correction to θ. For online system identification the process operates continually
while the system is running.
There are several guidelines applicable when implementing such solutions.
First, we must choose initial values for P and θ. If possible we can simply record
enough initial values, halt the process, batch process the data (as in the previous section),
and calculate P = (Φ^T Φ)^(-1) for our initial conditions. Finishing the process will
also result in initial parameter values contained in θ. If interrupting the process to
determine P and θ using this method is not feasible, then it is common to choose
P to be a diagonal matrix with large values for the diagonal terms. The parameter
vector θ can be initialized as all zeros, letting it converge to the proper values
once the process begins. Finally, λ is commonly chosen between 0.95 and 1 for initial
values. When λ = 1 we get the standard recursive least squares solution.
In practice, once a certain number of data points are being used, we commonly
begin to discard the oldest and add the newest value, keeping the length of all vectors
constant. This number is chosen such that the amount of data being used is enough
to ensure convergence to the correct parameter values.
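The two RLS update equations can be sketched in NumPy and run over the Example 11.8 data; with λ = 1 and a large initial P the estimates converge toward the batch answer:

```python
import numpy as np

# Recursive least squares sketch for c(k+1) = a*c(k) + b*r(k),
# using the Example 11.8 data.
r = np.array([0, 0.5, 1, 1, 1, 0.4, 0.2, 0.1, 0, 0, 0])
c = np.array([0, 0, 0.1967, 0.5128, 0.7045, 0.8208,
              0.6552, 0.4761, 0.3281, 0.1990, 0.1207])

lam = 1.0                      # forgetting factor; lam = 1 is standard RLS
P = 1e6 * np.eye(2)            # large initial P (little initial confidence)
theta = np.zeros(2)            # initial parameter estimates [a, b]

for k in range(len(c) - 1):
    phi = np.array([c[k], r[k]])               # data vector phi(k)
    denom = lam + phi @ P @ phi                # scalar to "invert"
    P = (P - np.outer(P @ phi, phi @ P) / denom) / lam
    theta = theta + P @ phi * (c[k + 1] - phi @ theta)

# theta converges toward a = 0.6065, b = 0.3935 as in Example 11.8
```

Note that only the scalar denom is ever divided by; no matrix inversion appears in the loop, which is the whole point of the recursive form.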
There are many alternative methods to the least squares approach that are not
mentioned here. The least squares approach is very common and fairly straightforward
to program, especially for batch processing. Different variants of least squares
routines use more robust matrix inversion algorithms like QR or LU decompositions.
Any numerical analysis textbook will describe and list these
routines. This is only an introductory discussion of the least squares methods, and
references are included for further studies. As the next section demonstrates, system
identification routines using recursive least squares (or other methods) add another class of
possibilities: adaptive controllers.

11.7 ADAPTIVE CONTROLLERS


Adaptive controllers encompass a wide range of techniques and methods that modify
one or more controller parameters in response to some measured input(s). The level
of complexity ranges from self-tuning and gain scheduling to more complex model
reference adaptive and feedforward systems. This section briefly describes some
common configurations in use. First, let us examine some basics regarding the design
of adaptive controllers. One important item to note is that proofs of stability when
using adaptive controllers are very difficult, since the control system becomes time
varying and usually nonlinear. It is often very difficult to predict all combinations of
parameters that the controller might adapt to during operation. This leads to a
variety of programming approaches ranging from intuitive to complex mathematical
models. In general, the better our models and the more they are used during the
design process, the better the resulting controllers, with more stability and lower
controller actuator demands.
Noise is another issue that tends to degrade the performance of our controllers.
If possible, it generally helps to add filters at the inputs of our A/D converters. To
evaluate the performance it is helpful to use programs like Matlab to allow multiple
simulations and design iterations in a short period of time. In addition to our normal
measures of dynamic and steady-state response characteristics, we must evaluate
the convergence rate of the adaptive portion. This also gives us an indication of the
system stability with the adaptive controller added.

11.7.1 Gain Scheduling


Gain scheduling is often the simplest method, since most of the work is done up front
when designing the controller. A typical system configuration is given in Figure 22.

Figure 22 Adaptive controller: gain scheduling configuration.

The general operation using gain scheduling is to change controller parameters
(usually gains, although configurations or operating modes may also be changed)
based on the inputs it receives. These inputs into the adaptive algorithm may be
command inputs, output variables from the process, or external measurements. For
example, in hydraulic controllers the system pressure acts in series as another
proportional gain in the forward loop. Thus if the controller is tuned at 1500 psi and
the operating pressure is changed to 3000 psi, it is likely that the controller will now
be more oscillatory or unstable. The gain scheduling controller would measure the
system pressure and, based on its value, determine the appropriate gain for the
system. The gain scheduling may or may not be composed of distinct regions. It may
follow a simple rule; for example, if the pressure doubles, the electronic gain is set to
1/2 of its initial value. From a practical standpoint, noise in the signals must be
filtered out or the gain will constantly jump around with the noise imposed on the
desired signal.
The general approach is to break the system operation into distinct regions and
implement different controllers and/or gains depending on the region of operation.
The regions may be functions of several variables, as listed above. The regions might
be determined by the nonlinearities of the model in such a way that each region is
approximately linear, allowing classic design techniques to be used within each
linearized operating range. The advantage of gain scheduling, the ability to pre-
program the algorithms, is also its weakness, in that it is only adaptive to prepro-
grammed events. Because of its simplicity, it does see much use in practice. Since the
changes are predetermined, it also allows us to verify stability, at least with respect
to changes in the controller. Changes in the system parameters may still cause the
system to become unstable.
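For the hydraulic pressure example, a gain schedule reduces to a lookup with interpolation between preprogrammed regions. The breakpoint pressures and gains below are purely illustrative:

```python
def scheduled_gain(pressure_psi):
    """Look up a proportional gain from the measured supply pressure.
    Breakpoints and gains are illustrative values only."""
    schedule = [(1000.0, 1.50), (1500.0, 1.00), (2250.0, 0.70), (3000.0, 0.50)]
    # clamp outside the scheduled range
    if pressure_psi <= schedule[0][0]:
        return schedule[0][1]
    if pressure_psi >= schedule[-1][0]:
        return schedule[-1][1]
    # linear interpolation between breakpoints
    for (p0, k0), (p1, k1) in zip(schedule, schedule[1:]):
        if p0 <= pressure_psi <= p1:
            return k0 + (k1 - k0) * (pressure_psi - p0) / (p1 - p0)

print(scheduled_gain(1500.0))   # 1.0: the tuned operating point
print(scheduled_gain(3000.0))   # 0.5: gain halved when pressure doubles
```

In practice the pressure signal would be filtered before this lookup, for the noise reasons noted above.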

11.7.2 Self-Tuning Controllers


Self-tuning, or autotuning, is used to replace the manual tuning procedures studied
thus far. It mimics what the control operator might do if they were physically
standing at the machine tuning the controller. Common parameters the controller
tunes to are overshoot and settling time. Many PLCs with PID controllers use auto-
tuning techniques. The autotuning algorithm is usually initiated by the operator, at
which point the controller injects a series of step inputs into the system and measures
the responses. The step inputs may occur while the controller is active if the inputs
are superimposed on the existing commands. Slightly better results are generally
possible if the controller is not online, since extraction of response data is then more
straightforward. Recent algorithms use a continuous recursive solution and con-
stantly tune the controller for optimal performance. A general self-tuning
controller configuration is shown in Figure 23.
The self-tuning configuration is based on collecting the appropriate input-
output data. Once the input-output response data are collected (usually several
input-output cycles), the algorithm proceeds to calculate the new gains based on
the current overshoot and settling time (or whatever parameters are chosen). It is a
good idea to then repeat the test and verify the new gains. The operator can often
choose the level of response required (i.e., fast, medium, slow) and the accepted
trade-offs that accompany each type. Some algorithms allow the test data to also
be saved to a file for additional processing by the engineer.
The algorithms for self-tuning controllers can be based on methods similar to
the pole placement and Ziegler-Nichols tuning methods presented earlier. If we know
the desired type of response for our application, the algorithm can be programmed
for that one specific type. Knowing how each gain affects the response (covered in
many earlier sections) is necessary when developing the autotuning algorithm.

Figure 23 Adaptive controller: self-tuning configuration.
A commercial self-tuning PID algorithm (Kraus and Myron, 1984) is presented
here to illustrate the process of implementation with digital controllers. The process
requires the system mass and initial gains as inputs and proceeds to determine the
closed loop step response. The peak times are used to determine the damped natural
frequency, fd, and the amplitude ratio of successive peaks to determine the decay
ratio, DR. The following two equations (derived by trial and error) then calculate the
equivalent ultimate gain and ultimate period (TU = 1/fU) as required for use by
Ziegler-Nichols methods:

    KU = Kinitial / sqrt(1 - 8·DR^2)    and    fU = fd / [1 - 8·DR^3.5]^3.51

Once KU and TU are found, the equations presented in Table 2 of Chapter 5 can be
used, depending on the controller type being implemented. The gain equations found
using the Ziegler-Nichols equations may also be modified further depending on the
desired response characteristics. This is just one example of an empirically based
solution to the autotuning controller.
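The measurement side of such an algorithm, extracting fd and DR from the recorded response peaks, can be sketched as follows. The response data here are synthetic, and the Ziegler-Nichols settings shown are the classic ultimate-sensitivity rules rather than any vendor's internal equations:

```python
import math

def peak_features(t, y, y_final):
    """Estimate the damped natural frequency and decay ratio from an
    oscillatory closed-loop step response (t, y settling toward y_final)."""
    peaks = [(t[i], y[i]) for i in range(1, len(y) - 1)
             if y[i] > y[i - 1] and y[i] > y[i + 1]]
    (t1, y1), (t2, y2) = peaks[0], peaks[1]
    fd = 1.0 / (t2 - t1)                    # damped natural frequency [Hz]
    dr = (y2 - y_final) / (y1 - y_final)    # decay ratio of successive peaks
    return fd, dr

def zn_pid(Ku, Tu):
    """Classic Ziegler-Nichols ultimate-sensitivity PID settings."""
    return 0.6 * Ku, Tu / 2.0, Tu / 8.0     # Kp, Ti, Td

# Synthetic underdamped response: y = 1 - exp(-a*t)*cos(2*pi*fd*t), built so
# successive peaks decay by a ratio of 0.8 per cycle.
fd_true, decay_per_cycle = 2.0, 0.8
a = -fd_true * math.log(decay_per_cycle)
t = [i * 0.001 for i in range(4000)]
y = [1.0 - math.exp(-a * ti) * math.cos(2 * math.pi * fd_true * ti) for ti in t]
fd, dr = peak_features(t, y, 1.0)
print(fd, dr)   # close to 2.0 Hz and 0.8
```

With KU and TU in hand, zn_pid supplies the gains exactly as the Ziegler-Nichols table would.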

11.7.3 Model Reference Adaptive Controllers


Model reference adaptive controllers, or MRAC, take on many forms, with the
general configuration shown in Figure 24. The goal is to design the controller to
make the errors between the reference model output and physical system output
equal to zero, thus forcing the system to have the response defined by the reference
model. For MRAC systems to work, the reference model and reference inputs must
be feasible for the physical system to achieve. Many algorithms have been proposed
to achieve this. In a sense the MRAC is a subset of a self-tuning controller. Instead
of being based on measured response characteristics, the desired response is derived
from the reference model. Any time there is an error between the reference model
output (desired output) and the actual system output, the controller is modified such
that the two responses become equal.

Figure 24 Adaptive controller: MRAC configuration.
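One classic adaptation law used in simple MRAC designs is the MIT rule, which adjusts a parameter along the negative gradient of the squared model error. A minimal sketch for a first-order plant with an unknown gain follows; all numerical values are illustrative:

```python
# Plant: dy/dt = -a*y + b*u with unknown gain b; reference model:
# dym/dt = -a*ym + bm*r. Control u = theta*r, and the MIT rule
# dtheta/dt = -gamma * e * ym (with e = y - ym) adapts theta until the
# plant output matches the model, i.e., theta -> bm/b.
a, b, bm = 1.0, 2.0, 1.0
gamma, dt = 0.5, 0.001
y = ym = theta = 0.0
r = 1.0                           # constant reference (enough for one parameter)
for _ in range(40000):            # simulate 40 s with Euler integration
    e = y - ym
    theta += dt * (-gamma * e * ym)
    y  += dt * (-a * y + b * theta * r)
    ym += dt * (-a * ym + bm * r)
print(round(theta, 3))            # converges toward bm/b = 0.5
```

The adaptation gain gamma trades convergence rate against oscillation, which is exactly the convergence-rate evaluation called for earlier in this section.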

11.7.4 Model Identification Adaptive Controllers


Model identification adaptive controllers (MIAC) implement the system identifica-
tion techniques from the previous section to improve and adapt the controller in real
time. It is different from MRAC in that the model parameters are continuously
estimated and used in the control system; MRAC may or may not do this.
Simpler MRAC systems are based only on the reference model and actual error.
The advantage of MIAC is that once we know the model (and track its changes) we
can implement effective feedforward and controller algorithms. Consider the block
diagram in Figure 25. As the system parameters change (i.e., different inertial loads
on a robot arm), the recursive system identification algorithm continually updates
the model parameters. The model tracks the changes to the system and allows more
accurate implementation of feedforward along with the adaptive controller routines
listed above.
Of the methods presented in this section, gain scheduling and self-tuning con-
trollers are already quite common in industrial applications. Although the potential
benefits are greater, the MRAC and MIAC methods require additional development
work and are more difficult to analyze in terms of stability.

Figure 25 Adaptive controller: MIAC configuration.



11.8 NONLINEAR SYSTEMS


All physical systems are inherently nonlinear. These nonlinearities, as we will see,
range from natural nonlinearities in the physical system to nonlinearities introduced
by the controller (adaptive controllers, bang-bang controllers, etc.). Some non-
linearities are continuous and can thus be approximated by linear functions around
some operating point. Discontinuous nonlinearities cannot be approximated by lin-
ear models and include items like hysteresis, backlash, and Coulomb friction. A
major problem that we have with nonlinear systems is determining stability. Once
the system is nonlinear, the principle of superposition is no longer valid and hence
the transfer function, root locus, and Bode plot techniques are also invalid. State
space equations in a general form are valid, but we are unable to take advantage of
the linear algebra techniques based on having linear system matrices. Nonlinear
systems have several other differences when compared to linear systems. There is
no longer a single equilibrium point, and different equilibrium points are possible
depending on the initial conditions. Thus we must view local and global stability as
separate issues. Nonlinear systems can exhibit limit cycles (closed curves on the
phase plane) and sustained repetitive oscillations. Bode plots become dependent on
input amplitude. Some nonlinear elements might produce two or more possible
outputs for the same input, leading to jump resonance.
This section outlines some of the common nonlinearities and possible solutions
when working with nonlinear systems. The adaptive methods from the previous
section are also commonly used in controlling nonlinear systems, since the design
goal is to control widely varying plants.

11.8.1 Common Nonlinearities


Nonlinearities are commonly classified according to several characteristics:
• Continuous or discontinuous
• Hard or soft
• Single valued or multiple valued
• Natural or artificial
Continuous nonlinearities have finite higher derivatives, as seen in nonlinear springs
and valve pressure-flow curves. Discontinuous nonlinearities, unfortunately, are
more common and include items like saturation (a very common one), hysteresis,
deadband, etc. The terms hard and soft are an alternative way of representing con-
tinuous and discontinuous nonlinearities (hard being discontinuous). Single valued
nonlinearities are such that a vertical line always intersects just one value, for exam-
ple, saturation. Multiple valued nonlinearities will have multiple values intersected
by the vertical line, for example, hysteresis. Finally, natural nonlinearities occur in
physical systems and their models, while artificial nonlinearities are added through
different controller algorithms, for example, adaptive control. The common non-
linearities are summarized in Table 1.
The nonlinearities above are found to some degree in almost every physical
system. Whether or not it is recommended to include them in the models can only be
determined by experience, intuition, and trial and error. Since they can have a
significant impact on system performance and stability, several methods are intro-
duced below to evaluate the necessity of including them.

Table 1 Common Nonlinearities in Physical Systems
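Several of the single valued nonlinearities summarized in Table 1 can be written directly as functions, which is typically how they enter a digital simulation. The definitions below follow the usual textbook shapes:

```python
def saturate(x, limit):
    """Saturation: linear until the output clips at +/- limit."""
    return max(-limit, min(limit, x))

def deadband(x, width):
    """Deadband (dead zone): no output until |x| exceeds the band half-width."""
    if x > width:
        return x - width
    if x < -width:
        return x + width
    return 0.0

def coulomb_friction(v, fc):
    """Coulomb friction: constant force opposing the direction of motion."""
    if v > 0:
        return -fc
    if v < 0:
        return fc
    return 0.0

print(saturate(12.0, 10.0), deadband(0.3, 0.5), coulomb_friction(-2.0, 1.5))
```

Multiple valued nonlinearities such as hysteresis require internal state (the previous output) and so are written as objects or stateful functions instead.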

11.8.2 Numerical Evaluation Techniques


Simulation is the most widely used and commonly available method for evaluating
performance and stability of nonlinear systems. Virtually all the blocks in Table 1
can be found in Matlab/Simulink, along with many additional nonlinear functions.
It is easy to build the system models, include the nonlinearities where desired, and
evaluate system performance. As the systems get more complex, it may be advanta-
geous to use state space equations and numerically integrate them instead of trying
to develop a block diagram including linear and nonlinear components. Examples of
using state space equations can be found in the bond graph simulations.
The major disadvantages of numerical simulations are time and global stability
issues. More often than not, as the system gets more complex, the time issue becomes
less of a disadvantage unless we are well versed in analytical techniques like
Lyapunov's methods, phase plane techniques, or Popov's criterion (among others).
The determination of global stability, the other difficulty, arises because a simulation
can only tell us about the response to one particular input sequence and set of
operating conditions. Global stability can never be completely proven using only
numerical simulations. However, for most designs, multiple simulations are easy to
run, and in general a fairly broad mapping can be accomplished. If the mapping of
operating conditions covers the ranges of operation seen in practice, the system
should remain stable. Thus, simulations are frequently used and are a fairly easy
extension of techniques learned earlier, especially when control system simulation
programs are used.
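As a minimal sketch of this workflow, the loop below numerically integrates a one-state plant under proportional control with an actuator saturation block included in the simulation. The plant, gain, and limit values are invented for illustration:

```python
def saturate(x, limit):
    return max(-limit, min(limit, x))

# Closed loop: integrator plant x_dot = u_sat, proportional control
# u = K*(r - x), with actuator saturation included in the simulation.
K, limit, dt, r = 5.0, 1.0, 0.001, 2.0
x = 0.0
t_rise = None
for k in range(5000):                     # 5 s of simulated time
    u = saturate(K * (r - x), limit)      # the nonlinearity in the loop
    x += dt * u                           # Euler integration of the plant
    if t_rise is None and x >= 0.9 * r:
        t_rise = k * dt
print(round(x, 3), t_rise)
```

The saturated response ramps at the actuator limit before the linear dynamics take over, the kind of behavior transfer function analysis alone would miss.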

11.8.3 Lyapunovs Methods


Since Lyapunov's methods represent a common analytical approach for
evaluating nonlinear systems, a brief overview is given here. There are three types
of stability we must be concerned with when dealing with nonlinear systems: global,
local, and Lyapunov stability, as shown in Figure 26. The stability regions are
further classified as being asymptotically stable and exponentially stable.
Asymptotically stable systems eventually go to the equilibrium point, but not
necessarily by the most direct path. That is, they always tend toward stability but at
different rates of decay. Exponentially stable systems decay exponentially to the
equilibrium point, providing a more desirable response.

Figure 26 Different stability regions for nonlinear systems.
Two methods of Lyapunov are commonly used, the direct and the indirect.
The indirect method involves finding the critical points of the system and solving
for the linearized system eigenvalues at each critical point. The critical points are
locations where all the derivatives are zero and thus constitute feasible equilibrium
points for the system. In the common pendulum example, we obviously have
two equilibrium points, one stable and one unstable. By linearizing the state equa-
tions about these two points and determining the eigenvalues, the local stability
around each critical point is found. A variety of numerical methods are used to find
the critical points. The indirect method of Lyapunov is more intuitive and bridges
the gap between our linear system tools and nonlinear stability analysis.
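For the pendulum example, the indirect method can be carried out numerically in a few lines; the damping value and g/l ratio are arbitrary illustration values:

```python
import numpy as np

# Pendulum: theta_ddot = -(g/l)*sin(theta) - c*theta_dot.
# States x1 = theta, x2 = theta_dot; Jacobian of the state equations:
#   A = [[0, 1], [-(g/l)*cos(theta_eq), -c]]
g_over_l, c = 9.81, 0.5

def jacobian(theta_eq):
    return np.array([[0.0, 1.0],
                     [-g_over_l * np.cos(theta_eq), -c]])

for theta_eq, name in [(0.0, "hanging"), (np.pi, "inverted")]:
    eig = np.linalg.eigvals(jacobian(theta_eq))
    stable = all(e.real < 0 for e in eig)
    print(name, eig, "locally stable" if stable else "unstable")
```

The hanging equilibrium gives eigenvalues with negative real parts (locally stable), while the inverted equilibrium gives one positive real eigenvalue (unstable), matching intuition.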
The second method, often called the direct method, is a rather complex topic
but does not require any approximations to be made during the stability analysis. It
can be used on systems of any order: linear or nonlinear, time varying or invariant,
multivariable, and even system models containing nonnumerical parameters. Since
it employs state space notation, it is limited to continuous nonlinearities (which
eliminates many common nonlinearities). The most difficult portion of the method
is generating a positive definite function containing the system variables. This
function is commonly called the V function, or Lyapunov function. The second
method is based on the energy method and can be summarized as follows: if the
total energy in the system is greater than zero (ET > 0) and the derivative of the
energy function is negative (dET/dt < 0), the net energy is always decreasing and
therefore the system is stable. There are mathematical proofs available for this
method (see references), but the general idea is somewhat intuitive. Being based on
the energy method, a good first attempt at finding a V function that is positive
definite and whose partial derivatives exist is to use the sum of the kinetic and
potential energies in the system. Several methods for finding Lyapunov functions
have been developed; Krasovskii's method, the variable gradient method, and
Zubov's construction method are examples.
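Continuing the pendulum example, the energy-based V function can be checked numerically along a simulated trajectory. This is an informal check of the dET/dt < 0 condition, not a proof:

```python
import math

# Candidate Lyapunov function for the damped pendulum
#   theta_ddot = -(g/l)*sin(theta) - c*theta_dot:
# V = 0.5*omega**2 + (g/l)*(1 - cos(theta))  (kinetic + potential energy).
# Analytically dV/dt = -c*omega**2 <= 0; here we verify the decrease numerically.
g_over_l, c, dt = 9.81, 0.5, 0.0005

def V(theta, omega):
    return 0.5 * omega**2 + g_over_l * (1.0 - math.cos(theta))

theta, omega = 2.0, 0.0
values = []
for _ in range(20000):                    # 10 s trajectory
    values.append(V(theta, omega))
    theta += dt * omega
    omega += dt * (-g_over_l * math.sin(theta) - c * omega)
print(values[0], values[-1])              # the energy decreases along the trajectory
```

A single decreasing trajectory only supports the candidate V function; the direct method's guarantee comes from showing dV/dt < 0 for all states, not from simulation.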

11.9 NONLINEAR CONTROLLER ALTERNATIVES


Many other methods besides adaptive controllers are commonly applied to nonlinear
systems: sliding mode control, feedback linearization, fuzzy logic, neural nets, and
genetic algorithms, to mention just a few. The basic ideas, strengths, and weaknesses
of each method are briefly presented here to encourage further research.

11.9.1 Sliding Mode Control


Sliding mode control is designed to represent nonlinear higher order systems as a
series of first-order nonlinear systems that are much easier to control. Good perfor-
mance is achieved even in the presence of model errors and disturbances, but at the
price of high controller activity. Sliding mode control has been used successfully in
robots, vehicle transmissions and engines, electric motors, and hydraulic valves.
Sliding mode control was developed in the 1960s in the former Soviet Union and
is based on the direct method of Lyapunov discussed above. As before, we need to
find a positive-definite energy function for our system. The controller is designed to
discontinuously vary the controller parameters to force the states to a predefined
switching surface. Once the state reaches this surface it slides along it, guaranteeing
stability and defined closed loop dynamics. The trick in getting the states to all be
attracted to the surface is defining the proper Lyapunov function such that the
control law always makes the energy function derivative negative (decreasing energy,
movement toward the equilibrium points).
The practical problem of sliding mode control is the implementation of a
discontinuous control switching law, which commonly introduces chatter into the
system once the sliding surface is reached. For systems where the chattering fre-
quency is much higher than the bandwidth of the system, it is not a large problem,
and direct implementation of sliding mode control provides very good results.
Otherwise, continuous approximations of the control law must be made, which
somewhat degrade the controller performance. If the time is taken to perform a
Lyapunov stability analysis, the extension to a sliding mode controller may be well
worth it to achieve excellent stability for the control system.
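A minimal sketch of these ideas for a double integrator plant follows, using the common boundary-layer (continuous) approximation to the switching law; the gains and surface slope are illustrative:

```python
# Double integrator x_ddot = u driven to the origin with sliding mode control.
# Sliding surface s = x_dot + lam*x; control u = -k*sat(s/phi), where the
# boundary layer sat() is the usual continuous approximation to the
# discontinuous sign() function, used to reduce chatter.
lam, k, phi, dt = 1.0, 4.0, 0.05, 0.001

def sat(z):
    return max(-1.0, min(1.0, z))

x, xd = 1.0, 0.0
for _ in range(15000):                 # 15 s of simulated time
    s = xd + lam * x
    u = -k * sat(s / phi)              # drive the state toward the surface s = 0
    xd += dt * u
    x += dt * xd
print(round(x, 3), round(xd, 3))       # state ends near the origin
```

Once on the surface, the closed loop behaves like the first-order system x_dot = -lam*x regardless of the plant details, which is the appeal of the method.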

11.9.2 Feedback Linearization


Feedback linearization is another technique applied to the control of nonlinear
systems. The idea is to use state transformations and feedback to algebraically
linearize the system while leaving the nonlinear system equations intact. These tech-
niques have been successfully used in high performance aircraft and robots. The
attractive characteristic of using feedback linearization to algebraically linearize
the system is that once this is accomplished, all our linear control design techniques
may be used. Since it only algebraically linearizes the system, it is subject to several
problems and limitations. Some nonlinear systems are impossible to linearize alge-
braically. Partial linearization can be used, but it includes no guarantees about
global stability. The same caution applies to fully feedback linearized systems, since
the algebraic model may contain errors and unmodeled dynamics. Finally, it requires
the measurement of all states to be effectively implemented (Slotine and Li, 1991).
An example where the nonlinear wind drag on an automobile cruise control system
is canceled out is given in Figure 27. Within the digital controller the nonlinear
effects of the wind are numerically canceled and the controller is designed as if it
were a linear system. This method obviously is very dependent on the quality of our
model but does provide advantages over simply linearizing and then designing
around a single operating point.

Figure 27 Automobile cruise control with feedback linearization of aero drag.

Gamble J, Vaughan N. Comparison of Sliding Mode Control with State Feedback and
PID Control Applied to a Proportional Solenoid Valve. Journal of Dynamic Systems,
Measurement, and Control, September 1996.
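The drag-cancellation idea of Figure 27 can be sketched as follows; the vehicle parameters and the simple quadratic drag model are assumptions for illustration only:

```python
# Cruise control plant: m*v_dot = u - c*v**2 (quadratic aerodynamic drag).
# Feedback linearization: choose u = c*v**2 + m*w so that v_dot = w exactly,
# then design w with ordinary linear techniques (here a proportional law).
m, c, dt = 1200.0, 0.4, 0.01       # illustrative vehicle parameters
Kp, v_ref = 1.0, 30.0              # linear outer loop: w = Kp*(v_ref - v)

v = 20.0
for _ in range(2000):              # 20 s of simulated time
    w = Kp * (v_ref - v)           # linear control law on the linearized system
    u = c * v**2 + m * w           # cancel the drag nonlinearity
    v += dt * (u - c * v**2) / m   # the true nonlinear plant
print(round(v, 2))                 # tracks 30 m/s with first-order dynamics
```

If the drag coefficient c used in the controller does not match the real vehicle, the cancellation is only partial, which is the model-quality dependence noted above.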

11.9.3 Fuzzy Control


Fuzzy logic controllers have become very common and are used in a large range of
applications. The concept originated in 1964 with Zadeh at the University of
California at Berkeley. It initially took a long time to generate support and only in
recent decades has become very popular. In 1987, Yasunobu and Miyamoto
described the widely known application of controlling Sendai's subway system.
Since then fuzzy logic has rapidly found its way into many products, some of which
are listed in Table 2.
There are several reasons why it possibly took so long for fuzzy logic to become
more widely used. First, the term itself is not particularly attractive in situations
where safety is of critical importance. We generally would not tell the passengers on
an airliner that the landing systems are controlled using fuzzy logic. Even engineers,
unless they understand the process, are not likely to endorse fuzzy designs. A second
reason is the lack of a well-defined mathematical model. It is impossible to analyti-
cally prove a system's stability apart from a mathematical model. Critics of this
position quickly point out that linear models are seldom valid throughout a system's
operating range and therefore also do not guarantee global stability.

Table 2 Common Applications of Fuzzy Logic

Automotive systems: fuel management, traction control, antilock brakes,
  automatic transmissions, emission controls, vehicle ride control
Washing machines, refrigerators, cameras, camcorders
Cranes, incineration plants, elevators, household appliances
Electronic systems, financial systems, economic systems
Social and biological systems

Yasunobu S, Miyamoto S. Automatic Train Operation System by Predictive Fuzzy
Control. Industrial Applications of Fuzzy Control, ed. M. Sugeno, North-Holland,
Amsterdam, 1985.
This section is only an introduction, designed to explain the basic theory
and implementation techniques of fuzzy logic. The easiest way to begin is to describe
fuzzy logic as a set of heuristics and rules about how to control the system. Heuristic
relates to learning by trying rather than by following a preprogrammed formula. In a
sense, it is the opposite of the word algorithm. It therefore is a human approach
to solving problems. We seldom say it is 96 degrees Fahrenheit and therefore it must
be hot; rather, we say it is hotter than normal. In a similar fashion fuzzy logic is based
on rules of thumb. Thus, instead of the input value being larger or smaller than
another, it may be rather close to or very far from the other number. This is done
through the use of membership functions that take different shapes.
Where fuzzy logic works well is with complex processes without a good math-
ematical model and with highly nonlinear systems. In general, conventional control-
lers are as good or better if the model is easily developed and fairly linear so that
common design techniques may be applied. The question then arises as to under
what circumstances fuzzy logic techniques are particularly attractive. Circumstances
in favor of using fuzzy logic include when a mathematical model is unavailable or so
complex that it cannot be evaluated in real time, when low precision microprocessors
or sensors are used, and when high noise levels exist. To implement a fuzzy logic
controller, we also have several conditions that must be met. First, there needs to
be an expert available to specify the rules describing the system behavior and, sec-
ond, a solution must be possible.
Although fuzzy logic is often described as a form of nonlinear PID control, this
limited understanding does not encompass the whole concept. The idea stems from
the many reports on using fuzzy logic with rules written in the same way that PID
algorithms operate. For example, with an SISO system a rule might read: if the error
is positive big and the error change is positive small, then the actuator output is
negative big. This simply results in a nonlinear PD controller. A better application
to illustrate the concept of fuzzy logic is the automatic transmission in vehicles.
Standard control algorithms must make set decisions based on measured inputs;
fuzzy algorithms are able to apply sets of rules to the inputs, infer what is desired,
and produce an output. Because of this inference, a fuzzy controller will respond
differently as different drivers operate the vehicle. For example, a fuzzy system is
able to make judgments about the operating environment based on the measured
inputs. This is where the expert enters the picture. The rules are written by experts
who realize that people prefer not to continually shift up and down on winding
roads but do need to quickly downshift on a level road when desiring to pass
another vehicle. Thus, we write the rules such that if the throttle is fluctuating by
large amounts, as if on a winding road, the transmission does not continually shift,
and yet if the throttle is relatively constant before undergoing a change, the trans-
mission shifts quickly. It is along these lines that expert knowledge is used to
describe typical driving behavior and infer what the transmission shift patterns
should be. The benefit of fuzzy logic is the ease with which such rules can be written
and implemented in a controller. Once written, it is also easier for other users to read
the rules, understand the concept, and make changes, instead of poring over many
details hidden in mathematical models.

The best way to demonstrate the concepts and terms of fuzzy logic controllers is
by working a simple example. The next section works through a common simple
example of a fuzzy logic controller: controlling the speed of a fan based on tempera-
ture and humidity inputs.

11.9.3.1 Fuzzy Logic Example: Fan Speed Controller

The goal of this example is to introduce the common terms and ideas associated with
fuzzy logic controllers within the framework of designing a fan speed controller.
There are two sensors for the system, temperature and humidity, and they are
used to determine the speed setting of the fan. The rules are written using everyday
language, in the same way that we would decide what the fan speed should be. Thus,
for this example, we get to be the expert.
First let us explain some definitions used with fuzzy logic. Whereas classical
theory distinguishes categories using crisp sets, with fuzzy logic we define fuzzy sets,
as shown in Figure 28. Using the temperature analogy, with a crisp set we might say
that any temperature less than 40°F is cold, a temperature between 40°F and 75°F is
warm, and above 75°F is hot. Clearly with the crisp set we would have people who
still think that 41°F is cold even though it is classified as warm. Similarly, 39.9°F is
classified as cold even though 40.1°F would be classified as warm. A more natural
representation is found with the fuzzy set, where everyone might agree that below a
certain temperature (40°F) it is cold and that above a certain temperature (65°F) it is
no longer cold. Between those two temperatures fall people who each think differ-
ently about what should be called cold or warm.
The sets of data are called membership functions, and although straight line
segments are used in Figure 28, this is not required, and different shapes will be
shown later in this example. The expert who has knowledge of the system determines
the appropriate membership function. The level to which someone belongs is called
the degree of membership, μ(x). With the crisp set either you belong or you do not.
This is like buying a membership at a health club. We cannot say, please give me
30% membership for this month; we either belong or we do not.

Figure 28 Crisp and fuzzy data sets.

With the fuzzy set it is possible to have full membership, no membership, or some
intermediate value. Now we can belong partially to one set and, as Figure 29 shows,
at the same time also belong partially to another set. In the fuzzy set we see that
between 40°F and 65°F we belong to both the cold and the warm set at the same
time. This is where the term fuzzy is appropriate, since we are both cold and warm
at the same time.
The scope is the range where a membership function is greater than zero, and
the height is the value of the largest degree of membership contained in the set. For
the fuzzy warm set the scope is 40°F to 85°F and the height is 1. The height of a
membership function is commonly set to one, although it is not required to be.
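A sketch of the cold and warm membership functions discussed above follows; the exact breakpoints are our reading of the example, not values taken from Figure 28:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: ramps up from a to b, flat to c, down to d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def z_shape(x, a, b):
    """Open-left ("z") membership: full membership below a, ramp to zero at b."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

# Temperature sets from the discussion: cold is certain below 40 F and fades
# out by 65 F; warm spans 40-85 F (full membership assumed from 65 to 75 F).
def mu_cold(t):
    return z_shape(t, 40.0, 65.0)

def mu_warm(t):
    return trapezoid(t, 40.0, 65.0, 75.0, 85.0)

print(mu_cold(52.5), mu_warm(52.5))   # 0.5 0.5: partly cold AND partly warm
```

At 52.5°F the temperature belongs to both sets with degree 0.5, which is exactly the overlap behavior shown in Figure 29.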
There are many possibilities for membership function shapes, as shown in
Figure 30. Ideally we know enough about our system that we initially choose the
membership function that best describes its characteristics. It is not required that we
choose the exact one or even that it exists, and a primary method of tuning fuzzy
logic controllers is changing the shape of the membership functions. It may help if
a shape can be described with a mathematical function, although simple lookup
tables are commonly used when implemented in microprocessors.
In addition to triangles, trapezoids, s, z, normal, Gaussian, and bell curves,
and singletons, other shapes can be used. Design tools like Matlab's Fuzzy Logic
Toolbox contain a variety of membership functions, as shown in Figure 31.
If we wish to modify shapes, we use what is called hedging. Recall that our
degree of membership is represented by μ(x), which at this point we will assume is
between zero and one. If we raise μ to different powers, we change the shape of the
original membership function described by μ(x). The use of hedging in this way is
shown in Figure 32. Since μ(x) is less than 1, raising it to a power greater than 1
makes the function more constricted, and a power N less than 1 makes it more
diffused. We may use words like very, less, extremely, and slightly when we write the
rules for our system, and we can implement them using hedges. For example,
with our fan speed controller, we may wish to know if it is hot, or very hot.
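Hedging is then a one-line operation on a degree of membership:

```python
def hedge(mu, n):
    """Raise a degree of membership to the power n: n > 1 concentrates the
    set (e.g., "very"), n < 1 dilates it (e.g., "slightly")."""
    return mu ** n

mu_hot = 0.7
print(hedge(mu_hot, 2.0))   # "very hot": a stricter, smaller membership
print(hedge(mu_hot, 0.5))   # "slightly hot": a looser, larger membership
```

With μ(hot) = 0.7, the hedged "very hot" membership drops to about 0.49 while "slightly hot" rises to about 0.84, reshaping the set exactly as Figure 32 illustrates.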
Finally, let us look at one more definition before we move more fully into fuzzy
logic design. Now that we can define membership functions using a variety of
shapes, we need to learn how to combine them, since it is possible they may overlap,
as when we were simultaneously cold and warm. We combine the membership
functions using logical operators: And (minimum), Or (maximum), Not,
Normalization, or Alpha-Cuts. There are others, but the concepts can be explained
using those listed. As Figure 33 shows, when membership functions overlap, the
different logical operators result in different overall membership shapes.

Figure 29 Combined membership in crisp and fuzzy data sets.



Figure 30 Example membership functions.

The logical operators AND, OR, and NOT each result in a different combina-
tion of the fuzzy sets. The norm operator takes the mean of the membership func-
tions, and the α-cut operator places a threshold line between 0 and 1; any portions of
membership functions above α are included in the combination. As will become
clearer as we progress through this example, using these operators allows us to
remove (or decide about) some of the ambiguity of being both warm and cold at the
same time.
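These operators act pointwise on degrees of membership and are trivial to implement (the example membership values are arbitrary):

```python
def f_and(mu_a, mu_b):
    """Fuzzy AND: take the minimum degree of membership."""
    return min(mu_a, mu_b)

def f_or(mu_a, mu_b):
    """Fuzzy OR: take the maximum degree of membership."""
    return max(mu_a, mu_b)

def f_not(mu_a):
    """Fuzzy NOT: complement of the membership."""
    return 1.0 - mu_a

def alpha_cut(mu_a, alpha):
    """Alpha-cut: keep only membership at or above the threshold alpha."""
    return mu_a if mu_a >= alpha else 0.0

mu_cold, mu_warm = 0.3, 0.6          # a temperature in the overlap region
print(f_and(mu_cold, mu_warm))       # 0.3
print(f_or(mu_cold, mu_warm))        # 0.6
print(f_not(mu_cold))                # 0.7
print(alpha_cut(mu_cold, 0.5))       # 0.0: cut away as below the threshold
```

Applied across a whole temperature axis, these pointwise operations produce the combined membership shapes shown in Figure 33.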
To begin the process of putting the definitions and concepts together, let us
examine the overall picture of how the definitions above fit into fuzzy logic control
system design. The basic functional diagram of a fuzzy logic controller is given in
Figure 34. The middle block containing our rules is inference based and comes from
our knowledge of how our system should perform. At first glance it seems that, since
we start and end with crisp data, the fuzzy logic controller is only extra work on our
part. We certainly are constrained to start and end with crisp data, since sensors and
actuators do not effectively transmit or receive commands like warm or cold. Our
controller still must receive and send signals such as voltages or currents. However,
what the fuzzification and defuzzification allow us to do is describe and modify our
system using rules that we all can understand. As opposed to developing detailed
mathematical formulas describing the rules of our system, we simply graphically
represent our membership functions and define our rules to design our controller.
As mentioned earlier, since it is assumed that a mathematical model does not exist,
we need to have some knowledge of and experience with the actual system to write
meaningful rules. Even in the case of two inputs and one output, as in this example,
where we can ultimately describe the fuzzy logic controller as an operating surface,
the method used to develop the surface is more intuitive and easier to modify than
obtaining the same results through mathematical models, trial and error, or exten-
sive laboratory testing. As Figure 34 illustrates, we still have a unique (crisp) output
for any given combination of inputs, and fuzzy logic techniques provide the tools to
develop the nonlinear mapping between the two crisp sets of data.

Figure 31 Membership functions in Matlab (Fuzzy Logic Toolbox).

Figure 32 Changing the shape of membership functions using hedging.
Referencing Figure 34, we can define the following terms:
• Fuzzification: The process of mapping crisp input values to associated fuzzy
input values using degrees of membership.

Figure 33 Operations on fuzzy membership sets.



Figure 34 Functional diagram of fuzzy logic controller.

• Defuzzification: The process of mapping fuzzy output values to crisp output
values using aggregation.
• Aggregation: Methods used to combine fuzzy sets into a single set with the
goal of obtaining a crisp output value.
• Rule-based inference: The process of mapping fuzzy input values to fuzzy
output values. Rules are used to represent the behavior of the system.

Rules are usually implemented using IF (antecedent) THEN (consequent) state-
ments. For simple systems we can represent the rules in tabular form using a
fuzzy association matrix (FAM). Extending the functional diagram in Figure 34 to
our particular example of fan speed control results in Figure 35.
To design our controller, we now need to perform the fuzzification, define the
rules, and perform the defuzzification for our fan speed control. Since we have two
inputs, we will need to map the crisp data into two sets of membership functions and
combine them using the rules. To begin, we will assign linguistic names to describe
each variable. The temperature, as done already, is described as cold, warm, or hot.
This means that we will need three membership functions to perform the fuzzifica-
tion of the crisp temperature input. In addition, our rules can be written using cold,
warm, and hot in the decision-making process (as in how we describe our surround-
ings to one another).
The membership function for each linguistic variable is given in Figure 36.
For the humidity input we will use the linguistic variables low, average, and
high, again using three categories. Using the same shapes as for the temperature, we
can develop the membership functions for humidity as shown in Figure 37. Finally,
we will need to perform the defuzzification to obtain the crisp output determining the
fan speed. Our linguistic variables for this output will be slow, medium, and fast.
Once again using the same shapes and number of membership functions, we get the
result in Figure 38.

Figure 35 Fuzzy logic controller diagram for control of fan speed.



Figure 36 Membership functions for temperature input.

Now that all our linguistic variables are defined, we can write the rules. The
rules are simply based on our knowledge of the system, which in this example we all
have to some degree. We will use nine rules, shown in Table 3, to map our fuzzy
inputs (temperature and humidity) into a fuzzy output (fan speed). As the rules
are currently stated, using AND, each provides a minimum, since both inputs must be
true. If we changed to using OR we would get the maximum number of active rules,
since either condition could be true to produce a non-zero rule output. For larger
systems the logical operators may be combined within each rule. For two inputs
and one output it is easy to develop an FAM, shown in Figure 39.
To finish this example and see how the actual procedure works, let us choose
an input temperature of 80°F and a humidity of 45%. Figure 40 shows our
memberships in cold, warm, and hot for the temperature input of 80°F. We
have no membership in cold, 0.25 in warm, and 0.75 in hot. Performing the same
process with our humidity of 45% leads to Figure 41, with a degree of membership
equal to 0.2 in low, 0.8 in average, and 0.0 in high.
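The fuzzification step can be sketched in code. The triangular and ramp breakpoints below are invented so that the quoted degrees of membership fall out; the book's actual shapes are defined graphically in Figures 36 and 37:

```python
# Triangular/ramp membership evaluation for the fan-speed example.
# The breakpoints are hypothetical choices that reproduce the quoted degrees
# (80 deg F -> warm 0.25, hot 0.75; 45% humidity -> low 0.2, average 0.8).

def tri(x, a, b, c):
    """Triangular membership rising a->b and falling b->c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def ramp_up(x, a, b):
    """Membership 0 below a, 1 above b, linear in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def ramp_down(x, a, b):
    return 1.0 - ramp_up(x, a, b)

temp = 80.0       # deg F
humidity = 45.0   # percent

mu_temp = {
    "cold": ramp_down(temp, 40.0, 60.0),
    "warm": tri(temp, 45.0, 65.0, 85.0),
    "hot":  ramp_up(temp, 65.0, 85.0),
}
mu_hum = {
    "low":     ramp_down(humidity, 25.0, 50.0),
    "average": tri(humidity, 25.0, 50.0, 75.0),
    "high":    ramp_up(humidity, 50.0, 75.0),
}
# mu_temp: cold 0.0, warm 0.25, hot 0.75
# mu_hum:  low 0.2, average 0.8, high 0.0
```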
With the two inputs defined and the membership functions calculated, we are
ready to fire the rules, or perform the implication step. Using the AND operator
provides the minimum outputs for this example, as shown in Table 4. After firing each
rule we see that only four rules are active (4, 5, 7, and 8) and that rules 5 and 7 map
to the same output. If we combine rules 5 and 7 using OR (maximum), we end with a
degree of membership for medium equal to 0.25. The results from Table 4 can also be
graphically illustrated using the FAM, shown in Figure 42.
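The implication and combination steps can be sketched as follows, using the membership values quoted in the text for 80°F and 45% humidity; the dictionary layout is illustrative, not from the text:

```python
# Firing the nine AND rules with the minimum operator and collecting the
# result per output label with the maximum, as in Table 4 and Figure 42.
# Membership values are those quoted in the text for 80 deg F and 45% humidity.

mu_temp = {"cold": 0.0, "warm": 0.25, "hot": 0.75}
mu_hum = {"low": 0.2, "average": 0.8, "high": 0.0}

# (temperature label, humidity label) -> fan-speed label, from Table 3.
rules = {
    ("cold", "low"): "slow",       ("cold", "average"): "slow",
    ("cold", "high"): "medium",    ("warm", "low"): "slow",
    ("warm", "average"): "medium", ("warm", "high"): "fast",
    ("hot", "low"): "medium",      ("hot", "average"): "fast",
    ("hot", "high"): "fast",
}

fired = {"slow": 0.0, "medium": 0.0, "fast": 0.0}
for (t, h), speed in rules.items():
    strength = min(mu_temp[t], mu_hum[h])       # AND = minimum
    fired[speed] = max(fired[speed], strength)  # duplicates combined with OR = maximum

# fired -> slow 0.2 (rule 4), medium 0.25 (rules 5 and 7), fast 0.75 (rule 8)
```

Note how rules 5 and 7 both map to medium (0.25 and 0.2), and taking the maximum leaves 0.25, matching the text.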

Figure 37 Membership functions for humidity input.



Figure 38 Membership functions for fan speed output.

At this point the only item left is defuzzification of the fuzzy output to produce
a crisp output value for the fan speed. As with the inputs, we have many options in
how we choose to combine the different membership functions. First, we can take the
values for the outputs of each membership function after firing the rules (Table 4)
and overlay them with our output membership functions from Figure 38. When each
membership function is clipped, the combined function becomes as shown in Figure
43. During implication, each degree of membership is used to clip the corresponding
output variable: slow, medium, or fast. For aggregation we again have many options
(Figure 33) for combining the three membership functions. Using the maximum
(OR) for each function produces the final function given in Figure 44.
To determine the final crisp output value, the goal of this entire process, we
apply a defuzzification method, some of which are listed here:

• Bisector
• Centroid: often referred to as center of gravity (COG) or center of area
(COA)
• Middle of maximum (MOM)
• Largest of maximum (LOM)
• Smallest of maximum (SOM)
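As a rough sketch of how these methods behave, the following code applies four of them to an aggregate membership function sampled on a speed grid. The grid and membership values are invented; the book's numerical results depend on the exact shapes in Figure 38:

```python
# Sketch of four defuzzification methods applied to an aggregate membership
# function sampled at discrete output values (fan speeds, rpm).

def defuzzify(x, mu):
    peak = max(mu)
    maxima = [xi for xi, m in zip(x, mu) if m == peak]
    total = sum(mu)
    return {
        "centroid": sum(xi * m for xi, m in zip(x, mu)) / total,  # COG/COA
        "SOM": min(maxima),                        # smallest of maximum
        "LOM": max(maxima),                        # largest of maximum
        "MOM": 0.5 * (min(maxima) + max(maxima)),  # middle of maximum
    }

speeds = [0, 250, 500, 750, 1000]       # rpm grid (hypothetical)
mu = [0.0, 0.2, 0.25, 0.75, 0.75]       # aggregate membership (hypothetical)
result = defuzzify(speeds, mu)
```

With a flat-topped aggregate function like this one, SOM, MOM, and LOM pick different points along the plateau, while the centroid weights the entire shape, which is why the four methods give different crisp speeds.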

As with the different membership functions, Matlab's Fuzzy Logic Toolbox contains
a variety of defuzzification methods, as shown in Figure 45.

Table 3 Rules for Controlling the Fan Speed

Rule no. Descriptions

1 IF temp is cold AND humidity is low, THEN speed is slow.


2 IF temp is cold AND humidity is avg, THEN speed is slow.
3 IF temp is cold AND humidity is high, THEN speed is medium.
4 IF temp is warm AND humidity is low, THEN speed is slow.
5 IF temp is warm AND humidity is avg, THEN speed is medium.
6 IF temp is warm AND humidity is high, THEN speed is fast.
7 IF temp is hot AND humidity is low, THEN speed is medium.
8 IF temp is hot AND humidity is avg, THEN speed is fast.
9 IF temp is hot AND humidity is high, THEN speed is fast.

Figure 39 Fuzzy association matrix (FAM) for fan speed output.

The actual output speeds that result from applying the Centroid, LOM, MOM,
and SOM methods are shown in Figure 46. The different output speeds, depending
on the method used, are

650 rpm → smallest of maximum (SOM)
667 rpm → centroid of area (COA)
825 rpm → middle of maximum (MOM)
1000 rpm → largest of maximum (LOM)

Fortunately, the process of fuzzification, generating rules, and defuzzification can be
done with the computer. Matlab includes a Fuzzy Logic Toolbox with many built-in
shapes and methods that allow us to quickly check the effects of different combina-
tions. In addition to design software packages, there are many microprocessors now
developed and optimized for fuzzy logic control systems. The instruction sets of these
chips contain many of these functions.
To conclude, let us briefly summarize the process of designing a fuzzy logic
controller. First, we assume that experts are available to describe the system beha-
vior when developing the rules and that a good mathematical model either does not
exist or is too complex to implement. Next, we need to define all the input and
output variables. For each input or output we need to define the quantity, shape,
and overlapping areas of the respective membership functions. The quantity, shape,

Figure 40 Degrees of membership for temperature input.



Figure 41 Degrees of membership for humidity input.

and amount of overlap of the membership functions have a significant impact on the
behavior of the system. Once these decisions are made, linguistic labels are defined.
These should describe the ranges of the variables (i.e., cold, warm, and hot for the
temperature input) such that the rules are written using language that is natural to
how we describe the problem.
When writing the rules for our system, we must choose the implication and
composition methods using our logical operators. Most rules are written using IF-
THEN statements. Since multiple rules will likely be active, we also need to define
the aggregation method. For example, which medium fan speed degree of membership
do we use when two are non-zero (0.2 and 0.25 in the example)? We could use the
minimum, maximum, average, etc. Finally, we need to select the defuzzification
method (SOM, LOM, etc.) to convert our aggregate outputs into crisp data sets.
When these steps are completed, programs such as Matlab allow us to slide the
inputs through several values and watch what the output becomes. For the case in
this example we can also develop a surface plot where x and y are the inputs,
temperature and humidity, and the z axis (height) is the output of our fuzzy logic
controller. If possible, we should simulate the system with expected input data and
perform initial tuning. To implement the controller, we take our final design and
compile it into machine code for operation on a microprocessor.

Table 4 Implications: Firing the Rules for Controlling the Fan Speed (Temperature =
80°F, Humidity = 45%)

Rule no. Descriptions

1 IF temp is cold (0.0) AND humidity is low (0.2), THEN speed is slow (0.0).
2 IF temp is cold (0.0) AND humidity is avg (0.8), THEN speed is slow (0.0).
3 IF temp is cold (0.0) AND humidity is high (0.0), THEN speed is medium (0.0).
4 IF temp is warm (0.25) AND humidity is low (0.2), THEN speed is slow (0.2).
5 IF temp is warm (0.25) AND humidity is avg (0.8), THEN speed is medium
(0.25).
6 IF temp is warm (0.25) AND humidity is high (0.0), THEN speed is fast (0.0).
7 IF temp is hot (0.75) AND humidity is low (0.2), THEN speed is medium (0.2).
8 IF temp is hot (0.75) AND humidity is avg (0.8), THEN speed is fast (0.75).
9 IF temp is hot (0.75) AND humidity is high (0.0), THEN speed is fast (0.0).

Figure 42 Fuzzy association matrix (FAM) for fan speed output after firing the rules and
taking the minimums (AND operator).

11.9.4 Neural Nets


Neural nets are almost always applied to nonlinear models of input-output relation-
ships. The basic analogy they are modeled after is the way the human brain operates.
Our brain (in a very simplified sense) uses interconnections between neurons, and as
we learn, the weighted gain of each connection varies. Each interconnection begins
with essentially zero weight. Thus, neural nets begin with each neuron connected to
the inputs by interconnections, and as the net learns, the weight on each intercon-
nection between each input, neuron, and output is varied.
Simple single-layer neural networks contain only input and output neurons and
determine the weighted gains between them. They are fairly limited, so it is
common to add one or more hidden layers, as shown in Figure 47. As the number
of hidden layers is increased, the number of possible connections multiplies.
The sigmoid functions inside the hidden and output layer neurons are called
activation functions and determine how the next level is activated. Other
functions, such as steps and ramps, are also used and are numerically simpler. There
are many methods used to determine how the neural net learns. In all cases it is
beneficial to begin with good estimates for a faster learning time. Gradient methods,
the perceptron

Figure 43 Combined membership functions for fan speed output after ring the rules
(temperature 80 F, humidity 45%).

Figure 44 Aggregate membership function using maximums (temperature = 80°F,
humidity = 45%).

Figure 45 Defuzzification methods in Matlab (Fuzzy Logic Toolbox).

Figure 46 Results of defuzzification using various methods.



Figure 47 General configuration of neural net with hidden layer.

learning rule (change in weighting function proportional to the error between the
input and output), and least squares have all been used to structure the learning
process. Where neural nets have been successful is in applications requiring a process
that can learn: highly nonlinear system control, pattern recognition, estimation,
marketing analyses, and handwritten signature comparisons.
As technology develops, the number of neural net applications increases, and
many noncontroller applications, such as modeling complex business or societal
phenomena, are now being addressed with the concepts of neural nets.
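A minimal numerical sketch of these ideas, with all weights and data invented for illustration, is a one-hidden-layer forward pass with sigmoid activations followed by a perceptron-style weight update proportional to the output error:

```python
import math

# Forward pass of a tiny one-hidden-layer net with sigmoid activations,
# followed by a perceptron-style update to the output weights.
# All weights and training data here are invented for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    # Hidden layer: weighted sums of the inputs pushed through the activation.
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    # Output neuron: weighted sum of hidden activations, again through sigmoid.
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)))

x = [1.0, 0.5]                        # two inputs (e.g., temperature, humidity)
w_hidden = [[0.1, -0.2], [0.4, 0.3]]  # 2 inputs -> 2 hidden neurons
w_out = [0.5, -0.1]                   # 2 hidden neurons -> 1 output

y = forward(x, w_hidden, w_out)
target, rate = 1.0, 0.5
error = target - y

# Perceptron-style rule: change each output weight in proportion to the error
# and the activation feeding it (gradient methods refine this idea).
hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
w_out = [wo + rate * error * h for wo, h in zip(w_out, hidden)]
```

One pass of the update moves the output toward the target; repeating it over a training set is what "learning the interconnection weights" amounts to.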

11.9.5 Genetic Algorithms, Expert Systems, and Intelligent Control


Genetic algorithms, expert systems, and intelligent controllers are additional
advanced controllers being studied and applied to a variety of systems and control
processes. Specialty suites in Matlab (toolboxes) have already been developed for
many of them. Genetic algorithms are well suited for situations where little or no
knowledge about the process is available because they are designed to search the
solution space using stochastic optimization methods. If the search stalls (i.e., in a
local minimum), it can jump (mutate) to a new location and begin again. Thus, genetic
algorithms are capable of searching the entire solution space with a high likelihood
of finding the global optimum. They are modeled after the natural selection process,
and the algorithm rewards those relationships that are healthy (evaluated using
fitness functions). More recently, work has demonstrated that genetic algorithms
can determine an optimum solution while requiring much less computational time
than traditional optimization routines.
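A toy sketch of the selection-and-mutation loop described above (the fitness function, population, and tuning values are invented for illustration):

```python
import random

# Minimal genetic-algorithm sketch: candidate solutions are scored by a
# fitness function, the fitter half survives each generation, and mutated
# copies of the survivors let the search jump out of local minima.
# The fitness function (maximize -(x - 3)^2, optimum at x = 3) is invented.

def fitness(x):
    return -(x - 3.0) ** 2

def evolve(pop, generations=100, mutation=0.5, seed=1):
    rng = random.Random(seed)
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
        # Reproduction with mutation: perturbed copies of the survivors.
        children = [p + rng.uniform(-mutation, mutation) for p in pop]
        pop = pop + children
    return max(pop, key=fitness)

best = evolve([-10.0, -5.0, 0.0, 10.0])  # should land near the optimum x = 3
```

Real implementations add crossover between parents and adaptive mutation rates, but the survive-and-mutate loop above is the core of the method.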
Expert systems are related to fuzzy logic systems but might include more than
just rules. They have the ability to determine which rules actually fire and send a
signal based on elaborate inference strategies. Intelligent controllers encompass a
large branch of controllers designed to automate large processes; a recent example is
the discussion of intelligent vehicle and highway systems. The knowledge base comes
from existing human experts, solutions, and artificial intelligence. As with fuzzy logic


Senecal P, Reitz R. Simultaneous Reduction of Engine Emissions and Fuel Consumption Using Genetic
Algorithms and Multi-Dimensional Spray and Combustion Modeling. CEC/SAE Spring Fuels &
Lubricants Meeting and Exposition, SAE 2000-01-1890, 2000.

and other intelligent systems, the initial strategies come from experts in the respective
fields.
Hopefully this chapter has stimulated further study of these (and other)
advanced controllers. Remarkable advancements are made almost every day,
and exciting new applications are always being developed. Many of the concepts in
this chapter are founded upon the material in the previous chapters.

11.10 PROBLEMS
11.1 Briefly describe the goal of parameter sensitivity.
11.2 Feedforward controllers are reactive. (T or F)
11.3 Feedforward controllers can be used to enhance what two areas of controller
performance?
11.4 Feedforward controllers change the stability characteristics of our system.
(T or F)
11.5 To implement disturbance input decoupling, we must be able to ___________
the disturbance.
11.6 Describe the role of our system model when used to implement command
feedforward algorithms.
11.7 Describe two possible disadvantages of using command feedforward.
11.8 When are observers required for state space multivariable control systems?
11.9 List two possible advantages of using observers.
11.10 In general, least squares system identification routines solve for the para-
meters of ______________ equations.
11.11 What are the primary differences between batch and recursive least squares
methods?
11.12 Describe an advantage and a disadvantage of adaptive controllers.
11.13 What is the goal of an MRAC?
11.14 Why is an expert of the system being controlled a requirement for designing
fuzzy logic controllers?
11.15 What are linguistic variables in fuzzy logic controllers?
11.16 What is the model for neural net controllers?
11.17 What are two advantages of genetic algorithms?
11.18 Find the transfer function GD that decouples the disturbance input from the
output of the system given in Figure 48. Assume that the disturbance is
measurable.

Figure 48 Problem: system block diagram for disturbance input decoupling.



11.19 Given the second-order system and discrete controller in the block diagram of
Figure 49, design and simulate a command feedforward controller when the sample
time is T = 0.1 sec. Use a sinusoidal input with a frequency of 0.8 Hz and compare
results with and without modeling errors present. For the modeling error, change the
damping from 6 to 3. Use Matlab to simulate the system.
11.20 Using the input-output data given, determine the coefficients, a and b, of the
difference equation derived from a discrete transfer function with a constant numera-
tor and first-order denominator. Use the least squares batch processing method. The
model to which the data will be fit is given as

C(z)/R(z) = b/(z − a)

The recorded input and output data are as follows:


k Input data, r(k) Measured output data, c(k)

1 0 0
2 0.5 0
3 1 0.1813
4 1 0.5109
5 1 0.7809
6 0.4 1.0019
7 0.2 0.9653
8 0.1 0.8628
9 0 0.7427
10 0 0.6080
11 0 0.4978

11.21 Using the input-output data given, determine the coefficients, a1, a2, b1, and
b2, of the difference equation derived from a discrete transfer function with a first-
order numerator and second-order denominator. Use the least squares batch proces-
sing method. The model to which the data will be fit is given as

C(z)/R(z) = (b1 z + b2)/(z² − a1 z − a2)

Figure 49 Problem: command feedforward system block diagram.



The recorded input and output data are as follows:


k Input data, r(k) Measured output data, c(k)

1 1 0
2 1 0.7000
3 1 1.0169
4 1 1.0168
5 1 1.0010
6 0 0.9993
7 0 0.2999
8 1 −0.0169
9 1 0.6832
10 1 1.0159
11 0 1.0175

11.22 Using the definitions (membership functions, rules, etc.) for the fuzzy logic fan
speed controller in Section 11.9.3.1, determine what the fan speed command would
be if the humidity input is 60% and the temperature is 60°F. Approximate the fan
speed for
1. LOM defuzzification
2. MOM defuzzification
12
Applied Control Methods for Fluid
Power Systems

12.1 OBJECTIVES
• Develop analytical models for common fluid power components.
• Demonstrate the influence of different valve characteristics on system per-
formance.
• Develop feedback controller models for common fluid power systems.
• Examine a case study of using high-speed on-off valves for position control.
• Examine a case study of computer control of a hydrostatic transmission.

12.2 INTRODUCTION
Fluid power systems, as the name implies, rely on fluid to transmit power from one
area to another. Two common classifications of fluid power systems are industrial
and mobile hydraulics. Within these terms are a variety of applications, as Table 1
shows. The general procedure is to convert rotary or linear motion into fluid flow,
transmit the power to the new location, and convert it back into rotary or linear motion.
The primary input may be an electrical motor or combustion engine driving a
hydraulic pump. The common actuators are hydraulic motors and cylinders. The
downside is that every energy conversion results in a net loss of energy, and efficiency
is therefore an important consideration during the design process. Figure 1 shows the
general flow of power through a typical hydraulic system, where arrows pointing
down represent energy losses in our system. The energy input and output devices
primarily consist of pumps (input), motors, and cylinders. Of primary concern in this
chapter is the energy control component, usually accomplished through the use of
various control valves (pressure, flow, and direction). In addition to these three basic
categories, many auxiliary components are necessary for a functional sys-
tem. Examples include reservoirs, tubing or hoses, fittings, and an appropriate fluid. It
is also standard practice to add safety devices (relief valves) and reliability devices
(filters and oil coolers).
Most valves, controlling how much energy is delivered to the load, do so by
determining the amount of energy dissipated before it reaches the load. This method

Table 1 Typical Applications: Industrial and Mobile Hydraulics

Industrial hydraulics Mobile hydraulics

Machine tools Off-road vehicles


(clamps, positioning devices, rotary tools) (loaders, graders, material handling)
Assembly lines On-road vehicles
(conveyors, loading and unloading) (steering, ride, dumping, compacting)
Forming tools Aerospace
(stamping, rolling) (control surface, landing gear, doors)
Material handlers and robot actuators Marine (control surfaces, steering)

has two negative aspects associated with it: excessive heat buildup and large input
power requirements. Alternative power management techniques are discussed later
in this chapter in Section 12.6. Since using valves to control the amount of energy
dissipated provides good control of our system, it remains a popular method of
controlling fluid power actuators. This concept of tracking energy levels throughout
our system is shown in Figure 2. We see the initial energy input provided by the pump,
slight losses occurring in the hoses and fittings, energy loss over the relief valve
(auxiliary component) to provide constant system pressure, and the variable energy
loss determined by the spool position in the control valve. The remaining energy is
available to do useful work.
The remaining sections in this chapter provide an introduction to control
valves, how they are used in control systems, strategies for developing efficient and
useful hydraulic circuits, and two case studies of similar applications.

12.3 OVERVIEW OF CONTROL VALVES


Valves perform many tasks in a typical hydraulic system and may be the most critical
element in determining whether or not the system achieves the goals it was designed
for. This section provides an overview of different hydraulic valves and develops
basic steady-state and dynamic models for popular valves. Valves usually act as the
hydraulic control actuator in the circuit by controlling the energy loss or flow. In an
energy loss control method, the valve consumes excess power when it is not needed, as
is typical of most relief valves. Valves may also act as the actuator on a variable
displacement pump in a volume control strategy, as seen with a pressure compen-
sated pump. Volume control systems are more efficient, but their initial cost is greater
due to the variable displacement pump and valve.

Figure 1 Energy transmission in a typical hydraulic system.



Figure 2 Energy flow and levels in a typical hydraulic system.

12.3.1 Terminology and Characteristics


Control valves are classified by their function into three broad classifications:
• Pressure control;
• Flow control;
• Directional control.
These valves may or may not use feedback, either mechanical or electrical, to control
the pressure, flow, or direction. This section introduces the common types for each
function and describes their operation. Many of the types do use feedback and may
be analyzed using the techniques from earlier chapters. These valves are also found
in a wide variety of control system applications, some of which are described in the
previous section, and this section helps us make the proper choices as to which
components and circuits are appropriate for our system. References are
included for those desiring further study.
Although the goal of each type is regulation of pressure, flow, or direction, we
will see that in practice there is also dependence on the other variables. For example,
a pressure control valve will be affected by the flow rate through the valve. As
demonstrated throughout the preceding chapters, closing the loop allows us to further
enhance the performance.

12.3.2 Pressure Control Valves


Pressure control valves regulate the amount of pressure in a system by using the
pressure as a feedback variable. The feedback usually occurs internally and controls
the effective flow area of an orifice to regulate the pressure. A force balance that
takes place on the spool or poppet of the valve controls the orifice size. One side of
the equation is the pressure acting on an exposed area. This pressure is balanced by
the compression force in the spring. The spring force can be adjusted by turning a
screw. One direction of rotation causes the screw to compress the spring further,
thereby requiring additional hydraulic force to overcome it. For electronically con-
trolled valves, a solenoid is used to provide the balancing force.
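The force balance described above can be put in numbers. In this hypothetical sketch the spring rate, preload compression, and poppet area are invented values; the valve cracks open when the hydraulic force P·A exceeds the spring preload:

```python
# Force balance on a direct-acting relief valve poppet: the valve begins to
# open (cracks) when hydraulic force P*A exceeds the spring preload k*x0.
# Spring rate, preload compression, and poppet area are invented values.

def cracking_pressure(spring_rate, preload_m, area_m2):
    """Pressure (Pa) at which hydraulic force overcomes the spring preload."""
    return spring_rate * preload_m / area_m2

k = 50_000.0    # N/m spring rate
x0 = 0.004      # m of preload compression (set by the adjustment screw)
area = 2.0e-5   # m^2 exposed poppet area

p_crack = cracking_pressure(k, x0, area)     # 10 MPa with these numbers
# Turning the screw to increase preload raises the cracking pressure:
p_higher = cracking_pressure(k, 0.005, area)
```

The same balance explains why large direct-acting valves need impractically stiff springs: doubling the poppet area doubles the hydraulic force the spring must resist at a given pressure setting.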

During actual operation, pressure control valves are in constant movement,
modulating to maintain a force balance. As presented next, different valve designs
have different characteristics. Pressure control valves are generally described by two
models: a force balance on the spool (valve dynamics) and the pressure-flow relation-
ship (orifice equation).
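The orifice equation referred to here is the standard turbulent-flow relation Q = Cd·A·√(2ΔP/ρ). A small numerical sketch follows; the discharge coefficient, area, pressure drop, and oil density are typical illustrative values, not data from the text:

```python
import math

# Steady-state orifice equation for turbulent flow through a valve opening:
#   Q = Cd * A * sqrt(2 * dP / rho)
# Cd is the discharge coefficient, A the effective flow area, dP the pressure
# drop across the orifice, and rho the fluid density.

def orifice_flow(cd, area_m2, dp_pa, rho):
    """Volumetric flow (m^3/s) through an orifice for a given pressure drop."""
    return cd * area_m2 * math.sqrt(2.0 * dp_pa / rho)

cd = 0.62       # typical sharp-edged orifice discharge coefficient
area = 1.0e-5   # 10 mm^2 effective flow area
dp = 10.0e6     # 10 MPa pressure drop
rho = 870.0     # kg/m^3, typical hydraulic oil

q = orifice_flow(cd, area, dp, rho)   # m^3/s
q_lpm = q * 60_000.0                  # convert to liters per minute
```

Note the square-root dependence: quadrupling the pressure drop only doubles the flow, which is why valve spools modulate area rather than pressure to meter flow precisely.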
12.3.2.1 Main Categories
Two main categories exist for pressure control valves:
• Pressure relief valves (normally closed [N.C.] valves, which regulate the
upstream pressure);
• Pressure reducing valves (normally open [N.O.] valves, which regulate the
downstream pressure).
The respective symbols used to describe the two broad types of pressure control
valves are given in Figure 3. With the pressure relief valve, the inlet pressure is
controlled by opening the return or exhaust port against an opposing force, in this
case a spring. The inlet pressure acts to open the valve. For the pressure reducing
valve the operation is similar, except that opening the return or exhaust now controls
the outlet pressure port. The downstream pressure is used to close the valve. If the back
pressure never increases, the valve remains open. Thus, the pressure relief valve
controls the upstream pressure and the pressure reducing valve controls the down-
stream pressure.
The pressure relief valve family is the more common of the two and is exam-
ined in more detail in the next section. Most valves are designed to cover a specific
range of pressures. Operation outside of these ranges may result in reduced
performance or failure.
12.3.2.2 Pressure Relief Valves
Pressure relief valves are included in almost every hydraulic control system. Two
common uses for pressure relief valves are
• Safety valve, limiting the maximum pressure in a system;
• Pressure control valve, regulating the pressure to a constant predetermined
value.

Figure 3 Symbols used for pressure relief and pressure reducing valves.

When used as a safety valve, the goal is to ensure that the valve opens (and relieves
the pressure) before the system is damaged. This configuration does not normally
rely on the valve to modulate the pressure during normal operation. A pressure
control valve is used where the system is expected to always have extra flow passing
through the valve, which then maintains the desired system pressure. The valve in
this configuration is constantly active during system operation. Pressure relief valves
can be used to perform other functions in hydraulic circuits, but the basic steady-
state and dynamic characteristics of the valves remain the same.
Ball type pressure control valves are the simplest in design but have very
limited performance characteristics. As the flow increases, the ball has a tendency
to oscillate in the flow stream, causing undesirable pressure fluctuation. Due to the
limited damping, the ball tends to keep oscillating once it has begun. These oscil-
lations cause fluid-borne noise (pressure waves) that may ultimately cause undesir-
able air-borne noise. Ball type relief valves are primarily used as safety type relief
valves. As shown in Figure 4, the pressure acts directly on the ball and is balanced by
the spring force. Once the spring preload force is exceeded, the valve opens and
begins to regulate the pressure. By changing from the ball to the poppet, as
shown in Figure 5, stability is enhanced since the poppet tends to center itself better
within the flow stream. The stability improvement is evident over a wider flow range.
There is still little damping in many poppet type pressure control valves due to
the lack of sliding surfaces. To further enhance stability we can use guided poppet
valves. Guided poppet direct-acting relief valves can pass flows with greater stability
than the previous valves. The added stability is created by the damping provided
by the mechanical and viscous friction associated with the guide of the poppet.
However, this design must flow the relieved oil through the cross-drilled passage-
ways within the poppet. These holes, shown in Figure 6, cause a restriction, thereby
limiting the flow capacity of the valve. A primary limitation of a direct operated
poppet type relief valve is its limited capacity. This limitation occurs because the spring
force must be large enough to counteract (balance) the system pressure acting on the
entire ball or poppet area. In larger valves, the spring force simply becomes unrea-
sonably large.
The differential piston type relief valve is designed to overcome this problem.
While still in the poppet valve family, this design reduces the effective area upon
which the pressure acts. As shown in Figure 7, the pressure enters the valve from the
side and acts only on the ring area of the piston. The remaining piston area is acted
upon by tank pressure. This allows the spring providing the opposing force to be
sized much smaller.

Figure 4 Direct-acting ball type pressure control valve.



Figure 5 Direct-acting poppet type pressure control valve.

A problem arises in this design when trying to reseat the valve. When the valve
is opened and oil starts to flow over the seat, a pressure gradient occurs across the
poppet surface. This creates a force tending to keep the valve open and causes
significant hysteresis in the valve's steady-state pressure-flow (P-Q) characteristics.
Adding the button to the base of the poppet (shown in Figure 7) improves the
hysteresis by disturbing the flow path. The button captures some of the fluid's
velocity head forces and tends to create a force helping to close the piston.
Unfortunately, the button creates an additional restriction to the flow, reducing
the overall flow capacity. Increasing volumetric flows demand similarly increasing
through-flow cross-sectional areas and, to balance the larger forces, stronger springs.
Eventually, a point is reached where these items become too large and a pilot-
operated valve becomes the valve of choice.
A pilot-operated valve consists of two pressure control valves in the same
housing (Figure 8). The pilot section is a high-pressure, low-flow valve that
controls the pressure on the back side of the primary valve. This pressure combines with
a relatively light spring to oppose the system pressure acting upon the large effective
area of the main stage.
Several advantages are inherent to pilot-operated relief valves. First, they exhi-
bit good pressure regulation over a wide range of flows. Second, they require only light
springs, even at high pressures. Third, they tend to minimize leakage by using the
system pressure to force the valve closed. During operation, when the pilot relief
valve is closed (system pressure less than control pressure), equal pressures act
on both sides of the main poppet. Since the poppet cavity area is greater than the
inlet area, the forces keep the valve tight against the seat, thus reducing leakage. The
light spring maintains contact at low pressures.

Figure 6 Direct-acting guided poppet type pressure control valve.


Applied Control Methods 501

Figure 7 Differential area piston type pressure control valve.

When the system pressure overcomes the adjustable spring force on the pilot poppet, a small flow occurs from system to tank through the pilot drain. Forces no longer balance on the main poppet since the pilot stage flow induces a pressure drop across the orifice, thus lowering the cavity pressure. This pressure differential across the main spool causes the spool to move, opening the inlet to tank. Once system pressure is reduced, the valve is once again closed. The valve is constantly modulating the system pressure when in the active operating region.
Another class of pressure control valves, which may be direct acting or use pilot stages, is the spool type. The inherent advantage in this type of valve is that the pressure feedback controlling the valve and the flow paths are now decoupled, whereas with the poppet valves the flow path and the area upon which the pressure acts are the same. Figure 9 illustrates a basic direct-acting spool type relief valve. In spool type pressure control valves, the pressure acting on an area is still balanced by a spring, but the main flow path is not across the same area. There is a piston on the spool whose lands cover or uncover ports, allowing the system pressure to be controlled. Lands and ports are discussed in more detail in later sections. Many times, sensing pistons are used to allow higher pressure ranges with reasonable spring sizes.

Figure 8 Pilot-operated pressure control valve.



Figure 9 Spool type pressure control valve.

Adding a sensing piston to the direct-acting spool type relief valve allows the same pressure to be regulated with a much smaller spring size. The sensing piston area and spring forces must still balance for steady-state operation. A force analysis, based on the pressure control valve model in Figure 10, demonstrates this. Since both sides of the main spool are at tank pressure, the only force the spring needs to balance is the pressure acting on the area of the sensing piston, A_P. This allows for much higher control pressures with smaller springs.
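This force balance can be sketched numerically (a minimal illustration assuming a simple preloaded spring; the spring rate, preload, and sensing areas below are made-up round numbers, not from any particular valve):

```python
def cracking_pressure(spring_rate, preload, sensing_area):
    """Pressure at which the sensing-piston force overcomes the spring preload.

    Steady-state balance: P * A_P = k * x_preload, so P = k * x_preload / A_P.
    """
    return spring_rate * preload / sensing_area

# A ten-times-smaller sensing area gives a ten-times-higher control
# pressure for the same spring.
small = cracking_pressure(spring_rate=50e3, preload=0.005, sensing_area=5e-5)  # Pa
large = cracking_pressure(spring_rate=50e3, preload=0.005, sensing_area=5e-4)
assert small == 10 * large
```

The design implication is exactly the one in the text: shrinking A_P, rather than enlarging the spring, is what allows high control pressures.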

12.3.2.3 Poppet and Spool Type Comparisons


Spool type relief valves are designed to overcome several of the poppet valve shortcomings. While poppet valves are relatively fast due to no overlap, short stroke lengths, and minimal mass, they tend to be underdamped. This may lead to large overshoots and oscillations when trying to maintain a particular pressure. Spool type valves have the ability to yield more precise control over a wider flow range. This is achieved at the expense of response times, relative to poppet valves.
A second area of difference between poppet and spool type valves is leakage. Poppet valves, by the nature of their design, have a positive sealing (contact) area and correspondingly small internal leakage flows. Spool valves, even when overlapped, exhibit a contact area clearance and thus small leakage flows even in the

Figure 10 Spool type pressure control valve with sensing piston.



closed position. Tighter tolerances to reduce this leakage will generally lead to
more expensive valves.
In general, a characteristic curve may be generated for a relief valve, revealing three distinct operating regions of the valve, given in Figure 11. The first region is where the supply pressure is not large enough to overcome the spring force acting on the spool. The valve is closed, sealing the supply line from the tank line. The second region occurs when the supply pressure is large enough to overcome the spring force but not large enough to totally compress the spring. This is called the active region of the valve. In the active region, the relief valve is attempting to maintain a constant system pressure at some preset value. The system should be designed to ensure that the valve operates in this region. The change in pressure from the beginning of the active region (cracking pressure) to the end is often called the pressure override. The third region occurs when the supply pressure is large enough to completely compress the spring. This only occurs when the size of the relief valve is such that it cannot relieve the necessary flow to maintain a constant pressure. When the valve is in this region, it acts as though it were a fixed orifice, and the pressure drop can be determined from the orifice equation.
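The three regions can be sketched as a piecewise steady-state model (a simplification with illustrative parameters: a 10 MPa cracking pressure, a 2 MPa pressure override, and a fixed-orifice coefficient matched at the saturation boundary; these are not a specific valve's data):

```python
import math

def relief_valve_flow(p_supply, p_crack=10e6, p_full=12e6, q_max=2.0e-3):
    """Steady-state relieved flow vs. supply pressure, in three regions.

    Region 1: below cracking pressure, the valve is closed (no flow).
    Region 2: active region, flow rises roughly linearly over the override.
    Region 3: spring fully compressed; the valve is a fixed orifice Q = K*sqrt(P).
    """
    if p_supply <= p_crack:
        return 0.0
    if p_supply <= p_full:
        return q_max * (p_supply - p_crack) / (p_full - p_crack)
    k_orifice = q_max / math.sqrt(p_full)   # match flow at the region boundary
    return k_orifice * math.sqrt(p_supply)

assert relief_valve_flow(5e6) == 0.0                          # region 1: closed
assert 0 < relief_valve_flow(11e6) < relief_valve_flow(12e6)  # region 2: active
assert relief_valve_flow(13e6) > relief_valve_flow(12e6)      # region 3: orifice
```

A real valve's active region is not exactly linear, but this captures the qualitative shape of the curve in Figure 11.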
The different valve types discussed exhibit different steady-state and dynamic response characteristics. Spool type valves, while slightly slower, have greater damping and therefore less overshoot and more precise control. Spool type valves come the closest to approaching an ideal relief valve curve. Poppet valves open quickly but are generally underdamped and tend to oscillate in response to a step change in pressure. Pilot-operated poppet relief valves generally give better controllability than direct-acting poppet relief valves, as noted in the steady-state PQ curve in Figure 12.
Additional valves in the pressure control category have been developed for different applications. The symbols for several of these valves are shown in Figure 13. The unloading valve is identical to the relief valve with the exception that control pressure is sensed through a pilot line from somewhere else in the system. Therefore, flow through the valve is prevented until pressure at the pilot port becomes high enough to overcome the preset spring force. This valve may be used to unload the circuit based on events in other parts of the system. Large power savings are possible with this type of valve since the main system flow is not dumped at high pressures

Figure 11 Operating regions of a pressure control valve.



Figure 12 Comparisons of different (typical) pressure control valves.

over the valve continuously, generating heat. Implementation is commonly found in two-pump systems where the pressure from a small pump is used as the pilot to the unloading valve when the large pump is not needed.
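A quick power comparison shows why unloading matters, since hydraulic power dissipated across a valve is pressure times flow (the pressures and flow below are illustrative round numbers):

```python
def hydraulic_power_w(pressure_pa, flow_m3s):
    """Hydraulic power dissipated (as heat) when flow is dropped across a valve."""
    return pressure_pa * flow_m3s

flow = 1.0e-3  # 1 L/s of pump flow
relieving = hydraulic_power_w(20e6, flow)   # dumping at a 20 MPa relief setting
unloaded = hydraulic_power_w(0.5e6, flow)   # unloaded at ~0.5 MPa back pressure

assert relieving == 20_000.0   # 20 kW wasted as heat
assert unloaded == 500.0       # 0.5 kW
```

The forty-to-one difference in wasted power (and heat load on the oil) is the saving the unloading valve provides during idle periods.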
Counterbalance valves are also identical to the relief valve except that they include an integral check valve for free flow in the reverse direction, and therefore the downstream port is not connected to tank. Counterbalance valves are commonly used to maintain back pressure on cylinders mounted vertically. As the cylinder is raised, the flow passes through the check valve into the cylinder. If the cylinder begins to lower, the valve maintains a back pressure and prevents the cylinder from falling.
Finally, sequence valves are also identical to the relief valve except that an external drain line must be connected to tank from the spring chamber. This is because the sequence valve's downstream port may be pressurized. Sequence valves are used as priority valves when more than one actuator is necessary for a particular circuit. Typical applications include the sequential extension of two cylinders where the first one is fully extended before the second begins to extend. When the pressure on the primary actuator is great enough (after a cylinder has stalled and stopped moving), the valve opens and provides power to a second actuator in the system.

Figure 13 Symbols for several additional types of pressure control valves.



12.3.2.4 Pressure-Reducing Valves


Pressure-reducing valves are used in hydraulic circuits to supply more than one operating pressure. In operation they are similar to a typical pressure control valve except that the downstream pressure, not the upstream pressure, is used to control the poppet or spool position. They fall in the category of pressure control valves. The valve is normally open (N.O.) and the downstream pressure closes the valve when the spring force is overcome. The general symbol and basic valve types are summarized in Figure 14.
The typical configurations are similar to other valves in the pressure control valve family and might include the following:

• Direct acting or pilot operated;
• Poppet or spool;
• Built-in check valve for free reverse flow.

Pressure-reducing valves exhibit some slight differences when compared to other pressure control valves. With pilot-operated pressure-reducing valves there is a continuous flow through the orifice to the tank. Thus inserting a pilot-operated pressure-reducing valve in the circuit incurs a continuous energy loss. Also, pressure-reducing valves require a separate connection to tank (i.e., three connections). Finally, there is an energy cost associated with using a pressure-reducing valve, since the lower pressure is achieved by dissipating enough energy from the fluid. All the pressure control valves achieve control by dissipating energy from the fluid. Alternative techniques presented in later sections exhibit much higher system efficiencies.

12.3.3 Flow Control Valves


Flow control valves are constructed using similar components as described for pressure control valves. Construction may be based on needle, gate, globe, or spool valves. In most cases the pressure is still the feedback variable, but instead of the system pressure, the pressure drop across an orifice is held constant. If the valves are not pressure compensated, the flow will vary whenever the load changes. Since the flow varies with the square root of the pressure change, this may be an acceptable

Figure 14 Direct-acting spool type pressure reducing valve.



trade-off in some situations. If better flow regulation is desired, the valve should be pressure compensated.
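The square-root sensitivity means the flow changes less severely than the load-pressure change that causes it. A short sketch using the orifice relation Q = K·√ΔP (the coefficient and pressures are illustrative):

```python
import math

def orifice_flow(k, dp):
    """Orifice equation: flow proportional to the square root of the drop."""
    return k * math.sqrt(dp)

k = 1.0e-6
q_nominal = orifice_flow(k, 4.0e6)   # 4 MPa across the metering orifice
q_loaded = orifice_flow(k, 1.0e6)    # a load change cuts the drop to 1 MPa

# A 4:1 change in pressure drop only halves the flow.
assert math.isclose(q_nominal / q_loaded, 2.0)
```

This is why an uncompensated valve may still be acceptable when the load pressure varies only moderately.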
Several further options are available for flow control valves. The valves may include a reverse flow check valve integral with the valve body, an overload pressure relief valve may be built in, or the valve may also be temperature compensated. Temperature compensated valves will adjust the orifice size based on the temperature of the fluid, thus providing a fairly constant flow in the presence of both load and temperature changes. An inline pressure compensated flow control valve is shown in Figure 15. In this valve, where the orifice is downstream of the spool, a decrease in flow results in a reduced pressure drop and the forces no longer balance on the spool. The spool will move to the left and increase the orifice area until the flow increases and the pressure forces balance. If the flow increases, the pressure drop also increases and the spool begins to close, thus reducing (correcting) the flow. This is the negative feedback component in the valve. Most valve designs include a damping orifice in the pressure feedback line to control stability. Reducing the size of the orifice will add damping to the system, increasing stability but reducing the response time.
It is also possible to use a bypass pressure compensated flow control valve instead of the inline valve shown above. A bypass pressure compensated flow control valve, shown in Figure 16, dumps excess flow from the circuit to maintain constant flow to the load. In the bypass flow control method an increase in flow will cause the spool to open more and bypass enough flow to keep the load flow constant.
Flow control valves can be modeled using the same relationships as for pressure and directional control valves. The primary difference is that the flow is related to a pressure feedback variable. An ideal flow control valve would behave as shown in Figure 17, where each horizontal line represents a different valve setting. Most valves are set to have a constant pressure drop of 40 to 100 psi across the control orifice. To operate effectively then, the valve must be chosen such that the minimum flow desired is within the operating range of the valve. In addition to the low flow limit, flow control valves also will have a high flow limit similar to other valve classes. Once the inline or bypass path is fully open, the flow can no longer be regulated and the valve acts like a fixed orifice in the system.

Figure 15 Pressure compensated flow control valve.



Figure 16 Bypass pressure compensated flow control valve.

12.3.4 Directional Control Valves


12.3.4.1 Basic Directional Valve Nomenclature
Directional control valves constitute the third major class of valves to be examined. As the name implies, these valves direct the flow to various paths in the system. There are many different configurations available and different symbols for each type. In general, there are common elements in each symbol, making understanding the valve characteristics fairly straightforward. Common categories used in describing directional control valves include the following:

• Construction type (spool, poppet, and rotary);
• Number of positions;
• Number of ways;
• Number of lands;
• Center configuration;
• Valve driver class.

The number of positions in a directional control valve is of two kinds: infinitely variable and distinct positions. This difference is reflected in the symbols given in Figure 18. Distinct position valves have more limited roles in control systems since the valve position cannot be continuously varied.
The number of ways a valve has is equal to the number of flow paths. Common two-way and four-way valves are shown in Figure 19. Center positions are commonly added to describe the valve characteristics around null spool positions. The

Figure 17 PQ characteristics of a flow control valve.



Figure 18 Number of positions in a directional control valve.

number of lands varies from one on the simplest valves to three or four on common valves to five or six on more complex valves. Each land is like a piston mounted on a central rod which slides within the bore of the valve body. The rod and pistons together are called a spool, hence spool type valves. As the spool moves, the lands (or pistons) cover and uncover ports to provide passageways through the valve body. Common two-, three-, and four-land valve symbols are shown in Figure 20.
The center configuration is one of the most important characteristics of directional control valves used in hydraulic control systems. There are three common classifications of center configurations:

• Under lapped (open center);
• Zero lapped (critical center);
• Over lapped (closed center).

An under lapped, or open centered, valve has limited use because a constant power loss occurs in the center position. In addition, an under lapped valve will have lower flow gain and pressure sensitivity. Open center valves are more common in mobile hydraulics, where they are used to provide a path from the pump to reservoir when the system is not being used (idle times). This provides significant power savings since the pump is not required to produce flow at high pressure. The flow paths are always open, as shown in Figure 21.
A zero lapped, or critical center, valve has a linear flow gain as a function of spool position. This requires that the lands be very slightly over lapped to account for spool-to-bore clearances. This configuration is typical for most servovalves and is shown in Figure 22.
The critical center valve is attractive for implementation in a control system since a linear model can be used with good results. In addition, response times can be faster than closed center valves since any spool movement away from center immediately results in flow. In general, critical center valves will be between 0% and 3% overlap, with most being less than 1% (quality servovalves).
An over lapped valve has lands wider than the ports and exhibits deadband characteristics in the center position, as shown in Figure 23. Although there is overlap with the spool, even in the center position there is a leakage flow between ports

Figure 19 Number of ways in a directional control valve (two-way and four-way).



Figure 20 Number of lands in a directional control valve.

Figure 21 Open center conguration (directional control valve).

Figure 22 Critical center conguration (directional control valve).

Figure 23 Closed center conguration (directional control valve).



due to clearances needed for the spool to move. Although minimal in amount, this leakage may have a great effect on stopping a load. Proportional valves generally exhibit varying degrees of overlap. The amount of overlap is generally related to the cost.
In addition to these, there are many specialty configurations designed for specific applications. One of the most common types is the tandem center valve, often used to unload the pump at idle conditions while blocking the work ports and holding the load stationary. The graphic symbol is shown in Figure 24. Unloading the pump provides energy savings, while blocking the work ports holds the load stationary.
There are many other center types, including blocking the P port and connecting A and B to tank in the center position. This is a common type where the valve is the first stage actuator for a larger spool. The center type allows the large spool to center itself (spring centered, no trapped volume) when the smaller first stage valve is centered. Additional center types allow for motor freewheeling, different area ratios for single-ended cylinders, etc. The different specialty centers are often designed using grooves cut into the valve spool. By changing the size and location of the grooves, the different center types and ratios can be designed into the valve operation. The grooves are also used to shape the metering curve and are a factor in determining the flow gain of the valve.

12.3.4.2 On-Off Directional Control Valves


On-off directional control valves are commonly two or three position valves, with direct actuation or pilot stages, and with or without detents to keep the valve open. Since they are designed to be either open or closed, they do not provide variable metering abilities and cannot be used to control the acceleration/deceleration and velocity of the load. This severely limits their use as a control device in a hydraulic control system.
On-off valves can be used to discretely control cylinder position and may be used quite effectively with simple limit switches in some repetitive motions. They are among the cheapest directional control valves and typically have opening times between 20 and 100 msec. Since they do not open and close instantaneously, it is difficult to achieve accurate cylinder positioning, even with feedback.
When modeling a system using on-off valves, several assumptions can be used to simplify the design. Generally, the acceleration and deceleration periods are quite short and the actuator operates at its slew velocity for the majority of its motion. If the dynamic acceleration/deceleration phases need to be changed, the valves can use a small orifice plug to limit the rate of spool/poppet travel when activated, thus affecting the system dynamics. The slew velocity is a function of system pressure, valve size, and required load force/torque. Knowing the approximate load force will allow the valve to be sized according to the required cycle time.
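Under the slew-velocity assumption, a first-pass cycle-time estimate is straightforward (a rough sizing sketch with illustrative cylinder and valve numbers; real sizing would use the valve's rated flow at the actual pressure drop and load):

```python
def slew_cycle_time(stroke_m, valve_flow_m3s, piston_area_m2):
    """Approximate one-way stroke time, ignoring the short accel/decel phases.

    Slew velocity v = Q / A, so t = stroke / v.
    """
    slew_velocity = valve_flow_m3s / piston_area_m2
    return stroke_m / slew_velocity

# 0.5 m stroke, 1 L/s valve flow, 50 cm^2 bore area -> v = 0.2 m/s
t = slew_cycle_time(0.5, 1.0e-3, 5.0e-3)
assert abs(t - 2.5) < 1e-9   # roughly 2.5 s per extension
```

Inverting the same relation gives the valve flow required to hit a target cycle time, which is the sizing calculation the text describes.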

Figure 24 Tandem center conguration (directional control valve).



Due to their limitations, on-off directional control valves are seldom used in applications where accurate control of the load is required. More time in this section is spent discussing proportional and servo directional control valves, while a case study using on-off valves in place of directional control valves is presented later.

12.3.4.3 Proportional Directional Control Valves


Proportional valves represent the next level of performance and now allow us to control the valve spool/poppet position and thus meter the flow to the load. It is now possible to control the acceleration/deceleration and velocity of the load. Once again, the same terminology applies. Proportional valves may be direct-acting, lever or solenoid actuated, include one or more pilot stages, and involve various center configurations. Lever actuated valves, common in mobile hydraulics, may use different specialty centers and grooves to provide the desired metering characteristics.
When discussing electronic control systems, the most common form of actuation is the electric solenoid, either directly connected to the main spool or acting on a pilot stage. There are three common configurations using electric solenoids, whether direct acting or piloted. First, a single electrical solenoid is connected to one side of the spool and a spring to the other. As long as the flow forces are much smaller than the spring force, the solenoid is approximately proportional and spool position follows the solenoid current. In a typical configuration, the symbol of which is given in Figure 25, this implies that current is required to keep the valve centered and a power failure will allow the valve to fully shift in one direction. In some systems this is a positive feature for safety reasons.
The second type uses two solenoids and two springs, shown in Figure 26. When
the valve is shut down or in the case of a power failure, the valve returns to the center
position. To maintain proportionality, larger springs may be used. However, larger
springs lead to larger solenoids and slower response times.
Finally, proportional directional control valves may incorporate electronic spool position feedback to close the loop on spool position, as shown in Figure 27. This leads to several advantages. Light springs, primarily used to center the valve, can be used since the feedback keeps the valve acting more linear without relying on heavier springs. The valve can also be set to respond faster (perhaps at the expense of a slight overshoot) and thus exhibit better dynamic characteristics. Since the spool position is controlled, the amplifier card only requires a small command signal and the spool position becomes proportional to it. Of course, once feedback is added, the problem of stability must be considered. Different measures of stability are discussed in later sections.
Once electrical actuation is provided, many additional options are available. For example, it is now quite simple to implement ramp functions by changing the profile of the input signal. Additional advantages are listed below:

Figure 25 Single solenoid with spring return (proportional valve).



Figure 26 Double solenoid with centering springs (proportional valve).

• Solenoids may be actuated using pulse-width modulation (PWM), thus saving in amplifier circuit cost.
• The controller automatically adjusts for solenoid resistance changes (as temperature increases).
• Electronic implementation of various performance enhancements such as valve gain, deadband compensation, ramping functions, and outer control loops (position, pressure, etc.) is possible.
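The first two points can be illustrated with the basic PWM duty-cycle relation (a simplification assuming the coil inductance smooths the switching so only the average voltage matters; the voltage, resistance, and duty values are illustrative):

```python
def average_solenoid_current(duty_cycle, supply_v, coil_resistance_ohm):
    """Mean coil current under PWM drive: I_avg = D * V_supply / R_coil."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    return duty_cycle * supply_v / coil_resistance_ohm

# As the coil warms up and its resistance rises, the controller raises the
# duty cycle to hold the same average current (and thus solenoid force).
i_cold = average_solenoid_current(0.50, 24.0, 12.0)   # ~1.0 A
i_hot = average_solenoid_current(0.60, 24.0, 14.4)    # same ~1.0 A
assert abs(i_cold - i_hot) < 1e-9
```

A current-mode PWM driver does this adjustment automatically, which is the compensation for resistance drift mentioned above.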
In recent years the quality of proportional valves has steadily improved, and many valves are now considered servo grade. Some servo grade proportional valves incorporate a single solenoid with spool position feedback and achieve good response times. Electronically, the center position can be better controlled, allowing the use of minimal spool overlap. The next section takes a brief look at servovalves and how they compare with and differ from proportional valves.
12.3.4.4 Servovalves
To begin with, let us examine the advantages and disadvantages of typical servovalves relative to proportional valves, presented in Table 2. Although Table 2 clearly shows many advantages for servovalves, two important items prohibit their widespread use, especially in mobile applications: cost and sensitivity to contamination. Where the best in control system performance is required, as in aerospace and high performance industrial applications, servovalves are the most common choice. In general, where an outer loop is closed to control position, velocity, or force (or others), servovalves provide the best performance. When the system is ultimately controlled open loop, as with an operator using a hydraulic lift attachment, proportional valves will generally suffice and in many conditions perform better due to contamination problems.
Various configurations of servovalves have been produced over the past years, but most designs may be classified as either flapper nozzle or jet pipe variants. A small torque motor provides the electromechanical conversion, and hydraulic amplification is used to quickly move the valve spool. The number of stages will vary depending on the application. A common two-stage flapper nozzle cutaway is shown in Figure 28. When the torque motor is not energized (current = 0), the flapper is

Figure 27 Double solenoid with centering springs and spool position measurement for
electronic feedback (proportional valve).

Table 2 Comparison of Servovalve and Proportional Valve Characteristics

Comparisons for            Servovalves   Servo grade     Typical         Basic
typical valves                           proportional    proportional    proportional
                                         valves          valves          valves

Amplifier                  Linear        Linear or PWM   Linear or PWM   PWM
Bandwidth (Hz)             60-400        40-100          10-40           <10
Contamination sensitivity  Very high     High            Medium          Low
Cost                       Very high     High            Medium          Low
Feedback                   Internal      External LVDT   Sometimes       No
                           pressure
Hysteresis                 <1%           <1%             <5%             2-8%
Spool deadband             <1%           1-5%            5-25%           >25%

centered between the two nozzles, and equal flows are found on the left and right outer paths. Since each path has an equal orifice, the pressure drops are the same and each end of the spool sees equal pressure. The spool remains centered. With a counterclockwise torque, the right nozzle size is decreased while the left nozzle outlet area is increased. This results in a reduced right-side flow and an increased left-side flow. Remembering the orifices, the right side (reduced flow) has less of a pressure drop than the left, and the pressure imbalance accelerates the spool to the left. A thin feedback wire connecting the spool and flapper creates a correcting moment that balances the torque motor. In this fashion the spool position is always proportional to the torque motor current (after transients decay). The advantage of this system is that the electrical actuator (torque motor) only needs a small force change that is immediately amplified hydraulically. The resulting large hydraulic force rapidly accelerates the spool, leading to high bandwidths.
Since the valve is dependent on two orifices and small nozzles, it is very sensitive to contamination. In addition, there is a constant leakage flow through the pilot stage whenever pressure is applied to the valve. Most servovalves include internal filters to further protect the valve. Servovalves are a good example of using simple mechanical feedback devices (the feedback wire) to significantly improve a component's performance. Because it is a feedback control system, the same issues of stability must be addressed during the design process.

Figure 28 Two-stage flapper nozzle servovalve.



The design of the servovalve results in two advantages over typical proportional valves regarding use in closed loop feedback control systems. First, the hydraulic amplification that takes place in the servovalve enables it to have greater bandwidths than proportional valves. Second, they are usually held to tighter tolerances and designed to be critically centered with zero overlap of the spool. As the next section illustrates, this leads to significant performance advantages and minimizes the nonlinearities.

12.4 DIRECTIONAL CONTROL VALVE MODELS


This section seeks to develop basic valve models that enable the designer to accurately design a valve controlled hydraulic system and predict performance. Many aspects of valve design (spool and bore finishes, materials, groove selection, etc.) are not covered and require much more depth to accurately model. The goal of this section is to develop basic steady-state and dynamic models that correlate component geometry to hydraulic performance. With the resulting equations, a fairly good estimate of valve behavior can be obtained while minimizing tedious modeling efforts. Advanced techniques utilizing computational fluid dynamics, finite element analysis, and computer simulations are being used to further develop detailed models.

12.4.1 Steady-State PQ in Directional Control Valves


Steady-state flow equations and generalized performance characteristics of directional control valves are presented in this section using a four-way, infinitely variable, two-land critically lapped valve. The basic model consists of several variable orifices at each flow path, as shown in Figure 29. We can remove the mechanical structure of the valve and draw the orifices using electrical analogies, as shown in the circuit given in Figure 30. If we rotate and connect the identical tank (T) ports together, the circuit becomes the basic bridge circuit. It is common to many problems in engineering, and a variety of solution techniques have been developed. Further refinements, taking into account the hydraulic relationships, allow the valve model to be easily obtained.
Regardless of the spool valve center configuration, once the spool is shifted only two lands become active and the models become very similar. In this region of operation the orifice PQ equations provide good models of these characteristics in spool type valves. However, as will be noted, the performance around null for the different center configurations varies greatly and has a large effect on overall system

Figure 29 Flow paths and coefficients in a four-way directional control valve.



Figure 30 Circuit model for a four-way directional control valve.

performance. Therefore, it is important to remember that the equations developed in this section are valid only in the active region of the valve and not near null spool positions.
The bridge circuit of Figure 30 can be modified to reflect the physical properties of the valve. Arrows are drawn through each orifice since they are variable, depending on the spool area and position. The orifices are also rigidly connected and move together, illustrated by the lines connecting each orifice. Under steady-state conditions the compressibility flows are zero and the law of continuity must be satisfied. The updated bridge circuit is given in Figure 31. The bridge circuit can now be solved using Kirchhoff's laws on the loops and nodes. Kirchhoff's current and voltage laws are commonly used to solve similar electrical circuits. In summary, the current law states that the sum of all currents, or flows, must equal zero at each node. The voltage law states that the sum of all voltage drops, or pressure drops, around a closed loop must equal zero.
These equations, along with the physical parameters of the valve, allow enough equations to be formed to simultaneously solve them for each unknown. The equations resulting from the applications of Kirchhoff's laws and the orifice equation are used to write a general equation describing valve behavior (steady state). To begin with, let us sum the flows at each node and apply the law of continuity:

Node S: Q_S = Q_PA + Q_PB
Node A: Q_PA = Q_L + Q_AT
Node B: Q_PB + Q_L = Q_BT

Figure 31 Bridge circuit model for a four-way directional control valve (including the load and coefficients).

Node T: QT QAT QBT


Continuity: QS QT
Now, sum the pressure drops around the three loops. The outer loop is formed by
treating the dierence between the supply and tank pressures as the connection in the
same way that a voltage supply would be inserted into the circuit.
Outer loop (w/supply): PS - PT = ΔPPA + ΔPAT
Upper inner loop: ΔPPA = ΔPPB - PL
Lower inner loop: ΔPAT = PL + ΔPBT

The flows can be related to the pressures using the orifice equation for each orifice in
the valve:

QBT = KBT·√ΔPBT    QAT = KAT·√ΔPAT
QPB = KPB·√ΔPPB    QPA = KPA·√ΔPPA
The result yields a steady-state flow equation as a function of valve size and load
pressure. This is called the PQ characteristic of the valve. Fortunately, there are
several simplifications that can be taken advantage of. If the valve is critically or
overlapped, then once the spool is shifted only two lands are active, and we can assume
the leakage flow through the other ports to be negligible. In this case the following is
true:

QPA = QL = QBT or QPB = QL = QAT

In addition, if the valve is assumed to be symmetrical, then

KPA = KPB and KAT = KBT

Finally, if the valve has matched orifices (equal metering), then

KPA = KPB = KAT = KBT
Now we can write the common equations that result. If we assume that we have a
symmetrical valve and that the tank pressure PT = 0, we can write

QV = KV·√ΔPV where ΔPV = (PS - PA) + (PB - PT)
PA = (PS + PL)/2 and PB = (PS - PL)/2
PL = PA - PB and PS = PA + PB
To complete the analysis, we will use the basic orifice equation with the results
from above and define a percent-open relationship such that

Q = C·AFO·(A/AFO)·√ΔP

where AFO is the area of the flow path with the spool fully open. Then the total valve
coefficient KV can be defined as KV = C·AFO, and furthermore

-1 ≤ A/AFO ≤ 1

The area ratio is dimensionless and represents the amount of valve opening. The
equation can be further simplified by defining the percentage of spool movement, x, as

x = A/AFO,  -1 ≤ x ≤ 1

x can be treated as a dimensionless control variable representing the full range of
spool movement. When x = 1, the valve is fully open and we can find the general
valve coefficient. For open and closed center valves, this equation is true only while
the valve is in the active operating region.

Q = KV·x·√ΔP; when fully open and x = 1, then KV = QFO/√ΔPFO
Since valves are generally rated at fully open conditions for flow at a rated pressure
drop, the final substitution using the rated flow and pressure, Qr and ΔPr, leads to

KV = Qr/√ΔPr
In the above equations KV is the valve coefficient for the entire valve, since the model
accounted for the total valve pressure drop. Two lands are active when the valve is
shifted, and the total pressure drop must be split between the two. Since the same
flow rate is seen by each active land, a symmetrical valve would have equal pressure
drops. The following analysis allows both coefficients to be determined. The reduced
problem consists of two valve orifices and the load orifice, as shown in Figure 32.
Using the Kirchhoff voltage law analogy (pressure drops) and the flow relation-
ships for each pressure drop allows us to relate the rated flow to each pressure drop:

ΔPr = ΔPPA + ΔPBT = Qr²/KPA² + Qr²/KBT²

Now we can factor out Qr² and divide both sides by it, resulting in

ΔPr/Qr² = 1/KPA² + 1/KBT²
This is simply the parallel resistance law (with K² playing the role of resistance). Now,
take the reciprocal of both sides:
Figure 32 Bridge circuit for determining the valve coefficients.



Qr²/ΔPr = KPA²·KBT²/(KPA² + KBT²)

When we take the square root of both sides, we obtain a familiar term on the left-
hand side of the equation:

Qr/√ΔPr = KPA·KBT/√(KPA² + KBT²)
Comparing this equation with our initial definition for the valve coefficient allows us
to relate the total valve coefficient to the individual orifices as

KV = Qr/√ΔPr = KPA·KBT/√(KPA² + KBT²)
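As a quick numeric check of this result (the coefficient values below are hypothetical, chosen only for illustration), the two active lands in series can be verified against the parallel-resistance form:

```python
# Numeric check with hypothetical land coefficients: two orifices in
# series combine like parallel resistors when K^2 plays the role of
# resistance, giving KV = KPA*KBT / sqrt(KPA**2 + KBT**2).
import math

KPA, KBT = 3.0e-7, 4.0e-7   # assumed individual land coefficients (SI units)
KV = KPA * KBT / math.sqrt(KPA**2 + KBT**2)

# Series pressure drops: dP_total = Q**2/KPA**2 + Q**2/KBT**2 = Q**2/KV**2
Q = 1.0e-3
dP_total = Q**2 / KPA**2 + Q**2 / KBT**2
print(math.isclose(KV, Q / math.sqrt(dP_total)))  # → True
```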
The value for KV just developed is for one direction of spool movement. Assuming
the valve is symmetrical and defining a valve coefficient for each direction of flow
through the valve allows the final parameter to be defined.
Thus, the final general representation of the PQ characteristics of a directional
control valve, where KV is the valve coefficient for both active lands, is given as

Q = KV·x·√(PS - PL - PT)

and if PT = 0, then

Q = KV·x·√(PS - PL)
These definitions will be helpful when simulating the response of valve controlled
hydraulic systems, and many of the intermediate equations are used when calculating
valve coefficients, individual orifice pressure drops, and so on. Although a relatively
simple equation can be used to describe the steady-state behavior of the valve, we
need linear equations describing the PQ relationships if we wish to use our standard
dynamic analysis methods from earlier chapters. The linear coefficients may be
obtained by differentiation of the PQ equations (see Example 2.5) or graphically
from plots developed experimentally, as the next several sections demonstrate.
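A brief sketch of how these definitions are used in practice (the rating and supply pressure below are assumed example values, not data from the text): size KV from the rated conditions, then evaluate the PQ equation.

```python
# Sketch (assumed rated data): compute the total valve coefficient from
# a catalog-style rating, then evaluate Q = KV * x * sqrt(PS - PL),
# taking PT = 0 as in the text.
import math

Qr, dPr = 40.0 / 60000.0, 35.0e5   # assumed rating: 40 L/min at a 35 bar drop (SI units)
KV = Qr / math.sqrt(dPr)           # total valve coefficient

PS = 100.0e5                       # assumed supply pressure, 100 bar

def load_flow(x, PL):
    """Steady-state load flow for spool fraction x and load pressure PL."""
    return KV * x * math.sqrt(PS - PL)

print(load_flow(1.0, 0.0) > load_flow(1.0, 50.0e5))  # flow falls as PL rises
print(load_flow(1.0, PS))                            # zero flow when PL = PS
```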
12.4.1.1 PQ Metering Characteristics for Directional Control Valves
The PQ curve is produced when x is held constant at several values between -1 and
1. It can be plotted using the valve equation once the valve coefficient is known:

QL = KV·x·√(PS - PL - PT)

From the equation it is easy to see that the flow goes to zero as the load pressure
approaches the supply pressure. At this condition there is no longer a pressure drop
across the valve, and thus no flow through the valve. Since PL is varied and appears
under the square root, we get a nonlinear PQ curve. Plotting the equation as a function
of load pressure, PL, and load flow, QL, produces the PQ characteristic curves of the
valve given in Figure 33. Each line represents a different spool position, x.
Several interesting points can be made from the PQ metering curve. First, once
the required load pressure, PL, approaches the system pressure, the load flow, and
thus the actuator movement, becomes zero. Second, even when there is no load
pressure requirement (i.e., retracting a hydraulic cylinder), there is a finite velocity,
called the slew velocity, because of the pressure drops across the valve. Finally, the

Figure 33 PQ metering curve for a directional control valve.

load flows continue to increase when a positive x and a negative load are encountered.
This is called an overrunning load and will be discussed further in later sections.
Overrunning loads are prone to cavitation and excessive speeds.
These data can also be obtained from laboratory tests using a circuit schematic
such as the one given in Figure 34. Obtaining experimental data verifies the analy-
tical models and is often the easiest way to get model information for the design of
feedback control systems.

12.4.1.2 Flow Metering Characteristics for Directional Control Valves


Complementing the valve PQ characteristics are the flow metering characteristics.
While the PQ curves allow the valve coefficients to be determined, they do not
display the linearity of valve metering. The three curves in Figure 35 illustrate the
typical steady-state flow metering plots for underlapped, critically lapped, and over-
lapped spool-type valves, illustrating the linearity (or lack thereof) of each type.

Figure 34 Example circuit schematic to measure a PQ metering curve for a directional
control valve.

Figure 35 Flow metering curves for different center configurations of directional control
valves.

Experimentally, the same test circuit given in Figure 34 is used for measuring
the flow metering characteristics. The variable load valve (orifice) allows different
valve pressure drops, developing a family of flow metering curves for the valve. The
flow metering curve gives additional information and allows us to determine the

• Pressure drops across the lands;
• Valve coefficients;
• Flow gain;
• Deadband;
• Linearity;
• Hysteresis.

The equation used to generate the flow metering data is identical to the one used for
the PQ curve, but in developing the flow metering plots the pressure drop across the
valve is held constant and the valve position is varied. Additional information about
the valve linearity is available since the nonlinear term in the equation is held con-
stant. This implies that in the active region of a valve we should see linear relation-
ships.
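This linearity is easy to see numerically (the coefficient and pressure drop below are assumed values): with ΔP held constant, the metering equation is linear in x with a constant slope, the flow gain.

```python
# Sketch (assumed values) of the flow metering relationship: with the
# valve pressure drop held constant, Q = KV * x * sqrt(dP) is linear in
# the spool position x, with slope (flow gain) KV * sqrt(dP).
import math

KV, dP = 2.4e-7, 1.0e7      # assumed valve coefficient and constant drop (SI units)
flow_gain = KV * math.sqrt(dP)

xs = [0.25, 0.5, 1.0]
flows = [KV * x * math.sqrt(dP) for x in xs]
# Every point lies on the same line through the origin
print(all(math.isclose(q / x, flow_gain) for q, x in zip(flows, xs)))  # → True
```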
The flow metering plot also highlights the differences between different valve
types (servovalve, proportional valve, open center) and alerts the designer to the
quality of the valve. The PQ curves will look similar for similarly sized valves regard-
less of the valve overlap, since the curves are generated with the valve in the active
region and only two orifices are active for each type of valve. The valve coefficients
are available from both plots. An example flow metering plot from a typical propor-
tional valve is given in Figure 36, showing the various operating regions. As expected,
the plots are fairly linear in the active region. The designer of a control system that uses
proportional valves should choose a valve size that enables all the desired outputs to
be achieved while the valve operates in the active region.

12.4.1.3 Pressure Metering Characteristics for Directional Control Valves


The pressure metering characteristics of a valve take place within the null zone of the
valve. Pressure metering characteristics are important in determining the valve's
ability to maintain position under changing loads. If a critically centered valve
were ideal, that is, had no internal leakage, then the pressure metering curve would be
a straight vertical line. But practical valves have radial clearances that allow leakage
flow to move from the supply to the work ports, from the work ports to tank, and

Figure 36 Flow metering curve for a typical closed center directional control valve.

even from the supply directly to tank. The pressure metering curves for actual direc-
tional control valves therefore have a slope to them, as depicted in Figure 37.
The pressure metering curve fills in the missing gaps and provides information
on valve behavior in the deadband region where the load flow is zero. The deadband
region may be less than 3% for typical servovalves and up to 35% for some propor-
tional valves. This characteristic is critical in control system behavior for such
actions as positioning a load with a cylinder and maintaining position with changing
loads. This is because while we are controlling the position, for example, on a
cylinder, holding a constant position should result in no load flow through the
valve. Thus, it is the pressure metering curve along which the valve is operating
while maintaining a constant position. If a valve has a large deadband, it must travel
across it before a complete pressure reversal can take place and hold the load. The
slope of the curve in Figure 37 is generally designated the null pressure gain or
pressure sensitivity of the valve. As the valve overlap is increased, the pressure
gain is decreased. For this reason servovalves, due to minimal spool overlap, are
capable of much better results in position control systems.

Figure 37 Pressure metering curve for a typical closed center directional control valve (no
load flow through valve).

12.4.2 Deadband Characteristics in Proportional Valves


As we found in the previous section, proportional valves may exhibit significant
deadband characteristics. Even though the lines are blurring between servovalves
and proportional valves, we must carefully compare and choose the valve when
designing our system. If we wish to see how deadband affects our system, we can
simulate it by inserting a deadband block into our block diagram using a
program like Simulink, the graphical interface of Matlab. What we normally end up
with are two options: use a deadband eliminator, discussed below, or design the
system so that it does not normally operate within the deadband. For example, if we
wish to do position control of a hydraulic cylinder, then by definition a constant
command value will require a zero valve flow for a constant position. Any flow
through the valve will cause the piston to move, so we are requiring the valve to
constantly operate in the deadband region. For a slight correction to be made, the
valve must travel through the deadband before a change is seen in the output. In
contrast, if we wish to do velocity control, this requires a constant flow
through the valve for a constant velocity. This is a great setup for a proportional
valve with deadband since it is always operating in the active linear range. A
servovalve still has an additional advantage in having a larger linear range, but at
additional cost. By simulating our system, we can easily check the effects of varying
deadband.
If it is necessary to use a valve with significant deadband in a control applica-
tion requiring constant operation in and near the deadband, an alternative to
increase performance is to use a deadband eliminator. With regard to position control
again, we wish to have zero flow through the valve to maintain a constant position.
The goal of a deadband eliminator is to never let the valve spool sit inside the
deadband. Hence it continually jumps from one side of the deadband to the other,
always ready to produce flow with a new command. This can be accomplished by
having a much higher gain within the deadband region so that any change in the
command immediately moves the spool to the beginning of the active range. This
effect can be seen by monitoring the spool position with an oscilloscope and turning
the deadband eliminator on and off.
The limitation is physics. Even though we try to make the spool move quickly
from one side of the deadband to the other, the fact remains that the spool has a
finite mass and limited actuation force available. It will always take time to move
through the deadband, and the valve will never fully approximate the behavior of a
quality critically lapped valve. That being said, there are significant cost savings to
using proportional valves, and deadband eliminators do increase the performance.
Analog deadband eliminators can also be duplicated in digital algorithms.
The concept is the same: do not let the spool remain in the deadband. When
implemented digitally this may take several forms. The first possibility is to map
the valve command signal into the active range of the valve. This is quite simple and
can be accomplished by adding the appropriate width of the deadband to the output
signal, depending on the sign of the error. Thus if our valve signal is ±10 V, of which
±3 V is the deadband, an error of 0.1 might correspond to 3.1 V being the
command to the valve. The upper and lower limits for the controller become
±7 V. No matter what the error is, the command to the valve is never in the dead-
band of the valve.
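A minimal digital version of this mapping, using the ±10 V signal range and ±3 V deadband from the example above (the function itself is illustrative, not a standard API), might look like:

```python
# Sketch of the digital deadband-elimination mapping described above.
# The 10 V range and 3 V deadband are the example values from the text.
V_MAX = 10.0      # valve signal range, volts
DEADBAND = 3.0    # deadband width on each side, volts

def map_past_deadband(u):
    """Shift a controller output (clamped to +/-7 V) past the valve
    deadband by adding the deadband width with the sign of u."""
    limit = V_MAX - DEADBAND
    u = max(min(u, limit), -limit)   # controller limits become +/-7 V
    if u > 0.0:
        return u + DEADBAND
    if u < 0.0:
        return u - DEADBAND
    return 0.0                       # exactly zero command: no flow requested

# A small error-derived command of 0.1 V maps to 3.1 V, just into the
# active range; the command is never inside the deadband.
```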

Another method is to increase the proportional gain whenever the
controller output would normally correspond to a position inside the deadband
of the valve. While this provides a proportional signal passing through the deadband
(smoother than the previous method), it is slightly more difficult to
implement. A problem with both methods is the introduction of nonlinearities/dis-
continuities and a higher likelihood of making the spool oscillate. As in the contin-
uous method, the laws of physics still apply; even though the valve is never
commanded to be in the deadband, it still takes finite (i.e., measurable) time to pass
through the deadband.

12.4.3 Dynamic Directional Control Valve Models


The approach taken when developing dynamic models for directional control valves falls
between two end points: a purely analytical (white sheet) approach and a purely
experimental (black box) approach. The appropriateness of each method depends on
the goal for the model that is developed. If the goal is the design of the valve itself,
treated as its own system, the analytical approach has many advantages, since we
have access to the coefficients in the model and can easily perform iterations to
optimize the design. The tools to develop such a model have been presented in earlier
chapters and include Newtonian physics, energy methods, and bond graphs. The law
of continuity and similar laws are also required. The problem with developing ana-
lytical models for systems as complex as valves is that either the math becomes very
tedious (including nonlinearities, temperature effects, etc.) or we make so many
assumptions that our model is only useful around a single operating point. There
are increasing numbers of object-level computer modeling packages that help to
develop such models.
The other end point is to strictly measure input and output characteristics and
to find a model that fits the data. Three methods have been presented in earlier
chapters that allow us to do this: step response plots (Sec. 3.3.2), frequency response
plots (Sec. 3.5.3), and system identification routines (Sec. 11.6). Step response plots
are primarily limited to first- and second-order systems, while Bode plots can often
be used to determine models for higher order systems. The system identification
routines can be implemented for higher order models, and different models can be
tried until an acceptable fit is found.
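As a sketch of the black box idea, a first-order model y = K(1 - e^(-t/τ)) can be fitted to step-response samples by a log-linear least squares. The data below are synthetic, standing in for measurements; a real valve would more likely need a second-order fit.

```python
# Black-box fit sketch: estimate gain K and time constant tau of a
# first-order step response from (synthetic) measured data.
import numpy as np

def fit_first_order(t, y):
    """Fit y = K*(1 - exp(-t/tau)) via a log-linear least-squares fit."""
    K = y[-1]                        # steady-state value approximates the gain
    mask = y < 0.95 * K              # avoid the log blowing up near steady state
    z = np.log(1.0 - y[mask] / K)    # ideally z = -t/tau, a straight line
    tau = -1.0 / np.polyfit(t[mask], z, 1)[0]
    return K, tau

t = np.linspace(0.0, 0.5, 100)
y = 2.0 * (1.0 - np.exp(-t / 0.05))  # "measured" response: K = 2, tau = 0.05 s
K, tau = fit_first_order(t, y)
print(round(K, 3), round(tau, 4))
```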
The primary disadvantage of black box approaches is the inability to extend
the model into other ranges by varying parameters within the model. For example, if
the spool mass is decreased, another plot would be needed, whereas in the analytical
model the change can be made and the simulation completed.
For many control systems the valve models can be obtained experimentally (or
from manufacturers' data), since the valve is just one component of many. Our goal is
generally not to design the valve but to verify how it performs (or would perform) in
our control system. Also, aspects of both approaches may be combined, and in
general this is recommended. The goal is to develop the basic analytical model based
on known physics and then use experimental data to fit the coefficients. This gives us
confidence in the model and still allows us to extend the model into additional areas
of operation. In conclusion, given the importance that the valve has in determining
the characteristics of our system, we should attempt to have accurate and realistic
models when developing the controller.

12.5 STEADY-STATE PERFORMANCE OF DIRECTIONAL CONTROL VALVE AND CYLINDER SYSTEM
Once the individual models are developed, the task becomes combining them to
achieve a specific performance goal. In steady-state operation this is fairly straight-
forward, and stability issues are not addressed. The analysis will use a four-way
directional control valve controlling a single-ended cylinder (unequal areas). This
is a very general approach and is easily transferred to different system configurations.
The pump will not specifically be addressed here, since in a steady-state analysis it is
only required to be capable of providing the necessary flow and pressure. Thus the
pump sizing can be accomplished after analyzing the valve-cylinder interaction,
when the desired piston speed, force capabilities, and valve coefficients are known.
In general, the valve-cylinder model is simplified since a shift in the spool
position away from null effectively closes two of the four lands (orifices). This leaves
a simpler model where the flow is affected by the spool position and cylinder load. At
this operating condition the valve-cylinder model reduces to that shown in Figure 38.
The basic cylinder model and notation were introduced earlier in Section 6.5.1.1. The
load is always assumed to be positive when it resists motion (else it is termed over-
running). Using the cylinder force, flow equations, and valve orifice equations allows
the model to be developed. Recognize that this model assumes one direction of
motion and the valve being shifted in a single direction. The identical method is used
to develop the equation describing the retraction of the cylinder. The basic cylinder
force equation can be given as follows:

PA·ABE - PB·ARE - FL = m·(dv/dt) = m·(d²x/dt²)

If we assume that the acceleration phase is very short relative to the total stroke
length (which it usually is, even though during the acceleration phase the inertial
forces may be very large), the acceleration can be set to zero, and in steady-state form
the equation becomes

PA·ABE - PB·ARE - FL = 0

Figure 38 Simplified valve-cylinder system.



Now we can describe the pressure drops and flows in terms of the force and velocity.
First, define the pressures in the system:

PA = PS - ΔPPA and PB = ΔPBT (assuming PT = 0)

Now the pressure drops across the valve can be described using their orifice equa-
tions:

ΔPPA = QPA²/(x²·KPA²) and ΔPBT = QBT²/(x²·KBT²)

Remember that x represents the percentage that the valve is open (only in the active
region), as defined in the directional control valve models (Sec. 12.4.1).
The cylinder flow rates, assuming no leakage within the cylinder, are defined as

QS = v·ABE + CBE·(dPBE/dt)
QT = v·ARE + CRE·(dPRE/dt)
where C is the capacitance in the system. If compressibility is ignored or only steady-
state characteristics are examined, the capacitance terms are zero and the flow rate is
simply the area times the velocity for each side of the cylinder. It is important to note
that the flows are not equal with single-ended cylinders, as shown above. For many
cases where the compressibility flows are negligible, the flows are simply related
through the ratio of their respective cross-sectional areas. In pneumatic
systems the compressibility cannot be ignored and constitutes a significant portion
of the total flow. If compressibility is ignored, the ratio is easily found by setting the
two velocity terms equal, as they share the same piston and rod movement:

QBE = (ABE/ARE)·QRE

If we combine the flow and pressure drop equations with the initial force balance, we
can form the general valve-cylinder equation as

PS·ABE - v²·(ABE³/(x²·KPA²)) - v²·(ARE³/(x²·KBT²)) - FL = 0
We can simplify the equation by combining the velocity terms. This results in the final
equation describing the steady-state extension characteristics of a valve controlled
cylinder:

PS·ABE - (v²/x²)·(ABE³/KPA² + ARE³/KBT²) - FL = 0

Remember that this is for extension only; when the cylinder direction is reversed,
the pump flow is into the rod end. The same procedure can be followed to derive
the valve controlled cylinder equation for retraction as

PS·ARE - (v²/x²)·(ABE³/KAT² + ARE³/KPB²) - FL = 0

Many useful quantities can be determined from the final equations by rearranging
and imposing certain conditions upon them. The stall force is the maximum force
available to move the piston and occurs when the cylinder velocity is zero. Since
cylinder movement begins from rest, the maximum force is available to overcome
static friction effects. The stall forces can be calculated as

Extension: PS·ABE = FL,ext (v = 0)

Retraction: PS·ARE = FL,ret (v = 0)

Since the supply pressure acts on a larger (bore end) area during extension, the
maximum possible force is larger than for retraction, where the rod area subtracts
from the available force.
A second operating point of interest is the slew speed, occurring where the load
force is equal to zero. This is the maximum available velocity, unless the cylinder
load is overrunning. Setting the load force equal to zero and x equal to one (100%
open for maximum velocity) produces the following equations:

Extension: vslew = √[PS·ABE / (ABE³/KPA² + ARE³/KBT²)]

Retraction: vslew = √[PS·ARE / (ABE³/KAT² + ARE³/KPB²)]
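These end points are easy to evaluate numerically. The parameter values below are assumed examples, chosen only to illustrate the extension equation:

```python
# Sketch (assumed parameter values) evaluating the steady-state
# extension equation: stall force, slew speed, and the F-v relation.
import math

PS = 20e6                           # assumed supply pressure, Pa
A_BE, A_RE = 2e-3, 1e-3             # assumed bore- and rod-end areas, m^2
K_PA = K_BT = 2e-7                  # assumed valve coefficients (SI units)

denom = A_BE**3 / K_PA**2 + A_RE**3 / K_BT**2

def extension_force(v, x=1.0):
    """Load force from the extension equation:
    FL = PS*A_BE - (v**2 / x**2) * denom."""
    return PS * A_BE - (v**2 / x**2) * denom

stall_force = extension_force(0.0)          # FL at v = 0 is PS*A_BE
slew_speed = math.sqrt(PS * A_BE / denom)   # v at FL = 0, x = 1
print(stall_force, slew_speed)
```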
It is useful to develop a curve from the resulting equations, as they represent the end
points of normal operation. For cylinder extension, we get the plot in Figure 39. This
curve may be produced by two primary methods. If all valve and cylinder para-
meters, and the supply pressure, are known, then the curve can be generated easily
with a computer program such as a spreadsheet. If the system is in the process of
being designed, then it is useful to know that the shape of the curve is a parabola in
the first quadrant. The final curve then, including the effects of overrunning loads

Figure 39 Valve controlled cylinder performance curve (extension).



and negative load forces and cylinder velocities (extension and retraction directions),
is given in Figure 40.
The outermost possible line, when the valve is fully open, represents the oper-
ating envelope of system performance. Where the lines fall between the axes and the
outermost limit is determined by the spool position, x, in the valve. Each constant
value of x will produce a different line. The outermost envelope can only be increased
by changing the valve coefficients, changing the cylinder areas, or increasing the
supply pressure.
The goal in using these equations during the design process is to properly
choose and size the hydraulic components that will enable us to meet our perfor-
mance goals. Remember that if a system is physically incapable of a response, it does
not matter what controller we use; we will still not achieve our goals.

12.5.1 Design Methods for Valve Controlled Cylinders


In beginning a new design, first list all design parameters already defined. These might
include supply pressure (existing system), cylinder or valve parameters, necessary
performance points (force and velocity operating points), and so forth. Pick the
remaining components using an initial best guess and check to see if the requirements
are met. Iterate as necessary. For specific scenarios, several methods can be
outlined.
When either one or two operating points are specified on the operating curve, it
is necessary to write the valve-cylinder equation for the desired points, determine the
necessary remaining ratios and parameters, and pick or design a valve or cylinder
providing those features. Once the supply pressure, cylinder area, and valve coeffi-
cients are known, the system may be analyzed at any arbitrary point within the
operating envelope.
Another interesting design is for maximum power. Since the cylinder output
power equals the velocity times the force, the valve-cylinder equation may be solved for
the force, multiplied by the speed, and differentiated to locate the maximum power
Figure 40 Operating curves for valve controlled cylinders.



operating point. In doing this, it can be shown that the maximum power point occurs at a
load force equal to two-thirds the stall force. The power curve is added to the valve
controlled cylinder curve in Figure 41. Thus, to design for maximum power, choose a
stall design force of 1.5 times the desired operating load force. When a similar
analysis is completed to determine the minimum supply pressure necessary to
meet a specific operating point, once again the same criterion is found:
designing for maximum power results in requiring the minimum supply pressure.
Remember that the above equations assume a pump capable of supplying
the required pressure and flow rate.
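The two-thirds result can be checked numerically by scanning the output power along the fully open extension curve (the parameter values below are assumed examples, not data from the text):

```python
# Numerical check that maximum output power occurs at FL = 2/3 of the
# stall force, using assumed example parameters for the extension curve.
import math

PS, A_BE, A_RE = 20e6, 2e-3, 1e-3   # assumed supply pressure and areas
K_PA = K_BT = 2e-7                  # assumed valve coefficients
denom = A_BE**3 / K_PA**2 + A_RE**3 / K_BT**2
stall = PS * A_BE

def power(FL):
    """Output power F*v along the fully open (x = 1) extension curve."""
    return FL * math.sqrt(max(stall - FL, 0.0) / denom)

# Scan the load force over the operating range and find the peak
FLs = [stall * i / 10000 for i in range(10001)]
FL_peak = max(FLs, key=power)
print(FL_peak / stall)  # close to 2/3
```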

12.5.2 Modeling a Valve Controlled Position Control System


In the previous section we developed the basic steady-state models for hydraulic
valves and cylinders. The models, although useful for sizing and choosing the proper
components, are unwieldy for use in the design of our control system. It is more
common to develop a block diagram for the design of our control system. This
section develops the linearized models for the block diagram of our valve controlled
system.
Referring to Example 2.5, where the directional control valve model was
linearized, we obtained the linear equation around the operating point as

Q ≈ (∂Q/∂x)·x + (∂Q/∂PL)·PL = Kx·x - Kp·PL

Kx is the slope of the active region in the valve flow metering curve (see Figure 36) at
the operating point, and Kp is the slope of the valve PQ curve (see Figure 33) at the
operating point. The piston flow is related to the cylinder velocity as

Q = v·A = A·(dx/dt)

Figure 41 Maximum power in a valve controlled cylinder system.



Notice that this assumes a double-ended piston with equal areas, A. Subsequent
analyses will remove this (and the linearity) assumption. For now, we let PL be the
pressure across the piston and include damping b to sum the forces on the cylinder,
where

ΣF = m·(d²x/dt²) = PL·A - b·(dx/dt)

The damping, b, arises from the friction associated with the load and the cylinder seal
friction. To form the block diagram, we first solve the valve flow equation for PL:

PL = (Kx/Kp)·x - (1/Kp)·Q

These three equations are used to form the system portion of our block dia-
gram. PL is the output of the summing junction, x and Q are the inputs to the
summing junction, and the force equation provides a transfer function relating PL
to x. The basic block diagram pictorially representing these equations is given in
Figure 42. Although we have linearized the model, we can still add nonlinear blocks
using simulation packages like Matlab/Simulink. If we were to incorporate the valve
and cylinder model into a closed loop control system, there are several useful blocks
that we can add. First, we need to generate a command signal for the system. In
Simulink there are several blocks for this purpose. The common ones include the Signal
Generator, Constant, Pulse Train, and Repeating Sequence. It is easy to feed back
the cylinder position and add another summing junction, the output of which is our
error and the input into our controller. As with the signal blocks, there are numerous
controller choices contained in Simulink. The standard PID (and a version with an
approximate derivative) is an option, along with function blocks, fuzzy logic and
neural net blocks, and a host of others. By adding a zero-order hold (ZOH) block we
can also simulate a digital algorithm. The output of the controller would go to a
valve amplifier card, providing the position input for the valve. Since the valve has
associated dynamics, we can add a transfer function relating the desired valve com-
mand to the actual valve position. As mentioned previously, these transfer functions
can be obtained from analytical or experimental (step response, Bode plots, or
system identification) methods. Finally, it is important to model the deadband and
saturation limits for the valve, both readily accessible from Figure 36.
When these are added to the model, as shown in the Simulink model given in
Figure 43, we can quickly compare different component characteristics and different
controllers. In general, we can get the required parameters from manufacturers' data
and estimate the performance that different valves and cylinders might have in our

Figure 42 Block diagram model: cylinder position control with valve.



Figure 43 Simulink model: valve control of cylinder with deadband and saturation effects.

system. The valve dynamics, in this case represented by a second-order transfer
function, may take on different forms depending on the modeling method used. If
there are dominant complex conjugate roots, the second-order transfer function
works well. If we are unable to obtain the data, we can secure a valve and with
simple experiments generate the performance curves for the valve. When the final
design is chosen, the system can be tested with a variety of controllers and prelimin-
ary tuning values obtained. Step inputs, ramp inputs, and sinusoidal inputs may all
be checked to verify the chosen controller and gain set points.
To verify the simulation, it is wise to build and test the circuit, if possible. If the
circuit is not identical, at least the model can be changed to reflect the components
used in the test. This verifies the model structure and lends confidence to using the
model in other designs.

12.5.3 Cylinder Position Control using a Digital Controller


In this section we will take our analog PID position control system utilizing a
directional control valve and hydraulic cylinder, given earlier in Figure 43, and
modify it to simulate the addition of a digital PI controller. The digital controller
will be interfaced to the continuous system model using the ZOH model of a D/A
converter. Simulink includes a ZOH block. Once the model is constructed, we can
easily perform simulations without having to build the actual system. It is still based
on linearized equations and subject to the accompanying limitations.
The PI incremental difference equation, developed in Section 9.3.1, is

u(k) = u(k-1) + Kp·[e(k) - e(k-1)] + Ki·(T/2)·[e(k) + e(k-1)]

We can collect common terms, convert to the z-domain, and also represent the PI
digital algorithm as the sum of two transfer functions:

U(z)/E(z) = Kp + Ki·(T/2)·(z + 1)/(z - 1)
This is easy to implement in a block diagram, as shown in Figure 44, where the
discrete equivalent and a ZOH model have replaced our original analog controller.
The feedback signal is not sampled on the diagram but is inherently sampled in
Simulink when using the discrete transfer functions. If the resolution of our A/D
or D/A converters is an issue, we can also add a quantization block in Simulink and
verify the effects on our system.

Figure 44 Simulink model: digital position control at valve and cylinder.

If desired, we could have just as easily implemented the actual difference equations
using the function block. The difference equation for a PI controller after
collecting the e(k) and e(k-1) terms is

u(k) = u(k-1) + (Kp + Ki*T/2)*e(k) + (Ki*T/2 - Kp)*e(k-1)
In the Simulink function block, this is represented by

u(1) + (Kp + Ki*T/2)*u(2) + (Ki*T/2 - Kp)*u(3)

where u(i) is the ith input to the function block and u(1) = u(k-1), u(2) = e(k), and
u(3) = e(k-1).
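For readers implementing this outside Simulink, the collected difference equation maps directly to code; here is a minimal Python sketch of the update (the function name and sample values are ours, not from the text):

```python
def pi_increment(u_prev, e, e_prev, Kp, Ki, T):
    """Incremental PI update with trapezoidal integration:
    u(k) = u(k-1) + (Kp + Ki*T/2)*e(k) + (Ki*T/2 - Kp)*e(k-1)."""
    return u_prev + (Kp + Ki * T / 2) * e + (Ki * T / 2 - Kp) * e_prev

# One step with Kp = 5, Ki = 2, T = 0.1 s, e(k) = 1, e(k-1) = 0, u(k-1) = 0
# gives u(k) = Kp + Ki*T/2 = 5.1.
print(pi_increment(0.0, 1.0, 0.0, 5.0, 2.0, 0.1))
```

Because only the previous error and previous output are stored, this form corresponds directly to the delay-block arrangement described below.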
From the block diagram in Figure 45 we see how the delay blocks are used to
hold the previous error and controller output for use in the difference equation. One
of the more interesting differences with the digital controller is the effect that sample
rate has on stability. To highlight this effect for the same gain, Figure 46 gives the
responses to a step command with sample times of 0.1, 0.8, and 1.6 sec when the
proportional gain is held constant at Kp = 5.
The same procedure can be done with proportional gain as the variable while
leaving the sample time, T, equal to 0.1 sec. Adjusting the proportional gain to
values of 5, 25, and 50 produces the response curves given in Figure 47. The
response plots make clear the difference between marginal stability caused by sample
time and marginal stability resulting from excessive proportional gain. The longer
sample time tends to push the roots out the left side of the unit circle in the z-plane,
while the increasing proportional gain pushes them out the right-hand side of the
unit circle.
These simple examples illustrate the benefits of simulating a system. Many
simulations are possible in the space of several minutes. Although the usefulness

Figure 45 Difference equation implementation in Simulink.



Figure 46 Effects of sample time on system stability.

to determine the gain required in the actual real-time controller is limited by the
accuracy of our model, and thus not extremely accurate for all operating ranges
(the model is linearized about some point), the ability to perform what-if scenarios for
all variables in the model is extremely beneficial. For example, once the model is
built, we can easily determine the feasibility of using the same valve to control the
position of a different load simply by changing the parameters in the physical system
block. Also, by simply changing the feedback from position to velocity, another
entire capability can be examined.

Figure 47 Effects of proportional gain on system stability.



Finally, these same methods can be applied to directional control valves used
to control rotary hydraulic actuators, usually in the form of a hydraulic motor. Instead
of the pressure acting on an area to produce a force, the pressure acts on the motor
displacement to produce a torque. The output of the system transfer function,
Ω(s)/P(s) = 1/(Js + b), is now angular velocity instead of linear velocity. Velocity is
also the feedback signal to be controlled in most rotary systems.
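Treating the torque-producing path as a first-order lag of the form 1/(Js + b), a quick numerical check confirms the expected steady-state speed of (input)/b; the values of J, b, and the step input below are arbitrary illustration numbers, not from the text:

```python
def speed_step(J=0.5, b=2.0, torque=10.0, dt=0.001, t_end=5.0):
    """Euler-integrate J*dw/dt + b*w = torque and return the final
    angular velocity; it should approach torque/b (here 5 rad/s)."""
    w = 0.0
    for _ in range(int(t_end / dt)):
        w += dt * (torque - b * w) / J
    return w
```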

12.6 DESIGN CONCEPTS FOR FLUID POWER SYSTEMS


A brief introduction to common fluid power circuits and strategies is presented to
help us implement a system with the correct physics that allows our controller to
function properly. This means, as the preceding sections demonstrated, choosing the
correct valves, cylinders, motors, etc., to meet the force and speed requirements of
our system. Of particular interest are the dynamics associated with each component,
since when the loop is closed we must address the issue of stability. Systems that meet
our requirements and are properly designed are often described as robust.
Robust systems exhibit good tracking performance, reject disturbance inputs,
are insensitive to modeling errors, have good stability margins, and are not sensitive
to transducer noise. Designing a control system is generally going to be a trade-off
between these items. As we increase tracking performance, we generally decrease
stability unless techniques like feedforward compensation are used. Disturbance
inputs are amplified when the physical system has a naturally high gain, which
is the case in most hydraulic systems. Modeling errors continue to plague hydraulics
due to the many complex components with nonlinearities. Even the oil viscosity is a
logarithmic function of temperature. If oil viscosity changes cause problems, a gain
scheduling scheme may be used; this is much easier in the digital domain. Some basic
considerations for designing hydraulic control systems are listed below.
• Keep all hoses as short as possible to maintain system stiffness.
• Size valves to operate in the middle of their active range.
• Correctly size hoses and fittings to reduce unnecessary losses.
• Use current signals if long wiring paths are used.
• Use caution when throttling inputs to reduce cavitation problems.

Designing (or choosing) the correct combination of components has a large effect on
the operating characteristics and potential of the system. Each method has different
advantages and disadvantages, especially in terms of efficiency, speed, force, and
power.

12.6.1 Typical Fluid Power Circuit Strategies


In virtually all fluid power circuits the fundamental goal is to deliver power to the load,
providing useful work. Since the product of pressure and flow is power, we essentially
have three strategies when controlling the delivered power:

1. Control the pressure with a constant flow;
2. Control the flow with a constant pressure;
3. Combination pressure and flow control.
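Since delivered hydraulic power is the product of pressure and flow under every one of these strategies, a small unit-conversion helper makes the magnitudes concrete (the operating point below is an invented example):

```python
def hydraulic_power_kw(pressure_bar, flow_lpm):
    """Hydraulic power P = p*Q, converted to kW from bar and L/min."""
    pressure_pa = pressure_bar * 1e5        # 1 bar = 1e5 Pa
    flow_m3s = flow_lpm / 1000.0 / 60.0     # L/min -> m^3/s
    return pressure_pa * flow_m3s / 1000.0

# 200 bar at 60 L/min delivers 20 kW.
print(hydraulic_power_kw(200.0, 60.0))
```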

The choice of method depends in large part on the goals of the system. Efficiency can
be increased, but usually at a greater upfront cost. The expected load cycles
will play a role in determining the appropriate circuit. Circuits with large idle times
should not be required to input full power at all times; instead, a
system that provides only the power required by the load would be desired. Besides
these considerations, the experience of the designer and the availability of components
place practical constraints on the choice of circuits and strategies.
Some of the simplest circuits are designed to control the pressure with a constant
flow being produced by the pump. In this case we have a fixed displacement
(FD) pump providing a constant flow and some arrangement of valves controlling
the pressure and where the flow is directed. An example constant flow circuit is
given in Figure 48. The flow from the pump is either diverted over the relief valve,
delivered to the load, or delivered back to the reservoir when the tandem center valve
is in the neutral position. This circuit has advantages over one with a closed center
valve since under idle periods (no power required at the load) the tandem center
valve allows the pump to still have a constant flow but at a significantly reduced
pressure. A closed center valve requires all the flow to be passed through the relief
valve, and the power dissipated is large. While an open center valve provides the same
power savings at idle conditions, it will not lock the load in a fixed position as the
tandem center valve does. The valve coefficients and cylinder size can be chosen to
meet the force and velocity requirements of our load using the method in Section 12.5.
One problem with using tandem center valves is that they limit the effectiveness
of using one pump to provide power for two loads, as shown in Figure 49. As long as
only one valve is primarily used at a time, the circuit works well and provides the same
power savings at idle conditions. However, since the valves are connected in series,
once one cylinder stalls, the other cylinder also stalls.
The next class of circuits involves varying the flow to control the power delivered
to the load. Once again, there are power savings available when using these
types of circuits. There are two primary methods of varying the flow based on the
system demand: accumulators and pressure-compensated pumps. This means that
the initial cost of the system will be greater, but so will be the power savings.
Accumulators come in a variety of sizes and ratings and generally are one of two
types: piston and bladder. For both types the energy storage takes place in the
compression of a gas, usually an ideal inert gas such as nitrogen. Their electrical

Figure 48 Constant flow circuit (FD pump and tandem center valve).



Figure 49 Constant flow circuit (FD pump and dual tandem center valves).

analogy is the capacitor, and they are used in similar ways to provide a source of
energy and maintain relatively constant pressure levels in the system. An example
circuit using an accumulator is given in Figure 50.
If we know the required work cycle of the actuator, as is the case in many
industrial applications, the accumulator provides a way to size the pump for the
average required power while averaging out the high and low demand periods. This
provides significant power savings. An example would be a stamping machine where
the amounts of time for extension and retraction are known along with the amount
of time it takes to load new material onto the machine. The peak power requirement
can be very large even though the average required power is much less. Notice that
a check valve and unloading valve are used

Figure 50 Variable flow circuit using accumulator, unloading valve, and closed center control valve.

to provide additional power savings during long idle periods. Once the relief pressure
is reached, the unloading valve opens and allows the pump to discharge flow at a
much lower pressure. The check valve prevents flow back into the pump. We are
also required to use a closed center valve, since an open center (or tandem) valve
in the neutral position would allow the accumulator to discharge.
A second way to achieve variable flow in our circuit is through the use of a
variable displacement (VD) pump. When the loop is closed internally in the pump,
the system pressure can be used to de-stroke the pump and reduce the flow output.
This feedback mechanism and the variable displacement pump combine to make a
pressure compensated pump. The ideal pressure compensated pump, without losses
and with a perfect compensator, will exhibit the performance shown in Figure 51,
acting as an ideal flow source until it begins to pressure compensate and as an ideal
pressure source in the active (compensating) region.
In reality there will be losses in the pump, and the curve has a slight decrease in
flow as the pressure increases. In the compensating range the operating curve
gradually increases in pressure as the flow decreases, due to the additional compression
of the compensator spring and the smaller pressure drops associated with the flow
through the pump. Since power equals pressure times flow, when we are able to
operate in the compensating region we are able to save significant power. Only the
flow that is required to maintain the desired pressure is produced by the pump. An
example circuit using a pressure compensated pump and closed center valve is shown
in Figure 52.
There are many subsets of the circuits presented thus far, and a sampling of
them is presented in the remainder of this section. The ones presented are not
comprehensive and only serve to illustrate the many ways that hydraulic control
systems can be configured and optimized for various tasks. In many cases these
circuits are now controlled electronically and play a large role in determining the
performance of our system.
The two additional classes examined here are pressure control and flow control
methods. In most cases they may be constructed using constant flow or variable flow
components. Many of the valves used in these circuits are discussed in more detail in
Section 12.3. To begin with, there are situations where we need two pressure levels in
our system and yet desire to have only one primary pump. To provide two pressures
with one pump we can use a pressure reducing valve, as shown in Figure 53.

Figure 51 Pressure compensated pump characteristics.



Figure 52 Variable flow circuit using pressure compensated pump and closed center valve.

We limit our power-saving options in this type of circuit since we cannot use
open center valves to unload the pump when not needed. Also, the method by which
the lower pressure is reached is simply another form of energy dissipation in our
system. The pressure-reducing valve converts fluid energy into heat when regulating
the pressure to a reduced level. This circuit is attractive when the actuator requiring
the reduced pressure does not draw significant power (less flow and thus less loss
across the valve) and when an additional component that requires the lower pressure
is added to an existing circuit. If designing upfront, a better goal would be to use
components that all operate with identical pressures.
A second pressure control circuit is the hi-lo circuit, shown in Figure 54. The
basic hi-lo circuit consists of two pumps designed for two different operating regions.
The first pump is a high-pressure, low-flow pump that always contributes to the
circuit. The second pump is a low-pressure, high-flow pump that is unloaded when
the system pressure exceeds a preset level. This has several beneficial characteristics.
When the load force is minimal, the required pressure is low and the flows of both
pumps add together and move the cylinder quickly. When the load increases, the
large pump is unloaded and only the smaller high-pressure, low-flow pump is used to
move the load. The large pump is protected from high pressure through the use of a

Figure 53 Two-level pressure control using pressure reducing valves.



Figure 54 Power savings using a hi-lo circuit with two pumps.

check valve. Since the full flow capability of the system is not produced at high
pressures, we experience significant power savings. Remote pilot-operated relief
valves, accumulators, and computer controllers are all additional methods of
controlling pressure levels in hydraulic systems.
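The hi-lo saving is easy to quantify for idealized pumps; the flow sizes and unloading setting below are invented to illustrate the two operating regions:

```python
def hilo_input_power_kw(p_bar, q_hi_lpm=10.0, q_lo_lpm=90.0,
                        p_unload_bar=50.0):
    """Ideal hi-lo circuit input power: both pumps deliver at system
    pressure below the unloading setting; above it only the small
    high-pressure pump stays loaded (the unloaded pump is assumed to
    idle at negligible pressure)."""
    def kw(p, q_lpm):
        return p * 1e5 * (q_lpm / 1000.0 / 60.0) / 1000.0
    if p_bar <= p_unload_bar:
        return kw(p_bar, q_hi_lpm + q_lo_lpm)
    return kw(p_bar, q_hi_lpm)

# At 200 bar only 10 L/min is pressurized (about 3.3 kW) instead of the
# full 100 L/min (about 33 kW): a tenfold reduction in input power.
```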
To wrap up this section, let us conclude by examining several flow control
circuits. Whereas pressure control effectively limits the maximum force or torque
that an actuator produces, flow control devices limit the velocity of actuators. One
simple method to control the speed is by using a flow control valve in series with our
actuator. If our flow control valve limits the inlet flow to the actuator, we call it a
meter-in circuit, as shown in Figure 55. Using the internal check valve in the flow
control valve is required if we only desire to meter the inlet flow. When the cylinder
(in this case) retracts, the flow is not metered and simply passes through the check
valve. This circuit works well when the load is resistive and counteracts the desired
motion. If this is the case, the bore end always maintains a positive pressure and
cavitation is avoided. If we begin to encounter overrunning loads, a meter-in circuit
will tend to cause cavitation in the cylinder since it adds an additional restriction to
the inlet flow. This problem is solved by using a meter-out circuit, shown in Figure 56.

Figure 55 Actuator velocity control using a meter-in circuit.



Figure 56 Actuator velocity control using a meter-out circuit.

The potential problem with meter-out is pressure intensification. When the
valve is shifted to move the load down, we have system pressure acting on the
bore end, and the flow control valve produces very high pressures on the rod end
of the cylinder to control the speed. The amount of pressure intensification is related
to the area ratio of the cylinder. As before, when the cylinder is raised the check
valve provides an alternative flow path. As long as we are aware of the potentially
higher pressure, a meter-out circuit prevents cavitation, controls speed, and provides
inherent stability: if the load velocity increases, so does the resulting pressure
drop across the flow control valve, and the load velocity stabilizes. With the meter-in
circuit and an overrunning load that cavitates, we can easily get large velocities when
they are not desired.
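The intensification follows from a quasi-static force balance on the piston, p_bore*A_bore = p_rod*A_annulus + F_load; the helper below uses hypothetical cylinder dimensions to show the effect:

```python
import math

def rod_end_pressure_bar(p_bore_bar, bore_d_m, rod_d_m, load_n=0.0):
    """Meter-out pressure intensification from a quasi-static force
    balance (friction neglected, load force opposing extension)."""
    a_bore = math.pi * bore_d_m ** 2 / 4.0
    a_annulus = a_bore - math.pi * rod_d_m ** 2 / 4.0
    return (p_bore_bar * 1e5 * a_bore - load_n) / a_annulus / 1e5

# 100 mm bore, 70 mm rod, 100 bar on the bore end, overrunning load
# (no resistive force): the rod end sees roughly 196 bar.
print(rod_end_pressure_bar(100.0, 0.100, 0.070))
```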
A similar circuit (to the meter-in and meter-out) is the bleed-off flow control
circuit shown in Figure 57. This circuit controls the speed of the actuator by
determining the amount of flow that is diverted from the circuit. Since it does not
introduce an additional pressure drop in series with the load, it is capable of higher
efficiencies than the previous two circuits. The disadvantage is that it exhibits a
limited speed range adjustment.

Figure 57 Actuator velocity control using a bleed-off circuit.



Finally, a common circuit used to increase extension velocities, by making the
flow into the cylinder greater than the pump flow, is the regenerative circuit,
shown in Figure 58. With the regenerative circuit we have added another
valve position, giving us flow regeneration. When the cylinder is extending in the
regenerative mode (lowest position on the valve), the rod end flow is directed back
into the system flow and thus adds with the pump flow to produce higher velocities.
As with all methods, there is a trade-off. Since the bore and rod end pressures are
equal, the available force is decreased. In fact, the power remains the same and we
are simply trading force capability for velocity capability. Since the valve that is
shown retains the original three positions, it would still have full force
capability in those positions.
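The force-for-velocity trade can be checked with areas: in regenerative extension the pump flow effectively works against only the rod cross-sectional area, so velocity rises and available force falls by the same ratio. A short sketch with invented numbers:

```python
import math

def extension_modes(q_m3s, p_pa, bore_d_m, rod_d_m):
    """Return (velocity m/s, force N) for normal and regenerative
    extension. In regen mode the rod-end flow recirculates to the bore
    end, so the effective area is the rod cross-section."""
    a_bore = math.pi * bore_d_m ** 2 / 4.0
    a_rod = math.pi * rod_d_m ** 2 / 4.0
    normal = (q_m3s / a_bore, p_pa * a_bore)
    regen = (q_m3s / a_rod, p_pa * a_rod)
    return normal, regen

# For any sizes, velocity*force (ideal power) is identical in both modes;
# regeneration simply trades force capability for velocity capability.
```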
Additional circuits that are used to provide a form of flow control are
deceleration, flow divider, sequencing, synchronization, and fail-safe circuits. Many
variations of pressure and flow control circuits have been developed over the years, and
this section provides an introduction to several of them as they relate to control of
variables in our circuit. In the next section we examine in more detail the power and
efficiency issues associated with several types of circuits.

12.6.2 Power Management Techniques


To increase the efficiency of our systems, we need to effectively manage the power
that is produced by the pump and distributed to the hydraulic actuators. There are
different levels to which we can manage the power in our system. Four basic
approaches that are often used are

• Fixed displacement pump/closed center valve (FD/CC);
• Fixed displacement pump/open center valve (FD/OC);
• Pressure compensated variable displacement pump/closed center valve (PC/CC);
• Pressure compensated load sensing pump/valve (PCLS).
The first system listed, the FD/CC system, relieves all pressure through the relief
valve or control valve and is a constant power system. This is a simple approach to
designing a circuit but suffers large energy losses, especially at idle conditions. The

Figure 58 Regenerative circuit example.



pump produces a constant flow, any excess of which must be dumped over the relief
valve at the system (high) pressure. The only time this circuit is efficient is during
periods of high loads, where the largest pressure drop occurs at the load (useful work)
and very little is lost across the relief valve or control valve. The worst condition, at
idle (no flow to the actuator), causes all pump flow to be passed through the relief
valve at the relief pressure. Almost all the power generated by the pump is dissipated
into the fluid and valve in the form of heat. This significantly increases the cooling
requirements for our system.
The FD/OC system, an example of which is given in Figure 48, acts similarly
with a constant flow but exhibits increased efficiency at null spool positions, where
the valve unloads the pump. Although the pump still operates at the maximum flow,
it is at reduced pressure. The amount of pressure drop across the valve in the center
position determines the efficiency of the system at idle. When operating in the active
region of the valve, the system exhibits the same efficiencies as the FD/CC system.
This type of circuit is common in many mobile applications.
Once we commit to a variable displacement pump (now a variable flow circuit
instead of constant flow as in the previous two types), our efficiencies can be
significantly improved. The variable displacement pump configuration of interest is
pressure compensation, the ideal operation being shown in Figure 51. The PC/CC
system increases system efficiency even further since the pump only provides the flow
necessary to maintain the pressure. The flow is very close to zero at null spool
positions. The compensator maintains a constant pressure in our system by varying
the flow output of the pump. Now the wasted (dissipated) energy in our system
primarily occurs across our control valve, since we still have full system pressure
at the valve inlet. The relief valve, although still included for safety, is inactive
during normal system operation. Another advantage is that multiple actuators can
be used and all will have access to the full system pressure. With the FD/OC system
described previously, if one OC valve is in the neutral position the whole system sees
the lower pressure. A negative to using pressure compensated pumps is the higher
upfront cost for the pump, although it will likely save in other areas (heat exchangers)
and certainly in the operating costs. The other disadvantage is that the pump
still produces system pressure at idle, even though it is not always required.
This leads to the final alternative, a pressure compensated load sensing circuit.
The load sensing pump/valve system exhibits the best efficiency by limiting
both the flow and pressure, providing just enough to move the load. Instead of
producing full flow as with the fixed displacement pumps or full pressure as with
the pressure compensated pump, the system regulates both pressure and flow to
always optimize the efficiency of our system. The system is virtually identical to
the PC/CC system except that the load pressure, instead of the system pressure,
is used to control the flow output of the pump. An example PCLS circuit using a
variable displacement pump is shown in Figure 59. With this system the pressure is
always maintained at a preset ΔP above the load pressure. Shuttle valves are used to
always choose the highest load pressure in the system. The force balance in the
compensation valve is such that system pressure is balanced by the highest load
pressure plus an adjustable spring used to set the differential pressure between the
load and system. If the load pressure decreases, the valve shifts to the left and the
displacement is decreased, decreasing the system pressure and maintaining the
desired pressure difference. The only negative is that the load sensing circuit must

Figure 59 Pressure compensated load sensing (PCLS) using a variable displacement pump.

use the highest required load pressure, or some actuators may not operate correctly.
If the two actuators have very different pressure requirements, the system efficiency is
still not maximized. If we do not wish to add the extra cost of a variable
displacement pump, we can achieve many of the PCLS benefits by using a load
sensing relief valve, shown in Figure 60.
The operation is much the same as with PCLS except that the system pressure
is maintained at a preset level above the load pressure by adjusting the relief valve
cracking pressure. As before, the system pressure is compared with the largest load
pressure. The difference is that we once again have a constant flow circuit and no
longer vary the flow but instead vary only the relief pressure. This means that at idle
conditions the same level of efficiency is not achieved since, although the pressure is
the same, the flow over the relief valve is greater. This circuit has efficiency
advantages over the earlier FD/OC circuit since when the control valve is in the active
region the system pressure is still only slightly above the load pressure, whereas with
the FD/OC system the pressure returns to maximum once the valve is moved from

Figure 60 Load sensing relief valve (LSRV) using a fixed displacement pump.

the null position into the active region. These strategies can be summarized as shown
in Figure 61.
Remember that the efficiencies for these systems will decrease when different
pressure requirements exist in our systems. In the load sensing systems the pressure
is maintained above the largest required load pressure. If there is one high load
pressure and three lower ones, the control valves will provide the pressure drop
required by the individual actuators.
To achieve the maximum possible system efficiency and most wisely manage
our power, we can progress to full computer control of the pump displacement, valve
settings, and actuator settings. For example, if we have both a variable displacement
pump and a variable displacement motor controlled by a computer, we can
completely eliminate the control valve and associated energy losses (pressure drops) and
use the computer to match displacements in such a way as to control the system
pressure and the power delivered to the load. The only energy losses in our system
are associated with component efficiencies, not pressure drops across control valves.
These same concepts can be extended to larger systems where multiple displacements
are controlled to always give the maximum possible system efficiency.
As our society becomes more energy conscious, we will continue to develop
new power management strategies. It is likely that the computer will be the
centerpiece, controlling the system to achieve greater levels of performance and
efficiency simultaneously.

12.7 CASE STUDY OF ON-OFF POPPET VALVES CONFIGURED AS DIRECTIONAL CONTROL VALVES
There are many potential advantages to considering the use of high-speed on-off
poppet valves as control valves in hydraulic control systems. Cost, flexibility,
and efficiency are three primary ones. There are also several disadvantages. Whereas
in manually controlled directional valves a simple mechanical feedback linkage could
be used (Example 6.1 and Problem 4.8) to create a closed loop feedback system, the
use of poppet valves requires that we use an electronic controller outside of the
normal PID type of algorithm. The purpose of the case study presented in this

Figure 61 Comparing efficiencies of different power management strategies.



section is to demonstrate how one might be used in a typical application, that of
controlling the position of a hydraulic cylinder. The system examined here is an
example of a nonlinear model and a multiple input, single output controller. In the
next section we take a look at one application of this system.

12.7.1 Basic Configuration and Advantages


The general system we want to model is shown in Figure 62. Four normally open
high-speed poppet valves are used to control the position of a hydraulic cylinder.
Ignoring the controller for the time being, the system is easy to construct and
understand. The poppet valves act in pairs and either connect one side of the cylinder
piston to tank and the other side to supply pressure, or vice versa. The valves may be
two stage to increase the response time, depending on the primary orifice size
required. Some of the advantages of this configuration are listed in Table 3.
Since on-off valves are generally positive sealing devices, there is very little
leakage when closed, even when compared with overlapped proportional valves.
All the poppet valves are closed when the cylinder is stationary, and it is positively
locked with zero leakage between the pump and tank. No hydraulic energy is
dissipated or electrical energy required to maintain this position. Normal spool type
valves always have a leakage path from supply to tank even with the spool centered.
Thus there is a constant energy loss associated with each valve spool in the circuit.
Since the spool tolerances in servovalves and proportional valves are much tighter,
there is also a greater susceptibility to contamination. This all leads to greater
hydraulic efficiency and reliability at a reduced cost.
In many systems, however, the valve energy losses are not significant, and these
advantages may seem minor. Perhaps the most compelling advantage is the resulting

Figure 62 Basic poppet valve position control system.


Sosnowski T, Lucier P, Lumkes J, Fronczak F, Beachley N. Pump/Motor Displacement Control Using
High-Speed On/Off Valves. SAE 981968, 1998.

Table 3 Potential Advantages of Using On-Off Valves for Position Control of Hydraulic
Cylinder

Characteristics of poppet valves as used to control cylinder position:

Zero leakage when no cylinder motion
Less electrical energy requirements
Zero energy requirements when stationary
Cheaper component cost
Less susceptibility to contamination
Valves easily mounted directly to actuators
Controller algorithm able to switch between meter-in, meter-out, different valve center
characteristics, flow control, position control, velocity control, etc., in real time without
hardware changes

flexibility. Without going into the details of each circuit (some are discussed in the
previous section), we can quickly discuss some of the possibilities when using
individual poppet valves to control cylinder position. Most advantages stem from the
fact that the metering surfaces are now decoupled. In our typical spool type
directional control valve, when one metering surface opens, so does the corresponding
return path. The metering surfaces are physically constrained to act this way. Thus
when a certain valve characteristic or center configuration is desired, a specific valve
must be purchased with this configuration. This is opposed to the four poppet valves,
where any valve can open and close independently of the others, as in Figure 63.
Taking advantage of this flexibility requires digital controllers but adds the
ability to choose (or have the controller automatically switch between) many different
circuit behaviors. For example, in a meter-out circuit the flow out of the cylinder
is controlled to limit the extension or retraction speed of the cylinder. This is
commonly done with overrunning loads to prevent cavitation and run-away loads. Using
conventional strategies requires an additional metering valve in the circuit since the

Figure 63 Comparison of spool and poppet metering relationships.



directional control valve will meter equally in both directions. With the poppet
valves, however, simply keeping the inlet valve between the tank and inlet port
fully open allows the downstream, or outlet, valve to be modulated to control
the rate of cylinder movement. To get metering characteristics in both directions,
even more metering devices and check valves are required to complete the circuit. A
simple change in algorithm accomplishes the same thing using independent control
of the poppet valves. Thus, the hardware and plumbing remain identical and the
controller is used to provide many different circuit characteristics.
In addition to different metering circuits, different power management strategies
are available since the valves can be operated to simulate different valve centers
in real time. If all the valves are closed when the system is stationary, the poppet
valves will act as a closed center spool valve. In this case the cylinder remains
stationary and the pump flow must be diverted elsewhere. Open center characteristics
occur when all the poppet valves are opened. Even tandem center spools, which
allow the pump to be unloaded to tank while the cylinder is fixed, are easily
simulated by closing the two valves resisting the load (assuming the load wants to move in
one direction) and opening the remaining two to unload the pump.
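The mapping from a requested center type to the four valve commands can be sketched as follows; the valve names (PA, AT, PB, BT) and the on/off encoding are assumptions for illustration, not from the text:

```python
def center_command(center, load_extending=True):
    """Commands (PA, AT, PB, BT) that emulate a spool-valve center with
    four on/off poppet valves.  PA/AT serve the bore side, PB/BT the
    rod side; 1 = open, 0 = closed.  Names and encoding are assumptions."""
    if center == "closed":
        return (0, 0, 0, 0)   # cylinder held; pump flow must go elsewhere
    if center == "open":
        return (1, 1, 1, 1)   # all ports vented together
    if center == "tandem":
        # close the two valves on the side that resists the load,
        # open the other two so the pump unloads to tank
        return (1, 1, 0, 0) if load_extending else (0, 0, 1, 1)
    raise ValueError("unknown center: " + center)
```

Switching circuit behavior then amounts to calling this function with a different argument, with no change to the plumbing.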
It is obvious that only one side of the story has been presented, and the
disadvantages must also be considered. First and foremost, the operation of the poppet
valves is primarily on-off, and somewhere along the line the output must be modulated.
This could possibly be a feature of special valves where some proportionality is
achieved in poppet valves by rapidly modulating (i.e., PWM) the electrical signal to
the valve. If a two-stage poppet valve is used, the first stage could be extremely fast
and used to modulate the second stage, again with the goal of obtaining some
proportionality for metering control. The digital computer can be used to great
advantage here since it is capable of learning or using lookup tables to predict
what result the signal change will have on the system, even if very nonlinear.
Finally, the poppet valve, if much faster than the system, could be modulated in
much the same way that a PWM signal is averaged in a solenoid coil to produce an
average pressure acting on the system. This obviously has the potential to create
undesired noise in the system. In some position control systems this is acceptable
when the result is not the physical cylinder position. For example, and as illustrated
in this case study, we could use this approach to control the small cylinder that in
turn controls the displacement on a variable displacement hydraulic pump/motor.
The output of the hydraulic pump/motor will not significantly change when small
discrete changes occur in displacement (especially when connected to a large inertia,
i.e., wheel motor applications), since the torque output is further averaged by
the inertia it is connected to. Thus there are many things we need to consider when
designing the controller, hence the incentive to develop the following model.
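The duty-cycle averaging idea above can be sketched very simply; the function name, flow limits, and PWM period below are hypothetical:

```python
def pwm_times(desired_flow, max_flow, period_s=0.01):
    """Split one PWM period into on/off times so that a fast on-off
    poppet valve delivers, on average, the desired flow.  All numeric
    values are illustrative, not from the text."""
    duty = min(max(desired_flow / max_flow, 0.0), 1.0)  # clamp 0..1
    return duty * period_s, (1.0 - duty) * period_s     # (t_on, t_off)
```

The downstream inertia (fluid, or the wheel in a pump/motor application) does the averaging, exactly as a solenoid coil averages a PWM voltage.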

12.7.2 Model Development


This case study illustrates two methods for developing the model used in simulating
the poppet valve position controller discussed above. In both cases we will use
nonlinear models and illustrate the use of Simulink in simulating them.
Method one uses a bond graph to model the physical system, from which the state
equations are easily written. Method two uses conventional equations, Newton's
law, and continuity to develop the differential equations describing the model,

from which the equivalent blocks (some nonlinear) are found and used to construct
the block diagram. In the basic model we have the supply pressure connecting with
two valves and the tank connected to two. The cylinder work ports are connected to
the opposite sides in such a way that pairs can be opened to position the cylinder
where desired. Referencing Figure 62 allows us to directly draw the bond graph from
the simplified physical model, given in Figure 64.
In general, power flow will be from the power supply to the high pressure
cylinder port and the low pressure cylinder port to the tank. Figure 65 gives the
corresponding bond graph and illustrates the parallels it shares with the physical
system model. The bond graph structure and physical model complement each
other and the power flow paths through the physical system are easily seen in the
bond graph. There are five states, each associated with an inertial or capacitive
component:

dy1/dt = dp13/dt   pressure on bore end of cylinder
dy2/dt = dq15/dt   flow due to compressibility in lines and oil (bore end of cylinder)
dy3/dt = dp16/dt   pressure on rod end of cylinder
dy4/dt = dq18/dt   flow due to compressibility in lines and oil (rod end of cylinder)
dy5/dt = dp21/dt   force acting on cylinder load

Using the equation formulation techniques described in Section 2.6, it is easy to
write the actual state equations. Remember that 0 junctions all have equal efforts and
1 junctions have equal flows. Then use the inertial, capacitive, and resistive relationships
to write the equations. The final state equations can then be written as follows:

dp13/dt = (1/CA) q15

dq15/dt = KPA x1 sqrt(PSA - q15/CA) - KAT x2 sqrt(q15/CA - PT) - (1/IA) p13 - (ABE/m) p21

Figure 64 Simplied poppet valve controller model.



Figure 65 Bond graph model of poppet valve controller.

dp16/dt = -(1/CB) q18

dq18/dt = KPB x3 sqrt(PSB - q18/CB) - KBT x4 sqrt(q18/CB - PT) - (1/IB) p16 + (ARE/m) p21

dp21/dt = (ABE/CA) q15 - (ARE/CB) q18 - (b/I21) p21     (force balance on cylinder)

The units for the hydraulic components using the English system are

Displacement q's: in^3              Effort e's: psi
Inertia I: lbf·s^2/in^5             Capacitance C: in^3/psi
Valve coefficients K's: in^3/(sec·sqrt(psi))

Using the state equations, it is straightforward to numerically integrate them
with a variety of common programs. We should notice that it does not matter how
complex the state equations are for numerical integration. The complexity does
relate to simulation speed and what integration routine works best, but each equation
is simply evaluated at each time step to predict the next, and thus discontinuities and
nonlinearities pose no problem. In the bond graph model above, states were included
for system compliance, and the line could easily be broken into more sections for
greater detail. Nonlinear orifice equations are used to represent the P-Q characteristics
for each valve.
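As a sketch of such a numerical integration, the five state equations can be handed to an off-the-shelf solver. All parameter values below are illustrative placeholders, not the ones used in the text, and a signed square root keeps the orifice terms defined when a pressure drop reverses:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the values used in the text)
CA = CB = 2e-4          # hydraulic capacitance, in^3/psi
IA = IB = 0.05          # fluid inertia in the lines
I21 = 0.5               # load inertia
ABE, ARE = 2.0, 1.5     # bore / rod areas, in^2
b = 10.0                # viscous damping on the load
K = 0.05                # valve coefficient, in^3/(s*sqrt(psi))
PSA = PSB = 1000.0      # supply pressure, psi
PT = 0.0                # tank pressure, psi

def sq(dp):
    """Signed square root: keeps the orifice equation defined when the
    pressure drop reverses sign."""
    return np.sign(dp) * np.sqrt(abs(dp))

def rhs(t, y, x1, x2, x3, x4):
    """The five state equations; x1..x4 are the valve openings (0-1)."""
    p13, q15, p16, q18, p21 = y
    dp13 = q15 / CA
    dq15 = (K * x1 * sq(PSA - q15 / CA) - K * x2 * sq(q15 / CA - PT)
            - p13 / IA - ABE * p21 / I21)
    dp16 = -q18 / CB
    dq18 = (K * x3 * sq(PSB - q18 / CB) - K * x4 * sq(q18 / CB - PT)
            - p16 / IB + ARE * p21 / I21)
    dp21 = ABE * q15 / CA - ARE * q18 / CB - b * p21 / I21
    return [dp13, dq15, dp16, dq18, dp21]

# Extend the cylinder: open PA (x1) and BT (x4), keep AT and PB shut
sol = solve_ivp(rhs, (0.0, 0.5), [0.0] * 5,
                args=(1.0, 0.0, 0.0, 1.0), max_step=1e-3)
```

The discontinuous on-off valve commands simply appear as different arguments to the right-hand-side function; the integrator never needs the equations to be smooth between calls.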

To compare the procedures, we now develop the equivalent block diagram
representing the poppet valve control of a hydraulic cylinder; we must first write the
equations governing system behavior. For the bond graph we simply model energy
flow and the equations result from the model (the same equations, of course). For
the block diagram, let us quickly review the basic equations. There are three types of
flows we are concerned about:
Valve flow:

QV = KV x sqrt(PV)     (PV = pressure drop across the valve)

Cylinder flow:

QCYL = A dx/dt     (x = cylinder position)

Compressibility flow:

QCMP = (V/β) dP/dt

If we look at the flow through valves PA or PB, we see that the flow can go three
places: into the cylinder, into compression of the fluid and expansion of hoses, and
back through valves AT and BT, respectively. We can then write the continuity
equation for each valve:

QPA = QAT + QCYL,bore + QCMP,bore
QPB = QBT + QCYL,rod + QCMP,rod

Finally, summing the forces on the cylinder:

FC = FA - FB - fL
FC = PA ABE - PB ARE - fL

Inserting each flow into the respective continuity equations and solving for the
derivative of pressure results in two equations, each representing a summing junction
on a block diagram.
(V/β) dPA/dt = KPA x1 sqrt(PS - PA) - KAT x2 sqrt(PA - PT) - AB dx/dt

(V/β) dPB/dt = KPB x3 sqrt(PS - PB) - KBT x4 sqrt(PB - PT) + AR dx/dt

After these two summing junctions are formed, the result can be integrated to
get PA and PB. The pressures are then used to satisfy the force balance on the piston.
Finally, the block diagram model (along with the necessary functional blocks) is shown
in Figure 66. In the upper left we see the cylinder pressure being subtracted from the
supply pressure to calculate the flow across that valve. This is repeated for each
valve. The two valve flows and the cylinder velocity flow rate are summed and the
difference integrated to get pressure. The pressures times their respective areas
are summed to get the net force acting to accelerate the cylinder and load. This is
integrated once to get the cylinder velocity (used to calculate the respective flows)
and integrated once again to obtain the position. There are additional lines used to
combine the signals and send them to the workspace for additional analysis and
plotting.
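The two summing junctions can be sketched directly from the continuity equations above; the numeric defaults are illustrative placeholders, and a signed square root is used so a reversed pressure drop simply reverses the flow:

```python
import math

def sq(dp):
    """Signed square root so a reversed pressure drop reverses the flow."""
    return math.copysign(math.sqrt(abs(dp)), dp)

def pressure_rates(PA, PB, xdot, x1, x2, x3, x4,
                   K=0.05, PS=1000.0, PT=0.0,
                   AB=2.0, AR=1.5, V=10.0, beta=150_000.0):
    """The two summing junctions: (V/beta)*dP/dt = valve flows minus
    cylinder flow.  All numeric defaults are illustrative, not values
    from the text."""
    dPA = (K * x1 * sq(PS - PA) - K * x2 * sq(PA - PT) - AB * xdot) * beta / V
    dPB = (K * x3 * sq(PS - PB) - K * x4 * sq(PB - PT) + AR * xdot) * beta / V
    return dPA, dPB
```

Integrating these two rates gives PA and PB, which then feed the force balance, just as the block diagram does with integrator blocks.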
Figure 66 Block diagram of poppet valve position controller.

Once the model is up and running, it is important to verify it at known positions.
For this model it is quite easy to calculate the slew velocity (cylinder velocity
with no load) to verify the model. With a supply pressure of 1000 psi and the
coefficients determined for the valve (and used in the model), the slew velocities,
flow rates, and cylinder pressures all agreed with the model. In the next section we
use it to perform several simulations.
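The slew-velocity check can be sketched numerically: at steady state with no load, the two orifice equations and the zero-net-force condition determine the velocity. The valve coefficient and areas below are illustrative, not the values used in the text:

```python
PS, PT = 1000.0, 0.0     # supply and tank pressure, psi
K = 0.05                 # valve coefficient, fully open (illustrative)
ABE, ARE = 2.0, 1.5      # bore and rod areas, in^2 (illustrative)

def slew_residual(v):
    """Zero when the orifice equations and the no-load force balance agree."""
    PA = PS - (ABE * v / K) ** 2   # bore pressure from supply-valve orifice eq.
    PB = PT + (ARE * v / K) ** 2   # rod pressure from rod-to-tank orifice eq.
    return PA * ABE - PB * ARE     # net force on the unloaded piston

# Bisection for the slew velocity (the residual decreases monotonically)
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if slew_residual(mid) > 0.0:
        lo = mid
    else:
        hi = mid
v_slew = 0.5 * (lo + hi)   # in/s
```

Comparing such a hand calculation against the simulated steady state is exactly the kind of verification described above.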

12.7.3 Simulation and Results


With the verified (steady state) model we can now experiment with different input
sequences, disturbances, valve opening times, pressures, cylinder loads, valve coefficients,
and, most importantly, controller strategies. To give an idea of basic controller
implementation, the block diagram uses two function blocks to define a
deadband range in which all the valves are off. Once the cylinder leaves the deadband,
the valves required to move the cylinder in that direction are opened and the
correction is made. Examining the sine wave command and response in Figure 67,
with a deadband of 0.2 inches, we see that the cylinder position follows the command
and the system behaves similarly to one with a proportional controller. The difference
is evident in that the output is not smooth but reflects the on-off nature of the control valves.
Whether or not this chatter is acceptable depends on the application and what the
actuator is connected to.
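The deadband logic just described can be sketched as follows; the valve naming and command encoding are assumptions for illustration:

```python
def poppet_command(error, deadband=0.2):
    """Map position error (in) to on/off commands for the four valves
    (PA, AT, PB, BT); 1 = open.  Naming and encoding are assumptions."""
    if abs(error) <= deadband:
        return (0, 0, 0, 0)       # inside deadband: all valves off, hold
    if error > 0.0:
        return (1, 0, 0, 1)       # extend: supply->bore, rod->tank
    return (0, 1, 1, 0)           # retract: supply->rod, bore->tank
```

Shrinking the deadband argument reproduces the chatter trade-off discussed below: tighter tracking, but more valve cycling.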
Whenever the cylinder reaches a position inside the deadband, all the valves are
shut down and the cylinder stays stationary. If the deadband gets too small, the
valves begin to chatter. Having a disturbance input act on the system only changes

Figure 67 Poppet valve simulation: cylinder response to a sine wave input.



the pressures on the cylinder since all the valves are closed and the position only
changes due to compressibility in the system.
Things are a little more interesting when we examine the valves during this
motion. In Figure 68 we see the on-off nature of this type of controller (the valve itself is
not proportional in this model). The poppet valves are constantly turning on and off
to modulate the position of the cylinder, and thus the velocity is constantly cycling
even though the position output follows the command quite well. The system can be
made to oscillate but it will not go unstable (globally) with this type of controller.
Whenever the error moves outside of the deadband, the controller always applies a
corrective action.
In conclusion, then, we have briefly examined two approaches to developing
and simulating a cylinder position controller using four poppet valves in place of the
typical spool valve. As demonstrated, it is very easy to run many simulations without
having to build and test each system in the lab. We must remember it is still only a
model, and although it includes the nonlinearities associated with the valve, it
ignores others like friction effects, valve response times and trajectories, effects of
flow forces on the valve, etc. As mentioned, always attempt to verify a model with
known data before extending the model to unknown regions.

Figure 68 Poppet valve simulation response to sinusoidal input.



12.8 CASE STUDY OF HYDROSTATIC TRANSMISSION CONTROLLER DESIGN AND IMPLEMENTATION
This case study summarizes the design, development, and testing of an energy storage
hydrostatic vehicle transmission and controller. Primary benefits include regenerative
braking and the decoupling between the engine and road load. The controller
was developed and installed on a vehicle, demonstrating the potential fuel economy
savings and feasibility of hydrostatic transmissions (HST) with energy storage.
Controller algorithms maximize fuel economy and performance. Being true drive-by-wire,
the computer controls the engine throttle position while intermittently running
the engine. The engine only operates to recharge the accumulator or under
sustained high power driving.
The series hybrid configuration, which we examine here, allows many features
to be implemented in software. The torque applied to each wheel can be controlled
independently, both during braking and acceleration. The vehicle incorporates a
pump/motor at each front wheel, with provisions to add units at the rear wheels,
providing true all wheel drive, antilock braking, and traction control abilities.
Controller development involves several steps. Axial piston pump/motor models
are first developed and implemented in a simulation program. The simulation
program allows fuel economy studies, component sizing, performance requirements,
and configurations to be quickly evaluated over a variety of driving cycles. In
addition, the dynamics of inserting a valve block at each wheel were studied. The
valve block allows pump/motors not capable of overcenter operation to be used. In
many cases, these units exhibit higher efficiencies. Successful switches were modeled
analytically and confirmed in the lab.
To demonstrate the use of electrohydraulics (electronics working with hydraulics),
we will summarize the hardware and software for complete vehicle control. The
hardware consists of the computer, data acquisition boards, sensors, circuit boards,
and control panel. The software maintains safe and efficient operation under normal
driving conditions. Engine operating and efficiency models were developed to allow
future open loop control of the engine-pump system. A stepper motor is used to
control the throttle position. Both distributed and centralized controllers are used on
the vehicle, maximizing computer and vehicle performance.
The final result is a vehicle incorporating a hydrostatic transmission with
energy storage, allowing normal driving operation and performance with increased
gains in fuel economy. The remaining sections discuss the overall layout and goal,
thus providing the framework for the environment the controller must operate in,
the necessary hardware, and finally examples of the controller strategies. The goal of
this case study is to stimulate thought about how distributed and centralized control
schemes might be implemented and what the potential is when utilizing electronic
control of hydraulic systems.


Lumkes J. Design, Simulation, and Testing of an Energy Storage Hydrostatic Vehicle Transmission and
Controller. Doctoral Thesis, University of Wisconsin-Madison, 1997.

12.8.1 Overall Layout and Goal


There are many ways to operate and control hydrostatic transmissions, ranging from
garden tractors with one control lever to large construction machines with over one
million lines of computer instructions and multiple processors. While manual control
works well for simple systems where efficiency and features are not emphasized,
taking full advantage of complex systems like large hydrostatic transmissions, always
maintaining optimum efficiency, and implementing safety features require more
inputs/outputs than one operator can muster. It is in these applications that
advanced control systems really shine. Only one possible example and solution
summary is presented here, with many other examples and different solutions certainly
possible.
The basic concept is to control a hydrostatic transmission to optimize efficiency
over wide ranges of operation. In regard to vehicle HST drive systems, the options
range from the simple two-wheel drive parallel system shown in Figure 69 to the
flexible all-wheel drive series system in Figure 70.
The simplest configuration is to leave the existing vehicle driveline intact and
add the hydraulic system in parallel. By adding clutches, both regenerative braking
and engine-road load decoupling are accomplished. The parallel system requires the
least number of additional components and leaves the efficient mechanical driveline
intact. This configuration is currently in use in Japan, where Mitsubishi supplied 59
buses equipped with a diesel/hydraulic parallel hybrid vehicle transmission. This
design gives fuel savings of 15-30% in everyday use. Volvo, Nissan-Diesel, and Isuzu
have similar versions. The Mitsubishi version incorporates two bent-axis swash plate
pump/motors, two accumulators, and a controller that decreases the engine throttle

Figure 69 Vehicle hydrostatic transmission in parallel (with energy storage).


Yamaguchi J. Mitsubishi MBECS III is the Latest in the Diesel/hydraulic Hybrid City Bus Series.
Automotive Engineering, June 1997, pages 29-30.

Figure 70 Vehicle hydrostatic transmissionseries conguration, AWD.

when hydraulic assist is provided. At higher speeds, or steady state, the bus is driven
by the engine alone. A single dry-plate clutch/synchromesh gear unit connects the
pump/motors to the mechanical driveline. With the hydraulic assist, the bus is
capable of using up to third gear to begin moving. The controller for this system can be
much simpler, but many features cannot be implemented.
The series configuration allows for more features than simpler designs. In this
case, a pump/motor is located at each wheel, as shown in Figure 70. This configuration
allows many options:
• Decoupling of engine from road load;
• Regenerative braking;
• All-wheel drive;
• Antilock braking systems (ABS);
• Traction control;
• Ability to deactivate several pump/motors for greater system efficiency;
• Hydrostatic accessory drives;
• Adaptable to variable and/or active suspension systems.

Once the hardware is installed in the series system, the addition of features can
largely be accomplished through software. ABS and traction control, assuming
there are wheel speed sensors, can be completely integrated into the controller at a
lower cost than possible for current vehicles. Also, by selectively using the wheel
pump/motors, greater system efficiencies are possible.


Fronczak F, Beachley N. An Integrated Hydraulic Drive Train for Automobiles. Proceedings, 8th
International Symposium on Fluid Power, Birmingham, England, April 1988.

For these reasons we examine the hardware and software used in the initial
design of a controller to optimize the efficiency of a series hydrostatic transmission
with energy storage. To control this system we will use a multiple-input multiple-output
controller utilizing centralized (engine speed and strategy) and distributed
(all displacement) controllers, implemented with both analog and digital hardware. In
this way we can maximize performance and resources to develop a working controller
and demonstrate, less abstractly, the concepts presented in this text.

12.8.2 Hardware Implementation


Many of the components we examined in Chapters 6 and 10 are used in this
case study. The PC-based data acquisition/control system functions as the primary
central controller and monitors the distributed controllers. It is interfaced to the
hardware through analog and digital input/output (I/O). The inputs and outputs
are made compatible with the computer using a variety of methods. All the control
actuators are mounted on a metal car frame representing the size of a normal full
size passenger car. The frame's primary purpose is to allow dynamometer testing for
system efficiencies, thus allowing refinement and tuning of each controller. This case
study focuses primarily on the hardware required to implement the controller.

12.8.2.1 Central PC Controller


The vehicle controller PC receives two input signals from the driver, the accelerator
and brake pedal positions, and proceeds to regulate and control each component to
respond correctly and safely. Figure 71 illustrates the interaction between the major
components. The driver controls the accelerator and brake pedal positions. The
computer, upon reading these, uses a combination of analog and digital I/O to
interface with the wheel pump/motors, engine, engine pump, and accumulators.
Pedal positions, wheel and engine speeds, pressures, and engine temperature are
all inputs into the computer.

Figure 71 Overview of controller hardware for vehicle HST controller.



The computer is the central control component in the vehicle. It performs some
of the control algorithms, monitors all subsystems for failure, and determines the
overall operating strategy for the vehicle, which in large part, along with the component
efficiencies, determines the efficiency of the vehicle.
Table 4 summarizes the channel inputs, items read, and types of signals. As
controller algorithms change, it may be necessary to adjust what is being monitored.
For example, if an open loop engine controller is developed to the point where the
confidence level is high, the engine speed would not be necessary. In addition, the
engine temperature, once the vehicle configuration and cooling system is tested,
would not be needed. The gauges mounted on the dashboard could monitor these
items, along with the engine control module (ECM).
The data acquisition outputs are summarized in Table 5. Three analog outputs
are used for the wheel pump/motor and engine-pump displacement commands.
The distributed controllers use these voltage commands for setting the desired
displacement. Nine digital outputs are used to control the solenoids, engine ignition
and starting, and throttle position. The solenoids are used to shut down the system
(two modes) and isolate the accumulator from the system, both for safety and
operating procedures. The digital outputs, when used for the solenoids and engine,
must be operated through a high-current relay, since the signal levels are very small.
This is a relatively simple computer configuration and could easily be implemented
using an embedded microcontroller in production applications. The PC data
acquisition system provides a cost-effective way to design and test many controllers
very quickly.

12.8.2.2 Sensors Used for Feedback


Eleven sensors are used to monitor and control the vehicle. A combination of
magnetic pickups, linear and rotary potentiometers, pressure transducers, and
temperature sensors is implemented throughout the vehicle. Table 6 lists the types of
sensors required and their locations.
The rotary speeds are all measured using magnetic pickups and signal conditioning
circuits that convert the frequency to voltage. By calibrating the circuit, the
computer can determine the rotational speeds desired. Similar magnetic pickups and
circuits are used for the wheel speeds and engine speed. Rotary potentiometers are
used to determine the wheel pump/motor displacements. A linear potentiometer is

Table 4 Vehicle Controller Inputs to Central Computer

Input channels Signal

Left wheel speed Voltage


Right wheel speed Voltage
Accelerator pedal position Voltage
Brake pedal position Voltage
HST low pressure (boost) Voltage
HST high pressure (system) Voltage
Engine speed Voltage
Engine temperature Voltage

Table 5 Vehicle Controller Outputs from Central Computer

Output channels Signal

Left wheel P/M displacement Voltage


Right wheel P/M displacement Voltage
Engine pump displacement Voltage
Accumulator bleed-down Digital output
Quick panic - normal mode switch Digital output
Quick panic - shutdown mode switch Digital output
Accumulator isolation - no Digital output
Accumulator isolation - yes Digital output
Engine ignition power Digital output
Engine starter solenoid Digital output
Stepper motor movement Digital output
Stepper motor direction Digital output

used for measuring the engine-pump displacement. The accelerator and brake pedals
also use potentiometers to determine position. For all potentiometers, the excitation
voltage was regulated to ±10 V, giving maximum resolution, since the computer was
configured for ±10 V also.
The signals from the potentiometers and engine temperature sensor are all
voltages in the ranges capable of being read by the computer data acquisition
boards. The pressure transducers have current outputs, which are more immune
to line noise. At the connection on the data acquisition boards, the current is passed
through 400 Ω resistors to convert the signals to voltages. The current ranges from 4
to 20 mA, varying linearly with the pressure. An effort was made to reduce the
number of sensors, developing a control system using sensors likely to be available
in production vehicles. Torque transducers and flow meters are avoided for this
reason.
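The conversion from the sensed voltage back to pressure can be sketched as follows; the 400 Ω resistor is from the text, while the 0-3000 psi span is a hypothetical transducer range for illustration:

```python
def pressure_from_adc(v_meas, r_sense=400.0, p_min=0.0, p_max=3000.0):
    """Recover pressure from the voltage across the sense resistor of a
    4-20 mA transducer loop.  The resistor value follows the text; the
    pressure span is a hypothetical 0-3000 psi."""
    i_ma = v_meas / r_sense * 1000.0   # loop current in mA (Ohm's law)
    frac = (i_ma - 4.0) / 16.0         # 4 mA -> 0.0, 20 mA -> 1.0
    return p_min + frac * (p_max - p_min)
```

A useful side effect of the 4 mA "live zero" is fault detection: a reading near 0 V means a broken wire, not zero pressure.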

Table 6 Vehicle Controller: Sensors Used for Feedback

Item Part number Placement

Left wheel speed Magnetic pickup Ring gear on CV joint


Right wheel speed Magnetic pickup Ring gear on CV joint
Engine speed Magnetic pickup Flywheel teeth
Left P/M displacement 5 K rotary pot Swashplate pivot arm
Right P/M displacement 5 K rotary pot Swashplate pivot arm
Engine pump displacement Linear pot Rod on control piston
Accel. pedal position 5 K rotary pot Floor board
Brake pedal position 5 K rotary pot Fire wall
System pressure Capacitive type Manifold block
Boost pressure Capacitive type Manifold block
Accumulator pressure Capacitive type Manifold block
Engine temperature OEM on engine

12.8.2.3 Conditioning Circuits for Computer Inputs and Outputs


As is usual in practice, additional circuits are required to interface dierent compo-
nents with the controller. The circuits serve a variety of functions, ranging from
signal conditioning, converting signals, displacement controllers, and voltage regu-
lators. A sample of some of the circuits is provided here.
Three circuits convert frequency to voltage, with the voltage being proportional
to the input frequency. These circuits allow the computer to read the speed
from a magnetic pickup. Other circuits convert the unregulated battery voltage,
which can vary between 11 and 14 V, to regulated ±24, ±10, and 5 V. The various
voltage levels are required both for power (transducers) and excitation of
potentiometers. Three displacement controller circuits use OpAmps to create hysteresis,
deadbands, and on-off controller routines, as modeled and discussed in the previous
case study. Finally, a bank of solid-state relays is used to allow the computer to drive
high-current loads using the digital output channels.
The controller algorithms, since they are implemented using OpAmps, allow
the computer to process other tasks more efficiently. The computer simply sends the
desired command signal to the distributed controllers and assumes that its request
will be met. Other algorithms, such as engine speed and ABS braking algorithms, are
run from the central computer.
These examples illustrate some techniques to interface transducers with the
data acquisition cards and also show how simple distributed controllers can off-load
computational demands from the central computer.

12.8.3 Software Implementation


The controller software, through the sensors, circuits, and data acquisition boards,
controls the vehicle components to ensure good system stability and safety. In addition,
the software can add secondary features like ABS, traction control, and cruise
control. The control algorithms, optimized for stability, safety, and system efficiency,
are easily modified in the software.
In this section we only summarize some of the basic concepts and routines that
can be used to implement controllers using computers. Many times, as we will find if
we begin to write software for controller algorithms, the most difficult part is
developing the proper flow chart of the program. This is also commonly taught in many
college courses as the beginning step in developing a computer program. Once the
flow chart is developed, computer programmers can develop the actual software
code and convert the flow chart into a set of commands to be compiled and run
(or downloaded to an embedded microcontroller). In other words, after the development
of the flow chart and specific controller algorithm, the next step is largely
syntax and programming, and less controller design.
Figure 72 gives the main controller flow chart. This corresponds to the main
controller routine that is responsible for determining which routine runs next and
controlling program flow. It is designed for testing purposes to run the control loop
until any key is pressed on the keyboard. Before the loop is started, all the global
variables and functions are defined. The main process of the loop then begins, where
the inputs are all read and compared with safety limits and, if all is okay, the
controller algorithms are run. If anything fails, the program puts the vehicle into a
controlled shutdown. Options are provided to display data to the monitor every
specified number of samples. Finally, data can be stored to the hard disk to track
controller performance, although this does slow down the sampling rate. This particular
controller is designed to utilize program-controlled delays. No explicit delays are
used, to maximize the sample rate, although some routines are not called every pass.

Figure 72 Controller flow chart: main routine.
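The main routine of Figure 72 can be sketched as a loop of callbacks; every routine name below is a placeholder standing in for the corresponding block of the flow chart:

```python
def control_loop(read_inputs, within_limits, run_algorithms,
                 controlled_shutdown, key_pressed,
                 log=None, display_every=50):
    """Skeleton of the main routine: loop until a key is pressed, read
    the inputs, check safety limits, run the control algorithms, and
    optionally display/store data.  Every callable is a placeholder."""
    sample = 0
    while not key_pressed():
        inputs = read_inputs()
        if not within_limits(inputs):
            controlled_shutdown()          # any failure: safe shutdown
            break
        run_algorithms(inputs)
        if display_every and sample % display_every == 0:
            pass                           # update the monitor display here
        if log is not None:
            log.append(inputs)             # disk logging slows the loop
        sample += 1
    return sample
```

Because there are no explicit delays, the sample rate is simply whatever this loop achieves, with the slower routines called only every few passes.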
The strategy routine implements the overall controller strategy about how to
control the vehicle and where each subsystem should be operating. Several routines
are called from the strategy routine (Figure 73). First, if there are no input commands
and the engine is not running, the accumulator is isolated from the system to
prevent leakage from occurring across the pistons in the wheel pump/motors and
engine pump. When a command (brake or gas) signal is received, the controller
connects the accumulator to the main system and changes the displacement of the
wheel pump/motors proportional to the command signal. The pump/motors being
over-center units, braking and acceleration are easily achieved. Next, the engine is
checked, and if the accumulator needs to be recharged, the sequence to start the
engine begins. If the engine does not start after a set number of loops, the controller
shuts down the system. Once the engine recharges the accumulator, it is shut off until
the next time the accumulator needs to be charged.
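One pass of the strategy routine might be sketched as follows; the state fields, thresholds, and retry limit are hypothetical stand-ins for the quantities just described:

```python
def strategy_step(state, try_start_engine):
    """One pass of the strategy routine; field names and thresholds are
    hypothetical stand-ins.  Mutates and returns the state dict."""
    cmd = state["accel_cmd"] - state["brake_cmd"]
    if cmd == 0.0 and not state["engine_running"]:
        # no demand: isolate the accumulator to stop leakage across pistons
        state["accum_connected"] = False
    else:
        state["accum_connected"] = True
        state["pm_displacement"] = cmd     # over-center: sign sets direction
    if state["accum_pressure"] < state["recharge_below"]:
        if not state["engine_running"]:
            state["start_attempts"] += 1
            if state["start_attempts"] > state["max_attempts"]:
                state["shutdown"] = True   # engine refused to start
            else:
                state["engine_running"] = try_start_engine()
    elif state["accum_pressure"] >= state["full_pressure"]:
        state["engine_running"] = False    # recharged: engine off again
    return state
```

Passing the engine-start routine in as a callable keeps the strategy logic testable without any hardware attached.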

Figure 73 Controller flow chart: strategy routine.

The engine controller routine starts the engine and controls the engine speed
using a stepper motor connected to the throttle (Figure 74). The hydrostatic pump
mounted to the engine is controlled by varying its displacement. Since it is computer
controlled, it can be set to optimize the overall system efficiency. To control for
maximum efficiency, the whole system is analyzed: the pump efficiency curves
and engine efficiency curves are known, and the product of those is maximized to
provide the correct operating point. This could be enhanced further by going to a
feedforward routine where the desired pump displacement and stepper motor position
are precalculated as a function of system pressure and accessed via a lookup table.
Thus, without all the details, we should see how a sample controller might be
configured. The goal of building such a system is to verify the design and allow extended
vehicle testing.
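The efficiency-product idea can be sketched as a grid search over candidate operating points; the efficiency maps and the power model below are illustrative stand-ins for the measured engine and pump curves:

```python
def pump_power(pressure, speed, disp):
    """Hydraulic power ~ pressure * flow, flow = displacement * speed
    (consistent units assumed; purely illustrative)."""
    return pressure * disp * speed

def best_operating_point(pressure, power_req, engine_eff, pump_eff,
                         speeds, displacements):
    """Grid search for the engine-speed / pump-displacement pair that
    maximizes engine efficiency times pump efficiency while meeting
    the power request.  Efficiency maps are supplied as callables."""
    best, best_eta = None, -1.0
    for w in speeds:
        for d in displacements:
            if pump_power(pressure, w, d) < power_req:
                continue                   # cannot meet the demand
            eta = engine_eff(w, d) * pump_eff(pressure, w, d)
            if eta > best_eta:
                best, best_eta = (w, d), eta
    return best, best_eta
```

Precomputing this search offline over a pressure grid yields exactly the kind of lookup table the feedforward routine would use.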
Now a quick conclusion relating earlier material to how dierent controllers
may be used to change/improve the existing initial design. First, most components in
the system were modeled and tested to verify operation within the whole system.
Each subcontroller could constitute another study of the control system design
process. The complete vehicle was also modeled and simulated on the Federal
Urban Driving Cycle. This provided much useful input for designing the controller:
expectations of fuel economy, sizing requirements for components (acceleration and
braking rates), dynamic bandwidth required by each controller, and engine on-o
cycles. This simulation provided many of the guidelines necessary to design the
complete system. The wheel pump/motor displacement controllers were tested to
ensure that they met the required bandwidth requirements for the vehicle. Having
these models would now allow the controller to be designed for the next stage
global optimization of system eciency. This case study veried the overall control-
ler strategy and stability of the system. Also, the operating mode when the accumu-
562 Chapter 12

Figure 74 Controller ow chartengine routines.

lator is not in the system needs to be implemented for cases where the accumulator
pressure is too low to perform a maneuver (i.e., passing another vehicle) and the time
to charge the accumulator is not available. This mode of operation was studied and
tested in the lab using sliding mode control and PID algorithms. This mode was
initially examined separately from the vehicle due to its higher complexity and
potential stability/safety problems. The goal of the controller in this mode is to
control the engine throttle, engine-pump displacement, and wheel pump/motor dis-
placements and thus control the torque applied to the wheels while maintaining
maximum system efficiency. Since it is desired to never relieve system pressure
with a relief valve (i.e., adding major energy losses), the throttle and displacements
must always be matched to avoid overpressurization. When the accumulator is in the
system, it essentially (at least relative to loop times) makes the system "see a
constant pressure," and the actual controller strategy is simplified. Implementation is
just as difficult in both cases.
In conclusion, an actual PC-based control system for a hydrostatic transmis-
sion with regenerative braking has been presented as an example of how the tech-
niques and hardware studied earlier might be used in actual product research and
development.


Lewandowski, E., Control of a Hydrostatic Powertrain for Automotive Applications, Ph.D. Thesis,
University of Wisconsin-Madison, 1990.

12.9 PROBLEMS
12.1 Locate an article describing a unique application of industrial hydraulics and
briefly summarize the system and how it is used.
12.2 Locate an article describing a unique application of mobile hydraulics and
briefly summarize the system and how it is used.
12.3 Briefly list and describe the three functions of control valves.
12.4 A pressure reducing valve regulates the downstream pressure. (T or F)
12.5 What two types of pressure control valves can use smaller springs with higher
pressures?
12.6 If the orifice is removed in a pilot operated pressure control valve, the valve
still regulates the pressure. (T or F)
12.7 List an advantage and disadvantage of spool type pressure control valves.
12.8 Graphically demonstrate what pressure override is.
12.9 Counterbalance valves combine the functions of what two valves?
12.10 Pressure reducing valves are an efficient means of reducing the pressure in our
system. (T or F)
12.11 Most flow control valves use pressure as the feedback variable. (T or F)
12.12 Tandem center directional control valves combine what two types of center
configurations?
12.13 The center configuration of servovalves is of what type?
12.14 List several advantages to using electronic feedback on proportional valves.
12.15 For a position control system requiring high accuracy, the most applicable
type of valve is ______________.
12.16 What is the feedback device in a typical flapper nozzle servovalve?
12.17 Two disadvantages of servovalves are ____________ and _____________.
12.18 Using the two valve coefficients below, what is the total valve coefficient?

$$ K_{PA} = 0.34\ \frac{\text{in}}{\sec\sqrt{\text{psi}}} \qquad \text{and} \qquad K_{BT} = 0.45\ \frac{\text{in}}{\sec\sqrt{\text{psi}}} $$
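For two orifices in series passing the same flow, each obeying $Q = K\sqrt{\Delta P}$, the coefficients combine as $K = K_{PA}K_{BT}/\sqrt{K_{PA}^2 + K_{BT}^2}$. Assuming that standard series-orifice relation is what the problem intends, the arithmetic is:

```python
import math

def series_valve_coefficient(k_pa, k_bt):
    """Combine two orifice coefficients in series (same flow through both),
    assuming Q = K * sqrt(dP) for each path. Illustrative helper, not from the text."""
    return k_pa * k_bt / math.sqrt(k_pa ** 2 + k_bt ** 2)

k_total = series_valve_coefficient(0.34, 0.45)
print(round(k_total, 3))  # 0.271
```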

12.19 Pressure metering curves show important characteristics for what type of
control?
12.20 In valve controlled cylinders, what is unique about the force located at 2/3 of
the stall force?
12.21 Under what conditions is a fixed displacement/closed center (FD/CC) system
the most efficient?
12.22 List one advantage and one disadvantage of using regeneration during
extension.
12.23 Describe an advantage of load sensing circuits.
12.24 Design a hydraulic circuit capable of meeting the following specifications
when the system pressure is 1500 psi, the control valve has matched orifices and is
symmetrical, and the cylinder is mounted horizontally and has a stroke of 10 inches.
Specifications:
Maximum extension force = 12,000 lbs (all forces opposed to motion)
Maximum retraction force = 7000 lbs
Extension velocity of 4 in/sec at a load of 5000 lbs

Determine the answers for:
a. Cylinder piston and rod diameters (round to nearest standard values)
b. Minimum valve coefficients required (include units)
c. Maximum flow rate the pump needs to supply when the load is zero
12.25 Design a hydraulic circuit capable of meeting the following specifications
when the system pressure is 2000 psi, the control valve has matched orifices and is
symmetrical, and the cylinder is mounted horizontally and has a stroke of 16 inches.
Specifications:
Maximum extension force = 20,000 lbs (all forces opposed to motion)
Maximum retraction force = 15,000 lbs
Extension velocity of 2 in/sec at a load of 10,000 lbs
Be sure to specify answers to the following questions:
a. Cylinder piston and rod diameters using nearest standard values
b. Minimum valve coefficients required (include units)
c. Maximum flow rate the pump needs to supply when the load is zero
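The piston and rod sizing in problems like these reduces to force balances at the stated system pressure. A sketch of the arithmetic, using the 12.25 numbers (helper names are illustrative; rounding up to standard inch sizes is assumed):

```python
import math

def bore_for_extension(force_lbf, pressure_psi):
    """Minimum piston (cap-side) diameter able to produce the extension force."""
    area = force_lbf / pressure_psi              # required cap area, in^2
    return math.sqrt(4.0 * area / math.pi)       # diameter, in

def max_rod_for_retraction(force_lbf, pressure_psi, bore_in):
    """Largest rod diameter that leaves enough annulus area to retract the load."""
    annulus_needed = force_lbf / pressure_psi    # required annulus area, in^2
    cap_area = math.pi * bore_in ** 2 / 4.0
    rod_area_max = cap_area - annulus_needed
    return math.sqrt(4.0 * rod_area_max / math.pi)

d_piston = bore_for_extension(20000.0, 2000.0)            # ~3.57 in -> 4 in standard bore
d_rod = max_rod_for_retraction(15000.0, 2000.0, 4.0)      # ~2.54 in max -> a 2 in rod works
print(round(d_piston, 2), round(d_rod, 2))
```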
12.26 For the meter-out speed control circuit shown in Figure 75, the diameter of the
cylinder piston is 2.5 inches and the rod diameter is 1.75 inches. The external opposing
force during extension is 10,000 lbf. The flow rate from the pump is 20 gpm and
the relief valve is set at 3000 psi. The rod side flow is controlled to 7 gpm by the
pressure compensated flow control valve. Determine the following:
a. How much of the pump flow passes over the relief valve?
b. What is the pressure on the rod side of the cylinder if the external load is
10,000 lbf?
c. If the external load should suddenly become zero during extension, what is
the magnitude of the pressure on the rod side of the cylinder?
d. If the cylinder is rated to 5000 psi, what is the minimum external force in
extension which must be applied?

Figure 75 Problem: meter-out flow control circuit.
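A numerical sketch of one way to work parts (a)-(d), using only continuity and force balances (treat this as a check of the arithmetic, not the book's worked solution):

```python
import math

GPM_PER_IN3_S = 60.0 / 231.0            # 1 in^3/s expressed in gpm (231 in^3 per gallon)

bore, rod = 2.5, 1.75                   # cylinder dimensions, inches
cap_area = math.pi * bore ** 2 / 4.0            # ~4.909 in^2
annulus = cap_area - math.pi * rod ** 2 / 4.0   # ~2.503 in^2

q_rod = 7.0 / GPM_PER_IN3_S             # rod-side flow held at 7 gpm, in in^3/s
v_ext = q_rod / annulus                 # extension velocity, in/s
q_cap = v_ext * cap_area * GPM_PER_IN3_S        # cap-side flow drawn from the pump, gpm

relief_flow = 20.0 - q_cap                              # (a) flow over the relief valve
p_cap = 3000.0                                          # cap side sits at the relief setting
p_rod_loaded = (p_cap * cap_area - 10000.0) / annulus   # (b) rod-side pressure, psi
p_rod_noload = p_cap * cap_area / annulus               # (c) rod-side pressure at zero load
f_min = p_cap * cap_area - 5000.0 * annulus             # (d) min force to stay at 5000 psi, lbf

print(round(relief_flow, 1), round(p_rod_loaded), round(p_rod_noload), round(f_min))
```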



12.27 Given the circuit shown in Figure 76, answer the following questions:
a. What is the max cylinder extension speed?
b. What is the max cylinder extension force?
c. What is the max cylinder retraction speed?
d. What is the bore end pressure during retraction?
e. What will the cylinder do when the valve is centered?

Figure 76 Problem: meter-out flow control circuit.


Appendix A
Useful Mathematical Formulas

Quadratic Equation
Polynomial:

$$ ar^2 + br + c = 0 $$

Roots:

$$ r_1, r_2 = \frac{-b}{2a} \pm \frac{\sqrt{b^2 - 4ac}}{2a} $$
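A quick numerical check of the root formula (illustrative Python, complex-safe so it also covers the underdamped case):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*r**2 + b*r + c = 0, including complex conjugate pairs."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(quadratic_roots(1, 2, 5))  # complex conjugate pair near -1 + 2j and -1 - 2j
```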

Euler's Theorem

$$ e^{j\theta} = \cos\theta + j\sin\theta $$

$$ e^{-j\theta} = \cos\theta - j\sin\theta $$

Matrix Definitions
Square Matrix
A matrix with m rows and n columns is square if $m = n$.
Column Vector
An $n \times 1$ matrix; all values in one column.
Row Vector
A $1 \times n$ matrix; all values in one row.
Symmetrical Matrix
$a_{ij} = a_{ji}$.
Identity Matrix
A square matrix where all diagonal elements are 1 and all off-diagonal elements are 0.
Using matrix subscript notation it can be defined as:

$$ I_{ij} = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases} $$

Matrix Addition
$C = A + B$; $c_{ij} = a_{ij} + b_{ij}$.
Matrix Multiplication
If the multiplication order is $AB$, then the number of columns of $A$ must equal the
number of rows of $B$. If $A$ is $m \times n$ and $B$ is $n \times q$, the resulting matrix will be $m \times q$.
The product $C = AB$ is found by multiplying the $i$th row of $A$ into the $j$th column of
$B$ to form the element $c_{ij}$:

$$ c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}, \qquad AB \ne BA \text{ in general} $$
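The multiplication rule (and the fact that $AB \ne BA$ in general) can be checked directly; an illustrative Python sketch, not from the text:

```python
def matmul(A, B):
    """C = A*B with c_ij = sum_k a_ik * b_kj; columns of A must equal rows of B."""
    n = len(B)
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    q = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(q)]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]  -- AB != BA
```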

Transpose
Denoted $A^T$; it is found by interchanging the rows and columns and is defined by
$(a_{ij})^T = a_{ji}$. It will turn a column vector into a row vector, and vice versa.
Adjoint
The adjoint of a square matrix $A$, denoted $\mathrm{Adj}\,A$, can be found by the following
sequence of operations on $A$:
1. Find the minor of $A$, denoted $M$. Elements $m_{ij}$ are found by evaluating
$\det A$ with row $i$ and column $j$ deleted.
2. Find the cofactor of $A$, denoted $C$. Element $c_{ij} = (-1)^{i+j} m_{ij}$.
3. Find the adjoint from the cofactor transposed: $\mathrm{Adj}\,A = C^T$.
Inverse
The inverse of a square matrix $A$, denoted $A^{-1}$, is

$$ A^{-1} = \frac{\mathrm{Adj}\,A}{\det A} $$

If the determinant is zero, the matrix inverse does not exist and the matrix is said to
be singular.
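The minor/cofactor/adjoint sequence above translates directly into code. An illustrative sketch (suitable for small matrices only, since cofactor expansion grows factorially):

```python
def minor(M, i, j):
    """Matrix M with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def adjoint(M):
    """Transpose of the cofactor matrix."""
    n = len(M)
    cof = [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]

def inverse(M):
    d = det(M)
    if d == 0:
        raise ValueError("matrix is singular")
    return [[a / d for a in row] for row in adjoint(M)]

print(inverse([[1, 2], [3, 4]]))  # [[-2.0, 1.0], [1.5, -0.5]]
```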
Appendix B
Laplace Transform Table

(Each numbered pair lists the Laplace domain, the time domain, and the z-domain.)

1.  Laplace: $1$
    Time: $\delta(t)$ (unit impulse)
    z: $1$

2.  Laplace: $e^{-kTs}$ ($k$ is any integer)
    Time: $\delta(t - kT)$ (delayed impulse; output is 1 at $t = kT$, 0 for $t \ne kT$)
    z: $z^{-k}$

3.  Laplace: $\dfrac{1}{s}$
    Time: $u(t) = 1$ for $t \ge 0$ (unit step)
    z: $\dfrac{z}{z-1}$

4.  Laplace: $\dfrac{1}{s^2}$
    Time: $t$ (unit ramp)
    z: $\dfrac{Tz}{(z-1)^2}$

5.  Laplace: $\dfrac{1}{s^3}$
    Time: $\dfrac{t^2}{2}$
    z: $\dfrac{T^2 z (z+1)}{2(z-1)^3}$

6.  Laplace: $\dfrac{1}{s+a}$
    Time: $e^{-at}$
    z: $\dfrac{z}{z - e^{-aT}}$

7.  Laplace: $\dfrac{1}{(s+a)^2}$
    Time: $t e^{-at}$
    z: $\dfrac{T z e^{-aT}}{\left(z - e^{-aT}\right)^2}$

8.  Laplace: $\dfrac{1}{s(s+a)}$
    Time: $\dfrac{1}{a}\left(1 - e^{-at}\right)$
    z: $\dfrac{1}{a}\,\dfrac{z\left(1 - e^{-aT}\right)}{(z-1)\left(z - e^{-aT}\right)}$

9.  Laplace: $\dfrac{b-a}{(s+a)(s+b)}$
    Time: $e^{-at} - e^{-bt}$
    z: $\dfrac{z\left(e^{-aT} - e^{-bT}\right)}{\left(z - e^{-aT}\right)\left(z - e^{-bT}\right)}$

10. Laplace: $\dfrac{1}{s(s+a)(s+b)}$
    Time: $\dfrac{1}{ab} + \dfrac{e^{-at}}{a(a-b)} + \dfrac{e^{-bt}}{b(b-a)}$
    z: $\dfrac{1}{ab}\left[\dfrac{z}{z-1} + \dfrac{bz}{(a-b)\left(z - e^{-aT}\right)} - \dfrac{az}{(a-b)\left(z - e^{-bT}\right)}\right]$

11. Laplace: $\dfrac{a^2}{s(s+a)^2}$
    Time: $1 - e^{-at} - a t e^{-at}$
    z: $\dfrac{z\left[\left(1 - e^{-aT} - aTe^{-aT}\right)z + \left(e^{-2aT} - e^{-aT} + aTe^{-aT}\right)\right]}{(z-1)\left(z - e^{-aT}\right)^2}$

12. Laplace: $\dfrac{a}{s^2(s+a)}$
    Time: $\dfrac{1}{a}\left(at - 1 + e^{-at}\right)$
    z: $\dfrac{1}{a}\,\dfrac{\left(aT - 1 + e^{-aT}\right)z^2 + \left(1 - e^{-aT} - aTe^{-aT}\right)z}{(z-1)^2\left(z - e^{-aT}\right)}$

13. Laplace: $\dfrac{s}{(s+a)^2}$
    Time: $(1 - at)e^{-at}$
    z: $\dfrac{z\left[z - e^{-aT}(1 + aT)\right]}{\left(z - e^{-aT}\right)^2}$

14. Laplace: $\dfrac{b}{s^2 + b^2}$
    Time: $\sin bt$
    z: $\dfrac{z \sin bT}{z^2 - 2z\cos bT + 1}$

15. Laplace: $\dfrac{s}{s^2 + b^2}$
    Time: $\cos bt$
    z: $\dfrac{z(z - \cos bT)}{z^2 - 2z\cos bT + 1}$

16. Laplace: $\dfrac{b}{(s+a)^2 + b^2}$
    Time: $e^{-at}\sin bt$
    z: $\dfrac{z e^{-aT} \sin bT}{z^2 - 2z e^{-aT}\cos bT + e^{-2aT}}$

17. Laplace: $\dfrac{s+a}{(s+a)^2 + b^2}$
    Time: $e^{-at}\cos bt$
    z: $\dfrac{z^2 - z e^{-aT}\cos bT}{z^2 - 2z e^{-aT}\cos bT + e^{-2aT}}$

18. Laplace: $\dfrac{a^2 + b^2}{s\left[(s+a)^2 + b^2\right]}$
    Time: $1 - e^{-at}\left(\cos bt + \dfrac{a}{b}\sin bt\right)$
    z: $\dfrac{z(Az + B)}{(z-1)\left(z^2 - 2z e^{-aT}\cos bT + e^{-2aT}\right)}$, where
    $A = 1 - e^{-aT}\cos bT - \dfrac{a}{b} e^{-aT}\sin bT$ and
    $B = e^{-2aT} + \dfrac{a}{b} e^{-aT}\sin bT - e^{-aT}\cos bT$

19. Laplace: $\dfrac{\omega_n^2}{s\left(s^2 + 2\zeta\omega_n s + \omega_n^2\right)}$
    Time (if $\zeta < 1$, underdamped):
    $1 - \dfrac{1}{\sqrt{1-\zeta^2}}\, e^{-\zeta\omega_n t} \sin\left(\omega_n\sqrt{1-\zeta^2}\,t + \cos^{-1}\zeta\right)$

20. Laplace: $\dfrac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$
    Time (if $\zeta < 1$, underdamped):
    $\dfrac{\omega_n}{\sqrt{1-\zeta^2}}\, e^{-\zeta\omega_n t} \sin\left(\omega_n\sqrt{1-\zeta^2}\,t\right)$

21. Laplace: $\dfrac{s\,\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$
    Time (if $\zeta < 1$, underdamped):
    $-\dfrac{\omega_n^2}{\sqrt{1-\zeta^2}}\, e^{-\zeta\omega_n t} \sin\left(\omega_n\sqrt{1-\zeta^2}\,t - \cos^{-1}\zeta\right)$
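Pairs in the table can be spot-checked numerically. For example, pair 6 says the samples of $e^{-at}$ are the inverse z-transform of $z/(z - e^{-aT})$; under a unit impulse input, the corresponding difference equation is $y(k) = e^{-aT}\,y(k-1)$ with $y(0) = 1$. A quick check (illustrative Python):

```python
import math

a, T, N = 2.0, 0.1, 20
p = math.exp(-a * T)

# Inverse z-transform of z/(z - e^{-aT}) via its difference equation,
# driven by a unit impulse at k = 0.
y = [1.0]
for k in range(1, N):
    y.append(p * y[-1])

samples = [math.exp(-a * k * T) for k in range(N)]
print(max(abs(u - v) for u, v in zip(y, samples)))  # ~0, round-off only
```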
Appendix C
General Matlab Commands

To receive more information on any command listed below, type:


>> help command_name

Creation of LTI models

ss        Create a state-space model.
zpk       Create a zero/pole/gain model.
tf        Create a transfer function model.
dss       Specify a descriptor state-space model.
filt      Specify a digital filter.
set       Set/modify properties of LTI models.
ltiprops  Detailed help for available LTI properties.

Data extraction

ssdata    Extract state-space matrices.
zpkdata   Extract zero/pole/gain data.
tfdata    Extract numerator(s) and denominator(s).
dssdata   Descriptor version of SSDATA.
get       Access values of LTI model properties.

Model characteristics

class     Model type ('ss', 'zpk', or 'tf').
size      Input/output dimensions.
isempty   True for empty LTI models.
isct      True for continuous-time models.
isdt      True for discrete-time models.
isproper  True for proper LTI models.
issiso    True for single-input/single-output systems.
isa       Test if LTI model is of given type.

Conversions

ss        Conversion to state space.
zpk       Conversion to zero/pole/gain.
tf        Conversion to transfer function.
c2d       Continuous to discrete conversion.
d2c       Discrete to continuous conversion.
d2d       Resample discrete system or add input delay(s).
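For a scalar first-order system $\dot{x} = ax + bu$, the zero-order-hold conversion that c2d performs by default reduces to $A_d = e^{aT}$ and $B_d = (A_d - 1)b/a$. A minimal sketch of that computation (in Python for illustration, rather than Matlab):

```python
import math

def c2d_zoh_first_order(a, b, T):
    """Zero-order-hold discretization of xdot = a*x + b*u (scalar, a != 0)."""
    Ad = math.exp(a * T)
    Bd = (Ad - 1.0) / a * b
    return Ad, Bd

Ad, Bd = c2d_zoh_first_order(-2.0, 1.0, 0.1)
print(round(Ad, 4), round(Bd, 4))  # 0.8187 0.0906
```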

Overloaded arithmetic operations

+ and -   Add and subtract LTI systems (parallel connection).
*         Multiplication of LTI systems (series connection).
\         Left divide: sys1\sys2 means inv(sys1)*sys2.
/         Right divide: sys1/sys2 means sys1*inv(sys2).
'         Pertransposition.
.'        Transposition of input/output map.
[..]      Horizontal/vertical concatenation of LTI systems.
inv       Inverse of an LTI system.

Model dynamics

pole, eig   System poles.
tzero       System transmission zeros.
pzmap       Pole-zero map.
dcgain      D.C. (low frequency) gain.
norm        Norms of LTI systems.
covar       Covariance of response to white noise.
damp        Natural frequency and damping of system poles.
esort       Sort continuous poles by real part.
dsort       Sort discrete poles by magnitude.
pade        Pade approximation of time delays.

State-space models

rss, drss   Random stable state-space models.
ss2ss       State coordinate transformation.
canon       State-space canonical forms.
ctrb, obsv  Controllability and observability matrices.
gram        Controllability and observability gramians.
ssbal       Diagonal balancing of state-space realizations.
balreal     Gramian-based input/output balancing.
modred      Model state reduction.
minreal     Minimal realization and pole/zero cancellation.
augstate    Augment output by appending states.

Time response

step        Step response.
impulse     Impulse response.
initial     Response of state-space system with given initial state.
lsim        Response to arbitrary inputs.
ltiview     Response analysis GUI.
gensig      Generate input signal for LSIM.
stepfun     Generate unit-step input.

Frequency response

bode        Bode plot of the frequency response.
sigma       Singular value frequency plot.
nyquist     Nyquist plot.
nichols     Nichols chart.
ltiview     Response analysis GUI.
evalfr      Evaluate frequency response at given frequency.
freqresp    Frequency response over a frequency grid.
margin      Gain and phase margins.

System interconnections

append      Group LTI systems by appending inputs and outputs.
parallel    Generalized parallel connection (see also overloaded +).
series      Generalized series connection (see also overloaded *).
feedback    Feedback connection of two systems.
star        Redheffer star product (LFT interconnections).
connect     Derive state-space model from block diagram description.

Classical design tools

rlocus      Evans root locus.
rlocfind    Interactive root locus gain determination.
acker       SISO pole placement.
place       MIMO pole placement.
estim       Form estimator given estimator gain.
reg         Form regulator given state-feedback and estimator gains.

LQG design tools

lqr, dlqr   Linear-quadratic (LQ) state-feedback regulator.
lqry        LQ regulator with output weighting.
lqrd        Discrete LQ regulator for continuous plant.
kalman      Kalman estimator.
kalmd       Discrete Kalman estimator for continuous plant.
lqgreg      Form LQG regulator given LQ gain and Kalman estimator.

Matrix equation solvers

lyap        Solve continuous Lyapunov equations.
dlyap       Solve discrete Lyapunov equations.
care        Solve continuous algebraic Riccati equations.
dare        Solve discrete algebraic Riccati equations.

Demonstrations

ctrldemo    Introduction to the Control System Toolbox.
jetdemo     Classical design of jet transport yaw damper.
diskdemo    Digital design of hard-disk-drive controller.
milldemo    SISO and MIMO LQG control of steel rolling mill.
kalmdemo    Kalman filter design and simulation.
Bibliography

Airy, G., "On the Regulator of the Clock-work for Effecting Uniform Movement of Equatorials," Memoirs of the Royal Astronomical Society, vol. 11, pp. 249-267, 1840.
Ambardar, A., Analog and Digital Signal Processing, International Thomson Publishing, 1995, ISBN 0-534-94086-2.
Anderson, W., Controlling Electrohydraulic Systems, Marcel Dekker, Inc., 1988, ISBN 0-8247-7825-1.
Anton, H., Elementary Linear Algebra, 7th ed., John Wiley & Sons, Inc., 1994.
Astrom, K., and Wittenmark, B., Adaptive Control, Addison-Wesley Publishing Company, Inc., 1995, ISBN 0-201-55866-1.
Astrom, K., and Wittenmark, B., Computer Controlled Systems: Theory and Design, 3rd ed., Prentice Hall, 1997, ISBN 0-13-314899-8.
Bishop, R., Modern Control Systems Analysis and Design using Matlab and Simulink, Addison Wesley Longman, Inc., 1997, ISBN 0-201-49846-4.
Bollinger, J., and Duffie, N., Computer Control of Machines and Processes, Addison-Wesley Publishing Company, 1988, ISBN 0-201-10645-0.
Bolton, W., Mechatronics: Electronic Control Systems in Mechanical Engineering, Addison Wesley Longman Limited, 1995, ISBN 0-582-25634-8.
Chalam, V., Adaptive Control Systems: Techniques and Applications, Marcel Dekker, Inc., 1987, ISBN 0-8247-7650-X.
Ciarlet, P. G., Introduction to Numerical Linear Algebra and Optimisation, page 119, Cambridge: Cambridge University Press, 1991.
D'Azzo, J., and Houpis, C., Linear Control System Analysis and Design: Conventional and Modern, 3rd edition, McGraw-Hill Book Company, 1988, ISBN 0-07-100191-3.
Dorf, R., and Bishop, R., Modern Control Systems, 8th edition, Addison Wesley Longman, Inc., 1998, ISBN 0-201-30864-9.
Driankov, D., Hellendoorn, H., and Reinfrank, M., An Introduction to Fuzzy Control, Springer-Verlag, 1996.
Driels, M., Linear Control Systems Engineering, McGraw-Hill, Inc., 1996, ISBN 0-07-017824-0.
Dutton, K., Thompson, S., and Barraclough, B., The Art of Control Engineering, Addison Wesley Longman Limited, 1997, ISBN 0-201-17545-2.
Evans, W., "Graphical Analysis of Control Systems," Transactions AIEE, vol. 67, pp. 547-551, 1948.
Franklin, G., Powell, J., and Emami-Naeini, A., Feedback Control of Dynamic Systems, 3rd edition, Addison-Wesley Publishing Company, 1994, ISBN 0-201-52747-2.
Franklin, G., Powell, J., and Workman, M., Digital Control of Dynamic Systems, 3rd edition, Addison Wesley Longman, Inc., 1998, ISBN 0-201-82054-4.
Fuller, A., "The Early Development of Control Theory," Journal of Dynamic Systems, Measurement, and Control, ASME, vol. 98G, no. 2, pp. 109-118, June 1976.
Gupta, M., Adaptive Methods for Control System Design, IEEE Press, 1986, ISBN 0-87942-207-6.
Histand, M., and Alciatore, D., Introduction to Mechatronics and Measurement Systems, WCB McGraw-Hill, 1999, ISBN 0-07-029089-X.
Johnson, J., Basic Electronics for Hydraulic Motion Control, Penton Publishing Inc., 1992, ISBN 0-932905-07-2.
Johnson, J., Design of Electrohydraulic Systems for Industrial Motion Control, 2nd edition, Jack L. Johnson, PE, 1995.
Kandel, A., and Langholz, G., Fuzzy Control Systems, CRC Press, 1994, ISBN 0-8493-4496-4.
Kraus, T., and Myron, T., "Self-tuning PID uses Pattern Recognition Approach," Control Engineering, June 1984.
Leonard, N., and Levine, W., Using Matlab to Analyze and Design Control Systems, 2nd edition, The Benjamin/Cummings Publishing Company, Inc., 1995, ISBN 0-8053-2193-4.
Lewis, F. L., Applied Optimal Control and Estimation, Prentice Hall, 1992.
Maciejowski, J., Multivariable Feedback Design, Addison-Wesley Publishers Ltd., 1989, ISBN 0-201-18243-2.
Mayr, O., The Origins of Feedback Control, Cambridge: MIT Press, 1970.
McNeill, F., and Thro, E., Fuzzy Logic: A Practical Approach, Academic Press Limited, 1994, ISBN 0-12-485965-8.
Merritt, H., Hydraulic Control Systems, John Wiley & Sons, Inc., 1967, ISBN 0-471-59617-5.
Nise, N., Control Systems Engineering, 2nd edition, Addison-Wesley Publishing Company, 1995, ISBN 0-8053-5424-7.
Ogata, K., Modern Control Engineering, 2nd edition, Prentice-Hall, Inc., 1990, ISBN 0-87692-690-1.
Ogata, K., System Dynamics, 3rd ed., New Jersey: Prentice-Hall, 1998.
Palm, W., Control Systems Engineering, John Wiley & Sons, Inc., 1986, ISBN 0-0471-81086-X.
Passino, K., and Yurkovich, S., Fuzzy Control, Addison-Wesley Longman, Inc., 1998, ISBN 0-201-18074-X.
Saadat, H., Computational Aids in Control Systems using MATLAB, McGraw-Hill, 1993, ISBN 0-07-911358-3.
Schwarzenbach, J., Essentials of Control, Addison-Wesley Longman Limited, 1996, ISBN 0-582-27347-1.
Shetty, D., and Kolk, R., Mechatronics System Design, PWS Publishing Company, 1997, ISBN 0-534-95285-2.
Slotine, J., and Li, W., Applied Nonlinear Control, Prentice-Hall, Inc., 1991, ISBN 0-13-040890-5.
Spong, M., and Vidyasagar, M., Robot Dynamics and Control, John Wiley & Sons, Inc., 1989, ISBN 0-471-50352-5.
Stiffler, A., Design with Microprocessors for Mechanical Engineers, McGraw-Hill, Inc., 1992, ISBN 0-07-061374-5.
Tonyan, M., Electronically Controlled Proportional Valves: Selection and Application, Marcel Dekker, Inc., 1985, ISBN 0-8247-7431-0.
Van de Vegte, J., Feedback Control Systems, 3rd edition, Prentice-Hall Inc., 1994, ISBN 0-13-016379-1.
Wang, L., A Course in Fuzzy Systems and Control, Prentice Hall, 1997.
Answers to Selected Problems

Problem Answer(s)

1.6 a. analog, b. analog, c. digital, d. digital

2.5 $\dfrac{C(s)}{R(s)} = \dfrac{s^3 + 1}{s^3 + 2s^2 + s + 2}$

2.14 $A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -1/a & -b/a & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1/c & -1/c \end{bmatrix}$, $B = \begin{bmatrix} 0 & 0 \\ 20/a & 0 \\ 0 & 0 \\ 0 & 1/c \end{bmatrix}$, $C = \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix}$

2.16 $z = 27 + 21(x - 2) + 11(y - 3)$

2.18 $m\ddot{x} + b_1\dot{x} + (k_1 + k_2)x = k_2 x_i$

2.20 $m\ddot{y} + b\dot{y} + (k_1 + k_2)y = b\dot{r} + k_1 r$

3.1 $F(s) = \dfrac{1}{4s(s/20 + 1)}$; $f(t) = \dfrac{1}{4}\left(1 - e^{-20t}\right)$

3.3 $\dfrac{Y(s)}{U(s)} = \dfrac{10}{s^3 + 3s^2 + 4s + 9}$

3.4 $\dfrac{Y(s)}{U(s)} = \dfrac{5s + 1}{s^3 + 5s^2 + 32}$

3.6 $c(t) = \dfrac{1}{2}\left(2t - 1 + e^{-2t}\right)$

3.7 $y(t) = \dfrac{1}{10} + \dfrac{3}{2}e^{-2t} - \dfrac{8}{5}e^{-5t}$

3.12 $G(s) = \dfrac{2}{5s + 1}$
3.13 $G(s) = \dfrac{36}{s^2 + 4s + 36}$

3.14 Transient response characteristics: overshoot, underdamped. Steady-state output
(unit step input): $C_{ss} = 1/2$

3.26 a. 2nd; b. $\zeta < 0.707$, underdamped; c. 0.7 rad/sec; d. 1 rad/sec

3.30 $s = -0.5 \pm 0.866j$, damped oscillations

4.5 $Y_{ss} = (1/10)(10) = 1$

4.6 8, 40/85, 1 - 40/85

4.7 $e_{ss}$ from R is 0; $e_{ss}$ from D is $-1/K$

4.10 $c_{ss} = 10$

4.12 $0 < K < 1386$

4.17 $\zeta = 3$

4.20 PM = -55 degrees, GM = -40 dB, unstable

4.21 $GH(s) = \dfrac{237(s+1)(s+4)}{s(s+10)(s+30)}$; PM = 105 degrees

5.9 $k = 2$; $p = 2$

5.11 $K > 0.82$

5.13 $K = 59$

5.14 P) $e_{ss} = 2/3$; PI) $e_{ss} = 0$

5.21 $K = 14$

5.22 PM = 45 degrees, GM = 20 dB, $K = 10$

5.23 $K_p = 10$, $K_i = 100$, $e_{ss} = 1/100$ for ramp

5.25 $z = 1$; $p = 3.3$

5.33 $K_1 = 4$, $K_2 = 3$

6.9 Normal (rated), maximum (failure), burst

6.11 Current

6.19 False

6.23 Transistor

6.25 False

7.10 1.00, 0.10, 0.01, 0.001, 0.0001, ...

7.11 0, 0.6320, 1.0972, 1.2069, 1.1165, 1.0096, 0.9642, 0.9701, 0.9912, 1.0045

7.12 $x(k) = 2e^{-aT}x(k-1) - e^{-2aT}x(k-2) + Te^{-aT}\delta(k-1)$

7.14 a. $\dfrac{Y(s)}{R(s)} = \dfrac{1}{s^2 + 5s + 6}$
b. $\dfrac{Y(z)}{R(z)} = \dfrac{0.1156z + 0.02134}{z^2 - 0.1851z + 0.006738}$
c. 0.1156, 0.1583, 0.1655, 0.1665, 0.1666, 0.1667, 0.1667, 0.1667, ...

7.17 $\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u(k)$, $\quad c(k) = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix}$

7.18 $C(z) = \dfrac{T}{2(z-1)}\,R(z)$

7.19 $\dfrac{Y(z)}{R(z)} = \dfrac{0.3}{z - 0.5}$

7.20 $\dfrac{Y(z)}{R(z)} = \dfrac{0.2z^2}{z^2 - 0.5z + 0.3}$

8.5 a) 1.0000, 0.9000, 1.1100, 1.0690, 1.1151; b) FVT: 1.11111

8.7 IVT: $y(0) = 0$; FVT: $y(\infty) = 1$

8.8 a. $y(\infty) = 1$
b. $G(z) = \dfrac{0.368z + 0.264}{z^2 - 1.368z + 0.368}$
c. 0, 0.3679, 1.1353, 2.0498, 3.0183, 4.0067, 5.0025
d. $y(\infty) = 1$
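The sequence in 8.8(c) follows from G(z) in 8.8(b) by iterating its difference equation with a unit-step input; a numerical check (the small differences in the early terms come from the rounding of the printed coefficients):

```python
# G(z) = (0.368 z + 0.264) / (z^2 - 1.368 z + 0.368)
# => y[k] = 1.368 y[k-1] - 0.368 y[k-2] + 0.368 u[k-1] + 0.264 u[k-2]
u = [1.0] * 10            # unit-step input, u[k] = 1 for k >= 0
y = []
for k in range(8):
    yk = 0.0
    if k >= 1:
        yk += 1.368 * y[k - 1] + 0.368 * u[k - 1]
    if k >= 2:
        yk += -0.368 * y[k - 2] + 0.264 * u[k - 2]
    y.append(yk)

print([round(v, 4) for v in y])  # [0.0, 0.368, 1.1354, 2.0498, 3.0183, 4.0067, 5.0025, 6.0009]
```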

8.10 $0 < T < 0.2$

8.13 b) Approaches marginal stability
c) Pole #1, $\zeta = 1$; Pole #2, $\zeta = 0.22$

8.18 $G(z) = \dfrac{0.04117z + 0.03372}{z^2 - 1.511z + 0.5488}$, $\quad K = 0.5$
8.20 $G(z) = \dfrac{0.01936z^3 - 0.0070z^2 - 0.00724z + 0.00265}{z^4 - 1.912z^3 + 1.424z^2 - 0.4568z + 0.04979}$, $\quad K = 6$

9.6 $G_c(z) = 0.4551\,\dfrac{z - 0.1353}{z - 0.6065}$

9.7 $G_c(z) = \dfrac{(2 + T)z + (T - 2)}{(0.2 + T)z + (T - 0.2)}$

9.9 a. $G_c(z) = \dfrac{9.601z - 2.751}{z - 0.02136}$; b. $G_c(z) = \dfrac{11.97z - 2.763}{z + 0.3158}$

9.10 $K = 6.3$

9.13 $\zeta = 0.69$, type 1 system

9.17 $D(z) = \dfrac{z^2 - 1.287z + 0.4493}{0.1835z^3 + 0.1404z^2 - 0.1835z - 0.1404}$

10.11 Ladder diagrams

10.13 Difference equations

10.15 Absolute

10.22 Saturated region

11.2 False, they are proactive.

11.4 False, the denominator of our system remains the same.

11.8 When all states cannot be measured

11.10 Difference equations

11.20 $\dfrac{C(z)}{R(z)} = \dfrac{0.36}{z - 0.82}$; $a = 0.82$, $b = 0.36$

11.21 $\dfrac{C(z)}{R(z)} = \dfrac{0.7z + 0.24}{z^2 - 0.11z + 0.05}$; $a_1 = 0.11$, $a_2 = 0.05$, $b_1 = 0.7$, $b_2 = 0.24$

12.4 True

12.6 False

12.9 Pressure relief and check valves

12.10 False, the pressure drop becomes heat.

12.11 True

12.13 Critically centered

12.25 a. Dia$_P$ = 4 in, Dia$_R$ = 2 in; b. $K = 0.224$ gpm/(psi)$^{1/2}$; c. 10 gpm

12.26 a. 6.3 gpm; b. 1890 psi; c. 5883 psi; d. 2200 lbf


Index

Absolute encoder, 417418 Angle of departure or arrival, 160161


Accelerometers, 294 Approximate derivative controller, 205
Accumulator, 60, 6162, 534535, 553562 constructing with OpAmps, 281283
Ackermanns formula, 270 Area ratio:
AC motor, 302 in cylinders, 297, 525
Activation function, neural net, 489, 491 in valves, 517
Active region: Armature, 300301
in pressure control valves, 503, 515516 Arrival angle, 158160, 354
in transistors, 423 Articial nonlinearity, 474
Actuator, 279 Assembly language, 312, 408
digital, 419421 Asymptotes:
linear analog, 296298 Bode plots, 111
pulse width modulation driver, 428429 root locus plots, 159, 354
rotary analog, 299303 Asymptotically stable, 475476
saturation of, 295296, 386 Attenuation, 305, 307
stepper motor, 419421 Automobile:
Adaptive controllers, 458, 470473 cruise control, 23
AD converter (see Analog-to-digital suspension model, 34
converter) Autoregressive, 464
Address bus, 408 Autotuning, 471472
Aggregation, fuzzy logic, 484, 486 Auxiliary equation, 18, 7677
Airy, G., 4 Axial turbine ow meter, 289290
Algorithm implementation, 414416
Aliasing, 318319 Back emf, 43
Alpha-cut, fuzzy logic, 481 Backward dierence, 366
Amplier, 279, 302306 Backward rectangular rule, 337338, 366
linear electronic, 422424 Ball type, pressure control valve, 499
power amplier, 305306 Band pass lter, 307
signal amplier, 303305 Bandwidth frequency, 117
Amplitude, PWM, 428 of electrohydraulic valves, 513
Analog controller, vs. digital, 6, 314315 and sample time, 339
Analog-to-digital converter (AD Base, 422424
converter), 325, 344, 403 Base current, 424425
Angle condition, root locus, 157 Batch processes, system identication,
Angle contribution, 244 459467

585
586 Index

Beat frequency, 318 Case study:


Beta factor, 423 hydraulic P/M, accumulator, ywheel,
Biasing, 426 5865
Bilinear transform, 316, 322, 338, 369 hydrostatic transmission control,
Bipolar junction transistor (BJT), 422, 553562
426427 on-o poppet valve controller, 543552
Bisector method, fuzzy logic, 486 Cauchys principle, 179
Black, H., 5 Cavitation, 533
Bladder accumulator, 534535 Center conguration, 507510
Bleed-o circuit, hydraulic, 539 Central dierence, 366
Block diagrams: Centralized controller, 313, 413
common blocks, 102104 Central processing unit (CPU), 406407
digital components, 312 Centroid method, fuzzy logic, 486
operations on, 2023 Characteristic equation (CE), 99, 127, 354
properties of, 1923 and parameter sensitivity, 435437
reduction of, 21, 6667, 145, 345347 and Routh-Hurwitz stability, 154
steady-state errors, 144151 Classical control theory, 5, 8
transfer functions, 9799 Clock speed, 408
of valve-cylinder system, 286, 529531, Closed center, spool valve, 508509,
550 519520, 540
Bode, H., 5 Closed loop vs. open loop, 142143, 344
Bode plots, 107118 Closed loop response from open loop
asymptotes, 111 Bode plots, 190192
common factors, 109113 Coil, 412
design of lters, 307 Collector, 422424
design of PID controllers, 226233 Command feedforward, 441443
from experimental data, 121124 Common emitter/collector/base, 424
magnitude plot, 107108 Compensator, 200 (see also Controller)
parameters of, 117118 feedforward, 437448
phase angle plot, 107108 series and parallel, 200201
PID contributions to, 228229 Compiler, 405
relationship to splane, 175176 Complex conjugate roots, 78
subsystems, 108 Compressibility, 297, 525
from transfer functions, 114115 ows, 515, 547549
Bond graphs: pneumatic, 525
case study, 5865, 543552 Computer:
causality, 4649 as a controller, 400
assigning, 4647 hardware, 401
elements of, 4647 interfaces, 402404
equations, 50 software, 405406
Branches, 411 Contamination, 512513
Break-away points, 160, 354 Continuity, law of, 515
Break frequency, 118 Continuous nonlinearity, 474475
Break-in points, 160, 354 Contour plots, 208, 223226, 434435
Bridge circuit, 514 Controllability, 268, 454
Brushless DC motor, 301 Control law matrix, 269
Bulk modulus, 6364 Controller:
Bumpless transfer, 367 analog vs. digital, 314315
Buses, network, 413414 centralized topography, 313
Bypass ow control, 506507 vs. compensator, 200
digital algorithm implementation,
Capacitance, hydraulic, 525, 548 414416
Index 587

[Controller] Damping ratio, 8287, 352353


digital system hardware, 400403 lines of constant, 100, 353
distributed topography, 313314 vs. percent overshoot, 86
embedded, 407 vs. phase margin, 192
feedforward, 437448 Darlington transistor, 423
general denition, 23, 200 Data acquisition boards, 402404, 557
software, 405406 channels, 403
ControlNet, 414 software issues, 405406
Control valves, hydraulic, 496523 DC motor, 300301
characteristics, 496497, 507508 brushless, 301
ow metering, 519520 DC tachometer (see Tachometer)
PQ metering, 518519 Deadband, 508
pressure metering, 520521 and cylinder position control, 521
coecients, 514517, 520 in electrohydraulic valves, 513, 520521
as control action device, 285 electronic compensation, 512
of cylinder system, 524533 eliminator, 522523
deadband characteristics, 522523 Dead-beat design, 317, 378
directional control, 507523 guidelines for, 387
dynamic models, 523 Deadhead pressure, 536
electrohydraulic, 14 Decay ratio (DR), 472
ow control, 505506 Decibel (dB), 107, 109
linearized model of, 31 Decoupling:
modeling, 514523 engine from road load, 553555
pressure control, 497505 input/output eects, 449, 451452
Convection heat transfer, 4041 Defuzzication, 483484, 486
Convergence time, 456 Degree of membership, 480481
Cost function, 449 Delay:
Counter, 411412, 418 operator, 324, 531
Counterbalance valve, 504 time, 86
CPU (see Central processing unit) Departure angle, 158160, 354
Cracking pressure, 503 Derivative causality, 49
Crisp sets, 480 Derivative control action, 204
Critical: in Bode plots, 227
center, spool valve, 508509, 519520 with velocity feedback, 206
gain, 235 Derivative time, 234, 368
points, 476 Detent torque, 420
Critically damped, 83 DeviceNet, 414
Cross-coupling, 450 Dierence equations, 319
Crossover frequency, 118, 181, 229, 232 determining coecients of, 459469
vs. the natural frequency, 191192 from discrete transfer functions, 325326,
Current driven signals, 308 330333
Curve tting, polynomials, 463464 implementing, 414416
Cut-o: from numerical dierentiation
pressure, pump, 536 approximations, 320321
rate, 307 from numerical integration
region, 423424 approximations, 321323
Cylinder, hydraulic, 296297 from PID approximations, 366368,
valve control of, 524533 530531
Dierential equations:
DA converter (see Digital-to-analog classication, 18
converter) notation, 18
Damped natural frequency, 78 solution to rst-order, 7677
588 Index

[Dierential equations] Disturbances:


solution to second-order, 7780 eects on output, 144147, 347348, 394
Dierential piston, pressure control valve, examples of, 143
499500 Dither, 429
Dierentiating amplier, 304 DMA (see Direct memory access)
Digital-to-analog converter (DA Dominant poles, 86, 104, 191, 192
converter), 324, 344, 403 and phase-lead controllers, 243
Digital controller: and sample time, 339
vs. analog, 6, 314315 and tuning, 207, 233236
from analog controller, 316 Double-ended inputs, 403
design methods overview, 315317 Drebbel, C., 4
direct design of, 316, 377394 Drive-by-wire, 553
history of, 314315 DSP (see Digital signal processor)
program vs. interrupt control, 406 Duty cycle, PWM, 428
sampling characteristics of, 317319 Dynamic response of transducers, 288
Digital encoder (see Encoder)
Digital interface, 312
Digital IO, 403, 416 Eective damping, 204
interfacing, 421431 Eigenvalues, 127, 334
Digital signal processor (DSP), 401402 Electrically erasable-programmable ROM
Digitization, 314 (EEPROM), 407
Diode, 422423 Electrical system:
yback, 427 electromechanical, 43
Directional control valve, 296, 507523 RLC circuit, 3738, 5455
approximating with on/o valves, Electric solenoid (see Solenoid)
543552 Electrohydraulic (see also Fluid power):
center congurations, 508509 control systems, 553562
coecients, 514517, 520 valves, 14, 511514
as control action device, 285 comparison of, 513
of cylinder system, 524533 Electromagnet, 300
deadband characteristics, 522523 Electromotive force, 300
dynamic models, 523 Embedded controller, 407
electrohydraulic, 14 Emitter, 422424
linearized model of, 31 Encoder, 417418
modeling, 514523 absolute, 417418
on/o, 510511 incremental, 417
proportional, 511512 Energy loss control method, 496497
terminology, 507510 Equilibrium, 34, 574
Direct memory access (DMA), 402, 405 Erasable-programmable ROM (EPROM),
Direct response design, 386389 407
Discontinuous nonlinearity, 474475 Error:
Discrete state-space matrices, 334338 steady-state, 144151
converting to transfer functions, 336 of transducers, 288
Discrete transfer function, 325326, Error detectors, 279, 282, 312 (see also
330333 Summing junction)
converting to discrete matrices, 335 Estimator (see Observer)
Distributed controller, 313, 413 Ethernet, 414
Disturbance rejection: Euler integration, 128
and direct response design, 387 Evans, W., 5
with feedforward compensation, 438440 Expert system, 491492
and sample time, 339 Exponentially stable, 475476
Index 589

FAM (see Fuzzy association matrix)
Feasible trajectory, 387–388, 442
Feedback linearization, 477–478
Feedforward compensation, 437–448
  adaptive algorithms, 473
  command feedforward, 441–443
  disturbance input rejection, 438–440
Field effect transistor (FET), 422, 426–427
Filter, 307–308
  active, 305, 307
  aliasing effects, 319
  integrated circuit, 308
  passive, 305, 307
Final value theorem (FVT), 100, 348–349
Finite differences, 316
First-order system:
  normalized Bode plot, 111–112
  normalized step response, 81
Flapper nozzle, 512–513
Flash converter, 404
Float regulator, 3
Flow:
  chart, programming, 559–562
  control circuit, 536–540
  control valve, 505–506
  equations, valve, 514–523
  gain, valve, 520
  meter, 289–290
  metering characteristics, valve, 519–520
  paths, control valve, 514
Fluid power:
  circuits, 534–542
    bleed-off, 539
    hi-lo, 537–538
    meter-in/out, 538–539
    regenerative, 540
  components, 495
    actuators, 296–300, 524
    control valves, 496–523
  strategies, 533–534
  power management, 540–543
Flyback diode, 427, 430–431
Forgetting factor, 469
Forward difference, 366
Forward rectangular rule, 337–338, 366
Frequency:
  PWM, 428
  to voltage converter, 559
Frequency response (see Bode plots)
Full duplex, 313–314
Full state observer, 454
Fuzzification, 483
Fuzzy association matrix (FAM), 484, 487
Fuzzy logic, 478–489
  crisp vs. fuzzy, 480
  membership function, 480
Fuzzy sets, 480

Gain margin, 118, 174–178, 258, 261
  in controller design, 228–233
Gain scheduling, 470–471
Genetic algorithm, 491
Global stability, 153, 474–476
Gray code, 417–418
Guided poppet, pressure control valve, 498–499

Half duplex, 313–314
Half-step, 420
Hall effect transducer, 294–295, 418
Hamilton, W., 4
Hardware in the loop, 405
Hazen, H., 5
Hedging, fuzzy logic, 481
Height, fuzzy logic, 481
Heuristic approach, 479
Higher-level buses, 413–414
High frequency asymptote, 111
High pass filter, 307
Hi-lo circuit, hydraulic, 537–538
Hold, 325
Holding torque, 420
HST (see Hydrostatic transmission)
Hurwitz, A., 4
Hybrid vehicle, 553–562
Hydraulic (see also Fluid power):
  accumulator, 60, 61–62
  position control system, 285–287
  pump/motor, 58, 299
Hydraulic system:
  case study, 58–65
  industrial vs. mobile, 495–496
  modeling, 38–39
Hydropneumatic accumulator, 61–62
Hydrostatic transmission (HST), control of, 553–562
Hysteresis, 474–475
  in electrohydraulic valves, 513, 520
  in pressure control valves, 500

Idle time, 534
IGBT (see Insulated-gate bipolar transistor)
Impedance, 403, 426
Implication, fuzzy logic, 485–486
Impulse:
  function, 324, 332
  input, 47, 89, 146, 205
Impulse invariant method, 316
Increasing phase systems, 177–178
Incremental encoder, 417
Incremental PI algorithm, 367–368
Induction motor, 302
Inductive loads, 422
  with flyback diodes, 427, 430–431
Initial conditions, 78
Initial value theorem (IVT), 100, 348–349
Input contacts, 411
Input impedance, 303, 307
Inputs (see Impulse; Ramp; Step, input)
Instruction set, 408
Insulated-gate bipolar transistor (IGBT), 422, 426–427, 429
Integral:
  causality, 49
  control action, 203–204
    in Bode plots, 227
  reset switch, 281
  time, 234, 368
  windup, 203–204, 281, 368
Integrated circuit, filter, 308
Integrating amplifier, 304
Interconnection, neural net, 489, 491
Interfaces with computers, 401–402
Interrupt program control, 406, 413
Intersample oscillations, 388
Inverting amplifier, 304
I-PD control, 205, 368–369
Isolation (see Protection)

Jet pipe, 512
Jump resonance, 474

Kalman:
  controller, 388
  filter, 449, 456–457
Kirchhoff's laws, 37, 515–517

Ladder logic, 409, 411–413
Lag:
  in digital systems, 377
  system transfer function, 104, 108
Lag-lead controllers, design of, 248–253 (see also Phase-lag/lead controllers)
Lagrange, J., 5
Laminar flow, 289
Lands, in control valves, 507–509
Laplace transforms, 87–97
  of digital impulse function, 324
  partial fraction expansion, 90–97
  solution steps, 88
  table of common pairs, 89
Largest of maximum (LOM), fuzzy logic, 486–487
Laser, 292
Lead system, 108
Leakage, 520
Least squares system identification, 459–470
  batch processing, 459–466
  recursive algorithms, 467–470
  weighting matrix, 467
Left-hand plane (LHP), 100
Leibniz, G., 4
Limit cycles, 474
Limit switch, 291, 418
Linearization, 26–31
Linear quadratic regulator (LQR), 449, 456–457
Linear superposition, principle of, 144, 474
Linear variable displacement transducer (see LVDT)
Linguistic variables, fuzzy logic, 484
Liquid level system, 41–43, 57–58
Load sensing:
  pressure-compensated (PCLS), 540–543
  relief valve (LSRV), 542
Local stability, 474–476
LOM (see Largest of maximum)
Lookup tables, 442
Loss function, 459
Lower-level buses, 413–414
Low frequency asymptote, 111
Low pass filter, 307, 430
LQR (see Linear quadratic regulator)
LU decomposition, 469
LVDT, 291
Lyapunov:
  equations, 456
  direct vs. indirect, 476
  function, 476
  methods for nonlinear systems, 475–476
  stability, 476
Lyapunov, A., 5

Machine code, 312
Magnetic pickup, 3, 12, 294, 418–419, 557
Magnetostrictive, 291
Magnitude condition, root locus, 157
Marginal stability, 153–154, 531–532
Matlab commands (introduction of):
  axis, 448
  bode, 186
  conv, 252
  ctrb, 270
  c2d, 328
  det, 270
  dlqr, 457
  eig, 131
  feedback, 220
  figure, 186
  gensig, 448
  hold, 225
  imag, 436
  kalman, 457
  linspace, 436
  lsim, 252
  margin, 186–187
  nyquist, 186–187
  place, 270
  plot, 436
  pole, 436
  rank, 270
  real, 436
  residue, 92
  rlocfind, 186–187, 216
  rlocus, 174
  rltool, 216, 384
  roots, 131
  sgrid, 221–222
  step, 220
  subplot, 217
  tf, 131
  tf2ss, 131
  zgrid, 359
Maximum overshoot (see Percent overshoot)
Maxwell, J., 4
Mechanical controllers, 284–286
  feedback wire in servovalve, 513
Mechanical-rotational system, 36, 52–53
Mechanical-translational system, 32–36, 50–51
Membership function, 480–483
  examples of, 482
Metal oxide semiconductor field effect transistor (MOSFET), 426–427, 429–431
Metering, 516
Meter-in/out circuit, hydraulic, 538–539
MIAC (see Model identification adaptive controller)
Microcontroller, 10, 400, 406–408
Microprocessor, 312, 406, 409
  for fuzzy logic, 487
Microstep, 420
Middle of maximum (MOM), fuzzy logic, 486–487
MIMO (see Multivariable controllers)
Minimum phase system, 124
Minorsky, N., 5
Model identification adaptive controller (MIAC), 473
Modeling:
  bond graphs (power flow), 45–65
  directional control valves, 514–523
    and cylinders, 524–533
    dynamics of, 523
  effects of errors, 434–437, 446
  energy methods, 44–45
  Newtonian physics, 32–40
  relationships between systems, 33
  valve-controlled cylinder, 524–532
    block diagram, 528–530
Model reference adaptive controller (MRAC), 472–473
Modern control theory, 6, 8
MOM (see Middle of maximum, fuzzy logic)
MOSFET (see Metal oxide semiconductor field effect transistor)
Motor:
  AC, 302
  DC, 300–301
  hydraulic, 299–300
  stepper motor, 302, 367, 419–421
    vs. DC motor, 419
    drivers, 403, 419
    parameters, 421
    permanent magnet, 419
    variable reluctance, 419–420
  torque, 512–513
MRAC (see Model reference adaptive controller)
Multiple input, multiple output (see Multivariable controllers)
Multiple valued, nonlinearity, 474–475
Multiplexed, 403, 404
Multivariable controllers, 448–457
  observers, 454–456
  using state-space equations, 453–457
[Multivariable controllers]
  using transfer functions, 449–453

Natural frequency, 82–87, 191–192, 352–353
  and sample time, 339
Natural nonlinearity, 474
Neural net, 489, 491
Neurons, 489
Newton, I., 4
Newton's law of motion, 32
Noise amplification:
  with analog amplifiers, 306
  with derivative controllers, 204
Noise immunity, 312
Noninverting amplifier, 304
Nonlinear:
  controllers, 476–492
  systems, 474–476
Nonlinearities, characteristics, 474–475
Non-minimum phase system, 124
Nonparametric models (see System identification)
npn transistor, 423–424
Numerical approximations:
  of differentials, 320
  of integrals, 321
  of PID controller, 366–369
Numerical integration:
  Euler method, 128
  Runge-Kutta method, 128
Numerical simulations, 475
Nyquist frequency criterion, 318
Nyquist, H., 5
Nyquist plots, 118–121, 178–181

Observability, 454
Observer, 434, 454–456
Octave, 111
On-site tuning methods:
  analog PID controllers, 233–236
OpAmp:
  comparator, 304
  control action circuits, 281–282
  inverting amplifier, 304
  signal amplifier, 303–305
Open center, spool valve, 508, 519–520, 540
Open loop vs. closed loop, 142–143, 344
Operating envelope, valve-cylinder system, 526
Operational amplifiers (see OpAmp)
Optical encoder (see Encoder)
Optimal control, 449, 453, 456–457
Optoisolator, 305, 422
Orifice:
  equations, 516
  matched, 516
  in pilot-operated pressure control valve, 501
  in servovalve, 513
Overdamped, 83
Over lapped (see Closed center, spool valve)
Overrunning load, 519, 526, 539

Pade approximation, 458
Parallel compensation, 200–201
Parallel port, 401–402
Parameter sensitivity, 434–437, 446
Parametric models (see System identification)
Partial fraction expansion, 90–97
PCI (see Peripheral component interconnect)
PCLS (see Load sensing)
PCMCIA (see Personal computer memory card international association)
PC-104, 414
PD control action, 203, 228, 255
  with approximate derivative, 283
  implementation with OpAmps, 282
Peak time, 85
Percent overshoot, 86
Perceptron learning rule, 489, 491
Performance:
  curve, valve-cylinder system, 526
  index, 449
  specifications, 80–87
Period, 317
  with PWM, 428
Peripheral component interconnect (PCI), 402
Permanent magnet:
  DC motor, 300
  stepper motor, 419
Perron-Frobenius (P-F), 453
Personal computer memory card international association (PCMCIA), 401–402
PFM (see Pulse frequency modulation)
Phase-lag controller:
  digital from analog conversion, 372–376
  implementation with OpAmps, 282
  root locus design steps, 238
Phase-lag/lead controllers, 236–257
  Bode plot design method, 254–267
  comparison with PID, 237, 256
  implementation with OpAmps, 282
  root locus design method, 249
Phase-lead controller:
  digital from analog conversion, 372–376
  implementation with OpAmps, 282
  root locus design steps, 243–244
Phase margin, 118, 174–178, 258, 261
  in controller design, 228–233
  vs. damping ratio, 192
PI control action, 203, 228, 254
  implementation with OpAmps, 282
  incremental difference equation algorithm, 367–368, 530
PI-D control, 206
PID controller, 201–206, 226–236
  approximation with difference equations, 366–369
  comparison with phase-lag/lead, 237, 256
  digital from continuous conversion, 369–370
  frequency response design of, 226–233
  implementation with OpAmps, 282
  root locus design of, 207–226
  transfer function of, 202
Piezoelectric, 287–288, 294, 298–299
Pilot-operated pressure control valve, 500–501
Piston accumulator, 534–535
Plant, 17, 141, 442
PLC (see Programmable logic controller)
Pneumatic, 525
pnp transistor, 423–424
Polar plots (see Nyquist plots)
Pole, 97, 100, 127
Pole-zero cancellation, 212, 240, 387
Pole-zero matching, 316, 369, 372
Pole-zero placement:
  from performance goals, 100
  with phase-lead controllers, 244
  with PID controllers, 207, 213–215
  with state-space controllers, 267–271, 453
Position PI algorithm, 367
Positive definite function, 476
Potentiometer:
  linear, 291
  rotary, 293, 557
Power:
  amplifier, analog, 305–306
  management, fluid power, 540–543
  maximum, valve-cylinder system, 527
PQ (see Pressure-flow)
Pre-compute, 442
Pressure:
  intensification, 539
  metering characteristics, valve, 520–521
  minimum, valve-cylinder system, 528
  override, 503
  reducing valve, 498, 505, 537
  sensitivity, 521
  transducer, 287–289
Pressure compensation:
  in control valves, 505–506
  load sensing, 540–543
  in pumps, 534, 536, 540–542
Pressure control:
  in circuits, 533–540
  valve, 497–505
    characteristics, 503–504
    counterbalance, 504
    poppet vs. spool, 502–503
    reducing, 498, 505
    relief, 498–504
    sequence, 504
    unloading, 503–504
Pressure-flow (PQ):
  equations, 514–518
  metering characteristics, 518–519
Priority valve, 504
Proactive vs. reactive compensation, 437
Program control, 406, 411, 413
  flow charts, 559–562
Programmable logic controller (PLC), 13, 315
  vs. computer, 400
  history of, 409
Proportional control action, 203
  in Bode plots, 227
  with hydraulic system, 285
Protection:
  of microprocessors, 400
  optical-isolator, 422
  over-voltage, 403
Pull in/out rate, 420–421
Pull in/out torque, 420–421
Pull-up resistor, 429
Pulse frequency modulation (PFM), 431
Pulse width modulation (PWM):
  approximate analog output, 430
  creating, 418
  description of, 427–429
[Pulse width modulation (PWM)]
  implementation, 429–431
  outputs, 403
Pump:
  pressure-compensated, 534–536
  unloading of, 510
  variable displacement, 536
Pump/motor (P/M), 58
PWM (see Pulse width modulation)

QR decomposition, 469
Quadratic equation, 77
Quadratic optimal regulator, 267
Quantization, 343, 400, 530
  errors, 404–405

Rail, 303
RAM (see Random access memory)
Ramp:
  function, valves, 15, 512
  input, 89, 149, 202
  response, 252–253, 390–391
Random access memory, 407
Range, of transducers, 288
Rank, of controllability matrix, 268
Read-only memory (ROM), 407
Realizable, 378, 387
Recursive least squares algorithm (RLS), 468–469
  in model identification adaptive controller, 473
Recursive solution, system identification algorithm, 467–470 (see also Difference equations)
Reduced-order state observer, 454–455
Reference model, 472–473
Regenerative:
  hydraulic circuit, 540
  vehicle braking, 553–562
Relative stability, 154
Relay, 305
  physical vs. simulated in PLC, 409, 412
  solid-state vs. mechanical, 421
Reliability, 400
Resistance-temperature detector (see RTD)
Resolution, 403–404, 416, 417
  bits of, 404–405
  of stepper motor, 419–420
Resolver, rotary, 293
Riccati equations, 456
Right-hand plane (RHP), 100
Rise time, 85
Robust systems, hydraulic, 533
ROM (see Read-only memory)
Root locus:
  angle and magnitude condition, 157
  design of digital controllers, 377–386
  design of phase-lag controllers, 238
  design of phase-lead controllers, 243
  design of PID controllers, 207–226
  examples of, 162–173
  guidelines for construction of:
    in the s-plane, 158
    in the z-plane, 354
  parameter sensitivity, 434–437
  stability regions:
    in the s-plane, 100
    in the z-plane, 353
Rosenbrock approach, 452
Rotor, 43, 301–302, 419–420
Routh, E., 4
Routh-Hurwitz stability criterion, 154–156
RTD, 295
Rule-based inferences, fuzzy logic, 484
Runge-Kutta integration, 128

Safety valve, pressure, 498
Sample and hold (see ZOH)
Sample period, 317, 325
Sampler, 344–347
Samples per second, 403
Sample time:
  effects of, 317–319, 532
  guidelines for choosing, 338–339
Saturation:
  with derivative controllers, 204–206
  and limits with digital controllers, 386
  nonlinearity, 474–475
  with OpAmps, 281
  with transistor, 423–424, 427, 429
SCADA (see Supervisory control and data acquisition)
Scan time, 414–415
Schmitt trigger, 416, 418–419
Scope, fuzzy logic, 481
SCR (see Silicon-controlled rectifier)
Second-order system:
  normalized Bode plot, 113–114
  normalized step response, 85
Self-tuning, adaptive, 47
Sensing piston, 501–502
Sensor vs. transducer, 287 (see also Transducer)
Sequence valve, 504
Serial port, 401–402
Series compensation, 200–201
Servo, 2
Servomotor, 301
Servovalve, 508, 512–513
  vs. proportional valve, 513, 522
Set-point-kick, 205, 369
Settling time, 85
Shift operator, 324
Shuttle valves, 541–542
Sigmoid, 489, 491
Signal-to-noise ratio, 306
Silicon-controlled rectifier (SCR), 422
Simulink blocks (introduction of):
  constant, 550
  dead zone, 530
  discrete transfer function, 531
  fcn (function), 531
  gain, 530
  integrator, 530
  multiplication, 550
  mux (multiplex), 530
  PID controller, 530
  rate limiter, 550
  saturation, 530
  scope, 530
  signal generator, 529–530
  sqrt (square root), 550
  step input, 550
  summing junction, 529–530
  transfer function, 530
  unit delay (z⁻¹), 531
  to workspace, 550
  zero order hold (ZOH), 531
Simultaneous equations, solving, 460–463
Single-ended inputs, 403
Single phase motor, 302
Singletons, 481
Single valued, nonlinearity, 474–475
Slew:
  range, 420–421
  speed, 518, 526, 551
Sliding mode control, 477
Smallest of maximum (SOM), fuzzy logic, 486–487
Software, 405–406
Solenoid, 12, 298, 557
Solid-state switch (see Transistor)
s-plane, 87
  relationship to Bode plots, 175–176
  stability in, 100–101
  time response equivalent, 101
Spool, 508
Stability:
  of adaptive controllers, 470
  adding with PD controllers, 219, 232
  adding with phase-lead controllers, 244
  of feedback systems, 153–154
  in the frequency domain, 174–179
  local vs. global, 474–476
  Lyapunov, 475–476
  of nonlinear systems, 474
  with parameter variance, 434–437
  Routh-Hurwitz criterion, 154–156
  vs. sample time, 532
  in the s-plane, 100–101
  of transducers, 288
  in the z-plane, 351–353, 531–532
State-space controller:
  disturbance rejection, 453
  for multivariable systems, 453–457
  pole-placement design, 267–271
State-space equations:
  from bond graphs, 49–58, 61–62, 547–548
  discrete systems, 334–338
  eigenvalues, 127, 334
  matrix notation, 24
  representation of, 23
  solutions to, 127–129
  from transfer functions, 129–131
  to transfer functions, 125–126
Stator, 301–302, 419–420
Steady-state errors:
  solving for, 144–151, 348–351
Steady-state gain, 118
Step, input, 80, 89, 101, 149, 205
Step input response, 80–87
  first-order systems, 80–82
  second-order systems, 83–87
    characteristics, 85–86
Stepper motor, 302, 367, 419–421
  vs. DC motor, 419
  drivers, 403, 419
  parameters, 421
  permanent magnet, 419
  variable reluctance, 419–420
Stodola, A., 4
Strain gage, 287–288, 294, 295
Successive approximation, 404
Summing amplifier, 304
Summing junction:
  in block diagrams, 2, 19–20, 22, 202
[Summing junction]
  implementation with OpAmps, 282, 304
  in mechanical controllers, 285
Superposition, principle of, 144, 474
Supervisory control and data acquisition (SCADA), 313
Suspension system, vehicle, 34
Symmetrical, valve, 516
Synchronous motor, 302
System identification:
  adaptive controller, 473
  nonparametric models, 458
    using Bode plots, 121–124
    using step response plots, 80–86
  parametric models, 458
    using difference equations, 460, 464–465
    using input-output data, 458–470
System type number:
  and block diagrams, 148–151
  and Bode plots, 152

Tachometer, 294
Tandem center valve, 510, 534–535, 538–540
Temperature compensation, 306, 506
Temperature transducer, 295
Thermal system, 40–41, 55–56
Thermistor, 295
Thermocouple, 295
Three-phase motor, 302
Time constant, 80–82
  and sample time, 338
Timer, 411–412
Torque motor, 512–513
Tracking performance, 437, 441–443
Transducer, 279
  digital, 416–418
  flow, 289–290
    with digital IO, 418
  important characteristics, 288
  linear analog, 290–292
  optical, 417
  pressure, 287–289
  rotary, 293–294, 417–418
  vs. sensor, 287
  temperature, 295
Transfer functions, 97–104
  characteristic equation (CE), 99
  common forms, 102–104
  definition, 98
  discrete, 325–326
  from state-space matrices, 125–126
  to state-space matrices, 129–131
Transfer rates, 402–403
Transient response characteristics:
  of first-order systems, 80–82
  of second-order systems, 83–87
  and s-plane locations, 101, 152–153
  and z-plane locations, 352–353
Transistor, 305, 421–431
  beta factor, 423
  characteristics, 427
  operating regions, 423–424
  power dissipation, 424, 427
Trapezoidal approximation (see also Bilinear transform; Tustin's method), 322, 366
Turbulent flow, 289
Tustin's method, 316, 322, 369–370
Two input, two output (TITO) controller, 449–453 (see also Multivariable controllers)

Ultimate cycle, 235
Underdamped, 83
Under lapped (see Open center, spool valve)
Universal serial bus (USB), 402
Unloading valve, 503–504
User routine, 411

Vacuum tube, 422
Valve:
  characteristics, 496–497, 507–508
    flow metering, 519–520
    PQ metering, 518–519
    pressure metering, 520–521
  coefficients, 514–517, 520
  as control action device, 285
  deadband characteristics, 522–523
  electrohydraulic, 14, 511–514
  linearized model of, 31
Variable reluctance:
  proximity sensor, 418
  stepper motor, 419–420, 535
Velocity PI algorithm, 367–368
Voltage regulation, 559
Volume control strategy, 496

Watt, J., 5
Weighted least squares, system identification, 467
Words, 408

Zero lapped (see Critical: center, spool valve)
Zero-order hold (see ZOH)
Ziegler-Nichols tuning, 233–236
  with digital controllers, 371
  step response parameters, 235
  ultimate cycle parameters, 236
ZOH:
  development of, 325–326
  location of in model, 344–345
z transform:
  development of, 323–327