Digital
Control Systems
Volume 2:
Stochastic Control, Multivariable Control,
Adaptive Control, Applications
Springer-Verlag
Berlin Heidelberg New York
London Paris Tokyo
Hong Kong Barcelona Budapest
Professor Dr.-Ing. Rolf Isermann
Institut für Regelungstechnik
Technische Hochschule Darmstadt
Schloßgraben 1
D-6100 Darmstadt, West Germany
This work is subject to copyright. All rights are reserved, whether the whole or
part of the material is concerned, specifically the rights of translation, reprinting,
re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in
other ways, and storage in data banks. Duplication of this publication or parts
thereof is only permitted under the provisions of the German Copyright Law of
September 9, 1965, in its version of June 24, 1985, and a copyright fee must always
be paid. Violations fall under the prosecution act of the German Copyright Law.
© Springer-Verlag Berlin Heidelberg 1991
Softcover reprint of the hardcover 2nd edition 1991
The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Typesetting: Macmillan India Ltd, Bangalore 25
(lower susceptibility to malfunctions) and can lead to savings in wiring. The second
phase of process automation is thus characterized by decentralization.
Besides their use as substations in decentralized automation systems, process
computers have found increasing application in individual elements of automation
systems. Digital controllers and user-programmable sequence control systems,
based on microprocessors, have been on the market since 1975.
Digital controllers can replace several analog controllers. They usually require an
analog-digital converter at the input because of the wide use of analog sensors,
transducers and signal transmission, and a digital-analog converter at the output
to drive actuators designed for analog techniques. It is to be expected that, in the
long run, digitalization will extend to sensors and actuators. This would not only save a-d and d-a converters, but would also circumvent certain noise problems, permit the use of sensors with digital output, and allow the preprocessing of signals in digital measuring transducers (for example, choice of measurement range, correction of nonlinear characteristics, computation of characteristics not measurable in a direct way, automatic failure detection, etc.). Actuators with digital control will be
developed as well. Digital controllers are not only able to replace one or several analog controllers; they can also perform additional functions previously exercised by other devices, as well as entirely new functions. The additional functions include programmed sequence control of setpoints, automatic switching between various controlled and manipulated variables, feedforward adjustment of the controller parameters as functions of the operating point, additional monitoring of limit values, etc.
Examples of new functions are: communication with other digital controllers, mutual redundancy, automatic failure detection and failure diagnosis, and the possibility of choosing between different control algorithms, in particular selftuning or adaptive control algorithms. Entire control systems, such as cascade-control systems, multivariable control systems with coupling controllers, or control systems with feedforward control, can be realized with one digital controller and can easily be changed by configuration of the software at commissioning time or later. Finally, very large ranges of the controller parameters and of the sample time can be realized. Because of these many advantages, various digital devices for process automation are presently being developed, either complementing or replacing analog process control technology.
Compared with analog control systems, digital control systems using process computers or process microcomputers have the following characteristics:
- Feedforward and feedback control are realized in the form of software.
- Discrete-time signals are generated.
- The signals are quantized in amplitude through the finite word length in a-d
converters, the central processor unit, and d-a converters.
- The computer can automatically perform the analysis of the process and the
synthesis of the control.
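The second and third characteristics can be made concrete in a few lines. The sketch below is illustrative only: the signal, the sampling grid and the word lengths are assumed example values, not values from the text.

```python
import numpy as np

def quantize(x, word_length, full_scale=1.0):
    """Amplitude quantization by a finite word length, as in an a-d or
    d-a converter with `word_length` bits (sign bit included)."""
    q = full_scale / 2 ** (word_length - 1)   # quantization unit
    return q * np.round(np.asarray(x) / q)

# discrete-time signal: a sine sampled with an assumed sample time of 0.01
t = np.arange(0.0, 1.0, 0.01)
x = np.sin(2 * np.pi * t)

x8 = quantize(x, 8)                           # 8-bit word length
# the quantization error is bounded by half a quantization unit
print(np.max(np.abs(x - x8)))
```

With an 8-bit word length the error stays below q/2 = 2⁻⁸ of full scale; a longer word length reduces it correspondingly.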
Because of the great flexibility of control algorithms stored in software, one is not limited, as with analog control systems, to standardized modules with P-, I- and D-behaviour; one can also use more sophisticated algorithms based on
17 Feedforward Control
17.1 Cancellation Feedforward Control
17.2 Parameter-optimized Feedforward Control
17.2.1 Parameter-optimized Feedforward Control without a Prescribed Initial Manipulated Variable
17.2.2 Parameter-optimized Feedforward Control with Prescribed Initial Manipulated Variable
17.2.3 Cooperation of Feedforward and Feedback Control
17.3 State Variable Feedforward Control
17.4 Minimum Variance Feedforward Control
References
Graphic Outline of Contents (Volume II)

Column headings: Design of Control Systems Structure; Design of Control Algorithms; Information on Process and Signals; Realization with Digital Computers

1 Introduction
A Fundamentals
2 Control with Digital Computers (Process Computers, Microcomputers)
3 Fundamentals of Linear Sampled-data Systems (Discrete-time Systems)
6 General Linear and Cancellation Controllers
7 Deadbeat Controllers
9 Controllers for Processes with Large Deadtime
10 Robust Controllers
11 Comparison of Control Algorithms
14 Minimum Variance Controllers
17 Feedforward Control
19 Parameter-optimized Multivariable Control Systems
20 Matrix Polynomial Controllers
24/25 Process Identification
27 Amplitude Quantization
ℱ( )  Fourier transform
ℑ  information
ℒ( )  Laplace transform
𝒵( )  z-transform
𝒵{ }  correspondence G(s) → G(z)
α  coefficient
β  coefficient
γ  coefficient; or state variable of the reference variable model
δ  deviation, or error
ε  coefficient
ζ  state variable of the noise model
η  state variable of the noise model; or noise/signal ratio
κ  coupling factor; or stochastic control factor
λ  standard deviation of the noise v(k)
μ  order of P(z)
ν  order of Q(z); or state variable of the reference variable model
π  3.14159 ...
σ  standard deviation, σ² variance; or related Laplace variable
τ  time shift
ω  angular frequency, ω = 2π/T_P (T_P period)
Δ  deviation; or change; or quantization unit
Θ  parameter
Π  product
Σ  sum
Ω  related angular frequency
ẋ = dx/dt
x₀  exact quantity
x̂  estimated or observed variable
Δx = x̂ − x₀  estimation error
x̄  average
x∞  value in steady state
Mathematical abbreviations
exp(x) = eˣ
E{ } expectation of a stochastic variable
var[ ] variance
cov[ ] covariance
dim dimension, number of elements
tr trace of a matrix: sum of diagonal elements
adj adjoint
det determinant
Indices
P process
Pu process with input u
Other abbreviations
ADC analog-digital converter
CPU central processing unit
DAC digital-analog converter
PRBS pseudo-random binary signal
WL word length
Remarks
The vectors and matrices in the figures are set in roman type and underlined; an underlined x in a figure thus corresponds to x in the text, and an underlined K to K.
The symbol for the unit of the time t is usually s (seconds), and sometimes sec, in order to avoid misinterpretation as the Laplace variable s.
C Control Systems for Stochastic
Disturbances
12 Stochastic Control Systems (Introduction)
This section presents some equations describing signal processes which are re-
quired in the design of stochastic controllers and state estimators. However,
a detailed introduction and derivation cannot be given here, so the reader is
(12.2.4)
φ_xy(τ) = 0 .   (12.2.8)
For white noise, a current signal value is statistically independent of all past values. It has no intrinsic relationship and, in the case of a Gaussian amplitude distribution, it is completely described by the mean x̄ and the covariance function
(12.2.12)
On the diagonal are the n autocovariance functions of the individual scalar signals, and all other elements are crosscovariance functions. Note that the covariance matrix is symmetric for τ = 0.
Example 12.2.1: x₁(k) and x₂(k) are two different white random signals. Then their covariance matrix is:

cov[x, τ = 0] = [ σ²ₓ₁  0
                  0    σ²ₓ₂ ]

[ x₁(k + 1) ]   [ 0   1  ] [ x₁(k) ]   [ 0 ]
[ x₂(k + 1) ] = [ a₁  a₂ ] [ x₂(k) ] + [ 1 ] v(k)   (12.2.18)
E{v(k)} = v̄

cov[v(k), τ] = V for τ = 0 ;  0 for τ ≠ 0

E{x(0)} = x̄(0)   (12.2.22)

cov[x(0), τ = 0] = X(0)

E{[x(k) − x̄][v(k) − v̄]ᵀ} = 0 .
are within the unit circle of the z-plane, and if A and F are constant matrices, then for k → ∞ a stationary signal process with covariance matrix X is obtained, which can be calculated recursively using (12.2.25), giving

X = A X Aᵀ + F V Fᵀ .   (12.2.26)
In the following, the expectation of a quadratic term of the form xᵀ(k)Q x(k) will be required, where x(k) is a Markov process with covariance matrix X, and both X and Q are nonnegative definite matrices. Using

xᵀQx = tr[Q x xᵀ]   (12.2.27)

where the trace operator tr produces the sum of the diagonal elements, it follows, for x̄(k) = 0:

E{xᵀ(k)Q x(k)} = E{tr[Q x(k)xᵀ(k)]} = tr[Q E{x(k)xᵀ(k)}] = tr[Q X] .   (12.2.28)

If x̄(k) = x̄ ≠ 0, then accordingly

E{xᵀ(k)Q x(k)} = x̄ᵀQ x̄ + tr[Q X] .   (12.2.29)
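Both relations can be checked numerically. In the sketch below, the matrices A, F, V and Q are an assumed stable example system; the stationary covariance is obtained by iterating (12.2.26), and tr[QX] is compared with a Monte-Carlo estimate of E{xᵀQx}.

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed example system x(k+1) = A x(k) + F v(k), v(k) ~ N(0, V)
A = np.array([[0.0, 1.0],
              [-0.2, 0.5]])          # eigenvalues inside the unit circle
F = np.array([[0.0], [1.0]])
V = np.array([[1.0]])
Q = np.eye(2)

# stationary covariance from the recursion X = A X A^T + F V F^T
X = np.zeros((2, 2))
for _ in range(500):
    X = A @ X @ A.T + F @ V @ F.T

# Monte-Carlo estimate of E{x^T Q x} for the zero-mean case
x = np.zeros(2)
samples = []
for k in range(100_000):
    x = A @ x + F @ rng.standard_normal(1)
    if k > 100:                      # skip the transient
        samples.append(x @ Q @ x)

print(np.mean(samples), np.trace(Q @ X))   # the two values nearly agree
```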
if the disturbance signals are known. When using a process computer, the stochastic noise can first be stored and then used in the optimization of the controller parameters. If the disturbance is stationary, and if it has been measured and stored for a sufficiently long time, it can be assumed that the designed controller is also optimal for future noise, so that a mathematical noise model is not necessary for the parameter optimization.
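A minimal sketch of this procedure, with M = 240 and r = 0 as in the text, is given below. The second-order process and its coefficients, the starting values, and the use of scipy's Nelder-Mead simplex search (standing in for the Fletcher-Powell method of the text) are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# assumed second-order process: y(k) = -a1 y(k-1) - a2 y(k-2)
#                                      + b1 u'(k-1) + b2 u'(k-2),
# where u' = u + v and v is the stored disturbance acting on the process input
a1, a2, b1, b2 = -1.3, 0.5, 0.1, 0.08
M = 240                                        # horizon as in the text
v = 0.1 * rng.standard_normal(M + 2)           # stored noise realization

def criterion(q):
    """Quadratic control performance for a three-parameter controller
    u(k) = u(k-1) + q0 e(k) + q1 e(k-1) + q2 e(k-2), with w(k) = 0."""
    q0, q1, q2 = q
    y = np.zeros(M + 2); u = np.zeros(M + 2); e = np.zeros(M + 2)
    for k in range(2, M + 2):
        up1, up2 = u[k-1] + v[k-1], u[k-2] + v[k-2]
        y[k] = -a1 * y[k-1] - a2 * y[k-2] + b1 * up1 + b2 * up2
        if abs(y[k]) > 1e6:                    # unstable parameter set
            return 1e6
        e[k] = -y[k]
        u[k] = u[k-1] + q0 * e[k] + q1 * e[k-1] + q2 * e[k-2]
    return float(np.mean(y[2:] ** 2))

res = minimize(criterion, x0=[1.0, -1.0, 0.3], method="Nelder-Mead")
print(res.x, res.fun)
```

The optimizer only ever evaluates the criterion on the one stored noise realization, which is exactly the idea described above.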
In the following, some simulation results are presented which show how the optimized controller parameters change, compared with the parameters obtained for step changes of the disturbances, for the test processes II and III. A three-parameter control algorithm

(13.2)

as in (5.2.10) is used. A stochastic disturbance v(k), as in Figure 5.1, acts on the input of the process; it is taken to be a normally distributed discrete-time white noise with
E{v(k)} = 0   (13.3)
and standard deviation
(13.4)
Then one has n(z) = G_P(z)v(z). For this disturbance the controller parameters were determined by minimization of the control performance criterion (13.1) for M = 240 and r = 0, using the Fletcher-Powell method. Table 13.1 gives the resulting controller parameters, the quadratic average value of the control error S_e (control performance), and the quadratic average value of the deviation of the manipulated variable S_u (manipulation effort).
Table 13.1 Controller parameters, control performance and manipulation effort for stochastic disturbances v(k) (processes II and III; criteria S_e,stoch → Min and S_e,step → Min; sample times T₀ = 4 s and T₀ = 8 s; controllers 3 PC-3 and 3 PC-2; numerical entries not reproduced here)
processes, and c_D increases, with the exception of process II, T₀ = 4 s. The integration factor c_I tends towards zero in all cases, as there is no constant disturbance, i.e. E{v(k)} = 0. The controller action in most cases becomes weaker, as the manipulation effort S_u decreases. Nevertheless the control performance is improved, as shown by the values of the stochastic control factor κ. The inferior control performance and the increased manipulation effort of the controllers optimized for step changes indicate that the stochastic disturbances excite the resonance range of the control loop. As the stochastic disturbance n(k) has a relatively large spectral density at higher frequencies, the κ-values of the stochastically optimized control loops are only slightly below one. The improvement in the effective value of the output due to the controller is therefore small compared with the process without control; this is especially true for process II.
For the smaller sample time T₀ = 4 s a much better control performance is obtained for process III than with T₀ = 8 s. For process II the control performance is about the same in both cases. For the controller 3 PC-2, with a given initial input u(0) = q₀ and two parameters q₁ and q₂ to be optimized, only one value of q₀ was given. For process II, q₀ was chosen too large; the control performance is therefore worse than that of the 3 PC-3 controller. In the case of process III, for both sample times T₀ = 4 s and T₀ = 8 s, changes of q₀ compared with 3 PC-3 have little effect on the performance.
These simulation results show that the assumed three-parameter controller, which has PID-like behaviour for step disturbances, tends to proportional-derivative (PD) action for stationary stochastic disturbances with E{n(k)} = 0. As there is no constant disturbance, the parameter-optimized controller does not have integral action. If c_I = 0 in (5.2.18), the pole at z = 1 is cancelled and one obtains a PD controller with the transfer function

G_R(z) = u(z)/e(z) = K[(1 + c_D) − c_D z⁻¹] = q₀ − q₂ z⁻¹ .   (13.6)
In the design of minimum variance controllers the variance of the controlled variable

var[y(k)] = E{y²(k)}

is minimized. This criterion was used in [12.4], assuming a noise filter as in (12.2.31) but with C(z⁻¹) = A(z⁻¹). The manipulated variable u(k) was not weighted, so that in many cases excessive input changes are produced. A weighting r on the input was therefore proposed in [14.1], so that the criterion

E{y²(k + i) + r u²(k)} ,  i = d + 1
is minimized. The noise n(k) can be modelled using a nonparametric model (impulse response) or a parametric model as in (12.2.31). As a result of the additional weighting of the input, the variance of the controlled variable is no longer minimal; instead the variance of a combination of the controlled variable and the manipulated variable is minimized. This produces a generalized minimum variance controller.
The following sections derive the generalized minimum variance controller for processes with and without deadtime; the original minimum variance controller is then the special case r = 0. Parametric models are assumed for the noise filters, as they are particularly suitable for realizing adaptive control algorithms on the basis of parameter estimation methods.
14.1 Generalized Minimum Variance Controllers for Processes without Deadtime

y(z) = [B(z⁻¹)/A(z⁻¹)] u(z) + n(z)   (14.1.1)
Figure 14.1 Control with minimum variance controllers of processes without deadtime
E{v(k)} = v̄ = 0

J(k + 1) = E{y²(k + 1) + r u²(k)}   (14.1.4)
The controller must generate an input u(k) such that the errors induced by the noise process {v(k)} are minimized according to (14.1.4). In the performance function J, y(k + 1) is taken rather than y(k), as u(k) can influence the controlled variable only at time (k + 1) because of the assumption b₀ = 0. Therefore y(k + 1) must be predicted on the basis of the known signal values y(k), y(k − 1), ... and u(k), u(k − 1), .... Using (14.1.1) and (14.1.2), a prediction of y(k + 1) is
z y(z) = [B(z⁻¹)/A(z⁻¹)] z u(z) + λ [D(z⁻¹)/C(z⁻¹)] z v(z)   (14.1.5)

and

A(z⁻¹)C(z⁻¹) z y(z) = B(z⁻¹)C(z⁻¹) z u(z) + λ A(z⁻¹)D(z⁻¹) z v(z)   (14.1.6)

or

(1 + a₁z⁻¹ + … + a_m z⁻ᵐ)(1 + c₁z⁻¹ + … + c_m z⁻ᵐ) z y(z)
 = (b₁z⁻¹ + … + b_m z⁻ᵐ)(1 + c₁z⁻¹ + … + c_m z⁻ᵐ) z u(z)
 + λ(1 + a₁z⁻¹ + … + a_m z⁻ᵐ)(1 + d₁z⁻¹ + … + d_m z⁻ᵐ) z v(z)   (14.1.7)
After multiplying out and transforming back into the time domain we obtain:

y(k + 1) + (a₁ + c₁)y(k) + … + a_m c_m y(k − 2m + 1)
 = b₁u(k) + (b₂ + b₁c₁)u(k − 1) + … + b_m c_m u(k − 2m + 1)
 + λ[v(k + 1) + (a₁ + d₁)v(k) + … + a_m d_m v(k − 2m + 1)] .   (14.1.8)
Therefore the performance criterion (14.1.4) becomes:

J(k + 1) = E{[−(a₁ + c₁)y(k) − … − a_m c_m y(k − 2m + 1)
 + b₁u(k) + (b₂ + b₁c₁)u(k − 1) + … + b_m c_m u(k − 2m + 1)
 + λ[v(k + 1) + (a₁ + d₁)v(k) + … + a_m d_m v(k − 2m + 1)]]² + r u²(k)} .   (14.1.9)
At time instant k all signal values are known, with the exception of u(k) and v(k + 1). Therefore the expectation has to be taken only over v(k + 1). As, in addition, v(k + 1) is independent of all other signal values:
J(k + 1) = [−(a₁ + c₁)y(k) − … − a_m c_m y(k − 2m + 1) + b₁u(k)
 + (b₂ + b₁c₁)u(k − 1) + … + b_m c_m u(k − 2m + 1)
 + λ[(a₁ + d₁)v(k) + … + a_m d_m v(k − 2m + 1)]]²
 + λ² E{v²(k + 1)}
 + 2λ[−(a₁ + c₁)y(k) − … + b_m c_m u(k − 2m + 1)
 + λ[(a₁ + d₁)v(k) + … + a_m d_m v(k − 2m + 1)]] E{v(k + 1)}
 + r u²(k) .   (14.1.10)
Therefore the condition for the optimal u(k) becomes:

∂J(k + 1)/∂u(k) = 2[−(a₁ + c₁)y(k) − … − a_m c_m y(k − 2m + 1)
 + b₁u(k) + (b₂ + b₁c₁)u(k − 1) + … + b_m c_m u(k − 2m + 1)
 + λ[(a₁ + d₁)v(k) + … + a_m d_m v(k − 2m + 1)]] b₁ + 2r u(k) = 0 .   (14.1.11)
Considering that, for the term multiplying b₁, according to (14.1.8)

[…] = [y(k + 1) − λv(k + 1)]

is valid, it follows using (14.1.11) that:

[z y(z) − λ z v(z)] b₁ + r u(z) = 0 .   (14.1.12)
Applying (14.1.5) to predict v(k + 1):

λ z v(z) = [C(z⁻¹)/D(z⁻¹)] z y(z) − [B(z⁻¹)C(z⁻¹)/(A(z⁻¹)D(z⁻¹))] z u(z)
G_RMV1(z) = u(z)/y(z) = − z A(z⁻¹)[D(z⁻¹) − C(z⁻¹)] / [z B(z⁻¹)C(z⁻¹) + (r/b₁) A(z⁻¹)D(z⁻¹)]   (14.1.13)

(Abbreviation: MV1)

This controller contains the process model with the polynomials A(z⁻¹) and B(z⁻¹) and the noise model with the polynomials C(z⁻¹) and D(z⁻¹). With r = 0, the simple form of the minimum variance controller results:

G_RMV2(z) = − z A(z⁻¹)[D(z⁻¹) − C(z⁻¹)] / [z B(z⁻¹)C(z⁻¹)]
          = − [z A(z⁻¹)/(z B(z⁻¹))] [D(z⁻¹)/C(z⁻¹) − 1] .   (14.1.14)

(Abbreviation: MV2)
This controller is a cancellation controller with the command variable behaviour of the closed loop

G_w(z) = z[D(z⁻¹) − C(z⁻¹)] / (z D(z⁻¹)) = 1 − λ/G_Pv(z)
With C(z⁻¹) = A(z⁻¹) one obtains

G_RMV3(z) = − [D(z⁻¹) − A(z⁻¹)] z / [z B(z⁻¹) + (r/b₁) D(z⁻¹)]   (14.1.15)

(Abbreviation: MV3)

and for r = 0

G_RMV4(z) = − [D(z⁻¹) − A(z⁻¹)] z / (z B(z⁻¹)) .   (14.1.16)

(Abbreviation: MV4)
After extension with A(z⁻¹) in the numerator and the denominator, and comparison with (6.2.4), it follows that this controller corresponds to a cancellation controller with the command variable behaviour:

G_w(z) = z[D(z⁻¹) − A(z⁻¹)] / (z D(z⁻¹)) = 1 − λ/G_Pv(z)
Therefore, for the minimum variance controller with r = 0 the command variable behaviour of the closed loop depends only on the noise filter. This means that, apart from the process model G_P(z), the noise model G_Pv(z) has to be known relatively well, which is, of course, a practical problem.
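For a second-order process with C(z⁻¹) = A(z⁻¹), (14.1.16) can be written as the difference equation b₁u(k) = −(d₁ − a₁)y(k) − (d₂ − a₂)y(k − 1) − b₂u(k − 1). The sketch below (all coefficients and λ are assumed example values) simulates such an MV4 loop and exhibits the behaviour implied by (14.1.19)/(14.1.20): after the transient, only the unpredictable part λv(k) remains in the output.

```python
import numpy as np

rng = np.random.default_rng(2)

# assumed example process A y = B u + lam * D v, with C = A
a = [1.0, -1.3, 0.5]      # A(z^-1); poles inside the unit circle
b = [0.0, 1.0, 0.5]       # B(z^-1), b0 = 0; zero at z = -0.5
d = [1.0, 0.4, 0.03]      # D(z^-1); zeros inside the unit circle
lam = 0.1
N = 300
v = rng.standard_normal(N)

y = np.zeros(N); u = np.zeros(N)
for k in range(2, N):
    # process with noise filter lam * D/A acting on the output
    y[k] = (-a[1] * y[k-1] - a[2] * y[k-2]
            + b[1] * u[k-1] + b[2] * u[k-2]
            + lam * (v[k] + d[1] * v[k-1] + d[2] * v[k-2]))
    # MV4 control law from G_RMV4 = -z[D - A] / (z B)
    u[k] = (-(d[1] - a[1]) * y[k]
            - (d[2] - a[2]) * y[k-1]
            - b[2] * u[k-1]) / b[1]

# after the transient the controlled variable equals lam * v(k)
print(np.max(np.abs(y[50:] - lam * v[50:])))
```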
These controller equations have the following properties:

a) Controller order

        numerator    denominator
MV1     2m − 1       2m
MV2     2m − 1       2m − 1
MV3     m − 1        m
MV4     m − 1        m − 1

Because of the high order of MV1 and MV2, one should assume C(z⁻¹) = A(z⁻¹) for modelling the noise and then prefer MV3 or MV4.
c) Stability
It is assumed that the conditions listed under b) are satisfied. Then the characteristic equation of the closed loop becomes:

MV1: A(z)D(z)[z B(z) + (r/b₁) A(z)] = 0 ;  MV3: D(z)[z B(z) + (r/b₁) A(z)] = 0   (14.1.17a, b)

The polynomial C(z) is cancelled in both cases. It follows that for closed-loop stability
MV1 and MV3 (r ≠ 0):
- The zeros of the noise filter D(z) = 0 must lie within the unit circle of the z-plane.
- The zeros of

  z B(z) + (r/b₁) A(z) = 0

  must lie within the unit circle. The larger the weight r on the process input, the nearer these zeros lie to the zeros of A(z) = 0, i.e. to the process poles.
- For MV1 the process poles must also lie inside the unit circle.
MV2 and MV4 (r = 0):
- The characteristic equation of the closed loop becomes, for r = 0:

MV2: z A(z)B(z)D(z) = 0 ;  MV4: z B(z)D(z) = 0   (14.1.17c)

- Therefore the zeros of the process, B(z) = 0, and of the noise filter, D(z) = 0, must lie within the unit circle.
- The poles of the noise filter, C(z) = 0 (for MV2), do not influence the characteristic equation and can therefore lie anywhere.
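These root conditions are easy to check numerically from the polynomial coefficients. The sketch below (example coefficients in the calls are assumptions) tests the MV3 conditions, i.e. the zeros of D(z) and of z B(z) + (r/b₁)A(z):

```python
import numpy as np

def zeros_inside_unit_circle(coeffs_w):
    """coeffs_w: polynomial coefficients in w = z^-1, ascending order.
    The z-plane zeros lie inside the unit circle iff all roots in w
    satisfy |w| > 1."""
    c = np.trim_zeros(np.asarray(coeffs_w, dtype=float), 'b')
    if len(c) < 2:
        return True                       # constant polynomial, no zeros
    roots_w = np.roots(c[::-1])           # np.roots expects descending order
    return bool(np.all(np.abs(roots_w) > 1.0))

def mv3_stable(a, b, dpoly, r):
    """Closed-loop stability test for MV3: the zeros of D(z) and of
    z B(z) + (r/b1) A(z) must lie inside the unit circle.
    a = [1, a1, ..., am], b = [0, b1, ..., bm], dpoly = [1, d1, ..., dm]."""
    b1 = b[1]
    zB = np.asarray(b[1:], dtype=float)   # z*B(z^-1) = b1 + b2 w + ...
    char = (r / b1) * np.asarray(a, dtype=float)
    char[:len(zB)] += zB
    return zeros_inside_unit_circle(dpoly) and zeros_inside_unit_circle(char)

print(mv3_stable([1, -1.3, 0.5], [0, 1.0, 0.5], [1, 0.4, 0.03], r=0.0))  # True
print(mv3_stable([1, -1.3, 0.5], [0, 1.0, 0.5], [1, 2.0, 0.5], r=0.0))   # False
```

For large r the second polynomial approaches (r/b₁)A(z), so the test also reflects the statement that these zeros move towards the process poles.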
d) Dynamic control factor

For the controller MV1 the dynamic control factor of the closed loop is:

R(z) = y(z)/n(z) = [z B(z⁻¹)C(z⁻¹) + (r/b₁) A(z⁻¹)D(z⁻¹)] / ([z B(z⁻¹) + (r/b₁) A(z⁻¹)] D(z⁻¹))   (14.1.18)

With r = 0, for the controller MV2 or MV4, it follows that:

R(z) = C(z⁻¹)/D(z⁻¹) .   (14.1.19)

Therefore the dynamic control factor for r = 0 is the inverse of the noise filter. It follows that:

G_v(z) = y(z)/(λ v(z)) = R(z) G_Pv(z) (1/λ) = 1 .   (14.1.20)
Hence the closed loop is forced to behave as the reciprocal of the noise filter. Poles and zeros of the process do not appear in (14.1.19) because they are cancelled by the controller. With increasing weight r on the process input, however, the poles of the process increasingly influence the closed-loop behaviour, as can be seen from (14.1.18).
For the static behaviour of the controller MV1 one obtains from (14.1.13)

G_RMV1(1) = − A(1)[D(1) − C(1)] / [B(1)C(1) + (r/b₁) A(1)D(1)]
          = − Σaᵢ[Σdᵢ − Σcᵢ] / [B(1)C(1) + (r/b₁) A(1)D(1)]   (14.1.23)

Here Σ is read as Σᵢ₌₀ᵐ. If the process G_P(z) has proportional action behaviour, i.e. Σaᵢ ≠ 0 and Σbᵢ ≠ 0, then the controller MV1 in general has a proportional
action static behaviour. For constant disturbances, therefore, offsets occur. This is
also the case for the minimum variance controllers MV2, MV3 and MV4. To avoid
20 14 Minimum Variance Controllers for Stochastic Disturbances
offsets with minimum variance controllers, some modifications must be made; these are discussed in section 14.4.
h) Choice of the weighting factor r

The influence of the weighting factor r on the manipulated variable can be estimated by looking at the first input u(0) in the closed loop after a reference variable step w(k) = 1(k), see (5.2.30). Then one obtains u(0) = q₀w(0) = q₀. Therefore q₀ is a measure of the size of the process input. For the controller MV1 (process without deadtime) it follows, if the algorithm is written in the form of a general linear controller as in (11.1.1), that

q₀ = (d₁ − c₁) / (b₁ + r/b₁) .   (14.1.24)
i) Summary

Typical properties of the minimum variance controllers are summarized in Table 14.1. The best overall properties are shown by the controller MV3. Hence, for the practical realization of minimum variance controllers, C(z⁻¹) = A(z⁻¹) should be assumed. It should be emphasized once again that minimum variance controllers are decidedly characterized by the noise filter G_Pv(z). For r = 0, the closed-loop behaviour is prescribed by the noise filter alone.
Minimum variance controllers therefore require a relatively precise knowledge of the stochastic noise model, which can be expected only in connection with adaptive estimation methods, see chapter 26.
The larger the weighting factor r, the more the closed-loop behaviour is characterized by the denominator polynomial A(z⁻¹) of the process, compare (14.1.17). This makes the noise filter less influential. As, in general, the process model is known
Table 14.1 Minimum variance controllers: transfer functions and stability conditions

MV1 (r ≠ 0):  G_R(z) = − z A[D − C] / (z B C + (r/b₁) A D);  zeros of D(z) inside the unit circle
MV2 (r = 0):  G_R(z) = − z A[D − C] / (z B C);  zeros of B(z) and D(z) inside the unit circle
MV3 (C = A, r ≠ 0):  G_R(z) = − z[D − A] / (z B + (r/b₁) D);  zeros of D(z) inside the unit circle
MV4 (C = A, r = 0):  G_R(z) = − z[D − A] / (z B);  zeros of B(z) and D(z) inside the unit circle
more precisely than the noise model, minimal variance controllers with r > 0 are
recommended for application.
In deriving the minimum variance controllers we assumed b₀ = 0. If b₀ ≠ 0, one need only replace b₁ by b₀ and write

B(z⁻¹) = b₀ + b₁z⁻¹ + … + b_m z⁻ᵐ .
14.2 Generalized Minimum Variance Controllers for Processes with Deadtime

The control loop is as shown in Figure 14.2. The disturbance filter is assumed as in (14.1.2) and (14.1.3) and describes the disturbance signal v(k). As the input u(k) for processes with deadtime d can influence the controlled variable y(k + d + 1) at the earliest, the performance criterion

J(k + 1) = E{y²(k + d + 1) + r u²(k)}   (14.2.2)
Figure 14.2 Control with a minimum variance controller for processes with deadtime
(14.2.3)
As the disturbance signals v(k + 1), ..., v(k + d + 1) are unknown at the time k at which u(k) must be calculated, this part of the disturbance filter is separated off as follows:

D(z⁻¹)/C(z⁻¹) = F(z⁻¹) + z⁻⁽¹⁺ᵈ⁾ L(z⁻¹)/C(z⁻¹)   (14.2.4)
As can also be seen from Figure 14.2, the disturbance filter is separated into a part F(z⁻¹), which describes the part of n(k) that cannot be influenced by u(k), and a part z⁻⁽¹⁺ᵈ⁾L(z⁻¹)/C(z⁻¹), describing the part of n(k) in y(k) that can be influenced by u(k). The corresponding polynomials are:
Example 14.2.1

For m = 3 and d = 1 it follows from (14.2.7) that

f₁ = d₁ − c₁
l₀ = d₂ − c₂ − c₁f₁
l₁ = d₃ − c₃ − c₂f₁
l₂ = −c₃f₁
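This recursion is simply the polynomial long division D(z⁻¹)/C(z⁻¹) = F(z⁻¹) + z⁻⁽ᵈ⁺¹⁾L(z⁻¹)/C(z⁻¹) of (14.2.4) carried out to d + 1 terms. A small sketch (the numerical coefficients in the example call are assumptions):

```python
import numpy as np

def separate_noise_filter(c, dcoef, dead):
    """Split D/C = F + z^-(dead+1) L/C by polynomial long division.
    c = [1, c1, ..., cm], dcoef = [1, d1, ..., dm], both in powers of z^-1;
    returns F = [1, f1, ..., f_dead] and L = [l0, l1, ...]."""
    m = len(c) - 1
    f = np.zeros(dead + 1); f[0] = 1.0
    for k in range(1, dead + 1):
        dk = dcoef[k] if k <= m else 0.0
        f[k] = dk - sum(c[j] * f[k - j] for j in range(1, min(k, m) + 1))
    fc = np.convolve(f, c)                 # F(z^-1) * C(z^-1)
    dd = np.zeros(len(fc)); dd[:m + 1] = dcoef
    rem = dd - fc                          # D - F*C = z^-(dead+1) * L
    return f, rem[dead + 1:]

# m = 3, d = 1 as in Example 14.2.1
f, l = separate_noise_filter([1, 0.5, 0.2, 0.1], [1, 0.9, 0.4, 0.2], 1)
print(f)   # f1 = d1 - c1 = 0.4
print(l)   # l0 = 0.0, l1 = 0.02, l2 = -0.04
```

For these values l₀ = d₂ − c₂ − c₁f₁ = 0, l₁ = d₃ − c₃ − c₂f₁ = 0.02 and l₂ = −c₃f₁ = −0.04, matching the formulas of the example.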
(14.2.10)
(Abbreviation: MV1-d)
For r = 0:

G_RMV2d(z) = − A(z⁻¹)L(z⁻¹) / [z B(z⁻¹)C(z⁻¹)F(z⁻¹)]   (14.2.11)

(Abbreviation: MV2-d)

With C(z⁻¹) = A(z⁻¹) and r ≠ 0 it follows that

G_RMV3d(z) = − L(z⁻¹) / [z B(z⁻¹)F(z⁻¹) + (r/b₁) D(z⁻¹)]   (14.2.12)

(Abbreviation: MV3-d)

and with r = 0:

G_RMV4d(z) = − L(z⁻¹) / [z B(z⁻¹)F(z⁻¹)] .   (14.2.13)

(Abbreviation: MV4-d)
The properties of these minimum variance controllers with d ≠ 0 can be summarized as follows:
a) Controller order
MV3 - d:
MV2-d: z A(z)B(z)D(z) = 0   (14.2.15a)

MV4-d: z B(z)D(z) = 0   (14.2.15b)
They are identical with the characteristic equations for the minimum variance
controllers without deadtime, and therefore one reaches the same conclusions
concerning stability.
d) Dynamic control factor

For MV1-d one obtains:

R(z) = y(z)/n(z) = [z B(z⁻¹)C(z⁻¹)F(z⁻¹) + (r/b₁) A(z⁻¹)D(z⁻¹)] / ([z B(z⁻¹) + (r/b₁) A(z⁻¹)] D(z⁻¹))   (14.2.16)
(14.2.17)
Again the reciprocal disturbance filter arises in the dynamic control factor, but it is now multiplied by F(z⁻¹), which takes into account the disturbances v(k + 1), ..., v(k + d + 1) that cannot be controlled by u(k).
e) Controlled variable

For r = 0 we have, for the controllers MV2-d and MV4-d,

y(z)/(λ v(z)) = R(z) G_Pv(z) (1/λ) = F(z⁻¹) .   (14.2.18)
var[y(k)] = [1 + f₁² + … + f_d²] λ² .   (14.2.20)

The larger the deadtime, the larger the variance of the controlled variable.
f) Offset behaviour

In principle, the same disadvantages arise as for d = 0. In order to remove offsets, one can proceed in the same way as described in section 14.4.
The minimum variance controllers of section 14.2, being structurally optimal for the process B(z⁻¹)z⁻ᵈ/A(z⁻¹) and the stochastic disturbance filter D(z⁻¹)/C(z⁻¹), were derived for time-lag processes with deadtime. As can be seen from (14.2.18)-(14.2.20), the controlled variable of the controllers MV2-d and MV3-d is a moving-average signal process of order d whose variance increases strongly with the deadtime d.
14.3 Minimum Variance Controllers for Processes with Pure Deadtime

As in section 9.2.2, the minimum variance controllers for pure deadtime processes are now considered. Based on

(14.3.1)

with B(z⁻¹) = b₁z⁻¹ and the deadtime d − 1 as in section 14.2, the following controllers are obtained:
G_RMV1d(z) = − L(z⁻¹) / [b₁ C(z⁻¹)F(z⁻¹) + (r/b₁) D(z⁻¹)]   (14.3.2)

G_RMV2d(z) = − L(z⁻¹) / [b₁ C(z⁻¹)F(z⁻¹)]   (14.3.3)
(14.3.6)

G_RMV4d(z) = − L(z⁻¹) / [b₁ F(z⁻¹)] .   (14.3.7)
From (14.2.7) it follows that L(z⁻¹) = 0, and therefore no controller exists, if the order m of D(z⁻¹) is m ≤ d − 1. This again illustrates the principle used in deriving the minimum variance controller: the controlled variable y(k + d + 1) is predicted on the basis of the known values u(k − 1), u(k − 2), ... and v(k − 1), v(k − 2), ..., and the predicted value is used to compute the input u(k). The component of the disturbance signal

y_v(k + d + 1) = [v(k + d) + f₁v(k + d − 1) + … + f_(d−1)v(k + 1)]λ   (14.3.8)

can be neither considered nor controlled (see (14.2.4) and (14.2.19)). If now the order of D(z⁻¹) is m = d − 1, then:
1 + α z⁻¹/(1 − z⁻¹) = [1 − (1 − α)z⁻¹] / (1 − z⁻¹)   (14.4.3)
is fulfilled if, for the controllers MV1 and MV2, D(1) ≠ C(1), and, for MV3 and MV4, D(1) ≠ A(1). If these conditions are not satisfied, additional poles at z = 1 can be assumed:

MV2: C′(z) = (z − 1)C(z)
MV3 and MV4: A′(z) = (z − 1)A(z)
Only for MV1 is there no such possibility. The insertion of integral action has the advantage of removing offsets. However, this is accompanied by an increase of the variance of y(k) for higher-frequency disturbance signals v(k), c.f. section 14.5. Through a suitable choice of α both effects can be weighed against each other.
the value of u(k) for y(k) = w(k), the zero-offset case. A derivation corresponding to section 14.2 then leads to the modified minimum variance controller [14.2]

u(z) = − z L(z⁻¹) y(z) / [z B(z⁻¹)C(z⁻¹)F(z⁻¹) + (r/b₁) A(z⁻¹)D(z⁻¹)]
     + A(z⁻¹)D(z⁻¹) w(z) / [z B(z⁻¹)C(z⁻¹)F(z⁻¹) + (r/b₁) A(z⁻¹)D(z⁻¹)]

This controller removes offsets arising from variations of the reference variable w(k). Another very simple possibility, in connection with closed-loop parameter estimation, is shown in section 26.4.
14.5 Simulation Results with Minimum Variance Controllers

The behaviour of control loops with minimum variance controllers is now shown by an example. The minimum variance controllers MV3 and MV4 were simulated for a second-order test process using a digital computer.
Figure 14.3 Stochastic control factor κ as a function of the manipulation effort S_u for process VII with the minimum variance controller MV3, for different weighting factors r on the process input and α on the integral part
Figure 14.4a-c Signals for process VII with the minimum variance controllers MV4 and MV3. a stochastic disturbance n(k); b step change in the reference variable w(k), α = 0; c step change in the reference variable w(k), α = 0.2
In Figure 14.3, N = 150 samples were used. Figure 14.3 shows:

- The best control performance (smallest κ) is obtained with r = 0 and α = 0, i.e. for the controller MV4.
- The rather small weighting factors r = 0.01 or 0.02 reduce the effective value of the manipulated variable, compared with r = 0, by 48% or 60%, at the cost of a relatively small increase in the effective value of the controlled variable of 12% or 17% (numbers given for α = 0). Only for r ≥ 0.03 does κ become significantly worse.
- Small values of the integral part, α ≤ 0.2, increase the effective value of the controlled variable by about 3 to 18%, depending on r. For α > 0.3, however, the control performance becomes significantly worse.
Figure 14.4a shows a section of the disturbance signal n(k) for λ = 0.1, and the resulting controlled variable and manipulated variable with the minimum variance controller MV4 (r = 0):

G_RMV4(z) = − (1.0743 − 0.0981z⁻¹) / (1 + 0.6410z⁻¹)
For MV4 it can be seen that the standard deviation of the controlled variable y is significantly smaller than that of n(k); the peaks of n(k) are especially reduced. However, this behaviour can only be obtained with large changes in u(k). A weighting factor of r = 0.02 in the controller MV3 leads to significantly smaller input amplitudes and somewhat larger peak values of the controlled variable.
Figure 14.4b shows the responses of the controlled variable to a step change in
the reference value. The controller MV4 produces large input changes which are
only weakly damped; the controlled variable also shows oscillating behaviour and
an offset. By using the deadbeat controller DB(v) the maximal input value would be u(0) = 1/(b1 + b2) = 4.4; the minimum variance controller MV4, however, leads to values which are more than double this size. In addition, the offset means that the resulting closed loop behaviour is unsatisfactory for deterministic step changes of w(k). The time response of u(k) obtained using controller MV3 and r = 0.02 is much better. However, the input u(0) is still as high as for DB(v) and the offset is larger than for MV4.
For α = 0.2, Figure 14.4c, the offset vanishes. The time response of the manipulated and the controlled variable is more damped compared with Figure 14.4b. The
transient responses of the various controllers are shown in Figure 14.5.
The simulation results with process III (third order with deadtime) show that
with increasing process order it becomes more and more difficult to obtain
a satisfactory response to a step change in the reference variable. In general,
however, it is possible to systematically find a compromise matched to each special
case by suitable variation of the weighting factors r and α.
14.6 Comparison of Various Deterministic and Stochastic Controllers 33
Figure 14.5 Transient responses of the controllers MV3 and MV4 and process VII for different r and α.
Figure 14.6a-h. Graph of controlled variable y(k) and manipulated variable u(k) of a second-order process with different control algorithms for a stochastic noise signal n(k). a noise signal n(k) (without control); b MV4; c MV3, r/b1 = 0.144; d MV3-PI, r/b1 = 0.144, α = 0.1; e DB(v); f DB(v+1), q0 = 2.158; g 3PC-2, q0 = 4.394; h LCPA, z1 = 0.1, z2 = 0.4, z3,4 = 0.1 ± 0.1i (linear controller with pole prescription)
Figure 14.7 Graph of controlled variable y(k) and manipulated variable u(k) for a second-
order process with various control algorithms for a step change in the reference variable
w(k).
Figure 14.8 Mean squared control deviation Se as a function of the mean squared input power Su for the control algorithms for stochastic disturbances indicated in Figure 14.6. 1: no control
Figure 14.9 Mean squared control deviation Se as a function of the mean squared input power Su for the control algorithms for step changes in the reference variables shown in Figure 14.7.
The process model assumed in chapter 8 for the derivation of the state controller
for deterministic initial values is now excited by a vector stochastic noise signal v(k)
x(k + 1) = Ax(k) + Bu(k) + Fv(k) . (15.1.1)
The components of v(k) are assumed to be normally distributed, statistically independent signal processes with expectation

E{v(k)} = 0 (15.1.2)

and covariance matrix

cov[v(k), τ = i - j] = E{v(i)vᵀ(j)} = V δij (15.1.3)

where

δij = 1 for i = j
δij = 0 for i ≠ j .
the output variable y(k) is not used. Section 15.3 considers the case of nonmeasur-
able state variables x(k) and the use of measurable but disturbed output variables.
The literature on stochastic state controllers started around 1961, and an extensive treatment can be found in [12.2], [12.3], [12.4], [12.5], [8.3].
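The noise model of (15.1.2) and (15.1.3) can be illustrated numerically; a minimal sketch (the covariance matrix V below is an arbitrary illustrative choice, not a value from the text):

```python
# Sketch of the white vector noise model: zero mean, covariance V*delta_ij,
# with statistically independent components (diagonal V).
import numpy as np

rng = np.random.default_rng(0)
V = np.diag([0.04, 0.09])              # illustrative variances of the components
N = 200_000
v = rng.multivariate_normal(np.zeros(2), V, size=N)

mean_hat = v.mean(axis=0)              # should be close to E{v(k)} = 0
V_hat = v.T @ v / N                    # sample estimate of E{v(i) v^T(i)}
C1 = v[1:].T @ v[:-1] / (N - 1)        # lag-1 cross covariance, ~0 (whiteness)
```

The sample covariance approaches V while all lagged cross covariances vanish, which is exactly the δij structure in (15.1.3).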
The Bellman optimality principle, described in section 8.1, can be used to
calculate the optimal input u(k), giving:
k = 0, 1, 2, ... , N - 1
(15.1.7)
and
E{I_N-2} = E{xᵀ(N - 2) P_N-2 x(N - 2) + vᵀ(N - 2) Fᵀ Q F v(N - 2) + ...}
15.2 Optimal State Controllers with State Estimation for White Noise
In section 15.1 it was assumed that the state variables x(k) can be measured exactly, but in practice this is generally untrue and the state variables must be determined on the basis of measured variables. We now consider the process

x(k + 1) = Ax(k) + Bu(k) + Fv(k) (15.2.1)

with measurable outputs

y(k) = Cx(k) + n(k) (15.2.2)

or

y(k + 1) = Cx(k + 1) + n(k + 1) .
40 15 State Controllers for Stochastic Disturbances
[ x(k+1) ]   [ A      -BK          ] [ x(k) ]   [ F     0 ] [ v(k)   ]
[ x̂(k+1) ] = [ ΓCA    A - BK - ΓCA ] [ x̂(k) ] + [ ΓCF   Γ ] [ n(k+1) ]   (15.2.6)

Introducing the state estimation error x̃(k) = x(k) - x̂(k), this can be written as

[ x(k+1) ]   [ A - BK   BK      ] [ x(k) ]   [ F         0  ] [ v(k)   ]
[ x̃(k+1) ] = [ 0        A - ΓCA ] [ x̃(k) ] + [ F - ΓCF   -Γ ] [ n(k+1) ]   (15.2.7)
This equation system is identical to the equation system Eq. (8.7.5) with the exception of the last noise term. Instead of the observer feedback

Δx̂(k) = HCx̃(k)

here

Δx̂(k + 1) = ΓCAx̃(k) ,

the state filter feedback, influences the modes, as the state filter, unlike the observer of section 8.6, uses a prediction Ax̂(k) to correct the state estimate. The poles of the
15.3 State Estimation for External Disturbances 41
control system with state controller and filter follow from (15.2.8).
They consist, in factored form, of the m poles of the control system without state
filter, (15.1.10), and of the m poles of the state filter. Therefore the poles of the control loop and the poles of the state filter do not influence each other and can be independently determined. Stochastic state controllers also satisfy the separation
theorem. The design of the state filter is independent of the weighting matrices
Q and R of the quadratic performance criterion which determine the linear
controller as well as the process parameter matrices A and B. The design of the
controller is also independent of the covariance matrices V and N of the distur-
bance signals and independent of the disturbance matrix F. The only common
parameters are A and B.
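The separation property can be made concrete: the controller gain K is computed from A, B and the weights Q, R alone, while the filter gain Γ follows from A, C, F and the covariances V, N alone. A numerical sketch (all matrix values are invented for illustration, and the Riccati equations are solved by plain fixed-point iteration):

```python
# Separation: K depends only on (A, B, Q, R); Gamma only on (A, C, F, V, N).
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Iterate the control Riccati equation and return the controller gain K."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def filter_gain(A, C, F, V, N_cov, iters=500):
    """Iterate the dual (filter) Riccati equation and return the filter gain."""
    P = F @ V @ F.T
    for _ in range(iters):
        G = P @ C.T @ np.linalg.inv(C @ P @ C.T + N_cov)
        P = A @ (P - G @ C @ P) @ A.T + F @ V @ F.T
    return G

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
F = np.eye(2)
Q, R = np.eye(2), np.array([[1.0]])
V, N_cov = 0.01 * np.eye(2), np.array([[0.1]])

K = lqr_gain(A, B, Q, R)                 # unaffected by V, N, F
Gamma = filter_gain(A, C, F, V, N_cov)   # unaffected by Q, R
```

Changing V or N leaves K untouched, and changing Q or R leaves Γ untouched; the only matrices appearing in both computations are A and B (and C for the filter), exactly as stated above.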
As the state controller is the same for both optimally estimated state variables
and exactly measurable state variables, one can speak of a 'certainty equivalence
principle'. This means that the state controller can be designed for exactly known
states, and then the state variables can be replaced by their estimates using a filter
which is also designed on the basis of a quadratic error criterion and which has
minimal variances. Compared with the directly measurable state variables the
control performance deteriorates (15.1.14), because of the time-delayed estimation
of the states and their error variance [12.4].
Note that the certainty equivalence principle is valid only if the controller has no dual property, that is, it controls just the current state and the manipulated variable is computed without attempting to influence future state estimates in any definite way [15.1]. A general discussion of the separation and certainty equivalence principles can be found in chapter 26.
In the design of the stochastic state controller of (15.2.5) a white vector noise signal
v(k) was assumed to influence the state vector x(k + 1). As the output signal with
n(k) = 0 satisfies
y(k) = Cx(k)
the internal disturbance v(k) generates an output
(15.3.1)
with
(15.3.2)
Figure 15.1 Stochastically disturbed process with a disturbance model for the external disturbance nξ(k)
(15.3.8)
and, depending on the choice of fi, one obtains for each ξi(z) a disturbance signal filter

G_Pξi(z) = Di(z)/A(z),  i = 1, 2, ..., m (15.3.9)

with

A(z) = det[zI - A] (15.3.10)

Dᵀ(z) = [Dm(z) ... Di(z) ... D1(z)]ᵀ = cᵀ adj[zI - A] F . (15.3.11)
Note, that the process satisfies
Example 15.3.1
Consider a second-order process with transfer function

G_P(z) = B(z)/A(z) = (b1z^-1 + b2z^-2)/(1 + a1z^-1 + a2z^-2) = (b1z + b2)/(z^2 + a1z + a2) .

a) Controllable canonical form

A = [ 0    1   ]
    [ -a2  -a1 ]
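The equivalence of such a state realization with the underlying difference equation can be cross-checked by simulation; a sketch assuming the standard controllable canonical form with b = [0 1]ᵀ and cᵀ = [b2 b1] (coefficient values are illustrative only):

```python
# Verify that the controllable canonical state model reproduces the difference
# equation y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2).
import numpy as np

a1, a2 = -1.5, 0.7         # illustrative denominator coefficients
b1, b2 = 1.0, 0.5          # illustrative numerator coefficients

A = np.array([[0.0, 1.0], [-a2, -a1]])   # controllable canonical form (assumed)
b = np.array([0.0, 1.0])
c = np.array([b2, b1])

u = np.zeros(20); u[0] = 1.0             # impulse input

# state-space simulation
x = np.zeros(2); y_ss = []
for k in range(20):
    y_ss.append(c @ x)
    x = A @ x + b * u[k]

# direct simulation of the difference equation
y_de = [0.0] * 20
for k in range(20):
    y_de[k] = (-a1 * (y_de[k-1] if k >= 1 else 0.0)
               - a2 * (y_de[k-2] if k >= 2 else 0.0)
               + b1 * (u[k-1] if k >= 1 else 0.0)
               + b2 * (u[k-2] if k >= 2 else 0.0))
```

Both simulations produce the same output sequence, confirming that this realization has the transfer function (b1z + b2)/(z^2 + a1z + a2).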
Here also d1i and d2i cannot be freely chosen; one of the two parameters is always zero.
This example shows that with the assumption of white vector disturbance signals ξi(k) or v(k) with independent disturbance signal components, the parameters of the corresponding disturbance signal polynomials cannot possess arbitrary values. This position changes, however, if the disturbance signal components are equal:

ξ1(k) = ξ2(k) = ... = ξm(k) = ξ(k) . (15.3.13)

Then F changes to a vector

fᵀ = [fm ... f2 f1] (15.3.14)

and in example 15.3.1 we have

D(z) = d1z + d2
15.3 State Estimation for External Disturbances 45
where

a) Controllable canonical form

[ d2 ]   [ (a1b2 + a2b1)   b2 ] [ f2 ]
[ d1 ] = [ b2              b1 ] [ f1 ]

b) Observable canonical form

d2 = f2
d1 = f1 .
The parameters of D can then be chosen independently. The assumption of (15.3.13) means that all elements are equal in the covariance matrix of the disturbance

Ξ = σξ^2 [ 1 1 ... 1 ]
         [ 1 1 ... 1 ]
         [ ...       ]
         [ 1 1 ... 1 ]   (15.3.15)

This covariance matrix is, however, positive semidefinite for ξ ≠ 0, so that the assumptions of (15.1.1) are not violated, nor is the assumption of (22.1.4) used in deriving the Kalman filter violated by (15.3.15).
Until now F was assumed to be diagonal. If all elements are non-zero, that means in the case of example 15.3.1

F = [ f22  f21 ]
    [ f12  f11 ]   (15.3.16)
(15.3.17)
with

D(z) = cᵀ adj[zI - A] f = d1z^(m-1) + ... + d(m-1)z + dm (15.3.18)

where e.g.

ξ(k) = [1 ... 1 1]ᵀ ξ(k) (15.3.19)

or

v(k) = [1 ... 1 1]ᵀ v(k)

and

(15.3.20)
46 15 State Controllers for Stochastic Disturbances
The parameters of f and Ξ or V, and therefore the parameters of the disturbance polynomial D(z), affect only the design of the state filter but not the design of the state controller. Therefore in the state filter one must set either

(15.3.21)

from (15.3.15) and F = f from (15.3.20), or all elements of F must be properly chosen so that the stochastically correlated disturbance signal nξ(k) generated by the noise filter (15.3.17) can be optimally controlled.
D Interconnected Control Systems
Up to now, when considering the design of controllers or control algorithms it was
assumed, with the exception of state controllers, that only the control variable
y determines the process input u. This leads to single control loops. However, in
chapter 4 it was mentioned that by connecting additional measurable variables to
the single loop - for example auxiliary variables or disturbances - improved control
behaviour is possible. These additions to the single loop lead to interconnected
control systems. Surveys of common interconnected control systems using analogue
control techniques are given for example in [5.14], [5.32], [5.33], [16.2], [16.3].
The most important basic schemes use cascade control, auxiliary control variable
feedback or feedforward control.
In cascade control and auxiliary control variable feedback additional (control)
variables of the process, measurable on the signal path from the manipulated
variable to the controlled variable, are fed back to the manipulated variable. The
cascade control scheme uses an inner control loop and therefore involves a second
controller. In the case of the auxiliary variable, the differentiated auxiliary variable
(continuous-time) is usually added to the input or the output of the controller.
Then, instead of a controller only a differentiating element is necessary, which
possibly needs no power amplification. When realising control schemes in digital
computers the hardware cost is a small fraction of the total, so we concentrate here
on the cascade control scheme. This also allows for a more systematic single loop
design, so only cascade control systems (chapter 16) and no other auxiliary variable
feedback scheme is considered. Also of significance is feedforward control (chapter
17). In this case measurable external disturbances of the process are added to the
feedback loop.
16 Cascade Control Systems
The design of an optimal state controller involves the feedback of all the state
variables of the process. If not all state variables can be measured, but for
example only one state variable between the process input and output, then
improvements can be obtained for single loop systems using for example parameter
optimized controllers, by assuming this state variable to be an auxiliary control
variable y2 which is fed back to the manipulated variable via an auxiliary controller, as shown in Figure 16.1. Then the process part G_Pu2 and the auxiliary controller G_R2 form an auxiliary control loop whose reference value is the output of the main controller G_R1.
The main controller forms the control error as for the single loop by subtracting the (main) control variable y1 from the reference value w1. The controlled plant of the main controller is then the inner control loop and the process part G_Pu1. The
auxiliary control loop is therefore connected in cascade with the main controller.
A cascade control system provides a better control performance than the single
loop because of the following reasons:
1) Disturbances which act on the process part Gpu2 , that means in the input
region of the process, are already controlled by the auxiliary control loop
before they influence the controlled variable Yl.
2) Because of the auxiliary feedback system, the effect of parameter changes in the input process part G_Pu2 is attenuated (reduction of parameter sensitivity by feedback, chapter 10). For the initial design of controller G_R1, only
parameter changes in the output process part G_Pu1 need to be considered; the small changes in the auxiliary control loop behaviour can be incorporated afterwards.
3) The behaviour of the control variable y1 becomes faster (less inert) if the auxiliary control loop leads to faster modes than those of the process part G_Pu2.
The overall transfer function of a cascade control system can be determined as follows. For the reference value of the auxiliary loop as input one has

G_w2(z) = y2(z)/w2(z) = G_R2(z)G_Pu2(z) / (1 + G_R2(z)G_Pu2(z)) (16.1)

and for the behaviour of its manipulated variable:

u(z)/w2(z) = G_R2(z) / (1 + G_R2(z)G_Pu2(z)) . (16.2)

With

y1(z) = G_Pu1(z)G_Pu2(z)u(z) = G_Pu(z)u(z)

it follows for the behaviour of the plant of the main controller G_R1 that:

G'_Pu(z) = [G_R2(z) / (1 + G_R2(z)G_Pu2(z))] G_Pu(z) . (16.3)

In addition to the plant G_Pu(z) of the single loop, the plant of the main controller of the cascade control system now includes a factor which acts as an acceleration term. Therefore a 'faster' plant results. For the closed loop behaviour of the cascade control system one finally obtains:

G_w1(z) = y1(z)/w1(z) = G_R1(z)G'_Pu(z) / (1 + G_R1(z)G'_Pu(z)) .
The design of cascade control systems depends significantly on the location of the
disturbances, so that each cascade control system should be individually treated.
A simple example shows the behaviour of such a cascade control system.
Example 16.1
The process under consideration has two process parts with the s-transfer functions

G_Pu2(s) = 1 / (1 + 7.5s)

G_Pu1(s) = 1 / ((1 + 10s)(1 + 5s)) .
To obtain an asymptotically stable auxiliary loop its pole must lie within the unit circle of the z-plane, giving:

-1 < -(a12 + q02 b12) < 1 .

Therefore the gain of the P-controller satisfies:

-(1 + a12)/b12 < q02 < (1 - a12)/b12 or -1 < q02 < 3.838 .
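These bounds can be reproduced numerically; a sketch assuming a sample time T0 = 4 s and the plant G_Pu2(s) = 1/(1 + 7.5s) discretized with a zero order hold, which gives a12 ≈ -0.5866 and b12 ≈ 0.4134 (parameter values inferred from the numbers quoted in this example, not stated explicitly in the text):

```python
# Auxiliary loop with P-controller q02: closed loop pole at z = -(a12 + q02*b12).
import math

T0, T = 4.0, 7.5                 # assumed sample time and plant time constant
a12 = -math.exp(-T0 / T)         # discretized first-order lag, a12 ~ -0.5866
b12 = 1.0 + a12                  # unit static gain, b12 ~ 0.4134

q_low = -(1.0 + a12) / b12       # lower stability bound for q02
q_high = (1.0 - a12) / b12       # upper stability bound for q02
q_deadbeat = -a12 / b12          # q02 placing the pole at the origin
```

With these parameters the bounds come out as -1 and about 3.838, and the pole reaches the origin near q02 = 1.42, consistent with the values in this example.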
(Note that a proportional controller acting on a first order process is not structurally stable in discrete time, unlike the continuous-time case.) If positive q02 are chosen then with q02 = 0.7 or q02 = 1.3:

G_w2(z) = 0.2894/(z - 0.2972) or 0.5374/(z - 0.0492) .

The pole moves toward the origin with increasing q02, reaching the origin for q02 = 1.42.
This shows that the settling time of the auxiliary control loop becomes smaller than that of the process part G_Pu2. The resulting closed loop behaviour of the cascade control system compared with that of the single loop becomes better only for q02 > 1.3. If q02 is chosen too small then the behaviour of the cascade control system becomes too slow because of a smaller loop gain compared with that of the optimized main controller. Notice that the parameters of the main controller were changed when the gain of the auxiliary control loop changed. The gain of the auxiliary loop varies for 0 < q02 ≤ 1.3 by 0 < G_w2(1) ≤ 0.54. It makes more sense to use a PI-controller as auxiliary controller

G_R2(z) = (q02 + q12 z^-1) / (1 - z^-1)
Figure 16.2 Transients of a control loop with and without cascade auxiliary controller. The main controller has PID-, the auxiliary controller PI-behaviour. ooo without auxiliary controller; --- with auxiliary controller. a Auxiliary control variable y2; b control variable y1 without main controller; c control variable y1 with main and auxiliary controller
so that G_w2(1) = 1. The closed loop transfer function of the auxiliary loop then becomes:

G_w2(z) = (q02 b12 z^-1 + q12 b12 z^-2) / (1 + (a12 + q02 b12 - 1)z^-1 + (q12 b12 - a12)z^-2) .

With controller parameters q02 = 2.0 and q12 = -1.4 one obtains:

G_w2(z) = 0.8268(z - 0.7000) / ((z - 0.7493)(z - 0.0105)) .

One pole and one zero approximately cancel and the second pole is near the origin. The settling time of y2(k) becomes smaller than that of the process part G_Pu2(z), Figure 16.2a.
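The quoted pole and zero locations can be verified from the closed loop denominator; a sketch assuming the plant parameters a12 = -0.5866 and b12 = 0.4134 (inferred from the gains in this example):

```python
# Closed auxiliary loop with PI controller:
# denominator 1 + (a12 + q02*b12 - 1) z^-1 + (q12*b12 - a12) z^-2
import numpy as np

a12, b12 = -0.5866, 0.4134       # assumed discretized plant parameters
q02, q12 = 2.0, -1.4             # PI controller parameters

den = [1.0, a12 + q02 * b12 - 1.0, q12 * b12 - a12]   # coefficients in powers of z
poles = np.roots(den)
zero = -q12 / q02                # zero of the numerator q02 + q12*z^-1
```

The computed zero lies at z = 0.70 and the poles at about 0.7493 and 0.0105, matching the factored transfer function above.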
Table 16.1 Optimized controller parameters of the main controller. Design criterion (5.2.6), r = 0.1

3-PC3, r = 0.1                   q0       q1        q2      K        cD       cI
without auxiliary control loop   2.895    -4.012    1.407   1.488    0.9456   0.1950
with auxiliary control loop      2.6723   -3.3452   1.036   1.6363   0.6330   0.2219
The overall transfer function of the plant of the main controller is given by Eq. (16.3):

G'_Pu(z) = [0.0372(z - (0.6433 + 0.0528i))(z - (0.6433 - 0.0528i))(z + 0.1718)(z + 2.4411)] / [(z - 0.7493)(z - 0.0105)(z - 0.5866)(z - 0.6705)(z - 0.4493)] .
The auxiliary control loop introduces the poles of G_w2 and a conjugate complex zero pair in addition to the poles and zeros of the process G_Pu. Figure 16.2b shows that therefore the plant of the main controller becomes quicker. Finally, a quicker but well damped overall behaviour is obtained which, of course, requires larger process input changes, Figure 16.2c. Table 16.1 shows that the parameters of the main controller are changed by adding the auxiliary controller as follows: K larger, cD smaller, cI larger.
In discrete time the gain of the auxiliary controller must be reduced because of the smaller stability region (see example 16.1). The auxiliary control loop therefore becomes slower and its offset is larger. In addition the parameter changes of the process part G_Pu2 have more influence on the parameter tuning of the main controller. By adding an I-term one always obtains G_w2(1) = 1 as the gain of the auxiliary loop, independently of any parameter change of the process part G_Pu2. Then, if the resulting PI-controller is tuned so that it is far enough away from the stability limit, larger parameter changes of the first process part need not be considered when designing the main controller, provided that the dynamics of the auxiliary control loop are much quicker than those of the second process part.
Parameter optimized controllers, deadbeat controllers or minimum variance controllers, for example, are suitable as main controllers. For their design the process plus the already tuned P- or PI-auxiliary controller can be considered together as one plant given by Eq. (16.3). Using a state controller, one should either consider the auxiliary variable y2 to be a directly measurable state variable and employ a reduced order observer (see section 8.8), or insert the directly measurable state variable in place of an observed state variable given by a full order observer (see section 8.7.2).
For two measurable auxiliary control variables a double-cascade control system with two auxiliary controllers can be designed [16.1]. If all state variables are measurable then the multi-cascade control system has a similar structure to a state controller. From the theory of optimal state control it is known that the single auxiliary controllers have P-behaviour, chapter 8. Cascade control systems with P-controllers can therefore be considered as first steps towards optimal state control.
A particular advantage of the separation into an auxiliary controller and a main controller is the resulting stepwise parameter adjustment, one after another. This is true both for applying tuning rules and for computer aided design. For cascade control systems, first estimates of the parameters q02 of the auxiliary controller and q01 of the main controller can be obtained simply by prescribing the manipulated variable u(0) in the case of a step in the reference value w1(0). (5.2.31) and (5.2.32) give

u(0) = q02 w2(0)
w2(0) = q01 w1(0)

and therefore

u(0) = q01 q02 w1(0) . (16.9)

This relation can in particular be used to choose q01 of a parameter optimized main controller if the initial manipulated variable u(0) must be adjusted to the manipulation range and if the parameter q02 of the auxiliary controller is already fixed.
Cascade control systems can often be applied. If an auxiliary control variable is measurable in the input region of a process, cascade control systems should always be applied for control systems with higher performance requirements. They can be
especially recommended in cases where a valve manipulates a flow. The gain of the valve is non-linear, as it depends among other things on the pressure drop across the valve, which can change significantly during operation. An auxiliary control loop with PI-controller can compensate for these gain changes completely. Cascade control systems should be applied more frequently in digital control systems than in analogue control systems, as the additional effort of the auxiliary controller is small.
17 Feedforward Control
Figure 17.1 Feedforward control of a single input/single output process
17.1 Cancellation Feedforward Control 57
for deterministic and stochastic disturbances. State controllers for external disturbances already contain ideal feedforward control for part of the disturbance model. Feedforward control systems for directly measurable state variable disturbances satisfy the state control concept for external disturbances in the form of state variable feedforward control, section 17.3. Finally, corresponding to minimum variance control, minimum variance feedforward control for stochastic disturbances can also be designed, section 17.4.
In the following it is assumed that mathematical models of the processes both for the process behaviour

G_Pu(z) = y(z)/u(z) = [B(z^-1)/A(z^-1)] z^-d = [(b1 z^-1 + ... + bm z^-m)/(1 + a1 z^-1 + ... + am z^-m)] z^-d (17.1)

and for the disturbance behaviour

G_Pv(z) = n(z)/v(z) = D(z^-1)/C(z^-1) = (d0 + d1 z^-1 + ... + dq z^-q)/(1 + c1 z^-1 + ... + cq z^-q) (17.2)

are known. For state feedback control the state model

x(k + 1) = Ax(k) + Bu(k) (17.3)
y(k) = Cx(k) (17.4)

is assumed to be known.
and only the numerator of the process transfer behaviour has to be cancelled.
58 17 Feedforward Control
If these feedforward controls can be realized and are stable, the influence of the disturbance v(k) on the output y(k) is completely eliminated. One condition for the realizability of (17.1.2) is that if the element h0 is present an element f0 must also be present, and if h1 is present f1 must also be present, etc. This means that for the assumed process model structure of (17.1) and (17.2), d = 0 and d0 = 0 must always be fulfilled. Therefore one can assume d = 0 from the beginning if G_Pv(z) has no jumping property and always contains a deadtime d' ≥ d. Then only the part B/A is cancelled.
To obtain a stable feedforward control the roots zj of the denominator F(z) must satisfy |zj| < 1, that means the zeros of B(z) and C(z) must lie within the unit circle. Therefore ideal feedforward control is impossible for processes with deadtime and with a jumping property, or for processes with zeros of the process or of the disturbance behaviour on or outside the unit circle in the z-plane (e.g. for nonminimum phase behaviour).
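The realizability and stability conditions can be checked directly from the polynomial roots; a small sketch with illustrative coefficients (not the book's test processes):

```python
# Ideal cancellation feedforward control is stable only if the zeros of
# B(z) and C(z) lie inside the unit circle, and unrealizable for deadtime d > 0.
import numpy as np

def cancellation_feasible(b, c, d):
    """b: [b1..bm], c: [1, c1..cq] in powers of z, d: process deadtime."""
    if d != 0:
        return False                       # deadtime cannot be cancelled
    zeros_B = np.roots(b)                  # zeros of B(z)
    zeros_C = np.roots(c)                  # zeros of C(z)
    return bool(np.all(np.abs(zeros_B) < 1) and np.all(np.abs(zeros_C) < 1))

# minimum phase example: B zero at z = -b2/b1 = -0.5
print(cancellation_feasible([1.0, 0.5], [1.0, -0.8, 0.15], 0))   # True
# nonminimum phase example: B zero at z = -1.5 outside the unit circle
print(cancellation_feasible([1.0, 1.5], [1.0, -0.8, 0.15], 0))   # False
```

The C polynomial in both calls factors as (z - 0.3)(z - 0.5), so only the location of the process zero decides the outcome here.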
Example 17.1.1
As examples the feedforward control of three test processes I, II and III with distinct process
behaviour, but with identical disturbance behaviour are considered (see Tables 17.1, 17.2
and appendix Vol. I).
Process I (second order with oscillatory behaviour; model of an oscillator)
From (17.1.2) it follows that:

G_S0(z) = [d1 + (a1d1 + d2)z^-1 + (a1d2 + a2d1)z^-2 + a2d2z^-3] / [b1 + (b1c1 + b2)z^-1 + (b1c2 + b2c1)z^-2 + b2c2z^-3]

= (0.144 - 0.123z^-1 - 0.039z^-2 + 0.065z^-3) / (1.0 - 0.527z^-1 - 0.237z^-2 + 0.132z^-3)

= [(z + 0.646)(z - (0.750 + 0.369i))(z - (0.750 - 0.369i))] / [(z + 0.494)(z - (0.510 + 0.082i))(z - (0.510 - 0.082i))]
This transfer function can be realized and is stable. Figure 17.2 shows the behaviour of the
manipulated variable for a step change in the disturbance v(k). No change in the output
variable arises, as there is complete compensation.
Process II (second order with nonminimum phase behaviour)
For this process one obtains a real pole at z = 1.695, as the zero is outside the unit circle for
Gpu(z), indicating unstable behaviour of the feedforward element. Ideal feedforward control
is therefore impossible.
Process III (third order with deadtime; model of a low pass process)
The feedforward element given by Eq. (17.1.2) is unrealizable for process III because d ≠ 0. If the deadtime d in (17.1.2) is omitted, that means only the deadtime-free term B/A is compensated, then:
G_S0(z) = [0.385(z - 0.675)(z - 0.560)(z - 0.264)(z + 0.441)] / [0.065(z + 0.879)(z - 0.140)(z^2 - 0.527z + 0.07)]

= (5.923 - 6.448z^-1 + 0.526z^-2 + 1.121z^-3 - 0.243z^-4) / (1 + 0.212z^-1 - 0.443z^-2 + 0.117z^-3 - 0.009z^-4)
This feedforward element is realizable. However, the cancellation in this case involves a large input amplitude, which can be seen from the alternative form of the feedforward element:

us(0) = (0.385/0.065)v(0) = 5.923v(0) .
with

ū = u(∞)    final value for e.g. step disturbances
ū = E{u(k)} expected value for stochastic disturbances.

In many cases one obtains satisfactory feedforward control performance for l ≤ 2.
As the gain of the feedforward element satisfies
Hence for l = 1, the design of the feedforward element with a prescribed initial manipulated variable u(0) leads to the optimization of a single parameter, taking into account the restriction of (17.2.14). The computational effort for parameter optimization in this case is particularly small. Improved gradient methods, described in section 5.4, are recommended as suitable optimization methods when using a digital computer. The truncation criterion must not be selected too small.
A parameter-optimized feedforward control of first order (l = 1) with a prescribed initial manipulated variable is now described for the examples of processes II and III subjected to a step change in the disturbance v(k).
Example 17.2.1
Process II
Figure 17.3 shows V[u(1)] and Figure 17.4 the corresponding time responses of the manipulated and the controlled variable for u(0) = 1.5. Because of the nonminimum phase behaviour, the initial deviation is increased by feedforward control; the deviation for k ≥ 3, however, is improved.
Process III
Figure 17.5 shows V[u(1)] for different u(0). The minima are relatively flat for large u(0). Figure 17.6 shows that the feedforward control improves the behaviour for k ≥ 2.
Figure 17.4 Transient responses of the manipulated variable u(k) and the output variable y(k) for process II for f1 = -0.5; h0 = 1.5; h1 = -1.3.
phase behaviour. The computational effort required for synthesis is, however,
larger than that for feedforward cancellation control. The parameter calculation
for first or second order elements is in general a simple computer aided design
problem.
64 17 Feedforward Control
Figure 17.6 Transient responses of the manipulated variable u(k) and output variable y(k) for process III for f1 = 0.3; h0 = 3.0; h1 = -2.3.
Figure 17.7 Graph of controlled and manipulated variable after a step change in the disturbance variable v, for the case that the control system with feedforward control was designed for: --- open loop; ····· closed loop (example: steam superheater control)
If the state variables x(k) are directly measurable, the state variable deviations xv(k) are acquired by the state controller of (8.1.33)

u(k) = -Kx(k)

one sample interval later, so that for state control additional feedforward control is unnecessary. With indirectly measurable state variables, the measurable disturbances v(k) can be added to the observer. For observers as in Figure 8.7 or Figure 8.8 the feedforward control algorithm is:
x̂(k + 1) = Fv(k) or x̃(k + 1) = Fv(k) . (17.3.2)
66 17 Feedforward Control
In this case (14.1.5) is introduced for z y(z), and for the feedforward control it follows that:

G_SMV1(z) = u(z)/v(z) = - λ z A(z^-1)[D(z^-1) - C(z^-1)] / [z B(z^-1)C(z^-1) + (r/b1) A(z^-1)C(z^-1)] (17.4.3)

G_SMV3(z) = - λ z [D(z^-1) - A(z^-1)] / [z B(z^-1) + (r/b1) A(z^-1)] (17.4.5)

and for r = 0:

G_SMV4(z) = - λ z [D(z^-1) - A(z^-1)] / [z B(z^-1)] . (17.4.6)
The feedforward control elements SMV2 and SMV4 are the same as the minimum variance controllers MV2 and MV4 with the exception of the factor λ. As the discussion of the properties of these feedforward controllers is analogous to that for the minimum variance controllers in chapter 14, in the following only the most important points are summarized.
For SMV1 and SMV2 the roots of C(z^-1) = 0 must lie within the unit circle, for SMV3 and SMV4 also the roots of B(z^-1) = 0, so that no instability occurs.
17.4 Minimum Variance Feedforward Control 67
The feedforward control SMV1 affects the output variable in the following way:

G_v(z) = y(z)/v(z) = λ D(z^-1)/C(z^-1) + G_S(z) B(z^-1)/A(z^-1)

= [λ D(z^-1)/C(z^-1)] [1 + (z B(z^-1) / (z B(z^-1) + (r/b1) A(z^-1))) (C(z^-1)/D(z^-1) - 1)] . (17.4.7)
C(z^-1) = A(z^-1) has to be set only for SMV3. When r → ∞, G_v(z) → λD(z^-1)/C(z^-1), and the feedforward control is then meaningless. For r = 0, i.e. SMV2 or SMV4, one obtains, however:

G_v(z) = λ . (17.4.8)
This means that the effect of the feedforward control is to produce white noise y(z) = λv(z) with variance λ^2 at the output. For processes with deadtime d the derivation of the minimum variance controller is identical with (14.2.2) to (14.2.9). In (14.2.9) one has only to introduce (14.2.4) to obtain the general feedforward element

G_SMV1d(z) = u(z)/v(z) = - λ A(z^-1)L(z^-1) / [z B(z^-1)C(z^-1) + (r/b1) A(z^-1)C(z^-1)] (17.4.9)

or with r = 0:

G_SMV2d(z) = - λ A(z^-1)L(z^-1) / [z B(z^-1)C(z^-1)] . (17.4.10)
The resulting output variable is, for feedforward controllers SMV2d and SMV4d:
Part E considers some design methods for linear discrete-time multivariable processes. As shown in Figure 18.1 the inputs ui and outputs yi of multivariable processes influence each other, resulting in mutual interactions of the direct signal paths u1 - y1, u2 - y2, etc. The internal structure of multivariable processes has a significant effect on the design of multivariable control systems. This structure can be obtained by theoretical modelling if there is sufficient knowledge of the process. The structures of technical processes are very different, such that they cannot be described in terms of only a few standardized structures. However, the real structure can often be transformed into a canonical model structure using similarity transformations or simply block diagram conversion rules. The following sections consider special structures of multivariable processes based on the transfer function representation, matrix polynomial representation and state representation. These structures are the basis for the designs of multivariable controllers presented in the following chapters.
Figure 18.2 Block diagram of the evaporator and superheater of a natural circulation steam generator [18.5], [18.6].
18.1 Structural Properties of Transfer Function Representations
variables are the fuel flow u2 and spray water flow u1. Based on this block diagram the following continuous-time transfer functions can be derived.

Evaporator: G22(s) = y2(s)/u2(s) = G10(s)G13(s)G14(s)G15(s)

Coupling superheater–evaporator: G12(s) = y2(s)/u1(s) = G1(s)G5(s)G14(s)G15(s)

Coupling evaporator–superheater: G21(s) = y1(s)/u2(s) = G10(s)G5(s)G6(s)G4(s)
G11 and G22 are called the 'main transfer elements' and G12 and G21 the 'coupling transfer elements'. Assuming that the input and output signals are sampled synchronously with the same sample time T0, the transfer functions between the samplers are then combined before applying the z-transformation, giving:

G11(z) = y1(z)/u1(z) = G1G2G3G4(z)   (18.1.1)
This example shows that there are common transfer function elements in this input/output representation. The transfer functions can be summarized in a transfer matrix G(z).
The transfer matrix G(z) can be split into a matrix G_H containing only the diagonal elements and a matrix G_N which contains the remaining elements, yielding:

y = G_H u + G_N u = G_H [u + G_H^-1 G_N u] .

If G is non-singular, one has

u = G^-1 y

so that:

y = G_H [u + G_H^-1 G_N G^-1 y] .   (18.1.6)

Comparing with Eq. (18.1.3) one obtains:

G_K = G_H^-1 G_N G^-1 .   (18.1.7)
Both canonical forms can therefore be converted to each other, but realizability must be considered. For a twovariable process the calculation of the transfer function elements is for example given in [18.2].
If the behaviour of multivariable processes has to be identified on the basis of nonparametric models, as for example using nonparametric frequency responses or impulse responses, then one obtains only the transfer behaviour in a P-canonical structure. If other internal structures are considered, proper parametric models and parameter estimation methods must be used.
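The conversion (18.1.7) between the P- and V-canonical forms can be checked numerically at single frequency points. The sketch below uses an assumed 2x2 example transfer matrix with first-order elements (illustrative numbers, not from the book):

```python
import numpy as np

# P- to V-canonical conversion G_K = G_H^-1 G_N G^-1, evaluated pointwise
# on the unit circle for a hypothetical 2x2 process.

def G(z):
    """P-canonical 2x2 transfer matrix G(z) of an assumed example process."""
    g11 = 1.0 / (z - 0.8)
    g12 = 0.3 / (z - 0.5)
    g21 = 0.2 / (z - 0.6)
    g22 = 1.2 / (z - 0.7)
    return np.array([[g11, g12], [g21, g22]])

def GK(z):
    """V-canonical coupling matrix G_K(z) = G_H^-1(z) G_N(z) G^-1(z)."""
    Gz = G(z)
    GH = np.diag(np.diag(Gz))          # diagonal (main) elements
    GN = Gz - GH                       # off-diagonal (coupling) elements
    return np.linalg.inv(GH) @ GN @ np.linalg.inv(Gz)

z = np.exp(1j * 0.3)                   # a point on the unit circle
Gz, GKz = G(z), GK(z)
GH = np.diag(np.diag(Gz))

# check the defining relation y = G u  <=>  y = G_H [u + G_K y]
u = np.array([1.0, -0.5])
y = Gz @ u
print(np.allclose(GH @ (u + GKz @ y), y))  # True
```

The final check confirms algebraically that both structures produce the same input/output behaviour at the chosen frequency point.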
The overall structure describes only the signal flow paths. The actual behaviour of multivariable processes is determined by the transfer functions of the main and coupling elements, including both their signs and mutual position. One distinguishes between symmetrical multivariable processes, where

G_ii(z) = G_jj(z)   i = 1, 2, ...
G_ij(z) = G_ji(z)   j = 1, 2, ...

and non-symmetrical multivariable processes, where

G_ii(z) ≠ G_jj(z)
G_ij(z) ≠ G_ji(z) .

With regard to the settling times of the decoupled main control loops, slow process elements G_ii can be coupled with fast process elements G_ij. With lumped parameter processes signals can only appear at the input or output of energy, mass or momentum storages. The main and coupling elements often contain the same storage components, so that a main transfer element and a coupling transfer element possess some common transfer function terms. Hence G_ii ≈ G_ij or G_ii ≈ G_ji can often be observed.
[u1(z); u2(z)] = [-R11(z)  0 ; 0  -R22(z)] [y1(z); y2(z)]

u(z) = R(z) y(z)   (18.1.8)
which consists of only two main controllers. The sample time is assumed to be equal and the sampling to be synchronous for all signals. Furthermore w1 = w2 = 0. Then one obtains:

[I - G(z)R(z)] y(z) = 0   (18.1.9)
or

(1 + G11 R11) y1(z) + G12 R22 y2(z) = 0
G21 R11 y1(z) + (1 + G22 R22) y2(z) = 0 .

If the first equation is solved for y2 and introduced into the second equation, one obtains:

[(1 + G11 R11)(1 + G22 R22) - G12 R11 G21 R22] y1 = 0 .
Therefore the characteristic equation of the twovariable control system becomes:

(1 + G11 R11)(1 + G22 R22) [1 - G12 R11 G21 R22 / ((1 + G11 R11)(1 + G22 R22))] = 0 .
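The factored form of the characteristic equation can be verified against det[I - G(z)R(z)] at an arbitrary test point. A small numerical sketch with assumed first-order elements and illustrative controller gains:

```python
import numpy as np

# Numerical check of det[I - G(z)R(z)] =
#   (1 + G11 R11)(1 + G22 R22) - G12 R11 G21 R22
# for a hypothetical 2x2 process with diagonal P-controllers.

def g(z, K, a):               # simple first-order element K/(z - a)
    return K / (z - a)

R11, R22 = 1.5, 0.8           # illustrative controller gains

def det_char(z):
    G11, G12 = g(z, 1.0, 0.8), g(z, 0.3, 0.5)
    G21, G22 = g(z, 0.2, 0.6), g(z, 1.2, 0.7)
    Gm = np.array([[G11, G12], [G21, G22]])
    Rm = np.array([[-R11, 0.0], [0.0, -R22]])
    lhs = np.linalg.det(np.eye(2) - Gm @ Rm)
    rhs = (1 + G11 * R11) * (1 + G22 * R22) - G12 * R11 * G21 * R22
    return lhs, rhs

lhs, rhs = det_char(1.3 + 0.2j)
print(np.isclose(lhs, rhs))   # True: both forms agree
```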
The transfer functions with the reference variables as inputs are introduced

G_wi(z) = G_ii(z)R_ii(z) / (1 + G_ii(z)R_ii(z)) ,  i = 1, 2   (18.1.12)

so that:

(1 + G11 R11)(1 + G22 R22) [1 - κ(z) G_w1(z) G_w2(z)] = 0 .   (18.1.13)

The term (1 - κ G_w1 G_w2) = 0 contains additional eigenvalues arising from the influence of the couplings, where

κ(z) = G12(z)G21(z) / (G11(z)G22(z)) .
Influenced by the coupled control loop the controlled "process" of the main controller changes as follows:

G11 → G11 (1 - κ G_w2)

(see Figure 18.4a). A second transfer path G_ii κ G_wj appears in parallel to the controlled main process element G_ii.
Now the change in the gain of the controlled "processes" caused by the coupled neighbouring control loop is considered. For the controller R_ii(z) the process gain is G_ii(1) in the case of the open loop j, and G_ii(1)[1 - κ_0 G_wj(1)] in the case of the closed neighbouring loop. The factor [1 - κ_0 G_wj(1)] = ε_ii describes the change of the gain through the coupled neighbouring loop. κ_0 is called the static coupling factor

κ_0 = K12 K21 / (K11 K22) .   (18.1.16)

This coupling factor exists for transfer elements with proportional behaviour, or with integral behaviour if there are two integral elements G_ii(z) and G_ij(z). In Figure 18.5 the factor ε_ii is shown as a function of κ_0. For an open neighbouring loop j, ε_ii = 1 is
18 Structures of Multivariable Processes

Figure 18.5 Dependence of the factor ε_ii on the static coupling factor κ_0 for twovariable control systems with P-canonical structure (negatively coupled: without sign change; positively coupled: with sign change).
valid. If the neighbouring loop is closed, the following cases can be distinguished [18.7]:

1) κ_0 < 0: negative coupling → ε_ii > 1
2) κ_0 > 0: positive coupling
   a) κ_0 < 1/G_wj(1) → 0 ≤ ε_ii < 1
   b) κ_0 > 1/G_wj(1) → ε_ii < 0 .
Therefore a twovariable process can be divided into negatively and positively coupled processes. In case 1), the gain of the controlled "process" increases by closing the neighbouring loop, so that the controller gain must in general be reduced. In case 2a), the gain of the controlled "process" decreases and the controller gain can be increased. Finally, in case 2b) the gain of the controlled "process" changes its sign, so that the sign of the controller R_ii must be changed. Near ε_ii ≈ 0 the control of the variable y_i is not possible in practice.
As the coupling factor κ(z) depends only on the transfer functions of the processes, including their signs, the positive or negative couplings are properties of the twovariable process. The path paralleling the main element G_ii, see Figure 18.4b, generates an extra signal which is lagged by the coupling elements. If these coupling elements are very slow, then the coupled loop has only a weak effect. For coupling elements G12 and G21 which are not too slow compared with G_ii, a fast coupled loop 2 has a stronger effect on y1 than a slow one.
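The case distinction on ε_ii = 1 - κ_0 G_wj(1) above can be condensed into a few lines of code. A minimal sketch with illustrative numbers (not from the book); G_wj(1) = 1 corresponds to a neighbouring loop with integral-acting controller:

```python
# Gain-change factor eps_ii = 1 - kappa0 * Gwj(1) and the resulting
# coupling classification for a twovariable P-canonical process.

def coupling_case(kappa0, gwj1=1.0):
    """Classify the effect of closing neighbouring loop j.

    kappa0: static coupling factor K12*K21/(K11*K22)
    gwj1:   static reference transfer Gwj(1) of loop j
    """
    eps = 1.0 - kappa0 * gwj1
    if kappa0 < 0:
        return eps, "negative coupling: gain increases, reduce controller gain"
    if eps > 0:
        return eps, "positive coupling: gain decreases, controller gain may rise"
    return eps, "positive coupling beyond 1/Gwj(1): controller sign must change"

print(coupling_case(-0.5))
print(coupling_case(0.4))
print(coupling_case(1.3))
```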
c) Reference variables
In the example of the steam generator of Figure 18.2 these cases correspond to the
following disturbances:
a) - changes in steam flow following load changes
- changes in calorific value of the fuel (coal)
- contamination of the evaporator heating surface
can be distinguished. If G_v1 and G_v2 have different signs, the sign combinations of groups I and II or groups III and IV in Table 18.1 must be interchanged. The disturbance transfer function shows that the response of the controlled variable is identical for the different sign combinations within one group. If only one disturbance n_i acts on the output y_i (and n_2 = 0), then the action of the neighbouring controller R22 is given in
Table 18.1 Mutual effect of the main controllers as a function of the signs of the main and coupling elements for a step disturbance v acting simultaneously on both loops. G_v1 and G_v2 have the same sign. From [18.7].
[Table rows: the sign combinations of K11, K12, K21 and K22; the resulting coupling (positive, κ_0 > 0, or negative, κ_0 < 0); the signs of K21/K22 and K12/K11; and the mutual behaviour of the controllers (reinforcing or counteracting), arranged in groups I to IV.]
Table 18.2. The controller R22 counteracts the controller R11 for positive coupling and reinforces it for negative coupling.
After comparing all cases
- G_v1 and G_v2 have the same sign
- G_v1 and G_v2 have different signs
- G_v1 = 0, G_v2 ≠ 0 or G_v2 = 0, G_v1 ≠ 0
positive coupling (κ_0 > 0): counteracting — groups I and II
negative coupling (κ_0 < 0): reinforcing — groups III and IV
it follows that there is no sign combination which leads to only reinforcing or only counteracting behaviour in all cases. This means that the mutual effect of the main controllers of a twovariable process always depends on the particular external excitation. Each multivariable control system must be treated individually in this context.
As an example the steam generator in Figure 18.2 is considered again. The disturbance elements have the same sign for a steam flow change, so that Table 18.1 is valid. An inspection of the signs gives the combination - + + + and we therefore have group IV. The superheater and evaporator are negatively coupled and κ_0 = -0.1145. The steam pressure controller R22 reinforces the steam temperature controller R11, cf. [18.5]. However, R11 counteracts R22 only insignificantly, as the coupling element G5 in Figure 18.2 has a relatively low gain. The calorific value disturbances also act on both outputs with the same sign, so that the same group is involved.
(18.1.17)
with:
(18.1.18)
If A(z^-1) is a diagonal polynomial matrix one obtains for a process with two inputs and two outputs:
18.2 Structural Properties of the State Representation
elements have no independent storage or state variable. The state representation is:

(18.2.3)

The matrices A11 and A22 of the main transfer elements become diagonal blocks and the coupling matrices A12 and A21 nondiagonal blocks of the overall system matrix A. The main transfer elements can be put into one of the canonical forms of Table 3.3. The coupling matrices then contain only coupling parameters in a corresponding form and zeros.
Observable or controllable multivariable processes of arbitrary structure can be represented in blockwise triangular form by similarity transformation, so that the diagonal blocks A11 and A22 are represented in row companion canonical form or observable canonical form, or in controllable canonical form or column companion canonical form, cf. [2.19, 18.8].
b) A twovariable process with a P-canonical structure

In analogy to Figure 18.3a a twovariable process with P-canonical structure is shown in Figure 18.7. Different storages and state variables are assumed for both the main elements and the coupling elements, with no direct couplings between them. The state representation then becomes:
[x11(k+1)]   [A11  0    0    0  ] [x11(k)]   [b11  0  ]
[x12(k+1)] = [0    A12  0    0  ] [x12(k)] + [0    b12] [u1(k)]
[x21(k+1)]   [0    0    A21  0  ] [x21(k)]   [b21  0  ] [u2(k)]
[x22(k+1)]   [0    0    0    A22] [x22(k)]   [0    b22]
   (18.2.5)

[y1(k)]   [c11^T  c12^T  0      0    ]
[y2(k)] = [0      0      c21^T  c22^T] [x11(k) x12(k) x21(k) x22(k)]^T   (18.2.6)
In this case all matrices of the main and coupling elements occur in A as diagonal
blocks.
[x11(k+1)]   [A11        b11 c12^T  0          0        ] [x11(k)]   [b11  0  ]
[x12(k+1)] = [0          A12        0          b12 c22^T] [x12(k)] + [0    0  ] [u1(k)]
[x21(k+1)]   [b21 c11^T  0          A21        0        ] [x21(k)]   [0    0  ] [u2(k)]
[x22(k+1)]   [0          0          b22 c21^T  A22      ] [x22(k)]   [0    b22]
   (18.2.7)

[y1(k)]   [c11^T  0  0  0    ]
[y2(k)] = [0      0  0  c22^T] [x11(k) x12(k) x21(k) x22(k)]^T .   (18.2.8)
In addition to the matrices of the main and coupling transfer elements on the block diagonal, four coupling matrices appear for this V-canonical structure, as for the direct coupling (18.2.3). The matrices B and C are also similar.
If the inner structure of a multivariable process is determined through theoretical modelling (compare section 3.7.2), then it is obvious that multivariable processes rarely show the simple structures treated in the previous examples. For the steam generator shown in Figure 18.2 a P-canonical structure according to (18.2.5) results first. Because of the common elements G1, G4, G10, G14 and G15, compare Figure 18.2, this structure is transformed into the following minimal realization:
A = [system matrix, mostly zeros; the few nonzero entries are ones and the parameters a32, a33, a41]   (18.2.9)

B = [input matrix, mostly zeros]   (18.2.10)
This example shows that, in general, mixtures of different special structures occur; the reader is also referred to [18.11].
If the state representation is directly obtained from the transfer functions of the
elements of Figure 18.3 some multiple state variables are introduced if the elements
have common states, as in (18.1.1) for example. Then the parameter matrices have
unnecessary parameters. However, if the state representation is derived taking into
account common states so that they do not appear in double or multiple form,
a state representation with a minimal number of states is obtained. This is called
a minimal realization. A minimal realization is both controllable and observable.
Nonminimal state representations are therefore either incompletely controllable
and/or incompletely observable. Methods for generating minimal realizations are
given for example in [18.9], [18.3].
The definition of observability and controllability of multivariable systems is analogous to single-input/single-output systems, described in chapter 3, cf. [2.19], [18.3]. A multi-input/multi-output system of order m is controllable if

rank[B, AB, ..., A^(N-1) B] = m

and is observable if

rank[C^T, (CA)^T, ..., (CA^(N-1))^T] = m .
The definition of N causes a certain problem when examining the observability and controllability of multivariable systems. If N = m, then each state variable is controllable from each manipulated variable or observable from each output variable. In most cases, however, only certain state variables are controllable or observable from one input or one output variable. Then N < m. The controllability and observability of multivariable systems is treated in more detail in e.g. [2.19, 18.3, 5.17].
The controllable and observable state model contains m² + mp + mr parameters. In order to describe the input/output behaviour, however, fewer parameters are often sufficient. The state model can be transformed by a linear transformation (see section 3.6.3). The transformation matrix T is to be chosen in such a way that specific canonical state models are generated, whereby as many parameters of A as possible become zero or one.
The resulting models with a minimal number of parameters are significant particularly in connection with parameter estimation methods. The state model in row companion canonical form is especially suited for multivariable control systems [26.43]. It also provides a simple transition to minimally realized P-canonical input/output models.
After this discussion of some special structural properties of multivariable processes, the two following chapters will present some methods for the design of multivariable control systems.
19 Parameter-optimized Multivariable Control Systems
Chapter 18 has already shown that there are many structures and combinations of process elements and signs for twovariable processes. Therefore general investigations on twovariable processes are known only for certain selected structures and transfer functions. The control behaviour and the controller parameter settings are described in [19.1], [19.2], [19.3], [19.4], [19.5] and [18.7] for special P-canonical processes with continuous-time signals. Based on these publications, some results which have general validity and are also suitable for discrete-time signals are summarized below.
For twovariable processes with a P-canonical structure, synchronous sampling and equal sample times for all signals, the following properties of the process are important for control (see section 18.1):
a) Stability, modes

• transfer functions of the main elements G11, G22 and coupling elements G12, G21:
  - symmetrical processes: G11 = G22, G12 = G21
  - asymmetrical processes: G11 ≠ G22, G12 ≠ G21
• coupling factor:
  - dynamic: κ(z) = G12(z)G21(z) / (G11(z)G22(z))
  - static: κ_0 = K12 K21 / (K11 K22)
For symmetrical twovariable processes with the transfer functions

G_ij(s) = K_ij / (1 + Ts)^3 ,  ij = 11, 22, 12, 21   (19.1.1)

the stability limits are shown in Figures 19.2 and 19.3 for positive and negative values of the coupling factor [19.1].
The controller gains K_Rii are related to the critical gains K_RiiK on the stability limit of the noncoupled loops, i.e. κ_0 = 0. Therefore the stability limit is a square with K_Rii/K_RiiK = 1 for the noncoupled loops. With increasing magnitude of the negative coupling κ_0 < 0 an increasing region develops in the middle part and also the peaks at both ends increase, Figure 19.2. For increasing magnitude of positive coupling κ_0 > 0 the stability region decreases, Figure 19.3, until a triangle remains for κ_0 = 1. If κ_0 > 1 the twovariable system becomes monotonically structurally unstable for main controllers with integral action, as is seen from Figure 18.4a. Then G_w1(0) = 1 and G_w2(0) = 1, and with κ_0 = 1 a positive feedback results. If κ_0 > 1 the sign of one controller must be changed, or other pairings of manipulated and controlled variables must be chosen. Figures 19.2 and 19.3 show that the stability regions decrease with increasing magnitude of the coupling factor, if the peaks for negative coupling, which are not relevant in practice, are neglected.
Figure 19.4 shows, for the case of negative coupling, the change of the stability regions when the P-controller is extended by an integral term (→ PI-controller) and
19.1 Parameter Optimization of Main Controllers without Coupling Controllers

Figure 19.2 Stability regions of a symmetrical twovariable control system with negative coupling and P-controllers [19.1].

[Figure 19.3: stability regions for positive values of the coupling factor.]
a differentiating term (→ PID-controller). In the first case the stability region decreases, in the second case it increases.
The stability limits so far have been represented for continuous-time signals. If sampled-data controllers are used, the stability limits differ little for small sample times T0/T95 ≈ 0.01. However, the stability regions decrease considerably for larger sample times, as can be seen from Figure 19.5. In [19.1] the stability limits
Figure 19.4 Stability regions of a symmetrical twovariable system with negative coupling κ_0 = -1 for continuous-time P-, PI- and PID-controllers [19.1]. PI-controller: T_I = T_P; PID-controller: T_I = T_P, T_D = 0.2 T_P; T_P: time period of one oscillation for K_Rii = K_RiiK (critical gain on the stability limit), see figure in Table 5.6.

Figure 19.5 Stability regions for the same twovariable system as Figure 19.4, however with discrete-time P-controllers for different sample times T0.
Figure 19.6 Typical stability regions for twovariable control systems with P-controllers; T_Pi: period of the uncoupled loop i at the stability limit [19.1].
have also been given for asymmetrical processes. The time constants, (19.1.1), have been changed so that the time periods T_Pi of the uncoupled loops with P-controllers satisfy T_P2/T_P1 > 1 at the stability limits. Figure 19.6 shows the resulting typical forms of stability region. Based on these investigations and those in [18.7], twovariable control systems with P-canonical structure and lowpass behaviour show the following properties:

a) For negative coupling, stability regions with peaks arise. Two peaks appear for approximately symmetrical processes; otherwise there is only one peak.
b) For positive coupling, large extensions of the stability region arise with increasing asymmetry.
c) With increasing asymmetry, i.e. a faster loop 1, the stability limit approaches the upper side of the square of the uncoupled loops. This means that the stability of the faster loop is influenced less by the slower loop.
The knowledge of these stability regions is a good basis for developing tuning rules
for twovariable controllers.
Here, the α_i are the weighting factors for the individual controlled and manipulated variables, with Σα_i = 1. If the criterion has a unique minimum, the conditions

dS/dq = 0   (19.1.4)

lead to the optimal controller parameters:

(19.1.5)
Even with the restriction to v = 2, however, the required computational effort increases considerably with the number p of controlled variables, approximately proportionally to n³, if n is the number of parameters [5.36, p. 186]. Good starting values of the controller parameters, or given parameters q_0, compare section 5.2.2, can reduce the computational effort. Appropriate starting values for parameter optimization can be determined through tuning rules, which will be treated in the following. Note that the results depend very much on the noise or command signals which act separately or simultaneously on the twovariable system, see the start of section 19.1.
Tuning rules for parameter-optimized main controllers with P-, PI- or PID-behaviour have been developed by several authors. However, these rules have been obtained only for continuous-time signals. Since, at least for small sample times, they can also be used for discrete-time controllers, some tuning rules given in [18.7, 19.1-19.5] are described.
The tuned controller parameters, of course, have to lie inside the stability region, sufficiently distant from the stability limits. An additional requirement in practice is that each of the control loops remains stable if the other is opened. Therefore the gains must always satisfy K_Rii/K_RiiK < 1 and can only lie within the hatched areas in Figure 19.7.
Based on the stability regions, the following controller parameter tuning rules can be derived. The cases a) to d) refer to Figure 19.7.
1. Determination of the stability limits
1.1 Both main controllers are switched to P-controllers.
1.2 Set K_R22 = 0 and increase K_R11 until the stability limit K_R11K is reached → point A.
1.3 Set K_R11 = 0 and search for K_R22K → point B.
1.4 Set K_R11 = K_R11K and increase K_R22 until a new oscillation with constant amplitude is obtained → point C for a) and b).
1.5 If no intermediate stability occurs, K_R22 is increased for K_R11 = K_R11K/2 → point C' in cases c) and d).
1.6 In cases a) and b) item 1.4 is repeated for K_R22 = K_R22K and changing K_R11 → point D for a).

Now a rough picture of the stability region is known, and also which of the cases a) to d) is appropriate.
Figure 19.7a-d Allowable regions of controller gains for twovariable systems. Negative coupling: a symmetrical; b asymmetrical. Positive coupling: c symmetrical; d asymmetrical.
Group I: equal / better
Group II: worse / worse
Group III: equal / better
Group IV: worse / equal-worse
R22 counteracts R11, and for positive coupling if both controllers reinforce each other. In both cases the main controller of the slower loop is reinforced. The poorest control results for negatively coupled processes where R11 counteracts R22 and R22 reinforces R11, and especially for positive coupling with counteracting controllers. In these cases the main controller of the slower loop is counteracted. This example also shows that the faster loop is influenced less by the slower loop. It is the effect of the faster loop on the slower loop which plays the significant role.
A comparison of the control performance of the coupled twovariable system
with the uncoupled loops gives further insight [18.7]. Only small differences occur
for symmetrical processes. For asymmetrical processes, see Table 19.1, it is shown
that the control performance of the slower loop is improved by the coupling, if its
controller or both controllers are reinforced. The loops should then not be
decoupled. The control performance becomes worse if both controllers counteract,
or if the controller of the slower loop is counteracted. Only then should one
decouple, i.e. especially for positively coupled processes with counteracting
controllers.
If the coupled control system has a poor behaviour, or if the process requires decoupled behaviour, decoupling controllers can be designed in addition to the main controllers. Decoupling is generally only possible for definite signals. A multivariable control system as in Figure 19.8 is considered, with dim y = dim u = dim w. External signals v and w result in

y = [I + G_P R]^-1 G_Pv v + [I + G_P R]^-1 G_P R w = G_v v + G_w w   (19.2.1)

whereas for missing external signals the modes are described by:

[I + G_P R] y = 0 .   (19.2.2)

Three types of non-interaction can be distinguished [18.2], [19.6].
The decoupling structures shown in Figures 19.9 and 19.10 are basically also valid for higher-order multivariable processes [19.6]. Compared with analog control systems, these decouplings can easily be realized in process computers through algorithms, as in feedforward control systems.
Section 19.1 showed that the couplings in a twovariable process may deteriorate or improve the control compared with uncoupled processes. If the control behaviour deteriorates, coupling controllers are to be introduced which act in a "decoupling" way. The previous section already treated the case of complete decoupling. If the control behaviour improves through the process couplings, one should examine whether the control performance can be improved even more by additional reinforcement of the coupling, that is, through coupling controllers acting in a "coupling" way. This has been considered in [18.7].
As shown in Figure 19.1, the coupling controllers can be arranged in P-similar structure before, after or parallel to the main controllers. Corresponding V-similar structures can also be used. The same algorithms which are used for the main controllers, (19.1.2), can be applied for the coupling controllers. Often pure P-controllers are sufficient, hence for parallel arrangement in P-structure, e.g.

u_i(z) = q_0ij e_j(z) = K_Rij e_j(z) .

As for the main controllers, a control performance criterion according to (19.1.3) can be used to optimize the parameters K_Rij. For numerical parameter optimization the unknown parameters K_Rij are taken up in the parameter vector q^T, (19.1.5).
In [18.7] proportionally acting coupling controllers were examined for various symmetrical and asymmetrical twovariable processes with 4th-order proportional elements in P-canonical structure. The results listed in the following refer to these structures. Both controlled variables are disturbed simultaneously.
For symmetrical processes, coupling controllers show no essential improvement of the control performance of the two controlled variables. For asymmetrical processes the quadratic integral criterion can be improved by 10% to 50% for the cases listed in Table 19.2. This table also shows whether coupling controllers should act in a "coupling" or a "decoupling" way. Coupling controllers should act in an additionally "coupling" way if the main controllers reinforce each other, or if the controller R11 of the more rapid control loop reinforces the controller R22 of the slower control loop and the coupling G12 to the slower control loop is dynamically rapid. (In the latter case the coupling controllers act from the more rapid to the slower control loop.)
Coupling controllers are to be used in a "decoupling" way if the main controllers counteract each other, or if the controller R22 of the slower control loop is counteracted by the controller R11 of the more rapid control loop. (In the latter case the decoupling coupling controllers act from the rapid to the slow control loop.)
Table 19.2 Improving effect of coupling controllers for twovariable processes in P-canonical structure, with the coupling controllers arranged in P-similar structure after the main controllers (Fig. 19.1b) and simultaneous disturbance of both controlled variables.

Note that these results are only valid for the indicated process structures. It is recommended to treat each case specifically, and to ensure that no instabilities occur through opening of single control loops with the additionally introduced coupling controllers.
20 Multivariable Matrix Polynomial
Control Systems
It is assumed that all polynomials of the process model have order m and that all inputs are delayed by the same dead time d. A deadbeat controller then results by requiring a finite settling time of m + d for the process outputs and of m for the process inputs if step changes of the reference variables w(k) are assumed.
For the SISO case this gave the closed-loop responses, cf. section 7.1,

u(z)/w(z) = B^-1(1) A(z^-1)

and the deadbeat controller

G_R(z) = u(z)/e(z) = B^-1(1) A(z^-1) / [1 - B^-1(1) B(z^-1) z^-d] .

A direct analogy leads to the design equation for the multivariable deadbeat controller (MDB1) [20.1]:

[I - B^-1(1) B(z^-1) z^-d] u(z) = B^-1(1) A(z^-1) e(z) .   (20.2.1)
This controller results in the finite settling time responses

u(z) = B^-1(1) A(z^-1) w(z)   (20.2.2)
y(z) = A^-1(z^-1) B(z^-1) z^-d B^-1(1) A(z^-1) w(z) = R(z^-1) w(z)   (20.2.3)

if R(z^-1) has the finite order m + d. The controller equation can also be written as:

(20.2.4)
To decrease the amplitudes of the process inputs the settling times can be increased. If the settling time is increased by one unit to m + 1 and m + d + 1, the SISO deadbeat controller becomes, cf. Eq. (7.2.14),

G_R(z) = u(z)/e(z) = q_0 [1 - z^-1/α] A(z^-1) / [1 - q_0 (1 - z^-1/α) B(z^-1) z^-d]

so that

u(1) ≈ u(0) .

The smallest process inputs are obtained for

q_0 = 1 / [(1 - a_1) B(1)]

which means that

1/α = a_1 .
The multivariable analogy (MDB2) is

[I - Q_0 [I - H z^-1] B(z^-1) z^-d] u(z) = Q_0 [I - H z^-1] A(z^-1) e(z)   (20.2.5)

with

(20.2.6)

Q_0 can be chosen arbitrarily in the range

Q_0min = B^-1(1) [I - A_1]^-1  and  Q_0max = B^-1(1)   (20.2.7)

satisfying u(1) = u(0) for Q_0min. For the smallest process inputs, u(1) = u(0), this requires that

Q_0 = B^-1(1) [I - A_1]^-1   (20.2.8)

yielding

H = A_1 .   (20.2.9)

20.3 Matrix Polynomial Minimum Variance Controllers
resulting in

B^f [A^-1(z^-1) [B(z^-1) z u(z) + L(z^-1) v(z)] - w(z)] + R [u(z) - u_w(z)] = 0   (20.3.9)

where v(z) can be replaced (reconstructed) by

v(z) = D^-1(z^-1) [A(z^-1) y(z) - B(z^-1) z^-d u(z)] .   (20.3.10)

After introducing (20.3.4) the generalized matrix polynomial minimum variance controller (MMV1) is found to be

u(z) = [F(z^-1) D^-1(z^-1) B(z^-1) z + (B^f)^-1 R]^-1
       · { [I + (B^f)^-1 R B^-1(1) A(1)] w(z)
         - A^-1(z^-1) L(z^-1) D^-1(z^-1) A(z^-1) y(z) } .   (20.3.11)

If R = 0 is set, the minimum variance controller (MMV2) results from (20.3.9) and (20.3.1), [20.2],

u(z) = B^-1(z^-1) · z^-1 / (1 - z^-(d+1)) · [A(z^-1) [w(z) - y(z)]
The state controller for multivariable processes was designed in chapter 8. Therefore only a few additional comments are made in this chapter. The process equation considered in the deterministic case is

x(k + 1) = A x(k) + B u(k)   (21.1)

y(k) = C x(k)   (21.2)

with m state variables, p process inputs and r process outputs. The optimal steady-state controller is then

u(k) = − K x(k)   (21.3)

and possesses p × m coefficients if each state variable acts on each process input.
(21.1.2)

A = [ A11 A12 A13 A14 ]      B = [ b11 b12 ]
    [ A21 A22 A23 A24 ]          [ b21 b22 ]
    [ A31 A32 A33 A34 ]          [ b31 b32 ]
    [ A41 A42 A43 A44 ]          [ b41 b42 ]
Note the index notations which, compared with section 18.2, are changed and adapted to the usual matrix representation. The state controller is assumed to be such that

A − B K = F = [ F11 F12 F13 F14 ]   (21.1.7)
              [ F21 F22 F23 F24 ]
              [ F31 F32 F33 F34 ]
              [ F41 F42 F43 F44 ]

becomes valid.
F is composed of the following blocks
F= (21.1.8)
As shown in section 18.2, the system matrix A can be transformed into a canonical form (e.g. controllable canonical form or observable canonical form for multivariable systems). The matrix F then takes the same form. The coefficients of the characteristic equation (21.1.2) are determined by the parameters of F. Specifying the structures of K and F eases the generation of specific coefficients of the characteristic equation, or eases the determination of the coefficients of K for a prescribed characteristic equation or prescribed poles. Some simplified structures are therefore briefly examined.
If, for example, only the state variables x1 and x4 of the main transfer elements are fed back, then only the corresponding columns of the controller matrix K are occupied
and F is simplified considerably. In the case of direct state coupling the number of
controller coefficients equals the number of coefficients of the characteristic equa-
tion, so that the controller coefficients can be determined uniquely provided the
poles are assigned.
For P- and V-canonical structure the characteristic equation has order m = m1 + m2 + m3 + m4, if the dimension of x1 is m1, the dimension of x2 is m2, etc. For assigned poles the controller coefficients can be determined uniquely if their number is also m. This is the case for the simplified state controllers listed in Table 21.1. They are composed such that, besides the state variables of the main transfer elements, the state variables of the coupling elements act on the corresponding control variables in the sense of feedforward control.
Simplifications of the state controller can also be recommended if controllers which correspond to specific command variables or control variables are to be switched to manual operation of the corresponding manipulated variable. This should be realized in such a way that the state variables of the corresponding main transfer elements do not directly influence the remaining control loops via the controllers. Then k14 = 0 and k41 = 0 can be set for P- or V-structure.
Another possibility to simplify F is to parameterize only the diagonal matrices F11, F22, F33, F44 and to set all off-diagonal matrices to zero. This decouples the state vectors x1, x2, x3, x4, so that the eigenoscillations of the individual processes do not disturb each other. Table 21.1 shows the resulting parametrization of K for the case of direct state coupling. Then A41 − b42 k41 = 0 and A14 − b11 k14 = 0 have to be fulfilled.
This state decoupling of the systems cannot, however, be realized for P-canonical and V-canonical structure with the assumed controller structure (21.3), since this leads to the requirement K = 0 for P-structure, or to coupling matrices A21 = 0, A34 = 0 for V-structure.
At this point it should be emphasized again that the basic structures treated as examples rarely occur in this pure form; mixtures of these structures mostly prevail.
Further methods for determining controller coefficients via pole assignment are given in [2.19]. Pole assignment for multivariable processes in diagonal form through state controllers (modal control) has already been treated in section 8.4. A basic paper on the decoupling of multivariable state control systems with respect to command variable behaviour was published in [21.1]. Here an additional feedforward control u = L w and a specific choice of K lead to command autonomy, c.f. [21.2, 21.3].
As the above examples showed, the determination of state controller parameters using pole assignment methods is expedient when specific structural simplifications of the controller are involved. If, however, it is difficult to specify the poles and all coefficients of the control matrix K are allowed to be occupied, then the design
21.3 Multivariable Decoupling State Controllers
A multivariable state-control system for which the outputs y do not influence one another is obtained with

K = (C B)^-1 [C A − Λ C]   (21.3.3)

where the parameters λi of the diagonal matrix Λ determine the eigenvalues of the system, c.f. (21.1.2).
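Equation (21.3.3) can be checked numerically: with this K the closed-loop outputs obey y_i(k+1) = λi y_i(k), i.e. each output decays with its own eigenvalue independently of the others. A minimal sketch (A, B, C and the λi below are assumed example values, not from the book):

```python
import numpy as np

# Assumed example: 3 states, 2 inputs, 2 outputs, C B invertible.
A = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.8]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.2]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
Lam = np.diag([0.5, 0.3])                      # desired output eigenvalues lambda_i
K = np.linalg.inv(C @ B) @ (C @ A - Lam @ C)   # Eq. (21.3.3)
F = A - B @ K                                  # closed-loop system matrix
# each output then obeys y_i(k+1) = lambda_i * y_i(k), decoupled from the others
print(np.allclose(C @ F, Lam @ C))             # True
```

The identity C(A − BK) = ΛC follows directly by substituting (21.3.3), which is what the final check confirms.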
In section 15.3 an optimal state controller for stochastic disturbances was discussed which minimizes the performance criterion Eq. (15.1.5) and uses a state variable estimator. The derivation of this state controller was performed according to the state controller for deterministic disturbances in chapter 8. In this section another approach is presented which is based on the minimum variance principle shown in chapter 14, which uses a prediction of the noise and which is especially suitable for
multivariable adaptive control [26.33, 26.43]. To derive stochastic minimum variance state controllers the innovations state space model (as suitable for identification methods)
x(k + 1) = A x(k) + B u(k) + F v(k)   (21.4.1)

y(k) = C x(k) + v(k)   (21.4.2)
is used, where v(k) is a zero-mean Gaussian white noise. The quadratic criterion

J(k + 1) = E{x^T(k + 1) Q x(k + 1) + u^T(k) R u(k)}   (21.4.3)

where Q is positive definite and R is positive semidefinite, is to be minimized. This criterion is the same as Eq. (8.1.8), the only difference being that the process is disturbed by the noise v(k). Therefore the results of (8.1.9) to (8.1.10) can be directly used to write

∂J(k + 1)/∂u(k) = 2 B^T Q [A x(k) + B u(k) + F v(k)] + 2 R u(k) = 0   (21.4.4)
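The stationarity condition (21.4.4) can be solved directly for u(k): (B^T Q B + R) u(k) = −B^T Q [A x(k) + F v(k)]. A short numerical sketch with assumed random example matrices (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 2
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, p))
F = rng.normal(size=(n, 1))
Q = np.eye(n)                       # positive definite
R = 0.1 * np.eye(p)                 # positive semidefinite
x = rng.normal(size=(n, 1))         # current state
v = rng.normal(size=(1, 1))         # current innovation
# solve 2 B^T Q [A x + B u + F v] + 2 R u = 0 for u(k):
u = -np.linalg.solve(B.T @ Q @ B + R, B.T @ Q @ (A @ x + F @ v))
grad = 2 * B.T @ Q @ (A @ x + B @ u + F @ v) + 2 * R @ u
print(np.allclose(grad, 0))         # True: (21.4.4) is satisfied
```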
(MSMV2).
For R = 0 finally the multivariable minimum variance state controller (MSMV3)
becomes:
u(k) = − (C B)^-1 C [A x̂(k + d) + A^d F v(k)] .   (21.4.11)
21.4 Multivariable Minimum Variance State Controllers
The controller equations show that they consist of a deterministic feedback law u_x(k) and a stochastic feedforward law u_v(k):

u(k) = u_x(k) + u_v(k) .   (21.4.12)
The deterministic feedback controller in the generalized minimum variance controller is the matrix Riccati controller if P in (21.2.1) or Eq. (8.1.34) is replaced by Q. And the deterministic controller in the minimum variance controller (21.4.11) is a decoupling state controller, (21.3.3), if Λ = 0 [21.1, 26.43].
Comment on the multivariable systems which have been treated up to now

The multivariable processes and multivariable control systems treated in chapters 18 to 21 presuppose a complete (central) process model. Quite often, however, a complete multivariable process model is not known, and the control algorithms are not desired to be realized in only one (central) computer for reasons of reliability and simplicity. This leads to the design of decentralized control systems for multivariable processes, c.f. section 26.11.
Assuming the same sampling time for all input and output variables of multivariable processes is disadvantageous for processes with significantly differing settling times. The design of parameter-optimized controllers can of course be directly applied for different sampling times, but this is not so easy for the structure-optimal multivariable control systems of chapters 20 and 21. One can then attempt to convert the required models to integer multiples of the sampling time. Also possible is the transition to the design of decentralized control systems, c.f. section 26.11.
22 State Estimation
For the realization of state controllers for processes with stochastic disturbances, estimates x̂(k) of the internal state variables are required which are based on the measured input and output signals u(k) and y(k), see chapters 15 and 21. The measurable variables y(k) are not only determined by u(k) but also contaminated by the nonmeasurable noise signals v(k) and n(k). Since, however, only the signal part caused by u(k) is of interest in the state variables x(k), suitable filtering methods have to separate the signal from the noise. Therefore the general problem of how to separate signals from noise is briefly treated first, followed by the derivation of the Kalman filter, explaining the principle of estimation in several steps for both the scalar and vector cases. State representation allows a direct consideration of multivariable processes.
A signal s(t) is to be separated from a noise n(t), and only y(t) = s(t) + n(t) is measurable. It is assumed that noise and signal are located in the same frequency range, which excludes a simple separation through bandpass filtering and requires the application of estimation methods.
The Wiener filter was developed in 1940 by Wiener [22.1], compare Figure 22.1. Signal s(t) and noise n(t) are now assumed to be uncorrelated continuous-time signals whose spectral densities S_ss(ω) and S_nn(ω) are known in rational fractional form. The estimation error

s̃(t) = s(t) − ŝ(t)

can be minimized for −∞ < t < ∞ according to the least squares method, and one obtains as transfer function of the Wiener filter [22.2, 22.3]

G_W(iω) = (1/S_yy^+(ω)) [S_ss(ω)/S_yy^-(ω)]_PR

with S_yy(ω) = S_ss(ω) + S_nn(ω) and S_yy^+(ω) as the rational fractional part of S_yy(ω) with poles and zeroes in the left half plane. Here only the physically realizable part (poles in the left half plane) of S_ss(ω)/S_yy^-(ω) is used, which is marked by "PR".
The calculation of the Wiener filter may cause considerable problems. The factorization of S_yy might be difficult when trying to solve the problem in the frequency domain. The corresponding solution in the time domain requires the
22.1 Vector Signal Processes and Assumptions
Figure 22.1 Estimation of signal s using the Wiener filter. s(t) signal, n(t) noise, y(t) measured signal, ŝ(t) estimated signal, s̃(t) estimation error.
Figure 22.2 Stochastic vector Markov process y(k + 1) (u = 0) or a dynamic process with measurable input u(k), output y(k + 1) and noise n(k).
include a measurable input u(k). Here the following symbols are used:

x(k)  (m × 1) state vector
v(k)  (p × 1) input noise vector, statistically independent, with covariance matrix V
y(k)  (r × 1) measurable output vector
n(k)  (r × 1) output noise vector, statistically independent, with covariance matrix N
A  (m × m) system matrix
F  (m × p) input matrix
C  (r × m) output matrix

A, F and C are assumed to be time invariant.
The objective is to estimate the state vector x(k) based on measurements of the outputs y(k), which are contaminated by the white noise n(k). The following are assumed to be known a priori: A, C and F.
The resulting Kalman filter estimation algorithms form a weighted mean of two independent vector estimates. Therefore it is assumed that x̂1 and x̂2 are two statistically independent estimates of an m-dimensional vector x. The weighted mean of these two estimates is

x̂ = x̂1 + K' [x̂2 − x̂1]   (22.2.1)

where K' is an (m × m) weighting matrix which is to be chosen such that the variance of the estimate x̂ becomes a minimum. Subsequently a dynamic system is considered where x̂1 is a state vector of dimension m, and instead of x̂2 a measurable output vector y2 with dimension p < m is used. Therefore

y2 = C x̂2   (22.2.2)

is asserted. Then (22.2.1) becomes, with K' = K C:

x̂ = x̂1 + K C [x̂2 − x̂1] = x̂1 + K [y2 − C x̂1]
S^T S R^T = S^T C Q
R^T = (S^T S)^-1 S^T C Q
R = Q C^T S (S^T S)^-1
R R^T = Q C^T S (S^T S)^-1 (S^T S)^-1 S^T C Q

With the abbreviation W for the matrix product S (S^T S)^-1 (S^T S)^-1 S^T it holds that

S^T W S = (S^T S)(S^T S)^-1 (S^T S)^-1 (S^T S) = I
S S^T W S S^T = S I S^T = S S^T
W = (S S^T)^-1
The recursive weighted averaging of two vector variables described in the last section is now applied to the estimation of the state variable x(k + 1) of the Markov process (22.1.1) and (22.1.2). In (22.2.3) the following are introduced:

x̂1 = x̃(k + 1), the prediction of x(k + 1) based on the last estimate x̂(k)
y2 = y(k + 1), the new measurement

with the prediction

x̃(k + 1) = A x̂(k) + B u(k)

and (22.2.15), the covariance matrix of the estimation error of x(k + 1) becomes:

P(k + 1) = Q(k + 1) − K(k + 1) C Q(k + 1) .   (22.3.6)
(22.3.7)
Starting values

To start the filter algorithm, assumptions on the initial state have to be made. If it is unknown,

x̂(0) = 0

is taken. The initial value of the covariance matrix P(0) must also be assumed. For properly chosen x̂(0) and P(0) their influence vanishes quickly with time k, so that precise knowledge is unnecessary.
Block diagram

In Figure 22.3 the estimation algorithm is shown for v(k) = 0. The Kalman filter contains the homogeneous part of the process model. The measured y(k + 1) and its model-predicted value ŷ(k + 1) are compared, and an error

e(k + 1) = y(k + 1) − ŷ(k + 1) = y(k + 1) − C x̃(k + 1) = y(k + 1) − C A x̂(k)   (22.3.11)

is formed. This error causes a correction of the prediction x̃(k + 1), yielding the new estimate x̂(k + 1).
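The complete recursion — prediction, innovation (22.3.11), correction, and covariance update (22.3.6) — can be sketched as follows. The model matrices and noise covariances are assumed example values, and the gain K(k+1) = Q(k+1) C^T [C Q(k+1) C^T + N]^-1 is the standard Kalman gain expression:

```python
import numpy as np

# Minimal Kalman filter recursion sketch; A, C, V, N are assumed example values.
A = np.array([[0.95, 0.1],
              [0.0,  0.9]])
C = np.array([[1.0, 0.0]])
F = np.eye(2)
V = 0.01 * np.eye(2)            # covariance of the input noise v(k)
Nn = np.array([[0.1]])          # covariance of the output noise n(k)
rng = np.random.default_rng(1)

x = np.zeros((2, 1))            # true state
xh = np.zeros((2, 1))           # estimate, started with x^(0) = 0
P = 10 * np.eye(2)              # P(0) chosen large: initial estimate is uncertain
for k in range(200):
    x = A @ x + F @ rng.multivariate_normal([0, 0], V).reshape(2, 1)
    y = C @ x + np.sqrt(Nn[0, 0]) * rng.normal(size=(1, 1))
    xt = A @ xh                                   # prediction x~(k+1) = A x^(k)
    Q = A @ P @ A.T + F @ V @ F.T                 # covariance of the prediction error
    K = Q @ C.T @ np.linalg.inv(C @ Q @ C.T + Nn)
    e = y - C @ xt                                # innovation, Eq. (22.3.11)
    xh = xt + K @ e                               # correction of the prediction
    P = Q - K @ C @ Q                             # Eq. (22.3.6)
print(float(np.trace(P)) < 1.0)                   # covariance far below its start value
```

As stated above, the influence of the assumed x̂(0) and P(0) vanishes after a few steps; the trace of P converges to a small steady-state value.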
Figure 22.3 Markov process with Kalman filter for the estimation of x̂(k + 1). The output error e(k + 1) (residual, innovation) between the measured output and the predicted output corrects the predicted estimate to the new estimate.
Orthogonalities

In the original work of Kalman [22.4] the recursive state estimator was derived by applying the orthogonality condition between the estimation errors and the measurements:

E{x̃(i) y^T(j)} = 0 for j < i .   (22.3.14)

However, other orthogonalities also hold for minimum variance estimates:

E{x̃(i) x̂^T(i)} = 0   (22.3.15)

or, with ŷ(i) = C x̂(i):

E{x̃(i) ŷ^T(i)} = 0 .   (22.3.16)

From these orthogonalities it follows that the error signal (residual, innovation)

e(k + 1) = y(k + 1) − C A x̂(k)

is statistically independent:

E{e(i) e^T(j)} = 0 for i ≠ j .   (22.3.17)
Figure 23.1 Basic schemes of controller adaptation. a Feedforward adaptation (open loop adaptation, gain scheduling); A_S: adaptation algorithm (feedforward); z: measurable external signal. b Feedback adaptation (closed loop adaptation); A_R: adaptation algorithm (feedback).
Since the reference model methods and the relevant theory are mainly given for
continuous-time signals, continuous time is assumed first. Surveys dealing with this
class of adaptive systems can be found in [23.12, 23.13, 23.21, 23.37].
23 Adaptive Control Systems (A Short Review)
With MRAS a reference model is prescribed expressing the desired closed loop transfer behaviour, for example the command variable behaviour

G_M(s) = y_M(s)/w(s) ,   (23.1.1)

compare Figure 23.2b. The unknown parameters of a controller G_R(s) are then tuned in such a way that an appropriate error signal, e.g. the

output error: Δy(t) = y_M(t) − y(t)   (23.1.2)
state error: e(t) = x_M(t) − x(t)   (23.1.3)

becomes small. Hereby performance criteria are chosen which are suited for further analytical use, for example quadratic performance criteria such as:

I1 = ∫0^t1 Δy²(t) dt   (23.1.4)

I2 = ∫0^t1 e^T(t) P e(t) dt .   (23.1.5)
Minimizing the performance criterion, together with possible additional requirements, then results in the adaptation law. Note that the emerging adaptive system is in principle nonlinear and time-variant.
The controller parameters are adjusted proportionally to the negative gradient of the performance criterion:

r_Ri(t) = − k_i ∂I/∂r_Ri ,   i = 1, 2, ..., p .   (23.1.6)

Hereby k_i > 0 are gain factors which have to be chosen appropriately. For the changing speed it follows that

dr_Ri/dt = − k_i (∂/∂t)(∂I/∂r_Ri) = − k_i (∂/∂r_Ri)(∂I/∂t) ,   (23.1.7)

where the time derivative of I is assumed to be small (slow adaptation). If the performance criterion (23.1.4) is introduced, then with (23.1.2) it yields

dr_Ri/dt = 2 k_i Δy(t) ∂y(t)/∂r_Ri .   (23.1.8)
23.1 Model Reference Adaptive Systems (MRAS)
Hence the changing speed is proportional to the product of the error signal and the parameter sensitivity of the output signal. Integrating leads to:

r_Ri(t1) = 2 k_i ∫0^t1 Δy(t) (∂y(t)/∂r_Ri) dt .   (23.1.9)
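A minimal sketch of the gradient adaptation (23.1.8), applied to the classical textbook example of adapting a single feedforward gain θ so that a first-order plant matches a first-order reference model. The plant gain k_p, model gain k_m and adaptation gain γ are assumed values, and y_M is used in place of the local sensitivity function (the two are proportional here):

```python
import math

kp, km, gamma, dt = 2.0, 1.0, 0.5, 0.01    # assumed gains and Euler step size
y = ym = theta = 0.0
for k in range(20000):                      # 200 s of simulated time
    t = k * dt
    uc = 1.0 if math.sin(0.1 * t) >= 0 else -1.0   # square-wave command w(t)
    y += dt * (-y + kp * theta * uc)        # plant:  dy/dt  = -y  + kp*theta*uc
    ym += dt * (-ym + km * uc)              # model:  dym/dt = -ym + km*uc
    dy = ym - y                             # output error, Eq. (23.1.2)
    theta += dt * gamma * dy * ym           # speed ~ error x sensitivity, (23.1.8)
print(round(theta, 2))                      # 0.5 = km/kp: plant matches the model
```

Note the nonlinear, time-variant character of the overall loop mentioned above: the adaptation law multiplies two signals of the system.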
Â(t1) = ∫0^t1 F_A e(t) x^T(t) dt
B̂(t1) = ∫0^t1 F_B e(t) u^T(t) dt   (23.1.20)

According to (23.1.17) a globally asymptotically stable system emerges provided F_A and F_B are (arbitrary) positive definite matrices. Figure 23.4 shows the resulting adaptive system.
Comparing (23.1.20) with (23.1.9) it follows that the error signals are multiplied with the states and the input variables instead of with the locally valid sensitivity functions of the output variable. The assumed system structure (23.1.13) can be used, for example, to develop an adaptive state feedback control (contained in Â) and an adaptive feedforward control (contained in B̂). Note that the derivatives of the signals contained in e(t) have to be calculated separately (e.g. through state variable filters).
The resulting adaptation law follows directly from the Ljapunov function which, however, was chosen arbitrarily. This shows that the resulting adaptation law is only one among several. Another disadvantage is the arbitrary choice of the many free parameters of Q, F_A and F_B.
with γ0² as a finite positive constant. The overall system is then (asymptotically) hyperstable if the transfer function of the forward path G_v(s) is positive real [23.27, 23.28, 23.37, 23.39], that means:

a) G_v(s) is real for real s
b) For the poles of G_v(s), Re(s) < 0 is valid   (23.1.22)
c) For all real ω it holds that: Re{G_v(iω)} > 0, −∞ < ω < ∞.
This is the case if for G_v(s) = B_v(s)/A_v(s) it holds that [23.21]:

- A_v(s) and B_v(s) are asymptotically stable
- The pole excess is |deg A_v(s) − deg B_v(s)| = |m − n| ≤ 1 .   (23.1.23)

Hence, transfer functions are positive real if the maximum phase shift is |φ| ≤ 90°; that means they show first-order system behaviour. Examples are:

(23.1.24)
It can be further shown that transfer functions are positive real if the system
elements are passive, hence involving losses [23.39, 23.40].
The system (23.1.12) to (23.1.14) will now be examined further, again using state representation. (23.1.15) is the equation system relevant for stability. The linear
becomes positive real. In Figure 23.4 this filter corresponds to the block with P.
Now a proportional plus integral acting adaptation law is formed. Φ1, Φ2, Ψ1 and Ψ2 have to be chosen in such a way that the Popov integral inequality (23.1.21)

≥ − γ0²   (23.1.32)

remains valid.
It was shown in [23.21] that this equation is fulfilled by the following specific solutions:

Φ1 = F_A e(t) x^T(t)
Φ2 = F'_A e(t) x^T(t)
Ψ1 = F_B e(t) u^T(t)
Ψ2 = F'_B e(t) u^T(t)   (23.1.33)

Here F_A, F'_A, F_B, F'_B are positive definite matrices; they can, for example, be chosen diagonal. This yields the adaptation laws:
Adaptive control systems with identification model are based on the identification
of a closed loop process model, compare Figure 23.2a. Here, the input signal u(k)
and the output signal y(k) are measured. A recursive identification method for
example determines on-line and in real-time the process model and perhaps also
a noise signal model. Based on this process/noise signal model the controller
parameters are calculated using an appropriate controller design method. Then
the parameters of the control algorithm are changed.
Hence, these adaptive control methods are composed of two main steps:
1. Process/noise signal identification
2. Controller parameter calculation
They try to attain optimal controller adaptation according to the controller
design method or the control performance criterion. They therefore may be called
self-optimizing controllers.
Development of these adaptive controllers presupposes appropriate recursive
identification methods. In practice they can be realized only with digital computers,
otherwise the computational effort would be intolerable.
23.2 Adaptive Controllers With Identification Model
The first publication [23.31] combined the recursive least squares method (though in a somewhat inexpedient form) for parameter estimation of the process model with a deadbeat controller for the control. This digital adaptive control method was implemented on a digital computer which was built particularly for that purpose.
The advances in parameter estimation of dynamic processes and in designing
stochastic controllers brought forth discrete-time adaptive controllers also called
"selftuning regulators".
In [23.32, 23.33] the recursive least squares method was combined with a minimum variance controller; in [23.34] this was done with an extended minimum variance controller. Next came, e.g., selftuning controllers with prescribed pole design [23.35] and the combination of various parameter estimation and controller design methods. The further development will be treated in chapter 26.
Adaptive controllers with identification model can be designed with non-
parametric or parametric models in continuous or discrete time. Up to now the
emphasis was put on discrete-time parametric models. The resulting adaptive
digital controllers with parameter estimation are also called parameter-adaptive
controllers or selftuning regulators. They can be classified in two main groups,
compare Figure 23.6:
- Explicit parameter-adaptive controllers estimate explicitly the process model
parameters followed by the calculation of the controller parameters. The process
model parameters are then available as an interim result.
It is assumed that a stable process is time invariant and linearizable, so that it can be described by a linear difference equation

y_u(k) + a1 y_u(k − 1) + ... + a_m y_u(k − m)
  = b1 u(k − d − 1) + ... + b_m u(k − d − m)   (24.1.1)

where

u(k) = U(k) − U_00
y(k) = Y(k) − Y_00   (24.1.2)

are the deviations of the absolute signals U(k) and Y(k) from the d.c. ('direct
24 On-line Identification of Dynamical Processes and Stochastic Signals
Figure 24.1 Process and noise model.
where σ_v² is the variance and δ(τ) is the Kronecker delta function. The z-transfer function of the noise filter is:

G_v(z) = n(z)/v(z) = D(z^-1)/C(z^-1) = (1 + d1 z^-1 + ... + dp z^-p) / (1 + c1 z^-1 + ... + cp z^-p) .   (24.1.6)

Eqs. (24.1.3) and (24.1.6) yield the combined process and noise model:

y(z) = (B(z^-1)/A(z^-1)) z^-d u(z) + (D(z^-1)/C(z^-1)) v(z) .   (24.1.7)
estimates. As well as the general model (24.1.7) one distinguishes in particular two specialized models, the "ML model", also called the "ARMAX model" (ARMA model, (24.1.5), with an exogenous variable [23.14]):

(24.1.8)

(24.1.9)
Now inputs and outputs are measured for k = 1, 2, ..., m + d + N. Then N + 1 equations of the form:
and therefore
dV/dΘ |_{Θ = Θ̂} = 0   (24.2.11)
new estimate = old estimate + correcting vector · [new measurement − one-step-ahead prediction of the new measurement]   (24.2.14)
24.2 The Recursive Least Squares Method (RLS)
and with (24.2.19) and the parameter error ΔΘ̂(k) = Θ̂(k) − Θ0, the recursive algorithm produces the variances of the parameter estimates (diagonal elements of the covariance matrix). (24.2.14) can also be written as:

Θ̂(k + 1) = Θ̂(k) + γ(k) e(k + 1) .   (24.2.20)
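The recursive update (24.2.20), together with the correcting-vector and covariance recursions summarized in Table 24.1, can be sketched for a first-order process. The true parameters, input signal and noise level are assumed example values:

```python
import numpy as np

# RLS sketch for y(k) = -a1 y(k-1) + b1 u(k-1) + small white noise
rng = np.random.default_rng(2)
a1, b1 = -0.8, 0.5                       # assumed true parameters, m = 1, d = 0
N = 2000
u = rng.choice([-1.0, 1.0], N)           # persistently exciting input
y = np.zeros(N)
for k in range(1, N):
    y[k] = -a1 * y[k - 1] + b1 * u[k - 1] + 0.01 * rng.normal()

theta = np.zeros(2)                      # [a1^, b1^]
P = 100 * np.eye(2)                      # P(0) = alpha I, alpha large
for k in range(1, N):
    psi = np.array([-y[k - 1], u[k - 1]])        # data vector
    e = y[k] - psi @ theta                       # equation error e(k+1)
    gam = P @ psi / (1 + psi @ P @ psi)          # correcting vector gamma(k)
    theta = theta + gam * e                      # new = old + correction
    P = P - np.outer(gam, psi @ P)               # covariance update
print(np.round(theta, 2))                        # ≈ [-0.8  0.5]
```

This is the "new estimate = old estimate + correction" structure of (24.2.14) in executable form.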
Convergence Conditions

The general requirements for the performance of parameter estimation methods are that the parameter estimates are unbiased,

E{Θ̂(N)} = Θ0 ,  N finite   (24.2.21)

and consistent in mean square,

lim_{N→∞} E{Θ̂(N)} = Θ0   (24.2.22)

lim_{N→∞} E{[Θ̂(N) − Θ0] [Θ̂(N) − Θ0]^T} = 0 .   (24.2.23)

For the method of least squares, applied to a stable difference equation which is linear in the parameters, the following conditions must therefore be satisfied:

a) The process order m and the dead time d are known.
b) The input signal u(k) = U(k) − U_00 must be exactly measurable and U_00 must be known.
c) lim_{k→∞} P^-1(k) is positive definite. This implies that the input signal u(k) must be
and

Φ_uu(τ) = lim_{N→∞} (1/N) Σ_{k=0}^{N−1} u(k) u(k + τ)
Since for the process parameter estimation the variations u(k) and y(k) of the measured signals U(k) and Y(k) have to be used, the d.c. values U_00 and Y_00 either have to be estimated as well or have to be removed. The following methods are available:
a) Differencing

Through differencing,

ΔY(k) = Y(k) − Y(k − 1) = [y(k) + Y_00] − [y(k − 1) + Y_00]
      = y(k) − y(k − 1) = Δy(k)   (24.2.25)

the d.c. value Y_00 cancels (and correspondingly for U(k)).
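A tiny numerical illustration of the cancellation in (24.2.25), with assumed signal values:

```python
Y00 = 7.0                                   # unknown d.c. value (assumed)
y = [0.0, 0.5, 0.8, 1.0]                    # deviations y(k)
Y = [Y00 + yk for yk in y]                  # measured absolute signal Y(k)
dY = [Y[k] - Y[k - 1] for k in range(1, 4)]
dy = [y[k] - y[k - 1] for k in range(1, 4)]
print(all(abs(a - b) < 1e-9 for a, b in zip(dY, dy)))   # True: Y00 has cancelled
```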
b) Implicit estimation of a d.c. value parameter

Both constants are combined into one d.c. value parameter

K0 = Y_00 A(1) − U_00 B(1)   (24.2.26)

and the data vector and the parameter vector (24.2.4) are extended as follows:

ψ*^T(k) = [1  −Y(k − 1) ... −Y(k − m)  U(k − d − 1) ... U(k − d − m)]
Θ*^T = [K0  a1 ... a_m  b1 ... b_m]   (24.2.27)
The equation error thus is

e(k) = Y(k) − Ŷ(k|k − 1) = Y(k) − ψ*^T(k) Θ̂*(k − 1) .   (24.2.28)

The parameter estimation can then be carried out with the correspondingly extended matrix Ψ* and with

Y*^T = [Y(m + d) ... Y(m + d + N)]   (24.2.29)

according to (24.2.13),

Θ̂* = [Ψ*^T Ψ*]^-1 Ψ*^T Y*   (24.2.30)

or with the corresponding recursive parameter estimation equation (24.2.14).
The implicit d.c. value parameter estimation is of interest if, for example, U_00 is known and Y_00 is to be determined continuously according to (24.2.26). Then, however, the d.c. parameter K0 and the dynamic parameters Θ are coupled via the estimation equation.
c) Explicit estimation of a d.c. value parameter

The parameters a_i and b_i of the dynamic behaviour and the d.c. value parameter K0 can also be estimated separately. At first the dynamic parameters are estimated by differencing according to a). Then it follows from (24.2.25) and (24.2.26) with

(24.2.34)
where

ψ^T(k) = [−y(k − 1) ... −y(k − p)  v(k − 1) ... v(k − p)]   (24.2.37)
Θ^T = [c1 ... cp  d1 ... dp] .   (24.2.38)

If v(k − 1), ..., v(k − p) were known, the RLS method could be used as in Eqs. (24.2.14) to (24.2.17), since v(k) in Eq. (24.2.36) can be interpreted as the equation error, which is statistically independent by definition.
Now the time after the measurement of y(k) is considered. Here y(k − 1), ..., y(k − p) are known. Assuming that the estimates v̂(k − 1), ..., v̂(k − p) and Θ̂(k − 1) are known, the most recent input signal v(k) can be estimated via Eq. (24.2.36), [24.1], [24.2]:

v̂(k) = y(k) − ψ̂^T(k) Θ̂(k − 1)   (24.2.39)
24.3 The Recursive Extended Least Squares Method (RELS)
with

ψ̂^T(k) = [−y(k − 1) ... −y(k − p)  v̂(k − 1) ... v̂(k − p)] .   (24.2.40)

Then also

ψ̂^T(k + 1) = [−y(k) ... −y(k − p + 1)  v̂(k) ... v̂(k − p + 1)]   (24.2.41)

is determined, such that the recursive algorithms (24.2.14) to (24.2.16) can be used to estimate Θ̂(k + 1) if there ψ^T(k + 1) is replaced by ψ̂^T(k + 1). Then v̂(k + 1) and Θ̂(k + 2) are estimated, etc. For starting the algorithm

v̂(0) = y(0);  Θ̂(0) = 0;  P(0) = α I
can be used. As v(k) is statistically independent, v(k) and ψ^T(k) are uncorrelated, which results in unbiased estimates consistent in mean square. As the model (24.2.39) must be stable, the roots of C(z) = 0 and D(z) = 0 should lie within the unit circle of the z-plane. The variance of v(k) can be estimated by [3.13]

σ̂_v²(k) = (1/(k + 1 − 2p)) Σ_{i=0}^{k} v̂²(i) .   (24.2.42)
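A sketch of the recursion (24.2.39) to (24.2.41) for a pure noise model with p = 1; the true parameters c1, d1 and the data length are assumed example values:

```python
import numpy as np

rng = np.random.default_rng(3)
c1, d1 = -0.5, 0.3                 # assumed true noise-model parameters, p = 1
N = 20000
v = rng.normal(0, 1, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = -c1 * y[k - 1] + v[k] + d1 * v[k - 1]

theta = np.zeros(2)                # [c1^, d1^]
P = 100 * np.eye(2)
vh = y[0]                          # start: v^(0) = y(0)
for k in range(1, N):
    psi = np.array([-y[k - 1], vh])          # Eq. (24.2.40), v replaced by v^
    e = y[k] - psi @ theta                   # = v^(k) by Eq. (24.2.39)
    gam = P @ psi / (1 + psi @ P @ psi)
    theta = theta + gam * e
    P = P - np.outer(gam, psi @ P)
    vh = e                                   # reconstructed noise for the next step
print(np.round(theta, 1))                    # ≈ [-0.5  0.3]
```

For this D(z) = 1 + 0.3 z^-1 the positive-real condition discussed below is satisfied, so the estimates converge.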
and equations corresponding to (24.2.14) to (24.2.16). The signal values v̂(k) = e(k) in ψ̂^T(k + 1) are calculated recursively with (24.2.39). Therefore the roots of D(z) = 0 must lie within the unit circle of the z-plane. The parameter estimates are unbiased and consistent in mean square if the convergence conditions of the least squares method, sections 24.2.1 and 24.2.2, are transferred to the model (24.3.3). That means that the model (24.3.2) has to be valid.
Furthermore,

H(z) = 1/D(z) − 1/2   positive real   (24.3.7)

must hold. This means that H(z) is the transfer function of a system which could be realized with passive elements (phase angle magnitude ≤ 90°), leading to (c.f. section 23.1.3):

H(z) is stable
Re{H(z)} > 0 for z = e^{iωT0},  −π < ωT0 ≤ π .   (24.3.8)

This includes the sufficient condition

|D(e^{iωT0}) − 1| < 1 .   (24.3.9)
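The positive-real condition can be checked numerically on a frequency grid; a sketch for an assumed first-order noise polynomial D(z) = 1 + d1 z^-1 (for which (24.3.9) reduces to |d1| < 1):

```python
import cmath

d1 = 0.6                                    # assumed coefficient, |d1| < 1
ws = [cmath.pi * k / 200 for k in range(-199, 200)]
D = [1 + d1 * cmath.exp(-1j * w) for w in ws]
cond_249 = all(abs(Dk - 1) < 1 for Dk in D)           # sufficient condition (24.3.9)
cond_pr = all((1 / Dk - 0.5).real > 0 for Dk in D)    # Re{H(z)} > 0 on the unit circle
print(cond_249, cond_pr)                               # True True
```

The two conditions agree here: Re{1/D} > 1/2 is algebraically equivalent to |D − 1| < 1, since Re{D}/|D|² > 1/2 rearranges to |D − 1|² < 1.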
For convergence of the least squares method the error signal e(k) must be uncorrelated with the elements of ψ^T(k). The instrumental variables method bypasses this condition by replacing the data vector ψ^T(k) by an instrumental vector w^T(k) whose elements are uncorrelated with e(k). This can be obtained if the instrumental variables are correlated as strongly as possible with the undisturbed components of ψ^T(k). Therefore an instrumental variables vector
w^T(k) = [−h(k − 1) ... −h(k − m)  u(k − d − 1) ... u(k − d − m)]   (24.4.1)
is introduced, where the instrumental variables

(24.4.2)

are taken from the undisturbed output of an auxiliary model with parameters Θ̂_aux(k). The resulting recursive estimation algorithms have the same structure as for RLS, [24.5], [24.6], c.f. Table 24.1. To have the instrumental variables h(k) less correlated with e(k), the parameter variations of the auxiliary model are delayed by a discrete first-order low-pass filter with dead time [24.6]:

(24.4.3)
During the starting phase this RIV is sensitive to inappropriately chosen initial values of Θ̂(0), P(0) and p. It is therefore recommended that this method is started with RLS [24.11].
Table 24.1 Unified recursive parameter estimation algorithms for b0 = 0 and d = 0:
Θ̂(k + 1) = Θ̂(k) + γ(k) e(k + 1);  γ(k) = μ(k + 1) P(k) ψ(k + 1);  e(k + 1) = y(k + 1) − ψ^T(k + 1) Θ̂(k).

Method  Parameters Θ̂^T          Data vector ψ^T(k + 1)                  Correction                                        Noise model
RLS     [â1 ... â_m b̂1 ... b̂m]  [−y(k) ... −y(k − m + 1)                μ(k + 1) = 1/[1 + ψ^T(k + 1) P(k) ψ(k + 1)],      1/A(z^-1)
                                 u(k) ... u(k − m + 1)]                  P(k + 1) = [I − γ(k) ψ^T(k + 1)] P(k)
RIV     as RLS                   [−h(k) ... −h(k − m + 1)                as RLS, with the instrumental vector              D(z^-1)/C(z^-1)
                                 u(k) ... u(k − m + 1)]                  w(k + 1) in γ(k) and P(k + 1)
STA     as RLS                   as RLS                                  γ(k) = ψ(k + 1)/(k + 1)                           1/A(z^-1)
RELS    [â1 ... â_m b̂1 ... b̂m   [−y(k) ... −y(k − m + 1)                as RLS                                            D(z^-1)/A(z^-1)
         d̂1 ... d̂m]              u(k) ... u(k − m + 1)
                                 ê(k) ... ê(k − m + 1)]
RML     as RELS                  [−y'(k) ... −y'(k − m + 1)              as RLS                                            D(z^-1)/A(z^-1)
                                 u'(k) ... u'(k − m + 1)
                                 e'(k) ... e'(k − m + 1)]
The recursive parameter estimation algorithms RLS, RELS, RIV, RML and STA can be represented in a unified form, compare [2.22, 2.23]. They differ only in the parameter vector Θ̂, the data vector ψ^T(k + 1) and in the correcting vector γ(k). These quantities are summarized in Table 24.1.
Up to now it was assumed that the process parameters to be estimated are constant, and therefore the measured signals u(k) and y(k) and the equation error e(k) are weighted equally over the measuring time k = 0, ..., N. If the recursive estimation algorithms are to be able to follow slowly time-varying process parameters, more recent measurements must be weighted more strongly than old measurements. The estimation algorithms should therefore have a fading memory. This can be incorporated in the least squares method by time-dependent weighting of the squared errors (the method of weighted least squares [3.13]):

V = Σ_{k=m+d}^{m+d+N} w(k) e²(k) .   (24.5.4)
By the choice of

w(k) = λ^{(m+d+N)−k} = λ^{N'−k}  with 0 < λ < 1   (24.5.5)

the errors e(k) are weighted as shown in Table 24.2 for N' = 50. The weighting then
24.5 A Unified Recursive Parameter Estimation Algorithm
Table 24.2 Weighting w(k) = λ^{N'−k} of the errors e(k) for N' = 50 and k = 10, 20, 30, 40, 47, 48, 49, 50 (with w(50) = 1).

In the recursive algorithms the forgetting factor λ leads to the modifications

μ(k + 1) = 1/[λ + ψ^T(k + 1) P(k) ψ(k + 1)]   (24.5.6)

- P(k + 1) is multiplied by 1/λ .
When choosing the weighting factor λ one has to compromise between greater elimination of the noise and better tracking of time-varying process parameters. It is recommended that λ is chosen within the range 0.90 < λ < 0.995. As the RML and RELS methods exhibit slow convergence during the starting phase due to the unknown e(k) = v̂(k), convergence can be improved if the initial error signals are weighted less and the subsequent error signals are increasingly weighted up to 1. This can be achieved with a time-varying λ(k) as in [24.13]:

λ(k + 1) = λ0 λ(k) + (1 − λ0)   (24.5.8)

with λ0 < 1 and λ(0) < 1. For λ0 = 0.95 and λ(0) = 0.95 one obtains for example

λ(5) = 0.9632,  λ(10) = 0.9715,  λ(20) = 0.9829.
In the limit, lim A(k + 1) = 1.
The weightings given by (24.5.8) and (24.5.5) can be combined in the algorithm

λ(k + 1) = λ₀λ(k) + λ(1 − λ₀) .   (24.5.9)

There is a small weighting in the starting phase, depending on λ₀ and λ(0), and for large k the exponential forgetting given by (24.5.5) is obtained:

lim_{k→∞} λ(k + 1) = λ .
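The recursions (24.5.8) and (24.5.9) are easy to check numerically. A minimal sketch (function and variable names are illustrative, not from the text):

```python
# Time-varying forgetting factor, cf. (24.5.8) and (24.5.9):
# lam(k+1) = lam0*lam(k) + lam_inf*(1 - lam0); lam_inf = 1 gives (24.5.8).
def forgetting_sequence(lam0=0.95, lam_start=0.95, lam_inf=1.0, steps=300):
    lam, seq = lam_start, [lam_start]
    for _ in range(steps):
        lam = lam0 * lam + lam_inf * (1.0 - lam0)
        seq.append(lam)
    return seq

seq_1 = forgetting_sequence()               # approaches 1, cf. (24.5.8)
seq_2 = forgetting_sequence(lam_inf=0.99)   # approaches lambda = 0.99, cf. (24.5.9)
```

The sequence rises monotonically from λ(0) toward its limit, so the early, unreliable errors are weighted down exactly as intended.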
The recursive parameter estimation algorithms have been compared with respect
to the performance of the estimates, the reliability of the convergence and the
computational effort by simulations [24.9], [3.13], [24.10], [24.13], by practical
tests [24.11], [24.12] and theoretically [24.13], [24.17]. The results of the comparisons of the recursive parameter estimation algorithms can be summarized as
follows:
RLS: Applicable for small noise/signal ratios, otherwise gives biased estimates.
Reliable convergence. Relatively small computational effort.
RELS: Applicable for larger noise/signal ratios, if the noise model D/A fits. Slow
convergence in the starting phase. Convergence not always reliable (c.f. RML).
Noise parameters D are estimated. They show a slower convergence than for B and A. Somewhat larger computational effort than RLS.
RIV: Good performance of parameter estimates. To accelerate the initial conver-
gence, starting with RLS is recommended. Larger computational effort than RLS.
RML: High performance of parameter estimates, if the noise model D/A fits. Slow
convergence in the starting phase. More reliable convergence than RELS. Noise
parameters D are estimated, but show slow convergence. Larger computational
effort than RLS, RELS and RIV.
STA: Acceptable performance only for very large identification times. Convergence depends on α. Small computational effort.
For small identification times and larger noise/signal ratios all methods (except
STA) lead to parameter estimates of about the same quality. Then in general RLS is
preferred because of its simplicity and its reliable convergence. The superior
performance of the RIV and RML methods is only evident for a larger identifica-
tion time.
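As an illustration of the unified algorithm with exponential forgetting, the following sketch implements RLS with the correction factor (24.5.6) for a first-order process; the parameters a = −0.8, b = 0.2 are those of the example in Figure 26.2, and everything else (names, noise level, forgetting factor) is chosen freely for illustration:

```python
import numpy as np

def rls_forgetting(u, y, lam=0.99):
    """RLS for y(k) = -a*y(k-1) + b*u(k-1) + v(k) with forgetting factor lam."""
    theta = np.zeros(2)                 # estimates [a, b]
    P = 1000.0 * np.eye(2)              # large initial covariance
    for k in range(1, len(y)):
        psi = np.array([-y[k - 1], u[k - 1]])      # data vector
        mu = 1.0 / (lam + psi @ P @ psi)           # correction factor, cf. (24.5.6)
        gamma = P @ psi * mu                       # correcting vector
        theta = theta + gamma * (y[k] - psi @ theta)
        P = (P - np.outer(gamma, psi @ P)) / lam   # P(k+1) multiplied by 1/lam
    return theta

rng = np.random.default_rng(0)
a, b = -0.8, 0.2
u = rng.standard_normal(2000)
y = np.zeros(2000)
for k in range(1, 2000):
    y[k] = -a * y[k - 1] + b * u[k - 1] + 0.05 * rng.standard_normal()
a_hat, b_hat = rls_forgetting(u, y)
```

With the small noise level used here the estimates settle close to the true values, illustrating the reliable convergence of RLS noted above.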
Here S is called the "square root" of P. The resulting algorithms for the least
squares method then are written
with the initial value S(O) = ~. These equations were stated in similar form for
the state estimation [24.18].
The discrete square root filter in the information form (DSFI) results from the
nonrecursive least squares method in the form
T y = [hᵀ wᵀ]ᵀ ,   (24.6.9)

where T is an orthogonal transformation that triangularizes the equation system. Recursively,

[ h(k + 1) ]              [ h(k)     ]
[ w(k + 1) ] = T(k + 1) [ y(k + 1) ] .   (24.6.12)
Then S⁻¹(k + 1) and h(k + 1) are used to calculate Θ̂(k + 1) according to (24.6.6). This partly nonrecursive, partly recursive form has the advantage that no initial values Θ̂(0) have to be assumed and that exactly S⁻¹(0) = 0 is valid. Therefore convergence is excellent in the initial phase. Furthermore, matrix inversion is not required. This method is especially expedient if the parameters Θ̂ are not required at each sampling step. Then only S⁻¹ and h have to be calculated recursively.
For the discrete square root filtering in covariance form a further developed method has been given in [24.23], the so-called U-D factorization (DUDC). Here the covariance matrix is factorized as

P = U D Uᵀ   (24.6.13)

where D is a diagonal matrix and U an upper triangular matrix with ones in the diagonal. The recursive equation for the covariance matrix is then written as in (24.2.16).
After inserting (24.2.15) and (24.6.13) one obtains for the right-hand side

f(k) = Uᵀ(k)ψ(k + 1)
v(k) = D(k)f(k)   (24.6.16)

α(k) = λ + fᵀ(k)v(k) .   (24.6.19)
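The factorization (24.6.13) itself can be computed with a short backward algorithm. The following sketch shows only this factorization step, not the complete DUDC filter (names are illustrative):

```python
import numpy as np

def udu_factorize(P):
    """Factor a symmetric positive definite P as P = U @ diag(D) @ U.T,
    with U unit upper triangular, cf. (24.6.13)."""
    P = P.astype(float).copy()
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        U[:j, j] = P[:j, j] / D[j]
        # remove the rank-one contribution of column j from the remaining block
        P[:j, :j] -= D[j] * np.outer(U[:j, j], U[:j, j])
    return U, D

P = np.array([[4.0, 2.0], [2.0, 3.0]])
U, D = udu_factorize(P)
# U @ np.diag(D) @ U.T reproduces P
```

Working with U and D instead of P avoids the loss of symmetry and positive definiteness that can occur when P is updated directly in short word-length arithmetic.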
Unless stated otherwise, in this chapter it is assumed that the processes are linear
and the controllers are linear, time-invariant and noise-free.
G_P(z) = y(z)/u(z) = B(z⁻¹)z^-d / A(z⁻¹)   (25.1.1)

G_Pv(z) = n(z)/v(z) = D(z⁻¹)/A(z⁻¹) = (1 + d₁z⁻¹ + … + d_md z^-md) / (1 + a₁z⁻¹ + … + a_ma z^-ma)   (25.1.2)

lim_{N→∞} E{Θ̂(N)} = Θ₀   (25.1.3)
holds, with 8 0 as the true parameter vector and N as the measuring time. Now
conditions for parameter identifiability are given, if only the output y(k) is measured.
Identifiability Condition 1

In concise notation the process equation for the input/output behaviour according to (25.1.4) is

[A + B (Q/P) z^-d] y = D v .
25.1 Parameter Estimation without Perturbations 161
For an arbitrary polynomial S this can also be written as

[A + S + B (Q/P) z^-d − S] y = D v

[A + S + (B − (P/Q) z^d S)(Q/P) z^-d] y = D v .

Hence A + S and B − (P/Q)z^d S describe the same input/output behaviour as A and B, so that the parameters of A and B cannot be determined uniquely from y(k) alone.
Identifiability Condition 2
(25.1.4) shows that the ma + mb unknown process parameters âi and b̂i have to be determined by the l parameters α̂i. If the polynomials D and 𝒜 of (25.1.4) have no common roots, a unique determination of the process parameters requires l ≥ ma + mb, or

max[ma + μ, mb + ν + d] ≥ ma + mb
max[μ − mb, ν + d − ma] ≥ 0 .   (25.1.11)
Hence the controller orders have to be
ν ≥ μ − d + ma − mb  →  ν ≥ ma − d
or                                            (25.1.12)
ν < μ − d + ma − mb  →  μ ≥ mb .
If the process deadtime is d = 0, the orders of the controller polynomials must satisfy either ν ≥ ma or μ ≥ mb. If d > 0, either ν ≥ ma − d or μ ≥ mb must be satisfied. Here the dead time d can exist in the process or can be generated in the controller, see (25.1.4). This means that identifiability condition 2 can also be satisfied by using a controller with e.g. d = ma, ν = 0 and μ = 0.
The parameters d̂i, (25.1.2), can be calculated uniquely from the β̂i, (25.1.4), if

r ≥ md .   (25.1.13)

Hence the parameters d̂i can be estimated for any controller, provided D and 𝒜 have no common roots.
162 25 On-line Identification in Closed Loop
If 𝒜(z⁻¹) and D(z⁻¹) have p common roots, these cannot be identified; only l − p parameters α̂i and r − p parameters β̂i can be determined. The identifiability condition 2 for the process parameters âi and b̂i then becomes

max[μ − mb, ν + d − ma] ≥ p .   (25.1.14)

Note that only the common roots of 𝒜 and D are of interest, and not those of 𝒜 and ℬ, as ℬ = DP and P is known. Therefore the number of common roots in the numerator and denominator of

G_id(z) = D(z⁻¹)/𝒜(z⁻¹) = D(z⁻¹) / [A(z⁻¹)P(z⁻¹) + B(z⁻¹)z^-d Q(z⁻¹)]   (25.1.15)

is decisive.
Example 25.1.1
The parameters of the first-order process (ma = mb = m = 1)
y(k) + ay(k - 1) = bu(k - 1) + v(k) + dv(k - 1)
are to be estimated in closed loop. Various controllers are considered.
a) 1 P-controller: u(k) = −q₀y(k) (ν = 0; μ = 0). (25.1.4) leads to the ARMA process

y(k) + (a + bq₀)y(k − 1) = v(k) + dv(k − 1)

or

y(k) + α̂y(k − 1) = v(k) + β̂v(k − 1) .

Comparison of the coefficients gives

α̂ = â + b̂q₀
β̂ = d̂ .
No unique solution for â and b̂ can be obtained, as

â = a₀ + Δa  and  b̂ = b₀ − Δa/q₀

satisfy these equations for any Δa. The parameters a and b are therefore not identifiable. According to (25.1.12) it is required that ν ≥ 1 or μ ≥ 1.
b) 1 PD-controller: u(k) = −q₀y(k) − q₁y(k − 1) (ν = 1; μ = 0). The ARMA process now becomes second order

y(k) + (a + bq₀)y(k − 1) + bq₁y(k − 2) = v(k) + dv(k − 1)

y(k) + α̂₁y(k − 1) + α̂₂y(k − 2) = v(k) + β̂v(k − 1) .

Comparison of coefficients leads to

â = α̂₁ − b̂q₀ ;  b̂ = α̂₂/q₁ ;  d̂ = β̂ .

The process parameters are now identifiable.
c) 2 P-controllers: u(k) = −q₀₁y(k); u(k) = −q₀₂y(k). Due to a), two equations with coefficients

α̂₁₁ = â + b̂q₀₁  and  α̂₁₂ = â + b̂q₀₂

are obtained. Hence

b̂ = (1/q₀₂)[α̂₁₂ − â] .

The process parameters are identifiable if q₀₁ ≠ q₀₂.
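Cases a) and b) of the example can be reproduced numerically: under the pure P-controller the two columns of the data matrix built from (25.1.32) are exactly proportional, so no unique least squares solution exists, whereas the PD-controller restores full rank. A sketch (controller gains and noise level are chosen freely):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = -0.8, 0.2
q0, q1 = 0.5, 0.2

def closed_loop_data(pd_controller, N=400):
    y, u = np.zeros(N), np.zeros(N)
    for k in range(1, N):
        y[k] = -a * y[k - 1] + b * u[k - 1] + rng.standard_normal()
        u[k] = -q0 * y[k] - (q1 * y[k - 1] if pd_controller else 0.0)
    # data matrix with the columns of (25.1.32): -y(k-1) and u(k-1)
    return np.column_stack([-y[:-1], u[:-1]])

rank_p = np.linalg.matrix_rank(closed_loop_data(pd_controller=False))
rank_pd = np.linalg.matrix_rank(closed_loop_data(pd_controller=True))
```

With the P-controller, u(k − 1) = −q₀y(k − 1) holds exactly, so the data matrix has rank 1; the PD-controller yields rank 2, in agreement with (25.1.12).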
Generally the process parameter vector Θ̂ is obtained from the ARMA parameters α̂₁, … , α̂_l via the comparison of coefficients in (25.1.4), considering the identifiability conditions 1 and 2. If d = 0 and ma = mb = m, and therefore l = 2m, and the orders of the controller polynomials are ν = m and μ ≥ m to satisfy (25.1.12), it follows with p₀ = 1 that the coefficient comparison can be written as the linear system of equations

S Θ = α* ,   (25.1.17)

whose first row reads a₁ + b₁q₀ = α̂₁ − β̂₁ and whose matrix S contains the controller parameters qi and pi.
y(z)/u(z) = [y(z)/v(z)] / [u(z)/v(z)] = −1/G_R(z)   (25.1.22)

would have been obtained, i.e. the negative inverse controller transfer function. The reason is that the undisturbed process output y_u(k) = y(k) − n(k) is not used, but the disturbed y(k). If y_u(k) were known, the process

A(z⁻¹)y_u(z) = B(z⁻¹)z^-d u(z)   (25.1.23)

could be identified. This shows that for direct closed loop identification knowledge of the noise filter n(z)/v(z) is required. Therefore the process and noise model resulting from (25.1.1) and (25.1.2)

A(z⁻¹)y(z) = B(z⁻¹)z^-d u(z) + D(z⁻¹)v(z)   (25.1.24)

is used.
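The noise polynomial D in (25.1.24) can be estimated together with A and B by extending the data vector with past residuals, which is the RELS idea from chapter 24. A sketch in open loop, with a = −0.8, b = 0.2, d = 0.5 as in Figure 26.2 and all other choices illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20000
a, b, d = -0.8, 0.2, 0.5
u = rng.standard_normal(N)
v = 0.1 * rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = -a * y[k - 1] + b * u[k - 1] + v[k] + d * v[k - 1]

theta = np.zeros(3)                  # estimates [a, b, d]
P = 1000.0 * np.eye(3)
e_old = 0.0
for k in range(1, N):
    psi = np.array([-y[k - 1], u[k - 1], e_old])   # extended data vector
    gamma = P @ psi / (1.0 + psi @ P @ psi)
    e = y[k] - psi @ theta           # residual replaces the unknown v(k)
    theta = theta + gamma * e
    P = P - np.outer(gamma, psi @ P)
    e_old = y[k] - psi @ theta       # a posteriori residual
a_hat, b_hat, d_hat = theta
```

As noted in the comparison in chapter 24, the noise parameter d̂ converges more slowly than â and b̂, but all three approach the true values for long identification times.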
The basic model for indirect process identification is the ARMA model, c.f. (25.1.4),

[A(z⁻¹)P(z⁻¹) + B(z⁻¹)z^-d Q(z⁻¹)]y(z) = D(z⁻¹)P(z⁻¹)v(z) .   (25.1.25)
Inserting the controller equation

Q(z⁻¹)y(z) = −P(z⁻¹)u(z)   (25.1.26)

results in

A(z⁻¹)P(z⁻¹)y(z) − B(z⁻¹)z^-d P(z⁻¹)u(z) = D(z⁻¹)P(z⁻¹)v(z)   (25.1.27)

and after cancellation of the polynomial P(z⁻¹) one obtains the equation of the process model as in open loop, (25.1.23). The difference from the open loop case is, however, that u(z) or P(z⁻¹)u(z) depends on y(z) or Q(z⁻¹)y(z), (25.1.26), and cannot be freely chosen.
The identifiability conditions for direct process identification in closed loop can
be derived from the condition for a unique minimum of the loss function
V = Σ e²(k) .   (25.1.28)

A unique minimum of the loss function V with regard to the unknown process parameters requires a unique dependence of the process parameters in

(1/D̂)[Â + B̂ z^-d (Q/P)] = (ÂP + B̂ z^-d Q)/(D̂P) = 𝒜̂/ℬ̂   (25.1.31)
on the error signal e. This term is identical to the right-hand side of (25.1.4), the model for indirect process identification, for which the parameters of A, B and D can be uniquely determined based on the transfer function y(z)/v(z), provided that the identifiability conditions 1 and 2 are satisfied. Therefore, in the case of convergence with e(z) = v(z), the same identifiability conditions must be valid for direct closed loop identification. Note that the error signal e(k) is determined by the same equation for both the indirect and the direct process identification, compare (25.1.4) and (25.1.30, 31). In the case of convergence this gives Â = A, B̂ = B and D̂ = D and therefore in both cases e(k) = v(k). A second way of deriving identifiability condition 2 results from considering the basic equation of some nonrecursive parameter estimation methods. For the least squares method (24.2.2) gives

y(k) = ψᵀ(k)Θ̂ = [−y(k − 1) … −y(k − ma)  u(k − d − 1) … u(k − d − mb)]Θ̂ .   (25.1.32)
",T (k) is one row of the matrix tp of the equation system (24.2.6). Because of the
feedback (25.1.26), there is a relationship between the elements of ",T(k)
u(k - d - 1) = -PI u(k - d - 2) - ... - p,.u(k - Jl - d - 1)
-qoy(k - d - 1) - ... - qvy(k - v - d - 1) . (25.1.33)
however, only a process with ma parameters in the denominator and mb parameters in the numerator, a better result can be expected for direct process identification. This especially holds for higher-order processes. Additionally the computational effort is smaller.
3. For direct process identification in closed loop, parameter estimation
methods using the prediction error can be applied as in open loop, provided the
identifiability conditions are satisfied. The controller need not be known.
4. If the existing controller does not satisfy identifiability condition 2 because its order is too low, identifiability can be obtained by:
a) switching between two controllers with different parameters [25.4, 25.5],
b) introduction of a dead time d ≥ ma − ν + p in the feedback,
c) use of nonlinear or time varying controllers.
Now an external perturbation u_s(k) is assumed to act on the closed loop as shown in Figure 25.2. The process input then becomes

u(z) = u_R(z) + u_s(z)   (25.2.1)

with

u_R(z) = −[Q(z⁻¹)/P(z⁻¹)] y(z) .   (25.2.2)

The additional signal u_s(k) can be generated by filtering a signal s(k)

u_s(z) = G_s(z)s(z) .   (25.2.3)

If G_s(z) = G_R(z) = Q(z⁻¹)/P(z⁻¹), then s(k) = w(k) is the reference value. s(k), however, may also be a noise signal generated in the controller. If a test signal acts directly on the process input, then G_s(z) = 1 and u_s(k) = s(k).
That means there are several ways to generate the perturbation u_s(k). For the following it is only important that this perturbation is an external signal which is uncorrelated with the process noise v(k). At this point the perturbation need not be measurable.
Figure 25.2. Scheme of the process to be identified in closed loop with an external perturbation s.
resulting in

[AP + Bz^-d Q]y(z) = DPv(z) + Bz^-d P u_s(z) .

Inserting (25.2.1) gives

A(z⁻¹)P(z⁻¹)y(z) − B(z⁻¹)z^-d P(z⁻¹)u(z) = D(z⁻¹)P(z⁻¹)v(z)   (25.2.5)

and after cancellation of the polynomial P(z⁻¹) one obtains the open loop process equation

A(z⁻¹)y(z) = B(z⁻¹)z^-d u(z) + D(z⁻¹)v(z) .   (25.2.6)
Unlike (25.1.25), u is generated not only by the controller based on y, but according to (25.2.1) also by the perturbation u_s(k). Therefore the difference equation according to (25.1.33), (25.2.1) and (25.2.2) is
u(k − d − 1) = −p₁u(k − d − 2) − … − p_μ u(k − μ − d − 1)
              − q₀y(k − d − 1) − … − q_ν y(k − ν − d − 1)
              + u_s(k − d − 1) + p₁u_s(k − d − 2) + … + p_μ u_s(k − μ − d − 1) .
If u_s(k) ≠ 0, u(k − d − 1) is no longer linearly dependent on the other elements of the data vector ψᵀ(k), (25.1.32), for arbitrary controller orders μ and ν. The process described by (25.2.6) is therefore directly identifiable if the external perturbation u_s(k) sufficiently excites the process parameters to be identified. Note that the perturbation u_s(k) need not be measurable. Hence, for an external perturbation signal u_s(k), the identifiability condition 2 of the last section is not relevant in this case. Identifiability condition 1, however, still has to be satisfied.
As already stated in the previous section, the prediction error parameter estimation methods as used in open loop identification can be applied, provided an external perturbation signal acts on the process. The controller need not be known and the perturbation signal need not be measurable. Note that this result is also valid for an arbitrary noise signal filter D(z⁻¹)/C(z⁻¹).
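This statement can be verified in simulation: with the same P-controller as in Example 25.1.1a, adding an external PRBS-like perturbation u_s(k) at the loop input makes direct RLS estimation converge to the true parameters (a = −0.8, b = 0.2 again; all other names and levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, q0 = -0.8, 0.2, 0.5
N = 5000
y, u = np.zeros(N), np.zeros(N)
us = np.sign(rng.standard_normal(N))     # PRBS-like external perturbation
for k in range(1, N):
    y[k] = -a * y[k - 1] + b * u[k - 1] + 0.05 * rng.standard_normal()
    u[k] = -q0 * y[k] + us[k]            # cf. (25.2.1): u = u_R + u_s

theta = np.zeros(2)
P = 1000.0 * np.eye(2)
for k in range(1, N):
    psi = np.array([-y[k - 1], u[k - 1]])
    gamma = P @ psi / (1.0 + psi @ P @ psi)
    theta = theta + gamma * (y[k] - psi @ theta)
    P = P - np.outer(gamma, psi @ P)
a_hat, b_hat = theta
```

Without the term us[k] this is exactly the non-identifiable configuration of Example 25.1.1a; with it, the linear dependence (25.1.33) is broken and the estimates converge.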
Within each of these groups further classifications can be made. The most important cases, the resulting designations of adaptive controllers and the upcoming tasks are considered in the following.
a) Process models
Section 3.7.1 included a classification of mathematical process models. Here only
parametric process models are of interest:
- Input/output models in the form of (stochastic) difference equations or z-transfer
functions
A(z⁻¹)y(z) − B(z⁻¹)z^-d u(z) = D(z⁻¹)v(z)   (26.1.1)
(ARMAX model)

A(z⁻¹)y(z) − B(z⁻¹)z^-d u(z) = v(z)   (26.1.2)
(least squares model → LS model).

v(k) is a statistically independent stochastic signal with E{v(k)} = 0 and variance σ_v². The parameters Θᵀ = [a₁ … am; b₁ … bm; d₁ … dm] are assumed to be constant. For further specifications see section 24.1.
- State models in the form of (stochastic) vector difference equations

In general v(k) and n(k) are statistically independent signal processes with E{v(k)} = 0, E{n(k)} = 0 and covariance matrices V and N. For more details the reader is referred to chapters 15 and 22. The parameters Θ can be constant or modelled by the stochastic process

Θ(k + 1) = Φ_Θ Θ(k) + ξ(k) .   (26.1.4)

Here the vector signal process ξ(k) is statistically independent with E{ξ(k)} = 0 and covariance matrix Ξ. The parameter vector Θ(k) can also be contained in an extended state vector x(k).
The process models given above are valid for stochastic disturbances. Ordinary difference or vector difference equations result if the disturbances vi and ni are either deterministic signals or zero. The difference equations belonging to (26.1.1) or (26.1.2) can easily be extended to nonlinear difference equations which are linear in the parameters Θi but at the same time contain terms such as u^a(k) and y^p(k), see [26.59]. Nonlinear state models can be written in general form
- Parameter estimation:
  - process parameter estimates
    I₁₁ = [Θ̂] = [âi; b̂i]ᵀ or [âi; b̂i; d̂i]ᵀ   (26.1.6)
  - process parameter estimates and their uncertainty
    I₁₂ = [Θ̂, ΔΘ̂]ᵀ   (26.1.7)
- State estimation:
  - state estimates
    I₂₁ = [x̂(k + 1)]   (26.1.8)
  - state estimates and their uncertainty
    I₂₂ = [x̂(k + 1), Δx̂(k + 1)]ᵀ   (26.1.9)
- Signal estimation:
  If the assumed process model contains a noise signal filter, e.g. D(z⁻¹)/A(z⁻¹), the nonmeasurable noise signals v̂i(k) or n̂i(k) can be estimated by applying parameter or state estimation methods.   (26.1.10)
Cautious controllers

A controller which employs the separation principle in the design and uses the parameter and state estimates together with their uncertainties is called a 'cautious controller'. Here the information measures I₁₂ or I₂₂ are used. Because of the uncertainty of the estimates, the controller applies a cautious action on the process.
174 26 Parameter-adaptive Controllers
I = E{e_w²(k + d + j) + r u²(k)} ,  j ≥ 0 .

For designs with deterministic signals it is not necessary to use the expectation. Controllers designed according to the cancellation principle, e.g. deadbeat controllers, do not require a specific performance function, since the time responses of the controlled and manipulated variables or their settling times are prescribed.
26.2 Suitable Control Algorithms 175
e) Control algorithms
The actual design of a control algorithm is of course performed before implementation in a digital computer. Then it remains to calculate the controller parameters as functions of the process parameters. Control algorithms for adaptive control should satisfy:
(1) Closed loop identifiability condition 2.
(2) Small computing and storage requirements for controller parameter
calculation.
(3) Applicability to many classes of processes and signals.
The next section discusses which of the control algorithms meet these requirements
for parameter-adaptive control. Within the class of self-optimizing adaptive controllers based on process identification, nondual methods based on the certainty equivalence principle and recursive parameter estimation have proven successful both in theory and in practice. The resulting methods will be called parameter-adaptive control algorithms; they are also called self-tuning regulators, e.g.
[27.8, 27.13]. One could imagine a distinction between 'self-tuning' and 'adaptive', as the former appears to imply constant process parameters. However, there is no sharp boundary between the two cases when considering their applicability, so the distinction is of secondary importance.
This section examines the structure and design effort of various control algorithms with regard to their application for parameter-adaptive controllers. For minimal computational effort in calculating the controller parameters, the following control algorithms are preferred for parameter-adaptive control:
- deadbeat controller DB(ν), DB(ν + 1)
- minimum variance controller MV3, MV4
- parameter-optimized controller i-PC-j (with direct parameter calculation)
More computation is required for:
- general linear controller with pole assignment LC-PA
- state controller SC
These control algorithms are now considered with regard to identifiability condition 2, (25.1.14), and the computational effort involved. For control algorithms which theoretically cancel the process poles or zeros, one must distinguish whether or not the controller is adjusted exactly to the process.
Its orders are ν = ma and μ = mb + d. This means that identifiability condition 2 is satisfied provided there are no common roots in the transfer behaviour (25.1.15). For the case of inexactly adjusted controller parameters

(26.2.2)

(26.2.3)

Since no common roots occur in the numerator and denominator, the process remains identifiable. Note that in the control loop model (26.2.2), which is based on process identification, the polynomial 𝒜(z⁻¹) has order l = ma + mb + d. In the adjusted state, however, 𝒜(z⁻¹) = A(z⁻¹), so that the parameters (α̂_{m+1} … α̂_{2m+d}) = 0 for e.g. ma = mb = m. Hence, in the case of indirect process identification these vanishing parameters have to be used for the calculation of the process parameters. According to (7.2.14) the deadbeat controller DB(ν + 1) has orders ν = ma + 1 and μ = mb + d + 1. Concerning the identifiability condition this controller shows the same behaviour as DB(ν). It is recommended to apply deadbeat controllers only with increased order for strongly damped low-pass processes.
G_id(z) = D / ( z{ [B + (r/b̂₁)A] D̂ + F̂[AB̂ − ÂB] } )   (26.2.8)
In general no common roots appear, and the identifiability conditions (26.2.6) and (26.2.7) are satisfied with md ≥ mb or md ≥ ma + 1 for d = 0 or d ≥ 1, respectively.
However, if the controller becomes exactly tuned to the process, Â = A, B̂ = B, D̂ = D, this reduces to

G_id(z) = D / ( zD[B + (r/b₁)A] )   (26.2.9)
and p = md common roots appear, which means that the process is no longer identifiable for d = 0, (26.2.6). (26.2.7) leads to

max[d − 1, md − ma − 1] ≥ md ,   (26.2.10)

resulting in the requirement for MV3-d that

d ≥ md + 1   (26.2.11)
which is satisfied only for relatively large deadtime. With r = 0 the MV4-d controller arises, (14.2.13),

G_R(z) = L(z⁻¹) / [ zB(z⁻¹)F(z⁻¹) ] .   (26.2.12)
Identifiability for d = 0 in the untuned case is obtained with md ≥ ma + 1. For d ≥ 1 the identifiability conditions are always satisfied. When the tuning is exact, p = md common roots appear for r = 0, (26.2.3). Then the process is not identifiable for d = 0. For d ≥ 1 the condition

d ≥ md + 1   (26.2.13)

has to be satisfied.
If the minimum variance controllers are extended by a proportional-integral acting term (14.1.25) in order to avoid lasting control deviations, the orders ν and μ are increased by one and no common roots of D and 𝒜 occur for α ≠ 0. Hence the same identifiability conditions are valid for these modified minimum variance controllers MV3-d-PI or MV4-d-PI as for inexactly tuned MV3 and MV4 controllers. Therefore minimum variance controllers often satisfy the identifiability conditions, see Table 26.1. It is recommended to use minimum variance controllers only for distinctly stochastic disturbance signals.
Table 26.1 Properties of different control algorithms with respect to application for parameter-adaptive control.

control algorithm | closed loop identifiability condition 2 satisfied | computational effort: parameter calculation | computational effort: operation | danger of instability for *) | evaluation for parameter-adaptive control
deadbeat contr. DB(ν), DB(ν + 1) | yes | very small | medium | B(z) = 0 | suitable for asymptotically stable processes
min. variance contr. MV3-d | d ≥ md + 1 | small | medium | D(z) = 0 | suitable for stochastic disturbances
min. variance contr. MV4-d | d ≥ md + 1 | small | medium | B(z) = 0, D(z) = 0 | suitable for processes with zeros inside the unit circle and stochastic disturbances
parameter-optimized contr. i-PC-j (ν = 2) | ν ≥ ma − d | medium | small | - | suitable, dependent on proper design
(26.2.14)

satisfies the identifiability condition (25.1.12) if its order is related to the process by ν ≥ ma − d and the parameters qi are chosen in such a way that no common roots appear in (25.1.15). Hence PID-controllers (3PC) are suitable for processes with ma ≤ 2 + d. If the process possesses no deadtime, the allowable process order is m = 2. For d = 1 the order m = 3 may be chosen, for d = 2, m = 4, etc. This somewhat restricts the application of PID-controllers. (However, good results can also be accomplished for higher-order processes, as will be shown later.)
For PID-control algorithm design the following methods can be used
(chapter 5):
- numerical parameter optimization
- approximation of controllers which can be easily designed
- simulation of tuning rules.
Table 26.2 Computational effort and storage requirement for different control algorithms
[25.16].
for parameter estimation and a large sampling time for control may be used for an automatic on-line search for a suitable, later synchronous sampling time, [26.36]. In the case of computing time problems the controller design can also be distributed over several sampling intervals.

b) Conditional controller design. The calculation or the change of controller parameters may depend on certain conditions, for example the crossing of thresholds by process parameter changes, the excitation by input signals, or the result of a closed-loop simulation.
Hence several possibilities exist for combining parameter-adaptive controllers. Their selection depends, for example, on the process, the acting signals, the necessity of adaptation and the computer capacity.
(asymptotic stability)

To satisfy the first condition, for example, the pole-zero cancellation problems, which depend on the structure of the process and the controller, have to be taken
into account. The second condition implies the close connection between convergence and stability. The convergence can be subdivided into several phases:
- convergence at the beginning
- convergence far from the convergence point
- convergence close to the convergence point (asymptotic convergence)
26.3 Suitable Combinations 183
The stability condition 2 is related only to the asymptotic behaviour. The adaptive system is in general stable if the process model parameters reach the true values. The parameter estimation methods RLS, RELS and RML satisfy this condition if the following convergence conditions (see section 24.2) are met:
a) process: stable and identifiable
b) process order m and dead time d known
c) lim_{k→∞} (1/k)P⁻¹(k) positive definite (persistent excitation of order m)
d) e(k) not correlated with u(k)
e) e(k) not correlated and E{e(k)} = 0
f) RELS, RML: H(z) = 1/D(z⁻¹) − 1/2 positive real; this means H(z) is stable and Re{H(z)} > 0, or Re{D(z⁻¹)} < 2
g) identifiability conditions in closed loop satisfied (chapter 25).
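Condition f) can be checked numerically on a frequency grid. The sketch below evaluates H(z) = 1/D(z⁻¹) − 1/2 directly on the unit circle (names are illustrative; d₁ = 0.5 is the value used in the examples of this chapter):

```python
import numpy as np

def is_positive_real(d_coeffs, n_grid=2048):
    """Check Re{H(e^{jw})} > 0 for H(z) = 1/D(z^-1) - 1/2 on a frequency grid,
    cf. convergence condition f); d_coeffs = [d1, d2, ...]."""
    w = np.linspace(0.0, np.pi, n_grid)
    z_inv = np.exp(-1j * w)
    D = 1.0 + sum(dk * z_inv ** (i + 1) for i, dk in enumerate(d_coeffs))
    H = 1.0 / D - 0.5
    return bool(np.all(H.real > 0.0))

ok = is_positive_real([0.5])      # D(z^-1) = 1 + 0.5 z^-1: condition satisfied
bad = is_positive_real([1.5])     # violates the positive-real condition
```

A grid check is of course only a sketch of the analytic condition, but it suffices to screen an estimated noise polynomial before trusting RELS or RML convergence.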
For some parameter-adaptive controllers these conditions can be weakened:
- biased parameter estimates can be tolerated or merely give asymptotic convergence (RLS/MV4)
- the process may be unstable
- the identifiability conditions can be circumvented by assuming some controller parameters to be known (e.g. MV).
The stability and convergence of parameter-adaptive controllers were investigated
with different approaches. A survey is given in [26.38] and [26.37].
At first the convergence properties of recursive parameter estimators are of interest. The ODE method (analysis via approximation through ordinary differential equations due to Ljung (1977) [26.17]), the analysis via Ljapunov functions, de Larminat (1979) [26.41], or the application of martingale theory, Kumar, Moore (1978), Solo (1979), result in conditions for asymptotic and even global convergence [26.39]. These results can be directly transferred to some implicit parameter-adaptive controllers, Egardt (1979) [26.40], Gawthrop (1980) [26.63], e.g. for RLS/MV4 due to Ljung [26.37, 26.40, 26.63].
The convergence of a large class of explicit parameter-adaptive controllers can be investigated as shown by de Larminat (1980) [26.42]. For the case of deterministic signals (no disturbances) the procedure is as follows, [26.42], [26.43]. For the process it is

(26.3.13)

Now a model loop is formed, consisting of the time-variant process model, the present controller and the measured signals. In state space representation it is then

ψ(k + 1) = Φ(k)ψ(k) + h(k)e(k)   (26.3.15)
with Φ(k) a companion-type matrix containing in its first block row the negative parameter estimates −Θ̂ᵀ(k), in its last block row the controller-dependent entries −[Γ(k) + q₀(k)Θ̂(k)]ᵀ, and shifted unit entries elsewhere, and with h(k) a vector containing 1 and q₀(k) as its only nonzero elements.   (26.3.16)
If the signals in ψᵀ(k), the process parameters Θ̂(k) and the controller parameters Γ(k) are bounded, then all other elements of Φ(k) and h(k) are bounded as well. If the process parameters converge towards final values for k → ∞, then there exists a finite time k₁ after which Φ(k) has only stable eigenvalues. For k > k₁ the system approaches a time-invariant form. As the elements of ψ(k) approach zero for k → ∞, the model control loop, and hence also the parameter-adaptive control loop, becomes asymptotically stable. This method can also be transferred to stochastic disturbances, applying e.g. the RELS estimation method, Schumann (1982) [26.43].
The investigation shows that asymptotic stability of an explicit parameter-adaptive control system can be reached if
a) the convergence conditions of the parameter estimation method are satisfied (items a to g),
b) the manipulated variables u(k) of the process are bounded.
This convergence consideration can be classified as "somewhat far" from the convergence point. Simulations and applications have shown that convergence "far from the convergence point" can generally be reached if the above mentioned
Figure 26.2a, b. Parameter estimation values for a first-order process in closed loop (a₁ = −0.8; b₁ = 0.2; d₁ = 0.5) with a an exactly tuned control algorithm MV3 (r = 0.05); b parameter-adaptive controller RML/MV3 (λ₀ = 0.99, λ(0) = 0.95, r = 0.05).
Parameter estimation | Control algorithms (stochastic) | Control algorithms (deterministic)
RLS  | xᵃ xᵃ | x x x x
RELS | x x   | x x x x
RML  | x x   | xᵇ xᵇ xᵇ xᵇ
depends on the quality of the parameter estimates. Therefore a d.c. value correction according to (26.5.16) is recommended. In general an integral term in the controller is to be preferred.
d) Choice of the controller design method
The choice of the method for the controller parameter calculation is mainly
determined by the computational effort, see section 26.2.
[26.8, 26.9], such that an implicit combination results, compare section 26.3.1. (26.4.1) is multiplied by F(z⁻¹)

F A y − B F z^-d u = F v   (26.4.6)

and inserted in the controller design equation (26.4.3)

y(z) = L(z⁻¹)z^-(d+1) y(z) + B(z⁻¹)F(z⁻¹)z^-d u(z) + F(z⁻¹)v(z) .   (26.4.7)

With (26.4.2) it follows that

y(z) = −Q(z⁻¹)z^-(d+1) y(z) + P(z⁻¹)z^-(d+1) u(z) + F(z⁻¹)v(z) .   (26.4.8)
Thus the modified model directly contains the controller parameters qi and pi. They can be estimated by applying the RLS method to (26.4.8) and can be used for the calculation of u(k). With ν = ma − 1 and μ = mb + d − 1 the corresponding difference equation becomes

… + ε(k − d − 1) .   (26.4.9)

Here ε(z) = F(z⁻¹)v(z) is a MA process of order d. (26.4.9) contains ma parameters qi and mb + d parameters pi, i.e. altogether ma + mb + d parameters have to be estimated. Since, however, according to (26.4.5) only ma + mb + d − 1 parameters can be estimated, in principle one parameter has to be assumed known when using the modified model. If for example p₀ = b₁ is known, then it follows from (26.4.9) that

y(k) = ψᵀ(k − d)Θ̂ + p₀u(k − d − 1) + ε(k − d − 1)   (26.4.10)

with

Θ̂ᵀ = [q₀ … q_ν  p₁ … p_μ]   (26.4.11)

ψᵀ(k − d) = [−y(k − d − 1) … −y(k − d − ma) … ]
The parameter estimates are inserted in the control algorithm (26.4.2) and the new manipulated variable is calculated:

u(k + 1) = (1/p̂₀)[ −q̂₀y(k + 1) − … − q̂_ν y(k − ma + 2) − p̂₁u(k) − … − p̂_μ u(k − mb − d + 2) ] .   (26.4.14)
Example 26.4.1:
Equations for programming an explicit stochastic parameter-adaptive controller

1. Estimation of d.c. values through differencing
ΔU(k) = U(k) − U(k − 1)
ΔY(k) = Y(k) − Y(k − 1)
u(k) = ΔU(k);  y(k) = ΔY(k)

2. Parameter estimation (RLS, RELS, RML), compare Table 24.1
a) e(k) = y(k) − ψᵀ(k)Θ̂(k − 1)
b) Θ̂(k) = Θ̂(k − 1) + γ(k − 1)e(k)
c) Inserting y(k) and u(k − d) in ψᵀ(k + 1) and ψ(k + 1)
d) γ(k) = μ(k + 1)P(k)ψ(k + 1)
e) P(k + 1) = [I − γ(k)ψᵀ(k + 1)]P(k)/λ

…

U∞(k + 1) = [(1 + Σ âi) / (Σ b̂i)] Y∞(k + 1)
e) New manipulated variable:
U(k + 1) = U∞ + p̂₁u(k) + p̂₂u(k − 1) + … + p̂_{m+d-1}u(k − m − d + 2)
           − q̂₀e_w(k + 1) − q̂₁e_w(k) − … − q̂_{m-1}e_w(k − m + 2) .

5. Cycle
a) Replace Y(k + 1) by Y(k) and u(k + 1) by u(k).
b) Step to 1.

Note that the old parameters Θ̂(k) are used to calculate the process input u(k + 1), in order to save computing time between steps 4a) and 4e).
Figure 26.3 shows the stochastic noise signal for the process with m = 2 (described in section 26.6), the signals for a fixed, exactly tuned controller MV3 and the adaptive controller RML/MV3. The adaptive controller was switched on at k = 0 with initial parameters Θ̂(0) = 0 (without pre-identification). After initially larger
Figure 26.3a-c. Control variable y(k) and manipulated variable u(k) for test process VII (m = 2). a Noisy output y(k) = n(k), no control; b fixed controller MV3 (r = 0.01); c adaptive controller RML/MV3 (r = 0.01).
controller actions and about 10 sampling steps, almost the same control performance is obtained as for the exactly tuned, fixed MV3. Further examples are given in section 26.6.
Deterministic controllers are often designed for step changes in the reference value, which is e.g. the natural excitation for servo-control systems. They are also used as fixed controllers with constant reference values because they are easy to understand and simple to verify experimentally. In the following, different deterministic adaptive controllers are briefly described. For some adaptive controllers, control signals with test process VI (Appendix, Vol. I) are shown:
G_P(s) = 1 / [(1 + 10s)(1 + 7.5s)(1 + 5s)]
26.5 Deterministic Parameter-adaptive controllers 193
Before the adaptive controller is switched on, in some cases a pre-identification with a PRBS test signal is performed in open loop, compare also section 26.7. The adaptive behaviour is also demonstrated for step changes of the dominant time constant from T_1 = 10 s to T_1 = 2.5 s. If not indicated otherwise, the results were obtained with the microcomputer DMR-16 (which has been especially developed for adaptive control) and an analog simulated process, [26.44].
G_DBν(z) = u(z)/e_w(z) = q̂_0 Â(z^-1) / [1 − q̂_0 B̂(z^-1) z^-d]   (26.5.1)

with

q̂_0 = 1/(b̂_1 + b̂_2 + ... + b̂_m) .   (26.5.2)

This yields the control algorithm

u(k) = q̂_0 [b̂_1 u(k − d − 1) + ... + b̂_m u(k − d − m)] + q̂_0 [e_w(k) + â_1 e_w(k − 1) + ... + â_m e_w(k − m)] .   (26.5.3)
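A minimal numerical sketch of (26.5.1) to (26.5.3) for an assumed second-order process with d = 0 (all coefficient values below are illustrative, not from the text):

```python
# Deadbeat controller DB(v) for an assumed process
# y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2)   (d = 0)
a1, a2 = -0.8, 0.15          # assumed stable process parameters
b1, b2 = 0.5, 0.3
q0 = 1.0 / (b1 + b2)         # (26.5.2)

N = 12
y = [0.0] * (N + 2)          # two zero initial conditions at the front
u = [0.0] * (N + 2)
e = [0.0] * (N + 2)
w = 1.0                      # reference step at k = 0
for k in range(N):
    i = k + 2
    # process response
    y[i] = -a1 * y[i-1] - a2 * y[i-2] + b1 * u[i-1] + b2 * u[i-2]
    e[i] = w - y[i]
    # control algorithm (26.5.3)
    u[i] = q0 * (b1 * u[i-1] + b2 * u[i-2]
                 + e[i] + a1 * e[i-1] + a2 * e[i-2])
```

For this process the output reaches the reference value exactly after m = 2 steps and the manipulated variable settles to the constant value 1/K = 0.4375 one step later: the finite settling that defines deadbeat behaviour.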
Figure 26.4a, b. Adaptive deadbeat controller RLS/DB. Process VI. (Design parameters: m = 3; T_0 = 10 s; λ = 0.95). a DB(ν); b DB(ν + 1) with r′ = 1 (q_0 = q_0min).
with

ψ^T(k) = [−y(k − 1) ... −y(k − m)  Δu(k − d − 1) ... Δu(k − m − d + 1)]   (26.5.7)

Θ^T(k − 1) = [â_1 ... â_m  b̂_1 ... b̂_m]   (26.5.8)

u(k) = q̂_0 [−b̂_1 Δu(k − d − 1) − ... − b̂_m Δu(k − d − m + 1) + e_w(k) + â_1 e_w(k − 1) + ... + â_m e_w(k − m)] .   (26.5.10)
This means that no calculation time can be saved by applying the implicit algorithms instead of the explicit one. This is why the explicit version, which is also more transparent, is recommended.
In [26.45] it was shown that this adaptive deadbeat controller is globally asymptotically stable, compare also section 26.3.2.
Figure 26.4 shows typical control signals. After pre-identification over 12 samples in open loop, exact deadbeat behaviour results already for the first step change of the reference variable. The deadbeat controller with increased order, DB(ν + 1), shows considerably smaller control amplitudes than DB(ν). After changing the dominant time constant T_1, Figure 26.5, the fixed DB-controller shows markedly oscillating behaviour, while the adaptive DB-controller is better tuned after
Figure 26.5. Adaptive deadbeat controller RLS/DB for a step change of the time constant T_1. Process VI. (m = 3; T_0 = 8 s; λ = 0.93). Solid line: adaptive controller; dashed line: fixed controller.
each step change in the reference value and is almost exactly adapted after the third step.
Since the process model has to be known rather precisely for deadbeat controller design, the parameter-adaptive deadbeat controller is better suited for application than the fixed deadbeat controller. The parameter sensitivity observed for some processes can be reduced considerably through the automatically tracked process model. Several applications have shown that the parameter-adaptive deadbeat controller with increased order gives good results for well-damped processes and not too small a sampling time. The significance of the adaptive deadbeat controller, however, lies mainly in its role as a simple standard type for simulations, comparisons, first experimental trials and theoretical investigations.
Figure 26.6 Block diagram of the adaptive state controller: parameter estimation, calculation of the controller parameters k^T, d.c. value calculation and state estimation around the process.
By including the process gain K_p, the influence of the weighting of the manipulated variable becomes independent of the current K_p. The state controller parameters k^T are calculated by the recursive solution of the matrix Riccati equation (8.1.31). Mostly 10 recursion steps are sufficient to obtain a good approximation of the steady-state solution P. Then, according to (8.1.34),

k^T = [r K̂_p^2 + b^T P b]^-1 b^T P A   (26.5.12)

can be determined. The required computational effort is reasonable, also for microcomputers.
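The recursive Riccati solution and the gain computation can be sketched as follows; the system matrices and weights are assumed values, and the K_p scaling of r is omitted for simplicity:

```python
import numpy as np

# Assumed discrete-time system x(k+1) = A x(k) + b u(k)
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
b = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state weighting
r = 0.2                # weighting of the manipulated variable

# Recursive solution of the matrix Riccati equation
P = Q.copy()
P_prev = P
for _ in range(100):   # a few steps already give a good approximation
    P_prev = P
    S = r + b.T @ P @ b                     # scalar (1x1) innovation weight
    P = Q + A.T @ P @ A - A.T @ P @ b @ np.linalg.inv(S) @ b.T @ P @ A

# state feedback gain, cf. (26.5.12) without the Kp scaling
k_T = np.linalg.inv(r + b.T @ P @ b) @ b.T @ P @ A
```

The closed-loop matrix A − b k^T then has all eigenvalues strictly inside the unit circle, even though A itself has an eigenvalue on the stability boundary.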
(26.5.14)
Figure 26.7 Adaptive state controller DSFI/SC. Process VI. Design parameters: m = 3; T_0 = 5 s; λ = 0.95; Q = I; r = 0.2.
for state controllers. During the first settling towards the new reference value, the state controller is approximately adapted; after the second step it is completely adapted. After changing the time constant T_1 and after two changes in the reference variable, the adaptive state controller is well adapted, see Figure 26.8. The fixed state controller shows a somewhat oscillating behaviour which, however, is significantly less pronounced than for the DB-controller, and also in comparison to the PID-controller, Figure 26.12.
Figure 26.8 Adaptive state controller DSFI/SC for a step change of the time constant T_1. Process VI. (m = 3; T_0 = 5 s; λ = 0.95; Q = I; r = 0.2). Solid line: adaptive controller; dashed line: fixed controller.
The first selftuning controllers on the market use mainly method a), for example measurement of an impulse response [26.51] or evaluation of characteristic values of the settling behaviour of the closed loop [26.52].
In [26.53] the tuning rule of Ziegler-Nichols was applied after an oscillation experiment. To determine the critical gain K_crit and the critical period T_p, an on-off controller is first inserted in the loop instead of a P-controller. K_crit and T_p are determined through parameter estimation of the basic oscillation, and the controller parameters are calculated using prescribed amplitude and phase margins.
Hence, the methods a) with an active tuning experiment are simple and easy to understand. However, they disturb the process considerably through the experiment and can only be applied for low process noise. They are furthermore limited to one-off or occasionally repeated tuning.
If the process model is determined by parameter estimation, then test signals of small amplitude can be used, and considerable noise as well as closed-loop operation can be tolerated. Furthermore, the different tuning rules can be applied to simulated "experiments" in the computer, i.e. method b). In [26.54, 26.55] the critical gain K_crit and the period T_p are calculated for a P-controller by using a stability criterion. Then modified Ziegler-Nichols rules are used, as shown in Table 5.8. In order to improve the resulting signals, the gain can be modified iteratively until the simulated control behaviour shows a given overshoot. This method can be applied rather generally, also for continuously operating adaptive controllers.
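Method b) can be sketched for an assumed second-order model: the critical gain of a P-controller follows from the characteristic polynomial by bisection on the stability boundary |z| = 1 (all coefficients below are illustrative, not from the text):

```python
import numpy as np

# Assumed process B(z^-1)/A(z^-1) with A = 1 - 0.8 z^-1 + 0.15 z^-2
# and B = 0.5 z^-1 + 0.3 z^-2.  P-controller with gain K gives the
# characteristic polynomial z^2 + (0.5K - 0.8) z + (0.15 + 0.3K).
def spectral_radius(K):
    roots = np.roots([1.0, 0.5 * K - 0.8, 0.15 + 0.3 * K])
    return max(abs(roots))

# bisection on the stability boundary |z| = 1
lo, hi = 0.1, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if spectral_radius(mid) < 1.0:
        lo = mid
    else:
        hi = mid
K_crit = 0.5 * (lo + hi)
```

Here the complex pole pair crosses the unit circle at K_crit = 17/6 ≈ 2.833; from K_crit and T_p, modified Ziegler-Nichols rules as in Table 5.8 would then give the controller settings.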
Figure 26.9 Selftuning PID-controller with RLS and design via tuning rules and tuning simulation [26.54]. Process VI. Design parameters: m = 3; T_0 = 4 s; λ = 0.99; κ = 10%; 1%; 0%; κ = overshoot.
Figure 26.9 shows the signals for this tuning method. The parameter estimation is performed for two step responses. A fixed PID-controller is designed and the overshoot is reduced for the next closed-loop responses.
is minimized by a numerical optimization method, see section 5.4. In this way the PID parameters can be determined for arbitrary linear processes. A drawback at first sight is the relatively large calculation time.
In [26.44, 26.56] an optimization procedure was developed on the basis of the Hooke-Jeeves search method, which distributes the optimization time over several sampling intervals if required.
This stepwise parameter optimization is realized in two program parts:
1. Real-time program:
control variable sampling → calculation of the manipulated variable → output of the manipulated variable → parameter estimation
2. Design program:
starting values q(n) → parameter optimization → intermediate values q(n + 1) → interruption by t. → continuation of 2., etc.
As starting values q(0), the parameters q* obtained by approximation of the deadbeat controller are used, see section 7.4. The control performance is calculated by applying Parseval's equation (5.4.5) in the z-domain, provided the control loop of the model is asymptotically stable. Otherwise, a simulation of the required signals in the time domain is performed. This rather generally applicable method converges after only a few samples.
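The Hooke-Jeeves search itself can be sketched as below; for brevity only the exploratory moves with mesh refinement are shown (the pattern-move acceleration is omitted), and the performance function is a stand-in with a known minimum, not the criterion (5.4.5):

```python
def hooke_jeeves(f, q, step=0.5, eps=1e-6):
    # exploratory moves along each coordinate; the mesh is halved
    # whenever no move improves the criterion
    q = list(q)
    fq = f(q)
    while step > eps:
        improved = False
        for i in range(len(q)):
            for delta in (step, -step):
                trial = q.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < fq:
                    q, fq = trial, ft
                    improved = True
                    break
        if not improved:
            step *= 0.5
    return q, fq

# stand-in control performance criterion with known minimum at q = (2, -1)
J = lambda q: (q[0] - 2.0) ** 2 + (q[1] + 1.0) ** 2
q_opt, J_opt = hooke_jeeves(J, [0.0, 0.0])
```

In the real-time arrangement described above, each exploratory step of such a search would be evaluated between two sampling instants rather than in one pass.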
Figure 26.11 Adaptive PID-controller RLS/PID with parameter optimization. Process VI with stochastic noise. Design parameters: m = 3; T_0 = 8 s; λ = 1; r = 0.08.
Figure 26.12 Adaptive PID-controller RLS/PID for a step change in the time constant T_1. Process VI. (m = 3; T_0 = 8 s; λ = 0.9; r = 0.05). Solid line: adaptive controller; dashed line: fixed controller.
Figure 26.13a-e. Control variable y(k) and manipulated variable u(k) for a stochastic noise signal. y(k) is drawn stepwise. a disturbed output signal y(k) = n(k) without control; b fixed controller MV3 (r = 0.01); c adaptive controller RML/MV3 (r = 0.01); d fixed controller DB(ν); e adaptive controller RML/DB(ν).
Figure 26.14a-e. Control variable y(k) and manipulated variable u(k) for: a change of the reference value w(k); b fixed controller MV3 (r = 0.025; D(z^-1) = 1); c adaptive controller RLS/MV3 (r = 0.025); d fixed controller DB(ν); e adaptive controller RLS/DB(ν).
Figure 26.15a, b. Estimates of the parameters â_i(k) and b̂_i(k) and gain factor K̂(k) for the adaptive controllers with deterministic disturbances according to Figure 26.14. a RML/MV3 (r = 0.025); b RLS/DB(ν).
26.6 Simulation examples 207
density; standard deviation 0.4 V). The preidentification now lasts longer (k = 39). The PID-controller adapts quickly and with good accuracy, also under stochastic noise. With changed time constant T_1 the adaptive PID-controller is adapted completely after two steps in the reference values, Figure 26.12. The fixed controller shows a distinctly less damped behaviour.
Chapter 31 will show the computational effort and the storage capacity required for the various adaptive controllers.
In order to further discuss the basic behaviour, signals are shown with different
adaptive controllers and processes. The adaptive controllers were programmed on
a process computer HP21MX and were operated together with processes which
were simulated on analog computers.
(Test process VII, see Appendix Vol. I). This process was disturbed at the input by
a reproducible coloured noise signal which was produced by a noise signal
generator. This corresponds to a noise signal filter
G_Pv(z) = n(z)/v(z) = D(z^-1)/A(z^-1) = (1 + 0.0500 z^-1 + 0.8000 z^-2) / (1 − 1.0360 z^-1 + 0.2636 z^-2)   (26.6.3)
Figure 26.14 shows the signals and Figure 26.15 the parameter estimates for step changes in the reference values. These and other simulations show:
- After closing the control loop, small changes in the manipulated variable are generated which lead to an approximate model. This model already provides a reasonable control performance after the first step of the reference value.
- After the second step in the command variable the parameter estimates differ only slightly from the exact values.
- After the second step in the reference value the performance of the adaptive controllers is about the same as for exactly tuned fixed controllers.
G_s(s) = 1 / [(1 + 5s)(1 − 2s)];   G_s(z) = (−0.0132 z^-1 − 0.0139 z^-2) / (1 − 2.1889 z^-1 + 1.1618 z^-2)   (T_0 = 0.5 s; unstable behaviour)
Figure 26.16 Control variable y(k), manipulated variable u(k) and reference value w(k) for various adaptive controllers (RLS/DB, RLS/DB2, RLS/DB1, RLS/MV3).
26.7.1 Preidentification
As the parameter-adaptive controllers are based on process parameter estimation, it has to be ensured that the parameter estimation method and the free parameters are chosen properly. For an unknown process it is therefore recommended to first perform a process identification. For stable processes this can be done in open loop, for unstable processes in closed loop with a fixed controller. To this end a test signal (e.g. a PRBS) is introduced, and after a sufficiently long identification time a model verification is performed. This shows how well the model agrees with the process, see e.g. [3.13, 3.18]. This includes the determination of an appropriate sampling time T_0, model order m̂ and deadtime d̂. Since process identification is an iterative procedure, this also holds for the initial phase of an adaptive controller.
(26.7.1)
where T_95 is the 95% settling time of the process step response. With respect to control, the sampling time should be as small as possible (exceptions: DB- and MV-controllers), while for parameter estimation the sampling time should be neither too small (numerical problems) nor too large (poor model accuracy). Hence suitable compromises have to be found, see also section 26.8.
Simulations and practical experience have shown that adaptive control is rather insensitive to the choice of the model order m̂ ≈ 3 if it lies within the range
(26.7.2)
For the selection of the forgetting factor λ the following holds, see also section 24.5:
- rapid process parameter changes: λ small
- model order m large: λ large
- large noise signal amplitudes: λ large.
For adaptive control the following has proved to be efficient:
- constant or very slowly time-varying processes: λ = 0.99
- slowly time-varying processes with stochastic disturbances: 0.95 ≤ λ ≤ 0.99
- stepwise reference variable changes: 0.85 ≤ λ ≤ 0.90.
The smaller values are valid for lower model orders (m = 1, 2) and the larger values for higher orders. For selftuning of controllers that are subsequently fixed, 0.95 ≤ λ ≤ 1 can be taken.
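These recommendations can be read as memory lengths: a sample j steps old enters the estimation with weight λ^j, so the effective memory of the estimator is roughly N ≈ 1/(1 − λ):

```python
def effective_memory(lam):
    # a sample j steps old carries weight lam**j;
    # the weights sum to 1/(1 - lam), the effective memory length
    return 1.0 / (1.0 - lam)

memory = {lam: effective_memory(lam) for lam in (0.99, 0.95, 0.90, 0.85)}
```

So λ = 0.99 corresponds to about 100 samples of memory (slow processes), while λ = 0.85 keeps only about 7 samples, matching the fast tracking needed for stepwise reference changes.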
The choice of the controller design parameters, e.g. r, Q etc., is described in the corresponding chapters on controller design.
A considerable advantage of parameter-adaptive controllers is that all free design parameters can be changed in on-line operation. The result of changing the design parameters can therefore be observed immediately. This holds especially for the final adjustment of the parameters λ, r and Q, which mostly cannot be specified exactly in advance.
Figure 26.17a, b. Signals u(k), w(k) and y(k).
These and perhaps other verifications and the appropriate actions allow an
automatic starting with a sufficiently good starting model.
Figure 26.18 Parameter-adaptive control loop with supervision and coordination level.
Low-pass filtering of both the signals u(k) and y(k) and the parameter estimates Θ̂(k) proved to be successful, resulting in an improved control performance and supervision.
b) Controller design
Problems might be caused e.g. by:
- cancellation of unstable poles (DB) or zeros (MV4), compare Table 26.1;
- sampling time T_0 too large or too small;
- wrong design parameters.
Therefore, for the corresponding controllers, e.g. the poles and zeros can be calculated before the controller synthesis.
c) Closed control loop
A possible malfunction is indicated e.g. by:
- the control difference e_w(k) = w(k) − y(k) increases monotonically;
- the actuator position stays at a limit;
- oscillating, unstable behaviour.
Should this behaviour be observed despite all supervision measures of a) and b), a fixed and robust back-up controller, which can be designed and stored already during the preidentification, can be used instead of the adaptive controller. Examples in [26.58, 26.59] show that a considerable improvement of the overall behaviour and robustness can be reached by applying these supervision measures, compare also [26.44, 26.60]. The additional computational effort amounts to about 15%.
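The switch-over logic of c) can be sketched as below; the window length, thresholds and controller laws are assumed values for illustration only:

```python
def supervise(e_history, window=5, limit=10.0):
    """Return True if the loop looks faulty: the control difference grows
    monotonically over the last `window` samples or exceeds `limit`."""
    recent = e_history[-window:]
    if len(recent) < window:
        return False
    monotone_growth = all(abs(recent[i + 1]) > abs(recent[i])
                          for i in range(window - 1))
    return monotone_growth or abs(recent[-1]) > limit

class ControllerSwitch:
    def __init__(self, adaptive, backup):
        self.adaptive, self.backup, self.use_backup = adaptive, backup, False
    def output(self, e_history):
        if supervise(e_history):
            self.use_backup = True      # latch onto the robust fixed controller
        ctl = self.backup if self.use_backup else self.adaptive
        return ctl(e_history[-1])

# assumed proportional control laws for illustration
sw = ControllerSwitch(adaptive=lambda e: 2.0 * e, backup=lambda e: 0.5 * e)
diverging = [0.1, 0.2, 0.4, 0.8, 1.6, 3.2]   # monotonically growing error
u = sw.output(diverging)
```

Once the monitor detects the monotonic growth it latches onto the back-up controller, mirroring the fixed fall-back controller stored during preidentification.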
26.9 Parameter-adaptive feedforward control 217
(26.9.3)
Figure 26.20a, b. Parameter-adaptive feedforward control for a low-pass process with order m = 3 and a disturbance filter with m = 2. a no feedforward control, v(k): steps; b parameter-adaptive feedforward control with RLS-DB.
Methods for selftuning and adaptive control systems are of special interest for application if the process shows a difficult behaviour and the control systems are more complex. If a complete (centralized) process model is used for parameter estimation, the principle of parameter-adaptive control can be extended to multivariable processes.
This section briefly describes the development of parameter-adaptive (centralized) control loops for multivariable systems. Extensions of the RLS/MV4 controller to multivariable systems have been made in [26.30, 26.32] using matrix polynomial models.
In [26.33, 26.34, 26.61] a variety of combinations is given. There the following process models are used, with p inputs and r outputs and stochastic noise signals:
- p-canonical model

A_ii(z^-1) y_i(z) = Σ_{j=1}^{p} B_ij(z^-1) z^-d_ij u_j(z) + Σ_{j=1}^{r} D_ij(z^-1) v_j(z),   i = 1, ..., r   (26.10.1)
- matrix polynomial model (section 18.1.5)
Figure 26.22 Parameter-adaptive control of the two-variable test process of Figure 26.21 for step changes of w_1(k) and w_2(k). RLS/MDB. T_0 = 4 s; m_1 = 3; m_2 = 5. Restricted input signals −2 ≤ u_i ≤ 2 for 0 ≤ k ≤ 20.
tuning is achieved in a short time with only small test signals. This also holds for processes which are disturbed more strongly. This method is useful not only for digital controllers but also for analogue controllers, provided small sampling times are chosen [26.16, 26.44]. The application as a selftuning controller is especially
26.11 Application of parameter-adaptive control algorithms 223
Figure 26.23 Parameter-adaptive control of the two-variable test process of Figure 26.21 for stochastic noise signals n_i(k). RELS/MMV1. T_0 = 4 s; m_1 = 3; m_2 = 5; R = 0.005 I; S = I. Restricted input signals −5 ≤ u_i ≤ 5 for 0 ≤ k ≤ 20.
As well as choosing appropriate control algorithms and tuning them to the process, several other aspects must be considered in order to obtain good control with digital computers. Amplitude quantization in the A/D converter, in the central processing unit and in the D/A converter is discussed in chapter 27 with regard to the resulting effects and required word length. Another requirement is suitable filtering of disturbances which cannot be reduced by the control algorithms. Therefore the filtering of high and medium frequency signals with analog and digital filters is considered in chapter 28. The combination of control algorithms with various actuators is treated in chapter 29. The linearization of constant-speed actuators and the problem of windup are both considered there. Chapter 30 deals with the computer-aided design of control algorithms based on process identification. Case studies are presented for the digital control of a superheater and a heat exchanger. Then the application to the digital control of a rotary drier is shown.
In the last chapter, adaptive control with microcomputers and process computers is described, and applications are shown for the digital adaptive control of an air heater, an air conditioning unit and a pH process.
27 The Influence of Amplitude Quantization
on Digital Control
In the previous chapters the treatment of digital control systems was based on sampled, i.e. discrete-time, signals only. Any amplitude quantization was assumed to be so fine that the amplitudes could be considered quasi-continuous. This assumption is justified for large signal changes in current process computers. However, for small signal changes and for digital controllers with small word lengths, the resulting effects have to be considered and compared with the continuous case.
The quantization units of an ADC with WL = 7 ... 15 bits are shown in Table 27.1. Two examples are given as illustration:
If the largest numerical value is the voltage 10 V = 10 000 mV, for word lengths of 7 ... 15 bits the smallest representable unit is Δ = 78.7 ... 0.305 mV. If a temperature range of 100 °C is considered, this gives Δ = 0.787 ... 0.003 °C.
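The quantization unit follows as Δ = (largest value)/(2^WL − 1), which reproduces both illustrations:

```python
def quantization_unit(full_range, word_length):
    # smallest representable unit for word_length bits (no sign bit)
    return full_range / (2 ** word_length - 1)

delta_7     = quantization_unit(10000.0, 7)    # mV, for a 10 V range
delta_15    = quantization_unit(10000.0, 15)   # mV
delta_temp7 = quantization_unit(100.0, 7)      # degrees C, for a 100 C range
```

Each additional bit roughly halves Δ, which is why the error standard deviations discussed later shrink so quickly with increasing word length.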
Table 27.1 Quantization units as functions of the word length with no sign bit
For fixed point representation the quantization units shown for the ADC hold if
8 bits or 16 bits word length CPUs are used. The quantization can be decreased by
the use of double length working.
In the case of floating point representation, two or more words are often used for process computers with 16 bit word length. The floating point number
(27.1.9)
for example can be represented using two words of 16 bits each, with 7 bits for the exponent E (point after the lowest digit) and 23 bits for the mantissa M (point after the largest digit), within a numerical range of
−0.8388608 · 2^-128 ≤ L ≤ 0.8388607 · 2^127
−0.24651902 · 10^-39 ≤ L ≤ 0.14272476 · 10^39 .
3. Analog Output
With analog-controlled actuators the quantized manipulated variable u_Q(k) is transferred to a digital/analog converter (DAC) followed by a holding element. The quantization unit of the DAC depends on its word length. As shown in Figure 27.1, the DAC introduces a further nonlinear multiple-point characteristic.
The above discussions have shown the various places where nonlinearities arise. As it is already hard to treat theoretically the effect of a single nonlinearity on the dynamic and static behaviour of a control loop, the combined effects of all the quantizations are difficult to analyze. The known publications assume either statistically uniformly distributed quantization errors or a maximal possible quantization error (worst case), [27.1] to [27.6], [2.17]. The method of describing functions [5.14], [2.19] and the direct method of Ljapunov [5.17] can be used to analyze stability. Simulation is probably the only feasible way to investigate several quantizations together with nontrivial processes and control algorithms, see for example [27.3].
The following sections consider the effects of quantization using simple examples. The principal causes can be summarized as follows:
- quantization of variables (rounding of the controlled or manipulated variables in the ADC, DAC or CPU)
- quantization of coefficients (rounding of the controller parameters)
- quantization of intermediate results in the control algorithm (rounding of products, Eq. (27.1.8)).
27.2 Various Quantization Effects 231
b) The control loop does not return to the zero steady-state position, as offsets occur:
lim_{k→∞} e(k) ≠ 0 .
Quantization noise
If a variable changes stochastically such that different quantization levels are crossed, it can be assumed that the quantization errors δ(k) are statistically independent. As the δ(k) can attain all values within their definition intervals (27.1.5) and (27.1.6), a uniform distribution can be assumed, Figure 27.2. The digitized signal y_Q then consists of the analog signal value y and a superimposed noise value δ. Eq.
Figure 27.2 Probability density of the quantization error for a rounding; b truncation.
232 27 The Influence of Amplitude Quantization on Digital Control
(27.1.4) gives
y_Q(k) = y(k) − δ(k) .   (27.2.1)
The expectation of the quantization noise then becomes
E{δ(k)} = ∫_{-∞}^{+∞} δ p(δ) dδ = 0
with the variance σ_δ^2 = E{δ^2(k)} = Δ^2/12 .
If this white quantization noise is generated in the ADC it acts as a white noise n(k)
on the controlled variable, and its variance cannot be decreased by any control.
This leads to undesirable changes of the manipulated variable which can be larger
than one quantization unit of the DAC, as shown by the next example.
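The uniform-distribution assumption, and with it σ_δ^2 = Δ^2/12 for rounding, can be checked by a small Monte Carlo experiment (the signal model here is an arbitrary assumption):

```python
import random

random.seed(1)
DELTA = 0.01                       # quantization unit
errors = []
for _ in range(200000):
    y = random.uniform(0.0, 1.0)   # assumed stochastic signal value
    y_q = round(y / DELTA) * DELTA # rounding to the nearest level
    errors.append(y - y_q)

mean = sum(errors) / len(errors)
var = sum(e * e for e in errors) / len(errors)
```

The empirical mean is close to zero and the empirical variance close to Δ²/12 ≈ 8.3·10⁻⁶, as the uniform-distribution model predicts.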
Example 27.1. Effect of the ADC quantization error on the manipulated variable.
The process is assumed to have low-pass behaviour. Parameter-optimized controllers then tend to have PD-behaviour, chapter 13. With e(k) = −y(k) the control algorithm becomes
u(k) = −q_0 y(k) − q_1 y(k − 1) .
If u_δ(k) is filtered by the low-pass process such that the resulting output component y_δ(k) ≈ 0, the variance of the moving-average signal process u_δ(k) is
σ_uδ^2 ≈ [q_0^2 + q_1^2] σ_δ^2 .
With the controller parameters q_0 = 3 and q_1 = −1.5 the standard deviation follows from Eq. (27.2.4).
In the ADC the measured analog signal y(k) is rounded to the second place after the decimal point, resulting in y_Q(k). The response of the signals without and with rounding to a reference value step w(k) = 1(k), with the initial conditions y(k) = 0 and u(k) = 0 for k < 0 and the gain q_0 = 1.3, is shown in Table 27.2.
The quantization unit is Δ = 0.01. The rounded controlled variable stops at y_Q = 0.56. This results in an offset of Δy = 0.003, which is negligible.
These examples have shown that the resulting amplitudes of quantization noise, offsets or limit cycles in the controlled variable are at least one quantization unit of the ADC. Limit cycles arise particularly with strongly acting control algorithms. They can disappear if the controller gain is reduced. The simplest way to investigate these quantization effects is by simulation. This is true particularly if quantizations occur at more than one location.
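Such a simulation needs only a few lines. The sketch below quantizes the controlled variable in the ADC of an assumed first-order loop (all numbers are illustrative) and shows that a small limit cycle around the ideal steady state 2/3 persists:

```python
import math

def adc(y, delta=0.01):
    # round to the nearest quantization unit (half up)
    return math.floor(y / delta + 0.5) * delta

# assumed first-order process y(k+1) = 0.9 y(k) + 0.1 u(k),
# P-controller u(k) = q0 (w - y_Q(k)) with q0 = 2
w, q0 = 1.0, 2.0
y = 0.0
trace = []
for k in range(400):
    u = q0 * (w - adc(y))
    y = 0.9 * y + 0.1 * u
    trace.append(y)

tail = trace[-30:]
spread = max(tail) - min(tail)   # > 0: a small limit cycle persists
```

Without the quantizer the loop would settle exactly at y = 2/3; with it, no steady state compatible with the rounded measurement exists, so y keeps oscillating within a few quantization units.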
Table 27.2 (columns: k; u(k) and y(k) without rounding; u(k) and y(k) with rounding; y_Q(k))
0 1.3000 0 1.3000 0
1 0.6015 0.5373 0.5980 0.5373 0.54
2 0.5670 0.5638 0.5720 0.5640 0.56
3 0.5653 0.5651 0.5720 0.5649 0.56
4 0.5652 0.5652 0.5720 0.5649 0.56
5 0.5652 0.5649 0.56
Table 27.3 (columns: k; u(k) and y(k) without rounding; u(k) and y(k) with rounding; y_Q(k))
0 2.0000 0 2.0000 0 0
1 0.3468 0.8266 0.3400 0.8266 0.83
2 0.7434 0.6283 0.7400 0.6254 0.63
3 0.6482 0.6759 0.6600 0.6727 0.67
4 0.6711 0.6644 0.6600 0.6675 0.67
5 0.6656 0.6672 0.6800 0.6644 0.66
6 0.6669 0.6665 0.6600 0.6708 0.67
7 0.6667 0.6600 0.6663 0.67
8 0.6600 0.6664 0.67
9 0.6800 0.6637 0.66
10 0.6600 0.6705 0.67
11 0.6600 0.6661 0.67
12 0.6800 0.6636 0.66
13 0.6600 0.6703 0.67
14 0.6600 0.6661 0.67
15 0.6800 0.6636 0.66
Figure 27.3 Response of the controlled variable y and the manipulated variable u for quantization in the ADC and DAC with quantization units Δy = 0.1 and Δu = 0.1. Third-order process (test process VI). P-controller: q_0 = 4; T_0 = 4 s.
The describing function or the direct method of Ljapunov can be used in stability investigations for the detection of limit cycles. To determine the describing function of a multiple-point characteristic, for example, two three-point characteristics can be connected in parallel to obtain a five-point characteristic, etc. [5.14, chapter 52]. A limit cycle results if there is an intersection of the negative inverse locus −1/G(iω) of the remaining linear loop with the describing function.
To apply Ljapunov's method it is assumed that the linear open loop of Figure 27.1 with the transfer function y(z)/(y_Q(z))_AD can be described by

x(k + 1) = A x(k) + b (y_Q(k))_AD
y(k) = c^T x(k) .   (27.2.5)
As in Eq. (27.2.1), only the response to the superimposed quantization error δ_y as input is considered, and the stability of
x(k + 1) = A x(k) + b δ_y(k);   |δ_y|_max = Δ/2   (27.2.6)
is analyzed. Further details are given in [5.17, chapter 12]. After defining a Ljapunov function
V(k) = x^T(k) V x(k);   A^T V A − V = −I
the maximal possible errors Δy in the output can be obtained as a function of the quantization error Δ.
If the rounding errors δ_q and δ_e are statistically independent and have variance σ_δ^2 = Δ^2/12, the product error obtained by rounding the factors is
σ_r^2 ≈ (Q^2 + E^2) σ_δ^2 .   (27.2.9)
(27.2.10)
with variance σ_QE^2 = σ_δ^2. Hence, for statistically independent δ_q, δ_e and δ_QE, the variance of the overall error becomes
(27.2.11)
This shows that with increasing values of the factors q and e, the factor rounding mainly determines the overall error.
The quantization effects differ according to whether the rounding is performed for each product or for their sum. If each product is rounded, the resulting error in the manipulated variable for the control algorithm (27.1.8), with quantization errors δ_pui and δ_qei of the products p_i u(k − i) and q_i e(k − i), becomes
The estimation of the resulting error depends largely on the assumptions about the mutual dependence of the quantization errors. For stochastic signals they can be assumed to be statistically independent. Then the variance of δ_u is

σ_δu^2 = Σ_{i=1}^{μ} σ_δpui^2 + Σ_{i=0}^{ν} σ_δqei^2 .   (27.2.13)
This increases with the number of products. A statistical analysis for control loops with quantization in the ADC and rounding of products has been made in [27.3]. The resulting error standard deviations in the output signal decrease by a factor of 3 per 1 bit increase of the word length. They also depend on the programming; see also [27.1], [27.2]. A simple example shows the effect for a deterministic input signal.
Example 27.5 Limit cycle due to quantization of the products in the control algorithm
The same control loop is assumed as in Example 27.2. The factors and the product in the control algorithm are rounded to the second decimal place, such that the quantization unit is Δ = 0.01. The results are shown in Table 27.4. A limit cycle with period M = 3 arises, as with quantization in the ADC. The amplitude is also about the same: |Δy| ≈ 0.0034 and |Δu| = 0.01, although there is only one product.
Dead band
If the parameters of feedforward control algorithms or digital filters lie within certain ranges, offsets in the output variable can arise by product rounding, which are multiples of the quantization units of the products.
Table 27.4 Signal values without and with rounding of the products (Example 27.5)

k    u(k)     y(k)      u_Q(k)   y_Q(k)
0    2.0000   0         2.00     0
1    0.3468   0.8266    0.34     0.8266
2    0.7434   0.6283    0.74     0.6255
3    0.6482   0.6759    0.66     0.6728
4    0.6711   0.6644    0.66     0.6675
5    0.6656   0.6672    0.68     0.6644
6    0.6669   0.6665    0.66     0.6708
7    0.6667   -         0.66     0.6664
8    -        -         0.68     0.6637
9    -        -         0.66     0.6705
10   -        -         0.66     0.6661
11   -        -         0.68     0.6636
12   -        -         0.66     0.6704
13   -        -         0.66     0.6661
14   -        -         0.68     0.6636
15   -        -         0.66     0.6704
Example 27.6
Consider a first order feedforward control algorithm
u(k + 1) = −a1 u(k) + b1 v(k)
with a1 = −0.9 and b1 = 0.1. As the gain is K = b1/(1 + a1) = 1, in the ideal case y(∞) = 1 is
attained for v(k) = 1. With rounding of the products to the second decimal place (i.e.
Δ = 0.01) one obtains, for various initial values u(0), the final values of u(k) given in
Table 27.5.
Depending on the initial values, the following final values are attained:
for k ≥ 1, all initial values 0.9639 ≤ u(0) ≤ 1.0449 give a nearby rounded steady-state value
within the range 0.97 ≤ u_Q ≤ 1.04. The region 0.96 ≤ u_Q ≤ 1.05 is called a dead band [27.6];
it lies around the steady-state value for a constant process input. If, starting with
u(0) = 0.96, the input v(k) = 0 is applied, the signal u(k) approaches u(k) = 0.05 for k ≥ 24.
The dead band always lies around the new steady state.
Table 27.5 Effects of rounding the product of a feedforward control algorithm for
v(k) = 1 and various initial values u(O).
The dead band for a first-order difference equation can be approximately calculated
as follows. It is assumed that in the difference equation of Example 27.6 the
signals u(k) and v(k) and the parameters a1 and b1 are already rounded according
to the quantization unit Δ.
Considering only the rounding of the product a1 u(k) yields

u(k + 1) = −[a1 u(k)]_Q + b1 v(k) = −a1 u(k) − δau(k) + b1 v(k)   (27.2.14)

and in the steady state with v(∞) = v

u_Q(1 + a1) − b1 v = −δau .   (27.2.15)

Since |δau| ≤ Δ/2,

|u_Q(1 + a1) − b1 v| ≤ Δ/2 .   (27.2.16)

For Example 27.6 this leads, with b1 = 1 + a1, to

|u_Q − v| ≤ Δ/(2|1 + a1|) = 0.05 .   (27.2.17)
The same relation results for the rounding of the product b1 v(k). For several
rounding errors, either the worst case has to be assumed, i.e. several
δᵢ = Δ/2 superimposed additively, or statistical assumptions have to be made for the single
errors.
(27.2.17) also indicates the range of the parameter a1 for which, for processes
with K = 1, dead bands of width |u_Q − v| = jΔ, j = 1, 2, 3, ... occur.
The following is valid:

|1 + a1| ≤ 1/(2j) .   (27.2.18)

a1 ≤ −0.5; −0.75; −0.83; −0.9. The closer the pole z1 = −a1 gets to the stability
limit, the larger the dead band becomes.
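The dead band of Example 27.6 can be reproduced numerically. The following sketch (our Python illustration, not from the book) iterates u(k + 1) = −[a1 u(k)]_Q + b1 v(k) for v(k) = 1 in integer multiples of Δ = 0.01, rounding the product half away from zero:

```python
# Dead band of Example 27.6: u(k+1) = -[a1*u(k)]_Q + b1*v(k)
# with a1 = -0.9, b1 = 0.1, v(k) = 1 and product rounding to D = 0.01.
# All positions are kept as integer multiples of D (hundredths), so the
# rounding is exact: [0.9*u]_Q in hundredths is round-half-up of 9*U/10.

def step(U):
    """One iteration in hundredths: U_next = [0.9*u]_Q + b1*v (= +10)."""
    product_q = (9 * U + 5) // 10      # round 0.9*U to the nearest hundredth
    return product_q + 10              # + b1*v = 0.1 = 10 hundredths

def final_value(u0, steps=200):
    U = round(u0 * 100)
    for _ in range(steps):
        U = step(U)
    return U / 100

if __name__ == "__main__":
    for u0 in (0.0, 0.9639, 1.0, 1.0449, 2.0):
        print(f"u(0) = {u0:6.4f}  ->  u(inf) = {final_value(u0):4.2f}")
```

Every trajectory ends inside 0.96 ≤ u_Q ≤ 1.05, the dead band stated in Example 27.6; its half width agrees with Δ/(2|1 + a1|) = 0.05 from (27.2.17).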
The same difference equation according to Example 27.6 used as the controlled process,
with a P-controller of gain q0 = 2 as feedback and with roundings according to
Example 27.4 with Δ = 0.01, leads to a limit cycle of period M = 3 and amplitudes
|Δu| = 0.005 and |Δy| ≈ 0.0005. Because of the existing feedback, a dead band
covering several quantization units does not occur. A dead band can sometimes be
avoided by superimposing a small noise signal on the input signal (dithering)
[27.7].
Based on these examples, the following conclusions can be drawn as to how
undesired quantization effects in digital control loops can be avoided:
1. The word lengths of the ADC and DAC and the numerical range of the CPU
must be sufficiently large and coordinated.
2. The word lengths or dynamic range at all quantization locations must be
utilized as much as possible, by appropriate scaling of variables. The word
length of the ADC should be chosen such that its quantization error is smaller
than the static and dynamic errors of the sensors. A word length of 10 bits
(resolution 0.1 %) is usually sufficient. The word length of the DAC must be
coordinated with that of the ADC. For digital control it can be chosen such that
one quantization unit of the manipulated variable results, after transfer through
the process, in about one quantization unit at the ADC.
3. To avoid excessive quantization errors of factors and products, the CPU word
length for fixed point calculations must be significantly larger than that of the
ADC (for example double word).
4. If limit cycles arise for a given digital controller, the controller parameters
should be modified to give a weaker controller action (detuned).
5. For feedforward control algorithms and digital filters one must take care of the
dead band effect around the steady state.
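The word-length recommendation in rule 2 can be made concrete with a small helper (an illustrative sketch; the function name is ours, not the book's):

```python
import math

def adc_bits_for_resolution(resolution):
    """Smallest word length n whose quantization step 2**-n does not exceed
    the required resolution (given as a fraction of the measuring range)."""
    return math.ceil(math.log2(1.0 / resolution))

if __name__ == "__main__":
    # Rule 2: a resolution of 0.1 % of the range calls for about 10 bits,
    # since 2**-10 = 1/1024 < 0.001.
    print(adc_bits_for_resolution(0.001))   # -> 10
```

The same relation, applied in reverse, shows why a CPU with double-length fixed-point words (rule 3) keeps the product-rounding errors well below the ADC quantization unit.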
28 Filtering of Disturbances
Some control systems and many measurement techniques require the determination
of signals which are contaminated by noise. It is assumed that a signal s(t) is
contaminated additively by noise n(t) and only
y(t) = s(t) + n(t)
is measurable.
If the frequency spectra of the signal and the noise lie in different ranges, they can
be separated by suitable bandpass filters, Figure 28.1. If, however, the frequency
spectra lie in the same frequency range, estimation methods have to be used to
determine the signal. In this case it is not possible to determine the signal without
error. The influence of the noise can only be minimized. This was discussed in
chapter 22.
This chapter considers bandpass filtering with regard to applications in
control systems. In section 28.1 the noise sources and noise spectra which usually
contaminate control systems are treated. Various filters and their application
are then described: analog filters in section 28.2 and digital filters in section 28.3.
The graph of the dynamic control factor |R(z)|, Figure 11.5, shows that high
frequency disturbances n(k) with frequencies in range III, ω > ω_III, where |R(z)| ≈ 1,
cannot be influenced by the control system. They only cause undesired actuator
changes and should therefore be eliminated by suitable filters. High frequency noise
signals in general consist of the following components:
28.1 Noise Sources and Noise Spectra
Noise spectra
Analog and digital filtering can avoid or diminish the various noncontrollable high
frequency noise signals, provided the signals required for the control are not
affected. In order to design these filters, first the frequency spectra generated
by noise signals and sampling have to be considered.
The continuous measurement signal is described by

y(t) = s(t) + n(t)   (28.1.1)

with s(t) the undisturbed signal and n(t) the noise. y(t) is sampled with the sample time
T0, i.e. with the sample frequency ω0 = 2π/T0. Through this sampling the Fourier transform of
a deterministic signal becomes periodic in ω0,

y*(iω) = y*(i(ω + νω0))   ν = 0, 1, 2, ...   (28.1.2)

cf. chapter 3. The power density spectrum of a stochastic signal is also periodic,

S*yy(ω) = S*yy(ω + νω0) .   (28.1.3)

As well as the basic spectrum (ν = 0), side spectra (side bands) at distance ω0
appear for ν = ±1, ±2, .... These are shown in Figure 28.2 a) for the signal s(t)
and in b) for the noise n(t) for ν = +1.
If ωmax is the maximum frequency of the signal which is of interest for control
then, if ωmax > ω0/2, the basic and side spectra overlap. The continuous signal
Figure 28.2a-d Power density spectra S(ω) for the signal s(k), the noise n(k) and their
low-pass filtering: a) signal; b) noise; c) low-pass filter; d) filtered signals. ω0: sample
frequency; ωs = ω0/2: Nyquist frequency.
cannot then be reconstituted without error using ideal bandpass filters. To reconstitute
a limited frequency spectrum ω ≤ ωmax from the sampled signal, Shannon's
sampling theorem states that ωmax < ω0/2 = ωs must be satisfied, cf. chapter 3.
Hence,

T0 < π/ωmax .   (28.1.4)
If high frequency noise n(t) with Snn(ω) ≠ 0 for ω > ωs is contained in the
measured signal y(t), side spectra Snn(ω + νω0) are generated which are superimposed
on the basic spectrum Snn(ω), forming S*nn(ω), see Figure 28.2 b). High
frequency noise with (angular) frequency ωs < ω1 < ω0 generates after sampling at
ω0 a low frequency component with frequency

ω2 = ω0 − ω1   (28.1.5)
with the same amplitude. To illustrate this so-called aliasing effect, Figure 28.3
shows a sinusoidal oscillation with period Tp = 12T0/10.5 and therefore
28.2 Analog Filtering
Figure 28.3 The aliasing effect: generation of a low frequency signal n(k) with frequency ω2
by sampling a high frequency signal n(t) with frequency ω1 > ω0/2 at sampling
frequency ω0 > ω1.
ω1 = 2π/Tp = 14π/8T0 with sampling frequency ω0 = 2π/T0 = 16π/8T0. This results
in a low frequency component with the same amplitude and with frequency
ω2 = ω0 − ω1 = 2π/8T0 [28.2]. Noise components with ω1 ≈ ω0 therefore generate
very low frequency noise ω2. This is the reason why high frequency noise components with
significant spectral densities for ω > ωs = π/T0 have to be filtered before they are
sampled. This is shown in Figure 28.2 c) and d). Analog filters are effective for this
purpose.
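The numbers of Figure 28.3 can be reproduced directly. The following sketch (our illustration, using the frequencies ω1 = 14π/8T0 and ω0 = 16π/8T0 from the text) shows that the samples of the high frequency sinusoid coincide, except for sign, with those of a sinusoid at the alias frequency ω2 = ω0 − ω1 = 2π/8T0:

```python
import math

T0 = 1.0                      # sample time (arbitrary units)
w1 = 14 * math.pi / (8 * T0)  # high frequency noise, ws < w1 < w0
w0 = 16 * math.pi / (8 * T0)  # sampling frequency 2*pi/T0
w2 = w0 - w1                  # alias frequency 2*pi/(8*T0)

# Sample both sinusoids at t = k*T0:
hi = [math.sin(w1 * k * T0) for k in range(16)]
lo = [math.sin(w2 * k * T0) for k in range(16)]

# sin(w1*k*T0) = sin(2*pi*k - w2*k*T0) = -sin(w2*k*T0): after sampling,
# the high frequency component is indistinguishable from a low frequency
# one of the same amplitude.
for a, b in zip(hi, lo):
    assert abs(a + b) < 1e-12
print("high-frequency samples alias onto -sin(w2*k*T0)")
```

This is exactly why such components must be removed by an analog filter before the sampler; no digital processing after sampling can separate them again.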
G_F(iω) = 1/(1 + iωT)ⁿ .   (28.2.2)
In this representation the time constant T changes with the order n. Higher order
low-pass filters with n ≥ 2 can be designed differently; compromises must then be
made between a flat pass-band, a sharp cut-off and a small overshoot of the
resulting step response, cf. Figure 28.4. Special low-pass filters of this kind are,
for example, the Butterworth, Bessel and Tschebyscheff filters compared in Figure 28.4.
Figure 28.4 Frequency response magnitudes of various low-pass filters of order n = 4
[28.4]. 1 simple low-pass due to (28.2.2); 2 Butterworth low-pass; 3 Bessel low-pass;
4 Tschebyscheff low-pass (±1.5 dB pass-band oscillations).
As analog filters for cut-off frequencies fg < 0.1 Hz become expensive, such low
frequency noise should be filtered by digital methods. This section first considers
digital low-pass filters. Then digital high-pass filters and some special digital filtering
algorithms are reviewed.
It is assumed that the sampled signal s(k) is contaminated by noise n(k), so that

y(k) = s(k) + n(k) .   (28.3.2)
The discrete first order low-pass filter has the z-transfer function

G_F1(z) = b'0/(1 + a1 z⁻¹) .   (28.3.6)

The frequency response of the first order low-pass filter, using z = e^(T0 iω), is

G_F1(iω) = b'0/(1 + a1 e^(−iωT0))
         = b'0[(1 + a1 cos ωT0) + i a1 sin ωT0] / [(1 + a1 cos ωT0)² + (a1 sin ωT0)²] .   (28.3.7)
This gives the magnitude |G_F1(iω)| shown in Figure 28.5.
Figure 28.5 Frequency response magnitude of first order low-pass filters:
—— discrete filter; – – – continuous filter.
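Equations (28.3.6) and (28.3.7) can be checked by simulation. In the sketch below (our Python illustration; a1 = −0.9 and b'0 = 0.1 are example values giving a DC gain of 1), the recursion belonging to (28.3.6) is driven with a sinusoid and the measured steady-state amplitude is compared with the magnitude of the frequency response:

```python
import cmath, math

a1, b0 = -0.9, 0.1       # example coefficients: DC gain b0/(1 + a1) = 1
T0 = 1.0
w = 0.3                  # test frequency in rad per sample time

# Frequency response (28.3.7): G(iw) = b0 / (1 + a1*exp(-i*w*T0))
mag = abs(b0 / (1 + a1 * cmath.exp(-1j * w * T0)))

# Difference equation belonging to (28.3.6): y(k) = -a1*y(k-1) + b0*u(k)
y, out = 0.0, []
for k in range(3000):
    y = -a1 * y + b0 * math.sin(w * k * T0)
    out.append(y)

amplitude = max(abs(v) for v in out[2000:])   # steady-state output amplitude
print(f"|G| = {mag:.4f}, simulated amplitude = {amplitude:.4f}")
```

The two numbers agree closely: the filter passes DC with gain 1 but attenuates the test frequency to about one third, as Figure 28.5 indicates.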
For noise filtering in control systems at frequencies fg < 0.1 Hz, digital low-pass
filters should be applied. They can filter the noise in the range ωg < ω < ωs.
Noise with ω > ωs must be reduced with analog filters. The design of the digital
filter, of course, depends much on the further application of the signals. In the case
of noise pre-filtering for digital control, the location of the Nyquist frequency
ωs = π/T0 within the graph of the dynamic control factor |R(z)|, section 11.4, is
crucial, cf. Figure 28.6. If ωs lies within range III, for which a reduction of the noise
Figure 28.6a, b Location of the Nyquist frequency ωs = π/T0 within the dynamic control
factor. a) ωs in range III, small sample time; b) ωs in range II, large sample time.
In the high frequency range |G_F(iω)| = 0 for ωT0 = νπ, with ν = 2, 4, .... For low
frequencies the behaviour is the same as that of the continuous filter.
Recursive averaging
For some tasks only the current average value of the signals is of interest, i.e. the
very low frequency component. An example is the d.c. value estimation in recursive
parameter estimation, chapter 24. The following algorithms can be applied.
a) Averaging with infinite memory
It is assumed that a constant value s is superimposed on the noise n(k) with
E {n(k)} = 0 and the measured signal is given by
y(k) = s + n(k) . (28.3.14)
The least squares method with the loss function

V = Σ_{k=1}^{N} e²(k) = Σ_{k=1}^{N} [y(k) − ŝ]²   (28.3.15)

yields with dV/dŝ = 0 the well-known estimate

ŝ(N) = (1/N) Σ_{k=1}^{N} y(k) .   (28.3.16)
The corresponding recursive estimate results from subtraction of ŝ(N − 1):

ŝ(N) = ŝ(N − 1) + (1/N)[y(N) − ŝ(N − 1)] .   (28.3.17)
This algorithm is suitable for a constant s. With increasing k the errors e(k), and
therefore the new measurements, are weighted less and less. However, if s(k)
is slowly time-variant and the current average is to be estimated, other
algorithms should be used.
b) Averaging with a constant correcting factor
If the correcting factor is frozen by setting k = k1, the new measurements y(k)
always give equally weighted contributions,

ŝ(k) = ŝ(k − 1) + (1/k1)[y(k) − ŝ(k − 1)] .   (28.3.18)

With a1 = −(k1 − 1)/k1 and b0 = 1/k1, this algorithm is the same as the
discrete first order low-pass filter, Eq. (28.3.6); the corresponding z-transfer
function is (28.3.19).
c) Averaging with limited memory
ŝ(k) = (1/N) Σ_{i=k−N+1}^{k} y(i) .   (28.3.20)
The corresponding average with fading memory is

ŝ(N) = (1 − λ) Σ_{k=1}^{N} λ^(N−k) y(k) = (1 − λ) { Σ_{k=1}^{N−1} λ^(N−k) y(k) + y(N) } .   (28.3.24)

Subtraction of

ŝ(N − 1) = (1 − λ) Σ_{k=1}^{N−1} λ^(N−1−k) y(k)   (28.3.25)

yields the recursion

ŝ(N) = λ ŝ(N − 1) + (1 − λ) y(N) .
Figure 28.7 Magnitudes of the frequency response of various recursive algorithms for
averaging of slowly time-varying signals. 1 frozen correcting factor k1 = 20 and fading
memory λ = 0.95; 2 limited memory N = 20.
This algorithm has the same form as that for the frozen correcting factor (28.3.18) and
therefore the z-transfer function as in (28.3.19), with a1 = −λ and b0 = (1 − λ).
Hence k1 = 1/(1 − λ) holds.
As these averaging algorithms track low frequency components of s(k) and
eliminate high frequency components n(k), they can be considered as special
low-pass filters. Their frequency responses are shown in Figure 28.7.
The recursive algorithm with a frozen correcting factor or fading memory has the
same frequency response as a discrete low-pass filter with T0/T = ln(−1/a1).
Noise with ωT0 > π cannot be filtered; there the averaging even increases the
variance.
The frequency response of the recursive algorithm with limited memory becomes
zero at ωT0 = νπ/N with ν = 2, 4, 6, .... Noise with these frequencies is eliminated
completely, as with the integrating A/D converter. The frequency response magnitude
has maxima at ωT0 = νπ/N with ν = 1, 3, 5, .... Therefore noise with
ωT0 > 2π/N cannot be filtered effectively.
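Two of the relations above are easy to verify numerically: the recursion (28.3.17) reproduces the arithmetic mean exactly, and the frozen correcting factor with k1 = 20 is identical to fading memory with λ = 1 − 1/k1 = 0.95 (our Python sketch; the data sequence is arbitrary):

```python
# Three recursive averaging schemes applied to the same data sequence.
y = [1.0, 2.0, 1.5, 0.5, 2.5, 1.0, 3.0, 0.0, 1.5, 2.0]

# a) Infinite memory (28.3.17): s(k) = s(k-1) + (1/k)*[y(k) - s(k-1)]
s_inf = 0.0
for k, yk in enumerate(y, start=1):
    s_inf += (yk - s_inf) / k
# s_inf now equals the arithmetic mean sum(y)/len(y)

# b) Frozen correcting factor (28.3.18) with k1 = 20
# c) Fading memory with lambda = 1 - 1/k1 = 0.95
k1, lam = 20, 0.95
s_frozen = s_fade = 0.0
for yk in y:
    s_frozen += (yk - s_frozen) / k1          # s + (1/k1)*(y - s)
    s_fade = lam * s_fade + (1 - lam) * yk    # lambda*s + (1-lambda)*y
# both recursions are algebraically identical, so s_frozen == s_fade

print(s_inf, s_frozen, s_fade)
```

Only the first scheme converges to a fixed value; the other two keep tracking, which is why they are preferred for slowly time-varying signals.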
Integrating A/D converters are filters for the specific elimination of certain periodic
noise signals with discrete frequencies.
Filtering of Outliers
Up to now, high frequency, quasistationary, stochastic noise signals have been
considered, which cannot be damped by a control algorithm. A similar problem arises
if a measured value contains a so-called "outlier". These measured values can
sometimes be totally wrong and lie far away from the normal values. They may be
caused by avoidable or unavoidable disturbances of the sensor or of the transmis-
sion line. As they do not correspond to a real control deviation they should be
ignored by the controller. Contrary to analog control systems, digital control
provides simple means to filter these types of disturbance.
It is assumed that the normal signal y(k) consists of the signal s(k) and the noise
n(k)
y(k) = s(k) + n(k) . (28.3.27)
In this signal the outliers are to be detected. The following methods can be used:
a) - estimation of the mean value ȳ = E{y(k)}
   - estimation of the variance σ²y = E{[y(k) − ȳ]²}
b) - estimation of the signal ŝ(k)
     (signal parameter estimation as in section 24.2.2, then Kalman filtering as in
     chapter 22)
   - estimation of the variance σ²s = E{[s(k) − ŝ]²}
c) - estimation of a parametric signal model as in section 24.2.2
   - prediction of ŷ(k|k − 1)
   - estimation of the variance σ²y.
Here only the simplest method a) is briefly described. Estimation of the mean value
can be performed by recursive averaging

ȳ(k + 1) = ȳ(k) + (1/(k + 1))[y(k + 1) − ȳ(k)]   (28.3.28)
compare (28.3.17), etc. For slowly time-varying signals an averaging with
a "frozen" correcting factor is recommended: instead of K(k) = 1/k one uses, better,
K = const. To detect outliers, knowledge of the probability density p(y) is required.
Then it can be assumed that measured signals with

|Δy(k + 1)| = |y(k + 1) − ȳ(k + 1)| > κ σy(k + 1)   (28.3.31)

are outliers, with for example κ ≈ 3 for a normal distribution p(y).
The outlying value y(k + 1) is then replaced by ŷ(k + 1) = ȳ(k + 1) and this estimate is used
for control.
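Method a) can be sketched as follows (our Python illustration; K = 0.05 and κ = 3 are example choices, and the test signal is constructed for demonstration). The recursive mean and variance estimates use a frozen correcting factor, and a sample deviating by more than κ·σ̂y is flagged and replaced by the current mean:

```python
import math

# Outlier filtering by recursive mean/variance estimation (method a).
K, kappa = 0.05, 3.0

# Test signal: constant s = 1 plus a small alternating disturbance,
# with a gross outlier injected at k = 50.
y = [1.0 + (0.01 if k % 2 == 0 else -0.01) for k in range(100)]
y[50] = 5.0

mean, var = y[0], 4e-4            # initial estimates
outliers, filtered = [], [y[0]]
for k in range(1, len(y)):
    dev = y[k] - mean
    if abs(dev) > kappa * math.sqrt(var):
        outliers.append(k)
        yk = mean                 # replace the outlier by the predicted value
    else:
        yk = y[k]
    mean += K * (yk - mean)                # frozen correcting factor
    var += K * ((yk - mean) ** 2 - var)    # recursive variance estimate
    filtered.append(yk)

print("outliers detected at k =", outliers)
```

Replacing the flagged sample by the mean keeps the outlier from corrupting the very estimates that are used to detect the next one.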
29 Combining Control Algorithms and Actuators
Actuator control
At the digital computer output the required manipulated variable, or its change, is
present only briefly as a digitized value. For controlling the actuator, this digital
manipulated variable has to be transformed into a matching signal by a
corresponding interface. Different control signals are required depending on the
type of actuator. One distinguishes, e.g., actuator feedforward control with
amplitude modulated, pulse width modulated and impulse number modulated
analog control signals, see Figure 29.1 and, for example, [5.33]. An absolute analog
manipulated variable U_R is generally required for proportional and integral
actuators with, e.g., pneumatic, hydraulic or electrical auxiliary energy. For older
process computers, often only one D/A converter was available for several
actuators, and one analog holding element had to be provided for each actuator.
Today a D/A converter, preceded by a data register as intermediate
storage, is usually assigned to each analog output, Figure 29.1. Corresponding
interface modules are available as integrated circuits. The analog manipulated
variable U_R is then transmitted as a d.c. voltage (e.g. 0...10 V) or as an
impressed current (e.g. 0...20 mA or 4...20 mA) to the corresponding actuator.
This is followed by analog signal conversion and power amplification, leading
to a quantity suitable for controlling the pneumatic, hydraulic or electrical drive.
Integral actuators with constant speed are controlled through incremental analog
signals ΔU_R in the form of pulse width modulated signals of a certain amplitude
Figure 29.1a-c Various schemes for the control of actuators. a absolute analog value
output with register and DAC; b incremental analog value output with register and counter
(pulse width modulated signal); c incremental analog value output with register and
impulse transformer (impulse number modulated analog signal).
with different signs. Figure 29.1b represents a frequently used scheme. A data
register holds the digital output of the computer. The following counter counts the
register value towards zero with a fixed clock frequency, in the positive or negative
direction. The analog output signal continues until the counter has reached
zero. In this case a DAC is not required.
Quantizing actuators, such as stepping motors, are controlled by brief pulses
and therefore need incremental analog signals ΔU_R at the input, in the form of pulse
number modulated signals of fixed amplitude and pulse duration, with
different signs. Figure 29.1c depicts a realization using a data register and an
impulse transformer. This is called "direct digital actuator control". In this
case, too, a DAC is not required.
Response of actuators
Table 29.1 summarizes some properties of frequently used actuators. Here, pneu-
matic, hydraulic and electrical actuators are considered because of their signifi-
cance in industrial applications. Because of the great variety available, only
a selection of types can be considered here.
Table 29.1 Properties of frequently used actuators
With respect to the dynamic response the following grouping can be made, see
Table 29.1:
Group I: Proportional actuators
- proportional behaviour with lags of first (or higher) order
- pneumatic or hydraulic actuators with mechanical feedback
Group II: Integral actuators with varying speed
- linear integral behaviour
- hydraulic actuators without feedback,
electrical actuators with speed controlled d.c. current motors
Group III: Integral actuators with constant speed
- nonlinear integral behaviour
- electrical actuators with a.c. motors and three-way switches
Group IV: Actuators with quantization
- integral or proportional behaviour
- electrical stepping motors
Within the control range
(29.1)
the actuators of groups I, II and IV behave approximately linearly. Feedback from
the actuator load, hysteresis effects, and dead zones may, however, also lead to
nonlinear behaviour. The actuators of group III generally show nonlinear behaviour
which, however, can be linearized for small signal changes, as will be shown
later.
Figure 29.2a-d. Various possibilities for actuator control (shown for an analog controlled
actuator). a feedforward position control; b analog feedback position control; c digital
feedback position control; d position feedback to the process algorithm.
(29.2)
Process and actuator have the same sampling time, uR(k) = u(k) is given to the
actuator.
Scheme a) is the simplest, but gives no feedback of the actuator response.
Therefore the agreement between the required and the effective position range can be lost
with time. Schemes b), c) and d) presume position feedback u_A(k). b) and c) have
the known advantages of a positioner, which acts such that the required position is
really attained [5.14]. c) in general requires a smaller sample time than
that of the process, which is an additional burden on the CPU. Scheme
d) avoids the use of a special position control algorithm. The calculation of u(k) is
based on the real positions of the actuator u_A(k − 1), u_A(k − 2), .... This is an
advantage for integral acting control algorithms if the lower or the upper position
constraint is reached: then no wind-up of the manipulated variable occurs.
Some special features of the actuators of groups I to IV will be briefly considered in the
following.
Proportional actuators
For the pneumatic and hydraulic proportional acting actuators (group I), the
change of the manipulated variable u(k) or u'(k) calculated by the process- or
position control algorithm can be used directly to control the actuator, scheme
a) in Figure 29.2. In the case of the actuator position control the schemes Figure
29.2 b) or d) are applicable. Figure 29.3 indicates the symbols used in Figure 29.2.
The integral actuator with transfer function G(s) = 1/(Ts) and with zero-order hold
results in the z-transfer function

G_SA(z) = u_A(z)/u(z) = (T0/T) · z⁻¹/(1 − z⁻¹) .   (29.4)
Figure 29.3 Symbols used in Figure 29.2: U_R change of the controller position; U_A change
of the actuator position; U_Amax, U_Amin limits of the position range.
Then control algorithm and actuator together yield the PID transfer function with
dead time d = 1

G_R(z) G_SA(z) = u_A(z)/e(z) = (T0/T) · [q0 + q1 z⁻¹ + q2 z⁻²] z⁻¹/(1 − z⁻¹) .   (29.5)
The actuator then becomes part of the controller. Its integration time T has to be
taken into account when determining the controller parameters. (Note that for the
mathematical treatment of the control loop no sampler follows the actuator.) This
method also avoids a wind-up when reaching the constraint.
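The combined transfer behaviour (29.5) can be illustrated by simulating the cascade (our Python sketch; the coefficients q0, q1, q2 are example values, not from the book). A constant control deviation then produces a ramp in u_A, i.e. the integral action contributed by the actuator:

```python
# Cascade (29.5): control algorithm u(k) = q0*e(k) + q1*e(k-1) + q2*e(k-2)
# followed by the integral actuator (29.4),
# uA(k) = uA(k-1) + (T0/T)*u(k-1).
q0, q1, q2 = 3.0, -4.0, 1.5   # example controller coefficients
T0, T = 1.0, 5.0              # sample time and actuator integration time

e1 = e2 = 0.0                 # e(k-1), e(k-2)
u_prev, uA = 0.0, 0.0
trajectory = []
for k in range(30):
    e = 1.0                   # constant control deviation
    uA += (T0 / T) * u_prev   # actuator integrates the delayed controller output
    u = q0 * e + q1 * e1 + q2 * e2
    e2, e1, u_prev = e1, e, u
    trajectory.append(uA)

# After the transient, u settles to (q0 + q1 + q2)*e = 0.5 and uA ramps
# with slope (T0/T)*0.5 = 0.1 per step: integral action with dead time d = 1.
print(trajectory[:5], "...", trajectory[-2:])
```

The slope of the ramp depends on T, which is why the actuator's integration time must enter the controller tuning, as stated above.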
Figure 29.4 Simplified block diagram of integral acting actuators with constant speed.
To generate the position changes u(k) calculated by the control algorithm, the
actuator control device has to produce pulses with amplitudes U_R0, 0, −U_R0 and the
switch duration T_A(k), i.e. pulse modulated ternary signals, see Figure 29.4. This
introduces a further nonlinearity. The smallest realizable switch duration T_A0
determines the quantization unit Δ_A of the actuator position,

Δ_A = (U_Amax/T_s) T_A0 .   (29.9)

It is recommended to choose this as the quantization unit of a corresponding DAC
for position changes, Δ_A = Δ_DA, i.e. about 6...8 bit. The smallest switch duration
must be large enough that the motor actually actuates. The required switch
duration T_A(k) follows, for the required position change from one sample point to
the next,

Δu(k) = u(k) − u(k − 1) = j(k) Δ_A ,  j = 1, 2, 3, ...

from (29.8) as

T_A(k) = j(k) T_A0 ,

which is, for example, transmitted as a pulse number j to the actuator control device.
The largest position change per sample time T0 is

Δu_Amax = (U_Amax/T_s) T0 .
Therefore position changes

|Δu(k)| ≤ Δu_Amax   (29.11)

with quantization unit Δ_A can be realized within one sample time. They result in
the ramps shown in Figure 29.3b.
As these actuators with constant speed introduce nonlinearities into the loop, the
next section briefly discusses when the behaviour can be linearized.
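The conversion of a demanded position change Δu(k) into a pulse number j and a switch duration T_A = j·T_A0 can be sketched as follows (hypothetical helper; names and numerical values are ours):

```python
def pulse_command(delta_u, delta_A, TA0, delta_u_max):
    """Convert a demanded position change delta_u into a pulse number j and a
    switch duration TA = j*TA0 for a constant speed actuator.  delta_A is the
    position quantization unit realized by the smallest switch duration TA0,
    and delta_u_max the largest position change per sample time."""
    du = max(-delta_u_max, min(delta_u_max, delta_u))  # limit per sample time
    j = round(abs(du) / delta_A)                       # pulse number
    if du < 0:
        j = -j
    return j, abs(j) * TA0

if __name__ == "__main__":
    # Example values: delta_A = 0.005 of a full range 1.0 (about 7...8 bit),
    # TA0 = 10 ms, at most delta_u_max = 0.1 per sample time.
    print(pulse_command(0.032, 0.005, 0.010, 0.1))  # pulse number 6, TA = 6*TA0
```

The rounding to whole pulses is exactly the additional quantization of the actuator position mentioned in the text.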
x = sin(ωT_A/2)/(ωT_A/2)   (29.15)
with T_A the switch duration of the ramp function, cf. [3.11]. If deviations of
5...20 %, i.e. x = 0.95...0.80, are allowed at the maximum frequency ωmax of
interest, it follows that

T_A ωmax ≈ 1.1 ... 2.25 .   (29.16)

In general ωmax = ωs = π/T0 (Nyquist frequency). Hence

T_A/T0 ≈ 0.35 ... 0.72   (29.17)

or, with (29.14),

Δu_A/U_Amax ≈ (0.35 ... 0.72) T0/T_s .   (29.18)

This leads to a "rule of thumb", using (29.17):
actuators with constant speed can be linearized if the maximum switch
duration T_A is about half of the sample time T0.
Note that for the application of this rule the sample time has to be chosen
such that ωmax = ωs.
Two examples in Table 29.2 show how large the changes of the manipulated
variable may be if the nonlinear behaviour of the actuators is to be linearized.
Both methods, of course, give different values for the linearizable position ranges,
which are relatively large.
If good control performance is required, the settling times of the actuators always
have to be adapted to the time characteristics of the control system. This leads to
small sample times, so that for small changes around the operating point
constant speed actuators may be linearized for designing the control algorithm.
These considerations referred to feedforward controlled integral
actuators with constant speed. However, analog or digital feedback control or
a position feedback to the control algorithm according to (29.2) can also be applied to
electrical actuators with constant speed.
Table 29.2 Examples for estimation of the linearizable manipulating range for constant
speed actuators.
Saturations of signal velocities, i.e. limited signal velocities, occur mainly in
actuators. Stability is not affected by these constraints, provided the control loop is stable
without restrictions, since the describing function N(iω) of the saturation
characteristic always satisfies |N(iω)| ≤ 1 [5.14, 29.4].
For control algorithms with integral action, special measures are necessary to
avoid wind-up of the control deviations when a signal saturation (constraint
of the manipulated variable) is reached. If the sign of the control deviation changes, it
takes a relatively long time to restore the integrator, during which the loop remains
at saturation.
To avoid this wind-up, the manipulating range can be taken into account in the control
program by inserting into the recursive control algorithm

u(k) = −p1 u(k − 1) + p2 u(k − 2) − ...

when the constraint U_Amax or U_Amin is reached, the true positions U_Amax or U_Amin
instead of the calculated u(k − 1), u(k − 2), .... This, in general, presupposes at least
approximate agreement of the programmed and the real manipulating range.
Another possibility, the feedback of the real actuator position, was already
given in (29.2). Also compare section 5.8.
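The anti-windup measure can be illustrated with a velocity-form PI algorithm (our minimal Python sketch; gains and limits are example values). Storing the constrained instead of the calculated past positions makes the controller leave the saturation immediately when the control deviation changes sign:

```python
# Velocity-form PI algorithm u(k) = u(k-1) + q0*e(k) + q1*e(k-1) with
# position constraints.  With anti-windup the recursion is continued from
# the constrained (true) position, as described in the text.
q0, q1 = 2.0, -1.8            # example PI parameters
u_min, u_max = 0.0, 1.0       # programmed manipulating range

def run(anti_windup):
    u_stored, e_prev, outputs = 0.0, 0.0, []
    for k in range(60):
        e = 1.0 if k < 30 else -1.0          # deviation changes sign at k = 30
        u = u_stored + q0 * e + q1 * e_prev
        u_limited = max(u_min, min(u_max, u))
        u_stored = u_limited if anti_windup else u   # the key difference
        e_prev = e
        outputs.append(u_limited)
    # first step after the sign change at which u leaves the upper limit
    return next(k for k in range(30, 60) if outputs[k] < u_max)

print("release without anti-windup: k =", run(False))
print("release with    anti-windup: k =", run(True))
```

Without the measure, the internally stored position winds far beyond u_max and the output stays saturated long after the deviation has changed sign; with it, the output reacts in the very first step.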
30 Computer-aided Control Algorithm Design
1. Requirement specification: operating range, main disturbances, control variables, block behaviour, control performance
2. Specification of devices: sensors/actuators, man/machine interface, automation devices, cabling
3. Design of control structure: controlled/manipulated variables, control loops (SISO, MISO), multi-level control
4. Design of control algorithms: lower level, higher level
Figure 30.1 Design steps for control systems with high performance requirements and
relevant process modelling.
For a given process, the design of control systems is of course performed in several
steps, five of which are represented in Figure 30.1.
For the first two steps, the definition of the requirements and the specification
of the devices, rough model outlines are generally sufficient. For the
control structure design, models are required which describe the process statics and
the process dynamics at least in a qualitative way. The design of the feedforward
and feedback control algorithms presupposes quantitative models, which have to be
rather precise, at least for tuning the parameters. As the control design
progresses, the required accuracy of the process models thus increases, see
Figure 30.1, column a.
Figure 30.2 Organization of an on-line identification with a process computer with
interactive dialogue (program package OLID). (The figure shows the dialogue steps:
input/output selection, sample time, test signal, experiment preparation, drift
elimination, identification method, measurement with recursive parameter estimation,
data processing, and automatic order and dead time selection.)
Figure 30.3 Organization of a computer-aided design of control algorithms with
interactive dialogue (CADCA). (The figure shows the steps: selection and configuration
of the controller type, choice of controller parameters, simulation with performance
measures, modification, and on-line operation with constraints and reference values.)
2. Transfer of process models to the program package for the controller design
- from the process computer resident identification program package
- from other sources (e.g. theoretical modelling)
3. Design of various control algorithms e.g.
- parameter-optimized controllers (PI-, PID, PID 2 )
- deadbeat controller
- state controller with observer
- minimum variance controller
4. Simulation of the system behaviour
5. Modification of the control algorithms and final selection
6. Control algorithm implementation in the process computer
- the control algorithms and their parameters are transferred to the real-time
operating system of the process computer
7. Setting of operation conditions
- restrictions on the manipulated variables
- reference variables
8. Closed-loop operation
- Command for closed-loop operation with start-up conditions
- The closed-loop operation is supervised and compared with the simulated
behaviour
- algorithms can be modified, if required
For off-line design, items 1. to 5. are sufficient.
This computer-aided design method has the following advantages:
- Automation of the design and the start-up of digital control.
- Simulation of the control system with various control schemes and control
algorithms without disturbing the process.
- Saving of implementation and start-up time, especially for processes with large
settling time, complicated behaviour or strong interactions.
- Improvement of the control performance by better-tuned simple algorithms or
more sophisticated control algorithms.
- Determination of the dependence of controller parameters on the operating
point. Therefore feedforward adaptive controllers can be quickly designed.
It is expedient to summarize the individual design tasks in program packages.
Figure 30.3 shows some tasks for single-input/single-output processes using the
program package CADCA [30.5, 30.7] as an example. With this program about ten
different control algorithms can be designed. The process model is entered as a
difference equation or a vector difference equation; this defines the smallest
sampling time. In the program's question-answer dialogue the selected control
algorithms are designed interactively. Then various design parameters have
to be chosen, e.g. manipulated variable weighting factor, manipulated variable
limits, state variable weighting factors. The course of the manipulated and control-
led variable is simulated and put out for selected input signals (reference value).
Various control performance measures and the location of poles and zeros can be
observed. After modification and comparison the operator can decide on one control
algorithm and can test it on-line with the same program package. A special analysis
program is available for examining the behaviour with the real process.
Figure 30.4 shows the signal flow during the identification with OLID and for
the closed control loop after application of CADCA for a simulated analog process.

Figure 30.4 Signal flow during process identification (a) and with four computer-aided
designed control algorithms (b). Process: Gp(s) = 1.2/[(1 + 4.2s)(1 + 1.0s)(1 + 0.9s)], To = 2 s.
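The control performance measures used for comparing the simulated designs, such as the summed squared control error and the manipulated-variable effort, can be computed from a simulated transient. A minimal sketch with hypothetical signal sequences (the actual CADCA measures may differ):

```python
def performance_measures(w, y, u):
    # Quadratic performance measures over a simulated transient, as used
    # when comparing control algorithms during the design dialogue.
    e = [wk - yk for wk, yk in zip(w, y)]
    du = [u2 - u1 for u1, u2 in zip(u, u[1:])]
    return {
        "Se2": sum(ek ** 2 for ek in e),    # summed squared control error
        "Su2": sum(uk ** 2 for uk in u),    # manipulated variable effort
        "Sdu2": sum(d ** 2 for d in du),    # manipulated variable changes
    }
```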
The advantages of a computer-aided design are especially evident, if intercon-
nected control systems and multivariable control systems are to be designed. This
is described e.g. in [30.8, 30.10, 30.13].
An advantage of the separate identification, design and control phases is that
arbitrary identification and control methods can be combined, provided the
process is time-invariant. In particular, the process behaviour is not supposed to
change essentially between identification and control action. Otherwise selftuning
control algorithms are to be used, see Chapter 31.
Figure 30.6a, b. Slave injection cooler control (controller R31). Simulation without noise
signals. a Process identification with OLID (To = 3 s; m = 2; d = 0; PRBS clock time: 2);
b PI-control for a change in the reference variable. CADCA design. r = 2; q0 = 0.66;
q1 = -0.50.
attained value behaviour for a reference value step with noise signals. Figure 30.7c
shows the same, however without noise signals.
Multivariable control systems can be designed using the same principles. [30.10,
2.23] describe an example with simultaneous excitation of two manipulated vari-
ables (injection water and fuel flow) used for identification and state control of
steam outlet temperature and steam pressure.
Figure 30.7a-c. Superheater final temperature control (R32) with slave injection cooler temperature control (simulation with steam
flow disturbances). a process identification with OLID (To = 30 s; m = 3; d = 0; PRBS clock time: 1); b state control for step change
in the reference value, CADCA design; c as b, however, with switched-off noise signal.
30.2 Case Studies 277
Figure 30.8 Steam-heated heat exchanger. L = 2.5 m, tubes d = 25 mm. Input: change ΔU
of the steam valve. Output: change ΔY of the water temperature.
Figure 30.9 Process input and output signal during on-line identification. PRBS: period
N = 31, clock time λ = 1, water flow Mw = 3100 kg/h. Sample time: To = 3 s.
In Figure 30.10 the closed loop response to steps in the reference value is shown for
various control algorithms designed with CADCA. Because of the nonlinear
behaviour of the valve and the heat exchanger, the closed loop response depends on
the direction of the step change. However, satisfactory agreement (on average)
between the simulated and the real response is obtained.
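A PRBS of period N = 31, as used for the identification in Figure 30.9, can be generated with a 5-stage feedback shift register. A generic sketch, not the OLID implementation:

```python
def prbs(stages=5, taps=(5, 3), periods=1, amplitude=1.0):
    # Pseudo-random binary sequence from a maximum-length shift register.
    # stages=5 with feedback taps (5, 3) gives period N = 2**5 - 1 = 31.
    reg = [1] * stages
    out = []
    for _ in range(periods * (2 ** stages - 1)):
        out.append(amplitude if reg[-1] else -amplitude)
        fb = reg[taps[0] - 1] ^ reg[taps[1] - 1]   # XOR of the tap bits
        reg = [fb] + reg[:-1]                      # shift register one step
    return out
```

One period of such a maximum-length sequence contains 16 positive and 15 negative values, so its mean is almost zero.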
(Panels: parameter-optimized controller 3PC-3 (PID); state controller with observer SC.)
Figure 30.10a Closed loop response for four CADCA designed control algorithms based on an identified process model. Reference variable
steps in both directions. -- measured response; ···· simulated response (during design phase).
        k1       k2       k3      k4      k5      r
SC     -3.9622  -3.4217  -2.7874  0.4472  1.3372  0.04

        q0       q1      q2       p0      p1       r
3PC-3  -1.7125   2.3578  -0.7781  1.0000  -1.0000  0.01
(Panels: deadbeat controller DB(v); deadbeat controller of increased order DB(v+1).)
Figure 30.10b Closed loop response with four CADCA designed control algorithms based on an identified process model. Reference variable
steps in both directions. -- measured response; ···· simulated responses (during design phase).
          q0       q1       q2       q3       q4      q5      p0      p1      p2       p3       p4       p5
DB(v)    -8.4448  10.4119  -4.0384   1.0775   0.0000          1.0000  0.0000  -0.2317  -0.5841  -0.1842
DB(v+1)  -3.8612   0.1770   3.8048  -1.6993   0.5848  0.0000  1.0000  0.0000  -0.1059  -0.3928  -0.4013  -0.1000
Figure 30.11 Schematic of the rotary dryer (Süddeutsche Zucker AG, Werk Plattling).
Drum diameter DD = 4.6 m; drum length LD = 21.0 m; oil mass flow MF,max ≈ 4.5 t/h; temperatures ϑ0 ≈ 1050 °C;
wet pulp mass flow MPS,max ≈ 50 t/h; ϑM ≈ 140-210 °C; flue gas mass flow MKG,max ≈ 50 000 Nm3/h; ϑA ≈ 110-155 °C.
(Figure: input and output signals of the rotary dryer. Inputs: mass flow of the fuel MF; revolutions of the screw conveyor n; water content of the pressed pulp ψPS. Outputs: gas temperature in the middle of the drum; gas temperature at the plant outlet; dry substance ψTS.)
The evaluation of the data was performed off-line using the parameter estimation
method RCOR-LS of the program package OLID-SISO. The initial identification
experiments have shown that the following values are suitable:
sample time To = 3 min
clock interval λ = 4
amplitude fuel AMF = 0.25 t/h
amplitude speed An = 1 rpm
The required identification times varied between 112 To and 335 To, which is 5.6 to
16.8 hours. Figure 30.13 shows one example of an identification experiment. The
step responses of the identified models are presented in Figure 30.14. The settling
time is shortest for the oven outlet temperature and increases considerably for the
(Data records over 5 h: ϑ0 ≈ 1100-1140 °C; ϑM ≈ 180-210 °C; ϑA ≈ 110-130 °C; ψTS ≈ 85-96 %.)
Figure 30.13 Data records of an identification experiment with fuel flow changes.
gas temperatures in the middle and at the end of the dryer. The dry substance has,
with fuel flow as input, a dead time of 6 min, an all-pass behaviour with undershoot
which lasts for about 30 min, and a 95% settling time of about 2.5 h. This behaviour
is one of the main reasons for the control problem. With the screw conveyor speed
as input the dry substance shows a dead time of 18 min. The estimated model
orders and the dead times are given in Figure 30.14.
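Step responses of identified models of this difference-equation form with order m and dead time d can be computed recursively. A sketch with hypothetical first-order coefficients, not the dryer models:

```python
def step_response(a, b, d, n):
    # Step response of
    #   y(k) = -a1 y(k-1) - ... - am y(k-m) + b1 u(k-d-1) + ... + bm u(k-d-m)
    # for a unit step u(k) = 1, k >= 0; d is the dead time in sampling steps.
    m = len(a)
    y = [0.0] * n

    def u(k):
        return 1.0 if k >= 0 else 0.0

    for k in range(n):
        y[k] = -sum(a[i] * (y[k - 1 - i] if k - 1 - i >= 0 else 0.0)
                    for i in range(m)) \
               + sum(b[i] * u(k - d - 1 - i) for i in range(m))
    return y
```

With d = 2 the response stays at zero for the first d sampling steps after the input change, which is how the dead times in Figure 30.14 show up.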
(Step-response plots with the identified model orders and dead times: Δϑ0: m = 1, d = 1; ΔϑM: m = 3, d = 1; ΔϑA: m = 3, d = 2; ΔψTS: m = 5, d = 2 for the fuel flow input and m = 5, d = 6 for the speed input.)
Figure 30.14 Step responses of the identified models. n = 13 rpm, To = 3 min. a change of
the fuel flow ΔMF = 1 t/h; b change of the screw conveyor speed Δn = 1 rpm.
Based on the identified process models various control systems were designed
using the program package CADCA [30.1], [30.3]. The manipulated variable is the
fuel flow and the main controlled variable the dry substance. If only the dry
substance is fed back, control is poor; feedback of the gas temperatures ϑM and ϑA
improves it considerably. Figure 30.15 shows the simulated responses to step
changes of the screw conveyor's speed for a double cascade control system with
3 PID-control algorithms and a state controller with observer. The better control
(better damped and with fewer oscillations) is obtained with state control. Control
can be improved considerably using a second order feedforward control algorithm
GF1 measuring the speed n and manipulating the fuel flow, Figure 30.16. For
practical reasons the cascaded control system was finally implemented on a
SIEMENS 310 K process computer (easy transfer to other dryers, transparency for
the operator, computer manufacturer's program package SIMATIC). A block
diagram of the implemented control system is shown in Figure 30.16 which also
shows the positional control algorithms for the actuators, a feedforward control
Gn for the case where a reliable water content measurement of the wet pulp is
possible, a feedforward controller GF8 to change the speed of the screw conveyor
such that the total water mass flow is kept constant and a feedforward controller
GF7 of differential type to reduce the nonminimum phase behaviour of the dry
substance by changing the boiler flue gases such that the gas flow through the dryer
is initially kept constant after a fuel flow change. Figure 30.17a) shows signal
records for manual analog control (original status) and Figure 30.17b) for digital
cascaded control with feedforward control GFl . Although the pulp mass flow MPS
is fairly constant the dry substance oscillates within a tolerance of about ±2.5%
for manualjanalog control. With digital control the tolerance is reduced to about
± 1% for larger pulpmass flow disturbances than in Figure 30.17a), or ± 0.5% for
periods with fairly constant pulpmass flow. This shows a significant improvement
in performance using the digital control.
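A feedforward control algorithm of the kind described, e.g. the second-order algorithm GF1, amounts to a difference equation driven by the measured disturbance. A sketch with hypothetical coefficients; the actual GF1 parameters are not given here:

```python
def feedforward_filter(q, p, v_seq):
    # Second-order feedforward algorithm as a difference equation:
    #   u(k) = q0 v(k) + q1 v(k-1) + q2 v(k-2) - p1 u(k-1) - p2 u(k-2)
    # v: measured disturbance (e.g. screw conveyor speed change),
    # u: feedforward correction of the manipulated variable (fuel flow).
    u = []
    for k, _ in enumerate(v_seq):
        val = sum(qi * (v_seq[k - i] if k - i >= 0 else 0.0)
                  for i, qi in enumerate(q))
        val -= sum(pi * (u[k - i] if k - i >= 0 else 0.0)
                   for i, pi in enumerate(p, start=1))
        u.append(val)
    return u
```

The static compensation is set by the DC gain Σqi / (1 + Σpi); the dynamic terms shape how fast the correction acts after a disturbance step.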
A report of the practical experience with the digital control of three rotary dryers
with one process computer has shown that the fuel saving because of better control
was about 2.5%, which is about 329 tons of oil annually [30.14]. This rotary dryer
is a typical example of a process with complicated internal behaviour and large
settling time for which manual tuning of controller parameters did not result in
satisfactory control. The process identification and computer aided design of
various control systems led to a good insight into the process' behaviour and
allowed the simulation and comparison of various control systems. As the rotary
dryer generally operates at full load, fixed control algorithms are suitable and an
adaptive algorithm is not required.
The described methods of computer-aided design with process identification
have also been successfully applied to other processes, see [30.9]:
- drying plants
- plastic tube extrusion
- material test machine
- motor test bench
(The plots compare cascade control, state control and the behaviour without control; ΔMF in t/h, ΔϑM and ΔϑA in °C.)
Figure 30.15a, b. Simulated control behaviour of the rotary dryer for step changes of the screw conveyor speed of Δn = 1 rpm, measuring the
dry substance and the flue gas temperatures ϑM and ϑA. To = 3 min. a without feedforward control; b with feedforward control GF1.
Figure 30.16 Block diagram of the cascaded control system implemented on a process computer.
(Figure 30.17a: records of ψTS, ϑA, ϑM, n, MF and ψPS for manual control, from about 17:00 to 5:00.)
(Figure 30.17b: records of ψTS, ϑA, ϑM, n, MF and ψPS for digital control, from about 15:00 to 3:00.)
Figure 30.17a, b. Signal records of the rotary dryer. Signals are defined in Figure 30.11. MM:
molasses mass flow. a manual control; b digital control with cascaded control system and
feedforward controller GF1.
Final remarks on the application of computer-aided control design with process
identification.
Process identification followed by computer-aided control design is especially
recommended for complicated or complex processes which still require initial basic
research on the choice of the controller structure and the type of the control algo-
rithms. Various process identification and control design methods can be com-
bined by using this separated procedure. The a-priori knowledge of the process
may be small and can be restricted to the basic behaviour, e.g. linearizable or
strongly nonlinear, proportional- or integral-acting behaviour. The process, how-
ever, should not change its behaviour essentially during the design phase, that is,
it should possess time-invariant behaviour.
If the principal process behaviour and the control structure are fixed, then the
transition to selftuning control algorithms is suggested. Their application will be
considered in the following chapter. The starting action of the selftuning control
algorithms with pre-identification corresponds to the computer-aided control
design treated above. In this respect the transition from the methods treated in this
chapter to the methods of selftuning control systems is smooth.
31 Adaptive and Selftuning Control Systems Using
Microcomputers and Process Computers
For testing the adaptive digital control systems several microcomputers were set up
[26.25, 26.44, 26.60]. They consist of single-board computers with memory extension,
input/output unit, console processor and operating elements, see Figure 31.1.
A special feature is the console processors, which organize the communication
between the microcomputer and the operator. They allow a variety of keyboard
inputs and information representation and transmission inside and outside of the system.
Table 31.2 Comparison of computing times for various adaptive control algo-
rithms RLS/DB or RLS/MV3 [26.44]

Table 31.3 Comparison of computing times for various adaptive and fixed control algo-
rithms on the microcomputer DMR-16 with arithmetic processor 8087. Parameter estimation
with RLS
makes the difference between the two 8-bit computers. Using the higher-level language
for the 16-bit computer causes an increase of about 50% in memory storage. This
is initially also the case for the computing time. The computing time of the 16-bit
computer becomes significantly smaller if the arithmetic processor 8087 is
used. Since the share of arithmetic operations is relatively large, the performance of
the arithmetic processor is of special significance. For model order m = 3 about
half of the computing time is needed for parameter estimation (about 16 ms). If
programmed in FORTRAN IV on a process computer HP 21 MX-E, the same
adaptive control algorithm requires about 500 ms. Table 31.3 shows the required
computational effort for various parameter-adaptive and fixed control algorithms
with one and two control variables.
As expected, the adaptive controllers require significantly more memory storage
(factor 10 to 25) and more computing time (factor 40) than the fixed controllers.
The required memory storage for adaptive single variable control systems for
RLS/DB is about half of that for RLS/PID and RLS/SC. For RLS/PID and RLS/DB the
computing times are almost equal; for RLS/SC, however, they are 4 times larger.
For adaptive multivariable control systems these differences, however, become
smaller.
It should be noted that these performance data are valid for microcomputer
prototypes. The goal was a general testing of the functioning of adaptive control
methods implemented on microcomputers. The programs therefore include many
additional functions for performance analysis and supervision. Storage require-
ments as well as computing times can still be reduced. Concerning the computing
time, the lower limit of the applied 16-bit computer seems to be about 10 ms.
Control algorithm sampling times of about 2 ms can be realized if the controller
is not adapted anew after each sampling step. The recursive parameter estima-
tion calculation then should be spread over several sampling intervals.
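The recursive least squares (RLS) parameter estimation underlying these adaptive algorithms consists of a short update per sample, which is the part that can be spread over several sampling intervals. A generic sketch, not the DMR-16 implementation:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    # One recursive least-squares step with exponential forgetting factor lam.
    # theta: parameter estimate, P: covariance matrix, phi: regressor, y: output.
    denom = lam + phi @ P @ phi
    k = (P @ phi) / denom                    # correction (gain) vector
    theta = theta + k * (y - phi @ theta)    # update with the equation error
    P = (P - np.outer(k, phi @ P)) / lam     # covariance update
    return theta, P
```

For a model y(k) = -a1 y(k-1) + b1 u(k-1) the regressor is phi = [-y(k-1), u(k-1)] and theta = [a1, b1]; with a persistently exciting input the estimates converge to the true parameters.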
31.2 Examples
Various case studies have already shown the applicability of parameter adaptive
control algorithms to industrial and pilot processes. Early examples (1975-1979)
apply the implicit version of RLS/MV4 for moisture control of a paper machine
[26.12], for the heading control of a tanker [26.22] and to control the titanium dioxide
content in a kiln [26.23]. The application of implicit RLS/MV3 with microcomputers is
described in [26.24]. Explicit RLS/DB and RLS/MV3 with microcomputers were
used to control an air heater [26.25, 26.26]. [26.27] describes the application of
RLS/MV4 and RLS/SC on a pH-process. A survey of more applications after 1979
is given e.g. in [23.20, 23.19, 23.22, 30.9].
Some examples of application are described more closely in the following.
Figure 31.3 shows the scheme of an air-conditioning plant which was constructed
from standard components. The air temperature can be changed by changing the position
of the three-way valve, which changes the water flow through the air heater. The air
humidity control is ensured by changing the spray water flow in the air humidifier.
This is a strongly coupled two-variable control system which shows a distinctly
nonlinear behaviour and which is also especially dependent on the load (air flow).
Figure 31.2a, b. Selftuning superheater final temperature control as in Fig. 30.7 (simulation).
a Selftuning state control (RLS/SC) with brief pre-identification, without noise signal; b as
a, however with steam flow disturbances.
Figure 31.3 Scheme of an air-conditioning plant (pilot process). ϑ air temperature; φ rela-
tive humidity; M air flow; U1 position of the inlet water valve; U2 position of the spray
water valve.
Figure 31.5 Adaptive deadbeat control of the air heater with RLS/DB. m = 3; d = 0;
To = 18 s; λ = 0.93; r' = 0.83. U1 = U.
Figure 31.4 depicts the gain factor of the temperature control system. Within the
considered operating points it changes by about 1:10, the settling times by 1:2.
In the following, some signal graphs are shown for the adaptive control of the
air temperature, which were obtained for the operating point M = 300 m3/h;
U00 = 4.3 V; ϑA = 47 °C. For the adaptive control algorithms To = 18 s; m = 3;
d = 0; λ = 0.93 were chosen, and for the pre-identification a PRBS with amplitude
1.25 V and 17 sampling steps [26.44].
Figure 31.5 shows the control behaviour for the parameter-adaptive
RLS/DB(v + 1). After a brief pre-identification the loop is closed and stabilized.
The controller parameters are adapted anew for each step change of the set point.
With increasing temperature the settling behaviour is more damped because of the
gain increase. If the temperature decreases, the settling behaviour shows an over-
shoot because of a gain decrease. The adaptive state and PID controllers (para-
meter-optimized) show a similar behaviour, Figures 31.6 and 31.7.
In Figure 31.8 the PID-control algorithm was fixed after pre-identification. In
the vicinity of this operating point the expected control behaviour is attained. The
loop, however, reaches the stability limit (oscillating behaviour) if the reference
value is moved towards a larger process gain.
Figure 31.9 shows the behaviour with a DSFI/PID (square root filter in informa-
tion form) for an especially large range of process parameter changes. The signals
demonstrate the corresponding adaptation.
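The structure of such a parameter-adaptive controller, e.g. RLS/DB, can be sketched as a certainty-equivalence loop: pre-identification with a test signal, then at each sampling step an RLS update followed by a deadbeat design from the current estimates. The plant coefficients below are hypothetical, not the air heater's:

```python
import numpy as np

def adaptive_deadbeat_demo(a1=-0.7, b1=0.5, n_pre=20, n_run=60):
    # Certainty-equivalence adaptive deadbeat control (RLS/DB) for a
    # first-order plant y(k) = -a1 y(k-1) + b1 u(k-1); a1, b1 are unknown
    # to the controller and estimated on-line.
    rng = np.random.default_rng(1)
    theta = np.array([0.0, 0.1])         # initial guesses for [a1, b1]
    P = np.eye(2) * 100.0
    y_prev = u_prev = e_prev = 0.0
    w = 1.0                              # reference value
    ys = []
    for k in range(n_pre + n_run):
        y = -a1 * y_prev + b1 * u_prev   # plant response
        phi = np.array([-y_prev, u_prev])
        denom = 1.0 + phi @ P @ phi      # RLS update (no forgetting)
        kv = (P @ phi) / denom
        theta = theta + kv * (y - phi @ theta)
        P = P - np.outer(kv, phi @ P)
        ah, bh = theta
        e = w - y
        if k < n_pre:
            u = float(rng.choice([-1.0, 1.0]))   # PRBS-like pre-identification
        else:
            # deadbeat controller DB(v) designed from current estimates
            q0, q1, p1 = 1.0 / bh, ah / bh, 1.0
            u = q0 * e + q1 * e_prev + p1 * u_prev
        y_prev, u_prev, e_prev = y, u, e
        ys.append(y)
    return theta, ys
```

After the loop is closed the estimates have essentially converged, so the output settles at the reference value; the integral action of the deadbeat law (p1 = 1) removes any remaining offset.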
Figure 31.6 Adaptive state control of the air heater.
Figure 31.7 Adaptive PID-control of the air heater with RLS/PID. r = 0.08.
Figure 31.8 Unique selftuning PID-control of the air heater with RLS/PID.
Figure 31.9 Adaptive PID-control of the air heater with DSFI/PID. λ = 0.8.
Figure 31.10 Scheme of a high-pressure air conditioning plant of a building (max. air flow
rate: 12 300 m3/h).
Figure 31.11 Adaptive two-variable control with RLS and state controller. ϑ = air temper-
ature, φ = relative air humidity, To = 45 s.
Figure 31.12 a Scheme of a pH-value process. b Measured titration characteristic.
Figure 31.13 Titration characteristic and gain factor of the pH-value process.
Figure 31.14a, b. Adaptive control of the pH-value. Ma: acid flow; Mb: base flow; Mw: water flow. a RLS/DB(v + 1); r' = 0.5; To = 15 s; m = 3;
λ = 0.88.
Chapter 12
12.1 Aoki, M.: Optimization of stochastic systems. New York: Academic Press 1967
12.2 Meditch, J.S.: Stochastic optimal linear estimation and control. New York:
McGraw-Hill 1969
12.3 Bryson, A.E.; Ho, Y.C.: Applied optimal control. Waltham: Ginn and Co. 1969
12.4 Astrom, K.J.: Introduction to stochastic control theory. New York: Academic Press
1970
12.5 Kushner, H.J.: Introduction to stochastic control. New York: Holt, Rinehart and
Winston 1971
12.6 Schlitt, H.; Dittrich, F.: Statistische Methoden der Regelungstechnik. Mannheim:
Bibliographisches Inst. 1972 Nr. 526
12.7 Davenport, W.; Root, W.: An introduction to the theory of random signals and
noise. New York: McGraw-Hill 1958
12.8 Bendat, J.S.; Piersol, A.G.: Random data: analysis and measurement procedures.
New York: Wiley Interscience 1971
12.9 Box, G.E.P.; Jenkins, G.M.: Time series analysis, forecasting and control.
San Francisco: Holden Day 1970
12.10 Jazwinski, A.H.: Stochastic processes and filtering theory. New York: Academic
Press 1970
12.11 Hänsler, E.: Grundlagen der Theorie statistischer Signale. Berlin: Springer-Verlag
1983
Chapter 14
14.1 Clarke, M.A.; Hastings-James, R.: Design of digital controllers for randomly dis-
turbed systems. Proc. IEE, 118 (1971) 1503-1506
14.2 Schumann, R.: Various multivariable computer control algorithms for parameter-
adaptive control systems. IFAC Symp. on Computer Aided Design of Control
Systems, Zurich 1979, Oxford: Pergamon Press
Chapter 15
15.1 Bar-Shalom, Y.; Tse, E.: Dual effect, certainty equivalence and separation in stochas-
tic control. IEEE Trans. Autom. Control AC 19 (1974) 494-500
310 References
Chapter 16
16.1 Benennungen für Steuer- und Regelschaltungen. VDI/VDE-Richtlinie 3526,
VDI/VDE-Handbuch Regelungstechnik. Berlin: Beuth 1972
16.2 Preßler, G.: Regelungstechnik. Mannheim: Bibliographisches Inst. 1967
16.3 Leonhard, W.: Einführung in die Regelungstechnik. Braunschweig: Vieweg 1972
Chapter 17
17.1 Isermann, R.; Bauer, H.: Entwurf von Steueralgorithmen für Prozeßrechner. ETZ-A
36 (1975) 242-245
Chapter 18
18.1 Mesarovic, M.D.: The control of multivariable systems. New York: Wiley 1960
18.2 Schwarz, H.: Mehrfachregelungen. 1. Bd. Berlin: Springer-Verlag 1967
18.3 Schwarz, H.: Mehrfachregelungen. 2. Bd. Berlin: Springer-Verlag 1971
18.4 Thoma, M.: Theorie linearer Regelsysteme. Braunschweig: Vieweg 1973
18.5 Isermann, R.: Die Berechnung des dynamischen Verhaltens der Dampftemperatur-
regelung von Dampferzeugern. Regelungstechnik 14 (1966) 469-475, 519-522
18.6 Isermann, R.; Baur, U.; Blessing, P.: Test case C for the comparison of different
identification methods. Boston: Proc. 6th IFAC-Congress 1975
18.7 Schramm, H.: Beiträge zur Regelung von Zweigrößensystemen am Beispiel eines
Verdampfers. Fortschr.-Ber. VDI-Z. Reihe 8 (1976) Nr. 24
18.8 Freund, E.: Zeitvariable Mehrgrößensysteme. Lecture Notes in Operations Re-
search and Mathematical Systems. Berlin: Springer-Verlag 1971
18.9 Sinha, N.K.: Minimal realization of transfer function matrices - a comparative
study of different methods. Int. J. Control 22 (1975) 627-639
18.10 Kucera, V.: Discrete linear control. Prague: Academia 1979
18.11 Fisher, D.G.; Seborg, D.E.: Multivariable computer control. Amsterdam: North
Holland 1976
Chapter 19
19.1 Kraemer, W.: Grenzen und Möglichkeiten nicht entkoppelter, linearer Zweifach-
regelungen. Fortschr.-Ber. VDI Z. Reihe 8 (1968) Nr. 10
19.2 Muckli, W.: Analyse und Optimierung nicht entkoppelter Zweifachregelkreise.
Aachen: Diss. TH 1968
19.3 Muckli, W.; Kraemer, W.: Reglereinstellung an nicht entkoppelten Zweigrößensys-
temen. Regelungstechnik 20 (1972) 155-163
19.4 Zietz, H.: Stabilitätsbetrachtungen und Reglerentwurf bei nicht entkoppelten
Zweigrößenregelungen. Mess. Steuern Regeln 16 (1973) 84-88
19.5 Niederlinski, A.: A heuristic approach to the design of linear multivariable inter-
acting control systems. Automatica 7 (1971) 691-701
19.6 Engel, W.: Grundlegende Untersuchungen über die Entkopplung von Mehrfach-
regelkreisen. Regelungstechnik 14 (1966) 562-568
Chapter 20
20.1 Schumann, R.: Various multivariable computer control algorithms for parameter-
adaptive control systems. IFAC Symp. on Computer Aided Design of Control
Systems, Zurich 1979, Oxford: Pergamon Press
20.2 Keviczky, L.; Hetthessy, J.: Self-tuning minimum variance control of MIMO discrete
systems. Autom. Control Theory and Applic. 5 (1977)
Chapter 21
21.1 Falb, P.L.; Wolovich, W.A.: Decoupling in the design and synthesis of multivariable
control systems. IEEE Trans. AC 12 (1967) 651-659
21.2 Gilbert, E.G.: The decoupling of multivariable systems by state feedback. SIAM
Control 7 (1969) 50-63
21.3 Schwarz, H.: Optimale Regelung linearer Systeme. Mannheim: Bibliographisches
Inst. (1976)
Chapter 22
22.1 Wiener, N.: Extrapolation, interpolation and smoothing of time series with engineer-
ing applications. New York: Wiley 1949
22.2 Sage, A.P.; Melsa, J.L.: Estimation theory with applications to communications and
control. New York: McGraw-Hill 1971
22.3 Nahi, N.E.: Estimation theory and applications. New York: Wiley 1969
22.4 Kalman, R.E.: A new approach to linear filtering and prediction problems. Trans.
ASME, Series D, 83 (1960) 35-45
22.5 Kalman, R.E.; Bucy, R.S.: New results in linear filtering and prediction theory.
Trans. ASME, Series D, 83 (1961) 95-108
22.6 Deutsch, R.: Estimation theory. Englewood Cliffs: Prentice Hall 1965
22.7 Bryson, A.E.; Ho, Y.C.: Applied optimal control. Waltham: Ginn (Blaisdell) 1969
22.8 Jazwinski, A.H.: Stochastic processes and filtering theory. New York: Academic
Press 1970
22.9 Theory and Applications of Kalman Filtering. AGARDograph Nr. 139 (1970).
Zentralstelle für Luftfahrtdokumentation, München. ESRO/ELDO Space 114,
Neuilly-sur-Seine/France
22.10 Brammer, K.; Siffling, G.: Kalman-Bucy-Filter. München: Oldenbourg 1975
Chapter 23
23.1 Aseltine, J.A.; Mancini, A.R.; Sarture, C.W.: A survey of adaptive control systems.
IRE Trans. Autom. Control 6 (1958) 102
23.2 Stromer, R.R.: Adaptive and self-optimizing control systems - a bibliography. IRE
Trans. Autom. Control 4 (1959) 65
23.3 Truxal, J.: Adaptive control - a survey. Proc. 2nd IFAC-Congress, Basel 1963
23.4 Donaldson, D.P.; Kishi, F.H.: Review of adaptive control systems theories and
techniques. In: Modern Control Systems Theory. (ed.) Leondes. New York: McGraw-Hill
1965
23.5 Tsypkin, Y.Z.: Adaptation, training and self organization in automatic systems.
Autom. Remote Control 27 (1971) 16
23.6 Mishkin, E.; Braun, L.: Adaptive control systems. New York: McGraw-Hill 1961
23.7 Eveleigh, V.W.: Adaptive control and optimization technique. New York:
McGraw-Hill 1967
23.8 Mendel, J.M.; Fu, K.S.: Adaptive, learning and pattern recognition systems.
New York: Academic Press 1970
23.9 Tsypkin, Y.Z.: Adaptation and learning in automatic systems. New York: Academic
Press 1971
23.10 Weber, W.: Adaptive Regelungssysteme. Bd. I und II. München: Oldenbourg 1971
23.11 Maslov, E.P.; Osovskii, L.M.: Adaptive control systems with models. Autom. Re-
mote Control 27 (1966) 1116
23.12 Landau, I.D.: A survey of model reference adaptive techniques - theory and applications. Automatica 10 (1974) 353-379
23.13 Lindorff, D.P.; Carroll, R.L.: Survey of adaptive control using Ljapunov design. Int.
J. Control 18 (1973) 897
23.14 Wittenmark, B.: Stochastic adaptive control methods: a survey. Int. J. Control 21 (1975) 705-730
23.15 Saridis, G.N.; Mendel, J.M.; Nicolic, Z.J.: Report on definitions of self-organizing
control processes and learning systems. IEEE Control System Soc. Newsletters 48
(1973) 8-13
23.16 Gibson, J.: Nonlinear automatic control. New York: McGraw-Hill 1962
23.17 Åström, K.J.; Borisson, U.; Ljung, L.; Wittenmark, B.: Theory and applications of self-tuning regulators. Automatica 13 (1977) 457-476
23.18 Asher, R.B.; Andrisani, D.; Dorato, P.: Bibliography on adaptive control systems.
IEEE Proc. 64 (1976) 1126
23.19 Isermann, R.: Parameter adaptive control algorithms - a tutorial. Automatica 18
(1982) 513-528
23.20 Åström, K.J.: Theory and applications of adaptive control - a survey. Automatica 19
(1983) 471-486
23.21 Landau, I.D.: Adaptive control - the model reference approach. New York: M.
Dekker 1979
23.22 Harris, C.J.; Billings, S.A. (Eds): Self-tuning and adaptive control - theory and
applications. London: P. Peregrinus 1981
23.23 Whitaker, H.P.; Yamron, J.; Kezer, A.: Design of model-reference adaptive systems for aircraft. Report R-164, Instrumentation Laboratory MIT, Cambridge 1958
23.24 Osburn, P.V.; Whitaker, H.P.; Kezer, A.: New developments in the design of model reference adaptive control systems. IAS-paper No. 61-39, Inst. of Aeronautical
Sciences, 29th Annual Meeting, New York, 1961
23.25 Parks, P.C.: Lyapunov redesign of model reference adaptive control systems. IEEE
Trans. Autom. Control AC 11 (1966) 362-367
23.26 Narendra, K.S.; Kudva, P.: Stable adaptive schemes for system identification and control, Parts I, II. IEEE Trans. Syst. Man Cybern. 4 (1974) 542-560
23.27 Desoer, C.A.; Vidyasagar, M.: Feedback systems: Input-Output properties. New
York: Academic Press 1975
23.28 Chen, C.T.: Introduction to linear system theory. New York: Holt, Rinehart,
Winston 1970
23.29 Schmid, Chr.: Ein Beitrag zur Realisierung adaptiver Regelungssysteme mit dem Prozeßrechner. Diss. Ruhr-Univ. Bochum 1979
23.30 Unbehauen, H.: Systematic design of discrete model reference adaptive systems. In 'Self-tuning and adaptive control'. IEE Control Eng. Series 15. Stevenage (UK): Peregrinus
23.31 Kalman, R.E.: Design of a self-optimizing control system. Trans. ASME 80 (1958)
468-478
23.32 Peterka, V.: Adaptive digital regulation of noisy systems. 2nd IFAC-Symp. on Identification. Preprints. Prague: Academia 1970
23.33 Åström, K.J.; Wittenmark, B.: On self-tuning regulators. Automatica 9 (1973)
185-199
23.34 Clarke, D.W.; Gawthrop, P.J.: A self-tuning controller. IEE Proc. 122 (1975)
929-934
23.35 Wellstead, P.E.; Prager, D.; Zanker, P.: Pole assignment self-tuning regulator. IEE Proc. 126 (1979) 781-787
23.36 Kurz, H.; Isermann, R.; Schumann, R.: Experimental comparison and application of various parameter adaptive control algorithms. 7th IFAC-Congress, Helsinki 1978 and Automatica 16 (1980) 117-133
23.37 Parks, P.C.: Stability and convergence of adaptive controllers - continuous systems.
In [23.22]
23.38 Fromme, G.; Haverland, M.: Selbsteinstellende Digitalregler im Zeitbereich. Regelungstechnik 31 (1983) 338-345
23.39 Guillemin, E.A.: Synthesis of passive networks. New York: Wiley 1957
23.40 Anderson, B.D.O.: A simplified viewpoint of hyperstability. IEEE Trans. Autom.
Control AC 13 (1968) 292-294
23.41 Goodwin, G.C.; Sin, K.S.: Adaptive filtering, prediction and control. Englewood Cliffs, N.J.: Prentice Hall 1984
Chapter 24
24.1 Panuska, V.: A stochastic approximation method for identification of linear systems
using adaptive filtering. Proc. JACC 1968
24.2 Panuska, V.: An adaptive recursive least squares identification algorithm. Proc.
IEEE Symp. on Adaptive Processes, Decision and Control 1969
24.3 Young, P.C.: The use of linear regression and related procedures for the identification of dynamic processes. Proc. 7th IEEE Symp. on Adaptive Processes. New York: IEEE 1968
24.4 Young, P.C.; Shellswell, S.H.; Neethling, C.G.: A recursive approach to time-series
analysis. Report CUED/B-Control/TR16, Univ. of Cambridge 1971
24.5 Wong, K.Y.; Polak, E.: Identification of linear discrete time systems using the
instrumental variable method. IEEE Trans. Autom. Control AC 12 (1967) 707-718
24.6 Young, P.C.: An instrumental variable method for real-time identification of a noisy
process. IFAC-Automatica 6 (1970) 271-287
24.7 Fuhrt, B.P.; Carapic, M.: On-line maximum likelihood algorithm for the identification of dynamic systems. 4th IFAC-Symp. on Identification, Tbilisi 1976
24.8 Söderström, T.: An on-line algorithm for approximate maximum likelihood identification of linear dynamic systems. Report 7308. Dept. of Automatic Control, Lund Inst. of Technology 1973
24.9 Isermann, R.; Baur, U.; Bamberger, W.; Kneppo, P.; Siebert, H.: Comparison of six on-line identification and parameter estimation methods. IFAC-Automatica 10
(1974) 81-103
Chapter 25
Chapter 26
26.1 Patchell, J.W.; Jacobs, O.L.R.: Separability, neutrality and certainty equivalence. Int.
J. Control 13 (1971) 337-342
26.2 Bar-Shalom, Y.; Tse, E.: Dual effect, certainty equivalence and separation in stochas-
tic control. IEEE Trans. Autom. Control AC 19 (1974) 494-500
26.3 Tou, J.T.: Optimum design of digital control systems. New York: Academic Press
1963
26.4 Gunckel, T.L.; Franklin, G.F.: A general solution for linear sampled-data control. J. Basic Eng. 85 (1963) 197
26.5 Feldbaum, A.A.: Optimal control systems. New York: Academic Press 1965
26.6 Kurz, H.; Isermann, R.; Schumann, R.: Development, comparison and application of
various parameter-adaptive digital control algorithms. 7th IFAC-Congress Helsinki
1978
26.7 Peterka, V.: Adaptive digital regulation of noisy systems. 2nd IFAC-Symp. on Identification, Prague 1970
26.8 Åström, K.J.; Wittenmark, B.: On self-tuning regulators. IFAC-Automatica 9 (1973)
185-199
26.9 Wittenmark, B.: A self-tuning regulator. Report 7311, Dept. of Automatic Control,
Lund Inst. of Technology 1973
26.10 Ljung, L.; Wittenmark, B.: Asymptotic properties of self-tuning regulators. Report 7404. Dept. of Automatic Control, Lund Inst. of Technology 1974
26.11 Borisson, U.: Self-tuning regulators - industrial application and multivariable theory. Report 7513, Dept. of Automatic Control, Lund Inst. of Technology 1975
26.12 Åström, K.J.; Borisson, U.; Ljung, L.; Wittenmark, B.: Theory and applications of adaptive regulators based on recursive parameter estimation. 6th IFAC-Congress. Paper 50.1, Boston 1975
26.13 Clarke, D.W.; Gawthrop, P.J.: Self-tuning controller. Proc. IEE 122 (1975) 929-934
26.14 Kurz, H.; Isermann, R.: Feedback control algorithms for parameter adaptive control - comparison and identifiability aspects. Joint Automatic Control Conference, San Francisco 1977
26.15 Kurz, H.; Isermann, R.; Schumann, R.: Experimental comparison and application of
various parameter-adaptive control algorithms. Automatica 16 (1980) 117-133
26.16 Kurz, H.: Digitale adaptive Regelung auf der Grundlage rekursiver Parameterschätzung. Diss. TH Darmstadt. Karlsruhe: Ges. f. Kernforschung, Ber. KFK-PDV 188 (1980)
26.17 Ljung, L.: On positive real transfer functions and the convergence of some recur-
sions. IEEE Trans. AC 22 (1977) 539
26.18 Gawthrop, P.J.: Some interpretations of the self-tuning controller. Proc. IEE 124
(1977) 889-894
26.19 Clarke, D.W.; Gawthrop, P.J.: Self-tuning control. Proc. IEE 126 (1979)
633-640
26.20 Egardt, B.: Stability of adaptive controllers. Lecture Notes in Control and Informa-
tion Sciences. Berlin: Springer-Verlag 1979
26.21 Kurz, H.: Digital parameter adaptive control of processes with unknown constant or time-varying dead time. 5th IFAC Symp. on Identification and System Parameter Estimation. Darmstadt 1979
26.22 Källström, C.G.; Åström, K.J.; Thorell, N.E.; Eriksson, J.; Sten, L.: Adaptive autopilots for large tankers. 7th IFAC-Congress Helsinki 1978
26.23 Dumont, G.A.; Bélanger, P.R.: Self-tuning control of a titanium dioxide kiln. IEEE
Trans. AC 23 (1978) 532-538
26.24 Clarke, D.W.; Gawthrop, P.J.: Implementation and application of microprocessor based self-tuners. 5th IFAC Symp. on Identification and System Parameter Estimation. Darmstadt 1979
26.25 Bergmann, S.; Radke, F.; Isermann, R.: Ein universeller digitaler Regler mit Mikrorechner. Regelungstech. Prax. 20 (1978) 289-294, 322-325
26.26 Bergmann, S.; Schumann, R.: Digitale adaptive Regelung einer Lüftungsanlage. Regelungstech. Prax. 22 (1980) 280-286
26.27 Buchholt, F.; Kümmel, M.: Self-tuning control of a pH-neutralization process.
Automatica 15 (1979) 665-671
26.28 Bergmann, S.; Lachmann, K.-H.: Digital parameter adaptive control of a pH
process. San Francisco: Joint Automatic Control Conference 1980
26.29 Schumann, R.; Christ, H.: Adaptive feedforward controllers for measurable disturb-
ances. Denver: Joint Automatic Control Conference 1979
26.30 Peterka, V.; Åström, K.J.: Control of multivariable systems with unknown but constant parameters. 3rd IFAC Symp. on Identification and System Parameter Estimation. The Hague 1973. Oxford: Pergamon Press
26.31 Keviczky, L.; Hetthessy, J.: Self-tuning minimum variance control of MIMO discrete
time systems. Automatic Control Theory and Applic. 5 (1977)
26.32 Borisson, U.: Self-tuning regulators for a class of multivariable systems. 4th IFAC Symp. on Identification and System Parameter Estimation. Tbilisi 1976
26.33 Schumann, R.: Identification and adaptive control of multivariable stochastic linear systems. 5th IFAC Symp. on Identification and System Parameter Estimation. Darmstadt 1979
26.34 Blessing, P.: Identification of the input-output and noise dynamics of linear multivariable systems. 5th IFAC Symp. on Identification and System Parameter Estimation. Darmstadt 1979
26.35 Schumann, R.: Digital parameter-adaptive control of an air conditioning plant. 6th
IFAC/IFIP Conference on Digital Computer Applications to Process Control.
Düsseldorf 1980
26.36 Schumann, R.; Lachmann, K.H.; Isermann, R.: Towards applicability of parameter adaptive control algorithms. Proc. 8th IFAC-Congress, Kyoto 1981. Oxford: Pergamon Press
26.37 Isermann, R.: Parameter adaptive control algorithms - a tutorial. Automatica 18
(1982) 513-528
26.38 Åström, K.J.: Theory and applications of adaptive control - a survey. Automatica 19
(1983) 471-486
26.39 Matko, D.; Schumann, R.: Comparative stochastic convergence analysis of seven recursive estimation methods. Proc. 6th IFAC-Symp. on Identification, Washington. Oxford: Pergamon Press 1982
26.40 Egardt, B.: Stability of adaptive controllers. Lecture Notes Nr. 20, Berlin: Springer-
Verlag 1979
26.41 de Larminat, Ph.: On overall stability of certain adaptive control systems. Proc. 5th
IFAC-Symp on Identification, Darmstadt, Oxford: Pergamon Press 1979
26.42 de Larminat, Ph.: Unconditional stabilizers for nonminimum phase systems.
Methods and applications in adaptive control, Lecture Notes Nr. 24, Berlin:
Springer-Verlag 1980
26.43 Schumann, R.: Digitale parameteradaptive Mehrgrößenregelung. Diss. TH Darmstadt. Karlsruhe: Ges. f. Kernforschung, Ber. PDV 217 (1982)
26.44 Radke, F.: Ein Mikrorechnersystem zur Erprobung parameteradaptiver Regelverfahren. Diss. TH Darmstadt. Fortschr.-Ber. VDI-Z. Reihe 8, Nr. 77. Düsseldorf: VDI-Verlag 1984
26.45 Matko, D.; Schumann, R.: Self-tuning deadbeat controllers. Int. J. Control 40 (1984)
393-402
26.46 Buchholt, F.; Kümmel, M.: Self-tuning control of a pH-neutralization process.
Automatica 15 (1979) 665-671
26.47 Clarke, D.W.: Introduction to self-tuning controllers. In [23.22]
26.48 Banyasz, Cs.; Keviczky, L.: Direct methods for self-tuning PID-regulators. Proc. 6th
IFAC Symp. on Identification, Washington 1982. Oxford: Pergamon Press
26.49 Ortega, R.; Kelly, R.: PID self-tuners: some theoretical and practical aspects. IEEE
Trans. Ind. Electron Control Instrum. 31 (1984) 332-338
26.50 Banyasz, Cs.; Hetthessy, J.; Keviczky, L.: An adaptive PID-Regulator dedicated for
microprocessor based compact controller. Proc. 7th IFAC-Symp. on Identification,
New York 1985, Oxford: Pergamon Press
26.51 Andreiev, N.: A new dimension: A self-tuning controller that continually optimizes
PID constants. Control Eng. Aug. (1981) 84-85
26.52 Kraus, T.W.; Myron, T.J.: Self-tuning PID controller based on a pattern recognition
approach. Control Eng. June (1984)
26.53 Åström, K.J.; Hägglund, T.: Automatic tuning of simple regulators with specifications on phase and amplitude margins. Automatica 20 (1984) 645-651
26.54 Kofahl, R.; Isermann, R.: A simple method for automatic tuning of PID-controllers
based on process parameter estimation. American Control Conference. Boston 1985
26.55 Kofahl, R.: Selbsteinstellende digitale PID-Regler - Grundlagen und neue Entwicklungen. VDI-Ber. 550. Düsseldorf: VDI-Verlag 1985
26.56 Radke, F.; Isermann, R.: A parameter-adaptive PID-controller with stepwise parameter optimization. Proc. 9th IFAC-Congress, Budapest. Oxford: Pergamon Press 1984 and Automatica 23 (1987)
26.57 Kurz, H.; Goedecke, W.: Digital parameter-adaptive control of processes with unknown constant or time-varying dead time. Automatica 17 (1981) 245-252
26.58 Isermann, R.; Lachmann, K.H.: Parameteradaptive control with configuration aids
and supervision functions. Automatica 21 (1985) 625-638
26.59 Lachmann, K.H.: Parameteradaptive Regelalgorithmen für bestimmte Klassen nichtlinearer Prozesse mit eindeutigen Nichtlinearitäten. Diss. TH Darmstadt. VDI-Fortschr.-Ber. Reihe 8, Nr. 66. Düsseldorf: VDI-Verlag 1983
26.60 Bergmann, S.: Digitale parameteradaptive Regelung mit Mikrorechner. Diss. TH Darmstadt. VDI-Fortschr.-Ber. Reihe 8, Nr. 55. Düsseldorf: VDI-Verlag 1983
26.61 Schumann, R.: Design and application of multivariable self-tuning controllers. Proc. 6th IFAC-Symp. on Identification, York. Oxford: Pergamon Press 1985
26.62 Isermann, R.; Hensel, H.: Sequential design of decentralized controllers with identification and self-tuning control. Proc. 3rd IFAC-Symp. on Computer Aided Design in Control, Copenhagen. Oxford: Pergamon Press 1985
26.63 Gawthrop, P.J.: On the stability and convergence of a self-tuning controller. Int. J. Control 31 (1980) 973-998
Chapter 27
27.1 Bertram, J.E.: The effect of quantization in sampled feedback systems. AIEE Trans. Applic. and Industry 77 (1958) 177-182
27.2 Tsypkin, Y.Z.: An estimate of the influence of amplitude quantization on processes in digital control systems. Avtomat. i Telemekh. 21 (1960) 195
27.3 Knowles, J.B.; Edwards, R.: Effect of a finite word length computer in a sampled-data feedback system. Proc. IEE 112 (1965) 1197-1207, 2376-2384
27.4 Biondi, E.; Debenedetti, A.; Rotolni, P.: Error determination in quantized sampled-
data-systems. 3rd IFAC-Congress London 1966
27.5 Koivo, A.J.: Quantization error and design of digital control systems. IEEE Trans.
Autom. Control. AC 14 (1969) 55-58
27.6 Scheel, K.H.: Der Einfluß des Rundungsfehlers beim Einsatz des Prozeßrechners. Regelungstechnik 19 (1971) 326, 329-331, 389-392
27.7 Blackman, R.B.: Linear data-smoothing and prediction in theory and practice.
Reading, Mass.: Addison-Wesley 1965
Chapter 28
Chapter 29
29.1 Föllinger, O.: Nichtlineare Regelungen. München: Oldenbourg, Bd. I 1969, Bd. II 1980
29.2 Leonhard, W.: Einführung in die Regelungstechnik. Nichtlineare Regelvorgänge.
Braunschweig: Vieweg 1970
29.3 Glattfelder, A.H.: Regelungssysteme mit Begrenzungen. München: Oldenbourg 1974
29.4 Preßler, G.: Regelungstechnik. Mannheim: Bibliographisches Inst. 1967
Chapter 30
30.1 Baur, U.; Isermann, R.: On-line identification of a heat exchanger - a case study. IFAC-Automatica 13 (1977)
30.2 Baur, U.: On-line Parameterschätzverfahren zur Identifikation linearer dynamischer Prozesse mit Prozeßrechnern. Karlsruhe: Ges. f. Kernforschung, Ber. KFK-PDV 65 (1976)
30.3 Blessing, P.; Baur, U.: On-line-Identifikation von Ein- und Zweigrößenprozessen mit den Programmpaketen OLID. VDI-Ber. 276 'Prozeßmodelle 1977'. Düsseldorf: VDI-Verlag
30.4 Mann, W.: 'OLID-SISO'. Ein Programm zur On-line-Identifikation dynamischer Prozesse mit Prozeßrechnern - Benutzeranleitung. Karlsruhe: Ges. f. Kernforschung, Ber. E-PDV 114 (1978)
30.5 Dymschiz, E.; Isermann, R.: Computer aided design of control algorithms based on identified process models. 5th IFAC/IFIP-Conference on Digital Computer Applications to Process Control. The Hague 1977
30.6 Dymschiz, E.: Rechnergestützter Entwurf von Regelungen mit Prozeßrechnern und dem Programmpaket CADCA. VDI-Ber. 276 'Prozeßmodelle 1977'. Düsseldorf: VDI-Verlag
30.7 Hensel, H.: CADA/CAFCA. Ein Programmpaket zum rechnergestützten Entwurf von Regelalgorithmen. Karlsruhe: Ges. f. Kernforschung PDV-E 117 (1983)
30.8 Dymschiz, E.: A process computer program package for interactive computer aided design of multivariable control systems. 2nd IFAC/IFIP Symp. on Software for Computer Control. Prague 1979
30.9 Isermann, R.: Rechnerunterstützter Entwurf digitaler Regelungen mit Prozeßidentifikation. Regelungstechnik 32 (1984) 179-189, 227-234
30.10 Isermann, R.: Digital control methods for power station plants based on identified
process models. Proc. IFAC-Symp. on Automatic Control in Power Generation,
Pretoria 1980. Oxford: Pergamon Press
30.11 Mann, W.: Identifikation und digitale Regelung eines Trommeltrockners. Diss. TH
Darmstadt. Karlsruhe: Ges. f. Kernforschung, PDV-Ber. 189 (1980)
30.12 Mann, W.: Digital control of a rotary dryer in the sugar industry. 6th IFAC/IFIP Conference on Digital Computer Applications. Düsseldorf 1980. Automatica 19 (1983) 131-148
30.13 Mann, W.: Identifikation und digitale Regelung eines Trommeltrockners für Zuckerrübenschnitzel. Regelungstechnik 29 (1981) 263-269, 305-311
30.14 Mosel, P.; Feuerstein, E.; Peters, P.; Scholze, G.: Führung einer Trommeltrockneranlage für Preßschnitzel mit einem Prozeßrechner. Zuckerindustrie 105 (1980) 554-561
30.15 Hensel, H.; Isermann, R.; Schmidt-Mende, P.: Experimentelle Identifikation und rechnergestützter Regler-Entwurf bei technischen Prozessen. Chem.-Ing. Tech. 58 (1986) 875-887
Subject Index