
NUJES Al Neelain University Journal of Engineering Sciences, Vol. 1, No. 1, December 2013

Self-Tuning Controllers: The Link between Theory and Applications


G. E. I. Mohammed*, B. K. Abdalla**, I. El-Azhary***
* International University of Africa, Khartoum, Sudan, gasmelseedibrahim@hotmail.com
** Karary University, Khartoum, Sudan, babiker.k.abdalla@gmail.com
*** Al-Neelain University, Khartoum, Sudan, ielazhary@hotmail.com

Abstract
In this paper an extensive review of the theoretical
aspects of self-tuning model predictive controllers
(MPC) is presented. Despite the existence of many
computer-based modern control methods, it is shown that
such a formulation is still quite suitable for
implementation in many types of industrial processes.
Key Words: Self-tuning, Adaptive control, Model Predictive Control, on-line trimming.

1- Introduction
Self-tuning control has increased in popularity in recent
years, both as a research topic and as a new adaptive method
for application to certain control problems. Much work
has been done on the theoretical aspects of self-tuning, in
defining and validating algorithms, but less has appeared
on the practicalities of the technique. This article covers
the link between the theory of model predictive
control (MPC) and the application of self-tuning control,
and the practicalities that connect the two. The point is
made that self-tuning affords the user a powerful
means of designing, on-line, fixed control laws that tune
the system to any of a range of more or less acceptable
modes. Although the theoretical background of this
method of control is quite old, recent implementations
have appeared in the literature (Mohammed, et. al., 2010,
Fissore, 2009, Nikolaou, 2009, Daaou, et. al., 2008,
Muske, 1995).
2- Background and Literature Review
The idea of adaptive control grew largely after the Second
World War with the development of supersonic aircraft,
whose control surface dynamics displayed considerable
changes with flight altitude and speed. The first mention
of the term adaptive control appeared in Draper and
Li's classic paper (Draper and Li 1951) on the control of
the internal combustion engine.

Model reference adaptive controllers (MRAC), in
which the plant output is kept, adaptively, close to a
desired (model) response, originated in the aircraft
industry and developed into such methods as the
Whitaker servo (Whitaker, et. al., 1958). The early
MRAC controllers suffered from problems of
instability, and there were some early failures in MRAC
implementations.
Parks, in 1966, presented the first rigorous analysis of
the stability of MRAC methods using Lyapunov's
technique. Since the late 1950s MRAC controllers have
been much developed and refined. The reader is
referred to Hand (Hand 1973), Hang and Parks (Hang
and Parks 1973), Landau (Landau 1972, 1974) and
Lindorff and Carroll (Lindorff and Carroll 1973) for
comprehensive surveys and comparisons of MRAC
techniques.
In 1958 Kalman (Kalman 1958) formally initiated the
idea of stochastic adaptive control, using a digital
computer, in which the controller was represented by a
difference equation whose coefficients were determined
from an estimated pulse- transfer-function of the
system. At the same time, the idea of linear quadratic
cost functions was developed by Kalman and Koepcke
(Kalman and Koepcke 1958), and sampled data
controller design by other means was covered by Ragazzini
and Franklin (Ragazzini and Franklin 1957).
The idea of dual control was developed from Bellman's
principles (Bellman 1957) by Feldbaum (Feldbaum
1961, Feldbaum 1965) in 1961. Dual control involves
the simultaneous optimization of identification and
control in a complex scheme including weighting of
possible future plant measurements. True dual control
is difficult to implement but has been realized in a
number of simplified forms by Mendes (Mendes 1971),
Astrm and Wittenmark (Astrm and Wittenmark
1971), Tse et. al. (Tse 1973, Tse and Athans 1972, Tse
and Bar-Shalom 1973, Tse, Bar-Shalom and Meier
1973) and Alster (Alster and Belanger 1974).
Astrm in 1965 (Astrm 1965) and Farison et. al. in
1967 simplified the dual control concept by using
Feldbaum's principle of enforced separation
(Feldbaum 1961) of the estimator and controller. The
simplification evolved into certainty equivalence
controllers, in which control coefficients are calculated as
if the estimation result were correct, and cautious
controllers, in which some weighting is given to the
uncertainty in the parameter estimates. The cautious
control principle has been used by Farison (Farison, et. al.
1967), Murphy (1968), Wieslander and Wittenmark
(1971), Ku and Athans (1973), Tse (1973) and Cegrell
(1978).
Cautious control, although far simpler than true dual
control, is still difficult to implement (Bar-Shalom and
Tse 1974). The certainty equivalence controllers are more
straightforward; amongst these are the so-called self-tuning algorithms. The first self-tuning algorithms used
the minimum variance control strategy which was
described by Peterka in 1970 (Peterka 1970) and Astrm
(Astrm 1970) and was proved by Peterka in 1972
(Peterka 1972). Peterka in 1970 (Peterka 1970)
demonstrated what was probably the first modern
self-tuner, and a similar method was covered in Astrm and
Wittenmark's comprehensive paper of 1971 (Astrm and
Wittenmark 1971).
The classic paper 'On Self-tuning Regulators' (Astrm
and Wittenmark 1973) defined the modern form of the
self-tuning regulator. A fuller description appears in
Wittenmark's thesis of 1973 (Wittenmark 1973). The
minimum variance self-tuning approach was extended by
Peterka and Astrm in 1973 (Peterka and Astrm 1973) to
cover multivariable systems, and Astrm and Wittenmark
in 1974 (Astrm and Wittenmark 1974) described how
non-minimum-phase systems could be handled by a
factorization approach. Clarke and Hastings-James in
1971 (Clarke and Hastings-James 1971) proposed a
modified minimum variance strategy in which a
weighting was given to minimizing control effort. This
method evolved into the Lambda controller described by
Clarke in a definitive paper of 1975 (Clarke and
Gawthrop 1975) and an Oxford University Engineering
Laboratories report of 1975. Clarke (Clarke and
Gawthrop 1975) mentioned a feed-forward approach to
set-point handling, as did Astrm et. al. in 1977 (Astrm,
et. al., 1977). Weighted minimum variance control has
also been described in a state space self-tuner by Wouters
in 1974.
The more general ideas of quadratic cost function
optimization (Gunckel and Franklin 1963, Joseph and
Tou, 1961, Kalman and Koepckel 1958) have been
incorporated into self-tuning schemes by Gawthrop
(Gawthrop 1977), whose self-tuners closely resemble
some MRAC schemes and have some parallels with the
Smith controller (Ioannides, et. al., 1979, Smith 1958,
Smith 1959).
An alternative approach to the sampled data control
problem comes from pole placement ideas (Ragazzini and
Franklin 1957) and such ideas have been used in
self-tuning controllers. Wellstead and Carvalhal (Wellstead
and Carvalhal 1975, Clarke, et. al., 1975)
have described frequency domain adaptive controllers
for continuous systems, which lead directly to the
digital adaptive pole-shifting self-tuner of Edmunds
(Edmunds 1977, Wellstead and Edmunds 1975).
The pole shifting self-tuner has been the subject of
research by the authors and others (Wellstead, et. al.,
1979, Wellstead, et. al., 1978). The pole shifter is inherently
able to control non-minimum-phase systems. It has also
been formulated in a set-point feed-forward
configuration (Wellstead and Zanker 1979, Wellstead
and Zanker 1978). The same basic algorithm was
described by Wouters in 1977.
Various certainty equivalence controllers exist which
are not true self-tuners, but which resemble, to a greater
or lesser extent, self-tuning algorithms. In 1971 Turtle
and Phillipson (Turtle and Phillipson 1971) reviewed
some alternative schemes resembling self tuners. Sardis
and Lobbia (Sardis and Lobbia 1972) described a self
adaptive scheme based upon stochastic approximation.
Morris and Abaza in 1973 (Morris and Abaza 1973)
described a scheme using an identification algorithm
and pole placement law to tune a P + I controller on a
steam turbine, and Joshi and Kaufman (Joshi and
Kaufman 1973) presented an adaptive scheme which
identified and controlled a second order (approximate)
model of a complex plant. McGreavy and Gill
(McGreavy and Gill 1975) described a self-tuning
method which optimized a standard PI controller, and
Deshpande (Deshpande 1977) has described a state
space linear quadratic cost self-tuner.
The details of the modern self-tuner, from basic
mathematics to computer implementations, are covered
in Wittenmark's report of 1973 (Wittenmark 1973), Clarke
et. al.'s report of 1975 (Clarke, Cope and Gawthrop
1975) (which also explains Peterka's square root RLS
filter), Wellstead and Zanker (Wellstead and Zanker
1978) and Wellstead (Wellstead 1978).
Work on stability and convergence in self-tuning
systems has been covered by Ljung and Wittenmark in
1974 (Ljung 1974a, Ljung 1974b, Ljung and
Wittenmark 1974). They formulated a complicated
differential equation to be solved, and their method has
been much simplified by Thiruarooran (Thiruarooran
1978). Convergence properties of the pole shifting
self-tuner have been covered by Wellstead (Wellstead, et. al.,
1979, Wellstead, et. al., 1978). Ljung's work on the stability
of self-tuning systems has continued (Ljung 1977a,
Ljung 1977b).
Practical applications of self-tuners have been reported by
Jensen and Hansel (Jensen and Hansel 1974) (enthalpy
exchanger), Cegrell and Hedqvist (Cegrell and Hedqvist
1975) (paper machine), Wouters, 1974, 1977
(continuously stirred tank chemical reactor), Borrison
and Syding (Borrison and Syding 1976) (ore crusher),
Sastry and Seborg (Sastry, et. al., 1977) (distillation
column), Keviczky (Keviczky, et. al., 1978, Csaki, et.
al., 1978) (cement making, multivariable), Kallstrom
et. al. (Kallstrom, et. al., 1978, Kallstrom 1979,
Kallstrom, et. al., 1977, Kallstrom 1974) (ship steering),
Clarke et. al. (Clarke, et. al., 1973) (multivariable boiling
rig), Borrison and Wittenmark (Borrison and
Wittenmark 1973) (paper machine), Dumont and
Belanger (Dumont 1977, Dumont and Belanger 1978)
(titanium dioxide kiln), Morris et. al. (Morris, Fenton and
Nazer 1977, Morris, Nazer and Chisholm 1979)
(distillation columns etc.) and Horn (Horn 1978) (batch
chemical reactors).
The authors have knowledge of a number of other
projects active at the time of writing. D. Clarke's group at
Oxford University has developed an MPU-based portable
self-tuner (Cope 1976) and has done some work on steel
soaking pits. A. J. Morris's group at Newcastle
University has worked on auto electrolysis, and T.
Fortiscue at Imperial College, London has worked on
distillation columns. A. Johnson at Delft University,
Holland is working on batch fermenters.
The authors have (unofficial) knowledge of two
independent studies into the self-tuning control of gas
turbine engines, and knowledge of a study into the
self-tuning control of petrol engines.
Wittenmark in 1971 (Wittenmark 1971) published a
survey of adaptive control methods, including MRAC,
hill climbers and stochastic methods, and in
1975 (Wittenmark 1975) published a very thorough
survey including 86 references. Bar-Shalom and
Gershwin in 1978 (Bar-Shalom and Gershwin 1978)
published a brief trends-and-opinions article on
adaptive control and its applications. Kurz et. al. in 1978
published a survey covering self-tuning and alternative
certainty equivalence controllers (Clarke and
Gawthrop 1975). Vaneeck (Vaneeck 1978) published a
survey article briefly covering several topics in
self-adaptive control, and Clarke and Gawthrop (1979)
published a paper on self-tuning control (Mendes,
1971).
Self-tuning applications include studies on nuclear
power plant (De Kayser and Van Cauwenberghe 1978),
and D'Hulster and Van Cauwenberghe (D'Hulster and
Van Cauwenberghe 1978) have applied a complex new
algorithm to an ethane/propane/naphtha cracking
furnace.
3- Self-Tuning Control
Self-tuning control is the name given to a class of
autonomic control algorithms in which three levels of
activity are combined (a minimal sketch of the resulting
loop is given below):
(i) A digital, sampled data model of an unknown
system is estimated, using measurements of the system's
output(s) and input(s).
(ii) A controller synthesis algorithm is employed to
generate the parameters of a digital control law, based
entirely upon the parameters of the system as estimated
above.
(iii) The control law specified above is used to generate
the plant control, which then self-tunes the output.
The numerical complexity of recursive estimation
schemes means that self-tuners can practically only be
implemented as part of a digital computer control
scheme.
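As an illustration of how the three levels interlock within a digital computer control scheme, the following sketch (Python, purely illustrative; the plant interface, estimator object and synthesis rule are hypothetical placeholders, not taken from the paper) performs one pass of the loop per sampling interval.

# Hypothetical skeleton of the three-level self-tuning loop described above.
# `plant`, `estimator` and `synthesize` are illustrative stand-ins: the
# estimator represents the recursive identification scheme, `synthesize`
# the controller-synthesis rule (minimum variance, pole shifting, ...).
def self_tuning_loop(plant, estimator, synthesize, n_steps):
    u = 0.0
    for _ in range(n_steps):
        y = plant.read_output()          # sample the plant output
        theta = estimator.update(u, y)   # (i)  update the sampled-data model
        law = synthesize(theta)          # (ii) synthesize a digital control law
        u = law(y)                       # (iii) generate the next control action
        plant.write_input(u)             # apply it and wait for the next sample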
Self-tuning control differs from other
automatic control methods in two notable ways:
(i) It is primarily a digital sampled data control scheme
for application to analog continuous systems.
(ii) Its parameter estimation methods are based upon
maximum extraction of information from the
measurements, rather than upon hill climbing around
functions of the system responses.
Even so, it can intuitively be compared with hill-climbing
techniques in that it involves updating a
control law with an optimization routine which
attempts to minimize some error signal.
Historically, self-tuning is regarded as a stochastic
adaptive control method, developed around the optimal
regulation and optimal identification problems. It falls
into the class of certainty equivalence controllers,
since no allowance is made in the control law for the
uncertainty in the parameter estimates, and no
weighting is given to producing controls which could, in
some optimal way, help the estimator to identify the
system (Wittenmark 1975).
Within the class of self-tuners, individual algorithms
differ from each other (i) in the identification method
used and (ii) in the controller synthesis cycle. The most
usual identification method used is recursive least squares
(RLS) although other algorithms such as generalized least
squares (GLS), extended least squares (ELS) and
recursive maximum likelihood (RML) can be employed
(Astrm and Eykhoff 1971, Kailath 1974). Many forms
of controller synthesis have been used, including
minimum variance (Astrm and Wittenmark 1973),
weighted minimum variance (Clarke and Gawthrop 1975),
detuned minimum variance (Wellstead, et. al., 1978a),
pole assignment (Wellstead, et. al., 1978b) and model
reference.
The attraction of self-tuning control to the control
engineer lies firstly in its ability to automate the
identification/control synthesis cycle, thus affording the
user a potentially very powerful on-line controller design
method. Its second attraction lies in its property of tuning
to a system and continuing to tune in the face of changes
in the system dynamics, thus providing a tracking
control system for time-varying or otherwise non-linear
plant.
Self-tuning automates the identification/synthesis cycle
but does not automatically choose a closed loop response.
This choice is left to the control engineer. Indeed, the
engineer has a great deal of freedom in specifying the
form of closed loop responses and in trading off servo
responses against regulation properties.
3.1 Mathematical Background
The concept of a self-tuning regulator as explained above is
relatively intuitive, but a detailed understanding of
self-tuners requires some explanation of the mathematics
involved. This section covers some of the properties
of z-transforms not normally covered in standard texts,
including some explanation of the effects of non-integer
time delays. A formulation is introduced which
simplifies the writing down of some equations in
self-tuning and which will be used throughout this work.
The basic ideas of self-tuners were discussed above,
and other introductions to the concepts are given by
Wellstead (Wellstead 1978, Wellstead and Zanker 1978),
Clarke (Clarke, et. al., 1975) and Wittenmark
(Wittenmark 1973). The details which distinguish various
self-tuning algorithms are:
(i) The off-line controller design rule employed.
(ii) The inputs to the estimation (or minimization) routine,
i.e. the details of the variables updated to minimize the
target function.
(iii) The target function to be minimized, and
(iv) The estimation method used.
Recursive Least Squares (RLS) estimation has been used
throughout this work, although it is possible to construct
self-tuners using other estimation methods (Kurz, et.
al., 1978). Recursive Least Squares has advantages
over other algorithms in its inherent simplicity, and in
that no explicit estimation of noise coloration is
needed. The choice of RLS as the estimation method
also defines the target function to be minimized.
Briefly, RLS whitens its estimation residual, and thus
extracts maximum information from its input data.
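The recursion implied above can be written down compactly; the following minimal sketch (Python, not from the paper; parameter names and defaults are assumptions) corrects the estimate with a gain computed from the covariance matrix, and the forgetting factor discussed later in the text controls how quickly old data are discounted.

import numpy as np

class RecursiveLeastSquares:
    """Minimal RLS estimator with exponential forgetting (illustrative sketch)."""

    def __init__(self, n_params, forgetting=0.98, p0=1000.0):
        self.theta = np.zeros(n_params)       # parameter estimates
        self.P = p0 * np.eye(n_params)        # covariance matrix
        self.lam = forgetting                 # forgetting factor

    def update(self, phi, y):
        """phi: regressor of past outputs/inputs, y: new measurement."""
        phi = np.asarray(phi, dtype=float)
        e = y - phi @ self.theta              # prediction error (residual)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * e       # correct the estimate
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta, e

For the system models introduced below, the regressor phi would simply stack the delayed outputs and inputs, e.g. [-y(t-1), ..., u(t-k), ...].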
The self-tuning algorithms detailed below differ in the
control laws that they use, and in the inputs of the RLS
estimation routine. Two basic forms of control law will
be discussed: Minimum Variance rules (Astrm and
Wittenmark 2008, Astrm 1970), in which controller
coefficients are chosen to minimize the variance of the
system output or some function thereof, and Pole Shifting
rules, in which closed loop pole positions are specified and
control coefficients are chosen such that system zeros are
not cancelled (Wellstead, et. al., 1978a, 1978b).
Two basic system models will be used:
(i) yt = -AS yt + z^{-k} BS ut + (1 + C) et
(ii) yt = z^{-k} AS yt + z^{-k} BS ut + (1 + C) et
The two models are equivalent, although the second is a
k-step-ahead form (Astrm 1970). The representation of
the system noise as a moving average process filtered by
the system's autoregressive dynamics is, of course, a
mathematical fiction. The self-tuning algorithm having
been specified, there remain two questions: (i) can the
self-tuning property be proved? and (ii) can stability be
proved?
Proofs of the self-tuning properties of the algorithms
discussed below have been presented by many authors
(Prager, et. al., 1978, Clarke and Gawthrop 1975,
Ljung 1974, Ljung and Wittenmark 1974, Edmunds
1977). The proofs offered by Prager in particular are
relevant to the self-tuners used. Criteria for the stability
of the basic minimum variance self-tuner of Astrm
and Wittenmark (Wittenmark 1973, Astrm and
Wittenmark 1973) have been developed (Astrm, et.
al., 1977, Ljung and Wittenmark 1974) and further
refined and simplified by Thiruarooran (Thiruarooran
1978), but no satisfactory general stability criteria exist
for the algorithms used in this controlling method. A
proof of the self-tuning property is given by Prager et.
al. 1978, and Wellstead, et. al. 1978.
The applicability of the algorithm to systems displaying
varying or unknown pure time delays merits further
explanation. It can be seen from the discussion above
that if Ke is fixed equal to 1, irrespective of the true
system time delay, then the correct number of
parameters will be tuned if nu = nb + k - 1 and ny = na.
Time delays in the system, if k >= 2, will be represented
by leading zero coefficients in BS. Should the true system
time delay become less than k, then trailing zero
coefficients of F will emerge, with no degradation of
control. Excess parameters will be tuned, but F and G
will remain uniquely defined. Should the true system time
delay ever exceed k, then the estimator will become
under-parameterized, the control solution will become
sub-optimal, and self-tuning, in the strict mathematical
sense, will be lost. A change in the system's pure time
delay can, therefore, be regarded in the same light as any
other change in the system's numerator BS polynomial
when the pole shifter is applied.
The equations given can be seen to represent a set of
simultaneous equations to be solved on-line at each
iteration of the self-tuning algorithm. These equations
impose a severe computation time penalty over the
minimum variance algorithms. The equation may be
solved by any standard technique (Gaussian elimination
etc.), or by using the RLS algorithm, thus saving some
computer space. The equations can be formulated as a
matrix, in which a structure becomes apparent. Such
structure may be exploited in the future to construct a
fast algorithm for solving the equation set (Kailath
1974).
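As an illustration only (assuming the on-line pole-assignment identity (1 + Ae)(1 + F) + z^{-k} Be G = (1 + T) given later in the text, and using a plain dense solver rather than any structure-exploiting fast method), the equation set can be assembled and solved as follows; the function name and the example numbers are hypothetical.

import numpy as np

def solve_pole_assignment(a, b, k, t):
    """Solve (1 + A)(1 + F) + z^{-k} B G = (1 + T) for F and G (a sketch).

    a : [a1, ..., a_na]  coefficients of A(z^-1), no constant term
    b : [b0, ..., b_nb]  coefficients of B(z^-1)
    t : [t1, ..., t_nt]  coefficients of the target polynomial T(z^-1)
    k : integer pure time delay, k >= 1
    """
    na, nb = len(a), len(b) - 1
    nf, ng = nb + k - 1, na - 1          # minimal controller orders (an assumption)
    n = na + nb + k - 1                  # coefficient equations for powers 1..n
    one_plus_a = np.r_[1.0, a]

    def pad(vec, length):
        out = np.zeros(length)
        out[:len(vec)] = vec
        return out

    # Each unknown coefficient of F or G contributes a known polynomial to the
    # left-hand side; stack these contributions as the columns of a matrix.
    M = np.zeros((n, nf + ng + 1))
    for i in range(nf):                               # columns for f_{i+1}
        basis = np.zeros(nf + 1); basis[i + 1] = 1.0
        M[:, i] = pad(np.convolve(one_plus_a, basis), n + 1)[1:]
    for j in range(ng + 1):                           # columns for g_j
        basis = np.zeros(ng + 1); basis[j] = 1.0
        M[:, nf + j] = pad(np.r_[np.zeros(k), np.convolve(b, basis)], n + 1)[1:]

    # Right-hand side: coefficients of (1 + T) - (1 + A) for powers 1..n.
    rhs = pad(np.r_[1.0, t], n + 1)[1:] - pad(one_plus_a, n + 1)[1:]
    theta = np.linalg.solve(M, rhs)
    return theta[:nf], theta[nf:]

# Illustrative numbers: A = -1.5 z^-1 + 0.7 z^-2, B = 1 + 0.5 z^-1, k = 1,
# desired closed-loop polynomial 1 + T = 1 - 0.9 z^-1.
f, g = solve_pole_assignment([-1.5, 0.7], [1.0, 0.5], 1, [-0.9])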
3.2 The Extended Pole Shifter
The extended pole shifter represents the pole
shifting version of the Astrm-Wittenmark algorithm, and
has been mentioned by Wellstead et.al. (Wellstead, et.
a1., 1978). The algorithm was introduced in parallel with
the pole shifter , with the object of studying their relative
merits. The two methods have proved to display very
similar properties both mathematically and practically.
The pole shifter, however, consumes less computation
time than does the extended pole shifter, thus is generally
more useful.
3.3 Modifications to Wittenmark's Method
Wittenmark drew his method (Wittenmark 1973)
from the intuitions of classical control (Horowitz 1963),
and similarly other schemes suggest themselves.
Consider, for example, the problem of controlling a type
1 system which displays a small, slowly changing and
unpredictable drift input to its constituent integrator
(position control of a system having analog gyro rate
feedback, for instance). Classical control tells us that
simply cascading an integrator would make the system
difficult to control (double integrator), and that P + I is
the obvious answer.
Returning to the two basic system model equations, ut
can be replaced by any xt defined by
xt = [(1 + X)/Y] ut
such that
(1 + A)(1 + X) yt = z^{-k} B Y xt + (1 + C)(1 + X) et + (1 + A)(1 + X) rt
It can be shown by the same argument that
xt = z^k {[(1 + A)(1 + C) - (1 + C)](1 + X) / [B(1 + C) Y]} yt
is the minimum variance solution, and that
xt = [G(1 + X)/(1 + F)] yt
is the pole shifting solution, with
ut = [Y/(1 + X)] xt
in both cases. The transfer function Y/(1 + X) must, of
course, be inverse stable, and the orders of the
estimated A and B polynomials must be increased by
the orders of X and Y respectively.
Consider the P + I block
ut = k xt + [(1 - k)/(1 - z^{-1})] xt
or
ut = k xt + Vt, where Vt = (1 - k) xt + Vt-1
with Vt kept explicitly as the state of the integrator,
with the usual considerations of initialization, limiting
and prevention of winding up.
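A small sketch of this block (Python; the gain and the limits are illustrative values, not taken from the paper), keeping the integrator state explicit and clamped against wind-up:

def p_plus_i(x_sequence, k=0.8, u_min=-10.0, u_max=10.0):
    """Discrete P + I block u_t = k*x_t + V_t with V_t = (1-k)*x_t + V_{t-1},
    with the integrator state held explicitly and clamped to prevent wind-up.
    (Illustrative sketch; gain and limits are assumed values.)"""
    v = 0.0                                             # integrator state V_t, initialized to zero
    u_out = []
    for x in x_sequence:
        v = max(u_min, min(u_max, v + (1.0 - k) * x))   # integrate, then limit
        u = max(u_min, min(u_max, k * x + v))           # proportional + integral, limited
        u_out.append(u)
    return u_out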
A second appeal to engineering intuition suggests that
if k, the proportional gain, approaches unity, the effect
of the integral term on the estimator should be
negligible, i.e. the estimated A polynomial need not be
extended to allow for the integrator, and the
sub-optimality due to estimating too few parameters
should in this case be hardly noticeable. Furthermore,
a multiplication could be saved by re-writing ut as
Vt = xt + Vt-1
ut = xt + Vt
the function Y/(1 + X) then becoming (1 + z^{-1})/(1 - z^{-1}).

3.4 Minimum Variance Laws
Referring back to sections 3.2 and 3.3, it can be seen
that in both minimum variance methods the regulation
law is defined as
u~t = [G/(f0 + F)] yt = [Ae'' T''/Be''] yt = [Ae'' T''/(BS(1 + C))] yt
The system under minimum variance control is
assumed to be inverse stable, hence it can be put as
SF = (1 + AS)/(z BS) = (1 + z^{-k} AS'')/(z BS'')
and SP = z^{-1} G/(f0 + F).
Now substitute for SF, SP and u~t in Equation E3.60:

[(1 + AS)(1 + C) + z^{-1}(T'' - Ae'')] yopt = z^{-(k+1)} [(1 + AS)(1 + C) + z^{-k}(T'' - Ae'')] spt + (1 + C)(1 + C) et
thus
yopt = z^{-(k+1)} spt + [(1 + C)/(1 + z^{-k} T'')] et

This equation should be compared with the regulation-only or
regulation-plus-set-point cases. The set point has been
incorporated in a dead-beat way (Wittenmark 1973),
essentially by inverting the system, while the regulation
remains independent of the set point.
The estimation schemes for servo self-tuning are derived
as simple extensions of the feedback regulator cases.
Pole Shifting Laws
The pole shifting control laws were
designed, amongst other things, to control NMP systems
(Edmunds 1977, Wellstead, Prager, Zanker and Edmunds
1978). In this spirit the formulation of the pole shifting
servo self-tuners can be done in such a way that inverse
unstable systems can be handled. Dead-beat control is not,
therefore, attempted.
The pole shifting regulation law is defined as
u~t = [G/(1 + F)] yt
with G and F defined by
(off-line case) (1 + AS)(1 + F) + z^{-k} BS G = (1 + T)(1 + C)
or
(on-line case) (1 + Ae)(1 + F) + z^{-k} Be G = (1 + T)
Now, if SF and SP are defined as
SF = (1 + AS)/B(1) and SP = BS/B(1)
where B(1) denotes the algebraic sum of the coefficients
of BS(z^{-1}), it can be guaranteed that no zeros of BS
appear as controller poles.
Substituting in Equation E3.60 it can be found that
yopt = z^{-k} [BS/B(1)] spt + [(1 + F)/(1 + T)] et
The set point response reflects the system numerator
dynamics, but not the noise coloration dynamics, which
appear (transiently) in the set-point handling of
Wittenmark's method.
Properties of the Servo Self-Tuners
The servo self-tuner's formulation doubles the number
of unknown parameters to be optimized on line, with
respect to their self-tuning regulator precursors.
Computational effort in RLS increases as the square of
the number of parameters tuned. Increased
computational load can be a severe handicap in
applying servo self-tuners in practical situations where
iteration time may be critical, and this burden is the
penalty paid for the increased degree of control
achievable.
The alternative feed forward schemes require fewer
parameters to be tuned than do the controllers, and this
can be a considerable advantage. Simulation studies,
however, indicate that the controllers converge faster
and more exactly than do the equivalent feed forward
formulations. In particular, estimates of C polynomials in
the alternative formulations tend to be very noisy.
Servo self-tuners were designed to decouple the
regulation and servo-following problems in self-tuning,
but they have another advantage. The feed forward
schemes give, in addition to self-optimizing control,
unbiased estimates of the system under control
(Kailath, 1974). The formulations of sections 3.2 and
3.3 will give estimates of AS and BS, and with the
estimates of Ae and Be (or Ae'' and Be'') the C
polynomial can be deduced. The alternative
formulations give estimates of Ae, Be and C (or Ae'',
Be'' and C), from which AS and BS can be deduced.
Implementation of Servo Self-Tuners
The implementation problems encountered when using
servo self-tuners are, in most respects, identical to those
found when using self-tuning regulators, and need not
be reiterated here. There are, however, some additional
problems, and these are discussed below.
(i) Computation time
The extra computation time needed to run a servo
self-tuner can become a serious limitation in fast systems,
and even in quite slow systems the extra pure time delay
can be an embarrassment: a minimum phase
system could be made NMP when minimum variance
servo self-tuning is attempted, and pole shifting servo
self-tuning, which consumes considerably more
computer time, may run too slowly to control the
system adequately.
(ii) Number of parameters
Servo self-tuners require more parameters to be tuned
than do equivalent regulators. More computer storage is
therefore required, more run time data is produced and
more initialization is required for start-up.
(iii) Tuning in transients
Servo self-tuners have two blocks of control parameters
which need to tune in to the system, and each can
produce tuning transients even when the other has
settled. Tuning in on simulated systems should usually
be carried out by first tuning to noise or disturbances with
zero set point and then cautiously introducing set point
fluctuations. Tuning times are usually longer and
transients worse than in the feedback self-tuners.
(iv) Saturation limits
The importance of reflecting back saturation limits in
feedback self-tuners was discussed, and the same
arguments apply to servo self-tuning. A complication
arises when using the servo laws: the control ut is made up
of two parts, u~t and xt. Limits must be reflected back to
one or both of these before and/or after they are added to
form ut. The best procedure is by no means certain, but
the method adopted in the examples in the next section
was to limit ut after it was formed and reflect the limit
back to u~t.
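A minimal sketch of that rule (Python; variable names are illustrative, and it is assumed for illustration only that the two parts combine additively): form ut from its two parts, clamp it, and reflect the clamped value back onto the u~t part so that the estimator sees the control actually applied.

def limit_and_reflect(u_tilde, x, u_min, u_max):
    """Form u = u_tilde + x, clamp it to the actuator limits, and reflect the
    saturation back onto u_tilde (illustrative sketch of the rule above)."""
    u = u_tilde + x
    u_limited = max(u_min, min(u_max, u))
    u_tilde_reflected = u_limited - x     # the value fed back to the estimator
    return u_limited, u_tilde_reflected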
Choice of Control Law
The minimum variance control laws can only be
applied to systems which are minimum phase (in the
z-plane) over their entire operating ranges. NMP systems
occur very frequently in practice: most systems of third
order and above are NMP, and most systems with
significant time delays are NMP. Most systems with
non-integer time delays over 50% of the sampling interval
are NMP, and such time delays can frequently be
attributed to the computation time of the algorithm. The
pole shifting controller can cope with NMP systems and
with varying or unknown pure time delays, and hence is a
most practical general purpose algorithm. The extended
pole shifter has no advantages over the pole shifter and
need not be used in practice.
NMP systems can be tackled with the Lambda
controller, which requires much less computational effort
than does the pole shifter. Closed loop pole positions can,
however, vary with changing system dynamics and might
even drift into instability. The minimum variance
algorithms are most useful for controlling fast, low order
systems with negligible pure time delays. The shortened
minimum variance algorithm runs slightly faster than the
Astrm-Wittenmark law and tunes the same control law
for ke = 0 or 1. The Astrm-Wittenmark algorithm is
optimal for any ke value; most systems with large time
delays, however, tend to be NMP.
Feed forward versions of the regulation laws may be
used where servo responses are important but require
considerably more computational effort than do the
feedback laws.
Tuning Behavior and Block Traces
The degree of tuning of a self-tuning controller in
practice must be a trade-off between precision of
regulation and control effort, optimality at an operating
point and tracking ability, and disturbance rejection and
step response, with saturation limits, slew rate limits and
sampling rate limits to be taken into consideration.

Several factors can be tuned online by the control
engineer to set up the best self-tuner for a particular
problem; these include tailoring polynomials and
forgetting factors. Controls and system responses can
be watched on an oscilloscope or chart recorder, but the
tuning behavior of the controllers can best be
summarized qualitatively and quantitatively in the
block traces of the covariance matrix. These traces are
defined as the sums of the diagonal elements in the
covariance matrix corresponding to each set of
estimated parameters (usually two, the A and B
parameter sets, but there can be three or four in the servo
self-tuners). The traces can be manipulated online
either with the forgetting factor or with random walks
or by injecting disturbances or test signals (Wellstead
and Zanker 1978).
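For monitoring purposes, the block traces can be computed directly from the diagonal of the RLS covariance matrix; a small Python sketch follows (function and argument names are illustrative, not taken from the paper).

import numpy as np

def block_traces(P, block_sizes):
    """Sum the diagonal of the covariance matrix P over each block of
    estimated parameters, e.g. block_sizes = (na, nb + 1) for the A and B
    parameter sets (illustrative sketch)."""
    d = np.diag(P)
    traces, start = [], 0
    for size in block_sizes:
        traces.append(float(d[start:start + size].sum()))
        start += size
    return traces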
The traces can be regarded as measures of the
uncertainty in each block of parameter estimates;
alternatively they can be related to the inverse of the
information in each data stream (e.g. control, and
system error from set point) and may be seen as
measures of the total amounts of data presented to the
estimator. The numerical values of the block traces at
any iteration of the estimator depend upon the
variances of the data streams; thus the trace associated
with the A parameters and the system error might be
very different from the trace associated with the B
parameters and control signals. Additionally, it is
impossible to assign general numerical values to qualitative
terms such as 'high' traces and 'low' traces.
Numerical values depend upon the problem in question.
High traces indicate uncertainty in the
parameter estimates or poor tuning. High or steadily
increasing traces in a running self-tuner indicate that
too little information is coming in to balance the
forgetting factor. The appropriate action for the
engineer is to increase the forgetting factor towards
unity and to introduce probing disturbances or set point
variations to give the estimator information to tune to.
If traces fall low when controlling a linear
system and the control law and system responses are
satisfactory then the most appropriate action may be to
read off the self-tuned control law and implement it as
a fixed-term digital algorithm. If traces fall low when
controlling a non-linear system then the self-tuner has
probably tuned too well to a steady operating point and
would be unable to track any dynamic changes in the
system. The appropriate action is to decrease the
forgetting factor to achieve a compromise between
tracking ability and control at each operating point.
Systems which present long periods of
quiescence (steady input, steady output, little or no
noise and no set point changes) tend to allow the
estimator to blow up (the traces build up to very high
values and the system goes unstable). This situation can
be avoided by putting the forgetting factor to unity, by
injecting probing disturbances, or by freezing the
estimator (bypassing the RLS routine) until some system
error is detected.

The block traces can level off to high values or
increase steadily even while disturbances or set point
changes are present. Such symptoms indicate that the
self-tuner cannot tune to the system, and it may be
necessary to repeat the control experiment using a
different number of estimated parameters. The self-tuner
may also optimize itself to some point which is not what
the control engineer wanted. He can force the self-tuner
to re-tune by decreasing the forgetting factor and
injecting random walks of 1% or 2% into each block trace.
Disturbance injection may help; alternatively, large set
point changes can be imposed to swamp the system noise.
Spurious detuning of the controller can cause
large control and system output excursions which drive
the block traces to low values and thus prevent retuning.
The traces can be boosted with a low forgetting factor or
by injecting random walks, and the estimator allowed
to retune.
The block traces provide a simple measure of the
self-tuning controller's state and indicate to the engineer what
actions should be taken to trim the controller's behavior.
Responses can also be shaped by changing the tailoring
polynomial, to speed up or slow down step responses or
dampen oscillations. The sampling interval can also be
trimmed online in an experimental manner in order to
improve responses. Set point slew rate limits will, of
course, improve set point following and should be used
where a genuine set point step change would require
excessive control actions.
Non-Linearity
One of the principal motivations for implementing
self-tuning control schemes is their parameter-adaptive
capability: their ability to tune and re-tune and to track
time-varying or otherwise changing system dynamics. The
self-tuner will attempt to extract a linear model of the
system and apply the appropriate control. This
linearization property, plus the adaptive capability, makes
self-tuners ideal for controlling systems with operating-point
non-linearity, i.e. systems which display a range of
dynamics but which are reasonably linear about any
particular operating point. Saturation is a very common
non-linearity which can easily be accommodated in
self-tuning schemes by the method mentioned.
Self-tuners can have great difficulty in coping with some
non-linearities, notably dead zone and backlash. These
non-linearities imply a system gain falling to zero within the
normal range of control swings about an operating point.
The self-tuner will tend to estimate a low system gain and
put out excessive controls if, say, a backlash occupies
more than a few percent of the normal range of controls.
The authors have attempted to handle dead zone and
backlash in self-tuned systems by a method analogous
to that used to handle control saturations. Controls
generated by the self-tuner which were within the system
dead zone or backlash were set to zero, and this value was
reflected back into the estimator. The approach was not
successful, and it has been concluded that the effects of
backlash and dead zone should be alleviated by either
signal techniques or special feedback techniques,
external to the self-tuner.
Fixing Self-Tuned Control Laws
Self-tuning may be regarded as an online
automatic method of designing high order digital
control laws, and as such represents a powerful design
technique. Self-tuning allows the engineer to set up a
controller which can track changing dynamics, but
some systems display unchanging dynamics or a small
and definable range of dynamic regions.
Such systems may not need free-running self-tuning
control; fixed-term linear controllers (or a small
range of such controllers with appropriate switching)
would be more suitable. A self-tuner can be set up to
control the system at some steady operating point, the
controller allowed to settle to give what the engineer
accepts as a good response, and the resulting control law
then fixed, in place of conventional mathematical
modeling and z-domain design techniques.
The permanent installation of self-tuning controllers on
plant is difficult today for three reasons:
(i) Self-tuners require a lot of computer power both in
terms of speed and storage, thus are expensive from a
hardware point of view.
(ii) Much effort must be expended in constructing
safety features, tailored to the individual plant, to
ensure that the self-tuner can never detune, blow up or
otherwise become unsafe and that it retains the correct
level of tracking ability under all circumstances. Such
effort is expensive from a software viewpoint.
(iii) Self-tuners are still very much at a research stage
in their development, hence would encounter strong
resistance from industry, which likes to use well
established control methods such as PID, and still has a
memory of some of the spectacular failures of
autonomic control in the past.
These difficulties imply that the use of self-tuning
algorithms as a laboratory design tool probably
represents the most immediate practical application of
the techniques.
A problem does exist in fixing self-tuned control laws,
namely that of the computational time delay. This
represents a fractional pure time delay which might
form an important part of the system controlled.
Self-tuning algorithms consume a great deal of computer
time, but the control law, when fixed, needs only a
simple vector multiplication and some data
manipulation at each iteration. The fixed control law
could, therefore, run faster than the self-tuner. If (i) the
computation times of the self-tuner and fixed law are
significantly different, and (ii) the computation time of
either law represents a significant proportion of the
sampling interval, then the two laws will not be
controlling the same system: there would be a timing
mismatch and a possible degradation in control when the
self-tuned law is fixed.

Linking the Model with Self-Tuning
The model equations provided in the sections above
were used to design the different graphical user
interfaces (GUIs) used in this simulation. The MATLAB
help file provided the procedure for the formulation
and design of the GUI (MATLAB User Guide 2004). The
designed GUI was then provided with the model
equations and the industrial parameters.
The application was made on two refinery units, mainly
the Atmospheric Distillation Unit (ADU) and the
Residual Fluid Catalytic Cracking Unit (RFCC), by making
different manipulations to the inputs, namely temperature
and feed concentration. The results of these
implementations are explained in detail elsewhere
(Nikolaou, 2009, Muske, 1995, Daaou, et. al., 2008,
Fissore, 2009, Mohammed, et. al., 2010).
Conclusion
Extensive studies were made into the problems of
initialization of self-tuners, and results and guidelines
have been presented. Two methods of handover of control
have been presented, namely handover by prior
simulation, and handover by open loop tuning to the
system under fixed-term digital control.
The point has been made that self-tuned control laws can
be fixed as conventional digital control laws. It has been
asserted that self-tuning can be used, to good effect, as an
online controller design tool. The control
problems of some non-linear systems were examined, and
some distinctions drawn to classify those non-linearities
with which self-tuning can and cannot cope. The
application of these schemes, coupled with the MPC
and the model and design equations of the industrial
equipment under investigation, showed clear and
progressive success in using this type of control
system for industrial purposes.

References
Astrm K.J. and Wittenmark B., 2008, Adaptive Control,
second edition, pp. 51-52, Dover.
Alster J. and Belanger P.R., 1974, A Technique for Dual
Adaptive Control Automatica 10 pp627-634.
Astrm K.J., 1965, Optimal Control of Markov Processes
with Incomplete State Information, Part 1, J. Math. Anal.
Applic. 10 pp 174-205; Part 2, J. Math. Anal. Applic. 36
pp 403-406 (1969).
Astrm K.J. 1970, Introduction to Stochastic Control
Theory Academic Press. New York.
Astrm K.J., Borrison U. Ljung L. and Wittenmark
B.,1977, Theory and Applications of Self-Tuning
Regulators, Automatica 13 pp. 457-476.
Astrm K.J. and Eykhoff P., 1971, System Identification -
A Survey, Automatica 7 pp 123-162.
Astrm K.J., Westerberg B. and Wittenmark B., 1978,
Self-Tuning Controllers Based on Pole-Placement Design,
Lund Report LUTFD2/(TFRT-3148), Lund Inst. Tech.,
Sweden.
Cope S.N., 1976, The Microprocessor Implementation of
a Self-tuning Controller, IEE Colloquium on Self-tuning
and Adaptive Systems.
Astrm K.J., and Wittenmark B., 1971, Problems in
Identification and Control J.Math, Anal. Applic, 34
pp. 90-113.
Astrm K.J., and Wittenmark B., 1972 On the Control
of Constant but Unknown Systems, Proc. IFAC 5th
World Congress, Paris, France
Astrm K.J., and Wittenmark B., 1973 On Self-tuning
Regulators, Automatica 9 pp. 185-199.
Astrm K.J., and Wittenmark B., 1974, Analysis of a
Self-tuning Regulator for Non-minimum Phase Systems,
IFAC Symposium on Stochastic Control, Budapest.
Barker R.H., 1952, The Pulse Transfer Function and
its Application to Sampling Servo Systems, Proc. IEE
99, 4, pp. 302-317.
Bar-Shalom Y. and Gershwin S.B., 1978, Applicability
of Adaptive Control to Real Problems-Trends and
Opinions, Automatica 14 pp. 407-408.
Bar-Shalom Y. and Tse E., 1974 Dual Effect, Certainty
Equivalence and Separation in Stochastic Control
IEEE Trans. Aut. Control AC-19pp.494-500.
Bellman R.E., 1957, Dynamic Programming,
Princeton University Press, Princeton N.J.USA.
Borrison U, and Syding, 1976, Self-Tuning Control
of an Ore Crusher, Automatica 12 pp. 1-7 .
Borrrison U. and Wittenmark B., 1973, Moisture
Control of a paper Machine : An Industrial Application
of a Self-Tuning Regulator Lund Div. Aut. Control
Report 7337, Lund Inst. Tech., Sweden.
Cegrell T., 1978, Some Aspects on a Class of
Self-Adjusting Regulators, Proc. IFAC 4th Symposium on
Identification and System Parameter Estimation,
Tbilisi, USSR.
Cegrell T. and Hedqvist T., 1975 Successful Adaptive
Control of Paper Machines, Automatica 11 pp. 53-59.
Clarke D.W., Cope S.N. and Gawthrop P.J. 1975,
Feasibility Study of the Application of Microprocessors
to Self-tuning Controllers, Oxford Univ. dept. of Eng.
Sci. report 1137/75, oxford University .
Clarke D.W., Dyer D.A.J., Hastings-James R., Ashton
R.P. and Emery J.B., 1973, Identification and Control
of a Pilot-Scale Boiling Rig, Proc. IFAC 3rd
Symposium on Identification and System Parameter
Estimation, The Hague/Delft, Holland.
Clarke D.W. and Gawthrop P.J., 1975, Self-Tuning
Controller, Proc. IEE 122 no.9.
Clarke D.W. and Gawthrop P.J., 1979, Self-Tuning
Control, Porc. IEE 126 no. 6 pp.633-640.
Clarke D.W. and Hastings-James R., 1971, Design of
Digital Controllers for Randomly Disturbed Systems,
Proc. IEE 118 pp. 1503-1540.

Csaki F., Keviczky L.K., Hetthessy J., Hilger M. and


Kolostori J., 1978, Simultaneous Adaptive Control of
Chemical Composition, Fineness and Maximum Quantity
of Ground Materials at a Closed Circuit Ball Mill, Proc.
IFAC 7th World Congress, Helsinki, Finland.
Daaou, M., Mansouri, A., Bouhamida, M. and Chenafa,
M., A Robust Nonlinear Observer for State Variables
Estimation in Multi-Input Multi-Output Chemical
Reactors, Intr. J. Chem. Reac. Eng. Article A86, V6,
2008.
De Kayser R. and Van Cauwenberghe A.R. 1978,
Simulation and Self-Tuning Control in a Nuclear Power
Plant in Troch I,(Ed.) Simulation of Control System
North-Holland.
Deshpande J.G.,1977, Adaptive Control Using Splines,
Proc. 4th National Systems Conference, PSG Coll. Of
Tech,. Indiana, USA.
D'Hulster F.M. and Van Cauwenberghe A.R., 1978, An
Adaptive Controller for Slow Processes with Variable
Control Periods, Proc. IFAC 7th World Congress,
Helsinki Finland.
Draper C.S. and Li Y.T., 1951, Principles of Optimizing
Control Systems and Applications to the Internal
Combustion Engine ASME, New York, USA.
Dumont G., 1977, Control and Optimization of Titanium
Dioxide Kilns, Ph.D. Thesis, Dept. of Electrical Eng.,
McGill University.
Dumont G.A. and Belanger P.R., 1978, Self-Tuning
Control of a Titanium Dioxide Kiln, IEEE Trans. Auto.
Control AC-23 pp 532-538.
Edmunds J.M., 1977, Digital Adaptive Pole Shifting
Controllers, Ph.D. Thesis, UMIST, Manchester.
Farison B., Graham R.E. and Shelton Jr. R.C., 1967,
Identification and Control of Linear Discrete Systems,
IEEE Trans. Aut. Control, AC-12 pp. 438-442.
Feldbaum A.A., 1961, Theory of Dual Control, Parts
I,II,III and IV Automation and Remote Control 9 and 11
and 1 and 2 (1962).
Feldbaum A.A., 1965, Optimal Control Systems,
Academic Press, New York.
Fissore, D., On the MPC of the Methanol Synthesis in a
Simulated Moving Bed, Chem. Prod. & Proc. Modeling,
Article 39, V4, Iss1, 2009.
Gawthrop P.J., 1977, Some Interpretations of the Self-Tuning Controller, Proc. IEE, 124 no. 10.
Gunckel II T.L. and Franklin G.F., 1963, A General
Solution for Linear Sampled Data Control Trans. ASME
Series D,J. Basic Eng. 85 pp 197.
Hand C.C., 1973, Design Studies of Model Reference
Adaptive Control and Identification Systems, Ph.D.
Thesis, Univ. of Warwick, U.K.
Hang C.C. and Parks P.C., 1973, Comparative Studies of
Model Reference Adaptive Control Systems, IEEE
Trans. Aut. Control AC-18 pp 419-428.
Landau I.D., 1972, Model Reference Adaptive Systems.
A Survey, Trans. ASME, G, J. Dyn. Syst. Meas. Control,
94 pp 119-132.
Horn J.E., 1978, Feasibility Study of the Application of
Self Tuning Controllers to Chemical Batch Reactors,
OUEL Report 1248/1978, Oxford Univ., U.K.
Horowitz I.M., 1963, Synthesis of Feedback
Systems, Academic Press.
Ioannides A.C., Rogers G.J. and Latham V., 1979,
Stability Limits of a Smith Controller in Simple
Systems Containing a Time Delay, Int. J. Control ,29
,no. 4, pp 557-563.
Jensen L. and Hansel R., 1974, Computer Control of
an Enthalpy Exchanger, Report 7417, Div. of Aut.
Control, Lund Inst. Tech., Sweden.
Joseph P.D. and Tou J.T., 1961, On Linear Control
Theory Trans. AIEE, 80, pt II, pp 193.
Joshi S. and Kaufman H., 1973, Digital Adaptive
Controllers Using Second Order Models with Transport
Lag Proc. IFAC 3rd Symposium on Identification and
System Parameter Estimation, The Hague/Delft,
Holland.
Kailath T., 1974, A View of Three Decades of Linear
Filtering Theory, IEE Trans. Inf. Theory IT-20pp.
145-181.
Kallstrom C.G., 1974, The Sea Scout Experiments,
Report 7407
(c), Div. Aut. Control , Lund Inst.
Tech., Sweden.
Kallstrom C.G., 1979, Identification and Adaptive
Control Applied to Ship Steering, Lund report no.
LUTDS2/(TFRT-1018)/1-192/(1979), Lund Inst. Tech.,
Sweden.
Kallstrom C.G., Astrm K.J., Throell N.E., Eriksson,
and Sten L., 1977, Adaptive Autopilots for Steering of
Large Tankers, Lund Report no. LUTFD2/(TFRT3145)/1-66/(1977), Lund. Inst. Tech., Sweden.
Kallstrom C.G. Astrm K.J., Thorell N.E., Eriksson J.
and Sten L., 1978, Adaptive Autopilots for Large
Tankers, Proc. IFAC 7th World Congress, Helsinki,
Finland.
Kalman R.E., 1958, Design of a Self-Optimizing
Control System, Trans. ASME.
Kalman R.E. and Koepcke R.W., 1958, Optimal
Synthesis of Linear Sampling Control Systems Using
Generalized Performance Indexes, Trans. ASME, 80,
pp. 1820.
Keviczky L., Hetthessy J., Hilger M. and Kolostori J.,
1978, Self-Tuning Adaptive Control of Cement Raw
Material Blending, Automatica, 14 pp 525-532.
Ku R. and Athans M., 1973, On the Adaptive Control
of Linear Systems using the Open Loop Feedback
Optimal Approach IEEE Trans. Aut. Control, AC-18,
pp 489-493.
Kurz H., Isermann R. and Schumann R., 1978,
Development, Comparison and Application of Various
Parameter-Adaptive Digital Control Algorithms, Proc.
IFAC 7th World Congress, Helsinki, Finland.
Peterka V., 1970, Adaptive Digital Regulation of Noisy
Systems, 2nd IFAC Symposium on Identification and
Process Parameter Estimation, Prague.

Landau I.D., 1974, A Survey of Model Reference
Adaptive Techniques Theory and Application,
Automatica, 10, pp 353-379.
Lindorf D.P. and Carroll R.L., 1973, Survey of Adaptive
Control using Liapunov Design, Int. J. Control, 18, pp
897-914.
Ljung L., 1974a, Convergence of Recursive Stochastic
Algorithms Lund Report 7403, Div. Aut. Control, Lund.
Inst. Tech., Sweden.
Ljung L., 1974b, Convergence of Recursive Stochastic
Algorithms, Proc. IFAC Symposium on Stochastic
Control, Budapest, pp 215-223.
Ljung L., 1977a, Analysis of Recursive Stochastic
Algorithms, IEEE Trans. Aut. Control, AC-22, pp 551-575.
Ljung L., 1977b, On Positive Real Transfer Functions
and the Convergence of some Recursive Schemes, IEEE
Trans. Aut. Control, AC-22,pp 539-550.
Ljung L. and Wittenmark B., 1974, Asymptotic
Properties of Self-Tuning Regulators, Report 7404,Div.
Aut. Control, Lund Inst. Tech., Sweden.
McGreavy C. and Gill P.J., 1975, Self-Tuning State
Variable Methods for DDC, 2nd IEE Conference on
Trends in on Line Computer Control Systems, Sheffield,
UK.( IEE Conference publ.127)
Mendes M., 1971, An On-Line Adaptive Control
Method, Automatica, 7, pp 323-332.
Morris A.J., Fenton T.F. and Nazer Y.,1977,
Application of Self-Tuning Regulators to the Control of
Chemical Processes, IFAC Congress on Digital
Computer Applications to Process Control, The Hague,
Holland.
Morris A.J., Nazer Y. and Chisholm K.,1979, Single
and Multivariable Self-tuning Microprocessor-Based
Regulators, 3rd IEE Conference on Trends in On Line
Computer Control Systems, Sheffield, UK, ( IEE
Conference publ. 172).
Morris E.L. and Abaza B.A., 1973, Adaptive Digital
Control of a Steam Turbine, Proc. IEE, 123,pp 549-553.
Murphy W.J., 1968, Optimal Stochastic Control of
Discrete Linear Systems with Unknown Gain IEEE
Trans. Aut. Control, AC-13 pp 338-344.
Muske, K.R., Linear Model Predictive Control of
Chemical Processes, PhD Dissertation, The Univ. of
Texas at Austin, May 1995.
Nikolaou, M. Model Predictive Controllers : A Critical
Synthesis of theory and Industrial Needs Chemical Eng.
Dept., University of Houston, Houston, TX, U.S.A.,
2009.
Parks P.C., 1966, Lyapunov Redesign of a Model
Reference Adaptive Control System, IEEE Trans. Aut.
Control AC-11, pp 362-367.
Wellstead P.E. and Edmunds J.M., 1975, On Line
Process Identification and Regulation, 2nd IEE
Conference on Trends in On Line Computer Control
Systems, Sheffield, (IEE Conf. Publ. 127).

Peterka V., 1972, On Steady State Minimum Variance
Control Strategy, Kybernetica, 8, 3, pp 219-231.
Peterka V. and Astrm K.J., 1973, Control of
Multivariable Systems with Unknown but Constant
Parameters, Proc. IFAC 3rd Symposium on
Identification and System Parameter Estimation, The
Hague/Delft, Holland.
Prager D.L. and Wellstead P.E., 1979, Multivariable
Pole Assignment Self Tuning Regulators, CSC report
452, Control Systems Centre, UMIST, Manchester,
UK.
Ragazzini J.R. and Franklin G.F., 1957, Sampled Data
Control Systems, McGraw-Hill, New York.
Sardis G.N. and Lobbia R.N.,1972, Parameter
Identification and Control of linear Discrete-Time
Systems, IEEE Trans. Aut. Control,AC-17, pp. 52-60.
Sastry V.A., Seborg D.E. and Wood R.K., 1977, Self
Tuning Regulator Applied to a Binary Distillation
Column, Automatica, 13 pp 417-424.
Smith O.J.M., 1958, Feedback Control System,
McGraw Hill, New York.
Smith O.J.M., 1959, A Controller to Overcome Dead-Time, Instrum. Soc. Am. J., 2.
Thiruarooran C., 1978, Identification and Control of a
Turbocharged Diesel Engine, Ph.D. Thesis, UMIST,
Manchester.
Tse E., 1973, Further Comments on Adaptive
Stochastic Control for a Class of Linear Systems,
IEEE Trans. Aut. Control, AC-18, pp 324-325.
Tse E. and Athans M., 1972, Adaptive Stochastic
Control for a Class for Linear Systems, IEEE Trans.
Aut. Control, AC-17, pp. 38-51.
Tse E. and Bar-Shalom Y., 1973, An Actively
Adaptive Control for Linear Systems with Random
Parameters via the Dual Control Approach, IEEE
Trans. Aut. Control, AC-18, pp. 109-117.
Tse E., Bar-Shalom Y. and Meier III L., 1973, Wide-Sense Adaptive Dual Control for Nonlinear Stochastic
Systems IEEE Trans. Aut. Control, AC-18,pp. 98-108.
Turtle D.P. and Phillipson P.H., 1971, Simultaneous
Identification and Control, Automatica,7,pp. 445-453.
Vaneeck A., 1978, On Compensators, Adaptive
Filters and Self Adjusting Regulators, Proc. IFAC 7th
World Congress, Helsinki, Finland.
Wellstead P.E., 1978, Basic Ideas in Self-Tuning,
CSC report 431, Control Systems Center, UMIST,
Manchester.
Wellstead P.E. and Carvalhal F., 1975, Digital
Computer Simulation of Seismic Disturbances,
Simulation Council Conference on Computer Simulation,
Bowness-on-Windermere.
Wellstead P.E. and Zanker P., 1979, Servo Self-tuning,
Int. J. Control, 30, no. 1, pp. 27-36.
Whitaker H.P., Yarmon J. and Kezer A., 1958, Design
of a Model Reference Adaptive Control System for
Aircraft, MIT Instrumentation Lab. Report R164, MIT,
USA.


Wellstead P. Edmunds J.M., Prager D. and Zanker P.,
1979, Self-tuning Pole/Zero Assignment Regulators,
Int. J. Control, 30 no. 1,pp.1-26.
Wellstead P.E., Prager D. and Zanker P., 1978, A pole
Assignment Self-Tuning Regulator, CSC report 434,
Control Systems Center, UMIST, Manchester.
Wellstead P.E., Prager D. and Zanker P., 1979, Pole
Assignment Self-Tuning Regulator, Proc. IEE, 126 no.
8, pp. 781-787.
Wellstead P.E., Prager D., Zanker P. and Edmunds J.M.,
1978, Self-Tuning Pole/Zero Assignment Regulator,
CSC report 404, Control Systems Center, UMIST,
Manchester.
Wellstead P.E. and Zanker P., 1978, The Techniques of
Self-tuning CSC report 432, Control Systems Center,
UMIST, Manchester.
Wellstead P.E. and Zanker P., 1978, Servo Self-Tuning, CSC report 419, Control Systems Center,
UMIST, Manchester.

Wieslander J. and Wittenmark B., 1971, An
Approach to Adaptive Control Using Real Time
Identification , Automatica, 7 pp. 211-217.
Wittenmark B., 1971, A Survey of Adaptive Control
Methods, Report 7110, Div. Aut. Control, Lund Inst.
Tech., Sweden.
Wittenmark B., 1973, A Self-tuning Regulator
Report 7311, Div. Aut. Control, Lund Inst. Tech.,
Sweden.
Wittenmark B., 1975, Stochastic Adaptive Control
Methods- A Survey , Int. J. Control, 21, pp 705-730.
Wouters W.R.E., 1974, Parameter Adaptive
Regulators, Control for Stochastic SISO Systems,
Theory and Applications, Proc. IFAC Symposium on
Stochastic Control, Budapest.
Wouters W.R.E., 1977, Adaptive Pole Placement for
Linear Stochastic Systems with Unknown Parameters
Proc. IEEE Conference on Decision and Control, New
Orleans, USA.
