
ALGORITHMS FOR INDUSTRIAL MODEL PREDICTIVE CONTROL.


D. J. Sandoz, M. Desforges, B. Lennox and P. Goulding
Control Technology Centre Ltd.
School of Engineering
University of Manchester
July 1999
Synopsis
This paper is concerned with control methods that have been embedded in an industrial
Model Predictive Control software package and which have been applied to a wide
variety of industrial processes. Three methods are described and the various features
are evaluated by application to a constrained multivariable simulation. The relative
attributes are contrasted by assessing the ability of the controllers to recover effectively
from the impact of a large unmeasured disturbance. One particular method, that
employs Quadratic Programming to manage Cost Function minimisation and
Manipulated Variable constraints together with degrees of freedom prioritisation to
manage Controlled Variable constraints, is suggested as the most appropriate for
general purpose application.
ABBREVIATIONS
ARX   Auto-Regressive-eXogenous
CV    Controlled Variable
DMC   Dynamic Matrix Control
FIR   Finite Impulse Response
FSR   Finite Step Response
FV    Feedforward Variable
GPC   Generalised Predictive Control
LP    Linear Programming
LQ    Linear Quadratic
LR    Long Range
LRQP  Long Range QP
MPC   Model Predictive Control
MV    Manipulated Variable
NARX  Nonlinear ARX
QDMC  Quadratic DMC
QP    Quadratic Programming
RBF   Radial Basis Function
RFC   Residual Feedforward Compensation
RGA   Relative Gain Array
SQP   Sequential Quadratic Programming
SVD   Singular Value Decomposition
INTRODUCTION
This paper describes and compares algorithms that have been employed for the exploitation of Model Predictive Control (MPC) in industry.
The best-known algorithm for industrial MPC is that of Dynamic Matrix Control (DMC). This algorithm was developed in the late 1970s (Cutler and Ramaker, 1980) in association with the Shell Oil Company in the USA. Today DMC is so standard that it is taught in many undergraduate control engineering courses and yet it retains its position as market leader in industry. There are various competitors to DMC, e.g. RMPCT, SMOC, Perfector, etc. (Qin and Badgwell, 1996), and each carries its own style and idiosyncrasy. The only UK-developed offering that is internationally competitive is the control engineering associated with the software product Connoisseur (Sandoz, 1996). Another very strong UK influence in this area of technology has been Generalised Predictive Control or GPC (Clarke et al., 1987). However, although GPC has become a standard in academic circles, it has not been engineered into any industrial product of current day significance.

The initial exploitation of the control engineering of Connoisseur was in 1984, with application to Cement Kilns, Spray Drying Towers and Engine Test Beds (Hesketh and Sandoz, 1987). More recent applications have related to Petro-chemical processes, Grinding Mills, Power Generation Plant and Steel Annealing Furnaces (Warren, 1992; Norberg, 1997; Sandoz et al., 1999).
Since that time there has been a slow
evolution in the capability of this control
engineering, to some extent moving to
keep pace with developments in
numerical procedures and computing
power. There has not been any detailed
description of the progression of this
engineering within the literature. This
paper seeks to redress this omission
and details three particular methods that
are now available as options for use with
MPC schemes.
1. The LR Method. This employs the Linear Quadratic (LQ) method from State-space Optimal Control Theory (Jacobs, 1974) in association with prioritisation of CVs (Controlled Variables) to provide a basis for pragmatic management of degrees of freedom in consequence of current and anticipated constraints (Sandoz, 1984). This method has been the backbone of most industrial applications associated with Connoisseur to date.
2. The Quadratic Programming (QP) Method. This utilises QP to minimise the LQ cost function and simultaneously to resolve constraints associated with both CVs and MVs (Manipulated Variables), i.e. the soft and hard constraints respectively. QP has also been employed to produce an enhanced version of DMC known as QDMC (Garcia and Morari, 1986; Prett and Garcia, 1988). QDMC has been exploited widely by the Shell Oil Company in the USA although application has been restricted to within this company.
3. The LRQP Method. This is a
combination of the above two
approaches. QP is used to solve the
cost function minimisation and to
simultaneously resolve constraints
associated with MVs. Soft
constraints are resolved using the
same prioritisation approach as with
the LR Method.
Comparative details of these methods
are presented and simulation studies
are used to highlight the various
attributes. The LRQP method is argued
to be the most favoured approach for
industrial exploitation.
All three algorithms employ the same
cause to effect model structure to
describe process dynamics. This is a
multivariable time series format that may
take on either an ARX or an FIR structure (Sandoz and Wong, 1979).
Many industrial MPC approaches, DMC
being one, restrict consideration to FIR
or FSR structures. An ARX structure can
be a much more compact representation
of process dynamics and hence the
identification of ARX model coefficients
is more efficient, particularly when being
done in real-time adaptive
mode (Hesketh and Sandoz, 1997).
However, the ARX form is less accurate
in representing the steady-state
relationships of the process and is
therefore not best suited for use with
Linear Programming (LP) optimisation.
On the other hand, the ARX form gives rise to superior control performance in rejecting the influence of unmeasured disturbances. An example of this largely unappreciated factor is illustrated below.
All industrial MPC algorithms employ the
Moving Horizon principle. At each
control instant a set of moves is computed to cover the complete design horizon (i.e. the future period of time across which the cost function is minimised), although only the first move is implemented, which is the one that relates to current time.
are discarded. In practice this approach
is essential to compensate for modelling
error and for the influence of
disturbances. If an infinite horizon is
considered, then for the linear and
unconstrained case the Moving Horizon
approach gives identical results to those
achieved by successively implementing
the full set of computed moves.
However, if the horizon is not infinite( in
the sense that the solution has not
converged) or if responses are
moderated because of the need to
honour CV constraint boundaries, then
the results will not be the same. Of
particular concern is the management of
CV constraints that are predicted to
arise in the future. The accuracy of such
prediction when only the first move of a
design is applied is questionable. Some
methods of MPC focus upon response
profiling across the future horizon but
then compromise such analysis by
employing Moving Horizon. Some of
the implications of such compromise are
considered.
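The moving-horizon cycle can be summarised in a short sketch; the plan_moves routine below is a hypothetical stand-in for whichever design calculation (LR, QP or LRQP) is in use, and the numbers are illustrative only:

import numpy as np

def plan_moves(n_mv, horizon=20):
    # hypothetical stand-in for the full design calculation: a real solver would
    # minimise the cost function over the complete horizon; here it returns zero moves
    return np.zeros((horizon, n_mv))

def moving_horizon_step(n_mv):
    moves = plan_moves(n_mv)      # moves computed for the whole design horizon
    return moves[0]               # only the move relating to current time is
                                  # implemented; the future moves are discarded

du_now = moving_horizon_step(n_mv=3)   # applied to the plant at this interval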
The QP approaches are based upon a
parametric analysis in that the Hessian
and Gradient coefficients that are
required by the QP are computed using
the dynamic time-series model. An
alternative, that is now becoming available in the product, is to use SQP (Sequential QP), which calculates approximate coefficients from model predictions and which effects a gradient-based search mechanism (Bazaraa et al., 1993). SQP is computationally less efficient but does have the merit of being much simpler in code and also of being tractable with non-linear process relationships (Qin and Badgwell, 1998).
Some of the practicalities of exploiting
SQP are reviewed.

LINEAR MODELS FOR MPC


Industrial Process Dynamics may be
characterised by a variety of different
model types. Signals for such models
are usually grouped into three
categories:
CVs, plant outputs that can be monitored;
MVs, plant inputs that can be manipulated; and
Feed-forward Variables (FVs), plant inputs that cannot be manipulated and which disturb the process.
For MPC, consideration normally restricts to models of two basic categories: Parametric State-Space (ARX) or Non-parametric Finite-Step/Finite-Impulse Response (FSR/FIR). In fact both of these categories may be subsumed by one general purpose multivariable time-series format, subject to the presence and scale of various dimensions. Thus, consider the general purpose linear model equation:

Y_{k+1} = A \cdot Y_k + B \cdot U_k + C \cdot V_k + d          (1)

with
k the sampling instant, and k -> k+1 the sampling interval,
y a vector of p CVs that the model predicts,
Y a vector of R samples of y, i.e. Y_k = [ y_k^T, y_{k-1}^T, ..., y_{k-R+1}^T ]^T,
U a vector of S samples of u, with u a vector of m MVs, and
V a vector of S samples of v, with v a vector of q FVs.
The dimensions of the transition matrix
A and the sampled vector Y are set by
the number of CVs and the orders of
process dynamics that prevail for each
cause to effect path of the model. The
transition matrix may be considered as a composite of all of the transfer function denominators associated with these paths. The offset vector d is not required if differences in the samples of y, u and v are taken across each sampling interval (e.g. use y_k - y_{k-1} rather than y_k, etc.). The dimensions of the
driving matrices B and C are set by the
number of MVs and FVs and by
however many samples S are necessary
in U and V to establish a model that
accurately reflects the process.
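For illustration, a minimal numerical sketch of one-step prediction with equation 1 is given below; the dimensions and the (random) coefficient values are purely illustrative:

import numpy as np

p, R = 2, 3            # number of CVs and stacked CV samples
m, q, S = 2, 1, 4      # numbers of MVs and FVs, and stacked input samples

rng = np.random.default_rng(0)
A = np.zeros((p * R, p * R))                     # transition matrix (zero for a pure FIR form)
B = 0.01 * rng.standard_normal((p * R, m * S))   # MV driving matrix (illustrative values)
C = 0.01 * rng.standard_normal((p * R, q * S))   # FV driving matrix (illustrative values)
d = np.zeros(p * R)                              # offset; unnecessary if data are differenced

Y_k = np.zeros(p * R)                            # stacked history of CV samples
U_k = np.zeros(m * S)                            # stacked history of MV samples
V_k = np.zeros(q * S)                            # stacked history of FV samples

Y_k1 = A @ Y_k + B @ U_k + C @ V_k + d           # equation 1: prediction one interval ahead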
This vague statement firms up very
easily if the equation is representative of
an FIR or FSR structure. The transition
matrix is removed (i.e. the order of
dynamics may be considered to be zero
on all paths so that dynamic behaviour
is represented entirely by the transfer
function numerators). The number of
samples for any path is then as many as
required to encompass the time span to
settle to steady state following a step
change in any input (i.e. the sum total of
any pure time delay and dynamic
transients). The overall dimensions of B
and C are thereby defined by the paths
with the maximum sampling
requirements and the matrices are
padded with zeros to maintain validity. It
can be that the matrices are quite
sparse. A mechanism of cross-referencing pointers is then desirable
and this can dramatically reduce
workspace storage demands (which can
be very large, particularly when the
model involves a mix of fast and slow
response issues).
If a transition matrix is declared, then in
principle the dimensions of B and C
should reduce significantly.
For any input, there needs to be at least
as many samples as are needed to span
the largest time delay into any CV,
otherwise the equation would not be
causal. A few extra samples may be
necessary in order to validate the
description of dynamics, should this be
complex. Further, if there are multiple
delay phenomena, for example such as arise with processes that involve the recycling of material streams, then the
sampling must extend to cover their
successive impacts.
Many industrial technologies restrict to
the FIR/FSR form, with the implication of
very large model structures for
sometimes very simple situations. Large
structures impose significant
computational burden in solving for
control moves, but this is of little
consequence except for very large
systems, given the state of today's low-cost computer power. However, such
structures do create problems for
statistical identification methods
because of the large number of
coefficients that have to be determined.
The identification of large numbers of coefficients requires large amounts of data, which imposes a time constraint on
delivering good results, irrespective of
computer power (e.g. by requiring
extended exposure to plant for data
collection and experimentation). This is
particularly the case for real-time
identification which runs in parallel with
MPC and which may, for example, be
associated with a scheme for adaptive
modelling and control.
The State-space parametric or ARX
forms can be much more compact and
therefore more suited for efficient
identification of their parameters.
However there are downsides. Accurate
prediction for such a model is dependent
upon good reflection of dynamics within
the sampled history of the CVs. Should
these signals be subject to significant
levels of noise then, subject to sampling
intervals and the ability to employ
suitable filtering, the ability of the model
to accurately predict steady state from a
dynamic state reference can be
compromised. In addition, multivariable
structures that involve both fast and
slow dynamics can present difficulties.
For example it is very common for a
controller to involve CVs that are liquid levels, which respond in seconds, and other CVs which are analysers
monitoring some chemical composition,
which respond in tens of minutes.
Proper description of the dynamic
behaviour of the levels requires a very
short sampling interval which then has
to be imposed upon the model segment
that describes the analysers. Accurate
representation of analyser behaviour
then relies upon a high degree of
precision in the transition matrix
coefficients which is not practicable,
particularly with noise present.
The form of model used also has
practical implications on controller
operation, particularly with respect to
initialising a controller in the first place.
The very first move that a controller
makes can only be correct if the
sampled history in the model vectors
properly reflects earlier plant behaviour.
If the vectors are large, waiting time
before a controller can go on line can be
significant, with as much as one or two
hours being quite common. Of course,
this aspect is irrelevant if the controller is
switched on with the process in a
steady-state condition.
The proper description of processes that
involve CV integration or which are open
loop unstable, necessitates the
involvement of a transition matrix A.
Coefficients within A then reflect
properties that are equivalent to poles
being on or outside the unit circle (in z
domain parlance). The problem with
such systems is to obtain representative
test data. Pulse testing, rather than step
testing is often a good option in such
cases, allowing rich data to be
generated without the process going out
of bounds. From the perspective of the
model, issues of integration can be
avoided by incorporating rate of change
of CV although this may not be
consistent with control requirements.
From the perspective of control the
inclusion of both a CV and its rate of change can be beneficial for the
management of aspects with very slow
rates of integration, such as often
prevail, for example, with temperatures
in heating systems.
The transition matrix A can be
presented in two forms. The first, termed
homogeneous, restricts the prediction
relationship for any CV to include only
sampled history for that particular CV
and not for any other. The second,
termed heterogeneous, bases the
prediction of each CV upon reference to
the sampled history of all of the other
CVs. The heterogeneous form is
consistent with the standard state space
equation representations of dynamic
systems. The usual form adopted with
chemical plants is homogeneous,
because CVs are often distributed quite
widely around the process and dynamic
interdependence amongst the CVs does
not necessarily make practical sense.
Another advantage to the homogeneous
form is that if a CV fails, it does not
affect the ability of the model to predict
behaviour in the other CVs. With the
heterogeneous form, the failure of any
CV will invalidate the ability to predict for
every CV of the model.
CONTROL SYSTEM DESIGN
There are various techniques that can
be employed to achieve satisfactory
control once a representative cause to
effect dynamic model is available. The
choice of Industrial Model Predictive
Controllers in the marketplace is quite
limited and each one adopts a distinct
and somewhat idiosyncratic approach to
the development and application of the
multivariable controller. Arguments fly as
to their relative nuances and
capabilities. In reality there is probably
little, in performance terms, to choose
between them. Most control methods
may be engineered to work well, given a
model that satisfactorily represents the
process. The approach described here is straightforward as a direct variation of
the standard LQ method of Optimal
Control. A cost function of the general
form:
J = \sum_{i=1}^{N} ( E_{i+1}^T Q E_{i+1} + U_i^T P U_i + e_i^T q e_i )          (2)
is minimised, where E = y - s, with s a
vector of set-points and e = u - t, with t a
vector of MV targets. U is constructed
from MV samples that are differenced. N
is chosen large enough to ensure
convergence so that the solution
corresponds to that for an infinite
horizon. The approach generates the
optimal controller which has the general
form:

\Delta u_k = K ( A1 \cdot E1_k + B1 \cdot U1_k + C \cdot V_k + B2 \cdot e_k )          (3)
K is a matrix of Controller Gains that
give rise to the optimal control response
and u is an incremental move vector
(i.e. a set of changes to be made in the
MVs, the sampled data equivalent of
integration with the controller so that
unmeasured disturbances will be
rejected). The vector E1 is given by
E1_k = [ E_k^T, E_{k-1}^T, ..., E_{k-R+1}^T ]^T

B1 and U1 correspond to B and U, with the term in u absent. B2 is the first m
rows of B. V is comprised of FV samples
that are differenced. A1 is derived from
A in order to ensure certain properties
reviewed below.
Equation 3 is derived by solving the
Riccati equation that arises from the
above expression of the optimal control
problem. Very efficient procedures that
involve orthogonal transformations
(Householder) can be employed to
calculate the matrix K (Silverman, 1976).
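As an aside, a generic sketch of computing LQ gains by iterating the discrete Riccati difference equation to convergence is shown below; the matrices are purely illustrative and this is not the Householder-based procedure used in the product, only a way of making the calculation concrete:

import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # example augmented transition matrix
B = np.array([[0.0], [0.1]])             # example MV driving matrix
Q = np.eye(2)                            # CV error weighting
P = np.array([[0.1]])                    # MV move weighting

S = Q.copy()                             # Riccati matrix, initialised at Q
for _ in range(1000):                    # iterate until the solution converges
    K = np.linalg.solve(P + B.T @ S @ B, B.T @ S @ A)
    S_new = Q + A.T @ S @ (A - B @ K)
    if np.max(np.abs(S_new - S)) < 1e-10:
        S = S_new
        break
    S = S_new

K = np.linalg.solve(P + B.T @ S @ B, B.T @ S @ A)   # steady-state gain matrix K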

There is a feature of set-point handling that is distinctive in the approach
described here and which has very
advantageous properties for the
controlled recovery from the impact of
unmeasured disturbances. Consider the
way in which the vector E1 is formed.
The set-point vector s is a constant
throughout all samples of E (i.e. it is not
subject to the sampling index k). It is the
origin of the space corresponding to the
condition with U=0 and V=0. If s
changes, so too do all of the E terms
within E1. This has important and
powerful implications in the performance
of the controller. The controller does not
employ change of set-point information
in generating a response to error. This is
different from the situation with
conventional error feedback control
systems and is also different from
mechanisms adopted with alternative
MPC procedures. In these cases, the
change of set-point is a key contribution
in generating the controlled response to
the new set-point values. This has the
drawback that such a controller lacks
this vital contribution when responding
to errors induced by unmeasured
disturbances. The worst situation that
can then arise is that recovery from the
impact of an unmeasured disturbance is
at a rate consistent with the open loop
time constants of the process. A
controller of the form indicated in
equation 3 does not have this deficiency
and the effort applied by the controller to
minimise error is consistent however
that error is induced.
The shaping of the response of CVs to
set-points is by choice of the elements
of the weighting matrices Q, P and q. In
fact, without any significant loss in
capability, this choice can be reduced to
a single weight per signal, with the
matrices being diagonal and with each
weight duplicating on the diagonal as
appropriate. The exercise of choice is
made even more amenable by
normalising the model matrices prior to application of the design algorithms.


Normalisation of data is a necessary
and standard feature that is used with
the model identification procedures in
order to prevent numerical problems
with the computations. The model that
arises is itself normalised and the
relative scales between different
variables are standardised to be
equivalent. If this model is used to
develop the control system, and if the
weighting matrices are set as unit
matrices, the resulting controller has
very attractive properties, with effort and
urgency being balanced evenly within
the multivariable infrastructure. It is then
usually quite straightforward for the
design engineer to modify the
weightings up or down from this
standard in order to establish control
responses that are appropriate for real
application. This mechanism of course
necessitates facilities for process
simulation and interactive CAD.
The elements in e in equation 3 provide
the basis for MVs to be also driven to
targets (or set-points). This is of benefit
when the controller is operating with
more MVs than CVs so there is slack in
the available degrees of freedom. This
slack provides opportunity for an
optimiser to position the spare MVs to
best economic benefit.
The elements in V in equation 3, i.e. a
sampled history of differences in the
FVs, provide for feed-forward control.
This leads to discussion of a basic
weakness that is present in all sampled
data control systems. The rate at which
model and control system update (i.e.
the interval k to k+1) is determined by a
balance of factors;
The need to properly regulate the
fastest time constants of the system
(the rule of thumb for this is to choose
the interval to be less than one third
of the smallest time constant).

The desire to keep the model and controller dimension down to a
manageable number of coefficients.
The smaller the interval, the greater
the number of coefficients needed to
describe the longer term dynamics
and the less viable is the ARX
description because of inability to
detect dynamics across successive
CV samples. This issue is
particularly problematical when the
multivariable system encapsulates a
broad range of fast and slow
dynamics.
The desire to keep the
computational burden of control to a
minimum and therefore to have the
update interval as large as
practicable.
The need to react promptly to
incoming changes in the FVs or to
the impact of unmeasured
disturbances. If an FV changes half
way through the control update
interval then there is a delay before
corrective action is taken. The faster
the controller updates, the smaller
this delay is.
A compromise has to be struck between
these issues. A pragmatic address to the
FV issue is to short cut the update of the
controller when a large disturbance is
detected, i.e. implement control action
directly. This can be effective, most
particularly when the process is close to
steady state when the disturbance
arises. It does, however, disrupt the
sampling process which will give rise to
a temporary deterioration in prediction
and control accuracy. This mechanism
should therefore only be used on an
infrequent basis and only in response to
disturbances of significance.
Consideration of the robustness of
multivariable controllers has become
topical, largely because academics on
an international wave-front have been
making significant contributions to this
topic rather than because any particular variant of MPC is not robust.


Robustness has two contexts:
The ability of a controller to operate
with an inaccurate model of the
process.
The ability of a controller to achieve
its objectives with the available
degrees of freedom.
With the design of classical regulatory
control systems, the first aspect is dealt
with by gain and phase margin
considerations. For multivariable control
systems, consideration of the infinity
norm rather than the squared cost (or
two norm) criterion of equation 2
provides a basis for determination of a
control system that will be stable for all
model inaccuracies of a declared scale
(Maciejowski 1989). Unfortunately, such
consideration tunes the controller for the
worst case, with the consequent risk of
inducing mediocrity for normal situations
in order to guarantee acceptable
behaviour in the abnormal situation. A
more pragmatic address is
straightforward when working with a
controller that is based upon the cost
criterion of equation 2. A controller is
more tolerant to model inaccuracy when
the moves it is able to make are more
constrained. This is achieved by
increasing the weightings within P that
are associated with the incremental MV
moves. The pragmatic address to the
robustness of an operating controller is
therefore quite simple. If the controller
begins to exhibit hyperactivity, increase
the P weights as necessary in order to
make things settle down (this exercise
can even be made automatic by using
on board control rules). A good rule, if
control performance deteriorates, is to
first check the process to find out if
anything has changed or is faulty. It may
be that the only way to bring the
controller back to par is to obtain a new
model. Adaptive real-time modelling can
then contribute to avoid the need for
repetition of open loop plant tests.

The management of degrees of freedom within a multivariable infrastructure can
be a very problematical issue. Certain
combinations of MV to CV cause to
effect paths may demonstrate high
levels of dependence between
parameters, in the extreme to the extent
that required combinations of CV set-points cannot be achieved. Alternatively,
very large combinations of MV moves
may be required to produce small
degrees of discrimination between the
CVs. This issue has nothing to do with
dynamics but rather, reflects in the
steady state gain matrix of the
multivariable process. If this matrix is
close to being singular (or in the non
square case, if a singular value is close
to zero), then this suggests that the
particular cause to effect description is
not viable for control. The issue will be
highlighted by a large Condition Number
for the matrix (i.e. the ratio of the largest
to the smallest singular value). With
appropriate scaling, it is possible to
analyse patterns of the singular values
of the gain matrix to highlight which
parameters exhibit dependence. Given
such knowledge, it is then possible to
select weightings to avoid contradictory
objectives.
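A small numerical sketch of this screening is shown below; the gain values are invented for illustration and appropriate scaling is assumed to have been applied already:

import numpy as np

# illustrative steady-state gain matrix (rows: CVs, columns: MVs)
G = np.array([[1.00, 0.95, 0.10],
              [1.05, 1.00, 0.12],
              [0.20, 0.25, 1.00]])

U, s, Vt = np.linalg.svd(G)
cond = s[0] / s[-1]                 # ratio of largest to smallest singular value
print("singular values:", s)
print("condition number:", cond)
# A singular value close to zero (and hence a large condition number) flags
# near-dependent cause to effect behaviour; the right singular vectors (rows of
# Vt) associated with small singular values indicate MV combinations that have
# little independent effect on the CVs.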
There is one more twist to the issue of
the rejection of unmeasured
disturbances. If a process is distributed,
with some responses to input changes
arising well before others, then these
early responses can be used as a
means for early detection of
unmeasured disturbances. A mechanism
which is termed Residual Feed-forward
Compensation (RFC) operates with a
separate model that is used to predict
the early parameters. This model runs
continuously against the process so that
residuals are generated as continuous
signals to reflect the error between plant
and model. These residuals can then be
treated as process signals in their own right, representative of the hitherto unmeasured disturbances. They may therefore be employed in the main multivariable controller as FVs to offer
considerable anticipatory benefit in the
management of the longer responding
parameters within the multivariable
system.
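A minimal sketch of the RFC idea follows; the helper name and the numbers are illustrative only:

import numpy as np

def rfc_residual(fast_cv_measured, fast_cv_predicted):
    # residual between the plant and the separate fast-response model; it
    # reflects the action of otherwise unmeasured disturbances
    return np.asarray(fast_cv_measured) - np.asarray(fast_cv_predicted)

residual = rfc_residual(fast_cv_measured=[0.12], fast_cv_predicted=[0.02])
# the residual is then presented to the main multivariable controller as an
# extra FV, giving anticipatory (feed-forward) action on the slower CVs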
Given a control system that arises in
consideration of the various design
issues described above, it is essential to
then provide the simulation environment
that allows the engineer to thoroughly
assess the implications of the design
selections (i.e. to visualise step and
disturbance rejection responses in a
simulation context that is as close as
possible to reality). It is also important to
make it easy for the engineer to quickly
modify design aspects and to assess the
relative implications of such
modification.
Note that in the industrial context it is
most important that comprehensive
facilities for Gain and/or Model
scheduling are available in order to deal
with nonlinearity in the piecewise linear
sense or to accommodate situations of
variability because of changes in plant
or product.
INDUSTRIAL MODEL PREDICTIVE
CONTROL
MPC is concerned with the operation of
multivariable controllers in the face of
process constraints. Such constraints
fall into three categories:
MV minimum and maximum limits,
MV incremental move limits and
CV minimum and maximum limits.
The MV constraints are termed hard
since they can be rigorously enforced.
The CV constraints are termed soft in
that it is desirable that the constraints
are honoured but process conditions
might not always allow this.
It is unusual for a multivariable controller
to be viable for industrial process application without having to cater for constraint issues. The problem is
simple. Each MV move computed via
equation 3 is derived in the
consideration that all of the other MV
moves can be implemented. If any MV is
constrained so that the computed move
is not applied, then the calculated
moves for the other MVs are not
appropriate and control will break down
(even if only one MV saturates, there will
be consequent offset of all CVs from
their set-points). If an MV saturates, a
degree of freedom is lost and the
number of CVs that can be driven to set-point is reduced by one. Practical
management of constraints is a key area
often neglected in the control literature.
The effects of poor constraint
management can nullify any benefits
gained through advanced multivariable
control.
Three approaches to MPC are
discussed below.
The first (the LR Method) is the
method that has been most widely
exploited by Connoisseur users and
is a pragmatic approach that
involves prioritised and common
sense control engineering to
manage MPC.
The second (the QP Method) is the
most sophisticated, with the
complete problem of optimisation in
the face of both hard and soft
constraints being resolved by Quadratic Programming (QP). It
replaces the Riccati solving
approach with a QP numerical
optimiser that is able to minimise the
cost J of equation 2 whilst also
accounting for these signal
constraints.
In the third approach (the LRQP
Method) the constraints that are
processed by the QP are restricted
to the MV hard constraints and the
CV soft constraints are dealt with in
similar fashion to the LR Method.

The pros and cons for each of these
methods are emphasised below.
The LR Method
A fundamental approach adopted is to
maintain the situation whereby the
number of MVs in play is greater than or
equal to the number of CVs. If an MV
saturates, then that MV is locked at the
saturation limit and a reduced controller
is computed to appropriately take up the
remaining degrees of freedom. To
maintain balance this may require a CV
to be dropped from the multivariable
controller.
To deal with this in a sensibly managed
way, the MVs and the CVs can be
prioritised so that the CVs of least
importance are eliminated first and so
that the MVs that are in play are
consistent with an acceptable condition
number for the steady state gain matrix.
The design engineer defines such
priorities in consideration of the needs of
the process and in consideration of the
relative steady state sensitivities of the
cause to effect paths. For the latter
considerations, simulation of the various
options quickly provides an appropriate
perspective for design, particularly when
used in association with RGA and SVD
tools. Given definition of the priorities,
mechanisms for selection of the signals
that are to be involved in the controller
can then be automatic and
straightforward, altering as required in
the face of variations in the encountered
hard constraints.
Note that whether or not an MV or CV
signal is present within a controller is not
just a matter of saturation. It is also
subject to the health of transducers and
actuators. A multivariable controller can
often continue with effective operations
despite the failure of some of its signals.
In fact there are further refinements
possible here. An MV might be switched off for the purposes of automatic manipulation but may still be required to
be involved in a feed-forward context
(i.e. the MV becomes an FV). If a CV is
switched off then it can be desirable to
substitute with an inferred value so that
direct control of that CV can continue.
Such inference is straightforward given
the presence of the multivariable model
that associates with the controller
(inference of this nature should only be
used for short periods since the inferred
CVs can quickly drift away from reality if
models are not precise). The
mechanism can be very useful for
providing interpolated values for
analysers that only provide readings on
an occasional basis.
The priority based approach has the
advantage that it is simple to set up on
the basis of sensible judgement by the
process engineer. It does not
individually address, however, all of the
permutations of cause and effect that
might arise. It is possible that certain
selections might be inappropriate, for
example it may be necessary to drop the
highest priority CV rather than the
lowest, in order to maintain an effective
condition number. Fortunately, operating
experience does suggest that
exceptions of this nature are not
common. It is important, however, that
the engineer be given the facility to trap
such exceptions and to override the
standard procedure with more
appropriate structure selections. This
mechanism has been provided for the
algorithms described here by an
interpretative command language known
as Director. Director subroutines, which
are supplied with condition number and
saturation status, may be called at
critical stages in order to analyse and
perhaps override the standard decision
making procedures. Alternative cause
and effect signals may be selected for
the controller. The introduction of such Director subroutines usually evolves with the experience of controller operations.

For control purposes, CVs divide into two categories, those for control to set-points and those which are to be
maintained within soft constraint
boundaries. Constrained CVs normally
float free. At each instant for execution
of the controller, a closed loop
simulation is carried out across a
defined horizon into the future (the Long
Range or LR horizon). It is closed loop
so that the simulation reflects as closely
as possible circumstances that are really
going to happen. Should a constrained
CV be predicted to exceed any
constraint boundary at any stage, then
that CV is introduced into the controller
and the whole process is repeated. This
simulation procedure is multi-pass, with
soft constraints being brought in
successively in consideration of
priorities and available degrees of
freedom. It would be usual to allocate
constrained CVs a higher priority than
set-point CVs so that in a crisis the
controller diverts its attention to the
maintaining of the process within
bounds rather than maintaining required
quality targets. When a constrained CV
comes under control, there is the issue
of selection of an appropriate set-point
to control to. In principle this should be
the constraint boundary itself (or inside it
by a small comfort zone), on the
presumption that an optimiser is pushing
the controller to the boundaries.
However, it is not wise to drive the
process towards a boundary if a
violation is predicted (it's going to get
there anyway and extra impetus might
lead to unwelcome overshoot).
Therefore the set-point is manipulated
so that the CV will move gently towards
the boundary.
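The multi-pass logic just described can be sketched schematically as below; design_controller and closed_loop_simulation are hypothetical placeholders for the LQ design and the long-range closed-loop prediction respectively:

def lr_multipass(setpoint_cvs, constrained_cvs,
                 design_controller, closed_loop_simulation):
    """Schematic only: bring constrained CVs into the controller as violations
    of their soft boundaries are predicted, highest priority first."""
    active = list(setpoint_cvs)                 # CVs currently under control
    while True:
        controller = design_controller(active)
        prediction = closed_loop_simulation(controller)
        violated = [cv for cv in constrained_cvs
                    if cv not in active and prediction.violates(cv)]
        if not violated:
            return controller                   # no further violations predicted
        # introduce the highest-priority violating CV and repeat the pass
        # (degrees of freedom permitting; otherwise a low-priority CV is dropped)
        active.append(max(violated, key=lambda cv: cv.priority))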
The closed loop simulation for the
detection of soft constraint violations can
employ either a linear or a non-linear
model (e.g. a neural network RBF model [Haykin, 1994]), whereas the
control design mechanism requires a

linear model. A non-linear model


provides the opportunity to predict
potential constraint violations with
greater accuracy with the consequent
possibility of holding the process closer
to constraint boundaries for improved
economic benefit. This ability to employ
non-linear prediction within the MPC
procedure is an important strength of
this LR approach.
Note that every MV and CV combination
that is encountered requires an
individual solution of the Riccati
equation that gives rise to a specific set
of gains K for that situation. Certain
combinations are encountered very
frequently and it is more efficient,
computationally, to save these gains in a
table as they are computed. Then, when
they are required again, the controller
simply needs to point to the appropriate
reference within the table. However, if
any weighting is changed or if any signal
is switched on or off, it is necessary to
throw away all stored gains and recompute from scratch. For large
systems, computation in the initial
stages as the tables are being created,
can be heavy but demand reduces
significantly as the look up mechanism
phases in.
It is possible to dramatically improve the
time required to solve the control design
equations by Blocking together steps in
the design horizon. The most
appropriate form of Blocking is for MV
moves to be compacted further out on
the design horizon, on the basis that
such adjustments should be fairly
gentle. The earlier adjustments should be processed without Blocking. However it
unfortunately arises that such
interference with the very efficient
Householder solution algorithm means
that the answer with Blocking takes
longer to compute! (There is probably
scope for some enlightened algebra
here that could be devised to overcome
this difficulty). An alternative form of

Blocking that does give rise to
computational efficiencies is therefore
proposed. Thus, for example, suppose
horizon steps are to be blocked in fives
and that the MVs are constrained to
make adjustments of equal increment
for each of the five steps within each
block. Within the design process, an
adjustment is then computed for every
fifth step on the horizon, with one fifth of
this adjustment being applied for each of
the next successive five steps. The
process is iterated for as many cycles as
necessary to achieve convergence, in
the same fashion as for the single step
case, in order to generate the gains K.
In actuality, only the very first adjustment
is applied to the process (strict
obedience for optimal control would
require that such increments be
recalculated only every fifth step, but
this is unacceptable, for example
because of the feed-forward
compensation issues discussed above).
The effect of updating the MV moves
more quickly is interesting. It produces a
dampening influence upon the controlled
responses, similar to that achieved by
increasing the MV weightings and is in
consequence possibly also a major
contribution to robustness. This is a
variation on observations made when
employing designs based upon a
sampled data approach in a mode with
continuous feedback (Sandoz and
Appleby, 1972). There is the potential,
still to be thoroughly investigated, that
this approach obviates the need for the
designer to be so heavily concerned
with MV weightings in order to secure
robust control, thereby simplifying
design selections for the engineer.
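A minimal sketch of this form of Blocking is given below: one adjustment is computed per block and spread as equal increments over the steps of that block. The horizon and width match the example reported later (N = 24, width 3); everything else is illustrative:

import numpy as np

def blocking_matrix(horizon, width):
    # maps one adjustment per block onto equal per-step increments
    n_blocks = horizon // width
    T = np.zeros((horizon, n_blocks))
    for b in range(n_blocks):
        T[b * width:(b + 1) * width, b] = 1.0 / width
    return T

T = blocking_matrix(horizon=24, width=3)        # 24-step horizon solved in 8 blocks
block_adjustments = np.array([0.3, 0.1, 0.0, -0.1, 0.0, 0.0, 0.0, 0.0])
per_step_increments = T @ block_adjustments     # what the design applies at each step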
The capability to set up a
comprehensive real time simulation of
the complete mechanism for this form of
MPC is essential. Such simulation would
normally be able to run faster than real
time and provide the engineer with the
capability to feel for and to tune and
refine the modes of operation of the

control system, prior to any encounter with the real process.
The QP Method
The numerical procedure of Quadratic
Programming provides an alternative for
solution of the control issue as
expressed above. The QP algorithm
requires the presentation of information
that encapsulates predictive behaviour
across the complete design horizon N,
together with the cost weightings and
constraints that are to apply (i.e. a
Hessian matrix, a gradient vector and
constraint inequality matrices). At each
control instant k, the complete problem
may be resolved by just a single call to
the QP procedure. In contrast to the
Ricatti approach, which produces a set
of controller gains, the QP gives rise to a
profile of current and future MV moves
and only the current move is employed.
With reference to equation 3, it is only
when N is large enough to ensure
convergence that a succession of first
moves (as k iterates) produces the
same responses as would arise by the
implementation of the complete profile.
In fact, this Principle of Optimality is
only true without the involvement of soft
CV constraints (see below). The QP
method is computationally demanding,
particularly if N is large, so the choice of
N becomes an issue for design.
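A condensed sketch of this parametric QP formulation is given below. The prediction (dynamic) matrix S and the free response f would in practice be built from the time-series model; here they are toy values, and scipy's SLSQP routine merely stands in for the QP solver embedded in the product:

import numpy as np
from scipy.optimize import minimize

N = 5                                   # short design horizon, single MV, for illustration
S = np.tril(np.ones((N, N)))            # toy prediction matrix (step-response like)
f = np.full(N, 0.4)                     # predicted free-response error over the horizon
Q, P = np.eye(N), 0.1 * np.eye(N)       # CV error and MV move weightings

H = S.T @ Q @ S + P                     # Hessian presented to the QP
g = S.T @ Q @ f                         # gradient presented to the QP

cost = lambda du: 0.5 * du @ H @ du + g @ du
bounds = [(-0.1, 0.1)] * N              # hard incremental MV move limits

res = minimize(cost, np.zeros(N), method="SLSQP", bounds=bounds)
du_profile = res.x                      # moves across the horizon; under the moving
                                        # horizon principle only du_profile[0] is applied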
Blocking can be employed in a fashion
similar to that described for the LR
Method to improve computational
efficiency by compressing the horizon.
Blocking for QP can be more
sophisticated in that it is possible to
efficiently compact MV moves further
out on the design horizon. Such
Blocking is extremely effective in
reducing matrix dimensions and solution
times. It does, however, need to be
employed with caution. Blocking too
early in the response horizon can
degrade controlled performance
significantly.

Consider the hard constraints. The QP
delivers MV moves that account for the
hard constraints and which in
consequence are valid to employ on
plant irrespective of whether these
moves are constrained. There is no
need to prioritise and reduce structure
simply because of MV constraint and in
particular because of incremental MV
move constraints. In this respect, the QP
approach is more intelligent and
delivers better performance. However,
even with QP, if an MV locks at a
constraint boundary there is still the loss
of a degree of freedom and continued
effective management of set-points still
requires re-computation with a reduced
structure subject to priorities. One
interpretation of "lock" is currently that
the QP generates an MV move that is
bounded for the complete design
horizon, which appears satisfactory for
resolution of the degrees of freedom
issue.
A QP solver may also be presented with
a matrix of constraint inequalities that
relate to the CVs in addition to the
aspects discussed above. This provides
an elegant mechanism for solving within
soft constraint bounds but which, from
the Control Engineering perspective,
has certain weaknesses.
This form of QP implementation has
two objectives, first to satisfy soft
constraints and then to minimise
cost. Thus if the QP can satisfy a
soft constraint issue by taking a
sledgehammer to the process, it will
do so. This can give rise to violent
MV moves irrespective of cost
function weightings. These
exaggerated moves are simple to
reduce by the use of the hard
incremental move constraints.
However, this perspective makes the
proper selection of these constraints
a different and a far more important
design issue with the QP method
than with the LR method. Too much constraint of the MV incremental moves can also give rise to unwelcome unfeasibility and soft constraint relaxation (see below).
It is possible that the QP will not be
able to establish a solution that
satisfies the soft constraint
requirements. In such a case it is
necessary to relax the soft constraint
boundaries to a degree that allows a
solution to prevail. Such relaxation is
achieved by introducing an auxiliary
cost function that involves dummy
MVs (sometimes known as slack
variables). A dummy MV adds to the
soft constraint values that are within
the inequality matrices that are
presented to the QP, in order to
broaden the range of validity. A very
high cost penalty is imposed upon
the dummy MVs so that they are
only exploited if there is no
alternative, i.e. if the QP could not
otherwise obtain a solution. The
dummy MVs may also be assigned
relative cost weightings so that
certain constraints are relaxed in
preference to others. However, if
such relaxation arises it is at the
expense of the control engineering
that is expressed in the primary cost
function and unwelcome overall
effects can arise. An example of
such a situation is presented below; a brief sketch of the slack-variable mechanism is given after this list.
Soft constraint considerations are on
the basis that the complete horizon
of MV moves is to be implemented
on the process, rather than a
sequence of first MV moves. The
solving for soft constraints is not
associated with the primary cost
function and so the Principle of
Optimality does not apply. It is
possible that the MV moves at the
first stage are influenced by
boundary considerations further out.
In such a circumstance the moving
horizon approach will not generate
the same responses as application
of the full set of MV moves from a

single QP iteration. Most activists in
the arena of MPC appear to ignore
the implications of this fundamental
point, i.e. that sophisticated analysis
is employed to generate a solution
that is then ignored.
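A brief sketch of the slack-variable ("dummy MV") relaxation referred to above follows, continuing the toy numbers used earlier; SLSQP again stands in for the product's QP and the penalty value is illustrative:

import numpy as np
from scipy.optimize import minimize

N = 5
S = np.tril(np.ones((N, N)))            # toy prediction matrix, as before
f = np.full(N, 0.4)                     # predicted CV trajectory with zero moves
y_max = 0.3                             # soft constraint boundary
rho = 1e4                               # very high cost penalty on the dummy MVs

def cost(z):
    du, slack = z[:N], z[N:]
    return du @ du + rho * slack @ slack        # primary cost plus auxiliary slack cost

def soft_constraint(z):
    du, slack = z[:N], z[N:]
    return (y_max + slack) - (S @ du + f)       # must be >= 0: y <= y_max + slack

res = minimize(cost, np.zeros(2 * N), method="SLSQP",
               bounds=[(-0.05, 0.05)] * N + [(0.0, None)] * N,
               constraints=[{"type": "ineq", "fun": soft_constraint}])
du, slack = res.x[:N], res.x[N:]        # non-zero slack means a boundary was relaxed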
The LRQP Method
The LRQP method restricts the
constraints handled by the QP to the MV
hard constraints (both incremental and absolute) whilst the soft constraints are managed in similar fashion to the LR method (i.e. by simulation and
prioritisation). It seeks to combine the
strengths of the above two methods to
provide a more effective overall address
for control engineering application.
This approach avoids the need to address the issue of unfeasibility.
The dimension of the QP problem is
significantly reduced to give
enhanced computational efficiency.
The CV constraints are managed by
feedback. This is more tolerant to
model inaccuracies than the QP
approach for which the constraint
avoidance mechanisms are
essentially open-loop (i.e. are not associated with the error-based
minimisation of the cost function).
The assessment of whether CVs will
violate constraint boundaries is most
accurately carried out by a simulation
that reflects what is actually going to be
done in the future, and that is with only
the first move of the QP design being
implemented at each step into the
future. This argument justifies the
incorporation of the QP approach to
replace the control design engine within
the LR prioritised management
procedure. Note that one benefit of this
is that it leaves open the opportunity to
operate a non-linear simulation for
detection of the soft constraint
violations. The downside of this method of utilisation of QP is the computational
effort to achieve solution. Each control
step will involve multiple passes of the
QP so that the optimisation calculations
have to be repeatedly implemented (the
luxury of tables of Gains for fast
reference is not available). This
approach is therefore expensive
computationally.
Each execution of the QP implicitly
involves a simulation across the design
horizon, since the QP delivers a
complete set of MV moves for that
horizon. Therefore, embedded within the
QP is a basis for directly calculating
future CV behaviour, without need for
any further control calculations. This approach may be employed within
the LRQP method as a compromise for
the sake of computational efficiency,
although there is a price to pay in terms
of control effectiveness, as shown
below.
SQP (Sequential QP)
Sequential (or Successive) Quadratic Programming (SQP) is a constrained
non-linear optimisation technique which
minimises a specified function using a
series of quadratic approximations. SQP
is an iterative process, with two distinct
stages at each iteration: firstly a QP
problem is solved to yield a direction in
which the solution will move and then a
step length is estimated which reduces
the objective function in this direction in
some optimal way.
In terms of Model Predictive Control,
use of an SQP method enables either
linear or non-linear models to be used,
with the SQP minimising the resulting
quadratic or general non-linear MPC
cost functions.
The advantage of using an SQP
solution, rather than the QP methods
discussed above, is that a range of
model structures may be used without

the need to re-define the QP problem
specifically for each model type. Instead,
it is only necessary to be able to
evaluate the cost function for any given
set of MVs. As a result, non-linear
models based on, for example, Neural
Networks or NARX structures, may be
incorporated into the MPC problem for
the control of general non-linear
processes. For such models, the
Hessian matrix required for the QP
problem will not be known in advance;
indeed it will vary with the solution as
the SQP proceeds towards a minimum.
Therefore, most SQP algorithms start
with a generic initial value of the
Hessian matrix, typically an identity
matrix multiplied by a constant. The
Hessian matrix is then updated with
successive iterations of the SQP using,
for example, the BFGS updating
algorithm, converging to the true
Hessian as the solution itself converges.
Clearly, the flexibility of the SQP
approach comes at a cost. For linear
models, where the Hessian is
calculable, solution of the MPC problem
using an SQP is massively inefficient.
Typically, between 10 and 50 iterations
of the SQP (each consisting of QP
solution, Hessian update and step
length calculation) would be performed
at each control step for a reasonably
sized MPC problem. The large number
of iterations required, even for
minimisation of a quadratic cost
function, results largely from the initially
unknown Hessian matrix. Use of a
warm start approach, where the
Hessian is primed using the final value
of the previous iteration, may alleviate
the computational requirement
somewhat.
As a general replacement for QP, SQP
may be implemented in any of the ways
described in the previous section i.e.
with the soft constraints either included
in the SQP solution or dealt with
externally using the LR methodology.

The latter is in fact the most advantageous. If only hard constraints
are accommodated within the SQP, the
coding is greatly simplified and errors
associated with linear constraint
approximation are eliminated.
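As an indication of how SQP is exploited, the sketch below lets scipy's SLSQP implementation minimise an MPC cost built on a toy non-linear (NARX-like) prediction function; everything here, including the model, is illustrative and is not the product's implementation:

import numpy as np
from scipy.optimize import minimize

N = 10                                   # design horizon
setpoint = 1.0

def predict(du):
    # toy non-linear single-CV model: the SQP only ever needs cost evaluations,
    # so any model form (neural network, NARX, ...) could sit here instead
    y, u, out = 0.0, 0.0, []
    for k in range(N):
        u += du[k]
        y = 0.8 * y + 0.3 * np.tanh(u)
        out.append(y)
    return np.array(out)

def mpc_cost(du):
    e = predict(du) - setpoint
    return e @ e + 0.05 * du @ du        # CV error plus MV move penalty

res = minimize(mpc_cost, np.zeros(N), method="SLSQP",
               bounds=[(-0.2, 0.2)] * N) # only hard incremental MV limits here
first_move = res.x[0]                    # the move applied under the moving horizon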
SIMULATED CASE STUDIES
The Simulated Process
The Shell Heavy Oil Fractionator
Problem is used as the basis for
simulated comparisons and
examples (Prett, 1987). This problem
was posed by Shell in 1987 to provide a
benchmark for the assessment of
multivariable control procedures. The
model consists of 35 first order plus
time delay transfer functions. These
transfer functions are normalised so all
process parameters have a nominal
standard range of between -0.5 and +0.5. These transfer functions appear to
have been chosen deliberately so that
the column is ill-conditioned, i.e. column
interactions make it difficult to
simultaneously control multiple issues
because of heavy interaction and a mix
of fast and slow time constants and
short and long time delays. A simplified
structure of Cause and Effect is chosen,
as shown in fig. 1. There are three MVs, Top Draw (signal A1), Side Draw (signal A2) and Bottom Duty (signal A3). There are seven CVs, Top End Point Analysis (signal M1), Side End Point Analysis (signal M2), and five column Temperatures (signals M3 to M7 from column top to bottom). There is a single FV, signal A4, the Intermediate Duty.

Figure 1. Cause and Effect Diagram for Simulation Study (Causes: MVs A1 Top Draw, A2 Side Draw, A3 Bottoms Duty; FV A4 Intermediate Duty. Effects: CVs M1 Top Analysis, M2 Side Analysis, M3 Top Temperature, M4 to M6, M7 Bottom Temperature).
For the purpose of comparison between
the various methods of MPC that are
reviewed above, an example is
contrived which drives the process to an
extreme constraint situation, but for
which there is a final acceptable solution
that can satisfy set-point demands within
both hard and soft constraint
boundaries. This involves the process
being initially at a steady-state at the
centre of the bounded region. A large
and unmeasured disturbance is then
applied to the Intermediate Duty (signal A4) and the comparison discussions are
concerned with the manner in which the
control engineering manages recovery.
Fig. 2 presents four sets of responses:
Fig. 2a for the situation with the LR
method; Figs. 2b and 2c with the LRQP
and 2d with the QP. In each situation,
the configuration and tuning selections
are identical and the controller updates
every 5 s. The controller is required to
hold the two analysers (M1 and M2) at set-point and to maintain the other 5 CVs within minimum and maximum temperature bounds. In fact, only two of these temperatures, M3 (top) and M7 (bottom), prove to have relevance.
The CVs are prioritised in order of
importance with M7 highest, then the
other temperatures including M3, then
M2 and then M1. Thus the management
of CV constraints takes priority over set-points, which would be the normal
application situation.
Consider Fig. 2a for the LR method. The
figure shows, for a time span of 10
minutes:
Analyser signals M1 (top) and M2 (side) and their associated set-points, which are constant at 0;
Temperatures M3 (top) and M7 (bottom) and their associated set-points, which are manoeuvred to accommodate anticipated soft constraint violations. The bounds are -0.5 to +0.5 for both M3 and M7;
The three MV signals (A1, A2 and A3), which are bounded to the range -0.5 to +0.5 and to a move constraint of no more than 0.1 per update; and
The unmeasured disturbance A4, which is seen to undergo a large change from 0 to 2 at the left side of the trends.
Fig. 2a is reviewed in detail to highlight
the characteristics of the contrived
problem that the various approaches are
required to resolve. Progressing from
left to right along the trends, detail may
be observed as follows:
Following the disturbance impact
M1 peaks at 0.44 after 40s. and
recovers to set-point after 90s.
following some undershoot.
M2 peaks at 1.208 after 35s. but
takes more than 250s. to recover to
set-point.
M3 peaks at 1.386 after 25s. It is
quickly brought back within bounds
but stays active as a soft constraint
until about 190s. after the impact. It
is at this point that the degree of
freedom that is directed to hold M3
is released and redirected to bring
M2 back to set-point.
M7 peaks at 0.458 after just 10s. It then moves down to the lower
constraint boundary and it thereafter
remains at that boundary.
A1 takes 5 steps to reach its minimum of -0.5 and then stays locked at this minimum for a further 4 steps. It then freely manoeuvres for about 260s. before finally locking at the minimum of -0.5.

A2 manipulates up and down with maximum moves of ±0.1 for a period of 60s. and thereafter moves around with less aggression, at no time moving outside the range -0.3 to 0.2.
A3 drives in sympathy with A1 to the minimum of -0.5 but then steps off immediately. It returns to this minimum after 150s., stays there for a further 60s. and then floats away freely.

Figure 2a) The LR Method
Figure 2b) The LRQP method (single pass)
Figure 2c) The LRQP method (multi-pass)
Figure 2d) The QP method

It takes about 480s. for the process to recover to a steady-state. There are five
main phases to this recovery.
The first, for a period of about 60s.,
is crisis management to attempt to
hold the process within soft
constraint bounds and the MVs
move as aggressively as they can
for this purpose. Set-points for M1
and M2 are completely abandoned.
There is then a period of some 120s.
during which M2 is not controlled
because both M3 and M7 are under
control to prevent soft constraint
violation. The MVs can be seen to
be slowly moving during this phase
as the longer time constants of the
process prevail.
Eventually, the constraint on M3
ceases to be predicted to be violated
and the third phase is evident. The
MVs then manoeuvre to bring M2
down to set-point, which takes about
70s.
Thereafter, for the fourth phase, the
MVs move in a gentle fashion
compensating for the long time
constants, maintaining both M1 and
M2 at set-point at the same time as
holding M7 within soft constraint
bounds.
Eventually A1 reaches the lower limit of -0.5 and the controller is just able to hold the two set-points and the soft
constraint associated with M7
despite having only two degrees of
freedom left. This is because in the
final steady-state, M7 settles just
inside the lower soft constraint
boundary.

Now consider Fig. 2b for the LRQP fast method that bases soft constraint
evaluation from a single pass of the QP
procedure. The responses follow the
same general pattern as for the LR
method but with the following major
differences:
The reaction to the initial crisis is
less fierce. The two temperatures
are brought back within bounds
more slowly and the MVs do not
drive to their limits. Initial excursions
of these two CVs are in general
slightly larger and are outside
bounds for about twice the period.
The time for recovery to set-point is
faster at 180s rather than 250s.
Fig. 2c illustrates the LRQP method but
this time with horizon behaviour being
computed by multiple use of the QP to
more accurately reflect the reality of the
moving horizon mechanism. In this case
the response timing closely follows the
pattern of the LR method, which also re-computes the MVs for each step of the
simulation horizon. The time for
recovery to set-point is back to 250s., a
price to pay for the improved
management of the CV constraints
during the initial crisis. A major
difference from the LR Method is much
smoother control action during the
recovery period following the initial crisis
management phase and reduced
amplitude in these manipulations and in
the CV responses. This smoother action
is as a result of better management of
the MV constraints that arises from the
application of QP.
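As a loose illustration of the distinction just described, the sketch below contrasts a single-pass evaluation, in which one QP solution computed at the disturbance instant is played out over the horizon, with a multi-pass evaluation in which the QP is re-solved at every simulated step. The callables qp_solve and plant_step are hypothetical stand-ins for the controller and the simulated plant, not part of the published algorithm.

```python
import numpy as np

def simulate_single_pass(qp_solve, plant_step, x0, n_steps, horizon):
    """Single-pass flavour: one QP solution, computed at the disturbance
    instant, supplies the whole MV plan used to judge soft-constraint
    behaviour over the horizon."""
    x, history = x0, []
    u_plan = qp_solve(x, horizon)                  # full MV plan from one QP
    for k in range(n_steps):
        x = plant_step(x, u_plan[min(k, len(u_plan) - 1)])
        history.append(x)
    return np.array(history)

def simulate_multi_pass(qp_solve, plant_step, x0, n_steps, horizon):
    """Multi-pass flavour: the QP is re-solved at every simulated step,
    mimicking the moving-horizon controller itself."""
    x, history = x0, []
    for k in range(n_steps):
        u_plan = qp_solve(x, horizon)              # fresh QP at each step
        x = plant_step(x, u_plan[0])               # only the first move is applied
        history.append(x)
    return np.array(history)
```

The multi-pass version is more expensive, which is consistent with its recovery timing matching the step-by-step LR behaviour rather than the faster single-pass estimate.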
Now consider Fig. 2d for the QP method. Following the disturbance impact, for the first 30s the pattern of responses is very similar to the LR method; from then onwards, however, things are quite different. A1 strikes the lower boundary for the second time after 90s. and then essentially stays locked on it for the duration. A steady-state is resolved after 240s., but there is considerable offset. M1 settles at 0.141, M2 at 0.015, M3 at 0.515 and M7 at 0.521. Thus all requirements for control are dishonoured: neither M1 nor M2 is at set-point, and both M3 and M7 are beyond their soft constraint boundaries.
The reason for this is that the final resting-place is an infeasible operating point for the QP. The slack variables are invoked to allow the constraint boundaries to relax. In consequence, all attention is given to the auxiliary cost function and the set-point objectives suffer because of the consequent lack of degrees of freedom.
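The fragment below is a minimal sketch of the kind of soft-constrained QP just described. It folds the slack penalty into a single weighted cost rather than the separate auxiliary function of the paper, and all symbols (G, x_free, the weights, the SLSQP solver) are illustrative assumptions rather than the Connoisseur formulation.

```python
import numpy as np
from scipy.optimize import minimize

def soft_constrained_qp(G, x_free, sp, lo, hi, w_sp=1.0, w_slack=100.0):
    """Sketch of a QP with slack-relaxed CV (soft) constraints.

    G       : (n_cv x n_mv) prediction matrix from MV moves to CVs (illustrative)
    x_free  : predicted CVs if no further MV moves are made
    sp      : CV set-point targets
    lo, hi  : soft constraint bounds on the CVs
    Decision vector z = [du, s]: MV moves du and slacks s >= 0."""
    n_cv, n_mv = G.shape

    def cost(z):
        du, s = z[:n_mv], z[n_mv:]
        y = G @ du + x_free
        # set-point term plus the heavily weighted slack (auxiliary) term
        return w_sp * np.sum((y - sp) ** 2) + w_slack * np.sum(s ** 2)

    cons = [
        # y >= lo - s
        {"type": "ineq", "fun": lambda z: G @ z[:n_mv] + x_free - lo + z[n_mv:]},
        # y <= hi + s
        {"type": "ineq", "fun": lambda z: hi + z[n_mv:] - (G @ z[:n_mv] + x_free)},
        # s >= 0
        {"type": "ineq", "fun": lambda z: z[n_mv:]},
    ]
    z0 = np.zeros(n_mv + n_cv)
    res = minimize(cost, z0, method="SLSQP", constraints=cons)
    return res.x[:n_mv], res.x[n_mv:]   # MV moves and the slacks actually used
```

With a large slack weight, an infeasible resting-place forces the optimiser to spend its degrees of freedom shrinking the slacks, so set-point tracking is sacrificed, which is the behaviour seen in Fig. 2d.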
The contrast between the four sets of responses may also be assessed by reference to Fig. 3, which incorporates the Fig. 2 items in succession on a single trend display. It is clear that the LRQP multi-pass algorithm provides the most effective management of recovery.

Figure 3 Succession of Responses
Fig. 4 shows the performance of the SQP algorithm with LR soft constraint management on the same problem. The responses are very similar to those seen in Fig. 2b), as would be expected, since the SQP solution simply replaces the QP solution used in the LRQP approach. Slight variations are seen due to the approximate nature of the SQP algorithm and the use of a slightly different set of weights within the SQP. Unfortunately, whereas the previous methods performed real-time control successfully with a 5s sample period on a standard desktop PC, the SQP solution tended to take around 30 seconds for each control step. Whilst a factor of 6 is recoverable through the use of a faster computer or more efficient coding, the relative inefficiency of the SQP with respect to the QP is highlighted.

Figure 4 SQP control of linear system
Blocking with the LR method
Fig. 5 presents responses that indicate
the implication of the use of Blocking
with the LR Method. Responses to set-point changes from 0 to 0.05 and back
again are presented. The first pair of
transitions is for the situation without
Blocking. The second is for the situation
with a Blocking Width of 3 steps. The Design Horizon N is 24 for this case. The solution is therefore obtained in 8 iterations (i.e. 24/3) for the case with Blocking, with the requirement that the MVs move the same amount for each step within the Width (approximating a ramp adjustment). In fact the MVs are re-computed and adjusted at every step. The effect is to dampen the responses. The manipulations are significantly reduced in amplitude and the response to set-point change is slowed down.
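A minimal sketch of the Blocking arrangement described above is given below, assuming it can be represented by a blocking matrix that expands one free move per block into equal per-step moves. The matrix form and names are our illustration, not necessarily how the packaged controller implements it.

```python
import numpy as np

def blocking_matrix(horizon, width):
    """Expand one free move per block into 'width' equal per-step moves."""
    n_blocks = horizon // width
    B = np.zeros((horizon, n_blocks))
    for j in range(n_blocks):
        B[j * width:(j + 1) * width, j] = 1.0   # same move at every step of block j
    return B

# 24-step Design Horizon, Blocking Width 3 -> only 8 free parameters.
B = blocking_matrix(24, 3)
v = np.zeros(8)          # the 8 block moves the optimiser would determine
du_full = B @ v          # the 24 per-step MV moves actually applied
```

Reducing 24 per-step moves to 8 free parameters is the source of both the reduced computation and the damped, slower responses noted above.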

Figure 5 Effect of Blocking

Unmeasured Disturbance Rejection (comparison between FIR and ARX solutions)
Fig. 6 presents a comparison between the behaviour of a controller that uses a compact ARX model representation and one that uses an FIR format. The left hand portion repeats the responses of Fig. 2a for the LR Method. The right hand portion presents the equivalent with an FIR model being employed. The deviations from set-point and constraint boundaries, and their duration, are much larger for the FIR case. The ARX model employs just 8 samples (i.e. S=8) within the U vector, in contrast with over 30 for the FIR case (30 being the minimum to catch the complete impulse response profile). The reason for the better behaviour with the ARX model is simply that it will pick up and start tracking the plant accurately just 8 steps after the disturbance impact. This interval is rather like a window of blindness that confuses the controller. The window is much larger for the FIR case since it takes over 30 steps before accurate prediction is once more in play. Under situations where all disturbances are measured, the performance of the two types of controller is essentially the same.
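The regressor difference behind this "window of blindness" can be pictured with the sketch below, which contrasts one-step ARX and FIR predictors. The coefficient vectors a, b and h and the split of the regressor are illustrative notation, not the paper's U-vector formulation.

```python
import numpy as np

def arx_one_step(y_hist, u_hist, a, b):
    """One-step ARX prediction (illustrative notation):
        y(k+1) = sum_i a_i*y(k-i) + sum_j b_j*u(k-j)
    Because the regressor contains the last S measured outputs, the model is
    tracking the plant again S (= 8 here) steps after an unmeasured
    disturbance first appears in the measurements."""
    return a @ y_hist[-len(a):][::-1] + b @ u_hist[-len(b):][::-1]

def fir_one_step(u_hist, h):
    """One-step FIR prediction from past inputs only:
        y(k+1) = sum_j h_j*u(k-j)
    With no measured outputs in the regressor, over 30 post-disturbance steps
    are needed before prediction lines up with the plant again, i.e. a much
    longer window of blindness."""
    return h @ u_hist[-len(h):][::-1]
```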

Figure 6 Unmeasured disturbance rejection

CONCLUSIONS
This paper describes MPC algorithms
that are in use to address a broad range
of industrial process control situations.
The LR method has, to date, been the
most popular for exploitation because, in
spite of pragmatism in dealing with
constraint issues, it has proven robust,
reliable and computationally efficient.
The progress with the efficiency of computation (both hardware and software) now makes it practicable to consider the use of QP for medium to large scale industrial problems. QP provides a more elegant address for the management of constraints. However, it is argued that the straightforward application of QP in a control engineering context has its drawbacks, particularly in the management of constraints associated with Controlled Variables (i.e. soft constraints). The LRQP method, which combines the best attributes of the LR and QP approaches, has therefore been introduced. LRQP uses prioritised control engineering to manage the soft constraints and QP to deal with the Manipulated Variable constraints. LRQP is now considered the favoured method for most industrial applications.
It must be said that the contrived example described in this paper represents a very extreme situation which would rarely be encountered in day-to-day operation. The example serves to emphasise the drawbacks of being comprehensively elegant in dealing with both control engineering and constraint issues within a single QP address. Under normal operating conditions that do not require simultaneous relaxation of multiple constraints, the QP method provides a powerful control engine that can out-perform the LRQP approach because of its uncompromising treatment of the soft constraint boundaries. However, it is unacceptable that a control method should fail because process operations are in crisis, and for this reason the QP method is advised to be employed with extreme caution.
REFERENCES
1. Cutler C.R. and Ramaker B.L. (1980), Dynamic Matrix Control, a Computer Control Algorithm, Proc. Joint Automatic Control Conference.
2. Qin S.J. and Badgwell T.A. (1996), An Overview of Industrial Model Predictive Control Technology, CPC-V, Tahoe.
3. Sandoz D.J. (1996), The Capability of Model Predictive Control, Measurement and Control, Vol. 29, No. 4, May.
4. Clarke D.W., Mohtadi C. and Tuffs P.S. (1987), Generalised Predictive Control. Part I: the basic algorithm and Part II: extensions and interpretations, Automatica, 23(3), 137-160.
5. Hesketh T. and Sandoz D.J. (1987), Application of a Multivariable Adaptive Controller, Proc. ACC.
6. Warren J. (1992), Model Based Control of Catalytic Cracking, Control and Instrumentation, July.
7. Norberg P.O. (1997), Challenges in the Control of a Reheating and Annealing Process, Proc. Conference, Iron and Steel, Today, Yesterday and Tomorrow, Stockholm, Vol. 2, pp. 575-595.
8. Sandoz D.J. et al (1999), Innovation in Industrial Model Predictive Control, IEE Workshop on Model Predictive Control, Savoy Place, April.
9. Jacobs O.L.R. (1974), Introduction to Control Theory, Oxford Press.
10. Sandoz D.J. (1984), CAD for the Design and Evaluation of Industrial Control Systems, Proc. IEE, Vol. 131, No. 4.
11. Prett D.M. and Garcia C.E. (1988), Fundamentals of Process Control, Butterworths.
12. Garcia C.E. and Morari M. (1986), Quadratic Programming Solution of Dynamic Matrix Control (QDMC), Chem. Eng. Commun., 46: 73-87.
13. Sandoz D.J. and Wong O. (1979), Design of Hierarchical Computer Control Systems for Industrial Plant, Proc. IEE, Vol. 125, No. 11.
14. Bazaraa M.S., Sherali H.D. and Shetty C.M. (1993), Nonlinear Programming: Theory and Algorithms (Second Edition), John Wiley & Sons.
15. Qin S.J. and Badgwell T.A. (1998), An Overview of Nonlinear Model Predictive Control Applications, Nonlinear MPC Workshop, Ascona, Switzerland, June.
16. Silverman L.M. (1976), Discrete Riccati Equations: alternative algorithms, asymptotic properties and system theory interpretations, in C.T. Leondes (Ed.), Control and Dynamic Systems, Vol. 12, Academic Press, New York.
17. Maciejowski J.M. (1989), Multivariable Feedback Design, Addison-Wesley.
18. Sandoz D.J. and Appleby P. (1972), Further Analysis of a Discrete Single Stage Control Law, Proc. IEE, Vol. 119, No. 8.
19. Prett D.M. (1997), Shell Process Control Workshop, Butterworths, Stoneham, MA.
20. Haykin S. (1994), Neural Networks: A Comprehensive Foundation, Macmillan College Publishing Company, Inc., 844 Third Avenue, New York.
ACKNOWLEDGEMENTS
The authors gratefully acknowledge the
support of Invensys PLC in funding
aspects of this work.
