
Advanced Control and Systems Engineering

2012
Enso Ikonen
12/2011
Contents
I Model Predictive Control (MPC) 3
1 Dynamic Matrix Control (DMC) 4
1.1 Introduction to MPC . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Simple LTI models . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.1 About notation . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 Finite Impulse Response . . . . . . . . . . . . . . . . . . . 7
1.2.3 Finite Step Response . . . . . . . . . . . . . . . . . . . . . 8
1.2.4 Relation between FIR and FSR . . . . . . . . . . . . . . . 10
1.3 Prediction models . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.1 Output prediction . . . . . . . . . . . . . . . . . . . . . . 10
1.3.2 Free response recursion . . . . . . . . . . . . . . . . . . . 12
1.4 Prediction model for a plant with disturbances . . . . . . . . . . 13
1.4.1 Output prediction . . . . . . . . . . . . . . . . . . . . . . 13
1.4.2 Free response . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4.3 Control horizon . . . . . . . . . . . . . . . . . . . . . . . . 18
1.5 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.6 DMC algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.6.1 Off-line . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.6.2 On-line . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.8 Advanced Process Control and Industrial MPC . . . . . . . . . . 23
1.8.1 History of MPC . . . . . . . . . . . . . . . . . . . . . . . 26
1.8.2 Pros, cons and challenges . . . . . . . . . . . . . . . . . . 27
2 Quadratic DMC (QDMC) 29
2.1 Input-output constraints . . . . . . . . . . . . . . . . . . . . . . . 29
2.1.1 Constraints in change of MV . . . . . . . . . . . . . . . . 29
2.1.2 Constraints in MV . . . . . . . . . . . . . . . . . . . . . . 30
2.1.3 Constraints in output . . . . . . . . . . . . . . . . . . . . 31
2.1.4 Combination of constraints . . . . . . . . . . . . . . . . . 32
2.2 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2.1 Control problem as QP . . . . . . . . . . . . . . . . . . . 34
2.2.2 *Quadratic programming algorithms . . . . . . . . . . . . 35
2.3 QDMC algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.3.1 Off-line . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.3.2 On-line . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.5 *Soft constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.6 Multivariable DMC and QDMC . . . . . . . . . . . . . . . . . . . 39
2.7 Integrating processes . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.7.1 *Constraints . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.8 *Identification of FSR models . . . . . . . . . . . . . . . . . . . . 42
2.9 Conclusions: DMC and QDMC . . . . . . . . . . . . . . . . . . . 42
2.10 Homework - DMC/QDMC . . . . . . . . . . . . . . . . . . . . . . 43
3 QDMC power plant case study 44
3.1 Review of homeworks . . . . . . . . . . . . . . . . . . . . . . . . 44
3.2 Guided exercise / power plant case study . . . . . . . . . . . . . 44
Part I
Model Predictive Control
(MPC)
Chapter 1
Dynamic Matrix Control
(DMC)
Chapter 1 introduces the unconstrained Dynamic Matrix Control (DMC). We
start with simple linear models. Sections 1.3 and 1.4 make use of these models
in deriving predictions of future plant outputs. The predictions are then used
by the optimizer to find the proper plant control inputs. Section 1.6 gives the
DMC algorithm.
The chapter closes by sketching the role of DMC, and of other approaches
to model predictive control (MPC), in the palette of advanced process control
methods.
1.1 Introduction to MPC
At each control interval, the MPC algorithm answers three questions:
1. Update: Where is the process going?
2. Target: Where should the process go?
3. Control: How to get there?
Basic components of the MPC methodology include the following:
Digital algorithm = Software implemented on a computer;
Model-based approach = Control is based on a dynamic process model;
Predictive approach = Future behaviour of the plant is predicted in a
future time window (prediction horizon) using the dynamic model;
Optimal control = Goals of process control are expressed as a cost function
to be optimized (minimized);
On-line optimization = At each sample time, a search for a sequence of
future Manipulated Variables (MVs) is conducted (i.e., minimization of the
cost function, with or without constraints);
Receding horizon = Only the first element of the control sequence is
applied to control the process; the whole optimization procedure is repeated
at the next sampling instant.
In general, MPC is not an explicit control law but a control philosophy.
A classical cost function looks as follows:

J = \sum_{i=1}^{p} \left[ y_{\mathrm{ref}}(k+i) - \hat{y}(k+i) \right]^2 + r \sum_{i=0}^{c-1} \Delta u(k+i)^2

where
k is the sampling instant, k ∈ Z, which relates to real time t via t = k T_s,
where T_s is the sampling time;
y_ref(k) is a future reference (set point) at instant k;
ŷ(k) is the prediction of the future plant output at instant k;
Δu(k) is the control move at instant k, Δu(k) = u(k) − u(k−1);
p is the prediction horizon and c the control horizon, specifying the number
of terms in the future that are taken into account when computing the cost
function;
r is a weighting factor, i.e., the ratio of importance between costs due to
deviation from the desired output and costs due to control moves.
Minimization of the classical cost function results in a classical optimization
problem, subject to constraints on the system output, the manipulated variable,
and the change of the manipulated variable:

\min_{\Delta u} J
subject to
y_min < y < y_max
u_min < u < u_max
Δu_min < Δu < Δu_max

where Δu is a sequence of future control moves in the control horizon.
1.2 Simple LTI models
1.2.1 About notation
Before starting, a few words about notation are in place. In general the following
conventions are used:
variables in italic, x, denote scalar variables;
variables in bold, x, denote vectors;
variables in bold capital letters, X, denote matrices;
arguments in parentheses, e.g., x(k), relate the variable x to a sampling
instant k;
additional (non-italic) sub- or superscripts, e.g., p in y_p, denote a different
variable (i.e., y is not the same variable as y_p).
Special care should be taken with the various outfits of the variable u. In
general:
Δu is a vector of future changes in the manipulated variable;
Δu is a scalar component of the vector Δu;
Δu(k) is the vector Δu at instant k,
Δu(k) = [Δu(k), Δu(k+1), ..., Δu(k+c−1)]^T
Δu(k) is a scalar component of the vector Δu(k);
Δu(k+i|k) is the (k+i)th element of the vector Δu, computed based on
information available at instant k.
The choice of notation depends on the occasion. When there is no ambiguity,
the simplest possible notation is used.
Two types of vector notation are in use:
x = [x(1), x(2), ..., x(n)]^T or x = [x(1) x(2) ... x(n)]^T. Both
denote an n × 1 column vector.
Operations between scalars and vectors/matrices are allowed: in x + y the
scalar y is added to each of the components of x element-wise. The same
applies for multiplication and subtraction operations.
Some common shorthand notations are used:
MV denotes the Manipulated Variable, i.e., the control variable (controller
output, plant input);
CV denotes the Controlled Variable (plant output);
sp denotes the setpoint, a reference trajectory defined as a constant.
Figure 1.1: Impulse response.
1.2.2 Finite Impulse Response
For an impulse input

u = [1, 0, 0, ..., 0]^T, i.e., u(0) = 1, u(j) = 0 for all j ≠ 0,

the system output is given by

y = [0, h(1), h(2), h(3), ..., h(n), 0, 0, ...]^T

Figure 1.1 illustrates an impulse response.
The Finite Impulse Response (FIR) h is given by

h = [h(1), h(2), h(3), ..., h(n)]^T

It is assumed that
the system does not react instantaneously to the input (digital systems),
h(0) = 0;
the system transient ends after n instants, h(n + k) = 0 for k > 0.
The dynamics of the system can be fully described with the coefficients of the
FIR model.
Any input

u = [u(0), u(1), u(2), u(3), ...]^T

can be seen as a sum of impulses

u = [1, 0, 0, 0, ...]^T u(0) + [0, 1, 0, 0, ...]^T u(1) + [0, 0, 1, 0, ...]^T u(2) + ...
Consequently, the system output

y = [y(0), y(1), y(2), y(3), ...]^T

can be obtained as

y = [0, h(1), h(2), h(3), ..., h(n), 0, 0, 0, ...]^T u(0)
  + [0, 0, h(1), h(2), ..., h(n−1), h(n), 0, 0, ...]^T u(1)
  + [0, 0, 0, h(1), ..., h(n−2), h(n−1), h(n), 0, ...]^T u(2)
  + ...

\begin{bmatrix} y(0) \\ y(1) \\ y(2) \\ y(3) \\ \vdots \end{bmatrix}
= \begin{bmatrix} 0 \\ h(1)u(0) \\ h(2)u(0) + h(1)u(1) \\ h(3)u(0) + h(2)u(1) + h(1)u(2) \\ \vdots \end{bmatrix}

For the kth output, we can write

y(k) = \sum_{i=1}^{n} h(i)\, u(k-i)

For calculating the kth output, n past inputs are needed (signals with negative
time are taken to be zeros). The impulse response coefficient h(i) shows how
the input applied i instants ago influences the current output, at instant k. If
the process input is an impulse, the sampled process output directly represents
the coefficients h(i) of the FIR model.
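The convolution sum above can be sketched in a few lines of code. The course exercises use Matlab; the following is a Python sketch, and the FIR coefficients used below (those of a hypothetical first-order plant) are for illustration only:

```python
def fir_output(h, u, k):
    """y(k) = sum_{i=1}^{n} h(i) u(k-i); inputs at negative time are zero."""
    n = len(h)
    y = 0.0
    for i in range(1, n + 1):          # i = 1..n
        if k - i >= 0:                 # signals with negative time are zeros
            y += h[i - 1] * u[k - i]   # h is 0-indexed in Python
    return y

# hypothetical FIR of y(k+1) = 0.5 y(k) + 0.2 u(k): h(i) = 0.2 * 0.5**(i-1)
h = [0.2 * 0.5 ** (i - 1) for i in range(1, 11)]

# an impulse input reproduces the FIR coefficients, shifted by one sample
u = [1.0] + [0.0] * 12
print([round(fir_output(h, u, k), 4) for k in range(4)])  # [0.0, 0.2, 0.1, 0.05]
```

Note how the output at k = 0 is zero, reflecting the assumption h(0) = 0 above.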
1.2.3 Finite Step Response
For a unitary step input

u = [1, 1, 1, ..., 1]^T, i.e., u(j) = 1 for all j ≥ 0 and u(j) = 0 for j < 0,

the system output is given by

y = \left[0,\; h(1),\; h(1)+h(2),\; h(1)+h(2)+h(3),\; \dots,\; \sum_{i=1}^{n} h(i),\; \sum_{i=1}^{n} h(i),\; \dots\right]^T
  = [0, s(1), s(2), s(3), ..., s(n), s(n), ...]^T

Figure 1.2 illustrates a step response.
The Finite Step Response (FSR) s is defined by

s = [s(1), s(2), s(3), ..., s(n)]^T.
Figure 1.2: Step response.
It is assumed that
the system does not react instantaneously to the input (digital systems),
s(0) = 0;
the system transient ends after n instants, s(n+1) = s(n+2) = ... = s(∞).
The dynamics of the system can be fully described by just having the coefficients
of the FSR model.
Any input

u = [u(0), u(1), u(2), u(3), ...]^T

can be rewritten as a sum of steps

u = [1, 1, 1, 1, ...]^T u(0) + [0, 1, 1, 1, ...]^T (u(1) − u(0)) + [0, 0, 1, 1, ...]^T (u(2) − u(1)) + ...

Define Δu(k) = u(k) − u(k−1). The system output can be obtained as

y = [y(0), y(1), y(2), y(3), ...]^T
  = [0, s(1), s(2), s(3), ..., s(n), s(n), s(n), ...]^T Δu(0)
  + [0, 0, s(1), s(2), ..., s(n−1), s(n), s(n), ...]^T Δu(1)
  + [0, 0, 0, s(1), ..., s(n−2), s(n−1), s(n), ...]^T Δu(2)
  + ...
\begin{bmatrix} y(0) \\ y(1) \\ y(2) \\ y(3) \\ \vdots \\ y(n) \\ \vdots \end{bmatrix}
= \begin{bmatrix} 0 \\ s(1)\Delta u(0) \\ s(2)\Delta u(0) + s(1)\Delta u(1) \\ s(3)\Delta u(0) + s(2)\Delta u(1) + s(1)\Delta u(2) \\ \vdots \\ s(n)\Delta u(0) + s(n-1)\Delta u(1) + \dots + s(1)\Delta u(n-1) \\ \vdots \end{bmatrix}

For the kth output, we can write

y(k) = \sum_{i=1}^{\infty} s(i)\, \Delta u(k-i)
     = s(n)\, u(k-n) + \sum_{i=1}^{n-1} s(i)\, \Delta u(k-i).   (1.1)

If the process input is a unit step, the sampled process output directly represents
the coefficients s(i) of the FSR model.
1.2.4 Relation between FIR and FSR
The FIR and FSR models, with coefficient vectors h and s, respectively, are
related by

s(k) = \sum_{i=1}^{k} h(i), \qquad h(k) = s(k) - s(k-1).
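The two relations can be checked numerically. A NumPy sketch, with an illustrative coefficient vector that is not taken from the text:

```python
import numpy as np

def fir_to_fsr(h):
    """s(k) = sum_{i=1}^{k} h(i): cumulative sum of impulse-response coefficients."""
    return np.cumsum(h)

def fsr_to_fir(s):
    """h(k) = s(k) - s(k-1), using the assumption s(0) = 0."""
    return np.diff(np.concatenate(([0.0], s)))

h = np.array([0.2, 0.1, 0.05, 0.025])     # hypothetical FIR coefficients
s = fir_to_fsr(h)
print(s)                                  # cumulative sums of h
print(np.allclose(fsr_to_fir(s), h))      # True: the two maps are inverses
```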
1.3 Prediction models
It is convenient to see multiple-step-ahead predictions as consisting of a free
response and a forced response part:
Output predictions = Free response predictions + Forced response predictions
The free response is the system response assuming that the current and future
input changes are zero, Δu(k) = Δu(k+1) = ... = 0. The forced response is the
system response due to the changes in current and future inputs.
1.3.1 Output prediction
Multiple-step-ahead predictions for step response models can be obtained by
writing out the individual predictions ŷ(k|k), ŷ(k+1|k), . . . . The notation
x(i|k) is shorthand for the value of variable x at instant i, given the
information up to and including instant k. In the following, the significance of
this notation comes from Δu(i|k), which denotes the control change at (future)
instant i ≥ k, as determined at the current instant k. The control actions before
k have already taken place and are therefore known, hence they are denoted simply
by Δu(k−1), Δu(k−2), etc.
Using (1.1) for the FSR model, we have the following development:

\hat{y}(k|k) = \underbrace{\sum_{i=1}^{n-1} s(i)\Delta u(k-i) + s(n)u(k-n)}_{f(k|k)}

\hat{y}(k+1|k) = \underbrace{\sum_{i=2}^{n-1} s(i)\Delta u(k+1-i) + s(n)u(k+1-n)}_{f(k+1|k)} + s(1)\Delta u(k|k)

\hat{y}(k+2|k) = \underbrace{\sum_{i=3}^{n-1} s(i)\Delta u(k+2-i) + s(n)u(k+2-n)}_{f(k+2|k)} + s(1)\Delta u(k+1|k) + s(2)\Delta u(k|k)

\vdots   (1.2)

\hat{y}(k+n-1|k) = \underbrace{s(n)u(k-1)}_{f(k+n-1|k)} + \sum_{i=1}^{n-1} s(i)\Delta u(k-i+n-1|k)

\hat{y}(k+n|k) = \underbrace{s(n)u(k-1)}_{f(k+n|k)} + \sum_{i=1}^{n} s(i)\Delta u(k-i+n|k)

In the right-hand-side expressions, the first term is the free response f (a function
of the past inputs). The second term is the forced response (a function of the
current and future changes in the input variable).
Let the vector f collect the predictions of the free response at instant k:

f(k) = [f(k|k), f(k+1|k), f(k+2|k), ..., f(k+n-1|k)]^T   (1.3)
If Δu(k+j) = 0 for all j ≥ 0, the free response equals the system response,
f(k+i|k) = y(k+i).
From instant k+n−1 onwards the free response is a constant: f(k+n−1+j|k) =
f(k+n−1|k) for all j ≥ 0.
The multiple-step-ahead development can be written as

ŷ(k|k) = f(k|k)
ŷ(k+1|k) = f(k+1|k) + s(1)Δu(k|k)
ŷ(k+2|k) = f(k+2|k) + s(1)Δu(k+1|k) + s(2)Δu(k|k)
ŷ(k+3|k) = f(k+3|k) + s(1)Δu(k+2|k) + s(2)Δu(k+1|k) + s(3)Δu(k|k)
...

where in the right-hand expressions the first term is the free response and the
remaining terms make up the forced response. The above can be expressed in
matrix form:

\underbrace{\begin{bmatrix} \hat{y}(k+1|k) \\ \hat{y}(k+2|k) \\ \vdots \\ \hat{y}(k+n|k) \end{bmatrix}}_{\hat{\mathbf{y}}(k+1)}
= \underbrace{\begin{bmatrix} f(k+1|k) \\ f(k+2|k) \\ \vdots \\ f(k+n|k) \end{bmatrix}}_{\mathbf{f}(k+1|k)}
+ \underbrace{\begin{bmatrix} s(1) & 0 & \cdots & 0 \\ s(2) & s(1) & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ s(n) & s(n-1) & \cdots & s(1) \end{bmatrix}}_{\mathbf{G}}
\underbrace{\begin{bmatrix} \Delta u(k|k) \\ \Delta u(k+1|k) \\ \vdots \\ \Delta u(k+n-1|k) \end{bmatrix}}_{\Delta\mathbf{u}(k)}

where f(k+1|k) is the free response prior to knowledge of Δu(k|k). In short,

\hat{\mathbf{y}}(k+1) = \mathbf{M}\mathbf{f}(k) + \mathbf{G}\Delta\mathbf{u}(k)

where Mf is the free response (see the next subsection). G is called the dynamic
matrix; it describes how the current and future input changes affect the
system output (recall that these were to be optimized by the controller), i.e.,
GΔu constitutes the forced response.
1.3.2 Free response recursion
A recursion can be developed for the vector f. At instant k, the vector f is given
by (1.3):

f(k) = [f(k|k), f(k+1|k), f(k+2|k), ..., f(k+n-1|k)]^T.

At instant k + 1,

f(k+1) = [f(k+1|k), f(k+2|k), ..., f(k+n-1|k), f(k+n-1|k)]^T + s Δu(k)

where the last element f(k+n−1|k) is repeated (since the system transient is
assumed to have ended after n instants), and the rightmost term is the change
due to the (step) input Δu which was applied at instant k. Therefore, a matrix
mechanization for f can be given:

f(k+1) = \underbrace{\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & \cdots & & 0 & 1 \\ 0 & \cdots & & 0 & 1 \end{bmatrix}}_{\mathbf{M}} f(k)
+ \underbrace{\begin{bmatrix} s(1) \\ s(2) \\ s(3) \\ \vdots \\ s(n) \end{bmatrix}}_{\mathbf{s}} \Delta u(k).

M is a square matrix with ones on the first superdiagonal (and a one in the
lower-right corner, so that the last element is repeated), and s is the vector
of step response coefficients. In short,

f(k+1) = M f(k) + s Δu(k).   (1.4)
Now that we are in possession of a convenient way to predict the plant's future
behaviour, we would be ready to start building up a basic DMC algorithm. In
fact, we could skip the next section and proceed directly to optimization (using
y_p = f, p = c = n). However, for practical reasons, in the next section we first
develop a prediction model with measured and unmeasured disturbances, and
with two handy parameters: the prediction horizon and the control horizon.
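The recursion (1.4) can be sketched as follows. This is a Python/NumPy sketch rather than Matlab, and the step response values are hypothetical:

```python
import numpy as np

n = 5
s = np.array([0.2, 0.3, 0.35, 0.375, 0.3875])  # hypothetical s(1..n)

# M shifts f up by one element and repeats the last one
M = np.eye(n, k=1)
M[-1, -1] = 1.0

# start from steady state (y0 = 0) and apply a unit step, delta u(k) = 1
f = np.zeros(n)
f = M @ f + s * 1.0      # free response after the step: equals s itself here
print(f)

# with no further input changes, the predictions just shift forward
f = M @ f + s * 0.0      # last value is repeated by M
print(f)
```

The second update illustrates why M repeats the last element: the transient is assumed finished after n samples, so the prediction beyond the horizon stays constant.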
1.4 Prediction model for a plant with disturbances
Let us consider the control problem shown in Fig. 1.3, with a scalar manipulated
variable u, a measured disturbance d, and an unmeasured disturbance w (including
unmodelled dynamics, etc.). Denote the (scalar) system output by y and
the desired reference output by y_ref.
1.4.1 Output prediction
The DMC computations for the free response f can be initiated by assuming
that the system will be in a disturbanceless steady state:

Δu(k) = Δu(k+1) = ... = 0
Δd(k) = Δd(k+1) = ... = 0
w(k) = w(k+1) = ... = 0

Figure 1.3: Block diagram of the considered DMC problem.
Let us now develop an output prediction using information up to instant k. The
output is a sum of the following terms:
the predicted free response, f(k+1), as characterized in (1.2);
the forced response due to the manipulated variable, S_u Δu;
the forced response due to the measured disturbance, S_d Δd;
the response due to unmeasured disturbances, w.
The p-step-ahead output prediction can be written as

\hat{\mathbf{y}} = \mathbf{f}(k+1) + \mathbf{S}_u \Delta\mathbf{u} + \mathbf{S}_d \Delta\mathbf{d} + \mathbf{w}

\begin{bmatrix} \hat{y}(k+1|k) \\ \hat{y}(k+2|k) \\ \vdots \\ \hat{y}(k+p|k) \end{bmatrix}
= \begin{bmatrix} f(k+1|k) \\ f(k+2|k) \\ \vdots \\ f(k+p|k) \end{bmatrix}   (1.5)
+ \begin{bmatrix} s_u(1) & 0 & \cdots & 0 \\ s_u(2) & s_u(1) & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ s_u(p) & s_u(p-1) & \cdots & s_u(1) \end{bmatrix}
\begin{bmatrix} \Delta u(k|k) \\ \Delta u(k+1|k) \\ \vdots \\ \Delta u(k+p-1|k) \end{bmatrix}
+ \begin{bmatrix} s_d(1) & 0 & \cdots & 0 \\ s_d(2) & s_d(1) & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ s_d(p) & s_d(p-1) & \cdots & s_d(1) \end{bmatrix}
\begin{bmatrix} \Delta d(k|k) \\ \Delta d(k+1|k) \\ \vdots \\ \Delta d(k+p-1|k) \end{bmatrix}
+ \begin{bmatrix} w(k+1|k) \\ w(k+2|k) \\ \vdots \\ w(k+p|k) \end{bmatrix}
The matrices S_u and S_d have as many rows as there are predictions in the
horizon (p). If n < p, the missing elements in the step responses s_u and s_d are
obtained by duplicating the last values s_u(n) and s_d(n) of the corresponding
FSR models (recall that the transient was assumed to have ended in n instants).
The change in the measured disturbance at instant k is obtained from Δd(k|k) =
d(k) − d(k−1), where d(k) is the disturbance measured at instant k. The future
values of d and w are not known, so let us make the following assumptions:
The measured disturbance remains constant in the future: Δd(k+1) =
Δd(k+2) = ... = 0.
The unmeasured disturbance at k can be estimated from the difference
between the predicted and measured output at instant k: w(k|k) = y(k) −
ŷ(k), where y(k) is the measured output and ŷ(k) = f(k|k), i.e.,
w(k|k) = y(k) − f(k|k)
The unmeasured disturbances remain constant in the future: w(k+1|k) =
w(k+2|k) = ... = w(k+p|k).
The assumptions are valid if we consider that the system integrates all (measured
and unmeasured) output disturbances, and that both the process output and
disturbance measurements are noiseless. Equation (1.5) then simplifies into
\begin{bmatrix} \hat{y}(k+1|k) \\ \hat{y}(k+2|k) \\ \vdots \\ \hat{y}(k+p|k) \end{bmatrix}
= \begin{bmatrix} f(k+1|k) \\ f(k+2|k) \\ \vdots \\ f(k+p|k) \end{bmatrix}
+ \begin{bmatrix} s_u(1) & 0 & \cdots & 0 \\ s_u(2) & s_u(1) & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ s_u(p) & s_u(p-1) & \cdots & s_u(1) \end{bmatrix}
\begin{bmatrix} \Delta u(k|k) \\ \Delta u(k+1|k) \\ \vdots \\ \Delta u(k+p-1|k) \end{bmatrix}
+ \begin{bmatrix} s_d(1) \\ s_d(2) \\ \vdots \\ s_d(p) \end{bmatrix} \Delta d(k|k)
+ \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \left( y(k) - f(k|k) \right)

\underbrace{\hat{\mathbf{y}}(k+1)}_{\text{prediction}}
= \underbrace{\mathbf{T}\mathbf{f}(k)}_{\text{past}}
+ \underbrace{\mathbf{s}_d \Delta d(k) + (y(k) - f(k|k))}_{\text{present}}
+ \underbrace{\mathbf{G}\Delta\mathbf{u}(k)}_{\text{future}}.   (1.6)

The prediction thus consists of: a free response Tf due to the past system life (see
the subsection that follows), feedforward (measured disturbance) and feedback
(bias) terms based on the present system status, and a term GΔu due to future
actions on the plant (to be determined in the optimization).
1.4.2 Free response
In the earlier notation, the free response f was a column vector of n elements,
see (1.3). However, a p-step-ahead prediction was to be determined. Therefore
equation (1.6) introduced a matrix T of size p × n. This matrix depends on the
number of output predictions p:
if p > n, T displaces f and repeats the last element a sufficient number of
times;
if p = n, T displaces f and repeats the last element;
if p < n, T displaces f and cuts it to have p elements only.
\mathbf{T} = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
0 & \cdots & & 0 & 1 \\
0 & \cdots & & 0 & 1 \\
\vdots & & & & \vdots \\
0 & \cdots & & 0 & 1
\end{bmatrix}_{p \times n}, \quad p > n   (1.7)

\mathbf{T} = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
0 & \cdots & & 0 & 1 \\
0 & \cdots & & 0 & 1
\end{bmatrix}_{p \times n}, \quad p = n   (1.8)

\mathbf{T} = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 & \cdots & 0 \\
0 & 0 & 1 & \ddots & \vdots & & \vdots \\
\vdots & & \ddots & \ddots & 0 & & \vdots \\
0 & \cdots & & 0 & 1 & 0 & \cdots\ 0
\end{bmatrix}_{p \times n}, \quad p < n   (1.9)
For the DMC algorithm with a manipulated variable and a measured disturbance,
the update recursion for the free response (1.4) needs to be extended
to include also the feedforward term:

f(k+1) = \underbrace{\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & \cdots & & 0 & 1 \\ 0 & \cdots & & 0 & 1 \end{bmatrix}}_{\mathbf{M}} f(k)
+ \underbrace{\begin{bmatrix} s_u(1) \\ s_u(2) \\ s_u(3) \\ \vdots \\ s_u(n) \end{bmatrix}}_{\mathbf{s}_u} \Delta u(k)
+ \underbrace{\begin{bmatrix} s_d(1) \\ s_d(2) \\ s_d(3) \\ \vdots \\ s_d(n) \end{bmatrix}}_{\mathbf{s}_d} \Delta d(k)   (1.10)

= \mathbf{M} f(k) + \mathbf{s}_u \Delta u(k) + \mathbf{s}_d \Delta d(k)

where M is an n × n square matrix, and s_u and s_d are models with the same
number of coefficients.
1.4.3 Control horizon
The control horizon c can be introduced as a handy tuning parameter. The
control horizon means that only c future control moves are considered in the
optimization. The remaining changes are set to zero:

Δu(k+c|k) = Δu(k+c+1|k) = ... = Δu(k+p−1|k) = 0.
The output prediction can then be rewritten as

\begin{bmatrix} \hat{y}(k+1|k) \\ \hat{y}(k+2|k) \\ \vdots \\ \hat{y}(k+p|k) \end{bmatrix}
= \begin{bmatrix} f(k+1|k) \\ f(k+2|k) \\ \vdots \\ f(k+p|k) \end{bmatrix}
+ \begin{bmatrix}
s_u(1) & 0 & \cdots & 0 & 0 & \cdots & 0 \\
s_u(2) & s_u(1) & \ddots & \vdots & \vdots & & \vdots \\
\vdots & s_u(2) & \ddots & 0 & \vdots & & \vdots \\
\vdots & \vdots & \ddots & s_u(1) & 0 & & \vdots \\
\vdots & \vdots & & s_u(2) & \ddots & \ddots & 0 \\
s_u(p) & s_u(p-1) & \cdots & s_u(p-c+1) & s_u(p-c) & \cdots & s_u(1)
\end{bmatrix}
\begin{bmatrix} \Delta u(k|k) \\ \Delta u(k+1|k) \\ \vdots \\ \Delta u(k+c-1|k) \\ 0 \\ \vdots \\ 0 \end{bmatrix}
+ \begin{bmatrix} s_d(1) \\ s_d(2) \\ \vdots \\ s_d(p) \end{bmatrix} \Delta d(k|k)
+ \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \left( y(k) - f(k|k) \right)
Finally, we now have the output prediction with a prediction horizon p and
a control horizon c, given by two terms:

\hat{\mathbf{y}}(k+1) = \underbrace{\mathbf{T}\mathbf{f}(k) + \mathbf{s}_d \Delta d(k) + (y(k) - f(k|k))}_{\mathbf{y}_p(k+1)} + \mathbf{G}\Delta\mathbf{u}(k)   (1.11)
= \mathbf{y}_p(k+1) + \mathbf{G}\Delta\mathbf{u}(k)

where

\mathbf{G} = \begin{bmatrix}
s_u(1) & 0 & \cdots & 0 \\
s_u(2) & s_u(1) & \ddots & \vdots \\
\vdots & s_u(2) & \ddots & 0 \\
\vdots & \vdots & \ddots & s_u(1) \\
\vdots & \vdots & & s_u(2) \\
s_u(p) & s_u(p-1) & \cdots & s_u(p-c+1)
\end{bmatrix}   (1.12)

The term

\mathbf{y}_p(k+1) = [y_p(k+1), y_p(k+2), \dots, y_p(k+p)]^T

is due to the past (free response) and present (feedforward and feedback) terms.
The term GΔu is due to future control moves, where the dynamic matrix
G is of size p × c.
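Building the p × c dynamic matrix of eq. (1.12), including the duplication of s_u(n) when p exceeds the model length, can be sketched as follows (Python/NumPy rather than Matlab; the coefficients are illustrative):

```python
import numpy as np

def dynamic_matrix(s_u, p, c):
    """Build the p-by-c dynamic matrix G of eq. (1.12) from FSR coefficients.
    If p > n, the last coefficient s_u(n) is repeated (transient assumed
    finished after n samples)."""
    s = np.asarray(s_u, dtype=float)
    if p > len(s):
        s = np.concatenate([s, np.full(p - len(s), s[-1])])
    G = np.zeros((p, c))
    for j in range(c):
        G[j:, j] = s[: p - j]          # column j is s shifted down by j rows
    return G

s_u = [0.2, 0.3, 0.35, 0.375]          # hypothetical step response
G = dynamic_matrix(s_u, p=4, c=2)
print(G[:, 0])                         # first column is s_u itself
print(G[0, 1])                         # 0.0: upper triangle is zero
```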
1.5 Optimization
The goal of the DMC algorithm is to find an MV sequence Δu that results in a
(forced) response GΔu such that the plant output is close to the desired future
reference sequence y_ref. The future reference sequence is often defined by a set
point y_sp. In such a case we wish to find

\mathbf{G}\Delta\mathbf{u} + \mathbf{y}_p = \mathbf{y}_{sp}   (1.13)

This problem has p equations (number of predictions) and c unknowns (number
of future control moves). If p = c, (1.13) has a unique solution.
If c < p, as is usually the case, there is in general no exact solution. Instead,
we may want to find a vector Δu which minimizes the sum of squared
errors, with ε = GΔu − (y_sp − y_p):

J = \varepsilon^T \varepsilon
= \left[ \mathbf{G}\Delta\mathbf{u} - (\mathbf{y}_{sp} - \mathbf{y}_p) \right]^T \left[ \mathbf{G}\Delta\mathbf{u} - (\mathbf{y}_{sp} - \mathbf{y}_p) \right]
= \left[ \mathbf{y}_p + \mathbf{G}\Delta\mathbf{u} - \mathbf{y}_{sp} \right]^T \left[ \mathbf{y}_p + \mathbf{G}\Delta\mathbf{u} - \mathbf{y}_{sp} \right]
= \left[ \hat{\mathbf{y}} - \mathbf{y}_{sp} \right]^T \left[ \hat{\mathbf{y}} - \mathbf{y}_{sp} \right] = \| \hat{\mathbf{y}} - \mathbf{y}_{sp} \|^2.
The optimization problem

\min_{\Delta\mathbf{u}} J

can be solved by noticing that J is quadratic and setting the derivative of the
quadratic function to zero. We have

\frac{\partial J}{\partial \Delta\mathbf{u}} = 2\mathbf{G}^T \left[ \mathbf{G}\Delta\mathbf{u} - (\mathbf{y}_{sp} - \mathbf{y}_p) \right] = 0
\mathbf{G}^T\mathbf{G}\,\Delta\mathbf{u} = \mathbf{G}^T (\mathbf{y}_{sp} - \mathbf{y}_p)
\Delta\mathbf{u} = \left( \mathbf{G}^T\mathbf{G} \right)^{-1} \mathbf{G}^T (\mathbf{y}_{sp} - \mathbf{y}_p)

This optimum is a minimum if the second derivative G^T G is positive definite
(a matrix A is positive definite if x^T A x > 0 for all x ≠ 0).
It is often advantageous to add a weighting factor λ for the control moves
to the cost function:

J = \varepsilon^T \varepsilon + \lambda\, \Delta\mathbf{u}^T \Delta\mathbf{u}

with the solution

\Delta\mathbf{u} = \underbrace{\left( \mathbf{G}^T\mathbf{G} + \lambda\mathbf{I} \right)^{-1} \mathbf{G}^T}_{\mathbf{H}} \underbrace{(\mathbf{y}_{sp} - \mathbf{y}_p)}_{\mathbf{e}} = \mathbf{H}\mathbf{e}.   (1.14)

This helps to avoid invertibility problems, and (in the unconstrained case) to
formulate problems which result in a smoother sequence of control actions.
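The solution matrix H of eq. (1.14) reduces to one linear solve. A Python/NumPy sketch, where the value λ = 0.1 and the matrix entries are arbitrary illustrative choices:

```python
import numpy as np

def dmc_gain(G, lam):
    """Solution matrix H of eq. (1.14): delta_u = H (y_sp - y_p)."""
    c = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(c), G.T)

G = np.array([[0.2,   0.0],
              [0.3,   0.2],
              [0.35,  0.3],
              [0.375, 0.35]])
H = dmc_gain(G, lam=0.1)
e = np.ones(4)                 # unit setpoint error over the whole horizon
du = H @ e
print(du.shape)                # (2,): c future control moves
```

Note that solving the regularized normal equations, rather than forming the explicit inverse, is the numerically preferable way to compute H.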
1.6 DMC algorithm
Let us now collect the results into a DMC algorithm. The algorithm consists of
two phases: an initial phase (off-line), and the actual controller (on-line).
1.6.1 Off-line
To construct the DMC controller, the following are needed as input data:
the step response model for the control variable, s_u (n × 1);
the step response model for the disturbance variable, s_d (n × 1);
the prediction horizon p;
the control horizon c;
the weighting factor for control moves, λ.
Off-line computations include:
computing the dynamic matrix G, eq. (1.12);
constructing the binary matrices M and T, equations (1.10) and (1.7)-(1.9);
computing the solution matrix H, eq. (1.14).
Assuming that there are initially no disturbances, the recursive calculation
of the n × 1 vector f can be initialized with

f(k) = [y_0, y_0, ..., y_0]^T

where y_0 is the steady-state output of the system with input u_0 (the system's
initial state).
1.6.2 On-line
The following steps need to be conducted on-line in the control loop:
1. Obtain the output measurement y (k) and the measurement of the distur-
bance d (k). Compute bias b and change in disturbance d:
b (k) = y (k) f (k[k)
d (k) = d (k) d (k 1)
where f (k[k) is the rst element in f (k).
2. Obtain the desired output setpoint y
sp
(k + 1). Compute the prediction
y
p
(1.11) and error e (1.14):
y
p
(k + 1) = Tf (k) +s
d
d (k) + b (k)
e (k + 1) = y
sp
(k + 1) y
p
(k + 1)
21
3. Compute the optimal future changes in the manipulated variable (1.14):
u(k) = He (k + 1)
4. Apply the rst element u(k) of u(k) to control the process. If the
actual value for the MV is needed, use u(k) = u(k 1) + u(k).
5. Compute the free response prediction (1.10)
f (k + 1) = Mf (k) +s
u
u(k) +s
d
d (k)
6. At next sample time, increase the sampling index k := k + 1, and goto
Step 1.
The above algorithm presents the basic version of the simple, yet powerful,
DMC predictive controller. It uses the simplest possible system model, the FSR.
Identification of an FSR model from data is straightforward, or the parameters
can be obtained by simulation of other types of plant models. DMC is a linear
optimal SISO (single input - single output) controller, based on the use of an
FSR plant model.
A major restriction of the DMC is that it does not take constraints into
account. Constraints are always present in real control problems. In some
simple cases the input constraints can be dealt with via the parameter λ. For a
more systematic handling of the constraints, a quadratic version of the DMC is
available. This will be considered in the next chapter.
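The whole on-line loop can be sketched for the nominal, disturbance-free case. This is a Python sketch rather than Matlab; the plant (exercise 1(a) below), the horizons, and λ are illustrative choices, and the measured-disturbance terms are omitted since d = 0 throughout:

```python
import numpy as np

# hypothetical plant y(k+1) = 0.5 y(k) + 0.2 u(k); its FSR is s(i) = 0.4 (1 - 0.5**i)
n, p, c, lam = 20, 10, 3, 0.01
s_u = np.array([0.4 * (1.0 - 0.5 ** i) for i in range(1, n + 1)])

# off-line phase: dynamic matrix G (p x c), solution matrix H, matrices M and T
G = np.zeros((p, c))
for j in range(c):
    G[j:, j] = s_u[: p - j]
H = np.linalg.solve(G.T @ G + lam * np.eye(c), G.T)
M = np.eye(n, k=1)
M[-1, -1] = 1.0                       # last element of f is repeated
T = np.eye(n, k=1)[:p, :]             # p < n here: displace f and cut to p elements

# on-line phase (steps 1-6), starting from steady state y0 = 0
f = np.zeros(n)
y, u, y_sp = 0.0, 0.0, 1.0
for k in range(40):
    b = y - f[0]                      # step 1: bias (no measured disturbance)
    e = y_sp - (T @ f + b)            # step 2: error over the prediction horizon
    du = (H @ e)[0]                   # steps 3-4: apply only the first move
    u = u + du
    f = M @ f + s_u * du              # step 5: free response update
    y = 0.5 * y + 0.2 * u             # the 'real' plant responds
print(round(y, 3))                    # settles near the setpoint 1.0
```

With a perfect model the bias b stays zero; with plant-model mismatch it is precisely this bias term that provides the feedback correction (cf. exercise 6).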
1.7 Exercises
1. Build a process simulator which computes the plant output at the next
sampling instant. Simulate the process a few samples ahead. The process
is given by:
(a) y(k+1) = 0.5 y(k) + 0.2 u(k)
(b) Y(z^{-1})/U(z^{-1}) = G(z^{-1}) = z^{-5} / (1 - 0.9 z^{-1})
(c) Y(s)/U(s) = G(s) = (s - 1) / ((2s + 1)(5s + 1)), sampling time 1 s.
(d) 10 dy/dt(t) = -y(t) + (1/10) u(t), sampling time 1 s.
2. Calculate a few sampling instants of DMC by hand...
3. Code the DMC algorithm in Matlab.
4. Simulate the closed-loop behavior of a DMC-controlled process and
experiment with various tuning parameters (c = 1, 2, p; p = n, n/2, n/3;
λ = 1/1000, 1/10, 1; s_d = 0, 1), when the disturbance is passed directly to the
output.
5. *Examine the effect of the sampling time and n.
6. *Examine the effect of plant-model mismatch in exercise 2. Consider:
mismatch in static gain, dynamics {faster, slower}, system order, etc.
See Ex_DMC0.m (exercise 1) and Ex_DMC.m (exercises 3-4).
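For case (a) of exercise 1, a minimal simulator sketch (in Python rather than the course's Matlab):

```python
# minimal simulator for exercise 1(a): y(k+1) = 0.5 y(k) + 0.2 u(k)
def simulate(u_seq, y0=0.0):
    """Return [y(0), y(1), ..., y(N)] for the input sequence u(0..N-1)."""
    y = [y0]
    for u in u_seq:
        y.append(0.5 * y[-1] + 0.2 * u)
    return y

# a unit step reveals the FSR coefficients directly: s(i) = 0.4 (1 - 0.5**i)
print([round(v, 3) for v in simulate([1.0] * 4)])  # [0.0, 0.2, 0.3, 0.35, 0.375]
```

The same few lines, with the recursion replaced, cover case (d) after discretization; cases (b) and (c) need a short delay buffer and a zero-order-hold discretization, respectively.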
1.8 Advanced Process Control and Industrial MPC
MPC is a commonly used abbreviation for Model Predictive Control. On some
occasions, the same abbreviation is used for Model-based Process Control, which
denotes all control approaches making use of a process model (including model
predictive control). In industry, MPC has become a synonym for multivariable
control and advanced process control. Advanced process control aims at "high
level" tasks such as
increased product throughput and yield of high-valued products;
decreased energy consumption, pollution, and off-spec products;
improved safety and equipment life;
improved operability and decreased production labor.
These tasks relate to control engineering challenges, such as
minimizing product variability and delivering a product that consistently
meets customer specifications;
improving the operating range and reliability of plant production;
maximizing the cost benefits (of implementing and supporting control and
information systems);
meeting safety and regulatory (environmental) requirements;
maximizing asset utilization and plant flexibility.
The classical operational hierarchy can be seen as layers, as illustrated in Fig.
1.4. The layered structure consists of plant-wide optimization (in the time scale
of days) using real-time optimization (RTO); unit-level optimizers (hours);
advanced controls (minutes) with logic selectors, ratios, overrides, cascades, ...;
low-level controls (seconds) consisting of flow and level controls etc. implemented
with PIDs, ...; and finally the process actuators: valves, pumps, screws,
belts, etc.
The introduction of MPC affects in particular the layer of advanced control,
where MPC has the potential to replace the various cascade, feedforward, etc.
control structures by a model-based optimization approach.
In short, the MPC methodology
Figure 1.4: Classical operational hierarchy.
uses a dynamic model to make predictions (linear, nonlinear);
defines a cost function (quadratic, ...);
uses an optimization method to minimize the costs (numerically, heuristically,
...);
involves a method to handle constraints. The most profitable process
operation is often obtained when a process is running at a constraint.
Constraints may be associated with direct costs, e.g., energy consumption,
or control signals (for example saturation of flow rates or valve positions).
Typical examples of problem formulation are:
a linear state equation [and measurement equation]

x(k+1) = A x(k) + B u(k)
y(k) = C x(k)

a quadratic cost function

J = x^T Q x + u^T R u

linear constraints

H x + G u ≤ 0

and a quadratic program. A, B, C, Q, R, H, and G are matrices of
suitable dimension describing the system, costs, and constraints. The DMC
is an example of an approach in this category. The constrained case will
be covered in the next chapter.
a nonlinear state equation [and measurement equation]

x(k+1) = f(x(k), u(k))
y(k) = g(x(k))

a general cost function

J = F(x, u)

linear and nonlinear constraints

h(x, u) ≤ 0

and a nonlinear program. f, g, F, and h are suitable functions describing
the system, costs, and constraints.
1.8.1 History of MPC
MPC was originally proposed independently by several pioneering industrial
practitioners. It was proposed without a deep understanding of its theoretical
properties, with just basic algorithms and plain models. Early applications
were in petrochemical processes.
1960s:
Optimal control, Linear Quadratic Regulator (LQR): no constraints,
no uncertainty considerations, quadratic indexes
late 1970s:
Richalet's IDCOM: FIR model, quadratic costs, ad hoc treatment of
constraints, iterative least squares solution
Cutler's DMC: FSR model, quadratic costs, no constraints, least
squares solution
early 1980s:
Cutler's QDMC: linear inequality constraints for inputs and outputs,
Quadratic Programming (QP) solutions
late 1980s to early 1990s:
feasibility issues are treated
constraint classification (hard, soft, ranked)
state-space models and state estimation (Kalman filter)
late 1990s:
multiple levels of optimization (economic optimization)
simple plant-model mismatch considerations
2000s:
nonlinear predictive control
first-principles models, neural network models, adaptive control
Many commercial vendors with packages (different models, objective
functions, etc.): Aspen Tech DMC Plus; Honeywell Profit NLC;
Siemens Mod PreCon PCS7; Andritz Automation Brainwave;
ABB Predict & Pro; ...
1.8.2 Pros, cons and challenges
The keys to MPC's success in industry are
- intuitive concepts (practicality), e.g., inclusion of economic aspects inside the controller;
- applicability to a wide range of processes (including long time delays and non-minimum phase systems);
- straightforward extension to the multivariable case;
- unified treatment of load changes and setpoint changes;
- feedforward control is easily included in the design (no need for separate decoupling or delay compensation);
- ability to handle constraints on process variables;
- ability to handle future references (cf. batch processes);
- easy implementation (however, in MPC the tolerable control update rate is relatively low; luckily this is usually acceptable for applications in the process industry);
- open methodology, with many active research lines:
  - linear/nonlinear modelling and identification (e.g., first principle models, computational intelligence methods, ...);
  - nonlinear/stochastic optimization (dynamic programming, evolutionary algorithms, ...), explicit (pre-computed) algorithms;
  - combinations with other domains of control engineering (adaptive/learning control, state estimation/monitoring, robust control/performance analysis, ...).
Major challenges in industrial MPC:
- It may be difficult and time-consuming to obtain empirical models from exhaustive plant tests => need to simplify the model development process.
- Controller stability and robustness are shown only by simulations => need for robust MPC with guaranteed feasibility and stability properties.
- There is a lack of sensors for key process variables => need to improve state estimation.
- Computational complexity/load can be high => need to develop approximate solutions.
- It is difficult to cope with uncertainties in the real world => need to create models with uncertainty information, and/or estimate parameters/states on-line, and/or use robust optimization techniques.
Additional future development lines include: decentralized MPC (for plant-wide cooperative tasks), MPC for hybrid systems (mixing of continuous states with finite states), ...
Finally, it is worth noting that the introduction of MPC may change the scheduling of industrial control projects. With classical controls, projects start with process analysis but most of the time is spent on design and tuning of controllers. In MPC projects the availability of process knowledge becomes more important. The role of modelling and identification tasks is greatly emphasized, and that of tuning is decreased.
Chapter 2
Quadratic DMC (QDMC)
In real control problems, very often some of the control specifications can be
expressed as constraints:
- input (MV) constraints,
- rate-of-change (MV) input constraints,
- output (CV) constraints,
- constraints on other outputs of interest.
These types of constraints can be expressed as inequalities that depend on the
future control moves Δu.
2.1 Input-output constraints
The essential idea in this section is to convert various typical constraints into
LE (less-than-or-equal) inequality constraints in Δu.
2.1.1 Constraints in change of MV
Consider upper and lower bound constraints on the change of the control variable:

  Δu_min ≤ Δu(k) ≤ Δu_max

This contains both an LE and a GE (greater-than-or-equal) constraint. These
can be converted into LE constraints as

  Δu ≤ Δu_max
  −Δu ≤ −Δu_min
29
Recall that Δu is a vector containing all future changes of the control variable,

  Δu(k) = [ Δu(k), Δu(k+1), ..., Δu(k+c−1) ]ᵀ

The minimum and maximum are usually defined for all elements Δu(j) of this
vector. In matrix form

  [  I ]             [ Δu_max,1 ; Δu_max,2 ; ... ; Δu_max,c    ]
  [ −I ] Δu(k)  ≤    [ −Δu_min,1 ; −Δu_min,2 ; ... ; −Δu_min,c ]
    A₁     x                           b₁

  A₁ x ≤ b₁

Often the constraints on Δu are fixed; they do not depend on real time, or on
time relative to the horizon as above. Then Δu_max(j) = Δu_max and
Δu_min(j) = Δu_min for all j.
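As an illustration of the stacking above, here is a Python/numpy sketch (the course material uses Matlab; the function name and numerical values below are made up for illustration):

```python
import numpy as np

def rate_constraints(du_min, du_max, c):
    """Stack du_min <= du <= du_max as LE constraints A1 x <= b1 on the move vector."""
    I = np.eye(c)
    A1 = np.vstack([I, -I])                      # [I; -I]
    b1 = np.concatenate([np.full(c, du_max),     # upper bounds
                         np.full(c, -du_min)])   # lower bounds, sign-flipped
    return A1, b1

A1, b1 = rate_constraints(du_min=-0.5, du_max=0.5, c=3)
du = np.array([0.2, -0.3, 0.5])                  # all moves within the bounds
print(np.all(A1 @ du <= b1))                     # True
```

A move vector violating any single bound makes at least one row of A1 @ du exceed b1.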
2.1.2 Constraints in MV
Consider upper and lower bound constraints on the actual value of the manipulated variable:

  u_min ≤ u(k) ≤ u_max

The upper bound constraints

  u(k) ≤ u_max,1
  u(k+1) ≤ u_max,2
  ...
  u(k+c−1) ≤ u_max,c

can, since u(k+i) = u(k−1) + Δu(k) + ... + Δu(k+i), be rewritten as

  Δu(k) ≤ u_max,1 − u(k−1)
  Δu(k+1) + Δu(k) ≤ u_max,2 − u(k−1)
  ...
  Δu(k+c−1) + ... + Δu(k) ≤ u_max,c − u(k−1)
A similar development can be made with the lower bound constraints. Writing
the combined results in matrix form gives

  [  I_L ]            [ u_max,1 − u(k−1) ; ... ; u_max,c − u(k−1)   ]
  [ −I_L ] Δu(k)  ≤   [ −u_min,1 + u(k−1) ; ... ; −u_min,c + u(k−1) ]
     A₂      x                            b₂

  A₂ x ≤ b₂

where I_L is a binary lower triangular matrix:

        [ 1 0 ... 0 0 ]
        [ 1 1 ... 0 0 ]
  I_L = [ :        :  ]                                          (2.1)
        [ 1 1 ... 1 0 ]
        [ 1 1 ... 1 1 ]
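The same construction in a Python/numpy sketch (illustrative names and values; the course code is in Matlab):

```python
import numpy as np

def mv_constraints(u_min, u_max, u_prev, c):
    """Absolute MV bounds rewritten as LE constraints A2 x <= b2 on the moves."""
    IL = np.tril(np.ones((c, c)))                     # binary lower triangular matrix (2.1)
    A2 = np.vstack([IL, -IL])
    b2 = np.concatenate([np.full(c, u_max) - u_prev,      # u_max,i - u(k-1)
                         -(np.full(c, u_min) - u_prev)])  # -u_min,i + u(k-1)
    return A2, b2

A2, b2 = mv_constraints(u_min=0.0, u_max=1.0, u_prev=0.4, c=3)
du = np.array([0.3, 0.2, -0.1])      # cumulative u: 0.7, 0.9, 0.8 -> inside [0, 1]
print(np.all(A2 @ du <= b2))         # True
```

Note how IL accumulates the moves so each row bounds the actual MV value at that future instant.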
2.1.3 Constraints in output
Consider constraints on the output variable,

  y_min ≤ ŷ ≤ y_max

i.e.,

  y_min,1 ≤ ŷ(k+1) ≤ y_max,1
  y_min,2 ≤ ŷ(k+2) ≤ y_max,2
  ...
  y_min,p ≤ ŷ(k+p) ≤ y_max,p

Recall that the predictions can be obtained from (1.11),

  ŷ = y_p + G Δu

We can rewrite the constraints as LE constraints:

  G Δu + y_p ≤ y_max
  −(G Δu + y_p) ≤ −y_min
In matrix form:

  [  G ]             [ y_max − y_p(k+1)  ]
  [ −G ] Δu(k)  ≤    [ −y_min + y_p(k+1) ]
    A₃     x                  b₃

  A₃ x ≤ b₃
2.1.4 Combination of constraints
The three types of input-output constraints represent the most important types
of constraints in the control of industrial processes. They can be merged into
one complete constraint system:

  A Δu ≤ b

  [ A₁ ]         [ b₁ ]
  [ A₂ ] Δu  ≤   [ b₂ ]
  [ A₃ ]         [ b₃ ]
Writing this out gives:

  [  I   ]         [ Δu_max,1 ; ... ; Δu_max,c                     ]
  [ −I   ]         [ −Δu_min,1 ; ... ; −Δu_min,c                   ]
  [  I_L ] Δu  ≤   [ u_max,1 − u(k−1) ; ... ; u_max,c − u(k−1)     ]      (2.2)
  [ −I_L ]         [ −u_min,1 + u(k−1) ; ... ; −u_min,c + u(k−1)   ]
  [  G   ]         [ y_max,1 − y_p(k+1) ; ... ; y_max,p − y_p(k+p) ]
  [ −G   ]         [ −y_min,1 + y_p(k+1) ; ... ; −y_min,p + y_p(k+p) ]

  A Δu ≤ b
Observe that the matrix A is the same at all control instants, and it can therefore be
pre-computed off-line. Vector b can be different at each sample time, however.
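This off-line/on-line split can be sketched in Python/numpy as follows (a placeholder dynamic matrix and made-up limits, for illustration only):

```python
import numpy as np

c, p = 3, 5
G = np.tril(np.ones((p, c)))                     # placeholder dynamic matrix
IL = np.tril(np.ones((c, c)))

# A is fixed: pre-compute it off-line
A = np.vstack([np.eye(c), -np.eye(c), IL, -IL, G, -G])

def rhs(du_lim, u_lim, y_lim, u_prev, y_p):
    """b depends on u(k-1) and the free-response prediction y_p: rebuild each sample."""
    b1 = np.concatenate([np.full(c, du_lim[1]), np.full(c, -du_lim[0])])
    b2 = np.concatenate([np.full(c, u_lim[1]) - u_prev,
                         -np.full(c, u_lim[0]) + u_prev])
    b3 = np.concatenate([np.full(p, y_lim[1]) - y_p,
                         -np.full(p, y_lim[0]) + y_p])
    return np.concatenate([b1, b2, b3])

b = rhs(du_lim=(-0.5, 0.5), u_lim=(0.0, 1.0), y_lim=(-1.0, 2.0),
        u_prev=0.4, y_p=np.zeros(p))
print(A.shape, b.shape)                          # 4c + 2p = 22 rows
```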
2.2 Optimization
The control problem is now stated as follows:

  min_Δu J

where

  J = ε̃ᵀ ε̃ + λ Δuᵀ Δu

subject to A Δu ≤ b
The constraint system A Δu ≤ b encompasses the input-output constraints:

  Δu_min ≤ Δu(k) ≤ Δu_max
  u_min ≤ u(k) ≤ u_max
  y_min ≤ ŷ ≤ y_max

There is no analytical solution available for this problem, and it has to be
solved numerically. In order to solve it, the control problem can be rewritten
as a well-known quadratic programming (QP) optimization problem. We
can then take advantage of the numerical routines available in various software
environments, such as Matlab (quadprog).
2.2.1 Control problem as QP
Quadratic programming provides the means to solve convex optimization problems.
If the problem is feasible (a solution exists), the QP is guaranteed to
converge and find the global minimum. The QP solves a problem of the form

  min_x ( (1/2) xᵀ H x + cᵀ x )                      (2.3)
  subject to A x ≤ b                                  (2.4)
We have already developed the constraints into the form Ax ≤ b. We still need to
write the DMC cost function in the above form. Reordering terms and deleting
terms which do not depend on Δu:

  J = ε̃ᵀ ε̃ + λ Δuᵀ Δu
    = (G Δu − e)ᵀ (G Δu − e) + λ Δuᵀ Δu
    = (Δuᵀ Gᵀ − eᵀ)(G Δu − e) + λ Δuᵀ Δu
    = Δuᵀ (Gᵀ G + λ I) Δu − 2 eᵀ G Δu + eᵀ e          (2.5)
    = Δuᵀ H Δu + cᵀ Δu + constant

where e = y_sp − y_p. The constant term eᵀe does not influence the location of the
minimum and can be omitted.
Comparing with the QP problem setup (2.3)-(2.4), the problems now have
the same form (substituting Δu for x and minimizing J/2 instead of J, which
has the same minimizer). H is called the Hessian, and cᵀ is the gradient vector:

  H = Gᵀ G + λ I
  cᵀ = −(Gᵀ e)ᵀ

The Hessian is a constant matrix, while the gradient vector changes at each
sampling instant.
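In Matlab one would call quadprog(H, c, A, b). As a library-free Python/numpy sketch (a toy dynamic matrix; a simple projected-gradient iteration stands in for a proper QP solver, which only works here because the constraints are plain box bounds):

```python
import numpy as np

p, c, lam = 5, 3, 0.1
G = np.tril(np.ones((p, c)))                 # toy dynamic matrix (unit-step FSR)
e = np.ones(p)                               # setpoint minus free response
H = G.T @ G + lam * np.eye(c)                # Hessian: constant, pre-compute off-line
cvec = -(G.T @ e)                            # gradient vector: changes every sample

lo, hi = -0.3, 0.3                           # box bounds on the moves
x = np.zeros(c)
step = 1.0 / np.linalg.norm(H, 2)            # 1/L step for a convex quadratic
for _ in range(2000):
    x = np.clip(x - step * (H @ x + cvec), lo, hi)   # gradient step + projection

print(np.round(x, 3))                        # all three moves saturate at 0.3
```

A general QP solver is needed once the MV and CV rows of (2.2) enter, since those constraints couple the moves.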
2.2.2 *Quadratic programming algorithms
The execution time of QP routines depends on
- the number of optimization variables (control horizon, c);
- the number of constraints: a priori 4c + 2p.
QP problems can be infeasible if there is no combination of control actions that
satisfies the constraints. Standard QP routines simply give up when the problem is
infeasible. Consequently, in industrial on-line control applications the solution
procedures need to be modified so that some solution is always provided. Ad hoc
strategies include:
- keep the input variable constant (apply the same input as in the past);
- apply the control input Δu(k+1|k) proposed at the previous optimization round;
- use some constraint handling technique, such as ordering of constraints
and relaxation of the less important ones. This will be considered in more
detail in a later section.
2.3 QDMC algorithm
Let us now collect the results into a QDMC algorithm. As with DMC, the
algorithm consists of two phases: an initial phase (off-line), and the actual
controller (on-line).
2.3.1 Off-line
To construct the QDMC controller, the following are needed as input data (a
star * denotes items additional to the DMC algorithm presented earlier):
- step response model for the control variable, s_u;
- step response model for the disturbance variable, s_d;
- prediction horizon p;
- control horizon c;
- weighting factor for control moves, λ;
- * MV limits (variable and rate of change), CV limits.
Off-line computations include:
- computing the dynamic matrix G, eq. (1.12);
- * constructing the binary matrices M, T, and I_L; equations (1.10), (1.7)-(1.9), and (2.1);
- * constructing the fixed part of the constraints, matrix A, equation (2.2);
- * computing the Hessian H, eq. (2.5). Note that this H is different from the one used in DMC (1.14).
As with DMC, the recursive calculation of the n×1 vector f can be initialized
with

  f(k) = [ y₀, y₀, ..., y₀ ]ᵀ

where y₀ is the steady-state output of the system with input u₀.
2.3.2 On-line
The following steps need to be conducted on-line in the control loop (a star *
denotes steps additional to/different from DMC):
1. Obtain the output measurement y(k) and the measurement of the disturbance d(k). Compute the bias b and the change in disturbance Δd:

  b(k) = y(k) − f(k|k)
  Δd(k) = d(k) − d(k−1)

where f(k|k) is the first element in f(k).
2. Obtain the desired output setpoint y_sp(k+1). Compute the prediction y_p and error e:

  y_p(k+1) = T f(k) + s_d Δd(k) + b(k)
  e(k+1) = y_sp(k+1) − y_p(k+1)

3. * Compute the gradient vector c:

  c = −Gᵀ e(k+1)

4. * Compute the right-hand part of the constraints, vector b, eq. (2.2).
5. * Solve the optimal future changes in the manipulated variable Δu(k) by solving the QP problem.
6. Apply the first element Δu(k) of Δu(k) to control the process. If the actual value of the MV is needed, use u(k) = u(k−1) + Δu(k).
7. Compute the free response prediction (1.10):

  f(k+1) = M f(k) + s_u Δu(k) + s_d Δd(k)

8. At the next sample time, increase the sampling index k := k + 1, and go to Step 1.
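One on-line iteration of these steps can be sketched as follows (Python/numpy; a hypothetical first-order plant, no measured disturbance, and a box-bound QP solved by projected gradient for brevity — all names and values are illustrative, the course code is in Matlab):

```python
import numpy as np

# Hypothetical SISO setting: an assumed plant with FSR coefficients
# s(i) = 1 - 0.7^i, truncated at n = 20.
n, p, c, lam = 20, 10, 3, 0.05
s = 1.0 - 0.7 ** np.arange(1, n + 1)

# off-line: dynamic matrix G (1.12), shift matrix M (1.10), Hessian H (2.5)
G = np.array([[s[i - j] if i >= j else 0.0 for j in range(c)] for i in range(p)])
M = np.eye(n, k=1)
M[-1, -1] = 1.0                               # last free-response element repeats
H = G.T @ G + lam * np.eye(c)

# on-line, one sample:
f = np.zeros(n)                               # free response, initialised at y0 = 0
u_prev, y_meas, y_sp = 0.0, 0.05, np.ones(p)

bias = y_meas - f[0]                          # step 1: b(k) = y(k) - f(k|k)
y_p = f[:p] + bias                            # step 2: prediction (T picks p elements)
e = y_sp - y_p                                # ...and error
cvec = -(G.T @ e)                             # step 3: gradient vector

du = np.zeros(c)                              # steps 4-5: QP with |du| <= 0.4
step = 1.0 / np.linalg.norm(H, 2)             # box bounds, by projected gradient
for _ in range(2000):
    du = np.clip(du - step * (H @ du + cvec), -0.4, 0.4)

u = u_prev + du[0]                            # step 6: apply the first move
f = M @ f + s * du[0]                         # step 7: free response update
print(round(u, 3))                            # the first move saturates at 0.4 here
```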
2.4 Exercises
1. Code the QDMC algorithm in Matlab.
2. Simulate the closed-loop behavior of a QDMC-controlled process and experiment with various constraints.
See Ex_QDMC.m.
2.5 *Soft constraints
If there are problems in finding a feasible solution, one approach is to rank the
constraints into hard and soft ones.
- Hard constraints must be fulfilled. Typically these include MV constraints (e.g., valve openings or pump speeds must be between 0% and 100%), or output variable constraints that come from safety considerations.
- Soft constraints are constraints where some violation can be tolerated, but only if really necessary. These may include CV constraints where the operating limits are flexible and conservative.
Soft constraints provide a mechanism to avoid infeasibility in QP. The optimizer
finds a solution subject to the hard constraints and with minimum violation of the soft
constraints.
In QP with soft constraints, the optimization problem is extended with new
slack variables. In general, there is one slack variable for each output variable
(this ensures that a feasible solution will always be found). In what follows, the
SISO case is considered for simplicity. A slack variable is defined such that it is
non-zero only if a constraint is violated. It is incorporated in the cost function
with a strong penalization:
  J = ε̃ᵀ ε̃ + λ Δuᵀ Δu + ρ εᵀ ε

subject to the hard constraints, and

  y_min − ε ≤ ŷ ≤ y_max + ε
  ε ≥ 0

where ρ is the (large) penalty weight on the slack. The problem with the new
cost function can be rewritten as a standard QP problem. The minimum of the
cost function

  J = ε̃ᵀ ε̃ + λ Δuᵀ Δu + ρ εᵀ ε
    = (G Δu − e)ᵀ (G Δu − e) + λ Δuᵀ Δu + ρ εᵀ ε
    = Δuᵀ (Gᵀ G + λ I) Δu − 2 eᵀ G Δu + eᵀ e (constant) + ρ εᵀ ε
is at the same location as the minimum of

  J' = (1/2) Δuᵀ (Gᵀ G + λ I) Δu − eᵀ G Δu + (1/2) ρ εᵀ ε
     = (1/2) xᵀ H x + cᵀ x

where

      [ Δu ]        [ GᵀG + λI   0 ]        [ −Gᵀe ]
  x = [ ε  ],   H = [     0      ρ ],   c = [   0  ]

The cost function is now in the QP form.
We still need to reformulate the soft output constraints

  y_min − ε ≤ ŷ ≤ y_max + ε,   ε ≥ 0

They can be rewritten as a set of LE constraints. From

  y_min − ε ≤ G Δu + y_p ≤ y_max + ε

we get

  G Δu + y_p ≤ y_max + ε
  −(G Δu + y_p) ≤ −y_min + ε

For the upper bound we have

  G Δu + y_p ≤ y_max + ε
  G Δu − ε ≤ y_max − y_p

             [ Δu ]
  [ G  −1 ]  [ ε  ]  ≤  y_max − y_p

(here 1 denotes a column vector of ones). Similarly for the lower bound:

  −G Δu − ε ≤ −y_min + y_p

              [ Δu ]
  [ −G  −1 ]  [ ε  ]  ≤  −y_min + y_p

Combining the constraints, we obtain

  [  G  −1 ]  [ Δu ]      [ y_max − y_p  ]
  [ −G  −1 ]  [ ε  ]  ≤   [ −y_min + y_p ]
  [  0  −1 ]              [ 0            ]

where the last constraint is the requirement for the slack variable to be non-negative.
The hard and the soft constraints can then be combined into one constraint
system, similar to (2.2):

  [  I    0  ]            [ Δu_max,1 ; ... ; Δu_max,c                     ]
  [ −I    0  ]            [ −Δu_min,1 ; ... ; −Δu_min,c                   ]
  [  I_L  0  ]  [ Δu ]    [ u_max,1 − u(k−1) ; ... ; u_max,c − u(k−1)     ]
  [ −I_L  0  ]  [ ε  ] ≤  [ −u_min,1 + u(k−1) ; ... ; −u_min,c + u(k−1)   ]
  [  G   −1  ]            [ y_max,1 − y_p(k+1) ; ... ; y_max,p − y_p(k+p) ]
  [ −G   −1  ]            [ −y_min,1 + y_p(k+1) ; ... ; −y_min,p + y_p(k+p) ]
  [  0   −1  ]            [ 0 ]
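Assembling this augmented system can be sketched in Python/numpy (placeholder dynamic matrix, a single common slack for brevity, and a made-up penalty weight):

```python
import numpy as np

c, p, lam, rho = 3, 5, 0.1, 1e4          # rho: strong penalty weight on the slack
G = np.tril(np.ones((p, c)))             # placeholder dynamic matrix
IL = np.tril(np.ones((c, c)))
e = np.ones(p)

# extended decision vector x = [du; eps]
H = np.zeros((c + 1, c + 1))
H[:c, :c] = G.T @ G + lam * np.eye(c)
H[c, c] = rho
cvec = np.concatenate([-(G.T @ e), [0.0]])

zc = np.zeros((2 * c, 1))                # hard input rows do not involve the slack
ones_p = np.ones((p, 1))
A = np.block([
    [np.vstack([np.eye(c), -np.eye(c)]), zc],               # rate-of-change rows
    [np.vstack([IL, -IL]),               zc],               # MV rows
    [G,                                  -ones_p],          # softened CV upper bounds
    [-G,                                 -ones_p],          # softened CV lower bounds
    [np.zeros((1, c)),                   -np.ones((1, 1))], # eps >= 0
])
print(A.shape, H.shape)                  # (23, 4) (4, 4): 4c + 2p + 1 rows
```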
2.6 Multivariable DMC and QDMC
What is nice about MPC is that multivariable control problems can be solved
without (explicit) decoupling. All interactions between variables are taken into
account during the optimization, since they are included in the plant model.
It is necessary to know the effect of each manipulated variable and measured
disturbance on each of the (controlled and/or constrained) output variables.
Step responses can be identified for each MISO (Multiple-Input Single-Output)
subprocess model. In a high-dimensional problem this may result in a large
number of experiments.
The simplicity of the extension of the SISO DMC/QDMC developments
to the MIMO (Multiple-Input Multiple-Output) case is due to the fact that
in the MISO predictions ŷ the effects of the multiple inputs and measured
disturbances can be superimposed (summed). This property holds for linear systems.
The MIMO model can be formed by piling the MISO predictions on top of
each other. Denote the number of system inputs by m and the number of system
outputs by n. Then y_p is a column vector with n·p elements; G is the dynamic
matrix with np × mc elements; and Δu is the column vector of future control
moves, of size mc × 1. As a result, the future predictions have the form

  ŷ = [ ŷ₁ ; ŷ₂ ; ... ; ŷ_n ]

    = [ T f₁ ; T f₂ ; ... ; T f_n ]

      [ s_d,1,1  s_d,1,2  ...  s_d,1,q ] [ Δd₁(k) ]
    + [ s_d,2,1  s_d,2,2  ...  s_d,2,q ] [ Δd₂(k) ]
      [    :        :            :     ] [    :   ]
      [ s_d,n,1  s_d,n,2  ...  s_d,n,q ] [ Δd_q(k) ]

    + [ 1 b₁ ; 1 b₂ ; ... ; 1 b_n ]

(the three terms above form y_p)

      [ G_1,1  G_1,2  ...  G_1,m ] [ Δu₁(k) ]
    + [ G_2,1  G_2,2  ...  G_2,m ] [ Δu₂(k) ]
      [   :      :           :   ] [    :   ]
      [ G_n,1  G_n,2  ...  G_n,m ] [ Δu_m(k) ]
The essential point is that the prediction equation has exactly the same form
as in the SISO case:

  ŷ = y_p + G Δu

Consequently, the solution of the control (optimization) problem follows exactly
the same mechanization as in the SISO case. The weight λ is now replaced by a
weight matrix Λ, typically a diagonal matrix; a weighting between CVs can be
accomplished by adding a similar weighting matrix for the deviations between
reference and predicted output.
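The block structure of the MIMO dynamic matrix can be illustrated with a small Python/numpy sketch (a hypothetical 2x2 plant with made-up FSR coefficients):

```python
import numpy as np

# Hypothetical 2-input, 2-output example (n = m = 2): one SISO dynamic matrix
# per input/output pair, pasted into a block matrix.
p, c = 4, 2

def dyn_matrix(s):
    """Build a SISO dynamic matrix from FSR coefficients s(1..n)."""
    return np.array([[s[i - j] if i >= j else 0.0 for j in range(c)]
                     for i in range(p)])

s11, s12 = [0.5, 0.8, 0.9, 1.0], [0.1, 0.2, 0.25, 0.3]
s21, s22 = [0.0, 0.3, 0.5, 0.6], [0.4, 0.7, 0.9, 1.0]
G = np.block([[dyn_matrix(s11), dyn_matrix(s12)],
              [dyn_matrix(s21), dyn_matrix(s22)]])    # shape (n*p, m*c)
print(G.shape)   # (8, 4)
```

Because G @ du is linear, the contributions of the two inputs superimpose exactly as the text describes.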
2.7 Integrating processes
Integrating processes (e.g., tanks and reservoirs) are common in the process
industry. An FSR model cannot be formulated for such processes, since the
system transient does not settle after n instants: s(n+1) ≠ s(n+2) ≠ ... ≠ s(∞).
Similarly, an FIR model is not viable, since the coefficients h do not go to zero.
A common approach is to redefine the problem so that the rate of change of
the integrating process is controlled. For example, let the (integrating) plant be
given by

  z(k) = [ b q⁻¹ / (1 − q⁻¹) ] u(k)
  z(k) = z(k−1) + b u(k−1)

then define y(k) = Δz(k) = z(k) − z(k−1):

  y(k) = b u(k−1)
The transients of y in a step response die out in finite time (eventually
settling to y(∞) = b), and an FSR structure can be used to model the process.
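A tiny numerical illustration of this differencing trick (Python/numpy, with an assumed gain b = 0.5):

```python
import numpy as np

# An assumed integrating plant z(k) = z(k-1) + b*u(k-1) with b = 0.5: its step
# response is a ramp that never settles, so no finite FSR exists for z itself.
b0 = 0.5
u = np.ones(30)                      # unit step input
z = np.zeros(31)
for k in range(1, 31):
    z[k] = z[k - 1] + b0 * u[k - 1]  # z ramps without bound

# Differencing gives y(k) = z(k) - z(k-1) = b*u(k-1), which settles at once.
y = np.diff(z)
print(y[0], y[-1])                   # 0.5 0.5 -> y(inf) = b, FSR is applicable
```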
2.7.1 *Constraints
It is typical that integrating processes have upper and lower limits (e.g., tank
levels or reservoir volumes). Upper and lower bound constraints on the
integral variable z can be set as follows:

  z_min,i ≤ z(k+i) ≤ z_max,i

leading to

  z(k+i) ≤ z_max,i
  −z(k+i) ≤ −z_min,i

For the upper bound we have

  z(k+i) ≤ z_max,i
  Δz(k+i) = z(k+i) − z(k+i−1) ≤ z_max,i − z(k+i−1)

Since

  z(k+i) = z(k) + Σ_{j=1..i} Δz(k+j)

and

  z(k+i−1) = z(k−1) + Σ_{j=1..i} Δz(k+j−1)
           = z(k−1) + Δz(k) + Σ_{j=2..i} Δz(k+j−1)
           = z(k) + Σ_{j=1..i−1} Δz(k+j)

for i > 1, we can now write the upper bound constraint as

  Δz(k+i) ≤ z_max,i − z(k) − Σ_{j=1..i−1} Δz(k+j)

i.e., with y = Δz,

  y(k+i) + Σ_{j=1..i−1} y(k+j) = Σ_{j=1..i} y(k+j) ≤ z_max,i − z(k)

where z(k) is a measurement of the process variable z. In matrix form

  I_L ŷ ≤ z_max − z(k)
where I_L is a lower triangular matrix:

        [ 1 0 ... 0 0 ]
        [ 1 1 ... 0 0 ]
  I_L = [ :        :  ]
        [ 1 1 ... 1 0 ]
        [ 1 1 ... 1 1 ]
Now ŷ = G Δu + y_p, and we can write

  I_L ŷ ≤ z_max − z(k)
  I_L (G Δu + y_p) ≤ z_max − z(k)
  I_L G Δu ≤ z_max − z(k) − I_L y_p

For the lower bound we have similarly

  −I_L ŷ ≤ −z_min + z(k)

and

  −I_L G Δu ≤ −z_min + z(k) + I_L y_p
Finally, we can write the upper and lower constraints in the QP form:

  [  I_L G ]            [ z_max − z(k) − I_L y_p  ]
  [ −I_L G ] Δu(k)  ≤   [ −z_min + z(k) + I_L y_p ]
      A₄       x                     b₄

  A₄ x ≤ b₄
2.8 *Identification of FSR models
2.9 Conclusions: DMC and QDMC
Let us summarize the main features of DMC and QDMC. These approaches to
MPC:
- use a linear finite step response (FSR) model. This modelling approach is only valid for open-loop stable processes, but does cover non-minimum phase dynamics and delayed plants. Recall that these types of processes are difficult to control using standard PIDs;
- use a quadratic cost function;
- provide offset-free tracking (due to built-in mechanisms to include future references and measured disturbances, and the existence of a bias component that integrates the error at the output);
- provide intuitive tuning (a subjective statement..);
- handle multivariable control problems naturally, and with ease;
- can take into account actuator limitations;
- allow operation close to constraints, leading to more profitable operation;
- can be computationally demanding, which may restrict application to real-life processes.
DMC can be represented as a linear controller, i.e., an equivalent linear controller
exists. The QDMC controller is non-linear, and no closed-form solution
can be given. Instead, the controller solves a QP problem at each sample time.
2.10 Homework - DMC/QDMC
Background: Sample Matlab code for DMC and QDMC is available from the
exercises in this and previous chapters.
Task: For a SISO plant model to be specified (tf/ss/ode/..., cont/discr):
- Identify an FSR model. Compare the FSR model behavior to the original model by simulations. [1p]
- Design and implement a DMC controller using Matlab. Simulate the closed-loop performance of the system and illustrate the effect of the tuning parameters. [+1p]
- Design and implement a QDMC controller using some input-output constraints. Simulate the closed-loop performance and illustrate the significance of the constraints. [+1p]
- Design and implement a QDMC controller with soft constraints. Illustrate the effect of the soft constraints by simulations. [+2p]
Write a short report with figures of the simulation outcomes (with plant inputs,
outputs, reference outputs, ...) and your main observations and conclusions.
Prepare a 5-10 min presentation of your work to the other students, and be prepared
to defend your work (i.e., answer questions..).
Chapter 3
QDMC power plant case
study
3.1 Review of homeworks
10 min presentations / group = 1 h
3.2 Guided exercise / power plant case study
MIMO DMC/QDMC control of a drum boiler. A guided exercise (by Antti).