
System identification for generic model control

Erik Weyer a,*, Geoff Bell b,1, Peter L. Lee c,2

a Department of Electrical and Electronic Engineering, University of Melbourne, Parkville, VIC 3052, Australia
b Centre for Process Systems Engineering, Imperial College of Science Technology and Medicine, London SW7 2BY
c School of Engineering, Murdoch University, Murdoch, WA 6150, Australia
Abstract

System identification methods build mathematical models of dynamical systems based on observed data. The intended use of the model should always be reflected in the methods and techniques used for identification. In this paper an identification scheme is derived for the case where the model is going to be used for GMC controller design. The aim of GMC control is to make the output approach a setpoint along a given desired trajectory. This is reflected in the identification scheme, which is non-standard in two ways. Firstly, the emphasis is on the output trajectories of the models, and secondly we try to make the prediction errors follow an error trajectory determined by the controller parameters. Simulation studies are included which show that the derived identification scheme performs well. © 1999 Elsevier Science Ltd. All rights reserved.
Keywords: System identification; Generic model control; Process control
1. Introduction

System identification deals with the problem of building mathematical models of dynamical systems based on observed data. System identification is always done with a purpose, and the intended use of the obtained model should be reflected in the methods and criteria used.

Recently system identification for control has attracted much attention, see e.g. the surveys by Gevers [1] and Van den Hof and Schrama [2]. In this paper identification for Generic Model Control (GMC) is considered. GMC [3,4] is a control strategy for nonlinear systems which is popular in the process industry. It is closely related to feedback linearisation (see e.g. Isidori [5]). The aim of GMC is to make the output approach a given setpoint along a prespecified desired trajectory. This aim is reflected in the derived identification method. The predictors employed put emphasis on the output trajectories of the models and not on the individual equation errors. Hence the identification method has common features with methods inspired by the behavioural framework [6-8].
The paper is organised as follows. In Section 2 we give a brief introduction to GMC control before we derive the identification method in Section 3. Section 4 is devoted to a simulation study where we compare the derived method to other system identification methods. We end with some concluding remarks in Section 5.

In the paper the identification approach is illustrated on a linear system. This may seem a bit strange since GMC is a control strategy for nonlinear systems. However, the principles and techniques remain the same for a nonlinear model structure.
2. GMC control

The main idea behind GMC is to find values of the manipulated input variable which force the model output to follow a desired trajectory. The model considered is given by the differential equation

ẏ(t) = f(y, x, u, t, θ)   (1)

Eq. (1) is a deterministic model where f is a known (nonlinear) function, y is the output, u the input, x a state vector, t the time variable, and θ a vector of model parameters. In order to avoid problems with unstable predictors we assume that the autonomous system ẏ(t) = f(y, x, 0, t, θ) is asymptotically stable in the operating region. GMC is based on solving f with respect to the input u, and it is therefore assumed that u appears directly in the equation for ẏ (and not, say, through some higher derivatives of y).

Journal of Process Control 9 (1999) 357-364
0959-1524/99/$ - see front matter © 1999 Elsevier Science Ltd. All rights reserved. PII: S0959-1524(98)00050-X
* Corresponding author. Tel.: +61-3-9344-9726; fax: +61-3-9344-6678; e-mail: e.weyer@ee.mu.oz.au
1 E-mail: g.bell@ic.ac.uk
2 E-mail: peter@eng.murdoch.edu.au
The commonly used state space models for control affine systems are given by

ẋ = f̃(x) + g̃(x)u   (2)
y = h̃(x)   (3)

They can be brought into the form of Eq. (1) by introducing the Lie derivatives L_f̃ h̃ = f̃ᵀ ∂h̃/∂x and L_g̃ h̃ = g̃ᵀ ∂h̃/∂x. We then have that ẏ = L_f̃ h̃ + L_g̃ h̃ u. The requirement that the input appears directly in the model equation for ẏ implies that L_g̃ h̃ ≠ 0, i.e. the system is of relative degree one, and it is therefore also assumed that the models (1) have stable zero dynamics. The assumptions of asymptotically stable autonomous models and stable zero dynamics may impose restrictions on the set of models in the sense that θ must be restricted to a set Θ where these assumptions are satisfied.

The desired trajectories y*, used in GMC, are given by

ẏ*(t) = K₁(w(t) − y*(t)) + K₂ ∫₀ᵗ (w(τ) − y*(τ)) dτ   (4)

where w is the reference value, t the current time instant and K₁ and K₂ are two design parameters. Notice the distinction between the reference value w and the desired trajectory y*. In this paper w is a constant setpoint, and y* describes how we want the output to approach this setpoint.
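For a constant setpoint, the desired trajectory (4) is easy to generate numerically. The sketch below is our own illustration, not code from the paper: it integrates (4) with forward Euler, carrying the integral term as an auxiliary state.

```python
import numpy as np

def desired_trajectory(w, y0, K1, K2, T, dt=0.01):
    """Integrate the GMC reference ODE (4) with forward Euler.

    dy*/dt = K1*(w - y*) + K2 * integral_0^t (w - y*) dtau
    The running integral is carried as the auxiliary state z."""
    n = int(round(T / dt))
    y = np.empty(n + 1)
    y[0] = y0
    z = 0.0  # z(t) = integral of (w - y*) up to time t
    for k in range(n):
        y[k + 1] = y[k] + dt * (K1 * (w - y[k]) + K2 * z)
        z += dt * (w - y[k])
    return y

# With K1 = 0.5, K2 = 0.01 (the controller settings used in Section 4.1) the
# trajectory rises from y*(0) = 0 towards the setpoint w = 1.
traj = desired_trajectory(w=1.0, y0=0.0, K1=0.5, K2=0.01, T=100.0)
```

Since K₁² − 4K₂ > 0 for these values, the trajectory is overdamped and approaches the setpoint without oscillation.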
Our goal is that the actual output trajectory approaches the setpoint along a desired trajectory. By combining (1) and (4) this leads to the implicit control law

f(y, x, u, t, θ) = K₁(w(t) − y(t)) + K₂ ∫₀ᵗ (w(τ) − y(τ)) dτ   (5)

which is the GMC controller. To find the value of the input variable, (5) is solved with respect to u. In practice we may encounter problems with unmeasured state variables when solving (5). This problem can be solved by employing a state estimator. However, in order to keep things simple, we assume that the state variables can all be recovered from the output, and we will from now on drop x from the list of arguments of f.
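For a control-affine model ẏ = f̃(y) + g̃(y)u with g̃ ≠ 0, solving (5) for u is a one-line computation. The sketch below closes the loop on a first-order linear plant; the plant, parameter values and Euler discretisation are our own assumptions for illustration only.

```python
def gmc_input(y, ierr, w, K1, K2, f, g):
    # Solve the implicit GMC law (5) for u in the control-affine case:
    # f(y) + g(y)*u = K1*(w - y) + K2*ierr, where ierr is the integrated error.
    return (K1 * (w - y) + K2 * ierr - f(y)) / g(y)

# Closed loop on dy/dt = -a*y + b*u; the model matches the plant, so the
# output follows a desired trajectory of the form (4).
a, b, K1, K2, w, dt = 0.25, 0.25, 0.5, 0.01, 1.0, 0.01
y, ierr, ys = 0.0, 0.0, []
for _ in range(20000):  # 200 time units of forward Euler simulation
    u = gmc_input(y, ierr, w, K1, K2, lambda x: -a * x, lambda x: b)
    y += dt * (-a * y + b * u)
    ierr += dt * (w - y)
    ys.append(y)
```

With a perfect model the closed-loop output settles at the setpoint, which is the behaviour the identification scheme derived below is meant to preserve under model error.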
2.1. Performance measure for GMC control

In order to evaluate the GMC controller (and the identification method to be derived) we need a performance measure. As the aim of GMC control is to make the output follow a trajectory given by (4), a natural performance measure is as follows. Let [0, T] be the observation interval, and define the set of trajectories

Y* = {y*(t), t ∈ [0, T] | y*(t) satisfies (4) ∀t ∈ [0, T]}

This set can be parameterised in terms of the initial condition y*(0). Let y(t) be the output of the system under GMC control. As a performance measure we use the least squares criterion

P_GMC = min_{y*∈Y*} ∫₀ᵀ (y(t) − y*(t))² dt   (6)

Here we have minimised over the initial condition of y*(t) to find the desired trajectory closest to the observed output.
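In the overdamped case K₁² − 4K₂ > 0, the setpoint error of any desired trajectory decays as a fixed mix of two exponentials scaled by w − y*(0), so the minimisation in (6) over y*(0) reduces to a scalar linear least squares problem. A sketch of a discretised version of (6) under that assumption (our own reading, not the paper's code):

```python
import numpy as np

def gmc_performance(y, t, w, K1, K2):
    """Discretised version of criterion (6), minimised over y*(0).

    Assumes K1**2 > 4*K2 (overdamped), so the setpoint error of a desired
    trajectory is w - y*(t) = phi(t) * (w - y*(0)) with phi as below."""
    disc = np.sqrt(K1**2 - 4 * K2)
    alpha, beta = (K1 + disc) / 2, (K1 - disc) / 2
    phi = (alpha * np.exp(-alpha * t) - beta * np.exp(-beta * t)) / (alpha - beta)
    d = y - w                       # negative of the observed setpoint error
    c = -(phi @ d) / (phi @ phi)    # optimal value of w - y*(0) (scalar LS)
    return np.sum((d + phi * c) ** 2)

# An output lying exactly on a desired trajectory has zero performance cost.
t = np.linspace(0.0, 100.0, 601)
K1, K2, w = 0.5, 0.01, 1.0
disc = np.sqrt(K1**2 - 4 * K2)
alpha, beta = (K1 + disc) / 2, (K1 - disc) / 2
phi = (alpha * np.exp(-alpha * t) - beta * np.exp(-beta * t)) / (alpha - beta)
y_exact = w - phi * (w - 0.0)       # desired trajectory started at y*(0) = 0
```

The closed form of the decay function phi(t) anticipates the solution curves (22) derived in Section 3.4.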
3. Identification for GMC

3.1. Introduction

Given data up to time t_k (assuming a sampled system), the models can be used to predict the output at time t_{k+1}. The prediction error ε(t_k, θ) is the difference between the observed output y(t_k) and the predicted output ŷ(t_k, θ), i.e. ε(t_k, θ) = y(t_k) − ŷ(t_k, θ). The system parameters are estimated using a prediction error method [9,10], and the estimate is given by

θ̂ = arg min_θ Σ_{k=0}^{N} J(ε(t_k, θ))

where J is a non-negative function and N+1 is the total number of sampled data points.

The function J(ε(t_k, θ)) is called the identification criterion. It is clear that θ̂ depends on J. Moreover, θ̂ is also dependent on how the predicted output ŷ(t_k, θ) is computed. The choice of predictor and identification criterion should always reflect the intended use of the obtained model. The aim of GMC control is to make the output follow a desired trajectory given by (4), hence the identification scheme should be judged on how well the actual output follows a desired trajectory when the system operates in closed loop. As a measure for the latter we use (6).
3.2. Differences between observed and desired output

By controller design the model output will follow a desired trajectory, and we now examine the factors which cause the observed output to deviate from the model output. It is important to distinguish between the output of the model and the actual observed output, and we therefore start with introducing some notation. We denote the output of the model (1) by ȳ. Obviously ȳ depends on the input signal u and the model represented by θ, so we use the notation ȳ(t, u, θ). Similarly we denote the input generated by the GMC controller by ū. ū is dependent on the output y (through feedback) and of course θ, hence we use the notation ū(t, y, θ). Thus an overbar indicates either model output or input generated by a GMC controller. For the actual input and output values we use y and u without overbars. (Of course there is nothing that prevents the actual input from being generated by a GMC controller, in which case u = ū.) As the actual output depends on the input we will use the notation y(t, u). In this notation the output of the model (1) when the input is given by the GMC controller is ȳ(t, ū(ȳ, θ), θ), where the GMC input ū(t, ȳ, θ) is based on feedback from the model output ȳ and not on feedback from the actual output y.

Next we introduce the error signal

ε̃_om(t, θ) = y(t, u) − ȳ(t, ū(ȳ, θ), θ)   (7)

The desired trajectory for the output is given by

ẏ(t, u) − K₁(w(t) − y(t, u)) − K₂ ∫₀ᵗ (w(τ) − y(τ, u)) dτ = 0   (8)

By substituting (7) in (8) we have (arguments omitted)

dȳ/dt − K₁(w − ȳ) − K₂ ∫₀ᵗ (w − ȳ) dτ + dε̃_om/dt + K₁ ε̃_om + K₂ ∫₀ᵗ ε̃_om dτ = 0

By controller design we have that

dȳ/dt − K₁(w − ȳ) − K₂ ∫₀ᵗ (w − ȳ) dτ = 0   (9)

and (8) reduces to

dε̃_om/dt + K₁ ε̃_om + K₂ ∫₀ᵗ ε̃_om dτ = 0   (10)

and ideally we would like the identification algorithm to ensure that (10) is satisfied, or at least that the left hand side is small in some sense, when the system operates in closed loop.
3.3. Predictors

Now we examine ε̃_om(t, θ) from an identification point of view. Data collection and identification are usually performed in discrete time, and we assume that the system is sampled at regular time instants t_k, k = 0, ..., N. The models at hand,

ẏ = f(y, u, θ, t)   (11)

are nonlinear and deterministic. Given input and output data up to time t_k and future input data, an intuitive way to predict the output at time t_{k+1} is

ŷ₂(t_{k+1}, u, θ) = y(t_k) + ∫_{t_k}^{t_{k+1}} f(y, u, θ, t) dt   (12)

where the integral is evaluated using some numerical integration method. Another way of predicting the output is

ŷ₁(t_{k+1}, u, θ) = ŷ₁(t_k, u, θ) + ∫_{t_k}^{t_{k+1}} f(ŷ₁, u, θ, t) dt
                 = ŷ₁(t₀) + ∫_{t₀}^{t_{k+1}} f(ŷ₁, u, θ, t) dt   (13)

where ŷ₁(t₀) is the initial value of the predictor. We see that (13) is dependent on the initial conditions at time t₀, which may also include higher derivatives of ŷ₁ through f. It is common to set these initial conditions equal to the observed ones, but sometimes (as in this paper) we may want to optimise over them. In such cases we use the notation ŷ₁(t_k, u, y_init(t₀), θ).
The difference between the predictors is that in (12) the observed output y(t_k) is used as the initial value when ŷ₂(t_{k+1}, u, θ) is predicted, while in (13) the predicted value ŷ₁(t_k, u, θ) is used as the initial value. We see that if the system is given by (11) and no errors are introduced in the numerical integration, then ŷ₁ and ŷ₂ are equal provided the initial conditions are identical. Assuming no errors are introduced in the integration, the predicted points ŷ₁(t_k, u, θ) will all lie on a trajectory of the system (11). Hence the predictions ŷ₁(t_k, u, θ) will globally satisfy (11). Eq. (12) on the other hand will only locally satisfy (11), since it uses y(t_k) as the initial value when computing ŷ₂(t_{k+1}, u, θ). We call the predictor ŷ₂(t_{k+1}, u, θ) equation oriented since it satisfies Eq. (11) locally at every time point. Similarly we call ŷ₁(t_{k+1}, u, θ) trajectory oriented.
Example. Consider a first order linear system

ẏ(t) = −ay(t) + bu(t)   (14)

with θ = [a b]. We assume that the input is constant over the sampling interval T_s. At the sampling points t_k = kT_s, k = 0, ..., N the output is given by

y(t_{k+1}) = ã y(t_k) + b̃ u(t_k)   (15)

where ã = e^{−aT_s} and b̃ = (b/a)(1 − e^{−aT_s}). Hence a and b can be computed from ã and b̃. The predictor ŷ₁ is given by

ŷ₁(t_{k+1}, u, θ) = ã ŷ₁(t_k, u, θ) + b̃ u(t_k)
                 = ã^{k+1} ŷ₁(t₀) + b̃ Σ_{i=0}^{k} ã^{k−i} u(t_i)   (16)

where ŷ₁(t₀) is the initial value of the predictor. The predictor ŷ₂ is given by

ŷ₂(t_{k+1}, u, θ) = ã y(t_k) + b̃ u(t_k)

We note that in the stochastic linear system setting ŷ₁ and ŷ₂ are the one step ahead predictors for the output error (OE) and ARX model structures, respectively. These model structures are given by (OE)

y(t) = [B(q⁻¹)/A(q⁻¹)] u(t) + e(t)

and (ARX)

A(q⁻¹) y(t) = B(q⁻¹) u(t) + e(t)

where A(q⁻¹) and B(q⁻¹) are polynomials in the backward shift operator q⁻¹, and e(t) is a sequence of independent and identically distributed random variables. The difference between these model structures is that the noise enters the OE model as measurement noise, while it enters the ARX model as process noise.
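The two predictors of the example can be compared directly in code. In this sketch (our own notation: a_t and b_t stand for ã and b̃, with hypothetical values), data generated exactly by (15) with matching initial conditions make the two predictors coincide, as claimed above.

```python
import numpy as np

def predict_trajectory(y0, u, a_t, b_t):
    """yhat_1 of (16): run the model forward from its own past predictions."""
    yhat = np.empty(len(u) + 1)
    yhat[0] = y0
    for k in range(len(u)):
        yhat[k + 1] = a_t * yhat[k] + b_t * u[k]
    return yhat

def predict_equation(y, u, a_t, b_t):
    """yhat_2: one-step-ahead prediction restarted from the measured y(t_k)."""
    return a_t * np.asarray(y[:-1]) + b_t * np.asarray(u)

rng = np.random.default_rng(0)
a_t, b_t = 0.85, 0.2                    # hypothetical sampled-model parameters
u = rng.choice([-1.0, 1.0], size=50)    # random binary input
y = np.empty(51)
y[0] = 1.0
for k in range(50):                     # noise-free data from the model itself
    y[k + 1] = a_t * y[k] + b_t * u[k]
```

With noisy data the two predictors diverge: ŷ₂ is repeatedly pulled back to the measurements, while ŷ₁ stays on a single model trajectory.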
We now return our attention to ε̃_om(t_k, θ). The term ȳ(t, ū(ȳ, θ), θ) describes the output trajectory of the model (1) under GMC control, and ideally, using the trajectory based predictor (13), we should have

ȳ(t_k, ū(ȳ, θ), θ) = ŷ₁(t_k, ū(ŷ₁, θ), θ)

for identical initial conditions, but differences may occur due to errors introduced by sampling and numerical integration. However, these errors should be small provided we have made good choices of sampling interval and numerical integration method. Hence we do not make any large errors in replacing ε̃_om(t_k, θ) by

ε_om(t_k, θ) = y(t_k, u) − ŷ₁(t_k, ū(ŷ₁, θ), θ)

ε_om(t_k, θ) depends on the input signal ū(ŷ₁, θ), which is unknown since θ is unknown, and ε_om(t_k, θ) is therefore not suited for identification use. In an identification context the observed input signal is used, and this leads naturally to the decomposition

ε_om(t_k, θ) = ε_i(t_k, θ) + ε_u(t_k, θ)   (17)
ε_i(t_k, θ) = y(t_k, u) − ŷ₁(t_k, u, θ)   (18)
ε_u(t_k, θ) = ŷ₁(t_k, u, θ) − ŷ₁(t_k, ū(ŷ₁, θ), θ)   (19)

The first term, ε_i(t_k, θ), is the difference between the actual observed output and the predicted output. The output is predicted using the trajectory based predictor (13), and the actual input is used in the computations. The second term, ε_u(t_k, θ), is the difference in the predicted outputs caused by the difference between the observed input and the model input given by the GMC controller. In open loop there is not much we can do about (19). However, when the loop is closed, so that the actual input u is given by a GMC controller, it would be expected that (19) is small provided the model is sufficiently accurate. In open loop identification it is therefore a reasonable simplification to ignore the term (19).

We then find from (10) that the prediction errors should satisfy (for the moment assuming continuous prediction errors)

dε_i(t, θ)/dt + K₁ ε_i(t, θ) + K₂ ∫₀ᵗ ε_i(τ, θ) dτ = 0   (20)
3.4. Identification criterion

Since the performance of GMC control is judged by how well the output follows a desired trajectory (6), the models should be judged by how close, in a least squares sense, ε_i(t_k, θ) is to a trajectory r(t) which is a solution to

ṙ(t) + K₁ r(t) + K₂ ∫₀ᵗ r(τ) dτ = 0   (21)

The solution curves to (21) are given by

r(t) = (1/(α − β)) (α e^{−αt} − β e^{−βt}) r(0),   K₁² − 4K₂ > 0
r(t) = (e^{−γt} − γt e^{−γt}) r(0),                K₁² − 4K₂ = 0
r(t) = e^{−γt} (cos ωt − (γ/ω) sin ωt) r(0),       K₁² − 4K₂ < 0
   (22)

where α = (K₁ + √(K₁² − 4K₂))/2, β = (K₁ − √(K₁² − 4K₂))/2, γ = K₁/2 and ω = √(4K₂ − K₁²)/2.

Define the set of trajectories

R = {r(t), t ∈ [0, T] | r(t) satisfies (21) ∀t ∈ [0, T]}

The identification criterion for GMC control is then given by

min_{r∈R} Σ_{k=0}^{N} (ε(t_k, θ) − r(t_k))²   (23)

In (23) general prediction errors ε(t_k, θ) are used to stress that the identification criterion and the predictor used are two separate entities. Putting it all together we arrive at the following identification scheme for GMC control.
3.5. System identification for GMC control

Given observations y(t) and u(t), t ∈ [0, T], and a set of deterministic models (1) parameterised by θ ∈ Θ, where Θ is a set such that for θ ∈ Θ the system is asymptotically stable in the operating region under zero input and has stable zero dynamics. Let ŷ₁(t_k, u, y_init(0), θ) denote the predicted output at time t_k using the trajectory based predictor (13), where y_init(0) is the initial condition of the predictor at time t₀ = 0. Let the prediction error be given by

ε_i(t_k, u, y_init(0), θ) = y(t_k, u) − ŷ₁(t_k, u, y_init(0), θ),   k = 0, ..., N,  t₀ = 0,  t_N = T

The identified parameter vector is given by

θ̂ = arg min_{θ∈Θ} min_{r∈R} min_{y_init(0)} Σ_{k=0}^{N} (ε_i(t_k, u, y_init(0), θ) − r(t_k))²   (24)

The criterion tells us that we should select the parameters such that the prediction errors are as close as possible to one of the desired trajectories in a least squares sense. The criterion involves optimisation over the initial conditions of both the predictor and the error trajectory (21). This identification scheme is non-standard in two respects. The first is that we want the prediction errors to follow a trajectory r(t). This is a consequence of the way (as determined by the controller parameters K₁ and K₂) we want the output to approach the setpoint. The second non-standard feature is that the output is predicted using a trajectory based predictor. The reason for the use of trajectory based predictors is that the GMC controller is concerned with how the output trajectories approach the setpoint on a global scale. Since K₁ and K₂ are known from the controller design, the minimisation over the trajectories r ∈ R reduces to a minimisation over the initial condition r(0).

The effect of optimising over r(0) and y_init(0) may not be very large, since the trajectories r(t) decay exponentially to zero and the predictors are exponentially stable with respect to the initial conditions. The simulation examples in Section 4 indicate that the effect is so small that there is little to be gained by this optimisation. Setting these initial conditions to zero we obtain the simplified identification criterion

θ̂ = arg min_{θ∈Θ} Σ_{k=0}^{N} ε_i(t_k, u, 0, θ)²   (25)
Example (Continued). We now illustrate the identification criterion (24) on the first order linear model

ẏ(t) = −ay(t) + bu(t)   (26)

From (16) we have that the trajectory based predictor is given by

ŷ₁(t_{k+1}, u, ŷ₁(t₀), θ) = ã^{k+1} ŷ₁(t₀) + b̃ Σ_{i=0}^{k} ã^{k−i} u(t_i)

The prediction errors are given by ε(t_k, u, ŷ₁(t₀), θ) = y(t_k) − ŷ₁(t_k, u, ŷ₁(t₀), θ). Assuming K₁² − 4K₂ > 0, the values of the desired trajectory at the sampling points are given by

(1/(α − β)) (α e^{−αt_k} − β e^{−βt_k}) r(0)

where α and β are given after Eq. (22). This all combined leads to the identification criterion

θ̂ = arg min_θ min_{ŷ₁(t₀), r(0)} Σ_{k=0}^{N} [ε(t_k, u, ŷ₁(t₀), θ) − (1/(α − β))(α e^{−αt_k} − β e^{−βt_k}) r(0)]²   (27)
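For this example, the simplified criterion (25) can be evaluated by running the trajectory predictor from a zero initial condition and summing the squared errors, then minimised by any general-purpose optimiser (the paper used the Matlab optimisation toolbox). The coarse grid search below is only a stand-in to keep the sketch self-contained; the data and true parameters are hypothetical.

```python
import numpy as np

def criterion_25(a_t, b_t, y, u):
    """Simplified criterion (25): squared trajectory prediction errors with
    the predictor initial condition set to zero."""
    yhat, cost = 0.0, (y[0] - 0.0) ** 2
    for k in range(len(u)):
        yhat = a_t * yhat + b_t * u[k]      # trajectory predictor (16)
        cost += (y[k + 1] - yhat) ** 2
    return cost

# Noise-free data from a_tilde = 0.85, b_tilde = 0.2, zero initial condition.
rng = np.random.default_rng(1)
u = rng.choice([-1.0, 1.0], size=200)
y = np.zeros(201)
for k in range(200):
    y[k + 1] = 0.85 * y[k] + 0.2 * u[k]

grid = np.linspace(0.05, 0.95, 19)          # 0.05, 0.10, ..., 0.95
a_hat, b_hat = min(((a, b) for a in grid for b in grid),
                   key=lambda ab: criterion_25(ab[0], ab[1], y, u))
```

On noise-free data the criterion is minimised at the true parameter values, as expected for an output error type fit.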
3.5.1. Criterion based on filtered prediction errors

An intuitive approach to measure how well a model fits the criterion (20) is to use the measure

Σ_{k=0}^{N} ε²_fi(t_k, θ)   (28)

where

ε_fi(t, θ) = dε_i(t, θ)/dt + K₁ ε_i(t, θ) + K₂ ∫₀ᵗ ε_i(τ, θ) dτ   (29)

Taking the Laplace transform of (29) we obtain ε_fi(s) = [(s² + K₁s + K₂)/s] ε_i(s) = h̃(s) ε_i(s), and we see that the gain of the filter h̃(s) tends to infinity with increased frequency. A remedy for this problem is to use the filter

h(s) = (K₁ s + K₂)/s   (30)

which is a good approximation to h̃(s) in the low frequency range and has finite gain at high frequencies.

Compared to (23), the criterion (28) has the advantage that we do not have to optimise over the initial condition of the prediction error trajectory. However, since (28) only measures locally how well the prediction errors fit a desired trajectory, we lose the global aspect. The difference between the criteria (23) and (28) is therefore of the same nature as the difference between the predictors (13) and (12).
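In discrete time, the proper filter (30) amounts to forming K₁ε_i(t_k) plus K₂ times a running sum approximating the integral, i.e. a PI-type filtering of the prediction errors. A sketch under our own assumption of a forward Euler approximation of the integral:

```python
import numpy as np

def filtered_errors(eps, dt, K1, K2):
    """Apply h(s) = (K1*s + K2)/s = K1 + K2/s to the prediction errors:
    the time response is K1*eps(t) + K2 * integral_0^t eps, with the
    integral approximated by a forward Euler running sum."""
    integ = np.concatenate(([0.0], np.cumsum(eps[:-1]) * dt))
    return K1 * eps + K2 * integ

def criterion_28(eps, dt, K1, K2):
    # the filtered-error identification criterion (28)
    return np.sum(filtered_errors(eps, dt, K1, K2) ** 2)
```

For a constant error sequence the filtered value grows linearly through the integral term, reflecting that (30) still penalises persistent offsets even though its high frequency gain is finite.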
4. Simulation results

4.1. Comparison of different identification methods

In this section we present results from a simulation study comparing different methods of identification for GMC control. From the preceding control considerations we expect that trajectory oriented predictors will yield better performance than equation oriented predictors. In this study we considered linear models, and the aim was

1. to compare methods based on trajectory oriented and equation oriented predictors, and
2. to see if there were any significant differences between methods based on one type of predictor when the transfer function between the input and output could be represented exactly within the model structure.

Three basic methods were considered. They were

1. methods using the criterion (23)
2. standard least squares methods
3. standard least squares methods with data filtered through the filter (30).

For linear models the difference between the trajectory oriented predictors (13) and the equation oriented predictors (12) reduces to the difference between predictors for OE and ARX model structures.

It is well known that the least squares criterion using an OE model structure usually gives a good fit in the low frequency range, while a good fit in the high frequency range is obtained using an ARX model structure [9]. To ensure a good fit in the low frequency range for ARX models, it is common to low-pass filter the data. We therefore applied the above methods to both unfiltered and low-pass filtered data. This gave a total of 12 different methods (see Table 1).

The process we simulated was

ẋ(t) = −ax(t) + bu(t) + v(t)
y(t) = x(t) + w(t)

with parameter values a = b = 0.25. The system was sampled using a sampling interval T_s = 1/6. The input u was a random binary signal with amplitude 1, which was held constant over the sampling interval. The process was simulated for 100 s using a time step of 1/60. For each simulation time step v(t) was constant and drawn from a normal distribution with zero mean and variance 0.01². At each sampling instant t_k, the measurement noise w(t_k), also zero mean and with variance 0.01², was added to x(t_k) to produce the sampled output y(t_k). Since the process was simulated for 100 s with sampling interval 1/6, this gave us a total of 601 data points (y(t_k), u(t_k)), t_k = kT_s, k = 0, ..., 600 to use for identification. As a model class we used the class of first order linear discrete time systems

y(t_{k+1}) = ã y(t_k) + b̃ u(t_k)   (31)

The controller parameters were chosen as K₁ = 0.5 and K₂ = 0.01. The low-pass filter used was a sixth order Butterworth filter with cut-off frequency a quarter of the Nyquist frequency. The simulations were carried out in Matlab using the identification and optimisation toolboxes.
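The data generation described above can be reproduced with a simple Euler simulation. The sketch below follows our own reading of the setup (forward Euler at step 1/60, piecewise-constant noise redrawn each step) and is not the authors' Matlab code:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, Ts, dt = 0.25, 0.25, 1 / 6, 1 / 60     # plant and step sizes from the text
u_samp = rng.choice([-1.0, 1.0], size=601)   # random binary input, one value per Ts
x, ys, us = 0.0, [], []
for i in range(6001):                        # 100 s in steps of dt; Ts/dt = 10
    if i % 10 == 0:                          # a sampling instant t_k = k*Ts
        ys.append(x + 0.01 * rng.standard_normal())   # y(t_k) = x(t_k) + w(t_k)
        us.append(u_samp[i // 10])
    v = 0.01 * rng.standard_normal()         # process noise, constant over dt
    x += dt * (-a * x + b * u_samp[min(i // 10, 600)] + v)

# ys, us now hold the 601 identification data points (y(t_k), u(t_k))
```

The noise standard deviation 0.01 corresponds to the variance 0.01² quoted in the text.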
The 12 identification methods were evaluated on the basis of the performance of the associated GMC controller as follows. The system was simulated, giving a set of 601 data points which were used for identification. Then, based on the estimated values of a and b, a GMC controller was implemented (sampled, zero order hold). Then the step response of the closed loop system was simulated five times. The statistical properties of the noise sources were the same as during the gathering of identification data. For each step response

min_{y*(t₀)} Σ_{k=0}^{600} (y(t_k) − y*(t_k))²

was computed, where y(t_k) is the output and y*(t_k) the value of the desired trajectory (4) at the sampled time instants. The whole procedure was repeated five times, giving a total of five parameter estimates and 25 step tests. As a total performance measure for the identification methods we used

Σ_{i=1}^{25} min_{y*_i(t₀)} Σ_{k=0}^{600} (y_i(t_k) − y*_i(t_k))²   (32)

where y_i is the output from the ith step response and y*_i is the desired trajectory closest to the observed output. The values of (32) for the different methods are shown in increasing order in Table 2.

Table 1
Numbering of identification methods

No.  Model structure, criterion
1    OE, (23)
2    ARX, (23)
3    OE, standard LS
4    ARX, standard LS
5    OE, LS, filter (30)
6    ARX, LS, filter (30)
7    As 1, LP filtered data
8    As 2, LP filtered data
9    As 3, LP filtered data
10   As 4, LP filtered data
11   As 5, LP filtered data
12   As 6, LP filtered data
As expected from the control considerations of the identification problem, we found that the use of trajectory oriented predictors gave better controller performance than the use of equation oriented predictors. This was confirmed in several other simulation experiments with different parameters, noise properties, input amplitude, sampling interval and cut-off frequency of the low pass filter.

There were no big differences between the methods using trajectory oriented predictors, which is not surprising since the sampled transfer function between the input and the output can be exactly represented by a model in the model class. Methods 3, 5, 9 and 11 are all LS methods for the OE model structure. In 5, 9 and 11 we have passed the data through filters to further enhance the low frequency fit, which in the first place should be quite good since we are using OE model structures and the transfer function can be exactly represented. Moreover, if the initial condition of the desired trajectory of the prediction errors is close to zero, the criterion (23) is very close to the standard LS criterion.
4.2. Three CSTRs in series

In this simulation example we tested how the trajectory based methods performed in the presence of undermodelling. We considered the following process taken from Ingham et al. [11]. The process consisted of three continuously stirred tank reactors connected in series. A first order reaction A → P took place, and assuming isothermal conditions, constant volume, incompressible fluid phase, and inert component in excess, a model of the tanks can be written as

dC_{A,i}/dt = (F/V_i)(C_{A,i−1} − C_{A,i}) − kC_{A,i},   i = 1, 2, 3   (33)

where C_{A,i} is the concentration of A in tank number i. The inlet concentration C_{A,0} to the first tank is the manipulated input variable, and the concentration C_{A,3} in the third tank is the controlled output variable. F is the feed rate, V_i is the volume of tank number i, and k is the reaction constant. The process was simulated using the following values: V_i = 3 m³, i = 1, 2, 3, F = 1 m³/min and k = 0.1 min⁻¹. With these values the transfer function from the inlet concentration to the concentration in tank number three is

C_{A,3}(s) = [1/(27(s + 0.4333)³)] C_{A,0}(s)
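As a quick sanity check on the model (33) and the transfer function above, a simulation of the three tanks with a unit inlet concentration should settle at the DC gain 1/(27 · 0.4333³) = 1000/2197 ≈ 0.455. A noise-free sketch using our own forward Euler discretisation (not the Simulink model used in the paper):

```python
import numpy as np

def cstr_step(C, C_in, F=1.0, V=3.0, k=0.1, dt=0.01):
    """One forward Euler step of the three-tank model (33).

    C[i] holds C_{A,i+1}; the inflow concentration of tank 1 is C_in."""
    C_prev = np.concatenate(([C_in], C[:-1]))   # inflow of each tank
    return C + dt * (F / V * (C_prev - C) - k * C)

C = np.zeros(3)
for _ in range(20000):      # 200 min: ample settling time (tank pole at -0.4333)
    C = cstr_step(C, C_in=1.0)

# The steady state of tank 3 equals the DC gain 1/(27 * (1/3 + 0.1)**3) = 1000/2197.
```

Each tank contributes a first order lag with gain (1/3)/(1/3 + 0.1) = 10/13, so the concentration drops monotonically from tank 1 to tank 3 at steady state.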
To the model of each tank we added process noise (band limited white noise with variance 0.01²), and we added normally distributed measurement noise with variance 0.002² to the measurements of C_{A,3}. The output measurements y(t_k) = C_{A,3}(t_k) and input measurements u(t_k) = C_{A,0}(t_k) were sampled every 0.2 min. As a model for the process we used the first order model

y(t_k) = ay(t_{k−1}) + bu(t_{k−1})

The controller parameters were chosen as K₁ = 0.33 and K₂ = 0.0044. The task of the controller was to take the concentration C_{A,3} from 0.55 to 0.64 mole/m³. The input signal used for identification was a random binary signal with amplitudes 0.8 and 1.6 mole/m³. The signal could change value every fifth sample. The process was simulated for 2 h, giving a total of 600 data points, and the last 550 of these were used for identification. The simulations were again carried out in Matlab using Simulink and the identification and optimisation toolboxes.

To measure the performance of the identification method we followed the same procedure as in the previous subsection, apart from the step change from 0.55 to 0.64 mole/m³, and we repeated the procedure 20 times, yielding a total of 100 step tests. Each step test was simulated for 2 h, giving 600 data points. The results are shown in Table 3. The numbers were computed as in (32), apart from i which ranged from 1 to 100 (the number of step tests).

Table 2
Values of the performance criterion for the identification methods

Method no.  7     1     11    9     5     3     12    8     10    6      4      2
Value       1.64  1.65  1.67  1.69  1.74  1.76  2.22  2.36  2.37  27.68  36.19  37.23
As we can see, there are no large differences between these methods, even in the presence of undermodelling. The methods which optimised over the initial conditions of the model and the error trajectory did not perform any better than the other methods. In other words, the estimate (25) was as good as the estimate (24). This is not unexpected since the effect of non-zero initial conditions decays exponentially to zero.

5. Conclusions

In this paper we have derived an identification scheme for the case where the obtained model is going to be used for GMC control design. The scheme is non-standard in two ways. The first non-standard feature is that a trajectory based predictor is used. The second non-standard feature is that we do not try to make the prediction errors zero, but instead we want them to follow a trajectory determined by the controller parameters. The reason for the focus upon trajectories instead of equation errors is that the aim of GMC control is to make the output follow a desired trajectory.

In all the simulation examples the derived scheme performed well. However, it did not perform any better than schemes which just minimised the prediction errors using a trajectory oriented predictor. This was not unexpected since the error trajectory decays exponentially to zero. We can therefore conclude that better GMC control was achieved by using an identification scheme based on a trajectory oriented predictor, but it seemed that there was little, if anything, to be gained by optimising over the initial conditions of the predictor and the error trajectory.
References

[1] M. Gevers, Towards a joint design of identification and control? In: H.L. Trentelman, J.C. Willems (Eds.), Essays on Control: Perspectives in the Theory and its Applications, Birkhäuser, Boston, MA, 1993, pp. 111-115.
[2] P.M.J. Van den Hof, R.J.P. Schrama, Identification and control: closed loop issues, Preprints of the 10th IFAC Symposium on System Identification, vol. 2, Copenhagen, Denmark, pp. 1-13.
[3] P.L. Lee, G.R. Sullivan, Generic model control (GMC), Comput. and Chem. Eng. 12 (1988) 573-580.
[4] P.L. Lee (Ed.), Nonlinear Process Control: Applications of Generic Model Control, Springer-Verlag, New York, 1993.
[5] A. Isidori, Nonlinear Control Systems, third ed., Springer-Verlag, New York, 1995.
[6] B. Roorda, C. Heij, Global total least squares modelling of multivariable time series, IEEE Transactions on Automatic Control 40 (1) (1995) 50-64.
[7] E. Weyer, I.M.Y. Mareels, R.C. Williamson, Behavioural oriented identification in a stochastic framework, Proceedings of IFAC 13th World Congress, Vol. I, San Francisco, CA, July 1996, pp. 1-6.
[8] E. Weyer, R.C. Williamson, I.M.Y. Mareels, On the relationship between behavioural and standard methods for system identification, Automatica 34 (6) (1998) 801-804.
[9] L. Ljung, System Identification: Theory for the User, Prentice Hall, Englewood Cliffs, NJ, 1987.
[10] T. Söderström, P. Stoica, System Identification, Prentice Hall, Englewood Cliffs, NJ, 1988.
[11] J. Ingham, I.J. Dunn, E. Heinzle, J.E. Přenosil, Chemical Engineering Dynamics. Modelling with PC Simulation, VCH, New York, 1994.
Table 3
Values of the performance criterion for the CSTR process

Method  3     9     1     7     5     11
Value   5.32  5.34  5.58  5.59  5.88  5.92