
Parameter Estimation for Differential Equations: A Generalized Smoothing Approach

J. O. Ramsay, G. Hooker, D. Campbell and J. Cao


J. O. Ramsay,
Department of Psychology,
1205 Dr. Penfield Ave.,
Montreal, Quebec,
Canada, H3A 1B1.
ramsay@psych.mcgill.ca

The research was supported by Grant 320 from the Natural Sciences and Engineering
Research Council of Canada, Grant 107553 from the Canadian Institutes of Health Research,
and Grant 208683 from Mathematics of Information Technology and Complex Systems
(MITACS) to J. O. Ramsay. The authors wish to thank Professors K. McAuley and J.
McLellan and Mr. Saeed Varziri of the Department of Chemical Engineering at Queen's
University for instruction in the language and principles of chemical engineering, many
consultations and much useful advice. Appreciation is also due to the referees, whose
comments on an earlier version of the paper have been invaluable.
Summary. We propose a new method for estimating parameters in non-linear differential
equations. These models represent change in a system by linking the behavior of a derivative
of a process to the behavior of the process itself. Current methods for estimating parameters
in differential equations from noisy data are computationally intensive and often poorly suited
to statistical techniques such as inference and interval estimation. This paper describes a new
method that uses noisy data to estimate the parameters defining a system of nonlinear differential
equations. The approach is based on a modification of data smoothing methods along
with a generalization of profiled estimation. We derive interval estimates and show that these
have good coverage properties on data simulated from models in chemical engineering and
neurobiology. The method is demonstrated using real-world data from chemistry and from the
progress of the auto-immune disease lupus.

Keywords: Differential equations, profiled estimation, estimating equations, Gauss-Newton methods, functional data analysis

1. The challenges in dynamic systems estimation

We have in mind a process that transforms a set of m input functions, with values as
functions of time t ∈ [0, T] indicated by the vector u(t), into a set of d output functions with
values x(t). Examples are a single neuron whose response is determined by excitation from
a number of other neurons, and a chemical reactor that transforms a set of chemical species
into a product within the context of additional inputs such as temperature and flow of a
coolant and additional outputs such as the temperature of the product. The number of
outputs may be impressive; d = 50 is not unusual in modeling polymer production, for
example, and Deuflhard and Bornemann (2000), in their nice introduction to the world of
dynamic systems models, cite chemical reaction kinetic models where d is in the thousands.
It is routine that only some of the outputs will be measured. For example, temperatures
in a chemical process can usually be obtained online cheaply and accurately, but concen-
trations of chemical species can involve expensive assays that can take months to complete
and have relatively high error levels as well. The abundance of a predacious species may be
estimable, but the subpopulation of reproducing individuals may be impossible to count.
On the other hand, we take the values u(t) to be available with negligible error at all times
t.
Ordinary differential equations (ODEs) model output change directly by linking the
derivatives of the output to x itself and, possibly, to inputs u. That is, using ẋ(t) to denote
the value of the first derivative of x at time t,

ẋ(t) = f(x, u, t|θ).     (1)

Solutions of the ODE given initial values x(0) exist and are unique over a neighborhood
of (0, x(0)) if f is continuously differentiable with respect to x or, more generally, Lipschitz
continuous with respect to x. Vector θ contains any parameters defining f whose values
are not known from experimental data, theoretical considerations or other sources of information.
Although (1) appears to cover only first order systems, systems with the highest
order derivative Dⁿx on the left side are reducible to a first order form by defining n new
variables, x₁ = x, x₂ = ẋ₁ and so on up to xₙ = Dⁿ⁻¹x, and (1) can easily be extended
to include more general differential equation systems. Dependencies of f on t other
than through x and u arise when, for example, certain parameters defining the system are
themselves time-varying.
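As a concrete illustration of this reduction, the second order equation D²x = −ω²x, which reappears as an example below, becomes a first order system of the form (1) by setting x₁ = x and x₂ = ẋ₁:

ẋ₁ = x₂,    ẋ₂ = −ω²x₁,

so that x = (x₁, x₂)′ and d = 2.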

Most ODE systems are not solvable analytically, so that conventional data-fitting methodology
is not directly applicable. Exceptions are linear systems with constant coefficients,
where the machinery of the Laplace transform and transfer functions plays a role; a statistical
treatment of these is available in Bates and Watts (1988) and Seber and Wild (1989).
Discrete versions of such systems, that is, stationary systems of difference equations for
equally spaced time points, are also well treated in the classical time series ARIMA and
state-space literature, and will not be considered further in this paper, where we consider
systems of nonlinear ordinary differential equations or ODEs. In fact, it is the capacity of
relatively simple nonlinear differential equations to define functional relations of great
complexity that explains why they are so useful.
We also set aside stochastic differential equation systems involving inputs or perturbations
of parameters that are the derivative of a Wiener process. The mathematical
complexities of working with such systems have meant that, in practice, the range of ODE
structures considered has remained extremely restricted, and we focus on modeling situations
where much more complex ODE structures are required and where inputs can be
considered as being at least piecewise smooth.
The insolvability of most ODEs has meant that statistical science has had little impact
on the fitting of such models to data. Current methods for estimating ODEs from noisy
data are often slow, uncertain to provide satisfactory results, and do not lend themselves well
to collateral analyses such as interval estimation and inference. Moreover, when only a subset
of variables in a system are actually measured, the remainder are effectively functional
latent variables, a feature that adds further challenges to data analysis. Finally, although
one would hope that the total number of measured values, along with its distribution over
the measured variables, would have a healthy ratio to the dimension of the parameter vector θ,
such is often not the case. Measurements in biology, medicine and physiology, for example,
may require invasive or destructive procedures that strictly limit the number of
measurements that can realistically be obtained. These problems can often be offset,
however, by a high level of measurement precision.
This paper describes a method that is based on an extension of data smoothing methods
along with a generalization of profiled estimation to estimate the parameters defining a
system of nonlinear differential equations. High dimensional basis function expansions are
used to represent the functions in x, and the approach depends critically on considering
the coefficients of these expansions as nuisance parameters. This leads to the notion of a
parameter cascade, and the impact of nuisance parameters on the estimation of structural
parameters is controlled through a multi-criterion optimization process rather than the more
usual marginalization procedure.
Differential equations as a rule do not define their solutions uniquely, but rather as a
manifold of solutions of typical dimension d. For example, ẋ = −βx and D²x = −ω²x imply
solutions of the form x(t) = c₁ exp(−βt) and x(t) = c₁ sin(ωt) + c₂ cos(ωt), respectively,
where coefficients c₁ and c₂ are arbitrary. Thus, at least d observations are required to
identify the solution that best fits the data; initial value problems supply these values
as x(0), while boundary value problems require d values selected from x(0) and x(T).
If initial or boundary values are considered to be available without error, then the
large collection of numerical methods for estimating these solutions, treated in texts such
as Deuflhard and Bornemann (2000), may be brought into play. On the other hand, if
either there are no observations at 0 and T or the observations supplied are subject to
measurement error, then these initial or boundary values, if required, can be considered
parameters that must be included in an augmented parameter vector θ* = (x(0)′, θ′)′. Our
approach may be considered as an extension of methods for these two situations where
the data over-determine the system, are distributed anywhere in [0, T], and are subject to
observational error. We may call such a situation a distributed data ODE problem.

1.1. The data and error model contexts


We assume that a subset I of the d output variables is measured at time points tij, i ∈
I ⊆ {1, . . . , d}; j = 1, . . . , Ni, and that yij is a corresponding measurement that is subject
to measurement error eij = yij − xi(tij). Let ei indicate the vector of errors associated
with observed variable i ∈ I, and let gi(ei|σi) indicate the joint density of these errors
conditional on a parameter vector σi. In practice it is common to assume independently
distributed Gaussian errors with mean 0 and standard deviation σi, but in fact autocorrelation
structure and nonstationary variance are often evident in the data, and when these
features are also modeled, these parameters are also incorporated into σi. Let σ indicate
the concatenation of the σi vectors. Although our notation is consistent with assuming
that errors are independent across variables, inter-variable error dependencies, too, can be
accommodated by the approach developed in this paper.

1.2. Two test-bed problems


Two elementary problems will be used in the paper to illustrate aspects of the data fitting
problem.

1.2.1. The FitzHugh-Nagumo neural spike potential equations


These equations were developed by FitzHugh (1961) and Nagumo et al. (1962) as simplifications
of the Hodgkin and Huxley (1952) model of the behavior of spike potentials in the
giant axon of squid neurons:

V̇ = c(V − V³/3 + R) + u(t)
Ṙ = −(1/c)(V − a + bR).     (2)
The system describes the reciprocal dependencies of the voltage V across an axon membrane
and a recovery variable R summarizing outward currents, as well as the impact of a time-varying
external input u. Although not intended to provide a close fit to actual neural spike
potential data, solutions to the FitzHugh-Nagumo ODEs do exhibit features common to
elements of biological neural networks (Wilson, 1999).
The parameters are θ = {a, b, c}, to which we will assign values (0.2, 0.2, 3), respectively.
The R equation is the simple constant coefficient linear system Ṙ = −(b/c)R linearly forced
by V and a. However, the V equation is nonlinear; when V > 0 is small, V̇ ≈ cV and
consequently exhibits nearly exponential increase, but as V passes √3, the influence of
−V³/3 takes over and turns V back toward 0. Consequently, unforced solutions, where
u(t) = 0, quickly converge from a range of starting values to periodic behavior that alternates
between the smooth evolution and the sharp changes in direction shown in Figure 1.
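As an illustration of this limiting behavior (not part of the original analysis), the trajectories in Figure 1 can be reproduced with any standard initial value solver; the sketch below uses SciPy, with the parameter values and initial conditions given in the caption of Figure 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, y, a, b, c):
    """Right sides of the unforced FitzHugh-Nagumo equations (2), u(t) = 0."""
    V, R = y
    dV = c * (V - V**3 / 3 + R)
    dR = -(V - a + b * R) / c
    return [dV, dR]

# Parameter values and initial conditions used for Figure 1.
a, b, c = 0.2, 0.2, 3.0
sol = solve_ivp(fitzhugh_nagumo, (0, 20), [-1, 1], args=(a, b, c),
                dense_output=True, max_step=0.1)
t = np.linspace(0, 20, 401)
V, R = sol.sol(t)  # limit-cycle paths alternating smooth and sharp segments
```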
A particular concern in ODE modeling is the possibly complex nature of the fit surface.
The existence of many local minima has been commented on in Esposito and Floudas (2000)


Fig. 1. The solid lines show the limiting behavior of voltage V and recovery R defined by the unforced
FitzHugh-Nagumo equations (2) with parameter values a = 0.2, b = 0.2 and c = 3.0 and initial
conditions (V₀, R₀) = (−1, 1).


Fig. 2. The integrated squared difference between solutions of the FitzHugh-Nagumo equations for
parameters (a, b) and (0.2, 0.2) as a and b are varied about (0.2, 0.2).
and a number of computationally demanding algorithms, such as simulated annealing, have
been proposed to overcome this problem. For example, Jaeger et al. (2004) reported using
weeks of computation to compute a point estimate. Figure 2 displays the integrated squared
difference surface obtained by varying only the parameters a and b of the FitzHugh-Nagumo
equations (2) in a fit to the errorless paths shown in Figure 1. The features of this surface
include ripples due to changes in the shape and period of the limit cycle and breaks due
to bifurcations, or sharp changes in behavior.

1.2.2. The tank reactor equations


The concept of a continuously stirred tank reactor, or CSTR, in chemical engineering
consists of a tank surrounded by a cooling jacket and an impeller which stirs the contents. A
fluid containing a reagent with concentration Cin is pumped into the tank at a flow rate Fin
and temperature Tin. The reaction produces a product that leaves the tank with concentration
Cout and temperature Tout. A coolant enters the cooling jacket with temperature
Tcool and flow rate Fcool.
The differential equations used to model a CSTR, taken from Marlin (2000) and simpli-
fied by setting the volume of the tank to one, are

Ċout = −βCC(Tout, Fin)Cout + Fin Cin
Ṫout = −βTT(Fcool, Fin)Tout + βTC(Tout, Fin)Cout + Fin Tin + α(Fcool)Tcool.     (3)

The input variables play two roles on the right sides of these equations: through added
terms such as Fin Cin and Fin Tin, and via the weight functions βCC, βTT and βTC that
multiply the output variables, and α that multiplies Tcool. These time-varying multipliers depend
on four system parameters as follows:

βCC(Tout, Fin) = κ exp[−10⁴τ(1/Tout − 1/Tref)] + Fin
βTT(Fcool, Fin) = α(Fcool) + Fin
βTC(Tout, Fin) = 130 βCC(Tout, Fin)
α(Fcool) = aFcool^(b+1)/(Fcool + aFcool^b/2),     (4)

where Tref is a fixed reference temperature within the range of the observed temperatures,
in this case 350 deg K. These functions are defined by two pairs of parameters: (κ, τ)
defining coefficient βCC and (a, b) defining coefficient α. The factor 10⁴ in βCC rescales τ so
that all four parameters are within [0.4, 1.8]. These parameters are gathered in the vector θ
in (1), and determine the rate of the chemical reactions involved, or the reaction kinetics.
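To make the structure of (3) and (4) concrete, the following sketch transcribes the right sides in code form; it follows the reconstruction of (4) given above, and the function `inputs`, which returns the five input values at time t, is a hypothetical placeholder for whatever input schedule is in force.

```python
import numpy as np

def cstr_rhs(t, y, kappa, tau, a, b, inputs, Tref=350.0):
    """Right sides of the CSTR equations (3), with weight functions (4)."""
    Cout, Tout = y
    Cin, Fin, Tin, Tcool, Fcool = inputs(t)   # five input values at time t

    alpha = a * Fcool**(b + 1) / (Fcool + a * Fcool**b / 2)
    betaCC = kappa * np.exp(-1e4 * tau * (1 / Tout - 1 / Tref)) + Fin
    betaTT = alpha + Fin
    betaTC = 130 * betaCC

    dC = -betaCC * Cout + Fin * Cin
    dT = -betaTT * Tout + betaTC * Cout + Fin * Tin + alpha * Tcool
    return [dC, dT]
```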
The plant engineer needs to understand the dynamics of the two output variables Cout
and Tout as determined by the five inputs Cin , Fin , Tin , Tcool and Fcool . A typical experiment
designed to reveal these dynamics is illustrated in Figure 3, where we see each input variable
stepped up from a baseline level, stepped down, and then returned to baseline. Two baseline
levels are presented for the most critical input, the coolant temperature Tcool .
The behaviors of output variables Cout and Tout under the experimental regime, given
values 0.833, 0.461, 1.678 and 0.5 for parameters κ, τ, a and b, respectively, are shown in
Figure 4. When the reactor runs in the cool mode, where the baseline coolant temperature
is 335 degrees Kelvin, the two outputs respond smoothly to the step changes in all inputs.
However, an increase in baseline coolant temperature by 30 degrees Kelvin generates
oscillations that come close to instability when the coolant temperature decreases, and this

Fig. 3. The five inputs to the chemical reactor modeled by the two equations (3): flow rate F(t), input
concentration C0(t), input temperature T0(t), coolant temperature Tcin(t) and coolant flow rate Fc(t).

Fig. 4. The two outputs, for each of coolant temperatures Tcool of 335 and 365 deg. K, from the
chemical reactor modeled by the two equations (3): concentration C(t) and temperature T (t). The
input functions are shown in Figure 3. Times at which an input variable is changed are shown as
vertical dotted lines.
would be highly undesirable in an actual industrial process. These perturbations are due to
the double impact of a decrease in output temperature, which increases the size of both βCC
and βTC. Increasing βTC raises the forcing term in the Tout equation, thus increasing temperature.
Increasing βCC makes concentration more responsive to changes in temperature, but
decreases the size of the response. This push-pull process has a resonant frequency that
depends on the kinetic constants, and when the ambient operating temperature reaches a
certain level, the resonance appears. For coolant temperatures either above or below this
critical zone, the oscillations disappear.
The CSTR equations present two challenges that are not an issue for the FitzHugh-Nagumo
equations. The step changes in inputs induce corresponding discontinuities in
the output derivatives that complicate the estimation of solutions by numerical methods.
Moreover, the engineer must estimate the reaction kinetics parameters in order to estimate
the cooling temperature range to avoid, but a key question is whether all four parameters are
actually estimable given a particular data configuration. We have noted that step changes
in inputs and near over-parameterization are common problems in dynamic modeling.

1.3. A review of current ODE parameter estimation strategies


Procedures for estimating the parameters defining an ODE from noisy data tend to fall
into three broad classes: linearization and discretization methods for initial value
problems, and basis function expansion or collocation methods for boundary and distributed
data problems. Linearization involves replacing nonlinear structures by first order Taylor
series expansions; it tends only to be useful over short time intervals combined with rather
mild nonlinearities, and will not be considered further.

1.3.1. Data fitting by numerical approximation of an initial value problem


The numerical methods most often used to approximate solutions of ODEs over a range
[t0, t1] use fixed initial values x0 = x(t0) and adaptive discretization techniques. The
data fitting process, often referred to by textbooks as the nonlinear least squares or NLS
method, goes as follows. A numerical method such as the Runge-Kutta algorithm is used
to approximate the solution given a trial set of parameter values and initial conditions, a
procedure referred to by engineers as simulation. The fit value is input into an optimization
algorithm that updates parameter estimates. If the initial conditions x(0) are unavailable,
they must be added to the parameters as quantities with respect to which the fit is optimized.
The optimization process can proceed without using gradients, or these may also be
approximated by solving the sensitivity differential equations

d/dt (dx/dθ) = ∂f/∂θ + (∂f/∂x)(dx/dθ), with dx(0)/dθ = 0.     (5)

In the event that x(0) = x0 must also be estimated, the corresponding sensitivity equations
are

d/dt (dx/dx0) = (∂f/∂x)(dx/dx0), with dx(0)/dx0 = I.     (6)
There are a number of variants on this theme; any numerical method could conceivably
be used with any optimization algorithm. The most conventional of these are Runge-Kutta
integration methods, combined with gradient descent in the survey paper Biegler et al.

(1986), and with a Nelder-Mead simplex algorithm in Fussmann et al. (2000). Systems for
which solutions beginning at varying initial values tend to converge to a common trajectory
are called stiff, and require special methods that make use of the Jacobian f /x.
The NLS procedure has many problems. It is computationally intensive, since a numerical
approximation to a possibly complex process is required for each update of parameters and
initial conditions. The inaccuracy of the numerical approximation can be a problem,
especially for stiff systems or for discontinuous inputs such as step functions or functions
concentrating their masses at discrete points. In any case, numerical solution noise is added
to that of the data so as to further degrade parameter estimates. The size of the parameter
set may be increased by the set of initial conditions needed to solve the system. NLS also
only produces point estimates of parameters, and where interval estimation is needed, a
great deal more computation can be required. As a consequence of all this, Marlin (2000)
warns process control engineers to expect an error level of the order of 25% in parameter
estimates. Nevertheless, the wide use of NLS testifies to the fact that, at least for simple
smooth systems, it can meet the goals of the application.
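For concreteness, here is a minimal sketch of the NLS procedure just described, using a Runge-Kutta solver for the simulation step; `fitzhugh_nagumo` is the right-side function from the earlier sketch, `t_obs` and `y_obs` are assumed arrays of observation times and noisy measurements of V, and the initial conditions are appended to the parameter vector as described above. Gradients are here left to finite differencing rather than the sensitivity equations (5) and (6).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def nls_residuals(packed, t_obs, y_obs):
    """Simulate from trial parameters and initial values, then
    return residuals between the numerical solution and the data."""
    a, b, c, V0, R0 = packed
    sol = solve_ivp(fitzhugh_nagumo, (t_obs[0], t_obs[-1]), [V0, R0],
                    args=(a, b, c), t_eval=t_obs)
    return sol.y[0] - y_obs   # only V is assumed measured here

# Each evaluation re-solves the ODE numerically ("simulation").
fit = least_squares(nls_residuals, x0=[0.5, 0.5, 2.0, -1.0, 1.0],
                    args=(t_obs, y_obs))
```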
A Bayesian approach which avoids the problems of local minima was suggested in Gelman
et al. (2004). The authors set up a model where observations yj at times tj, conditional
on θ, are modelled with a density centered on the numerical solution x̂(tj|θ) to the differential
equation, such as yj ∼ N[x̂(tj|θ), σ²]. Since x̂(tj|θ) has no closed form solution, the
posterior density for θ has no closed form and inference must be based on simulation from
a Metropolis-Hastings algorithm or other sampler. At each iteration of the sampler θ is
updated. Since x̂(tj|θ) must be numerically approximated conditional on the latest parameter
estimates, this approach has some of the problems of the NLS method.

1.3.2. Collocation methods using basis function expansions


Our own approach belongs in the family of collocation methods that express xi in terms of a
basis function expansion

xi(t) = Σk=1..Ki cik φik(t) = ci′φi(t),     (7)

where the number Ki of basis functions in vector φi is chosen so as to ensure enough
flexibility to capture the variation in xi and its derivatives that is required to satisfy the
system equations (1). Although the original collocation methods used polynomial bases,
spline systems tend to be used currently because of their computational efficiency, but also
because they allow control over the smoothness of the solution at specific values of t. The
latter property is especially useful for dealing with discontinuities in ẋ associated with step
and point changes in inputs u. The problem of estimating xi is transformed into the problem
of estimating the coefficients in ci. Collocation, of course, has its analogues everywhere in
applied mathematics and statistics, and is especially close in spirit to finite element methods
for approximating solutions to partial differential equations. Basis function approaches to
data smoothing in statistics adopt the same approach, but in the approach that we propose,
xi(t|ci) must come at least close to solving (1), the structure of f being a source of additional
data that inform the fitting process.
Collocation methods were originally developed for boundary value problems, but the use
of a spline basis to approximate an initial value problem is equivalent to the use of an implicit
Runge-Kutta method for stepping points located at the knots defining the basis (Deuflhard
and Bornemann (2000)). Collocation with spline bases was applied to data fitting problems
involving an ODE model by Varah (1982), who suggested a two-stage procedure in which
each xi is first estimated by data smoothing methods without considering satisfying (1),
followed by the minimization of a least squares measure of the fit of ẋ to f(x, u, t|θ) with
respect to θ. The method worked well for the simple equations that were considered in that
paper, but considerable care was required in the smoothing step to ensure a satisfactory
estimate of ẋ, and the technique also required that all variables in the system be measured.
Voss et al. (1998) suggested using finite difference methods to approximate ẋ, but difference
approximations are frequently too noisy and biassed to be useful.
Ramsay and Silverman (2005) and Poyton et al. (2006) took Varah's method further
by iterating the two steps, replacing the previous iteration's roughness penalty by a
penalty on the size of ẋ − f(x, u, t|θ) using the last minimizing value of θ. They found that
this process, iterated principal differential analysis (iPDA), converged quickly to estimates
of both x and θ that had substantially improved bias and precision. However, iPDA is a
joint estimation procedure in the sense that it optimizes a single roughness-penalized fitting
criterion with respect to both c and θ, an aspect that will be discussed further in the next
section.
Bock (1983) proposed a multiple shooting method for data fitting combined with Gauss-Newton
minimization, and a similar approach is followed in Li et al. (2005). Multiple
shooting has been extended to systems of partial differential equations in Müller and Timmer
(2004). These methods incorporate parameter estimation into the numerical scheme for
solving the differential equation, an approach also followed in Tjoa and Biegler (1991).
They bear some similarity to our own methods in the sense that solutions to the differential
equations are not achieved at intermediate steps. However, our method can be viewed
as enforcing a soft constraint that represents an interpretable compromise between fitting
the data and solving the ODE.

1.4. An overview of the paper


Our approach to fitting differential equation models is developed in Section 2, where we
develop the concepts of estimating functions and a generalization of profiled estimation.
Section 2.8 follows up with some results on limiting behavior of estimates as the smoothing
parameters increase, and discusses some heuristics.
Sections 3 and 4 show how the method performs in practice. Section 3 tests the method
on simulated data for the FitzHugh-Nagumo and CSTR equations, and Section 4 estimates
differential equation models for data drawn from chemical engineering and medicine.
Generalizations of the method are discussed in Section 5 and some open problems in fitting
differential equations are given in Section 6. Some consistency results are provided in the
Appendix.

2. The generalized profiling estimation procedure

We first give an overview of our estimation strategy, and then provide further details below.
As we noted above, our method is a variant of the collocation method, and as such represents
each variable in terms of a basis function expansion (7). Let c indicate the composite
vector of length K = Σi∈I Ki that results from concatenating the ci's. Let Φi be the
Ni by Ki matrix of values φik(tij), and let Φ be the N = Σi∈I Ni by K supermatrix
constructed by placing the matrices Φi along the diagonal and zeros elsewhere. According
to this notation, we have the composite basis expansion x̂ = Φc.

2.1. An overview of the estimation procedure


Defining x as a set of basis function expansions implies that there are two classes of parameters
to estimate: the parameters θ defining the equation, such as the four reaction
kinetics parameters in the CSTR equations; and the coefficients in ci defining each basis
function expansion. The equation parameters are structural in the sense of being of primary
interest, as are the error distribution parameters in σi, i ∈ I. But the coefficients ci are
considered as nuisance parameters that are essential for fitting the data, but usually not of
direct concern. The sizes of these vectors are apt to vary with the length of the observation
interval, density of observation, and other aspects of the structure of the data; and the
number of these nuisance parameters can be orders of magnitude larger than the number
of structural parameters, with a ratio of about 200 applying in the CSTR problem.
In our profiling procedure, the nuisance parameter estimates are defined to be implicit
functions ĉi(θ, σ; λ) of the structural parameters, in the sense that each time θ and σ
are changed, an inner fitting criterion J(c|θ, σ, λ) is re-optimized with respect to c alone.
The estimating function ĉi(θ, σ; λ) is regularized by incorporating in J a penalty term that
controls the extent to which x̂ = Φĉ fails to satisfy the differential equation exactly,
in a manner specified below. The amount of regularization is controlled by smoothing
parameters in vector λ. This process of eliminating the direct impact of nuisance parameters
on the fit of the model to the data resembles the common practice of eliminating random
effect parameters in mixed effect models by marginalization.
A data fitting criterion H(θ, σ|λ) is then optimized with respect to the structural parameters
alone. The dependency of H on (θ, σ) is two-fold: directly, and implicitly through
the involvement of ĉi(θ, σ; λ) in defining the fit x̂i. Because ĉi(θ, σ; λ) is already regularized,
criterion H does not require further regularization, and is a straightforward measure
of fit such as error sum of squares, log likelihood or some other measure that is appropriate
given the distribution of the errors eij.
While in some applications users may be happy to adjust the values in λ manually,
we envisage also the data-driven estimation of λ through the use of a measure F(λ) of
model complexity or mean squared error, such as the generalized cross-validation or GCV
criterion often used in least squares spline smoothing. In this event, the vector λ defines
a third level of parameters, and leads us to define a parameter cascade in which structural
parameter estimates are in turn defined to be functions θ̂(λ) and σ̂(λ) of regularization
or complexity parameters, and nuisance parameters now also become functions of λ via
their dependency on structural parameters. Our estimation procedure is, in effect, a multi-criterion
optimization problem, and we can refer to J, H and F as inner, middle and outer
criteria, respectively. We have applied this approach to semi-parametric regression in Cao
and Ramsay (2006), and also note that Keilegom and Carroll (2006) use a similar approach,
also in semiparametric regression.
We motivate this approach as follows. Fixing complexity parameters λ for the purposes
of discussion, we appreciate here, as in random effects modeling and nonparametric regression,
that it would be unwise to employ joint estimation using a fixed data-fitting criterion
H with respect to all of θ, σ and c, since the overwhelmingly larger number of nuisance
parameters would tend to lead to over-fitting the data and consequently unacceptable bias
and sampling variance in θ̂ and σ̂. By assessing smoothness of the fit x̂ to the data in
terms of departure from satisfying (1), we are, in effect, bringing additional data into the
fitting process in the form of the roughness penalty, in much the same way that a Bayesian
brings prior information to parameter estimation in the form of the logarithm of a prior
density. However, the Bayesian strategy suffers from the problem that the integration in
the marginalization process is seldom available analytically, thus leading to computationally
intensive MCMC technology. We show here that our parameter cascade approach leads to
the analytic derivatives required for efficient optimization, and also for linear approximation to
interval estimates. We find that this results in much faster computation than in our parallel
experiments with MCMC methods, and is far easier to deploy to users in the form of flexible
and extendable computer code.
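The resulting three-level structure can be summarized schematically; this is a sketch of the parameter cascade only, not the authors' implementation, with `inner_J` and `middle_H` standing for the criteria J and H above and `c_init`, `theta_init` for hypothetical starting values.

```python
from scipy.optimize import minimize

def c_hat(theta, lam, y):
    """Inner level: re-optimize the nuisance coefficients c every time
    the structural parameters theta change (sigma suppressed here)."""
    return minimize(lambda c: inner_J(c, theta, lam, y), c_init).x

def theta_hat(lam, y):
    """Middle level: optimize theta with c profiled out through c_hat."""
    return minimize(lambda th: middle_H(th, c_hat(th, lam, y), y),
                    theta_init).x

# Outer level: lambda may be adjusted manually or chosen by minimizing
# a complexity criterion F(lambda) such as generalized cross-validation.
```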

2.2. The data fitting criterion


In general, the data-fitting criterion can be taken to be the negative log likelihood

H(θ, σ|λ) = −Σi∈I ln gi(ei|σi, θ, λ),     (8)

where

eij = yij − ĉi(σi, θ; λ)′φi(tij).
Because the use of least squares as a criterion is so common, some remarks are offered
on the case where the eij are independently distributed as N(0, σi²). The output variables xi will
as a rule have different units; the concentration of the output in the CSTR equations is a
percentage, while temperature is in degrees Kelvin. Consequently, each error sum of squares
must be multiplied by a normalizing weight wi that, ideally, should be 1/σi², so that the
normalized error sums of squares are of roughly comparable sizes. However, given enough
data per variable, it can suffice to use data-defined values, such as the reciprocal of the
squared initial value xi(0) or of the variance taken over values xi(tij) for some trial or initial
estimate of a solution of the equation. Letting yi indicate the data available for variable i,
consisting of observations at time points ti, and x̂i(ti) indicate the vector of fitted values
corresponding to yi, the composite error sum of squares criterion is

H(θ|λ) = Σi∈I wi ‖yi − x̂i(ti)‖²,     (9)

where the norm may allow for features like autocorrelation and heteroscedasticity.

2.3. Assessing fidelity to the equations


We may express each equation in (1) as the differential operator equation

Li,θ(xi) = ẋi − fi(x, u, t|θ) = 0.     (10)

The extent to which an actual function xi satisfies the ODE system can then be assessed
by

PENi(x) = ∫ [Li,θ(xi(t))]² dt,     (11)

where the integration is over an interval which contains the times of measurement. The
normalization constant wi may be required here, too, to allow for different units of measurement.
Other norms are also possible, and total variation, defined as

PENi(x) = ∫ |Li,θ(xi(t))| dt,     (12)

has turned out to be an important alternative in situations where there are sharp breaks in
the function being estimated (Koenker and Mizera (2002)). A composite fidelity-to-equation
measure is

PEN(x|Lθ, λ) = Σi=1..d λi PENi(x),     (13)

where Lθ denotes the vector containing the d differential operators Li,θ. Note that in this
case the summation will be over all d variables in the equation. The multipliers λi ≥ 0
permit us to weight fidelities differently, and also control the relative emphasis on fitting
the data and solving the equation for each variable.

2.4. Estimating ĉ(θ; λ)
Finally, the data-fitting and equation-fidelity criteria are combined into the penalized log
likelihood criterion

J(c|θ, σ, λ) = −Σi∈I ln gi(ei|σi, θ, λ) + PEN(x|λ).     (14)

In the least squares case, this reduces to

J(c|θ, σ, λ) = Σi∈I wi ‖yi − x̂i(ti)‖² + PEN(x|λ).     (15)

In general the minimization of J will require numerical optimization, but in the least squares
case with linear ODEs, it is possible to express ĉ(θ; λ) analytically (Ramsay and Silverman
(2005)).
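As a sketch of that analytic solution, under the assumption that each Li,θ is linear in x: the penalty (13) is then a quadratic form c′R(θ; λ)c, where R(θ; λ) assembles the penalized inner products λi ∫ [Li,θφi(t)][Li,θφi(t)]′ dt over variables, so (15) is quadratic in c and its minimizer has the familiar ridge form

ĉ(θ; λ) = [Φ′WΦ + R(θ; λ)]⁻¹ Φ′Wy,

with W the block-diagonal matrix of the weights wi. In this case the inner optimization reduces to solving a single linear system.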

2.5. Outer optimization for θ

In this and the remainder of the section, we simplify the notation considerably by dropping
the dependency of criterion H on σ and λ, regarding the latter as a fixed parameter.
These results can easily be extended to get the results for the joint estimation of system
parameters and error distribution parameters where required. It is assumed that H is
twice continuously differentiable with respect to both θ and c, and that the second partial
derivative or Hessian matrices

∂²H/∂θ² and ∂²H/∂c²

are positive definite over a nonempty neighborhood N of y in data space.
The gradient or total derivative, DH(θ), with respect to θ is

DH(θ) = ∂H/∂θ + (∂H/∂c)(dĉ/dθ).     (16)

Since ĉ(θ) is not available explicitly, we apply the implicit function theorem to obtain

dĉ/dθ = −[∂²J/∂c²]⁻¹[∂²J/∂c∂θ]     (17)

and

DH(θ) = ∂H/∂θ − (∂H/∂c)[∂²J/∂c²]⁻¹[∂²J/∂c∂θ].     (18)

The matrices used in these equations and those below have complex expressions in terms
of the basis functions in Φ and the functions f on the right side of the differential equation.
Appendix A provides explicit expressions for them for the case of least squares estimation.
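In matrix terms, (17) and (18) amount to solving a single linear system; the sketch below assumes that the partial derivative arrays, whose explicit expressions are in Appendix A, are supplied by model-specific code.

```python
import numpy as np

def total_gradient(dH_dtheta, dH_dc, d2J_dc2, d2J_dcdtheta):
    """Total gradient (18): combine the direct theta-derivative of H with
    the implicit dependence through c-hat, using (17) for dc/dtheta."""
    dc_dtheta = -np.linalg.solve(d2J_dc2, d2J_dcdtheta)   # equation (17)
    return dH_dtheta + dc_dtheta.T @ dH_dc                # equation (16)
```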

2.6. Approximating the sampling variation of θ̂ and ĉ

Let Σ be the variance-covariance matrix for y. Making explicit the dependency of H on
the data y by using the notation H(θ|y), the estimate θ̂(y) of θ is the solution of the
stationary equation ∂H(θ̂|y)/∂θ = 0. Here and below, all partial derivatives as well as
total derivatives are assumed to be evaluated at θ̂ and ĉ(θ̂), which are in turn evaluated at
y.
The usual δ-method employed in nonlinear least squares produces a variance estimate
of the form

[(dx/dθ)′(dx/dθ)]⁻¹

by making use of the approximation

d²H/dθ² ≈ (dx/dθ)′(dx/dθ).

We will instead provide an exact estimation of the Hessian above and employ it with a
pseudo δ-method. Although this implies considerably more computation, our experiments
in Section 3.1 suggest that this method provides more accurate results than the usual
δ-method estimate.
By applying the Implicit Function Theorem to ∂H/∂θ as a function of y, we may say that
for any y in N there exists a value θ̂(y) satisfying ∂H/∂θ = 0. By taking the y-derivative
of this relation, we obtain:

d/dy [dH/dθ]|θ̂(y) = [d²H/dθdy]|θ̂(y) + [d²H/dθ²]|θ̂(y) (dθ̂/dy) = 0,     (19)

where

d²H/dθ² = ∂²H/∂θ² + 2(∂²H/∂θ∂c)(∂ĉ/∂θ) + (∂ĉ/∂θ)′(∂²H/∂c²)(∂ĉ/∂θ) + (∂H/∂c)(∂²ĉ/∂θ²),     (20)

and

d²H/dθdy = ∂²H/∂θ∂y + (∂²H/∂θ∂c)(∂ĉ/∂y) + (∂ĉ/∂θ)′(∂²H/∂c∂y) + (∂ĉ/∂θ)′(∂²H/∂c²)(∂ĉ/∂y) + (∂H/∂c)(∂²ĉ/∂θ∂y).     (21)

The formulas (20) and (21) involve the terms ∂ĉ/∂y, ∂²ĉ/∂θ² and ∂²ĉ/∂θ∂y, which can
also be derived by the Implicit Function Theorem and are given in Appendix A. Solving
(19), we obtain the first derivative of θ̂ with respect to y:

dθ̂/dy = −[d²H/dθ²]⁻¹|θ̂(y) [d²H/dθdy]|θ̂(y).     (22)

Let μ = E(y); the first order Taylor expansion for dθ̂/dy is:

dθ̂/dy ≈ dθ̂/dμ + (d²θ̂/dμ²)(y − μ).     (23)
When d²θ̂/dμ² is uniformly bounded, we can take the expectation on both sides of (23)
and derive E(dθ̂/dy) ≈ dθ̂/dμ. We can also approximate θ̂(y) by using the first order
Taylor expansion:

θ̂(y) ≈ θ̂(μ) + (dθ̂/dμ)(y − μ).     (24)

Taking the variance on both sides of (24), we derive

Var[θ̂(y)] ≈ (dθ̂/dμ) Σ (dθ̂/dμ)′     (25)
≈ (dθ̂/dy) Σ (dθ̂/dy)′, since E(dθ̂/dy) ≈ dθ̂/dμ.

Similarly, the sampling variance of ĉ(θ̂(y)) is estimated by

Var[ĉ(θ̂(y))] ≈ (dĉ/dy) Σ (dĉ/dy)′,     (26)

where

dĉ/dy = (∂ĉ/∂θ)(dθ̂/dy) + ∂ĉ/∂y.     (27)

2.7. Numerical integration in the inner optimization


The integrals in PENi will normally require approximation by the linear functional

PENi(x) ≈ Σq=1..Q vq [Li,θ(xi(tq))]²,     (28)

where Q, the evaluation points tq, and the weights vq are chosen so as to yield a reasonable
approximation to the integrals involved.
Let ξℓ indicate a knot location or a breakpoint. It may be that there will be multiple
knots at such a location in order to deal with step function inputs that imply discontinuous
derivatives. We have obtained satisfactory results by dividing each interval [ξℓ, ξℓ+1]
into four equal-sized intervals, and using Simpson's rule weights [1, 4, 2, 4, 1](ξℓ+1 − ξℓ)/5.
The total set of these quadrature points and weights along with basis function values may
be saved at the beginning of the computation so as to save time. If a B-spline basis is used,
great improvements in speed of computation are achieved by using sparse matrix methods.
Efficiency in the inner optimization is essential, since it will be invoked far more often
than the outer optimization. In the case of least squares fitting, the minimization of (14)
can be expressed as a large nonlinear least squares problem by observing
that we can express the numerical quadrature approximation to Σi λi PENi(x) as

Σi Σq [0 − (λi vq)^(1/2) Li,θ(xi(tq))]².

These squared residuals can then be appended to those in H, and Gauss-Newton minimization
can then be used. When the coefficients enter linearly into the expression for the fitting
function, the inner optimization can be avoided entirely by using the explicit solution that
is available in this case.
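A sketch of the stacked-residual construction just described, under the least squares criterion (15); `Phi` is the supermatrix of basis values at the data times, `w` the observation weights expanded to match y, `lam_v` the quadrature weights with the λi folded in, and `L_values` a hypothetical routine returning the values Li,θ(xi(tq)) at the quadrature points.

```python
import numpy as np

def stacked_residuals(c, theta, lam_v, y, Phi, w, L_values):
    """Residual vector for Gauss-Newton minimization of (15): weighted
    data misfits followed by weighted ODE misfits at quadrature points."""
    res_data = np.sqrt(w) * (y - Phi @ c)            # data-fitting part
    res_pen = -np.sqrt(lam_v) * L_values(c, theta)   # 0 - (lam*v)^1/2 L(x)
    return np.concatenate([res_data, res_pen])
```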
2.8. Choosing the amount of smoothing
Recall that the central goal of this paper is to estimate parameters, rather than to smooth
the data. This means that traditional approaches to the choice of smoothing parameter,
such as those based on cross validation, may no longer be appropriate. The theory derived
in Section 2.9 suggests that when the data agree well with the ODE model, the λi should
be chosen as large as possible, bounded only by the possibility of distortion from our choice
of basis expansion (7).
In our experience, however, real world systems are rarely perfectly described by ODEs.
In such situations, we may wish to choose a limited value for λi in order to be able to
account for systematic discrepancies between ODE solutions and the data. In this sense,
the amount of smoothing provides a continuum of solutions representing trade-offs between
the problem of estimating θ and fitting the data well. For each value of the λi, we are given
two fits to the data: the smooth x̂ at the estimated θ̂, and the set of exact solutions to the
ODE at θ̂. The discrepancy between these two will decrease as λi increases and can be
viewed as a diagnostic for lack of fit in the model, and is therefore an additional benefit of this
approach. The fit to the data defined by an exact solution to the equations can be obtained
by computing solutions to the initial value problem corresponding to the estimated initial
values x̂(0). It may be helpful to try optimizing these initial conditions using the NLS
method, with parameter values kept fixed at their estimated values.
The degree of smoothing also affects the numerical properties of our estimation scheme.
Typically, larger values of λi make the inner optimization harder, increasing the number of
Gauss-Newton iterations required. Smaller values also appear to make the response surface
for the outer optimization more convex, a point discussed further in Section 2.10. This
suggests a scheme of estimating θ at increasing amounts of smoothness in order to overcome
the local minima seen in Figure 2. Under this scheme an upper limit on λi is reached when
the basis approximation begins to add too much numerical error to the estimation of x. A
simple diagnostic is therefore to solve the ODEs by a Runge-Kutta method and attempt to
perform the smoothing in the inner optimization on the resulting data; λi should be kept
below a level at which the smoothing process distorts these data.

2.9. Parameter estimate behavior as λ → ∞

In this section, we consider the behavior of our parameter estimate as λ becomes large.
This analysis takes an idealized form in the sense that we assume that the optimization
may be done globally and that the function being estimated can be expressed exactly,
without the approximation error that would come from a basis expansion. We show that
as λ becomes large, the estimates defined through our profiling procedure converge to the
estimates that we would obtain if we estimated θ by minimizing the negative log likelihood over
both θ and the initial conditions x0. In other words, we treat x0 as nuisance parameters
and estimate θ by profiling. When f is Lipschitz continuous in x and continuous in θ, the
likelihood is continuous in θ and the usual consistency theorems (e.g. Cox and Hinkley
(1974)) hold; in particular, the estimate is asymptotically unbiassed.
For the purposes of this section, we will make a few simplifying conventions. Firstly, we
will take:

l(x) = −Σi∈I ln gi(ei|σi, θ).

Secondly, we will represent

PEN(x|λ) = λ Σi=1..n ci wi ∫ (ẋi(t) − fi(x, u, t|θ))² dt,

where the ci are taken to be constants and the λi used in the definition (13) are given by
λci for some λ.
We will also assume that solutions to the data fitting problem exist and are well defined,
and therefore that there are objects x that satisfy PEN(x|θ) = 0. This is guaranteed locally
by the following theorem, adapted from Bellman (1953):

Theorem 2.1. Let f be Lipschitz continuous and u differentiable almost everywhere.
Then the initial value problem

ẋ(t) = f(x, u, t|θ), x(t0) = x0

has a unique solution.

Finally, we will need to make some assumptions about the spline smooths minimizing

l(x) + PEN(x|λ).

Specifically, we will assume that the minimizers of these are well-defined and bounded
uniformly over λ. Guarantees on boundedness may be given whenever x′f(x, u, t|θ) < 0
for ‖x‖ greater than some K. This is true for reasonable parameter values in all systems
presented in this paper. More general characteristics of functions f for which these properties
hold are a matter of continued research. It seems reasonable, however, that they will hold
for systems of practical interest.
We will assume that the solutions of interest lie in the Hilbert space H = (W¹)ⁿ, the
direct sum of n copies of W¹, where W¹ is the Sobolev space of functions on the time-observation
interval [t1, t2] whose first derivatives are square integrable. The analysis will
examine both inner and outer optimization problems as λ → ∞. For the inner optimization,
we can show

Theorem 2.2. Let λk → ∞ and assume that

xk = argmin_{x∈(W¹)ⁿ} l(x) + λk PEN(x|θ)

is well defined and uniformly bounded over λ. Then xk converges to x∞ with PEN(x∞|θ) = 0.

Further, when PEN(x|θ) is given by (13), x∞ is the solution of the differential equations
(1) that is obtained by minimizing squared error over the choice of initial conditions. The
proof of this, and of the theorem below, is left to Appendix B.
Turning to the outer optimization, we obtain the following:

Theorem 2.3. Let X ⊂ (W¹)ⁿ and Θ ⊂ Rᵖ be bounded. Let

x_{λ,θ} = argmin_{x∈X} l(x) + λ PEN(x|θ)

be well defined for each λ and θ, define x_θ to be such that

l(x_θ) = min_{x: PEN(x|θ)=0} l(x),

and let

θ̂(λ) = argmin_θ l(x_{λ,θ}) and θ̂ = argmin_θ l(x_θ)

also be well defined for each λ. Then

lim_{λ→∞} θ̂(λ) = θ̂.

This theorem requires fairly strong assumptions about the regularity of solutions to the
inner optimization problem. Conditions on f that will provide this regularity are a matter
of ongoing research. We conjecture that it will hold for any f such that the parameter
estimation problem is well defined for exact solutions to (1).
Taken together, these theorems state that as λ is increased, the solutions obtained from
this scheme tend to those that would be obtained by estimating the parameters directly
while profiling out the initial conditions. In particular, the path of parameter values as λ
changes is continuous, motivating a successive approximation scheme. This analysis also
highlights the distinction between these methods and traditional smoothing; our penalties
are highly informative and it is, in fact, the data which play the minor role in finding a
solution.

2.10. Heuristics for robust estimates

We believe that our method provides a computationally tractable parameter estimate that
is numerically stable and easy to implement. It has also been our experience that these
estimates are robust with respect to starting values for the optimization procedure. Figure
5 plots a surface similar to that in Figure 2, but providing the squared error of the spline fit
as parameters a and b are varied. The plot shown is for λ = 10⁵; experimentally, as λ becomes
smaller, the surfaces become more regular.
We do not have a formal mathematical statement to indicate that these response surfaces
become more regular. As a heuristic, we have already noted that

l(x_{λ,θ}) ≤ l(x_θ)

for any x_θ that satisfies PEN(x_θ|θ) = 0. The squared error surface at λ is therefore an
under-estimate of the response surface for exact solutions to the differential equation. Moreover,
Appendix A provides an expression for the derivative of ĉ with respect to θ that is of the
form

−[A + λB]⁻¹ λC,

whose norm increases with λ. Thus these surfaces must be less steep as λ becomes smaller.
This, however, does not demonstrate the observation that they eventually become convex.
Our experimental evidence suggests that for small values of λ, parameter estimates tend
to be more variable and can become quite biassed. However, Theorem 2.3 demonstrates
that as λ becomes large, the estimates become approximately unbiassed. This suggests
that a scheme that uses a small value of λ to find a global optimum and then increases λ
incrementally may be useful for particularly challenging surfaces.


Fig. 5. FitzHugh-Nagumo response surfaces over a and b for λ = 10⁵. Values of the surface are
calculated using the same data as in Figure 2.

3. Simulated data examples

3.1. Fitting the FitzHugh-Nagumo equations

We set up simulated data for V from the FitzHugh-Nagumo equations as a mathematical
test-bed for our estimation procedure. Data were generated by taking solutions to the
equations with parameters {a, b, c} = {0.2, 0.2, 3} and initial conditions {V, R} = {−1, 1},
measured at intervals of 0.05 time units on the interval [0, 20]. Noise was then added to the
solution with standard deviation 0.5.
We estimated the smooths for each component using a third order B-spline basis with
knots at each data point. A five-point quadrature rule was used for the numerical integration.
Figure 6 gives quartiles of the parameter estimates for 60 simulations as λ is varied
from 10⁻² to 10⁵. It is apparent that there is a large amount of bias for small values of
λ. This is not surprising: the spline fit is affected very little by θ and, in being very
irregular, has high derivatives. Effectively, we select a fit that nearly interpolates the data
and then choose θ to try to mimic the fit as well as possible. However, as λ becomes large,
parameter estimates become nearly unbiased and tightly centered on the true parameter
values. Table 1 provides bias and variance estimates from 500 simulations at λ = 10⁴.
These are provided along with the estimate of standard error developed in Section 2.6 and
the usual Gauss-Newton standard error. We obtain good coverage properties for our estimates
of variance, while the Gauss-Newton estimates are somewhat less accurate. However,
the estimates based on Section 2.6 required 10 times the computer time of the standard
estimates, and we found that these could be unreliable for smaller sample sizes. Parameter
estimates for a and c are very close to the true values. There appears to be a small amount
of bias for the estimate of b, which we conjecture to be due to the use of a basis expansion.

Fig. 6. Quartiles of parameter estimates for the FitzHugh-Nagumo equations as λ is varied. Horizontal
lines represent the true parameter values.

Table 1. Summary statistics for parameter estimates for 500 simulated samples of data generated from the FitzHugh-Nagumo equations.

                    a         b         c
True value          0.2000    0.2000    3.0000
Mean value          0.2005    0.1984    2.9949
Std. Dev.           0.0149    0.0643    0.0264
Est. Std. Dev.      0.0143    0.0684    0.0278
GN Std. Dev.        0.0167    0.0595    0.0334
Bias                0.0005   −0.0016   −0.0051
Std. Err.           0.0007    0.0029    0.0012

Fig. 7. The solid curves are the two outputs, concentration C(t) and temperature T(t), defined
by the chemical reactor model (3). The dots associated with the temperature curve are simulated
measurements with an error level of about 20% of the variability in the smooth curve.

3.2. Fitting the tank reactor equations


The data in Figure 7 were simulated by adding zero mean Gaussian noise to numerical
estimates of the solutions C(t) and T(t) of the equations for values of the parameters given
in Marlin (2000): τ = 0.461, κ = 0.833, a = 1.678 and b = 0.5. The standard deviations of
the errors were 0.0223 for concentration and 0.79 for temperature, values which are about
20% of the standard deviations of the respective variable values, this being an error level
that is considered typical for such processes.
Temperature measurements are relatively cheap and accurate compared with those for
concentration, and the engineer may wish to base his estimates on these alone, in which case
concentration effectively becomes a functional latent variable. Naturally, it would be wise
to use data collected in the stable cool experimental regime in order to predict the response
in the hot reaction mode.
We now consider how well the parameters τ, κ and a and the equation solutions can
be estimated from the simulated data in Figure 7, keeping b fixed at 0.5 because we have
determined that the accurate estimation of all four parameters is impossible within the data
design described above.
We attempted to estimate these parameters using the nonlinear least squares or NLS
method described in Section 1.3.1. At the times of step changes in inputs, the approximation
to solutions using the Runge-Kutta algorithm was inaccurate and unstable with respect
to small changes in parameters. As a consequence, the estimation of the gradient of fit (9)
by differencing was so unstable that gradient-free optimization was impossible to realize.
When we estimated the gradient by solving the sensitivity equations (5) and (6), we could
only achieve optimization when starting values for parameters and initial values were much
closer to the optimal values than could be realized in practice. By contrast, our approach
Table 2. Summary statistics for parameter estimates for 1000 simulated samples. Results are for measurements on both concentration and temperature, and also for temperature measurements only. The estimate of the standard deviation of parameter values is by the delta method usual in nonlinear least squares analyses.

                         C and T data                  Only T data
                    τ        κ        a           τ        κ        a
True value          0.4610   0.8330   1.6780      0.4610   0.8330   1.6780
Mean value          0.4610   0.8349   1.6745      0.4613   0.8328   1.6795
Std. Dev.           0.0034   0.0057   0.0188      0.0084   0.0085   0.0377
Est. Std. Dev.      0.0035   0.0056   0.0190      0.0088   0.0090   0.0386
Bias                0.0000   0.0000  −0.0001      0.0003  −0.0002   0.0015
Std. Err.           0.0002   0.0004   0.0012      0.0005   0.0005   0.0024

was able to converge reliably from random starting values far removed from the optimal
estimates.
Table 2 displays bias and sampling precision results for parameter estimates by our
parameter cascade method for 1000 simulated samples for each of two measurement regimes:
both variables measured, and only temperature measured. The smoothing parameters λC
and λT were 100 and 10, respectively. The first two lines of the table compare the true
parameter values with the mean estimates, and the last two lines compare the biases of
the estimates with the standard errors of the mean estimates. We see that the estimation
biases can be considered negligible for both measurement situations. The third and fourth
lines compare the actual standard deviations of the parameter estimates with the values
estimated with the usual Gauss-Newton method, using the Jacobian with respect to the
parameters, and the two values seem sufficiently close for all three parameters to permit us
to trust the Gauss-Newton estimates. As one might expect, the main impact of having only
temperature measurements is to increase the sampling error in the parameter estimates.
The principal components of variation of the correlation matrix for the parameter estimates
derived from both variables measured accounted for 85.0, 14.0 and 1.0 percent of
the variance, respectively, indicating that, even after re-scaling the parameters, most of
the sampling variation in these three parameters is in only two dimensions. Moreover, the
scatter is essentially Gaussian in distribution, indicating that a further reduction in the
dimensionality of the parameter space using linear transformations might be worth considering.
In particular, the correlation between parameters τ and a is 0.94, suggesting that these may
be linked together without much loss in fitting power.
When the equations were solved using the parameters estimated from measurements on
both variables, the maximum absolute discrepancy between the fitted concentration curve
and the true curve was 0.11% of the true curve. The corresponding temperature discrepancy
was 0.03%. When these parameter estimates were used to calculate the solutions in the
hot mode of operation, the maximum concentration and temperature discrepancies became
1.72% and 0.05%, respectively. These error levels would be regarded as negligible by engineers
interested in forecasting the consequences of running the reactor in hot mode. Finally,
when the parameters were estimated from only the temperature data, the concentration and
temperature discrepancies became 0.10% and 0.04%, respectively, so that the quickly and
cheaply attainable measurements of temperature alone seem sufficient for identifying this
system in either mode of operation.

4. Working with real data

4.1. Modeling nylon production


This illustration concerns the decomposition of the polymer nylon into its constituents. If
water (W ) in the form of steam is bubbled through molten nylon (L) under high temper-
atures, W will split L into amine (A) and carboxyl (C) groups. To produce nylon, on
the other hand, A and C are mixed together under high temperatures, and their reaction
produces L and W, water then escaping as steam. These competing reactions are depicted
symbolically by A + C ⇌ L + W. In an experiment described in Zheng et al. (2005),
a mixture of steam and an inert gas was bubbled into molten nylon to maintain an ap-
proximately constant amount of W in the system, thereby causing A, C, L and W to move
towards equilibrium concentrations. Within each of the six experimental runs, j = 1, . . . , 6,
the pressure of the steam was first stepped down from its initial level at time τj1, then
stepped back up to its initial pressure at time τj2, where it remained until the end of the
experiment. The temperature Tj was kept constant within a run, but varied over runs, as
did the initial concentrations of A and C. The goal was to estimate the rate parameters
governing the chemical reactions of nylon production.
Samples of the molten mixture were extracted at irregularly spaced intervals, and the
concentrations of A and C were measured, although the more expensive measurements
of C were not made at all A measurement times. Figure 8 shows the data for the runs,
aligned by experiment within columns; vertical lines correspond to τj1 and τj2. Since
concentrations of A and C are expected to differ only by a vertical shift, their plots within
an experimental run are shifted versions of the same vertical spread. The temperature of
each run is given above the plots for each set of components.
The model for the reaction dynamics was

$$ DL = -DA = -DC = k_p 10^{-3} (CA - LW/K_a) \qquad (29) $$
$$ DW = k_p 10^{-3} (CA - LW/K_a) - k_m (W - W_{eq}). $$
The constant k_m = 24.3 was estimated in previous studies. The two step changes in the input
W_eq induce two discontinuities in the derivatives given in (29). Due to the mass balance
of the reactions, if A, C and W are known, then L can be removed algebraically from the
equations; consequently, we estimate only those three components. The reaction rate
parameter K_a, which depends on the temperature T, is

$$ K_a = \Big[ 1 + \frac{\gamma}{1000} W_{eq} C_T \Big] K_{a0} \exp\Big[ -\frac{\Delta H}{R}\Big( \frac{1}{T} - \frac{1}{T_0} \Big) \Big], $$

where the ideal gas constant R = 8.3145 × 10⁻³, C_T = 20.97 exp[−9.624 + 3613/T], and the
reference temperature T₀ = 549.15 was chosen to be in the middle of the range of experimen-
tally manipulated temperatures. The parameter vector to estimate is θ = [k_p, γ, K_{a0}, ΔH].
The scaling factor of 1000 was selected to bring all initial parameter absolute values into the
range [17, 78.1]. Further details concerning the experiment, and these and other analyses,
can be found in Zheng et al. (2005) and Campbell et al. (2006).
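To make the model concrete, the Python sketch below evaluates K_a and the right sides in (29). It is an illustration based on the reconstructed equations above, not the code used for our analyses, and the function and argument names are ours:

```python
import numpy as np

R = 8.3145e-3    # ideal gas constant, in the units used above
T0 = 549.15      # reference temperature (degrees Kelvin)

def K_a(T, W_eq, gamma, Ka0, dH):
    """Temperature-dependent equilibrium parameter K_a."""
    CT = 20.97 * np.exp(-9.624 + 3613.0 / T)
    return (1.0 + gamma / 1000.0 * W_eq * CT) \
        * Ka0 * np.exp(-(dH / R) * (1.0 / T - 1.0 / T0))

def nylon_rhs(A, C, L, W, W_eq, kp, Ka, km=24.3):
    """Right sides of (29): the A + C <-> L + W reaction rate plus the
    mass-transfer term driving W toward W_eq."""
    rate = kp * 1e-3 * (C * A - L * W / Ka)
    return -rate, -rate, rate, rate - km * (W - W_eq)   # DA, DC, DL, DW
```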
Since the input W_eq is a step function of time, it induces a discontinuity in the derivative
of the smooth for all three system outputs. This means that the linear differential operator
in (10) is not defined at the times {τj1, τj2}, and consequently we removed a small neigh-
borhood [τ − δ, τ + δ] around each of these points before computing the integral in PEN,
δ being 10⁻⁶ times the smallest interval between unique neighboring knots.
Fig. 8. Nylon components A, C and W along with the solution to the differential equations using
initial values estimated by the smooth for each of six experiments. The times of step change in input
pressures are marked by thin vertical lines. Horizontal axes indicate time in hours, and vertical axes
are concentrations in moles. The labels above each experiment (557, 557, 557, 554, 544 and 536)
indicate the constant temperature in degrees Kelvin.
We used a fifth-order B-spline basis with knots at each observation of A, and included
additional knots in order to ensure a knot rate of at least five per hour. Multiple knots
were placed at times τj1 and τj2 to allow a discontinuity in the smoothing function's first
derivative. The same basis was used for all three components within an experimental run.
We used weights w_A = 1/0.6 and w_C = 1/2.4, these being the reciprocals of the measurement
standard deviations.
The profile estimation process was run initially with λ = 10⁻⁴. Upon convergence of θ̂,
λ was increased by a factor of ten and the estimation process rerun using the most recent
estimates as initial parameter guesses, λ being stepped up in this way to 10³. Beginning
with such a small value of λ made the results robust to the choice of initial parameter guesses.
The parameter estimates along with 95% limits were: k_p = 20.59 ± 3.26, γ = 26.86 ± 6.82,
K_{a0} = 50.22 ± 6.34 and ΔH = 36.46 ± 7.57. The solutions to the differential equations
using the final parameter estimates θ̂ and the initial system states estimated by the data
smooth are shown in Figure 8. While the fit to the data is quite good overall, there does
seem to be positive autocorrelation of residuals within a run.
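This stepping strategy is simple to automate. The sketch below, illustrative Python rather than our Matlab code, wraps a hypothetical routine profile_estimate(theta, lam) that returns the parameter estimate minimizing the outer criterion for a fixed λ:

```python
import numpy as np

def lambda_continuation(profile_estimate, theta0, exponents=range(-4, 4)):
    """Re-estimate theta over an increasing ladder of smoothing parameters
    lambda = 10^k, seeding each fit with the previous converged estimate."""
    theta = np.asarray(theta0, dtype=float)
    for k in exponents:                        # 10^-4, 10^-3, ..., 10^3
        theta = profile_estimate(theta, 10.0 ** k)
    return theta
```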

4.2. Modeling flare dynamics in lupus


Lupus is an auto-immune disease characterized by sudden flares of symptoms caused by
the body's immune system attacking various organs. The name derives from a characteristic
rash on the face and chest, but the most serious effects tend to be in the kidneys.
The resulting nephritis and other symptoms can require immediate treatment, usually with
the drug Prednisone, a corticosteroid that itself has serious long-term side effects, such as
osteoporosis.
Various scales have been developed to measure the severity of symptoms, and Figure
9 shows the course of one of the more popular measures, the SLEDAI scale, for a patient
who experienced 48 flares over about 19 years before expiring. A flare event is commonly
defined as a change in scale value of at least 3 reaching a terminal value of at least 8, and
the figure shows flare events as heavy solid lines.
Because of the rapid onset of symptoms, and because the resulting treatment program
usually involves a SLEDAI assessment and a substantial increase in Prednisone dose, we
can pin down the time of a flare with some confidence. Thus, the set of flare times combined
with the accompanying SLEDAI score constitute a marked point process. Our goal here is
to illustrate a simple model for flare dynamics, or the time course of symptoms over the
onset period and the period of recovery. We hope that this model will also show how these
short-term flare dynamics interact with longer term trends in symptom severity.
We postulate that the immune system goes on the attack for a fixed period of τ years,
after which it returns to normal function due to treatment or normal recovery. For purposes
of this illustration, we take τ = 0.02 years, or about a week. We represent the time course
of attacks as a box function u(t) that is 0 during normal functioning and 1 during a flare.
We begin with the following simple linear differential equation for the symptom severity s(t)
at time t:

$$ Ds(t) = -\beta(t) s(t) + \alpha(t) u(t). \qquad (30) $$

This equation has the solution

$$ s(t) = C s_0(t) + s_0(t) \int_0^t \alpha(z) u(z) / s_0(z)\, dz, \qquad \text{where} \qquad s_0(t) = \exp\Big[ -\int_0^t \beta(z)\, dz \Big]. $$

Fig. 9. Symptom level s(t) for a patient suffering from lupus as assessed by the SLEDAI scale.
Changes in SLEDAI score corresponding to a flare are shown as heavy solid lines, and the remaining
changes are shown as dashed lines.
Function α(t) tracks the long-term trend in the severity of the disease over the 19 years,
and we will represent it as a linear combination of 8 cubic B-spline basis functions defined
by equally spaced knots, with about three years between knots. We expect that a flare
plays itself out over a much shorter time interval, so that α(t) cannot capture any aspect
of flare dynamics.
The flare dynamics depend directly on the weight function β(t). At the point where an
attack begins, a flare increases in intensity with a slope that is proportional to α, and rises
to a new level in roughly 4/β(t) time units if β(t) is approximately constant. Likewise,
when an attack ceases, s(t) decays exponentially to zero with rate β(t).
It seems reasonable to propose that β(t), as well as s(t), is affected by an attack. This is
because β(t) reflects to some extent the health of the individual, in the sense that responding
to an attack in various ways requires the body's resources, and these are normally at their
optimum level just before an attack. The response drains these resources, and thus the
attack is likely to reduce β(t). Consequently, we propose a second simple linear equation to
model this mechanism:

$$ D\beta(t) = -\gamma \beta(t) + \delta [1 - u(t)]. \qquad (31) $$
This model suggests that an attack results in an exponential decay in β with rate γ, and
that the cessation of the attack results in β(t) returning to its normal level in about 4/γ
time units. This normal level is defined by the gain K = δ/γ. However, if γ is large, the
model behaves like

$$ D\beta(t) = \delta [1 - u(t)], \qquad (32) $$
which is to say that β(t) increases and decreases linearly.

Fig. 10. The top panel shows the effect of a lupus attack on the weight function β(t) in differential
equation (30). The bottom panel shows the time course of the symptom severity function s(t). These
results are for parameters γ = δ = 4.

The top panel in Figure 10 shows how β(t) responds to an attack indicated by the box
function u(t) when γ = δ = 4, corresponding to a time to reach a new level of about 1 time
unit. The initial value β(0) = 0 in this plot. The bottom panel shows that the increase
in symptoms is nearly linear during the period of attack, but that, when the attack ceases,
the symptom level declines exponentially and takes around 3 time units to return to zero.
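The qualitative behavior in Figure 10 is easy to reproduce by direct numerical solution of (30) and (31). A minimal Python sketch, in which the attack window and the value α = 1 are choices of ours for illustration:

```python
from scipy.integrate import solve_ivp

def u(t, onset=1.0, offset=2.0):
    """Box-function attack indicator: 1 during the flare, 0 otherwise."""
    return 1.0 if onset <= t < offset else 0.0

def flare_rhs(t, y, alpha=1.0, gamma=4.0, delta=4.0):
    s, beta = y
    ds = -beta * s + alpha * u(t)                  # equation (30)
    dbeta = -gamma * beta + delta * (1.0 - u(t))   # equation (31)
    return [ds, dbeta]

# s(0) = 0 and beta(0) = 0, as in Figure 10; small steps resolve the box input.
sol = solve_ivp(flare_rhs, (0.0, 5.0), [0.0, 0.0], max_step=0.01)
s, beta = sol.y
```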
When we estimated this model with smoothing parameter value λ = 1, we obtained the
results shown in Figure 11. We found that parameter γ was indeed so high that the fitted
symptom rise was effectively linear, so we deleted γ and used the simpler equation (32).
This left only the constant δ to estimate for β(t), which now controls the rate of decrease
of symptoms after an attack ceases. This was estimated to be 1.54, corresponding to a
recovery period of about 4/1.54 = 2.6 years. Figure 11 shows the variation in α(t) as a
dashed line, indicating the long-term change in the intensity of the symptoms, which is
especially severe around years 6 and 11 and in the patient's last three years.
Our model provides two estimates of the symptom levels. The fitted function s(t) is
shown as a solid line. It was defined by positioning three knots at each of the flare onset
and offset times, in order to accommodate the sudden break in the first derivative of s(t),
and a single knot midway between two flare times. Order 4 B-splines were used, and this
corresponded to 290 knots and 292 basis functions in the expansion s(t) = c′φ(t). We
see that the fitted function seems to do a reasonable job of tracking the SLEDAI scores,
both in the period during and following an attack and also in terms of its long-term trend.
The model also defines the differential equation (30), and the solution to this equation
is shown as a dashed line. The discrepancy between the fit defined by the equation and the
smoothing function s(t) is most pronounced in years 8 to 11, where the equation solution
overestimates symptom level. In this region, new flares come too fast for recovery, and thus
build on each other. A more detailed view of the years from 14 to the end of the record is
given in Figure 12, and we see there that the ODE solution is less able than the smooth to
track the data when flares come close together.

Fig. 11. The circles indicate SLEDAI scores, the jagged solid line is the smoothing function s(t),
the dashed jagged line is the solution to the differential equation, and the smooth dashed line is the
smooth trend α(t).

Fig. 12. The data in Figure 11 plotted over the last five years of the record.
Nevertheless, the fit to the 208 SLEDAI scores achieved by an investment of 9 structural
parameters seems impressive for both the smoothing function s(t) and the equation solution,
taking into consideration that the SLEDAI score is a rather imprecise measure. Moreover,
the model goes a long way toward modeling the within-flare dynamics, the general trend in
the data, and the interaction between flare dynamics and trend.

5. Generalizations

The methodology presented here has been described for systems of ordinary differential
equations. However, the idea is much more general. In any parametric situation, if we can
define a PEN(x|θ) whose zero set is indexed by nuisance parameters, and the estimation of
θ is of interest, then similar methods may be applied. The generalizations of Theorems 2.2
and 2.3 are immediate.
In dynamical systems, we have already noted that an nth order system of the form

$$ D^n x(t) = f(x, Dx, \ldots, D^{n-1}x, u, t \mid \theta) \qquad (33) $$

may be reduced to a larger first-order system by defining the derivatives Dx up to D^{n−1}x
as new variables. Initial conditions need to be given for each of these new variables in
order to define a unique solution. Equation (33), however, can be used directly to define
a differential operator as in (10), saving the estimation of the derivative terms and all the
initial conditions. There is, of course, no need for n in (33) to be constant across components
of x, or to restrict to equations that may be written in the form (33).
A slight generalization of (33) is to allow n to be zero for some components, that is, to
define

$$ x_i(t) = f_i(x, u, t \mid \theta) \qquad (34) $$

for some components i. Such a system is labelled a differential-algebraic system, and
these have been used in chemical engineering (Biegler et al. (1986)). In general, a numerical
solution of such equations requires (34) to be solved numerically given the other values of
x. Our approach also allows (34) to appear as a term in PEN(x|θ), providing an easier
implementation of such systems.
A further generalization allows f to include lags; that is,

$$ Dx(t) = f(x(t - \tau_1), x(t - \tau_2), \ldots, x(t - \tau_3), u(t - \tau_4), t \mid \theta), \qquad (35) $$

in which case x(t) needs to be specified for all values in [t₀ − maxᵢ τᵢ, t₀] as initial conditions.
Again, in its generality, our methodology can include such systems without knowing initial
conditions. We can also estimate the τᵢ; an example of doing so in a simple system is given
in Koulis et al. (2006).
Although we have only considered ordinary differential equations in this paper, the
methodology extends naturally to partial differential equations in which a system x(s, t) is
described over spatial variables s as well as time t. In this case, the system may be described
in terms of both time and space derivatives:

$$ \frac{\partial x}{\partial t} = f\Big(x, \frac{\partial x}{\partial s}, u, t \mid \theta\Big). $$

The smooth x(s, t) now requires a multi-dimensional basis expansion, but the same estima-
tion and variance estimation schemes already discussed can be carried out in a straightfor-
ward manner.
Finally, we note that the data criterion (14) may be interpreted as the log likelihood for
an observation from the stochastic differential equation

$$ Dx(t) = f(x, u, t \mid \theta) + \frac{dW(t)}{dt}, $$

where W(t) is a d-dimensional Brownian motion. Thus, for a fixed λ interpreted as the
ratio of the Brownian motion variance to that of the observational error, the procedure
may be thought of as profiling an estimate of the realized Brownian motion. This notion is
appealing and suggests the use of alternative smoothing penalties based on the likelihood of
other stochastic processes. The flares in the lupus data, for example, could be considered to
be triggered by events in a Poisson process, and we expect this to be a fruitful area of future
research. However, this interpretation relies on the representation of dW(t)/dt in terms of
the discrepancy Dx(t) − f(x, u, t|θ), where x is given by a basis expansion (7). For nonlinear f,
the approximation properties of this discrepancy are not immediately clear. Moreover, it is
frequently the case that lack of fit in nonlinear dynamics is due more to misspecification
of the system under consideration than to stochastic inputs. We have therefore restricted
the discussion in this paper to deterministic systems.

6. Further issues in fitting differential equations

Although we have emphasized situations where initial and/or boundary values for a system
are not known, these can in fact be incorporated into the method as constraints on the
optimization of the inner criterion (14). These constraints can be incorporated explicitly by the
use of constrained optimization methods, or implicitly as data that receive large weights or
high prior probability through the specification of the density gᵢ(eᵢ|θᵢ) used in fitting criterion
(8). Integral constraints arise in statistical contexts such as the nonparametric estimation
of density functions, and these, too, can be accommodated without much additional effort.
Our experience with real-world data suggests that differential equation models are often
not well specified. This is particularly true in the biological sciences, where the first principles
from which models are commonly deduced tend to be less exact than those of physics
and chemistry. Such models are commonly selected only to provide the right qualitative
behavior, and may take values orders of magnitude different from the observed data.
There is therefore a great need for diagnostic tools for such systems, both to determine
the appropriateness of the model and, where it is inappropriate, to suggest ways in which
it may be modified. One approach is to estimate additional components of u that
provide good fits. These may then be correlated with observed values of the system, or
with external factors, to suggest new model formulae.
A typical industrial process involves many outputs and many inputs, with at least some
of each varying over time. Engineers plan experiments in which inputs are varied under
various regimes, including randomly or systematically timed changes and step, ramp, curvi-
linear and harmonic perturbations. Often the effects of input perturbations are localized
and also interactive. These considerations point to a wide spectrum of experimental de-
sign problems that statisticians need to address with the help of the system estimation
technology proposed here.
We can add to these design issues the choice of sampling rate and accuracy for mea-
surements taken on both input and output variables. For example, in stable systems minor
changes in the initial values of variables wash out quickly, but for systems that are close to
instability, estimating the initial state of the system requires a considerable amount of high-
quality data at start-up. Certain parameters may also affect system behavior only locally,
and therefore likewise require more information where it counts.

7. Conclusions

Differential equations have a long and illustrious history in mathematical modeling. How-
ever, there has been little development of statistical theory for estimating such models or
assessing their agreement with observational data. Our approach, a variety of collocation
method, combines the concepts of smoothing and estimation, providing a continuum of
trade-offs between fitting the data well and fidelity to the hypothesized differential equa-
tions. This is done by defining a fit through a penalized spline criterion for each value
of θ, and then estimating θ through a profiling scheme in which the fit is regarded as a
nuisance parameter.
We have found that our approach has a number of important advantages relative to
older methods such as nonlinear least squares (NLS). Parameter estimates can be obtained
from data on partially measured systems, a common situation where certain variables are
expensive to measure or are intrinsically latent. Comparisons with other approaches suggest
that the bias and sampling variance of these estimates are at least as good as those of the
alternatives, and rather better than those of methods, such as NLS, that add solution
approximation noise to the data noise. The sampling variation in the estimates is easily
estimated, and our simulation experiments and experience indicate good agreement between
these estimation precision indicators and the actual estimation accuracies. Our approach
also gains from not requiring a formulation of the dynamic model as an initial value problem
in situations where initial values are not available or not required.
On the computational side, the parameter cascade algorithm is as fast as or faster than
NLS and other approaches, and much faster than the Bayesian MCMC method, which has
comparable estimation efficiency. Unlike MCMC, the parameter cascade or generalized
profiling approach is relatively straightforward to deploy across a wide range of applications,
and the Matlab software described below merely requires the user to code up the various
partial derivatives involved, which are detailed in the Appendix. Finally, the method is
robust in the sense of converging over a wide range of starting parameter values, and
the possibility of beginning with smaller values of the smoothing parameters λ, so as to
work with a smooth criterion, and then stepping these values up toward those defining near
approximations to the ODE, further adds to the method's robustness.
The fitting of a compromise between an exact ODE solution and a simple smooth of the
data also adds a great deal of flexibility that should prove useful to users wishing to explore
variation in the data not representable in the ODE model. By comparing fits with smaller
values of λ to fits that are near or exact ODE solutions, the approach offers a diagnostic
capability that can guide further extensions and elaborations of the model.
The methodology that we have presented can be adapted to a large number of problems
that extend beyond ordinary differential equations, an area that we have yet to explore.
For example, it seems fairly straightforward to apply the parameter cascade approach to
the solution of partial or distributed differential equation systems, where a finite element
basis system would be more practical than a spline basis. Differential-algebraic equations,
integro-differential equations and other more general systems also seem approachable in this
way. Finally, we hope that this method will open wide a door for the statistical community
to an exciting range of data analysis opportunities in the burgeoning world of dynamic
systems modeling.

7.1. Software
All the results in this paper were generated in the Matlab computing language,
making use of functional data analysis software intended to complement Ramsay and Sil-
verman (2005). A set of software routines that may be applied to any differential equation
is available from the URL http://www.functionaldata.org.

References

Bates, D. M. and D. B. Watts (1988). Nonlinear Regression Analysis and Its Applications.
New York: Wiley.

Bellman, R. (1953). Stability Theory of Differential Equations. New York: Dover.

Biegler, L., J. J. Damiano, and G. E. Blau (1986). Nonlinear parameter estimation: a case
study comparison. AIChE Journal 32, 29–45.

Bock, H. G. (1983). Recent advances in parameter identification techniques for ODEs. In
P. Deuflhard and E. Hairer (Eds.), Numerical Treatment of Inverse Problems in Differ-
ential and Integral Equations, pp. 95–121. Basel: Birkhäuser.

Campbell, D. A., G. Hooker, J. Ramsay, K. McAuley, J. McLellan, and S. Varziri (2006).
Parameter estimation in differential equation models: An application to dynamic systems.
McGill University unpublished manuscript.

Cao, J. and J. O. Ramsay (2006). Parameter cascades and profiling in functional data
analysis. In press.

Cox, D. R. and D. V. Hinkley (1974). Theoretical Statistics. London: Chapman & Hall.

Deuflhard, P. and F. Bornemann (2000). Scientific Computing with Ordinary Differential
Equations. New York: Springer-Verlag.

Esposito, W. R. and C. Floudas (2000). Deterministic global optimization in nonlinear
optimal control problems. Journal of Global Optimization 17, 97–126.

FitzHugh, R. (1961). Impulses and physiological states in models of nerve membrane.
Biophysical Journal 1, 445–466.

Fussmann, G. F., S. P. Ellner, K. W. Shertzer, and N. G. J. Hairston (2000). Crossing the
Hopf bifurcation in a live predator-prey system. Science 290, 1358–1360.

Gelman, A., J. B. Carlin, H. S. Stern, and D. B. Rubin (2004). Bayesian Data Analysis. New
York: Chapman and Hall/CRC.

Hodgkin, A. L. and A. F. Huxley (1952). A quantitative description of membrane current
and its application to conduction and excitation in nerve. J. Physiol. 133, 444–479.

Jaeger, J., M. Blagov, D. Kosman, K. Kozlov, Manu, E. Myasnikova, S. Surkova, C. Vanario-
Alonso, M. Samsonova, D. Sharp, and J. Reinitz (2004). Dynamical analysis of regulatory
interactions in the gap gene system of Drosophila melanogaster. Genetics 167, 1721–1737.

Keilegom, I. V. and R. J. Carroll (2006). Backfitting versus profiling in general criterion
functions. Submitted to Statistica Sinica.

Koenker, R. and I. Mizera (2002). Elastic and plastic splines: Some experimental compar-
isons. In Y. Dodge (Ed.), Statistical Data Analysis based on the L1-norm and Related
Methods, pp. 405–414. Basel: Birkhäuser.

Koulis, T., J. O. Ramsay, and D. Levitin (2006). Input-output systems in psychoacoustics.
Submitted to Psychometrika.

Li, Z., M. Osborne, and T. Prvan (2005). Parameter estimation in ordinary differential
equations. IMA Journal of Numerical Analysis 25, 264–285.

Marlin, T. E. (2000). Process Control. New York: McGraw-Hill.

Müller, T. G. and J. Timmer (2004). Parameter identification techniques for partial differ-
ential equations. International Journal of Bifurcation and Chaos 14, 2053–2060.

Nagumo, J. S., S. Arimoto, and S. Yoshizawa (1962). An active pulse transmission line
simulating a nerve axon. Proceedings of the IRE 50, 2061–2070.

Poyton, A. A., M. S. Varziri, K. B. McAuley, P. J. McLellan, and J. O. Ramsay (2006).
Parameter estimation in continuous dynamic models using principal differential analysis.
Computers and Chemical Engineering 30, 698–708.

Ramsay, J. O. and B. W. Silverman (2005). Functional Data Analysis. New York: Springer.

Seber, G. A. F. and C. J. Wild (1989). Nonlinear Regression. New York: Wiley.

Tjoa, I.-B. and L. Biegler (1991). Simultaneous solution and optimization strategies for
parameter estimation of differential-algebraic equation systems. Industrial and Engineering
Chemistry Research 30, 376–385.

Varah, J. M. (1982). A spline least squares method for numerical parameter estimation in
differential equations. SIAM Journal on Scientific Computing 3, 28–46.

Voss, H., M. M. Bünner, and M. Abel (1998). Identification of continuous spatiotemporal
systems. Physical Review E 57, 2820–2823.

Wilson, H. R. (1999). Spikes, Decisions and Actions: The Dynamical Foundations of Neuro-
science. Oxford: Oxford University Press.

Zheng, W., K. McAuley, K. Marchildon, and K. Z. Yao (2005). Effects of end-group balance
on melt-phase nylon 612 polycondensation: Experimental study and mathematical model.
Ind. Eng. Chem. Res. 44, 2675–2686.
Appendices

A. Matrix calculations for profiling

The calculations used throughout this paper have been based on matrices defined in terms
of derivatives of F and H with respect to θ and c. In many cases, these matrices are non-
trivial to calculate, and expressions for their entries are derived here. For these calculations,
we have assumed that the outer criterion F is a straightforward weighted sum of squared
errors and depends on θ only through x.

A.1. Inner optimization


Using a Gauss-Newton method, we require the derivative of the fit at each observation
point:

$$ \frac{d x_i(t_{i,k})}{d c_i} = \phi_i(t_{i,k}), $$

where φᵢ(t_{i,k}) is the vector corresponding to the evaluation at t_{i,k} of all the basis functions
used to represent xᵢ; the gradient of xᵢ with respect to c_j, j ≠ i, is zero.

A numerical quadrature rule allows the set of errors to be augmented with the evaluation
of the penalty at the quadrature points t_q, weighted by the quadrature weights v_q:

$$ (\lambda_i v_q)^{1/2} \big( D x_i(t_q) - f_i(x(t_q), u(t_q), t_q \mid \theta) \big). $$

Each of these then has derivative with respect to c_j:

$$ (\lambda_i v_q)^{1/2} \Big( I(i = j)\, D\phi_i(t_q) - \frac{d f_i}{d x_j}\, \phi_j(t_q) \Big), $$

and the augmented errors and gradients can be used in a Gauss-Newton scheme. I(·) is used
as the indicator function of its argument.
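In outline, the augmented residual vector can be assembled as follows. This Python sketch treats a single component, with basis matrices evaluated at the observation and quadrature points; all names are ours, and scipy's default finite differences stand in for the analytic Jacobian above:

```python
import numpy as np
from scipy.optimize import least_squares

def inner_residuals(c, y, Phi_obs, Phi_q, DPhi_q, vq, lam, f, theta):
    """Data-fit errors stacked on top of the quadrature-weighted ODE penalty."""
    fit_err = y - Phi_obs @ c                       # y_i - x(t_i)
    Dx_q = DPhi_q @ c                               # Dx at quadrature points
    pen_err = np.sqrt(lam * vq) * (Dx_q - f(Phi_q @ c, theta))
    return np.concatenate([fit_err, pen_err])

# Usage: c_hat = least_squares(inner_residuals, c0,
#            args=(y, Phi_obs, Phi_q, DPhi_q, vq, lam, f, theta)).x
```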

A.2. Outer optimization


As in the inner optimization, in employing a Gauss-Newton scheme we merely need to
write a gradient for the point-wise fit with respect to the parameters:

$$ \frac{d x(t_{i,k})}{d\theta} = \frac{d x(t_{i,k})}{d c}\, \frac{d c}{d \theta}, $$

where dx(t_{i,k})/dc has already been calculated and

$$ \frac{d c}{d \theta} = -\left[ \frac{d^2 H}{d c^2} \right]^{-1} \frac{d^2 H}{d c\, d\theta} $$

by the implicit function theorem.
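Numerically this amounts to a single linear solve. A small Python sketch, with hypothetical array names for the two Hessian blocks:

```python
import numpy as np

def fit_gradient_wrt_theta(Phi_obs, d2H_dc2, d2H_dcdtheta):
    """dx/dtheta at the observation points: (dx/dc) times dc/dtheta,
    with dc/dtheta obtained from the implicit function theorem."""
    dc_dtheta = -np.linalg.solve(d2H_dc2, d2H_dcdtheta)
    return Phi_obs @ dc_dtheta
```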


The Hessian matrix d²H/dc² may be expressed in block form, the (i, j)th block corre-
sponding to the cross-derivatives with respect to the coefficients in the ith and jth components
of x. This block's (p, q)th entry is given by:

$$
\Big( \sum_{k=1}^{n_i} \phi_{i,p}(t_{i,k})\,\phi_{j,q}(t_{i,k}) + \lambda_i \int D\phi_{i,p}(t)\, D\phi_{j,q}(t)\, dt \Big) I(i = j)
- \lambda_i \int D\phi_{i,p}(t)\, \frac{df_i}{dx_j}\, \phi_{j,q}(t)\, dt
- \lambda_j \int \phi_{i,p}(t)\, \frac{df_j}{dx_i}\, D\phi_{j,q}(t)\, dt
$$
$$
+ \int \phi_{i,p}(t) \Big[ \sum_{k=1}^{n} \lambda_k \Big( (f_k - Dx_k(t)) \frac{d^2 f_k}{dx_i\, dx_j} + \frac{df_k}{dx_i} \frac{df_k}{dx_j} \Big) \Big] \phi_{j,q}(t)\, dt,
$$

with the integrals evaluated by numerical integration. The arguments of f_k(x, u, t|θ) have
been dropped in the interests of notational legibility.
We can similarly express the cross-derivatives d²H/dc dθ as a block vector, the ith block
corresponding to the coefficients in the basis expansion for the ith component of x. The
pth entry of this block can now be expressed as:

$$
-\lambda_i \int \frac{df_i}{d\theta}\, D\phi_{i,p}(t)\, dt
+ \int \sum_{k=1}^{n} \lambda_k \Big( (f_k - Dx_k(t)) \frac{d^2 f_k}{dx_i\, d\theta} + \frac{df_k}{dx_i} \frac{df_k}{d\theta} \Big) \phi_{i,p}(t)\, dt.
$$

A.3. Estimating the variance of θ̂

The variance of the parameter estimates is calculated using

$$ \frac{d\hat\theta}{dy} = -\left[ \frac{d^2 H}{d\theta^2} \right]^{-1} \frac{d^2 H}{d\theta\, dy}, $$

where

$$ \frac{d^2 H}{d\theta^2} = \frac{\partial^2 H}{\partial\theta^2} + 2\frac{\partial^2 H}{\partial\theta\,\partial c}\frac{\partial \hat c}{\partial\theta} + \left(\frac{\partial \hat c}{\partial\theta}\right)^{\!\prime} \frac{\partial^2 H}{\partial c^2}\, \frac{\partial \hat c}{\partial\theta} + \frac{\partial H}{\partial c}\,\frac{\partial^2 \hat c}{\partial\theta^2}, \qquad (36) $$

and

$$ \frac{d^2 H}{d\theta\, dy} = \frac{\partial^2 H}{\partial\theta\,\partial y} + \frac{\partial^2 H}{\partial\theta\,\partial c}\frac{\partial \hat c}{\partial y} + \left(\frac{\partial \hat c}{\partial\theta}\right)^{\!\prime} \frac{\partial^2 H}{\partial c\,\partial y} + \left(\frac{\partial \hat c}{\partial\theta}\right)^{\!\prime} \frac{\partial^2 H}{\partial c^2}\frac{\partial \hat c}{\partial y} + \frac{\partial H}{\partial c}\,\frac{\partial^2 \hat c}{\partial\theta\,\partial y}. \qquad (37) $$
The formulas (36) and (37) for d²H/dθ² and d²H/dθ dy involve the terms ∂ĉ/∂y, ∂²ĉ/∂θ²
and ∂²ĉ/∂θ∂y. In the following, we derive their analytical formulas by the Implicit Func-
tion Theorem. We introduce the following convention, which is called Einstein summation
notation: if a Latin index is repeated in a term, then it is understood as a summation with
respect to that index. For instance, instead of the expression ∑ᵢ aᵢxᵢ, we merely write aᵢxᵢ.
Similarly to the deduction for dĉ/dθ, we obtain the formula for ∂ĉ/∂y by applying the
Implicit Function Theorem:

$$ \frac{\partial \hat c}{\partial y} = -\left[ \frac{\partial^2 J(c|\theta, y)}{\partial c^2} \right]^{-1} \frac{\partial^2 J(c|\theta, y)}{\partial c\,\partial y}\, \Bigg|_{\hat c}. \qquad (38) $$
By taking the second derivative of both sides of the identity ∂J(c|θ, y)/∂c |_{ĉ} = 0 with
respect to θ and y_k, we derive:

$$ \frac{\partial^3 J}{\partial c\,\partial\theta\,\partial y_k}\bigg|_{\hat c} + \frac{\partial^3 J}{\partial c\,\partial c_i\,\partial\theta}\bigg|_{\hat c}\frac{\partial \hat c_i}{\partial y_k} + \frac{\partial^3 J}{\partial c^2\,\partial y_k}\bigg|_{\hat c}\frac{\partial \hat c}{\partial\theta} + \frac{\partial^3 J}{\partial c^2\,\partial c_i}\bigg|_{\hat c}\frac{\partial \hat c_i}{\partial y_k}\frac{\partial \hat c}{\partial\theta} + \frac{\partial^2 J}{\partial c^2}\bigg|_{\hat c}\frac{\partial^2 \hat c}{\partial\theta\,\partial y_k} = 0. \qquad (39) $$

Solving for ∂²ĉ/∂θ∂y_k, we obtain the second derivative of ĉ with respect to θ and y_k:

$$ \frac{\partial^2 \hat c}{\partial\theta\,\partial y_k} = -\left[\frac{\partial^2 J}{\partial c^2}\right]^{-1}\left( \frac{\partial^3 J}{\partial c\,\partial\theta\,\partial y_k} + \frac{\partial^3 J}{\partial c\,\partial c_i\,\partial\theta}\frac{\partial \hat c_i}{\partial y_k} + \frac{\partial^3 J}{\partial c^2\,\partial y_k}\frac{\partial \hat c}{\partial\theta} + \frac{\partial^3 J}{\partial c^2\,\partial c_i}\frac{\partial \hat c_i}{\partial y_k}\frac{\partial \hat c}{\partial\theta} \right)\Bigg|_{\hat c}. \qquad (40) $$
c2 yk c c2 ci c yk

2c

2

Similar to the deduction of 2 c
/yk , the second partial derivative of c with respect
to and j is:
1 3
2c
2 J(c|, y) J(c|, y) 3 J(c|, y)
ci
= 2 +
j c cj c cci c j
c


3
J(c|, y) c 3
J(c|, y) ci
c
+ + (41)
c2 j c c2 ci c j

When estimating ODEs, we define J(c|θ, y) as (14) and H(θ, ĉ(θ)|y) as (8), and further
write the above formulas in terms of the basis functions in Φ and the functions f on the
right side of the differential equation. For instance, d²H/dc² is a block-diagonal matrix
with the ith block being wᵢΦᵢ(tᵢ)′Φᵢ(tᵢ), and dF/dc is a block vector containing blocks
wᵢΦᵢ(tᵢ)′(yᵢ − xᵢ(tᵢ)).
The three-dimensional array ∂³J/∂c ∂c_p ∂c_q can be written in the same block-vector form
as ∂²J/∂c, with the uth entry of the kth block given by

$$
\int \sum_{l=1}^{n} \lambda_l \Big( \frac{d^2 f_l}{dx_i\, dx_j}\frac{df_l}{dx_k} + \frac{d^2 f_l}{dx_i\, dx_k}\frac{df_l}{dx_j} + \frac{d^2 f_l}{dx_j\, dx_k}\frac{df_l}{dx_i} \Big)\, \phi_{i,p}(t)\,\phi_{j,q}(t)\,\phi_{k,u}(t)\, dt
+ \int \sum_{l=1}^{n} \lambda_l\, (f_l - Dx_l(t))\, \frac{d^3 f_l}{dx_i\, dx_j\, dx_k}\, \phi_{i,p}(t)\,\phi_{j,q}(t)\,\phi_{k,u}(t)\, dt
$$
$$
- \lambda_i \int \frac{d^2 f_i}{dx_j\, dx_k}\, D\phi_{i,p}(t)\,\phi_{j,q}(t)\,\phi_{k,u}(t)\, dt
- \lambda_j \int \frac{d^2 f_j}{dx_i\, dx_k}\, \phi_{i,p}(t)\, D\phi_{j,q}(t)\,\phi_{k,u}(t)\, dt
- \lambda_k \int \frac{d^2 f_k}{dx_i\, dx_j}\, \phi_{i,p}(t)\,\phi_{j,q}(t)\, D\phi_{k,u}(t)\, dt,
$$
assuming c_p is a coefficient in the basis representation of x_i and c_q corresponds to x_j. The
array ∂³J/∂c ∂θ_i ∂θ_j is also expressed in the same block form, with entry p in the kth block
being:
$$
\int \sum_{l=1}^{n} \lambda_l \Big( \frac{d^2 f_l}{d\theta_i\, d\theta_j}\frac{df_l}{dx_k} + \frac{d^2 f_l}{d\theta_i\, dx_k}\frac{df_l}{d\theta_j} + \frac{d^2 f_l}{d\theta_j\, dx_k}\frac{df_l}{d\theta_i} \Big)\, \phi_{k,p}(t)\, dt
+ \int \sum_{l=1}^{n} \lambda_l\, (f_l - Dx_l(t))\, \frac{d^3 f_l}{dx_k\, d\theta_i\, d\theta_j}\, \phi_{k,p}(t)\, dt
- \lambda_k \int \frac{d^2 f_k}{d\theta_i\, d\theta_j}\, D\phi_{k,p}(t)\, dt.
$$

The array ∂³J/∂c ∂c_p ∂θ_i is in the same block form, with the qth entry of the jth block being:

$$
\int \sum_{l=1}^{n} \lambda_l \Big( \frac{d^2 f_l}{d\theta_i\, dx_j}\frac{df_l}{dx_k} + \frac{d^2 f_l}{d\theta_i\, dx_k}\frac{df_l}{dx_j} + \frac{d^2 f_l}{dx_j\, dx_k}\frac{df_l}{d\theta_i} \Big)\, \phi_{k,p}(t)\,\phi_{j,q}(t)\, dt
+ \int \sum_{l=1}^{n} \lambda_l\, (f_l - Dx_l(t))\, \frac{d^3 f_l}{dx_j\, dx_k\, d\theta_i}\, \phi_{k,p}(t)\,\phi_{j,q}(t)\, dt
$$
$$
- \lambda_j \int \frac{d^2 f_j}{d\theta_i\, dx_k}\, D\phi_{j,q}(t)\,\phi_{k,p}(t)\, dt
- \lambda_k \int \frac{d^2 f_k}{d\theta_i\, dx_j}\, \phi_{j,q}(t)\, D\phi_{k,p}(t)\, dt,
$$

where cp corresponds to the basis representation of xk .


Similar calculations give the matrix d²H/dθ dy explicitly as:

$$
\frac{d^2 H}{d\theta\, dy} = \left(\frac{d\hat c}{d\theta}\right)^{\!T}\frac{\partial^2 H}{\partial c\,\partial y} + \left(\frac{d\hat c}{d\theta}\right)^{\!T}\frac{\partial^2 H}{\partial c^2}\,\frac{d\hat c}{dy}
- \frac{\partial H}{\partial c}\left[\frac{\partial^2 J}{\partial c^2}\right]^{-1}\left\{ \sum_{p,q=1}^{N} \frac{\partial^3 J}{\partial c\,\partial c_p\,\partial c_q}\left(\frac{d\hat c_p}{d\theta}\right)^{\!T}\frac{d\hat c_q}{dy} + \sum_{p=1}^{N} \frac{\partial^3 J}{\partial c\,\partial c_p\,\partial\theta}\,\frac{d\hat c_p}{dy} \right\},
$$

with dĉ/dy given by

$$ -\left[\frac{\partial^2 J}{\partial c^2}\right]^{-1}\frac{\partial^2 J}{\partial c\,\partial y}, $$

and ∂²J/∂c∂y being block diagonal with the ith block containing wᵢΦᵢ(tᵢ).

B. Proofs of theorems in section 2.9

B.0.1. Preliminaries
The following theorem is a well-known consequence of the method of Lagrange multipliers:

Theorem B.1. Suppose that x*_λ minimizes F(x) + λP(x); then x*_λ minimizes F(z) for
z ∈ {x : P(x) ≤ P(x*_λ)}. Moreover, for λ′ > λ, P(x*_{λ′}) ≤ P(x*_λ).

Two corollaries follow immediately:

Corollary B.1. For λ′ > λ, F(x*_{λ′}) ≥ F(x*_λ).

Corollary B.2. If there exists x such that P(x) = 0, then P(x*_λ) → 0 as λ → ∞.
The proofs of Theorems 2.2 and 2.3 rely heavily on the following:

Theorem B.2. Let X and Y be metric spaces with X closed and bounded. Let g(x, λ) :
X × Y → R be uniformly continuous in x and λ, and such that

x(λ) = argmin_{x∈X} g(x, λ)

is well defined for each λ. Then x(λ) : Y → X is continuous.

We begin with two lemmas:

Lemma B.1. Let X be a closed and bounded metric space. Suppose that

x* = argmin_{x∈X} g(x)     (42)

is well defined and g(x) is continuous. Then for every ε > 0 there is a δ > 0 such that
‖x − x*‖ > ε implies g(x) − g(x*) > δ, for all x ∈ X.

Proof. Assume that the statement is not true; that is, for some ε > 0 we can find
a sequence x_n ∈ X such that ‖x_n − x*‖ > ε but |g(x_n) − g(x*)| < 1/n. Since X is closed
and bounded, it is compact, and there exists a subsequence x_{n′} → x† ≠ x* for some x†.
By the continuity of g, we have g(x†) = g(x*), violating the assumption that (42) is well
defined.

Lemma B.2. Let X and Y be metric spaces and let g(x, λ) : X × Y → R be bounded below
and uniformly continuous in λ and x; then j(λ) = min_{x∈X} g(x, λ) is a continuous function.

Proof. Assume j(λ) is not continuous; that is, for some λ ∈ Y there is an ε > 0 such that
for every δ > 0 there is a λ′ with |λ − λ′| < δ and |j(λ) − j(λ′)| > ε.
By the uniformity of g in λ across x, we can choose δ′ > 0 so that |g(x, λ) − g(x, λ′)| < ε/3
for all x whenever |λ − λ′| < δ′. By assumption, we can find some such λ′ so that |j(λ) − j(λ′)| >
ε. Without loss of generality, let j(λ) < j(λ′).
Now choose x ∈ X so that g(x, λ) < j(λ) + ε/3. Then g(x, λ′) < j(λ) + 2ε/3 < j(λ′),
contradicting j(λ′) = min_{x∈X} g(x, λ′).

Using these, we can now prove Theorem B.2:

Proof. Let ε > 0. By Lemma B.1 there exists δ′ > 0 such that

g(x, λ) − g(x(λ), λ) < δ′ implies ‖x − x(λ)‖ < ε.

By Lemma B.2, j(λ) is continuous. Since g(x, λ) is uniformly continuous, we can choose δ
so that |λ − λ′| < δ implies |j(λ) − j(λ′)| < δ′/3 and, for all x, |g(x, λ) − g(x, λ′)| < δ′/3,
giving

|g(x(λ), λ) − g(x(λ′), λ)| ≤ |g(x(λ), λ) − g(x(λ′), λ′)| + |g(x(λ′), λ′) − g(x(λ′), λ)|
= |j(λ) − j(λ′)| + |g(x(λ′), λ′) − g(x(λ′), λ)|
< δ′/3 + δ′/3
< δ′,

from which we conclude ‖x(λ) − x(λ′)‖ < ε.

B.0.2. The inner optimization


Proof of Theorem 2.2:

Proof. We first note that we can re-express x_k as

x_k = argmin_{x∈(W¹)ⁿ} (1 − ρ_k) l(x) + ρ_k PEN(x|θ)     (43)

where ρ_k = λ_k/(1 + λ_k) → 1.

By the continuity of point-wise evaluation in (W¹)ⁿ, l(x) is a continuous functional of
x, and PEN(x|θ) is similarly continuous. Since the x_k lie in a bounded set X, we have that

l(x) < F and PEN(x|θ) < P

for all x ∈ X. Both l(x) and PEN(x|θ) are bounded below by 0, and we note that

g(x, ρ) = (1 − ρ) l(x) + ρ PEN(x|θ)

is uniformly bounded on C by 0 and F + P, and is therefore uniformly continuous in ρ
and x.

By Theorem B.2,

x(ρ) = argmin_{x∈C} g(x, ρ)

is a continuous function from (0, 1) to (W¹)ⁿ. Since ‖x(ρ)‖ is bounded by assumption, it
is uniformly continuous. Since ρ_n → 1 is convergent, we must have that x_n = x(ρ_n) → x*.
By the continuity of PEN(x|θ), PEN(x*|θ) = 0.

Note that if it were possible to define x(ρ) as a continuous function on [0, 1], the need for
a bound on ‖x(ρ)‖ would be removed. However, since we do not expect g(x, 1) = PEN(x|θ) to
have a well-defined minimum, boundedness is required to ensure that x(ρ) has a limit as
ρ → 1.
We can now go further when PEN(x|θ) is given by (13), by specifying that x* is the
solution of the differential equations (1) that is obtained by minimizing squared error over
the choice of initial conditions. To see this, we observe that Theorem 2.1 ensures that

Dx(t) = f(x, u, t|θ), with x(t₀) = x₀,

specifies a unique element of (W¹)ⁿ. Let

F = {x : PEN(x|θ) = 0};

then

lim_{k→∞} l(x_k) ≤ min_{x∈F} l(x).

Since l is a continuous functional on (W¹)ⁿ, and PEN(x*|θ) = 0, we must have

l(x*) = min_{x∈F} l(x).

By the assumption that the solutions to (43) are well defined and bounded, this specifies a
unique set of initial conditions x*₀ such that

Dx*(t) = f(x*, u, t|θ), with x*(t₀) = x*₀.

B.0.3. The outer optimization


Proof of Theorem 2.3:

Proof. The proof is very similar to that of Theorem 2.2. Setting ρ = λ/(1 + λ),

g(x, θ, ρ) = (1 − ρ) l(x) + ρ PEN(x|θ)

is uniformly continuous in ρ, θ and x. As observed in the proof of Theorem 2.2, x_{θ,λ} can be
equivalently written as

x_{θ,ρ} = argmin_{x∈(W¹)^k} g(x, θ, ρ)

with ρ = λ/(1 + λ). By Theorem B.2, x_{θ,ρ} is continuous in θ and ρ. On the set X, therefore,
l(x) is uniformly continuous in x, and x_{θ,ρ} is uniformly continuous in θ and ρ; l(x_{θ,ρ}) is
therefore uniformly continuous in θ and ρ. Under the assumption that θ(ρ) is well defined
for each ρ, we can now employ Theorem B.2 again to conclude that θ(ρ) is continuous in ρ,
and the boundedness of θ provides uniform continuity.

Assume that

θ⁰ = lim_{ρ→1} θ(ρ) ≠ θ*,

and in particular that ‖θ⁰ − θ*‖ > ε. From Lemma B.1 there must exist a δ > 0 such that

l(x_{θ*}) < l(x_θ) − δ

for all ‖θ − θ*‖ > ε/2. Since θ(ρ) is uniformly continuous in ρ, there is some a such that
‖θ(ρ) − θ*‖ > ε/2 for all ρ > a. Now, by the uniform continuity of l(x_{θ,ρ}) in θ and ρ, we
can choose a₁ > a so that

l(x_{θ(ρ),ρ}) − l(x_{θ(ρ)}) < δ/3

for all ρ > a₁. By the same uniform continuity, we can choose ρ > a₁ so that

|l(x_{θ*,ρ}) − l(x_{θ*})| < δ/2,

giving

l(x_{θ*,ρ}) < l(x_{θ(ρ),ρ}),

contradicting the definition of θ(ρ). Finally, note that ρ is also uniformly continuous in λ,
and lim_{λ→∞} ρ(λ) = 1.
