Ullmann's Encyclopedia of Industrial Chemistry
Chapter, December 2006
DOI: 10.1002/14356007.b01_01.pub2
Source: https://www.researchgate.net/publication/228031678

Mathematics in Chemical Engineering


Bruce A. Finlayson, Department of Chemical Engineering, University of Washington, Seattle, Washington,
United States (Chap. 1, 2, 3, 4, 5, 6, 7, 8, 9, 11 and 12)
Lorenz T. Biegler, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States (Chap. 10)
Ignacio E. Grossmann, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States (Chap. 10)

1. Solution of Equations
1.1. Matrix Properties
1.2. Linear Algebraic Equations
1.3. Nonlinear Algebraic Equations
1.4. Linear Difference Equations
1.5. Eigenvalues
2. Approximation and Integration
2.1. Introduction
2.2. Global Polynomial Approximation
2.3. Piecewise Approximation
2.4. Quadrature
2.5. Least Squares
2.6. Fourier Transforms of Discrete Data
2.7. Two-Dimensional Interpolation and Quadrature
3. Complex Variables
3.1. Introduction to the Complex Plane
3.2. Elementary Functions
3.3. Analytic Functions of a Complex Variable
3.4. Integration in the Complex Plane
3.5. Other Results
4. Integral Transforms
4.1. Fourier Transforms
4.2. Laplace Transforms
4.3. Solution of Partial Differential Equations by Using Transforms
5. Vector Analysis
6. Ordinary Differential Equations as Initial Value Problems
6.1. Solution by Quadrature
6.2. Explicit Methods
6.3. Implicit Methods
6.4. Stiffness
6.5. Differential Algebraic Systems
6.6. Computer Software
6.7. Stability, Bifurcations, Limit Cycles
6.8. Sensitivity Analysis
6.9. Molecular Dynamics
7. Ordinary Differential Equations as Boundary Value Problems
7.1. Solution by Quadrature
7.2. Initial Value Methods
7.3. Finite Difference Method
7.4. Orthogonal Collocation
7.5. Orthogonal Collocation on Finite Elements
7.6. Galerkin Finite Element Method
7.7. Cubic B-Splines
7.8. Adaptive Mesh Strategies
7.9. Comparison
7.10. Singular Problems and Infinite Domains
8. Partial Differential Equations
8.1. Classification of Equations
8.2. Hyperbolic Equations
8.3. Parabolic Equations in One Dimension
8.4. Elliptic Equations
8.5. Parabolic Equations in Two or Three Dimensions
8.6. Special Methods for Fluid Mechanics
8.7. Computer Software
9. Integral Equations
9.1. Classification
9.2. Numerical Methods for Volterra Equations of the Second Kind
9.3. Numerical Methods for Fredholm, Urysohn, and Hammerstein Equations of the Second Kind
9.4. Numerical Methods for Eigenvalue Problems
9.5. Green's Functions
9.6. Boundary Integral Equations and Boundary Element Method
10. Optimization
10.1. Introduction
10.2. Gradient-Based Nonlinear Programming
10.3. Optimization Methods without Derivatives
10.4. Global Optimization
10.5. Mixed Integer Programming
10.6. Dynamic Optimization
10.7. Development of Optimization Models
11. Probability and Statistics
11.1. Concepts
11.2. Sampling and Statistical Decisions
11.3. Error Analysis in Experiments
11.4. Factorial Design of Experiments and Analysis of Variance
12. Multivariable Calculus Applied to Thermodynamics
12.1. State Functions
12.2. Applications to Thermodynamics
12.3. Partial Derivatives of All Thermodynamic Functions
13. References

Ullmann's Modeling and Simulation
© 2007 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim
ISBN: 978-3-527-31605-2

Symbols

Variables
a     1, 2, or 3 for planar, cylindrical, spherical geometry
ai    acceleration of i-th particle in molecular dynamics
A     cross-sectional area of reactor; Helmholtz free energy in thermodynamics
Bi    Biot number
Bim   Biot number for mass
c     concentration
Cp    heat capacity at constant pressure
Cs    heat capacity of solid
Cv    heat capacity at constant volume
Co    Courant number
D     diffusion coefficient
Da    Damköhler number
De    effective diffusivity in porous catalyst
E     efficiency of tray in distillation column
F     molar flow rate into a chemical reactor
G     Gibbs free energy in thermodynamics
hp    heat transfer coefficient
H     enthalpy in thermodynamics
J     mass flux
k     thermal conductivity; reaction rate constant
ke    effective thermal conductivity of porous catalyst
kg    mass transfer coefficient
K     chemical equilibrium constant
L     thickness of slab; liquid flow rate in distillation column; length of pipe for flow in pipe
mi    mass of i-th particle in molecular dynamics
M     holdup on tray of distillation column
n     power-law exponent in viscosity formula for polymers
p     pressure
Pe    Peclet number
q     heat flux
Q     volumetric flow rate; rate of heat generation for heat transfer problems
r     radial position in cylinder
ri    position of i-th particle in molecular dynamics
R     radius of reactor or catalyst pellet
Re    Reynolds number
S     entropy in thermodynamics
Sh    Sherwood number
t     time
T     temperature
u     velocity
U     internal energy in thermodynamics
vi    velocity of i-th particle in molecular dynamics
V     volume of chemical reactor; vapor flow rate in distillation column; potential energy in molecular dynamics; specific volume in thermodynamics
x     position
z     position from inlet of reactor

Greek symbols
α     thermal diffusivity
δ     Kronecker delta
Δ     sampling rate
ε     porosity of catalyst pellet
η     viscosity in fluid flow; effectiveness factor for reaction in a catalyst pellet
η0    zero-shear rate viscosity
φ     Thiele modulus
ϕ     void fraction of packed bed
λ     time constant in polymer flow
ρ     density
ρs    density of solid
τ     shear stress
μ     viscosity

Special symbols
|    subject to
:    mapping. For example, h: R^n → R^m states that functions h map n real numbers into m real numbers. There are m functions h written in terms of n variables
∈    member of
→    maps into

1. Solution of Equations

Mathematical models of chemical engineering systems can take many forms: they can be sets of algebraic equations, differential equations, and/or integral equations. Mass and energy balances of chemical processes typically lead to large sets of algebraic equations:

a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2

Mass balances of stirred tank reactors may lead to ordinary differential equations:

dy/dt = f[y(t)]

Radiative heat transfer may lead to integral equations:

y(x) = g(x) + ∫_0^1 K(x, s) f(s) ds

Even when the model is a differential equation or integral equation, the most basic step in the algorithm is the solution of sets of algebraic equations. The solution of sets of algebraic equations is the focus of Chapter 1.

A single linear equation is easy to solve for either x or y:

y = ax + b

If the equation is nonlinear,

f(x) = 0

it may be more difficult to find the x satisfying this equation. These problems are compounded when there are more unknowns, leading to simultaneous equations. If the unknowns appear in a linear fashion, then an important consideration is the structure of the matrix representing the equations; special methods are presented here for special structures. They are useful because they increase the speed of solution. If the unknowns appear in a nonlinear fashion, the problem is much more difficult. Iterative techniques must be used (i.e., make a guess of the solution and try to improve the guess). An important question then is whether such an iterative scheme converges. Other important types of equations are linear difference equations and eigenvalue problems, which are also discussed.

1.1. Matrix Properties

A matrix is a set of real or complex numbers arranged in a rectangular array.

A = ( a11  a12  ...  a1n
      a21  a22  ...  a2n
      ...
      am1  am2  ...  amn )

The numbers aij are the elements of the matrix A, or (A)ij = aij. The transpose of A is (A^T)ij = aji.

The determinant of a square matrix A is

|A| = | a11  a12  ...  a1n
        a21  a22  ...  a2n
        ...
        an1  an2  ...  ann |

If the i-th row and j-th column are deleted, a new determinant is formed having n − 1 rows and columns. This determinant is called the minor of aij, denoted Mij. The cofactor Aij of the element aij is the signed minor of aij determined by

Aij = (−1)^(i+j) Mij

The value of |A| is given by

|A| = Σ_{j=1}^{n} aij Aij   or   |A| = Σ_{i=1}^{n} aij Aij

where the elements aij must be taken from a single row or column of A.

If all determinants formed by striking out whole rows or whole columns of order greater than r are zero, but there is at least one determinant of order r which is not zero, the matrix has rank r.
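The cofactor expansion just defined translates directly into code. A minimal sketch in Python (the function names are ours; the recursive expansion costs O(n!) and is only sensible for small matrices, since in practice determinants are computed from an LU decomposition, as described in Section 1.2):

```python
def minor(A, i, j):
    """Matrix left after deleting row i and column j (the minor's matrix)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row:
    |A| = sum over j of a_1j * (-1)**(1+j) * M_1j (0-based indices here)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))
```

For example, `det([[1, 2], [3, 4]])` expands to 1·4 − 2·3 = −2, and a matrix with two proportional rows gives zero, as the determinant rules above require.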

The value of a determinant is not changed if the rows and columns are interchanged. If the elements of one row (or one column) of a determinant are all zero, the value of the determinant is zero. If the elements of one row or column are multiplied by the same constant, the determinant is the previous value times that constant. If two adjacent rows (or columns) are interchanged, the value of the new determinant is the negative of the value of the original determinant. If two rows (or columns) are identical, the determinant is zero. The value of a determinant is not changed if one row (or column) is multiplied by a constant and added to another row (or column).

A matrix is symmetric if

aij = aji

and it is positive definite if

x^T A x = Σ_{i=1}^{n} Σ_{j=1}^{n} aij xi xj ≥ 0

for all x and the equality holds only if x = 0.

If the elements of A are complex numbers, A* denotes the complex conjugate in which (A*)ij = a*ij. If A^T = A* the matrix is Hermitian.

The inverse of a matrix can also be used to solve sets of linear equations. The inverse is a matrix such that when A is multiplied by its inverse the result is the identity matrix, a matrix with 1.0 along the main diagonal and zero elsewhere.

A A^{-1} = I

If A^T = A^{-1} the matrix is orthogonal.

Matrices are added and subtracted element by element.

A + B is aij + bij

Two matrices A and B are equal if aij = bij.

Special relations are

(AB)^{-1} = B^{-1} A^{-1},  (AB)^T = B^T A^T
(A^{-1})^T = (A^T)^{-1},  (ABC)^{-1} = C^{-1} B^{-1} A^{-1}

A diagonal matrix is zero except for elements along the diagonal.

aij = { aii,  i = j
        0,    i ≠ j }

A tridiagonal matrix is zero except for elements along the diagonal and one element to the right and left of the diagonal.

aij = { 0    if j < i − 1
        0    if j > i + 1
        aij  otherwise }

Block diagonal and pentadiagonal matrices also arise, especially when solving partial differential equations in two and three dimensions.

QR Factorization of a Matrix. If A is an m × n matrix with m ≥ n, there exists an m × m unitary matrix Q = [q1, q2, ..., qm] and an m × n right-triangular matrix R such that A = QR. The QR factorization is frequently used in the actual computations when the other transformations are unstable.

Singular Value Decomposition. If A is an m × n matrix with m ≥ n and rank k ≤ n, consider the two following matrices.

A A*  and  A* A

An m × m unitary matrix U is formed from the eigenvectors ui of the first matrix.

U = [u1, u2, ..., um]

An n × n unitary matrix V is formed from the eigenvectors vi of the second matrix.

V = [v1, v2, ..., vn]

Then the matrix A can be decomposed into

A = U Σ V*

where Σ is a k × k diagonal matrix with diagonal elements dii = σi > 0 for 1 ≤ i ≤ k. The vectors ui for k+1 ≤ i ≤ m and vi for k+1 ≤ i ≤ n are eigenvectors associated with the eigenvalue zero; the eigenvalues for 1 ≤ i ≤ k are σi^2. The values of σi are called the singular values of the matrix A. If A is real, then U and V are real, and hence orthogonal matrices. The value of the singular value decomposition comes when a process is represented by a linear transformation and the elements of A, aij, are the contribution to output i for a particular variable as input variable j. The input may be the size of a disturbance, and the output is the gain [1]. If the rank is less than n, not all the variables are independent and they cannot all be controlled. Furthermore, if the singular values are widely separated, the process is sensitive to small changes in the elements of the matrix and the process will be difficult to control.
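For a 2 × 2 real matrix the statement that the σi^2 are the eigenvalues of A^T A can be checked by hand, since the characteristic equation of a symmetric 2 × 2 matrix is a quadratic. A small pure-Python illustration (the function name is ours):

```python
import math

def singular_values_2x2(A):
    """Singular values of a real 2x2 matrix A, computed as the square
    roots of the eigenvalues of the symmetric matrix A^T A."""
    a, b = A[0]
    c, d = A[1]
    # A^T A = [[p, q], [q, r]]
    p = a * a + c * c
    q = a * b + c * d
    r = b * b + d * d
    # Eigenvalues of [[p, q], [q, r]] from the characteristic quadratic:
    # lambda = mean +/- sqrt(mean^2 - det), with mean = (p+r)/2.
    mean = 0.5 * (p + r)
    disc = math.sqrt(max(0.0, mean * mean - (p * r - q * q)))
    return (math.sqrt(mean + disc), math.sqrt(max(0.0, mean - disc)))
```

The ratio of the two values returned is the condition number defined in Section 1.2, and a widely separated pair signals the sensitivity discussed above.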

1.2. Linear Algebraic Equations

Consider the n × n linear system

a11 x1 + a12 x2 + ... + a1n xn = f1
a21 x1 + a22 x2 + ... + a2n xn = f2
...
an1 x1 + an2 x2 + ... + ann xn = fn

In this equation a11, ..., ann are known parameters, f1, ..., fn are known, and the unknowns are x1, ..., xn. The values of all unknowns that satisfy every equation must be found. This set of equations can be represented as follows:

Σ_{j=1}^{n} aij xj = fi   or   A x = f

The most efficient method for solving a set of linear algebraic equations is to perform a lower-upper (LU) decomposition of the corresponding matrix A. This decomposition is essentially a Gaussian elimination, arranged for maximum efficiency [2, 3]. The LU decomposition writes the matrix as

A = L U

The U is upper triangular; it has zero elements below the main diagonal and possibly nonzero values along the main diagonal and above it (see Fig. 1). The L is lower triangular. It has the value 1 in each element of the main diagonal, nonzero values below the diagonal, and zero values above the diagonal (see Fig. 1). The original problem can be solved in two steps:

L y = f,  U x = y   solves   A x = L U x = f

Each of these steps is straightforward because the matrices are upper triangular or lower triangular. When f is changed, the last steps can be done without recomputing the LU decomposition. Thus, multiple right-hand sides can be computed efficiently. The number of multiplications and divisions necessary to solve for m right-hand sides is:

Operation count = (1/3) n^3 − (1/3) n + m n^2

The determinant is given by the product of the diagonal elements of U. This should be calculated as the LU decomposition is performed. If the value of the determinant is a very large or very small number, it can be divided or multiplied by 10 to retain accuracy in the computer; the scale factor is then accumulated separately.

Figure 1. Structure of L and U matrices

The condition number κ can be defined in terms of the singular value decomposition as the ratio of the largest dii to the smallest dii (see above). It can also be expressed in terms of the norm of the matrix:

κ(A) = ||A|| ||A^{-1}||

where the norm is defined as

||A|| ≡ sup_{x≠0} ||A x|| / ||x|| = max_k Σ_{j=1}^{n} |ajk|

If this number is infinite, the set of equations is singular. If the number is too large, the matrix is said to be ill-conditioned.
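The two-step solve L y = f, U x = y can be sketched directly. A minimal Doolittle LU decomposition in Python, without pivoting, so it assumes no zero pivot arises (the names are ours; production codes add pivoting, as discussed below):

```python
def lu_decompose(A):
    """Doolittle LU decomposition A = L U with ones on the diagonal of L.
    No pivoting: assumes every U[i][i] encountered is nonzero."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):            # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):        # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, f):
    """Solve A x = f in two triangular sweeps: L y = f, then U x = y."""
    n = len(f)
    y = [0.0] * n
    for i in range(n):                   # forward substitution
        y[i] = f[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):         # back substitution
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

Once L and U are stored, `lu_solve` can be called repeatedly for different right-hand sides, which is the reuse property noted above; the product of the diagonal elements of U gives the determinant.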

Calculation of the condition number can be lengthy so another criterion is also useful: compute the ratio of the largest to the smallest pivot and make judgments on the ill-conditioning based on that.

When a matrix is ill-conditioned the LU decomposition must be performed by using pivoting (or the singular value decomposition described above). With pivoting, the order of the elimination is rearranged. At each stage, one looks for the largest element (in magnitude); the next stages of the elimination are on the row and column containing that largest element. The largest element can be obtained from only the diagonal entries (partial pivoting) or from all the remaining entries. Even if the matrix is nonsingular, Gaussian elimination (or LU decomposition) could fail if a zero value were to occur along the diagonal and were to be a pivot. With full pivoting, however, the Gaussian elimination (or LU decomposition) cannot fail if the matrix is nonsingular.

The Cholesky decomposition can be used for real, symmetric, positive definite matrices. This algorithm saves on storage (by about a factor of 2) and reduces the number of multiplications (by a factor of 2), but adds n square roots.

The linear equations are solved by

x = A^{-1} f

Generally, the inverse is not used in this way because it requires three times more operations than solving with an LU decomposition. However, if an inverse is desired, it is calculated most efficiently by using the LU decomposition and then solving

A x^(i) = b^(i),   b_j^(i) = { 0  j ≠ i
                               1  j = i }

Then set

A^{-1} = [x^(1) | x^(2) | x^(3) | ... | x^(n)]

Solutions of Special Matrices. Special matrices can be handled even more efficiently. A tridiagonal matrix is one with nonzero entries along the main diagonal, and one diagonal above and below the main one (see Fig. 2). The corresponding set of equations can then be written as

ai x_{i-1} + bi xi + ci x_{i+1} = di

The LU decomposition algorithm for solving this set is

b'_1 = b_1
for k = 2, n do
    a'_k = a_k / b'_{k-1}
    b'_k = b_k − a'_k c_{k-1}
enddo
d'_1 = d_1
for k = 2, n do
    d'_k = d_k − a'_k d'_{k-1}
enddo
x_n = d'_n / b'_n
for k = n − 1, 1 do
    x_k = (d'_k − c_k x_{k+1}) / b'_k
enddo

The number of multiplications and divisions for a problem with n unknowns and m right-hand sides is

Operation count = 2 (n − 1) + m (3 n − 2)

If

|bi| > |ai| + |ci|

no pivoting is necessary. For solving two-point boundary value problems and partial differential equations this is often the case.

Figure 2. Structure of tridiagonal matrices
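The tridiagonal (Thomas) algorithm above can be sketched as follows in Python; variable names follow the text (a, b, c, d), with a[0] and c[n−1] unused, and the diagonal-dominance condition |bi| > |ai| + |ci| is taken for granted so no pivoting is performed:

```python
def thomas(a, b, c, d):
    """Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i], i = 0..n-1,
    by the tridiagonal LU (Thomas) algorithm. a[0] and c[n-1] are ignored."""
    n = len(d)
    bp = list(b)                       # b' : modified diagonal
    dp = list(d)                       # d' : modified right-hand side
    for k in range(1, n):              # forward elimination
        m = a[k] / bp[k - 1]           # multiplier a'_k
        bp[k] = bp[k] - m * c[k - 1]
        dp[k] = dp[k] - m * dp[k - 1]
    x = [0.0] * n
    x[n - 1] = dp[n - 1] / bp[n - 1]   # back substitution
    for k in range(n - 2, -1, -1):
        x[k] = (dp[k] - c[k] * x[k + 1]) / bp[k]
    return x
```

For the diagonally dominant system with b = (2, 2, 2), off-diagonals −1, and d = (1, 0, 1), which is the kind of matrix a finite difference discretization produces, the solution is x = (1, 1, 1).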

Sparse matrices are ones in which the majority of elements are zero. If the zero entries occur in special patterns, efficient techniques can be used to exploit the structure, as was done above for tridiagonal matrices, block tridiagonal matrices, arrow matrices, etc. These structures typically arise from numerical analysis applied to solve differential equations. Other problems, such as modeling chemical processes, lead to sparse matrices but without such a neatly defined structure, just a lot of zeros in the matrix. For matrices such as these, special techniques must be employed: efficient codes are available [4]. These codes usually employ a symbolic factorization, which must be repeated only once for each structure of the matrix. Then an LU factorization is performed, followed by a solution step using the triangular matrices. The symbolic factorization step has significant overhead, but this is rendered small and insignificant if matrices with exactly the same structure are to be used over and over [5].

The efficiency of a technique for solving sets of linear equations obviously depends greatly on the arrangement of the equations and unknowns because an efficient arrangement can reduce the bandwidth, for example. Techniques for renumbering the equations and unknowns arising from elliptic partial differential equations are available for finite difference methods [6] and for finite element methods [7].

Solutions with Iterative Methods. Sets of linear equations can also be solved by using iterative methods; these methods have a rich historical background. Some of them are discussed in Chapter 8 and include Jacobi, Gauss-Seidel, and overrelaxation methods. As the speed of computers increases, direct methods become preferable for the general case, but for large three-dimensional problems iterative methods are often used.

The conjugate gradient method is an iterative method that can solve a set of n linear equations in n iterations. The method primarily requires multiplying a matrix by a vector, which can be done very efficiently on parallel computers: for sparse matrices this is a viable method. The original method was devised by Hestenes and Stiefel [8]; however, more recent implementations use a preconditioned conjugate gradient method because it converges faster, provided a good preconditioner can be found. The system of n linear equations

A x = f

where A is symmetric and positive definite, is to be solved. A preconditioning matrix M is defined in such a way that the problem

M t = r

is easy to solve exactly (M might be diagonal, for example). Then the preconditioned conjugate gradient method is

Guess x_0
Calculate r_0 = f − A x_0
Solve M t_0 = r_0, and set p_0 = t_0
for k = 1, n (or until convergence)
    a_k = (r_k^T t_k) / (p_k^T A p_k)
    x_{k+1} = x_k + a_k p_k
    r_{k+1} = r_k − a_k A p_k
    Solve M t_{k+1} = r_{k+1}
    b_k = (r_{k+1}^T t_{k+1}) / (r_k^T t_k)
    p_{k+1} = t_{k+1} + b_k p_k
    test for convergence
enddo

Note that the entire algorithm involves only matrix multiplications. The generalized minimal residual method (GMRES) is an iterative method that can be used for nonsymmetric systems and is based on a modified Gram-Schmidt orthonormalization. Additional information, including software for a variety of methods, is available [9-13].

In dimensional analysis if the dimensions of each physical variable Pj (there are n of them) are expressed in terms of fundamental measurement units mj (such as time, length, mass; there are m of them):

[Pj] = m1^{α1j} m2^{α2j} ... mm^{αmj}

then a matrix can be formed from the αij. If the rank of this matrix is r, n − r independent dimensionless groups govern that phenomenon. In chemical reaction engineering the chemical reaction stoichiometry can be written as

Σ_{i=1}^{n} νij Ci = 0,  j = 1, 2, ..., m

where there are n species and m reactions. Then if a matrix is formed from the coefficients νij, which is an n × m matrix, and the rank of the matrix is r, there are r independent chemical reactions. The other m − r reactions can be deduced from those r reactions.
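The preconditioned conjugate gradient algorithm above can be sketched in Python using a diagonal (Jacobi) preconditioner M = diag(A), the example choice the text mentions; the helper functions and names are ours:

```python
def pcg(A, f, tol=1e-10, maxiter=None):
    """Preconditioned conjugate gradient for symmetric positive definite A,
    following the algorithm in the text with M = diag(A)."""
    n = len(f)
    matvec = lambda M_, v: [sum(M_[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n                                     # guess x_0 = 0
    r = [fi - Axi for fi, Axi in zip(f, matvec(A, x))]  # r_0 = f - A x_0
    t = [ri / A[i][i] for i, ri in enumerate(r)]        # solve M t_0 = r_0
    p = list(t)
    for _ in range(maxiter or n):
        Ap = matvec(A, p)
        alpha = dot(r, t) / dot(p, Ap)                  # a_k
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r_new = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        t_new = [ri / A[i][i] for i, ri in enumerate(r_new)]  # M t = r
        if dot(r_new, r_new) < tol * tol:               # convergence test
            r, t = r_new, t_new
            break
        beta = dot(r_new, t_new) / dot(r, t)            # b_k
        p = [ti + beta * pi for ti, pi in zip(t_new, p)]
        r, t = r_new, t_new
    return x
```

In exact arithmetic the loop terminates in at most n iterations, as stated above; for large sparse systems only the matrix-vector product need ever be formed.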

1.3. Nonlinear Algebraic Equations

Consider a single nonlinear equation in one unknown,

f(x) = 0

In Microsoft Excel, roots are found by using Goal Seek or Solver. Assign one cell to be x, put the equation for f(x) in another cell, and let Goal Seek or Solver find the value of x making the equation cell zero. In MATLAB, the process is similar except that a function (m-file) is defined and the command fzero(f, x0) provides the solution x, starting from the initial guess x0.

Iterative methods applied to single equations include the successive substitution method

x^{k+1} = x^k + β f(x^k) ≡ g(x^k)

and the Newton-Raphson method.

x^{k+1} = x^k − f(x^k) / (df/dx)(x^k)

The former method converges if the derivative of g(x) is bounded [3]:

|dg/dx (x)| ≤ μ  for |x| < h

The latter method is based on a Taylor series of the equation about the k-th iterate:

f(x^{k+1}) = f(x^k) + (df/dx)|_{x^k} (x^{k+1} − x^k) + (1/2) (d^2 f/dx^2)|_{x^k} (x^{k+1} − x^k)^2 + ...

The second and higher-order terms are neglected and f(x^{k+1}) = 0 to obtain the method. Bounds of the form

|df/dx| > 0,  |x^1 − x^0| = |f(x^0) / (df/dx)(x^0)| ≤ b,  |d^2 f/dx^2| ≤ c

govern the behavior: convergence of the Newton-Raphson method depends on the properties of the first and second derivative of f(x) [3, 14]. In practice the method may not converge unless the initial guess is good, or it may converge for some parameters and not others. Unfortunately, when the method is nonconvergent the results look as though a mistake occurred in the computer programming; distinguishing between these situations is difficult, so careful programming and testing are required. If the method converges the difference between successive iterates is something like 0.1, 0.01, 0.0001, 10^{-8}. The error (when it is known) goes the same way; the method is said to be quadratically convergent when it converges. If the derivative is difficult to calculate a numerical approximation may be used.

(df/dx)|_{x^k} = [f(x^k + ε) − f(x^k)] / ε

In the secant method the same formula is used as for the Newton-Raphson method, except that the derivative is approximated by using the values from the last two iterates:

(df/dx)|_{x^k} = [f(x^k) − f(x^{k-1})] / (x^k − x^{k-1})

This is equivalent to drawing a straight line through the last two iterate values on a plot of f(x) versus x. The Newton-Raphson method is equivalent to drawing a straight line tangent to the curve at the last x. In the method of false position (or regula falsi), the secant method is used to obtain x^{k+1}, but the previous value is taken as either x^{k-1} or x^k. The choice is made so that the function evaluated for that choice has the opposite sign to f(x^{k+1}). This method is slower than the secant method, but it is more robust and keeps the root between two points at all times. In all these methods, appropriate strategies are required for bounds on the function or when df/dx = 0. Brent's method combines bracketing, bisection, and an inverse quadratic interpolation to provide a method that is fast and guaranteed to converge, if the root can be bracketed initially [15, p. 251].

In the method of bisection, if a root lies between x1 and x2 because f(x1) < 0 and f(x2) > 0, then the function is evaluated at the center, xc = 0.5 (x1 + x2). If f(xc) > 0, the root lies between x1 and xc. If f(xc) < 0, the root lies between xc and x2. The process is then repeated. If f(xc) = 0, the root is xc. If f(x1) > 0 and f(x2) > 0, more than one root may exist between x1 and x2 (or no roots).

For systems of equations the Newton-Raphson method is widely used, especially for equations arising from the solution of differential equations.

fi({xj}) = 0,  1 ≤ i, j ≤ n,  where {xj} = (x1, x2, ..., xn) = x
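Before turning to systems, the single-equation iterations described above can be sketched in Python: Newton-Raphson with an analytic derivative, and bisection with the sign test just stated (the function names are ours):

```python
def newton(f, dfdx, x0, tol=1e-12, maxiter=50):
    """Newton-Raphson: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(maxiter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:      # successive iterates agree
            break
    return x

def bisect(f, x1, x2, tol=1e-12):
    """Bisection: requires f(x1) < 0 < f(x2); halves the bracket each pass."""
    f1 = f(x1)
    while x2 - x1 > tol:
        xc = 0.5 * (x1 + x2)
        fc = f(xc)
        if fc == 0.0:
            return xc
        if f1 * fc < 0.0:        # root lies between x1 and xc
            x2 = xc
        else:                    # root lies between xc and x2
            x1, f1 = xc, fc
    return 0.5 * (x1 + x2)
```

Applied to f(x) = x^2 − 2, Newton's steps shrink like 0.1, 0.01, 0.0001, ..., the quadratic convergence described above, while bisection only halves the interval each pass.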
Then, an expansion in several variables occurs:

fi(x^{k+1}) = fi(x^k) + Σ_{j=1}^{n} (∂fi/∂xj)|_{x^k} (xj^{k+1} − xj^k) + ...

The Jacobian matrix is defined as

Jij^k = (∂fi/∂xj)|_{x^k}

and the Newton-Raphson method is

Σ_{j=1}^{n} Jij^k (xj^{k+1} − xj^k) = −fi(x^k)

For convergence, the norm of the inverse of the Jacobian must be bounded, the norm of the function evaluated at the initial guess must be bounded, and the second derivative must be bounded [14, p. 115], [3, p. 12].

A review of the usefulness of solution methods for nonlinear equations is available [16]. This review concludes that the Newton-Raphson method may not be the most efficient. Broyden's method approximates the inverse to the Jacobian and is a good all-purpose method, but a good initial approximation to the Jacobian matrix is required. Furthermore, the rate of convergence deteriorates for large problems, for which the Newton-Raphson method is better. Brown's method [16] is very attractive, whereas Brent's is not worth the extra storage and computation. For large systems of equations, efficient software is available [11-13].

Homotopy methods can be used to ensure finding the solution when the problem is especially complicated. Suppose an attempt is made to solve f(x) = 0, and it fails; however, g(x) = 0 can be solved easily, where g(x) is some function, perhaps a simplification of f(x). Then, the two functions can be embedded in a homotopy by taking

h(x, t) = t f(x) + (1 − t) g(x)

In this equation, h can be a vector of n functions for problems involving n variables; then x is a vector of length n. Then h(x, t) = 0 can be solved for t = 0 and t gradually changed until at t = 1, h(x, 1) = f(x). If the Jacobian of h with respect to x is nonsingular on the homotopy path (as t varies), the method is guaranteed to work. In classical methods, the interval from t = 0 to t = 1 is broken up into N subdivisions. Set Δt = 1/N and solve for t = 0, which is easy by the choice of g(x). Then set t = Δt and use the Newton-Raphson method to solve for x. Since the initial guess is presumably pretty good, this has a high chance of being successful. That solution is then used as the initial guess for t = 2 Δt and the process is repeated by moving stepwise to t = 1. If the Newton-Raphson method does not converge, then Δt must be reduced and a new solution attempted.

Another way of using homotopy is to create an ordinary differential equation by differentiating the homotopy equation along the path (where h = 0).

dh[x(t), t]/dt = (∂h/∂x) (dx/dt) + ∂h/∂t = 0

This can be expressed as an ordinary differential equation for x(t):

(∂h/∂x) (dx/dt) = −∂h/∂t

If Euler's method is used to solve this equation, a value x^0 is used, and dx/dt from the above equation is solved for. Then

x^{1,0} = x^0 + Δt (dx/dt)

is used as the initial guess and the homotopy equation is solved for x^1.

(∂h/∂x) (x^{1,k+1} − x^{1,k}) = −h(x^{1,k}, t)

Then t is increased by Δt and the process is repeated.

In arc-length parameterization, both x and t are considered parameterized by a parameter s, which is thought of as the arc length along a curve. Then the homotopy equation is written along with the arc-length equation.

(∂h/∂x) (dx/ds) + (∂h/∂t) (dt/ds) = 0
(dx^T/ds) (dx/ds) + (dt/ds)^2 = 1

The initial conditions are

x(0) = x^0
t(0) = 0

The advantage of this approach is that it works even when the Jacobian of h becomes singular, because the full matrix is rarely singular. Illustrations applied to chemical engineering are available [17]. Software to perform these computations is available (called LOCA) [18].
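The classical stepwise homotopy can be sketched for a single equation in Python. Here the choices of f, g, and the number of subdivisions N are illustrative: g(x) = x − 2 is trivially solvable, as the text suggests, and the previous root seeds each Newton-Raphson solve:

```python
def homotopy_solve(f, dfdx, g, dgdx, x0, nsteps=10):
    """Classical homotopy continuation for one equation:
    h(x, t) = t*f(x) + (1-t)*g(x), stepping t from 0 to 1 in nsteps
    increments, with Newton-Raphson at each fixed t."""
    x = x0                          # root of g(x) = 0, the t = 0 problem
    for i in range(1, nsteps + 1):
        t = i / nsteps
        for _ in range(50):         # Newton-Raphson on h(., t) = 0
            h = t * f(x) + (1 - t) * g(x)
            dh = t * dfdx(x) + (1 - t) * dgdx(x)
            step = h / dh
            x -= step
            if abs(step) < 1e-12:
                break
    return x
```

With f(x) = x^3 − 2x − 5 and g(x) = x − 2, the root tracks smoothly from x = 2 at t = 0 to the root of f at t = 1; if a step failed to converge, Δt would be reduced, as described above.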

1.4. Linear Difference Equations

Difference equations arise in chemical engineering from staged operations, such as distillation or extraction, as well as from differential equations modeling adsorption and chemical reactors. The value of a variable in the n-th stage is denoted by a subscript n. For example, if y_{n,i} denotes the mole fraction of the i-th species in the vapor phase on the n-th stage of a distillation column, x_{n,i} is the corresponding liquid mole fraction, R the reflux ratio (ratio of liquid returned to the column to product removed from the condenser), and K_{n,i} the equilibrium constant, then the mass balances about the top of the column give

  y_{n+1,i} = R/(R+1) x_{n,i} + 1/(R+1) x_{0,i}

and the equilibrium equation gives

  y_{n,i} = K_{n,i} x_{n,i}

If these are combined,

  K_{n+1,i} x_{n+1,i} = R/(R+1) x_{n,i} + 1/(R+1) x_{0,i}

is obtained, which is a linear difference equation. This particular problem is quite complicated, and the interested reader is referred to [19, Chap. 6]. However, the form of the difference equation is clear. Several examples are given here for solving difference equations. More complete information is available in [20].

An equation in the form

  x_{n+1} − x_n = f_{n+1}

can be solved by

  x_n = x_0 + Σ_{i=1}^{n} f_i

Usually, difference equations are solved analytically only for linear problems. When the coefficients are constant and the equation is linear and homogeneous, a trial solution of the form

  x_n = φ^n

is attempted; φ is raised to the power n. For example, the difference equation

  c x_{n−1} + b x_n + a x_{n+1} = 0

coupled with the trial solution would lead to the equation

  a φ² + b φ + c = 0

This gives

  φ_{1,2} = [−b ± (b² − 4ac)^{1/2}]/(2a)

and the solution to the difference equation is

  x_n = A φ₁^n + B φ₂^n

where A and B are constants that must be specified by boundary conditions of some kind.

When the equation is nonhomogeneous, the solution is represented by the sum of a particular solution and a general solution to the homogeneous equation:

  x_n = x_{n,P} + x_{n,H}

The general solution is the one found for the homogeneous equation, and the particular solution is any solution to the nonhomogeneous difference equation. This can be found by methods analogous to those used to solve differential equations: the method of undetermined coefficients and the method of variation of parameters. The last method applies to equations with variable coefficients, too. For a problem such as

  x_{n+1} − f_n x_n = 0,  x_0 = c

the general solution is

  x_n = c Π_{i=1}^{n} f_{i−1}

This can then be used in the method of variation of parameters to solve the equation

  x_{n+1} − f_n x_n = g_n

1.5. Eigenvalues

The n×n matrix A has n eigenvalues λ_i, i = 1, . . ., n, which satisfy

  det (A − λ_i I) = 0

If this equation is expanded, it can be represented as

  P_n (λ) = (−λ)^n + a_1 (−λ)^{n−1} + a_2 (−λ)^{n−2} + . . . + a_{n−1} (−λ) + a_n = 0

If the matrix A has real entries then the a_i are real
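The characteristic-root procedure can be carried out numerically. This is a minimal sketch; the coefficients and initial conditions below are hypothetical, chosen so the two roots are real and distinct:

```python
import math

# Characteristic-root solution of  a x_{n+1} + b x_n + c x_{n-1} = 0.
# With a = 1, b = -3, c = 2 the characteristic equation
# phi^2 - 3 phi + 2 = 0 has roots phi1 = 2 and phi2 = 1.
a, b, c = 1.0, -3.0, 2.0
disc = math.sqrt(b * b - 4.0 * a * c)
phi1, phi2 = (-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)

# Fix A and B from two boundary (initial) conditions x_0 and x_1:
#   A + B = x_0,   A*phi1 + B*phi2 = x_1
x0, x1 = 0.0, 1.0
A = (x1 - x0 * phi2) / (phi1 - phi2)
B = x0 - A
x = lambda n: A * phi1**n + B * phi2**n

# Verify that x_n satisfies the recurrence for several n
for n in range(1, 6):
    assert abs(a * x(n + 1) + b * x(n) + c * x(n - 1)) < 1e-9
print(x(5))  # 2^5 - 1 = 31 for these initial conditions
```

For these values x_n = 2^n − 1, which the recurrence check confirms term by term.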
numbers, and the eigenvalues either are real numbers or occur in pairs as complex numbers with their complex conjugates (for definition of complex numbers, see Chap. 3). The Hamilton–Cayley theorem [19, p. 127] states that the matrix A satisfies its own characteristic equation:

  P_n (A) = (−A)^n + a_1 (−A)^{n−1} + a_2 (−A)^{n−2} + . . . + a_{n−1} (−A) + a_n I = 0

A laborious way to find the eigenvalues of a matrix is to solve the n-th order polynomial for the λ_i, which is far too time consuming. Instead the matrix is transformed into another form whose eigenvalues are easier to find. In the Givens method and the Householder method the matrix is transformed into tridiagonal form; then, in a fixed number of calculations the eigenvalues can be found [15]. The Givens method requires 4n³/3 operations to transform a real symmetric matrix to tridiagonal form, whereas the Householder method requires half that number [14]. Once the tridiagonal form is found, a Sturm sequence is applied to determine the eigenvalues. These methods are especially useful when only a few eigenvalues of the matrix are desired. If all the eigenvalues are needed, the QR algorithm is preferred [21].

The eigenvalues of a certain tridiagonal matrix can be found analytically. If A is a tridiagonal matrix with

  a_{ii} = p,  a_{i,i+1} = q,  a_{i+1,i} = r,  qr > 0

then the eigenvalues of A are [22]

  λ_i = p + 2 (qr)^{1/2} cos [iπ/(n+1)],  i = 1, 2, . . ., n

This result is useful when finite difference methods are applied to the diffusion equation.

2. Approximation and Integration

2.1. Introduction

Two types of problems arise frequently:

1) A function is known exactly at a set of points and an interpolating function is desired. The interpolant may be exact at the set of points, or it may be a "best fit" in some sense. Alternatively it may be desired to represent a function in some other way.
2) Experimental data must be fit with a mathematical model. The data have experimental error, so some uncertainty exists. The parameters in the model as well as the uncertainty in the determination of those parameters are desired.

These problems are addressed in this chapter. Section 2.2 gives the properties of polynomials defined over the whole domain and Section 2.3 those of polynomials defined on segments of the domain. In Section 2.4, quadrature methods are given for evaluating an integral. Least-squares methods for parameter estimation for both linear and nonlinear models are given in Section 2.5. Fourier transforms to represent discrete data are described in Section 2.7. The chapter closes with extensions to two-dimensional representations.

2.2. Global Polynomial Approximation

A global polynomial P_m (x) is defined over the entire region of space:

  P_m (x) = Σ_{j=0}^{m} c_j x^j

This polynomial is of degree m (highest power is x^m) and order m + 1 (m + 1 parameters {c_j}). If a set of m + 1 points is given,

  y_1 = f (x_1), y_2 = f (x_2), . . ., y_{m+1} = f (x_{m+1})

then Lagrange's formula yields a polynomial of degree m that goes through the m + 1 points:

  P_m (x) = [(x−x_2)(x−x_3). . .(x−x_{m+1})]/[(x_1−x_2)(x_1−x_3). . .(x_1−x_{m+1})] y_1
    + [(x−x_1)(x−x_3). . .(x−x_{m+1})]/[(x_2−x_1)(x_2−x_3). . .(x_2−x_{m+1})] y_2 + . . .
    + [(x−x_1)(x−x_2). . .(x−x_m)]/[(x_{m+1}−x_1)(x_{m+1}−x_2). . .(x_{m+1}−x_m)] y_{m+1}

Note that each coefficient of y_j is a polynomial of degree m that vanishes at the points {x_j} (except for one value of j) and takes the value of 1.0 at that point, i.e.,

  P_m (x_j) = y_j,  j = 1, 2, . . ., m+1

If the function f (x) is known, the error in the approximation is [23]

  |error (x)| ≤ [|x_{m+1} − x_1|^{m+1}/(m+1)!] max_{x_1 ≤ x ≤ x_{m+1}} |f^{(m+1)} (x)|
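Lagrange's formula is straightforward to program directly. A minimal sketch (the data points below are hypothetical):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        term = yj
        # Each coefficient of y_j vanishes at every other point and is 1 at x_j
        for k, xk in enumerate(xs):
            if k != j:
                term *= (x - xk) / (xj - xk)
        total += term
    return total

xs = [0.0, 1.0, 2.0, 4.0]     # hypothetical data points
ys = [1.0, 3.0, 2.0, 5.0]
# The interpolant reproduces the data exactly at the defining points
for xj, yj in zip(xs, ys):
    assert abs(lagrange(xs, ys, xj) - yj) < 1e-12
print(lagrange(xs, ys, 3.0))
```

Because four points determine a cubic uniquely, sampling f (x) = x³ at these same abscissas and interpolating reproduces x³ exactly.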
The evaluation of P_m (x) at a point other than the defining points can be made with Neville's algorithm [15]. Let P_1 be the value at x of the unique function passing through the point (x_1, y_1); i.e., P_1 = y_1. Let P_12 be the value at x of the unique polynomial passing through the points x_1 and x_2. Likewise, P_{ijk. . .r} is the unique polynomial passing through the points x_i, x_j, x_k, . . ., x_r. The following scheme is used:

  x_1  P_1
            P_12
  x_2  P_2        P_123
            P_23         P_1234
  x_3  P_3        P_234
            P_34
  x_4  P_4

These entries are defined by using

  P_{i(i+1). . .(i+m)} = [(x − x_{i+m}) P_{i(i+1). . .(i+m−1)} + (x_i − x) P_{(i+1)(i+2). . .(i+m)}]/(x_i − x_{i+m})

Consider P_1234: the terms on the right-hand side of the equation involve P_123 and P_234. The "parents", P_123 and P_234, already agree at points 2 and 3. Here i = 1, m = 3; thus, the parents agree at x_{i+1}, . . ., x_{i+m−1} already. The formula makes P_{i(i+1). . .(i+m)} agree with the function at the additional points x_{i+m} and x_i. Thus, P_{i(i+1). . .(i+m)} agrees with the function at all the points {x_i, x_{i+1}, . . ., x_{i+m}}.

Orthogonal Polynomials. Another form of the polynomials is obtained by defining them so that they are orthogonal. It is required that P_m (x) be orthogonal to P_k (x) for all k = 0, . . ., m − 1:

  ∫_a^b W (x) P_k (x) P_m (x) dx = 0,  k = 0, 1, 2, . . ., m−1

The orthogonality includes a nonnegative weight function, W (x) ≥ 0 for all a ≤ x ≤ b. This procedure specifies the set of polynomials to within multiplicative constants, which can be set either by requiring the leading coefficient to be one or by requiring the norm to be one:

  ∫_a^b W (x) P_m² (x) dx = 1

The polynomial P_m (x) has m roots in the closed interval a to b.

The polynomial

  p (x) = c_0 P_0 (x) + c_1 P_1 (x) + . . . + c_m P_m (x)

minimizes

  I = ∫_a^b W (x) [f (x) − p (x)]² dx

when

  c_j = [∫_a^b W (x) f (x) P_j (x) dx]/W_j,  W_j = ∫_a^b W (x) P_j² (x) dx

Note that each c_j is independent of m, the number of terms retained in the series. The minimum value of I is

  I_min = ∫_a^b W (x) f² (x) dx − Σ_{j=0}^{m} W_j c_j²

Such functions are useful for continuous data, i.e., when f (x) is known for all x.

Typical orthogonal polynomials are given in Table 1. Chebyshev polynomials are used in spectral methods (see Chap. 8). The last two rows of Table 1 are widely used in the orthogonal collocation method in chemical engineering.

Table 1. Orthogonal polynomials [15, 23]

  a    b   W (x)                 Name                               Recursion relation
  −1   1   1                     Legendre                           (i+1) P_{i+1} = (2i+1) x P_i − i P_{i−1}
  −1   1   1/√(1−x²)             Chebyshev                          T_{i+1} = 2x T_i − T_{i−1}
  0    1   x^{q−1} (1−x)^{p−q}   Jacobi (p, q)
  −∞   ∞   e^{−x²}               Hermite                            H_{i+1} = 2x H_i − 2i H_{i−1}
  0    ∞   x^c e^{−x}            Laguerre (c)                       (i+1) L^c_{i+1} = (−x+2i+c+1) L^c_i − (i+c) L^c_{i−1}
  0    1   1                     shifted Legendre
  0    1   1                     shifted Legendre, function of x²
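As a sketch of the least-squares expansion, the coefficients c_j for a sample function f (x) = e^x in Legendre polynomials (first row of Table 1, where W_j = 2/(2j+1)) can be evaluated by quadrature; NumPy's Legendre utilities are used here for convenience, and the choice of f and m is illustrative:

```python
import numpy as np

# Expansion of f(x) = exp(x) on [-1, 1] in Legendre polynomials:
#   c_j = (integral of W f P_j) / W_j,  W = 1,  W_j = 2/(2j+1)
f = np.exp
m = 5
x, w = np.polynomial.legendre.leggauss(20)   # 20-point quadrature, ample here
c = []
for j in range(m + 1):
    Pj = np.polynomial.legendre.Legendre.basis(j)(x)
    c.append(np.sum(w * f(x) * Pj) / (2.0 / (2 * j + 1)))

# Each c_j is independent of how many terms m are retained
approx = np.polynomial.legendre.Legendre(c)
err = max(abs(approx(t) - f(t)) for t in np.linspace(-1.0, 1.0, 101))
print(err)   # small for m = 5
```

A useful check: c_0 is the average of f over the interval, (1/2) ∫ e^x dx = sinh 1 ≈ 1.1752, which the quadrature reproduces to machine precision.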
The last entry (the shifted Legendre polynomial as a function of x²) is defined by

  ∫_0^1 W (x²) P_k (x²) P_m (x²) x^{a−1} dx = 0,  k = 0, 1, . . ., m−1

where a = 1 is for planar, a = 2 for cylindrical, and a = 3 for spherical geometry. These functions are useful if the solution can be proved to be an even function of x.

Rational Polynomials. Rational polynomials are ratios of polynomials. A rational polynomial R_{i(i+1). . .(i+m)} passing through m + 1 points

  y_i = f (x_i),  i = 1, . . ., m+1

is

  R_{i(i+1). . .(i+m)} = P_μ (x)/Q_ν (x) = (p_0 + p_1 x + . . . + p_μ x^μ)/(q_0 + q_1 x + . . . + q_ν x^ν),  m + 1 = μ + ν + 1

An alternative condition is to make the rational polynomial agree with the first m + 1 terms in the power series, giving a Padé approximation, i.e.,

  d^k R_{i(i+1). . .(i+m)}/dx^k = d^k f (x)/dx^k,  k = 0, . . ., m

The Bulirsch–Stoer recursion algorithm can be used to evaluate the polynomial:

  R_{i(i+1). . .(i+m)} = R_{(i+1). . .(i+m)} + [R_{(i+1). . .(i+m)} − R_{i(i+1). . .(i+m−1)}]/Den

  Den = [(x − x_i)/(x − x_{i+m})] [1 − (R_{(i+1). . .(i+m)} − R_{i(i+1). . .(i+m−1)})/(R_{(i+1). . .(i+m)} − R_{(i+1). . .(i+m−1)})] − 1

Rational polynomials are useful for approximating functions with poles and singularities, which occur in Laplace transforms (see Section 4.2).

Fourier series are discussed in Section 4.1. Representation by sums of exponentials is also possible [24].

In summary, for discrete data, Legendre polynomials and rational polynomials are used. For continuous data a variety of orthogonal polynomials and rational polynomials are used. When the number of conditions (discrete data points) exceeds the number of parameters, then see Section 2.5.

2.3. Piecewise Approximation

Piecewise approximations can be developed from difference formulas [3]. Consider a case in which the data points are equally spaced:

  x_{n+1} − x_n = Δx,  y_n = y (x_n)

Forward differences are defined by

  Δy_n = y_{n+1} − y_n
  Δ²y_n = Δy_{n+1} − Δy_n = y_{n+2} − 2y_{n+1} + y_n

Then, a new variable is defined,

  α = (x − x_0)/Δx

and the finite interpolation formula through the points y_0, y_1, . . ., y_n is written as follows:

  y_α = y_0 + α Δy_0 + [α (α−1)/2!] Δ²y_0 + . . . + [α (α−1). . .(α−n+1)/n!] Δ^n y_0   (1)

Keeping only the first two terms gives a straight line through (x_0, y_0) − (x_1, y_1); keeping the first three terms gives a quadratic function of position going through those points plus (x_2, y_2). The value α = 0 gives x = x_0; α = 1 gives x = x_1, etc.

Backward differences are defined by

  ∇y_n = y_n − y_{n−1}
  ∇²y_n = ∇y_n − ∇y_{n−1} = y_n − 2y_{n−1} + y_{n−2}

The interpolation polynomial of order n through the points y_0, y_{−1}, y_{−2}, . . . is

  y_α = y_0 + α ∇y_0 + [α (α+1)/2!] ∇²y_0 + . . . + [α (α+1). . .(α+n−1)/n!] ∇^n y_0

The value α = 0 gives x = x_0; α = −1 gives x = x_{−1}. Alternatively, the interpolation polynomial of order n through the points y_1, y_0, y_{−1}, . . . is

  y_α = y_1 + (α−1) ∇y_1 + [(α−1) α/2!] ∇²y_1 + . . . + [(α−1) α (α+1). . .(α+n−2)/n!] ∇^n y_1

Now α = 1 gives x = x_1; α = 0 gives x = x_0.

The finite element method can be used for piecewise approximations [3]. In the finite element method the domain a ≤ x ≤ b is divided
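The forward-difference interpolation formula (Eq. 1) can be sketched as follows; the sample data are hypothetical, and for a quadratic the three-term formula is exact:

```python
def newton_forward(y, alpha):
    """Evaluate the Newton forward-difference interpolant at x = x0 + alpha*dx,
    given equally spaced samples y = [y0, y1, ...]."""
    # Build the forward differences Delta^k y0
    diffs, row = [y[0]], list(y)
    for _ in range(1, len(y)):
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[0])
    # Accumulate  y0 + alpha*D y0 + alpha(alpha-1)/2! D^2 y0 + ...
    total, coeff = 0.0, 1.0
    for k, d in enumerate(diffs):
        total += coeff * d
        coeff *= (alpha - k) / (k + 1)
    return total

# Samples of y = x^2 at x0, x0 + dx, x0 + 2 dx with dx = 1
y = [0.0, 1.0, 4.0]
assert abs(newton_forward(y, 0.5) - 0.25) < 1e-12   # alpha = 0.5 -> x = 0.5
print(newton_forward(y, 1.5))                        # alpha = 1.5 -> x = 1.5
```

Keeping only the first two terms of `diffs` would reproduce the straight line through (x_0, y_0) and (x_1, y_1), as the text notes.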
into elements as shown in Figure 3. Each function N_i (x) is zero at all nodes except x_i; N_i (x_i) = 1. Thus, the approximation is

  y (x) = Σ_{i=1}^{NT} c_i N_i (x) = Σ_{i=1}^{NT} y (x_i) N_i (x)

where c_i = y (x_i). For convenience, the trial functions are defined within an element by using new coordinates:

  u = (x − x_i)/Δx_i

The Δx_i need not be the same from element to element. The trial functions are defined as N_i (x) (Fig. 3 A) in the global coordinate system and N_I (u) (Fig. 3 B) in the local coordinate system (which also requires specification of the element). For x_i < x < x_{i+1},

  y (x) = Σ_{i=1}^{NT} c_i N_i (x) = c_i N_i (x) + c_{i+1} N_{i+1} (x)

because all the other trial functions are zero there. Thus

  y (x) = c_i N_{I=1} (u) + c_{i+1} N_{I=2} (u),  x_i < x < x_{i+1}, 0 < u < 1

Then

  N_{I=1} = 1 − u,  N_{I=2} = u

and the expansion is rewritten as

  y (x) = Σ_{I=1}^{2} c^e_I N_I (u)   (2)

with x in the e-th element and c_i = c^e_I within element e. Thus, given a set of points (x_i, y_i), a finite element approximation can be made to go through them.

Figure 3. Galerkin finite element method – linear functions
A) Global numbering system; B) Local numbering system

Quadratic approximations can also be used within the element (see Fig. 4). Now the trial functions are

  N_{I=1} = 2 (u−1) (u−1/2)
  N_{I=2} = 4u (1−u)   (3)
  N_{I=3} = 2u (u−1/2)

Figure 4. Finite element approximation – quadratic elements
A) Global numbering system; B) Local numbering system

The approximation going through an odd number of points (x_i, y_i) is then

  y (x) = Σ_{I=1}^{3} c^e_I N_I (u),  x in the e-th element

with c^e_I = y (x_i), i = (e−1) 2 + I, in the e-th element.

Hermite cubic polynomials can also be used; these are continuous and have continuous first derivatives at the element boundaries [3].

Splines. Splines are functions that match given values at the points x_1, . . ., x_NT, shown in Figure 5, and have continuous derivatives up to some order at the knots, or the points x_2, . . ., x_{NT−1}. Cubic splines are most common. In this case the function is represented by a cubic polynomial within each interval and has continuous first and second derivatives at the knots.
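A minimal sketch of the linear finite element interpolant, using the local trial functions N_{I=1} = 1 − u and N_{I=2} = u (the node values below are hypothetical; element sizes need not be equal):

```python
def fe_linear(xs, ys, x):
    """Piecewise-linear finite element interpolant: within element e,
    y = c_1^e N_1(u) + c_2^e N_2(u) with N_1 = 1 - u and N_2 = u."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            u = (x - xs[i]) / (xs[i + 1] - xs[i])   # local coordinate in [0, 1]
            return ys[i] * (1.0 - u) + ys[i + 1] * u
    raise ValueError("x outside the mesh")

xs = [0.0, 0.5, 2.0]        # unequal elements are allowed
ys = [1.0, 2.0, 0.0]
assert fe_linear(xs, ys, 0.5) == 2.0   # the interpolant passes through the nodes
print(fe_linear(xs, ys, 1.25))         # midpoint of the second element
```

The quadratic trial functions of Eq. (3) could be substituted in the same framework, with three coefficients per element instead of two.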
Consider the points shown in Figure 5 A. The notation for each interval is shown in Figure 5 B. Within each interval the function is represented as a cubic polynomial:

  C_i (x) = a_{0i} + a_{1i} x + a_{2i} x² + a_{3i} x³

Figure 5. Finite elements for cubic splines
A) Notation for spline knots; B) Notation for one element

The interpolating function takes on specified values at the knots:

  C_{i−1} (x_i) = C_i (x_i) = f (x_i)

Given the set of values {x_i, f (x_i)}, the objective is to pass a smooth curve through those points, and the curve should have continuous first and second derivatives at the knots:

  C′_{i−1} (x_i) = C′_i (x_i)
  C″_{i−1} (x_i) = C″_i (x_i)

The formulas for the cubic spline are derived as follows for one region. Since the function is a cubic function the third derivative is constant and the second derivative is linear in x. This is written as

  C″_i (x) = C″_i (x_i) + [C″_i (x_{i+1}) − C″_i (x_i)] (x − x_i)/Δx_i

and integrated once to give

  C′_i (x) = C′_i (x_i) + C″_i (x_i) (x − x_i) + [C″_i (x_{i+1}) − C″_i (x_i)] (x − x_i)²/(2 Δx_i)

and once more to give

  C_i (x) = C_i (x_i) + C′_i (x_i) (x − x_i) + C″_i (x_i) (x − x_i)²/2 + [C″_i (x_{i+1}) − C″_i (x_i)] (x − x_i)³/(6 Δx_i)

Now

  y_i = C_i (x_i),  y′_i = C′_i (x_i),  y″_i = C″_i (x_i)

is defined so that

  C_i (x) = y_i + y′_i (x − x_i) + (1/2) y″_i (x − x_i)² + [1/(6 Δx_i)] (y″_{i+1} − y″_i) (x − x_i)³

A number of algebraic steps make the interpolation easy. These formulas are written for the i-th element as well as the (i−1)-th element. Then the continuity conditions are applied for the first and second derivatives, and the values y′_i and y′_{i−1} are eliminated [15]. The result is

  y″_{i−1} Δx_{i−1} + y″_i 2 (Δx_{i−1} + Δx_i) + y″_{i+1} Δx_i = 6 [(y_{i+1} − y_i)/Δx_i − (y_i − y_{i−1})/Δx_{i−1}]

This is a tridiagonal system for the set of {y″_i} in terms of the set of {y_i}. Since the continuity conditions apply only for i = 2, . . ., NT − 1, only NT − 2 conditions exist for the NT values of y″_i. Two additional conditions are needed, and these are usually taken as the value of the second derivative at each end of the domain, y″_1, y″_NT. If these values are zero, the natural cubic splines are obtained; they can also be set to achieve some other purpose, such as making the first derivative match some desired condition at the two ends. With these values taken as zero, in the natural cubic spline, an NT − 2 system of tridiagonal equations exists, which is easily solved. Once the second derivatives are known at each of the knots, the first derivatives are given by

  y′_i = (y_{i+1} − y_i)/Δx_i − y″_i Δx_i/3 − y″_{i+1} Δx_i/6

The function itself is then known within each element.

Orthogonal Collocation on Finite Elements. In the method of orthogonal collocation on finite elements the solution is expanded in a polynomial of order NP = NCOL + 2 within each element [3]. The choice NCOL = 1 corresponds to using quadratic polynomials, whereas NCOL = 2 gives cubic polynomials. The notation is shown in Figure 6. Set the function to a known value at the two endpoints,

  y_1 = y (x_1)
  y_NT = y (x_NT)

and then at the NCOL interior points to each element,
Figure 6. Notation for orthogonal collocation on finite elements
● Residual condition; ▲ Boundary conditions; | Element boundary, continuity
NE = total number of elements; NT = (NCOL + 1) NE + 1

  y^e_I = y_i = y (x_i),  i = (NCOL + 1) (e − 1) + I

The actual points x_i are taken as the roots of the orthogonal polynomial:

  P_NCOL (u) = 0 gives u_1, u_2, . . ., u_NCOL

and then

  x_i = x_(e) + Δx_e u_I ≡ x_{eI}

The first derivatives must be continuous at the element boundaries:

  dy/dx|_{x = x_(2)⁻} = dy/dx|_{x = x_(2)⁺}

Within each element the interpolation is a polynomial of degree NCOL + 1. Overall the function is continuous with continuous first derivatives. With the choice NCOL = 2, the same approximation is achieved as with Hermite cubic polynomials.

2.4. Quadrature

To calculate the value of an integral, the function can be approximated by using each of the methods described in Section 2.3. Using the first three terms in Equation 1 gives

  ∫_{x_0}^{x_0+h} y (x) dx = ∫_0^1 y_α h dα = (h/2) (y_0 + y_1) − (1/12) h³ y″_0 (ξ),  x_0 ≤ ξ ≤ x_0 + h
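The trapezoid and Simpson rules can be sketched as follows; integrating sin x on [0, π] (exact value 2) with a hypothetical choice of 100 intervals illustrates the O(h²) versus O(h⁴) error behavior:

```python
import math

def trapezoid(y, h):
    """Composite trapezoid rule on equally spaced samples y with spacing h."""
    return h * (y[0] / 2.0 + sum(y[1:-1]) + y[-1] / 2.0)

def simpson(y, h):
    """Composite Simpson rule; requires an odd number of points
    (an even number of intervals)."""
    assert len(y) % 2 == 1
    return h / 3.0 * (y[0] + y[-1]
                      + 4.0 * sum(y[1:-1:2]) + 2.0 * sum(y[2:-1:2]))

n = 100                             # number of intervals (even, for Simpson)
h = math.pi / n
y = [math.sin(i * h) for i in range(n + 1)]
print(trapezoid(y, h), simpson(y, h))
# trapezoid error is O(h^2), roughly 1e-4 here; Simpson error is O(h^4)
```

Halving h cuts the trapezoid error by about 4 and the Simpson error by about 16, which is the basis of Romberg extrapolation discussed later in this section.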
This corresponds to passing a straight line through the points (x_0, y_0), (x_1, y_1) and integrating under the interpolant. For equally spaced points at a = x_0, a + Δx = x_1, a + 2Δx = x_2, . . ., a + NΔx = x_N, a + (N + 1) Δx = b = x_{N+1}, the trapezoid rule is obtained.

Trapezoid Rule.

  ∫_a^b y (x) dx = (h/2) (y_0 + 2y_1 + 2y_2 + . . . + 2y_N + y_{N+1}) + O (h³)

The first five terms in Equation 1 are retained and integrated over two intervals:

  ∫_{x_0}^{x_0+2h} y (x) dx = ∫_0^2 y_α h dα = (h/3) (y_0 + 4y_1 + y_2) − (h⁵/90) y_0^{(IV)} (ξ),  x_0 ≤ ξ ≤ x_0 + 2h

This corresponds to passing a quadratic function through three points and integrating. For an even number of intervals and an odd number of points, 2N + 1, with a = x_0, a + Δx = x_1, a + 2Δx = x_2, . . ., a + 2NΔx = b, Simpson's rule is obtained.

Simpson's Rule.

  ∫_a^b y (x) dx = (h/3) (y_0 + 4y_1 + 2y_2 + 4y_3 + 2y_4 + . . . + 2y_{2N−2} + 4y_{2N−1} + y_{2N}) + O (h⁵)

Within each pair of intervals the interpolant is continuous with continuous derivatives, but only the function is continuous from one pair to another.

If the finite element representation is used (Eq. 2), the integral is

  ∫_{x_i}^{x_{i+1}} y (x) dx = ∫_0^1 Σ_{I=1}^{2} c^e_I N_I (u) (x_{i+1} − x_i) du = Δx_i Σ_{I=1}^{2} c^e_I ∫_0^1 N_I (u) du = Δx_i (c^e_1 1/2 + c^e_2 1/2) = (Δx_i/2) (y_i + y_{i+1})

Since c^e_1 = y_i and c^e_2 = y_{i+1}, the result is the same as the trapezoid rule. These formulas can be added together to give linear elements:

  ∫_a^b y (x) dx = Σ_e (Δx_e/2) (y^e_1 + y^e_2)

If the quadratic expansion is used (Eq. 3), the endpoints of the element are x_i and x_{i+2}, and x_{i+1} is the midpoint, here assumed to be equally spaced between the ends of the element:

  ∫_{x_i}^{x_{i+2}} y (x) dx = ∫_0^1 Σ_{I=1}^{3} c^e_I N_I (u) (x_{i+2} − x_i) du = Δx_e Σ_{I=1}^{3} c^e_I ∫_0^1 N_I (u) du = Δx_e (c^e_1 1/6 + c^e_2 2/3 + c^e_3 1/6)

For many elements, with different Δx_e, this gives quadratic elements:

  ∫_a^b y (x) dx = Σ_e (Δx_e/6) (y^e_1 + 4y^e_2 + y^e_3)

If the element sizes are all the same this gives Simpson's rule.

For cubic splines the quadrature rule within one element is

  ∫_{x_i}^{x_{i+1}} C_i (x) dx = (1/2) Δx_i (y_i + y_{i+1}) − (1/24) Δx_i³ (y″_i + y″_{i+1})

For the entire interval the quadrature formula is

  ∫_{x_1}^{x_NT} y (x) dx = (1/2) Σ_{i=1}^{NT−1} Δx_i (y_i + y_{i+1}) − (1/24) Σ_{i=1}^{NT−1} Δx_i³ (y″_i + y″_{i+1})

with y″_1 = 0, y″_NT = 0 for natural cubic splines.

When orthogonal polynomials are used, as in Section 2.2, the m roots of P_m (x) = 0 are chosen as quadrature points and called points {x_j}. Then the quadrature is Gaussian:

  ∫_0^1 y (x) dx = Σ_{j=1}^{m} W_j y (x_j)

The quadrature is exact when y is a polynomial of degree 2m − 1 in x. The m weights and m Gauss points result in 2m parameters, chosen to exactly represent a polynomial of degree 2m − 1, which has 2m parameters. The Gauss points and weights are given in Table 2. The weights can be defined with W (x) in the integrand as well.

Table 2. Gaussian quadrature points and weights *

  N   x_i            W_i
  1   0.5000000000   0.6666666667
  2   0.2113248654   0.5000000000
      0.7886751346   0.5000000000
  3   0.1127016654   0.2777777778
      0.5000000000   0.4444444445
      0.8872983346   0.2777777778
  4   0.0694318442   0.1739274226
      0.3300094783   0.3260725774
      0.6699905218   0.3260725774
      0.9305681558   0.1739274226
  5   0.0469100771   0.1184634425
      0.2307653450   0.2393143353
      0.5000000000   0.2844444444
      0.7692346551   0.2393143353
      0.9530899230   0.1184634425

* For a given N the quadrature points x_2, x_3, . . ., x_{NP−1} are given above. x_1 = 0, x_NP = 1. For N = 1, W_1 = W_3 = 1/6 and for N ≥ 2, W_1 = W_NP = 0.

For orthogonal collocation on finite elements the quadrature formula is

  ∫_0^1 y (x) dx = Σ_e Δx_e Σ_{J=1}^{NP} W_J y (x_{eJ})

Each special polynomial has its own quadrature formula. For example, Gauss–Laguerre polynomials give the quadrature formula

  ∫_0^∞ e^{−x} y (x) dx = Σ_{i=1}^{n} W_i y (x_i)

(points and weights are available in mathematical tables) [23].

For Gauss–Hermite polynomials the quadrature formula is

  ∫_{−∞}^{∞} e^{−x²} y (x) dx = Σ_{i=1}^{n} W_i y (x_i)
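A sketch of Gaussian quadrature using Gauss–Legendre points mapped from [−1, 1] to [0, 1]: with m points the rule is exact for polynomials of degree 2m − 1, and the mapped two-point abscissas reproduce the N = 2 entries of Table 2:

```python
import numpy as np

def gauss01(f, m):
    """Gauss-Legendre quadrature of f on [0, 1] with m points."""
    t, w = np.polynomial.legendre.leggauss(m)
    x = 0.5 * (t + 1.0)             # map the points to [0, 1]
    return 0.5 * np.sum(w * f(x))   # scale the weights by the interval length

# Two points integrate any cubic exactly: the integral of x^3 on [0, 1] is 1/4
assert abs(gauss01(lambda x: x**3, 2) - 0.25) < 1e-12

# The mapped N = 2 points match Table 2: (1 -/+ 1/sqrt(3))/2
t, _ = np.polynomial.legendre.leggauss(2)
print(0.5 * (t + 1.0))   # approximately [0.2113248654, 0.7886751346]
```

For the other weight functions, `numpy.polynomial.laguerre.laggauss` and `numpy.polynomial.hermite.hermgauss` supply the Gauss–Laguerre and Gauss–Hermite points and weights in the same way.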
(points and weights are available in mathematical tables) [23].

Romberg's method uses extrapolation techniques to improve the answer [15]. If I_1 is the value of the integral obtained by using interval size h = Δx, I_2 the value of I obtained by using interval size h/2, and I_0 the true value of I, then the error in a method is approximately h^m, or

  I_1 ≈ I_0 + c h^m
  I_2 ≈ I_0 + c (h/2)^m

Replacing the ≈ by an equality (an approximation) and solving for c and I_0 give

  I_0 = (2^m I_2 − I_1)/(2^m − 1)

This process can also be used to obtain I_1, I_2, . . ., by halving h each time, calculating new estimates from each pair, and calling them J_1, J_2, . . . (i.e., in the formula above, I_0 is replaced with J_1). The formulas are reapplied for each pair of J's to obtain K_1, K_2, . . . . The process continues until the required tolerance is obtained.

  I_1   I_2   I_3   I_4
     J_1   J_2   J_3
        K_1   K_2
           L_1

Romberg's method is most useful for a low-order method (small m) because significant improvement is then possible.

When the integrand has singularities, a variety of techniques can be tried. The integral may be divided into one part that can be integrated analytically near the singularity and another part that is integrated numerically. Sometimes a change of argument allows analytical integration. Series expansion might be helpful, too. When the domain is infinite, Gauss–Laguerre or Gauss–Hermite quadrature can be used. Also a transformation can be made [15]. For example, let u = 1/x and then

  ∫_a^b f (x) dx = ∫_{1/b}^{1/a} (1/u²) f (1/u) du,  a, b > 0

2.5. Least Squares

When fitting experimental data to a mathematical model, it is necessary to recognize that the experimental measurements contain error; the goal is to find the set of parameters in the model that best represents the experimental data. Reference [23] gives a complete treatment relating the least-squares procedure to maximum likelihood.

In a least-squares parameter estimation, it is desired to find parameters that minimize the sum of squares of the deviation between the experimental data and the theoretical equation:

  χ² = Σ_{i=1}^{N} [(y_i − y (x_i; a_1, a_2, . . ., a_M))/σ_i]²

where y_i is the i-th experimental data point for the value x_i, y (x_i; a_1, a_2, . . ., a_M) the theoretical equation at x_i, σ_i the standard deviation of the i-th measurement, and the parameters {a_1, a_2, . . ., a_M} are to be determined to minimize χ². The simplification is made here that the standard deviations are all the same. Thus, we minimize the variance of the curve fit:

  σ² = Σ_{i=1}^{N} [y_i − y (x_i; a_1, a_2, . . ., a_M)]²/N

Linear Least Squares. When the model is a straight line, one is minimizing

  χ² = Σ_{i=1}^{N} [y_i − a − b x_i]²

The linear correlation coefficient r is defined by

  r = Σ_{i=1}^{N} (x_i − x̄) (y_i − ȳ) / {[Σ_{i=1}^{N} (x_i − x̄)²]^{1/2} [Σ_{i=1}^{N} (y_i − ȳ)²]^{1/2}}

and

  χ² = (1 − r²) Σ_{i=1}^{N} [y_i − ȳ]²

where ȳ is the average of the y_i values. Values of r near 1 indicate a positive correlation; r near −1 means a negative correlation, and r near zero means no correlation. These parameters are easily found by using standard programs, such as Microsoft Excel.
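As a check on such programs, the straight-line fit and the correlation coefficient r can be computed directly from their definitions. This sketch uses synthetic data lying near a hypothetical line y = 2 + 0.5 x:

```python
import math

# Hypothetical data near the line y = 2 + 0.5 x
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.1, 2.4, 3.0, 3.6, 3.9]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

sxx = sum((x - xbar) ** 2 for x in xs)
syy = sum((y - ybar) ** 2 for y in ys)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))

b = sxy / sxx                    # slope of the least-squares line
a = ybar - b * xbar              # intercept
r = sxy / math.sqrt(sxx * syy)   # linear correlation coefficient

# chi^2 computed two ways: directly, and via (1 - r^2) * sum (y_i - ybar)^2
chi2 = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))
assert abs(chi2 - (1.0 - r * r) * syy) < 1e-9
print(a, b, r)
```

The assertion verifies the identity χ² = (1 − r²) Σ (y_i − ȳ)² stated above, which holds exactly for the least-squares line.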
Polynomial Regression. In polynomial regression, one expands the function in a polynomial in x:

  y (x) = Σ_{j=1}^{M} a_j x^{j−1}

The parameters are easily determined using computer software. In Microsoft Excel, the data are put into columns A and B and the graph is created as for a linear curve fit. Then add a trendline and choose the degree of polynomial desired.

Multiple Regression. In multiple regression, any set of functions can be used, not just polynomials:

  y (x) = Σ_{j=1}^{M} a_j f_j (x)

where the set of functions {f_j (x)} is known and specified. Note that the unknown parameters {a_j} enter the equation linearly. In this case, the spreadsheet can be expanded to have a column for x, and then successive columns for f_j (x). In Microsoft Excel, choose Regression under Tools/Data Analysis, and complete the form. In addition to the actual correlation, one gets the expected variance of the unknowns, which allows one to assess how accurately they were determined.

Nonlinear Regression. In nonlinear regression, the same procedure is followed except that an optimization routine must be used to find the minimum χ². See Chapter 10.

2.6. Fourier Transforms of Discrete Data [15]

Suppose a signal y (t) is sampled at equal intervals:

  y_n = y (nΔ),  n = . . ., −2, −1, 0, 1, 2, . . .
  Δ = sampling rate (e.g., number of samples per second)

The Fourier transform and inverse transform are

  Y (ω) = ∫_{−∞}^{∞} y (t) e^{iωt} dt
  y (t) = (1/2π) ∫_{−∞}^{∞} Y (ω) e^{−iωt} dω

(For definition of i, see Chap. 3.) The Nyquist critical frequency or critical angular frequency is

  f_c = 1/(2Δ),  ω_c = π/Δ

If a function y (t) is bandwidth limited to frequencies smaller than f_c, i.e.,

  Y (ω) = 0 for ω > ω_c

then the function is completely determined by its samples y_n. Thus, the entire information content of a signal can be recorded by sampling at a rate Δ⁻¹ = 2 f_c. If the function is not bandwidth limited, then aliasing occurs. Once a sample rate Δ is chosen, information corresponding to frequencies greater than f_c is simply aliased into that range. The way to detect this in a Fourier transform is to see if the transform approaches zero at f_c; if not, aliasing has occurred and a higher sampling rate is needed.

Next, for N samples, where N is even,

  y_k = y (t_k),  t_k = kΔ,  k = 0, 1, 2, . . ., N−1

and the sampling rate is Δ; with only N values {y_k} the complete Fourier transform Y (ω) cannot be determined. Calculate the value Y (ω_n) at the discrete points

  ω_n = 2πn/(NΔ),  n = −N/2, . . ., 0, . . ., N/2

  Y_n = Σ_{k=0}^{N−1} y_k e^{2πikn/N}
  Y (ω_n) = Δ Y_n

The discrete inverse Fourier transform is

  y_k = (1/N) Σ_{n=0}^{N−1} Y_n e^{−2πikn/N}

The fast Fourier transform (FFT) is used to calculate the Fourier transform as well as the inverse Fourier transform. A discrete Fourier transform of length N can be written as the sum of two discrete Fourier transforms, each of length N/2, and each of these transforms is separated into two halves, each half as long. This continues until only one component is left. For this reason, N is taken as a power of 2, N = 2^p. The vector {y_j} is filled with zeroes, if need be, to make N = 2^p for some p. For the computer program, see [15, p. 381].
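The discrete transform pair above can be checked directly against the FFT. Note that NumPy's `fft` uses the opposite sign convention from the text, so the forward sum with e^{+2πikn/N} corresponds to `np.fft.ifft` scaled by N; the data below are random samples chosen for illustration:

```python
import numpy as np

# Direct O(N^2) evaluation of  Y_n = sum_k y_k exp(2*pi*i*k*n/N)
# compared with the FFT-based result.
N = 16                               # a power of 2, as the FFT prefers
rng = np.random.default_rng(1)
y = rng.standard_normal(N)

k = np.arange(N)
direct = np.array([np.sum(y * np.exp(2.0j * np.pi * k * n / N))
                   for n in range(N)])
viafft = np.fft.ifft(y) * N          # same sums, computed in N log2 N operations
assert np.allclose(direct, viafft)

# The inverse relation recovers the samples:
#   y_k = (1/N) sum_n Y_n exp(-2*pi*i*k*n/N)
n = np.arange(N)
recovered = np.array([np.sum(direct * np.exp(-2.0j * np.pi * kk * n / N))
                      for kk in range(N)]) / N
assert np.allclose(recovered, y)
print("transform pair verified for N =", N)
```

The direct sums cost N² operations; the FFT result is identical but obtained in N log₂ N operations, which is the point made below.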
The standard Fourier transform takes N² operations to calculate, whereas the fast Fourier transform takes only N log₂ N. For large N the difference is significant; at N = 100 it is a factor of 15, but for N = 1000 it is a factor of 100.

The discrete Fourier transform can also be used for differentiating a function; this is used in the spectral method for solving differential equations. Consider a grid of equidistant points,

  x_n = nΔx,  n = 0, 1, 2, . . ., 2N−1,  Δx = L/(2N)

where the solution is known at each of these grid points {Y (x_n)}. First, the Fourier transform is taken:

  y_k = (1/2N) Σ_{n=0}^{2N−1} Y (x_n) e^{−2πikx_n/L}

The inverse transformation is

  Y (x) = (1/L) Σ_{k=−N}^{N} y_k e^{2πikx/L}

which is differentiated to obtain

  dY/dx = (1/L) Σ_{k=−N}^{N} y_k (2πik/L) e^{2πikx/L}

Thus, at the grid points

  dY/dx|_n = (1/L) Σ_{k=−N}^{N} y_k (2πik/L) e^{2πikx_n/L}

The process works as follows. From the solution at all grid points the Fourier transform {y_k} is obtained by using the FFT. This is multiplied by 2πik/L to obtain the Fourier transform of the derivative:

  ŷ_k = y_k (2πik/L)

The inverse Fourier transform is then taken by using the FFT, to give the value of the derivative at each of the grid points:

  dY/dx|_n = (1/L) Σ_{k=−N}^{N} ŷ_k e^{2πikx_n/L}

Any nonlinear term can be treated in the same way: evaluate it in real space at N points and take the Fourier transform. After processing using this transform to get the transform of a new function, take the inverse transform to obtain the new function at N points. This is what is done in direct numerical simulation of turbulence (DNS).

2.7. Two-Dimensional Interpolation and Quadrature

Bicubic splines can be used to interpolate a set of values on a regular array, f (x_i, y_j). Suppose NX points occur in the x direction and NY points occur in the y direction. Press et al. [15] suggest computing NY different cubic splines of size NX along lines of constant y, for example, and storing the derivative information. To obtain the value of f at some point x, y, evaluate each of these splines for that x. Then do one spline of size NY in the y direction, doing both the determination and the evaluation.

Multidimensional integrals can also be broken down into one-dimensional integrals. For example,

  ∫_a^b ∫_{f_1(x)}^{f_2(x)} z (x, y) dy dx = ∫_a^b G (x) dx,  G (x) = ∫_{f_1(x)}^{f_2(x)} z (x, y) dy

3. Complex Variables [25 – 31]

3.1. Introduction to the Complex Plane

A complex number is an ordered pair of real numbers, x and y, that is written as

  z = x + iy

The variable i is the imaginary unit, which has the property

  i² = −1

The real and imaginary parts of a complex number are often referred to:

  Re (z) = x,  Im (z) = y

A complex number can also be represented graphically in the complex plane, where the real part of the complex number is the abscissa and the imaginary part of the complex number is the ordinate (see Fig. 7).

Another representation of a complex number is the polar form, where r is the magnitude and θ is the argument:

  r = |x + iy| = √(x² + y²),  θ = arg (x + iy)
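A quick numerical check of the polar form with Python's `cmath` module; note that `cmath.polar` uses the two-argument arctangent, which picks the correct quadrant automatically (the sample number is arbitrary):

```python
import cmath
import math

# Polar form: r = |x + iy|, theta = arg(x + iy)
z = -1.0 + 1.0j
r, theta = cmath.polar(z)

assert abs(r - math.sqrt(2.0)) < 1e-12
# arg is 3*pi/4; a naive arctan(y/x) would give -pi/4, the wrong quadrant
assert abs(theta - 3.0 * math.pi / 4.0) < 1e-12
# Converting back recovers x + iy
assert abs(cmath.rect(r, theta) - z) < 1e-12
print(r, theta)
```

This quadrant issue is exactly the arctangent ambiguity discussed in the next section.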
Write

  z = x + iy = r (cos θ + i sin θ)

so that

  x = r cos θ,  y = r sin θ

and

  θ = arctan (y/x)

Since the arctangent repeats itself in multiples of π rather than 2π, the argument must be defined carefully. For example, the θ given above could also be the argument of −(x + iy). The function z = cos θ + i sin θ obeys |z| = |cos θ + i sin θ| = 1.

Figure 7. The complex plane

The rules of equality, addition, and multiplication are

  z_1 = x_1 + iy_1,  z_2 = x_2 + iy_2
  Equality: z_1 = z_2 if and only if x_1 = x_2 and y_1 = y_2
  Addition: z_1 + z_2 = (x_1 + x_2) + i (y_1 + y_2)
  Multiplication: z_1 z_2 = (x_1 x_2 − y_1 y_2) + i (x_1 y_2 + x_2 y_1)

The last rule can be remembered by using the standard rules for multiplication, keeping the imaginary parts separate, and using i² = −1. In the complex plane, addition is illustrated in Figure 8. In polar form, multiplication is

  z_1 z_2 = r_1 r_2 [cos (θ_1 + θ_2) + i sin (θ_1 + θ_2)]

The magnitude of z_1 ± z_2 is bounded by

  |z_1 ± z_2| ≤ |z_1| + |z_2| and |z_1| − |z_2| ≤ |z_1 ± z_2|

as can be seen in Figure 8. The magnitude and arguments in multiplication obey

  |z_1 z_2| = |z_1| |z_2|,  arg (z_1 z_2) = arg z_1 + arg z_2

Figure 8. Addition in the complex plane

The complex conjugate is z* = x − iy when z = x + iy, and |z*| = |z|, arg z* = −arg z.

For complex conjugates then

  z* z = |z|²

The reciprocal is

  1/z = z*/|z|² = (1/r) (cos θ − i sin θ),  arg (1/z) = −arg z

Then

  z_1/z_2 = (x_1 + iy_1)/(x_2 + iy_2) = (x_1 + iy_1) (x_2 − iy_2)/(x_2² + y_2²)
    = (x_1 x_2 + y_1 y_2)/(x_2² + y_2²) + i (x_2 y_1 − x_1 y_2)/(x_2² + y_2²)

and

  z_1/z_2 = (r_1/r_2) [cos (θ_1 − θ_2) + i sin (θ_1 − θ_2)]

3.2. Elementary Functions

Properties of elementary functions of complex variables are discussed here [32]. When the polar form is used, the argument must be specified because the same physical angle can be achieved with arguments differing by 2π. A complex number taken to a real power obeys

  u = z^n,  |z^n| = |z|^n,  arg (z^n) = n arg z (mod 2π)
  u = z^n = r^n (cos nθ + i sin nθ)
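These rules are easy to verify numerically with Python's built-in complex type (the sample numbers are arbitrary):

```python
import cmath

z1, z2 = 1.0 + 2.0j, 3.0 - 1.0j     # arbitrary sample numbers

# |z1 z2| = |z1||z2| and arg(z1 z2) = arg z1 + arg z2
r1, t1 = cmath.polar(z1)
r2, t2 = cmath.polar(z2)
assert abs(cmath.rect(r1 * r2, t1 + t2) - z1 * z2) < 1e-12

# Division via the conjugate: z1/z2 = z1 * conj(z2) / |z2|^2
assert abs(z1 * z2.conjugate() / abs(z2) ** 2 - z1 / z2) < 1e-12

# z^n = r^n (cos n*theta + i sin n*theta)
n = 5
assert abs(cmath.rect(r1 ** n, n * t1) - z1 ** n) < 1e-9
print(z1 * z2, z1 / z2)
```

Each assertion checks one of the identities stated above, with the Cartesian and polar computations agreeing to rounding error.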
Roots of a complex number are complicated by The circular functions with complex argu-
careful accounting of the argument ments are dened
ei z +ei z ei z ei z
z = w1/n with w = R (cos +i sin ) , 02 cos z = 2
, sin z = 2
,

sin z
then tan z = cos z
 
zk = R1/n {cos
+ (k1) 2 and satisfy
n n
  sin (z) = sin z, cos (z) = cos z
+i sin
n
+ (k1) 2
n
}
sin (i z) = i sinh z, cos (i z) = cosh z
such that
All trigonometric identities for real, circular
(zk )n = w for every k functions with real arguments can be extended
without change to complex functions of complex
z = r (cos +i sin ) arguments. For example,
rn = R, n = (mod 2)
sin2 z+cos2 z = 1,
The exponential function is
sin (z1 +z2 ) = sin z1 cos z2 +cos z1 sin z2
ez = ex (cos y+i sin y)
The same is true of hyperbolic functions. The
Thus, absolute boundaries of sin z and cos z are not
bounded for all z.
z = r (cos +i sin )
Trigonometric identities can be dened by us-
can be written ing
z = rei ei = cos +i sin
For example,
and
ei(+) = cos (+) +i sin (+)
|ez | = ex , arg ez = y (mod 2)
= ei ei = (cos +i sin )
The exponential obeys (cos +i sin )
ez =0 for every nite z = cos cos sin sin
+i (cos sin +cos sin )
and is periodic with period 2 :
Equating real and imaginary parts gives
ez+2 i = ez cos (+) = cos cos sin sin
Trigonometric functions can be dened by sin (+) = cos sin +cos sin
using
The logarithm is dened as
eiy = cos y+i sin y, and eiy = cos yi sin y ln z = ln |z| +i arg z
Thus, and the various determinations differ by multi-
ei y +ei y
ples of 2 i. Then,
cos y = 2
= cosh i y
eln z = z
ei y ei y
sin y = 2i
= i sinh i y
ln (ez ) z0 (mod 2 i)
The second equation follows from the deni-
Also,
tions
ln (z1 z2 ) ln z1 ln z2 0 (mod 2 i)
ez +ez ez ez
cosh z , sinh z
2 2 is always true, but
The remaining hyperbolic functions are ln (z1 z2 ) = ln z1 +ln z2
sinh z 1 holds only for some determinations of the loga-
tanh z cosh z
, coth z tanh z
rithms. The principal determination of the argu-
1 1
sech z cosh z
, csch z sinh z ment can be dened as < arg .
3.3. Analytic Functions of a Complex Variable

Let f(z) be a single-valued continuous function of z in a domain D. The function f(z) is differentiable at the point z_0 in D if

\lim_{h \to 0} \frac{f(z_0 + h) - f(z_0)}{h}

exists as a finite (complex) number and is independent of the direction in which h tends to zero. The limit is called the derivative, f'(z_0). The derivative can be calculated with h approaching zero anywhere in a circular region about z_0 in the complex plane. The function f(z) is differentiable in D if it is differentiable at all points of D; then f(z) is said to be an analytic function of z in D. Also, f(z) is analytic at z_0 if it is analytic in some neighborhood of z_0. The word analytic is sometimes replaced by holomorphic or regular.

The Cauchy-Riemann equations can be used to decide whether a function is analytic. Set

f(z) = f(x + iy) = u(x, y) + i v(x, y)

Theorem [30, p. 51]. Suppose that f(z) is defined and continuous in some neighborhood of z = z_0. A necessary condition for the existence of f'(z_0) is that u(x, y) and v(x, y) have first-order partials and that the Cauchy-Riemann conditions

\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} and \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} at z_0

hold.

Theorem [30, p. 61]. The function f(z) is analytic in a domain D if and only if u and v are continuously differentiable and satisfy the Cauchy-Riemann conditions there.

If f_1(z) and f_2(z) are analytic in domain D, then \alpha_1 f_1(z) + \alpha_2 f_2(z) is analytic in D for any (complex) constants \alpha_1, \alpha_2.

f_1(z) + f_2(z) is analytic in D

f_1(z)/f_2(z) is analytic in D except where f_2(z) = 0

An analytic function of an analytic function is analytic. If f(z) is analytic, f'(z) \ne 0 in D, and f(z_1) \ne f(z_2) for z_1 \ne z_2, then the inverse function g(w) is also analytic and

g'(w) = \frac{1}{f'(z)} where w = f(z), g(w) = g[f(z)] = z

Analyticity implies continuity but the converse is not true: z^* = x - iy is continuous but, because the Cauchy-Riemann conditions are not satisfied, it is not analytic. An entire function is one that is analytic for all finite values of z. Every polynomial is an entire function. Because the polynomials are analytic, a ratio of polynomials is analytic except when the denominator vanishes. The function f(z) = |z|^2 is continuous for all z but satisfies the Cauchy-Riemann conditions only at z = 0. Hence, f'(z) exists only at the origin, and |z|^2 is nowhere analytic. The function f(z) = 1/z is analytic except at z = 0. Its derivative is -1/z^2, where z \ne 0. If \ln z = \ln |z| + i \arg z in the cut domain -\pi < \arg z \le \pi, then f(z) = 1/\ln z is analytic in the same cut domain, except at z = 1, where \ln z = 0. Because e^z is analytic and \pm iz are analytic, e^{\pm iz} is analytic and linear combinations are analytic. Thus, the sine and cosine and hyperbolic sine and cosine are analytic. The other functions are analytic except when the denominator vanishes.

The derivatives of the elementary functions are

\frac{d}{dz} e^z = e^z, \frac{d}{dz} z^n = n z^{n-1}

\frac{d}{dz} (\ln z) = \frac{1}{z}, \frac{d}{dz} \sin z = \cos z, \frac{d}{dz} \cos z = -\sin z

\frac{d}{dz} (fg) = f \frac{dg}{dz} + g \frac{df}{dz}

\frac{d}{dz} f[g(z)] = \frac{df}{dg} \frac{dg}{dz}

\frac{d}{dz} \sin w = \cos w \frac{dw}{dz}, \frac{d}{dz} \cos w = -\sin w \frac{dw}{dz}

Define z^a = e^{a \ln z} for complex constant a. If the determination is -\pi < \arg z \le \pi, then z^a is analytic on the complex plane with a cut on the negative real axis. If a is an integer n, then e^{2\pi i n} = 1 and z^n has the same limits approaching the cut from either side. The function can be made continuous across the cut and the function is analytic there, too. If a = 1/n where n is an integer, then

z^{1/n} = e^{(\ln z)/n} = |z|^{1/n} e^{i (\arg z)/n}

So w = z^{1/n} has n values, depending on the choice of argument.
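The Cauchy-Riemann test above lends itself to a numerical check: differentiating f = u + iv along the real and imaginary axes by centered finite differences gives u_x, v_x, u_y, v_y directly. This sketch (helper name ours) confirms that z^2 passes the test while the conjugate z^* fails it, as stated in the text.

```python
def cauchy_riemann_defect(f, z, h=1e-6):
    """Max of |u_x - v_y| and |u_y + v_x| at z, from centered finite differences."""
    fx = (f(z + h) - f(z - h)) / (2 * h)            # d(u + iv)/dx
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # d(u + iv)/dy
    ux, vx = fx.real, fx.imag
    uy, vy = fy.real, fy.imag
    return max(abs(ux - vy), abs(uy + vx))

assert cauchy_riemann_defect(lambda z: z * z, 1 + 2j) < 1e-6          # analytic
assert cauchy_riemann_defect(lambda z: z.conjugate(), 1 + 2j) > 1.0   # z* is not
```

For z^* = x - iy one finds u_x = 1 and v_y = -1, so the defect is 2 everywhere: continuity without analyticity.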
Laplace Equation. If f(z) is analytic, where

f(z) = u(x, y) + i v(x, y)

the Cauchy-Riemann equations are satisfied. Differentiating the Cauchy-Riemann equations gives the Laplace equation:

\frac{\partial^2 u}{\partial x^2} = \frac{\partial^2 v}{\partial x \partial y} = \frac{\partial^2 v}{\partial y \partial x} = -\frac{\partial^2 u}{\partial y^2} or \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0

Similarly,

\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} = 0

Thus, general solutions to the Laplace equation can be obtained from analytic functions [30, p. 60]. For example,

\ln \frac{1}{|z - z_0|}

is analytic, so that a solution to the Laplace equation is

\ln [(x - a)^2 + (y - b)^2]^{-1/2}

A solution to the Laplace equation is called a harmonic function. A function is harmonic if, and only if, it is the real part of an analytic function. The imaginary part is also harmonic. Given any harmonic function u, a conjugate harmonic function v can be constructed such that f = u + iv is locally analytic [30, p. 290].

Maximum Principle. If f(z) is analytic in a domain D and continuous in the set consisting of D and its boundary C, and if |f(z)| \le M on C, then |f(z)| < M in D unless f(z) is a constant [30, p. 134].

3.4. Integration in the Complex Plane

Let C be a rectifiable curve in the complex plane,

C: z = z(t), 0 \le t \le 1

where z(t) is a continuous function of bounded variation; C is oriented such that z_1 = z(t_1) precedes the point z_2 = z(t_2) on C if and only if t_1 < t_2. Define

\int_C f(z) dz = \int_0^1 f[z(t)] dz(t)

The integral is linear with respect to the integrand:

\int_C [\alpha_1 f_1(z) + \alpha_2 f_2(z)] dz = \alpha_1 \int_C f_1(z) dz + \alpha_2 \int_C f_2(z) dz

The integral is additive with respect to the path. Let curve C_2 begin where curve C_1 ends and C_1 + C_2 be the path of C_1 followed by C_2. Then,

\int_{C_1 + C_2} f(z) dz = \int_{C_1} f(z) dz + \int_{C_2} f(z) dz

Reversing the orientation of the path replaces the integral by its negative:

\int_{-C} f(z) dz = -\int_C f(z) dz

If the path of integration consists of a finite number of arcs along which z(t) has a continuous derivative, then

\int_C f(z) dz = \int_0^1 f[z(t)] z'(t) dt

Also, if s(t) is the arc length on C and l(C) is the length of C,

\left| \int_C f(z) dz \right| \le \max_{z \in C} |f(z)| \, l(C)

and

\left| \int_C f(z) dz \right| \le \int_C |f(z)| |dz| = \int_0^1 |f[z(t)]| \, ds(t)

Cauchy's Theorem [25], [30, p. 111]. Suppose f(z) is an analytic function in a domain D and C is a simple, closed, rectifiable curve in D such that f(z) is analytic inside and on C. Then

\oint_C f(z) dz = 0    (4)

If D is simply connected, then Equation 4 holds for every simple, closed, rectifiable curve C in D. If D is simply connected and if a and b are any two points in D, then

\int_a^b f(z) dz

is independent of the rectifiable path joining a and b in D.
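Cauchy's theorem can be illustrated by discretizing the parametrization \int_C f[z(t)] z'(t) dt on the unit circle. In this sketch (helper name ours; the equally spaced Riemann sum is spectrally accurate for a periodic integrand), an entire function integrates to zero while 1/z, which has a singularity inside C, does not.

```python
import cmath
import math

def contour_integral(f, radius=1.0, n=20000):
    """Integral of f around the circle |z| = radius, counterclockwise,
    as a Riemann sum of f[z(t)] z'(t) dt with z(t) = radius*exp(i t)."""
    total = 0.0 + 0.0j
    for k in range(n):
        t = 2 * math.pi * k / n
        z = radius * cmath.exp(1j * t)
        dz = 1j * z * (2 * math.pi / n)     # z'(t) dt
        total += f(z) * dz
    return total

assert abs(contour_integral(lambda z: z ** 2)) < 1e-8                 # analytic: zero
assert abs(contour_integral(lambda z: 1 / z) - 2j * math.pi) < 1e-8   # pole inside C
```

The second result anticipates the residue theorem of the next subsection: the integral of 1/z picks up 2\pi i from the singular point at the origin.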
Cauchy's Integral. If C is a closed contour such that f(z) is analytic inside and on C, z_0 is a point inside C, and z traverses C in the counterclockwise direction, then

f(z_0) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - z_0} dz

f'(z_0) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{(z - z_0)^2} dz

Under further restriction on the domain [30, p. 127],

f^{(m)}(z_0) = \frac{m!}{2\pi i} \oint_C \frac{f(z)}{(z - z_0)^{m+1}} dz

Power Series. If f(z) is analytic interior to a circle |z - z_0| < r_0, then at each point inside the circle the series

f(z) = f(z_0) + \sum_{n=1}^{\infty} \frac{f^{(n)}(z_0)}{n!} (z - z_0)^n

converges to f(z). This result follows from Cauchy's integral. As an example, e^z is an entire function (analytic everywhere), so that the MacLaurin series

e^z = 1 + \sum_{n=1}^{\infty} \frac{z^n}{n!}

represents the function for all z.

Another result of Cauchy's integral formula is that if f(z) is analytic in an annulus R, r_1 < |z - z_0| < r_2, it is represented in R by the Laurent series

f(z) = \sum_{n=-\infty}^{\infty} A_n (z - z_0)^n, r_1 < |z - z_0| < r_2

where

A_n = \frac{1}{2\pi i} \oint_C \frac{f(z)}{(z - z_0)^{n+1}} dz, n = 0, \pm 1, \pm 2, ...

and C is a closed curve counterclockwise in R.

Singular Points and Residues [33, p. 159], [30, p. 180]. If a function is analytic in every neighborhood of z_0, but not at z_0 itself, then z_0 is called an isolated singular point of the function. About an isolated singular point, the function can be represented by a Laurent series

f(z) = ... + \frac{A_{-2}}{(z - z_0)^2} + \frac{A_{-1}}{z - z_0} + A_0 + A_1 (z - z_0) + ..., 0 < |z - z_0| \le r_0    (5)

In particular,

A_{-1} = \frac{1}{2\pi i} \oint_C f(z) dz

where the curve C is a closed, counterclockwise curve containing z_0 and is within the neighborhood where f(z) is analytic. The complex number A_{-1} is the residue of f(z) at the isolated singular point z_0; 2\pi i A_{-1} is the value of the integral in the positive direction around a path containing no other singular points.

If f(z) is defined and analytic in the exterior |z - z_0| > R of a circle, and if

v(\zeta) = f(z_0 + 1/\zeta), obtained by \zeta = \frac{1}{z - z_0}

has a removable singularity at \zeta = 0, then f(z) is analytic at infinity. It can then be represented by a Laurent series with nonpositive powers of z - z_0.

If C is a closed curve within which and on which f(z) is analytic except for a finite number of singular points z_1, z_2, ..., z_n interior to the region bounded by C, then the residue theorem states

\oint_C f(z) dz = 2\pi i (\varrho_1 + \varrho_2 + ... + \varrho_n)

where \varrho_n denotes the residue of f(z) at z_n.

The series of negative powers in Equation 5 is called the principal part of f(z). If the principal part has an infinite number of nonvanishing terms, the point z_0 is an essential singularity. If A_{-m} \ne 0 and A_{-n} = 0 for all n > m, then z_0 is called a pole of order m. It is a simple pole if m = 1. In such a case,

f(z) = \frac{A_{-1}}{z - z_0} + \sum_{n=0}^{\infty} A_n (z - z_0)^n

If a function is not analytic at z_0 but can be made so by assigning a suitable value, then z_0 is a removable singular point.

When f(z) has a pole of order m at z_0,

\varphi(z) = (z - z_0)^m f(z), 0 < |z - z_0| < r_0

has a removable singularity at z_0. If \varphi(z_0) = A_{-m}, then \varphi(z) is analytic at z_0. For a simple pole,

A_{-1} = \varphi(z_0) = \lim_{z \to z_0} (z - z_0) f(z)
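The simple-pole residue rule can be exercised on f(z) = 1/(z^2 + 1), which has poles at z = \pm i; the residue at i is 1/(2i). This sketch (helper name ours) computes it from p(z_0)/q'(z_0) and confirms it against (1/2\pi i) times a small contour integral around the pole.

```python
import cmath
import math

def residue_simple_pole(p, dq, z0):
    """Residue of p(z)/q(z) at a simple pole z0: p(z0)/q'(z0)."""
    return p(z0) / dq(z0)

# f(z) = 1/(z**2 + 1): residue at z0 = i is 1/(2i)
res = residue_simple_pole(lambda z: 1.0, lambda z: 2 * z, 1j)
assert abs(res - 1 / (2j)) < 1e-12

# cross-check: (1/2*pi*i) * integral of f around a small circle about i
n, total = 4000, 0j
for k in range(n):
    z = 1j + 0.5 * cmath.exp(2j * math.pi * k / n)
    dz = 1j * (z - 1j) * (2 * math.pi / n)
    total += dz / (z * z + 1)
assert abs(total / (2j * math.pi) - res) < 1e-8
```

The contour radius 0.5 keeps the other pole at -i outside the path, so only the single residue contributes, exactly as the residue theorem requires.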
Also, |f(z)| \to \infty as z \to z_0 when z_0 is a pole. Let the functions p(z) and q(z) be analytic at z_0, where p(z_0) \ne 0. Then

f(z) = \frac{p(z)}{q(z)}

has a simple pole at z_0 if, and only if, q(z_0) = 0 and q'(z_0) \ne 0. The residue of f(z) at the simple pole is

A_{-1} = \frac{p(z_0)}{q'(z_0)}

If q^{(i-1)}(z_0) = 0, i = 1, ..., m, then z_0 is a pole of f(z) of order m.

Branch [33, p. 163]. A branch of a multiple-valued function f(z) is a single-valued function that is analytic in some region and whose value at each point there coincides with the value of f(z) at the point. A branch cut is a boundary that is needed to define the branch in the greatest possible region. The function f(z) is singular along a branch cut, and the singularity is not isolated. For example,

z^{1/2} = f_1(z) = \sqrt{r} \left( \cos \frac{\theta}{2} + i \sin \frac{\theta}{2} \right), -\pi < \theta < \pi, r > 0

is double valued along the negative real axis. The function tends to \sqrt{r} i when \theta \to \pi and to -\sqrt{r} i when \theta \to -\pi; the function has no limit as z \to -r (r > 0). The ray \theta = \pi is a branch cut.

Analytic Continuation [33, p. 165]. If f_1(z) is analytic in a domain D_1 and domain D contains D_1, then an analytic function f(z) may exist that equals f_1(z) in D_1. This function is the analytic continuation of f_1(z) onto D, and it is unique. For example,

f_1(z) = \sum_{n=0}^{\infty} z^n, |z| < 1

is analytic in the domain D_1: |z| < 1. The series diverges for other z. Yet the series is the MacLaurin series in D_1 of the function

f_1(z) = \frac{1}{1 - z}, |z| < 1

Thus,

f(z) = \frac{1}{1 - z}

is the analytic continuation onto the entire z plane except for z = 1.

An extension of the Cauchy integral formula is useful with Laplace transforms. Let the curve C be a straight line parallel to the imaginary axis and z_0 be any point to the right of that (see Fig. 9). A function f(z) is of order z^{-k} as |z| \to \infty if positive numbers M and r_0 exist such that

|z^k f(z)| < M when |z| > r_0, i.e., |f(z)| < M |z|^{-k} for |z| sufficiently large

Figure 9. Integration in the complex plane

Theorem [33, p. 167]. Let f(z) be analytic when R(z) \ge \gamma and O(z^{-k}) as |z| \to \infty in that half-plane, where \gamma and k are real constants and k > 0. Then for any z_0 such that R(z_0) > \gamma,

f(z_0) = \frac{1}{2\pi i} \lim_{\beta \to \infty} \int_{\gamma - i\beta}^{\gamma + i\beta} \frac{f(z)}{z_0 - z} dz

i.e., integration takes place along the line x = \gamma.

3.5. Other Results

Theorem [32, p. 84]. Let P(z) be a polynomial of degree n having the zeroes z_1, z_2, ..., z_n and let \pi be the least convex polygon containing the zeroes. Then P'(z) cannot vanish anywhere in the exterior of \pi.
If a polynomial has real coefficients, the roots are either real or form pairs of complex conjugate numbers.

The radius of convergence R of the Taylor series of f(z) about z_0 is equal to the distance from z_0 to the nearest singularity of f(z).

Conformal Mapping. Let u(x, y) be a harmonic function. Introduce the coordinate transformation

x = \hat{x}(\xi, \eta), y = \hat{y}(\xi, \eta)

It is desired that

U(\xi, \eta) = u[\hat{x}(\xi, \eta), \hat{y}(\xi, \eta)]

be a harmonic function of \xi and \eta.

Theorem [30, p. 284]. The transformation

z = f(\zeta)    (6)

takes all harmonic functions of x and y into harmonic functions of \xi and \eta if and only if either f(\zeta) or f^*(\zeta) is an analytic function of \zeta = \xi + i\eta.

Equation 6 is a restriction on the transformation which ensures that

if \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 then \frac{\partial^2 U}{\partial \xi^2} + \frac{\partial^2 U}{\partial \eta^2} = 0

Such a mapping with f(\zeta) analytic and f'(\zeta) \ne 0 is a conformal mapping.

If Laplace's equation is to be solved in the region exterior to a closed curve, then the point at infinity is in the domain D. For flow in a long channel (governed by the Laplace equation) the inlet and outlet are at infinity. In both cases the transformation

\zeta = \frac{a z + b}{z - z_0}

takes z_0 into infinity and hence maps D into a bounded domain D^*.

4. Integral Transforms [34 - 39]

4.1. Fourier Transforms

Fourier Series [40]. Let f(x) be a function that is periodic on -\pi < x < \pi. It can be expanded in a Fourier series

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} (a_n \cos nx + b_n \sin nx)

where

a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) dx, a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos nx \, dx, b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin nx \, dx

The values {a_n} and {b_n} are called the finite cosine and sine transform of f, respectively. Because

\cos nx = \frac{1}{2} (e^{inx} + e^{-inx}) and \sin nx = \frac{1}{2i} (e^{inx} - e^{-inx})

the Fourier series can be written as

f(x) = \sum_{n=-\infty}^{\infty} c_n e^{inx}

where

c_n = \frac{1}{2} (a_n - i b_n) for n \ge 0, c_n = \frac{1}{2} (a_{-n} + i b_{-n}) for n < 0

and

c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx} dx

If f is real,

c_{-n} = c_n^*

If f is continuous and piecewise continuously differentiable,

f'(x) = \sum_{n=-\infty}^{\infty} (in) c_n e^{inx}

If f is twice continuously differentiable,

f''(x) = -\sum_{n=-\infty}^{\infty} n^2 c_n e^{inx}

Inversion. The Fourier series can be used to solve linear partial differential equations with constant coefficients. For example, in the problem

\frac{\partial T}{\partial t} = \frac{\partial^2 T}{\partial x^2}

T(x, 0) = f(x)

T(-\pi, t) = T(\pi, t)
let

T = \sum_{n=-\infty}^{\infty} c_n(t) e^{inx}

Then,

\sum_{n=-\infty}^{\infty} \frac{dc_n}{dt} e^{inx} = -\sum_{n=-\infty}^{\infty} c_n(t) n^2 e^{inx}

Thus, c_n(t) satisfies

\frac{dc_n}{dt} = -n^2 c_n, or c_n = c_n(0) e^{-n^2 t}

Let c_n(0) be the Fourier coefficients of the initial conditions:

f(x) = \sum_{n=-\infty}^{\infty} c_n(0) e^{inx}

The formal solution to the problem is

T = \sum_{n=-\infty}^{\infty} c_n(0) e^{-n^2 t} e^{inx}

Fourier Transform [40]. When the function f(x) is defined on the entire real line, the Fourier transform is defined as

F[f] \equiv \hat{f}(\omega) = \int_{-\infty}^{\infty} f(x) e^{i\omega x} dx

This integral converges if

\int_{-\infty}^{\infty} |f(x)| dx

does. The inverse transformation is

f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(\omega) e^{-i\omega x} d\omega

If f(x) is continuous and piecewise continuously differentiable,

\int_{-\infty}^{\infty} f(x) e^{i\omega x} dx

converges for each \omega, and

\lim_{x \to \pm\infty} f(x) = 0

then

F\left[\frac{df}{dx}\right] = -i\omega F[f]

If f is real, \hat{f}(-\omega) = \hat{f}(\omega)^*. The real part of \hat{f} is an even function of \omega and the imaginary part is an odd function of \omega.

A function f(x) is absolutely integrable if the improper integral

\int_{-\infty}^{\infty} |f(x)| dx

has a finite value. Then the improper integral

\int_{-\infty}^{\infty} f(x) dx

converges. The function is square integrable if

\int_{-\infty}^{\infty} |f(x)|^2 dx

has a finite value. If f(x) and g(x) are square integrable, the product f(x) g(x) is absolutely integrable and satisfies the Schwarz inequality:

\left| \int_{-\infty}^{\infty} f(x) g(x) dx \right|^2 \le \int_{-\infty}^{\infty} |f(x)|^2 dx \int_{-\infty}^{\infty} |g(x)|^2 dx

The triangle inequality is also satisfied:

\left[ \int_{-\infty}^{\infty} |f + g|^2 dx \right]^{1/2} \le \left[ \int_{-\infty}^{\infty} |f|^2 dx \right]^{1/2} + \left[ \int_{-\infty}^{\infty} |g|^2 dx \right]^{1/2}

A sequence of square integrable functions f_n(x) converges in the mean to a square integrable function f(x) if

\lim_{n \to \infty} \int_{-\infty}^{\infty} |f(x) - f_n(x)|^2 dx = 0

The sequence also satisfies the Cauchy criterion

\lim_{n, m \to \infty} \int_{-\infty}^{\infty} |f_n - f_m|^2 dx = 0

Theorem [40, p. 307]. If a sequence of square integrable functions f_n(x) converges to a function f(x) uniformly on every finite interval a \le x \le b, and if it satisfies Cauchy's criterion, then f(x) is square integrable and f_n(x) converges to f(x) in the mean.
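The transform pair just defined (kernel e^{i\omega x} forward, e^{-i\omega x}/2\pi inverse) can be spot-checked on the Gaussian, whose transform is known in closed form: with this convention, e^{-x^2} transforms to \sqrt{\pi} e^{-\omega^2/4}. This is a minimal quadrature sketch (helper name ours).

```python
import math
import cmath

def fourier(f, w, L=10.0, m=4000):
    """f_hat(w) = integral of f(x) exp(i w x) dx, truncated to [-L, L]."""
    h = 2 * L / m
    total = 0j
    for k in range(m + 1):
        x = -L + k * h
        wt = 0.5 if k in (0, m) else 1.0      # trapezoidal end weights
        total += wt * f(x) * cmath.exp(1j * w * x) * h
    return total

# exp(-x^2) transforms to sqrt(pi) * exp(-w^2/4) with this sign convention
for w in (0.0, 1.0, 2.0):
    assert abs(fourier(lambda x: math.exp(-x * x), w)
               - math.sqrt(math.pi) * math.exp(-w * w / 4)) < 1e-8
```

The transform comes out purely real here because the Gaussian is real and even, illustrating the symmetry rules listed below.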
Theorem (Riesz-Fischer) [40, p. 308]. To every sequence of square integrable functions f_n(x) that satisfy Cauchy's criterion, there corresponds a square integrable function f(x) such that f_n(x) converges to f(x) in the mean. Thus, the limit in the mean of a sequence of functions is defined to within a null function.

Square integrable functions satisfy the Parseval equation:

\int_{-\infty}^{\infty} |\hat{f}(\omega)|^2 d\omega = 2\pi \int_{-\infty}^{\infty} |f(x)|^2 dx

This is also the total power in a signal, which can be computed in either the time or the frequency domain. Also,

\int_{-\infty}^{\infty} \hat{f}(\omega) \hat{g}(\omega)^* d\omega = 2\pi \int_{-\infty}^{\infty} f(x) g(x)^* dx

Fourier transforms can be used to solve differential equations too. Then it is necessary to find the inverse transformation. If f(x) is square integrable, the Fourier transform of its Fourier transform is 2\pi f(-x):

f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(\omega) e^{-i\omega x} d\omega = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} f(x') e^{i\omega x'} dx' \right] e^{-i\omega x} d\omega

so that

f(-x) = \frac{1}{2\pi} F[F[f(x)]] or f(x) = \frac{1}{2\pi} F[F[f(-x)]]

Properties of Fourier Transforms [40, p. 324], [15].

F\left[\frac{df}{dx}\right] = -i\omega F[f] = -i\omega \hat{f}

F[i x f(x)] = \frac{d}{d\omega} F[f] = \frac{d\hat{f}}{d\omega}

F[f(ax - b)] = \frac{1}{|a|} e^{i\omega b/a} \hat{f}\left(\frac{\omega}{a}\right)

F[e^{icx} f(x)] = \hat{f}(\omega + c)

F[\cos \omega_0 x \, f(x)] = \frac{1}{2} \left[ \hat{f}(\omega + \omega_0) + \hat{f}(\omega - \omega_0) \right]

F[\sin \omega_0 x \, f(x)] = \frac{1}{2i} \left[ \hat{f}(\omega + \omega_0) - \hat{f}(\omega - \omega_0) \right]

F[e^{-i\omega_0 x} f(x)] = \hat{f}(\omega - \omega_0)

If f(x) is real, then \hat{f}(-\omega) = \hat{f}^*(\omega). If f(x) is imaginary, then \hat{f}(-\omega) = -\hat{f}^*(\omega). If f(x) is even, then \hat{f}(\omega) is even. If f(x) is odd, then \hat{f}(\omega) is odd. If f(x) is real and even, then \hat{f}(\omega) is real and even. If f(x) is real and odd, then \hat{f}(\omega) is imaginary and odd. If f(x) is imaginary and even, then \hat{f}(\omega) is imaginary and even. If f(x) is imaginary and odd, then \hat{f}(\omega) is real and odd.

Convolution [40, p. 326].

f * h(x_0) \equiv \int_{-\infty}^{\infty} f(x_0 - x) h(x) dx = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\omega x_0} \hat{f}(\omega) \hat{h}(\omega) d\omega

Theorem. The product

\hat{f}(\omega) \hat{h}(\omega)

is the Fourier transform of the convolution product f * h. The convolution permits finding inverse transformations when solving differential equations. To solve

\frac{\partial T}{\partial t} = \frac{\partial^2 T}{\partial x^2}

T(x, 0) = f(x), -\infty < x < \infty

T bounded

take the Fourier transform:

\frac{d\hat{T}}{dt} + \omega^2 \hat{T} = 0

\hat{T}(\omega, 0) = \hat{f}(\omega)

The solution is

\hat{T}(\omega, t) = \hat{f}(\omega) e^{-\omega^2 t}

The inverse transformation is

T(x, t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\omega x} \hat{f}(\omega) e^{-\omega^2 t} d\omega

Because

e^{-\omega^2 t} = F\left[ \frac{1}{\sqrt{4\pi t}} e^{-x^2/4t} \right]

the convolution integral can be used to write the solution as

T(x, t) = \frac{1}{\sqrt{4\pi t}} \int_{-\infty}^{\infty} f(y) e^{-(x - y)^2/4t} dy
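The convolution solution above can be checked numerically on a case with a closed form: a Gaussian initial condition stays Gaussian, with the kernel of width parameter t_0 evolving into the kernel of width t_0 + t. This sketch (helper name ours) evaluates the convolution integral by the trapezoidal rule on a truncated domain.

```python
import math

def heat_solution(f, x, t, L=30.0, m=20000):
    """T(x,t) = (4 pi t)**-0.5 * integral of f(y) exp(-(x-y)**2/(4t)) dy,
    trapezoidal rule on [-L, L]."""
    h = 2 * L / m
    s = 0.0
    for k in range(m + 1):
        y = -L + k * h
        w = 0.5 if k in (0, m) else 1.0
        s += w * f(y) * math.exp(-(x - y) ** 2 / (4 * t)) * h
    return s / math.sqrt(4 * math.pi * t)

# a heat kernel of "age" t0 evolved by t becomes a kernel of age t0 + t
t0, t = 0.5, 1.5
f = lambda y: math.exp(-y * y / (4 * t0)) / math.sqrt(4 * math.pi * t0)
exact = lambda x: math.exp(-x * x / (4 * (t0 + t))) / math.sqrt(4 * math.pi * (t0 + t))
for x in (0.0, 1.0, 2.5):
    assert abs(heat_solution(f, x, t) - exact(x)) < 1e-6
```

The agreement reflects the semigroup property of the heat kernel, which in transform space is simply e^{-\omega^2 t_0} e^{-\omega^2 t} = e^{-\omega^2 (t_0 + t)}.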
Finite Fourier Sine and Cosine Transforms [41]. In analogy with finite Fourier transforms (on -\pi to \pi) and Fourier transforms (on -\infty to +\infty), finite Fourier sine and cosine transforms (on 0 to \pi) and Fourier sine and cosine transforms (on 0 to +\infty) can be defined.

The finite Fourier sine and cosine transforms are

\hat{f}_s(n) = F_s^n[f] = \frac{2}{\pi} \int_0^{\pi} f(x) \sin nx \, dx, n = 1, 2, ...

\hat{f}_c(n) = F_c^n[f] = \frac{2}{\pi} \int_0^{\pi} f(x) \cos nx \, dx, n = 0, 1, 2, ...

with inverses

f(x) = \sum_{n=1}^{\infty} \hat{f}_s(n) \sin nx

f(x) = \frac{1}{2} \hat{f}_c(0) + \sum_{n=1}^{\infty} \hat{f}_c(n) \cos nx

They obey the operational properties

F_s^n\left[\frac{d^2 f}{dx^2}\right] = -n^2 F_s^n[f] + \frac{2n}{\pi} [f(0) - (-1)^n f(\pi)]

when f, f' are continuous and f'' is piecewise continuous on 0 \le x \le \pi. Also,

F_s^n\left[\frac{df}{dx}\right] = -n F_c^n[f]

F_c^n\left[\frac{df}{dx}\right] = n F_s^n[f] - \frac{2}{\pi} f(0) + (-1)^n \frac{2}{\pi} f(\pi)

F_c^n\left[\frac{d^2 f}{dx^2}\right] = -n^2 F_c^n[f] - \frac{2}{\pi} \frac{df}{dx}(0) + (-1)^n \frac{2}{\pi} \frac{df}{dx}(\pi)

\hat{f}_s(n) \cos nk = F_s^n\left[\frac{1}{2} f_1(x - k) + \frac{1}{2} f_1(x + k)\right]

where f_1 is the odd extension of f and k is a constant, and

\hat{f}_c(n) \cos nk = F_c^n\left[\frac{1}{2} f_2(x - k) + \frac{1}{2} f_2(x + k)\right]

\hat{f}_c(n) (-1)^n = F_c^n[f(\pi - x)]

where f_2 is the even extension of f.

When two functions F(x) and G(x) are defined on the interval -2\pi < x < 2\pi, the function

F(x) * G(x) = \int_{-\pi}^{\pi} F(x - y) G(y) dy

is the convolution on -\pi < x < \pi. If F and G are both even or both odd, the convolution is even; it is odd if one function is even and the other odd. If F and G are piecewise continuous on 0 \le x \le \pi, then

\hat{f}_s(n) \hat{g}_s(n) = F_c^n\left[-\frac{1}{2} F_1 * G_1(x)\right]

\hat{f}_s(n) \hat{g}_c(n) = F_s^n\left[\frac{1}{2} F_1 * G_2(x)\right]

\hat{f}_c(n) \hat{g}_c(n) = F_c^n\left[\frac{1}{2} F_2 * G_2(x)\right]

Table 3. Finite sine transforms [41]

\hat{f}_s(n) = \int_0^{\pi} F(x) \sin nx \, dx (n = 1, 2, ...)  |  F(x) (0 < x < \pi)

\hat{f}_s(n)  |  F(x)
(-1)^{n+1} \hat{f}_s(n)  |  F(\pi - x)
1/n  |  (\pi - x)/\pi
(-1)^{n+1}/n  |  x/\pi
[1 - (-1)^n]/n  |  1
(1/n^2) \sin nc (0 < c < \pi)  |  x(\pi - c)/\pi (x \le c); c(\pi - x)/\pi (x \ge c)
(\pi/n) \cos nc (0 < c < \pi)  |  -x (x < c); \pi - x (x > c)
\pi^2 (-1)^{n-1}/n - (2/n^3)[1 - (-1)^n]  |  x^2
\pi (-1)^n (6/n^3 - \pi^2/n)  |  x^3
[n/(n^2 + c^2)] [1 - (-1)^n e^{c\pi}]  |  e^{cx}
n/(n^2 + c^2)  |  \sinh c(\pi - x)/\sinh c\pi
n/(n^2 - k^2) (|k| \ne 0, 1, 2, ...)  |  \sin k(\pi - x)/\sin k\pi
0 (n \ne m); \hat{f}_s(m) = \pi/2  |  \sin mx (m = 1, 2, ...)
[n/(n^2 - k^2)] [1 - (-1)^n \cos k\pi] (|k| \ne 1, 2, ...)  |  \cos kx
[n/(n^2 - m^2)] [1 - (-1)^{n+m}], (n \ne m); \hat{f}_s(m) = 0  |  \cos mx (m = 1, 2, ...)

The material is reproduced with permission of McGraw-Hill, Inc.
where F_1 and G_1 are odd extensions of F and G, respectively, and F_2 and G_2 are even extensions of F and G, respectively. Finite sine and cosine transforms are listed in Tables 3 and 4.

Table 4. Finite cosine transforms [41]

\hat{f}_c(n) = \int_0^{\pi} F(x) \cos nx \, dx (n = 0, 1, ...)  |  F(x) (0 < x < \pi)

(-1)^n \hat{f}_c(n)  |  F(\pi - x)
0 when n = 1, 2, ...; \hat{f}_c(0) = \pi  |  1
(2/n) \sin nc; \hat{f}_c(0) = 2c - \pi  |  1 (0 < x < c); -1 (c < x < \pi)
-[1 - (-1)^n]/n^2; \hat{f}_c(0) = \pi^2/2  |  x
(-1)^n/n^2; \hat{f}_c(0) = \pi^2/6  |  x^2/(2\pi)
1/n^2; \hat{f}_c(0) = 0  |  (x - \pi)^2/(2\pi) - \pi/6
[(-1)^n e^{c\pi} - 1]/(n^2 + c^2)  |  (1/c) e^{cx}
1/(n^2 + c^2)  |  \cosh c(\pi - x)/(c \sinh c\pi)
[(-1)^n \cos k\pi - 1]/(n^2 - k^2) (|k| \ne 0, 1, ...)  |  (1/k) \sin kx
[(-1)^{n+m} - 1]/(n^2 - m^2); \hat{f}_c(m) = 0 (m = 1, ...)  |  (1/m) \sin mx
-1/(n^2 - k^2) (|k| \ne 0, 1, ...)  |  \cos k(\pi - x)/(k \sin k\pi)
0 (n \ne m); \hat{f}_c(m) = \pi/2 (m = 1, 2, ...)  |  \cos mx

The material is reproduced with permission of McGraw-Hill, Inc.

On the semi-infinite domain, 0 < x < \infty, the Fourier sine and cosine transforms are

F_s[f] \equiv \int_0^{\infty} f(x) \sin \omega x \, dx, F_c[f] \equiv \int_0^{\infty} f(x) \cos \omega x \, dx

and

f(x) = \frac{2}{\pi} F_s[F_s[f]], f(x) = \frac{2}{\pi} F_c[F_c[f]]

The sine transform is an odd function of \omega, whereas the cosine transform is an even function of \omega. Also,

F_s\left[\frac{d^2 f}{dx^2}\right] = \omega f(0) - \omega^2 F_s[f]

F_c\left[\frac{d^2 f}{dx^2}\right] = -\frac{df}{dx}(0) - \omega^2 F_c[f]

provided f(x) and f'(x) \to 0 as x \to \infty. Thus, the sine transform is useful when f(0) is known and the cosine transform is useful when f'(0) is known.

Hsu and Dranoff [42] solved a chemical engineering problem by applying finite Fourier transforms and then using the fast Fourier transform (see Chap. 2).

4.2. Laplace Transforms

Consider a function F(t) defined for t > 0. The Laplace transform of F(t) is [41]

L[F] = f(s) = \int_0^{\infty} e^{-st} F(t) dt

The Laplace transformation is linear, that is,

L[F + G] = L[F] + L[G]

Thus, the techniques described herein can be applied only to linear problems. Generally, the assumptions made below are that F(t) is at least piecewise continuous, that it is continuous in each finite interval within 0 < t < \infty, and that it may take a jump between intervals. It is also of exponential order, meaning e^{-\alpha t} |F(t)| is bounded for all t > T, for some finite T.

The unit step function is

S_k(t) = 0 (0 \le t < k), 1 (t > k)

and its Laplace transform is

L[S_k(t)] = \frac{e^{-ks}}{s}

In particular, if k = 0 then

L[1] = \frac{1}{s}

The Laplace transforms of the first and second derivatives of F(t) are

L\left[\frac{dF}{dt}\right] = s f(s) - F(0)

L\left[\frac{d^2 F}{dt^2}\right] = s^2 f(s) - s F(0) - \frac{dF}{dt}(0)

More generally,

L\left[\frac{d^n F}{dt^n}\right] = s^n f(s) - s^{n-1} F(0) - s^{n-2} \frac{dF}{dt}(0) - ... - \frac{d^{n-1} F}{dt^{n-1}}(0)

The inverse Laplace transformation is

F(t) = L^{-1}[f(s)] where f(s) = L[F]

The inverse Laplace transformation is not unique because functions that are identical except for
isolated points have the same Laplace transform. They are unique to within a null function. Thus, if

L[F_1] = f(s) and L[F_2] = f(s)

it must be that

F_2 = F_1 + N(t) where \int_0^T N(t) dt = 0 for every T

Laplace transforms can be inverted by using Table 5, but knowledge of several rules is helpful.

Table 5. Laplace transforms (see [23] for a more complete list)

L[F]  |  F(t)
1/s  |  1
1/s^2  |  t
1/s^n (n = 1, 2, ...)  |  t^{n-1}/(n - 1)!
1/\sqrt{s}  |  1/\sqrt{\pi t}
s^{-3/2}  |  2 \sqrt{t/\pi}
\Gamma(k)/s^k (k > 0)  |  t^{k-1}
1/(s - a)  |  e^{at}
1/(s - a)^n (n = 1, 2, ...)  |  t^{n-1} e^{at}/(n - 1)!
\Gamma(k)/(s - a)^k (k > 0)  |  t^{k-1} e^{at}
1/[(s - a)(s - b)]  |  (e^{at} - e^{bt})/(a - b)
s/[(s - a)(s - b)]  |  (a e^{at} - b e^{bt})/(a - b)
1/(s^2 + a^2)  |  (1/a) \sin at
s/(s^2 + a^2)  |  \cos at
1/(s^2 - a^2)  |  (1/a) \sinh at
s/(s^2 - a^2)  |  \cosh at
s/(s^2 + a^2)^2  |  (t/2a) \sin at
(s^2 - a^2)/(s^2 + a^2)^2  |  t \cos at
1/[(s - a)^2 + b^2]  |  (1/b) e^{at} \sin bt
(s - a)/[(s - a)^2 + b^2]  |  e^{at} \cos bt

Substitution.

f(s - a) = L[e^{at} F(t)]

This can be used with polynomials. Suppose

f(s) = \frac{1}{s} + \frac{1}{s + 3} = \frac{2s + 3}{s (s + 3)}

Because

L[1] = \frac{1}{s}

then

F(t) = 1 + e^{-3t}, t \ge 0

More generally, translation gives the following.

Translation.

f(a s - b) = f\left(a \left(s - \frac{b}{a}\right)\right) = L\left[\frac{1}{a} e^{bt/a} F\left(\frac{t}{a}\right)\right], a > 0

The step function

S(t) = 0 (0 \le t < h), 1 (h \le t < 2h), 2 (2h \le t < 3h), ...

has the Laplace transform

L[S(t)] = \frac{1}{s} \frac{1}{e^{hs} - 1}

The Dirac delta function \delta(t - t_0) (see Equation 116 in Mathematical Modeling) has the property

\int_0^{\infty} \delta(t - t_0) F(t) dt = F(t_0)

Its Laplace transform is

L[\delta(t - t_0)] = e^{-s t_0}, t_0 \ge 0, s > 0

The square wave function illustrated in Figure 10 has Laplace transform

L[F_c(t)] = \frac{1}{s} \tanh \frac{cs}{2}

The triangular wave function illustrated in Figure 11 has Laplace transform

L[T_c(t)] = \frac{1}{s^2} \tanh \frac{cs}{2}

Other Laplace transforms are listed in Table 5.
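The defining integral and the table entries can be cross-checked by direct quadrature, truncating the upper limit once e^{-st} F(t) is negligible. This sketch (helper name ours) verifies L[1] = 1/s, an exponential entry, and the unit step transform e^{-ks}/s.

```python
import math

def laplace(F, s, T=60.0, m=60000):
    """integral_0^T exp(-s t) F(t) dt by the trapezoidal rule
    (T chosen large enough that the truncated tail is negligible)."""
    h = T / m
    total = 0.0
    for k in range(m + 1):
        t = k * h
        w = 0.5 if k in (0, m) else 1.0
        total += w * math.exp(-s * t) * F(t) * h
    return total

s = 2.0
assert abs(laplace(lambda t: 1.0, s) - 1 / s) < 1e-6                  # L[1] = 1/s
assert abs(laplace(lambda t: math.exp(-t), s) - 1 / (s + 1)) < 1e-6   # 1/(s-a), a = -1
k0 = 1.5                                                              # unit step at k0
assert abs(laplace(lambda t: 0.0 if t < k0 else 1.0, s)
           - math.exp(-k0 * s) / s) < 1e-3
```

The step-function check uses a looser tolerance because the trapezoidal rule loses accuracy at the jump; the smooth entries agree to quadrature precision.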
Figure 10. Square wave function

Figure 11. Triangular wave function

Convolution properties are also satisfied:

F(t) * G(t) = \int_0^t F(\tau) G(t - \tau) d\tau

and

f(s) g(s) = L[F(t) * G(t)]

Derivatives of Laplace Transforms. The Laplace integrals L[F(t)], L[t F(t)], L[t^2 F(t)], ... are uniformly convergent for s \ge s_1 > \alpha and

\lim_{s \to \infty} f(s) = 0, \lim_{s \to \infty} L[t^n F(t)] = 0, n = 1, 2, ...

and

\frac{d^n f}{ds^n} = L[(-t)^n F(t)]

Integration of Laplace Transforms.

\int_s^{\infty} f(\sigma) d\sigma = L\left[\frac{F(t)}{t}\right]

If F(t) is a periodic function, F(t) = F(t + a), then

f(s) = \frac{1}{1 - e^{-as}} \int_0^a e^{-st} F(t) dt, where F(t) = F(t + a)

Partial Fractions [43]. Suppose q(s) has m factors

q(s) = (s - a_1)(s - a_2) \cdots (s - a_m)

All the factors are linear, none are repeated, and the a_n are all distinct. If p(s) has a smaller degree than q(s), the Heaviside expansion can be used to evaluate the inverse transformation:

L^{-1}\left[\frac{p(s)}{q(s)}\right] = \sum_{i=1}^{m} \frac{p(a_i)}{q'(a_i)} e^{a_i t}

If the factor (s - a) is repeated m times, then

f(s) = \frac{p(s)}{q(s)} = \frac{A_m}{(s - a)^m} + \frac{A_{m-1}}{(s - a)^{m-1}} + ... + \frac{A_1}{s - a} + h(s)

where

\varphi(s) \equiv \frac{(s - a)^m p(s)}{q(s)}

A_m = \varphi(a), A_k = \frac{1}{(m - k)!} \frac{d^{m-k} \varphi}{ds^{m-k}}\bigg|_a, k = 1, ..., m - 1

The term h(s) denotes the sum of partial fractions not under consideration. The inverse transformation is then

F(t) = e^{at} \left[ A_m \frac{t^{m-1}}{(m - 1)!} + A_{m-1} \frac{t^{m-2}}{(m - 2)!} + ... + A_2 \frac{t}{1!} + A_1 \right] + H(t)

The term in F(t) corresponding to

s - a in q(s) is \varphi(a) e^{at}

(s - a)^2 in q(s) is [\varphi'(a) + \varphi(a) t] e^{at}

(s - a)^3 in q(s) is \frac{1}{2} [\varphi''(a) + 2 \varphi'(a) t + \varphi(a) t^2] e^{at}

For example, let

f(s) = \frac{1}{(s - 2)(s - 1)^2}

For the factor s - 2,

\varphi(s) = \frac{1}{(s - 1)^2}, \varphi(2) = 1

For the factor (s - 1)^2,

\varphi(s) = \frac{1}{s - 2}, \varphi'(s) = -\frac{1}{(s - 2)^2}

\varphi(1) = -1, \varphi'(1) = -1

The inverse Laplace transform is then

F(t) = e^{2t} - (1 + t) e^t
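The Heaviside expansion for distinct linear factors is easy to automate. This sketch (all names ours) builds F(t) = \sum p(a_i)/q'(a_i) e^{a_i t} for q(s) with three distinct real roots and p(s) = 1, then confirms that the numerical Laplace transform of F reproduces 1/q(s).

```python
import math

roots = [-1.0, -2.0, -4.0]       # distinct roots of q(s)

def q(s):
    r = 1.0
    for a in roots:
        r *= (s - a)
    return r

def dq(s, h=1e-6):
    """q'(s) by centered difference (exact enough for a cubic)."""
    return (q(s + h) - q(s - h)) / (2 * h)

def F(t):
    """Heaviside expansion of 1/q(s): sum of exp(a_i t)/q'(a_i)."""
    return sum(math.exp(a * t) / dq(a) for a in roots)

# the numerical Laplace transform of F should reproduce 1/q(s)
s, T, m = 1.0, 80.0, 80000
h = T / m
num = sum((0.5 if k in (0, m) else 1.0) * math.exp(-s * k * h) * F(k * h) * h
          for k in range(m + 1))
assert abs(num - 1 / q(s)) < 1e-5
```

With these roots F(t) = e^{-t}/3 - e^{-2t}/2 + e^{-4t}/6, and F(0) = 0, consistent with 1/q(s) being O(1/s^3) as s grows.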
Quadratic Factors. Let p(s) and q(s) have real coefficients, and let q(s) have the factor

(s - a)^2 + b^2, b > 0

where a and b are real numbers. Then define \varphi(s) and h(s) and real constants A and B such that

f(s) = \frac{p(s)}{q(s)} = \frac{\varphi(s)}{(s - a)^2 + b^2} = \frac{A s + B}{(s - a)^2 + b^2} + h(s)

Let \varphi_1 and \varphi_2 be the real and imaginary parts of the complex number \varphi(a + ib):

\varphi(a + ib) \equiv \varphi_1 + i \varphi_2

Then

f(s) = \frac{1}{b} \frac{\varphi_2 (s - a) + \varphi_1 b}{(s - a)^2 + b^2} + h(s)

F(t) = \frac{1}{b} e^{at} (\varphi_2 \cos bt + \varphi_1 \sin bt) + H(t)

To solve ordinary differential equations by using these results, consider

Y''(t) - 2 Y'(t) + Y(t) = e^{2t}

Y(0) = 0, Y'(0) = 0

Taking Laplace transforms

L[Y''(t)] - 2 L[Y'(t)] + L[Y(t)] = \frac{1}{s - 2}

using the rules

s^2 y(s) - s Y(0) - Y'(0) - 2 [s y(s) - Y(0)] + y(s) = \frac{1}{s - 2}

and combining terms

(s^2 - 2s + 1) y(s) = \frac{1}{s - 2}

y(s) = \frac{1}{(s - 2)(s - 1)^2}

lead to

Y(t) = e^{2t} - (1 + t) e^t

To solve an integral equation:

Y(t) = a + 2 \int_0^t Y(\tau) \cos(t - \tau) d\tau

it is written as

Y(t) = a + 2 Y(t) * \cos t

Then the Laplace transform is used to obtain

y(s) = \frac{a}{s} + 2 y(s) \frac{s}{s^2 + 1}

or

y(s) = \frac{a (s^2 + 1)}{s (s - 1)^2}

Taking the inverse transformation gives

Y(t) = a (1 + 2 t e^t)

Next, let the variable s in the Laplace transform be complex. F(t) is still a real-valued function of the positive real variable t. The properties given above are still valid for s complex, but additional techniques are available for evaluating the integrals. The real-valued function is O[exp(x_0 t)]:

|F(t)| < M e^{x_0 t}, z_0 = x_0 + i y_0

The Laplace transform

f(s) = \int_0^{\infty} e^{-st} F(t) dt

is an analytic function of s in the half-plane x > x_0 and is absolutely convergent there; it is uniformly convergent on x \ge x_1 > x_0. Also,

\frac{d^n f}{ds^n} = L[(-t)^n F(t)], n = 1, 2, ..., x > x_0

and

\overline{f(s)} = f(\bar{s})

The functions |f(s)| and |x f(s)| are bounded in the half-plane x \ge x_1 > x_0 and f(s) \to 0 as |y| \to \infty for each fixed x. Thus,

|f(x + iy)| < M, |x f(x + iy)| < M, x \ge x_1 > x_0

\lim_{y \to \pm\infty} f(x + iy) = 0, x > x_0

If F(t) is continuous, F'(t) is piecewise continuous, and both functions are O[exp(x_0 t)], then |f(s)| is O(1/s) in each half-plane x \ge x_1 > x_0:

|s f(s)| < M

If F(t) and F'(t) are continuous, F''(t) is piecewise continuous, and all three functions are O[exp(x_0 t)], then

|s^2 f(s) - s F(0)| < M, x \ge x_1 > x_0

The additional constraint F(0) = 0 is necessary and sufficient for |f(s)| to be O(1/s^2).

Inversion Integral [41]. Cauchy's integral formula for f(s) analytic and O(s^{-k}) in a half-plane x \ge \gamma, k > 0, is

f(s) = \frac{1}{2\pi i} \lim_{\beta \to \infty} \int_{\gamma - i\beta}^{\gamma + i\beta} \frac{f(z)}{s - z} dz, Re(s) > \gamma
Mathematics in Chemical Engineering 37


Applying the inverse Laplace transformation on F (t) = ?n (t)
either side of this equation gives n=1

+i
 When sn is a simple pole
1
F (t) = lim ezt f (z) dz ?n (t) = lim (zsn ) ezt f (z)
2 i zsn
i
= e sn t lim (zsn ) f (z)
If F (t) is of order O [exp (x 0 t)] and F (t) and zsn

F  (t) are piecewise continuous, the inversion When


integral exists. At any point t 0 , where F (t) is
p (z)
discontinuous, the inversion integral represents f (z) =
q (z)
the mean value
1 where p (z) and q (z) are analytic at z = sn , p
F (t0 ) = lim [F (t0 +) +F (t0 )] (sn ) = 0, then
2
When t = 0 the inversion integral represents 0.5 p (sn ) sn t
?n (t) = e
F (O +) and when t < 0, it has the value zero. q  (sn )
If f (s) is a function of the complex variable If sn is a removable pole of f (s), of order m,
s that is analytic and of order O (sk m ) on R then
(s) x 0 , where k > 1 and m is a positive inte-
ger, then the inversion integral converges to F(t) and

    d^n F/dt^n = \frac{1}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i\infty} e^{zt} z^n f(z)\,dz, \quad n = 1, 2, \ldots, m

Also F(t) and its n derivatives are continuous functions of t of order O[exp(x_0 t)] and they vanish at t = 0:

    F(0) = F'(0) = \cdots = F^{(m)}(0) = 0

Series of Residues [41]. Let f(s) be an analytic function except for a set of isolated singular points. An isolated singular point is one for which f(z) is analytic for 0 < |z - z_0| < \rho but z_0 is a singularity of f(z). An isolated singular point is either a pole, a removable singularity, or an essential singularity. If f(z) is not defined in the neighborhood of z_0 but can be made analytic at z_0 simply by defining it at some additional points, then z_0 is a removable singularity. The function f(z) has a pole of order k \ge 1 at z_0 if (z - z_0)^k f(z) has a removable singularity at z_0 whereas (z - z_0)^{k-1} f(z) has an unremovable isolated singularity at z_0. Any isolated singularity that is not a pole or a removable singularity is an essential singularity.

Let the function f(z) be analytic except for the isolated singular points s_1, s_2, \ldots, s_n. Let \rho_n(t) be the residue of e^{zt} f(z) at z = s_n (for definition of residue, see Section 3.4). If f(z) has a pole of order m at s_n, then \varphi_n(z) = (z - s_n)^m f(z) is analytic at s_n and the residue is

    \rho_n(t) = \frac{1}{(m-1)!} \frac{d^{m-1}\psi_n}{dz^{m-1}}(s_n), \quad \text{where } \psi_n(z) = \varphi_n(z)\, e^{zt}

An important inversion integral is when

    f(s) = \frac{1}{s} \exp(-s^{1/2})

The inverse transform is

    F(t) = 1 - \mathrm{erf}\frac{1}{2\sqrt{t}} = \mathrm{erfc}\frac{1}{2\sqrt{t}}

where erf is the error function and erfc the complementary error function.

4.3. Solution of Partial Differential Equations by Using Transforms

A common problem facing chemical engineers is to solve the heat conduction equation or diffusion equation

    \rho C_p \frac{\partial T}{\partial t} = k \frac{\partial^2 T}{\partial x^2} \quad \text{or} \quad \frac{\partial c}{\partial t} = D \frac{\partial^2 c}{\partial x^2}

The equations can be solved on an infinite domain -\infty < x < \infty, a semi-infinite domain 0 \le x < \infty, or a finite domain 0 \le x \le L. At a boundary, the conditions can be a fixed temperature T(0, t) = T_0 (boundary condition of the first kind, or Dirichlet condition), or a fixed flux -k\,\partial T/\partial x(0, t) = q_0 (boundary condition of the second kind, or Neumann condition), or a combination -k\,\partial T/\partial x(0, t) = h[T(0, t) - T_0] (boundary condition of the third kind, or Robin condition).
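The erfc inversion above can be checked numerically. The sketch below uses the Gaver-Stehfest algorithm, a standard numerical Laplace-inversion method that is not part of this chapter's derivation, to invert f(s) = exp(-s^{1/2})/s and compares the result with erfc(1/(2*sqrt(t))); the choice N = 12 is an assumption that works well in double precision.

```python
import math

def stehfest_coeffs(N):
    # Gaver-Stehfest weights V_k for an even number of terms N
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(f, t, N=12):
    # F(t) ~ (ln 2 / t) * sum_k V_k * f(k * ln2 / t)
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(Vk * f((k + 1) * ln2 / t) for k, Vk in enumerate(V))

f = lambda s: math.exp(-math.sqrt(s)) / s
approx = invert_laplace(f, 1.0)
exact = math.erfc(1.0 / (2.0 * math.sqrt(1.0)))
print(approx, exact)
```

The agreement is good for smooth transforms like this one; Stehfest inversion degrades for oscillatory F(t).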
38 Mathematics in Chemical Engineering

The functions T_0 and q_0 can be functions of time. All properties are constant (\rho, C_p, k, D, h), so that the problem remains linear. Solutions are presented on all domains with various boundary conditions for the heat conduction problem

    \frac{\partial T}{\partial t} = \alpha \frac{\partial^2 T}{\partial x^2}, \quad \alpha = \frac{k}{\rho C_p}

Problem 1. Infinite domain, on -\infty < x < \infty.

    T(x, 0) = f(x), initial conditions
    T(x, t) bounded

Solution is via Fourier transforms

    \hat{T}(\omega, t) = \int_{-\infty}^{\infty} T(x, t)\, e^{-i\omega x}\, dx

Applied to the differential equation, with F[\partial^2 T/\partial x^2] = -\omega^2 F[T], this gives

    \frac{\partial \hat{T}}{\partial t} + \alpha\omega^2 \hat{T} = 0, \quad \hat{T}(\omega, 0) = \hat{f}(\omega)

By solving

    \hat{T}(\omega, t) = \hat{f}(\omega)\, e^{-\alpha\omega^2 t}

the inverse transformation gives [40, p. 328], [44, p. 58]

    T(x, t) = \frac{1}{2\pi} \lim_{L\to\infty} \int_{-L}^{L} e^{i\omega x} \hat{f}(\omega)\, e^{-\alpha\omega^2 t}\, d\omega

Another solution is via Laplace transforms; take the Laplace transform of the original differential equation:

    s\, t(s, x) - f(x) = \alpha \frac{\partial^2 t}{\partial x^2}

This equation can be solved with Fourier transforms [40, p. 355]:

    t(s, x) = \frac{1}{2\sqrt{s\alpha}} \int_{-\infty}^{\infty} e^{-\sqrt{s/\alpha}\,|x - y|} f(y)\, dy

The inverse transformation is [40, p. 357], [44, p. 53]

    T(x, t) = \frac{1}{\sqrt{4\pi\alpha t}} \int_{-\infty}^{\infty} e^{-(x-y)^2/4\alpha t} f(y)\, dy

Problem 2. Semi-infinite domain, boundary condition of the first kind, on 0 \le x < \infty.

    T(x, 0) = T_0 = constant
    T(0, t) = T_1 = constant

The solution is

    T(x, t) = T_0 + (T_1 - T_0)\left[1 - \mathrm{erf}\frac{x}{\sqrt{4\alpha t}}\right]
    or T(x, t) = T_0 + (T_1 - T_0)\, \mathrm{erfc}\frac{x}{\sqrt{4\alpha t}}

Problem 3. Semi-infinite domain, boundary condition of the first kind, on 0 \le x < \infty.

    T(x, 0) = f(x)
    T(0, t) = g(t)

The solution is written as

    T(x, t) = T_1(x, t) + T_2(x, t)

where

    T_1(x, 0) = f(x), \quad T_2(x, 0) = 0
    T_1(0, t) = 0, \quad T_2(0, t) = g(t)

Then T_1 is solved by taking the sine transform

    U_1 = F_s[T_1]
    \frac{\partial U_1}{\partial t} = -\alpha\omega^2 U_1
    U_1(\omega, 0) = F_s[f]

Thus,

    U_1(\omega, t) = F_s[f]\, e^{-\alpha\omega^2 t}

and [40, p. 322]

    T_1(x, t) = \frac{2}{\pi} \int_0^{\infty} F_s[f]\, e^{-\alpha\omega^2 t} \sin\omega x\, d\omega

Solve for T_2 by taking the sine transform

    U_2 = F_s[T_2]
    \frac{\partial U_2}{\partial t} = -\alpha\omega^2 U_2 + \alpha\omega\, g(t)
    U_2(\omega, 0) = 0

Thus,

    U_2(\omega, t) = \alpha\omega \int_0^t e^{-\alpha\omega^2 (t-\tau)} g(\tau)\, d\tau
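The Problem 2 solution is simple enough to evaluate directly with the standard-library complementary error function; the property values below are illustrative assumptions, not taken from the text.

```python
import math

# Semi-infinite solid, surface held at T1, initially at T0 (Problem 2):
#   T(x, t) = T0 + (T1 - T0) * erfc(x / sqrt(4*alpha*t))
def temperature(x, t, T0, T1, alpha):
    return T0 + (T1 - T0) * math.erfc(x / math.sqrt(4.0 * alpha * t))

T0, T1, alpha = 20.0, 100.0, 1e-5   # illustrative values (deg C, m^2/s)
print(temperature(0.0, 10.0, T0, T1, alpha))   # surface stays at T1 = 100.0
print(temperature(0.5, 10.0, T0, T1, alpha))   # far from the surface, still ~T0
```

The limits check the boundary and initial conditions: erfc(0) = 1 recovers T_1 at the surface, and erfc of a large argument vanishes, leaving T_0 deep in the solid.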
Inverting the sine transform of U_2 gives [40, p. 435]

    T_2(x, t) = \frac{2\alpha}{\pi} \int_0^{\infty} \omega \sin\omega x \int_0^t e^{-\alpha\omega^2(t-\tau)} g(\tau)\, d\tau\, d\omega

The solution for T_1 can also be obtained by Laplace transforms:

    t_1 = L[T_1]

Applying this to the differential equation gives

    s\, t_1 - f(x) = \alpha \frac{\partial^2 t_1}{\partial x^2}, \quad t_1(0, s) = 0

and solving gives

    t_1 = \frac{1}{2\sqrt{s\alpha}} \int_0^{\infty} \left[ e^{-\sqrt{s/\alpha}\,|x - x'|} - e^{-\sqrt{s/\alpha}\,(x + x')} \right] f(x')\, dx'

and the inverse transformation is [40, p. 437], [44, p. 59]

    T_1(x, t) = \frac{1}{\sqrt{4\pi\alpha t}} \int_0^{\infty} \left[ e^{-(x-\xi)^2/4\alpha t} - e^{-(x+\xi)^2/4\alpha t} \right] f(\xi)\, d\xi    (7)

Problem 4. Semi-infinite domain, boundary conditions of the second kind, on 0 \le x < \infty.

    T(x, 0) = 0
    -k\, \frac{\partial T}{\partial x}(0, t) = q_0 = constant

Take the Laplace transform

    t(x, s) = L[T(x, t)]
    s\, t = \alpha \frac{\partial^2 t}{\partial x^2}
    -k\, \frac{\partial t}{\partial x}(0, s) = \frac{q_0}{s}

The solution is

    t(x, s) = \frac{q_0}{k} \frac{\sqrt{\alpha}}{s^{3/2}}\, e^{-x\sqrt{s/\alpha}}

The inverse transformation is [41, p. 131], [44, p. 75]

    T(x, t) = \frac{2 q_0}{k} \sqrt{\frac{\alpha t}{\pi}}\, e^{-x^2/4\alpha t} - \frac{q_0 x}{k}\, \mathrm{erfc}\frac{x}{\sqrt{4\alpha t}}

Problem 5. Semi-infinite domain, boundary conditions of the third kind, on 0 \le x < \infty.

    T(x, 0) = f(x)
    k\, \frac{\partial T}{\partial x}(0, t) = h\, T(0, t)

Take the Laplace transform

    s\, t - f(x) = \alpha \frac{\partial^2 t}{\partial x^2}
    k\, \frac{\partial t}{\partial x}(0, s) = h\, t(0, s)

The solution is

    t(x, s) = \int_0^{\infty} f(\xi)\, g(x, \xi, s)\, d\xi

where [41, p. 227]

    2\sqrt{s/\alpha}\; g(x, \xi, s) = \exp\left(-|x - \xi|\sqrt{s/\alpha}\right) + \frac{\sqrt{s/\alpha} - h/k}{\sqrt{s/\alpha} + h/k} \exp\left(-(x + \xi)\sqrt{s/\alpha}\right)

One form of the inverse transformation is [41, p. 228]

    T(x, t) = \frac{2}{\pi} \int_0^{\infty} \int_0^{\infty} e^{-\alpha\omega^2 t} \cos[\omega x - \varphi(\omega)]\, \cos[\omega\xi - \varphi(\omega)]\, f(\xi)\, d\xi\, d\omega, \quad \varphi(\omega) = \arg(\omega + i\,h/k)

Another form of the inverse transformation when f = T_0 = constant is [41, p. 231], [44, p. 71]

    T(x, t) = T_0 \left\{ \mathrm{erf}\frac{x}{\sqrt{4\alpha t}} + e^{hx/k}\, e^{\alpha h^2 t/k^2}\, \mathrm{erfc}\left[\frac{h\sqrt{\alpha t}}{k} + \frac{x}{\sqrt{4\alpha t}}\right] \right\}

Problem 6. Finite domain, boundary condition of the first kind.

    T(x, 0) = T_0 = constant
    T(0, t) = T(L, t) = 0

Take the Laplace transform

    s\, t(x, s) - T_0 = \alpha \frac{\partial^2 t}{\partial x^2}
    t(0, s) = t(L, s) = 0

The solution is

    t(x, s) = -\frac{T_0}{s} \frac{\sinh\sqrt{s/\alpha}\, x}{\sinh\sqrt{s/\alpha}\, L} - \frac{T_0}{s} \frac{\sinh\sqrt{s/\alpha}\,(L - x)}{\sinh\sqrt{s/\alpha}\, L} + \frac{T_0}{s}
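The constant-flux solution of Problem 4 can also be evaluated directly; the values of q_0, k, and alpha below are illustrative assumptions. The check exploits the fact that the surface temperature rise grows as sqrt(t).

```python
import math

# Semi-infinite solid with constant surface heat flux q0 (Problem 4):
#   T(x,t) = (2*q0/k)*sqrt(alpha*t/pi)*exp(-x^2/(4*alpha*t))
#            - (q0*x/k)*erfc(x/sqrt(4*alpha*t))
def temperature_flux(x, t, q0, k, alpha):
    s = math.sqrt(4.0 * alpha * t)
    return ((2.0 * q0 / k) * math.sqrt(alpha * t / math.pi)
            * math.exp(-(x / s) ** 2)
            - (q0 * x / k) * math.erfc(x / s))

q0, k, alpha = 1000.0, 50.0, 1e-5   # illustrative values
# At x = 0 the solution reduces to (2*q0/k)*sqrt(alpha*t/pi), so quadrupling
# the time should double the surface temperature rise:
ratio = temperature_flux(0.0, 4.0, q0, k, alpha) / temperature_flux(0.0, 1.0, q0, k, alpha)
print(ratio)   # 2.0
```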

Inverting the transform for Problem 6 gives [41, p. 220], [44, p. 96]

    T(x, t) = T_0 \sum_{n=1,3,5,\ldots} \frac{4}{n\pi}\, e^{-n^2\pi^2\alpha t/L^2} \sin\frac{n\pi x}{L}

or (depending on the inversion technique) [40, pp. 362, 438]

    T(x, t) = \frac{T_0}{\sqrt{4\pi\alpha t}} \int_0^{L} \sum_{n=-\infty}^{\infty} \left\{ e^{-[(x-\xi)+2nL]^2/4\alpha t} - e^{-[(x+\xi)+2nL]^2/4\alpha t} \right\} d\xi

Problem 7. Finite domain, boundary condition of the first kind.

    T(x, 0) = 0
    T(0, t) = 0
    T(L, t) = T_0 = constant

Take the Laplace transform

    s\, t(x, s) = \alpha \frac{\partial^2 t}{\partial x^2}
    t(0, s) = 0, \quad t(L, s) = \frac{T_0}{s}

The solution is

    t(x, s) = \frac{T_0}{s} \frac{\sinh\sqrt{s/\alpha}\, x}{\sinh\sqrt{s/\alpha}\, L}

and the inverse transformation is [41, p. 201], [44, p. 313]

    T(x, t) = T_0 \left[ \frac{x}{L} + \frac{2}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^n}{n}\, e^{-n^2\pi^2\alpha t/L^2} \sin\frac{n\pi x}{L} \right]

An alternate transformation is [41, p. 139], [44, p. 310]

    T(x, t) = T_0 \sum_{n=0}^{\infty} \left[ \mathrm{erf}\frac{(2n+1)L + x}{\sqrt{4\alpha t}} - \mathrm{erf}\frac{(2n+1)L - x}{\sqrt{4\alpha t}} \right]

Problem 8. Finite domain, boundary condition of the second kind.

    T(x, 0) = T_0
    \frac{\partial T}{\partial x}(0, t) = 0, \quad T(L, t) = 0

Take the Laplace transform

    s\, t(x, s) - T_0 = \alpha \frac{\partial^2 t}{\partial x^2}
    \frac{\partial t}{\partial x}(0, s) = 0, \quad t(L, s) = 0

The solution is

    t(x, s) = \frac{T_0}{s} \left[ 1 - \frac{\cosh\sqrt{s/\alpha}\, x}{\cosh\sqrt{s/\alpha}\, L} \right]

Its inverse is [41, p. 138]

    T(x, t) = T_0 - T_0 \sum_{n=0}^{\infty} (-1)^n \left[ \mathrm{erfc}\frac{(2n+1)L - x}{\sqrt{4\alpha t}} + \mathrm{erfc}\frac{(2n+1)L + x}{\sqrt{4\alpha t}} \right]

5. Vector Analysis

Notation. A scalar is a quantity having magnitude but no direction (e.g., mass, length, time, temperature, and concentration). A vector is a quantity having both magnitude and direction (e.g., displacement, velocity, acceleration, force). A second-order dyadic has magnitude and two directions associated with it, as defined precisely below. The most common examples are the stress dyadic (or tensor) and the velocity gradient (in fluid flow). Vectors are printed in boldface type and identified in this chapter by lower-case Latin letters. Second-order dyadics are printed in boldface type and are identified in this chapter by capital or Greek letters. Higher order dyadics are not discussed here. Dyadics can be formed from the components of tensors including their directions, and some of the identities for dyadics are more easily proved by using tensor analysis, which is not presented here (see also Transport Phenomena, Chap. 1.1.4). Vectors are also first-order dyadics.

Vectors. Two vectors u and v are equal if they have the same magnitude and direction. If they have the same magnitude but the opposite direction, then u = -v. The sum of two vectors is identified geometrically by placing the vector v at the end of the vector u, as shown in Figure 12. The product of a scalar m and a vector u is a vector m u in the same direction as u but with a magnitude that equals the magnitude of u times the scalar m. Vectors obey commutative and associative laws.
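The Fourier-series solution of Problem 7 above is straightforward to evaluate numerically; the parameter values in this sketch are illustrative, and the large-time limit should approach the linear steady-state profile T_0 x/L.

```python
import math

# Finite slab with T(0,t) = 0 and T(L,t) = T0 (Problem 7):
#   T(x,t) = T0*[x/L + (2/pi)*sum_n ((-1)^n/n)*exp(-n^2 pi^2 alpha t/L^2)
#                                     *sin(n pi x/L)]
def temperature_slab(x, t, T0, L, alpha, nterms=200):
    s = sum(((-1) ** n / n) * math.exp(-(n * math.pi / L) ** 2 * alpha * t)
            * math.sin(n * math.pi * x / L) for n in range(1, nterms + 1))
    return T0 * (x / L + 2.0 / math.pi * s)

T0, L, alpha = 100.0, 1.0, 1e-3   # illustrative values
# For large times the exponentials die out and the profile is linear:
print(temperature_slab(0.5, 2000.0, T0, L, alpha))   # ~ 50.0 = T0*x/L
```

The series converges rapidly at large times (the n-th term decays as exp(-n^2)); at small times many terms, or the erf form of the solution, are preferable.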
    u + v = v + u               Commutative law for addition
    u + (v + w) = (u + v) + w   Associative law for addition
    m u = u m                   Commutative law for scalar multiplication
    m (n u) = (m n) u           Associative law for scalar multiplication
    (m + n) u = m u + n u       Distributive law
    m (u + v) = m u + m v       Distributive law

The same laws are obeyed by dyadics, as well.

Figure 12. Addition of vectors

A unit vector is a vector with magnitude 1.0 and some direction. If a vector has some magnitude (i.e., not zero magnitude), a unit vector e_u can be formed by

    e_u = u / |u|

The original vector can be represented by the product of the magnitude and the unit vector:

    u = |u| e_u

In a cartesian coordinate system the three principal, orthogonal directions are customarily represented by unit vectors, such as {e_x, e_y, e_z} or {i, j, k}. Here, the first notation is used (see Fig. 13). The coordinate system is right-handed; that is, if a right-threaded screw rotated from the x to the y direction, it would advance in the z direction.

Figure 13. Cartesian coordinate system

A vector can be represented in terms of its components in these directions, as illustrated in Figure 14. The vector is then written as

    u = u_x e_x + u_y e_y + u_z e_z

Figure 14. Vector components

The magnitude is

    |u| = \sqrt{u_x^2 + u_y^2 + u_z^2}

The position vector is

    r = x e_x + y e_y + z e_z

with magnitude

    |r| = \sqrt{x^2 + y^2 + z^2}

Dyadics. The dyadic A is written in component form in cartesian coordinates as

    A = A_{xx} e_x e_x + A_{xy} e_x e_y + A_{xz} e_x e_z
      + A_{yx} e_y e_x + A_{yy} e_y e_y + A_{yz} e_y e_z
      + A_{zx} e_z e_x + A_{zy} e_z e_y + A_{zz} e_z e_z

Quantities such as e_x e_y are called unit dyadics. They are second-order dyadics and have two directions associated with them, e_x and e_y; the order of the pair is important. The components A_{xx}, \ldots, A_{zz} are the components of the tensor A_{ij}, which here is a 3x3 matrix of numbers that are transformed in a certain way when variables undergo a linear transformation. The y x momentum flux can be defined as the flux of x momentum across an area with unit normal in the y direction. Since two directions are involved, a second-order dyadic (or tensor) is needed to represent it, and because the y momentum across an area with unit normal in the x direction may not be the same thing, the order of the indices must be kept straight. The dyadic A is said to be symmetric if

    A_{ij} = A_{ji}
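The component formulas for the magnitude and the unit vector above translate directly into code; this is a minimal sketch using plain tuples for the components.

```python
import math

# Vector magnitude |u| = sqrt(ux^2 + uy^2 + uz^2) and unit vector e_u = u/|u|
def magnitude(u):
    return math.sqrt(sum(c * c for c in u))

def unit_vector(u):
    m = magnitude(u)
    return tuple(c / m for c in u)

u = (3.0, 4.0, 0.0)
print(magnitude(u))      # 5.0
print(unit_vector(u))    # (0.6, 0.8, 0.0)
```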
In index notation the indices i and j can take the values x, y, or z; sometimes (x, 1), (y, 2), (z, 3) are identified and the indices take the values 1, 2, or 3. The dyadic A is said to be antisymmetric if

    A_{ij} = -A_{ji}

The transpose of A is

    A^T_{ij} = A_{ji}

Any dyadic can be represented as the sum of a symmetric portion and an antisymmetric portion:

    A_{ij} = B_{ij} + C_{ij}, \quad B_{ij} \equiv \tfrac{1}{2}(A_{ij} + A_{ji}), \quad C_{ij} \equiv \tfrac{1}{2}(A_{ij} - A_{ji})

An ordered pair of vectors is a second-order dyadic:

    u v = \sum_i \sum_j e_i e_j u_i v_j

The transpose of this is

    (u v)^T = v u

but

    u v \ne v u

The Kronecker delta is defined as

    \delta_{ij} = 1 if i = j, \quad \delta_{ij} = 0 if i \ne j

and the unit dyadic is defined as

    \delta = \sum_i \sum_j e_i e_j \delta_{ij}

Operations. The dot or scalar product of two vectors is defined as

    u \cdot v = |u||v| \cos\theta, \quad 0 \le \theta \le \pi

where \theta is the angle between u and v. The scalar product of two vectors is a scalar, not a vector. It is the magnitude of u multiplied by the projection of v on u, or vice versa. The scalar product of u with itself is just the square of the magnitude of u:

    u \cdot u = |u|^2 = u^2

The following laws are valid for scalar products:

    u \cdot v = v \cdot u                    Commutative law for scalar products
    u \cdot (v + w) = u \cdot v + u \cdot w  Distributive law for scalar products
    e_x \cdot e_x = e_y \cdot e_y = e_z \cdot e_z = 1
    e_x \cdot e_y = e_x \cdot e_z = e_y \cdot e_z = 0

If the two vectors u and v are written in component notation, the scalar product is

    u \cdot v = u_x v_x + u_y v_y + u_z v_z

If u \cdot v = 0 and u and v are not null vectors, then u and v are perpendicular to each other and \theta = \pi/2.

The single dot product of two dyadics is

    A \cdot B = \sum_i \sum_j e_i e_j \left( \sum_k A_{ik} B_{kj} \right)

The double dot product of two dyadics is

    A : B = \sum_i \sum_j A_{ij} B_{ji}

Because the dyadics may not be symmetric, the order of indices and which indices are summed are important. The order is made clearer when the dyadics are made from vectors:

    (u v) \cdot (w x) = u (v \cdot w) x = u x (v \cdot w)
    (u v) : (w x) = (u \cdot x)(v \cdot w)

The dot product of a dyadic and a vector is

    A \cdot u = \sum_i e_i \left( \sum_j A_{ij} u_j \right)

The cross or vector product is defined by

    c = u \times v = a |u||v| \sin\theta, \quad 0 \le \theta \le \pi

where a is a unit vector in the direction of u \times v. The direction of c is perpendicular to the plane of u and v such that u, v, and c form a right-handed system. If u = v, or u is parallel to v, then \theta = 0 and u \times v = 0. The following laws are valid for cross products:

    u \times v = -v \times u                           Commutative law fails for vector product
    u \times (v \times w) \ne (u \times v) \times w    Associative law fails for vector product
    u \times (v + w) = u \times v + u \times w         Distributive law for vector product
    e_x \times e_x = e_y \times e_y = e_z \times e_z = 0
    e_x \times e_y = e_z, \quad e_y \times e_z = e_x, \quad e_z \times e_x = e_y

    u \times v = \det\begin{pmatrix} e_x & e_y & e_z \\ u_x & u_y & u_z \\ v_x & v_y & v_z \end{pmatrix}
      = e_x (u_y v_z - v_y u_z) + e_y (u_z v_x - u_x v_z) + e_z (u_x v_y - u_y v_x)
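The component formulas for the dot and cross products can be checked in a few lines; the sketch below verifies the anticommutative law u x v = -(v x u) and the fact that u x v is perpendicular to u.

```python
# Dot and cross products in component form
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(cross(u, v))           # (-3.0, 6.0, -3.0)
print(dot(u, cross(u, v)))   # 0.0, since u x v is perpendicular to u
```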

This can also be written as can be formed where is an eigenvalue. This


expression is
uv = kij ui vj ek
i j
3 I1 2 +I2 I3 = 0
where
An important theorem of Hamilton and Cayley

1 if i,j,k is an even permutation of 123 [47] is that a second-order dyadic satises its
ijk = 1 if i,j,k is an odd permutation of 123 own characteristic equation.

0 if any two of i,j,k are equal
A3 I1 A2 +I2 AI3 = 0 (7)
Thus 123 = 1, 132 =- 1, 312 = 1, 112 = 0, for
3
example. Thus A can be expressed in terms of , A, and
The magnitude of uv is the same as the area A2 . Similarly, higher powers of A can be ex-
of a parallelogram with sides u and v. If uv = pressed in terms of , A, and A2 . Decomposi-
0 and u and v are not null vectors, then u and v tion of a dyadic into a symmetric and an anti-
are parallel. Certain triple products are useful. symmetric part was shown above. The antisym-
metric part has zero trace. The symmetric part
(uv) w=u (vw)
can be decomposed into a part with a trace (the
u (vw) = v (wu) = w (uv) isotropic part) and a part with zero trace (the
deviatoric part).
u (vw) = (uw) v (uv) w
A = 1/3 : A + 1/2 [A + AT - 2/3 : A] + 1/2 [A - AT ]
(uv) w = (uw) v (vw) u Isotropic Deviatoric Antisymmetric

The cross product of a dyadic and a vector is


dened as
  Differentiation. The derivative of a vector is
Au = ei ej klj Aik ul dened in the same way as the derivative of a
i j k l
scalar. Suppose the vector u depends on t. Then
The magnitude of a dyadic is
$ $ du u (t+t) u (t)
1 1 2 = lim
|A| = A = (A:AT ) = A dt t0 t
2 2 i j ij
If the vector is the position vector r (t), then the
There are three invariants of a dyadic. They difference expression is a vector in the direction
are called invariants because they take the same of r (see Fig. 15). The derivative
value in any coordinate system and are thus an
intrinsic property of the dyadic. They are the dr r r (t+t) r (t)
= lim = lim
trace of A, A2 , A3 [45]. dt t0 t t0 t
 is the velocity. The derivative operation obeys
I = trA = i Aii
the following laws.

II = trA2 = j Aij Aji d du
i
dt
(u+v) = dt
+ dv
dt
 
III = trA3 = k Aij Ajk Aki d du
i j
dt
(uv) = dt
v+u dv
dt
The invariants can also be expressed as d du
dt
(uv) = dt
v+u dv
dt
I1 = I
d d

dt
(u) = dt
u+ du
dt
1
I2 = 2
I2 II
If the vector u depends on more than one vari-
1


I3 = 6
I3 3 III+2 III = det A able, such as x, y and z, partial derivatives are
dened in the usual way. For example,
Invariants of two dyadics are available [46]. Be-
cause a second-order dyadic has nine compo- if u (x, y, z) , then
nents, the characteristic equation u(x+x, y, z)u(x, y, z)
u
x
= lim x
t0
det ( A) = 0
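The Hamilton-Cayley theorem above is easy to verify numerically for a concrete 3x3 matrix, using I_1 = tr A, I_2 = (I_1^2 - tr A^2)/2, and I_3 = det A; the matrix chosen below is arbitrary.

```python
# Check the Hamilton-Cayley theorem: A^3 - I1*A^2 + I2*A - I3*I = 0
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def trace(A):
    return A[0][0] + A[1][1] + A[2][2]

def det3(A):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

A = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]]
A2 = matmul(A, A)
A3 = matmul(A2, A)
I1 = trace(A)
I2 = 0.5 * (I1 ** 2 - trace(A2))
I3 = det3(A)
residual = [[A3[i][j] - I1 * A2[i][j] + I2 * A[i][j] - I3 * (i == j)
             for j in range(3)] for i in range(3)]
print(max(abs(r) for row in residual for r in row))   # ~ 0.0
```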

Rules for differentiation of scalar and vector Differential Operators. The vector differ-
products are ential operator (del operator) is dened in

(uv) = u
v+u v cartesian coordinates by
x x x

(uv) = u
v+u v = ex +ey +ez
x x x x y z
Differentials of vectors are
The gradient of a scalar function is dened
du = dux ex +duy ey +duz ez

= ex +ey +ez
d (uv) = duv+udv x y z

and is a vector. If is height or elevation, the gra-


d (uv) = duv+udv
dient is a vector pointing in the uphill direction.
du = u
dx+ u dy+ u dz The steeper the hill, the larger is the magnitude
x y z
of the gradient.
The divergence of a vector is dened by
ux uy uz
u = + +
x y z

and is a scalar. For a volume element V, the


net outow of a vector u over the surface of the
element is

undS
Figure 15. Vector differentiation
S

This is related to the divergence by [48, p. 411]


If a curve is given by r (t), the length of the 
1
curve is [43] u = lim undS
V 0 V
b $ S
dr dr
L = dt Thus, the divergence is the net outow per unit
dt dt
a
volume.
The arc-length function can also be dened: The curl of a vector is dened by
t $  
dr dr
u = ex x
+ey y
+ez z
s (t) = dt
dt dt
a
(ex ux +ey uy +ez uz )
This gives    
 2  2  2  2 uz uy ux
ds dr dr dx dy dz = ex y
z
+ey z
u
x
z
= = + +
dt dt dt dt dt dt  
uy
+ez x
u
y
x
Because
dr = dxex +dyey +dzez and is a vector. It is related to the integral
 
then uds = us ds
ds2 = drdr = dx2 +dy 2 +dz 2 C C

The derivative dr/dt is tangent to the curve in the which is called the circulation of u around path
direction of motion C. This integral depends on the vector and the
dr contour C, in general. If the circulation does not
u =  dt  depend on the contour C, the vector is said to be
 dr 
 dt 
irrotational; if it does, it is rotational. The rela-
Also, tionship with the curl is [48, p. 419]
dr
u=
ds
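The component formula for the divergence can be checked with central differences; this minimal sketch uses the arbitrary field u = (x^2, y^2, z^2), for which div u = 2x + 2y + 2z.

```python
# Central-difference check of div u = du_x/dx + du_y/dy + du_z/dz
def u(x, y, z):
    return (x * x, y * y, z * z)

def divergence(f, x, y, z, h=1e-5):
    return ((f(x + h, y, z)[0] - f(x - h, y, z)[0])
            + (f(x, y + h, z)[1] - f(x, y - h, z)[1])
            + (f(x, y, z + h)[2] - f(x, y, z - h)[2])) / (2 * h)

print(divergence(u, 1.0, 2.0, 3.0))   # ~ 12.0 = 2*1 + 2*2 + 2*3
```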

The normal component of the curl therefore equals the net circulation per unit area enclosed by the contour C.

The gradient, divergence, and curl obey a distributive law but not a commutative or associative law:

    \nabla(\varphi + \psi) = \nabla\varphi + \nabla\psi
    \nabla \cdot (u + v) = \nabla \cdot u + \nabla \cdot v
    \nabla \times (u + v) = \nabla \times u + \nabla \times v
    \nabla\varphi \ne \varphi\nabla
    \nabla \cdot u \ne u \cdot \nabla

Useful formulas are [49]

    \nabla \cdot (\varphi u) = \nabla\varphi \cdot u + \varphi\, \nabla \cdot u
    \nabla \times (\varphi u) = \nabla\varphi \times u + \varphi\, \nabla \times u
    \nabla \cdot (u \times v) = v \cdot (\nabla \times u) - u \cdot (\nabla \times v)
    \nabla \times (\nabla \times u) = \nabla(\nabla \cdot u) - \nabla^2 u
    \nabla \cdot (\nabla\varphi) = \nabla^2\varphi = \frac{\partial^2\varphi}{\partial x^2} + \frac{\partial^2\varphi}{\partial y^2} + \frac{\partial^2\varphi}{\partial z^2}

where \nabla^2 is called the Laplacian operator.

The curl of the gradient of \varphi is zero:

    \nabla \times (\nabla\varphi) = 0

The divergence of the curl of u is zero:

    \nabla \cdot (\nabla \times u) = 0

Formulas useful in fluid mechanics are

    \nabla \cdot (\nabla v)^T = \nabla(\nabla \cdot v)
    \nabla \cdot (\tau \cdot v) = v \cdot (\nabla \cdot \tau) + \tau : \nabla v
    v \cdot \nabla v = \tfrac{1}{2}\nabla(v \cdot v) - v \times (\nabla \times v)

If a coordinate system is transformed by a rotation and translation, the coordinates in the new system (denoted by primes) are given by

    \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} l_{11} & l_{12} & l_{13} \\ l_{21} & l_{22} & l_{23} \\ l_{31} & l_{32} & l_{33} \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} + \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix}

Any function that has the same value in all coordinate systems is an invariant. The gradient of an invariant scalar field is invariant; the same is true for the divergence and curl of invariant vector fields.

The gradient of a vector field is required in fluid mechanics because the velocity gradient is used. It is defined as

    \nabla v = \sum_i \sum_j e_i e_j \frac{\partial v_j}{\partial x_i} \quad \text{and} \quad (\nabla v)^T = \sum_i \sum_j e_i e_j \frac{\partial v_i}{\partial x_j}

The divergence of dyadics is defined

    \nabla \cdot \tau = \sum_i e_i \left( \sum_j \frac{\partial \tau_{ji}}{\partial x_j} \right) \quad \text{and} \quad \nabla \cdot (u v) = \sum_i e_i \left( \sum_j \frac{\partial}{\partial x_j}(u_j v_i) \right)

where \tau is any second-order dyadic. Useful relations involving dyadics are

    \delta : \nabla v = \nabla \cdot v
    \nabla \cdot (\varphi\,\tau) = \nabla\varphi \cdot \tau + \varphi\, \nabla \cdot \tau
    \nabla \cdot (u v) = (\nabla \cdot u)\, v + u \cdot \nabla v
    n\,t : \tau = t \cdot \tau \cdot n = \tau : n\,t

A surface can be represented in the form

    f(x, y, z) = c = constant

The normal to the surface is given by

    n = \frac{\nabla f}{|\nabla f|}

provided the gradient is not zero. Operations can be performed entirely within the surface. Define

    \delta_{II} \equiv \delta - n\,n, \quad \nabla_{II} \equiv \delta_{II} \cdot \nabla, \quad v_{II} \equiv \delta_{II} \cdot v, \quad v_n \equiv n \cdot v

Then a vector and the del operator can be decomposed into

    v = v_{II} + n\,v_n, \quad \nabla = \nabla_{II} + n\,\frac{\partial}{\partial n}

The velocity gradient can be decomposed into

    \nabla v = \nabla_{II} v_{II} + (\nabla_{II}\,n)\, v_n + n\,\nabla_{II} v_n + n\,\frac{\partial v_{II}}{\partial n} + n\,n\,\frac{\partial v_n}{\partial n}

The surface gradient of the normal is the negative of the curvature dyadic of the surface:

    \nabla_{II}\, n = -B

The surface divergence is then

    \nabla_{II} \cdot v = \delta_{II} : \nabla v = \nabla_{II} \cdot v_{II} - 2 H v_n

where H = \tfrac{1}{2}\, \delta_{II} : B is the mean curvature.
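The identities curl(grad phi) = 0 and div(curl u) = 0 stated above can be checked with central differences; for the bilinear field below the inner differences are exact, so the result is zero to rounding. The field is an arbitrary choice for illustration.

```python
# Finite-difference check that div(curl u) = 0 for u = (x*y, y*z, z*x)
h = 1e-4

def u(x, y, z):
    return (x * y, y * z, z * x)

def curl(x, y, z):
    # (du_z/dy - du_y/dz, du_x/dz - du_z/dx, du_y/dx - du_x/dy)
    cx = (u(x, y + h, z)[2] - u(x, y - h, z)[2]
          - u(x, y, z + h)[1] + u(x, y, z - h)[1]) / (2 * h)
    cy = (u(x, y, z + h)[0] - u(x, y, z - h)[0]
          - u(x + h, y, z)[2] + u(x - h, y, z)[2]) / (2 * h)
    cz = (u(x + h, y, z)[1] - u(x - h, y, z)[1]
          - u(x, y + h, z)[0] + u(x, y - h, z)[0]) / (2 * h)
    return (cx, cy, cz)

def div_curl(x, y, z):
    return ((curl(x + h, y, z)[0] - curl(x - h, y, z)[0])
            + (curl(x, y + h, z)[1] - curl(x, y - h, z)[1])
            + (curl(x, y, z + h)[2] - curl(x, y, z - h)[2])) / (2 * h)

print(abs(div_curl(1.0, 2.0, 3.0)))   # ~ 0
```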

The surface curl can be a scalar

    \nabla_{II} \times v \equiv \varepsilon_{II} : \nabla_{II} v = \varepsilon_{II} : \nabla_{II} v_{II} = n \cdot (\nabla \times v), \quad \varepsilon_{II} = n \cdot \varepsilon

or a vector

    n\,(\nabla_{II} \times v) = n\,n \cdot (\nabla \times v)

Vector Integration [48, pp. 206 - 212]. If u is a vector, then its integral is also a vector:

    \int u(t)\, dt = e_x \int u_x(t)\, dt + e_y \int u_y(t)\, dt + e_z \int u_z(t)\, dt

If the vector u is the derivative of another vector, then

    u = \frac{dv}{dt}, \quad \int u(t)\, dt = \int \frac{dv}{dt}\, dt = v + constant

If r(t) is a position vector that defines a curve C, the line integral is defined by

    \int_C u \cdot dr = \int_C (u_x\, dx + u_y\, dy + u_z\, dz)

Theorems about this line integral can be written in various forms.

Theorem [43]. If the functions appearing in the line integral are continuous in a domain D, then the line integral is independent of the path C if and only if the line integral is zero on every simple closed path in D.

Theorem [43]. If u = \nabla\varphi where \varphi is single-valued and has continuous derivatives in D, then the line integral is independent of the path C and the line integral is zero for any closed curve in D.

Theorem [43]. If f, g, and h are continuous functions of x, y, and z, and have continuous first derivatives in a simply connected domain D, then the line integral

    \int_C (f\, dx + g\, dy + h\, dz)

is independent of the path if and only if

    \frac{\partial h}{\partial y} = \frac{\partial g}{\partial z}, \quad \frac{\partial f}{\partial z} = \frac{\partial h}{\partial x}, \quad \frac{\partial g}{\partial x} = \frac{\partial f}{\partial y}

or, if f, g, and h are regarded as the x, y, and z components of a vector v:

    \nabla \times v = 0

Consequently, the line integral is independent of the path (and the value is zero for a closed contour) if the three components in it are regarded as the three components of a vector and the vector is derivable from a potential (or zero curl). The conditions for a vector to be derivable from a potential are just those in the third theorem. In two dimensions this reduces to the more usual theorem.

Theorem [48, p. 207]. If M and N are continuous functions of x and y that have continuous first partial derivatives in a simply connected domain D, then the necessary and sufficient condition for the line integral

    \int_C (M\, dx + N\, dy)

to be zero around every closed curve C in D is

    \frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}

If a vector is integrated over a surface with incremental area dS and normal to the surface n, then the surface integral can be written as

    \iint_S u \cdot dS = \iint_S u \cdot n\, dS

If u is the velocity, then this integral represents the flow rate past the surface S.

Divergence Theorem [48, 49]. If V is a volume bounded by a closed surface S and u is a vector function of position with continuous derivatives, then

    \iiint_V \nabla \cdot u\, dV = \iint_S n \cdot u\, dS = \iint_S u \cdot n\, dS = \iint_S u \cdot dS

where n is the normal pointing outward to S. The normal can be written as

    n = e_x \cos(x, n) + e_y \cos(y, n) + e_z \cos(z, n)

where, for example, cos(x, n) is the cosine of the angle between the normal n and the x axis. Then the divergence theorem in component form is

    \iiint_V \left( \frac{\partial u_x}{\partial x} + \frac{\partial u_y}{\partial y} + \frac{\partial u_z}{\partial z} \right) dx\, dy\, dz = \iint_S [u_x \cos(x, n) + u_y \cos(y, n) + u_z \cos(z, n)]\, dS
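The divergence theorem can be checked numerically on a simple geometry. For u = (x^2, y^2, z^2) on the unit cube, div u = 2x + 2y + 2z, and both sides of the theorem equal 3; the midpoint rule below is exact for the linear integrand.

```python
# Divergence theorem check on the unit cube [0,1]^3 for u = (x^2, y^2, z^2)
n = 20
h = 1.0 / n
mid = [(i + 0.5) * h for i in range(n)]

# Volume integral of div u = 2x + 2y + 2z (midpoint rule, exact here):
vol = sum((2 * a + 2 * b + 2 * c) * h ** 3
          for a in mid for b in mid for c in mid)

# Outward flux: u.n = 1 on each of the faces x=1, y=1, z=1 and 0 on the rest
flux = 3 * sum(1.0 * h * h for a in mid for b in mid)
print(vol, flux)   # both 3.0 (to rounding)
```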

If the divergence theorem is written for an incremental volume

    \nabla \cdot u = \lim_{\Delta V \to 0} \frac{1}{\Delta V} \int_{\Delta S} u_n\, dS

the divergence of a vector can be called the integral of that quantity over the area of a closed volume, divided by the volume. If the vector represents the flow of energy and the divergence is positive at a point P, then either a source of energy is present at P or energy is leaving the region around P so that its temperature is decreasing. If the vector represents the flow of mass and the divergence is positive at a point P, then either a source of mass exists at P or the density is decreasing at the point P. For an incompressible fluid the divergence is zero and the rate at which fluid is introduced into a volume must equal the rate at which it is removed.

Various theorems follow from the divergence theorem.

Theorem. If \varphi is a solution to Laplace's equation

    \nabla^2\varphi = 0

in a domain D, and the second partial derivatives of \varphi are continuous in D, then the integral of the normal derivative of \varphi over any piecewise smooth closed orientable surface S in D is zero. Suppose u = \varphi\,\nabla\psi satisfies the conditions of the divergence theorem: then Green's theorem results from use of the divergence theorem [49]:

    \iiint_V \left( \varphi\,\nabla^2\psi + \nabla\varphi \cdot \nabla\psi \right) dV = \iint_S \varphi \frac{\partial\psi}{\partial n}\, dS

and

    \iiint_V \left( \varphi\,\nabla^2\psi - \psi\,\nabla^2\varphi \right) dV = \iint_S \left( \varphi \frac{\partial\psi}{\partial n} - \psi \frac{\partial\varphi}{\partial n} \right) dS

Also, if \varphi satisfies the conditions of the theorem and is zero on S, then \varphi is zero throughout D. If two functions \varphi and \psi both satisfy the Laplace equation in domain D, and both take the same values on the bounding curve C, then \varphi = \psi; i.e., the solution to the Laplace equation is unique.

The divergence theorem for dyadics is

    \iiint_V \nabla \cdot \tau\, dV = \iint_S n \cdot \tau\, dS

Stokes' Theorem [48, 49]. Stokes' theorem says that if S is a surface bounded by a closed, nonintersecting curve C, and if u has continuous derivatives, then

    \oint_C u \cdot dr = \iint_S (\nabla \times u) \cdot n\, dS = \iint_S (\nabla \times u) \cdot dS

The integral around the curve is followed in the counterclockwise direction. In component notation, this is

    \oint_C [u_x \cos(x, s) + u_y \cos(y, s) + u_z \cos(z, s)]\, ds =
    \iint_S \left[ \left( \frac{\partial u_z}{\partial y} - \frac{\partial u_y}{\partial z} \right) \cos(x, n) + \left( \frac{\partial u_x}{\partial z} - \frac{\partial u_z}{\partial x} \right) \cos(y, n) + \left( \frac{\partial u_y}{\partial x} - \frac{\partial u_x}{\partial y} \right) \cos(z, n) \right] dS

Applied in two dimensions, this results in Green's theorem in the plane:

    \oint_C (M\, dx + N\, dy) = \iint_S \left( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right) dx\, dy

The formula for dyadics is

    \iint_S n \cdot (\nabla \times \tau)\, dS = \oint_C \tau^T \cdot dr

Representation. Two theorems give information about how to represent vectors that obey certain properties.

Theorem [48, p. 422]. The necessary and sufficient condition that the curl of a vector vanish identically is that the vector be the gradient of some function.

Theorem [48, p. 423]. The necessary and sufficient condition that the divergence of a vector vanish identically is that the vector is the curl of some other vector.

Leibniz Formula. In fluid mechanics and transport phenomena, an important result is the derivative of an integral whose limits of integration are moving. Suppose the region V(t) is moving with velocity v_s. Then Leibniz's rule holds:

    \frac{d}{dt} \iiint_{V(t)} \varphi\, dV = \iiint_{V(t)} \frac{\partial\varphi}{\partial t}\, dV + \iint_S \varphi\, v_s \cdot n\, dS
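Green's theorem in the plane can be checked on the unit circle with the arbitrary choice M = -y, N = x: the area integrand dN/dx - dM/dy equals 2, so both sides of the theorem equal 2*pi. The sketch approximates the contour by a polygon.

```python
import math

# Contour integral of (M dx + N dy) for M = -y, N = x on the unit circle,
# evaluated on an inscribed polygon; it should approach 2*pi = 2*(disk area).
n = 20000
line = 0.0
for i in range(n):
    t0, t1 = 2 * math.pi * i / n, 2 * math.pi * (i + 1) / n
    x0, y0 = math.cos(t0), math.sin(t0)
    x1, y1 = math.cos(t1), math.sin(t1)
    line += -0.5 * (y0 + y1) * (x1 - x0) + 0.5 * (x0 + x1) * (y1 - y0)

print(line, 2 * math.pi)
```

The polygon sum is exactly twice the shoelace area of the inscribed n-gon, which converges to the disk area as n grows.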
Curvilinear Coordinates. Many of the relations given above are proved most easily by using tensor analysis rather than dyadics. Once proven, however, the relations are perfectly general in any coordinate system. Displayed here are the specific results for cylindrical and spherical geometries. Results are available for a few other geometries: parabolic cylindrical, paraboloidal, elliptic cylindrical, prolate spheroidal, oblate spheroidal, ellipsoidal, and bipolar coordinates [45, 50].

For cylindrical coordinates, the geometry is shown in Figure 16. The coordinates are related to cartesian coordinates by

    x = r\cos\theta \qquad r = \sqrt{x^2 + y^2}
    y = r\sin\theta \qquad \theta = \arctan(y/x)
    z = z \qquad z = z

Figure 16. Cylindrical coordinate system

The unit vectors are related by

    e_r = \cos\theta\, e_x + \sin\theta\, e_y \qquad e_x = \cos\theta\, e_r - \sin\theta\, e_\theta
    e_\theta = -\sin\theta\, e_x + \cos\theta\, e_y \qquad e_y = \sin\theta\, e_r + \cos\theta\, e_\theta
    e_z = e_z

Derivatives of the unit vectors are

    de_\theta = -e_r\, d\theta, \quad de_r = e_\theta\, d\theta, \quad de_z = 0

Differential operators are given by [45]

    \nabla = e_r \frac{\partial}{\partial r} + e_\theta \frac{1}{r}\frac{\partial}{\partial\theta} + e_z \frac{\partial}{\partial z}, \quad
    \nabla\Phi = e_r \frac{\partial\Phi}{\partial r} + e_\theta \frac{1}{r}\frac{\partial\Phi}{\partial\theta} + e_z \frac{\partial\Phi}{\partial z}

    \nabla^2\Phi = \frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial\Phi}{\partial r} \right) + \frac{1}{r^2}\frac{\partial^2\Phi}{\partial\theta^2} + \frac{\partial^2\Phi}{\partial z^2}
      = \frac{\partial^2\Phi}{\partial r^2} + \frac{1}{r}\frac{\partial\Phi}{\partial r} + \frac{1}{r^2}\frac{\partial^2\Phi}{\partial\theta^2} + \frac{\partial^2\Phi}{\partial z^2}

    \nabla \cdot v = \frac{1}{r}\frac{\partial}{\partial r}(r v_r) + \frac{1}{r}\frac{\partial v_\theta}{\partial\theta} + \frac{\partial v_z}{\partial z}

    \nabla \times v = e_r \left( \frac{1}{r}\frac{\partial v_z}{\partial\theta} - \frac{\partial v_\theta}{\partial z} \right) + e_\theta \left( \frac{\partial v_r}{\partial z} - \frac{\partial v_z}{\partial r} \right) + e_z \left( \frac{1}{r}\frac{\partial}{\partial r}(r v_\theta) - \frac{1}{r}\frac{\partial v_r}{\partial\theta} \right)

    \nabla \cdot \tau = e_r \left[ \frac{1}{r}\frac{\partial}{\partial r}(r\tau_{rr}) + \frac{1}{r}\frac{\partial\tau_{\theta r}}{\partial\theta} + \frac{\partial\tau_{zr}}{\partial z} - \frac{\tau_{\theta\theta}}{r} \right]
      + e_\theta \left[ \frac{1}{r^2}\frac{\partial}{\partial r}(r^2\tau_{r\theta}) + \frac{1}{r}\frac{\partial\tau_{\theta\theta}}{\partial\theta} + \frac{\partial\tau_{z\theta}}{\partial z} + \frac{\tau_{\theta r} - \tau_{r\theta}}{r} \right]
      + e_z \left[ \frac{1}{r}\frac{\partial}{\partial r}(r\tau_{rz}) + \frac{1}{r}\frac{\partial\tau_{\theta z}}{\partial\theta} + \frac{\partial\tau_{zz}}{\partial z} \right]

    \nabla v = e_r e_r \frac{\partial v_r}{\partial r} + e_r e_\theta \frac{\partial v_\theta}{\partial r} + e_r e_z \frac{\partial v_z}{\partial r}
      + e_\theta e_r \left( \frac{1}{r}\frac{\partial v_r}{\partial\theta} - \frac{v_\theta}{r} \right) + e_\theta e_\theta \left( \frac{1}{r}\frac{\partial v_\theta}{\partial\theta} + \frac{v_r}{r} \right) + e_\theta e_z \frac{1}{r}\frac{\partial v_z}{\partial\theta}
      + e_z e_r \frac{\partial v_r}{\partial z} + e_z e_\theta \frac{\partial v_\theta}{\partial z} + e_z e_z \frac{\partial v_z}{\partial z}

    \nabla^2 v = e_r \left[ \frac{\partial}{\partial r}\left( \frac{1}{r}\frac{\partial}{\partial r}(r v_r) \right) + \frac{1}{r^2}\frac{\partial^2 v_r}{\partial\theta^2} + \frac{\partial^2 v_r}{\partial z^2} - \frac{2}{r^2}\frac{\partial v_\theta}{\partial\theta} \right]
      + e_\theta \left[ \frac{\partial}{\partial r}\left( \frac{1}{r}\frac{\partial}{\partial r}(r v_\theta) \right) + \frac{1}{r^2}\frac{\partial^2 v_\theta}{\partial\theta^2} + \frac{\partial^2 v_\theta}{\partial z^2} + \frac{2}{r^2}\frac{\partial v_r}{\partial\theta} \right]
      + e_z \left[ \frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial v_z}{\partial r} \right) + \frac{1}{r^2}\frac{\partial^2 v_z}{\partial\theta^2} + \frac{\partial^2 v_z}{\partial z^2} \right]

For spherical coordinates, the geometry is shown in Figure 17.

Figure 17. Spherical coordinate system
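The cylindrical Laplacian above can be sanity-checked numerically: for Phi = r^2, which is x^2 + y^2 in cartesian coordinates, the Laplacian must equal 4. The flux-form central-difference formula below is a minimal sketch of the radial term.

```python
# Check (1/r) d/dr (r dPhi/dr) on Phi = r^2, whose Laplacian is 4
h = 1e-5

def phi(r):
    return r * r

def laplacian_cyl(r):
    dplus = (phi(r + h) - phi(r)) / h     # dPhi/dr at r + h/2
    dminus = (phi(r) - phi(r - h)) / h    # dPhi/dr at r - h/2
    return ((r + h / 2) * dplus - (r - h / 2) * dminus) / (r * h)

print(laplacian_cyl(1.7))   # ~ 4.0
```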
The spherical coordinates are related to cartesian coordinates by

    x = r\sin\theta\cos\varphi \qquad r = \sqrt{x^2 + y^2 + z^2}
    y = r\sin\theta\sin\varphi \qquad \theta = \arctan\left(\sqrt{x^2 + y^2}/z\right)
    z = r\cos\theta \qquad \varphi = \arctan(y/x)

The unit vectors are related by

    e_r = \sin\theta\cos\varphi\, e_x + \sin\theta\sin\varphi\, e_y + \cos\theta\, e_z
    e_\theta = \cos\theta\cos\varphi\, e_x + \cos\theta\sin\varphi\, e_y - \sin\theta\, e_z
    e_\varphi = -\sin\varphi\, e_x + \cos\varphi\, e_y
    e_x = \sin\theta\cos\varphi\, e_r + \cos\theta\cos\varphi\, e_\theta - \sin\varphi\, e_\varphi
    e_y = \sin\theta\sin\varphi\, e_r + \cos\theta\sin\varphi\, e_\theta + \cos\varphi\, e_\varphi
    e_z = \cos\theta\, e_r - \sin\theta\, e_\theta

Derivatives of the unit vectors are

    \frac{\partial e_r}{\partial\theta} = e_\theta, \quad \frac{\partial e_\theta}{\partial\theta} = -e_r,
    \frac{\partial e_r}{\partial\varphi} = e_\varphi \sin\theta, \quad \frac{\partial e_\theta}{\partial\varphi} = e_\varphi \cos\theta,
    \frac{\partial e_\varphi}{\partial\varphi} = -e_r \sin\theta - e_\theta \cos\theta

All other derivatives are zero.

Differential operators are given by [45]

    \nabla = e_r \frac{\partial}{\partial r} + e_\theta \frac{1}{r}\frac{\partial}{\partial\theta} + e_\varphi \frac{1}{r\sin\theta}\frac{\partial}{\partial\varphi}, \quad
    \nabla\Phi = e_r \frac{\partial\Phi}{\partial r} + e_\theta \frac{1}{r}\frac{\partial\Phi}{\partial\theta} + e_\varphi \frac{1}{r\sin\theta}\frac{\partial\Phi}{\partial\varphi}

    \nabla^2\Phi = \frac{1}{r^2}\frac{\partial}{\partial r}\left( r^2\frac{\partial\Phi}{\partial r} \right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left( \sin\theta\frac{\partial\Phi}{\partial\theta} \right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2\Phi}{\partial\varphi^2}

    \nabla \cdot v = \frac{1}{r^2}\frac{\partial}{\partial r}(r^2 v_r) + \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}(v_\theta \sin\theta) + \frac{1}{r\sin\theta}\frac{\partial v_\varphi}{\partial\varphi}

    \nabla \times v = e_r \frac{1}{r\sin\theta}\left[ \frac{\partial}{\partial\theta}(v_\varphi \sin\theta) - \frac{\partial v_\theta}{\partial\varphi} \right]
      + e_\theta \left[ \frac{1}{r\sin\theta}\frac{\partial v_r}{\partial\varphi} - \frac{1}{r}\frac{\partial}{\partial r}(r v_\varphi) \right]
      + e_\varphi \left[ \frac{1}{r}\frac{\partial}{\partial r}(r v_\theta) - \frac{1}{r}\frac{\partial v_r}{\partial\theta} \right]

    \nabla \cdot \tau = e_r \left[ \frac{1}{r^2}\frac{\partial}{\partial r}(r^2\tau_{rr}) + \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}(\tau_{\theta r}\sin\theta) + \frac{1}{r\sin\theta}\frac{\partial\tau_{\varphi r}}{\partial\varphi} - \frac{\tau_{\theta\theta} + \tau_{\varphi\varphi}}{r} \right]
      + e_\theta \left[ \frac{1}{r^3}\frac{\partial}{\partial r}(r^3\tau_{r\theta}) + \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}(\tau_{\theta\theta}\sin\theta) + \frac{1}{r\sin\theta}\frac{\partial\tau_{\varphi\theta}}{\partial\varphi} + \frac{\tau_{\theta r} - \tau_{r\theta}}{r} - \frac{\cot\theta}{r}\tau_{\varphi\varphi} \right]
      + e_\varphi \left[ \frac{1}{r^3}\frac{\partial}{\partial r}(r^3\tau_{r\varphi}) + \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}(\tau_{\theta\varphi}\sin\theta) + \frac{1}{r\sin\theta}\frac{\partial\tau_{\varphi\varphi}}{\partial\varphi} + \frac{\tau_{\varphi r} - \tau_{r\varphi}}{r} + \frac{\cot\theta}{r}\tau_{\varphi\theta} \right]

    \nabla v = e_r e_r \frac{\partial v_r}{\partial r} + e_r e_\theta \frac{\partial v_\theta}{\partial r} + e_r e_\varphi \frac{\partial v_\varphi}{\partial r}
      + e_\theta e_r \left( \frac{1}{r}\frac{\partial v_r}{\partial\theta} - \frac{v_\theta}{r} \right) + e_\theta e_\theta \left( \frac{1}{r}\frac{\partial v_\theta}{\partial\theta} + \frac{v_r}{r} \right) + e_\theta e_\varphi \frac{1}{r}\frac{\partial v_\varphi}{\partial\theta}
      + e_\varphi e_r \left( \frac{1}{r\sin\theta}\frac{\partial v_r}{\partial\varphi} - \frac{v_\varphi}{r} \right) + e_\varphi e_\theta \left( \frac{1}{r\sin\theta}\frac{\partial v_\theta}{\partial\varphi} - \frac{\cot\theta}{r} v_\varphi \right)
      + e_\varphi e_\varphi \left( \frac{1}{r\sin\theta}\frac{\partial v_\varphi}{\partial\varphi} + \frac{v_r}{r} + \frac{\cot\theta}{r} v_\theta \right)

    \nabla^2 v = e_r \left[ \frac{\partial}{\partial r}\left( \frac{1}{r^2}\frac{\partial}{\partial r}(r^2 v_r) \right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left( \sin\theta\frac{\partial v_r}{\partial\theta} \right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2 v_r}{\partial\varphi^2} - \frac{2}{r^2\sin\theta}\frac{\partial}{\partial\theta}(v_\theta\sin\theta) - \frac{2}{r^2\sin\theta}\frac{\partial v_\varphi}{\partial\varphi} \right]
      + e_\theta \left[ \frac{1}{r^2}\frac{\partial}{\partial r}\left( r^2\frac{\partial v_\theta}{\partial r} \right) + \frac{1}{r^2}\frac{\partial}{\partial\theta}\left( \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}(v_\theta\sin\theta) \right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2 v_\theta}{\partial\varphi^2} + \frac{2}{r^2}\frac{\partial v_r}{\partial\theta} - \frac{2\cot\theta}{r^2\sin\theta}\frac{\partial v_\varphi}{\partial\varphi} \right]
      + e_\varphi \left[ \frac{1}{r^2}\frac{\partial}{\partial r}\left( r^2\frac{\partial v_\varphi}{\partial r} \right) + \frac{1}{r^2}\frac{\partial}{\partial\theta}\left( \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}(v_\varphi\sin\theta) \right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2 v_\varphi}{\partial\varphi^2} + \frac{2}{r^2\sin\theta}\frac{\partial v_r}{\partial\varphi} + \frac{2\cot\theta}{r^2\sin\theta}\frac{\partial v_\theta}{\partial\varphi} \right]
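A quick numerical check of the spherical Laplacian above: the radial part applied to Phi = 1/r must vanish away from the origin, since 1/r is harmonic. This is a minimal flux-form central-difference sketch.

```python
# Check (1/r^2) d/dr (r^2 dPhi/dr) on Phi = 1/r; the result should be ~ 0
h = 1e-4

def phi(r):
    return 1.0 / r

def laplacian_sph(r):
    dplus = (phi(r + h) - phi(r)) / h
    dminus = (phi(r) - phi(r - h)) / h
    return ((r + h / 2) ** 2 * dplus - (r - h / 2) ** 2 * dminus) / (r ** 2 * h)

print(abs(laplacian_sph(2.0)))   # ~ 0: 1/r is harmonic for r != 0
```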

6. Ordinary Differential Equations as Initial Value Problems

A differential equation for a function that depends on only one variable (often time) is called an ordinary differential equation. The general solution to the differential equation includes many possibilities; the boundary or initial conditions are required to specify which of those are desired. If all conditions are at one point, the problem is an initial value problem and can be integrated from that point on. If some of the conditions are available at one point and others at another point, the ordinary differential equations become two-point boundary value problems, which are treated in Chapter 7. Initial value problems as ordinary differential equations arise in control of lumped-parameter models, transient models of stirred tank reactors, polymerization reactions and plug-flow reactors, and generally in models where no spatial gradients occur in the unknowns.

6.1. Solution by Quadrature

When only one equation exists, even if it is nonlinear, solving it by quadrature may be possible. For

    \frac{dy}{dt} = f(y), \quad y(0) = y_0

the problem can be separated

    \frac{dy}{f(y)} = dt

and integrated:

    \int_{y_0}^{y} \frac{dy'}{f(y')} = \int_0^t dt = t

If the quadrature can be performed analytically then the exact solution has been found.

For example, consider the kinetics problem with a second-order reaction:

    \frac{dc}{dt} = -k c^2, \quad c(0) = c_0

To find the function of the concentration versus time, the variables can be separated and integrated:

    \frac{dc}{c^2} = -k\, dt, \quad -\frac{1}{c} = -k t + D

Application of the initial conditions gives the solution:

    \frac{1}{c} = k t + \frac{1}{c_0}

For other ordinary differential equations an integrating factor is useful. Consider the problem governing a stirred tank with entering fluid having concentration c_in and flow rate F, as shown in Figure 18. The flow rate out is also F and the volume of the tank is V. If the tank is completely mixed, the concentration in the tank is c and the concentration of the fluid leaving the tank is also c. The differential equation is then

    V \frac{dc}{dt} = F (c_{in} - c), \quad c(0) = c_0

Figure 18. Stirred tank

Upon rearrangement,

    \frac{dc}{dt} + \frac{F}{V} c = \frac{F}{V} c_{in}

is obtained. An integrating factor is used to solve this equation. The integrating factor is a function that can be used to turn the left-hand side into an exact differential and can be found by using Frechet differentials [51]. In this case,

    \exp\left(\frac{Ft}{V}\right)\left[ \frac{dc}{dt} + \frac{F}{V} c \right] = \frac{d}{dt}\left[ \exp\left(\frac{Ft}{V}\right) c \right]

Thus, the differential equation can be written as

    \frac{d}{dt}\left[ \exp\left(\frac{Ft}{V}\right) c \right] = \exp\left(\frac{Ft}{V}\right) \frac{F}{V} c_{in}

This can be integrated once to give

    \exp\left(\frac{Ft}{V}\right) c = c(0) + \frac{F}{V} \int_0^t \exp\left(\frac{Ft'}{V}\right) c_{in}(t')\, dt'

or

    c(t) = \exp\left(-\frac{Ft}{V}\right) c_0 + \frac{F}{V} \int_0^t \exp\left(-\frac{F(t - t')}{V}\right) c_{in}(t')\, dt'

If the integral on the right-hand side can be calculated, the solution can be obtained analytically. If not, the numerical methods described in the next sections can be used. Laplace transforms can also be attempted. However, an analytic solution is so useful that quadrature and an integrating factor should be tried before resorting to numerical solution.

6.2. Explicit Methods

Consider the ordinary differential equation

    \frac{dy}{dt} = f(y)

Multiple equations that are still initial value problems can be handled by using the same techniques discussed here.
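For a constant inlet concentration the stirred-tank convolution integral above reduces to c(t) = c_in + (c_0 - c_in) exp(-Ft/V). The sketch below, with illustrative parameter values, compares that closed form with a trapezoid-rule evaluation of the integral.

```python
import math

F, V, c0, cin = 2.0, 10.0, 0.0, 1.0   # illustrative values

def c_analytic(t):
    # closed form for constant c_in
    return cin + (c0 - cin) * math.exp(-F * t / V)

def c_quadrature(t, n=2000):
    # exp(-F*t/V)*c0 + (F/V) * int_0^t exp(-F*(t-t')/V) * c_in dt'
    dt = t / n
    g = lambda tp: math.exp(-F * (t - tp) / V) * cin
    s = 0.5 * (g(0.0) + g(t)) + sum(g(i * dt) for i in range(1, n))
    return math.exp(-F * t / V) * c0 + (F / V) * s * dt

print(c_analytic(5.0), c_quadrature(5.0))   # both ~ 0.632
```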
 
y (n) +F y (n1) , y (n2) ,. . ., y  , y = 0 Adams Bashforth Methods. The second-
with initial conditions

G_i( y^(n−1)(0), y^(n−2)(0), . . ., y′(0), y(0) ) = 0,   i = 1, . . ., n

can be converted into a set of first-order equations. By using

y_i ≡ d^(i−1)y/dt^(i−1) = (d/dt)( d^(i−2)y/dt^(i−2) ) = dy_(i−1)/dt

the higher order equation can be written as a set of first-order equations:

dy1/dt = y2
dy2/dt = y3
dy3/dt = y4
. . .
dyn/dt = F( y_(n−1), y_(n−2), . . ., y2, y1 )

The initial conditions would have to be specified for variables y1(0), . . ., yn(0), or equivalently y(0), . . ., y^(n−1)(0). The set of equations is then written as

dy/dt = f(y, t)

All the methods in this chapter are described for a single equation; the methods apply to multiple equations as well. Taking the single equation in the form

dy/dt = f(y)

multiplying by dt, and integrating once yields

∫[t^n → t^(n+1)] (dy/dt) dt = ∫[t^n → t^(n+1)] f( y(t) ) dt

This is

y^(n+1) = y^n + ∫[t^n → t^(n+1)] (dy/dt) dt

The last substitution gives a basis for the various methods. Different interpolation schemes for y(t) provide different integration schemes; using low-order interpolation gives low-order integration schemes [3]. Euler's method is first order:

y^(n+1) = y^n + Δt f(y^n)

The second-order Adams–Bashforth method is

y^(n+1) = y^n + (Δt/2) [ 3 f(y^n) − f(y^(n−1)) ]

The fourth-order Adams–Bashforth method is

y^(n+1) = y^n + (Δt/24) [ 55 f(y^n) − 59 f(y^(n−1)) + 37 f(y^(n−2)) − 9 f(y^(n−3)) ]

Notice that the higher order explicit methods require knowing the solution (or the right-hand side) evaluated at times in the past. Because these were calculated to get to the current time, this presents no problem except for starting the evaluation. Then, Euler's method may have to be used with a very small step size for several steps to generate starting values at a succession of time points. The error terms, order of the method, function evaluations per step, and stability limitations are listed in Table 6. The advantage of the fourth-order Adams–Bashforth method is that it uses only one function evaluation per step and yet achieves high-order accuracy. The disadvantage is the necessity of using another method to start.

Runge–Kutta Methods. Runge–Kutta methods are explicit methods that use several function evaluations for each time step. The general form of the methods is

y^(n+1) = y^n + Σ[i=1..v] w_i k_i

with

k_i = Δt f( t^n + c_i Δt, y^n + Σ[j=1..i−1] a_ij k_j )

Runge–Kutta methods traditionally have been written for f(t, y), and that is done here, too. If these equations are expanded and compared with a Taylor series, restrictions can be placed on the parameters of the method to make it first order, second order, etc. Even so, additional parameters can be chosen. A second-order Runge–Kutta method is

y^(n+1) = y^n + (Δt/2) [ f^n + f( t^n + Δt, y^n + Δt f^n ) ]

The midpoint scheme is another second-order Runge–Kutta method:

y^(n+1) = y^n + Δt f( t^n + Δt/2, y^n + (Δt/2) f^n )
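As a sketch of how an explicit multistep method is organized in practice, the fourth-order Adams–Bashforth formula can be coded with an Euler-based startup as the text suggests. The test problem, step size, and number of Euler substeps below are illustrative choices, not taken from the original.

```python
import math

def f(y):
    # illustrative test problem dy/dt = -y, exact solution exp(-t)
    return -y

def adams_bashforth4(y0, dt, nsteps):
    """Fourth-order Adams-Bashforth; the three extra starting values are
    generated with many small Euler steps, as suggested in the text."""
    ys = [y0]
    for _ in range(3):                      # generate starting values
        y = ys[-1]
        nsub = 100                          # very small Euler steps
        for _ in range(nsub):
            y += (dt / nsub) * f(y)
        ys.append(y)
    fs = [f(y) for y in ys]                 # stored right-hand sides
    for n in range(3, nsteps):
        # only one new function evaluation per step from here on
        ys.append(ys[n] + dt / 24.0 *
                  (55*fs[n] - 59*fs[n-1] + 37*fs[n-2] - 9*fs[n-3]))
        fs.append(f(ys[-1]))
    return ys

ys = adams_bashforth4(1.0, 0.1, 10)         # integrate to t = 1
```

After the startup, each step reuses the stored right-hand sides, which is the one-evaluation-per-step advantage listed in Table 6.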
52 Mathematics in Chemical Engineering
Table 6. Properties of integration methods for ordinary differential equations

Method                                Error term           Order   Function evaluations per step   Stability limit, λΔt ≤

Explicit methods
  Euler                               (h²/2) y″            1       1                               2.0
  Second-order Adams–Bashforth        (5/12) h³ y‴         2       1
  Fourth-order Adams–Bashforth        (251/720) h⁵ y⁽⁵⁾    4       1                               0.3
  Second-order Runge–Kutta (midpoint)                      2       2                               2.0
  Runge–Kutta–Gill                                         4       4                               2.8
  Runge–Kutta–Feldberg                y^(n+1) − z^(n+1)    5       6                               3.0

Predictor–corrector methods
  Second-order Runge–Kutta                                 2       2                               2.0
  Adams, fourth-order                                      4       2                               1.3

Implicit methods, stable for all step sizes
  Backward Euler                                           1       many, iterative                 ∞ *
  Trapezoid rule                      −(1/12) h³ y‴        2       many, iterative                 2 *
  Fourth-order Adams–Moulton                               4       many, iterative                 3 *

* Oscillation limit, λΔt ≤.

 
A popular fourth-order method is the Runge–Kutta–Gill method with the formulas

k1 = Δt f( t^n, y^n )
k2 = Δt f( t^n + Δt/2, y^n + k1/2 )
k3 = Δt f( t^n + Δt/2, y^n + a k1 + b k2 )
k4 = Δt f( t^n + Δt, y^n + c k2 + d k3 )

y^(n+1) = y^n + (1/6)(k1 + k4) + (1/3)(b k2 + d k3)

a = (√2 − 1)/2,  b = (2 − √2)/2,  c = −√2/2,  d = 1 + √2/2

Another fourth-order Runge–Kutta method is given by the Runge–Kutta–Feldberg formulas [52]; although the method is fourth-order, it achieves fifth-order accuracy. The popular integration package RKF 45 is based on this method.

k1 = Δt f( t^n, y^n )
k2 = Δt f( t^n + Δt/4, y^n + k1/4 )
k3 = Δt f( t^n + (3/8)Δt, y^n + (3/32)k1 + (9/32)k2 )
k4 = Δt f( t^n + (12/13)Δt, y^n + (1932/2197)k1 − (7200/2197)k2 + (7296/2197)k3 )
k5 = Δt f( t^n + Δt, y^n + (439/216)k1 − 8 k2 + (3680/513)k3 − (845/4104)k4 )
k6 = Δt f( t^n + Δt/2, y^n − (8/27)k1 + 2 k2 − (3544/2565)k3 + (1859/4104)k4 − (11/40)k5 )

y^(n+1) = y^n + (25/216)k1 + (1408/2565)k3 + (2197/4104)k4 − (1/5)k5

z^(n+1) = y^n + (16/135)k1 + (6656/12825)k3 + (28561/56430)k4 − (9/50)k5 + (2/55)k6

The value of y^(n+1) − z^(n+1) is an estimate of the error in y^(n+1) and can be used in step-size control schemes.

Generally, a high-order method should be used to achieve high accuracy. The Runge–Kutta–Gill method is popular because it is high order and does not require a starting method (as does the fourth-order Adams–Bashforth method). However, it requires four function evaluations per time step, or four times as many as the Adams–Bashforth method. For problems in which the function evaluations are a significant portion of the calculation time this might be important. Given the speed of computers and the widespread availability of desktop computers, the efficiency of a method is most important only for very large problems that are going to be solved many times. For other problems the most important criterion for choosing a method is probably the time the user spends setting up the problem.

The stability of an integration method is best estimated by determining the rational polynomial corresponding to the method. Apply the method to the equation

dy/dt = −λ y,   y(0) = 1

and determine the formula for r_mn:

y^(k+1) = r_mn(λΔt) y^k

The rational polynomial is defined as

r_mn(z) = p_n(z) / q_m(z)

and is an approximation to exp(−z), called a Padé approximation. The stability limits are the largest positive z for which

|r_mn(z)| ≤ 1

The method is A acceptable if the inequality holds for Re z > 0. It is A(0) acceptable if the inequality holds for z real, z > 0 [53]. The method will not induce oscillations about the true solution provided

r_mn(z) > 0

A method is L acceptable if it is A acceptable and

lim[z → ∞] r_mn(z) = 0

For example, Euler's method gives

y^(n+1) = y^n − λΔt y^n,  or  y^(n+1) = (1 − λΔt) y^n,  or  r_mn = 1 − λΔt

The stability limit is then

λΔt ≤ 2

The Euler method will not oscillate provided

λΔt ≤ 1

The stability limits listed in Table 6 are obtained in this fashion. The limit for the Euler method is 2.0; for the Runge–Kutta–Gill method it is 2.785; for the Runge–Kutta–Feldberg method it is 3.020. The rational polynomials for the various explicit methods are illustrated in Figure 19. As can be seen, the methods approximate the exact solution well as λΔt approaches zero, and the higher order methods give a better approximation at high values of λΔt.

Figure 19. Rational approximations for explicit methods
a) Euler; b) Runge–Kutta 2; c) Runge–Kutta–Gill; d) Exact curve; e) Runge–Kutta–Feldberg

In solving sets of equations

dy/dt = A y + f,   y(0) = y0

all the eigenvalues of the matrix A must be examined. Finlayson [3] and Amundson [54, p. 197 – 199] both show how to transform these equations into an orthogonal form so that each equation becomes one equation in one unknown, for which single equation analysis applies. For linear problems the eigenvalues do not change, so the stability and oscillation limits must be satisfied for every eigenvalue of the matrix A. When solving nonlinear problems the equations are linearized about the solution at the local time, and the analysis applies for small changes in time, after which a new analysis about the new solution must be made. Thus, for nonlinear problems the eigenvalues keep changing.

Richardson extrapolation can be used to improve the accuracy of a method. Step forward one step Δt with a p-th order method. Then redo the problem, this time stepping forward from the same initial point but in two steps of length Δt/2, thus ending at the same point. Call the solution of the one-step calculation y1 and the solution of the two-step calculation y2. Then an improved solution at the new time is given by

y = (2^p y2 − y1) / (2^p − 1)

This gives a good estimate provided Δt is small enough that the method is truly convergent with order p. This process can also be repeated in the same way Romberg's method was used for quadrature (see Section 2.4).

The accuracy of a numerical calculation depends on the step size used, and this is chosen automatically by efficient codes. For example, in the Euler method the local truncation error LTE is

LTE = (Δt²/2) y″_n

Yet the second derivative can be evaluated by using the difference formulas as

y″_n ≈ ( y′_n − y′_(n−1) ) / Δt = ( f_n − f_(n−1) ) / Δt

Thus, by monitoring the difference between the right-hand side from one time step to another, an estimate of the truncation error is obtained. This error can be reduced by reducing Δt. If the user specifies a criterion for the largest local error estimate, then Δt is reduced to meet that criterion. Also, Δt is increased to as large a value as possible, because this shortens computation time. If the local truncation error has been achieved (and estimated) by using a step size Δt1,

LTE = c Δt1^p

and the desired error is ε, to be achieved using a step size Δt2,

ε = c Δt2^p

then the next step size Δt2 is taken from

LTE/ε = ( Δt1/Δt2 )^p

Generally, things should not be changed too often or too drastically. Thus one may choose not to increase Δt by more than a factor (such as 2) or to increase Δt more than once every so many steps (such as 5) [55]. In the most sophisticated codes the alternative exists to change the order of the method as well. In this case, the truncation errors of the orders one higher and one lower than the current one are estimated, and a choice is made depending on the expected step size and work.

6.3. Implicit Methods

By using different interpolation formulas, involving y^(n+1), implicit integration methods can be derived. Implicit methods result in a nonlinear equation to be solved for y^(n+1), so that iterative methods must be used. The backward Euler method is a first-order method:

y^(n+1) = y^n + Δt f( y^(n+1) )

The trapezoid rule (see Section 2.4) is a second-order method:

y^(n+1) = y^n + (Δt/2) [ f(y^n) + f( y^(n+1) ) ]

When the trapezoid rule is used with the finite difference method for solving partial differential equations it is called the Crank–Nicolson method. Adams methods exist as well, and the fourth-order Adams–Moulton method is

y^(n+1) = y^n + (Δt/24) [ 9 f( y^(n+1) ) + 19 f(y^n) − 5 f( y^(n−1) ) + f( y^(n−2) ) ]

The properties of these methods are given in Table 6. The implicit methods are stable for any step size but do require the solution of a set of nonlinear equations, which must be solved iteratively. An application to dynamic distillation problems is given in [56].

All these methods can be written in the form

y^(n+1) = Σ[i=1..k] α_i y^(n+1−i) + Δt Σ[i=0..k] β_i f( y^(n+1−i) )

or

y^(n+1) = Δt β_0 f( y^(n+1) ) + w_n

where w_n represents known information. This equation (or set of equations for more than one differential equation) can be solved by using successive substitution:

y^(n+1, k+1) = Δt β_0 f( y^(n+1, k) ) + w_n

Here, the superscript k refers to an iteration counter. The successive substitution method is guaranteed to converge, provided the first derivative of the function is bounded and a small enough time step is chosen. Thus, if it has not converged within a few iterations, Δt can be reduced and the iterations begun again. The Newton–Raphson method (see Section 1.2) can also be used. In many computer codes, iteration is allowed to proceed only a fixed number of times (e.g., three) before Δt is reduced. Because a good history of the function is available from previous time steps, a good initial guess is usually possible.
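The successive-substitution iteration for the backward Euler method can be sketched as follows; the test equation, tolerance, and iteration cap are illustrative choices, not from the original.

```python
def backward_euler_step(f, y, dt, tol=1e-12, maxit=100):
    """Solve y_new = y + dt*f(y_new) by successive substitution,
    starting from the previous value as the initial guess."""
    y_new = y
    for _ in range(maxit):
        y_next = y + dt * f(y_new)
        if abs(y_next - y_new) < tol:
            return y_next
        y_new = y_next
    raise RuntimeError("not converged: reduce dt and restart the iteration")

f = lambda y: -10.0 * y
y = 1.0
for _ in range(10):            # here dt*|df/dy| = 0.5 < 1, so it converges
    y = backward_euler_step(f, y, 0.05)
# for this linear problem each step satisfies y_new = y / (1 + 10*dt) = y / 1.5
```

If `dt*|df/dy|` exceeds one, the fixed-point iteration diverges; the code raises, and the remedy, as the text says, is to reduce Δt and begin the iterations again (or switch to Newton–Raphson).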

The best software packages for stiff equations (see Section 6.4) use Gear's backward difference formulas. The formulas of various orders are [57]:

1:  y^(n+1) = y^n + Δt f( y^(n+1) )

2:  y^(n+1) = (4/3) y^n − (1/3) y^(n−1) + (2/3) Δt f( y^(n+1) )

3:  y^(n+1) = (18/11) y^n − (9/11) y^(n−1) + (2/11) y^(n−2) + (6/11) Δt f( y^(n+1) )

4:  y^(n+1) = (48/25) y^n − (36/25) y^(n−1) + (16/25) y^(n−2) − (3/25) y^(n−3) + (12/25) Δt f( y^(n+1) )

5:  y^(n+1) = (300/137) y^n − (300/137) y^(n−1) + (200/137) y^(n−2) − (75/137) y^(n−3) + (12/137) y^(n−4) + (60/137) Δt f( y^(n+1) )

The stability properties of these methods are determined in the same way as explicit methods. They are always expected to be stable, no matter what the value of Δt is, and this is confirmed in Figure 20.

Figure 20. Rational approximations for implicit methods
a) Backward Euler; b) Exact curve; c) Trapezoid; d) Euler

Predictor–corrector methods can be employed, in which an explicit method is used to predict the value of y^(n+1). This value is then used in an implicit method to evaluate f( y^(n+1) ).

6.4. Stiffness

Why is it desirable to use implicit methods that lead to sets of algebraic equations that must be solved iteratively, whereas explicit methods lead to a direct calculation? The reason lies in the stability limits; to understand their impact, the concept of stiffness is necessary. When modeling a physical situation, the time constants governing different phenomena should be examined. Consider flow through a packed bed, as illustrated in Figure 21.

Figure 21. Flow through packed bed

The superficial velocity u is given by

u = Q/A

where Q is the volumetric flow rate, A is the cross-sectional area, and φ is the void fraction. A time constant for flow through the device is then

t_flow = φ L/u = φ A L/Q

where L is the length of the packed bed. If a chemical reaction occurs, with a reaction rate given by

Moles/(Volume · time) = k c

where k is the rate constant (time⁻¹) and c is the concentration (moles/volume), the characteristic time for the reaction is

t_rxn = 1/k

If diffusion occurs inside the catalyst, the time constant is

t_internal diffusion = ε R²/D_e

where ε is the porosity of the catalyst, R is the catalyst radius, and D_e is the effective diffusion coefficient inside the catalyst.
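Gear's second-order formula above can be demonstrated on the stiff test equation dy/dt = −λy. Because f is linear, the implicit equation can be solved in closed form; the values of λ and Δt below are illustrative, chosen so that λΔt = 10, far beyond any explicit stability limit.

```python
import math

lam, dt = 1000.0, 0.01          # lam*dt = 10: an explicit method would blow up
y_prev = 1.0                    # y^0
y_curr = math.exp(-lam * dt)    # y^1 seeded with the exact value (startup)
for _ in range(99):
    # formula 2: y_new = (4/3)y_curr - (1/3)y_prev + (2/3)*dt*(-lam*y_new),
    # solved here for y_new since the right-hand side is linear
    y_new = (4.0/3.0 * y_curr - 1.0/3.0 * y_prev) / (1.0 + 2.0/3.0 * lam * dt)
    y_prev, y_curr = y_curr, y_new
# the computed solution decays toward zero and stays bounded
```

For a nonlinear f the same formula would require the Newton–Raphson or successive-substitution iterations of Section 6.3 at every step.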

The time constant for heat transfer is

t_internal heat transfer = R²/α = ρ_s C_s R²/k_e

where ρ_s is the catalyst density, C_s is the catalyst heat capacity per unit mass, k_e is the effective thermal conductivity of the catalyst, and α is the thermal diffusivity. The time constants for diffusion of mass and heat through a boundary layer surrounding the catalyst are

t_external diffusion = R/k_g

t_external heat transfer = ρ_s C_s R/h_p

where k_g and h_p are the mass-transfer and heat-transfer coefficients, respectively. The importance of examining these time constants comes from realization that their orders of magnitude differ greatly. For example, in the model of an automobile catalytic converter [58] the time constant for internal diffusion was 0.3 s, for internal heat transfer 21 s, and for flow through the device 0.003 s. Flow through the device is so fast that it might as well be instantaneous. Thus, the time derivatives could be dropped from the mass balance equations for the flow, leading to a set of differential-algebraic equations (see below). If the original equations had to be solved, the eigenvalues would be roughly proportional to the inverse of the time constants. The time interval over which to integrate would be a small number (e.g., five) multiplied by the longest time constant. Yet the explicit stability limitation applies to all the eigenvalues, and the largest eigenvalue would determine the largest permissible time step. Here, λ = 1/0.003 s⁻¹. Very small time steps would have to be used, e.g., Δt ≤ 2 × 0.003 s, but a long integration would be required to reach steady state. Such problems are termed stiff, and implicit methods are very useful for them. In that case the stable time constant is not of any interest, because any time step is stable. What is of interest is the largest step for which a solution can be found. If a time step larger than the smallest time constant is used, then any phenomena represented by that smallest time constant will be overlooked; at least transients in it will be smeared over. However, the method will still be stable. Thus, if the very rapid transients of part of the model are not of interest, they can be ignored and an implicit method used [59].

The idea of stiffness is best explained by considering a system of linear equations:

dy/dt = A y

Let λ_i be the eigenvalues of the matrix A. This system can be converted into a system of n equations, each of them having only one unknown; the eigenvalues of the new system are the same as the eigenvalues of the original system [3, pp. 39 – 42], [54, pp. 197 – 199]. Then the stiffness ratio SR is defined as [53, p. 32]

SR = max_i |Re(λ_i)| / min_i |Re(λ_i)|

SR = 20 is not stiff, SR = 10³ is stiff, and SR = 10⁶ is very stiff. If the problem is nonlinear, the solution is expanded about the current state:

dy_i/dt = f_i[ y(t_n) ] + Σ[j=1..n] (∂f_i/∂y_j) [ y_j − y_j(t_n) ]

The question of stiffness then depends on the eigenvalues of the Jacobian at the current time. Consequently, for nonlinear problems the problem can be stiff during one time period and not stiff during another. Packages have been developed for problems such as these. Although the chemical engineer may not actually calculate the eigenvalues, knowing that they determine the stability and accuracy of the numerical scheme, as well as the step size employed, is useful.

6.5. Differential Algebraic Systems

Sometimes models involve ordinary differential equations subject to some algebraic constraints. For example, the equations governing one equilibrium stage (as in a distillation column) are

M (dx^n/dt) = V^(n+1) y^(n+1) − L^n x^n − V^n y^n + L^(n−1) x^(n−1)

x^(n−1) − x^n = E^n ( x^(n−1) − x^(*,n) )

Σ[i=1..N] x_i = 1

where x and y are the mole fractions in the liquid and vapor, respectively; L and V are liquid and vapor flow rates, respectively; M is the holdup; and the superscript n is the stage number. The efficiency is E, and the concentration in equilibrium with the vapor is x*. The first equation is an ordinary differential equation for the mass of one component on the stage, whereas the third equation represents a constraint that the mass fractions add to one.
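For a 2 × 2 Jacobian, the stiffness ratio defined in Section 6.4 can be evaluated directly from the quadratic characteristic equation. The matrix entries below are hypothetical, chosen to give one slow and one fast mode.

```python
import math

# hypothetical Jacobian entries for a linearized two-variable problem
a11, a12, a21, a22 = -500.0, 499.0, 499.0, -500.0
tr = a11 + a22                          # trace
det = a11 * a22 - a12 * a21             # determinant
disc = math.sqrt(tr*tr - 4.0*det)       # real here: the matrix is symmetric
lam1 = (tr + disc) / 2.0                # slow eigenvalue, -1
lam2 = (tr - disc) / 2.0                # fast eigenvalue, -999
SR = max(abs(lam1), abs(lam2)) / min(abs(lam1), abs(lam2))
# SR is 999, close to 10**3: by the criterion above, this problem is stiff
```

An explicit method would have its step limited by the fast eigenvalue (λΔt ≤ 2 gives Δt ≤ 0.002) even though the solution evolves on the slow time scale of order one.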

As a second example, the following kinetics problem can be considered:

dc1/dt = f(c1, c2)

dc2/dt = k1 c1 − k2 c2²

The first equation could be the equation for a stirred tank reactor, for example. Suppose both k1 and k2 are large. The problem is then stiff, but the second equation could be taken at equilibrium. If

c1 ⇄ 2 c2

the equilibrium condition is then

c2²/c1 = k1/k2 ≡ K

Under these conditions the problem becomes

dc1/dt = f(c1, c2)

0 = k1 c1 − k2 c2²

Thus, a differential-algebraic system of equations is obtained. In this case, the second equation can be solved and substituted into the first to obtain differential equations, but in the general case that is not possible.

Differential-algebraic equations can be written in the general notation as

F( t, y, dy/dt ) = 0

or the variables and equations may be separated according to whether they come primarily from differential [y(t)] or algebraic equations [x(t)]:

dy/dt = f(t, y, x),   g(t, y, x) = 0

Another form is not strictly a differential-algebraic set of equations, but the same principles apply; this form arises frequently when the Galerkin finite element method is applied:

A (dy/dt) = f(y)

The computer program DASSL [60, 61] can solve such problems. They can also be solved by writing the differential equation as

dy/dt = A⁻¹ f(y)

When A is independent of y, the inverse (from LU decompositions) need be computed only once. In actuality, higher order backward-difference Gear methods are used in the computer program DASSL [60, 61].

Differential-algebraic systems are more complicated than differential systems because the solution may not always be defined. Pontelides et al. [62] introduced the term index to identify possible problems. The index is defined as the minimum number of times the equations must be differentiated with respect to time to convert the system to a set of ordinary differential equations. These higher derivatives may not exist, and the process places limits on which variables can be given initial values. Sometimes the initial values must be constrained by the algebraic equations [62]. For a differential-algebraic system modeling a distillation tower, the index depends on the specification of pressure for the column [62]. Several chemical engineering examples of differential-algebraic systems and a solution for one involving two-phase flow are given in [63].

6.6. Computer Software

Efficient software packages are widely available for solving ordinary differential equations as initial value problems. In each of the packages the user specifies the differential equation to be solved and a desired error criterion. The package then integrates in time and adjusts the step size to achieve the error criterion, within the limitations imposed by stability.

A popular explicit Runge–Kutta package is RKF 45. An estimate of the truncation error at each step is available. Then the step size can be reduced until this estimate is below the user-specified tolerance. The method is thus automatic, and the user is assured of the results. Note, however, that the tolerance is set on the local truncation error, namely, from one step to another, whereas the user is generally interested in the global truncation error, i.e., the error after several steps. The global error is generally made smaller by making the tolerance smaller, but the absolute accuracy is not the same as the tolerance. If the problem is stiff, then very small step sizes are used and the computation becomes very lengthy.

The RKF 45 code discovers this and returns control to the user with a message indicating the problem is too hard to solve with RKF 45.

A popular implicit package is LSODE, a version of Gear's method [57] written by Alan Hindmarsh at Lawrence Livermore Laboratory. In this package, the user specifies the differential equation to be solved and the tolerance desired. Now the method is implicit and, therefore, stable for any step size. The accuracy may not be acceptable, however, and sets of nonlinear equations must be solved. Thus, in practice, the step size is limited but not nearly so much as in the Runge–Kutta methods. In these packages both the step size and the order of the method are adjusted by the package itself. Suppose a k-th order method is being used. The truncation error is determined by the (k + 1)-th order derivative. This is estimated by using difference formulas and the values of the right-hand sides at previous times. An estimate is also made for the k-th and (k + 2)-th derivative. Then, the errors in a (k − 1)-th order method, a k-th order method, and a (k + 1)-th order method can be estimated. Furthermore, the step size required to satisfy the tolerance with each of these methods can be determined. Then the method and step size for the next step that achieves the biggest step can be chosen, with appropriate adjustments due to the different work required for each order. The package generally starts with a very small step size and a first-order method, the backward Euler method. Then it integrates along, adjusting the order up (and later down) depending on the error estimates. The user is thus assured that the local truncation error meets the tolerance.

A further difficulty arises because the set of nonlinear equations must be solved. Usually a good guess of the solution is available, because the solution is evolving in time and past history can be extrapolated. Thus, the Newton–Raphson method will usually converge. The package protects itself, though, by only doing a few (i.e., three) iterations. If convergence is not reached within these iterations, the step size is reduced and the calculation is redone for that time step. The convergence theorem for the Newton–Raphson method (Chap. 1) indicates that the method will converge if the step size is small enough. Thus, the method is guaranteed to work. Further economies are possible. The Jacobian needed in the Newton–Raphson method can be fixed over several time steps. Then if the iteration does not converge, the Jacobian can be reevaluated at the current time step. If the iteration still does not converge, then the step size is reduced and a new Jacobian is evaluated. The successive substitution method can also be used, which is even faster, except that it may not converge. However, it too will converge if the time step is small enough.

The Runge–Kutta methods give extremely good accuracy, especially when the step size is kept small for stability reasons. If the problem is stiff, though, backward difference implicit methods must be used. Many chemical reactor problems are stiff, necessitating the use of implicit methods. In the MATLAB suite of ODE solvers, ode45 uses a revision of the RKF 45 program, while the ode15s program uses an improved backward difference method. Ref. [64] gives details of the programs in MATLAB. Fortunately, many packages are available. On the NIST web page, http://gams.nist.gov/, choose "problem decision tree", and then "differential and integral equations" to find packages which can be downloaded. On the Netlib web site, http://www.netlib.org/, choose "ode" to find packages which can be downloaded. Using Microsoft Excel to solve ordinary differential equations is cumbersome, except for the simplest problems.

6.7. Stability, Bifurcations, Limit Cycles

In this section, bifurcation theory is discussed in a general way. Some aspects of this subject involve the solution of nonlinear equations; other aspects involve the integration of ordinary differential equations; applications include chaos and fractals as well as unusual operation of some chemical engineering equipment. An excellent introduction to the subject and details needed to apply the methods are given in [65]. For more details of the algorithms described below and a concise survey with some chemical engineering examples, see [66] and [67]. Bifurcation results are closely connected with stability of the steady states, which is essentially a transient phenomenon.

Consider the problem

∂u/∂t = F(u, λ)

Figure 22. Limit points and bifurcation-limit points
A) Limit point (or turning point); B) Bifurcation-limit point (or singular turning point or bifurcation point)

The variable u can be a vector, which makes F a vector, too. Here, F represents a set of equations that can be solved for the steady state:

F(u, λ) = 0

If the Newton–Raphson method is applied,

F_u^s δu^s = −F(u^s, λ)

u^(s+1) = u^s + δu^s

is obtained, where

F_u^s = (∂F/∂u)(u^s)

is the Jacobian. Look at some property of the solution, perhaps the value at a certain point or the maximum value or an integral of the solution. This property is plotted versus the parameter λ; typical plots are shown in Figure 22. At the point shown in Figure 22 A, the determinant of the Jacobian is zero:

det F_u = 0

For the limit point,

∂F/∂λ ≠ 0

whereas for the bifurcation-limit point

∂F/∂λ = 0

The stability of the steady solutions is also of interest. Suppose a steady solution u_ss; the function u is written as the sum of the known steady state and a perturbation u′:

u = u_ss + u′

This expression is substituted into the original equation and linearized about the steady-state value:

∂u_ss/∂t + ∂u′/∂t = F(u_ss + u′, λ) ≈ F(u_ss, λ) + (∂F/∂u)|_(u_ss) u′ + · · ·

The result is

∂u′/∂t = F_u^ss u′

A solution of the form

u′(x, t) = e^(σt) X(x)

gives

σ e^(σt) X = F_u^ss e^(σt) X

The exponential term can be factored out, and

( F_u^ss − σ I ) X = 0

A solution exists for X if and only if

det | F_u^ss − σ I | = 0

The σ are the eigenvalues of the Jacobian. Now clearly if Re(σ) > 0 then u′ grows with time, and the steady solution u_ss is said to be unstable to small disturbances. If Im(σ) = 0 it is called stationary instability, and the disturbance would grow monotonically, as indicated in Figure 23 A. If Im(σ) ≠ 0 then the disturbance grows in an oscillatory fashion, as shown in Figure 23 B, and is called oscillatory instability. The case in which Re(σ) = 0 is the dividing point between stability and instability. If Re(σ) = 0 and Im(σ) = 0, the point governing the onset of stationary instability, then σ = 0. However, this means that zero is an eigenvalue of the Jacobian, and the determinant of the Jacobian is zero. Thus, the points at which the determinant of the Jacobian is zero (for limit points

Figure 23. Stationary and oscillatory instability
A) Stationary instability; B) Oscillatory instability

and bifurcation-limit points) are the points governing the onset of stationary instability. When Re(σ) = 0 but Im(σ) ≠ 0, which is the onset of oscillatory instability, an even number of eigenvalues pass from the left-hand complex plane to the right-hand complex plane. The eigenvalues are complex conjugates of each other (a result of the original equations being real, with no complex numbers), and this is called a Hopf bifurcation. Numerical methods to study Hopf bifurcation are very computationally intensive and are not discussed here [65].

To return to the problem of solving for the steady-state solution: near the limit point or bifurcation-limit point two solutions exist that are very close to each other. In solving sets of equations with thousands of unknowns, the difficulties in convergence are obvious. For some dependent variables the approximation may be converging to one solution, whereas for another set of dependent variables it may be converging to the other solution; or the two solutions may all be mixed up. Thus, solution is difficult near a bifurcation point, and special methods are required. These methods are discussed in [66].

The first approach is to use natural continuation (also known as Euler–Newton continuation). Suppose a solution exists for some parameter λ. Call the value of the parameter λ0 and the corresponding solution u0. Then

F(u0, λ0) = 0

Also, compute u_λ as the solution to

F_u^ss u_λ = −F_λ

at this point [λ0, u0]. Then predict the starting guess for another λ using

u^0 = u0 + u_λ (λ − λ0)

and apply Newton–Raphson with this initial guess and the new value of λ. This will be a much better guess of the new solution than just u0 by itself.

Even this method has difficulties, however. Near a limit point the determinant of the Jacobian may be zero and the Newton method may fail. Perhaps no solutions exist at all for the chosen parameter λ near a limit point. Also, the ability to switch from one solution path to another at a bifurcation-limit point is necessary. Thus, other methods are needed as well: arc-length continuation and pseudo-arc-length continuation [66]. These are described in Chapter 1.

6.8. Sensitivity Analysis

Often, when solving differential equations, the solution as well as the sensitivity of the solution to the value of a parameter must be known. Such information is useful in doing parameter estimation (to find the best set of parameters for a model) and in deciding whether a parameter needs to be measured accurately. The differential equation for y(t, α), where α is a parameter, is

dy/dt = f(y, α),   y(0) = y0

If this equation is differentiated with respect to α, then because y is a function of t and α,

(d/dα)(dy/dt) = (∂f/∂y)(∂y/∂α) + ∂f/∂α

Exchanging the order of differentiation in the first term leads to the ordinary differential equation

(d/dt)(∂y/∂α) = (∂f/∂y)(∂y/∂α) + ∂f/∂α

The initial conditions on ∂y/∂α are obtained by differentiating the initial conditions

(∂/∂α)[ y(0, α) = y0 ],  or  (∂y/∂α)(0) = 0

Next, let

y1 = y,  y2 = ∂y/∂α

and solve the set of ordinary differential equations

dy1/dt = f(y1, α),   y1(0) = y0

dy2/dt = (∂f/∂y)(y1, α) y2 + ∂f/∂α,   y2(0) = 0

Thus, the solution y(t, α) and the derivative with respect to α are obtained. To project the impact of α, the solution for α = α1 can be used:

y(t, α) = y1(t, α1) + (∂y/∂α)(t, α1)(α − α1) + · · · = y1(t, α1) + y2(t, α1)(α − α1) + · · ·

This is a convenient way to determine the sensitivity of the solution to parameters in the problem.

6.9. Molecular Dynamics (→ Molecular Dynamics Simulations)

Special integration methods have been developed for molecular dynamics calculations due to the structure of the equations. A very large number of equations are to be integrated, with the following form based on molecular interactions between molecules:

m_i (d²r_i/dt²) = F_i({r}),   F_i({r}) = −∇V

where m_i is the mass of the i-th particle, r_i is the position of the i-th particle, F_i is the force acting on the i-th particle, and V is the potential energy that depends upon the location of all the particles (but not their velocities). Since the major part of the calculation is in the evaluation of the forces, or potentials, a method must be used that minimizes the number of times the forces are calculated to move from one time to another time. Rewrite this equation in the form of an acceleration:

d²r_i/dt² = (1/m_i) F_i({r}) ≡ a_i

In the Verlet method, this equation is written using central finite differences (Eq. 12). Note that the accelerations do not depend upon the velocities.

r_i(t + Δt) = 2 r_i(t) − r_i(t − Δt) + a_i(t) Δt²

The calculations are straightforward, and no explicit velocity is needed. The storage requirement is modest, and the precision is modest (it is a second-order method). Note that one must start the calculation with values of {r} at times t and t − Δt.

In the velocity Verlet method, an equation is written for the velocity, too:

dv_i/dt = a_i

The trapezoid rule (see page 18) is applied to obtain

v_i(t + Δt) = v_i(t) + (1/2)[ a_i(t) + a_i(t + Δt) ] Δt

The position of the particles is expanded in a Taylor series:

r_i(t + Δt) = r_i(t) + v_i Δt + (1/2) a_i(t) Δt²

Beginning with values of {r} and {v} at time zero, one calculates the new positions and then the new velocities. This method is second order in Δt, too. For additional details, see [68 – 72].
7. Ordinary Differential Equations as Boundary Value Problems

Diffusion problems in one dimension lead to boundary value problems. The boundary conditions are applied at two different spatial locations: at one side the concentration may be fixed and at the other side the flux may be fixed. Because the conditions are specified at two different locations the problems are not initial value in character. To begin at one position and integrate directly is impossible, because at least one of the conditions is specified somewhere else and not enough conditions are available to begin the
62 Mathematics in Chemical Engineering

calculation. Thus, methods have been developed especially for boundary value problems. Examples include heat and mass transfer in a slab, reaction–diffusion problems in a porous catalyst, reactors with axial dispersion, packed beds, and countercurrent heat transfer.

7.1. Solution by Quadrature

When only one equation exists, even if it is nonlinear, it may possibly be solved by quadrature. For

dy/dt = f(y),   y(0) = y0

the problem can be separated

dy/f(y) = dt

and integrated

∫_{y0}^{y} dy′/f(y′) = ∫_0^t dt′ = t

If the quadrature can be performed analytically, the exact solution has been found.

As an example, consider the flow of a non-Newtonian fluid in a pipe, as illustrated in Figure 24. The governing differential equation is [73]

(1/r) d(rτ)/dr = −Δp/L

where r is the radial position from the center of the pipe, τ is the shear stress, Δp is the pressure drop along the pipe, and L is the length over which the pressure drop occurs. The variables are separated once

d(rτ) = −(Δp/L) r dr

and then integrated to give

rτ = −(Δp/L) r²/2 + c1

Proceeding further requires choosing a constitutive relation relating the shear stress and the velocity gradient as well as a condition specifying the constant. For a Newtonian fluid

τ = η dv/dr

where v is the velocity and η the viscosity. Then the variables can be separated again and the result integrated to give

v = −(Δp/L) r²/(4η) + c1 ln r + c2

Now the two unknowns must be specified from the boundary conditions. This problem is a two-point boundary value problem because one of the conditions is usually specified at r = 0 and the other at r = R, the tube radius. However, the technique of separating variables and integrating works quite well.

Figure 24. Flow in pipe

When the fluid is non-Newtonian, it may not be possible to do the second step analytically. For example, for the Bird – Carreau fluid [74, p. 171], stress and velocity are related by

τ = η0 (dv/dr) / [1 + (λ dv/dr)²]^{(1−n)/2}

where η0 is the viscosity at zero velocity gradient and λ is the time constant. Putting this value into the equation for stress as a function of r gives

η0 (dv/dr) / [1 + (λ dv/dr)²]^{(1−n)/2} = −(Δp/L)(r/2) + c1/r

This equation cannot be solved analytically for dv/dr, except for special values of n. For problems such as this, numerical methods must be used.

7.2. Initial Value Methods

An initial value method is one that utilizes the techniques for initial value problems but allows for an iterative calculation to satisfy all the boundary conditions. Suppose the nonlinear boundary value problem
d²y/dx² = f(x, y, dy/dx)

with the boundary conditions

a0 y(0) − a1 (dy/dx)(0) = α,   a_i ≥ 0
b0 y(1) + b1 (dy/dx)(1) = β,   b_i ≥ 0

Convert this second-order equation into two first-order equations along with the boundary conditions written to include a parameter s:

du/dx = v
dv/dx = f(x, u, v)
u(0) = a1 s − c1 α
v(0) = a0 s − c0 α

The parameters c0 and c1 are specified by the analyst such that

a1 c0 − a0 c1 = 1

This ensures that the first boundary condition is satisfied for any value of parameter s. If the proper value for s is known, u(0) and u′(0) can be evaluated and the equation integrated as an initial value problem. The parameter s should be chosen iteratively so that the last boundary condition is satisfied.

The model for a chemical reactor with axial diffusion is

(1/Pe) d²c/dz² − dc/dz = Da R(c)
−(1/Pe) (dc/dz)(0) + c(0) = c_in,   (dc/dz)(1) = 0

where Pe is the Peclet number and Da the Damköhler number. The boundary conditions are due to Danckwerts [75] and to Wehner and Wilhelm [76]. This problem can be treated by using initial value methods also, but the method is highly sensitive to the choice of the parameter s, as outlined above. Starting at z = 0 and making small changes in s will cause large changes in the solution at the exit, and the boundary condition at the exit may be impossible to satisfy. By starting at z = 1, however, and integrating backwards, the process works and an iterative scheme converges in many cases [77]. However, if the problem is extremely nonlinear the iterations may not converge. In such cases, the methods for boundary value problems described below must be used.

Packages to solve boundary value problems are available on the Internet. On the NIST web page, http://gams.nist.gov/, choose "problem decision tree", then "differential and integral equations", then "ordinary differential equations", "multipoint boundary value problems". On the Netlib web site, http://www.netlib.org/, search on "boundary value problem". Any spreadsheet that has an iteration capability can be used with the finite difference method. Some packages for partial differential equations also have a capability for solving one-dimensional boundary value problems [e.g., Comsol Multiphysics (formerly FEMLAB)].

7.3. Finite Difference Method

To apply the finite difference method, we first spread grid points through the domain. Figure 25 shows a uniform mesh of n points (nonuniform meshes are also possible). The unknown, here c(x), at a grid point x_i is assigned the symbol c_i = c(x_i). The finite difference method can be derived easily by using a Taylor expansion of the solution about this point:

c_{i+1} = c_i + (dc/dx)|_i Δx + (d²c/dx²)|_i Δx²/2 + …
c_{i−1} = c_i − (dc/dx)|_i Δx + (d²c/dx²)|_i Δx²/2 − …   (8)

These formulas can be rearranged and divided by Δx to give

(dc/dx)|_i = (c_{i+1} − c_i)/Δx − (d²c/dx²)|_i Δx/2 + …   (9)

(dc/dx)|_i = (c_i − c_{i−1})/Δx + (d²c/dx²)|_i Δx/2 + …   (10)

which are representations of the first derivative. Alternatively the two equations can be subtracted from each other, rearranged, and divided by Δx to give

(dc/dx)|_i = (c_{i+1} − c_{i−1})/(2Δx) − (d³c/dx³)|_i Δx²/3! + …   (11)

If the terms multiplied by Δx or Δx² are neglected, three representations of the first derivative are possible. In comparison with the Taylor series, the truncation error in the first two expressions is proportional to Δx, and the methods are said to be first order. The truncation error in the last expression is proportional to Δx², and the method is said to be second order. Usually, the
last equation is chosen to ensure the best accuracy.

Figure 25. Finite difference mesh; Δx uniform

The finite difference representation of the second derivative can be obtained by adding the two expressions in Equation 8. Rearrangement and division by Δx² give

(d²c/dx²)|_i = (c_{i+1} − 2c_i + c_{i−1})/Δx² − (d⁴c/dx⁴)|_i 2Δx²/4! + …   (12)

The truncation error is proportional to Δx².

To see how to solve a differential equation, consider the equation for convection, diffusion, and reaction in a tubular reactor:

(1/Pe) d²c/dx² − dc/dx = Da R(c)

To evaluate the differential equation at the i-th grid point, the finite difference representations of the first and second derivatives can be used to give

(1/Pe) (c_{i+1} − 2c_i + c_{i−1})/Δx² − (c_{i+1} − c_{i−1})/(2Δx) = Da R(c_i)   (13)

This equation is written for i = 2 to n − 1 (i.e., the internal points). The equations would then be coupled but would involve the values of c_1 and c_n, as well. These are determined from the boundary conditions.

If the boundary condition involves a derivative, the finite difference representation of it must be carefully selected; here, three possibilities can be written. Consider a derivative needed at the point i = 1. First, Equation 9 could be used to write

(dc/dx)|_1 = (c_2 − c_1)/Δx   (14)

Then a second-order expression is obtained that is one-sided. The Taylor series for the point c_{i+2} is written:

c_{i+2} = c_i + (dc/dx)|_i 2Δx + (d²c/dx²)|_i 4Δx²/2! + (d³c/dx³)|_i 8Δx³/3! + …

Four times Equation 8 minus this equation, with rearrangement, gives

(dc/dx)|_i = (−3c_i + 4c_{i+1} − c_{i+2})/(2Δx) + O(Δx²)

Thus, for the first derivative at point i = 1

(dc/dx)|_1 = (−3c_1 + 4c_2 − c_3)/(2Δx)   (15)

This one-sided difference expression uses only the points already introduced into the domain.

The third alternative is to add a false point, outside the domain, as c_0 = c(x = −Δx). Then the centered first derivative, Equation 11, can be used:

(dc/dx)|_1 = (c_2 − c_0)/(2Δx)

Because this equation introduces a new variable, another equation is required. This is obtained by also writing the differential equation (Eq. 13) for i = 1.

The same approach can be taken at the other end. As a boundary condition, any of three choices can be used:

(dc/dx)|_n = (c_n − c_{n−1})/Δx
(dc/dx)|_n = (c_{n−2} − 4c_{n−1} + 3c_n)/(2Δx)
(dc/dx)|_n = (c_{n+1} − c_{n−1})/(2Δx)

The last two are of order Δx², and the last one would require writing the differential equation (Eq. 13) for i = n, too.

Generally, the first-order expression for the boundary condition is not used because the error in the solution would decrease only as Δx, and the higher truncation error of the differential equation (Δx²) would be lost. For this problem the boundary conditions are

−(1/Pe) (dc/dx)(0) + c(0) = c_in
(dc/dx)(1) = 0

Thus, the three formulations would give: first order in Δx,

−(1/Pe) (c_2 − c_1)/Δx + c_1 = c_in
(c_n − c_{n−1})/Δx = 0

plus Equation 13 at points i = 2 through n − 1; second order in Δx, by using a three-point one-sided derivative,
−(1/Pe) (−3c_1 + 4c_2 − c_3)/(2Δx) + c_1 = c_in
(c_{n−2} − 4c_{n−1} + 3c_n)/(2Δx) = 0

plus Equation 13 at points i = 2 through n − 1; and second order in Δx, by using a false boundary point,

−(1/Pe) (c_2 − c_0)/(2Δx) + c_1 = c_in
(c_{n+1} − c_{n−1})/(2Δx) = 0

plus Equation 13 at points i = 1 through n.

The sets of equations can be solved by using the Newton – Raphson method, as outlined in Section 1.2.

Frequently, the transport coefficients (e.g., diffusion coefficient D or thermal conductivity) depend on the dependent variable (concentration or temperature, respectively). Then the differential equation might look like

d/dx [D(c) dc/dx] = 0

This could be written as

dJ/dx = 0   (16)

in terms of the mass flux J, where the mass flux is given by

J = −D(c) dc/dx

Because the coefficient depends on c the equations are more complicated. A finite difference method can be written in terms of the fluxes at the midpoints, i + 1/2. Thus,

(J_{i+1/2} − J_{i−1/2})/Δx = 0

Then the constitutive equation for the mass flux can be written as

J_{i+1/2} = −D(c_{i+1/2}) (c_{i+1} − c_i)/Δx

If these are combined,

[D(c_{i+1/2})(c_{i+1} − c_i) − D(c_{i−1/2})(c_i − c_{i−1})]/Δx² = 0

This represents a set of nonlinear algebraic equations that can be solved with the Newton – Raphson method. However, in this case a viable iterative strategy is to evaluate the transport coefficients at the last value and then solve

[D(c^k_{i+1/2})(c^{k+1}_{i+1} − c^{k+1}_i) − D(c^k_{i−1/2})(c^{k+1}_i − c^{k+1}_{i−1})]/Δx² = 0

The advantage of this approach is that it is easier to program than a full Newton – Raphson method. If the transport coefficients do not vary radically, the method converges. If the method does not converge, use of the full Newton – Raphson method may be necessary.

Three ways are commonly used to evaluate the transport coefficient at the midpoint. The first one employs the transport coefficient evaluated at the average value of the solutions on either side:

D(c_{i+1/2}) ≈ D((c_{i+1} + c_i)/2)

The second approach uses the average of the transport coefficients on either side:

D(c_{i+1/2}) ≈ [D(c_{i+1}) + D(c_i)]/2   (17)

The truncation error of these approaches is also Δx² [78, Chap. 14], [3, p. 215]. The third approach employs an "upstream" transport coefficient:

D(c_{i+1/2}) ≈ D(c_{i+1})  when D(c_{i+1}) > D(c_i)
D(c_{i+1/2}) ≈ D(c_i)    when D(c_{i+1}) < D(c_i)

This approach is used when the transport coefficients vary over several orders of magnitude, and the upstream direction is defined as the one in which the transport coefficient is larger. The truncation error of this approach is only Δx [78, Chap. 14], [3, p. 253], but this approach is useful if the numerical solutions show unrealistic oscillations [3, 78].

Rigorous error bounds for linear ordinary differential equations solved with the finite difference method are discussed by Isaacson and Keller [79, p. 431].

7.4. Orthogonal Collocation

The orthogonal collocation method has found widespread application in chemical engineering, particularly for chemical reaction engineering. In the collocation method [3], the dependent variable is expanded in a series:

y(x) = Σ_{i=1}^{N+2} a_i y_i(x)   (18)
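The expansion idea of Equation 18 can be sketched numerically on a small assumed test problem (y″ = y, y(0) = 1, y(1) = e, with a monomial basis and arbitrarily chosen interior points; these choices are for illustration only, not from the text):

```python
import numpy as np

# Collocation sketch on an assumed test problem:
# y'' = y, y(0) = 1, y(1) = e, with the series y(x) = sum_i a_i x^(i-1).
N = 4                                   # interior collocation points
pts = np.linspace(0.0, 1.0, N + 2)      # 2 boundary points + N interior
n = N + 2                               # number of coefficients a_i
p = np.arange(n)                        # monomial exponents

A = np.zeros((n, n))
b = np.zeros(n)
A[0] = pts[0] ** p                      # boundary condition y(0) = 1
b[0] = 1.0
A[-1] = pts[-1] ** p                    # boundary condition y(1) = e
b[-1] = np.e
for row, xj in enumerate(pts[1:-1], start=1):
    d2 = p * (p - 1) * xj ** np.clip(p - 2, 0, None)   # y'' of each basis term
    A[row] = d2 - xj ** p               # residual y'' - y = 0 at xj
a = np.linalg.solve(A, b)

yh = np.polyval(a[::-1], 0.5)           # evaluate the series at x = 0.5
err = abs(yh - np.exp(0.5))             # exact solution is e^x
print(err)                              # small residual error
```

With N + 2 conditions (N residual equations plus two boundary conditions), the N + 2 coefficients are determined, exactly as counted in the text below Equation 18.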
Suppose the differential equation is

N[y] = 0

Then the expansion is put into the differential equation to form the residual:

Residual = N[Σ_{i=1}^{N+2} a_i y_i(x)]

In the collocation method, the residual is set to zero at a set of points called collocation points:

N[Σ_{i=1}^{N+2} a_i y_i(x_j)] = 0,   j = 2, …, N+1

This provides N equations; two more equations come from the boundary conditions, giving N + 2 equations for N + 2 unknowns. This procedure is especially useful when the expansion is in a series of orthogonal polynomials, and when the collocation points are the roots to an orthogonal polynomial, as first used by Lanczos [80, 81]. A major improvement was the proposal by Villadsen and Stewart [82] that the entire solution process be done in terms of the solution at the collocation points rather than the coefficients in the expansion. Thus, Equation 18 would be evaluated at the collocation points

y(x_j) = Σ_{i=1}^{N+2} a_i y_i(x_j),   j = 1, …, N+2

and solved for the coefficients in terms of the solution at the collocation points:

a_i = Σ_{j=1}^{N+2} [y_i(x_j)]⁻¹ y(x_j),   i = 1, …, N+2

Furthermore, if Equation 18 is differentiated once and evaluated at all collocation points, the first derivative can be written in terms of the values at the collocation points:

(dy/dx)(x_j) = Σ_{i=1}^{N+2} a_i (dy_i/dx)(x_j),   j = 1, …, N+2

This can be expressed as

(dy/dx)(x_j) = Σ_{i,k=1}^{N+2} [y_i(x_k)]⁻¹ y(x_k) (dy_i/dx)(x_j),   j = 1, …, N+2

or shortened to

(dy/dx)(x_j) = Σ_{k=1}^{N+2} A_{jk} y(x_k),   A_{jk} = Σ_{i=1}^{N+2} [y_i(x_k)]⁻¹ (dy_i/dx)(x_j)

Similar steps can be applied to the second derivative to obtain

(d²y/dx²)(x_j) = Σ_{k=1}^{N+2} B_{jk} y(x_k),   B_{jk} = Σ_{i=1}^{N+2} [y_i(x_k)]⁻¹ (d²y_i/dx²)(x_j)

This method is next applied to the differential equation for reaction in a tubular reactor, after the equation has been made nondimensional so that the dimensionless length is 1.0:

(1/Pe) d²c/dx² − dc/dx = Da R(c),
(dc/dx)(0) = Pe [c(0) − c_in],   (dc/dx)(1) = 0   (19)

The differential equation at the collocation points is

(1/Pe) Σ_{k=1}^{N+2} B_{jk} c(x_k) − Σ_{k=1}^{N+2} A_{jk} c(x_k) = Da R(c_j)   (20)

and the two boundary conditions are

Σ_{k=1}^{N+2} A_{1k} c(x_k) = Pe (c_1 − c_in),   Σ_{k=1}^{N+2} A_{N+2,k} c(x_k) = 0   (21)

Note that 1 is the first collocation point (x = 0) and N + 2 is the last one (x = 1). To apply the method, the matrices A_{ij} and B_{ij} must be found and the set of algebraic equations solved, perhaps with the Newton – Raphson method. If orthogonal polynomials are used and the collocation points are the roots to one of the orthogonal polynomials, the orthogonal collocation method results.

In the orthogonal collocation method the solution is expanded in a series involving orthogonal polynomials, where the polynomials P_{i−1}(x) are defined in Section 2.2:

y = a + bx + x(1 − x) Σ_{i=1}^{N} a_i P_{i−1}(x) = Σ_{i=1}^{N+2} b_i P_{i−1}(x)   (22)

which is also

y = Σ_{i=1}^{N+2} d_i x^{i−1}
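Given any set of points, the derivative matrices A and B implied by this monomial form can be generated numerically; a sketch (the points here are arbitrary, chosen only for illustration):

```python
import numpy as np

# Derivative matrices A, B for the monomial form y = sum_i d_i x^(i-1),
# generated at an arbitrary (hypothetical) set of points.
x = np.array([0.0, 0.25, 0.75, 1.0])
n = len(x)
p = np.arange(n)                                   # exponents 0 .. n-1
Q = x[:, None] ** p                                # value of each basis term
C = p * x[:, None] ** np.clip(p - 1, 0, None)      # first derivative of each term
D = p * (p - 1) * x[:, None] ** np.clip(p - 2, 0, None)  # second derivative
A = C @ np.linalg.inv(Q)                           # (dy/dx)(x_j)   = (A y)_j
B = D @ np.linalg.inv(Q)                           # (d2y/dx2)(x_j) = (B y)_j

y = x ** 3                                         # test on y = x^3
print(A @ y, B @ y)                                # 3x^2 and 6x at the points
```

Because a cubic lies in the span of the four basis terms, A and B differentiate it exactly (to roundoff), which is a convenient self-check on the construction.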
Figure 26. Orthogonal collocation points

The collocation points are shown in Figure 26. There are N interior points plus one at each end, and the domain is always transformed to lie on 0 to 1. To define the matrices A_{ij} and B_{ij} this expression is evaluated at the collocation points; it is also differentiated and the result is evaluated at the collocation points:

y(x_j) = Σ_{i=1}^{N+2} d_i x_j^{i−1}
(dy/dx)(x_j) = Σ_{i=1}^{N+2} d_i (i−1) x_j^{i−2}
(d²y/dx²)(x_j) = Σ_{i=1}^{N+2} d_i (i−1)(i−2) x_j^{i−3}

These formulas are put in matrix notation, where Q, C, and D are N + 2 by N + 2 matrices:

y = Q d,   dy/dx = C d,   d²y/dx² = D d
Q_{ji} = x_j^{i−1},   C_{ji} = (i−1) x_j^{i−2},   D_{ji} = (i−1)(i−2) x_j^{i−3}

In solving the first equation for d, the first and second derivatives can be written as

d = Q⁻¹ y,   dy/dx = C Q⁻¹ y = A y,   d²y/dx² = D Q⁻¹ y = B y   (23)

Thus the derivative at any collocation point can be determined in terms of the solution at the collocation points. The same property is enjoyed by the finite difference method (and the finite element method described below), and this property accounts for some of the popularity of the orthogonal collocation method. In applying the method to Equation 19, the same result is obtained: Equations 20 and 21, with the matrices defined in Equation 23. To find the solution at a point that is not a collocation point, Equation 22 is used; once the solution is known at all collocation points, d can be found; and once d is known, the solution for any x can be found.

To use the orthogonal collocation method, the matrices are required. They can be calculated as shown above for small N (N < 8) and by using more rigorous techniques for higher N (see Chap. 2). However, having the matrices listed explicitly for N = 1 and 2 is useful; this is shown in Table 7.

Table 7. Matrices for orthogonal collocation

For some reaction – diffusion problems, the solution can be an even function of x. For example, for the problem

d²c/dx² = k c,   (dc/dx)(0) = 0,   c(1) = 1   (24)

the solution can be proved to involve only even powers of x. In such cases, an orthogonal collocation method, which takes this feature into
account, is convenient. This can easily be done by using expansions that only involve even powers of x. Thus, the expansion

y(x²) = y(1) + (1 − x²) Σ_{i=1}^{N} a_i P_{i−1}(x²)

is equivalent to

y(x²) = Σ_{i=1}^{N+1} b_i P_{i−1}(x²) = Σ_{i=1}^{N+1} d_i x^{2i−2}

The polynomials are defined to be orthogonal with the weighting function W(x²):

∫_0^1 W(x²) P_k(x²) P_m(x²) x^{a−1} dx = 0,   k ≤ m − 1   (25)

where the power on x^{a−1} defines the geometry as planar or Cartesian (a = 1), cylindrical (a = 2), or spherical (a = 3). An analogous development is used to obtain the (N + 1)×(N + 1) matrices:

y(x_j) = Σ_{i=1}^{N+1} d_i x_j^{2i−2}
(dy/dx)(x_j) = Σ_{i=1}^{N+1} d_i (2i−2) x_j^{2i−3}
∇²y(x_j) = Σ_{i=1}^{N+1} d_i ∇²(x^{2i−2})|_{x_j}

y = Q d,   dy/dx = C d,   ∇²y = D d
Q_{ji} = x_j^{2i−2},   C_{ji} = (2i−2) x_j^{2i−3},   D_{ji} = ∇²(x^{2i−2})|_{x_j}

d = Q⁻¹ y,   dy/dx = C Q⁻¹ y = A y,   ∇²y = D Q⁻¹ y = B y

In addition, the quadrature formula is

W Q = f,   W = f Q⁻¹

where

∫_0^1 x^{2i−2} x^{a−1} dx = Σ_{j=1}^{N+1} W_j x_j^{2i−2} = 1/(2i − 2 + a) ≡ f_i

As an example, for the problem

(1/x^{a−1}) d/dx (x^{a−1} dc/dx) = φ² R(c)
(dc/dx)(0) = 0,   c(1) = 1

orthogonal collocation is applied at the interior points

Σ_{i=1}^{N+1} B_{ji} c_i = φ² R(c_j),   j = 1, …, N

and the boundary condition solved for is

c_{N+1} = 1

The boundary condition at x = 0 is satisfied automatically by the trial function. After the solution has been obtained, the effectiveness factor η is obtained by calculating

η = ∫_0^1 R[c(x)] x^{a−1} dx / ∫_0^1 R[c(1)] x^{a−1} dx ≈ Σ_{j=1}^{N+1} W_j R(c_j) / Σ_{j=1}^{N+1} W_j R(1)

Note that the effectiveness factor is the average reaction rate divided by the reaction rate evaluated at the external conditions. Error bounds have been given for linear problems [83, p. 356]. For planar geometry the error is

Error in η = φ^{2(2N+1)} / [(2N + 1)! (2N + 2)!]

This method is very accurate for small N (and small φ²); note that for finite difference methods the error goes as 1/N², which does not decrease as rapidly with N. If the solution is desired at the center (a frequent situation because the center concentration can be the most extreme one), it is given by

c(0) = d_1 = Σ_{i=1}^{N+1} (Q⁻¹)_{1i} y_i

The collocation points are listed in Table 8. For small N the results are usually more accurate when the weighting function in Equation 25 is 1 − x². The matrices for N = 1 and N = 2 are given in Table 9 for the three geometries. Computer programs to generate matrices and a program to solve reaction – diffusion problems, OCRXN, are available [3, p. 325, p. 331].

Orthogonal collocation can be applied to distillation problems. Stewart et al. [84, 85] developed a method using Hahn polynomials that retains the discrete nature of a plate-to-plate distillation column. Other work treats problems with multiple liquid phases [86]. Some of the applications to chemical engineering can be found in [87 – 90].
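As a numerical check of this recipe, consider the linear case R(c) = c in planar geometry, so c″ = φ²c with exact solution cosh(φx)/cosh φ and η = tanh(φ)/φ (this test case and the parameter value are assumed here for illustration), using the N = 2 planar points from Table 8:

```python
import numpy as np

# Orthogonal collocation with symmetric polynomials, planar geometry (a = 1),
# linear test case R(c) = c:  c'' = phi^2 c, c'(0) = 0, c(1) = 1.
phi = 1.0
x = np.array([0.3399810436, 0.8611363116, 1.0])   # N = 2 points + boundary x = 1
n = len(x)                                        # N + 1
i = np.arange(1, n + 1)                           # basis terms x^(2i-2)
Q = x[:, None] ** (2 * i - 2)
D = (2 * i - 2) * (2 * i - 3) * x[:, None] ** np.clip(2 * i - 4, 0, None)
B = D @ np.linalg.inv(Q)                          # Laplacian matrix (planar)

M = B - phi ** 2 * np.eye(n)                      # collocate B c = phi^2 c ...
M[-1] = 0.0
M[-1, -1] = 1.0                                   # ... with c(1) = 1
rhs = np.zeros(n)
rhs[-1] = 1.0
c = np.linalg.solve(M, rhs)

f = 1.0 / (2 * i - 2 + 1)                         # moments f_i, a = 1
W = f @ np.linalg.inv(Q)                          # quadrature weights, W Q = f
eta = (W @ c) / W.sum()                           # effectiveness factor, R(c) = c
print(c, eta)
```

Even with only two interior points, both the concentration profile and η agree with the exact values to roughly four digits, consistent with the rapid convergence claimed above.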
Table 8. Collocation points for orthogonal collocation with symmetric polynomials and W = 1

N    Planar          Cylindrical     Spherical
1 0.5773502692 0.7071067812 0.7745966692
2 0.3399810436 0.4597008434 0.5384693101
0.8611363116 0.8880738340 0.9061793459
3 0.2386191861 0.3357106870 0.4058451514
0.6612093865 0.7071067812 0.7415311856
0.9324695142 0.9419651451 0.9491079123
4 0.1834346425 0.2634992300 0.3242534234
0.5255324099 0.5744645143 0.6133714327
0.7966664774 0.8185294874 0.8360311073
0.9602898565 0.9646596062 0.9681602395
5 0.1488743390 0.2165873427 0.2695431560
0.4333953941 0.4803804169 0.5190961292
0.6794095683 0.7071067812 0.7301520056
0.8650633667 0.8770602346 0.8870625998
0.9739065285 0.9762632447 0.9782286581
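For the planar column with W = 1, the tabulated interior points coincide with the positive nodes of the 2N-point Gauss – Legendre rule (a standard identification, stated here as a check rather than taken from the text), so they can be regenerated numerically:

```python
import numpy as np

# Regenerate the planar (a = 1, W = 1) entries of Table 8 from the
# positive Gauss-Legendre nodes of order 2N.
table = {1: [0.5773502692],
         2: [0.3399810436, 0.8611363116],
         3: [0.2386191861, 0.6612093865, 0.9324695142]}
errs = []
for N, tabulated in table.items():
    nodes, _ = np.polynomial.legendre.leggauss(2 * N)
    positive = np.sort(nodes[nodes > 0])
    errs.append(np.max(np.abs(positive - np.array(tabulated))))
print(max(errs))   # agreement to the 10 digits tabulated
```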

7.5. Orthogonal Collocation on Finite Elements

In the method of orthogonal collocation on finite elements, the domain is first divided into elements, and then within each element orthogonal collocation is applied. Figure 27 shows the domain being divided into NE elements, with NCOL interior collocation points within each element, and NP = NCOL + 2 total points per element, giving NT = NE * (NCOL + 1) + 1 total number of points. Within each element a local coordinate is defined:

u = (x − x_(k))/Δx_k,   Δx_k = x_(k+1) − x_(k)

The reaction – diffusion equation is written as

(1/x^{a−1}) d/dx (x^{a−1} dc/dx) = d²c/dx² + [(a−1)/x] dc/dx = φ² R(c)

and transformed to give

(1/Δx_k²) d²c/du² + [(a−1)/(x_(k) + uΔx_k)] (1/Δx_k) dc/du = φ² R(c)

The boundary conditions are typically

(dc/dx)(0) = 0,   (dc/dx)(1) = −Bi_m [c(1) − c_B]

where Bi_m is the Biot number for mass transfer. These become

(1/Δx_1) (dc/du)(u = 0) = 0

in the first element, and

(1/Δx_NE) (dc/du)(u = 1) = −Bi_m [c(u = 1) − c_B]

in the last element. The orthogonal collocation method is applied at each interior collocation point:

(1/Δx_k²) Σ_{J=1}^{NP} B_{IJ} c_J + [(a−1)/(x_(k) + u_I Δx_k)] (1/Δx_k) Σ_{J=1}^{NP} A_{IJ} c_J = φ² R(c_I),   I = 2, …, NP−1

The local points I = 2, …, NP − 1 represent the interior collocation points. Continuity of the function and the first derivative between elements is achieved by taking

(1/Δx_{k−1}) Σ_{J=1}^{NP} A_{NP,J} c_J |_{element k−1} = (1/Δx_k) Σ_{J=1}^{NP} A_{1,J} c_J |_{element k}

at the points between elements. Naturally, the computer code has only one symbol for the solution at a point shared between elements, but the derivative condition must be imposed. Finally, the boundary conditions at x = 0 and x = 1 are applied:

(1/Δx_1) Σ_{J=1}^{NP} A_{1,J} c_J = 0

in the first element, and

(1/Δx_NE) Σ_{J=1}^{NP} A_{NP,J} c_J = −Bi_m [c_NP − c_B]

in the last element. These equations can be assembled into an overall matrix problem

AA c = f
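The point count NT = NE * (NCOL + 1) + 1 reflects a local-to-global numbering in which the last point of each element coincides with the first point of the next; a minimal sketch of that bookkeeping (the 0-based index convention is an assumption for illustration):

```python
# Local-to-global numbering for orthogonal collocation on finite elements:
# element e (0-based) with local point I (0-based, NP points per element)
# maps to global index e*(NCOL + 1) + I, so element endpoints are shared.
NE, NCOL = 3, 2
NP = NCOL + 2
NT = NE * (NCOL + 1) + 1

def global_index(e, I):
    return e * (NCOL + 1) + I

# Last local point of element e coincides with first local point of e + 1,
# which is why each extra element adds only NCOL + 1 new unknowns.
for e in range(NE - 1):
    assert global_index(e, NP - 1) == global_index(e + 1, 0)
assert global_index(NE - 1, NP - 1) == NT - 1
print(NT)   # 10 total points for NE = 3, NCOL = 2
```

The assembled matrix AA then has block structure, with blocks overlapping by one row and column at each shared point.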

Table 9. Matrices for orthogonal collocation with symmetric polynomials and W = 1 − x²

The form of these equations is special and is discussed by Finlayson [3, p. 116], who also gives the computer code to solve linear equations arising in such problems. Reaction – diffusion problems are solved by the program OCFERXN [3, p. 337]. See also the program COLSYS described below.

The error bounds of DeBoor [91] give the following results for second-order problems solved with cubic trial functions on finite elements with

Figure 27. Grid for orthogonal collocation on finite elements

continuous first derivatives. The error at all positions is bounded by

‖(dⁱ/dxⁱ)(y − y_exact)‖ ≤ constant |Δx|²

The error at the collocation points is more accurate, giving what is known as superconvergence:

|(dⁱ/dxⁱ)(y − y_exact)| at collocation points ≤ constant |Δx|⁴

7.6. Galerkin Finite Element Method

In the finite element method the domain is divided into elements and an expansion is made for the solution on each finite element. In the Galerkin finite element method an additional idea is introduced: the Galerkin method is used to solve the equation. The Galerkin method is explained before the finite element basis set is introduced.

To solve the problem

(1/x^{a−1}) d/dx (x^{a−1} dc/dx) = φ² R(c)
(dc/dx)(0) = 0,   (dc/dx)(1) = −Bi_m [c(1) − c_B]

the unknown solution is expanded in a series of known functions {b_i(x)}, with unknown coefficients {a_i}:

c(x) = Σ_{i=1}^{NT} a_i b_i(x)

The series (the trial solution) is inserted into the differential equation to obtain the residual:

Residual = Σ_{i=1}^{NT} a_i (1/x^{a−1}) d/dx (x^{a−1} db_i/dx) − φ² R(Σ_{i=1}^{NT} a_i b_i(x))

The residual is then made orthogonal to the set of basis functions:

∫_0^1 b_j(x) [Σ_{i=1}^{NT} a_i (1/x^{a−1}) d/dx (x^{a−1} db_i/dx) − φ² R(Σ_{i=1}^{NT} a_i b_i(x))] x^{a−1} dx = 0,   j = 1, …, NT   (26)

This process makes the method a Galerkin method. The basis for the orthogonality condition is that a function that is made orthogonal to each member of a complete set is then zero. The residual is being made orthogonal, and if the basis functions are complete, and an infinite number of them are used, then the residual is zero. Once the residual is zero the problem is solved.

It is necessary also to allow for the boundary conditions. This is done by integrating the first term of Equation 26 by parts and then inserting the boundary conditions:

∫_0^1 b_j(x) (1/x^{a−1}) d/dx (x^{a−1} db_i/dx) x^{a−1} dx
= ∫_0^1 d/dx [b_j(x) x^{a−1} db_i/dx] dx − ∫_0^1 (db_j/dx)(db_i/dx) x^{a−1} dx
= [b_j(x) x^{a−1} db_i/dx]_0^1 − ∫_0^1 (db_j/dx)(db_i/dx) x^{a−1} dx
= −∫_0^1 (db_j/dx)(db_i/dx) x^{a−1} dx − Bi_m b_j(1) [b_i(1) − c_B]   (27)

Combining this with Equation 26 gives

Σ_{i=1}^{NT} [∫_0^1 (db_j/dx)(db_i/dx) x^{a−1} dx] a_i + Bi_m b_j(1) [Σ_{i=1}^{NT} a_i b_i(1) − c_B]
= −φ² ∫_0^1 b_j(x) R(Σ_{i=1}^{NT} a_i b_i(x)) x^{a−1} dx,   j = 1, …, NT   (28)

This equation defines the Galerkin method, and a solution that satisfies this equation (for all j = 1, …, ∞) is called a weak solution. For an approximate solution the equation is written once for each member of the trial function, j = 1, …, NT. If the boundary condition is

c(1) = c_B

then the boundary condition is used (instead of Eq. 28) for j = NT:

Σ_{i=1}^{NT} a_i b_i(1) = c_B

The Galerkin finite element method results when the Galerkin method is combined with a finite element trial function. Both linear and quadratic finite element approximations are described in Chapter 2. The trial functions b_i(x) are then generally written as N_i(x). Then

c(x) = Σ_{i=1}^{NT} c_i N_i(x)

Each N_i(x) takes the value 1 at the point x_i and zero at all other grid points (Chap. 2). Thus c_i are the nodal values, c(x_i) = c_i. The first derivative must be transformed to the local coordinate system, u = 0 to 1 when x goes from x_i to x_i + Δx:

dN_J/dx = (1/Δx_e) dN_J/du,   dx = Δx_e du

in the e-th element. Then the Galerkin method is

Σ_e (1/Δx_e) Σ_{I=1}^{NP} [∫_0^1 (dN_J/du)(dN_I/du)(x_e + uΔx_e)^{a−1} du] c_I^e
+ Bi_m Σ_e N_J(1) [Σ_{I=1}^{NP} c_I^e N_I(1) − c_B]
= −φ² Σ_e Δx_e ∫_0^1 N_J(u) R(Σ_{I=1}^{NP} c_I^e N_I(u)) (x_e + uΔx_e)^{a−1} du   (29)

The element integrals are defined as

B_{JI}^e = (1/Δx_e) ∫_0^1 (dN_J/du)(dN_I/du)(x_e + uΔx_e)^{a−1} du
F_J^e = −φ² Δx_e ∫_0^1 N_J(u) R(Σ_{I=1}^{NP} c_I^e N_I(u)) (x_e + uΔx_e)^{a−1} du

whereas the boundary element integrals are

BB_{JI}^e = Bi_m N_J(1) N_I(1)
FF_J^e = Bi_m N_J(1) c_B

Then the entire method can be written in the compact notation

Σ_e B_{JI}^e c_I^e + Σ_e BB_{JI}^e c_I^e = Σ_e F_J^e + Σ_e FF_J^e

The matrices for various terms are given in Table 10. This equation can also be written in the form

AA c = f

where the matrix AA is sparse. If linear elements are used the matrix is tridiagonal. If quadratic elements are used the matrix is pentadiagonal. Naturally the linear algebra is most efficiently carried out if the sparse structure is taken into account. Once the solution is found, the solution at any point can be recovered from

c^e(u) = c_{I=1}^e (1 − u) + c_{I=2}^e u

for linear elements, or

c^e(u) = c_{I=1}^e 2(u − 1)(u − 1/2) + c_{I=2}^e 4u(1 − u) + c_{I=3}^e 2u(u − 1/2)

for quadratic elements.

Table 10. Element matrices for Galerkin method
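For the planar case (a = 1) with the linear kinetics R(c) = c, the linear-element system is tridiagonal and can be assembled directly; a sketch with assumed parameter values (φ, Bi_m, c_B, and the mesh size are chosen only for illustration):

```python
import numpy as np

# Galerkin FEM sketch, linear elements, planar geometry (a = 1), R(c) = c:
# c'' = phi^2 c, c'(0) = 0, c'(1) = -Bim (c(1) - cB).
# Exact solution: c = Bim*cB*cosh(phi*x) / (phi*sinh(phi) + Bim*cosh(phi)).
phi, Bim, cB = 1.0, 2.0, 1.0
ne = 100                              # number of elements
h = 1.0 / ne
n = ne + 1                            # number of nodes

K = np.zeros((n, n))                  # stiffness: integrals of bj' bi'
M = np.zeros((n, n))                  # mass: integrals of bj bi
for e in range(ne):
    idx = np.ix_([e, e + 1], [e, e + 1])
    K[idx] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    M[idx] += h * np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0

A = K + phi ** 2 * M                  # Eq. 28 with R(c) = c
A[-1, -1] += Bim                      # Robin boundary term at x = 1
f = np.zeros(n)
f[-1] = Bim * cB
c = np.linalg.solve(A, f)

x = np.linspace(0.0, 1.0, n)
exact = Bim * cB * np.cosh(phi * x) / (phi * np.sinh(phi) + Bim * np.cosh(phi))
print(np.max(np.abs(c - exact)))      # decreases as h^2, as stated in Sect. 7.9
```

Only the last diagonal entry and the last right-hand-side entry are modified by the boundary terms, because N_J(1) is nonzero only for the final node; this is exactly the BB and FF bookkeeping above.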
Because the integrals in Equation 28 may be complicated, they are usually formed by using Gaussian quadrature. If NG Gauss points are used, a typical term would be

∫_0^1 N_J(u) R(Σ_{I=1}^{NP} c_I^e N_I(u)) (x_e + uΔx_e)^{a−1} du
≈ Σ_{k=1}^{NG} W_k N_J(u_k) R(Σ_{I=1}^{NP} c_I^e N_I(u_k)) (x_e + u_k Δx_e)^{a−1}

7.7. Cubic B-Splines

Cubic B-splines have cubic approximations within each element, but first and second derivatives continuous between elements. The functions are the same ones discussed in Chapter 2, and they can be used to solve differential equations, too. See Sincovec [92].

7.8. Adaptive Mesh Strategies

In many two-point boundary value problems, the difficulty in the problem is the formation of a boundary layer region, or a region in which the solution changes very dramatically. In such cases small mesh spacing should be used there, either with the finite difference method or the finite element method. If the region is known a priori, small mesh spacings can be assumed at the boundary layer. If the region is not known, though, other techniques must be used. These techniques are known as adaptive mesh techniques. The general strategy is to estimate the error, which depends on the grid size and derivatives of the solution, and refine the mesh where the error is large.

The adaptive mesh strategy was employed by Ascher et al. [93] and by Russell and Christiansen [94]. For a second-order differential equation and cubic trial functions on finite elements, the error in the i-th element is given by

Error_i = c Δx_i⁴ ‖u⁽⁴⁾‖_i

Because cubic elements do not have a nonzero fourth derivative, the third derivative in adjacent elements is used [3, p. 166]:

a_i = (1/Δx_i³) d³c_i/du³,   a_{i+1} = (1/Δx_{i+1}³) d³c_{i+1}/du³

‖u⁽⁴⁾‖_i ≈ (1/2) [ (a_i − a_{i−1}) / ((x_{i+1} − x_{i−1})/2) + (a_{i+1} − a_i) / ((x_{i+2} − x_i)/2) ]

Element sizes are then chosen so that the following error bounds are satisfied:

C Δx_i⁴ ‖u⁽⁴⁾‖_i ≤ ε  for all i

These features are built into the code COLSYS (http://www.netlib.org/ode/).

The error expected from a method one order higher and one order lower can also be defined. Then a decision about whether to increase or decrease the order of the method can be made by taking into account the relative work of the different orders. This provides a method of adjusting both the mesh spacing (Δx, sometimes called h) and the degree of polynomial (p). Such methods are called h – p methods.

7.9. Comparison

What method should be used for any given problem? Obviously the error decreases with some power of Δx, and the power is higher for the higher order methods, which suggests that the error is smaller. For example, with linear elements the error is

y(Δx) = y_exact + c2 Δx²

for small enough (and uniform) Δx. A computer code should be run for varying Δx to confirm this. For quadratic elements, the error is

y(Δx) = y_exact + c3 Δx³

If orthogonal collocation on finite elements is used with cubic polynomials, then

y(Δx) = y_exact + c4 Δx⁴

However, the global methods, not using finite elements, converge even faster [95], for example,

y(N) = y_exact + c_N (1/NCOL)^{NCOL}

Yet the workload of the methods is also different. These considerations are discussed in [3]. Here, only sweeping generalizations are given.

If the problem has a relatively smooth solution, then the orthogonal collocation method

is preferred. It gives a very accurate solution, and N can be quite small so the work is small. If the problem has a steep front in it, the finite difference method or finite element method is indicated, and adaptive mesh techniques should probably be employed. Consider the reaction-diffusion problem: as the Thiele modulus φ increases from a small value with no diffusion limitations to a large value with significant diffusion limitations, the solution changes as shown in Figure 28. The orthogonal collocation method is initially the method of choice. For intermediate values of φ, N = 3 – 6 must be used, but orthogonal collocation still works well (for φ down to approximately 0.01). For large φ, use of the finite difference method, the finite element method, or an asymptotic expansion for large φ is better. The decision depends entirely on the type of solution that is obtained. For steep fronts the finite difference method and finite element method with adaptive mesh are indicated.

Figure 28. Concentration solution for different values of Thiele modulus φ

7.10. Singular Problems and Infinite Domains

If the solution being sought has a singularity, a good numerical solution may be hard to find. Sometimes even the location of the singularity may not be known [96, pp. 230 – 238]. One method of solving such problems is to refine the mesh near the singularity, by relying on the better approximation due to a smaller Δx. Another approach is to incorporate the singular trial function into the approximation. Thus, if the solution approaches f(x) as x goes to zero, and f(x) becomes infinite, an approximation may be taken as

   y(x) = f(x) + Σ_{i=1}^{N} a_i y_i(x)

This function is substituted into the differential equation, which is solved for a_i. Essentially, a new differential equation is being solved for a new variable:

   u(x) ≡ y(x) − f(x)

The differential equation is more complicated but has a better solution near the singularity (see [97, pp. 189 – 192], [98, p. 611]).

Sometimes the domain is infinite. Boundary layer flow past a flat plate is governed by the Blasius equation for the stream function [99, p. 117]:

   2 d³f/dη³ + f d²f/dη² = 0

   f = df/dη = 0 at η = 0

   df/dη = 1 at η → ∞

Because one boundary is at infinity, using a mesh with a constant size is difficult! One approach is to transform the domain. For example, let

   z = e^(−η)

Then η = 0 becomes z = 1 and η = ∞ becomes z = 0. The derivatives are

   dz/dη = −e^(−η) = −z,   d²z/dη² = e^(−η) = z

   df/dη = (df/dz)(dz/dη) = −z df/dz

   d²f/dη² = (d²f/dz²)(dz/dη)² + (df/dz)(d²z/dη²) = z² d²f/dz² + z df/dz

The Blasius equation becomes

   2 (−z³ d³f/dz³ − 3z² d²f/dz² − z df/dz) + f (z² d²f/dz² + z df/dz) = 0   for 0 ≤ z ≤ 1

The differential equation now has variable coefficients, but these are no more difficult to handle than the original nonlinearities.
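As a quick numerical check of this transformation (a small Python sketch, not part of the original text), the mapping η = −ln z can be tabulated on a uniform z-mesh, and the chain-rule factor dz/dη = −z verified by a finite difference:

```python
import math

# Uniform mesh in the transformed variable z = exp(-eta), z in (0, 1].
z_nodes = [j / 100 for j in range(1, 101)]        # skip z = 0 (eta = infinity)
eta_nodes = [-math.log(z) for z in z_nodes]       # eta = -ln(z)

# dz/deta = -z, checked by a centered difference of z(eta) = exp(-eta) at eta = 1.
eta0 = 1.0
h = 1e-6
dz_deta = (math.exp(-(eta0 + h)) - math.exp(-(eta0 - h))) / (2 * h)
```

For z = 0.01 and z = 0.02 this gives η ≈ 4.605 and 3.912, and the finite difference reproduces dz/dη = −e^(−1) at η = 1.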
Another approach is to use a variable mesh, perhaps with the same transformation. For example, use z = e^(−η) and a constant mesh size in z. Then with 101 points distributed uniformly from z = 0 to z = 1, the following are the nodal points:

   z = 0., 0.01, 0.02, . . ., 0.99, 1.0
   η = ∞, 4.605, 3.912, . . ., 0.010, 0
   Δη = ∞, 0.693, . . ., 0.01

Still another approach is to solve on a finite mesh in which the last point is far enough away that its location does not influence the solution. A location that is far enough away must be found by trial and error.

8. Partial Differential Equations

Partial differential equations are differential equations in which the dependent variable is a function of two or more independent variables. These can be time and one space dimension, or time and two or more space dimensions, or two or more space dimensions alone. Problems involving time are generally either hyperbolic or parabolic, whereas those involving spatial dimensions only are often elliptic. Because the methods applied to each type of equation are very different, the equation must first be classified as to its type. Then the special methods applicable to each type of equation are described. For a discussion of all methods, see [100 – 103]; for a discussion oriented more toward chemical engineering applications, see [104]. Examples of hyperbolic and parabolic equations include chemical reactors with radial dispersion, pressure-swing adsorption, dispersion of an effluent, and boundary value problems with transient terms added (heat transfer, mass transfer, chemical reaction). Examples of elliptic problems include heat transfer and mass transfer in two and three spatial dimensions and steady fluid flow.

8.1. Classification of Equations

A set of differential equations may be hyperbolic, elliptic, or parabolic, or it may be of mixed type. The type may change for different parameters or in different regions of the flow. This can happen in the case of nonlinear problems; an example is a compressible flow problem with both subsonic and supersonic regions. Characteristic curves are curves along which a discontinuity can propagate. For a given set of equations, it is necessary to determine if characteristics exist or not, because that determines whether the equations are hyperbolic, elliptic, or parabolic.

Linear Problems. For linear problems, the theory summarized by Joseph et al. [105] can be used. The differential operator

   ∂/∂t, ∂/∂x₁, . . ., ∂/∂xₙ

is replaced with the Fourier variables

   iξ₀, iξ₁, . . ., iξₙ

If the m-th order differential equation is

   P = Σ_{|α|=m} a_α ∂^α + Σ_{|α|<m} b_α ∂^α

where

   α = (α₀, α₁, . . ., αₙ),   |α| = Σ_{i=0}^{n} α_i

   ∂^α = ∂^{|α|} / (∂t^{α₀} ∂x₁^{α₁} . . . ∂xₙ^{αₙ})

the characteristic equation for P is defined as

   Σ_{|α|=m} a_α ξ^α = 0,   ξ = (ξ₀, ξ₁, . . ., ξₙ),   ξ^α = ξ₀^{α₀} ξ₁^{α₁} . . . ξₙ^{αₙ}     (30)

where ξ represents coordinates. Thus only the highest derivatives are used to determine the type. The surface is defined by this equation plus a normalization condition:

   Σ_{k=0}^{n} ξₖ² = 1

The shape of the surface defined by Equation 30 is also related to the type: elliptic equations give rise to ellipses, parabolic equations give rise to parabolas, and hyperbolic equations give rise to hyperbolas.

   ξ₁²/a² + ξ₂²/b² = 1,   Ellipse
   ξ₀ = a ξ₁²,   Parabola
   ξ₀² − a ξ₁² = 0,   Hyperbola

If Equation 30 has no nontrivial real zeroes then the equation is called elliptic. If all the roots are
real and distinct (excluding zero) then the operator is hyperbolic.

This formalism is applied to three basic types of equations. First consider the equation arising from steady diffusion in two dimensions:

   ∂²c/∂x² + ∂²c/∂y² = 0

This gives

   −ξ₁² − ξ₂² = 0 or ξ₁² + ξ₂² = 0

Thus,

   ξ₁² + ξ₂² = 1 (normalization)
   ξ₁² + ξ₂² = 0 (equation)

These cannot both be satisfied so the problem is elliptic. When the equation is

   ∂²u/∂t² = ∂²u/∂x²

then

   −ξ₀² + ξ₁² = 0

Now real ξ₀ can be solved for and the equation is hyperbolic:

   ξ₀² + ξ₁² = 1 (normalization)
   −ξ₀² + ξ₁² = 0 (equation)

When the equation is

   ∂c/∂t = D (∂²c/∂x² + ∂²c/∂y²)

then

   ξ₀² + ξ₁² + ξ₂² = 1 (normalization)
   ξ₁² + ξ₂² = 0 (equation)

thus we get

   ξ₀² = 1 (for normalization)

and the characteristic surfaces are hyperplanes with t = constant. This is a parabolic case.

Consider next the telegrapher's equation:

   ∂T/∂t + β ∂²T/∂t² = ∂²T/∂x²

Replacing the derivatives with the Fourier variables gives

   iξ₀ − β ξ₀² + ξ₁² = 0

The equation is thus second order and the type is determined by

   −β ξ₀² + ξ₁² = 0

The normalization condition

   ξ₀² + ξ₁² = 1

is required. Combining these gives

   1 − (1+β) ξ₀² = 0

The roots are real and the equation is hyperbolic. When β = 0

   ξ₁² = 0

and the equation is parabolic.

First-order quasi-linear problems are written in the form

   Σ_{l=0}^{n} A_l ∂u/∂x_l = f,   x = (t, x₁, . . ., xₙ),   u = (u₁, u₂, . . ., u_k)     (31)

The matrices A_l are k×k matrices whose entries depend on u but not on derivatives of u. Equation 31 is hyperbolic if A = A_μ is nonsingular and if, for any choice of real ξ_l, l = 0, . . ., n, l ≠ μ, the roots α_k of

   det ( Σ_{l=0, l≠μ}^{n} ξ_l A_l − α A ) = 0

are real. If the roots are complex the equation is elliptic; if some roots are real and some are complex the equation is of mixed type.

Apply these ideas to the advection equation

   ∂u/∂t + F(u) ∂u/∂x = 0

Thus,

   det (A₀ ξ₁ − α A₁) = 0 or det (A₁ ξ₀ − α A₀) = 0

In this case,

   n = 1,   A₀ = 1,   A₁ = F(u)

Using the first of the above equations gives

   det (ξ₁ − α F(u)) = 0, or α = ξ₁/F(u)

Thus, the roots are real and the equation is hyperbolic.
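For a single second-order equation in two variables, the examples above are consistent with the familiar discriminant test on A u_tt + B u_tx + C u_xx + (lower order) = 0: the sign of B² − 4AC determines the type. A minimal sketch (this helper is illustrative, not from the original text):

```python
def classify_second_order(A, B, C):
    """Classify A*u_tt + B*u_tx + C*u_xx + (lower order) = 0 by the
    standard discriminant B**2 - 4*A*C, consistent with the
    characteristic-equation analysis above."""
    disc = B * B - 4 * A * C
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

wave = classify_second_order(1, 0, -1)     # u_tt - u_xx = 0
heat = classify_second_order(0, 0, -1)     # u_t = u_xx (no u_tt term)
laplace = classify_second_order(1, 0, 1)   # u_xx + u_yy = 0
```

The three prototype equations treated above come out hyperbolic, parabolic, and elliptic, respectively.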
The final example is the heat conduction problem written as

   ρCₚ ∂T/∂t = −∂q/∂x,   q = −k ∂T/∂x

In this formulation the constitutive equation for heat flux is separated out; the resulting set of equations is first order and written as

   ρCₚ ∂T/∂t + ∂q/∂x = 0

   k ∂T/∂x = −q

In matrix notation this is

   [ ρCₚ  0 ]        [ T ]   [ 0  1 ]        [ T ]   [  0 ]
   [  0   0 ]  ∂/∂t  [ q ] + [ k  0 ]  ∂/∂x  [ q ] = [ −q ]

This compares with

   A₀ ∂u/∂x₀ + A₁ ∂u/∂x₁ = f

In this case A₀ is singular whereas A₁ is nonsingular. Thus,

   det (ξ₀ A₀ − α A₁) = 0

is considered for any real ξ₀. This gives

   | ξ₀ ρCₚ  −α |
   |  −α k    0 | = 0

or

   α² k = 0

Thus the α is real, but zero, and the equation is parabolic.

8.2. Hyperbolic Equations

The most common situation yielding hyperbolic equations involves unsteady phenomena with convection. A prototype equation is

   ∂c/∂t + ∂F(c)/∂x = 0

Depending on the interpretation of c and F(c), this can represent accumulation of mass and convection. With F(c) = u c, where u is the velocity, the equation represents a mass balance on concentration. If diffusive phenomena are important, the equation is changed to

   ∂c/∂t + ∂F(c)/∂x = D ∂²c/∂x²     (32)

where D is a diffusion coefficient. Special cases are the convective diffusive equation

   ∂c/∂t + u ∂c/∂x = D ∂²c/∂x²     (33)

and Burgers viscosity equation

   ∂u/∂t + u ∂u/∂x = ν ∂²u/∂x²     (34)

where u is the velocity and ν is the kinematic viscosity. This is a prototype equation for the Navier – Stokes equations (→ Fluid Mechanics). For adsorption phenomena [106, p. 202],

   φ ∂c/∂t + φ u ∂c/∂x + (1−φ) (df/dc) ∂c/∂t = 0     (35)

where φ is the void fraction and f(c) gives the equilibrium relation between the concentrations in the fluid and in the solid phase. In these examples, if the diffusion coefficient D or the kinematic viscosity ν is zero, the equations are hyperbolic. If D and ν are small, the phenomenon may be essentially hyperbolic even though the equations are parabolic. Thus the numerical methods for hyperbolic equations may be useful even for parabolic equations.

Equations for several methods are given here, as taken from [107]. If the convective term is treated with a centered difference expression the solution exhibits oscillations from node to node, and these vanish only if a very fine grid is used. The simplest way to avoid the oscillations with a hyperbolic equation is to use upstream derivatives. If the flow is from left to right, this would give the following for Equation 32:

   dc_i/dt + [F(c_i) − F(c_{i−1})]/Δx = D (c_{i+1} − 2c_i + c_{i−1})/Δx²

for Equation 34:

   du_i/dt + u_i (u_i − u_{i−1})/Δx = ν (u_{i+1} − 2u_i + u_{i−1})/Δx²

and for Equation 35:

   φ dc_i/dt + φ u_i (c_i − c_{i−1})/Δx + (1−φ) (df/dc)|_i dc_i/dt = 0

If the flow were from right to left, then the formula would be

   dc_i/dt + [F(c_{i+1}) − F(c_i)]/Δx = D (c_{i+1} − 2c_i + c_{i−1})/Δx²

If the flow could be in either direction, a local determination must be made at each node i and the appropriate formula used.
The effect of using upstream derivatives is to add artificial or numerical diffusion to the model. This can be ascertained by taking the finite difference form of the convective diffusion equation

   dc_i/dt + u (c_i − c_{i−1})/Δx = D (c_{i+1} − 2c_i + c_{i−1})/Δx²

and rearranging:

   dc_i/dt + u (c_{i+1} − c_{i−1})/(2Δx) = (D + uΔx/2) (c_{i+1} − 2c_i + c_{i−1})/Δx²

Thus the diffusion coefficient has been changed from

   D to D + uΔx/2

Expressed in terms of the cell Peclet number, Pe_Δ = uΔx/D, this means D is changed to D [1 + Pe_Δ/2]. The cell Peclet number should always be calculated as a guide to the calculations. Using a large cell Peclet number and upstream derivatives leads to excessive (and artificial) smoothing of the solution profiles.

Another method often used for hyperbolic equations is the MacCormack method. This method has two steps; it is written here for Equation 33.

   c*_i^{n+1} = c_i^n − (uΔt/Δx)(c_{i+1}^n − c_i^n) + (ΔtD/Δx²)(c_{i+1}^n − 2c_i^n + c_{i−1}^n)

   c_i^{n+1} = ½ (c_i^n + c*_i^{n+1}) − (uΔt/2Δx)(c*_i^{n+1} − c*_{i−1}^{n+1}) + (ΔtD/2Δx²)(c*_{i+1}^{n+1} − 2c*_i^{n+1} + c*_{i−1}^{n+1})

The concentration profile is steeper for the MacCormack method than for the upstream derivatives, but oscillations can still be present. The flux-corrected transport method can be added to the MacCormack method. A solution is obtained both with the upstream algorithm and the MacCormack method; then they are combined to add just enough diffusion to eliminate the oscillations without smoothing the solution too much. The algorithm is complicated and lengthy but well worth the effort [107 – 109].

If finite element methods are used, an explicit Taylor – Galerkin method is appropriate. For the convective diffusion equation the method is

   (1/6)(c_{i+1}^{n+1} − c_{i+1}^n) + (2/3)(c_i^{n+1} − c_i^n) + (1/6)(c_{i−1}^{n+1} − c_{i−1}^n)
   = −(uΔt/2Δx)(c_{i+1}^n − c_{i−1}^n) + (DΔt/Δx² + u²Δt²/2Δx²)(c_{i+1}^n − 2c_i^n + c_{i−1}^n)

Leaving out the u²Δt² terms gives the Galerkin method. Replacing the left-hand side with

   c_i^{n+1} − c_i^n

gives the Taylor finite difference method, and dropping the u²Δt² terms in that gives the centered finite difference method. This method might require a small time step if reaction phenomena are important. Then the implicit Galerkin method (without the Taylor terms) is appropriate.

A stability diagram for the explicit methods applied to the convective diffusion equation is shown in Figure 29. Notice that all the methods require

   Co = uΔt/Δx ≤ 1

where Co is the Courant number. How much Co should be less than one depends on the method and on r = DΔt/Δx², as given in Figure 29. The MacCormack method with flux correction requires a smaller time step than the MacCormack method alone (curve a), and the implicit Galerkin method (curve e) is stable for all values of Co and r shown in Figure 29 (as well as even larger values).

Figure 29. Stability diagram for convective diffusion equation (stable below curve)
a) MacCormack; b) Centered finite difference; c) Taylor finite difference; d) Upstream; e) Galerkin; f) Taylor – Galerkin
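The two-step MacCormack scheme written above translates directly into code (an illustrative sketch, not from the original text; the predictor is denoted cp). One easy sanity check: a spatially uniform profile is an exact solution, so a step must leave it unchanged to machine precision:

```python
def maccormack_step(c, u, D, dx, dt):
    """Two-step MacCormack scheme for dc/dt + u dc/dx = D d2c/dx2:
    forward-difference predictor, then a corrector that averages the
    old and predicted values.  The end values are held fixed."""
    n = len(c)
    cp = c[:]                                   # predictor c*
    for i in range(1, n - 1):
        cp[i] = (c[i] - u * dt / dx * (c[i + 1] - c[i])
                 + D * dt / dx ** 2 * (c[i + 1] - 2 * c[i] + c[i - 1]))
    new = c[:]
    for i in range(1, n - 1):
        new[i] = (0.5 * (c[i] + cp[i])
                  - u * dt / (2 * dx) * (cp[i] - cp[i - 1])
                  + D * dt / (2 * dx ** 2) * (cp[i + 1] - 2 * cp[i] + cp[i - 1]))
    return new

flat = maccormack_step([2.0] * 8, u=1.0, D=0.1, dx=1.0, dt=0.1)
```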
Each of these methods tries to avoid oscillations that would disappear if the mesh were fine enough. For the steady convective diffusion equation these oscillations do not occur provided

   uΔx/(2D) = Pe_Δ/2 ≤ 1     (36)

For large u, Δx must be small to meet this condition. An alternative is to use a small Δx in regions where the solution changes drastically. Because these regions change in time, the elements or grid points must move. The criteria to move the grid points can be quite complicated, and typical methods are reviewed in [107]. The criteria include moving the mesh in a known way (when the movement is known a priori), moving the mesh to keep some property (e.g., first- or second-derivative measures) uniform over the domain, using a Galerkin or weighted residual criterion to move the mesh, and Euler – Lagrange methods which move part of the solution exactly by convection and then add on some diffusion after that.

The final illustration is for adsorption in a packed bed, or chromatography. Equation 35 can be solved when the adsorption phenomenon is governed by a Langmuir isotherm.

   f(c) = c/(1 + K c)

Similar numerical considerations apply and similar methods are available [110 – 112].

8.3. Parabolic Equations in One Dimension

In this section several methods are applied to parabolic equations in one dimension: separation of variables, combination of variables, finite difference method, finite element method, and the orthogonal collocation method. Separation of variables is successful for linear problems, whereas the other methods work for linear or nonlinear problems. The finite difference, the finite element, and the orthogonal collocation methods are numerical, whereas the separation or combination of variables can lead to analytical solutions.

Analytical Solutions. Consider the diffusion equation

   ∂c/∂t = D ∂²c/∂x²

with boundary and initial conditions

   c(x,0) = 0

   c(0,t) = 1,   c(L,t) = 0

A solution of the form

   c(x,t) = T(t) X(x)

is attempted and substituted into the equation, with the terms separated to give

   (1/DT) dT/dt = (1/X) d²X/dx²

One side of this equation is a function of x alone, whereas the other side is a function of t alone. Thus, both sides must be a constant. Otherwise, if x is changed one side changes, but the other cannot because it depends on t. Call the constant −λ and write the separate equations

   dT/dt = −λ D T,   d²X/dx² = −λ X

The first equation is solved easily

   T(t) = T(0) e^(−λDt)

and the second equation is written in the form

   d²X/dx² + λ X = 0

Next consider the boundary conditions. If they are written as

   c(0,t) = 1 = T(t) X(0)

   c(L,t) = 0 = T(t) X(L)

the boundary conditions are difficult to satisfy because they are not homogeneous, i.e. with a zero right-hand side. Thus, the problem must be transformed to make the boundary conditions homogeneous. The solution is written as the sum of two functions, one of which satisfies the nonhomogeneous boundary conditions, whereas the other satisfies the homogeneous boundary conditions.

   c(x,t) = f(x) + u(x,t)

   u(0,t) = 0

   u(L,t) = 0

Thus, f(0) = 1 and f(L) = 0 are necessary. Now the combined function satisfies the boundary conditions. In this case the function f(x) can be taken as
   f(x) = 1 − x/L

The equation for u is found by substituting for c in the original equation and noting that the f(x) drops out for this case; it need not disappear in the general case:

   ∂u/∂t = D ∂²u/∂x²

The boundary conditions for u are

   u(0,t) = 0

   u(L,t) = 0

The initial condition for u is found from the initial condition

   u(x,0) = c(x,0) − f(x) = x/L − 1

Separation of variables is now applied to this equation by writing

   u(x,t) = T(t) X(x)

The same equations for T(t) and X(x) are obtained, but with X(0) = X(L) = 0.

   d²X/dx² + λ X = 0

   X(0) = X(L) = 0

Next X(x) is solved for. The equation is an eigenvalue problem. The general solution is obtained by using e^(mx) and finding that m² + λ = 0; thus m = ±i√λ. The exponential term

   e^(±i√λ x)

is written in terms of sines and cosines, so that the general solution is

   X = B cos(√λ x) + E sin(√λ x)

The boundary conditions are

   X(0) = B = 0

   X(L) = B cos(√λ L) + E sin(√λ L) = 0

If B = 0, then E ≠ 0 is required to have any solution at all. Thus, λ must satisfy

   sin(√λ L) = 0

This is true for certain values of λ, called eigenvalues or characteristic values. Here, they are

   λₙ = n²π²/L²

Each eigenvalue has a corresponding eigenfunction

   Xₙ(x) = E sin(nπx/L)

The composite solution is then

   Xₙ(x) Tₙ(t) = E A sin(nπx/L) e^(−λₙDt)

This function satisfies the boundary conditions and differential equation but not the initial condition. To make the function satisfy the initial condition, several of these solutions are added up, each with a different eigenfunction, and E A is replaced by Aₙ.

   u(x,t) = Σ_{n=1}^{∞} Aₙ sin(nπx/L) e^(−n²π²Dt/L²)

The constants Aₙ are chosen by making u(x,t) satisfy the initial condition.

   u(x,0) = Σ_{n=1}^{∞} Aₙ sin(nπx/L) = x/L − 1

The residual R(x) is defined as the error in the initial condition:

   R(x) = x/L − 1 − Σ_{n=1}^{∞} Aₙ sin(nπx/L)

Next, the Galerkin method is applied, and the residual is made orthogonal to a complete set of functions, which are the eigenfunctions.

   ∫₀^L (x/L − 1) sin(mπx/L) dx = Σ_{n=1}^{∞} Aₙ ∫₀^L sin(mπx/L) sin(nπx/L) dx = Aₘ L/2

The Galerkin criterion for finding Aₙ is the same as the least-squares criterion [3, p. 183]. The solution is then

   c(x,t) = 1 − x/L + Σ_{n=1}^{∞} Aₙ sin(nπx/L) e^(−n²π²Dt/L²)

This is an exact solution to the linear problem. It can be evaluated to any desired accuracy by taking more and more terms, but if a finite number of terms are used, some error always occurs. For large times a single term is adequate, whereas for small times many terms are needed. For small times the Laplace transform method is also useful, because it leads to solutions that converge with fewer terms.
For small times, the method of combination of variables may be used as well. For nonlinear problems, the method of separation of variables fails and one of the other methods must be used.

The method of combination of variables is useful, particularly when the problem is posed in a semi-infinite domain. Here, only one example is provided; more detail is given in [3, 113, 114]. The method is applied here to the nonlinear problem

   ∂c/∂t = ∂/∂x [D(c) ∂c/∂x] = D(c) ∂²c/∂x² + (dD(c)/dc) (∂c/∂x)²

with boundary and initial conditions

   c(x,0) = 0

   c(0,t) = 1,   c(∞,t) = 0

The transformation combines two variables into one

   c(x,t) = f(η) where η = x/√(4D₀t)

The use of the 4 and D₀ makes the analysis below simpler. The equation for c(x,t) is transformed into an equation for f(η):

   ∂c/∂t = (df/dη)(∂η/∂t),   ∂c/∂x = (df/dη)(∂η/∂x)

   ∂²c/∂x² = (d²f/dη²)(∂η/∂x)² + (df/dη)(∂²η/∂x²)

   ∂η/∂t = −(x/2)/√(4D₀t³),   ∂η/∂x = 1/√(4D₀t),   ∂²η/∂x² = 0

The result is

   d/dη [K(c) df/dη] + 2η df/dη = 0

   K(c) = D(c)/D₀

The boundary conditions must also combine. In this case the variable η is infinite when either x is infinite or t is zero. Note that the boundary conditions on c(x,t) are both zero at those points. Thus, the boundary conditions can be combined to give

   f(∞) = 0

The other boundary condition is for x = 0 or η = 0,

   f(0) = 1

Thus, an ordinary differential equation must be solved rather than a partial differential equation. When the diffusivity is constant the solution is the well-known complementary error function:

   c(x,t) = 1 − erf η = erfc η

   erf η = ∫₀^η e^(−ξ²) dξ / ∫₀^∞ e^(−ξ²) dξ

This is a tabulated function [23].

Numerical Methods. Numerical methods are applicable to both linear and nonlinear problems on finite and semi-infinite domains. The finite difference method is applied by using the method of lines [115]. In this method the same equations are used for the spatial variations of the function, but the function at a grid point can vary with time. Thus the linear diffusion problem is written as

   dc_i/dt = D (c_{i+1} − 2c_i + c_{i−1})/Δx²     (37)

This can be written in the general form

   dc/dt = AA c

This set of ordinary differential equations can be solved by using any of the standard methods. The stability of explicit schemes is deduced from the theory presented in Chapter 6. The equations are written as

   dc_i/dt = D (c_{i+1} − 2c_i + c_{i−1})/Δx² = (D/Δx²) Σ_{j=1}^{n+1} B_ij c_j

where the matrix B is tridiagonal. The stability of the integration of these equations is governed by the largest eigenvalue of B. If Euler's method is used for integration,

   Δt ≤ (Δx²/D) (2/|λ|_max)

The largest eigenvalue of B is bounded by the Gerschgorin theorem [14, p. 135].

   |λ|_max ≤ max_{2≤j≤n} Σ_{i=2}^{n} |B_ji| = 4

This gives the well-known stability limit

   D Δt/Δx² ≤ 1/2

If other methods are used to integrate in time, then the stability limit changes according to the method.
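The stability limit D Δt/Δx² ≤ 1/2 is easy to demonstrate numerically (a sketch, not from the original text): with r = DΔt/Δx² just below 1/2 the explicit method-of-lines solution decays smoothly, while just above it the highest mode is amplified and the solution blows up.

```python
def heat_explicit(nx, r, nsteps):
    """Method of lines for dc_i/dt = D*(c_{i+1} - 2*c_i + c_{i-1})/dx**2
    with explicit Euler; r = D*dt/dx**2.  Boundaries held at 0,
    interior started at 1 (a crude step profile)."""
    c = [0.0] + [1.0] * (nx - 2) + [0.0]
    for _ in range(nsteps):
        c = [0.0] + [c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
                     for i in range(1, nx - 1)] + [0.0]
    return c

stable = heat_explicit(21, r=0.4, nsteps=500)     # r <= 1/2: decays
unstable = heat_explicit(21, r=0.6, nsteps=500)   # r > 1/2: blows up
```

The blow-up rate is set by the largest eigenvalue, bounded by the Gerschgorin estimate 4D/Δx² quoted above.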
It is interesting to note that the eigenvalues of Equation 37 range from D π²/L² (smallest) to 4 D/Δx² (largest), depending on the boundary conditions. Thus the problem becomes stiff as Δx approaches zero [3, p. 263].

Implicit methods can also be used. Write a finite difference form for the time derivative and average the right-hand sides, evaluated at the old and new times:

   (c_i^{n+1} − c_i^n)/Δt = D (1−θ)(c_{i+1}^n − 2c_i^n + c_{i−1}^n)/Δx² + D θ (c_{i+1}^{n+1} − 2c_i^{n+1} + c_{i−1}^{n+1})/Δx²

Now the equations are of the form

   −(DθΔt/Δx²) c_{i+1}^{n+1} + (1 + 2DθΔt/Δx²) c_i^{n+1} − (DθΔt/Δx²) c_{i−1}^{n+1}
   = c_i^n + (DΔt(1−θ)/Δx²)(c_{i+1}^n − 2c_i^n + c_{i−1}^n)

and require solving a set of simultaneous equations, which have a tridiagonal structure. Using θ = 0 gives the Euler method (as above); θ = 0.5 gives the Crank – Nicolson method; θ = 1 gives the backward Euler method. The stability limit is given by

   DΔt/Δx² ≤ 0.5/(1 − 2θ)

whereas the oscillation limit is given by

   DΔt/Δx² ≤ 0.25/(1 − θ)

If a time step is chosen between the oscillation limit and stability limit, the solution will oscillate around the exact solution, but the oscillations remain bounded. For further discussion, see [3, p. 218].

Finite volume methods are utilized extensively in computational fluid dynamics. In this method, a mass balance is made over a cell, accounting for the change in what is in the cell and the flow in and out. Figure 30 illustrates the geometry of the i-th cell. A mass balance made on this cell (with area A perpendicular to the paper) is

   A Δx (c_i^{n+1} − c_i^n) = Δt A (J_{i−1/2} − J_{i+1/2})

where J is the flux due to convection and diffusion, positive in the +x direction.

   J = u c − D ∂c/∂x,   J_{i−1/2} = u_{i−1/2} c_{i−1/2} − D (c_i − c_{i−1})/Δx

The concentration at the edge of the cell is taken as

   c_{i−1/2} = ½ (c_i + c_{i−1})

Figure 30. Geometry of the i-th finite volume cell

Rearrangement for the case when the velocity u is the same for all nodes gives

   (c_i^{n+1} − c_i^n)/Δt + u (c_{i+1} − c_{i−1})/(2Δx) = D (c_{i+1} − 2c_i + c_{i−1})/Δx²

This is the same equation as obtained using the finite difference method. This is not always true, and the finite volume equations are easy to derive. In two and three dimensions, the mesh need not be rectangular, as long as it is possible to compute the velocity normal to an edge of the cell. The finite volume method is useful for applications involving filling, such as injection molding, when only part of the cell is filled with fluid. Such applications do involve some approximations, since the interface is not tracked precisely, but they are useful engineering approximations.

The finite element method is handled in a similar fashion, as an extension of two-point boundary value problems by letting the solution at the nodes depend on time. For the diffusion equation the finite element method gives

   Σ_e Σ_I C_JI^e dc_I^e/dt = Σ_e Σ_I B_JI^e c_I^e

with the mass matrix defined by

   C_JI^e = Δx_e ∫₀¹ N_J(u) N_I(u) du

This set of equations can be written in matrix form

   CC dc/dt = AA c

Now the matrix CC is not diagonal, so that a set of equations must be solved for each time step, even when the right-hand side is evaluated explicitly. This is not as time-consuming as it seems, however.
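The implicit θ-scheme written earlier leads to a tridiagonal system at each time step, which is solved in O(n) operations by the Thomas algorithm. A minimal sketch for θ = 0.5 (Crank – Nicolson) with Dirichlet ends held fixed; the function names are illustrative, not from the original text:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a (a[0] unused),
    main diagonal b, super-diagonal c (c[-1] unused), right side d."""
    n = len(d)
    b = b[:]; d = d[:]
    for i in range(1, n):                      # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def crank_nicolson_step(u, r):
    """One theta = 1/2 step of dc/dt = D d2c/dx2, r = D*dt/dx**2;
    rows 0 and n-1 are identity rows holding the boundary values."""
    n = len(u)
    a = [0.0] + [-r / 2] * (n - 2) + [0.0]
    b = [1.0] + [1 + r] * (n - 2) + [1.0]
    c = [0.0] + [-r / 2] * (n - 2) + [0.0]
    d = [u[0]] + [u[i] + r / 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
                  for i in range(1, n - 1)] + [u[-1]]
    return thomas(a, b, c, d)

u = [1.0] + [0.0] * 9          # c(0) = 1, c(L) = 0
for _ in range(400):
    u = crank_nicolson_step(u, r=1.0)
```

After many steps the solution relaxes to the linear steady profile between the fixed boundary values, with no stability restriction on r.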
The explicit scheme is written as

   CC_ji (c_i^{n+1} − c_i^n)/Δt = AA_ji c_i^n

and rearranged to give

   CC_ji (c_i^{n+1} − c_i^n) = Δt AA_ji c_i^n or CC (c^{n+1} − c^n) = Δt AA c^n

This is solved with an L U decomposition (see Section 1.1) that retains the structure of the mass matrix CC. Thus,

   CC = L U

At each step, calculate

   c^{n+1} − c^n = Δt U⁻¹ L⁻¹ AA c^n

This is quick and easy to do because the inverses of L and U are simple. Thus the problem is reduced to solving one full matrix problem and then evaluating the solution for multiple right-hand sides. For implicit methods the same approach can be used, and the LU decomposition remains fixed until the time step is changed.

The method of orthogonal collocation uses a similar extension: the same polynomial of x is used but now the coefficients depend on time.

   ∂c/∂t |_{x_j} = dc(x_j,t)/dt = dc_j/dt

Thus, for diffusion problems

   dc_j/dt = Σ_{i=1}^{N+2} B_ji c_i,   j = 2, . . ., N+1

This can be integrated by using the standard methods for ordinary differential equations as initial value problems. Stability limits for explicit methods are available [3, p. 204].

The method of orthogonal collocation on finite elements can also be used, and details are provided elsewhere [3, pp. 228 – 230].

The maximum eigenvalue for all the methods is given by

   |λ|_max = LB/Δx²     (38)

where the values of LB are as follows:

   Finite difference                                  4
   Galerkin, linear elements, lumped                  4
   Galerkin, linear elements                         12
   Galerkin, quadratic elements                      60
   Orthogonal collocation on finite elements, cubic  36

Spectral methods employ the discrete Fourier transform (see Chap. 2) and Chebyshev polynomials on rectangular domains [116].

In the Chebyshev collocation method, N + 1 collocation points are used

   x_j = cos(jπ/N),   j = 0, 1, . . ., N

As an example, consider the equation

   ∂u/∂t + f(u) ∂u/∂x = 0

An explicit method in time can be used

   (u^{n+1} − u^n)/Δt + f(u^n) ∂u^n/∂x = 0

and evaluated at each collocation point

   (u_j^{n+1} − u_j^n)/Δt + f(u_j^n) (∂u^n/∂x)|_j = 0

The trial function is taken as

   u_j(t) = Σ_{p=0}^{N} a_p(t) cos(pπj/N),   u_j^n = u_j(t^n)     (39)

Assume that the values u_j^n exist at some time. Then invert Equation 39 using the fast Fourier transform to obtain {a_p} for p = 0, 1, . . ., N; then calculate S_p

   S_p = S_{p+2} + (p+1) a_{p+1},   0 ≤ p ≤ N−1

   S_N = 0,   S_{N+1} = 0

and finally

   a_p^{(1)} = 2 S_p/c_p (with c₀ = 2 and c_p = 1 for p ≥ 1)

Thus, the first derivative is given by

   ∂u/∂x |_j = Σ_{p=0}^{N} a_p^{(1)}(t) cos(pπj/N)

This is evaluated at the set of collocation points by using the fast Fourier transform again. Once the function and the derivative are known at each collocation point the solution can be advanced forward to the (n+1)-th time level.

The advantage of the spectral method is that it is very fast and can be adapted quite well to parallel computers. It is, however, restricted in the geometries that can be handled.
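The recursion for the derivative coefficients can be checked directly (a sketch, not from the original text). For example, x³ = (3T₁ + T₃)/4 in Chebyshev polynomials, and its derivative 3x² = (3/2)T₀ + (3/2)T₂:

```python
def chebyshev_derivative_coeffs(a):
    """Coefficients of the derivative of sum_p a_p T_p(x), using the
    recursion above: S_p = S_{p+2} + (p+1)*a_{p+1} with S_N = S_{N+1} = 0,
    then a1_p = 2*S_p/c_p, where c_0 = 2 and c_p = 1 otherwise."""
    N = len(a) - 1
    S = [0.0] * (N + 2)                 # S[N] = S[N+1] = 0
    for p in range(N - 1, -1, -1):
        S[p] = S[p + 2] + (p + 1) * a[p + 1]
    c = [2.0] + [1.0] * N
    return [2.0 * S[p] / c[p] for p in range(N + 1)]

# x**3 = (3*T1 + T3)/4, so the derivative 3*x**2 = (3/2)*T0 + (3/2)*T2.
b = chebyshev_derivative_coeffs([0.0, 0.75, 0.0, 0.25])
```

In the collocation method this recursion is applied between the two FFTs described above.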
8.4. Elliptic Equations

Elliptic equations can be solved with both finite difference and finite element methods. One-dimensional elliptic problems are two-point boundary value problems and are covered in Chapter 7. Two-dimensional elliptic problems are often solved with direct methods, and iterative methods are usually used for three-dimensional problems. Thus, two aspects must be considered: how the equations are discretized to form sets of algebraic equations and how the algebraic equations are then solved.

The prototype elliptic problem is steady-state heat conduction or diffusion,

   k (∂²T/∂x² + ∂²T/∂y²) = Q

possibly with a heat generation term per unit volume, Q. The boundary conditions can be

   Dirichlet or 1st kind: T = T₁ on boundary S₁

   Neumann or 2nd kind: −k ∂T/∂n = q₂ on boundary S₂

   Robin, mixed, or 3rd kind: −k ∂T/∂n = h (T − T₃) on boundary S₃

Illustrations are given for constant physical properties k, h, while T₁, q₂, T₃ are known functions on the boundary and Q is a known function of position. For clarity, only a two-dimensional problem is illustrated. The finite difference formulation is given by using the following nomenclature

   T_{i,j} = T(iΔx, jΔy)

The finite difference formulation is then

   (T_{i+1,j} − 2T_{i,j} + T_{i−1,j})/Δx² + (T_{i,j+1} − 2T_{i,j} + T_{i,j−1})/Δy² = Q_{i,j}/k     (40)

   T_{i,j} = T₁ for i,j on boundary S₁

   −k ∂T/∂n |_{i,j} = q₂ for i,j on boundary S₂

   −k ∂T/∂n |_{i,j} = h (T_{i,j} − T₃) for i,j on boundary S₃

If the boundary is parallel to a coordinate axis the boundary slope is evaluated as in Chapter 7, by using either a one-sided or centered difference or a false boundary. If the boundary is more irregular and not parallel to a coordinate line, more complicated expressions are needed and the finite element method may be the better method. Equation 40 is rewritten in the form

   2 (1 + Δx²/Δy²) T_{i,j} = T_{i+1,j} + T_{i−1,j} + (Δx²/Δy²)(T_{i,j+1} + T_{i,j−1}) − Δx² Q_{i,j}/k

And this is converted to the Gauss – Seidel iterative method.

   2 (1 + Δx²/Δy²) T_{i,j}^{s+1} = T_{i+1,j}^s + T_{i−1,j}^{s+1} + (Δx²/Δy²)(T_{i,j+1}^s + T_{i,j−1}^{s+1}) − Δx² Q_{i,j}/k

Calculations proceed by setting a low i, computing from low to high j, then increasing i and repeating the procedure. The relaxation method uses

   2 (1 + Δx²/Δy²) T*_{i,j} = T_{i+1,j}^s + T_{i−1,j}^{s+1} + (Δx²/Δy²)(T_{i,j+1}^s + T_{i,j−1}^{s+1}) − Δx² Q_{i,j}/k

   T_{i,j}^{s+1} = T_{i,j}^s + β (T*_{i,j} − T_{i,j}^s)

If β = 1, this is the Gauss – Seidel method. If β > 1, it is overrelaxation; if β < 1, it is underrelaxation. The value of β may be chosen empirically, 0 < β < 2, but it can be selected theoretically for simple problems like this [117, p. 100], [3, p. 282]. In particular, the optimal value of the iteration parameter is given by

   ln (β_opt − 1) ≈ −2 √(2R)

and the error (in solving the algebraic equation) is decreased by the factor (1 − R)^N for every N iterations. For the heat conduction problem and Dirichlet boundary conditions,

   R = π²/(2n²)

(when there are n points in both x and y directions). For Neumann boundary conditions, the value is

   R = [π²/(2n²)] · 1/(1 + max [Δx²/Δy², Δy²/Δx²])

Iterative methods can also be based on lines (for 2D problems) or planes (for 3D problems). Preconditioned conjugate gradient methods have been developed (see Chap. 1).
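The Gauss – Seidel/relaxation sweeps described above can be sketched in a few lines for Laplace's equation (Q = 0, Δx = Δy, so the update is simply the average of the four neighbors). The boundary data below are chosen so the exact solution is T = x, which makes the result easy to check; this is an illustrative sketch, not from the original text:

```python
def sor_laplace(n, beta, sweeps):
    """SOR sweeps for Laplace's equation on an n x n grid with dx = dy
    and Q = 0.  Boundary values fixed at T = x, whose exact (and exact
    discrete) solution is T = x everywhere."""
    h = 1.0 / (n - 1)
    T = [[i * h if (i in (0, n - 1) or j in (0, n - 1)) else 0.0
          for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (T[i + 1][j] + T[i - 1][j]
                             + T[i][j + 1] + T[i][j - 1])
                T[i][j] += beta * (gs - T[i][j])     # beta = 1: Gauss-Seidel
    return T

T = sor_laplace(n=11, beta=1.5, sweeps=200)
```

With β = 1.5 (overrelaxation) the interior converges rapidly to the linear profile T = x.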
In these methods a series of matrix multiplications are done iteration by iteration, and the steps lend themselves to the efficiency available in parallel computers. In the multigrid method the problem is solved on several grids, each more refined than the previous one. In iterating between the solutions on different grids, one converges to the solution of the algebraic equations. A chemical engineering application is given in [118]. Software for a variety of these methods is available, as described below.

The Galerkin finite element method (FEM) is useful for solving elliptic problems and is particularly effective when the domain or geometry is irregular [119 – 125]. As an example, cover the domain with triangles and define a trial function on each triangle. The trial function takes the value 1.0 at one corner and 0.0 at the other corners, and is linear in between (see Fig. 31). These trial functions on each triangle are pieced together to give a trial function on the whole domain. For the heat conduction problem the method gives [3]

   Σ_e Σ_J A_IJ^e T_J^e = Σ_e F_I^e     (41)

where

   A_IJ^e = ∫ k ∇N_I · ∇N_J dA + ∫_{C₃} h N_I N_J dC

   F_I^e = −∫ N_I Q dA − ∫_{C₂} N_I q₂ dC + ∫_{C₃} N_I h T₃ dC

Also, a necessary condition is that

   T_i = T₁ on C₁

In these equations I and J refer to the nodes of the triangle forming element e and the summation is made over all elements. These equations represent a large set of linear equations, which are solved using matrix techniques (Chap. 1).

If the problem is nonlinear, e.g., with k or Q a function of temperature, the equations must be solved iteratively. The integrals are given for a triangle with nodes I, J, and K in counterclockwise order. Within an element,

   T = N_I(x,y) T_I + N_J(x,y) T_J + N_K(x,y) T_K

   N_I = (a_I + b_I x + c_I y)/(2Δ)

   a_I = x_J y_K − x_K y_J

   b_I = y_J − y_K

   c_I = x_K − x_J

   plus permutation on I, J, K

            | 1  x_I  y_I |
   2Δ = det | 1  x_J  y_J | = 2 (area of triangle)
            | 1  x_K  y_K |

   a_I + a_J + a_K = 2Δ

   b_I + b_J + b_K = 0

   c_I + c_J + c_K = 0

   A_IJ^e = (k/4Δ)(b_I b_J + c_I c_J)

   F_I^e = (Q/2Δ) ∫ (a_I + b_I x + c_I y) dA = QΔ/3

   x̄ = (x_I + x_J + x_K)/3,   ȳ = (y_I + y_J + y_K)/3

   a_I + b_I x̄ + c_I ȳ = 2Δ/3

Figure 31. Finite element trial function: linear polynomials on triangles

The trial functions in the finite element method are not limited to linear ones. Quadratic functions, and even higher order functions, are frequently used. The same considerations hold as for boundary value problems: the higher order trial functions converge faster but require more work. It is possible to refine both the mesh (h) and the power of the polynomial in the trial function (p) in an hp method. Some problems have constraints on some of the variables. If singularities exist in the solution, it is possible to include them in the basis functions and solve for the difference between the total solution and the singular function [126 – 129].

When applying the Galerkin finite element method, one must choose both the shape of the element and the basis functions. In two dimensions, triangular elements are usually used because it is easy to cover complicated geometries and refine parts of the mesh. However, rectangular elements sometimes have advantages, particularly when some parts of the solution do not change very much and the elements can be long. In three dimensions the same considerations apply: tetrahedral elements are frequently used, but brick elements are also possible. While linear elements (in both two and three dimensions) are usually used, higher accuracy can be obtained by using quadratic or cubic basis functions within the element. The reason is that all methods converge according to the mesh size to some power, and the power is larger when higher order elements are used. If the solution is discontinuous,

For parabolic problems in two dimensions the explicit finite difference scheme is

   ρCₚ (T_{i,j}^{n+1} − T_{i,j}^n)/Δt = (k/Δx²)(T_{i+1,j}^n − 2T_{i,j}^n + T_{i−1,j}^n) + (k/Δy²)(T_{i,j+1}^n − 2T_{i,j}^n + T_{i,j−1}^n) − Q

When Q = 0 and Δx = Δy, the time step is limited by

   Δt ≤ Δx² ρCₚ/(4k) or Δx²/(4D)

These time steps are smaller than for one-dimensional problems. For three dimensions,
or has discontinuous rst derivatives, then the the limit is
lowest order basis functions are used because
the convergence is limited by the properties of x2
t
the solution, not the nite element approxima- 6D
tion. To avoid such small time steps, which must be
One nice feature of the nite element method smaller when x decreases, an implicit method
is the use of natural boundary conditions. In could be used. This leads to large sparse matri-
this problem the natural boundary conditions are ces, rather than convenient tridiagonal matrices.
the Neumann or Robin conditions. When using
Equation 41, the problem can be solved on a do-
main that is shorter than needed to reach some 8.6. Special Methods for Fluid
limiting condition (such as at an outow bound- Mechanics
ary). The externally applied ux is still applied
at the shorter domain, and the solution inside The method of operator splitting is also useful
the truncated domain is still valid. Examples are when different terms in the equation are best
given in [107] and [131]. The effect of this is to evaluated by using different methods or as a
allow solutions in domains that are smaller, thus technique for reducing a larger problem to a se-
saving computation time and permitting the so- ries of smaller problems. Here the method is
lution in semi-innite domains. illustrated by using the Navier Stokes equa-
tions. In vector notation the equations are
u
8.5. Parabolic Equations in Two or ? +?uu = ?f p+2 u
t
Three Dimensions
The equation is solved in the following steps
Computations become much more lengthy with ?u

un
= ?un un +?f +2 un
t
two or more spatial dimensions, for example, the
1
unsteady heat conduction equation 2 pn+1 = t
u
 
T 2T 2T n+1
u
?Cp =k + Q ?u t
= p
t x2 y 2

or the unsteady diffusion equation This can be done by using the nite differ-
 
ence [132, p. 162] or the nite element method
c 2T 2c [133 135].
=D + R (c)
t x2 y 2 In uid ow problems solved with the nite
In the nite difference method an explicit element method, the basis functions for pressure
technique would evaluate the right-hand side at and velocity are often different. This is required
the n-th time level: by the LBB condition (named after Ladyshen-
skaya, Brezzi, and Babuska) [134, 135]. Some-
times a discontinuous basis function is used for
pressure to meet this condition. Other times a
penalty term is added, or the quadrature is done using a small number of quadrature points. Thus, one has to be careful how to apply the finite element method to the Navier–Stokes equations. Fortunately, software exists that has taken this into account.

Level Set Methods. Multiphase problems are complicated because the terms in the equations depend on which phase exists at a particular point, and the phase boundary may move or be unknown. It is desirable to compute on a fixed grid, and the level set formulation allows this. Consider a curved line in a two-dimensional problem or a curved surface in a three-dimensional problem. One defines a level set function φ, which is the signed distance function giving the distance from some point to the closest point on the line or surface. It is defined to be negative on one side of the interface and positive on the other. Then the curve

φ(x,y,z) = 0

represents the location of the interface. For example, in flow problems the level set function is defined as the solution to

∂φ/∂t + u·∇φ = 0

The physics governing the velocity of the interface must be defined, and this equation is solved along with the other equations representing the problem [130–137].

Lattice Boltzmann Methods. Another way to solve fluid flow problems is based on a molecular viewpoint and is called the Lattice Boltzmann method [138–141]. The treatment here follows [142]. A lattice is defined, and one solves the following equation for f_i(x,t), the probability of finding a molecule at the point x with speed c_i.

∂f_i/∂t + c_i·∇f_i = −(f_i − f_i^eq)/τ

The right-hand side represents a single time relaxation for molecular collisions, and τ is related to the kinematic viscosity. By means of a simple stepping algorithm, the computational algorithm is

f_i(x + c_i Δt, t + Δt) = f_i(x,t) − (Δt/τ)(f_i − f_i^eq)

Consider the lattice shown in Figure 32. The various velocities are

c_1 = (1,0), c_3 = (0,1), c_5 = (−1,0), c_7 = (0,−1)
c_2 = (1,1), c_4 = (−1,1), c_6 = (−1,−1), c_8 = (1,−1)
c_0 = (0,0)

The density, velocity, and shear stress (for some k) are given by

ρ = Σ_i f_i(x,t),  ρu = Σ_i c_i f_i(x,t)

τ̄ = k Σ_i c_i c_i [f_i^eq(x,t) − f_i(x,t)]

For this formulation, the kinematic viscosity and speed of sound are given by

ν = (1/3)(τ − 1/2),  c_s = 1/√3

The equilibrium distribution is

f_i^eq = w_i ρ [1 + (c_i·u)/c_s² + (c_i·u)²/(2c_s⁴) − (u·u)/(2c_s²)]

where the weighting functions are

w_0 = 4/9,  w_1 = w_3 = w_5 = w_7 = 1/9,  w_2 = w_4 = w_6 = w_8 = 1/36

With these conditions, the solution for velocity is a solution of the Navier–Stokes equation. These equations lead to a large computational problem, but it can be solved by parallel processing on multiple computers.

Figure 32.
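As a quick consistency check on the D2Q9 quantities quoted above, the zeroth and first moments of the equilibrium distribution must return exactly the density and momentum that were put in. The following sketch (illustrative, not part of the original chapter; names are ours) encodes the velocities c_i, weights w_i, and f_i^eq and verifies the moments:

```python
# Illustrative check of the D2Q9 lattice Boltzmann quantities quoted in the text.
c = [(0, 0), (1, 0), (1, 1), (0, 1), (-1, 1),
     (-1, 0), (-1, -1), (0, -1), (1, -1)]     # c_0 ... c_8 as listed above
w = [4/9] + [1/9, 1/36] * 4                   # w_0 = 4/9; c_1,3,5,7: 1/9; c_2,4,6,8: 1/36
cs2 = 1/3                                     # c_s^2 = 1/3

def feq(rho, u):
    """f_i^eq = w_i rho [1 + c.u/cs2 + (c.u)^2/(2 cs2^2) - u.u/(2 cs2)]."""
    uu = u[0]*u[0] + u[1]*u[1]
    out = []
    for wi, ci in zip(w, c):
        cu = ci[0]*u[0] + ci[1]*u[1]
        out.append(wi*rho*(1 + cu/cs2 + cu*cu/(2*cs2*cs2) - uu/(2*cs2)))
    return out

rho, u = 1.2, (0.05, -0.02)
f = feq(rho, u)
rho_out = sum(f)                              # zeroth moment -> density
u_out = (sum(fi*ci[0] for fi, ci in zip(f, c))/rho_out,
         sum(fi*ci[1] for fi, ci in zip(f, c))/rho_out)
```

The moments come out exact (not merely approximate) because the weights satisfy Σw_i = 1, Σw_i c_i = 0, and Σw_i c_iα c_iβ = c_s² δ_αβ.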
8.7. Computer Software

A variety of general-purpose computer programs are available commercially. Mathematica (http://www.wolfram.com/), Maple (http://www.maplesoft.com/) and Mathcad (http://www.mathcad.com/) all have the capability of doing symbolic manipulation so that algebraic solutions can be obtained. For example, Mathematica can solve some ordinary and partial differential equations analytically; Maple can make simple graphs and do linear algebra and simple computations, and Mathcad can do simple calculations. In this section, examples are given for the use of Matlab (http://www.mathworks.com/), which is a package of numerical analysis tools, some of which are accessed by simple commands, and some of which are accessed by writing programs in C. Spreadsheets can also be used to solve simple problems. A popular program used in chemical engineering education is Polymath (http://www.polymath-software.com/), which can numerically solve sets of linear or nonlinear equations, ordinary differential equations as initial value problems, and perform data analysis and regression.

The mathematical methods used to solve partial differential equations are described in more detail in [143–148]. Since many computer programs are available without cost, consider the following decision points. The first decision is whether to use an approximate, engineering flow model, developed from correlations, or to solve the partial differential equations that govern the problem. Correlations are quick and easy to apply, but they may not be appropriate to your problem, or give the needed detail. When using a computer package to solve partial differential equations, the first task is always to generate a mesh covering the problem domain. This is not a trivial task, and special methods have been developed to permit importation of a geometry from a computer-aided design program. Then, the mesh must be created automatically. If the boundary is irregular, the finite element method is especially well-suited, although special embedding techniques can be used in finite difference methods (which are designed to be solved on rectangular meshes). Another capability to consider is the ability to track free surfaces that move during the computation. This phenomenon introduces the same complexity that occurs in problems with a large Peclet number, with the added difficulty that the free surface moves between mesh points, and improper representation can lead to unphysical oscillations. The method used to solve the equations is important, and both explicit and implicit methods (as described above) can be used. Implicit methods may introduce unacceptable extra diffusion, so the engineer needs to examine the solution carefully. The methods used to smooth unphysical oscillations from node to node are also important, and the engineer needs to verify that the added diffusion or smoothing does not give inaccurate solutions. Since current-day problems are mostly nonlinear, convergence is always an issue since the problems are solved iteratively. Robust programs provide several methods for convergence, each of which is best in some circumstance or other. It is wise to have a program that includes many iterative methods. If the iterative solver is not very robust, the only recourse to solving a steady-state problem may be to integrate the time-dependent problem to steady state. The solution time may be long, and the final result may be further from convergence than would be the case if a robust iterative solver were used.

A variety of computer programs is available on the internet, some of them free. First consider general-purpose programs. On the NIST web page, http://gams.nist.gov/, choose "problem decision tree", then "differential and integral equations", then "partial differential equations". The programs are organized by type of problem (elliptic, parabolic, and hyperbolic) and by the number of spatial dimensions (one or more than one). On the Netlib web site, http://www.netlib.org/, search on "partial differential equation". The website http://software.sandia.gov has a variety of programs available. Lau [141, 145] provides many programs in C++ (also see http://www.nr.com/). The multiphysics program Comsol Multiphysics (formerly FEMLAB) also solves many standard equations arising in mathematical physics.

Computational fluid dynamics (CFD) (→ Computational Fluid Dynamics) programs are more specialized, and most of them have been designed to solve sets of equations that are appropriate to specific industries. They can then include approximations and correlations for some features that would be difficult to solve for directly. Four widely used major packages are Fluent (http://www.fluent.com/), CFX (now part of ANSYS), Comsol Multiphysics (formerly FEMLAB) (http://www.comsol.com/), and ANSYS (http://www.ansys.com/). Of these, Comsol Multiphysics is particularly useful because it has a convenient graphical user interface, permits easy mesh generation and refinement (including adaptive mesh refinement), allows the user to add in phenomena and additional equations easily, permits solution by continuation methods (thus enhancing convergence), and has extensive graphical output capabilities. Other packages are also available (see http://cfd-online.com/), and these may contain features and correlations specific to the engineer's industry. One important point to note is that for turbulent flow, all the programs contain approximations, using the k-epsilon models of turbulence, or large eddy simulations; the direct numerical simulation of turbulence is too slow to apply it to very big problems, although it does give insight (independent of any approximations) that is useful for interpreting turbulent phenomena. Thus, the method used to include those turbulent correlations is important, and the method also may affect convergence or accuracy.

9. Integral Equations [149–155]

If the dependent variable appears under an integral sign an equation is called an integral equation; if derivatives of the dependent variable appear elsewhere in the equation it is called an integrodifferential equation. This chapter describes the various classes of equations, gives information concerning Green's functions, and presents numerical methods for solving integral equations.

9.1. Classification

Volterra integral equations have an integral with a variable limit, whereas Fredholm integral equations have a fixed limit. Volterra equations are usually associated with initial value or evolutionary problems, whereas Fredholm equations are analogous to boundary value problems. The terms in the integral can be unbounded, but still yield bounded integrals, and these equations are said to be weakly singular. A Volterra equation of the second kind is

y(t) = g(t) + λ ∫_a^t K(t,s) y(s) ds    (42)

whereas a Volterra equation of the first kind is

g(t) = ∫_a^t K(t,s) y(s) ds

Equations of the first kind are very sensitive to solution errors so that they present severe numerical problems.

An example of a problem giving rise to a Volterra equation of the second kind is the following heat conduction problem:

ρC_p ∂T/∂t = k ∂²T/∂x²,  0 ≤ x < ∞, t > 0

T(x,0) = 0,  ∂T/∂x (0,t) = g(t)

lim_{x→∞} T(x,t) = 0,  lim_{x→∞} ∂T/∂x = 0

If this is solved by using Fourier transforms the solution is

T(x,t) = −(1/√π) ∫_0^t g(s) (1/√(t−s)) e^{−x²/4(t−s)} ds

Suppose the problem is generalized so that the boundary condition is one involving the solution T, which might occur with a radiation boundary condition or heat-transfer coefficient. Then the boundary condition is written as

∂T/∂x = G(T,t),  x = 0, t > 0

The solution to this problem is

T(x,t) = −(1/√π) ∫_0^t G(T(0,s),s) (1/√(t−s)) e^{−x²/4(t−s)} ds

If T*(t) is used to represent T(0,t), then

T*(t) = −(1/√π) ∫_0^t G(T*(s),s) (1/√(t−s)) ds

Thus the behavior of the solution at the boundary is governed by an integral equation. Nagel and Kluge [156] use a similar approach to solve for adsorption in a porous catalyst.

The existence and uniqueness of the solution can be proved [151, p. 30, 32].
Sometimes the kernel is of the form

K(t,s) = K(t−s)

Equations of this form are called convolution equations and can be solved by taking the Laplace transform. For the integral equation

Y(t) = G(t) + ∫_0^t K(t−τ) Y(τ) dτ

K(t)*Y(t) ≡ ∫_0^t K(t−τ) Y(τ) dτ

the Laplace transform is

y(s) = g(s) + k(s) y(s)

k(s) y(s) = L[K(t)*Y(t)]

Solving this for y(s) gives

y(s) = g(s)/(1 − k(s))

If the inverse transform can be found, the integral equation is solved.

A Fredholm equation of the second kind is

y(x) = g(x) + λ ∫_a^b K(x,s) y(s) ds    (43)

whereas a Fredholm equation of the first kind is

∫_a^b K(x,s) y(s) ds = g(x)

The limits of integration are fixed, and these problems are analogous to boundary value problems. An eigenvalue problem is a homogeneous equation of the second kind.

y(x) = λ ∫_a^b K(x,s) y(s) ds    (44)

Solutions to this problem occur only for specific values of λ, the eigenvalues. Usually the Fredholm equation of the second or first kind is solved for values of λ different from these, which are called regular values.

Nonlinear Volterra equations arise naturally from initial value problems. For the initial value problem

dy/dt = F(t, y(t))

both sides can be integrated from 0 to t to obtain

y(t) = y(0) + ∫_0^t F(s, y(s)) ds

which is a nonlinear Volterra equation. The general nonlinear Volterra equation is

y(t) = g(t) + ∫_0^t K(t, s, y(s)) ds    (45)

Theorem [151, p. 55]. If g(t) is continuous, the kernel K(t,s,y) is continuous in all variables and satisfies a Lipschitz condition

|K(t,s,y) − K(t,s,z)| ≤ L |y−z|

then the nonlinear Volterra equation has a unique continuous solution.

A successive substitution method for its solution is

y_{n+1}(t) = g(t) + ∫_0^t K[t, s, y_n(s)] ds

Nonlinear Fredholm equations have special names. The equation

f(x) = ∫_0^1 K[x, y, f(y)] dy

is called the Urysohn equation [150, p. 208]. The special equation

f(x) = ∫_0^1 K[x,y] F[y, f(y)] dy

is called the Hammerstein equation [150, p. 209]. Iterative methods can be used to solve these equations, and these methods are closely tied to fixed point problems. A fixed point problem is

x = F(x)

and a successive substitution method is

x_{n+1} = F(x_n)

Local convergence theorems prove the process convergent if the solution is close enough to the answer, whereas global convergence theorems are valid for any initial guess [150, p. 229–231]. The successive substitution method for nonlinear Fredholm equations is

y_{n+1}(x) = ∫_0^1 K[x, s, y_n(s)] ds

Typical conditions for convergence include that the function satisfies a Lipschitz condition.
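The successive substitution idea can be demonstrated on the scalar fixed-point problem x = F(x). The sketch below (illustrative only; the helper name is ours) iterates x_{n+1} = F(x_n) for F(x) = cos x, which is a contraction near its fixed point (|F′| < 1, i.e., a Lipschitz constant less than one), so the iteration converges:

```python
import math

def successive_substitution(F, x0, tol=1e-12, maxit=500):
    """Iterate x_{n+1} = F(x_n) until successive iterates differ by less than tol."""
    x = x0
    for _ in range(maxit):
        x_new = F(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# F(x) = cos(x): |F'(x)| = |sin(x)| < 1 near the fixed point, so the
# contraction-mapping (Banach) argument guarantees convergence here.
x_star = successive_substitution(math.cos, 1.0)
```

The same loop, applied to a vector of nodal values with F replaced by a quadrature of the kernel, is exactly the successive substitution method for the Urysohn and Hammerstein equations above.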
9.2. Numerical Methods for Volterra Equations of the Second Kind

Volterra equations of the second kind are analogous to initial value problems. An initial value problem can be written as a Volterra equation of the second kind, although not all Volterra equations can be written as initial value problems [151, p. 7]. Here the general nonlinear Volterra equation of the second kind is treated (Eq. 45). The simplest numerical method involves replacing the integral by a quadrature using the trapezoid rule.

y_n ≡ y(t_n) = g(t_n) + Δt [ (1/2) K(t_n, t_0, y_0) + Σ_{i=1}^{n−1} K(t_n, t_i, y_i) + (1/2) K(t_n, t_n, y_n) ]

This equation is a nonlinear algebraic equation for y_n. Since y_0 is known it can be applied to solve for y_1, y_2, . . . in succession. For a single integral equation, at each step one must solve a single nonlinear algebraic equation for y_n. Typically, the error in the solution to the integral equation is proportional to Δt^α, and the power α is the same as the power in the quadrature error [151, p. 97].

The stability of the method [151, p. 111] can be examined by considering the equation

y(t) = 1 − ∫_0^t y(s) ds

whose solution is

y(t) = e^{−t}

Since the integral equation can be differentiated to obtain the initial value problem

dy/dt = −y, y(0) = 1

the stability results are identical to those for initial value methods. In particular, using the trapezoid rule for integral equations is identical to using this rule for initial value problems. The method is A-stable.

Higher order integration methods can also be used [151, p. 114, 124]. When the kernel is infinite at certain points, i.e., when the problem has a weak singularity, see [151, p. 71, 151].

9.3. Numerical Methods for Fredholm, Urysohn, and Hammerstein Equations of the Second Kind

Whereas Volterra equations could be solved from one position to the next, like initial value differential equations, Fredholm equations must be solved over the entire domain, like boundary value differential equations. Thus, large sets of equations will be solved and the notation is designed to emphasize that.

The methods are also based on quadrature formulas. For the integral

I(φ) = ∫_a^b φ(y) dy

a quadrature formula is written:

I(φ) = Σ_{i=0}^n w_i φ(y_i)

Then the integral Fredholm equation can be rewritten as

f(x) − λ Σ_{i=0}^n w_i K(x, y_i) f(y_i) = g(x),  a ≤ x ≤ b    (46)

If this equation is evaluated at the points x = y_j,

f(y_j) − λ Σ_{i=0}^n w_i K(y_j, y_i) f(y_i) = g(y_j)

is obtained, which is a set of linear equations to be solved for { f(y_j) }. The solution at any point is then given by Equation 46.

A common type of integral equation has a singular kernel along x = y. This can be transformed to a less severe singularity by writing

∫_a^b K(x,y) f(y) dy = ∫_a^b K(x,y) [f(y) − f(x)] dy + ∫_a^b K(x,y) f(x) dy = ∫_a^b K(x,y) [f(y) − f(x)] dy + f(x) H(x)

where

H(x) = ∫_a^b K(x,y) dy

is a known function. The integral equation is then replaced by
f(x) = g(x) + Σ_{i=0}^n w_i K(x, y_i) [f(y_i) − f(x)] + f(x) H(x)

Collocation methods can be applied as well [149, p. 396]. To solve integral Equation 43 expand f in the functions

f = Σ_{i=0}^n a_i φ_i(x)

Substitute f into the equation to form the residual

Σ_{i=0}^n a_i φ_i(x) − λ Σ_{i=0}^n a_i ∫_a^b K(x,y) φ_i(y) dy = g(x)

Evaluate the residual at the collocation points

Σ_{i=0}^n a_i φ_i(x_j) − λ Σ_{i=0}^n a_i ∫_a^b K(x_j,y) φ_i(y) dy = g(x_j)

The expansion can be in piecewise polynomials, leading to a collocation finite element method, or global polynomials, leading to a global approximation. If orthogonal polynomials are used then the quadratures can make use of the accurate Gaussian quadrature points to calculate the integrals. Galerkin methods are also possible [149, p. 406]. Mills et al. [157] consider reaction–diffusion problems and say the choice of technique cannot be made in general because it is highly dependent on the kernel.

When the integral equation is nonlinear, iterative methods must be used to solve it. Convergence proofs are available, based on Banach's contractive mapping principle. Consider the Urysohn equation, with g(x) = 0 without loss of generality:

f(x) = ∫_a^b F[x, y, f(y)] dy

The kernel satisfies the Lipschitz condition

max_{a≤x,y≤b} |F[x,y,f(y)] − F[x,z,f(z)]| ≤ K |y−z|

Theorem [150, p. 214]. If the constant K is < 1 and certain other conditions hold, the successive substitution method

f_{n+1}(x) = ∫_a^b F[x, y, f_n(y)] dy,  n = 0,1,. . .

converges to the solution of the integral equations.

9.4. Numerical Methods for Eigenvalue Problems

Eigenvalue problems are treated similarly to Fredholm equations, except that the final equation is a matrix eigenvalue problem instead of a set of simultaneous equations. For example,

λ Σ_{i=0}^n w_i K(y_j, y_i) f(y_i) = f(y_j),  j = 0,1,. . .,n

leads to the matrix eigenvalue problem

λ K D f = f

where D is a diagonal matrix with D_ii = w_i.

9.5. Green's Functions [158–160]

Integral equations can arise from the formulation of a problem by using Green's function. For example, the equation governing heat conduction with a variable heat generation rate is represented in differential form as

d²T/dx² = −Q(x)/k,  T(0) = T(1) = 0

In integral form the same problem is [149, pp. 57–60]

T(x) = (1/k) ∫_0^1 G(x,y) Q(y) dy

G(x,y) = x(1−y),  x ≤ y
G(x,y) = y(1−x),  y ≤ x

Green's functions for typical operators are given below.

For the Poisson equation with solution decaying to zero at infinity

∇²ψ = −4πρ

the formulation as an integral equation is

ψ(r) = ∫_V ρ(r_0) G(r, r_0) dV_0

where Green's function is [50, p. 891]

G(r, r_0) = 1/r in three dimensions

G(r, r_0) = −2 ln r in two dimensions

where r = √[(x−x_0)² + (y−y_0)² + (z−z_0)²] in three dimensions

and r = √[(x−x_0)² + (y−y_0)²] in two dimensions
For the problem

∂u/∂t = D∇²u,  u = 0 on S,

with a point source at x_0, y_0, z_0, Green's function is [44, p. 355]

u = 1/(8[πD(t−τ)]^{3/2}) e^{−[(x−x_0)² + (y−y_0)² + (z−z_0)²]/4D(t−τ)}

When the problem is

∂c/∂t = D∇²c

c = f(x,y,z) in S at t = 0

c = φ(x,y,z) on S, t > 0

the solution can be represented as [44, p. 353]

c = ∫∫∫ (u)_{τ=0} f(x,y,z) dx dy dz + D ∫_0^t ∫∫ φ(x,y,z,τ) (∂u/∂n) dS dτ

When the problem is two dimensional,

u = 1/(4πD(t−τ)) e^{−[(x−x_0)² + (y−y_0)²]/4D(t−τ)}

c = ∫∫ (u)_{τ=0} f(x,y) dx dy + D ∫_0^t ∫ φ(x,y,τ) (∂u/∂n) dC dτ

For the following differential equation and boundary conditions

(1/x^{a−1}) d/dx (x^{a−1} dc/dx) = f[x, c(x)],

dc/dx (0) = 0,  (2/Sh) dc/dx (1) + c(1) = g

where Sh is the Sherwood number, the problem can be written as a Hammerstein integral equation:

c(x) = g − ∫_0^1 G(x,y,Sh) f[y,c(y)] y^{a−1} dy

Green's functions for the differential operators are [163]

a = 1:
G(x,y,Sh) = 1 + 2/Sh − x,  y ≤ x
G(x,y,Sh) = 1 + 2/Sh − y,  x < y

a = 2:
G(x,y,Sh) = 2/Sh − ln x,  y ≤ x
G(x,y,Sh) = 2/Sh − ln y,  x < y

a = 3:
G(x,y,Sh) = 2/Sh + 1/x − 1,  y ≤ x
G(x,y,Sh) = 2/Sh + 1/y − 1,  x < y

Green's functions for the reaction diffusion problem were used to provide computable error bounds by Ferguson and Finlayson [163].

If Green's function has the form

K(x,y) = u(x) v(y),  0 ≤ y ≤ x
K(x,y) = u(y) v(x),  x ≤ y ≤ 1

the problem

f(x) = ∫_0^1 K(x,y) F[y, f(y)] dy

may be written as

f(x) − ∫_0^x [u(x) v(y) − u(y) v(x)] F[y, f(y)] dy = α v(x)

where

α = ∫_0^1 u(y) F[y, f(y)] dy

Thus, the problem ends up as one directly formulated as a fixed point problem:

f = Φ(f)

When the problem is the diffusion–reaction one, the form is

c(x) = g − ∫_0^x [u(x) v(y) − u(y) v(x)] f[y,c(y)] y^{a−1} dy − α v(x)

α = ∫_0^1 u(y) f[y,c(y)] y^{a−1} dy

Dixit and Taularidis [164] solved problems involving Fischer–Tropsch synthesis reactions in a catalyst pellet using a similar method.
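The planar (a = 1) Green's function listed above can be checked the same way: for a constant reaction term f ≡ 1 the boundary value problem c″ = f, c′(0) = 0, (2/Sh)c′(1) + c(1) = g has the closed-form solution c(x) = g − 1/2 − 2/Sh + x²/2, against which the integral representation can be compared (illustrative sketch; the function name is ours):

```python
def slab_concentration(x, Sh, g, f, n=400):
    # c(x) = g - integral_0^1 G(x,y,Sh) f(y) dy for a = 1 (planar geometry), with
    # G = 1 + 2/Sh - x for y <= x and G = 1 + 2/Sh - y for x < y (kink at y = x).
    def G(x, y):
        return 1.0 + 2.0/Sh - (x if y <= x else y)
    total = 0.0
    for a, b in ((0.0, x), (x, 1.0)):
        h = (b - a)/n
        for i in range(n):
            y = a + (i + 0.5)*h      # midpoint rule on each smooth piece
            total += G(x, y)*f(y)*h
    return g - total

# For f = 1, Sh = 5, g = 2, x = 0.3 the exact value is g - 1/2 - 2/Sh + x^2/2.
c_num = slab_concentration(0.3, 5.0, 2.0, lambda y: 1.0)
c_exact = 2.0 - 0.5 - 2.0/5.0 + 0.3**2/2
```

The integrand is constant on [0, x] and linear on [x, 1], so the split midpoint rule is again exact to rounding error.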
9.6. Boundary Integral Equations and Boundary Element Method

The boundary element method utilizes Green's theorem and integral equations. Here, the method is described briefly for the following boundary value problem in two or three dimensions

∇²φ = 0,  φ = f_1 on S_1,  ∂φ/∂n = f_2 on S_2

Green's theorem (see page 46) says that for any functions sufficiently smooth

∫_V (φ∇²ψ − ψ∇²φ) dV = ∫_S (φ ∂ψ/∂n − ψ ∂φ/∂n) dS

Suppose the function ψ satisfies the equation

∇²ψ = 0

In two and three dimensions, such a function is

ψ = ln r,  r = √[(x−x_0)² + (y−y_0)²] in two dimensions

ψ = 1/r,  r = √[(x−x_0)² + (y−y_0)² + (z−z_0)²] in three dimensions

where {x_0, y_0} or {x_0, y_0, z_0} is a point in the domain. The solution φ also satisfies

∇²φ = 0

so that

∫_S (φ ∂ψ/∂n − ψ ∂φ/∂n) dS = 0

Consider the two-dimensional case. Since the function ψ is singular at a point, the integrals must be carefully evaluated. For the region shown in Figure 33, the domain is S = S_1 + S_2; a small circle of radius r_0 is placed around the point P at x_0, y_0. Then the full integral is

∫_S (φ ∂ln r/∂n − ln r ∂φ/∂n) dS + ∫_{θ=0}^{2π} (φ ∂ln r_0/∂n − ln r_0 ∂φ/∂n) r_0 dθ = 0

As r_0 approaches 0,

lim_{r_0→0} r_0 ln r_0 = 0

and

lim_{r_0→0} ∫_{θ=0}^{2π} φ (∂ln r_0/∂n) r_0 dθ = −2πφ(P)

Thus for an internal point,

φ(P) = (1/2π) ∫_S (φ ∂ln r/∂n − ln r ∂φ/∂n) dS    (47)

If P is on the boundary, the result is [165, p. 464]

φ(P) = (1/π) ∫_S (φ ∂ln r/∂n − ln r ∂φ/∂n) dS

Putting in the boundary conditions gives

πφ(P) = ∫_{S_1} (f_1 ∂ln r/∂n − ln r ∂φ/∂n) dS + ∫_{S_2} (φ ∂ln r/∂n − f_2 ln r) dS    (48)

This is an integral equation for φ on the boundary. Note that the order is one less than the original differential equation. However, the integral equation leads to matrices that are dense rather than banded or sparse, so some of the advantage of lower dimension is lost. Once this integral equation (Eq. 48) is solved to find φ on the boundary, it can be substituted in Equation 47 to find φ anywhere in the domain.

Figure 33. Domain with singularity at P

In the boundary finite element method, both the function and its normal derivative along the boundary are approximated.

φ = Σ_{j=1}^N φ_j N_j(ξ),  ∂φ/∂n = Σ_{j=1}^N (∂φ/∂n)_j N_j(ξ)

One choice of trial functions can be the piecewise constant functions shown in Figure 34. The integral equation then becomes
Figure 34. Trial function on boundary for boundary finite element method

πφ_i = Σ_{j=1}^N [ φ_j ∫_{s_j} (∂ln r_i/∂n) ds − (∂φ/∂n)_j ∫_{s_j} ln r_i ds ]

The function φ_j is of course known along s_1, whereas the derivative (∂φ/∂n)_j is known along s_2. This set of equations is then solved for φ_i and (∂φ/∂n)_i along the boundary. This constitutes the boundary integral method applied to the Laplace equation.

If the problem is Poisson's equation

∇²φ = g(x,y)

Green's theorem gives

∫_S (φ ∂ln r/∂n − ln r ∂φ/∂n) dS + ∫_A g ln r dA = 0

Thus, for an internal point,

2πφ(P) = ∫_S (φ ∂ln r/∂n − ln r ∂φ/∂n) dS + ∫_A g ln r dA    (49)

and for a boundary point,

πφ(P) = ∫_S (φ ∂ln r/∂n − ln r ∂φ/∂n) dS + ∫_A g ln r dA    (50)

If the region is nonhomogeneous this method can be used [165, p. 475], and it has been applied to heat conduction by Hsieh and Shang [166]. The finite element method can be applied in one region and the boundary finite element method in another region, with appropriate matching conditions [165, p. 478]. If the problem is nonlinear, then it is more difficult. For example, consider an equation such as Poisson's in which the function depends on the solution as well

∇²φ = g(x,y,φ)

Then the integral appearing in Equation 50 must be evaluated over the entire domain, and the solution in the interior is given by Equation 49. For further applications, see [167] and [168].

10. Optimization

We provide a survey of systematic methods for a broad variety of optimization problems. The survey begins with a general classification of mathematical optimization problems involving continuous and discrete (or integer) variables. This is followed by a review of solution methods of the major types of optimization problems for continuous and discrete variable optimization, particularly nonlinear and mixed-integer nonlinear programming. In addition, we discuss direct search methods that do not require derivative information as well as global optimization methods. We also review extensions of these methods for the optimization of systems described by differential and algebraic equations.

10.1. Introduction

Optimization is a key enabling tool for decision making in chemical engineering [306]. It has evolved from a methodology of academic interest into a technology that continues to make significant impact in engineering research and practice. Optimization algorithms form the core tools for a) experimental design, parameter estimation, model development, and statistical analysis; b) process synthesis, analysis, design, and retrofit; c) model predictive control and real-time optimization; and d) planning, scheduling, and the integration of process operations into the supply chain [307, 308].

As shown in Figure 35, optimization problems that arise in chemical engineering can be classified in terms of continuous and discrete variables. For the former, nonlinear programming (NLP) problems form the most general case, and widely applied specializations include linear programming (LP) and quadratic programming (QP). An important distinction for NLP is whether the optimization problem is convex or nonconvex. The latter NLP problem may
96 Mathematics in Chemical Engineering

have multiple local optima, and an important to a mixed-integer nonlinear program (MINLP)
question is whether a global solution is required when any of the functions involved are nonlin-
for the NLP. Another important distinction is ear. If all functions are linear it corresponds to
whether the problem is assumed to be differ- a mixed-integer linear program (MILP). If there
entiable or not. are no 01 variables, then problem 51 reduces
Mixed-integer problems also include discrete to a nonlinear program 52 or linear program 65
variables. These can be written as mixed-integer depending on whether or not the functions are
nonlinear programs (MINLP) or as mixed- linear.
integer linear programs (MILP) if all variables
appear linearly in the constraint and objective
functions. For the latter an important case oc-
curs when all the variables are integer; this gives
rise to an integer programming (IP) problem. IPs
can be further classied into many special prob-
lems (e.g., assignment, traveling salesman, etc.),
which are not shown in Figure 35. Similarly, the
MINLP problem also gives rise to special prob-
lem classes, although here the main distinction
is whether its relaxation is convex or nonconvex.
The ingredients of formulating optimization
problems include a mathematical model of the
system, an objective function that quanties a
criterion to be extremized, variables that can
serve as decisions, and, optionally, inequality
constraints on the system. When represented in
algebraic form, the general formulation of dis-
crete/continuous optimization problems can be Figure 35. Classes of optimization problems and algorithms
written as the following mixed-integer optimiza-
tion problem:
Min f (x,y)
We rst start with continuous variable opti-
mization and consider in the next section the so-
s.t. h (x,y) =0 lution of NLPs with differentiable objective and
(51) constraint functions. If only local solutions are
g (x,y) 0
required for the NLP, then very efcient large-
xn ,y{0,1}t scale methods can be considered. This is fol-
where f (x, y) is the objective function (e.g., lowed by methods that are not based on local
cost, energy consumption, etc.), h(x, y) = 0 are optimality criteria; we consider direct search op-
the equations that describe the performance of timization methods that do not require deriva-
the system (e.g., material balances, production tives, as well as deterministic global optimiza-
rates), the inequality constraints g(x, y) 0 can tion methods. Following this, we consider the
dene process specications or constraints for solution of mixed-integer problems and outline
feasible plans and schedules, and s.t. denotes the main characteristics of algorithms for their
subject to. Note that the operator Max f (x) is solution. Finally, we conclude with a discussion
equivalent to Min f (x). We dene the real n- of optimization modeling software and its im-
vector x to represent the continuous variables plementation in engineering models.
while the t-vector y represents the discrete vari-
ables, which, without loss of generality, are of-
ten restricted to take 01 values to dene logi- 10.2. Gradient-Based Nonlinear
cal or discrete decisions, such as assignment of Programming
equipment and sequencing of tasks. (These vari-
ables can also be formulated to take on other in- For continuous variable optimization we con-
teger values as well.) Problem 51 corresponds sider problem 51 without discrete variables y.
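To make the formulation 51 concrete, the following minimal sketch enumerates a single 0-1 variable y and solves the continuous subproblem that remains for each fixed choice, which is exactly the reduction to an NLP noted above. The objective, constraint, and bounds are hypothetical and not from the original text:

```python
# Toy instance of problem 51: Min (x - 2)^2 + 2y  s.t.  x <= 1 + 2y,  0 <= x <= 3,  y in {0, 1}.
# Fixing y leaves a one-dimensional convex NLP whose solution is the unconstrained
# minimizer x = 2 clipped to the feasible interval [0, min(3, 1 + 2y)].
def solve_fixed_y(y):
    x = min(max(2.0, 0.0), min(3.0, 1.0 + 2.0 * y))  # clip x* = 2 to the bounds
    return x, (x - 2.0) ** 2 + 2.0 * y

best = min((solve_fixed_y(y) + (y,) for y in (0, 1)), key=lambda t: t[1])
x_opt, f_opt, y_opt = best
print(y_opt, x_opt, f_opt)  # -> 0 1.0 1.0
```

Here y = 1 relaxes the constraint but pays a fixed cost of 2, so the optimizer keeps y = 0 and accepts the constrained continuous solution x = 1. Enumeration is only viable for a handful of binaries; the MILP/MINLP algorithms discussed later avoid it.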
The general NLP problem 52 is presented below:

Min f(x)
s.t. h(x) = 0    (52)
g(x) ≤ 0

and we assume that the functions f(x), h(x), and g(x) have continuous first and second derivatives. A key characteristic of problem 52 is whether the problem is convex or not, i.e., whether it has a convex objective function and a convex feasible region. A function φ(x) of x in some domain X is convex if and only if for all points x1, x2 ∈ X:

φ[αx1 + (1 − α)x2] ≤ αφ(x1) + (1 − α)φ(x2)    (53)

holds for all α ∈ (0, 1). If φ(x) is differentiable, then an equivalent definition is:

φ(x1) + ∇φ(x1)ᵀ(x − x1) ≤ φ(x)    (54)

Strict convexity requires that the inequalities 53 and 54 be strict. Convex feasible regions require g(x) to be a convex function and h(x) to be linear. If 52 is a convex problem, then any local solution is guaranteed to be a global solution to 52. Moreover, if the objective function is strictly convex, then this solution x* is unique. On the other hand, nonconvex problems may have multiple local solutions, i.e., feasible solutions that minimize the objective function within some neighborhood about the solution.

We first consider methods that find only local solutions to nonconvex problems, as more difficult (and expensive) search procedures are required to find a global solution. Local methods are currently very efficient and have been developed to deal with very large NLPs. Moreover, by considering the structure of convex NLPs (including LPs and QPs), even more powerful methods can be applied. To study these methods, we first consider conditions for local optimality.

Local Optimality Conditions - A Kinematic Interpretation. Local optimality conditions are generally derived from gradient information from the objective and constraint functions. The proof follows by identifying a local minimum point that has no feasible descent direction. Invoking a theorem of the alternative (e.g., Farkas' Lemma) leads to the celebrated Karush-Kuhn-Tucker (KKT) conditions [169]. Instead of a formal development of these conditions, we present here a more intuitive, kinematic illustration. Consider the contour plot of the objective function f(x) given in Figure 36 as a smooth valley in the space of the variables x1 and x2. For the contour plot of this unconstrained problem, Min f(x), consider a ball rolling in this valley to the lowest point of f(x), denoted by x*. This point is at least a local minimum and is defined by a point with zero gradient and at least nonnegative curvature in all (nonzero) directions p. We use the first derivative (gradient) vector ∇f(x) and second derivative (Hessian) matrix ∇xx f(x) to state the necessary first- and second-order conditions for unconstrained optimality:

∇x f(x*) = 0,    pᵀ∇xx f(x*)p ≥ 0 for all p ≠ 0    (55)

These necessary conditions for local optimality can be strengthened to sufficient conditions by making the inequality in the relations 55 strict (i.e., positive curvature in all directions). Equivalently, the sufficient (necessary) curvature conditions can be stated as: ∇xx f(x*) has all positive (nonnegative) eigenvalues and is therefore defined as a positive (semi)definite matrix.

Figure 36. Unconstrained minimum

Now consider the imposition of inequality [g(x) ≤ 0] and equality constraints [h(x) = 0] in Figure 37. Continuing the kinematic interpretation, the inequality constraints g(x) ≤ 0 act as fences in the valley, and equality constraints
h(x) = 0 as rails. Consider now a ball, constrained on a rail and within fences, to roll to its lowest point. This stationary point occurs when the normal forces exerted by the fences [−∇g(x*)] and rails [−∇h(x*)] on the ball are balanced by the force of gravity [−∇f(x*)]. This condition can be stated by the following KKT necessary conditions for constrained optimality:

Stationarity Condition: It is convenient to define the Lagrangian function L(x, λ, ν) = f(x) + h(x)ᵀλ + g(x)ᵀν along with weights or multipliers λ and ν for the constraints. These multipliers are also known as dual variables and shadow prices. The stationarity condition (balance of forces acting on the ball) is then given by:

∇L(x, λ, ν) = ∇f(x) + ∇h(x)λ + ∇g(x)ν = 0    (56)

Feasibility: Both inequality and equality constraints must be satisfied (ball must lie on the rail and within the fences):

h(x) = 0,    g(x) ≤ 0    (57)

Complementarity: Inequality constraints are either strictly satisfied (active) or inactive, in which case they are irrelevant to the solution. In the latter case the corresponding KKT multiplier must be zero. This is written as:

νᵀg(x) = 0,    ν ≥ 0    (58)

Figure 37. Constrained minimum

Constraint Qualification: For a local optimum to satisfy the KKT conditions, an additional regularity condition or constraint qualification (CQ) is required. The KKT conditions are derived from gradient information, and the CQ can be viewed as a condition on the relative influence of constraint curvature. In fact, linearly constrained problems with nonempty feasible regions require no constraint qualification. On the other hand, as seen in Figure 38, the problem Min x1, s.t. x2 ≥ 0, (x1)³ ≥ x2 has a minimum point at the origin but does not satisfy the KKT conditions because it does not satisfy a CQ.

Figure 38. Failure of KKT conditions at constrained minimum (note linear dependence of constraint gradients)

CQs can be defined in several ways. For instance, the linear independence constraint qualification (LICQ) requires that the active constraints at x* be linearly independent, i.e., the matrix [∇h(x*)|∇gA(x*)] is full column rank, where gA is the vector of inequality constraints with elements that satisfy gA,i(x*) = 0. With LICQ, the KKT multipliers (λ, ν) are guaranteed to be unique at the optimal solution. The weaker Mangasarian-Fromovitz constraint qualification (MFCQ) requires only that ∇h(x*) have full column rank and that a direction p exist that satisfies ∇h(x*)ᵀp = 0 and ∇gA(x*)ᵀp > 0. With MFCQ, the KKT multipliers (λ, ν) are guaranteed to be bounded (but not necessarily unique) at the optimal solution. Additional discussion can be found in [169].

Second Order Conditions: As with unconstrained optimization, nonnegative (positive) curvature is necessary (sufficient) in all of the allowable (i.e., constrained) nonzero directions p. This condition can be stated in several ways. A typical necessary second-order condition requires a point x* that satisfies LICQ and first-order conditions 56-58 with multipliers (λ, ν) to satisfy the additional conditions given by:
pᵀ∇xxL(x*, λ, ν)p ≥ 0 for all p ≠ 0, ∇h(x*)ᵀp = 0, ∇gA(x*)ᵀp = 0    (59)

The corresponding sufficient conditions require that the inequality in 59 be strict. Note that for the example in Figure 36, the allowable directions p span the entire space for x while in Figure 37, there are no allowable directions p.

Example: To illustrate the KKT conditions, consider the following unconstrained NLP:

Min (x1)² − 4x1 + 3/2(x2)² − 7x2 + x1x2 + 9 − ln x1 − ln x2    (60)

corresponding to the contour plot in Figure 36. The optimal solution can be found by solving for the first-order conditions 55:

∇f(x) = [2x1 − 4 + x2 − 1/x1 ; 3x2 − 7 + x1 − 1/x2] = 0,    x* = [1.3475 ; 2.0470]    (61)

and f(x*) = −2.8742. Checking the second-order conditions leads to:

∇xx f(x*) = [2 + 1/(x1)²  1 ; 1  3 + 1/(x2)²] = [2.5507  1 ; 1  3.2387]  (positive definite)    (62)

Now consider the constrained NLP:

Min (x1)² − 4x1 + 3/2(x2)² − 7x2 + x1x2 + 9 − ln x1 − ln x2
s.t. 4 − x1x2 ≤ 0    (63)
2x1 − x2 = 0

that corresponds to the plot in Figure 37. The optimal solution can be found by applying the first-order KKT conditions 56-58:

∇L(x, λ, ν) = ∇f(x) + ∇h(x)λ + ∇g(x)ν
= [2x1 − 4 + x2 − 1/x1 ; 3x2 − 7 + x1 − 1/x2] + [2 ; −1]λ + [−x2 ; −x1]ν = 0
g(x) = 4 − x1x2 ≤ 0,    h(x) = 2x1 − x2 = 0
νg(x) = ν(4 − x1x2) = 0,    ν ≥ 0
x* = [1.4142 ; 2.8284],    λ = 1.036,    ν = 1.068

and f(x*) = −1.8421. Checking the second-order conditions 59 leads to:

∇xxL(x*, λ, ν) = ∇xx[f(x) + h(x)λ + g(x)ν] = [2 + 1/(x1)²  1 − ν ; 1 − ν  3 + 1/(x2)²] = [2.5  −0.068 ; −0.068  3.125]

[∇h(x*)|∇gA(x*)] = [2  −2.8284 ; −1  −1.4142],    [∇h(x*)|∇gA(x*)]ᵀp = 0  ⇒  p = 0

Note that LICQ is satisfied. Moreover, because [∇h(x*)|∇gA(x*)] is square and nonsingular, there are no nonzero vectors p that satisfy the allowable directions. Hence, the sufficient second-order conditions (pᵀ∇xxL(x*, λ, ν)p > 0 for all allowable p) are vacuously satisfied for this problem.

Convex Cases of NLP. Linear programs and quadratic programs are special cases of problem 52 that allow for more efficient solution, based on application of KKT conditions 56-59. Because these are convex problems, any locally optimal solution is a global solution. In particular, if the objective and constraint functions in problem 52 are linear then the following linear program (LP):

Min cᵀx
s.t. Ax = b    (65)
Cx ≤ d

can be solved in a finite number of steps, and the optimal solution lies at a vertex of the polyhedron described by the linear constraints. This is shown in Figure 39, and in so-called primal degenerate cases, multiple vertices can be alternate optimal solutions with the same values of the objective function. The standard method to solve problem 65 is the simplex method, developed in the late 1940s [170], although, starting from Karmarkar's discovery in 1984, interior point methods have become quite advanced and competitive for large-scale problems [171]. The simplex method proceeds by moving successively from vertex to vertex with improved objective function values. Methods to solve problem 65 are well implemented and widely used,
especially in planning and logistical applications. They also form the basis for MILP methods (see below). Currently, state-of-the-art LP solvers can handle millions of variables and constraints, and the application of further decomposition methods leads to the solution of problems that are two or three orders of magnitude larger than this [172, 173]. Also, the interior point method is described below from the perspective of more general NLPs.

Figure 39. Contour plots of linear programs

Quadratic programs (QP) represent a slight modification of problem 65 and can be stated as:

Min cᵀx + ½xᵀQx
s.t. Ax = b    (66)
Cx ≤ d

If the matrix Q is positive semidefinite (positive definite) when projected into the null space of the active constraints, then problem 66 is (strictly) convex and the QP solution is a global (and unique) minimum. Otherwise, local solutions exist for problem 66, and more extensive global optimization methods are needed to obtain the global solution. Like LPs, convex QPs can be solved in a finite number of steps. However, as seen in Figure 40, these optimal solutions can lie on a vertex, on a constraint boundary, or in the interior. A number of active set strategies have been created that solve the KKT conditions of the QP and incorporate efficient updates of active constraints. Popular QP methods include null space algorithms, range space methods, and Schur complement methods. As with LPs, QP problems can also be solved with interior point methods [171].

Figure 40. Contour plots of convex quadratic programs

Solving the General NLP Problem. Solution techniques for problem 52 deal with satisfaction of the KKT conditions 56-59. Many NLP solvers are based on successive quadratic programming (SQP), as it allows the construction of a number of NLP algorithms based on the Newton-Raphson method for equation solving. SQP solvers have been shown to require the fewest function evaluations to solve NLPs [174] and they can be tailored to a broad range of process engineering problems with different structure.

The SQP strategy applies the equivalent of a Newton step to the KKT conditions of the nonlinear programming problem, and this leads to a fast rate of convergence. By adding slack variables s the first-order KKT conditions can be rewritten as:
∇f(x) + ∇h(x)λ + ∇g(x)ν = 0    (67a)
h(x) = 0    (67b)
g(x) + s = 0    (67c)
SVe = 0    (67d)
(s, ν) ≥ 0    (67e)

where e = [1, 1, . . ., 1]ᵀ, S = diag(s) and V = diag(ν). SQP methods find solutions that satisfy Equations 67a-67e by generating Newton-like search directions at iteration k. However, equations 67d and active bounds 67e are dependent and serve to make the KKT system ill-conditioned near the solution. SQP algorithms treat these conditions in two ways. In the active set strategy, discrete decisions are made regarding the active constraint set, i ∈ I = {i | gi(x*) = 0}, and Equation 67d is replaced by si = 0, i ∈ I, and νi = 0, i ∉ I. Determining the active set is a combinatorial problem, and a straightforward way to determine an estimate of the active set [and also satisfy 67e] is to formulate, at a point xᵏ, and solve the following QP at iteration k:

Min ∇f(xᵏ)ᵀp + ½pᵀ∇xxL(xᵏ, λᵏ, νᵏ)p
s.t. h(xᵏ) + ∇h(xᵏ)ᵀp = 0    (68)
g(xᵏ) + ∇g(xᵏ)ᵀp + s = 0,  s ≥ 0

The KKT conditions of 68 are given by:

∇f(xᵏ) + ∇xxL(xᵏ, λᵏ, νᵏ)p + ∇h(xᵏ)λ + ∇g(xᵏ)ν = 0    (69a)
h(xᵏ) + ∇h(xᵏ)ᵀp = 0    (69b)
g(xᵏ) + ∇g(xᵏ)ᵀp + s = 0    (69c)
SVe = 0    (69d)
(s, ν) ≥ 0    (69e)

where the Hessian of the Lagrange function ∇xxL(x, λ, ν) = ∇xx[f(x) + h(x)ᵀλ + g(x)ᵀν] is calculated directly or through a quasi-Newtonian approximation (created by differences of gradient vectors). It is easy to show that 69a-69c correspond to a Newton-Raphson step for 67a-67c applied at iteration k. Also, selection of the active set is now handled at the QP level by satisfying the conditions 69d, 69e. To evaluate and change candidate active sets, QP algorithms apply inexpensive matrix-updating strategies to the KKT matrix associated with the QP 68. Details of this approach can be found in [175, 176].

As alternatives that avoid the combinatorial problem of selecting the active set, interior point (or barrier) methods modify the NLP problem 52 to form problem 70:

Min f(x) − μ Σi ln si
s.t. h(x) = 0    (70)
g(x) + s = 0

where the solution to this problem has s > 0 for the penalty parameter μ > 0, and decreasing μ to zero leads to solution of problem 52. The KKT conditions for this problem can be written as Equation 71:

∇f(x) + ∇h(x)λ + ∇g(x)ν = 0
h(x) = 0    (71)
g(x) + s = 0
SVe = μe

and at iteration k the Newton steps (Δx, Δs, Δλ, Δν) to solve 71 are given by the linear system:

∇xxL(xᵏ, λᵏ, νᵏ)Δx + ∇h(xᵏ)Δλ + ∇g(xᵏ)Δν = −∇xL(xᵏ, λᵏ, νᵏ)
Sᵏ⁻¹VᵏΔs + Δν = −(νᵏ − μSᵏ⁻¹e)    (72)
∇h(xᵏ)ᵀΔx = −h(xᵏ)
∇g(xᵏ)ᵀΔx + Δs = −(g(xᵏ) + sᵏ)

A detailed description of this algorithm, called IPOPT, can be found in [177].

Both active set and interior point methods have clear trade-offs. Interior point methods may require more iterations to solve problem 70 for various values of μ, while active set methods require the solution of the more expensive QP subproblem 68. Thus, if there are few inequality constraints or an active set is known (say from a good starting guess, or a known QP solution from a previous iteration) then solving problem 68 is not expensive and the active set method is favored. On the other hand, for problems with
many inequality constraints, interior point methods are often faster as they avoid the combinatorial problem of selecting the active set. This is especially the case for large-scale problems and when a large number of bounds are active. Examples that demonstrate the performance of these approaches include the solution of model predictive control (MPC) problems [178-180] and the solution of large optimal control problems using barrier NLP solvers. For instance, IPOPT allows the solution of problems with more than 10⁶ variables and up to 50 000 degrees of freedom [181, 182].

Other Gradient-Based NLP Solvers. In addition to SQP methods, a number of NLP solvers have been developed and adapted for large-scale problems. Generally, these methods require more function evaluations than SQP methods, but they perform very well when interfaced to optimization modeling platforms, where function evaluations are cheap. All of these can be derived from the perspective of applying Newton steps to portions of the KKT conditions.

LANCELOT [183] is based on the solution of bound constrained subproblems. Here an augmented Lagrangian is formed from problem 52 and subproblem 73 is solved:

Min f(x) + λᵀh(x) + νᵀ(g(x) + s) + ½ρ‖h(x), g(x) + s‖²
s.t. s ≥ 0    (73)

The above subproblem can be solved very efficiently for fixed values of the multipliers λ and ν and penalty parameter ρ. Here a gradient-projection, trust-region method is applied. Once subproblem 73 is solved, the multipliers and penalty parameter are updated in an outer loop and the cycle repeats until the KKT conditions for problem 52 are satisfied. LANCELOT works best when exact second derivatives are available. This promotes a fast convergence rate in solving each subproblem and allows a bound constrained trust-region method to exploit directions of negative curvature in the Hessian matrix.

Reduced gradient methods are active set strategies that rely on partitioning the variables and solving Equations 67a-67e in a nested manner. Without loss of generality, problem 52 can be rewritten as problem 74:

Min f(z)
s.t. c(z) = 0    (74)
a ≤ z ≤ b

Variables are partitioned as nonbasic variables zN (those fixed to their bounds), basic variables zB (those that can be solved from the equality constraints), and superbasic variables zS (those remaining variables between bounds that serve to drive the optimization); this leads to zᵀ = [zNᵀ, zSᵀ, zBᵀ]. This partition is derived from local information and may change over the course of the optimization iterations. The corresponding KKT conditions can be written as Equations 75a-75e:

∇N f(z) + ∇N c(z)γ = βa − βb    (75a)
∇S f(z) + ∇S c(z)γ = 0    (75b)
∇B f(z) + ∇B c(z)γ = 0    (75c)
c(z) = 0    (75d)
zN,j = aj or bj, with βa,j ≥ 0, βb,j = 0 or βb,j ≥ 0, βa,j = 0    (75e)

where γ and β are the KKT multipliers for the equality and bound constraints, respectively, and 75e replaces the complementarity conditions 58. Reduced gradient methods work by nesting equations 75b, 75d within 75a, 75c. At iteration k, for fixed values of zNᵏ and zSᵏ, we can solve for zB using 75d and for γ using 75b. Moreover, linearization of these equations leads to constrained derivatives or reduced gradients (Eq. 76):

df/dzS = ∇fS − ∇cS (∇cB)⁻¹ ∇fB    (76)

which indicate how f(z) (and zB) change with respect to zS and zN. The algorithm then proceeds by updating zS using reduced gradients in a Newton-type iteration to solve equation 75c. Following this, bound multipliers β are calculated from 75a. Over the course of the iterations, if the variables zB or zS exceed their bounds or if some bound multipliers β become negative, then the variable partition needs to be changed and the equations 75a-75e are reconstructed. These reduced gradient methods are embodied in the popular GRG2, CONOPT, and SOLVER codes [173].
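A minimal sketch of the reduced-gradient recursion in Equation 76 on a two-variable problem (the quadratic objective and single linear equality below are illustrative choices, not from the original text):

```python
# Reduced gradient sketch for: Min f(z) = z1^2 + (z2 - 3)^2  s.t.  c(z) = z1 + z2 - 2 = 0.
# Take z1 as the basic variable (solved from the constraint) and z2 as superbasic.
# Eq. 76: df/dz2 = grad_fS - grad_cS * (grad_cB)^-1 * grad_fB = 2*(z2 - 3) - 1 * (1/1) * 2*z1.
z2 = 0.0
for _ in range(200):
    z1 = 2.0 - z2                                          # solve c(z) = 0 for zB (Eq. 75d)
    reduced_grad = 2.0 * (z2 - 3.0) - 1.0 * (1.0 / 1.0) * (2.0 * z1)
    z2 -= 0.1 * reduced_grad                               # descent step in the superbasic variable
z1 = 2.0 - z2
print(round(z1, 6), round(z2, 6))  # converges to z1 = -0.5, z2 = 2.5
```

Because the basic variable is eliminated through the constraint at every step, the iteration stays feasible throughout, which is the characteristic behavior of GRG-type codes.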

The SOLVER code has been incorporated into not checked by methods that do not use second
Microsoft Excel. CONOPT [184] is an efcient derivatives). With the recent development of
and widely used code in several optimization automatic differentiation tools, many model-
modeling environments. ing and simulation platforms can provide ex-
MINOS [185] is a well-implemented package act rst and second derivatives for optimiza-
that offers a variation on reduced gradient strate- tion. When second derivatives are available
gies. At iteration k, equation 75d is replaced by for the objective or constraint functions, they
its linearization (Eq. 77) can be used directly in LANCELOT, SQP and,
 T  T less efciently, in reduced gradient methods.
k k k
c(zN ,zS ,zB )+B c z k (zB zB
k
)+S c z k
Otherwise, for problems with few superbasic
(zS zSk )=0 (77) variables, reduced gradient methods and re-
duced space variants of SQP can be applied.
and (75a75c, 75e) are solved with Equation 77
Referring to problem 74 with n variables and
as a subproblem using concepts from the re-
m equalities, we can write the QP step from
duced gradient method. At the solution of this
problem 68 as Equation 78.
subproblem, the constraints 75d are relinearized

T 1

and the cycle repeats until the KKT conditions of Min f z k p+1 2 pT xx L z k , k p
75a, 75b, 75c, 75d and 75e are satised. The aug-

T
mented Lagrangian function 73 is used to penal- s.t. c z k +c z k p=0 (78)
ize movement away from the feasible region. For az k +pb
problems with few degrees of freedom, the re-
sulting approach leads to an extremely efcient Dening the search direction as p = Z k pZ +
method even for very large problems. MINOS Y k pY , where c(x k )T Z k = 0 and [Y k |Z k ] is
has been interfaced to a number of modeling a nonsingular n n matrix, allows us to form
systems and enjoys widespread use. It performs the following reduced QP subproblem (with
especially well on large problems with few non- nm variables)
linear constraints. However, on highly nonlinear ! T
"T
Min Zk f z k +wk pZ + 12 pz T Bk pZ
problems it is usually less reliable than other re- (79)
duced gradient methods. s.t. az k +Zk pZ +Yk pY b

Algorithmic Details for NLP Methods. All where pY = [c(zk )T Y k ]1 c(zk ). Good
of the above NLP methods incorporate con- choices of Y k and Z k , which allow sparsity
k T
 in c(z ), are: Yk =[0|I]
cepts from the Newton Raphson Method for to be exploited 
equation solving. Essential features of these 1
and ZkT = I|N,S c z k B c z k .
methods are a) providing accurate derivative in-
formation to solve for the KKT conditions, Here we dene  the
 reduced Hessian
b) stabilization strategies to promote conver- Bk =ZzT zz L z k, k Zk and wk =
gence of the Newton-like method from poor ZkT zz L z k , k Yk pY . In the absence of
starting points, and c) regularization of the Ja- second-derivative information, Bk can be
cobian matrix in Newtons method (the so- approximated using positive denite quasi-
called KKT matrix) if it becomes singular or Newton approximations [175]. Also, for the
ill-conditioned. interior point method, a similar reduced space
decomposition can be applied to the Newton
a) Providing rst and second derivatives: The step given in 72.
KKT conditions require rst derivatives to de- Finally, for problems with least-squares func-
ne stationary points, so accurate rst deriva- tions, as in data reconciliation, parameter es-
tives are essential to determine locally optimal timation, and model predictive control, one
solutions for differentiable NLPs. Moreover, can often assume that the values of the objec-
NewtonRaphson methods that are applied to tive function and its gradient at the solution
the KKT conditions, as well as the task of are vanishingly small. Under these conditions,
checking second-order KKT conditions, nec- one can show that the multipliers (, ) also
essarily require information on second deriva- vanish and xx L(x*, , ) can be substituted
tives. (Note that second-order conditions are
by ∇xx f(x*). This Gauss-Newton approximation has been shown to be very efficient for the solution of least-squares problems [175].

b) Line-search and trust-region methods are used to promote convergence from poor starting points. These are commonly used with the search directions calculated from NLP subproblems such as problem 68. In a trust-region approach, the constraint ‖p‖ ≤ Δ is added and the iteration step is taken if there is sufficient reduction of some merit function (e.g., the objective function weighted with some measure of the constraint violations). The size of the trust region is adjusted based on the agreement of the reduction of the actual merit function compared to its predicted reduction from the subproblem [183]. Such methods have strong global convergence properties and are especially appropriate for ill-conditioned NLPs. This approach has been applied in the KNITRO code [186]. Line-search methods can be more efficient on problems with reasonably good starting points and well-conditioned subproblems, as in real-time optimization. Typically, once a search direction is calculated from problem 68, or other related subproblem, a step size α ∈ (0, 1] is chosen so that xᵏ + αp leads to a sufficient decrease of a merit function. As a recent alternative, a novel filter-stabilization strategy (for both line-search and trust-region approaches) has been developed based on a bicriterion minimization, with the objective function and constraint infeasibility as competing objectives [187]. This method often leads to better performance than those based on merit functions.

c) Regularization of the KKT matrix for the NLP subproblem (e.g., in Equation 72) is essential for good performance of general purpose algorithms. For instance, to obtain a unique solution to Eqn. 72, active constraint gradients must be full rank, and the Hessian matrix, when projected into the null space of the active constraint gradients, must be positive definite. These properties may not hold far from the solution, and corrections to the Hessian in SQP may be necessary [176]. Regularization methods ensure that subproblems like Eqns. 68 and 72 remain well-conditioned; they include addition of positive constants to the diagonal of the Hessian matrix to ensure its positive definiteness, judicious selection of active constraint gradients to ensure that they are linearly independent, and scaling the subproblem to reduce the propagation of numerical errors. Often these strategies are heuristics built into particular NLP codes. While quite effective, most of these heuristics do not provide convergence guarantees for general NLPs.

Table 11 summarizes the characteristics of a collection of widely used NLP codes. Much more information on widely available codes can also be found on the NEOS server (www-neos.mcs.anl.gov) and the NEOS Software Guide.
Table 11. Representative NLP solvers

Method | Algorithm type | Stabilization | Second-order information
CONOPT [184] | reduced gradient | line search | exact and quasi-Newton
GRG2 [173] | reduced gradient | line search | quasi-Newton
IPOPT [177] | SQP, barrier | line search | exact
KNITRO [186] | SQP, barrier | trust region | exact and quasi-Newton
LANCELOT [183] | augmented Lagrangian, bound constrained | trust region | exact and quasi-Newton
LOQO [188] | SQP, barrier | line search | exact
MINOS [185] | reduced gradient, augmented Lagrangian | line search | quasi-Newton
NPSOL [189] | SQP, active set | line search | quasi-Newton
SNOPT [190] | reduced space SQP, active set | line search | quasi-Newton
SOCS [191] | SQP, active set | line search | exact
SOLVER [173] | reduced gradient | line search | quasi-Newton
SRQP [192] | reduced space SQP, active set | line search | quasi-Newton
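The Gauss-Newton substitution described under a) can be sketched on a one-parameter least-squares fit; the model y = exp(b·t) and the synthetic data below are illustrative assumptions, not from the original text:

```python
import math

# Gauss-Newton for Min_b sum_i (y_i - exp(b*t_i))^2. Because the residuals vanish at the
# solution, the Hessian of the Lagrangian is well approximated by J^T J (Gauss-Newton).
t = [0.0, 1.0, 2.0, 3.0]
y = [math.exp(0.5 * ti) for ti in t]     # synthetic, noise-free data generated with b = 0.5

b = 0.0                                  # starting guess
for _ in range(30):
    r = [yi - math.exp(b * ti) for ti, yi in zip(t, y)]   # residuals
    J = [ti * math.exp(b * ti) for ti in t]               # d(model)/db
    b += sum(Ji * ri for Ji, ri in zip(J, r)) / sum(Ji * Ji for Ji in J)  # (J^T J)^-1 J^T r
print(round(b, 6))  # recovers b = 0.5
```

On noise-free data the residuals go to zero, so the Gauss-Newton step approaches a full Newton step near the solution and convergence is fast; with noisy data the same update applies but the neglected residual-curvature terms slow it somewhat.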
10.3. Optimization Methods without Derivatives

A broad class of optimization strategies does not require derivative information. These methods have the advantage of easy implementation and little prior knowledge of the optimization problem. In particular, such methods are well suited for quick and dirty optimization studies that explore the scope of optimization for new problems, prior to investing effort for more sophisticated modeling and solution strategies. Most of these methods are derived from heuristics that naturally spawn numerous variations. As a result, a very broad literature describes these methods. Here we discuss only a few important trends in this area.

Classical Direct Search Methods. Developed in the 1960s and 1970s, these methods include one-at-a-time search and methods based on experimental designs [193]. At that time, these direct search methods were the most popular optimization methods in chemical engineering. Methods that fall into this class include the pattern search [194], the conjugate direction method [195], simplex and complex searches [196], and the adaptive random search methods [197-199]. All of these methods require only objective function values for unconstrained minimization. Associated with these methods are numerous studies on a wide range of process
problems. Moreover, many of these methods include heuristics that prevent premature termination (e.g., directional flexibility in the complex search as well as random restarts and direction generation). To illustrate these methods, Figure 41 shows the performance of a pattern search method as well as a random search method on an unconstrained problem.

Figure 41. Examples of optimization methods without derivatives. A) Pattern search method; B) Random search method. Circles: 1st phase; Triangles: 2nd phase; Stars: 3rd phase

Simulated Annealing. This strategy is related to random search methods and derives from a class of heuristics with analogies to the motion of molecules in the cooling and solidification of metals [200]. Here a temperature parameter θ can be raised or lowered to influence the probability of accepting points that do not improve the objective function. The method starts with a base point x and objective value f(x). The next point x′ is chosen at random from a distribution. If f(x′) < f(x), the move is accepted with x′ as the new point. Otherwise, x′ is accepted with probability p(θ, x, x′). Options include the Metropolis distribution, p(θ, x, x′) = exp[−{f(x′) − f(x)}/θ], and the Glauber distribution, p(θ, x, x′) = exp[−{f(x′) − f(x)}/θ]/(1 + exp[−{f(x′) − f(x)}/θ]). The parameter θ is then reduced and the method continues until no further progress is made.

Genetic Algorithms. This approach, first proposed in [201], is based on the analogy of improving a population of solutions through modifying their gene pool. It also has similar performance characteristics as random search methods and simulated annealing. Two forms of genetic modification, crossover or mutation, are used and the elements of the optimization vector x are represented as binary strings. Crossover deals with random swapping of vector elements (among parents with highest objective function values or other rankings of population) or any linear combinations of two parents. Mutation deals with the addition of a random variable to elements of the vector. Genetic algorithms (GAs) have seen widespread use in process engineering and a number of codes are available. A related GA algorithm is described in [173].

Derivative-Free Optimization (DFO). In the past decade, the availability of parallel computers and faster computing hardware and the need to incorporate complex simulation models within optimization studies have led a number of optimization researchers to reconsider classical direct search approaches. In particular, Dennis and Torczon [202] developed a multidimensional search algorithm that extends the simplex approach [196]. They note that the Nelder-Mead algorithm fails as the number of variables increases, even for very simple problems. To overcome this, their multidimensional pattern-search approach combines reflection, expansion, and contraction steps that act as line search algorithms for a number of linearly independent search directions.

and FOCUS developed at Boeing Corporation [205].

Direct search methods are easy to apply to a wide variety of problem types and optimization models. Moreover, because their termination criteria are not based on gradient information and stationary points, they are more likely to favor the search for global rather than locally optimal solutions. These methods can also be adapted easily to include integer variables. However, rigorous convergence properties to globally optimal solutions have not yet been discovered. Also, these methods are best suited for unconstrained problems or for problems with simple bounds. Otherwise, they may have difficulties with constraints, as the only options open for handling constraints are equality constraint elimination and addition of penalty functions for inequality constraints. Both approaches can be unreliable and may lead to failure of the optimization algorithm. Finally, the performance of direct search methods scales poorly (and often exponentially) with the number of decision variables. While performance can be improved with the use of parallel computing, these methods are rarely applied to problems with more than a few dozen decision variables.
directions. This approach is easily adapted to
parallel computation and the method can be 10.4. Global Optimization
tailored to the number of processors available.
Moreover, this approach converges to locally Deterministic optimization methods are avail-
optimal solutions for unconstrained problems able for nonconvex nonlinear programming
and exhibits an unexpected performance syn- problems of the form problem 52 that guaran-
ergy when multiple processors are used. The tee convergence to the global optimum. More
work of Dennis and Torczon [202] has spawned specically, one can show under mild conditions
considerable research on analysis and code de- that they converge to an distance to the global
velopment for DFO methods. Moreover, Conn optimum on a nite number of steps. These
et al. [203] construct a multivariable DFO al- methods are generally more expensive than local
gorithm that uses a surrogate model for the NLP methods, and they require the exploitation
objective function within a trust-region method. of the structure of the nonlinear program.
Here points are sampled to obtain a well-dened Global optimization of nonconvex programs
quadratic interpolation model, and descent con- has received increased attention due to their
ditions from trust-region methods enforce con- practical importance. Most of the deterministic
vergence properties. A number of trust-region global optimization algorithms are based on spa-
methods that rely on this approach are reviewed tial branch-and-bound algorithm [206], which
in [203]. Moreover, a number of DFO codes divides the feasible region of continuous vari-
have been developed that lead to black-box op- ables and compares lower bound and upper
timization implementations for large, complex bound for fathoming each subregion. The one
simulation models. These include the DAKOTA that contains the optimal solution is found by
package at Sandia National Lab [204], eliminating subregions that are proved not to
http://endo.sandia.gov/DAKOTA/software.html contain the optimal solution.
Mathematics in Chemical Engineering 107

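In outline, the Metropolis acceptance rule described above can be sketched as follows. The Gaussian step proposal, the geometric cooling schedule, and the test function below are illustrative assumptions, not prescriptions from the text:

```python
import math
import random

def simulated_annealing(f, x0, theta0=10.0, cooling=0.95,
                        n_outer=60, n_inner=50, step=0.5, seed=0):
    """Minimize f with the Metropolis acceptance rule and a cooling schedule."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    theta = theta0
    for _ in range(n_outer):                 # temperature levels
        for _ in range(n_inner):             # random moves at fixed temperature
            x_new = x + rng.gauss(0.0, step)             # candidate point
            f_new = f(x_new)
            # accept improvements; accept uphill moves with probability
            # p = exp(-(f(x') - f(x)) / theta)  (Metropolis distribution)
            if f_new < fx or rng.random() < math.exp(-(f_new - fx) / theta):
                x, fx = x_new, f_new
                if fx < best_f:
                    best_x, best_f = x, fx
        theta *= cooling                     # reduce the temperature parameter
    return best_x, best_f

# nonconvex test function with several local minima (illustrative)
f = lambda x: 0.2 * x**2 + math.sin(3.0 * x)
x_best, f_best = simulated_annealing(f, x0=8.0)
```

With a high initial θ nearly all moves are accepted; as θ is reduced, the walk settles into a low-lying basin, so the returned point is typically far better than the starting point, although no guarantee of global optimality is implied.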
For nonconvex NLP problems, Quesada and Grossmann [207] proposed a spatial branch-and-bound algorithm for concave separable, linear fractional, and bilinear programs using linear and nonlinear underestimating functions [208]. For nonconvex MINLP, Ryoo and Sahinidis [209] and later Tawarmalani and Sahinidis [210] developed BARON, which branches on the continuous and discrete variables with a bounds reduction method. Adjiman et al. [211, 212] proposed the SMIN-αBB and GMIN-αBB algorithms for twice-differentiable nonconvex MINLPs. Using a valid convex underestimation of general functions as well as for special functions, Adjiman et al. [213] developed the αBB method, which branches on both the continuous and discrete variables according to specific options. The branch-and-contract method [214] has bilinear, linear fractional, and concave separable functions in the continuous variables and binary variables, uses bound contraction, and applies the outer-approximation (OA) algorithm at each node of the tree. Smith and Pantelides [215] proposed a reformulation method combined with a spatial branch-and-bound algorithm for nonconvex MINLP and NLP.

Figure 42. Convex underestimator for nonconvex function

Because in global optimization one cannot exploit optimality conditions like the KKT conditions for a local optimum, these methods work by first partitioning the problem domain (i.e., the region containing the feasible region) into subregions (see Fig. 42). Upper bounds on the objective function are computed over all subregions of the problem. In addition, lower bounds can be derived from convex underestimators of the objective function and constraints for each subregion. The algorithm then proceeds to eliminate all subregions that have lower bounds that are greater than the least upper bound. After this, the remaining regions are further partitioned to create new subregions and the cycle continues until the upper and lower bounds converge. Below we illustrate the specific steps of the algorithm for nonlinear programs that involve bilinear, linear fractional, and concave separable terms [207, 214].

Nonconvex NLP with Bilinear, Linear Fractional, and Concave Separable Terms. Consider the following specific nonconvex NLP problem,

  Min_x f(x) = Σ_{(i,j)∈BL0} a_ij x_i x_j + Σ_{(i,j)∈LF0} b_ij (x_i/x_j) + Σ_{i∈C0} g_i(x_i) + h(x)

subject to

  f_k(x) = Σ_{(i,j)∈BLk} a_ijk x_i x_j + Σ_{(i,j)∈LFk} b_ijk (x_i/x_j) + Σ_{i∈Ck} g_{i,k}(x_i) + h_k(x) ≤ 0,  k ∈ K   (80)

  x ∈ S ∩ Ω0 ⊂ R^n

where a_ij, a_ijk, b_ij, b_ijk are scalars with i ∈ I = {1,2,…,n}, j ∈ J = {1,2,…,n}, and k ∈ K = {1,2,…,m}. BL0, BLk, LF0, LFk are (i,j)-index sets, with i ≠ j, that define the bilinear and linear fractional terms present in the problem. The functions h(x), h_k(x) are convex and twice continuously differentiable. C0 and Ck are index sets for the univariate twice continuously differentiable concave functions g_i(x_i), g_{i,k}(x_i). The set S ⊂ R^n is convex, and Ω0 ⊂ R^n is an n-dimensional hyperrectangle defined in terms of the initial variable bounds x^{L,in} and x^{U,in}:

  Ω0 = {x ∈ R^n : 0 ≤ x^{L,in} ≤ x ≤ x^{U,in}, x_j^{L,in} > 0 if (i,j) ∈ LF0 ∪ LFk, i ∈ I, j ∈ J, k ∈ K}

The feasible region of problem 80 is denoted by D. Note that a nonlinear equality constraint of the form f_k(x) = 0 can be accommodated in
problem 80 through the representation by the inequalities f_k(x) ≤ 0 and −f_k(x) ≤ 0, provided h_k(x) is separable.

To obtain a lower bound LB(Ω) for the global minimum of problem 80 over D ∩ Ω, where Ω = {x ∈ R^n : x^L ≤ x ≤ x^U} ⊆ Ω0, the following problem is proposed:

  Min_{(x,y,z)} f̂(x,y,z) = Σ_{(i,j)∈BL0} a_ij y_ij + Σ_{(i,j)∈LF0} b_ij z_ij + Σ_{i∈C0} ĝ_i(x_i) + h(x)

subject to

  f̂_k(x,y,z) = Σ_{(i,j)∈BLk} a_ijk y_ij + Σ_{(i,j)∈LFk} b_ijk z_ij + Σ_{i∈Ck} ĝ_{i,k}(x_i) + h_k(x) ≤ 0,  k ∈ K

  (x,y,z) ∈ T(Ω) ⊂ R^n × R^{n1} × R^{n2}

  x ∈ S ∩ Ω ⊂ R^n,  y ∈ R+^{n1},  z ∈ R+^{n2}   (81)

where the functions and sets are defined as follows:

a) ĝ_i(x_i) and ĝ_{i,k}(x_i) are the convex envelopes for the univariate functions over the domain x_i ∈ [x_i^L, x_i^U] [216]:

  ĝ_i(x_i) = g_i(x_i^L) + {[g_i(x_i^U) − g_i(x_i^L)] / (x_i^U − x_i^L)} (x_i − x_i^L) ≤ g_i(x_i)   (82)

  ĝ_{i,k}(x_i) = g_{i,k}(x_i^L) + {[g_{i,k}(x_i^U) − g_{i,k}(x_i^L)] / (x_i^U − x_i^L)} (x_i − x_i^L) ≤ g_{i,k}(x_i)   (83)

where ĝ_i(x_i) = g_i(x_i) at x_i = x_i^L and x_i = x_i^U; likewise, ĝ_{i,k}(x_i) = g_{i,k}(x_i) at x_i = x_i^L and x_i = x_i^U.

b) y = {y_ij} is a vector of additional variables for relaxing the bilinear terms in 80, and is used in the following inequalities, which determine the convex and concave envelopes of bilinear terms:

  y_ij ≥ x_j^L x_i + x_i^L x_j − x_i^L x_j^L,  (i,j) ∈ BL+
  y_ij ≥ x_j^U x_i + x_i^U x_j − x_i^U x_j^U,  (i,j) ∈ BL+   (84)

  y_ij ≤ x_j^L x_i + x_i^U x_j − x_i^U x_j^L,  (i,j) ∈ BL−
  y_ij ≤ x_j^U x_i + x_i^L x_j − x_i^L x_j^U,  (i,j) ∈ BL−   (85)

where

  BL+ = {(i,j) : (i,j) ∈ BL0 ∪ BLk, a_ij > 0 or a_ijk > 0, k ∈ K}
  BL− = {(i,j) : (i,j) ∈ BL0 ∪ BLk, a_ij < 0 or a_ijk < 0, k ∈ K}

The inequalities 84 were first derived by McCormick [208], and along with the inequalities 85 theoretically characterized by Al-Khayyal and Falk [217, 218].

c) z = {z_ij} is a vector of additional variables for relaxing the linear fractional terms in problem 80; these variables are used in the following inequalities:

  z_ij ≥ x_i/x_j^L + x_i^U (1/x_j − 1/x_j^L),  (i,j) ∈ LF+
  z_ij ≥ x_i/x_j^U + x_i^L (1/x_j − 1/x_j^U),  (i,j) ∈ LF+   (86)

  z_ij ≥ (1/x_j) [(x_i + (x_i^L x_i^U)^{1/2}) / ((x_i^L)^{1/2} + (x_i^U)^{1/2})]²,  (i,j) ∈ LF+   (87)

  z_ij ≤ [x_j^U x_i − x_i^L x_j + x_i^L x_j^L] / (x_j^L x_j^U),  (i,j) ∈ LF−
  z_ij ≤ [x_j^L x_i − x_i^U x_j + x_i^U x_j^U] / (x_j^L x_j^U),  (i,j) ∈ LF−   (88)

where

  LF+ = {(i,j) : (i,j) ∈ LF0 ∪ LFk, b_ij > 0 or b_ijk > 0, k ∈ K}
  LF− = {(i,j) : (i,j) ∈ LF0 ∪ LFk, b_ij < 0 or b_ijk < 0, k ∈ K}

The inequalities 86 and 87, 88 are convex underestimators due to Quesada and Grossmann [207, 219] and Zamora and Grossmann [214], respectively.

d) T(Ω) = {(x,y,z) ∈ R^n × R^{n1} × R^{n2} : inequalities 82–88 are satisfied with x^L, x^U as in Ω}. The feasible region and the solution of problem 81 are denoted by M(Ω) and (x̂, ŷ, ẑ), respectively. We define the approximation gap ε(Ω) at a branch-and-bound node as

  ε(Ω) = ∞  if OUB = ∞
  ε(Ω) = |LB(Ω)|  if OUB = 0   (89)
  ε(Ω) = (OUB − LB(Ω)) / |OUB|  otherwise

where the overall upper bound (OUB) is the value of f(x) at the best available feasible point x ∈ D; if no feasible point is available, then OUB = ∞.
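The envelope inequalities 84 and 85 can be checked numerically. The sketch below builds the under- and overestimators for a single bilinear term x1·x2 over a box (the bounds are chosen here only for illustration) and verifies that the true product always lies between them:

```python
x1L, x1U = -1.0, 2.0   # bounds on x1 (illustrative)
x2L, x2U = 0.5, 3.0    # bounds on x2 (illustrative)

def mccormick_under(x1, x2):
    """Convex envelope (84): best valid lower bound on the bilinear term x1*x2."""
    return max(x2L * x1 + x1L * x2 - x1L * x2L,
               x2U * x1 + x1U * x2 - x1U * x2U)

def mccormick_over(x1, x2):
    """Concave envelope (85): best valid upper bound on the bilinear term x1*x2."""
    return min(x2L * x1 + x1U * x2 - x1U * x2L,
               x2U * x1 + x1L * x2 - x1L * x2U)

# verify the envelope property on a grid over the box
for i in range(21):
    for j in range(21):
        x1 = x1L + (x1U - x1L) * i / 20
        x2 = x2L + (x2U - x2L) * j / 20
        assert mccormick_under(x1, x2) <= x1 * x2 + 1e-9
        assert x1 * x2 <= mccormick_over(x1, x2) + 1e-9
```

At a corner of the box the envelopes are tight; for instance, mccormick_under(2.0, 3.0) equals the true product 6.0.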
Note that the underestimating problem 81 is a linear program if LF+ = ∅. During the execution of the spatial branch-and-bound algorithm, problem 81 is solved initially over M(Ω0) (root node of the branch-and-bound tree). If a better approximation is required, M(Ω0) is refined by partitioning Ω0 into two smaller hyperrectangles Ω01 and Ω02, and two children nodes are created with relaxed feasible regions given by M(Ω01) and M(Ω02). Problem 81 might be regarded as a basic underestimating program for the general problem 80. In some cases, however, it is possible to develop additional convex estimators that might strengthen the underestimating problem. See, for instance, the projections proposed by Quesada and Grossmann [207], the reformulation–linearization technique by Sherali and Alameddine [220], and the reformulation–convexification approach by Sherali and Tuncbilek [221].

The Set of Branching Variables. A set of branching variables, characterized by the index set BV(Ω) defined below, is determined by considering the optimal solution (x̂, ŷ, ẑ) of the underestimating problem:

  BV(Ω) = {i,j : |ŷ_ij − x̂_i x̂_j| = ε_l or |ẑ_ij − x̂_i/x̂_j| = ε_l or g_i(x̂_i) − ĝ_i(x̂_i) = ε_l or g_{i,k}(x̂_i) − ĝ_{i,k}(x̂_i) = ε_l, for i ∈ I, j ∈ J, k ∈ K, l ∈ L}   (90)

where, for a prespecified number l_n, L = {1,2,…,l_n} and ε_1 is the magnitude of the largest approximation error for a nonconvex term in problem 80 evaluated at (x̂, ŷ, ẑ):

  ε_1 = Max [|ŷ_ij − x̂_i x̂_j|, |ẑ_ij − x̂_i/x̂_j|, g_i(x̂_i) − ĝ_i(x̂_i), g_{i,k}(x̂_i) − ĝ_{i,k}(x̂_i)],  i ∈ I, j ∈ J, k ∈ K

Similarly, we define ε_l < ε_{l−1} with l ∈ L\{1} as the l-th largest magnitude of an approximation error; for instance, ε_2 < ε_1 is the second largest magnitude of an approximation error. Note that in some cases it might be convenient to introduce weights in the determination of BV(Ω) in order to scale differences in the approximation errors or to induce preferential branching schemes. This might be particularly useful in applications where specific information can be exploited by imposing an order of precedence on the set of complicating variables.

The basic concepts in spatial branch-and-bound for global optimization are as follows. Bounding is related to the calculation of upper and lower bounds. For the former, any feasible point or, preferably, a locally optimal point in the subregion can be used. For the lower bound, convex relaxations of the objective and constraint functions are derived, such as in problem 81. The refining step deals with the construction of partitions in the domain and further partitioning them during the search process. Finally, the selection step decides on the order of exploring the open subregions. Thus, the feasible region and the objective function are replaced by convex envelopes to form relaxed problems. Solving these convex relaxed problems leads to global solutions that are lower bounds to the NLP in the particular subregion. Finally, we see that gradient-based NLP solvers play an important role in global optimization algorithms, as they often yield the lower and upper bounds for the subregions. The spatial branch-and-bound global optimization algorithm can therefore be given by the following steps:

0. Initialize algorithm: calculate an upper bound by obtaining a local solution to problem 80. Calculate a lower bound by solving problem 81 over the entire (relaxed) feasible region Ω0.

For iteration k with a set of partitions Ω_{k,j} and bounds in each subregion OLB_j and OUB_j:

1) Bound: Define the best upper bound, OUB = Min_j OUB_j, and delete (fathom) all subregions j with lower bounds OLB_j ≥ OUB. If OLB_j ≥ OUB − ε, stop.
2) Refine: Divide the remaining active subregions into partitions Ω_{k,j1} and Ω_{k,j2}. (Several branching rules are available for this step.)
3) Select: Solve the convex NLP 81 in the new partitions to obtain OLB_{j1} and OLB_{j2}. Delete a partition if it has no feasible solution.
4) Update: Obtain upper bounds OUB_{j1} and OUB_{j2} for the new partitions if present. Set k = k + 1, update the partition sets, and go to step 1.

Example: To illustrate the spatial branch-and-bound algorithm, consider the global solution of:

  Min f(x) = 5/2 x⁴ − 20 x³ + 55 x² − 57 x
  s.t. 0.5 ≤ x ≤ 2.5   (91)
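A minimal sketch of steps 0–4 applied to this example follows. It replaces the −20x³ term by its secant on each subinterval (the secant overestimates the convex function x³, so the result underestimates f), and, as a simplification, minimizes both the original function and its convex underestimator on a grid rather than with an NLP solver:

```python
def f(x):
    """Objective of problem 91."""
    return 2.5 * x**4 - 20.0 * x**3 + 55.0 * x**2 - 57.0 * x

def f_lower_min(a, b, n=200):
    """Approximate minimum of the convex underestimator of f on [a, b]."""
    def f_L(x):
        # secant of x^3 on [a, b]; overestimates x^3, so f_L <= f on [a, b]
        w = a**3 * (b - x) / (b - a) + b**3 * (x - a) / (b - a)
        return 2.5 * x**4 - 20.0 * w + 55.0 * x**2 - 57.0 * x
    return min(f_L(a + (b - a) * i / n) for i in range(n + 1))

def f_min(a, b, n=200):
    """Upper bound: best objective value found on a grid in [a, b]."""
    return min((f(a + (b - a) * i / n), a + (b - a) * i / n)
               for i in range(n + 1))

def spatial_branch_and_bound(a=0.5, b=2.5, eps=1e-3):
    oub, x_best = f_min(a, b)                # step 0: initial upper bound
    active = [(a, b)]
    while active:
        al, bu = active.pop()
        if f_lower_min(al, bu) >= oub - eps or bu - al < 1e-6:
            continue                         # step 1: fathom the subregion
        ub, x_ub = f_min(al, bu)             # step 4: update the upper bound
        if ub < oub:
            oub, x_best = ub, x_ub
        mid = 0.5 * (al + bu)                # step 2: refine by bisection
        active += [(al, mid), (mid, bu)]
    return x_best, oub

x_star, f_star = spatial_branch_and_bound()
```

With this tolerance the method returns x ≈ 0.875 and an objective near −19.7, in agreement with the global solution quoted for this example; a rigorous implementation would minimize the convex underestimator exactly rather than on a grid.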
Figure 43. Global optimization example with partitions

As seen in Figure 43, this problem has local solutions at x* = 2.5 and at x* = 0.8749. The latter is also the global solution with f(x*) = −19.7. To find the global solution we note that all but the −20x³ term in problem 91 are convex, so we replace this term by a new variable and a linear underestimator within a particular subregion, i.e.:

  Min f_L(x) = 5/2 x⁴ − 20 w + 55 x² − 57 x
  s.t. x_l ≤ x ≤ x_u   (92)
  w = (x_l)³ (x_u − x)/(x_u − x_l) + (x_u)³ (x − x_l)/(x_u − x_l)

In Figure 43 we also propose subregions that are created by simple bisection partitioning rules, and we use a loose bounding tolerance of ε = 0.2. In each partition the lower bound f_L is determined by problem 92 and the upper bound f_U is determined by the local solution of the original problem in the subregion. Figure 44 shows the progress of the spatial branch-and-bound algorithm as the partitions are refined and the bounds are updated. In Figure 43, note the definitions of the partitions for the nodes, and the sequence numbers in each node that show the order in which the partitions are processed. The grayed partitions correspond to the deleted subregions, and at termination of the algorithm we see that f_Lj ≥ f_U − ε (i.e., −19.85 ≥ −19.7 − 0.2), with the gray subregions in Figure 43 still active. Further partitioning in these subregions will allow the lower and upper bounds to converge to a tighter tolerance.

A number of improvements can be made to the bounding, refinement, and selection strategies in the algorithm that accelerate the convergence of this method. A comprehensive discussion of all of these options can be found in [222–224]. Also, a number of efficient global optimization codes have recently been developed, including αBB, BARON, LGO, and OQNLP. An interesting numerical comparison of these and other codes can be found in [225].

10.5. Mixed Integer Programming

Mixed integer programming deals with both discrete and continuous decision variables. For simplicity in the presentation we consider the most common case where the discrete decisions are binary variables, i.e., y_i = 0 or 1, and we consider the mixed integer problem 51. Unlike local optimization methods, there are no optimality conditions, like the KKT conditions, that can be applied directly.

Mixed Integer Linear Programming. If the objective and constraint functions are all linear in problem 51, and we assume 0–1 binary variables for the discrete variables, then this gives rise to a mixed integer linear programming (MILP) problem given by Equation 93.

  Min Z = aᵀx + cᵀy
  s.t. Ax + By ≤ b   (93)
  x ≥ 0, y ∈ {0,1}ᵗ

As is well known, the (MILP) problem is NP-hard. Nevertheless, an interesting theoretical result is that it is possible to transform it into an LP with the convexification procedures proposed by Lovász and Schrijver [226], Sherali and Adams [227], and Balas et al. [228]. These procedures consist of sequentially lifting the original relaxed x − y space into higher dimension and projecting it back to the original space so as to yield, after a finite number of steps, the integer convex hull. Since the transformations have exponential complexity, they are only of theoretical interest, although they can be used as a basis for deriving cutting planes (e.g., the lift-and-project method by [228]).

As for the solution of problem (MILP), it should be noted that this problem becomes an LP problem when the binary variables are relaxed as continuous variables 0 ≤ y ≤ 1. The most common solution algorithms for problem
(MILP) are LP-based branch-and-bound methods, which are enumeration methods that solve LP subproblems at each node of the search tree. This technique was initially conceived by Land and Doig [229], Balas [230], and later formalized by Dakin [231]. Cutting-plane techniques, which were initially proposed by Gomory [232], and consist of successively generating valid inequalities that are added to the relaxed LP, have received renewed interest through the works of Crowder et al. [233], Van Roy and Wolsey [234], and especially the lift-and-project method of Balas et al. [228]. A recent review of branch-and-cut methods can be found in [235]. Finally, Benders decomposition [236] is another technique for solving MILPs in which the problem is successively decomposed into LP subproblems for fixed 0–1 variables and a master problem for updating the binary variables.

Figure 44. Spatial branch-and-bound sequence for global optimization example

LP-Based Branch and Bound Method. We briefly outline in this section the basic ideas behind the branch-and-bound method for solving MILP problems. Note that if we relax the t binary variables by the inequalities 0 ≤ y ≤ 1, then problem 93 becomes a linear program with a (global) solution that is a lower bound to the MILP 93. There are specific MILP classes in which the LP relaxation of 93 has the same solution as the MILP. Among these problems is the well-known assignment problem. Other MILPs that can be solved with efficient special-purpose methods are the knapsack problem, the set-covering and set-partitioning problems, and the traveling salesman problem. See [237] for a detailed treatment of these problems.

The branch-and-bound algorithm for solving MILP problems [231] is similar to the spatial branch-and-bound method of the previous section that explores the search space. As seen in Figure 45, binary variables are successively fixed to define the search tree, and a number of bounding properties are exploited in order to fathom nodes and to avoid exhaustive enumeration of all the nodes in the tree.

Figure 45. Branch and bound sequence for MILP example

The basic idea in the search is as follows. The top, or root node, in the tree is the solution to the linear programming relaxation of 93. If all the
y variables take on 0–1 values, the MILP problem is solved, and no further search is required. If at least one of the binary variables yields a fractional value, the solution of the LP relaxation yields a lower bound to problem 93. The search then consists of branching on that node by fixing a particular binary variable to 0 or 1, and the corresponding restricted LP relaxations are solved that in turn yield a lower bound for any of their descendant nodes. In particular, the following properties are exploited in the branch-and-bound search:

– Any node (initial, intermediate, leaf node) whose LP solution is feasible with integer values of the binary variables corresponds to a valid upper bound to the solution of the MILP problem 93.
– Any intermediate node with an infeasible LP solution has infeasible leaf nodes and can be fathomed (i.e., all remaining children of this node can be eliminated).
– If the LP solution at an intermediate node is not less than an existing integer solution, then the node can be fathomed.

These properties lead to pruning of the nodes in the search tree. Branching then continues in the tree until the upper and lower bounds converge.

The basic concepts outlined above lead to a branch-and-bound algorithm with the following features. LP solutions at intermediate nodes are relatively easy to calculate since they can be effectively updated with the dual simplex method. The selection of binary variables for branching, known as the branching rule, is based on a number of different possible criteria; for instance, choosing the fractional variable closest to 0.5, or the one involving the largest of the smallest pseudocosts for each fractional variable. Branching strategies to navigate the tree take a number of forms. More common depth-first strategies expand the most recent node to a leaf node or infeasible node and then backtrack to other branches in the tree. These strategies are simple to program and require little storage of past nodes. On the other hand, breadth-first strategies expand all the nodes at each level of the tree, select the node with the lowest objective function, and then proceed until the leaf nodes are reached. Here, more storage is required, but generally fewer nodes are evaluated than in depth-first search. In practice, a combination of both strategies is commonly used: branch on the dichotomy 0–1 at each node (i.e., like breadth-first), but expand as in depth-first. Additional description of these strategies can be found in [237].

Example: To illustrate the branch-and-bound approach, we consider the MILP:

  Min Z = x + y1 + 2y2 + 3y3
  s.t. −x + 3y1 + y2 + 2y3 ≤ 0
       −4y1 − 8y2 − 3y3 ≤ −10   (94)
       x ≥ 0, y1, y2, y3 ∈ {0,1}

The solution to problem 94 is given by x = 4, y1 = 1, y2 = 1, y3 = 0, and Z = 7. Here we use a depth-first strategy and branch on the variables closest to zero or one. Figure 45 shows the progress of the branch-and-bound algorithm as the binary variables are selected and the bounds are updated. The sequence numbers for each node in Figure 45 show the order in which they are processed. The grayed partitions correspond to the deleted nodes, and at termination of the algorithm we see that Z = 7 and an integer solution is obtained at an intermediate node where coincidentally y3 = 0.

Mixed integer linear programming (MILP) methods and codes have been available and applied to many practical problems for more than twenty years (e.g., [237]). The LP-based branch-and-bound method [231] has been implemented in powerful codes such as OSL, CPLEX, and XPRESS. Recent trends in MILP include the development of branch-and-price [238] and branch-and-cut methods such as the lift-and-project method [228], in which cutting planes are generated as part of the branch-and-bound enumeration. See also [235] for a recent review on MILP. A description of several MILP solvers can also be found in the NEOS Software Guide.

Mixed Integer Nonlinear Programming. The most basic form of an MINLP problem when represented in algebraic form is as follows:

  min z = f(x,y)
  s.t. g_j(x,y) ≤ 0,  j ∈ J   (P1) (95)
  x ∈ X, y ∈ Y

where f(·), g(·) are convex, differentiable functions, J is the index set of inequalities, and x and y are the continuous and discrete
variables, respectively. The set X is commonly assumed to be a convex compact set, e.g., X = {x | x ∈ R^n, Dx ≤ d, x^L ≤ x ≤ x^U}; the discrete set Y corresponds to a polyhedral set of integer points, Y = {y | y ∈ Z^m, Ay ≤ a}, which in most applications is restricted to 0–1 values, y ∈ {0,1}^m. In most applications of interest the objective and constraint functions f(·), g(·) are linear in y (e.g., fixed cost charges and mixed-logic constraints): f(x,y) = cᵀy + r(x), g(x,y) = By + h(x).

Methods that have addressed the solution of problem 95 include the branch-and-bound method (BB) [239–243], generalized Benders decomposition (GBD) [244], outer approximation (OA) [245–247], LP/NLP-based branch-and-bound [248], and the extended cutting plane method (ECP) [249].

There are three basic NLP subproblems that can be considered for problem 95:

a) NLP relaxation:

  Min Z_LB^k = f(x,y)
  s.t. g_j(x,y) ≤ 0,  j ∈ J
  x ∈ X, y ∈ Y_R   (NLP1) (96)
  y_i ≤ α_i^k,  i ∈ I_FL^k
  y_i ≥ β_i^k,  i ∈ I_FU^k

where Y_R is the continuous relaxation of the set Y, and I_FL^k, I_FU^k are index subsets of the integer variables y_i, i ∈ I, which are restricted to lower and upper bounds α_i^k, β_i^k at the k-th step of a branch-and-bound enumeration procedure. Note that α_i^k = ⌊y_i^l⌋, β_i^k = ⌈y_i^m⌉, l < k, m < k, where y_i^l, y_i^m are noninteger values at a previous step, and ⌊·⌋, ⌈·⌉ are the floor and ceiling functions, respectively.

Also note that if I_FU^k = I_FL^k = ∅ (k = 0), problem 96 corresponds to the continuous NLP relaxation of problem 95. Except for few and special cases, the solution to this problem yields in general a noninteger vector for the discrete variables. Problem 96 also corresponds to the k-th step in a branch-and-bound search. The optimal objective function Z_LB^0 provides an absolute lower bound to problem 95; for m ≥ k, the bound is only valid for I_FL^k ⊆ I_FL^m, I_FU^k ⊆ I_FU^m.

b) NLP subproblem for fixed y^k:

  Min Z_U^k = f(x, y^k)
  s.t. g_j(x, y^k) ≤ 0,  j ∈ J   (97)
  x ∈ X

which yields an upper bound Z_U^k to problem 95 provided problem 97 has a feasible solution. When this is not the case, we consider the next subproblem:

c) Feasibility subproblem for fixed y^k:

  Min u
  s.t. g_j(x, y^k) ≤ u,  j ∈ J   (98)
  x ∈ X, u ∈ R¹

which can be interpreted as the minimization of the infinity-norm measure of infeasibility of the corresponding NLP subproblem. Note that for an infeasible subproblem the solution of problem 98 yields a strictly positive value of the scalar variable u.

Figure 46. Geometrical interpretation of linearizations in master problem 99
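The geometrical idea behind Figure 46 — supporting hyperplanes of a convex function underestimate it everywhere, so a cutting-plane master problem yields valid, nondecreasing lower bounds — can be illustrated for a single convex function. The function, the grid, and the linearization points below are illustrative assumptions:

```python
def f(x):                         # convex function (illustrative)
    return (x - 2.0) ** 2 + 1.0

def fprime(x):
    return 2.0 * (x - 2.0)

def oa_lower_bound(points, xs):
    """Minimum over xs of the pointwise max of linearizations at the given points."""
    def piecewise(x):
        return max(f(xk) + fprime(xk) * (x - xk) for xk in points)
    return min(piecewise(x) for x in xs)

xs = [i / 10 for i in range(0, 51)]      # grid on [0, 5]
true_min = min(f(x) for x in xs)         # = 1.0, attained at x = 2

# lower bounds from master problems with 1, 2, and 3 linearization points
bounds = [oa_lower_bound([0.0], xs),
          oa_lower_bound([0.0, 5.0], xs),
          oa_lower_bound([0.0, 5.0, 2.5], xs)]

# each master problem gives a valid lower bound, nondecreasing in K
assert all(b <= true_min + 1e-9 for b in bounds)
assert bounds[0] <= bounds[1] <= bounds[2]
```

Each additional linearization tightens the piecewise-linear underestimator, so the computed bounds increase monotonically toward the true minimum without ever exceeding it.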
The convexity of the nonlinear functions is exploited by replacing them with supporting hyperplanes, which are generally, but not necessarily, derived at the solution of the NLP subproblems. In particular, the new values y^K (or (x^K, y^K)) are obtained from a cutting-plane MILP problem that is based on the K points (x^k, y^k), k = 1,…,K, generated at the K previous steps:

  Min Z_L^K = α
  s.t. α ≥ f(x^k, y^k) + ∇f(x^k, y^k)ᵀ [x − x^k; y − y^k]     k = 1,…,K
       g_j(x^k, y^k) + ∇g_j(x^k, y^k)ᵀ [x − x^k; y − y^k] ≤ 0,  j ∈ J^k
  x ∈ X, y ∈ Y   (99)

where J^k ⊆ J. When only a subset of linearizations is included, these commonly correspond to violated constraints in problem 95. Alternatively, it is possible to include all linearizations in problem 99. The solution of 99 yields a valid lower bound Z_L^k to problem 95. This bound is nondecreasing with the number of linearization points K. Note that since the functions f(x,y) and g(x,y) are convex, the linearizations in problem 99 correspond to outer approximations of the nonlinear feasible region in problem 95. A geometrical interpretation is shown in Figure 46, where it can be seen that the convex objective function is being underestimated, and the convex feasible region overestimated with these linearizations.

Algorithms. The different methods can be classified according to their use of the subproblems 96–98, and the specific specialization of the MILP problem 99, as seen in Figure 47. In the GBD and OA methods (case b), as well as in the LP/NLP-based branch-and-bound method (case d), problem 98 is solved if infeasible subproblems are found. Each of the methods is explained next in terms of the basic subproblems.

Branch and Bound. While the earlier work in branch and bound (BB) was aimed at linear problems [231], this method can also be applied to nonlinear problems [239–243]. The BB method starts by solving first the continuous NLP relaxation. If all discrete variables take integer values, the search is stopped. Otherwise, a tree search is performed in the space of the integer variables y_i, i ∈ I. These are successively fixed at the corresponding nodes of the tree, giving rise to relaxed NLP subproblems of the form (NLP1) which yield lower bounds for the subproblems in the descendant nodes. Fathoming of nodes occurs when the lower bound exceeds the current upper bound, when the subproblem is infeasible, or when all integer variables y_i take on discrete values. The last-named condition yields an upper bound to the original problem.

Figure 47. Major steps in the different algorithms

The BB method is generally only attractive if the NLP subproblems are relatively inexpensive to solve, or when only few of them need to be solved. This could be either because of the low dimensionality of the discrete variables, or because the integrality gap of the continuous NLP relaxation of 95 is small.

Outer Approximation [245–247]. The OA method arises when NLP subproblems 97 and MILP master problems 99 with J^k = J are solved successively in a cycle of iterations to generate the points (x^k, y^k). Since the master problem 99 requires the solution of all feasible discrete variables y^k, the following MILP
relaxation is considered, assuming that the solution of K different NLP subproblems is available (K = |KFS ∪ KIS|, where KFS is the set of solutions from problem 97 and KIS the set of solutions from problem 98):

$$
\begin{aligned}
\min\;& Z_L^K=\alpha\\
\text{s.t.}\;& \alpha \ge f\left(x^k,y^k\right)+\nabla f\left(x^k,y^k\right)^T\begin{bmatrix} x-x^k\\ y-y^k\end{bmatrix}\\
& g_j\left(x^k,y^k\right)+\nabla g_j\left(x^k,y^k\right)^T\begin{bmatrix} x-x^k\\ y-y^k\end{bmatrix}\le 0 \quad j\in J
\end{aligned}
\quad k=1,\ldots,K \qquad x\in X,\; y\in Y
\tag{100}
$$

Given the assumption on convexity of the functions f(x,y) and g(x,y), it can be proved that the solution of problem 100, $Z_L^K$, corresponds to a lower bound of the solution of problem 95. Note that this property can be verified in Figure 46. Also, since function linearizations are accumulated as iterations proceed, the master problems 100 yield a nondecreasing sequence of lower bounds $Z_L^1\le\cdots\le Z_L^k\le\cdots\le Z_L^K$.

The OA algorithm as proposed by Duran and Grossmann consists of performing a cycle of major iterations, k = 1,…,K, in which problem 97 is solved for the corresponding $y^k$, and the relaxed MILP master problem 100 is updated and solved with the corresponding function linearizations at the point $(x^k, y^k)$, for which the corresponding subproblem NLP2 is solved. If feasible, the solution to that problem is used to construct the first MILP master problem; otherwise a feasibility problem 98 is solved to generate the corresponding continuous point [247]. The initial MILP master problem 100 then generates a new vector of discrete variables. The subproblems 97 yield an upper bound that is used to define the best current solution, $UB^k=\min_k\left\{Z_U^k\right\}$. The cycle of iterations is continued until this upper bound and the lower bound of the relaxed master problem $Z_L^k$ are within a specified tolerance. One way to avoid solving the feasibility problem 98 in the OA algorithm when the discrete variables in problem 95 are 0–1 is to introduce the following integer cut, whose objective is to make infeasible the choice of the previous 0–1 values generated at the K previous iterations [245]:

$$
\sum_{i\in B^k} y_i-\sum_{i\in N^k} y_i\le \left|B^k\right|-1 \qquad k=1,\ldots,K \tag{101}
$$

where $B^k=\left\{i\mid y_i^k=1\right\}$, $N^k=\left\{i\mid y_i^k=0\right\}$, k = 1,…,K. This cut becomes very weak as the dimensionality of the 0–1 variables increases. However, it has the useful feature of ensuring that new 0–1 values are generated at each major iteration. In this way the algorithm will not return to a previous integer point when convergence is achieved. Using the above integer cut, termination takes place as soon as $Z_L^K\ge UB^K$.

The OA algorithm trivially converges in one iteration if f(x,y) and g(x,y) are linear. This property simply follows from the fact that if f(x,y) and g(x,y) are linear in x and y, the MILP master problem 100 is identical to the original problem 95. It is also important to note that the MILP master problem need not be solved to optimality.

Generalized Benders Decomposition (GBD) [244]. The GBD method [250] is similar to the outer-approximation method. The difference arises in the definition of the MILP master problem 99. In the GBD method only the active inequalities are considered, $J^k=\left\{j\mid g_j\left(x^k,y^k\right)=0\right\}$, and the set x ∈ X is disregarded. In particular, consider an outer approximation given at a point $(x^k, y^k)$:

$$
\begin{aligned}
\alpha &\ge f\left(x^k,y^k\right)+\nabla f\left(x^k,y^k\right)^T\begin{bmatrix} x-x^k\\ y-y^k\end{bmatrix}\\
0 &\ge g\left(x^k,y^k\right)+\nabla g\left(x^k,y^k\right)^T\begin{bmatrix} x-x^k\\ y-y^k\end{bmatrix}
\end{aligned}
\tag{102}
$$

where for a fixed $y^k$ the point $x^k$ corresponds to the optimal solution to problem 97. Making use of the Karush–Kuhn–Tucker conditions and eliminating the continuous variables x, the inequalities in 102 can be reduced as follows [248]:

$$
\alpha \ge f\left(x^k,y^k\right)+\nabla_y f\left(x^k,y^k\right)^T\left(y-y^k\right)+\left(\mu^k\right)^T\left[g\left(x^k,y^k\right)+\nabla_y g\left(x^k,y^k\right)^T\left(y-y^k\right)\right]
\tag{103}
$$

in which $\mu^k$ is the vector of multipliers of subproblem 97; this is the Lagrangian cut projected in the y-space. It can be interpreted as a surrogate constraint of the equations in 102, because it is obtained as a linear combination of these.
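As an illustration of how cut 101 acts, the following sketch (helper names are our own, not from the chapter) builds the cut coefficients from a previously visited 0–1 vector and checks which candidate points survive:

```python
# Sketch of the integer cut (Eq. 101): for a previous 0-1 vector y^k,
#   sum_{i in B^k} y_i - sum_{i in N^k} y_i <= |B^k| - 1
# cuts off exactly that vector and no other 0-1 point.

def integer_cut(y_prev):
    """Return (coeff, rhs) so that the cut reads coeff . y <= rhs."""
    coeff = [1 if yi == 1 else -1 for yi in y_prev]
    rhs = sum(y_prev) - 1          # |B^k| - 1
    return coeff, rhs

def satisfies(cut, y):
    coeff, rhs = cut
    return sum(c * yi for c, yi in zip(coeff, y)) <= rhs

cut = integer_cut([1, 0, 1])       # previous iterate y^k = (1, 0, 1)
print(satisfies(cut, [1, 0, 1]))   # False: the old point is now infeasible
print(satisfies(cut, [1, 1, 1]))   # True: every other 0-1 point survives
```

The cut excludes only the vector it was built from, which is why it weakens as the number of 0–1 variables grows: one cut per iteration removes a single point from an exponentially large set.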
For the case when there is no feasible solution to problem 97, then if the point $x^k$ is obtained from the feasibility subproblem (NLPF), the following feasibility cut projected in y can be obtained using a similar procedure:

$$
\left(\lambda^k\right)^T\left[g\left(x^k,y^k\right)+\nabla_y g\left(x^k,y^k\right)^T\left(y-y^k\right)\right]\le 0
\tag{104}
$$

In this way, problem 99 reduces to a problem projected in the y-space:

$$
\begin{aligned}
\min\;& Z_L^K=\alpha\\
\text{s.t.}\;& \alpha\ge f\left(x^k,y^k\right)+\nabla_y f\left(x^k,y^k\right)^T\left(y-y^k\right)\\
&\quad+\left(\mu^k\right)^T\left[g\left(x^k,y^k\right)+\nabla_y g\left(x^k,y^k\right)^T\left(y-y^k\right)\right] \qquad k\in KFS\\
& \left(\lambda^k\right)^T\left[g\left(x^k,y^k\right)+\nabla_y g\left(x^k,y^k\right)^T\left(y-y^k\right)\right]\le 0 \qquad k\in KIS\\
& y\in Y,\; \alpha\in R^1
\end{aligned}
\tag{105}
$$

where KFS is the set of feasible subproblems 97 and KIS the set of infeasible subproblems whose solution is given by problem 98. Also |KFS ∪ KIS| = K. Since master problem 105 can be derived from master problem 100, in the context of problem 95, GBD can be regarded as a particular case of the outer-approximation algorithm. In fact one can prove that, given the same set of K subproblems, the lower bound predicted by the relaxed master problem 100 is greater than or equal to that predicted by the relaxed master problem 105 [245]. This proof follows from the fact that the Lagrangian and feasibility cuts 103 and 104 are surrogates of the outer approximations 102. Given the fact that the lower bounds of GBD are generally weaker, this method commonly requires a larger number of cycles or major iterations. As the number of 0–1 variables increases this difference becomes more pronounced. This is to be expected since only one new cut is generated per iteration. Therefore, user-supplied constraints must often be added to the master problem to strengthen the bounds. Also, it is sometimes possible to generate multiple cuts from the solution of an NLP subproblem in order to strengthen the lower bound [251]. As for the OA algorithm, the trade-off is that while it generally predicts stronger lower bounds than GBD, the computational cost for solving the master problem (M-OA) is greater, since the number of constraints added per iteration is equal to the number of nonlinear constraints plus the nonlinear objective.

If problem 95 has zero integrality gap, the GBD algorithm converges in one iteration once the optimal (x*, y*) is found [252]. This property implies that the only case in which one can expect the GBD method to terminate in one iteration is that in which the initial discrete vector is the optimum, and when the objective value of the NLP relaxation of problem 95 is the same as the objective of the optimal mixed-integer solution.

Extended Cutting Plane (ECP) [249]. The ECP method, which is an extension of Kelley's cutting-plane algorithm for convex NLP [253], does not rely on the use of NLP subproblems and algorithms. It relies only on the iterative solution of problem 99 by successively adding a linearization of the most violated constraint at the predicted point $(x^k, y^k)$:

$$
J^k=\left\{\hat{j}\in\arg\max_{j\in J}\; g_j\left(x^k,y^k\right)\right\}
$$

Convergence is achieved when the maximum constraint violation lies within the specified tolerance. The optimal objective value of problem 99 yields a nondecreasing sequence of lower bounds. It is of course also possible to either add to problem 99 linearizations of all the violated constraints in the set $J^k$, or linearizations of all the nonlinear constraints j ∈ J. In the ECP method the objective must be defined as a linear function, which can easily be accomplished by introducing a new variable to transfer nonlinearities in the objective as an inequality.

Note that since the discrete and continuous variables are converged simultaneously, the ECP method may require a large number of iterations. However, this method shares with the OA method Property 2 for the limiting case when all the functions are linear.

LP/NLP-Based Branch and Bound [248]. This method is similar in spirit to a branch-and-cut method, and avoids the complete solution of the MILP master problem (M-OA) at each major iteration. The method starts by solving an initial NLP subproblem, which is linearized as in (M-OA). The basic idea consists then of performing an LP-based branch-and-bound method for (M-OA) in which NLP subproblems 97 are solved at those nodes in which feasible integer solutions are found. By updating the representation of the master problem in the current open nodes of the tree with the addition of the corresponding linearizations, the need to restart the tree search is avoided.

This method can also be applied to the GBD and ECP methods. The LP/NLP method commonly reduces quite significantly the number of nodes to be enumerated. The trade-off, however, is that the number of NLP subproblems may increase. Computational experience has indicated that often the number of NLP subproblems remains unchanged. Therefore, this method is better suited for problems in which the bottleneck corresponds to the solution of the MILP master problem. Leyffer [254] has reported substantial savings with this method.
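The linearization step shared by all these master problems is mechanical. A minimal sketch (hypothetical convex constraints with hand-coded gradients; function names are our own) that builds an outer-approximation cut and applies the ECP most-violated-constraint rule:

```python
# Sketch: outer-approximation cut  g_j(v^k) + grad g_j(v^k)^T (v - v^k) <= 0
# rewritten as  a . v <= b,  plus the ECP rule "linearize the most violated
# constraint".  The two constraints below are hypothetical convex examples.

gs = [
    (lambda v: v[0] ** 2 + v[1] - 4.0, lambda v: [2.0 * v[0], 1.0]),
    (lambda v: v[0] + v[1] ** 2 - 3.0, lambda v: [1.0, 2.0 * v[1]]),
]

def oa_cut(g, grad, vk):
    """Return (a, b) with the linearization a . v <= b; valid for convex g."""
    g0, g1 = g(vk), grad(vk)
    b = -g0 + sum(gi * vi for gi, vi in zip(g1, vk))
    return g1, b

def most_violated(gs, vk):
    """Index of the constraint with the largest value g_j(v^k)."""
    return max(range(len(gs)), key=lambda j: gs[j][0](vk))

vk = [2.0, 2.0]
j = most_violated(gs, vk)      # g_0 = 2, g_1 = 3, so constraint 1 is chosen
a, b = oa_cut(*gs[j], vk)
print(j, a, b)
```

For convex g the linearization never cuts into the feasible region, which is the geometric reason why OA and ECP master problems deliver valid lower bounds.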
Example: Consider the following MINLP problem, whose objective function and constraints contain nonlinear convex terms:

$$
\begin{aligned}
\min\;& Z=y_1+1.5\,y_2+0.5\,y_3+x_1^2+x_2^2\\
\text{s.t.}\;& \left(x_1-2\right)^2-x_2\le 0\\
& x_1-2\,y_1\ge 0\\
& x_1-x_2-4\left(1-y_2\right)\le 0\\
& x_2-y_2\ge 0\\
& x_1+x_2\ge 3\,y_3\\
& y_1+y_2+y_3\ge 1\\
& 0\le x_1\le 4,\quad 0\le x_2\le 4\\
& y_1,y_2,y_3\in\{0,1\}
\end{aligned}
\tag{106}
$$

The optimal solution of this problem is given by y1 = 0, y2 = 1, y3 = 0, x1 = 1, x2 = 1, Z = 3.5. Figure 48 shows the progress of the iterations with the OA and GBD methods, while the table lists the number of NLP and MILP subproblems that are solved with each of the methods. For the case of the MILP problems the total number of LPs solved at the branch-and-bound nodes are also reported.

Figure 48. Progress of iterations with OA and GBD methods, and number of subproblems for the BB, OA, GBD, and ECP methods.

Extensions of MINLP Methods. Extensions of the methods described above include a quadratic approximation to (RM-OAF) [247] using an approximation of the Hessian matrix. The quadratic approximations can help to reduce the number of major iterations, since an improved representation of the continuous space is obtained. This, however, comes at the price of having to solve an MIQP instead of an MILP at each iteration.

The master problem 100 can involve a rather large number of constraints, due to the accumulation of linearizations. One option is to keep only the last linearization point, but this can lead to nonconvergence even in convex problems, since then the monotonic increase of the lower bound is not guaranteed. As shown [248], linear approximations to the nonlinear objective and constraints can be aggregated with an MILP master problem that is a hybrid of the GBD and OA methods.

For the case when linear equalities of the form h(x, y) = 0 are added to problem 95 there is no major difficulty, since these are invariant to the linearization points. If the equations are nonlinear, however, there are two difficulties. First, it is not possible to enforce the linearized equalities at K points. Second, the nonlinear equations may generally introduce nonconvexities, unless they relax as convex inequalities [255]. Kocis and Grossmann [256] proposed an equality relaxation strategy in which the nonlinear equalities are replaced by the inequalities

$$
T^k\,\nabla h\left(x^k,y^k\right)^T\begin{bmatrix} x-x^k\\ y-y^k\end{bmatrix}\le 0
\tag{107}
$$
where $T^k=\left\{t_{ii}^k\right\}$ and $t_{ii}^k=\mathrm{sign}\left(\lambda_i^k\right)$, in which $\lambda_i^k$ is the multiplier associated with the equation $h_i(x, y)=0$. Note that if these equations relax as the inequalities h(x, y) ≤ 0 for all y and h(x, y) is convex, this is a rigorous procedure. Otherwise, nonvalid supports may be generated. Also, in the master problem 105 of GBD, no special provision is required to handle equations, since these are simply included in the Lagrangian cuts. However, similar difficulties as in OA arise if the equations do not relax as convex inequalities.

When f(x,y) and g(x,y) are nonconvex in problem 95, or when nonlinear equalities h(x, y) = 0 are present, two difficulties arise. First, the NLP subproblems 96 – 98 may not have a unique local optimum solution. Second, the master problem (M-MIP) and its variants (e.g., M-MIPF, M-GBD, M-MIQP) do not guarantee a valid lower bound $Z_L^K$ or a valid bounding representation with which the global optimum may be cut off.

Rigorous global optimization approaches for addressing nonconvexities in MINLP problems can be developed when special structures are assumed in the continuous terms (e.g., bilinear, linear fractional, concave separable). Specifically, the idea is to use convex envelopes or underestimators to formulate lower-bounding convex MINLP problems. These are then combined with global optimization techniques for continuous variables [206, 207, 209, 214, 216, 222, 257], which usually take the form of spatial branch-and-bound methods. The lower-bounding MINLP problem has the general form:

$$
\begin{aligned}
\min\;& Z=\bar{f}(x,y)\\
\text{s.t.}\;& \bar{g}_j(x,y)\le 0 \quad j\in J\\
& x\in X,\; y\in Y
\end{aligned}
\tag{108}
$$

where $\bar{f}$, $\bar{g}$ are valid convex underestimators such that $\bar{f}(x,y)\le f(x,y)$ and the inequalities $\bar{g}(x,y)\le 0$ are satisfied if g(x,y) ≤ 0. A typical example of convex underestimators are the convex envelopes for bilinear terms [208].

Examples of global optimization methods for MINLP problems include the branch-and-reduce method [209, 210], the α-BB method [212], the reformulation/spatial branch-and-bound search method [258], the branch-and-cut method [259], and the disjunctive branch-and-bound method [260]. All these methods rely on a branch-and-bound procedure. The difference lies in how to perform the branching on the discrete and continuous variables. Some methods perform the spatial tree enumeration on both the discrete and continuous variables of problem 108. Other methods perform a spatial branch and bound on the continuous variables and solve the corresponding MINLP problem 108 at each node using any of the methods reviewed above. Finally, other methods branch on the discrete variables of problem 108, and switch to a spatial branch and bound on nodes where a feasible value for the discrete variables is found. The methods also rely on procedures for tightening the lower and upper bounds of the variables, since these have a great effect on the quality of the underestimators. Since the tree searches are not finite (except for ε-convergence), these methods can be computationally expensive. However, their major advantage is that they can rigorously find the global optimum. Specific cases of nonconvex MINLP problems have been handled. An example is the work of Pörn and Westerlund [261], who addressed the solution of MINLP problems with pseudoconvex objective function and convex inequalities through an extension of the ECP method.

The other option for handling nonconvexities is to apply a heuristic strategy to try to reduce as much as possible the effect of nonconvexities. While not being rigorous, this requires much less computational effort. We describe here an approach for reducing the effect of nonconvexities at the level of the MILP master problem.

Viswanathan and Grossmann [262] proposed to introduce slacks in the MILP master problem to reduce the likelihood of cutting off feasible solutions. This master problem (augmented penalty/equality relaxation) has the form:

$$
\begin{aligned}
\min\;& Z^K=\alpha+\sum_{k=1}^{K}\left(w_p^k\,p^k+w_q^k\,q^k\right)\\
\text{s.t.}\;& \alpha\ge f\left(x^k,y^k\right)+\nabla f\left(x^k,y^k\right)^T\begin{bmatrix} x-x^k\\ y-y^k\end{bmatrix}\\
& T^k\,\nabla h\left(x^k,y^k\right)^T\begin{bmatrix} x-x^k\\ y-y^k\end{bmatrix}\le p^k \qquad k=1,\ldots,K\\
& g\left(x^k,y^k\right)+\nabla g\left(x^k,y^k\right)^T\begin{bmatrix} x-x^k\\ y-y^k\end{bmatrix}\le q^k\\
& \sum_{i\in B^k} y_i-\sum_{i\in N^k} y_i\le\left|B^k\right|-1 \qquad k=1,\ldots,K\\
& x\in X,\; y\in Y,\; \alpha\in R^1,\; p^k, q^k\ge 0
\end{aligned}
\tag{109}
$$
teger problems, the following major approaches
where wpk , wqk are weights that are chosen suf- based on logic-based techniques have emerged:
ciently large (e.g., 1000 times the magnitude of generalized disjunctive programming 110 [266],
the Lagrange multiplier). Note that if the func- mixed-logic linear programming (MLLP) [267],
tions are convex then the MILP master problem and constraint programming (CP) [268]. The
109 predicts rigorous lower bounds to problem motivations for this logic-based modeling has
95 since all the slacks are set to zero. been to facilitate the modeling, reduce the com-
binatorial search effort, and improve the han-
Computer Codes for MINLP. Computer dling of nonlinearities. In this section we mostly
codes for solving MINLP problems include the concentrate on generalized disjunctive program-
following. The program DICOPT [262] is an ming and provide a brief reference to constraint
MINLP solver that is available in the modeling programming. A general review of logic-based
system GAMS [263]. The code is based on the optimization can be found in [269, 309].
master problem 109 and the NLP subproblems Generalized disjunctive programming in 110
97. This code also uses relaxed 96 to generate the [266] is an extension of disjunctive program-
rst linearization for the above master problem, ming [270] that provides an alternative way of
with which the user need not specify an initial modeling MILP and MINLP problems. The gen-
integer value. Also, since bounding properties of eral formulation 110 is as follows:
problem 109 cannot be guaranteed, the search 
Min Z= kK ck +f (x)
for nonconvex problems is terminated when
there is no further improvement in the feasible s.t. g (x) 0
NLP subproblems. This is a heuristic that works
Yjk
reasonably well in many problems. Codes that
(110)
implement the branch-and-bound method using hjk (x) 0 , kK
jJk
subproblems 96 include the code MINLP BB,
ck =jk
which is based on an SQP algorithm [243] and
(Y ) =True
is available in AMPL, the code BARON [264],
which also implements global optimization ca- xRn , cRm , Y {true, false}m
pabilities, and the code SBB, which is available
in GAMS [263]. The code a-ECP implements where Y jk are the Boolean variables that decide
the extended cutting-plane method [249], in- whether a term j in a disjunction k K is true or
cluding the extension by Porn and Westerlund
false, and x are continuous variables. The objec-
[261]. Finally, the code MINOPT [265] also im- tive function involves the term f (x) for the con-
plements the OA and GBD methods, and applies tinuous variables and the charges ck that depend
them to mixed-integer dynamic optimization on the discrete choices in each disjunction k K.
problems. It is difcult to derive general con- The constraints g(x) 0 hold regardless of the
clusions on the efciency and reliability of all discrete choice, and hjk (x) 0 are conditional
these codes and their corresponding methods, constraints that hold when Y jk is true in the j-th
since no systematic comparison has been made. term of the k-th disjunction. The cost variables
However, one might anticipate that branch-and- ck correspond to the xed charges, and are equal
bound codes are likely to perform better if the to jk if the Boolean variable Y jk is true. (Y )
relaxation of the MINLP is tight. Decomposi- are logical relations for the Boolean variables
tion methods based on OA are likely to perform expressed as propositional logic.
better if the NLP subproblems are relatively ex- Problem 110 can be reformulated as an
pensive to solve, while GBD can perform with MINLP problem by replacing the Boolean vari-
some efciency if the MINLP is tight and there ables by binary variables yjk ,
are many discrete variables. ECP methods tend
to perform well on mostly linear problems.
$$
\begin{aligned}
\min\;& Z=\sum_{k\in K}\sum_{j\in J_k}\gamma_{jk}\,y_{jk}+f(x)\\
\text{s.t.}\;& g(x)\le 0\\
& h_{jk}(x)\le M_{jk}\left(1-y_{jk}\right) \quad j\in J_k,\; k\in K\\
& \sum_{j\in J_k} y_{jk}=1 \quad k\in K\\
& Ay\le a\\
& 0\le x\le x^U,\quad y_{jk}\in\{0,1\} \quad j\in J_k,\; k\in K
\end{aligned}
\quad\text{(BM)}\tag{111}
$$

where the disjunctions are replaced by Big-M constraints, which involve a parameter $M_{jk}$ and binary variables $y_{jk}$. The propositional logic statements $\Omega(Y)=\mathrm{True}$ are replaced by the linear constraints $Ay\le a$ [271, 272]. Here we assume that x is a nonnegative variable with finite upper bound $x^U$. An important issue in model 111 is how to specify a valid value for the Big-M parameter $M_{jk}$. If the value is too small, then feasible points may be cut off. If $M_{jk}$ is too large, then the continuous relaxation might be too loose and yield poor lower bounds. Therefore, finding the smallest valid value for $M_{jk}$ is desired. For linear constraints, one can use the upper and lower bounds of the variable x to calculate the maximum value of each constraint, which then can be used to calculate a valid value of $M_{jk}$. For nonlinear constraints, one can in principle maximize each constraint over the feasible region, which is a nontrivial calculation.
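For a linear constraint inside a disjunction, the bound-based recipe just described can be written out directly: maximize the left-hand side term by term over the box. A small sketch (helper names are our own):

```python
# Tightest valid big-M for a linear constraint a.x <= b over box bounds:
# M = max over the box of (a.x - b); each term contributes its upper bound,
# i.e., a_i * hi_i if a_i >= 0, else a_i * lo_i.

def big_m(a, b, lo, hi):
    """Valid M_jk for a.x <= b with lo <= x <= hi (componentwise)."""
    m = -b
    for ai, li, ui in zip(a, lo, hi):
        m += ai * ui if ai >= 0 else ai * li
    return m

# Constraint x1 + 2*x2 <= 3 with 0 <= x1, x2 <= 4:
# worst case is x1 = x2 = 4, so M = 4 + 8 - 3 = 9.
print(big_m([1.0, 2.0], 3.0, [0.0, 0.0], [4.0, 4.0]))
```

With this M the relaxed constraint a·x ≤ b + M(1 − y) is guaranteed inactive when y = 0 anywhere in the box, yet no larger than necessary, which keeps the continuous relaxation from becoming needlessly loose.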
Lee and Grossmann [273] have derived the convex hull relaxation of problem 110. The basic idea is as follows. Consider a disjunction k ∈ K that has convex constraints:

$$
\bigvee_{j\in J_k}\begin{bmatrix} Y_{jk}\\ h_{jk}(x)\le 0\\ c=\gamma_{jk}\end{bmatrix}
\qquad 0\le x\le x^U,\; c\ge 0
\tag{112}
$$

where $h_{jk}(x)$ are assumed to be convex and bounded over x. The convex hull relaxation of disjunction 112 [242] is given as follows:

$$
\begin{aligned}
& x=\sum_{j\in J_k} v^{jk},\qquad c=\sum_{j\in J_k}\lambda_{jk}\gamma_{jk}\\
& 0\le v^{jk}\le\lambda_{jk}\,x_{jk}^U \quad j\in J_k\\
& \sum_{j\in J_k}\lambda_{jk}=1,\quad 0\le\lambda_{jk}\le 1 \quad j\in J_k\\
& \lambda_{jk}\,h_{jk}\!\left(v^{jk}/\lambda_{jk}\right)\le 0 \quad j\in J_k\\
& x, c, v^{jk}\ge 0 \quad j\in J_k
\end{aligned}
\quad\text{(CH)}\tag{113}
$$

where $v^{jk}$ are disaggregated variables that are assigned to each term of the disjunction k ∈ K, and $\lambda_{jk}$ are the weight factors that determine the feasibility of the disjunctive term. Note that when $\lambda_{jk}$ is 1, the j-th term in the k-th disjunction is enforced and the other terms are ignored. The constraints $\lambda_{jk}h_{jk}\left(v^{jk}/\lambda_{jk}\right)\le 0$ are convex if $h_{jk}(x)$ is convex [274, p. 160]. A formal proof can be found in [242]. Note that the convex hull 113 reduces to the result by Balas [275] if the constraints are linear. Based on the convex hull relaxation 113, Lee and Grossmann [273] proposed the following convex relaxation program of problem 110:

$$
\begin{aligned}
\min\;& Z^L=\sum_{k\in K}\sum_{j\in J_k}\lambda_{jk}\gamma_{jk}+f(x)\\
\text{s.t.}\;& g(x)\le 0\\
& x=\sum_{j\in J_k} v^{jk},\quad \sum_{j\in J_k}\lambda_{jk}=1 \quad k\in K\\
& \lambda_{jk}\,h_{jk}\!\left(v^{jk}/\lambda_{jk}\right)\le 0 \quad j\in J_k,\; k\in K\\
& A\lambda\le a\\
& 0\le x,\; v^{jk}\le x^U,\quad 0\le\lambda_{jk}\le 1 \quad j\in J_k,\; k\in K
\end{aligned}
\quad\text{(CRP)}\tag{114}
$$

where $x^U$ is a valid upper bound for x and v. Note that the number of constraints and variables increases in problem 114 compared with problem 110. Problem 114 has a unique optimal solution and it yields a valid lower bound to the optimal solution of problem 110 [273]. Grossmann and Lee [276] proved that problem 114 has the useful property that the lower bound is greater than or equal to the lower bound predicted from the relaxation of problem 111.

Further description of algorithms for disjunctive programming can be found in [277].

Constraint Programming. Constraint programming (CP) [268, 269] is a relatively new modeling and solution paradigm that was originally developed to solve feasibility problems, but it has been extended to solve optimization problems as well. Constraint programming is very expressive, as continuous, integer, and Boolean variables are permitted and, moreover, variables can be indexed by other variables. Constraints can be expressed in algebraic form (e.g., h(x) ≤ 0), as disjunctions (e.g., [A1x ≤ b1] ∨ [A2x ≤ b2]), or as conditional logic statements (e.g., if g(x) ≤ 0 then r(x) ≤ 0). In addition, the language can support special implicit functions such as the all-different (x1, x2, …, xn) constraint for assigning different values to the integer variables x1, x2, …, xn. The language consists of C++ procedures, although the recent trend has been to provide higher level languages such as OPL. Other commercial CP software packages include ILOG Solver [278], CHIP [279], and ECLiPSe [280].
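The search-plus-pruning flavor of CP can be conveyed with a toy all-different enumerator (a didactic sketch only; commercial CP solvers such as those just named use far more sophisticated domain propagation):

```python
# Toy CP-style backtracking: assign values to variables from finite domains
# subject to an all-different constraint, skipping values already taken by
# earlier variables (a crude form of pruning).

def all_different(domains, assignment=()):
    """Yield complete assignments where all variables take distinct values."""
    if len(assignment) == len(domains):
        yield assignment
        return
    i = len(assignment)
    for v in domains[i]:
        if v not in assignment:      # value already used -> prune this branch
            yield from all_different(domains, assignment + (v,))

# Three integer variables with small domains; two assignments survive.
doms = [{1, 2}, {2, 3}, {1, 3}]
sols = sorted(all_different(doms))
print(sols)
```

Even this naive version shows why all-different is treated as a single global constraint: the solver can prune whole subtrees the moment a value repeats, rather than discovering the conflict only after a full assignment.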
10.6. Dynamic Optimization

Interest in dynamic simulation and optimization of chemical processes has increased significantly during the last two decades. Chemical processes are modeled dynamically using differential-algebraic equations (DAEs), consisting of differential equations that describe the dynamic behavior of the system, such as mass and energy balances, and algebraic equations that ensure physical and thermodynamic relations. Typical applications include control and scheduling of batch processes; startup, upset, shut-down, and transient analysis; safety studies; and the evaluation of control schemes. We state a general differential-algebraic optimization problem 115 as follows:

$$
\begin{aligned}
\min\;& \Phi\left(z(t_f); y(t_f); u(t_f); t_f; p\right)\\
\text{s.t.}\;& F\left(\mathrm{d}z/\mathrm{d}t; z(t); y(t); u(t); t; p\right)=0,\quad z(0)=z_0\\
& G_s\left[z(t_s); y(t_s); u(t_s); t_s; p\right]=0\\
& z^L\le z(t)\le z^U\\
& y^L\le y(t)\le y^U\\
& u^L\le u(t)\le u^U\\
& p^L\le p\le p^U\\
& t_f^L\le t_f\le t_f^U
\end{aligned}
\tag{115}
$$

where Φ is a scalar objective function at final time $t_f$; F are DAE constraints; $G_s$ additional point conditions at times $t_s$; z(t) differential state profile vectors; y(t) algebraic state profile vectors; u(t) control profile vectors; and p is a time-independent parameter vector.

We assume, without loss of generality, that the index of the DAE system is one, consistent initial conditions are available, and the objective function is in the above Mayer form. Otherwise, it is easy to reformulate problems to this form. Problem 115 can be solved either by the variational approach or by applying some level of discretization that converts the original continuous time problem into a discrete problem. Early solution strategies, known as indirect methods, were focused on solving the classical variational conditions for optimality. On the other hand, methods that discretize the original continuous time formulation can be divided into two categories, according to the level of discretization. Here we distinguish between the methods that discretize only the control profiles (partial discretization) and those that discretize the state and control profiles (full discretization). Basically, the partially discretized problem can be solved either by dynamic programming or by applying a nonlinear programming (NLP) strategy (direct-sequential). A basic characteristic of these methods is that a feasible solution of the DAE system, for given control values, is obtained by integration at every iteration of the NLP solver. The main advantage of these approaches is that, for the NLP solver, they generate smaller discrete problems than full discretization methods.

Methods that fully discretize the continuous time problem also apply NLP strategies to solve the discrete system and are known as direct-simultaneous methods. These methods can use different NLP and discretization techniques, but the basic characteristic is that they solve the DAE system only once, at the optimum. In addition, they have better stability properties than partial discretization methods, especially in the presence of unstable dynamic modes. On the other hand, the discretized optimization problem is larger and requires large-scale NLP solvers, such as SOCS, CONOPT, or IPOPT.

With this classification we take into account the degree of discretization used by the different methods. Below we briefly present the description of the variational methods, followed by methods that partially discretize the dynamic optimization problem, and finally we consider full discretization methods for problem 115.

Variational Methods. These methods are based on the solution of the first-order necessary conditions for optimality that are obtained from Pontryagin's maximum principle [281, 282]. If we consider a version of problem 115 without bounds, the optimality conditions are formulated as a set of DAEs:

$$
\frac{\partial F\left(z,y,u,p,t\right)}{\partial \dot z}^{T}\dot\lambda
= \frac{\partial H}{\partial z}
= \frac{\partial F\left(z,y,u,p,t\right)}{\partial z}^{T}\lambda
\tag{116a}
$$
$$F\left(z,y,u,p,t\right)=0 \tag{116b}$$

$$G_f\left(z,y,u,p,t_f\right)=0 \tag{116c}$$

$$G_s\left(z,y,u,p,t_s\right)=0 \tag{116d}$$

$$\frac{\partial H}{\partial y}=\frac{\partial F\left(z,y,u,p,t\right)}{\partial y}^{T}\lambda=0 \tag{116e}$$

$$\frac{\partial H}{\partial u}=\frac{\partial F\left(z,y,u,p,t\right)}{\partial u}^{T}\lambda=0 \tag{116f}$$

$$\int\limits_0^{t_f}\frac{\partial F\left(z,y,u,p,t\right)}{\partial p}^{T}\lambda\;\mathrm{d}t=0 \tag{116g}$$

where the Hamiltonian H is a scalar function of the form $H(t)=F(z,y,u,p,t)^T\lambda(t)$ and λ(t) is a vector of adjoint variables. Boundary and jump conditions for the adjoint variables are given by:

$$
\begin{aligned}
& \frac{\partial F}{\partial \dot z}^{T}\lambda\left(t_f\right)+\frac{\partial\Phi}{\partial z}+\frac{\partial G_f}{\partial z}^{T}v_f=0\\
& \frac{\partial F}{\partial \dot z}^{T}\lambda\left(t_s^-\right)+\frac{\partial G_s}{\partial z}^{T}v_s=\frac{\partial F}{\partial \dot z}^{T}\lambda\left(t_s^+\right)
\end{aligned}
\tag{117}
$$

where $v_f$, $v_s$ are the multipliers associated with the final time and point constraints, respectively. The most expensive step lies in obtaining a solution to this boundary value problem. Normally, the state variables are given as initial conditions, and the adjoint variables as final conditions. This formulation leads to boundary value problems (BVPs) that can be solved by a number of standard methods including single shooting, invariant embedding, multiple shooting, or some discretization method such as collocation on finite elements or finite differences. Also the point conditions lead to an additional calculation loop to determine the multipliers $v_f$ and $v_s$. On the other hand, when bound constraints are considered, the above conditions are augmented with additional multipliers and associated complementarity conditions. Solving the resulting system leads to a combinatorial problem that is prohibitively expensive except for small problems.

Partial Discretization. With partial discretization methods (also called sequential methods or control vector parametrization), only the control variables are discretized. Given the initial conditions and a given set of control parameters, the DAE system is solved with a differential-algebraic equation solver at each iteration. This produces the value of the objective function, which is used by a nonlinear programming solver to find the optimal parameters in the control parametrization. The sequential method is reliable when the system contains only stable modes. If this is not the case, finding a feasible solution for a given set of control parameters can be very difficult. The time horizon is divided into time stages and at each stage the control variables are represented with a piecewise constant, a piecewise linear, or a polynomial approximation [283, 284]. A common practice is to represent the controls as a set of Lagrange interpolation polynomials.

For the NLP solver, gradients of the objective and constraint functions with respect to the control parameters can be calculated with the sensitivity equations of the DAE system, given by:

$$
\frac{\partial F}{\partial \dot z}^{T}\dot s_k
+\frac{\partial F}{\partial z}^{T}s_k
+\frac{\partial F}{\partial y}^{T}w_k
+\frac{\partial F}{\partial q_k}=0,
\qquad s_k(0)=\frac{\partial z(0)}{\partial q_k}
\qquad k=1,\ldots,N_q
\tag{118}
$$

where $s_k(t)=\partial z(t)/\partial q_k$, $w_k(t)=\partial y(t)/\partial q_k$, and q is the vector of the $N_q$ decision variables, comprising the parameters p together with the control parametrization parameters. As can be inferred from Equation 118, the cost of obtaining these sensitivities is directly proportional to $N_q$, the number of decision variables in the NLP. Alternately, gradients can be obtained by integration of the adjoint Equations 116a, 116e, 116g [282, 285, 286] at a cost independent of the number of input variables and proportional to the number of constraints in the NLP.

Methods that are based on this approach cannot treat directly the bounds on state variables, because the state variables are not included in the nonlinear programming problem. Instead, most of the techniques for dealing with inequality path constraints rely on defining a measure of the constraint violation over the entire horizon, and then penalizing it in the objective function, or forcing it directly to zero through an end-point constraint [287]. Other techniques approximate the constraint satisfaction (constraint aggregation methods) by introducing an exact penalty function [286, 288] or a Kreisselmeier – Steinhauser function [288] into the problem. Finally, initial value solvers that handle path constraints directly have been developed [284].
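The Kreisselmeier – Steinhauser aggregation just mentioned replaces the nonsmooth worst-case violation $\max_j g_j$ by a single smooth overestimate, commonly written KS(g; ρ) = (1/ρ) ln Σ_j exp(ρ g_j). A brief sketch of its behavior (ρ is the aggregation parameter; the constraint values are made up for illustration):

```python
import math

# Kreisselmeier-Steinhauser aggregation of constraints g_j <= 0:
#   KS(g; rho) = (1/rho) * ln( sum_j exp(rho * g_j) )
# It satisfies  max_j g_j <= KS <= max_j g_j + ln(n)/rho,
# so a single smooth constraint KS <= 0 conservatively enforces all g_j <= 0.

def ks(g, rho):
    m = max(g)  # shift the exponentials for numerical stability
    return m + math.log(sum(math.exp(rho * (gj - m)) for gj in g)) / rho

g = [-1.0, 0.2, -0.3]
print(max(g), ks(g, 50.0), ks(g, 1000.0))  # KS tightens toward max as rho grows
```

Because KS is smooth, it can be handed to a gradient-based NLP solver in place of the pointwise path-constraint maximum, at the price of a controllable conservatism of at most ln(n)/ρ.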
The main idea is to use an algorithm for constrained dynamic simulation, so that any admissible combination of the control parameters produces an initial value problem that is feasible with respect to the path constraints. The algorithm proceeds by detecting activation and deactivation of the constraints during the solution, and solving the resulting high-index DAE system and their related sensitivities.

Full Discretization. Full discretization methods explicitly discretize all the variables of the DAE system and generate a large-scale nonlinear programming problem that is usually solved with a successive quadratic programming (SQP) algorithm. These methods follow a simultaneous approach (or infeasible path approach); that is, the DAE system is not solved at each iteration; it is only solved at the optimum point. Because of the size of the problem, special decomposition strategies are used to solve the NLP efficiently. Despite this characteristic, the simultaneous approach has advantages for problems with state variable (or path) constraints and for systems where instabilities occur for a range of inputs. In addition, the simultaneous approach can avoid intermediate solutions that may not exist, are difficult to obtain, or require excessive computational effort. There are mainly two different approaches to discretize the state variables explicitly, multiple shooting [289, 290] and collocation on finite elements [181, 191, 291].

With multiple shooting, time is discretized into P stages and control variables are parametrized using a finite set of control parameters in each stage, as with partial discretization. The DAE system is solved on each stage, i = 1,…,P, and the values of the state variables $z(t_i)$ are chosen as additional unknowns. In this way a set of relaxed, decoupled initial value problems (IVPs) is obtained:

$$
\begin{aligned}
& F\left(\mathrm{d}z/\mathrm{d}t; z(t); y(t); \eta_i; p\right)=0,\quad t\in\left[t_{i-1}, t_i\right],\quad z\left(t_{i-1}\right)=z_i\\
& z_{i+1}-z\left(t_i; z_i; \eta_i; p\right)=0,\quad i=1,\ldots,P-1
\end{aligned}
\tag{119}
$$

where $\eta_i$ denotes the control parameters of stage i. Note that continuity among stages is treated through equality constraints, so that the final solution satisfies the DAE system. With this approach, inequality constraints for states and controls can be imposed directly at the grid points, but path constraints for the states may not be satisfied between grid points. This problem can be avoided by applying penalty techniques to enforce feasibility, like the ones used in the sequential methods.

The resulting NLP is solved using SQP-type methods, as described above. At each SQP iteration, the DAEs are integrated in each stage and objective and constraint gradients with respect to p, $z_i$, and $\eta_i$ are obtained using sensitivity equations, as in problem 118. Compared to sequential methods, the NLP contains many more variables, but efficient decompositions have been proposed [290] and many of these calculations can be performed in parallel.

In collocation methods, the continuous time problem is transformed into an NLP by approximating the profiles as a family of polynomials on finite elements. Various polynomial representations are used in the literature, including Lagrange interpolation polynomials for the differential and algebraic profiles [291]. In [191] a Hermite – Simpson collocation form is used, while Cuthrell and Biegler [292] and Tanartkit and Biegler [293] use a monomial basis for the differential profiles. All of these representations stem from implicit Runge – Kutta formulae, and the monomial representation is recommended because of smaller condition numbers and smaller rounding errors. Control and algebraic profiles, on the other hand, are approximated using Lagrange polynomials.

Discretizations of problem 115 using collocation formulations lead to the largest NLP problems, but these can be solved efficiently using large-scale NLP solvers such as IPOPT and by exploiting the structure of the collocation equations. Biegler et al. [181] provide a review of dynamic optimization methods using simultaneous methods. These methods offer a number of advantages for challenging dynamic optimization problems, which include:

– Control variables can be discretized at the same level of accuracy as the differential and algebraic state variables. The KKT conditions of the discretized problem can be shown to be consistent with the variational conditions of problem 115. Finite elements allow for discontinuities in control profiles.
– Collocation formulations allow problems with unstable modes to be handled in an
124 Mathematics in Chemical Engineering

efficient and well-conditioned manner. The NLP formulation inherits stability properties of boundary value solvers. Moreover, an elementwise decomposition has been developed that pins down unstable modes in problem 115.
– Collocation formulations have been proposed with moving finite elements. This allows the placement of elements both for accurate breakpoint locations of control profiles as well as accurate DAE solutions.

Dynamic optimization by collocation methods has been used for a wide variety of process applications including batch process optimization, batch distillation, crystallization, dynamic data reconciliation and parameter estimation, nonlinear model predictive control, polymer grade transitions and process changeovers, and reactor design and synthesis. A review of this approach can be found in [294].

10.7. Development of Optimization Models

The most important aspect of a successful optimization study is the formulation of the optimization model. These models must reflect the real-world problem so that meaningful optimization results are obtained, and they also must satisfy the properties of the problem class. For instance, NLPs addressed by gradient-based methods require functions that are defined in the variable domain and have bounded and continuous first and second derivatives. In mixed integer problems, proper formulations are also needed to yield good lower bounds for efficient search. With increased understanding of optimization methods and the development of efficient and reliable optimization codes, optimization practitioners now focus on the formulation of optimization models that are realistic, well-posed, and inexpensive to solve. Finally, convergence properties of NLP, MILP, and MINLP solvers require accurate first (and often second) derivatives from the optimization model. If these contain numerical errors (say, through finite difference approximations) then performance of these solvers can deteriorate considerably. As a result of these characteristics, modeling platforms are essential for the formulation task. These are classified into two broad areas: optimization modeling platforms and simulation platforms with optimization.

Optimization modeling platforms provide general purpose interfaces for optimization algorithms and remove the need for the user to interface to the solver directly. These platforms allow the general formulation for all problem classes discussed above with direct interfaces to state-of-the-art optimization codes. Three representative platforms are GAMS (General Algebraic Modeling Systems), AMPL (A Mathematical Programming Language), and AIMMS (Advanced Integrated Multidimensional Modeling Software). All three require problem-model input via a declarative modeling language and provide exact gradient and Hessian information through automatic differentiation strategies. Although possible, these platforms were not designed to handle externally added procedural models. As a result, these platforms are best applied to optimization models that can be developed entirely within their modeling framework. Nevertheless, these platforms are widely used for large-scale research and industrial applications. In addition, the MATLAB platform allows the flexible formulation of optimization models as well, although it currently has only limited capabilities for automatic differentiation and limited optimization solvers. More information on these and other modeling platforms can be found on the NEOS server www-neos.mcs.anl.gov

Simulation platforms with optimization are often dedicated, application-specific modeling tools to which optimization solvers have been interfaced. These lead to very useful optimization studies, but because they were not originally designed for optimization models, they need to be used with some caution. In particular, most of these platforms do not provide exact derivatives to the optimization solver; often they are approximated through finite difference. In addition, the models themselves are constructed and calculated through numerical procedures, instead of through an open declarative language. Examples of these include widely used process simulators such as Aspen/Plus, PRO/II, and Hysys. More recent platforms such as Aspen Custom Modeler and gPROMS include declarative models and exact derivatives.

For optimization tools linked to procedural models, reliable and efficient automatic differentiation tools are available that link to models written, say, in FORTRAN and C, and calculate exact first (and often second) derivatives. Examples of these include ADIFOR, ADOL-C, GRESS, Odyssee, and PADRE. When used with care, these can be applied to existing procedural models and, when linked to modern NLP and MINLP algorithms, can lead to powerful optimization capabilities. More information on these and other automatic differentiation tools can be found on: http://www-unix.mcs.anl.gov/autodiff/AD Tools/.

Finally, the availability of automatic differentiation and related sensitivity tools for differential equation models allows for considerable flexibility in the formulation of optimization models. In [295] a seven-level modeling hierarchy is proposed that matches optimization algorithms with models that range from completely open (fully declarative model) to fully closed (entirely procedural without sensitivities). At the lowest, fully procedural level, only derivative-free optimization methods are applied, while the highest, declarative level allows the application of an efficient large-scale solver that uses first and second derivatives. Depending on the modeling level, optimization solver performance can vary by several orders of magnitude.

11. Probability and Statistics [15, 296 – 303]

The models treated thus far have been deterministic, that is, if the parameters are known the outcome is determined. In many situations, all the factors cannot be controlled and the outcome may vary randomly about some average value. Then a range of outcomes has a certain probability of occurring, and statistical methods must be used. This is especially true in quality control of production processes and experimental measurements. This chapter presents standard statistical concepts, sampling theory and statistical decisions, and factorial design of experiments or analysis of variance. Multivariate linear and nonlinear regression is treated in Chapter 2.

11.1. Concepts

Suppose N values of a variable y, called y_1, y_2, . . ., y_N, might represent N measurements of the same quantity. The arithmetic mean E(y) is

E(y) = \frac{\sum_{i=1}^{N} y_i}{N}

The median is the middle value (or average of the two middle values) when the set of numbers is arranged in increasing (or decreasing) order. The geometric mean y_G is

y_G = (y_1 y_2 \cdots y_N)^{1/N}

The root-mean-square or quadratic mean is

\text{root-mean-square} = \sqrt{E(y^2)} = \sqrt{\sum_{i=1}^{N} y_i^2 / N}

The range of a set of numbers is the difference between the largest and the smallest members in the set. The mean deviation is the mean of the deviation from the mean:

\text{mean deviation} = \frac{\sum_{i=1}^{N} |y_i - E(y)|}{N}

The variance is

\mathrm{var}(y) = \sigma^2 = \frac{\sum_{i=1}^{N} (y_i - E(y))^2}{N}

and the standard deviation is the square root of the variance:

\sigma = \sqrt{\frac{\sum_{i=1}^{N} (y_i - E(y))^2}{N}}

If the set of numbers {y_i} is a small sample from a larger set, then the sample average

\bar{y} = \frac{\sum_{i=1}^{n} y_i}{n}

is used in calculating the sample variance

s^2 = \frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{n-1}

and the sample standard deviation

s = \sqrt{\frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{n-1}}
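These definitions can be checked numerically. The sketch below computes each quantity directly and verifies the two variance formulas against Python's standard statistics module; the data values are invented for illustration.

```python
# Sketch of the summary statistics defined above, standard library only;
# the data values are made up for illustration.
import math
import statistics

y = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
N = len(y)

mean = sum(y) / N                                   # arithmetic mean E(y)
median = statistics.median(y)
geometric = math.prod(y) ** (1 / N)                 # geometric mean
rms = math.sqrt(sum(v**2 for v in y) / N)           # root-mean-square
mean_dev = sum(abs(v - mean) for v in y) / N        # mean deviation
var_pop = sum((v - mean)**2 for v in y) / N         # variance (divisor N)
var_sample = sum((v - mean)**2 for v in y) / (N - 1)  # sample variance (N - 1)

# The statistics module distinguishes the same two divisors:
assert math.isclose(var_pop, statistics.pvariance(y))
assert math.isclose(var_sample, statistics.variance(y))
print(mean, var_pop, var_sample)  # 5.0, 4.0, 32/7
```

The distinction between the divisors N and n − 1 is exactly the population/sample distinction drawn in the text.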
The value n − 1 is used in the denominator because the deviations from the sample average must total zero:

\sum_{i=1}^{n} (y_i - \bar{y}) = 0

Thus, knowing n − 1 values of y_i − \bar{y} and the fact that there are n values automatically gives the n-th value. Thus, only n − 1 degrees of freedom exist. This occurs because the unknown mean E(y) is replaced by the sample mean \bar{y} derived from the data.

If data are taken consecutively, running totals can be kept to permit calculation of the mean and variance without retaining all the data:

\sum_{i=1}^{n} (y_i - \bar{y})^2 = \sum_{i=1}^{n} y_i^2 - 2\bar{y}\sum_{i=1}^{n} y_i + n\bar{y}^2

\bar{y} = \sum_{i=1}^{n} y_i / n

Thus,

n, \quad \sum_{i=1}^{n} y_i^2, \quad \text{and} \quad \sum_{i=1}^{n} y_i

are retained, and the mean and variance are computed when needed.

Repeated observations that differ because of experimental error often vary about some central value in a roughly symmetrical distribution in which small deviations occur more frequently than large deviations. In plotting the number of times a discrete event occurs, a typical curve is obtained, which is shown in Figure 49. Then the probability p of an event (score) occurring can be thought of as the ratio of the number of times it was observed divided by the total number of events. A continuous representation of this probability density function is given by the normal distribution

p(y) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-[y-E(y)]^2/2\sigma^2}.     (120)

This is called a normal probability distribution function. It is important because many results are insensitive to deviations from a normal distribution. Also, the central limit theorem says that if an overall error is a linear combination of component errors, then the distribution of errors tends to be normal as the number of components increases, almost regardless of the distribution of the component errors (i.e., they need not be normally distributed). Naturally, several sources of error must be present and one error cannot predominate (unless it is normally distributed). The normal distribution function is calculated easily; of more value are integrals of the function, which are given in Table 12; the region of interest is illustrated in Figure 50.

Figure 49. Frequency of occurrence of different scores

Figure 50. Area under normal curve

For a small sample, the variance can only be estimated with the sample variance s^2. Thus, the normal distribution cannot be used because \sigma is not known. In such cases Student's t-distribution, shown in Figure 51 [303, p. 70], is used:

p(y) = \frac{y_0}{\left(1 + \frac{t^2}{n-1}\right)^{n/2}}, \quad t = \frac{\bar{y} - E(y)}{s/\sqrt{n}}

and y_0 is chosen such that the area under the curve is one. The number \nu = n − 1 is the degrees of freedom, and as \nu increases, Student's t-distribution approaches the normal distribution. The normal distribution is adequate (rather than the t-distribution) when \nu > 15, except for the tails of the curve which require larger \nu. Integrals of the t-distribution are given in Table 13; the region of interest is shown in Figure 52.
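The running-totals scheme described above (retaining only n, the sum of the y_i, and the sum of their squares) can be sketched as a small accumulator class; the class name and data are illustrative.

```python
# Running-totals sketch: retain only n, sum(y), and sum(y^2), then
# recover the mean and sample variance on demand, as described above.
class RunningStats:
    def __init__(self):
        self.n = 0
        self.sum_y = 0.0
        self.sum_y2 = 0.0

    def add(self, value):
        self.n += 1
        self.sum_y += value
        self.sum_y2 += value * value

    def mean(self):
        return self.sum_y / self.n

    def sample_variance(self):
        # sum (y_i - ybar)^2 = sum y_i^2 - n * ybar^2
        ybar = self.mean()
        return (self.sum_y2 - self.n * ybar * ybar) / (self.n - 1)

rs = RunningStats()
for v in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    rs.add(v)
print(rs.mean(), rs.sample_variance())  # 5.0 and 32/7
```

Note that the subtraction in `sample_variance` can lose precision when the mean is large relative to the spread, a caveat worth keeping in mind for very long data streams.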
Table 12. Area under normal curve *
F(z) = \frac{1}{\sqrt{2\pi}} \int_0^z e^{-t^2/2}\, dt
z F (z) z F (z)
0.0 0.0000 1.5 0.4332
0.1 0.0398 1.6 0.4452
0.2 0.0793 1.7 0.4554
0.3 0.1179 1.8 0.4641
0.4 0.1554 1.9 0.4713
0.5 0.1915 2.0 0.4772
0.6 0.2257 2.1 0.4821
0.7 0.2580 2.2 0.4861
0.8 0.2881 2.3 0.4893
0.9 0.3159 2.4 0.4918
1.0 0.3413 2.5 0.4938
1.1 0.3643 2.7 0.4965
1.2 0.3849 3.0 0.4987
1.3 0.4032 4.0 0.499968
1.4 0.4192 5.0 0.4999997

* Table gives the probability F that a random variable will fall in the shaded region of Figure 50. For a more complete table (in slightly
different form), see [23, Table 26.1]. This table is obtained in Microsoft Excel with the function NORMDIST(z,0,1,1)-0.5.
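The entries of Table 12 can also be reproduced with the error function from the standard library, since F(z) = erf(z/√2)/2 for a standard normal variable.

```python
# Reproduce Table 12: F(z) = P(0 < Z < z) = erf(z / sqrt(2)) / 2
# for a standard normal variable Z.
import math

def F(z):
    return 0.5 * math.erf(z / math.sqrt(2.0))

for z in (0.5, 1.0, 1.96, 3.0):
    print(z, round(F(z), 4))  # matches the tabulated values
```

For example, F(1.0) returns 0.3413 and F(3.0) returns 0.4987, in agreement with the table.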

Table 13. Percentage points of area under Student's t-distribution *

\nu   \alpha=0.10   \alpha=0.05   \alpha=0.01   \alpha=0.001
1 6.314 12.706 63.657 636.619
2 2.920 4.303 9.925 31.598
3 2.353 3.182 5.841 12.941
4 2.132 2.776 4.604 8.610
5 2.015 2.571 4.032 6.859
6 1.943 2.447 3.707 5.959
7 1.895 2.365 3.499 5.408
8 1.860 2.306 3.355 5.041
9 1.833 2.262 3.250 4.781
10 1.812 2.228 3.169 4.587
15 1.753 2.131 2.947 4.073
20 1.725 2.086 2.845 3.850
25 1.708 2.060 2.787 3.725
30 1.697 2.042 2.750 3.646
\infty 1.645 1.960 2.576 3.291

* Table gives t values such that a random variable will fall in the shaded region of Figure 52 with probability \alpha. For a one-sided test the confidence limits are obtained for \alpha/2. For a more complete table (in slightly different form), see [23, Table 26.10]. This table is obtained in Microsoft Excel with the function TINV(\alpha, \nu).

Other probability distribution functions are useful. Any distribution function must satisfy the following conditions:

0 \leq F(x) \leq 1
F(-\infty) = 0, \quad F(+\infty) = 1
F(x) \geq F(y) \quad \text{when } x \geq y

The probability density function is

p(x) = \frac{dF(x)}{dx}

where

dF = p\, dx

is the probability of x being between x and x + dx. The probability density function satisfies

p(x) \geq 0

\int_{-\infty}^{+\infty} p(x)\, dx = 1

The Bernoulli distribution applies when the outcome can take only two values, such as heads or tails, or 0 or 1. The probability distribution function is

p(x = k) = p^k (1-p)^{1-k}, \quad k = 0 \text{ or } 1

and the mean of a function g(x) depending on x is

E[g(x)] = g(1)\, p + g(0)\, (1-p)
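The Bernoulli mean formula above, and the binomial, hypergeometric, and Poisson distributions discussed in this section, can be checked numerically. The sketch below verifies the stated means for illustrative parameter values.

```python
# Check the discrete distributions of this section against their stated
# means; the parameter values are illustrative.
import math

def binomial_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def hypergeometric_pmf(k, N, M, n):
    # Equivalent to the factorial form M!(N-M)!n!(N-n)! / [k!(M-k)!(n-k)!(N-M-n+k)!N!]
    return math.comb(M, k) * math.comb(N - M, n - k) / math.comb(N, n)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

n, p = 10, 0.3
mean_binom = sum(k * binomial_pmf(k, n, p) for k in range(n + 1))
print(mean_binom)       # ~ n*p = 3.0

N, M, n2 = 20, 5, 4
mean_hyper = sum(k * hypergeometric_pmf(k, N, M, n2) for k in range(n2 + 1))
print(mean_hyper)       # ~ n*M/N = 1.0

lam = 2.5
mean_pois = sum(k * poisson_pmf(k, lam) for k in range(50))
print(mean_pois)        # ~ lambda = 2.5
```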
Figure 51. Student's t-distribution. For explanation of \nu see text

Figure 52. Percentage points of area under Student's t-distribution

The binomial distribution function applies when there are n trials of a Bernoulli event; it gives the probability of k occurrences of the event, which occurs with probability p on each trial

p(x = k) = \frac{n!}{k!\,(n-k)!}\, p^k (1-p)^{n-k}

The mean and variance are

E(x) = np
\mathrm{var}(x) = np(1-p)

The hypergeometric distribution function applies when there are N objects, of which M are of one kind and N − M are of another kind. Then the objects are drawn one by one, without replacing the last draw. If the last draw had been replaced the distribution would be the binomial distribution. If x is the number of objects of type M drawn in a sample of size n, then the probability of x = k is

p(x = k) = \frac{M!\,(N-M)!\,n!\,(N-n)!}{k!\,(M-k)!\,(n-k)!\,(N-M-n+k)!\,N!}

The mean and variance are

E(x) = \frac{nM}{N}
\mathrm{var}(x) = np(1-p)\frac{N-n}{N-1}

The Poisson distribution is

p(x = k) = e^{-\lambda}\frac{\lambda^k}{k!}

with a parameter \lambda. The mean and variance are

E(x) = \lambda
\mathrm{var}(x) = \lambda

The simplest continuous distribution is the uniform distribution. The probability density function is
p(x) = \begin{cases} \frac{1}{b-a} & a < x < b \\ 0 & x < a,\ x > b \end{cases}

and the probability distribution function is

F(x) = \begin{cases} 0 & x < a \\ \frac{x-a}{b-a} & a < x < b \\ 1 & b < x \end{cases}

The mean and variance are

E(x) = \frac{a+b}{2}
\mathrm{var}(x) = \frac{(b-a)^2}{12}

The normal distribution is given by Equation 120 with variance \sigma^2.

The log normal probability density function is

p(x) = \frac{1}{x\,\sigma\sqrt{2\pi}} \exp\left[-\frac{(\log x - \mu)^2}{2\sigma^2}\right]

and the mean and variance are [305, p. 89]

E(x) = \exp\left(\mu + \frac{\sigma^2}{2}\right)
\mathrm{var}(x) = \left[\exp\left(\sigma^2\right) - 1\right] \exp\left(2\mu + \sigma^2\right)

11.2. Sampling and Statistical Decisions

Two variables can be statistically dependent or independent. For example, the height and diameter of all distillation towers are statistically dependent, because the distribution of diameters of all columns 10 m high is different from that of columns 30 m high. If y_B is the diameter and y_A the height, the distribution is written as

p(y_B | y_A = \text{constant}), \quad \text{or here} \quad p(y_B | y_A = 10) \neq p(y_B | y_A = 30)

A third variable, y_C, could be the age of the operator on the third shift. This variable is probably unrelated to the diameter of the column, and the distribution of ages is

p(y_C | y_A) = p(y_C)

Thus, variables y_A and y_C are distributed independently. The joint distribution for two variables is

p(y_A, y_B) = p(y_A)\, p(y_B | y_A)

if they are statistically dependent, and

p(y_A, y_B) = p(y_A)\, p(y_B)

if they are statistically independent. If a set of variables y_A, y_B, . . . is independent and identically distributed,

p(y_A, y_B, \ldots) = p(y_A)\, p(y_B) \cdots

Conditional probabilities are used in hazard analysis of chemical plants.

A measure of the linear dependence between variables is given by the covariance

\mathrm{Cov}(y_A, y_B) = E\{[y_A - E(y_A)][y_B - E(y_B)]\} = \frac{\sum_{i=1}^{N}[y_{Ai} - E(y_A)][y_{Bi} - E(y_B)]}{N}

The correlation coefficient \rho is

\rho(y_A, y_B) = \frac{\mathrm{Cov}(y_A, y_B)}{\sigma_A\, \sigma_B}

If y_A and y_B are independent, then Cov(y_A, y_B) = 0. If y_A tends to increase when y_B decreases then Cov(y_A, y_B) < 0. The sample correlation coefficient is [15, p. 484]

r(y_A, y_B) = \frac{\sum_{i=1}^{n}(y_{Ai} - \bar{y}_A)(y_{Bi} - \bar{y}_B)}{(n-1)\, s_A\, s_B}

If measurements are of independent, identically distributed observations, the errors are independent and uncorrelated. Then \bar{y} varies about E(y) with variance \sigma^2/n, where n is the number of observations in \bar{y}. Thus if something is measured several times today and every day, and the measurements have the same distribution, the variance of the means decreases with the number of samples in each day's measurement, n. Of course, other factors (weather, weekends) may cause the observations on different days to be distributed nonidentically.

Suppose Y, which is the sum or difference of two variables, is of interest:

Y = y_A \pm y_B

Then the mean value of Y is

E(Y) = E(y_A) \pm E(y_B)

and the variance of Y is

\sigma^2(Y) = \sigma^2(y_A) + \sigma^2(y_B)
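The sample covariance and correlation coefficient defined above can be sketched directly; the two series below are invented and nearly linear, so r comes out close to +1.

```python
# Sample covariance and correlation sketch for two illustrative series.
import math

yA = [1.0, 2.0, 3.0, 4.0, 5.0]
yB = [2.1, 3.9, 6.2, 8.0, 9.8]
n = len(yA)
mA = sum(yA) / n
mB = sum(yB) / n
sA = math.sqrt(sum((a - mA)**2 for a in yA) / (n - 1))  # sample std dev
sB = math.sqrt(sum((b - mB)**2 for b in yB) / (n - 1))
cov = sum((a - mA) * (b - mB) for a, b in zip(yA, yB)) / (n - 1)
r = cov / (sA * sB)      # sample correlation coefficient
print(round(r, 4))       # close to +1 for this nearly linear pair
```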
More generally, consider the random variables y_1, y_2, . . . with means E(y_1), E(y_2), . . . and variances \sigma^2(y_1), \sigma^2(y_2), . . . and correlation coefficients \rho_{ij}. The variable

Y = \alpha_1 y_1 + \alpha_2 y_2 + \cdots

has a mean

E(Y) = \alpha_1 E(y_1) + \alpha_2 E(y_2) + \cdots

and variance [303, p. 87]

\sigma^2(Y) = \sum_{i=1}^{n} \alpha_i^2\, \sigma^2(y_i) + 2\sum_{i=1}^{n}\sum_{j=i+1}^{n} \alpha_i \alpha_j\, \sigma(y_i)\, \sigma(y_j)\, \rho_{ij}

or

\sigma^2(Y) = \sum_{i=1}^{n} \alpha_i^2\, \sigma^2(y_i) + 2\sum_{i=1}^{n}\sum_{j=i+1}^{n} \alpha_i \alpha_j\, \mathrm{Cov}(y_i, y_j)

If the variables are uncorrelated and have the same variance, then

\sigma^2(Y) = \left(\sum_{i=1}^{n} \alpha_i^2\right) \sigma^2

This fact can be used to obtain more accurate cost estimates for the purchased cost of a chemical plant than is true for any one piece of equipment. Suppose the plant is composed of a number of heat exchangers, pumps, towers, etc., and that the cost estimate of each device is \pm 40 % of its cost (the sample standard deviation is 20 % of its cost). In this case the \alpha_i are the numbers of each type of unit. Under special conditions, such as equal numbers of all types of units and comparable cost, the standard deviation of the plant costs is

\sigma(Y) = \frac{\sigma}{\sqrt{n}}

and is then (40/\sqrt{n}) %. Thus the standard deviation of the cost for the entire plant is the standard deviation of each piece of equipment divided by the square root of the number of units. Under less restrictive conditions the actual numbers change according to the above equations, but the principle is the same.

Suppose modifications are introduced into the manufacturing process. To determine if the modification causes a significant change, the mean of some property could be measured before and after the change; if these differ, does it mean the process modification caused it, or could the change have happened by chance? This is a statistical decision. A hypothesis H_0 is defined; if it is true, action A must be taken. The reverse hypothesis is H_1; if this is true, action B must be taken. A correct decision is made if action A is taken when H_0 is true or action B is taken when H_1 is true. Taking action B when H_0 is true is called a type I error, whereas taking action A when H_1 is true is called a type II error.

The following test of hypothesis or test of significance must be defined to determine if the hypothesis is true. The level of significance is the maximum probability that an error would be accepted in the decision (i.e., rejecting the hypothesis when it is actually true). Common levels of significance are 0.05 and 0.01, and the test of significance can be either one or two sided. If a sampled distribution is normal, then the probability that the z score

z = \frac{\bar{y} - E(y)}{\sigma/\sqrt{n}}

Figure 53. Two-sided statistical decision

is in the unshaded region is 0.95. Because a two-sided test is desired, F = 0.95/2 = 0.475. The value given in Table 12 for F = 0.475 is z = 1.96. If the test was one-sided, at the 5 % level of significance, 0.95 = 0.5 (for negative z) + F (for positive z). Thus, F = 0.45 or z = 1.645. In the two-sided test (see Fig. 53), if a single sample is chosen and z < −1.96 or z > 1.96, then this could happen with probability 0.05 if the hypothesis were true. This z would be significantly different from the expected value (based on the chosen level of significance) and the tendency would be to reject the hypothesis. If the value of z was between −1.96 and 1.96, the hypothesis would be accepted.

The same type of decisions can be made for other distributions. Consider Student's t-distribution. At a 95 % level of confidence, with \nu = 10 degrees of freedom, the t values are \pm 2.228. Thus, the sample mean would be expected to be between

\bar{y} \pm t_c \frac{s}{\sqrt{n}}

with 95 % confidence. If the mean were outside this interval, the hypothesis would be rejected.

The chi-square distribution is useful for examining the variance or standard deviation. The statistic is defined as

\chi^2 = \frac{n s^2}{\sigma^2} = \frac{(y_1-\bar{y})^2 + (y_2-\bar{y})^2 + \cdots + (y_n-\bar{y})^2}{\sigma^2}

and the chi-square distribution is

p(y) = y_0\, \chi^{\nu-2}\, e^{-\chi^2/2}

\nu = n − 1 is the number of degrees of freedom and y_0 is chosen so that the integral of p(y) over all y is 1. The probability of a deviation larger than \chi^2 is given in Table 14; the area in question, in Figure 54. For example, for 10 degrees of freedom and a 95 % confidence level, the critical values of \chi^2 are at \alpha = 0.025 and \alpha = 0.975. Then

\frac{s\sqrt{n}}{\chi_{0.975}} < \sigma < \frac{s\sqrt{n}}{\chi_{0.025}}

or

\frac{s\sqrt{n}}{\sqrt{20.5}} < \sigma < \frac{s\sqrt{n}}{\sqrt{3.25}}

with 95 % confidence.

Tests are available to decide if two distributions that have the same variance have different means [15, p. 465]. Let one distribution be called x, with N_1 samples, and the other be called y, with N_2 samples. First, compute the standard error of the difference of the means:

s_D = \sqrt{\frac{\sum_{i=1}^{N_1}(x_i-\bar{x})^2 + \sum_{i=1}^{N_2}(y_i-\bar{y})^2}{N_1+N_2-2}\left(\frac{1}{N_1}+\frac{1}{N_2}\right)}

Next, compute the value of t

t = \frac{\bar{x}-\bar{y}}{s_D}

and evaluate the significance of t using Student's t-distribution for N_1 + N_2 − 2 degrees of freedom.

If the samples have different variances, the relevant statistic for the t-test is

t = \frac{\bar{x}-\bar{y}}{\sqrt{\mathrm{var}(x)/N_1 + \mathrm{var}(y)/N_2}}

The number of degrees of freedom is now taken approximately as

\nu = \frac{\left(\frac{\mathrm{var}(x)}{N_1}+\frac{\mathrm{var}(y)}{N_2}\right)^2}{\frac{[\mathrm{var}(x)/N_1]^2}{N_1-1}+\frac{[\mathrm{var}(y)/N_2]^2}{N_2-1}}

There is also an F-test to decide if two distributions have significantly different variances. In this case, the ratio of variances is calculated:

F = \frac{\mathrm{var}(x)}{\mathrm{var}(y)}

where the variance of x is assumed to be larger. Then, a table of values is used to determine the significance of the ratio. The table [23, Table 26.9] is derived from the formula [15, p. 169]

Q(F|\nu_1,\nu_2) = I_{\nu_2/(\nu_2+\nu_1 F)}\left(\frac{\nu_2}{2},\frac{\nu_1}{2}\right)

where the right-hand side is an incomplete beta function. The F table is given by the Microsoft Excel function FINV(fraction, \nu_x, \nu_y), where fraction is the fractional value (< 1) representing the upper percentage and \nu_x and \nu_y are the degrees of freedom of the numerator and denominator, respectively.

Example. For two sample variances with 8 degrees of freedom each, what limits will bracket their ratio with a midarea probability of 90 %? FINV(0.05, 8, 8) = 3.44. The 0.05 is used to get both tails to total 10 %. Then

P[1/3.44 \leq \mathrm{var}(x)/\mathrm{var}(y) \leq 3.44] = 0.90.

Figure 54. Percentage points of area under chi-squared distribution with \nu degrees of freedom
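The two-sided z decision at the 5 % level and the two-sample t statistics above can be sketched as follows; the samples and the hypothesized mean are invented for illustration, and σ is assumed known in the z test.

```python
# Hypothesis-test sketches at the 5 % level; all numbers are invented.
import math

# Two-sided z test (sigma known): reject H0 when |z| > 1.96.
sample_mean, mu0, sigma, n = 5.4, 5.0, 1.0, 36
z = (sample_mean - mu0) / (sigma / math.sqrt(n))
reject_H0 = abs(z) > 1.96
print(z, reject_H0)

# Two-sample t test with a pooled variance (equal variances assumed).
x = [10.1, 10.4, 9.8, 10.2, 10.5]
y = [9.5, 9.9, 9.7, 9.6, 9.8]
N1, N2 = len(x), len(y)
mx, my = sum(x) / N1, sum(y) / N2
ssx = sum((v - mx)**2 for v in x)
ssy = sum((v - my)**2 for v in y)
sD = math.sqrt((ssx + ssy) / (N1 + N2 - 2) * (1 / N1 + 1 / N2))
t_pooled = (mx - my) / sD     # compare with Table 13 at nu = N1 + N2 - 2

# Welch form for unequal variances, with approximate degrees of freedom.
vx, vy = ssx / (N1 - 1), ssy / (N2 - 1)
t_welch = (mx - my) / math.sqrt(vx / N1 + vy / N2)
nu = (vx / N1 + vy / N2)**2 / (
    (vx / N1)**2 / (N1 - 1) + (vy / N2)**2 / (N2 - 1))
print(t_pooled, t_welch, nu)
```

Here t_pooled exceeds the Table 13 value 2.306 for ν = 8 at α = 0.05, so the two means would be judged significantly different.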
Table 14. Percentage points of area under chi-square distribution with \nu degrees of freedom *

\nu   \alpha=0.995   \alpha=0.99   \alpha=0.975   \alpha=0.95   \alpha=0.5   \alpha=0.05   \alpha=0.025   \alpha=0.01   \alpha=0.005
1 7.88 6.63 5.02 3.84 0.455 0.0039 0.0010 0.0002 0.00004
2 10.6 9.21 7.38 5.99 1.39 0.103 0.0506 0.0201 0.0100
3 12.8 11.3 9.35 7.81 2.37 0.352 0.216 0.115 0.072
4 14.9 13.3 11.1 9.49 3.36 0.711 0.484 0.297 0.207
5 16.7 15.1 12.8 11.1 4.35 1.15 0.831 0.554 0.412
6 18.5 16.8 14.4 12.6 5.35 1.64 1.24 0.872 0.676
7 20.3 18.5 16.0 14.1 6.35 2.17 1.69 1.24 0.989
8 22.0 20.1 17.5 15.5 7.34 2.73 2.18 1.65 1.34
9 23.6 21.7 19.0 16.9 8.34 3.33 2.70 2.09 1.73
10 25.2 23.2 20.5 18.3 9.34 3.94 3.25 2.56 2.16
12 28.3 26.2 23.3 21.0 11.3 5.23 4.40 3.57 3.07
15 32.8 30.6 27.5 25.0 14.3 7.26 6.26 5.23 4.60
17 35.7 33.4 30.2 27.6 16.3 8.67 7.56 6.41 5.70
20 40.0 37.6 34.2 31.4 19.3 10.9 9.59 8.26 7.43
25 46.9 44.3 40.6 37.7 24.3 14.6 13.1 11.5 10.5
30 53.7 50.9 47.0 43.8 29.3 18.5 16.8 15.0 13.8
40 66.8 63.7 59.3 55.8 39.3 26.5 24.4 22.2 20.7
50 79.5 76.2 71.4 67.5 49.3 34.8 32.4 29.7 28.0
60 92.0 88.4 83.3 79.1 59.3 43.2 40.5 37.5 35.5
70 104.2 100.4 95.0 90.5 69.3 51.7 48.8 45.4 43.3
80 116.3 112.3 106.6 101.9 79.3 60.4 57.2 53.5 51.2
90 128.3 124.1 118.1 113.1 89.3 69.1 65.6 61.8 59.2
100 140.2 135.8 129.6 124.3 99.3 77.9 74.2 70.1 67.3

* Table value is \chi^2_\alpha; \chi^2 < \chi^2_\alpha with probability \alpha. For a more complete table (in slightly different form), see [23, Table 26.8]. The Microsoft Excel function CHIINV(1−\alpha, \nu) gives the table value.
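The worked chi-square example above translates directly into bounds on σ. In this sketch the sample standard deviation s is illustrative, and n = 11 is assumed so that ν = n − 1 = 10 matches the table row used in the text.

```python
# Bounds on sigma from the chi-square example: nu = 10, so n = 11 is
# assumed here, with Table 14 values 3.25 (alpha = 0.025) and 20.5
# (alpha = 0.975); the sample standard deviation s is illustrative.
import math

n, s = 11, 2.0
chi2_low, chi2_high = 3.25, 20.5   # from Table 14, row nu = 10

sigma_lower = s * math.sqrt(n) / math.sqrt(chi2_high)
sigma_upper = s * math.sqrt(n) / math.sqrt(chi2_low)
print(sigma_lower, sigma_upper)    # brackets sigma with 95 % confidence
```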

11.3. Error Analysis in Experiments

Suppose a measurement of several quantities is made and a formula or mathematical model is used to deduce some property of interest. For example, to measure the thermal conductivity of a solid k, the heat flux q, the thickness of the sample d, and the temperature difference across the sample \Delta T must be measured. Each measurement has some error. The heat flux q may be the rate of electrical heat input \dot{Q} divided by the area A, and both quantities are measured to some tolerance. The thickness of the sample is measured with some accuracy, and the temperatures are probably measured with a thermocouple, to some accuracy. These measurements are combined, however, to obtain the thermal conductivity, and the error in the thermal conductivity must be determined. The formula is

k = \frac{d}{A\,\Delta T}\,\dot{Q}

If each measured quantity has some variance, what is the variance in the thermal conductivity?

Suppose a model for Y depends on various measurable quantities, y_1, y_2, . . . Suppose several measurements are made of y_1, y_2, . . . under seemingly identical conditions and several different values are obtained, with means E(y_1), E(y_2), . . . and variances \sigma_1^2, \sigma_2^2, . . . Next suppose the errors are small and independent of one another. Then a change in Y is related to changes in y_i by

dY = \frac{\partial Y}{\partial y_1}\, dy_1 + \frac{\partial Y}{\partial y_2}\, dy_2 + \cdots

If the changes are indeed small, the partial derivatives are constant among all the samples. Then the expected value of the change is

E(dY) = \sum_{i=1}^{N} \left(\frac{\partial Y}{\partial y_i}\right) E(dy_i)

Naturally E(dy_i) = 0 by definition so that E(dY) = 0, too. However, since the errors are independent of each other and the partial derivatives are assumed constant because the errors are small, the variances are given by Equation 121 [296, p. 550]

\sigma^2(dY) = \sum_{i=1}^{N} \left(\frac{\partial Y}{\partial y_i}\right)^2 \sigma_i^2     (121)

Thus, the variance of the desired quantity Y can be found. This gives an independent estimate of the errors in measuring the quantity Y from the errors in measuring each variable it depends upon.
11.4. Factorial Design of Experiments and Analysis of Variance

Statistically designed experiments consider, of course, the effect of primary variables, but they also consider the effect of extraneous variables, the interactions among variables, and a measure of the random error. Primary variables are those whose effect must be determined. These variables can be quantitative or qualitative. Quantitative variables are ones that may be fit to a model to determine the model parameters. Curve fitting of this type is discussed in Chapter 2. Qualitative variables are ones whose effect needs to be known; no attempt is made to quantify that effect other than to assign possible errors or magnitudes. Qualitative variables can be further subdivided into type I variables, whose effect is determined directly, and type II variables, which contribute to performance variability, and whose effect is averaged out. For example, in studying the effect of several catalysts on yield in a chemical reactor, each different type of catalyst would be a type I variable, because its effect should be known. However, each time the catalyst is prepared, the results are slightly different, because of random variations; thus, several batches may exist of what purports to be the same catalyst. The variability between batches is a type II variable. Because the ultimate use will require using different batches, the overall effect including that variation should be known, because knowing the results from one batch of one catalyst precisely might not be representative of the results obtained from all batches of the same catalyst. A randomized block design, incomplete block design, or Latin square design, for example, all keep the effect of experimental error in the blocked variables from influencing the effect of the primary variables. Other uncontrolled variables are accounted for by introducing randomization in parts of the experimental design. To study all variables and their interaction requires a factorial design, involving all possible combinations of each variable, or a fractional factorial design, involving only a selected set. Statistical techniques are then used to determine the important variables, the important interactions, and the error in estimating these effects. The discussion here is a brief overview of [303].

If only two methods exist for preparing some product, to see which treatment is best, the sampling analysis discussed in Section 11.2 can be used to deduce if the means of the two treatments differ significantly. With more treatments, the analysis is more detailed. Suppose the experimental results are arranged as shown in Table 15, i.e., several measurements for each treatment. The objective is to see if the treatments differ significantly from each other, that is, whether their means are different. The samples are assumed to have the same variance. The null hypothesis is that the treatments are all the same; the alternative hypothesis is that they differ. Deducing the statistical validity of the hypothesis is done by an analysis of variance.

Table 15. Estimating the effect of four treatments

Treatment                 1    2    3    4
(measurements y_{ti})
Treatment average, \bar{y}_t
Grand average, \bar{y}

The data for k = 4 treatments are arranged in Table 15. Each treatment has n_t experiments, and the outcome of the i-th experiment with treatment t is called y_{ti}. The treatment average is

\bar{y}_t = \frac{\sum_{i=1}^{n_t} y_{ti}}{n_t}

and the grand average is

\bar{y} = \frac{\sum_{t=1}^{k} n_t \bar{y}_t}{N}, \quad N = \sum_{t=1}^{k} n_t

Next, the sum of squares of deviations is computed from the average within the t-th treatment

S_t = \sum_{i=1}^{n_t} (y_{ti} - \bar{y}_t)^2

Since each treatment has n_t experiments, the number of degrees of freedom is n_t − 1. Then the sample variances are

s_t^2 = \frac{S_t}{n_t - 1}
The within-treatment sum of squares is

S_R = \sum_{t=1}^{k} S_t

and the within-treatment sample variance is

s_R^2 = \frac{S_R}{N - k}

Now, if no difference exists between treatments, a second estimate of \sigma^2 could be obtained by calculating the variation of the treatment averages about the grand average. Thus, the between-treatment mean square is computed:

s_T^2 = \frac{S_T}{k - 1}, \quad S_T = \sum_{t=1}^{k} n_t (\bar{y}_t - \bar{y})^2

Basically, the test of whether the hypothesis is true or not hinges on a comparison between the within-treatment estimate s_R^2 (with \nu_R = N - k degrees of freedom) and the between-treatment estimate s_T^2 (with \nu_T = k - 1 degrees of freedom). The test is made based on the F distribution for \nu_R and \nu_T degrees of freedom [23, Table 26.9], [303, p. 636].

For a randomized block design, the sums of squares are collected in the analysis of variance table:

Name         Formula                                                              Degrees of freedom
Average      S_A = n k \bar{y}^2                                                  1
Blocks       S_B = k \sum_{i=1}^{n} (\bar{y}_i - \bar{y})^2                       n - 1
Treatments   S_T = n \sum_{t=1}^{k} (\bar{y}_t - \bar{y})^2                       k - 1
Residuals    S_R = \sum_{t=1}^{k} \sum_{i=1}^{n} (y_{ti} - \bar{y}_i - \bar{y}_t + \bar{y})^2   (n - 1)(k - 1)
Total        S = \sum_{t=1}^{k} \sum_{i=1}^{n} y_{ti}^2                           N = n k

The key test is again a statistical one, based on the value of

\frac{s_T^2}{s_R^2}, \quad \text{where} \quad s_T^2 = \frac{S_T}{k-1} \quad \text{and} \quad s_R^2 = \frac{S_R}{(n-1)(k-1)}

and the F distribution for \nu_R and \nu_T degrees of freedom [303, p. 636]. The assumption behind the analysis is that the variations are linear [303, p. 218]. Ways to test this assumption, as well as transformations to make if it is not true, are provided in [303], where an example is given of how the observations are broken down into a grand average, a block deviation, a treatment deviation, and a residual. For a two-way factorial design, in which the second variable is a real one rather than one you would like to block out, see [303, p. 228].
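The F ratio described above can be sketched in a few lines of Python (an illustration with invented data; in practice the computed F would be compared against tabulated percentage points of the F distribution, e.g., [23]):

```python
def anova_f(groups):
    """One-way analysis of variance: return (s2_R, s2_T, F), the
    within-treatment and between-treatment variance estimates and
    their ratio, as in the text."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(len(g) * m for g, m in zip(groups, means)) / N
    # within-treatment: S_R with nu_R = N - k degrees of freedom
    S_R = sum(sum((y - m) ** 2 for y in g) for g, m in zip(groups, means))
    s2_R = S_R / (N - k)
    # between-treatment: S_T with nu_T = k - 1 degrees of freedom
    S_T = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    s2_T = S_T / (k - 1)
    return s2_R, s2_T, s2_T / s2_R

s2_R, s2_T, F = anova_f([[1.0, 3.0], [5.0, 7.0]])
```

A large F (here 16/2 = 8) suggests the between-treatment variation exceeds what experimental error alone would produce; its significance is judged from the F distribution.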
Next consider the case in which randomized blocking is used to eliminate the effect of some variable whose effect is of no interest, such as the batch-to-batch variation of the catalysts in the chemical reactor example. With k treatments and n experiments in each treatment, the results from n k experiments can be arranged as shown in Table 16; within each block, the various treatments are applied in a random order. The block average, the treatment average, and the grand average are computed as before, together with the quantities needed for the analysis of variance table.

Table 16. Block design with four treatments and five blocks

                     Treatment 1   Treatment 2   Treatment 3   Treatment 4   Block average
Block 1
Block 2
Block 3
Block 4
Block 5
Treatment average                                                            Grand average

To measure the effects of variables on a single outcome, a factorial design is appropriate. In a two-level factorial design, each variable is considered at two levels only, a high and a low value, often designated as + and -. The two-level factorial design is useful for indicating trends and showing interactions; it is also the basis for a fractional factorial design. As an example, consider a 2^3 factorial design, with 3 variables and 2 levels for each. The experiments are indicated in Table 17.

Table 17. Two-level factorial design with three variables

Run   Variable 1   Variable 2   Variable 3
1         -            -            -
2         +            -            -
3         -            +            -
4         +            +            -
5         -            -            +
6         +            -            +
7         -            +            +
8         +            +            +

The main effects are calculated by determining the difference between results from
all high values of a variable and all low values of a variable; the result is divided by the number of experiments at each level. For example, for the first variable, calculate

\text{Effect of variable 1} = \frac{(y_2 + y_4 + y_6 + y_8) - (y_1 + y_3 + y_5 + y_7)}{4}

Note that all observations are being used to supply information on each of the main effects and each effect is determined with the precision of a fourfold replicated difference. The advantage over a one-at-a-time experiment is the gain in precision if the variables are additive and the measure of nonadditivity if it occurs [303, p. 313].

Interaction effects between variables 1 and 2 are obtained by comparing the difference between the results obtained with the high and low value of 1 at the low value of 2 with the difference between the results obtained with the high and low value of 1 at the high value of 2. The 12-interaction is

\text{12 interaction} = \frac{(y_4 - y_3 + y_8 - y_7) - (y_2 - y_1 + y_6 - y_5)}{2}

The key step is to determine the errors associated with the effect of each variable and each interaction so that the significance can be determined. Thus, standard errors need to be assigned. This can be done by repeating the experiments, but it can also be done by using higher-order interactions (such as 123 interactions in a 2^4 factorial design). These are assumed negligible in their effect on the mean but can be used to estimate the standard error [303, pp. 319 – 328]. Then calculated effects that are large compared to the standard error are considered important, whereas those that are small compared to the standard error are considered due to random variations and are unimportant.

In a fractional factorial design, only part of the possible experiments is performed. With k variables, a factorial design requires 2^k experiments. When k is large, the number of experiments can be large; for k = 5, 2^5 = 32. For k this large, Box et al. [296, p. 235] do a fractional factorial design. In the fractional factorial design with k = 5, only 8 experiments are chosen. Cropley [298] gives an example of how to combine heuristics and statistical arguments in application to kinetics mechanisms in chemical engineering.

12. Multivariable Calculus Applied to Thermodynamics

Many of the functional relationships required in thermodynamics are direct applications of the rules of multivariable calculus. In this short chapter, those rules are reviewed in the context of the needs of thermodynamics. These ideas were expounded in one of the classic books on chemical engineering thermodynamics [299].

12.1. State Functions

State functions depend only on the state of the system, not on its past history or how one got there. If z is a function of two variables x and y, then z(x, y) is a state function, because z is known once x and y are specified. The differential of z is

dz = M\,dx + N\,dy

The line integral

\int_C (M\,dx + N\,dy)

is independent of the path in x–y space if and only if

\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}   (122)

Because the total differential can be written as

dz = (\partial z/\partial x)_y\,dx + (\partial z/\partial y)_x\,dy   (123)

for path independence

\frac{\partial}{\partial y}(\partial z/\partial x)_y = \frac{\partial}{\partial x}(\partial z/\partial y)_x

or

\frac{\partial^2 z}{\partial y\,\partial x} = \frac{\partial^2 z}{\partial x\,\partial y}   (124)

is needed.

Various relationships can be derived from Equation 123. If z is constant,

0 = \left[ (\partial z/\partial x)_y\,dx + (\partial z/\partial y)_x\,dy \right]_z

Rearrangement gives
     
z y z (y/x)z
= = (125) which here is
x y x z y x (y/z)x    
T p
Alternatively, if Equation 123 is divided by dy = (129)
V S S V
while some other variable w is held constant,
        This is one of the Maxwell relations and is
z z x z
= + (126) merely an expression of Equation 124.
y w x y y w y x
The differentials of the other energies are
Dividing both the numerator and the denomina-
tor of a partial derivative by dw while holding a dH = T dS+V dp (130)
variable y constant yields
      dA = SdT pdV (131)
z (z/w)y z w
= = (127)
x y (x/w)y w y x y
dG = S dT +V dp (132)
In thermodynamics the state functions in-
clude the internal energy U, the enthalpy H, and From these differentials, other Maxwell rela-
the Helmholtz and Gibbs free energies A and G, tions can be derived in a similar fashion by ap-
respectively, which are dened as follows: plying Equation 124.
   
H = U +pV T V
= (133)
p S S p
A = U T S    
S p
= (134)
G = HT S = U +pV T S = A+pV V T T V
   
where S is the entropy, T the absolute temper- S V
= (135)
ature, p the pressure, and V the volume. These p T T p
are also state functions, in that the entropy is
specied once two variables (e.g., T and p) are The heat capacity at constant pressure is de-
specied. Likewise V is specied once T and p ned as
 
are specied, and so forth. H
Cp =
T p

12.2. Applications to Thermodynamics If entropy and enthalpy are taken as functions of


T and p, the total differentials are
All of the following applications are for closed   
S S
systems with constant mass. If a process is re- dS = dp dT +
p T p T
versible and only p V work is done, one form    
H H
of the rst law states that changes in the internal dH = dT + dp
T p p T
energy are given by the following expression  
H
= Cp dT + dp
dU = T dSpdV (128) p T
If the internal energy is considered a function of If the pressure is constant,
S and V, then  
    S
U U dS = dT and dH = Cp dT
dU = dS+ dV T p
S V V S
When enthalpy is considered a function of S and
This is the equivalent of Equation 123 and p, the total differential is
   
U U
T = , p= dH = T dS+V dp
S V V S

Because the internal energy is a state function, When the pressure is constant, this is
Equation 124 is required:
dH = T dS
2U 2U
=
V S SV
Thus, at constant pressure

dH = C_p\,dT = T\,dS = T (\partial S/\partial T)_p\,dT

which gives

(\partial S/\partial T)_p = \frac{C_p}{T}

When p is not constant, using the last Maxwell relation gives

dS = \frac{C_p}{T}\,dT - (\partial V/\partial T)_p\,dp   (136)

Then the total differential for H is

dH = T\,dS + V\,dp = C_p\,dT - T (\partial V/\partial T)_p\,dp + V\,dp

Rearranging this, when H(T, p), yields

dH = C_p\,dT + \left[ V - T (\partial V/\partial T)_p \right] dp   (137)

This equation can be used to evaluate enthalpy differences by using information on the equation of state and the heat capacity:

H(T_2, p_2) - H(T_1, p_1) = \int_{T_1}^{T_2} C_p(T, p_1)\,dT + \int_{p_1}^{p_2} \left[ V - T (\partial V/\partial T)_p \right]\Big|_{T_2, p}\,dp   (138)

The same manipulations can be done for the internal energy:

(\partial S/\partial T)_V = \frac{C_v}{T}   (139)

dS = -\frac{(\partial V/\partial T)_p}{(\partial V/\partial p)_T}\,dV + \frac{C_v}{T}\,dT   (140)

dU = C_v\,dT - \left[ p + T \frac{(\partial V/\partial T)_p}{(\partial V/\partial p)_T} \right] dV

12.3. Partial Derivatives of All Thermodynamic Functions

The various partial derivatives of the thermodynamic functions can be classified into six groups. In the general formulas below, the variables U, H, A, G, or S are denoted by Greek letters, whereas the variables V, T, or p are denoted by Latin letters.

Type 1 (3 possibilities plus reciprocals).

General: (\partial a/\partial b)_c, \quad Specific: (\partial p/\partial T)_V

Equation 125 yields

(\partial p/\partial T)_V = -(\partial V/\partial T)_p (\partial p/\partial V)_T = -\frac{(\partial V/\partial T)_p}{(\partial V/\partial p)_T}   (141)

This relates all three partial derivatives of this type.

Type 2 (30 possibilities plus reciprocals).

General: (\partial \alpha/\partial b)_c, \quad Specific: (\partial G/\partial T)_V

Using Equation 132 gives

(\partial G/\partial T)_V = -S + V (\partial p/\partial T)_V

Using the other equations for U, H, A, or S gives the other possibilities.

Type 3 (15 possibilities plus reciprocals).

General: (\partial a/\partial b)_\gamma, \quad Specific: (\partial V/\partial T)_S

First the derivative is expanded by using Equation 125, which is called expansion without introducing a new variable:

(\partial V/\partial T)_S = -(\partial S/\partial T)_V (\partial V/\partial S)_T = -\frac{(\partial S/\partial T)_V}{(\partial S/\partial V)_T}

Then the numerator and denominator are evaluated as type 2 derivatives, or by using Equations 139 and 134:

(\partial V/\partial T)_S = \frac{C_v/T}{(\partial V/\partial T)_p (\partial p/\partial V)_T} = \frac{C_v (\partial V/\partial p)_T}{T (\partial V/\partial T)_p}   (142)

These derivatives are important for reversible, adiabatic processes (e.g., in an ideal turbine or compressor) because the entropy is constant. Similar derivatives can be obtained for isenthalpic processes, such as a pressure reduction at a valve. In that case, the Joule–Thomson coefficient is obtained for constant H:

(\partial T/\partial p)_H = \frac{1}{C_p} \left[ -V + T (\partial V/\partial T)_p \right]
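These identities are easy to check numerically. The sketch below (an illustration, not part of the original text) verifies Equation 141 and the vanishing of the Joule–Thomson coefficient for an ideal gas, V = RT/p, using central finite differences; the step sizes and the choice C_p = (5/2)R are arbitrary assumptions for the example:

```python
R = 8.314  # gas constant, J/(mol K)

def V(T, p):
    """Ideal-gas molar volume, V = R T / p."""
    return R * T / p

def ddT(f, T, p, h=1e-6):
    """Central difference in T at constant p."""
    return (f(T + h, p) - f(T - h, p)) / (2 * h)

def ddp(f, T, p, h=1e-3):
    """Central difference in p at constant T."""
    return (f(T, p + h) - f(T, p - h)) / (2 * h)

T, p = 300.0, 1.0e5
# Equation 141: (dp/dT)_V = -(dV/dT)_p / (dV/dp)_T.
# For the ideal gas, p = R T / V gives (dp/dT)_V = R/V exactly.
lhs = R / V(T, p)
rhs = -ddT(V, T, p) / ddp(V, T, p)
# Joule-Thomson coefficient (1/C_p)[-V + T (dV/dT)_p]: zero for an ideal gas,
# since T (dV/dT)_p = T R/p = V.
Cp = 2.5 * R
mu_JT = (-V(T, p) + T * ddT(V, T, p)) / Cp
```

For a real equation of state, the same finite-difference estimates of (∂V/∂T)_p would give a nonzero Joule–Thomson coefficient through the identical formula.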
Type 4 (30 possibilities plus reciprocals).

General: (\partial \alpha/\partial \beta)_c, \quad Specific: (\partial G/\partial A)_p

Now, expand through the introduction of a new variable using Equation 127:

(\partial G/\partial A)_p = (\partial G/\partial T)_p (\partial T/\partial A)_p = \frac{(\partial G/\partial T)_p}{(\partial A/\partial T)_p}

This operation has created two type 2 derivatives. Substitution yields

(\partial G/\partial A)_p = \frac{S}{S + p (\partial V/\partial T)_p}

Type 5 (60 possibilities plus reciprocals).

General: (\partial \alpha/\partial b)_\beta, \quad Specific: (\partial G/\partial p)_A

Starting from Equation 132 for dG gives

(\partial G/\partial p)_A = -S (\partial T/\partial p)_A + V

The derivative is a type 3 derivative and can be evaluated by using Equation 125:

(\partial G/\partial p)_A = S \frac{(\partial A/\partial p)_T}{(\partial A/\partial T)_p} + V

The two type 2 derivatives are then evaluated:

(\partial G/\partial p)_A = \frac{S p (\partial V/\partial p)_T}{S + p (\partial V/\partial T)_p} + V

These derivatives are also of interest for free expansions or isentropic changes.

Type 6 (30 possibilities plus reciprocals).

General: (\partial \alpha/\partial \beta)_\gamma, \quad Specific: (\partial G/\partial A)_H

Equation 127 is used to obtain two type 5 derivatives:

(\partial G/\partial A)_H = \frac{(\partial G/\partial T)_H}{(\partial A/\partial T)_H}

These can then be evaluated by using the procedures for type 5 derivatives.

The difference in molar heat capacities (C_p - C_v) can be derived in similar fashion. Using Equation 139 for C_v yields

C_v = T (\partial S/\partial T)_V

To evaluate the derivative, Equation 126 is used to express dS in terms of p and T:

(\partial S/\partial T)_V = (\partial S/\partial p)_T (\partial p/\partial T)_V + (\partial S/\partial T)_p = -(\partial V/\partial T)_p (\partial p/\partial T)_V + \frac{C_p}{T}

Substitution for (\partial p/\partial T)_V and rearrangement give

C_p - C_v = T (\partial V/\partial T)_p (\partial p/\partial T)_V = -T (\partial V/\partial T)_p^2 (\partial p/\partial V)_T

Use of this equation permits the rearrangement of Equation 142 into

(\partial V/\partial T)_S = \frac{(\partial V/\partial T)_p^2 + (C_p/T)(\partial V/\partial p)_T}{(\partial V/\partial T)_p}

The ratio of heat capacities is

\frac{C_p}{C_v} = \frac{T (\partial S/\partial T)_p}{T (\partial S/\partial T)_V}

Expansion by using Equation 125 gives

\frac{C_p}{C_v} = \frac{(\partial p/\partial T)_S (\partial S/\partial p)_T}{(\partial V/\partial T)_S (\partial S/\partial V)_T}

and the ratios are then

\frac{C_p}{C_v} = (\partial p/\partial V)_S (\partial V/\partial p)_T

Using Equation 125 gives

\frac{C_p}{C_v} = -(\partial p/\partial V)_S (\partial T/\partial p)_V (\partial V/\partial T)_p

Entropy is a variable in at least one of the partial derivatives.

13. References

Specific References
1. D. E. Seborg, T. F. Edgar, D. A. Mellichamp: Process Dynamics and Control, 2nd ed., John Wiley & Sons, New York 2004.
2. G. Forsythe, C. B. Moler: Computer Solution of Linear Algebraic Systems, Prentice-Hall, Englewood Cliffs 1967.
3. B. A. Finlayson: Nonlinear Analysis in Chemical Engineering, McGraw-Hill, New York 1980; reprinted, Ravenna Park, Seattle 2003.
4. S. C. Eisenstat, M. H. Schultz, A. H. Sherman: "Algorithms and Data Structures for Sparse Symmetric Gaussian Elimination", SIAM J. Sci. Stat. Comput. 2 (1981) 225–237.
5. I. S. Duff: Direct Methods for Sparse Matrices, Clarendon Press, Oxford 1986.
6. H. S. Price, K. H. Coats: "Direct Methods in Reservoir Simulation", Soc. Pet. Eng. J. 14 (1974) 295–308.
7. A. Bykat: "A Note on an Element Re-Ordering Scheme", Int. J. Num. Methods Eng. 11 (1977) 194–198.
8. M. R. Hestenes, E. Stiefel: "Methods of conjugate gradients for solving linear systems", J. Res. Nat. Bur. Stand. 49 (1952) 409–436.
9. Y. Saad: Iterative Methods for Sparse Linear Systems, 2nd ed., Soc. Ind. Appl. Math., Philadelphia 2003.
10. Y. Saad, M. Schultz: "GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems", SIAM J. Sci. Statist. Comput. 7 (1986) 856–869.
11. http://mathworld.wolfram.com/GeneralizedMinimalResidualMethod.html
12. http://www.netlib.org/linalg/html_templates/Templates.html
13. http://software.sandia.gov/
14. E. Isaacson, H. B. Keller: Analysis of Numerical Methods, J. Wiley and Sons, New York 1966.
15. W. H. Press, B. P. Flannery, S. A. Teukolsky, W. T. Vetterling: Numerical Recipes, Cambridge University Press, Cambridge 1986.
16. R. W. H. Sargent: "A Review of Methods for Solving Non-linear Algebraic Equations", in R. S. H. Mah, W. D. Seider (eds.): Foundations of Computer-Aided Chemical Process Design, American Institute of Chemical Engineers, New York 1981.
17. J. D. Seader: "Computer Modeling of Chemical Processes", AIChE Monogr. Ser. 81 (1985) no. 15.
18. http://software.sandia.gov/trilinos/packages/nox/loca_user.html
19. N. R. Amundson: Mathematical Methods in Chemical Engineering, Prentice-Hall, Englewood Cliffs, N.J. 1966.
20. R. H. Perry, D. W. Green: Perry's Chemical Engineers' Handbook, 7th ed., McGraw-Hill, New York 1997.
21. D. S. Watkins: "Understanding the QR Algorithm", SIAM Rev. 24 (1982) 427–440.
22. G. F. Carey, K. Sepehrnoori: "Gershgorin Theory for Stiffness and Stability of Evolution Systems and Convection-Diffusion", Comp. Meth. Appl. Mech. 22 (1980) 23–48.
23. M. Abramowitz, I. A. Stegun: Handbook of Mathematical Functions, National Bureau of Standards, Washington, D.C. 1972.
24. J. C. Daubisse: "Some Results about Approximation of Functions of One or Two Variables by Sums of Exponentials", Int. J. Num. Meth. Eng. 23 (1986) 1959–1967.
25. O. C. McGehee: An Introduction to Complex Analysis, John Wiley & Sons, New York 2000.
26. H. A. Priestley: Introduction to Complex Analysis, Oxford University Press, New York 2003.
27. Y. K. Kwok: Applied Complex Variables for Scientists and Engineers, Cambridge University Press, New York 2002.
28. N. Asmar, G. C. Jones: Applied Complex Analysis with Partial Differential Equations, Prentice Hall, Upper Saddle River, NJ 2002.
29. M. J. Ablowitz, A. S. Fokas: Complex Variables: Introduction and Applications, Cambridge University Press, New York 2003.
30. J. W. Brown, R. V. Churchill: Complex Variables and Applications, 6th ed., McGraw-Hill, New York 1996; 7th ed. 2003.
31. W. Kaplan: Advanced Calculus, 5th ed., Addison-Wesley, Redwood City, Calif. 2003.
32. E. Hille: Analytic Function Theory, Ginn and Co., Boston 1959.
33. R. V. Churchill: Operational Mathematics, McGraw-Hill, New York 1958.
34. J. W. Brown, R. V. Churchill: Fourier Series and Boundary Value Problems, 6th ed., McGraw-Hill, New York 2000.
35. R. V. Churchill: Operational Mathematics, 3rd ed., McGraw-Hill, New York 1972.
36. B. Davies: Integral Transforms and Their Applications, 3rd ed., Springer, Heidelberg 2002.
37. D. G. Duffy: Transform Methods for Solving Partial Differential Equations, Chapman & Hall/CRC, New York 2004.
38. A. Varma, M. Morbidelli: Mathematical Methods in Chemical Engineering, Oxford, New York 1997.
39. H. Bateman: Tables of Integral Transforms, vol. I, McGraw-Hill, New York 1954.
40. H. F. Weinberger: A First Course in Partial Differential Equations, Blaisdell, Waltham, Mass. 1965.
41. R. V. Churchill: Operational Mathematics, McGraw-Hill, New York 1958.
42. J. T. Hsu, J. S. Dranoff: "Numerical Inversion of Certain Laplace Transforms by the Direct Application of Fast Fourier Transform (FFT) Algorithm", Comput. Chem. Eng. 11 (1987) 101–110.
43. E. Kreyszig: Advanced Engineering Mathematics, 9th ed., John Wiley & Sons, New York 2006.
44. H. S. Carslaw, J. C. Jaeger: Conduction of Heat in Solids, 2nd ed., Clarendon Press, Oxford–London 1959.
45. R. B. Bird, R. C. Armstrong, O. Hassager: Dynamics of Polymeric Liquids, 2nd ed., Appendix A, Wiley-Interscience, New York 1987.
46. R. S. Rivlin: J. Rat. Mech. Anal. 4 (1955) 681–702.
47. N. R. Amundson: Mathematical Methods in Chemical Engineering; Matrices and Their Application, Prentice-Hall, Englewood Cliffs, N.J. 1966.
48. I. S. Sokolnikoff, E. S. Sokolnikoff: Higher Mathematics for Engineers and Physicists, McGraw-Hill, New York 1941.
49. R. C. Wrede, M. R. Spiegel: Schaum's Outline of Theory and Problems of Advanced Calculus, 2nd ed., McGraw-Hill, New York 2006.
50. P. M. Morse, H. Feshbach: Methods of Theoretical Physics, McGraw-Hill, New York 1953.
51. B. A. Finlayson: The Method of Weighted Residuals and Variational Principles, Academic Press, New York 1972.
52. G. Forsythe, M. Malcolm, C. Moler: Computer Methods for Mathematical Computation, Prentice-Hall, Englewood Cliffs, N.J. 1977.
53. J. D. Lambert: Computational Methods in Ordinary Differential Equations, J. Wiley and Sons, New York 1973.
54. N. R. Amundson: Mathematical Methods in Chemical Engineering, Prentice-Hall, Englewood Cliffs, N.J. 1966.
55. J. R. Rice: Numerical Methods, Software, and Analysis, McGraw-Hill, New York 1983.
56. M. B. Bogacki, K. Alejski, J. Szymanowski: "The Fast Method of the Solution of Reacting Distillation Problem", Comput. Chem. Eng. 13 (1989) 1081–1085.
57. C. W. Gear: Numerical Initial-Value Problems in Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, N.J. 1971.
58. N. B. Ferguson, B. A. Finlayson: "Transient Modeling of a Catalytic Converter to Reduce Nitric Oxide in Automobile Exhaust", AIChE J. 20 (1974) 539–550.
59. W. F. Ramirez: Computational Methods for Process Simulations, 2nd ed., Butterworth-Heinemann, Boston 1997.
60. U. M. Ascher, L. R. Petzold: Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, Philadelphia 1998.
61. K. E. Brenan, S. L. Campbell, L. R. Petzold: Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, Elsevier, Amsterdam 1989.
62. C. C. Pantelides, D. Gritsis, K. R. Morison, R. W. H. Sargent: "The Mathematical Modelling of Transient Systems Using Differential-Algebraic Equations", Comput. Chem. Eng. 12 (1988) 449–454.
63. G. A. Byrne, P. R. Ponzi: "Differential-Algebraic Systems, Their Applications and Solutions", Comput. Chem. Eng. 12 (1988) 377–382.
64. L. F. Shampine, M. W. Reichelt: "The MATLAB ODE Suite", SIAM J. Sci. Comp. 18 (1997) 1–22.
65. M. Kubicek, M. Marek: Computational Methods in Bifurcation Theory and Dissipative Structures, Springer Verlag, Berlin–Heidelberg–New York–Tokyo 1983.
66. T. F. C. Chan, H. B. Keller: "Arc-Length Continuation and Multi-Grid Techniques for Nonlinear Elliptic Eigenvalue Problems", SIAM J. Sci. Stat. Comput. 3 (1982) 173–194.
67. M. F. Doherty, J. M. Ottino: "Chaos in Deterministic Systems: Strange Attractors, Turbulence and Applications in Chemical Engineering", Chem. Eng. Sci. 43 (1988) 139–183.
68. M. P. Allen, D. J. Tildesley: Computer Simulation of Liquids, Clarendon Press, Oxford 1989.
69. D. Frenkel, B. Smit: Understanding Molecular Simulation, Academic Press, San Diego 2002.
70. J. M. Haile: Molecular Dynamics Simulation, John Wiley & Sons, New York 1992.
71. A. R. Leach: Molecular Modelling: Principles and Applications, Prentice Hall, Englewood Cliffs, NJ 2001.
72. T. Schlick: Molecular Modeling and Simulations, Springer, New York 2002.
73. R. B. Bird, W. E. Stewart, E. N. Lightfoot: Transport Phenomena, 2nd ed., John Wiley & Sons, New York 2002.
74. R. B. Bird, R. C. Armstrong, O. Hassager: Dynamics of Polymeric Liquids, 2nd ed., Wiley-Interscience, New York 1987.
75. P. V. Danckwerts: "Continuous Flow Systems", Chem. Eng. Sci. 2 (1953) 1–13.
76. J. F. Wehner, R. Wilhelm: "Boundary Conditions of Flow Reactor", Chem. Eng. Sci. 6 (1956) 89–93.
77. V. Hlavacek, H. Hofmann: "Modeling of Chemical Reactors–XVI–Steady-State Axial Heat and Mass Transfer in Tubular Reactors. An Analysis of the Uniqueness of Solutions", Chem. Eng. Sci. 25 (1970) 173–185.
78. B. A. Finlayson: Numerical Methods for Problems with Moving Fronts, Ravenna Park Publishing Inc., Seattle 1990.
79. E. Isaacson, H. B. Keller: Analysis of Numerical Methods, J. Wiley and Sons, New York 1966.
80. C. Lanczos: "Trigonometric Interpolation of Empirical and Analytical Functions", J. Math. Phys. (Cambridge Mass.) 17 (1938) 123–199.
81. C. Lanczos: Applied Analysis, Prentice-Hall, Englewood Cliffs, N.J. 1956.
82. J. Villadsen, W. E. Stewart: "Solution of Boundary-Value Problems by Orthogonal Collocation", Chem. Eng. Sci. 22 (1967) 1483–1501.
83. M. L. Michelsen, J. Villadsen: "Polynomial Solution of Differential Equations", pp. 341–368 in R. S. H. Mah, W. D. Seider (eds.): Foundations of Computer-Aided Chemical Process Design, Engineering Foundation, New York 1981.
84. W. E. Stewart, K. L. Levien, M. Morari: "Collocation Methods in Distillation", in A. W. Westerberg, H. H. Chien (eds.): Proceedings of the Second Int. Conf. on Foundations of Computer-Aided Process Design, Computer Aids for Chemical Engineering Education (CACHE), Austin, Texas, 1984, pp. 535–569.
85. W. E. Stewart, K. L. Levien, M. Morari: "Simulation of Fractionation by Orthogonal Collocation", Chem. Eng. Sci. 40 (1985) 409–421.
86. C. L. E. Swartz, W. E. Stewart: "Finite-Element Steady State Simulation of Multiphase Distillation", AIChE J. 33 (1987) 1977–1985.
87. K. Alhumaizi, R. Henda, M. Soliman: "Numerical Analysis of a reaction-diffusion-convection system", Comp. Chem. Eng. 27 (2003) 579–594.
88. E. F. Costa, P. L. C. Lage, E. C. Biscaia, Jr.: "On the numerical solution and optimization of styrene polymerization in tubular reactors", Comp. Chem. Eng. 27 (2003) 1591–1604.
89. J. Wang, R. G. Anthony, A. Akgerman: "Mathematical simulations of the performance of trickle bed and slurry reactors for methanol synthesis", Comp. Chem. Eng. 29 (2005) 2474–2484.
90. V. K. C. Lee, J. F. Porter, G. McKay, A. P. Mathews: "Application of solid-phase concentration-dependent HSDM to the acid dye adsorption system", AIChE J. 51 (2005) 323–332.
91. C. de Boor, B. Swartz: "Collocation at Gaussian Points", SIAM J. Num. Anal. 10 (1973) 582–606.
92. R. F. Sincovec: "On the Solution of the Equations Arising From Collocation With Cubic B-Splines", Math. Comp. 26 (1972) 893–895.
93. U. Ascher, J. Christiansen, R. D. Russell: "A Collocation Solver for Mixed-Order Systems of Boundary-Value Problems", Math. Comp. 33 (1979) 659–679.
94. R. D. Russell, J. Christiansen: "Adaptive Mesh Selection Strategies for Solving Boundary-Value Problems", SIAM J. Num. Anal. 15 (1978) 59–80.
95. P. G. Ciarlet, M. H. Schultz, R. S. Varga: "Nonlinear Boundary-Value Problems I. One Dimensional Problem", Num. Math. 9 (1967) 394–430.
96. W. F. Ames: Numerical Methods for Partial Differential Equations, 2nd ed., Academic Press, New York 1977.
97. J. F. Botha, G. F. Pinder: Fundamental Concepts in the Numerical Solution of Differential Equations, Wiley-Interscience, New York 1983.
98. W. H. Press, B. P. Flannery, S. A. Teukolsky, W. T. Vetterling: Numerical Recipes, Cambridge Univ. Press, Cambridge 1986.
99. H. Schlichting: Boundary Layer Theory, 4th ed., McGraw-Hill, New York 1960.
100. R. Aris, N. R. Amundson: Mathematical Methods in Chemical Engineering, vol. 2, First-Order Partial Differential Equations with Applications, Prentice-Hall, Englewood Cliffs, NJ 1973.
101. R. Courant, D. Hilbert: Methods of Mathematical Physics, vol. I and II, Interscience, New York 1953, 1962.
102. P. M. Morse, H. Feshbach: Methods of Theoretical Physics, vol. I and II, McGraw-Hill, New York 1953.
103. A. D. Polyanin: Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman and Hall/CRC, Boca Raton, FL 2002.
104. D. Ramkrishna, N. R. Amundson: Linear Operator Methods in Chemical Engineering with Applications to Transport and Chemical Reaction Systems, Prentice Hall, Englewood Cliffs, NJ 1985.
105. D. D. Joseph, M. Renardy, J. C. Saut: "Hyperbolicity and Change of Type in the Flow of Viscoelastic Fluids", Arch. Rational Mech. Anal. 87 (1985) 213–251.
106. H. K. Rhee, R. Aris, N. R. Amundson: First-Order Partial Differential Equations, Prentice-Hall, Englewood Cliffs, N.J. 1986.
107. B. A. Finlayson: Numerical Methods for Problems with Moving Fronts, Ravenna Park Publishing Inc., Seattle 1990.
108. D. L. Book: Finite-Difference Techniques for Vectorized Fluid Dynamics Calculations, Springer Verlag, Berlin–Heidelberg–New York–Tokyo 1981.
109. G. A. Sod: Numerical Methods in Fluid Dynamics, Cambridge University Press, Cambridge 1985.
110. G. H. Xiu, J. L. Soares, P. Li, A. E. Rodrigues: "Simulation of five-step one-bed sorption-enhanced reaction process", AIChE J. 48 (2002) 2817–2832.
111. A. Malek, S. Farooq: "Study of a six-bed pressure swing adsorption process", AIChE J. 43 (1997) 2509–2523.
112. R. J. LeVeque: Numerical Methods for Conservation Laws, Birkhauser, Basel 1992.
113. W. F. Ames: "Recent Developments in the Nonlinear Equations of Transport Processes", Ind. Eng. Chem. Fundam. 8 (1969) 522–536.
114. W. F. Ames: Nonlinear Partial Differential Equations in Engineering, Academic Press, New York 1965.
115. W. E. Schiesser: The Numerical Method of Lines, Academic Press, San Diego 1991.
116. D. Gottlieb, S. A. Orszag: Numerical Analysis of Spectral Methods: Theory and Applications, SIAM, Philadelphia, PA 1977.
117. D. W. Peaceman: Fundamentals of Numerical Reservoir Simulation, Elsevier, Amsterdam 1977.
118. G. Juncu, R. Mihail: "Multigrid Solution of the Diffusion-Convection-Reaction Equations which Describe the Mass and/or Heat Transfer from or to a Spherical Particle", Comput. Chem. Eng. 13 (1989) 259–270.
119. G. R. Buchanan: Schaum's Outline of Finite Element Analysis, McGraw-Hill, New York 1995.
120. M. D. Gunzburger: Finite Element Methods for Viscous Incompressible Flows, Academic Press, San Diego 1989.
121. H. Kardestuncer, D. H. Norrie: Finite Element Handbook, McGraw-Hill, New York 1987.
122. J. N. Reddy, D. K. Gartling: The Finite Element Method in Heat Transfer and Fluid Dynamics, 2nd ed., CRC Press, Boca Raton, FL 2000.
123. O. C. Zienkiewicz, R. L. Taylor, J. Z. Zhu: The Finite Element Method: Its Basis & Fundamentals, vol. 1, 6th ed., Elsevier Butterworth-Heinemann, Burlington, MA 2005.
124. O. C. Zienkiewicz, R. L. Taylor: The Finite Element Method, Solid and Structural Mechanics, vol. 2, 5th ed., Butterworth-Heinemann, Burlington, MA 2000.
125. O. C. Zienkiewicz, R. L. Taylor: The Finite Element Method, Fluid Dynamics, vol. 3, 5th ed., Butterworth-Heinemann, Burlington, MA 2000.
126. Z. C. Li: Combined Methods for Elliptic Equations with Singularities, Interfaces and Infinities, Kluwer Academic Publishers, Boston, MA 1998.
127. G. J. Fix, S. Gulati, G. I. Wakoff: "On the use of singular functions with finite element approximations", J. Comp. Phys. 13 (1973) 209–238.
128. N. M. Wigley: "On a method to subtract off a singularity at a corner for the Dirichlet or Neumann problem", Math. Comp. 23 (1968) 395–401.
129. H. Y. Hu, Z. C. Li, A. H. D. Cheng: "Radial Basis Collocation Methods for Elliptic Boundary Value Problems", Comp. Math. Applic. 50 (2005) 289–320.
130. S. J. Osher, R. P. Fedkiw: Level Set Methods and Dynamic Implicit Surfaces, Springer, New York 2002.
131. M. W. Chang, B. A. Finlayson: "On the Proper Boundary Condition for the Thermal Entry Problem", Int. J. Num. Methods Eng. 15 (1980) 935–942.
132. R. Peyret, T. D. Taylor: Computational Methods for Fluid Flow, Springer Verlag, Berlin–Heidelberg–New York–Tokyo 1983.
133. P. M. Gresho, S. T. Chan, C. Upson, R. L. Lee: "A Modified Finite Element Method for Solving the Time-Dependent, Incompressible Navier–Stokes Equations", Int. J. Num.
Methods Fluids (1984) Part 1. Theory: 557–589; Part 2: Applications: 619–640.
134. P. M. Gresho, R. L. Sani: Incompressible Flow and the Finite Element Method, Advection-Diffusion, vol. 1, John Wiley & Sons, New York 1998.
135. P. M. Gresho, R. L. Sani: Incompressible Flow and the Finite Element Method, Isothermal Laminar Flow, vol. 2, John Wiley & Sons, New York 1998.
136. J. A. Sethian: Level Set Methods, Cambridge University Press, Cambridge 1996.
137. C. C. Lin, H. Lee, T. Lee, L. J. Weber: "A level set characteristic Galerkin finite element method for free surface flows", Int. J. Num. Methods Fluids 49 (2005) 521–547.
138. S. Succi: The Lattice Boltzmann Equation for Fluid Dynamics and Beyond, Oxford University Press, Oxford 2001.
139. M. C. Sukop, D. T. Thorne, Jr.: Lattice Boltzmann Modeling: An Introduction for Geoscientists and Engineers, Springer, New York 2006.
140. S. Chen, G. D. Doolen: "Lattice Boltzmann method for fluid flows", Annu. Rev. Fluid Mech. 30 (1998) 329–364.
141. H. T. Lau: Numerical Library in C for Scientists and Engineers, CRC Press, Boca Raton, FL 1994.
142. Y. Y. Al-Jaymany, G. Brenner, P. O. Brunn: "Comparative study of lattice-Boltzmann and finite volume methods for the simulation of laminar flow through a 4:1 contraction", Int. J. Num. Methods Fluids 46 (2004) 903–920.
143. R. L. Burden, J. D. Faires, A. C. Reynolds: Numerical Analysis, 8th ed., Brooks Cole, Pacific Grove, CA 2005.
144. S. C. Chapra, R. P. Canale: Numerical Methods for Engineers, 5th ed., McGraw-Hill, New York 2006.
145. H. T. Lau: Numerical Library in Java for Scientists and Engineers, CRC Press, Boca Raton, FL 2004.
146. K. W. Morton, D. F. Mayers: Numerical Solution of Partial Differential Equations, Cambridge University Press, New York 1994.
147. A. Quarteroni, A. Valli: Numerical Approximation of Partial Differential Equations, Springer, Heidelberg 1997.
148. F. Scheid: Schaum's Outline of Numerical Analysis, 2nd ed., McGraw-Hill, New York 1989.
149. C. T. H. Baker: The Numerical Treatment of Integral Equations, Clarendon Press, Oxford 1977.
150. L. M. Delves, J. Walsh (eds.): Numerical Solution of Integral Equations, Clarendon Press, Oxford 1974.
151. P. Linz: Analytical and Numerical Methods for Volterra Equations, SIAM Publications, Philadelphia 1985.
152. M. A. Golberg (ed.): Numerical Solution of Integral Equations, Plenum Press, New York 1990.
153. C. Corduneanu: Integral Equations and Applications, Cambridge Univ. Press, Cambridge 1991.
154. R. Kress: Linear Integral Equations, 2nd ed., Springer, Heidelberg 1999.
155. P. K. Kythe, P. Puri: Computational Methods for Linear Integral Equations, Birkhauser, Basel 2002.
156. G. Nagel, G. Kluge: "Non-Isothermal Multicomponent Adsorption Processes and Their Numerical Treatment by Means of Integro-Differential Equations", Comput. Chem. Eng. 13 (1989) 1025–1030.
157. P. L. Mills, S. Lai, M. P. Dudukovic, P. A. Ramachandran: "A Numerical Study of Approximation Methods for Solution of Linear and Nonlinear Diffusion-Reaction Equations with Discontinuous Boundary Conditions", Comput. Chem. Eng. 12 (1988) 37–53.
158. D. Duffy: Green's Functions with Applications, Chapman and Hall/CRC, New York 2001.
159. I. Stakgold: Green's Functions and Boundary Value Problems, 2nd ed., Interscience, New York 1997.
160. B. Davies: Integral Transforms and Their Applications, 3rd ed., Springer, New York 2002.
161. J. M. Bownds: "Theory and performance of a subroutine for solving Volterra integral equations", Computing 28 (1982) 317–332.
162. J. G. Blom, H. Brunner: "Algorithm 689: discretized collocation and iterated collocation for nonlinear Volterra integral equations of the second kind", ACM Trans. Math. Software 17 (1991) 167–177.
163. N. B. Ferguson, B. A. Finlayson: "Error Bounds for Approximate Solutions to Nonlinear Ordinary Differential Equations", AIChE J. 18 (1972) 1053–1059.
164. R. S. Dixit, L. L. Tavlarides: "Integral Method of Analysis of Fischer–Tropsch Synthesis Reactions in a Catalyst Pellet", Chem. Eng. Sci. 37 (1982) 539–544.
144 Mathematics in Chemical Engineering

165. L. Lapidus, G. F. Pinder: Numerical Solution OptControlCentre, Comp. Chem. Eng. 27


of Partial Differential Equations in Science (2003) no. 11, 15131531.
and Engineering, Wiley-Interscience, New 181. L. T. Biegler, A. M. Cervantes, A. Wachter:
York 1982. Advances in Simultaneous Strategies for
166. C. K. Hsieh, H. Shang: A Boundary Dynamic Process Optimization, Chemical
Condition Dissection Method for the Solution Engineering Science 57 (2002) no. 4, 575593.
of Boundary-Value Heat Conduction Problems 182. C. D. Laird, L. T. Biegler, B. van Bloemen
with Position-Dependent Convective Waanders, R. A. Bartlett: Time Dependent
Coefcients, Num. Heat Trans. Part B: Fund. Contaminant Source Determination for
16 (1989) 245 255. Municipal Water Networks Using Large Scale
167. C. A. Brebbia, J. Dominguez: Boundary Optimization, ASCE Journal of Water
Elements An Introductory Course, 2nd ed., Resource Management and Planning 131
Computational Mechanics Publications, (2005) no. 2, 125.
Southhamtpon 1992. 183. A. R. Conn, N. Gould, P. Toint: Trust Region
168. J. Mackerle, C. A. Brebbia (eds.): Boundary Methods, SIAM, Philadelphia 2000.
Element Reference Book, Springer Verlag, 184. A. Drud: CONOPT A Large Scale GRG
Berlin Heidelberg New York Tokyo 1988. Code, ORSA Journal on Computing 6 (1994)
169. O. L. Mangasarian: Nonlinear Programming, 207216.
McGraw-Hill, New York 1969. 185. B. A. Murtagh, M. A. Saunders: MINOS 5.1
170. G. B. Dantzig: Linear Programming and Users Guide, Technical Report SOL 83-20R,
Extensions, Princeton University Press, Stanford University, Palo Alto 1987.
Princeton, N.J, 1963. 186. R. H. Byrd, M. E. Hribar, J. Nocedal: An
171. S. J. Wright: Primal-Dual Interior Point Interior Point Algorithm for Large Scale
Methods, SIAM, Philadelphia 1996. Nonlinear Programming, SIAM J. Opt. 9
172. F. Hillier, G. J. Lieberman: Introduction to (1999) no. 4, 877.
Operations Research, Holden-Day, San 187. R. Fletcher, N. I. M. Gould, S. Leyffer, Ph. L.
Francisco, 1974. Toint, A. Waechter: Global Convergence of a
173. T. F. Edgar, D. M. Himmelblau, L. S. Lasdon: Trust-region (SQP)-lter Algorithms for
Optimization of Chemical Processes, General Nonlinear Programming, SIAM J.
McGraw-Hill Inc., New York 2002. Opt. 13 (2002) no. 3, 635659.
174. K. Schittkowski: More Test Examples for 188. R. J. Vanderbei, D. F. Shanno: An Interior
Nonlinear Programming Codes, Lecture notes Point Algorithm for Non-convex Nonlinear
in economics and mathematical systems no. Programming. Technical Report SOR-97-21,
282, Springer-Verlag, Berlin 1987. CEOR, Princeton University, Princeton, NJ,
175. J. Nocedal, S. J. Wright: Numerical 1997.
Optimization, Springer, New York 1999. 189. P. E. Gill, W. Murray, M. Wright: Practical
176. R. Fletcher: Practical Optimization, John Optimization, Academic Press, New York
Wiley & Sons, Ltd., Chichester, UK 1987 1981.
177. A. Wachter, L. T. Biegler: On the 190. P. E. Gill, W. Murray, M. A. Saunders: Users
Implementation of an Interior Point Filter Line guide for SNOPT: A FORTRAN Package for
Search Algorithm for Large-Scale Nonlinear Large-scale Nonlinear Programming,
Programming, 106 Mathematical Technical report, Department of Mathematics,
Programming (2006) no. 1, 25 57. University of California, San Diego 1998.
178. C. V. Rao, J. B. Rawlings S. Wright: On the 191. J. T. Betts: Practical Methods for Optimal
Application of Interior Point Methods to Control Using Nonlinear Programming,
Model Predictive Control. J. Optim. Theory Advances in Design and Control 3, SIAM,
Appl. 99 (1998) 723. Philadelphia 2001.
179. J. Albuquerque, V. Gopal, G. Staus, L. T. 192. gPROMS Users Guide, PSE Ltd., London
Biegler, B. E. Ydstie: Interior point SQP 2002.
Strategies for Large-scale Structured Process 193. G. E. P. Box: Evolutionary Operation: A
Optimization Problems, Comp. Chem. Eng. Method for Increasing Industrial Productivity,
23 (1997) 283. Applied Statistics 6 (1957) 81101.
180. T. Jockenhoevel, L. T. Biegle, A. Wachter: 194. R. Hooke, T. A. Jeeves: Direct Search
Dynamic Optimization of the Tennessee Solution of Numerical and Statistical
Eastman Process Using the Problems, J. ACM 8 (1961) 212.
Mathematics in Chemical Engineering 145

195. M. J. D. Powell: An Efcient Method for Mathematical Programming 10 (1976)


Finding the Minimum of a Function of Several 147175.
Variables without Calculating Derivatives, 209. H. S. Ryoo, N. V. Sahinidis: Global
Comput. J. 7 (1964) 155. Optimization of Nonconvex NLPs and
196. J. A. Nelder, R. Mead: A Simplex Method for MINLPs with Applications in Process
Function Minimization, Computer Journal 7 Design, Comp. Chem. Eng. 19 (1995) no. 5,
(1965) 308. 551566.
197. R. Luus, T. H. I. Jaakola: Direct Search for 210. M. Tawarmalani, N. V. Sahinidis: Global
Complex Systems, AIChE J 19 (1973) Optimization of Mixed Integer Nonlinear
645646. Programs: A Theoretical and Computational
198. R. Goulcher, J. C. Long: The Solution of Study, Mathematical Programming 99 (2004)
Steady State Chemical Engineering no. 3, 563591.
Optimization Problems using a Random 211. C. S. Adjiman., I.P. Androulakis, C.A.
Search Algorithm, Comp. Chem. Eng. 2 Floudas: Global Optimization of MINLP
(1978) 23. Problems in Process Synthesis and Design.
199. J. R. Banga, W. D. Seider: Global Comp. Chem. Eng., 21 (1997) Suppl.,
Optimization of Chemical Processes using S445S450.
Stochastic Algorithms, C. Floudas and P. 212. C.S. Adjiman, I.P. Androulakis and C.A.
Pardalos (eds.): State of the Art in Global Floudas, Global Optimization of
Optimization, Kluwer, Dordrecht 1996, p. 563. Mixed-Integer Nonlinear Problems, AIChE
200. P. J. M. van Laarhoven, E. H. L. Aarts: Journal, 46 (2000) no. 9, 17691797.
Simulated Annealing: Theory and 213. C. S. Adjiman, I.P. Androulakis, C.D.
Applications, Reidel Publishing, Dordrecht Maranas, C. A. Floudas: A Global
1987. Optimization Method, BB, for Process
201. J. H. Holland: Adaptations in Natural and Design, Comp. Chem. Eng., 20 (1996) Suppl.,
Articial Systems, University of Michigan S419S424.
Press, Ann Arbor 1975. 214. J. M. Zamora, I. E. Grossmann: A Branch
202. J. E. Dennis, V. Torczon:, Direct Search and Contract Algorithm for Problems with
Methods on Parallel Machines, SIAM J. Opt. Concave Univariate, Bilinear and Linear
1 (1991) 448. Fractional Terms, Journal of Gobal
203. A. R. Conn, K. Scheinberg, P. Toint: Recent Optimization 14 (1999) no. 3, 217249.
Progress in Unconstrained Nonlinear 215. E. M. B. Smith, C. C. Pantelides: Global
Optimization without Derivatives, Optimization of Nonconvex NLPs and
Mathemtatical Programming, Series B, 79 MINLPs with Applications in Process
(1997) no. 3, 397. Design, Comp. Chem. Eng., 21 (1997) no.
204. M. Eldred: DAKOTA: A Multilevel Parallel 1001, S791S796.
Object-Oriented Framework for Design 216. J. E. Falk, R. M. Soland: An Algorithm for
Optimization, Parameter Estimation, Separable Nonconvex Programming
Uncertainty Problems, Management Science 15 (1969)
Quantication, and Sensitivity Analysis, 2002, 550569.
http://endo.sandia.gov/DAKOTA/software.html 217. F. A. Al-Khayyal, J. E. Falk: Jointly
205. A. J. Booker et al.: A Rigorous Framework
Constrained Biconvex Programming,
for Optimization of Expensive Functions by
Mathematics of Operations Research 8 (1983)
Surrogates, CRPC Technical Report 98739,
273286.
Rice University, Huston TX, February 1998.
218. F. A. Al-Khayyal: Jointly Constrained
206. R. Horst, P. M. Tuy: Global Optimization:
Bilinear Programs and Related Problems: An
Deterministic Approaches, 3rd ed., Springer
Overview, Computers and Mathematics with
Verlag, Berlin 1996.
Applications, 19 (1990) 5362.
207. I. Quesada, I. E. Grossmann: A Global
219. I. Quesada, I. E. Grossmann: A Global
Optimization Algorithm for Linear Fractional
Optimization Algorithm for Heat Exchanger
and Bilinear Programs, Journal of Global
Networks, Ind. Eng. Chem. Res. 32 (1993)
Optimization 6 (1995) no. 1, 3976.
208. G. P. McCormick: Computability of Global 487499.
Solutions to Factorable Nonconvex Programs: 220. H. D. Sherali, A. Alameddine: A New
Part I Convex Underestimating Problems, Reformulation-Linearization Technique for
146 Mathematics in Chemical Engineering

Bilinear Programming Problems, Journal of 234. T. J. Van Roy, L. A. Wolsey: Valid


Global Optimization 2 (1992) 379410. Inequalities for Mixed 0-1 Programs, Discrete
221. H. D. Sherali, C. H. Tuncbilek: A Applied Mathematics 14 (1986) 199213.
Reformulation-Convexication Approach for 235. E. L. Johnson, G. L. Nemhauser, M. W. P.
Solving Nonconvex Quadratic Programming Savelsbergh: Progress in Linear
Problems, Journal of Global Optimization 7 Programming Based Branch-and-Bound
(1995) 131. Algorithms: Exposition, INFORMS Journal
222. C. A. Floudas: Deterministic Global of Computing 12 (2000) 223.
Optimization: Theory, Methods and 236. J. F. Benders: Partitioning Procedures for
Applications, Kluwer Academic Publishers, Solving Mixed-variables Programming
Dordrecht, The Netherlands, 2000. Problems, Numeri. Math. 4 (1962) 238252.
223. M. Tawarmalani, N. Sahinidis: Convexication 237. G. L. Nemhauser, L. A. Wolsey: Integer and
and Global Optimization in Continuous and Combinatorial Optimization,
Mixed-Integer Nonlinear Programming: Wiley-Interscience, New York 1988.
Theory, Algorithms, Software, and 238. C. Barnhart, E. L. Johnson, G. L. Nemhauser,
Applications, Kluwer Academic Publishers, M. W. P. Savelsbergh, P. H. Vance:
Dordrecht 2002. Branch-and-price: Column Generation for
224. R. Horst, H. Tuy: Global optimization: Solving Huge Integer Programs, Operations
Deterministic approaches. Springer-Verlag, Research 46 (1998) 316329.
Berlin 1993. 239. O. K. Gupta, V. Ravindran: Branch and
225. A. Neumaier, O. Shcherbina, W. Huyer, T. Bound Experiments in Convex Nonlinear
Vinko: A Comparison of Complete Global Integer Programming, Management Science
Optimization Solvers, Math. Programming B 31 (1985) no. 12, 15331546.
103 (2005) 335356. 240. S. Nabar, L. Schrage: Modeling and Solving
226. L. Lovasz A. Schrijver: Cones of Matrices Nonlinear Integer Programming Problems,
and Set-functions and 0-1 Optimization, Presented at Annual AIChE Meeting, Chicago
SIAM J. Opt. 1 (1991) 166190. 1991.
227. H. D. Sherali, W.P. Adams: A Hierarchy of 241. B. Borchers, J.E. Mitchell: An Improved
Relaxations Between the Continuous and Branch and Bound Algorithm for Mixed
Convex Hull Representations for Zero-One Integer Nonlinear Programming, Computers
Programming Problems, SIAM J. Discrete and Operations Research 21 (1994) 359367.
Math. 3 (1990) no. 3, 411430. 242. R. Stubbs, S. Mehrotra: A Branch-and-Cut
228. E. Balas, S. Ceria, G. Cornuejols: A Method for 0-1 Mixed Convex Programming,
Lift-and-Project Cutting Plane Algorithm for Mathematical Programming 86 (1999) no. 3,
Mixed 0-1 Programs, Mathematical 515532.
Programming 58 (1993) 295324. 243. S. Leyffer: Integrating SQP and Branch and
229. A. H. Land, A.G. Doig: An Automatic Bound for Mixed Integer Noninear
Method for Solving Discrete Programming Programming, Computational Optimization
Problems, Econometrica 28 (1960) 497520. and Applications 18 (2001) 295309.
244. A. M. Geoffrion: Generalized Benders
230. E. Balas: An Additive Algorithm for Solving
Decomposition, Journal of Optimization
Linear Programs with Zero-One Variables,
Theory and Applications 10 (1972) no. 4,
Operations Research 13 (1965) 517546.
237260.
231. R. J. Dakin: A Tree Search Algorithm for
245. M. A. Duran, I.E. Grossmann: An
Mixed-Integer Programming Problems,
Outer-Approximation Algorithm for a Class of
Computer Journal 8 (1965) 250255.
Mixed-integer Nonlinear Programs, Math
232. R. E. Gomory: Outline of an Algorithm for
Programming 36 (1986) 307.
Integer Solutions to Linear Programs,
246. X. Yuan, S. Zhang, L. Piboleau, S. Domenech
Bulletin of the American Mathematics Society
: Une Methode doptimisation Nonlineare en
64 (1958) 275278.
Variables Mixtes pour la Conception de
233. H. P. Crowder, E. L. Johnson, M. W. Padberg:
Procedes, RAIRO 22 (1988) 331.
Solving Large-Scale Zero-One Linear
247. R. Fletcher, S. Leyffer: Solving Mixed
Programming Problems, Operations
Integer Nonlinear Programs by Outer
Research 31 (1983) 803834.
Approximation, Math Programming 66
(1974) 327.
Mathematics in Chemical Engineering 147

248. I. Quesada, I.E. Grossmann: An LP/NLP 261. R. Porn, T. Westerlund: A Cutting Plane
Based Branch and Bound Algorithm for Method for Minimizing Pseudo-convex
Convex MINLP Optimization Problems, Functions in the Mixed-integer Case, Comp.
Comp. Chem. Eng. 16 (1992) 937947. Chem. Eng. 24 (2000) 26552665.
249. T. Westerlund, F. Pettersson: A Cutting Plane 262. J. Viswanathan, I. E. Grossmann: A
Method for Solving Convex MINLP Combined Penalty Function and
Problems, Comp. Chem. Eng. 19 (1995) Outer-Approximation Method for MINLP
S131S136. Optimization, Comp. Chem. Eng. 14 (1990)
250. O. E. Flippo, A. H. G. R. Kan: 769.
Decomposition in General Mathematical 263. A. Brooke, D. Kendrick, A. Meeraus, R.
Programming, Mathematical Programming Raman: GAMS A Users Guide,
60 (1993) 361382. www.gams.com, 1998.
251. T. L. Magnanti, R. T. Wong: Acclerated 264. N. V. A. Sahinidis, A. Baron: A General
Benders Decomposition: Algorithm Purpose Global Optimization Software
Enhancement and Model Selection Criteria, Package, Journal of Global Optimization 8
Operations Research 29 (1981) 464484. (1996) no.2, 201205.
252. N. V. Sahinidis, I. E. Grossmann: 265. C. A. Schweiger, C. A. Floudas: Process
Convergence Properties of Generalized Synthesis, Design and Control: A Mixed
Benders Decomposition, Comp. Chem. Eng. Integer Optimal Control Framework,
15 (1991) 481. Proceedings of DYCOPS-5 on Dynamics and
253. J. E. Kelley Jr.: The Cutting-Plane Method Control of Process Systems, Corfu, Greece
for Solving Convex Programs, J. SIAM 8 1998, pp. 189194.
(1960) 703712. 266. R. Raman, I. E. Grossmann: Modelling and
254. S. Leyffer: Deterministic Methods for Computational Techniques for Logic Based
Mixed-Integer Nonlinear Programming, Integer Programming, Comp. Chem. Eng. 18
Ph.D. thesis, Department of Mathematics and (1994) no.7, 563.
Computer Science, University of Dundee, 267. J. N. Hooker, M. A. Osorio: Mixed
Dundee 1993. Logical.Linear Programming, Discrete
255. M. S. Bazaraa, H. D. Sherali, C. M. Shetty: Applied Mathematics 96-97 (1994) 395442.
Nonlinear Programming, John Wiley & Sons, 268. P. V. Hentenryck: Constraint Satisfaction in
Inc., New York 1994. Logic Programming, MIT Press, Cambridge,
256. G. R. Kocis, I. E. Grossmann: Relaxation MA, 1989.
Strategy for the Structural Optimization of 269. J. N. Hooker: Logic-Based Methods for
Process Flowsheets, Ind. Eng. Chem. Res. 26 Optimization: Combining Optimization and
(1987) 1869. Constraint Satisfaction, John Wiley & Sons,
257. I. E. Grossmann: Mixed-Integer Optimization New York 2000.
Techniques for Algorithmic Process 270. E. Balas: Disjunctive Programming, Annals
Synthesis, Advances in Chemical of Discrete Mathematics 5 (1979) 351.
Engineering, vol. 23, Process Synthesis, 271. H. P. Williams: Mathematical Building in
Academic Press, London 1996, pp. 171246. Mathematical Programming, John Wiley,
258. E. M. B. Smith, C. C. Pantelides: A Symbolic Chichester 1985.
Reformulation/Spatial Branch and Bound 272. R. Raman, I. E. Grossmann: Relation
Algorithm for the Global Optimization of between MILP Modelling and Logical
Nonconvex MINLPs, Comp. Chem. Eng. 23 Inference for Chemical Process Synthesis,
(1999) 457478. Comp. Chem. Eng. 15 (1991) no. 2, 73.
259. P. Kesavan P. P. I. Barton: Generalized 273. S. Lee, I. E. Grossmann: New Algorithms for
Branch-and-cut Framework for Mixed-integer Nonlinear Generalized Disjunctive
Nonlinear Optimization Problems, Comp. Programming, Comp. Chem. Eng. 24 (2000)
Chem. Eng. 24 (2000) 13611366. no. 9-10, 21252141.
260. S. Lee I. E. Grossmann: A Global 274. J. Hiriart-Urruty, C. Lemarechal: Convex
Optimization Algorithm for Nonconvex Analysis and Minimization Algorithms,
Generalized Disjunctive Programming and Springer-Verlag, Berlin, New York 1993.
Applications to Process Systems, Comp. 275. E. Balas: Disjunctive Programming and a
Chem. Eng. 25 (2001) 16751697. Hierarchy of Relaxations for Discrete
148 Mathematics in Chemical Engineering

Optimization Problems, SIAM J. Alg. Disc. 289. H. G. Bock, K. J. Plitt: A Multiple Shooting
Meth. 6 (1985) 466486. Algorithm for Direct Solution of Optimal
276. I. E. Grossmann, S. Lee: Generalized Control Problems, 9th IFAC World Congress,
Disjunctive Programming: Nonlinear Convex Budapest 1984.
Hull Relaxation and Algorithms, 290. D. B. Leineweber, H. G. Bock, J. P. Schloder,
Computational Optimization and Applications J. V. Gallitzendorfer, A. Schafer, P. Jansohn:
26 (2003) 83100. A Boundary Value Problem Approach to the
277. S. Lee, I. E. Grossmann: Logic-based Optimization of Chemical Processes Described
Modeling and Solution of Nonlinear by DAE Models, Computers & Chemical
Discrete/Continuous Optimization Problems, Engineering, April 1997. (IWR-Preprint
Annals of Operations Research 139 2005 97-14, Universitat Heidelberg, March 1997.
267288. 291. J. E. Cuthrell, L. T. Biegler: On the
278. ILOG, Gentilly Cedex, France 1999, Optimization of Differential- algebraic Process
www.ilog.com, Systems, AIChE Journal 33 (1987)
279. M. Dincbas, P. Van Hentenryck, H. Simonis, 12571270.
A. Aggoun, T. Graf, F. Berthier: The 292. A. Cervantes, L. T. Biegler: Large-scale DAE
Constraint Logic Programming Language Optimization Using Simultaneous Nonlinear
CHIP, FGCS-88: Proceedings of International Programming Formulations. AIChE Journal
Conference on Fifth Generation Computer 44 (1998) 1038.
Systems, Tokyo, 693702. 293. P. Tanartkit, L. T. Biegler: Stable
280. M. Wallace, S. Novello, J. Schimpf: Decomposition for Dynamic Optimization,
ECLiPSe: a Platform for Constraint Logic Ind. Eng. Chem. Res. 34 (1995) 12531266.
Programming, ICL Systems Journal 12 294. S. Kameswaran, L. T. Biegler: Simultaneous
(1997) no.1, 159200. Dynamic Optimization Strategies: Recent
281. V. Pontryagin, V. Boltyanskii, R. Gamkrelidge, Advances and Challenges, Chemical Process
E. Mishchenko: The Mathematical Theory of Control 7, to appear 2006.
Optimal Processes, Interscience Publishers 295. B. van Bloemen Waanders, R. Bartlett, K.
Inc., New York, NY, 1962. Long, P. Boggs, A. Salinger: Large Scale
282. A. E. Bryson, Y. C. Ho: Applied Optimal Non-Linear Programming for PDE
Control: Optimization, Estimation, and Constrained Optimization, Sandia Technical
Control, Ginn and Company, Waltham, MA, Report SAND2002-3198, October 2002.
1969. 296. G. P. Box, J. S. Hunter, W. G. Hunter:
283. V. Vassiliadis, PhD Thesis, Imperial College, Statistics for Experimenters: Design,
University of London 1993. Innovation, and Discovery, 2nd ed., John
284. W. F. Feehery, P. I. Barton: Dynamic Wiley & Sons, New York 2005.
Optimization with State Variable Path 297. D. C. Baird: Experimentation: An Introduction
Constraints, Comp. Chem. Eng., 22 (1998) to Measurement Theory and Experiment
12411256. Design, 3rd ed., Prentice Hall, Engelwood
285. L. Hasdorff:. Gradient Optimization and Cliffs, NJ, 1995.
Nonlinear Control, Wiley-Interscience, New 298. J. B. Cropley: Heuristic Approach to
York, NY, 1976. Comples Kinetics, ACS Symp. Ser. 65 (1978)
286. R. W. H. Sargent, G. R. Sullivan: 292 302.
299. S. Lipschutz, J. J. Schiller, Jr: Schaums
Development of Feed Changeover Policies
Outline of Theory and Problems of
for Renery Distillation Units, Ind. Eng.
Introduction to Probability and Statistics,
Chem. Process Des. Dev. 18 (1979) 113124.
McGraw-Hill, New York 1988.
287. V. Vassiliadis, R. W. H. Sargent, C. Pantelides:
300. D. S. Moore, G. P. McCabe: Introduction to
Solution of a Class of Multistage Dynamic
the Practice of Statistics, 4th ed., Freeman,
Optimization Problems, I & EC Research 33
New York 2003.
(1994) 2123.
301. D. C. Montgomery, G. C. Runger: Applied
288. K. F. Bloss, L. T. Biegler, W. E. Schiesser:
Statistics and Probability for Engineers, 3 rd
Dynamic Process Optimization through
ed., John Wiley & Sons, New York 2002.
Adjoint Formulations and Constraint 302. D. C. Montgomery, G. C. Runger, N. F.
Aggregation,. Ind. Eng. Chem. Res. 38 (1999) Hubele: Engineering Statistics, 3rd ed., John
421432. Wiley & Sons, New York 2004.
Mathematics in Chemical Engineering 149

303. G. E. P. Box, W. G. Hunter, J. S. Hunter: Process Design, Prentice-Hall Englewood


Statistics for Experimenters, John Wiley & Cliffs, NJ 1997.
Sons, New York 1978. 307. I. E. Grossmann (ed.), Global Optimization in
304. B. W. Lindgren: Statistical Theory, Engineering Design, Kluwer, Dordrecht
Macmillan, New York 1962. 1996.
305. O. A. Hougen, K. M. Watson, R. A. Ragatz: 308. J. Kallrath, Mixed Integer Optimization in the
Chemical Process Principles, 2nd ed., part II, Chemical Process Industry: Experience,
Thermodynamics, J. Wiley & Sons, New Potential and Future, Trans. I .Chem E., 78
York 1959. (2000) Part A, 809822.
306. L. T. Biegler, I. E. Grossmann, A. W. 309. J. N. Hooker: Logic-Based Methods for
Westerberg: Systematic Methods for Chemical Optimization, John Wiley & Sons, New York
1999.

You might also like