
Chemical Engineering Science, Vol. 44, No. 7, pp. 1495-1501, 1989.
Printed in Great Britain.
0009-2509/89 $3.00+0.00
© 1989 Pergamon Press plc
AN IMPROVED MEMORY METHOD FOR THE SOLUTION
OF A NONLINEAR EQUATION
MORDECHAI SHACHAM
Department of Chemical Engineering, Ben Gurion University of the Negev, Beer Sheva 84105, Israel
(First received 14 October 1987; accepted in revised form 2 November 1988)
Abstract - A new method, the improved memory method, for the solution of a nonlinear equation is
introduced. This method is compared with several widely used solution methods by solving 15 test problems
from different chemical engineering application areas. It is shown that the proposed method requires the
smallest number of function (or derivative) evaluations to reach the solution. A new algorithm for stopping
the iterations is introduced. This algorithm stops the iterations when the true error is reduced for the first
time below the desired error tolerance.
1. INTRODUCTION

Many problems encountered in chemical engineering require the solution of a nonlinear equation. Typical examples are dew point, bubble point and isothermal flash calculations for multicomponent mixtures, the calculation of the conversion in an isothermal reactor, and volume, compressibility factor or fugacity coefficient calculation using a complicated equation of state. Often the solution of a nonlinear equation is the innermost loop in an involved calculation, like the calculation of the fugacity coefficient in a rigorous distillation computation. In such cases the innermost loop has to be executed thousands of times for a single iteration of the outer loop. That means that any improvement in the method for solving a single nonlinear equation may result in a considerable saving in the total computer time required to solve a particular problem. This is the reason for the continuing interest in developing new iterative methods for solving a single nonlinear equation (see, for example, references [1] and [2]), in spite of the fact that many well-established methods do exist for this purpose [3].

In 1972, Shacham and Kehat [4] proposed the use of the memory method for the solution of a nonlinear equation. This method uses inverse Lagrange interpolation to solve the nonlinear equation, and it has been shown [4] that it requires fewer function and derivative evaluations to reach the solution than other methods. The method has since been rediscovered by Hopgood and McKee [5], who named it the improved secant method.

This paper presents an improved memory method (IMM) in which the inverse Lagrange interpolation is replaced by an inverse interpolation based on continued fractions. An interpolation based on continued fractions was proposed in a recent paper by Pang and Knopf [6].

A new, effective iteration stopping criterion is also introduced. Fifteen sample problems taken from different chemical engineering applications are used to demonstrate the advantages of the proposed techniques.

2. BASIC CONCEPTS

The solution of a nonlinear equation can be stated as finding a value of x (denoted x*) which satisfies the equation

f(x*) = 0     (1)

where f(x) is a nonlinear function and x* is the solution (or root) of the equation.

Equation (1) can alternatively be formulated as

x* = F(x*)     (2)

where F(x) is a different function.

An iterative solution begins by guessing an initial value for x (denoted x_0) and generating a sequence x_0, x_1, x_2, ..., x_n such that

lim (n → ∞) x_n = x*     (3)

where n is the iteration number. The exact error in iteration n is defined as

ε_n = |x_n − x*|.     (4)

It will be assumed that the functions f(x) and F(x) and the solution x* satisfy the following restrictions:

(1) x* is located inside the interval I = [a, b], such that f(x) and F(x) are continuous in I.
(2) There is only a single root in I and the root is real (not complex).

The most commonly used solution methods rely on the above assumptions. Functions that do not satisfy them require special solution methods.

Table 1 summarizes some of the most widely used basic solution methods for a nonlinear equation of the form of eq. (1).

In general it can be expected that the method which has the highest order of convergence will converge to the solution in the smallest number of iterations (there can be exceptions to this rule, mainly because of inappropriate starting guesses). The smallest number of iterations does not usually mean the shortest computer time, because that also depends on the computational effort involved in each iteration.
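For concreteness, the trade-off can be seen in a minimal secant iteration (method 4 in Table 1), which spends only one new function evaluation per step while Newton-Raphson spends one function and one derivative evaluation. The routine below is an illustrative sketch; the name, tolerances and evaluation counter are ours, not from the paper:

```python
def secant(f, x0, x1, eps=1e-10, max_iter=50):
    """Secant method (method 4 in Table 1): order 1.618, one new
    function evaluation per iteration; returns (root, evaluations)."""
    y0, y1 = f(x0), f(x1)
    evals = 2
    for _ in range(max_iter):
        x2 = x1 - y1 * (x1 - x0) / (y1 - y0)  # secant step from the last two points
        x0, y0, x1 = x1, y1, x2
        y1 = f(x1)
        evals += 1
        if abs(y1) < eps:
            break
    return x1, evals
```

A higher-order method may take fewer iterations, yet lose on total evaluations when each of its iterations costs two or three of them.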
Table 1. Basic solution methods for a nonlinear equation of the form f(x) = 0

No.  Method                            References  Order of convergence  Information used to calculate x_{n+1}
1    Bisection                         [3]         Linear                f(x_n), f(x_{n-1})
2    Newton-Raphson                    [3]         Quadratic             f(x_n), f'(x_n)
3    Richmond                          [3]         3                     f(x_n), f'(x_n), f''(x_n)
4    Secant                            [7]         1.618                 f(x_n), f(x_{n-1})
5    Inverse quadratic interpolation   [7]         1.839                 f(x_n), f(x_{n-1}), f(x_{n-2})
6    Inverse polynomial interpolation  [4], [5]    2                     f(x_n), f(x_{n-1}), ..., f(x_0)
Assuming that f(x) is complicated, so that the
function computation time dominates, and the effort
needed to calculate the derivatives is similar to that
needed for the function calculation, the number of
function and derivative evaluations is a much better
measure of the effectiveness of the solution method.
On such a basis, Shacham and Kehat [4] found the
memory method, which is one of the inverse poly-
nomial interpolation type methods, superior to the
others. The memory method has some limitations.
Those and the way to improve its performance will be
discussed in Section 3.
There are methods in which two or more of the basic methods are combined to achieve global convergence. These methods require the user to provide two initial guesses, x_0 and x_1, such that f(x_0)f(x_1) < 0. During the iterations two points that bracket the solution are kept. Brent's method [8] uses a combination of bisection with inverse quadratic interpolation, the NEWSEC method [9] uses a combination of the Newton-Raphson (NR) and secant methods, and Cox's method [10] uses bisection with a special quadratic interpolating function. The disadvantage of those methods is the requirement for two starting points which bracket the solution: for some functions this requirement puts too much burden on the user.
For nonlinear equations of the form of eq. (2), successive substitution (direct substitution) and Wegstein's method [11] are most often used for solution. The successive-substitution method converges only if |F'(x*)| < 1, and it has linear convergence. Wegstein's method, which is actually a rediscovery of Aitken's δ² method, usually converges when successive substitution does not, and it converges faster when they both converge.
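Wegstein's method itself is not restated in this paper; the sketch below uses the standard textbook form, x_{n+1} = q x_n + (1 − q)F(x_n) with q = s/(s − 1) and s the secant slope of F, which is algebraically equivalent to a secant step on g(x) = F(x) − x (the function name and defaults are ours):

```python
def wegstein(F, x0, eps=1e-10, max_iter=50):
    """Wegstein acceleration for x = F(x): a secant step applied to
    g(x) = F(x) - x, written in the usual q-weighted form."""
    x1 = F(x0)
    f0 = x1                           # f0 = F(x0)
    for _ in range(max_iter):
        f1 = F(x1)
        if abs(f1 - x1) < eps:
            return x1
        s = (f1 - f0) / (x1 - x0)     # secant slope of F
        q = s / (s - 1.0)             # q = 0 recovers plain substitution
        x0, f0 = x1, f1
        x1 = q * x0 + (1.0 - q) * f1  # x_{n+1} = q x_n + (1-q) F(x_n)
    return x1
```

Because the q-weighting cancels the linear part of the fixed-point error, the iteration usually converges even when |F'(x*)| > 1 and plain substitution diverges.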
Iterative methods will (at least in theory) find the exact solution only after an infinite number of iterations [see eq. (3)]. For practical reasons the iterations have to be stopped when the error becomes smaller than a desired error tolerance. The exact error [eq. (4)] cannot be used as a criterion to stop the iterations, since it is usually not known. Stopping criteria that are used in practice are based on an estimate of the exact error. The three generally used criteria [9] are:

(a) |f(x_n)| < ε_d
(b) |x_n − x_{n−1}| < ε_d     (5)
(c) |x_n − x_{n−1}| < ε_d |x_n|

where ε_d is the desired error tolerance.

Note that criteria (a) and (b) compare absolute errors and criterion (c) compares relative errors. [Equation (5c) is often written as |x_n − x_{n−1}|/|x_n| < ε_d. This formulation, however, can cause division by zero if the solution happens to be zero.]

Note also that, for nonlinear equations of the form x = F(x), only criteria (5b) and (5c) can be used.

Shacham and Kehat [9] have shown that none of those criteria is satisfactory as a stopping criterion, because under certain circumstances they may stop the iterations far from the solution, indicate a solution where none exists, or may not stop the iterations even when the solution is reached. These limitations can be eliminated by using methods which bracket the solution, such as Brent's, Cox's or the NEWSEC method. These methods have other disadvantages, which were mentioned earlier.

In Section 4 a new stopping criterion, which overcomes the above limitations and can be used with any solution method, will be introduced.
3. THE IMM
Limitations of the memory method as implemented by Shacham and Kehat [4] stem from the use of the Lagrange interpolation polynomial. The evaluation of this polynomial requires a number of divisions and multiplications proportional to n² [(n + 1)n division and (n + 1)(n + 1) multiplication operations]. For high values of n (the iteration number) the computational effort associated with the evaluation of the polynomial becomes considerable. A large number of arithmetic operations also means an increase in round-off error propagation, which may cause a deterioration in the accuracy of the interpolation for large values of n.

Recently Pang and Knopf [6] proposed the use of a continued-fraction-based interpolation instead of polynomial interpolation, because the use of continued fractions requires much fewer arithmetic operations.
Replacing the Lagrange interpolation polynomial
in the memory method with continued fractions leads
to the following algorithm for the IMM:
Algorithm 1 (IMM)

(1) Select two initial guesses, x_0 and x_1. Calculate y_0 = f(x_0), y_1 = f(x_1).
(2) Calculate x_2 using the following equations:

    a_0 = x_0
    a_1 = (y_1 − y_0)/(x_1 − x_0)     (6)
    x_2 = a_0 − y_0/a_1.

(3) Set n = 2 and calculate y_2 = f(x_2).
(4) Calculate x_{n+1} using the following recursive equations:

    b_0 = x_n     (7)
    b_i = (y_n − y_{i−1})/(b_{i−1} − a_{i−1})   (i = 1, 2, 3, ..., n − 1)     (8)
    b_n = a_n = (y_n − y_{n−1})/(b_{n−1} − a_{n−1})     (9)
    e_{i−1} = a_{i−1} − y_{i−1}/e_i   (i = n, n − 1, ..., 1), with e_n = a_n     (10)
    x_{n+1} = e_0.     (11)

(5) Calculate y_{n+1} = f(x_{n+1}).
(6) Check convergence: if converged, exit; if not, set n = n + 1 and return to (4).
Counting the number of operations in eqs (8)-(10) shows that this method requires only 2n divisions per iteration and no multiplications. This is clearly superior to the Lagrange polynomial interpolation.

The method, as specified in the above algorithm, can be used only for nonlinear equations of the type of eq. (1). To use it for nonlinear equations of the type of eq. (2), only the definition of y has to be changed, to y = F(x) − x.
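Algorithm 1 transcribes almost line by line into code. The following Python sketch implements eqs (6)-(11); the function name, defaults and the simplified residual/step stopping test are ours (the paper pairs the method with Algorithm 2 instead):

```python
def imm(f, x0, x1, eps=1e-10, max_iter=50):
    """Improved memory method (Algorithm 1): inverse interpolation of the
    accumulated points by a continued fraction, evaluated at y = 0."""
    xs = [x0, x1]
    ys = [f(x0), f(x1)]
    a = [x0, (ys[1] - ys[0]) / (xs[1] - xs[0])]      # a_0, a_1
    x = a[0] - ys[0] / a[1]                          # eq. (6): x_2 (a secant step)
    for n in range(2, max_iter):
        y = f(x)
        # simple convergence test in place of Algorithm 2
        if abs(y) < eps or abs(x - xs[-1]) < eps:
            return x
        xs.append(x)
        ys.append(y)
        b = xs[n]                                    # eq. (7)
        for i in range(1, n + 1):                    # eqs (8)-(9): inverted differences
            b = (ys[n] - ys[i - 1]) / (b - a[i - 1])
        a.append(b)                                  # a_n
        e = a[n]                                     # eqs (10)-(11): evaluate the
        for i in range(n, 0, -1):                    # continued fraction at y = 0
            e = a[i - 1] - ys[i - 1] / e
        x = e
    return x
```

For equations of the form x = F(x) the same routine can be used by passing y = F(x) − x, as noted above.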
4. ITERATION STOPPING CRITERIA

The proposed stopping criteria are based on the principle proposed by Shacham and Kehat [9]. The principle is that two successive values of x, say x_{n−1} and x_n, must be placed so that the distance between them is smaller than ε_d and they must be on alternate sides of the solution. Thus criterion 5(b) or 5(c) is satisfied, as well as relation (12):

f(x_n)f(x_{n−1}) < 0.     (12)

The following algorithm is proposed to satisfy the above criteria with a minimum number of additional function evaluations:
Algorithm 2 (stopping criterion)

(1) Set conv = false (conv is the convergence indicator).
(2) Check if criterion 5(a) or criterion 5(b) is satisfied. If not, exit.
(3) Check if y_n y_{n−1} ≤ 0. If not, proceed to step (4).
(3a) If |x_n − x_{n−1}| < ε_d, set conv = true and exit.
(3b) Set x_{n+1} = x_n − sign(x_n − x_{n−1})ε_d and calculate y_{n+1} = f(x_{n+1}). If y_{n+1} y_n ≤ 0, set conv = true and exit.
(4) Set x_{n+1} = x_n + sign(x_n − x_{n−1})ε_d and calculate y_{n+1} = f(x_{n+1}). If y_{n+1} y_n ≤ 0, set conv = true.
(5) Exit.
This algorithm ensures simultaneous satisfaction of criteria 5(b) and (12), usually with one or no additional function value calculations. The relative error criterion 5(c) can be used instead of the absolute error criterion. In order to use Algorithm 2 with the relative error criterion, ε_d has to be replaced everywhere with the expression ε_d|x_n|.

Algorithm 2 can also be used for nonlinear equations of the form x = F(x). In such cases the definition y = F(x) − x should be used.
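A minimal transcription of Algorithm 2 might look as follows; the function name, argument order and the returned evaluation count are our own conventions, and it assumes y_n and y_{n−1} have already been computed by the solver:

```python
import math

def converged(f, xn, xn1, yn, yn1, eps):
    """Algorithm 2: accept convergence only when successive iterates are
    within eps of each other AND bracket the root [relation (12)].
    Returns (conv, extra_evals)."""
    # step (2): one of the cheap criteria 5(a)/5(b) must already hold
    if not (abs(yn) < eps or abs(xn - xn1) < eps):
        return False, 0
    step = math.copysign(eps, xn - xn1)
    if yn * yn1 <= 0.0:                   # step (3): the iterates bracket the root
        if abs(xn - xn1) < eps:           # step (3a)
            return True, 0
        y = f(xn - step)                  # step (3b): probe eps back toward x_{n-1}
        return (y * yn <= 0.0), 1
    y = f(xn + step)                      # step (4): probe eps beyond x_n
    return (y * yn <= 0.0), 1
```

Replacing eps by eps·|xn| throughout gives the relative-error variant described in the text.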
5. NUMERICAL RESULTS
Fifteen test problems were used for comparing the
performance of the different methods. Appendix A
includes complete details of the test problems: the
reference where the problem was taken from, the type
of application in which the problem was encountered,
the function and the parameters (if any), initial
estimate(s), and solution(s).
The problems are representative of the following application areas: chemical equilibrium calculations (problems 1 and 11), isothermal flash (problem 2), energy (problem 3) or material balance (problem 7) in a chemical reactor, azeotropic point calculation (problem 4), adiabatic flame temperature (problem 5), calculation of gas volume from the Beattie-Bridgeman (problem 6), Redlich-Kwong (problem 13) and virial (problem 12) equations of state, liquid flow rate in a pipe (problem 8), pressure drop in a converging-diverging nozzle (problem 9), and batch distillation at infinite reflux (problem 10).
The problems in the test set contain a variety of nonlinear functions: polynomials (problems 5, 6, 8, 12 and 15), ratios of polynomials (problems 1, 4, 11 and 13), fractional powers of the unknown (problems 8, 9 and 11), exponential (problem 3) and logarithmic functions (problems 7, 10 and 14). The problems which are the most difficult to solve are the ones that contain logarithmic functions. They are characterized by a very steep change in the function value and regions of discontinuity close to the solution.

In problem 7, for example, the root is x* = 0.757396, the function value approaches −∞ as x → 0.8, and the function is undefined in the region 0.8 < x < 1.
Solution of problems 3 and 5 may also present difficulties, especially if criterion (5a) is used as the stopping criterion. The function evaluation involves subtraction of terms of large absolute value (of the order of 10¹¹ in problem 3), so the function value remains high, because of round-off errors, even very close to the solution.

The first 10 test functions are of the form f(x) = 0 and functions 11-15 are of the form x = F(x).
Table 2. Performance of five solution methods for test functions of the form f(x) = 0†

Problem      NR           Secant       Richmond     Memory       IMM
No.        a   b   c    a   b   c    a   b   c    a   b   c    a   b   c
1         11   1   ?   10   1   8    -   -   -   10   1   8    7   1   5
2          7   1   ?    7   1   5   10   1   3    6   1   4    6   1   4
3          9   1   4    7   0   5   13   1   4    6   1   4    6   1   4
4          7   1   3    7   1   5    ?   ?   ?    6   1   4    5   1   3
5          7   1   3    7   1   5    7   1   2    6   0   4    6   0   4
6          9   0   4    9   1   7    7   1   2    7   1   5    6   1   4
7          9   1   4   10   1   8    7   1   2    8   0   6    6   1   4
8.1        9   1   4    7   1   5   10   1   3    7   0   5    6   1   4
8.2        7   1   3    8   1   6    7   1   2    7   1   5    6   1   4
9          9   1   4    8   ?   6   10   1   3    7   1   5    6   1   4
10.1      11   1   5   10   ?   8   10   1   3   10   1   8    9   2   7
10.2       9   2   4   10   2   8    7   1   2    7   1   5    6   1   4

† a = number of function (or function and derivative) evaluations, b = additional function evaluations for the convergence test [eq. (12)], and c = number of iterations; "?" marks entries that are illegible in the source.
For the first 10 functions two initial guesses are given. For methods which require only one initial guess, the average of the two initial guesses was used as the starting point. The selection of the initial guesses was based mostly on physical considerations (like limiting values of 0 and 1 for conversion, the ideal gas volume for real gas volume calculations, etc.). In the few cases where the function has discontinuities (as in problem 7) or multiple solutions (as in problem 11) in the region of interest, the initial guesses were selected to ensure smooth convergence of all the methods.
Problem 8 was solved with two different parameter
sets (as in the original reference) and for problem 10
two different solutions were found using two sets of
starting guesses.
The first 10 problems have been solved using the following methods: (1) NR, (2) Cox's, (3) secant, (4) Richmond's, (5) memory, (6) the IMM, (7) Brent's, and (8) Muller's. For the first six methods an IBM PC computer and PASCAL programs with approximately 11 digits of precision were used. For methods 7 and 8 their implementation in the IMSL subroutine library [12] was used on a CDC Cyber 180-840 computer with 14-digit precision.
Table 2 shows the performance of the different methods for the first 10 test problems using the first six solution methods. The results for Cox's method are not reported because they were very similar to the results for the NR method.

Algorithm 2 with relative error tolerance ε_d = 10⁻⁶, which guarantees a solution with at least six significant digits of precision, was used for stopping the iterations.
The first column in Table 2 for each method (marked a) shows the number of function evaluations (number of function and derivative evaluations for the NR and Richmond's methods) that were required to reach the solution with the desired accuracy. In the second column the number of additional function evaluations which were required in order to satisfy the criterion given in eq. (12) is shown. In the third column the numbers of iterations are listed.
Table 3. Relative number of function and derivative evaluations (the basis is the IMM at 100%)

Method      Average (%)   Maximum (%)   Minimum (%)
NR              140           157           117
Secant          133           167           117
Richmond        140           216           117
Memory          116           143           100
It can be seen from Table 2 that the IMM required in all cases fewer function evaluations than the NR, Richmond and secant methods, and it required fewer than or the same number of function evaluations as the memory method. Its performance was very consistent in that, except in two cases (problems 1 and 10.1), the solution was found within four or fewer iterations, or six or fewer function evaluations.

The advantage of the IMM becomes even more apparent on looking at Table 3. The number of function evaluations used in the IMM was taken as the basis (100%) for the preparation of Table 3. Table 3 summarizes the average, maximal and minimal deviations of the number of function and derivative calculations in the other methods compared to the IMM. It can be seen that, on average, the memory method requires 16% more function evaluations, the secant method 33% more, and the NR and Richmond's methods 40% more than the IMM. The maximal values are higher or even much higher than those values.
As for the stopping criterion, the numbers in the columns marked b in Table 2 show that usually only one or no function evaluations were required to satisfy criterion (12) after either criterion 5(a) or 5(c) was satisfied. The meaning of those results is that, for the functions and methods tested, the combination of criteria 5(a) and 5(c) gives a fairly accurate measure of the error during the iterations. The use of Algorithm 2 makes it certain that the error is indeed smaller than the desired tolerance. Minimal additional function evaluations are needed for this safeguard.
The only exception was problem 10. In this particular problem, in several cases, the function value became smaller than the desired error tolerance [eq. (5a)] while the relative error in the independent variable was still higher than the tolerance. In those cases, one additional iteration was needed in order to achieve the desired accuracy.
Comparison of the IMM with Brent's and Muller's methods had to be done differently, because of the different initial guesses and stopping criteria used in the IMSL library. The mean convergence rate (R) proposed by Broyden [13] has been adopted for comparing the performance of these three methods. R is defined as

R = (1/n) ln (N_0/N_n)     (13)

where n is the total number of function evaluations, N_0 is the norm of the function values at the starting points and N_n is the norm at termination.
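For a single equation the norms in eq. (13) reduce to absolute values, and R can be evaluated directly; the helper below is our own sketch of that computation:

```python
import math

def mean_convergence_rate(n0, nn, n_evals):
    """Broyden's mean convergence rate, eq. (13): R = (1/n) ln(N0/Nn),
    with n the total number of function evaluations; for a single
    equation the norms N0, Nn are just |f| at the start and at the end."""
    return math.log(n0 / nn) / n_evals
```

A larger R means more residual reduction per function evaluation, which makes it a fair yardstick across methods with different stopping rules.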
The IMM performed better than Wegstein's method. On average it required 46% fewer function evaluations than Wegstein's method (not counting problem 11.1, in which Wegstein's method performed exceptionally poorly because of convergence to an infeasible solution).
Table 4 shows R for the three methods.

Table 4. Mean convergence rate (R) for three solution methods

Problem No.   Brent's   Muller's   IMM
1              1.84      1.80      2.44
2              2.08      2.15      2.7
3              1.89      2.17      3.35
4              2.0       3.29      2.94
5              1.72      3.61      7.19
6              2.01      3.19      3.86
7              2.82       -        3.33
8.1            1.26      2.58      3.44
8.2            2.92      3.05      3.12
9              2.16      2.14      3.10
10.1           1.69      1.88      1.35
10.2           1.38      2.01      2.95

The convergence rate is the highest for the IMM in all except two out of the twelve cases. In problem 4 Muller's method has the higher convergence rate; in problem 10.1 both Brent's and Muller's methods have a higher convergence rate than the IMM. Muller's method could not solve problem 7. These results clearly indicate that, in terms of the mean convergence rate, the IMM is superior to Brent's and Muller's methods.

As for the stopping criteria, the rapidly converging Wegstein's method and the IMM required only one or no function evaluations to satisfy criterion (12). In the slowly converging direct-substitution method, in two cases six additional iterations were required to satisfy criterion (12) after criterion (5a) was satisfied.
Table 5 shows the performance of the successive-substitution and Wegstein's methods and the IMM for problems 11-15. Problems 11 and 15 were solved with two different formulations. For the second formulation |F'(x*)| < 1 (successive substitution converging); for the first formulation |F'(x*)| > 1. The results in Table 5 show that successive substitution is clearly inferior to both Wegstein's method and the IMM. It did not converge in three cases, and when it did converge it required between two and six times as many function evaluations as the other methods.
6. CONCLUSIONS
From the performance of the different solution
methods with the 15 test problems, the following
conclusions can be drawn:
(1) The IMM requires fewer function evaluations to reach the solution than any of the competing methods. On average, one can expect 33% savings with respect to the secant method, 46% savings with respect to Wegstein's method, and 16% with respect to the memory method.
(2) Assuming that evaluation of the derivatives requires about the same amount of effort as evaluation of the function, the savings that can be achieved using the IMM are on average 40% relative to both the NR and Richmond's methods.
Table 5. Performance of the solution methods for test functions of the form x = F(x)†

Problem    Successive substitution     Wegstein          IMM
No.          a    b    c             a    b    c      a    b    c      ε_d
11.1         ‡                      23§   0   11      8    0    7     10⁻⁶
11.2        34    0   33             9    1    4      6    1    5     10⁻⁶
12          24    6   23            11    0    5      7    0    6     10⁻⁵
13          43    6   42             9    0    4      7    0    6     10⁻⁴
14           ‡                       7    ?    3      5    1    4      ?
15.1         ‡                       9    1    4      6    0    ?     10⁻⁶
15.2        14    0    4             9    0    4      6    0    ?     10⁻⁶

† a = number of function evaluations, b = additional number of function evaluations for the convergence test, and c = number of iterations; "?" marks entries that are illegible in the source.
‡ Diverging.
§ Converged to an infeasible solution.
(3) The IMM is superior to Brent's and Muller's methods when compared on the basis of the mean convergence rate. That means that it will usually achieve the solution with the same accuracy with fewer function evaluations, or a higher accuracy with the same number of function evaluations.

(4) Theoretical analysis has shown that, in addition to function evaluations, the IMM requires arithmetic operations of the order of n to calculate x_{n+1}, instead of n² as in the case of the memory method (as implemented by Shacham and Kehat [4]), so that the former is clearly superior.
(5) The algorithm for stopping the iterations per-
formed as expected for all the test problems, stopping
the iterations when the true error was reduced for the
first time below the desired error tolerance. This was
achieved with no or just a minimal increase in the total
computational effort.
The IMM, like most of the derivative-free algorithms, is recommended for use with complicated functions which have to be differentiated numerically. It should be emphasized that most of the functions in the test problem set do not fit this description. For the test problem set, relatively simple problems or simplified versions of complicated problems were selected so that their complete description could be included.
NOTATION

a       variable in eqs (6)-(10)
b       variable in eqs (7)-(9)
f(x)    nonlinear function
F(x)    nonlinear function
N       norm of function values
P       polynomial
R       mean convergence rate
x       independent variable
x*      root of a nonlinear equation
y       function of x, y = f(x) or y = F(x) − x

Greek letters
ε       error

Subscripts
0       initial value
d       desired value
n       iteration number
REFERENCES

[1] Paterson, W. R., 1986, A new method for solving a class of nonlinear equations. Chem. Engng Sci. 41, 1935.
[2] Paterson, W. R., 1986, On preferring iteration in a transformed variable to the method of successive substitution. Chem. Engng Sci. 41, 601.
[3] Lapidus, L., 1962, Digital Computation for Chemical Engineers. McGraw-Hill, New York.
[4] Shacham, M. and Kehat, E., 1972, An iteration method with memory for the solution of a non-linear equation. Chem. Engng Sci. 27, 2099.
[5] Hopgood, D. J. and McKee, S., 1974, Improved root-finding methods derived from inverse interpolation. J. Inst. Math. Applic. 14, 217.
[6] Pang, T. and Knopf, F. C., 1986, Numerical analysis of single variable problems with the use of continued fractions. Comput. chem. Engng 10, 87.
[7] Forsythe, G. E., Malcolm, M. A. and Moler, C. B., 1977, Computer Methods for Mathematical Computations. Prentice-Hall, Englewood Cliffs, NJ.
[8] Brent, R. P., 1971, An algorithm with guaranteed convergence for finding a zero of a function. Comput. J. 14, 422.
[9] Shacham, M. and Kehat, E., 1973, Converging interval methods for the iterative solution of nonlinear equations. Chem. Engng Sci. 28, 2187.
[10] Cox, M. G., 1970, A bracketing technique for computing a zero of a function. Comput. J. 13, 101.
[11] Wegstein, J. H., 1958, Accelerating convergence of iterative processes. Commun. ACM 1, 9.
[12] IMSL Library Reference Manual, 9th edition. IMSL, Houston, TX.
[13] Broyden, C. G., 1965, A class of methods for solving nonlinear simultaneous equations. Math. Comput. 19, 577.
[14] Carnahan, B., Luther, H. A. and Wilkes, J. O., 1969, Applied Numerical Methods. Wiley, New York.
[15] Shacham, M., 1986, Numerical solution of constrained nonlinear algebraic equations. Int. J. numer. Meth. Engng 23, 1455.
[16] Eberhart, J. G., 1986, Solving equations by successive substitution: the problems of divergence and slow convergence. J. chem. Educ. 63, 576.
[17] Chang, H. and Over, I. E., 1981, Selected Numerical Methods and Computer Programs for Chemical Engineers. Sterling Swift, Manchaca, TX.
APPENDIX A: TEST FUNCTIONS
The following general format is used to define the test
functions:
Description, reference
(a) Function definitions
(b) Parameters
(c) Starting point(s)
(d) Solution(s).
(1) Chemical equilibrium calculation, Shacham and Kehat [4]

(a) f(z) = 8(4 − z)²z² / [(6 − 3z)²(2 − z)] − 0.186
(c) z_0 = 0.1, z_1 = 0.9
(d) z* = 0.277759.
(2) Isothermal flash, Shacham and Kehat [4]

(a) f(α) = Σ_i z_i(k_i − 1)/[1 + α(k_i − 1)] (the standard isothermal flash function; the z_i and k_i are specified)
(c) α_0 = 0, α_1 = 1
(d) α* = 0.845954.
(3) Kinetic problem, Shacham and Kehat [4]

(a) f(T) = (1/T²) exp(21,000/T) − 1.11 × 10¹¹
(c) T_0 = 550, T_1 = 560
(d) T* = 551.773.
(4) Azeotropic point calculation, Shacham and Kehat [4]

(a) f(x) = AB[B(1 − x)² − Ax²] / [x(A − B) + B]² + 0.14845
(b) A = 0.38969, B = 0.55954
(c) x_0 = 0, x_1 = 1
(d) x* = 0.691473.
(5) Adiabatic flame temperature

(a) f(T) = ΔH + α(T − 298) + (β/2)(T² − 298²) + (γ/3)(T³ − 298³)
(b) ΔH = −57,798.0, α = 7.256, β = 2.298 × 10⁻³, γ = 0.283 × 10⁻⁶
(c) T_0 = 3000, T_1 = 5000
(d) T* = 4305.31.
(6) Gas volume from Beattie-Bridgeman equation, Carnahan et al. [14], p. 173

(a) f(V) = RT/V + β/V² + γ/V³ + δ/V⁴ − P
    β = RTB_0 − A_0 − Rc/T²
    γ = −RTB_0 b + A_0 a − RcB_0/T²
    δ = RcB_0 b/T²
(b) A_0 = 2.2769, B_0 = 0.05587, a = 0.01855, b = −0.01587, c = 12.83 × 10⁴, R = 0.08205, T = 273.15, P = 100
(c) V_0 = 0.1, V_1 = 0.3
(d) V* = 0.1749672.
(7) Conversion in a chemical reactor, Shacham [15]

(a) f(x) = x/(1 − x) − 5 ln[0.4(1 − x)/(0.4 − 0.5x)] + 4.45977
(c) x_0 = 0.7, x_1 = 0.79
(d) x* = 0.757396.
(8) Flow in a smooth pipe, Paterson [1, 2]

(a) f(v) = av² + bv^(7/4) − c
(b) Set 1: a = 240, b = 40, c = 200
    Set 2: a = 46.64, b = 67.97, c = 40.0
(c) Set 1: v_0 = 0.1, v_1 = 1.0
    Set 2: v_0 = 0.2, v_1 = 0.9
(d) Set 1: v* = 0.842524
    Set 2: v* = 0.56565.
(9) Pressure drop in a converging-diverging nozzle, Carnahan et al. [14], p. 204

(b) A_1 = 0.1, A_2 = 0.12, γ = 1.41, P_0 = 100
(c) P_0 = 20, P_1 = 50
(d) P* = 25.92083.
(10) Batch distillation at infinite reflux, Paterson [1]

(a) f(x) = (1/63) ln x + (64/63) ln[1/(1 − x)] + ln(0.95 − x) − ln 0.9
(c) Set 1: x_0 = 0.1, x_1 = 0.6
    Set 2: x_0 = 0.01, x_1 = 0.1
(d) Set 1: x* = 0.5
    Set 2: x* = 0.03621008.
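As a quick consistency check, the two quoted roots of test function (10) can be substituted back into the definition above (a verification sketch; the function name is ours):

```python
import math

def f10(x):
    """Test function (10): batch distillation at infinite reflux."""
    return (math.log(x) / 63.0
            + (64.0 / 63.0) * math.log(1.0 / (1.0 - x))
            + math.log(0.95 - x) - math.log(0.9))

# both quoted roots reduce f10 to roundoff level
for root in (0.5, 0.03621008):
    assert abs(f10(root)) < 1e-5
```

The sharp behaviour noted in the text is visible here too: the logarithms restrict the domain to 0 < x < 0.95.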
(11) Chemical equilibrium, Eberhart [16]

(a) First formulation:
    x = a − (c + 2x)²(a + b + c − 2x)² / [k_p P²(b − 3x)³]
    Second formulation:
    x = (1/3){b − [(c + 2x)²(a + b + c − 2x)² / (k_p P²(a − x))]^(1/3)}
(b) a = 0.5, b = 0.8, c = 0.3, k_p = 604500.0, P = 0.00243
(c) x_0 = 0.0
(d) x* = 0.058655
    x* = 0.60032.
(12) Volume from virial equation of state, Chang and Over [17]

(a) v = (RT/P)(1.0 + b/v + c/v²)
(b) b = −159.0, c = 9000.0, P = 75.0, T = 430.85, R = 82.05
(c) v_0 = 471.13
(d) v* = 213.00.
(13) Volume from Redlich-Kwong equation of state

(a) v = RT/P + b − a(v − b) / [P √T v(v + b)]
    a = 0.42748 R² T_c^2.5 / P_c
    b = 0.08664 R T_c / P_c
(b) P_c = 45.8, T_c = 191.0, P = 100.0, T = 210.4, R = 0.08206
(c) v_0 = 0.1355
(d) v* = 0.075709.
(14) Heat duty of a shell-and-tube heat exchanger

(a) q = UA (ΔT_2 − ΔT_1) / ln(ΔT_2/ΔT_1)
    ΔT_1 = T_1 − t_1 − q/(MC_p)
    ΔT_2 = T_1 − t_1 − q/(mc_p)
(b) M = 43,800.0, C_p = 0.605, m = 149,000.0, c_p = 0.49, U = 55.0, A = 662.0, T_1 = 390.0, t_1 = 100.0
(c) q_0 = 5 × 10⁶
(d) q* = 5.2810 × 10⁶.
(15) Sinkage depth of a sphere in water

(a) First formulation: h = h[1 + h(3 − h)] − 2.4
    Second formulation: h = 2.4/[h(3 − h)]
(c) h_0 = 1.0
(d) h* = 1.1341.
