Emmanuel Nwaeze
Department of Mathematics,
University of Ilorin, Ilorin, Nigeria.
e-mail: nwaezeema@yahoo.com

Abstract—In regression analysis, scientists face the problem of constructing a nonlinear mathematical function that has the best fit to a series of experimental data points. Thus, in this paper, we present a Modified Nonlinear Conjugate Gradient Method (MNCGM) for solving nonlinear regression problems without linearizing the form of the function. Testing the MNCGM on some problems (including polynomial regression problems with many parameters) confirms the possibility of obtaining the global optimum of the objective function with high accuracy.

I. INTRODUCTION

Consider the unconstrained minimization problem

    min f(x),  x in R^N,                                     (1)

where R^N is an N-dimensional Euclidean space and f is n-times differentiable; g(x) = grad f(x) is the gradient vector. A Conjugate Gradient Method (CGM) for solving (1) uses the iterative scheme

    x_{k+1} = x_k + a_k d_k,  k = 0, 1, 2, ...,              (2)

where x_0 in R^N is an initial point and a_k, the step size at iteration k, is defined by

    f(x_k + a_k d_k) = min_{a >= 0} f(x_k + a d_k),          (3)

and d_k is a search direction of the form

    d_{k+1} = -g_{k+1},                k = 0,
    d_{k+1} = -g_{k+1} + b_k d_k,      k >= 1,               (4)

in which g_k = g(x_k) = grad f(x_k) and b_k is a parameter of the CGM, i.e. different b_k's determine different CGMs. For instance, one popular choice is the Hestenes-Stiefel formula [5]:

    b_k^HS = g_k^T (g_k - g_{k-1}) / (d_{k-1}^T (g_k - g_{k-1})).

The global convergence properties of the above variants of the CGM have already been established by many authors, including Powell [10], Dai and Yuan [1] and Zoutendijk [8]. The cubic interpolation technique is among the most widely applicable and useful procedures for establishing convergence results of the CGM. This technique minimizes the one-dimensional function f(x_k + a d_k) over the step size a.
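The iteration (2)-(4) above can be sketched in code. This is a minimal illustration, not the authors' implementation: the Armijo backtracking line search and the steepest-descent restart safeguard are my assumptions (the paper itself uses Wolfe-condition and cubic-interpolation line searches), and the function name nonlinear_cgm is hypothetical.

```python
import numpy as np

def nonlinear_cgm(f, grad, x0, beta_rule="DY", tol=1e-8, max_iter=1000):
    """Sketch of the CGM iteration (2)-(4) with a backtracking line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # d_0 = -g_0, eq. (4)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:          # gradient small enough: stop
            break
        if g.dot(d) >= 0:                    # safeguard: restart with steepest descent
            d = -g
        # Armijo backtracking stands in for the exact minimization in eq. (3)
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * g.dot(d):
            alpha *= 0.5
        x_new = x + alpha * d                # eq. (2)
        g_new = grad(x_new)
        y = g_new - g
        if beta_rule == "DY":                # Dai-Yuan beta [1]
            beta = g_new.dot(g_new) / d.dot(y)
        else:                                # Hestenes-Stiefel beta [5]
            beta = g_new.dot(y) / d.dot(y)
        d = -g_new + beta * d                # eq. (4)
        x, g = x_new, g_new
    return x

# Example: minimize f(x) = (x1 - 1)^2 + 10 (x2 + 2)^2, minimizer (1, -2)
x_star = nonlinear_cgm(lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2,
                       lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)]),
                       [0.0, 0.0])
```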
ALGORITHM I
Step 1: Given x_1 = v, set d_1 = -g_1 and k = 1; if ||g_1|| = 0 then stop.
Step 2: Compute a step size a_k > 0 that satisfies the Wolfe conditions.
Step 3: Let x_{k+1} = x_k + a_k d_k. If ||g_{k+1}|| = 0 then stop.
Step 4: Compute b_k and generate d_{k+1} by (4); set k = k + 1 and go to Step 2.

2. The Least-Squares Exponential Function

The sum of the squares of the offsets is used instead of the offset absolute values because this allows the residuals to be treated as a continuous, differentiable quantity. The NCGM is then used to minimize the sum of the squares of the offsets. The best-fit function that approximates a set of data points {(x_i, y_i) | i = 1, 2, ..., m} may take one of the following forms: an exponential function

    P_n(x) = b e^{ax},                                       (10)

or a rational function

    P_n(x) = (a_0 + a_1 x + a_2 x^2 + ... + a_n x^n) / (b_0 + b_1 x + b_2 x^2 + ... + b_r x^r),   (12)

where a_0, a_1, ..., a_n and b_0, b_1, ..., b_r are constants to be determined and n <= m - 1. Using any of the model functions above, the Modified Nonlinear Conjugate Gradient Method seeks to minimize the offset (error function)

    F(a) = sum_{i=1}^{m} [y_i - P_n(x_i)]^2,  or

    F(a, b) = sum_{i=1}^{m} [y_i - P_n(x_i)]^2,              (13)

over a and b, where a in R^{n+1} and b in R^{r+1}.

PROBLEM I: Construct the least squares polynomial of degree 3 for the data below.

    x_i:  0    1    2    3    4
    y_i:  0    12   56   174  408
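For the polynomial model, the error function (13) and its gradient can be written down directly. The sketch below is illustrative only: the helper name make_objective and the tiny test data set are my assumptions, not the paper's code.

```python
import numpy as np

def make_objective(xs, ys, degree):
    """Least-squares offset F(a) = sum_i (y_i - P_n(x_i))^2   (eq. 13)
    for the polynomial model P_n(x) = a_0 + a_1 x + ... + a_n x^n."""
    V = np.vander(xs, degree + 1, increasing=True)  # V[i, j] = x_i ** j

    def F(a):
        r = ys - V @ a           # residuals y_i - P_n(x_i)
        return r @ r             # sum of squared offsets

    def gradF(a):
        r = ys - V @ a
        return -2.0 * V.T @ r    # dF/da_j = -2 * sum_i r_i * x_i ** j
    return F, gradF

# Example: y = 1 + 2x is fit exactly, so the offset and its gradient vanish
F, gradF = make_objective(np.array([0.0, 1.0, 2.0]),
                          np.array([1.0, 3.0, 5.0]), 1)
a = np.array([1.0, 2.0])
```

Because F is smooth in the coefficients a, any gradient-based method, such as the CGM above, can drive gradF to zero.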
The Modified Nonlinear Conjugate Gradient Method is Algorithm I driven by a cubic interpolation line search procedure. This method inherits the Dai and Yuan [1] convergence results of Algorithm I, since the cubic interpolation technique satisfies the Wolfe conditions implicitly. The algorithm is as follows.

ALGORITHM II
i.  Input initial values x_0 and d_0 = -g_0.
ii. Repeat:
    a. Find, by cubic interpolation, a step length a_k such that
           f(x_k + a_k d_k) = min_{a >= 0} f(x_k + a d_k).
    b. Compute the new point:
           x_{k+1} = x_k + a_k d_k.
    c. Update the search direction:
           d_{k+1} = -g_{k+1} + b_k d_k,
       where
           b_k = ||g_{k+1}||^2 / (d_k^T y_k),  y_k = g_{k+1} - g_k.
    d. Check for optimality of g: terminate the iteration at step m when ||g_m|| is so small that x_m can be accepted as the minimizer.

The following constitute the test problems for the CGM, on which Algorithm II was implemented in Visual Basic.

PROBLEM II: Construct the least squares polynomial of degree 10 for the data below (n = 10, m = 12).

    x_i:  0      0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1      1.1
    y_i:  5      5.266  5.484  5.676  5.872  6.138  6.635  7.745  10.292 15.962 28     52.337

    f(x) = 5 + 3x + 4x^2 + 7x^3 + 9x^4 + 6x^5 + 8x^6 + 3x^7 + 4x^8 + 10x^9 + x^10

PROBLEM III: Construct the least squares exponential function of the form (10) for the data below (n = 2, m = 5).

    x_i:  1     1.25   1.5    1.75   2
    y_i:  5.1   5.79   6.53   7.45   8.46

PROBLEM IV: Construct the least squares nonlinear function for the data below.

    x_i:  5     6     7     8     9
    y_i:  4.5   3.5   2.5   1.5   0.5

SOLUTION TO PROBLEM I

Time taken = 0.82 seconds.

    P_3(x) = 0.000096 + 9.998656 x - 4.999029 x^2 + 6.999837 x^3
    P_3(3) = 174.0004

Time taken = 0.45 seconds.
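Two of the test problems can be cross-checked numerically. The sketch below is not the paper's Visual Basic program: Problem I is verified with NumPy's linear least-squares polynomial fit (polynomial regression is linear in the coefficients, so it should reproduce P_3), and Problem III uses a log-linear fit only as a starting point, followed by a plain gradient-descent refinement of the nonlinear offset (13) as a simple stand-in for the MNCGM; the learning rate and iteration count are my assumptions.

```python
import numpy as np

# --- Problem I: degree-3 polynomial (linear least squares reproduces P_3) ---
x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y1 = np.array([0.0, 12.0, 56.0, 174.0, 408.0])
c = np.polyfit(x1, y1, 3)            # coefficients, highest power first
# c is approximately [7, -5, 10, 0], matching P_3(x) = 10x - 5x^2 + 7x^3

# --- Problem III: exponential model P(x) = b * exp(a * x), eq. (10) ---
x3 = np.array([1.0, 1.25, 1.5, 1.75, 2.0])
y3 = np.array([5.1, 5.79, 6.53, 7.45, 8.46])

# A log-linear fit gives a starting point; the loop below then minimizes the
# *nonlinear* offset (13) directly, without keeping the linearized form.
a, logb = np.polyfit(x3, np.log(y3), 1)
b = np.exp(logb)
for _ in range(2000):                # gradient-descent stand-in for MNCGM
    r = y3 - b * np.exp(a * x3)      # residuals y_i - b e^{a x_i}
    grad_a = -2.0 * np.sum(r * b * x3 * np.exp(a * x3))
    grad_b = -2.0 * np.sum(r * np.exp(a * x3))
    a -= 5e-4 * grad_a
    b -= 5e-4 * grad_b
# a and b end up near 0.5 and 3.1 respectively
```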
SOLUTION TO PROBLEM IV

V. CONCLUSION

Herein, we present a Modified Nonlinear Conjugate Gradient Method (MNCGM) for solving nonlinear regression problems for scientists and engineers. Testing the MNCGM on some problems (including polynomial regression problems with many parameters) confirms the possibility of obtaining the global optimum of the objective function with high accuracy. The results obtained also show that the method is amenable to nonlinear regression analysis and computer automation.
REFERENCES

[1]  Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property", SIAM J. Optim., vol. 10, pp. 177-182, 1999.
[2]  B. D. Bunday and G. R. Garside, ...
[8]  G. Zoutendijk, ..., pp. 37-86, North-Holland, Amsterdam, The Netherlands, 1970.
[9]  C. Lawson and R. Hanson, "Solving Least Squares Problems", Prentice-Hall, Englewood Cliffs, NJ, 1974.
[10] M. J. D. Powell, "Restart procedures for the conjugate gradient method", Mathematical Programming, vol. 12, no. 2, pp. 241-254, 1977.