School of Mechanical Aerospace and Civil Engineering

Numerical Solution of Equations

T. J. Craft
George Begg Building, C41

Contents:
Introduction to CFD
Numerical solution of equations
Finite difference methods
Finite volume methods
Pressure-velocity coupling
Solving sets of linear equations
Unsteady problems
Turbulence and other physical modelling
Body-fitted coordinate systems

Reading:
J. Ferziger, M. Peric, Computational Methods for Fluid Dynamics
H.K. Versteeg, W. Malalasekara, An Introduction to Computational Fluid Dynamics: The Finite Volume Method
S.V. Patankar, Numerical Heat Transfer and Fluid Flow

Notes: http://cfd.mace.manchester.ac.uk/tmcfd - People - T. Craft - Online Teaching Material

Introduction

The Navier-Stokes equations are a set of coupled, non-linear, partial differential equations. As indicated earlier, solving these numerically consists of two steps:

Approximation of the differential equations by algebraic ones.
Solution of the system of algebraic equations.

Before considering these steps for differential equations we first review briefly how numerical methods are used to obtain approximate solutions of single non-linear algebraic equations. Since most such methods are iterative, this introduces a number of concepts and generic treatments that will also be met later when dealing with iterative solution methods for large sets of coupled equations.

The general problem to be considered is that of solving the equation

f(x) = 0    (1)

where f is some arbitrary (non-linear) function. For some methods we need to reformulate this as

x = F(x)    (2)

for some function F.

Some methods will probably have been met in earlier courses, but it is useful to review them, to understand their behaviour and to examine some of their advantages and weaknesses.

Direct Iteration Method

This is a fairly simple method, which requires the problem to be written in the form

x = f(x)    (3)

for some function f(x).

We start with an initial guess to the solution, x_1, and then calculate a new estimate as

x_2 = f(x_1)    (4)

This process is continued, at each step generating a new approximation

x_{n+1} = f(x_n)    (5)

[Figure: the iteration illustrated graphically on a plot of f(x)]

The iterations are stopped when the difference between successive estimates becomes less than some prescribed convergence criterion ε, ie. when |x_{n+1} - x_n| < ε.

If the process is convergent, then taking a smaller value for ε results in a more accurate solution, although more iterations will need to be performed.

As an example, consider solving the equation

x^3 - 3x^2 - 3x - 4 = 0    (6)

(which has the exact solution x = 4).

We first need to write the equation in the form x = f(x), and there is more than one way of doing this. One way is to write the equation as

x = (3x^2 + 3x + 4)^{1/3}

so

f(x) = (3x^2 + 3x + 4)^{1/3}    (7)

If we start with an initial guess of x = 2.5, the above iterative method then gives:

Iteration    x       f(x)
1            2.5     3.12
2            3.12    3.49
3            3.49    3.71
4            3.71    3.83
5            3.83    3.91
6            3.91    3.95
7            3.95    3.97
8            3.97    3.98
9            3.98    3.99
10           3.99    3.99
11           3.99    4.00

The scheme converges, although not particularly rapidly.
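As a brief illustration, the direct iteration above is easily expressed in code. The following Python sketch applies equation (5) to formulation (7); the stopping tolerance is an illustrative choice, picked so that the printed sequence echoes the table:

    # Direct (fixed-point) iteration for x = f(x), equation (5):
    # repeat x_new = f(x) until |x_new - x| < eps.
    def direct_iteration(f, x, eps=0.005, max_iter=100):
        for n in range(max_iter):
            x_new = f(x)
            print(f"iteration {n + 1}: x = {x:.2f}, f(x) = {x_new:.2f}")
            if abs(x_new - x) < eps:
                return x_new
            x = x_new
        raise RuntimeError("not converged")

    # Formulation (7) of x^3 - 3x^2 - 3x - 4 = 0:
    f = lambda x: (3 * x**2 + 3 * x + 4) ** (1 / 3)
    root = direct_iteration(f, 2.5)  # converges towards x = 4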

If, however, we rewrite equation (6) in the form

x = (x^3 - 3x^2 - 4)/3

so

f(x) = (x^3 - 3x^2 - 4)/3    (8)

and again take x_1 = 2.5, then the iteration process leads to:

Iteration    x          f(x)
1            2.5        -2.38
2            -2.38      -11.44
3            -11.44     -631.19
4            -631.19    -8.4 x 10^7

In this case the process is clearly diverging. In fact, even if we had started from an initial guess much closer to the real solution, with this formulation the iterative process would still have diverged.

As can be seen from this example, the direct method as described is not always guaranteed to converge, and the particular formulation chosen for f(x) can be highly influential in determining the convergence behaviour.

To understand why this should be, we note that the difference between successive iterations will be

|x_{n+1} - x_n| = |f(x_n) - f(x_{n-1})|    (9)

Assuming that we are close to the true solution, x_r, we can approximate f(x_n) and f(x_{n-1}) by Taylor series expansions about x_r:

f(x_n) ≈ f(x_r) + (x_n - x_r) f'(x_r) + ...    (10)

f(x_{n-1}) ≈ f(x_r) + (x_{n-1} - x_r) f'(x_r) + ...    (11)

where f' denotes the derivative df/dx.

Since f(x_r) = x_r, substituting these expressions for f(x_n) and f(x_{n-1}) into equation (9) gives

|x_{n+1} - x_n| ≈ |f'(x_r)| |x_n - x_{n-1}|    (12)

Thus, if the gradient of f is such that |f'(x_r)| < 1, the difference between successive approximations decreases, and the scheme will converge. If |f'(x_r)| > 1 the difference between successive approximations increases and the method will not converge.
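As a quick check of this criterion against the example above (a short worked verification), we can evaluate f'(x) at the root x_r = 4 for the two formulations. For formulation (7),

f'(x) = (6x + 3) / [3 (3x^2 + 3x + 4)^{2/3}]

and since 3x^2 + 3x + 4 = 64 at x = 4, this gives f'(4) = 27/(3 x 16) ≈ 0.56 < 1, consistent with the steady but not especially rapid convergence seen earlier. For formulation (8), f'(x) = (3x^2 - 6x)/3 = x^2 - 2x, so f'(4) = 8 > 1, which is why that iteration diverges however close the initial guess.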

This behaviour can also be inferred graphically, as shown below.

[Figures: iteration paths plotted on f(x) - one case where the scheme converges, one where the scheme diverges]

The graphs of the two particular forms of f(x) examined in the example earlier are shown below.

[Figures: graphs of the two formulations - x_{n+1} = (3x_n^2 + 3x_n + 4)^{1/3}, which converges, and x_{n+1} = (x_n^3 - 3x_n^2 - 4)/3, which diverges]

This highlights an important aspect of numerical solutions: one usually needs a good understanding of the problem to be solved and the solution methods in order to select the most appropriate scheme.

The Bisection Method

This is designed to solve a problem formulated as f(x) = 0 for some function f.

In this method we start off with two points x_1 and x_2, chosen to lie on opposite sides of the solution, so that f(x_1) and f(x_2) have opposite signs.

We then bisect this interval, taking x_3 = 0.5(x_1 + x_2), and evaluate f(x_3) in order to find its sign.

For the next iteration we retain x_3 and whichever of x_1 or x_2 gave the opposite sign of f to f(x_3). We thus know that the solution lies between the two points we retain.

We continue bisecting the interval as above until it becomes sufficiently small, ie. |x_n - x_{n-1}| < ε for some small convergence criterion ε. We then call the process converged.

Clearly, as we reduce the convergence criterion ε we get a more accurate approximation to the solution, but have to perform more iterations (ie. more bisections of the interval).

[Figure: successive bisection points x_3, x_4, x_5, x_6 closing in on the root between x_1 and x_2 on a plot of f(x)]

Solving the same example as earlier we write

f(x) = x^3 - 3x^2 - 3x - 4 = 0    (13)

Applying the bisection method iteration, with initial points x_1 = 2.5 and x_2 = 5, now gives:

Iteration    Interval     x         f(x)
1            -            2.5       -14.63
2            -            5.0       31.0
3            (x1, x2)     3.75      -4.703
4            (x3, x2)     4.375     9.193
5            (x3, x4)     4.0625    1.348
6            (x3, x5)     3.9062    -1.891
7            (x6, x5)     3.9844    -0.325
8            (x7, x5)     4.0235    0.499
9            (x7, x8)     4.0036    0.077
10           (x7, x9)     3.994     -0.1257
11           (x10, x9)    3.999     -0.021

The method will always converge, since the interval size always decreases. The method can, however, be rather slow, since the interval size is only halved in each iteration.
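A minimal Python sketch of this bisection loop (an illustrative addition; the iteration count starts at 3 so that the printed lines match the numbering in the table above):

    # Bisection for f(x) = 0: keep the half of the interval across which
    # f changes sign, until the interval is smaller than eps.
    def bisection(f, x1, x2, eps=0.01, max_iter=100):
        f1 = f(x1)
        assert f1 * f(x2) < 0, "x1 and x2 must bracket the root"
        for n in range(max_iter):
            x3 = 0.5 * (x1 + x2)      # bisect the current interval
            f3 = f(x3)
            print(f"iteration {n + 3}: x = {x3:.4f}, f(x) = {f3:.4f}")
            if f1 * f3 < 0:           # root lies in (x1, x3)
                x2 = x3
            else:                     # root lies in (x3, x2)
                x1, f1 = x3, f3
            if abs(x2 - x1) < eps:
                return 0.5 * (x1 + x2)
        raise RuntimeError("not converged")

    f = lambda x: x**3 - 3 * x**2 - 3 * x - 4
    root = bisection(f, 2.5, 5.0)  # approaches the exact root x = 4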

The Secant Method

This method again solves the equation f(x) = 0.

It also requires two starting points, x_1 and x_2, but they need not be on opposite sides of the exact solution.

We now fit a straight line through the two points (x_1, f(x_1)) and (x_2, f(x_2)), and take the next estimate x_3 as the point at which this line cuts the x axis.

[Figure: successive secant lines on a plot of f(x), generating x_3, x_4, x_5 from the starting points x_1 and x_2]

This iteration method can be formulated mathematically as

x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))    (14)

The process is again repeated until the convergence criterion is reached.

With the earlier example, solving f(x) = x^3 - 3x^2 - 3x - 4 = 0, and with the same starting points, we would get:

Iteration    x       f(x)
1            2.5     -14.63
2            5       31.0
3            3.3     -10.62
4            3.73    -4.96
5            4.11    2.51
6            3.99    -0.28
7            4.00    -0.01

The process shows a fairly rapid convergence.

However, if we instead start with the two points x_1 = 0 and x_2 = 5, we get the sequence:

Iteration    x        f(x)
1            0.0      -4.0
2            5        31.0
3            0.57     -6.51
4            1.34     -11.0
5            -0.54    -3.41
6            -1.39    -8.29
7            0.05     -4.16
8            1.50     -11.87
9            -0.73    -3.8
10           -1.78    -13.81

In this case the process does not appear to be converging. By plotting the sequence, we can see that the iterative process oscillates across the local maximum of f around x ≈ -0.41.
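The secant update of equation (14) can be sketched in a few lines of Python (an illustrative addition; both behaviours above can be reproduced by changing the starting points):

    # Secant iteration, equation (14):
    # x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
    def secant(f, x_prev, x, eps=0.01, max_iter=20):
        for n in range(max_iter):
            x_new = x - f(x) * (x - x_prev) / (f(x) - f(x_prev))
            x_prev, x = x, x_new
            print(f"iteration {n + 3}: x = {x:.2f}, f(x) = {f(x):.2f}")
            if abs(x - x_prev) < eps:
                return x
        raise RuntimeError("not converged")

    f = lambda x: x**3 - 3 * x**2 - 3 * x - 4
    secant(f, 2.5, 5.0)    # converges in a few iterations
    # secant(f, 0.0, 5.0)  # oscillates about the local maximum and fails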

Under-Relaxation

Under-relaxation is commonly used in numerical methods to aid in obtaining stable solutions. Essentially it slows down the rate of advance of the solution process by linearly interpolating between the current iteration value, x_n, and the value that would otherwise be taken at the next iteration level.

In the present example of using the Secant method, equation (14) could be modified to read

x_{n+1} = x_n - α f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))    (15)

where the under-relaxation factor α is taken between 0 and 1.

If, for example, we take α = 0.5, the Secant method applied to the previous example gives:

Iteration    x       f(x)
1            0.0     -4.0
2            5.0     31.0
3            2.79    -14.02
4            3.13    -12.11
5            4.23    5.20
6            4.06    1.31
7            4.03    0.71
8            4.02    0.36
9            4.01    0.18
10           4.00    0.09

By reducing the jump between successive values of x in the early iterations the solution estimates stay to the right side of the local minimum in f, and the process now converges.

Note that as the solution gets closer to converging, the under-relaxation factor could be increased to speed up convergence.
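In code, the under-relaxed secant step of equation (15) only changes one line of the earlier sketch (an illustrative addition; alpha = 0.5 matches the table above):

    # Under-relaxed secant step, equation (15): only a fraction alpha of
    # the full secant update is applied, with 0 < alpha < 1.
    def secant_relaxed(f, x_prev, x, alpha=0.5, eps=0.01, max_iter=50):
        for n in range(max_iter):
            dx = -f(x) * (x - x_prev) / (f(x) - f(x_prev))
            x_prev, x = x, x + alpha * dx
            print(f"iteration {n + 3}: x = {x:.2f}, f(x) = {f(x):.2f}")
            if abs(x - x_prev) < eps:
                return x
        raise RuntimeError("not converged")

    f = lambda x: x**3 - 3 * x**2 - 3 * x - 4
    secant_relaxed(f, 0.0, 5.0, alpha=0.5)  # now converges from (0, 5)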

Convergence Criteria

In all the examples presented here we have tested for convergence by checking on the difference between successive solution estimates. We thus take the solution as converged when |x_{n+1} - x_n| < ε for some predefined convergence criterion ε.

However, this condition should not always be applied in a purely blind fashion.

For example, if using the secant method in a region where the gradient f' is very steep, the changes in x_n between iterations could be very small, particularly if heavy under-relaxation is also being used. However, as shown in the figure, the estimates could still be some way from the true solution.

[Figure: a steep region of f(x) in which successive estimates x_1, x_2, x_3 are closely spaced but still short of the root]

A small value of |x_{n+1} - x_n| could simply be indicating a very slow convergence rate. A safer way of checking for convergence is to also test whether the actual function value f(x_n) is sufficiently small.
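Such a combined test might be written as follows (an illustrative addition; the two tolerances are arbitrary placeholders):

    # Safer stopping test: require both a small change in x and a small
    # residual |f(x)| before declaring convergence.
    def is_converged(x_new, x_old, f_new, eps_x=1e-3, eps_f=1e-3):
        return abs(x_new - x_old) < eps_x and abs(f_new) < eps_f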

The Newton-Raphson Method

This also solves the equation f(x) = 0.

In this method we start with an initial guess x_1. We then draw the tangent to the curve of f at x_1, and take our next estimate x_2 to be the point where this tangent cuts the x axis.

[Figure: successive tangents to f(x), generating x_2, x_3, x_4 from the starting point x_1]

Mathematically, we can write this iteration process as

x_{n+1} = x_n - f(x_n)/f'(x_n)    (16)

where f' denotes the derivative df/dx.

We again continue the process until we satisfy some suitable convergence criterion.

Using the same example as before, x^3 - 3x^2 - 3x - 4 = 0, with a starting point of x_1 = 2.5, we obtain the following sequence:

Iteration    x        f(x)       f'(x)
1            2.5      -14.63     0.75
2            22       9126       1317
3            15.07    2692.31    587.95
4            10.49    789.11     264.26
5            7.51     227.27     120.96
6            5.63     62.27      58.21
7            4.56     14.66      31.95
8            4.10     2.15       22.8
9            4.00     0.08       21.07

Note the large jump in the first step, which occurs since we chose a value x_1 for which f'(x_1) is small. After this the convergence is fairly rapid.
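The Newton-Raphson update of equation (16) in Python (an illustrative addition, supplied here with the analytical derivative and using the combined stopping test discussed earlier):

    # Newton-Raphson iteration, equation (16): x_{n+1} = x_n - f(x_n)/f'(x_n)
    def newton_raphson(f, dfdx, x, eps=0.01, max_iter=50):
        for n in range(max_iter):
            print(f"iteration {n + 1}: x = {x:.2f}, f(x) = {f(x):.2f}, "
                  f"f'(x) = {dfdx(x):.2f}")
            x_new = x - f(x) / dfdx(x)
            if abs(x_new - x) < eps and abs(f(x_new)) < eps:
                return x_new
            x = x_new
        raise RuntimeError("not converged")

    f    = lambda x: x**3 - 3 * x**2 - 3 * x - 4
    dfdx = lambda x: 3 * x**2 - 6 * x - 3
    newton_raphson(f, dfdx, 2.5)   # large first jump, then rapid convergence
    # newton_raphson(f, dfdx, 0.0) # oscillates, as in the next example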

If, however, we had started the iteration process from an initial value of x_1 = 0, we would have got:

Iteration    x        f(x)      f'(x)
1            0.0      -4        -3
2            -1.33    -7.70     10.33
3            -0.59    -3.48     1.56
4            1.64     -12.56    -4.79
5            -0.99    -4.92     5.84
6            -0.14    -3.63     -2.07
7            -1.90    -15.98    19.22
8            -1.07    -5.44     6.83
9            -0.27    -3.43     -1.14
10           -3.27    -61.25    48.71

Because of the shape of the function f, the method first spends a number of iterations oscillating around the local maximum at x ≈ -0.41.

In this case convergence can again be achieved by introducing under-relaxation. If we under-relax the solution with a factor of 0.9, taking

x_{n+1} = x_n - 0.9 f(x_n)/f'(x_n)    (17)

we then get the iteration sequence:

Iteration    x        f(x)      f'(x)
1            0.0      -4        -3
2            -1.20    -6.45     8.52
3            -0.52    -3.39     0.92
4            2.79     -13.99    3.66
5            6.23     102.69    76.07
6            5.02     31.64     42.36
7            4.34     8.30      27.53
8            4.07     1.55      22.30
9            4.01     0.19      21.16

The under-relaxation here results in the solution rapidly reaching a point near the maximum of f where f' is small enough and positive that the next value of x lies to the right of the root (ie. x > 4).

Note that in this case convergence could again have been improved further by reverting to using no under-relaxation after iteration 4.

Systems of Algebraic Equations

Most of the methods outlined for solving a single equation can be extended to apply to a system of equations of the form:

f_1(x_1, x_2, ..., x_n) = 0
f_2(x_1, x_2, ..., x_n) = 0
...
f_n(x_1, x_2, ..., x_n) = 0

As in the case of single equations, to solve the system efficiently (and sometimes to solve it at all) requires an understanding of the method adopted, and of the nature of the equations to be solved.

The Direct Iteration Method

For the equivalent of the direct iteration method, we first have to rewrite the system of equations into the form

x_1 = f_1(x_1, x_2, ..., x_n)
x_2 = f_2(x_1, x_2, ..., x_n)
...
x_n = f_n(x_1, x_2, ..., x_n)

This can be written more compactly in vector form: x = f(x).

One then proceeds, with a starting guess x_0, constructing the sequence

x_{n+1} = f(x_n)    (18)

stopping the iteration when ||x_{n+1} - x_n|| < ε for a suitably small convergence criterion ε.

The norm ||x_{n+1} - x_n|| is simply a scalar measure of the size of the vector x_{n+1} - x_n. For example, it could be the L2 norm

||x||_{L2} = (x·x)^{1/2} = (Σ_i x_i^2)^{1/2}    (19)
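A compact Python sketch of this vector iteration (an illustrative addition; the small contractive system used to exercise it is chosen purely for demonstration):

    import numpy as np

    # Vector direct iteration, equation (18): x_{n+1} = f(x_n), stopped
    # when the L2 norm of equation (19) falls below eps.
    def direct_iteration_system(f, x, eps=1e-6, max_iter=200):
        for n in range(max_iter):
            x_new = f(x)
            if np.linalg.norm(x_new - x) < eps:  # L2 norm, equation (19)
                return x_new
            x = x_new
        raise RuntimeError("not converged")

    # Illustrative system: x1 = 0.5 cos(x2), x2 = 0.5 sin(x1)
    f = lambda x: np.array([0.5 * np.cos(x[1]), 0.5 * np.sin(x[0])])
    x = direct_iteration_system(f, np.array([0.0, 0.0]))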

As in the single equation case the method is not always guaranteed to converge.

In addition to the convergence behaviour being highly dependent on the precise formulation used for f_i in the equation

x_i = f_i(x_1, x_2, ..., x_n)    (20)

the choice of which equation is used for updating which variable is now also very important.

It is possible to devise expressions for the conditions necessary to ensure convergence, in the same way as was done in the single equation case. However, the result is rather complex and will not be considered here. In general, a useful rule of thumb is to try to write the equations in a form where the right hand sides contain the variables raised to powers less than one.

Under-relaxation is often also used, to aid convergence by avoiding large changes in the estimates for x_n from iteration to iteration. With this included, the iteration sequence can be written as

x_{n+1} = (1 - α) x_n + α f(x_n)    (21)

where α is the under-relaxation factor, with 0 < α < 1.
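In code, the under-relaxed update of equation (21) amounts to a one-line blend of the old and new estimates (alpha = 0.7 here is purely illustrative):

    # One under-relaxed direct-iteration step, equation (21).
    def relaxed_step(f, x, alpha=0.7):
        return (1.0 - alpha) * x + alpha * f(x)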

The Newton-Raphson Method

Other schemes can also be adapted to obtain solutions to a system of equations.

The Newton-Raphson method for a single equation can be devised by retaining the first term in a Taylor series expansion of f about the point x_n. If x_r is the actual root, then one can write

0 = f(x_r) ≈ f(x_n) + (x_r - x_n) f'(x_n) + ...    (22)

Neglecting the higher order terms and rearranging leads to x_r ≈ x_n - f(x_n)/f'(x_n).

In the case of a system of equations we could write

0 = f_1^{(r)} ≈ f_1^{(i)} + (x_1^{(r)} - x_1^{(i)}) ∂f_1^{(i)}/∂x_1 + (x_2^{(r)} - x_2^{(i)}) ∂f_1^{(i)}/∂x_2 + ... + (x_n^{(r)} - x_n^{(i)}) ∂f_1^{(i)}/∂x_n

0 = f_2^{(r)} ≈ f_2^{(i)} + (x_1^{(r)} - x_1^{(i)}) ∂f_2^{(i)}/∂x_1 + (x_2^{(r)} - x_2^{(i)}) ∂f_2^{(i)}/∂x_2 + ... + (x_n^{(r)} - x_n^{(i)}) ∂f_2^{(i)}/∂x_n

...

0 = f_n^{(r)} ≈ f_n^{(i)} + (x_1^{(r)} - x_1^{(i)}) ∂f_n^{(i)}/∂x_1 + (x_2^{(r)} - x_2^{(i)}) ∂f_n^{(i)}/∂x_2 + ... + (x_n^{(r)} - x_n^{(i)}) ∂f_n^{(i)}/∂x_n

where f_k^{(i)} denotes the value f_k(x^{(i)}), the superscript (r) denoting the exact root and (i) the estimate at iteration level i.

Rearranging, and taking x^{(i+1)} as the new estimate to the solution at iteration level i+1 (in place of the unknown x^{(r)}), leads to a system of linear equations for the elements of the vector x^{(i+1)} - x^{(i)}:

[ ∂f_1^{(i)}/∂x_1  ∂f_1^{(i)}/∂x_2  ...  ∂f_1^{(i)}/∂x_n ] [ x_1^{(i+1)} - x_1^{(i)} ]   [ -f_1^{(i)} ]
[ ∂f_2^{(i)}/∂x_1  ∂f_2^{(i)}/∂x_2  ...  ∂f_2^{(i)}/∂x_n ] [ x_2^{(i+1)} - x_2^{(i)} ] = [ -f_2^{(i)} ]
[       ...              ...        ...        ...       ] [           ...           ]   [     ...    ]
[ ∂f_n^{(i)}/∂x_1  ∂f_n^{(i)}/∂x_2  ...  ∂f_n^{(i)}/∂x_n ] [ x_n^{(i+1)} - x_n^{(i)} ]   [ -f_n^{(i)} ]
                                                                                            (23)

which can be solved using a variety of methods, some of which will be discussed later. Under-relaxation can also be applied in this method.

When the scheme converges it generally does so more rapidly than the simpler direct iteration method. However, more work has to be done per iteration, since the derivatives ∂f_i/∂x_j have to be computed, and the matrix system of equation (23) solved.
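A short numpy sketch of this Newton-Raphson step for systems (an illustrative addition; the 2x2 test system and starting guess are chosen purely for demonstration):

    import numpy as np

    # Newton-Raphson for a system, equation (23): at each iteration solve
    # J dx = -f for dx = x_new - x, where J is the Jacobian at the current x.
    def newton_system(f, jac, x, eps=1e-8, max_iter=50):
        for n in range(max_iter):
            dx = np.linalg.solve(jac(x), -f(x))
            x = x + dx
            if np.linalg.norm(dx) < eps:
                return x
        raise RuntimeError("not converged")

    # Illustrative system: f1 = x1^2 + x2^2 - 4 = 0, f2 = x1 - x2 = 0
    f   = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
    jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
    x = newton_system(f, jac, np.array([1.0, 0.5]))  # -> (sqrt(2), sqrt(2))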

Summary

We have examined a number of numerical iterative methods for solving single non-linear equations of the form f(x) = 0 and systems of such equations, f(x) = 0, with vector x and f.

Some schemes, such as the bisection method, will always converge, but may do so rather slowly. Others converge more rapidly for some cases, but may not be guaranteed to always converge.

Under-relaxation can be applied in some cases to aid convergence by reducing the change in solution estimates between iterations. The under-relaxation factor can often be increased as the true solution is approached.

To select and apply the most appropriate method for a particular problem requires a good understanding of the characteristics of the method and of the problem being solved.