
Iterative methods for systems of linear equations

Astrid Xiomara Rodriguez
www.nebrija.es/.../MetodosMatem/sistemas_lineales_iterativos
Iterative methods for systems of linear equations

Introduction
Heat Equation
Jacobi Method
Gauss-Seidel Method
Overrelaxation method
Capacitor Problem
Direct methods versus iterative methods
DIRECT                        ITERATIVE
• Ax = b                      • x = Cx + d
• x = A\b                     • x(k+1) = Cx(k) + d
• Moderate size               • Big size
• Alter the structure         • Zeros preserved
• Rounding error              • Truncation error
Compared with direct methods, iterative methods do not guarantee a better approximation; however, they are more efficient when working with large matrices.
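
A minimal sketch of the two formulations, assuming Python with NumPy (numpy.linalg.solve plays the role of x = A\b, and C, d are built here from the Jacobi splitting; the 3x3 system is the example used later in these slides):

import numpy as np

# Example system Ax = b (the 3x3 example used later in these slides)
A = np.array([[ 4.0, -1.0, 1.0],
              [ 4.0, -8.0, 1.0],
              [-2.0,  1.0, 5.0]])
b = np.array([7.0, -21.0, 15.0])

# Direct method: solve Ax = b in one step (analogue of x = A\b)
x_direct = np.linalg.solve(A, b)

# Iterative method: rewrite as x = Cx + d and iterate x(k+1) = C x(k) + d.
# Here C = -D^{-1}(A - D) and d = D^{-1} b, with D the diagonal of A (Jacobi splitting).
D = np.diag(np.diag(A))
C = -np.linalg.solve(D, A - D)
d = np.linalg.solve(D, b)

x = np.zeros(3)
for k in range(25):
    x = C @ x + d

print(x_direct)   # [2. 4. 3.]
print(x)          # close to [2. 4. 3.]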

In the numerical solution of partial differential equations, linear systems with as many as 100 000 unknowns often appear. In these systems the coefficient matrix is sparse, i.e. a high percentage of its entries are equal to 0. If the nonzero elements follow a pattern (e.g. tridiagonal systems), then an iterative method can be very effective.
Heat Equation
• System of linear equations:

  T_1 = (T_0 + T_2)/2
  T_2 = (T_1 + T_3)/2
  T_3 = (T_2 + T_4)/2
  ...
  T_n = (T_{n-1} + T_{n+1})/2

• Associated matrix (tridiagonal):

  |  2  -1              |
  | -1   2  -1          |
  |     -1   2  ...     |
  |          ...    -1  |
  |             -1   2  |

[Figure: a rod discretized at the points T_0, T_1, T_2, ..., T_n, T_{n+1}]
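
A sketch of how this tridiagonal system could be assembled in sparse form, assuming Python with NumPy/SciPy; the boundary temperatures T_0 and T_{n+1} (the values below are only illustrative) end up on the right-hand side:

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 100                # number of interior points T_1, ..., T_n
T0, Tn1 = 0.0, 1.0     # boundary temperatures T_0 and T_{n+1} (illustrative values)

# Rewriting T_i = (T_{i-1} + T_{i+1}) / 2 as -T_{i-1} + 2 T_i - T_{i+1} = 0
# gives the tridiagonal matrix with 2 on the diagonal and -1 off the diagonal.
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# The known boundary values only enter the first and last equations.
b = np.zeros(n)
b[0] = T0
b[-1] = Tn1

T = spsolve(A, b)      # interior temperatures: a straight line between T0 and Tn1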
Jacobi method
• System of linear equations

  a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
  a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
  a_31 x_1 + a_32 x_2 + a_33 x_3 + ... + a_3n x_n = b_3
  ...
  a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + ... + a_nn x_n = b_n
• Of all the iterative methods, Jacobi's is the easiest to implement and understand; however, it is not very efficient at obtaining solutions.
Consider the system:
  4x − y + z = 7
  4x − 8y + z = −21
  −2x + y + 5z = 15

which we can write equivalently as:

  x = (7 + y − z)/4
  y = (21 + 4x + z)/8
  z = (15 + 2x − y)/5

Jacobi's method consists of using the formulas above as a fixed-point iteration.
Fixed point equation

x_1 = (b_1 − a_12 x_2 − a_13 x_3 − ... − a_1n x_n) / a_11
x_2 = (b_2 − a_21 x_1 − a_23 x_3 − ... − a_2n x_n) / a_22
x_3 = (b_3 − a_31 x_1 − a_32 x_2 − ... − a_3n x_n) / a_33
...
x_n = (b_n − a_n1 x_1 − a_n2 x_2 − ... − a_{n,n-1} x_{n-1}) / a_nn
Jacobi Iteration
x_1^(k+1) = (b_1 − a_12 x_2^(k) − a_13 x_3^(k) − ... − a_1n x_n^(k)) / a_11
x_2^(k+1) = (b_2 − a_21 x_1^(k) − a_23 x_3^(k) − ... − a_2n x_n^(k)) / a_22
x_3^(k+1) = (b_3 − a_31 x_1^(k) − a_32 x_2^(k) − ... − a_3n x_n^(k)) / a_33
...
x_n^(k+1) = (b_n − a_n1 x_1^(k) − a_n2 x_2^(k) − ... − a_{n,n-1} x_{n-1}^(k)) / a_nn
• Each step of the Jacobi iteration yields a vector with n coordinates

  P_0 = (x_1^(0), x_2^(0), ..., x_n^(0)), ..., P_k = (x_1^(k), x_2^(k), ..., x_n^(k))

where the initial estimate (x_1^(0), x_2^(0), ..., x_n^(0)) must be chosen. When there is no clue about the solution, one usually takes x_i^(0) = b_i / a_ii.

In the example above, if we take P_0 = (x^(0), y^(0), z^(0)) = (1, 2, 2), the first iteration gives

  P_1 = (1.75, 3.375, 3.00)

Generating the sequence of Jacobi iterates, one sees that it converges to (2, 4, 3).
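
A minimal sketch of this Jacobi iteration, assuming Python with NumPy (the jacobi helper below is illustrative, not part of the original slides):

import numpy as np

A = np.array([[ 4.0, -1.0, 1.0],
              [ 4.0, -8.0, 1.0],
              [-2.0,  1.0, 5.0]])
b = np.array([7.0, -21.0, 15.0])

def jacobi(A, b, x0, iterations):
    """Jacobi iteration: every new component uses only the previous iterate."""
    x = x0.astype(float).copy()
    D = np.diag(A)                         # the diagonal entries a_ii
    for _ in range(iterations):
        # x_i^(k+1) = (b_i - sum over j != i of a_ij x_j^(k)) / a_ii
        x = (b - (A @ x - D * x)) / D
    return x

P0 = np.array([1.0, 2.0, 2.0])
print(jacobi(A, b, P0, 1))    # [1.75  3.375 3.   ]  -> P_1 above
print(jacobi(A, b, P0, 30))   # close to [2. 4. 3.]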
• Many times the Jacobi iteration does not work. Here is an example obtained by rearranging the equations of the previous example.
Example:
  −2x + y + 5z = 15
  4x − 8y + z = −21
  4x − y + z = 7

Now the iteration formulas are

  x = (−15 + y + 5z)/2
  y = (21 + 4x + z)/8
  z = 7 − 4x + y

and one sees that the Jacobi sequence diverges.


Note that the coefficient matrix is not strictly diagonally dominant.
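
Strict diagonal dominance is easy to test; a sketch assuming Python with NumPy, applied to both orderings of the example (the helper name is illustrative):

import numpy as np

def strictly_diagonally_dominant(A):
    """True if |a_ii| > sum over j != i of |a_ij| for every row i."""
    diag = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off))

A_original   = np.array([[ 4.0, -1.0, 1.0],
                         [ 4.0, -8.0, 1.0],
                         [-2.0,  1.0, 5.0]])   # original ordering: Jacobi converges
A_rearranged = np.array([[-2.0,  1.0, 5.0],
                         [ 4.0, -8.0, 1.0],
                         [ 4.0, -1.0, 1.0]])   # rearranged ordering: Jacobi diverges

print(strictly_diagonally_dominant(A_original))    # True
print(strictly_diagonally_dominant(A_rearranged))  # False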
Gauss-Seidel iteration
x_1^(k+1) = (b_1 − a_12 x_2^(k) − a_13 x_3^(k) − ... − a_1n x_n^(k)) / a_11
x_2^(k+1) = (b_2 − a_21 x_1^(k+1) − a_23 x_3^(k) − ... − a_2n x_n^(k)) / a_22
x_3^(k+1) = (b_3 − a_31 x_1^(k+1) − a_32 x_2^(k+1) − ... − a_3n x_n^(k)) / a_33
...
x_n^(k+1) = (b_n − a_n1 x_1^(k+1) − a_n2 x_2^(k+1) − ... − a_{n,n-1} x_{n-1}^(k+1)) / a_nn
• The Gauss-Seidel method is a modification of the Jacobi method that accelerates its convergence.
Note that the Jacobi method generates a sequence for each unknown, (x_1^(k)), ..., (x_n^(k)). Since x_i^(k+1) is probably a better approximation than x_i^(k), in the computation of x_{i+1}^(k+1) we use x_i^(k+1) instead of x_i^(k).
Apply this strategy to Example 1 and check the speed of convergence.

The Gauss-Seidel method substantially cuts the number of iterations needed to reach a given precision in the solution. Naturally, the convergence criteria are similar to those of Jacobi.
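
A sketch of a Gauss-Seidel sweep, assuming Python with NumPy; it differs from the Jacobi sketch only in that each component is overwritten as soon as it is computed:

import numpy as np

def gauss_seidel(A, b, x0, iterations):
    """Gauss-Seidel: components already updated in this sweep are used immediately."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(iterations):
        for i in range(n):
            # new values for j < i, old values for j > i
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[ 4.0, -1.0, 1.0],
              [ 4.0, -8.0, 1.0],
              [-2.0,  1.0, 5.0]])
b = np.array([7.0, -21.0, 15.0])

print(gauss_seidel(A, b, np.zeros(3), 10))   # close to [2. 4. 3.]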
Overrelaxation method
xi k x ik+1
Gauss  Seidel : zi xik+1
xˆ i(k +1)  x i(k)  z i
Iverrelaxa tion :
x (k +1)
i x (k)
i  wz i ; 0<w <2
z i  xˆ i(k +1)  x i(k)
x (k +1)
i  (1  w)x (k)
i  w xˆ (k +1)
i
• The overrelaxation method reduces the number of iterations needed to solve linear systems by Gauss-Seidel. At each iteration it takes, for each component, a weighted average of the current value x_i^(k) and the Gauss-Seidel update x̂_i^(k+1) (which already uses the components updated earlier in the same sweep).
Overrelaxation step

x_1^(k+1) = (1 − w) x_1^(k) + w (b_1 − a_12 x_2^(k) − a_13 x_3^(k) − ... − a_1n x_n^(k)) / a_11
x_2^(k+1) = (1 − w) x_2^(k) + w (b_2 − a_21 x_1^(k+1) − a_23 x_3^(k) − ... − a_2n x_n^(k)) / a_22
x_3^(k+1) = (1 − w) x_3^(k) + w (b_3 − a_31 x_1^(k+1) − a_32 x_2^(k+1) − ... − a_3n x_n^(k)) / a_33
...
x_n^(k+1) = (1 − w) x_n^(k) + w (b_n − a_n1 x_1^(k+1) − a_n2 x_2^(k+1) − ... − a_{n,n-1} x_{n-1}^(k+1)) / a_nn

Summary
• Iterative methods are applied to large, sparse matrices.
• The cost per iteration is O(n^2), or less if the sparsity is exploited.
• They are expected to converge in fewer than n steps.
• The matrix has to satisfy certain conditions for the method to converge.
