
Iterative Techniques in Matrix Algebra

Relaxation Techniques for Solving Linear Systems


Numerical Analysis (9th Edition)
R L Burden & J D Faires
Beamer Presentation Slides
prepared by
John Carroll
Dublin City University

© 2011 Brooks/Cole, Cengage Learning


Outline

Residual Vectors & the Gauss-Seidel Method

Relaxation Methods (including SOR)

Choosing the Optimal Value of ω

The SOR Algorithm

Residual Vectors & the Gauss-Seidel Method

Motivation
We have seen that the rate of convergence of an iterative technique depends on the spectral radius of the matrix associated with the method.

One way to select a procedure to accelerate convergence is to choose a method whose associated matrix has minimal spectral radius.

We start by introducing a new means of measuring the amount by which an approximation to the solution of a linear system differs from the true solution of the system.

The method makes use of the vector described in the following definition.

Definition
Suppose \tilde{x} \in \mathbb{R}^n is an approximation to the solution of the linear system defined by

    Ax = b

The residual vector for \tilde{x} with respect to this system is

    r = b - A\tilde{x}

Comments
A residual vector is associated with each calculation of an approximate component to the solution vector.

The true objective is to generate a sequence of approximations that will cause the residual vectors to converge rapidly to zero.
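To make the definition concrete, here is a small numerical illustration (my own, not from the slides): the residual is computed directly as r = b − Ax̃ for an approximation to a 2 × 2 system.

```python
import numpy as np

# A small illustrative system (my own values): the exact solution is (1, 2)^t.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([6.0, 7.0])

x_tilde = np.array([1.1, 1.9])   # an approximation to the solution

# The residual vector for x_tilde with respect to Ax = b
r = b - A @ x_tilde
print(r)                         # close to (-0.3, 0.2)
```

A good approximation has a residual close to the zero vector; this is the sense in which the residual measures how much x̃ differs from the true solution.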

Looking at the Gauss-Seidel Method
Suppose we let

    r_i^{(k)} = \left( r_{1i}^{(k)}, r_{2i}^{(k)}, \ldots, r_{ni}^{(k)} \right)^t

denote the residual vector for the Gauss-Seidel method corresponding to the approximate solution vector x_i^{(k)} defined by

    x_i^{(k)} = \left( x_1^{(k)}, x_2^{(k)}, \ldots, x_{i-1}^{(k)}, x_i^{(k-1)}, \ldots, x_n^{(k-1)} \right)^t

The m-th component of r_i^{(k)} is

    r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i}^{n} a_{mj} x_j^{(k-1)}

Looking at the Gauss-Seidel Method (Cont'd)
Equivalently, we can write r_{mi}^{(k)} in the form

    r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i+1}^{n} a_{mj} x_j^{(k-1)} - a_{mi} x_i^{(k-1)}

for each m = 1, 2, \ldots, n.

Looking at the Gauss-Seidel Method (Cont'd)
In particular, the i-th component of r_i^{(k)} is

    r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k-1)}

so

    (E)    a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)}

Looking at the Gauss-Seidel Method (Cont'd)
Recall, however, that in the Gauss-Seidel method, x_i^{(k)} is chosen to be

    x_i^{(k)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]

so (E) can be rewritten as

    a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = a_{ii} x_i^{(k)}

Looking at the Gauss-Seidel Method (Cont'd)
Consequently, the Gauss-Seidel method can be characterized as choosing x_i^{(k)} to satisfy

    x_i^{(k)} = x_i^{(k-1)} + \frac{r_{ii}^{(k)}}{a_{ii}}

A 2nd Connection with Residual Vectors
We can derive another connection between the residual vectors and the Gauss-Seidel technique.

Consider the residual vector r_{i+1}^{(k)} associated with the vector

    x_{i+1}^{(k)} = \left( x_1^{(k)}, \ldots, x_i^{(k)}, x_{i+1}^{(k-1)}, \ldots, x_n^{(k-1)} \right)^t

We have seen that the m-th component of r_i^{(k)} is

    r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i}^{n} a_{mj} x_j^{(k-1)}

A 2nd Connection with Residual Vectors (Cont'd)
Therefore, the i-th component of r_{i+1}^{(k)} is

    r_{i,i+1}^{(k)} = b_i - \sum_{j=1}^{i} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)}

                    = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k)}

A 2nd Connection with Residual Vectors (Cont'd)
By the manner in which x_i^{(k)} is defined in

    x_i^{(k)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]

we see that r_{i,i+1}^{(k)} = 0. In a sense, then, the Gauss-Seidel technique is characterized by choosing each x_{i+1}^{(k)} in such a way that the i-th component of r_{i+1}^{(k)} is zero.
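This characterization is easy to verify numerically. The sketch below (my own construction, using the 3 × 3 system that appears later in these slides) performs Gauss-Seidel updates one component at a time and checks that, immediately after component i is updated, the i-th residual component vanishes:

```python
import numpy as np

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([24.0, 30.0, -24.0])

x = np.array([1.0, 1.0, 1.0])      # current iterate x^{(k-1)}

for i in range(len(b)):
    # Gauss-Seidel update: components 0..i-1 already hold x^{(k)} values
    x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    r = b - A @ x                  # residual of the partially updated vector
    assert abs(r[i]) < 1e-12       # the i-th component has just been zeroed
```

Note that only the most recently updated component of the residual is zero; earlier components become nonzero again as later components of x change.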


From Gauss-Seidel to Relaxation Methods

Reducing the Norm of the Residual Vector
Choosing x_{i+1}^{(k)} so that one coordinate of the residual vector is zero, however, is not necessarily the most efficient way to reduce the norm of the vector r_{i+1}^{(k)}.

If we modify the Gauss-Seidel procedure, as given by

    x_i^{(k)} = x_i^{(k-1)} + \frac{r_{ii}^{(k)}}{a_{ii}}

to

    x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}

then for certain choices of positive ω we can reduce the norm of the residual vector and obtain significantly faster convergence.

Introducing the SOR Method
Methods involving

    x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}

are called relaxation methods.

For choices of ω with 0 < ω < 1, the procedures are called under-relaxation methods.

We will be interested in choices of ω with 1 < ω, and these are called over-relaxation methods. They are used to accelerate the convergence for systems that are convergent by the Gauss-Seidel technique.

The methods are abbreviated SOR, for Successive Over-Relaxation, and are particularly useful for solving the linear systems that occur in the numerical solution of certain partial-differential equations.

The SOR Method

A More Computationally-Efficient Formulation
Note that by using the i-th component of r_i^{(k)} in the form

    r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k-1)}

we can reformulate the SOR equation

    x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}

for calculation purposes as

    x_i^{(k)} = (1 - \omega) x_i^{(k-1)} + \frac{\omega}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]
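As a sketch of this formula in code (my own; `sor_sweep` is a hypothetical helper name, and a dense NumPy array is assumed), one sweep updates each component in turn, with components j < i already holding their new values:

```python
import numpy as np

def sor_sweep(A, b, x, omega):
    """One SOR iteration, updating x in place:
    x_i <- (1 - omega) x_i + (omega / a_ii) (b_i - sum_{j<i} a_ij x_j
                                                  - sum_{j>i} a_ij x_j)
    where the j < i terms use the already-updated components."""
    n = len(b)
    for i in range(n):
        sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
        x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

# The 3x3 system solved later in these slides, with omega = 1.25
A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([24.0, 30.0, -24.0])
x = np.array([1.0, 1.0, 1.0])
for _ in range(20):
    sor_sweep(A, b, x, omega=1.25)
print(x)   # converges toward the exact solution (3, 4, -5)
```

Setting omega = 1.0 in this sweep recovers the plain Gauss-Seidel update.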

A More Computationally-Efficient Formulation (Cont'd)
To determine the matrix form of the SOR method, we rewrite

    x_i^{(k)} = (1 - \omega) x_i^{(k-1)} + \frac{\omega}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]

as

    a_{ii} x_i^{(k)} + \omega \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} = (1 - \omega) a_{ii} x_i^{(k-1)} - \omega \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} + \omega b_i

A More Computationally-Efficient Formulation (Cont'd)
In vector form, we therefore have

    (D - \omega L) x^{(k)} = [(1 - \omega) D + \omega U] x^{(k-1)} + \omega b

from which we obtain:

The SOR Method

    x^{(k)} = (D - \omega L)^{-1} [(1 - \omega) D + \omega U] x^{(k-1)} + \omega (D - \omega L)^{-1} b

Letting

    T_\omega = (D - \omega L)^{-1} [(1 - \omega) D + \omega U]

and

    c_\omega = \omega (D - \omega L)^{-1} b

gives the SOR technique the form

    x^{(k)} = T_\omega x^{(k-1)} + c_\omega
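For a small system these matrices can be formed explicitly. The sketch below (my own; note that in the splitting A = D − L − U, the matrices L and U are the negatives of the strict lower and upper triangular parts of A) builds T_ω and c_ω with NumPy and iterates the fixed-point form:

```python
import numpy as np

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([24.0, 30.0, -24.0])
omega = 1.25

# Splitting A = D - L - U: L and U are the negated strict triangular parts
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

M = D - omega * L
T_w = np.linalg.solve(M, (1 - omega) * D + omega * U)   # T_omega
c_w = omega * np.linalg.solve(M, b)                     # c_omega

rho = np.max(np.abs(np.linalg.eigvals(T_w)))            # spectral radius of T_omega

x = np.array([1.0, 1.0, 1.0])
for _ in range(50):
    x = T_w @ x + c_w        # x^{(k)} = T_omega x^{(k-1)} + c_omega
```

Since ρ(T_ω) < 1 here, the iteration converges to the fixed point, which is the solution of Ax = b. In practice one uses the componentwise sweep rather than forming T_ω; the matrix form is mainly an analysis tool.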

Example
The linear system Ax = b given by

    4x_1 + 3x_2 = 24
    3x_1 + 4x_2 - x_3 = 30
    -x_2 + 4x_3 = -24

has the solution (3, 4, -5)^t.

Compare the iterations from the Gauss-Seidel method and the SOR method with ω = 1.25, using x^{(0)} = (1, 1, 1)^t for both methods.

Solution (1/3)
For each k = 1, 2, \ldots, the equations for the Gauss-Seidel method are

    x_1^{(k)} = -0.75 x_2^{(k-1)} + 6
    x_2^{(k)} = -0.75 x_1^{(k)} + 0.25 x_3^{(k-1)} + 7.5
    x_3^{(k)} = 0.25 x_2^{(k)} - 6

and the equations for the SOR method with ω = 1.25 are

    x_1^{(k)} = -0.25 x_1^{(k-1)} - 0.9375 x_2^{(k-1)} + 7.5
    x_2^{(k)} = -0.9375 x_1^{(k)} - 0.25 x_2^{(k-1)} + 0.3125 x_3^{(k-1)} + 9.375
    x_3^{(k)} = 0.3125 x_2^{(k)} - 0.25 x_3^{(k-1)} - 7.5

Solution (2/3)

Gauss-Seidel Iterations

    k          0    1           2           3          ...   7
    x_1^{(k)}  1    5.250000    3.1406250   3.0878906  ...   3.0134110
    x_2^{(k)}  1    3.812500    3.8828125   3.9267578  ...   3.9888241
    x_3^{(k)}  1   -5.046875   -5.0292969  -5.0183105  ...  -5.0027940

SOR Iterations (ω = 1.25)

    k          0    1           2           3          ...   7
    x_1^{(k)}  1    6.312500    2.6223145   3.1333027  ...   3.0000498
    x_2^{(k)}  1    3.5195313   3.9585266   4.0102646  ...   4.0002586
    x_3^{(k)}  1   -6.6501465  -4.6004238  -5.0966863  ...  -5.0003486

Solution (3/3)
For the iterates to be accurate to 7 decimal places, the Gauss-Seidel method requires 34 iterations, as opposed to 14 iterations for the SOR method with ω = 1.25.
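This comparison can be reproduced with a short experiment (my own sketch; the stopping rule, successive iterates agreeing to within a tolerance in the infinity norm, is my assumption, so the exact counts may differ slightly from the 34 and 14 quoted above):

```python
import numpy as np

def iterate(A, b, x0, omega, tol=5e-8, max_iter=200):
    """Run SOR sweeps (omega = 1 gives Gauss-Seidel) until successive
    iterates agree to within tol in the infinity norm."""
    x = x0.copy()
    n = len(b)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, k
    return x, max_iter

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([24.0, 30.0, -24.0])
x0 = np.array([1.0, 1.0, 1.0])

x_gs, n_gs = iterate(A, b, x0, omega=1.0)
x_sor, n_sor = iterate(A, b, x0, omega=1.25)
print(n_gs, n_sor)   # Gauss-Seidel needs noticeably more sweeps than SOR
```

Both runs converge to (3, 4, −5), with the SOR count well below the Gauss-Seidel count, consistent with the spectral radii of the two iteration matrices.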


Choosing the Optimal Value of ω

An obvious question to ask is how the appropriate value of ω is chosen when the SOR method is used.

Although no complete answer to this question is known for the general n × n linear system, the following results can be used in certain important situations.

Theorem (Kahan)
If a_{ii} \neq 0 for each i = 1, 2, \ldots, n, then \rho(T_\omega) \geq |\omega - 1|. This implies that the SOR method can converge only if 0 < \omega < 2.

The proof of this theorem is considered in Exercise 9, Chapter 7 of Burden, R. L. & Faires, J. D., Numerical Analysis, 9th Ed., Cengage, 2011.

Theorem (Ostrowski-Reich)
If A is a positive definite matrix and 0 < \omega < 2, then the SOR method converges for any choice of initial approximate vector x^{(0)}.

The proof of this theorem can be found in Ortega, J. M., Numerical Analysis: A Second Course, Academic Press, New York, 1972.
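Kahan's bound can be checked numerically on the tridiagonal example used elsewhere in these slides (my own sketch; the matrix and the sampled ω values, including one outside (0, 2), are illustrative):

```python
import numpy as np

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])

# Splitting A = D - L - U
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

def spectral_radius_T(omega):
    """Spectral radius of T_omega = (D - omega L)^{-1}[(1 - omega)D + omega U]."""
    M = D - omega * L
    T = np.linalg.solve(M, (1 - omega) * D + omega * U)
    return np.max(np.abs(np.linalg.eigvals(T)))

# Kahan's bound: rho(T_omega) >= |omega - 1| for every omega
for omega in [0.5, 1.0, 1.25, 1.9, 2.5]:
    assert spectral_radius_T(omega) >= abs(omega - 1) - 1e-12
```

For ω outside (0, 2) the bound forces ρ(T_ω) ≥ 1, so the iteration cannot converge, which is exactly the restriction the theorem states.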

Theorem
If A is positive definite and tridiagonal, then \rho(T_g) = [\rho(T_j)]^2 < 1, and the optimal choice of ω for the SOR method is

    \omega = \frac{2}{1 + \sqrt{1 - [\rho(T_j)]^2}}

With this choice of ω, we have \rho(T_\omega) = \omega - 1.

The proof of this theorem can be found in Ortega, J. M., Numerical Analysis: A Second Course, Academic Press, New York, 1972.

Example
Find the optimal choice of ω for the SOR method for the matrix

    A = \begin{pmatrix} 4 & 3 & 0 \\ 3 & 4 & -1 \\ 0 & -1 & 4 \end{pmatrix}

Solution (1/3)
This matrix is clearly tridiagonal, so we can apply the result in the SOR theorem if we can also show that it is positive definite.

Because the matrix is symmetric, the theory tells us that it is positive definite if and only if all of its leading principal submatrices have positive determinants. This is easily seen to be the case because

    \det(A) = 24,  \quad \det\begin{pmatrix} 4 & 3 \\ 3 & 4 \end{pmatrix} = 7,  \quad \det([4]) = 4

Residual Vectors

SOR Method

Optimal

SOR Algorithm

The SOR Method


Solution (2/3)
We compute
Tj

Numerical Analysis (Chapter 7)

= D 1 (L + U)

Relaxation Techniques

R L Burden & J D Faires

31 / 36

Residual Vectors

SOR Method

Optimal

SOR Algorithm

The SOR Method


Solution (2/3)
We compute
Tj

= D 1 (L + U)

0 3 0
4 0 0

0 1
= 0 14 0 3
0
1 0
0 0 1
4

Numerical Analysis (Chapter 7)

Relaxation Techniques

R L Burden & J D Faires

31 / 36

Residual Vectors

SOR Method

Optimal

SOR Algorithm

The SOR Method


Solution (2/3)
We compute
Tj

Numerical Analysis (Chapter 7)

= D 1 (L + U)

0 3 0
4 0 0

0 1
= 0 14 0 3
0
1 0
0 0 14

0
0.75 0
0
0.25
= 0.75
0
0.25 0

Relaxation Techniques

R L Burden & J D Faires

31 / 36

Residual Vectors

SOR Method

Optimal

SOR Algorithm

The SOR Method


Solution (2/3)
We compute

T_j = D^{-1}(L + U)
    = \begin{bmatrix} \tfrac14 & 0 & 0 \\ 0 & \tfrac14 & 0 \\ 0 & 0 & \tfrac14 \end{bmatrix}
      \begin{bmatrix} 0 & -3 & 0 \\ -3 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}
    = \begin{bmatrix} 0 & -0.75 & 0 \\ -0.75 & 0 & 0.25 \\ 0 & 0.25 & 0 \end{bmatrix}

so that

T_j - \lambda I = \begin{bmatrix} -\lambda & -0.75 & 0 \\ -0.75 & -\lambda & 0.25 \\ 0 & 0.25 & -\lambda \end{bmatrix}

The SOR Method


Solution (3/3)
Therefore

\det(T_j - \lambda I)
  = \det\begin{bmatrix} -\lambda & -0.75 & 0 \\ -0.75 & -\lambda & 0.25 \\ 0 & 0.25 & -\lambda \end{bmatrix}
  = -\lambda(\lambda^2 - 0.625)

Thus

\rho(T_j) = \sqrt{0.625}

and

\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_j)]^2}}
       = \frac{2}{1 + \sqrt{1 - 0.625}}
       \approx 1.24.

This explains the rapid convergence obtained in the last example when
using \omega = 1.25.
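The optimal-ω formula is easy to evaluate directly; a quick sketch using the spectral radius computed above:

```python
import math

# Spectral radius of the Jacobi iteration matrix from the slide.
rho = math.sqrt(0.625)

# Optimal relaxation parameter for a tridiagonal, positive definite A.
omega = 2.0 / (1.0 + math.sqrt(1.0 - rho ** 2))
print(omega)  # about 1.2404
```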
The SOR Algorithm (1/2)


To solve

    Ax = b

given the parameter ω and an initial approximation x^(0):

INPUT   the number of equations and unknowns n;
        the entries a_ij, 1 ≤ i, j ≤ n, of the matrix A;
        the entries b_i, 1 ≤ i ≤ n, of b;
        the entries XO_i, 1 ≤ i ≤ n, of XO = x^(0);
        the parameter ω; tolerance TOL;
        maximum number of iterations N.

OUTPUT  the approximate solution x_1, …, x_n, or a message
        that the number of iterations was exceeded.


The SOR Algorithm (2/2)


Step 1  Set k = 1.
Step 2  While (k ≤ N) do Steps 3–6:
Step 3      For i = 1, …, n set

            x_i = (1 - \omega)\,XO_i
                  + \frac{\omega}{a_{ii}} \Big[ b_i
                  - \sum_{j=1}^{i-1} a_{ij} x_j
                  - \sum_{j=i+1}^{n} a_{ij}\,XO_j \Big]

Step 4      If ||x − XO|| < TOL then OUTPUT (x_1, …, x_n);
            STOP. (The procedure was successful.)
Step 5      Set k = k + 1.
Step 6      For i = 1, …, n set XO_i = x_i.
Step 7  OUTPUT ('Maximum number of iterations exceeded');
        STOP. (The procedure was unsuccessful.)
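Steps 1–7 translate almost line-for-line into code. Here is a minimal Python sketch; the test system A, b and the solution (3, 4, −5) are assumed from the chapter's worked example:

```python
import numpy as np

def sor(A, b, omega, x0, tol=1e-7, max_iter=100):
    """Successive Over-Relaxation following the slide's Steps 1-7."""
    n = len(b)
    XO = np.array(x0, dtype=float)
    for k in range(1, max_iter + 1):          # Steps 1-2
        x = np.empty(n)
        for i in range(n):                    # Step 3
            s1 = sum(A[i, j] * x[j] for j in range(i))          # new values
            s2 = sum(A[i, j] * XO[j] for j in range(i + 1, n))  # old values
            x[i] = (1 - omega) * XO[i] + omega * (b[i] - s1 - s2) / A[i, i]
        if np.linalg.norm(x - XO, ord=np.inf) < tol:   # Step 4
            return x, k
        XO = x                                # Steps 5-6
    raise RuntimeError("Maximum number of iterations exceeded")  # Step 7

A = np.array([[4.0, 3.0, 0.0], [3.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([24.0, 30.0, -24.0])
x, k = sor(A, b, omega=1.25, x0=[1.0, 1.0, 1.0])
print(x, k)  # converges to [3, 4, -5]
```

With ω = 1.25 this converges rapidly, consistent with the optimal value ω ≈ 1.24 derived above.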

Questions?