
POISSON’S EQUATION - DISCRETIZATION

The Dirichlet boundary value problem for Poisson's equation is given by
$$
\begin{aligned}
\Delta u(x,y) &= g(x,y), & (x,y) &\in R \\
u(x,y) &= f(x,y), & (x,y) &\in \Gamma
\end{aligned}
\tag{1}
$$
where $R$ is a planar region and $\Gamma$ is the boundary of $R$. In the book in §8.8, we examine this problem for $R = \{(x,y) \mid 0 < x, y < 1\}$.

For an integer $N > 1$, we introduce a mesh size $h = 1/N$. With it, we define a rectangular mesh on $\overline{R} = R \cup \Gamma$:
$$
(x_j, y_k) = \left(\frac{j}{N}, \frac{k}{N}\right), \qquad j, k = 0, 1, \dots, N
$$
At mesh points (= grid points) inside of $R$, we approximate the equation $\Delta u = g$. To do so, we recall the approximation
$$
G''(x) = \frac{G(x+h) - 2G(x) + G(x-h)}{h^2} - \frac{h^2}{12}\, G^{(4)}(\xi)
$$
with some $\xi \in [x-h, x+h]$. This assumes $G$ is four times continuously differentiable on $[x-h, x+h]$.
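As a quick check of this error behavior, here is a small sketch (my own illustration, not from the book) that applies the difference quotient to $G(x) = \sin x$:

```python
import numpy as np

# Central second-difference quotient approximating G''(x).
def second_diff(G, x, h):
    return (G(x + h) - 2 * G(x) + G(x - h)) / h**2

# Test on G(x) = sin(x), whose second derivative is -sin(x).
x = 1.0
exact = -np.sin(x)
for h in [0.1, 0.05, 0.025]:
    err = abs(second_diff(np.sin, x, h) - exact)
    # The error should shrink by a factor of about 4 when h is halved.
    print(f"h = {h:6.3f}   error = {err:.3e}")
```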
Since
$$
\Delta u(x,y) = \frac{\partial^2 u(x,y)}{\partial x^2} + \frac{\partial^2 u(x,y)}{\partial y^2}
$$
we obtain at each $(x_j, y_k) \in R$ that
$$
\begin{aligned}
&\frac{u(x_{j+1}, y_k) - 2u(x_j, y_k) + u(x_{j-1}, y_k)}{h^2}
+ \frac{u(x_j, y_{k+1}) - 2u(x_j, y_k) + u(x_j, y_{k-1})}{h^2} \\
&\qquad - \frac{h^2}{12}\left[\frac{\partial^4 u(\xi_j, y_k)}{\partial x^4} + \frac{\partial^4 u(x_j, \eta_k)}{\partial y^4}\right]
= g(x_j, y_k)
\end{aligned}
\tag{2}
$$
   
with $\xi_j \in [x_{j-1}, x_{j+1}]$, $\eta_k \in [y_{k-1}, y_{k+1}]$. We can drop the error terms involving the fourth derivatives of $u$. This leads to a new set of equations for an approximating solution $u_h \approx u$, with interior unknowns
$$
\left\{ u_h(x_j, y_k) \mid 0 < j, k < N \right\}
$$

The values of $u_h$ at boundary mesh points on $\Gamma$ are given by
$$
u_h(x_j, y_k) = f(x_j, y_k), \qquad (x_j, y_k) \in \Gamma
\tag{3}
$$
This leads to the linear system
$$
\begin{aligned}
\frac{u_h(x_{j+1}, y_k) - 2u_h(x_j, y_k) + u_h(x_{j-1}, y_k)}{h^2}
&+ \frac{u_h(x_j, y_{k+1}) - 2u_h(x_j, y_k) + u_h(x_j, y_{k-1})}{h^2}
= g(x_j, y_k), && (x_j, y_k) \in R \\
u_h(x_j, y_k) &= f(x_j, y_k), && (x_j, y_k) \in \Gamma
\end{aligned}
\tag{4}
$$


For interior mesh points, we can simplify this to
$$
u_h(x_j, y_{k-1}) + u_h(x_{j-1}, y_k) - 4u_h(x_j, y_k) + u_h(x_{j+1}, y_k) + u_h(x_j, y_{k+1}) = h^2 g(x_j, y_k), \qquad (x_j, y_k) \in R
\tag{5}
$$
Thus we have a system of $(N+1)^2$ linear equations in the $(N+1)^2$ unknowns
$$
\left\{ u_h(x_j, y_k) \mid 0 \le j, k \le N \right\}
$$

If $u(x,y)$ is four times continuously differentiable over $\overline{R}$, it can be shown that for some $c$,
$$
\max_{\overline{R}} \left| u(x_j, y_k) - u_h(x_j, y_k) \right| \le c\, h^2
$$
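To make the discretization concrete, the following sketch (an illustration of mine, not the book's code) assembles the five-point scheme (5) as a sparse linear system, folds the boundary values into the right-hand side, and checks the $O(h^2)$ bound on a manufactured solution:

```python
import numpy as np
from scipy.sparse import identity, kron, diags
from scipy.sparse.linalg import spsolve

def solve_poisson(N, g, f):
    """Solve the five-point scheme (5) on the unit square with h = 1/N.
    g, f are vectorized functions; returns the (N-1)x(N-1) interior values."""
    h = 1.0 / N
    x = np.linspace(0, 1, N + 1)
    # 1D second-difference matrix tridiag(1, -2, 1) of order N-1
    D = diags([1, -2, 1], [-1, 0, 1], shape=(N - 1, N - 1))
    I = identity(N - 1)
    A = kron(I, D) + kron(D, I)          # h^2-scaled discrete Laplacian
    X, Y = np.meshgrid(x[1:-1], x[1:-1], indexing="ij")
    b = h**2 * g(X, Y)
    # move the known boundary values of (3) to the right-hand side
    b[0, :]  -= f(x[0] * np.ones(N - 1), x[1:-1])
    b[-1, :] -= f(x[-1] * np.ones(N - 1), x[1:-1])
    b[:, 0]  -= f(x[1:-1], x[0] * np.ones(N - 1))
    b[:, -1] -= f(x[1:-1], x[-1] * np.ones(N - 1))
    return spsolve(A.tocsr(), b.ravel()).reshape(N - 1, N - 1)

# Manufactured test: u = sin(pi x) sin(pi y), so Delta u = -2 pi^2 u.
u_true = lambda x, y: np.sin(np.pi * x) * np.sin(np.pi * y)
g = lambda x, y: -2 * np.pi**2 * u_true(x, y)
for N in [10, 20, 40]:
    x = np.linspace(0, 1, N + 1)[1:-1]
    X, Y = np.meshgrid(x, x, indexing="ij")
    err = np.abs(solve_poisson(N, g, u_true) - u_true(X, Y)).max()
    print(N, err)   # the error should drop by ~4 when N doubles
```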
JACOBI ITERATION

For $m = 0, 1, 2, \dots$, define
$$
\begin{aligned}
u_h^{(m+1)}(x_j, y_k) = \frac{1}{4}\Big[ & u_h^{(m)}(x_j, y_{k-1}) + u_h^{(m)}(x_{j-1}, y_k) \\
& + u_h^{(m)}(x_{j+1}, y_k) + u_h^{(m)}(x_j, y_{k+1}) - h^2 g(x_j, y_k) \Big], \qquad (x_j, y_k) \in R
\end{aligned}
\tag{6}
$$
For values of $u_h^{(m)}$ at boundary points, use the given boundary conditions as in (3).
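A minimal sketch of one Jacobi sweep (6) on the full grid array, assuming the boundary entries of `U` have already been set to $f$ (my own illustration):

```python
import numpy as np

def jacobi_sweep(U, g_grid, h):
    """One Jacobi iteration (6) on the full (N+1)x(N+1) array U, with
    U[j, k] ~ u_h(x_j, y_k).  Boundary entries hold f and are untouched."""
    V = U.copy()
    V[1:-1, 1:-1] = 0.25 * (U[1:-1, :-2] + U[:-2, 1:-1]
                            + U[2:, 1:-1] + U[1:-1, 2:]
                            - h**2 * g_grid[1:-1, 1:-1])
    return V
```

Iterating `U = jacobi_sweep(U, G, h)` until successive iterates agree to within a tolerance implements (6).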

We can write the iteration in matrix-vector form as
$$
u_h^{(m+1)} = Q u_h^{(m)} + G_h
$$
$$
Q = \frac{1}{4}
\begin{bmatrix}
T & I & 0 & \cdots & 0 \\
I & T & I & & \vdots \\
0 & I & T & \ddots & \\
\vdots & & \ddots & \ddots & I \\
0 & \cdots & & I & T
\end{bmatrix}
$$
with $I$ the identity of order $N - 1$,
$$
T =
\begin{bmatrix}
0 & 1 & & \cdots & 0 \\
1 & 0 & 1 & & \\
& \ddots & \ddots & \ddots & \\
& & 1 & 0 & 1 \\
0 & \cdots & & 1 & 0
\end{bmatrix}
$$
 
$$
u_h^{(m)} =
\begin{bmatrix}
u_h^{(m)}(*, y_1) \\
u_h^{(m)}(*, y_2) \\
\vdots \\
u_h^{(m)}(*, y_{N-1})
\end{bmatrix},
\qquad
u_h^{(m)}(*, y_k) =
\begin{bmatrix}
u_h^{(m)}(x_1, y_k) \\
u_h^{(m)}(x_2, y_k) \\
\vdots \\
u_h^{(m)}(x_{N-1}, y_k)
\end{bmatrix}
$$


$$
G_h =
\begin{bmatrix}
\frac{1}{4}\left[ B(y_1) - h^2 G_h(*, y_1) \right] \\
\frac{1}{4}\left[ B(y_2) - h^2 G_h(*, y_2) \right] \\
\vdots \\
\frac{1}{4}\left[ B(y_{N-1}) - h^2 G_h(*, y_{N-1}) \right]
\end{bmatrix}
$$
$$
B(y_k) =
\begin{bmatrix}
f(x_0, y_k) \\
0 \\
\vdots \\
0 \\
f(x_N, y_k)
\end{bmatrix},
\qquad
G_h(*, y_k) =
\begin{bmatrix}
g(x_1, y_k) \\
g(x_2, y_k) \\
\vdots \\
g(x_{N-1}, y_k)
\end{bmatrix}
$$
The original discretization (4)-(5) can be rewritten as
$$
u_h = Q u_h + G_h
\tag{7}
$$
with $u_h$ partitioned in accordance with the above definition of $u_h^{(m)}$, and the Jacobi iteration (6) becomes
$$
u_h^{(m+1)} = Q u_h^{(m)} + G_h
\tag{8}
$$
In this block matrix description of the system being solved, each block $u_h(*, y_k)$ corresponds to the horizontal row of interior grid points at $y = y_k$, with the entries of $u_h(*, y_k)$ ordered to match the left-to-right numbering of the grid points in that row. Note that $Q$ is a symmetric matrix of order $(N-1)^2$.

If we write our linear system (7) in the form $Ax = b$, we have $A = I - Q$, again a symmetric matrix. With reference to the Gauss-Seidel iteration, note that the diagonal elements of $A$ are positive.
For the iteration error $e_h^{(m)}$ in the Jacobi iteration, subtract (8) from (7), obtaining
$$
e_h^{(m+1)} = Q e_h^{(m)}, \qquad m = 0, 1, \dots
\tag{9}
$$
The matrix $Q$ is the earlier matrix $M$ of our general framework, in the case of the Gauss-Jacobi iteration. Thus we have convergence of the Jacobi iteration if and only if
$$
r_\sigma(Q) < 1
$$
Note in this case that
$$
\|Q\|_1 = \|Q\|_\infty = 1
$$
Thus we are driven to looking at $r_\sigma(Q)$; and in this case $\|Q\|_2 = r_\sigma(Q)$.
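As a numerical check of these claims, one can assemble $Q$ explicitly and compare its spectral radius with the value $\xi$ defined in (11) below; a small sketch of mine:

```python
import numpy as np
from scipy.sparse import diags, identity, kron

# Assemble Q for a given N and compare r_sigma(Q) with xi = 1 - 2 sin^2(pi/2N).
N = 10
T = diags([1, 0, 1], [-1, 0, 1], shape=(N - 1, N - 1))
I = identity(N - 1)
Qd = (0.25 * (kron(I, T) + kron(T, I))).toarray()
r_sigma = max(abs(np.linalg.eigvalsh(Qd)))      # Q is symmetric
xi = 1 - 2 * np.sin(np.pi / (2 * N)) ** 2
print(r_sigma, xi)                    # both print 0.9510565...
print(np.abs(Qd).sum(axis=1).max())   # infinity norm: 1.0
```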
GAUSS-SEIDEL ITERATION

For $m = 0, 1, 2, \dots$, define
$$
\begin{aligned}
u_h^{(m+1)}(x_j, y_k) = \frac{1}{4}\Big[ & u_h^{(m+1)}(x_j, y_{k-1}) + u_h^{(m+1)}(x_{j-1}, y_k) \\
& + u_h^{(m)}(x_{j+1}, y_k) + u_h^{(m)}(x_j, y_{k+1}) - h^2 g(x_j, y_k) \Big], \qquad (x_j, y_k) \in R
\end{aligned}
\tag{10}
$$
For values of $u_h^{(m)}$ at boundary points, use the given boundary conditions as in (3).
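A corresponding in-place sweep for (10), under the same assumptions as the Jacobi sketch above:

```python
import numpy as np

def gauss_seidel_sweep(U, g_grid, h):
    """One Gauss-Seidel iteration (10), updating U in place.
    Sweeping j, k in increasing order means the (j-1, k) and (j, k-1)
    neighbors have already been updated, exactly as (10) requires."""
    N = U.shape[0] - 1
    for k in range(1, N):
        for j in range(1, N):
            U[j, k] = 0.25 * (U[j, k - 1] + U[j - 1, k]
                              + U[j + 1, k] + U[j, k + 1]
                              - h**2 * g_grid[j, k])
    return U
```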

We can also use the above matrices to write this iteration in matrix-vector form. In the language of the general framework given earlier, it amounts to letting $N$ (not to be confused with the mesh parameter) be the lower triangular portion of $A = I - Q$ and $P = N - A$. Then write
$$
N u_h^{(m+1)} = G_h + P u_h^{(m)}
$$
RATES OF CONVERGENCE

Introduce
$$
\xi = 1 - 2\sin^2\left(\frac{\pi}{2N}\right)
\tag{11}
$$
$$
\omega^* = \frac{2}{1 + \sqrt{1 - \xi^2}}
$$
Then for the rates of convergence of the various iteration methods we have studied, when applied to solving (7), $u_h = Q u_h + G_h$, we have

Method          Rate of convergence $r_\sigma(M)$
Jacobi          $\xi$
Gauss-Seidel    $\xi^2$
SOR             $\omega^* - 1$
THE RATE OF CONVERGENCE OF SOR

In the language of the general framework given earlier, the error equation for the SOR method will be
$$
e_h^{(m+1)} = M(\omega)\, e_h^{(m)}, \qquad m = 0, 1, \dots
\tag{12}
$$
The eigenvalues of $M(\omega)$ can be computed explicitly in this case, yielding
$$
r_\sigma(M(\omega)) =
\begin{cases}
\left[ \dfrac{\omega\xi + \sqrt{(\omega\xi)^2 + 4(1-\omega)}}{2} \right]^2, & 0 \le \omega \le \omega^* \\[2ex]
\omega - 1, & \omega^* \le \omega \le 2
\end{cases}
$$
We give graphs of $r_\sigma(M(\omega))$ for a few cases of the discretization parameter $N$; see Figures 1 and 2 at the end of these notes.
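The curves in those figures can be regenerated from the piecewise formula above; a short sketch of mine (assuming matplotlib is available):

```python
import numpy as np
import matplotlib.pyplot as plt

N = 40
xi = 1 - 2 * np.sin(np.pi / (2 * N)) ** 2
w_star = 2 / (1 + np.sqrt(1 - xi**2))      # 1.85450 for N = 40

def r_sigma(w):
    # piecewise formula for the SOR spectral radius
    if w <= w_star:
        return ((w * xi + np.sqrt((w * xi) ** 2 + 4 * (1 - w))) / 2) ** 2
    return w - 1

w = np.linspace(0.01, 1.99, 400)
plt.plot(w, [r_sigma(t) for t in w])
plt.xlabel(r"$\omega$"); plt.ylabel(r"$r_\sigma(M(\omega))$")
plt.show()
```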
COSTS

Recall the notation from §8.7 regarding costs:
$$
\text{Cost}(c, \varepsilon, n) = m^* \nu(n)
$$
with $c$ the factor by which the error decreases per iterate, $n$ the order of the system, $\varepsilon$ the error tolerance in
$$
\left\| x^{(m)} - x \right\| \le \varepsilon \left\| x^{(0)} - x \right\|
$$
$\nu(n)$ the number of arithmetic operations per iterate calculation, and
$$
m^* = \frac{-\log \varepsilon}{R(c)}, \qquad R(c) = -\log c
$$
For $c = 1 - \delta$ with $\delta$ small, we have
$$
R(c) \doteq \delta, \qquad c = 1 - \delta, \quad \delta \approx 0
$$
Note that for the Poisson example, $\nu(n) \approx 5n$, and thus the total cost of attaining the desired accuracy is
$$
\text{Cost}(c, \varepsilon, n) = 5n\, m^*
$$
APPLICATION TO SOLVING POISSON'S EQUATION

Gauss-Jacobi:
$$
c = \xi = 1 - \tfrac{1}{2}\pi^2 h^2 + O\left(h^4\right), \qquad \delta \approx \tfrac{1}{2}\pi^2 h^2
$$
Gauss-Seidel:
$$
c = \xi^2 = 1 - \pi^2 h^2 + O\left(h^4\right), \qquad \delta \approx \pi^2 h^2
$$
SOR:
$$
c = \omega^* - 1 = 1 - 2\pi h + O\left(h^2\right), \qquad \delta \approx 2\pi h
$$
The improvement in the needed numbers of iterates is as follows, along with total costs for a desired accuracy.

Gauss-Jacobi vs. Gauss-Seidel: Gauss-Seidel requires approximately half the iterates required of Gauss-Jacobi. For the Gauss-Seidel iteration,
$$
\text{Cost}(c, \varepsilon, n) \approx -\frac{\log \varepsilon}{\pi^2 h^2}\, 5n
= -\frac{5(N-1)^2 \log \varepsilon}{\pi^2 (N-1)^{-2}}
\approx -\frac{5N^4 \log \varepsilon}{\pi^2}
$$

Gauss-Seidel vs. SOR: The factor giving the decrease in the number of iterates to be computed is
$$
\frac{2\pi h}{\pi^2 h^2} = \frac{2}{\pi h}
$$
Thus you are increasingly better off as $h \to 0$. For the cost of SOR,
$$
\text{Cost}(c, \varepsilon, n) \approx -\frac{\log \varepsilon}{2\pi h}\, 5n
= -\frac{5(N-1)^2 \log \varepsilon}{2\pi (N-1)^{-1}}
\approx -\frac{5N^3 \log \varepsilon}{2\pi}
$$
The costs increase rapidly with $N$.
Let $K_J$, $K_{GS}$, $K_{SOR}$ denote the number of iterations needed to reduce the iteration error by a factor of 10, for the Jacobi, Gauss-Seidel, and SOR methods, respectively. The following table shows the results for several values of $N$.

                   N = 10      N = 20      N = 40      N = 80
$\xi$              .9510565    .9876883    .9969173    .99922904
$\xi^2$            .9045085    .9755283    .9938442    .99845867
$\omega^* - 1$     .5278640    .7294538    .8544978    .92444658
$K_J$              46          186         746         2985
$K_{GS}$           23          93          373         1493
$K_{SOR}$          4           8           15          31
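These counts are consistent with $K = \lceil \log 10 / R(c) \rceil$ for each method's rate $c$. The following sketch of mine recomputes the table (the rounding convention may differ from the table by one iterate in a couple of entries):

```python
import numpy as np

Ns = [10, 20, 40, 80]
print("method  " + "".join(f"{'N=%d' % N:>12s}" for N in Ns))
for name, f in [("xi",   lambda x: x),                               # Jacobi
                ("xi^2", lambda x: x**2),                            # Gauss-Seidel
                ("w*-1", lambda x: 2 / (1 + np.sqrt(1 - x**2)) - 1)]:  # SOR
    cs = [f(1 - 2 * np.sin(np.pi / (2 * N)) ** 2) for N in Ns]
    print(f"{name:8s}" + "".join(f"{c:12.7f}" for c in cs))
    # iterations to reduce the error by a factor of 10
    Ks = [int(np.ceil(np.log(10) / -np.log(c))) for c in cs]
    print(f"{'K':8s}" + "".join(f"{K:12d}" for K in Ks))
```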
AN INITIAL GUESS

How do we generate an initial guess $u_h^{(0)}$? Since we know the solution on the boundary,
$$
u_h(x_j, y_k) = f(x_j, y_k), \qquad (x_j, y_k) \in \Gamma
$$
we use an "interpolant" of this data to extend the boundary values to all of $\overline{R}$. Define
$$
\begin{aligned}
u_h^{(0)}(x, y) = {}& (1-x) f(0, y) + x f(1, y) + (1-y) f(x, 0) + y f(x, 1) \\
& - \left[ (1-y)(1-x) f(0,0) + x y f(1,1) + (1-y) x f(1,0) + y(1-x) f(0,1) \right]
\end{aligned}
\tag{13}
$$
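A direct transcription of (13) into code (a sketch; `f` is assumed to be a function of the boundary data that accepts NumPy arrays):

```python
import numpy as np

def initial_guess(f, N):
    """Evaluate the blending interpolant (13) of the boundary data f on
    the full (N+1)x(N+1) grid; it reproduces f exactly on the boundary."""
    t = np.linspace(0.0, 1.0, N + 1)
    X, Y = np.meshgrid(t, t, indexing="ij")
    return ((1 - X) * f(0, Y) + X * f(1, Y)
            + (1 - Y) * f(X, 0) + Y * f(X, 1)
            - ((1 - Y) * (1 - X) * f(0, 0) + X * Y * f(1, 1)
               + (1 - Y) * X * f(1, 0) + Y * (1 - X) * f(0, 1)))
```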
LINE ITERATION

Recall the discretization (8.8.5):
$$
u_h(x_j, y_k) = \frac{1}{4}\left[ u_h(x_{j+1}, y_k) + u_h(x_j, y_{k+1}) + u_h(x_{j-1}, y_k) + u_h(x_j, y_{k-1}) \right] - \frac{h^2}{4} g(x_j, y_k), \qquad 1 \le j, k \le N-1
$$
Previously we solved individually for a new value of $u_h(x_j, y_k)$. Consider instead solving simultaneously for all the values in a row of the grid, say in row #$k$.

The line Jacobi method is defined by
$$
u_h^{(m+1)}(x_j, y_k) = \frac{1}{4}\left[ u_h^{(m+1)}(x_{j+1}, y_k) + u_h^{(m+1)}(x_{j-1}, y_k) + u_h^{(m)}(x_j, y_{k+1}) + u_h^{(m)}(x_j, y_{k-1}) \right] - \frac{h^2}{4} g(x_j, y_k), \qquad 1 \le j \le N-1
$$
for $k = 1, \dots, N-1$. This is a tridiagonal system of order $N-1$, and it can be solved in approximately $5N$ arithmetic operations.
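One row update of the line Jacobi method is a tridiagonal solve; here is a sketch of mine using SciPy's banded solver:

```python
import numpy as np
from scipy.linalg import solve_banded

def line_jacobi_row(U_old, g_grid, h, k):
    """Solve row k of the line Jacobi method: multiplying the formula by 4
    gives the tridiagonal system 4u[j] - u[j-1] - u[j+1] = rhs[j] for the
    new values in that row.  U_old holds the previous iterate, boundary
    entries included."""
    N = U_old.shape[0] - 1
    rhs = (U_old[1:N, k + 1] + U_old[1:N, k - 1]
           - h**2 * g_grid[1:N, k])
    rhs[0] += U_old[0, k]       # known boundary value u(x_0, y_k)
    rhs[-1] += U_old[N, k]      # known boundary value u(x_N, y_k)
    # banded storage: rows hold the upper, main, and lower diagonals
    ab = np.zeros((3, N - 1))
    ab[0, 1:] = -1.0
    ab[1, :] = 4.0
    ab[2, :-1] = -1.0
    return solve_banded((1, 1), ab, rhs)
```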
MULTIGRID ITERATION

I just give the general philosophy, as the details are quite complicated.

Imagine having a sequence of linear systems
$$
A_N u_N = b_N, \qquad N = N_0, N_1, N_2, \dots, N_q
$$
In our case with the Poisson equation, we would generally have $N_{j+1} \approx 4 N_j$. We want to solve the most accurate discretization, that for $N = N_q$.

The general philosophy is to use information from each of the systems $A_N u_N = b_N$ in solving the systems with a finer mesh. There are a number of ways in which this may be carried out.
Step 1. Use the Jacobi method to iterate on $A_{N_q} u_{N_q} = b_{N_q}$, calculating a certain number of iterates (say $K$), to get $v = u_{N_q}^{(K)}$. Then calculate the residual
$$
R = b_{N_q} - A_{N_q} v = A_{N_q}\left( u_{N_q} - v \right)
$$

Step 2. Find an approximate solution of the residual equation by performing the multigrid iteration on the coarser system
$$
A_{N_{q-1}} \delta = R_{q,q-1} R
$$
calling the approximate solution $\delta$; here $R_{q,q-1}$ denotes the restriction of vectors on the $N_q$ grid to the $N_{q-1}$ grid.

Step 3. Prolong the solution $\delta$ to a vector $\widetilde{\delta}$ of length $N_q$, and then define the new iterate for $A_{N_q} u_{N_q} = b_{N_q}$ to be
$$
u_{N_q} \approx v + \widetilde{\delta}
$$
This converges extremely rapidly; and there are public domain packages available which implement it for common classes of partial differential equations. In general these are the fastest means of solving discretizations of PDEs.
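To make the philosophy concrete, here is a small two-grid sketch of my own for the zero-boundary model problem. The damped-Jacobi smoother, full-weighting restriction, and bilinear prolongation are standard choices I have filled in for the operators the notes leave unspecified, and $N$ is assumed even:

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

def laplacian(N):
    """Five-point matrix of (5) (h^2-scaled) on the N-mesh, order (N-1)^2."""
    D = diags([1, -2, 1], [-1, 0, 1], shape=(N - 1, N - 1))
    I = identity(N - 1)
    return (kron(I, D) + kron(D, I)).tocsr()

def prolong1d(N):
    """Linear interpolation matrix from the N/2-mesh to the N-mesh (1D)."""
    P = np.zeros((N - 1, N // 2 - 1))
    for jc in range(1, N // 2):
        P[2 * jc - 1, jc - 1] = 1.0     # coincident fine point
        P[2 * jc - 2, jc - 1] = 0.5     # fine point to the left
        P[2 * jc, jc - 1] = 0.5         # fine point to the right
    return P

def two_grid(N, b, v, n_smooth=3, omega=0.8):
    """One two-grid cycle for A_N u = b with zero boundary data: smooth,
    restrict the residual, solve exactly on the N/2 grid, prolong, correct."""
    A = laplacian(N)
    P1 = prolong1d(N)
    P = np.kron(P1, P1)                 # bilinear prolongation
    R = P.T / 4.0                       # full-weighting restriction
    for _ in range(n_smooth):           # damped Jacobi; diag(A) = -4
        v = v - omega * 0.25 * (b - A @ v)
    r = b - A @ v
    # coarse correction; the factor 4 rescales h^2 to (2h)^2
    d = spsolve(laplacian(N // 2), 4.0 * (R @ r))
    v = v + P @ d
    for _ in range(n_smooth):
        v = v - omega * 0.25 * (b - A @ v)
    return v

# quick check: the residual drops by a roughly constant factor per cycle
N = 32
A = laplacian(N)
b = np.random.default_rng(0).standard_normal((N - 1) ** 2)
v = np.zeros_like(b)
for it in range(5):
    v = two_grid(N, b, v)
    print(it, np.linalg.norm(b - A @ v))
```

Replacing the exact coarse solve by a recursive call on the $N/2$ grid gives the full multigrid method.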
DIRECT METHODS

With the systems for some special PDEs, including the Poisson equation on a rectangle, it is possible to give an exact method of solution which is also extremely efficient. It depends on knowing the eigenvalues and eigenvectors of the matrix $A_N$. Since the matrix is symmetric and positive definite, there is a basis of orthonormal eigenvectors; and it can be found explicitly. Suppose it is written as
$$
v^{(1)}, v^{(2)}, \dots, v^{(N)}
$$
with eigenvalues $\lambda_1, \dots, \lambda_N$. Then write
$$
b = \sum_{j=1}^{N} \left( b, v^{(j)} \right) v^{(j)}
$$
The solution of $A_N u_N = b$ is given by
$$
u_N = \sum_{j=1}^{N} \frac{1}{\lambda_j} \left( b, v^{(j)} \right) v^{(j)}
$$
In the particular case of the Poisson equation over a rectangle, this can be computed in $O(N \log N)$ operations.
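The $O(N \log N)$ count comes from computing the coefficients $(b, v^{(j)})$ with a fast sine transform, since the eigenvectors here are sampled sine functions. A sketch for the zero-boundary case (my illustration, using SciPy's `dstn`):

```python
import numpy as np
from scipy.fft import dstn, idstn

def fast_poisson(g_interior, N):
    """Solve the five-point scheme (5) with zero boundary data via the
    type-I discrete sine transform.  The vectors sin(p*pi*j/N)*sin(q*pi*k/N)
    are eigenvectors of the stencil, with eigenvalues
    -4*[sin^2(p*pi/2N) + sin^2(q*pi/2N)]."""
    h = 1.0 / N
    s = np.sin(np.arange(1, N) * np.pi / (2 * N)) ** 2
    lam = -4.0 * (s[:, None] + s[None, :])      # all (N-1)^2 eigenvalues
    bhat = dstn(h**2 * g_interior, type=1)      # coefficients (b, v^(j))
    return idstn(bhat / lam, type=1)            # sum of (b,v)/lambda * v

# Manufactured check: u = sin(pi x) sin(2 pi y), Delta u = -5 pi^2 u.
N = 64
t = np.linspace(0, 1, N + 1)[1:-1]
X, Y = np.meshgrid(t, t, indexing="ij")
u_true = np.sin(np.pi * X) * np.sin(2 * np.pi * Y)
u = fast_poisson(-5 * np.pi**2 * u_true, N)
print(np.abs(u - u_true).max())   # small: the O(h^2) discretization error
```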
[Figure 1: Convergence rate $r_\sigma(M(\omega))$ for $0 \le \omega \le 2$, with $N = 40$ and $\omega^* = 1.85450$]
[Figure 2: Convergence rate $r_\sigma(M(\omega))$ for $0 \le \omega \le 2$, with $N = 80$ and $\omega^* = 1.92445$]
