Chapter 04.11
Cholesky and LDLT Decomposition
Example 1
Determine whether

$$[A] = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{bmatrix}$$

is SPD.
Solution
Criterion a: Each and every determinant of the leading sub-matrix $[A]_{ii}$ $(i = 1, 2, \ldots, n)$ must be positive.
The given $3 \times 3$ matrix $[A]$ is symmetric, because $a_{ij} = a_{ji}$. Furthermore, one has
$$\det [A]_{11} = |2| = 2 > 0$$
$$\det [A]_{22} = \begin{vmatrix} 2 & -1 \\ -1 & 2 \end{vmatrix} = 3 > 0$$
$$\det [A]_{33} = \begin{vmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{vmatrix} = 1 > 0$$
Hence $[A]$ is SPD.
Criterion b: For any nonzero vector $[y]$, the scalar $y^T [A] y$ must be positive:
$$y^T [A] y = \begin{bmatrix} y_1 & y_2 & y_3 \end{bmatrix} \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$$
$$= \left(2y_1^2 - 2y_1 y_2 + 2y_2^2\right) + \left(y_3^2 - 2y_2 y_3\right)$$
$$= (y_1 - y_2)^2 + y_1^2 + y_2^2 + \left(y_3^2 - 2y_2 y_3\right)$$
$$= (y_1 - y_2)^2 + y_1^2 + (y_2 - y_3)^2 > 0$$
Since the above scalar is always positive for any nonzero vector $[y]$, the matrix $[A]$ is SPD.
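The two SPD tests above can be sketched in a few lines of NumPy (a sketch, not part of the chapter; the matrix is the one from Example 1, and an eigenvalue check is used as a numerical stand-in for the quadratic-form criterion):

```python
import numpy as np

# Matrix [A] from Example 1.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

# Criterion a: symmetry plus positive leading principal minors.
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]
is_spd = bool(np.allclose(A, A.T) and all(m > 0 for m in minors))

# Criterion b (equivalent numerical test): all eigenvalues are positive.
eig_positive = bool(np.all(np.linalg.eigvalsh(A) > 0))

print(minors)                 # approximately [2.0, 3.0, 1.0]
print(is_spd, eig_positive)   # True True
```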
The Cholesky factorization of a symmetric matrix $[A]$ has the form $[A] = [U]^T [U]$, where $[U]$ is an upper-triangular matrix. For the $3 \times 3$ case,
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} u_{11} & 0 & 0 \\ u_{12} & u_{22} & 0 \\ u_{13} & u_{23} & u_{33} \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} \quad (3)$$
Multiplying the two matrices on the right-hand side (RHS) of Equation (3), and then equating each upper-triangular RHS term to the corresponding one on the upper-triangular left-hand side (LHS), one gets the following 6 equations for the 6 unknowns in the factorized matrix $[U]$.
$$u_{11} = \sqrt{a_{11}}; \quad u_{12} = \frac{a_{12}}{u_{11}}; \quad u_{13} = \frac{a_{13}}{u_{11}} \quad (4)$$
$$u_{22} = \left(a_{22} - u_{12}^2\right)^{\frac{1}{2}}; \quad u_{23} = \frac{a_{23} - u_{12} u_{13}}{u_{22}}; \quad u_{33} = \left(a_{33} - u_{13}^2 - u_{23}^2\right)^{\frac{1}{2}} \quad (5)$$
In general, for an $n \times n$ matrix, the diagonal and off-diagonal terms of the factorized matrix $[U]$ can be computed from the following formulas:
$$u_{ii} = \left(a_{ii} - \sum_{k=1}^{i-1} u_{ki}^2\right)^{\frac{1}{2}} \quad (6)$$
$$u_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} u_{ki} u_{kj}}{u_{ii}} \quad (7)$$
It is noted that if $i = j$, then the numerator of Equation (7) becomes identical to the terms under the square root in Equation (6). In other words, to factorize a general term $u_{ij}$, one simply needs to do the following steps:
Step 1.1: Compute the numerator of Equation (7):
$$\text{Sum} = a_{ij} - \sum_{k=1}^{i-1} u_{ki} u_{kj}$$
Step 1.2: If $u_{ij}$ is an off-diagonal term (say, $i < j$), then from Equation (7)
$$u_{ij} = \frac{\text{Sum}}{u_{ii}}$$
else, if $u_{ij}$ is a diagonal term (that is, $i = j$), then from Equation (6)
$$u_{ii} = \sqrt{\text{Sum}}$$
As a quick example, one computes:
$$u_{57} = \frac{a_{57} - u_{15} u_{17} - u_{25} u_{27} - u_{35} u_{37} - u_{45} u_{47}}{u_{55}} \quad (8)$$
Thus, for computing $u_{57}$ (that is, $i = 5$, $j = 7$), one only needs to use the (already factorized) data in columns $\#i\,(=5)$ and $\#j\,(=7)$ of $[U]$, respectively.
In general, to find the (off-diagonal) factorized term u ij , one only needs to utilize the
already factorized columns # i , and # j information (see Figure 1). For example, if i = 5 ,
and j = 7 , then Figure 1 will lead to the same formula as shown earlier in Equation (7), or in
Equation (8). Similarly, to find the (diagonal) factorized term u ii , one simply needs to utilize
columns # i , and # i (again!) information (see Figure 1). In this case, Figure 1 will lead to
the same formula as shown earlier in Equation (6).
Figure 1: Useful information (columns $\#i = 5$ and $\#j = 7$ of the factorized matrix $[U]$, with summation index $k = 1, 2, 3, 4$) for computing the term $u_{ij} = u_{57}$.
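Equations (6-7) translate almost line-for-line into code. The following is a sketch (NumPy assumed; the function name `cholesky_upper` is ours, not from the chapter) of the row-by-row factorization $[A] = [U]^T [U]$:

```python
import numpy as np

def cholesky_upper(A):
    """Row-by-row Cholesky factorization A = U^T U (U upper-triangular)."""
    n = A.shape[0]
    U = np.zeros((n, n))
    for i in range(n):
        # Diagonal term, Equation (6): u_ii = sqrt(a_ii - sum_k u_ki^2)
        U[i, i] = np.sqrt(A[i, i] - np.dot(U[:i, i], U[:i, i]))
        for j in range(i + 1, n):
            # Off-diagonal term, Equation (7): only the already-factorized
            # columns #i and #j of U (rows k < i) are needed, as Figure 1
            # illustrates.
            U[i, j] = (A[i, j] - np.dot(U[:i, i], U[:i, j])) / U[i, i]
    return U

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
U = cholesky_upper(A)
print(np.allclose(U.T @ U, A))   # True: the factorization reproduces [A]
```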
Step 2: Forward solution phase
Since $[U]^T$ is a lower-triangular matrix, Equation (11) can be efficiently solved for the intermediate vector $[y]$ in the order
$$y_1, y_2, \ldots, y_n$$
hence the name forward solution.
As a quick example, one has from Equation (11)
$$\begin{bmatrix} u_{11} & 0 & 0 \\ u_{12} & u_{22} & 0 \\ u_{13} & u_{23} & u_{33} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} \quad (12)$$
Hence
$$y_1 = \frac{b_1}{u_{11}} \quad (13)$$
$$y_2 = \frac{b_2 - u_{12} y_1}{u_{22}} \quad (14)$$
$$y_3 = \frac{b_3 - u_{13} y_1 - u_{23} y_2}{u_{33}} \quad (15)$$
In general,
$$y_j = \frac{b_j - \sum_{i=1}^{j-1} u_{ij} y_i}{u_{jj}} \quad (16)$$
Step 3: Backward solution phase
Since $[U]$ is an upper-triangular matrix, Equation (10) can be efficiently solved for the original unknown vector $[x]$, according to the order
$$x_n, x_{n-1}, \ldots, x_1$$
$$\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} \quad (17)$$
Hence
$$x_3 = \frac{y_3}{u_{33}} \quad (18)$$
Similarly
$$x_2 = \frac{y_2 - u_{23} x_3}{u_{22}} \quad (19)$$
and
$$x_1 = \frac{y_1 - u_{12} x_2 - u_{13} x_3}{u_{11}} \quad (20)$$
In general, one has
$$x_j = \frac{y_j - \sum_{i=j+1}^{n} u_{ji} x_i}{u_{jj}} \quad (21)$$
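Equations (16) and (21) give the two triangular solves. A sketch (function names are ours; NumPy assumed; the example data are from Example 1):

```python
import numpy as np

def forward_solve(U, b):
    """Solve [U]^T [y] = [b] per Equation (16)."""
    n = len(b)
    y = np.zeros(n)
    for j in range(n):
        # y_j = (b_j - sum_{i<j} u_ij * y_i) / u_jj
        y[j] = (b[j] - np.dot(U[:j, j], y[:j])) / U[j, j]
    return y

def backward_solve(U, y):
    """Solve [U] [x] = [y] per Equation (21)."""
    n = len(y)
    x = np.zeros(n)
    for j in range(n - 1, -1, -1):
        # x_j = (y_j - sum_{i>j} u_ji * x_i) / u_jj
        x[j] = (y[j] - np.dot(U[j, j + 1:], x[j + 1:])) / U[j, j]
    return x

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
b = np.array([1.0, 0.0, 0.0])
U = np.linalg.cholesky(A).T          # upper factor, so that A = U^T U
x = backward_solve(U, forward_solve(U, b))
print(np.round(x, 4))                # -> [1. 1. 1.]
```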
Amongst the above three-step Cholesky algorithm, the factorization phase in Step 1 consumes about 95% of the total SLE solution time.
If the coefficient matrix $[A]$ is symmetric but not necessarily positive definite, then the above Cholesky algorithm is not valid. In this case, the following LDL$^T$ factorization can be employed:
$$[A] = [L][D][L]^T \quad (22)$$
For example,
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix} \begin{bmatrix} d_{11} & 0 & 0 \\ 0 & d_{22} & 0 \\ 0 & 0 & d_{33} \end{bmatrix} \begin{bmatrix} 1 & l_{21} & l_{31} \\ 0 & 1 & l_{32} \\ 0 & 0 & 1 \end{bmatrix} \quad (23)$$
Multiplying the three matrices on the RHS of Equation (23), then equating the resulting
upper-triangular RHS terms of Equation (23) to the corresponding ones on the LHS, one
obtains the following formulas for the diagonal [D] , and lower-triangular [L] matrices
$$d_{jj} = a_{jj} - \sum_{k=1}^{j-1} l_{jk}^2 d_{kk} \quad (24)$$
$$l_{ij} = \frac{a_{ij} - \sum_{k=1}^{j-1} l_{ik} d_{kk} l_{jk}}{d_{jj}} \quad (25)$$
Thus, the LDLT algorithms can be summarized by the following step-by-step procedures.
Step 1: Factorization phase
Compute $[D]$ and $[L]$ from Equations (24-25), so that
$$[A] = [L][D][L]^T \quad (22, \text{repeated})$$
Step 2: Forward solution and diagonal scaling phase
Substituting Equation (22) into Equation (1), one obtains
$$[L][D][L]^T [x] = [b] \quad (26)$$
Define
$$[L]^T [x] = [y] \quad (27)$$
or, for the $3 \times 3$ case,
$$\begin{bmatrix} 1 & l_{21} & l_{31} \\ 0 & 1 & l_{32} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$$
so that
$$x_i = y_i - \sum_{k=i+1}^{n} l_{ki} x_k; \quad \text{for } i = n, n-1, \ldots, 2, 1 \quad (28)$$
Also, define
$$[D][y] = [z] \quad (29)$$
$$\begin{bmatrix} d_{11} & 0 & 0 \\ 0 & d_{22} & 0 \\ 0 & 0 & d_{33} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix}$$
$$y_i = \frac{z_i}{d_{ii}}, \quad \text{for } i = 1, 2, 3, \ldots, n \quad (30)$$
Then Equation (26) becomes
$$[L][z] = [b] \quad (31)$$
$$\begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$$
$$z_i = b_i - \sum_{k=1}^{i-1} l_{ik} z_k \quad (32)$$
Equation (31) can be efficiently solved for the vector $[z]$, and then Equation (29) can be conveniently (and trivially) solved for the vector $[y]$.
Step 3: Backward solution phase
In this step, Equation (27) can be efficiently solved for the original unknown vector $[x]$.
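The three-step LDL$^T$ procedure above can be sketched as follows (function names are ours; NumPy assumed; the factorization follows Equations (24-25), and the solve follows Equations (32), (30), and (28)):

```python
import numpy as np

def ldlt(A):
    """Factor A = L D L^T per Equations (24)-(25); d holds diag(D)."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        # Equation (24): d_jj = a_jj - sum_k l_jk^2 d_kk
        d[j] = A[j, j] - np.dot(L[j, :j] ** 2, d[:j])
        for i in range(j + 1, n):
            # Equation (25): l_ij = (a_ij - sum_k l_ik d_kk l_jk) / d_jj
            L[i, j] = (A[i, j] - np.dot(L[i, :j] * d[:j], L[j, :j])) / d[j]
    return L, d

def ldlt_solve(L, d, b):
    """Three-phase solve: [L][z]=[b], [D][y]=[z], [L]^T[x]=[y]."""
    n = len(b)
    z = np.zeros(n)
    for i in range(n):                    # forward, Equation (32)
        z[i] = b[i] - np.dot(L[i, :i], z[:i])
    y = z / d                             # diagonal scaling, Equation (30)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):        # backward, Equation (28)
        x[i] = y[i] - np.dot(L[i + 1:, i], x[i + 1:])
    return x

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
b = np.array([1.0, 0.0, 0.0])
L, d = ldlt(A)
print(np.round(ldlt_solve(L, d, b), 4))   # -> [1. 1. 1.]
```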
Example 2
Using the Cholesky algorithm, solve the following SLE system for the unknown vector [x ] .
[ A][ x] = [b]
where
$$[A] = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{bmatrix}$$
$$[b] = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$
Solution
The factorized, upper-triangular matrix $[U]$ can be computed by either referring to Equations (6-7), or looking at Figure 1, as follows.
Row 1 of $[U]$ is given below.
$$u_{11} = \sqrt{a_{11}} = \sqrt{2} = 1.414$$
$$u_{12} = \frac{a_{12}}{u_{11}} = \frac{-1}{1.414} = -0.7071$$
$$u_{13} = \frac{a_{13}}{u_{11}} = \frac{0}{1.414} = 0$$
Row 2 of $[U]$ is given below.
$$u_{22} = \left(a_{22} - \sum_{k=1}^{i-1=1} u_{ki}^2\right)^{\frac{1}{2}} = \left(2 - (-0.7071)^2\right)^{\frac{1}{2}} = 1.225$$
$$u_{23} = \frac{a_{23} - \sum_{k=1}^{i-1=1} u_{ki} u_{kj}}{u_{22}} = \frac{-1 - (-0.7071)(0)}{1.225} = -0.8165$$
Row 3 of $[U]$ is given below.
$$u_{33} = \left(a_{33} - \sum_{k=1}^{i-1=2} u_{ki}^2\right)^{\frac{1}{2}} = \left(a_{33} - u_{13}^2 - u_{23}^2\right)^{\frac{1}{2}} = \left(1 - (0)^2 - (-0.8165)^2\right)^{\frac{1}{2}} = 0.5774$$
Thus, the factorized matrix is
$$[U] = \begin{bmatrix} 1.414 & -0.7071 & 0 \\ 0 & 1.225 & -0.8165 \\ 0 & 0 & 0.5774 \end{bmatrix}$$
The forward solution phase, shown in Equation (11), becomes
$$[U]^T [y] = [b]$$
$$\begin{bmatrix} 1.414 & 0 & 0 \\ -0.7071 & 1.225 & 0 \\ 0 & -0.8165 & 0.5774 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$
Thus, Equation (16) can be used to solve for $[y]$ as
$$y_1 = \frac{b_1}{u_{11}} = \frac{1}{1.414} = 0.7071$$
$$y_2 = \frac{b_2 - \sum_{i=1}^{j-1=1} u_{ij} y_i}{u_{jj}} = \frac{0 - (-0.7071)(0.7071)}{1.225} = 0.4082$$
$$y_3 = \frac{b_3 - \sum_{i=1}^{j-1=2} u_{ij} y_i}{u_{jj}} = \frac{0 - (0)(0.7071) - (-0.8165)(0.4082)}{0.5774} = 0.5774$$
The backward solution phase, shown in Equation (10), becomes:
$$[U][x] = [y]$$
$$\begin{bmatrix} 1.414 & -0.7071 & 0 \\ 0 & 1.225 & -0.8165 \\ 0 & 0 & 0.5774 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.7071 \\ 0.4082 \\ 0.5774 \end{bmatrix}$$
Thus, Equation (21) can be used to solve for $[x]$ as
$$x_3 = \frac{y_3}{u_{33}} = \frac{0.5774}{0.5774} = 1$$
$$x_2 = \frac{y_2 - \sum_{i=j+1=3}^{N=3} u_{ji} x_i}{u_{jj}} = \frac{y_2 - u_{23} x_3}{u_{22}} = \frac{0.4082 - (-0.8165)(1)}{1.225} = 1$$
$$x_1 = \frac{y_1 - \sum_{i=j+1=2}^{N=3} u_{ji} x_i}{u_{jj}} = \frac{y_1 - u_{12} x_2 - u_{13} x_3}{u_{11}} = \frac{0.7071 - (-0.7071)(1) - (0)(1)}{1.414} = 1$$
Hence
$$[x] = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$
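As a sanity check (a sketch, not part of the original chapter), NumPy's built-in routines reproduce the hand computation of Example 2:

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
b = np.array([1.0, 0.0, 0.0])

# np.linalg.cholesky returns the lower factor; its transpose is [U].
U = np.linalg.cholesky(A).T
print(np.round(U, 4))         # matches the hand-computed [U] above
print(np.linalg.solve(A, b))  # approximately [1. 1. 1.]
```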
Example 3
Using the LDLT algorithm, solve the following SLE system for the unknown vector [x ] .
[ A][ x] = [b]
where
$$[A] = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{bmatrix}$$
$$[b] = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$
Solution
The factorized matrices [D] and [L] can be computed from Equation (24) and Equation (25),
respectively.
Column 1 of matrices $[D]$ and $[L]$:
$$d_{11} = a_{11} - \sum_{k=1}^{j-1=0} l_{1k}^2 d_{kk} = a_{11} = 2$$
$$l_{11} = 1 \text{ (always!)}$$
$$l_{21} = \frac{a_{21} - \sum_{k=1}^{j-1=0} l_{2k} d_{kk} l_{1k}}{d_{11}} = \frac{a_{21}}{d_{11}} = \frac{-1}{2} = -0.5$$
$$l_{31} = \frac{a_{31}}{d_{11}} = \frac{0}{2} = 0$$
Column 2 of matrices $[D]$ and $[L]$:
$$d_{22} = a_{22} - \sum_{k=1}^{j-1=1} l_{2k}^2 d_{kk} = 2 - l_{21}^2 d_{11} = 2 - (-0.5)^2 (2) = 1.5$$
$$l_{22} = 1 \text{ (always!)}$$
$$l_{32} = \frac{a_{32} - \sum_{k=1}^{j-1=1} l_{3k} d_{kk} l_{2k}}{d_{22}} = \frac{-1 - (0)(2)(-0.5)}{1.5} = -0.6667$$
Column 3 of matrices $[D]$ and $[L]$:
$$d_{33} = a_{33} - \sum_{k=1}^{j-1=2} l_{3k}^2 d_{kk} = 1 - l_{31}^2 d_{11} - l_{32}^2 d_{22} = 1 - (0)^2 (2) - (-0.6667)^2 (1.5) = 0.3333$$
$$l_{33} = 1 \text{ (always!)}$$
Hence
$$[D] = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 1.5 & 0 \\ 0 & 0 & 0.3333 \end{bmatrix}$$
and
$$[L] = \begin{bmatrix} 1 & 0 & 0 \\ -0.5 & 1 & 0 \\ 0 & -0.6667 & 1 \end{bmatrix}$$
The forward solution phase, shown in Equation (31), becomes:
$$[L][z] = [b]$$
$$\begin{bmatrix} 1 & 0 & 0 \\ -0.5 & 1 & 0 \\ 0 & -0.6667 & 1 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \text{ or}$$
$$z_i = b_i - \sum_{k=1}^{i-1} l_{ik} z_k \quad (32, \text{repeated})$$
Hence
$$z_1 = b_1 = 1$$
$$z_2 = b_2 - l_{21} z_1 = 0 - (-0.5)(1) = 0.5$$
$$z_3 = b_3 - l_{31} z_1 - l_{32} z_2 = 0 - (0)(1) - (-0.6667)(0.5) = 0.3333$$
The diagonal scaling phase, shown in Equation (29), becomes
$$\begin{bmatrix} 2 & 0 & 0 \\ 0 & 1.5 & 0 \\ 0 & 0 & 0.3333 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0.5 \\ 0.3333 \end{bmatrix}, \text{ or}$$
$$y_i = \frac{z_i}{d_{ii}} \quad (30, \text{repeated})$$
Hence
$$y_1 = \frac{z_1}{d_{11}} = \frac{1}{2} = 0.5$$
$$y_2 = \frac{z_2}{d_{22}} = \frac{0.5}{1.5} = 0.3333$$
$$y_3 = \frac{z_3}{d_{33}} = \frac{0.3333}{0.3333} = 1$$
The backward solution phase can be found by referring to Equation (27):
$$[L]^T [x] = [y]$$
$$\begin{bmatrix} 1 & -0.5 & 0 \\ 0 & 1 & -0.6667 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0.3333 \\ 1 \end{bmatrix}, \text{ or}$$
$$x_i = y_i - \sum_{k=i+1}^{n} l_{ki} x_k \quad (28, \text{repeated})$$
Hence
$$x_3 = y_3 = 1$$
$$x_2 = y_2 - l_{32} x_3 = 0.3333 - (-0.6667)(1) = 1$$
$$x_1 = y_1 - l_{21} x_2 - l_{31} x_3 = 0.5 - (-0.5)(1) - (0)(1) = 1$$
Hence
$$[x] = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$
Through this numerical example, one clearly sees that NO square-root operations are involved during the entire LDL$^T$ algorithm. Thus, the coefficient matrix $[A]$, shown in Equation (1), is NOT required to be SPD.
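To see this concretely, here is a small sketch (the 2x2 matrix is chosen here for illustration and is not from the chapter): a symmetric matrix that fails the SPD test still factors as $[L][D][L]^T$, with a negative entry appearing in $[D]$ where Cholesky would hit a negative square-root argument.

```python
import numpy as np

# Symmetric but indefinite: eigenvalues are 3 and -1.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# Equations (24)-(25) for the 2x2 case -- no square roots involved.
d11 = A[0, 0]                  # = 1
l21 = A[1, 0] / d11            # = 2
d22 = A[1, 1] - l21**2 * d11   # = -3: Cholesky would need sqrt(-3)

L = np.array([[1.0, 0.0],
              [l21, 1.0]])
D = np.diag([d11, d22])
print(d22)                          # -> -3.0
print(np.allclose(L @ D @ L.T, A))  # -> True
```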
Re-ordering Algorithms For Minimizing Fill-in Terms [1,2].
During the factorization phase (of the Cholesky, or LDL$^T$ algorithms), many zero terms in the original/given matrix $[A]$ will become non-zero terms in the factored matrix $[U]$. These new non-zero terms are often called fill-in terms (indicated by the symbol $F$). It is, therefore, highly desirable to minimize these fill-in terms, so that both computational time/effort and computer memory requirements can be substantially reduced. For example, the following matrix $[A]$ and vector $[b]$ are given:
$$[A] = \begin{bmatrix} 112 & 7 & 0 & 0 & 0 & 2 \\ 7 & 110 & 5 & 4 & 3 & 0 \\ 0 & 5 & 88 & 0 & 0 & 1 \\ 0 & 4 & 0 & 66 & 0 & 0 \\ 0 & 3 & 0 & 0 & 44 & 0 \\ 2 & 0 & 1 & 0 & 0 & 11 \end{bmatrix} \quad (33)$$
$$[b] = \begin{bmatrix} 121 \\ 129 \\ 94 \\ 70 \\ 47 \\ 14 \end{bmatrix} \quad (34)$$
The Cholesky factorization matrix [U ] , based on the original matrix [ A] (see Equation 33)
and Equations (6-7), or Figure 1, can be symbolically computed as
$$[U] = \begin{bmatrix} x & x & 0 & 0 & 0 & x \\ 0 & x & x & x & x & F \\ 0 & 0 & x & F & F & x \\ 0 & 0 & 0 & x & F & F \\ 0 & 0 & 0 & 0 & x & F \\ 0 & 0 & 0 & 0 & 0 & x \end{bmatrix} \quad (35)$$
In Equation (35), the symbols $x$ and $F$ represent the non-zero and fill-in terms, respectively.
In practical applications, however, it is always a necessary step to rearrange the
original matrix [ A] through re-ordering algorithms (or subroutines) [Refs 1-2] and produce
the following integer mapping array
IPERM (new equation #) = {old equation #}
(36)
such as, for this particular example:
$$\text{IPERM} = \begin{Bmatrix} 1 \to 6 \\ 2 \to 5 \\ 3 \to 4 \\ 4 \to 3 \\ 5 \to 2 \\ 6 \to 1 \end{Bmatrix} \quad (37)$$
Using the above results (see Equation 37), one will be able to construct the following rearranged matrices:
$$[A^*] = \begin{bmatrix} 11 & 0 & 0 & 1 & 0 & 2 \\ 0 & 44 & 0 & 0 & 3 & 0 \\ 0 & 0 & 66 & 0 & 4 & 0 \\ 1 & 0 & 0 & 88 & 5 & 0 \\ 0 & 3 & 4 & 5 & 110 & 7 \\ 2 & 0 & 0 & 0 & 7 & 112 \end{bmatrix} \quad (38)$$
and
$$[b^*] = \begin{bmatrix} 14 \\ 47 \\ 70 \\ 94 \\ 129 \\ 121 \end{bmatrix} \quad (39)$$
In the original matrix $[A]$ (shown in Equation 33), the nonzero term $A$(old row 1, old column 2) $= 7$ will move to the new location of the new matrix $A^*$(new row 6, new column 5) $= 7$, etc. The nonzero term $A$(old row 3, old column 3) $= 88$ will move to $A^*$(new row 4, new column 4) $= 88$, etc. The value of $b$(old row 4) $= 70$ will be moved to (or located at) $b^*$(new row 3) $= 70$, etc.
Now, one would like to solve the following modified system of linear equations (SLE) for $[x^*]$,
$$[A^*][x^*] = [b^*] \quad (40)$$
rather than to solve the original SLE (see Equation (1)). The original unknown vector $[x]$ can then be recovered through the mapping array IPERM. The Cholesky factorization matrix $[U^*]$, based on the re-arranged matrix $[A^*]$, can be symbolically computed as
$$[U^*] = \begin{bmatrix} x & 0 & 0 & x & 0 & x \\ 0 & x & 0 & 0 & x & 0 \\ 0 & 0 & x & 0 & x & 0 \\ 0 & 0 & 0 & x & x & F \\ 0 & 0 & 0 & 0 & x & x \\ 0 & 0 & 0 & 0 & 0 & x \end{bmatrix} \quad (41)$$
You can clearly see the big benefits of solving the SLE shown in Equation (40), instead of solving the original Equation (1), since the factorized matrix $[U^*]$ has only 1 fill-in term (see the symbol $F$ in Equation 41), as compared to the six fill-in terms that occurred in the factorized matrix $[U]$ shown in Equation (35).
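The fill-in counts quoted above (six for $[U]$, one for $[U^*]$) can be checked symbolically by running the Cholesky recurrence on the 0/1 sparsity pattern alone. The following sketch (function name ours; NumPy assumed) does exactly that for the matrices of Equations (33) and (38):

```python
import numpy as np

def count_fill_ins(A):
    """Symbolic Cholesky: count upper-triangular entries that turn
    from zero into non-zero (fill-in terms F) during factorization."""
    nz = (A != 0)
    n = A.shape[0]
    fills = 0
    for i in range(n):
        for j in range(i + 1, n):
            # Equation (7): u_ij is non-zero if a_ij is, or if some
            # product u_ki * u_kj (k < i) is non-zero.
            if not nz[i, j] and any(nz[k, i] and nz[k, j] for k in range(i)):
                nz[i, j] = nz[j, i] = True   # keep the pattern symmetric
                fills += 1
    return fills

A = np.array([[112,   7,  0,  0,  0,  2],
              [  7, 110,  5,  4,  3,  0],
              [  0,   5, 88,  0,  0,  1],
              [  0,   4,  0, 66,  0,  0],
              [  0,   3,  0,  0, 44,  0],
              [  2,   0,  1,  0,  0, 11]], dtype=float)

iperm = [5, 4, 3, 2, 1, 0]             # Equation (37), 0-based: new -> old
A_star = A[np.ix_(iperm, iperm)]       # re-ordered matrix of Equation (38)

print(count_fill_ins(A))        # -> 6, matching Equation (35)
print(count_fill_ins(A_star))   # -> 1, matching Equation (41)
```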
On-Line Chess-Like Game For Reordering/Factorized Phase [4].
Based on the discussions presented in the previous section 2 (about factorization phase), and
section 3 (about reordering phase), one can easily see the similar operations between the
symbolic, numerical factorization and reordering (to minimize the number of fill-in terms)
phases of sparse SLE.
In practical computer implementation for the solution of SLE, the reordering phase is usually conducted first (to produce the mapping between old and new equation numbers, as indicated in the integer array IPERM; see Equations 36-37).
Then, the sparse symbolic factorization phase follows, using either the Cholesky Equations (6-7) or the LDL$^T$ Equations (24-25) (without requiring the actual/numerical values to be computed). This is because during the symbolic factorization phase, one only wishes to find the number (and the location) of the non-zero fill-in terms. This symbolic factorization process is necessary for allocating the computer memory required by the numerical factorization phase, which will actually compute the exact numerical values of $[U^*]$, based on the same Cholesky Equations (6-7) (or the LDL$^T$ Equations (24-25)).
In this work, a chess-like game (shown in Figure 2, Ref. [4]) has been designed with
the following objectives:
message to a particular player who discovers a new chess-like move (or, swapping
node) strategies which are even better than RCM algorithms.
4. Initially, the player(s) will select the matrix size ($8 \times 8$, or larger is recommended), and the percentage (50%, or larger is suggested) of zero terms (or sparsity of the matrix). Then, the START Game icon will be clicked by the player.
5. The player will then CLICK one of the selected node $i$ (or equation) numbers appearing on the computer screen. The player will see those nodes $j$ which are connected to node $i$ (based on the given/generated matrix $[A]$). The player then has to decide to swap node $i$ with one of the possible nodes $j$. After confirming the player's decision, the outcomes/results will be announced by the computer-animated human voice, and the monetary award will (or will not) be given to the players/learners, accordingly. In this software, a maximum of $1,000,000 can be earned by the player, and the exact dollar amount will be inversely proportional to the number of fill-in terms that occur (based on the player's decision on how to swap node $i$ with another node $j$).
6. The next player will continue to play, with his/her move (meaning a swap of the $i$-th node with the $j$-th node) based on the current best non-zero term pattern of the matrix.
References
CHOLESKY AND LDLT DECOMPOSITION
Topic: Cholesky and LDLT Decomposition
Summary: Textbook chapter of Cholesky and LDLT Decomposition
Major: General Engineering
Authors: Duc Nguyen
Date: July 29, 2010
Web Site: http://numericalmethods.eng.usf.edu