DIRECT METHODS FOR THE SOLUTION OF LINEAR EQUATION SYSTEMS
Lizeth Paola Barrero Riaño
Numerical Methods – Industrial University of Santander
BASIC FUNDAMENTALS
•Symmetric Matrix
•Transpose Matrix
•Determinant
•Upper Triangular Matrix
•Lower Triangular Matrix
•Banded Matrix
•Augmented Matrix
•Matrix Multiplication
Matrix

A matrix consists of a rectangular array of elements represented by a
single symbol. As depicted in the figure, [A] is the shorthand notation
for the matrix and aij designates an individual element of the matrix.
A horizontal set of elements is called a row (i) and a vertical set is
called a column (j).

            Column 3
     a11  a12  a13  ...  a1m
A =  a21  a22  a23  ...  a2m    ← Row 2
      .    .    .         .
     an1  an2  an3  ...  anm
Symmetric Matrix

A_nxn = [aij] is a symmetric matrix if aij = aji for all i, j.

It is a square matrix in which the elements are symmetric about the
main diagonal.

Example: scalar, diagonal, and identity matrices are symmetric
matrices.

If A is a symmetric matrix, then:
a. The product A · A^t is defined and is a symmetric matrix.
b. The sum of symmetric matrices is a symmetric matrix.
c. The product of two symmetric matrices is a symmetric matrix if the
matrices commute.
Transpose Matrix

Let A_mxn = [aij]; its transpose A^t_nxm = [bij] is such that
bji = aij for all i, j. That is, given A = (aij) of order mxn, the
matrix B = (bij) of order nxm is the transpose of A if the rows of A
are the columns of B. This operation is usually denoted by A^t = A'.

Example:

          1  2 -2                  1  0
A_2x3 =                 A^t_3x2 =  2  4
          0  4  3                 -2  3

B_1x2 = [ 2  9 ]        B^t_2x1 = [ 2 ]
                                  [ 9 ]

Properties:
a. (A^t)^t = A
b. (A + B)^t = A^t + B^t
c. (A · B)^t = B^t · A^t
d. (k · A)^t = k · A^t, for any scalar k
Determinant

Given a square matrix A of size n, its determinant is defined as the
sum of the products of the elements of any chosen line (row or column)
of the matrix by their corresponding cofactors.

Example: for the matrix

     -2  4  5
A =   6  7 -3
      3  0  2

applying the definition along the third row gives:

det(A) = 3 · | 4  5 |  +  0 · ( -| -2  5 | )  +  2 · | -2  4 |
             | 7 -3 |            |  6 -3 |           |  6  7 |

       = 3 · (-12 - 35) + 0 · ( -(6 - 30) ) + 2 · (-14 - 24)
       = -141 + 0 - 76
       = -217
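The cofactor expansion above can be reproduced numerically. A minimal sketch using NumPy (the matrix entries, including the signs, are the ones reconstructed in the example above):

```python
import numpy as np

# Example matrix from the slide.
A = np.array([[-2.0, 4.0,  5.0],
              [ 6.0, 7.0, -3.0],
              [ 3.0, 0.0,  2.0]])

def minor(M, i, j):
    """Matrix with row i and column j deleted."""
    return np.delete(np.delete(M, i, axis=0), j, axis=1)

# Cofactor expansion along the third row (index i = 2).
det = sum((-1) ** (2 + j) * A[2, j] * np.linalg.det(minor(A, 2, j))
          for j in range(3))

print(round(det))               # -217
print(round(np.linalg.det(A)))  # -217, agrees with the built-in routine
```

The built-in `np.linalg.det` uses an LU factorization internally, so it serves as an independent check on the hand expansion.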
Determinant Properties
a. If a matrix has a line (row or column) of zeros, the determinant is
zero.
b. If a matrix has two equal or proportional lines, the determinant is
null.
c. If we permute two parallel lines of a square matrix, its determinant
changes sign.
d. If we multiply all elements of a line by a number, the determinant
is multiplied by that number.
e. If a multiple of one line is added to another line, the determinant
does not change.
f. The determinant of a matrix is equal to that of its transpose.
g. If A has an inverse matrix A^-1, it is verified that:

   det(A^-1) = 1 / det(A)
Upper and Lower Triangular Matrix

A_nxn = [aij] is Upper Triangular if aij = 0 for i > j.
A_nxn = [aij] is Lower Triangular if aij = 0 for i < j.

Example (upper):            Example (lower):

     2  0  2                     2  0  0
A =  0  1  0                D =  2  1  0
     0  0  3                     1  1  0

An upper triangular matrix is a square matrix in which all the elements
under the main diagonal are zero; a lower triangular matrix is one in
which all the elements above the main diagonal are zero.
Banded Matrix

A band matrix is a sparse matrix whose non-zero entries are confined to
a diagonal band, comprising the main diagonal and zero or more
diagonals on either side.

Example:

     2 0 0 0         4 0 0 0 0         8 7  6 0  0
C =  0 1 0 0    D =  7 8 1 0 0    M =  9 3  0 2  0
     0 0 5 0         0 0 5 2 0         3 1  8 9 10
     0 0 0 7         0 0 1 3 5         0 0  3 5  8
                     0 0 0 3 4         0 0  7 4  0

   Diagonal        Tridiagonal        Pentadiagonal
Augmented Matrix

The extended or augmented matrix is formed by the coefficient matrix
and the vector of independent terms, which are usually separated by a
dotted line.

Example:

     1 3 2          4
A =  2 0 1 ,   B =  3
     5 2 2          8

The augmented matrix [A|B] is represented as follows:

          1 3 2 | 4
[A|B] =   2 0 1 | 3
          5 2 2 | 8
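Building an augmented matrix programmatically is a one-liner; a sketch with NumPy, using the A and B of the example:

```python
import numpy as np

# Coefficient matrix and vector of independent terms from the slide.
A = np.array([[1, 3, 2],
              [2, 0, 1],
              [5, 2, 2]])
B = np.array([[4], [3], [8]])

# [A|B]: append B as an extra column.
aug = np.hstack([A, B])
print(aug)
# rows: [1 3 2 4], [2 0 1 3], [5 2 2 8]
```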
Matrix Multiplication

To define A · B it is necessary that the number of columns in the
first matrix coincide with the number of rows in the second matrix.
The order of the product is given by the number of rows of the first
matrix by the number of columns of the second. That is, if A is of
order mxn and B is of order nxp, then C = A · B is of order mxp.

Given A_mxn = [aij] and B_nxp = [bij], the product A · B is another
matrix C_mxp in which each element cij is the product of the i-th row
of A by the j-th column of B, namely

    cij = Σ (k = 1..n) aik · bkj

     a11 ... a1n          b11 ... b1p
A =   .       .      B =   .       .
     am1 ... amn          bn1 ... bnp

        a11·b11 + ... + a1n·bn1   ...   a11·b1p + ... + a1n·bnp
A·B =             .                               .
        am1·b11 + ... + amn·bn1   ...   am1·b1p + ... + amn·bnp
Matrix Multiplication

Example:

     1 3 5         1  3 4 0
A =  0 0 1    B =  0 -8 9 1
     4 1 2         7  5 5 1

              36  4 56 8
C = A · B =    7  5  5 1
              18 14 35 3
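The example product can be checked with NumPy's `@` operator (the -8 in B is as reconstructed above, which makes the product agree with the result shown):

```python
import numpy as np

A = np.array([[1, 3, 5],
              [0, 0, 1],
              [4, 1, 2]])
B = np.array([[1,  3, 4, 0],
              [0, -8, 9, 1],
              [7,  5, 5, 1]])

C = A @ B  # same as np.matmul(A, B)
print(C)
# rows: [36 4 56 8], [7 5 5 1], [18 14 35 3]
```

For instance, c12 = 1·3 + 3·(-8) + 5·5 = 4, matching the second entry of the first row.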
Solution of Linear Algebraic Equations

Linear algebra is one of the cornerstones of modern computational
mathematics. Almost all numerical schemes, such as the finite element
method and the finite difference method, are in fact techniques that
transform, assemble, reduce, rearrange, and/or approximate the
differential, integral, or other types of equations to systems of
linear algebraic equations.

A system of linear algebraic equations can be expressed as

   a11 a12 ... a1n     x1     b1
   a21 a22 ... a2n     x2     b2
    .   .       .   ·   .  =   .
   am1 am2 ... amn     xn     bm

where aij and bi are constants, i = 1, 2, ..., m, j = 1, 2, ..., n.
Solution of Linear Algebraic Equations

Or: AX = B

Solving a system with a coefficient matrix A_mxn is equivalent to
finding the intersection point(s) of all m surfaces (lines) in an
n-dimensional space.

• If all m surfaces happen to pass through a single point, then the
solution is unique.
• If the intersected part is a line or a surface, there are an
infinite number of solutions, usually expressed by a particular
solution added to a linear combination of typically n-m vectors.
Otherwise, the solution does not exist.
• In this part, we deal with the case of determining the values x1,
x2, ..., xn that simultaneously satisfy a set of equations.
Small Systems of Linear Equations
1. Graphical Method
2. Cramer’s Rule
3. The Elimination of Unknowns
1. Graphical Method

When solving a system with two linear equations in two variables, we
are looking for the point where the two lines cross. This can be
determined by graphing each line on the same coordinate system and
estimating the point of intersection. When two straight lines are
graphed, one of three possibilities may result:
Graphical Method

Case 1
When two lines cross in exactly one point, the system is consistent
and independent, and the solution is the one ordered pair where the
two lines cross. The coordinates of this ordered pair can be estimated
from the graph of the two lines.

Independent system: one solution point
Graphical Method

Case 2
This graph shows two distinct lines that are parallel. Since parallel
lines never cross, there can be no intersection; that is, for a system
of equations that graphs as parallel lines, there can be no solution.
This is called an "inconsistent" system of equations.

Inconsistent system: no solution and no intersection point
Graphical Method

Case 3
This graph appears to show only one line. Actually, it is the same
line drawn twice. These "two" lines, really being the same line,
"intersect" at every point along their length. This is called a
"dependent" system, and the "solution" is the whole line.

Dependent system: the solution is the whole line
Graphical Method

ADVANTAGES:
The graphical method is good because it clearly illustrates the
principle involved.

DISADVANTAGES:
•It does not always give us an exact solution. For instance, if the
lines cross at a shallow angle it can be just about impossible to tell
where the lines cross.
•It cannot be used when we have more than two variables in the
equations.
Graphical Method

Example
Solve the following system by graphing.

2x – 3y = –2
4x + y = 24

First, we must solve each equation for "y=", so we can graph easily:

2x – 3y = –2             4x + y = 24
2x + 2 = 3y              y = –4x + 24
y = (2/3)x + (2/3)
Graphical Method

The second line will be easy to graph using just the slope and
intercept, but a T-chart is needed for the first line.

x     y = (2/3)x + (2/3)        y = –4x + 24
–4    –8/3 + 2/3 = –6/3 = –2    16 + 24 = 40
–1    –2/3 + 2/3 = 0            4 + 24 = 28
2     4/3 + 2/3 = 6/3 = 2       –8 + 24 = 16
5     10/3 + 2/3 = 12/3 = 4     –20 + 24 = 4
8     16/3 + 2/3 = 18/3 = 6     –32 + 24 = –8

Solution: (x, y) = (5, 4)
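The graphical estimate can be confirmed numerically; a sketch using NumPy's linear solver on the same system:

```python
import numpy as np

# 2x - 3y = -2
# 4x +  y = 24
A = np.array([[2.0, -3.0],
              [4.0,  1.0]])
b = np.array([-2.0, 24.0])

x, y = np.linalg.solve(A, b)
print(x, y)   # 5.0 4.0
```

This matches the intersection point (5, 4) read from the T-chart.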


Cramer’s Rule

Cramer’s rule is another technique that is best suited to small
numbers of equations. This rule states that each unknown in a system
of linear algebraic equations may be expressed as a fraction of two
determinants with denominator D and with the numerator obtained from D
by replacing the column of coefficients of the unknown in question by
the constants b1, b2, ..., bn.

For example, x1 would be computed as

       | b1 a12 a13 |
       | b2 a22 a23 |
       | b3 a32 a33 |
x1 = ------------------
              D
Example

Use Cramer’s Rule to solve the system:

5x – 4y = 2
6x – 5y = 1

Solution: We begin by setting up and evaluating the three
determinants:

     | a1 b1 |   | 5 -4 |
D  = |       | = |      | = (5)(-5) - (6)(-4) = -25 + 24 = -1
     | a2 b2 |   | 6 -5 |

     | c1 b1 |   | 2 -4 |
Dx = |       | = |      | = (2)(-5) - (1)(-4) = -10 + 4 = -6
     | c2 b2 |   | 1 -5 |

     | a1 c1 |   | 5  2 |
Dy = |       | = |      | = (5)(1) - (6)(2) = 5 - 12 = -7
     | a2 c2 |   | 6  1 |
Example

From Cramer’s Rule, we have:

x = Dx/D = -6/-1 = 6    and    y = Dy/D = -7/-1 = 7

The solution is (6, 7).

Cramer’s Rule does not apply if D = 0. When D = 0, the system is
either inconsistent or dependent. Another method must be used to solve
it.
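The rule for two equations is short enough to code directly. A minimal sketch (the function name is ours, not from the slides), including the D = 0 check:

```python
def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's rule."""
    D = a1 * b2 - a2 * b1
    if D == 0:
        # System is either inconsistent or dependent.
        raise ValueError("D = 0: Cramer's rule does not apply")
    Dx = c1 * b2 - c2 * b1   # constants replace the x-column
    Dy = a1 * c2 - a2 * c1   # constants replace the y-column
    return Dx / D, Dy / D

# 5x - 4y = 2,  6x - 5y = 1
print(cramer_2x2(5, -4, 2, 6, -5, 1))   # (6.0, 7.0)
```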
The Elimination of Unknowns

The elimination of unknowns by combining equations is an algebraic
approach that can be illustrated for a set of two equations:

a11·x1 + a12·x2 = b1     (1)
a21·x1 + a22·x2 = b2     (2)

The basic strategy is to multiply the equations by constants so that
one of the unknowns will be eliminated when the two equations are
combined. The result is a single equation that can be solved for the
remaining unknown. This value can then be substituted into either of
the original equations to compute the other variable. For example,
these equations might be multiplied by a21 and a11 to give

a11·a21·x1 + a12·a21·x2 = b1·a21     (3)
a21·a11·x1 + a22·a11·x2 = b2·a11     (4)
The Elimination of Unknowns

Subtracting Eq. 3 from Eq. 4 will, therefore, eliminate the x1 term
from the equations to yield

a22·a11·x2 - a12·a21·x2 = b2·a11 - b1·a21

which can be solved for

x2 = (a11·b2 - a21·b1) / (a11·a22 - a12·a21)

This equation can then be substituted into Eq. 1, which can be solved
for

x1 = (a22·b1 - a12·b2) / (a11·a22 - a12·a21)
The Elimination of Unknowns

Notice that these equations follow directly from Cramer’s rule, which
states

     | b1 a12 |
     | b2 a22 |      b1·a22 - a12·b2
x1 = ----------- = -------------------
     | a11 a12 |    a11·a22 - a12·a21
     | a21 a22 |

     | a11 b1 |
     | a21 b2 |      a11·b2 - b1·a21
x2 = ----------- = -------------------
     | a11 a12 |    a11·a22 - a12·a21
     | a21 a22 |

EXAMPLE
Use the elimination of unknowns to solve

3x1 + 2x2 = 18
-x1 + 2x2 = 2

Solution

x1 = (2(18) - 2(2)) / (3(2) - 2(-1)) = 4
x2 = (3(2) - (-1)(18)) / (3(2) - 2(-1)) = 3
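The closed-form expressions above translate directly into code. A sketch (function name is ours), verified against the worked example:

```python
def eliminate_2x2(a11, a12, b1, a21, a22, b2):
    """Closed-form solution from the elimination of unknowns."""
    det = a11 * a22 - a12 * a21   # shared denominator
    x1 = (a22 * b1 - a12 * b2) / det
    x2 = (a11 * b2 - a21 * b1) / det
    return x1, x2

# 3x1 + 2x2 = 18,  -x1 + 2x2 = 2
print(eliminate_2x2(3, 2, 18, -1, 2, 2))   # (4.0, 3.0)
```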
Gaussian Elimination

Gaussian Elimination is considered the workhorse of computational
science for the solution of a system of linear equations. Karl
Friedrich Gauss, a great 19th century mathematician, suggested this
elimination method as a part of his proof of a particular theorem.
Computational scientists use this "proof" as a direct computational
method.

Gaussian Elimination is a systematic application of elementary row
operations to a system of linear equations in order to convert the
system to upper triangular form. Once the coefficient matrix is in
upper triangular form, we use back substitution to find a solution.
Gaussian Elimination

The general procedure for Gaussian Elimination can be summarized in
the following steps:

1. Write the augmented matrix for the system of linear equations.

2. Use elementary row operations on the augmented matrix [A|b] to
transform A into upper triangular form. If a zero is located on the
diagonal, switch the rows until a nonzero is in that place. If you are
unable to do so, stop; the system has either infinitely many solutions
or no solution.

3. Use back substitution to find the solution of the problem.


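The three steps can be sketched in code. This is a minimal NumPy implementation (ours, not from the slides); it swaps in the largest available pivot, which also covers the zero-on-the-diagonal case from step 2:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Forward elimination with row swaps, then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Step 2: move the largest remaining pivot onto the diagonal.
        p = k + np.argmax(np.abs(A[k:, k]))
        if A[p, k] == 0:
            raise ValueError("singular: infinitely many or no solutions")
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Step 3: back substitution, from the bottom row up.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The system from the example on the next slide.
A = np.array([[0, 2, 1], [1, 1, 2], [2, 1, 1]])
b = np.array([4, 6, 7])
print(gaussian_elimination(A, b))   # approximately [2.2 1.4 1.2]
```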
Gaussian Elimination

Example
1. Write the augmented matrix for the system of linear equations.

2y + z = 4               0 2 1 | 4
x + y + 2z = 6           1 1 2 | 6
2x + y + z = 7           2 1 1 | 7

2. Use elementary row operations on the augmented matrix [A|b] to
transform A into upper triangular form.

0 2 1 | 4                              1 1 2 | 6
1 1 2 | 6    swap rows 1 and 2    →    0 2 1 | 4
2 1 1 | 7                              2 1 1 | 7    (r3) - (2·r1)
Gaussian Elimination

1  1  2 |  6                            1 1  2   |  6
0  2  1 |  4                       →    0 2  1   |  4
0 -1 -3 | -5   (r3) - (-1/2·r2)         0 0 -5/2 | -3

Notice that the original coefficient matrix had a "0" on the diagonal
in row 1. Since we needed to use multiples of that diagonal element to
eliminate the elements below it, we switched two rows in order to move
a nonzero element into that position. We can use the same technique
when a "0" appears on the diagonal as a result of calculation. If it
is not possible to move a nonzero onto the diagonal by interchanging
rows, then the system has either infinitely many solutions or no
solution, and the coefficient matrix is said to be singular.

Since all of the nonzero elements are now located in the "upper
triangle" of the matrix, we have completed the first phase of solving
a system of linear equations using Gaussian Elimination.
Gaussian Elimination

The second and final phase of Gaussian Elimination is back
substitution. During this phase, we solve for the values of the
unknowns, working our way up from the bottom row.

3. Use back substitution to find the solution of the problem.

1 1  2   |  6
0 2  1   |  4
0 0 -5/2 | -3

The last row in the augmented matrix represents the equation:

-5/2 · z = -3   →   z = 6/5
Gaussian Elimination

The second row of the augmented matrix represents the equation:

2y + z = 4   →   y = (4 - z)/2 = (4 - 6/5)/2 = 7/5

Finally, the first row of the augmented matrix represents the
equation:

x + y + 2z = 6   →   x = 6 - y - 2z = 6 - 7/5 - 2·(6/5) = 11/5
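The back-substituted solution can be verified by plugging it back into the original system; a quick NumPy check:

```python
import numpy as np

# The system from the example: 2y+z=4, x+y+2z=6, 2x+y+z=7.
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 1.0]])
b = np.array([4.0, 6.0, 7.0])

x = np.linalg.solve(A, b)
print(x)                      # approximately [2.2 1.4 1.2], i.e. [11/5, 7/5, 6/5]
print(np.allclose(A @ x, b))  # True: the residual A·x - b is (numerically) zero
```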
Gauss-Jordan Elimination

As in Gaussian Elimination, again we are transforming the coefficient
matrix into another matrix that is much easier to solve, and the
system represented by the new augmented matrix has the same solution
set as the original system of linear equations.

In Gauss-Jordan Elimination, the goal is to transform the coefficient
matrix into a diagonal matrix, and the zeros are introduced into the
matrix one column at a time. We work to eliminate the elements both
above and below the diagonal element of a given column in one pass
through the matrix.
Gauss-Jordan Elimination

The general procedure for Gauss-Jordan Elimination can be summarized
in the following steps:

1. Write the augmented matrix for the system of linear equations.

2. Use elementary row operations on the augmented matrix [A|b] to
transform A into diagonal form. If a zero is located on the diagonal,
switch the rows until a nonzero is in that place. If you are unable to
do so, stop; the system has either infinitely many solutions or no
solution.

3. By dividing the diagonal element and the right-hand-side element in
each row by the diagonal element in that row, make each diagonal
element equal to one.
Gauss-Jordan Elimination

Example
We will apply Gauss-Jordan Elimination to the same example that was
used to demonstrate Gaussian Elimination.

1. Write the augmented matrix for the system of linear equations.

2y + z = 4               0 2 1 | 4
x + y + 2z = 6           1 1 2 | 6
2x + y + z = 7           2 1 1 | 7

2. Use elementary row operations on the augmented matrix [A|b] to
transform A into diagonal form.

0 2 1 | 4                              1 1 2 | 6
1 1 2 | 6    swap rows 1 and 2    →    0 2 1 | 4
2 1 1 | 7                              2 1 1 | 7    (r3) - (2·r1)
Gauss-Jordan Elimination

1  1  2 |  6   (r1) - (1/2·r2)      1 0  3/2 |  4   (r1) + (3/5·r3)      1 0  0   | 11/5
0  2  1 |  4                    →   0 2  1   |  4   (r2) + (2/5·r3)  →   0 2  0   | 14/5
0 -1 -3 | -5   (r3) + (1/2·r2)      0 0 -5/2 | -3                        0 0 -5/2 | -3

3. By dividing the diagonal element and the right-hand-side element in
each row by the diagonal element in that row, make each diagonal
element equal to one.

1 0  0   | 11/5                        1 0 0 | 11/5
0 2  0   | 14/5   (r2) · (1/2)    →    0 1 0 | 7/5
0 0 -5/2 | -3     (r3) · (-2/5)        0 0 1 | 6/5

Notice that the coefficient matrix is now a diagonal matrix with ones
on the diagonal. This is a special matrix called the identity matrix.

Then, x = 11/5, y = 7/5, and z = 6/5.
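The whole procedure can be sketched in a few lines. This implementation (ours) normalizes each pivot row as it goes, a common variant of the column-at-a-time scheme described above; the end result is the same identity-matrix form:

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce [A|b] all the way to the identity, one column at a time."""
    aug = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        # Swap in a nonzero pivot if needed (step 2).
        if aug[k, k] == 0:
            rows = np.nonzero(aug[k + 1:, k])[0]
            if rows.size == 0:
                raise ValueError("singular: infinitely many or no solutions")
            p = k + 1 + rows[0]
            aug[[k, p]] = aug[[p, k]]
        # Normalize the pivot row, then clear the column above and below.
        aug[k] /= aug[k, k]
        for i in range(n):
            if i != k:
                aug[i] -= aug[i, k] * aug[k]
    return aug[:, -1]   # the right-hand-side column is now the solution

A = np.array([[0, 2, 1], [1, 1, 2], [2, 1, 1]])
b = np.array([4, 6, 7])
print(gauss_jordan(A, b))   # approximately [2.2 1.4 1.2]
```

Unlike Gaussian Elimination, no back substitution is needed: the solution can be read off directly.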
LU Decomposition

Just as was the case with Gauss elimination, LU decomposition requires
pivoting to avoid division by zero. However, to simplify the following
description, we will defer the issue of pivoting until after the
fundamental approach is elaborated. In addition, the following
explanation is limited to a set of three simultaneous equations; the
results can be directly extended to n-dimensional systems.

Linear algebraic notation can be rearranged to give

[A]{X} - {B} = 0     (1)

Suppose that this equation could be expressed as an upper triangular
system:

u11 u12 u13     x1     d1
0   u22 u23  ·  x2  =  d2     (2)
0   0   u33     x3     d3

Elimination is used to reduce the system to upper triangular form. The
above equation can also be expressed in matrix notation and rearranged
to give

[U]{X} - {D} = 0     (3)
LU Decomposition

Now, assume that there is a lower diagonal matrix with 1's on the
diagonal,

        1   0   0
[L] =  l21  1   0      (4)
       l31 l32  1

that has the property that when Eq. 3 is premultiplied by it, Eq. 1 is
the result. That is,

[L]{[U]{X} - {D}} = [A]{X} - {B}     (5)

If this equation holds, it follows from the rules for matrix
multiplication that

[L][U] = [A]     (6)

and

[L]{D} = {B}     (7)
LU Decomposition

A two-step strategy for obtaining solutions can be based on Eqs. 3, 6
and 7:

• LU decomposition step. [A] is factored or "decomposed" into lower
[L] and upper [U] triangular matrices.

• Substitution step. [L] and [U] are used to determine a solution {X}
for a right-hand side {B}. This step itself consists of two steps.
First, Eq. 7 is used to generate an intermediate vector {D} by forward
substitution. Then, the result is substituted into Eq. 3, which can be
solved by back substitution for {X}.

In fact, Gauss elimination can be implemented in this way.
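The two-step strategy can be sketched as follows. This is a minimal Doolittle-style factorization without pivoting (as in the simplified description above), applied to a made-up example system chosen so that no zero pivots arise:

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU without pivoting: A = L @ U, 1's on L's diagonal (Eq. 6)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]       # store the elimination factor
            U[i, k:] -= L[i, k] * U[k, k:]    # eliminate below the pivot
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # Forward substitution: [L]{D} = {B}  (Eq. 7).
    d = np.zeros(n)
    for i in range(n):
        d[i] = b[i] - L[i, :i] @ d[:i]
    # Back substitution: [U]{X} = {D}  (Eq. 3).
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Hypothetical example system (not from the slides).
A = np.array([[1.0, 1.0, 2.0],
              [2.0, 1.0, 1.0],
              [4.0, 3.0, 1.0]])
b = np.array([6.0, 7.0, 5.0])

L, U = lu_decompose(A)
print(np.allclose(L @ U, A))   # True: the factorization reproduces A
print(lu_solve(L, U, b))
```

The payoff of the two-step strategy is that once [L] and [U] are computed, new right-hand sides {B} can be solved cheaply by repeating only the substitution step.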
Bibliography

CHAPRA, Steven. Numerical Methods for Engineers. McGraw-Hill, 2000.
http://www.efunda.com/math
http://www.purplemath.com
http://ceee.rice.edu/Books
