
Methods of Finding the Solution of a Linear System
There are many methods for finding the solution of a linear system of equations, such as the Inverse Method, Cramer's Rule, LU-Decomposition, the Gauss-Jordan Method, the Gauss Elimination Method, the Graphing Method, the Substitution Method, the Jacobi Method, Cholesky's Factorization Method, and the Gauss-Seidel Method; related topics include ill-conditioned systems, working rules, and the diagonal dominance theorem.

Jacobi Method
In numerical linear algebra, the Jacobi method (or Jacobi iterative method) is an algorithm for
determining the solutions of a diagonally dominant system of linear equations. Each diagonal element is
solved for, and an approximate value is plugged in. The process is then iterated until it converges. This
algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The
method is named after Carl Gustav Jacob Jacobi.
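
For a system Ax = b with non-zero diagonal entries, one Jacobi step updates every component simultaneously, using only the values from the previous iteration:

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j \neq i} a_{ij}\, x_j^{(k)}\Big), \qquad i = 1, \dots, n.$$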

How does it work?
In each equation we isolate one variable on the left-hand side: the first equation is solved for the first variable, the second for the next variable, and so on. Starting from an initial guess, we substitute the current values into the right-hand sides to compute a new value for each left-hand variable; within a single pass, every equation uses the same set of values. We repeat this procedure, interchanging rows beforehand if needed to make the system diagonally dominant, and record the results of each pass in a table; for this reason it is called an iterative process. The iteration stops once successive values of the variables agree to the desired accuracy, and those values form the solution set. The method does not reach the exact answer in a finite number of steps, but when it converges it gives an approximate solution of the linear system to any required accuracy.
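
Before working an example by hand, here is a minimal sketch of the procedure in Python (NumPy); the sample system, the tolerance, and the iteration cap are assumptions chosen for illustration, not taken from the text:

import numpy as np

def jacobi(A, b, x0=None, tol=1e-6, max_iter=100):
    # Jacobi iteration: solve each equation for its diagonal variable,
    # then substitute the previous iterate into every right-hand side.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)            # diagonal entries a_ii
    R = A - np.diagflat(D)    # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D   # x_i = (b_i - sum_{j != i} a_ij x_j) / a_ii
        if np.max(np.abs(x_new - x)) < tol:   # stop when the values settle
            return x_new
        x = x_new
    return x

# A hypothetical diagonally dominant system with solution x = y = z = 1.
A = [[10, 1, 1],
     [1, 10, 1],
     [1, 1, 10]]
b = [12, 12, 12]
print(jacobi(A, b))   # -> approximately [1. 1. 1.]

Let's take an example.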

Example
Solve the following linear system of equations by Jacobi Method.

Solution: The given system can be written as


[Equations (4)-(6), the system rearranged with one variable isolated in each equation, are not legible in the source.]

Let the initial solution be x = 0, y = 0, z = 0.


Putting these values into the R.H.S. of equations 4, 5 and 6:

Now we rewrite the system in diagonally dominant form by interchanging the second and third equations:

[Equations (7)-(9), the rearranged system, are not legible in the source.]

Now take the result of the first iteration, x = 1, y = 1, z = -2, as the initial solution for the rearranged system.
Putting these values into the R.H.S. of equations 7, 8 and 9:

Continuing in this way, we obtain the following iterates:

Iteration    x        y        z
First        1.0      1.0      -2
Second       1.095    1.095    1.048
Third        0.995    1.026    0.969
Fourth       0.993    0.990    1.0
Fifth        1.002    0.998    1.004
Sixth        1.001    1.001    1.001
Seventh      1.0      1.0      1.0

After the seventh iteration the values settle at x = 1, y = 1, z = 1, which is the approximate solution of the system.

Inverse Method
Suppose you are given an equation in one variable, such as 4x = 10. You can find the value of x that solves it by multiplying both sides by the inverse of 4: (1/4)(4x) = (1/4)(10), so the solution is x = 2.5.
Sometimes we can do something very similar to solve systems of linear equations; in this case, we will
use the inverse of the coefficient matrix. But first we must check that this inverse exists! The conditions
for the existence of the inverse of the coefficient matrix are the same as those for using Cramer's rule,
that is
1. The system must have the same number of equations as variables, that is, the coefficient
matrix of the system must be square.
2. The determinant of the coefficient matrix must be non-zero. The reason, of course, is that the
inverse of a matrix exists precisely when its determinant is non-zero. (A quick numerical check of both conditions is sketched below.)
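
A minimal sketch of such a check in Python (NumPy); the cutoff 1e-12 for treating a floating-point determinant as zero is an assumption:

import numpy as np

def has_unique_inverse(A):
    # Condition 1: the coefficient matrix must be square.
    # Condition 2: its determinant must be non-zero.
    A = np.asarray(A, dtype=float)
    is_square = A.ndim == 2 and A.shape[0] == A.shape[1]
    return is_square and abs(np.linalg.det(A)) > 1e-12  # assumed cutoff

print(has_unique_inverse([[2, 1], [5, 3]]))  # True:  det = 1
print(has_unique_inverse([[1, 2], [2, 4]]))  # False: det = 0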

How does it work?
We put the equations in matrix form, that is,

$$AX = B,$$

where A is the coefficient matrix, X is the column of unknowns, and B is the column of constants.

To find the solution we multiply A⁻¹ with the matrix B; the resulting matrix is the solution:

$$X = A^{-1}B.$$

To find the inverse of A we first find |A| and the adjoint matrix of A:

$$A^{-1} = \frac{1}{|A|}\,\operatorname{adj}(A), \qquad |A| \neq 0.$$

The adjoint matrix equals the matrix of cofactors in transposed form:

$$\operatorname{adj}(A) = C^{T}, \qquad C_{ij} = (-1)^{i+j} M_{ij},$$

where M_ij is the minor obtained by deleting row i and column j of A.
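
Here is the same recipe sketched in Python (NumPy); the 2x2 system below is a hypothetical illustration, not the example from the text, and in practice one would call np.linalg.solve rather than forming the inverse explicitly:

import numpy as np

def inverse_via_adjoint(A):
    # Invert A using A^-1 = adj(A) / |A|, where adj(A) is the
    # transpose of the cofactor matrix.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det_A = np.linalg.det(A)
    if abs(det_A) < 1e-12:   # assumed tolerance for "determinant is zero"
        raise ValueError("matrix is singular; no inverse exists")
    cof = np.zeros_like(A)
    for i in range(n):
        for j in range(n):
            # Minor M_ij: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T / det_A     # adjoint divided by the determinant

# Solve AX = B via X = A^-1 B for the hypothetical system
# 2x + y = 3, 5x + 3y = 8 (solution x = 1, y = 1).
A = np.array([[2.0, 1.0], [5.0, 3.0]])
B = np.array([3.0, 8.0])
print(inverse_via_adjoint(A) @ B)   # -> [1. 1.]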
Let's take an example.

Example
Solve the following system

We can write the system in the form AX = B and solve it as X = A⁻¹B. [The numbers of this worked example (the given system, its matrix form, the determinant |A|, the cofactor and adjoint calculations, and the resulting solution) are not legible in the source.]
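
For illustration, here is the full computation for the hypothetical 2x2 system used in the sketch above (2x + y = 3, 5x + 3y = 8); it is not the example from the original text:

$$A = \begin{bmatrix} 2 & 1 \\ 5 & 3 \end{bmatrix}, \qquad B = \begin{bmatrix} 3 \\ 8 \end{bmatrix}, \qquad |A| = (2)(3) - (1)(5) = 1,$$

$$\operatorname{adj}(A) = \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix}, \qquad A^{-1} = \frac{1}{|A|}\operatorname{adj}(A) = \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix},$$

$$X = A^{-1}B = \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix}\begin{bmatrix} 3 \\ 8 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \text{so } x = 1,\; y = 1.$$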

References
1. Jacobi method, Wikipedia: https://en.wikipedia.org/wiki/Jacobi_method
2. M. Iqbal, An Introduction to Numerical Analysis.
