
SCHOOL OF MATHEMATICS

MATHEMATICS FOR PART I ENGINEERING

Self Study Course

MODULE 17 MATRICES II

Module Topics

1. Inverse of a matrix using cofactors
2. Sets of linear equations
3. Solution of sets of linear equations using the elimination method
4. Inverse of a matrix using the elimination method

A: Work Scheme based on JAMES (THIRD EDITION)

1. You have discovered in Module 16 how to determine the adjoint matrix. The latter provides a direct
method for calculating the inverse of a matrix.
Turn to p.295 and study section 5.4, up to the start of Example 5.20. The inverse of a matrix is defined
near the start of this section. There are two cases to consider depending on whether |A| = 0 (A is singular)
or |A| ≠ 0 (A is non-singular). When A is non-singular the inverse A−1 exists, and can be calculated
using the shaded formula on p.295. This is often called the direct method of calculating the inverse, or
the cofactor method.
Study Example 5.20. In the solution to part (a), the matrix adj A is calculated by first obtaining the minors.
For a 2 × 2 matrix, omitting a row and a column leaves the minor as the determinant of a single element,
which is the element itself. Hence, for A one obtains M11 = |3| = 3 and A11 = (−1)^(1+1) M11 = 3. The
other cofactors can be found using a similar argument. The value of the determinant is clearly 1(3) − 2(2) =
3 − 4 = −1, as stated in the solution. The inverse A−1 can then be easily calculated using the general
formula.
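
If you want to check a calculation of this kind on a computer, the short Python sketch below (an addition to this module, not part of J.) constructs the inverse directly from the cofactors. The 2 × 2 matrix used is the one implied by the cofactors and determinant quoted above, namely A with rows (1 2) and (2 3).

from fractions import Fraction

# 2 x 2 matrix implied by the cofactors and determinant quoted above (Example 5.20(a))
a11, a12 = Fraction(1), Fraction(2)
a21, a22 = Fraction(2), Fraction(3)

det = a11 * a22 - a12 * a21          # 1(3) - 2(2) = -1, so A is non-singular

# adj A is the transpose of the matrix of cofactors; for a 2 x 2 matrix this
# swaps the diagonal elements and changes the sign of the off-diagonal ones
adjA = [[ a22, -a12],
        [-a21,  a11]]

A_inv = [[element / det for element in row] for row in adjA]
print(A_inv)    # entries: -3, 2 (first row) and 2, -1 (second row)

You can confirm the result by multiplying A by the printed matrix and checking that the product is the unit matrix.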

2. Study the theory on p.297, and work through Example 5.21. The inverses A−1 and B−1 are found by
the method used in Example 5.20.

***Do Exercise 34 on p.298***


 
***Do Exercise A: Use the direct method to find the inverse of the matrix
    [ 1  2 ]
    [ 2  1 ]   ***

3. An extremely important use of matrices is in the solution of sets (or systems) of linear equations.
These occur most frequently in the numerical solution of problems. In these situations the number of
equations can be huge and solutions are possible only with the use of computers. It is important, however,
that you understand the principles underlying these numerical methods and in this module the theory is
presented and applied by hand to small systems.
Study section 5.5, which starts on p.299, up to the beginning of Example 5.23. The simultaneous linear
equations (5.19) are a system of n equations in n unknowns, so that if the system is written in the matrix
form AX = b then A is an n × n square matrix. There are four different cases to consider depending on
whether, or not, |A| = 0 or b = 0. The types of solution to expect for these four cases, (a)-(d), are stated
on pp.300 and 301.
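
If you have access to Python, the following sketch (not taken from J.) may help you see how the four cases can be distinguished in practice. It uses the standard result that, when |A| = 0 and b ≠ 0, solutions exist only if the rank of A equals the rank of the augmented matrix (A b); the function name classify is purely illustrative.

import numpy as np

def classify(A, b, tol=1e-12):
    """Rough classification of the square system AX = b into the four cases."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    singular = abs(np.linalg.det(A)) < tol
    homogeneous = np.allclose(b, 0.0)

    if not singular:
        return ("only the trivial solution X = 0" if homogeneous
                else "a unique solution X = A^-1 b")
    # singular cases: compare the rank of A with the rank of the augmented matrix (A b)
    if homogeneous or np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack([A, b])):
        return "infinitely many solutions"
    return "no solution"

print(classify([[1, 2], [2, 3]], [5, 8]))    # non-singular with b != 0: unique solution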

4. Work through all five parts of Example 5.23. The purpose of this Example is to give you practice in
classifying systems of equations. The determination of the solutions, where they exist, will be looked at in
more detail in the following sections.
Study Example 5.24. The details of calculating A−1 are omitted, but you should be able to determine the
inverse and find the unique solution X.
Work through Examples 5.25 and 5.26. You will need to fill in the details in calculating the determinant
in Example 5.25. Eigenvalues and eigenvectors, which appear in Example 5.26, are investigated in Module
18.

5. Read more of section 5.5 on pp.303 and 304, up to the beginning of Example 5.27. Cramer’s rule on
p.304 is a neat way of writing the solution. However, it should be emphasised that it is an inefficient way
of calculating the solution since determinants of large matrices are very time-consuming to evaluate.
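
As a brief illustration (not part of J.), the sketch below implements Cramer's rule: the i-th unknown is det(A_i)/|A|, where A_i is A with its i-th column replaced by b. The system at the end is an invented one, included only to show the call.

import numpy as np

def cramer(A, b):
    """Solve AX = b by Cramer's rule (only sensible for small, non-singular A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                 # replace column i of A by b
        x[i] = np.linalg.det(Ai) / detA
    return x

# invented 2 x 2 system, used only to show the call:  x + 2y = 5,  3x + 4y = 11
print(cramer([[1, 2], [3, 4]], [5, 11]))     # approximately [1. 2.]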

***Do Exercise 39 on p.306***

6. Having discussed the types of solution that arise, it is now important to discover the appropriate methods
for finding these solutions. Elimination methods are commonly used and these are discussed in section
5.5.2 starting on p.307. Study this section, stopping after the first two lines on p.310. Note that the
row operations apply to elements in A and b, but leave the elements in X unchanged. The elementary
row operations introduced on p.309 can be used to convert a matrix A into upper-triangular form. The
general procedure for solving a system of equations in upper-triangular form is stated on p.310 but it is
easier to understand by looking at Example 5.29, which is discussed below.
Study Example 5.29. The system is written in matrix form and then the elementary row operations are used
to convert the equations to upper triangular form. The reduced system of equations can then be written

    x + 2y + 3z = 10,     y + (4/3)z = 10/3,     z = 1.

With back-substitution you must work back through these reduced equations starting with the last.
Clearly the third equation gives z = 1, and this can then be substituted into the next-to-last equation to
give y = 10/3 − (4/3)z = 10/3 − (4/3)(1) = 6/3 = 2. Substituting for y and z into the first equation then
enables x to be easily calculated. (At the end of the calculation you should always substitute your final
answer into the original equations to make sure that your solution is correct. If the equations are not
satisfied then try to find the error.)
The above procedure can be written as separate algorithms for special matrices, but the basic method is
unchanged. Hence, omit the sections on the tridiagonal, or Thomas, algorithm and Gauss elimination and
move on to p.316. Work through Examples 5.31 and 5.32.
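
If it helps to see the procedure as code, the sketch below (an addition to this module) reduces a system to upper-triangular form by elementary row operations, without row interchanges, and then back-substitutes. Applying it to the reduced system quoted above simply reproduces the back-substitution, giving z = 1, y = 2 and x = 3.

from fractions import Fraction

def solve_by_elimination(A, b):
    """Reduce AX = b to upper-triangular form (no row interchanges), then back-substitute."""
    n = len(b)
    A = [[Fraction(x) for x in row] for row in A]
    b = [Fraction(x) for x in b]
    # forward elimination: create zeros below the leading diagonal, column by column
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back-substitution: work back through the reduced equations, starting with the last
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# the reduced system quoted above: x + 2y + 3z = 10,  y + (4/3)z = 10/3,  z = 1
U = [[1, 2, 3], [0, 1, Fraction(4, 3)], [0, 0, 1]]
c = [10, Fraction(10, 3), 1]
print(solve_by_elimination(U, c))    # [Fraction(3, 1), Fraction(2, 1), Fraction(1, 1)]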

***Do Exercise 40 on p.306***

***Do Exercise 49 on p.321 using the standard elimination method***

7. As mentioned earlier the cofactor method is not efficient when calculating the inverses of large matrices,
since it takes a long time to evaluate large determinants.
Fortunately there are better methods of calculating inverses, and one of them uses the elimination
method. This procedure is not discussed in J. so it is given below. Suppose you require the inverse of the
n × n square matrix A. Then you form a new matrix (A I), in which the n × n unit matrix I is appended to
A as extra columns, so that the new matrix is n × 2n. The basic method is to use row operations to reduce
A to the unit matrix I. By simultaneously carrying out the same row operations on I the latter changes to
A−1 . Hence the matrix (A I) becomes (I A−1 ).

The elimination method discussed in section 6 reduces a matrix to upper triangular form. During this
process the elements below the leading diagonal in a column are made zero. To reduce A to the unit matrix
it is also necessary to make the elements above the leading diagonal zero. Start with the first column, and
then consider columns 2, 3... in turn. We illustrate the method with the example below.
 
Example: If

        [  1   2   3 ]
    A = [ −1   1   1 ]
        [  0   1  −1 ]

use the elimination method to determine A−1.

Start with the matrix (A I) :

    [  1   2   3 |  1   0   0 ]
    [ −1   1   1 |  0   1   0 ]
    [  0   1  −1 |  0   0   1 ]

Add row 1 to row 2 :

    [  1   2   3 |  1   0   0 ]
    [  0   3   4 |  1   1   0 ]
    [  0   1  −1 |  0   0   1 ]

Now move to the second column, where we need zeros both above and below the element 3.
 
Divide row 2 by 3 :

    [  1   2    3  |  1    0    0 ]
    [  0   1   4/3 | 1/3  1/3   0 ]
    [  0   1   −1  |  0    0    1 ]

Subtract 2 × row 2 from row 1 :

    [  1   0   1/3 | 1/3  −2/3  0 ]
    [  0   1   4/3 | 1/3   1/3  0 ]
    [  0   1   −1  |  0     0   1 ]

Subtract row 2 from row 3 :

    [  1   0   1/3  |  1/3   −2/3  0 ]
    [  0   1   4/3  |  1/3    1/3  0 ]
    [  0   0  −7/3  | −1/3   −1/3  1 ]

Multiply row 3 by −3/7 :

    [  1   0   1/3 | 1/3  −2/3    0  ]
    [  0   1   4/3 | 1/3   1/3    0  ]
    [  0   0    1  | 1/7   1/7  −3/7 ]

The first and second columns are now correct, so we must move on to the third column.
 
Subtract 1/3 × row 3 from row 1 :

    [  1   0    0  | 2/7  −5/7   1/7 ]
    [  0   1   4/3 | 1/3   1/3    0  ]
    [  0   0    1  | 1/7   1/7  −3/7 ]

Subtract 4/3 × row 3 from row 2 :

    [  1   0    0  | 2/7  −5/7   1/7 ]
    [  0   1    0  | 1/7   1/7   4/7 ]
    [  0   0    1  | 1/7   1/7  −3/7 ]

The matrix A has been changed to I, and the theory says that the right-hand side I has become A−1 .
Hence, it has been shown that
 
           [ 2/7  −5/7   1/7 ]
    A−1 =  [ 1/7   1/7   4/7 ]
           [ 1/7   1/7  −3/7 ]

The evaluation of A−1 by the above method appears lengthy but for large matrices it can be shown that it
is more efficient than determining the inverse using the cofactor method.
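
The row reduction above can also be carried out by a short program. The sketch below (an addition to this module) builds the augmented matrix (A I) and applies elementary row operations until the left-hand half becomes I; it assumes no zero pivot is encountered, which is the case for the matrix A used in the example. Exact fractions are used so that the output matches the hand calculation.

from fractions import Fraction

def inverse_by_elimination(A):
    """Reduce (A I) to (I A^-1) by elementary row operations (assumes no zero pivots)."""
    n = len(A)
    # form the augmented matrix (A I)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(int(i == j)) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        # scale row k so that the pivot element becomes 1
        pivot = M[k][k]
        M[k] = [element / pivot for element in M[k]]
        # create zeros above and below the pivot
        for i in range(n):
            if i != k and M[i][k] != 0:
                factor = M[i][k]
                M[i] = [M[i][j] - factor * M[k][j] for j in range(2 * n)]
    # the right-hand half is now A^-1
    return [row[n:] for row in M]

A = [[1, 2, 3], [-1, 1, 1], [0, 1, -1]]
for row in inverse_by_elimination(A):
    print(row)    # rows: 2/7 -5/7 1/7,  1/7 1/7 4/7,  1/7 1/7 -3/7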

***Do Exercise B: Using the elimination method determine the inverses of the matrices

        [ 1  2 ]              [ −1   2   1 ]
    (i) [ 2  1 ]         (ii) [  0   1  −2 ]
                              [  1   4  −1 ]
***

8. To complete the module it is instructive to read the section on ill-conditioning which starts on p.319.
Look through Example 5.34, which shows that difficulties arise when solving the system AX = b when |A|
is very small. Read the remainder of this section, up to the beginning of Exercises 5.5.3.
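
The matrix in the sketch below is not the one in Example 5.34, but it illustrates the same point: when |A| is very small, a tiny change in b can produce a large change in the solution of AX = b.

import numpy as np

# an illustrative nearly singular matrix (not the one used in Example 5.34)
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
print(np.linalg.det(A))              # about 1e-4, so |A| is very small

b1 = np.array([2.0, 2.0])
b2 = np.array([2.0, 2.0001])         # a tiny change in the right-hand side
print(np.linalg.solve(A, b1))        # [2. 0.]
print(np.linalg.solve(A, b2))        # [1. 1.]: a large change in the solution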

B: Work Scheme based on STROUD (FIFTH EDITION)


S. covers part, but not all, of this Module. The calculation of the inverse of a matrix using cofactors can be
found in Programme 5 of S. Start at p.549 and work through frames 28-34.
Before reading the section on solving sets of linear equations, it would be helpful for you to work through
sections 3 and 4 of the scheme based on J. This states the general cases that can arise.
Then return to p.552 of S. and study frames 35-46. These frames show how to solve systems of n equations
in n unknowns, AX = b, using either the matrix inverse or the elimination method. Note, however, that S.
only considers the straightforward situations in which the determinant of A is non-zero, so that its inverse A−1
exists.
The calculation of the inverse of a matrix using the elimination method is not in S., so to complete the
module work through section 7 of A: Work Scheme based on JAMES (THIRD EDITION).

Specimen Test 17

1. (i) State a formal expression, involving cofactors, for the inverse of a square matrix A.
 
(ii) Hence find the inverse of
         [ 1  3 ]
         [ 2  4 ]

2. (i) State a condition on the matrix A for the set of n inhomogeneous equations in n unknowns AX = b to
have a unique solution.
(ii) State a condition on the matrix A for the set of n homogeneous equations in n unknowns AX = 0 to
have a non-trivial solution.

3. Find the value of α which is necessary for the set of equations

αx + y − z = 0
2x + 3y + 3z = 0
4x + 5y + z = 0

to have a non-trivial solution. (You are not asked to solve the equations).

4. Use the elimination method to solve the set of equations

x + 2y + z = 10
2x + y + z = 6
10x − y + 3z = 2

5. Use the elimination method to find the inverse of the matrix


 
    [ 1   1   1 ]
    [ 2   0   2 ]
    [ 2  −2   1 ]

