
Elementary transformations and matrix inversion
Math 130 Linear Algebra

D Joyce, Fall 2012

Elementary row operations again. We used the elementary row operations when we solved systems of linear equations. We'll study them more formally now, and associate each one with a particular invertible matrix. When you want to solve a system of linear equations Ax = b, form the augmented matrix by appending the column b to the right of the coefficient matrix A. Then solve the system of equations by operating on the rows of the augmented matrix rather than on the actual equations in the system. The three kinds of elementary row operations are those operations on a matrix that don't change the solution set of the corresponding system of linear equations:

1. Exchange two rows.
2. Multiply or divide a row by a nonzero constant.
3. Add or subtract a multiple of one row from another.

With these three operations, we can convert the augmented matrix to a particularly simple form, the reduced echelon form, that allows us to read off the solutions.

Two matrices are said to be row equivalent if one can be transformed to the other by means of a finite sequence of elementary row operations.
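These operations are also easy to carry out numerically. Here is a minimal sketch in Python with NumPy, applied to the augmented matrix of the example system worked out below; the array name M and the particular operations chosen are illustrative, not part of the notes themselves.

import numpy as np

# Augmented matrix [A | b] of the example system used later in these notes.
M = np.array([[0., 2., 2., 4.],
              [1., 0., 2., 3.],
              [0., 3., 3., 6.]])

# 1. Exchange two rows (here the first and second).
M[[0, 1]] = M[[1, 0]]

# 2. Multiply or divide a row by a nonzero constant (divide the second row by 2).
M[1] = M[1] / 2

# 3. Add or subtract a multiple of one row from another
#    (subtract 3 times the second row from the third).
M[2] = M[2] - 3 * M[1]

print(M)  # the reduced echelon form of the augmented matrix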

Elementary matrices. Now to find the elementary matrices that correspond to these three kinds of elementary row operations. In each case, we're looking for a square matrix E such that EA = B, where A is the augmented matrix for the original system of equations and B is the augmented matrix for the new system. In each case, we'll illustrate it with a system of three equations in three unknowns. Let's start with this system of equations.

2y + 2z = 4
x + 2z = 3
3y + 3z = 6

It's almost in reduced echelon form, and only three steps are needed to put it in that form. Its augmented matrix is

\[
\begin{bmatrix} 0 & 2 & 2 & 4 \\ 1 & 0 & 2 & 3 \\ 0 & 3 & 3 & 6 \end{bmatrix}
\]

For the first step, we'll exchange the first two rows. The elementary matrix E1 to do that is almost the identity matrix. Only the two 1s in the rows to be exchanged need to be moved to the opposite rows.

\[
\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 0 & 2 & 2 & 4 \\ 1 & 0 & 2 & 3 \\ 0 & 3 & 3 & 6 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 2 & 3 \\ 0 & 2 & 2 & 4 \\ 0 & 3 & 3 & 6 \end{bmatrix}
\]
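One way to see this concretely is to build E1 by exchanging the first two rows of the identity matrix and multiply on the left. A minimal sketch, assuming NumPy:

import numpy as np

A = np.array([[0., 2., 2., 4.],
              [1., 0., 2., 3.],
              [0., 3., 3., 6.]])

# E1 is the identity matrix with its first two rows exchanged.
E1 = np.eye(3)
E1[[0, 1]] = E1[[1, 0]]

print(E1 @ A)  # the first two rows of A are exchanged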

Next, let's divide the second row by 2, that is, multiply it by 0.5. Again, the elementary matrix E2 to do that is almost the identity matrix. Just replace the 1 in the second row by 0.5.

\[
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0.5 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 2 & 3 \\ 0 & 2 & 2 & 4 \\ 0 & 3 & 3 & 6 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 2 & 3 \\ 0 & 1 & 1 & 2 \\ 0 & 3 & 3 & 6 \end{bmatrix}
\]

Finally, we'll subtract 3 times the second row from the third. Yet again, the elementary matrix E3 to do that is almost the identity matrix. All that's needed is to place a -3 in the second column of the third row.

\[
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -3 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 2 & 3 \\ 0 & 1 & 1 & 2 \\ 0 & 3 & 3 & 6 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 2 & 3 \\ 0 & 1 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\]
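The whole reduction is then the single product E3 E2 E1 applied to the augmented matrix. A minimal sketch, assuming NumPy (the variable names are illustrative):

import numpy as np

A = np.array([[0., 2., 2., 4.],
              [1., 0., 2., 3.],
              [0., 3., 3., 6.]])

E1 = np.array([[0., 1., 0.],      # exchange the first two rows
               [1., 0., 0.],
               [0., 0., 1.]])
E2 = np.array([[1., 0.,  0.],     # multiply the second row by 0.5
               [0., 0.5, 0.],
               [0., 0.,  1.]])
E3 = np.array([[1.,  0., 0.],     # subtract 3 times the second row from the third
               [0.,  1., 0.],
               [0., -3., 1.]])

E = E3 @ E2 @ E1
print(E @ A)  # the reduced echelon form computed above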

The matrix is now in reduced echelon form and the general solution can be read off from it:

(x, y, z) = (-2z + 3, -z + 2, z)

where z can be any number.

Note that each kind of elementary matrix E1, E2, and E3 is invertible, and its inverse is another (or the same) elementary matrix of the same kind. The inverse for E1 is itself. The inverse for E2 looks just like E2 except you use the reciprocal for the special diagonal entry. And the inverse for E3 looks just like E3 except you use the negation for the off-diagonal entry.

Proof that the inversion algorithm works. Recall the method used to find the inverse of a matrix.

Theorem 1. To find the inverse of a square matrix A, first adjoin the identity matrix to its right to get an n × 2n matrix [A|I]. Next, convert that matrix to reduced echelon form. If the result looks like [I|B], then B is the desired inverse A⁻¹. But if the square matrix in the left half of the reduced echelon form is not the identity, then A has no inverse.

Proof. Suppose the sequence of elementary row operations performed on the matrix [A|I] to put it into reduced echelon form is described by the elementary matrices E1, E2, ..., Es. After the first elementary operation, the matrix [A|I] is converted to [E1 A | E1 I]. After the second, it becomes [E2 E1 A | E2 E1 I]. And after the last it looks like [Es ··· E2 E1 A | Es ··· E2 E1 I].

Let's denote the product of the Ei's as E, that is, E = Es ··· E2 E1. Then we have [EA | E] as the reduced echelon form.

Now, we have two cases to consider. One is where EA, the first half of the n × 2n matrix that's in reduced echelon form, has a 1 in each row. The other is where some row has no 1 in it but is all 0s. If any row is all 0s, then the last row will be all 0s, since the matrix is in reduced echelon form.

In the first case, where there's a 1 in each row of the square matrix EA, it has to be the identity because it's in reduced echelon form. That says EA = I. Since E is the product Es ··· E2 E1 of invertible elementary matrices, E is also invertible. Multiply the equation EA = I by E⁻¹ on the left to get A = E⁻¹, and therefore A⁻¹ = E. It is in this first case that A is invertible and you can find the inverse.

In the second case, the last row of EA is all 0s. We need to show that A cannot have an inverse in this case. We'll suppose A does have an inverse and derive a contradiction. The matrix EA has all 0s in its last row, so if you multiply it on the right by any matrix, the product will also have all 0s in its last row. But A is invertible, so multiply EA on the right by A⁻¹E⁻¹. The result is EAA⁻¹E⁻¹, which is the identity matrix I, and I doesn't have all 0s in its last row, a contradiction. Therefore, A doesn't have an inverse in this second case.

That finishes the proof that this method will either construct an inverse matrix or show that the inverse doesn't exist. q.e.d.

In the proof of the theorem, in the case where A is invertible, we had A = E⁻¹ = (Es ··· E2 E1)⁻¹ = E1⁻¹ E2⁻¹ ··· Es⁻¹, and each Ei⁻¹ is itself an elementary matrix. That gives us the following corollary.

Corollary 2. Every invertible matrix is the product of elementary matrices.
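The algorithm of Theorem 1 translates directly into code: row-reduce [A|I] and read off the right half if the left half becomes the identity. A minimal sketch, assuming NumPy; the function name and the example matrix are illustrative choices.

import numpy as np

def invert_by_row_reduction(A, tol=1e-12):
    """Row-reduce [A | I]; return the inverse of A if it exists, else None."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # the n x 2n matrix [A | I]
    for col in range(n):
        # Choose a pivot row and exchange it into position (row operation 1).
        pivot = col + int(np.argmax(np.abs(M[col:, col])))
        if abs(M[pivot, col]) < tol:
            return None                    # no pivot in this column: A is not invertible
        M[[col, pivot]] = M[[pivot, col]]
        # Scale the pivot row so the leading entry is 1 (row operation 2).
        M[col] = M[col] / M[col, col]
        # Subtract multiples of the pivot row from the others (row operation 3).
        for r in range(n):
            if r != col:
                M[r] = M[r] - M[r, col] * M[col]
    return M[:, n:]                        # left half is now I; right half is the inverse

# Illustrative example: a 3x3 invertible matrix.
A = np.array([[2., 1., 0.],
              [1., 0., 2.],
              [0., 3., 1.]])
print(invert_by_row_reduction(A) @ A)      # approximately the identity matrix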

Math 130 Home Page at
http://math.clarku.edu/~djoyce/ma130/