
Math 313 Lecture #10

2.2: Properties of Determinants


The Effect of Row Operations on Determinants. It would be great if we could
first row reduce an n × n matrix A and then compute its determinant from the simpler
row reduced matrix.
But alas, this method is doomed to fail: an invertible matrix A is row equivalent to I
and the determinant of I is 1, but is det(A) = det(I)?
What do the elementary row operations do to the determinant of a matrix?




Case of Ri ↔ Rj. If

A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \quad and \quad E = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},

then

\det(EA) = \begin{vmatrix} a_{21} & a_{22} \\ a_{11} & a_{12} \end{vmatrix} = a_{12}a_{21} - a_{11}a_{22} = -\det(A).
Now let A = (a_{ij}) be a 3 × 3 matrix and E the elementary matrix that switches row one
with row two. Then (using cofactor expansion along the second row)

\det(EA) = \begin{vmatrix} a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}
= -a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
+ a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}
- a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}
= -\det(A).
Switching any two rows of A leads to the same result.
What is the determinant of E? It is
det(E) = det(EI) = -det(I) = -1.
In general, if A is an n × n matrix and E is an elementary matrix corresponding to
switching two rows, then
det(EA) = -det(A) = det(E) det(A).
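A quick numerical sanity check (a minimal numpy sketch, not part of the lecture): the row-swap elementary matrix has determinant -1, and left-multiplying by it flips the sign of the determinant.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# E is the elementary matrix that swaps rows 0 and 1 (0-indexed) of whatever
# it multiplies on the left.
E = np.eye(4)
E[[0, 1]] = E[[1, 0]]

print(np.isclose(np.linalg.det(E), -1.0))                   # det(E) = -1
print(np.isclose(np.linalg.det(E @ A), -np.linalg.det(A)))  # det(EA) = -det(A)
```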
Case of αRi → Ri. Let A be an n × n matrix and let E be the elementary matrix that
corresponds to multiplying the ith row through by α ≠ 0.
Cofactor expansion along the ith row of EA gives
det(EA) = αa_{i1}A_{i1} + αa_{i2}A_{i2} + · · · + αa_{in}A_{in}
        = α(a_{i1}A_{i1} + a_{i2}A_{i2} + · · · + a_{in}A_{in})
        = α det(A).
In this calculation, notice that α does not appear in the cofactors A_{ij}; this is what makes
this calculation work!

What is the determinant of E? It is


det(E) = det(EI) = α det(I) = α.
Thus if E is an elementary matrix that corresponds to multiplying a row through by a
nonzero α, then
det(EA) = α det(A) = det(E) det(A).
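The same kind of check works here (again only an illustrative numpy sketch): a row-scaling elementary matrix has determinant α, and det(EA) = α det(A).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
alpha = 5.0

# E multiplies row 2 (0-indexed) by alpha when applied on the left.
E = np.eye(4)
E[2, 2] = alpha

print(np.isclose(np.linalg.det(E), alpha))                         # det(E) = alpha
print(np.isclose(np.linalg.det(E @ A), alpha * np.linalg.det(A)))  # det(EA) = alpha det(A)
```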
Case of Ri + αRj → Ri. To see what happens to the determinant of an n × n matrix A
when a multiple of one row is added to another, we need a technical result about the
cofactors of A.

Lemma. The cofactors A_{ij} of an n × n matrix A satisfy

a_{i1}A_{j1} + a_{i2}A_{j2} + \cdots + a_{in}A_{jn} =
\begin{cases} \det(A) & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}

Proof. When i = j, the expression is the cofactor expansion of A along its ith row, and
is the determinant of A.
Now suppose i ≠ j, and form a new matrix A* obtained from A by replacing the jth row
of A with the ith row of A:

A^* = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
\vdots & \vdots &        & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{in} \\
\vdots & \vdots &        & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{in} \\
\vdots & \vdots &        & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix},

where the ith and jth rows of A* are both equal to the ith row of A.
Since A* has two rows that are the same, its determinant is 0.
[This is from Theorem 2.1.4 (b) (on pg. 96 of the book). If A is 2 × 2 and has two rows
the same, then it is easy to see that det(A) = 0. If A is 3 × 3 and has two rows the same,
then by cofactor expansion along the row that is not one of the two identical rows, it is
easy to see that det(A) = 0. This pattern continues to any size of n × n matrix having
two identical rows.]
So cofactor expansion of A* along its jth row (whose cofactors are the same as the cofactors
A_{j1}, . . . , A_{jn} of A, because A and A* agree everywhere except the jth row) gives
0 = det(A*) = a*_{j1}A_{j1} + a*_{j2}A_{j2} + · · · + a*_{jn}A_{jn} = a_{i1}A_{j1} + a_{i2}A_{j2} + · · · + a_{in}A_{jn},
because the jth row of A* is the same as the ith row of A.
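The Lemma can also be checked numerically. The sketch below (an illustration only, assuming numpy; the helper cofactor is not from the text) computes each cofactor from the corresponding minor and verifies both cases of the Lemma on a random 4 × 4 matrix.

```python
import numpy as np

def cofactor(A, r, c):
    """Cofactor A_rc: (-1)^(r+c) times the determinant of the minor
    obtained by deleting row r and column c."""
    minor = np.delete(np.delete(A, r, axis=0), c, axis=1)
    return (-1) ** (r + c) * np.linalg.det(minor)

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
n = A.shape[0]

for i in range(n):
    for j in range(n):
        # Sum of row-i entries against row-j cofactors.
        s = sum(A[i, k] * cofactor(A, j, k) for k in range(n))
        expected = np.linalg.det(A) if i == j else 0.0
        assert np.isclose(s, expected)

print("Lemma verified on a random 4 x 4 matrix")
```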

Now let E be the elementary matrix corresponding to adding α times Rj to Ri.

Then cofactor expansion of EA along its ith row and the Lemma give
det(EA) = (a_{i1} + αa_{j1})A_{i1} + (a_{i2} + αa_{j2})A_{i2} + · · · + (a_{in} + αa_{jn})A_{in}
        = (a_{i1}A_{i1} + a_{i2}A_{i2} + · · · + a_{in}A_{in}) + α(a_{j1}A_{i1} + a_{j2}A_{i2} + · · · + a_{jn}A_{in})
        = det(A) + α · 0
        = det(A).
What is the determinant of E? It is
det(E) = det(EI) = det(I) = 1.
Another way to see this is that E is a triangular matrix with 1s on its diagonal, so its
determinant is 1.
Thus if E is an elementary matrix corresponding to adding a scalar multiple of one row
to another row, then
det(EA) = det(A) = det(E)det(A).
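One more illustrative numpy check (not from the text): the elementary matrix for Ri + αRj → Ri has determinant 1, and it leaves the determinant of A unchanged.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
alpha = 7.0

# E performs R1 + alpha*R3 -> R1 (0-indexed rows) when applied on the left.
E = np.eye(4)
E[1, 3] = alpha

print(np.isclose(np.linalg.det(E), 1.0))                   # det(E) = 1
print(np.isclose(np.linalg.det(E @ A), np.linalg.det(A)))  # det(EA) = det(A)
```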

Column Operations and Determinants. Another way to affect the determinant
of a matrix A is by column operations.
Each row operation has an analogous column operation: switch two columns, multiply a
column through by a nonzero scalar, and add a scalar multiple of one column to another.
In other words, each column operation on A is a row operation on AT .
Since det(B^T) = det(B) and (BC)^T = C^T B^T, and since the transpose of an elementary
matrix E is elementary (this is easy to see), then

det(AE) = det((AE)^T) = det(E^T A^T) = det(E^T) det(A^T) = det(E) det(A).
Therefore, if E is elementary, then
det(EA) = det(E)det(A) = det(A)det(E) = det(AE).
WARNING: column operations do not solve the linear system Ax = b!
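A small numpy check of the column statement (again only an illustration): multiplying by a swap matrix E on the right swaps two columns, and det(AE) = det(EA) = det(E) det(A).

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))

# E swaps rows 0 and 2 when applied on the left, and swaps columns 0 and 2
# when applied on the right.
E = np.eye(4)
E[[0, 2]] = E[[2, 0]]

print(np.isclose(np.linalg.det(A @ E), np.linalg.det(E @ A)))                 # det(AE) = det(EA)
print(np.isclose(np.linalg.det(A @ E), np.linalg.det(E) * np.linalg.det(A)))  # = det(E) det(A)
```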

Characterizing Singular Matrices. We constructed the determinant of a matrix A
so that A is invertible if and only if det(A) ≠ 0.
So it stands to reason that a zero determinant has something to do with a lack of invertibility.
Although row reduction does not preserve the determinant, we can still use row reduction
to prove things about determinants, as is illustrated next.

Theorem. An n × n matrix A is singular if and only if det(A) = 0.

Proof. Let E1 , . . . , Ek be elementary matrices for which


U = Ek · · · E1 A
is in row echelon form, i.e., U is upper triangular where the first nonzero entry in each
nonzero row is 1.
Since det(EB) = det(E)det(B) for any elementary matrix E, then
det(U) = det(Ek · · · E1 A)
       = det(Ek (Ek-1 · · · E1 A))
       = det(Ek) det(Ek-1 · · · E1 A)
       = det(Ek) · · · det(E1) det(A).
Since det(Ei) ≠ 0 for all i = 1, . . . , k, it follows that
det(U ) = 0 if and only if det(A) = 0.
If A is singular, then A is not row equivalent to I, so that U must contain a row of zeros.
This means that det(U ) = 0, and hence det(A) = 0.
On the other hand, if det(A) = 0, then det(U ) = 0.
This means that U has a row of zeros (because if it did not, then U would have 1s on
all of its diagonal, implying that det(U ) = 1, a contradiction).
Hence A is not row equivalent to I, which says that A is singular.
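As a quick illustration of the Theorem (a numpy sketch, not part of the notes), a matrix with two identical rows is singular and its determinant comes out (numerically) zero.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [1.0, 2.0, 3.0]])  # rows 0 and 2 are identical, so A is singular

print(np.linalg.matrix_rank(A))         # 2 (less than 3), so A is not invertible
print(np.isclose(np.linalg.det(A), 0))  # det(A) = 0
```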

The Method of Elimination. If we track the changes in the determinant when row
reducing a matrix A, we can compute the determinant of A.
How? Well, in the proof of the previous Theorem, we had
det(U) = det(Ek) · · · det(E1) det(A),
which can be rewritten as
det(A) = [det(Ek) · · · det(E1)]^{-1} det(U).
This still works when U is only in triangular form (not fully row reduced), since the
determinant of a triangular matrix is the product of its diagonal entries.
Using row reduction and tracking the changes in the determinant to eventually compute
the determinant of a matrix is called the method of elimination; this method is faster than
the method of cofactor expansion.
Just divide the determinant of U by the determinants of the Ei's used in the row reduction
of A to an upper triangular form.
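Here is a short implementation sketch of the method of elimination (my own code, not from the text; it assumes numpy, and the name det_by_elimination is made up). It row reduces a copy of A to upper triangular form using only row swaps and row additions, flips a sign for each swap, and returns the signed product of the diagonal entries.

```python
import numpy as np

def det_by_elimination(A):
    """Determinant of a square matrix via row reduction to upper triangular form.

    Only two row operations are used: Ri <-> Rj, which flips the sign of the
    determinant, and Ri + alpha*Rj -> Ri, which leaves it unchanged.  So
    det(A) = sign * (product of the diagonal entries of the triangular matrix).
    """
    U = np.array(A, dtype=float)   # work on a copy
    n = U.shape[0]
    sign = 1.0
    for col in range(n):
        # Find a row at or below position `col` with a nonzero entry in this column.
        pivot = next((r for r in range(col, n) if not np.isclose(U[r, col], 0.0)), None)
        if pivot is None:
            return 0.0             # no pivot in this column: U gets a zero on the diagonal
        if pivot != col:
            U[[col, pivot]] = U[[pivot, col]]   # Rcol <-> Rpivot flips the sign
            sign = -sign
        for r in range(col + 1, n):
            # Rr - (U[r,col]/U[col,col]) * Rcol -> Rr: no change to the determinant.
            U[r, col:] -= (U[r, col] / U[col, col]) * U[col, col:]
    return sign * float(np.prod(np.diag(U)))
```

Partial pivoting (choosing the largest available pivot in each column) would be more robust numerically, but the simple version above mirrors the hand procedure used in the Example below.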

Example. Use the method of elimination to find the determinant:



\det(A) = \begin{vmatrix}
2 & 3 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 1 \\
0 & 0 & 1 & 1 & 2 \\
0 & 0 & 2 & 1 & 0
\end{vmatrix}
= (-1)\begin{vmatrix}
1 & 0 & 0 & 0 & 1 \\
2 & 3 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 \\
0 & 0 & 1 & 1 & 2 \\
0 & 0 & 2 & 1 & 0
\end{vmatrix}
\qquad (R_1 \leftrightarrow R_2)

= (-1)\begin{vmatrix}
1 & 0 & 0 & 0 & 1 \\
0 & 3 & 0 & 0 & -2 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 2 \\
0 & 0 & 2 & 1 & 0
\end{vmatrix}
\qquad (R_2 - 2R_1 \to R_2,\ R_3 - R_1 \to R_3)

= (-1)(-1)\begin{vmatrix}
1 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 0 \\
0 & 3 & 0 & 0 & -2 \\
0 & 0 & 1 & 1 & 2 \\
0 & 0 & 2 & 1 & 0
\end{vmatrix}
\qquad (R_2 \leftrightarrow R_3)

= \begin{vmatrix}
1 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -2 \\
0 & 0 & 1 & 1 & 2 \\
0 & 0 & 0 & -1 & -4
\end{vmatrix}
\qquad (R_3 - 3R_2 \to R_3,\ R_5 - 2R_4 \to R_5)

= (-1)\begin{vmatrix}
1 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 2 \\
0 & 0 & 0 & 0 & -2 \\
0 & 0 & 0 & -1 & -4
\end{vmatrix}
\qquad (R_3 \leftrightarrow R_4)

= (-1)(-1)\begin{vmatrix}
1 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 2 \\
0 & 0 & 0 & -1 & -4 \\
0 & 0 & 0 & 0 & -2
\end{vmatrix}
\qquad (R_4 \leftrightarrow R_5)

= (1)(1)(1)(-1)(-2) = 2.

Notice that we did not have to use the row operation of multiplying a row through by a
nonzero scalar.
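As a numerical cross-check (an addition, not part of the notes), numpy gives the same value for this matrix; the det_by_elimination sketch from the previous section agrees.

```python
import numpy as np

A = np.array([[2, 3, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [1, 1, 0, 0, 1],
              [0, 0, 1, 1, 2],
              [0, 0, 2, 1, 0]], dtype=float)

print(np.linalg.det(A))  # 2.0, up to floating point roundoff
```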


The Multiplicative Property of Determinants. Recall that for an elementary matrix
E we have det(AE) = det(A) det(E). It would be great if this held even when E is not an
elementary matrix. And it is great that it does!

Theorem. If A and B are n × n matrices, then


det(AB) = det(A)det(B).
Proof. If B is singular, then AB is singular too (why?), so that
det(AB) = 0 = det(A)det(B).
If B is nonsingular, then B is the product of elementary matrices: B = Ek · · · E1.
Since det(AE) = det(A) det(E) for any elementary matrix E, then

det(AB) = det(AEk · · · E1)
        = det(AEk · · · E2) det(E1)
        = det(A) det(Ek) · · · det(E1)
        = det(A) det(Ek · · · E1)
        = det(A) det(B).
This completes the proof.
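A final illustrative numpy check of the multiplicative property on a pair of random matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
```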
