
SOLUTIONS TO HOMEWORK 3, DUE 02/13/2013

MATH 54 LINEAR ALGEBRA AND DIFFERENTIAL EQUATIONS WITH PROF. STANKOVA
All page numbers, theorem numbers, etc. given without an explicit reference are from the textbook [1]. Other references will be specified.
1.9 The Matrix of a Linear Transformation
24. (a) False. In fact, a linear transformation $x \mapsto Ax$ can never map $\mathbb{R}^3$ onto $\mathbb{R}^4$. The range of this linear transformation is, by definition, the span of the set of column vectors of $A$. Since there are at most three pivots in the matrix $A$, in the echelon form the last row cannot have a pivot. Thus this span cannot be the whole of $\mathbb{R}^4$.
(b) True. Page 73, Theorem 10: for every linear transformation $T$ from $\mathbb{R}^n$ to $\mathbb{R}^m$, there is a unique $m \times n$ matrix $A$ such that $T(x) = Ax$. (Note that at the end of Page 67 it is said that every matrix transformation is a linear transformation, yet there are examples of linear transformations that are not matrix transformations. This refers to general linear transformations on general vector spaces, where there is no preferred basis.)
(c) True. Again, Page 73, Theorem 10. (Although technically it should be stated that the given linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$ is $T$.)
(d) False. One-to-one (injectivity) means that every vector in the codomain has at most one preimage. For $T$ to be a mapping at all, it is automatic that each vector in $\mathbb{R}^n$ maps to only one vector in $\mathbb{R}^m$.
(e) False. See Table 3 on Page 76.
26. As we have already seen, the standard matrix of $T$ is
\[ A = \begin{pmatrix} 1 & -2 & -3 \\ 4 & 9 & 8 \end{pmatrix}. \]
By subtracting 4 times the first row from the second row, we get the echelon form
\[ \begin{pmatrix} 1 & -2 & -3 \\ 0 & 17 & 20 \end{pmatrix}. \]
There are two pivots, two rows and three columns; thus the columns of $A$ span $\mathbb{R}^2$ but are not linearly independent. By Theorem 12 on Page 79, $T$ is onto but not one-to-one.
28. As we see in the figure, the column vectors $a_1, a_2$ of the standard matrix $A$ are not multiples of each other. Thus $\{a_1, a_2\}$ spans $\mathbb{R}^2$ and is linearly independent. (See Page 60 for linear independence. For spanning $\mathbb{R}^2$, one can count the pivots in the $2 \times 2$ matrix $A$: since the column vectors are linearly independent, there is a pivot in each column, so in total there are two pivots. This implies each row must have a pivot.) Again, by Theorem 12 on Page 79, $T$ is onto and one-to-one.
35. (1) If a linear transformation $T: \mathbb{R}^n \to \mathbb{R}^m$ is onto, then $m \le n$. Assume $A$ is the standard matrix of $T$; then the columns of $A$ span $\mathbb{R}^m$, or equivalently, in the echelon form of $A$ there is a pivot in each row. Since $A$ has $m$ rows and $n$ columns, this means $A$ has exactly $m$ pivots and (since there can be at most one pivot in each column) $m \le n$.
(2) If a linear transformation $T: \mathbb{R}^n \to \mathbb{R}^m$ is one-to-one, then $m \ge n$. (The argument is similar to the above; just switch the roles of rows and columns.) Assume $A$ is the standard matrix of $T$; then the columns of $A$ are linearly independent, or equivalently, in the echelon form of $A$ there is a pivot in each column. Since $A$ has $m$ rows and $n$ columns, this means $A$ has exactly $n$ pivots and (since there can be at most one pivot in each row) $m \ge n$.
36. The reason is that this question is equivalent (by definition) to "Is there a preimage for every vector in the codomain?", or (in terms of the matrix equation, where $A$ is the standard matrix of $T$) "Does $Ax = b$ have a solution for every $b$?". In both forms it is clear that this is an existence question.
2.1 Matrix Operations
2. Let
\[ A = \begin{pmatrix} 2 & 0 & -1 \\ 4 & -5 & 2 \end{pmatrix}, \quad B = \begin{pmatrix} 7 & -5 & 1 \\ 1 & -4 & -3 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix}, \quad D = \begin{pmatrix} 3 & 5 \\ -1 & 4 \end{pmatrix}, \quad E = \begin{pmatrix} -5 \\ 3 \end{pmatrix}. \]
(1)
\[ A + 3B = \begin{pmatrix} 23 & -15 & 2 \\ 7 & -17 & -7 \end{pmatrix}. \]
(2) $2C - 3E$ is not defined since $C$ and $E$ have different sizes.
(3)
\[ DB = \begin{pmatrix} 26 & -35 & -12 \\ -3 & -11 & -13 \end{pmatrix}. \]
(4) $EC$ is not defined, since $E$ has only one column and $C$ has two rows.
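As a quick sanity check of (1) and (3), here is a short NumPy sketch, using the matrices as written above (treat the exact entries as an assumption):

```python
import numpy as np

# Entries as written above; the exact values are an assumption.
A = np.array([[2, 0, -1], [4, -5, 2]])
B = np.array([[7, -5, 1], [1, -4, -3]])
D = np.array([[3, 5], [-1, 4]])

print(A + 3 * B)   # expected [[23, -15, 2], [7, -17, -7]]
print(D @ B)       # expected [[26, -35, -12], [-3, -11, -13]]
```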
4.
\[ A - 5I_3 = \begin{pmatrix} 0 & 1 & 3 \\ 4 & -2 & 6 \\ 3 & 1 & -3 \end{pmatrix}, \quad (5I_3)A = \begin{pmatrix} 25 & 5 & 15 \\ 20 & 15 & 30 \\ 15 & 5 & 10 \end{pmatrix}. \]
6.
\[ A = \begin{pmatrix} 4 & -3 \\ -3 & 5 \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 4 \\ 3 & -2 \end{pmatrix}. \]
So
\[ b_1 = \begin{pmatrix} 1 \\ 3 \end{pmatrix}, \quad b_2 = \begin{pmatrix} 4 \\ -2 \end{pmatrix}, \]
and we have
\[ Ab_1 = \begin{pmatrix} -5 \\ 12 \\ 3 \end{pmatrix}, \quad Ab_2 = \begin{pmatrix} 22 \\ -22 \\ -2 \end{pmatrix}. \]
From either of the two ways of calculating (which do not differ much), we get
\[ AB = [Ab_1 \; Ab_2] = \begin{pmatrix} -5 & 22 \\ 12 & -22 \\ 3 & -2 \end{pmatrix}. \]
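The column-by-column rule $AB = [Ab_1 \; Ab_2]$ and the usual row-column rule must agree; the sketch below checks this with NumPy, using the entries as written above (the exact values are an assumption):

```python
import numpy as np

# Entries as written above; the exact values are an assumption.
A = np.array([[4, -3], [-3, 5], [0, 1]])
B = np.array([[1, 4], [3, -2]])

# Column-by-column: the j-th column of AB is A times the j-th column of B.
AB_by_columns = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

print(AB_by_columns)
print(np.array_equal(AB_by_columns, A @ B))  # True: both rules give the same product
```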
8. B has the same number of rows as BC, which is 5.
10. It is easy to calculate that
\[ AB = AC = \begin{pmatrix} 21 & 21 \\ 7 & 7 \end{pmatrix}. \]
12. Let $B = [b_1 \; b_2]$. Since $AB = 0$, $b_1$ and $b_2$ are solutions to the matrix equation $Ax = 0$. By the elementary row operation that adds $\frac{2}{3}$ times the first row to the second row, we have
\[ A \sim \begin{pmatrix} 3 & -6 \\ 0 & 0 \end{pmatrix}. \]
Therefore the general solution of $Ax = 0$ is
\[ x = x_2 \begin{pmatrix} 2 \\ 1 \end{pmatrix}. \]
For example, we can choose
\[ B = \begin{pmatrix} 2 & 4 \\ 1 & 2 \end{pmatrix}. \]
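A row-equivalent matrix has the same null space as $A$, so this choice of $B$ can be checked directly against the echelon form. A minimal NumPy sketch (the echelon form above is used as a stand-in for $A$, which is an assumption):

```python
import numpy as np

# Echelon form of A from above; any matrix row equivalent to it has the same null space.
A_echelon = np.array([[3, -6], [0, 0]])
B = np.array([[2, 4], [1, 2]])

# Each column of B should solve A x = 0, hence A_echelon @ B should be the zero matrix.
print(A_echelon @ B)               # [[0, 0], [0, 0]]
print(np.all(A_echelon @ B == 0))  # True
```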
16. (a) True. This follows directly from the definition.
(b) False. In fact, $AB = [Ab_1 \; Ab_2 \; Ab_3]$.
(c) True. In $(AB)^T = B^T A^T$ we take $B = A$; then we get $(A^2)^T = (A^T)^2$.
(d) False. In fact $(ABC)^T = C^T B^T A^T$, which is not equal to $C^T A^T B^T$ in general. A counterexample is
\[ A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \]
Then
\[ (ABC)^T = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \ne \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = C^T A^T B^T. \]
(e) True. Since $(A + B)^T = A^T + B^T$, by a simple induction on $n$ the transpose of a sum of $n$ matrices equals the sum of their transposes.
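A quick NumPy check of the counterexample in (d), with the matrices exactly as above:

```python
import numpy as np

A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 0], [1, 0]])
C = np.eye(2, dtype=int)

lhs = (A @ B @ C).T      # (ABC)^T
good = C.T @ B.T @ A.T   # C^T B^T A^T, the correct reversal
bad = C.T @ A.T @ B.T    # C^T A^T B^T, the wrong order

print(np.array_equal(lhs, good))  # True
print(np.array_equal(lhs, bad))   # False
```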
18. The third column of $AB$ is also all zeros. Indeed, if $B = [b_1 \; b_2 \; \cdots \; b_n]$, then $AB = [Ab_1 \; Ab_2 \; \cdots \; Ab_n]$, and the third column is $Ab_3 = A0 = 0$.
20. The first two columns of $AB$ are also equal. Same as Problem 18: if $b_1 = b_2$, then $Ab_1 = Ab_2$.
22. Again, assume $B = [b_1 \; b_2 \; \cdots \; b_n]$. Since the columns of $B$ are linearly dependent, we can find $c_1, \dots, c_n$, not all zero, such that
\[ c_1 b_1 + \cdots + c_n b_n = 0. \]
Therefore, applying $A$ on the left, we have
\[ c_1 Ab_1 + \cdots + c_n Ab_n = A(c_1 b_1 + \cdots + c_n b_n) = A0 = 0. \]
This shows that the columns of $AB$, namely $Ab_1, \dots, Ab_n$, are linearly dependent.
24. Since the columns of $A$ span $\mathbb{R}^3$, we can find solutions $x = x_j$ to $Ax = e_j$, $j = 1, 2, 3$, where $e_1, e_2, e_3$ are the standard unit vectors in $\mathbb{R}^3$. Then $D = [x_1 \; x_2 \; x_3]$ satisfies $AD = I_3$.
27.
\[ u^T v = v^T u = 3a + 2b - 5c \]
(think of this as a $1 \times 1$ matrix).
\[ uv^T = \begin{pmatrix} 3a & 3b & 3c \\ 2a & 2b & 2c \\ -5a & -5b & -5c \end{pmatrix}, \quad vu^T = \begin{pmatrix} 3a & 2a & -5a \\ 3b & 2b & -5b \\ 3c & 2c & -5c \end{pmatrix}. \]
28. By the formulas $(AB)^T = B^T A^T$ and $(A^T)^T = A$, we have $u^T v = (v^T u)^T = v^T u$, since a $1 \times 1$ matrix equals its own transpose. Also we have $uv^T = (vu^T)^T$.
32. For $j = 1, \dots, n$, the $j$-th column of $AI_n$ is $Ae_j$, where $e_1, \dots, e_n$ are the standard unit vectors in $\mathbb{R}^n$. Assume $A = [a_1 \; \cdots \; a_n]$; then
\[ Ae_j = \Big( \sum_{i \ne j} 0 \cdot a_i \Big) + 1 \cdot a_j = a_j. \]
Therefore $AI_n$ has exactly the same columns as $A$, i.e. $AI_n = A$.
34. Use the formula $(AB)^T = B^T A^T$ repeatedly: $(ABx)^T = (Bx)^T A^T = x^T B^T A^T$.
2.2 The Inverse of a Matrix
2. Use Theorem 4 on Page 105. Since the determinant of the matrix is $3 \cdot 5 - 2 \cdot 8 = -1 \ne 0$,
\[ \begin{pmatrix} 3 & 2 \\ 8 & 5 \end{pmatrix}^{-1} = (-1)^{-1} \begin{pmatrix} 5 & -2 \\ -8 & 3 \end{pmatrix} = \begin{pmatrix} -5 & 2 \\ 8 & -3 \end{pmatrix}. \]
4. Again, use Theorem 4 on Page 105. Since the determinant of the matrix is $2 \cdot (-6) - 4 \cdot (-4) = 4 \ne 0$,
\[ \begin{pmatrix} 2 & 4 \\ -4 & -6 \end{pmatrix}^{-1} = \frac{1}{4} \begin{pmatrix} -6 & -4 \\ 4 & 2 \end{pmatrix} = \begin{pmatrix} -\frac{3}{2} & -1 \\ 1 & \frac{1}{2} \end{pmatrix}. \]
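The $2 \times 2$ inverse formula used in Problems 2 and 4 is easy to code up; here is a minimal Python sketch, checked against Problem 2 (the helper name `inverse_2x2` is just an illustration):

```python
from fractions import Fraction

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via (1/(ad - bc)) * [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    f = Fraction(1, det)
    return [[f * d, f * -b], [f * -c, f * a]]

# Problem 2: [[3, 2], [8, 5]] has determinant -1.
print(inverse_2x2(3, 2, 8, 5))  # [[-5, 2], [8, -3]]
```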
6. Since
\[ \begin{pmatrix} 7 & 3 \\ -6 & -3 \end{pmatrix}^{-1} = \frac{1}{7 \cdot (-3) - 3 \cdot (-6)} \begin{pmatrix} -3 & -3 \\ 6 & 7 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ -2 & -\frac{7}{3} \end{pmatrix}, \]
the solution to the given system, which is equivalent to
\[ \begin{pmatrix} 7 & 3 \\ -6 & -3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -9 \\ 4 \end{pmatrix}, \]
is
\[ \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 7 & 3 \\ -6 & -3 \end{pmatrix}^{-1} \begin{pmatrix} -9 \\ 4 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ -2 & -\frac{7}{3} \end{pmatrix} \begin{pmatrix} -9 \\ 4 \end{pmatrix} = \begin{pmatrix} -5 \\ \frac{26}{3} \end{pmatrix}. \]
7. (a) Since $\det A = 2 \ne 0$, we have
\[ A^{-1} = \frac{1}{2} \begin{pmatrix} 12 & -2 \\ -5 & 1 \end{pmatrix} = \begin{pmatrix} 6 & -1 \\ -\frac{5}{2} & \frac{1}{2} \end{pmatrix}. \]
Therefore the solutions to $Ax = b_j$, $j = 1, 2, 3, 4$, are
\[ x_1 = A^{-1}b_1 = \begin{pmatrix} -9 \\ 4 \end{pmatrix}, \quad x_2 = A^{-1}b_2 = \begin{pmatrix} 11 \\ -5 \end{pmatrix}, \quad x_3 = A^{-1}b_3 = \begin{pmatrix} 6 \\ -2 \end{pmatrix}, \quad x_4 = A^{-1}b_4 = \begin{pmatrix} 13 \\ -5 \end{pmatrix}. \]
(b) The augmented matrix is
\[ [A \; b_1 \; b_2 \; b_3 \; b_4] = \begin{pmatrix} 1 & 2 & -1 & 1 & 2 & 3 \\ 5 & 12 & 3 & -5 & 6 & 5 \end{pmatrix}. \]
By subtracting 5 times the first row from the second row, we get
\[ \begin{pmatrix} 1 & 2 & -1 & 1 & 2 & 3 \\ 0 & 2 & 8 & -10 & -4 & -10 \end{pmatrix}. \]
By subtracting the second row from the first row, we get
\[ \begin{pmatrix} 1 & 0 & -9 & 11 & 6 & 13 \\ 0 & 2 & 8 & -10 & -4 & -10 \end{pmatrix}. \]
By multiplying the second row by $\frac{1}{2}$, we get the reduced echelon form
\[ \begin{pmatrix} 1 & 0 & -9 & 11 & 6 & 13 \\ 0 & 1 & 4 & -5 & -2 & -5 \end{pmatrix}, \]
which equals $[I \; A^{-1}b_1 \; A^{-1}b_2 \; A^{-1}b_3 \; A^{-1}b_4]$. The last four columns are exactly the solutions to the four equations.
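Either computation can be reproduced in one call with NumPy by solving for all four right-hand sides at once, using the entries as written above (treat them as an assumption):

```python
import numpy as np

A = np.array([[1, 2], [5, 12]], dtype=float)
B = np.array([[-1, 1, 2, 3],    # columns are b1, b2, b3, b4
              [3, -5, 6, 5]], dtype=float)

X = np.linalg.solve(A, B)       # solves A X = B, one column per right-hand side
print(X)                        # expected columns: (-9, 4), (11, -5), (6, -2), (13, -5)
```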
8. Multiplying the equation $A = PBP^{-1}$ by $P^{-1}$ on the left and by $P$ on the right, we have
\[ P^{-1}AP = P^{-1}PBP^{-1}P = IBI = B, \]
since $P^{-1}P = I$.
10. (a) False. The elementary row operations that reduce $A$ to the identity $I_n$ are equivalent to multiplying by $A^{-1}$ on the left. Therefore they change $A^{-1}$ into $A^{-1}A^{-1}$, which does not necessarily equal $I_n$.
(b) True. Since by definition $A^{-1}A = AA^{-1} = I$, this shows that $A^{-1}$ is also invertible and $(A^{-1})^{-1} = A$.
(c) False. If $A, B$ are invertible $n \times n$ matrices, then by Theorem 6 part (b) on page 107 we have $(AB)^{-1} = B^{-1}A^{-1}$, which does not necessarily equal $A^{-1}B^{-1}$.
(d) True. Let $x_j$ be the solution of $Ax = e_j$. Then for any $b = (b_1, \dots, b_n) \in \mathbb{R}^n$, since $b = b_1 e_1 + \cdots + b_n e_n$, we get that $x = b_1 x_1 + \cdots + b_n x_n$ satisfies the equation $Ax = b$. In other words, the columns of $A$ span $\mathbb{R}^n$. Therefore in the reduced echelon form of $A$ there is a pivot in every row. So there are exactly $n$ pivots, i.e. the reduced echelon form is $I_n$. In other words, $A$ is row equivalent to $I_n$. Thus $A$ is invertible. (Also see Theorem 8 on page 114.)
(e) True. See Theorem 7 on page 109.
12. Since $A$ is invertible, we have $A^{-1}A = I$; therefore
\[ D = ID = (A^{-1}A)D = A^{-1}(AD) = A^{-1}I = A^{-1}. \]
14. Since $D$ is invertible, we can multiply the equation $(B - C)D = 0$ by $D^{-1}$ on the right to get
\[ 0 = (B - C)DD^{-1} = (B - C)I = B - C. \]
Therefore B = C.
16. Since $B$ is invertible, we can write $A = AI = A(BB^{-1}) = (AB)B^{-1}$. Now both $AB$ and $B^{-1}$ are invertible matrices, so the product $A$ is also invertible.
18. Since $B$ is invertible, as in Problem 16, $A = (AB)B^{-1} = (BC)B^{-1} = BCB^{-1}$.
20. (a) Since $(A - AX)^{-1} = X^{-1}B$, we have $B = IB = (XX^{-1})B = X(X^{-1}B) = X(A - AX)^{-1}$. Since both $X$ and $(A - AX)^{-1}$ are invertible, $B$ is also invertible.
(b) Since $B = X(A - AX)^{-1}$, we have $B(A - AX) = X$, so $BA - BAX = X$ and thus
\[ BA = X + BAX = (I + BA)X. \]
We need to invert $I + BA$; but since $X$ is invertible, we have $I + BA = BAX^{-1}$, where all of $B$, $A$, $X^{-1}$ are invertible. Therefore $I + BA$ is invertible and we can solve
\[ X = (I + BA)^{-1}BA. \]
24. The same explanation as Problem 10 part (d): the columns of $A$ span $\mathbb{R}^n$. Therefore in the reduced echelon form of $A$ there is a pivot in every row. So there are exactly $n$ pivots, i.e. the reduced echelon form is $I_n$. In other words, $A$ is row equivalent to $I_n$. Thus $A$ is invertible. (Also see Theorem 8 on page 114.)
25. We consider the following two separate cases.
Case 1: $a = b = 0$. Then the matrix $A$ is row equivalent to
\[ \begin{pmatrix} c & d \\ 0 & 0 \end{pmatrix}, \]
which can have at most one pivot. Therefore there is a row without a pivot, and the system $Ax = 0$ has infinitely many solutions.
Case 2: $a$ and $b$ are not both zero. Let $x_0 = \begin{pmatrix} -b \\ a \end{pmatrix}$; we can check that
\[ Ax_0 = \begin{pmatrix} -ab + ba \\ -cb + da \end{pmatrix} = \begin{pmatrix} 0 \\ ad - bc \end{pmatrix} = 0, \]
using the assumption $ad - bc = 0$. Therefore $Ax = 0$ has more than one solution: at least $0$ and $x_0 \ne 0$.
30.
\[ \begin{pmatrix} 3 & 6 & 1 & 0 \\ 4 & 7 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & \frac{1}{3} & 0 \\ 4 & 7 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & \frac{1}{3} & 0 \\ 0 & -1 & -\frac{4}{3} & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & \frac{1}{3} & 0 \\ 0 & 1 & \frac{4}{3} & -1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & -\frac{7}{3} & 2 \\ 0 & 1 & \frac{4}{3} & -1 \end{pmatrix}. \]
Therefore
\[ \begin{pmatrix} 3 & 6 \\ 4 & 7 \end{pmatrix}^{-1} = \begin{pmatrix} -\frac{7}{3} & 2 \\ \frac{4}{3} & -1 \end{pmatrix}. \]
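The row reduction of the augmented matrix $[A \mid I]$ can also be carried out mechanically; below is a small Gauss-Jordan sketch (no pivoting refinements, so it assumes a nonzero pivot is always available, as in this example):

```python
import numpy as np

def inverse_by_gauss_jordan(A):
    """Row reduce [A | I] to [I | A^-1]; assumes every pivot encountered is nonzero."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        M[i] = M[i] / M[i, i]                 # scale the pivot row
        for r in range(n):
            if r != i:
                M[r] = M[r] - M[r, i] * M[i]  # clear the rest of the pivot column
    return M[:, n:]

A = np.array([[3, 6], [4, 7]])
print(inverse_by_gauss_jordan(A))             # expected [[-7/3, 2], [4/3, -1]]
```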
32. Since
\[ \begin{pmatrix} 1 & 2 & 1 \\ 4 & 7 & 3 \\ 2 & 6 & 4 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 1 \\ 0 & -1 & -1 \\ 0 & 2 & 2 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 1 \\ 0 & -1 & -1 \\ 0 & 0 & 0 \end{pmatrix}, \]
we know the original matrix is not row equivalent to the identity matrix. Therefore it is not invertible.
33. Since
\[ \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & -1 & 1 & 0 \\ 0 & 1 & 1 & -1 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & -1 & 1 & 0 \\ 0 & 0 & 1 & 0 & -1 & 1 \end{pmatrix}, \]
we have
\[ \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}. \]
Similarly,
\[ \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & -1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & -1 & 0 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & -1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 0 & -1 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & -1 & 1 \end{pmatrix}. \]
Therefore
\[ \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 \end{pmatrix}. \]
In general, the corresponding $n \times n$ matrix is
\[ A = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 1 & 0 & \cdots & 0 & 0 \\ 1 & 1 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & 1 & 1 & \cdots & 1 & 0 \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{pmatrix}. \]
We can write $A = [a_{ij}]_{1 \le i, j \le n}$ where
\[ a_{ij} = \begin{cases} 1 & \text{if } i \ge j \\ 0 & \text{if } i < j \end{cases} \]
From the cases $n = 3, 4$ we can guess that the inverse of $A$ is
\[ B = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ -1 & 1 & 0 & \cdots & 0 & 0 \\ 0 & -1 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & 0 & \cdots & -1 & 1 \end{pmatrix}. \]
We can also write $B = [b_{ij}]_{1 \le i, j \le n}$ where
\[ b_{ij} = \begin{cases} 1 & \text{if } i = j \\ -1 & \text{if } i = j + 1 \\ 0 & \text{if } i < j \text{ or } i > j + 1 \end{cases} \]
We can check directly that the product $C = AB = [c_{ij}]_{1 \le i, j \le n}$ is indeed the identity matrix. Since
\[ c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}, \]
we first use the formula for $a_{ik}$ to get
\[ c_{ij} = \sum_{k=1}^{i} b_{kj} = b_{1j} + \cdots + b_{ij}. \]
Case 1: If $i > j$, then there are two nonzero terms in the above sum: $c_{ij} = b_{jj} + b_{j+1,j} = 1 - 1 = 0$.
Case 2: If $i = j$, then there is only one nonzero term in the above sum: $c_{ij} = b_{jj} = 1$.
Case 3: If $i < j$, then every term in the above sum is zero, so $c_{ij} = 0$.
This shows that
\[ c_{ij} = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases} \]
(the Kronecker delta symbol, see [2]), i.e. $AB = I$.
There is also an algebraic way to prove $AB = I$, as follows. Let
\[ J = \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{pmatrix}. \]
Then $J$ is a nilpotent matrix: $J^n = 0$ (see [3]). In fact, $J^k$ has entries $1$ on the $k$-th subdiagonal $i - j = k$, for $k = 1, \dots, n-1$. Now we can write $A = I + J + J^2 + \cdots + J^{n-1}$, $B = I - J$, and we have
\[ AB = (I + J + J^2 + \cdots + J^{n-1})(I - J) = I - J + J - J^2 + \cdots + J^{n-1} - J^n = I - J^n = I. \]
Remark: this is actually a standard property of nilpotent elements in a commutative ring with a unit (see e.g. [4]).
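The nilpotent-matrix argument is easy to test numerically for a particular $n$; a short NumPy sketch (with $n = 6$ chosen arbitrarily):

```python
import numpy as np

n = 6
J = np.eye(n, k=-1)                                       # ones on the subdiagonal only
A = sum(np.linalg.matrix_power(J, k) for k in range(n))   # I + J + ... + J^(n-1)
B = np.eye(n) - J

print(np.array_equal(A, np.tril(np.ones((n, n)))))  # True: A is the lower triangle of ones
print(np.allclose(A @ B, np.eye(n)))                # True: AB = I
```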
The third way to prove this is by induction. We write $A_n$, $B_n$ for the $n \times n$ matrices. We need to prove $A_n B_n = I_n$ for every $n$. This is obvious for $n = 1$, since $A_1 = B_1 = 1$ (as either numbers or $1 \times 1$ matrices). Now suppose $A_{n-1} B_{n-1} = I_{n-1}$. Since
\[ A_n = \begin{pmatrix} A_{n-1} & 0 \\ v_{n-1}^T & 1 \end{pmatrix}, \quad B_n = \begin{pmatrix} B_{n-1} & 0 \\ w_{n-1}^T & 1 \end{pmatrix}, \]
where
\[ v_{n-1} = \begin{pmatrix} 1 \\ \vdots \\ 1 \\ 1 \end{pmatrix}, \quad w_{n-1} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ -1 \end{pmatrix}, \]
by multiplication of partitioned matrices (see Section 2.4) we have
\[ A_n B_n = \begin{pmatrix} A_{n-1} B_{n-1} & 0 \\ v_{n-1}^T B_{n-1} + w_{n-1}^T & 1 \end{pmatrix} = \begin{pmatrix} I_{n-1} & 0 \\ (B_{n-1}^T v_{n-1} + w_{n-1})^T & 1 \end{pmatrix}. \]
It remains to check that $B_{n-1}^T v_{n-1} + w_{n-1} = 0$, which is straightforward. In fact, $B_{n-1}^T v_{n-1}$ is the sum of all the columns of $B_{n-1}^T$, which is $e_{n-1} \in \mathbb{R}^{n-1}$, i.e. the negative of $w_{n-1}$.
34. Again we start with the $3 \times 3$ matrix:
\[ \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 2 & 2 & 0 & 0 & 1 & 0 \\ 3 & 3 & 3 & 0 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & \frac{1}{2} & 0 \\ 1 & 1 & 1 & 0 & 0 & \frac{1}{3} \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & -1 & \frac{1}{2} & 0 \\ 0 & 1 & 1 & -1 & 0 & \frac{1}{3} \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & -1 & \frac{1}{2} & 0 \\ 0 & 0 & 1 & 0 & -\frac{1}{2} & \frac{1}{3} \end{pmatrix}. \]
Therefore
\[ \begin{pmatrix} 1 & 0 & 0 \\ 2 & 2 & 0 \\ 3 & 3 & 3 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ -1 & \frac{1}{2} & 0 \\ 0 & -\frac{1}{2} & \frac{1}{3} \end{pmatrix}. \]
For the $4 \times 4$ matrix, we have
\[ \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 2 & 2 & 0 & 0 & 0 & 1 & 0 & 0 \\ 3 & 3 & 3 & 0 & 0 & 0 & 1 & 0 \\ 4 & 4 & 4 & 4 & 0 & 0 & 0 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & \frac{1}{2} & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & \frac{1}{3} & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & \frac{1}{4} \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & \frac{1}{2} & 0 & 0 \\ 0 & 1 & 1 & 0 & -1 & 0 & \frac{1}{3} & 0 \\ 0 & 1 & 1 & 1 & -1 & 0 & 0 & \frac{1}{4} \end{pmatrix} \]
\[ \sim \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & -\frac{1}{2} & \frac{1}{3} & 0 \\ 0 & 0 & 1 & 1 & 0 & -\frac{1}{2} & 0 & \frac{1}{4} \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & -\frac{1}{2} & \frac{1}{3} & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & -\frac{1}{3} & \frac{1}{4} \end{pmatrix}. \]
Therefore
\[ \begin{pmatrix} 1 & 0 & 0 & 0 \\ 2 & 2 & 0 & 0 \\ 3 & 3 & 3 & 0 \\ 4 & 4 & 4 & 4 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & \frac{1}{2} & 0 & 0 \\ 0 & -\frac{1}{2} & \frac{1}{3} & 0 \\ 0 & 0 & -\frac{1}{3} & \frac{1}{4} \end{pmatrix}. \]
In general, the corresponding $n \times n$ matrix is
\[ A = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ 2 & 2 & 0 & \cdots & 0 & 0 \\ 3 & 3 & 3 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ n-1 & n-1 & n-1 & \cdots & n-1 & 0 \\ n & n & n & \cdots & n & n \end{pmatrix}. \]
We can write $A = [a_{ij}]_{1 \le i, j \le n}$ where
\[ a_{ij} = \begin{cases} i & \text{if } i \ge j \\ 0 & \text{if } i < j \end{cases} \]
From the cases $n = 3, 4$ we can guess that the inverse of $A$ is
\[ B = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ -1 & \frac{1}{2} & 0 & \cdots & 0 & 0 \\ 0 & -\frac{1}{2} & \frac{1}{3} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & \frac{1}{n-1} & 0 \\ 0 & 0 & 0 & \cdots & -\frac{1}{n-1} & \frac{1}{n} \end{pmatrix}. \]
We can also write $B = [b_{ij}]_{1 \le i, j \le n}$ where
\[ b_{ij} = \begin{cases} \frac{1}{j} & \text{if } i = j \\ -\frac{1}{j} & \text{if } i = j + 1 \\ 0 & \text{if } i < j \text{ or } i > j + 1 \end{cases} \]
Again, we can check directly that the product $C = AB = [c_{ij}]_{1 \le i, j \le n}$ is indeed the identity matrix. Since
\[ c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}, \]
we first use the formula for $a_{ik}$ to get
\[ c_{ij} = \sum_{k=1}^{i} i\, b_{kj} = i(b_{1j} + \cdots + b_{ij}). \]
Case 1: If $i > j$, then there are two nonzero terms in the above sum: $c_{ij} = i(b_{jj} + b_{j+1,j}) = i\left(\frac{1}{j} - \frac{1}{j}\right) = 0$.
Case 2: If $i = j$, then there is only one nonzero term in the above sum: $c_{ij} = i\, b_{jj} = \frac{i}{j} = 1$.
Case 3: If $i < j$, then every term in the above sum is zero, so $c_{ij} = 0$.
This shows that
\[ c_{ij} = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases} \]
or $AB = I$.
The algebraic way to prove this is to write $A = D(I + J + J^2 + \cdots + J^{n-1})$ and $B = (I - J)D^{-1}$, where $D$ is the diagonal matrix with diagonal entries $1, 2, \dots, n$:
\[ D = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 2 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 3 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & n-1 & 0 \\ 0 & 0 & 0 & \cdots & 0 & n \end{pmatrix}. \]
To see $A = D(I + J + J^2 + \cdots + J^{n-1})$, it is enough to use the expression for $I + J + \cdots + J^{n-1}$ from the previous problem and the fact that multiplying by $D$ on the left is equivalent to the row operations that multiply the $i$-th row by $i$, $i = 1, \dots, n$. To see $B = (I - J)D^{-1}$, it is enough to see that multiplying by $J$ on the left removes the last row, moves every other row down one level, and puts zeros in the first row. Now it is easy to see that $AB = DD^{-1} = I$ using the previous problem.
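These factorizations can also be spot-checked numerically; a short NumPy sketch (again with an arbitrary $n$):

```python
import numpy as np

n = 5
J = np.eye(n, k=-1)                                       # subdiagonal shift matrix
S = sum(np.linalg.matrix_power(J, k) for k in range(n))   # I + J + ... + J^(n-1)
D = np.diag(np.arange(1.0, n + 1))                        # diag(1, 2, ..., n)

A = D @ S                                # lower triangular matrix with value i in row i
B = (np.eye(n) - J) @ np.linalg.inv(D)

print(np.allclose(A @ B, np.eye(n)))     # True: B is the inverse of A
```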
We can also prove this by induction. We write $A_n$, $B_n$ for the $n \times n$ matrices. We need to prove $A_n B_n = I_n$ for every $n$. This is obvious for $n = 1$, since $A_1 = B_1 = 1$ (as either numbers or $1 \times 1$ matrices). Now suppose $A_{n-1} B_{n-1} = I_{n-1}$. Since
\[ A_n = \begin{pmatrix} A_{n-1} & 0 \\ v_{n-1}^T & n \end{pmatrix}, \quad B_n = \begin{pmatrix} B_{n-1} & 0 \\ w_{n-1}^T & \frac{1}{n} \end{pmatrix}, \]
where
\[ v_{n-1} = \begin{pmatrix} n \\ \vdots \\ n \\ n \end{pmatrix}, \quad w_{n-1} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ -\frac{1}{n-1} \end{pmatrix}, \]
by multiplication of partitioned matrices (see Section 2.4) we have
\[ A_n B_n = \begin{pmatrix} A_{n-1} B_{n-1} & 0 \\ v_{n-1}^T B_{n-1} + n w_{n-1}^T & 1 \end{pmatrix} = \begin{pmatrix} I_{n-1} & 0 \\ (B_{n-1}^T v_{n-1} + n w_{n-1})^T & 1 \end{pmatrix}. \]
It remains to check that $B_{n-1}^T v_{n-1} + n w_{n-1} = 0$, which is straightforward.
37. We assume
\[ C = \begin{pmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \end{pmatrix}. \]
Then the equation $CA = I_2$ is equivalent to the system
\[ \begin{cases} c_{11} + c_{12} + c_{13} = 1 \\ 2c_{11} + 3c_{12} + 5c_{13} = 0 \\ c_{21} + c_{22} + c_{23} = 0 \\ 2c_{21} + 3c_{22} + 5c_{23} = 1 \end{cases} \]
We look for solutions with $c_{ij} \in \{-1, 0, 1\}$. It is easy to see that the solutions are
\[ c_{11} = 1, \quad c_{12} = 1, \quad c_{13} = -1, \qquad c_{21} = -1, \quad c_{22} = 1, \quad c_{23} = 0. \]
Therefore
\[ C = \begin{pmatrix} 1 & 1 & -1 \\ -1 & 1 & 0 \end{pmatrix}. \]
Now we can compute $AC$:
\[ AC = \begin{pmatrix} -1 & 3 & -1 \\ -2 & 4 & -1 \\ -4 & 6 & -1 \end{pmatrix} \ne I_3. \]
(In fact, the product of a $3 \times 2$ matrix and a $2 \times 3$ matrix has rank at most 2, but the identity matrix $I_3$ has rank 3, so they cannot be equal. See Section 2.7.)
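A numerical check of $CA = I_2$ and $AC \ne I_3$, reading $A$ off the linear system above as the $3 \times 2$ matrix with columns $(1,1,1)^T$ and $(2,3,5)^T$ (this reading of $A$ is an assumption):

```python
import numpy as np

# A is read off the system above: its columns are (1, 1, 1) and (2, 3, 5).
A = np.array([[1, 2], [1, 3], [1, 5]])
C = np.array([[1, 1, -1], [-1, 1, 0]])

print(C @ A)                          # the 2x2 identity
print(A @ C)                          # a 3x3 matrix that is not the identity
print(np.linalg.matrix_rank(A @ C))   # 2, so A @ C can never equal I_3
```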
38. We can choose
\[ D = \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 1 \\ 0 & 1 \end{pmatrix}. \]
It is not possible to have $CA = I_4$ for any matrix $C$. Assume $C = [c_1 \; c_2]$; then
\[ CA = [c_1 \;\; c_1 + c_2 \;\; -c_1 - c_2 \;\; c_2]. \]
Therefore the span of the columns of $CA$, namely $\mathrm{Span}\{c_1, \, c_1 + c_2, \, -c_1 - c_2, \, c_2\}$, equals $\mathrm{Span}\{c_1, c_2\}$, which cannot be $\mathbb{R}^4$. Thus $CA \ne I_4$.
2.3 Characterizations of Invertible Matrices
2. Since
\[ \begin{pmatrix} 4 & 2 \\ 6 & 3 \end{pmatrix} \sim \begin{pmatrix} 4 & 2 \\ 0 & 0 \end{pmatrix}, \]
the matrix is not invertible.
4. Since there is a row of all zeros, there are at most two pivots, so the matrix is not invertible.
6. Since
\[ \begin{pmatrix} 1 & 3 & 6 \\ 0 & 4 & 3 \\ 3 & 6 & 0 \end{pmatrix} \sim \begin{pmatrix} 1 & 3 & 6 \\ 0 & 4 & 3 \\ 0 & -3 & -18 \end{pmatrix} \sim \begin{pmatrix} 1 & 3 & 6 \\ 0 & 4 & 3 \\ 0 & 0 & -\frac{63}{4} \end{pmatrix}, \]
we know the matrix has three pivots and is thus invertible.
8. Since this is an upper triangular matrix, it is already in echelon form; there are exactly four pivots, so the matrix is invertible.
12. (a) True. Since $AD = I$ where both $A$ and $D$ are $n \times n$ matrices, both $A$ and $D$ are invertible (Invertible Matrix Theorem, (a)$\Leftrightarrow$(j)$\Leftrightarrow$(k)), and $A = D^{-1}$. Therefore $DA = DD^{-1} = I$.
(b) False. $A$ can be any $n \times n$ matrix, for example the zero matrix; then the reduced row echelon form of $A$ is not $I$.
(c) True. Since the columns of $A$ are linearly independent, $A$ is invertible, and thus the columns of $A$ span $\mathbb{R}^n$. (Invertible Matrix Theorem, (e)$\Rightarrow$(a)$\Rightarrow$(h).)
(d) False. Since the equation $Ax = b$ has at least one solution for each $b$, the columns of $A$ span $\mathbb{R}^n$. Therefore $A$ is invertible and $x \mapsto Ax$ is one-to-one. (Invertible Matrix Theorem, (h)$\Rightarrow$(a)$\Rightarrow$(f).)
(e) False. We can take $b = 0$ and $A$ not invertible; then the equation $Ax = 0$ has infinitely many solutions.
14. A square lower triangular matrix is invertible if and only if all its diagonal entries are nonzero. Let $A$ be a lower triangular matrix; then $A$ is invertible if and only if $A^T$ is invertible, and $A^T$ is a square upper triangular matrix. If all the diagonal entries of $A$ are nonzero, then $A^T$ is in echelon form and has exactly $n$ pivots, namely the diagonal
entries. So $A$ is invertible. Conversely, if $A$ is invertible, then the echelon form of $A^T$ must have $n$ pivots. This means that the diagonal entries must be the pivots, thus nonzero.
16. If $A$ is invertible, then so is $A^T$; thus the columns of $A^T$ must be linearly independent. (Invertible Matrix Theorem, (a)$\Rightarrow$(l) and (a)$\Rightarrow$(e).)
18. No. By an elementary row operation that adds the negative of one of the two identical rows to the other, we get a row of all zeros. Therefore the matrix cannot have $n$ pivot positions and thus is not invertible. (Assume the matrix is $n \times n$.)
20. No. Because the equation $Ax = b$ is consistent for every $b$ in $\mathbb{R}^5$, the matrix $A$ is invertible. Therefore $Ax = b$ has at most one solution for every $b$. (Invertible Matrix Theorem, (g)$\Rightarrow$(a)$\Rightarrow$(f), then use the definition of one-to-one.)
22. Since $EF = I$, by the Invertible Matrix Theorem ((j)$\Leftrightarrow$(k)$\Leftrightarrow$(a)) we know $E$ and $F$ are both invertible, and $E^{-1} = F$. Therefore $FE = E^{-1}E = I = EF$, i.e. $E$ and $F$ commute.
26. Since the columns of $A$ are linearly independent, $A$ is invertible (Invertible Matrix Theorem, (e)$\Rightarrow$(a)). So $A^2$ is also invertible and, as a consequence, the columns of $A^2$ span $\mathbb{R}^n$ (Invertible Matrix Theorem, (a)$\Rightarrow$(h)).
28. Since $AB$ is invertible, let $C = (AB)^{-1}A$; then $CB = (AB)^{-1}AB = I$. Therefore $B$ is invertible (Invertible Matrix Theorem, (j)$\Rightarrow$(a)).
30. By definition, if $x \mapsto Ax$ is one-to-one, then each $b$ can have at most one preimage, so $Ax = b$ cannot have more than one solution for any $b$. Therefore, if $Ax = b$ has more than one solution for some $b$, then the transformation $x \mapsto Ax$ is not one-to-one. We can also deduce that $A$ is not invertible (by the Invertible Matrix Theorem).
32. Since the equation Ax = 0 has only the trivial solution, A has a pivot in every
column. Since A is an n n matrix, A has exactly n pivots. Therefore A has a pivot in
every row. This shows that Ax = b has a solution for each b.
34. The standard matrix of $T$ is
\[ A = \begin{pmatrix} 2 & -8 \\ 2 & -7 \end{pmatrix}. \]
Since $\det A = 2 \ne 0$, its inverse is
\[ A^{-1} = \frac{1}{2} \begin{pmatrix} -7 & 8 \\ -2 & 2 \end{pmatrix} = \begin{pmatrix} -\frac{7}{2} & 4 \\ -1 & 1 \end{pmatrix}. \]
Therefore $T$ is invertible and $T^{-1}(x_1, x_2) = \left(-\frac{7}{2}x_1 + 4x_2, \; -x_1 + x_2\right)$.
3.1 Introduction to Determinants
2. Cofactor expansion across the first row:
\[ \begin{vmatrix} 0 & 5 & 1 \\ 4 & -3 & 0 \\ 2 & 4 & 1 \end{vmatrix} = (-1)^{1+1} \cdot 0 \begin{vmatrix} -3 & 0 \\ 4 & 1 \end{vmatrix} + (-1)^{1+2} \cdot 5 \begin{vmatrix} 4 & 0 \\ 2 & 1 \end{vmatrix} + (-1)^{1+3} \cdot 1 \begin{vmatrix} 4 & -3 \\ 2 & 4 \end{vmatrix} = 0 - 20 + 22 = 2. \]
Cofactor expansion down the second column:
\[ \begin{vmatrix} 0 & 5 & 1 \\ 4 & -3 & 0 \\ 2 & 4 & 1 \end{vmatrix} = (-1)^{1+2} \cdot 5 \begin{vmatrix} 4 & 0 \\ 2 & 1 \end{vmatrix} + (-1)^{2+2} \cdot (-3) \begin{vmatrix} 0 & 1 \\ 2 & 1 \end{vmatrix} + (-1)^{3+2} \cdot 4 \begin{vmatrix} 0 & 1 \\ 4 & 0 \end{vmatrix} = -20 + 6 + 16 = 2. \]
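Cofactor expansion is also easy to express recursively in code; here is a small Python sketch (exponential-time, for illustration only), checked against the $3 \times 3$ matrix above, with the entries as written there taken as an assumption:

```python
def det_cofactor(M):
    """Determinant by cofactor expansion along the first row (M is a list of lists)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_cofactor(minor)
    return total

# The matrix from Problem 2 above.
print(det_cofactor([[0, 5, 1], [4, -3, 0], [2, 4, 1]]))  # 2
```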
6. Cofactor expansion across the first row:
\[ \begin{vmatrix} 5 & -2 & 4 \\ 0 & 3 & -5 \\ 2 & -4 & 7 \end{vmatrix} = (-1)^{1+1} \cdot 5 \begin{vmatrix} 3 & -5 \\ -4 & 7 \end{vmatrix} + (-1)^{1+2} \cdot (-2) \begin{vmatrix} 0 & -5 \\ 2 & 7 \end{vmatrix} + (-1)^{1+3} \cdot 4 \begin{vmatrix} 0 & 3 \\ 2 & -4 \end{vmatrix} = 5 + 20 - 24 = 1. \]
10. First we use the cofactor expansion across the second row:
\[ \begin{vmatrix} 1 & -2 & 5 & 2 \\ 0 & 0 & 3 & 0 \\ 2 & -6 & 7 & 5 \\ 5 & 0 & 4 & 4 \end{vmatrix} = (-1)^{2+3} \cdot 3 \begin{vmatrix} 1 & -2 & 2 \\ 2 & -6 & 5 \\ 5 & 0 & 4 \end{vmatrix}, \]
then we use the cofactor expansion down the second column to get
\[ \begin{vmatrix} 1 & -2 & 2 \\ 2 & -6 & 5 \\ 5 & 0 & 4 \end{vmatrix} = (-1)^{1+2} \cdot (-2) \begin{vmatrix} 2 & 5 \\ 5 & 4 \end{vmatrix} + (-1)^{2+2} \cdot (-6) \begin{vmatrix} 1 & 2 \\ 5 & 4 \end{vmatrix} = 2 \cdot (-17) - 6 \cdot (-6) = 2. \]
Therefore
\[ \begin{vmatrix} 1 & -2 & 5 & 2 \\ 0 & 0 & 3 & 0 \\ 2 & -6 & 7 & 5 \\ 5 & 0 & 4 & 4 \end{vmatrix} = -6. \]
12. First, we use the cofactor expansion across the first row:
\[ \begin{vmatrix} 4 & 0 & 0 & 0 \\ 7 & -1 & 0 & 0 \\ 2 & 6 & 3 & 0 \\ 5 & 8 & 4 & -3 \end{vmatrix} = (-1)^{1+1} \cdot 4 \begin{vmatrix} -1 & 0 & 0 \\ 6 & 3 & 0 \\ 8 & 4 & -3 \end{vmatrix}. \]
Then we use the cofactor expansion across the first row again:
\[ \begin{vmatrix} -1 & 0 & 0 \\ 6 & 3 & 0 \\ 8 & 4 & -3 \end{vmatrix} = (-1)^{1+1} \cdot (-1) \begin{vmatrix} 3 & 0 \\ 4 & -3 \end{vmatrix} = (-1) \cdot 3 \cdot (-3). \]
Therefore
\[ \begin{vmatrix} 4 & 0 & 0 & 0 \\ 7 & -1 & 0 & 0 \\ 2 & 6 & 3 & 0 \\ 5 & 8 & 4 & -3 \end{vmatrix} = 4 \cdot (-1) \cdot 3 \cdot (-3) = 36. \]
(Actually, this is a lower triangular matrix; the determinant is the product of the diagonal entries.)
14. First we use the cofactor expansion across the fourth row, then the cofactor expansion down the last column, and finally the cofactor expansion down the first column:
\[ \begin{vmatrix} 6 & 3 & 2 & 4 & 0 \\ 9 & 0 & -4 & 1 & 0 \\ 8 & 5 & 6 & 7 & 1 \\ 3 & 0 & 0 & 0 & 0 \\ 4 & 2 & 3 & 2 & 0 \end{vmatrix} = (-1)^{4+1} \cdot 3 \begin{vmatrix} 3 & 2 & 4 & 0 \\ 0 & -4 & 1 & 0 \\ 5 & 6 & 7 & 1 \\ 2 & 3 & 2 & 0 \end{vmatrix} = (-1)^{4+1} \cdot 3 \cdot (-1)^{3+4} \cdot 1 \begin{vmatrix} 3 & 2 & 4 \\ 0 & -4 & 1 \\ 2 & 3 & 2 \end{vmatrix} \]
\[ = (-1)^{4+1} \cdot 3 \cdot (-1)^{3+4} \cdot 1 \left[ (-1)^{1+1} \cdot 3 \begin{vmatrix} -4 & 1 \\ 3 & 2 \end{vmatrix} + (-1)^{3+1} \cdot 2 \begin{vmatrix} 2 & 4 \\ -4 & 1 \end{vmatrix} \right] = 3[-33 + 36] = 9. \]
18.
\[ \begin{vmatrix} 1 & 3 & 5 \\ 2 & 1 & 1 \\ 3 & 4 & 2 \end{vmatrix} = (1 \cdot 1 \cdot 2) + (3 \cdot 1 \cdot 3) + (5 \cdot 2 \cdot 4) - (1 \cdot 1 \cdot 4) - (3 \cdot 2 \cdot 2) - (5 \cdot 1 \cdot 3) = 20. \]
20. The elementary row operation multiplies the second row by $k$. The determinant is also multiplied by $k$.
22. The elementary row operation is to add k times the second row to the rst row. The
determinant remains the same.
24. The elementary row operation is to switch the rst row and the second row. The
determinant becomes the negative of the original determinant.
26.
\[ \det \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ k & 0 & 1 \end{pmatrix} = 1. \]
28.
\[ \det \begin{pmatrix} 1 & 0 & 0 \\ 0 & k & 0 \\ 0 & 0 & 1 \end{pmatrix} = k. \]
30.
\[ \det \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} = -1. \]
32. The determinant of an elementary scaling matrix with k on the diagonal is k.
36. Since
\[ E = \begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}, \quad A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \]
we have
\[ EA = \begin{pmatrix} a & b \\ ka + c & kb + d \end{pmatrix}. \]
We can calculate that $\det(E) = 1$, $\det(A) = ad - bc$ and
\[ \det(EA) = a(kb + d) - b(ka + c) = ad - bc, \]
which shows that $\det(EA) = \det(E)\det(A)$.
38. Since
\[ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \]
we have
\[ kA = \begin{pmatrix} ka & kb \\ kc & kd \end{pmatrix}. \]
So we can calculate that
\[ \det(kA) = (ka)(kd) - (kb)(kc) = k^2(ad - bc). \]
Therefore $\det(kA) = k^2 \det(A)$.
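Both identities from Problems 36 and 38 can be verified symbolically; a small SymPy sketch (SymPy is assumed to be available):

```python
import sympy as sp

a, b, c, d, k = sp.symbols('a b c d k')
E = sp.Matrix([[1, 0], [k, 1]])
A = sp.Matrix([[a, b], [c, d]])

# Problem 36: a row-replacement elementary matrix does not change the determinant.
print(sp.simplify((E * A).det() - E.det() * A.det()))  # 0

# Problem 38: scaling a 2x2 matrix by k scales the determinant by k^2.
print(sp.simplify((k * A).det() - k**2 * A.det()))     # 0
```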
40. (a) False. Cofactor expansion across any row or down any column gives the same value.
(b) False. The determinant of a triangular matrix (either upper or lower) is the product of the entries on the main diagonal.
43. In general, $\det(A + B) \ne \det A + \det B$. For example,
\[ A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad A + B = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}; \]
then $\det(A + B) = 3$, while $\det(A) = 1$ and $\det(B) = 0$.
References
[1] David C. Lay, R. Kent Nagle, Edward B. Saff, Arthur David Snider, Linear Algebra and Differential Equations, Second Custom Edition for University of California, Berkeley.
[2] http://en.wikipedia.org/wiki/Kronecker_delta
[3] http://en.wikipedia.org/wiki/Nilpotent
[4] Michael Atiyah, Ian G. Macdonald, Introduction to Commutative Algebra.