
Linear Algebra 110: HW 1 Solutions

Section 1.2
1. a) True by definition.
b) False; if 0 and 0′ are both zero elements, then 0 + 0′ = 0′ because 0 is a zero element, and
0 + 0′ = 0 because 0′ is a zero element; so 0 = 0′.
c) False; for instance a = 2, b = 3, x = 0 (the zero vector).
d) False; a similar counterexample works.
e) True.
f) False; the statement is backward.
g) False; for instance, in a vector space any pair of elements can be added.
h) False; the top degree terms can cancel; ex. f, g both equal to 3x.
i) True; the top degree term is still nonzero (because fields are also integral domains).
j) True; definition.
k) True; definition.
2. It's the 3 × 4 matrix with all entries equal to the zero element of F.
7. Two functions are equal if f(s) = g(s) for each s ∈ S, i.e. if they agree at every point
of the domain. In this case, the domain S has only two points, so just check that f(0) = g(0) and
f(1) = g(1). Similarly, just check that f(0) + g(0) = h(0) and likewise at 1.
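Because the domain is finite, the check is a finite pointwise comparison and can even be mechanized. A small Python sketch (the particular functions f, g, h here are made-up examples, not the ones from the exercise):

```python
# Functions on the two-point domain S = {0, 1}: equality of functions
# reduces to a finite pointwise check. f, g, h are illustrative choices.
S = [0, 1]

f = lambda s: 3 * s + 1
g = lambda s: 3 * s + 1
h = lambda s: 6 * s + 2   # chosen so that h agrees with f + g on S

assert all(f(s) == g(s) for s in S)           # f = g as functions on S
assert all(f(s) + g(s) == h(s) for s in S)    # f + g = h as functions on S
```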
8. (a + b)(x + y) = a(x + y) + b(x + y) = ax + ay + bx + by by the first distributive property and
the second distributive property respectively (note these are two different axioms).
10. The important thing to note is that the sum of two differentiable functions is differentiable
and the scalar multiple of a differentiable function is differentiable. Thus the addition and scalar
multiplication indeed define operations V × V → V and R × V → V, respectively.
Next, you should technically (tediously) check the eight vector space axioms (VS 1)–(VS 8).
first few paragraphs of section 1.3.
13. No. Note the vector space would need an additive identity; there is one, namely 0 := (0, 1). But
then the element (1, 0) needs an additive inverse. You'll find nothing works, because in the second
coordinate of the ordered pair, 0 · c = 0 ≠ 1 no matter what c you choose (a property of R and,
more generally, of fields).
17. No. For 1 ∈ F, we should have 1v = v for every v ∈ V, but instead, for instance, 1(1, 1) = (1, 0) ≠
(1, 1).

Section 1.3
1. a) False; Take R2 and any line L that does not pass through the origin. L is not a subspace,
but one could biject the line with R and hence put a vector space structure on it.
b) False; needs 0.
c) True; the zero space is a subspace.
d) False; consider the subsets [0, 3] and [2, 4] of R.

e) True; pretty self-evident.
f) False; try the 2 × 2 diagonal matrix with diagonal entries 0 and 1.
g) False; R^2 is not even a subset of R^3.
3. Check the left and right sides entrywise. Starting on the left, we have
((aA + bB)^t)_ij = (aA + bB)_ji = (aA)_ji + (bB)_ji = aA_ji + bB_ji = a(A^t)_ij + b(B^t)_ij
= (aA^t)_ij + (bB^t)_ij = (aA^t + bB^t)_ij.
Each entry of the two sides is the same; this means the matrices are the same.
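The identity can also be sanity-checked numerically (this is illustration, not the proof). A pure-Python sketch with arbitrary sample values for a, b, A, B:

```python
# Numerical sanity check that (aA + bB)^t = aA^t + bB^t on a small example.
def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

def add(M, N):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(M, N)]

def scale(c, M):
    return [[c * x for x in r] for r in M]

a, b = 2, -3                    # arbitrary scalars
A = [[1, 2], [3, 4]]            # arbitrary 2 x 2 matrices
B = [[5, 6], [7, 8]]

lhs = transpose(add(scale(a, A), scale(b, B)))
rhs = add(scale(a, transpose(A)), scale(b, transpose(B)))
assert lhs == rhs               # both sides agree entrywise
```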
8. For the following, simply check that each set contains zero, is closed under scaling, and is closed
under addition. This means, for instance, to check closure under scaling, you would write: consider
any c ∈ R and any a ∈ W_i; we need to check whether ca ∈ W_i.
For (c) you would finish off as follows: if a = (a1, a2, a3) ∈ W3, check that ca = (ca1, ca2, ca3)
satisfies
2(ca1) − 7(ca2) + (ca3) = c(2a1 − 7a2 + a3) = 0,
where the last equality is a result of a being in W3.
a) Yes.
b) No; does not contain zero.
c) Yes.
d) Yes.
e) No; does not contain zero.
f) No; this does contain zero and is closed under scaling, but closure under addition fails: for
instance, (√3, √5, 0) and (0, √6, √3) are both in the set, while their sum is not.
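The closure-under-scaling computation shown for (c) can be sketched in Python (the sample vector and scalars below are arbitrary choices):

```python
# Closure under scaling for W3 = {a in R^3 : 2a1 - 7a2 + a3 = 0}.
def in_W3(a):
    a1, a2, a3 = a
    return 2 * a1 - 7 * a2 + a3 == 0

a = (1, 1, 5)               # 2 - 7 + 5 = 0, so a is in W3
assert in_W3(a)
for c in (-2, 0, 3.5):      # a few sample scalars
    ca = tuple(c * x for x in a)
    # 2(c*a1) - 7(c*a2) + (c*a3) = c(2a1 - 7a2 + a3) = c * 0 = 0
    assert in_W3(ca)
```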
12. Check that this set contains the matrix of all zero entries and satisfies closure under scaling
and addition. Similar to the above.
19. For the reverse direction, suppose either W1 ⊆ W2 or W2 ⊆ W1. In the former case, W1 ∪ W2 =
W2, which we are given is a subspace. The latter case is similar.
For the forward direction, suppose W1 ∪ W2 is a subspace, and suppose for contradiction that
the right-hand side is false. This means W2 is not a subset of W1, and W1 is not a subset of W2.
So there exists v2 ∈ W2 such that v2 ∉ W1, and similarly there exists v1 ∈ W1 such that v1 ∉ W2.
Because v1, v2 ∈ W1 ∪ W2, which we have assumed is a subspace, v1 + v2 ∈ W1 ∪ W2. This
means either v1 + v2 ∈ W1 or v1 + v2 ∈ W2 (or possibly both).
If v1 + v2 ∈ W1, then v2 = (v1 + v2) + (−v1) ∈ W1, where the latter containment is because W1
is a subspace. This is a contradiction.
So v1 + v2 ∈ W2. But this leads to a similar contradiction. Our supposition was nonsense; the
right-hand side is true.
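A concrete instance of the forward direction, with W1 and W2 taken to be the two coordinate axes of R^2 (a standard example, not part of the exercise itself):

```python
# W1 = x-axis, W2 = y-axis in R^2: neither contains the other, and the
# union fails closure under addition, so it is not a subspace.
in_W1 = lambda v: v[1] == 0
in_W2 = lambda v: v[0] == 0
in_union = lambda v: in_W1(v) or in_W2(v)

v1 = (1, 0)   # in W1 but not W2
v2 = (0, 1)   # in W2 but not W1
s = (v1[0] + v2[0], v1[1] + v2[1])

assert in_union(v1) and in_union(v2)
assert not in_union(s)   # (1, 1) escapes the union
```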
25. We first need to check that W1 ∩ W2 = {0}. Consider any polynomial in both W1 and W2.
Then all of its even-degree coefficients are zero, and all of its odd-degree coefficients are zero.
In other words, all of its coefficients are zero, so the polynomial is the zero polynomial.
We also need to check that any polynomial p may be expressed as a sum of an element p1 ∈ W1
and an element p2 ∈ W2. Any polynomial has a representation
p = a_n x^n + a_{n−1} x^{n−1} + · · · + a_0

where we may assume n is odd (possibly having a_n = 0). Then simply take
p1 = a_n x^n + a_{n−2} x^{n−2} + · · · + a_1 x
p2 = a_{n−1} x^{n−1} + a_{n−3} x^{n−3} + · · · + a_0.
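The decomposition can be sketched on coefficient lists (a hypothetical encoding where coeffs[i] is the coefficient of x^i):

```python
# Split a polynomial, stored as [a0, a1, ..., an], into its odd-degree
# part p1 (in W1) and even-degree part p2 (in W2), so that p1 + p2 = p.
def split_odd_even(coeffs):
    p1 = [c if i % 2 == 1 else 0 for i, c in enumerate(coeffs)]  # odd degrees
    p2 = [c if i % 2 == 0 else 0 for i, c in enumerate(coeffs)]  # even degrees
    return p1, p2

p = [7, 0, -2, 5]            # 7 - 2x^2 + 5x^3
p1, p2 = split_odd_even(p)
assert p1 == [0, 0, 0, 5]    # 5x^3
assert p2 == [7, 0, -2, 0]   # 7 - 2x^2
assert [x + y for x, y in zip(p1, p2)] == p
```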

29. (As far as I can tell, the characteristic-2 condition is irrelevant, so just let F be any field.) Note
that a symmetric matrix that is strictly lower triangular must be the zero matrix, so W1 ∩ W2 = {0}.
Given any matrix A ∈ M_{n×n}(F), choose A2 to have the same entries as A in its upper-triangular
positions (including the diagonal), and reflect those to fill the lower-triangular entries. This A2 is
symmetric, so A2 ∈ W2.
Choose A1 to be the matrix whose strictly lower-triangular entries are the corresponding entries
of A minus the corresponding entries of A2, with all remaining entries zero. Note A1 ∈ W1;
check that A1 + A2 = A.
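The construction of A1 and A2 can be sketched in Python on a small example (plain lists-of-lists, illustrative only):

```python
# Decompose A as A1 + A2 with A1 strictly lower triangular (in W1) and
# A2 symmetric (in W2), following the construction in the solution.
def decompose(A):
    n = len(A)
    # A2: copy the upper triangle (including the diagonal) of A and
    # reflect it below the diagonal, so A2 is symmetric by construction.
    A2 = [[A[i][j] if i <= j else A[j][i] for j in range(n)] for i in range(n)]
    # A1: the leftover, nonzero only strictly below the diagonal.
    A1 = [[A[i][j] - A2[i][j] for j in range(n)] for i in range(n)]
    return A1, A2

A = [[1, 2], [9, 4]]
A1, A2 = decompose(A)
assert A2 == [[1, 2], [2, 4]]   # symmetric
assert A1 == [[0, 0], [7, 0]]   # strictly lower triangular
assert [[x + y for x, y in zip(r, s)] for r, s in zip(A1, A2)] == A
```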
