
Contents

1 Fields
  1.1 Notes about Exercise Problems
  1.2 Fields
    1.2.1 Properties of Scalars
  1.3 Exercises

2 Vector Spaces
  2.1 Definition of Vector Space
  2.2 Exercises
  2.3 Linear Dependence
    2.3.1 Some Observations and Commentary
  2.4 Linear Combinations
  2.5 Bases
    2.5.1 Examples, Observations and Comments
    2.5.2 Uniqueness
  2.6 Exercises
    2.6.1 The Impact of the Set of Scalars on the Vector Space
  2.7 Dimension
    2.7.1 Isomorphism
    2.7.2 Exercises
  2.8 Subspaces
    2.8.1 Observations and Examples
    2.8.2 Terminology for Working with Subspaces
    2.8.3 Calculus of Subspaces
    2.8.4 Complement
  2.9 Dimension of a Subspace
    2.9.1 Exercises
  2.10 Dual Spaces
    2.10.1 Exercises
  2.11 Dual Bases
  2.12 Reflexivity
  2.13 Basis of V′
  2.14 Annihilators
  2.15 Exercises
List of Theorems and Definitions

1 Definition (Vector Space)
2 Definition (Vector Sum Notation)
3 Definition (Sum of Zero Vectors)
4 Definition (Linear Dependence)
5 Theorem (Linear Dependence 1)
6 Theorem (Linear Dependence 2)
7 Definition (Basis Definition)
8 Theorem (Linear Combination Uniqueness using Basis)
9 Theorem (Extension of a Linearly Independent Set to a Basis)
10 Theorem (Cardinality of Basis)
11 Definition (Dimension of a Vector Space)
12 Theorem (Linear Dependence of n+1 Vectors in an n-dimensional Vector Space)
13 Definition (Isomorphic Vector Spaces)
14 Definition (Isomorphism)
15 Theorem (Isomorphism between Equidimensional Vector Spaces)
16 Theorem (Equinumerosity of a Vector Space and its Field)
17 Definition (Subspace)
18 Theorem (Subspace Intersection)
19 Definition (Span of a Subset of Vectors)
20 Theorem (Span of Vectors and Subspaces)
21 Theorem (Sum of Subspaces)
22 Theorem (Subspace Complement)
23 Theorem (Dimension of a Subspace)
24 Theorem (Subspace Basis)
25 Lemma (Basis of Intersection and Union of Subspaces)
26 Definition
27 Definition
28 Theorem
29 Theorem
30 Theorem
31 Theorem
32 Definition
33 Theorem (Dimension of the Space of Annihilators)
34 Theorem
Notes on Finite Dimensional Vector Spaces
Ramesh Kadambi
October 12, 2014
Chapter 1
Fields
1.1 Notes about Exercise Problems
If an exercise is neither imperative ("Prove that ...") nor interrogative ("Is it true that ...?") but merely declarative, then it is intended as a challenge. For such exercises the reader is asked to discover whether the assertion is true or false, prove it if it is true and construct a counterexample if it is false, and, most of all, discuss such alterations of hypothesis and conclusion as will make the true ones false and the false ones true. Second, the exercises, whatever their grammatical form, are not always placed so as to make their very position a hint to their solution. Frequently an exercise is stated as soon as the statement makes sense, quite a bit before the machinery for a quick solution has been developed. A reader who tries (even unsuccessfully) to solve such a "misplaced" exercise is likely to appreciate and to understand the subsequent developments much better for the attempt.
1.2 Fields
All numbers going forward are referred to as scalars. A scalar $s \in C, R, Z$, etc., i.e. a scalar may be a complex number, a real number, or an integer, among other possibilities.
1.2.1 Properties of Scalars
Throughout this study, scalars can be interpreted as reals or complex without loss of anything essential. The properties of the
scalars are assumed to be as follows,
(A) To every pair $(\alpha, \beta)$ of scalars there corresponds a scalar $\alpha + \beta$, called the sum of $\alpha$ and $\beta$, such that:

1. addition/sum is commutative, $\alpha + \beta = \beta + \alpha$,
2. addition/sum is associative, $\alpha + (\beta + \gamma) = (\alpha + \beta) + \gamma$,
3. there exists a unique scalar $0$ (called zero) such that $\alpha + 0 = \alpha$ for every scalar $\alpha$, and
4. to every scalar $\alpha$ there corresponds a unique scalar $-\alpha$ such that $\alpha + (-\alpha) = 0$.

(B) To every pair $(\alpha, \beta)$ of scalars there corresponds a scalar $\alpha\beta$, called the product of $\alpha$ and $\beta$, such that:

1. multiplication/product is commutative, $\alpha\beta = \beta\alpha$,
2. multiplication/product is associative, $\alpha(\beta\gamma) = (\alpha\beta)\gamma$,
3. there exists a unique scalar $1$ such that $\alpha \cdot 1 = \alpha$ for every scalar $\alpha$, and
4. to every non-zero scalar $\alpha$ there corresponds a unique scalar $\alpha^{-1}$ (or $\frac{1}{\alpha}$) such that $\alpha\alpha^{-1} = 1$.

(C) Multiplication is distributive with respect to addition: $\alpha(\beta + \gamma) = \alpha\beta + \alpha\gamma$.

A set S of objects (scalars) is a field if:

1. The operations of addition and multiplication are defined on S.
2. The conditions (A), (B), (C) are all satisfied.

Ex: 1. The set of all rational numbers Q, with the usual definitions of sum and product.
2. The set of reals R, and the set of complex numbers C.
1.3 Exercises
1. Almost all the laws of elementary arithmetic are consequences of the axioms defining a field. Prove, in particular, that if F is a field, and if $\alpha$, $\beta$, and $\gamma$ belong to F, then the following relations hold.

a. $0 + \alpha = \alpha$. This follows from properties A1 and A3.

b. If $\alpha + \beta = \alpha + \gamma$, then $\beta = \gamma$.

Proof. Using A1, A2 and A4: $\beta = 0 + \beta = (-\alpha + \alpha) + \beta = -\alpha + (\alpha + \beta) = -\alpha + (\alpha + \gamma) = (-\alpha + \alpha) + \gamma = \gamma$.

c. $\alpha + (\beta - \gamma) = (\alpha + \beta) - \gamma$ (here $\beta - \gamma = \beta + (-\gamma)$). The result follows from A1 and A2: $\alpha + (\beta + (-\gamma)) = (\alpha + \beta) + (-\gamma)$.

d. $\alpha \cdot 0 = 0 \cdot \alpha = 0$.

Proof. One is tempted to use the fact that $0 = \gamma + (-\gamma)$: we have $\alpha \cdot 0 = \alpha(\gamma + (-\gamma))$, and from C, $\alpha(\gamma + (-\gamma)) = \alpha\gamma + \alpha(-\gamma)$. Does this equal $0$? Not really, not yet: that would require knowing $\alpha(-\gamma) = -(\alpha\gamma)$, which has not been proved. The correct way of reasoning is: $\beta + 0 = \beta \Rightarrow \alpha(\beta + 0) = \alpha\beta + \alpha \cdot 0 = \alpha\beta$; we also have $\alpha\beta + 0 = \alpha\beta$. The result follows using b. The result $0 \cdot \alpha = 0$ follows from B1.

e. $(-1)\alpha = -\alpha$.

Proof. We have from d that $\alpha \cdot 0 = 0$, and $1 + (-1) = 0$, so $0 = \alpha(1 + (-1)) = \alpha + (-1)\alpha$. However, from A4, $\alpha + (-\alpha) = 0$. Using b we have $(-1)\alpha = -\alpha$.

g. If $\alpha\beta = 0$, then either $\alpha = 0$ or $\beta = 0$ (or both).

Proof. We will look at the statement of the problem as: $\alpha\beta = 0 \Leftrightarrow \alpha = 0$ or $\beta = 0$ or both. The direction "$\alpha = 0$ or $\beta = 0$ or both $\Rightarrow \alpha\beta = 0$" is straightforward from d. In the other direction, suppose $\alpha\beta = 0$ with $\alpha \neq 0$. Then $\alpha^{-1}$ exists by B4, and $\beta = 1 \cdot \beta = (\alpha^{-1}\alpha)\beta = \alpha^{-1}(\alpha\beta) = \alpha^{-1} \cdot 0 = 0$. Similarly, if $\beta \neq 0$ then $\alpha = 0$. Hence $\alpha\beta = 0$ forces at least one of the factors to be $0$.
2 a. Is the set of all positive integers a field? (In familiar systems, such as the integers, we shall almost always use the ordinary operations of addition and multiplication. On the rare occasions when we depart from this convention, we shall give ample warning. As for "positive", by that word we mean, here and elsewhere in this book, "greater than or equal to zero". If 0 is to be excluded, we shall say "strictly positive".)

No, the set of positive integers fails axiom A4.

b. What about the set of all integers?

No; multiplicative inverses (B4) would require rational numbers.

c. Can the answers to these questions be changed by re-defining addition or multiplication (or both)?

Interesting question; I need to think about it a bit. This is interesting: if there exists a bijection $f : S \to F$ from a set S to a field $(F, +, \cdot)$, we can define operations on S by $x \oplus y = f^{-1}(f(x) + f(y))$ and $x \odot y = f^{-1}(f(x) \cdot f(y))$, making $(S, \oplus, \odot)$ a field. Since the positive integers and the integers are both in bijection with the field Q, the answers to (a) and (b) can indeed be changed in this way.
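The transported-structure idea in (c) can be sketched concretely. As an illustration (the choice of the field $Z_5$, the carrier set S = {10, ..., 14}, the bijection f(x) = x - 10, and all helper names below are my own, not part of the original exercise), S inherits field operations from $Z_5$:

```python
# Transporting a field structure along a bijection f : S -> F.
# Here F = Z_5 (a field under mod-5 arithmetic) and S = {10, ..., 14}.
def f(x):          # bijection S -> F
    return x - 10

def f_inv(y):      # its inverse F -> S
    return y + 10

def s_add(x, y):   # x (+) y = f^-1(f(x) + f(y))
    return f_inv((f(x) + f(y)) % 5)

def s_mul(x, y):   # x (.) y = f^-1(f(x) * f(y))
    return f_inv((f(x) * f(y)) % 5)

print(s_add(13, 14))  # 12, since (3 + 4) mod 5 = 2
print(s_mul(12, 13))  # 11, since (2 * 3) mod 5 = 1
```

Under these operations 10 plays the role of zero in S and 11 the role of the unit, so $(S, \oplus, \odot)$ satisfies the field axioms simply because $Z_5$ does.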
3. Let m be an integer, $m \geq 2$, and let $Z_m$ be the set of all positive integers less than m, $Z_m = \{0, 1, \ldots, m-1\}$. If $\alpha$ and $\beta$ are in $Z_m$, let $\alpha + \beta$ be the least positive remainder obtained by dividing the (ordinary) sum of $\alpha$ and $\beta$ by m, and, similarly, let $\alpha\beta$ be the least positive remainder obtained by dividing the (ordinary) product of $\alpha$ and $\beta$ by m. (Example: if m = 12, then 3 + 11 = 2 and 3 · 11 = 9.) (a) Prove that $Z_m$ is a field if and only if m is a prime. (b) What is $-1$ in $Z_5$? (c) What is $\frac{1}{3}$ in $Z_7$?

(a) $Z_m$ is a field iff m is a prime.
Proof. If m is a prime, then all the axioms are satisfied, since they follow from the corresponding properties of ordinary addition and multiplication of numbers taken mod m. The delicate axiom is B4: by fact (1) below, for $\alpha \neq 0$ the m products $\alpha\beta \bmod m$, $\beta \in Z_m$, are all distinct, so one of them equals 1. Hence if m is a prime then $Z_m$ is a field.

If $Z_m$ is a field, then every non-zero scalar has an inverse under the operations of addition and multiplication. However, if m is not prime then there is a pair $(\alpha, \beta)$ of non-zero elements such that $\alpha\beta = 0$, and such elements have no inverses.

Ex: If m = 4, $Z_4 = \{0, 1, 2, 3\}$ and $2 \cdot \{0, 1, 2, 3\} = \{0, 2, 0, 2\}$; clearly 2 does not have an inverse.

We make use of the following two facts in order to prove the above statement.

1. If $\alpha\beta \bmod m = 0$ and m is a prime, then either $\alpha$ or $\beta$ is divisible by m.

Proof. The proof is straightforward: if $\alpha\beta \bmod m = 0$ then there exists n such that $\alpha\beta = mn$, i.e. $m = \frac{\alpha\beta}{n}$, but since m is prime this is a contradiction unless $\alpha$ or $\beta$ is divisible by m. This can be extended to any factors $\alpha_i$ such that $\prod_{i=1}^{n} \alpha_i \bmod m = 0$.

2. If $Z_m$ is a field and m is not a prime, then there exist non-zero $\alpha, \beta \in Z_m$ such that $\alpha\beta \bmod m = 0$.

Proof. If m is not prime then there are prime factors $\alpha_i \in Z_m$ of m with $1 < \alpha_i < m$. Grouping a factorization $m = \alpha\beta$, it follows that there is a pair $(\alpha, \beta) \in Z_m$ of non-zero elements such that $\alpha\beta \bmod m = 0$.

Now, using (1) and (2): if m is not a prime, then from (2) there is a pair $(\alpha, \beta)$ of non-zero elements such that $\alpha\beta \bmod m = 0$. If $Z_m$ were a field, then $\alpha$ would have an inverse $\alpha^{-1}$ such that $\alpha^{-1}\alpha = 1$. Consider the product $\alpha^{-1}\alpha\beta$: depending on how the product is associated it equals either $(\alpha^{-1}\alpha)\beta = \beta$ or $\alpha^{-1}(\alpha\beta) = \alpha^{-1} \cdot 0 = 0$. Since $\beta \neq 0$, this contradicts axiom B2, and $Z_m$ is not a field.

A better proof would be to show directly that $\alpha$ and $\beta$ do not have inverses.
(b) What is $-1$ in $Z_5$?

It is 4, since $(1 + 4) \bmod 5 = 0$.

(c) What is $\frac{1}{3}$ in $Z_7$?

Strictly speaking, $\frac{1}{3}$ does not belong to $Z_m$. However, $3^{-1} = 5$, since $(3 \cdot 5) \bmod 7 = 1$.
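These answers can be confirmed by brute force (a sketch; the helpers has_inverse and is_field are my names). Only axiom B4 can fail in $Z_m$, so it suffices to search for multiplicative inverses:

```python
# Brute-force check of Exercise 3: Z_m is a field iff m is prime.
# An element a is invertible in Z_m iff a*b % m == 1 for some b.
def has_inverse(a, m):
    return any(a * b % m == 1 for b in range(m))

def is_field(m):
    # The other axioms hold automatically mod m; only B4 (an inverse
    # for every non-zero element) can fail.
    return all(has_inverse(a, m) for a in range(1, m))

print([m for m in range(2, 13) if is_field(m)])   # [2, 3, 5, 7, 11]

# (b) -1 in Z_5 is the additive inverse of 1:
print([a for a in range(5) if (1 + a) % 5 == 0])  # [4]

# (c) 1/3 in Z_7 is the multiplicative inverse of 3:
print([b for b in range(7) if 3 * b % 7 == 1])    # [5]
```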
4. The example of $Z_p$ (where p is a prime) shows that not quite all the laws of elementary arithmetic hold in fields; in $Z_2$, for instance, $1 + 1 = 0$. Prove that if F is a field, then either the result of repeatedly adding 1 to itself is always different from 0, or else the first time that it is equal to 0 occurs when the number of summands is a prime. (The characteristic of the field F is defined to be 0 in the first case and the crucial prime in the second.)

Proof. For $Z_m$ the proof is pretty straightforward: if $Z_m$ is a field then, as proved earlier, m is a prime. Since addition is taken mod m, $\underbrace{1 + 1 + \cdots + 1}_{n} = 0$ means $\sum_{i=1}^{n} 1 = km$, i.e. $n = km$; clearly the first time the sum is zero is when $k = 1$, i.e. $n = m$, which is prime. For a general field F, suppose the first vanishing sum has n summands and $n = ab$ with $1 < a, b < n$. By distributivity, $\big(\sum_{i=1}^{a} 1\big)\big(\sum_{j=1}^{b} 1\big) = \sum_{i=1}^{n} 1 = 0$, so by exercise 1g one of the two shorter sums is already 0, contradicting the minimality of n. Hence n is prime.
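A small sketch of the "first vanishing sum of 1's" computation in $Z_m$ (the function name characteristic is mine):

```python
# Repeatedly add 1 (mod m) and report how many summands first give 0.
def characteristic(m):
    total, n = 0, 0
    while True:
        total = (total + 1) % m
        n += 1
        if total == 0:
            return n

print(characteristic(2))   # 2
print(characteristic(7))   # 7
print(characteristic(11))  # 11
```

For every m the answer is m itself; the exercise says that whenever $Z_m$ is actually a field, this m must be prime.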
5. Let $Q(\sqrt{2})$ be the set of all real numbers of the form $\alpha + \beta\sqrt{2}$, where $\alpha, \beta$ are rational (i.e. $\alpha, \beta \in Q$). (a) Is $Q(\sqrt{2})$ a field? (b) What if $\alpha$ and $\beta$ are required to be integers?

Checking that the field axioms are satisfied:

A1: $(\alpha + \beta\sqrt{2}) + (\gamma + \delta\sqrt{2}) = (\alpha + \gamma) + (\beta + \delta)\sqrt{2} \in Q(\sqrt{2})$, since $\alpha + \gamma \in Q$ and $\beta + \delta \in Q$.

A2: follows from the associativity of the rationals and A1.

A3: $\alpha = 0, \beta = 0$ gives $0 + 0 \cdot \sqrt{2} \in Q(\sqrt{2})$, since $0 \in Q$.

A4: $-\alpha - \beta\sqrt{2} \in Q(\sqrt{2})$ since $-\alpha, -\beta \in Q$, and $(\alpha + \beta\sqrt{2}) + (-\alpha - \beta\sqrt{2}) = 0$.

B1: for $\alpha, \beta, \gamma, \delta \in Q$ we have $(\alpha + \beta\sqrt{2})(\gamma + \delta\sqrt{2}) = (\alpha\gamma + 2\beta\delta) + (\alpha\delta + \beta\gamma)\sqrt{2} \in Q(\sqrt{2})$, since $\alpha\gamma + 2\beta\delta$ and $\alpha\delta + \beta\gamma$ are in Q.

B2: associativity follows from the associativity of the rationals and B1. For $\alpha, \beta, \gamma, \delta, \epsilon, \eta \in Q$, let $a = \alpha + \beta\sqrt{2}$, $b = \gamma + \delta\sqrt{2}$, $c = \epsilon + \eta\sqrt{2} \in Q(\sqrt{2})$; we check $(ab)c = a(bc)$:

$(ab)c = \big((\alpha\gamma + 2\beta\delta) + (\alpha\delta + \beta\gamma)\sqrt{2}\big)(\epsilon + \eta\sqrt{2})$
$= (\alpha\gamma\epsilon + 2\beta\delta\epsilon + 2\alpha\delta\eta + 2\beta\gamma\eta) + (\alpha\gamma\eta + 2\beta\delta\eta + \alpha\delta\epsilon + \beta\gamma\epsilon)\sqrt{2}$   (1.3.1)

$a(bc) = (\alpha + \beta\sqrt{2})\big((\gamma\epsilon + 2\delta\eta) + (\gamma\eta + \delta\epsilon)\sqrt{2}\big)$
$= (\alpha\gamma\epsilon + 2\alpha\delta\eta + 2\beta\gamma\eta + 2\beta\delta\epsilon) + (\alpha\gamma\eta + \alpha\delta\epsilon + \beta\gamma\epsilon + 2\beta\delta\eta)\sqrt{2}$   (1.3.2)

From (1.3.1) and (1.3.2) we see that associativity holds.

B3: since $1 \in Q$ we have $1 + 0 \cdot \sqrt{2} \in Q(\sqrt{2})$ such that $(\alpha + \beta\sqrt{2}) \cdot 1 = \alpha + \beta\sqrt{2}$.

B4: for every non-zero $\alpha + \beta\sqrt{2}$ the inverse $(\alpha + \beta\sqrt{2})^{-1} = \frac{\alpha - \beta\sqrt{2}}{\alpha^2 - 2\beta^2} = \frac{\alpha}{\alpha^2 - 2\beta^2} - \frac{\beta}{\alpha^2 - 2\beta^2}\sqrt{2}$ lies in $Q(\sqrt{2})$, and $(\alpha + \beta\sqrt{2}) \cdot \frac{\alpha - \beta\sqrt{2}}{\alpha^2 - 2\beta^2} = 1$. (Note that $\alpha^2 - 2\beta^2 \neq 0$ for rational $\alpha, \beta$ not both zero, since $\sqrt{2}$ is irrational.)

C1: distributive property: given $\alpha, \beta, \gamma, \delta, \epsilon, \eta \in Q$:

Proof.
$(\alpha + \beta\sqrt{2})(\gamma + \delta\sqrt{2}) + (\alpha + \beta\sqrt{2})(\epsilon + \eta\sqrt{2})$
$= \big((\alpha\gamma + 2\beta\delta) + (\alpha\delta + \beta\gamma)\sqrt{2}\big) + \big((\alpha\epsilon + 2\beta\eta) + (\alpha\eta + \beta\epsilon)\sqrt{2}\big)$
$= \alpha(\gamma + \epsilon) + 2\beta(\delta + \eta) + \big(\alpha(\delta + \eta) + \beta(\gamma + \epsilon)\big)\sqrt{2}$
$= (\alpha + \beta\sqrt{2})\big((\gamma + \epsilon) + (\delta + \eta)\sqrt{2}\big)$
$= (\alpha + \beta\sqrt{2})\big((\gamma + \delta\sqrt{2}) + (\epsilon + \eta\sqrt{2})\big)$

So $Q(\sqrt{2})$ is a field.

(b) What if $\alpha$ and $\beta$ are integers? Then it is not a field: the inverse in B4 requires the coefficients to be rational.
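The B4 inverse formula can be sanity-checked with exact rational arithmetic, representing $\alpha + \beta\sqrt{2}$ as a coefficient pair (a sketch; the helper names mul and inv are mine):

```python
# Q(sqrt(2)) arithmetic on pairs (a, b) standing for a + b*sqrt(2),
# with inverse (a + b*sqrt(2))^(-1) = (a - b*sqrt(2)) / (a^2 - 2*b^2).
from fractions import Fraction

def mul(x, y):
    # (a + b sqrt2)(c + d sqrt2) = (ac + 2bd) + (ad + bc) sqrt2
    (a, b), (c, d) = x, y
    return (a * c + 2 * b * d, a * d + b * c)

def inv(x):
    a, b = x
    n = a * a - 2 * b * b      # non-zero for rational a, b not both zero
    return (Fraction(a) / n, Fraction(-b) / n)

x = (Fraction(3), Fraction(5))   # 3 + 5*sqrt(2)
print(mul(x, inv(x)))            # (Fraction(1, 1), Fraction(0, 1)), i.e. 1
```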
6 a. Does the set of all polynomials with integer coefficients form a field?

No, although most of the axioms hold. Closure under addition and multiplication and the commutative and associative properties of these operations (A1, A2, B1, B2) follow from the properties of integer addition and multiplication.

A3: The zero polynomial is the one with all zero coefficients, $\sum_{i=0}^{n} 0 \cdot x^i$.

A4: For every $\sum_{i=0}^{n} a_i x^i \in P$ there is $\sum_{i=0}^{n} (-a_i) x^i \in P$ such that $\sum_{i=0}^{n} a_i x^i + \sum_{i=0}^{n} (-a_i) x^i = 0$, since if $a_i \in Z$ then $-a_i \in Z$.

B1-B2: given $\sum_{i=0}^{n} a_i x^i$, $\sum_{j=0}^{m} b_j x^j$, $\sum_{k=0}^{p} c_k x^k \in P$, we have $\sum_{i=0}^{n} a_i x^i \cdot \sum_{j=0}^{m} b_j x^j \in P$ and $\sum_{i=0}^{n} a_i x^i \cdot \big(\sum_{j=0}^{m} b_j x^j \cdot \sum_{k=0}^{p} c_k x^k\big) = \big(\sum_{i=0}^{n} a_i x^i \cdot \sum_{j=0}^{m} b_j x^j\big) \cdot \sum_{k=0}^{p} c_k x^k$.

B3: $1 \cdot x^0 \in P$ satisfies $\sum_{i=0}^{n} a_i x^i \cdot 1 \cdot x^0 = \sum_{i=0}^{n} a_i x^i$.

B4 fails: the required inverse $\frac{1}{\sum_{i=0}^{n} a_i x^i}$, which would satisfy $\sum_{i=0}^{n} a_i x^i \cdot \frac{1}{\sum_{i=0}^{n} a_i x^i} = 1$, is not a polynomial: $\frac{1}{\sum_{i=0}^{n} a_i x^i} \notin P$. Hence the set of polynomials with integer coefficients is not a field. The same holds for real coefficients as well.
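The failure of B4 can also be seen from degrees: the degree of a product of non-zero polynomials is the sum of the degrees, so $pq = 1$ would force both factors to be constants. A small sketch (poly_mul is my helper; coefficient lists are lowest degree first):

```python
# Degrees add under polynomial multiplication, so no polynomial of
# degree >= 1 can have a polynomial inverse.
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

p = [1, 1]             # 1 + x
print(poly_mul(p, p))  # [1, 2, 1], i.e. 1 + 2x + x^2: degree 2, never 1
```

Over the integers even the constant candidates mostly fail; only the constants 1 and -1 are invertible in P.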
7. Let F be the set $O_p$ of all ordered pairs $(\alpha, \beta)$ of real numbers.

a. If addition and multiplication are defined as
$(\alpha, \beta) + (\gamma, \delta) = (\alpha + \gamma, \beta + \delta)$
$(\alpha, \beta)(\gamma, \delta) = (\alpha\gamma, \beta\delta)$
does F become a field?

A1, A2: follow from the properties of addition of real numbers.

A3: The pair $(0, 0) \in O_p$ satisfies $(\alpha, \beta) + (0, 0) = (\alpha + 0, \beta + 0) = (\alpha, \beta)$.

A4: $(-\alpha, -\beta) \in O_p$ satisfies $(\alpha, \beta) + (-\alpha, -\beta) = (0, 0)$, since $-\alpha, -\beta \in R$.

B1, B2: these follow from the commutative and associative properties of the reals. B1 follows from commutativity of the reals. B2 can be shown as follows:
$(\gamma, \delta)(\epsilon, \eta) = (\gamma\epsilon, \delta\eta) \in O_p$ since $\gamma\epsilon, \delta\eta \in R$, and
$(\alpha, \beta)\big((\gamma, \delta)(\epsilon, \eta)\big) = (\alpha, \beta)(\gamma\epsilon, \delta\eta) = (\alpha(\gamma\epsilon), \beta(\delta\eta)) = ((\alpha\gamma)\epsilon, (\beta\delta)\eta)$ using associativity of the reals
$= (\alpha\gamma, \beta\delta)(\epsilon, \eta) = \big((\alpha, \beta)(\gamma, \delta)\big)(\epsilon, \eta)$.

B3: There exists $(1, 1) \in O_p$, since $1 \in R$, such that $(\alpha, \beta)(1, 1) = (\alpha, \beta)$.

B4: There exists $\big(\frac{1}{\alpha}, \frac{1}{\beta}\big) \in O_p$ such that $(\alpha, \beta)\big(\frac{1}{\alpha}, \frac{1}{\beta}\big) = (1, 1)$, since $\frac{1}{\alpha}, \frac{1}{\beta} \in R$; but this is what trips you up: it requires both $\alpha \neq 0$ and $\beta \neq 0$. Pairs of the form $(0, a)$ and $(a, 0)$ with $a \neq 0$ are non-zero yet do not have a multiplicative inverse.

C1: The distributive property again follows from the distributive property of the reals:
$(\alpha, \beta)\big((\gamma, \delta) + (\epsilon, \eta)\big) = (\alpha, \beta)(\gamma + \epsilon, \delta + \eta)$
$= (\alpha(\gamma + \epsilon), \beta(\delta + \eta))$
$= (\alpha\gamma + \alpha\epsilon, \beta\delta + \beta\eta)$
$= (\alpha\gamma, \beta\delta) + (\alpha\epsilon, \beta\eta)$
$= (\alpha, \beta)(\gamma, \delta) + (\alpha, \beta)(\epsilon, \eta)$

Since $(0, a)$ and $(a, 0)$ fail to have an inverse, this is not a field.
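The failing pairs can be exhibited directly (a sketch; pair_mul is my name): any coordinatewise product with $(0, 1)$ has first coordinate 0, so it can never equal the unit $(1, 1)$.

```python
# Coordinatewise product on pairs of reals: (0, 1) is non-zero but has
# no multiplicative inverse, since 0 * x == 0 in the first coordinate.
def pair_mul(p, q):
    return (p[0] * q[0], p[1] * q[1])

candidates = [(x / 2, y / 2) for x in range(-8, 9) for y in range(-8, 9)]
print(any(pair_mul((0.0, 1.0), c) == (1.0, 1.0) for c in candidates))  # False

# Pairs with both coordinates non-zero do invert:
print(pair_mul((2.0, 4.0), (0.5, 0.25)))  # (1.0, 1.0)
```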
b. If addition and multiplication are defined by
$(\alpha, \beta) + (\gamma, \delta) = (\alpha + \gamma, \beta + \delta)$
$(\alpha, \beta)(\gamma, \delta) = (\alpha\gamma - \beta\delta, \alpha\delta + \beta\gamma)$,
is F a field?

Yes, F is a field; these are the complex numbers. The properties of addition follow from the previous problem; the additive inverse is $(-\alpha, -\beta)$ and the zero element is $(0, 0)$. The properties of multiplication are as follows.

B1: follows from the fact that $\alpha\gamma - \beta\delta, \alpha\delta + \beta\gamma \in R$, so $(\alpha\gamma - \beta\delta, \alpha\delta + \beta\gamma) \in O_p$, and the formula is symmetric in the two factors.

B2: Given $a = (\alpha, \beta)$, $b = (\gamma, \delta)$ and $c = (\epsilon, \eta)$ we have to prove $a(bc) = (ab)c$.

Proof.
$a(bc) = (\alpha, \beta)\big((\gamma, \delta)(\epsilon, \eta)\big)$
$= (\alpha, \beta)(\gamma\epsilon - \delta\eta, \gamma\eta + \delta\epsilon)$
$= \big(\alpha[\gamma\epsilon - \delta\eta] - \beta[\gamma\eta + \delta\epsilon],\ \alpha[\gamma\eta + \delta\epsilon] + \beta[\gamma\epsilon - \delta\eta]\big)$
$= \big([\alpha\gamma - \beta\delta]\epsilon - [\alpha\delta + \beta\gamma]\eta,\ [\alpha\gamma - \beta\delta]\eta + [\alpha\delta + \beta\gamma]\epsilon\big)$ using associativity and commutativity of the reals
$= (\alpha\gamma - \beta\delta, \alpha\delta + \beta\gamma)(\epsilon, \eta)$
$= \big((\alpha, \beta)(\gamma, \delta)\big)(\epsilon, \eta)$

B3: There exists a unique scalar $(1, 0) \in O_p$ such that $(\alpha, \beta)(1, 0) = (\alpha \cdot 1 - \beta \cdot 0, \alpha \cdot 0 + \beta \cdot 1) = (\alpha, \beta)$.

B4: For $(\alpha, \beta) \neq (0, 0)$ there exists a unique scalar $\big(\frac{\alpha}{\alpha^2 + \beta^2}, \frac{-\beta}{\alpha^2 + \beta^2}\big)$ such that $(\alpha, \beta)\big(\frac{\alpha}{\alpha^2 + \beta^2}, \frac{-\beta}{\alpha^2 + \beta^2}\big) = \big(\frac{\alpha^2 + \beta^2}{\alpha^2 + \beta^2}, \frac{-\alpha\beta + \alpha\beta}{\alpha^2 + \beta^2}\big) = (1, 0)$.

C1: The distributive property follows similarly by using a ton of algebra.

c. What happens (in both preceding cases) if we consider ordered pairs of complex numbers instead?

a. In the first case the answer is unchanged: it is still not a field. The additive properties follow from the field properties of C, and since $\frac{1}{x + iy} \in C$ for non-zero $x + iy$, pairs with both coordinates non-zero are invertible; but pairs of the form $(0, a)$ and $(a, 0)$ still have no multiplicative inverse.

b. In the second case, however, the answer does change: with complex entries the same formulas no longer give a field. The inverse in B4 requires $\alpha^2 + \beta^2 \neq 0$, which can fail for a non-zero pair of complex numbers. For example, for $(1, i)$ we have $\alpha^2 + \beta^2 = 1 + i^2 = 0$, and indeed $(1, i)(1, -i) = (1 \cdot 1 - i \cdot (-i), 1 \cdot (-i) + i \cdot 1) = (0, 0)$: the pair $(1, i)$ is a zero divisor and therefore has no inverse.
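Both observations in (b) and (c) can be checked mechanically (pair_mul is my name): over the reals the product rule is exactly complex multiplication, while over complex entries it produces zero divisors.

```python
# (a, b)(c, d) = (ac - bd, ad + bc): complex multiplication on pairs.
def pair_mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

# Over R it matches Python's built-in complex arithmetic:
print(pair_mul((3.0, 4.0), (1.0, 2.0)))       # (-5.0, 10.0), i.e. (3+4j)*(1+2j)

# Over C, the pairs (1, i) and (1, -i) are non-zero yet multiply to zero:
print(pair_mul((1 + 0j, 1j), (1 + 0j, -1j)))  # (0j, 0j)
```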
Chapter 2
Vector Spaces
2.1 Definition of Vector Space
Here we assume that we are working with a given particular field F; the scalars to be used are elements of F.

Definition 1 (Vector Space). A vector space is a set V of elements called vectors satisfying the following axioms.

(A) To every pair $x, y \in V$ there corresponds a vector $x + y \in V$, called the sum of x and y, in such a way that:
(1) addition is commutative, i.e. $x + y = y + x$,
(2) addition is associative, i.e. $x + (y + z) = (x + y) + z$,
(3) there exists in V a unique vector 0 (called the origin) such that $x + 0 = x$ for every vector $x \in V$, and
(4) for every $x \in V$ there exists a vector $-x$ such that $x + (-x) = 0$, and the vector $-x$ is unique.

(B) To every pair $\alpha \in F$ and $v \in V$ there corresponds a vector $\alpha v \in V$, called the product of $\alpha$ and v, such that:
(1) multiplication by scalars is associative, i.e. $\alpha(\beta v) = (\alpha\beta)v$,
(2) $1x = x$ for every vector $x \in V$, where $1 \in F$.

(C) (1) Multiplication by scalars is distributive with respect to vector addition: $\alpha(x + y) = \alpha x + \alpha y$.
(2) Multiplication by vectors is distributive with respect to scalar addition: $(\alpha + \beta)x = \alpha x + \beta x$. [1]

The axioms above are not necessarily logically independent. The name given to the vector space depends on the elements of the field: if F is R, C, or Q, the space is called a real, complex, or rational vector space respectively.
2.2 Exercises
1. Prove that if x and y are vectors and if $\alpha$ is a scalar, then the following relations hold. (a) $0 + x = x$. (b) $-0 = 0$. (c) $\alpha \cdot 0 = 0$. (d) $0 \cdot x = 0$. (Observe that the same symbol 0 is used on both sides of this equation; on the left it denotes a scalar, on the right it denotes a vector.) (e) If $\alpha x = 0$, then either $\alpha = 0$ or $x = 0$ (or both). (f) $-x = (-1)x$. (g) $y + (x - y) = x$ (here $x - y = x + (-y)$).

Proof. (a) $0 + x = x$ follows from commutativity of addition and A(3).

(b) $-0 = 0$: using A4, $0 + (-0) = 0$, and using A3, $0 + (-0) = -0$; hence $-0 = 0$. [2]

(c) $\alpha \cdot 0 = 0$: using A3 we have $\alpha(x + 0) = \alpha x$; using C1 we have $\alpha(x + 0) = \alpha x + \alpha \cdot 0 = \alpha x$; adding $-(\alpha x)$ to both sides, it follows that $\alpha \cdot 0 = 0$.

(d) $0 \cdot x = 0$: we have $\alpha + 0 = \alpha \Rightarrow (\alpha + 0) \cdot x = \alpha x$; using C2, we have $\alpha x + 0 \cdot x = \alpha x$; adding $-(\alpha x)$, we have $0 \cdot x = 0$.

(e) If $\alpha x = 0$ then either $\alpha = 0$ or $x = 0$: this is pretty straightforward using (c) and (d). If $\alpha \neq 0$, then $x = 1x = (\alpha^{-1}\alpha)x = \alpha^{-1}(\alpha x) = \alpha^{-1} \cdot 0 = 0$ by (c).
[1] Are the distributive properties derivable from the other axioms? It does not look like it; can we prove it?
[2] All bold-face symbols denote vectors.
(f) $-x = (-1)x$: this essentially follows from the field axioms together with C2: $x + (-1)x = (1 + (-1))x = 0 \cdot x = 0$, and since the additive inverse is unique, $-x = (-1)x$.

(g) $y + (x - y) = x$: this follows from commutativity and associativity of addition: $y + (x - y) = (y - y) + x = 0 + x = x$.
2. If p is a prime, then $Z_p^n$ is a vector space over $Z_p$ (cf. §1, Ex. 3); how many vectors are there in this vector space?

Soln: $p^n$.
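The count $p^n$ can be confirmed by enumerating the tuples directly (a sketch; count_vectors is my name):

```python
# Each of the n coordinates of a vector in Z_p^n independently ranges
# over the p residues, giving p**n tuples in total.
from itertools import product

def count_vectors(p, n):
    return sum(1 for _ in product(range(p), repeat=n))

print(count_vectors(3, 2))  # 9 == 3**2
print(count_vectors(5, 3))  # 125 == 5**3
```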
3. Let V be the set of all (ordered) pairs of real numbers. If $x = (\xi_1, \xi_2)$ and $y = (\eta_1, \eta_2)$ are elements of V, write $x + y = (\xi_1 + \eta_1, \xi_2 + \eta_2)$, $\alpha x = (\alpha\xi_1, 0)$, $0 = (0, 0)$, $-x = (-\xi_1, -\xi_2)$. Is V a vector space with respect to these definitions of the linear operations? Why?

Proof. No, since $1x \neq x$: $1x = (\xi_1, 0) \neq x$ whenever $\xi_2 \neq 0$, so axiom B(2) fails.
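The failing axiom is easy to exhibit (scale is my name for the exercise's scalar multiplication):

```python
# The rule alpha*(x1, x2) = (alpha*x1, 0) breaks axiom B(2):
# multiplying by the scalar 1 must leave every vector fixed.
def scale(alpha, x):
    return (alpha * x[0], 0.0)

x = (2.0, 3.0)
print(scale(1.0, x))       # (2.0, 0.0), which is not x
print(scale(1.0, x) == x)  # False
```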
4. Sometimes a subset of a vector space is itself a vector space (with respect to the linear operations already given). Consider, for example, the vector space $C^3$ and the subsets V of $C^3$ consisting of those vectors $(\xi_1, \xi_2, \xi_3)$ for which (a) $\xi_1 \in R$, (b) $\xi_1 = 0$, (c) either $\xi_1 = 0$ or $\xi_2 = 0$, (d) $\xi_1 + \xi_2 = 0$, (e) $\xi_1 + \xi_2 = 1$. In which of these cases is V a vector space?

Proof. a. The additive structure survives: $x + y \in V$ for all $x, y \in V$, since $\xi_1 + \eta_1 \in R$; A1-A2 follow from the field properties of the reals and complex numbers; $0 = (0, 0, 0) \in V$ since $0 \in R$ and $x + 0 = x$; and for $x = (\xi_1, \xi_2, \xi_3)$ the vector $-x = (-\xi_1, -\xi_2, -\xi_3) \in V$ satisfies $x + (-x) = 0$, since $-\xi_1 \in R$ and $-\xi_2, -\xi_3 \in C$. However, scalar multiplication fails: for a complex scalar $\alpha$ we have $\alpha x = (\alpha\xi_1, \alpha\xi_2, \alpha\xi_3)$, and $\alpha\xi_1$ need not be real (take $\alpha = i$ and $\xi_1 = 1$). So (a) is not a vector space over C; it would be one if the scalars were restricted to R.

b. The set $V = \{(0, x_2, x_3) : x_2, x_3 \in C\}$ is a vector space, since:

(i) for all $x, y \in V$, $x + y \in V$: $x + y = (0, x_2 + y_2, x_3 + y_3) \in C^3$ and its first coordinate is 0;

A1, A2: follow from the vector space properties of $C^3$ and the field properties of C;

A3: $0 = (0, 0, 0) \in V$ since its first coordinate is 0 and $(0, 0, 0) \in C^3$;

A4: for every $x = (0, x_2, x_3) \in V$, $-x = (0, -x_2, -x_3) \in V$ and $x + (-x) = 0$;

B1-C2: follow from the properties of $C^3$ and the fact that $x \in V \Rightarrow x \in C^3$ (scalar multiples keep the first coordinate 0).

c. The set where either $\xi_1 = 0$ or $\xi_2 = 0$ is not a vector space: it is not closed under addition, e.g. $(1, 0, 0) + (0, 1, 0) = (1, 1, 0) \notin V$. (Read as "$\xi_1$ and $\xi_2$ cannot both be zero", it fails already because $0 \notin V$.)

d. $V = \{x \in C^3 : x_1 + x_2 = 0\}$ is a vector space, since $(0, 0, 0) \in V$, V is closed under the linear operations ($(\xi_1 + \eta_1) + (\xi_2 + \eta_2) = 0$ and $\alpha\xi_1 + \alpha\xi_2 = \alpha(\xi_1 + \xi_2) = 0$), and the remaining properties follow from the fact that $x \in V \Rightarrow x \in C^3$.

e. $V = \{x \in C^3 : x_1 + x_2 = 1\}$ is not a vector space, as the vector $0 = (0, 0, 0) \notin V$.
2.3 Linear Dependence
Definition 2 (Vector Sum Notation). Vector sums are denoted by $\sum_{i=1}^{n} x_i$ over the set $\{x_i\}$ of vectors, i.e. each $x_i$ is a vector.

Definition 3 (Sum of Zero Vectors). In the case that the set $\{x_i\} = \emptyset$, the sum $\sum_i x_i = 0$.

Definition 4 (Linear Dependence). A finite set $\{x_i\}$ of vectors is linearly dependent if there exists a corresponding set $\{\alpha_i\}$ of scalars, not all 0, such that $\sum_i \alpha_i x_i = 0$. If, on the other hand, $\sum_i \alpha_i x_i = 0$ implies $\alpha_i = 0$ for each i, the set $\{x_i\}$ is linearly independent.
2.3.1 Some Observations and Commentary

1. Definition (4) is intended to cover the case of the empty set. If there are no indices i, it is not possible to pick out some of them and assign to the selected vectors a non-zero scalar so as to make the sum vanish. Rephrase the definition "$\sum_i \alpha_i x_i = 0 \Rightarrow \alpha_i = 0$ for all i" as "$\sum_i \alpha_i x_i \neq 0$ whenever there exists an i such that $\alpha_i \neq 0$". The conclusion is therefore that the empty set of vectors is linearly independent.

2. Linear dependence and independence are properties of sets of vectors. However, they are often used as adjectives applied to the vectors themselves, e.g. "a set of linearly independent vectors" instead of "a linearly independent set of vectors".

3. We say that an infinite set of vectors X is linearly independent if every finite subset of X is linearly independent.

4. If $x, y \in C^1$ then the set $\{x, y\}$ is a linearly dependent set of vectors, since $y \cdot x + (-x) \cdot y = 0$; as $x, y \in C$, the products (scalar times vector) are defined. Therefore any set containing two or more elements of $C^1$ is a linearly dependent set.

5. The set of polynomials P is interesting. The finite set of vectors $\{1 - t,\ t(1 - t),\ 1 - t^2\}$ is linearly dependent: $(1 - t) + t(1 - t) - (1 - t^2) = 0$. However, the infinite set $\{1, t, t^2, t^3, \ldots\}$ is a linearly independent set of vectors.

Proof. $\alpha_0 + \alpha_1 t + \alpha_2 t^2 + \alpha_3 t^3 + \cdots + \alpha_n t^n = 0$ (identically in t) $\Rightarrow \alpha_i = 0$ for every i, since a non-zero polynomial of degree at most n has at most n roots and therefore cannot vanish for every t. In particular, no $t^i$ can be expressed as a linear combination of the others.
6. $R^n$ and $C^n$ are the prototypical spaces, and in them linear independence has a geometrical interpretation that can be visualized in $R^1$ through $R^3$, depending on the interpretation of the vectors as either (a) points in space, or (b) directed lines from the origin to the point. If the vectors are points in space, then linearly dependent vectors are collinear/coplanar with the origin. If the interpretation is directed lines from the origin, then the linearly dependent vectors are collinear/coplanar. The reason is fairly obvious, since there exists a non-trivial linear combination of the vectors that adds to 0.
7. Linear manifolds are vector subspaces of a vector space.
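Linear dependence of a concrete finite set can be decided by computing a rank, since m vectors are dependent exactly when the rank of the matrix they form is less than m. A small exact sketch over the rationals (the helpers rank and dependent are my names):

```python
# Exact Gaussian elimination over Q; rank(rows) is the number of pivots.
from fractions import Fraction

def rank(rows):
    rows = [[Fraction(v) for v in r] for r in rows]
    rk = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((r for r in range(rk, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for r in range(len(rows)):
            if r != rk and rows[r][col]:
                f = rows[r][col] / rows[rk][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rk])]
        rk += 1
    return rk

def dependent(vectors):
    return rank(vectors) < len(vectors)

print(dependent([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))  # True
print(dependent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # False
```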
2.4 Linear Combinations
A linear combination is a linear sum of vectors $\{x_i\}$, expressed as $x = \sum_i \alpha_i x_i$. If $x = \sum_i \alpha_i x_i$, then x is said to be linearly dependent on $\{x_i\}$.
Theorem 5 (Linear Dependence 1). If $\{x_i\}$ is linearly independent, then a necessary and sufficient condition that x be a linear combination of $\{x_i\}$ is that the enlarged set, obtained by adjoining x to $\{x_i\}$, be linearly dependent.

Proof. If $x = \sum_i \alpha_i x_i$, then $\{x_i\} \cup \{x\}$ is a linearly dependent set, since $x - \sum_i \alpha_i x_i = 0$ is a non-trivial vanishing combination. On the other hand, if $\{x_i\} \cup \{x\}$ is linearly dependent, then there exist scalars $\alpha_x, \alpha_i$, not all 0, such that $\alpha_x x + \sum_i \alpha_i x_i = 0$; here $\alpha_x \neq 0$, for otherwise the $x_i$ alone would be dependent. Hence $x = -\frac{1}{\alpha_x} \sum_i \alpha_i x_i$.

With the definition of an empty sum, the origin is a linear combination of the empty set of vectors; it is, moreover, the only vector with this property.
Theorem 6 (Linear Dependence 2). The set of non-zero vectors $\{x_i : i = 1, 2, \ldots, n\}$ is linearly dependent iff some $x_k$, $2 \leq k \leq n$, is a linear combination of the preceding ones, i.e. $x_k = \sum_{i=1}^{k-1} \alpha_i x_i$.

Proof. If $x_k = \sum_{i=1}^{k-1} \alpha_i x_i$ for some $2 \leq k \leq n$, then by the definition of linear dependence the vectors are dependent. Conversely, if the vectors are dependent, we know that there exist $\alpha_i$, not all 0, such that $\sum_{i=1}^{n} \alpha_i x_i = 0$. We can construct the nested sets $\{x_1\} \subset \{x_1, x_2\} \subset \cdots \subset \{x_1, \ldots, x_n\}$, augmenting by one vector at a time. Since the full set is dependent, there is a least $k \leq n$ such that $\{x_1, \ldots, x_k\}$ is linearly dependent (and $k \geq 2$, since $x_1 \neq 0$). This gives $\sum_{i=1}^{k} \beta_i x_i = 0$ with $\beta_k \neq 0$ by the minimality of k, which implies $x_k = -\frac{1}{\beta_k} \sum_{i=1}^{k-1} \beta_i x_i$.
2.5 Bases
Definition 7 (Basis Definition). A (linear) basis (or a coordinate system) in a vector space V is a set A of linearly independent vectors such that every vector in V is a linear combination of elements of A. A vector space V is finite-dimensional if it has a finite basis.
2.5.1 Examples, Observations and Comments

1. The set $B_P = \{t^n : n = 0, 1, 2, \ldots\}$ forms a basis of the set of polynomials P. As seen, $B_P$ forms an infinite basis for the set P, for if there were a finite basis there would be a largest power $t^n$ occurring in it, and then $t^{n+1}$ would not be a linear combination of $\{t^i : i = 0, 1, \ldots, n\}$.

2. An example of a basis in $C^n$ is the set of vectors $x_i$, $i \in \{1, 2, \ldots, n\}$, defined by the condition that the j-th coordinate of $x_i$ is $\delta_{ij}$. This is true since:

(a) The vectors $x_i$ are linearly independent: if $\sum_i \alpha_i x_i = 0$ then $\alpha_1 (1, 0, 0, \ldots, 0) + \alpha_2 (0, 1, 0, \ldots, 0) + \cdots + \alpha_n (0, 0, 0, \ldots, 1) = (\alpha_1, \alpha_2, \ldots, \alpha_n) = 0$, so every $\alpha_i = 0$.

(b) Any vector $x = (\xi_1, \xi_2, \xi_3, \ldots, \xi_n)$ can be written as the sum of $(\xi_1, 0, 0, \ldots, 0), (0, \xi_2, 0, \ldots, 0), \ldots, (0, 0, \ldots, \xi_n)$, i.e. $x = \sum_i \xi_i x_i$.

(c) In general, for a vector space $F^n$ it can be seen that a basis is again of the form $x_i^{(j)} = \delta_{ij}$, where $x_i^{(j)}$ is the j-th component of $x_i$.
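A tiny sketch of example 2 for n = 4 (the helpers e and expand are mine): the j-th coordinate of $e_i$ is $\delta_{ij}$, and the expansion $\sum_i \xi_i e_i$ recovers the vector.

```python
# Standard basis vectors of F^n and the coordinate expansion of x.
def e(i, n):
    return tuple(1 if j == i else 0 for j in range(n))

def expand(x):
    n = len(x)
    # sum_i x[i] * e_i, computed coordinate by coordinate
    return tuple(sum(x[i] * e(i, n)[j] for i in range(n)) for j in range(n))

x = (7, -2, 0, 5)
print(e(2, 4))     # (0, 0, 1, 0)
print(expand(x))   # (7, -2, 0, 5): the expansion recovers x
```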
2.5.2 Uniqueness
Theorem 8 (Linear Combination Uniqueness using Basis). In a general finite-dimensional vector space V with basis $B_V = \{x_1, x_2, \ldots, x_n\}$, every vector $x \in V$ can be uniquely represented as a linear combination of the basis vectors $B_V$.

Proof. If there are two representations of a vector $x \in V$, $\sum_{i=1}^{n} \alpha_i x_i$ and $\sum_{i=1}^{n} \beta_i x_i$, we have by subtracting one from the other: $\sum_{i=1}^{n} \alpha_i x_i - \sum_{i=1}^{n} \beta_i x_i = x - x = 0$, i.e. $\sum_{i=1}^{n} (\alpha_i - \beta_i) x_i = 0$. Since the $x_i$ are linearly independent, we have $\alpha_i - \beta_i = 0$ for each i, i.e. $\alpha_i = \beta_i$.
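Uniqueness can be seen concretely in $R^2$: for a basis, the linear system for the coefficients has exactly one solution. A sketch using Cramer's rule (coords is my name; for a basis the determinant is non-zero):

```python
# Solve a*b1 + b*b2 = x for the unique coordinate pair (a, b).
from fractions import Fraction as F

def coords(x, b1, b2):
    det = b1[0] * b2[1] - b2[0] * b1[1]   # non-zero since {b1, b2} is a basis
    a = F(x[0] * b2[1] - b2[0] * x[1], det)
    b = F(b1[0] * x[1] - x[0] * b1[1], det)
    return a, b

b1, b2 = (1, 1), (1, -1)
print(coords((3, 1), b1, b2))  # (Fraction(2, 1), Fraction(1, 1)): 2*b1 + 1*b2
```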
Theorem 9 (Extension of a Linearly Independent Set to a Basis). If V is a finite-dimensional vector space and if $\{y_1, \ldots, y_m\}$ is any set of linearly independent vectors in V, then, unless the y's already form a basis, we can find vectors $y_{m+1}, \ldots, y_{m+p}$ so that the totality of the y's, i.e. $\{y_1, \ldots, y_m, y_{m+1}, \ldots, y_{m+p}\}$, forms a basis for the vector space V. In other words, every linearly independent set of vectors of a finite-dimensional vector space can be extended to a basis.

Proof. The proof starts off with the assumption that we already know an existing basis $\{x_1, x_2, \ldots, x_n\}$; this follows from the fact that every vector space has a basis. [3] If we have a set of linearly independent vectors, then we can augment it with a vector from the existing basis: if the new augmented set is linearly independent we keep that vector and move on to the next vector from the basis, and if it is not we discard the vector and continue with the remainder of the vectors in the basis. Once we have exhausted all the vectors in the basis we are left with a new basis. The proof that the new set spans the vector space is fairly straightforward: every vector from the old basis that was not a linear combination of the current set was included in the new basis, and every vector from the old basis that was not included is a linear combination of the vectors in the new basis. Essentially we can go from our new basis to the old basis and from there span the entire vector space.
2.6 Exercises
1a. Prove that the four vectors x = (1, 0, 0), y = (0, 1, 0), z = (0, 0, 1), u = (1, 1, 1) in C³ form a linearly dependent set, but any three of them are linearly independent. To prove that a set of vectors is linearly dependent, solve the linear system of equations resulting from the equation αx + βy + γz = 0.
Proof. (a) (1, 1, 1) = (1, 0, 0) + (0, 1, 0) + (0, 0, 1), so the four vectors are linearly dependent.
(b) The equations resulting from α(1, 0, 0) + β(0, 1, 0) + γ(0, 0, 1) = 0 give (α, β, γ) = 0 ⟹ α = 0, β = 0, γ = 0.
³ The proof follows from Zorn's Lemma, which states that for any set M ≠ ∅, if every chain C ⊆ M has an upper bound then M has a maximal element (not necessarily unique). Our proof starts with the set M = {all subsets of V that are linearly independent}. M ≠ ∅ since V has at least one vector. The set M can be partially ordered by inclusion: for A, B ∈ M, A ≤ B if A ⊆ B (note there are sets A, B ∈ M that are disjoint). Every chain in M has an upper bound, namely the union of the sets in the chain, which is again linearly independent and hence in M. Using Zorn's lemma, M has a maximal element. Our claim is that this maximal element B_V is a basis of the vector space V. If B_V did not span V then there would exist a vector x ∈ V that is linearly independent of B_V with x ∉ B_V, and B_V ∪ {x} would contradict B_V being a maximal element. This proof is a bit wicked. The question is: why would a basis ever span the entire space V? The basis is actually defined as a set that spans the entire space, so the question is really not whether it spans — it is simply not a basis if it does not span the vector space. The proof that there is a basis for every vector space is purely based on well ordering and Zorn's lemma. At first sight it seems as though the notion of linear combination is never used; however, the contradiction to Zorn's lemma is actually obtained by using the fact that every vector v ∈ V is either a linear combination of the vectors in the set B_V or v ∈ B_V.
(c) The equations resulting from α(1, 0, 0) + β(0, 1, 0) + γ(1, 1, 1) = 0 give (α + γ, β + γ, γ) = 0 ⟹ γ = 0, α = 0, β = 0. The rest are proved similarly.
1b. If the vectors x, y, z, u in P are defined by x(t) = 1, y(t) = t, z(t) = t², u(t) = 1 + t + t², prove that x, y, z, u are linearly dependent, but any three of them are linearly independent.
Proof. 1b.1 u = x + y + z, so the vectors x, y, z, u are linearly dependent.
1b.2 αx + βy + γz = 0 ⟹ α + βt + γt² = 0 for all t ⟹ α, β, γ = 0.
Similarly one can prove the independence of the other combinations of vectors.
2. Prove that if R is considered as a rational vector space (see § 3, (8)), then a necessary and sufficient condition that the vectors 1 and ξ in R be linearly independent is that the real number ξ be irrational.
Proof. This is an interesting problem. It sheds some light on the impact on the basis of the choice of the underlying field from which the scalars come. R with scalars coming from R has just any nonzero real number as its basis; usually 1 would do, since its scalar multiples span the entire vector space. If the field we choose from changes, then the basis vectors change. In the case of the vector space (R, Q) our basis is (1, ξ) where ξ is irrational.⁴
Given α, β ∈ Q, α + βξ = 0 always has the rational solution ξ = −α/β, so for independence ξ has to be irrational. On the flip side, if ξ is irrational then α + βξ = 0 has no solutions α, β ∈ Q other than α = 0, β = 0.
3. Is it true that if x, y and z are linearly independent vectors, then so also are x + y, y + z, and z + x?
Proof. If x, y and z are LI then there are no α, β, γ, not all 0, with αx + βy + γz = 0. From α(x + y) + β(y + z) + γ(z + x) = 0 we have (α + γ)x + (α + β)y + (β + γ)z = 0 ⟹ α + γ = α + β = β + γ = 0 ⟹ α = β = γ = 0 ⟹ x + y, y + z, z + x are linearly independent.
4a Under what conditions on the scalar α are the vectors (1 + α, 1 − α) and (1 − α, 1 + α) ∈ C² linearly dependent?
β(1 + α, 1 − α) + γ(1 − α, 1 + α) = (β(1 + α) + γ(1 − α), β(1 − α) + γ(1 + α)) = 0 ⟹ β(1 + α) + γ(1 − α) = 0 and β(1 − α) + γ(1 + α) = 0. In matrix form,

[ 1 + α   1 − α ] [ β ]   [ 0 ]
[ 1 − α   1 + α ] [ γ ] = [ 0 ]

The condition for linear dependence is that det = 0, i.e. (1 + α)² − (1 − α)² = 0 ⟹ 4α = 0 ⟹ α = 0.
4b Under what conditions on the scalar α are the vectors (α, 1, 0), (1, α, 1), (0, 1, α) ∈ R³ linearly dependent?
α = 0 ⟹ (0, 1, 0), (1, 0, 1), (0, 1, 0) are linearly dependent. However we are looking for a more generic non-zero solution to the system

[ α  1  0 ] [ β ]   [ 0 ]
[ 1  α  1 ] [ γ ] = [ 0 ]
[ 0  1  α ] [ δ ]   [ 0 ]

det = α(α² − 1) − α = α(α² − 2) = 0 ⟹ α = 0, α = ±√2.
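The roots α = 0, ±√2 of α(α² − 2) can be verified numerically against the 3×3 determinant itself. An illustrative sketch; `det3` and `system_det` are hypothetical helper names.

```python
def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def system_det(alpha):
    # determinant of the coefficient matrix from exercise 4b
    return det3([[alpha, 1, 0], [1, alpha, 1], [0, 1, alpha]])

# the claimed dependence points: alpha = 0 and alpha = +/- sqrt(2)
roots = [0.0, 2 ** 0.5, -(2 ** 0.5)]
assert all(abs(system_det(a)) < 1e-12 for a in roots)
assert system_det(1) != 0  # e.g. alpha = 1 keeps the vectors independent
```

At every other real α the determinant is nonzero, so the vectors are independent; over Q³ only α = 0 remains, as 4c notes.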
4c What is the answer to (b) for Q³ (in place of R³)?
α = 0, since ±√2 ∉ Q.
⁴ This is not true; your answer is wrong. (1, ξ) is not a basis of the rational vector space R.
5a The vectors (ξ_1, ξ_2) and (η_1, η_2) in C² are linearly dependent iff ξ_1 η_2 = ξ_2 η_1.
Proof. The vectors are linearly dependent if there exist α, β, not both 0, such that

α ξ_1 + β η_1 = 0
α ξ_2 + β η_2 = 0

Solving for (α, β): α(ξ_1 η_2 − ξ_2 η_1) = 0 with α ≠ 0 ⟹ ξ_1 η_2 − ξ_2 η_1 = 0 ⟹ ξ_2 η_1 = ξ_1 η_2. Using the determinant methodology,

[ ξ_1  η_1 ] [ α ]   [ 0 ]
[ ξ_2  η_2 ] [ β ] = [ 0 ]

has a nonzero solution iff det = ξ_1 η_2 − ξ_2 η_1 = 0 ⟹ ξ_2 η_1 = ξ_1 η_2.
On the other side, if ξ_2 η_1 = ξ_1 η_2, then for the vectors (ξ_1, ξ_2) and (η_1, η_2) choose α = η_2/ξ_2, β = −1; clearly α(ξ_1, ξ_2) + β(η_1, η_2) = 0.
5b Find a similar necessary and sufficient condition for the linear dependence of two vectors in C³. Do the same for three vectors in C³.
Proof. If two vectors are linearly dependent in C³ then the following equations should hold:

[ ξ_1  η_1 ]           [ 0 ]
[ ξ_2  η_2 ] [ α ]  =  [ 0 ]
[ ξ_3  η_3 ] [ β ]     [ 0 ]

Solving pairs of equations we end up with ξ_1 η_2 = ξ_2 η_1 and ξ_2 η_3 = ξ_3 η_2 (and hence ξ_1 η_3 = ξ_3 η_1). Now if α = η_2/ξ_2, β = −1, we have α(ξ_1, ξ_2, ξ_3) = (η_1, η_2, η_3), so the other side of the iff is easily shown by this choice of α, β, given the condition holds. For 3 vectors in C³ we have the following:

[ ξ_1  η_1  ζ_1 ] [ α ]   [ 0 ]
[ ξ_2  η_2  ζ_2 ] [ β ] = [ 0 ]
[ ξ_3  η_3  ζ_3 ] [ γ ]   [ 0 ]

A nonzero solution (α, β, γ) exists iff the determinant vanishes:

ξ_1(η_2 ζ_3 − η_3 ζ_2) − η_1(ξ_2 ζ_3 − ξ_3 ζ_2) + ζ_1(ξ_2 η_3 − ξ_3 η_2) = 0.    (2.6.1)
5c Is there a set of 3 linearly independent vectors in C²? No. Consider (x_1, x_2), (y_1, y_2), (z_1, z_2) ∈ C². If (x_1, x_2), (y_1, y_2) are linearly independent then

[ x_1  y_1 ]
[ x_2  y_2 ]

is invertible. Hence there exist solutions to the equation

[ x_1  y_1 ] [ α ]   [ z_1 ]
[ x_2  y_2 ] [ β ] = [ z_2 ]

i.e. α x_1 + β y_1 = z_1, α x_2 + β y_2 = z_2, with α = (z_1 y_2 − z_2 y_1)/(x_1 y_2 − x_2 y_1), β = (x_1 z_2 − x_2 z_1)/(x_1 y_2 − x_2 y_1). So the third vector is a linear combination of the first two.
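The Cramer's-rule formulas for α and β above can be checked by reconstructing z from x and y. An illustrative sketch; `express` is a hypothetical helper name and assumes x, y are independent.

```python
def express(x, y, z):
    # Solve alpha*x + beta*y = z in C^2 by Cramer's rule;
    # assumes x, y independent, i.e. x1*y2 - x2*y1 != 0
    d = x[0] * y[1] - x[1] * y[0]
    alpha = (z[0] * y[1] - z[1] * y[0]) / d
    beta = (x[0] * z[1] - x[1] * z[0]) / d
    return alpha, beta

x, y, z = (1, 2), (3, 5), (7, 11)
a, b = express(x, y, z)
# reconstructing z confirms it is a combination of x and y
assert a * x[0] + b * y[0] == z[0] and a * x[1] + b * y[1] == z[1]
```

Since any third vector is expressible this way, no three vectors in C² can be independent.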
6a Under what conditions on the scalars α, β are the vectors (1, α) and (1, β) in C² linearly dependent?
If (1, α), (1, β) are linearly dependent then

[ 1  1 ]
[ α  β ]

is not invertible ⟹ β − α = 0 ⟹ α = β from the condition for non-invertibility.
6b Under what conditions on the scalars α, β, γ are the vectors (1, α, α²), (1, β, β²), (1, γ, γ²) in C³ linearly dependent?
From the determinant condition (2.6.1) in C³ we have (β − α)(γ − α)(γ − β) = 0 ⟹ α = β, or β = γ, or γ = α.
6c Guess and prove a generalization of (a) and (b) to Cⁿ. In Cⁿ the condition for linear dependency of the vectors (1, α_j, α_j², …, α_j^{n−1}), j = 1, …, n, is

| 1          1          ⋯  1          |
| α_1        α_2        ⋯  α_n        |
| ⋮          ⋮              ⋮         |
| α_1^{n−1}  α_2^{n−1}  ⋯  α_n^{n−1}  | = 0

Proof. This is the Vandermonde determinant, ∏_{i<j} (α_j − α_i); it vanishes iff α_i = α_j for some pair i ≠ j.
7a Find two bases in C⁴ such that the only vectors common to both are (0, 0, 1, 1) and (1, 1, 0, 0).
Solⁿ: {(0, 0, 1, 1), (1, 1, 0, 0), (1, 0, 0, 0), (0, 0, 0, 1)} and {(0, 1, 0, 0), (1, 1, 0, 0), (0, 0, 1, 0), (0, 0, 1, 1)}.
7b Find two bases in C⁴ that have no vectors in common, so that one of them contains the vectors (1, 0, 0, 0) and (1, 1, 0, 0) and the other one contains the vectors (1, 1, 1, 0) and (1, 1, 1, 1).
Solⁿ: {(1, 0, 0, 0), (1, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)} and {(1, 1, 1, 0), (1, 1, 1, 1), (0, 1, 0, 0), (0, 0, 1, 1)}.
8a Under what conditions on the scalar α do the vectors (1, 1, 1) and (1, α, α²) form a basis of C³?
Solⁿ: Two vectors cannot form a basis of C³. The other approach is that every subset of a basis has to be linearly independent:

[ 1  1  ] [ β ]   [ 0 ]
[ 1  α  ] [ γ ] = [ 0 ]
[ 1  α² ]         [ 0 ]

has only the zero solution iff α ≠ 1.
8b Under what conditions on the scalar α do the vectors (0, 1, α), (α, 0, 1), and (α, 1, 1 + α) form a basis of C³?
Solⁿ:

| 0  α  α     |
| 1  0  1     |  = 0 for every α
| α  1  1 + α |

The vectors will not form a basis for any value of α, since the condition for linear dependency holds for this set of vectors for every value of α:

[ 0 ]   [ α ]   [ α     ]
[ 1 ] + [ 0 ] = [ 1     ]
[ α ]   [ 1 ]   [ 1 + α ]
9 Consider the set of all those vectors in C³ each of whose coordinates is either 0 or 1; how many different bases does this set contain? Solⁿ: The vectors are (0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1).
(a) {(0, 0, 1), (0, 1, 0), (1, 0, 0)}, {(0, 0, 1), (0, 1, 0), (1, 0, 1)}, {(0, 0, 1), (0, 1, 0), (1, 1, 1)}, {(0, 0, 1), (0, 1, 0), (1, 1, 0)}. You can gather the other sets similarly.
10 If A is the set consisting of the six vectors (1, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1), (0, 1, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1) in C⁴, find two different maximal linearly independent subsets of A. (A maximal linearly independent subset of A is a linearly independent subset Y of A that becomes linearly dependent every time that a vector of A that is not already in Y is adjoined to Y.)
Solⁿ: Any set of 4 linearly independent vectors forms a basis in C⁴, and a basis is a maximal element of the linearly independent subsets of A. We have:
(a) (1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 1, 0), (0, 1, 0, 1), since

a(1, 1, 0, 0) + b(1, 0, 1, 0) + c(0, 1, 1, 0) + d(0, 1, 0, 1) = 0 ⟹ (a + b, a + c + d, b + c, d) = 0 ⟹ d = 0, b = −c, a = −c, a = −b,

which gives both a = −c and a = +c; this is possible only if c = 0 ⟹ a = 0, b = 0, c = 0, d = 0 ⟹ the vectors are linearly independent. That it is a maximal set can be proved by adding the vector (0, 0, 1, 1) to the set, i.e. (1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1):
a(1, 1, 0, 0) + b(1, 0, 1, 0) + c(0, 1, 1, 0) + d(0, 1, 0, 1) + e(0, 0, 1, 1) = 0 ⟹ (a + b, a + c + d, b + c + e, d + e) = 0
⟹ e = −d, a = −b, b + c = −e = d, a + c + d = 0 ⟹ c = 0, b = d, e = −d, a = −b.
If b = 1, d = 1, c = 0, e = −1, a = −1 we have −(1, 1, 0, 0) + (1, 0, 1, 0) + (0, 1, 0, 1) − (0, 0, 1, 1) = 0, so the augmented set is linearly dependent.
(b) (1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1), (0, 1, 0, 1). The proof of linear independence and maximality is done as before.
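The dependence relation found by the elimination in (a) can be verified componentwise. An illustrative sketch; the variable names are hypothetical.

```python
vecs = [(1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1)]
coeffs = [-1, 1, 1, -1]  # the nontrivial relation from the elimination above

# sum the scaled vectors coordinate by coordinate
residual = [sum(c * v[k] for c, v in zip(coeffs, vecs)) for k in range(4)]
assert residual == [0, 0, 0, 0]
```

A nontrivial combination summing to zero confirms that adjoining (0, 0, 1, 1) makes the set dependent, i.e. the four-element subset was maximal.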
11 Prove that every vector space has a basis. (The proof of this fact is out of reach for those not acquainted with some transfinite trickery, such as well-ordering or Zorn's lemma.)
Proof. See the footnote under Theorem 9 in the uniqueness section.
2.6.1 The impact of the set of Scalars on the Vector Space
1. The conditions for linear dependence will vary, as seen in some of the problems in the exercises. The choice of Q vs. R had a significant impact on the range of values the elements of the vectors could take in order to be linearly independent.
2. The basis vector sets are different.
2.7 Dimension
Theorem 10 (Cardinality of Basis). The number of elements in any basis of a finite dimensional vector space V is the same as in any other basis.
Proof. The definition, together with the proof that there exists a basis for every vector space (a basis is a maximal element of the set of linearly independent sets, and adding another vector makes the set linearly dependent), already suggests this theorem.
Let us say there are two bases B¹_V, B²_V such that |B¹_V| ≠ |B²_V|⁵. Without loss of generality let |B¹_V| < |B²_V|. We have to show one of the following:
1. if B¹_V is indeed a basis then B²_V is linearly dependent and hence cannot be a basis.
2. if B²_V is a basis then there is a b²_k ∈ B²_V that is not spanned by B¹_V.
Things that we know:
1. Every linearly independent set can be extended to form a basis. The proof was to start with the linearly independent set and keep adding vectors to it from the basis till we have exhausted all the vectors in the basis.
2. The set of non-zero vectors x_1, …, x_n is linearly dependent iff some x_k, 2 ≤ k ≤ n, is a linear combination of the preceding ones.
The proof is interesting. It really starts off with two sets, each having one of the properties of a basis:
1. A = {x_1, x_2, …, x_m} that spans the vector space but is not linearly independent.
2. Y = {y_1, y_2, …, y_n} that is linearly independent but does not span the vector space.
Now we start by dropping vectors from the augmented set S = A ∪ Y and applying Theorem 6 to the set S. The set S spans the entire vector space V. We drop vectors that are linear combinations of the preceding vectors s_i ∈ S. At each iteration the set S_i spans the vector space V. Clearly we will exhaust the vectors from A before we reach Y, since if we did not then the set Y would not be linearly independent. The result implies n ≤ m. The new set S_e has the same properties that A has, with the n y_i's replacing x_i's. Now if B¹_V, B²_V are two bases, each has both properties, so m ≤ n and n ≤ m.
⁵ |A| represents the number of elements in set A.
Definition 11 (Dimension of a Vector Space). The dimension of a finite dimensional vector space V is the number of elements in a basis of V.
Theorem 12 (Linear dependence of n+1 Vectors in n-dimensional Vector Space). Every set of n + 1 vectors in an n-dimensional vector space V is linearly dependent. A set of n vectors in V is a basis if and only if it is linearly independent, or, alternatively, if and only if every vector in V is a linear combination of elements of the set.
Proof. a The proof of the fact that every set of n + 1 vectors in an n-dimensional vector space V is linearly dependent follows from the fact that any set of linearly independent vectors can be extended to form a basis. Since the set of n + 1 vectors has at least 1 linearly independent vector, we can try constructing a basis from the set of n + 1 vectors. However, the size of the basis is n in an n-dimensional vector space, which implies that any set of n + 1 vectors in an n-dimensional vector space is linearly dependent.
b A set of n vectors in V is a basis iff it is linearly independent. If a set of n vectors forms a basis, then by definition they are linearly independent. If a set of n vectors is linearly independent, we can form a basis using this set. Let us consider the augmented set S ∪ Y, where S is the set of linearly independent vectors and Y is a known basis. The augmented set is linearly dependent and spans the vector space V. Now the process of elimination from the augmented set eliminates all the vectors of Y, since the original vectors are assumed to be linearly independent. Since the basis vectors of Y are all linear combinations of the vectors of S, the set S spans the entire space and is a basis.
2.7.1 Isomorphism
Definition 13 (Isomorphic Vector Spaces). Two vector spaces U and V (over the same field) are isomorphic if there is a one-to-one correspondence between the vectors x ∈ U and the vectors y ∈ V, say y = T(x), such that T(ax_1 + bx_2) = aT(x_1) + bT(x_2). The mapping T is called an isomorphism.
Definition 14 (Isomorphism). An isomorphism is a one-to-one relation that preserves all linear relations.
Isomorphic spaces are essentially indistinguishable, since one can transform back and forth between the elements of the isomorphic spaces. Clearly the dimensions of isomorphic spaces are the same, since for each basis vector x in one space there corresponds a vector T(x) in the other. Dimension is an isomorphism invariant. Isomorphism is transitive, i.e. if U and V are isomorphic and V and W are isomorphic, then U and W are isomorphic.
Theorem 15 (Isomorphism between equidimensional Vector spaces). Every n-dimensional vector space V over a field F is isomorphic to Fⁿ.
Proof. The proof is fairly straightforward. Every vector v ∈ V is a linear combination of the basis vectors of V; let us define T(v) = T(Σ_{i=1}^n a_i x_i) = (a_1, …, a_n) = a ∈ Fⁿ, where the x_i are a basis of V. We note that the a_i are unique. T is an isomorphism, since

T(av_1 + bv_2) = T(a Σ_{i=1}^n a_i x_i + b Σ_{i=1}^n b_i x_i) = T(Σ_{i=1}^n (aa_i + bb_i) x_i) = (aa_1 + bb_1, …, aa_n + bb_n) ∈ Fⁿ
= (aa_1, …, aa_n) + (bb_1, …, bb_n) = a(a_1, …, a_n) + b(b_1, …, b_n) = aT(v_1) + bT(v_2).⁶
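The linearity of the coordinate map T can be demonstrated concretely. This is an illustrative sketch, not the text's construction: it assumes the specific basis x_1 = (1, 1), x_2 = (1, −1) of R², for which the coordinates solve in closed form.

```python
def T(v):
    # coordinate map for the illustrative basis x1 = (1, 1), x2 = (1, -1):
    # v = a1*x1 + a2*x2 gives a1 = (v1 + v2)/2, a2 = (v1 - v2)/2
    return ((v[0] + v[1]) / 2, (v[0] - v[1]) / 2)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(a, v):
    return (a * v[0], a * v[1])

v1, v2 = (3, 1), (2, 6)
# linearity: T(2*v1 + 5*v2) equals 2*T(v1) + 5*T(v2)
assert T(add(scale(2, v1), scale(5, v2))) == add(scale(2, T(v1)), scale(5, T(v2)))
```

With a general basis one would solve a linear system for the coordinates; the linearity check is the same.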
2.7.2 Exercises
1a What is the dimension of the set C of all complex numbers considered as a real vector space? (See § 3, (9))
Solⁿ: The dimension of C as a real vector space is 2, with the basis (1 + 0i), (0 + i), since any complex number a + bi can be represented as a(1 + 0i) + b(0 + i) = a + bi. The dimension of C as a complex vector space is 1, with the basis (1 + 0i), since any complex number a + bi = (a + bi)(1 + 0i).
1b Every complex vector space V is intimately associated with a real vector space V′; the space V′ is obtained from V by refusing to multiply vectors of V by anything other than real scalars. If the dimension of the complex vector space V is n, what is the dimension of the real vector space V′?
Solⁿ: It follows from (1a) that dim(V′) = 2 dim(V).
⁶ Read Halmos's commentary on this proof.
Proof. Since dim(V) = n, every v ∈ V can be written as Σ_{i=1}^n c_i x_i with c_i ∈ C, where the x_i are the basis vectors of V. Writing each coefficient as a_j + ib_j with a_j, b_j ∈ R, any vector v ∈ V is of the form Σ_{j=1}^n (a_j + ib_j) x_j. Rewriting,

Σ_{j=1}^n (a_j + ib_j) x_j = Σ_{j=1}^n a_j x_j + Σ_{j=1}^n i b_j x_j = Σ_{j=1}^n a_j y_j + Σ_{j=1}^n b_j z_j

where y_j = x_j and z_j = i x_j. We have just mapped a vector v ∈ V to v′ ∈ V′ with real coefficients; it follows that the basis of V′ is {y_1, …, y_n} ∪ {z_1, …, z_n}, whose size is 2n.
2 Is the set R of all real numbers a finite dimensional vector space over the field Q of all rational numbers? (See § 3, (8). The question is not trivial; it helps to know something about cardinal numbers.)
Solⁿ: The issue is that every irrational number in R is a limit of a sequence of rationals. My argument was that one cannot span all the irrationals as rational scalar multiples of a single irrational. The better argument is that the cardinality of a finite dimensional vector space is the same as that of a finite power of the field over which it is defined.
Theorem 16 (Equinumerosity of a Vector Space and its Field). An n-dimensional vector space V over a field F is equinumerous with Fⁿ.
Proof. Consider an n-dimensional vector space V. It has a basis of cardinality n. Every v ∈ V can be represented as a linear combination of the basis as v = Σ_{i=1}^n a_i x_i; this representation is an isomorphism f : V → Fⁿ, i.e. the linear combination acts as a bijection between Fⁿ and V, so |V| = |Fⁿ|.
The theorem has implications for the basis of R as a rational vector space. If there existed a finite basis it would imply |Qⁿ| = |R|, which is a contradiction since Qⁿ is countable and R is not. The basis must therefore be infinite.
3 How many vectors are there in an n-dimensional vector space over the field Z_p (where p is a prime)?
Solⁿ: pⁿ, since each of the n coordinates relative to a basis takes one of p values.
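The count pⁿ can be confirmed by enumerating the coordinate tuples directly. An illustrative sketch for a small case.

```python
from itertools import product

p, n = 3, 2
# every vector is determined by its n coordinates, each drawn from Z_p
vectors = list(product(range(p), repeat=n))
assert len(vectors) == p ** n  # 3^2 = 9 vectors
```

The enumeration is exactly the coordinate isomorphism of Theorem 15 specialized to the finite field Z_p.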
4 Discuss the following assertion: if two rational vector spaces have the same cardinal number (i.e., if there is some one-to-one
correspondence between them), then they are isomorphic (i.e., there is a linearity-preserving one-to-one correspondence
between them). A knowledge of the basic facts of cardinal arithmetic is needed for an intelligent discussion.
Proof. The same cardinal number implies that the two spaces are equinumerous, i.e. there is a bijection from one space to the other. It remains to show that a bijection can be chosen that preserves linearity, i.e. that it is an isomorphism.
2.8 Subspaces
Quoting from Halmos: "The objects of interest in geometry are not only the points of the space under consideration, but also its lines, planes, etc. We proceed to study the analogues, in general vector spaces, of these higher-dimensional elements."
Definition 17 (Subspace). A non-empty subset M of a vector space V is a subspace or a linear manifold if, along with every pair x and y of vectors contained in M, every linear combination αx + βy is also contained in M.
Note that the definition requires 0 ∈ M, since if x ∈ M, choosing α = 1, β = −1 gives x − x = 0 ∈ M. Every candidate subspace therefore should contain the origin of the vector space. It also follows that a subspace M of a vector space V is itself a vector space.
Proof.
A1-A2 follow from the properties of vectors in V and the definition of subspace.
A3 0 ∈ M is a requirement of a subspace.
A4 x ∈ M ⟹ −x ∈ M, since every αx + β·0 ∈ M; choose α = −1.
B1-B2 These follow from the properties of v ∈ V.
C1-C2 Follow from the properties of v ∈ V as well as the definition of M.
2.8.1 Observations and Examples
1. The set O = {0} is a subspace.
2. The whole space V.
3. Given m > 0, n > 0 and m ≤ n, let M = {(ξ_1, …, ξ_n) ∈ Cⁿ : ξ_1 = ξ_2 = ⋯ = ξ_m = 0}.
4. Given m > 0, n > 0 such that m ≤ n, the space P_n of polynomials, and any m real numbers t_1, …, t_m, let M = {x ∈ P_n : x(t_1) = x(t_2) = ⋯ = x(t_m) = 0}.
5. M = {x ∈ P_n : x(−t) = x(t)} is a subspace.
2.8.2 Terminology for working with Subspaces
1. Given any collection M_p of subsets of a given set, we write ⋂_p M_p for the intersection of all the M_p, i.e., for the set of points common to all M_p.
2. If M, N are subsets of a set, we write M ⊆ N if M is a subset of N, i.e. every element of M is an element of N. We do not exclude the possibility that M = N; thus V ⊆ V as well as O_v ⊆ V⁷.
3. Two subspaces M, N are disjoint if M ∩ N = O_v.
2.8.3 Calculus of Subspaces
Theorem 18 (Subspace Intersection). The intersection of any collection of subspaces is a subspace.
Proof. This is easy to see.
A For x, y ∈ ⋂_p M_p we have αx + βy ∈ ⋂_p M_p, since each of the M_p is a vector space and αx + βy ∈ M_p for all p.
A1-A2 This follows from the properties of vectors in a vector space and x, y ∈ ⋂_p M_p ⟹ x, y ∈ M_p for all p.
A3 For all p, 0 ∈ M_p ⟹ 0 ∈ ⋂_p M_p.
A4 If x ∈ M_p for all p, then −x ∈ M_p for all p, so x, −x ∈ ⋂_p M_p and x − x = 0.
B To every pair (α, x), where α is a scalar and x ∈ ⋂_p M_p, we have αx ∈ ⋂_p M_p, since αx ∈ M_p for all p.
B1-B2 Follow from the properties of vectors in the M_p.
C1-C2 Follow from the properties of vectors in the M_p.
Definition 19 (Span of a subset of vectors). If S is an arbitrary set of vectors from a vector space V, then the intersection of all subspaces of V that contain all the vectors of S is called the span of S, i.e. span(S) = ⋂_p M_p where S ⊆ M_p.
Theorem 20 (Span of Vectors and Subspaces). If S is any set of vectors in a vector space V and if M is the subspace spanned by S, then M is the same as the set of all linear combinations of elements of S.
Proof. From definition (19) the span of S is the intersection of all subspaces that contain S; the span of S is the smallest subspace containing all the elements of S. Given x, y ∈ S, αx + βy ∈ M, since M is a vector space. Consider M_l = {z : z = αx + βy, x, y ∈ S, (α, β) ∈ F}. We would like to show that M_l = M. It is clear from the definition that M_l ⊆ M.
If z ∈ M and z ∉ M_l, then z = αx + βy with (x, y) ∈ V, (x, y) ∉ S. But this is a contradiction: M_l is itself one of the subspaces M_p containing S, so M = ⋂_p M_p ⊆ M_l and hence M_l = M.
Halmos's proof is a bit different. His claim is that since the set of all linear combinations of elements of S is a subspace containing S, it contains M. On the flip side, M contains S and is a subspace, which implies it contains all linear combinations of vectors of S. The issue I have is with the first conclusion: why would the subspace containing all linear combinations of elements of S contain M? (It does because M is the intersection of all subspaces containing S, and this is one of them.)
⁷ O_v is the null vector space or the zero vector space.
Theorem 21 (Sum of Subspaces). If H and K are two subspaces and if M is the subspace spanned by H and K together, then M is the same as the set of all vectors of the form x + y, with x ∈ H and y ∈ K.
Proof. This follows from Theorem 20 and the fact that every vector in a vector space is a linear combination of vectors in the basis.
2.8.4 Complement
1 The notation H + K will be used to represent the subspace M spanned by H and K.
2 We shall say a subspace K of a vector space V is a complement of a subspace H if H ∩ K = O⁸ and H + K = V.
Theorem 22 (Subspace Complement). Every subspace has a complement.
Proof. Every subspace is a vector space. If the subspace is the entire space V then the complement is O, since V ∩ O = O and V + O = V. For any other subspace S, its basis B_S can be extended to a basis B_V of V (Theorem 9). The subspace spanned by the relative complement B_V \ B_S is clearly a complement of the space spanned by B_S; if it were not, the basis vectors would not be linearly independent.
2.9 Dimension of a subspace
Theorem 23 (Dimension of a Subspace). A subspace M of an n-dimensional vector space V is a vector space of dimension ≤ n.
Proof. If the subspace is equal to the vector space V then the dimension of the subspace is n. If not, the subspace S ⊂ V has a basis B_S that spans S and has to have a dimension less than or equal to that of the basis B_V. If not, it would contradict the fact that B_V spans V and that B_V is a maximal linearly independent set, i.e. every set of n + 1 vectors is linearly dependent.
Halmos claims that the above proof does not prove that there exists a basis at all. However, we have already proved that every vector space has a basis and that this basis is a maximal set. His argument is as follows. If M = O then the only vector spanning the space is 0, and M is 0-dimensional. If M contains a non-zero vector x_1 then let M_1 (⊆ M) be the subspace spanned by x_1. If M_1 = M then M is one-dimensional. If M_1 ≠ M, we find a subspace spanned by vectors x_1, x_2. The same logic now applies: if M_2 is the subspace spanned by the two chosen vectors, then if M_2 = M, M is 2-dimensional; if not we continue the process. It ends within n steps since we cannot find n + 1 linearly independent vectors. Halmos likes this second proof better as it does not make the assumption of an existing basis; the proof relies purely on Theorem 6, which does not require the existence of a basis. However, looking into the proof carefully, Theorem 10, on which the claim that n + 1 vectors form a linearly dependent set rests, already assumes that there exists a basis. If you want to talk about the basis of a vector space, one has to show that it exists and is a valid concept; there is no way of getting around the fact that the existence of a basis has already been proved. On second thought, it may be because it is a constructive proof as opposed to an existence proof. The other issue is: how would you verify that the subspace and the vector space are the same at each iteration?
As a consequence of Theorem 23 we have,
Theorem 24 (Subspace Basis). Given any m-dimensional subspace M of an n-dimensional vector space V, we can find a basis x_1, …, x_m, x_{m+1}, …, x_n of V so that x_1, …, x_m are in M and form, therefore, a basis of M.
Proof. This proof is pretty straightforward. From Theorem 23 we have dim(M) ≤ dim(V). Take a basis x_1, …, x_m of M; it is a linearly independent set of vectors in V, and so by Theorem 9 it can be extended to a basis x_1, …, x_m, x_{m+1}, …, x_n of V.
2.9.1 Exercises
1. If M, N are finite-dimensional subspaces with the same dimension, and M ⊆ N, then M = N.
Proof. Given dim(M) = dim(N) = n and M ⊆ N, consider a basis x_1, …, x_n of M. Since x_1, …, x_n ∈ N and are linearly independent, we can extend them to a basis of N (Theorem 9). But a set of n linearly independent vectors in the n-dimensional space N already is a basis (Theorem 12), so N = span{x_1, …, x_n} = M ⟹ M = N⁹.
⁸ Note that O is the zero vector space and not the zero vector 0.
⁹ Happy with this proof, better than the first iteration. One cannot just claim that {y_1, …, y_n} ⊆ M and hence is its basis. Here we have showed that N ⊆ M by trying to extend the basis of M to a basis of N.
2. If M, N are subspaces of a vector space V, and if every vector in V belongs to M or N or both, then either M = V or N = V or both.
Proof. Suppose M ≠ V and N ≠ V. Then there exist x ∉ M and y ∉ N; since every vector of V belongs to M or N, we must have x ∈ N and y ∈ M. Consider x + y: if x + y ∈ M then x = (x + y) − y ∈ M, a contradiction; if x + y ∈ N then y = (x + y) − x ∈ N, a contradiction. So x + y belongs to neither subspace, contradicting the hypothesis. Hence M = V or N = V or both.
3. If x, y, and z are vectors such that x + y + z = 0, then x and y span the same subspace as y and z.
Proof. Let S_xy be the subspace spanned by x, y and S_yz the subspace spanned by y, z. Since x = −y − z, every vector ax + by = a(−y − z) + by = −az + (b − a)y ∈ S_yz, and symmetrically every vector of S_yz lies in S_xy ⟹ S_xy = S_yz (= S_zx).
4. Suppose x, y are vectors and M is a subspace in a vector space V; let H be the subspace spanned by M and x, and let K be the subspace spanned by M and y. Prove that if y ∈ H but y ∉ M then x ∈ K.
Proof. Given
H = S_{M,x} = {z = ax + bw : w ∈ M}
K = S_{M,y} = {z = ay + bw : w ∈ M}
we need to prove: if y ∉ M but y ∈ H then x ∈ K. The proof is pretty straightforward. If y ∉ M but y ∈ H, then y = ax + bw where w ∈ M, and a ≠ 0 (otherwise y = bw ∈ M). Then x = (1/a)y − (b/a)w ⟹ x ∈ K. Essentially the point to be made is that if x, y ∈ V and w ∈ M are linearly dependent, then x and y belong to both H and K.
5. Suppose that L, M, N are subspaces of a vector space.
a Show that the equation L ∩ (M + N) = L ∩ M + L ∩ N is not necessarily true.
Proof. Consider a vector l ∈ L with l ∉ M, l ∉ N, but l = am + bn where m ∈ M, n ∈ N. Clearly l ∈ L ∩ (M + N) and l ∉ L ∩ M + L ∩ N. The question is that of the existence of such a vector l. We can choose subspaces M, N such that M ∩ N = O (easy to construct using a subset of a basis of V and its complement) and a suitable L. For instance, in R² take M = span{(1, 0)}, N = span{(0, 1)}, L = span{(1, 1)}: then L ∩ (M + N) = L while L ∩ M + L ∩ N = O. Essentially, intersection is not distributive over the + operation.
b Prove that L ∩ (M + (L ∩ N)) = (L ∩ M) + (L ∩ N).
Proof. We have
U = M + (L ∩ N) = {x : x = y + z, y ∈ M, z ∈ L ∩ N}
L ∩ M + L ∩ N = {x : x = y + z, y ∈ L ∩ M, z ∈ L ∩ N}
L ∩ U = {x : x ∈ L, x ∈ M + (L ∩ N)}
We will first prove a lemma and use it to prove the above.
Lemma 25 (Basis of Sum and Intersection of Subspaces). If L, M are subspaces of the same vector space V then the basis of the space L + M is B_L ∪ B_M and of the intersection subspace L ∩ M is B_L ∩ B_M, where B_L, B_M are subsets of the basis B_V of the vector space V spanning the respective subspaces.¹⁰
Proof. The proof is pretty straightforward. Let B_V = {v_1, …, v_n}, B_L = {v_i, …, v_l} and B_M = {v_j, …, v_m}. Given x ∈ L + M,
x = y + z; y ∈ L, z ∈ M
x = Σ_i α_i v_i + Σ_j β_j v_j; v_i ∈ B_L, v_j ∈ B_M
⟹ x = Σ_k c_k v_k where v_k ∈ B_L or B_M, i.e. v_k ∈ B_L ∪ B_M;
but the v_k are linearly independent and span the space ⟹ B_L ∪ B_M is the basis of L + M.
¹⁰ The vector spaces need not be finite but countable.
Similarly,
if x ∈ L ∩ M, then x ∈ L and x ∈ M:
x = Σ_i a_i v_i, v_i ∈ B_L;  x = Σ_j b_j v_j, v_j ∈ B_M ⟹ Σ_i a_i v_i − Σ_j b_j v_j = 0.
If B_L, B_M are not disjoint (so L ∩ M ≠ O), split the sum over B_L ∩ B_M and the remainders:

Σ_{v_k ∈ B_L ∩ B_M} (a_k − b_k) v_k + Σ_{v_i ∈ B_L \ B_M} a_i v_i − Σ_{v_j ∈ B_M \ B_L} b_j v_j = 0.

Since the vectors are linearly independent, a_k − b_k = 0, a_i = 0, b_j = 0 ⟹ B_{L∩M} = B_L ∩ B_M.
Now the proof of the problem is pretty straightforward. The basis of L ∩ (M + (L ∩ N)) is given by
B_U = B_M ∪ (B_L ∩ B_N), and B_{L∩U} = B_L ∩ (B_M ∪ (B_L ∩ B_N)) = (B_L ∩ B_M) ∪ (B_L ∩ B_N);
the basis of (L ∩ M) + (L ∩ N) is B_{(L∩M)+(L∩N)} = B_{L∩M} ∪ B_{L∩N} = (B_L ∩ B_M) ∪ (B_L ∩ B_N), i.e. the basis vectors are the same.
A simpler proof: given x ∈ L ∩ (M + (L ∩ N)), then x ∈ L and x = y + z with y ∈ M, z ∈ L ∩ N; but x, z ∈ L ⟹ y = x − z ∈ L ⟹ y ∈ L ∩ M. Conversely, if x ∈ (L ∩ M) + (L ∩ N), then x = y + z with y ∈ L ∩ M, z ∈ L ∩ N; y, z ∈ L ⟹ x ∈ L, and x = y + z with y ∈ M, z ∈ L ∩ N ⟹ x ∈ M + (L ∩ N) ⟹ x ∈ L ∩ (M + (L ∩ N)).
6. A polynomial x is called even if x(−t) = x(t) identically in t, and it is called odd if x(−t) = −x(t).

(a) Both the class M of even polynomials and the class N of odd polynomials are subspaces of the space 𝒫 of all complex polynomials.

Proof. We need to prove that the even and the odd polynomials each form a vector space.
i. Over the field of complex numbers, clearly x(t) + y(t) ∈ M for x, y ∈ M, since x(−t) + y(−t) = x(t) + y(t).
A1-A2 The commutative and associative properties follow from the properties of general polynomials.
A3 There exists a zero polynomial 0 such that x(t) + 0 = x(t).
A4 ∀x(t) ∃ −x(t) such that x(t) + (−x(t)) = 0; note that −x(t) ∈ M since −x(t) = (−1)x(t).
B To every pair α ∈ F and p(t) ∈ M, αp(t) ∈ M, since αp(−t) = αp(t).
B1-B2 The associative property follows from the properties of general polynomials.
C1-C2 These follow from the properties of general polynomials.
The above proof applies equally to show that the odd polynomials are a subspace.

(b) Prove that M and N are each other's complements.

Proof. The first part is easy, i.e. M ∩ N = O. The second part is to show that ∀p ∈ 𝒫, p(t) = o(t) + e(t) with e even and o odd; this is easy to show by considering the bases of even and odd polynomials.
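Part (b) can be sketched numerically: the even part e(t) = (p(t) + p(−t))/2 keeps exactly the even-degree coefficients, and the odd part o(t) = (p(t) − p(−t))/2 keeps the odd-degree ones. The helper below is my own construction, not from the text; it splits a coefficient list accordingly and checks the decomposition at a sample point.

```python
import numpy as np

def even_odd_split(coeffs):
    """Split a polynomial (coefficients, lowest degree first) into its
    even part e and odd part o, so that p = e + o, M ∩ N = {0}."""
    c = np.asarray(coeffs, dtype=float)
    e = c.copy(); e[1::2] = 0.0   # keep even powers only
    o = c.copy(); o[0::2] = 0.0   # keep odd powers only
    return e, o

# p(t) = 1 + 2t + 3t^2 + 4t^3
p = [1.0, 2.0, 3.0, 4.0]
e, o = even_odd_split(p)

t = 0.7
pe = np.polyval(e[::-1], t)   # np.polyval wants highest degree first
po = np.polyval(o[::-1], t)
pt = np.polyval(np.asarray(p)[::-1], t)

assert abs(pe + po - pt) < 1e-12                    # p = e + o
assert abs(np.polyval(e[::-1], -t) - pe) < 1e-12    # e is even
assert abs(np.polyval(o[::-1], -t) + po) < 1e-12    # o is odd
```

The split is unique, which is the content of M and N being complements.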
6a. Can it happen that a non-trivial subspace of a vector space 𝒱 (i.e., a subspace different from both O and 𝒱) has a unique complement?

Soln: Yes, it is possible I suppose; however, the construction does not seem obvious. Examples with non-unique complements are easy to give: in ℝ², the subspace spanned by (0, 1) together with the subspace spanned by (1, 1), or just as well the subspace spanned by (2, 1), spans the entire space ℝ², so both are complements and the complement is non-unique. The question asks for an example of complementary subspaces where the complement is unique. One case where this is true is if the basis B_V admits only one complementary set of subspaces, as in the case of polynomials of degree n.
6b. If M is an m-dimensional subspace of an n-dimensional vector space, then every complement of M has dimension n − m.
Proof. Given an m-dimensional subspace M of a vector space 𝒱, its complement N is such that M ∩ N = O and M + N = {x + y; x ∈ M, y ∈ N} spans the vector space 𝒱. Clearly, given a basis B_V of the vector space 𝒱, one can find a basis B_M = {v_1, …, v_m} ⊆ B_V of M; similarly one can find a subset B_N ⊆ B_V that is a basis of N. Since M ∩ N = O, we have B_M ∩ B_N = ∅. Using the fact that M + N spans the vector space, we can see that dim(M + N) = dim(𝒱) = dim(M) + dim(N), and the result follows. The question is why dim(M + N) = dim(𝒱); isn't that what we are trying to prove?
Approaching this a bit differently: since every vector v = x + y with x ∈ M, y ∈ N, we have v = ∑_i a_i v_i + ∑_j b_j v_j ⇒ dim(M + N) = dim(𝒱); but dim(M + N) = dim(M) + dim(N) since M ∩ N = O. The result follows.
7a. Show that if both M and N are three-dimensional subspaces of a five-dimensional vector space, then M and N are not disjoint.

Proof. This follows from problem 6b, since dim(M) + dim(N) > dim(𝒱); our argument is going to be similar to the argument for problem 6b. The basis for M is B_M = {v_i} ⊆ B_V such that ∀m ∈ M, m = ∑_1^3 a_i v_i; similarly the basis for N is B_N = {v_j} ⊆ B_V such that ∀n ∈ N, n = ∑_1^3 b_j v_j. Now, given that card(B_M) = 3 and card(B_N) = 3, we have card(B_V) < card(B_M) + card(B_N) ⇒ B_M ∩ B_N ≠ ∅ ⇒ M ∩ N ≠ O.
7b. If M and N are finite-dimensional subspaces of a vector space, then dim(M) + dim(N) = dim(M + N) + dim(M ∩ N).

Proof. The proof follows from the lemma 25 we proved earlier: B_{M+N} = B_M ∪ B_N and B_{M∩N} = B_M ∩ B_N. Clearly B_M = ((B_M ∪ B_N) ∖ B_N) ∪ B_{M∩N}, and since ((B_M ∪ B_N) ∖ B_N) ∩ B_{M∩N} = ∅, we have card(B_M) = card(B_M ∪ B_N) − card(B_N) + card(B_{M∩N}) ⇒ card(B_M) + card(B_N) = card(B_M ∪ B_N) + card(B_{M∩N}) ⇒ dim(M) + dim(N) = dim(M + N) + dim(M ∩ N).
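The identity in 7b is easy to check numerically. The sketch below is my own construction (random subspaces of ℝ⁵, built to share one direction): dim(M + N) is a matrix rank, and M ∩ N is recovered from the null space of [A | −B], since A u = B w exactly when (u, w) lies in that null space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two subspaces of R^5, each spanned by 3 columns; they share one
# common direction, so M ∩ N should be 1-dimensional.
common = rng.standard_normal((5, 1))
A = np.hstack([common, rng.standard_normal((5, 2))])   # M = col(A)
B = np.hstack([common, rng.standard_normal((5, 2))])   # N = col(B)

dim_M = np.linalg.matrix_rank(A)
dim_N = np.linalg.matrix_rank(B)
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))     # dim(M + N)

# M ∩ N directly: solve A u = B w, i.e. [A | -B](u, w)^T = 0,
# then push the u-parts through A to get vectors lying in both subspaces.
C = np.hstack([A, -B])
s, Vt = np.linalg.svd(C)[1:]
null_vecs = Vt[np.sum(s > 1e-10):]                     # basis of the null space
inter = A @ null_vecs[:, :3].T
dim_cap = np.linalg.matrix_rank(inter)                 # dim(M ∩ N)

# dim M + dim N = dim(M + N) + dim(M ∩ N):
assert dim_M + dim_N == dim_sum + dim_cap
```

With generic random data the ranks are 3, 3, 5 and 1, matching 3 + 3 = 5 + 1.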
Corollary: Given an l-dimensional vector space 𝒱 and subspaces M, N of dimensions m and n such that m + n > l, then M ∩ N ≠ O.

Proof. The argument is similar to the argument given in problem (7a). Consider a basis B_V = {v_1, …, v_l} of the vector space 𝒱. Since M, N are subspaces of 𝒱, by theorem 24 there exist subsets B_M, B_N ⊆ B_V such that the subsets form bases of the respective subspaces. It follows that card(B_M) + card(B_N) > card(B_V) and B_M, B_N ⊆ B_V ⇒ B_M ∩ B_N ≠ ∅. The conclusion follows.
2.10 Dual Spaces

Definition 26. A linear functional on a vector space 𝒱 is a scalar-valued function y, defined for every vector x, with the property that (identically in the vectors x_1, x_2 and the scalars α_1, α_2), i.e. y : 𝒱_F → F,
y(∑_{i=1}^2 α_i x_i) = ∑_{i=1}^2 α_i y(x_i)
Some examples and observations about linear functionals:
1. For x = (x_1, …, x_n) in ℂⁿ, write y(x) = x_1. More generally, if a_1, …, a_n ∈ F, the function y(x) = ∑_1^n a_i x_i is a linear functional. Clearly y(α_1 x_1 + α_2 x_2) = y(w) = ∑_1^n a_i w_i = ∑_1^n a_i (α_1 x_{1i} + α_2 x_{2i}) = α_1 ∑_1^n a_i x_{1i} + α_2 ∑_1^n a_i x_{2i} = α_1 y(x_1) + α_2 y(x_2).
2. We observe that y(0) = y(0·0) = 0·y(0) = 0. This is the reason why linear functionals are called homogeneous. In particular, in Fⁿ, the function defined by y(x) = b + ∑_1^n a_i x_i is a linear functional iff b = 0.
3. For any polynomial x ∈ 𝒫, y(x) = x(0) is a linear functional. In general, given scalars a_1, …, a_n and real numbers t_1, …, t_n, the function y(x) = ∑_1^n a_i x(t_i) is a linear functional. This is straightforward to see: y(x_1 + x_2) = ∑_1^n a_i (x_1(t_i) + x_2(t_i)) = ∑_1^n a_i x_1(t_i) + ∑_1^n a_i x_2(t_i) = y(x_1) + y(x_2). A limiting case of the above is obtained as follows. Let (a, b) be any finite interval on the real t-axis, and φ be any complex-valued integrable function defined on (a, b); define y by y(x) = ∫_a^b φ(t) x(t) dt. The proof for the integral form is similar.
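Both the sampled and the integral forms are easy to check numerically. The sketch below uses arbitrary illustrative coefficients of my own choosing; it verifies linearity of y(x) = ∑ a_i x(t_i) and of the integral form with φ(t) = t on the interval (0, 1).

```python
import numpy as np

# A "sampling" functional on polynomials: y(x) = sum_i a_i * x(t_i).
a = np.array([2.0, -1.0, 0.5])
t = np.array([0.0, 1.0, 3.0])

def y(poly):
    """poly: coefficients, highest degree first (np.polyval convention)."""
    return float(np.dot(a, np.polyval(poly, t)))

# The integral form y(x) = ∫_0^1 φ(t) x(t) dt with φ(t) = t:
def y_int(poly):
    P = np.polyint(np.polymul([1.0, 0.0], poly))   # antiderivative of t * x(t)
    return float(np.polyval(P, 1.0) - np.polyval(P, 0.0))

x1 = np.array([1.0, 0.0, 2.0])    # t^2 + 2
x2 = np.array([0.0, 3.0, -1.0])   # 3t - 1
alpha, beta = 2.0, -4.0
combo = alpha * x1 + beta * x2

assert abs(y(combo) - (alpha * y(x1) + beta * y(x2))) < 1e-12
assert abs(y_int(combo) - (alpha * y_int(x1) + beta * y_int(x2))) < 1e-12
```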
4. On an arbitrary vector space 𝒱, define y by writing y(x) = 0 ∀x ∈ 𝒱. Clearly y is a linear functional.
Definition 27. The set 𝒱*_F of all linear functionals on a vector space 𝒱_F (which includes the zero functional defined in (4)) forms a vector space. 𝒱* is called the dual of 𝒱.

Proof. Given two linear functionals y_1, y_2 on a vector space 𝒱_F, any linear combination y = α_1 y_1 + α_2 y_2, where α_1, α_2 ∈ F, is also a linear functional. Consider [ax_1 + bx_2, y] = [ax_1 + bx_2, α_1 y_1 + α_2 y_2] = α_1 [ax_1 + bx_2, y_1] + α_2 [ax_1 + bx_2, y_2] = α_1 (a[x_1, y_1] + b[x_2, y_1]) + α_2 (a[x_1, y_2] + b[x_2, y_2]) = a[x_1, α_1 y_1 + α_2 y_2] + b[x_2, α_1 y_1 + α_2 y_2] = a[x_1, y] + b[x_2, y].
A1-A2 Follow from the fact that the linear functionals are scalar-valued.
A3 The zero functional y_0(x) = 0 ∀x ∈ 𝒱 satisfies y(x) + y_0(x) = y(x) ∀y ∈ 𝒱*_F.
A4 For y ∈ 𝒱*_F, the functional x ↦ y(−x) = y((−1)x) = −y(x) belongs to 𝒱*_F, and y(x) + y(−x) = y(x) − y(x) = 0.
B1-B2 These follow directly from the linearity property of linear functionals.
C1-C2 These properties follow from the linearity property of linear functionals.
We will use the following notation to indicate a linear functional: y(x) = [x, y]. With the new conventions we have the following:
1. [α_1 x_1 + α_2 x_2, y] = α_1 [x_1, y] + α_2 [x_2, y]
2. [x, α_1 y_1 + α_2 y_2] = α_1 [x, y_1] + α_2 [x, y_2]
The relations 1 and 2 are together expressed by saying that [x, y] is a bilinear functional of the vectors x ∈ 𝒱 and y ∈ 𝒱*.
2.10.1 Exercises

1. Consider the set ℂ of complex numbers as a real vector space. Suppose that for each x = ξ_1 + iξ_2 ∈ ℂ (where ξ_1 and ξ_2 are real numbers and i = √−1) the function y is defined as below. Which of the y are linear functionals? (Write x_1 = ξ_1 + iξ_2, x_2 = η_1 + iη_2 for two such vectors, with a, b real scalars.)
(a) y(x) = ξ_1 is a linear functional, since [ax_1 + bx_2, y] = [(aξ_1 + bη_1) + i(aξ_2 + bη_2), y] = aξ_1 + bη_1 = a[x_1, y] + b[x_2, y].
(b) y(x) = ξ_2 is a linear functional, since [ax_1 + bx_2, y] = [(aξ_1 + bη_1) + i(aξ_2 + bη_2), y] = aξ_2 + bη_2 = a[x_1, y] + b[x_2, y].
(c) y(x) = ξ_1² is not a linear functional: [ax_1 + bx_2, y] = (aξ_1 + bη_1)² ≠ a[x_1, y] + b[x_2, y] = aξ_1² + bη_1².
(d) y(x) = ξ_1 − iξ_2 is not a linear functional, since [ax_1 + bx_2, y] = (aξ_1 + bη_1) − i(aξ_2 + bη_2) ∉ ℝ, i.e. the values are not scalars of the real vector space.
(e) y(x) = √(ξ_1² + ξ_2²) is not a linear functional, since [ax_1 + bx_2, y] = √((aξ_1 + bη_1)² + (aξ_2 + bη_2)²) ≠ a[x_1, y] + b[x_2, y] = a√(ξ_1² + ξ_2²) + b√(η_1² + η_2²).
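These case analyses can be replayed numerically. The helper below is my own test harness: it samples random vectors and real scalars and checks the defining identity. Note that candidate (d), conjugation, passes the additivity/homogeneity test; it fails only because its values are not real scalars, which is exactly the objection above.

```python
import numpy as np

rng = np.random.default_rng(1)

def is_real_linear(f, trials=200):
    """Numerically test additivity/real-homogeneity of f on random samples."""
    for _ in range(trials):
        x1, x2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
        a, b = rng.standard_normal(2)            # real scalars only
        if abs(f(a * x1 + b * x2) - (a * f(x1) + b * f(x2))) > 1e-9:
            return False
    return True

assert is_real_linear(lambda x: x.real)           # (a) xi_1
assert is_real_linear(lambda x: x.imag)           # (b) xi_2
assert not is_real_linear(lambda x: x.real ** 2)  # (c) xi_1^2
# (d) satisfies the algebraic identity but is not real-valued:
assert is_real_linear(lambda x: x.conjugate())
assert not is_real_linear(lambda x: abs(x))       # (e) modulus
```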
2. Suppose that for each x = (x_1, x_2, x_3) ∈ ℂ³ the function y is defined as below; which ones are linear functionals? (Write x_1 = (x_{1,1}, x_{1,2}, x_{1,3}) and x_2 = (x_{2,1}, x_{2,2}, x_{2,3}).)
(a) y(x) = x_1 + x_2 is a linear functional since [ax_1 + bx_2, y] = [((ax_{1,1} + bx_{2,1}), (ax_{1,2} + bx_{2,2}), (ax_{1,3} + bx_{2,3})), y] = ax_{1,1} + bx_{2,1} + ax_{1,2} + bx_{2,2} = a[x_1, y] + b[x_2, y].
(b) y(x) = x_1 − x_2 is a linear functional since [ax_1 + bx_2, y] = [((ax_{1,1} + bx_{2,1}), (ax_{1,2} + bx_{2,2}), (ax_{1,3} + bx_{2,3})), y] = ax_{1,1} + bx_{2,1} − ax_{1,2} − bx_{2,2} = a[x_1, y] + b[x_2, y].
(c) y(x) = x_1 + 1 is not a linear functional since [ax_1 + bx_2, y] = ax_{1,1} + bx_{2,1} + 1 ≠ a[x_1, y] + b[x_2, y] = a(x_{1,1} + 1) + b(x_{2,1} + 1); also note that [0, y] = 1.¹¹
(d) y(x) = x_1 − 2x_2 + 3x_3 is a linear functional since [ax_1 + bx_2, y] = ax_{1,1} + bx_{2,1} − 2(ax_{1,2} + bx_{2,2}) + 3(ax_{1,3} + bx_{2,3}) = a[x_1, y] + b[x_2, y] = ax_{1,1} − 2ax_{1,2} + 3ax_{1,3} + bx_{2,1} − 2bx_{2,2} + 3bx_{2,3}.
3. Suppose that for each x ∈ 𝒫 the function y is defined as below; which of them are linear functionals?
(a) y(x) = ∫_1^2 x(t) dt is a linear functional since [ax_1 + bx_2, y] = ∫_1^2 [ax_1(t) + bx_2(t)] dt = a[x_1, y] + b[x_2, y].
(b) y(x) = ∫_0^2 (x(t))² dt is not a linear functional since [ax_1 + bx_2, y] = ∫_0^2 [ax_1(t) + bx_2(t)]² dt ≠ a[x_1, y] + b[x_2, y] = a ∫_0^2 (x_1(t))² dt + b ∫_0^2 (x_2(t))² dt.
(c) y(x) = ∫_0^1 t² x(t) dt is a linear functional since [ax_1 + bx_2, y] = ∫_0^1 t² [ax_1(t) + bx_2(t)] dt = a[x_1, y] + b[x_2, y].
¹¹ A shifted linear functional is no longer a linear functional; it is like moving a line so that it no longer passes through the origin.
(d) y(x) = ∫_0^1 x(t²) dt is a linear functional since [ax_1 + bx_2, y] = ∫_0^1 [ax_1(t²) + bx_2(t²)] dt = a[x_1, y] + b[x_2, y].
(e) y(x) = dx/dt is a linear functional since [ax_1 + bx_2, y] = d(ax_1 + bx_2)/dt = a dx_1/dt + b dx_2/dt = a[x_1, y] + b[x_2, y].
(f) y(x) = d²x/dt² |_{t=1} is a linear functional since [ax_1 + bx_2, y] = d²(ax_1 + bx_2)/dt² |_{t=1} = a d²x_1/dt² |_{t=1} + b d²x_2/dt² |_{t=1} = a[x_1, y] + b[x_2, y].
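The polynomial cases (a), (b) and (f) can be checked with numpy's polynomial helpers. The block below is an illustrative sketch with arbitrary test polynomials of my own choosing.

```python
import numpy as np

def integral(poly, lo, hi):
    """Integral of poly over [lo, hi]; coefficients highest degree first."""
    P = np.polyint(poly)
    return float(np.polyval(P, hi) - np.polyval(P, lo))

x1 = np.array([2.0, -1.0, 3.0])   # 2t^2 - t + 3
x2 = np.array([0.0, 4.0, 1.0])    # 4t + 1
a, b = 3.0, -2.0
combo = a * x1 + b * x2

# (a) y(x) = integral of x over [1, 2] is linear:
lin_gap = integral(combo, 1, 2) - (a * integral(x1, 1, 2) + b * integral(x2, 1, 2))
assert abs(lin_gap) < 1e-12

# (b) y(x) = integral of x(t)^2 over [0, 2] is not:
def y_sq(p):
    return integral(np.polymul(p, p), 0, 2)
sq_gap = y_sq(combo) - (a * y_sq(x1) + b * y_sq(x2))
assert abs(sq_gap) > 1e-6

# (f) y(x) = x''(1) is linear:
def y_dd(p):
    return float(np.polyval(np.polyder(p, 2), 1.0))
assert abs(y_dd(combo) - (a * y_dd(x1) + b * y_dd(x2))) < 1e-12
```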
4. If (a_0, a_1, a_2, …) is an arbitrary sequence of complex numbers, and if x is an element of 𝒫 with x(t) = ∑_0^n ξ_i t^i, write y(x) = ∑_0^n a_i ξ_i. Prove that y is an element of 𝒫* and that every element of 𝒫* can be obtained in this manner by a suitable choice of the a_i.¹²

Proof. In order to show that y ∈ 𝒫*, all I need to show is that it is a linear functional. Given x_1, x_2 ∈ 𝒫 with coefficients ξ_i and η_i respectively, we have [ax_1 + bx_2, y] = [∑_0^n (aξ_i + bη_i) t^i, y] = ∑_0^n a_i (aξ_i + bη_i) = a ∑_0^n a_i ξ_i + b ∑_0^n a_i η_i = a[x_1, y] + b[x_2, y] ⇒ y ∈ 𝒫*. The second part of the proof is straightforward: for any y ∈ 𝒫*, choose a_i = y(t^i) for i ≥ 0; this works since the t^i form a basis of the set of polynomials, and the linear functional has to map each element t^i of the basis to some a_i. This can be generalized to any vector space and its linear functionals.¹³
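The recipe a_i = y(t^i) is easy to demonstrate: pick a concrete functional (the choice y(x) = x(2) − 3x(0) below is mine, purely for illustration), read off its values on the monomial basis, and check that the coefficient form reproduces it.

```python
import numpy as np

def y(xi):
    """An arbitrary linear functional, y(x) = x(2) - 3 x(0).
    xi: polynomial coefficients, lowest degree first."""
    x_at = lambda t: sum(c * t**k for k, c in enumerate(xi))
    return x_at(2.0) - 3.0 * x_at(0.0)

n = 4
basis = np.eye(n)                    # rows are the monomials 1, t, t^2, t^3
a = np.array([y(e) for e in basis])  # a_i = y(t^i)

xi = np.array([1.0, -2.0, 0.5, 4.0])  # x(t) = 1 - 2t + 0.5 t^2 + 4 t^3
# The coefficient form sum_i a_i * xi_i agrees with y itself:
assert abs(np.dot(a, xi) - y(xi)) < 1e-12
```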
5. If y is a non-zero functional on a vector space 𝒱, and if α is an arbitrary scalar, does there necessarily exist a vector x ∈ 𝒱 such that [x, y] = α?

Proof. Yes; the proof is very simple. Since y is non-zero, there is a v ∈ 𝒱 with [v, y] = β ≠ 0. Since α, β ∈ F and β ≠ 0, the vector v_1 = (α/β) v ∈ 𝒱 and [v_1, y] = (α/β)[v, y] = α.
6. Prove that if y and z are linear functionals (on the same vector space) such that [v, y] = 0 whenever [v, z] = 0, then there exists a scalar α such that y = αz.

Proof. Consider the basis B_V = {v_1, …, v_n} and write η_i = [v_i, y], ζ_i = [v_i, z]; then for every v = ∑_1^n ξ_i v_i with [v, z] = ∑_1^n ξ_i ζ_i = 0 we also have [v, y] = ∑_1^n ξ_i η_i = 0. If z = 0 the hypothesis forces y = 0 and any α works; otherwise pick v_0 with [v_0, z] ≠ 0 and set α = [v_0, y]/[v_0, z]. Then y − αz vanishes on the kernel of z and on v_0, and since the kernel of z together with v_0 spans the space, y − αz vanishes everywhere, i.e. y = αz.
2.11 Dual Bases
One more word before embarking on the proofs of the important theorems. The concept of dual space was defined without any reference to coordinate systems; a glance at the following proofs will show a superabundance of coordinate systems. We wish to point out that this phenomenon is inevitable; we shall be establishing results concerning dimension, and dimension is the one concept (so far) whose very definition is given in terms of a basis.¹⁴
Theorem 28. If 𝒱_F is an n-dimensional vector space, if {v_1, …, v_n} is a basis in 𝒱_F, and if α_1, …, α_n ∈ F is any set of scalars, then there is one and only one functional y on 𝒱_F such that [v_i, y] = α_i for i = 1, …, n.

Proof. The theorem claims that if two linear functionals map the basis vectors to the same set of scalars α_1, …, α_n then they are the same.¹⁵ This is very clear, since if y_1, y_2 are two linear functionals that map the basis vectors to the same α_i, then for every vector v = ∑_1^n ξ_i v_i we have [v, y_1] = [∑_1^n ξ_i v_i, y_1] = ∑_1^n ξ_i α_i = ∑_1^n ξ_i [v_i, y_2] = [∑_1^n ξ_i v_i, y_2] = [v, y_2] ⇒ y_1 = y_2. For existence: if we define y by [x, y] = ξ_1 α_1 + ⋯ + ξ_n α_n, then y is indeed a linear functional, and [v_i, y] = α_i.
Theorem 29. If 𝒱_F is an n-dimensional vector space and B_{V_F} = {v_1, …, v_n} is a basis in 𝒱_F, then there is a uniquely determined basis B*_{V_F} = {y_1, …, y_n} in 𝒱*_F with the property that [v_i, y_j] = δ_ij. Consequently the dual space of an n-dimensional space is n-dimensional.

Proof. The existence of such functionals is fairly obvious: for any given vector x = ∑_1^n ξ_i v_i define [x, y_i] = ξ_i. We have shown in one of the exercise problems that such a function is a linear functional, and it is clear that [v_i, y_i] = 1 and [v_j, y_i] = 0 for j ≠ i. In fact the existence is proven using Theorem 28, since we can always find a unique linear functional y_i for the set of scalars {0, …, 1, …, 0} (with the 1 in the i-th place). Clearly any given linear functional y can be expressed as y = ∑_1^n η_i y_i where η_i = [v_i, y]. To see that this works, consider a vector x: [x, y] = [∑_1^n ξ_i v_i, y] = ∑_1^n ξ_i [v_i, y] = ∑_1^n ξ_i η_i; but from our definition of y_i we have [x, y_i] = ξ_i, and the result follows. The set B*_{V_F} is a basis of 𝒱*_F: it remains to show that the y_i are linearly independent. This is straightforward to see: if ∑_1^n η_i y_i = 0 then for every x ∈ 𝒱_F we have [x, ∑_1^n η_i y_i] = ∑_1^n η_i [x, y_i] = 0; taking x = v_j gives 0 = ∑_1^n η_i [v_j, y_i] = ∑_1^n η_i δ_ij = η_j ⇒ η_j = 0 for all j.
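Concretely, if the basis vectors are the columns of an invertible matrix V, the dual basis functionals are the rows of V⁻¹, since V⁻¹V = I encodes [v_i, y_j] = δ_ij. A small numerical sketch (random basis, my own construction):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
V = rng.standard_normal((n, n))    # columns v_1..v_n form a basis (a.s. invertible)
Y = np.linalg.inv(V)               # row j is the dual functional y_j

# [v_i, y_j] = delta_ij:
assert np.allclose(Y @ V, np.eye(n))

# Any functional y (a row vector) decomposes as y = sum_j eta_j y_j
# with eta_j = [v_j, y]:
y = rng.standard_normal(n)         # y(x) = y . x
eta = y @ V                        # eta_j = [v_j, y]
assert np.allclose(eta @ Y, y)     # sum_j eta_j y_j recovers y
```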
¹² It is interesting that polynomials are already expressed in terms of their basis: the vectors t^i are the elements of the basis.
¹³ I cannot believe I could not figure this out the second time.
¹⁴ This is a copy of the introductory statement from Halmos's book.
¹⁵ The manner in which Halmos has stated the theorem is a bit convoluted.
Theorem 30. If u, w are two different vectors of the n-dimensional vector space 𝒱, then there exists a linear functional y on 𝒱 such that [u, y] ≠ [w, y]; or, equivalently, to any non-zero vector v ∈ 𝒱 there corresponds a linear functional y ∈ 𝒱* such that [v, y] ≠ 0.

Proof. The proof is straightforward when you think of every vector as a linear combination of the basis. Given two vectors u, w ∈ 𝒱 with u = ∑_1^n α_i v_i and w = ∑_1^n β_i v_i, let y = ∑_1^n η_j y_j be a linear functional on 𝒱; then [u, y] = [∑_1^n α_i v_i, y] = ∑_1^n α_i [v_i, y] = ∑_1^n α_i [v_i, ∑_1^n η_j y_j] = ∑_1^n α_i η_i, and similarly [w, y] = ∑_1^n β_i η_i. If there were no y with [u, y] ≠ [w, y] while u ≠ w, then [u, y] = [w, y] ∀y ⇒ ∑_1^n α_i η_i = ∑_1^n β_i η_i ⇒ ∑_1^n η_i (α_i − β_i) = 0 for every choice of the η_i ⇒ α_i = β_i ⇒ u = w, a contradiction. A simple choice of y is one of the basis linear functionals. The equivalence comes from the choice of the vector x = w − u, and can be proven similarly: given [x, y] = ∑_1^n ξ_i η_i, if [x, y] = 0 for all y, then in particular [x, y_j] = 0 for j = 1, …, n ⇒ ξ_j = 0 ⇒ x = 0.¹⁶
2.12 Reflexivity

It is natural to think about linear functionals on the space of duals; we are essentially looking at the vector space 𝒱**. The idea is to reverse the application of the functional: we exploit the bilinearity of [x, y] to investigate linear functionals on the space of linear functionals on a vector space 𝒱.¹⁷
Consider the functional [x_0, y], where x_0 is a fixed vector and y takes values in the dual of 𝒱. Clearly, due to the bilinearity of linear functionals, y ↦ [x_0, y] is a linear functional on 𝒱*.
Theorem 31. If 𝒱 is a finite-dimensional vector space, then corresponding to every linear functional z_0 on 𝒱* there is a vector x_0 in 𝒱 such that z_0(y) = [x_0, y] = y(x_0) for every y ∈ 𝒱*; the correspondence z_0 ↔ x_0 between 𝒱** and 𝒱 is an isomorphism.

Proof. This is a really weird and strange idea. The linear functionals of this form on 𝒱* are essentially all the linear functionals y on 𝒱, evaluated at a chosen vector v ∈ 𝒱; so these functionals are by construction equinumerous with the vector space 𝒱. This is possible only because the linear functionals themselves preserve linearity, i.e. [x, αy + βz] = α[x, y] + β[x, z].
In order to prove the correspondence is one-to-one, we do the following: if x_1, x_2 are two vectors belonging to 𝒱 and [x_1, y] = [x_2, y] ∀y ∈ 𝒱*, then x_1 = x_2. This is fairly clear, since [x_1, y] = [x_2, y] for all y implies, from Theorem 30, that x_1 = x_2.¹⁸
What we showed is that the set of linear functionals of the form we picked is equinumerous with 𝒱. It remains to be shown that the entire space 𝒱** is of this form. The above two sections show that the set of such linear functionals z on 𝒱*, i.e. z : 𝒱* → F with z(y) = [x_0, y], is a subspace of 𝒱**. This subspace is isomorphic with 𝒱 and therefore n-dimensional. However, both 𝒱 and 𝒱* are n-dimensional, which implies that 𝒱** is n-dimensional as well, and hence 𝒱** coincides with the subspace of functionals of this form.
2.13 Basis of 𝒱**

Given our definition of 𝒱**, every z ∈ 𝒱** with [y, z] = [x, y] for some x = ∑_1^n ξ_i v_i can be represented over the space 𝒱* as [∑_1^n ξ_i v_i, y], where the z_i with [z_i, y] = [v_i, y] form the basis of 𝒱**. Professor Halmos uses the isomorphic nature of z_i and v_i and simply notates the basis as v_i, i.e. B_{V**} = B_V.
2.14 Annihilators

Definition 32. The annihilator S⁰ of any subset S of a vector space 𝒱 (S need not be a subspace) is the set of all vectors y in 𝒱* such that [x, y] is identically zero for all x in S:
S⁰ = {y ∈ 𝒱*; ∀v ∈ S ⊆ 𝒱, [v, y] = 0}
Some observations and examples:
1. O⁰ = 𝒱* and 𝒱⁰ = O (⊆ 𝒱*)
¹⁶ In your original proof you forgot to use the quantifier ∀y, which is extremely important for the proof to work: the claim being contradicted is that there is no y with [u, y] ≠ [w, y], i.e. that the equality holds for every y.
¹⁷ Well, that is a mouthful...
¹⁸ Note that Theorem 30 is extremely important. If it were not true, then the image of 𝒱 would have been a subspace of 𝒱** but not equinumerous with 𝒱** or 𝒱.
2. If 𝒱 is finite-dimensional and S contains a non-zero vector, then Theorem 30 shows that S⁰ ≠ 𝒱*.

Theorem 33 (Dimension of the Annihilator). If M is an m-dimensional subspace of an n-dimensional vector space 𝒱, then M⁰ is an (n − m)-dimensional subspace of 𝒱*.
Proof. It is easy to see that M⁰ is a vector space, since ∀m_1, m_2 ∈ M⁰ we have m_1 + m_2 ∈ M⁰, because [v, m_1 + m_2] = [v, m_1] + [v, m_2] = 0 + 0 = 0 for all v ∈ M, from the linearity properties of linear functionals in 𝒱*.
A1-A2 These follow from the vector space properties of 𝒱*.
A3 The zero functional 0 ∈ M⁰ since [v, 0] = 0 ∀v ∈ 𝒱, and m + 0 = m ∀m ∈ M⁰.
A4 Since m ∈ 𝒱*, the linear functional −m exists in 𝒱*; since [v, −m] = −[v, m] = 0 for v ∈ M, we have −m ∈ M⁰, and m + (−m) = 0.
B1-B2 Follow from the properties of vectors in 𝒱*.
C1-C2 Follow from the properties of vectors in 𝒱*.
From the above it is clear that M⁰ is a subspace of 𝒱*.
The basis vectors of the space 𝒱 are mapped to δ_ij by the basis vectors of the dual space 𝒱*. Consider a vector m = ∑_{i=1}^m ξ_i v_i, where B_M = {v_1, …, v_m} are the basis vectors of the subspace M, extended to a basis {v_1, …, v_n} of 𝒱, and let {y_1, …, y_n} be the corresponding dual basis of 𝒱*. For y = ∑_{i=1}^n η_i y_i,
[m, y] = [m, ∑_{i=1}^n η_i y_i] = ∑_{i=1}^n η_i [m, y_i]
= ∑_{i=1}^m η_i [m, y_i] + ∑_{j=m+1}^n η_j [m, y_j] = ∑_{i=1}^m η_i [m, y_i] + ∑_{j=m+1}^n η_j [∑_{i=1}^m ξ_i v_i, y_j]
Clearly ∑_{j=m+1}^n η_j [∑_{i=1}^m ξ_i v_i, y_j] = ∑_{j=m+1}^n η_j ∑_{i=1}^m ξ_i [v_i, y_j] = 0, since [v_i, y_j] = 0 for i ≠ j; the subset of basis functionals {y_{m+1}, …, y_n} maps the vectors v ∈ M to 0. It is now a matter of showing that {y_{m+1}, …, y_n} form a basis of M⁰. This follows from the fact that M⁰ is a subspace of 𝒱*.¹⁹ Actually I have proved nothing here. The proof has to show that {y_{m+1}, …, y_n} forms a basis of M⁰, in other words span(y_{m+1}, …, y_n) = M⁰. To do this: (a) let us consider a functional y ∈ M⁰. Since y ∈ 𝒱*, y = ∑_1^n η_i y_i. Given that [v, y] = 0 for every v ∈ M (in particular for v = v_1, …, v_m, which gives η_1 = ⋯ = η_m = 0), it is clear that y = ∑_{m+1}^n η_i y_i ⇒ y ∈ span(y_{m+1}, …, y_n). (b) On the other hand, every linear combination ∑_{m+1}^n η_i y_i maps every vector v ∈ M to 0, which implies that every y = ∑_{m+1}^n η_i y_i ∈ span(y_{m+1}, …, y_n) is also a member of M⁰. Together, (a) and (b) imply that span(y_{m+1}, …, y_n) = M⁰.
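Theorem 33 can be checked numerically: representing functionals as row vectors, M⁰ is the null space of Mᵀ, and its dimension should be n − m. A sketch with a random subspace (the matrix M and the tolerance are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 2
M = rng.standard_normal((n, m))        # columns span an m-dim subspace of R^n

# A functional y (row vector) annihilates M iff y @ M = 0, i.e. y is in
# the null space of M^T.  Theorem 33 says dim M^0 = n - m.
_, s, Vt = np.linalg.svd(M.T)
annihilator = Vt[np.sum(s > 1e-10):]   # rows spanning {y : y @ M = 0}

assert annihilator.shape[0] == n - m
assert np.allclose(annihilator @ M, 0.0)
```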
Theorem 34. If M is a subspace in a finite-dimensional vector space 𝒱, then M⁰⁰ = ((M⁰)⁰) = M.

Proof. The theorem essentially claims that the annihilator of the space of annihilators of the subspace M is the subspace itself, using the notation resulting from the isomorphism 𝒱** ≅ 𝒱. The relevant elements of the dual of the space M⁰ are the linear functionals of the form [z, y] = [x_0, y] with x_0 ∈ M. Since [x_0, y] = 0 for every y ∈ M⁰ whenever x_0 ∈ M, we have M ⊆ M⁰⁰. To finish, let M be m-dimensional; this implies that M⁰ is (n − m)-dimensional, and from Theorem 33, dim(M⁰⁰) = n − (n − m) = m, since dim(𝒱*) = n. Together with M ⊆ M⁰⁰, this proves M⁰⁰ = M.²⁰
2.15 Exercises

1. Define a non-zero linear functional y on ℂ³ such that if x_1 = (1, 1, 1) and x_2 = (1, 1, −1), then [x_1, y] = [x_2, y] = 0.

Sol. Let x = (ξ_1, ξ_2, ξ_3) and consider the linear functional y with [x, y] = ξ_2 − ξ_1 (or ξ_1 − ξ_2).
2. The vectors x_1 = (1, 1, 1), x_2 = (1, 1, −1) and x_3 = (1, −1, −1) form a basis of ℂ³. If {y_1, y_2, y_3} is the dual basis, and if x = (0, 1, 0), then find [x, y_1], [x, y_2] and [x, y_3].
¹⁹ This does not follow from the fact that M⁰ is a subspace of 𝒱*.
²⁰ Note again that in order to show that M⁰⁰ = M we had to use either the subset argument or the isomorphism argument resulting from the equi-dimensionality of vector spaces over the same field. The first step to a proof is to understand what needs to be proved; it seems that I do not quite get what needs to be proved.
Sol. We have [x_i, y_j] = δ_ij. Given x = ∑_1^3 α_i x_i, we can solve for the coefficients α_i:

[ 1  1  1 ] [α_1]   [0]
[ 1  1 −1 ] [α_2] = [1]
[ 1 −1 −1 ] [α_3]   [0]

α_1 + α_2 + α_3 = 0
α_1 + α_2 − α_3 = 1
α_1 − α_2 − α_3 = 0
⇒ α_2 = 1/2, α_1 = 0, α_3 = −1/2

(a) [(0, 1, 0), y_1] = [∑_1^3 α_i x_i, y_1] = [0·x_1 + (1/2)x_2 − (1/2)x_3, y_1] = 0
(b) [(0, 1, 0), y_2] = [∑_1^3 α_i x_i, y_2] = [0·x_1 + (1/2)x_2 − (1/2)x_3, y_2] = 1/2
(c) [(0, 1, 0), y_3] = [∑_1^3 α_i x_i, y_3] = [0·x_1 + (1/2)x_2 − (1/2)x_3, y_3] = −1/2
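The computation above can be confirmed numerically: solving the same 3×3 system gives the coordinates of x in the basis, and those coordinates are exactly the dual-basis values (the rows of the inverse matrix are the dual basis functionals).

```python
import numpy as np

# Columns are x_1 = (1,1,1), x_2 = (1,1,-1), x_3 = (1,-1,-1).
X = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, -1.0],
              [1.0, -1.0, -1.0]]).T
x = np.array([0.0, 1.0, 0.0])

alpha = np.linalg.solve(X, x)    # coordinates of x in this basis
vals = np.linalg.inv(X) @ x      # [x, y_i], via the dual basis rows

assert np.allclose(alpha, [0.0, 0.5, -0.5])
assert np.allclose(vals, alpha)
```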
3. Prove that if y is a linear functional on an n-dimensional vector space 𝒱, then the set of all those vectors x for which [x, y] = 0 is a subspace of 𝒱; what is the dimension of that subspace?

Sol. The proof that the set 𝒴⁰ = {x ∈ 𝒱; [x, y] = 0} is a subspace of 𝒱 follows from the following facts:
(a) 0 ∈ 𝒴⁰, since [0, y] = 0 ∀y ∈ 𝒱*.
(b) The linearity properties of vectors x ∈ 𝒱: if [x_1, y] = [x_2, y] = 0 then [ax_1 + bx_2, y] = a[x_1, y] + b[x_2, y] = 0.
To begin with, every x ∈ 𝒱 can be written in terms of the basis B_V = {v_1, …, v_n}. We therefore have, for all x ∈ 𝒴⁰, [∑_1^n a_i v_i, y] = 0 ⇒ ∑_1^n a_i [v_i, y] = ∑_1^n a_i b_i = 0 ⇒ a_i = 0 or b_i = 0 or both, where y = ∑_1^n b_i y_i, the y_i being the basis of 𝒱*. Clearly if all the b_i are non-zero, all the a_i have to be zero, and that leaves us with the dimension of the subspace being 0. On the other hand, if all the b_i are zero, the dimension of the subspace is n; if the number of b_i that are zero is m, then the dimension of the subspace is m.
Let us look instead at the space of annihilators on the space of linear functionals on 𝒱. These are the set of z ∈ 𝒱** such that [z, y] = [x, y] = 0 ∀y ∈ 𝒴 ⊆ 𝒱*, i.e., using the isomorphism between 𝒱** and 𝒱, it is the set of x ∈ 𝒱 annihilating 𝒴. The annihilator of 𝒴 ⊆ 𝒱* is given by 𝒴⁰ = {z; [z, y] = 0, ∀y ∈ 𝒴}. Applying Theorem 33, we see that the dimension of 𝒴⁰ is n − 1, since 𝒴 = span(y) ⇒ dim(𝒴) = 1 ⇒ dim(𝒴⁰) = n − 1.²¹
4. If y(x) = ∑_1^3 x_i whenever x = (x_1, x_2, x_3) is a vector in ℂ³, then y is a linear functional on ℂ³; find a basis of the subspace consisting of all those vectors x for which [x, y] = 0.

Sol. From the previous problem, the dimension of the subspace 𝒴⁰ = {x; [x, y] = 0} is n − 1 = 2. We construct a basis for this space: y annihilates all vectors with x_1 + x_2 + x_3 = 0 ⇒ x_1 = −(x_2 + x_3); for example, x_1 = (1, −1/2, −1/2) and x_2 = (1, −1, 0) would form a basis of 𝒴⁰.
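A quick numerical check that the proposed vectors do lie in the subspace and are linearly independent:

```python
import numpy as np

y = np.array([1.0, 1.0, 1.0])    # y(x) = x_1 + x_2 + x_3

# Proposed basis of the kernel {x : [x, y] = 0}:
b1 = np.array([1.0, -0.5, -0.5])
b2 = np.array([1.0, -1.0, 0.0])

assert abs(y @ b1) < 1e-12 and abs(y @ b2) < 1e-12   # both are annihilated
assert np.linalg.matrix_rank(np.column_stack([b1, b2])) == 2  # independent
```

Two independent vectors in a 2-dimensional subspace necessarily span it.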
5. Prove that if m < n and if y_1, …, y_m are linear functionals on an n-dimensional vector space 𝒱, then there exists a non-zero vector x ∈ 𝒱 such that [x, y_j] = 0 for j = 1, …, m. What does this result say about the solutions of linear equations?

Sol. The existence follows from the solution to problem 3 and Theorem 33. The dimension of 𝒴 = span(y_1, …, y_m) is at most m, which implies that the dimension of 𝒴⁰ is at least n − m ≥ 1 ⇒ ∃x ≠ 0 such that [x, y_j] = 0 for all j. Let us look at the representation of linear equations using linear functionals. A system of linear equations is represented as

[ x_1^{(1)} ⋯ x_n^{(1)} ] [a_1]   [v_1]
[    ⋮           ⋮      ] [ ⋮ ] = [ ⋮ ]
[ x_1^{(n)} ⋯ x_n^{(n)} ] [a_n]   [v_n]

In essence we are looking for coefficients a_i that form a linear combination of the vectors x_1, …, x_n equal to the vector v.
²¹ I am unable to reconcile this argument with my earlier argument; I am not sure what I am missing. On second thought, the first argument/proof is rubbish: the basis {y_1, …, y_n} is not independent of the choice of basis vectors. The second argument is what makes sense.
6. Suppose that m < n and that y_1, …, y_m are linear functionals on an n-dimensional vector space 𝒱. Under what conditions on the scalars α_1, …, α_m is it true that there exists a vector x ∈ 𝒱 such that [x, y_j] = α_j for j = 1, …, m? What does this result say about the solutions of linear equations?