
Multilinear algebra

Wae
Mathcamp 2010
1 Universal sums and products
The basic idea of multilinear algebra is to study vector spaces with some kind of multiplication. In this
section, we will introduce the most basic and universal case of such a multiplication. Given two vector
spaces V and W, we will construct a vector space V ⊗ W (the tensor product of V and W) such that we can
multiply a vector in V by a vector in W to get a vector in V ⊗ W.
First, as a warm-up, we will give a reinterpretation of the direct sum operation on vector spaces. Recall
that if V and W are vector spaces, their direct sum is the vector space V ⊕ W = V × W, with scalar
multiplication and addition given by c(v, w) = (cv, cw) and (v, w) + (v′, w′) = (v + v′, w + w′). We think of
V as sitting inside V ⊕ W by identifying v ∈ V with (v, 0) ∈ V ⊕ W, and similarly for W.
I'd now like to explain why one might come up with this construction of V ⊕ W. The idea is, we have
two different kinds of vectors, v ∈ V and w ∈ W, and we can't add them because they live in different vector
spaces. However, we'd like to build a new vector space in which it makes sense to add them together. That
is, given v ∈ V and w ∈ W, we want to have a "sum" v ⊞ w which lives in some vector space that we will call
V ⊕ W. Furthermore, this operation ⊞ should have the expected properties of addition. For example, we
should have that if c is a scalar, c(v ⊞ w) = cv ⊞ cw, and we should also have v ⊞ w + v′ ⊞ w′ = (v + v′) ⊞ (w + w′).
Based on this, we could make a definition.
Definition 1.1. Let V and W be vector spaces. Then a sum on V and W is another vector space E
together with a function σ : V × W → E such that, writing σ(v, w) = v ⊞ w, we have c(v ⊞ w) = cv ⊞ cw
and v ⊞ w + v′ ⊞ w′ = (v + v′) ⊞ (w + w′).
However, this notion of a sum does not quite give us what the direct sum should be. For example, for
any vector space E, σ(v, w) = 0 defines a sum V × W → E. This is a "bad" sum because it throws away
information about V and W: elements of V and W are not all equal to 0, but this sum forgets about that.
On the other hand, note that if E is a sum of V and W and E is a subspace of F, then F is also a sum of V and
W, with the same operation. This is also no good: F might have tons of other vectors in it that don't come
from either V or W. That is, instead of losing information, this time we've gained extraneous information.
We want the direct sum to be the sum that is "just right," and doesn't gain or lose any information. The
following notion is how we can make this precise. Note that if σ : V × W → E is a sum and T : E → F is
linear, then the composition T ∘ σ : V × W → E → F is also a sum.
Definition 1.2. A sum σ : V × W → E of V and W is universal (or initial) if for any sum τ : V × W → F,
there is a unique linear map T : E → F such that T(v ⊞_E w) = v ⊞_F w (i.e., T ∘ σ = τ).
Here's why this is a reasonable definition. We have two different ways of getting vectors v ⊞ w, one from
σ and one from τ. The map T is such that T(v ⊞_E w) = v ⊞_F w. If E has no extra information besides being
a sum of V and W, this will be enough to define T: there are no other random vectors in E on which we
need to define T. Furthermore, if E has not lost any information, then T will be well-defined. For example,
if v ⊞ w = 0 in E for all v and w, but this is not true in F, then T(v ⊞_E w) = v ⊞_F w will not be well-defined.
Furthermore, we can check that our original definition V ⊕ W = V × W is universal: indeed, if we consider
this as a sum, the map σ is just the identity map 1 from V × W to itself, and there is a unique T such that
T ∘ 1 = τ, namely T = τ.
However, while there should be a single definite direct sum of two vector spaces, it's far from obvious
that there is only one universal sum. It turns out that there is.
Theorem 1.3. Let σ : V × W → E and τ : V × W → F be two universal sums. Then there is a unique
isomorphism T : E → F such that T(v ⊞_E w) = v ⊞_F w.
Proof. Since E is universal, there is a unique linear map T with the desired property; we just want to show
that T is an isomorphism. We show this by constructing an inverse map S : F → E. Namely, because F is
also universal, there is a unique map S : F → E such that S(v ⊞_F w) = v ⊞_E w.
We want to show that S is the inverse of T, i.e. that ST is the identity map 1 : E → E and TS is
1 : F → F. Note that ST(v ⊞_E w) = S(v ⊞_F w) = v ⊞_E w. That is, ST : E → E satisfies ST(v ⊞_E w) = v ⊞_E w.
By universality of E, there is a unique map E → E with this property. But the identity map also has this
property, so we must have ST = 1. The same argument shows that TS = 1.
Thus this highly abstract definition of a universal sum is in fact well-defined (up to isomorphism)
and gives a new definition of the direct sum. It might seem silly to use such a complicated definition when we
can just define V ⊕ W = V × W. However, in the case of the tensor product, which we now turn to, there is no
such simple concrete definition, at least without choosing a basis.
Tensor products will be like direct sums, except for the operation of multiplication rather than addition.
We need to write down some axioms that multiplication should satisfy, and then we can just copy the
definition we made above.
Definition 1.4. Let V and W be vector spaces. Then a product (or bilinear map) on V and W is another
vector space E together with a function μ : V × W → E such that, writing μ(v, w) = v ∗ w, we have
c(v ∗ w) = cv ∗ w = v ∗ cw, (v + v′) ∗ w = v ∗ w + v′ ∗ w, and v ∗ (w + w′) = v ∗ w + v ∗ w′.
Example 1.5. The dot product k^n × k^n → k defined by v · w = v_1 w_1 + v_2 w_2 + ··· + v_n w_n is a product.
More generally, we can see that a function μ : k^n × k^m → k is a product if μ(v, w) is some polynomial in the
coordinates of v and the coordinates of w, each term of which is of the form c_{jk} v_j w_k for c_{jk} a scalar.
Definition 1.6. A product μ : V × W → E of V and W is universal (or initial) if for any product
ν : V × W → F, there is a unique linear map T : E → F such that T(v ∗_E w) = v ∗_F w (i.e., T ∘ μ = ν).
Theorem 1.7. Let μ : V × W → E and ν : V × W → F be two universal products. Then there is a unique
isomorphism T : E → F such that T(v ∗_E w) = v ∗_F w.
Proof. Exactly the same as Theorem 1.3.
We can thus define tensor products.
Definition 1.8. The tensor product of two vector spaces V and W is the (unique up to isomorphism) vector
space V ⊗ W equipped with a universal product V × W → V ⊗ W.
But wait! There's still a problem with this definition: how do we know that a universal product actually
exists? For sums, we knew it existed because we could explicitly construct it, as just the cartesian product.
For products, we will have to give an explicit construction to show existence as well, but it is much less
simple. We will do this at the beginning of the next section.
1.1 Exercises
Exercise 1.1. Show that a sum σ : V × W → E is equivalent to a pair of linear maps, one V → E and the
other W → E. (Hint: Consider σ(v, 0) and σ(0, w).) (By "equivalent," we mean that there is a natural
bijection between the set of sums V × W → E and the set of pairs of maps V → E and W → E.)
Exercise 1.2. For V and W vector spaces, write Hom(V, W) for the set of all linear maps from V to W.
(a): Show that Hom(V, W) is naturally a vector space.
(b): Show that a product map V × W → E is equivalent to a linear map V → Hom(W, E).
(c): Find a natural nontrivial product map Hom(V, W) × V → W.
Exercise 1.3. Show that 0 ⊗ 0 = 0, where 0 = {0} is the trivial vector space.
Exercise 1.4. Show that the map μ : k × k → k given by μ(a, b) = ab is a product. Is it a universal product?
Exercise 1.5. We will do this in class tomorrow, but you can try to figure it out now if you want. Let
V = k^n and W = k^m, and define a product μ : V × W → k^{nm} by
μ(v, w) = (v_1 w_1, v_1 w_2, . . . , v_1 w_m, v_2 w_1, . . . , v_2 w_m, v_3 w_1, . . . , v_n w_m).
Show that this is a universal product, so k^n ⊗ k^m = k^{nm}.
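For intuition, the product in Exercise 1.5 can be checked numerically; it agrees with the Kronecker product of the two vectors. This is just an illustrative sketch (numpy is assumed; the helper `mu` is not from the notes):

```python
import numpy as np

# The product mu : k^n x k^m -> k^{nm} from Exercise 1.5 lists all
# products v_i * w_j in lexicographic order of (i, j).
def mu(v, w):
    return np.array([vi * wj for vi in v for wj in w])

v = np.array([1, 2])       # a vector in k^2
w = np.array([3, 4, 5])    # a vector in k^3

# This agrees with the Kronecker product of the two vectors:
assert np.array_equal(mu(v, w), np.kron(v, w))

# Bilinearity check: mu(v + v', w) = mu(v, w) + mu(v', w).
v2 = np.array([7, -1])
assert np.array_equal(mu(v + v2, w), mu(v, w) + mu(v2, w))
```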
2 Tensor products
We begin with the result promised at the end of the previous section.
Theorem 2.1. Any two vector spaces have a tensor product.
Proof. Let V and W be vector spaces. Let E be the vector space of all formal linear combinations of symbols
v ⊗ w where v ∈ V and w ∈ W. That is, an element of E is a finite sum Σ c_i (v_i ⊗ w_i), where the c_i are
scalars, v_i ∈ V and w_i ∈ W. Alternatively, E is a vector space with basis V × W, where we write the basis
element coming from (v, w) as v ⊗ w.
We now want to force the symbol ⊗ to represent a product. To do this, we let K be the subspace of E
spanned by all elements of the form c(v ⊗ w) − cv ⊗ w, c(v ⊗ w) − v ⊗ cw, (v + v′) ⊗ w − v ⊗ w − v′ ⊗ w, and
v ⊗ (w + w′) − v ⊗ w − v ⊗ w′. That is, these are the elements such that if they were zero, that would say
that ⊗ is a product. We then define V ⊗ W to be the quotient vector space E/K.
We have a map μ : V × W → E → E/K = V ⊗ W where the first map sends (v, w) to v ⊗ w; we claim
that this map is a universal product. First of all, it is a product: that's exactly what modding out by K
does. Now let ν : V × W → F be any other product, written ν(v, w) = v ∗_F w. Note that since E and hence E/K is spanned by
elements of the form v ⊗ w, there is at most one linear map T : V ⊗ W → F preserving the product. We thus
want to show that one exists. We can first define a map T̃ : E → F which sends v ⊗ w to v ∗_F w; this
is well-defined since {v ⊗ w} forms a basis for E. To turn T̃ into our desired map T : E/K → F, we just
need to show that K is contained in the kernel of T̃. But since ∗_F is a product, all the elements of K must
become zero when you replace ⊗ with ∗_F, which is exactly what T̃ does. Thus T̃(K) = 0, so we obtain a
map T : E/K → F witnessing the universality of E/K.
This construction is explicit enough to tell us what an element of V ⊗ W looks like: it is a linear
combination Σ c_i v_i ⊗ w_i. However, two such linear combinations can be equal, and they are equal exactly
when they are forced to be equal by the fact that ⊗ is a product.
Let's now give a more concrete description of tensor products.
As a first example of how we can actually compute tensor products, let's let W = 0, the trivial vector
space containing only 0. Then in V ⊗ 0, for any v ∈ V we have an element v ⊗ 0, and these will span all of
V ⊗ 0. Now note that for any scalar c, c(v ⊗ 0) = v ⊗ (c · 0) = v ⊗ 0. But the only way this can happen for
an element of a vector space is if v ⊗ 0 = 0! Indeed, if for example we let c = 2, we get 2(v ⊗ 0) = v ⊗ 0, so
subtracting v ⊗ 0 from both sides gives v ⊗ 0 = 0.
This means that every element of V ⊗ 0 is 0, i.e. V ⊗ 0 = 0.
OK, let's look at a more interesting example. Let W = k, the field of scalars, considered as a 1-dimensional
vector space (if you are not comfortable thinking about vector spaces over general fields, you can assume
k = R). To understand V ⊗ k, we need to understand products μ : V × k → E. Given such a product, for
each v ∈ V we have an element μ(v, 1) ∈ E. Furthermore, for any other c ∈ k, by bilinearity (i.e., μ being
a product), μ(v, c) = cμ(v, 1), so these elements μ(v, 1) determine μ. Also, bilinearity says that μ(v, 1) is
linear as a function of v. Indeed, μ is bilinear iff μ(v, 1) is linear in v.
Thus we have found that a product on V × k is the same thing as a linear map on V. But by definition,
a product on V × k is the same thing as a linear map on V ⊗ k. Thus we should get the following.
Proposition 2.2. For any V, V ⊗ k ≅ V.
Proof. To prove this rigorously, we have to give a product μ : V × k → V and show that it is universal. The
only natural product to write down is μ(v, c) = cv, where the right-hand side is scalar multiplication in V.
This is universal by the discussion above: given any other product ν : V × k → F, we get a map V → F by
just sending v to ν(v, 1), and this preserves the product since it sends μ(v, c) = cv to ν(cv, 1) = ν(v, c).
Finally, there is one more property that we need to compute general tensor products.
Proposition 2.3. For any U, V, and W, (U ⊕ V) ⊗ W ≅ (U ⊗ W) ⊕ (V ⊗ W). The isomorphism sends
(u, v) ⊗ w to (u ⊗ w, v ⊗ w).
Proof. First, we define a map T : (U ⊕ V) ⊗ W → (U ⊗ W) ⊕ (V ⊗ W) by saying
T((u, v) ⊗ w) = (u ⊗ w, v ⊗ w).
This gives a well-defined linear map by the definition of the tensor product, since the right-hand side is
bilinear in (u, v) and w.
On the other hand, we can also define a map S : (U ⊗ W) ⊕ (V ⊗ W) → (U ⊕ V) ⊗ W by
S(u ⊗ w_1, v ⊗ w_2) = (u, 0) ⊗ w_1 + (0, v) ⊗ w_2.
This is a well-defined linear map by a similar analysis. Finally, we have TS = 1 and ST = 1, so they are
inverse isomorphisms.
With this, we can understand the tensor product of any two vector spaces. For example, let W = k ⊕ k, a
2-dimensional vector space. Then for any V, V ⊗ W ≅ (V ⊗ k) ⊕ (V ⊗ k) = V ⊕ V. More generally, iterating
this argument shows that V ⊗ W is a direct sum of copies of V, one for each dimension of W. Let's make
this more precise.
Theorem 2.4. Let {e_i} be a basis for V and {f_j} be a basis for W. Then {e_i ⊗ f_j} is a basis for V ⊗ W.
In particular, dim(V ⊗ W) = dim(V) dim(W).
Proof. We can verify this directly by showing explicitly that a bilinear map μ is uniquely determined by saying
where each (e_i, f_j) gets sent. Indeed, for any v ∈ V we can write v = Σ c_i e_i and for any w ∈ W we can write
w = Σ d_j f_j, and then we must have μ(v, w) = μ(Σ c_i e_i, Σ d_j f_j) = Σ_{i,j} c_i d_j μ(e_i, f_j). On the other hand,
given any values of μ(e_i, f_j), we can see that the expression above for μ(v, w) gives a well-defined bilinear
map, by uniqueness of the representations v = Σ c_i e_i and w = Σ d_j f_j. This says that a linear map out of
V ⊗ W is uniquely determined by saying where each e_i ⊗ f_j goes, i.e. {e_i ⊗ f_j} is a basis.
However, this also follows from the results we have proven already. Namely, giving a basis {e_i} of V is
the same as giving an isomorphism k^n → V which sends (1, 0, 0, . . . ) to e_1, (0, 1, 0, . . . ) to e_2, (0, 0, 1, . . . ) to
e_3, and so on. Thus our bases give isomorphisms V ≅ k^n and W ≅ k^m, and the distributivity property
(Proposition 2.3) together with k ⊗ k ≅ k gives that k^n ⊗ k^m ≅ k^{nm}. Chasing through exactly what the map
in Proposition 2.3 is, we can see that the isomorphism V ⊗ W ≅ k^n ⊗ k^m ≅ k^{nm} corresponds to picking the
basis {e_i ⊗ f_j} for V ⊗ W.
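Under the identification of Exercise 1.5, Theorem 2.4 can be spot-checked numerically: the images of the e_i ⊗ f_j are exactly the standard basis vectors of k^{nm}, so in particular they are linearly independent. A small sketch, assuming numpy (not part of the notes):

```python
import numpy as np

n, m = 3, 2
# Standard bases e_i of k^n and f_j of k^m, as rows of identity matrices.
e = np.eye(n)
f = np.eye(m)

# Under v (x) w |-> kron(v, w), form the nm vectors e_i (x) f_j,
# listed in lexicographic order of (i, j).
vectors = [np.kron(e[i], f[j]) for i in range(n) for j in range(m)]
M = np.column_stack(vectors)

# They are exactly the standard basis of k^{nm}, hence a basis,
# confirming dim(V (x) W) = n * m in this case.
assert M.shape == (n * m, n * m)
assert np.allclose(M, np.eye(n * m))
```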
Note that we could have just taken Theorem 2.4 as the definition of the tensor product. However, this
would have been very awkward because it depends on a choice of a basis, and it's not at all obvious that
you could make it independent of the basis. This is why we made our abstract definition.
2.1 Exercises
Exercise 2.1. Show that V ⊗ W ≅ W ⊗ V naturally. (Hint: You can prove it by picking a basis or using
the universal property. However, if your isomorphism involves a basis, you should show that if you picked a
different basis, you would still get the same isomorphism (this is part of what "naturally" should mean).)
Exercise 2.2. Show that (U ⊗ V) ⊗ W ≅ U ⊗ (V ⊗ W).
Exercise 2.3. Let k = R, and consider C as a vector space over R. Show that for any R-vector space V,
C ⊗ V can naturally be thought of as a vector space over C, whose dimension over C is the same as the
dimension of V over R. (C ⊗ V is called the complexification of V, and should be thought of as V but with
the scalars formally extended from R to C.)
Exercise 2.4. Let k[x] be the set of polynomials in a variable x with coefficients in k; this is a vector space
over k. Show that k[x] ⊗ k[y] ≅ k[x, y] via f(x) ⊗ g(y) ↦ f(x)g(y), where k[x, y] is the vector space of
polynomials in two variables x and y.
Exercise 2.5. Give a definition of a trilinear map U × V × W → E (it should be something like "linear in
all three variables simultaneously"). Show that U ⊗ V ⊗ W (parenthesized in either way by Exercise 2.2)
is the universal vector space with a trilinear map from U × V × W (the same way V ⊗ W is the universal
vector space with a bilinear map from V × W).
Exercise 2.6. Generalize Exercise 2.5 by replacing 3 by n: define an n-multilinear map V_1 × V_2 × ··· × V_n → E,
and show that V_1 ⊗ V_2 ⊗ ··· ⊗ V_n is the universal vector space with an n-multilinear map V_1 × V_2 × ··· × V_n →
V_1 ⊗ V_2 ⊗ ··· ⊗ V_n.
Exercise 2.7. Define a product map of abelian groups to be a map μ : A × B → C satisfying μ(a + a′, b) =
μ(a, b) + μ(a′, b) and μ(a, b + b′) = μ(a, b) + μ(a, b′).
(a): Show that for any two abelian groups A and B, a universal product A × B → A ⊗ B exists. (Hint:
Imitate the proof of Theorem 2.1.)
(b): Show that Z/2 ⊗ Z/3 = 0. (Hint: Show that any product Z/2 × Z/3 → C is identically 0.)
3 Exterior products and determinants
In this section, we will look at a specific notion of multiplication, namely one that relates to area and volume.
The basic idea is as follows. Given two vectors v and w, we can form the parallelogram that they span,
and write v ∧ w for something we think of as the "area" of the parallelogram. This is not quite the usual
notion of area, however, because we want to think of it not just as a single number but as also having a
two-dimensional "direction" (the same way a single vector v both has a size and a direction). That is, if
we had a parallelogram pointing in a different direction, i.e. in a different plane, we would think of it as
different.
Let's write down some properties this v ∧ w ought to have. Scaling v or w scales the parallelogram, and
thus ought to scale its area. That is, for scalars c, we should have (cv) ∧ w = c(v ∧ w) = v ∧ (cw). This
suggests that the operation ∧ ought to be bilinear. Another property of ∧ is that for any vector v, v ∧ v
should be 0: if both vectors are pointing in the same direction, the "parallelogram" is just a line segment
and has no area.
As it turns out, these two properties are the only properties we really need.
Definition 3.1. Let V be a vector space. Then the exterior square Λ^2(V) is the quotient of V ⊗ V by the
subspace spanned by the elements v ⊗ v for all v ∈ V. We write v ∧ w for the image of v ⊗ w under the
quotient map V ⊗ V → Λ^2(V).
Let's try to understand what this vector space Λ^2(V) actually looks like. First, there is a very important
formal consequence of the fact that v ∧ v = 0 for all v. Namely, if we apply this to a sum v + w, we get
0 = (v + w) ∧ (v + w) = v ∧ v + v ∧ w + w ∧ v + w ∧ w = v ∧ w + w ∧ v.
Thus for any v and w, v ∧ w = −w ∧ v. That is, as a multiplication, this operation is anticommutative
(or alternating).
We can now see what Λ^2(V) looks like if we pick a basis {e_i} for V. We know that {e_i ⊗ e_j} is a basis
for V ⊗ V. However, in Λ^2(V), e_i ∧ e_i = 0 and e_i ∧ e_j = −e_j ∧ e_i, so Λ^2(V) can be spanned by just the
elements e_i ∧ e_j for which i < j. Seeing that all these are indeed linearly independent is a bit trickier.
Theorem 3.2. Suppose {e_i} is a basis for V. Then {e_i ∧ e_j}_{i<j} is a basis for Λ^2(V). In particular,
dim Λ^2(V) = (dim V choose 2).
Proof. The idea behind the proof is that Λ^2(V) is the free (or universal) vector space in which you can
multiply two elements of V in an anticommutative way, so to show that the e_i ∧ e_j are linearly indepen-
dent, you just have to construct some vector space with such a multiplication in which they are linearly
independent.
We define a vector space E as follows: an element of E is a formal linear combination of symbols e_{ij}
for i < j. We define a map T : V ⊗ V → E by T(e_i ⊗ e_j) = e_{ij} if i < j, T(e_i ⊗ e_j) = −e_{ji} if i > j, and
T(e_i ⊗ e_i) = 0. We want to show that T gives a map S : Λ^2(V) → E; it suffices to show that T(v ⊗ v) = 0
for all v ∈ V. Let v = Σ c_i e_i; then
v ⊗ v = Σ c_i c_j e_i ⊗ e_j = Σ_i c_i^2 e_i ⊗ e_i + Σ_{i<j} c_i c_j (e_i ⊗ e_j + e_j ⊗ e_i).
We thus see that T(v ⊗ v) = 0. Hence T gives a map S : Λ^2(V) → E which sends e_i ∧ e_j to e_{ij}. Since
the e_{ij} are linearly independent in E by construction, this implies that the e_i ∧ e_j (for i < j) are linearly
independent, and hence a basis.
One thing to note about Λ^2(V) is that not every element is of the form v ∧ w. That is, not every "area
vector" is just an area in some plane; it can also be a sum of areas in different planes. For example, if {e_i}
is a basis, then e_1 ∧ e_2 + e_3 ∧ e_4 cannot be simplified to a single v ∧ w.
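In coordinates, Theorem 3.2 says that if v = Σ c_i e_i and w = Σ d_j e_j, the coefficient of e_i ∧ e_j (for i < j) in v ∧ w is c_i d_j − c_j d_i. A numerical sketch of this, assuming numpy (the helper `wedge` is not from the notes):

```python
import numpy as np

def wedge(v, w):
    """Coefficients of v ^ w in the basis {e_i ^ e_j : i < j}:
    the (i, j) coefficient is v_i*w_j - v_j*w_i."""
    n = len(v)
    A = np.outer(v, w) - np.outer(w, v)   # antisymmetric matrix of coefficients
    return {(i, j): A[i, j] for i in range(n) for j in range(i + 1, n)}

v = np.array([1, 2, 3, 4])
w = np.array([5, 6, 7, 8])

# Anticommutativity: v ^ w = -(w ^ v).
vw, wv = wedge(v, w), wedge(w, v)
assert all(vw[key] == -wv[key] for key in vw)

# v ^ v = 0.
assert all(c == 0 for c in wedge(v, v).values())
```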
We can use the same idea to generalize from area to volume, or even n-dimensional volume.
Definition 3.3. Let r ∈ N and V be a vector space. Then Λ^r(V) is the quotient of V ⊗ ··· ⊗ V (with r
factors) by the subspace spanned by all tensors v_1 ⊗ ··· ⊗ v_r for which two of the v_i are equal. The
exterior algebra Λ(V) is the direct sum ⊕_r Λ^r(V).
We think of an element of Λ^r(V) as some sort of r-dimensional volume vector. Note that Λ^0(V) = k
and Λ^1(V) = V, the former because the empty tensor product is k (since V ⊗ k = V for any V). The
same argument as for r = 2 shows the following.
Theorem 3.4. Suppose {e_i} is a basis for V. Then {e_{i_1} ∧ e_{i_2} ∧ ··· ∧ e_{i_r}}_{i_1 < i_2 < ··· < i_r} is a basis for Λ^r(V).
In particular, dim Λ^r(V) = (dim V choose r) (0 if r > dim V).
In particular, this means that when r = dim V = n, Λ^n(V) is 1-dimensional, spanned by e_1 ∧ ··· ∧ e_n for
any basis {e_i} of V.
Note that given any linear map T : V → W between two vector spaces, we naturally get linear maps
Λ^r T : Λ^r(V) → Λ^r(W) by just defining Λ^r T(v_1 ∧ ··· ∧ v_r) = T(v_1) ∧ ··· ∧ T(v_r). To see that this is
well-defined, we can note that this map is multilinear and vanishes if any of the v_i are equal to each other.
Intuitively, the idea is that given a linear map, it also gives us a way to turn r-dimensional volumes into
r-dimensional volumes.
Now in particular, we can consider the case that W = V and r = n = dim V. In this case, we have a
map T from V to itself, and we're looking at what that map does to (n-dimensional) volume in V. Now
Λ^n(V) is 1-dimensional, so Λ^n T is a linear map from a 1-dimensional vector space to itself. Any such map
is given by multiplying by some scalar, and that scalar is independent of any choice of basis. This scalar is what
T multiplies volumes by, from a geometric perspective.
Definition 3.5. Let T : V → V be a linear map and n = dim V. Then the determinant det(T) is the scalar
such that Λ^n T is multiplication by det(T).
3.1 Exercises
Do the following exercise if you are familiar with the usual cross product on R^3.²
Exercise 3.1. Identify Λ^2(R^3) with R^3 by identifying e_1 ∧ e_2 with e_3, e_2 ∧ e_3 with e_1, and e_3 ∧ e_1
with e_2. Show that under this identification, the exterior product v ∧ w ∈ Λ^2(R^3) = R^3 is the same as the cross
product v × w ∈ R^3.
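Exercise 3.1 can be spot-checked numerically: with 0-indexed coordinates, the coefficients of v ∧ w on e_2 ∧ e_3, e_3 ∧ e_1, e_1 ∧ e_2 are exactly the components of the cross product. A sketch, assuming numpy (the helper `wedge_as_vector` is not from the notes):

```python
import numpy as np

def wedge_as_vector(v, w):
    # Coefficients of v ^ w on e2^e3, e3^e1, e1^e2 (1-indexed),
    # i.e. the identification of Exercise 3.1.
    A = np.outer(v, w) - np.outer(w, v)
    return np.array([A[1, 2], A[2, 0], A[0, 1]])

rng = np.random.default_rng(0)
v, w = rng.standard_normal(3), rng.standard_normal(3)
# Under the identification, the wedge is the cross product.
assert np.allclose(wedge_as_vector(v, w), np.cross(v, w))
```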
Exercise 3.2. Let V have basis {e_1, e_2} and let T : V → V be given by T(e_1) = ae_1 + ce_2 and T(e_2) =
be_1 + de_2. Compute Λ^2 T : Λ^2(V) → Λ^2(V) in terms of this basis. What is det(T)?
Exercise 3.3. Prove Theorem 3.4.
Exercise 3.4. Let Sym^2(V) be the quotient of V ⊗ V by the subspace spanned by elements of the form
v ⊗ w − w ⊗ v. We write vw for the image of v ⊗ w under the quotient map V ⊗ V → Sym^2(V). If {e_i} is a
basis for V, show that {e_i e_j}_{i≤j} is a basis for Sym^2(V). (Hint: Imitate the proof of Theorem 3.4.)
Exercise 3.5. Generalize Exercise 3.4 to larger values of 2: define a vector space Sym^r(V) that is a quotient
of V ⊗ ··· ⊗ V (r factors), such that if {e_i} is a basis for V then {e_{i_1} e_{i_2} . . . e_{i_r}}_{i_1 ≤ i_2 ≤ ··· ≤ i_r} is a basis for Sym^r(V).
Exercise 3.6. Show that there is a linear map ∧ : Λ^r(V) × Λ^s(V) → Λ^{r+s}(V) given by
∧(v_1 ∧ ··· ∧ v_r, w_1 ∧ ··· ∧ w_s) = v_1 ∧ ··· ∧ v_r ∧ w_1 ∧ ··· ∧ w_s.
(There is something to check here because not every element of Λ^r(V) is of the form v_1 ∧ ··· ∧ v_r.) Geo-
metrically, this corresponds to combining an r-dimensional volume with an s-dimensional volume to get an
(r + s)-dimensional volume. We write x ∧ y for ∧(x, y).
Exercise 3.7. Let x ∈ Λ^r(V), y ∈ Λ^s(V), and z ∈ Λ^t(V). Show that (x ∧ y) ∧ z = x ∧ (y ∧ z) and
x ∧ y = (−1)^{rs} y ∧ x (with x ∧ y as defined in Exercise 3.6).
²If you're not familiar with it, here is one definition: the cross product of v and w is perpendicular to the plane spanned by
v and w and has length equal to the area of the parallelogram spanned by v and w, with sign given by the right-hand rule.
4 More on Determinants; Duality
Let's see that this definition of determinant we gave agrees with other definitions you may have seen. If
V = k^2 is 2-dimensional (with basis {e_1, e_2}) and we represent T as a matrix
( a b )
( c d ),
then det(T) is supposed to be ad − bc. Now T(e_1) = ae_1 + ce_2 and T(e_2) = be_1 + de_2, so we have
Λ^2 T(e_1 ∧ e_2) = T(e_1) ∧ T(e_2) = (ae_1 + ce_2) ∧ (be_1 + de_2) = ab(e_1 ∧ e_1) + ad(e_1 ∧ e_2) + cb(e_2 ∧ e_1) + cd(e_2 ∧ e_2) = (ad − bc)(e_1 ∧ e_2).
In general, a similar calculation shows that the determinant of a matrix (a_{ij}) is given by a sum of prod-
ucts ±a_{1σ(1)} ··· a_{nσ(n)}, where σ is a permutation of the numbers 1, . . . , n and the sign is given by the sign of the
permutation σ.
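This permutation expansion can be implemented directly and compared against a library determinant. A sketch, assuming numpy (the helpers `sign` and `det_by_permutations` are not from the notes; `sign` computes the sign of a permutation by counting inversions):

```python
import numpy as np
from itertools import permutations

def sign(p):
    # Sign of a permutation: +1 for an even number of inversions, -1 for odd.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_by_permutations(a):
    # Sum over all permutations p of the signed products a[0][p[0]] * ... * a[n-1][p[n-1]].
    n = len(a)
    return sum(sign(p) * np.prod([a[i][p[i]] for i in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
assert np.isclose(det_by_permutations(A), np.linalg.det(A))
```

This is an O(n · n!) computation, so it is only a sanity check, not how determinants are computed in practice.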
Many important properties of determinants are easy to understand from this definition. First of all, it is
manifestly independent of any choice of basis. Second, it is clear that det(ST) = det(S) det(T), since Λ^n(ST)
is just the composition Λ^n(S) ∘ Λ^n(T), which is just multiplication by det(T) followed by multiplication by
det(S). We can then use this to prove the other extremely useful property of determinants:
Theorem 4.1. Let V be finite-dimensional and T : V → V be linear. Then T is invertible iff det(T) ≠ 0.
Proof. First, if TT^{−1} = I, then det(T) det(T^{−1}) = det(I) = 1, so det(T) ≠ 0. Conversely, suppose T is not
invertible. Then for some nonzero v, T(v) = 0. Pick a basis {e_i} for V such that e_1 = v. Then
det(T) e_1 ∧ ··· ∧ e_n = Λ^n T(e_1 ∧ ··· ∧ e_n) = T(e_1) ∧ ··· ∧ T(e_n) = 0 ∧ ··· ∧ T(e_n) = 0,
so det(T) = 0.
Let's now change gears and look at another construction, closely related to tensor products. We will
assume all vector spaces are finite-dimensional from now on.
Definition 4.2. Let V and W be vector spaces. Then we write Hom(V, W) for the set of linear maps from
V to W. This is a vector space by defining addition and scalar multiplication pointwise.
More concretely, if we pick bases for V and W, we can think of Hom(V, W) as a vector space of matrices:
if V is n-dimensional and W is m-dimensional, a linear map V → W can be represented as an n × m matrix
(assuming we pick bases). It follows that the dimension of Hom(V, W) is nm.
We also know another way to form a vector space from V and W that is nm-dimensional, namely the
tensor product V ⊗ W! It follows that Hom(V, W) is isomorphic to V ⊗ W. However, we could ask whether it
is natural to think of them as being isomorphic; that is, whether we can write down an isomorphism between
them without choosing bases.
To understand this, we will specialize to the case W = k. In that case, V ⊗ W = V ⊗ k can naturally be
identified with V, and Hom(V, W) = Hom(V, k) is called the dual of V and written V*. An element of V*
is a linear function that eats a vector in V and gives you a scalar.
Given a basis {e_i} for V, we get a basis {ε_i} (the dual basis) for V* as follows: let ε_i be the linear map
such that ε_i(e_i) = 1 and ε_j(e_i) = 0 if j ≠ i. This completely defines ε_i since {e_i} is a basis, and we can
easily see that these are linearly independent (because (Σ c_i ε_i)(e_j) = c_j, so if Σ c_i ε_i = 0 then c_i = 0 for
all i). Finally, the ε_i span all of V* since given any φ ∈ V*, we can show that φ = Σ φ(e_i) ε_i.
Thus given a basis {e_i} for V, we obtain an isomorphism T : V → V* by defining T(e_i) = ε_i. However,
for this to be a natural isomorphism, it should not depend on which basis we chose. Unfortunately, it
does!
Example 4.3. Let V = k^2, and let {e_1, e_2} be the standard basis, with dual basis {ε_1, ε_2}. A different
basis for V is given by f_1 = e_1 and f_2 = e_1 + e_2; call the dual basis to this {φ_1, φ_2}. Now φ_1(e_1) = φ_1(f_1) = 1
and φ_1(e_2) = φ_1(f_2 − f_1) = −1, so φ_1 = ε_1 − ε_2. Similarly, φ_2 = ε_2.
Now the isomorphism T : V → V* given by the basis {e_1, e_2} is T(e_1) = ε_1, T(e_2) = ε_2. However, the
isomorphism S : V → V* given by the basis {f_1, f_2} is S(f_1) = φ_1, S(f_2) = φ_2, and hence
S(e_1) = S(f_1) = φ_1 = ε_1 − ε_2,
S(e_2) = S(f_2 − f_1) = φ_2 − φ_1 = 2ε_2 − ε_1.
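The computation in Example 4.3 has a concrete matrix form: if the columns of a matrix B express a basis {f_i} in the standard basis, the dual basis functionals φ_i are the rows of B^{−1}, since the condition φ_i(f_j) = δ_ij is exactly B^{−1}B = I. A sketch, assuming numpy (not part of the notes):

```python
import numpy as np

# Columns of B are the basis vectors f_1 = e_1 and f_2 = e_1 + e_2
# of Example 4.3, written in the standard basis.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The dual basis functionals phi_i are the rows of B^{-1}:
# row i of B^{-1} pairs with column j of B to give delta_ij.
Phi = np.linalg.inv(B)

# In coordinates on the standard dual basis {eps_1, eps_2}:
assert np.allclose(Phi[0], [1.0, -1.0])   # phi_1 = eps_1 - eps_2
assert np.allclose(Phi[1], [0.0, 1.0])    # phi_2 = eps_2
```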
Thus S and T are quite different, showing that the isomorphism V ≅ V* very much depends on what basis
you use!
Thus we should not think of V and V* as being the same; we can only identify them if we've chosen a
basis. However, we can identify V with its double dual!
Theorem 4.4. Let V be a (finite-dimensional) vector space and v ∈ V. Let v̂ : V* → k be defined by
v̂(φ) = φ(v). Then v ↦ v̂ is an isomorphism from V to (V*)*.
Proof. It is straightforward to check that v̂ : V* → k is a linear map, so v̂ ∈ (V*)*. Similarly, it is
straightforward to check that the map v ↦ v̂ is linear. Thus we only need to check that it is an isomorphism.
If we pick a basis {e_i} for V, let {ε_i} be the dual basis for V* and let {f_i} be the basis of (V*)* dual to
{ε_i}. We claim that f_i = ê_i; it follows that our map is an isomorphism. Indeed, ê_i(ε_i) = ε_i(e_i) = 1 and
ê_i(ε_j) = ε_j(e_i) = 0 if i ≠ j, exactly the definition of the dual basis {f_i} of {ε_i}.
Thus we should think of V* as different from V (even though they have the same dimension), but (V*)*
is the same as V.
Let's now turn back to the general question of how Hom(V, W) and V ⊗ W might be related. In the case
W = k, we found that Hom(V, k) = V* and V ⊗ k = V were of the same dimension but not the same. For
general W, we have a similar situation.
Theorem 4.5. Define a map T : V* ⊗ W → Hom(V, W) by T(φ ⊗ w)(v) = φ(v)w (remember φ(v) ∈ k is a
scalar, so we can multiply it with w). Then T is an isomorphism, and so V* ⊗ W ≅ Hom(V, W) in a natural
(basis-independent) way.
Proof. Pick a basis {e_i} for V, the dual basis {ε_i} for V*, and a basis {f_j} for W. A basis for Hom(V, W) is
given by the matrices with all entries 0 except for the ijth entry 1; this is exactly the linear map E_i^j defined
by E_i^j(e_i) = f_j and E_i^j(e_k) = 0 if k ≠ i. We claim that T(ε_i ⊗ f_j) = E_i^j, and hence T sends a basis to a basis
and is an isomorphism.
To check this, we must just evaluate T(ε_i ⊗ f_j) on each e_k and see that it agrees with E_i^j. But T(ε_i ⊗
f_j)(e_k) = ε_i(e_k) f_j, which is f_j if i = k and 0 otherwise, which is exactly what we want.
Note that the coefficients of a map A : V → W with respect to the basis {E_i^j} are exactly the entries of
the matrix representing A. Thus if we think of a map A : V → W as an element of V* ⊗ W, its coefficients
with respect to the basis {ε_i ⊗ f_j} are exactly its matrix entries.
4.1 Exercises
Exercise 4.1. Given a linear map T : k^3 → k^3 given by a matrix
( a_11 a_12 a_13 )
( a_21 a_22 a_23 )
( a_31 a_32 a_33 ),
show that
det(T) = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 − a_11 a_23 a_32 − a_13 a_22 a_31 − a_12 a_21 a_33.
Exercise 4.2. Given a linear map T : k^n → k^n given by a matrix with entries a_{ij} (1 ≤ i, j ≤ n), show that
det(T) = Σ_{σ ∈ S_n} ε(σ) Π_i a_{iσ(i)},
where the sum is over all permutations σ of {1, . . . , n} and ε(σ) = ±1 is the sign of the permutation.
Exercise 4.3. Define a map T : V* ⊗ W* → (V ⊗ W)* by T(φ ⊗ ψ)(v ⊗ w) = φ(v)ψ(w) (so T(φ ⊗ ψ) is a
map V ⊗ W → k). Show that T is an isomorphism. (Hint: Use bases.)
Exercise 4.4. Write the identity map I : V → V as an element of V ⊗ V*, in terms of a basis {e_i} for V
and the dual basis {ε_i} on V*.
Exercise 4.5. Let μ : V × V → k be a bilinear map. We say μ is nondegenerate if for any nonzero v ∈ V, there
exists a w ∈ V such that μ(v, w) ≠ 0. Show that a nondegenerate bilinear map gives an isomorphism
T : V → V* defined by T(v)(w) = μ(v, w).
Exercise 4.6. Let {e_i} be a basis for V and define μ(Σ c_i e_i, Σ d_i e_i) = Σ c_i d_i (the dot product with respect
to this basis). Show that μ is a nondegenerate bilinear map V × V → k, and show that the associated
isomorphism V → V* sends the basis {e_i} to its dual basis.
Exercise 4.7. Given a linear map T : V → W, you can get a linear map T* : W* → V* by defining
T*(φ)(v) = φ(T(v)) (so T*(φ) is a map V → k). We can think of T as an element of W ⊗ V* and T* as an
element of V* ⊗ (W*)* = V* ⊗ W. Show that if we identify W ⊗ V* with V* ⊗ W by just swapping the order
of tensors, then T is the same as T*. (Hint: Use bases.)
5 Traces and TQFT
Another way that Hom(V, W) is related to tensor products is the following. Given f ∈ Hom(V, W) and v ∈ V,
we can evaluate f on v to get an element f(v) ∈ W. This is a bilinear map Hom(V, W) × V → W, so it gives a
linear map ev : Hom(V, W) ⊗ V → W. In particular, for W = k, we get a map tr : V* ⊗ V → k. Furthermore,
if we identify Hom(V, W) with V* ⊗ W = W ⊗ V* as in Theorem 4.5, ev : Hom(V, W) ⊗ V = W ⊗ V* ⊗ V → W
is just given by pairing up the V* and V using tr. That is, ev(w ⊗ v* ⊗ v) = v*(v)w (for v* ∈ V*). This can
be seen by the map used in Theorem 4.5: an element of W ⊗ V* maps V to W by pairing the element of V
with the V* part of the tensor product.
More generally, there is a composition map Hom(U, V) × Hom(V, W) → Hom(U, W) which composes two linear maps. Writing these Homs in terms of tensor products and duals, this is a map U* ⊗ V ⊗ V* ⊗ W → U* ⊗ W. A similar analysis to the one above shows that this map is just given by pairing up the V and V* in the middle: we compose linear maps by sending u* ⊗ v ⊗ v* ⊗ w to v*(v) u* ⊗ w.
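In coordinates, "pairing up the middle V and V*" is just a contraction of indices, and it reproduces ordinary matrix multiplication. A short sketch (the shapes and names are ours):

```python
import numpy as np

# f in Hom(U, V) stored as an array indexed (V, U*); g in Hom(V, W) indexed
# (W, V*).  Pairing V* with V in the middle is an einsum contraction.
rng = np.random.default_rng(3)
f = rng.standard_normal((3, 2))          # U = R^2 -> V = R^3
g = rng.standard_normal((4, 3))          # V = R^3 -> W = R^4

composed = np.einsum('wv,vu->wu', g, f)  # contract the middle V, V* pair
assert np.allclose(composed, g @ f)      # same as matrix multiplication
```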
Let's now see what happens when we let W = V. We then have Hom(V, V) = V ⊗ V*, and there is a map tr : Hom(V, V) = V ⊗ V* → k, a map which takes a linear map T : V → V and gives a scalar. This is called the trace tr(T) of T.
You may have seen the trace of a matrix defined as the sum of its diagonal entries. This definition is really quite mystifying: why on earth would you take the diagonal entries (as opposed to some other collection of entries) and add them up? Why on earth would that be independent of a choice of basis?
On the other hand, our definition is manifestly independent of a choice of basis and is very naturally defined: it's just the natural evaluation map on V ⊗ V*. We can furthermore check that this agrees with the sum-of-the-diagonal-entries definition.
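Basis-independence is easy to test numerically: changing basis by an invertible P replaces the matrix A of a linear map by P A P⁻¹, and the diagonal sum does not change. A quick sketch (our own example matrices):

```python
import numpy as np

# Because tr is defined on V (x) V* without choosing a basis, the diagonal
# sum of the matrix of T cannot depend on the basis used.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
P = rng.standard_normal((5, 5)) + 5 * np.eye(5)  # generically invertible

assert np.isclose(np.trace(P @ A @ np.linalg.inv(P)), np.trace(A))
```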
Example 5.1. Let V have basis {e_1, e_2} and let {φ_1, φ_2} be the dual basis of V*. Let T : V → V have matrix

    ( a  b )
    ( c  d ).

We want to write down T as an element of V ⊗ V*. Recall that e_i ⊗ φ_j corresponds to the matrix with ij entry 1 and all other entries 0. Thus T is given by

    a e_1 ⊗ φ_1 + b e_1 ⊗ φ_2 + c e_2 ⊗ φ_1 + d e_2 ⊗ φ_2.

Now the trace just takes these tensors and evaluates them together, so e_i ⊗ φ_j goes to 1 if i = j and 0 otherwise. Thus tr(T) ends up being exactly a + d, the sum of the diagonal entries. The same argument would generalize to n × n matrices for any n.
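The computation in Example 5.1 can be mimicked directly in code: write the matrix as a sum of e_i ⊗ φ_j terms and apply the pairing e_i ⊗ φ_j ↦ δ_ij. A small sketch with concrete numbers standing in for a, b, c, d:

```python
import numpy as np

# Write T = sum_{ij} A[i, j] * e_i (x) phi_j, then evaluate each tensor:
# e_i (x) phi_j goes to 1 if i == j and 0 otherwise.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # the matrix ( a b ; c d )
n = A.shape[0]

tr = sum(A[i, j] * (1.0 if i == j else 0.0)
         for i in range(n) for j in range(n))
assert tr == A[0, 0] + A[1, 1] == np.trace(A)   # a + d, the diagonal sum
```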
Besides basis-invariance, one of the most important properties of traces is the following.
Theorem 5.2. Let S, T : V → V be linear maps. Then tr(ST) = tr(TS).
Proof. Write S and T as elements of V* ⊗ V, say S = v* ⊗ v and T = w* ⊗ w (actually, S and T will be linear combinations of things of this form, but by linearity of everything we can treat each term separately). Then recall that we compose linear maps by pairing up the vectors in the middle, so ST will be v*(w) w* ⊗ v, so tr(ST) = v*(w) w*(v). On the other hand, TS = w*(v) v* ⊗ w and tr(TS) = w*(v) v*(w). But multiplication of scalars is commutative, so these are the same!
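Both the theorem and the rank-one computation in its proof are easy to check numerically. A sketch (random matrices of our choosing):

```python
import numpy as np

# Theorem 5.2 for generic matrices, plus the rank-one case from the proof:
# for S = v* (x) v and T = w* (x) w, both traces equal v*(w) w*(v).
rng = np.random.default_rng(5)
S = rng.standard_normal((4, 4))
T = rng.standard_normal((4, 4))
assert np.isclose(np.trace(S @ T), np.trace(T @ S))

v, vstar = rng.standard_normal(4), rng.standard_normal(4)
w, wstar = rng.standard_normal(4), rng.standard_normal(4)
S1 = np.outer(v, vstar)                  # the map x -> v*(x) v
T1 = np.outer(w, wstar)                  # the map x -> w*(x) w
assert np.isclose(np.trace(S1 @ T1), (vstar @ w) * (wstar @ v))
```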
Let's now look more closely at how duals and tensor products interact with linear maps. First, note that an element u of a vector space U is just the same thing as a linear map T : k → U, namely the linear map such that T(1) = u. Thus in showing that elements of V* ⊗ W = Hom(V, W) are the same as maps V → W, we've shown that maps V → W are the same as maps k → V* ⊗ W. We can think of this as moving V to the other side by dualizing it. In fact, this works more generally.
Theorem 5.3. Let U, V, and W be (finite-dimensional) vector spaces. Then linear maps U ⊗ V → W are naturally in bijection with linear maps U → V* ⊗ W = Hom(V, W).
Proof. Given T : U ⊗ V → W, we can define T̃ : U → Hom(V, W) by T̃(u)(v) = T(u ⊗ v). Conversely, given T̃ : U → Hom(V, W), we can define T : U ⊗ V → W by T(u ⊗ v) = T̃(u)(v). Clearly these two operations are inverse to each other.
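In coordinates, this bijection is just "currying" a 3-index array: a map U ⊗ V → W is an array G indexed by (W, U, V), and the corresponding map U → Hom(V, W) contracts G against u first. A sketch (shapes and names are ours):

```python
import numpy as np

# T(u (x) v) and T~(u)(v) compute the same numbers, just grouped differently.
rng = np.random.default_rng(6)
G = rng.standard_normal((4, 2, 3))       # U = R^2, V = R^3, W = R^4
u = rng.standard_normal(2)
v = rng.standard_normal(3)

T_of_uv = np.einsum('wuv,u,v->w', G, u, v)     # T(u (x) v)
Ttilde_u = np.einsum('wuv,u->wv', G, u)        # T~(u), a matrix in Hom(V, W)
assert np.allclose(Ttilde_u @ v, T_of_uv)      # T~(u)(v) = T(u (x) v)
```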
That is, a V on one side of a linear map is equivalent to a V* on the other side. Another way to see this is that the set of maps U ⊗ V → W is

    Hom(U ⊗ V, W) = (U ⊗ V)* ⊗ W = U* ⊗ V* ⊗ W = U* ⊗ (V* ⊗ W) = Hom(U, V* ⊗ W)

(using Exercise 4.3 to say (U ⊗ V)* = U* ⊗ V*).
As yet another way to understand this correspondence, we can take T̃ : U → V* ⊗ W and define a linear map 1 ⊗ T̃ : V ⊗ U → V ⊗ V* ⊗ W by (1 ⊗ T̃)(v ⊗ u) = v ⊗ T̃(u). We can then compose this with the map V ⊗ V* ⊗ W → k ⊗ W = W which just evaluates the V* factor on the V factor (i.e., the trace map) to get a map T : U ⊗ V = V ⊗ U → V ⊗ V* ⊗ W → W. Chasing through all the definitions (perhaps using a basis) shows that this is the same operation as the one used above.
We can understand this operation in another way by drawing pictures of lines. Unfortunately, I don't know how to make such pretty pictures in LaTeX, so I can't provide these pictures in these notes.
5.1 Exercises
Exercise 5.1. Verify that if you think of maps U → V and V → W as elements of V ⊗ U* and W ⊗ V*, then you compose maps via the linear map W ⊗ V* ⊗ V ⊗ U* → W ⊗ U* which sends w ⊗ v* ⊗ v ⊗ u* to v*(v) w ⊗ u*.
Exercise 5.2. Show that the dual of the trace map V ⊗ V* → k is the map k* = k → V* ⊗ V = Hom(V, V) which sends 1 to the identity I : V → V. (Here we are taking the dual of a linear map as in Exercise 4.7.)
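A coordinate version of Exercise 5.2: pairing Hom(V, V) with its dual entrywise, the functional tr corresponds to the identity matrix, since tr(A) = Σ_{ij} I[i, j] A[i, j]. A small numerical sketch (our own model):

```python
import numpy as np

# The trace of A equals the entrywise pairing of A with the identity matrix,
# which is the coordinate form of "the dual of tr sends 1 to I".
rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))
I = np.eye(3)
assert np.isclose(np.trace(A), np.sum(I * A))
```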
Exercise 5.3. Show that there is a natural (basis-free) isomorphism Λ^r(V*) ≅ (Λ^r(V))*. (Hint: Find a way to wedge together r elements of V* to get a map Λ^r(V) → k, and use a basis to show that this gives an isomorphism Λ^r(V*) → (Λ^r(V))*.)
Exercise 5.4. Let V and W be vector spaces and let S : k → V ⊗ W and T : W ⊗ V → k be maps such that the compositions

    V = k ⊗ V --(S ⊗ 1)--> V ⊗ W ⊗ V --(1 ⊗ T)--> V ⊗ k = V
    W = W ⊗ k --(1 ⊗ S)--> W ⊗ V ⊗ W --(T ⊗ 1)--> k ⊗ W = W

are both the identity maps (here 1 ⊗ S, for example, means the map sending w ⊗ c to w ⊗ S(c)). Show that there is an isomorphism W ≅ V* such that when you identify W with V*, S(1) is the identity map I ∈ Hom(V, V) = V ⊗ V* and T : V* ⊗ V → k is the trace map.