
COVECTORS AND FORMS: ELEMENTARY INTRODUCTION TO DIFFERENTIAL GEOMETRY IN EUCLIDEAN SPACE

NIKO MAROLA AND WILLIAM P. ZIEMER

Preface
The purpose of this note is to provide an elementary and more detailed treatment of the algebra of differential forms in $\mathbb{R}^n$, and in $\mathbb{R}^3$ in particular. The content of the note is a much watered-down version of the beginning of Geometric Integration Theory by Hassler Whitney [7]. There is also a more recent treatise on this subject by Krantz and Parks [5]. It is assumed that the reader has not received any formal instruction in differential geometry, advanced analysis, or linear algebra. This deficiency created three drawbacks:
(a) The development of the Grassmann algebra of three-space requires more than half the note.
(b) Several of the proofs are unduly computational, because giving simple proofs would have involved introducing even more structure. See, for instance, the duality theorem identifying $(\mathbb{R}^3_2)^*$, the dual space of $\mathbb{R}^3_2$, with the co-2-vectors in §7.1 and §7.2.
(c) There is no coordinate-free treatment as is done in Whitney [7].
Should you have any comments, please email niko.marola@helsinki.fi.

Contents
1. Vectors - Brief review of Euclidean 3-space
2. The space of covectors
3. Differential 1-forms
4. The space of 2-vectors
5. The space of 3-vectors
6. The space of p-vectors in $\mathbb{R}^n$
7. The space of 2-covectors
7.1. Definition I
7.2. Definition II
8. The space of 3-covectors
9. The space of p-covectors in $\mathbb{R}^n$
10. Applications to area theory
10.1. Case 1
10.2. Case 2
10.3. Case 3
10.4. General case
11. Differential 2-forms
12. Differential 3-forms
13. The exterior algebra of $\mathbb{R}^3$
13.1. Exterior products of k-covectors (k = 0, 1, 2, 3)
14. The algebra of differential forms
14.1. Exterior derivative of differential forms
15. Effects of a transformation on differential forms
16. The Gauss-Green-Stokes theorems
17. A glance at currents in $\mathbb{R}^n$
References

1. Vectors - Brief review of Euclidean 3-space

We denote the basis vectors in $\mathbb{R}^3$ by $e_1, e_2, e_3$, and by $e^1, e^2, e^3$ the dual basis for the dual space $(\mathbb{R}^3)^* = \{f : \mathbb{R}^3 \to \mathbb{R}^1 \text{ linear}\}$, with $e^i(e_j) = \delta_{ij}$ for each pair of indices $i, j$. The standard inner product and Euclidean norm on $\mathbb{R}^3$ are denoted by $(\cdot\,,\cdot)$ and $|\cdot|$, respectively.
Addition of vectors, multiplication of a vector by a real number, and the dot product of two vectors in $\mathbb{R}^3$ satisfy the following fundamental rules: for all vectors $u, v, w \in \mathbb{R}^3$, and all scalars $\lambda, \mu \in \mathbb{R}^1$,
1. $(u + v) + w = u + (v + w)$
2. $u + v = v + u$
3. there is a zero vector $0$ so that $0 + v = v$
4. for each $v$ there is a negative $-v$ of $v$ so that $v + (-v) = 0$
5. $\lambda(u + v) = \lambda u + \lambda v$
6. $(\lambda + \mu)u = \lambda u + \mu u$
7. $(\lambda\mu)u = \lambda(\mu u)$
8. $1u = u$
9. $u \cdot v = v \cdot u$
10. $u \cdot (v + w) = u \cdot v + u \cdot w$
11. $(\lambda u) \cdot v = \lambda(u \cdot v)$
12. $u \cdot u \ge 0$, and $u \cdot u = 0$ iff $u = 0$

From these 12 basic properties, the following additional results can be deduced:
13. $0v = 0$
14. $\lambda 0 = 0$
15. $0 \cdot u = 0$
16. $\lambda(-u) = (-\lambda)u = -(\lambda u)$
The norm of a vector, $|u| = \sqrt{u \cdot u}$, satisfies:
17. $|u| = 0$ iff $u = 0$
18. $|\lambda u| = |\lambda|\,|u|$
19. $|u + v| \le |u| + |v|$
20. Schwarz inequality: $|u \cdot v| \le |u|\,|v|$
Properties 17-20 may be deduced from the twelve basic properties and the definition of the norm.

2. The space of covectors


A covector is a linear map $f : \mathbb{R}^3 \to \mathbb{R}^1$. That is to say, it is a function $f$ satisfying
$$f(\lambda_1 v_1 + \lambda_2 v_2) = \lambda_1 f(v_1) + \lambda_2 f(v_2).$$
The set of all covectors is denoted $(\mathbb{R}^3_1)^*$.
If $f$ and $g$ are covectors, and $\lambda \in \mathbb{R}^1$, we define:
(1) $(f + g)(v) = f(v) + g(v)$,
(2) $(\lambda f)(v) = \lambda(f(v))$.
Proposition 2.1. The sum of covectors is a covector. The product of
a real number and a covector is a covector.
Proof. The proof is left to the reader. 
Proposition 2.2. Addition and scalar multiplication of covectors satisfy the fundamental rules 1-8 of the preceding section.
Proof. The proof is left to the reader. 
Proposition 2.3. A covector $f \in (\mathbb{R}^3_1)^*$ is completely determined by its value on the three basis vectors $e_1, e_2, e_3$ of $\mathbb{R}^3$.
Proof. If $v \in \mathbb{R}^3$, then $v$ can be uniquely expressed as $v = \alpha_1 e_1 + \alpha_2 e_2 + \alpha_3 e_3$. Now using the linearity of the covector $f$, we have $f(v) = \alpha_1 f(e_1) + \alpha_2 f(e_2) + \alpha_3 f(e_3)$. Since $f(e_1), f(e_2), f(e_3)$ are known, so is $f(v)$. $\square$

Proposition 2.4. Conversely, if a function $f : \mathbb{R}^3 \to \mathbb{R}^1$ is defined by selecting 3 numbers $f(e_1), f(e_2), f(e_3)$ arbitrarily, and extending the definition of $f$ to all vectors $v \in \mathbb{R}^3$ by linearity,
$$f(\alpha_1 e_1 + \alpha_2 e_2 + \alpha_3 e_3) = \alpha_1 f(e_1) + \alpha_2 f(e_2) + \alpha_3 f(e_3),$$
then the function $f$ so defined is a covector.
Proof. The proof is left to the reader. 
We now wish to define a basis for the space of covectors. We define these covectors $f_1, f_2, f_3$ by the formulae
$$f_i(e_j) = \begin{cases} 0 & \text{if } i \ne j, \\ 1 & \text{if } i = j. \end{cases}$$
By the preceding two propositions, $f_1, f_2, f_3$ are well-defined covectors. Note also that $f_i(a_1, a_2, a_3) = a_i$, $i = 1, 2, 3$.
Exercise 2.5. Prove that the three covectors f1 , f2 , f3 just defined form
a basis for the space of covectors.
It is possible to define a dot product between covectors in such a way that fundamental properties 9-12 of the preceding section are satisfied, and thus obtain a complete analogy between the space of vectors $\mathbb{R}^3$ and the space of covectors $(\mathbb{R}^3_1)^*$. Note also that all concepts of the preceding sections can be generalized to n-space. In particular, an n-dimensional covector would be a linear function $f : \mathbb{R}^n \to \mathbb{R}^1$.

3. Differential 1-forms
Let $\Omega$ be an open set in $\mathbb{R}^3$. A differential 1-form on $\Omega$ is a function
$$w : \Omega \to (\mathbb{R}^3_1)^*.$$
Thus to each point $p \in \Omega$ the differential 1-form $w$ associates a linear function $w(p) : \mathbb{R}^3 \to \mathbb{R}^1$. In other words, a differential 1-form is a covector-valued function.
Example 3.1. Suppose that $f : \Omega \to \mathbb{R}^1$, $f \in C^1(\Omega)$. Then we define a function
$$df : \Omega \to (\mathbb{R}^3_1)^*$$
by the rule: to each point $p \in \Omega$, $df$ associates the covector $df(p)$, i.e., the differential of the function $f$ at the point $p$. We already know that $df(p)$ is a linear function for each $p$, so the function $df$ just defined is indeed a differential 1-form.

Thus we see that the class of all differential 1-forms on open sets includes among its members all the differentials, i.e., special 1-forms which can be obtained from a function $f$ in the manner just illustrated. However, the notion of differential 1-form is more general than the notion of differential of a function; that is, there are 1-forms $w$ which cannot be expressed as $df$ for any function $f$. We shall prove this presently. We will drop the word differential from the name for the sake of brevity.
Proposition 3.2. Let $w$ be a 1-form on $\Omega$. Then $w$ can be represented by
$$w(p) = A_1(p) f_1 + A_2(p) f_2 + A_3(p) f_3$$
for all $p$, where $A_i$, $i = 1, 2, 3$, is a real-valued function defined on $\Omega$, and $f_i$, $i = 1, 2, 3$, is the standard basis covector defined in §2. This representation is unique.
Proof. Fix a point $p \in \Omega$. Then $w(p) \in (\mathbb{R}^3_1)^*$. Hence the covector $w(p)$ can be uniquely expressed in terms of the three basis covectors $f_1, f_2, f_3$ by $w(p) = A_1 f_1 + A_2 f_2 + A_3 f_3$. The coefficients $A_1, A_2, A_3$ are determined by the covector $w(p)$, which in turn depends on the choice of the point $p \in \Omega$. Hence the $A_i$'s are in fact functions of $p$. $\square$
Let $w$ be a 1-form on $\Omega$. Then the three functions $A_1, A_2, A_3$ on $\Omega$ are called the coordinate functions of $w$. A 1-form $w$ is said to be continuous (differentiable, in class $C^1$, etc.) if and only if its coordinate functions are continuous (differentiable, in class $C^1$, etc.).
We now turn to the notion of integration of 1-forms over curves. Let $w$ be a 1-form on an open set $\Omega$. Let $\gamma : [a, b] \to \mathbb{R}^3$ be a smooth curve such that $\mathrm{trace}(\gamma) \subset \Omega$. Then the integral of $w$ over $\gamma$ is given by
$$\int_\gamma w = \int_a^b w(\gamma(t))(\gamma'(t))\, dt.$$
In order that the left hand side should always exist, we shall assume that the 1-form $w$ is continuous. Let us write out the definition of $\int_\gamma w$ in coordinate form. Suppose $w(p) = \sum_{i=1}^3 A_i(p) f_i$, and $\gamma(t) = (x(t), y(t), z(t))$. Then
$$w(\gamma(t)) = \sum_{i=1}^3 A_i(\gamma(t)) f_i.$$
Thus by using the rules of addition and scalar multiplication of covectors, we have that
$$w(\gamma(t))(\gamma'(t)) = \big[(A_1 \circ \gamma)x' + (A_2 \circ \gamma)y' + (A_3 \circ \gamma)z'\big](t).$$
Now using the definition of $\int_\gamma w$,
$$\int_\gamma w = \int_a^b \big[(A_1 \circ \gamma)x' + (A_2 \circ \gamma)y' + (A_3 \circ \gamma)z'\big](t)\, dt.$$

Example 3.3. For the sake of simplicity, and to show how the notion of 1-form can be generalized to n-space, where $n \ne 3$, let us integrate a 1-form in the plane over a curve in the plane. Let $\gamma(t) = (5\cos t, 5\sin t)$, $0 \le t \le 2\pi$, and $w(x, y) = x^2 y\, f_1 + xy\, f_2$. Here of course $f_1, f_2$ are basis covectors for $(\mathbb{R}^2_1)^*$, defined by
$$f_i(e_j) = \begin{cases} 0, & i \ne j, \\ 1, & i = j. \end{cases}$$
Then $\gamma'(t) = (-5\sin t, 5\cos t)$, and $w(\gamma(t)) = 125\cos^2 t \sin t\, f_1 + 25\cos t \sin t\, f_2$, so that
$$\int_\gamma w = \int_0^{2\pi} (-625\cos^2 t \sin^2 t + 125\cos^2 t \sin t)\, dt.$$
As a second example, let $w : \Omega \to (\mathbb{R}^3_1)^*$, where $\Omega = \{(x, y, z) \in \mathbb{R}^3 : xy > 0\}$, be given by $w(x, y, z) = \log(xy)\, f_2$. Notice that the coordinate functions $A_1, A_3$ are zero. Let $\gamma(t) = (2t, e^t, 1)$, $0 \le t \le 1$. Then
$$\int_\gamma w = \int_0^1 (\log 2 + \log t + t)e^t\, dt.$$
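The coordinate formula for $\int_\gamma w$ can be checked numerically. The sketch below approximates the first integral of Example 3.3 by a midpoint Riemann sum (the helper names and the grid size $N$ are our choices, not the text's); since $\int_0^{2\pi}\cos^2 t\sin t\, dt = 0$ and $\int_0^{2\pi}\cos^2 t\sin^2 t\, dt = \pi/4$, the exact value is $-625\pi/4$.

```python
import math

def line_integral(A, gamma, dgamma, a, b, N=100_000):
    """Midpoint-rule approximation of the integral of the 1-form with
    coordinate functions A over the curve gamma with derivative dgamma."""
    h = (b - a) / N
    total = 0.0
    for i in range(N):
        t = a + (i + 0.5) * h
        p, dp = gamma(t), dgamma(t)
        # the integrand [ (A1 o gamma) x' + (A2 o gamma) y' ](t)
        total += sum(Ai(p) * dpi for Ai, dpi in zip(A, dp)) * h
    return total

A = (lambda p: p[0] ** 2 * p[1], lambda p: p[0] * p[1])  # A1 = x^2 y, A2 = xy
gamma = lambda t: (5 * math.cos(t), 5 * math.sin(t))
dgamma = lambda t: (-5 * math.sin(t), 5 * math.cos(t))

val = line_integral(A, gamma, dgamma, 0.0, 2 * math.pi)
print(val)  # close to -625*pi/4 ~ -490.874
```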

Theorem 3.4. Let $w$ be a continuous 1-form on $\Omega$ (into $(\mathbb{R}^3_1)^*$) and let $\gamma : [a, b] \to \Omega$, $\delta : [c, d] \to \Omega$ be smoothly equivalent curves. Then
$$\int_\gamma w = \int_\delta w.$$
Proof. The proof is left to the reader. $\square$

4. The space of 2-vectors

Let $u$ and $v$ be any vectors in $\mathbb{R}^3$. We consider expressions of the form
$$u \wedge v$$
(read "u wedge v"). This object is called the wedge product of the vectors $u$ and $v$. In general, we shall consider expressions of the form
$$\alpha = \lambda_1(u_1 \wedge v_1) + \ldots + \lambda_k(u_k \wedge v_k),$$
where the $u_i$'s and $v_i$'s are vectors in $\mathbb{R}^3$, and the $\lambda_i$'s are real numbers. Such expressions are called 2-vectors in $\mathbb{R}^3$. A 2-vector, then, is simply a linear combination of wedge products. The set of all 2-vectors is denoted $\mathbb{R}^3_2$. Now, if $\alpha = \lambda_1(u_1 \wedge v_1) + \ldots + \lambda_k(u_k \wedge v_k)$ and $\beta =$

$\mu_1(w_1 \wedge x_1) + \ldots + \mu_k(w_k \wedge x_k)$ are 2-vectors, we may define the sum by
$$\alpha + \beta = \lambda_1(u_1 \wedge v_1) + \ldots + \lambda_k(u_k \wedge v_k) + \mu_1(w_1 \wedge x_1) + \ldots + \mu_k(w_k \wedge x_k).$$
That is, we merely string the two expressions together in one linear combination. Similarly, if $\lambda$ is a real number, and $\alpha$ is as above, then we define $\lambda\alpha$ by $\lambda\lambda_1(u_1 \wedge v_1) + \ldots + \lambda\lambda_k(u_k \wedge v_k)$.
We want the set $\mathbb{R}^3_2$ of 2-vectors, with the rules for addition and scalar multiplication just defined, to satisfy fundamental rules 1-8 of §1. Accordingly we make the following assumptions:
(1) $\lambda_1(u_1 \wedge v_1) + \lambda_2(u_2 \wedge v_2) = \lambda_2(u_2 \wedge v_2) + \lambda_1(u_1 \wedge v_1)$,
(2) $\lambda(u \wedge v) + \mu(u \wedge v) = (\lambda + \mu)(u \wedge v)$,
(3) $1(u \wedge v) = (u \wedge v)$.
Under these assumptions, the rules 1-8 of §1 are satisfied. In particular, the zero 2-vector will be denoted $0$, and may be thought of as a linear combination in which the coefficients are all zeros. We also make the following four additional assumptions:
(4) $(u_1 + u_2) \wedge v = (u_1 \wedge v) + (u_2 \wedge v)$,
(5) $u \wedge (v_1 + v_2) = (u \wedge v_1) + (u \wedge v_2)$,
(6) $(\lambda u) \wedge v = u \wedge (\lambda v) = \lambda(u \wedge v)$,
(7) $u \wedge u = 0$.
This completes our definition. Observe that the definition is axiomatic, rather than constructive.
Proposition 4.1. $u \wedge v = -(v \wedge u)$.
Proof.
$$(u \wedge v) + (v \wedge u) = \big[(u \wedge u) + (u \wedge v)\big] + \big[(v \wedge u) + (v \wedge v)\big] = \big[u \wedge (u + v)\big] + \big[v \wedge (u + v)\big] = (u + v) \wedge (u + v) = 0. \qquad \square$$
A 2-vector is said to be simple if it is of the form $\lambda(u \wedge v)$, i.e., a single wedge product.
Assumptions 4-7 of the definition of the space of 2-vectors given above give us hope of reducing more complicated 2-vectors to simple 2-vectors. For example, 4 and 5 enable us to reduce certain 2-vectors involving two wedge products to simple 2-vectors. Thus the 2-vector $(u_1 \wedge v) + (u_2 \wedge v)$ is simple, because it can be written as $(u_1 + u_2) \wedge v$ by 4.
Exercise 4.2. Prove that the following 2-vectors are simple:
a) $2(u \wedge v) + (u \wedge w) + \big((v + w) \wedge u\big)$,
b) $(u \wedge v) + (v \wedge w) + (w \wedge u)$.
Remark 4.3. It may seem that perhaps every 2-vector is simple. However, this is not the case.
We next wish to investigate the problem of finding basis 2-vectors. Consider the 2-vectors $e_1 \wedge e_2$, $e_1 \wedge e_3$, and $e_2 \wedge e_3$. It is easy to see that every 2-vector can be written as a linear combination of these three. These three 2-vectors are also linearly independent, hence we have found a basis.
Using the linear independence of these basis 2-vectors, you should be able to answer the question whether all 2-vectors are simple.
Thus far we have obtained a good analogy between the space of 2-vectors and the space $\mathbb{R}^3$, at least so far as fundamental rules 1-8 and the existence of a basis are concerned. For the sake of comparison, it is convenient to think of $\mathbb{R}^3$ as the space of 1-vectors, and to denote it by $\mathbb{R}^3_1$. The analogy becomes even clearer if we establish a correspondence between 2-vectors and 1-vectors by
$$e_1 \wedge e_2 \leftrightarrow e_3, \qquad e_1 \wedge e_3 \leftrightarrow -e_2, \qquad e_2 \wedge e_3 \leftrightarrow e_1.$$
Under this correspondence, a general 2-vector $\alpha = \lambda_1(e_1 \wedge e_2) + \lambda_2(e_1 \wedge e_3) + \lambda_3(e_2 \wedge e_3)$ corresponds to $\lambda_1 e_3 - \lambda_2 e_2 + \lambda_3 e_1$. This correspondence can be used to identify the wedge product with the cross product of classical vector analysis. The geometric interpretation is that $u \wedge v$ gives the area of the oriented parallelogram spanned by the vectors $u$ and $v$.
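The correspondence with the cross product can be verified directly in coordinates. In the sketch below (the helper names `wedge2` and `cross` are ours), the components of $u \wedge v$ in the basis $e_1 \wedge e_2$, $e_1 \wedge e_3$, $e_2 \wedge e_3$ are read off from assumptions 4-7, and the correspondence reproduces the classical cross product:

```python
def wedge2(u, v):
    """Return (l1, l2, l3) with u^v = l1 e1^e2 + l2 e1^e3 + l3 e2^e3."""
    return (u[0] * v[1] - u[1] * v[0],
            u[0] * v[2] - u[2] * v[0],
            u[1] * v[2] - u[2] * v[1])

def cross(u, v):
    """Classical cross product of vectors in R^3."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1, 2, 3), (4, 5, 6)
l1, l2, l3 = wedge2(u, v)
print((l3, -l2, l1))  # the 1-vector l1 e3 - l2 e2 + l3 e1, as a triple
print(cross(u, v))    # the same triple
```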
To complete the analogy between $\mathbb{R}^3_2$ and $\mathbb{R}^3_1$ we introduce the notions of dot product and norm of 2-vectors. Let $\alpha$ and $\beta$ be 2-vectors, say, $\alpha = \lambda_1(e_1 \wedge e_2) + \lambda_2(e_1 \wedge e_3) + \lambda_3(e_2 \wedge e_3)$ and $\beta = \mu_1(e_1 \wedge e_2) + \mu_2(e_1 \wedge e_3) + \mu_3(e_2 \wedge e_3)$. Then define
$$\alpha \cdot \beta = \sum_{i=1}^3 \lambda_i \mu_i.$$
It readily follows that the dot product of 2-vectors satisfies fundamental rules 9-12 of §1. Moreover, for simple 2-vectors, the definition given above is equivalent to the following:
$$(u_1 \wedge v_1) \cdot (u_2 \wedge v_2) = \begin{vmatrix} u_1 \cdot u_2 & u_1 \cdot v_2 \\ v_1 \cdot u_2 & v_1 \cdot v_2 \end{vmatrix}.$$

For any 2-vector , we put || = . The norm of a 2-vector is
well-defined, and it satisfies fundamental rules 1720 of 1.

5. The space of 3-vectors

The definition and theorems of this section parallel those of the preceding section completely; accordingly, we shall not go into as much detail here.
Let $u$, $v$, and $w$ be any vectors in $\mathbb{R}^3$. The object $u \wedge v \wedge w$ is called the wedge product of the vectors $u$, $v$, and $w$. In general, a 3-vector in $\mathbb{R}^3$ is an expression of the form
$$\alpha = \lambda_1(u_1 \wedge v_1 \wedge w_1) + \ldots + \lambda_r(u_r \wedge v_r \wedge w_r).$$
The set of 3-vectors is denoted $\mathbb{R}^3_3$. If $\beta = \mu_1(x_1 \wedge y_1 \wedge z_1) + \ldots + \mu_s(x_s \wedge y_s \wedge z_s)$, and $\alpha$ is as written above, then we define
$$\alpha + \beta = \lambda_1(u_1 \wedge v_1 \wedge w_1) + \ldots + \lambda_r(u_r \wedge v_r \wedge w_r) + \mu_1(x_1 \wedge y_1 \wedge z_1) + \ldots + \mu_s(x_s \wedge y_s \wedge z_s),$$
and
$$\lambda\alpha = \lambda\lambda_1(u_1 \wedge v_1 \wedge w_1) + \ldots + \lambda\lambda_r(u_r \wedge v_r \wedge w_r),$$
for any real number $\lambda$.
In order that fundamental rules 1-8 of §1 be satisfied, we assume
(1) $\big[\lambda_1(u_1 \wedge v_1 \wedge w_1) + \lambda_2(u_2 \wedge v_2 \wedge w_2)\big] + \lambda_3(u_3 \wedge v_3 \wedge w_3) = \lambda_1(u_1 \wedge v_1 \wedge w_1) + \big[\lambda_2(u_2 \wedge v_2 \wedge w_2) + \lambda_3(u_3 \wedge v_3 \wedge w_3)\big]$,
(2) $\lambda_1(u_1 \wedge v_1 \wedge w_1) + \lambda_2(u_2 \wedge v_2 \wedge w_2) = \lambda_2(u_2 \wedge v_2 \wedge w_2) + \lambda_1(u_1 \wedge v_1 \wedge w_1)$,
(3) $\lambda(u \wedge v \wedge w) + \mu(u \wedge v \wedge w) = (\lambda + \mu)(u \wedge v \wedge w)$,
(4) $1(u \wedge v \wedge w) = u \wedge v \wedge w$.
Then we also make the following five additional assumptions:
(5) $(u_1 + u_2) \wedge v \wedge w = (u_1 \wedge v \wedge w) + (u_2 \wedge v \wedge w)$,
(6) $u \wedge (v_1 + v_2) \wedge w = (u \wedge v_1 \wedge w) + (u \wedge v_2 \wedge w)$,
(7) $u \wedge v \wedge (w_1 + w_2) = (u \wedge v \wedge w_1) + (u \wedge v \wedge w_2)$,
(8) $(\lambda u) \wedge v \wedge w = u \wedge (\lambda v) \wedge w = u \wedge v \wedge (\lambda w) = \lambda(u \wedge v \wedge w)$,
(9) $u \wedge v \wedge w = 0$ whenever at least two of the vectors $u, v, w$ are equal.
Again note that the definition is axiomatic rather than constructive. Axioms 1-4 are more or less natural to ensure that fundamental rules 1-8 of §1 are satisfied. Axioms 5-9 carry the special information about the space of 3-vectors.

Exercise 5.1. Show that $u \wedge v \wedge w = -(v \wedge u \wedge w)$.


Proposition 5.2. For any vectors $u$, $v$, and $w$ in $\mathbb{R}^3$, the following are equal:
$$u \wedge v \wedge w = w \wedge u \wedge v = v \wedge w \wedge u = -(v \wedge u \wedge w) = -(u \wedge w \wedge v) = -(w \wedge v \wedge u).$$
Proof. The proof is left to the reader. $\square$
Proposition 5.3. The single 3-vector $e_1 \wedge e_2 \wedge e_3$ forms a basis for the space $\mathbb{R}^3_3$ of 3-vectors.
Proof. Consider a 3-vector of the form $u \wedge v \wedge w$ with $u, v, w \in \mathbb{R}^3$. Then $u, v, w$ can be written as
$$u = \alpha_1 e_1 + \alpha_2 e_2 + \alpha_3 e_3, \quad v = \beta_1 e_1 + \beta_2 e_2 + \beta_3 e_3, \quad w = \gamma_1 e_1 + \gamma_2 e_2 + \gamma_3 e_3.$$
When we form $u \wedge v \wedge w$, and apply assumptions 5-8, we obtain an expression
$$u \wedge v \wedge w = \sum_{i,j,k=1}^3 \lambda_{ijk}(e_i \wedge e_j \wedge e_k),$$
in which there are 27 terms in the sum on the right. In all but 6 of the 27 terms, we have $e_i \wedge e_j \wedge e_k$, where at least two of the three factors are equal. By assumption 9, these terms are all zero. The remaining 6 terms are of the form $e_i \wedge e_j \wedge e_k$, where $i, j, k$ are all different; i.e., $i, j, k$ are 1, 2, 3 in some order. But the previous proposition shows us that these 6 vectors are then equal to $\pm e_1 \wedge e_2 \wedge e_3$, where the sign depends upon the ordering of the subscripts. Combining these terms, we have
$$u \wedge v \wedge w = \lambda(e_1 \wedge e_2 \wedge e_3).$$
The reader should verify that
$$\lambda = \alpha_1\beta_2\gamma_3 + \alpha_2\beta_3\gamma_1 + \alpha_3\beta_1\gamma_2 - \alpha_1\beta_3\gamma_2 - \alpha_2\beta_1\gamma_3 - \alpha_3\beta_2\gamma_1 = \begin{vmatrix} \alpha_1 & \alpha_2 & \alpha_3 \\ \beta_1 & \beta_2 & \beta_3 \\ \gamma_1 & \gamma_2 & \gamma_3 \end{vmatrix}.$$
It follows, since every 3-vector is a linear combination of simple 3-vectors, i.e., 3-vectors of the type $u \wedge v \wedge w$, that every 3-vector can

be written as a linear combination of $e_1 \wedge e_2 \wedge e_3$. Hence every 3-vector is simple!
It is also true that the 3-vector $e_1 \wedge e_2 \wedge e_3$ is linearly independent, i.e., not zero, though we shall not give the proof. Lest the reader think this is obvious, see the remark at the end of this section.
Hence it follows that $\{e_1 \wedge e_2 \wedge e_3\}$ is a basis, and every 3-vector $\alpha$ can be uniquely expressed as $\alpha = \lambda(e_1 \wedge e_2 \wedge e_3)$. $\square$
Note in particular that in this manner there is a 1-1 correspondence between $\mathbb{R}^3_3$, the space of 3-vectors in $\mathbb{R}^3$, and $\mathbb{R}^1$.
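The determinant formula for $\lambda$ in Proposition 5.3 is easy to test numerically. The sketch below (the helper name `wedge3` and the sample vectors are our choices) also illustrates assumption 9 and the sign change under a transposition of factors:

```python
def wedge3(u, v, w):
    """Coefficient l with u^v^w = l (e1^e2^e3): the 3x3 determinant
    whose rows are the coordinates of u, v, w."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

u, v, w = (1, 0, 2), (0, 3, 1), (2, 1, 0)
print(wedge3(u, v, w))   # the determinant of the coordinate matrix
print(wedge3(u, u, w))   # assumption 9: a repeated factor gives 0
print(wedge3(v, u, w))   # a transposition flips the sign
```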
Remark 5.4. We could, of course, continue, and try to define a space $\mathbb{R}^3_4$ of 4-vectors in $\mathbb{R}^3$. But observe that when we turned to the problem of expressing 4-vectors in terms of $e_1, e_2, e_3$, we would obtain an expression of the form
$$u \wedge v \wedge w \wedge x = \sum_{i,j,k,l=1}^3 \lambda_{ijkl}(e_i \wedge e_j \wedge e_k \wedge e_l).$$
However, at least 2 of the 4 indices must be equal, so the 4-vector $e_i \wedge e_j \wedge e_k \wedge e_l = 0$. This would hold for all the 81 terms of this sum, so we would have $u \wedge v \wedge w \wedge x = 0$; as a consequence we have $\mathbb{R}^3_4 = \{0\}$.
The table below summarizes the various spaces of p-vectors in $\mathbb{R}^3$, where $p$ is any non-negative integer:

0-vectors: $\mathbb{R}^3_0$, basis: $\{1\}$
1-vectors: $\mathbb{R}^3_1$, basis: $\{e_1, e_2, e_3\}$
2-vectors: $\mathbb{R}^3_2$, basis: $\{e_1 \wedge e_2,\ e_1 \wedge e_3,\ e_2 \wedge e_3\}$
3-vectors: $\mathbb{R}^3_3$, basis: $\{e_1 \wedge e_2 \wedge e_3\}$
p-vectors: $\mathbb{R}^3_p = \{0\}$, $p \ge 4$.

6. The space of p-vectors in $\mathbb{R}^n$

We have tried to give the definitions of §4 and §5 so that they can be easily generalized. A p-vector in $\mathbb{R}^n$, then, is an expression
$$\alpha = \sum_{i=1}^t \lambda_i(u_{1i} \wedge \ldots \wedge u_{pi}),$$
where the $\lambda_i$'s are real numbers, and the $u_{ji}$'s are vectors in $\mathbb{R}^n$. The space of p-vectors in $\mathbb{R}^n$ is denoted $\mathbb{R}^n_p$. The four basic axioms that make $\mathbb{R}^n_p$ into a vector space are

1)
 
1 (u11 . . . up1 ) + 2 (u12 . . . up2 ) + 3 (u13 . . . up3 )
 
= 1 (u11 . . . up1 ) + 2 (u12 . . . up2 ) + 3 (u13 . . . up3 ) ,
2)
1 (u11 . . . up1 ) + 2 (u12 . . . up2 )
= 2 (u12 . . . up2 ) + [1 (u11 . . . up1 ),
3) (u1 . . . up ) + (u1 . . . up ) = ( + )(u1 . . . up ),
4) 1(u1 . . . up ) = u1 . . . up .
In addition, the axioms that give Rnp its special properties are
5) For k = 1, 2, . . . , p,
(u1 . . . uk + vk . . . up )
= (u1 . . . uk . . . up ) + (u1 . . . vk . . . up ),
6) For k = 1, 2, . . . , p,
(u1 . . . up ) = u1 . . . (uk ) . . . up ,
7) u1 . . . up = 0 if 2 or more uj s are equal.
One then proves that if $v_1 \wedge \ldots \wedge v_p$ is obtained from $u_1 \wedge \ldots \wedge u_p$ by a rearrangement of factors, then $v_1 \wedge \ldots \wedge v_p = \pm u_1 \wedge \ldots \wedge u_p$, with the sign + or - according to whether the number of interchanges used in making the rearrangement is even or odd.
Finally, a basis for $\mathbb{R}^n_p$ is obtained by taking the p-vectors $e_{i_1} \wedge \ldots \wedge e_{i_p}$, where $e_1, \ldots, e_n$ are the usual basis vectors for $\mathbb{R}^n$, and the subscripts are arranged in increasing order: $i_1 < i_2 < \ldots < i_p$. Thus $\mathbb{R}^n_p$ has $\binom{n}{p}$ basis vectors; i.e., it is $\binom{n}{p}$-dimensional.


7. The space of 2-covectors

7.1. Definition I. We define 2-covectors from covectors exactly as we defined 2-vectors from vectors:
An object of the form $f \wedge g$, where $f$ and $g$ are covectors in $(\mathbb{R}^3_1)^*$, is called the wedge product of $f$ and $g$; in general, a 2-covector is an expression of the form
$$F = \lambda_1(f_1 \wedge g_1) + \ldots + \lambda_k(f_k \wedge g_k).$$
The set of 2-covectors in $\mathbb{R}^3$ is denoted $(\mathbb{R}^3_2)^*$.
If $G = \mu_1(f_1' \wedge g_1') + \ldots + \mu_m(f_m' \wedge g_m')$, then
$$F + G = \lambda_1(f_1 \wedge g_1) + \ldots + \lambda_k(f_k \wedge g_k) + \mu_1(f_1' \wedge g_1') + \ldots + \mu_m(f_m' \wedge g_m')$$

and
F = 1 (f1 g1 ) + . . . + k (fk gk ),
for any real number .
So that the fundamental rules 18 of 1 are satisfied, we first assume:
  
(1) 1 (f
 1 g 1 ) + 2 (f 2 g2 ) + 3 (f 3 g3 ) = 1 (f1 g1 ) + 2 (f2
g2 ) + 3 (f3 g3 ),
(2) 1 (f1 g1 ) + 2 (f2 g2 ) = 2 (f2 g2 ) + 1 (f1 g1 ),
(3) (f g) + (f g) = ( + )(f g)
(4) 1(f g) = f g.
Then to give (R32 ) its special properties, we assume:
(5) (f1 + f2 ) g = (f1 g) + (f2 g),
(6) f (g1 + g2 ) = (f g1 ) + (f g2 ),
(7) (f g) = (f ) g = f (g),
(8) f f = 0, the zero 2-covector.
One then proves that f g = g f . Furthermore, suppose that
f1 , f2 , f3 are the basis covectors defined in 2. Then we obtain, as in
4, the result that f1 f2 , f1 f3 , and f2 f3 are basis 2-covectors.
There is an alternative way of obtaining the space (R32 ) , which we
now consider.

7.2. Definition II. A co-2-vector is a linear map $H : \mathbb{R}^3_2 \to \mathbb{R}^1$. If $H$ and $K$ are co-2-vectors, and $\lambda \in \mathbb{R}^1$, we define
1) $(H + K)(\alpha) = H(\alpha) + K(\alpha)$,
2) $(\lambda H)(\alpha) = \lambda(H(\alpha))$.
We see that the definition of a co-2-vector as a linear function on the space of 2-vectors is exactly parallel to the definition of a covector as a linear function on the space of vectors given in §2. We shall not repeat the proofs of propositions which are exactly the same as in §2.

Proposition 7.1. a) The sum of co-2-vectors is a co-2-vector.
b) The product of a real number and a co-2-vector is a co-2-vector.

Proposition 7.2. Addition and scalar multiplication of co-2-vectors satisfy the fundamental rules 1-8 of §1.

Proposition 7.3. If $\lambda_1, \lambda_2, \lambda_3$ are any three given real numbers, then there is one and only one co-2-vector $H$ such that
$$H(e_1 \wedge e_2) = \lambda_1, \quad H(e_1 \wedge e_3) = \lambda_2, \quad H(e_2 \wedge e_3) = \lambda_3.$$



Now using this last proposition, we can obtain a basis for the space of co-2-vectors. We define these co-2-vectors $H_1, H_2, H_3$ by:
$$H_1(e_1 \wedge e_2) = 1, \quad H_2(e_1 \wedge e_2) = 0, \quad H_3(e_1 \wedge e_2) = 0,$$
$$H_1(e_1 \wedge e_3) = 0, \quad H_2(e_1 \wedge e_3) = 1, \quad H_3(e_1 \wedge e_3) = 0,$$
$$H_1(e_2 \wedge e_3) = 0, \quad H_2(e_2 \wedge e_3) = 0, \quad H_3(e_2 \wedge e_3) = 1.$$
Then $H_1, H_2, H_3$ are well-defined co-2-vectors, and they form a basis for the space of co-2-vectors. The following theorem is of critical importance.
Theorem 7.4. There is a canonical one-to-one correspondence between the space of 2-covectors and the space of co-2-vectors. Under this correspondence addition and scalar multiplication are preserved.
Remark 7.5. In the language of abstract algebra, this theorem says that the space of 2-covectors and the space of co-2-vectors are canonically isomorphic.
Proof. We first recall that if $F$ is any 2-covector whatever, then $F$ can be written in one and only one way as $F = \lambda_1(f_1 \wedge f_2) + \lambda_2(f_1 \wedge f_3) + \lambda_3(f_2 \wedge f_3)$. Similarly, if $H$ is any co-2-vector whatever, $H$ can be uniquely expressed as a linear combination of the basis co-2-vectors $H_1, H_2, H_3$.
Now let $F$ and $G$ be any 2-covectors. Then $F = \lambda_1(f_1 \wedge f_2) + \lambda_2(f_1 \wedge f_3) + \lambda_3(f_2 \wedge f_3)$ and $G = \mu_1(f_1 \wedge f_2) + \mu_2(f_1 \wedge f_3) + \mu_3(f_2 \wedge f_3)$. By our rule of correspondence, $F$ corresponds to the co-2-vector $H = \lambda_1 H_1 + \lambda_2 H_2 + \lambda_3 H_3$, and $G$ corresponds to the co-2-vector $K = \mu_1 H_1 + \mu_2 H_2 + \mu_3 H_3$. Now by the rules for addition of 2-covectors,
$$F + G = (\lambda_1 + \mu_1)(f_1 \wedge f_2) + (\lambda_2 + \mu_2)(f_1 \wedge f_3) + (\lambda_3 + \mu_3)(f_2 \wedge f_3).$$
Under the correspondence, $F + G$ corresponds to $(\lambda_1 + \mu_1)H_1 + (\lambda_2 + \mu_2)H_2 + (\lambda_3 + \mu_3)H_3$. But under the rules for addition of co-2-vectors, this is precisely the sum $H + K$. Thus $F + G$ corresponds to $H + K$. Similarly, we obtain that $\lambda F$ corresponds to $\lambda H$ for any real number $\lambda$.
To complete the proof, we must justify the use of the word canonical. Consider a 2-covector $g \wedge h$, where $g$ and $h$ are covectors; say $g = \alpha_1 f_1 + \alpha_2 f_2 + \alpha_3 f_3$, and $h = \beta_1 f_1 + \beta_2 f_2 + \beta_3 f_3$. Then we know that $g \wedge h = (\alpha_1\beta_2 - \alpha_2\beta_1)(f_1 \wedge f_2) + (\alpha_1\beta_3 - \alpha_3\beta_1)(f_1 \wedge f_3) + (\alpha_2\beta_3 - \alpha_3\beta_2)(f_2 \wedge f_3)$. Hence $g \wedge h$ corresponds to the co-2-vector $H = (\alpha_1\beta_2 - \alpha_2\beta_1)H_1 + (\alpha_1\beta_3 - \alpha_3\beta_1)H_2 + (\alpha_2\beta_3 - \alpha_3\beta_2)H_3$. Now let $u, v$ be any vectors, and consider $H(u \wedge v)$. We have
$$H(u \wedge v) = (\alpha_1\beta_2 - \alpha_2\beta_1)H_1(u \wedge v) + (\alpha_1\beta_3 - \alpha_3\beta_1)H_2(u \wedge v) + (\alpha_2\beta_3 - \alpha_3\beta_2)H_3(u \wedge v).$$

Suppose u = 1 e1 +2 e2 +3 e3 , v = 1 e1 +2 e2 +3 e3 . Then we know


that u v = (1 2 2 1 )(e1 e2 ) + (1 3 3 1 )(e1 e3 ) + (2 3
3 2 )(e2 e3 ). By definition of H1 , H2 , H3 , H1 (u v) = 1 2 2 1 ,
H2 (u v) = 1 3 3 1 , and H3 (u v) = 2 3 3 2 . Therefore,
H(u v) = (1 2 2 1 )(1 2 2 1 )
+ (1 3 3 1 )(1 3 3 1 ) + (2 3 3 2 )(2 3 3 2 ).
Expanding and canceling,
H(u v) = 1 1 2 2 + 2 2 1 1
2 1 1 2 1 2 2 1
+ 1 1 3 3 + 3 3 1 1
1 3 3 1 3 1 1 3
+ 2 2 3 3 + 3 3 2 2
3 2 2 3 2 3 3 2 .
By knowing the answer in advance, we are able to tell that this mess
is:

1 + 2 + 3 1 + 2 + 3
1 2 3 1 2 3
H(u v) = .
1 1 + 2 2 + 3 3 1 1 + 2 2 + 3 3

Now observe:
1 1 + 2 2 + 3 3 = g(u)
1 1 + 2 2 + 3 3 = g(v)
1 1 + 2 2 + 3 3 = h(u)
1 1 + 2 2 + 3 3 = h(v).
Therefore we have shown that if g and h are any covectors, then the
co-2-vector H which corresponds to g h satisfies

g(u) g(v)

H(u v) = .

h(u) h(v)
Hence the correspondence given in this theorem is independent of bases.

The importance of the preceding theorem is now easily established. The theorem tells us that the two algebraic structures, namely what we have called the space of 2-covectors and the space of co-2-vectors, are completely equivalent. This equivalence enables us to identify the two spaces. We shall only use the term 2-covector, and if $g \wedge h$ is such a

2-covector, we shall think of it equally well as the wedge product of the covectors $g$ and $h$, or as the linear function $g \wedge h : \mathbb{R}^3_2 \to \mathbb{R}^1$ specified by the rule
$$(g \wedge h)(u \wedge v) = \begin{vmatrix} g(u) & g(v) \\ h(u) & h(v) \end{vmatrix}.$$
Notice also that this formula determines $g \wedge h$ completely; for if $\alpha = \lambda_1(u_1 \wedge v_1) + \ldots + \lambda_k(u_k \wedge v_k)$ is any 2-vector, then because the function $g \wedge h$ is linear, we have
$$(g \wedge h)(\alpha) = (g \wedge h)\big(\lambda_1(u_1 \wedge v_1) + \ldots + \lambda_k(u_k \wedge v_k)\big) = \lambda_1(g \wedge h)(u_1 \wedge v_1) + \ldots + \lambda_k(g \wedge h)(u_k \wedge v_k) = \lambda_1 \begin{vmatrix} g(u_1) & g(v_1) \\ h(u_1) & h(v_1) \end{vmatrix} + \ldots + \lambda_k \begin{vmatrix} g(u_k) & g(v_k) \\ h(u_k) & h(v_k) \end{vmatrix}.$$
Furthermore, because the correspondence of the theorem preserves sums and scalar products, we may extend the basic formula above by linearity to arbitrary 2-covectors: if $\phi = \mu_1(g_1 \wedge h_1) + \ldots + \mu_s(g_s \wedge h_s)$ is any 2-covector, then when considering $\phi$ as a linear function $\mathbb{R}^3_2 \to \mathbb{R}^1$, we have
$$\phi(\alpha) = \big[\mu_1(g_1 \wedge h_1) + \ldots + \mu_s(g_s \wedge h_s)\big](\alpha) = \mu_1(g_1 \wedge h_1)(\alpha) + \ldots + \mu_s(g_s \wedge h_s)(\alpha).$$
In conclusion, then, if $\phi$ is any 2-covector and $\alpha$ any 2-vector, written as above, then $\phi(\alpha)$ is the real number given by
$$\phi(\alpha) = \sum_{i=1}^s \sum_{j=1}^k \mu_i \lambda_j \begin{vmatrix} g_i(u_j) & g_i(v_j) \\ h_i(u_j) & h_i(v_j) \end{vmatrix}.$$
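The double-sum formula is short to implement. In the sketch below, covectors and vectors are both encoded by coefficient triples (an encoding chosen for this sketch), and the action of a 2-covector on a 2-vector is the double sum of $2 \times 2$ determinants:

```python
def apply(f, v):
    """f(v) for a covector f = (c1, c2, c3) in the basis f1, f2, f3."""
    return sum(fi * vi for fi, vi in zip(f, v))

def act(mus, gs, hs, ls, us, vs):
    """phi(alpha) for phi = sum_i mu_i (g_i ^ h_i) and
    alpha = sum_j l_j (u_j ^ v_j), via the 2x2-determinant formula."""
    return sum(m * l * (apply(g, u) * apply(h, v) - apply(g, v) * apply(h, u))
               for m, g, h in zip(mus, gs, hs)
               for l, u, v in zip(ls, us, vs))

# (f1 ^ f2)(e1 ^ e2) = 1, and swapping the vectors flips the sign.
f1, f2 = (1, 0, 0), (0, 1, 0)
e1, e2 = (1, 0, 0), (0, 1, 0)
print(act([1], [f1], [f2], [1], [e1], [e2]))  # 1
print(act([1], [f1], [f2], [1], [e2], [e1]))  # -1
```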
Exercise 7.6. Let the covectors $g_1, g_2, h_1, h_2$ be given by $g_1 = 2f_1 + f_2$, $g_2 = f_1 + f_2 - f_3$, $h_1 = f_2 - 3f_3$, $h_2 = f_1 + 2f_2$, where $f_1, f_2, f_3$ are the standard basis covectors. Let the vectors $u$ and $v$ be given by $u = (1, 1, 0)$, $v = (2, 1, 3)$. Consider the 2-covector $\phi = 2(g_1 \wedge h_1) - (g_2 \wedge 2h_2)$, and compute $\phi(u \wedge v)$.
8. The space of 3-covectors
A 3-covector is an expression
$$\phi = \mu_1(g_1 \wedge h_1 \wedge k_1) + \ldots + \mu_s(g_s \wedge h_s \wedge k_s),$$
where the $\mu_i$'s are real numbers, and the $g_i$'s, $h_i$'s, and $k_i$'s are covectors in $\mathbb{R}^3$. Sums and scalar products are defined in the obvious way, and

9 basic assumptions are made. These 9 assumptions are exactly like


those in 5 given for 3-vectors, and we shall not copy them down again.
The following propositions are true:
Proposition 8.1. For any covectors f, g, and h (R31 ) , the following
are equal:
f gh=ghf =hf g
= g f h = f h g = h g f.
Proposition 8.2. The single 3-covector $f_1 \wedge f_2 \wedge f_3$ forms a basis for the space $(\mathbb{R}^3_3)^*$ of 3-covectors.
For the moment, let us use the term co-3-vector to describe a linear function $\mathbb{R}^3_3 \to \mathbb{R}^1$. Sums and scalar multiples of co-3-vectors are defined in the (by now) usual way, and the set of co-3-vectors forms a space satisfying fundamental rules 1-8 (cf. §7).
Proposition 8.3. If $\lambda$ is any given real number, there is one and only one co-3-vector $H$ such that $H(e_1 \wedge e_2 \wedge e_3) = \lambda$.
Using these propositions, it follows that the unique co-3-vector $H_1$ specified by $H_1(e_1 \wedge e_2 \wedge e_3) = 1$ forms a basis for the space of co-3-vectors.
Theorem 8.4. The space of 3-covectors and the space of co-3-vectors are canonically isomorphic.
The proof makes the single basis 3-covector $f_1 \wedge f_2 \wedge f_3$ correspond to the single basis co-3-vector $H_1$.
If $f, g, h$ are covectors and $u, v, w$ are vectors, then the co-3-vector $H$ corresponding to $f \wedge g \wedge h$ satisfies
$$H(u \wedge v \wedge w) = (f \wedge g \wedge h)(u \wedge v \wedge w) = \begin{vmatrix} f(u) & f(v) & f(w) \\ g(u) & g(v) & g(w) \\ h(u) & h(v) & h(w) \end{vmatrix},$$
which shows that the isomorphism is canonical.
This theorem allows us to identify the concepts of 3-covector and co-3-vector, and we choose to use the term 3-covector only. Thus we shall think of $f \wedge g \wedge h$ equally well as the wedge product of the covectors $f, g$, and $h$, or as the linear function specified by the rule above. There is no need to extend this rule by linearity, since every 3-vector is simple, and every 3-covector is simple.

9. The space of p-covectors in $\mathbb{R}^n$

A p-covector in $\mathbb{R}^n$ would be an expression
$$\phi = \sum_{i=1}^t \mu_i(g_{1i} \wedge \ldots \wedge g_{pi}),$$
where the $\mu_i$'s are real numbers, and the $g_{ji}$'s are covectors in $\mathbb{R}^n$, i.e., real-valued linear functions on $\mathbb{R}^n$. The basic axioms for p-covectors are completely parallel to those given in §6. A basis for the space $(\mathbb{R}^n_p)^*$ is obtained by taking the p-covectors $f_{i_1} \wedge \ldots \wedge f_{i_p}$, where $f_1, \ldots, f_n$ are the natural basis covectors in $(\mathbb{R}^n_1)^*$, and $i_1 < i_2 < \ldots < i_p$.
The fundamental theorem, of course, is that the p-covectors may be identified with the linear functions on the space $\mathbb{R}^n_p$ of p-vectors. The proof makes the basis p-covector $f_{i_1} \wedge \ldots \wedge f_{i_p}$ correspond to the linear function whose value at $e_{i_1} \wedge \ldots \wedge e_{i_p}$ (same subscripts as on the $f$'s) is 1, and whose value at the other basis p-vectors is 0. The formula which expresses the action of an arbitrary p-covector on an arbitrary p-vector is
$$(g_1 \wedge \ldots \wedge g_p)(u_1 \wedge \ldots \wedge u_p) = \begin{vmatrix} g_1(u_1) & g_1(u_2) & \ldots & g_1(u_p) \\ g_2(u_1) & g_2(u_2) & \ldots & g_2(u_p) \\ \vdots & \vdots & \ddots & \vdots \\ g_p(u_1) & g_p(u_2) & \ldots & g_p(u_p) \end{vmatrix},$$
which is extended by linearity in both directions.
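The $p \times p$ determinant formula can be sketched for arbitrary $p$ by a signed-permutation expansion (an $O(p!)$ method, adequate for the small $p$ occurring here; all helper names are ours):

```python
from itertools import permutations

def det(M):
    """Determinant of a square matrix by the signed-permutation sum."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        # parity of the permutation = parity of its inversion count
        inv = sum(1 for a in range(n) for b in range(a + 1, n)
                  if perm[a] > perm[b])
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += (-1) ** inv * prod
    return total

def act_p(gs, us):
    """(g1 ^ ... ^ gp)(u1 ^ ... ^ up) = det[g_i(u_j)], with covectors
    and vectors both encoded as coefficient tuples."""
    return det([[sum(gi * ui for gi, ui in zip(g, u)) for u in us]
                for g in gs])

f = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # basis covectors f1, f2, f3
e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # basis vectors e1, e2, e3
print(act_p(f, e))                     # matching index sets in order: 1
print(act_p(f, [e[1], e[0], e[2]]))    # one interchange: -1
```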

10. Applications to area theory

10.1. Case 1. Let $D \subset \mathbb{R}^2$ be open, and consider $\Phi : D \to \mathbb{R}^3$, $\Phi$ smooth. We define $\frac{\partial\Phi}{\partial u}(p)$ and $\frac{\partial\Phi}{\partial v}(p)$, for $p \in D$, as follows. Say
$$\Phi : \quad x = \varphi(u, v), \quad y = \psi(u, v), \quad z = \chi(u, v).$$
Then $\frac{\partial\Phi}{\partial u}(p)$ is a vector:
$$\frac{\partial\Phi}{\partial u}(p) = \Big(\frac{\partial\varphi}{\partial u}(p), \frac{\partial\psi}{\partial u}(p), \frac{\partial\chi}{\partial u}(p)\Big)$$
(also written $\big(\frac{\partial x}{\partial u}(p), \frac{\partial y}{\partial u}(p), \frac{\partial z}{\partial u}(p)\big)$). Also,
$$\frac{\partial\Phi}{\partial v}(p) = \Big(\frac{\partial\varphi}{\partial v}(p), \frac{\partial\psi}{\partial v}(p), \frac{\partial\chi}{\partial v}(p)\Big),$$

or $\big(\frac{\partial x}{\partial v}(p), \frac{\partial y}{\partial v}(p), \frac{\partial z}{\partial v}(p)\big)$.
The Jacobian 2-vector of $\Phi$ at $p$ is given by
$$\tilde J\Phi(p) = \frac{\partial\Phi}{\partial u}(p) \wedge \frac{\partial\Phi}{\partial v}(p).$$
Proposition 10.1. Under the correspondence
$$e_1 \wedge e_2 \leftrightarrow e_3, \qquad e_1 \wedge e_3 \leftrightarrow -e_2, \qquad e_2 \wedge e_3 \leftrightarrow e_1,$$
as given in §4, $\tilde J\Phi(p)$ corresponds to $n(p)$, the normal to $\Phi$ at $p$.

Proof. Let $\tilde J\Phi(p) = \lambda_1(e_1 \wedge e_2) + \lambda_2(e_1 \wedge e_3) + \lambda_3(e_2 \wedge e_3)$. We must determine the coefficients $\lambda_1, \lambda_2, \lambda_3$. It is convenient to utilize the theory of 2-covectors. For $\lambda_1 = (f_1 \wedge f_2)(\tilde J\Phi(p))$, and by the rule presented in §7, we get
$$\lambda_1 = (f_1 \wedge f_2)(\tilde J\Phi(p)) = \begin{vmatrix} f_1\big(\frac{\partial\Phi}{\partial u}(p)\big) & f_1\big(\frac{\partial\Phi}{\partial v}(p)\big) \\ f_2\big(\frac{\partial\Phi}{\partial u}(p)\big) & f_2\big(\frac{\partial\Phi}{\partial v}(p)\big) \end{vmatrix} = \begin{vmatrix} \frac{\partial\varphi}{\partial u}(p) & \frac{\partial\varphi}{\partial v}(p) \\ \frac{\partial\psi}{\partial u}(p) & \frac{\partial\psi}{\partial v}(p) \end{vmatrix} = \frac{\partial(\varphi, \psi)}{\partial(u, v)}(p) = \frac{\partial(x, y)}{\partial(u, v)}(p).$$
Similarly
$$\lambda_2 = \frac{\partial(x, z)}{\partial(u, v)}(p), \qquad \lambda_3 = \frac{\partial(y, z)}{\partial(u, v)}(p).$$
Hence the components of $\tilde J\Phi(p)$ do correspond to the components of $n(p)$ (see, e.g., [1, p. 836] or [3, p. 272]). $\square$
Corollary 10.2.
$$A(\Phi) = \iint_D |\tilde J\Phi(p)|.$$
Proof. This result is immediate from the definition of $A(\Phi)$, for instance, see [1, p. 836] or [3, p. 275], and the definition of norm for vectors and 2-vectors. $\square$
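Corollary 10.2 can be checked numerically for a simple patch. For the graph $\Phi(u, v) = (u, v, u + v)$ over the unit square (an illustrative choice of surface and grid size, not taken from the text), $\frac{\partial\Phi}{\partial u} = (1, 0, 1)$ and $\frac{\partial\Phi}{\partial v} = (0, 1, 1)$, so $|\tilde J\Phi| = \sqrt{3}$ everywhere and $A(\Phi) = \sqrt{3}$:

```python
import math

def jacobian_2vector(du, dv):
    """Components of du ^ dv in the basis (e1^e2, e1^e3, e2^e3)."""
    return (du[0] * dv[1] - du[1] * dv[0],
            du[0] * dv[2] - du[2] * dv[0],
            du[1] * dv[2] - du[2] * dv[1])

def area(du, dv, N=400):
    """Riemann-sum approximation of the double integral of |J~Phi| over
    the unit square; du, dv are constant for this linear patch."""
    h = 1.0 / N
    total = 0.0
    for i in range(N):
        for j in range(N):
            total += math.hypot(*jacobian_2vector(du, dv)) * h * h
    return total

val = area((1, 0, 1), (0, 1, 1))
print(val)  # close to sqrt(3) ~ 1.732
```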

10.2. Case 2. Let $D \subset \mathbb{R}^3$ be open, and consider $T : D \to \mathbb{R}^3$, $T$ smooth; that is, $T \in C^1$, $JT(p) \ne 0$ for all $p \in D$. Suppose
$$T : \quad x = \varphi(u, v, w), \quad y = \psi(u, v, w), \quad z = \chi(u, v, w).$$
We define
$$\frac{\partial T}{\partial u}(p) = \Big(\frac{\partial\varphi}{\partial u}(p), \frac{\partial\psi}{\partial u}(p), \frac{\partial\chi}{\partial u}(p)\Big) = \Big(\frac{\partial x}{\partial u}(p), \frac{\partial y}{\partial u}(p), \frac{\partial z}{\partial u}(p)\Big).$$
Also, $\frac{\partial T}{\partial v}(p)$, $\frac{\partial T}{\partial w}(p)$ are defined similarly.
The Jacobian 3-vector of $T$ at $p$ is given by
$$\tilde JT(p) = \frac{\partial T}{\partial u}(p) \wedge \frac{\partial T}{\partial v}(p) \wedge \frac{\partial T}{\partial w}(p).$$
Remark 10.3. As we know, every 3-vector is simple; in fact, by the methods in §5, we see that $\tilde JT(p) = \lambda(e_1 \wedge e_2 \wedge e_3)$, where by the determinant rule of §5,
$$\lambda = \begin{vmatrix} \frac{\partial\varphi}{\partial u}(p) & \frac{\partial\varphi}{\partial v}(p) & \frac{\partial\varphi}{\partial w}(p) \\ \frac{\partial\psi}{\partial u}(p) & \frac{\partial\psi}{\partial v}(p) & \frac{\partial\psi}{\partial w}(p) \\ \frac{\partial\chi}{\partial u}(p) & \frac{\partial\chi}{\partial v}(p) & \frac{\partial\chi}{\partial w}(p) \end{vmatrix} = \frac{\partial(\varphi, \psi, \chi)}{\partial(u, v, w)}(p) = \frac{\partial(x, y, z)}{\partial(u, v, w)}(p) = JT(p),$$
the ordinary Jacobian. This computation justifies the use of the word Jacobian in the terms Jacobian 2-vector and Jacobian 3-vector.
Corollary 10.4.
$$V(T(D)) = \iiint_D |\tilde JT(p)|,$$
where $|\tilde JT(p)|$ is the norm of the 3-vector (for a 3-vector $u \wedge v \wedge w = \lambda(e_1 \wedge e_2 \wedge e_3)$ we define $|u \wedge v \wedge w| = |\lambda|$).
Proof. This is immediate from [1, p. 788] or [3, Theorem 3, p. 239]. $\square$
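Corollary 10.4 is easy to verify for a linear map, where the Jacobian 3-vector is constant. For $T(u, v, w) = (2u, 3v, w)$ on the unit cube (an illustrative choice of map and domain), $\tilde JT = 2e_1 \wedge 3e_2 \wedge e_3 = 6(e_1 \wedge e_2 \wedge e_3)$, so $V(T(D)) = \iiint_D |\tilde JT| = 6$:

```python
def jacobian_det(Tu, Tv, Tw):
    """Coefficient of e1^e2^e3 in Tu ^ Tv ^ Tw, i.e. the ordinary
    Jacobian of T (the partials Tu, Tv, Tw are constant here)."""
    return (Tu[0] * (Tv[1] * Tw[2] - Tv[2] * Tw[1])
            - Tu[1] * (Tv[0] * Tw[2] - Tv[2] * Tw[0])
            + Tu[2] * (Tv[0] * Tw[1] - Tv[1] * Tw[0]))

JT = jacobian_det((2, 0, 0), (0, 3, 0), (0, 0, 1))
print(abs(JT))  # volume of the image of the unit cube: 6
```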

10.3. Case 3. Let $D \subset \mathbb{R}^1$ be open, and consider $\gamma : D \to \mathbb{R}^3$, $\gamma$ smooth ($D$ will be a union of non-overlapping open intervals). We define $\frac{d\gamma}{dt}(t_0)$ as usual. Say
$$\gamma : \quad x = \varphi(t), \quad y = \psi(t), \quad z = \chi(t).$$
Then for $t_0 \in D$,
$$\frac{d\gamma}{dt}(t_0) = \gamma'(t_0) = (\varphi'(t_0), \psi'(t_0), \chi'(t_0)) = \Big(\frac{dx}{dt}(t_0), \frac{dy}{dt}(t_0), \frac{dz}{dt}(t_0)\Big).$$
The Jacobian 1-vector of $\gamma$ at $t_0$ is given by
$$\tilde J\gamma(t_0) = \frac{d\gamma}{dt}(t_0) \in \mathbb{R}^3_1$$
(i.e., in this case the Jacobian vector is simply the derivative vector $\frac{d\gamma}{dt}$).
Remark 10.5. Z
l() = |J(t
e 0 )|.
D
This is immediate from definition.
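For instance (the helix is a hypothetical curve of my own choosing, not from the text), the length of one turn of $\gamma(t)=(\cos t,\sin t,t)$ follows from Remark 10.5 with sympy:

```python
import sympy as sp

t = sp.symbols('t')
# gamma(t) = (cos t, sin t, t); the Jacobian 1-vector is just gamma'(t)
gamma = sp.Matrix([sp.cos(t), sp.sin(t), t])
Jg = gamma.diff(t)

# l(gamma) = integral of |J~gamma| over D = (0, 2*pi)
speed = sp.simplify(sp.sqrt(Jg.dot(Jg)))   # sqrt(sin^2 + cos^2 + 1) = sqrt(2)
length = sp.integrate(speed, (t, 0, 2*sp.pi))
print(length)  # 2*sqrt(2)*pi
```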
10.4. General case. These three illustrations above provide ample
motivation for the general case, which we now study.
Let $D\subset R^k$ be open, and consider $T:D\to R^n$. We suppose $T\in C^1$. Let us use the term $k$-dimensional measure. We then want a formula for the $k$-dimensional measure of the set $T(D)\subset R^n$. Suppose
$$T:\quad y_1=\varphi_1(x_1,x_2,\dots,x_k),\quad y_2=\varphi_2(x_1,x_2,\dots,x_k),\quad\dots,\quad y_n=\varphi_n(x_1,x_2,\dots,x_k).$$
Let $p\in D$. Then we define
$$\frac{\partial T}{\partial x_j}(p)=\Bigl(\frac{\partial\varphi_1}{\partial x_j}(p),\frac{\partial\varphi_2}{\partial x_j}(p),\dots,\frac{\partial\varphi_n}{\partial x_j}(p)\Bigr)=\Bigl(\frac{\partial y_1}{\partial x_j}(p),\frac{\partial y_2}{\partial x_j}(p),\dots,\frac{\partial y_n}{\partial x_j}(p)\Bigr),$$
where $j=1,2,\dots,k$. For $j=1,2,\dots,k$, $\frac{\partial T}{\partial x_j}(p)$ is a vector in $n$-space $R^n$.

The Jacobian $k$-vector of $T$ at $p$ is given by
$$\tilde JT(p)=\frac{\partial T}{\partial x_1}(p)\wedge\frac{\partial T}{\partial x_2}(p)\wedge\dots\wedge\frac{\partial T}{\partial x_k}(p).$$
It is a $k$-vector in $n$-space (see §6). If we expand $\tilde JT(p)$ in terms of the basis $k$-vectors $(e_{i_1}\wedge e_{i_2}\wedge\dots\wedge e_{i_k})$, with $i_1<i_2<\dots<i_k$, we have
$$\tilde JT(p)=\sum\alpha_{i_1\dots i_k}(e_{i_1}\wedge e_{i_2}\wedge\dots\wedge e_{i_k}).$$
The norm of $\tilde JT(p)$ is then defined in the obvious way: it is the square root of the sum of the squares of the components of $\tilde JT(p)$ with respect to the basis $k$-vectors.
The $k$-dimensional measure of the set $T(D)\subset R^n$ is given by the formula
$$m_k(T(D))=\underbrace{\int\cdots\int_D}_{k\ \text{times}}|\tilde JT(p)|.$$
The formula is valid under the condition $|\tilde JT(p)|\neq 0$ for all $p\in D$ (this is the general requirement of smoothness).
Remark 10.6. Thus the theory of $k$-vectors in $n$-space has enabled us to give a unified treatment of length, area, and volume. Briefly stated: to find the $k$-dimensional measure of a $k$-dimensional surface in $n$-space, integrate the norm of its Jacobian $k$-vector.
This implication alone gives a good justification for the theory of $k$-vectors in $R^n$.
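To illustrate Remark 10.6 for $k=2$, $n=3$ (the sphere parametrization is my own example, not from the text): for $\sigma(u,v)=(\cos u\sin v,\ \sin u\sin v,\ \cos v)$ the norm of the Jacobian 2-vector works out to $\sin v$, and integrating it over $0<u<2\pi$, $0<v<\pi$ recovers the area $4\pi$ of the unit sphere:

```python
import sympy as sp

u, v = sp.symbols('u v')
# Unit sphere: sigma(u, v) = (cos u sin v, sin u sin v, cos v)
x = sp.cos(u)*sp.sin(v)
y = sp.sin(u)*sp.sin(v)
z = sp.cos(v)

def minor(a, b):
    """Component of the Jacobian 2-vector: the minor d(a, b)/d(u, v)."""
    return sp.Matrix([[sp.diff(a, u), sp.diff(a, v)],
                      [sp.diff(b, u), sp.diff(b, v)]]).det()

# |J~sigma|^2 = sum of squares of the three components
norm_sq = sp.trigsimp(minor(x, y)**2 + minor(x, z)**2 + minor(y, z)**2)
print(norm_sq)  # sin(v)**2

# On 0 < v < pi we have sin v >= 0, so |J~sigma| = sin v
area = sp.integrate(sp.sin(v), (v, 0, sp.pi), (u, 0, 2*sp.pi))
print(area)  # 4*pi
```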

11. Differential 2-forms


Let $\Omega$ be an open set in $R^3$. A differential 2-form on $\Omega$ is a function
$$w:\Omega\to(R^3_2)^*;$$
thus $w$ is a 2-covector valued function.
Proposition 11.1. Every 2-form $w$ on $\Omega$ can be uniquely represented in the form
$$w(p)=A_1(p)(f_1\wedge f_2)+A_2(p)(f_1\wedge f_3)+A_3(p)(f_2\wedge f_3),$$
for all $p\in\Omega$.
Proof. The proof is left to the reader. □
These three functions $A_1,A_2,A_3$ are called the coordinate functions of the 2-form $w$. The 2-form $w$ is said to be continuous (differentiable) if the $A_i$'s are continuous (differentiable).

Let $w$ be a continuous 2-form on an open set $\Omega$. Let $\sigma:D\to R^3$ be a smooth surface such that $\mathrm{trace}(\sigma)^1\subset\Omega$. Then the integral of $w$ over $\sigma$ is given by
$$\iint_\sigma w=\iint_D\bigl[w(\sigma(u,v))\bigr]\bigl(\tilde J\sigma(u,v)\bigr)\,dudv,$$
where $\tilde J\sigma(u,v)$ is the Jacobian 2-vector as defined in §10, and where the symbol $\bigl[w(\sigma(u,v))\bigr]\bigl(\tilde J\sigma(u,v)\bigr)$ is interpreted as follows: for each point $(u,v)\in D$, $\tilde J\sigma(u,v)$ is a 2-vector, $\sigma(u,v)$ a point in $\Omega$, $w(\sigma(u,v))$ is a 2-covector, and the entire expression denotes the real number obtained by letting the 2-covector $w(\sigma(u,v))$, considered as a linear function on $R^3_2$, operate on the 2-vector $\tilde J\sigma(u,v)$.
Remark 11.2. This definition is a straightforward generalization of the definition of the integral of a 1-form over a smooth curve (p. 4). For if $w:\Omega\to(R^3_1)^*$ is a continuous 1-form, and $\gamma:[a,b]\to R^3$ a smooth curve so that $\mathrm{trace}(\gamma)\subset\Omega$, then
$$\int_\gamma w=\int_a^b\bigl[w(\gamma(t))\bigr]\bigl(\gamma'(t)\bigr)\,dt=\int_a^b\bigl[w(\gamma(t))\bigr]\bigl(\tilde J\gamma(t)\bigr)\,dt,$$
where $\tilde J\gamma(t)$ is the Jacobian 1-vector of $\gamma$ at $t$ as defined in §10.3.
Example 11.3. Let $w(x,y,z)=xy\,dydz+x\,dzdx+3zx\,dxdy$, and
$$\sigma:\quad x=u+v,\quad y=u-v,\quad z=uv,$$
where $0\le u,v\le 1$. Here $\Omega$, the domain of definition of $w$, is all of $R^3$, and $w$ is smooth. Now
$$\frac{\partial\sigma}{\partial u}(u_0,v_0)=\Bigl(\frac{\partial x}{\partial u}(u_0,v_0),\frac{\partial y}{\partial u}(u_0,v_0),\frac{\partial z}{\partial u}(u_0,v_0)\Bigr)=(1,1,v_0),$$
and similarly $\frac{\partial\sigma}{\partial v}(u_0,v_0)=(1,-1,u_0)$. Therefore
$$\tilde J\sigma(u_0,v_0)=(1,1,v_0)\wedge(1,-1,u_0).$$
Next, $w(x,y,z)=3zx(f_1\wedge f_2)-x(f_1\wedge f_3)+xy(f_2\wedge f_3)$. So that
$$w(\sigma(u_0,v_0))=3(u_0v_0)(u_0+v_0)(f_1\wedge f_2)-(u_0+v_0)(f_1\wedge f_3)+(u_0+v_0)(u_0-v_0)(f_2\wedge f_3).$$

¹The set of all points that lie on $\sigma$ is called the trace or graph of the surface $\sigma$.

Finally,
$$\bigl[w(\sigma(u_0,v_0))\bigr]\bigl(\tilde J\sigma(u_0,v_0)\bigr)=\dots\ \text{(left to the reader)}$$
$$=3u_0v_0(u_0+v_0)(-2)-(u_0+v_0)(u_0-v_0)+(u_0^2-v_0^2)(u_0+v_0)$$
$$=u_0^3-u_0^2-5u_0^2v_0-7u_0v_0^2+v_0^2-v_0^3.$$
Thus
$$\iint_\sigma w=\int_0^1 du\int_0^1(u^3-u^2-5u^2v-7uv^2+v^2-v^3)\,dv=-2.$$
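Example 11.3 is easy to check by machine. The sympy sketch below pairs the 2-covector with the three components of the Jacobian 2-vector and evaluates the double integral, giving $-2$:

```python
import sympy as sp

u, v = sp.symbols('u v')
# Surface from Example 11.3: x = u + v, y = u - v, z = uv
x, y, z = u + v, u - v, u*v

def jac2(a, b):
    """Component of the Jacobian 2-vector: the minor d(a, b)/d(u, v)."""
    return sp.Matrix([[sp.diff(a, u), sp.diff(a, v)],
                      [sp.diff(b, u), sp.diff(b, v)]]).det()

# Pair the 2-covector 3zx (f1^f2) - x (f1^f3) + xy (f2^f3) with J~sigma
integrand = sp.expand(3*z*x*jac2(x, y) - x*jac2(x, z) + x*y*jac2(y, z))

result = sp.integrate(integrand, (v, 0, 1), (u, 0, 1))
print(result)  # -2
```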

Lemma 11.4. Let $\sigma:D\to R^3$ and $\sigma^*:D^*\to R^3$ be smoothly equivalent surfaces. Recall that this means there is a transformation $h:D\to D^*$ such that $h$ is 1-1 and onto, $h\in C^1$, $Jh(p)>0$ for all $p\in D$, and $\sigma^*\circ h=\sigma$. Then
$$\tilde J\sigma(p)=\tilde J\sigma^*(h(p))\,Jh(p)$$
for all $p\in D$.
Proof. Define three projection functions $\pi_1,\pi_2,\pi_3:R^3\to R^2$ by the formulae
$$\pi_1(x,y,z)=(x,y),\qquad \pi_2(x,y,z)=(x,z),\qquad \pi_3(x,y,z)=(y,z).$$
Since $\sigma^*\circ h=\sigma$, we also have
$$\pi_i\circ\sigma^*\circ h=\pi_i\circ\sigma\qquad\text{for } i=1,2,3.$$
(Diagrammatically: the triangle $D\xrightarrow{\ \pi_1\circ\sigma\ }R^2$, $D\xrightarrow{\ h\ }D^*\xrightarrow{\ \pi_1\circ\sigma^*\ }R^2$ commutes.)
These are three transformations from the plane into the plane. By the Chain Rule, for all $p\in D$,
$$d(\pi_1\circ\sigma)|_p=d(\pi_1\circ\sigma^*)|_{h(p)}\circ dh|_p.$$
Taking determinants on both sides,
$$J(\pi_1\circ\sigma)(p)=J(\pi_1\circ\sigma^*)(h(p))\,Jh(p).$$

Similarly,
$$J(\pi_2\circ\sigma)(p)=J(\pi_2\circ\sigma^*)(h(p))\,Jh(p),\qquad J(\pi_3\circ\sigma)(p)=J(\pi_3\circ\sigma^*)(h(p))\,Jh(p).$$
Now consider the 2-vector $\tilde J\sigma(p)$. If $\sigma(u,v)=(x,y,z)$, then by Proposition 10.1,
$$\tilde J\sigma(p)=\frac{\partial(x,y)}{\partial(u,v)}\Big|_p(e_1\wedge e_2)+\frac{\partial(x,z)}{\partial(u,v)}\Big|_p(e_1\wedge e_3)+\frac{\partial(y,z)}{\partial(u,v)}\Big|_p(e_2\wedge e_3)$$
$$=J(\pi_1\circ\sigma)(p)(e_1\wedge e_2)+J(\pi_2\circ\sigma)(p)(e_1\wedge e_3)+J(\pi_3\circ\sigma)(p)(e_2\wedge e_3)$$
$$=J(\pi_1\circ\sigma^*)(h(p))Jh(p)(e_1\wedge e_2)+J(\pi_2\circ\sigma^*)(h(p))Jh(p)(e_1\wedge e_3)+J(\pi_3\circ\sigma^*)(h(p))Jh(p)(e_2\wedge e_3)$$
$$=Jh(p)\bigl[J(\pi_1\circ\sigma^*)(h(p))(e_1\wedge e_2)+J(\pi_2\circ\sigma^*)(h(p))(e_1\wedge e_3)+J(\pi_3\circ\sigma^*)(h(p))(e_2\wedge e_3)\bigr].$$
Again by Proposition 10.1, the bracketed 2-vector is simply $\tilde J\sigma^*(h(p))$. Hence
$$\tilde J\sigma(p)=Jh(p)\,\tilde J\sigma^*(h(p)).\qquad\Box$$

Theorem 11.5. Let $w$ be a continuous 2-form on an open set $\Omega\subset R^3$. Let $\sigma:D\to R^3$ and $\sigma^*:D^*\to R^3$ be smoothly equivalent surfaces such that $\mathrm{trace}(\sigma)=\mathrm{trace}(\sigma^*)\subset\Omega$. Then
$$\iint_\sigma w=\iint_{\sigma^*}w.$$
Proof. Let $h:D\to D^*$ be as in the preceding lemma. Say $h(u,v)=(x,y)$. Now
$$\iint_{\sigma^*}w=\iint_{D^*}\bigl[w(\sigma^*(x,y))\bigr]\bigl(\tilde J\sigma^*(x,y)\bigr)\,dxdy.$$
We make a change of variables using the transformation $h$:
$$\iint_{\sigma^*}w=\iint_D\bigl[w(\sigma^*(h(u,v)))\bigr]\bigl(\tilde J\sigma^*(h(u,v))\bigr)\,|Jh(u,v)|\,dudv.$$
Now we use the fact that $Jh>0$, and the linearity of the 2-covector $w(\sigma^*(h(u,v)))$:
$$\iint_{\sigma^*}w=\iint_D\bigl[w(\sigma^*(h(u,v)))\bigr]\bigl(\tilde J\sigma^*(h(u,v))\,Jh(u,v)\bigr)\,dudv.$$

Since $\sigma^*\circ h=\sigma$, and $\tilde J\sigma^*(h(u,v))\,Jh(u,v)=\tilde J\sigma(u,v)$ by the preceding lemma, we have
$$\iint_{\sigma^*}w=\iint_D\bigl[w(\sigma(u,v))\bigr]\bigl(\tilde J\sigma(u,v)\bigr)\,dudv=\iint_\sigma w,\quad\text{by definition.}\qquad\Box$$



12. Differential 3-forms


Let $\Omega\subset R^3$ be an open set. A differential 3-form on $\Omega$ is a function
$$w:\Omega\to(R^3_3)^*;$$
every 3-form can be written
$$w(p)=A(p)(f_1\wedge f_2\wedge f_3)$$
for all $p\in\Omega$; $A(p)$ is called the coordinate function of $w$; $w$ is continuous (differentiable) if and only if $A$ is continuous (differentiable).
Provisional definition. If $w$ is a 1-form, we integrated $w$ over a curve (i.e., a function $\gamma:R^1\to R^3$); if $w$ was a 2-form, we integrated $w$ over a surface (i.e., a function $\sigma:R^2\to R^3$); therefore, to be consistent, we should integrate a 3-form over a function $T:R^3\to R^3$.
Let $w$ be a continuous 3-form on an open set $\Omega\subset R^3$. Let $T:D\to R^3$ be a smooth transformation ($JT(p)\neq 0$ in $D$). Also suppose that $T(D)\subset\Omega$. Then the integral of $w$ over $T$ is given by
$$\iiint_T w=\iiint_D\bigl[w(T(u,v,w))\bigr]\bigl(\tilde JT(u,v,w)\bigr)\,dudvdw.$$

Proposition 12.1. Let $w$ be a continuous 3-form on an open set $\Omega$; let $T:D\to R^3$ be a smooth transformation such that $T(D)\subset\Omega$. Let $A$ be the coordinate function of $w$. Then if $T$ is 1-1 on $D$ and $JT(p)>0$ for all $p\in D$,
$$\iiint_T w=\iiint_{T(D)}A.$$
Proof. In the defining formula for $\iiint_T w$, we note:
1) $w(T(u,v,w))=A(T(u,v,w))(f_1\wedge f_2\wedge f_3)$,
2) $\tilde JT(u,v,w)=JT(u,v,w)(e_1\wedge e_2\wedge e_3)$.
Now $(f_1\wedge f_2\wedge f_3)(e_1\wedge e_2\wedge e_3)=1$, so
$$\iiint_T w=\iiint_D A(T(u,v,w))\,|JT(u,v,w)|\,dudvdw,$$

and the hypothesis that $T$ is 1-1 enables us to apply the change of variables theorem, so
$$\iiint_D A(T(u,v,w))\,|JT(u,v,w)|\,dudvdw=\iiint_{T(D)}A(x,y,z)\,dxdydz.\qquad\Box$$


Remark 12.2. If the additional hypotheses on T are not satisfied, then
the proposition is false, in general.
For the purposes of this note, we shall only be interested in integrat-
ing a 3-form w over a transformation T which satisfies the hypotheses
of the preceding proposition. Accordingly, we abandon our provisional
definition, and substitute in its place the following definition.
Definition 12.3. Let $w$ be a continuous 3-form on $\Omega$; say $w(x,y,z)=A(x,y,z)(f_1\wedge f_2\wedge f_3)$; let $D$ be a subset of $\Omega$ having volume. Then the integral of $w$ over $D$ is given by
$$\iiint_D w=\iiint_D A(x,y,z)\,dxdydz.$$
We can, of course, consider this definition as a special case of the provisional definition simply by taking $T:D\to R^3$ to be the identity transformation on $D$.
Lemma 12.4. Let $D,D^*\subset R^3$ be open, $T:D\to R^3$, $T^*:D^*\to R^3$, where $T,T^*\in C^1$. Suppose $h:D\to D^*$, $h\in C^1$, and $T=T^*\circ h$ on $D$. Then
$$\tilde JT(p)=\tilde JT^*(h(p))\,Jh(p)$$
for every $p\in D$.
Proof. The proof is left to the reader. □

(Diagrammatically: $T=T^*\circ h$, i.e. the triangle $D\xrightarrow{\ T\ }R^3$, $D\xrightarrow{\ h\ }D^*\xrightarrow{\ T^*\ }R^3$ commutes.)
Theorem 12.5. Let $w:\Omega\to(R^3_3)^*$ be a continuous 3-form. Let $T:D\to R^3$, $T^*:D^*\to R^3$ be smoothly equivalent transformations so that $T(D)=T^*(D^*)\subset\Omega$. Then
$$\iiint_T w=\iiint_{T^*}w.$$
Proof. The proof is left to the reader. □



13. The exterior algebra of R3


13.1. Exterior products of k-covectors (k = 0, 1, 2, 3). Thus far, we have considered the spaces $(R^3_1)^*$, $(R^3_2)^*$, $(R^3_3)^*$ of 1-covectors, 2-covectors, and 3-covectors, respectively, as separate entities; the spaces are disjoint. Each space is, moreover, endowed with a vector space structure. We now wish to show that these spaces are naturally interrelated by a multiplication. Define $(R^3_0)^*=R^1$; this is merely a convention.
By the same reasoning as in Remark 5.4, we note that $(R^3_4)^*=0$, so there is no value to be received by considering $k$-covectors for $k>3$. Now let $\alpha$ be a $k$-covector, and $\beta$ an $l$-covector, where $k$ and $l$ are integers between 0 and 3. Our goal is to define a product of $\alpha$ and $\beta$. This product will be a $(k+l)$-covector. We shall write the product as $\alpha\wedge\beta$, purposely confusing it with the wedge product (which is not really a true multiplication as defined). We shall want our product to satisfy the distributive laws with respect to addition and scalar multiplication.
Every $k$-covector can be uniquely expressed as a linear combination of the basis $k$-covectors. (If $k=0$, the basis 0-covector is the real number 1.) Similarly, every $l$-covector can be uniquely expressed as a linear combination of the basis $l$-covectors.
Since we want the distributive laws to be satisfied, it is sufficient to define the products of the various basis $k$-covectors and extend the definition by linearity. Let us list these basis $k$-covectors:

Space        Basis
$(R^3_0)^*$  $\{1\}$
$(R^3_1)^*$  $\{f_1,\ f_2,\ f_3\}$
$(R^3_2)^*$  $\{f_1\wedge f_2,\ f_1\wedge f_3,\ f_2\wedge f_3\}$
$(R^3_3)^*$  $\{f_1\wedge f_2\wedge f_3\}$

Definition 13.1. The product of a basis k-covector and a basis l-


covector is given by the table below. The product of an arbitrary k-
covector and an arbitrary l-covector is obtained by linearity.

$(1)\wedge(1)=1$
$(1)\wedge(f_i)=f_i$
$(1)\wedge(f_i\wedge f_j)=f_i\wedge f_j$
$(1)\wedge(f_i\wedge f_j\wedge f_k)=f_i\wedge f_j\wedge f_k$

$(f_i)\wedge(1)=f_i$
$(f_i)\wedge(f_j)=f_i\wedge f_j$
$(f_i)\wedge(f_j\wedge f_k)=f_i\wedge f_j\wedge f_k$
$(f_i)\wedge(f_j\wedge f_k\wedge f_l)=f_i\wedge f_j\wedge f_k\wedge f_l=0$

$(f_i\wedge f_j)\wedge(1)=f_i\wedge f_j$
$(f_i\wedge f_j)\wedge(f_k)=f_i\wedge f_j\wedge f_k$
$(f_i\wedge f_j)\wedge(f_k\wedge f_l)=f_i\wedge f_j\wedge f_k\wedge f_l=0$
$(f_i\wedge f_j)\wedge(f_k\wedge f_l\wedge f_m)=f_i\wedge f_j\wedge f_k\wedge f_l\wedge f_m=0$

$(f_i\wedge f_j\wedge f_k)\wedge(1)=f_i\wedge f_j\wedge f_k$
$(f_i\wedge f_j\wedge f_k)\wedge(f_l)=f_i\wedge f_j\wedge f_k\wedge f_l=0$
$(f_i\wedge f_j\wedge f_k)\wedge(f_l\wedge f_m)=f_i\wedge f_j\wedge f_k\wedge f_l\wedge f_m=0$
$(f_i\wedge f_j\wedge f_k)\wedge(f_l\wedge f_m\wedge f_n)=f_i\wedge f_j\wedge f_k\wedge f_l\wedge f_m\wedge f_n=0.$

To make a long story short, in multiplying the basis $k$-covectors, one simply strings them together. The reader should feel that this definition is quite natural.

Example 13.2. 1)
$$(2f_1+f_2)\wedge\bigl(3(f_1\wedge f_3)+\sqrt2(f_2\wedge f_3)\bigr)$$
$$=(2f_1)\wedge(3(f_1\wedge f_3))+(2f_1)\wedge(\sqrt2(f_2\wedge f_3))+(f_2)\wedge(3(f_1\wedge f_3))+(f_2)\wedge(\sqrt2(f_2\wedge f_3))$$
$$=6(f_1\wedge f_1\wedge f_3)+2\sqrt2(f_1\wedge f_2\wedge f_3)+3(f_2\wedge f_1\wedge f_3)+\sqrt2(f_2\wedge f_2\wedge f_3)$$
$$=(2\sqrt2-3)(f_1\wedge f_2\wedge f_3).$$

2)
$$\bigl[(6)\wedge(f_2-3f_3)\bigr]\wedge(f_1-f_2+10f_3)$$
$$=\bigl[(6)\wedge(f_2)+(6)\wedge(-3f_3)\bigr]\wedge(f_1-f_2+10f_3)$$
$$=(6f_2-18f_3)\wedge(f_1-f_2+10f_3)$$
$$=6(f_2\wedge f_1)-6(f_2\wedge f_2)+60(f_2\wedge f_3)-18(f_3\wedge f_1)+18(f_3\wedge f_2)-180(f_3\wedge f_3)$$
$$=-6(f_1\wedge f_2)+18(f_1\wedge f_3)+42(f_2\wedge f_3).$$
3) $\bigl[2(f_1\wedge f_2)+(f_2\wedge f_3)\bigr]\wedge\bigl[(f_1\wedge f_3)-4(f_2\wedge f_3)\bigr]=0$, because it is a 4-covector.
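The string-them-together rule of Definition 13.1 is easy to mechanize. The sketch below (the function name `wedge` and the index-tuple encoding are my own, not from the text) represents a covector as a dict mapping increasing index tuples to coefficients, concatenates index tuples to multiply basis covectors, and sorts the result while tracking the sign of the permutation; it reproduces Example 13.2 2):

```python
from collections import defaultdict

def wedge(a, b):
    """Wedge product of covectors encoded as {index-tuple: coefficient}.
    () encodes the 0-covector basis 1; (1,) is f1; (1, 2) is f1^f2; etc."""
    out = defaultdict(int)
    for I, x in a.items():
        for J, y in b.items():
            idx = list(I + J)
            if len(set(idx)) < len(idx):
                continue            # a repeated factor strings to 0
            sign = 1                # sort indices, tracking permutation sign
            for i in range(len(idx)):
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            out[tuple(idx)] += sign * x * y
    return {I: c for I, c in out.items() if c != 0}

# Example 13.2 2): [(6) ^ (f2 - 3 f3)] ^ (f1 - f2 + 10 f3)
step1 = wedge({(): 6}, {(2,): 1, (3,): -3})          # 6 f2 - 18 f3
result = wedge(step1, {(1,): 1, (2,): -1, (3,): 10})
print(result)  # {(1, 2): -6, (2, 3): 42, (1, 3): 18}
```

The same function also exhibits the failure of commutativity noted in Remark 13.4: `wedge({(1,): 1}, {(2,): 1})` gives `{(1, 2): 1}` while the reversed product gives `{(1, 2): -1}`.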
Exercise 13.3. Evaluate the following:
a) $(f_1-2f_2+f_3)\wedge(f_1+3f_2-3f_3)$.
b) $(f_1+f_2)\wedge\bigl[(2(f_1\wedge f_2)-4(f_2\wedge f_3))\wedge(3)\bigr]$.
c) $\bigl\{(f_2\wedge f_3)\wedge(7)\bigr\}\wedge(f_1+4f_3)$.
d) $\bigl\{(6)\wedge\bigl[(f_2+f_3)\wedge(f_1\wedge f_2)\bigr]\bigr\}\wedge(f_2+f_3)$.

Remark 13.4. We have seen that multiplication of $k$-covectors is distributive with respect to addition and scalar multiplication. It is also associative (i.e., $(\alpha\wedge\beta)\wedge\gamma=\alpha\wedge(\beta\wedge\gamma)$), as the reader may verify. But it is not commutative. For example, $(f_1)\wedge(f_2)\neq(f_2)\wedge(f_1)$.
In a completely analogous manner, one may define the product of a $k$-vector and an $l$-vector, for $k,l=0,1,2,3$. The definitions and properties of this multiplication are exactly the same as for the multiplication of $k$-covectors, so we shall not go through the details.
If $\alpha$ is a $k$-covector, and $\beta$ is an $l$-covector, $\alpha\wedge\beta$ is called the exterior product of $\alpha$ and $\beta$.
Associated with the vector space $R^3$, we have 8 vector spaces: 4 spaces of $k$-vectors, $R^3_0,R^3_1,R^3_2,R^3_3$, and 4 spaces of $k$-covectors, $(R^3_0)^*,(R^3_1)^*,(R^3_2)^*,(R^3_3)^*$. These are all vector spaces, and in addition satisfy extra axioms concerning wedge products. Each $(R^3_k)^*$ may be regarded as the space of linear functions on the corresponding $R^3_k$. In addition, there is defined an exterior product of $k$-vectors and $l$-vectors, and an exterior product of $k$-covectors and $l$-covectors.
This entire structure, as described above, may be called the exterior algebra of $R^3$ or the Grassmann algebra of $R^3$ (Hermann Grassmann, 1809–1877).
The reader should have no difficulty in describing the structure of the exterior algebra of $R^n$.

Notice that we did not include the notions of differential $k$-forms among the various parts of the exterior algebra of $R^3$; the differential forms are the link between the differential and integral calculus of Euclidean spaces and the exterior algebra. Specifically, the concept of differential forms enables us to express certain geometric problems of the calculus of Euclidean spaces in such a manner that the algebraic tools of the exterior algebra can be applied to their solution.
In the next section we develop some manipulative techniques for differential forms; following that, we shall proceed to illustrate the comments made above by attacking Stokes', Green's, and Gauss' Theorems.
14. The algebra of differential forms
Thus far we have defined differential $k$-forms for $k=1,2,3$. Let us complete the spectrum for $k=0$:
A differential 0-form is a mapping
$$w:\Omega\to(R^3_0)^*=R^1.$$
Hence a 0-form is simply a real-valued function on $\Omega\subset R^3$. It is continuous or differentiable in the usual sense of continuity or differentiability of a real-valued function.
Let $w_1$ and $w_2$ both be differential $k$-forms, say with domains $\Omega_1,\Omega_2\subset R^3$, where $0\le k\le 3$. We define $w_1+w_2$ to be a differential $k$-form with domain $\Omega_1\cap\Omega_2$ by
$$(w_1+w_2)(p)=w_1(p)+w_2(p),$$
for all $p\in\Omega_1\cap\Omega_2$. Note that the addition on the right-hand side is in the space of $k$-covectors. Similarly, if $\lambda$ is a real number, then $\lambda w_1$ is a differential $k$-form with domain $\Omega_1$ given by
$$(\lambda w_1)(p)=\lambda[w_1(p)]$$
for all $p\in\Omega_1$. The interested reader may verify that these definitions turn the set of differential $k$-forms into a vector space, although we shall not need that fact.
Next we define the exterior product $w_1\wedge w_2$ of a $k$-form $w_1$ on $\Omega_1$ and an $l$-form $w_2$ on $\Omega_2$ to be a $(k+l)$-form $w_1\wedge w_2:\Omega_1\cap\Omega_2\to(R^3_{k+l})^*$ by
$$(w_1\wedge w_2)(p)=w_1(p)\wedge w_2(p).$$
Remark 14.1. This multiplication of differential forms is associative and distributive with respect to addition and scalar multiplication. That is, if $w_1,w_2,w_3$ are forms, and $\lambda_1,\lambda_2,\lambda_3$ are real numbers, then
a) $(w_1\wedge w_2)\wedge w_3=w_1\wedge(w_2\wedge w_3)$,

b) $(\lambda_1w_1+\lambda_2w_2)\wedge w_3=\lambda_1(w_1\wedge w_3)+\lambda_2(w_2\wedge w_3)$ (here assume $w_1$ and $w_2$ have the same dimension, i.e., both are $k$-forms),
c) $w_1\wedge(\lambda_2w_2+\lambda_3w_3)=\lambda_2(w_1\wedge w_2)+\lambda_3(w_1\wedge w_3)$ (here assume $w_2$ and $w_3$ have the same dimension).
Exercise 14.2. Prove the preceding remark.
Exercise 14.3. Is multiplication of forms commutative? Prove it or give a counterexample.
Notice that the algebraic structure on differential forms is simply transported from the structure of $k$-covectors. The situation is completely analogous to the determination of the structure of the space of real-valued functions from the algebraic rules for real numbers.

14.1. Exterior derivative of differential forms. Let $f:\Omega\to R^1$, $f\in C^1$. In other words, $f$ is a differentiable 0-form. We have seen earlier how the differential of $f$, $df$, may be regarded as a 1-form $df:\Omega\to(R^3_1)^*$. In fact, we note that the coordinate functions of $df$ are simply the partials of $f$; i.e.,
$$df(p)=\frac{\partial f}{\partial x}(p)f_1+\frac{\partial f}{\partial y}(p)f_2+\frac{\partial f}{\partial z}(p)f_3.$$
(Here $f_1,f_2,f_3$ are the basis covectors, not to be confused with the partials of $f$, which appear as coordinate functions.)
In this manner, starting with a differentiable 0-form $f$, we have obtained a differential 1-form, $df$. It is this procedure which we want to generalize; i.e., starting with a differentiable $k$-form $w$, we wish to define a differential $(k+1)$-form $dw$. We proceed in stages, using the definition of $df$ for a 0-form $f$ as a starting point.
Definition 14.4. (1) Let $f:\Omega\to R^1$ be a 0-form, $f\in C^1$. Then $df:\Omega\to(R^3_1)^*$ is a 1-form, given by
$$df(p)=\frac{\partial f}{\partial x}(p)f_1+\frac{\partial f}{\partial y}(p)f_2+\frac{\partial f}{\partial z}(p)f_3.$$
(2) Let $w:\Omega\to(R^3_1)^*$ be a 1-form, $w\in C^1$. Say
$$w(p)=A_1(p)f_1+A_2(p)f_2+A_3(p)f_3.$$
Then $dw:\Omega\to(R^3_2)^*$ is a 2-form, given by
$$dw(p)=(dA_1(p)\wedge f_1)+(dA_2(p)\wedge f_2)+(dA_3(p)\wedge f_3).$$
In this formula, $dA_1,dA_2,dA_3$ are the 1-forms which are defined in (1) above.

(3) Let $w:\Omega\to(R^3_2)^*$ be a 2-form, $w\in C^1$. Say
$$w(p)=A_1(p)(f_1\wedge f_2)+A_2(p)(f_1\wedge f_3)+A_3(p)(f_2\wedge f_3).$$
Then $dw:\Omega\to(R^3_3)^*$ is a 3-form, given by
$$dw(p)=(dA_1(p)\wedge(f_1\wedge f_2))+(dA_2(p)\wedge(f_1\wedge f_3))+(dA_3(p)\wedge(f_2\wedge f_3)).$$
Again, $dA_1,dA_2,dA_3$ are as in (1) above.
Example 14.5. Let $w=x^2y\,dydz-xz\,dxdy$. In our language, $w(x,y,z)=(-xz)(f_1\wedge f_2)+(x^2y)(f_2\wedge f_3)$. Using the terminology of (3) above, we have $A_1(x,y,z)=-xz$, $A_2(x,y,z)=0$, $A_3(x,y,z)=x^2y$. Using (1) above,
$$dA_1(x,y,z)=\frac{\partial A_1}{\partial x}(x,y,z)f_1+\frac{\partial A_1}{\partial y}(x,y,z)f_2+\frac{\partial A_1}{\partial z}(x,y,z)f_3=-zf_1-xf_3,$$
and similarly $dA_3(x,y,z)=2xyf_1+x^2f_2$. Thus
$$dw(x,y,z)=(dA_1(x,y,z)\wedge(f_1\wedge f_2))+(dA_3(x,y,z)\wedge(f_2\wedge f_3))$$
$$=((-zf_1-xf_3)\wedge(f_1\wedge f_2))+((2xyf_1+x^2f_2)\wedge(f_2\wedge f_3))$$
$$=(2xy-x)(f_1\wedge f_2\wedge f_3).$$
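Example 14.5 can be checked with sympy. For a 2-form written in the classical order $P\,dydz+Q\,dzdx+R\,dxdy$, Definition 14.4 (3) collapses to the single coefficient $P_x+Q_y+R_z$ of $f_1\wedge f_2\wedge f_3$ (a standard reduction, spelled out here rather than taken verbatim from the text):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# Example 14.5: w = x^2 y dydz - xz dxdy, i.e. P dydz + Q dzdx + R dxdy
P, Q, R = x**2*y, sp.Integer(0), -x*z

# d w = (P_x + Q_y + R_z) dxdydz
coeff = sp.expand(sp.diff(P, x) + sp.diff(Q, y) + sp.diff(R, z))
print(coeff)  # 2*x*y - x
```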
Exercise 14.6. 1) Let $f,g$ be 0-forms on $\Omega$, with $f,g\in C^1$, and let $\lambda,\mu$ be real numbers. Prove: $d(\lambda f+\mu g)=\lambda\,df+\mu\,dg$.
2) Let $f,g$ be 0-forms on $\Omega$, with $f,g\in C^1$. Prove: $d(fg)=(df\wedge g)+(f\wedge dg)$.
Proposition 14.7. Let $w_1,w_2$ be $k$-forms, in $C^1$, and let $\lambda,\mu$ be real numbers. Then $d(\lambda w_1+\mu w_2)=\lambda\,dw_1+\mu\,dw_2$.
Proof. For $k=0$, the proposition is true by Exercise 14.6 1). Let $k=1$. Suppose $A_1,A_2,A_3$ are the coordinate functions of $w_1$. For simplicity of notation we shall write $w_1=A_1f_1+A_2f_2+A_3f_3$. Similarly, $w_2=B_1f_1+B_2f_2+B_3f_3$. Then
$$d(\lambda w_1+\mu w_2)=d\bigl((\lambda A_1+\mu B_1)f_1+(\lambda A_2+\mu B_2)f_2+(\lambda A_3+\mu B_3)f_3\bigr)$$
$$=d(\lambda A_1+\mu B_1)\wedge f_1+d(\lambda A_2+\mu B_2)\wedge f_2+d(\lambda A_3+\mu B_3)\wedge f_3$$
$$=(\lambda\,dA_1+\mu\,dB_1)\wedge f_1+(\lambda\,dA_2+\mu\,dB_2)\wedge f_2+(\lambda\,dA_3+\mu\,dB_3)\wedge f_3$$
$$=\lambda\bigl[(dA_1\wedge f_1)+(dA_2\wedge f_2)+(dA_3\wedge f_3)\bigr]+\mu\bigl[(dB_1\wedge f_1)+(dB_2\wedge f_2)+(dB_3\wedge f_3)\bigr]$$
$$=\lambda\,dw_1+\mu\,dw_2.$$

The proof for k = 2 is exactly as above. 

This proposition tells us that the exterior derivative may be thought


of as a linear transformation from the space of k-forms to the space of
(k + 1)-forms.
Theorem 14.8. Let $w_1$ be a $k$-form, $w_2$ an $l$-form, with $w_1,w_2\in C^1$. Then
$$d(w_1\wedge w_2)=(dw_1\wedge w_2)+(-1)^k(w_1\wedge dw_2).$$
Proof. The case $k=l=0$ was proved in Exercise 14.6 2). We consider the case $k=l=1$. As in the preceding proposition, let $w_1=A_1f_1+A_2f_2+A_3f_3$, $w_2=B_1f_1+B_2f_2+B_3f_3$. Then
$$(dw_1\wedge w_2)-(w_1\wedge dw_2)$$
$$=\bigl[(dA_1\wedge f_1)+(dA_2\wedge f_2)+(dA_3\wedge f_3)\bigr]\wedge\bigl[B_1f_1+B_2f_2+B_3f_3\bigr]$$
$$\quad-\bigl[A_1f_1+A_2f_2+A_3f_3\bigr]\wedge\bigl[(dB_1\wedge f_1)+(dB_2\wedge f_2)+(dB_3\wedge f_3)\bigr]$$
$$=(dA_1\wedge f_1)\wedge(B_2f_2)+(dA_2\wedge f_2)\wedge(B_1f_1)+(dA_1\wedge f_1)\wedge(B_3f_3)$$
$$\quad+(dA_3\wedge f_3)\wedge(B_1f_1)+(dA_2\wedge f_2)\wedge(B_3f_3)+(dA_3\wedge f_3)\wedge(B_2f_2)$$
$$\quad-\bigl[(A_1f_1)\wedge(dB_2\wedge f_2)+(A_2f_2)\wedge(dB_1\wedge f_1)+(A_1f_1)\wedge(dB_3\wedge f_3)$$
$$\quad+(A_3f_3)\wedge(dB_1\wedge f_1)+(A_2f_2)\wedge(dB_3\wedge f_3)+(A_3f_3)\wedge(dB_2\wedge f_2)\bigr]$$
$$=B_2(dA_1\wedge f_1\wedge f_2)+B_1(dA_2\wedge f_2\wedge f_1)-A_1(f_1\wedge dB_2\wedge f_2)$$
$$\quad-A_2(f_2\wedge dB_1\wedge f_1)+B_3(dA_1\wedge f_1\wedge f_3)+B_1(dA_3\wedge f_3\wedge f_1)$$
$$\quad-A_1(f_1\wedge dB_3\wedge f_3)-A_3(f_3\wedge dB_1\wedge f_1)+B_3(dA_2\wedge f_2\wedge f_3)$$
$$\quad+B_2(dA_3\wedge f_3\wedge f_2)-A_2(f_2\wedge dB_3\wedge f_3)-A_3(f_3\wedge dB_2\wedge f_2)$$
$$=\bigl[A_1\,dB_2+B_2\,dA_1-A_2\,dB_1-B_1\,dA_2\bigr]\wedge(f_1\wedge f_2)+\bigl[A_1\,dB_3+B_3\,dA_1-A_3\,dB_1-B_1\,dA_3\bigr]\wedge(f_1\wedge f_3)$$
$$\quad+\bigl[A_2\,dB_3+B_3\,dA_2-A_3\,dB_2-B_2\,dA_3\bigr]\wedge(f_2\wedge f_3)$$
$$=d(A_1B_2-A_2B_1)\wedge(f_1\wedge f_2)+d(A_1B_3-A_3B_1)\wedge(f_1\wedge f_3)+d(A_2B_3-A_3B_2)\wedge(f_2\wedge f_3).$$
The last step follows by Exercise 14.6. Now $(A_1B_2-A_2B_1)$, $(A_1B_3-A_3B_1)$, $(A_2B_3-A_3B_2)$ are the coordinate functions of $w_1\wedge w_2$. Hence this final expression is $d(w_1\wedge w_2)$.

The cases $k=0$, $l=1$, and $k=1$, $l=0$ are similar to, but simpler than, the case just considered, and we leave the proofs to the reader. Since $d(w_1\wedge w_2)$ is a $(k+l+1)$-form, we see that in all the remaining cases ($k=1,l=2;\ \dots;\ k=3,l=3$) both sides are zero and there is nothing to prove. □
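As a symbolic sanity check of Theorem 14.8 in the case $k=l=1$ (the sample coordinate functions below are arbitrary choices of mine, not from the text), one can encode 1-, 2-, and 3-forms on $R^3$ by their coordinate-function tuples, implement $d$ and $\wedge$ coordinatewise, and compare both sides:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def d1(A):
    """d of the 1-form A1 f1 + A2 f2 + A3 f3, as (f1^f2, f1^f3, f2^f3) coeffs."""
    A1, A2, A3 = A
    return (sp.diff(A2, x) - sp.diff(A1, y),
            sp.diff(A3, x) - sp.diff(A1, z),
            sp.diff(A3, y) - sp.diff(A2, z))

def d2(C):
    """d of the 2-form C1 f1^f2 + C2 f1^f3 + C3 f2^f3, as a f1^f2^f3 coeff."""
    return sp.diff(C[2], x) - sp.diff(C[1], y) + sp.diff(C[0], z)

def w11(A, B):
    """(1-form) ^ (1-form), as 2-form coefficients."""
    return (A[0]*B[1] - A[1]*B[0], A[0]*B[2] - A[2]*B[0], A[1]*B[2] - A[2]*B[1])

def w21(C, B):
    """(2-form) ^ (1-form), as a f1^f2^f3 coefficient."""
    return C[0]*B[2] - C[1]*B[1] + C[2]*B[0]

A = (x*y, z, x)      # w1 = xy f1 + z f2 + x f3
B = (y, x*z, y**2)   # w2 = y f1 + xz f2 + y^2 f3

lhs = d2(w11(A, B))                     # d(w1 ^ w2)
rhs = w21(d1(A), B) - w21(d1(B), A)     # (dw1 ^ w2) - (w1 ^ dw2)
print(sp.simplify(lhs - rhs))  # 0
```

Here $w_1\wedge dw_2$ is computed as $dw_2\wedge w_1$, which is legitimate because a 2-form and a 1-form commute.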
Lemma 14.9. Let $A$ be a 0-form on $\Omega$, $A\in C^2$. Then
$$d(dA)=0.$$
Proof. Since $dA=A_xf_1+A_yf_2+A_zf_3$, we obtain
$$d(dA)=(dA_x\wedge f_1)+(dA_y\wedge f_2)+(dA_z\wedge f_3)$$
$$=(A_{xx}f_1+A_{xy}f_2+A_{xz}f_3)\wedge f_1+(A_{yx}f_1+A_{yy}f_2+A_{yz}f_3)\wedge f_2+(A_{zx}f_1+A_{zy}f_2+A_{zz}f_3)\wedge f_3$$
$$=(A_{yx}-A_{xy})(f_1\wedge f_2)+(A_{zx}-A_{xz})(f_1\wedge f_3)+(A_{zy}-A_{yz})(f_2\wedge f_3).$$
Since $A\in C^2$, the mixed partial derivatives in the final expression are equal. Hence $d(dA)=0$. □
Theorem 14.10. Let $w$ be a $k$-form on $\Omega$, $w\in C^2$. Then
$$d(dw)=0.$$
Proof. The case $k=0$ is covered by the preceding lemma. The case $k=1$ is given as an exercise (see below). Since $d(dw)$ is always a $(k+2)$-form, we find that for $k=2,3$, $d(dw)$ is a 4- or 5-form, which is automatically 0. □
Exercise 14.11. Prove the above theorem for $k=1$. Do not use the partial derivatives directly. The proof one should give uses: 1) the preceding lemma, 2) the preceding theorem on the exterior derivative of a wedge product.
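A quick symbolic check of $d\circ d=0$ for $k=0$ and $k=1$ (this is only a partial-derivative verification, not a substitute for the wedge-product proof the exercise asks for): sympy stores mixed partials of an undetermined function in a canonical order, so the coefficients cancel identically:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A, A1, A2, A3 = (sp.Function(n)(x, y, z) for n in ('A', 'A1', 'A2', 'A3'))
dx = lambda f: sp.diff(f, x)
dy = lambda f: sp.diff(f, y)
dz = lambda f: sp.diff(f, z)

# k = 0 (Lemma 14.9): the three coefficients of d(dA)
k0 = [dx(dy(A)) - dy(dx(A)), dx(dz(A)) - dz(dx(A)), dy(dz(A)) - dz(dy(A))]

# k = 1: the f1^f2^f3 coefficient of d(d(A1 f1 + A2 f2 + A3 f3))
k1 = dx(dy(A3) - dz(A2)) - dy(dx(A3) - dz(A1)) + dz(dx(A2) - dy(A1))

print([sp.simplify(e) for e in k0], sp.simplify(k1))  # [0, 0, 0] 0
```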
15. Effects of a transformation on differential forms
In this section, we develop the concept of change of variables for differential forms. We shall present a unified treatment.
Definition 15.1. Suppose $T:D\to R^m$, $T\in C^1$, where $D\subset R^n$ is open. For each point $p\in D$, there is defined a linear mapping $dT(p):R^n\to R^m$, called the differential of $T$. We now claim that $dT(p)$ induces, in a natural way, linear transformations $d_kT(p)$ from the space of $k$-vectors in $R^n$ into the space of $k$-vectors in $R^m$,
$$d_kT(p):R^n_k\to R^m_k,$$

by the formula
$$\bigl[d_kT(p)\bigr](e_{i_1}\wedge\dots\wedge e_{i_k})=dT(p,e_{i_1})\wedge\dots\wedge dT(p,e_{i_k}).$$
There are several remarks to be made concerning this definition. In our applications, we shall have $m,n=2$ or 3, so the formulas will become somewhat simpler. However, we shall have to deal with several specific cases of this definition, namely $n=m=3$, $n=2,m=3$, and $n=m=2$. It is for this reason that we have presented one unifying definition.
Suppose, for example, $n=m=3$. Then we have three linear transformations:
$$d_1T(p):R^3_1\to R^3_1,\qquad d_2T(p):R^3_2\to R^3_2,\qquad d_3T(p):R^3_3\to R^3_3.$$
They are specified by the following rules: $d_1T(p)(e_1)=dT(p,e_1)$; $d_1T(p)(e_2)=dT(p,e_2)$; $d_1T(p)(e_3)=dT(p,e_3)$. Also
$$d_2T(p)(e_1\wedge e_2)=dT(p,e_1)\wedge dT(p,e_2),$$
$$d_2T(p)(e_1\wedge e_3)=dT(p,e_1)\wedge dT(p,e_3),$$
$$d_2T(p)(e_2\wedge e_3)=dT(p,e_2)\wedge dT(p,e_3),$$
and $d_3T(p)(e_1\wedge e_2\wedge e_3)=dT(p,e_1)\wedge dT(p,e_2)\wedge dT(p,e_3)$.
To return to the definition, we notice that we have only defined
the function dk T (p) on the basis k-vectors; two questions immediately
arise:
(1) How do we know that there is a linear transformation from $R^n_k$
to $R^m_k$ whose values on the basis $k$-vectors are those given?
(2) Even if there is such a linear transformation, how do we know
there is only one? That is, what right do we have simply to
specify the values of the transformation on the few basis vec-
tors?
Let us state the underlying abstract principle that is involved:
Let V be an abstract vector space, and let {v1 , . . . , vs } be a basis for
V . If W is any vector space, and {w1 , . . . , ws } are any given vectors of
W , then there is one and only one linear transformation T : V W
such that T (v1 ) = w1 , . . . , T (vs ) = ws .
Using this principle, the mappings dk T (p) are all linear transforma-
tions, and they are well-defined, i.e. unambiguous.
Notice that for each choice of $n$ and $m$, there are only a finite number of non-trivial mappings to be defined; namely, we only need to define $d_kT(p)$ for $k\le\min(n,m)$. For when $k>\min(n,m)$, one of the two

spaces $R^n_k$, $R^m_k$ reduces to 0. For example, if $n=4$, $m=6$, we would only define $d_1T(p)$, $d_2T(p)$, $d_3T(p)$, $d_4T(p)$.
Finally, to make the definition complete, let us define $d_0T(p)$. Recall that for any $n$, $R^n_0$, the space of 0-vectors in $n$-space, is merely defined to be $R^1$. As a matter of convention, then, we define
$$d_0T(p):R^1\to R^1$$
to be the identity mapping: $\bigl[d_0T(p)\bigr](\lambda)=\lambda$.
Remark 15.2. Suppose $T:D\to R^m$, $D\subset R^n$, $T\in C^1$. The reader may verify that for all $u_1,\dots,u_k\in R^n$, $\bigl[d_kT(p)\bigr](u_1\wedge\dots\wedge u_k)=dT(p,u_1)\wedge\dots\wedge dT(p,u_k)$. The purpose of introducing these linear transformations $d_kT(p)$ is to enable us to define what is meant by the transform of a $k$-form.
Definition 15.3. Let $T:D\to R^m$, where $D$ is an open set in $R^n$, and $T\in C^1$. Let $w:\Omega\to(R^m_k)^*$ be a differential $k$-form in $m$-space. We suppose that $T(D)\subset\Omega$. We define the transform of $w$ by $T$ to be a differential $k$-form in $n$-space, defined on $D$:
$$T^*w:D\to(R^n_k)^*.$$
It is given by the following formula: for each $p\in D$, $T^*w(p)$ is a $k$-covector in $n$-space, whose value at a $k$-vector $\alpha$ is
$$\bigl[T^*w(p)\bigr](\alpha)=\bigl[w(T(p))\bigr]\bigl(\bigl[d_kT(p)\bigr](\alpha)\bigr).$$
Again there are many remarks to be made. Let us study this formula carefully. The left-hand side is what we are defining. We want to know what $T^*w$ is, so we must define what the $k$-covector $T^*w(p)$ is for any $p\in D$. Then, to know what a $k$-covector is, it is sufficient to know what its value is for an arbitrary $k$-vector $\alpha$.
On the right-hand side, $T(p)$ is a point in $m$-space; in fact, $T(p)\in\Omega$, since by assumption $T(D)\subset\Omega$. The $k$-form $w$ is therefore defined at $T(p)$, and $w(T(p))$ is a $k$-covector in $m$-space. Also, $\alpha$ is a $k$-vector in $n$-space, so by Definition 15.1, $[d_kT(p)](\alpha)$ is a $k$-vector in $m$-space. The right-hand side therefore stands for the effect of the operation of a $k$-covector in $m$-space on a $k$-vector in $m$-space, which is a real number, as desired. So at least everything makes sense. However, there is one important detail to be checked. We have claimed in our definition that for each $p\in D$, $T^*w(p)$ is a $k$-covector. We must verify that the function $T^*w(p)$ as defined is indeed linear, i.e.
$$\bigl[T^*w(p)\bigr](\lambda_1\alpha_1+\lambda_2\alpha_2)=\lambda_1\bigl[T^*w(p)\bigr](\alpha_1)+\lambda_2\bigl[T^*w(p)\bigr](\alpha_2).$$

Proof. By definition,
$$\bigl[T^*w(p)\bigr](\lambda_1\alpha_1+\lambda_2\alpha_2)=\bigl[w(T(p))\bigr]\bigl(\bigl[d_kT(p)\bigr](\lambda_1\alpha_1+\lambda_2\alpha_2)\bigr).$$
But $d_kT(p)$ and $w(T(p))$ are linear transformations, so we obtain
$$\bigl[w(T(p))\bigr]\bigl(\lambda_1[d_kT(p)](\alpha_1)+\lambda_2[d_kT(p)](\alpha_2)\bigr)$$
$$=\lambda_1\bigl[w(T(p))\bigr]\bigl([d_kT(p)](\alpha_1)\bigr)+\lambda_2\bigl[w(T(p))\bigr]\bigl([d_kT(p)](\alpha_2)\bigr)$$
$$\overset{\text{def}}{=}\lambda_1\bigl[T^*w(p)\bigr](\alpha_1)+\lambda_2\bigl[T^*w(p)\bigr](\alpha_2).\qquad\Box$$
Example 15.4. Let $T:R^2\to R^2$ be given by $T(u,v)=(u^2+v,\,v)$. Clearly $T\in C^1$. We note that
$$dT(u,v)=\begin{pmatrix}2u&1\\0&1\end{pmatrix}.$$
Now we must transform a $k$-form in the plane. Therefore we consider $w(x,y)=xy\,dx$; i.e. $w(x,y)=xyf_1$. $w$ is a 1-form, and $T^*w$ will also be a 1-form, again in the plane. The usual way of representing such a 1-form is by its coordinate functions. For notational convenience, $w$ is a 1-form in the $(x,y)$-plane, $T$ is a transformation from the $(u,v)$-plane to the $(x,y)$-plane, so $T^*w$ is a 1-form in the $(u,v)$-plane. It will be given by $T^*w(u,v)=A_1(u,v)f_1+A_2(u,v)f_2$. We compute these coordinate functions as follows:
$$A_1(u,v)=\bigl[A_1(u,v)f_1+A_2(u,v)f_2\bigr](e_1)=\bigl[T^*w(u,v)\bigr](e_1)$$
$$\overset{\text{def}}{=}\bigl[w(T(u,v))\bigr]\bigl(d_1T(u,v)(e_1)\bigr)=\bigl[w(T(u,v))\bigr]\bigl(dT[(u,v),e_1]\bigr).$$
Now $dT[(u,v),e_1]=(2u,0)$, and $w(T(u,v))=(u^2+v)v\,f_1=(u^2v+v^2)f_1$. Thus
$$A_1(u,v)=\bigl((u^2v+v^2)f_1\bigr)(2u,0)=(u^2v+v^2)(2u)=2u^3v+2uv^2.$$
In a similar manner, $A_2(u,v)=\bigl[T^*w(u,v)\bigr](e_2)=\bigl[(u^2v+v^2)f_1\bigr](1,1)=u^2v+v^2$. Thus $T^*w(u,v)=(2u^3v+2uv^2)f_1+(u^2v+v^2)f_2$.
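Example 15.4 agrees with the familiar substitution recipe: put $x=u^2+v$, $y=v$ into $xy\,dx$ and expand $dx=x_u\,du+x_v\,dv$. A sympy sketch of this check:

```python
import sympy as sp

u, v = sp.symbols('u v')
# T(u, v) = (u^2 + v, v); pull back w = xy dx by substituting and using
# dx = x_u du + x_v dv
x, y = u**2 + v, v
xy = x*y
A1 = sp.expand(xy * sp.diff(x, u))  # coefficient of du (i.e., of f1)
A2 = sp.expand(xy * sp.diff(x, v))  # coefficient of dv (i.e., of f2)
print(A1, A2)  # 2*u**3*v + 2*u*v**2 u**2*v + v**2
```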
Example 15.5. Let $T:R^2\to R^3$ be given by $T(u,v)=(u-v,\,uv,\,v^2)$, $T\in C^1$, and
$$dT(u,v)=\begin{pmatrix}1&-1\\v&u\\0&2v\end{pmatrix}.$$
$T^*$ will transform $k$-forms in 3-space into $k$-forms in 2-space. In particular, $T^*$ will transform $k$-forms in $(x,y,z)$-space into $k$-forms in $(u,v)$-space. Let us put $k=2$. So let $w(x,y,z)=x(f_1\wedge f_2)+yz(f_2\wedge f_3)$.

Now $T^*w$ will be a 2-form in the $(u,v)$-plane, so
$$T^*w(u,v)=A(u,v)(f_1\wedge f_2).$$
Note that $f_1,f_2$ in this formula stand for the basis covectors in $R^2$, while $f_1,f_2,f_3$ in the formula for $w$ stand for the basis covectors in $R^3$. Now $A(u,v)$ may be determined by
$$A(u,v)=\bigl[A(u,v)(f_1\wedge f_2)\bigr](e_1\wedge e_2)=\bigl[T^*w(u,v)\bigr](e_1\wedge e_2)$$
$$=\bigl[w(T(u,v))\bigr]\bigl(d_2T(u,v)(e_1\wedge e_2)\bigr)=\bigl[w(u-v,uv,v^2)\bigr]\bigl((dT(u,v)(e_1))\wedge(dT(u,v)(e_2))\bigr).$$
Here $dT(u,v)(e_1)=(1,v,0)$ and $dT(u,v)(e_2)=(-1,u,2v)$. Hence
$$A(u,v)=\bigl[w(u-v,uv,v^2)\bigr]\bigl((1,v,0)\wedge(-1,u,2v)\bigr)$$
$$=(u-v)\,(f_1\wedge f_2)\bigl[(1,v,0)\wedge(-1,u,2v)\bigr]+(uv^3)\,(f_2\wedge f_3)\bigl[(1,v,0)\wedge(-1,u,2v)\bigr]$$
$$=(u-v)\begin{vmatrix}f_1(1,v,0)&f_1(-1,u,2v)\\f_2(1,v,0)&f_2(-1,u,2v)\end{vmatrix}+(uv^3)\begin{vmatrix}f_2(1,v,0)&f_2(-1,u,2v)\\f_3(1,v,0)&f_3(-1,u,2v)\end{vmatrix}$$
$$=(u-v)\begin{vmatrix}1&-1\\v&u\end{vmatrix}+(uv^3)\begin{vmatrix}v&u\\0&2v\end{vmatrix}$$
$$=(u-v)(u+v)+(uv^3)(2v^2)=u^2-v^2+2uv^5.$$
Therefore $T^*w(u,v)=(u^2-v^2+2uv^5)(f_1\wedge f_2)$.
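Example 15.5 can likewise be verified with sympy: the value of $f_i\wedge f_j$ on $dT(e_1)\wedge dT(e_2)$ is the $2\times2$ minor of $dT(u,v)$ built from rows $i,j$:

```python
import sympy as sp

u, v = sp.symbols('u v')
# T(u, v) = (u - v, uv, v^2); pull back w = x (f1^f2) + yz (f2^f3)
x, y, z = u - v, u*v, v**2
dT = sp.Matrix([[sp.diff(c, u), sp.diff(c, v)] for c in (x, y, z)])

def minor(i, j):
    """(fi ^ fj) evaluated on dT(e1) ^ dT(e2): the minor from rows i, j."""
    return sp.Matrix.vstack(dT.row(i), dT.row(j)).det()

A = sp.expand(x*minor(0, 1) + y*z*minor(1, 2))
print(A == sp.expand(u**2 - v**2 + 2*u*v**5))  # True
```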
Exercise 15.6. a) Let $T:R^3\to R^2$ be given by
$$T:\quad u=x^2+z,\quad v=x+y,$$
and $w(u,v)=uvf_1-vf_2$. Compute $T^*w$.
b) Let $T:R^3\to R^3$ be given by
$$T:\quad x=r+s,\quad y=s^2t,\quad z=2t,$$
and $w(x,y,z)=(xy-z^2)\,f_1\wedge f_2\wedge f_3$. Compute $T^*w$.


Let us consider what happens to our Definition 15.3 in case $k=0$. By definition, a 0-form is simply a real-valued function. Suppose then that $D\subset R^n$ is open, and $T:D\to R^m$, $T\in C^1$. Let $w:\Omega\to(R^m_0)^*$

be a 0-form in $m$-space, and let $T(D)\subset\Omega$. Then $T^*w$ should be a 0-form in $n$-space, $T^*w:D\to(R^n_0)^*$, specified by
$$\bigl[T^*w(p)\bigr](\lambda)=\bigl[w(T(p))\bigr]\bigl(\bigl[d_0T(p)\bigr](\lambda)\bigr).$$
Here $\lambda$ is a 0-vector in $R^n_0=R^1$. By Definition 15.1, $d_0T(p)(\lambda)=\lambda$. Hence we have
$$\bigl[T^*w(p)\bigr](\lambda)=\bigl[w(T(p))\bigr](\lambda).$$
Now to make sense of this equation, we must know how a 0-covector operates on a 0-vector. We have not defined this notion before, as we should have done, so we give the definition now. If $\alpha\in(R^n_0)^*$, and $\lambda\in R^n_0$, then $(\alpha)(\lambda)=\alpha\lambda$. In particular, then, $\bigl[T^*w(p)\bigr](\lambda)$ and $\bigl[w(T(p))\bigr](\lambda)$ stand for ordinary real number multiplication. Now put $\lambda=1$, and we have $T^*w(p)=w(T(p))$. Since this holds for every $p\in D$, $T^*w=w\circ T$. Thus we see that for the case of 0-forms, the notion of transform by $T$ corresponds to the notion of inducing a change of coordinates by $T$.
We wish to develop a formula concerning the interrelation between the exterior derivative and the transform by $T$. This requires a number of preliminary results which have independent interest as well. For the next several pages we shall always consider $T:D\to R^m$, $D\subset R^n$ open, $T\in C^1$.
Proposition 15.7. $T^*$ may be thought of as a linear transformation from the space of $k$-forms in $m$-space to the space of $k$-forms in $n$-space.
Proof. The proof is left to the reader. □

The next result applies to the interaction of $T^*$ with the exterior product. First we need some lemmas.
Lemma 15.8. If $w$ is a $k$-form in $m$-space, then the transform $T^*w$ may also be described by the formula
$$T^*w(p)=w(T(p))\circ d_kT(p).$$
Proof. This is obvious from the definition. (Diagrammatically: $T^*w(p)$ is the composite $R^n_k\xrightarrow{\ d_kT(p)\ }R^m_k\xrightarrow{\ w(T(p))\ }R^1$.) □



Lemma 15.9. Let $\alpha$ be a simple $k$-covector in $m$-space; say $\alpha=g_1\wedge\dots\wedge g_k\in(R^m_k)^*$, where $g_i\in(R^m_1)^*$. Then $\alpha\circ d_kT(p)$ is a simple $k$-covector in $n$-space; in fact,
$$\alpha\circ d_kT(p)=\bigl(g_1\circ dT(p)\bigr)\wedge\dots\wedge\bigl(g_k\circ dT(p)\bigr).$$
Proof. We must show that both sides of the equation stand for the same function in $(R^n_k)^*$; so let $(u_1\wedge\dots\wedge u_k)\in R^n_k$. Then, using the formula on p. 17,
$$\bigl[\alpha\circ d_kT(p)\bigr](u_1\wedge\dots\wedge u_k)=\alpha\bigl(dT(p)(u_1)\wedge\dots\wedge dT(p)(u_k)\bigr)
=\begin{vmatrix}g_1(dT(p)(u_1))&\dots&g_1(dT(p)(u_k))\\ \vdots&&\vdots\\ g_k(dT(p)(u_1))&\dots&g_k(dT(p)(u_k))\end{vmatrix}.$$
Now $g_i(dT(p)(u_j))=\bigl[g_i\circ dT(p)\bigr](u_j)$. So we get
$$\begin{vmatrix}[g_1\circ dT(p)](u_1)&\dots&[g_1\circ dT(p)](u_k)\\ \vdots&&\vdots\\ [g_k\circ dT(p)](u_1)&\dots&[g_k\circ dT(p)](u_k)\end{vmatrix}
=\bigl[(g_1\circ dT(p))\wedge\dots\wedge(g_k\circ dT(p))\bigr](u_1\wedge\dots\wedge u_k).$$
Thus the functions $\alpha\circ d_kT(p)$ and $(g_1\circ dT(p))\wedge\dots\wedge(g_k\circ dT(p))$ have the same values on simple $k$-vectors, and so by linearity they are the same function. □
Lemma 15.10. Let α_1 be a k-covector in R^m, and α_2 an l-covector in R^m. Then

    (α_1 ∧ α_2) ∘ d_{k+l}T(p) = (α_1 ∘ d_k T(p)) ∧ (α_2 ∘ d_l T(p)).

Proof. First consider the case where α_1 and α_2 are simple; say α_1 = g_1 ∧ … ∧ g_k ∈ (R^m_k)* and α_2 = h_1 ∧ … ∧ h_l ∈ (R^m_l)*. Then, using Lemma 15.9,

    (α_1 ∧ α_2) ∘ d_{k+l}T(p) = (g_1 ∧ … ∧ g_k ∧ h_1 ∧ … ∧ h_l) ∘ d_{k+l}T(p)
        = (g_1 ∘ dT(p)) ∧ … ∧ (g_k ∘ dT(p)) ∧ (h_1 ∘ dT(p)) ∧ … ∧ (h_l ∘ dT(p))
        = (α_1 ∘ d_k T(p)) ∧ (α_2 ∘ d_l T(p)).

In the general case we may write α_1 = ∑_{i=1}^s λ_i α_i and α_2 = ∑_{j=1}^t μ_j β_j, where the α_i's are simple k-covectors and the β_j's are simple l-covectors. By the bilinearity of the exterior product,

    α_1 ∧ α_2 = ∑_{i=1}^s ∑_{j=1}^t λ_i μ_j (α_i ∧ β_j).

Therefore, using the first case, we obtain

    (α_1 ∧ α_2) ∘ d_{k+l}T(p) = ∑_{i=1}^s ∑_{j=1}^t λ_i μ_j (α_i ∧ β_j) ∘ d_{k+l}T(p)
        = ∑_{i=1}^s ∑_{j=1}^t λ_i μ_j (α_i ∘ d_k T(p)) ∧ (β_j ∘ d_l T(p))
        = [ ∑_{i=1}^s λ_i (α_i ∘ d_k T(p)) ] ∧ [ ∑_{j=1}^t μ_j (β_j ∘ d_l T(p)) ]
        = [ (∑_{i=1}^s λ_i α_i) ∘ d_k T(p) ] ∧ [ (∑_{j=1}^t μ_j β_j) ∘ d_l T(p) ]
        = (α_1 ∘ d_k T(p)) ∧ (α_2 ∘ d_l T(p)). □
Theorem 15.11. If w_1 is a k-form in R^m, and w_2 is an l-form in R^m, then

    T*(w_1 ∧ w_2) = T*w_1 ∧ T*w_2.

Proof. Let p ∈ D. Using Lemma 15.8, Lemma 15.10, and again Lemma 15.8, we obtain

    T*(w_1 ∧ w_2)(p) = (w_1 ∧ w_2)(T(p)) ∘ d_{k+l}T(p)
        = (w_1(T(p)) ∧ w_2(T(p))) ∘ d_{k+l}T(p)
        = (w_1(T(p)) ∘ d_k T(p)) ∧ (w_2(T(p)) ∘ d_l T(p))
        = (T*w_1(p)) ∧ (T*w_2(p)) = (T*w_1 ∧ T*w_2)(p). □
Theorem 15.12. If w is a k-form in R^m of class C^1, and T : D → R^m is as above, then

    T*(dw) = d(T*w).

Proof. The method of proof is similar to the proof of the preceding theorem.

Exercise 15.13. Let k = 0 and n = m = 3, and prove the theorem in this special case.

The general case now follows quickly from what we have already established. First we make two remarks. Let f_1, …, f_m be the basis 1-forms in R^m, and suppose T : D → R^m is as above, with coordinate functions

    y_1 = φ_1(x_1, …, x_n)
    ⋮
    y_m = φ_m(x_1, …, x_n).

Then we observe that T*f_i = dφ_i. For indeed, in the proof of Lemma 15.10 we showed T*f_i = (φ_i)_{x_1} f_1 + … + (φ_i)_{x_n} f_n = dφ_i.

As our second remark, we note df_i = 0. For indeed, if we let F_i : R^n → R^1 be given by F_i(x_1, …, x_n) = x_i, then we have seen that dF_i = f_i. Thus df_i = d(dF_i) = 0.
To prove the theorem, we proceed in steps; let k = 1, and let w be a 1-form in R^m. Then w = A_1 f_1 + … + A_m f_m expresses w as a sum of wedge products of 0-forms and 1-forms. Now

    d(T*(A_i f_i)) = d((T*A_i) ∧ (T*f_i))
        = (d(T*A_i)) ∧ (T*f_i) + (T*A_i) ∧ (d(T*f_i)).

As we have noted, T*f_i = dφ_i, so d(T*f_i) = d(dφ_i) = 0; thus the second term is 0, and we get (d(T*A_i)) ∧ (T*f_i), which, by Exercise 15.13 for k = 0, is (T*(dA_i)) ∧ (T*f_i) = T*(dA_i ∧ f_i). Finally, d(A_i f_i) = (dA_i ∧ f_i) + (A_i ∧ df_i) = dA_i ∧ f_i by the second remark. Thus T*(dA_i ∧ f_i) = T*(d(A_i f_i)). Hence d(T*(A_i f_i)) = T*(d(A_i f_i)).
Thus the desired formula holds for each of the terms of the sum representing w. We now observe that since T* and d are both linear, if T*(d(w_i)) = d(T*(w_i)) for i = 1, 2, …, s, then the formula holds for their sum:

    T*(d(w_1 + … + w_s)) = T*(d(w_1) + … + d(w_s))
        = T*(d(w_1)) + … + T*(d(w_s))
        = d(T*(w_1)) + … + d(T*(w_s))
        = d(T*(w_1) + … + T*(w_s)) = d(T*(w_1 + … + w_s)).

Since the formula does hold for each of the terms A_i f_i which add up to w, it holds for w.
To complete the theorem, we must establish the formula for any k. We note that if the formula T*d = dT* holds for forms w_1 and w_2 (of arbitrary dimension), then it holds for their wedge product: let w_1 be a k-form; then

    T*(d(w_1 ∧ w_2)) = T*((dw_1 ∧ w_2) + (-1)^k (w_1 ∧ dw_2))
        = T*(dw_1 ∧ w_2) + (-1)^k T*(w_1 ∧ dw_2)
        = T*(dw_1) ∧ T*(w_2) + (-1)^k T*(w_1) ∧ T*(dw_2)
        = d(T*w_1) ∧ T*w_2 + (-1)^k T*w_1 ∧ d(T*w_2)
        = d(T*w_1 ∧ T*w_2) = d(T*(w_1 ∧ w_2)).

This said, we now note that if w is a k-form, k ≥ 2, then w can be built up from forms of dimension less than k by sums and wedge products; hence if the formula T*d = dT* holds for forms of dimension less than k, it holds for forms of dimension k. Thus the proof is complete. □
To illustrate this final assertion, consider a 2-form w in R^3. As we have seen, w has a coordinate representation w(p) = A_1(p)(f_1 ∧ f_2) + A_2(p)(f_1 ∧ f_3) + A_3(p)(f_2 ∧ f_3). But this very formula means that w = A_1 f_1 ∧ f_2 + A_2 f_1 ∧ f_3 + A_3 f_2 ∧ f_3. The A_i's are 0-forms and the f_i's are 1-forms. Since we have proved the formula in the cases k = 0, 1, it follows for the 2-form w by our remarks.
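The identity T*(dw) = d(T*w) is easy to test with a computer algebra system. The following sketch (using sympy; the particular map T and the coefficients A, B are hypothetical illustrations, not taken from the text) verifies it for a 1-form w = A dy_1 + B dy_2 in R^2, where both sides are 2-forms with a single coefficient.

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
# a hypothetical smooth map T : R^2 -> R^2 and 1-form w = A dy1 + B dy2
phi1, phi2 = x1**2 + x2, sp.sin(x1)*x2
A, B = y1*y2**2, y1 + y2

subsT = {y1: phi1, y2: phi2}

# pullback of w: T*w = (A∘T) dphi1 + (B∘T) dphi2, expressed in dx1, dx2
P = A.subs(subsT)*sp.diff(phi1, x1) + B.subs(subsT)*sp.diff(phi2, x1)  # dx1-coeff
Q = A.subs(subsT)*sp.diff(phi1, x2) + B.subs(subsT)*sp.diff(phi2, x2)  # dx2-coeff

# d(T*w) = (Q_x1 - P_x2) dx1 ∧ dx2
d_pullback = sp.diff(Q, x1) - sp.diff(P, x2)

# dw = (B_y1 - A_y2) dy1 ∧ dy2; its pullback multiplies by the Jacobian det
jac = sp.diff(phi1, x1)*sp.diff(phi2, x2) - sp.diff(phi1, x2)*sp.diff(phi2, x1)
pullback_d = (sp.diff(B, y1) - sp.diff(A, y2)).subs(subsT) * jac

assert sp.expand(d_pullback - pullback_d) == 0
```

Any other C^2 choice of phi1, phi2 and C^1 choice of A, B should pass the same check.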
We now consider the effect of two transformations in composition on differential forms. For the next few pages we shall deal with the following situation: U ⊂ R^q open, V ⊂ R^n open, S : U → R^n, T : V → R^m, S, T ∈ C^1, and S(U) ⊂ V, so that the composite T ∘ S : U → R^m makes good sense, as does the chain rule

    d(T ∘ S)(p) = dT(S(p)) ∘ dS(p).
Proposition 15.14. For all k ≥ 0, we have

    d_k(T ∘ S)(p) = d_k T(S(p)) ∘ d_k S(p)

for any point p ∈ U. (Thus d_k(T ∘ S)(p) : R^q_k → R^m_k factors through R^n_k.)

Proof. Case 1: k = 0. In this case R^q_0 = R^n_0 = R^m_0 = R^1, and d_0(T ∘ S)(p) = d_0 T(S(p)) = d_0 S(p) = identity. Hence the conclusion is immediate.
Case 2: k > 0. Using Definition 15.1 and the preceding reminder,

    d_k(T ∘ S)(p)(e_{i_1} ∧ … ∧ e_{i_k})
        = d(T ∘ S)(p)(e_{i_1}) ∧ … ∧ d(T ∘ S)(p)(e_{i_k})
        = (dT(S(p)) ∘ dS(p))(e_{i_1}) ∧ … ∧ (dT(S(p)) ∘ dS(p))(e_{i_k})
        = d_k T(S(p))([dS(p)](e_{i_1}) ∧ … ∧ [dS(p)](e_{i_k}))
        = d_k T(S(p))(d_k S(p)(e_{i_1} ∧ … ∧ e_{i_k}))
        = (d_k T(S(p)) ∘ d_k S(p))(e_{i_1} ∧ … ∧ e_{i_k}).

We have proved that the two linear transformations d_k(T ∘ S)(p) and d_k T(S(p)) ∘ d_k S(p) agree on the basis vectors e_{i_1} ∧ … ∧ e_{i_k}. Thus they agree on all k-vectors, as shown earlier. □
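For k = 1 this proposition is the classical chain rule for Jacobian matrices, and for k = 2 it reduces to the Cauchy–Binet formula for the 2×2 minors of a matrix product. Both instances can be checked symbolically; in the sketch below (sympy; the maps S and T are hypothetical illustrations) the Jacobian of T ∘ S is compared with the product of the Jacobians.

```python
import sympy as sp
from itertools import combinations

u, v, x, y, z = sp.symbols('u v x y z')
# hypothetical smooth maps: S : R^2 -> R^3, T : R^3 -> R^2
S = sp.Matrix([u*v, u + v, sp.cos(u)])
T = sp.Matrix([x*y + z, sp.exp(y)*z])

JS = S.jacobian([u, v])                       # dS(p), a 3x2 matrix
JT_at_S = T.jacobian([x, y, z]).subs({x: S[0], y: S[1], z: S[2]})

comp = T.subs({x: S[0], y: S[1], z: S[2]})    # T∘S : R^2 -> R^2
Jcomp = comp.jacobian([u, v])                 # d(T∘S)(p)

# k = 1: d(T∘S)(p) = dT(S(p)) ∘ dS(p)
assert (Jcomp - JT_at_S * JS).expand() == sp.zeros(2, 2)

# k = 2: the single 2x2 minor of d(T∘S) equals the composite of the
# induced maps on 2-vectors, i.e. the Cauchy-Binet sum over 2x2 minors
lhs2 = Jcomp.det()
rhs2 = sum(JT_at_S.extract([0, 1], [i, j]).det()
           * JS.extract([i, j], [0, 1]).det()
           for i, j in combinations(range(3), 2))
assert sp.expand(lhs2 - rhs2) == 0
```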
Proposition 15.15. Let us use the symbols F^q_k, F^n_k, F^m_k to denote the spaces of differential k-forms in q-space, n-space, m-space, respectively. Then for all k ≥ 0, we have

    (T ∘ S)*w = S*(T*w),

where w is any k-form in m-space such that T(V) ⊂ D(w), the domain of w. (Thus (T ∘ S)* : F^m_k → F^q_k factors as S* ∘ T*, with T* : F^m_k → F^n_k and S* : F^n_k → F^q_k.)

Proof. Note first that both sides stand for k-forms in q-space defined on U. Let p ∈ U and ξ ∈ R^q_k. Then, using the preceding proposition and Definition 15.3, we obtain

    ((T ∘ S)*w(p))(ξ) = (w(T ∘ S(p)))(d_k(T ∘ S)(p)(ξ))
        = (w(T(S(p))))((d_k T(S(p)) ∘ d_k S(p))(ξ))
        = (w(T(S(p))))(d_k T(S(p))(d_k S(p)(ξ)))
        = (T*w(S(p)))(d_k S(p)(ξ))
        = (S*(T*w)(p))(ξ).

Since this equation holds for all p ∈ U and ξ ∈ R^q_k, we have (T ∘ S)*w = S*(T*w). □
Lemma 15.16. Let D ⊂ R^k be open, T : D → R^n, T ∈ C^1. Then for all p ∈ D,

    J̃T(p) = d_k T(p)(e_1 ∧ … ∧ e_k).
Proof. Let T have coordinate functions y_i = φ_i(x_1, …, x_k), i = 1, …, n; then dT(p) is the Jacobian matrix of T at p. Note that

    dT(p)(e_j) = ((∂φ_1/∂x_j)(p), …, (∂φ_n/∂x_j)(p)) = (∂T/∂x_j)(p).

Hence

    J̃T(p) = (∂T/∂x_1)(p) ∧ … ∧ (∂T/∂x_k)(p)
        = dT(p)(e_1) ∧ … ∧ dT(p)(e_k)
        = d_k T(p)(e_1 ∧ … ∧ e_k). □
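For a 2-surface in R^3 this lemma says that the components of J̃T(p) = dT(p)(e_1) ∧ dT(p)(e_2) are the 2×2 minors of the Jacobian matrix, which, up to a fixed sign pattern, are the components of the cross product (∂T/∂u) × (∂T/∂v). A quick symbolic check (sympy; the surface T below is a hypothetical example):

```python
import sympy as sp
from itertools import combinations

u, v = sp.symbols('u v')
# a hypothetical smooth 2-surface T : R^2 -> R^3
T = sp.Matrix([u*v, u**2 - v, sp.sin(u) + v**2])
J = T.jacobian([u, v])          # columns are dT(p)(e1), dT(p)(e2)

# d_2 T(p)(e1 ∧ e2): components of the wedge of the two columns are the
# 2x2 minors of the Jacobian, indexed by row pairs i < j
wedge = [J.extract([i, j], [0, 1]).det() for i, j in combinations(range(3), 2)]

# in R^3 these minors match the cross product of the partials up to sign
c = J[:, 0].cross(J[:, 1])
assert sp.expand(wedge[0] - c[2]) == 0   # rows (1,2):  z-component
assert sp.expand(wedge[1] + c[1]) == 0   # rows (1,3): -y-component
assert sp.expand(wedge[2] - c[0]) == 0   # rows (2,3):  x-component
```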

We continue to use the basic hypotheses given preceding Proposition 15.14. We suppose also that S and T are smooth.
Proposition 15.17. Let w be a continuous q-form in m-space. Then T*w is a continuous q-form in n-space, and

    ∫_S T*w = ∫_{T∘S} w.

Proof. To show that T*w is continuous, we must show that its coordinate functions are continuous. Now T*w(p) = ∑ A_{i_1,…,i_q}(p)(f_{i_1} ∧ … ∧ f_{i_q}), where i_1, …, i_q runs over the increasing sequences of integers from among {1, …, n}, and f_1, …, f_n are the basis covectors in n-space. We know the procedure for obtaining the coordinate functions:

    A_{i_1,…,i_q}(p) = (T*w(p))(e_{i_1} ∧ … ∧ e_{i_q})
        = (w(T(p)))(d_q T(p)(e_{i_1} ∧ … ∧ e_{i_q}))
        = (w(T(p)))(dT(p)(e_{i_1}) ∧ … ∧ dT(p)(e_{i_q})).
We use the fact that w is continuous; this means that in the representation

    w(x) = ∑ B_{j_1,…,j_q}(x)(g_{j_1} ∧ … ∧ g_{j_q}),

where j_1, …, j_q runs over the increasing sequences of integers from among {1, …, m} and g_1, …, g_m are the basis covectors in m-space, the coefficients B_{j_1,…,j_q} are continuous. Now

    (w(T(p)))(dT(p)(e_{i_1}) ∧ … ∧ dT(p)(e_{i_q}))
        = ∑ B_{j_1,…,j_q}(T(p)) [ (g_{j_1} ∧ … ∧ g_{j_q})(dT(p)(e_{i_1}) ∧ … ∧ dT(p)(e_{i_q})) ].

The bracketed quantity is given by a certain determinant of a q × q matrix (see p. 17). Note that g_j(dT(p)(e_i)) stands for the action of a covector in m-space on a vector in m-space; its value is the j-th component of the vector dT(p)(e_i), that is to say, the partial derivative (∂φ_j/∂x_i)(p). Thus the entries of the determinant are simply the various partials of the coordinate functions of T, all of which are continuous, since T ∈ C^1. Hence the entire determinant is a continuous function of p. Also, the coefficients B_{j_1,…,j_q} ∘ T are composites of continuous functions. Thus T*w is continuous.
We may write ∫_S T*w as follows:

    ∫_S T*w = ∫_U (T*w(S(p)))(J̃S(p))
        = ∫_U (T*w(S(p)))(d_q S(p)(e_1 ∧ … ∧ e_q))
        = ∫_U (w(T(S(p))))(d_q T(S(p))(d_q S(p)(e_1 ∧ … ∧ e_q)))
        = ∫_U (w(T(S(p))))(d_q(T ∘ S)(p)(e_1 ∧ … ∧ e_q))
        = ∫_U (w(T ∘ S(p)))(J̃(T ∘ S)(p))
        = ∫_{T∘S} w. □

Corollary 15.18. Let w be a continuous 2-form in R^3. Let Φ : D → R^3 be a smooth surface, where D ⊂ R^2 is open. Then

    ∫_Φ w = ∫_D Φ*w.

Proof. Put T = Φ in the preceding proposition. Also, put S : D → D equal to the identity transformation; then the assertion of the preceding proposition says

    ∫_S Φ*w = ∫_{Φ∘S} w = ∫_Φ w.

On the other hand, by the remark after Definition 12.3, restated for 2-forms in the plane instead of 3-forms in space, we have

    ∫_S Φ*w = ∫_D Φ*w. □
16. The Gauss–Green–Stokes theorems

Let γ_1 : [a, b] → R^3, γ_2 : [b, c] → R^3 be two curves. Suppose that γ_1(b) = γ_2(b). Then, as usual, we define γ_1 + γ_2 : [a, c] → R^3 by

    (γ_1 + γ_2)(t) = γ_1(t) for a ≤ t ≤ b, and γ_2(t) for b ≤ t ≤ c.

Suppose that γ_1 and γ_2 are smooth. Then γ_1 + γ_2 will be a piecewise smooth curve, and we shall define

    ∫_{γ_1+γ_2} w = ∫_{γ_1} w + ∫_{γ_2} w

for any 1-form w which is continuous and whose domain contains trace(γ_1) ∪ trace(γ_2).
Lemma 16.1. Let w_1, w_2 : Ω → (R^3_k)* be continuous k-forms (k = 1, 2, or 3). Let λ_1, λ_2 be real numbers; let D ⊂ R^k be open, and let T : D → R^3 be a smooth k-surface with T(D) ⊂ Ω. Then

    ∫_T (λ_1 w_1 + λ_2 w_2) = λ_1 ∫_T w_1 + λ_2 ∫_T w_2.

Proof.

    ∫_T (λ_1 w_1 + λ_2 w_2) = ∫_D ((λ_1 w_1 + λ_2 w_2)(T(p)))(J̃T(p))
        = λ_1 ∫_D (w_1(T(p)))(J̃T(p)) + λ_2 ∫_D (w_2(T(p)))(J̃T(p))
        = λ_1 ∫_T w_1 + λ_2 ∫_T w_2. □
By an admissible region D in the plane we shall mean the following: D shall be a subset of a rectangle R = {(x, y) ∈ R^2 : a ≤ x ≤ b, c ≤ y ≤ d}, and if the projection of D onto the horizontal axis is the closed interval [α, β], then D is the set of all points (x, y) such that

    α ≤ x ≤ β,  f(x) ≤ y ≤ g(x),

where f and g are the smooth (e.g. C^1) functions whose graphs form the bottom and top pieces of the boundary of D. Likewise, if [γ, δ] is the projection of D upon the vertical axis, D is the set of points (x, y) such that

    γ ≤ y ≤ δ,  F(y) ≤ x ≤ G(y).
We are now ready to state the first form of Green's theorem.

Theorem 16.2 (Green). Let D be an admissible region, and let w : Ω → (R^3_1)* be a C^1 1-form, with Ω ⊃ D open. Then

    ∫_{∂D} w = ∫_D dw.

Proof. Let w(x, y) = A(x, y)f_1 + B(x, y)f_2. Let w_1(x, y) = A(x, y)f_1 and w_2(x, y) = B(x, y)f_2. Since ∫_{∂D} w = ∫_{∂D} w_1 + ∫_{∂D} w_2, we may treat each part separately. Consider ∫_{∂D} w_1. On γ_1, the lower part of ∂D, y = f(x) (or x = F(y)), α ≤ x ≤ β (or γ ≤ y ≤ δ), and w_1(x, f(x)) = A(x, f(x))f_1. Thus

    ∫_{γ_1} w_1 = ∫_α^β A(x, f(x)) dx.

On γ_2, the upper part of ∂D, y = g(x) (or x = G(y)) and x goes from β to α. Thus w_1(x, g(x)) = A(x, g(x))f_1 and

    ∫_{γ_2} w_1 = ∫_β^α A(x, g(x)) dx = -∫_α^β A(x, g(x)) dx.

On the vertical parts of ∂D, if any, the integral of w_1 vanishes (dx = 0 there). Adding, we obtain

    ∫_{∂D} w_1 = ∫_{∂D} A dx = ∫_α^β (A(x, f(x)) - A(x, g(x))) dx.

On the other hand, dw_1 = d(A(x, y)f_1) = dA(x, y) ∧ f_1 = -A_y(x, y) f_1 ∧ f_2, so that

    ∫_D dw_1 = -∫_D A_y(x, y) dxdy
        = -∫_α^β dx ∫_{f(x)}^{g(x)} A_y(x, y) dy
        = -∫_α^β (A(x, g(x)) - A(x, f(x))) dx
        = ∫_α^β (A(x, f(x)) - A(x, g(x))) dx = ∫_{∂D} w_1.

A similar computation shows that the same relation holds for w_2 = B(x, y)f_2, and adding these, we obtain the formula for a general 1-form. □
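Green's theorem can be confirmed symbolically on a concrete admissible region. The sketch below (sympy) takes the region between y = x^2 and y = x for 0 ≤ x ≤ 1, with a hypothetical C^1 1-form w = A dx + B dy, and compares the counterclockwise boundary integral with the double integral of dw.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
# a hypothetical 1-form w = A dx + B dy on the region x**2 <= y <= x
A, B = x*y, x + y**2

# double integral of dw = (B_x - A_y) dx ∧ dy over D
area_int = sp.integrate(sp.integrate(sp.diff(B, x) - sp.diff(A, y),
                                     (y, x**2, x)), (x, 0, 1))

def line_int(cx, cy, t0, t1):
    # ∫ (A dx + B dy) along the curve (cx(t), cy(t)), t from t0 to t1
    s = {x: cx, y: cy}
    return sp.integrate(A.subs(s)*sp.diff(cx, t) + B.subs(s)*sp.diff(cy, t),
                        (t, t0, t1))

# counterclockwise boundary: along y = x**2 from (0,0) to (1,1),
# then back along y = x from (1,1) to (0,0)
boundary_int = line_int(t, t**2, 0, 1) + line_int(t, t, 1, 0)

assert sp.simplify(boundary_int - area_int) == 0
```

With these particular choices both sides evaluate to 1/12; any other C^1 coefficients A, B should pass the same check.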
We wish to extend Green's theorem to a wider class of plane regions than the class of admissible regions; we can of course extend the theorem without further delay to a finite union of admissible regions D_1, …, D_n, and we get

    ∫_{D_1∪⋯∪D_n} dw = ∫_{∂D_1+⋯+∂D_n} w.

Now we would like to write ∂(D_1 ∪ ⋯ ∪ D_n) for ∂D_1 + ⋯ + ∂D_n. As we know, this equality is generally false; however, it may happen that

    ∫_{∂D_1+⋯+∂D_n} w = ∫_{∂(D_1∪⋯∪D_n)} w

for all differential forms w, as the following example shows.

Consider the crescent-shaped region D bounded by the curves y = x^2 + 1 and y = 2x^2. Let us cut D into two pieces by the y-axis: D_1 and D_2 are both admissible regions. We observe that ∂D_1 and ∂D_2 both contain the segment of the y-axis between 0 and 1, but the orientation of this segment is reversed. Accordingly, the integrals over this segment cancel, and we find that

    ∫_{∂D} w = ∫_{∂D_1+∂D_2} w = ∫_{D_1∪D_2} dw.

Now D_1 ∪ D_2 ≠ D, because D_1 and D_2 do not contain the segment on the y-axis. However, this segment is a set of zero area, so

    ∫_{D_1∪D_2} dw = ∫_D dw.

Thus the theorem holds for D.

In this manner, Green's theorem holds also for a wider class of regions.
Theorem 16.3 (Green revisited). Suppose Green's theorem holds for a domain D. Let D ⊂ U ⊂ R^2, where U is open, and suppose T : U → R^2, T ∈ C^2 on U, T is 1-1 on U, and JT(p) > 0 on U. Then Green's theorem holds for T(D).

Proof. Let I_D denote the identity transformation on D. Then we obtain

    ∫_{∂(T(D))} w = ∫_{T∘∂D} w = ∫_{∂D} T*w
        = ∫_D d(T*w)          (Thm. 16.2)
        = ∫_D T*(dw)          (Thm. 15.12)
        = ∫_{I_D} T*(dw) = ∫_{T∘I_D} dw = ∫_{T(D)} dw. □
If we now consider the region in the first and fourth quadrants bounded by the lines x = 0, x = 1, y = 1 and the curve y = x^3 sin(π/x), it can be shown that this region is the image of the unit square under a transformation T having all the properties of the preceding theorem. But clearly this region cannot be chopped up into a finite number of admissible regions; hence this theorem gives us additional power in applying Green's theorem.
Theorem 16.4 (Stokes). Let D be a region in the plane for which Green's theorem holds, and let Φ : D → R^3 be a smooth surface. We view ∂D as one or more closed curves in the plane, and we define ∂Φ = Φ ∘ ∂D. Thus ∂Φ consists of one or more closed curves in R^3. If w : Ω → (R^3_1)* is a 1-form of class C^1, with Ω ⊃ Φ(D), then

    ∫_{∂Φ} w = ∫_Φ dw.

Proof. Let I_D denote the identity transformation on D. Then we obtain

    ∫_{∂Φ} w = ∫_{Φ∘∂D} w = ∫_{∂D} Φ*w = ∫_D d(Φ*w) = ∫_D Φ*(dw)
        = ∫_{I_D} Φ*(dw) = ∫_{Φ∘I_D} dw = ∫_Φ dw. □
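Stokes' theorem can likewise be confirmed on a concrete surface. The sketch below (sympy; the surface Φ and the 1-form w are hypothetical examples) integrates w over the image of the counterclockwise boundary of the unit square and compares the result with ∫_Φ dw, computed by pulling w back to the parameter domain as in the proof.

```python
import sympy as sp

u, v, t, x, y, z = sp.symbols('u v t x y z')
# hypothetical smooth surface over the unit square and a C^1 1-form in R^3
Phi = sp.Matrix([u, v, u*v])                   # Φ : [0,1]^2 -> R^3
P, Q, R = y*z, x, x*z                          # w = P dx + Q dy + R dz

def w_on_curve(c):
    # pull w back to a curve c(t) and return the scalar integrand
    s = {x: c[0], y: c[1], z: c[2]}
    return (P.subs(s)*sp.diff(c[0], t) + Q.subs(s)*sp.diff(c[1], t)
            + R.subs(s)*sp.diff(c[2], t))

# ∂Φ: image of the counterclockwise boundary of the unit square
edges = [Phi.subs({u: t, v: 0}), Phi.subs({u: 1, v: t}),
         Phi.subs({u: 1 - t, v: 1}), Phi.subs({u: 0, v: 1 - t})]
lhs = sum(sp.integrate(w_on_curve(c), (t, 0, 1)) for c in edges)

# ∫_Φ dw: pull w back to D and apply the Green form of the integrand,
# i.e. integrate d(Φ*w) = (∂_u(dv-coeff) - ∂_v(du-coeff)) du ∧ dv
s = {x: Phi[0], y: Phi[1], z: Phi[2]}
coeff_du = (P.subs(s)*sp.diff(Phi[0], u) + Q.subs(s)*sp.diff(Phi[1], u)
            + R.subs(s)*sp.diff(Phi[2], u))
coeff_dv = (P.subs(s)*sp.diff(Phi[0], v) + Q.subs(s)*sp.diff(Phi[1], v)
            + R.subs(s)*sp.diff(Phi[2], v))
rhs = sp.integrate(sp.integrate(sp.diff(coeff_dv, u) - sp.diff(coeff_du, v),
                                (u, 0, 1)), (v, 0, 1))

assert sp.simplify(lhs - rhs) == 0
```

With these particular choices both sides evaluate to 2/3.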

Theorem 16.5 (Gauss). Let R be the unit cube in R^3, and regard ∂R as six smooth surfaces. Then if w : Ω → (R^3_2)* is a 2-form of class C^1, with Ω ⊃ R,

    ∫_{∂R} w = ∫_R dw.
Theorem 16.6 (Gauss revisited). Suppose Gauss' theorem holds for a region R. Let R ⊂ U ⊂ R^3, where U is open, and suppose T : U → R^3, T ∈ C^2 on U, T is 1-1 on U, and JT(p) > 0 on U. Then Gauss' theorem holds for T(R).

The proofs of these theorems are quite similar to those given for Green's theorem, and we shall not repeat them.
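As with Green's theorem, Gauss' theorem on the unit cube admits a direct symbolic check. The sketch below (sympy; the 2-form coefficients P, Q, R are hypothetical choices) compares the integral of dw = (P_x + Q_y + R_z) dx ∧ dy ∧ dz over the cube with the outward flux of w = P dy∧dz + Q dz∧dx + R dx∧dy through the six faces.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# a hypothetical C^1 2-form w = P dy∧dz + Q dz∧dx + R dx∧dy on the unit cube
P, Q, R = x*y, y*z**2, x + z

# ∫_R dw, with dw = (P_x + Q_y + R_z) dx ∧ dy ∧ dz
div = sp.diff(P, x) + sp.diff(Q, y) + sp.diff(R, z)
vol = sp.integrate(div, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# ∫_∂R w: outward flux through the six faces, paired opposite faces together
flux = (sp.integrate(P.subs(x, 1) - P.subs(x, 0), (y, 0, 1), (z, 0, 1))
        + sp.integrate(Q.subs(y, 1) - Q.subs(y, 0), (x, 0, 1), (z, 0, 1))
        + sp.integrate(R.subs(z, 1) - R.subs(z, 0), (x, 0, 1), (y, 0, 1)))

assert sp.simplify(vol - flux) == 0
```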

Using the material contained in these notes, the reader should be able to formulate and prove analogues of the Gauss, Green, and Stokes theorems in higher dimensions.
17. A glance at currents in R^n

To close this note, we very briefly introduce the notion of currents in R^n, as it would be a natural continuation of this treatise. The interested reader should consult, e.g., Lang [6] and the references therein.
As a historical sidenote, currents in the sense of geometric measure theory were introduced by de Rham in 1955 (for use in the theory of harmonic forms). Later, in a fundamental paper from 1960, Federer and Fleming developed the class of rectifiable currents, and thereby provided a solution to the Plateau problem for surfaces of arbitrary dimension and codimension in Euclidean spaces. Roughly speaking, Plateau's problem is as follows: given an (m - 1)-dimensional boundary, find an m-dimensional surface S spanning that boundary with minimal m-dimensional area. The theory of currents then developed into a powerful tool in the calculus of variations. Federer's monograph [4] gives a comprehensive account of the state of the subject prior to 1970. Since then, the theory has been extended in various directions and has found numerous applications in geometric analysis and Riemannian geometry. A breakthrough was achieved by Ambrosio and Kirchheim [2] in 2000: in [2] the authors extended the theory of currents to metric spaces. Their approach employs (m + 1)-tuples of real-valued Lipschitz functions in place of m-forms and provides new insight into the theory even in Euclidean spaces. For a nice exposition of the theory of currents in metric spaces, we refer to [6].
An m-dimensional current in R^n is a continuous linear mapping

    T : w ↦ T(w) ∈ R^1,

where w is an m-form. The support supp(T) of a current T is defined to be the smallest closed set C ⊂ R^n with the property that T(w) = 0 for all m-forms w with supp(w) ∩ C = ∅. The boundary of an m-current T is the (m - 1)-current ∂T defined by

    ∂T(w) := T(dw),

where w is an (m - 1)-form and dw is the exterior derivative of w (i.e. an m-form). Clearly ∂∂T = 0, since d(dw) = 0, and supp(∂T) ⊂ supp(T).

Example 17.1. (1) Measures are 0-currents, and they act on 0-forms, i.e. functions.

(2) T(w_1 dx_1 + w_2 dx_2) = ∫_0^1 ∫_0^1 w_1(x_1, x_2) dx_1 dx_2 is a 1-current in R^2. Then

    ∂T(f) = T(df) = T((∂f/∂x_1) dx_1 + (∂f/∂x_2) dx_2)
        = ∫_0^1 ∫_0^1 (∂f/∂x_1) dx_1 dx_2 = ∫_0^1 (f(1, x_2) - f(0, x_2)) dx_2.
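The computation in (2) can be checked on a concrete test function. The sketch below (sympy; the function f is a hypothetical example) compares ∂T(f) = T(df) with the claimed one-dimensional boundary integral.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# a hypothetical test 0-form (function) f on the unit square
f = sp.sin(x1)*x2 + x1**2*x2**3

# ∂T(f) = T(df): the 1-current T picks out the dx1-component of df
# and integrates it over the unit square
dT_f = sp.integrate(sp.diff(f, x1), (x1, 0, 1), (x2, 0, 1))

# the claimed closed form: ∫_0^1 (f(1, x2) - f(0, x2)) dx2
closed_form = sp.integrate(f.subs(x1, 1) - f.subs(x1, 0), (x2, 0, 1))

assert sp.simplify(dT_f - closed_form) == 0
```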

Suppose Σ is a smooth oriented m-dimensional submanifold of R^n with boundary, and Σ is a closed subset of R^n. Let the orientation of Σ be given as a continuous function τ : Σ → R^n_m such that for every x ∈ Σ, τ(x) is a simple m-vector which represents the tangent space T_xΣ, and |τ(x)| = 1. Then

    T_Σ(w) := ∫_Σ w = ∫_Σ ⟨w(x), τ(x)⟩ dH^m(x),

where H^m denotes the m-dimensional Hausdorff measure, defines an m-dimensional current attached to Σ. Moreover, suppose that ∂Σ is equipped with the induced orientation τ' : ∂Σ → R^n_{m-1}, i.e., τ = ν ∧ τ' for the exterior unit normal ν; then formally

    ∂T_Σ(w) = T_Σ(dw) = ∫_Σ ⟨dw(x), τ(x)⟩ dH^m(x)
        = ∫_{∂Σ} ⟨w(x), τ'(x)⟩ dH^{m-1}(x)
        = T_{∂Σ}(w)

for all (m - 1)-forms w, where the Stokes theorem 16.4 was used.
References
[1] (Or suchlike) R. A. Adams: Calculus: A Complete Course, Pearson Addison Wesley.
[2] L. Ambrosio and B. Kirchheim: Currents in metric spaces, Acta Math. 185 (2000), 1–80.
[3] (Or suchlike) R. Creighton Buck: Advanced Calculus, McGraw-Hill Book Company, Inc., New York–Toronto–London, 1956.
[4] H. Federer: Geometric Measure Theory, Die Grundlehren der mathematischen Wissenschaften, Band 153, Springer-Verlag, New York, 1969.
[5] S. G. Krantz and H. R. Parks: Geometric Integration Theory, Cornerstones, Birkhäuser Boston, Inc., Boston, MA, 2008.
[6] U. Lang: Local currents in metric spaces, http://www.math.ethz.ch/~lang/.
[7] H. Whitney: Geometric Integration Theory, Princeton University Press, Princeton, N.J., 1957.
(N.M.) Department of Mathematics and Statistics, P.O. Box 68, FI-00014 University of Helsinki, Finland
E-mail address: niko.marola@helsinki.fi

(W.P.Z.) Mathematics Department, Indiana University, Bloomington, Indiana 47405, USA
E-mail address: ziemer@indiana.edu