Contents

1 Preface

2 Multilinear Algebra
2.1 Vector Spaces and Bases
2.2 The Dual Space. The concept dual basis
2.3 The Kronecker tensor
2.4 Linear Transformations. Index-gymnastics
2.5 Inner product
2.6 Reciprocal basis
2.7 Special Bases and Transformation Groups
2.8 Tensors
2.8.1 General Definition
2.8.2 $\binom{0}{0}$-tensor = scalar = number
2.8.3 $\binom{1}{0}$-tensor = contravariant 1-tensor = vector
2.8.4 $\binom{0}{1}$-tensor = covariant 1-tensor = covector
2.8.5 $\binom{0}{2}$-tensor = covariant 2-tensor = linear transformation: $V \to V^*$
2.8.6 $\binom{2}{0}$-tensor = contravariant 2-tensor = linear transformation: $V^* \to V$
2.8.7 $\binom{1}{1}$-tensor = mixed 2-tensor = linear transformation: $V \to V$ and $V^* \to V^*$
2.8.8 $\binom{0}{3}$-tensor = covariant 3-tensor = linear transformation: $V \to (V \otimes V)^*$ and $(V \otimes V) \to V^*$
2.8.9 $\binom{2}{2}$-tensor = mixed 4-tensor = linear transformation: $(V \otimes V) \to (V \otimes V)$, etc.
2.8.10 Continuation of the general considerations about $\binom{r}{s}$-tensors. Contraction and $\otimes$.
2.8.11 Tensors on Vector Spaces provided with an inner product
2.9 Mathematical interpretation of the "Engineering tensor concept"
2.10 Symmetric and Antisymmetric Tensors
2.11 Vector Spaces with an oriented volume
2.12 The Hodge Transformation
2.13 Exercises
2.14 RRvH: Identification $V$ and $V^{**}$
2.15 RRvH: Summary
2.16 RRvH: The four faces of bilinear maps

3 Tensor Fields on $\mathbb{R}^n$
3.1 Curvilinear Coordinates and Tangent Spaces
3.2 Definition of Tensor Fields on $\mathbb{R}^n$
3.3 Alternative Definition
3.4 Examples of Tensor Fields
3.4.1, 3.4.2, 3.4.3; 3.5, 3.5.1, 3.5.2, 3.5.3; 3.6, 3.6.1, 3.6.2, 3.6.3, 3.6.4, 3.6.5; 3.7, 3.7.1, 3.7.2, 3.7.3; 3.8, 3.8.1, 3.8.2, 3.8.3, 3.8.4; 3.9; 3.10, 3.10.1

4 Differential Geometry
4.1 Differential geometry of curves in $\mathbb{R}^3$
4.1.1 Space curves
4.1.2 The Frenet formulas
4.2 Differential geometry of surfaces in $\mathbb{R}^3$
4.2.1 Surfaces
4.2.2 The first fundamental tensor field
4.2.3 The second fundamental tensor field
4.2.4 Curves at a surface
4.2.5 The covariant derivative at surfaces
4.3 Exercises
4.4 RRvH: Christoffel symbols?
4.4.1 Christoffel symbols

5 Manifolds
5.1 Differentiable Functions
5.2 Manifolds
5.3 Riemannian manifolds
5.4 Covariant derivatives
5.5 The curvature tensor

6 Appendices
6.1 The General Tensor Concept
6.2 The Stokes Equations in (Orthogonal) Curvilinear Coordinates
6.2.1 Introduction
6.2.2 The Stress Tensor and the Stokes equations in Cartesian Coordinates
6.2.3 The Stress Tensor and the Stokes equations in Arbitrary Coordinates
6.2.4 The Extended Divergence and Gradient in Orthogonal Curvilinear Coordinates
6.2.4.1 The Extended Gradient
6.2.4.2 The Extended Divergence
6.3 The theory of special relativity according to Einstein and Minkowski
6.4 Brief sketch of the general theory of relativity
6.5 Lattices and Reciprocal Bases. Piezoelectricity.
6.6 Some tensors out of continuum mechanics.
6.7 Thermodynamics and Differential Forms.

Index
Chapter 1 Preface

This is a translation of the lecture notes of Jan de Graaf on Tensor Calculus and Differential Geometry. The lecture notes were originally written in 1995 by the mathematics student W.A. van den Broek. Over the years several appendices have been added and all kinds of typographical corrections have been made. Chapter 1 was rewritten by Jan de Graaf.
Nobody asked me to make this translation; I hope it will be, also for me, a good lesson in Tensor Calculus and Differential Geometry. If you want to make comments, let me know; see the front page of this translation.
René van Hassel (started: April 2009)
In June 2010, the Mathematical Faculty asked me to translate the lecture notes of Jan de Graaf.
Pieces of text marked by "RRvH:" are additions made by myself.
The text is typeset in ConTeXt, a variant of TeX.
Comment(s): 2.1.1
For every basis $\{e_i\}$ of $V$ and for every vector $x \in V$ there exists a unique ordered set of real numbers $\{x^i\}$ such that $x = x^i e_i$.

Definition 2.1.1 The numbers $\{x^i\}$ are called the contravariant components of the vector $x$ with respect to the basis $\{e_i\}$.

Convention(s): 2.1.1

Notice(s): 2.1.1
$X = x^i E_i$.
To every basis $\{e_i\}$ of $V$ a bijective linear transformation $E : V \to \mathbb{R}^n$ can be attached by $Ex = X$. In particular $E e_i = E_i$ holds. A bijective linear transformation is also called an isomorphism. By choosing a basis $\{e_i\}$ of $V$ and by defining the corresponding isomorphism $E$, the Vector Space $V$ is "mapped out". With the help of $\mathbb{R}^n$, $V$ is provided with a "web of coordinates".

Comment(s): 2.1.2
For every pair of bases $\{e_i\}$ and $\{e_{i'}\}$ of $V$ there exists a unique pair of ordered sets of real numbers $A^i_{i'}$ and $A^{i'}_i$ such that $e_i = A^{i'}_i e_{i'}$ and $e_{i'} = A^i_{i'} e_i$.

Convention(s): 2.1.2

Notice(s): 2.1.2
On the one hand $e_i = A^{i'}_i e_{i'} = A^{i'}_i A^j_{i'} e_j$ holds, and on the other hand $e_i = \delta^j_i e_j$, from which it follows that $A^{i'}_i A^j_{i'} = \delta^j_i$. In the same manner one deduces $A^i_{i'} A^{j'}_i = \delta^{j'}_{i'}$. The $\delta^j_i$ and $\delta^{j'}_{i'}$ are Kronecker deltas. Construct with them the transition matrices $A_{,'} = [A^i_{i'}]$ and $A'_{,} = [A^{i'}_i]$; they are each other's inverse. Putting the expressions $x^{i'} = A^{i'}_i x^i$ and $e_{i'} = A^i_{i'} e_i$ side by side, it is seen that the coordinates $x^i$ "transform" with the inverse $A^{i'}_i$ of the change-of-coordinates matrix $A^i_{i'}$. That is the reason for the strange 19th-century term contravariant components.
Out of the relation $x^{i'} = A^{i'}_i x^i$ it follows that $\dfrac{\partial x^{i'}}{\partial x^i} = A^{i'}_i$.

Notation(s):
$$\frac{\partial f}{\partial x^{i'}} = \frac{\partial x^i}{\partial x^{i'}}\,\frac{\partial f}{\partial x^i} = A^i_{i'}\,\frac{\partial f}{\partial x^i}, \qquad \text{so} \qquad \frac{\partial x^i}{\partial x^{i'}} = A^i_{i'}.$$
Example(s): 2.1.1 Let $S = \{e_1 = (1, 0)^T, e_2 = (0, 1)^T\}$ be the standard basis of $\mathbb{R}^2$ and let $T = \{e_{1'} = \tfrac{1}{\sqrt{5}}(1, 2)^T,\ e_{2'} = \tfrac{1}{\sqrt{5}}(-2, 1)^T\}$ be an orthonormal basis of $\mathbb{R}^2$. The coordinates of $e_{1'}$ and $e_{2'}$ are given with respect to the standard basis $S$. The transition matrix is
$$A_{,'} = \begin{pmatrix} e_{1'} & e_{2'} \end{pmatrix} = \frac{1}{\sqrt{5}}\begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}.$$
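As a side note (not part of the original notes), the coordinate transformation of this example can be checked numerically. The sketch below assumes that the transition matrix holds the new basis vectors as columns, so the new contravariant components follow from $X' = (A_{,'})^{-1}X$; all names are mine.

```python
import numpy as np

# Columns of A are the new basis vectors e_1', e_2' expressed in the standard basis S.
A = (1 / np.sqrt(5)) * np.array([[1.0, -2.0],
                                 [2.0,  1.0]])

x = np.array([3.0, 1.0])        # contravariant components with respect to S
x_new = np.linalg.solve(A, x)   # components with respect to T: X' = A^{-1} X

# The vector itself is basis independent: x^i e_i = x^{i'} e_{i'}
assert np.allclose(A @ x_new, x)
print(x_new)
```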
Definition 2.2.2 The Dual Space $V^*$ belonging to the Vector Space $V$ is the set of all linear functions on $V$, equipped with the following addition and scalar multiplication. For every pair of linear functions $\hat f$ and $\hat g$ on $V$, the function $\hat f + \hat g$ is defined by $(\hat f + \hat g)(x) = \hat f(x) + \hat g(x)$. For every linear function $\hat f$ and every real number $\alpha$, the linear function $\alpha\hat f$ is defined by $(\alpha\hat f)(x) = \alpha(\hat f(x))$. It is easy to check that $V^*$ is a Vector Space. The linear functions $\hat f \in V^*$ are called covectors or covariant 1-tensors.

Convention(s): 2.2.1

Definition 2.2.4 The set of $n$ row vectors $\hat E^i$, defined by
$$\hat E^i = (0, \ldots, 0, 1, 0, \ldots, 0),$$
where the number 1 is at the $i$-th position, is a basis of $(\mathbb{R}^n)^*$. This basis is called the standard basis of $(\mathbb{R}^n)^*$.

Notice(s): 2.2.1
$\hat F = f_i\,\hat E^i$.
$\hat f(x) = \hat F X$, where $\hat F$ represents the row vector of the covariant components of the covector $\hat f$.
To every basis $\{e_i\}$ of $V$ belong $n$ linear functions $\hat e^k$, defined by $\hat e^k(x) = x^k = (E_k)^T E x$. Keep in mind that every $\hat e^k$ is usually determined by the entire basis $\{e_i\}$.

Proof Let $\hat g \in V^*$. For every $x \in V$ it holds that
$$\hat g(x) = \hat g(x^i e_i) = x^i\,\hat g(e_i) = g_i x^i = g_i\,(\hat e^i(x)) = (g_i\hat e^i)(x),$$
so $\hat g = g_i\hat e^i$. The Dual Space $V^*$ is thus spanned by the collection $\{\hat e^i\}$. The only thing left to prove is that the $\{\hat e^i\}$ are linearly independent. Assume that $\{\alpha_i\}$ is a collection of numbers such that $\alpha_i\hat e^i = 0$. For every $j$ it holds that $\alpha_i\hat e^i(e_j) = \alpha_i\delta^i_j = \alpha_j = 0$. Hereby it is proved that $\{\hat e^i\}$ is a basis of $V^*$.

Consequence(s):

Notice(s): 2.2.2
Lemma 2.2.2 Let $\{e_i\}$ and $\{e_{i'}\}$ be bases of $V$ and consider their corresponding dual bases $\{\hat e^i\}$ and $\{\hat e^{i'}\}$. Then
$$\hat e^{i'} = A^{i'}_i\,\hat e^i \qquad \text{and} \qquad \hat e^i = A^i_{i'}\,\hat e^{i'}.$$
Proof Write $\hat e^i = B^i_{i'}\hat e^{i'}$; then
$$\delta^i_j = \hat e^i(e_j) = (B^i_{i'}\hat e^{i'})(A^{j'}_j e_{j'}) = B^i_{i'}A^{j'}_j\,\delta^{i'}_{j'} = B^i_{i'}A^{i'}_j,$$
so the matrix $[B^i_{i'}]$ is the inverse of $[A^{i'}_j]$, i.e. $B^i_{i'} = A^i_{i'}$.

Notice(s): 2.2.3
$\hat y = y_{i'}\hat e^{i'} = y_{i'}A^{i'}_i\hat e^i = (A^{i'}_i y_{i'})\,\hat e^i = y_i\hat e^i$, so $y_i = A^{i'}_i y_{i'}$, and likewise $y_{i'} = A^i_{i'}y_i$.
In matrix notation
$$\hat Y = \hat Y_{,'}\,A'_{,}, \qquad \hat Y_{,'} = \hat Y\,A_{,'} \;\bigl(= \hat Y_{,'}\,(A'_{,})^{-1}\bigr).$$
Putting the expressions $y_{i'} = A^i_{i'}y_i$ and $e_{i'} = A^i_{i'}e_i$ side by side, it is seen that the coordinates $y_i$ "transform" just as the basis vectors. That is the reason for the strange 19th-century term covariant components.

Notation(s):
The dual basis vector $\hat e^i$ is also written as $dx^i$ and the dual basis vector $\hat e^{i'}$ as $dx^{i'}$. At this stage this is purely formal and no special meaning is attached to it. Sometimes one speaks of "infinitesimal growth"; look at the formal similarity
$$dx^{i'} = \frac{\partial x^{i'}}{\partial x^i}\,dx^i = A^{i'}_i\,dx^i.$$
Notation(s):
The covector $\hat f : x \mapsto \hat f(x)$ will henceforth be written as $\hat f : x \mapsto \;<\hat f, x>$. Sometimes one writes $\hat f = \;<\hat f, \cdot>$; the "argument" $x$ is left "blank".

Notice(s): 2.3.1
In the first entry of $<\cdot, \cdot>$ only covectors can be filled in, so elements of $V^*$. In the second entry only vectors can be filled in, so elements of $V$. The Kronecker tensor is not an "inner product", because every entry can receive only its own type of vector.
The Kronecker tensor is a linear function in every separate variable. That means that
$$\forall\,\hat u, \hat v \in V^*\ \forall\,z \in V\ \forall\,\alpha, \beta \in \mathbb{R}: \quad <\alpha\hat u + \beta\hat v, z> \;=\; \alpha<\hat u, z> + \beta<\hat v, z>,$$
and
$$\forall\,\hat u \in V^*\ \forall\,x, y \in V\ \forall\,\alpha, \beta \in \mathbb{R}: \quad <\hat u, \alpha x + \beta y> \;=\; \alpha<\hat u, x> + \beta<\hat u, y>.$$
The pairing between the basis vectors and the dual basis vectors provides
$$<\hat e^i, e_j> = \delta^i_j = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases}$$
the famous "Kronecker delta".
To every fixed chosen $a \in V$ one can associate the linear function $\hat u \mapsto \;<\hat u, a>$ on $V^*$. This linear function belongs to the dual of the Dual Space of $V$, notation: $(V^*)^* = V^{**}$. A co-covector, so to speak. The following lemma shows that in the finite-dimensional case $V^{**}$ can be identified with $V$ without "introducing extra structures". The proof needs some skill with abstract linear algebra.

Proof Choose a vector $a \in V$. Define the "evaluation function" $\hat{\hat a} : V^* \to \mathbb{R}$ by
$$\hat{\hat a}(\hat u) = \;<\hat u, a>.$$
Look at the linear transformation $J : V \to V^{**}$, defined by $Jx = \hat{\hat x}$. The linear transformation $J$ is injective: if $(Jx)(\hat u) = \;<\hat u, x> \;= 0$ for all $\hat u \in V^*$, then $x$ has to be $0$; take for $\hat u$ successively the elements of a dual basis. Because furthermore $\dim V = \dim V^* = \dim V^{**} = n < \infty$, $J$ also has to be surjective. This last step is justified by the dimension theorem. The bijection $J : V \to V^{**}$ "identifies" $V$ and $V^{**}$ without making extra assumptions.

Comment(s): 2.3.1
Notation(s):
$$Re_i = R^j_i\,e_j, \qquad Re_{i'} = R^{j'}_{i'}\,e_{j'}. \tag{2.1}$$
This means that the contravariant components of the vector $Re_i$ with respect to the basis $\{e_j\}$ are denoted by $R^j_i$ and the contravariant components of the vector $Re_{i'}$ with respect to the basis $\{e_{j'}\}$ by $R^{j'}_{i'}$.

Notice(s): 2.4.1
With the help of the transition matrices $A_{,'}$ and $A'_{,}$ a link can be made between the matrices $R$ and $R'_{,'}$. Namely there holds
$$Re_{i'} = R(A^i_{i'}e_i) = A^i_{i'}R^j_i\,e_j = A^i_{i'}R^j_iA^{j'}_j\,e_{j'}.$$
Compare this relation with (2.1) and it is easily seen that $R^{j'}_{i'} = A^{j'}_j R^j_i A^i_{i'}$. The relations between the matrices which represent the linear transformation $R$ with respect to the bases $\{e_i\}$ and $\{e_{i'}\}$ are now easily deduced. There holds namely
$$R'_{,'} = (A_{,'})^{-1}\,R\,A_{,'} \qquad \text{and} \qquad R = (A'_{,})^{-1}\,R'_{,'}\,A'_{,}.$$
The other types of linear transformations can be treated in almost the same way. The results are collected in Figure 2.1.

$\forall x \in V\ \forall\hat y \in V^*:\ <\hat y, Rx> = \;<P\hat y, x>$ holds exactly if $P = R$, so if $P^i_j = R^i_j$ holds exactly.
$\forall x \in V\ \forall z \in V:\ <Gx, z> = \;<Gz, x>$ holds exactly if $G^T = G$, so if $g_{ji} = g_{ij}$ holds exactly. In such a case the linear transformation $G : V \to V^*$ is called symmetric.
Some of the linear transformations in the table can be composed from other linear transformations in the table. If $R = H \circ G$ then obviously $R^k_j x^j e_k = Rx = (H \circ G)x = h^{kl}g_{jl}x^je_k$. In matrix notation: $RX = H(X^TG)^T = HG^TX$.
If $P = G \circ H$ then obviously $P^j_k y_j\hat e^k = P\hat y = G \circ H\hat y = h^{kl}g_{lj}\ldots$, hence in matrix notation: $\hat YP = (H\hat Y^T)^TG = \hat YH^TG$.
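A minimal numerical sketch (my addition, with arbitrarily chosen holors) of the composition rule just stated: if $R = H \circ G$ with holors $h^{kl}$ and $g_{jl}$, then the index expression $R^k_jx^j = h^{kl}g_{jl}x^j$ agrees with the matrix expression $RX = HG^TX$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
G = rng.standard_normal((n, n))   # holor [g_{jl}] of G : V -> V*
H = rng.standard_normal((n, n))   # holor [h^{kl}] of H : V* -> V
X = rng.standard_normal(n)        # contravariant components of x

# Index picture: R^k_j x^j = h^{kl} g_{jl} x^j
R_index = np.einsum('kl,jl,j->k', H, G, X)

# Matrix picture: R X = H G^T X
R_matrix = H @ G.T @ X

assert np.allclose(R_index, R_matrix)
```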
Figure 2.1 Overview. Three notations are compared: the basis-free notation (no basis used, order of factors fixed), the index or component notation (basis dependent, order of factors of no importance) and the matrix notation (basis dependent, order fixed).

Vector $x \in V$:
components $x^i = \;<\hat e^i, x>$, $x^{i'} = \;<\hat e^{i'}, x>$; transformation $x^{i'} = A^{i'}_ix^i$, $x^i = A^i_{i'}x^{i'}$; in matrix notation $X' = A'_{,}X$, etc.

Covector $\hat y \in V^*$:
components $y_i = \;<\hat y, e_i>$, $y_{i'} = \;<\hat y, e_{i'}>$, with $\hat y = y_i\hat e^i = y_{i'}\hat e^{i'}$; transformation $y_{i'} = A^i_{i'}y_i$, $y_i = A^{i'}_iy_{i'}$; in matrix notation $\hat Y_{,'} = \hat YA_{,'}$, etc. The pairing is a scalar: $<\hat y, x> = y_ix^i = y_{i'}x^{i'} \in \mathbb{R}$, in matrix notation $\hat YX = \hat Y_{,'}X'$.

Linear transformation $R : V \to V$, $Rx \in V$:
$Re_j = R^i_je_i$, $Re_{j'} = R^{i'}_{j'}e_{i'}$, $R^i_j = \;<\hat e^i, Re_j>$; $R^i_jx^j = \;<\hat e^i, Rx>$ and $R^{i'}_{j'}x^{j'} = \;<\hat e^{i'}, Rx> = A^{i'}_iA^j_{j'}R^i_jx^{j}$; in matrix notation $R = A_{,'}R'_{,'}A'_{,}$, $\mathrm{column}(R^i_jx^j) = RX$, $\mathrm{column}(R^{i'}_{j'}x^{j'}) = R'_{,'}X'$; invariant: $<\hat y, Rx> = \hat YRX = \hat Y_{,'}R'_{,'}X' \in \mathbb{R}$.

Linear transformation $P : V^* \to V^*$, $P\hat y \in V^*$:
$P\hat e^i = P^i_j\hat e^j$, $P^i_j = \;<P\hat e^i, e_j>$; $P^{i'}_{j'} = \;<P\hat e^{i'}, e_{j'}> = A^{i'}_iA^j_{j'}P^i_j$; in matrix notation $P'_{,'} = A'_{,}PA_{,'}$, $\mathrm{row}(P^j_iy_j) = \hat YP$; invariant: $<P\hat y, x> = \hat YPX = \hat Y_{,'}P'_{,'}X' \in \mathbb{R}$.

Linear transformation $G : V \to V^*$:
$Ge_i = g_{ij}\hat e^j$, $Ge_{i'} = g_{i'j'}\hat e^{j'}$, $g_{ij} = \;<Ge_i, e_j>$; $g_{i'j'} = A^i_{i'}A^j_{j'}g_{ij}$; in matrix notation $G_{,','} = (A_{,'})^TGA_{,'}$, $\mathrm{row}(g_{ij}x^i) = X^TG$, $\mathrm{row}(g_{i'j'}x^{i'}) = (X')^TG_{,','}$; invariant: $g_{ij}x^iz^j = g_{i'j'}x^{i'}z^{j'}$, $X^TGZ = (X')^TG_{,','}Z' \in \mathbb{R}$.

Linear transformation $H : V^* \to V$, $H\hat y \in V$:
$H\hat e^k = h^{kl}e_l$, $H\hat e^{k'} = h^{k'l'}e_{l'}$, $h^{kl} = \;<\hat e^k, H\hat e^l>$; $h^{k'l'} = \;<\hat e^{k'}, H\hat e^{l'}> = A^{k'}_kA^{l'}_lh^{kl}$; in matrix notation $H^{','} = [h^{k'l'}] = A'_{,}H(A'_{,})^T$, $\mathrm{column}(h^{kl}y_l) = H\hat Y^T$, $\mathrm{column}(h^{k'l'}y_{l'}) = H^{','}(\hat Y_{,'})^T = A'_{,}H(\hat Y_{,'}A'_{,})^T$; invariant: $<\hat u, H\hat y> = \hat UH\hat Y^T = \hat U_{,'}H^{','}(\hat Y_{,'})^T \in \mathbb{R}$.
$$x \neq 0 \;\Rightarrow\; (x, x) > 0.$$

Notice(s): 2.5.1 In mathematics and physics other inner products are also often used. They differ from the Euclidean inner product and are variations on the conditions of Definition 2.5.1 i and Definition 2.5.1 iii. Other possibilities are
a. $\forall\,x, y \in V:\ (x, y) = -(y, x)$ (antisymmetry),
b. if $(x, y) = 0$ for all $y \in V$, then $x = 0$ (non-degeneracy).

Clarification(s): 2.5.1
Condition Def. 2.5.1 iii implies condition Ntc. 2.5.1 b. Condition Ntc. 2.5.1 b is weaker than condition Def. 2.5.1 iii.
In the theory of relativity the Lorentz inner product plays a role; it satisfies the conditions Def. 2.5.1 i, Def. 2.5.1 ii and Ntc. 2.5.1 b.
In Hamiltonian mechanics an inner product is defined by the combination of the conditions Ntc. 2.5.1 a, Def. 2.5.1 ii and Ntc. 2.5.1 b. The Vector Space $V$ is then called a symplectic Vector Space. There holds that $\dim V$ is even. (Phase space.)
If the inner product satisfies condition Def. 2.5.1 i, the inner product is called symmetric. If the inner product satisfies condition Ntc. 2.5.1 a, the inner product is called antisymmetric.

Definition 2.5.2 Let $\{e_i\}$ be a basis of $V$ and define the numbers $g_{ij} = (e_i, e_j)$. The matrix $G = [g_{ij}]$, where $i$ is the row index and $j$ the column index, is called the Gram matrix.

Notation(s):
If the inverse $G^{-1}$ of the Gram matrix exists, it is denoted by $G^{-1} = [g^{kl}]$, with $k$ the row index and $l$ the column index. Note that $g^{ik}g_{kj} = \delta^i_j$ and $g_{li}g^{ik} = \delta^k_l$.

Theorem 2.5.1 Consider an inner product $(\cdot, \cdot)$ on $V$ which satisfies the conditions Def. 2.5.1 i or Ntc. 2.5.1 a, Def. 2.5.1 ii and Ntc. 2.5.1 b. There exists a bijective linear transformation $\mathcal{G} : V \to V^*$ such that
a. $\forall u \in V\ \forall x \in V:\ <\mathcal{G}u, x> = (u, x)$.

Proof
a. Take a fixed $u \in V$ and define the linear function $x \mapsto (u, x)$. Then there exists a $\hat u \in V^*$ such that for all $x \in V$ holds: $<\hat u, x> = (u, x)$. The assignment $u \mapsto \hat u$ turns out to be a linear transformation. This linear transformation is called $\mathcal{G} : V \to V^*$. So $\hat u = \mathcal{G}u$.
b. Because $\dim(V) = \dim(V^*) < \infty$ the bijectivity of $\mathcal{G}$ is proved by proving that $\mathcal{G}$ is injective. Assume that there exists some $v \in V$, $v \neq 0$, such that $\mathcal{G}v = 0$. Then for all $x \in V$ holds $0 = \;<\mathcal{G}v, x> = (v, x)$, and this is in contradiction with Ntc. 2.5.1 b.
c. $G$ is invertible if and only if $G^T$ is invertible. Assume that there is a column vector $X \in \mathbb{R}^n$, $X \neq O$, such that $G^TX = O$. Then the row vector $X^TG = O$. With $x = E^{-1}X \neq 0$ it follows that the covector with row representation $X^TG$, which is $\mathcal{G}x$, equals $0$. This is in contradiction with the bijectivity of $\mathcal{G}$.
d. The components of $\mathcal{G}e_i$ are calculated by $<\mathcal{G}e_i, e_k> = (e_i, e_k) = g_{ik}$. It follows that $\mathcal{G}e_i = g_{ik}\hat e^k$ and also that $\mathcal{G}x = \mathcal{G}(x^ie_i) = x^ig_{ik}\hat e^k$. Out of $\mathcal{G}(g^{li}e_i) = g^{li}\mathcal{G}e_i = g^{li}g_{ik}\hat e^k = \delta^l_k\hat e^k = \hat e^l$ it follows that $\mathcal{G}^{-1}\hat e^l = g^{li}e_i$. At last $\mathcal{G}^{-1}\hat y = \mathcal{G}^{-1}(y_l\hat e^l) = y_lg^{li}e_i$.

Comment(s): 2.5.1
In what follows, so also in the next paragraphs, the inner product is assumed to satisfy the conditions Def. 2.5.1 i, Def. 2.5.1 ii and Ntc. 2.5.1 b, unless otherwise specified. So the Gram matrix will always be symmetric.

Definition 2.5.3 In the case of a Euclidean inner product the length of a vector $x$ is denoted by $|x|$ and is defined by
$$|x| = \sqrt{(x, x)}.$$

Lemma 2.5.1 In the case of a Euclidean inner product, for every pair of vectors $x$ and $y$,
$$|(x, y)| \leq |x|\,|y|.$$
This inequality is called the Cauchy-Schwarz inequality.

Definition 2.5.4 In the case of a Euclidean inner product the angle $\varphi$ between the vectors $x \neq 0$ and $y \neq 0$ is defined by
$$\varphi = \arccos\frac{(x, y)}{|x|\,|y|}.$$

Starting Point(s):

Definition 2.6.1 To the basis $\{e_i\}$ in $V$ a second basis $\{e^i\}$ in $V$ is associated, defined by $e^i = \mathcal{G}^{-1}\hat e^i = g^{ij}e_j$. This second basis is called the reciprocal basis belonging to the first basis.
Comment(s): 2.6.1
There holds that $(e^i, e_j) = g^{il}(e_l, e_j) = g^{il}g_{lj} = \delta^i_j$. In such a case one says that the vectors $e^i$ and $e_j$ for every $i \neq j$ are perpendicular to each other.

Lemma 2.6.1 Let $\{e_i\}$ and $\{e_{i'}\}$ be bases of $V$ and consider their corresponding reciprocal bases $\{e^i\}$ and $\{e^{i'}\}$. The transition matrix from the basis $\{e^i\}$ to the basis $\{e^{i'}\}$ is given by $A'_{,}$ and the transition matrix the other way around is given by $A_{,'}$. So
$$e^i = A^i_{i'}\,e^{i'} \qquad \text{and} \qquad e^{i'} = A^{i'}_i\,e^i.$$
Proof It follows directly from the transitions between dual bases, see Lemma 2.2.2. The proof can be repeated, but now without "dual activity". Denote the transition matrices between the bases $\{e^i\}$ and $\{e^{i'}\}$ by $B'_{,} = [B^{i'}_i]$ and $B_{,'} = [B^i_{i'}]$, so $e^i = B^i_{i'}e^{i'}$ and $e^{i'} = B^{i'}_ie^i$. On one hand $(e^i, e_j) = \delta^i_j$ and on the other hand $(e^i, e_j) = (B^i_{i'}e^{i'}, A^{j'}_je_{j'}) = B^i_{i'}A^{j'}_j\delta^{i'}_{j'} = B^i_{i'}A^{i'}_j$, so $B^i_{i'}A^{i'}_j = \delta^i_j$. Obviously $B_{,'}$ and $A'_{,}$ are each other's inverse, so $B_{,'} = A_{,'}$.

Comment(s): 2.6.2
The mutual correspondence between the covariant and the contravariant components is described with the help of the Gram matrix and its inverse. There holds $x_i = (x, e_i) = x^j(e_j, e_i) = g_{ji}x^j$ and, for the opposite direction, $x^i = (x, e^i) = x_j(e^j, e^i) = g^{ji}x_j$. With the help of the Gram matrix and its inverse the indices can be shifted "up" and "down".
The inner product between two vectors $x$ and $y$ can be written in several manners:
$$(x, y) = x^iy^jg_{ij} = x^iy_i = x_iy^i = g^{ij}x_iy_j.$$

Summarized:
$$x_{i'} = x_iA^i_{i'} \qquad\qquad \hat X_{,'} = \hat XA_{,'}$$
$$x_i = g_{ij}x^j \qquad\qquad \hat X = (GX)^T$$
$$(x, y) = x_iy^i \qquad\qquad (x, y) = \hat XY$$
$$(x, y) = x_{i'}y^{i'} \qquad\qquad (x, y) = \hat X_{,'}Y'$$
$$(x, y) = g_{ij}x^iy^j \qquad\qquad (x, y) = X^TGY$$
$$(x, y) = g_{i'j'}x^{i'}y^{j'} \qquad\qquad (x, y) = (X')^TG_{,','}Y'$$

Conclusion(s): IMPORTANT:
For a FIXED CHOSEN inner product $(\cdot, \cdot)$ the concept of "dual space" can be ignored without any problems. EVERY previous formula with brackets $<\cdot, \cdot>$ in it gives a correct expression if the brackets $<\cdot, \cdot>$ are replaced by $(\cdot, \cdot)$ and if the hats $\hat{\ }$ are left out. One can then calculate in the customary way, as is done with inner products.
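The index shifting and the different ways of writing the inner product summarized above can be illustrated numerically. This is my own sketch, not part of the notes; the Gram matrix below is an arbitrary symmetric positive definite choice.

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # Gram matrix g_ij = (e_i, e_j)
G_inv = np.linalg.inv(G)          # [g^{kl}]

x_up = np.array([1.0, 2.0])       # contravariant components x^i
y_up = np.array([3.0, -1.0])

x_down = G @ x_up                 # x_i = g_{ij} x^j  (lowering an index)
y_down = G @ y_up

# All four expressions for the inner product coincide:
vals = [x_up @ G @ y_up,          # g_ij x^i y^j
        x_down @ y_up,            # x_i y^i
        x_up @ y_down,            # x^i y_i
        x_down @ G_inv @ y_down]  # g^{ij} x_i y_j
assert np.allclose(vals, vals[0])
```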
Lemma 2.7.1 For every invertible symmetric $n \times n$-matrix $Q$ there exists a whole number $p \in \{0, \ldots, n\}$ and an invertible matrix $A$ such that $A^TQA = \Delta$, with $\Delta = \mathrm{diag}(1, \ldots, 1, -1, \ldots, -1)$. The matrix $\Delta$ contains $p$ times the number $1$ and $(n - p)$ times the number $-1$. With $A = [A^j_i]$, $Q = [Q_{ij}]$ and $\Delta = [\Delta_{ij}]$, in index notation this reads $A^k_iQ_{kl}A^l_j = \Delta_{ij}$.

Proof Because $Q$ is symmetric, there exists an orthogonal matrix $F$ such that
$$F^TQF = \Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n).$$
The eigenvalues of $Q$ are ordered such that $\lambda_1 \geq \cdots \geq \lambda_n$. The eigenvalues satisfy $\lambda_i \neq 0$, because the matrix $Q$ is invertible. Define the matrix
$$|\Lambda|^{\frac12} = \mathrm{diag}\bigl(|\lambda_1|^{\frac12}, \ldots, |\lambda_n|^{\frac12}\bigr)$$
and take $A = F\,(|\Lambda|^{\frac12})^{-1}$; then $A^TQA = (|\Lambda|^{\frac12})^{-1}\Lambda(|\Lambda|^{\frac12})^{-1} = \Delta$, with $p$ the number of positive eigenvalues.

Comment(s): 2.7.1

Definition 2.7.1 The basis $\{e_i\}$ belonging to Theorem 2.7.1 is called an orthonormal basis of the Vector Space $V$.

Notice(s): 2.7.1
(2.2)
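A numerical sketch (my addition, with an arbitrarily chosen matrix) of Lemma 2.7.1: from the orthogonal eigendecomposition $F^TQF = \Lambda$ the matrix $A = F\,(|\Lambda|^{1/2})^{-1}$ indeed gives $A^TQA = \mathrm{diag}(\pm1)$.

```python
import numpy as np

Q = np.array([[2.0, 1.0, 0.0],
              [1.0, -1.0, 0.5],
              [0.0, 0.5, -3.0]])      # invertible symmetric matrix

lam, F = np.linalg.eigh(Q)            # F^T Q F = diag(lam), F orthogonal
order = np.argsort(lam)[::-1]         # order eigenvalues from largest to smallest
lam, F = lam[order], F[:, order]

A = F @ np.diag(1.0 / np.sqrt(np.abs(lam)))
Delta = A.T @ Q @ A                   # should be diag(1, ..., 1, -1, ..., -1)

assert np.allclose(Delta, np.diag(np.sign(lam)), atol=1e-12)
print(np.diag(Delta))                 # exposes the signature of Q
```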
Example(s): 2.7.1
If the inner product on $V$ has signature $p = n$ then the group $O(p, q)$ is exactly equal to the set of orthogonal matrices. This group is called the orthogonal group and is denoted by $O(n)$. An element of the subgroup $SO(n)$ transforms an orthonormal basis to an orthonormal basis with the same "orientation". Remember that the orthogonal matrices with determinant equal to $1$ describe rotations around the origin.
Let the dimension of $V$ be equal to $4$ and let the inner product on $V$ have signature $p = 1$. Such an inner product space is called Minkowski Space. The corresponding group $O(1, 3)$ is called the Lorentz group and elements of this group are called Lorentz transformations. Examples of Lorentz transformations are
$$A_1 = \begin{pmatrix} \cosh\alpha & \sinh\alpha & 0 & 0 \\ \sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$
with $\alpha$ an arbitrary real number, and
$$A_2 = \begin{pmatrix} 1 & 0 \\ 0 & T \end{pmatrix}, \qquad \text{with } T \text{ an orthogonal } 3 \times 3 \text{-matrix}.$$
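A quick check, added by me and not part of the original notes, that the boost $A_1$ is indeed a Lorentz transformation: it leaves the matrix $\Delta = \mathrm{diag}(1, -1, -1, -1)$ invariant in the sense $A_1^T\Delta A_1 = \Delta$.

```python
import numpy as np

alpha = 0.7
c, s = np.cosh(alpha), np.sinh(alpha)
A1 = np.array([[c, s, 0, 0],
               [s, c, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])

Delta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski form, signature p = 1

assert np.allclose(A1.T @ Delta @ A1, Delta)
```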
Starting Point(s):

Definition 2.8.1 The $\binom{r}{s}$-tensor on $V$, with $r = 0, 1, 2, \ldots$ and $s = 0, 1, 2, \ldots$, is a function
$$T : \underbrace{V^* \times \cdots \times V^*}_{r \text{ times}} \times \underbrace{V \times \cdots \times V}_{s \text{ times}} \to \mathbb{R}, \qquad (\underbrace{\hat u, \hat v, \ldots, \hat z}_{r \text{ covectors} \in V^*}\,;\, \underbrace{v, w, \ldots, y}_{s \text{ vectors} \in V}) \mapsto T(\hat u, \hat v, \ldots, \hat z, v, w, \ldots, y),$$
which is linear in each argument. This means that for every $\alpha, \beta \in \mathbb{R}$ and each "slot" it holds, by way of example, that
$$T(\hat u, \hat v, \ldots, \alpha\hat z_1 + \beta\hat z_2, v, w, \ldots, y) = \alpha\,T(\hat u, \hat v, \ldots, \hat z_1, v, w, \ldots, y) + \beta\,T(\hat u, \hat v, \ldots, \hat z_2, v, w, \ldots, y).$$
For further specification one says that $T$ is contravariant of order $r$ and covariant of order $s$. If $p = r + s$ then one sometimes speaks of a $p$-tensor.

Comment(s): 2.8.1

2.8.2 $\binom{0}{0}$-tensor = scalar = number

If $p = 0$ there are no vectors or covectors to fill in. The following definition is just a convention.

Definition 2.8.2 A $\binom{0}{0}$-tensor is an alternative name for a real number. A $\binom{0}{0}$-tensor is also called a scalar.

2.8.3 $\binom{1}{0}$-tensor = contravariant 1-tensor = vector

Definition 2.8.3 A $\binom{1}{0}$-tensor is a linear transformation from $V^*$ to $\mathbb{R}$.

Notice(s): 2.8.1

2.8.4 $\binom{0}{1}$-tensor = covariant 1-tensor = covector

Definition 2.8.5 A $\binom{0}{1}$-tensor is a linear transformation from $V$ to $\mathbb{R}$.
Notice(s): 2.8.2
Write the tensor as $x \mapsto F(x)$. In accordance with Definition 2.2.1 the function $F$ is a linear function on $V$ and can be written as $x \mapsto F(x) = \;<\hat f, x>$, for a certain $\hat f \in V^*$. The set of $\binom{0}{1}$-tensors is exactly equal to the Dual Space $V^*$ of the Vector Space $V$, the starting point.
For every basis $\{e_i\}$ of $V$ a $\binom{0}{1}$-tensor $F$ can be written as $F = F(e_i)\,\hat e^i = F_i\hat e^i$, with $F_i\hat e^i = f_i\hat e^i = \hat f$. For every $x \in V$ it holds, as known, that
$$F(x) = F(x^ie_i) = x^iF(e_i) = F(e_i)<\hat e^i, x> = \;<f_i\hat e^i, x> = \;<\hat f, x>.$$

Definition 2.8.6 The numbers $F_i = F(e_i)$ are called the covariant components of the tensor $F$ with respect to the basis $\{e_i\}$. This also explains the name "covariant 1-tensor".

2.8.5 $\binom{0}{2}$-tensor = covariant 2-tensor = linear transformation: $V \to V^*$

Definition 2.8.7 A $\binom{0}{2}$-tensor is a function from $V \times V$ to $\mathbb{R}$ which is linear in both arguments. A $\binom{0}{2}$-tensor is also called a bilinear function on $V \times V$.

Clarification(s): 2.8.1
For a $\binom{0}{2}$-tensor $\Phi$ holds:
$$\Phi(\alpha x + \beta y, z) = \alpha\Phi(x, z) + \beta\Phi(y, z), \qquad \Phi(x, \alpha y + \beta z) = \alpha\Phi(x, y) + \beta\Phi(x, z),$$
for all $x, y, z \in V$ and every $\alpha, \beta \in \mathbb{R}$.

Definition 2.8.8 For every pair of $\binom{0}{2}$-tensors $\Phi$ and $\Psi$, the $\binom{0}{2}$-tensor $\Phi + \Psi$ is defined by $(\Phi + \Psi)(x, y) = \Phi(x, y) + \Psi(x, y)$ and for every $\alpha \in \mathbb{R}$ the $\binom{0}{2}$-tensor $\alpha\Phi$ is defined by $(\alpha\Phi)(x, y) = \alpha\Phi(x, y)$.

Comment(s): 2.8.2
The set of $\binom{0}{2}$-tensors is a Vector Space over $\mathbb{R}$, which is denoted by $V^* \otimes V^*$ and also by $T^0_2(V)$.

Example(s): 2.8.1
Every type of inner product on $V$, such as considered in Section 2.5, is a $\binom{0}{2}$-tensor. If a fixed choice is made for an inner product on some Vector Space $V$, then this inner product is called a fundamental tensor.
For every pair $\hat p, \hat q \in V^*$ the $\binom{0}{2}$-tensor $\hat p \otimes \hat q$ on $V$ is defined by
$$(\hat p \otimes \hat q)(x, y) = \;<\hat p, x><\hat q, y>.$$

Notice(s): 2.8.3
If the system $\{\hat p, \hat q\}$ is linearly independent then $\hat p \otimes \hat q \neq \hat q \otimes \hat p$.
With every $\binom{0}{2}$-tensor $K$ correspond linear transformations $\mathcal{K} : V \to V^*$ and $\mathcal{K}^* : V \to V^*$ such that $K(u, v) = \;<\mathcal{K}u, v> = \;<\mathcal{K}^*v, u>$.

Proof
Choose a fixed $a \in V$ and look at the $\binom{0}{1}$-tensor $x \mapsto K(a, x)$. Now regard $a$ as variable, so that a linear transformation $\mathcal{K} : V \to V^*$ is defined by $a \mapsto K(a, \cdot) = \;<\mathcal{K}a, \cdot>$. Then $K(u, v) = \;<\mathcal{K}u, v>$.
The $\binom{0}{1}$-tensor $K(u, \cdot)$ can be written as $K(u, \cdot) = K(u, e_i)\,\hat e^i$, see Notice 2.8.2, so $\mathcal{K}u = K(u, e_i)\,\hat e^i$. After a basis transition holds $\mathcal{K}u = K(u, e_{i'})\,\hat e^{i'}$.
Choose a fixed $a \in V$ and look at the $\binom{0}{1}$-tensor $x \mapsto K(x, a)$. Define the linear transformation $\mathcal{K}^* : V \to V^*$ by $a \mapsto K(\cdot, a) = \;<\mathcal{K}^*a, \cdot>$. Then $K(u, v) = \;<\mathcal{K}^*v, u>$.
The $\binom{0}{1}$-tensor $K(\cdot, w)$ can be written as $K(\cdot, w) = K(e_i, w)\,\hat e^i$, see Notice 2.8.2, so $\mathcal{K}^*w = K(e_i, w)\,\hat e^i$. The explicit representation after a basis transition is $\mathcal{K}^*u = K(e_{i'}, u)\,\hat e^{i'}$.

Notice(s): 2.8.4

Definition 2.8.11 Let $\{e_i\}$ be a basis of $V$ and $\Phi$ a $\binom{0}{2}$-tensor on $V$. The numbers $\Phi_{ij}$, defined by $\Phi_{ij} = \Phi(e_i, e_j)$, are called the covariant components of the tensor $\Phi$ with respect to the basis $\{e_i\}$.

Notice(s): 2.8.5

Definition 2.8.12 Let $\{e_i\}$ be a basis of $V$. To every pair of indices $i$ and $j$ the $\binom{0}{2}$-tensor $\hat e^i \otimes \hat e^j$ is defined by
$$(\hat e^i \otimes \hat e^j)(x, y) = \hat e^i(x)\,\hat e^j(y) = \;<\hat e^i, x><\hat e^j, y>.$$

Lemma 2.8.1
The set $\{\hat e^i \otimes \hat e^j\}$ is a basis of $T^0_2(V)$. There holds: $\Phi = \Phi_{ij}\,\hat e^i \otimes \hat e^j$.
If $\dim(V) = n$ then $\dim(T^0_2(V)) = n^2$.
(From the proof of linear independence:) $0 = \alpha_{ij}(\hat e^i \otimes \hat e^j)(e_k, e_l) = \alpha_{ij}\,\delta^i_k\delta^j_l = \alpha_{kl}$.

Comment(s): 2.8.3 As previously stated, an inner product is a $\binom{0}{2}$-tensor. Hereby corresponds a linear transformation from $V$ to $V^*$, defined by $a \mapsto (a, \cdot)$. That is exactly the bijective linear transformation $\mathcal{G}$ from Theorem 2.5.1.
2.8.6 $\binom{2}{0}$-tensor = contravariant 2-tensor = linear transformation: $V^* \to V$

Definition 2.8.13 A $\binom{2}{0}$-tensor is a function from $V^* \times V^*$ to $\mathbb{R}$ which is linear in both arguments. A $\binom{2}{0}$-tensor is also called a bilinear function on $V^* \times V^*$.

Clarification(s): 2.8.2
For a $\binom{2}{0}$-tensor $H$ holds:
$$H(\alpha\hat x + \beta\hat y, \hat z) = \alpha H(\hat x, \hat z) + \beta H(\hat y, \hat z), \qquad H(\hat x, \alpha\hat y + \beta\hat z) = \alpha H(\hat x, \hat y) + \beta H(\hat x, \hat z),$$
for all $\hat x, \hat y, \hat z \in V^*$ and every $\alpha, \beta \in \mathbb{R}$.

Definition 2.8.14 For every pair of $\binom{2}{0}$-tensors $H$ and $h$, the $\binom{2}{0}$-tensor $H + h$ is defined by $(H + h)(\hat x, \hat y) = H(\hat x, \hat y) + h(\hat x, \hat y)$ and for every $\alpha \in \mathbb{R}$ the $\binom{2}{0}$-tensor $\alpha H$ is defined by $(\alpha H)(\hat x, \hat y) = \alpha H(\hat x, \hat y)$.

Comment(s): 2.8.4
The set of $\binom{2}{0}$-tensors is a Vector Space over $\mathbb{R}$, which is denoted by $V \otimes V$ and also by $T^2_0(V)$.

For every pair $x, y \in V$ the $\binom{2}{0}$-tensor $x \otimes y$ on $V$ is defined by
$$(x \otimes y)(\hat u, \hat v) = \;<\hat u, x><\hat v, y>.$$

Notice(s): 2.8.6

For every $\binom{2}{0}$-tensor $H$:
There exists exactly one linear transformation $\mathcal{H} : V^* \to V$ such that
$$\forall\hat x \in V^*\ \forall\hat y \in V^*:\quad H(\hat x, \hat y) = \;<\hat x, \mathcal{H}\hat y>.$$
Explicitly: $\mathcal{H} = H(\hat e^i, \cdot)\,e_i$, so $\mathcal{H}\hat v = H(\hat e^i, \hat v)\,e_i$.
There exists exactly one linear transformation $\mathcal{H}^* : V^* \to V$ such that
$$\forall\hat x \in V^*\ \forall\hat y \in V^*:\quad H(\hat x, \hat y) = \;<\hat y, \mathcal{H}^*\hat x>.$$
Explicitly: $\mathcal{H}^* = H(\cdot, \hat e^i)\,e_i$, so $\mathcal{H}^*\hat v = H(\hat v, \hat e^i)\,e_i$.

Proof
Choose a fixed $\hat b \in V^*$ and look at the $\binom{1}{0}$-tensor $\hat x \mapsto H(\hat x, \hat b)$. Now regard $\hat b$ as variable, so that a linear transformation $\mathcal{H} : V^* \to V$ is defined by $\hat b \mapsto H(\cdot, \hat b) = \;<\cdot, \mathcal{H}\hat b>$. Then $H(\hat u, \hat v) = \;<\hat u, \mathcal{H}\hat v>$. See Paragraph 2.8.3 for the explicit notation of a $\binom{1}{0}$-tensor. After a basis transition holds $\mathcal{H}\hat u = H(\hat e^{i'}, \hat u)\,e_{i'}$.
Choose a fixed $\hat a \in V^*$ and look at the $\binom{1}{0}$-tensor $\hat x \mapsto H(\hat a, \hat x)$. Now regard $\hat a$ as variable, so that a linear transformation $\mathcal{H}^* : V^* \to V$ is defined by $\hat a \mapsto H(\hat a, \cdot) = \;<\cdot, \mathcal{H}^*\hat a>$. Then $H(\hat u, \hat v) = \;<\hat v, \mathcal{H}^*\hat u>$. See Paragraph 2.8.3 for the explicit notation of a $\binom{1}{0}$-tensor. After a basis transition holds $\mathcal{H}^*\hat u = H(\hat u, \hat e^{i'})\,e_{i'}$. Compare with Paragraph 2.4.
Notice(s): 2.8.7
If $\forall\hat x \in V^*\ \forall\hat y \in V^*:\ H(\hat x, \hat y) = H(\hat y, \hat x)$, then $\mathcal{H} = \mathcal{H}^*$.
If $\forall\hat x \in V^*\ \forall\hat y \in V^*:\ H(\hat x, \hat y) = -H(\hat y, \hat x)$, then $\mathcal{H} = -\mathcal{H}^*$.

Definition 2.8.17 Let $\{e_i\}$ be a basis of $V$ and $H$ a $\binom{2}{0}$-tensor on $V$. The numbers $H^{ij}$, defined by $H^{ij} = H(\hat e^i, \hat e^j)$, are called the contravariant components of the tensor $H$ with respect to the basis $\{e_i\}$.

Notice(s): 2.8.8
The addition "contravariant" to (contravariant) components is here more a verbiage. See nevertheless also Paragraph 2.8.11.
For $\hat x, \hat y \in V^*$ holds $H(\hat x, \hat y) = x_iy_jH^{ij}$. The action of a $\binom{2}{0}$-tensor on two covectors $\hat x$ and $\hat y$ can thus be written as $H(\hat x, \hat y) = H^{ij}x_iy_j$.
$H^{i'j'} = H(\hat e^{i'}, \hat e^{j'}) = H(A^{i'}_i\hat e^i, A^{j'}_j\hat e^j) = A^{i'}_iA^{j'}_jH^{ij}$. Hereby the relation is given between the (contravariant) components of a $\binom{2}{0}$-tensor for two arbitrary bases.

Definition 2.8.18 Let $\{e_i\}$ be a basis of $V$. To every pair of indices $i$ and $j$ the $\binom{2}{0}$-tensor $e_i \otimes e_j$ is defined by
$$(e_i \otimes e_j)(\hat x, \hat y) = \;<\hat x, e_i><\hat y, e_j>.$$

Lemma 2.8.2
The set $\{e_i \otimes e_j\}$ is a basis of $T^2_0(V)$.

Proof Let $\Psi \in T^2_0(V)$; then for all $\hat x, \hat y \in V^*$ holds
$$\Psi(\hat x, \hat y) = \Psi(x_i\hat e^i, y_j\hat e^j) = \Psi^{ij}x_iy_j = \Psi^{ij}<\hat x, e_i><\hat y, e_j> = \Psi^{ij}(e_i \otimes e_j)(\hat x, \hat y),$$
or $\Psi = \Psi^{ij}\,e_i \otimes e_j$. The Vector Space $T^2_0(V)$ is accordingly spanned by the set $\{e_i \otimes e_j\}$. The final part to prove is that the system $\{e_i \otimes e_j\}$ is linearly independent. This part is done similarly as in Lemma 2.8.1.
2.8.7 $\binom{1}{1}$-tensor = mixed 2-tensor = linear transformation: $V \to V$ and $V^* \to V^*$

Definition 2.8.19 A $\binom{1}{1}$-tensor is a function from $V^* \times V$ to $\mathbb{R}$ which is linear in both arguments. A $\binom{1}{1}$-tensor is also called a bilinear function on $V^* \times V$.

Clarification(s): 2.8.3
For a $\binom{1}{1}$-tensor $R$ holds:
$$R(\alpha\hat x + \beta\hat y, z) = \alpha R(\hat x, z) + \beta R(\hat y, z), \qquad R(\hat x, \alpha y + \beta z) = \alpha R(\hat x, y) + \beta R(\hat x, z),$$
for all $\hat x, \hat y \in V^*$, all $y, z \in V$ and every $\alpha, \beta \in \mathbb{R}$.

Comment(s): 2.8.5
The set of $\binom{1}{1}$-tensors is a Vector Space over $\mathbb{R}$, which is denoted by $V \otimes V^*$ and also by $T^1_1(V)$.

For every $x \in V$ and $\hat y \in V^*$ the $\binom{1}{1}$-tensor $x \otimes \hat y$ on $V$ is defined by
$$(x \otimes \hat y)(\hat u, v) = \;<\hat u, x><\hat y, v>.$$

Definition 2.8.21
With a linear transformation $\mathcal{R} : V \to V$ is associated the $\binom{1}{1}$-tensor
$$R(\hat x, y) = \;<\hat x, \mathcal{R}y>.$$
With a linear transformation $\mathcal{P} : V^* \to V^*$ is associated the $\binom{1}{1}$-tensor
$$P(\hat x, y) = \;<\mathcal{P}\hat x, y>.$$
There exists a 1-1 correspondence between the $\binom{1}{1}$-tensors and the linear transformations from $V$ to $V$. There exists a 1-1 correspondence between the $\binom{1}{1}$-tensors and the linear transformations from $V^*$ to $V^*$.

For every $\binom{1}{1}$-tensor $R$:
There exists exactly one linear transformation $\mathcal{R} : V \to V$ such that
$$\forall\hat x \in V^*\ \forall y \in V:\quad R(\hat x, y) = \;<\hat x, \mathcal{R}y>.$$
Explicitly: $\mathcal{R} = R(\hat e^i, \cdot)\,e_i$, so $\mathcal{R}v = R(\hat e^i, v)\,e_i$.
There exists exactly one linear transformation $\mathcal{R}^* : V^* \to V^*$ such that
$$\forall\hat x \in V^*\ \forall y \in V:\quad R(\hat x, y) = \;<\mathcal{R}^*\hat x, y>.$$
Explicitly: $\mathcal{R}^* = R(\cdot, e_j)\,\hat e^j$, so $\mathcal{R}^*\hat u = R(\hat u, e_j)\,\hat e^j$.

Proof
Choose a fixed $a \in V$ and look at the $\binom{1}{0}$-tensor $\hat x \mapsto R(\hat x, a)$. Now regard $a$ as variable, so that a linear transformation $\mathcal{R} : V \to V$ is defined by $a \mapsto R(\cdot, a) = \;<\cdot, \mathcal{R}a>$. After a basis transition holds $\mathcal{R}u = R(\hat e^{i'}, u)\,e_{i'}$, such that the representation is independent of basis transitions.
Choose a fixed $\hat b \in V^*$ and look at the $\binom{0}{1}$-tensor $y \mapsto R(\hat b, y)$, an element of $V^*$. Now regard $\hat b$ as variable, so that a linear transformation $\mathcal{R}^* : V^* \to V^*$ is defined by $\hat b \mapsto R(\hat b, \cdot) = \;<\mathcal{R}^*\hat b, \cdot>$. Then $R(\hat u, v) = \;<\mathcal{R}^*\hat u, v>$.

Notice(s): 2.8.9
Interchanging $\hat x$ and $y$ in $R(\hat x, y)$ gives a meaningless expression.

Definition 2.8.22 Let $\{e_i\}$ be a basis of $V$ and $R$ a $\binom{1}{1}$-tensor on $V$. The numbers $R^i_j$, defined by $R^i_j = R(\hat e^i, e_j)$, are called the (mixed) components of the tensor $R$ with respect to the basis $\{e_i\}$.

Notice(s): 2.8.10
$R^{i'}_{j'} = R(\hat e^{i'}, e_{j'}) = R(A^{i'}_i\hat e^i, A^j_{j'}e_j) = A^{i'}_iA^j_{j'}R^i_j$. Hereby the relation is given between the (mixed) components of a $\binom{1}{1}$-tensor for two arbitrary bases.

Definition 2.8.23 Let $\{e_i\}$ be a basis of $V$. To every pair of indices $i$ and $j$ the $\binom{1}{1}$-tensor $e_i \otimes \hat e^j$ is defined by
$$(e_i \otimes \hat e^j)(\hat x, y) = \;<\hat x, e_i><\hat e^j, y>.$$

Lemma 2.8.3
2.8.8 $\binom{0}{3}$-tensor = covariant 3-tensor = linear transformation: $V \to (V \otimes V)^*$ and $(V \otimes V) \to V^*$

Definition 2.8.24 A $\binom{0}{3}$-tensor is a function from $V \times V \times V$ to $\mathbb{R}$ which is linear in each of its three vector arguments.
The meaning is an obvious expansion of the Clarifications 2.8.1, 2.8.2 and 2.8.3. See also the general Definition 2.8.1.

Definition 2.8.25 For every pair of $\binom{0}{3}$-tensors $\Phi$ and $\Psi$, the $\binom{0}{3}$-tensor $\Phi + \Psi$ is defined by $(\Phi + \Psi)(x, y, z) = \Phi(x, y, z) + \Psi(x, y, z)$ and for every $\alpha \in \mathbb{R}$ the $\binom{0}{3}$-tensor $\alpha\Phi$ is defined by $(\alpha\Phi)(x, y, z) = \alpha\Phi(x, y, z)$.

Comment(s): 2.8.6
The set of $\binom{0}{3}$-tensors is a Vector Space over $\mathbb{R}$, which is denoted by $V^* \otimes V^* \otimes V^*$ and also by $T^0_3(V)$.

Definition 2.8.26 For every $\hat u, \hat v, \hat w \in V^*$ the $\binom{0}{3}$-tensor $\hat u \otimes \hat v \otimes \hat w$ on $V$ is defined by
$$(\hat u \otimes \hat v \otimes \hat w)(x, y, z) = \;<\hat u, x><\hat v, y><\hat w, z>.$$

Definition 2.8.27 Let $\{e_i\}$ be a basis of $V$ and $\Phi$ a $\binom{0}{3}$-tensor on $V$. The numbers $\Phi_{hij}$, defined by $\Phi_{hij} = \Phi(e_h, e_i, e_j)$, are called the covariant components of the tensor $\Phi$ with respect to the basis $\{e_i\}$. The collection of covariant components of a $\binom{0}{3}$-tensor is organized in a 3-dimensional cubic matrix and is denoted by $[\Phi_{hij}]$.

Lemma 2.8.4
The set $\{\hat e^h \otimes \hat e^i \otimes \hat e^j\}$ is a basis of $T^0_3(V)$. There holds: $\Phi = \Phi_{hij}\,\hat e^h \otimes \hat e^i \otimes \hat e^j$.
If $\dim(V) = n$ then $\dim(T^0_3(V)) = n^3$.

Comment(s): 2.8.8
A $\binom{0}{3}$-tensor $\Phi$ can be understood, in 3 different manners, as a linear transformation from $V$ to $T^0_2(V) = V^* \otimes V^*$, and thereby, in 6 different manners, as a linear transformation from $V$ to the "Vector Space of linear transformations $V \to V^*$". Simply said, if a vector $a$ is put in one slot of the tensor $\Phi$, one obtains a $\binom{0}{2}$-tensor, for instance $\Phi(\cdot, a, \cdot)$. In index notation $\Phi_{hij}a^i$.
A $\binom{0}{3}$-tensor $\Phi$ can be understood, in 6 different manners, as a linear transformation from the "Vector Space of linear transformations $V^* \to V$" to $V^*$. Let $H = H^{ij}\,e_i \otimes e_j$. For instance:
$$H \mapsto \Phi(e_i, e_j, \cdot)\,H^{ij} = \Phi(e_i, e_j, e_k)\,H^{ij}\,\hat e^k = \Phi(e_i, e_j, e_k)\,H^{ij}<\hat e^k, \cdot>.$$
2.8.9 $\binom{2}{2}$-tensor = mixed 4-tensor = linear transformation: $(V \otimes V) \to (V \otimes V)$, etc.

Definition 2.8.28 A $\binom{2}{2}$-tensor is a function from $V^* \times V^* \times V \times V$ to $\mathbb{R}$ which is linear in each of its four arguments.
For more explanation, see Definition 2.8.1.

Comment(s): 2.8.9
The set of $\binom{2}{2}$-tensors is a Vector Space over $\mathbb{R}$, which is denoted by $V \otimes V \otimes V^* \otimes V^*$ and also by $T^2_2(V)$.

Definition 2.8.29 For every $a, b \in V$ and $\hat c, \hat d \in V^*$ the $\binom{2}{2}$-tensor $a \otimes b \otimes \hat c \otimes \hat d$ is defined by
$$(a \otimes b \otimes \hat c \otimes \hat d)(\hat u, \hat v, x, y) = \;<\hat u, a><\hat v, b><\hat c, x><\hat d, y>.$$

Notice(s): 2.8.11

Definition 2.8.30 Let $\{e_i\}$ be a basis of $V$ and $\Psi$ a $\binom{2}{2}$-tensor on $V$. The numbers $\Psi^{jk}_{hi}$, defined by $\Psi^{jk}_{hi} = \Psi(\hat e^j, \hat e^k, e_h, e_i)$, are called the (mixed) components of the tensor $\Psi$ with respect to the basis $\{e_i\}$. The collection of mixed components of a $\binom{2}{2}$-tensor is organized in a 4-dimensional cubic matrix and is denoted by $[\Psi^{jk}_{hi}]$.

Lemma 2.8.5
The set $\{e_j \otimes e_k \otimes \hat e^h \otimes \hat e^i\}$ is a basis of $T^2_2(V)$. There holds: $\Psi = \Psi^{jk}_{hi}\,e_j \otimes e_k \otimes \hat e^h \otimes \hat e^i$.
If $\dim(V) = n$ then $\dim(T^2_2(V)) = n^4$.

Comment(s): 2.8.10 A $\binom{2}{2}$-tensor can be understood in many different ways as a linear transformation from a "space of linear transformations" to a "space of linear transformations". For instance, with a linear transformation $\mathcal{R} : V \to V$ with components $R^l_m$:
$$\Psi(\mathcal{R}) : V \to V:\quad x \mapsto \Psi(\mathcal{R})x = R^l_m\,\Psi(\hat e^j, \hat e^m, e_l, x)\,e_j = R^{l'}_{m'}\,\Psi(\hat e^{j'}, \hat e^{m'}, e_{l'}, x)\,e_{j'},$$
$$\Psi(\mathcal{R}) : V^* \to V^*:\quad \hat x \mapsto \Psi(\mathcal{R})\hat x = R^l_m\,\Psi(\hat x, \hat e^m, e_l, e_j)\,\hat e^j = R^{l'}_{m'}\,\Psi(\hat x, \hat e^{m'}, e_{l'}, e_{j'})\,\hat e^{j'}.$$
In index notation this "game" can also be played with summations over other indices.
The case $(V \otimes V^*) \to (V \otimes V^*)$ and $(V^* \otimes V) \to (V^* \otimes V)$. Let $\mathcal{K} : V \to V^*$ be a linear transformation. Write $\mathcal{K} = K_{ij}\,\hat e^i \otimes \hat e^j$. In this case one works only with the index notation:
$$[K_{hi}] \mapsto [\Psi^{jk}_{hi}K_{jk}], \qquad [x^h] \mapsto [\Psi^{jk}_{hi}K_{jk}x^h], \ldots$$
With $\mathcal{H} : V^* \to V$ and $\mathcal{H} = H^{jk}\,e_j \otimes e_k$:
$$[H^{jk}] \mapsto [\Psi^{jk}_{hi}H^{hi}], \qquad [x_j] \mapsto [\Psi^{jk}_{hi}H^{hi}x_j], \ldots$$
Et cetera.
The Hooke tensor in linear elasticity theory is an important example of a 4-tensor. This tensor linearly transforms a "deformation state", described by a linear transformation, to a "stress state", also described by a linear transformation. See for more information Appendix ??.
2.8.10 Continuation of the general considerations about $\binom{r}{s}$-tensors. Contraction and $\otimes$.

For every choice of $r$ vectors $a, b, \ldots, d \in V$ and $s$ covectors $\hat p, \hat q, \ldots, \hat u \in V^*$ the $\binom{r}{s}$-tensor $a \otimes b \otimes \cdots \otimes d \otimes \hat p \otimes \hat q \otimes \cdots \otimes \hat u$ is defined by
$$a \otimes b \otimes \cdots \otimes d \otimes \hat p \otimes \hat q \otimes \cdots \otimes \hat u\,(\underbrace{\hat v, \hat w, \ldots, \hat z}_{r \text{ covectors}}, \underbrace{f, g, \ldots, k}_{s \text{ vectors}}) = \;<\hat v, a>\cdots<\hat z, d>\,<\hat p, f>\cdots<\hat u, k>.$$
For every choice of the covectors and vectors the right-hand side is a product of $(r + s)$ real numbers!

Definition 2.8.32 For every pair of $\binom{r}{s}$-tensors $T$ and $t$, the $\binom{r}{s}$-tensor $T + t$ is defined by
$$(T + t)(\hat v, \hat w, \ldots, \hat z, f, g, \ldots, k) = T(\hat v, \hat w, \ldots, \hat z, f, g, \ldots, k) + t(\hat v, \hat w, \ldots, \hat z, f, g, \ldots, k)$$
and for every $\alpha \in \mathbb{R}$ the $\binom{r}{s}$-tensor $\alpha T$ is defined by
$$(\alpha T)(\hat v, \hat w, \ldots, \hat z, f, g, \ldots, k) = \alpha\,T(\hat v, \hat w, \ldots, \hat z, f, g, \ldots, k).$$
The proof of the following theorem goes the same way as for the foregoing lower-order examples.

Theorem 2.8.4
The set of $\binom{r}{s}$-tensors is a Vector Space over $\mathbb{R}$, which is denoted by $T^r_s(V)$.
Let $\{e_i\}$ be a basis of $V$ and $\dim(V) = n$; then a basis of $T^r_s(V)$ is given by
$$\{\,e_{i_1} \otimes e_{i_2} \otimes \cdots \otimes e_{i_r} \otimes \hat e^{j_1} \otimes \hat e^{j_2} \otimes \cdots \otimes \hat e^{j_s}\,\}, \qquad \text{with } 1 \leq i_1, \ldots, i_r \leq n,\ 1 \leq j_1, \ldots, j_s \leq n.$$
So $\dim T^r_s(V) = n^{r+s}$.
In the expansion
$$T = T^{i_1i_2\cdots i_r}_{\phantom{i_1i_2\cdots i_r}j_1j_2\cdots j_s}\;e_{i_1} \otimes e_{i_2} \otimes \cdots \otimes e_{i_r} \otimes \hat e^{j_1} \otimes \hat e^{j_2} \otimes \cdots \otimes \hat e^{j_s}$$
the components are given by
$$T^{i_1i_2\cdots i_r}_{\phantom{i_1i_2\cdots i_r}j_1j_2\cdots j_s} = T(\hat e^{i_1}, \hat e^{i_2}, \ldots, \hat e^{i_r}, e_{j_1}, e_{j_2}, \ldots, e_{j_s}).$$

The order of a tensor can be decreased by 2. The definition makes use of a basis of $V$, but is independent of which basis is chosen.

Definition 2.8.33 Let $\{e_i\}$ be a basis of $V$. Let $T \in T^r_s(V)$ with $r \geq 1$ and $s \geq 1$. Consider the summation
$$T(\cdots, \hat e^i, \cdots, e_i, \cdots) = T(\cdots, \hat e^{i'}, \cdots, e_{i'}, \cdots).$$
The dual basis vectors stay at a fixed chosen "covector place"; the basis vectors stay at a fixed chosen "vector place". The summation so defined is a $\binom{r-1}{s-1}$-tensor. The corresponding linear transformation from $T^r_s(V)$ to $T^{r-1}_{s-1}(V)$ is called a contraction.

Example(s): 2.8.2
The contraction of a $\binom{1}{1}$-tensor $R$ is the scalar $R^i_i$, the trace of its holor.

Comment(s): 2.8.11
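A numerical sketch (my addition) of the basis independence of this contraction: the holor of a $\binom{1}{1}$-tensor transforms as $R'_{,'} = A'_{,}RA_{,'}$, and its trace $R^i_i$ does not change.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
R = rng.standard_normal((n, n))     # mixed holor [R^i_j] on the basis {e_i}
A = rng.standard_normal((n, n))     # transition matrix A_{,'} (columns: new basis)
A_prime = np.linalg.inv(A)          # A'_{,} = (A_{,'})^{-1}

R_new = A_prime @ R @ A             # holor on the basis {e_{i'}}

# The contraction R^i_i is a scalar: it does not depend on the basis.
assert np.isclose(np.trace(R), np.trace(R_new))
```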
2.8.11 Tensors on Vector Spaces provided with an inner product

It is known from paragraphs 2.5 and 2.6 that if an inner product is chosen on $V$, then $V$ and $V^*$ can be identified with each other. There exists a bijective linear transformation $\mathcal{G} : V \to V^*$ with inverse $\mathcal{G}^{-1} : V^* \to V$. To every basis $\{e_i\}$ on $V$ there is available the associated "reciprocal" basis $\{e^i\}$ in $V$, such that $(e^i, e_j) = \delta^i_j$.
This means that it is sufficient to work with $\binom{0}{p}$-tensors, the covariant tensors. Every "slot" of some mixed $\binom{r}{s}$-tensor which is sensitive to covectors can be made sensitive to a vector by transforming such a vector with the linear transformation $\mathcal{G}$. Conversely, every "slot" which is sensitive to vectors can be made sensitive to covectors by using the linear transformation $\mathcal{G}^{-1}$.
Summarized: if some fixed inner product is chosen, it is enough to speak about $p$-tensors. Out of every $p$-tensor some type of $\binom{r}{s}$-tensor can be constructed, with $r + s = p$.

Conclusion(s): (IMPORTANT)
A FIXED CHOSEN inner product $(\cdot, \cdot)$ on $V$ leads to:
calculations with the usual rules of an inner product: replace all angular brackets $<\cdot, \cdot>$ by round brackets $(\cdot, \cdot)$;
correct expressions, if the hats $\hat{\ }$ are dropped in all the formulas of paragraphs 2.8.1 till 2.8.10.
Example(s): 2.8.3

2.9 Mathematical interpretation of the "Engineering tensor concept"

Comment(s): 2.9.1
From linear algebra, the 1-dimensional blocks of numbers (the rows and columns) and the 2-dimensional blocks of numbers (the matrices) are well known. Within the use of these blocks of numbers a difference is already made between upper and lower indices. Here $q$-dimensional blocks of numbers with upper, lower or mixed indices will be considered. These kinds of "super matrices" are also called "holors". For instance, the covariant components of a 4-tensor lead to a 4-dimensional block of numbers with lower indices.

Notation(s):
$T^2_0(\mathbb{R}^n)$ is the Vector Space of all $n \times n$-matrices with upper indices.
$T^1_1(\mathbb{R}^n)$ is the Vector Space of all $n \times n$-matrices with mixed indices.
$T^0_2(\mathbb{R}^n)$ is the Vector Space of all $n \times n$-matrices with lower indices.
$T^1_2(\mathbb{R}^n)$ is the Vector Space of all 3-dimensional cubic matrices with one upper index and two lower indices.
$T^r_s(\mathbb{R}^n)$, with $r, s \in \{0, 1, 2, \ldots\}$ fixed, is the Vector Space of all $(r + s)$-dimensional holors with $s$ lower indices and $r$ upper indices.

Comment(s): 2.9.2
The Vector Space $T^r_s(\mathbb{R}^n)$ over $\mathbb{R}$ is of dimension $n^{(r+s)}$ and is isomorphic with $\mathbb{R}^{n^{(r+s)}}$. If for instance the indices are ordered lexicographically, then an identification with $\mathbb{R}^{n^{(r+s)}}$ can be achieved.

Notation(s):
Bas(V) denotes the set of all bases of the Vector Space $V$.

Comment(s): 2.9.3
In the following definitions alternative definitions of tensors are given. A tensor will be defined as a transformation from Bas(V) to $T^r_s(\mathbb{R}^n)$ for certain $r$ and $s$. This transformation will be such that if the action on one basis is known, the action on another basis can be calculated with the use of transition matrices. In other words, if the holor with respect to a certain basis is known, then the holors with respect to all other bases are known.

Definition 2.9.1 A 0-tensor or $\binom{0}{0}$-tensor is a transformation from Bas(V) to $T^0_0(\mathbb{R}^n)$ which assigns to every basis a unique number.

Notation(s):

Definition 2.9.2 A covariant 1-tensor, $\binom{0}{1}$-tensor or covector is a transformation $F : \mathrm{Bas}(V) \to T^0_1(\mathbb{R}^n)$ with the property
$$\left.\begin{aligned} F(\{e_i\}) &= [x_j] \\ F(\{e_{i'}\}) &= [x_{j'}] \end{aligned}\right\}\quad x_{j'} = A^j_{j'}\,x_j.$$

Comment(s): 2.9.4

Notation(s):

Definition 2.9.3 A contravariant 1-tensor, $\binom{1}{0}$-tensor or vector is a transformation $F : \mathrm{Bas}(V) \to T^1_0(\mathbb{R}^n)$ with the property
$$\left.\begin{aligned} F(\{e_i\}) &= [x^j] \\ F(\{e_{i'}\}) &= [x^{j'}] \end{aligned}\right\}\quad x^{j'} = A^{j'}_j\,x^j.$$

Comment(s): 2.9.5
Notation(s):

Definition 2.9.4 A covariant 2-tensor or $\binom{0}{2}$-tensor is a transformation $S : \mathrm{Bas}(V) \to T^0_2(\mathbb{R}^n)$ with the property
$$\left.\begin{aligned} S(\{e_i\}) &= [T_{kl}] \\ S(\{e_{i'}\}) &= [T_{k'l'}] \end{aligned}\right\}\quad T_{k'l'} = A^k_{k'}A^l_{l'}\,T_{kl}.$$

Comment(s): 2.9.6

Definition 2.9.5 A contravariant 2-tensor or $\binom{2}{0}$-tensor is a transformation $S : \mathrm{Bas}(V) \to T^2_0(\mathbb{R}^n)$ with the property
$$\left.\begin{aligned} S(\{e_i\}) &= [T^{kl}] \\ S(\{e_{i'}\}) &= [T^{k'l'}] \end{aligned}\right\}\quad T^{k'l'} = A^{k'}_kA^{l'}_l\,T^{kl}.$$

Comment(s): 2.9.7

Definition 2.9.6 A mixed 2-tensor or $\binom{1}{1}$-tensor is a transformation $S : \mathrm{Bas}(V) \to T^1_1(\mathbb{R}^n)$ with the property
$$\left.\begin{aligned} S(\{e_i\}) &= [T^k_l] \\ S(\{e_{i'}\}) &= [T^{k'}_{l'}] \end{aligned}\right\}\quad T^{k'}_{l'} = A^{k'}_kA^l_{l'}\,T^k_l.$$

Comment(s): 2.9.8

Definition 2.9.7 A covariant $p$-tensor or $\binom{0}{p}$-tensor is a transformation $S : \mathrm{Bas}(V) \to T^0_p(\mathbb{R}^n)$ with the property
$$\left.\begin{aligned} S(\{e_i\}) &= [T_{i_1\cdots i_p}] \\ S(\{e_{i'}\}) &= [T_{i_1'\cdots i_p'}] \end{aligned}\right\}\quad T_{i_1'\cdots i_p'} = A^{i_1}_{i_1'}\cdots A^{i_p}_{i_p'}\,T_{i_1\cdots i_p}.$$

Definition 2.9.8 A contravariant $q$-tensor or $\binom{q}{0}$-tensor is a transformation $S : \mathrm{Bas}(V) \to T^q_0(\mathbb{R}^n)$ with the property
$$\left.\begin{aligned} S(\{e_i\}) &= [T^{i_1\cdots i_q}] \\ S(\{e_{i'}\}) &= [T^{i_1'\cdots i_q'}] \end{aligned}\right\}\quad T^{i_1'\cdots i_q'} = A^{i_1'}_{i_1}\cdots A^{i_q'}_{i_q}\,T^{i_1\cdots i_q}.$$

Definition 2.9.9 A mixed $(r + s)$-tensor or $\binom{r}{s}$-tensor is a transformation $S : \mathrm{Bas}(V) \to T^r_s(\mathbb{R}^n)$ with the property
$$\left.\begin{aligned} S(\{e_i\}) &= [T^{k_1\cdots k_r}_{\ l_1\cdots l_s}] \\ S(\{e_{i'}\}) &= [T^{k_1'\cdots k_r'}_{\ l_1'\cdots l_s'}] \end{aligned}\right\}\quad T^{k_1'\cdots k_r'}_{\ l_1'\cdots l_s'} = A^{k_1'}_{k_1}\cdots A^{k_r'}_{k_r}\,A^{l_1}_{l_1'}\cdots A^{l_s}_{l_s'}\,T^{k_1\cdots k_r}_{\ l_1\cdots l_s}.$$

Comment(s): 2.9.9
A $\binom{r}{s}$-tensor is called contravariant of order $r$ and covariant of order $s$.
The previously treated mathematical operations, such as addition, scalar multiplication, (tensor) multiplication and contraction of tensors, are calculations which lead to the same new tensor independent of whatever basis is used. In other words, the calculations can be done on every arbitrary basis. Such a calculation is called "tensorial". The calculations are invariant under transformations of coordinates.
To describe a tensor it is enough to give its holor with respect to a certain basis. With the definitions in this paragraph the holors with respect to other bases can then be calculated.
Some examples follow; see also the sketch directly below.
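The following sketch (my addition, with arbitrarily chosen holors) illustrates Definition 2.9.4 numerically: the rule $T_{k'l'} = A^k_{k'}A^l_{l'}T_{kl}$ is, in matrix language, $T_{,','} = (A_{,'})^TTA_{,'}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
T = rng.standard_normal((n, n))     # holor [T_{kl}] with respect to the basis {e_i}
A = rng.standard_normal((n, n))     # transition matrix A_{,'} = [A^k_{k'}]

T_index = np.einsum('kl,kK,lL->KL', T, A, A)   # T_{k'l'} = A^k_{k'} A^l_{l'} T_{kl}
T_matrix = A.T @ T @ A

assert np.allclose(T_index, T_matrix)
```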
Example(s): 2.9.1
Consider the transformation $F_1$ which assigns to every basis of $V$ the matrix $[\delta^k_m]$. Is $F_1$ a mixed 2-tensor? There holds
$$\delta^{k'}_{m'} = A^{k'}_kA^k_{m'} = A^{k'}_kA^m_{m'}\,\delta^k_m,$$
and it follows that $F_1$ is a mixed 2-tensor. This can also be seen in matrix language; the argument is then: there holds $I = (A_{,'})^{-1}\,I\,A_{,'}$ for every invertible $n \times n$-matrix $A_{,'}$. $F_1$ is the Kronecker tensor.
Consider the transformation $F_2$ which assigns to every basis of $V$ the matrix $[\delta_{km}]$. The question becomes whether $F_2$ is a covariant 2-tensor. Does "$I = (A_{,'})^T\,I\,A_{,'}$" hold for every invertible $n \times n$-matrix? The answer is "no", so $F_2$ is not a covariant 2-tensor.
Consider the transformation $F_3$ which assigns to every basis of $V$ the matrix $[\delta^{km}]$. The question becomes whether $F_3$ is a contravariant 2-tensor. The answer is "no", because "$I = A'_{,}\,I\,(A'_{,})^T$" is not valid for every invertible $n \times n$-matrix.
If there were a restriction to orthogonal transition matrices, then $F_2$ and $F_3$ would be 2-tensors.
The linear transformation from $V$ to $V$ belonging to the mixed 2-tensor $F_1$ is given by $x \mapsto \;<\hat e^i, x>e_i = x^ie_i = x$, the identity map on $V$.
Consider the transformation $F$ which assigns to every basis of $V$ the matrix $\mathrm{diag}(2, 1, 1)$. The question becomes whether $F$ is a covariant, contravariant or mixed 2-tensor. It is not difficult to find an invertible matrix $A$ such that $\mathrm{diag}(2, 1, 1) \neq A^{-1}\,\mathrm{diag}(2, 1, 1)\,A$. So it follows immediately that $F$ is not a 2-tensor of any of the types asked.

Example(s): 2.9.2
Consider a mixed 2-tensor and define $\Gamma(\{e_i\}) = \mathrm{trace}(Q)$, where $Q$ is its holor on the basis $\{e_i\}$; then $\Gamma(\{e_i\}) = \Gamma(\{e_{i'}\})$, so $\Gamma$ is obviously a scalar. The argument in matrix language is that $\mathrm{trace}((A_{,'})^{-1}QA_{,'}) = \mathrm{trace}(Q'_{,'})$ for every invertible $n \times n$-matrix $A_{,'}$.
Consider a covariant 2-tensor $\Phi$. Write $\Phi(\{e_i\}) = [q_{kl}] = Q$. Is the transformation $\Gamma : \mathrm{Bas}(V) \to \mathbb{R}$, defined by
$$\Gamma(\{e_i\}) = \sum_{l=1}^{n}q_{ll} = \mathrm{trace}(Q),$$
a 0-tensor?

Example(s): 2.9.3
Given are the holors $q_{ij}$, $q^{ij}$, $q^i_j$. Is the interchange of indices a tensor operation, or in matrix language: "Is the transposition of a matrix a tensor operation?" For matrices with mixed indices this is not the case, but for matrices with lower or upper indices it is a tensor operation. The explanation follows in matrix language.
Mixed indices: Denote $Q = [q^i_j]$ and $Q'_{,'} = [q^{i'}_{j'}]$. Then $Q'_{,'} = A'_{,}QA_{,'}$, whereas $(Q'_{,'})^T = (A_{,'})^TQ^T(A'_{,})^T$; this is in general not of the form $A'_{,}Q^TA_{,'}$, so transposition does not commute with the transformation rule of a mixed 2-tensor.

Example(s): 2.9.4
Let $n = 3$. Given are the contravariant 1-tensors $x^i$ and $y^j$. Calculate the cross product $z^1 = x^2y^3 - x^3y^2$, $z^2 = x^3y^1 - x^1y^3$ and $z^3 = x^1y^2 - x^2y^1$. This is not a tensor operation, so $z^k$ is not a contravariant 1-tensor. In other words: $z^k$ is not a vector. To see this, use the following calculation rule:
$$\forall\,U, V \in \mathbb{R}^3\ \forall\,S \in \mathbb{R}^{3\times3},\ S \text{ invertible}:\quad (SU) \times (SV) = \det(S)\,S^{-T}(U \times V). \tag{2.3}$$
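Rule (2.3) is easy to verify numerically. The sketch below is my addition: it draws random vectors and an (almost surely invertible) random matrix and checks the identity.

```python
import numpy as np

rng = np.random.default_rng(3)
U, V = rng.standard_normal(3), rng.standard_normal(3)
S = rng.standard_normal((3, 3))          # almost surely invertible

lhs = np.cross(S @ U, S @ V)
rhs = np.linalg.det(S) * np.linalg.inv(S).T @ np.cross(U, V)

assert np.allclose(lhs, rhs)
```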
Example(s): 2.9.5

2.10 Symmetric and Antisymmetric Tensors

Comment(s): 2.10.1

Definition 2.10.2 Let $\Phi \in T_k(V)\ (= T^0_k(V))$. The tensor $\Phi$ is called symmetric if for every set of $k$ vectors $v_1, \ldots, v_k \in V$ and for every $\sigma \in S_k$ it holds that $\Phi(v_1, \ldots, v_k) = \Phi(v_{\sigma(1)}, \ldots, v_{\sigma(k)})$. The tensor $\Phi$ is called antisymmetric if for every set of $k$ vectors $v_1, \ldots, v_k \in V$ and for every $\sigma \in S_k$ it holds that $\Phi(v_1, \ldots, v_k) = \mathrm{sgn}(\sigma)\,\Phi(v_{\sigma(1)}, \ldots, v_{\sigma(k)})$.

Comment(s): 2.10.2
If $k > n$, then every antisymmetric tensor is equal to the tensor which assigns $0$ to every element of its domain.
The interchange of an arbitrary pair of "input" vectors has no influence on a symmetric tensor; it gives a factor $-1$ for an antisymmetric tensor.
The sets of the symmetric and the antisymmetric $k$-tensors are subspaces of $T_k(V)$.

Notation(s):
The Vector Space of the symmetric $k$-tensors is denoted by $\bigvee_k(V)$.
The Vector Space of the antisymmetric $k$-tensors is denoted by $\bigwedge_k(V)$.
The agreement is that $\bigvee_0(V) = \bigwedge_0(V) = \mathbb{R}$ and $\bigvee_1(V) = \bigwedge_1(V) = V^*$.

Notice(s): 2.10.1
$$\hat f \wedge \hat g = -\hat g \wedge \hat f, \qquad \hat f \wedge \hat f = 0 \qquad \text{and} \qquad (\hat f + \lambda\hat g) \wedge \hat g = \hat f \wedge \hat g \quad \text{for all } \lambda \in \mathbb{R}.$$
The antisymmetric $k$-tensor $\hat f_1 \wedge \cdots \wedge \hat f_k$, built from covectors $\hat f_1, \ldots, \hat f_k \in V^*$, is defined by
$$\hat f_1 \wedge \cdots \wedge \hat f_k\,(x_1, \ldots, x_k) = \det\begin{pmatrix} <\hat f_1, x_1> & \cdots & <\hat f_1, x_k> \\ \vdots & & \vdots \\ <\hat f_k, x_1> & \cdots & <\hat f_k, x_k> \end{pmatrix}.$$

Notice(s): 2.10.2
$\hat f_1 \wedge \cdots \wedge \hat f_k = 0$ if and only if the set $\{\hat f_1, \ldots, \hat f_k\}$ is linearly dependent.
$\hat f_1 \wedge \cdots \wedge \hat f_k$ is an element of $\bigwedge_k(V)$.

Clarification(s): 2.10.1

Consequence(s):
The dimension of $\bigwedge_k(V)$ is equal to
$$\binom{n}{k}. \tag{2.4}$$
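A direct numerical reading of the determinant formula above, added by me: for covectors given by their component rows, $\hat f_1 \wedge \cdots \wedge \hat f_k\,(x_1, \ldots, x_k)$ is the determinant of the $k \times k$ matrix of pairings $<\hat f_i, x_j>$.

```python
import numpy as np

def wedge_eval(F, X):
    """F: (k, n) array, rows are covector components f_i;
       X: (n, k) array, columns are vector components x_j.
       Returns (f_1 ^ ... ^ f_k)(x_1, ..., x_k)."""
    return np.linalg.det(F @ X)          # entry (i, j) of F @ X is <f_i, x_j>

F = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

print(wedge_eval(F, X))                  # 1.0 for this choice
# Antisymmetry: swapping x_1 and x_2 flips the sign.
assert np.isclose(wedge_eval(F, X[:, ::-1]), -wedge_eval(F, X))
```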
Comment(s): 2.10.3
In this definition use is made of the operator perm, which is called the permanent. Perm assigns a number to a matrix. The calculation is almost the same as the calculation of a determinant; the only difference is that every term gets a plus sign instead of alternately a plus or a minus sign.

Notice(s): 2.10.3
$$\hat f \vee \hat g = \hat g \vee \hat f, \qquad \hat f \vee \hat g = 0 \iff \hat f = 0 \text{ and/or } \hat g = 0, \qquad (\hat f + \lambda\hat g) \vee \hat g = \hat f \vee \hat g + \lambda\,\hat g \vee \hat g \quad \text{for all } \lambda \in \mathbb{R}.$$

$$\hat f_1 \vee \cdots \vee \hat f_k\,(x_1, \ldots, x_k) = \mathrm{perm}\begin{pmatrix} <\hat f_1, x_1> & \cdots & <\hat f_1, x_k> \\ \vdots & & \vdots \\ <\hat f_k, x_1> & \cdots & <\hat f_k, x_k> \end{pmatrix}.$$

Notice(s): 2.10.4
The order in $\hat f_1 \vee \cdots \vee \hat f_k$ is of no importance; another order gives the same symmetric $k$-tensor.
$\hat f_1 \vee \cdots \vee \hat f_k = 0$ if and only if there exists an index $j$ such that $\hat f_j = 0$.
$\hat f_1 \vee \cdots \vee \hat f_k$ is an element of $\bigvee_k(V)$.

Clarification(s): 2.10.2
$$\Phi = \sum_{1 \leq i_1 \leq \cdots \leq i_k \leq n}\frac{\Phi_{i_1\cdots i_k}}{\gamma_{i_1\cdots i_k}}\;\hat e^{i_1} \vee \cdots \vee \hat e^{i_k},$$
with $\Phi_{i_1\cdots i_k} = \Phi(e_{i_1}, \ldots, e_{i_k})$ and $\gamma_{i_1\cdots i_k} = \hat e^{i_1} \vee \cdots \vee \hat e^{i_k}(e_{i_1}, \ldots, e_{i_k})$. In this last expression the Einstein summation convention is not applicable!

Consequence(s):
The dimension of $\bigvee_k(V)$ is equal to $\binom{n + k - 1}{k}$.

Example(s): 2.10.1
Notice(s): 2.10.5

Definition 2.10.8 If $\Phi \in \bigwedge_k(V)$ and $\Psi \in \bigwedge_l(V)$, then $\Phi \wedge \Psi \in \bigwedge_{k+l}(V)$ is defined by
$$\Phi \wedge \Psi = \frac{(k + l)!}{k!\,l!}\,\mathcal{A}(\Phi \otimes \Psi),$$
with $\mathcal{A}$ the antisymmetrizing operator.

Comment(s): 2.10.4

For all $\Phi \in \bigwedge_k(V)$, $\Psi \in \bigwedge_l(V)$, $\Theta \in \bigwedge_m(V)$ and $\Xi \in \bigwedge_m(V)$:
$$\Phi \wedge \Psi = (-1)^{kl}\,\Psi \wedge \Phi, \qquad (\Phi \wedge \Psi) \wedge \Theta = \Phi \wedge (\Psi \wedge \Theta), \qquad \Phi \wedge (\Theta + \Xi) = \Phi \wedge \Theta + \Phi \wedge \Xi.$$

Clarification(s): 2.10.3
The proof of the given theorem is omitted. The proof is a not-so-small accounting and combinatorial issue, which can be found in (Abraham et al., 2001), Manifolds, page 387.
The practical calculations with the wedge product are done following obvious rules. For instance, if $k = 2$, $l = 1$ and $\Phi = \hat u \wedge \hat w + \hat v \wedge \hat x$, $\Psi = \hat x + \hat z$, then
$$\Phi \wedge \Psi = \hat u \wedge \hat w \wedge \hat x + \hat u \wedge \hat w \wedge \hat z + \hat v \wedge \hat x \wedge \hat z.$$
Example(s): 2.10.2
Consider $\mathbb{R}^2$ with basis $\{e_1, e_2\}$, given by $e_1 = \begin{pmatrix}1\\0\end{pmatrix}$ and $e_2 = \begin{pmatrix}0\\1\end{pmatrix}$. The associated dual basis is given by $\{\hat e^1, \hat e^2\}$. The following notations are employed here:
$$e_1 = \frac{\partial}{\partial x}, \quad e_2 = \frac{\partial}{\partial y} \qquad \text{and} \qquad \hat e^1 = dx, \quad \hat e^2 = dy.$$
The Vector Space $\bigwedge_1(\mathbb{R}^2) = (\mathbb{R}^2)^*$ is 2-dimensional and a basis of this space is given by $\{dx, dy\}$. Let $\hat\alpha, \hat\beta \in \bigwedge_1(\mathbb{R}^2)$ and expand these covectors in their covariant components with respect to the basis $\{dx, dy\}$. So $\hat\alpha = \alpha_1\,dx + \alpha_2\,dy$, with $\alpha_1 = \hat\alpha\bigl(\tfrac{\partial}{\partial x}\bigr)$ and $\alpha_2 = \hat\alpha\bigl(\tfrac{\partial}{\partial y}\bigr)$. In the same way $\hat\beta = \beta_1\,dx + \beta_2\,dy$, and there follows that
$$\hat\alpha \wedge \hat\beta = (\alpha_1\,dx + \alpha_2\,dy) \wedge (\beta_1\,dx + \beta_2\,dy) = \alpha_1\beta_2\,dx \wedge dy + \alpha_2\beta_1\,dy \wedge dx = (\alpha_1\beta_2 - \alpha_2\beta_1)\,dx \wedge dy.$$
Let $a = \begin{pmatrix}a^1\\a^2\end{pmatrix},\ b = \begin{pmatrix}b^1\\b^2\end{pmatrix} \in \mathbb{R}^2$. The numbers $a^1, a^2, b^1$ and $b^2$ are the contravariant components of $a$ and $b$ with respect to the basis $\{\tfrac{\partial}{\partial x}, \tfrac{\partial}{\partial y}\}$. There holds that
$$(dx \wedge dy)(a, b) = \;<dx, a><dy, b> - <dx, b><dy, a> = a^1b^2 - a^2b^1.$$
This number is the oriented area of the parallelogram spanned by the vectors $a$ and $b$.
The Vector Space $\bigwedge_2(\mathbb{R}^2)$ is 1-dimensional and a basis is given by $\{dx \wedge dy\}$.
Example(s): 2.10.3
Consider $\mathbb{R}^3$ with the standard basis $e_1 = \tfrac{\partial}{\partial x} = (1, 0, 0)^T$, $e_2 = \tfrac{\partial}{\partial y} = (0, 1, 0)^T$, $e_3 = \tfrac{\partial}{\partial z} = (0, 0, 1)^T$ and the corresponding dual basis $\{dx, dy, dz\}$.

Example(s): 2.10.4
Consider $\mathbb{R}^4$ with basis $\{e_0, e_1, e_2, e_3\}$ given by $e_0 = \tfrac{\partial}{\partial t} = (1, 0, 0, 0)^T$, $e_1 = \tfrac{\partial}{\partial x} = (0, 1, 0, 0)^T$, $e_2 = \tfrac{\partial}{\partial y} = (0, 0, 1, 0)^T$ and $e_3 = \tfrac{\partial}{\partial z} = (0, 0, 0, 1)^T$. The corresponding dual basis is denoted by $\{dt, dx, dy, dz\}$.
The basis of the 4-dimensional Vector Space $\bigwedge_1(\mathbb{R}^4)$ is $\{dt, dx, dy, dz\}$.
The basis of the 6-dimensional Vector Space $\bigwedge_2(\mathbb{R}^4)$ is $\{dt \wedge dx, dt \wedge dy, dt \wedge dz, dx \wedge dy, dx \wedge dz, dy \wedge dz\}$.
The basis of the 4-dimensional Vector Space $\bigwedge_3(\mathbb{R}^4)$ is $\{dt \wedge dx \wedge dy, dt \wedge dx \wedge dz, dt \wedge dy \wedge dz, dx \wedge dy \wedge dz\}$.
The basis of the 1-dimensional Vector Space $\bigwedge_4(\mathbb{R}^4)$ is $\{dt \wedge dx \wedge dy \wedge dz\}$.
Let $\Phi = \Phi_{01}\,dt \wedge dx + \Phi_{12}\,dx \wedge dy + \Phi_{13}\,dx \wedge dz \in \bigwedge_2(\mathbb{R}^4)$ and $\Psi = \Psi_0\,dt + \Psi_2\,dy \in \bigwedge_1(\mathbb{R}^4)$; then $\Phi \wedge \Psi \in \bigwedge_3(\mathbb{R}^4)$. The wedge product of two elements of $\bigwedge_2(\mathbb{R}^4)$ lies in $\bigwedge_4(\mathbb{R}^4)$ and is a multiple of $dt \wedge dx \wedge dy \wedge dz$, with coefficient built from products of components such as $\Phi_{01}\Psi_{23}$.
Let $a, b, c, d \in \mathbb{R}^4$ and expand these vectors in their contravariant components with respect to the basis $\{\tfrac{\partial}{\partial t}, \tfrac{\partial}{\partial x}, \tfrac{\partial}{\partial y}, \tfrac{\partial}{\partial z}\}$. There holds that
$$(dt \wedge dz)(a, b) = a^0b^3 - b^0a^3.$$
This number is the oriented area of the projection on the $t,z$-plane of the parallelogram spanned by $a$ and $b$. Likewise
$$(dt \wedge dy \wedge dz)(a, b, c) = \det\begin{pmatrix} a^0 & b^0 & c^0 \\ a^2 & b^2 & c^2 \\ a^3 & b^3 & c^3 \end{pmatrix}.$$
Comment(s): 2.10.5
Through the choice of a $\mu \in \bigwedge_n(V)$ an oriented volume is introduced on $V$. The number $\mu(v_1, \ldots, v_n)$ gives the volume of the parallelepiped spanned by the vectors $v_1, \ldots, v_n$. Because the Vector Space $\bigwedge_n(V)$ is one-dimensional, every two choices of $\mu$ differ by some multiplicative constant. If there is defined an inner product on $V$, it is customary to choose $\mu$ such that for orthonormal bases $\{e_i\}$ on $V$ holds that $\mu(e_1, \ldots, e_n) = \pm 1$. A basis with the plus sign (minus sign) is called positively (negatively) oriented.

Comment(s): 2.11.1
For fixed $a_1, \ldots, a_{n-k} \in V$ the map $(x_1, \ldots, x_k) \mapsto \mu(a_1, \ldots, a_{n-k}, x_1, \ldots, x_k)$ is an antisymmetric $k$-tensor. There holds
$$\mu(a_1, \ldots, a_{n-k}, x_1, \ldots, x_k) = \det\begin{pmatrix} a^1_1 & \cdots & a^1_{(n-k)} & x^1_1 & \cdots & x^1_k \\ \vdots & & \vdots & \vdots & & \vdots \\ a^n_1 & \cdots & a^n_{(n-k)} & x^n_1 & \cdots & x^n_k \end{pmatrix},$$
because of the representation given in Clarification 2.4, where $a^i_j = \;<\hat e^i, a_j>$, for $i = 1, \ldots, n$ and $j = 1, \ldots, (n-k)$, and $x^i_j = \;<\hat e^i, x_j>$, for $i = 1, \ldots, n$ and $j = 1, \ldots, k$. Developing this determinant along the first $(n - k)$ columns, it becomes clear that $\mu(a_1, \ldots, a_{n-k}, \cdot, \ldots, \cdot)$ is a linear combination of $k \times k$-determinants
$$\det\begin{pmatrix} x^{i_1}_1 & \cdots & x^{i_1}_k \\ \vdots & & \vdots \\ x^{i_k}_1 & \cdots & x^{i_k}_k \end{pmatrix}, \qquad \text{with } 1 \leq i_1 < i_2 < \cdots < i_k \leq n.$$
This result means that the antisymmetric $k$-tensor $\mu(a_1, \ldots, a_{n-k}, \cdot, \ldots, \cdot)$ is a linear combination of the $k$-tensors $\hat e^{i_1} \wedge \cdots \wedge \hat e^{i_k}$. This result was to be expected because of the fact that $\{\hat e^{i_1} \wedge \cdots \wedge \hat e^{i_k}\}$ is a basis of $\bigwedge_k(V)$.
Example(s): 2.11.1
Consider $\mathbb{R}^3$ with basis $\{e_1, e_2, e_3\}$, given by $e_1 = (1, 0, 0)^T$, $e_2 = (0, 1, 0)^T$ and $e_3 = (0, 0, 1)^T$. Define the volume on $V$ by $\mu = \hat e^1 \wedge \hat e^2 \wedge \hat e^3$. Then holds that $\mu(e_1, e_2, e_3) = 1$.
Let $a, b \in \mathbb{R}^3$; then $\mu(a, b, \cdot) \in \bigwedge_1(\mathbb{R}^3)$ and there holds that
$$\mu(a, b, \cdot)(x) = \det\begin{pmatrix} a^1 & b^1 & x^1 \\ a^2 & b^2 & x^2 \\ a^3 & b^3 & x^3 \end{pmatrix} = (a^2b^3 - a^3b^2)\,x^1 + (a^3b^1 - a^1b^3)\,x^2 + (a^1b^2 - a^2b^1)\,x^3,$$
such that
$$\mu(a, b, \cdot) = (a^2b^3 - a^3b^2)\,\hat e^1 + (a^3b^1 - a^1b^3)\,\hat e^2 + (a^1b^2 - a^2b^1)\,\hat e^3.$$
In addition, $\mu(a, \cdot, \cdot) \in \bigwedge_2(\mathbb{R}^3)$ with
$$\mu(a, \cdot, \cdot)(x, y) = \det\begin{pmatrix} a^1 & x^1 & y^1 \\ a^2 & x^2 & y^2 \\ a^3 & x^3 & y^3 \end{pmatrix} = a^1\det\begin{pmatrix} x^2 & y^2 \\ x^3 & y^3 \end{pmatrix} + a^2\det\begin{pmatrix} x^3 & y^3 \\ x^1 & y^1 \end{pmatrix} + a^3\det\begin{pmatrix} x^1 & y^1 \\ x^2 & y^2 \end{pmatrix},$$
or
$$\mu(a, \cdot, \cdot) = a^1\,\hat e^2 \wedge \hat e^3 + a^2\,\hat e^3 \wedge \hat e^1 + a^3\,\hat e^1 \wedge \hat e^2.$$

Notice(s): 2.11.1
If for the basis $\{e_i\}$ of $V$ holds that $\mu(e_1, \ldots, e_n) = 1$, then holds that
$$\mu = \hat e^1 \wedge \cdots \wedge \hat e^n.$$
Moreover, for every $k \in \{1, \ldots, (n - 1)\}$,
$$\mu(e_1, \ldots, e_k, \cdot, \ldots, \cdot) = \hat e^{(k+1)} \wedge \cdots \wedge \hat e^n,$$
and more generally
$$\mu(e_{i_1}, \ldots, e_{i_k}, \cdot, \ldots, \cdot) = (-1)^{\sigma}\,\hat e^{j_1} \wedge \cdots \wedge \hat e^{j_{(n-k)}}.$$
The $j_1, \ldots, j_{(n-k)}$ are the indices which are left over, and $\sigma$ is the number of permutations needed to get the indices $i_1, \ldots, i_k, j_1, \ldots, j_{(n-k)}$ in their natural order $1, 2, \ldots, n$.
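Example 2.11.1 can be mirrored numerically. In the sketch below (my addition), filling two fixed vectors into the volume form $\mu = \hat e^1 \wedge \hat e^2 \wedge \hat e^3$ leaves a covector whose components are exactly those of the cross product.

```python
import numpy as np

def mu_ab(a, b):
    """Covariant components of mu(a, b, .) for mu = e^1 ^ e^2 ^ e^3 on R^3."""
    return np.array([np.linalg.det(np.column_stack([a, b, e]))
                     for e in np.eye(3)])          # mu(a, b, e_k) for k = 1, 2, 3

a, b = np.array([1.0, 2.0, 0.0]), np.array([0.0, 1.0, 3.0])
assert np.allclose(mu_ab(a, b), np.cross(a, b))
```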
Clarification(s): 2.12.1
The starting point of an inner product here means that the inner product is symmetric, see Def. 2.5.1 i. In this paragraph that is of importance.

Comment(s): 2.12.1
With the help of the inner product a bijection can be made between $V$ and $V^*$. This bijection is denoted by $\mathcal{G}$, see Theorem 2.5.1. For every $a, b, x, y \in V$ there holds that
$$(\mathcal{G}a \wedge \mathcal{G}b)(x, y) = \hat a \wedge \hat b\,(x, y) = \det\begin{pmatrix} (a, x) & (a, y) \\ (b, x) & (b, y) \end{pmatrix}$$
and for $a_1, \ldots, a_k, x_1, \ldots, x_k \in V$ there holds that
$$(\mathcal{G}a_1 \wedge \cdots \wedge \mathcal{G}a_k)(x_1, \ldots, x_k) = \hat a_1 \wedge \cdots \wedge \hat a_k\,(x_1, \ldots, x_k) = \det\begin{pmatrix} (a_1, x_1) & \cdots & (a_1, x_k) \\ \vdots & & \vdots \\ (a_k, x_1) & \cdots & (a_k, x_k) \end{pmatrix}.$$
Because of the fact that $\binom{n}{k} = \binom{n}{n-k}$, there holds that $\dim\bigwedge_k(V) = \dim\bigwedge_{(n-k)}(V)$. Through the choice of the inner product and the volume it is apparently possible to define an isomorphism between $\bigwedge_k(V)$ and $\bigwedge_{(n-k)}(V)$.

Example(s): 2.12.1
Consider $\lambda \in \bigwedge_0(V) = \mathbb{R}$; then $\ast\lambda = \lambda\,\mu \in \bigwedge_n(V)$.
Consider $\mathbb{R}^3$ with the standard inner product and volume; then holds that $\ast(\hat e^1 \wedge \hat e^2) = \hat e^3$.

Notice(s): 2.12.1
$$\ast(\hat e^{i_1} \wedge \cdots \wedge \hat e^{i_k}) = (-1)^{(r+\sigma)}\,\hat e^{j_1} \wedge \cdots \wedge \hat e^{j_{(n-k)}},$$
with $r$ the number of negative values in $\{(e_{i_1}, e_{i_1}), \ldots, (e_{i_k}, e_{i_k})\}$, see (2.2), and $\sigma$ the number of permutations needed to get the indices $i_1, \ldots, i_k, j_1, \ldots, j_{(n-k)}$ in their natural order $1, 2, \ldots, n$.
Example(s): 2.12.2
Let $\{\tfrac{\partial}{\partial x}, \tfrac{\partial}{\partial y}\}$ be the standard basis of $\mathbb{R}^2$ and denote the corresponding dual basis by $\{dx, dy\}$ (the same notation as used in Example 2.10.2). Notice that the standard basis is orthonormal. Define the inner product by $\mathcal{G} = dx \otimes dx + dy \otimes dy$ and the oriented volume by $\mu = dx \wedge dy$.
Let $\lambda \in \bigwedge_0(\mathbb{R}^2)$; then $\ast\lambda = \lambda\,dx \wedge dy \in \bigwedge_2(\mathbb{R}^2)$.
Let $\hat\alpha = \alpha_1\,dx + \alpha_2\,dy \in \bigwedge_1(\mathbb{R}^2)$; then
$$\ast\hat\alpha = \ast(\alpha_1\,dx + \alpha_2\,dy) = \alpha_1\ast dx + \alpha_2\ast dy = \alpha_1\,dy - \alpha_2\,dx \in \bigwedge_1(\mathbb{R}^2).$$
Let $\Phi = \Phi_{12}\,dx \wedge dy \in \bigwedge_2(\mathbb{R}^2)$; then
$$\ast\Phi = \ast(\Phi_{12}\,dx \wedge dy) = \Phi_{12}\ast(dx \wedge dy) = \Phi_{12} \in \bigwedge_0(\mathbb{R}^2).$$

Example(s): 2.12.3
Consider $\mathbb{R}^3$. The notations used are the same as in Example 2.10.3. Define the inner product by
$$\mathcal{G} = dx \otimes dx + dy \otimes dy + dz \otimes dz$$
and the oriented volume by $\mu = dx \wedge dy \wedge dz$. There holds that
$$\ast 1 = dx \wedge dy \wedge dz \qquad\qquad \ast(dx \wedge dy) = dz$$
$$\ast dx = dy \wedge dz \qquad\qquad \ast(dx \wedge dz) = -dy$$
$$\ast dy = -dx \wedge dz \qquad\qquad \ast(dy \wedge dz) = dx$$
$$\ast dz = dx \wedge dy \qquad\qquad \ast(dx \wedge dy \wedge dz) = 1$$

For $\mathbb{R}^4$ with the Lorentz inner product of signature $(+, -, -, -)$ and the oriented volume $\mu = dt \wedge dx \wedge dy \wedge dz$ (notation as in Example 2.10.4), the rule of Notice 2.12.1 gives
$$\ast dt = dx \wedge dy \wedge dz \qquad\qquad \ast(dt \wedge dx) = -dy \wedge dz$$
$$\ast dx = dt \wedge dy \wedge dz \qquad\qquad \ast(dt \wedge dy) = dx \wedge dz$$
$$\ast dy = -dt \wedge dx \wedge dz \qquad\qquad \ast(dt \wedge dz) = -dx \wedge dy$$
$$\ast dz = dt \wedge dx \wedge dy \qquad\qquad \ast(dx \wedge dy) = dt \wedge dz$$
$$\ast(dt \wedge dx \wedge dy) = dz \qquad\qquad \ast(dx \wedge dz) = -dt \wedge dy$$
$$\ast(dt \wedge dx \wedge dz) = -dy \qquad\qquad \ast(dy \wedge dz) = dt \wedge dx$$
$$\ast(dt \wedge dy \wedge dz) = dx \qquad\qquad \ast(dt \wedge dx \wedge dy \wedge dz) = -1$$
$$\ast(dx \wedge dy \wedge dz) = dt$$
Note that the inner product has signature $(+, -, -, -)$ or $(-, +, +, +)$. Which signature is used is a matter of convention, but today the signature $(+, -, -, -)$ is used very often and is becoming standard.
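Referring back to the Euclidean $\mathbb{R}^3$ table of Example 2.12.3: the Hodge star of a 1-form can be written with the Levi-Civita symbol as $(\ast\alpha)_{jk} = \epsilon_{ijk}\alpha_i$. The sketch below is my addition and only checks two of the table entries numerically.

```python
import numpy as np

# Levi-Civita symbol on three indices: eps[i,j,k] = (i-j)(j-k)(k-i)/2 for i,j,k in {0,1,2}
eps = np.fromfunction(lambda i, j, k: (i - j) * (j - k) * (k - i) / 2, (3, 3, 3))

def star_one_form(alpha):
    """Components (antisymmetric matrix) of *(alpha_i dx^i) in Euclidean R^3."""
    return np.einsum('ijk,i->jk', eps, alpha)

# *dx = dy ^ dz : (2,3)-component equals +1 (0-based indices (1,2))
B = star_one_form(np.array([1.0, 0.0, 0.0]))
assert B[1, 2] == 1.0 and B[2, 1] == -1.0

# *dy = -dx ^ dz : (1,3)-component equals -1 (0-based indices (0,2))
B = star_one_form(np.array([0.0, 1.0, 0.0]))
assert B[0, 2] == -1.0
```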
2.13 Exercises

1. Let $V$ be an $n$-dimensional vector space over $\mathbb{R}$ with three bases $\{e_i\}$, $\{e_{i'}\}$ and $\{e_{i''}\}$. Show that
$$A^{j''}_{j'}A^{j'}_j = A^{j''}_j.$$
2. Let $V$ be a symplectic vector space. Prove that the dimension of $V$ is even and that axiom (ii) of the inner product can never be satisfied.
3.
4.
5. Prove the identification of $V$ with $V^{**}$. Indication: define a suitable linear transformation from $V$ to $V^{**}$ and prove that this is an isomorphism.

2.14 RRvH: Identification $V$ and $V^{**}$

Identification of $V$ with $V^{**}$.
Let the linear transformation $\Phi : V \to V^{**}$ be defined by $(\Phi(x))(\omega) = \omega(x)$; then $\Phi(x) \in (V^*)^* = V^{**}$. If $(\Phi(x))(\omega) = 0$ for every $\omega \in V^*$, then $\omega(x) = 0$ for every $\omega \in V^*$ and it follows that $x = 0$. So $\Phi$ is injective; together with $\dim V = \dim V^* = \dim V^{**} = n < \infty$ this gives that $\Phi$ is a bijective map between $V$ and $V^{**}$.
Nowhere is a "structure" used: nowhere are coordinates used, or something like an inner product. $\Phi$ is called a canonical or natural isomorphism.

Identification of $V$ with $V^*$.
The sets $V$ and $V^*$ contain completely different objects. Some "structure" is needed to identify $V$ with $V^*$.

Definition 2.14.1 A bilinear function $\beta : V \times V \to \mathbb{R}$ is called non-degenerate if: ($\beta(x, y) = 0$ for every $x \in V$) implies $y = 0$, and ($\beta(x, y) = 0$ for every $y \in V$) implies $x = 0$.
Suppose V is the space of real polynomials in the variable x with degree less than or equal 2. A basis of this space is for instance {1, x, x2 },
so e1 = 1, e2 = x and e3 = x2 . A basis of the dual space is defined by the covectors {b
e 1, b
e 2, b
e 3 } with b
e i (e j ) = ij , with 1 i 3
and 1 j 3. Covectors can be computed, for instance by b
e i (p) =
i1 p(1) + i1 p(0) + i1 p(1). After some calculations, the result is that
1
1
1
1
b
e 3 (p) = p(1) 1 p(0) + p(1).
e 1 (p) = 1 p(0),b
e 2 (p) = p(1) + p(1), b
2
2
2
2
b
The covectors are linear functions. An arbitrary covector f V is given by
fb = 1b
e 1 + 2b
e 2 + 3b
e 3 and fb(1 e1 + 2 e2 + 3 e3 ) = 1 1 + 2 2 + 3 3 .
But if the basis of V changes, the covectors also change. If for instance
the basis of V is {1, 1 + x, x + x2 } than the basis of covectors becomes
1
1
b
e 1 (p) = 1 p(1),b
e 2 (p) = 1 p(1) + 1 p(0), b
e 3 (p) = p(1) 1 p(0) + p(1).
2
2
75
, and .
First of all < , >: V V R, the Kronecker tensor, see Section 2.3:
<b
u, x > = b
u(x).
If x is fixed, then it becomes a linear function on V , so a contravariant 1-tensor,
if b
u is fixed, then a linear function on V, so a covariant 1-tensor.
With this Kronecker tensor are built all other kind of tensors.
The
r
s -tensors,
b , ,b
a b d b
p b
q b
u (b
v, w
z, f, g, , k) =
| {z } | {z }
r covectors
<b
v, a > < b
z, d >< b
p, f > < b
u, k >
s vectors
Tsr (V),
The k-tensor, see Section 2.10.4, can be seen as a construction where the rs -tensors
are used,
b
< b
f
,
x
>
<
f
,
x
>
1
1
1
k
V
.
.
b
b
k (V),
..
..
f1 fk (x1 , , xk ) = det
b
b
< fk , x1 > < fk , xk >
with b
f1 , ,b
fk V .
This tensor is antisymmetric.
Another k-tensor, see Section 2.10.6 can also be seen as a construction where the
r
s -tensors are used,
b
< b
f
,
x
>
<
f
,
x
>
1
1
1
k
W
.
.
b
b
k (V),
.
.
f1 fk (x1 , , xk ) = perm
.
.
b
b
< fk , x1 > < fk , xk >
with b
f1 , ,b
fk V . For the calculation of perm, see Comment 2.10.3. Another
notation for this tensor is b
f1 b
fk ( = b
f1 b
fk ).
This tensor is symmetric.
76
a1
a1
1
a1
a1(nk)
..
.
n
a(nk)
x11
..
.
xn1
x1k
.. ,
.
xnk
V
V
The Hodge transformation : k (nk) (V) is defined by
(
k=0:
1 = , followed by linear expansion,
a1
77
V V R
V V R
(v, w) 7 v0 M w
( f, w) 7 f M w
Mij = M(bi , b j )
Mij = M(i , b j )
M V V
M V V
M = Ms t s t
M = Mst bs t
Mst = gs u Mu t
G1 M
V V R
V V R
(v, f ) 7 v0 M f 0
( f, g) 7 f M g0
Mi = M(bi , j )
Mij = M(i , j )
M V V
M V V
M = Mst s bt
M = Ms t bs bt
Mst = Ms u gu t
Ms t = gs u Mu v gv t
M G1
G1 M G1
78
The map f is also called a chart map. The inverse map f : U is called a
parametrization of . The variables x j are functions of the variables ui . If there is
notated f = (g1 , , gn ) then holds that x j = g j (ui ). The chart map and the parametrization are often not to describe by simple functions. The inverse function theorem
tells that f is differentiable in every point of U and also that
fi
x j
(x , , x )
1
g j
uk
(u1 , , un ) = ik ,
"
fi
"
gl
and
are the
x j
uk
inverse of each other. The curves, which are described by the equations f i (x1 , , xn ) =
C, with C a constant, are called curvilinear coordinates belonging to the coordinate
curves.
79
Example(s): 3.1.1
x j = (L1 )k uk (L1 )k bk .
arccos
y 0, x 6= 0
x2 +y2
(x, y) =
y 0, x 6= 0.
2 arccos 2 2
x +y
The origins of all the Tangent Spaces TX (Rn ), with X , form together against the
open subset .
X
Let xk = gk (u1 , , un ) be a parametrization of . The vectors ci = u
i are tangent to
the coordinate curves in X. So there is formed, on a natural way, a with the curvilinear
80
0
ij0
xi k xi k0 k
=
(x ) i0 (x (x )).
xi
x
X
} or shorter with
xi
0
}. If there is a transition to other coordinates xi there holds that
i
x
xi
=
,
0
0
xi
xi xi
such that the transition matrix of the basis { i } to the basis { i0 } is equal to the matrix
x
x
"
#
i
x
. Consequently the transition matrix of the basis { i0 } to the basis { i } is given
0
x
x
xi
"
0 #
xi
by the matrix
.
xi
In Chapter 2 is still spoken about a general Vector Space V. In the remaining lecture
notes the Tangent Space TX (Rn ) plays at every point X Rn the rule of this general
Vector Space V. At every point X Rn is added besides the Tangent Space TX (Rn ) also
(Rn ) = (T (Rn )) and more general the Vector Space T r (Rn )
the Cotangent Space TX
X
Xs
of the tensors which are covariant of the order s and contravariant of the order r. But
it also possible to add subspaces such as the spaces of the symmetric or antisymmetric
W
V
tensors, notated by X (Rn ), respectively X (Rn ) at X.
(Rn ). This dual basis, associTo every basis of TX (Rn ) belongs also a dual basis of TX
i
i0
ated with the coordinates xi , is notated by {dx
# dual basis {dx } belonging to the
" }. 0The
0
xi
coordinates xi are founded with the matrix
. There holds that
xi
0
xi
dx =
dxi ,
xi
i0
which follows out of Lemma 2.6.1. Be aware of the fact that the recipoke and dual bases
are in accordance with each other after the choice of an inner product, see the conlusion
81
at the end of Section 2.6 . The result agrees with the folklore of the
infinitesimal calculus!
Definition 3.2.2 A vector field, contravariant vector field or 10 -tensor field a on
S
Rn is a map of Rn to XRn TX (Rn ). At every point X Rn there is added a vector
a(X) element out of the corresponding Tangent Space TX (Rn ).
.
xi
xi k k0 i k k0
(x (x )) a (x (x )),
a (x ) =
xi
i0
k0
0
ai
xi i
=
a.
xi
Definition 3.2.3 A covector field, covariant vector field or 01 -tensor field on Rn
S
(Rn ). At every point X Rn there is added an element
is a map of Rn to XRn TX
(Rn ) of T (Rn ).
(X) out of the dual space TX
X
There belongs to a covector field on Rn , n functions i on Rn , such that (X) =
0
0
0
0
i (xk )dxi . In other (curvilinear) coordinates xi is written (xk (xk )) = i0 (xk )dxi and
there holds that
0
i0 (xk ) =
which is briefly written as i0 =
xi k0
k k0
(x
)
(x
(x )),
i
0
xi
xi
0 i .
xi
S
Definition 3.2.4 A rs -tensor field on Rn is a map of Rn to XRn TX rs (Rn ). At
every point X Rn there is added a rs -tensor (X) an element out of TX rs (Rn ).
82
There belongs to a
r
s -tensor
i i
i i
(X) = j1 jr (xk )
s
dx j1 dx js .
xi1
xir
0
(x
(x
))
(x
)
0
0 (x ) j j (x (x )),
s
1
s
xi1
xir
1
x js
x j1
which is briefly written as
0
i01 i0r
j0 j0
1
xi1 k
xir k x j1
x js i1 ir k k0
=
(x (x )).
(x
(x
0
0
xi1
xir
x js j1 js
x j1
(3.2)
i0 i0 (xl ) =
1
i i
1i1 << ik n
(3.3)
with
i i
J i10 ik0 =
1
(xi1 , , xik )
0
i0
(xi1 , , x k )
.
83
xi1
ik
dx dx =
j0
1
xik
j0
k
j0
dx j1 dx k .
(3.4)
The terms in the summation of the right part of 3.4 are not equal to zero if the indices
j0p for p = 1, , k are not equal. Choose a fixed, ordered collection of indices i01 , , i0k
with 1 i01 < < i0k n. Choose now the terms in the summation of the right side of
3.4 such that the unordered collection j01 , , j0k is exactly the collection i01 , , i0k . Note
that there are k! possibilities. To every unordered collection j01 , , j0k there is exactly
one Sk such that j0p = i0(p) , for p = 1, , k. Out all of this follows
X xi1
dxi1 dxik =
1i0 << i0 n Sk
1
i0
xik
j0
x (1)
x (k)
i0
i0
i0
i0
dx (1) dx (k) .
i0
To put the term dx (1) dx (k) into the order dxi1 dx k has to be corrected
with a factor sgn(), the factor obtained by the order of the permutation. So
X
X
ik
i1
0
x
x
i0
dxi1 dxik =
sgn() i0 j0 dxi1 dx k .
x (1)
x (k)
1i0 << i0 n Sk
1
i i
In the term between the brackets we recognize the determinant J i10 ik0 , such that
1
dxi1 dxik =
1i0 << i0 n
1
i i
i0
k
With all of this follows that the representation in 3.1 can be written as
(X) =
X
1i1 << ik n
X
1i0 << i0 n
1
i1 ik (xl )
i i
1i0 << i0 n
1
1i1 << ik n
i0
0
0
i i
i0
J i10 ik0 (xl ) i1 ik (X) dxi1 dx k .
1 k
Compare this with 3.2 and immediately follows the relation 3.3.
All the given definitions of the tensor fields, in this section, are such that the tensor
fields are defined as maps on Rn . Often are tensor fields not defined on the whole Rn
but just on an open subset of it. The same calculation rules remain valid of course.
84
We assume that the elements of Fsr are smooth enough. The components of an element
i i
F Fsr we note by F j1 jr with 1 ik n, 1 jl n, 1 k r and 1 l s. For
1
i01 i0r
G j0 j0
1
xir k k0 x j1 k0
xi1 k k0
x js k0 i1 ir k k0
(x
(x
))
=
(x
(x
))
(x
)
0
0 (x ) F j j (x (x )).
s
1
xi1
xir
x js
x j1
This means that if, for some curvilinear coordinate system on , a nr+s number of func
tions on f () are given, that there exists just one rs -tensor field on .
It has to be clear that the components of a tensor field out of definition 3.2.4 are the
same as the components of a tensor field out of definition 3.3.1, both with respect to the
same curvilinear coordinate system.
The alternative definition is important, because one wants to do algebraic and analytical operations, for instance differentation, without to be linked to a fixed chosen coordinate system. If after these calculations a set of functons is obtained, it is the question
if these functions are the components of a tensor field. That is the case if they satisfy
the transformation rules. Sometimes they are already satisfied if there is satisfied to
these transformation rules inside a fixed chosen class ( a preferred class) of curvilinear coordinate systems. An example of such a preferred class is the class of the affine
coordinate transformations. This class is described by
0
xi = bi + Lii xi ,
(3.5)
h 0i
h 0i
with bi Rn and Lii Rnn invertible. Coordinates which according 3.5 are associated with the cartesian coordinates are called affine coordinates. Even more important
are certain subgroups of it:
h 0i
i.
Lii orthogonal: Euclidean invariance, the "Principle of Objectivity in the
continuum
mechanics.
h 0i
i
L Lorentz: Lorentz invariance in the special theory of relativity.
ii.
h i0 i
iii. Lii symplectic: Linear canonical transformations in the classical mechanics.
If inside the preferred class the transformations are valid, than are the components of
the obtained tensor field outside the preferred class. An explicit formula is most of the
time not given or difficult to obtain.
All the treatments done in the previous chapter can also be done with the tensor fields.
They can be done pointswise for every X on the spaces TXsr (Rn ).
85
xi k0
xi k k0 xi k0
xi k k0 x j k0 i k k0
=
(x
)
=
(x
)
=
(x
(x
))
(x (x )) j0 (x ) j (x (x ))
0
0
xi
xi
x j
x j
x
there is indeed defined a 11 -tensor field.
0
0
ij0 (xk )
0
2 -tensor
field on Rn de-
i
j
k
g = gij dx dx , with gij (x ) =
,
.
xi x j X
(3.6)
86
k
l
= g( i ,
= gkl ki lj = gij .
) = (gkl dx dx )
,
,
xi x j X
x x j
xi x j
There is used that dxp (
p
) = s , see also Definition 2.8.12.
s
x
A curvilinear coordinate system {xi } is called an orthogonal curvilinear coordinate system, if at every point X, [gij (xk )] is a diagonal matrix. This diagonal matrix is pointwise
to tranfer into a diagonal matrix with only the numbers 1 on the diagonal. Generally
this is not possible for all points simultaneously, because this would impose too many
constraints to the curvilinear coordinates. If the chosen inner product is positive, then
there can be entered functions hi such that gij = ij h2i . These functions are called
scale factors.
1
and hi dxi (not summate!) are
hi xi
=
gii = 1 (not summate!)
,
hi xi =
hi xi hi xi X
h2i
and
q
q
i
i
i
hi dx = (hi dx , hi dx )X = h2i gii = 1 (not summate!).
)
n
o
1
i are orthonormal bases to the corresponding
The bases
and
h
dx
i
hi xi
tangent space and its dual.
(
87
dx1 dxn =
0
(x1 , , xn )
10
n0
dx
dx
0
0
(x1 , , xn )
such that in general (dx1 dxn )(v1 , , vn ) will give another volume than
(dx1 dxn )(v1 , , vn ). If we restrict ourselves to affine coordinate transformations
(x1 , , xn )
0
0
0 i
i
i
i
x = b + Li x with det L = 1, than holds
= 1. In such a case, the
0
0
(x1 , , xn )
volume is called invariant under the given coordinate transformation.
0
0
0
A density is a antisymmetric Tensor Field of the form 0 (xk )dx1 dxn with a
function 0 , which satisfies
0
(x1 , , xn )
0 .
0
(x1 , , xn )
x
p
x x
y
!
2
2
x + y
r
cos
r sin
=
,
=
y
y y
sin
r
cos
x
p
2
2
x + y
r
and out of this result follows easily that
r r
p x
cos
sin
x y
x2 + y2
= sin
cos
r
r
2
x + y2
x y
x2 + y2
.
x
x2 + y2
With the use of these transition matrices we find the following relations between the
bases and dual bases
88
= p
+ p
x2 + y2 x
x2 + y2 y
r
= y
+x
x
y
y
x
dx + p
dy
dr = p
x2 + y2
x2 + y2
d = x2 + y2 dx + x2 + y2 dy
sin
= cos
r
r
cos
y = sin r + r
(
dx = cos dr r sin d
dy = sin dr + r cos d
With the help of these relations are tensor fields, given in cartesian coordinates, to
rewrite in other coordinates, for instance polar coordinates. In polar coordinates is the
vectorfield
y
x
+ 2
2
2
2
x + y x
x + y y
1
, the 2-form (x2 + y2 ) dx dy by r3 dr d and the volume form dx dy
given by
r r
by r dr . The fundamental tensor field which belongs to the natural inner product
on R2 can be described in polar coordinates by
dx dx + dy dy = dr dr + r2 d d.
(3.7)
b2 1
T = 2
1 2
+ 1+ 2 2
+ 2
,
r
z
z
b a2
r r
r r
where a and b, with a < b, are the radii of the inside and outside wall of the tube.
Further is some material constant.
89
= sin sin
sin
cos
cos
sin
cos
sin
0
z
xz
x
p
y
p
x2 + y2 + z2
x2 + y2
yz
y
x
.
p
= p 2
2
2
2
2
x + y
x + y + z
z
p
2
2
x + y
0
x2 + y2 + z2
cos sin
x y z
cos cos
x y z
sin
sin
x y z
x2 + y2 + z2
xz
=
(x2 + y2 + z2 ) px2 + y2
(x + y2 )
sin sin
sin cos
cos
sin
y
p
x2 + y2 + z2
yz
p
(x2 + y2 + z2 ) x2 + y2
x
2
(x + y2 )
cos
sin
2
2
2
x + y + z
2
2
x + y
.
2
(x + y2 + z2 )
With the help of these two transition matrices the relations between bases and dual
bases can be shown. Tensor Fields expressed in cartesian coordinates can be rewritten in
spherical coordinates. So is the volume dx dy dz rewritten in spherical coordinates
equal to 2 sin d d d. The electrical field due to a point charge in the originis
given by cartesian coordinates by
3
!
2
2
2
2
x
+ y
+z
(x + y + z )
x
y
z
90
1
. Further transforms the fun2
damental tensor field, corresponding to the natural inner product on R3 , as follows
dx dx + dy dy + dz dz = d d + 2 d d + 2 sin2 d d.
The state of stress of a hollw ball under an internal pressure p is given by the contravariant 2-tensor field
!
!
!
!
a3 p
1
1
b3
b3
b3
T = 3
1 3
,
+ 1+
+ 1+
b a3
2 3 2
2 3 2 sin2
where a and b, with a < b, are the radii of the inside and outside wall of the ball.
f
xi
xi f
0
xi xi
Definition 3.6.1 The covariant tensor field d f = i f dxi is called the gradient
field of the scalar field f .
Let a be a vector field and let ai be the components of this vector field with respect to
the curvilinear coordinates xi . The functions ai j f form the components of a 11 tensor
field.
Definition 3.6.2 The contraction ai i f is called the directional derivative of f in
the direction a, notation
La f = < d f, a > = ai i f.
If there is defined an inner product on Rn , than there can be formed out of the gradient
field, the contravariant vectorfield
91
G1 d f = gki i f
.
xk
1 f 1
1 f 1
+ +
.
1
1
h1 x h1 x
hn xn hn xn
(Lv w) j = wi i v j vi i0 w j
0
0
j j
j
i0 i k
i0 i k
= Ai w Ai0 k A j v Ai v Ai0 k A j w j
0
0
j j
j
k
k
= w k A j v v k A j w j
j0
j0
j0
j0
j
k
j
j
k
j
= w v k A j + A j k v v w k A j + A j k w
j0
j0
= A j wk k v j vk k w j + wk v j vk w j k A j
j0
j0
j0
= A j (Lv w) j + w j vk j Ak k A j
j0
= A j (Lv w) j .
It seems that the functions (Lv w) j are the components of a contravariant vector field.
The vector field Lv w is called the Lie product of v and w. With this product the space of
vector fields forms a Lie algebra. For a nice geometrical interpretation of the Lie product
we refer to (Abraham et al., 2001) ,Manifolds, or (Misner et al., 1973) ,Gravitation.
92
i
j
)
defined by
k
(
j k X =
i
j
i X,
j k
k j
(
)
i
i
= < dxi , j0 k0 X >
0
0
j k
0
j
= < Aii dxi , A j0 j Akk0 k X >
0
j
j
= Aii < dxi , A j0 Akk0 j k X + A j0 j Akk0 k X >
0
0
j
j
= Aii A j0 Akk0 < dxi , j k X > + Aii A j0 j Akk0 < dxi , k X >
i
i0 j
k
i0 j
k
+
A
A
A
ik ,
= Ai A j0 Ak0
0
j
0
i
k
j
j k
what means that
i0
(
0
j
0
)
k
0
Aii
j
A j0
(
Akk0
i
j
+ Aii j0 Aik0 .
(3.8)
The term Aii j0 Aik0 is in general not equal to zero, so the Christoffel symbols are not the
components of a 12 -tensor field.
In the definition of the Christoffel symbols there is on no way used an inner product on
Rn . If there is defined a symmetric inner product on Rn , than the Christoffel symbols
are easy to calculate with the help of the fundamental tensor field. The Christoffel
symbols are than to express in the components of the fundamental vector field gij en
its inverse gkl . There holds, see also Lemma 3.6,
93
i g jk + j gki k gij = i j X, k X + j X, i k X +
j k X, i X + k X, j i X +
k i X, j X i X, k j X
= 2 k X, i j X .
The inner product can be written as the action of a covector on a vector. So
(
)
l
l
k X, i j X = gkl < dx , i j X > = gkl
,
i j
out of this follows the identity
(
i g jk + j gki k gij = 2 gkl
l
i
)
.
j
1
Multiply the obtained identity by gmk and then it turns out that
2
)
(
1
m
= gmk i g jk + j gki k gij .
i j
2
(3.9)
2
2
1 1 2
r and
=
=
1 2
2 1
2 r2 r
= r.
2 2
All the other Christoffel symbols, which belong to the polar coordinates, are equal to
zero.
94
Let a be a vector field on Rn and let {xi } be a curvilinear coordinate system on Rn . Write
0
a = ai i . Let there also be a second curvilinear coordinate system {xi } on Rn , then
x
holds
0
0
0
0
j
j
(3.10)
j0 ai = A j0 j Aii ai = A j0 Aii j ai + ai j0 Aii .
The second term in Formula 3.10 is in general not equal to zero, so the functions j ai
are not the components of a 11 -tensor field.
Definition 3.6.5 We define the n2 function 5 j ai by
(
)
i
i
i
5 ja = ja +
ak .
j k
(3.11)
1
1 -tensor
field.
Proof Because of the transformation rule of the Christoffel symbols, see Formula 3.8,
and of Formula 3.10 holds
0
0
0
0
k
i
5 j0 ai = j0 ai +
a
0
0
j k
0
0 j
i
0
0
0
j
= Aii A j0 j ai + j0 Aii ai + Aii A j0 Akk0
+ Aii j0 Aik0 Akl al .
j k
0
0
j
k l
+ 0 Ai0 ai + Ai0 0 Ai Ak0 al
A
a
= Aii A j0 j ai + Akk0
j i
j
i
k0
l
l
j k
k
j
i0
i
i0 k0
i
0
0
= Aii A j0 j ai +
a
+
A
a
+
A
A
A
al
j i
j
i
l
k0
j k
Think to the simple formula
0
0
0
0 = j0 kl = j0 Aik0 Akl = Aik0 j0 Akl + Akl j0 Aik0 ,
such that
95
0
0
0
0
0
j
5 j0 ai = Aii A j0 5 j ai + j0 Aii ai Aii Aik0 j0 Akl al
0
0
0
0
j
= Aii A j0 5 j ai + j0 Aii ai ik0 j0 Akl al
0
= Aii A j0 5 j ai .
Definition 3.6.6 The covariant derivative of a vector field a, notation 5a, is given
by the 11 -tensor field
5a = 5 j ai dx j
,
xi
Let be a covector field on Rn . It is easy to see that the functions j i are not the com
ponents of a 02 -tensor field. For covector fields we introduce therefore also a covariant
derivative.
Lemma 3.6.2 The n2 functions 5 j i defined by
(
)
k
5 j i = j i
.
j i k
form the components of
0
2 -tensor
field.
(3.12)
With the help of the covariant derivative of a vector field, there can be given a definition
of the divergence of a vector field.
Definition 3.6.8 The divergence of a vector field a is given by the scalar field 5i ai .
The functions ai are the components of a with respect to some arbitrary curvilinear
coordinate system.
96
Notice(s): 3.6.2 Because of the fact that the calculation of a covariant derivative
is a tensorial operation, it does not matter with respect of what coordinate system
the functions ai are calculated and subsequent to calculate the divergence of a. The
0
fact is that 5i0 ai = 5i ai .
With the help of covariant derivative, the gradient field and the fundamental tensor
field there can be given a definition of the Laplace operator
Definition 3.6.9
fined by
Notice(s): 3.6.3 Again the observation that because of the tensorial actions of
the various operations, it does not matter what coordinate system is chosen for the
calculations.
Later on we come back to the classical vector operations grad, div, rot and 4. They are
looked from some other point of view.
V
Hereby presents dxi1 dxik a basisvector of kX (Rn ). In Lemma 3.2.1 is described
how the functions i i transform if there is made a transition to other curvilinear
1
97
coordinates.
In this section we define a differentiation operator d, such that k-forms become
(k + 1)-forms, while n-forms become zero.
!
( f1 , , fr )
(1)
= 0.
xl (x1 , , xl1 , xl+1 , , xr+1 )
l
(3.14)
Proof We give a sketch of the proof. Call F = ( f1 , , fr )T . The l-th sommand of the
summation in the left part of Formula 3.14 is than, on a factor 1, to write as
!
F
F
F
F
det
, , l1 , l+1 , , r+1 =
x1
x
xl
x
x
2 F
F
F
F
l 1 , , l1 , l+1 , , r+1 +
x
x x
x
x
F
2 F
F
F
F F
2 F
F
+ 1 , , l1 l , l+1 , , r+1 + 1 , , l1 , l l+1 , , r+1 +
x
x x
x
x x x
x
x x
F
F
F
2 F
+ 1 , , l1 , l+1 , , l r+1 .
x
x
x
x x
In this way Formula 3.14 is to write as a summation of r (r + 1) terms, ( r (r + 1) is always
a even number!), in the form of pairs
!
F
2 F
F
det
, , k l , , r+1 ,
x1
x
x x
which cancel each other.
98
n
X
X
i i
1
k
r dxi1 dxik .
d =
dx
xr
r=1 1 i1 < < ik n
Note that a summand, where r is equal to one of the i j s, is equal to zero. The sum
formed by the terms, where r is not equal to one of the i j s, is obviously to write as
X
d =
(d) j j dx j1 dx jk+1
1
k+1
1 j1 < < jk+1 n
Note that the exterior derivative of a n-forms is indeed 0. At this moment, there is still
the question if d0 , this is the exterior derivative of with respect to the coordinates
0
{xi }, is the same as d.
Example(s): 3.6.1
d =
dx +
dy
x
y
a 1-form. Let be a covector field en write = 1 dx + 2 dy than is
!
2
1
d =
dx dy
x
y
a 2-form. Let be 2-form en write = 12 dx dy than is d = 0. Note that
in all the cases applying d twice always gives zero.
99
Example(s): 3.6.2
d =
dx +
dy +
dz
x
y
z
a 1-form. Let be a covector field en write = 1 dx + 2 dy + 3 dz than is
!
1
1
1
dx +
dy +
dz dx +
d =
x
y
z
!
!
3
2
2
2
3
3
dx +
dy +
dz dy +
dx +
dy +
dz dz =
x
y
z
x
y
z
!
!
!
3
3
1
1
2
2
dx dy +
dx dz +
dy dz
x
y
x
z
y
z
a 2-form. Let be a covector field en write = 12 dx dy + 13 dx dz +
23 dy dz than is
12
13
23
dz dx dy +
dy dx dz +
dx dy dz
z
y
x
!
23
13
12
+
dx dy dz
=
x
y
z
d =
Proof Let {xi } and {xi } be two coordinate systems. We prove the proposition for the
differential form
= dx1 dxk ,
where is an arbitrary function of the variables xi . The approach to prove the proposition for an arbitrary differential form of the form dxi1 dik is analog. The
proposition follows by taking linear combinations.
The exterior derivative of with respect to the variables xi is given by
n
X
r
dx dx1 dxk .
d =
xr
r=1
100
On the basis of Lemma 3.2.1, can be written, with respect to the coordinates xi , as
X
x1 , , xk
i0
i01
k .
dx
dx
=
0
0
i
i
1, , x k
0
0
x
1 i < < i n
1
1 , , xk
n
X
X
i0
i01
0
r0
k +
d =
(3.15)
i0
0
0 dx dx dx
r
i
0
x x 1 , , x k
0
0
r =1 1 i < < i n
1
n
X
0
r =1
1 i0 < < i0 n
1
k
0
xr
1
k
x , , x
r0
i0
i01
dx dx dx k .
i0
0
i
x 1, , x k
(3.16)
The first sum, see Formula 3.15, is to write as ( with the use of the index notation)
r0
1
k
0 dx dx dx =
r
x
0
xr xr l
dx dx1 dxk =
0
xr xl xr
r
dx dx1 dxk ,
xr
and that we recognize as d. The second sum, see Formula 3.16, is to write as
1 , , xk
X
X
x
r0
0
0
i
i
dx dx 1 dx k
0
0
0
r
i
i
0 0
0
0
0
0
0
x 1, , x k
1 j < < j
1
k+1
k+1
(3.17)
where the inner sum is a summation over all possible combinations r0 , i01 < < i0k ,
a collection of (k + 1) natural numbers, which coincides with the collection j01 < <
j0k+1 . The inner sum of Formula 3.17 can then be written as
k+1
X
x1 , , xk
j0l
j0
j0
j0
j01
l1 dx l+1 dx k+1 .
dx
dx
dx
0
0
0
0
0
j
j
j
j
x l x j1 , , x l1 , x l+1 , , x k+1
l=1
j0
j0
j0
j0
101
Proof Because of Theorem 3.6.1, it is enough to prove the proposition just for one
coordinate system {xi }. The same as in the foregoing theorem it is enough to prove the
proposition for the k-form
= dx1 dxk ,
with an arbitrary function of the variables xi . There is
n
n X
X
2
dxr dxl dx1 dxk .
d d =
l
r
x x
l=1 r=1
This summation exist out of n (n 1) terms, an even number of terms. These terms
become pairwise zero, because
2
2
=
and dxr dxl = dxl dxr .
xl xr
xr xl
Theorem 3.6.2 is the generalisation to Rn of the classical results
rot grad = 0
and
div rot = 0 in R3 .
n
X
p=1
Furthermore holds
n
X
d =
p dxp dx1 dxl+m
x
p=1
and
102
n
X
In this last expression it costs a factor (1)l to get dxp to the front of that expression.
Hereby is the theorem proven for the special case.
1
2
+
.
x
y
f
f
dx +
dy,
x
y
f
f
dx +
dy,
y
x
!
2 f
2 f
d df =
+
dx dy,
x2
y2
df =
2 f
2 f
d df =
+
.
x2
y2
This last result is the Laplace operator with respect to the natural inner product and
volume form on R2 .
103
x
y
x
z
y
z
!
!
!
1
2
3
2
3
1
d =
dx +
dy +
dz,
y
z
z
x
x
y
= 1 dy dz 2 dx dz + 3 dx dy,
!
1
2
3
dx dy dz,
d =
+
+
x
y
z
d =
1
2
3
+
+
.
x
y
z
f
f
f
dx +
dy +
dz,
x
y
z
f
f
f
dx dy
dx dz +
dy dz,
z
y
x
!
2 f
2 f
2 f
d df =
+
+
dx dy dz,
x2
y2
z2
df =
2 f
2 f
2 f
d df =
+
+
.
x2
y2
z2
Also in R3 , the operator d d seems to be the Laplace operator for scalar fields.
Notice(s): 3.7.1 All the combinations of d and are coordinate free and can be
written out in any desired coordinate system.
104
2
2
2
2
,
t2
x2
y2
z2
1
in every tangent space TX (Rn ) and its dual. Further holds that GX
= hi dxi ,
hi xi
wherein may not be summed over i.
105
gi j j . In the classical literature it is customary, when there are used orthogonal curvix
linear coordinates, to give the components of vector fields with respect to orthonormal
bases. Because gij = h2
ij , is the gradient of with respect to the orthonormal base
i
)
(
1
to write as
hi xi
grad =
1 1
1 1
1 1
+
.
+
2
2
1
1
h1 x h1 x
h2 x h2 x
h3 x3 h3 x3
1
.
hi xi
1
1
1
+ 2
+ 3
,
2
1
h1 x
h2 x
h3 x3
than holds
G = 1 h1 dx1 + 2 h2 dx2 + 3 h3 dx3 ,
such that
dG =
!
!
3h
1h
2 h2
1 h1
3
1
dx1 dx2 +
dx1 dx3 +
x2
x3
x1
x1
!
3 h3
2 h2
dx2 dx3 .
x2
x3
106
To calculate the Hodge image of dG , we want that the basis vectors are orthonormal.
1
h1 dx1 h2 dx2 , than follows that dx1 dx2 =
Therefore we write dx1 dx2 =
h1 h2
h3
dx3 . With a similar notation for the dx1 dx3 and dx2 dx3 it follows that
h1 h2
!
!
1h
3h
3 h3
2 h2
1
1
3
1
dG =
h1 dx1 +
h2 dx2
h2 h3
h1 h3
x2
x3
x3
x1
!
1
2 h2
1 h1
h3 dx3 ,
h1 h2
x2
x1
such that we finally find that
!
!
2 h2 1
3 h3 1
3 h3
1
1 h1
1
curl =
+
+
h2 h3
h1 x1
h1 h3
h2 x2
x2
x3
x3
x1
!
2 h2
1
1 h1 1
.
h1 h2
h3 x3
x2
x1
1
1
1
+ 2
+ 3
,
2
1
h1 x
h2 x
h3 x3
than holds
G = 1 h2 h3 dx2 dx3 2 h1 h3 dx1 dx3 + 3 h1 h2 dx1 dx2 ,
such that
!
1 h2 h3
2 h1 h3
3 h1 h2
d G =
+
+
dx1 dx2 dx3
2
3
1
x
x
x
and so we get
!
1
1 h2 h3
2 h1 h3
3 h1 h2
div =
+
+
.
h1 h2 h2
x2
x3
x1
107
This is the well-known formula, which will also be found, if the divergence of is
written out as given in Definition 3.6.8.
108
Determine the Christoffel symbols belonging to the cylindrical and spherical coordinates on R3 .
2.
109
x = cos + sin ,
y = sin + cos ,
z = .
The determinant of the Jacobian matrix is given by
cos sin sin + sin
x x x
J =
= sin cos cos sin = 1.
0
0
1
The inverse coordinate transformation is given by
= x cos z y sin z,
= x sin z + y cos z,
= z.
110
111
The introduced function s, we want to use as parameter for space curves. The parameter
is then s and is called arclength. The integral given in 4.1 is most often difficult, or not
all, to determine. Therefore we use the arclength parametrisation of a space curve only
for theoretical purposes.
Henceforth we consider the Euclidean inner product on R3 . Out of the main theorem
of the integration follows than that
2
dX dX
ds
.
(4.2)
=
,
dt
dt dt
The derivative of s to t is the length of the tangent vector.
K, than holds
2
2
dX dX
dt
dX dX
dt
=
=
,
,
ds ds
ds
dt dt
ds
If s is chosen as parameter of
ds
dt
2
= 1,
(4.3)
where we used Formula 4.2. Property 4.3 makes the use of the arclength as parameter
dX
Y = X + X.
The parameter in this definition is in such a way that || gives the distance from the
tangent point X along this tangent line.
Definition 4.1.3 The osculation plane to a curve K at the point X is the plane that
is given by the parametric representation
Y = X + X + X.
112
det Y X, X,
= 0.
ds3
For some arbitrary parametrisation of a space curve K, with parameter t, the parameter
representation of the tangent line and the osculation plane in X, with X 6= 0, are given
by
Y = X +
dX
dt
and
Y = X +
d2 X
dX
+ 2 .
dt
dt
113
is called the curvature vector. We introduce the vectors n and b, both are unit vectors
on the straight lines of the principal normal and the binormal. We agree that n points
The vectors , n and b
in the direction of X and that b points in the direction of X X.
are oriented on such a way that
b = n, n = b , = n b.
Let Y be a point in the neighbourhood of X at the space curve K. Let 4 be the angle
between the tangent lines in X and Y and let 4 be the angle between the binormals in
X and Y. Note that 4 is also the angle between the osculation planes in X and Y.
Definition 4.1.4 The curvature and the torsion of the space curve K in the
point X is defined by
d
=
ds
!2
d
=
ds
!2
4
= lim
4s0 4s
!2
4
= lim
4s0 4s
!2
(4.4)
(4.5)
b .
Lemma 4.1.1 There holds that 2 = (, ) and 2 = b,
Proof Add in a neighbourhood of X to every point of the curve an unit vector a, such
that the map s 7 a(s) is sufficiently enough differentiable. The length of a is equal to
1, there holds that (a, a) = 1 and there follows that (a, a)
= 0. Differentiate this last
equation to s and there follows that (a,
a)
+ (a, a ) = 0. Let 4 be the angle between
a(s) and a(s + 4s), where X(s + 4s) is point in the neighbourhood of X(s). There holds
that
cos (4) = (a(s), a(s + 4s)).
A simple goniometric formula and a Taylor expansion gives as result that
1
1
1 2 sin2 ( 4) = (a(s), a(s) + (4s) a(s)
= (a(s), a (s)), 4s 0.
sin x
= 1, for x 0, such that
x
114
lim
4s 0
4
4s
2
= (a,
a).
The curvature of a curve is a measure for the change of direction of the tangent line.
1
R = is called the radius of curvature. So far we have confined ourselves till points at
K which are no inflection points. But it is easy to define the curvature in an inflection
point. In an inflection point is X = 0, such that with Lemma 4.1.1 follows that = 0.
The reverse is also true, the curvature in a point is equal to zero if and only if that point
is an inflection point.
For the torsion we have a similar geometrical characterisation. The torsion is zero if and
only if the curve belongs to a fixed plane. The torsion measures the speed of rotation
of the binormal vector at some point of the curve.
The three vectors , n and b form in each point a orthonormal basis. The consequence
is that the derivative of each of these vectors is a linear combination of the other two.
These relations we describe in the following theorem. They are called the formules of
Frenet.
Theorem 4.1.1 The formules of Frenet read
= n,
(4.6)
n = + b,
(4.7)
b = n,
(4.8)
The sign of is now also defined. The sign of has to be taken so that Equation 4.8
is satisfied.
Proof The definition of is such that it is a multiple of n. Out of Lemma 4.1.1 it follows
that the length of is equal to and so there follows directly Equation 4.6.
With the result of above we conclude that (, b) = 0. The fact that (b, ) = 0 there
) = (b, ) = 0. Hereby follows that b is a multiple of n. Out of
follows that (b,
Lemma 4.1.1 follows that || is the length of b. Because of the agreement about the sign
of , we have Equation 4.8.
Because (n, ) = 0 there follows that (n,
) = (n, ) = and because (n, b) = 0
They call the positive oriented basis {, n, b} the Frenet frame or also the Frenet trihedron, the repre mobile, and the moving frame. Build the of the arclength depend
matrix
115
F = (, n, b},
then holds that FT F = I and det = 1, so the matrix F is direct orthogonal (so, orthogonal and detF = 1). The formulas of Frenet can now be written as
0 0
d
F = F R, with R = 0 .
(4.9)
ds
0
0
Theorem 4.1.2 Two curves with the same curvature and torsion as functions
of the arclength are identical except for position and orientation in space. With a
translation and a rigid rotation one of the curves can be moved to coincide with
the other. The equations = (s) and = (s) are called the natural equations of
the curve.
Proof Let the given functions and be continuous functions of s [0, a), with
a some positive constant. To prove that there exists a curve K of which the curvature
and the torsion are given by respectively and . But also to prove that this curve K is
uniquely determined apart from a translation and a rigid rotation. The equations 4.9
can be interpreted as a linear coupled system of 9 ordinary differential equations. With
the existence and uniqueness results out of the theory of ordinary differential equations
follows that there exists just one continuous differentiable solution F(s) of differential
equation 4.9, to some initial condition F(0). This matrix F(0) is naturally a direct orthonormal matrix. The question is wether F(s) is for all s [0, a) a direct orthonormal matrix? Out of F = F R follows that F T = RT FT = R FT . There holds that
d T
F FT = F R FT and F F T = F R FT , that means that
F F = 0. The matrix F(s) FT (s)
ds
is constant and has to be equal to F(0) FT (0) = I. Out of the continuity of F(s) follows
that det F(s) = det F(0) = 1. So the matrix F(s) is indeed for every s [0, a) a direct orthonormal matrix. The matrix F(s) gives the vectors , n and b, from which the searched
curve follows.
The arclength parametrisation of this curve is given by
Z s
X(s) = a +
ds,
0
where a is an arbitrary chosen vector. Out of this follows directly the freedom of translation of the curve.
be another initial condition and let F(s)
be the associated solution. Because of
Let F(0)
F(0)
= S F(0). The associated solution is given by F(s)
= S F(s), because
d
d
(S F(s)) = S F(s) = S F(s) R.
ds
ds
116
(4.10)
1
1
1
|X| = |n| = ,
2
Out of Equation 4.10 follows that b + is constant, let say to u. Note that (u, u) =
117
X u = b
follows that
!
2 1
(u, u),
X u, u = = 1 + 2
(4.11)
so
X
Evidently is X
2 + 2
2 + 2
2 + 2
!
u, u = 0.
s u, u a constant and equal to (X(0), u). The vector X
The tangent vector X makes a constant angle with some fixed vector u,
see Equation 4.11.
The function h(s) = (X(s) X(0), u) tells how X(s) has "risen"in the direction
dh
u is constant, so h(s) rises at a constant
= X,
u, since leaving X(0). And
ds
rate relative to the arclength.
d3
1
X
+
X
u = 0.
2 + 2 ds3
2 + 2
d2
1
X
+
X
X(s) 2
s
u
m
=
n(s),
+ 2
2 + 2
Evidently is the vector
such that
s u m = 2
.
X(s) 2
2
+
+ 2
118
The space curve is, as we already know, a cylindrical helix, especially a circular helix.
j for
).
u j
Definition 4.2.1
A surface S in R3 is the set of points X = X(u1 , u2 ) =
xi (u1 , u2 ) Ei with (u1 , u2 ) .
We call the functions xi = xi (u j ) a parametric representation of the surface S. The parameters u j are the coordinates at the surface. If one of these coordinates is kept constant,
there is described a curve at the surface. Such a curve belongs to the parametric repand is called a parametric curve. The condition that the rank of the matrix
hresentation
i
i
j x is equal to two expresses the fact that the two parametric curves u1 = constant and
u2 = constant can not fall together. Just as the curves in R3 , there are infinitely many
possibilities to describe the same surface S with the help of a parametric representa0
0
0
0
tion. With the help of the substitution u1 =" u1 (u# 1 , u2 ), u2 = u2 (u1 , u2 ), where we
ui
suppose that the determinant of the matrix
0 is nonzero, there is obtained a new
ui
0
parametric
of the surface S, with the coordinates ui . The assumption
" representation
#
h
i
ui
i is
=
6
0
is
again
guaranteed
by
the
fact
that
the
rank
of
the
matrix
x
that det
j
0
ui
ui
equal to two. From now on the notation of the partial derivatives i0 will be Aii0 .
u
Let S be a surface in R3 with the coordinates ui . A curve K at the surface S can be described by ui = ui (t), where t is a parameter for K. The tangent vector in a point X at
this curve is given by
119
dX
dui
=
i X,
dt
dt
which is a combination of the vectors 1 X and 2 X. The tangent lines in X to all the
curves through X at the surface lie in a plane. This plane is the tangent plane in X to
S, notation TX (S). This tangent plane is a 2dimensional linear subspace of the tangent
space TX (R3 ). The vectors 1 X and 2 X form on a natural way a basis of this subspace.
0
With the transition to other coordinates ui holds i0 X = Aii0 i X, this means that in the
tangent plane there is a transistion to another basis.
Example(s): 4.2.1
representation
120
The vectors 1 X and 2 X are tangent vectors to the parameter curves u2 = C and u1 =
C. If the parameters curves intersect each other at an angle , than holds that
g12
(1 X, 2 X)X
.
=
g11 g22
|1 X| |2 X|
cos =
It is evident that the parameter curves intersect each other perpendicular, if g12 = 0. A
parametrisation ui of a surface S is called orthogonal if g12 = 0.
Example(s): 4.2.2 The in Example 4.2.1 given parametrisation of a sphere with
radius R is orthogonal. Because there holds that g11 = R2 , g22 = R2 sin2 and
g12 = g21 = 0.
k
i
)
j
= gkl (i j X, l X).
It is clear that
hij = (i j X, NX ).
(4.13)
121
1
2 -tensor field,
(
0
)
0
= g
k0 l0
(i0 j0 X, l0 X) =
0
Akk
j
A j0
(
Aii0
)
j
The second term is in general not equal to zero, so the Christoffel symbols are not the
components of a tensor field. ( See also Formula 3.8, the difference is the inner product!)
Furthermore holds that
j
j0
Definition 4.2.3 The second fundamental tensor field is the covariant 2-tensor
field of which the components, with respect to the base uk , are given by hij , so
h = hi j dui du j .
Lemma 4.2.2 The Christoffel symbols are completely described with the help of
the components of the first fundamental tensor field. There holds that
)
(
1
k
= gkl (i g jl + j gli l gij ).
(4.14)
i j
2
Proof There holds that
i g jl = i ( j X, l X) = (i j X, l X) + ( j X, il X),
j gli = j (l X, i X) = ( jl X, i X) + (l X, ji X),
l gij = l (i X, j X) = (li X, j X) + (i X, l j X),
such that
(
i g jl + j gli l gij = 2 (i j X, l X) = 2 glk
from which Formula 4.14 easily follows.
Note that Formula 4.14 corresponds with Formula 3.9.
k
i
)
,
j
122
Theorem 4.2.1 The intersection of a surface S with a flat plane, that lies in some
small neighbourhood of a point X at S and is parallel to TX (S), is in the first approximation a hyperbola, ellipse or a pair of parallel lines and is completely determined
by the second fundamental tensor.
Proof We take Cartesian coordinates x, y and z in R3 such that X is the origin and the
tangent plane TX (S) coincides with the plane z = 0. In a sufficiently small neighbourhood of the origin, the surface S can be descibed by an equation of the form z = f (x, y).
A parametric representation of S is given by x1 = x, x2 = y and x3 = z = f (x, y). We
assume that the function f is enough times differentiable, such that in a neighbourhood
of the origin the equation of S can written as
z = f (x, y) = f (0, 0) +
=
f
f
1
(0, 0) +
(0, 0) + (r x2 + 2 s x y + t y2 ) + h.o.t.
2
x
y
1
(r x2 + 2 s x y + t y2 ) + h.o.t.,
2
with
r =
2 f
2 f
2 f
(0,
0),
r
=
(0,
0)
and
t
=
(0, 0).
x y
x2
y2
The abbreviation h.o.t. means higher order terms. A plane that lies close to X and is
parallel to the tangent plane to S at X is described by z = , with small enough. The
intersection of this plane with X is given in a first order approximation by the equation
r x2 + 2 s x y + t y2 = 2 .
The tangent vectors to the coordinate curves, with respect to the coordinates x and y,
in the origin, are given by
!T
f
x X = 1, 0,
x
and
f
y X = 1, 0,
y
!T
,
123
We conclude that the intersection is completely determined by the numbers hij and that
the intersection is an ellipse if det[hi j] > 0, a hyperbola if det[hi j] < 0 and a pair of
parallel lines if det[hi j] = 0.
This section will be closed with a handy formula to calculate the components of the
second fundamental tensor field. Note that 1 X 1 X = NX , with = |1 X 2 X|.
This can be represented with components of the first fundamental tensor field. There
holds
= |1 X| |2 X| sin,
with the angle between 1 X and 2 X, such that 0 < < . There follows that
s
q
q
q
g212
2
2
= g11 g22 (1 cos ) = g11 g22 (1
) = g11 g22 g12 = det[gij ].
g11 g22
Furthermore is
hi j = (i j X, NX ) =
1
1
(1 X 2 X, i j X) = p
det(1 X, 2 X, i j X).
det[gij ]
X = = u j j X + u j u k k j X = u j j X + u j u k
X
+
h
N
X
l
k
j
k j
= u l + u j u k
(4.15)
l X + u j u k h jk NX .
j k
The length of the curvature vector is given by the curvature in X ( see Lemma 4.1.1).
Definition 4.2.4 The geodesic curvature of K is the length of the projection of X
onto the tangent plane TX (S).
124
It is, with the help of Formula 4.15, easy to see that the geodesic curvature can be calulated with the help of the formula
s
!
!
(
)
(
)
j
i
q
j
p
(4.16)
u u gij .
u i +
u l u k u +
p q
l k
Note that the geodesic curvature only depends on components of the first fundamental
tensor field.
Definition 4.2.5
NX .
Out of Formula 4.15 follows that the principal curvature is given by u j u k h jk . Note that
the principal curvature only depends on the components of the second fundamental
tensor field and the values of u i . These last values determine the direction of . This
means that different curves at the surface S, with the same tangent vector in a point X
at S, have an equal principal curvature. This result is known as the theorem of Meusnier
Definition 4.2.6 A geodesic line or geodesic of the surface S is a curve at S, of
which in every point the principal normal and the normal at the surface fall together.
Out of Formula 4.15 follows that a geodesic is decribed by the equations
)
(
i
i
u j u k = 0.
u +
j k
(4.17)
This is a non-linear inhomogeneous coupled system of two ordinary differential equations of the second order in u1 and u2 . An analytic solution is most of the time difficult
or not at all to determine. Out of the theory of ordinary differential equations follows
that there is exactly one geodesic line through a given point and a given direction.
If a curve is not parametrised with its arclength, but with another parameter, for instance t, then is the arclength given by Formula 4.1. This formula can be expressed by
the coordinates u j and the components of the first fundamental tensor field. It is easy
to see that the arclength s of a curve K at S, with parameter t, between the points t = t0
and t = t1 is given by
Z t1 r
dui du j
s =
gij
dt
(4.18)
dt dt
t0
We use this formula to prove the following theorem.
125
ds = 0,
ds u k
uk
u k
uk
s0
s0
(4.19)
hereby is used partial integration and there is used that k (s0 ) = k (s1 ) = 0. Because
Formula 4.19 should apply to every function k , we find that
T
d T
= 0.
ds u k
uk
(4.20)
Because of the fact that T in every point of K takes the value 1, it is no problem to replace
T by T2 in Formula 4.20 and the equations become
d
i j
i j
g
u
u
= 0,
g
u
u
ij
ij
ds u k
uk
or
d
u i u j k gij
gki u i + gk j u j = 0,
ds
or
u i u j k gij 2u i gki + ( j gki + i gk j ) u i u j = 0,
or
2 gki u i + (i gk j + j gki k gij ) u i u j = 0,
126
or
(
i
u +
)
j
u i u j = 0.
This are exactly the equations for geodesic lines. Because of the fact that K satisfies
these equations, is K a geodesic through X0 and X1 .
4.2.5
Example(s): 4.2.4 The vector field formed by the basis vectors j X(t), with i fixed,
j
Example(s): 4.2.5 The vector field formed by the reciproke basis vectors dui ,
j
with i fixed, is a tangent vector field and has covariant components i and contravariant components gij .
dv(t)
will not be an element of
dt
the tangent plane TX(t) (S). In the following definition we will give an definition of a
derivative which has that property.
Let v be tangent vector field. In general, the derivative
127
v
,
dt
dv
v
= PX
,
dt
dt
with PX the projection at the tangent plane TX (S).
The covariant differentiation in a point X at S is a linear operation such that the tangent
vectors at S, which grasp at the curve K, is imaged at TX (S). For every scalar field f at
K holds
!
( f v)
df
d( f v)
df
dv
v
=
= PX
= PX
v + f
v + f
.
(4.21)
dt
dt
dt
dt
dt
dt
Note that for every w TX(t) (S) holds that
dv
v
w,
= w,
,
dt
dt
because w is perpendicular to NX . For two tangent vector fields v and w alnog the same
curve of the surface there follows that
d(v, w)
w
w
+
= v,
,w .
(4.22)
dt
dt
dt
This formula is a rule of Leibniz.
Example(s): 4.2.6 Consider the vector field out of Example 4.2.3. Call this vector field w. The covariant derivative of this vector field along the curve K can be
expressed by Christoffel symbols. There holds
w
dw
d du j
= PX
= PX
jX
dt
dt
dt dt
!
d2 u j
du j d
= PX
jX +
jX
dt dt
dt2
d2 u j
du j duk
d2 u j
du j duk
l
= PX
X
+
X
=
X
+
j
j
j
k
k
dt dt
dt dt
dt2
dt2
k dul
d2 u j
j
du
X,
= 2 +
k l dt dt j
dt
!!
l X
j
128
Example(s): 4.2.7 Consider the vector field out of Example 4.2.4. Also the covariant derivative of this vector field is to express in Christoffel symbols.
k
j
d j
j d
j du
i X =
i j X = P X
i j X + i j X = i
dt
dt
dt
dt
dt
duk
X.
=
i k
dt j
l X
k j
Example(s): 4.2.8 Consider the vector field out of Example 4.2.5. There holds
d j
d i
i
0 =
i =
du , j X =
du , j X + dui , j X ,
dt
dt
dt
dt
where the rule of Leibniz 4.22 is used. Out of this result follows that
(
) k
!
(
) k
du
du
i
l
i
i
i
du , j X = du , j X = du ,
l X =
,
j k dt
j k dt
dt
dt
such that
) k
(
du
i
i
du =
du j .
j k dt
dt
In particular, we can execute the covariant differentiation along the parameter curves.
These are obtained by taking one of the variables uk as parameter and the other variables
u j , j 6= k fixed. Then follows out of Example 4.2.7 that for the covariant derivative of the
basis vectors along the parameter curves that
(
) l
du
j
X
=
j X. (uk is the parameter instead of t.)
i
k
k
i
l
du
du
At the same way follows out Example 4.2.8 that the covariant derivatives of the reciproke basis vectors along the parameter curves are given by
(
) k
(
)
du
i
i
dui =
du j =
du j .
(4.23)
j k dul
j l
dul
In general the covariant derivative of a tangent vector field v along a parameter curve
is given by
(
)!
j
j
j
j
j
l
v =
v j X = k v j X + v
j X = k v + v
j X,
k l
duk
duk
duk
and here we used Formula 4.21. The covariant derivative of a tangent vector field with
respect to the reciproke basis vectors is also easily to write as
129
(
)!
l
j
j
j
v =
v j du = k v j du + v j k du = k v j vl
du j .
k
k
k
j
du
du
du
(4.24)
Lemma 4.2.3 The functions k v j , given by Formula 4.25 are the components of a
1
1 -tensor field at S. This tensor field is called the covariant derivative of v at S.
Proof Prove it yourself.
)
j
,
0
2 -tensor
(4.26)
field.
k i j , k ij and k i by
l
l
k ij = k ij
il ,
lj
k j
k i
i
j
lj
il
ij
ij
k = k +
k l
k l
j
l
l
j
j
j
k i = k i +
l .
i
k l
k i
(4.27)
(4.28)
(4.29)
130
Lemma 4.2.4 The components of the first fundamental vector field behave by
covariant differentiation like constants, or k gij = 0 and k gij = 0.
Proof Out of Formula 4.22 follows
i
j
+
du
,
du
,
k gij = k dui , du j = dui ,
duk
duk
such that
(
)
(
)
j
i
il
k g =
g
glj ,
l k
l k
ij
where is made use of Formula 4.23. With the help of Definition 4.28 follows then
k gi j = 0. In a similar way it is to see that k gij = 0.
Definition 4.2.10 Let v be a tangent vector field on K at S and write v = ui i X.
This tangent vector field is called parallel (transported) along K if
(
) i !
i
dv j
du k
j
v i X =
+
v j X = 0.
(4.30)
i k dt
dt
dt
Note that a curve K is a geodesic if and only if the tangent vector field of this curve is
parallel transported along this curve. If K is a geodesic than there holds that
!
dui
i X = 0.
dt dt
IMPORTANT NOTE
The system of differential equations for parallel transport in 2 dimensions reads
l
l
1
1
du
du
v1
dt
dt
l
2
l
1
v
0
d
= .
2 +
2
dt v
0
2
2
dul
dul v
l 1 dt l 2 dt
This is mostly a nonautonomous coupled system of ordinary linear differential equations of the form
1
dv
1
2
dv2
131
132
u = , v = t, 0 t 2 .
4
This curve starts and ends in the point
1
1
a =
2, 0,
2 .
2
2
Transport the vector (0, 1, 0) out of Ta (S) parallel along the curve K.
Is K a geodisc?
2.
g.
h.
u = , v = t, 0 t 2 .
4
133
1
1
2, 0,
2 + log ( 2 1) .
a =
2
2
Transport the vector (0, 1, 0) out of Ta (S) parallel along the curve K.
Is K a geodisc?
3.
134
135
gmn
. The expression
xl
1
called)a Christoffel symbol of the first kind and is often notated
2 (gmn,l + gnl,m glm,n ) is (
r
by [ln, m] = lnm = grm
.
l n
136
Chapter 5 Manifolds
Section 5.1 Differentiable Functions
|h| 0
Take cartesian hcoordinates
ath Rni and Rm . Let f k be the k-th component function of f
i
j
and write a = ak . Let A = Ai be the matrix of A with respect to the standard bases
j
fi
x j
(a).
The linear transformation A is called the derivative in a and the matrix A is called the
df
functional matrix. For A, we use also the notation
(a). If m = n, than can also be
dX
f 1, , f n
determined the determinant of A. This determinant is just
(a), the Jacobi
x1 , , xn
determinant of f in a.
Let K be a curve in U with parameter t (, ), for a certain value of > 0. So
dX
K : t X(t). Let a = X(0). The tangent vector at K in the point a is given by
(0). Let
dt
L be the image curve in V of K under f . So L : t Y(t) = f (X(t)). Call b = Y(0) = f (a).
df
dY
dX
The tangent vector at L in the point b is given by
(0) =
(a)
(0).
dt
dX
dt
If two curves K1 and K2 through a at a have an identical tangent vector than it follows
that the image curves L1 and L2 of respectively K1 and K2 under f have also an identical
137
tangent vector.
The three curves K1 , K2 and K3 through a have at a tangent vectors, which form an
addition parallelogram. There holds than that the tangent vectors at the image curves
L1 , L2 and L3 also form an addition parallelogram.
Let M be a set 2 3.
Definition 5.2.1 A subset U of M is called a chart ball of M if there exists an open
of Rn such that there exists a map which maps U bijective on U.
The
subset U
n
of R is called chart and the map is called chart map.
open subset U
RRvH: Problem is how to describe that set, for instance, with the help of Euclidean coordinates?
RRvH: It is difficult to translate Dutch words coined by the author. So I have searched for
English words, commonly used in English texts, with almost the same meaning. The book of
(Ivancevic and Invancevic, 2007) ,Applied Differential Geometry was very helpful.
138
and 0 : U0 U
0 . Let U
and U
0 be provided with the respective
chart maps : U U
0
i
i
0
coordinates u and u and write for x U U ,
0
which we simply write as ui = ui (ui ) and ui = ui (ui ). Because of the fact that the
transition maps 4 are differentiable, we can introduce transition matrices by
0
0
Aii
ui
ui
i
=
and
A
=
0.
i0
ui
ui
0
Chart maps are bijective and there holds that det[Aii ] 6= 0 and det[Aii0 ] 6= 0. The intersection U U0 of M is apparently parametrised by ( at least) two curvilinear coordinate
systems.
In the remainder U and U0 are chart balls of M with a non-empty intersection U U0 .
U
0 and , 0 .
The corresponding charts and chart maps we notate respectively with U,
0
U
0 are provided by the coordinates ui and ui .
The open subsets U,
Definition 5.2.5 A curve K at M is a continuous injective map of an open interval
I to M.
Let K be curve at M such that a part of the curve lies at U U0 . That part is a curve that
at the chart U
0 and is a curve in Rn . A point X(t0 ) U U0 ,
appears at the chart U,
and U
0 . At these charts the tangent
for a certain t0 I, can be found at both charts U
vectors at K in X(t0 ) are given by
d( X)
d((0 ) X)
(t0 ) and
(t0 ).
dt
dt
Let K1 : t 7 X(t), t I1 and K2 : 7 Y(), I2 be curves at M, which have a
point P in common in U U0 , say P = X(t0 ) = Y(0 ), for certain t0 I1 and 0 I2 .
coincide. The tangent
Suppose that the tangent vectors on K1 and K2 in P at the chart U
0
also coincide, because by changing of chart
vectors on K1 and K2 in P at the chart U
0
these tangent vectors transform with the transition matrix Aii .
Definition 5.2.6 Two curves K1 and K2 at M which both have the point P in
common are called equivalent in P, if the tangent vectors on K1 and K2 in P at a chart
coincide. From the above it follows that this definition is chart independent.
U
139
The
the tangent vectors i in P at the parameter curves, which belong to the chart U.
u
relationship of the tangent vectors i0 , which belong to the parameter curves of chart
u
0 , is given by
U
i
.
0 = Ai0
i
u
ui
Definition 5.2.8 A function at M is a map of a part of M to the real numbers.
is a function f described by f 1 : U
R. Note that U
= (U). We
At the chart U
notate also functions by there description at the charts.
Definition 5.2.9 Two functions f and g at M are called equivalent in a point P if
for their descriptions f (ui ) and g(ui ) at U holds
j f (ui0 ) = j g(ui0 ),
whereby (P) = (u10 , , un0 ).
ui = ui (ui ), j0 f = A j0 j f.
Definition 5.2.10 A covector in P at the manifold M is a class of in P equivalent
functions. The cotangent space in P is the set of the covectors in P at M.
The cotangent space is a vector space of dimension n. The covectors dui in P of the
form a basis of the cotangent space.
parameter functions ui , which belong to the chart U,
For two charts holds
0
140
d
d
dui
( f K) =
f (ui (t)) = i f
.
dt
dt
dt
This expression, which is chart independent, is called the directional derivative in
P of the function f with respect to the curve K and is conform the definition 3.6.2,
Subsection 3.6.1. In the directional derivative in P we recognize the covectors as linear functions at the tangent space and the tangent vectors as linear functions at the
cotangent space. The tangent space and the cotangent space in P can therefore be considered as each others dual.
The tangent vectors in a point P at the manifold M, we can also define as follows:
Definition 5.2.11 The tangent vector in P is a linear transformation D of the set
of functions at M, which are defined in P, in R, which satisfies
D( f + g) = D( f ) + D(g), D( f g) = f D(g) + g D( f ).
(5.1)
A tangent vector according definition 5.2.7 can be seen as such a linear transformation.
Let K be a curve and define
d
Df =
f K,
dt
than D satisfies 5.1, because for constants and holds
dui
dui
dui
dui
dui
dui
i ( f + g) =
i f +
i g,
i ( f g) = f
i g + g
i f.
dt
dt
dt
dt
dt
dt
Let M be a manifold.
141
142
Example(s): 5.3.1 a mechanical system with n degrees of freedom, with generalised coordinates q1 , , qn , of which the kinetic energy is a positive definite
quadratic norm in q i , with coefficients which depend on qi ,
T =
1
aij (qk ) q i q j .
2
The differential equations of the behaviour of the system are the equations of
Lagrange,
!
d T
T
k = Kk ,
k
dt q
q
where Kk (q j , t) are the generalised outside forces.
The configuration of the system forms a Riemannian manifold of dimension n, T
appears as fundamental tensor. The equations of Lagrange can be written as
(
)
k
+
qk
q i q j = Kk .
i j
When there work no outside forces at the system a geodesic orbit is followed by
the system on the Riemannian manifold.
Notice(s): 5.3.1 According the definition of a Riemannian manifold is every tangent space provided with a positive definite fundamental tensor. With some effort
and some corrections, the results of this paragraph and the paragraphs that follow can be made valid for Riemannian manifolds with an indefinite fundamental
tensor. Herein is every tangent space a Minkowski space. This note is made in
relation to the furtheron given sketch about the general theory of relativity.
143
This gives us the idea to define the covariant derivative of a vector field w = wi i X
along a curve K by
!
(
)
i
dwi
du j k
i
(5.2)
w i X =
+
w i X.
j k dt
dt
dt
Here is a vector field along a curve K. This vector field is independent of the choice of
the coordinates,
0
0
du j0 0
i
dw
i
+
wk =
0
0
j k
dt
dt
j0 dup 0
i
0
d i0 i i0 j k
A w + Ai A j0 Ak0
Ak wq =
+ Ais j0 Ask0 Ap
j k
dt i
dt q
p
h
j
dwi
0
du
k
+ Ai0 du wi + Ai0 As Ak0 du wq .
Aii
+
w
0
h
q
s
i
k
j k
dt
dt
dt
dt
The last term in the expression above is equal to
0 dup
0 dup
0
Ais Ask0 p Aqk
wp = p Aiq
wk ,
dt
dt
such that
0
dwi
+
dt
i0
(
j0
)
k0
i
0 dw
du j k0
w = Aii
+
dt
dt
i
j
!
du j k
w .
k dt
)
Let M be (Riemannian
manifold and {U, } a chart 5, with coordinates ui and Christoffel
)
k
symbols
. Let K be a parametrised curve at the chart U and T a rs tensor field,
l m
that at least is defined in every point of K.We want to introduce a differentiation opera
tor dt
along K such that dt
T is an on K defined rs tensor field. dt
is called the covariant
derivative along K.
We consider first the case r = 1, s = 0. The covariant derivative of a tangent vector field
= (U) is a
RRvH: The definition of a chart ball and a chart is not consequently used by the author. U
chart of M, {U, } is a chart ball of M. The cause of this confusion is the use of the lecture notes of Prof.
Dr. J.J. Seidel, see (Seidel, 1980) ??.
144
(
)
u k
du k
k
=
+
u i u j = 0.
i j
ds
ds
We consider now the case r = 0, s = 1, the covariant derivative of a covector field (or: of the covariant components of a vector field) along the curve K. Let $\phi_r\, du^r$ be a given covector field that is defined at least everywhere on K and $a^r\, \partial_r X$ a pseudoparallel vector field along K. Then holds
$$ \frac{d}{dt}\,( a^r \phi_r ) = \frac{da^r}{dt}\, \phi_r + a^r\, \frac{d\phi_r}{dt} = -\left\{{r \atop j\ k}\right\} \frac{du^j}{dt}\, a^k \phi_r + a^r\, \frac{d\phi_r}{dt} = a^k \left( \frac{d\phi_k}{dt} - \left\{{r \atop j\ k}\right\} \frac{du^j}{dt}\, \phi_r \right). $$
If we want the Leibniz rule to hold when taking the covariant derivative, then we have to define
$$ \frac{\nabla \phi_k}{dt} = \frac{d\phi_k}{dt} - \left\{{r \atop j\ k}\right\} \frac{du^j}{dt}\, \phi_r \tag{5.3} $$
along K. One proves directly that under a change of chart holds
$$ \frac{\nabla \phi_{k'}}{dt} = A^k_{k'}\, \frac{\nabla \phi_k}{dt}. $$
Analogously an arbitrary 2-tensor field is treated. Take for instance r = 0, s = 2 and denote the components of $\Phi$ by $\phi_{ij}$. Take two arbitrary pseudoparallel vector fields $a = a^i\, \partial_i X$ and $b = b^j\, \partial_j X$ along K and require that the Leibniz rule holds; then
$$ \frac{d}{dt}\left( \phi_{ij}\, a^i b^j \right) = \frac{d\phi_{ij}}{dt}\, a^i b^j + \phi_{ij}\, \frac{da^i}{dt}\, b^j + \phi_{ij}\, a^i\, \frac{db^j}{dt} = \left( \frac{d\phi_{kl}}{dt} - \left\{{i \atop m\ k}\right\} \frac{du^m}{dt}\, \phi_{il} - \left\{{j \atop m\ l}\right\} \frac{du^m}{dt}\, \phi_{kj} \right) a^k b^l. $$
So we have to define
$$ \frac{\nabla \phi_{kl}}{dt} = \frac{d\phi_{kl}}{dt} - \left\{{m \atop j\ k}\right\} \frac{du^j}{dt}\, \phi_{ml} - \left\{{n \atop j\ l}\right\} \frac{du^j}{dt}\, \phi_{kn}. \tag{5.4} $$
Under a change of chart holds
$$ \frac{\nabla \phi_{k'l'}}{dt} = A^k_{k'} A^l_{l'}\, \frac{\nabla \phi_{kl}}{dt}. $$
In the same way the case r = 1, s = 1 is tackled, by contraction of a pseudoparallel vector field with a pseudoparallel covector field. This delivers
$$ \frac{\nabla \phi^k_l}{dt} = \frac{d\phi^k_l}{dt} + \left\{{k \atop j\ p}\right\} \frac{du^j}{dt}\, \phi^p_l - \left\{{r \atop j\ l}\right\} \frac{du^j}{dt}\, \phi^k_r. \tag{5.5} $$
With higher order tensors it goes the same way: taking the covariant derivative along a parametrised curve means taking first the ordinary derivative and then adding, for every index, a term with a Christoffel symbol, as in the sketch below.
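This recipe can be captured in one generic routine. A sketch (our own helper, not from the notes; components are assumed to be given as numpy arrays along the curve, with the r upper indices first):

```python
# A sketch of the general recipe behind (5.2)-(5.5): ordinary t-derivative,
# then one Christoffel term per index, + for upper and - for lower indices.
import numpy as np

def covariant_derivative_along(T, u, Gamma, t, r, s, h=1e-6):
    """Covariant derivative along u(t) of tensor components T(t); r upper
    indices first, then s lower ones; Gamma(x)[i, j, k] = {i over j k}."""
    dT = (T(t + h) - T(t - h)) / (2 * h)          # ordinary derivative dT/dt
    du = (u(t + h) - u(t - h)) / (2 * h)          # tangent du^j/dt of the curve
    G = np.einsum('ijk,j->ik', Gamma(u(t)), du)   # G[i, k] = {i over j k} du^j/dt
    out, Tt = dT.copy(), T(t)
    for axis in range(r):                         # + G^i_k T^{...k...} per upper index
        out += np.moveaxis(np.tensordot(G, Tt, axes=([1], [axis])), 0, axis)
    for axis in range(r, r + s):                  # - G^k_l T_{...k...} per lower index
        out -= np.moveaxis(np.tensordot(G, Tt, axes=([0], [axis])), 0, axis)
    return out
```

For r = 1, s = 0 this reproduces formula (5.2), for r = 0, s = 1 formula (5.3), and so on.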
For the curve K we choose a special curve, namely the h-th parameter curve, so that $\frac{du^j}{dt} = \delta^j_h$. This yields
$$ \nabla_h w^i = \partial_h w^i + \left\{{i \atop h\ k}\right\} w^k, \qquad \nabla_h \phi_k = \partial_h \phi_k - \left\{{r \atop h\ k}\right\} \phi_r, $$
$$ \nabla_h \phi_{kl} = \partial_h \phi_{kl} - \left\{{r \atop h\ k}\right\} \phi_{rl} - \left\{{s \atop h\ l}\right\} \phi_{ks}, $$
$$ \nabla_h \phi^{jk}_i = \partial_h \phi^{jk}_i + \left\{{j \atop h\ m}\right\} \phi^{mk}_i + \left\{{k \atop h\ m}\right\} \phi^{jm}_i - \left\{{m \atop h\ i}\right\} \phi^{jk}_m, $$
$$ \nabla_h g_{ij} = 0, \quad \text{etc., etc.} $$
Under a change of chart this behaves (check!) as
$$ \nabla_{h'} \phi^{j'k'}_{i'} = A^h_{h'} A^{j'}_j A^{k'}_k A^i_{i'}\, \nabla_h \phi^{jk}_i. $$
The covariant derivative along all parameter curves converts a $\binom{r}{s}$-tensor field into a $\binom{r}{s+1}$-tensor field on M. Most of the time the covariant derivative is interpreted as the latter.
Section 5.5
For ordinary partial derivatives holds
$$ \frac{\partial}{\partial x}\frac{\partial}{\partial y} - \frac{\partial}{\partial y}\frac{\partial}{\partial x} = 0. $$
However, the second covariant derivative of a vector field is not symmetric. There holds
$$ \nabla_h \nabla_i v^k = \partial_h \left( \nabla_i v^k \right) - \left\{{m \atop h\ i}\right\} \nabla_m v^k + \left\{{k \atop h\ m}\right\} \nabla_i v^m $$
$$ = \partial_h \partial_i v^k + \partial_h \left\{{k \atop i\ j}\right\} v^j + \left\{{k \atop i\ j}\right\} \partial_h v^j - \left\{{m \atop h\ i}\right\} \partial_m v^k - \left\{{m \atop h\ i}\right\}\left\{{k \atop m\ j}\right\} v^j + \left\{{k \atop h\ m}\right\} \partial_i v^m + \left\{{k \atop h\ m}\right\}\left\{{m \atop i\ j}\right\} v^j. $$
Reverse the role of h and i and from the difference with the latter follows
$$ \nabla_h \nabla_i v^k - \nabla_i \nabla_h v^k = \left( \partial_h \left\{{k \atop i\ j}\right\} - \partial_i \left\{{k \atop h\ j}\right\} + \left\{{k \atop h\ m}\right\}\left\{{m \atop i\ j}\right\} - \left\{{k \atop i\ m}\right\}\left\{{m \atop h\ j}\right\} \right) v^j. $$
The left hand side is the difference of two $\binom{1}{2}$-tensor fields and so the result is a $\binom{1}{2}$-tensor field. Because the $v^j$ are the components of an arbitrary vector field, the expression between the brackets in the right hand side consists of the components of a $\binom{1}{3}$-tensor field.
Definition 5.5.1 The curvature tensor of Riemann-Christoffel is the $\binom{1}{3}$-tensor field of which the components are given by
$$ K^k_{hij} = \partial_h \left\{{k \atop i\ j}\right\} - \partial_i \left\{{k \atop h\ j}\right\} + \left\{{k \atop h\ m}\right\}\left\{{m \atop i\ j}\right\} - \left\{{k \atop i\ m}\right\}\left\{{m \atop h\ j}\right\}. \tag{5.6} $$
The following relations hold:
$$ (\nabla_h \nabla_i - \nabla_i \nabla_h)\, v^k = K^k_{hij}\, v^j, $$
$$ (\nabla_h \nabla_i - \nabla_i \nabla_h)\, w_j = -K^k_{hij}\, w_k, $$
$$ (\nabla_h \nabla_i - \nabla_i \nabla_h)\, \phi^k_j = K^k_{him}\, \phi^m_j - K^m_{hij}\, \phi^k_m. $$
In an analogous way one can deduce such relations for other types of tensor fields. With some tenacity the tensorial character of the curvature tensor of Riemann-Christoffel, defined in (5.6), can be verified. This can only be done with the use of the transformation rule
$$ \left\{{i' \atop j'\ k'}\right\} = A^{i'}_i A^j_{j'} A^k_{k'} \left\{{i \atop j\ k}\right\} + A^{i'}_i A^i_{j'k'}, $$
see Section 3.6.3, Formula (3.8).
Notice(s): 5.5.1 Note that
$$ K^k_{hij} = -K^k_{ihj}. $$
In the case that $\left\{{k \atop i\ j}\right\} = \left\{{k \atop j\ i}\right\}$, then
$$ K^k_{hij} + K^k_{jhi} + K^k_{ijh} = 0. $$
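Both statements can be machine-checked for a concrete example. The sketch below is our own; the unit sphere with metric $\mathrm{diag}(1, \sin^2\theta)$ is an assumed example, not taken from the notes. It computes the components (5.6) with sympy and verifies the antisymmetry and the cyclic identity.

```python
# A sympy sketch: curvature components (5.6) for the unit sphere (assumed example).
import sympy as sp
from itertools import product

th, ph = sp.symbols('theta phi')
u = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])       # metric of the unit sphere
ginv = g.inv()
n = 2

def Gamma(k, i, j):                               # {k over i j}, second kind
    return sp.simplify(sum(ginv[k, l] * (sp.diff(g[l, i], u[j])
                                         + sp.diff(g[l, j], u[i])
                                         - sp.diff(g[i, j], u[l]))
                           for l in range(n)) / 2)

def K(k, h, i, j):                                # components, formula (5.6)
    val = sp.diff(Gamma(k, i, j), u[h]) - sp.diff(Gamma(k, h, j), u[i])
    val += sum(Gamma(k, h, m) * Gamma(m, i, j)
               - Gamma(k, i, m) * Gamma(m, h, j) for m in range(n))
    return sp.simplify(val)

for k, h, i, j in product(range(n), repeat=4):
    assert sp.simplify(K(k, h, i, j) + K(k, i, h, j)) == 0                  # antisymmetry
    assert sp.simplify(K(k, h, i, j) + K(k, j, h, i) + K(k, i, j, h)) == 0  # cyclic identity
print(K(0, 0, 1, 1))                              # prints sin(theta)**2
```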
Contraction of the curvature tensor over the indices k and h delivers a $\binom{0}{2}$-tensor field with components
$$ K_{ij} = K^k_{kij} = \partial_k \left\{{k \atop i\ j}\right\} - \partial_i \left\{{k \atop k\ j}\right\} + \left\{{k \atop k\ m}\right\}\left\{{m \atop i\ j}\right\} - \left\{{k \atop i\ m}\right\}\left\{{m \atop k\ j}\right\}. $$
i m k j
We study this tensor field on symmetry. The first term keeps unchanged if i and j are
changed. The combination of the 3th and 4th term also. Only the 2th term needs some
further inspection. With
(
)
1
k
= gkm [k j; m] = gkm j gkm
k j
2
we find that
(
i
)
j
1 km
1
i g
j gkm + gkm i j gkm .
2
2
The 2th term in the right hand side turns out to be symmetric in i and j. For the 1th
term we write with the help of i gkm = gkr glm i grl the expression
148
1 km
i g
j gkm = gkr glm i grl j gkm .
2
Also this one is symmetric in i and j.
Out of this the scalar
$$ K = K_{hi}\, g^{hi} $$
can be derived. With the help of this scalar the Einstein tensor field is formed:
$$ G_{hi} = K_{hi} - \tfrac{1}{2}\, K\, g_{hi}. $$
This tensor field depends only on the components of the fundamental tensor field and plays an important role in the general theory of relativity. It also satisfies the following properties:
$$ G_{hi} = G_{ih} \quad \text{and} \quad \nabla_i\, G^{hi} = 0. $$
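Continuing the sphere sketch above (our own example), the contraction, the scalar K and the Einstein tensor field can be computed as well; in two dimensions $G_{hi}$ vanishes identically, which the computation confirms.

```python
# continuing the sphere sketch above (g, ginv, n, K, sp already defined)
Ric = sp.Matrix(n, n, lambda i, j: sum(K(k, k, i, j) for k in range(n)))   # K_ij = K^k_kij
scalar_K = sp.simplify(sum(ginv[h, i] * Ric[h, i]
                           for h in range(n) for i in range(n)))           # K = K_hi g^hi
G = sp.simplify(Ric - sp.Rational(1, 2) * scalar_K * g)                    # Einstein tensor
print(sp.simplify(Ric))   # Matrix([[1, 0], [0, sin(theta)**2]])
print(scalar_K)           # 2
print(G)                  # Matrix([[0, 0], [0, 0]])
```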
Chapter 6 Appendices
Section 6.1 The General Tensor Concept
To the often heard question "What is a tensor really?" an answer that is sufficiently vague and also sufficiently general is the following: "A tensor is a function T of a number of vector variables, which is linear in each of these variables separately. Furthermore this function need not necessarily be real-valued; it may take its values in another vector space."
Notation(s): Given are k vector spaces $E_1, E_2, \ldots, E_k$ and a vector space F. The set of multilinear functions $t : E_1 \times \cdots \times E_k \to F$ is denoted by $L^k(E_1, E_2, \ldots, E_k; F)$. Multilinear means that for every inlet, for instance the j-th one, holds
$$ t(u_1, \ldots, \alpha u_j + \beta v_j, \ldots, u_k) = \alpha\, t(u_1, \ldots, u_j, \ldots, u_k) + \beta\, t(u_1, \ldots, v_j, \ldots, u_k). $$
Exercise. If $\dim E_j = n_j$ and $\dim F = m$, calculate $\dim L^k(E_1, E_2, \ldots, E_k; F)$.
Notation(s):
Exercise. Show that $L^k(E_1, \ldots, E_k; F)$, with $\dim F < \infty$, is essentially the same as $L^{k+1}(E_1, \ldots, E_k, F^*; \mathbb{R})$.
Theorem 6.1.1 There is a natural isomorphism
$$ L(E_k,\, L^{k-1}(E_1, \ldots, E_{k-1}; F)) \simeq L^k(E_1, \ldots, E_k; F). $$
Proof Take $\sigma \in L(E_k, L^{k-1}(E_1, \ldots, E_{k-1}; F))$ and define $\tau \in L^k(E_1, \ldots, E_k; F)$ by
$$ \tau(u_1, \ldots, u_k) = \left( \sigma(u_k) \right)(u_1, \ldots, u_{k-1}). $$
(You put in a "fixed" vector $u_k \in E_k$ at the k-th position and you are left with a multilinear function with $k-1$ inlets.)
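The isomorphism amounts to "currying" the last argument. A minimal sketch in Python (our own illustration; the bilinear map is an arbitrary choice):

```python
# A sketch of Theorem 6.1.1: fixing the last argument of a k-linear map
# yields a (k-1)-linear map, and vice versa.
def curry_last(t):            # t in L^k(...; F)  ->  sigma in L(E_k, L^{k-1}(...; F))
    return lambda u_k: (lambda *us: t(*us, u_k))

dot = lambda x, y: sum(a * b for a, b in zip(x, y))   # a bilinear map on R^n
sigma = curry_last(dot)
print(sigma((1.0, 2.0))((3.0, 4.0)))   # 11.0 = dot((3, 4), (1, 2))
```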
Notation(s): If we take for $E_1, \ldots, E_k$ respectively r copies of $E^*$ and s copies of E, with $r + s = k$, and also suppose that $F = \mathbb{R}$, then we write $T^r_s(E)$ instead of $L^{r+s}(\underbrace{E^*, \ldots, E^*}_{r\ \text{pieces}}, \underbrace{E, \ldots, E}_{s\ \text{pieces}}; \mathbb{R})$. The elements of this vector space $T^r_s(E)$ are called (mixed) $(r+s)$-tensors on E; they are called contravariant of order r and covariant of order s.
The tensor product $t_1 \otimes t_2$ of $t_1 \in T^{r_1}_{s_1}(E)$ and $t_2 \in T^{r_2}_{s_2}(E)$ is defined by
$$ (t_1 \otimes t_2)(\hat p_1, \ldots, \hat p_{r_1}, \hat q_1, \ldots, \hat q_{r_2}, x_1, \ldots, x_{s_1}, y_1, \ldots, y_{s_2}) = t_1(\hat p_1, \ldots, \hat p_{r_1}, x_1, \ldots, x_{s_1})\; t_2(\hat q_1, \ldots, \hat q_{r_2}, y_1, \ldots, y_{s_2}). $$
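In components the tensor product is the outer product of the component arrays. A small numpy illustration (our own; the arrays are arbitrary choices):

```python
# Identifying tensors on E = R^n with their component arrays, the tensor
# product becomes the outer product of arrays.
import numpy as np

t1 = np.array([1.0, 2.0])                        # a (1 0)-tensor (vector components)
t2 = np.array([[0.0, 1.0], [3.0, -1.0]])         # a (0 2)-tensor (bilinear form)
t = np.multiply.outer(t1, t2)                    # a (1 2)-tensor: t[i,j,k] = t1[i] t2[j,k]

# evaluation on a covector p and vectors x, y mirrors the defining formula
p, x, y = np.array([1.0, -1.0]), np.array([2.0, 0.0]), np.array([0.0, 5.0])
lhs = np.einsum('ijk,i,j,k->', t, p, x, y)
rhs = np.einsum('i,i->', t1, p) * np.einsum('jk,j,k->', t2, x, y)
assert np.isclose(lhs, rhs)
```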
Comment(s): 6.1.2
Theorem 6.1.2 If $\dim(E) = n$ then $T^r_s(E)$ has the structure of an $n^{r+s}$-dimensional real vector space. The system
$$ \{\, e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes \hat e^{j_1} \otimes \cdots \otimes \hat e^{j_s} \mid 1 \le i_k \le n,\ 1 \le j_l \le n \,\} $$
forms a basis of $T^r_s(E)$.
Proof We must show that the previously mentioned system is linearly independent in $T^r_s(E)$ and also spans $T^r_s(E)$.
Suppose that
$$ \tau^{i_1 \cdots i_r}_{j_1 \cdots j_s}\; e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes \hat e^{j_1} \otimes \cdots \otimes \hat e^{j_s} = 0. $$
Letting this $(r+s)$-tensor act on suitable basis vectors and covectors, it follows with $\langle \hat e^p, e_q \rangle = \delta^p_q$ that all numbers $\tau^{i_1 \cdots i_r}_{j_1 \cdots j_s}$ have to be equal to zero. Finally, what concerns the span: every tensor $t \in T^r_s(E)$ can be written as
$$ t = t(\hat e^{i_1}, \ldots, \hat e^{i_r}, e_{j_1}, \ldots, e_{j_s})\; e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes \hat e^{j_1} \otimes \cdots \otimes \hat e^{j_s}. $$
Example(s): 6.1.1 The Kronecker delta $\delta$ is the tensor in $T^1_1(E)$ which belongs to the identical transformation $I \in L(E; E)$ under the canonical isomorphism $T^1_1(E) \simeq L(E; E)$.
If $P \in L(E; F)$ then also, purely notationally, $P \in L(T^1_0(E); T^1_0(F))$. The "pull-back transformation", or simply "pull-back", $P^* \in L(F^*; E^*) = L(T^0_1(F); T^0_1(E))$ is defined by
$$ (P^* \hat f)(x) = \hat f(P x), $$
with $\hat f \in F^*$ and $x \in E$. Sometimes it is "unhandy" that $P^*$ develops in the wrong direction, but this can be repaired if P is an isomorphism, so if $P^{-1} : F \to E$ exists.
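In matrix terms the pull-back is simply multiplication by the transposed matrix. A small sketch (our own; the matrix and the vectors are arbitrary choices):

```python
# With P in L(E; F) given by a matrix, P* acts on covectors as the transpose,
# since (P* f)(x) = f(P x).
import numpy as np

P = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]])   # P: R^2 -> R^3
f = np.array([1.0, -2.0, 0.5])                        # a covector on F = R^3
x = np.array([2.0, 1.0])                              # a vector in E = R^2
pullback_f = P.T @ f                                  # components of P* f on E
assert np.isclose(pullback_f @ x, f @ (P @ x))
```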
Definition 6.1.2 Let $P \in L(E; F)$ be an isomorphism. Then $P^r_s \in L(T^r_s(E); T^r_s(F))$ is defined by
$$ (P^r_s\, t)(\hat q^1, \ldots, \hat q^r, y_1, \ldots, y_s) = t(P^* \hat q^1, \ldots, P^* \hat q^r, P^{-1} y_1, \ldots, P^{-1} y_s). $$
Comment(s): 6.1.4 $P^r_s$ develops in the same direction as P.
The following theorem says that "lifting up the isomorphism P to the tensor spaces" has all the desired properties you would expect. Chicly expressed: the assignment $P \mapsto P^r_s$ is a covariant functor.
i. $(\mathrm{id}_E)^r_s = \mathrm{id}_{T^r_s(E)}$;
ii. $(Q \circ P)^r_s = Q^r_s \circ P^r_s$.
Notate $P e_i = P^j_i\, b_j$ and $(P^{-1})^* \hat e^l = Q^l_k\, \hat f^k$. Then holds
$$ P^i_j\, Q^j_k = Q^i_j\, P^j_k = \delta^i_k. $$
Proof
The Stokes equations play an important role in the theory of incompressible viscous Newtonian fluid mechanics. The Stokes equations can be written as one vector-valued second order partial differential equation,
$$ \operatorname{grad} p = \eta\, \Delta u, \tag{6.1} $$
with p the pressure, u the velocity field and $\eta$ the dynamic viscosity. The Stokes equations express that the stress tensor is divergence-free. This stress tensor, say S, is a $\binom{2}{0}$-tensor field, which can be written as
$$ S = -p\, I + \eta \left( \nabla u + (\nabla u)^T \right). \tag{6.2} $$
The $\binom{2}{0}$-tensor field $\nabla u$ occurring herein (not to be confused with the covariant derivative of u) is called the velocity gradient field. But what exactly is the gradient of a velocity field, and likewise, what is the divergence of a $\binom{2}{0}$-tensor field? These differentiation operations are quite often not properly handled in the literature. In this appendix we put on the finishing touches.
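The consistency of (6.1) and (6.2) can be checked on a classical example. The sketch below is our own; plane Poiseuille flow $u = (1 - y^2,\, 0)$ with $p = -2\eta x$ is an assumed example, not from the text. It verifies with sympy that $\operatorname{div} S = 0$ and that this reproduces $\operatorname{grad} p = \eta\, \Delta u$.

```python
# A sympy check of (6.1)/(6.2) for plane Poiseuille flow (assumed example).
import sympy as sp

x, y, eta = sp.symbols('x y eta')
X = [x, y]
u = sp.Matrix([1 - y**2, 0])                                   # velocity field
p = -2 * eta * x                                               # pressure

gradu = sp.Matrix(2, 2, lambda i, j: sp.diff(u[i], X[j]))      # (grad u)_ij = du^i/dx^j
S = -p * sp.eye(2) + eta * (gradu + gradu.T)                   # stress tensor (6.2)
divS = sp.Matrix([sum(sp.diff(S[i, j], X[j]) for j in range(2)) for i in range(2)])
print(sp.simplify(divS))                                       # Matrix([[0], [0]])

lhs = sp.Matrix([sp.diff(p, v) for v in X])                    # grad p
rhs = eta * sp.Matrix([sum(sp.diff(u[i], v, 2) for v in X) for i in range(2)])
print(sp.simplify(lhs - rhs))                                  # Matrix([[0], [0]]), i.e. (6.1)
```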
Figures
2.1 Index-gymnastics 16
References
1. Abraham, R., Marsden, J. and Ratiu, T. (2001). Manifolds, Tensor Analysis and Applications. Springer Verlag.
2. van Hassel, R. (2010). Program to calculate Christoffel symbols.
3. Ivancevic, V. and Ivancevic, T. (2007). Applied Differential Geometry. World Scientific.
4. Misner, C., Thorne, K. and Wheeler, J. (1973). Gravitation. Freeman.
5. Seidel, J. (1980). Tensorrekening. Technische Hogeschool Eindhoven.
Index

0-tensor 48
(0 0)-tensor 27
(0 1)-tensor 27
(0 2)-tensor 28
(1 0)-tensor 27
(1 1)-tensor 35
(2 0)-tensor 32
(2 2)-tensor 39
3-dim. cubic matrix 38
(0 3)-tensor 38
4-dim. cubic matrix 40
Bas(V) 47
f̂ ∨ ĝ 58
f̂ ∧ ĝ 56
[i j; m] 147
K() 83
k-form 82
O(n) 25
O(p, q) 25
p̂ ⊗ q̂ 29
(r s)-tensor 42
(r s)-tensor field 81, 82
(r s)-tensor field (alternative) 84
S_k 55
SO(n) 25
SO(p, q) 25
∇/dt 143
T^r_s(V) 43
T_X(R^n) 79
û ⊗ v̂ ⊗ ŵ 38
x ⊗ y 32
x ⊗ ŷ 35
x ⊗ y ⊗ ĉ ⊗ d̂ 40

a
affine coordinates 79, 84
antisymmetrizing transformation 60
arc 110
arclength 110, 111
atlas 137
axial vector 54

b
bilinear function 28, 32, 35
binormal 112

c
canonical isomorphism 73
Cartesian coordinates 79
chart 137
chart ball 137
chart map 78, 137
Christoffel symbol of the first kind 135
Christoffel symbols 92
Christoffel symbols of the second kind 134
contraction 43
contravariant 1-tensor 13, 48
contravariant 2-tensor 49
contravariant components 5, 7, 27, 34
contravariant q-tensor 50
contravariant vector field 81
coordinates, helicoidal 109
coordinate system 78
cotangent space 80
cotangent space (manifold) 139
covariant 1-tensor 13, 48
covariant 1-tensors 9
covariant 2-tensor 49
covariant components 9, 11, 21, 28, 30, 38
covariant derivative 127, 129
covariant derivative (covector field) 95
covariant derivative (vector field) 95
covariant p-tensor 50
covariant vector field 81
covector field 81
covector (manifold) 139
covectors 9
curl 105
curvature 113
curvature tensor, Riemann-Christoffel 146
curvature vector 113
curve 138
curvilinear 78

d
d'Alembertian 104
density 87
derivative (of f) 136
differentiable (in a) 136
differential form of degree k 82
directional derivative 90
directional derivative (manifold) 140
direct orthogonal 115
divergence 95, 106
dual basis 10
dual space 9

e
Einstein tensor field 148
equivalent function (manifold) 139
equivalent (of curves in point) 138
Euclidean inner product 17
exterior derivative 98

f
first fundamental tensor field 119
formulas of Frenet 114
Frenet frame 114
Frenet trihedron 114
functional matrix 136
function (manifold) 139
functor 152
fundamental tensor 29
fundamental tensor field 85

g
geodesic 124
geodesic curvature 123
geodesic line 124
gradient 105
gradient field 90
Gram matrix 18

h
helix, circular 110, 118

o
oriented volume 65, 86
orthogonal curvilinear 86
orthogonal group 25
orthonormal basis 24
osculation plane 111

p
parameter representation 110
parameter transformation 110
parametric curve 118
parametric representation 118
parametrisation, orthogonal 120
parametrization 78
perm 58
permanent 58
permutation 55
permutation, even 55
permutation, odd 55
polar coordinates 79
principal curvature 124
principal normal 112
product tensor 44
pseudoparallel (along K) 143

r
radius of curvature 114
real number 27
reciproke basis 20
rectifying plane 112
Representation Theorem of Riesz 19
Riemannian manifold 141

s
scalar 27
scalar field 81
scale factors 86
signature 24
signature inner product 24
space curve 110
standard basis R^n 5
standard basis (R^n)* 9
surface 118
symmetric linear transformation 15
symmetrizing transformation 60

t
tangent bundle 79
tangent line 111
tangent plane 119
tangent space 79
tangent space (manifold) 139
tangent vector field 126
tangent vector (manifold) 139, 140
tensor, antisymmetric 55
tensor field (manifold) 140
tensorial 51
tensor product 44
tensor, symmetric 55
torsion 113
transformation group 25
transition maps 137

v
vector field 81