Tensor Spaces and Exterior Algebra

Recent Titles in This Series

108 Takeo Yokonuma, Tensor spaces and exterior algebra, 1992
107 B. M. Makarov, M. G. Goluzina, A. A. Lodkin, and A. N. Podkorytov, Selected problems in real analysis, 1992
106 G.-C. Wen, Conformal mappings and boundary value problems, 1992
105 D. R. Yafaev, Mathematical scattering theory: General theory, 1992
104 R. L. Dobrushin, R. Kotecky, and S. Shlosman, Wulff construction: A global shape from local interaction, 1992
103 A. K. Tsikh, Multidimensional residues and their applications, 1992
102 A. M. Il'in, Matching of asymptotic expansions of solutions of boundary value problems, 1992
101 Zhang Zhi-fen, Ding Tong-ren, Huang Wen-zao, and Dong Zhen-xi, Qualitative theory of differential equations, 1992
100 V. L. Popov, Groups, generators, syzygies, and orbits in invariant theory, 1992
99 Norio Shimakura, Partial differential operators of elliptic type, 1992
98 V. A. Vassiliev, Complements of discriminants of smooth maps: Topology and applications, 1992
97 Itiro Tamura, Topology of foliations: An introduction, 1992
96 A. I. Markushevich, Introduction to the classical theory of Abelian functions, 1992
95 Guangchang Dong, Nonlinear partial differential equations of second order, 1991
94 Yu. S. Il'yashenko, Finiteness theorems for limit cycles, 1991
93 A. T. Fomenko and A. A. Tuzhilin, Elements of the geometry and topology of minimal surfaces in three-dimensional space, 1991
92 E. M. Nikishin and V. N. Sorokin, Rational approximations and orthogonality, 1991
91 Mamoru Mimura and Hirosi Toda, Topology of Lie groups, I and II, 1991
90 S. L. Sobolev, Some applications of functional analysis in mathematical physics, third edition, 1991
89 Valerii V. Kozlov and Dmitrii V. Treshchev, Billiards: A genetic introduction to the dynamics of systems with impacts, 1991
88 A. G. Khovanskii, Fewnomials, 1991
87 Aleksandr Robertovich Kemer, Ideals of identities of associative algebras, 1991
86 V. M. Kadets and M. I. Kadets, Rearrangements of series in Banach spaces, 1991
85 Mikio Ise and Masaru Takeuchi, Lie groups I, II, 1991
84 Dao Trong Thi and A. T. Fomenko, Minimal surfaces, stratified multivarifolds, and the Plateau problem, 1991
83 N. I. Portenko, Generalized diffusion processes, 1990
82 Yasutaka Sibuya, Linear differential equations in the complex domain: Problems of analytic continuation, 1990
81 I. M. Gelfand and S. G. Gindikin, Editors, Mathematical problems of tomography, 1990
80 Junjiro Noguchi and Takushiro Ochiai, Geometric function theory in several complex variables, 1990
79 N. I. Akhiezer, Elements of the theory of elliptic functions, 1990
78 A. V. Skorokhod, Asymptotic methods of the theory of stochastic differential equations, 1989
77 V. M. Filippov, Variational principles for nonpotential operators, 1989
76 Phillip A. Griffiths, Introduction to algebraic curves, 1989
75 B. S. Kashin and A. A. Saakyan, Orthogonal series, 1989
74 V. I. Yudovich, The linearization method in hydrodynamical stability theory, 1989

(Continued in the back of this publication)

Translations of MATHEMATICAL MONOGRAPHS

Volume 108

Tensor Spaces and Exterior Algebra

Takeo Yokonuma

Translated by Takeo Yokonuma

American Mathematical Society
Providence, Rhode Island

TENSORU KUUKAN TO GAISEKIDAISUU (Tensor Spaces and Exterior Algebra) by Takeo Yokonuma. Copyright © 1977 by Takeo Yokonuma. Originally published in Japanese by Iwanami Shoten, Publishers, Tokyo. Translated from the Japanese by Takeo Yokonuma.

1991 Mathematics Subject Classification. Primary 15A69; Secondary 15A75.

ABSTRACT. This book provides an introduction to tensors and related topics. The book begins with definitions of the basic concepts of the theory: tensor products of vector spaces, tensors, tensor algebras, and exterior algebra. Their properties are then studied and applications given. Algebraic systems with bilinear multiplication are introduced in the final chapter.
In particular, the theory of replicas of Chevalley and several properties of Lie algebras that follow from this theory are presented.

Library of Congress Cataloging-in-Publication Data

Yokonuma, Takeo, 1939–
[Tensoru kuukan to gaisekidaisuu. English]
Tensor spaces and exterior algebra / Takeo Yokonuma; translated by Takeo Yokonuma.
p. cm. — (Translations of mathematical monographs, ISSN 0065-9282; 108)
Translation of: Tensoru kuukan to gaisekidaisuu.
Includes bibliographical references and index.
ISBN 0-8218-4564-0
1. Multilinear algebra. 2. Tensor products. I. Title. II. Series.
QA199.5.Y6513 1992    92-16721
512'.57—dc20    CIP

Copyright © 1992 by the American Mathematical Society. All rights reserved. The American Mathematical Society retains all rights except those granted to the United States Government. Printed in the United States of America.

Information on Copying and Reprinting can be found at the back of this volume. The paper used in this book is acid-free and falls within the guidelines established to ensure permanence and durability. This publication was typeset using AMS-TeX, the American Mathematical Society's TeX macro system.

Contents

Preface to the English Edition
Preface

Chapter I. Definition of Tensor Products
§1. Preliminaries
§2. Definition of bilinear mappings
§3. Linearization of bilinear mappings
§4. Definition of tensor products
§5. Properties of tensor products
§6. Multilinear mappings and tensor products of more than two vector spaces
§7. Tensor products of linear mappings
§8. Examples of tensor product spaces
§9. Construction of tensor products with generators and relations
§10. Tensor products of R-modules
Exercises

Chapter II. Tensors and Tensor Algebras
§1. Definition and examples of tensors
§2. Properties of tensor spaces
§3. Symmetric tensors and alternating tensors
§4. Tensor algebras and their properties
§5. Symmetric algebras and their properties
§6. Definition of relative tensors (pseudotensor, tensor density)
Exercises

Chapter III. Exterior Algebra and its Applications
§1. Definition of exterior algebra and its properties
§2. Applications to determinants
§3. Inner (interior) products of exterior algebras
§4. Applications to geometry
Exercises

Chapter IV. Algebraic Systems with Bilinear Multiplication. Lie Algebras
§1. Algebraic systems with bilinear multiplication
§2. Replicas of matrices
§3. Properties of Lie algebras
Exercises

References for the English Edition
Subject Index

Preface to the English Edition

This is a translation of my work originally published by Iwanami Shoten, as a volume in their Lecture Series on linear algebra. We assume, therefore, that readers are familiar with several fundamental concepts of linear algebra such as vector spaces, matrices, determinants, etc., though we review some of these concepts in the text. We have made some changes in the references.

On this occasion, I would like to express my gratitude to Professor Nagayoshi Iwahori for giving me encouragement and many valuable suggestions during the preparation of the original work and also to Mr. Hideo Arai of Iwanami Shoten for continued support. I am grateful to the American Mathematical Society and the staff for their effort in publishing this English edition. I also thank Kazunari Noda and Nami Yokonuma for their excellent typing.

Takeo Yokonuma
December 1991

Preface

The subject matter of the present book is generally called multilinear algebra; we shall discuss mainly tensors and related concepts. The readers may recall several important tensors that are used in differential geometry, mechanics, electromagnetics, and so on.
On the other hand, it is generally said that tensors are difficult to understand. One of the reasons for this is that formerly tensors and tensor fields (mappings whose values are tensors) were not distinguished, and tensor fields were discussed without defining tensors in advance.(¹) In fact, readers should be aware that sometimes tensor fields are simply called tensors in the literature. In any case, it is important to understand clearly what tensors are. Our purpose is to explain tensors to the readers as clearly as possible.

Tensor fields are defined by means of tensors. The analytic theory of tensor fields is known as tensor calculus. Needless to say, it is beyond the scope of linear algebra and we shall not discuss it here. The notions of tensor density and pseudotensor are also important in applications of tensors. Since they are related to representations of linear groups, we shall mention only their definitions (§II.6).

As stated above, the notion of tensor seems to have originated from what are now called tensor fields. Research has been done, since the end of the last century, with the study of vector calculus and the theory of invariants, on systems of functions which satisfy certain transformation laws. Ricci and Levi-Civita founded tensor calculus. It became well known after being used by Einstein to describe the theory of relativity. When we try to describe a physical phenomenon, we use a coordinate system. Though the description depends on the coordinate system, the phenomenon itself does not. To explain the situation, systems of functions that satisfy a certain transformation law play an important role. Depending on the type of transformation law, several kinds of tensor (field) have been defined. By the way, it seems that the word "tensor" was derived from tension. Thus, a tensor is classically defined as a system (or an array) of numbers which satisfies a certain transformation law (see §II.2).
We can consider that this system describes an "object" whose existence does not depend on coordinate systems. This point of view is sufficient for computation but it does not explain what the object is.

(¹) See R. Godement, Cours d'algèbre, Hermann, p. 269.

In order to define tensors, we begin in Chapter I by constructing tensor products of vector spaces. Then, in Chapter II, we define tensors and study their properties. In many branches of mathematics the construction of tensor products is used as a powerful tool to construct new objects from known ones. For example, we will describe the extension of a field of scalars (§I.8(b)). The tensor product of representations is another example. In Chapter III, we discuss the notion of exterior algebra. Exterior algebras are basic to the theory of differential forms. In Chapter IV, to show another aspect of the theory of tensor products, we discuss algebraic systems with bilinear multiplication. In particular, we discuss Lie algebras.

CHAPTER I

Definition of Tensor Products

In this chapter, we explain in detail the notion of tensor product of vector spaces. Using this notion, we can explicitly define tensors, as will be done in Chapter II.

The tensor product of vector spaces is a new vector space which is associated with given vector spaces in a certain natural way. (To simplify the presentation, we describe the case of two vector spaces, say $V$ and $W$. In general, we can define the tensor product of any finite number of vector spaces.) Recall the direct sum of vector spaces. This is another example of a new vector space associated with given vector spaces in a natural way. In the case of the direct sum, we can easily construct the direct sum $V \oplus W$, starting from vector spaces $V$ and $W$. However, in the case of the tensor product, the construction is not so easy to understand. So we make a detour in a sense.
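In coordinates the contrast between the two constructions is easy to preview: realizing $V = k^n$ and $W = k^m$ by arrays, the direct sum is modeled by concatenation and the tensor product by the outer product. The following sketch is an aside (NumPy and the variable names are assumptions of the illustration, not part of the text); the dimension count for the tensor product is proved later, in §5.

```python
import numpy as np

# An aside in coordinates: realize V = k^n by arrays of length n.
# The direct sum is modeled by concatenation, the tensor product
# by the outer product of coordinate vectors.
v = np.array([1.0, 2.0, 3.0])        # an element of V, n = 3
w = np.array([4.0, 5.0])             # an element of W, m = 2

direct_sum = np.concatenate([v, w])  # dimension n + m = 5
tensor = np.outer(v, w).ravel()      # dimension n * m = 6

assert direct_sum.shape == (5,)
assert tensor.shape == (6,)
```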
First, we prove (§3, Theorem 1.1) the existence and the uniqueness of a vector space which has the following property: every bilinear mapping on $V \times W$ is "linearized on this space." Then, we call this vector space a tensor product of $V$ and $W$. The proof of existence is given in such a way that we can manage the tensor product. By uniqueness, we can define the tensor product without ambiguity. To formulate the results, bilinear mappings are defined in §2 and the tensor product is defined in §4. The various properties of the tensor product are then explained in subsequent sections.

§1. Preliminaries

First we recall notions that will be used frequently in this book. Throughout this book, we assume that every field has characteristic 0. By definition a field $k$ has characteristic 0 if the sum $n \cdot 1 = 1 + \cdots + 1$ ($n$ terms) is not equal to 0 for any natural number $n$, where $1$ is the unit element of $k$. The field of rational numbers $\mathbf{Q}$, the field of real numbers $\mathbf{R}$, and the field of complex numbers $\mathbf{C}$ have characteristic 0. On the other hand, the finite field with $p$ elements does not have characteristic 0, since $p \cdot 1 = 1 + \cdots + 1$ is equal to 0.

We also assume that all vector spaces over a field $k$ are finite dimensional unless otherwise stated. We denote the dimension of $V$ by $\dim V$ or $\dim_k V$. A vector space over a field $k$ is sometimes called a $k$-vector space.

Let $V$ be a vector space over a field $k$. The dual space $V^*$ of $V$ is the $k$-vector space consisting of all linear mappings of $V$ into $k$. The sum $\varphi + \psi$ and the scalar multiple $\alpha\varphi$ for $\varphi, \psi \in V^*$, $\alpha \in k$ are defined by the following formulas:
$$(\varphi + \psi)(v) = \varphi(v) + \psi(v), \qquad (\alpha\varphi)(v) = \alpha(\varphi(v)) \qquad (v \in V).$$
Let $\mathscr{E} = (e_1, \dots, e_n)$ ($n = \dim V$) be a basis for $V$. For $i = 1, \dots, n$, consider the element $e_i^* \in V^*$ defined by $e_i^*(e_j) = \delta_{ij}$. Then the elements $e_1^*, \dots, e_n^*$ form a basis for $V^*$, which is called the dual basis of $\mathscr{E}$. In particular, we have $\dim V^* = \dim V$.
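Concretely, if the basis vectors $e_1, \dots, e_n$ are assembled as the columns of an invertible matrix $E$, then the dual basis functionals $e_1^*, \dots, e_n^*$ are represented by the rows of $E^{-1}$, since row $i$ of $E^{-1}$ paired with column $j$ of $E$ gives $\delta_{ij}$. A small numerical sketch (an aside assuming NumPy; the matrices are illustrative):

```python
import numpy as np

# Dual basis sketch: the basis vectors e_1, ..., e_n of R^3 are the
# columns of an invertible matrix E; the dual basis functionals
# e_1^*, ..., e_n^* are the rows of E^{-1}, so e_i^*(e_j) = delta_ij.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])   # columns: a basis of R^3
dual = np.linalg.inv(E)           # row i represents the functional e_i^*

# e_i^*(e_j) = (row i of E^{-1}) . (column j of E) = delta_ij:
assert np.allclose(dual @ E, np.eye(3))

# The coordinates of any v in this basis are (e_1^*(v), ..., e_n^*(v)):
v = np.array([2.0, 3.0, 4.0])
coords = dual @ v
assert np.allclose(E @ coords, v)
```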
Let $v$ be an element of $V$. Then the mapping which assigns $\varphi(v)$ to $\varphi \in V^*$ is a linear mapping of $V^*$ into $k$, which we denote by $v^\circ$. By $v \mapsto v^\circ$, to every element $v$ of $V$ there corresponds an element of $(V^*)^*$. We can verify that this correspondence is linear and bijective. Therefore $V \cong (V^*)^*$ by this canonical mapping $v \mapsto v^\circ$.

Let $V$ and $W$ be $k$-vector spaces. The set of linear mappings of $V$ into $W$ is also a vector space over $k$. This set is denoted by $\operatorname{Hom}(V, W)$. For $F, G \in \operatorname{Hom}(V, W)$, $\alpha \in k$, the sum and scalar multiplication are defined by the following formulas:
$$(F + G)(v) = F(v) + G(v), \qquad (\alpha F)(v) = \alpha(F(v)) \qquad (v \in V).$$
To emphasize the field $k$, we sometimes use the term $k$-linear mapping and the notation $\operatorname{Hom}_k(V, W)$. Let $(e_1, \dots, e_n)$ be a basis for $V$ and $(f_1, \dots, f_m)$ a basis for $W$ ($n = \dim V$, $m = \dim W$). A linear mapping $F \in \operatorname{Hom}(V, W)$ is determined by the $n$ elements $F(e_j)$ ($1 \le j \le n$).

§2. Definition of bilinear mappings

Let $V$, $W$, and $U$ be $k$-vector spaces. A mapping $\Phi\colon V \times W \to U$ which is linear in each variable is called a bilinear mapping; if $U = k$, it is called a bilinear form. For example, for fixed $\varphi \in V^*$ and $\psi \in W^*$, the mapping $V \times W \ni (v, w) \mapsto \varphi(v)\psi(w) \in k$ is a bilinear form on $V \times W$. Similarly, for fixed $(v, w) \in V \times W$, the mapping $V^* \times W^* \ni (\varphi, \psi) \mapsto \varphi(v)\psi(w)$ is a bilinear form on $V^* \times W^*$.

We denote by $\mathscr{L}(V, W; U)$ the set of bilinear mappings of $V \times W$ into $U$. Then, as in the case of linear mappings, we can easily show the following properties:

(1) For $\Phi_1, \Phi_2 \in \mathscr{L}(V, W; U)$, $\alpha, \beta \in k$, define
$$(\alpha\Phi_1 + \beta\Phi_2)(v, w) = \alpha\Phi_1(v, w) + \beta\Phi_2(v, w) \qquad (v \in V,\ w \in W).$$
Then $\alpha\Phi_1 + \beta\Phi_2 \in \mathscr{L}(V, W; U)$. With respect to this operation, $\mathscr{L}(V, W; U)$ is a vector space over $k$.

(2) Let $(e_1, \dots, e_n)$ and $(f_1, \dots, f_m)$ be bases for $V$ and $W$ respectively. Then an element $\Phi$ of $\mathscr{L}(V, W; U)$ is determined by the $mn$ elements $\Phi(e_i, f_j)$ ($1 \le i \le n$, $1 \le j \le m$).

§3. Linearization of bilinear mappings

Theorem 1.1. Let $V$ and $W$ be $k$-vector spaces. (1) There exist a $k$-vector space $U_0$ and a bilinear mapping $\iota \in \mathscr{L}(V, W; U_0)$ that have the following properties (T1) and (T2):
(T1) $U_0$ is generated by the image $\iota(V \times W)$ of $\iota$.
(T2) For any $\Phi \in \mathscr{L}(V, W; U)$, there exists a linear mapping $F\colon U_0 \to U$ such that $\Phi = F \circ \iota$.
(2) The pair $(U_0, \iota)$ is unique in the following sense: If the pairs $(U_0, \iota)$ and $(U_0', \iota')$ consisting of a $k$-vector space and a $k$-bilinear mapping satisfy conditions (T1) and (T2), then there exists a unique linear isomorphism $F_0\colon U_0 \to U_0'$ such that $F_0 \circ \iota = \iota'$.

Note. (For the reader's convenience, elementary but important facts are recalled in the notes.) Let $X$ be a vector space and $S$ a subset of $X$.
The intersection of all subspaces of $X$ containing $S$ is also a subspace containing $S$ and is the smallest among them. This space is called the subspace generated (or spanned) by $S$ and is denoted by $\langle S \rangle$. It is easy to see that $\langle S \rangle$ is the set of all finite linear combinations of elements of $S$. When $\langle S \rangle = X$, $X$ is said to be generated by $S$, and $S$ is called a set of generators (or a generating set) of $X$.

Before giving the proof of the theorem, we restate the condition on $(U_0, \iota)$. The condition (T1) is equivalent to the uniqueness of $F$ in (T2). Namely, (T1) and (T2) are equivalent to the following (T):

(T) For any $\Phi \in \mathscr{L}(V, W; U)$, there exists one and only one $k$-linear mapping $F\colon U_0 \to U$ such that $\Phi = F \circ \iota$.

Suppose that $(U_0, \iota)$ satisfies (T1) and (T2). The existence of $F$ follows from (T2). Suppose that $F$ and $F'$ are linear mappings $U_0 \to U$ such that $\Phi = F \circ \iota = F' \circ \iota$. Since $F$ and $F'$ are linear mappings that coincide on the generating set $\iota(V \times W)$ of $U_0$, we have $F = F'$, which shows that $(U_0, \iota)$ satisfies (T).

Conversely, suppose that $(U_0, \iota)$ satisfies (T). Clearly we have (T2). Let $U_0'$ be the subspace of $U_0$ generated by $\iota(V \times W)$. Since the image of $\iota$ is contained in $U_0'$, $\iota$ can be considered as a mapping of $V \times W$ into $U_0'$, which we denote by $\iota_1$. Applying (T2) to $\Phi = \iota_1$, we have a linear mapping $F\colon U_0 \to U_0'$ such that $\iota_1 = F \circ \iota$. Let $j$ be the inclusion mapping of $U_0'$ into $U_0$. Then $\iota = j \circ \iota_1$, therefore we have $\iota = j \circ \iota_1 = j \circ F \circ \iota$. On the other hand, the identity mapping $\mathrm{id}$ of $U_0$ ($\mathrm{id}(u) = u$ for $u \in U_0$) clearly satisfies $\mathrm{id} \circ \iota = \iota$, which shows that $F = \mathrm{id}$ satisfies (T) for $\Phi = \iota$. Since $(j \circ F) \circ \iota = \iota$, we have $j \circ F = \mathrm{id}$ by the uniqueness of $F$ in (T). In particular, it follows that $j$ is surjective and $U_0' = U_0$, which implies (T1).

Proof of uniqueness in Theorem 1.1. We first prove (2), which is easier than (1). Assume that $(U_0, \iota)$ and $(U_0', \iota')$ have the property mentioned in (1). Since $\iota'$ is a bilinear mapping $V \times W \to U_0'$, applying (T) to $(U_0, \iota)$ we have a linear mapping $F_0\colon U_0 \to U_0'$ such that $F_0 \circ \iota = \iota'$.
Similarly, we have a linear mapping $G_0\colon U_0' \to U_0$ such that $G_0 \circ \iota' = \iota$. Then $\iota = G_0 \circ \iota' = (G_0 \circ F_0) \circ \iota$. For the identity mapping $\mathrm{id}$ of $U_0$, clearly we have $\mathrm{id} \circ \iota = \iota$. From the uniqueness in (T), $G_0 \circ F_0 = \mathrm{id}$. Similarly, for the identity mapping $\mathrm{id}'$ of $U_0'$, we have $F_0 \circ G_0 = \mathrm{id}'$, which shows that $F_0$ is an isomorphism. The uniqueness of $F_0$ follows from the uniqueness in (T). □

Before proving existence, we are going to analyze what conditions are necessary for $U_0$. The relation $\Phi = F \circ \iota$ implies that the image $\Phi(V \times W)$ of $\Phi$ is contained in $F(U_0)$, which shows that the subspace $\langle \Phi(V \times W) \rangle$ generated by $\Phi(V \times W)$ is contained in $F(U_0)$. Set $\dim V = n$, $\dim W = m$ and take bases $\mathscr{E} = (e_1, \dots, e_n)$ and $\mathscr{F} = (f_1, \dots, f_m)$ for $V$ and $W$ respectively. By property (2) of a bilinear mapping, we can choose the $nm$ elements $\Phi(e_i, f_j)$ arbitrarily. Therefore we can choose $\Phi$ and $U$ in such a way that $\{\Phi(e_i, f_j) \mid 1 \le i \le n,\ 1 \le j \le m\}$ is linearly independent, so that $\dim\langle \Phi(V \times W) \rangle \ge nm$. Since $\dim U_0 \ge \dim F(U_0)$, we must have $\dim U_0 \ge nm$. In fact, we shall prove that any vector space of dimension $nm$ satisfies the conditions (T1) and (T2) for an appropriate $\iota$.

Proof of existence in Theorem 1.1. Let $U_0$ be a vector space of dimension $nm$. Take a basis $\mathscr{G} = (g_1, \dots, g_{nm})$ of $U_0$. Arrange the elements of $\mathscr{G}$ in $n$ rows where each row has $m$ elements, and write $g_{(i,j)}$ for the element in the $i$th row and $j$th position of that row ($1 \le i \le n$, $1 \le j \le m$). Let $(e_1^*, \dots, e_n^*)$ and $(f_1^*, \dots, f_m^*)$ be the dual bases of $\mathscr{E}$ and $\mathscr{F}$, and define $\iota\colon V \times W \to U_0$ by
$$\iota(v, w) = \sum_{i,j} e_i^*(v) f_j^*(w)\, g_{(i,j)}.$$
Then $\iota$ is bilinear and $\iota(e_i, f_j) = g_{(i,j)}$, so the image of $\iota$ contains the basis $\mathscr{G}$, from which we obtain condition (T1). For every $\Phi \in \mathscr{L}(V, W; U)$, define a linear mapping $F_\Phi\colon U_0 \to U$ by
$$F_\Phi\Bigl(\sum_{i,j} \eta_{ij}\, g_{(i,j)}\Bigr) = \sum_{i,j} \eta_{ij}\, \Phi(e_i, f_j).$$
Then $\Phi = F_\Phi \circ \iota$, which implies condition (T2). □

§4. Definition of tensor products

The pair $(U_0, \iota)$ described in Theorem 1.1 is called a tensor product of $V$ and $W$; we write $U_0 = V \otimes W$ and $\iota(v, w) = v \otimes w$ (cf. Definition 1.4 below for the general case). Thus we can say that the tensor product $V \otimes W$ is simply $\mathscr{L}(V^*, W^*; k)$ and that, in this situation, $v \otimes w$ is equal to the bilinear mapping $\iota(v, w)$.

§5. Properties of tensor products

We begin with properties which can be easily obtained.
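In coordinates, the canonical mapping $\iota$ of the existence proof sends $(v, w)$ to the array of products $\bigl(e_i^*(v) f_j^*(w)\bigr)$, i.e., to the outer product of the coordinate vectors. The sketch below (an aside assuming NumPy; the names and data are illustrative) checks bilinearity in the first argument and the factorization property (T2) for a bilinear form:

```python
import numpy as np

# In the standard bases, iota(v, w) is the outer product of coordinates.
def iota(v, w):
    return np.outer(v, w)

v1, v2 = np.array([1.0, 2.0]), np.array([0.0, 3.0])
w = np.array([5.0, -1.0, 2.0])
a, b = 2.0, -3.0

# Bilinearity: iota(a v1 + b v2, w) = a iota(v1, w) + b iota(v2, w).
assert np.allclose(iota(a*v1 + b*v2, w), a*iota(v1, w) + b*iota(v2, w))

# (T2) concretely: a bilinear form Phi(v, w) = v^T M w factors through
# iota as the linear functional F(X) = sum(M * X) on the nm-dim space.
M = np.array([[1.0, 0.0, 2.0],
              [0.0, -1.0, 1.0]])
Phi = lambda v, w: v @ M @ w
F = lambda X: np.sum(M * X)        # linear in X
assert np.isclose(Phi(v1, w), F(iota(v1, w)))
```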
Rewriting the bilinearity of the canonical mapping $\iota$, we have the following proposition.

Proposition 1.1 (bilinearity of $\otimes$). For $\alpha, \beta \in k$, $v, v_1, v_2 \in V$, $w, w_1, w_2 \in W$, we have
$$(\alpha v_1 + \beta v_2) \otimes w = \alpha(v_1 \otimes w) + \beta(v_2 \otimes w),$$
$$v \otimes (\alpha w_1 + \beta w_2) = \alpha(v \otimes w_1) + \beta(v \otimes w_2).$$

From the proof of existence of tensor products (§3) we obtain

Proposition 1.2. Let $(e_1, \dots, e_n)$ be a basis for $V$ and $(f_1, \dots, f_m)$ a basis for $W$. Then the $mn$ elements $e_i \otimes f_j$ ($1 \le i \le n$, $1 \le j \le m$) form a basis for $V \otimes W$. In particular, $\dim(V \otimes W) = \dim V \cdot \dim W$.

Corollary. Let $v_1, \dots, v_r$ be linearly independent elements of $V$ and $w_1, \dots, w_r, w_1', \dots, w_r' \in W$. If $\sum_{i=1}^r (v_i \otimes w_i) = \sum_{i=1}^r (v_i \otimes w_i')$, then $w_i = w_i'$ for all $i$.

Proof. Take a basis for $V$ that contains the linearly independent elements $v_1, \dots, v_r$. Since $\sum_{i=1}^r (v_i \otimes w_i) = \sum_{i=1}^r (v_i \otimes w_i')$, we have $\sum_{i=1}^r \{v_i \otimes (w_i - w_i')\} = 0$ by Proposition 1.1. Take any basis $(f_1, \dots, f_m)$ for $W$ and set $w_i - w_i' = \sum_j \beta_{ij} f_j$. Then
$$0 = \sum_{i=1}^r v_i \otimes (w_i - w_i') = \sum_{i=1}^r v_i \otimes \Bigl(\sum_{j=1}^m \beta_{ij} f_j\Bigr) = \sum_{i=1}^r \sum_{j=1}^m \beta_{ij}(v_i \otimes f_j).$$
Since $(v_i \otimes f_j)$ is part of a basis for $V \otimes W$, this implies $\beta_{ij} = 0$ for all $i$ and $j$, i.e., $w_i = w_i'$. □

By property (T) of the tensor product, for any $\Phi \in \mathscr{L}(V, W; U)$ there exists a unique $F \in \operatorname{Hom}(V \otimes W, U)$ such that $\Phi = F \circ \iota$. This correspondence gives the next proposition.

Proposition 1.3. As $k$-vector spaces, $\mathscr{L}(V, W; U) \cong \operatorname{Hom}(V \otimes W, U)$.

Proof. For the proof of this assertion, we give the inverse of the correspondence above and show that it is a linear isomorphism. For $F \in \operatorname{Hom}(V \otimes W, U)$, define $\Phi_F = F \circ \iota$. Then $\Phi_F \in \mathscr{L}(V, W; U)$. Property (T) implies that the mapping $F \mapsto \Phi_F$ is bijective. The fact that this map is linear, namely, $\Phi_{\alpha F} = \alpha\Phi_F$ and $\Phi_{F+F'} = \Phi_F + \Phi_{F'}$ for $F, F' \in \operatorname{Hom}(V \otimes W, U)$, $\alpha \in k$, follows easily from the definition of $\Phi_F$. □

Recall that the dual space of $V^*$ is canonically isomorphic to $V$ (§1). Thus we have the

Corollary. $(V \otimes W)^* \cong V^* \otimes W^*$. The element $F$ of $(V \otimes W)^*$ corresponding to $\varphi \otimes \psi \in V^* \otimes W^*$ is given by $F(v \otimes w) = \varphi(v)\psi(w)$ ($v \in V$, $w \in W$).

Proof. $(V \otimes W)^* = \operatorname{Hom}(V \otimes W, k) \cong \mathscr{L}(V, W; k) \cong V^* \otimes W^*$. Let us examine these isomorphisms. The second one is given by $V^* \otimes W^* \ni \varphi \otimes \psi \mapsto \Phi \in \mathscr{L}(V, W; k)$, where $\Phi(v, w) = \varphi(v)\psi(w)$ (§4). The first isomorphism is given by $\mathscr{L}(V, W; k) \ni \Phi \mapsto F \in (V \otimes W)^*$ such that $\Phi = F \circ \iota$. Thus, if $F$ is the element corresponding to $\varphi \otimes \psi$,
$$F(v \otimes w) = F(\iota(v, w)) = \Phi(v, w) = \varphi(v)\psi(w). \qquad \square$$

Proposition 1.4. By the correspondence ($\alpha \otimes v \mapsto \alpha v$), where $\alpha \in k$, $v \in V$, we have $k \otimes V \cong V$.
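As a quick computational aside, the dimension count of Proposition 1.2 can be checked numerically: the $mn$ flattened outer products of standard basis vectors are linearly independent. (The sketch assumes NumPy; the construction is illustrative only.)

```python
import numpy as np

# The mn arrays e_i (x) f_j = outer(e_i, f_j), flattened, are linearly
# independent, so dim(V (x) W) = dim V * dim W  (Proposition 1.2).
n, m = 3, 2
basis = [np.outer(np.eye(n)[i], np.eye(m)[j]).ravel()
         for i in range(n) for j in range(m)]
mat = np.stack(basis)     # (mn) x (mn) matrix whose rows are the e_i (x) f_j
assert np.linalg.matrix_rank(mat) == n * m
```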
Remark. The proposition means that the given correspondence extends to a linear isomorphism between the two vector spaces. Notice that the correspondence does not depend on the choice of bases. The same remark applies to Proposition 1.5.

Proof. It is sufficient to show that the pair consisting of $V$ and the bilinear mapping $\iota\colon k \times V \to V$, $\iota(\alpha, v) = \alpha v$, satisfies the conditions (T1) and (T2) (for $\mathscr{L}(k, V; U)$). It then follows that $(V, \iota)$ is a tensor product of $k$ and $V$. Therefore, by Theorem 1.1(2), we have a linear isomorphism $F_0\colon k \otimes V \to V$ such that $F_0(\alpha \otimes v) = \iota(\alpha, v) = \alpha v$. Clearly $(V, \iota)$ satisfies (T1). Consider (T2). For $\Phi \in \mathscr{L}(k, V; U)$, define a mapping $F\colon V \to U$ by $F(v) = \Phi(1, v)$ ($v \in V$). Then $F \in \operatorname{Hom}(V, U)$ and we have $F \circ \iota = \Phi$, since $F \circ \iota(\alpha, v) = F(\alpha v) = \Phi(1, \alpha v) = \alpha\Phi(1, v) = \Phi(\alpha, v)$. □

Proposition 1.5 (commutativity of the tensor product). By the correspondence ($v \otimes w \mapsto w \otimes v$), we have $V \otimes W \cong W \otimes V$.

Proof. By (T2) (for $V \otimes W$), the bilinear mapping $V \times W \to W \otimes V$, $(v, w) \mapsto w \otimes v$, induces a linear mapping
$$F\colon V \otimes W \to W \otimes V, \qquad F(v \otimes w) = w \otimes v.$$
Similarly we have a linear mapping
$$G\colon W \otimes V \to V \otimes W, \qquad G(w \otimes v) = v \otimes w.$$
Since $F \circ G = \mathrm{id}$ and $G \circ F = \mathrm{id}$, $F$ is an isomorphism. □

With respect to the tensor product of subspaces, we have the following result.

Proposition 1.6 (tensor product of subspaces). Let $V, W$ be $k$-vector spaces and $V_1, W_1$ be their subspaces respectively. Let $(V \otimes W, \iota)$ be a tensor product of $V$ and $W$. Let $\iota_1 = \iota|V_1 \times W_1$ be the restriction of $\iota$ to $V_1 \times W_1$ and $U_1$ be the subspace of $V \otimes W$ generated by $\iota(V_1 \times W_1)$. Then the pair $(U_1, \iota_1)$ is a tensor product of $V_1$ and $W_1$.

Proof. We will show that $(U_1, \iota_1)$ satisfies the conditions (T1) and (T2) for the tensor product of $V_1$ and $W_1$. (T1) follows from the definition. To prove (T2), choose $\Phi_1 \in \mathscr{L}(V_1, W_1; U)$. Then there exists $\Phi \in \mathscr{L}(V, W; U)$ such that $\Phi|V_1 \times W_1 = \Phi_1$. In fact, let $(e_1, \dots, e_{n_1})$ ($n_1 = \dim V_1$) and $(f_1, \dots, f_{m_1})$ ($m_1 = \dim W_1$) be bases of $V_1$ and $W_1$ respectively.
Extend them to bases $(e_1, \dots, e_n)$, $(f_1, \dots, f_m)$ of $V$ and $W$ ($n = \dim V$, $m = \dim W$) and define $\Phi \in \mathscr{L}(V, W; U)$ by
$$\Phi(e_i, f_j) = \begin{cases} \Phi_1(e_i, f_j) & \text{if } 1 \le i \le n_1 \text{ and } 1 \le j \le m_1, \\ 0 & \text{otherwise.} \end{cases}$$
Then $\Phi|V_1 \times W_1 = \Phi_1$. By (T2) for $(V \otimes W, \iota)$, there is a linear mapping $F\colon V \otimes W \to U$ with $\Phi = F \circ \iota$; restricting $F$ to $U_1$, we obtain $\Phi_1 = (F|U_1) \circ \iota_1$, which proves (T2). □

§6. Multilinear mappings and tensor products of more than two vector spaces

Let $V_1, \dots, V_n$ and $U$ be $k$-vector spaces. A mapping $\Phi\colon V_1 \times \cdots \times V_n \to U$ which is linear in each variable is called an $n$-multilinear mapping; if $U = k$, it is called a multilinear form. For example, for $\varphi_i \in V_i^*$ ($i = 1, \dots, n$), define $\Phi\colon V_1 \times \cdots \times V_n \to k$ by $\Phi(u_1, \dots, u_n) = \varphi_1(u_1) \cdots \varphi_n(u_n)$. Then $\Phi$ is a multilinear form.

The set of multilinear mappings of $V_1 \times \cdots \times V_n$ into $U$ is denoted by $\mathscr{L}(V_1, \dots, V_n; U)$. If $n = 2$, we have a bilinear mapping, which we have already discussed. The properties (1), (2), and (3) of $\mathscr{L}(V, W; U)$ mentioned in §2 extend easily to $\mathscr{L}(V_1, \dots, V_n; U)$. With respect to the linearization explained in §3, we have the following theorem.

Theorem 1.1′. Let $V_1, \dots, V_n$ be $k$-vector spaces. There exist a $k$-vector space $U_0$ and an $n$-multilinear mapping $\iota \in \mathscr{L}(V_1, \dots, V_n; U_0)$ that have the following properties (T1) and (T2):
(T1) $U_0$ is generated by the image $\iota(V_1 \times \cdots \times V_n)$ of $\iota$.
(T2) For any $\Phi \in \mathscr{L}(V_1, \dots, V_n; U)$, there exists a linear mapping $F\colon U_0 \to U$ such that $\Phi = F \circ \iota$.

The proof is similar to the case $n = 2$. Check it.

Definition 1.4. The pair $(U_0, \iota)$ described in Theorem 1.1′ is called a tensor product of $V_1, \dots, V_n$. We write $U_0 = V_1 \otimes \cdots \otimes V_n$ and $\iota(v_1, \dots, v_n) = v_1 \otimes \cdots \otimes v_n$ ($v_i \in V_i$).

For $n = 3$, we have the following proposition.

Proposition 1.7. The correspondence ($v_1 \otimes v_2 \otimes v_3 \mapsto (v_1 \otimes v_2) \otimes v_3$) gives an isomorphism
$$V_1 \otimes V_2 \otimes V_3 \cong (V_1 \otimes V_2) \otimes V_3.$$

Proof. Consider the multilinear mapping $V_1 \times V_2 \times V_3 \to (V_1 \otimes V_2) \otimes V_3$ given by $(v_1, v_2, v_3) \mapsto (v_1 \otimes v_2) \otimes v_3$. From (T2) this mapping induces a linear mapping
$$F\colon V_1 \otimes V_2 \otimes V_3 \to (V_1 \otimes V_2) \otimes V_3, \qquad F(v_1 \otimes v_2 \otimes v_3) = (v_1 \otimes v_2) \otimes v_3.$$
Now, let us construct the inverse mapping of $F$. Fix an element $v \in V_3$ and consider the bilinear mapping $\Phi_v\colon V_1 \times V_2 \to V_1 \otimes V_2 \otimes V_3$ defined by $\Phi_v(v_1, v_2) = v_1 \otimes v_2 \otimes v$ ($v_1 \in V_1$, $v_2 \in V_2$). By (T2) applied to $V_1 \otimes V_2$, $\Phi_v$ induces a linear mapping
$$G_v\colon V_1 \otimes V_2 \to V_1 \otimes V_2 \otimes V_3, \qquad G_v(v_1 \otimes v_2) = v_1 \otimes v_2 \otimes v \qquad (v \in V_3).$$
For $v, v' \in V_3$, $\alpha \in k$, we have
$$G_{v+v'} = G_v + G_{v'}, \qquad G_{\alpha v} = \alpha G_v,$$
since $G_v$ is uniquely determined by $\Phi_v$.
Using these facts, we define a bilinear mapping
$$\Psi\colon (V_1 \otimes V_2) \times V_3 \to V_1 \otimes V_2 \otimes V_3, \qquad \Psi(x, v) = G_v(x) \qquad (x \in V_1 \otimes V_2,\ v \in V_3).$$
Then $\Psi$ induces a linear mapping
$$G\colon (V_1 \otimes V_2) \otimes V_3 \to V_1 \otimes V_2 \otimes V_3, \qquad G(x \otimes v) = G_v(x).$$
By the definition, we have $G((v_1 \otimes v_2) \otimes v_3) = G_{v_3}(v_1 \otimes v_2) = v_1 \otimes v_2 \otimes v_3$. Thus
$$G \circ F = \text{the identity mapping of } V_1 \otimes V_2 \otimes V_3, \qquad F \circ G = \text{the identity mapping of } (V_1 \otimes V_2) \otimes V_3.$$
Thus, $G$ is the inverse mapping of $F$ and $F$ is a linear isomorphism. □

Similarly,

Corollary. The correspondence ($v_1 \otimes v_2 \otimes v_3 \mapsto v_1 \otimes (v_2 \otimes v_3)$) gives an isomorphism $V_1 \otimes V_2 \otimes V_3 \cong V_1 \otimes (V_2 \otimes V_3)$.

Combining these results, we have

Proposition 1.8 (associativity of the tensor product). The correspondence ($(v_1 \otimes v_2) \otimes v_3 \mapsto v_1 \otimes (v_2 \otimes v_3)$) gives an isomorphism
$$(V_1 \otimes V_2) \otimes V_3 \cong V_1 \otimes (V_2 \otimes V_3).$$

In the following, we identify $V_1 \otimes V_2 \otimes V_3$, $(V_1 \otimes V_2) \otimes V_3$, and $V_1 \otimes (V_2 \otimes V_3)$ by these correspondences. In general, we can show by induction on $n$ that, for any $n$ vector spaces $V_1, V_2, \dots, V_n$, a tensor product of these vector spaces (in this order) is naturally isomorphic to $V_1 \otimes \cdots \otimes V_n$, no matter how parentheses are inserted. For example, in the case $n = 4$,
$$((V_1 \otimes V_2) \otimes V_3) \otimes V_4 \cong (V_1 \otimes V_2) \otimes (V_3 \otimes V_4) \cong (V_1 \otimes (V_2 \otimes V_3)) \otimes V_4 \cong V_1 \otimes ((V_2 \otimes V_3) \otimes V_4) \cong V_1 \otimes (V_2 \otimes (V_3 \otimes V_4)) \cong V_1 \otimes V_2 \otimes V_3 \otimes V_4.$$
We identify these spaces hereafter.

As in the case $n = 2$, we have the following results.

Proposition 1.1′ (multilinearity of $\otimes$). Let $\alpha, \beta \in k$ and $v_i, v_i' \in V_i$. For any $i$ ($1 \le i \le n$),
$$v_1 \otimes \cdots \otimes (\alpha v_i + \beta v_i') \otimes \cdots \otimes v_n = \alpha(v_1 \otimes \cdots \otimes v_i \otimes \cdots \otimes v_n) + \beta(v_1 \otimes \cdots \otimes v_i' \otimes \cdots \otimes v_n).$$

§7. Tensor products of linear mappings

Let $F_i \in \operatorname{Hom}(V_i, W_i)$ ($i = 1, 2$). The bilinear mapping $V_1 \times V_2 \to W_1 \otimes W_2$, $(v_1, v_2) \mapsto F_1(v_1) \otimes F_2(v_2)$, induces a linear mapping $F_1 \otimes F_2\colon V_1 \otimes V_2 \to W_1 \otimes W_2$ with $(F_1 \otimes F_2)(v_1 \otimes v_2) = F_1(v_1) \otimes F_2(v_2)$, called the tensor product of $F_1$ and $F_2$. The tensor product $F_1 \otimes \cdots \otimes F_n$ of linear mappings $F_i\colon V_i \to W_i$ ($i = 1, \dots, n$) is defined similarly.

Next, we compute the matrix of $F_1 \otimes F_2$, using the matrices for $F_1$ and $F_2$ with respect to bases for $V_i$ and $W_i$. Let $(e_i)$, $(e_j')$, $(f_h)$, and $(f_l')$ be bases of $V_1$, $V_2$, $W_1$, and $W_2$ respectively. Then $(e_i \otimes e_j')$ and $(f_h \otimes f_l')$ are bases of $V_1 \otimes V_2$ and $W_1 \otimes W_2$ (Proposition 1.2). Let $A = (\alpha_{hi})$ and $B = (\beta_{lj})$ be the matrices for $F_1$ and $F_2$ with respect to the bases $(e_i)$, $(f_h)$ and $(e_j')$, $(f_l')$. Namely,
$$F_1(e_i) = \sum_h \alpha_{hi} f_h, \qquad F_2(e_j') = \sum_l \beta_{lj} f_l'.$$
Thus, for all $i$ and $j$, we have
$$(F_1 \otimes F_2)(e_i \otimes e_j') = F_1(e_i) \otimes F_2(e_j') = \sum_{h,l} \alpha_{hi}\beta_{lj}\, f_h \otimes f_l'.$$
When we arrange the bases $(e_i \otimes e_j')$ and $(f_h \otimes f_l')$ in lexicographic order (§5), the matrix of $F_1 \otimes F_2$ is given as follows:
$$\begin{pmatrix} \alpha_{11}B & \alpha_{12}B & \cdots & \alpha_{1n}B \\ \vdots & & & \vdots \\ \alpha_{m1}B & \alpha_{m2}B & \cdots & \alpha_{mn}B \end{pmatrix},$$
where $\dim V_1 = n$ and $\dim W_1 = m$.
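NumPy's `np.kron` realizes exactly this block matrix, with the basis vectors $e_i \otimes e_j'$ ordered lexicographically. A quick check (an aside; the matrices are illustrative):

```python
import numpy as np

# np.kron(A, B) is the block matrix (alpha_hi * B) computed above.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 5.0],
              [6.0, 7.0]])
K = np.kron(A, B)

# Block (h, i) of K is alpha_hi * B:
assert np.allclose(K[0:2, 0:2], A[0, 0] * B)
assert np.allclose(K[2:4, 2:4], A[1, 1] * B)

# Action on decomposable tensors: (F1 (x) F2)(v (x) w) = F1(v) (x) F2(w),
# where v (x) w is flattened in the same lexicographic order:
v, w = np.array([1.0, -2.0]), np.array([3.0, 1.0])
assert np.allclose(K @ np.kron(v, w), np.kron(A @ v, B @ w))
```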
and (f,@ ff ) in lexicographical order (85), the matrix of F, @ F, is given as follows: (F, @ Fle, @e,, e, @e,,...) HOA AH) fk nb By 41 Boy ROA Ahr) [enB a B -- a4,B peep Ont ‘min where dim, =n, dimW, =m. In general, we have the following definition. 16 I, DEFINITION OF TENSOR PRODUCTS, DEFINITION 1.5. Let A =(a,;) and B= (B;;) be matrices. The matrix OB a B --- 0,8 a,,B Om B fe OnnB is called the tensor product (or Kronecker product) of A and B. It is denoted by A@B. If A isan mxn matrix and B isan m' xn’ matrix, A@B is an mm’ x nn’ matrix. According to this definition, the matrix of F, ® F, with respect to the bases above is the tensor product of the matrices of F, and F,. EXAMPLE 1,3. Consider the case where F,(e;) = 6,,f,, Fy(e}) = 6,,f, - Then we have , (F, © Fe; @ 6) = 5,5; ©F) 6, j = Kronecker’s symbol). Namely, the tensor product of matrix units é ) E,, and E,, is given by E,, ® E,, = Ey. where n' = dimV,, m’ = dim W,. The following formulas for the tensor product of matrices are obtained by rewriting Proposition 1.9 and Definition 1.5. —1)m! +r, (g—l)n' +s? Proposition 1.9’. Let A, be mxn matrices, B, be m' x n' matrices; let C, bean nx1 matrix and D, be an nx’ matrix. Then we have A, @(B, +B,) = 4, @B, +4, @B,, (A, + 4,)@B, =A, @B,+4,@B,, (04,)@ B, = 4,@(0B,)=a(4,@B,) (ach), (4, @B,) ='4,@'B,, A,C, @B,D, = (A, @B,)(C, ®@D,). Corotary. If A and B are regular matrices, then A@B is regular, and we have ; (A4@B)'=4' eB". This follows from the last formula in Proposition 1.9’ . It is an interesting question how certain properties of matrices A and B transfer to properties of A @ B. For instance, Proposition 1.10. Let A be an nxn matrix whose eigenvalues are O15-..50, and let B bean mxm matrix whose eigenvalues are B,,..., B+ Then the eigenvalues of A®@B are a8; (i=1,...,0, j=1,...,m). °) The matrix whose (i, j) component is | with all other components equal to 0 is denoted by E;. These matrices are called matrix units. §7. 
Proof. Considering an extension field of $k$, if necessary, we may assume that $A$ is similar to an upper triangular matrix with diagonal elements $\alpha_1, \dots, \alpha_n$ and that $B$ is similar to an upper triangular matrix with diagonal elements $\beta_1, \dots, \beta_m$. Namely, there exist regular matrices $S$ and $T$ such that
$$S^{-1}AS = \begin{pmatrix} \alpha_1 & & * \\ & \ddots & \\ 0 & & \alpha_n \end{pmatrix}, \qquad T^{-1}BT = \begin{pmatrix} \beta_1 & & * \\ & \ddots & \\ 0 & & \beta_m \end{pmatrix}.$$
From the corollary to Proposition 1.9′, $S \otimes T$ is regular and $(S \otimes T)^{-1} = S^{-1} \otimes T^{-1}$. Hence
$$(S \otimes T)^{-1}(A \otimes B)(S \otimes T) = (S^{-1}AS) \otimes (T^{-1}BT)$$
is an upper triangular matrix with diagonal elements $\alpha_i\beta_j$, which proves the proposition. □

Corollary. Let $A$ be an $n \times n$ matrix and $B$ an $m \times m$ matrix. The trace $\operatorname{tr}(A \otimes B)$ and the determinant $\det(A \otimes B)$ of $A \otimes B$ are given by
$$\operatorname{tr}(A \otimes B) = \operatorname{tr}(A) \cdot \operatorname{tr}(B), \qquad \det(A \otimes B) = (\det A)^m \cdot (\det B)^n.$$

Proof. Let $X$ be a square matrix whose eigenvalues are $\eta_1, \dots, \eta_r$. Then $\operatorname{tr}(X) = \eta_1 + \cdots + \eta_r$ and $\det(X) = \eta_1\eta_2 \cdots \eta_r$. The results follow easily from these formulas. □

Exercise 1. Let $A$ and $B$ be matrices. Assume that $\operatorname{rank}(A) = r_1$ and $\operatorname{rank}(B) = r_2$. Compute the rank $\operatorname{rank}(A \otimes B)$ of $A \otimes B$.

Exercise 2. Let $A_1, \dots, A_r$ be square matrices. Prove that, if $A_1 \otimes \cdots \otimes A_r$ is equal to a scalar matrix $\alpha I_n$, then each $A_i$ is a scalar matrix $A_i = \alpha_i I_{n_i}$ and $\alpha = \alpha_1 \cdots \alpha_r$.

§8. Examples of tensor product spaces

(a) The space of linear mappings. Let $V$ and $W$ be $k$-vector spaces. The $k$-vector space $\operatorname{Hom}(V, W)$ consisting of all linear mappings of $V$ into $W$ can be regarded as a tensor product space. Let $\varphi \in V^*$, $w \in W$. For $(\varphi, w) \in V^* \times W$, we define an element $F_{\varphi, w} \in \operatorname{Hom}(V, W)$ as follows:
$$F_{\varphi, w}(v) = \varphi(v)w \qquad (v \in V).$$
It is easy to see that $F_{\varphi, w}$ is an element of $\operatorname{Hom}(V, W)$.

Proposition 1.11. The correspondence ($\varphi \otimes w \mapsto F_{\varphi, w}$) gives an isomorphism between $V^* \otimes W$ and $\operatorname{Hom}(V, W)$.

Proof. The mapping $\Phi\colon V^* \times W \to \operatorname{Hom}(V, W)$, which assigns $F_{\varphi, w}$ to $(\varphi, w) \in V^* \times W$, is clearly bilinear. Therefore, by condition (T2) applied to $V^* \otimes W$, there exists a linear mapping $F\colon V^* \otimes W \to \operatorname{Hom}(V, W)$ such that $F \circ \iota = \Phi$. It suffices to show that $F$ is an isomorphism. First, we will prove that $F$ is one-to-one.
Since $F$ is linear, it suffices to deduce $t = 0$ from $F(t) = 0$ for $t \in V^* \otimes W$. Suppose $F(t) = 0$ for $t \in V^* \otimes W$. $t$ can be written in the form $t = \sum_{i=1}^r (\varphi_i \otimes w_i)$, where $\varphi_i \in V^*$, $w_i \in W$. We can assume that $\{w_1, \dots, w_r\}$ is linearly independent (cf. §5). Then
$$F(t) = F\Bigl(\sum_{i=1}^r (\varphi_i \otimes w_i)\Bigr) = \sum_{i=1}^r F(\varphi_i \otimes w_i) = \sum_{i=1}^r F_{\varphi_i, w_i}.$$
$F(t) = 0$ implies $F(t)(v) = \sum_i F_{\varphi_i, w_i}(v) = \sum_i \varphi_i(v)w_i = 0$ for all $v \in V$. Since $\{w_1, \dots, w_r\}$ is linearly independent, $\varphi_i(v) = 0$, $i = 1, \dots, r$, for all $v \in V$, which implies $\varphi_i = 0$, namely, $t = 0$.

To show that $F$ is surjective, set $\dim V = n$ and $\dim W = m$. Take a basis $(e_i)$ for $V$, the dual basis $(e_i^*)$ of $(e_i)$, and a basis $(f_j)$ for $W$. Then
$$F_{e_i^*, f_j}(e_l) = e_i^*(e_l)f_j = \delta_{il}f_j.$$
Therefore the $F_{e_i^*, f_j}$ ($1 \le i \le n$, $1 \le j \le m$) form a basis of $\operatorname{Hom}(V, W)$, and $F$ is surjective. □

Lemma 1.2. Let $U$ be a $k$-vector space and $i \in \mathscr{L}(V, W; U)$ a bilinear mapping such that (1) $\dim U = \dim V \cdot \dim W$ and (2) $U$ is generated by the image of $i$. Then $(U, i)$ is a tensor product of $V$ and $W$.

Proof. By (T2) there exists a linear mapping $F\colon V \otimes W \to U$ such that $i = F \circ \iota$. Since the image of $i$ is contained in that of $F$, by condition (2) above, the image of $F$ is equal to $U$, i.e., $F$ is surjective. Condition (1) implies that $\dim(V \otimes W) = \dim U$. Thus $F$ is an isomorphism and, by construction, $i = F \circ \iota$. □

Corollary to Proposition 1.11. If $W = V$ in Proposition 1.11, we have
$$\operatorname{Hom}(V, V) \cong V^* \otimes V.$$

With regard to Lemma 1.2 we make a remark about the tensor product of linear mappings. Let $F_i \in \operatorname{Hom}(V_i, W_i)$, $i = 1, 2$. Then $F_1 \otimes F_2 \in \operatorname{Hom}(V_1 \otimes V_2, W_1 \otimes W_2)$ is defined in §7, and we have a bilinear mapping
$$\operatorname{Hom}(V_1, W_1) \times \operatorname{Hom}(V_2, W_2) \to \operatorname{Hom}(V_1 \otimes V_2, W_1 \otimes W_2), \qquad (F_1, F_2) \mapsto F_1 \otimes F_2.$$
Using Lemma 1.2, we have
$$\operatorname{Hom}(V_1, W_1) \otimes \operatorname{Hom}(V_2, W_2) \cong \operatorname{Hom}(V_1 \otimes V_2, W_1 \otimes W_2).$$
Here, condition (2) is verified by the computation in Example 1.3. Thus we can regard the tensor product of linear mappings defined in §7 as a tensor product of the $k$-vector spaces $\operatorname{Hom}(V_i, W_i)$, $i = 1, 2$. In particular, we can apply the results on the tensor product of vector spaces to the tensor product of linear mappings and of matrices.

Consider the case where $W_1 = W_2 = k$ in the above formula. Since $W_1 \otimes W_2 = k$, we get again
$$V_1^* \otimes V_2^* \cong (V_1 \otimes V_2)^*,$$
which is the corollary to Proposition 1.3.

(b) Extension of the field of scalars. Let $V$ be a $k$-vector space and let $k_0$ be a subfield of $k$.
Then V can be regarded naturally as a vector space over k_0. This procedure is called restriction of the field of scalars of V to k_0. The k_0-vector space V is denoted by V_{k_0}. Note that the dimension of V_{k_0} over k_0 is given by

dim_{k_0} V_{k_0} = nd,

where d = [k : k_0] = the degree of the extension = the dimension of k as a vector space over k_0, and n = dim_k V = the dimension of V over k.

Now, let k_1 be an extension field of k, i.e., k_1 is a field which contains k as a subfield. Then, using the tensor product, we can naturally construct a vector space over k_1 from V. The field k_1 can be regarded as a vector space over k, and we assume that [k_1 : k] is finite, since we have been discussing only finite-dimensional vector spaces. But in fact this assumption is not necessary, as we explain later (§9). An example of such a k_1 and k is given by k_1 = ℂ (the field of complex numbers) and k = ℝ (the field of real numbers), where [k_1 : k] = 2.

Now, construct a tensor product k_1⊗V, where k_1 is regarded as a k-vector space. Then k_1⊗V is a k-vector space; in particular, addition is defined. For an element ξ of k_1 and an element x of k_1⊗V, we define the scalar multiplication ξx as follows. For ξ ∈ k_1, the bilinear mapping k_1×V → k_1⊗V, (η, v) ↦ (ξη)⊗v, induces a linear transformation L_ξ of k_1⊗V with L_ξ(η⊗v) = (ξη)⊗v, and we set ξx = L_ξ(x). With this scalar multiplication, k_1⊗V becomes a vector space over k_1, which we denote by V^{k_1}.

Proposition 1.12. (1) The mapping F_0: V → (V^{k_1})_k (= the restriction of the field of scalars of V^{k_1} to k), which assigns 1⊗v ∈ (V^{k_1})_k to v ∈ V, is a one-to-one k-linear mapping.
(2) V^{k_1} is generated by F_0(V) as a vector space over k_1. In particular, if (e_1, …, e_n) is a (k-)basis of the k-vector space V, then (1⊗e_1, …, 1⊗e_n) is a (k_1-)basis of the k_1-vector space V^{k_1}. Thus we have dim_{k_1} V^{k_1} = dim_k V.

Proof. (1) Note that (V^{k_1})_k is nothing but k_1⊗V as a k-vector space. For v, v′ ∈ V and ξ ∈ k, clearly F_0(v+v′) = F_0(v) + F_0(v′) and

F_0(ξv) = 1⊗ξv = ξ⊗v = ξ(1⊗v) = ξF_0(v),

which shows that F_0 is a k-linear mapping. Let v ∈ Ker F_0 (= the kernel of F_0); namely, F_0(v) = 1⊗v = 0, which implies v = 0 (cf. Corollary to Proposition 1.2). Thus F_0 is one-to-one.
(2) Let x = Σ_i (ξ_i⊗v_i) ∈ V^{k_1} = k_1⊗V (ξ_i ∈ k_1, v_i ∈ V). Then

Σ_i (ξ_i⊗v_i) = Σ_i ξ_i(1⊗v_i) = Σ_i ξ_i F_0(v_i).

This implies that V^{k_1} is generated by F_0(V).
Any element of F_0(V) is a linear combination of 1⊗e_1, …, 1⊗e_n with coefficients in k, and F_0(V) generates the k_1-vector space V^{k_1}. Therefore 1⊗e_1, …, 1⊗e_n generate V^{k_1}. Next, we show that these elements are linearly independent over k_1. Suppose that

Σ_i ξ_i(1⊗e_i) = 0,

where ξ_i ∈ k_1. Then 0 = Σ_i ξ_i(1⊗e_i) = Σ_i (ξ_i⊗e_i). Here Σ_i (ξ_i⊗e_i) is an element of the k-vector space k_1⊗V and {e_1, …, e_n} is linearly independent over k. Therefore, by Lemma 1.1, each ξ_i = 0. □

Identifying V with the image of F_0 in V^{k_1}, we can regard V ⊂ V^{k_1}.

We also define the extension of linear mappings.

Proposition 1.13. Let V, W be k-vector spaces and F a k-linear mapping of V into W. Then there exists a unique k_1-linear mapping F̃ of V^{k_1} into W^{k_1} such that the restriction F̃|V of F̃ to V is equal to F, where we regard V^{k_1} ⊃ V and W^{k_1} ⊃ W.

Proof. Let id be the identity mapping of k_1, which we can consider as a k-linear mapping of the k-vector space k_1 into itself. By taking the tensor product of linear mappings, we have a k-linear mapping id⊗F: k_1⊗V → k_1⊗W. We will prove that id⊗F is a k_1-linear mapping when k_1⊗V and k_1⊗W are regarded as the k_1-vector spaces V^{k_1} and W^{k_1} respectively. It suffices to show (id⊗F)(ξx) = ξ((id⊗F)(x)) for all ξ ∈ k_1, x ∈ k_1⊗V; namely, for all ξ ∈ k_1,

(id⊗F)∘L_ξ = L_ξ∘(id⊗F).

(The L_ξ on the left side is a mapping of k_1⊗V and the L_ξ on the right side is a mapping of k_1⊗W.) Now, for α⊗v ∈ k_1⊗V, we have

((id⊗F)∘L_ξ)(α⊗v) = (id⊗F)(ξα⊗v) = ξα⊗F(v),
(L_ξ∘(id⊗F))(α⊗v) = L_ξ(α⊗F(v)) = ξα⊗F(v).

Since the set {α⊗v | α ∈ k_1, v ∈ V} generates k_1⊗V as a k-vector space, these formulas show that the k-linear mappings (id⊗F)∘L_ξ and L_ξ∘(id⊗F) coincide on k_1⊗V. Thus F̃ = id⊗F is a k_1-linear mapping and satisfies the condition.

For the uniqueness, suppose that the k_1-linear mappings F̃ and F̃′ satisfy the condition F̃|V = F̃′|V. Since F_0(V) generates the k_1-vector space V^{k_1}, and both F̃ and F̃′ are k_1-linear, it follows that F̃ = F̃′.
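Proposition 1.13 can be made concrete in the case k = ℝ, k_1 = ℂ. The sketch below (an illustration, not the book's construction) models V^ℂ as a space of complex column vectors: the extension F̃ = id⊗F of a real-linear map F simply applies the same real matrix to complex vectors, and is ℂ-linear.

```python
import numpy as np

# A real-linear map F: R^2 -> R^3 given by a real matrix; its extension
# F^C: C^2 -> C^3 acts by the SAME matrix, now on complex vectors.
F = np.array([[1.0, 2.0],
              [0.0, -1.0],
              [3.0, 1.0]])
z = np.array([1 + 2j, -1j])           # element of (R^2)^C = C^2

# C-linearity of the extension: F^C(alpha z) = alpha F^C(z) for alpha in C.
alpha = 2 - 1j
assert np.allclose(F @ (alpha * z), alpha * (F @ z))

# Restriction to V recovers F: on real vectors, F^C agrees with F.
x = np.array([3.0, 4.0])
assert np.allclose(F @ x.astype(complex), F @ x)
```

This also illustrates the remark following Definition 1.6: the matrix of F^{k_1} with respect to the bases (1⊗e_i), (1⊗f_j) equals the matrix of F.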
□

Definition 1.6. The k_1-linear mapping F̃ obtained from Proposition 1.13 is called the extension of F to k_1 and is denoted by F^{k_1}.

Let (e_i) and (f_j) be bases for V and W respectively. Then (1⊗e_i) and (1⊗f_j) are bases for V^{k_1} and W^{k_1}. The matrix of F^{k_1} with respect to these bases is the same as that of F with respect to (e_i) and (f_j). This follows easily from the construction.

With regard to extensions of mappings we have the following more general result.

Proposition 1.14. Let V be a k-vector space, W a k_1-vector space, and G a k-linear mapping of V into W_k. Then there exists a unique k_1-linear mapping G̃ of V^{k_1} into W such that G̃|V = G (where G̃ is regarded as a k-linear mapping).

Proof. Define a mapping Φ: k_1×V → W_k which assigns ξG(v) ∈ W_k to (ξ, v) ∈ k_1×V. Obviously Φ is bilinear. Therefore, by property (T2) of the tensor product k_1⊗V, Φ induces a k-linear mapping G̃: k_1⊗V → W_k such that

G̃(ξ⊗v) = Φ(ξ, v) = ξG(v).

We will prove that G̃ is a k_1-linear mapping. For ξ′ ∈ k_1,

(G̃∘L_{ξ′})(ξ⊗v) = G̃(ξ′ξ⊗v) = ξ′ξG(v) = ξ′G̃(ξ⊗v) = (L_{ξ′}∘G̃)(ξ⊗v)  (ξ⊗v ∈ k_1⊗V).

By the same argument as for Proposition 1.13, we have, for all x ∈ k_1⊗V, G̃(ξ′x) = ξ′G̃(x). The uniqueness of G̃ is obvious. □

Using this proposition we can prove the following theorem.

Theorem 1.3. Let Ṽ be a k_1-vector space and let F_1 be a k-linear mapping of V into Ṽ_k. Assume they satisfy the following conditions (E1) and (E2):
(E1) Ṽ is generated by F_1(V) as a k_1-vector space.
(E2) For any k_1-vector space W and any k-linear mapping G ∈ Hom(V, W_k), there exists a k_1-linear mapping G̃ ∈ Hom(Ṽ, W) such that G̃∘F_1 = G, where G̃ is regarded as a k-linear mapping.
Then there exists a k_1-linear isomorphism κ of V^{k_1} onto Ṽ such that κ∘F_0 = F_1, where κ is regarded as a k-linear mapping.

Proof. By (E1) we know that the G̃ given in (E2) is unique. By Proposition 1.14, there is a k_1-linear mapping κ: V^{k_1} → Ṽ such that κ∘F_0 = F_1. It suffices to show that κ is an isomorphism.
Given condition (E2) and the uniqueness of G̃ in (E2), we can use the same argument as in the proof of the uniqueness of the tensor product (cf. §3, Theorem 1.1). □

This theorem characterizes the extension V^{k_1} of the field of scalars. In other words, for a k-vector space V and an extension field k_1, we can define without ambiguity the extension V^{k_1} as a pair (Ṽ, F_1) consisting of a k_1-vector space Ṽ and a k-linear mapping F_1: V → Ṽ_k satisfying the conditions (E1) and (E2).

Example 1.4. Let V be a vector space over ℝ and let V^ℂ be the extension of the field of scalars of V to ℂ. Choose a basis (e_1, …, e_n) of V and the basis (1, √−1) of the vector space ℂ over ℝ. Every element of V^ℂ can be uniquely expressed as a linear combination of the 1⊗e_i and √−1⊗e_i (1 ≤ i ≤ n):

Σ_i α_i(1⊗e_i) + Σ_i β_i(√−1⊗e_i)  (α_i, β_i ∈ ℝ).

Since F_0(Σ_i α_i e_i) = Σ_i α_i(1⊗e_i), we can write Σ_i α_i(1⊗e_i) = Σ_i α_i e_i, identifying V ⊂ V^ℂ. Since √−1⊗e_i = √−1·(1⊗e_i), we can also write Σ_i β_i(√−1⊗e_i) = √−1(Σ_i β_i(1⊗e_i)) = √−1(Σ_i β_i e_i). Thus every element in V^ℂ admits an expression of the form v_1 + √−1 v_2, where v_i ∈ V, and it is easy to see that this expression is unique. For x = v_1 + √−1 v_2 (v_i ∈ V), define x̄ = v_1 − √−1 v_2. Then we have

(x + x′)‾ = x̄ + x̄′,  (ξx)‾ = ξ̄ x̄,  x̿ = x,

for x, x′ ∈ V^ℂ and ξ ∈ ℂ.

§9. Construction of tensor products with generators and relations

In this section we shall examine a third method of constructing tensor products. Though this method is not so easy to understand, it is theoretically interesting, since it can be widely applied. First, we describe a k-vector space generated by a set A, where A is an arbitrary nonempty (finite or infinite) set. Consider a k-vector space 𝔉(A) which has a basis {e(a) | a ∈ A} such that, between the elements of A and the basis elements, there is a bijective correspondence e(a) ↔ a (a ∈ A). dim 𝔉(A) is equal to the cardinality of A; therefore, it is not necessarily finite. 𝔉(A) is called a vector space generated by A.
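The conjugation rules of Example 1.4 can be checked in coordinates. The sketch below (an illustration; V is modelled as ℝ³ and V^ℂ as ℂ³) writes x = v_1 + √−1 v_2 and verifies the listed properties of x ↦ x̄.

```python
import numpy as np

# An element of V^C is x = v1 + i v2 with v1, v2 in V; conjugation
# sends it to v1 - i v2.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, -1.0, 3.0])
x = v1 + 1j * v2
xp = np.array([2.0, 1.0, 1.0]) + 1j * np.array([1.0, 1.0, 0.0])
xi = 1 - 3j

assert np.allclose(np.conj(x + xp), np.conj(x) + np.conj(xp))  # (x+x')bar = xbar + x'bar
assert np.allclose(np.conj(xi * x), np.conj(xi) * np.conj(x))  # (xi x)bar = xibar xbar
assert np.allclose(np.conj(np.conj(x)), x)                     # double conjugation
assert np.allclose(x + np.conj(x), 2 * v1)                     # x + xbar lies in V
```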
Namely, 𝔉(A) is the set of all finite linear combinations of the e(a) (a ∈ A) with coefficients in k:

Σ_{a∈A} α(a)e(a)  (α(a) ∈ k).

Here, "finite" means that the number of elements a ∈ A with α(a) ≠ 0 is finite. Two linear combinations Σ_{a∈A} α(a)e(a) and Σ_{a∈A} β(a)e(a) are equal if and only if α(a) = β(a) for every a ∈ A. Addition and scalar multiplication are defined as follows:

Σ_{a∈A} α(a)e(a) + Σ_{a∈A} β(a)e(a) = Σ_{a∈A} (α(a)+β(a))e(a),
γ Σ_{a∈A} α(a)e(a) = Σ_{a∈A} (γα(a))e(a)  (γ ∈ k).

There is another way to define 𝔉(A). The set ℱ of all k-valued functions defined on A is a k-vector space with respect to the usual addition and scalar multiplication. The subset consisting of all f ∈ ℱ such that f(a) = 0 except for a finite number of elements of A is a subspace. From the correspondence

𝔉(A) ∋ Σ_{a∈A} α(a)e(a) ↦ f,  where f(a) = α(a) (a ∈ A),

it follows that 𝔉(A) is isomorphic to this subspace as a k-vector space.

Consider the mapping e: A → 𝔉(A) which assigns e(a_0) to a_0 ∈ A, where e(a_0) is the basis element (= Σ_{a∈A} α(a)e(a) with α(a_0) = 1 and α(a) = 0 for a ≠ a_0). Then the pair (𝔉(A), e) has the following universality property.

Proposition 1.15. Let W be a k-vector space and p a mapping of A into W. Then there exists a unique k-linear mapping F of 𝔉(A) into W such that F∘e = p.

Proof. Define a mapping F of 𝔉(A) into W by F(Σ_{a∈A} α(a)e(a)) = Σ_{a∈A} α(a)p(a). Obviously F is linear and F∘e = p. □

Now, let us construct a tensor product of k-vector spaces V and W. Take the direct product set V×W (= {(v, w) | v ∈ V, w ∈ W}) as the A in the discussion above and construct a k-vector space 𝔉(V×W). To simplify the notation, we denote e((v, w)) by e(v, w) for (v, w) ∈ V×W, and the terms with α(v, w) = 0 in a sum Σ α(v, w)e(v, w) are omitted. We also write 1·e(v, w) = e(v, w).
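The free vector space 𝔉(A) and the universal map of Proposition 1.15 are easy to model directly. The following minimal sketch (hypothetical helper names; k = ℝ, elements of 𝔉(A) stored as finitely supported dicts) implements e, addition, scalar multiplication, and the linear map F with F∘e = p.

```python
from collections import defaultdict

def e(a):
    """Basis element e(a) of F(A), as a finitely supported dict a -> coeff."""
    return {a: 1.0}

def add(x, y):
    """Sum of two finite linear combinations."""
    out = defaultdict(float)
    for d in (x, y):
        for a, c in d.items():
            out[a] += c
    return {a: c for a, c in out.items() if c != 0}

def scale(c, x):
    """Scalar multiple of a finite linear combination."""
    return {a: c * v for a, v in x.items()} if c != 0 else {}

def universal_map(p, x):
    """Proposition 1.15: F(sum alpha(a) e(a)) = sum alpha(a) p(a)."""
    return sum(c * p(a) for a, c in x.items())

# Check F o e = p and linearity on a sample: A = strings, p = len (into R).
p = len
x = add(scale(2.0, e("ab")), e("xyz"))        # 2 e("ab") + e("xyz")
assert universal_map(p, e("ab")) == p("ab")   # F o e = p
assert universal_map(p, x) == 2.0 * p("ab") + 1.0 * p("xyz")
```

The same dictionary model works for A = V×W in the construction that follows; only the quotient by the bilinearity relations needs additional machinery.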
Consider the subspace X of 𝔉(V×W) generated by the following elements:

e(v_1+v_2, w) − e(v_1, w) − e(v_2, w),
e(v, w_1+w_2) − e(v, w_1) − e(v, w_2),
e(αv, w) − αe(v, w),
e(v, αw) − αe(v, w)
(v_1, v_2, v ∈ V, w_1, w_2, w ∈ W, α ∈ k).

We consider these elements in order to obtain bilinearity of the product ⊗. Denote by U_0 the factor space of 𝔉(V×W) by X, and let π be the canonical mapping of 𝔉(V×W) onto U_0. From the definition of X, it follows that, for instance,

π(e(v_1+v_2, w)) − π(e(v_1, w)) − π(e(v_2, w)) = 0.

Let ι_0 = π∘e, where e: V×W → 𝔉(V×W) is given by (v, w) ↦ e(v, w).

Theorem 1.4. The pair (U_0, ι_0) constructed above is a tensor product of V and W.

Proof. By the definition of X, ι_0 is a bilinear mapping of V×W into U_0. We shall show that the pair (U_0, ι_0) satisfies the conditions (T1) and (T2) in Theorem 1.1. Since {e(v, w) | (v, w) ∈ V×W} generates 𝔉(V×W), U_0 is generated by {ι_0(v, w) | (v, w) ∈ V×W}, which proves (T1). To prove (T2), let Φ be a bilinear mapping of V×W into a vector space U. By Proposition 1.15, there is a linear mapping F: 𝔉(V×W) → U such that F(e(v, w)) = Φ(v, w) ((v, w) ∈ V×W). Since Φ is bilinear, all generators of X are contained in the kernel Ker F of F. For example,

F(e(v_1+v_2, w) − e(v_1, w) − e(v_2, w))
 = F(e(v_1+v_2, w)) − F(e(v_1, w)) − F(e(v_2, w))  (linearity of F)
 = Φ(v_1+v_2, w) − Φ(v_1, w) − Φ(v_2, w) = 0  (bilinearity of Φ).

Since Ker F is a subspace, X is contained in Ker F. Therefore F induces a linear mapping F̃ of the factor space U_0 = 𝔉(V×W)/X into U such that F = F̃∘π. Then we have F̃∘ι_0 = F̃∘π∘e = F∘e = Φ. □

Remark. In the construction above of (U_0, ι_0), it is not necessary to assume that V and W are finite dimensional. Thus, we can construct a k-vector space satisfying (T1) and (T2) for arbitrary k-vector spaces V and W.
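The generators of X list exactly the failures of bilinearity that the quotient kills. In the familiar coordinate model of V⊗W by Kronecker products (an illustration, not the book's quotient construction), every generator maps to 0 — that is, `kron` is bilinear:

```python
import numpy as np

# Each assertion corresponds to one family of generators of X mapping to 0
# under e(v, w) -> kron(v, w).
rng = np.random.default_rng(5)
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)
w1, w2 = rng.standard_normal(2), rng.standard_normal(2)
a = 1.7

assert np.allclose(np.kron(v1 + v2, w1), np.kron(v1, w1) + np.kron(v2, w1))
assert np.allclose(np.kron(v1, w1 + w2), np.kron(v1, w1) + np.kron(v1, w2))
assert np.allclose(np.kron(a * v1, w1), a * np.kron(v1, w1))
assert np.allclose(np.kron(v1, a * w1), a * np.kron(v1, w1))
```

By Theorem 1.1 (uniqueness), this model and the quotient 𝔉(V×W)/X are isomorphic as tensor products.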
In particular, in the discussion of §8(b), Extension of the field of scalars, the arguments there are still valid without the assumption that [k_1 : k] is finite. Thus, for any extension field k_1 of k, we can construct the extension V^{k_1}.

Note. Let V be a k-vector space (not necessarily finite dimensional) and let W be a subspace of V. Then the cosets v+W of W in V (v ∈ V) form a k-vector space, which is called the factor space V/W of V by W. Addition and scalar multiplication in V/W are defined as follows: for v, v′ ∈ V, α ∈ k,

(v+W) + (v′+W) = (v+v′)+W,  α(v+W) = αv+W.

The mapping π of V onto V/W defined by π(v) = v+W is linear and is called the canonical mapping. The pair (V/W, π) has the following universal property: let U be a vector space and f a linear mapping of V into U such that Ker f contains W. Then there exists a unique linear mapping f̃: V/W → U such that f = f̃∘π.

§10. Tensor products of R-modules

Modules over a ring are a generalization of the concept of vector spaces. The definition of tensor product generalizes, and we present a theorem generalizing Theorem 1.1. In this section we shall discuss this generalization. These results can be easily obtained as a variation of the construction explained in §9. We do not go into the details, since the further properties of a tensor product of modules generally depend on the properties of the rings.

First we give several definitions.

Definition 1.7. Let P be a set. P is called a module if there is a rule which associates with every (ordered) pair m_1, m_2 of elements of P a third element m_1+m_2 of P, which is called the sum of m_1 and m_2, and if the following conditions are satisfied:

For all m_1, m_2, m_3 ∈ P, (m_1+m_2)+m_3 = m_1+(m_2+m_3) (associativity).
There exists an element 0 ∈ P such that, for all m ∈ P, m+0 = 0+m = m.
For each m ∈ P, there exists an element m′ ∈ P such that m+m′ = m′+m = 0. (m′ is denoted by −m.)
For all m_1, m_2 ∈ P, m_1+m_2 = m_2+m_1 (commutativity).

Definition 1.8. Let R be a set.
R is called a ring if there are two rules which associate with each ordered pair r_1, r_2 of R their sum r_1+r_2 ∈ R and product r_1r_2 ∈ R, and if the following conditions are satisfied:

R is a module with respect to addition.
For all r_1, r_2, r_3 ∈ R,
(r_1r_2)r_3 = r_1(r_2r_3) (associativity),
(r_1+r_2)r_3 = r_1r_3 + r_2r_3,  r_1(r_2+r_3) = r_1r_2 + r_1r_3 (distributivity).

Let R be a ring. If there is an element 1 such that, for all r ∈ R, r1 = 1r = r, then the element 1 is called the unit element of R.

Remark. Precisely speaking, what we call rules in the definitions are mappings of product sets. For example, the sum in a module P is a mapping s: P×P → P, and we denote the image s(m_1, m_2) ((m_1, m_2) ∈ P×P) by m_1+m_2. Similarly, the sum and the product in a ring R are also mappings of R×R into R. The conditions on these operations can be expressed as conditions on these mappings. For instance, associativity in a module P is expressed as "for all m_1, m_2, m_3 ∈ P, we have s(s(m_1, m_2), m_3) = s(m_1, s(m_2, m_3))," and commutativity is "for all m_1, m_2 ∈ P, we have s(m_1, m_2) = s(m_2, m_1)." In a ring R, let s and m be the mappings which give the sum and product respectively. Then associativity is expressed in a similar fashion, and the distributivity conditions are "for all r_1, r_2, r_3 ∈ R, we have m(s(r_1, r_2), r_3) = s(m(r_1, r_3), m(r_2, r_3)) and m(r_1, s(r_2, r_3)) = s(m(r_1, r_2), m(r_1, r_3))."

In this section R denotes an arbitrary ring with the unit element 1. As a generalization of vector spaces we define left R-modules and right R-modules.

Definition 1.9. Let M be a module. M is called a left R-module if there is a rule which associates with every r ∈ R and m ∈ M an element rm of M and if the following conditions are satisfied (in other words, there exists a mapping R×M → M, (r, m) ↦ rm, such that):

For all r_1, r_2, r ∈ R and m_1, m_2, m ∈ M,
r(m_1+m_2) = rm_1 + rm_2,
(r_1+r_2)m = r_1m + r_2m,
(r_1r_2)m = r_1(r_2m),
1m = m.
Similarly, a module M is called a right R-module if there is a mapping M×R → M, (m, r) ↦ mr, such that:

For all r_1, r_2, r ∈ R and m_1, m_2, m ∈ M,
(m_1+m_2)r = m_1r + m_2r,
m(r_1+r_2) = mr_1 + mr_2,
m(r_1r_2) = (mr_1)r_2,
m1 = m.

The difference between a left R-module and a right one consists in the order of operation of the product element r_1r_2 ∈ R on m ∈ M. In the case of a left module, the above condition shows that r_2 operates first and then r_1 operates. For a right module, the order is reversed. To show this concretely, we write r on the left or the right of m.

Example 1.5. If R is a field k, a left k-module (or a right k-module) is a k-vector space.

Example 1.6. If a ring R is commutative (namely, if we have r_1r_2 = r_2r_1 for all r_1, r_2 ∈ R), it is not necessary to distinguish between left and right R-modules. In this case, we call them simply R-modules (cf. Example 1.5). In fact, let M be a left R-module. Define a mapping M×R → M by mr = rm. (The right side has meaning, since M is a left R-module.) Then, with this mapping, M becomes a right R-module. For instance,

m(r_1r_2) = (r_1r_2)m  (definition)
 = (r_2r_1)m  (commutativity of R)
 = r_2(r_1m)  (M is a left R-module)
 = r_2(mr_1)  (definition)
 = (mr_1)r_2  (definition).

Example 1.7. Let R = ℤ (the ring of integers) and let M be a module. For n ∈ ℕ (the set of natural numbers), define

nm = m + ⋯ + m (n terms),  (−n)m = −(nm),  0m = 0.

From this definition, it follows that any module can be regarded as a ℤ-module.

As a generalization of a basis for a vector space, we will define a free module and a basis for it. Here, we describe the case of a left R-module M, but all steps can be applied to right modules. A subset {m_λ} of M is called a set of generators (or a generating set) if every element of M can be expressed as a linear combination of a finite number of the m_λ with coefficients in R. In this case, we also say that M is generated by {m_λ}. A subset {m_λ} of M is R-free if there do not exist
nontrivial linear relations with coefficients in R; namely, whenever we have a linear relation, for m_{λ_j} ∈ {m_λ},

r_1 m_{λ_1} + ⋯ + r_s m_{λ_s} = 0,

then r_j = 0 for all j. An R-free set of generators of M is called an R-basis of M, and an R-module M which has an R-basis is called a free R-module. As we know, a k-vector space (i.e., the case when R is a field k) always has bases. But, for an arbitrary ring, that is not the case.

As a generalization of bilinear mappings, we define the concept of a balanced mapping.

Definition 1.10. Let M be a right R-module, N a left R-module, and P a module. A mapping Φ: M×N → P is called an R-balanced mapping if the following conditions are satisfied: for all m, m_1, m_2 ∈ M, n, n_1, n_2 ∈ N, r ∈ R, we have

Φ(m_1+m_2, n) = Φ(m_1, n) + Φ(m_2, n),
Φ(m, n_1+n_2) = Φ(m, n_1) + Φ(m, n_2),
Φ(m, rn) = Φ(mr, n).

The existence theorem for tensor products (Theorem 1.1) generalizes as follows.

Theorem 1.5. Let M be a right R-module and N a left R-module. Then:
(1) There exist a module P_0 and an R-balanced mapping ι: M×N → P_0 which satisfy the following conditions (T1) and (T2).
(T1) P_0 is generated by ι(M×N), where P_0 is regarded as a ℤ-module.
(T2) For every R-balanced mapping Φ: M×N → P, there exists a module homomorphism F: P_0 → P such that Φ = F∘ι.
(2) The pair (P_0, ι) is unique in the following sense: if the pairs (P_0, ι) and (P_0′, ι′), each consisting of a module and an R-balanced mapping, satisfy the conditions (T1) and (T2), then there exists a unique module isomorphism F_0: P_0 → P_0′ such that F_0∘ι = ι′.

This theorem can be proved in a way similar to the case of k-vector spaces. Part (2) is proved by the argument in §3. For part (1), we can use the method described in §9. Let X be a free ℤ-module which has a ℤ-basis (e(m, n)), where e(m, n) corresponds bijectively to (m, n) ∈ M×N. Let Y be the ℤ-submodule of X generated by the following elements:

e(m_1+m_2, n) − e(m_1, n) − e(m_2, n),
e(m, n_1+n_2) − e(m, n_1) − e(m, n_2),
e(m, rn) − e(mr, n)
(m, m_1, m_2 ∈ M, n, n_1, n_2 ∈ N, r ∈ R).
Then consider the factor module P_0 of X by Y and the canonical homomorphism π: X → P_0 which assigns x+Y to x ∈ X. Define ι: M×N → P_0 by ι(m, n) = π(e(m, n)). We can prove that the pair (P_0, ι) has the required properties.

Note. Let P and P′ be modules. A mapping F: P → P′ is called a module homomorphism if F(x+y) = F(x)+F(y) for all x, y ∈ P. If a module homomorphism F is one-to-one and onto, F is called a module isomorphism. A nonempty subset P′ of a module P is called a submodule of P if we have x+y ∈ P′ and −x ∈ P′ whenever x and y are elements of P′. For a subset S of P, the intersection of all submodules of P containing S is also a submodule which contains S; it is called the submodule generated by S. Let P be a module and P′ a submodule of P. We can define a factor module P/P′ in a way similar to the case of vector spaces.

Definition 1.11. The pair (P_0, ι) (or P_0) is called a tensor product of M and N. We denote P_0 by M⊗_R N and ι(m, n) = m⊗n.

Let S be a ring, and assume that M is not only a right R-module but also a left S-module. Moreover, we assume that the operations of R and S on M are commutative (C); namely, we have, for all m ∈ M, s ∈ S, r ∈ R,

(sm)r = s(mr).

Then M is called an (S, R)-bimodule. We can prove that M⊗_R N is a left S-module. As in the case of extension of the field of scalars (§8(b)), for a fixed element s of S, define a mapping

M×N → M⊗_R N,  (m, n) ↦ sm⊗n.

This mapping is an R-balanced mapping. (To check it, we use the commutativity (C) of the operations of R and S on M.) Thus there exists a module homomorphism of M⊗_R N,

L_s: M⊗_R N → M⊗_R N,  L_s(m⊗n) = sm⊗n,

and we can define sx = L_s(x) for x ∈ M⊗_R N. By this operation, M⊗_R N is a left S-module. In particular, when R is a field k, since k is commutative, a k-vector space is a (k, k)-bimodule and the tensor product of k-vector spaces is a special case of the discussion here.
Exercises

In Exercises 1 and 2, let V = ℝ³ and W = ℝ² be the real vector spaces consisting of all column vectors of dimension 3 and 2 respectively. Take the basis ℰ of V and the basis ℱ of W consisting of the unit vectors:

ℰ = (e_1, e_2, e_3),  ℱ = (f_1, f_2).

1. Let x be the element of V⊗W given by

x = ᵗ(−1, 2, 3) ⊗ ᵗ(1, …).

(1) Express x as a linear combination of the basis elements (e_i⊗f_j).
(2) Define e′_i (i = 1, 2, 3) and f′_j (j = 1, 2) as follows:

e′_1 = e_1,  e′_2 = e_1+e_2,  e′_3 = e_1+e_2+e_3;  f′_1 = f_1−f_2,  f′_2 = f_1+f_2.

Compute the matrix of the change of bases from (e_i⊗f_j) to (e′_i⊗f′_j).
(3) Compute the components of x with respect to the basis (e′_i⊗f′_j).
(4) Is it possible to express the following element y as a sum with 1 or 2 terms of the form v⊗w (v ∈ V, w ∈ W)?

y = …

Note. Let X be a k-vector space and let 𝒳 = (x_i) and 𝒳′ = (x′_i) be bases of X. The matrix (p_{ij}) given by

x′_j = Σ_i p_{ij} x_i

is called the matrix of the change of bases from (x_i) to (x′_i).

2. (1) If we assign (φ(e_1), φ(e_2), φ(e_3)) to φ ∈ V*, we have a mapping T of V* into the real vector space consisting of all 3-dimensional row vectors with real coefficients. Verify that T is a linear isomorphism. Compute the row vectors corresponding, under T, to the dual basis elements of ℰ′ = (e′_1, e′_2, e′_3) defined in 1.(2). Using T, we will identify V* with the space of 3-dimensional row vectors.
(2) Use the isomorphism V*⊗W ≅ Hom(V, W) given by Proposition 1.11 to compute the matrix, with respect to the bases ℰ and ℱ, of the linear mapping corresponding to the element … ⊗ … .
(3) By the same isomorphism as above, compute the element of V*⊗W corresponding to the linear mapping whose matrix with respect to the bases ℰ and ℱ is

( …  …  …
  2   3   4 ).

In the following exercises, V and W denote k-vector spaces.

3. Let ℰ = (e_i), ℰ′ = (e′_i) be bases for V and let (α_{pq}) be the matrix of the change of bases from ℰ to ℰ′. Also, let ℱ = (f_j), ℱ′ = (f′_j) be bases of W and let (β_{pq}) be the matrix of the change of bases from ℱ to ℱ′.
Namely, we have

e′_i = Σ_p α_{pi} e_p  (i = 1, …, dim V),  f′_j = Σ_p β_{pj} f_p  (j = 1, …, dim W).

Compute the matrix of the change of bases from (e_p⊗f_q) to (e′_p⊗f′_q), where the basis elements are arranged in lexicographical order.

4. Let u be a nonzero element of V⊗W. Write u in the form u = Σ_{i=1}^r (v_i⊗w_i) (v_i ∈ V, w_i ∈ W). Show that, if {v_1, …, v_r} or {w_1, …, w_r} is linearly dependent, we can decrease the number of terms in the expression, i.e., we have another expression u = Σ_{i=1}^{r′} (v′_i⊗w′_i) (v′_i ∈ V, w′_i ∈ W, r′ < r). Therefore, if, in the expression u = Σ_{i=1}^r (v_i⊗w_i), the number r of terms is minimal, then {v_1, …, v_r} and {w_1, …, w_r} are linearly independent.

5. By the isomorphism Hom(V, W) ≅ V*⊗W given by Proposition 1.11, let t_F be the element of V*⊗W corresponding to F ∈ Hom(V, W). Show that, if rank F is equal to r, then t_F can be written in the form t_F = Σ_{i=1}^r (φ_i⊗w_i) (φ_i ∈ V*, w_i ∈ W) with r terms, and t_F cannot be expressed as a sum of fewer than r terms.

6. (Tensor product of factor spaces.) Let V_1 (resp. W_1) be a subspace of V (resp. W). By Proposition 1.6, V_1⊗W and V⊗W_1 can be regarded as subspaces of V⊗W. Let X be the subspace of V⊗W generated by V_1⊗W and V⊗W_1. Let U_0 be the factor space (V⊗W)/X, and π: V⊗W → U_0 the canonical mapping.
(1) Prove that, for (v+V_1, w+W_1) ∈ (V/V_1)×(W/W_1), the element π(v⊗w) is independent of the choice of coset representatives v and w, and that, by this correspondence, we can define a bilinear mapping ι_0: (V/V_1)×(W/W_1) → U_0.
(2) Prove that (U_0, ι_0) is a tensor product of V/V_1 and W/W_1.

7. Let F ∈ Hom(V⊗W, V⊗W). Assume that F∘(id_V⊗G) = (id_V⊗G)∘F for any G ∈ Hom(W, W). Prove that there exists an element F_0 ∈ Hom(V, V) such that F = F_0⊗id_W, where id_V (resp. id_W) is the identity mapping of V (resp. W).

CHAPTER II

Tensors and Tensor Algebras

In this chapter we define tensors. Tensors are elements of vector spaces that are obtained by taking tensor products of several copies of a k-vector space V and of its dual space V*.
When we define tensors this way, we can show that their components satisfy the transformation law, which gives the classical definition of tensors. Next, we define symmetric tensors and alternating (skew-symmetric) tensors, and discuss their properties. The tensor algebra, which possesses a certain universality, is also defined.

§1. Definition and examples of tensors

Let V be a k-vector space and let V* be the dual space of V. Let T be a vector space which is obtained as the tensor product of several copies of V and of V*, for example (V⊗V)⊗V* or V⊗V*⊗V*⊗V. Then T is generally called a tensor space. From the associativity of the tensor product (Proposition 1.8) we can, when we form a tensor space, neglect parentheses. Therefore a tensor space is isomorphic to some V^{ε_1}⊗⋯⊗V^{ε_r}, the tensor product of a sequence V^{ε_1}, …, V^{ε_r} consisting of V and V*, where we set V = V^{+1}, V* = V^{−1}, and ε_i = ±1. Elements of this space are called tensors of type (ε_1, …, ε_r).

We can make a further identification. Suppose that T is isomorphic to V^{ε_1}⊗⋯⊗V^{ε_r} and T′ is isomorphic to V^{ε′_1}⊗⋯⊗V^{ε′_r}. (Both have the same number of factors.) If the number of V among V^{ε_1}, …, V^{ε_r} and that among V^{ε′_1}, …, V^{ε′_r} are equal (so the numbers of V* are also equal), then there is a natural isomorphism between T and T′ obtained from the commutativity of the tensor product (Proposition 1.5, V⊗V* ≅ V*⊗V), and we can identify T and T′ (for example, T = V⊗V⊗V*⊗V and T′ = V*⊗V⊗V⊗V). Therefore, to discuss a tensor space T, only the numbers of V and V* factors are important. If T is a tensor product of p copies of V and q copies of V*, we call T the tensor space of type (p, q) and denote it by T_q^p(V). Thus, if we identify as above,

T_q^p(V) = V⊗⋯⊗V (p factors) ⊗ V*⊗⋯⊗V* (q factors).

Set T_0^0(V) = k. Sometimes we write T_0^p(V) = T^p(V), T_q^0(V) = T_q(V), and T_0^0(V) = T^0(V) = T_0(V).
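Once a basis of V is fixed, T_q^p(V) has a convenient coordinate model (an illustration, not part of the text): a tensor of type (p, q) is an array with p+q axes of length n = dim V, so dim T_q^p(V) = n^{p+q}.

```python
import numpy as np

# A tensor of type (2, 1) over a 3-dimensional V is a 3 x 3 x 3 array.
n, p, q = 3, 2, 1
t = np.zeros((n,) * (p + q))
assert t.size == n ** (p + q)          # dim T_1^2(V) = 3^3 = 27

# An elementary tensor v (x) w (x) phi has components v_i w_j phi_k:
v, w = np.eye(3)[0], np.eye(3)[1]      # basis vectors e_1, e_2
phi = np.array([1.0, 2.0, 3.0])        # a covector
elem = np.einsum('i,j,k->ijk', v, w, phi)
assert elem[0, 1, 2] == phi[2]
```

This array picture underlies the component formulas of §2 below.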
Elements of the tensor space T_q^p(V) are called tensors of type (p, q), or tensors which are contravariant of degree (or order) p and covariant of degree (or order) q. In particular, tensors of type (p, 0) are called contravariant tensors of degree p, and those of type (0, q) covariant tensors of degree q. Moreover, the elements of T_0^1(V) = V are called contravariant vectors, those of T_1(V) = V* covariant vectors, and those of T^0(V) = k scalars. We will give several examples of tensors, using the isomorphisms presented in Chapter I.

Example 2.1. Since Hom(V, V) ≅ V*⊗V ≅ V⊗V* (see the corollary to Proposition 1.11), Hom(V, V) ≅ T_1^1(V). That is, the linear transformations of V can be regarded as tensors of type (1, 1). In this case, to φ⊗v ∈ V*⊗V (φ ∈ V*, v ∈ V) there corresponds the element T_{φ,v} ∈ Hom(V, V) defined by the following formula:

T_{φ,v}(x) = φ(x)v  (x ∈ V),

and this correspondence extends to the entire space.

Example 2.2. Since V⊗W ≅ ℒ(V*, W*; k) (I, §4) and (V*)* = V, we have

ℒ(V, V; k) ≅ V*⊗V* = T_2(V).

(This result can be obtained also by Proposition 1.3 and its corollary.) That is, the bilinear forms on V are covariant tensors of degree 2. In a similar fashion to the case of 2 variables, we generally obtain

ℒ(V, …, V; k) (q factors) ≅ V*⊗⋯⊗V* (q factors) = T_q(V).

In this case φ_1⊗⋯⊗φ_q (φ_i ∈ V*) ∈ V*⊗⋯⊗V* is regarded as an element of ℒ(V, …, V; k) as follows:

(φ_1⊗⋯⊗φ_q)(v_1, …, v_q) = φ_1(v_1) ⋯ φ_q(v_q)  (v_i ∈ V).

More generally, we have

T_q^p(V) ≅ ℒ(V, …, V (q factors), V*, …, V* (p factors); k).

In fact,

T_q^p(V) = V⊗⋯⊗V ⊗ V*⊗⋯⊗V* ≅ V*⊗⋯⊗V* ⊗ (V*)*⊗⋯⊗(V*)* ≅ ℒ(V, …, V, V*, …, V*; k).

Here t = ψ_1⊗⋯⊗ψ_q⊗w_1⊗⋯⊗w_p ∈ T_q^p(V) (ψ_i ∈ V*, w_j ∈ V) is regarded as an element of ℒ(V, …, V, V*, …, V*; k) by

t(v_1, …, v_q, φ_1, …, φ_p) = ψ_1(v_1) ⋯ ψ_q(v_q) φ_1(w_1) ⋯ φ_p(w_p)  (v_i ∈ V, φ_j ∈ V*).

By this isomorphism, we can consider an element of T_q^p(V) as a (p+q)-multilinear form defined on the product of q copies of V and p copies of V*.

Example 2.3. (T_q^p(V))* ≅ T_p^q(V).
That is, the dual space of T_q^p(V) is T_p^q(V). In fact, since

T_q^p(V) = V⊗⋯⊗V (p factors) ⊗ V*⊗⋯⊗V* (q factors),

by the corollary to Proposition 1.3 we have

(T_q^p(V))* ≅ V*⊗⋯⊗V* (p factors) ⊗ V⊗⋯⊗V (q factors) = T_p^q(V).

Here t = ψ_1⊗⋯⊗ψ_p⊗u_1⊗⋯⊗u_q ∈ T_p^q(V) (ψ_i ∈ V*, u_j ∈ V) is regarded as a function on T_q^p(V) by

t(w_1⊗⋯⊗w_p⊗φ_1⊗⋯⊗φ_q) = ψ_1(w_1) ⋯ ψ_p(w_p) φ_1(u_1) ⋯ φ_q(u_q) ∈ k  (w_i ∈ V, φ_j ∈ V*).

We can verify this easily, examining the correspondence of each isomorphism between spaces.

Example 2.4 (Bilinear multiplication). Setting W = U = V in the following formula

ℒ(V, W; U) ≅ Hom(V⊗W, U) ≅ (V⊗W)*⊗U ≅ V*⊗W*⊗U,

we have

ℒ(V, V; V) ≅ V*⊗V*⊗V = T_2^1(V).

Here the element m of ℒ(V, V; V) corresponding to φ_1⊗φ_2⊗v (φ_1, φ_2 ∈ V*, v ∈ V) by these isomorphisms is given by

m(v_1, v_2) = φ_1(v_1)φ_2(v_2)v  (v_1, v_2 ∈ V).

In fact, let us examine the isomorphisms above. To φ_1⊗φ_2⊗v there corresponds Φ⊗v ∈ (V⊗V)*⊗V, where Φ ∈ (V⊗V)* is such that Φ(v_1⊗v_2) = φ_1(v_1)φ_2(v_2). To Φ⊗v there corresponds T ∈ Hom(V⊗V, V) such that T(x) = Φ(x)v (x ∈ V⊗V). Finally we have

m(v_1, v_2) = T(v_1⊗v_2) = Φ(v_1⊗v_2)v = φ_1(v_1)φ_2(v_2)v.

Let m be a bilinear mapping V×V → V. If we call m(v_1, v_2) ∈ V (v_1, v_2 ∈ V) the product of v_1 and v_2, we can define a multiplication on V. As usual, we denote m(v_1, v_2) = v_1v_2. By definition we have

(v_1+v′_1)v_2 = v_1v_2 + v′_1v_2,  v_1(v_2+v′_2) = v_1v_2 + v_1v′_2 (distributivity),
(αv_1)v_2 = v_1(αv_2) = α(v_1v_2)  (v_1, v′_1, v_2, v′_2 ∈ V, α ∈ k).

Thus, an element of T_2^1(V) defines a bilinear multiplication on V, and vice versa. Since defining a multiplication on V is equivalent to giving an element m ∈ T_2^1(V), the properties of a multiplication can be described in terms of m (cf. I, §10, Remark).

§2. Properties of tensor spaces

(a) Components of tensors and their transformation laws. Let V be a k-vector space and let V* be the dual space of V.
In this section we describe the properties of the tensor space

T_q^p(V) = V⊗⋯⊗V (p factors) ⊗ V*⊗⋯⊗V* (q factors)

defined in §1. If dim_k V = n, then dim_k T_q^p(V) = n^{p+q}, since the dimension of a tensor product is the product of the dimensions of the factors. Choose a basis ℰ = (e_1, …, e_n) of V, and let (f^1, …, f^n) be the dual basis of ℰ. Then the following n^{p+q} elements form a basis of T_q^p(V), called the standard basis corresponding to ℰ:

e_{i_1}⊗⋯⊗e_{i_p}⊗f^{j_1}⊗⋯⊗f^{j_q}  (1 ≤ i_1, …, i_p ≤ n, 1 ≤ j_1, …, j_q ≤ n).

A tensor z ∈ T_q^p(V) is written uniquely as

z = Σ ξ^{i_1⋯i_p}_{j_1⋯j_q} e_{i_1}⊗⋯⊗e_{i_p}⊗f^{j_1}⊗⋯⊗f^{j_q},

and the coefficients ξ^{i_1⋯i_p}_{j_1⋯j_q} ∈ k are called the components of z with respect to ℰ. For a bilinear form Φ on V, regarded as an element of T_2(V), the component ξ_{ij} is given by Φ(e_i, e_j). In general, the components {ξ_{j_1⋯j_q}} of a q-multilinear form Φ on V with respect to ℰ are given by

ξ_{j_1⋯j_q} = Φ(e_{j_1}, …, e_{j_q}).

Similarly, the components of an element Φ ∈ ℒ(V, …, V (q factors), V*, …, V* (p factors); k) with respect to ℰ are given by

ξ^{i_1⋯i_p}_{j_1⋯j_q} = Φ(e_{j_1}, …, e_{j_q}, f^{i_1}, …, f^{i_p}).

Exercise. Prove these formulas.

Next, we study the relation between the components when we take another basis ℰ′ = (ē_1, …, ē_n) for V. Let (α_{ij}) be the matrix of the change of bases from ℰ to ℰ′. Namely, we have

ē_j = Σ_{i=1}^n α_{ij} e_i  (j = 1, …, n).  (2.1)

First we compute the matrix of the change of dual bases. Let (f̄^1, …, f̄^n) be the dual basis for ℰ′, and write

f̄^i = Σ_j β_{ij} f^j  (i = 1, …, n).  (2.2)

Since f̄^i(ē_j) = δ_{ij}, by (2.1) we get

Σ_k β_{ik} α_{kj} = δ_{ij}  (i, j = 1, …, n),  (2.3)

i.e., the matrix (β_{ij}) is the inverse of (α_{ij}). Let {ξ^{i_1⋯i_p}_{j_1⋯j_q}} and {ξ̄^{i_1⋯i_p}_{j_1⋯j_q}} be the components of z ∈ T_q^p(V) with respect to ℰ and ℰ′ respectively.

Proposition 2.1. With the notation as above, we have the following formula relating the components. For all i_1, …, i_p and j_1, …, j_q (1 ≤ i_α, j_β ≤ n),

ξ̄^{i_1⋯i_p}_{j_1⋯j_q} = Σ_{k_1,…,k_p} Σ_{l_1,…,l_q} β_{i_1k_1} ⋯ β_{i_pk_p} α_{l_1j_1} ⋯ α_{l_qj_q} ξ^{k_1⋯k_p}_{l_1⋯l_q}.

(b) The tensor product

T_q^p(V) × T_s^r(V) → T_q^p(V)⊗T_s^r(V) ≅ T_{q+s}^{p+r}(V),  (x, y) ↦ x⊗y,

can be considered as a bilinear mapping into T_{q+s}^{p+r}(V). The image of (x, y) ∈ T_q^p(V)×T_s^r(V) under this mapping is called the product of x and y and is denoted by xy or x⊗y. Let ℰ be a basis of V. Take the standard bases corresponding to ℰ of T_q^p(V), T_s^r(V), and T_{q+s}^{p+r}(V).
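Proposition 2.1 can be illustrated for the simplest mixed case, a tensor of type (1, 1): the contravariant index transforms by (β) = (α)⁻¹ and the covariant index by (α), so the component matrix obeys the familiar similarity rule ξ̄ = A⁻¹ξA. The sketch below (an illustration, not the book's notation) checks that the transformed components act consistently on transformed vector coordinates.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))        # change-of-bases matrix (alpha_{ij})
xi = rng.standard_normal((3, 3))       # components of a (1,1)-tensor in basis E

xibar = np.linalg.solve(A, xi) @ A     # Proposition 2.1: xibar = A^{-1} xi A

# Consistency check: the action of the tensor on a vector is basis-free.
v = rng.standard_normal(3)
vbar = np.linalg.solve(A, v)           # coordinates of v in the new basis
assert np.allclose(xibar @ vbar, np.linalg.solve(A, xi @ v))
```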
Let $\{\xi^{\cdots}_{\cdots}\}$ and $\{\eta^{\cdots}_{\cdots}\}$ be the components of $x$ and $y$ with respect to these bases respectively. It is easily verified that the components of $xy$ are given by the products $\{\xi^{\cdots}_{\cdots}\,\eta^{\cdots}_{\cdots}\}$.

Next, we shall define the contraction of a tensor. We use the following proposition.

PROPOSITION 2.3. Consider the tensor space $T_q^p(V)$ and assume that $p > 0$ and $q > 0$. Fix integers $r$ and $s$ such that $1\le r\le p$, $1\le s\le q$. Then there exists a unique linear mapping $C^r_s\colon T_q^p(V)\to T_{q-1}^{p-1}(V)$ whose components are given by
\[ \zeta^{i_1\cdots i_{p-1}}_{j_1\cdots j_{q-1}} = \sum_{k=1}^n \xi^{i_1\cdots i_{r-1}\,k\,i_r\cdots i_{p-1}}_{j_1\cdots j_{s-1}\,k\,j_s\cdots j_{q-1}}. \tag{2.4} \]
In fact, this follows from the formula for the standard basis elements:
\[ C^r_s(e_{i_1}\otimes\cdots\otimes e_{i_p}\otimes f^{j_1}\otimes\cdots\otimes f^{j_q}) = \begin{cases} e_{i_1}\otimes\cdots\hat e_{i_r}\cdots\otimes e_{i_p}\otimes f^{j_1}\otimes\cdots\hat f^{j_s}\cdots\otimes f^{j_q} & (\text{if } i_r = j_s),\\[2pt] 0 & (\text{if } i_r\ne j_s).\end{cases} \tag{2.5} \]
$(^1)$ By $\hat e_{i_r}$ and $\hat f^{j_s}$, we indicate that these factors are missing.

REMARK. When we make a summation from 1 to $n$ ($= \dim V$) with respect to an upper index and a lower index, as in (2.4), some authors omit the summation sign $\sum$. For example, (2.4) is written as
\[ \zeta^{i_1\cdots i_{p-1}}_{j_1\cdots j_{q-1}} = \xi^{i_1\cdots k\cdots i_{p-1}}_{j_1\cdots k\cdots j_{q-1}}. \]
This convention is often used in tensor calculus and is called Einstein's convention after its originator. The notation is convenient, but we do not use it in this book, in order to avoid any misunderstanding.

EXAMPLE 2.8. By the isomorphism $\operatorname{Hom}(V, V)\cong T_1^1(V) = V\otimes V^*$, let $T\in\operatorname{Hom}(V, V)$ correspond to $t_T\in V\otimes V^*$. Then:
(1) $C^1_1(t_T) = \operatorname{tr}(T)$, where $C^1_1$ means the contraction with respect to the first contravariant index and the first covariant index. In this example, we use the same convention.
(2) For any $v\in V$, $T(v) = C^2_1(t_T\otimes v)$, where $t_T\otimes v\in V\otimes V^*\otimes V$.
(3) Let $T, S\in\operatorname{Hom}(V, V)$ and $R = T\circ S$. Then $t_R = C^2_1(t_T\otimes t_S)$, where $t_T\otimes t_S\in V\otimes V^*\otimes V\otimes V^*$.
The equality in (1) is obtained from the formula for the components of $t_T$ (§2, Example 2.5). Set $t_T = \sum_i v_i\otimes\varphi_i$. Then (2) follows from it by definition. Similarly we obtain (3).

EXAMPLE 2.9 (Quotient rule of tensors). Assume that, to every basis $\mathscr{E}$ of $V$, there correspond $n^{p+q}$ elements $\{a^{i_1\cdots i_p}_{j_1\cdots j_q}\}$ $(1\le i_a, j_b\le n)$ of $k$,
which satisfy the assumption of the quotient rule for $(s, t) = (1, 0)$ and $(p - t, q - s) = (1, 0)$. By the quotient rule, the $\{a^i_j\}$ are the components of a tensor of type $(1, 1)$.

§3. Symmetric tensors and alternating tensors

(a) Definitions. There are families of tensors which are called symmetric or alternating. In this section we give their definitions and study their properties. First, we consider the space $T^p(V)$ of contravariant tensors of degree $p$. In this section we write $T^p(V) = T^p$ for simplicity. Let $\mathfrak{S}_p$ be the set of permutations of the set $\{1, \ldots, p\}$ with $p$ elements. Denote by $\operatorname{sgn}(\sigma)$ the signature of $\sigma\in\mathfrak{S}_p$ (i.e. $\operatorname{sgn}(\sigma) = 1$ if $\sigma$ is an even permutation and $\operatorname{sgn}(\sigma) = -1$ if $\sigma$ is an odd permutation).

From the next proposition we know that there corresponds to every element $\sigma$ of $\mathfrak{S}_p$ a linear transformation $P_\sigma$ of $T^p$.

PROPOSITION 2.4. (1) Let $\sigma\in\mathfrak{S}_p$. There exists a unique linear transformation $P_\sigma$ of $T^p$ such that
\[ P_\sigma(v_1\otimes\cdots\otimes v_p) = v_{\sigma^{-1}(1)}\otimes\cdots\otimes v_{\sigma^{-1}(p)} \quad (v_i\in V). \tag{2.6} \]
Moreover, $P_\sigma$ is a linear isomorphism.
(2) For $\sigma, \tau\in\mathfrak{S}_p$, $P_\sigma P_\tau = P_{\sigma\tau}$. Denote by $1$ the identity permutation; then $P_1 = I$ ($=$ the identity transformation of $T^p$).

PROOF. (1) To prove the existence and the uniqueness of $P_\sigma$, consider the $p$-multilinear mapping defined as follows:
\[ V\times\cdots\times V\longrightarrow T^p, \qquad (v_1, \ldots, v_p)\longmapsto v_{\sigma^{-1}(1)}\otimes\cdots\otimes v_{\sigma^{-1}(p)}. \]
From property (T) of tensor products, this mapping induces a linear mapping $P_\sigma\colon T^p\to T^p$. Since $P_\sigma$ induces a permutation of the standard basis elements $(t_{i_1\cdots i_p})$, it is a linear isomorphism.
(2) We have
\[ P_\sigma P_\tau(v_1\otimes\cdots\otimes v_p) = P_\sigma(v_{\tau^{-1}(1)}\otimes\cdots\otimes v_{\tau^{-1}(p)}) = v_{\tau^{-1}(\sigma^{-1}(1))}\otimes\cdots\otimes v_{\tau^{-1}(\sigma^{-1}(p))} = P_{\sigma\tau}(v_1\otimes\cdots\otimes v_p). \]
Since the elements $\{v_1\otimes\cdots\otimes v_p\mid v_i\in V\}$ generate $T^p$, we conclude that $P_\sigma P_\tau = P_{\sigma\tau}$. Clearly $P_1 = I$. $\square$

REMARK. We can prove existence as follows: define $P_\sigma$ on the standard basis elements by
\[ P_\sigma(t_{i_1\cdots i_p}) = t_{j_1\cdots j_p} \quad (\text{where } j_k = i_{\sigma^{-1}(k)}), \]
then extend $P_\sigma$ to the whole of $T^p$ by linearity. In this case, we must verify (2.6) and show that the definition of $P_\sigma$ does not depend on the choice of bases.

EXERCISE 1.
Let $t\in T^p(V)$ and let $\{\xi^{i_1\cdots i_p}\}$ be the components of $t$ with respect to $\mathscr{E}$. Show that $\{\xi^{i_{\sigma(1)}\cdots i_{\sigma(p)}}\}$ are the components of $P_\sigma(t)$.

DEFINITION 2.2. Let $t\in T^p$. If $P_\sigma(t) = t$ for all $\sigma\in\mathfrak{S}_p$, then $t$ is called a symmetric tensor. If $P_\sigma(t) = \operatorname{sgn}(\sigma)t$ for all $\sigma\in\mathfrak{S}_p$, then $t$ is called an alternating (or skew-symmetric) tensor. The set of symmetric tensors and that of alternating tensors are vector subspaces of $T^p$, denoted by $S^p(V)$ (or simply $S^p$) and $A^p(V)$ (or simply $A^p$) respectively.

EXAMPLE 2.10. If $p = 1$, then $S^1(V) = A^1(V) = T^1 = V$. If $p = 2$, since $\mathfrak{S}_2 = \{1, (1\ 2)\}$,
\[ t\in S^2(V)\iff P_{(1\,2)}(t) = t, \qquad t\in A^2(V)\iff P_{(1\,2)}(t) = -t. \]
Thus $T^2(V) = S^2(V)\oplus A^2(V)$ (direct sum). For a standard basis element $t_{ij}$, we have $P_{(1\,2)}(t_{ij}) = t_{ji}$. Therefore we know that $\{t_{ij} + t_{ji}\ (i\le j)\}$ is a basis for $S^2(V)$ and $\{t_{ij} - t_{ji}\ (i < j)\}$ is a basis for $A^2(V)$. In particular, if $\dim V = n$,
\[ \dim S^2(V) = n(n+1)/2, \qquad \dim A^2(V) = n(n-1)/2. \]

EXAMPLE 2.11. From Exercise 1 and the definition, we obtain the following assertions. We denote by $\{\xi^{i_1\cdots i_p}\}$ the components of $t\in T^p(V)$ with respect to a basis for $V$.
(1) $t$ is a symmetric tensor $\iff \xi^{i_{\sigma(1)}\cdots i_{\sigma(p)}} = \xi^{i_1\cdots i_p}$ for all $\sigma\in\mathfrak{S}_p$ $\iff$ the same holds for all transpositions $\sigma\in\mathfrak{S}_p$.
(2) $t$ is an alternating tensor $\iff \xi^{i_{\sigma(1)}\cdots i_{\sigma(p)}} = \operatorname{sgn}(\sigma)\,\xi^{i_1\cdots i_p}$ for all $\sigma\in\mathfrak{S}_p$ $\iff$ the same holds for all transpositions $\sigma\in\mathfrak{S}_p$.
EXERCISE 2. Verify these assertions.

Let us examine the properties of the subspaces $S^p(V)$ and $A^p(V)$. For this purpose, define the linear transformations $\mathscr{S}_p$ and $\mathscr{A}_p$ by the following formulas:
\[ \mathscr{S}_p = \frac{1}{p!}\sum_{\sigma\in\mathfrak{S}_p}P_\sigma, \qquad \mathscr{A}_p = \frac{1}{p!}\sum_{\sigma\in\mathfrak{S}_p}\operatorname{sgn}(\sigma)P_\sigma. \]
When there is no risk of confusion, we simply write $\mathscr{S} = \mathscr{S}_p$ and $\mathscr{A} = \mathscr{A}_p$, omitting $p$. The next lemma follows easily from the definition.

LEMMA 2.1. For $\sigma\in\mathfrak{S}_p$,
\[ P_\sigma\mathscr{S} = \mathscr{S}P_\sigma = \mathscr{S}, \qquad P_\sigma\mathscr{A} = \mathscr{A}P_\sigma = \operatorname{sgn}(\sigma)\mathscr{A}. \]

PROOF. $P_\sigma\mathscr{S} = (1/p!)\,P_\sigma\bigl(\sum_{\tau\in\mathfrak{S}_p}P_\tau\bigr) = (1/p!)\sum_{\tau\in\mathfrak{S}_p}P_\sigma P_\tau = (1/p!)\sum_{\tau\in\mathfrak{S}_p}P_{\sigma\tau}$. Now, if $\tau$ runs over $\mathfrak{S}_p$, the same holds for $\sigma\tau$. Thus
\[ P_\sigma\mathscr{S} = \frac{1}{p!}\sum_{\tau\in\mathfrak{S}_p}P_\tau = \mathscr{S}. \]
Similarly,
\[ P_\sigma\mathscr{A} = \frac{1}{p!}\sum_{\tau\in\mathfrak{S}_p}\operatorname{sgn}(\tau)P_\sigma P_\tau = \frac{1}{p!}\sum_{\tau\in\mathfrak{S}_p}\operatorname{sgn}(\tau)P_{\sigma\tau}. \]
Notice that $\operatorname{sgn}(\tau) = \operatorname{sgn}(\sigma)\operatorname{sgn}(\sigma\tau)$, and that $\sigma\tau$ runs over $\mathfrak{S}_p$.
Thus $P_\sigma\mathscr{A} = \operatorname{sgn}(\sigma)\mathscr{A}$. The other equalities can be obtained by similar reasoning. $\square$

Applying this lemma, we have the next proposition.

PROPOSITION 2.5. The linear mappings $\mathscr{S}$ and $\mathscr{A}$ are projections of $T^p$ onto the vector subspaces $S^p$ and $A^p$ respectively. Namely,
(1) $\mathscr{S}^2 = \mathscr{S}$, $\mathscr{A}^2 = \mathscr{A}$.
(2) For $t\in T^p$, $t\in S^p\iff\mathscr{S}(t) = t$, and $t\in A^p\iff\mathscr{A}(t) = t$.
Moreover, if $p\ge 2$, then $\mathscr{S}\mathscr{A} = \mathscr{A}\mathscr{S} = 0$, which implies $S^p\cap A^p = \{0\}$.

NOTE. Let $X$ be a vector space and $Y$ a subspace of $X$. Let $Y'$ be a subspace of $X$ such that $X = Y\oplus Y'$ (direct sum). For $x\in X$, decompose it as $x = y + y'$ ($y\in Y$, $y'\in Y'$). The linear transformation $P$ of $X$ defined by $P(x) = y$ is called the projection of $X$ onto $Y$ (defined by $Y'$). In general, a linear transformation defined in this way by some $Y'$ is called a projection onto $Y$. A linear transformation $P$ is a projection onto $Y$ if and only if the following hold:
(1) $P^2 = P$.
(2) If $x\in X$, then $x\in Y$ if and only if $P(x) = x$.

PROOF. (1) By Lemma 2.1,
\[ \mathscr{S}^2 = \frac{1}{p!}\sum_{\sigma\in\mathfrak{S}_p}P_\sigma\mathscr{S} = \frac{1}{p!}\sum_{\sigma\in\mathfrak{S}_p}\mathscr{S} = \mathscr{S}, \qquad \mathscr{A}^2 = \frac{1}{p!}\sum_{\sigma\in\mathfrak{S}_p}\operatorname{sgn}(\sigma)P_\sigma\mathscr{A} = \frac{1}{p!}\sum_{\sigma\in\mathfrak{S}_p}(\operatorname{sgn}(\sigma))^2\mathscr{A} = \mathscr{A}. \]
(2) By definition, $t\in S^p\iff P_\sigma(t) = t$ for all $\sigma\in\mathfrak{S}_p$. Thus, if $t\in S^p$, then $\mathscr{S}(t) = (1/p!)\sum_\sigma P_\sigma(t) = t$. Conversely, if $\mathscr{S}(t) = t$, then $P_\sigma(t) = P_\sigma\mathscr{S}(t) = \mathscr{S}(t) = t$ for all $P_\sigma$, and $t\in S^p$. Likewise, $t\in A^p\iff P_\sigma(t) = \operatorname{sgn}(\sigma)t$ for all $\sigma\in\mathfrak{S}_p$. Thus, if $t\in A^p$, then $\mathscr{A}(t) = (1/p!)\sum_\sigma\operatorname{sgn}(\sigma)P_\sigma(t) = (1/p!)\sum_\sigma(\operatorname{sgn}(\sigma))^2t = t$. Conversely, if $\mathscr{A}(t) = t$, then $P_\sigma(t) = P_\sigma\mathscr{A}(t) = \operatorname{sgn}(\sigma)\mathscr{A}(t) = \operatorname{sgn}(\sigma)t$ for all $P_\sigma$, and $t\in A^p$.
Moreover,
\[ \mathscr{A}\mathscr{S} = \frac{1}{p!}\sum_{\sigma\in\mathfrak{S}_p}\operatorname{sgn}(\sigma)P_\sigma\mathscr{S} = \frac{1}{p!}\Bigl(\sum_{\sigma\in\mathfrak{S}_p}\operatorname{sgn}(\sigma)\Bigr)\mathscr{S}. \]
Since the number of even permutations is equal to the number of odd permutations if $p\ge 2$, we have $\sum_{\sigma\in\mathfrak{S}_p}\operatorname{sgn}(\sigma) = 0$ and $\mathscr{A}\mathscr{S} = 0$. Similarly, $\mathscr{S}\mathscr{A} = 0$.
Suppose that $t\in S^p\cap A^p$. From $t\in S^p$ it follows that $t = \mathscr{S}(t)$. Applying $\mathscr{A}$ to both sides, $\mathscr{A}(t) = \mathscr{A}\mathscr{S}(t) = 0$. On the other hand, $t\in A^p$ implies $t = \mathscr{A}(t)$. So $t = 0$. $\square$

This proposition shows that $S^p = \operatorname{Im}\mathscr{S}$ and $A^p = \operatorname{Im}\mathscr{A}$. In particular, for any $t\in T^p$, $\mathscr{S}(t)$ is a symmetric tensor and $\mathscr{A}(t)$ is an alternating tensor. Therefore, the mappings $\mathscr{S}$ and $\mathscr{A}$ are called the symmetrizer and the alternator on $T^p$.
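The symmetrizer and alternator can be sketched in code (a NumPy illustration, not from the text): on the component array of $t\in T^p(V)$, $P_\sigma$ acts by permuting axes, so $\mathscr{S}$ and $\mathscr{A}$ become averages over all axis permutations, with signs in the alternating case.

```python
import numpy as np
from itertools import permutations
from math import factorial

def sign(sigma):
    """Signature of a permutation given as a 0-based tuple."""
    s, a = 1, list(sigma)
    for i in range(len(a)):
        while a[i] != i:
            j = a[i]
            a[i], a[j] = a[j], a[i]  # each transposition flips the sign
            s = -s
    return s

def symmetrize(t):
    """The symmetrizer S = (1/p!) sum_sigma P_sigma, acting on components."""
    p = t.ndim
    return sum(np.transpose(t, ax) for ax in permutations(range(p))) / factorial(p)

def alternate(t):
    """The alternator A = (1/p!) sum_sigma sgn(sigma) P_sigma."""
    p = t.ndim
    return sum(sign(ax) * np.transpose(t, ax)
               for ax in permutations(range(p))) / factorial(p)

rng = np.random.default_rng(1)
t = rng.standard_normal((3, 3, 3))  # an element of T^3(V), dim V = 3
S, A = symmetrize(t), alternate(t)
# Proposition 2.5: S and A are projections, and each annihilates the other.
assert np.allclose(symmetrize(S), S) and np.allclose(alternate(A), A)
assert np.allclose(symmetrize(A), 0) and np.allclose(alternate(S), 0)
# For p = 2 (Example 2.10): T^2 = S^2 (+) A^2, with dimensions
# n(n+1)/2 and n(n-1)/2, checked here via ranks of the two projections.
n = 4
u = rng.standard_normal((n, n))
assert np.allclose(symmetrize(u) + alternate(u), u)
alt_imgs = [alternate(np.eye(1, n * n, k).reshape(n, n)).ravel() for k in range(n * n)]
sym_imgs = [symmetrize(np.eye(1, n * n, k).reshape(n, n)).ravel() for k in range(n * n)]
assert np.linalg.matrix_rank(np.array(alt_imgs)) == n * (n - 1) // 2
assert np.linalg.matrix_rank(np.array(sym_imgs)) == n * (n + 1) // 2
print("symmetrizer and alternator are complementary projections for p = 2")
```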
(b) Bases and dimensions of the symmetric tensor space and the alternating tensor space. Starting from a standard basis $(t_{i_1\cdots i_p})$ $(1\le i_k\le n)$ of $T^p$ (corresponding to a basis of $V$) and applying the symmetrizer and the alternator, we are going to construct bases of $S^p$ and $A^p$. To read indices more easily, we write $t_{i_1\cdots i_p} = t(i_1, \ldots, i_p)$. By definition it follows that, for $\sigma\in\mathfrak{S}_p$,
\[ P_\sigma(t(i_1, \ldots, i_p)) = t(i_{\sigma^{-1}(1)}, \ldots, i_{\sigma^{-1}(p)}). \]
From the previous observation, we know that the spaces $S^p$ and $A^p$ are generated by $\{\mathscr{S}(t(i_1, \ldots, i_p))\mid 1\le i_k\le n\}$ and $\{\mathscr{A}(t(i_1, \ldots, i_p))\mid 1\le i_k\le n\}$ respectively.

If $p > n = \dim V$, then $\dim A^p = 0$. Therefore we consider the case $p\le n$. By definition we have
\[ \mathscr{A}(t(i_1, \ldots, i_p)) = \frac{1}{p!}\sum_{\tau\in\mathfrak{S}_p}\operatorname{sgn}(\tau)\,t(i_{\tau(1)}, \ldots, i_{\tau(p)}), \]
and we can proceed in a similar fashion to $S^p$. By Lemma 2.2 it is necessary to consider only those $t(i_1, \ldots, i_p)$ where $i_1, \ldots, i_p$ are all distinct. For these elements, $t(i_{\tau(1)}, \ldots, i_{\tau(p)})\ne t(i_{\tau'(1)}, \ldots, i_{\tau'(p)})$ if $\tau\ne\tau'$. Thus $\mathscr{A}(t(i_1, \ldots, i_p))\ne 0$. Moreover, if the $p$-element subsets $\{i_1, \ldots, i_p\}$ and $\{k_1, \ldots, k_p\}$ are equal as sets, we have $\mathscr{A}(t(i_1, \ldots, i_p)) = \pm\mathscr{A}(t(k_1, \ldots, k_p))$. And if these subsets are different, there are no common terms when we write $\mathscr{A}(t(i_1, \ldots, i_p))$ and $\mathscr{A}(t(k_1, \ldots, k_p))$ as linear combinations of standard basis elements $t(j_1, \ldots, j_p)$. Therefore, the set of all these $\mathscr{A}(t(i_1, \ldots, i_p))$ is linearly independent. The cardinality of this set is equal to the number $\binom{n}{p}$ of $p$-subsets of $\{1, \ldots, n\}$.

PROPOSITION 2.7. Assume $p\le n$. Then $\{\mathscr{A}(t(i_1, \ldots, i_p))\mid 1\le i_1 < \cdots < i_p\le n\}$ is a basis of $A^p(V)$; in particular, $\dim A^p(V) = \binom{n}{p}$.

Compared with the symmetric case, only the signature makes a difference. For $t\in T^p$, we have
\[ t - \mathscr{A}(t) = \frac{1}{p!}\sum_{\sigma\in\mathfrak{S}_p}\{t - \operatorname{sgn}(\sigma)P_\sigma(t)\}. \]
If we denote by $N^p$ the subspace generated by $t - \operatorname{sgn}(\sigma)P_\sigma(t)$ for all $\sigma\in\mathfrak{S}_p$, $t\in T^p$, then the kernel $\operatorname{Ker}\mathscr{A}_p$ of $\mathscr{A}_p$ is equal to $N^p$. In the same way as in Proposition 2.8, for a subset $X$ of $\mathfrak{S}_p$ which satisfies the condition (G), denote by $N_X^p$ the subspace generated by
\[ \{t - \operatorname{sgn}(\tau)P_\tau(t)\mid t\in T^p,\ \tau\in X\}. \]
Then we have $N_X^p = N^p$. For the proof, we use the following two equalities. For $\tau, \tau'\in X$,
\[ t - \operatorname{sgn}(\tau\tau')P_{\tau\tau'}(t) = \{t - \operatorname{sgn}(\tau)P_\tau(t)\} + \operatorname{sgn}(\tau)P_\tau\bigl(t - \operatorname{sgn}(\tau')P_{\tau'}(t)\bigr); \]
and, more generally, if $\sigma = \tau\sigma'$ with $\tau\in X$, then
\[ t - \operatorname{sgn}(\sigma)P_\sigma(t) = t - \operatorname{sgn}(\tau)\operatorname{sgn}(\sigma')P_{\tau\sigma'}(t) = \{t - \operatorname{sgn}(\tau)P_\tau(t)\} + \operatorname{sgn}(\tau)P_\tau\bigl(t - \operatorname{sgn}(\sigma')P_{\sigma'}(t)\bigr). \]

Applying the above remark to $X = X_t$, the set of transpositions, we get the following proposition.

PROPOSITION 2.9. Let $N'^p$ be the subspace of $T^p$ generated by all elements of the form $v_1\otimes\cdots\otimes v_p$ such that $v_i = v_j$ for at least one pair $i\ne j$. Then $N'^p$ is equal to $\operatorname{Ker}\mathscr{A}_p$.

PROOF. Since $\operatorname{Ker}\mathscr{A}_p = N_{X_t}^p$, it is enough to show that the generators of $N'^p$ are contained in $N_{X_t}^p$ and vice versa. Let $t + P_\tau(t)$ ($t\in T^p$, $\tau\in X_t$) be a generator of $N_{X_t}^p$ (note that $\operatorname{sgn}(\tau) = -1$ for a transposition $\tau$). Since $P_\tau$ is linear, we may assume that $t$ is of the form $t = v_1\otimes\cdots\otimes v_p$, for a transposition $\tau = (i\ j)$ ($i < j$).

(d) Symmetric multilinear mappings and alternating multilinear mappings. Recall that $L(V, \ldots, V; U)$ denotes the space of $p$-multilinear mappings of $V\times\cdots\times V$ into $U$ (set $W_1 = \cdots = W_p = V$ in Theorem 1.1$'$). We mention analogous theorems for $S^p(V)$ and $A^p(V)$.

DEFINITION 2.3. Let $\Phi\in L(V, \ldots, V; U)$ be a $p$-multilinear mapping.
(1) If the values are invariant under all permutations of the variables, namely, for all $\sigma\in\mathfrak{S}_p$ and $v_i\in V$ $(i = 1, \ldots, p)$,
\[ \Phi(v_{\sigma(1)}, \ldots, v_{\sigma(p)}) = \Phi(v_1, \ldots, v_p), \]
then $\Phi$ is said to be symmetric.
(2) If $\Phi(v_1, \ldots, v_p) = 0$ whenever $v_i = v_j$ for some $i\ne j$, then $\Phi$ is said to be alternating.

REMARK. If $\Phi$ is alternating, then for any $\sigma\in\mathfrak{S}_p$ and $v_i\in V$ $(i = 1, \ldots, p)$,
\[ \Phi(v_{\sigma(1)}, \ldots, v_{\sigma(p)}) = \operatorname{sgn}(\sigma)\Phi(v_1, \ldots, v_p). \tag{2.7} \]
In fact, if $\Phi$ is alternating, this holds for any transposition $\sigma = (i\ j)$ ($i < j$), and hence for all $\sigma\in\mathfrak{S}_p$.

Let $\iota\colon V\times\cdots\times V\to T^p(V)$ be the canonical mapping into the tensor product $T^p(V) = V\otimes\cdots\otimes V$. The composite mappings
\[ \mathscr{S}_p\circ\iota\colon V\times\cdots\times V\to S^p(V), \qquad \mathscr{A}_p\circ\iota\colon V\times\cdots\times V\to A^p(V) \]
are a symmetric and an alternating $p$-multilinear mapping respectively. In fact, these mappings have a universal property. We have the following theorem.

THEOREM 2.1. (1) Let $V$ and $U$ be $k$-vector spaces.
If $\Phi\in L(V, \ldots, V; U)$ is a symmetric $p$-multilinear mapping, then there exists a unique linear mapping $F\colon S^p(V)\to U$ such that $F\circ(\mathscr{S}_p\circ\iota) = \Phi$; that is, the triangle formed by $\mathscr{S}_p\circ\iota\colon V\times\cdots\times V\to S^p(V)$, $F\colon S^p(V)\to U$, and $\Phi\colon V\times\cdots\times V\to U$ commutes.
(2) A similar result holds for alternating $p$-multilinear mappings. That is, if $\Psi\in L(V, \ldots, V; U)$ is an alternating $p$-multilinear mapping, then there exists a unique linear mapping $G\colon A^p(V)\to U$ such that $G\circ(\mathscr{A}_p\circ\iota) = \Psi$.

PROOF. (1) By the fundamental property (T) of tensor products, $\Phi$ induces a unique linear mapping $F_0\colon T^p(V)\to U$ with $F_0\circ\iota = \Phi$. Suppose that there exists a linear mapping $F\colon S^p(V)\to U$ which satisfies the condition of the theorem; then $F\circ\mathscr{S}_p$ satisfies the requirement for $F_0$. Thus the uniqueness of $F_0$ implies $F\circ\mathscr{S}_p = F_0$. For $x\in S^p(V)$, $F(x) = F_0(x)$ is necessary, since $\mathscr{S}(x) = x$. In particular, this shows the uniqueness of $F$.

Now we define a mapping $F\colon S^p(V)\to U$ by $F(x) = F_0(x)$ for $x\in S^p(V)$. Since $F_0$ is linear, $F$ is linear also. It remains to show that $F\circ(\mathscr{S}_p\circ\iota) = \Phi$. For any $t\in T^p(V)$ we have $\mathscr{S}(t)\in S^p(V)$; therefore, by the definition of $F$, $(F\circ\mathscr{S})(t) = (F_0\circ\mathscr{S})(t)$. So, if we show that $F_0\circ\mathscr{S} = F_0$, we have $F\circ\mathscr{S} = F_0$ and $F\circ(\mathscr{S}_p\circ\iota) = F_0\circ\iota = \Phi$. By the linearity of $\mathscr{S}$ and $F_0$, it is enough to show $(F_0\circ\mathscr{S})(t) = F_0(t)$ for $t = v_1\otimes\cdots\otimes v_p$ ($v_i\in V$). For $\sigma\in\mathfrak{S}_p$,
\[ F_0\circ P_\sigma(v_1\otimes\cdots\otimes v_p) = F_0(v_{\sigma^{-1}(1)}\otimes\cdots\otimes v_{\sigma^{-1}(p)}) = \Phi(v_{\sigma^{-1}(1)}, \ldots, v_{\sigma^{-1}(p)}) = \Phi(v_1, \ldots, v_p) \quad (\text{since } \Phi \text{ is symmetric}). \]
Thus, for $t = v_1\otimes\cdots\otimes v_p$, we have
\[ F_0\circ\mathscr{S}(t) = \frac{1}{p!}\sum_{\sigma\in\mathfrak{S}_p}F_0\circ P_\sigma(v_1\otimes\cdots\otimes v_p) = \Phi(v_1, \ldots, v_p) = F_0\circ\iota(v_1, \ldots, v_p) = F_0(v_1\otimes\cdots\otimes v_p) = F_0(t). \]
(2) is proved in the same way. $\square$

(e) Case of covariant tensors. As for the space $T_p(V) = T^p(V^*)$ of covariant tensors of degree $p$, we can define a linear transformation $P^*_\sigma$ of $T_p(V)$ corresponding to an element $\sigma$ of $\mathfrak{S}_p$, in a similar fashion to the contravariant case. Namely, $P^*_\sigma$ is defined by
\[ P^*_\sigma(\varphi_1\otimes\cdots\otimes\varphi_p) = \varphi_{\sigma^{-1}(1)}\otimes\cdots\otimes\varphi_{\sigma^{-1}(p)} \quad (\varphi_1, \ldots, \varphi_p\in V^*). \]
Using $P^*_\sigma$, symmetric tensors, alternating tensors, and the subspaces $S^p(V^*)$, $A^p(V^*)$ consisting of them are defined. As we have already discussed in Examples 2.2 and 2.3 of §1, $T_p(V)$ can be identified with the dual space of $T^p(V)$, and $T_p(V)\cong L(V, \ldots, V; k)$ ($=$ the space of $p$-multilinear forms on $V$). When $T_p(V)$ is identified with the dual space of $T^p(V)$, $P^*_\sigma(\varphi_1\otimes\cdots\otimes\varphi_p)$ is a linear form on $T^p(V)$, and
\begin{align*}
P^*_\sigma(\varphi_1\otimes\cdots\otimes\varphi_p)(v_1\otimes\cdots\otimes v_p) &= (\varphi_{\sigma^{-1}(1)}\otimes\cdots\otimes\varphi_{\sigma^{-1}(p)})(v_1\otimes\cdots\otimes v_p) = \varphi_{\sigma^{-1}(1)}(v_1)\cdots\varphi_{\sigma^{-1}(p)}(v_p)\\
&= \varphi_1(v_{\sigma(1)})\cdots\varphi_p(v_{\sigma(p)}) = (\varphi_1\otimes\cdots\otimes\varphi_p)(v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma(p)})\\
&= (\varphi_1\otimes\cdots\otimes\varphi_p)\bigl(P_{\sigma^{-1}}(v_1\otimes\cdots\otimes v_p)\bigr) \qquad (\varphi_i\in V^*,\ v_i\in V).
\end{align*}
This equality shows that $P^*_\sigma = {}^t\!P_{\sigma^{-1}}$.

Next, considering the isomorphism $T_p(V)\cong L(V, \ldots, V; k)$, we denote by $\Phi(t^*)$ the $p$-multilinear form on $V$ corresponding to $t^*\in T_p(V) = (T^p(V))^*$. That is,
\[ \Phi(t^*)(v_1, \ldots, v_p) = t^*(v_1\otimes\cdots\otimes v_p) \quad (v_i\in V,\ t^*\in(T^p(V))^*). \]
Then the $p$-multilinear form $\Phi(P^*_\sigma(t^*))$ is obtained by permuting the variables of $\Phi(t^*)$, as we can see by
\[ \Phi(P^*_\sigma(t^*))(v_1, \ldots, v_p) = P^*_\sigma(t^*)(v_1\otimes\cdots\otimes v_p) = t^*\bigl(P_{\sigma^{-1}}(v_1\otimes\cdots\otimes v_p)\bigr) = t^*(v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma(p)}) = \Phi(t^*)(v_{\sigma(1)}, \ldots, v_{\sigma(p)}). \]
In particular, the $p$-multilinear forms corresponding to $t^*\in S^p(V^*)$ (resp. $A^p(V^*)$) are symmetric (resp. alternating).

§4. Tensor algebras and their properties

(a) Definition of tensor algebra. Collecting all contravariant tensors of different degrees, we define the (infinite) direct sum of the $T^p(V)$, $p = 0, 1, \ldots$:
\[ T(V) = \bigoplus_{p=0}^\infty T^p(V). \]
In general, suppose that to each element $i$ of $I = \{0, 1, \ldots\}$ there corresponds a $k$-vector space $V_i$. Then we define the infinite direct sum $V = \bigoplus_{i\in I}V_i$ as follows. As a set, $V$ consists of all infinite sequences
\[ v = (v_0, v_1, \ldots) \quad (v_i\in V_i) \]
such that the number of indices $i$ with $v_i\ne 0$ is finite. If we define, for $v = (v_0, v_1, \ldots)\in V$ and $v' = (v_0', v_1', \ldots)\in V$, the sum $v + v'$ and the scalar multiple $av$ ($a\in k$) by the following formulas, then $V$ has the structure of a $k$-vector space (not necessarily finite dimensional):
\[ v + v' = (v_0 + v_0',\ v_1 + v_1', \ldots), \qquad av = (av_0,\ av_1, \ldots). \]
If we identify $v_i\in V_i$ with the element $(0, \ldots, 0, v_i, 0, \ldots)$ ($v_i$ in the $i$th term), then $V_i$ can be considered as a subset of $V$. Let $v = (v_0, v_1, \ldots)\in V$. Then there exists an $i_0\in I$ such that $v_j = 0$ for all $j > i_0$. Thus we can write
\[ v = \sum_{j=0}^{i_0}v_j \quad (v_j\in V_j). \]
We write this expression formally as $v = \sum_j v_j$.

By setting $V_p = T^p(V)$ in the definition of the (infinite) direct sum, the (infinite) direct sum $T(V)$ of the contravariant tensor spaces $T^p(V)$ becomes a $k$-vector space (in general, infinite dimensional).

Next, let us define the product of two elements of $T(V)$. As we explained in §2(b), using the tensor product of $T^p(V)$ and $T^r(V)$, the product of $x\in T^p(V)$ and $y\in T^r(V)$ is defined and belongs to $T^{p+r}(V)$. Denote it by $x\otimes y$. Moreover, the mapping
\[ T^p(V)\times T^r(V)\to T^{p+r}(V), \qquad (x, y)\mapsto x\otimes y \]
is bilinear. Thus, for $t = \sum_{p=0}^\infty t_p$ and $t' = \sum_{r=0}^\infty t_r'$ ($t_p, t_p'\in T^p(V)$), define $t\otimes t'$ as follows:
\[ t\otimes t' = \sum_{p=0}^\infty\Bigl(\sum_{r+s=p}t_r\otimes t_s'\Bigr). \]
Clearly $t\otimes t'\in T(V)$, and it is called the product of $t$ and $t'$. The mapping
\[ T(V)\times T(V)\to T(V), \qquad (t, t')\mapsto t\otimes t' \]
is bilinear. From the associativity of the tensor product, the multiplication thus defined satisfies the associativity law; i.e., for $t, t', t''\in T(V)$,
\[ (t\otimes t')\otimes t'' = t\otimes(t'\otimes t''). \]
If we consider $1\in k = T^0(V)$ as an element of $T(V)$, we have $t\otimes 1 = 1\otimes t = t$ for all $t\in T(V)$. The vector space $T(V)$ equipped with this multiplication is called the tensor algebra of $V$. When we regard $T^p(V)$ as a subspace of $T(V)$, an element of the form $v_1\otimes\cdots\otimes v_p$ ($v_i\in V$) of $T^p(V)$ is called a decomposable tensor.

(b) Preliminaries on associative algebras. Here we define several notions which we need later. Let $A$ be a $k$-vector space (not necessarily finite dimensional). If $A$ has a bilinear multiplication which is associative, that is, if there exists a bilinear mapping $m\colon A\times A\to A$ satisfying associativity (§1.10, Remark), then $A$ (more precisely, the pair $(A, m)$) is called an associative algebra over $k$. Usually $m(a, a')$ is written as $aa'$ or $a\cdot a'$. If there exists a unit element $e$ for the multiplication, $e$ is called the unit element of $A$.
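The product on $T(V)$ just defined can be sketched in code. In this illustrative representation (not from the text), an element $t = \sum_p t_p$ of $T(V)$ is a dict sending each degree $p$ with $t_p\ne 0$ to the component array of $t_p$, of shape $(n, \ldots, n)$ with $p$ axes; the product multiplies homogeneous parts by the outer product and collects them degree by degree.

```python
import numpy as np

def tensor_mul(t, u):
    """Product on T(V): bilinear, with homogeneous parts multiplying by
    t_r (x) u_s in T^{r+s}(V), collected degree by degree."""
    out = {}
    for r, x in t.items():
        for s, y in u.items():
            out[r + s] = out.get(r + s, 0) + np.multiply.outer(x, y)
    return out

v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])
t = {0: np.array(5.0), 1: v}   # the element 5 + v of T(V), V = k^2
u = {1: w}                     # the element w
tu = tensor_mul(t, u)          # (5 + v) (x) w = 5w + v (x) w
assert np.allclose(tu[1], 5 * w)
assert np.allclose(tu[2], np.outer(v, w))
# 1 in k = T^0(V) is the unit element:
one = {0: np.array(1.0)}
assert all(np.allclose(tensor_mul(one, t)[p], t[p]) for p in t)
# associativity, on a small sample:
a, b, c = {1: v}, {1: w}, {2: np.outer(v, w)}
lhs = tensor_mul(tensor_mul(a, b), c)
rhs = tensor_mul(a, tensor_mul(b, c))
assert all(np.allclose(lhs[p], rhs[p]) for p in lhs)
print("graded, associative, unital product on T(V)")
```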
EXAMPLE 2.15. Let $A = M_n(k)$, the $k$-vector space of $n\times n$ matrices with coefficients in $k$, and $m(a, a') = aa'$ (product of matrices). Then $A$ is an associative algebra.

EXAMPLE 2.15$'$. Let $V$ be a $k$-vector space. Set $A = \operatorname{Hom}(V, V)$, the $k$-vector space of linear transformations of $V$, and $m(a, a') = a\circ a'$ (composition of mappings). Then $A$ is an associative algebra.

EXAMPLE 2.16. Let $V$ be a $k$-vector space. The tensor algebra $T(V)$ with $m(t, t') = t\otimes t'$ is an associative algebra.

Let $(A, m)$ and $(A', m')$ be associative algebras. If a linear mapping $\Phi$ of the $k$-vector space $A$ to $A'$ satisfies $\Phi(aa') = \Phi(a)\Phi(a')$ for all $a, a'\in A$, then $\Phi$ is called an associative algebra homomorphism. Moreover, if $A$ and $A'$ have unit elements $e$ and $e'$ respectively, an associative algebra homomorphism must also satisfy $\Phi(e) = e'$. In particular, if $\Phi$ is one-to-one and onto, $\Phi$ is called an (associative algebra) isomorphism. When there exists an isomorphism $\Phi\colon A\to A'$, $A$ and $A'$ are said to be isomorphic.

Let $(A, m)$ be an associative algebra. If a (vector) subspace $B$ of $A$ satisfies the condition $m(B, B)\subset B$, then $B$ is called a subalgebra of $A$. When $B$ is a subalgebra, $B$ is regarded as an associative algebra with the restriction of $m$ to $B\times B$. If a subalgebra $\mathfrak{a}$ of an associative algebra $(A, m)$ satisfies the conditions $m(A, \mathfrak{a})\subset\mathfrak{a}$ and $m(\mathfrak{a}, A)\subset\mathfrak{a}$, then $\mathfrak{a}$ is called an ideal of $A$.

Let $\mathfrak{a}$ be an ideal of $A$. Construct the factor space $A/\mathfrak{a}$ of the $k$-vector space $A$ by the subspace $\mathfrak{a}$. We can define a multiplication $\bar m$ on $A/\mathfrak{a}$ as follows: for $\bar a = a + \mathfrak{a}$ and $\bar a' = a' + \mathfrak{a}$ in $A/\mathfrak{a}$,
\[ \bar m(\bar a, \bar a') = \overline{m(a, a')} \quad (= \text{the coset of } m(a, a')). \]
Since $\mathfrak{a}$ is an ideal, $\bar m(\bar a, \bar a')\in A/\mathfrak{a}$ is independent of the choice of coset representatives for $\bar a$ and $\bar a'$. Thus we get a bilinear mapping $\bar m\colon A/\mathfrak{a}\times A/\mathfrak{a}\to A/\mathfrak{a}$, and $(A/\mathfrak{a}, \bar m)$ is an associative algebra. This is called the factor algebra of $A$ by $\mathfrak{a}$.

EXAMPLE 2.17. Let $(A, m)$ be an associative algebra and let $\{B_i\}_{i\in I}$ be a family of subalgebras.
Then the intersection $\bigcap_{i\in I}B_i$ is also a subalgebra. Let $S$ be a subset of $A$. The intersection of all subalgebras which contain $S$ is called the subalgebra generated by $S$ and is denoted by $B_S$; $B_S$ is the smallest subalgebra containing $S$. If $S = \{s\}$, we can easily verify that $B_S = \{\sum_{i=1}^n a_is^i\mid n = 1, 2, \ldots,\ a_i\in k\}$. In general, form all possible finite products $s_1\cdots s_r$ of elements of $S$ (some elements can appear twice or more); then $B_S$ is the set of all finite linear combinations of these with coefficients in $k$. In particular, if $B_S = A$, we say that $S$ is a set of generators for $A$, or that $A$ is generated by $S$. As sets of generators for the tensor algebra $T(V)$, we can take, for example,
\[ S_1 = \{1\}\cup V, \qquad S_2 = \{1\}\cup\{e_1, \ldots, e_n\}, \]
where $(e_1, \ldots, e_n)$ is a basis for $V$.

EXAMPLE 2.18. Let $(A, m)$ be an associative algebra and $\{\mathfrak{a}_i\}_{i\in I}$ a family of ideals. Then the intersection $\bigcap_{i\in I}\mathfrak{a}_i$ is also an ideal. Let $S$ be a subset of $A$. The intersection of all ideals which contain $S$ is called the ideal generated by $S$ and is denoted by $\mathfrak{a}_S$. $\mathfrak{a}_S$ is the set of all (finite) linear combinations, with coefficients in $k$, of those elements which are obtained by multiplying elements of $A$ on the left or right or both sides of elements of $S$.

Next, we shall define a grading. When to each element $i$ of $I = \{0, 1, \ldots\}$ there corresponds a $k$-vector space $A_i$, and $A$ is the (infinite) direct sum $A = \bigoplus_{i=0}^\infty A_i$, then $A$ is called a graded vector space (of type $I$).$(^6)$ We can then view $A_i\subset A$. Elements of some $A_i$ are called homogeneous elements, and the index $i$ such that $a\in A_i$ is called the degree of the homogeneous element $a$. Moreover, if $a\in A$ is written as $a = \sum_{i=0}^\infty a_i$ ($a_i\in A_i$), then $a_i$ is called the homogeneous component of $a$ of degree $i$.

If a graded vector space $A = \bigoplus_{i=0}^\infty A_i$ is an associative algebra and the multiplication $m$ satisfies, for all $i$ and $j$,
\[ m(A_i, A_j)\subset A_{i+j}, \]
then $A$ is called a graded associative algebra. The tensor algebra $T(V) = \bigoplus_{p=0}^\infty T^p(V)$ is an example of a graded associative algebra.
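Both facts about $T(V)$ just mentioned can be checked numerically (a NumPy sketch, not from the text): every standard basis tensor of degree $p$ is a product of the generators $e_1, \ldots, e_n$, so the set $\{1\}\cup\{e_1, \ldots, e_n\}$ generates $T(V)$; and the product of homogeneous elements of degrees $p$ and $r$ is homogeneous of degree $p + r$.

```python
import numpy as np
from itertools import product

n, p = 2, 3
rng = np.random.default_rng(2)
e = np.eye(n)                       # e[i] is the basis vector e_{i+1} of V
t = rng.standard_normal((n,) * p)   # an arbitrary element of T^p(V)

# Every standard basis element e_{i_1} (x) ... (x) e_{i_p} is a product of
# generators; summing them against the components of t reconstructs t.
recon = np.zeros((n,) * p)
for idx in product(range(n), repeat=p):
    basis_tensor = e[idx[0]]
    for i in idx[1:]:
        basis_tensor = np.multiply.outer(basis_tensor, e[i])
    recon = recon + t[idx] * basis_tensor
assert np.allclose(recon, t)

# The grading m(A_i, A_j) in A_{i+j}: degrees (numbers of axes) add under
# the tensor product of homogeneous elements.
x, y = rng.standard_normal((n, n)), rng.standard_normal((n,))
assert np.multiply.outer(x, y).ndim == x.ndim + y.ndim
print("T(V) is generated by the basis vectors, and degrees add")
```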
Let $A = \bigoplus_{i=0}^\infty A_i$ be a graded associative algebra and $B$ a subalgebra of $A$. $B$ is called a graded subalgebra of $A$ if $B$ can be expressed as the sum $B = \sum_i(B\cap A_i)$. (That is, every element $b$ of $B$ can be written in the form $b = \sum_i b_i$, where $b_i\in B\cap A_i$.)

PROPOSITION 2.10. Let $A = \bigoplus_{i=0}^\infty A_i$ be a graded associative algebra and $B$ a subalgebra of $A$. The following three conditions are equivalent.
(1) $B$ is a graded subalgebra of $A$.
(2) For all $b\in B$, the homogeneous components of $b$ are also contained in $B$.
(3) (As an associative algebra) $B$ is generated by homogeneous elements.

PROOF. Any element $b\in B$ can be expressed uniquely as $b = \sum_i a_i$, where $a_i\in A_i$. Thus (1) and (2) are equivalent, and (1) $\Rightarrow$ (3) is obvious too. To show (3) $\Rightarrow$ (2), let $\{b_\lambda\}$ ($\lambda\in\Lambda$) be a set of generators for $B$ consisting of homogeneous elements. Then every $b\in B$ can be expressed as
\[ b = \sum\alpha_{\lambda_1\cdots\lambda_r}b_{\lambda_1}\cdots b_{\lambda_r} \quad (\alpha_{\lambda_1\cdots\lambda_r}\in k). \]
The homogeneous component of $b$ of degree $i$ is the sum of all the terms $\alpha_{\lambda_1\cdots\lambda_r}b_{\lambda_1}\cdots b_{\lambda_r}$ whose degrees are equal to $i$. This shows (2). $\square$

An ideal $\mathfrak{a}$ of a graded associative algebra is called a graded ideal or a homogeneous ideal if it is graded as a subalgebra. Similarly to the case of graded subalgebras, we have the following proposition.

PROPOSITION 2.11. Let $A$ be a graded associative algebra and let $\mathfrak{a}$ be an ideal of $A$. The following three conditions are equivalent.
(1) $\mathfrak{a}$ is a graded ideal of $A$.
(2) For all $a\in\mathfrak{a}$, the homogeneous components of $a$ are also contained in $\mathfrak{a}$.
(3) $\mathfrak{a}$ is generated as an ideal by homogeneous elements.

PROOF. Only the assertion (3) $\Rightarrow$ (2) needs proof. Let $\{x_\lambda\}$ ($\lambda\in\Lambda$) be a set of generators for $\mathfrak{a}$ consisting of homogeneous elements. Every element $x$ of $\mathfrak{a}$ can be expressed in the form $x = \sum_\lambda a_\lambda x_\lambda b_\lambda$ ($a_\lambda, b_\lambda\in A$) (where some terms may not have the factors $a_\lambda$ or $b_\lambda$).

$(^6)$ In general, for the grading set $I$, we can take a commutative semigroup with a unit element (where the operation is denoted by $+$).
Let
\[ a_\lambda = \sum_{i=0}^\infty a_{\lambda, i}, \qquad b_\lambda = \sum_{j=0}^\infty b_{\lambda, j} \]
be the decompositions of $a_\lambda$ and $b_\lambda$ into homogeneous components. Then
\[ x = \sum_\lambda a_\lambda x_\lambda b_\lambda = \sum_{k=0}^\infty\Bigl(\sum_{i+\delta(\lambda)+j=k}a_{\lambda, i}x_\lambda b_{\lambda, j}\Bigr). \]
(We denote by $\delta(\lambda)$ the degree of $x_\lambda$.) The homogeneous components of $x$ are the sums $\sum a_{\lambda, i}x_\lambda b_{\lambda, j}$, which are elements of $\mathfrak{a}$. $\square$

If $\mathfrak{a}$ is a graded ideal, $\mathfrak{a}$ can be written as $\mathfrak{a} = \bigoplus_{i=0}^\infty\mathfrak{a}_i$, where $\mathfrak{a}_i = \mathfrak{a}\cap A_i$. Thus the factor algebra $A/\mathfrak{a}$ can be considered as a direct sum of the $A_i/\mathfrak{a}_i$ ($i\in I$). Moreover, since $m(A_i, \mathfrak{a}_j)\subset\mathfrak{a}_{i+j}$ and $m(\mathfrak{a}_i, A_j)\subset\mathfrak{a}_{i+j}$, where $m$ is the multiplication on $A$, the induced multiplication $\bar m$ of $A/\mathfrak{a}$ satisfies
\[ \bar m(A_i/\mathfrak{a}_i,\ A_j/\mathfrak{a}_j)\subset A_{i+j}/\mathfrak{a}_{i+j}. \]
Thus $A/\mathfrak{a} = \bigoplus_{i=0}^\infty(A_i/\mathfrak{a}_i)$ is naturally a graded associative algebra.

(c) Universality of the tensor algebra. Coming back to the main theme of the book, we describe the universality of the tensor algebra $T(V)$. We have the following theorem.

THEOREM 2.2. Let $V$ be a $k$-vector space, $A$ an associative algebra with unit element $e$, and $F$ a linear mapping of $V$ into $A$. Then $F$ can be extended uniquely to an associative algebra homomorphism $\tilde F\colon T(V)\to A$. Namely, there exists a unique associative algebra homomorphism $\tilde F$ such that $\tilde F(1) = e$ and $\tilde F\circ\iota = F$, where $\iota$ denotes the natural inclusion mapping of $V = T^1(V)$ into $T(V)$.

PROOF. For all $p$ ($p\ge 2$), define a $p$-multilinear mapping
\[ \Phi_p\colon\underbrace{V\times\cdots\times V}_{p\ \text{factors}}\to A, \qquad \Phi_p(v_1, \ldots, v_p) = F(v_1)\cdots F(v_p). \]
By the fundamental property (T) of tensor products (Theorem 1.1), $\Phi_p$ induces a linear mapping
\[ F_p\colon V\otimes\cdots\otimes V = T^p(V)\to A, \qquad F_p(v_1\otimes\cdots\otimes v_p) = F(v_1)\cdots F(v_p). \]
Moreover, defining $F_1 = F$ and $F_0(a) = ae$ ($a\in k$), we have $F_1\colon V = T^1(V)\to A$ and $F_0\colon k = T^0(V)\to A$. Then, setting $\tilde F(t) = \sum_p F_p(t_p)$
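The extension in Theorem 2.2 can be sketched numerically (an illustration, not from the text, under the assumptions $V = \mathbb{R}^2$ and $A = M_2(\mathbb{R})$): a linear map $F$ is fixed by the matrices $F(e_i)$, and $\tilde F$ sends $v_1\otimes\cdots\otimes v_p$ to the matrix product $F(v_1)\cdots F(v_p)$, extended linearly over the components of a tensor.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 2
B = rng.standard_normal((n, 2, 2))  # F(e_i) = B[i], images of a basis of V

def F(v):
    """The linear map F: V -> A = M_2(R)."""
    return np.einsum('i,ijk->jk', v, B)

def F_tilde(t):
    """F~ on a homogeneous component t of T^p(V), given by its component
    array: F~(v_1 (x) ... (x) v_p) = F(v_1)...F(v_p), extended linearly."""
    if t.ndim == 0:
        return float(t) * np.eye(2)  # F~ on T^0(V) = k: a -> a e
    out = np.zeros((2, 2))
    for idx in product(range(n), repeat=t.ndim):
        M = np.eye(2)
        for i in idx:
            M = M @ B[i]             # the product F(e_{i_1})...F(e_{i_p})
        out = out + t[idx] * M
    return out

v, w = rng.standard_normal(n), rng.standard_normal(n)
assert np.allclose(F_tilde(v), F(v))                # F~ extends F
assert np.allclose(F_tilde(np.array(1.0)), np.eye(2))  # F~(1) = e
# F~ is multiplicative on decomposables:
assert np.allclose(F_tilde(np.multiply.outer(v, w)), F(v) @ F(w))
print("F extends to an algebra homomorphism on T(V)")
```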
