Differences between a matrix and a tensor

What is the difference between a matrix and a tensor? Or, what makes a tensor, a tensor? I know that a matrix is a table of values,
right? But, a tensor?

Tags: matrices, tensors

asked Jun 5 '13 at 21:52 by FormlessCloud

25 Continuing with your analogy, a matrix is just a two-dimensional table to organize information and a tensor
is just its generalization. You can think of a tensor as a higher-dimensional way to organize information. So
a matrix (5x5 for example) is a tensor of rank 2. And a tensor of rank 3 would be a "3D-matrix" like a 5x5x5
matrix. – Fixed Point Jun 5 '13 at 23:03

2 I thought a rank-n tensor is a multilinear function which takes n vectors and returns a vector of the same
vector space? – Martin Thoma Jan 3 at 12:37

3 One point of confusion is that in machine learning, people often use the term "tensor" when they really
mean just "multidimensional array". I disagree that a tensor of rank 3 would be a "3D matrix", but
admittedly it's not uncommon to hear the word "tensor" used in this way.
stats.stackexchange.com/questions/198061/… – littleO Jan 4 at 10:53

8 Answers

To see the difference between rank-2 tensors and matrices, it is probably best to look at a
concrete example. Actually, this is something which confused me very much back then in the
linear algebra course (where we didn't learn about tensors, only about matrices).

As you may know, you can specify a linear transformation $a$ between vectors by a matrix. Let's
call that matrix $A$. Now if you do a basis transformation, this can also be written as a linear
transformation, so that if the vector in the old basis is $v$, the vector in the new basis is $T^{-1}v$
(where $v$ is a column vector). Now you can ask what matrix describes the transformation $a$ in
the new basis. Well, it's the matrix $T^{-1}AT$.

Well, so far, so good. What I memorized back then is that under a basis change a matrix
transforms as $T^{-1}AT$.

But then, we learned about quadratic forms. Those are calculated using a matrix $A$ as $u^TAv$.

Still no problem, until we learned about how to do basis changes. Now, suddenly the matrix did
not transform as $T^{-1}AT$, but rather as $T^TAT$. Which confused me like hell: how could one
and the same object transform differently when used in different contexts?

Well, the solution is: Because we are actually talking about different objects! In the first case,
we are talking about a tensor which takes vectors to vectors. In the second case, we are talking
about a tensor which takes two vectors into a scalar, or equivalently, which takes a vector to a
covector.

Now both tensors have $n^2$ components, and therefore it is possible to write those components
in an $n \times n$ matrix. And since all operations are linear or bilinear, respectively, the normal matrix-matrix
and matrix-vector products together with transposition can be used to write the operations of
the tensor. Only when looking at basis transformations do you see that the two are indeed not the
same, and the course did us (well, at least me) a disservice by not telling us that we are really
looking at two different objects, and not just at two different uses of the same object, the
matrix.

Indeed, speaking of a rank-2 tensor is not really accurate. The rank of a tensor has to be given
by two numbers. The vector-to-vector mapping is given by a rank-(1,1) tensor, while the
quadratic form is given by a rank-(0,2) tensor. There's also the type (2,0), which also
corresponds to a matrix, but which maps two covectors to a number, and which again
transforms differently.

The bottom line of this is:

- The components of a rank-2 tensor can be written in a matrix.
- The tensor is not that matrix, because different types of tensors can correspond to the same matrix.
- The differences between those tensor types are uncovered by the basis transformations (hence the physicist's definition: "A tensor is what transforms like a tensor").

Of course another difference between matrices and tensors is that matrices are by definition
two-index objects, while tensors can have any rank.
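
To make the distinction concrete, here is a quick numerical check (a sketch in Python with NumPy; the component array and the basis change below are arbitrary examples, not from the answer). The same array of numbers transforms as $T^{-1}AT$ when read as a linear map, a (1,1) tensor, but as $T^TAT$ when read as a bilinear form, a (0,2) tensor, and the two results differ whenever $T$ is not orthogonal:

```python
import numpy as np

# An arbitrary 2x2 component array and a non-orthogonal basis change T.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
T = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Read A as a linear map (a (1,1) tensor): components change as T^-1 A T.
linear_map_new = np.linalg.inv(T) @ A @ T

# Read A as a bilinear form (a (0,2) tensor): components change as T^T A T.
bilinear_form_new = T.T @ A @ T

print(np.allclose(linear_map_new, bilinear_form_new))  # False: two different objects

# Consistency check for the linear-map law: a(v) agrees in either basis.
v_new = np.array([1.0, -2.0])  # coordinates of a vector in the new basis
v_old = T @ v_new              # the same vector in the old basis
print(np.allclose(T @ (linear_map_new @ v_new), A @ v_old))  # True
```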

edited Jan 3 at 12:26 by Martin Thoma · answered Jun 5 '13 at 22:46 by celtschk

1 This is a great answer, because it reveals the question to be wrong. In fact, a matrix is not even a matrix,
much less a tensor. – Ryan Reich Jun 5 '13 at 23:35

3 I'm happy with the first sentence in your comment @RyanReich but utterly confused by: "a matrix is not
even a matrix". Could you elaborate or point towards another source to explain this (unless I've taken it out
of context?) Thanks. – AJP Sep 11 '16 at 19:54

3 @AJP It's been a while, but I believe what I meant by that was that a matrix (array of numbers) is different
from a matrix (linear transformation = (1,1) tensor). The same array of numbers can represent several
different basis-independent objects when a particular basis is chosen for them. – Ryan Reich Sep 11 '16 at
20:00

Indeed there are some confusions people run into when talking about tensors. This happens
mainly in Physics, where tensors are usually described as "objects with components which
transform in the right way". To really understand this matter, let's first remember that those
objects belong to the realm of linear algebra. Even though they are used a lot in many branches
of mathematics, the area of mathematics devoted to the systematic study of those objects is
really linear algebra.

So let's start with two vector spaces $V, W$ over some field of scalars $F$. Now, let $T : V \to W$ be
a linear transformation. I'll assume that you know that we can associate a matrix with $T$. Now,
you might say: so linear transformations and matrices are all the same! And if you say that,
you'll be wrong. The point is: one can associate a matrix with $T$ only when one fixes some basis
of $V$ and some basis of $W$. In that case we will get $T$ represented in those bases, but if we
don't introduce them, $T$ will be $T$ and matrices will be matrices (rectangular arrays of
numbers, or whatever definition you like).

Now, the construction of tensors is much more elaborate than just saying "take a set of
numbers, label them by components, let them transform in the correct way, and you get a tensor". In
truth, this "definition" is a consequence of the actual definition. Indeed, the actual definition of
a tensor is meant to introduce what we call a "universal property".

The point is that if we have a collection of $p$ vector spaces $V_i$ and another vector space $W$, we
can form functions of several variables $f : V_1 \times \cdots \times V_p \to W$. A function like this will be
called multilinear if it's linear in each argument with the others held fixed. Now, since we know
how to study linear transformations, we ask ourselves: is there a construction of a vector space $S$
and one universal multilinear map $T : V_1 \times \cdots \times V_p \to S$ such that $f = g \circ T$ for some
linear $g : S \to W$, and such that this holds for all $f$? If that's always possible, we'll reduce the
study of multilinear maps to the study of linear maps.

The happy part of the story is that this is always possible; the construction is well defined, and $S$
is denoted $V_1 \otimes \cdots \otimes V_p$ and is called the tensor product of the vector spaces, and the map $T$
is the tensor product of the vectors. An element $t \in S$ is called a tensor. Now it's possible to
prove that if $V_i$ has dimension $n_i$, then the following relation holds:

$$\dim(V_1 \otimes \cdots \otimes V_p) = \prod_{i=1}^{p} n_i$$

This means that $S$ has a basis with $\prod_{i=1}^{p} n_i$ elements. In that case, as we know from basic
linear algebra, we can associate with every $t \in S$ its components in some basis. Now, those
components are what people usually call "the tensor". Indeed, when you see in Physics people
saying "consider the tensor $T_{\alpha\beta}$", what they are really saying is "consider the tensor $T$ whose
components in some basis understood by context are $T_{\alpha\beta}$".

So if we consider two vector spaces $V_1$ and $V_2$ with dimensions respectively $n$ and $m$, by the
result I've stated $\dim(V_1 \otimes V_2) = nm$, so with every tensor $t \in V_1 \otimes V_2$ one can associate a set
of $nm$ scalars (the components of $t$), and we are obviously allowed to plug those values into an
$n \times m$ matrix $M(t)$, and so there's a correspondence of tensors of rank 2 with matrices.
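
As a small illustration of this correspondence (a sketch in Python with NumPy; the dimensions and vectors are made up for the example): the components of a tensor in $V_1 \otimes V_2$ form exactly an $n \times m$ array, matching $\dim(V_1 \otimes V_2) = nm$.

```python
import numpy as np

n, m = 3, 2                        # dim V1 = 3, dim V2 = 2
u  = np.array([1.0, 2.0, 3.0])     # a vector in V1 (components in some basis)
v  = np.array([4.0, 5.0])          # a vector in V2
u2 = np.array([0.0, 1.0, -1.0])
v2 = np.array([2.0, 2.0])

# The simple tensor u (x) v has components u_i * v_j: an n-by-m array.
t = np.outer(u, v)
print(t.shape)                     # (3, 2): nm = 6 components

# A general tensor is a linear combination of simple tensors, so once
# bases are fixed it is still just an n-by-m array of components.
w = 2.0 * np.outer(u, v) - np.outer(u2, v2)
print(w.shape)                     # (3, 2)
```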

However, exactly as in the linear transformation case, this correspondence is only possible
when we have selected bases on the vector spaces we are dealing with. Finally, with every
tensor it is also possible to associate a multilinear map. So tensors can be understood in their
fully abstract and algebraic way as elements of the tensor product of vector spaces, and they can also
be understood as multilinear maps (this is better for intuition), and we can associate matrices to
those.

So after all this hassle with linear algebra, the short answer to your question is: matrices are
matrices, tensors of rank 2 are tensors of rank 2; however, there's a correspondence between
them whenever you fix a basis on the space of tensors.

My suggestion is that you read Kostrikin's "Linear Algebra and Geometry", chapter 4 on
multilinear algebra. This book is hard, but it's good for really getting the ideas. Also, you can read
about tensors (constructed in terms of multilinear maps) in good books on multivariable
analysis like "Calculus on Manifolds" by Michael Spivak or "Analysis on Manifolds" by James
Munkres.

edited Nov 22 '15 at 21:33 · answered Jun 6 '13 at 1:23 by user1620696

I must be missing something, but can't you just set $S = W$, $g = \mathrm{Id}_W$? – YoTengoUnLCD Apr 27 at 5:29

The point is that we want a space $S$ constructed from the vector spaces $V_i$ alone, such that we can use it for all $W$.
In other words, given just the $V_i$ we can build the pair $(S, T)$ once and for all, for any $W$ and $f$. That is
why it is called a universal property. – user1620696 Apr 27 at 14:48

As a place-holder answer waiting perhaps for clarification by the questioner's (and others')
reaction: given that your context has a matrix be a table of values (which can be entirely
reasonable)...

In that context, a "vector" is a list of values, a "matrix" is a table (or list of lists), the next item
would be a list of tables (equivalently, a table of lists, or list of lists of lists), then a table of
tables (equivalently, a list of tables of lists, or list of lists of tables...). And so on. All these are
"tensors".

Unsurprisingly, there are many more sophisticated viewpoints that can be taken, but perhaps
this bit of sloganeering is useful?
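
In programming terms (a sketch in Python with NumPy; the shapes are arbitrary), this list/table/list-of-tables hierarchy is just the nesting depth of an array:

```python
import numpy as np

vector = np.zeros(5)             # a list of values: one index
matrix = np.zeros((5, 4))        # a table (list of lists): two indices
cube   = np.zeros((5, 4, 3))     # a list of tables: three indices
hyper  = np.zeros((5, 4, 3, 2))  # a table of tables: four indices

for a in (vector, matrix, cube, hyper):
    print(a.ndim, a.shape)       # ndim counts how deep the nesting goes
```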

answered Jun 5 '13 at 22:04 by paul garrett

1 In addition to the answer of celtschk, this makes tensors make some sense to me (and their different ranks)
– Mr Tsjolder Jun 23 '16 at 18:30

1 So basically a tensor is an array of objects in programming. Tensor1 = array. Tensor2 = array of array.
Tensor3 = array of array of array. – Pacerier Aug 28 at 20:59

@Pacerier, yes, from a programming viewpoint that would be a reasonable starter-version of what a tensor
is. But, as noted in my answer, in various mathematical contexts there is complication, due, in effect, to
"collapsing" in the indexing scheme. – paul garrett Aug 28 at 21:37

Tensors are objects whose transformation laws make them geometrically meaningful. Yes, I am
a physicist, and to me, that is what a tensor is: there is a general idea that tensors are objects
merely described using components with respect to some basis, and as coordinates change
(and thus the associated basis changes), the tensor's components should transform
accordingly. What those laws are follows, then, from the chain rule of multivariable calculus,
nothing more.

What is a matrix? A representation of a linear map, also with respect to some basis. Thus, some
tensors can be represented with matrices.

Why some? Well, contrary to what you may have heard, not all tensors are inherently linear
maps. Yes, you can construct a linear map from any tensor, but that is not what the tensor is.
From a vector, you can construct a linear map acting on a covector to produce a scalar; this is
where the idea comes from, but it's misleading. Consider a different kind of quantity,
representing an oriented plane. We'll call it a bivector: from a bivector, you can construct a
linear map taking in a covector and returning a vector, or a linear map taking two covectors
and returning a scalar.

That you can construct multiple maps from a bivector should indicate that the bivector is, in
itself, neither of these maps but a more fundamental geometric object. Bivectors are
represented using antisymmetric 2-index tensors, or antisymmetric matrices. In fact, you can
form bivectors from two vectors. While you can make that fit with the mapping picture, it
starts to feel incredibly arbitrary.
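
A small numerical illustration of that point (a sketch in Python with NumPy; the vectors are arbitrary, and the bivector $u \wedge v$ is represented here by the antisymmetric array $uv^T - vu^T$): one and the same array yields both of the maps mentioned above.

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

# The bivector u ^ v as an antisymmetric 2-index array.
B = np.outer(u, v) - np.outer(v, u)
print(np.allclose(B, -B.T))    # True: antisymmetric

# Map 1: covector -> vector (contract one index).
w = np.array([2.0, 3.0, 5.0])  # components of a covector
print(B @ w)

# Map 2: two covectors -> scalar (contract both indices).
z = np.array([1.0, -1.0, 0.0])
print(w @ B @ z)
```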

Some tensors are inherently linear maps, however, and all such maps can be written in terms
of some basis as a matrix. Even the Riemann tensor, which has $\binom{n}{2}$ by $\binom{n}{2}$ components, can be
written this way, even though it's usually considered a map of two vectors to two vectors, three
vectors to one vector, four vectors to a scalar... I could go on.

But not all matrices represent information that is suitable for such geometric considerations.

answered Jun 6 '13 at 3:24 by Muphrid

You may want to have a look at these web sites:

http://en.wikipedia.org/wiki/Tensor

https://physics.stackexchange.com/questions/20437/are-matrices-and-second-rank-tensors-the-same-thing

edited Apr 13 at 12:40 by Community ♦ · answered Jun 5 '13 at 22:04 by triomphe

The shortest answer I can come up with is that a tensor is described by a matrix (or a rank-1
vector) together with the type of thing represented. Matrices have no such "type" associated with
them. If you misapply linear algebra to inconsistently typed matrices, the math yields
mathematically valid garbage.

Intuitively, you can't transform apples into peach pie, but you can transform apples into apple
pie. Matrices have no intrinsic type associated with them, so a linear algebra recipe for the
peach-pie transform will produce garbage from the apples matrix.

A more mathematical example: if you have a vector describing text terms in a document
and a vector describing DNA codes, you cannot take the cosine of the normalized vectors (their dot
product) to see how "similar" they are. The dot product is mathematically valid, but since the vectors
are of different types and represent different things, the result is meaningless garbage.
But if you do the same with two text-term vectors, you can make statements about how similar
they are from the result; it is indeed not garbage.

What I'm calling "type" can be defined more rigorously, but the above gets the gist, I think.
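
For the dot-product example above, the computation itself looks the same either way (a sketch in Python with NumPy; the counts are invented for illustration). Only matching types make the resulting number meaningful:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: dot product of the normalized vectors."""
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Two documents over the same term vocabulary: comparable types.
doc1 = np.array([3.0, 0.0, 1.0, 2.0])  # counts for terms t1..t4
doc2 = np.array([2.0, 1.0, 0.0, 2.0])
print(cosine(doc1, doc2))              # a meaningful similarity score

# A DNA-code vector that happens to have the same length: the same
# arithmetic succeeds, but the result is "valid garbage".
dna = np.array([0.0, 5.0, 5.0, 1.0])   # counts for codes A, C, G, T
print(cosine(doc1, dna))               # a number, but it means nothing
```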

answered Mar 30 at 18:34 by pferrel

Not all matrices are tensors, although all tensors of rank 2 are matrices.

Example:

$$T = \begin{bmatrix} x & -y \\ x^2 & -y^2 \end{bmatrix}$$

This matrix $T$ is not a rank-2 tensor. We test $T$ against the rotation matrix

$$A = \begin{bmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{bmatrix}$$

Now, expand the rank-2 tensor transformation law, for example for the component $T'_{11}$:

$$T'_{11} = \sum_{i,j} A_{1i} A_{1j} T_{ij} \tag{1}$$

Now, calculate what the $(1,1)$ entry is if the matrix is built the same way from the rotated coordinates:

$$T'_{11} = x' = x \cos(\theta) + y \sin(\theta) \tag{2}$$

You see that (1) is unequal to (2), so we can conclude that the matrix $T$ isn't a tensor
of rank 2.

A tensor must follow the conversion (transformation) rules, but matrices in general do not.
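
A quick numerical spot-check of this argument (a sketch in Python with NumPy; the sample angle and point are arbitrary): the transformation law $(ATA^T)_{11}$ from (1) does not reproduce the rotated coordinate $x'$ from (2).

```python
import numpy as np

def T(x, y):
    """The candidate component array from the answer."""
    return np.array([[x,     -y],
                     [x**2, -y**2]])

theta = 0.7
x, y = 1.5, -2.0

A = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

lhs = (A @ T(x, y) @ A.T)[0, 0]              # equation (1): sum_ij A_1i A_1j T_ij
rhs = x * np.cos(theta) + y * np.sin(theta)  # equation (2): x' in the new frame

print(lhs, rhs, np.isclose(lhs, rhs))        # they disagree, so T is no tensor
```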

edited Aug 9 at 16:55 · community wiki (4 revs, 2 users; 54% nbro)

"Tensors are generalizations of scalars (that have no indices), vectors (that have exactly one
index), and matrices (that have exactly two indices) to an arbitrary number of indices. " [1]

You can't index into a scalar $s \in \mathbb{R}$. You can index into a vector $v \in \mathbb{R}^d$ with a single index $i$ as
$v_i$. You can index into a matrix $M \in \mathbb{R}^{n \times m}$ with two indices $(i, j)$ as $M_{ij}$.

If I continue adding indices, you might imagine a matrix-like object $T \in \mathbb{R}^{n \times m \times p}$ you can
index into with three indices $(i, j, k)$ as $T_{ijk}$. How about $4, 5, 6, \dots$ indices? A tensor is such an
object that allows any number of indices you want.

In this way scalars, vectors and matrices are special cases of tensors with 0, 1 and 2 indices.
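
Read as array code (a sketch in Python with NumPy; the shapes are chosen arbitrarily), the progression of indices looks like this:

```python
import numpy as np

s = 3.14                              # scalar: no indices
v = np.arange(4.0)                    # vector in R^4: one index
M = np.arange(6.0).reshape(2, 3)      # matrix in R^(2x3): two indices
T = np.arange(24.0).reshape(2, 3, 4)  # rank-3 tensor in R^(2x3x4)

print(v[1])        # v_i
print(M[0, 2])     # M_ij
print(T[1, 2, 3])  # T_ijk -- nothing stops us from adding more indices
```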

[1] http://mathworld.wolfram.com/Tensor.html

answered Nov 30 at 11:28 by Alexander Mathiasen

