
Square matrices[edit]

Main article: Square matrix


A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. The entries a_ii form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix.

Main types[edit]
Name                     | Example with n = 3
Diagonal matrix          | [[a_11, 0, 0], [0, a_22, 0], [0, 0, a_33]]
Lower triangular matrix  | [[a_11, 0, 0], [a_21, a_22, 0], [a_31, a_32, a_33]]
Upper triangular matrix  | [[a_11, a_12, a_13], [0, a_22, a_23], [0, 0, a_33]]
Diagonal and triangular matrices[edit]


If all entries of A below the main diagonal are zero, A is called an upper triangular matrix. Similarly, if all entries of A above the main diagonal are zero, A is called a lower triangular matrix. If all entries outside the main diagonal are zero, A is called a diagonal matrix.
Identity matrix[edit]
The identity matrix I_n of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g.

I_3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]].

It is a square matrix of order n, and also a special kind of diagonal matrix. It is called an identity matrix because multiplication with it leaves a matrix unchanged:

AI_n = I_mA = A for any m-by-n matrix A.

Symmetric or skew-symmetric matrix[edit]


A square matrix A that is equal to its transpose, i.e., A = A^T, is a symmetric matrix. If instead A is equal to the negative of its transpose, i.e., A = −A^T, then A is a skew-symmetric matrix. In complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A^∗ = A, where the star or asterisk denotes the conjugate transpose of the matrix, i.e., the transpose of the complex conjugate of A.
By the spectral theorem, real symmetric matrices and complex Hermitian matrices
have an eigenbasis; i.e., every vector is expressible as a linear combination of
eigenvectors. In both cases, all eigenvalues are real.[29] This theorem can be
generalized to infinite-dimensional situations related to matrices with infinitely many
rows and columns, see below.
Invertible matrix and its inverse[edit]
A square matrix A is called invertible or non-singular if there exists a matrix B such that

AB = BA = I_n.[30][31]

If B exists, it is unique and is called the inverse matrix of A, denoted A^−1.
Definite matrix[edit]
Positive definite matrix                 | Indefinite matrix
Q(x, y) = 1/4 x² + y²                    | Q(x, y) = 1/4 x² − 1/4 y²
Points such that Q(x, y) = 1 (Ellipse)   | Points such that Q(x, y) = 1 (Hyperbola)

A symmetric n×n-matrix A is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors x ∈ R^n the associated quadratic form given by

Q(x) = x^T A x

takes only positive values (respectively only negative values; both some negative and some positive values).[32] If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.

A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, i.e., the matrix is positive-semidefinite and it is invertible.[33] The table at the right shows two possibilities for 2-by-2 matrices.

Allowing as input two different vectors instead yields the bilinear form associated to A:

B_A(x, y) = x^T A y.[34]
Orthogonal matrix[edit]
An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse:

A^T = A^−1,

which entails

A^T A = A A^T = I,

where I is the identity matrix.

An orthogonal matrix A is necessarily invertible (with inverse A^−1 = A^T), unitary (A^−1 = A^∗), and normal (A^∗A = AA^∗). The determinant of any orthogonal matrix is either +1 or −1. A special orthogonal matrix is an orthogonal matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is a pure rotation, while every orthogonal matrix with determinant −1 is either a pure reflection, or a composition of reflection and rotation. The complex analogue of an orthogonal matrix is a unitary matrix.

Main operations[edit]
Trace[edit]
The trace, tr(A), of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors:

tr(AB) = tr(BA).

This is immediate from the definition of matrix multiplication:

tr(AB) = Σ_{i,j} a_ij b_ji = tr(BA).

Also, the trace of a matrix is equal to that of its transpose, i.e.,

tr(A) = tr(A^T).
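These identities are easy to check numerically; a quick NumPy sketch (the random matrices are arbitrary choices, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))  # 3-by-4
B = rng.normal(size=(4, 3))  # 4-by-3, so AB is 3-by-3 and BA is 4-by-4

# The trace is independent of the order of the factors
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True

# The trace equals that of the transpose
C = rng.normal(size=(4, 4))
print(np.isclose(np.trace(C), np.trace(C.T)))        # True
```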
Determinant[edit]
Main article: Determinant

[Figure: A linear transformation on R² given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors to a clockwise one.]

The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R²) or volume (in R³) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.

The determinant of 2-by-2 matrices is given by

det [[a, b], [c, d]] = ad − bc.

The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalises these two formulae to all dimensions.[35]

The determinant of a product of square matrices equals the product of their determinants:

det(AB) = det(A) det(B).[36]

Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.[37] Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, i.e., determinants of smaller matrices.[38] This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.[39]
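A minimal sketch of this recursive definition in Python (the function name and the list-of-rows representation are illustrative choices, not from the source):

```python
def det(m):
    """Determinant by Laplace expansion along the first row.

    `m` is a square matrix given as a list of rows; the base case
    is the 0-by-0 matrix, whose determinant is 1 (empty product).
    """
    n = len(m)
    if n == 0:                      # 0-by-0 matrix
        return 1
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))        # -2 = 1*4 - 2*3
```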
Eigenvalues and eigenvectors[edit]
Main article: Eigenvalues and eigenvectors
A number λ and a non-zero vector v satisfying

Av = λv

are called an eigenvalue and an eigenvector of A, respectively.[nb 1][40] The number λ is an eigenvalue of an n×n-matrix A if and only if A − λI_n is not invertible, which is equivalent to

det(A − λI_n) = 0.[41]

The polynomial p_A in an indeterminate X given by evaluating the determinant det(XI_n − A) is called the characteristic polynomial of A. It is a monic polynomial of degree n. Therefore the polynomial equation p_A(λ) = 0 has at most n different solutions, i.e., eigenvalues of the matrix.[42] They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, p_A(A) = 0, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.
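A small numerical illustration of these notions (a NumPy sketch; the 2×2 example matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues and eigenvectors: A v = lambda v
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                      # [3. 1.]

# Characteristic polynomial of a 2x2 matrix: X^2 - tr(A) X + det(A).
# Cayley-Hamilton: substituting A itself yields the zero matrix.
p_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(p_A)                              # approximately the 2x2 zero matrix
```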

Computational aspects[edit]

Matrix calculations can often be performed with different techniques. Many problems can be solved by both direct algorithms and iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a sequence of vectors x_n converging to an eigenvector when n tends to infinity.[43]

To be able to choose the more appropriate algorithm for each specific problem, it is important to determine both the effectiveness and precision of all the available algorithms. The domain studying these matters is called numerical linear algebra.[44] As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability.
Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations such as additions and multiplications of scalars are necessary to perform some algorithm, e.g., multiplication of matrices. For example, calculating the matrix product of two n-by-n matrices using the definition given above needs n³ multiplications, since for any of the n² entries of the product, n multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only n^2.807 multiplications.[45] A refined approach also incorporates specific features of the computing devices.
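For reference, a sketch of the definition-based product, whose three nested loops make the n³ multiplication count visible:

```python
def matmul(A, B):
    """Naive matrix product of n-by-n matrices A and B (lists of rows).

    The three nested loops perform exactly n**3 scalar multiplications.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```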
In many practical situations additional information about the matrices involved is known. An important case is that of sparse matrices, i.e., matrices most of whose entries are zero. There are specifically adapted algorithms for, say, solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.[46]
An algorithm is, roughly speaking, numerically stable if small deviations in the input values do not lead to big deviations in the result. For example, calculating the inverse of a matrix via Laplace's formula (Adj(A) denotes the adjugate matrix of A)

A^−1 = Adj(A) / det(A)

may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.[47]
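This effect can be observed numerically; a NumPy sketch with a nearly singular (hence badly conditioned) example matrix of my own choosing:

```python
import numpy as np

# A nearly singular matrix: tiny determinant, huge condition number
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10]])

print(np.linalg.det(A))    # about 1e-10
print(np.linalg.cond(A))   # about 4e10: inversion amplifies input errors
```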
Although most computer languages are not designed with commands or libraries for matrices, as early as the 1970s, some engineering desktop computers such as the HP 9830 had ROM cartridges to add BASIC commands for matrices. Some computer languages such as APL were designed to manipulate matrices, and various mathematical programs can be used to aid computing with matrices.[48]

Decomposition[edit]

Main articles: Matrix decomposition, Matrix diagonalization, Gaussian elimination and Montante's method

There are several methods to render matrices into a more easily accessible form. They are generally referred to as matrix decomposition or matrix factorization techniques. The interest of all these techniques is that they preserve certain properties of the matrices in question, such as determinant, rank or inverse, so that these quantities can be calculated after applying the transformation, or that certain matrix operations are algorithmically easier to carry out for some types of matrices.

The LU decomposition factors matrices as a product of a lower triangular (L) and an upper triangular (U) matrix.[49] Once this decomposition is calculated, linear systems can be solved more efficiently, by a simple technique called forward and back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. Gaussian elimination is a similar algorithm; it transforms any matrix to row echelon form.[50] Both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. Singular value decomposition expresses any matrix A as a product UDV^∗, where U and V are unitary matrices and D is a diagonal matrix.
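A sketch of this workflow with SciPy's LU routines (the 2×2 system is an arbitrary example; lu_factor uses partial pivoting, and lu_solve performs the forward and back substitution):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

# Factor once, then solve by forward/back substitution
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)
print(x)                      # solution of A x = b
print(np.allclose(A @ x, b))  # True
```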

[Figure: An example of a matrix in Jordan normal form. The grey blocks are called Jordan blocks.]

The eigendecomposition or diagonalization expresses A as a product VDV^−1, where D is a diagonal matrix and V is a suitable invertible matrix.[51] If A can be written in this form, it is called diagonalizable. More generally, and applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say matrices whose only nonzero entries are the eigenvalues λ_1 to λ_n of A, placed on the main diagonal and possibly entries equal to one directly above the main diagonal, as shown at the right.[52] Given the eigendecomposition, the nth power of A (i.e., n-fold iterated matrix multiplication) can be calculated via

A^n = (VDV^−1)^n = VDV^−1 VDV^−1 ... VDV^−1 = VD^nV^−1

and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which is much easier than doing the exponentiation for A instead. This can be used to compute the matrix exponential e^A, a need frequently arising in solving linear differential equations, matrix logarithms and square roots of matrices.[53] To avoid numerically ill-conditioned situations, further algorithms such as the Schur decomposition can be employed.[54]
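A NumPy sketch of this computation (the diagonalizable example matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
n = 10

# Eigendecomposition A = V D V^-1
eigenvalues, V = np.linalg.eig(A)
D_n = np.diag(eigenvalues ** n)          # powering D = powering its diagonal
A_n = V @ D_n @ np.linalg.inv(V)

print(np.allclose(A_n, np.linalg.matrix_power(A, n)))  # True
```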

Abstract algebraic aspects and generalizations[edit]

Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension is tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realised as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers.[55] Matrices, subject to certain requirements, tend to form groups known as matrix groups.

Matrices with more general entries[edit]

This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries than real or complex numbers. As a first step of generalization, any field, i.e., a set where addition, subtraction, multiplication and division operations are defined and well-behaved, may be used instead of R or C, for example rational numbers or finite fields. For example, coding theory makes use of matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial they may exist only in a larger field than that of the entries of the matrix; for instance they may be complex in case of a matrix with real entries. The possibility to reinterpret the entries of a matrix as elements of a larger field (e.g., to view a real matrix as a complex matrix whose entries happen to be all real) then allows considering each square matrix to possess a full set of eigenvalues. Alternatively one can consider only matrices with entries in an algebraically closed field, such as C, from the outset.
More generally, abstract algebra makes great use of matrices with entries in a ring R.[56] Rings are a more general notion than fields in that a division operation need not exist. The very same addition and multiplication operations of matrices extend to this setting, too. The set M(n, R) of all square n-by-n matrices over R is a ring called matrix ring, isomorphic to the endomorphism ring of the left R-module R^n.[57] If the ring R is commutative, i.e., its multiplication is commutative, then M(n, R) is a unitary noncommutative (unless n = 1) associative algebra over R.
The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in R, generalising the situation over a field F, where every nonzero element is invertible.[58] Matrices over superrings are called supermatrices.[59]
Matrices do not always have all their entries in the same ring or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ordinary ring; but their sizes must fulfil certain compatibility conditions.

Relationship to linear maps[edit]

Linear maps R^n → R^m are equivalent to m-by-n matrices, as described above. More generally, any linear map f: V → W between finite-dimensional vector spaces can be described by a matrix A = (a_ij), after choosing bases v_1, ..., v_n of V, and w_1, ..., w_m of W (so n is the dimension of V and m is the dimension of W), which is such that

f(v_j) = a_1j w_1 + ... + a_mj w_m    for j = 1, ..., n.

In other words, column j of A expresses the image of v_j in terms of the basis vectors w_i of W; thus this relation uniquely determines the entries of the matrix A. Note that the matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent matrices.[60] Many of the above concrete notions can be reinterpreted in this light, for example, the transpose matrix A^T describes the transpose of the linear map given by A, with respect to the dual bases.[61]
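A concrete sketch of this correspondence: building the matrix of a linear map column by column from the images of the standard basis vectors (the example map, a rotation by 90° in R², is an arbitrary choice):

```python
import numpy as np

def matrix_of(f, n):
    """Matrix of a linear map f: R^n -> R^m in the standard bases.

    Column j is the image under f of the j-th standard basis vector.
    """
    return np.column_stack([f(np.eye(n)[:, j]) for j in range(n)])

rotate90 = lambda v: np.array([-v[1], v[0]])  # rotation by 90 degrees in R^2
print(matrix_of(rotate90, 2))
# [[ 0. -1.]
#  [ 1.  0.]]
```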
These properties can be restated in a more natural way: the category of all matrices with entries in a field, with multiplication as composition, is equivalent to the category of finite-dimensional vector spaces and linear maps over this field. More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules R^m and R^n for an arbitrary ring R with unity. When n = m, composition of these maps is possible, and this gives rise to the matrix ring of n×n matrices representing the endomorphism ring of R^n.

Matrix groups[edit]

Main article: Matrix group

A group is a mathematical structure consisting of a set of objects together with a binary operation, i.e., an operation combining any two objects to a third, subject to certain requirements.[62] A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group.[nb 2][63] Since in a group every element has to be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups.

Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (i.e., a smaller group contained in) their general linear group, called a special linear group.[64] Orthogonal matrices, determined by the condition

M^T M = I,

form the orthogonal group.[65] Every orthogonal matrix has determinant 1 or −1. Orthogonal matrices with determinant 1 form a subgroup called special orthogonal group. Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group.[66] General groups can be studied using matrix groups, which are comparatively well-understood, by means of representation theory.[67]

Infinite matrices[edit]

It is also possible to consider matrices with infinitely many rows and/or columns[68] even if, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication and transposition can still be defined without problem; however matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general.
If R is any ring with unity, then the ring of endomorphisms of M = ⊕_{i∈I} R as a right R-module is isomorphic to the ring of column finite matrices CFM_I(R) whose entries are indexed by I × I, and whose columns each contain only finitely many nonzero entries. The endomorphisms of M considered as a left R-module result in an analogous object, the row finite matrices RFM_I(R) whose rows each only have finitely many nonzero entries.
If infinite matrices are used to describe linear maps, then only those matrices can be used all of whose columns have but a finite number of nonzero entries, for the following reason. For a matrix A to describe a linear map f: V → W, bases for both spaces must have been chosen; recall that by definition this means that every vector in the space can be written uniquely as a (finite) linear combination of basis vectors, so that written as a (column) vector v of coefficients, only finitely many entries v_i are nonzero. Now the columns of A describe the images by f of individual basis vectors of V in the basis of W, which is only meaningful if these columns have only finitely many nonzero entries. There is no restriction on the rows of A however: in the product Av there are only finitely many nonzero coefficients of v involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined. Moreover this amounts to forming a linear combination of the columns of A that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries, because each of those columns does. One also sees that products of two matrices of the given type are well defined (provided as usual that the column-index and row-index sets match), are again of the same type, and correspond to the composition of linear maps.
If R is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously of course, the matrices whose row sums are absolutely convergent series also form a ring.

In that vein, infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that have to be imposed. However, the explicit point of view of matrices tends to obfuscate the matter,[nb 3] and the abstract and more powerful tools of functional analysis can be used instead.

Empty matrices[edit]

An empty matrix is a matrix in which the number of rows or columns (or both) is zero.[69][70] Empty matrices help dealing with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1 as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants.
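NumPy is one system that supports such matrices, which gives a quick check of the statements above (a sketch):

```python
import numpy as np

A = np.zeros((3, 0))          # a 3-by-0 empty matrix
B = np.zeros((0, 3))          # a 0-by-3 empty matrix

print((A @ B).shape)          # (3, 3): the 3-by-3 zero matrix
print((B @ A).shape)          # (0, 0): a 0-by-0 matrix
print(np.linalg.det(np.zeros((0, 0))))   # 1.0, the empty product
```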

Applications[edit]

There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of alternatives the players choose.[71] Text mining and automated thesaurus compilation makes use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.[72]
Complex numbers can be represented by particular real 2-by-2 matrices via

a + ib ↔ [[a, −b], [b, a]],

under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions[73] and Clifford algebras in general.
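A sketch of this correspondence in NumPy (the sample values are arbitrary):

```python
import numpy as np

def as_matrix(z):
    """Real 2-by-2 matrix representing the complex number z = a + ib."""
    a, b = z.real, z.imag
    return np.array([[a, -b],
                     [b,  a]])

z, w = 1 + 2j, 3 - 1j
# Matrix multiplication corresponds to complex multiplication
print(as_matrix(z) @ as_matrix(w))
print(as_matrix(z * w))          # the same matrix
```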
Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break.[74] Computer graphics uses matrices both to represent objects and to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation.[75] Matrices over a polynomial ring are important in the study of control theory.
Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.

Graph theory[edit]

[Figure: An undirected graph with adjacency matrix.]

The adjacency matrix of a finite graph is a basic notion of graph theory.[76] It records which vertices of the graph are connected by an edge. Matrices containing just two different values (1 and 0 meaning for example "yes" and "no", respectively) are called logical matrices. The distance (or cost) matrix contains information about distances of the edges.[77] These concepts can be applied to websites connected by hyperlinks or cities connected by roads etc., in which case (unless the road network is extremely dense) the matrices tend to be sparse, i.e., contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.
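A sketch using the adjacency matrix of a triangle graph; it also shows the standard fact (not from the source) that entry (i, j) of the k-th matrix power counts walks of length k from vertex i to vertex j:

```python
import numpy as np

# Adjacency matrix of a triangle: edges 0-1, 1-2, 0-2
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

walks2 = np.linalg.matrix_power(A, 2)
print(walks2[0, 0])   # 2 walks of length 2 from vertex 0 back to itself
```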

Analysis and geometry[edit]

The Hessian matrix of a differentiable function ƒ: R^n → R consists of the second derivatives of ƒ with respect to the several coordinate directions, i.e.,[78]

H(ƒ) = [∂²ƒ / (∂x_i ∂x_j)].

[Figure: At the saddle point (x = 0, y = 0) (red) of the function f(x, y) = x² − y², the Hessian matrix [[2, 0], [0, −2]] is indefinite.]

It encodes information about the local growth behaviour of the function: given a critical point x = (x_1, ..., x_n), i.e., a point where the first partial derivatives ∂ƒ/∂x_i of ƒ vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above).[79]
Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map f: R^n → R^m. If f_1, ..., f_m denote the components of f, then the Jacobi matrix is defined as[80]

J(f) = [∂f_i / ∂x_j], with 1 ≤ i ≤ m and 1 ≤ j ≤ n.

If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.[81]
Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has decisive influence on the set of possible solutions of the equation in question.[82]

The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.[83]

Probability theory and statistics[edit]

[Figure: Two different Markov chains. The chart depicts the number of particles (of a total of 1000) in state "2". Both limiting values can be determined from the transition matrices, which are given in the figure (red and black).]

Stochastic matrices are square matrices whose rows are probability vectors, i.e., whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states.[84] A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain like absorbing states, i.e., states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.[85]
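A minimal sketch of iterating such a chain (the two-state transition matrix is an arbitrary example; each row sums to one):

```python
import numpy as np

# Row-stochastic transition matrix: row i is the distribution of the
# next state given the current state i
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

dist = np.array([1.0, 0.0])      # start surely in state 0
for _ in range(50):
    dist = dist @ P              # one step of the Markov chain
print(dist)                      # approaches the stationary distribution
```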
Statistics also makes use of matrices in many different forms.[86]

Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables.[87]
Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (x_1, y_1), (x_2, y_2), ..., (x_N, y_N), by a linear function

y_i ≈ ax_i + b, i = 1, ..., N,

which can be formulated in terms of matrices, related to the singular value decomposition of matrices.[88]
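In matrix form, one minimizes the norm of Xβ − y over β = (a, b), where X stacks the x_i alongside a column of ones; a NumPy sketch on made-up data:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])        # roughly y = 2x + 1

# Design matrix: one column for the slope a, one for the intercept b
X = np.column_stack([x, np.ones_like(x)])

# Least-squares solution of X @ [a, b] ~ y (computed via the SVD internally)
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
print(a, b)                               # approximately 2 and 1
```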

Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.[89][90]

Symmetries and transformations in physics[edit]

Further information: Symmetry in physics

Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.[91] For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to, the basic quark states that define particles with specific and distinct masses.[92]

Linear combinations of quantum states[edit]

The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states.[93] This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates.[94]

Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.[95]

Normal modes[edit]

A general application of matrices in physics is to the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms.[96] They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.[97]

Geometrical optics[edit]

Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix called ray transfer matrix: the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. Actually, there are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components' matrices.[98]
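A sketch of composing such ray transfer matrices in this small-deflection model, using the standard forms of a free-space translation over distance d and a thin lens of focal length f (conventions differ on the ordering of the two components; here the state is taken as offset first, slope second):

```python
import numpy as np

def translation(d):
    """Free propagation over distance d: the offset grows by d * slope."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    """Thin lens of focal length f: the slope changes with the offset."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# Ray state: (distance from the optical axis, slope)
ray = np.array([1.0, 0.0])                    # parallel ray, 1 unit off-axis

# Optical system = product of component matrices (applied right to left)
system = translation(2.0) @ thin_lens(2.0)    # lens, then propagate d = f = 2
print(system @ ray)                           # offset ~ 0: ray meets the axis at the focus
```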

Electronics[edit]

Traditional mesh analysis in electronics leads to a system of linear equations that can be described with a matrix. The behaviour of many electronic components can be described using matrices. Let A be a 2-dimensional vector with the component's input voltage v_1 and input current i_1 as its elements, and let B be a 2-dimensional vector with the component's output voltage v_2 and output current i_2 as its elements. Then the behaviour of the electronic component can be described by B = H · A, where H is a 2 × 2 matrix containing one impedance element (h_12), one admittance element (h_21) and two dimensionless elements (h_11 and h_22). Calculating a circuit now reduces to multiplying matrices.
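A minimal sketch of this two-port description (the h-parameter values below are made up for illustration, not taken from any real component):

```python
import numpy as np

# Hypothetical h-parameter matrix of a two-port component:
# h11, h22 dimensionless; h12 an impedance; h21 an admittance
H = np.array([[0.9, 10.0],    # [h11, h12]
              [0.02, 0.95]])  # [h21, h22]

A = np.array([5.0, 0.1])      # input voltage v1 (volts), input current i1 (amps)
B = H @ A                     # output voltage v2 and output current i2
print(B)
```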

History[edit]

Matrices have a long history of application in solving linear equations but they were known as arrays until the 1800s. The Chinese text The Nine Chapters on the Mathematical Art written in the 10th–2nd century BCE is the first example of the use of array methods to solve simultaneous equations,[99] including the concept of determinants. In 1545 Italian mathematician Girolamo Cardano brought the method to Europe when he published Ars Magna.[100] The Japanese mathematician Seki used the same array methods to solve simultaneous equations in 1683.[101] The Dutch mathematician Jan de Witt represented transformations using arrays in his 1659 book Elements of Curves.[102] Between 1700 and 1710 Gottfried Wilhelm Leibniz publicized the use of arrays for recording information or solutions and experimented with over 50 different systems of arrays.[100] Cramer presented his rule in 1750.

The term "matrix" (Latin for "womb", derived from mater, mother[103]) was coined by James Joseph Sylvester in 1850,[104] who understood a matrix as an object giving rise to a number of determinants today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows. In an 1851 paper, Sylvester explains:
I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which
different systems of determinants may be engendered as from the womb of a common
parent.[105]

Arthur Cayley published a treatise on geometric transformations using matrices that were not rotated versions of the coefficients being investigated as had previously been done. Instead he defined operations such as addition, subtraction, multiplication, and division as transformations of those matrices and showed the associative and distributive properties held true. Cayley investigated and demonstrated the non-commutative property of matrix multiplication as well as the commutative property of matrix addition.[100]

Early matrix theory had limited the use of arrays almost exclusively to determinants and Arthur Cayley's abstract matrix operations were revolutionary. He was instrumental in proposing a matrix concept independent of equation systems. In 1858 Cayley published his Memoir on the theory of matrices[106][107] in which he proposed and demonstrated the Cayley–Hamilton theorem.[100]

An English mathematician named Cullis was the first to use modern bracket notation for matrices in 1913 and he simultaneously demonstrated the first significant use of the notation A = [a_i,j] to represent a matrix, where a_i,j refers to the ith row and the jth column.[100]

The study of determinants sprang from several sources.[108] Number-theoretical problems led Gauss to relate coefficients of quadratic forms, i.e., expressions such as x² + xy − 2y², and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as definition of the determinant of a matrix A = [a_i,j] the following: replace the powers a_j^k by a_jk in the polynomial

a_1 a_2 ⋯ a_n ∏_{i < j} (a_j − a_i),

where ∏ denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric matrices are real.[109] Jacobi studied "functional determinants", later called Jacobi determinants by Sylvester, which can be used to describe geometric transformations at a local (or infinitesimal) level, see above; Kronecker's Vorlesungen über die Theorie der Determinanten[110] and Weierstrass' Zur Determinantentheorie,[111] both published in 1903, first treated determinants axiomatically, as opposed to previous more concrete approaches such as the mentioned formula of Cauchy. At that point, determinants were firmly established.

Many theorems were first established for small matrices only, for example the Cayley–Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century the Gauss–Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Jordan. In the early 20th century, matrices attained a central role in linear algebra,[112] partially due to their use in classification of the hypercomplex number systems of the previous century.

The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many rows and columns.[113] Later, von Neumann carried out the mathematical formulation of quantum mechanics, by further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions.

Other historical usages of the word "matrix" in mathematics[edit]

The word has been used in unusual ways by at least two authors of historical importance. Bertrand Russell and Alfred North Whitehead in their Principia Mathematica (1910–1913) use the word "matrix" in the context of their Axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the "bottom" (0 order) the function is identical to its extension:
Let us give the name of matrix to any function, of however many variables, which does
not involve any apparent variables. Then any possible function other than a matrix is
derived from a matrix by means of generalization, i.e., by considering the proposition
which asserts that the function in question is true with all possible values or with some
value of one of the arguments, the other argument or arguments remaining
undetermined.[114]
For example a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable, e.g., y, by "considering" the function for all possible values of "individuals" a_i substituted in place of variable x. And then the resulting collection of functions of the single variable y, i.e., ∀a_i: Φ(a_i, y), can be reduced to a "matrix" of values by "considering" the function for all possible values of "individuals" b_i substituted in place of variable y:

∀b_j∀a_i: Φ(a_i, b_j).

Alfred Tarski in his 1946 Introduction to Logic used the word "matrix" synonymously with the notion of truth table as used in mathematical logic.[115]
