You are on page 1of 72

The Matrix Cookbook

[ http://matrixcookbook.com ]
Kaare Brandt Petersen
Michael Syskind Pedersen
Version: November 15, 2012

Introduction
What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them.
It is collected in this form for the convenience of anyone who wants a quick
desktop reference .
Disclaimer: The identities, approximations and relations presented here were
obviously not invented but collected, borrowed and copied from a large amount
of sources. These sources include similar but shorter notes found on the internet
and appendices in books - see the references for a full list.
Errors: Very likely there are errors, typos, and mistakes for which we apologize and would be grateful to receive corrections at cookbook@2302.dk.
Its ongoing: The project of keeping a large repository of relations involving
matrices is naturally ongoing and the version will be apparent from the date in
the header.
Suggestions: Your suggestion for additional content or elaboration of some
topics is most welcome acookbook@2302.dk.
Keywords: Matrix algebra, matrix relations, matrix identities, derivative of
determinant, derivative of inverse matrix, dierentiate a matrix.
Acknowledgements: We would like to thank the following for contributions
and suggestions: Bill Baxter, Brian Templeton, Christian Rishj, Christian
Schr
oppel, Dan Boley, Douglas L. Theobald, Esben Hoegh-Rasmussen, Evripidis
Karseras, Georg Martius, Glynne Casteel, Jan Larsen, Jun Bin Gao, J
urgen
Struckmeier, Kamil Dedecius, Karim T. Abou-Moustafa, Korbinian Strimmer,
Lars Christiansen, Lars Kai Hansen, Leland Wilkinson, Liguo He, Loic Thibaut,
Markus Froeb, Michael Hubatka, Miguel Barao, Ole Winther, Pavel Sakov,
Stephan Hattinger, Troels Pedersen, Vasile Sima, Vincent Rabaud, Zhaoshui
He. We would also like thank The Oticon Foundation for funding our PhD
studies.

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 2

CONTENTS

CONTENTS

Contents
1 Basics
1.1 Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.2 Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.3 The Special Case 2x2 . . . . . . . . . . . . . . . . . . . . . . . . .
2 Derivatives
2.1 Derivatives
2.2 Derivatives
2.3 Derivatives
2.4 Derivatives
2.5 Derivatives
2.6 Derivatives
2.7 Derivatives
2.8 Derivatives

of
of
of
of
of
of
of
of

a Determinant . . . . . . . . . . . .
an Inverse . . . . . . . . . . . . . . .
Eigenvalues . . . . . . . . . . . . . .
Matrices, Vectors and Scalar Forms
Traces . . . . . . . . . . . . . . . . .
vector norms . . . . . . . . . . . . .
matrix norms . . . . . . . . . . . . .
Structured Matrices . . . . . . . . .

3 Inverses
3.1 Basic . . . . . . . . . . .
3.2 Exact Relations . . . . .
3.3 Implication on Inverses .
3.4 Approximations . . . . .
3.5 Generalized Inverse . . .
3.6 Pseudo Inverse . . . . .

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

6
6
6
7

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

8
8
9
10
10
12
14
14
14

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

17
17
18
20
20
21
21

4 Complex Matrices
24
4.1 Complex Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2 Higher order and non-linear derivatives . . . . . . . . . . . . . . . 26
4.3 Inverse of complex sum . . . . . . . . . . . . . . . . . . . . . . . 27
5 Solutions and Decompositions
5.1 Solutions to linear equations .
5.2 Eigenvalues and Eigenvectors
5.3 Singular Value Decomposition
5.4 Triangular Decomposition . .
5.5 LU decomposition . . . . . .
5.6 LDM decomposition . . . . .
5.7 LDL decompositions . . . . .

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

28
28
30
31
32
32
33
33

6 Statistics and Probability


34
6.1 Definition of Moments . . . . . . . . . . . . . . . . . . . . . . . . 34
6.2 Expectation of Linear Combinations . . . . . . . . . . . . . . . . 35
6.3 Weighted Scalar Variable . . . . . . . . . . . . . . . . . . . . . . 36
7 Multivariate Distributions
7.1 Cauchy . . . . . . . . . .
7.2 Dirichlet . . . . . . . . . .
7.3 Normal . . . . . . . . . .
7.4 Normal-Inverse Gamma .
7.5 Gaussian . . . . . . . . . .
7.6 Multinomial . . . . . . . .

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

37
37
37
37
37
37
37

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 3

CONTENTS

7.7
7.8
7.9

CONTENTS

Students t . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Wishart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Wishart, Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . .

8 Gaussians
8.1 Basics . . . . . . . .
8.2 Moments . . . . . .
8.3 Miscellaneous . . . .
8.4 Mixture of Gaussians

37
38
39

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

40
40
42
44
44

9 Special Matrices
9.1 Block matrices . . . . . . . . . . . . . . . .
9.2 Discrete Fourier Transform Matrix, The . .
9.3 Hermitian Matrices and skew-Hermitian . .
9.4 Idempotent Matrices . . . . . . . . . . . . .
9.5 Orthogonal matrices . . . . . . . . . . . . .
9.6 Positive Definite and Semi-definite Matrices
9.7 Singleentry Matrix, The . . . . . . . . . . .
9.8 Symmetric, Skew-symmetric/Antisymmetric
9.9 Toeplitz Matrices . . . . . . . . . . . . . . .
9.10 Transition matrices . . . . . . . . . . . . . .
9.11 Units, Permutation and Shift . . . . . . . .
9.12 Vandermonde Matrices . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

46
46
47
48
49
49
50
52
54
54
55
56
57

10 Functions and Operators


10.1 Functions and Series . . . . .
10.2 Kronecker and Vec Operator
10.3 Vector Norms . . . . . . . . .
10.4 Matrix Norms . . . . . . . . .
10.5 Rank . . . . . . . . . . . . . .
10.6 Integral Involving Dirac Delta
10.7 Miscellaneous . . . . . . . . .

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

58
58
59
61
61
62
62
63

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
Functions
. . . . . .

.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

A One-dimensional Results
64
A.1 Gaussian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
A.2 One Dimensional Mixture of Gaussians . . . . . . . . . . . . . . . 65
B Proofs and Details
66
B.1 Misc Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 4

CONTENTS

CONTENTS

Notation and Nomenclature


A
Aij
Ai
Aij
An
A 1
A+
A1/2
(A)ij
Aij
[A]ij
a
ai
ai
a
<z
<z
<Z
=z
=z
=Z

Matrix
Matrix indexed for some purpose
Matrix indexed for some purpose
Matrix indexed for some purpose
Matrix indexed for some purpose or
The n.th power of a square matrix
The inverse matrix of the matrix A
The pseudo inverse matrix of the matrix A (see Sec. 3.6)
The square root of a matrix (if unique), not elementwise
The (i, j).th entry of the matrix A
The (i, j).th entry of the matrix A
The ij-submatrix, i.e. A with i.th row and j.th column deleted
Vector (column-vector)
Vector indexed for some purpose
The i.th element of the vector a
Scalar
Real part of a scalar
Real part of a vector
Real part of a matrix
Imaginary part of a scalar
Imaginary part of a vector
Imaginary part of a matrix

det(A)
Tr(A)
diag(A)
eig(A)
vec(A)
sup
||A||
AT
A T
A
AH

Determinant of A
Trace of the matrix A
Diagonal matrix of the matrix A, i.e. (diag(A))ij = ij Aij
Eigenvalues of the matrix A
The vector-version of the matrix A (see Sec. 10.2.2)
Supremum of a set
Matrix norm (subscript if any denotes what norm)
Transposed matrix
The inverse of the transposed and vice versa, A T = (A 1 )T = (AT )
Complex conjugated matrix
Transposed and complex conjugated matrix (Hermitian)

A B
AB

Hadamard (elementwise) product


Kronecker product

0
I
Jij

The null matrix. Zero in all entries.


The identity matrix
The single-entry matrix, 1 at (i, j) and zero elsewhere
A positive definite matrix
A diagonal matrix

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 5

Basics
(AB)

(ABC...)

(A )

...C

1 T

(A

A +B

(AB)T

BT A T

(3)
(4)
(5)

...C B A

(A )
(A + B)H

=
=

(A )
AH + BH

(AB)H

BH A H

(ABC...)

(2)

(1)
1

(ABC...)

1.2

(A + B)

1.1

BASICS

(6)

1 H

(7)
(8)
(9)

...C B A

(10)

Trace
Tr(A)

Tr(A)

Aii

(11)

Tr(A)

Pi

Tr(A )

(13)

Tr(AB)

Tr(BA)

(14)

Tr(A + B)

Tr(A) + Tr(B)

(15)

i i,
T

= eig(A)

(12)

Tr(ABC)

Tr(BCA) = Tr(CAB)

(16)

aT a

Tr(aaT )

(17)

Determinant

Let A be an n n matrix.
det(A)

det(cA)

det(AT )

det(A)

det(AB)

i i

= eig(A)

cn det(A),

(18)

if A 2 Rnn

(19)
(20)

det(A) det(B)

(21)

1/ det(A)

(22)

det(An )

det(A)n

(23)

det(A

det(I + uv )

1+u v

(24)

det(I + A) = 1 + det(A) + Tr(A)

(25)

For n = 2:
For n = 3:
1
det(I + A) = 1 + det(A) + Tr(A) + Tr(A)2
2

1
Tr(A2 )
2

(26)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 6

1.3

The Special Case 2x2

BASICS

For n = 4:
det(I + A)

1 + det(A) + Tr(A) +
+Tr(A)2
1
+ Tr(A)3
6

1
2

1
Tr(A2 )
2
1
1
Tr(A)Tr(A2 ) + Tr(A3 )
2
3

(27)

For small ", the following approximation holds


1
det(I + "A)
= 1 + det(A) + "Tr(A) + "2 Tr(A)2
2

1.3

1 2
" Tr(A2 )
2

(28)

The Special Case 2x2

Consider the matrix A


A=

A11
A21

A12
A22

Determinant and trace


det(A) = A11 A22

A12 A21

(29)

Tr(A) = A11 + A22

(30)

Eigenvalues
2

p
Tr(A) + Tr(A)2
=
2
1+

Eigenvectors
v1 /

4 det(A)
2
2

= Tr(A)

1
=
det(A)

=
1 2

A12
A11
1

Inverse
A

Tr(A) + det(A) = 0

v2 /

A22
A21

Tr(A)

Tr(A)2
2

4 det(A)

= det(A)

A12
A11
2
A12
A11

(31)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 7

DERIVATIVES

Derivatives

This section is covering dierentiation of a number of expressions with respect to


a matrix X. Note that it is always assumed that X has no special structure, i.e.
that the elements of X are independent (e.g. not symmetric, Toeplitz, positive
definite). See section 2.8 for dierentiation of structured matrices. The basic
assumptions can be written in a formula as

that is for e.g. vector forms,

@x
@xi
=
@y i
@y

@Xkl
=
@Xij

ik lj

@x
@yi

@x
@y

=
i

(32)

@x
@y

=
ij

@xi
@yj

The following rules are general and very useful when deriving the dierential of
an expression ([19]):
@A
@(X)
@(X + Y)
@(Tr(X))
@(XY)
@(X Y)
@(X Y)
@(X 1 )
@(det(X))
@(det(X))
@(ln(det(X)))
@XT
@XH

2.1
2.1.1

=
=
=
=
=
=
=
=
=
=
=
=
=

0
@X
@X + @Y
Tr(@X)
(@X)Y + X(@Y)
(@X) Y + X (@Y)
(@X) Y + X (@Y)
X 1 (@X)X 1
Tr(adj(X)@X)
det(X)Tr(X 1 @X)
Tr(X 1 @X)
(@X)T
(@X)H

(A is a constant)

(33)
(34)
(35)
(36)
(37)
(38)
(39)
(40)
(41)
(42)
(43)
(44)
(45)

Derivatives of a Determinant
General form
@ det(Y)
@x
X @ det(X)
Xjk
@Xik

=
=

det(Y)Tr Y
ij

(46)

@x

det(X)

@ 2 det(Y)
@x2

1 @Y

"

(47)
"

det(Y) Tr Y

@Y
1 @ @x

@x

@Y
+Tr Y 1
Tr Y
@x

1 @Y
Tr Y
Y
@x

#
1 @Y

@x
1 @Y

@x

(48)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 8

2.2

Derivatives of an Inverse

2.1.2

DERIVATIVES

Linear forms
@ det(X)
@X
X @ det(X)
Xjk
@Xik

=
=

det(X)(X
ij

1 T

(49)

det(X)

(50)

@ det(AXB)
@X

2.1.3

det(AXB)(X

1 T

) = det(AXB)(XT )

(51)

Square forms

If X is square and invertible, then


@ det(XT AX)
= 2 det(XT AX)X
@X

(52)

If X is not square but A is symmetric, then


@ det(XT AX)
= 2 det(XT AX)AX(XT AX)
@X

(53)

If X is not square and A is not symmetric, then


@ det(XT AX)
= det(XT AX)(AX(XT AX)
@X
2.1.4

+ AT X(XT AT X)

(54)

Other nonlinear forms

Some special cases are (See [9, 7])


@ ln det(XT X)|
@X
@ ln det(XT X)
@X+
@ ln | det(X)|
@X
@ det(Xk )
@X

2.2

2(X+ )T

(55)

2XT

(56)

1 T

) = (XT )

(X

k det(Xk )X

(57)
(58)

Derivatives of an Inverse

From [27] we have the basic identity


@Y 1
=
@x

1 @Y

@x

(59)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 9

2.3

Derivatives of Eigenvalues

DERIVATIVES

from which it follows


@(X 1 )kl
@Xij

(X

)ki (X

)jl

(60)

@aT X 1 b
=
X T abT X T
(61)
@X
@ det(X 1 )
=
det(X 1 )(X 1 )T
(62)
@X
@Tr(AX 1 B)
=
(X 1 BAX 1 )T
(63)
@X
@Tr((X + A) 1 )
=
((X + A) 1 (X + A) 1 )T
(64)
@X
From [32] we have the following result: Let A be an n n invertible square
matrix, W be the inverse of A, and J(A) is an n n -variate and dierentiable
function with respect to A, then the partial dierentials of J with respect to A
and W satisfy
@J
@J
= A T
A T
@A
@W

2.3

Derivatives of Eigenvalues

@ X
@
eig(X) =
Tr(X) = I
(65)
@X
@X
@ Y
@
eig(X) =
det(X) = det(X)X T
(66)
@X
@X
If A is real and symmetric, i and vi are distinct eigenvalues and eigenvectors
of A (see (276)) with viT vi = 1, then [33]
@

@vi

2.4
2.4.1

=
=

viT @(A)vi
( iI

(67)

A) @(A)vi

(68)

Derivatives of Matrices, Vectors and Scalar Forms


First Order
@xT a
@x
@aT Xb
@X
@aT XT b
@X
@aT Xa
@X
@X
@Xij
@(XA)ij
@Xmn
@(XT A)ij
@Xmn

@aT x
@x

abT

(70)

baT

(71)

@aT XT a
@X

Jij

(69)

aaT

(72)
(73)

im (A)nj

(Jmn A)ij

(74)

in (A)mj

(Jnm A)ij

(75)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 10

2.4

Derivatives of Matrices, Vectors and Scalar Forms

2.4.2

DERIVATIVES

Second Order
@ X
Xkl Xmn
@Xij

@bT XT Xc
@X
@(Bx + b)T C(Dx + d)
@x
@(XT BX)kl
@Xij
@(XT BX)
@Xij

Xkl

(76)

X(bcT + cbT )

(77)

BT C(Dx + d) + DT CT (Bx + b)

(78)

klmn

kl

=
=

lj (X

B)ki +

kj (BX)il

XT BJij + Jji BX

(79)

(Jij )kl =

ik jl

(80)

See Sec 9.7 for useful properties of the Single-entry matrix Jij
@xT Bx
@x
@bT XT DXc
@X
@
(Xb + c)T D(Xb + c)
@X

(B + BT )x

(81)

DT XbcT + DXcbT

(82)

(D + DT )(Xb + c)bT

(83)

Assume W is symmetric, then


@
(x As)T W(x As)
@s
@
(x s)T W(x s)
@x
@
(x s)T W(x s)
@s
@
(x As)T W(x As)
@x
@
(x As)T W(x As)
@A

2AT W(x

=
=

2W(x

=
=

s)

2W(x
2W(x

As)

(84)
(85)

s)

(86)

As)

(87)

As)sT

2W(x

(88)

As a case with complex values the following holds


@(a

xH b)2
@x

2b(a

xH b)

(89)

This formula is also known from the LMS algorithm [14]


2.4.3

Higher-order and non-linear


n
X1
@(Xn )kl
=
(Xr Jij Xn
@Xij
r=0

1 r

)kl

(90)

For proof of the above, see B.1.3.

n
X1
@ T n
a X b=
(Xr )T abT (Xn
@X
r=0

1 r T

(91)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 11

2.5

Derivatives of Traces

@ T n T n
a (X ) X b
@X

n
X1 h

Xn

1 r

DERIVATIVES

abT (Xn )T Xr

r=0

+(Xr )T Xn abT (Xn

1 r T

(92)

See B.1.3 for a proof.


Assume s and r are functions of x, i.e. s = s(x), r = r(x), and that A is a
constant, then

T
T
@ T
@s
@r
s Ar =
Ar +
AT s
(93)
@x
@x
@x
@ (Ax)T (Ax)
@x (Bx)T (Bx)

=
=

2.4.4

@ xT AT Ax
@x xT BT Bx
AT Ax
xT AT AxBT Bx
2 T
2
x BBx
(xT BT Bx)2

(94)
(95)

Gradient and Hessian

Using the above we have for the gradient and the Hessian
f
@f
rx f =
@x
@2f
@x@xT

2.5

xT Ax + bT x

(96)

(A + AT )x + b

(97)

A + AT

(98)

Derivatives of Traces

Assume F (X) to be a dierentiable function of each of the elements of X. It


then holds that
@Tr(F (X))
= f (X)T
@X
where f () is the scalar derivative of F ().
2.5.1

First Order
@
Tr(X)
@X
@
Tr(XA)
@X
@
Tr(AXB)
@X
@
Tr(AXT B)
@X
@
Tr(XT A)
@X
@
Tr(AXT )
@X
@
Tr(A X)
@X

(99)

AT

(100)

A T BT

(101)

BA

(102)

(103)

(104)

Tr(A)I

(105)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 12

2.5

Derivatives of Traces

2.5.2

DERIVATIVES

Second Order
@
Tr(X2 )
@X
@
Tr(X2 B)
@X
@
Tr(XT BX)
@X
@
Tr(BXXT )
@X
@
Tr(XXT B)
@X
@
Tr(XBXT )
@X
@
Tr(BXT X)
@X
@
Tr(XT XB)
@X
@
Tr(AXBX)
@X
@
Tr(XT X)
@X
@
Tr(BT XT CXB)
@X

@
Tr XT BXC
@X
@
Tr(AXBXT C)
@X
h
i
@
Tr (AXB + C)(AXB + C)T
@X
@
Tr(X X)
@X

2XT

(106)

(XB + BX)T

(107)

BX + BT X

(108)

BX + BT X

(109)

BX + BT X

(110)

XBT + XB

(111)

XBT + XB

(112)

XBT + XB

(113)

AT XT BT + BT XT AT

(114)

@
Tr(XXT )
@X

(115)

CT XBBT + CXBBT

(116)

BXC + BT XCT

(117)

AT CT XBT + CAXB

(118)

2AT (AXB + C)BT

(119)

@
Tr(X)Tr(X) = 2Tr(X)I(120)
@X

k(Xk

2X

See [7].
2.5.3

Higher Order
@
Tr(Xk )
@X
@
Tr(AXk )
@X
T T

T
@
@X Tr B X CXX CXB

k
X1

1 T

(Xr AXk

(121)
r 1 T

(122)

r=0

CXXT CXBBT
+CT XBBT XT CT X
+CXBBT XT CX
+CT XXT CT XBBT

(123)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 13

2.6

Derivatives of vector norms

Other
@
Tr(AX 1 B) = (X 1 BAX 1 )T =
@X
Assume B and C to be symmetric, then

DERIVATIVES

2.5.4

h
@
Tr (XT CX)
@X

@
Tr (XT CX)
@X
h

@
Tr (A + XT CX)
@X

(XT BX)

(XT BX)

A T BT X

(CX(XT CX)

2CX(XT CX)

(124)

)(A + AT )(XT CX)


XT BX(XT CX)

+2BX(XT CX)

2CX(A + XT CX)

XT BX(A + XT CX)
1

2.6
2.6.1

cos(X)T

Two-norm
a||2 =

x a
||x a||2

@ x a
I
=
@x kx ak2
kx ak2

(x

a)(x a)T
kx ak32

@||x||22
@||xT x||2
=
= 2x
@x
@x

2.7

(128)

Derivatives of vector norms


@
||x
@x

(129)
(130)
(131)

Derivatives of matrix norms

For more on matrix norms, see Sec. 10.4.


2.7.1

Frobenius norm
@
@
||X||2F =
Tr(XXH ) = 2X
(132)
@X
@X
See (248). Note that this is also a special case of the result in equation 119.

2.8

(127)

See [7].

(125)

(126)

+2BX(A + XT CX)

@Tr(sin(X))
@X

Derivatives of Structured Matrices

Assume that the matrix A has some structure, i.e. symmetric, toeplitz, etc.
In that case the derivatives of the previous section does not apply in general.
Instead, consider the following general rule for dierentiating a scalar function
f (A)
"
#
T
X @f @Akl
df
@f
@A
=
= Tr
(133)
dAij
@Akl @Aij
@A
@Aij
kl

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 14

2.8

Derivatives of Structured Matrices

DERIVATIVES

The matrix dierentiated with respect to itself is in this document referred to


as the structure matrix of A and is defined simply by
@A
= Sij
@Aij

(134)

If A has no special structure we have simply Sij = Jij , that is, the structure
matrix is simply the single-entry matrix. Many structures have a representation
in singleentry matrices, see Sec. 9.7.6 for more examples of structure matrices.
2.8.1

The Chain Rule

Sometimes the objective is to find the derivative of a matrix which is a function


of another matrix. Let U = f (X), the goal is to find the derivative of the
function g(U) with respect to X:
@g(U)
@g(f (X))
=
@X
@X

(135)

Then the Chain Rule can then be written the following way:
M

@g(U)
@g(U) X X @g(U) @ukl
=
=
@X
@xij
@ukl @xij

(136)

k=1 l=1

Using matrix notation, this can be written as:


h @g(U)
@g(U)
@U i
= Tr (
)T
.
@Xij
@U
@Xij
2.8.2

(137)

Symmetric

If A is symmetric, then Sij = Jij + Jji

Jij Jij and therefore

df
@f
@f
=
+
dA
@A
@A

diag

@f
@A

(138)

That is, e.g., ([5]):


@Tr(AX)
@X
@ det(X)
@X
@ ln det(X)
@X
2.8.3

A + AT

det(X)(2X

(X

2X

I)

(A I), see (142)

(X

I))

(139)
(140)
(141)

Diagonal

If X is diagonal, then ([19]):


@Tr(AX)
@X

A I

(142)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 15

2.8

Derivatives of Structured Matrices

2.8.4

DERIVATIVES

Toeplitz

Like symmetric matrices and diagonal matrices also Toeplitz matrices has a
special structure which should be taken into account when the derivative with
respect to a matrix with Toeplitz structure.

@Tr(AT)
@T
@Tr(TA)
2 @T
6
6
6
6
6
4

(143)

Tr(A)

Tr([AT ]n1 )

Tr([AT ]1n ))

Tr(A)

Tr([[AT ]1n ]2,n


.
.
.
A1n

Tr([[AT ]1n ]n
.

.
1)
.

.
.

.
.
.
.

.
.
.

1,2 )

.
.
.
.
.
.

Tr([[AT ]1n ]2,n

1)

.
.
.

An1
.
.
.

Tr([[AT ]1n ]n

.
.

Tr([AT ]1n ))

1,2 )

Tr([AT ]n1 )
Tr(A)

(A)

7
7
7
7
7
5

As it can be seen, the derivative (A) also has a Toeplitz structure. Each value
in the diagonal is the sum of all the diagonal valued in A, the values in the
diagonals next to the main diagonal equal the sum of the diagonal next to the
main diagonal in AT . This result is only valid for the unconstrained Toeplitz
matrix. If the Toeplitz matrix also is symmetric, the same derivative yields
@Tr(AT)
@Tr(TA)
=
= (A) + (A)T
@T
@T

(A) I

(144)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 16

3
3.1
3.1.1

INVERSES

Inverses
Basic
Definition

The inverse A

of a matrix A 2 Cnn is defined such that


1

AA

=A

A = I,

(145)

where I is the n n identity matrix. If A 1 exists, A is said to be nonsingular.


Otherwise, A is said to be singular (see e.g. [12]).
3.1.2

Cofactors and Adjoint

The submatrix of a matrix A, denoted by [A]ij is a (n 1) (n 1) matrix


obtained by deleting the ith row and the jth column of A. The (i, j) cofactor
of a matrix is defined as
cof(A, i, j) = ( 1)i+j det([A]ij ),
The matrix of cofactors can be created from the cofactors
2
3
cof(A, 1, 1)

cof(A, 1, n)
6
7
6
7
6
7
..
..
cof(A) = 6
7
.
cof(A,
i,
j)
.
6
7
4
5
cof(A, n, 1)

cof(A, n, n)

(146)

(147)

The adjoint matrix is the transpose of the cofactor matrix


adj(A) = (cof(A))T ,
3.1.3

(148)

Determinant

The determinant of a matrix A 2 Cnn is defined as (see [12])


det(A)

=
=

n
X
j=1
n
X

( 1)j+1 A1j det ([A]1j )

(149)

A1j cof(A, 1, j).

(150)

j=1

3.1.4

Construction

The inverse matrix can be constructed, using the adjoint matrix, by


A

1
adj(A)
det(A)

(151)

For the case of 2 2 matrices, see section 1.3.

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 17

3.2

Exact Relations

3.1.5

INVERSES

Condition number

The condition number of a matrix c(A) is the ratio between the largest and the
smallest singular value of a matrix (see Section 5.3 on singular values),
d+
d

c(A) =

(152)

The condition number can be used to measure how singular a matrix is. If the
condition number is large, it indicates that the matrix is nearly singular. The
condition number can also be estimated from the matrix norms. Here
c(A) = kAk kA

k,

(153)

where k k is a norm such as e.g the 1-norm, the 2-norm, the 1-norm or the
Frobenius norm (see Sec 10.4p
for more on matrix norms).
The 2-norm of A equals (max(eig(AH A))) [12, p.57]. For a symmetric
matrix, this reduces to ||A||2 = max(|eig(A)|) [12, p.394]. If the matrix is
symmetric and positive definite, ||A||2 = max(eig(A)). The condition number
based on the 2-norm thus reduces to
kAk2 kA

3.2
3.2.1

k2 = max(eig(A)) max(eig(A

)) =

max(eig(A))
.
min(eig(A))

(154)

Exact Relations
Basic
(AB)

3.2.2

=B

(155)

The Woodbury identity

The Woodbury identity comes in many variants. The latter of the two can be
found in [12]
(A + CBCT )

(A + UBV)

=
=

C(B

U(B

+ CT A
+ VA

C)

U)

CT A

VA

(156)
(157)

If P, R are positive definite, then (see [30])


(P
3.2.3

+ BT R

B)

BT R

= PBT (BPBT + R)

(158)

(159)

The Kailath Variant


(A + BC)

=A

B(I + CA

B)

CA

See [4, page 153].


3.2.4

Sherman-Morrison
(A + bcT )

=A

A 1 bcT A 1
1 + cT A 1 b

(160)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 18

3.2

Exact Relations

3.2.5

INVERSES

The Searle Set of Identities

The following set of identities, can be found in [25, page 151],


1

(I + A
T

+B

A(A + I)
1

A(A + B)

+B

(I + AB)

A(I + BA)

(I + AB)

=
=

B(A + B)
(A + B)B

A(I + BA)

B)

B = B(A + B)
1

B
A

B(I + B A

A(A + B)

(161)
T

3.2.6

(A + BB )
(A

(162)
1

(163)
(164)
(165)

(166)
(167)

Rank-1 update of inverse of inner product

Denote A = (XT X) 1 and that X is extended to include a new column vector


= [X v]. Then [34]
in the end X
"
#
AXT vvT XAT
AXT v
A
+
T
T
T
T
T
T
T
1
v v v XAX v
v v v XAX v
X)

(X
=
T
T
v XA
vT v vT XAXT v

3.2.7

1
vT v vT XAXT v

Rank-1 update of Moore-Penrose Inverse

The following is a rank-1 update for the Moore-Penrose pseudo-inverse of real


valued matrices and proof can be found in [18]. The matrix G is defined below:
(A + cdT )+ = A+ + G

(168)

Using the the notation


=

1 + d T A+ c
+

A c

(A+ )T d

w
m

=
=

(I
(I

(169)
(170)
(171)

AA )c
+

A A) d

(172)
(173)

the solution is given as six dierent cases, depending on the entities ||w||,
||m||, and . Please note, that for any (column) vector v it holds that v+ =
vT
vT (vT v) 1 = ||v||
2 . The solution is:
Case 1 of 6: If ||w|| =
6 0 and ||m|| =
6 0. Then
G

=
=

vw+ (m+ )T nT + (m+ )T w+


1
1
vwT
mnT +
mwT
||w||2
||m||2
||m||2 ||w||2

Case 2 of 6: If ||w|| = 0 and ||m|| =


6 0 and
G

=
=

(174)
(175)

= 0. Then

vv+ A+ (m+ )T nT
1
1
vvT A+
mnT
2
||v||
||m||2

(176)
(177)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 19

3.3

Implication on Inverses

Case 3 of 6: If ||w|| = 0 and


G=

mv A

6= 0. Then

||v||2 ||m||2 + | |2

||v||2

Case 4 of 6: If ||w|| =
6 0 and ||m|| = 0 and
G

Case 5 of 6: If ||m|| = 0 and


1

A+ nwT

6= 0. Then

||n||2 ||w||2 + | |2

(A ) v + n

(178)

||w||2

A+ n + v

(179)
(180)

||n||2

w+n

(181)

= 0. Then

vv+ A+ A+ nn+ + v+ A+ nvn+


1
1
v T A+ n
vvT A+
A+ nnT +
vnT
2
2
||v||
||n||
||v||2 ||n||2

=
=

3.3

+ T

= 0. Then

Case 6 of 6: If ||w|| = 0 and ||m|| = 0 and


G

||m||2

A+ nn+ vw+
1
1
A+ nnT
vwT
||n||2
||w||2

=
=

G=

m+v

INVERSES

(182)
(183)

Implication on Inverses
If

(A + B)

=A

+B

then

AB

A = BA

(184)

See [25].
3.3.1

A PosDef identity

Assume P, R to be positive definite and invertible, then


(P

+ BT R

B)

BT R

= PBT (BPBT + R)

(185)

See [30].

3.4

Approximations

The following identity is known as the Neuman series of a matrix, which holds
when | i | < 1 for all eigenvalues i
(I

A)

1
X

An

(186)

( 1)n An

(187)

n=0

which is equivalent to
1

(I + A)

1
X

n=0

When | i | < 1 for all eigenvalues


following approximations holds

i,

A)

(I + A)

(I

it holds that A ! 0 for n ! 1, and the

I + A + A2

(188)

(189)

A+A

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 20

3.5

Generalized Inverse

INVERSES

The following approximation is from [22] and holds when A large and symmetric
A
If

A(I + A)

A
=I

(190)

is small compared to Q and M then


2

(Q +

M)

=Q

MQ

(191)

Proof:
2

(Q +
1

(QQ

Q+

((I +
Q

(I +

(192)

Q)

(193)

)Q)

(194)

MQ

MQ

M)

MQ

(195)

This can be rewritten using the Taylor expansion:


Q
Q

3.5
3.5.1

(I

MQ

(I +

+(

MQ

MQ

1 2

...)

(196)
Q

(197)

Definition

The matrix A

3.6.1

MQ

Generalized Inverse

A generalized inverse matrix of the matrix A is any matrix A


[26])
AA A = A

3.6

such that (see


(198)

is not unique.

Pseudo Inverse
Definition

The pseudo inverse (or Moore-Penrose inverse) of a matrix A is the matrix A+


that fulfils
I

AA+ A = A

II

A+ AA+ = A+

III

AA+ symmetric

IV

A+ A symmetric

The matrix A+ is unique and does always exist. Note that in case of complex matrices, the symmetric condition is substituted by a condition of being
Hermitian.

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 21

3.6

Pseudo Inverse

3.6.2

INVERSES

Properties

Assume A+ to be the pseudo-inverse of A, then (See [3] for some of them)


(A+ )+

(AT )+

(A+ )T

(200)

+ H

(201)

H +

(A )

(199)

(A )

(A )

(202)

(A+ A)AH

AH

(203)

6=

(A )
+

(A A)A

(204)

(cA)

(1/c)A

A+
T

(A A)

T +

(AA )

A+

A
H

(A A)

H +

(AA )

f (AA )

(A A) A

(206)

AT (AAT )+

(207)

T +

A (A )

T +

(A ) A
H

(208)

(209)

(A A) A

(210)

AH (AAH )+

(211)

H +

A (A )

H +

(A ) A

(212)
(213)

(A AB) (ABB )

f (0)I

A+ [f (AAH )

(AB)
f (AH A)

(205)
T

f (0)I

A[f (A A)

+ +

(214)

f (0)I]A

(215)

(216)

f (0)I]A

where A 2 Cnm .
Assume A to have full rank, then
(AA+ )(AA+ )

AA+

(A A)(A A)

A A

Tr(AA+ )

rank(AA+ )

Tr(A A)

(217)
(218)
+

(See [26])

(219)

(See [26])

(220)

rank(A A)

(A+ AB)+ (ABB+ )+

For two matrices it hold that


(AB)+
(A B)
3.6.3

A B

(221)
(222)

Construction

Assume that A has full rank, then


A nn
A nm
A nm

Square
Broad
Tall

rank(A) = n
rank(A) = n
rank(A) = m

)
)
)

A+ = A 1
A+ = AT (AAT ) 1
A+ = (AT A) 1 AT

The so-called broad version is also known as right inverse and the tall version as the left inverse.

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 22

3.6

Pseudo Inverse

INVERSES

Assume A does not have full rank, i.e. A is n m and rank(A) = r <
min(n, m). The pseudo inverse A+ can be constructed from the singular value
decomposition A = UDVT , by
A+ = Vr Dr 1 UTr

(223)

where Ur , Dr , and Vr are the matrices with the degenerated rows and columns
deleted. A dierent way is this: There do always exist two matrices C n r
and D r m of rank r, such that A = CD. Using these matrices it holds that
A+ = DT (DDT )

(CT C)

CT

(224)

See [3].

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 23

COMPLEX MATRICES

Complex Matrices

The complex scalar product r = pq can be written as

<r
<p
=p
<q
=
=r
=p <p
=q

4.1

(225)

Complex Derivatives

In order to dierentiate an expression f (z) with respect to a complex z, the


Cauchy-Riemann equations have to be satisfied ([7]):
df (z)
@<(f (z))
@=(f (z))
=
+i
dz
@<z
@<z
and

df (z)
=
dz
or in a more compact form:

@<(f (z)) @=(f (z))


+
@=z
@=z

@f (z)
@f (z)
=i
.
@=z
@<z

(226)

(227)

(228)

A complex function that satisfies the Cauchy-Riemann equations for points in a


region R is said yo be analytic in this region R. In general, expressions involving
complex conjugate or conjugate transpose do not satisfy the Cauchy-Riemann
equations. In order to avoid this problem, a more generalized definition of
complex derivative is used ([24], [6]):
Generalized Complex Derivative:
df (z)
1 @f (z)
=
dz
2 @<z

@f (z)
.
@=z

(229)

df (z)
1 @f (z)
@f (z)
=
+i
.

dz
2 @<z
@=z

(230)

Conjugate Complex Derivative

The Generalized Complex Derivative equals the normal derivative, when f is an


analytic function. For a non-analytic function such as f (z) = z , the derivative
equals zero. The Conjugate Complex Derivative equals zero, when f is an
analytic function. The Conjugate Complex Derivative has e.g been used by [21]
when deriving a complex gradient.
Notice:
df (z)
@f (z)
@f (z)
6=
+i
.
(231)
dz
@<z
@=z
Complex Gradient Vector: If f is a real function of a complex vector z,
then the complex gradient vector is given by ([14, p. 798])
rf (z)

=
=

df (z)
dz
@f (z)
@f (z)
+i
.
@<z
@=z

(232)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 24

4.1

Complex Derivatives

COMPLEX MATRICES

Complex Gradient Matrix: If f is a real function of a complex matrix Z,


then the complex gradient matrix is given by ([2])
rf (Z)

=
=

df (Z)
dZ
@f (Z)
@f (Z)
+i
.
@<Z
@=Z

(233)

These expressions can be used for gradient descent algorithms.


4.1.1

The Chain Rule for complex numbers

The chain rule is a little more complicated when the function of a complex
u = f (x) is non-analytic. For a non-analytic function, the following chain rule
can be applied ([7])
@g(u)
@x

=
=

@g @u
@g @u
+
@u @x @u @x
@g @u @g @u
+
@u @x
@u
@x

(234)

Notice, if the function is analytic, the second term reduces to zero, and the function is reduced to the normal well-known chain rule. For the matrix derivative
of a scalar function g(U), the chain rule can be written the following way:

T
T
Tr(( @g(U)
Tr(( @g(U)
@g(U)
@U ) @U)
@U ) @U )
=
+
.
@X
@X
@X

4.1.2

(235)

Complex Derivatives of Traces

If the derivatives involve complex numbers, the conjugate transpose is often involved. The most useful way to show complex derivative is to show the derivative
with respect to the real and the imaginary part separately. An easy example is:
@Tr(X )
@Tr(XH )
=
@<X
@<X

@Tr(X )
@Tr(XH )
i
=i
@=X
@=X

(236)

(237)

Since the two results have the same sign, the conjugate complex derivative (230)
should be used.
@Tr(X)
@Tr(XT )
=
@<X
@<X
@Tr(X)
@Tr(XT )
i
=i
@=X
@=X

=
=

(238)
I

(239)

Here, the two results have dierent signs, and the generalized complex derivative
(229) should be used. Hereby, it can be seen that (100) holds even if X is a
complex number.
@Tr(AXH )
@<X
@Tr(AXH )
i
@=X

(240)

(241)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 25

4.2

Higher order and non-linear derivatives


@Tr(AX )
@<X
@Tr(AX )
i
@=X

COMPLEX MATRICES

AT

(242)

AT

(243)

@Tr(XXH )
@Tr(XH X)
=
@<X
@<X
H
@Tr(XX )
@Tr(XH X)
i
=i
@=X
@=X

2<X

(244)

i2=X

(245)

By inserting (244) and (245) in (229) and (230), it can be seen that
@Tr(XXH )
= X
@X
@Tr(XXH )
=X
@X

(246)
(247)

Since the function Tr(XXH ) is a real function of the complex matrix X, the
complex gradient matrix (233) is given by
rTr(XXH ) = 2
4.1.3

@Tr(XXH )
= 2X
@X

(248)

Complex Derivative Involving Determinants

Here, a calculation example is provided. The objective is to find the derivative of


det(XH AX) with respect to X 2 Cmn . The derivative is found with respect to
the real part and the imaginary part of X, by use of (42) and (37), det(XH AX)
can be calculated as (see App. B.1.4 for details)
@ det(XH AX)
@X

=
=

1 @ det(XH AX)
@ det(XH AX)
i
2
@<X
@=X
T
det(XH AX) (XH AX) 1 XH A

(249)

and the complex conjugate derivative yields


@ det(XH AX)
@X

=
=

4.2

1 @ det(XH AX)
@ det(XH AX)
+i
2
@<X
@=X
det(XH AX)AX(XH AX) 1

(250)

Higher order and non-linear derivatives


@ (Ax)H (Ax)
@x (Bx)H (Bx)

=
=

@ xH AH Ax
@x xH BH Bx
AH Ax
xH AH AxBH Bx
2 H
2
x BBx
(xH BH Bx)2

(251)
(252)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 26

4.3

4.3

Inverse of complex sum

COMPLEX MATRICES

Inverse of complex sum

Given real matrices A, B find the inverse of the complex sum A + iB. Form
the auxiliary matrices
E

A + tB

(253)

(254)

and find a value of t such that E


(A + iB)

=
=

(1
(1

tA,

exists. Then
1

it)(E + iF)

(255)
1

it)((E + FE

(1

it)(E + FE

(E + FE

F)

((I

(E + FE

F)

(I

i(E + FE

F)

F)

F)

1
1

(I

tFE
tFE

F)

FE

i(tI + FE

))

i(E + FE
iFE
1
1

)(256)
(257)
(258)

(tI + FE

(259)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 27

5
5.1
5.1.1

SOLUTIONS AND DECOMPOSITIONS

Solutions and Decompositions


Solutions to linear equations
Simple Linear Regression

Assume we have data (xn , yn ) for n = 1, ..., N and are seeking the parameters
a, b 2 R such that yi
= axi + b. With a least squares error function, the optimal
values for a, b can be expressed using the notation
x = (x1 , ..., xN )T

1 = (1, ..., 1)T 2 RN 1

y = (y1 , ..., yN )T

and

as

5.1.2

Rxx = xT x

Rx1 = xT 1 R11 = 1T 1

Ryx = yT x

Ry1 = yT 1

a
b

Rxx
Rx1

Rx1
R11

Rx,y
Ry1

(260)

Existence in Linear Systems

Assume A is n m and consider the linear system


Ax = b

(261)

Construct the augmented matrix B = [A b] then


Condition
rank(A) = rank(B) = m
rank(A) = rank(B) < m
rank(A) < rank(B)
5.1.3

Solution
Unique solution x
Many solutions x
No solutions x

Standard Square

Assume A is square and invertible, then


Ax = b
5.1.4

x=A

(262)

Degenerated Square

Assume A is n n but of rank r < n. In that case, the system Ax = b is solved


by
x = A+ b
where A+ is the pseudo-inverse of the rank-deficient matrix, constructed as
described in section 3.6.3.

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 28

5.1

Solutions to linear equations5

5.1.5

SOLUTIONS AND DECOMPOSITIONS

Cramers rule

The equation
Ax = b,

(263)

where A is square has exactly one solution x if the ith element in x can be
found as
det B
xi =
,
(264)
det A
where B equals A, but the ith column in A has been substituted by b.
5.1.6

Over-determined Rectangular

Assume A to be n m, n > m (tall) and rank(A) = m, then


Ax = b

x = (AT A)

AT b = A+ b

(265)

that is if there exists a solution x at all! If there is no solution the following


can be useful:
Ax = b
)
xmin = A+ b
(266)
Now xmin is the vector x which minimizes ||Ax b||2 , i.e. the vector which is
least wrong. The matrix A+ is the pseudo-inverse of A. See [3].
5.1.7

Under-determined Rectangular

Assume A is n m and n < m (broad) and rank(A) = n.


Ax = b

xmin = AT (AAT )

(267)

The equation have many solutions x. But xmin is the solution which minimizes
||Ax b||2 and also the solution with the smallest norm ||x||2 . The same holds
for a matrix version: Assume A is n m, X is m n and B is n n, then
)

AX = B

Xmin = A+ B

(268)

The equation have many solutions X. But Xmin is the solution which minimizes
||AX B||2 and also the solution with the smallest norm ||X||2 . See [3].
Similar but dierent: Assume A is square n n and the matrices B0 , B1
are n N , where N > n, then if B0 has maximal rank
AB0 = B1

Amin = B1 BT0 (B0 BT0 )

(269)

where Amin denotes the matrix which is optimal in a least square sense. An
interpretation is that A is the linear approximation which maps the columns
vectors of B0 into the columns vectors of B1 .
5.1.8

Linear form and zeros


Ax = 0,

5.1.9

8x

A=0

(270)

Square form and zeros

If A is symmetric, then
xT Ax = 0,

8x

A=0

(271)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 29

5.2

Eigenvalues and Eigenvectors5

5.1.10

SOLUTIONS AND DECOMPOSITIONS

The Lyapunov Equation


AX + XB

vec(X)

(I A + BT I)

(272)
1

vec(C)

(273)

Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.
5.1.11

Encapsulating Sum
P

n An XBn

vec(X)

C
P

(274)
T
n Bn

An

vec(C)

(275)

See Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec
operator.

5.2
5.2.1

Eigenvalues and Eigenvectors


Definition

The eigenvectors vi and eigenvalues

are the ones satisfying

Avi =
5.2.2

i vi

(276)

Decompositions

For matrices A with as many distinct eigenvalues as dimensions, the following


holds, where the columns of V are the eigenvectors and (D)ij = ij i ,
AV = VD

(277)

For defective matrices A, which is matrices which has fewer distinct eigenvalues
than dimensions, the following decomposition called Jordan canonical form,
holds
AV = VJ
(278)
where J is a block diagonal matrix with the blocks Ji = i I + N. The matrices
Ji have dimensionality as the number of identical eigenvalues equal to i , and N
is square matrix of same size with 1 on the super diagonal and zero elsewhere.
It also holds that for all matrices A there exists matrices V and R such that
AV = VR
where R is upper triangular with the eigenvalues
5.2.3

(279)
i

on its diagonal.

General Properties

Assume that A 2 Rnm and B 2 Rmn ,


eig(AB)

eig(BA)

rank(A) = r

At most r non-zero

(280)
i

(281)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 30

5.3

Singular Value Decomposition


5 SOLUTIONS AND DECOMPOSITIONS

5.2.4

Symmetric

Assume A is symmetric, then


VVT

R
P

p
i i

Tr(A )

eig(I + cA)

eig(A

cI)
1

eig(A

(i.e. V is orthogonal)

(282)

(i.e.

(283)

1+c

is real)

(284)
(285)

(286)

(287)

For a symmetric, positive matrix A,


eig(AT A) = eig(AAT ) = eig(A) eig(A)
5.2.5

(288)

Characteristic polynomial

The characteristic polynomial for the matrix A is


0

=
=

det(A
n

I)

g1

n 1

(289)
+ g2

n 2

... + ( 1) gn

(290)

Note that the coefficients gj for j = 1, ..., n are the n invariants under rotation
of A. Thus, gj is the sum of the determinants of all the sub-matrices of A taken
j rows and columns at a time. That is, g1 is the trace of A, and g2 is the sum
of the determinants of the n(n 1)/2 sub-matrices that can be formed from A
by deleting all but two rows and columns, and so on see [17].

5.3

Singular Value Decomposition

Any n m matrix A can be written as

A = UDVT ,

where

5.3.1

U
D
V

=
=
=

(291)

eigenvectors of AAT
p
diag(eig(AAT ))
eigenvectors of AT A

nn
nm
mm

(292)

Symmetric Square decomposed into squares

Assume A to be n n and symmetric. Then

T
A = V
D
V
,

(293)

where D is diagonal with the eigenvalues of A, and V is orthogonal and the


eigenvectors of A.
5.3.2

Square decomposed into squares

Assume A 2 Rnn . Then

UT

(294)

where D is diagonal with the square root of the eigenvalues of AAT , V is the
eigenvectors of AAT and UT is the eigenvectors of AT A.
Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 31

5.4

Triangular Decomposition

5.3.3

SOLUTIONS AND DECOMPOSITIONS

Square decomposed into rectangular

Assume V D UT = 0 then we can expand the SVD of A into


D 0
U
A = V V
,
0 D
UT

(295)

where the SVD of A is A = VDUT .


5.3.4

Rectangular decomposition I

Assume A is n m, V is n n, D is n n, UT is n m

A
D
UT
= V
,

(296)

where D is diagonal with the square root of the eigenvalues of AAT , V is the
eigenvectors of AAT and UT is the eigenvectors of AT A.
5.3.5

Rectangular decomposition II

Assume A is n m, V is n m, D is m m, UT is m m
2
32

4
54
A
V
D
UT
=
5.3.6

3
5

(297)

Rectangular decomposition III

Assume A is n m, V is n n, D is n m, UT is m m
2
3

4
5,
A
D
UT
= V

(298)

where D is diagonal with the square root of the eigenvalues of AAT , V is the
eigenvectors of AAT and UT is the eigenvectors of AT A.

5.4

Triangular Decomposition

5.5

LU decomposition

Assume A is a square matrix with non-zero leading principal minors, then


A = LU

(299)

where L is a unique unit lower triangular matrix and U is a unique upper


triangular matrix.
5.5.1

Cholesky-decomposition

Assume A is a symmetric positive definite square matrix, then


A = UT U = LLT ,

(300)

where U is a unique upper triangular matrix and L is a lower triangular matrix.


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 32

5.6

5.6

LDM decomposition

SOLUTIONS AND DECOMPOSITIONS

LDM decomposition

Assume A is a square matrix with non-zero leading principal minors1 , then


A = LDMT

(301)

where L, M are unique unit lower triangular matrices and D is a unique diagonal
matrix.

5.7

LDL decompositions

The LDL decomposition are special cases of the LDM decomposition. Assume
A is a non-singular symmetric definite square matrix, then
A = LDLT = LT DL

(302)

where L is a unit lower triangular matrix and D is a diagonal matrix. If A is


also positive definite, then D has strictly positive diagonal entries.
1 If the matrix that corresponds to a principal minor is a quadratic upper-left part of the
larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to k), then the
principal minor is called a leading principal minor. For an n times n square matrix, there are
n leading principal minors. [31]

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 33

STATISTICS AND PROBABILITY

Statistics and Probability

6.1

Definition of Moments

Assume x 2 Rn1 is a random variable


6.1.1

Mean

The vector of means, m, is defined by


(m)i = hxi i
6.1.2

(303)

Covariance

The matrix of covariance M is defined by


(M)ij = h(xi

hxi i)(xj

hxj i)i

(304)

or alternatively as
M = h(x
6.1.3

m)(x

m)T i

(305)

Third moments

The matrix of third centralized moments in some contexts referred to as


coskewness is defined using the notation
(3)

mijk = h(xi
as

hxi i)(xj

hxj i)(xk

hxk i)i

h
i
(3) (3)
M3 = m::1 m::2 ...m(3)
::n

(306)
(307)

where : denotes all elements within the given index. M3 can alternatively be
expressed as
M3 = h(x m)(x m)T (x m)T i
(308)
6.1.4

Fourth moments

The matrix of fourth centralized moments in some contexts referred to as


cokurtosis is defined using the notation
(4)

mijkl = h(xi

hxi i)(xj

hxj i)(xk

hxk i)(xl

hxl i)i

(309)

as
h
i
(4)
(4)
(4)
(4)
(4)
(4)
(4)
(4)
M4 = m::11 m::21 ...m::n1 |m::12 m::22 ...m::n2 |...|m::1n m::2n ...m(4)
::nn

(310)

or alternatively as

M4 = h(x

m)(x

m)T (x

m)T (x

m)T i

(311)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 34

6.2

Expectation of Linear Combinations


6 STATISTICS AND PROBABILITY

6.2

Expectation of Linear Combinations

6.2.1

Linear Forms

Assume X and x to be a matrix and a vector of random variables. Then (see


See [26])
E[AXB + C]

Var[Ax]

Cov[Ax, By]

AE[X]B + C
AVar[x]A

(312)

(313)

ACov[x, y]B

(314)

Assume x to be a stochastic vector with mean m, then (see [7])

6.2.2

E[Ax + b]

Am + b

(315)

E[Ax]

Am

(316)

E[x + b]

m+b

(317)

Quadratic Forms

Assume A is symmetric, c = E[x] and = Var[x]. Assume also that all


coordinates xi are independent, have the same central moments 1 , 2 , 3 , 4
and denote a = diag(A). Then (See [26])
E[xT Ax]

Tr(A) + cT Ac

222 Tr(A2 ) + 42 cT A2 c + 43 cT Aa + (4

Var[x Ax]

(318)
322 )aT a (319)

Also, assume x to be a stochastic vector with mean m, and covariance M. Then


(see [7])
E[(Ax + a)(Bx + b)T ]

AMBT + (Am + a)(Bm + b)T


T

E[xx ]
E[xaT x]

=
=

M + mm
(M + mmT )a

E[xT axT ]

aT (M + mmT )

E[(Ax)(Ax) ]

E[(x + a)(x + a) ]

(320)
(321)
(322)

A(M + mm )A

(323)
T

(324)

M + (m + a)(m + a)

(325)

E[(Ax + a)T (Bx + b)]

Tr(AMBT ) + (Am + a)T (Bm + b) (326)

E[xT x]

Tr(M) + mT m

Tr(AM) + mT Am

E[x Ax]
T

E[(Ax) (Ax)]
T

E[(x + a) (x + a)]

=
=

(327)

(328)
T

Tr(AMA ) + (Am) (Am)


T

Tr(M) + (m + a) (m + a)

(329)
(330)

See [7].

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 35

6.3

Weighted Scalar Variable

6.2.3

STATISTICS AND PROBABILITY

Cubic Forms

Assume x to be a stochastic vector with independent coordinates, mean m,


covariance M and central moments v3 = E[(x m)3 ]. Then (see [7])
E[(Ax + a)(Bx + b)T (Cx + c)]

Adiag(BT C)v3

+Tr(BMCT )(Am + a)
+AMCT (Bm + b)
+(AMBT + (Am + a)(Bm + b)T )(Cm + c)
T

E[xx x]

v3 + 2Mm + (Tr(M) + mT m)m

E[(Ax + a)(Ax + a)T (Ax + a)]

Adiag(AT A)v3
+[2AMAT + (Ax + a)(Ax + a)T ](Am + a)
+Tr(AMAT )(Am + a)

E[(Ax + a)b (Cx + c)(Dx + d) ]

(Ax + a)bT (CMDT + (Cm + c)(Dm + d)T )

+(AMCT + (Am + a)(Cm + c)T )b(Dm + d)T


+bT (Cm + c)(AMDT

6.3

(Am + a)(Dm + d)T )

Weighted Scalar Variable

Assume x 2 Rn1 is a random variable, w 2 Rn1 is a vector of constants and


y is the linear combination y = wT x. Assume further that m, M2 , M3 , M4
denotes the mean, covariance, and central third and fourth moment matrix of
the variable x. Then it holds that

h(y
h(y
h(y

hyi

wT m

(331)

w M2 w

(332)

hyi)3 i

w T M3 w w

(333)

hyi) i
4

hyi) i

w M4 w w w

(334)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 36

MULTIVARIATE DISTRIBUTIONS

Multivariate Distributions

7.1

Cauchy

The density function for a Cauchy distributed vector t 2 RP 1 , is given by


p(t|, ) =

P/2

( 1+P
2 )
(1/2) 1 + (t

det()
1

)T

1/2

(t

(1+P )/2

(335)

where is the location, is positive definite, and denotes the gamma function. The Cauchy distribution is a special case of the Student-t distribution.

7.2

Dirichlet

The Dirichlet distribution is a kind of inverse distribution compared to the


multinomial distribution on the bounded continuous variate x = [x1 , . . . , xP ]
[16, p. 44]
P

P
P
p p Y 1
p(x|) = QP
xp p
(
)
p
p
p

7.3

Normal

The normal distribution is also known as a Gaussian distribution. See sec. 8.

7.4

Normal-Inverse Gamma

7.5

Gaussian

See sec. 8.

7.6

Multinomial

If the vector n contains counts, i.e. (n)i 2 0, 1, 2, ..., then the discrete multinomial disitrbution for n is given by
d

P (n|a, n) =

where ai are probabilities, i.e. 0 ai 1 and

7.7

d
X

Y
n!
a ni ,
n1 ! . . . nd ! i i

Students t

ni = n

(336)

ai = 1.

The density of a Student-t distributed vector t 2 RP 1 , is given by


p(t|, , ) = ()

P/2

( +P
2 )
(/2) 1 +

det()
1 (t

)T

1/2
1

(t

(+P )/2

(337)

where is the location, the scale matrix is symmetric, positive definite,


is the degrees of freedom, and denotes the gamma function. For = 1, the
Student-t distribution becomes the Cauchy distribution (see sec 7.1).
Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 37

7.8

Wishart

7.7.1

MULTIVARIATE DISTRIBUTIONS

Mean
E(t) = ,

7.7.2

(338)

Variance
cov(t) =

7.7.3

>1

>2

(339)

Mode

The notion mode meaning the position of the most probable value
mode(t) =
7.7.4

(340)

Full Matrix Version

If instead of a vector t 2 RP 1 one has a matrix T 2 RP N , then the Student-t


distribution for T is
p(T|M, , , )

N P/2

P
Y

p=1

[( + P p + 1)/2]

[( p + 1)/2]

det() /2 det() N/2

det 1 + (T M) 1 (T

M)T

(+P )/2

(341)

where M is the location, is the rescaling matrix, is positive definite, is


the degrees of freedom, and denotes the gamma function.

7.8

Wishart

The central Wishart distribution for M 2 RP P , M is positive definite, where


m can be regarded as a degree of freedom parameter [16, equation 3.8.1] [8,
section 2.5],[11]
p(M|, m)

2mP/2 P (P

1)/4

1
QP
p

[ 12 (m + 1

det() m/2 det(M)(m

1
exp
Tr( 1 M)
2
7.8.1

p)]

1)/2

(342)

Mean
E(M) = m

(343)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 38

7.9

Wishart, Inverse

7.9

MULTIVARIATE DISTRIBUTIONS

Wishart, Inverse

The (normal) Inverse Wishart distribution for M 2 RP P , M is positive definite, where m can be regarded as a degree of freedom parameter [11]
p(M|, m)

2mP/2 P (P

1)/4

1
QP

[ 12 (m + 1

det()m/2 det(M) (m

1
exp
Tr(M 1 )
2
7.9.1

p)]

1)/2

(344)

Mean
E(M) =

1
P

(345)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 39

8
8.1
8.1.1

GAUSSIANS

Gaussians
Basics
Density and normalization

The density of x N (m, ) is


1
p(x) = p
exp
det(2)

1
(x
2

Note that if x is d-dimensional, then


Integration and normalization

Z
1
exp
(x m)T 1 (x m) dx
2

Z
1 T
exp
x 1 x + mT 1 x dx
2

Z
1 T
exp
x Ax + cT x dx
2

m)T

(x

m)

(346)

det(2) = (2)d det().

=
=
=

p
det(2)

p
1
det(2) exp mT 1 m
2

p
1
1
det(2A ) exp cT A T c
2

If X = [x1 x2 ...xn ] and C = [c1 c2 ...cn ], then

Z
p
1
exp
Tr(XT AX) + Tr(CT X) dX = det(2A
2

1)

exp

1
Tr(CT A
2

C)

The derivatives of the density are


@p(x)
@x
@2p
@x@xT
8.1.2

=
=

p(x) 1 (x

p(x) 1 (x

m)
m)(x

(347)
m)T

a
Tc

c
b

(348)

Marginal Distribution

Assume x Nx (, ) where

xa
a
x=
=
xb
b

(349)

then

8.1.3

p(xa )

p(xb )

Nxa (a , a )

(350)

Nxb (b , b )

(351)

Conditional Distribution

Assume x Nx (, ) where

xa
a
x=
=
xb
b

a
Tc

c
b

(352)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 40

8.1

Basics

GAUSSIANS

then
n
a = a + c b 1 (xb
a = a c 1 T

c
b
n
T
1
b = b + c a (xa
b = b T 1 c

c
a

a)
p(xa |xb ) = Nxa (
a ,
b)
p(xb |xa ) = Nxb (
b ,

b )

(353)

a )

(354)

Note, that the covariance matrices are the Schur complement of the block matrix, see 9.1.5 for details.
8.1.4

Linear combination

Assume x N (mx , x ) and y N (my , y ) then


Ax + By + c N (Amx + Bmy + c, Ax AT + By BT )
Rearranging Means
p
det(2(AT 1 A)
p
NAx [m, ] =
det(2)

(355)

8.1.5

1)

Nx [A

m, (AT

A)

(356)

If A is square and invertible, it simplifies to


NAx [m, ] =
8.1.6

1
Nx [A
| det(A)|

m, (AT

A)

(357)

Rearranging into squared form

If A is symmetric, then
1 T
x Ax + bT x
2

1
Tr(XT AX) + Tr(BT X)
2

8.1.7

1
(x A 1 b)T A(x A
2
1
Tr[(X A 1 B)T A(X
2

1
b) + bT A 1 b
2
1
1
A B)] + Tr(BT A
2

Sum of two squared forms

In vector formulation (assuming 1 , 2 are symmetric)


1
(x
2
1
(x
2
1
(x
2

=
c 1

mc

m1 )T 1 1 (x

m1 )

(358)

m2 )T 2 1 (x

m2 )

(359)

mc )T c 1 (x

mc ) + C

(360)

1 1 + 2 1
1

(361)
1

(1 + 2 ) (1 m1 + 2 m2 )
1 T
(m 1 + mT2 2 1 )(1 1 + 2 1 )
2 1 1

1 T
m1 1 1 m1 + mT2 2 1 m2
2

(362)
1

(1 1 m1 + 2 1 m2 )(363)
(364)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 41

B)

8.2

Moments

GAUSSIANS

In a trace formulation (assuming 1 , 2 are symmetric)


1
Tr((X
2
1
Tr((X
2
1
Tr[(X
2

=
c 1
Mc

=
=

8.1.8

M1 )T 1 1 (X

M1 ))

(365)

M2 )T 2 1 (X

M2 ))

(366)

Mc )T c 1 (X

Mc )] + C

(367)

1 1 + 2 1
(1 1 + 2 1 ) 1 (1 1 M1 + 2 1 M2 )
1 h
Tr (1 1 M1 + 2 1 M2 )T (1 1 + 2 1 )
2
1
Tr(MT1 1 1 M1 + MT2 2 1 M2 )
2

(368)
(369)
i
1
(1 1 M1 + 2 1 M2 )
(370)

Product of gaussian densities

Let Nx (m, ) denote a density of x, then


Nx (m1 , 1 ) Nx (m2 , 2 ) = cc Nx (mc , c )

mc

Nm1 (m2 , (1 + 2 ))

1
1
p
exp
(m1
2
det(2(1 + 2 ))
(1 1 + 2 1 )

(1 1 + 2 1 )

cc

=
=

m2 )T (1 + 2 )

(371)

(m1

m2 )

(1 1 m1 + 2 1 m2 )

but note that the product is not normalized as a density of x.

8.2
8.2.1

Moments
Mean and covariance of linear forms

First and second moments. Assume x N (m, )


E(x) = m

Cov(x, x) = Var(x) = = E(xxT )

(372)

E(x)E(xT ) = E(xxT )

mmT

(373)

As for any other distribution is holds for gaussians that


E[Ax]
Var[Ax]
Cov[Ax, By]

=
=
=

AE[x]
AVar[x]A

(374)
T

ACov[x, y]B

(375)
T

(376)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 42

8.2

Moments

8.2.2

GAUSSIANS

Mean and variance of square forms

Mean and variance of square forms: Assume x N (m, )


E(xxT )
T

E[x Ax]
T

Var(x Ax)

=
=
=

+ mmT

(377)
T

Tr(A) + m Am

(378)

Tr[A(A + A )] + ...
+mT (A + AT )(A + AT )m

E[(x
If =

0 T

m ) A(x

m )]

(m

m0 )T A(m

(379)
(380)

I and A is symmetric, then


Var(xT Ax)

Assume x N (0,

Tr(A2 ) + 4

mT A 2 m

(381)

I) and A and B to be symmetric, then


Cov(xT Ax, xT Bx) = 2

8.2.3

m0 ) + Tr(A)

Tr(AB)

(382)

Cubic forms

Assume x to be a stochastic vector with independent coordinates, mean m and


covariance M
E[xbT xxT ]

mbT (M + mmT ) + (M + mmT )bmT


+bT m(M

8.2.4

mmT )

(383)

Mean of Quartic Forms


E[xxT xxT ]

2( + mmT )2 + mT m(

mmT )

+Tr()( + mmT )
T

E[xx Axx ]

( + mmT )(A + AT )( + mmT )


+mT Am(

mmT ) + Tr[A]( + mmT )

E[xT xxT x]

2Tr(2 ) + 4mT m + (Tr() + mT m)2

E[xT AxxT Bx]

Tr[A(B + BT )] + mT (A + AT )(B + BT )m
+(Tr(A) + mT Am)(Tr(B) + mT Bm)

E[aT xbT xcT xdT x]


=

(aT ( + mmT )b)(cT ( + mmT )d)


+(aT ( + mmT )c)(bT ( + mmT )d)
+(aT ( + mmT )d)(bT ( + mmT )c)

2aT mbT mcT mdT m

E[(Ax + a)(Bx + b)T (Cx + c)(Dx + d)T ]


=

[ABT + (Am + a)(Bm + b)T ][CDT + (Cm + c)(Dm + d)T ]


+[ACT + (Am + a)(Cm + c)T ][BDT + (Bm + b)(Dm + d)T ]
+(Bm + b)T (Cm + c)[ADT

(Am + a)(Dm + d)T ]

+Tr(BCT )[ADT + (Am + a)(Dm + d)T ]


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 43

8.3

Miscellaneous

GAUSSIANS

E[(Ax + a)T (Bx + b)(Cx + c)T (Dx + d)]


=

Tr[A(CT D + DT C)BT ]
+[(Am + a)T B + (Bm + b)T A][CT (Dm + d) + DT (Cm + c)]
+[Tr(ABT ) + (Am + a)T (Bm + b)][Tr(CDT ) + (Cm + c)T (Dm + d)]

See [7].
8.2.5

Moments
E[x]

k m k

(384)

Cov(x)

XX

8.3
8.3.1

k k0 (k + mk mTk

mk mTk0 )

(385)

k0

Miscellaneous
Whitening

Assume x N (m, ) then


z=

1/2

(x

m) N (0, I)

(386)

Conversely having z N (0, I) one can generate data x N (m, ) by setting


x = 1/2 z + m N (m, )

(387)

Note that 1/2 means the matrix which fulfils 1/2 1/2 = , and that it exists
and is unique since is positive definite.
8.3.2

The Chi-Square connection

Assume x N (m, ) and x to be n dimensional, then


z = (x
where

2
n

8.3.3

Entropy

m)T

(x

m)

2
n

(388)

denotes the Chi square distribution with n degrees of freedom.

Entropy of a D-dimensional gaussian


Z
p
D
H(x) =
N (m, ) ln N (m, )dx = ln det(2) +
2

8.4
8.4.1

(389)

Mixture of Gaussians
Density

The variable x is distributed as a mixture of gaussians if it has the density


p(x) =

K
X

k p
exp
det(2k )
k=1

1
(x
2

mk )T k 1 (x

mk )

(390)

where k sum to 1 and the k all are positive definite.


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 44

8.4

Mixture of Gaussians

Derivatives
P
Defining p(s) = k k Ns (k , k ) one get

GAUSSIANS

8.4.2

@ ln p(s)
@j

=
=

@ ln p(s)
@j

=
=

@ ln p(s)
@j

=
=

j Ns (j , j )
@
P
ln[j Ns (j , j )]

N
(
,

)
@
k
s
k
j
k
k
j Ns (j , j ) 1
P
k k Ns (k , k ) j
j Ns (j , j )
@
P
ln[j Ns (j , j )]

N
(
,

)
@
k
k
j
k k s

j Ns (j , j ) 1
P
j (s j )
k k Ns (k , k )
j Ns (j , j )
@
P
ln[j Ns (j , j )]

N
(
,

)
@
k
j
k
k k s
j Ns (j , j ) 1
P
j 1 + j 1 (s j )(s
k k Ns (k , k ) 2

But k and k needs to be constrained.

(391)
(392)
(393)
(394)
(395)

1
j )T j (396)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 45

9
9.1

SPECIAL MATRICES

Special Matrices
Block matrices

Let Aij denote the ijth block of A.


9.1.1

Multiplication

Assuming the dimensions of the blocks matches we have

A11 A12
B11 B12
A11 B11 + A12 B21 A11 B12 + A12 B22
=
A21 A22
B21 B22
A21 B11 + A22 B21 A21 B12 + A22 B22
9.1.2   The Determinant

The determinant can be expressed by the use of

    C_1 = A_11 − A_12 A_22^{−1} A_21                                        (397)
    C_2 = A_22 − A_21 A_11^{−1} A_12                                        (398)

as

    det( [ A_11  A_12 ; A_21  A_22 ] ) = det(A_22) det(C_1) = det(A_11) det(C_2)

9.1.3   The Inverse

The inverse can be expressed by the use of

    C_1 = A_11 − A_12 A_22^{−1} A_21                                        (399)
    C_2 = A_22 − A_21 A_11^{−1} A_12                                        (400)

as

    [ A_11  A_12 ]^{−1}   [ C_1^{−1}                  −A_11^{−1} A_12 C_2^{−1} ]
    [ A_21  A_22 ]      = [ −C_2^{−1} A_21 A_11^{−1}   C_2^{−1}                ]

                          [ A_11^{−1} + A_11^{−1} A_12 C_2^{−1} A_21 A_11^{−1}   −C_1^{−1} A_12 A_22^{−1}                            ]
                        = [ −A_22^{−1} A_21 C_1^{−1}                              A_22^{−1} + A_22^{−1} A_21 C_1^{−1} A_12 A_22^{−1} ]

9.1.4   Block diagonal

For block diagonal matrices we have

    [ A_11   0   ]^{−1}   [ (A_11)^{−1}      0       ]
    [  0    A_22 ]      = [     0        (A_22)^{−1} ]                      (401)

    det( [ A_11  0 ; 0  A_22 ] ) = det(A_11) det(A_22)                      (402)

9.1.5   Schur complement

Regard the matrix

    [ A_11  A_12 ]
    [ A_21  A_22 ]

The Schur complement of block A_11 of the matrix above is the matrix (denoted C_2 in the text above)

    A_22 − A_21 A_11^{−1} A_12

The Schur complement of block A_22 of the matrix above is the matrix (denoted C_1 in the text above)

    A_11 − A_12 A_22^{−1} A_21

Using the Schur complement, one can rewrite the inverse of a block matrix

    [ A_11  A_12 ]^{−1}   [       I           0 ] [ (A_11 − A_12 A_22^{−1} A_21)^{−1}      0      ] [ I   −A_12 A_22^{−1} ]
    [ A_21  A_22 ]      = [ −A_22^{−1} A_21   I ] [              0                     A_22^{−1}  ] [ 0         I         ]

The Schur complement is useful when solving linear systems of the form

    [ A_11  A_12 ] [ x_1 ]   [ b_1 ]
    [ A_21  A_22 ] [ x_2 ] = [ b_2 ]

which has the following equation for x_1:

    (A_11 − A_12 A_22^{−1} A_21) x_1 = b_1 − A_12 A_22^{−1} b_2

When the appropriate inverses exist, this can be solved for x_1, which can then be inserted in the equation for x_2 to solve for x_2.
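A numpy sketch of the block solve (added illustration; the blocks and right-hand sides are arbitrary test data):

    import numpy as np

    rng = np.random.default_rng(1)
    n1, n2 = 3, 2
    A11 = rng.standard_normal((n1, n1)) + 3 * np.eye(n1)
    A12 = rng.standard_normal((n1, n2))
    A21 = rng.standard_normal((n2, n1))
    A22 = rng.standard_normal((n2, n2)) + 3 * np.eye(n2)
    b1, b2 = rng.standard_normal(n1), rng.standard_normal(n2)

    # Schur complement of A22: C1 = A11 - A12 A22^{-1} A21
    C1 = A11 - A12 @ np.linalg.solve(A22, A21)
    x1 = np.linalg.solve(C1, b1 - A12 @ np.linalg.solve(A22, b2))
    x2 = np.linalg.solve(A22, b2 - A21 @ x1)

    # compare with solving the full system directly
    A = np.block([[A11, A12], [A21, A22]])
    b = np.concatenate([b1, b2])
    print(np.allclose(np.concatenate([x1, x2]), np.linalg.solve(A, b)))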

9.2     Discrete Fourier Transform Matrix, The

The DFT matrix is an N × N symmetric matrix W_N, where the k, nth element is given by

    W_N^{kn} = e^{−j 2π k n / N}                                            (403)

Thus the discrete Fourier transform (DFT) can be expressed as

    X(k) = ∑_{n=0}^{N−1} x(n) W_N^{kn}.                                     (404)

Likewise the inverse discrete Fourier transform (IDFT) can be expressed as

    x(n) = (1/N) ∑_{k=0}^{N−1} X(k) W_N^{−kn}.                              (405)

The DFT of the vector x = [x(0), x(1), ..., x(N−1)]^T can be written in matrix form as

    X = W_N x,                                                              (406)

where X = [X(0), X(1), ..., X(N−1)]^T. The IDFT is similarly given as

    x = W_N^{−1} X.                                                         (407)

Some properties of W_N exist:

    W_N^{−1} = (1/N) W_N^*                                                  (408)
    W_N W_N^* = N I                                                         (409)
    W_N^* = W_N^H                                                           (410)

If W_N = e^{−j 2π / N}, then [23]

    W_N^{m + N/2} = −W_N^m                                                  (411)

Notice, the DFT matrix is a Vandermonde Matrix.

The following important relation between the circulant matrix and the discrete Fourier transform (DFT) exists

    T_C = W_N^{−1} ( I ∘ (W_N t) ) W_N,                                     (412)

where t = [t_0, t_1, ..., t_{n−1}]^T is the first row of T_C.
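A short numpy check (added example, not from the original) that the matrix of Eq. (403) reproduces numpy's FFT and satisfies Eqs. (408)–(409):

    import numpy as np

    N = 8
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    W = np.exp(-2j * np.pi * k * n / N)            # Eq. (403)

    x = np.random.default_rng(2).standard_normal(N)
    print(np.allclose(W @ x, np.fft.fft(x)))               # Eqs. (404)/(406)
    print(np.allclose(W @ W.conj(), N * np.eye(N)))        # Eq. (409)
    print(np.allclose(np.linalg.inv(W), W.conj() / N))     # Eq. (408)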

Hermitian Matrices and skew-Hermitian

A matrix A 2 Cmn is called Hermitian if


AH = A
For real valued matrices, Hermitian and symmetric matrices are equivalent.
A is Hermitian
A is Hermitian

,
,

xH Ax 2 R,
eig(A) 2 R

8x 2 Cn1

(413)
(414)

Note that
A = B + iC
where B, C are hermitian, then
B=
9.3.1

A + AH
,
2

C=

AH
2i

9.3.1   Skew-Hermitian

A matrix A is called skew-hermitian if

    A = −A^H

For real valued matrices, skew-Hermitian and skew-symmetric matrices are equivalent.

    A Hermitian        ⇔   iA is skew-hermitian                             (415)
    A skew-Hermitian   ⇔   x^H A y = −x^H A^H y,   ∀x, y                    (416)
    A skew-Hermitian   ⇒   eig(A) = iλ,   λ ∈ R                             (417)

9.4     Idempotent Matrices

A matrix A is idempotent if

    AA = A

Idempotent matrices A and B have the following properties

    A^n = A,                    for n = 1, 2, 3, ...                        (418)
    I − A                       is idempotent                               (419)
    A^H                         is idempotent                               (420)
    I − A^H                     is idempotent                               (421)
    If AB = BA             ⇒    AB is idempotent                            (422)
    rank(A) = Tr(A)                                                         (423)
    A(I − A) = 0                                                            (424)
    (I − A)A = 0                                                            (425)
    A^+ = A                                                                 (426)
    f(sI + tA) = (I − A) f(s) + A f(s + t)                                  (427)

Note that A − I is not necessarily idempotent.
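Orthogonal projection matrices are a standard example of idempotent matrices; the numpy sketch below (added illustration, with an arbitrary X) checks a few of the properties above for P = X(X^T X)^{−1}X^T:

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((6, 3))
    P = X @ np.linalg.inv(X.T @ X) @ X.T           # projector onto the column space of X
    I = np.eye(6)

    print(np.allclose(P @ P, P))                               # AA = A
    print(np.allclose((I - P) @ (I - P), I - P))               # Eq. (419)
    print(np.isclose(np.trace(P), np.linalg.matrix_rank(P)))   # Eq. (423)
    print(np.allclose(P @ (I - P), 0))                         # Eq. (424)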

9.4.1   Nilpotent

A matrix A is nilpotent if

    A² = 0

A nilpotent matrix has the following property:

    f(sI + tA) = I f(s) + tA f'(s)                                          (428)

9.4.2   Unipotent

A matrix A is unipotent if

    AA = I

A unipotent matrix has the following property:

    f(sI + tA) = [ (I + A) f(s + t) + (I − A) f(s − t) ] / 2                (429)

9.5     Orthogonal matrices

A square matrix Q is orthogonal if and only if

    Q^T Q = Q Q^T = I

and then Q has the following properties

    Its eigenvalues are placed on the unit circle.
    Its eigenvectors are unitary, i.e. have length one.
    The inverse of an orthogonal matrix is orthogonal too.

Basic properties for the orthogonal matrix Q:

    Q^{−1} = Q^T
    Q^{−T} = Q
    Q Q^T = I
    Q^T Q = I
    det(Q) = ±1

9.5.1   Ortho-Sym

A matrix Q_+ which simultaneously is orthogonal and symmetric is called an ortho-sym matrix [20]. Hereby

    Q_+^T Q_+ = I                                                           (430)
    Q_+ = Q_+^T                                                             (431)

The powers of an ortho-sym matrix are given by the following rule

    Q_+^k = [(1 + (−1)^k)/2] I + [(1 + (−1)^{k+1})/2] Q_+                   (432)
          = [(1 + cos(kπ))/2] I + [(1 − cos(kπ))/2] Q_+                     (433)

9.5.2   Ortho-Skew

A matrix which simultaneously is orthogonal and antisymmetric is called an ortho-skew matrix [20]. Hereby

    Q_−^H Q_− = I                                                           (434)
    Q_− = −Q_−^H                                                            (435)

The powers of an ortho-skew matrix are given by the following rule

    Q_−^k = [(i^k + (−i)^k)/2] I − i [(i^k − (−i)^k)/2] Q_−                 (436)
          = cos(kπ/2) I + sin(kπ/2) Q_−                                     (437)

9.5.3   Decomposition

A square matrix A can always be written as a sum of a symmetric A_+ and an antisymmetric matrix A_−

    A = A_+ + A_−                                                           (438)

9.6     Positive Definite and Semi-definite Matrices

9.6.1   Definitions

A matrix A is positive definite if and only if

    x^T A x > 0,     ∀x ≠ 0                                                 (439)

A matrix A is positive semi-definite if and only if

    x^T A x ≥ 0,     ∀x                                                     (440)

Note that if A is positive definite, then A is also positive semi-definite.


9.6.2   Eigenvalues

The following holds with respect to the eigenvalues:

    A pos. def.        ⇔   eig( (A + A^H)/2 ) > 0
    A pos. semi-def.   ⇔   eig( (A + A^H)/2 ) ≥ 0                           (441)

9.6.3   Trace

The following holds with respect to the trace:

    A pos. def.        ⇒   Tr(A) > 0
    A pos. semi-def.   ⇒   Tr(A) ≥ 0                                        (442)

9.6.4   Inverse

If A is positive definite, then A is invertible and A^{−1} is also positive definite.

9.6.5   Diagonal

If A is positive definite, then A_ii > 0, ∀i.


9.6.6   Decomposition I

The matrix A is positive semi-definite of rank r  ⇔  there exists a matrix B of rank r such that A = BB^T.

The matrix A is positive definite  ⇔  there exists an invertible matrix B such that A = BB^T.

9.6.7   Decomposition II

Assume A is an n × n positive semi-definite matrix, then there exists an n × r matrix B of rank r such that B^T A B = I.

9.6.8   Equation with zeros

Assume A is positive semi-definite, then X^T A X = 0  ⇒  AX = 0.

9.6.9   Rank of product

Assume A is positive definite, then rank(B A B^T) = rank(B).

9.6.10  Positive definite property

If A is n × n positive definite and B is r × n of rank r, then B A B^T is positive definite.

9.6.11  Outer Product

If X is n × r, where n ≤ r and rank(X) = n, then X X^T is positive definite.

9.6.12  Small perturbations

If A is positive definite and B is symmetric, then A − tB is positive definite for sufficiently small t.

9.6.13  Hadamard inequality

If A is a positive definite or semi-definite matrix, then

    det(A) ≤ ∏_i A_ii

See [15, pp. 477].

9.6.14  Hadamard product relation

Assume that P = AA^T and Q = BB^T are semi positive definite matrices, it then holds that

    P ∘ Q = RR^T

where the columns of R are constructed as follows: r_{i+(j−1)N_A} = a_i ∘ b_j, for i = 1, 2, ..., N_A and j = 1, 2, ..., N_B. The result is unpublished, but reported by Pavel Sakov and Craig Bishop.
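A quick numerical sketch of this construction (numpy code added for illustration; sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(4)
    n, NA, NB = 4, 3, 2
    A = rng.standard_normal((n, NA))
    B = rng.standard_normal((n, NB))
    P, Q = A @ A.T, B @ B.T

    # columns r_{i+(j-1)NA} = a_i * b_j (elementwise products of columns)
    R = np.column_stack([A[:, i] * B[:, j] for j in range(NB) for i in range(NA)])
    print(np.allclose(P * Q, R @ R.T))             # Hadamard product relation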

9.7     Singleentry Matrix, The

9.7.1   Definition

The single-entry matrix J^{ij} ∈ R^{n×n} is defined as the matrix which is zero everywhere except in the entry (i, j) in which it is 1. In a 4 × 4 example one might have

             [ 0  0  0  0 ]
    J^{23} = [ 0  0  1  0 ]                                                 (443)
             [ 0  0  0  0 ]
             [ 0  0  0  0 ]

The single-entry matrix is very useful when working with derivatives of expressions involving matrices.

9.7.2   Swap and Zeros

Assume A to be n × m and J^{ij} to be m × p

    A J^{ij} = [ 0   0   ...   A_i   ...   0 ]                              (444)

i.e. an n × p matrix of zeros with the i.th column of A in place of the j.th column. Assume A to be n × m and J^{ij} to be p × n

               [  0  ]
               [  :  ]
    J^{ij} A = [ A_j ]                                                      (445)
               [  :  ]
               [  0  ]

i.e. a p × m matrix of zeros with the j.th row of A in the place of the i.th row.
9.7.3   Rewriting product of elements

    A_ki B_jl = (A e_i e_j^T B)_kl     = (A J^{ij} B)_kl                    (446)
    A_ik B_lj = (A^T e_i e_j^T B^T)_kl = (A^T J^{ij} B^T)_kl                (447)
    A_ik B_jl = (A^T e_i e_j^T B)_kl   = (A^T J^{ij} B)_kl                  (448)
    A_ki B_lj = (A e_i e_j^T B^T)_kl   = (A J^{ij} B^T)_kl                  (449)

9.7.4   Properties of the Singleentry Matrix

If i = j

    J^{ij} J^{ij} = J^{ij}              (J^{ij})^T (J^{ij})^T = J^{ij}
    J^{ij} (J^{ij})^T = J^{ij}          (J^{ij})^T J^{ij} = J^{ij}

If i ≠ j

    J^{ij} J^{ij} = 0                   (J^{ij})^T (J^{ij})^T = 0
    J^{ij} (J^{ij})^T = J^{ii}          (J^{ij})^T J^{ij} = J^{jj}

9.7.5   The Singleentry Matrix in Scalar Expressions

Assume A is n × m and J is m × n, then

    Tr(A J^{ij}) = Tr(J^{ij} A) = (A^T)_ij                                  (450)

Assume A is n × n, J is n × m and B is m × n, then

    Tr(A J^{ij} B) = (A^T B^T)_ij                                           (451)
    Tr(A J^{ji} B) = (B A)_ij                                               (452)
    Tr(A J^{ij} J^{ij} B) = diag(A^T B^T)_ij                                (453)

Assume A is n × n, J^{ij} is n × m and B is m × n, then

    x^T A J^{ij} B x = (A^T x x^T B^T)_ij                                   (454)
    x^T A J^{ij} J^{ij} B x = diag(A^T x x^T B^T)_ij                        (455)

9.7.6   Structure Matrices

The structure matrix is defined by

    ∂A/∂A_ij = S^{ij}                                                       (456)

If A has no special structure then

    S^{ij} = J^{ij}                                                         (457)

If A is symmetric then

    S^{ij} = J^{ij} + J^{ji} − J^{ij} J^{ij}                                (458)

9.8     Symmetric, Skew-symmetric/Antisymmetric

9.8.1   Symmetric

The matrix A is said to be symmetric if

    A = A^T                                                                 (459)

Symmetric matrices have many important properties, e.g. that their eigenvalues are real and eigenvectors orthogonal.

9.8.2   Skew-symmetric/Antisymmetric

The antisymmetric matrix is also known as the skew symmetric matrix. It has the following property from which it is defined

    A = −A^T                                                                (460)

Hereby, it can be seen that the antisymmetric matrices always have a zero diagonal. The n × n antisymmetric matrices also have the following properties.

    det(A^T) = det(−A) = (−1)^n det(A)                                      (461)
    −det(A) = det(−A) = 0,     if n is odd                                  (462)

The eigenvalues of an antisymmetric matrix are placed on the imaginary axis and the eigenvectors are unitary.

9.8.3   Decomposition

A square matrix A can always be written as a sum of a symmetric A_+ and an antisymmetric matrix A_−

    A = A_+ + A_−                                                           (463)

Such a decomposition could e.g. be

    A = (A + A^T)/2 + (A − A^T)/2 = A_+ + A_−                               (464)

9.9     Toeplitz Matrices

A Toeplitz matrix T is a matrix where the elements of each diagonal are the same. In the n × n square case, it has the following structure:

        [ t_11   t_12   ...    t_1n ]   [ t_0         t_1    ...   t_{n−1} ]
    T = [ t_21    .             :   ] = [ t_{−1}       .            :      ]            (465)
        [  :           .       t_12 ]   [  :                 .      t_1    ]
        [ t_n1   ...   t_21    t_11 ]   [ t_{−(n−1)}  ...   t_{−1}  t_0    ]

A Toeplitz matrix is persymmetric. If a matrix is persymmetric (or orthosymmetric), it means that the matrix is symmetric about its northeast-southwest diagonal (anti-diagonal) [12]. Persymmetric matrices are a larger class of matrices, since a persymmetric matrix does not necessarily have a Toeplitz structure. There are some special cases of Toeplitz matrices. The symmetric Toeplitz matrix is given by:

        [ t_0       t_1   ...   t_{n−1} ]
    T = [ t_1        .           :      ]                                               (466)
        [  :              .      t_1    ]
        [ t_{n−1}   ...   t_1    t_0    ]

The circular Toeplitz matrix:

          [ t_0       t_1   ...   t_{n−1} ]
    T_C = [ t_{n−1}   t_0          :      ]                                             (467)
          [  :               .     t_1    ]
          [ t_1       ...  t_{n−1} t_0    ]

The upper triangular Toeplitz matrix:

          [ t_0   t_1   ...   t_{n−1} ]
    T_U = [ 0     t_0          :      ]                                                 (468)
          [ :           .      t_1    ]
          [ 0     ...   0      t_0    ]

and the lower triangular Toeplitz matrix:

          [ t_0          0     ...     0   ]
    T_L = [ t_{−1}       t_0           :   ]                                            (469)
          [  :                  .      0   ]
          [ t_{−(n−1)}   ...   t_{−1}  t_0 ]

9.9.1   Properties of Toeplitz Matrices

The Toeplitz matrix has some computational advantages. The addition of two Toeplitz matrices can be done with O(n) flops, multiplication of two Toeplitz matrices can be done in O(n ln n) flops. Toeplitz equation systems can be solved in O(n²) flops. The inverse of a positive definite Toeplitz matrix can be found in O(n²) flops too. The inverse of a Toeplitz matrix is persymmetric. The product of two lower triangular Toeplitz matrices is a Toeplitz matrix. More information on Toeplitz matrices and circulant matrices can be found in [13, 7].
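A small scipy/numpy sketch (added for illustration, arbitrary entries) constructing Toeplitz and circular Toeplitz matrices and checking persymmetry, i.e. J T J = T^T with J the exchange (anti-identity) matrix:

    import numpy as np
    from scipy.linalg import toeplitz, circulant

    c = np.array([1.0, 2.0, 3.0, 4.0])             # first column
    r = np.array([1.0, -1.0, -2.0, -3.0])          # first row
    T = toeplitz(c, r)
    J = np.fliplr(np.eye(4))                       # exchange matrix
    print(np.allclose(J @ T @ J, T.T))             # Toeplitz => persymmetric

    TC = circulant(np.array([1.0, 2.0, 3.0, 4.0])) # circular Toeplitz matrix
    print(np.allclose(J @ TC @ J, TC.T))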

9.10    Transition matrices

A square matrix P is a transition matrix, also known as stochastic matrix or probability matrix, if

    0 ≤ (P)_ij ≤ 1,        ∑_j (P)_ij = 1

The transition matrix usually describes the probability of moving from state i to j in one step and is closely related to markov processes. Transition matrices have the following properties

    Prob[i → j in 1 step]  = (P)_ij                                         (470)
    Prob[i → j in 2 steps] = (P²)_ij                                        (471)
    Prob[i → j in k steps] = (P^k)_ij                                       (472)
    If all rows are identical    ⇒    P^n = P                               (473)
    αP = α,      α is called invariant                                      (474)

where α is a so-called stationary probability vector, i.e., 0 ≤ α_i ≤ 1 and ∑_i α_i = 1.
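A numpy sketch of the invariant vector of Eq. (474) (illustrative only, with an arbitrary 3-state chain), found here as a left eigenvector of P:

    import numpy as np

    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.5, 0.3],
                  [0.0, 0.4, 0.6]])                # rows sum to one

    # alpha P = alpha  <=>  P^T alpha^T = alpha^T
    w, V = np.linalg.eig(P.T)
    alpha = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    alpha = alpha / alpha.sum()                    # normalize to a probability vector

    print(alpha)
    print(np.allclose(alpha @ P, alpha))                           # Eq. (474)
    print(np.allclose(np.linalg.matrix_power(P, 200),
                      np.ones((3, 1)) @ alpha[None, :]))           # rows converge to alpha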

9.11    Units, Permutation and Shift

9.11.1  Unit vector

Let e_i ∈ R^{n×1} be the ith unit vector, i.e. the vector which is zero in all entries except the ith at which it is 1.

9.11.2  Rows and Columns

    i.th row of A       =   e_i^T A                                         (475)
    j.th column of A    =   A e_j                                           (476)

9.11.3  Permutations

Let P be some permutation matrix, e.g.

        [ 0  1  0 ]                       [ e_2^T ]
    P = [ 1  0  0 ] = [ e_2  e_1  e_3 ] = [ e_1^T ]                         (477)
        [ 0  0  1 ]                       [ e_3^T ]

For permutation matrices it holds that

    P P^T = I                                                               (478)

and that

    A P = [ A e_2   A e_1   A e_3 ],        P A = [ e_2^T A ; e_1^T A ; e_3^T A ]        (479)

That is, the first is a matrix which has columns of A but in permuted sequence
and the second is a matrix which has the rows of A but in the permuted sequence.
9.11.4  Translation, Shift or Lag Operators

Let L denote the lag (or translation or shift) operator defined on a 4 × 4 example by

        [ 0  0  0  0 ]
    L = [ 1  0  0  0 ]                                                      (480)
        [ 0  1  0  0 ]
        [ 0  0  1  0 ]

i.e. a matrix of zeros with one on the sub-diagonal, (L)_ij = δ_{i,j+1}. With some signal x_t for t = 1, ..., N, the n.th power of the lag operator shifts the indices, i.e.

    (L^n x)_t = { 0          for t = 1, ..., n
                { x_{t−n}    for t = n + 1, ..., N                          (481)

A related but slightly different matrix is the "recurrent shifted" operator defined on a 4 × 4 example by

        [ 0  0  0  1 ]
    L̂ = [ 1  0  0  0 ]                                                      (482)
        [ 0  1  0  0 ]
        [ 0  0  1  0 ]

i.e. a matrix defined by (L̂)_ij = δ_{i,j+1} + δ_{i,1} δ_{j,dim(L)}. On a signal x it has the effect

    (L̂^n x)_t = x_{t'},      t' = [(t − n) mod N] + 1                       (483)

That is, L̂ is like the shift operator L except that it wraps the signal as if it was periodic and shifted (substituting the zeros with the rear end of the signal). Note that L̂ is invertible and orthogonal, i.e.

    L̂^{−1} = L̂^T                                                            (484)

9.12    Vandermonde Matrices

A Vandermonde matrix has the form [15]

        [ 1   v_1   v_1²   ...   v_1^{n−1} ]
    V = [ 1   v_2   v_2²   ...   v_2^{n−1} ]                                (485)
        [ :    :     :            :        ]
        [ 1   v_n   v_n²   ...   v_n^{n−1} ]

The transpose of V is also said to be a Vandermonde matrix. The determinant is given by [29]

    det V = ∏_{i>j} (v_i − v_j)                                             (486)
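A quick numpy check of Eq. (486) (added example; the nodes v are arbitrary):

    import numpy as np

    v = np.array([1.0, 2.0, 4.0, 7.0])
    V = np.vander(v, increasing=True)              # rows [1, v_i, v_i^2, ..., v_i^{n-1}]

    det_formula = np.prod([v[i] - v[j] for i in range(len(v)) for j in range(i)])
    print(np.isclose(np.linalg.det(V), det_formula))      # Eq. (486)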

10      Functions and Operators

10.1    Functions and Series

10.1.1  Finite Series

    (X^n − I)(X − I)^{−1} = I + X + X² + ... + X^{n−1}                      (487)

10.1.2  Taylor Expansion of Scalar Function

Consider some scalar function f(x) which takes the vector x as an argument. This we can Taylor expand around x_0

    f(x) ≅ f(x_0) + g(x_0)^T (x − x_0) + (1/2)(x − x_0)^T H(x_0)(x − x_0)   (488)

where

    g(x_0) = ∂f(x)/∂x |_{x_0},        H(x_0) = ∂²f(x)/∂x∂x^T |_{x_0}

10.1.3  Matrix Functions by Infinite Series

As for analytical functions in one dimension, one can define a matrix function for square matrices X by an infinite series

    f(X) = ∑_{n=0}^{∞} c_n X^n                                              (489)

assuming the limit exists and is finite. If the coefficients c_n fulfil ∑_n c_n x^n < ∞, then one can prove that the above series exists and is finite, see [1]. Thus for any analytical function f(x) there exists a corresponding matrix function f(X) constructed by the Taylor expansion. Using this one can prove the following results:

1) A matrix A is a zero of its own characteristic polynomium [1]:

    p(λ) = det(λI − A) = ∑_n c_n λ^n     ⇒     p(A) = 0                     (490)

2) If A is square it holds that [1]

    A = UBU^{−1}     ⇒     f(A) = U f(B) U^{−1}                             (491)

3) A useful fact when using power series is that

    A^n → 0  for  n → ∞,       if  |A| < 1                                  (492)

10.1.4  Identity and commutations

It holds for an analytical matrix function f(X) that

    f(AB)A = A f(BA)                                                        (493)

see B.1.2 for a proof.

10.1.5  Exponential Matrix Function

In analogy to the ordinary scalar exponential function, one can define exponential and logarithmic matrix functions:

    e^A       = ∑_{n=0}^{∞} (1/n!) A^n         = I + A + (1/2)A² + ...                  (494)
    e^{−A}    = ∑_{n=0}^{∞} (1/n!) (−1)^n A^n  = I − A + (1/2)A² − ...                  (495)
    e^{tA}    = ∑_{n=0}^{∞} (1/n!) (tA)^n      = I + tA + (1/2)t²A² + ...               (496)
    ln(I + A) = ∑_{n=1}^{∞} ((−1)^{n−1}/n) A^n = A − (1/2)A² + (1/3)A³ − ...            (497)

Some of the properties of the exponential function are [1]

    e^A e^B = e^{A+B}                      if  AB = BA                      (498)
    (e^A)^{−1} = e^{−A}                                                     (499)
    d/dt e^{tA} = A e^{tA} = e^{tA} A,     t ∈ R                            (500)
    d/dt Tr(e^{tA}) = Tr(A e^{tA})                                          (501)
    det(e^A) = e^{Tr(A)}                                                    (502)

10.1.6  Trigonometric Functions

    sin(A) = ∑_{n=0}^{∞} (−1)^n A^{2n+1} / (2n+1)!  = A − (1/3!)A³ + (1/5!)A⁵ − ...     (503)
    cos(A) = ∑_{n=0}^{∞} (−1)^n A^{2n} / (2n)!      = I − (1/2!)A² + (1/4!)A⁴ − ...     (504)
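A scipy sketch (added illustration) checking properties (498) and (502) on a small commuting pair; for the chosen A, e^{tA} is a plane rotation, which also illustrates the series (503)–(504):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    B = 2.0 * A                                    # commutes with A by construction

    print(np.allclose(expm(A) @ expm(B), expm(A + B)))               # Eq. (498), AB = BA
    print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))   # Eq. (502)

    t = 0.7
    print(np.allclose(expm(t * A), np.array([[np.cos(t), np.sin(t)],
                                             [-np.sin(t), np.cos(t)]])))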

10.2    Kronecker and Vec Operator

10.2.1  The Kronecker Product

The Kronecker product of an m × n matrix A and an r × q matrix B, is an mr × nq matrix, A ⊗ B, defined as

            [ A_11 B   A_12 B   ...   A_1n B ]
    A ⊗ B = [ A_21 B   A_22 B   ...   A_2n B ]                              (505)
            [   :        :               :   ]
            [ A_m1 B   A_m2 B   ...   A_mn B ]

The Kronecker product has the following properties (see [19])

    A ⊗ (B + C)     = A ⊗ B + A ⊗ C                                         (506)
    A ⊗ B           ≠ B ⊗ A                   in general                    (507)
    A ⊗ (B ⊗ C)     = (A ⊗ B) ⊗ C                                           (508)
    (α_A A ⊗ α_B B) = α_A α_B (A ⊗ B)                                       (509)
    (A ⊗ B)^T       = A^T ⊗ B^T                                             (510)
    (A ⊗ B)(C ⊗ D)  = AC ⊗ BD                                               (511)
    (A ⊗ B)^{−1}    = A^{−1} ⊗ B^{−1}                                       (512)
    (A ⊗ B)^+       = A^+ ⊗ B^+                                             (513)
    rank(A ⊗ B)     = rank(A) rank(B)                                       (514)
    Tr(A ⊗ B)       = Tr(A) Tr(B) = Tr(Λ_A ⊗ Λ_B)                           (515)
    det(A ⊗ B)      = det(A)^{rank(B)} det(B)^{rank(A)}                     (516)
    {eig(A ⊗ B)}    = {eig(B ⊗ A)}            if A, B are square            (517)
    {eig(A ⊗ B)}    = {eig(A) eig(B)^T}       if A, B are symmetric and square   (518)
    eig(A ⊗ B)      = eig(A) ⊗ eig(B)                                       (519)

Where {λ_i} denotes the set of values λ_i, that is, the values in no particular order or structure, and Λ_A denotes the diagonal matrix with the eigenvalues of A.
10.2.2  The Vec Operator

The vec-operator applied on a matrix A stacks the columns into a vector, i.e. for a 2 × 2 matrix

        [ A_11  A_12 ]                 [ A_11 ]
    A = [ A_21  A_22 ]       vec(A) =  [ A_21 ]
                                       [ A_12 ]
                                       [ A_22 ]

Properties of the vec-operator include (see [19])

    vec(AXB) = (B^T ⊗ A) vec(X)                                             (520)
    Tr(A^T B) = vec(A)^T vec(B)                                             (521)
    vec(A + B) = vec(A) + vec(B)                                            (522)
    vec(αA) = α vec(A)                                                      (523)
    a^T X B X^T c = vec(X)^T (B^T ⊗ c a^T) vec(X)                           (524)

See B.1.1 for a proof for Eq. 524.
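A numpy sketch of Eqs. (520) and (524) (added example with arbitrary matrices; the B^T in (524) is taken as in the proof of B.1.1). Note that numpy's default flatten is row-major, so vec() needs order='F':

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.standard_normal((3, 4))
    X = rng.standard_normal((4, 2))
    B = rng.standard_normal((2, 5))

    vec = lambda M: M.flatten(order="F")           # stack columns

    # Eq. (520): vec(AXB) = (B^T kron A) vec(X)
    print(np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X)))

    # Eq. (524): a^T X B X^T c = vec(X)^T (B^T kron c a^T) vec(X)
    a, c = rng.standard_normal(4), rng.standard_normal(4)
    Xs = rng.standard_normal((4, 2))
    Bs = rng.standard_normal((2, 2))
    lhs = a @ Xs @ Bs @ Xs.T @ c
    rhs = vec(Xs) @ np.kron(Bs.T, np.outer(c, a)) @ vec(Xs)
    print(np.isclose(lhs, rhs))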

10.3    Vector Norms

10.3.1  Examples

    ||x||_1  = ∑_i |x_i|                                                    (525)
    ||x||_2² = x^H x                                                        (526)
    ||x||_p  = [ ∑_i |x_i|^p ]^{1/p}                                        (527)
    ||x||_∞  = max_i |x_i|                                                  (528)

Further reading in e.g. [12, p. 52].

10.4    Matrix Norms

10.4.1  Definitions

A matrix norm is a mapping which fulfils

    ||A|| ≥ 0                                                               (529)
    ||A|| = 0  ⇔  A = 0                                                     (530)
    ||cA|| = |c| ||A||,      c ∈ R                                          (531)
    ||A + B|| ≤ ||A|| + ||B||                                               (532)

10.4.2  Induced Norm or Operator Norm

An induced norm is a matrix norm induced by a vector norm by the following

    ||A|| = sup{ ||Ax||  :  ||x|| = 1 }                                     (533)

where || · || on the left side is the induced matrix norm, while || · || on the right side denotes the vector norm. For induced norms it holds that

    ||I|| = 1                                                               (534)
    ||Ax|| ≤ ||A|| ||x||,      for all A, x                                 (535)
    ||AB|| ≤ ||A|| ||B||,      for all A, B                                 (536)

10.4.3  Examples

    ||A||_1  = max_j ∑_i |A_ij|                                             (537)
    ||A||_2  = √( max eig(A^H A) )                                          (538)
    ||A||_p  = ( max_{||x||_p = 1} ||Ax||_p )^{1/p}                         (539)
    ||A||_∞  = max_i ∑_j |A_ij|                                             (540)
    ||A||_F  = √( ∑_ij |A_ij|² ) = √( Tr(A A^H) )        (Frobenius)        (541)

    ||A||_max = max_ij |A_ij|                                               (542)
    ||A||_KF  = ||sing(A)||_1                            (Ky Fan)           (543)

where sing(A) is the vector of singular values of the matrix A.
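A numpy sketch (illustrative, random matrix) computing the example norms (537)–(543) and comparing with numpy's built-ins where they exist:

    import numpy as np

    A = np.random.default_rng(6).standard_normal((4, 3))
    s = np.linalg.svd(A, compute_uv=False)         # singular values, sing(A)

    norm_1   = np.abs(A).sum(axis=0).max()                   # Eq. (537)
    norm_2   = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())    # Eq. (538)
    norm_inf = np.abs(A).sum(axis=1).max()                   # Eq. (540)
    norm_F   = np.sqrt((np.abs(A) ** 2).sum())               # Eq. (541)
    norm_max = np.abs(A).max()                               # Eq. (542)
    norm_KF  = s.sum()                                       # Eq. (543), Ky Fan (nuclear) norm

    print(np.isclose(norm_1, np.linalg.norm(A, 1)),
          np.isclose(norm_2, np.linalg.norm(A, 2)),
          np.isclose(norm_inf, np.linalg.norm(A, np.inf)),
          np.isclose(norm_F, np.linalg.norm(A, "fro")),
          np.isclose(norm_KF, np.linalg.norm(A, "nuc")))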


10.4.4  Inequalities

E. H. Rasmussen has in yet unpublished material derived and collected the following inequalities. They are collected in a table as below, assuming A is an m × n matrix and d = rank(A):

                ||A||_max   ||A||_1   ||A||_∞   ||A||_2   ||A||_F   ||A||_KF
    ||A||_max       1          1         1         1         1         1
    ||A||_1         m          1         m        √m        √m        √m
    ||A||_∞         n          n         1        √n        √n        √n
    ||A||_2       √(mn)       √n        √m         1         1         1
    ||A||_F       √(mn)       √n        √m        √d         1         1
    ||A||_KF      √(mnd)     √(nd)     √(md)       d        √d         1

The entry in row || · ||_r and column || · ||_c bounds || A ||_r by that multiple of || A ||_c, i.e. the entries are to be read as, e.g.

    ||A||_2 ≤ √m ||A||_∞                                                    (544)

10.4.5  Condition Number

The 2-norm of A equals √(max(eig(A^T A))) [12, p. 57]. For a symmetric, positive definite matrix, this reduces to max(eig(A)). The condition number based on the 2-norm thus reduces to

    ||A||_2 ||A^{−1}||_2 = max(eig(A)) max(eig(A^{−1})) = max(eig(A)) / min(eig(A)).        (545)

10.5    Rank

10.5.1  Sylvester's Inequality

If A is m × n and B is n × r, then

    rank(A) + rank(B) − n  ≤  rank(AB)  ≤  min{rank(A), rank(B)}            (546)

10.6    Integral Involving Dirac Delta Functions

Assuming A to be square, then

    ∫ p(s) δ(x − As) ds = (1/det(A)) p(A^{−1}x)                             (547)

Assuming A to be "underdetermined", i.e. "tall", then

    ∫ p(s) δ(x − As) ds = { (1/√(det(A^T A))) p(A^+ x)    if x = AA^+ x
                          { 0                             elsewhere         (548)

See [9].
10.7    Miscellaneous

For any A it holds that

    rank(A) = rank(A^T) = rank(AA^T) = rank(A^T A)                          (549)

It holds that

    A is positive definite   ⇔   ∃B invertible, such that A = BB^T          (550)

A       One-dimensional Results

A.1     Gaussian

A.1.1   Density

    p(x) = (1/√(2πσ²)) exp( −(x − μ)² / (2σ²) )                             (551)

A.1.2   Normalization

    ∫ e^{−(s−μ)²/(2σ²)} ds = √(2πσ²)                                        (552)
    ∫ e^{−(ax²+bx+c)} dx = √(π/a) exp[ (b² − 4ac)/(4a) ]                    (553)
    ∫ e^{c₂x²+c₁x+c₀} dx = √(π/(−c₂)) exp[ (c₁² − 4c₂c₀)/(−4c₂) ]           (554)

A.1.3   Derivatives

    ∂p(x)/∂μ    = p(x) (x − μ)/σ²                                           (555)
    ∂ln p(x)/∂μ = (x − μ)/σ²                                                (556)
    ∂p(x)/∂σ    = p(x) (1/σ) [ (x − μ)²/σ² − 1 ]                            (557)
    ∂ln p(x)/∂σ = (1/σ) [ (x − μ)²/σ² − 1 ]                                 (558)

A.1.4   Completing the Squares

    c₂x² + c₁x + c₀ = −a(x − b)² + w

with

    −a = c₂,      b = −(1/2) c₁/c₂,      w = −(1/4) c₁²/c₂ + c₀

or

    c₂x² + c₁x + c₀ = −(1/(2σ²))(x − μ)² + d

with

    μ = −c₁/(2c₂),      σ² = −1/(2c₂),      d = c₀ − c₁²/(4c₂)

A.1.5   Moments

If the density is expressed by

    p(x) = (1/√(2πσ²)) exp( −(s − μ)²/(2σ²) )       or       p(x) = C exp(c₂x² + c₁x)        (559)

then the first few basic moments are

    ⟨x⟩  = μ                    = −c₁/(2c₂)
    ⟨x²⟩ = σ² + μ²              = −1/(2c₂) + [ c₁/(2c₂) ]²
    ⟨x³⟩ = 3σ²μ + μ³            = ( c₁/(2c₂)² ) [ 3 − c₁²/(2c₂) ]
    ⟨x⁴⟩ = μ⁴ + 6μ²σ² + 3σ⁴     = [ c₁/(2c₂) ]⁴ + 6 [ c₁/(2c₂) ]² ( −1/(2c₂) ) + 3 [ 1/(2c₂) ]²

and the central moments are

    ⟨(x − μ)⟩  = 0
    ⟨(x − μ)²⟩ = σ²     = −1/(2c₂)
    ⟨(x − μ)³⟩ = 0
    ⟨(x − μ)⁴⟩ = 3σ⁴    = 3 [ 1/(2c₂) ]²

A kind of pseudo-moments (un-normalized integrals) can easily be derived as

    ∫ exp(c₂x² + c₁x) xⁿ dx = Z⟨xⁿ⟩ = √( π/(−c₂) ) exp( c₁²/(−4c₂) ) ⟨xⁿ⟩    (560)

From the un-centralized moments one can derive other entities like

    ⟨x²⟩ − ⟨x⟩²    = σ²            = −1/(2c₂)
    ⟨x³⟩ − ⟨x²⟩⟨x⟩ = 2σ²μ          = 2c₁/(2c₂)²
    ⟨x⁴⟩ − ⟨x²⟩²   = 2σ⁴ + 4μ²σ²   = (2/(2c₂)²) [ 1 − c₁²/c₂ ]

A.2     One Dimensional Mixture of Gaussians

A.2.1   Density and Normalization

    p(s) = ∑_{k}^{K} ρ_k (1/√(2πσ_k²)) exp[ −(1/2) (s − μ_k)²/σ_k² ]        (561)

A.2.2   Moments

A useful fact of MoG, is that

    ⟨xⁿ⟩ = ∑_k ρ_k ⟨xⁿ⟩_k                                                   (562)

where ⟨·⟩_k denotes average with respect to the k.th component. We can calculate the first four moments from the densities

    p(x) = ∑_k ρ_k (1/√(2πσ_k²)) exp[ −(1/2) (x − μ_k)²/σ_k² ]              (563)

    p(x) = ∑_k ρ_k C_k exp( c_{k2} x² + c_{k1} x )                          (564)

as

    ⟨x⟩  = ∑_k ρ_k μ_k                         = ∑_k ρ_k [ −c_{k1}/(2c_{k2}) ]
    ⟨x²⟩ = ∑_k ρ_k (σ_k² + μ_k²)               = ∑_k ρ_k [ (c_{k1}/(2c_{k2}))² − 1/(2c_{k2}) ]
    ⟨x³⟩ = ∑_k ρ_k (3σ_k²μ_k + μ_k³)           = ∑_k ρ_k [ (c_{k1}/(2c_{k2})²)(3 − c_{k1}²/(2c_{k2})) ]
    ⟨x⁴⟩ = ∑_k ρ_k (μ_k⁴ + 6μ_k²σ_k² + 3σ_k⁴)  = ∑_k ρ_k [ (c_{k1}/(2c_{k2}))⁴ + 6(c_{k1}/(2c_{k2}))²(−1/(2c_{k2})) + 3(1/(2c_{k2}))² ]
If all the gaussians are centered, i.e. μ_k = 0 for all k, then

    ⟨x⟩  = 0
    ⟨x²⟩ = ∑_k ρ_k σ_k²      = ∑_k ρ_k [ −1/(2c_{k2}) ]
    ⟨x³⟩ = 0
    ⟨x⁴⟩ = ∑_k ρ_k 3σ_k⁴     = ∑_k ρ_k 3 [ 1/(2c_{k2}) ]²

From the un-centralized moments one can derive other entities like

    ⟨x²⟩ − ⟨x⟩²     = ∑_{k,k'} ρ_k ρ_{k'} [ σ_k² + μ_k² − μ_k μ_{k'} ]
    ⟨x³⟩ − ⟨x²⟩⟨x⟩  = ∑_{k,k'} ρ_k ρ_{k'} [ 3σ_k²μ_k + μ_k³ − (σ_k² + μ_k²) μ_{k'} ]
    ⟨x⁴⟩ − ⟨x²⟩²    = ∑_{k,k'} ρ_k ρ_{k'} [ μ_k⁴ + 6μ_k²σ_k² + 3σ_k⁴ − (σ_k² + μ_k²)(σ_{k'}² + μ_{k'}²) ]
A.2.3   Derivatives

Defining p(s) = ∑_k ρ_k N_s(μ_k, σ_k²) we get for a parameter θ_j of the j.th component

    ∂ ln p(s)/∂θ_j = [ ρ_j N_s(μ_j, σ_j²) / ∑_k ρ_k N_s(μ_k, σ_k²) ] · ∂ ln(ρ_j N_s(μ_j, σ_j²))/∂θ_j        (565)

that is,

    ∂ ln p(s)/∂ρ_j = [ ρ_j N_s(μ_j, σ_j²) / ∑_k ρ_k N_s(μ_k, σ_k²) ] · (1/ρ_j)                              (566)
    ∂ ln p(s)/∂μ_j = [ ρ_j N_s(μ_j, σ_j²) / ∑_k ρ_k N_s(μ_k, σ_k²) ] · (s − μ_j)/σ_j²                       (567)
    ∂ ln p(s)/∂σ_j = [ ρ_j N_s(μ_j, σ_j²) / ∑_k ρ_k N_s(μ_k, σ_k²) ] · (1/σ_j) [ (s − μ_j)²/σ_j² − 1 ]      (568)

Note that ρ_k must be constrained to be proper ratios. Defining the ratios by ρ_j = e^{r_j} / ∑_k e^{r_k}, we obtain

    ∂ ln p(s)/∂r_j = ∑_l (∂ ln p(s)/∂ρ_l)(∂ρ_l/∂r_j)        where        ∂ρ_l/∂r_j = ρ_l (δ_{lj} − ρ_j)     (569)

B       Proofs and Details

B.1     Misc Proofs

B.1.1   Proof of Equation 524

The following proof is the work of Florian Roemer. Note that the vectors and matrices below can be complex and the notation X^H is used for transpose and conjugated, while X^T is only transpose of the complex matrix.
Define the row vector y = a^H XB and the column vector z = X^H c. Then

    a^T XBX^T c = yz = z^T y^T

Note that y can be rewritten as vec(y)^T which is the same as

    vec(conj(y))^H = vec(a^T conj(X) conj(B))^H

where conj means complex conjugated. Applying the vec rule for linear forms, Eq. 520, we get

    y = ( (B^H ⊗ a^T) vec(conj(X)) )^H = vec(X)^T (B ⊗ conj(a))

where we have also used the rule for transpose of Kronecker products. For y^T this yields (B^T ⊗ a^H) vec(X). Similarly we can rewrite z, which is the same as vec(z^T) = vec(c^T conj(X)). Applying again Eq. 520, we get

    z = (I ⊗ c^T) vec(conj(X))

where I is the identity matrix. For z^T we obtain vec(X)^H (I ⊗ c). Finally, the original expression is z^T y^T which now takes the form

    vec(X)^H (I ⊗ c)(B^T ⊗ a^H) vec(X)

The final step is to apply the rule for products of Kronecker products and by that combine the Kronecker products. This gives

    vec(X)^H (B^T ⊗ c a^H) vec(X)

which is the desired result.
B.1.2   Proof of Equation 493

For any analytical function f(X) of a matrix argument X, it holds that

    f(AB)A = ( ∑_{n=0}^{∞} c_n (AB)^n ) A
           = ∑_{n=0}^{∞} c_n (AB)^n A
           = ∑_{n=0}^{∞} c_n A (BA)^n
           = A ∑_{n=0}^{∞} c_n (BA)^n
           = A f(BA)

B.1.3   Proof of Equation 91

Essentially we need to calculate

    ∂(X^n)_{kl}/∂X_{ij}
        = ∂/∂X_{ij}  ∑_{u_1,...,u_{n−1}} X_{k,u_1} X_{u_1,u_2} ... X_{u_{n−1},l}
        = δ_{k,i} δ_{u_1,j} X_{u_1,u_2} ... X_{u_{n−1},l}
          + X_{k,u_1} δ_{u_1,i} δ_{u_2,j} ... X_{u_{n−1},l}
          + ...
          + X_{k,u_1} X_{u_1,u_2} ... δ_{u_{n−1},i} δ_{l,j}
        = ∑_{r=0}^{n−1} (X^r)_{ki} (X^{n−1−r})_{jl}
        = ∑_{r=0}^{n−1} (X^r J^{ij} X^{n−1−r})_{kl}

Using the properties of the single entry matrix found in Sec. 9.7.4, the result follows easily.
B.1.4   Details on Eq. 571

    ∂ det(X^H AX)
        = det(X^H AX) Tr[ (X^H AX)^{−1} ∂(X^H AX) ]
        = det(X^H AX) Tr[ (X^H AX)^{−1} ( ∂(X^H) AX + X^H ∂(AX) ) ]
        = det(X^H AX) ( Tr[ (X^H AX)^{−1} ∂(X^H) AX ] + Tr[ (X^H AX)^{−1} X^H A ∂(X) ] )
        = det(X^H AX) ( Tr[ AX (X^H AX)^{−1} ∂(X^H) ] + Tr[ (X^H AX)^{−1} X^H A ∂(X) ] )

First, the derivative is found with respect to the real part of X

    ∂ det(X^H AX)/∂ℜX = det(X^H AX) ( ∂Tr[ AX(X^H AX)^{−1} ∂(X^H) ]/∂ℜX + ∂Tr[ (X^H AX)^{−1} X^H A ∂(X) ]/∂ℜX )
                      = det(X^H AX) ( AX(X^H AX)^{−1} + ((X^H AX)^{−1} X^H A)^T )

Through the calculations, (100) and (240) were used. In addition, by use of (241), the derivative is found with respect to the imaginary part of X

    i ∂ det(X^H AX)/∂ℑX = i det(X^H AX) ( ∂Tr[ AX(X^H AX)^{−1} ∂(X^H) ]/∂ℑX + ∂Tr[ (X^H AX)^{−1} X^H A ∂(X) ]/∂ℑX )
                        = det(X^H AX) ( AX(X^H AX)^{−1} − ((X^H AX)^{−1} X^H A)^T )

Hence, the derivative yields

    ∂ det(X^H AX)/∂X = (1/2) ( ∂ det(X^H AX)/∂ℜX − i ∂ det(X^H AX)/∂ℑX )
                     = det(X^H AX) ( (X^H AX)^{−1} X^H A )^T

and the complex conjugate derivative yields

    ∂ det(X^H AX)/∂X* = (1/2) ( ∂ det(X^H AX)/∂ℜX + i ∂ det(X^H AX)/∂ℑX )
                      = det(X^H AX) AX (X^H AX)^{−1}

Notice, for real X, A, the sum of (249) and (250) is reduced to (54).

Similar calculations yield

    ∂ det(XAX^H)/∂X = (1/2) ( ∂ det(XAX^H)/∂ℜX − i ∂ det(XAX^H)/∂ℑX )
                    = det(XAX^H) ( AX^H (XAX^H)^{−1} )^T                    (570)

and

    ∂ det(XAX^H)/∂X* = (1/2) ( ∂ det(XAX^H)/∂ℜX + i ∂ det(XAX^H)/∂ℑX )
                     = det(XAX^H) (XAX^H)^{−1} XA                           (571)


References

[1] Karl Gustav Andersson and Lars-Christer Boiers. Ordinära differentialekvationer. Studenterlitteratur, 1992.
[2] Jörn Anemüller, Terrence J. Sejnowski, and Scott Makeig. Complex independent component analysis of frequency-domain electroencephalographic data. Neural Networks, 16(9):1311-1323, November 2003.
[3] S. Barnet. Matrices. Methods and Applications. Oxford Applied Mathematics and Computing Science Series. Clarendon Press, 1990.
[4] Christopher Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[5] Robert J. Boik. Lecture notes: Statistics 550. Online, April 22, 2002. Notes.
[6] D. H. Brandwood. A complex gradient operator and its application in adaptive array theory. IEE Proceedings, 130(1):11-16, February 1983. Pts. F and H.
[7] M. Brookes. Matrix Reference Manual, 2004. Website, May 20, 2004.
[8] Contradsen K., En introduktion til statistik, IMM lecture notes, 1984.
[9] Mads Dyrholm. Some matrix results, 2004. Website, August 23, 2004.
[10] Nielsen F. A., Formula, Neuro Research Unit and Technical University of Denmark, 2002.
[11] Gelman A. B., J. S. Carlin, H. S. Stern, D. B. Rubin, Bayesian Data Analysis, Chapman and Hall / CRC, 1995.
[12] Gene H. Golub and Charles F. van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, 3rd edition, 1996.
[13] Robert M. Gray. Toeplitz and circulant matrices: A review. Technical report, Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford, California 94305, August 2002.
[14] Simon Haykin. Adaptive Filter Theory. Prentice Hall, Upper Saddle River, NJ, 4th edition, 2002.
[15] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[16] Mardia K. V., J. T. Kent and J. M. Bibby, Multivariate Analysis, Academic Press Ltd., 1979.
[17] Mathpages on Eigenvalue Problems and Matrix Invariants, http://www.mathpages.com/home/kmath128.htm
[18] Carl D. Meyer. Generalized inversion of modified matrices. SIAM Journal of Applied Mathematics, 24(3):315-323, May 1973.
[19] Thomas P. Minka. Old and new matrix algebra useful for statistics, December 2000. Notes.
[20] Daniele Mortari, OrthoSkew and OrthoSym Matrix Trigonometry, John Lee Junkins Astrodynamics Symposium, AAS 03-265, May 2003. Texas A&M University, College Station, TX.
[21] L. Parra and C. Spence. Convolutive blind separation of non-stationary sources. In IEEE Transactions Speech and Audio Processing, pages 320-327, May 2000.
[22] Kaare Brandt Petersen, Jiucang Hao, and Te-Won Lee. Generative and filtering approaches for overcomplete representations. Neural Information Processing - Letters and Reviews, vol. 8(1), 2005.
[23] John G. Proakis and Dimitris G. Manolakis. Digital Signal Processing. Prentice-Hall, 1996.
[24] Laurent Schwartz. Cours d'Analyse, volume II. Hermann, Paris, 1967. As referenced in [14].
[25] Shayle R. Searle. Matrix Algebra Useful for Statistics. John Wiley and Sons, 1982.
[26] G. Seber and A. Lee. Linear Regression Analysis. John Wiley and Sons, 2002.
[27] S. M. Selby. Standard Mathematical Tables. CRC Press, 1974.
[28] Inna Stainvas. Matrix algebra in differential calculus. Neural Computing Research Group, Information Engineering, Aston University, UK, August 2002. Notes.
[29] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice Hall, 1993.
[30] Max Welling. The Kalman Filter. Lecture Note.
[31] Wikipedia on minors: Minor (linear algebra), http://en.wikipedia.org/wiki/Minor_(linear_algebra)
[32] Zhaoshui He, Shengli Xie, et al., Convolutive blind source separation in frequency domain based on sparse representation, IEEE Transactions on Audio, Speech and Language Processing, vol. 15(5):1551-1563, July 2007.
[33] Karim T. Abou-Moustafa, On Derivatives of Eigenvalues and Eigenvectors of the Generalized Eigenvalue Problem. McGill Technical Report, October 2010.
[34] Mohammad Emtiyaz Khan, Updating Inverse of a Matrix When a Column is Added/Removed. Emt CS, UBC, February 27, 2008.

Index

Anti-symmetric, 54
Block matrix, 46
Chain rule, 15
Cholesky-decomposition, 32
Co-kurtosis, 34
Co-skewness, 34
Condition number, 62
Cramer's Rule, 29
Derivative of a complex matrix, 24
Derivative of a determinant, 8
Derivative of a trace, 12
Derivative of an inverse, 9
Derivative of symmetric matrix, 15
Derivatives of Toeplitz matrix, 16
Dirichlet distribution, 37
Eigenvalues, 30
Eigenvectors, 30
Exponential Matrix Function, 59
Gaussian, conditional, 40
Gaussian, entropy, 44
Gaussian, linear combination, 41
Gaussian, marginal, 40
Gaussian, product of densities, 42
Generalized inverse, 21
Hadamard inequality, 52
Hermitian, 48
Idempotent, 49
Kronecker product, 59
LDL decomposition, 33
LDM-decomposition, 33
Linear regression, 28
LU decomposition, 32
Lyapunov Equation, 30
Moore-Penrose inverse, 21
Multinomial distribution, 37
Nilpotent, 49
Norm of a matrix, 61
Norm of a vector, 61
Normal-Inverse Gamma distribution, 37
Normal-Inverse Wishart distribution, 39
Orthogonal, 49
Power series of matrices, 58
Probability matrix, 55
Pseudo-inverse, 21
Schur complement, 41, 47
Single entry matrix, 52
Singular Valued Decomposition (SVD), 31
Skew-Hermitian, 48
Skew-symmetric, 54
Stochastic matrix, 55
Student-t, 37
Sylvester's Inequality, 62
Symmetric, 54
Taylor expansion, 58
Toeplitz matrix, 54
Transition matrix, 55
Trigonometric functions, 59
Unipotent, 49
Vandermonde matrix, 57
Vec operator, 59, 60
Wishart distribution, 38
Woodbury identity, 18