
Linear Algebra and Matrices

Methods for Dummies


FIL
November 2011

Narges Bazargani and Sarah Jensen


ONLINE SOURCES

Web Guides

http://mathworld.wolfram.com/LinearAlgebra.html

http://www.maths.surrey.ac.uk/explore/emmaspages/option1.html

http://www.inf.ed.ac.uk/teaching/courses/fmcs1/

Online introduction:
- http://www.khanacademy.org/video/introduction-to-matrices?playlist=Linear+Algebra
What Is MATLAB?
- And why learn about matrices?

MATLAB = MATrix LABoratory

Typical uses include:


Math and computation
Algorithm development
Modelling, simulation, and prototyping
Data analysis, exploration, and visualization
Scientific and engineering graphics
Application development, including Graphical User Interface building

Everything in MATLAB is a matrix!

Zero-dimensional matrix
A scalar - a single number, e.g. 4 - is really a 1 x 1 matrix in Matlab!

One-dimensional matrix
A vector is a 1 x n matrix with 1 row, e.g. [1 2 3]

Two-dimensional matrix
A matrix is an m x n array of numbers, with m rows and n columns, e.g. the 2 x 3 matrix
  2 7 4
  3 8 9

Even a picture is a matrix!


Building matrices in MATLAB
with [ ]:

A = [2 7 4] gives the 1 x 3 row vector
  2 7 4

A = [2; 7; 4] gives the 3 x 1 column vector
  2
  7
  4

A = [2 7 4; 3 8 9] gives the 2 x 3 matrix
  2 7 4
  3 8 9

A semicolon (;) separates the different rows; spaces or commas separate the columns.
Matrix formation in MATLAB

X = [1 2 3; 4 5 6; 7 8 9] gives
  1 2 3
  4 5 6
  7 8 9
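A minimal MATLAB sketch of the commands above (variable names such as rowVec are illustrative, not from the slides):

rowVec = [2 7 4];          % 1 x 3 row vector
colVec = [2; 7; 4];        % 3 x 1 column vector (semicolons start new rows)
A      = [2 7 4; 3 8 9];   % 2 x 3 matrix
X = [1 2 3; 4 5 6; 7 8 9]; % 3 x 3 matrix
size(X)                    % returns [3 3]: number of rows, then number of columns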

Submatrices in MATLAB
Subscripting: each element of a matrix can be addressed with a pair of numbers, row first, column second.

For X = [1 2 3; 4 5 6; 7 8 9]:
X(2,3) = 6
X(3,:) = (7 8 9)
X([2 3], 2) = (5; 8)
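A short sketch of the same indexing in MATLAB:

X = [1 2 3; 4 5 6; 7 8 9];
X(2,3)        % element in row 2, column 3 -> 6
X(3,:)        % the whole 3rd row -> [7 8 9]
X([2 3], 2)   % rows 2 and 3 of column 2 -> [5; 8]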

Matrix addition and subtraction

NB: Only matrices of the same size can be added and subtracted.

Addition and subtraction work element by element: each entry of the result is the sum (or difference) of the corresponding entries of the two matrices.
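A minimal sketch in MATLAB (A and B here are example matrices, not from the slides):

A = [2 7 4; 3 8 9];
B = [1 1 1; 2 2 2];
A + B          % -> [3 8 5; 5 10 11]
A - B          % -> [1 6 3; 1 6 7]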
Matrix Multiplication I

There are different kinds of multiplication in MATLAB.

Scalar multiplication: every element of the matrix is multiplied by the scalar.

Matlab does all this for you!: 3 * A


Matrix multiplication II
Each entry of the product is the sum of products of a row of the first matrix with a column of the second.

Matrix multiplication rule: for an m x n matrix A and a k x l matrix B, the product A x B is only viable if n = k (the number of columns of A must equal the number of rows of B); the result is then m x l.

Viable: A (4 x 3) x B (3 x 2)

  [a11 a12 a13]
  [a21 a22 a23]   [b11 b12]
  [a31 a32 a33] x [b21 b22]
  [a41 a42 a43]   [b31 b32]

Not viable: B (3 x 2) x A (4 x 3) - the inner dimensions do not match.

Matlab does all this for you!: C = A * B
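A minimal MATLAB sketch of scalar and matrix multiplication (the example matrices are illustrative):

A = [2 7 4; 3 8 9];        % 2 x 3
B = [1 0; 0 1; 2 2];       % 3 x 2
3 * A                      % scalar multiplication: every element is multiplied by 3
C = A * B                  % inner dimensions (3 and 3) match, so the result is 2 x 2
% A * A would fail: inner dimensions (3 and 2) do not match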


Elementwise multiplication

Rule: the two matrices need exactly the same m and n; each element of the result is the product of the corresponding elements of A and B.

Not viable: a 4 x 3 matrix .* a 3 x 2 matrix - the sizes differ.
Viable: a 3 x 2 matrix .* a 3 x 2 matrix - the sizes match.

Matlab does all this for you!: A .* B
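A short sketch contrasting .* with * (example matrices only):

A = [2 7 4; 3 8 9];
B = [1 2 3; 4 5 6];
A .* B    % -> [2 14 12; 12 40 54]: each element multiplied by its counterpart
% A * B would fail here: the inner dimensions (3 and 2) do not match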


Transposition - reorganising matrices

Transposing swaps rows and columns: each column becomes a row, and each row becomes a column.

In Matlab: A' gives the transpose of A (also written A^T).
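A minimal sketch:

A = [2 7 4; 3 8 9];   % 2 x 3
A'                    % its transpose, 3 x 2: [2 3; 7 8; 4 9]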
Identity matrices
A tool for solving equations.
The identity matrix plays a similar role to the number 1 in ordinary multiplication: A In = A.

The 3 x 3 identity matrix:
  1 0 0
  0 1 0
  0 0 1

Worked example for a 3 x 3 matrix:
  [1 2 3]   [1 0 0]   [1+0+0 0+2+0 0+0+3]   [1 2 3]
  [4 5 6] x [0 1 0] = [4+0+0 0+5+0 0+0+6] = [4 5 6]
  [7 8 9]   [0 0 1]   [7+0+0 0+8+0 0+0+9]   [7 8 9]

In Matlab: eye(n) produces the n x n identity matrix (eye(r, c) gives an r x c matrix with ones on the main diagonal).
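A short sketch of the worked example in MATLAB:

A = [1 2 3; 4 5 6; 7 8 9];
I = eye(3);           % the 3 x 3 identity matrix
isequal(A * I, A)     % -> 1 (true): multiplying by the identity leaves A unchanged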


Inverse matrices
Definition: Matrix A is invertible if there exists a matrix B such that A x B = In (the identity matrix).

Worked example:

  [ 1  1]   [2/3 -1/3]   [(2+1)/3  (-1+1)/3]   [1 0]
  [-1  2] x [1/3  1/3] = [(-2+2)/3  (1+2)/3] = [0 1]

Notation: the inverse of a matrix A is written A⁻¹.

If A is invertible, A⁻¹ is also invertible, and A is the inverse matrix of A⁻¹.

In Matlab: inv(A) gives A⁻¹
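A minimal sketch using the matrix from the worked example:

A = [1 1; -1 2];
B = inv(A)        % -> [2/3 -1/3; 1/3 1/3]
A * B             % -> the 2 x 2 identity matrix (up to rounding error)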


Determinants
The determinant is a function that assigns a single number, det(A), to each square matrix A.

A matrix A has an inverse matrix (A⁻¹) if and only if det(A) ≠ 0.

In Matlab: det(A)
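A short sketch (S is an illustrative singular matrix):

A = [1 1; -1 2];
det(A)            % -> 3, non-zero, so A is invertible
S = [1 2; 2 4];   % the second row is twice the first
det(S)            % -> 0, so S has no inverse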
With more than 1 equation and more than 1 unknown

For a single equation ax = b we can use the solution x = a⁻¹b; the same idea solves a system of equations.

For example:
  2x1 + 3x2 = 5
   x1 - 2x2 = 1

In matrix form:
  [2  3] [x1]   [5]
  [1 -2] [x2] = [1]        i.e.  AX = B

We need the determinant of matrix A (because X = A⁻¹B).

From earlier, for A = [a b; c d]: det(A) = ad - bc

Here: (2 x -2) - (3 x 1) = -4 - 3 = -7, so the determinant is -7.

To find A⁻¹:
  A⁻¹ = 1/(-7) [-2 -3; -1 2] = 1/7 [2 3; 1 -2]

Once A⁻¹ is known it solves AX = B for any right-hand side. So if B is [1; 4]:
  X = A⁻¹B = 1/7 [2 3; 1 -2] [1; 4] = 1/7 [14; -7] = [2; -1]
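The same system solved in MATLAB, as a minimal sketch:

A = [2 3; 1 -2];
B = [1; 4];
X = inv(A) * B    % -> [2; -1]
X = A \ B         % same answer; backslash is the preferred way to solve A*X = B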
Scalars, vectors and matrices in SPM

Scalar: a variable described by a single number, e.g. the intensity of a voxel in an MRI scan.

Vector: in physics, a vector is a variable described by a magnitude and a direction. Here we mean a column of numbers, e.g. the intensity of one voxel at different times, or of different voxels at the same time:

  v = [x1; x2; ... ; xn]

Matrix: a rectangular array of numbers (a set of column vectors) defined by its number of rows and columns:

  [x11 x12 ... x1n; ... ; xn1 ... xnn]
Vector Space and Matrix Rank
Vector space: a space that contains a set of vectors and all those that can be obtained by multiplying vectors by a real number and then adding them (linear combinations). In other words, because each column of a matrix can be represented by a vector, the ensemble of n column vectors defines a vector space for the matrix.

Rank of a matrix: the number of its vectors that are linearly independent of each other. So, if there is a linear relationship between the rows or columns of a matrix, the matrix will be rank-deficient (and its determinant will be zero).

For example, in the graph below there is a linear relationship between x1 and x2, so the determinant is zero and the vector space they define has only one dimension.

[Figure: vectors x1 and x2 plotted in the plane, lying along the same line]
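A minimal MATLAB sketch of rank deficiency (M is an illustrative matrix with linearly dependent columns):

M = [1 2; 2 4];   % the second column is twice the first (linear dependence)
rank(M)           % -> 1: the matrix is rank-deficient
det(M)            % -> 0, so M is not invertible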
Eigenvalues and eigenvectors
Eigenvalues are multipliers: numbers that represent how much linear transformation or stretching has taken place. An eigenvalue of a square matrix is a scalar, usually represented by the Greek letter λ (lambda).

Eigenvectors of a square matrix are non-zero vectors that, after being multiplied by the matrix, remain parallel to the original vector. For each eigenvector, the corresponding eigenvalue is the factor by which the eigenvector is scaled when multiplied by the matrix.

All eigenvalues and eigenvectors satisfy the equation Ax = λx for a given square matrix A, i.e. matrix A acts by stretching the vector x without changing its direction, so x is an eigenvector of A.

One can represent the eigenvectors of A as a set of orthogonal vectors representing different dimensions of the original matrix A.
(Important in Principal Component Analysis, PCA)
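A short sketch with MATLAB's eig (A is an illustrative diagonal matrix, chosen so the stretching is easy to see):

A = [2 0; 0 3];
[V, D] = eig(A)   % columns of V are eigenvectors, the diagonal of D holds the eigenvalues
A * V(:,1)        % equals D(1,1) * V(:,1): the eigenvector is only stretched, not rotated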
Matrix Representations of Neural Connections
We can create a mathematical model of the connections in a neural system.

Connections are either excitatory or inhibitory.

[Figure: an excitatory connection and an inhibitory connection, each drawn from an input neuron to an output neuron]


Matrix Representations of Neural Connections

Excitatory = makes it easier for the postsynaptic cell to fire.
Inhibitory = makes it harder for the postsynaptic cell to fire.

[Figure: neurons #1 and #2 both project to neuron #3; the connection from #1 has weight +1 (excitatory) and the connection from #2 has weight -1 (inhibitory)]

We can translate this information into a set of vectors (1-row matrices):

Input vector = (1 1), the activity of (#1 #2)
Weight vector = (1 -1), the connection weights of (#1 #2) onto #3

Activity of Neuron 3 = input x weight = (1 1) . (1 -1) = (1)(1) + (1)(-1) = 0

The excitation and inhibition cancel out! But real neural systems are more complicated than this!
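The same calculation as a minimal MATLAB sketch (variable names are illustrative):

inputVec  = [1  1];               % activity of neurons #1 and #2
weightVec = [1 -1];               % +1 excitatory, -1 inhibitory connection onto neuron #3
activity3 = inputVec * weightVec' % -> 0: excitation and inhibition cancel out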
How are matrices relevant to fMRI data?
Basics of MR Physics

Angular Momentum: Neutrons, protons and electrons spin about their axis.
The spinning of the nuclear particles produces angular momentum.

Certain nuclei exhibit magnetic properties. A proton has mass and a positive charge, and because it spins it produces a small magnetic field. This small magnetic field is referred to as the magnetic moment, a vector quantity with magnitude and direction, oriented in the same direction as the angular momentum.

Under normal circumstances these magnetic moments have no fixed orientation, so there is no overall magnetic field. However, when exposed to an external magnetic field (B0), the nuclei begin to align. To detect the net magnetisation signal, a second magnetic field (B1) is introduced, applied perpendicular to B0 at the resonant frequency.
How are matrices relevant to fMRI data?

Y = X . β + ε
Observed = Predictors * Parameters + Error
BOLD = Design Matrix * Betas + Error
Y

Y is a matrix of BOLD signals. Each column represents a single voxel sampled at successive time points (rows = time, values = signal intensity).

Each voxel is considered an independent observation, so we analyse individual voxels over time, not groups over space.
Response variable (Y): a single voxel sampled at successive time points. Each voxel is considered an independent observation.

Explanatory variables (X): these are assumed to be measured without error. They may be continuous, indicating the levels of an experimental factor.

Solve the equation for the parameters: they tell us how much of the BOLD signal is explained by X.
[Figure: the GLM in matrix form - the data Y (one row per scan) equals the design matrix X multiplied by the column of parameters b, plus the error e]

Y = X b + e
(Observed = Predictors x Parameters + Error)
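A minimal MATLAB sketch of this form of the GLM, with a made-up design matrix and data (all names and numbers below are illustrative, not SPM code):

nScans = 100;
boxcar = double(mod(0:nScans-1, 20) < 10)';   % a made-up on/off regressor
X = [boxcar ones(nScans, 1)];                 % design matrix: regressor plus a constant
bTrue = [2; 5];                               % made-up "true" parameter values
e = 0.5 * randn(nScans, 1);                   % random noise
Y = X * bTrue + e;                            % simulated BOLD time series for one voxel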
Pseudoinverse
In SPM, design matrices are NOT square matrices (they have more rows than columns, especially for fMRI), so X cannot simply be inverted.

There is then no unique exact solution: more than one set of parameter values can fit the data equally well.

SPM uses a mathematical trick called the pseudoinverse, an approximation in which the solution is constrained to be the one with the smallest b values (the minimum-norm least-squares solution).
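Continuing the toy example above, a minimal sketch of estimation with the pseudoinverse:

bHat = pinv(X) * Y    % minimum-norm least-squares estimate of the betas
% For a full-rank design matrix, X \ Y gives the same least-squares estimate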
How are matrices relevant to fMRI?

[Figure: the SPM analysis pipeline - the image time-series are realigned, smoothed (spatial filter) and normalised to an anatomical reference; together with the design matrix they enter the General Linear Model; the parameter estimates then feed statistical inference (RFT, p < 0.05) to give a Statistical Parametric Map]
In Practice
Estimate the MAGNITUDE of signal changes and the MR INTENSITY levels for each voxel at various time points.
The relationship between the experiment and the voxel changes is established.
The calculations require linear algebra and matrix manipulations.
SPM builds up the data as a matrix.
Manipulation of matrices enables the unknown values to be calculated.
Thank you!

Questions?
