
Digital Image Processing

Mathematical tools
Hamid Laga
Institut Telecom / Telecom Lille1
hamid@img.cs.titech.ac.jp
http://www.img.cs.titech.ac.jp/~hamid/
Mathematical tools
Linear algebra
Vectors and matrices
Inner product
Convolution
Eigendecomposition
Probability theory
Basics
Probability distributions
2
Mathematical tools
Objectives
This review aims to ease understanding of the image-processing topics; it is not a rigorous mathematical treatment.
3
Linear algebra - vectors
What is a vector?
A point in a multi-dimensional space, expressed with
respect to some coordinate system
Notation
4
Linear algebra - vectors
Scalar product (dot / inner product)
A product of two vectors a and b that results in a scalar
Formulation
5
a · b = |a| |b| cos(angle(a, b))
Linear algebra - vectors
Interpretations of the scalar product
Projection
Angle
Similarity
6
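These interpretations can be checked numerically; a small NumPy sketch (the vectors a and b are illustrative):

```python
import numpy as np

a = np.array([3.0, 0.0])
b = np.array([2.0, 2.0])

# Scalar product: a . b = |a| |b| cos(theta)
dot = np.dot(a, b)                                    # 3*2 + 0*2 = 6
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.degrees(np.arccos(cos_theta))              # angle between a and b: 45 degrees

# Projection of b onto a: (a . b / |a|^2) * a
proj = (dot / np.dot(a, a)) * a                       # [2, 0]
```

The sign of the dot product also serves as a similarity measure: positive for vectors pointing in similar directions, zero for orthogonal ones.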
Linear algebra - vectors
Norms
7
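As a sketch, the common vector norms can be evaluated with NumPy:

```python
import numpy as np

v = np.array([3.0, -4.0])

l1   = np.linalg.norm(v, 1)       # L1 norm: |3| + |-4| = 7
l2   = np.linalg.norm(v)          # L2 (Euclidean) norm: sqrt(9 + 16) = 5
linf = np.linalg.norm(v, np.inf)  # L-infinity norm: max(|3|, |-4|) = 4
```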
Linear algebra - vectors
Linear combination
A linear combination of v_1, v_2, …, v_n is an expression of the form
α_1 v_1 + α_2 v_2 + … + α_n v_n
where the α_i are scalars.
Linear dependence
A vector v is said to be linearly dependent on a set of
vectors v_1, v_2, …, v_n if and only if v can be written as a linear
combination of these vectors.
Otherwise, v is linearly independent of the set of vectors
v_1, v_2, …, v_n.
8
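Linear dependence can be tested numerically: a vector depends on a set exactly when adding it to the set does not increase the rank. A minimal NumPy sketch:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v  = 2.0 * v1 + 3.0 * v2   # a linear combination of v1 and v2

# v is linearly dependent on {v1, v2}: stacking it does not raise the rank
rank_without = np.linalg.matrix_rank(np.stack([v1, v2]))      # 2
rank_with    = np.linalg.matrix_rank(np.stack([v1, v2, v]))   # still 2

w = np.array([0.0, 0.0, 1.0])  # independent of v1, v2: the rank increases
rank_w = np.linalg.matrix_rank(np.stack([v1, v2, w]))         # 3
```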
Linear algebra - bases
9
Linear algebra - bases
Example
In 3D space the standard basis vectors are e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)
10
Linear algebra - bases
Coordinate system:
Origin + standard basis
11
Linear algebra - bases
Change of basis
12
Linear algebra - bases
13
Linear algebra - Matrices
Example rotation matrix
14
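A 2-D rotation matrix can be sketched as follows; rotation matrices are orthogonal (Rᵀ R = I) and have determinant 1:

```python
import numpy as np

def rotation_2d(theta):
    """2-D rotation matrix that rotates column vectors by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rotation_2d(np.pi / 2)     # 90-degree rotation
p = R @ np.array([1.0, 0.0])   # the x-axis maps to the y-axis: [0, 1]
```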
Linear algebra - Matrices
Definition
An m×n (read "m by n") matrix is a rectangular array of
entries or elements
m is the number of rows, n the number of columns
A is the identity matrix (I) if all its diagonal elements are
1, and the other (off-diagonal) elements are 0.
15
Linear algebra - Matrices
Rank of a matrix
16
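A quick way to inspect the rank of a matrix with NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row = 2 * first row
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])

rank_A = np.linalg.matrix_rank(A)   # 1: the rows are linearly dependent
rank_B = np.linalg.matrix_rank(B)   # 2: full rank
```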
Linear algebra - Matrices
Inverse of a matrix
17
Linear algebra - Matrices
Inverse of a matrix - properties
18
Linear algebra - Matrices
Determinant of a matrix
19
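The determinant and the inverse are linked: a square matrix is invertible exactly when its determinant is nonzero. A small sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

det = np.linalg.det(A)     # 2*1 - 1*1 = 1, nonzero, so A is invertible
A_inv = np.linalg.inv(A)   # [[1, -1], [-1, 2]]

identity_check = A @ A_inv   # should be the identity matrix
```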
Linear algebra - Eigendecomposition
Eigenvalues and eigenvectors
20
Linear algebra - Eigendecomposition
Eigendecomposition
21
A = V D V^-1
Linear algebra - Eigendecomposition
Eigendecomposition
Matrix inverse via eigendecomposition
If matrix A can be eigendecomposed as A = V D V^-1 and none of its
eigenvalues is zero, then A is nonsingular and its
inverse is given by
A^-1 = V D^-1 V^-1
Because D is a diagonal matrix, its inverse is obtained by taking the
reciprocal of each diagonal entry:
[D^-1]_ii = 1 / λ_i
22
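The decomposition and the inverse formula can be checked numerically; a sketch with NumPy (the matrix A is illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])       # symmetric, so its eigenvalues are real

eigvals, V = np.linalg.eig(A)    # the columns of V are the eigenvectors
D = np.diag(eigvals)

# Reconstruction: A = V D V^-1
A_rebuilt = V @ D @ np.linalg.inv(V)

# Inverse via eigendecomposition: A^-1 = V D^-1 V^-1
D_inv = np.diag(1.0 / eigvals)   # a diagonal matrix is inverted entrywise
A_inv = V @ D_inv @ np.linalg.inv(V)
```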
Linear algebra - Eigendecomposition
Properties
23
Mathematical tools
Probability theory
Basics
Probability distributions
27
Mathematical tools Probability theory
Probability = frequency, chance, …
Example - Tossing a coin
Tosses will produce Heads (H) or Tails (T)
If X is the outcome, then X ∈ {H, T}
The outcome X is random (we cannot predict it!)
X is called a random variable, which takes the value H or T
H and T are called events
(the event of obtaining Heads, and the event of obtaining Tails)
Let's denote by
n the total number of tosses
n_h the number of Heads obtained
n_t the number of Tails obtained (n = n_h + n_t)
Then n_h/n + n_t/n = 1
n_h/n is the relative frequency of the event H
n_t/n is the relative frequency of the event T
We call these relative frequencies the
probabilities of the events, denoted P(X)
30
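The relative-frequency view of probability can be simulated; a sketch of the coin-toss example in plain Python:

```python
import random

random.seed(0)                   # fixed seed so the run is reproducible
n = 100_000                      # total number of tosses
n_h = sum(1 for _ in range(n) if random.random() < 0.5)  # count Heads
n_t = n - n_h                    # count Tails

p_h = n_h / n   # relative frequency of H, close to 0.5 for a fair coin
p_t = n_t / n   # relative frequency of T
# the relative frequencies always sum to 1
```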
Mathematical tools Probability theory
Properties
The probability of an event is a number
between 0 and 1 (P(X) ∈ [0, 1])
P(X ∪ Y) = P(X or Y) = P(X) + P(Y) − P(X ∩ Y)
P(X ∩ Y) = P(X, Y)
If X ∩ Y is empty, then X and Y are mutually
exclusive
31
Mathematical tools Probability theory
Conditional probability
The probability of an event X given another event Y: P(X | Y)
X and Y are statistically independent if and only
if P(X | Y) = P(X)
For N events to be statistically independent,
it must be true that, for all pairwise
combinations, P(X_i, X_j) = P(X_i) P(X_j)
(and analogously for every larger subset)
32
Mathematical tools Probability theory
Bayes' theorem
Simple but very powerful:
P(X | Y) = P(Y | X) P(X) / P(Y)
33
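As an illustration of why the theorem is powerful, a classic diagnostic-test calculation (all numbers are hypothetical, chosen only for the example):

```python
# Hypothetical numbers, chosen only for illustration.
p_d = 0.01                # prior P(D): 1% of the population has the disease
p_pos_given_d = 0.99      # sensitivity P(+ | D)
p_pos_given_not_d = 0.05  # false-positive rate P(+ | not D)

# Total probability: P(+) = P(+ | D) P(D) + P(+ | not D) P(not D)
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Bayes' theorem: P(D | +) = P(+ | D) P(D) / P(+)
p_d_given_pos = p_pos_given_d * p_d / p_pos
# about 0.167: even after a positive test, the disease is still unlikely
```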
Mathematical tools Probability theory
Continuous random variables
The random variable can take any real value
We are interested in the probability that the
variable lies in a specific range, in particular
P(a < X ≤ b) = F(b) − F(a)
F is the cumulative distribution function
The density function is p(x) = dF(x)/dx
34
Mathematical tools Probability theory
Expected value of a variable x (average /
mean)
Continuous: E[x] = ∫ x p(x) dx
Discrete: E[x] = Σ_i x_i P(x_i)
35
Mathematical tools Probability theory
Variance of a variable x
Continuous: σ² = ∫ (x − E[x])² p(x) dx
Discrete: σ² = Σ_i (x_i − E[x])² P(x_i)
σ is called the standard deviation
Normalized random variable: z = (x − E[x]) / σ
Its mean is 0 and its variance is 1.
37
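Mean, variance, standard deviation, and normalization can all be estimated from samples; a NumPy sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)  # samples with mean 5, sigma 2

mean  = x.mean()   # estimate of the expected value, close to 5
var   = x.var()    # estimate of the variance, close to 4
sigma = x.std()    # standard deviation, close to 2

# Normalized random variable: subtract the mean, divide by sigma
z = (x - mean) / sigma   # z has mean 0 and variance 1
```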
Mathematical tools Probability theory
The n-th central moment
Continuous: μ_n = ∫ (x − E[x])ⁿ p(x) dx
Discrete: μ_n = Σ_i (x_i − E[x])ⁿ P(x_i)
38
Mathematical tools Probability theory
Mean, variance, and moments are used throughout image
processing: histogram processing, segmentation, and description.
Moments are used to characterize the probability density function
of a random variable.
The second, third, and fourth central moments are intimately
related to the shape of the probability density function of a
random variable:
the second central moment (the variance) is a measure of the
spread of the values of a random variable about its mean value,
the third central moment is a measure of the skewness (bias to the
left or right) of the values of x about the mean value,
the fourth central moment is a relative measure of flatness.
In general, knowing all the moments of a density specifies
that density.
39
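A sketch of how central moments capture shape: the third central moment is near zero for symmetric data and positive for right-skewed data (the sample distributions are illustrative):

```python
import numpy as np

def central_moment(x, n):
    """Estimate the n-th central moment E[(x - mean)^n] from samples."""
    return np.mean((x - x.mean()) ** n)

rng = np.random.default_rng(1)
sym    = rng.normal(size=100_000)        # symmetric distribution
skewed = rng.exponential(size=100_000)   # right-skewed distribution

m2      = central_moment(sym, 2)      # 2nd central moment = variance (spread)
m3_sym  = central_moment(sym, 3)      # near 0: no skew
m3_skew = central_moment(skewed, 3)   # clearly positive: skewed to the right
```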
Mathematical tools Probability theory
Gaussian (normal) distribution
p(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²))
40
Mathematical tools Probability theory
Gaussian (normal) distribution
41
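The Gaussian density can be written directly from its formula; a minimal sketch:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of the normal distribution N(mu, sigma^2)."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# The peak of the standard normal, at x = mu = 0, is 1/sqrt(2 pi) ~ 0.3989
peak = gaussian_pdf(0.0, mu=0.0, sigma=1.0)
```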
Mathematical tools Probability theory
Several random variables / multivariate
Vector random variable
42
Mathematical tools Probability theory
Covariance between two random variables X and Y
Cov(X, Y) = E[(X − E(X))(Y − E(Y))]
Cov(X, X) = Var(X)
Covariance matrix
43
Mathematical tools Probability theory
Covariance matrix
Covariance matrices are real and symmetric.
The elements along the main diagonal of C are
the variances of the elements of x, i.e. c_ii = σ²_{x_i}.
When all the elements of x are uncorrelated or
statistically independent,
c_ij = 0 for i ≠ j, and
the covariance matrix becomes a diagonal matrix.
If all the variances are equal,
the covariance matrix becomes proportional to the identity
matrix, with the constant of proportionality being the variance
of the elements of x.
44
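These properties can be observed on sampled data; a NumPy sketch (the variables are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
y = 2.0 * x + rng.normal(size=100_000)   # y depends on x
z = rng.normal(size=100_000)             # independent of x and y

C = np.cov(np.stack([x, y, z]))   # 3x3 covariance matrix, one row per variable

# C is real and symmetric; the diagonal entries are the variances
symmetric = np.allclose(C, C.T)
# Cov(x, z) is near 0 since x and z are independent
cov_xz = C[0, 2]
```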
Mathematical tools Probability theory
Correlation
Measures the relation between two random
variables X and Y
Corr(X, Y) ∈ [−1, 1]
Corr = 1: highly correlated
Corr = 0: not correlated
Corr = −1: inversely correlated
45
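The three regimes of the correlation coefficient can be reproduced numerically; a sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100_000)

corr_pos  = np.corrcoef(x, 3.0 * x + 1.0)[0, 1]              # 1: exact linear relation
corr_neg  = np.corrcoef(x, -x)[0, 1]                         # -1: inverse relation
corr_zero = np.corrcoef(x, rng.normal(size=100_000))[0, 1]   # near 0: unrelated
```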
Mathematical tools Probability theory
Correlation
Example
Industry (highly correlated, ≈ 1)
X is the price of some raw material used in the car industry
Y is the price of one car
Stock market (not correlated, ≈ 0)
X is the price of one share of Google, Y is the rainfall
The correlation should be 0
Stock market (inversely correlated, ≈ −1)
X is the oil price
Y is the number of car sales in France
Statistically independent ⇒ uncorrelated (Corr = 0)
46
Take home points
Linear algebra
Dot product
Eigendecomposition (eigenvalues, eigenvectors)
Probability theory
Gaussian distribution
Bayes theorem
Covariance / correlation
47
Other topics you should read
Principal Component Analysis
48