Pixel Statistics / Color Models:
- RGB, nRGB
- HSV/I/L
- YCrCb
- Classification

Appearance-Based Methods:
- PCA
- ICA, FLD
- Non-negative Matrix Factorization, Sparse Matrix Factorization
- Multilinear PCA, Multilinear ICA
Statistical Modeling

- Data collection.
- Data analysis: organize & summarize data to bring out their main features and clarify their underlying structure.
- Inference and decision theory: extract relevant information from collected data and use it as a guide for further action.

[Figure: image ensemble organized by People, Views, Illuminations, Expressions]

Today: Maximum Likelihood
PART I: 2D Vision
Definitions

- Data Collection
- Data Analysis
- Variables: response variables are directly measurable; they measure the outcome of a study.

Explaining Association

An association between two variables x and y can reflect many types of relationship:
- association
- causality
Models:
- Linear Models
- Multilinear Models
- Second-order methods: covariance
- Linear Models: meaningful representation
- Discriminant Models
Images

Image Representation

An image with r rows and k columns of pixels is a point in $\mathbb{R}^{kr}$; each pixel value lies in $[0, 255]$. Scanning the image row by row flattens it into a vector:

$$I = \begin{bmatrix} i_1 & i_2 & \cdots & i_k \\ i_{k+1} & & & \\ \vdots & & \ddots & \\ i_{k(r-1)+1} & & \cdots & i_{kr} \end{bmatrix} \quad\longrightarrow\quad \mathbf{i} = \begin{bmatrix} i_1 \\ i_2 \\ \vdots \\ i_{kr} \end{bmatrix}$$

In the standard basis the vector decomposes as

$$\mathbf{i} = i_1 \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} + i_2 \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix} + \cdots + i_{kr} \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}$$

[Figure: images plotted as points in pixel space, one axis per pixel, values 0-255]
Image Representation

Collecting the standard basis vectors as the columns of a basis matrix B, the image is B times a vector of coefficients c:

$$\mathbf{i} = \begin{bmatrix} i_1 \\ i_2 \\ \vdots \\ i_{kr} \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}}_{\text{Basis Matrix, } B} \underbrace{\begin{bmatrix} i_1 \\ i_2 \\ \vdots \\ i_{kr} \end{bmatrix}}_{\text{vector of coefficients, } \mathbf{c}} \qquad\Longrightarrow\qquad \mathbf{i} = B\mathbf{c}$$
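A minimal numpy sketch of this identity-basis view (the image size and 8-bit value range are illustrative assumptions):

```python
import numpy as np

# A toy r x k image with values in [0, 255]; the shape is an assumption.
r, k = 4, 3
I = np.random.randint(0, 256, size=(r, k))

# Flatten the image row by row into a point i in R^(kr).
i = I.reshape(-1)          # shape (k*r,)

# With the standard basis, the basis matrix B is the identity,
# so the coefficient vector c equals the pixel vector itself.
B = np.eye(k * r)
c = B.T @ i
assert np.allclose(B @ c, i)
```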
Toy Example

With 3-pixel images, each image $\mathbf{i}_n$ is a point in $\mathbb{R}^3$:

$$\mathbf{i}_n = i_{1n}\begin{bmatrix}1\\0\\0\end{bmatrix} + i_{2n}\begin{bmatrix}0\\1\\0\end{bmatrix} + i_{3n}\begin{bmatrix}0\\0\\1\end{bmatrix} = \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}i_{1n}\\i_{2n}\\i_{3n}\end{bmatrix}$$

[Figure: image points scattered in 3D pixel space, axes pixel 1, pixel 2, pixel 3]
Basis Matrix, B

The standard basis is not the only choice. Collect the images into a data matrix D and their coefficients into a coefficient matrix C; for any invertible basis B,

$$D = [\mathbf{i}_1\ \mathbf{i}_2\ \cdots\ \mathbf{i}_N] = B\,[\mathbf{c}_1\ \mathbf{c}_2\ \cdots\ \mathbf{c}_N] = BC, \qquad C = B^{-1}D$$

Toy example with a new basis for 2-pixel images, swapping the standard basis vectors:

$$\mathbf{i}_n = \begin{bmatrix} i_{1n} \\ i_{2n} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\mathbf{c}_n = B\mathbf{c}_n$$

[Figure: the same data points expressed in the new basis]
Toy Example: Recognition

For a new image, solve for its coefficients in the chosen basis:

$$\mathbf{c}_{new} = B^{-1}\mathbf{i}_{new} = \begin{bmatrix} .5 & 0 & .5 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} i_{1,new} \\ i_{2,new} \\ i_{3,new} \end{bmatrix}$$

Recognition then compares coefficient vectors, e.g. with the Euclidean distance between a labelled vector $\mathbf{y}_L$ and the query $\mathbf{y}$:

$$d = \lVert \mathbf{y}_L - \mathbf{y} \rVert = \sqrt{\sum_{c=1}^{N} (y_{Lc} - y_c)^2}$$

[Figure: query and gallery points in 3D pixel space]
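A hedged sketch of this recognition step, using a 3-pixel basis whose pseudo-inverse is exactly the 2x3 matrix on the slide (the gallery vectors are made-up placeholders):

```python
import numpy as np

# Basis columns [1,0,1] and [0,1,0]; pinv(B) == [[.5, 0, .5], [0, 1, 0]].
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0]])
B_inv = np.linalg.pinv(B)

i_new = np.array([4.0, 7.0, 4.0])        # query image (placeholder values)
c_new = B_inv @ i_new                    # coefficient vector of the query

# Gallery of labelled coefficient vectors (placeholder values).
gallery = {"person A": np.array([4.0, 7.0]),
           "person B": np.array([9.0, 1.0])}

# Euclidean distance d = ||y_L - y||; pick the closest labelled vector.
label = min(gallery, key=lambda L: np.linalg.norm(gallery[L] - c_new))
print(label)                             # -> person A
```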
PCA: Theory

Problem formulation.
- Input: $X = [\mathbf{x}_1 \cdots \mathbf{x}_N]$, points in d-dimensional space.
- Solve for: B, a $d \times m$ basis matrix ($m \ll d$), giving coefficients $C = [\mathbf{c}_1 \cdots \mathbf{c}_N] = B^T[\mathbf{x}_1 \cdots \mathbf{x}_N]$, i.e. $c_k = \mathbf{b}_k^T\mathbf{x} = \sum_i b_{ik} x_i$.
- Look for B such that correlation between coefficients is minimized: cov(C) is diagonal.

Sample covariance and correlation of two variables:

$$\mathrm{cov}(x, y) = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y}), \qquad \mathrm{cor}(x, y) = \frac{\mathrm{cov}(x, y)}{\sigma_x \sigma_y}$$

Total scatter of the data:

$$S_T = \frac{1}{N-1}\sum_{n=1}^{N}(\mathbf{i}_n - \boldsymbol{\mu})(\mathbf{i}_n - \boldsymbol{\mu})^T = \frac{1}{N-1}(D - M)(D - M)^T$$

where $M = [\boldsymbol{\mu}\ \cdots\ \boldsymbol{\mu}]$ and $\boldsymbol{\mu} = \frac{1}{N}(\mathbf{i}_1 + \mathbf{i}_2 + \cdots + \mathbf{i}_N)$.

With centered coefficients $C = B^T(D - M)$, the covariance of the coefficients is

$$\mathrm{cov}(C) = \frac{1}{N-1}CC^T = B^T\frac{(D - M)(D - M)^T}{N-1}B = B^T S_T B$$
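A small numpy check of these formulas (random data; the sizes are arbitrary assumptions): the sum-of-outer-products form and the matrix form of $S_T$ agree, and choosing B as the eigenvectors of $S_T$ makes cov(C) diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 5, 200
D = rng.normal(size=(d, N))              # data matrix, one sample per column

mu = D.mean(axis=1, keepdims=True)       # sample mean
M = np.tile(mu, (1, N))                  # M = [mu ... mu]

# S_T via the scatter (outer-product) form and via the matrix form.
S_sum = sum(np.outer(D[:, n] - mu[:, 0], D[:, n] - mu[:, 0])
            for n in range(N)) / (N - 1)
S_mat = (D - M) @ (D - M).T / (N - 1)
assert np.allclose(S_sum, S_mat)

# With B = eigenvectors of S_T, cov(C) = B' S_T B is diagonal.
eigvals, B = np.linalg.eigh(S_mat)
C = B.T @ (D - M)
covC = C @ C.T / (N - 1)
assert np.allclose(covC, np.diag(np.diag(covC)), atol=1e-10)
```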
Algebraic Derivation of b1

Write $\mathbf{b}_k = [b_{1k} \cdots b_{dk}]^T$. We want $\mathrm{var}[c_1] = \mathbf{b}_1^T S \mathbf{b}_1$ to be maximal subject to $\mathbf{b}_1^T \mathbf{b}_1 = 1$. Introduce a Lagrange multiplier $\lambda$:

$$L = \mathbf{b}_1^T S \mathbf{b}_1 - \lambda(\mathbf{b}_1^T \mathbf{b}_1 - 1)$$

$$\frac{\partial L}{\partial \mathbf{b}_1} = 2S\mathbf{b}_1 - 2\lambda\mathbf{b}_1 = 0 \quad\Longrightarrow\quad (S - \lambda I)\mathbf{b}_1 = 0$$

Therefore $\mathbf{b}_1$ is an eigenvector of $S$ corresponding to eigenvalue $\lambda = \lambda_1$; since $\mathrm{var}[c_1] = \mathbf{b}_1^T S \mathbf{b}_1 = \lambda_1$, the variance is maximized by taking the eigenvector with the largest eigenvalue.
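A quick numeric sanity check of this result (a random symmetric PSD matrix stands in for S; sizes are assumptions): no random unit direction beats the top eigenvector.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
S = A @ A.T                               # symmetric PSD "covariance"

eigvals, eigvecs = np.linalg.eigh(S)      # eigenvalues in ascending order
b1 = eigvecs[:, -1]                       # eigenvector of largest eigenvalue

# var[c1] = b1' S b1 equals the largest eigenvalue ...
assert np.isclose(b1 @ S @ b1, eigvals[-1])

# ... and no random unit vector achieves higher variance.
for _ in range(1000):
    b = rng.normal(size=4)
    b /= np.linalg.norm(b)
    assert b @ S @ b <= eigvals[-1] + 1e-9
```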
Algebraic Derivation of b2

Maximize $\mathrm{var}[c_2]$ subject to $\mathbf{b}_2^T\mathbf{b}_2 = 1$ and $\mathbf{b}_2^T\mathbf{b}_1 = 0$:

$$L = \mathbf{b}_2^T S \mathbf{b}_2 - \lambda(\mathbf{b}_2^T \mathbf{b}_2 - 1) - \phi\,(\mathbf{b}_2^T \mathbf{b}_1)$$

$$\frac{\partial L}{\partial \mathbf{b}_2} = 2S\mathbf{b}_2 - 2\lambda\mathbf{b}_2 - \phi\mathbf{b}_1 = 0$$

which yields the eigenvector of $S$ with the second-largest eigenvalue, and so on for the remaining basis vectors.

Data Loss

Keeping $m < d$ basis vectors approximates each point as $\mathbf{x}_i \approx B\mathbf{c}_i + \boldsymbol{\mu}$ with $\mathbf{c}_i = B_{opt}^T(\mathbf{x}_i - \boldsymbol{\mu})$ (e.g. 2D data projected onto the first axis becomes 1D data). The total variance $\sum_{j=1}^{d}\lambda_j$ splits into the retained part $\sum_{j=1}^{m}\lambda_j$ and the discarded part $\sum_{j=m+1}^{d}\lambda_j$, which is the data loss.

[Figure: 2D data collapsed onto the first principal axis as 1D data]
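A short check of the data-loss identity (toy sizes are assumptions): the mean squared reconstruction error from keeping m components equals the sum of the discarded eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
d, N, m = 6, 500, 2
D = rng.normal(size=(d, N)) * np.arange(1, d + 1)[:, None]  # anisotropic data

mu = D.mean(axis=1, keepdims=True)
S = (D - mu) @ (D - mu).T / (N - 1)
eigvals, B = np.linalg.eigh(S)
B_opt = B[:, ::-1][:, :m]                 # top-m eigenvectors

# Reconstruct: x ~ B c + mu with c = B_opt' (x - mu).
C = B_opt.T @ (D - mu)
D_hat = B_opt @ C + mu

loss = np.sum((D - D_hat) ** 2) / (N - 1)
assert np.isclose(loss, eigvals[::-1][m:].sum())   # = sum of discarded lambdas
```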
SVD: definition

Any non-square matrix $D \in \mathbb{R}^{d \times N}$ factors as

$$D = U\Sigma V^T, \qquad U^T U = V V^T = I, \qquad \Sigma = \begin{bmatrix}\sigma_1 & & \\ & \ddots & \\ & & \sigma_q\end{bmatrix}, \quad q = \min(N, d)$$

EVD vs. SVD

The scatter matrix $C_x = DD^T$ is square, so it has an eigendecomposition $C_x = U C_y U^T$. Substituting the SVD of $D$:

$$C_x = DD^T = (U\Sigma V^T)(V\Sigma^T U^T) = U(\Sigma\Sigma^T)U^T$$

so $C_y = \Sigma\Sigma^T = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_d^2)$ is square ($d \times d$): the eigenvectors of $C_x$ are the left singular vectors of $D$, and its eigenvalues are the squared singular values.
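A numeric sketch of this EVD/SVD relationship (a random non-square D is an assumption):

```python
import numpy as np

rng = np.random.default_rng(3)
d, N = 4, 10
D = rng.normal(size=(d, N))               # non-square data matrix

U, s, Vt = np.linalg.svd(D, full_matrices=False)
assert np.allclose(U @ np.diag(s) @ Vt, D)

# C_x = D D' = U (Sigma Sigma') U', so the eigenvalues of C_x are sigma^2.
Cx = D @ D.T
eigvals = np.linalg.eigvalsh(Cx)[::-1]    # descending order
assert np.allclose(eigvals, s ** 2)
```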
PCA: Conclusion

Consider a set of images where each image is made up of 3 pixels and pixel 1 has the same value as pixel 3 for all images:

$$\mathbf{i}_n = [i_{1n}\ i_{2n}\ i_{3n}]^T \quad \text{s.t.} \quad i_{1n} = i_{3n}, \qquad 1 \le n \le N$$

PCA chooses axes in the directions of highest variability of the data (maximum scatter); here the data varies in only two directions, so the third axis carries no information and can be dropped.

[Figure: the data points lie in a plane of the 3D pixel space]
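A sketch of this 3-pixel example (the synthetic images are assumptions): because pixel 1 always equals pixel 3, one eigenvalue of the scatter matrix is zero and two axes suffice.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100
p1 = rng.uniform(0, 255, N)
p2 = rng.uniform(0, 255, N)
D = np.vstack([p1, p2, p1])               # pixel 1 == pixel 3 for every image

mu = D.mean(axis=1, keepdims=True)
S = (D - mu) @ (D - mu).T / (N - 1)
eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]
print(eigvals)                            # third eigenvalue is numerically zero
assert np.isclose(eigvals[2], 0.0, atol=1e-6)
```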
PCA: Dimensionality Reduction

$$[\mathbf{i}_1\ \mathbf{i}_2\ \cdots\ \mathbf{i}_N] = B\,[\mathbf{c}_1\ \mathbf{c}_2\ \cdots\ \mathbf{c}_N]$$

- $D = USV^T$ (SVD of the data matrix $D$)
- set $B = U$
- $\mathbf{c}_{new} = B^T\mathbf{i}_{new}$, since $B$ is orthonormal and therefore $B^{-1} = B^T$

[Figure: data scatter with the 1st and 2nd principal axes drawn in pixel space]
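The recipe above as a brief numpy sketch (the data, m, and mean-centering convention are assumptions):

```python
import numpy as np

def pca_basis(D, m):
    """SVD of the centered data matrix D (d x N); return mean and top-m basis B."""
    mu = D.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(D - mu, full_matrices=False)
    return mu, U[:, :m]                   # set B = U (first m columns)

rng = np.random.default_rng(5)
D = rng.normal(size=(50, 200))            # 50-pixel images, 200 samples (assumed)
mu, B = pca_basis(D, m=10)

i_new = rng.normal(size=(50, 1))
c_new = B.T @ (i_new - mu)                # c_new = B' i_new; B^-1 = B' here
i_rec = B @ c_new + mu                    # low-dimensional reconstruction
```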
Linear Representation: Eigenimages

Each image is a linear combination of eigenimages (the columns of $U$): $\mathbf{d}_i = U\mathbf{c}_i$. The running sum

$$c_1\mathbf{u}_1 + c_2\mathbf{u}_2 + c_3\mathbf{u}_3 + \cdots$$

reconstructs the image progressively, sharpening as terms are added (e.g. 1 term, 3 terms, 9 terms, 28 terms). The eigenimages are the eigenvectors of the total scatter

$$S_T = \frac{1}{N-1}\sum_{n=1}^{N}(\mathbf{i}_n - \boldsymbol{\mu})(\mathbf{i}_n - \boldsymbol{\mu})^T$$

[Figure: running-sum reconstructions of a face with 1, 3, 9, and 28 eigenimage terms]
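A sketch of the running-sum reconstruction (synthetic "images" stand in for the face data): adding eigenimage terms one by one shrinks the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(6)
d, N = 64, 120                            # 64-pixel images, 120 samples (assumed)
D = rng.normal(size=(d, N))

mu = D.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(D - mu, full_matrices=False)

i = D[:, [0]]                             # image to reconstruct
c = U.T @ (i - mu)                        # full coefficient vector

errors = []
for terms in (1, 3, 9, 28):
    # Running sum: keep only the first `terms` eigenimage contributions.
    i_hat = mu + U[:, :terms] @ c[:terms]
    errors.append(np.linalg.norm(i - i_hat))
assert errors == sorted(errors, reverse=True)   # error shrinks with more terms
```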
PCA Classifier

EigenImages: Basis Vectors. For a query $\mathbf{y}$, measure the distance from face space, i.e. the residual after projecting onto the face basis $U_f$:

$$d_f(\mathbf{y}) = \lVert \mathbf{y} - U_f U_f^T \mathbf{y} \rVert$$

Decision rule: with $d_n(\mathbf{y})$ the corresponding distance from the non-face space, label the query a face ($L = 1$) when $d_n(\mathbf{y}) > d_f(\mathbf{y})$, i.e. it lies closer to the face space; otherwise $L = 0$.

[Figure: person 1 and person 2 clusters along the 1st and 2nd axes in pixel space]
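A minimal sketch of this distance-from-face-space measure (the orthonormal basis and queries are synthetic assumptions; in practice $U_f$ holds the top eigenfaces):

```python
import numpy as np

rng = np.random.default_rng(7)
d, m = 100, 8
# Orthonormal stand-in for the face-space basis U_f.
U_f, _ = np.linalg.qr(rng.normal(size=(d, m)))

def dist_from_face_space(y, U_f):
    """d_f(y) = || y - U_f U_f' y ||, the residual after projecting onto U_f."""
    return np.linalg.norm(y - U_f @ (U_f.T @ y))

y_face = U_f @ rng.normal(size=m)         # lies in face space -> d_f ~ 0
y_other = rng.normal(size=d)              # generic point -> d_f large

print(dist_from_face_space(y_face, U_f))  # ~1e-15
print(dist_from_face_space(y_other, U_f)) # clearly non-zero
```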
Face Detection/Recognition

Consider a set of images of 2 people under a fixed viewpoint & N lighting conditions; each image is made up of 2 pixels.

- Reduce dimensionality by throwing away the axis along which the data varies the least.
- Each image is represented by one coefficient vector; the coefficient associated with the 1st basis vector is used for classification.
- Each person is displayed in N images and therefore has N coefficient vectors.
- Possible classifier: Mahalanobis distance (see the sketch after this list).

[Figure: the two persons' image clusters with 1st and 2nd principal axes in 2-pixel space]
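A hedged sketch of the Mahalanobis classifier on per-person coefficient vectors (the two Gaussian clusters are assumptions standing in for the N lighting conditions):

```python
import numpy as np

rng = np.random.default_rng(8)
N = 50
# N coefficient vectors per person (synthetic clusters).
person1 = rng.normal([0.0, 0.0], [1.0, 0.2], size=(N, 2))
person2 = rng.normal([5.0, 0.0], [1.0, 0.2], size=(N, 2))

def mahalanobis(y, cluster):
    """Distance of y from the cluster, scaled by the cluster's covariance."""
    mu = cluster.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(cluster.T))
    diff = y - mu
    return np.sqrt(diff @ Sinv @ diff)

y = np.array([4.2, 0.1])                  # query coefficient vector
label = min(("person 1", person1), ("person 2", person2),
            key=lambda item: mahalanobis(y, item[1]))[0]
print(label)                              # -> person 2
```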
Face Localization

Scan and classify using image windows at different positions and scales.

[Figure: detection pipeline; face examples and non-face examples feed off-line training for multiple scales, then at run time feature extraction and the classifier produce a confidence score (e.g. Conf. = 5)]
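A compact sketch of the scan-and-classify loop (window size, stride, scales, acceptance threshold, and the classifier stub are all assumptions):

```python
import numpy as np

def scan(image, classify, win=19, stride=4, scales=(1.0, 0.75, 0.5)):
    """Slide a win x win window over the image at several scales;
    return (x, y, scale, confidence) for windows the classifier accepts."""
    detections = []
    for s in scales:
        h, w = int(image.shape[0] * s), int(image.shape[1] * s)
        small = image[:h, :w]                       # crop as a stand-in for real resampling
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                conf = classify(small[y:y + win, x:x + win])
                if conf > 0:                        # acceptance threshold (assumed)
                    detections.append((x, y, s, conf))
    return detections

# Dummy classifier stub; e.g. a thresholded distance-from-face-space would go here.
dets = scan(np.zeros((100, 100)), classify=lambda w: -1.0)
print(len(dets))                                    # -> 0 with the dummy stub
```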