
Projection and Some of Its Applications

Mohammed Nasser
Professor, Dept. of Statistics, RU, Bangladesh
Email: mnasser.ru@gmail.com

"The use of matrix theory is now widespread. ... are essential in ... modern treatment of univariate and multivariate statistical methods."

--- C. R. Rao

Contents
Oblique and Orthogonal Projection in R^2
Orthogonal Projection into a Line in R^n
Inner Product Space
Projection into a Subspace
Gram-Schmidt Orthogonalization
Projection and Matrices
Projection in Infinite-dimensional Space
Projection in Multivariate Methods

Mathematical Concepts

Covariance, Variance, Projection

1. Oblique and Orthogonal Projection in R^2

(1, 1)^T and (0, 1)^T are two independent vectors in R^2.

V1 = { l (1, 1)^T | l ∈ R }   and   V2 = { m (0, 1)^T | m ∈ R }

are two one-dimensional subspaces of R^2, with

R^2 = V1 + V2,   V1 ∩ V2 = {0},   hence R^2 = V1 ⊕ V2.

1. Oblique and Orthogonal Projection in R^2

[Figure: the two lines spanned by (1, 1)^T and (0, 1)^T, labelled V1 and V2]

1. Oblique and Orthogonal Projection in R^2

Since (1, 1)^T and (0, 1)^T are independent, any x = (x1, x2)^T in R^2 can be written as

x = a1 (1, 1)^T + a2 (0, 1)^T,

that is,

(x1, x2)^T = [1 0; 1 1] (a1, a2)^T,   so   (a1, a2)^T = [1 0; 1 1]^(-1) (x1, x2)^T = (x1, x2 - x1)^T.

Hence

(x1, x2)^T = x1 (1, 1)^T + (x2 - x1) (0, 1)^T.

1. Oblique and Orthogonal Projection in R^2

We define L: R^2 → R^2 as

L((x1, x2)^T) = x1 (1, 1)^T.

We can easily show that L is a linear map.

This linear map is called a projection (here an oblique one).

The vector x is projected onto the space generated by (1, 1)^T along the space generated by (0, 1)^T.
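
A minimal NumPy sketch of this oblique projection (the test vector x is an arbitrary choice, not from the slides):

import numpy as np

# Columns are the basis vectors (1,1)^T and (0,1)^T.
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])

x = np.array([3.0, 5.0])

a = np.linalg.solve(B, x)     # coordinates (a1, a2) = (x1, x2 - x1)
proj = a[0] * B[:, 0]         # keep only the component along (1,1)^T,
                              # i.e. project along span{(0,1)^T}
print(a)                      # [3. 2.]
print(proj)                   # [3. 3.] = x1 * (1,1)^T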

1. Oblique and Orthogonal Projection in R^2

[Figure: the two orthogonal lines spanned by (1, 1)^T and (-1, 1)^T, labelled V1 and V2]

1. Oblique and Orthogonal Projection in R^2

Let us consider (1, 1)^T and (-1, 1)^T. They are two orthogonal (hence independent) vectors in R^2, so

(x1, x2)^T = a1 (1, 1)^T + a2 (-1, 1)^T.

In this case we can find the values of a1 and a2 without computing an inverse. Taking the inner product of both sides with v = (1, 1)^T gives

a1 = x^T v / v^T v = (x1 + x2) / 2,

and similarly a2 = (x2 - x1) / 2.

1. Oblique and Orthogonal Projection in R^2

We define L: R^2 → R^2 as

L(x) = ((x1 + x2) / 2) (1, 1)^T = (x^T v / v^T v) v,   with v = (1, 1)^T.

This linear map is called an orthogonal projection.

The vector x is projected onto the space generated by (1, 1)^T along the space generated by (-1, 1)^T.
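
A matching NumPy sketch of this orthogonal projection onto span{(1,1)^T} (again with an arbitrary test vector):

import numpy as np

v = np.array([1.0, 1.0])      # direction of the line we project onto
x = np.array([3.0, 5.0])

proj = (x @ v) / (v @ v) * v  # (x.v / v.v) v = ((x1 + x2)/2) (1,1)^T
print(proj)                   # [4. 4.]
print((x - proj) @ v)         # 0.0 -- the residual is orthogonal to v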

Projections

[Figure: projecting (2, 2, 2)^T onto the xy-plane, and b = (2, 2)^T onto a = (1, 0)^T]

[1 0 0; 0 1 0; 0 0 0] (2, 2, 2)^T = (2, 2, 0)^T

c = (a^T b / a^T a) a = (2, 0)^T

1. Orthogonal Projection Into a Line

Definition 1.1: Orthogonal Projection
The orthogonal projection of v into the line spanned by a nonzero s is the vector

proj_s(v) = ((v · s) / (s · s)) s.

If s has unit length, then proj_s(v) = (v · s) s and its length is |v · s|.

Example 1.1: Orthogonal projection of the vector (2, 3)^T into the line y = 2x.
Any point on the line has the form (x, 2x)^T = x (1, 2)^T, so we can take s = (1, 2)^T:

proj_s((2, 3)^T) = ((2)(1) + (3)(2)) / (1^2 + 2^2) (1, 2)^T = (8/5) (1, 2)^T = (8/5, 16/5)^T.

Example 1.2: Orthogonal projection of a general vector in R^3 into the y-axis.
Here s = e2 = (0, 1, 0)^T, so

proj_{e2}((x, y, z)^T) = (((x, y, z) · (0, 1, 0)) / ((0, 1, 0) · (0, 1, 0))) (0, 1, 0)^T = y (0, 1, 0)^T = (0, y, 0)^T.

Example 1.3: Project = Discard orthogonal components
A railroad car left on an east-west track without its brake is pushed by a wind blowing toward the northeast at fifteen miles per hour; what speed will the car reach?

The wind velocity is w = (15/√2, 15/√2)^T and the car can only move along e1 = (1, 0)^T:

v = proj_{e1}(w) = (15/√2) (1, 0)^T,   speed = ||v|| = 15/√2 ≈ 10.6 miles per hour.

Example 1.4: Nearest Point

[Figure: the point A = (a, b), the line through B = (c, d), and the nearest point C = k(c, d) on that line]

Let A = (a, b) and B = (c, d) be two vectors. We have to find the vector on the line spanned by B that is nearest to A, that is, the value of k for which

F(k) = ((a, b) - k(c, d))^T ((a, b) - k(c, d))

is minimum, i.e., the length of AC is minimum.
Setting the derivative to zero, F'(k) = -2 (c, d) · ((a, b) - k(c, d)) = 0, gives k = (A · B) / (B · B), so the nearest point is

C = kB = ((A · B) / (B · B)) B.

Exercises 1
1. Consider the function mapping the plane to itself that takes a vector to its projection into the line y = x.
(a) Produce a matrix that describes the function's action.
(b) Show also that this map can be obtained by first rotating everything in the plane π/4 radians clockwise, then projecting into the x-axis, and then rotating π/4 radians counterclockwise.

2. Show that proj_s(v) · (v - proj_s(v)) = 0.

2. Inner Product Space

2.1 Definition
An inner product on a real vector space V is a function that associates a number, denoted <u, v>, with each pair of vectors u and v of V. This function has to satisfy the following conditions for vectors u, v, and w, and scalar c.
1. <u, v> = <v, u> (symmetry axiom)
2. <u + v, w> = <u, w> + <v, w> (additive axiom)
3. <cu, v> = c <u, v> (homogeneity axiom)
4. <u, u> ≥ 0, and <u, u> = 0 if and only if u = 0 (positive definite axiom)

2. Inner Product Space

A vector space V on which an inner product is defined is called an inner product space. Any function on a vector space that satisfies the axioms of an inner product defines an inner product on the space.

There can be many inner products on a given vector space.

Example 2.1
Let u = (x1, x2), v = (y1, y2), and w = (z1, z2) be arbitrary vectors in R^2. Prove that <u, v>, defined as follows, is an inner product on R^2:
<u, v> = x1y1 + 4x2y2.
Determine the inner product of the vectors (-2, 5) and (3, 1) under this inner product.
Solution
Axiom 1: <u, v> = x1y1 + 4x2y2 = y1x1 + 4y2x2 = <v, u>
Axiom 2: <u + v, w> = <(x1, x2) + (y1, y2), (z1, z2)>
= <(x1 + y1, x2 + y2), (z1, z2)>
= (x1 + y1) z1 + 4(x2 + y2) z2
= x1z1 + 4x2z2 + y1z1 + 4y2z2
= <(x1, x2), (z1, z2)> + <(y1, y2), (z1, z2)>
Axiom 3: <cu, v> = <(cx1, cx2), (y1, y2)>
= cx1y1 + 4cx2y2 = c(x1y1 + 4x2y2)
= c <u, v>
Axiom 4: <u, u> = <(x1, x2), (x1, x2)> = x1^2 + 4x2^2 ≥ 0
Further, x1^2 + 4x2^2 = 0 if and only if x1 = 0 and x2 = 0, that is, u = 0. Thus <u, u> ≥ 0, and <u, u> = 0 if and only if u = 0.
The four inner product axioms are satisfied, so <u, v> = x1y1 + 4x2y2 is an inner product on R^2.
The inner product of the vectors (-2, 5) and (3, 1) is
<(-2, 5), (3, 1)> = (-2)(3) + 4(5)(1) = 14.
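
A minimal Python sketch of this weighted inner product (the function name is just illustrative):

def inner(u, v):
    # <u, v> = x1*y1 + 4*x2*y2, the inner product of Example 2.1
    return u[0] * v[0] + 4 * u[1] * v[1]

print(inner((-2, 5), (3, 1)))   # 14
print(inner((3, 1), (-2, 5)))   # 14 -- symmetry axiom holds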

Example 2.2
Consider the vector space M22 of 2×2 matrices. Let u and v, defined as follows, be arbitrary 2×2 matrices:
u = [a b; c d],   v = [e f; g h].
Prove that the following function is an inner product on M22:
<u, v> = ae + bf + cg + dh.
Determine the inner product of the matrices [2 -3; 0 1] and [5 2; 9 0].
Solution
Axiom 1: <u, v> = ae + bf + cg + dh = ea + fb + gc + hd = <v, u>
Axiom 3: Let k be a scalar. Then
<ku, v> = kae + kbf + kcg + kdh = k(ae + bf + cg + dh) = k <u, v>
The inner product of the given matrices is
<[2 -3; 0 1], [5 2; 9 0]> = (2)(5) + (-3)(2) + (0)(9) + (1)(0) = 4.

Example 2.3
Consider the vector space Pn of polynomials of degree ≤ n. Let f and g be elements of Pn. Prove that the following function defines an inner product on Pn:
<f, g> = ∫_0^1 f(x) g(x) dx.
Determine the inner product of the polynomials f(x) = x^2 + 2x - 1 and g(x) = 4x + 1.
Solution
Axiom 1: <f, g> = ∫_0^1 f(x) g(x) dx = ∫_0^1 g(x) f(x) dx = <g, f>
Axiom 2: <f + g, h> = ∫_0^1 [f(x) + g(x)] h(x) dx
= ∫_0^1 [f(x) h(x) + g(x) h(x)] dx
= ∫_0^1 f(x) h(x) dx + ∫_0^1 g(x) h(x) dx
= <f, h> + <g, h>

We now find the inner product of the functions f(x) = x^2 + 2x - 1 and g(x) = 4x + 1:
<x^2 + 2x - 1, 4x + 1> = ∫_0^1 (x^2 + 2x - 1)(4x + 1) dx = ∫_0^1 (4x^3 + 9x^2 - 2x - 1) dx = [x^4 + 3x^3 - x^2 - x]_0^1 = 2.
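
A quick symbolic check of this computation (a SymPy sketch):

import sympy as sp

x = sp.symbols('x')
f = x**2 + 2*x - 1
g = 4*x + 1

# <f, g> = integral of f(x)*g(x) over [0, 1]
print(sp.integrate(f * g, (x, 0, 1)))   # 2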

Norm of a Vector
The norm of a vector in R^n can be expressed in terms of the dot product as follows:
||(x1, x2, ..., xn)|| = √(x1^2 + ... + xn^2) = √((x1, x2, ..., xn) · (x1, x2, ..., xn)).
We generalize this definition. Norms in a general vector space do not necessarily have geometric interpretations, but they are often important in numerical work.

Definition 2.2
Let V be an inner product space. The norm of a vector v is denoted ||v|| and is defined by
||v|| = √<v, v>.

Example 2.4
Consider the vector space Pn of polynomials with inner product
<f, g> = ∫_0^1 f(x) g(x) dx.
The norm of a function f generated by this inner product is
||f|| = √<f, f> = √(∫_0^1 [f(x)]^2 dx).
Determine the norm of the function f(x) = 5x^2 + 1.
Solution Using the above definition of norm, we get
||5x^2 + 1||^2 = ∫_0^1 (5x^2 + 1)^2 dx = ∫_0^1 (25x^4 + 10x^2 + 1) dx = 28/3.
The norm of the function f(x) = 5x^2 + 1 is √(28/3).

Example 2.5
Consider the vector space M22 of 2×2 matrices. Let u and v, defined as follows, be arbitrary 2×2 matrices:
u = [a b; c d],   v = [e f; g h].
It is known from Example 2.2 that the function <u, v> = ae + bf + cg + dh is an inner product on M22.
The norm of the matrix u is
||u|| = √<u, u> = √(a^2 + b^2 + c^2 + d^2).

Angle between two vectors

The dot product in R^n was used to define the angle between vectors. The angle θ between vectors u and v in R^n is defined by
cos θ = (u · v) / (||u|| ||v||).

Definition 2.3
Let V be an inner product space. The angle θ between two nonzero vectors u and v in V is given by
cos θ = <u, v> / (||u|| ||v||).

Angle between two vectors

In R^2 we first define cos θ and then prove the Cauchy-Schwarz inequality.

In R^n we first prove the Cauchy-Schwarz inequality and then define cos θ.

Example 2.6
Consider the inner product space Pn of polynomials with inner product
<f, g> = ∫_0^1 f(x) g(x) dx.
The angle θ between two nonzero functions f and g is given by
cos θ = <f, g> / (||f|| ||g||) = (∫_0^1 f(x) g(x) dx) / (||f|| ||g||).
Determine the cosine of the angle between the functions f(x) = 5x^2 and g(x) = 3x.
Solution We first compute ||f|| and ||g||:
||5x^2||^2 = ∫_0^1 [5x^2]^2 dx = 5   and   ||3x||^2 = ∫_0^1 [3x]^2 dx = 3.
Thus
cos θ = (∫_0^1 (5x^2)(3x) dx) / (√5 √3) = (15/4) / √15 = √15 / 4.
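
A short symbolic check of this angle computation (a SymPy sketch):

import sympy as sp

x = sp.symbols('x')
f, g = 5*x**2, 3*x

ip = lambda p, q: sp.integrate(p * q, (x, 0, 1))    # <p, q> on Pn
cos_theta = ip(f, g) / sp.sqrt(ip(f, f) * ip(g, g))
print(sp.simplify(cos_theta))                       # sqrt(15)/4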

Example 2.7
Consider the vector space M22 of 2×2 matrices, with u = [a b; c d] and v = [e f; g h] arbitrary 2×2 matrices.
It is known from Example 2.2 that the function <u, v> = ae + bf + cg + dh is an inner product on M22, and the norm of u is
||u|| = √<u, u> = √(a^2 + b^2 + c^2 + d^2).
The angle θ between u and v is given by
cos θ = <u, v> / (||u|| ||v||) = (ae + bf + cg + dh) / (√(a^2 + b^2 + c^2 + d^2) √(e^2 + f^2 + g^2 + h^2)).

Orthogonal Vectors
Definition 2.4: Let V be an inner product space. Two nonzero vectors u and v in V are said to be orthogonal if
<u, v> = 0.
Example 2.8
Show that the functions f(x) = 3x - 2 and g(x) = x are orthogonal in Pn with inner product <f, g> = ∫_0^1 f(x) g(x) dx.
Solution
<3x - 2, x> = ∫_0^1 (3x - 2)(x) dx = [x^3 - x^2]_0^1 = 0.
Thus the functions f and g are orthogonal in this inner product space.

Distance
As with the norm, the concept of distance need not have a direct geometrical interpretation. It is, however, useful in numerical mathematics to be able to discuss how far apart various functions are.

Definition 2.5
Let V be an inner product space with vector norm ||v|| = √<v, v>. The distance between two vectors (points) u and v is denoted d(u, v) and is defined by
d(u, v) = ||u - v|| = √<u - v, u - v>.

Example 2.9
Consider the inner product space Pn of polynomials discussed earlier. Determine which of the functions g(x) = x^2 - 3x + 5 or h(x) = x^2 + 4 is closest to f(x) = x^2.
Solution
[d(f, g)]^2 = <f - g, f - g> = <3x - 5, 3x - 5> = ∫_0^1 (3x - 5)^2 dx = 13
[d(f, h)]^2 = <f - h, f - h> = <-4, -4> = ∫_0^1 (-4)^2 dx = 16
Thus d(f, g) = √13 and d(f, h) = 4. Since √13 < 4, g is closer than h to f.

3. Gram-Schmidt Orthogonalization
Given a vector s, any vector v in an inner product space V can be decomposed as
v = proj_s(v) + (v - proj_s(v)) = v_∥ + v_⊥,   where v_∥ · v_⊥ = 0.
Definition 3.1: Mutually Orthogonal Vectors
Vectors v1, ..., vk ∈ V are mutually orthogonal if vi · vj = 0 for all i ≠ j.
Theorem 3.1: A set of mutually orthogonal non-zero vectors is linearly independent.
Proof: Suppose c1 v1 + ... + ck vk = 0. Taking the inner product with vj gives
0 = vj · (c1 v1 + ... + ck vk) = cj (vj · vj),
so cj = 0 for all j.

3. Gram-Schmidt Orthogonalization
Corollary 3.1: A set of k mutually orthogonal nonzero vectors in a k-dimensional space V is a basis for the space.
Definition 3.2: Orthogonal Basis
An orthogonal basis for a vector space is a basis of mutually orthogonal vectors.
Definition 3.3: Orthonormal Basis
An orthonormal basis for a vector space is a basis of mutually orthogonal vectors of unit length.
Definition 3.4: Orthogonal Complement
The orthogonal complement of a subspace M of R^3 is M⊥ = { v ∈ R^3 | v is perpendicular to all vectors in M } (read "M perp").
The orthogonal projection proj_M(v) of a vector is its projection into M along M⊥.
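
A minimal NumPy sketch of the Gram-Schmidt process, which turns any basis into an orthonormal one (the example vectors are illustrative only):

import numpy as np

def gram_schmidt(vectors):
    # Orthonormalize a list of linearly independent vectors.
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:              # subtract the projections onto the
            w = w - (w @ u) * u      # orthonormal vectors built so far
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

B = gram_schmidt([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
print(np.round(B @ B.T, 10))         # identity matrix: rows are orthonormal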

Lemma 3.1:
Let M be a subspace of R^n. Then M⊥ is also a subspace and R^n = M ⊕ M⊥. Hence, for any v ∈ R^n, v - proj_M(v) is perpendicular to all vectors in M.
Proof: Construct bases using Gram-Schmidt orthogonalization.

Theorem 3.2:
Let v be a vector in R^n and let M be a subspace of R^n with basis β1, ..., βk. If A is the matrix whose columns are the β's, then
proj_M(v) = c1 β1 + ... + ck βk,
where the coefficients ci are the entries of the vector (A^T A)^(-1) A^T v. That is,
proj_M(v) = A (A^T A)^(-1) A^T v.

Proof: By Lemma 3.1, proj_M(v) ∈ M, so proj_M(v) = A c for some column vector c, and
A^T (v - A c) = 0   ⇒   A^T A c = A^T v   ⇒   c = (A^T A)^(-1) A^T v.

Interpretation of Theorem 3.2:

If β1, ..., βk is an orthonormal basis of M, then A^T A = I. In that case,
proj_M(v) = A (A^T A)^(-1) A^T v = A A^T v = (β1 · v) β1 + ... + (βk · v) βk.
In particular, if the β's are the standard basis of R^k, then A = A^T = I.
In case the basis is not orthonormal, the task is to find C such that B = AC and B^T B = I. Then
I = B^T B = (AC)^T (AC) = C^T A^T A C,   hence   (A^T A)^(-1) = C C^T,
and
proj_M(v) = B B^T v = (AC)(AC)^T v = A C C^T A^T v = A (A^T A)^(-1) A^T v.
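
A small NumPy illustration of this interpretation (the subspace and the test vector are arbitrary choices):

import numpy as np

A = np.column_stack([np.array([1.0, 1.0, 0.0]) / np.sqrt(2),
                     np.array([0.0, 0.0, 1.0])])   # orthonormal columns, so A.T @ A = I
v = np.array([3.0, 4.0, 5.0])

p_general = A @ np.linalg.inv(A.T @ A) @ A.T @ v   # general formula of Theorem 3.2
p_simple  = A @ (A.T @ v)                          # simplification when A.T @ A = I
print(p_general, p_simple)                         # both: [3.5 3.5 5. ]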

Example 3.1:
To orthogonally project a vector v in R^3 into the subspace
P = { (x, y, z)^T : x - z = 0 } = { x (1, 0, 1)^T + y (0, 1, 0)^T | x, y ∈ R },
take the matrix whose columns are a basis of P:
A = [1 0; 0 1; 1 0].
Then
A^T A = [2 0; 0 1],   (A^T A)^(-1) = [1/2 0; 0 1],
and the projection matrix is
A (A^T A)^(-1) A^T = [1/2 0 1/2; 0 1 0; 1/2 0 1/2].
Hence
proj_P(v) = [1/2 0 1/2; 0 1 0; 1/2 0 1/2] v.
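
A NumPy check of this projection matrix (the test vector is an arbitrary choice):

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0]])               # columns span P = {(x, y, z) : x = z}
P = A @ np.linalg.inv(A.T @ A) @ A.T
print(P)                                 # [[0.5 0. 0.5] [0. 1. 0.] [0.5 0. 0.5]]

v = np.array([3.0, 2.0, 1.0])
print(P @ v)                             # [2. 2. 2.] -- first and third entries equal
print(np.allclose(P @ P, P))             # True: P is idempotent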

Exercises 3
1. Perform the Gram-Schmidt process on this basis for R^3:
(2, 2, 2)^T,   (1, 0, 1)^T,   (0, 3, ...)^T
2. Show that the columns of an n×n matrix form an orthonormal set if and only if the inverse of the matrix is its transpose. Produce such a matrix.

4. Projection Into a Subspace

Definition 4.1: For any direct sum V = M ⊕ N and any v ∈ V such that v = m + n with m ∈ M and n ∈ N, the projection of v into M along N is defined as E(v) = proj_{M,N}(v) = m.
Reminder: M and N need not be orthogonal. There need not even be an inner product defined.
Theorem 4.1: Show that (i) E is linear and (ii) E^2 = E.
Theorem 4.2: Let E: V → V be linear with E^2 = E. Then
(i) E(u) = u for any u ∈ Im E;
(ii) V is the direct sum of the image and kernel of E, i.e., V = Im E ⊕ Ker E;
(iii) E is the projection of V into Im E, its image, along Ker E.

Projection and Matrices

Let L(V) = { L | L is a linear map from V to itself }. L(V) is a vector space under function addition and scalar multiplication, and dim(L(V)) = n^2 if dim(V) = n.
If we fix a basis in V, there arises a one-to-one correspondence between L(V) and the set of all matrices of order n (the latter is also a vector space of dimension n^2). This result implies that matrices represent linear operators.

Orthogonal Projection

[Figure: the vectors (1, 2)^T and (2, 1)^T in the plane]

We generate two vector spaces V1 and V2 by multiplying the vectors (1, 2)^T and (2, 1)^T by k, where k ∈ R:
V1 = { k (1, 2)^T }   and   V2 = { k (2, 1)^T }.

Orthogonal Projection
Now we can write the vector space R^2 as V1 ⊕ V2.
Let (x1, x2)^T be any vector of R^2 and k1, k2 ∈ R. Then we can write
(x1, x2)^T = k1 (1, 2)^T + k2 (2, 1)^T.
Therefore,
(x1, x2)^T = [1 2; 2 1] (k1, k2)^T,
so
(k1, k2)^T = [1 2; 2 1]^(-1) (x1, x2)^T,
that is,
k1 = -(1/3) x1 + (2/3) x2,   k2 = (2/3) x1 - (1/3) x2.

Orthogonal Projection
Now let P be a projection matrix and x a vector in R^2. The projection from R^2 onto V1 (along V2) is given by

P x = k1 (1, 2)^T = [-(1/3) x1 + (2/3) x2] (1, 2)^T = [-1/3 2/3; -2/3 4/3] (x1, x2)^T.

Ex. 1) Check that P is idempotent but not symmetric. Why?
2) Prove that if the second vector is (2, -1)^T instead of (2, 1)^T, then P will be idempotent as well as symmetric.
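
A NumPy sketch of the checks in this exercise, assuming the replacement vector in part 2 is (2, -1)^T, which is orthogonal to (1, 2)^T:

import numpy as np

P = np.array([[-1/3, 2/3],
              [-2/3, 4/3]])      # projection onto span{(1,2)} along span{(2,1)}
print(np.allclose(P @ P, P))     # True  -- idempotent
print(np.allclose(P, P.T))       # False -- not symmetric (oblique projection)

B = np.array([[1.0, 2.0],
              [2.0, -1.0]])      # columns (1,2) and (2,-1), now orthogonal
Q = B[:, [0]] @ np.linalg.inv(B)[[0], :]    # projection onto span{(1,2)} along span{(2,-1)}
print(np.allclose(Q @ Q, Q), np.allclose(Q, Q.T))   # True True -- orthogonal projection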

Meaning of P_{n×n} x_{n×1}

Case 1: P_{n×n} is singular but not idempotent
P = [1 1; 2 2],   rank(P) = 1,   P^2 ≠ P.
The whole space R^n is mapped to the column space of P_{n×n}, a proper subspace of R^n:
P (x1, x2)^T = [1 1; 2 2] (x1, x2)^T = (x1 + x2) (1, 2)^T.
A vector of the subspace may be mapped to another vector of the subspace, e.g.
P (1, 2)^T = 3 (1, 2)^T.

Meaning of P_{n×n} x_{n×1}

Case 2: P_{n×n} is singular and idempotent (asymmetric)
P = [1 0; 1 0],   rank(P) = 1,   P^2 = P.
The whole space R^n is mapped to the column space of P_{n×n}, a proper subspace of R^n:
P (x1, x2)^T = x1 (1, 1)^T.
A vector of the subspace is mapped to the same vector of the subspace, e.g.
P (2, 2)^T = (2, 2)^T.
Px is not orthogonal to x - Px.

Meaning of P_{n×n} x_{n×1}

Case 3: P_{n×n} is singular and idempotent (symmetric)
Meaning: The whole space R^n is mapped to the column space of P_{n×n}, a proper subspace of R^n. A vector of the subspace is mapped to the same vector of the subspace. This is an orthogonal projection; that is, the subspace is orthogonal to its complement. For example,
P = [1/2 1/2; 1/2 1/2],   P x = (x1 + x2) (1/2, 1/2)^T.
Px is orthogonal to x - Px.

Meaning of P_{n×n} x_{n×1}

Case 4: P_{n×n} is non-singular and non-orthogonal
Meaning: The whole space R^n is mapped to the column space of P_{n×n}, which is all of R^n. The mapping is one-to-one and onto. We now have the columns of P_{n×n} as a new (oblique) basis in place of the standard basis. Angles between vectors and lengths of vectors are not preserved. For example,
P = [1 2; 2 1]:   y = P x,   x = P^(-1) y.

Meaning of P_{n×n} x_{n×1}

Case 5: P_{n×n} is non-singular and orthogonal
Meaning: The whole space R^n is mapped to the column space of P_{n×n}, which is all of R^n. The mapping is one-to-one and onto. We now have the columns of P_{n×n} as a new (orthogonal) basis in place of the standard basis. Angles between vectors and lengths of vectors are preserved: we have only a rotation of axes. For example,
P = [1/√2 -1/√2; 1/√2 1/√2].

From a symmetric matrix we can always obtain such an orthogonal P from its n independent (orthonormal) eigenvectors.
From a symmetric matrix we can also obtain a symmetric idempotent P from r (< n) of its independent eigenvectors.
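
A NumPy sketch checking the defining properties of the example matrices reconstructed in Cases 1-5:

import numpy as np

cases = {
    "1 singular, not idempotent": np.array([[1., 1.], [2., 2.]]),
    "2 idempotent, asymmetric":   np.array([[1., 0.], [1., 0.]]),
    "3 idempotent, symmetric":    np.array([[.5, .5], [.5, .5]]),
    "4 non-singular, oblique":    np.array([[1., 2.], [2., 1.]]),
    "5 non-singular, orthogonal": np.array([[1., -1.], [1., 1.]]) / np.sqrt(2),
}
for name, P in cases.items():
    print(name,
          "| rank:", np.linalg.matrix_rank(P),
          "| idempotent:", np.allclose(P @ P, P),
          "| symmetric:", np.allclose(P, P.T),
          "| orthogonal:", np.allclose(P.T @ P, np.eye(2)))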

Projection Theorem in a Hilbert Space

Let M be a closed subspace of a Hilbert space H. There exists a unique pair of mappings P: H → M and Q: H → M⊥ such that x = Px + Qx for all x ∈ H. P and Q have the following properties:
i) x ∈ M ⇒ Px = x, Qx = 0
ii) x ∈ M⊥ ⇒ Px = 0, Qx = x
iii) Px is the closest vector in M to x.
iv) Qx is the closest vector in M⊥ to x.
v) ||Px||^2 + ||Qx||^2 = ||x||^2
vi) P and Q are linear maps and P^2 = P, Q^2 = Q.

Finite-dimensional subspaces are always closed.

Applications
What is the common characteristic (structure) among the following statistical methods?
1. Principal Components Analysis
2. (Ridge) regression
3. Fisher discriminant analysis
4. Canonical correlation analysis
5. Singular value decomposition
6. Independent component analysis
We consider linear combinations of the input vector:
f(x) = w^T x
We make use of the concepts of length and dot product available in Euclidean space.

What is feature reduction?

Feature reduction maps the original data X ∈ R^p to reduced data Y ∈ R^d (with d < p) through a linear transformation G ∈ R^{p×d}:
X ∈ R^p  ↦  Y = G^T X ∈ R^d.

Dimensionality Reduction
One approach to dealing with high-dimensional data is to reduce their dimensionality:
project the high-dimensional data onto a lower-dimensional sub-space using linear or non-linear transformations.

Principal Component Analysis (PCA)

Find a basis in a low-dimensional sub-space:
approximate vectors by projecting them onto a low-dimensional sub-space.

(1) Original space representation:
x = a1 v1 + a2 v2 + ... + aN vN,
where v1, v2, ..., vN is a basis of the original N-dimensional space.

(2) Lower-dimensional sub-space representation:
x̂ = b1 u1 + b2 u2 + ... + bK uK,
where u1, u2, ..., uK is a basis of the K-dimensional sub-space (K < N).

Note: if K = N, then x̂ = x.

Principal Component Analysis (PCA)

Information loss
Dimensionality reduction implies information loss!
PCA preserves as much information as possible by minimizing the reconstruction error:
min ||x - x̂||.

What is the best lower-dimensional sub-space?
The best low-dimensional space is centered at the sample mean and has directions determined by the "best" eigenvectors of the covariance matrix of the data x.
By "best" eigenvectors we mean those corresponding to the largest eigenvalues (i.e., the principal components).
Since the covariance matrix is real and symmetric, these eigenvectors are orthogonal and form a set of basis vectors.

Principal Component Analysis (PCA)


Methodology
Suppose x1, x2, ..., xM are N x 1 vectors


Principal Component Analysis (PCA)

Methodology (cont.)
Each coordinate of the lower-dimensional representation is
bi = ui^T (x - x̄).

Principal Component Analysis (PCA)

Linear transformation implied by PCA
The linear transformation R^N → R^K that performs the dimensionality reduction is
b = (b1, ..., bK)^T = U^T (x - x̄),   where U = [u1 ... uK].
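
A minimal NumPy sketch of this PCA transformation (the random test data and variable names are illustrative only):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # M = 200 samples, N = 5 features

x_bar = X.mean(axis=0)                   # sample mean
C = np.cov(X - x_bar, rowvar=False)      # N x N covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order

K = 2
U = eigvecs[:, ::-1][:, :K]              # the K "best" eigenvectors (largest eigenvalues)
B = (X - x_bar) @ U                      # scores b = U^T (x - x_bar), one row per sample
print(B.shape)                           # (200, 2)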

Principal Component Analysis (PCA)

Eigenvalue spectrum
[Figure: the eigenvalues λi plotted against the index i, i = 1, ..., N]

Principal Component Analysis (PCA)


What is the error due to dimensionality reduction?

It can be shown that the error due to dimensionality reduction is equal to

e = (1/2) Σ_{i=K+1}^{N} λi.

Principal Component Analysis (PCA)

Standardization
The principal components are dependent on the units used to measure the original variables as well as on the range of values they assume. We should always standardize the data prior to using PCA.
A common standardization method is to transform all the data to have zero mean and unit standard deviation.

Principal Component Analysis (PCA)

Case Study: Eigenfaces for Face Detection/Recognition
M. Turk, A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.

Face Recognition
The simplest approach is to think of it as a template matching problem.
Problems arise when performing recognition in a high-dimensional space.
Significant improvements can be achieved by first mapping the data into a lower-dimensional space.
How to find this lower-dimensional space?

Principal Component Analysis (PCA)

Main idea behind eigenfaces
[Figure: the average face]

Principal Component Analysis (PCA)

Computation of the eigenfaces (cont.)
[Figure: the eigenfaces ui; note that each ui is normalized]

Principal Component Analysis (PCA)

Computation of the eigenfaces cont.


Principal Component Analysis (PCA)


Representing faces onto this basis


Principal Component Analysis (PCA)


Representing faces onto this basis cont.


Principal Component Analysis (PCA)


Face Recognition Using Eigenfaces


Principal Component Analysis (PCA)

Face Recognition Using Eigenfaces cont.

The distance er is called distance within the face space (difs)


Comment: we can use the common Euclidean distance to compute er,
however, it has been reported that the Mahalanobis distance performs better:


Principal Component Analysis (PCA)


Face Detection Using Eigenfaces


Principal Component Analysis (PCA)


Face Detection Using Eigenfaces cont.


Principal Component Analysis (PCA)

Reconstruction of faces and non-faces

Principal Component Analysis (PCA)

Applications
[Figure: distance from face space (dffs)]
Face detection, tracking, and recognition

Principal Components Analysis
So, the principal components are given by:
b1 = u11 x1 + u12 x2 + ... + u1N xN
b2 = u21 x1 + u22 x2 + ... + u2N xN
...
bN = uN1 x1 + uN2 x2 + ... + uNN xN
The xj's are standardized if the correlation matrix is used (mean 0.0, SD 1.0).

Principal Components Analysis

Score of the ith unit on the jth principal component:
b_{i,j} = uj1 xi1 + uj2 xi2 + ... + ujN xiN

PCA Scores
[Figure: scatter plot of the data in the (xi1, xi2) plane, with the scores bi,1 and bi,2 measured along the two principal-component axes; xi1 runs from about 4.0 to 6.0 and xi2 from about 2 to 5]

Principal Components Analysis

Amount of variance accounted for by:
1st principal component: λ1, the 1st eigenvalue
2nd principal component: λ2, the 2nd eigenvalue
...
λ1 > λ2 > λ3 > λ4 > ...
Average λj = 1 (correlation matrix)

Principal Components Analysis: Eigenvalues
[Figure: the same scatter plot with the first principal axis U1 drawn through the data; xi1 runs from about 4.0 to 6.0]

Thank you

