
# Factor Analysis

The fact that some factors are not observable disqualifies regression and other standard methods.
Factor analysis is a method for investigating whether a number of variables of interest Y_1, Y_2, …, Y_l are linearly related to a smaller number of unobservable factors F_1, F_2, …, F_k.
Let us consider an example to explain Factor Analysis.
Example
Students entering a certain MBA program must take three required courses, in finance, marketing, and policy. Let Y_1, Y_2, and Y_3, respectively, represent a student's grades in these courses. The available data consist of the grades of five students (on a 10-point numerical scale above the passing mark), as shown in the table.

Example
It has been suggested that these grades are functions of two underlying factors, F_1 and F_2, tentatively and rather loosely described as quantitative ability and verbal ability, respectively. It is assumed that each Y variable is linearly related to these two factors, as follows:

Y_1 = β_11 F_1 + β_12 F_2 + e_1
Y_2 = β_21 F_1 + β_22 F_2 + e_2
Y_3 = β_31 F_1 + β_32 F_2 + e_3
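This linear factor model can be sketched in code. A minimal simulation, assuming hypothetical loadings and error standard deviations (none of these numbers come from the example data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # simulated students

# Hypothetical loading matrix B (3 variables x 2 factors) and specific
# standard deviations -- illustrative values, not estimates from the example.
B = np.array([[0.9, 0.1],
              [0.1, 0.8],
              [0.2, 0.7]])
sigma = np.array([0.3, 0.4, 0.3])

F = rng.standard_normal((n, 2))          # standardized, independent factors
e = rng.standard_normal((n, 3)) * sigma  # independent error terms
Y = F @ B.T + e                          # Y_i = b_i1*F_1 + b_i2*F_2 + e_i

# The sample covariance of Y approaches B B' + diag(sigma^2)
print(np.round(np.cov(Y, rowvar=False), 2))
```

With many simulated observations, the sample variance-covariance matrix of Y closes in on the one implied by the model, which is the relationship factor analysis exploits.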

Explanation
The error terms e_1, e_2, and e_3 serve to indicate that the hypothesized relationships are not exact.
In the special vocabulary of factor analysis, the parameters β_ij are called loadings; for example, β_12 is the loading of variable Y_1 on factor F_2.
Quantitative skills should help a student in finance, but not in marketing or policy. Verbal skills should be helpful in marketing or policy but not in finance. In other words, the loadings are expected to have roughly the following structure:

| Variable | F_1 (quantitative) | F_2 (verbal) |
|---|---|---|
| Y_1 (finance) | + | 0 |
| Y_2 (marketing) | 0 | + |
| Y_3 (policy) | 0 | + |

Explanation
Of course, the zeros in the preceding table are not expected to be exactly equal to zero. By `0' we mean approximately equal to zero, and by `+' a positive number substantially different from zero.
Were the factors observable, these expectations could be tested by regressing each Y against the two factors. Such an approach, however, is not feasible because the factors cannot be observed.
An entirely new strategy is required: factor analysis.

Assumptions of FA
A1: The error terms e_i are independent of one another, and such that
E(e_i) = 0, Var(e_i) = σ_i², Cov(e_i, e_j) = 0 for i ≠ j.
A2: The unobservable factors F_j are independent of one another and of the error terms, and are such that
E(F_j) = 0, Var(F_j) = 1, Cov(F_i, F_j) = 0 for i ≠ j, Cov(F_j, e_i) = 0.
This means in part that there is no relationship between quantitative and verbal ability.
Local (i.e., conditional) independence given the factors: the observed variables are independent of one another,
Cov(Y_i, Y_j | F) = 0,
and the covariance between the observed variables and the factors equals the loading matrix:
Cov(Y, F) = B.
Explanation
In more advanced models of factor analysis, the condition that
the factors are independent of one another can be relaxed.
As for the factor means 0 and variances 1, the assumption is
that the factors are standardized.
It is an assumption made for mathematical convenience; since
the factors are not observable, we might as well think of them
as measured in standardized form.
Explanation
Each observable variable is a linear function of independent factors and error terms, and can be written as
Y_i = β_i1 F_1 + β_i2 F_2 + e_i,
so that
Var(Y_i) = β_i1² + β_i2² + σ_i².
Explanation
The communality of the variable, β_i1² + β_i2², is the part of the variance of Y_i that is explained by the common factors F_1 and F_2.
The specific variance, σ_i², is the part of the variance of Y_i that is not accounted for by the common factors.
If the two factors were perfect predictors of the grades, then e_1 = e_2 = e_3 = 0 always, and σ_1² = σ_2² = σ_3² = 0.
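Given a set of loadings, the communality and the specific variance are simple row-wise computations. A sketch with hypothetical loadings and specific variances (not the example's estimates):

```python
import numpy as np

# Hypothetical loadings and specific variances (illustrative values only).
B = np.array([[0.9, 0.1],
              [0.1, 0.8],
              [0.2, 0.7]])
psi = np.array([0.09, 0.16, 0.09])  # specific variances sigma_i^2

communality = (B ** 2).sum(axis=1)  # b_i1^2 + b_i2^2
total_var = communality + psi       # Var(Y_i) = communality + specific variance

for i, (h, p, v) in enumerate(zip(communality, psi, total_var), start=1):
    print(f"Y{i}: communality={h:.2f}  specific variance={p:.2f}  Var={v:.2f}")
```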
Explanation
To calculate the covariance of any two observable variables, Y_i and Y_j, we can write
Cov(Y_i, Y_j) = β_i1 β_j1 + β_i2 β_j2.

Explanation
We can arrange all the variances and covariances in the form of the following table:

This table is the theoretical variance-covariance matrix.

Explanation
Let us now turn to the available data, and calculate the
observed variance-covariance matrix:

Explanation
On the one hand, therefore, we have the observed variances and covariances of the variables; on the other, the variances and covariances implied by the factor model.
If the model's assumptions are true, we should be able to estimate the β_ij so that the resulting estimates of the theoretical variances and covariances are close to the observed ones, i.e.,
S ≈ BB′ + Ψ,
where S is the observed variance-covariance matrix, B the matrix of loadings, and Ψ the diagonal matrix of specific variances.
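The variance-covariance matrix implied by the model can be assembled directly from a loading matrix and the specific variances. A sketch, again with hypothetical values:

```python
import numpy as np

# Hypothetical loading matrix B and diagonal specific-variance matrix Psi
# (illustrative values only).
B = np.array([[0.9, 0.1],
              [0.1, 0.8],
              [0.2, 0.7]])
Psi = np.diag([0.09, 0.16, 0.09])

Sigma = B @ B.T + Psi  # theoretical variance-covariance matrix

# Off-diagonal entries follow Cov(Y_i, Y_j) = b_i1*b_j1 + b_i2*b_j2:
cov_12 = B[0, 0] * B[1, 0] + B[0, 1] * B[1, 1]
print(np.round(Sigma, 2))
print(round(cov_12, 2))  # same as Sigma[0, 1]
```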
Problem
There is, in general, an infinite number of sets of values of the β_ij yielding the same theoretical variances and covariances.
For this reason, factor analysis usually proceeds in two stages.
In the first stage, one set of loadings β̂_ij is calculated which yields theoretical variances and covariances that fit the observed ones as closely as possible according to a certain criterion. These first loadings, however, may not lend themselves to a reasonable interpretation.
Thus, in the second stage, the first loadings are "rotated" in an effort to arrive at another set that fits the observed variances and covariances equally well, but is more consistent with prior expectations or more easily interpreted.
Stage 1
Using the principal component method, we can calculate the loadings β_ij.
In this method, we first find the eigenvalues λ_1 ≥ λ_2 ≥ λ_3 and the corresponding eigenvectors of the observed variance-covariance matrix S (or of the observed correlation matrix R). The estimate of the loading β_ij is then given by
β̂_ij = √λ_j · u_ij,
where u_ij is the i-th element of the j-th eigenvector.
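The principal component extraction step can be sketched with NumPy. The covariance matrix below is a made-up positive definite example, not the lecture's five-student data:

```python
import numpy as np

# Hypothetical observed variance-covariance matrix S (positive definite,
# illustrative values only).
S = np.array([[1.00, 0.17, 0.25],
              [0.17, 0.81, 0.58],
              [0.25, 0.58, 0.62]])

# np.linalg.eigh returns eigenvalues in ascending order; sort descending.
vals, vecs = np.linalg.eigh(S)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

# Principal-component estimate of the loadings: beta_ij = sqrt(lambda_j) * u_ij
k = 2  # number of factors retained
B_hat = vecs[:, :k] * np.sqrt(vals[:k])
print(B_hat)
```

Retaining all eigenvectors would reproduce S exactly; keeping only the k largest eigenvalues gives the best k-factor fit in this sense.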
Stage 1
For our data, the eigenvalues are λ_1 = 9.87146, λ_2 = 8.00773, and λ_3 = 0.04081, and the eigenvectors are:

0.99913 0.00841 0.05788
0.04212 0.79081 0.61062
0.04068 0.61201 0.78981

Stage 1
The solution for our example is

Stage 1
The estimate of the specific variance of Y_i is the difference between the observed variance and the estimated communality of Y_i:
ψ̂_i = s_ii − (β̂_i1² + β̂_i2²).
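This estimate can be sketched as follows, using a hypothetical covariance matrix and the two-factor principal-component loadings derived from it (illustrative numbers only, not the lecture's data):

```python
import numpy as np

# Hypothetical covariance matrix; two-factor principal-component loadings.
S = np.array([[1.00, 0.17, 0.25],
              [0.17, 0.81, 0.58],
              [0.25, 0.58, 0.62]])
vals, vecs = np.linalg.eigh(S)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
B_hat = vecs[:, :2] * np.sqrt(vals[:2])

# psi_hat_i = s_ii - (estimated communality of Y_i)
communality = (B_hat ** 2).sum(axis=1)
psi_hat = np.diag(S) - communality
print(np.round(psi_hat, 4))
```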

Stage 2
When the first factor solution does not reveal the hypothesized structure of the loadings, the loadings are rotated.
The rotation is called an orthogonal rotation if the axes are maintained at right angles.
Varimax is the most commonly used method of rotation; it maximizes the variance of the squared loadings within each factor, pushing each loading toward either zero or a large absolute value.
Oblique rotation permits a minor amount of correlation among the factors. However, there is no single popular method of this type of rotation, and it requires considerable expertise.
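A varimax rotation can be sketched with the standard SVD-based iteration; the unrotated loadings below are hypothetical, chosen only for illustration:

```python
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-8):
    """Orthogonal (varimax) rotation of a loading matrix L (p x k),
    using the standard SVD-based iteration."""
    p, k = L.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        M = L.T @ (Lr ** 3 - (gamma / p) * Lr @ np.diag((Lr ** 2).sum(axis=0)))
        u, s, vt = np.linalg.svd(M)
        R = u @ vt              # best orthogonal rotation at this step
        crit_new = s.sum()
        if crit_new - crit_old < tol:
            break
        crit_old = crit_new
    return L @ R, R

# Hypothetical unrotated loadings (illustrative values only).
L = np.array([[0.7,  0.5],
              [0.6, -0.5],
              [0.8,  0.4]])
L_rot, R = varimax(L)
print(np.round(L_rot, 3))
```

Because R is orthogonal, the rotated loadings fit the observed variances and covariances exactly as well as the original ones: the communality of each variable is unchanged.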

Determine the Number of Factors
A priori determination: in our example, it is expected that some loadings will be close to zero, while others will be positive or negative and substantially different from zero.
Determination based on eigenvalues: only factors with eigenvalues greater than 1.0 are retained.
Determination based on percentage of variance: enough factors are retained to account for a specified percentage of the total variance.
Determination based on scree plot: a scree plot is a plot of the eigenvalues against the number of factors in order of extraction. The point at which the scree begins denotes the true number of factors.
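The eigenvalue and percentage-of-variance rules can be checked directly against the example's eigenvalues. (Strictly, the greater-than-1.0 cutoff is usually applied to eigenvalues of the correlation matrix; here it is applied to the values reported above purely for illustration.)

```python
import numpy as np

# Eigenvalues reported for the example's matrix.
eigvals = np.array([9.87146, 8.00773, 0.04081])

# Eigenvalue rule: retain factors whose eigenvalue exceeds 1.0.
n_factors = int((eigvals > 1.0).sum())
print(n_factors)  # 2

# Percentage of total variance accounted for by the retained factors.
explained = eigvals[:n_factors].sum() / eigvals.sum()
print(round(explained * 100, 1))  # 99.8
```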
