where $\lambda_1, \lambda_2, \ldots, \lambda_k$ are constants such that not all of them are zero simultaneously.
Today, however, the term multicollinearity is used to include the case where
the X variables are intercorrelated but not perfectly so, as follows:
$$\lambda_1 X_1 + \lambda_2 X_2 + \cdots + \lambda_k X_k + v_i = 0 \qquad (10.1.2)$$

where $v_i$ is a stochastic error term.
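As a minimal numerical sketch of the two cases (not from the text; it assumes NumPy, and all variable names are hypothetical), an exact relation such as $X_3 = 2X_2$ reduces the rank of the regressor matrix, whereas an inexact relation $X_4 = 2X_2 + v$ does not, even though the correlation is close to one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regressors: X3 is an exact multiple of X2 (perfect collinearity),
# X4 equals 2*X2 plus a small disturbance v (imperfect collinearity).
X2 = rng.normal(size=50)
X3 = 2.0 * X2                                      # lambda = 2: exact linear relation
X4 = 2.0 * X2 + rng.normal(scale=0.05, size=50)    # close, but not exact

print(np.linalg.matrix_rank(np.column_stack([X2, X3])))  # 1: exact linear dependence
print(np.linalg.matrix_rank(np.column_stack([X2, X4])))  # 2: no exact dependence
print(np.corrcoef(X2, X4)[0, 1])                         # correlation near, but below, 1
```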
To see the difference between perfect and less than perfect multicollinearity, assume, for example, that $\lambda_2 \neq 0$. Then, (10.1.1) can be written as:

$$X_{2i} = -\frac{\lambda_1}{\lambda_2} X_{1i} - \frac{\lambda_3}{\lambda_2} X_{3i} - \cdots - \frac{\lambda_k}{\lambda_2} X_{ki} \qquad (10.1.3)$$

which shows how $X_2$ is exactly linearly related to the other explanatory variables.
Multicollinearity in this sense refers only to linear relationships among the X variables; it does not rule out nonlinear relationships among them. Consider, for example, the regression

$$Y_i = \beta_0 + \beta_1 X_i + \beta_2 X_i^2 + \beta_3 X_i^3 + u_i$$

where, say, Y = total cost of production and X = output. The variables $X_i^2$ (output squared) and $X_i^3$ (output cubed) are obviously functionally related to $X_i$, but the relationship is nonlinear.
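A small sketch of this point (assumed, not part of the text, using NumPy with made-up output levels): the columns $1, X, X^2, X^3$ of the design matrix remain linearly independent even though the higher powers are exact functions of $X$, so there is no perfect (linear) multicollinearity.

```python
import numpy as np

# Hypothetical output levels for a cubic cost function in output.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])

# Design matrix: intercept, output, output squared, output cubed.
D = np.column_stack([np.ones_like(X), X, X**2, X**3])

# The powers of X are functionally related to X, but no linear combination of
# the columns is identically zero, so the matrix keeps full column rank (= 4).
print(np.linalg.matrix_rank(D))
print(np.corrcoef(X, X**2)[0, 1])   # high correlation, yet the dependence is nonlinear
```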
Why does the classical linear regression model assume that there is no multicollinearity among the X's? The reasoning is this:
If multicollinearity is perfect in the sense of (10.1.1), the regression coefficients of the X variables are indeterminate and their standard errors are infinite.
If multicollinearity is less than perfect, as in (10.1.2), the regression coefficients, although determinate, possess large standard errors (in relation to the coefficients themselves), which means the coefficients cannot be estimated with great precision or accuracy.
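Both consequences can be illustrated numerically. The sketch below (assumed, not from the text; NumPy, hypothetical data and names) builds three versions of a two-regressor model: with $X_3$ an exact multiple of $X_2$ the cross-product matrix $X'X$ is singular, so the coefficients are indeterminate; with $X_3$ only approximately proportional to $X_2$ the coefficients are computable but their standard errors are very large compared with a well-behaved regressor.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x2 + rng.normal(size=n)            # hypothetical data-generating process

def ols(X, y):
    """OLS coefficients and standard errors from (X'X)^(-1)."""
    XtX = X.T @ X
    beta = np.linalg.solve(XtX, X.T @ y)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    return beta, np.sqrt(sigma2 * np.diag(np.linalg.inv(XtX)))

# 1) Perfect collinearity: x3 = 2*x2 makes X'X singular -> OLS is indeterminate.
X_perf = np.column_stack([np.ones(n), x2, 2.0 * x2])
XtX = X_perf.T @ X_perf
print("rank of X'X:", np.linalg.matrix_rank(XtX), "of", XtX.shape[0])   # 2 of 3

# 2) Near-perfect collinearity: estimable, but standard errors blow up.
X_near = np.column_stack([np.ones(n), x2, 2.0 * x2 + rng.normal(scale=0.01, size=n)])
print("near collinearity  se:", ols(X_near, y)[1][1:])

# 3) Unrelated second regressor for comparison: small standard errors.
X_ok = np.column_stack([np.ones(n), x2, rng.normal(size=n)])
print("no collinearity    se:", ols(X_ok, y)[1][1:])
```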
Consider now what perfect multicollinearity does to estimation. Assume that $X_{3i} = \lambda X_{2i}$, where $\lambda$ is a nonzero constant (e.g., 2, 4, 1.8, etc.). Substituting this into (7.4.7), we obtain

$$\hat{\beta}_2 = \frac{\left(\sum x_{2i} y_i\right)\left(\lambda^2 \sum x_{2i}^2\right) - \left(\lambda \sum x_{2i} y_i\right)\left(\lambda \sum x_{2i}^2\right)}{\left(\sum x_{2i}^2\right)\left(\lambda^2 \sum x_{2i}^2\right) - \lambda^2 \left(\sum x_{2i}^2\right)^2} = \frac{0}{0} \qquad (10.2.2)$$

which is an indeterminate expression.
To see this differently, let us substitute $x_{3i} = \lambda x_{2i}$ into (10.2.1) and obtain the following [see also (7.1.9)]:

$$
\begin{aligned}
y_i &= \hat{\beta}_2 x_{2i} + \hat{\beta}_3 (\lambda x_{2i}) + \hat{u}_i \\
    &= (\hat{\beta}_2 + \lambda \hat{\beta}_3) x_{2i} + \hat{u}_i \\
    &= \hat{\alpha} x_{2i} + \hat{u}_i \qquad (10.2.3)
\end{aligned}
$$

where

$$\hat{\alpha} = (\hat{\beta}_2 + \lambda \hat{\beta}_3) \qquad (10.2.4)$$

Applying the usual OLS formula to (10.2.3), we get

$$\hat{\alpha} = (\hat{\beta}_2 + \lambda \hat{\beta}_3) = \frac{\sum x_{2i} y_i}{\sum x_{2i}^2} \qquad (10.2.5)$$

Therefore, although $\alpha$ can be estimated uniquely, there is no way to estimate $\beta_2$ and $\beta_3$ uniquely; mathematically,

$$\hat{\alpha} = \hat{\beta}_2 + \lambda \hat{\beta}_3 \qquad (10.2.6)$$

gives us only one equation in two unknowns (note $\lambda$ is given) and there is an infinity of solutions to (10.2.6) for given values of $\hat{\alpha}$ and $\lambda$.
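As a check on this result, the following sketch (assumed, not part of the text; deviation-form data with hypothetical parameter values) simulates $y$ with $\beta_2 = 1.5$, $\beta_3 = 0.7$, and $\lambda = 2$, regresses $y$ on $x_2$ alone as in (10.2.3), and recovers $\hat{\alpha} \approx \beta_2 + \lambda\beta_3 = 2.9$, while any pair $(\hat{\beta}_2, \hat{\beta}_3)$ satisfying $\hat{\beta}_2 + \lambda\hat{\beta}_3 = \hat{\alpha}$ fits the data equally well.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

beta2, beta3, lam = 1.5, 0.7, 2.0        # hypothetical true values; lambda ties x3 to x2
x2 = rng.normal(size=n)
x3 = lam * x2                            # perfect multicollinearity
y = beta2 * x2 + beta3 * x3 + rng.normal(size=n)

# OLS on x2 alone, as in (10.2.3)/(10.2.5): alpha_hat = sum(x2*y) / sum(x2^2)
alpha_hat = np.sum(x2 * y) / np.sum(x2**2)
print(alpha_hat, beta2 + lam * beta3)    # both close to 2.9

# One of the infinitely many (b2, b3) pairs consistent with alpha_hat:
b3 = 0.0
b2 = alpha_hat - lam * b3
print(b2, b3)                            # fits the data exactly as well as (1.5, 0.7)
```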