Chapter 7
Multiple Regression
A multiple linear regression model is a linear model that describes
how a y-variable relates to two or more x-variables (or transformations of
x-variables).
For example, suppose that a researcher is studying factors that might affect systolic blood pressures for women aged 45 to 65 years old. The response
variable is systolic blood pressure (Y ). Suppose that two predictor variables
of interest are age (X1 ) and body mass index (X2 ). The general structure of
a multiple linear regression model for this situation would be
Y = β0 + β1 X1 + β2 X2 + ε.    (7.1)
The equation β0 + β1 X1 + β2 X2 describes the mean value of blood
pressure for specific values of age and BMI.
The error term (ε) describes the characteristics of the differences between individual values of blood pressure and their expected values of
blood pressure.
One note concerning terminology. A linear model is one that is linear in
the beta coefficients, meaning that each beta coefficient simply multiplies an
x-variable or a transformation of an x-variable. For instance, y = β0 + β1 x +
β2 x² + ε is called a multiple linear regression model even though it describes
a quadratic, curved, relationship between y and a single x-variable.
7.2 Testing Individual Predictors
Within a multiple regression model, we may want to know whether a particular x-variable is making a useful contribution to the model. That is,
given the presence of the other x-variables in the model, does a particular
x-variable help us predict or explain the y-variable? For instance, suppose
that we have three x-variables in the model. The general structure of the
model could be
Y = β0 + β1 X1 + β2 X2 + β3 X3 + ε.    (7.2)
As an example, to determine whether variable X1 is a useful predictor variable
in this model, we could test
H0: β1 = 0
HA: β1 ≠ 0.
If the null hypothesis above were the case, then a change in the value of
X1 would not change Y , so Y and X1 are not related. Also, we would still be
left with variables X2 and X3 being present in the model. When we cannot
reject the null hypothesis above, we should say that we do not need variable
X1 in the model given that variables X2 and X3 will remain in the model.
In general, the interpretation of a slope in multiple regression can be tricky.
Correlations among the predictors can change the slope values dramatically
from what they would be in separate simple regressions.
To carry out the test, statistical software will report p-values for all coefficients in the model. Each p-value will be based on a t-statistic calculated
as
t = (sample coefficient - hypothesized value) / standard error of coefficient.
For the test of H0: β1 = 0, this is

t = (b1 − 0)/s.e.(b1) = b1/s.e.(b1).
Note that the hypothesized value is usually just 0, so this portion of the
formula is often omitted.
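These quantities are available directly from a fitted model object; a short sketch, assuming a fitted lm object named fit:

##########
ctab <- summary(fit)$coefficients          # Estimate, Std. Error, t value, Pr(>|t|)
ctab[, "Estimate"] / ctab[, "Std. Error"]  # reproduces the t value column
##########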
7.3 Examples
Pr(>|t|)
3.83e-06 ***
0.0925   .
1.46e-12 ***
9.69e-06 ***
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1
Figure 7.1: (a) Histogram of the residuals for the heat flux data set. (b) Plot
of the residuals versus the fitted values.
Pr(>|t|)
2.78e-12 ***
1.75e-12 ***
3.00e-05 ***
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1
A 3D scatterplot of the data along with the least squares plane is provided in Figure 7.2. In this 3D plot, observations
above the plane (i.e., observations with positive residuals) are given by green
points and observations below the plane (i.e., observations with negative
residuals) are given by red points. The output for fitting a multiple linear
regression model to this data is below:
Residuals:
    Min      1Q  Median      3Q     Max
-149.95  -34.42  -14.74   11.58  560.38
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  53.3483    11.6908   4.563 1.17e-05 ***
Cr            1.8577     0.2324   7.994 6.66e-13 ***
Co            2.1808     1.7530   1.244    0.216
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1
Residual standard error: 74.76 on 128 degrees of freedom
Multiple R-squared: 0.544,    Adjusted R-squared: 0.5369
F-statistic: 76.36 on 2 and 128 DF, p-value: < 2.2e-16
Note that Co is found to be not statistically significant. However, the scatterplot in Figure 7.2 clearly shows that the data is skewed to the right for
each of the variables (i.e., the bulk of the data is clustered near the lower-end
of values for each variable while there are fewer values as you increase along
a given axis). In fact, a plot of the standardized residuals against the fitted
values (Figure 7.3) indicates that a transformation is needed.
Since the data appears skewed to the right for each of the variables, a
log transformation on Cr INAA, Cr, and Co will be taken. The scatterplot
in Figure 7.4 shows the results of these transformations along with the
new least squares plane. Clearly, the transformation has done a better job
linearizing the relationship. The output for fitting a multiple linear regression
model to this transformed data is below:
Residuals:
    Min      1Q  Median      3Q     Max
-0.8181 -0.2443 -0.0667  0.1748  1.3401
Figure 7.2: 3D scatterplot of the Kola data set with the least squares plane.
Figure 7.3: The standardized residuals versus the fitted values for the raw
Kola data set.
Figure 7.4: 3D scatterplot of the Kola data set where the logarithm of each
variable has been taken.
Figure 7.5: The standardized residuals versus the fitted values for the log-transformed Kola data set.
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  2.65109    0.17630  15.037  < 2e-16 ***
ln_Cr        0.57873    0.08415   6.877 2.42e-10 ***
ln_Co        0.08587    0.09639   0.891    0.375
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1
Residual standard error: 0.3784 on 128 degrees of freedom
Multiple R-squared: 0.5732,    Adjusted R-squared: 0.5665
F-statistic: 85.94 on 2 and 128 DF, p-value: < 2.2e-16
There is also a noted improvement in the plot of the standardized residuals
versus the fitted values (Figure 7.5). Notice that the log transformation of
Co is not statistically significant as it has a high p-value (0.375).
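A sketch of how this fit and the refit without ln_Co might be carried out, assuming a hypothetical data frame kola.df whose columns Cr, Co, and Cr_INAA hold the raw variables:

##########
kola.df$ln_Cr      <- log(kola.df$Cr)
kola.df$ln_Co      <- log(kola.df$Co)
kola.df$ln_Cr_INAA <- log(kola.df$Cr_INAA)
fit.log <- lm(ln_Cr_INAA ~ ln_Cr + ln_Co, data = kola.df)
fit.red <- update(fit.log, . ~ . - ln_Co)  # drop the non-significant predictor
summary(fit.red)
##########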
After omitting the log transformation of Co from our analysis, a simple
linear regression model is fit to the data. Figure 7.6 provides a scatterplot
of the data and a plot of the standardized residuals against the fitted values.
These plots, combined with the following simple linear regression output,
indicate a highly statistically significant relationship between the log transformation of Cr INAA and the log transformation of Cr.
Residuals:
     Min       1Q   Median       3Q      Max
-0.85999 -0.24113 -0.05484  0.17339  1.38702
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  2.60459    0.16826   15.48   <2e-16 ***
ln_Cr        0.63974    0.04887   13.09   <2e-16 ***
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1
Residual standard error: 0.3781 on 129 degrees of freedom
Multiple R-squared: 0.5705,    Adjusted R-squared: 0.5672
F-statistic: 171.4 on 1 and 129 DF, p-value: < 2.2e-16
Figure 7.6: (a) Scatterplot of the Kola data set where the logarithm of
Cr INAA has been regressed on the logarithm of Cr. (b) Plot of the standardized residuals for this simple linear regression fit.
 i   North   Time
 1   16.66   13.20
 2   16.46   14.11
 3   17.66   15.68
 4   17.50   10.53
 5   16.40   11.00
 6   16.28   11.31
 7   16.06   11.96
 8   15.93   12.58
 9   16.60   10.66
10   16.41   10.85
11   16.17   11.41
12   15.92   11.91
13   16.04   12.85
14   16.19   13.58
15   16.62   14.21
16   17.37   15.56
17   18.12   15.83
18   18.53   16.41
19   15.54   13.10
20   15.70   13.63
21   16.45   14.51
22   17.62   15.38
23   18.12   16.10
24   19.05   16.73
25   16.51   10.58
26   16.02   11.28
27   15.89   11.91
28   15.83   12.65
29   16.71   14.06

Table 7.1: The north focal point and time values from the heat flux data set.
  X1    X2    Y  |   X1    X2    Y  |   X1    X2    Y  |   X1    X2    Y
 40.9   6.2  300 |  71.4  11.8  200 |  52.1   7.7  140 |  21.3   4.2  110
 60.7  10.5  270 |  66.3   9    230 |  18     6.1   75 |  73.5   8.5  210
 29.6  10.1  140 |  99.1  16.1  220 |  23.7   9.3   54 |  80.1  18.8  170
 40.9   8.7  120 |  18.6   3.6   93 |  37.7   6.1  110 |  75.4  16.6  790
 27.8   5.2  240 |  30.5   8.6  140 |  16.1   2.7   68 |  32.3   8.7  100
 23.5   3.8  110 |  28.9   5.2  130 |  40     5.5  100 |  19.4   3.9   62
 16.6  15.2   64 |  23     5.3  120 |  38.4  14.4  100 |  20     5.5  300
 29.2   5.1   92 |  44.2   9.1  120 |  23.7   4.9   90 |  48.3   8.2  110
  6.9   2     37 |  27.7   8    100 |  16.4   5.4   82 |  40     6.7  120
 57.1   7.8  250 |  18.1   4.7  100 |  13.4   2.5  100 |  22.5   2.4   95
 50     9.7  190 |  10     3.3   87 |  24     4.1   93 |  31.8  14.7  180
129    30.7  210 |  16.3   3.5   83 |  28.8  15.2  110 |  17.1   6.2  180
106    13.7  220 |  31.6   8.7  130 |  18.4   3.8   63 |  10.9   2     50
 36.5   7.3   81 |  19.5   6.8   90 |   9.4   3.3   47 |  30.6   7.3  110
 66.6  10.5  170 |  25.2   5.5  110 |   5.9   1.6   86 |  52.6  11.4  130
 37.2   9.6  120 |  13.6   2.9  170 |  83.8  16.2  160 |  53.9  11.5  210
 42     9.2  120 |  23.2   6.4   88 | 280    25.2  640 |  88.6  15.5  320
 17.5   3.9   64 |  41.6  10.1  150 |  21.9   5     62 |  25.6   6.8   69
 67.1   8.6  120 |  18     4    120 |  18.5   3.5   92 |  18.9   3.1  110
 10.6   2.5   69 |  37.4   8.7   97 |  26.7   5.1  170 |  16.7   4.5   84
 11.3   2     27 |  32.3   7.2   97 |  50.2   7    340 |  19.6   5.6   86
 44    14.1  130 | 217    10    400 |  30.9  10    120 |  15.1   4.2  110
 34.1  12.1  240 |  16.7   5     49 |  25.5   5.5   61 |  25.1   7    150
 29.3   5.2   87 |  29.8   7.8  160 |  21.4   3.9  140 |   8.4   2     34
 49.2  14    180 |  15.5   2.5   70 |  32.3   6.3  220 |  25.4   6.9  140
118    18.3  330 |  14     3     68 |  31.9   3.7  110 |  18.7   4.4   72
 10.8   3.4   78 |  30     6.9  120 |  28.7   6.7  120 |  21.6   7.6  110
 59.3   9.6  300 |  21.9   9    390 |  36.2   5.3   94 |  24     5.2  110
 20     3.8  110 |  33     5.6   71 |  45.2   3.8   99 |  19.3   4.3  130
 24.4   5.4  120 |  30.8   6.5  110 |  16.3   5.4   59 | 243    24.1  590
 28.6   6.6   76 |  55.5  11.5  130 |  50     7.5  130 |   9.6   2.4   47
 37.9  30.3  130 |  25.9   6.9   96 |  15.8   4.8   79 |  36.4   6    110
 10.2   4.7   46 |  16.5   4.7   63 |  19.3   3.5   54 |
Table 7.2: The subset of the Kola data. Here X1 , X2 , and Y are the variables
Cr, Co, and Cr INAA, respectively.
Chapter 8
Matrix Notation in Regression
There are two main reasons for using matrices in regression. First, the notation simplifies the writing of the model. Secondly, and most importantly,
matrix formulas provide the means by which statistical software calculates
the estimated coefficients and their standard errors, as well as the set of predicted values for the observed sample. If necessary, a review of matrices and
some of their basic properties can be found in Appendix B.
8.1 The Model in Matrix Form

The multiple linear regression model involves four matrices:

1. Y is an n-dimensional column vector listing the values of the response variable:

Y = (Y1  Y2  · · ·  Yn)ᵀ.
2. The X matrix is a matrix in which each row gives the x-variable data
for a different observation. The first column equals 1 for all observations
(unless doing a regression through the origin), and each column after
the first gives the data for one of the x-variables:

X =
[ 1  X_{1,1}  · · ·  X_{1,p-1} ]
[ 1  X_{2,1}  · · ·  X_{2,p-1} ]
[ ⋮     ⋮       ⋱       ⋮     ]
[ 1  X_{n,1}  · · ·  X_{n,p-1} ]
In the subscripting, the first value is the observation number and the
second number is the variable number. The first column is always a
column of 1s. The X matrix has n rows and p columns.
3. β is a p-dimensional column vector listing the regression parameters:

β = (β0  β1  · · ·  β_{p-1})ᵀ.

Notice the subscripts used in numbering the βs. As an example,
for simple linear regression, β = (β0  β1)ᵀ. The β vector will contain
symbols, not numbers, as it gives the population parameters.
4. ε is an n-dimensional column vector listing the errors:

ε = (ε1  ε2  · · ·  εn)ᵀ.

Again, we will not have numerical values for the ε vector.
As an example, suppose that data for a y-variable and two x-variables is
as given in Table 8.1. For the model
yi = β0 + β1 x_{i,1} + β2 x_{i,2} + β3 x_{i,1}x_{i,2} + εi,
the matrices Y, X, β, and ε are as follows:
y:   6  5  10  12  14  18
x1:  1  1   3   5   3   5
x2:  1  2   1   1   2   2

Table 8.1: The data for the multiple regression example.

Y = (6  5  10  12  14  18)ᵀ,

X =
[ 1  1  1   1 ]
[ 1  1  2   2 ]
[ 1  3  1   3 ]
[ 1  5  1   5 ]
[ 1  3  2   6 ]
[ 1  5  2  10 ],

β = (β0  β1  β2  β3)ᵀ,  ε = (ε1  ε2  ε3  ε4  ε5  ε6)ᵀ.
1. Notice that the first column of the X matrix equals 1 for all rows
(observations), the second column gives the values of x_{i,1}, the third
column lists the values of x_{i,2}, and the fourth column gives the values
of the interaction x_{i,1}x_{i,2}.
2. For the theoretical model, we do not know the values of the beta coefficients or the errors. In those two matrices (column vectors) we can
only list the symbols for these items.
3. There is a slight abuse of notation that occurs here which often happens
when writing regression models in matrix notation. I stated earlier how
capital letters are reserved for random variables and lower case letters
are reserved for realizations. In this example, capital letters have been
used for the realizations. There should be no misunderstanding as
it will usually be clear if we are in the context of discussing random
variables or their realizations.
Finally, using calculus rules for matrices, it can be derived that the ordinary least squares estimates of the coefficients are calculated using the
matrix formula

b = (XᵀX)⁻¹Xᵀy,

where b = (b0  b1  · · ·  b_{p-1})ᵀ is the vector that minimizes the least squares criterion

(Y − Ŷ)ᵀ(Y − Ŷ) = (Y − Xb)ᵀ(Y − Xb).

As in the simple linear regression case, these
regression coefficient estimators are unbiased (i.e., E(b) = β). The formula
above is used by statistical software to calculate values of the sample coefficients.
An important theorem in regression analysis (and Statistics in general)
is the Gauss-Markov Theorem, which we alluded to earlier. Since we
have the proper matrix notation in place, we will now formalize this very
important result.
Theorem 1 (Gauss-Markov Theorem) Suppose that we have the linear regression model

Y = Xβ + ε,

where E(εi|X) = 0 and Var(εi|X) = σ² for all i = 1, . . . , n. Then the ordinary least squares estimator

β̂ = b = (XᵀX)⁻¹XᵀY

is the best linear unbiased estimator (BLUE) of β.

With H = X(XᵀX)⁻¹Xᵀ denoting the hat matrix, the fitted values are Ŷ = HY and the residuals are

e = Y − Ŷ
  = Y − HY
  = (I_{n×n} − H)Y.
8.2 Variance-Covariance Matrix of b

The estimated variance-covariance matrix of the vector of estimated regression coefficients is

V̂(b) = MSE (XᵀX)⁻¹.
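A sketch of this computation in R, reusing the small example from Section 8.1 (y, x1, and x2 as defined there):

##########
fit <- lm(y ~ x1 + x2 + x1:x2)
X   <- model.matrix(fit)
MSE <- sum(residuals(fit)^2) / df.residual(fit)
V   <- MSE * solve(t(X) %*% X)                  # MSE (X'X)^{-1}
all.equal(V, vcov(fit), check.attributes = FALSE)  # matches vcov()
##########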
An individual 100×(1−α)% confidence interval for βj is

bj ± t_{n-p; 1-α/2} √(V̂(b)_{j,j}),

where V̂(b)_{j,j} is the jth diagonal element of the estimated variance-covariance
matrix of the sample beta coefficients (i.e., its square root is the estimated standard error of bj).
Furthermore, the Bonferroni joint 100×(1−α)% confidence intervals are:

bj ± t_{n-p; 1-α/(2p)} √(V̂(b)_{j,j}),

for j = 0, 1, 2, . . . , (p−1).
8.3 Statistical Intervals
The statistical intervals for estimating the mean or predicting new observations in the simple linear regression case can easily extend to the multiple
regression case. Here, it is only necessary to present the formulas.
First, let us define the vector of given predictors as

Xh = (1  X_{h,1}  X_{h,2}  · · ·  X_{h,p-1})ᵀ.
We are interested in either intervals for E(Y |X = Xh ) or intervals for the
value of a new response y given that the observation has the particular value
Xh. First we define the standard error of the fit at Xh, given by

s.e.(Ŷh) = √( MSE · Xhᵀ(XᵀX)⁻¹Xh ).
Now, we can give the formulas for the various intervals:
100×(1−α)% Confidence Interval:

Ŷh ± t_{n-p; 1-α/2} s.e.(Ŷh).
100×(1−α)% Confidence Region for the Regression Surface (Working-Hotelling):

Ŷh ± √( p F_{p,n-p; 1-α} ) s.e.(Ŷh).
100×(1−α)% Prediction Interval:

Ŷh ± t_{n-p; 1-α/2} √( MSE/m + [s.e.(Ŷh)]² ),

where m = 1 corresponds to a prediction interval for a new observation at a given Xh and m > 1 corresponds to the mean of m new
observations calculated at the same Xh.
Bonferroni Joint 100×(1−α)% Prediction Intervals:

Ŷ_{h_i} ± t_{n-p; 1-α/(2q)} √( MSE + [s.e.(Ŷ_{h_i})]² ),

for i = 1, 2, . . . , q.
Scheffé Joint 100×(1−α)% Prediction Intervals:

Ŷ_{h_i} ± √( q F_{q,n-p; 1-α} (MSE + [s.e.(Ŷ_{h_i})]²) ),

for i = 1, 2, . . . , q.
[100×(1−α)%]/[100×P%] Tolerance Intervals:

One-Sided Intervals:

(−∞, Ŷh + K_{α,P} √MSE)  and  (Ŷh − K_{α,P} √MSE, +∞).

Two-Sided Interval:

Ŷh ± K_{α/2,P/2} √MSE,

where K_{α/2,P/2} is found similarly as in the simple linear regression setting, but with n as given above and f = n−p, where p is the dimension
of Xh.
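In practice the confidence and prediction intervals come from predict(); a sketch, assuming a fitted model fit and a data frame new.x holding the predictor values Xh:

##########
predict(fit, newdata = new.x, interval = "confidence", level = 0.95)
predict(fit, newdata = new.x, interval = "prediction", level = 0.95)
##########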
8.4 Example
For the heat flux data with the north and south focal points as the two predictors, the estimated variance-covariance matrix of b is

V̂(b) = MSE (XᵀX)⁻¹ = 79.7819 ×
[ 19.6228  0.5521  0.2918 ]
[  0.5521  0.0472  0.0066 ]
[  0.2918  0.0066  0.0113 ]
=
[ 1565.5532  44.0479  23.2797 ]
[   44.0479   3.7657   0.5305 ]
[   23.2797   0.5305   0.9046 ].

Dividing each entry by the product of the corresponding standard deviations yields the correlation matrix of the estimated coefficients:

      [ Var(b0)/√(Var(b0)Var(b0))     Cov(b0,b1)/√(Var(b0)Var(b1))  Cov(b0,b2)/√(Var(b0)Var(b2)) ]
r_b = [ Cov(b1,b0)/√(Var(b1)Var(b0))  Var(b1)/√(Var(b1)Var(b1))     Cov(b1,b2)/√(Var(b1)Var(b2)) ]
      [ Cov(b2,b0)/√(Var(b2)Var(b0))  Cov(b2,b1)/√(Var(b2)Var(b1))  Var(b2)/√(Var(b2)Var(b2))    ]

      [ 1565.5532/√(1565.5532 · 1565.5532)  44.0479/√(1565.5532 · 3.7657)  23.2797/√(1565.5532 · 0.9046) ]
    = [ 44.0479/√(3.7657 · 1565.5532)       3.7657/√(3.7657 · 3.7657)      0.5305/√(3.7657 · 0.9046)     ]
      [ 23.2797/√(0.9046 · 1565.5532)       0.5305/√(0.9046 · 3.7657)      0.9046/√(0.9046 · 0.9046)     ]

      [ 1       0.5737  0.6186 ]
    = [ 0.5737  1       0.2874 ]
      [ 0.6186  0.2874  1      ].
More generally, the correlation matrix of the response and the predictors is

    [ 1                 Corr(Y, X1)        · · ·  Corr(Y, X_{p-1})  ]
r = [ Corr(X1, Y)       1                  · · ·  Corr(X1, X_{p-1}) ]
    [ ⋮                 ⋮                  ⋱      ⋮                 ]
    [ Corr(X_{p-1}, Y)  Corr(X_{p-1}, X1)  · · ·  1                 ]
Note that all of the diagonal entries are 1 because the correlation between
a variable and itself is a perfect (positive) association. This correlation matrix is what most statistical software reports and it does not always report
rb . The interpretation of each entry in r is identical to the Pearson correlation coefficient interpretation presented earlier. Specifically, it provides the
strength and direction of the association between the variables corresponding to the row and column of the respective entry. For this example, the
correlation matrix is:
    [ 1       0.8488  0.1121 ]
r = [ 0.8488  1       0.2874 ]
    [ 0.1121  0.2874  1      ].
We can also calculate the 95% confidence intervals for the regression coefficients. First note that t_{26; 0.975} = 2.0555. The 95% confidence interval for
β1 is calculated using −24.2150 ± 2.0555√3.7657 and for β2 it is calculated
using 4.7963 ± 2.0555√0.9046. Thus, we are 95% confident that the true
population regression coefficients for the north and south focal points are
between (−28.2039, −20.2262) and (2.8413, 6.7513), respectively.
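These quantities can all be reproduced from a fitted model object; a sketch, assuming the two-predictor heat flux fit is stored in fit:

##########
V  <- vcov(fit)              # estimated variance-covariance matrix of b
rb <- cov2cor(V)             # correlation matrix of the coefficient estimates
confint(fit, level = 0.95)   # individual 95% confidence intervals
##########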
Chapter 9
Indicator Variables
We next discuss how to include categorical predictor variables in a regression
model. A categorical variable is a variable for which the possible outcomes are nameable characteristics, groups or treatments. Some examples
are gender (male or female), highest educational degree attained (secondary
school, college undergraduate, college graduate), blood pressure medication
used (drug 1, drug 2, drug 3), etc.
We use indicator variables to incorporate a categorical x-variable into a
regression model. An indicator variable equals 1 when an observation is in
a particular group and equals 0 when an observation is not in that group. An
interaction between an indicator variable and a quantitative variable exists
if the slope between the response and the quantitative variable depends upon
the specific value present for the indicator variable.
9.1 Indicator Variables for a Categorical Predictor

For the blood pressure example, suppose the three treatments are coded by indicator variables x_{i,3}, x_{i,4}, and x_{i,5} (one per treatment), in addition to age (x_{i,1}) and body mass (x_{i,2}). Using all three indicators gives the model

yi = β0 + β1 x_{i,1} + β2 x_{i,2} + β3 x_{i,3} + β4 x_{i,4} + β5 x_{i,5} + εi.    (9.1)
The difficulty with this model is that the X matrix has a linear dependency,
so we can't estimate the individual coefficients (technically, this is because
there will be an infinite number of solutions for the betas). The dependency
stems from the fact that X_{i,3} + X_{i,4} + X_{i,5} = 1 for all observations because
each patient uses one (and only one) of the treatments. In the X matrix, the
linear dependency is that the sum of the last three columns will equal the
first column (all 1s). This scenario leads to what is called collinearity and
we investigate this in the next chapter.
One solution (there are others) for avoiding this difficulty is the leave
one out method. The leave one out method has the general rule that
whenever a categorical predictor variable has k categories, it is possible to
define k indicator variables, but we should only use k 1 of them to describe
the differences among the k categories. For the overall fit of the model, it
does not matter which set of k 1 indicators we use. The choice of which
k 1 indicator variables we use, however, does affect the interpretation of
the coefficients that multiply the specific indicators in the model.
In our example with three treatments (and three possible indicator variables), we might leave out the third indicator giving us this model:
yi = β0 + β1 x_{i,1} + β2 x_{i,2} + β3 x_{i,3} + β4 x_{i,4} + εi.    (9.2)
For the overall fit of the model, it would work equally well to leave out
the first indicator and include the other two or to leave out the second and
include the first and third.
9.2 Coefficient Interpretations

9.3 Testing Overall Treatment Differences
More technically, the null hypothesis is that the coefficients multiplying the
indicators all equal 0.
For our example with three treatments of high blood pressure and additional x-variables age and body mass, the details for doing an overall test of
treatment differences are:
Full model is: yi = β0 + β1 x_{i,1} + β2 x_{i,2} + β3 x_{i,3} + β4 x_{i,4} + εi.
Null hypothesis is: H0: β3 = β4 = 0.
Reduced model is: yi = β0 + β1 x_{i,1} + β2 x_{i,2} + εi.
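This comparison can be carried out in R with anova(), which performs the general linear F-test; a sketch with hypothetical variable names in a data frame bp.df:

##########
full <- lm(y ~ x1 + x2 + x3 + x4, data = bp.df)  # age, body mass, 2 indicators
red  <- lm(y ~ x1 + x2, data = bp.df)
anova(red, full)   # F-test of H0: beta3 = beta4 = 0
##########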
9.4 Interactions
To examine a possible interaction between a categorical predictor and a quantitative predictor, include product variables between each indicator and the
quantitative variable.
As an example, suppose we thought there could be an interaction between the body mass variable (X2 ) and the treatment variable. This would
mean that we thought that treatment differences in blood pressure reduction
depend on the specific value of body mass. The model we would use is:
yi = β0 + β1 x_{i,1} + β2 x_{i,2} + β3 x_{i,3} + β4 x_{i,4} + β5 x_{i,2}x_{i,3} + β6 x_{i,2}x_{i,4} + εi.
To test whether there is an interaction, the null hypothesis is H0: β5 =
β6 = 0. We would use the general linear F-test procedure to carry out the
test. The full model is the interaction model given above. The
reduced model is now:

yi = β0 + β1 x_{i,1} + β2 x_{i,2} + β3 x_{i,3} + β4 x_{i,4} + εi.
A visual way to assess if there is an interaction is by using an interaction
plot. An interaction plot is created by plotting the response versus the
quantitative predictor and connecting the successive values according to the
grouping of the observations. Recall that an interaction between factors
occurs when the change in the response across the levels of one factor is
not the same across the levels of another factor. Interaction plots allow us
to compare the relative strength of the effects across factors. What results
is one of three possible trends: no interaction (nearly parallel lines), a
disordinal interaction (lines that intersect), or an ordinal interaction (lines
that do not intersect within the observed range of the predictor), as
illustrated in Figure 9.1.
9.5 Relationship to ANCOVA
Figure 9.1: (a) A plot of no interactions amongst the groups (notice how the
lines are nearly parallel). (b) A plot of a disordinal interaction amongst the
groups (notice how the lines intersect). (c) A plot of an ordinal interaction
amongst the groups (notice how the lines don't intersect, but if we were to
extrapolate beyond the predictor limits, then the lines would likely cross).
An ANCOVA model is essentially a regression model with indicator variables and, possibly, with interactions (if we are interested in testing for interactions with indicator variables and other variables). However, in the design and analysis
of experiments literature, this model is also used, but with a slightly different motivation. Various experimental layouts using ANOVA tables are
commonly used in the design and analysis of experiments. These ANOVA
tables are constructed to compare the means of several levels of one or more
treatments. For example, a one-way ANOVA can be used to compare six
different dosages of blood pressure pills and the mean blood pressure of individuals who are taking one of those six dosages. In this case, there is one
factor with six different levels. Suppose further that there are four different
races represented in this study. Then a two-way ANOVA can be used since
we have two factors - the dosage of the pill and the race of the individual
taking the pill. Furthermore, an interaction term can be included if we suspect that the dosage a person is taking and the race of the individual have
a combined effect on the response. As you can see, you can extend to the
more general n-way ANOVA (with or without interactions) for the setting
with n treatments. However, dealing with n > 2 can often lead to difficulty
in interpreting the results.
One other important thing to point out with ANOVA models is that,
while they use least squares for estimation, they differ from how categorical
variables are handled in a regression model. In an ANOVA model, there is
a parameter estimated for the factor level means and these are used for the
linear model of the ANOVA. This differs slightly from a regression model
which estimates a regression coefficient for, say, n − 1 indicator variables
(assuming there are n levels of the categorical variable and we are using
the leave-one-out method). Also, ANOVA models utilize ANOVA tables,
which are broken down by each factor (i.e., you would look at the sums of
squares for each factor present). ANOVA tables for regression models simply
test if the regression model has at least one variable which is a significant
predictor of the response. More details on these differences are better left to
a course on design of experiments.
When there is also a continuous variable measured with each response,
then the n-way ANOVA model needs to reflect the continuous variable. This
model is then referred to as an Analysis of Covariance (or ANCOVA)
model. The continuous variable in an ANCOVA model is usually called the
covariate or sometimes the concomitant variable. One difference in how
an ANCOVA model is approached is that an interaction between the covariate and each factor is always tested first: a significant interaction means the covariate's slope is not the same in every group, which violates the usual ANCOVA assumption of parallel slopes and makes comparisons of the adjusted group means misleading.
9.6 Coded Variables
In the early days when computing power was limited, coding of the variables
simplified the linear algebra and thus allowed least squares
solutions to be computed manually. Many methods exist for coding data, such
as:
Converting variables to two values (e.g., {−1, 1} or {0, 1}).
Converting variables to three values (e.g., {-1, 0, 1}).
Coding continuous variables to reflect only important digits (e.g., if
the costs of various nuclear programs range from $100,000 to $150,000,
the costs could be recorded in thousands of dollars, as values from 100
to 150).
The purpose of coding is to simplify the calculation of (XᵀX)⁻¹ in the various
regression equations, which was especially important when this had to be
done by hand. It is important to note that the above methods are just a few
possibilities and that there are no specific guidelines or rules of thumb for
when to code data.
Today, when (XᵀX)⁻¹ is calculated with computers, there may be significant rounding error in the linear algebra manipulations if the difference
in the magnitude of the predictors is large. Good statistical programs assess
the probability of such errors, which would warrant using coded variables.
When coding variables, one should be aware of different magnitudes of the
parameter estimates compared to those for the original data. The intercept
term can change dramatically, but we are concerned with any drastic changes
in the slope estimates. In order to protect against additional errors due to
the varying magnitudes of the regression parameters, you can compare plots
of the actual data and the coded data and see if they appear similar.
9.7 Examples
##########
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -23.8112    20.4315  -1.165    0.261
subprograms   0.8541     0.1066   8.012 5.44e-07 ***
institution  35.3686    26.7086   1.324    0.204
sub.inst     -0.2019     0.1556  -1.297    0.213
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1

Residual standard error: 38.42 on 16 degrees of freedom
Multiple R-Squared: 0.8616, Adjusted R-squared: 0.8356
F-statistic: 33.2 on 3 and 16 DF, p-value: 4.210e-07
##########
The above gives the t-tests for these predictors. Notice that only the predictor
of application subprograms (i.e., X1 ) is statistically significant, so we should
consider dropping the interaction term for starters.
[Figure: scatterplot of the number of man-years versus the number of subprograms, with separate symbols for academic institutions and private firms.]
Institution  Man-Years
     0           52
     1           58
     0          207
     1           95
     0          346
     1          244
     0          215
     0          112
     1          195
     0           54
     0           48
     1           39
     0           31
     1           57
     1           20
     1           33
     1           19
     0            6
     0            7
     1           56
Dropping the non-significant terms leaves the number of subprograms as the lone predictor, and the estimated regression equation is

ŷi = −3.47742 + 0.75088x_{i,1},

which can be found from the following output:
##########
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.47742   13.12068  -0.265    0.794
subprograms  0.75088    0.07591   9.892 1.06e-08 ***
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1

Residual standard error: 38.38 on 18 degrees of freedom
Multiple R-Squared: 0.8446, Adjusted R-squared: 0.836
F-statistic: 97.85 on 1 and 18 DF, p-value: 1.055e-08
##########
Example 2: Steam Output Data (continued)
Consider coding the steam output data by rounding the temperature to the
nearest integer value ending in either 0 or 5. For example, a temperature of
57.5 degrees would be rounded up to 60 degrees while a temperature of 76.8
degrees would be rounded down to 75 degrees. While you would probably
not utilize coding on such an easy data set where magnitude is not an issue,
it is utilized here just for illustrative purposes.
Figure 9.3 compares the scatterplots of this data set with the original
temperature value and the coded temperature value. The plots look comparable, suggesting that coding could be used here. Recall that the estimated
regression equation for the original data was ŷi = 13.6230 − 0.0798xi. The
estimated regression equation for the coded data is ŷi = 13.7765 − 0.0824xi,
which is also comparable.
[Figure 9.3: scatterplots of the steam data using (a) the original temperature values and (b) the coded temperature values.]
Chapter 10
Multicollinearity
Recall that the columns of a matrix are linearly dependent if one column
can be expressed as a linear combination of the other columns. A matrix
theorem states that if there is a linear dependence among the columns of X, then
(XᵀX)⁻¹ does not exist. This means that we can't determine estimates of
the beta coefficients, since the formula for determining the estimates involves
(XᵀX)⁻¹.
In multiple regression, the term multicollinearity refers to the linear
relationships among the x-variables. Often, the use of this term implies that
the x-variables are correlated with each other, so when the x-variables are not
correlated with each other, we might say that there is no multicollinearity.
10.1 Sources of Multicollinearity
There are various sources for multicollinearity. For example, in the data
collection phase an investigator may have drawn the data from such a narrow
subspace of the independent variables that collinearity appears. Physical
constraints, such as design limits, may also impact the range of some of these
independent variables. Model specification (such as defining more variables
than observations or specifying too many higher-ordered terms/interactions)
and outliers can both lead to collinearity.
When there is no multicollinearity among x-variables, the effects of the
individual x-variables can be estimated independently of each other (although
we will still want to do a multiple regression). When multicollinearity is
present, the estimated coefficients are correlated (confounded) with each other.
10.2 The Correlation Transformation
A standardized version of the regression variables is given by

X* = (1/√(n−1)) ×
[ (X_{1,1} − X̄1)/s_{X1}    · · ·  (X_{1,p-1} − X̄_{p-1})/s_{X_{p-1}} ]
[          ⋮                ⋱                ⋮                      ]
[ (X_{n,1} − X̄1)/s_{X1}    · · ·  (X_{n,p-1} − X̄_{p-1})/s_{X_{p-1}} ]

and

Y* = (1/√(n−1)) ×
[ (Y1 − Ȳ)/s_Y ]
[ (Y2 − Ȳ)/s_Y ]
[      ⋮      ]
[ (Yn − Ȳ)/s_Y ],
where

s_{Xj} = √( Σ_{i=1}^n (X_{i,j} − X̄j)² / (n−1) )

for j = 1, 2, . . . , (p−1) and

s_Y = √( Σ_{i=1}^n (Yi − Ȳ)² / (n−1) ).
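This transformation is straightforward with scale(); a sketch, assuming y is a numeric response vector and X a numeric matrix of predictors:

##########
n      <- nrow(X)
X.star <- scale(X) / sqrt(n - 1)   # columns centered and scaled by their sd
Y.star <- scale(y) / sqrt(n - 1)
crossprod(X.star)                  # equals the correlation matrix of the predictors
##########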
For example, one such technique involves taking the squared elements of an eigenvector relative to the
corresponding squared eigenvalue and then seeing what percentage each quantity in this (p−1)-dimensional
vector explains of the total variation for the corresponding regression coefficient.
Care is also needed in deciding how such observations are handled. You can typically use some of the residual
diagnostic measures (e.g., DFFITS, Cook's Di, DFBETAS, etc.) for identifying potential collinearity-influential observations, since there is no established
or agreed-upon method for classifying such observations.
Finally, there are also some more advanced regression procedures that
can be performed in the presence of multicollinearity. Such methods include
principal components regression and ridge regression. These methods are
discussed later.
10.3 Examples
Consider a small data set on six subjects, with two numeric variables and each subject's gender:

60 50 70 42 50 45
40 45 43 60 60 65
 M  F  M  F  M  F

If we include an intercept along with indicator variables for both male and female, the X matrix is

    [ 1  1  0 ]
    [ 1  0  1 ]
X = [ 1  1  0 ]
    [ 1  0  1 ]
    [ 1  1  0 ]
    [ 1  0  1 ]
The sum of the last two columns equals the first column for every row in the
X matrix. This is a linear dependence, so parameter estimates cannot be
calculated because (XᵀX)⁻¹ does not exist. In practice, the usual solution
is to drop one of the indicator variables from the model. Another solution is
to drop the intercept (thus dropping the first column of X above), but that
is not usually done.
For this example, we can't proceed with a multiple regression analysis because there is perfect collinearity with X2 and X3. Sometimes a generalized
inverse can be used (which requires more of a discussion beyond the scope
of this course), or, if you attempt to do an analysis on such a data set, the
software you are using may zero out one of the variables that is contributing
to the collinearity and then proceed to do an analysis. However, this can
lead to errors in the final analysis.
Example 2: Heat Flux Data Set (continued)
Let us return to the heat flux data set. Let our model include the east,
south, and north focal points, but also incorporate time and insolation as
predictors. First, let us run a multiple regression analysis which includes
these predictors.
##########
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 325.43612   96.12721   3.385  0.00255 **
east          2.55198    1.24824   2.044  0.05252 .
north       -22.94947    2.70360  -8.488 1.53e-08 ***
south         3.80019    1.46114   2.601  0.01598 *
time          2.41748    1.80829   1.337  0.19433
insolation    0.06753    0.02899   2.329  0.02900 *
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1

Residual standard error: 8.039 on 23 degrees of freedom
Multiple R-Squared: 0.8988, Adjusted R-squared: 0.8768
F-statistic: 40.84 on 5 and 23 DF, p-value: 1.077e-10
##########
We see that time is not a statistically significant predictor of heat flux and,
in fact, east has become marginally significant.
The variance inflation factors for this model are:

##########
     north      south       time insolation
  2.612066   3.175970   5.370059   2.319035
##########
Notice that the VIF for time is fairly high (about 5.37). This is a somewhat
high value and should be investigated further. So next, the pairwise scatterplots are given in Figure 10.1 (we will only look at the plots involving
the time variable since that is the variable we are investigating). Notice how
there appears to be a noticeable linear trend between time and the south
focal point. There also appears to be some sort of curvilinear trend between
time and the north focal point as well as between time and insolation. These
plots, combined with the VIF for time, suggest looking at a model without
the time variable.
After removing the time variable, we obtain the new VIF values:
##########
      east      north      south insolation
  1.277792   1.942421   1.206057   1.925791
##########
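VIFs like these are typically obtained with the vif() function in the car package; a sketch (the data frame name is hypothetical):

##########
library(car)
fit <- lm(flux ~ east + north + south + insolation, data = flux.df)
vif(fit)   # one VIF per predictor
##########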
Notice how removal of the time variable has sharply decreased the VIF
values for the other variables. The regression coefficient estimates are:
##########
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 270.21013   88.21060   3.063  0.00534 **
east          2.95141    1.23167   2.396  0.02471 *
north       -21.11940    2.36936  -8.914 4.42e-09 ***
south         5.33861    0.91506   5.834 5.13e-06 ***
insolation    0.05156    0.02685   1.920  0.06676 .
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1

Residual standard error: 8.17 on 24 degrees of freedom
Multiple R-Squared: 0.8909, Adjusted R-squared: 0.8727
F-statistic: 48.99 on 4 and 24 DF, p-value: 3.327e-11
##########
Figure 10.1: Pairwise scatterplots of the variable time versus (a) east, (b)
north, (c) south, and (d) insolation. Notice how there appears to be some sort
of relationship between time and the predictors north, south, and insolation.
134
So notice how now east is statistically significant while insolation is marginally significant. If we proceeded to drop insolation from the model, then
we would be back to the analysis we did earlier in the chapter. This illustrates
how dropping or adding a predictor to a model can change the significance of
other predictors. We will return to this when we discuss stepwise regression.
Chapter 11
ANOVA II
As in simple regression, the ANOVA table for a multiple regression model displays quantities that measure how much of the variability in the y-variable is
explained and how much is not explained by the x-variables. The calculation
of the quantities involved is nearly identical to what we did in simple regression. The main difference has to do with the degrees of freedom quantities.
The basic structure is given in Table 11.1 and the explanations follow.
Source       df     SS                          MS     F
Regression   p−1    SSR = Σ_{i=1}^n (ŷi − ȳ)²   MSR    MSR/MSE
Error        n−p    SSE = Σ_{i=1}^n (yi − ŷi)²  MSE
Total        n−1    SSTO = Σ_{i=1}^n (yi − ȳ)²

Table 11.1: ANOVA table for multiple linear regression.

The sum of squares for total is SSTO = Σ_{i=1}^n (yi − ȳ)², which is the
sum of squared deviations from the overall mean of y, and df_T = n − 1.
SSTO is a measure of the overall variation in the y-values. In matrix
notation, SSTO = ||Y − Ȳ1||².
The sum of squared errors is SSE = Σ_{i=1}^n (yi − ŷi)², which is the sum
of squared observed errors (residuals) for the observed data. SSE is a
measure of the variation in y that is not explained by the regression.
For multiple linear regression, df_E = n − p, where p = the number of beta
coefficients in the model (including the intercept β0). As an example,
a model with two x-variables has p = 3 coefficients, so df_E = n − 3.
The mean square error is

MSE = SSE/df_E = SSE/(n − p),

which estimates σ². Similarly, the mean square for regression is

MSR = SSR/df_R = SSR/(p − 1).

11.1 Using the ANOVA Table
1. The F-statistic in the ANOVA given in Table 11.1 can be used to test
whether the y-variable is related to one or more of the x-variables in
the model. Specifically, F = MSR/MSE is a test statistic for

H0: β1 = β2 = . . . = β_{p-1} = 0
HA: at least one βi ≠ 0 for i = 1, . . . , p−1.

The null hypothesis means that the y-variable is not related to any of
the x-variables in the model. The alternative hypothesis means that
the y-variable is related to one or more of the x-variables in the model.
Statistical software will report a p-value for this test statistic. The p-value is calculated as the probability to the right of the calculated value
of F in an F distribution with p−1 and n−p degrees of freedom (the critical value is often
written as F_{p-1,n-p; 1-α}). The usual decision rule also applies here in
that if the p-value < 0.05, we reject the null hypothesis. If that is our decision,
we conclude that y is related to at least one of the x-variables in the
model.
2. MSE is the estimate of the error variance σ². Thus s = √MSE estimates the standard deviation of the errors.
3. As in simple regression, R² = (SSTO − SSE)/SSTO, but here it is called the coefficient of multiple determination. R² is interpreted as the proportion of variation in the y-variable that is explained by the x-variables in the model.

11.2 The General Linear F-Test
The general linear F-test procedure is used to test any null hypothesis
that, if true, still leaves us with a linear model (linear in the βs). The most
common application is to test whether a particular set of coefficients are
all equal to 0. As an example, suppose we have a response variable (Y) and 5
predictor variables (X1, X2, . . . , X5). Then, we might wish to test

H0: β1 = β3 = β4 = 0
HA: at least one of {β1, β3, β4} ≠ 0.

The purpose of testing a hypothesis like this is to determine if we could
eliminate variables X1, X3, and X4 from a multiple regression model (an
action implied by the statistical truth of the null hypothesis).
The full model is the multiple regression model that includes all variables
under consideration. The reduced model is the regression model that would
result if the null hypothesis is true. The general linear F-statistic is

F = [ (SSE(reduced) − SSE(full)) / (df_E(reduced) − df_E(full)) ] / MSE(full).

Here, this F-statistic has degrees of freedom df1 = df_E(reduced) − df_E(full)
and df2 = df_E(full). With the rejection region approach and a 0.05 significance level, we reject H0 if the calculated F is greater than the tabled value
F_{df1,df2; 1-α}, which is the 95th percentile of the appropriate F_{df1,df2}-distribution.
A p-value is found as the probability that the F-statistic would be as large
or larger than the calculated F.
To summarize, the general linear F-test is used in settings where there
are many predictors and it is desirable to see if only one or a few of the
predictors can adequately perform the task of estimating the mean response
and prediction of new observations. Sometimes the full and reduced sums of
squares (that we introduced above) are referred to as extra sums of squares.
11.3 Extra Sums of Squares
The extra sums of squares measure the marginal reduction in the SSE
when one or more predictor variables are added to the regression model, given
that other predictors are already in the model. In probability theory, we write
A|B, which means that event A happens GIVEN that event B happens (the
vertical bar means "given"). We also utilize this notation when writing extra
sums of squares. For example, suppose we are considering two predictors,
X1 and X2. The SSE when both variables are in the model is smaller than
when only one of the predictors is in the model. This is because when both
variables are in the model, they both explain additional variability in Y,
which drives down the SSE compared to when only one of the variables is
in the model. This difference is what we call the extra sums of squares. For
example,

SSR(X1|X2) = SSE(X2) − SSE(X1, X2),

which measures the marginal effect of adding X1 to the model, given that
X2 is already in the model. An equivalent expression is to write

SSR(X1|X2) = SSR(X1, X2) − SSR(X2),

which can be viewed as the marginal increase in the regression sum of squares.
Notice (for the second formulation) that the corresponding degrees of freedom
is (3−1)−(2−1) = 1 (because the df for SSR(X1, X2) is (3−1) = 2 and the df for
SSR(X2) is (2−1) = 1). Thus,

MSR(X1|X2) = SSR(X1|X2)/1.
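These quantities are what R's anova() reports as sequential (Type I) sums of squares; a sketch with hypothetical variables x1 and x2:

##########
anova(lm(y ~ x2 + x1))
## Row "x2" gives SSR(X2); row "x1" gives SSR(X1 | X2);
## row "Residuals" gives SSE(X1, X2).
##########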
When more predictors are available, then there is a vast array of possible decompositions of the SSR into extra sums of squares. One generic
formulation is that if you have p predictors, then

SSR(X1, . . . , Xj, . . . , Xp) = SSR(Xj) + SSR(X1|Xj) + . . .
                              + SSR(X_{j-1}|X1, . . . , X_{j-2}, Xj)
                              + SSR(X_{j+1}|X1, . . . , Xj) + . . .
                              + SSR(Xp|X1, . . . , X_{p-1}).

In the above, j is just being used to indicate any one of the p predictors.
You can also calculate the marginal increase in the regression sum of squares
when two or more predictors are added at a time.
11.4 Lack of Fit Testing
Formal lack of fit testing can also be performed in the multiple regression
setting; however, achieving replicates can be more difficult as more
predictors are added to the model. Note that the corresponding ANOVA
table (Table 11.2) is similar to that introduced for the simple linear regression
setting. However, now we have the notion of p regression parameters and the
number of replicates (m) refers to the number of unique X vectors. In other
words, each predictor must have the same value for two observations for it
to be considered a replicate. For example, suppose we have 3 predictors for
our model. The observations (40, 10, 12) and (40, 10, 7) are unique levels
for our X vectors, whereas the observations (10, 5, 13) and (10, 5, 13) would
constitute a replicate.
Formal lack of fit testing in multiple regression can be difficult due to
sparse data, unless the experiment was designed properly to achieve replicates. However, other methods can be employed for lack of fit testing when
you do not have replicates. Such methods involve data subsetting. The basic approach is to establish criteria by introducing indicator variables, which
in turn creates coded variables (as discussed earlier). By coding the variables,
you can artificially create replicates and then you can proceed with lack of fit
testing. Another approach with data subsetting is to look at central regions
of the data (i.e., observations where the leverage is less than 1.1·p/n) and
treat this as a reduced data set. Then compare this reduced fit to the full fit
(i.e., the fit with all of the data), for which the formulas for a lack of fit test
can be employed. Be forewarned that these methods should only be used as
exploratory methods, and they are heavily dependent on what sort of data
subsetting method is used.

Source         df     SS      MS      F
Regression     p−1    SSR     MSR     MSR/MSE
Error          n−p    SSE     MSE
  Lack of Fit  m−p    SSLOF   MSLOF   MSLOF/MSPE
  Pure Error   n−m    SSPE    MSPE
Total          n−1    SSTO

Table 11.2: ANOVA table for multiple linear regression which includes a lack
of fit test.
11.5 Partial R²
For example, suppose X1 is already in the model and we ask how much X2 and X3 add to it. The corresponding partial R² is as follows:

R²_{Y,2,3|1} = SSR(X2, X3|X1)/SSE(X1)
            = (SSE(X1) − SSE(X1, X2, X3))/SSE(X1).

More generally, for a set of predictors B added to a model already containing the set A,

R²_{Y,B|A} = (SSE(reduced) − SSE(full))/SSE(reduced).
These partial R² values can also be used to calculate the power for the
corresponding general linear F-test. The power of this test is calculated by
first finding the tabled 100×(1−α)th percentile of the F_{u,n-k-1}-distribution.
Next we calculate F_{u,n-k-1; 1-α}(λ), which is the 100×(1−α)th percentile
of a non-central F_{u,n-k-1}-distribution with non-centrality parameter λ. The
non-centrality parameter is calculated as:

λ = n (R²_{Y,A,B} − R²_{Y,B}) / (1 − R²_{Y,A,B}).

Finally, the power is simply the probability that an F_{u,n-k-1}(λ) random
variable exceeds the tabled F_{u,n-k-1; 1-α} value.
11.6 Partial Leverage and Partial Residual Plots

Note that you can produce p−1 partial leverage regression plots (i.e., one for each
predictor). We can also
use another type of residual to check the assumption of linearity for each
predictor. Partial residuals are residuals that have not been adjusted for a
particular predictor variable (say, Xj). Suppose we partition the X matrix
such that X = (X_{-j}, Xj). For this formulation, X_{-j} is the same as the
X matrix, but with the vector of observations for the predictor Xj omitted
(i.e., this vector of values is Xj). Similarly, let us partition the vector of
estimated regression coefficients as b = (b_{-j}ᵀ, bj)ᵀ. Then, the set of partial
residuals for the predictor Xj would be

ej = Y − X_{-j}b_{-j}
   = Y − Ŷ + Ŷ − X_{-j}b_{-j}
   = e + (Xb − X_{-j}b_{-j})
   = e + bj Xj.
Note in the above that bj is just a univariate quantity, so bj Xj is still an
n-dimensional vector. Finally, a plot of ej versus Xj has slope bj. The more
the data deviate from a straight-line fit in this plot (which is sometimes
called a component plus residual plot), the greater the evidence that a
higher-ordered term or transformation of this predictor variable is necessary.
Note also that the vector e would provide the residuals if a straight-line fit
were made to these data.
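A component plus residual plot is easy to construct by hand; a sketch for a single predictor, with hypothetical names (car::crPlots() automates this for every predictor):

##########
xj <- model.frame(fit)[["xj"]]               # values of the predictor Xj
ej <- residuals(fit) + coef(fit)["xj"] * xj  # partial residuals e + bj*Xj
plot(xj, ej, xlab = "Xj", ylab = "Partial residual")
abline(0, coef(fit)["xj"])                   # reference line with slope bj
##########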
11.7 Examples
##########
Analysis of Variance Table

Response: flux
           Df  Sum Sq Mean Sq F value    Pr(>F)
Regression  5 13195.5  2639.1  40.837 1.077e-10 ***
Residuals  23  1486.4    64.6
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1
##########
Now, using the results from earlier (which seem to indicate a model including
only the north and south focal points as predictors) let us test the following
hypothesis:
H0: β1 = β2 = β5 = 0
HA: at least one of {β1, β2, β5} ≠ 0.
In other words, we only want our model to include the south (x3) and north
(x4) focal points. We see that MSE(full) = 64.63, SSE(full) = 1486.40, and
df_E(full) = 23.
Next we calculate the ANOVA table for the above null hypothesis:

##########
Analysis of Variance Table

Response: flux
           Df  Sum Sq Mean Sq F value    Pr(>F)
Regression  2 12607.6  6303.8  79.013 8.938e-12 ***
Residuals  26  2074.3    79.8
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1
##########
The ANOVA for this analysis shows that SSE(reduced) = 2074.33 and
df_E(reduced) = 26. Thus, the F-statistic is:

F = ((2074.33 − 1486.40)/(26 − 23)) / 64.63 = 3.027998,
which follows an F_{3,23} distribution. The p-value (i.e., the probability of getting
an F-statistic as extreme or more extreme than 3.03 under an F_{3,23} distribution) is 0.0499. Thus we just barely claim statistical significance and conclude that at least one of the other predictors (insolation, east focal point,
and time) is a statistically significant predictor of heat flux.
We can also calculate the power of this F-test by using the partial R²
values. Specifically,

R²_{Y,1,2,3,4,5} = 0.8987602
R²_{Y,3,4} = 0.8587154,

so that

R²_{Y,1,2,5|3,4} = (0.8987602 − 0.8587154)/(1 − 0.8587154) = 0.2834

and

λ = 29 × (0.8987602 − 0.8587154)/(1 − 0.8987602) = 11.47078.

This means that insolation, the east focal point, and time explain about
28.34% of the variation in heat flux that could not be explained by the north
and south focal points.
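The power itself can then be computed from the noncentral F distribution; a short sketch for this example (u = 3 and n − p = 23 here):

##########
crit <- qf(0.95, df1 = 3, df2 = 23)              # tabled F value
1 - pf(crit, df1 = 3, df2 = 23, ncp = 11.47078)  # power of the test
##########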
Example 2: Simulated Data for Partial Leverage Plots
Suppose we have a response variable, Y , and two predictors, X1 and X2 . Let
us consider three settings:
1. Y is a function of X1;
2. Y is a function of X1 and X2; and
3. Y is a function of X1, X2, and X2².
Setting (3) is a quadratic regression model, which falls under the polynomial
regression framework that we discuss in greater detail later. Figure 11.1
shows the partial leverage regression plots for X1 and X2 for each of these
three settings. In Figure 11.1(a), we see how the plot indicates that there
is a strong linear relationship between Y and X1 when X2 is in the model,
but this is not the case between Y and X2 when X1 is in the model (Figure
11.1(b)). Figures 11.1(c) and 11.1(d) show that there is a strong linear
relationship between Y and X1 when X2 is in the model as well as between
Y and X2 when X1 is in the model. Finally, Figure 11.1(e) shows that there
is a linear relationship between Y and X1 when X2 is in the model, but there
is an indication of a quadratic (i.e., curvilinear) relationship between Y and
X2 when X1 is in the model (Figure 11.1(f)).
Figure 11.1: Scatterplots of (a) rY[1] versus rX[1] and (b) rY[2] versus rX[2] for
setting (1), (c) rY[1] versus rX[1] and (d) rY[2] versus rX[2] for setting (2), and
(e) rY[1] versus rX[1] and (f) rY[2] versus rX[2] for setting (3).