
Kutner: What to Read, Scan, and Skip

Ch 1 Linear Regression with One Predictor

Table 1.1 (p19): Notice the various sums of squares (aka SS), e.g., SSxy = 70,690. You also have x̄ = 70, and if you wanted the variance of x, that would be s²x = 19,800/(25 − 1) = 825. Note that if you know the variance (or std dev) of x, then you can find SSxx = Σ(xᵢ − x̄)², since SSxx = (n − 1)s²x. A quick numeric check is sketched below.

1.8 (p27): Skip the method of maximum likelihood (pp. 27-32).
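A minimal numpy sketch of those relationships. The 25 lot sizes below are illustrative values chosen to reproduce x̄ = 70 and SSxx = 19,800; verify them against Table 1.1 itself.

```python
import numpy as np

# Illustrative lot sizes (n = 25) chosen to match the summary numbers
# quoted above (x-bar = 70, SSxx = 19,800); check them against Table 1.1.
x = np.array([80, 30, 50, 90, 70, 60, 120, 80, 100, 50, 40, 70, 90,
              20, 110, 100, 30, 50, 90, 110, 30, 90, 40, 80, 70])

n = x.size
xbar = x.mean()                     # 70.0
ssxx = np.sum((x - xbar) ** 2)      # 19,800
s2x = ssxx / (n - 1)                # 825.0, the sample variance of x

# Going the other way: the variance of x recovers SSxx.
assert np.isclose(ssxx, (n - 1) * x.var(ddof=1))
```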

Ch 2 Inferences in Regression and Correlation Analysis
Read all, except: Skip 2.2 Inferences Concerning the y-Intercept (p48-49). Scan 2.3 Some Observations; 2.4 Interval Estimation of E(Y | X = x0), i.e., the mean response for all observations at x0; 2.5 Prediction Interval for a new observation with X = x0; Confidence Bands, especially Figure 2.6; and the General Linear Test Approach (a CI/PI sketch follows). Skip 2.11 Normal Correlation Models (although there is a good visual on p79).
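A sketch of the 2.4/2.5 distinction using statsmodels on simulated data (the intercept and slope, 62 and 3.57, are illustrative stand-ins, not the book's fitted values): at the same x0, the CI for the mean response is narrower than the PI for a single new observation.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data; the intercept/slope (62, 3.57) are illustrative only.
rng = np.random.default_rng(0)
x = rng.uniform(20, 120, size=25)
y = 62 + 3.57 * x + rng.normal(0, 48, size=25)

fit = sm.OLS(y, sm.add_constant(x)).fit()

x0 = 65.0
pred = fit.get_prediction(np.array([[1.0, x0]])).summary_frame(alpha=0.05)
print(pred[["mean_ci_lower", "mean_ci_upper"]])  # CI for E(Y | X = x0)
print(pred[["obs_ci_lower", "obs_ci_upper"]])    # PI for a new observation
```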

Ch 3 Diagnostics and Remedial Measures
Read 3.1-3.3 (to p114), 3.9 through p133 (about transformations), and 3.11 Case Study. See p112 & p129 on the omission of important predictors, and p114 Some Final Comments. 3.9 covers transformations, especially ln (or log), x², sqrt(x), etc. (a comparison sketch follows). Skip Box-Cox (p134).
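A small sketch of the 3.9 idea on made-up curved data, comparing simple-regression fits under the transformations mentioned above (R² is used here only as a rough screen, not a formal test):

```python
import numpy as np
import statsmodels.api as sm

# Made-up curved data; a log transform of x straightens it out here.
rng = np.random.default_rng(1)
x = rng.uniform(1, 100, size=60)
y = 10 + 8 * np.log(x) + rng.normal(0, 2, size=60)

for name, xt in [("x", x), ("ln(x)", np.log(x)),
                 ("sqrt(x)", np.sqrt(x)), ("x^2", x ** 2)]:
    fit = sm.OLS(y, sm.add_constant(xt)).fit()
    print(f"{name:8s} R^2 = {fit.rsquared:.3f}")  # ln(x) should win
```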

Ch 4: SKIP

Ch 5 Matrix Approach to Simple Linear Regression
Scan 5.1 to p188 (especially linear dependence, rank of a matrix, inverse of a matrix). Read 5.8 (plus take a look at the formula for the multivariate normal on p197), and keep going to p201, which demonstrates taking multivariate derivatives in order to obtain the normal equations for our estimation (a numeric sketch follows). READ 5.10-5.11. Scan 5.12 (our sums of squares, or SS, are actually matrices in quadratic form). Read 5.13's regression coefficients plus mean response/prediction intervals.
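A numeric sketch of the normal equations in matrix form, b = (X′X)⁻¹X′y, on made-up data. This is the standard least-squares algebra the chapter derives, not code from the book:

```python
import numpy as np

# Made-up data; b = (X'X)^(-1) X'y, computed the stable way via solve().
rng = np.random.default_rng(2)
n = 25
x = rng.uniform(20, 120, size=n)
y = 62 + 3.6 * x + rng.normal(0, 48, size=n)

X = np.column_stack([np.ones(n), x])   # design matrix with intercept column
b = np.linalg.solve(X.T @ X, X.T @ y)  # solves the normal equations (X'X)b = X'y
print(b)                               # [b0, b1]

# Cross-check against the Ch 1 sums-of-squares formula for the slope.
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
assert np.isclose(b[1], b1)
```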

Ch 6 Multiple Regression I
Read 6.1-6.3 (note the point in 6.3 at which Kutner minimizes Q in order to obtain the estimates of the coefficients). See p218 for our first introduction of a discrete (qualitative) predictor, p219 for an introduction to polynomial regression, and p220 for interactions. There are some nice graphics for models that are beyond linear (p221). The bottom of p223 (equation 6.25) has a typo; the 2x1 matrix should be just X'Y. Read 6.4, scan 6.5, read 6.6. Skip 6.7, which is basically CIs for E(Y) and the PI for a new observation. Read 6.8's scatter plots and residual plots, especially the correlation matrix (6.67) or perhaps the lattice of scatterplots (sketched after this outline). You're looking for correlations between the predictors and Y that are fairly high, but fairly high correlations between predictors (e.g., X1 and X2) can be problematic. We won't cover Brown-Forsythe, Breusch-Pagan, or lack-of-fit. Scan 6.9, a good example of two predictors, the matrices, and nice 3-dimensional scatterplots. Also see Figure 6.7, plus the fitted values and residuals on p241. This is a nice display for the Dwaine Studios data, demonstrating actual matrices as they evolve. Skip p246 through the end of the chapter; these are higher-dimensional examples of CIs on E(Y) for given predictor values and the PI on a new observation. This repeats what we did in simple linear regression (CIs are at their best closer to the center of gravity, and CIs are narrower than PIs for a given value of X, which is now a vector of values since there is more than one predictor).

Ch 7 Multiple Regression II
Scan 7.1-7.3, the extra sums of squares for Y via X1, X2, and X3. READ 7.6 on multicollinearity (a VIF sketch appears after this outline).

Ch 8 Quantitative and Qualitative Predictors
Read 8.1-8.3 (a dummy-variable sketch appears after this outline).

Ch 9 Model Selection & Validation
Read 9.1-9.6.

Ch 15 Overview of Experimental Designs
Scan 15.1-15.3 (through p660).

Ch 16 Single-Factor Studies (basically a review of Stat 250's ANOVA)
Scan 16.1-16.6.

Ch 17
Scan 17.1-17.5.

Ch 19 Two-Factor Studies
Read 19.1 and 19.2 (especially p822, Interacting Factor Effects). Scan 19.3-19.7. Read 19.8-19.9.
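A sketch of the Ch 6 (6.8) diagnostics: the correlation matrix like (6.67) and a lattice of scatterplots, on made-up two-predictor data. The coefficients are loosely Dwaine-Studios-flavored but purely illustrative.

```python
import numpy as np
import pandas as pd

# Made-up two-predictor data (coefficients are illustrative only).
rng = np.random.default_rng(3)
x1 = rng.normal(60, 10, size=21)
x2 = rng.normal(16, 2, size=21)
y = -68 + 1.45 * x1 + 9.4 * x2 + rng.normal(0, 11, size=21)

df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})
print(df.corr())                # want high corr(xj, y); watch corr(x1, x2)
pd.plotting.scatter_matrix(df)  # the lattice of pairwise scatterplots
```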
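For 7.6, one common way to quantify multicollinearity is the variance inflation factor. A sketch with statsmodels on made-up data, where x2 is deliberately built to be nearly collinear with x1 (VIFs far above 10 are the usual warning sign, as a rule of thumb rather than a hard cutoff):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# x2 is built to be nearly collinear with x1, so its VIF should blow up.
rng = np.random.default_rng(4)
x1 = rng.normal(size=30)
x2 = x1 + rng.normal(scale=0.1, size=30)
x3 = rng.normal(size=30)

X = sm.add_constant(np.column_stack([x1, x2, x3]))
for j in range(1, X.shape[1]):  # skip column 0, the constant
    print(f"VIF(x{j}) = {variance_inflation_factor(X, j):.1f}")
```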
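For Ch 8, a minimal sketch of entering a qualitative predictor as a 0/1 dummy alongside a quantitative one (all data made up; the variable names are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

# All data made up: y on a numeric x plus a two-level factor as a 0/1 dummy.
df = pd.DataFrame({
    "y": [12.0, 15.1, 13.4, 18.9, 20.2, 17.5],
    "x": [1.0, 2.0, 1.5, 3.0, 3.5, 2.8],
    "group": ["a", "a", "a", "b", "b", "b"],
})
df["group_b"] = (df["group"] == "b").astype(int)  # indicator for group b

fit = sm.OLS(df["y"], sm.add_constant(df[["x", "group_b"]])).fit()
print(fit.params)  # group_b's coefficient shifts the intercept for group b
```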
